From xen-devel-bounces@lists.xen.org Sat Feb 01 00:30:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 00:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9OT4-0003jE-PM; Sat, 01 Feb 2014 00:29:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W9OT3-0003j9-Do
	for xen-devel@lists.xenproject.org; Sat, 01 Feb 2014 00:29:53 +0000
Received: from [85.158.137.68:14047] by server-14.bemta-3.messagelabs.com id
	74/8B-08196-0004CE25; Sat, 01 Feb 2014 00:29:52 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391214591!9027854!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30697 invoked from network); 1 Feb 2014 00:29:52 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-12.tower-31.messagelabs.com with SMTP;
	1 Feb 2014 00:29:52 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 31 Jan 2014 16:29:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,760,1384329600"; d="scan'208";a="474139511"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by fmsmga002.fm.intel.com with ESMTP; 31 Jan 2014 16:29:50 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.110.14) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 31 Jan 2014 16:29:49 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Sat, 1 Feb 2014 08:29:41 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: question on share ept page table with vtd
Thread-Index: Ac8e5CKYvrz5ERWsQCyzfE/CC11Egw==
Date: Sat, 1 Feb 2014 00:29:41 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CCC9C@SHSMSX104.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Tim Deegan \(tim@xen.org\)" <tim@xen.org>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"'Jan Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>
Subject: [Xen-devel] question on share ept page table with vtd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

I have a question about the page-table sharing between EPT and VT-d, which is enabled by default. Currently, we enable log_dirty mode to track vram, and when log_dirty mode is first enabled, all guest RAM is set read-only in the EPT entries. If a page is currently in use as a DMA write buffer, isn't that a problem?
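[Editorial note: the hazard described above can be modeled with a minimal sketch. This is not Xen code; the class and method names below are invented purely for illustration, on the assumption that a single shared table serves both the CPU (EPT) and the IOMMU (VT-d).]

```python
class SharedPageTable:
    """Hypothetical model of one table consulted by both EPT and VT-d."""

    def __init__(self, nr_pages):
        self.writable = [True] * nr_pages

    def enable_log_dirty(self):
        # Starting log_dirty marks every guest page read-only.
        self.writable = [False] * len(self.writable)

    def cpu_write(self, gfn):
        # A CPU write to a read-only page faults; the EPT-violation
        # handler can log the page as dirty and re-grant write access,
        # so the write is retried and succeeds.
        if not self.writable[gfn]:
            self.writable[gfn] = True
        return True

    def dma_write(self, gfn):
        # A DMA write through the IOMMU has no equivalent restart path:
        # a VT-d fault typically aborts the transaction, so the write
        # is simply lost.
        return self.writable[gfn]


pt = SharedPageTable(4)
pt.enable_log_dirty()
assert pt.cpu_write(1)       # recovers via the fault handler
assert not pt.dma_write(2)   # fails silently: the problem raised above
```

The asymmetry is the point of the question: the CPU path can tolerate the read-only marking because faults are restartable, while an in-flight DMA write cannot.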

best regards
yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 00:57:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 00:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9OtB-0004JD-7N; Sat, 01 Feb 2014 00:56:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1W9Ot9-0004J8-IH
	for xen-devel@lists.xenproject.org; Sat, 01 Feb 2014 00:56:51 +0000
Received: from [193.109.254.147:26137] by server-2.bemta-14.messagelabs.com id
	E9/7D-01236-2564CE25; Sat, 01 Feb 2014 00:56:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391216208!1254361!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6832 invoked from network); 1 Feb 2014 00:56:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 00:56:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98702458"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 00:56:47 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Fri, 31 Jan 2014 19:56:47 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Sat, 1 Feb 2014
	01:56:46 +0100
Message-ID: <52EC4650.7020302@citrix.com>
Date: Sat, 1 Feb 2014 00:56:48 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CCC9C@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CCC9C@SHSMSX104.ccr.corp.intel.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: "Tim Deegan \(tim@xen.org\)" <tim@xen.org>,
	"'Jan Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] question on share ept page table with vtd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/02/2014 00:29, Zhang, Yang Z wrote:
> Hi all
>
> I have a question about the page-table sharing between EPT and VT-d, which is enabled by default. Currently, we enable log_dirty mode to track vram, and when log_dirty mode is first enabled, all guest RAM is set read-only in the EPT entries. If a page is currently in use as a DMA write buffer, isn't that a problem?
>
> best regards
> yang

Looking at the code, it would indeed appear to be a problem.

It has presumably gone unnoticed until now because all writes into the
vram have come from software, and therefore hit the log_dirty logic,
rather than being DMA writes hitting the IOMMU.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 01:12:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 01:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9P85-0000Jx-6T; Sat, 01 Feb 2014 01:12:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9P83-0000Js-OY
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 01:12:15 +0000
Received: from [193.109.254.147:23560] by server-7.bemta-14.messagelabs.com id
	4B/C5-23424-FE94CE25; Sat, 01 Feb 2014 01:12:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391217133!1257543!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31947 invoked from network); 1 Feb 2014 01:12:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 01:12:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96751614"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 01:12:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 20:12:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9P7z-00035A-Kv;
	Sat, 01 Feb 2014 01:12:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9P7z-0006DA-8U;
	Sat, 01 Feb 2014 01:12:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24679-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 01:12:11 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24679: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24679 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24679/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64 10 guest-saverestore.2 fail REGR. vs. 24546

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24546

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass

version targeted for testing:
 qemuu                90d35066387e1d9c9deeda042c5a907cd57c11cf
baseline version:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 90d35066387e1d9c9deeda042c5a907cd57c11cf
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jan 10 15:56:33 2014 +0000

    xen_pt: Fix passthrough of device with ROM.
    
    QEMU does not need, and should not, allocate memory for the ROM of a
    passthrough PCI device. This patch therefore initializes that region
    like any other PCI BAR of a passthrough device.
    
    When a guest accesses the ROM, Xen takes care of the I/O; QEMU is
    not involved.
    
    Xen sets a limit on the memory available to each guest, and
    allocating memory for a ROM can hit this limit.
    
    upstream-commit-id: 794798e36eda77802ce7cc7d7d6b1c65751e8a76
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 01:13:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 01:13:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9P9J-0000O5-Nx; Sat, 01 Feb 2014 01:13:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1W9P9H-0000Nx-E0
	for xen-devel@lists.xenproject.org; Sat, 01 Feb 2014 01:13:32 +0000
Received: from [193.109.254.147:3181] by server-3.bemta-14.messagelabs.com id
	8B/F7-00432-A3A4CE25; Sat, 01 Feb 2014 01:13:30 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391217209!1255916!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31747 invoked from network); 1 Feb 2014 01:13:29 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-27.messagelabs.com with SMTP;
	1 Feb 2014 01:13:29 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 31 Jan 2014 17:13:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,760,1384329600"; d="scan'208";a="474155526"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga002.fm.intel.com with ESMTP; 31 Jan 2014 17:13:28 -0800
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 31 Jan 2014 17:13:28 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 31 Jan 2014 17:13:27 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Sat, 1 Feb 2014 09:13:23 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [Xen-devel] question on share ept page table with vtd
Thread-Index: Ac8e5CKYvrz5ERWsQCyzfE/CC11Eg///gpYA//95QSA=
Date: Sat, 1 Feb 2014 01:13:23 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CCCD7@SHSMSX104.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9CCC9C@SHSMSX104.ccr.corp.intel.com>
	<52EC4650.7020302@citrix.com>
In-Reply-To: <52EC4650.7020302@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Tim Deegan \(tim@xen.org\)" <tim@xen.org>,
	"'Jan Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"Zhang, Xiantao" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] question on share ept page table with vtd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper wrote on 2014-02-01:
> On 01/02/2014 00:29, Zhang, Yang Z wrote:
>> Hi all
>> 
>> I have a question about the page-table sharing between EPT and VT-d,
>> which is enabled by default. Currently, we enable log_dirty mode to
>> track vram, and when log-dirty mode is first enabled, it sets all
>> guest RAM to read-only in the EPT entries. The question is: if a page
>> is currently in use as a DMA write buffer, isn't that a problem?
>> 
>> best regards
>> yang
> 
> Looking at the code, it would indeed appear to be a problem.
> 
> It has presumably gone unnoticed until now, as all writes into the vram
> would have been from software, and hit the log_dirty logic, rather
> than DMA writes hitting the IOMMU.
> 

Yes, and I have indeed seen an issue caused by this: SR-IOV fails with Xen 4.4 + qemu-xen-dir.
http://www.gossamer-threads.com/lists/xen/devel/315663

There are two places that call paging_log_dirty_enable() to enable log-dirty mode: one is vram tracking and the other is domain save. Since VT-d is not recommended for use with save/restore, the latter case is OK. But for vram tracking, I don't think we need to set all guest RAM to read-only when log-dirty mode is first enabled; setting only the memory range used by the vram is enough.
Furthermore, I think sharing the EPT page table with VT-d is not a good idea, since in the future the hypervisor may want to mark RAM as read-only at any time and in any usage mode. If the page table is shared, that becomes impossible.

> ~Andrew


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 01:33:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 01:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9PRw-0000xY-JT; Sat, 01 Feb 2014 01:32:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1W9PRv-0000xT-Bo
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 01:32:47 +0000
Received: from [193.109.254.147:37244] by server-16.bemta-14.messagelabs.com
	id EC/4E-21945-EBE4CE25; Sat, 01 Feb 2014 01:32:46 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391218366!1257124!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23351 invoked from network); 1 Feb 2014 01:32:46 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 01:32:46 -0000
Received: by mail-we0-f169.google.com with SMTP id t61so274385wes.28
	for <xen-devel@lists.xensource.com>;
	Fri, 31 Jan 2014 17:32:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=4dI75rEh6DTW1BsKet5e/RmfDaaF10ho0aigTfNK0OA=;
	b=WI+gR2RXvAXXaVsqsd8vtCgADPwwcM0DmZlYh6h3EVzgltK29avctO4iJ7qnmcGkrA
	l17JtDAIIEge72yBY0GI0Yun428NwRIxpgDFAdBdIk2WGSHfClGRqQqZQc4c1JO6jx6d
	zQ6rgw0Uy7zPwn2GB4IbCf63tEWAPcZ1ZxXGTtl3g5J+OzIc8S6u6aHAG+MeRtbfdl16
	UqeNQC7zF1EKmx5nqQMkAW5+IHb4OH4ln63isVbg8voj4d8SXPmP55BrhYcIWi+Xc822
	EHtWMt9u07yhFtiw/9TvcXAVsolqJuAABLwgkaLbOZu34uVvzYBt4JB8kFnkUbAjpj+O
	oMCA==
X-Received: by 10.180.73.173 with SMTP id m13mr844497wiv.52.1391218365682;
	Fri, 31 Jan 2014 17:32:45 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Fri, 31 Jan 2014 17:32:25 -0800 (PST)
From: Miguel Clara <miguelmclara@gmail.com>
Date: Sat, 1 Feb 2014 01:32:25 +0000
Message-ID: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on second
	host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'm testing live migration without shared storage (I use LVM on both sides).


Issuing "xl migrate" worked nicely and the machine was migrated to the second host.

However, I see this in the second host's log:

[ 1502.563251] EXT4-fs error (device xvda1):
htree_dirblock_to_tree:892: inode #136303: block 533250: comm
run-parts: bad entry in directory: rec_len is smaller than minimal -
offset=0(0), inode=0, rec_len=0, name_len=0
Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
(device xvda1): htree_dirblock_to_tree:892: inode #136303: block
533250: comm run-parts: bad entry in directory: rec_len is smaller
than minimal - offset=0(0), inode=0, rec_len=0, name_len=0

I also get errors like:
-bash: /bin/ping: cannot execute binary file

Is this to be expected when using non-shared storage?

Thanks

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 02:07:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 02:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9PzW-0000MP-2n; Sat, 01 Feb 2014 02:07:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9PzT-0008JQ-JE
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 02:07:28 +0000
Received: from [85.158.139.211:58177] by server-8.bemta-5.messagelabs.com id
	C5/D1-05298-ED65CE25; Sat, 01 Feb 2014 02:07:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391220444!955147!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13627 invoked from network); 1 Feb 2014 02:07:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 02:07:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96759907"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 02:07:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 21:07:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9PzN-0003MN-Ka;
	Sat, 01 Feb 2014 02:07:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9PzN-0004UL-JN;
	Sat, 01 Feb 2014 02:07:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24681-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 02:07:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24681: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24681 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24681/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep  fail in 24671 REGR. vs. 24571

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24671

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-amd64 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 24671 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 24671 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-i386 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 24671 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 24671 n/a

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    If there are two threads, T1 inserting on committed list; and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads pointer to prev again and compare with result
       from the cmpxchg which succeeded but in the meantime prev changed
       in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race use temporary variable for prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 02:07:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 02:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9PzW-0000MP-2n; Sat, 01 Feb 2014 02:07:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9PzT-0008JQ-JE
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 02:07:28 +0000
Received: from [85.158.139.211:58177] by server-8.bemta-5.messagelabs.com id
	C5/D1-05298-ED65CE25; Sat, 01 Feb 2014 02:07:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391220444!955147!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13627 invoked from network); 1 Feb 2014 02:07:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 02:07:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96759907"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 02:07:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 31 Jan 2014 21:07:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9PzN-0003MN-Ka;
	Sat, 01 Feb 2014 02:07:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9PzN-0004UL-JN;
	Sat, 01 Feb 2014 02:07:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24681-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 02:07:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24681: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24681 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24681/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep  fail in 24671 REGR. vs. 24571

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24671

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-amd64 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 24671 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 24671 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-i386 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 24671 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 24671 n/a

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets the prev pointer
       (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list, changes element A, and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg; the cmpxchg succeeded, but in the meantime
       prev has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 02:39:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 02:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9QTp-0001ut-5p; Sat, 01 Feb 2014 02:38:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1W9QTm-0001uo-Pp
	for xen-devel@lists.xenproject.org; Sat, 01 Feb 2014 02:38:47 +0000
Received: from [85.158.139.211:8858] by server-13.bemta-5.messagelabs.com id
	74/AA-18801-53E5CE25; Sat, 01 Feb 2014 02:38:45 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391222323!962233!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27438 invoked from network); 1 Feb 2014 02:38:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Feb 2014 02:38:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s112cf8q017937
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 1 Feb 2014 02:38:42 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s112cetH000532
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sat, 1 Feb 2014 02:38:41 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s112cdAb009639; Sat, 1 Feb 2014 02:38:40 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 31 Jan 2014 18:38:39 -0800
Date: Fri, 31 Jan 2014 18:38:38 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Tim Deegan <tim@xen.org>
Message-ID: <20140131183838.27075db3@mantra.us.oracle.com>
In-Reply-To: <20131218165152.GO24792@deinos.phlegethon.org>
References: <20131216174728.2ba3ad9a@mantra.us.oracle.com>
	<52B01C88020000780010E042@nat28.tlf.novell.com>
	<20131217101957.GB32721@deinos.phlegethon.org>
	<20131217184412.2372eb45@mantra.us.oracle.com>
	<20131218100958.GB24792@deinos.phlegethon.org>
	<20131218165152.GO24792@deinos.phlegethon.org>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [RFC PATCH] PVH: cleanup of p2m upon p2m destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 18 Dec 2013 17:51:52 +0100
Tim Deegan <tim@xen.org> wrote:

> At 11:09 +0100 on 18 Dec (1387361398), Tim Deegan wrote:
> > > An alternative might be to just create a linked list then and walk
> > > it. In general, foreign mappings should be very small, so the
> > > overhead of 16 bytes per page for the link list might not be too
> > > bad. I will code it if there is no disagreement from any
> > > maintainer... everyone has different ideas :)...
> > 
> > I think it would be best to walk the p2m trie (i.e. bounded by
> > amount of RAM, rather than max GFN) and do it preemptably.  I'll
> > look into something like that for the mem_sharing loop today, and
> > foreign mapping code can reuse it.
> 
> What I've ended up with is making p2m_change_entry_type_global()
> preemptible (which is a bigger task but will be needed as domains get
> bigger).  Do you think that using that function to switch all mappings
> from p2m_foreign to p2m_invalid, appropriately late in the teardown,
> will be good enough for what you need?
> 
> Cheers,
> 
> Tim.

Finally, coming back to this, the answer is yes. Looks like all I need
to do is:

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9faa663..268a8a2 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -470,6 +470,10 @@ void p2m_teardown(struct p2m_domain *p2m)
 
     p2m_lock(p2m);
 
+    /* pvh: we must release refcnt on all foreign pages */
+    if ( is_pvh_domain(d) )
+        p2m_change_entry_type_global(d, p2m_map_foreign, p2m_invalid);
+
     /* Try to unshare any remaining shared p2m entries. Safeguard
      * Since relinquish_shared_pages should have done the work. */ 
     for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )


In this call, the new atomic_write_ept_entry() will DTRT:

static inline void atomic_write_ept_entry(ept_entry_t *entryptr,
                                          const ept_entry_t *new)
{
    if ( p2m_is_foreign(new->sa_p2mt) )
    {
        struct page_info *page;
        struct domain *fdom;

        ASSERT(mfn_valid(new->mfn));
        page = mfn_to_page(new->mfn);
        fdom = page_get_owner(page);
        get_page(page, fdom);
    }
    if ( p2m_is_foreign(entryptr->sa_p2mt) )
        put_page(mfn_to_page(entryptr->mfn));

    write_atomic(&entryptr->epte, new->epte);
}


If that sounds ok, I'm ready to submit dom0 PVH patches. Please let me know.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 05:41:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 05:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9TK0-0006eh-0Y; Sat, 01 Feb 2014 05:40:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9TJx-0006ec-RZ
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 05:40:50 +0000
Received: from [85.158.137.68:22555] by server-4.bemta-3.messagelabs.com id
	66/83-11750-0E88CE25; Sat, 01 Feb 2014 05:40:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391233246!11521461!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15848 invoked from network); 1 Feb 2014 05:40:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 05:40:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96804380"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 05:40:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 00:40:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9TJq-0004P7-QS;
	Sat, 01 Feb 2014 05:40:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9TJq-0004GD-5a;
	Sat, 01 Feb 2014 05:40:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24683-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 05:40:42 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable baseline test] 24683: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Old" tested version had not actually been tested; therefore in this
flight we test it, rather than a new candidate.  The baseline, if
any, is the most recent actually tested revision.

flight 24683 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24683/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24674
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail REGR. vs. 24674

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  a731bb25c072004a7c073625eb6ef14cc9011c74
baseline version:
 xen                  a96bbe5fd79ea8ac6b40e90965f84aab839d3391

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.

------------------------------------------------------------
commit a731bb25c072004a7c073625eb6ef14cc9011c74
Author: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    (cherry picked from commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391)
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 07:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 07:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9UeR-0000I6-BR; Sat, 01 Feb 2014 07:06:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9UeP-0000Hy-D6
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 07:06:01 +0000
Received: from [85.158.137.68:6348] by server-13.bemta-3.messagelabs.com id
	81/AA-26923-8DC9CE25; Sat, 01 Feb 2014 07:06:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391238357!9041189!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26766 invoked from network); 1 Feb 2014 07:05:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 07:05:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98811394"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 07:05:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 02:05:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Udn-0004rP-9B;
	Sat, 01 Feb 2014 07:05:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Udm-0004d6-87;
	Sat, 01 Feb 2014 07:05:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24684-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 07:05:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24684: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24684 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24684/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24676
 test-amd64-amd64-pair        16 guest-start        fail in 24676 pass in 24684

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install fail in 24676 like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24676 never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
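
The deferral described in the commit above can be sketched as a minimal user-space C model. All struct fields and function names here are illustrative stand-ins, not Xen's actual code; the point is only the shape of the fix: updates that arrive in user mode are marked dirty and applied on the next switch to kernel mode.

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of a 64-bit PV vCPU.  All names are illustrative. */
struct vcpu {
    bool in_kernel;       /* which page tables are active */
    bool runstate_dirty;  /* an update arrived while in user mode */
    int  area_writes;     /* times the registered area was written */
};

static void write_runstate_area(struct vcpu *v)
{
    v->area_writes++;          /* safe: kernel page tables are active */
    v->runstate_dirty = false;
}

/* An update request: apply now if in kernel mode, else defer. */
static void update_runstate(struct vcpu *v)
{
    if (v->in_kernel)
        write_runstate_area(v);
    else
        v->runstate_dirty = true;
}

/* Mode switch: on entry to kernel mode, catch up on deferred updates. */
static void toggle_guest_mode(struct vcpu *v)
{
    v->in_kernel = !v->in_kernel;
    if (v->in_kernel && v->runstate_dirty)
        write_runstate_area(v);
}
```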

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and may get X86EMUL_RETRY while handling that IO
    request. If a virtual vmexit is pending at the same time (for example,
    an interrupt waiting to be injected into L1), the hypervisor switches
    the vCPU context from L2 to L1. The hypervisor will then retry the IO
    request later, but the retry now runs in L1's context instead of L2's,
    which causes the problem. The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
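
The guard the commit above describes can be sketched as follows (a hypothetical model, not Xen's real data structures; `io_req_pending` stands in for whatever state records an in-flight X86EMUL_RETRY request):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative vCPU state: is an emulated IO request still in flight,
 * i.e. did the emulator return X86EMUL_RETRY and plan to retry later? */
struct vcpu {
    bool io_req_pending;
    bool vvmexit_pending;
};

/* While the IO request may still be retried, refuse to perform a
 * virtual vmexit, so the retry always runs in the nested (L2) context
 * that issued the request.  Returns whether the switch happened. */
static bool try_virtual_vmexit(struct vcpu *v)
{
    if (v->io_req_pending)
        return false;           /* leave the virtual vmexit pending */
    v->vvmexit_pending = false; /* context switch L2 -> L1 happens here */
    return true;
}
```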

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, the L2 guest accesses the physical device directly,
    so for such accesses the shadow EPT table should point at the device's
    MMIO. In the current logic, however, L0 does not distinguish between
    MMIO emulated by qemu and MMIO of a physical device when building the
    shadow EPT table. This is wrong. This patch sets up the correct shadow
    EPT entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100
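
The distinction the commit above draws can be summarized in a small sketch (the enum and function names are hypothetical, chosen only to mirror the two MMIO kinds the text contrasts):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative classification of an L2 guest-physical page. */
enum page_kind { P2M_RAM, P2M_MMIO_QEMU, P2M_MMIO_DIRECT };

/* What the shadow EPT entry should do for each kind: map a real machine
 * frame, or stay unmapped so the access traps and is emulated. */
static bool shadow_ept_maps_frame(enum page_kind k)
{
    switch (k) {
    case P2M_RAM:         return true;   /* normal guest memory */
    case P2M_MMIO_DIRECT: return true;   /* assigned device: map its MMIO */
    case P2M_MMIO_QEMU:   return false;  /* emulated: must trap to qemu */
    }
    return false;
}
```

The bug was effectively treating `P2M_MMIO_DIRECT` like `P2M_MMIO_QEMU`, so direct device accesses from L2 trapped instead of reaching the hardware.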

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting an element on the
    committed list, and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking the head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
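
The fixed insertion loop can be sketched in plain C, using the GCC `__sync_bool_compare_and_swap` builtin in place of Xen's cmpxchgptr(); the element layout is illustrative, not the real mctelem structure:

```c
#include <stddef.h>

/* Illustrative telemetry element; only the link field matters here. */
struct elem {
    struct elem *prev;
};

/* Push e onto the list at *headp.  The fix: snapshot the head into a
 * LOCAL variable, link the element through that local, and cmpxchg
 * against that same local, so a concurrent change of the in-memory
 * head cannot be confused with a cmpxchg failure.  The link is
 * written BEFORE the cmpxchg, as the commit message requires. */
static void xchg_head(struct elem **headp, struct elem *e)
{
    struct elem *old;
    do {
        old = *headp;     /* temporary variable for the prev pointer */
        e->prev = old;
    } while (!__sync_bool_compare_and_swap(headp, old, e));
}
```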

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 07:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 07:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9UeR-0000I6-BR; Sat, 01 Feb 2014 07:06:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9UeP-0000Hy-D6
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 07:06:01 +0000
Received: from [85.158.137.68:6348] by server-13.bemta-3.messagelabs.com id
	81/AA-26923-8DC9CE25; Sat, 01 Feb 2014 07:06:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391238357!9041189!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26766 invoked from network); 1 Feb 2014 07:05:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 07:05:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98811394"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 07:05:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 02:05:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Udn-0004rP-9B;
	Sat, 01 Feb 2014 07:05:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Udm-0004d6-87;
	Sat, 01 Feb 2014 07:05:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24684-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 07:05:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24684: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24684 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24684/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24676
 test-amd64-amd64-pair        16 guest-start        fail in 24676 pass in 24684

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install fail in 24676 like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24676 never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode L2's instructions in order to handle IO
    accesses directly, and may get X86EMUL_RETRY while handling such an IO
    request.  If at the same time a virtual vmexit is pending (for example,
    an interrupt to inject into L1), the hypervisor will switch the VCPU
    context from L2 to L1.  The X86EMUL_RETRY means the hypervisor will
    retry the IO request later, but the retry would then happen in L1's
    context, which is the problem.  The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, the L2 guest accesses the physical device directly.
    For such accesses the shadow EPT table should point to the device's
    MMIO.  But in the current logic, L0 does not distinguish MMIO belonging
    to qemu from MMIO belonging to a physical device when building the
    shadow EPT table, which is wrong.  This patch sets up correct shadow
    EPT table entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg (which succeeded), but in the meantime prev
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
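    [Editor's note: the fix described in the message above -- keep the
    expected old head in a local temporary and link the element before the
    cmpxchg -- can be sketched with plain C11 atomics.  This is only an
    illustrative model of the pattern, not the actual Xen mctelem code;
    the names xchg_head and elem are hypothetical.]

```c
#include <stdatomic.h>
#include <stddef.h>

/* Minimal model of the fixed lockless-prepend pattern.  'oldhead' is a
 * local temporary, so a consumer rewriting the shared head between our
 * load and the cmpxchg cannot make a successful exchange look like a
 * failure (the bug in steps 1-5 above). */
struct elem {
    struct elem *prev;              /* stands in for mcte_prev */
};

static void xchg_head(struct elem *_Atomic *headp, struct elem *e)
{
    struct elem *oldhead = atomic_load(headp);

    do {
        /* Link before the cmpxchg: after a successful cmpxchg the
         * element may be consumed and reinitialised at any moment. */
        e->prev = oldhead;
        /* On failure, compare_exchange updates oldhead to the current
         * head, so the next iteration relinks against the fresh value. */
    } while (!atomic_compare_exchange_weak(headp, &oldhead, e));
}
```

    A single-threaded use: inserting a then b leaves the head at b with
    b->prev linking back to a.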

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
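    [Editor's note: the bug class fixed above -- a reference taken by
    get_gfn() but never dropped on the error return, leaving
    preempt_count() and the p2m lock elevated -- can be illustrated with a
    tiny stand-in.  The counter and helper names below are hypothetical
    models, not the real Xen API.]

```c
/* Toy model of the fix: every exit path of a function that takes a
 * reference (as dbg_rw_guest_mem() does via get_gfn()) must release it,
 * including the error path for an invalid mapping. */
static int ref_count;                       /* models the held p2m lock */

static void get_ref(void) { ref_count++; }  /* models get_gfn() */
static void put_ref(void) { ref_count--; }  /* models put_gfn() */

static int rw_guest_mem(int gfn_valid)
{
    get_ref();

    if (!gfn_valid) {
        put_ref();      /* the release the patch adds to the error path */
        return -1;
    }

    /* ... access the mapped page here ... */

    put_ref();
    return 0;
}
```

    With the error-path release in place, ref_count returns to zero
    whether the access succeeds or fails; without it, a failed access
    leaks the reference, matching the stuck _write_lock seen above.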
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 08:10:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 08:10:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Veg-0002bt-Mx; Sat, 01 Feb 2014 08:10:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9Vef-0002bo-0X
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 08:10:21 +0000
Received: from [85.158.137.68:41618] by server-13.bemta-3.messagelabs.com id
	AC/91-26923-CEBACE25; Sat, 01 Feb 2014 08:10:20 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391242217!12664460!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19917 invoked from network); 1 Feb 2014 08:10:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 08:10:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98828164"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 08:10:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 03:10:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Vea-0005B9-6b;
	Sat, 01 Feb 2014 08:10:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Vea-0006rM-1c;
	Sat, 01 Feb 2014 08:10:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24686-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 08:10:16 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24686: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24686 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24686/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24546

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install       fail pass in 24679
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24679 pass in 24686
 test-amd64-i386-xl-qemuu-win7-amd64 10 guest-saverestore.2 fail in 24679 pass in 24686

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24679 never pass

version targeted for testing:
 qemuu                90d35066387e1d9c9deeda042c5a907cd57c11cf
baseline version:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 90d35066387e1d9c9deeda042c5a907cd57c11cf
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jan 10 15:56:33 2014 +0000

    xen_pt: Fix passthrough of device with ROM.
    
    QEMU does not need, and should not, allocate memory for the ROM of a
    passthrough PCI device, so this patch initializes that region like
    any other PCI BAR of a passthrough device.
    
    When a guest accesses the ROM, Xen takes care of the IO; QEMU is not
    involved in it.
    
    Xen sets a limit on the memory available to each guest, and
    allocating memory for a ROM can hit this limit.
    
    upstream-commit-id: 794798e36eda77802ce7cc7d7d6b1c65751e8a76
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 09:10:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 09:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Wam-00041O-Al; Sat, 01 Feb 2014 09:10:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <monaka@monami-ya.com>) id 1W9WMq-0003aZ-VT
	for xen-devel@lists.xen.org; Sat, 01 Feb 2014 08:56:01 +0000
Received: from [85.158.143.35:51445] by server-1.bemta-4.messagelabs.com id
	A7/84-31661-0A6BCE25; Sat, 01 Feb 2014 08:56:00 +0000
X-Env-Sender: monaka@monami-ya.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391244958!2362047!1
X-Originating-IP: [209.85.217.174]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8186 invoked from network); 1 Feb 2014 08:55:59 -0000
Received: from mail-lb0-f174.google.com (HELO mail-lb0-f174.google.com)
	(209.85.217.174)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 08:55:59 -0000
Received: by mail-lb0-f174.google.com with SMTP id l4so4099294lbv.19
	for <xen-devel@lists.xen.org>; Sat, 01 Feb 2014 00:55:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:reply-to:date:message-id:subject
	:from:to:content-type;
	bh=bplvaLCkKLRVF00yfw4E4YPcAWDuADekVkTYYeMNjBI=;
	b=j87u9f/eBjO5FJtU3tlNhpuaY+i8vxR1zWLtACAYIl5l6OXArHTLqYv2RKRIOC0d94
	tnO8QUWzJXMkl1AO+xEo3UHcfBjvRl4wKdDO1q9qSDygrqbeqYf3l+mKxeVfrgfrLVtn
	bgPep1/DzzzY5zgbEJGkLPBwBZIKDMu4Lgp4JlMnqSNZIDr9v70Fj1BBE2KtXjkWhi5t
	CVcdCimNJWj+KRGPWlm3OL6YGD+c7clmbkAF2RmQM4UQsBBanMnchbl4psYNob3scOOB
	p2Tpbm1ooFHvbV21QJjAf6I5xKfWg/C8Nl6SDmSLoggcu/Wb4uxL6bS4h0yVTAd43Lc/
	NOVw==
X-Gm-Message-State: ALoCoQnqhue+/8/vDA75o+5+XWUjSqlTydrkbToASlnoHWofuPtW5d95IhfCGDGG29F+yXJVzM3z
MIME-Version: 1.0
X-Received: by 10.152.3.99 with SMTP id b3mr94729lab.61.1391244958431; Sat, 01
	Feb 2014 00:55:58 -0800 (PST)
Received: by 10.112.170.42 with HTTP; Sat, 1 Feb 2014 00:55:58 -0800 (PST)
X-Originating-IP: [124.25.156.172]
Date: Sat, 1 Feb 2014 17:55:58 +0900
Message-ID: <CAH4w_GwwVDucnUFA1pH2kpYfSg6dZErs-Lu3Y9--fcLBEELL+Q@mail.gmail.com>
From: Masaki Muranaka <monaka@monami-ya.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Sat, 01 Feb 2014 09:10:22 +0000
Subject: [Xen-devel] [BUG]Cannot build mini-os-x86_64-c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: monaka@monami-ya.jp
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0277647396084960422=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0277647396084960422==
Content-Type: multipart/alternative; boundary=089e01419fe28b0c7504f154751a

--089e01419fe28b0c7504f154751a
Content-Type: text/plain; charset=ISO-8859-1

Hello,

I got a build error. The log looks like this:
/home/azureuser/xen/stubdom/mini-os-x86_64-c/test.o: In function `app_main':
/home/azureuser/xen/extras/mini-os/test.c:542: multiple definition of
`app_main'
/home/azureuser/xen/stubdom/mini-os-x86_64-c/main.o:/home/azureuser/xen/extras/mini-os/main.c:187:
first defined here
make[1]: *** [/home/azureuser/xen/stubdom/mini-os-x86_64-c/mini-os] Error 1
make[1]: Leaving directory `/home/azureuser/xen/extras/mini-os'
make: *** [c-stubdom] Error 2


I think stubdom/c/minios.cfg should be changed like this:

diff --git a/stubdom/c/minios.cfg b/stubdom/c/minios.cfg
index e1faee5..f2a3178 100644
--- a/stubdom/c/minios.cfg
+++ b/stubdom/c/minios.cfg
@@ -1 +1 @@
-CONFIG_TEST=y
+CONFIG_TEST=n
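
For context, the "multiple definition" failure above is an ordinary linker error: with CONFIG_TEST=y, both test.o and main.o export an `app_main` symbol. A minimal standalone reproduction (hypothetical files, not from the Xen tree; assumes a `cc` compiler on PATH):

```shell
# Reproduce a "multiple definition" link error with two toy C files.
# This mirrors the mini-os failure mode: two objects define app_main.
cat > main.c <<'EOF'
int app_main(void) { return 0; }
int main(void) { return app_main(); }
EOF
cat > test.c <<'EOF'
int app_main(void) { return 1; }   /* conflicts with main.c's definition */
EOF
cc -c main.c -o main.o
cc -c test.c -o test.o
if cc main.o test.o -o prog 2>link.err; then
  echo "link unexpectedly succeeded"
else
  # The linker diagnostic names the duplicated symbol, app_main.
  grep app_main link.err
fi
```

Setting CONFIG_TEST=n keeps test.o out of the link, leaving main.o as the sole provider of `app_main`.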


Thanks,

-- 
Masaki Muranaka
Monami-ya LLC, Japan.

--089e01419fe28b0c7504f154751a--


--===============0277647396084960422==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0277647396084960422==--


From xen-devel-bounces@lists.xen.org Sat Feb 01 09:10:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 09:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Wal-00041G-VA; Sat, 01 Feb 2014 09:10:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W9Lwj-00062Y-Eh
	for xen-devel@lists.xen.org; Fri, 31 Jan 2014 21:48:21 +0000
Received: from [85.158.143.35:8048] by server-2.bemta-4.messagelabs.com id
	39/3B-10891-42A1CE25; Fri, 31 Jan 2014 21:48:20 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391204897!2292450!2
X-Originating-IP: [216.109.115.156]
X-SpamReason: No, hits=2.8 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,SUSPICIOUS_RECIPS,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25934 invoked from network); 31 Jan 2014 21:48:19 -0000
Received: from nm48-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm48-vm1.bullet.mail.bf1.yahoo.com) (216.109.115.156)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 31 Jan 2014 21:48:19 -0000
Received: from [66.196.81.172] by nm48.bullet.mail.bf1.yahoo.com with NNFMP;
	31 Jan 2014 21:48:17 -0000
Received: from [98.139.211.206] by tm18.bullet.mail.bf1.yahoo.com with NNFMP;
	31 Jan 2014 21:48:17 -0000
Received: from [127.0.0.1] by smtp215.mail.bf1.yahoo.com with NNFMP;
	31 Jan 2014 21:48:17 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391204897; bh=8Ig91VcSKsS+AwCsIXIreANyz1BAku1PF173k9HyLAY=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:From:To:References:In-Reply-To:Subject:Date:Message-ID:MIME-Version:Content-Type:Content-Transfer-Encoding:X-Mailer:Thread-Index:Content-Language;
	b=U60ALqDNDUqupIAX7a8McjZufbnRPwj3ntXGf5qbCqad3/VqlWLTmMuO45SCkBqHiFlmwA3D2k11GX79+gcCJe7Zkx1JFAwS43Js6HbQPqvNtu4cw2JyMeZn7rs1DuD8Kz45f+LsfYBkPXOWqLCUnEedxYz/3IkjrJW4cASQkX8=
X-Yahoo-Newman-Id: 516073.73947.bm@smtp215.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: WhCuM5cVM1nF6kQp.pNceGdAHI1TuD.S7mrm6lrs0MWCZ.e
	.mux3zhhFHPeU_IHLwneWqz64kG11Vye00KC4s931kmOAbLwk5btF1HRrS1t
	ox0pbch7g7Ss7xa8oRCzecaEB2fUDe0RZOsWEHdh7yfsOg8sQAXRgRREmE8j
	4DlxkVFRlJGNcvvCh0EBi34qj.xNbl5cxMcDQqFo4wtJ8bgQdOzIa3F1kpm5
	SdxbR8No0pNQ90.gsg9Cu7.ihiiPj9YAnYM4E_Qq2KqY6Qhk.FrCS.WR_3mW
	nDzptvjYjKm4RBuAuA7H.luVGo9FiOZlMUpKIAuPKdHGoaySlcys_ZefgOJ.
	Pz3dkECERjCofSV3xU.Qx22oVYnb1oZlpMnNuOZZaKrxAulxKkpfnjKDybyU
	Ojsk9pcbpwQL6eueeJTiyF1jKAdKmk7GVtLjrFbo0IrESn9kKzyS5ZdvatAw
	KNLin0iRgpQrAc9rtJyzx7f0GZFW91p7DDPKWN6Js8WF19TRIcIzKUKaEHwo
	21Fb1pCQufY.YeP05EJD.g7m2y7trWuPeyZrSoE47XOmiQrYTSlJKPZbxlBc
	.XDUxeuB.f1JAQssXUzA8ZC3NOBSh1w5C8jYnO2gJ142I0MLCNhbTyM8bLYG
	S69IZo7EqfJx4KkcAJGcwwA--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from phobos (ehouby@71.196.207.87 with plain
	[98.139.211.125])
	by smtp215.mail.bf1.yahoo.com with SMTP; 31 Jan 2014 13:48:17 -0800 PST
From: "Eric Houby" <ehouby@yahoo.com>
To: "'Russ Pavlicek'" <russell.pavlicek@xenproject.org>,
	<xen-devel@lists.xen.org>, <xen-users@lists.xen.org>,
	<xen-api@lists.xen.org>, <xs-devel@lists.xenserver.org>,
	<cl-mirage@lists.cam.ac.uk>, <publicity@lists.xenproject.org>
References: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
In-Reply-To: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
Date: Fri, 31 Jan 2014 14:48:17 -0700
Message-ID: <009c01cf1ece$2739a820$75acf860$@yahoo.com>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 15.0
Thread-Index: AQFjy0dgAXkVho5qIRcBav+bbzk7pZt2AEDw
Content-Language: en-us
X-Mailman-Approved-At: Sat, 01 Feb 2014 09:10:22 +0000
Subject: Re: [Xen-devel] [Xen-users] REMINDER: Feb 3 is Xen Project Test Day
	for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> 
> Next Monday, February 3, is the Test Day for Xen 4.4 Release Candidate 3.
> 
> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
> 
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
> 


Russ,

On the RC3 test instructions page, the link for the RC3 RPMs points to
what looks like the RC2 RPMs from 1/16.  Will there be updated RPMs for this
test day?

Thanks,

Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 10:47:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 10:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Y6I-00071g-0X; Sat, 01 Feb 2014 10:47:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9Y6F-00071b-Ol
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 10:47:00 +0000
Received: from [85.158.139.211:2249] by server-17.bemta-5.messagelabs.com id
	C8/BA-31975-2A0DCE25; Sat, 01 Feb 2014 10:46:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391251614!983573!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19630 invoked from network); 1 Feb 2014 10:46:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 10:46:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98844476"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 10:46:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 05:46:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Y69-0005wv-J5;
	Sat, 01 Feb 2014 10:46:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Y69-0004fk-6I;
	Sat, 01 Feb 2014 10:46:53 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24687-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 10:46:53 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24687: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24687 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24687/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep  fail in 24671 REGR. vs. 24571

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     11 guest-localmigrate          fail pass in 24681
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 24681 pass in 24671

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 24671 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 24671 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 24671 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-amd64 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-i386 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 24671 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 24671 n/a

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100
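
The status-leak pattern this commit fixes can be sketched as follows. This is
a minimal stand-in, not Xen code: copy_to_guest_stub() is a hypothetical
replacement for Xen's copy_to_guest() that, like the real macro, returns
non-zero when the copy fails. The point is that ret is only overwritten on
failure, and the indentation makes clear that 'ret = -EFAULT' is the body of
the condition, not part of it.

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Hypothetical stand-in for Xen's copy_to_guest(); returns the number
 * of bytes NOT copied, i.e. non-zero on failure ('fail' simulates a
 * guest-memory fault). */
static unsigned long copy_to_guest_stub(void *dst, const void *src,
                                        unsigned long n, int fail)
{
    if ( fail )
        return n;          /* nothing copied */
    memcpy(dst, src, n);
    return 0;
}

/* Only set ret on failure; a previously computed status must not be
 * clobbered (leaked away) when the copy back to the guest succeeds. */
static int page_offline_copy_back(void *dst, const void *src,
                                  unsigned long n, int fail)
{
    int ret = 0;

    if ( copy_to_guest_stub(dst, src, n, fail) )
        ret = -EFAULT;

    return ret;
}
```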

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting an element on the
    committed list, and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
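
The fixed insertion loop described above can be sketched as follows. This is
a single-threaded illustration, not Xen's mctelem code: cmpxchgptr_stub() is
a hypothetical, non-atomic stand-in for Xen's cmpxchgptr() (the real one is
an atomic full barrier). The key point is that the observed head is latched
into a local variable, so a consumer rewriting the element's prev field after
a successful cmpxchg can no longer make the success look like a failure.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for Xen's cmpxchgptr(): replace *ptr with new
 * if it still equals old, returning the value actually seen. */
static void *cmpxchgptr_stub(void **ptr, void *old, void *new)
{
    void *seen = *ptr;

    if ( seen == old )
        *ptr = new;
    return seen;
}

struct mcte { struct mcte *prev; };

/* Fixed loop: compare against the local 'old', never re-read
 * elem->prev, which another thread may change once the element is
 * published. elem->prev is linked before the cmpxchg, because after a
 * successful exchange the element may be consumed immediately. */
static void push_head(void **headp, struct mcte *elem)
{
    void *old;

    do {
        old = *headp;
        elem->prev = old;            /* link before publishing */
    } while ( cmpxchgptr_stub(headp, old, elem) != old );
}
```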

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
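
The reference-leak pattern this commit fixes can be sketched as follows. This
is a hedged, self-contained illustration, not Xen code: get_gfn_stub() and
put_gfn_stub() are hypothetical stand-ins for get_gfn()/put_gfn(), and the
validity check is invented. The point is that every successful get must be
balanced by a put on every exit path, including the error path; otherwise the
p2m lock's recursion count stays elevated, producing exactly the hang and the
preempt_count() assertion described above.

```c
#include <assert.h>
#include <errno.h>

/* Tracks outstanding references, mimicking the p2m lock's
 * recurse_count in the crash report above. */
static int lock_count;

static void get_gfn_stub(unsigned long gfn, unsigned long *mfn)
{
    lock_count++;
    *mfn = gfn;                  /* identity mapping, for the sketch */
}

static void put_gfn_stub(void)
{
    lock_count--;
}

/* dbg_rw-style access: on an invalid frame, bail out -- but only after
 * dropping the reference taken by get_gfn_stub(). */
static int read_guest_word(unsigned long gfn, unsigned long *out)
{
    unsigned long mfn;
    int ret = 0;

    get_gfn_stub(gfn, &mfn);
    if ( mfn >= 0x1000 )         /* invented validity check */
    {
        ret = -EFAULT;
        goto out;                /* error path still releases the ref */
    }
    *out = mfn;
 out:
    put_gfn_stub();
    return ret;
}
```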
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 10:47:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 10:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9Y6I-00071g-0X; Sat, 01 Feb 2014 10:47:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9Y6F-00071b-Ol
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 10:47:00 +0000
Received: from [85.158.139.211:2249] by server-17.bemta-5.messagelabs.com id
	C8/BA-31975-2A0DCE25; Sat, 01 Feb 2014 10:46:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391251614!983573!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19630 invoked from network); 1 Feb 2014 10:46:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 10:46:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98844476"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 10:46:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 05:46:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9Y69-0005wv-J5;
	Sat, 01 Feb 2014 10:46:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9Y69-0004fk-6I;
	Sat, 01 Feb 2014 10:46:53 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24687-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 10:46:53 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24687: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24687 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24687/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep  fail in 24671 REGR. vs. 24571

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     11 guest-localmigrate          fail pass in 24681
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 24681 pass in 24671

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-xl            1 xen-build-check(1)        blocked in 24671 n/a
 test-i386-i386-pv             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemuu-rhel6hvm-intel 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)        blocked in 24671 n/a
 test-i386-i386-xl             1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)      blocked in 24671 n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-amd64 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-qemuu-freebsd10-i386 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-pair          1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)  blocked in 24671 n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)       blocked in 24671 n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 1 xen-build-check(1) blocked in 24671 n/a
 test-i386-i386-pair           1 xen-build-check(1)        blocked in 24671 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 24671 n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)    blocked in 24671 n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)        blocked in 24671 n/a

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0037ec360b8792f966acc154e06ac9f627b00f9f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:01:28 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit 142cf790dcecee00efa880ea6737916d0661ed8f
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Thu Jan 30 09:01:01 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting an element on the
    committed list, and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       has changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since
    cmpxchgptr() is already a full memory barrier.  This wmb() is thus
    removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit ed234b1af2bc3edb05a7597b7b89c947a94f7c8b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Jan 30 09:00:09 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 12:47:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 12:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9ZyJ-0001NM-3o; Sat, 01 Feb 2014 12:46:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9ZyH-0001NH-T6
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 12:46:54 +0000
Received: from [193.109.254.147:37004] by server-6.bemta-14.messagelabs.com id
	52/EF-03396-DBCECE25; Sat, 01 Feb 2014 12:46:53 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391258810!1322195!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5380 invoked from network); 1 Feb 2014 12:46:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 12:46:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98859905"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 12:46:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 07:46:49 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9ZyD-0006XF-Ev;
	Sat, 01 Feb 2014 12:46:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9ZyD-0000Fn-CN;
	Sat, 01 Feb 2014 12:46:49 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24690-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 12:46:49 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24690: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24690 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24690/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail REGR. vs. 24683
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 24683

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  9 guest-start               fail REGR. vs. 24683

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488
baseline version:
 xen                  a731bb25c072004a7c073625eb6ef14cc9011c74

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit feee1ace547cf6247a358d082dd64fa762be2488
Merge: a96bbe5... a731bb2...
Author: Ian Jackson <ijackson@chiark.greenend.org.uk>
Date:   Fri Jan 31 11:21:55 2014 +0000

    Merge branch 'master' into staging

commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
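
    A minimal Python sketch of the pattern this commit fixes (all names
    here are invented stand-ins, not the actual xen/unlz4 code): a status
    variable initialised to -1 is overwritten by the first successful
    call, so it must be re-armed to -1 right after each call or later
    failures report stale success.

    ```python
    def unpack(parts, step, check):
        ret = -1                      # start pessimistic
        for data in parts:
            ret = step(data)          # 0 on success: this clears the -1
            if ret != 0:
                break
            ret = -1                  # the fix: re-arm before further checks
            if not check(data):       # a later failure must not report success
                break
            ret = 0
        return ret
    ```

    Without the `ret = -1` re-arm, a part that passes step() but fails
    check() would wrongly return the previous call's success value.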
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 12:47:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 12:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9ZyJ-0001NM-3o; Sat, 01 Feb 2014 12:46:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9ZyH-0001NH-T6
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 12:46:54 +0000
Received: from [193.109.254.147:37004] by server-6.bemta-14.messagelabs.com id
	52/EF-03396-DBCECE25; Sat, 01 Feb 2014 12:46:53 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391258810!1322195!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5380 invoked from network); 1 Feb 2014 12:46:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 12:46:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98859905"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 12:46:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 07:46:49 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9ZyD-0006XF-Ev;
	Sat, 01 Feb 2014 12:46:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9ZyD-0000Fn-CN;
	Sat, 01 Feb 2014 12:46:49 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24690-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 12:46:49 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24690: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24690 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24690/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail REGR. vs. 24683
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 24683

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  9 guest-start               fail REGR. vs. 24683

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488
baseline version:
 xen                  a731bb25c072004a7c073625eb6ef14cc9011c74

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit feee1ace547cf6247a358d082dd64fa762be2488
Merge: a96bbe5... a731bb2...
Author: Ian Jackson <ijackson@chiark.greenend.org.uk>
Date:   Fri Jan 31 11:21:55 2014 +0000

    Merge branch 'master' into staging

commit a96bbe5fd79ea8ac6b40e90965f84aab839d3391
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Thu Jan 30 03:47:11 2014 +0000

    Update QEMU_TAG and QEMU_UPSTREAM_REVISION for 4.4.0-rc3
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

commit 7754fb8cab292dfb2047b1cb38004d7290f8b6aa
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Tue Jan 28 16:48:55 2014 +0100

    Update QEMU_UPSTREAM_REVISION
    
    Switch back to master.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

commit 927a13de6c85faa388eef1041baae17744be6791
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 13:33:57 2014 +0100

    blkif.h: enhance comments related to the discard feature
    
    Also fix the name of the discard-alignment property, add the missing 'n'.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Keir Fraser <keir@xen.org>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

commit 55d207597e05b79fb0d90868abfe0f77b3ba66f2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jan 28 13:31:28 2014 +0100

    xen/unlz4: always set an error return code on failures
    
    "ret", being set to -1 early on, gets cleared by the first invocation
    of lz4_decompress()/lz4_decompress_unknownoutputsize(), and hence
    subsequent failures wouldn't be noticed by the caller without setting
    it back to -1 right after those calls.
    
    Linux commit: 2a1d689c9ba42a6066540fb221b6ecbd6298b728
    
    Reported-by: Matthew Daley <mattjd@gmail.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 14:15:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 14:15:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9bLq-0003Vq-Iz; Sat, 01 Feb 2014 14:15:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9bLp-0003Vl-Ev
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 14:15:17 +0000
Received: from [193.109.254.147:2285] by server-7.bemta-14.messagelabs.com id
	2C/5C-23424-4710DE25; Sat, 01 Feb 2014 14:15:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391264114!1330525!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25379 invoked from network); 1 Feb 2014 14:15:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 14:15:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98870489"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 01 Feb 2014 14:15:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 09:15:12 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9bLk-0006yG-5a;
	Sat, 01 Feb 2014 14:15:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9bLk-0006h3-1m;
	Sat, 01 Feb 2014 14:15:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24692-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 14:15:12 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24692: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24692 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24692/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            3 host-build-prep           fail REGR. vs. 24591
 build-amd64-oldkern           3 host-build-prep           fail REGR. vs. 24591

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
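
    The deferral described above can be modelled in a few lines of Python
    (a simplified sketch under invented names, not Xen's actual
    toggle_guest_mode()): writes aimed at kernel-mode addresses are queued
    while the vCPU is in user mode and flushed on the switch back.

    ```python
    class Vcpu:
        def __init__(self):
            self.in_kernel = True     # 64-bit PV: kernel vs user pagetables
            self.pending = []         # updates deferred while in user mode
            self.memory = {}          # stands in for guest kernel memory

        def update_runstate(self, addr, value):
            if self.in_kernel:
                self.memory[addr] = value
            else:
                # kernel addresses aren't mapped: defer instead of dropping
                self.pending.append((addr, value))

        def toggle_guest_mode(self):
            self.in_kernel = not self.in_kernel
            if self.in_kernel and self.pending:
                for addr, value in self.pending:   # flush deferred updates
                    self.memory[addr] = value
                self.pending.clear()
    ```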

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and handling that IO request may return X86EMUL_RETRY.
    If a virtual vmexit is pending at the same time (for example, an
    interrupt to be injected into L1), the hypervisor switches the vCPU
    context from L2 to L1. The X86EMUL_RETRY means the hypervisor will
    retry the IO request later, but the retry then happens in L1's context
    rather than L2's, which causes the problem. The fix is to allow no
    virtual vmexit/vmentry while there is a pending IO request.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
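
    The rule the commit adds reduces to a simple guard; a hypothetical
    sketch (names invented, not the hypervisor's actual code) of refusing
    the L2-to-L1 switch while an emulated IO request is in flight:

    ```python
    def try_virtual_vmexit(vcpu):
        # refuse the L2 -> L1 switch while an emulated IO request is
        # still pending, so the X86EMUL_RETRY completes in L2's context
        if vcpu["io_pending"]:
            return False
        vcpu["level"] = 1             # perform the virtual vmexit
        return True
    ```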

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest can access a physical device directly (nested VT-d). For
    such accesses the shadow EPT table should point at the device's MMIO,
    but the current logic does not distinguish MMIO backed by qemu from
    MMIO backed by a physical device when building the shadow EPT table,
    which is wrong. This patch sets up the correct shadow EPT entries for
    such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100
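
    The distinction the commit introduces can be sketched as follows
    (a hypothetical simplification with invented names, not Xen's EPT
    code): direct device MMIO maps to the device's machine frame, while
    qemu-backed MMIO is left to emulation.

    ```python
    def build_shadow_ept_entry(gfn, p2m_type, device_mfn_of):
        if p2m_type == "mmio_direct":
            return ("map", device_mfn_of(gfn))   # point at the device's MMIO
        if p2m_type == "mmio_dm":
            return ("emulate", None)             # leave it to qemu emulation
        return ("map", gfn)                      # ordinary RAM: identity here
    ```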

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list,
    and T2 trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev
       has changed in memory.
    5. T1 concludes that the cmpxchg failed and goes around the loop
       again, linking the head to A a second time.
    
    To solve the race use temporary variable for prev pointer.
    
    *linkp (which point to a field in the element) must be updated before
    the cmpxchg() as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
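
    The fixed loop shape can be illustrated in Python (a sketch only:
    the lock-based cas() below merely models an atomic cmpxchg, and the
    names are invented): the old head is snapshotted into a local, the
    element is linked before the cmpxchg, and the shared head is never
    re-read for the comparison.

    ```python
    import threading

    _lock = threading.Lock()

    def cas(box, expected, new):
        """Models an atomic cmpxchg on a one-element list."""
        with _lock:
            if box[0] is expected:
                box[0] = new
                return True
            return False

    def xchg_head(head, elem, link):
        while True:
            old = head[0]            # snapshot once into a local
            link[elem] = old         # *linkp updated before the cmpxchg
            if cas(head, old, elem): # compare only against the local
                return
    ```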

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
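
    The fix pattern — release what get_gfn() acquired on every exit path,
    including the error path — can be sketched in Python with try/finally
    standing in for the explicit put_gfn() calls (all names here are
    invented stand-ins, not the actual hypervisor code):

    ```python
    _refs = {"count": 0}               # models the p2m lock / gfn refcount

    def get_gfn(gfn):
        _refs["count"] += 1
        return None if gfn < 0 else ("page", gfn)

    def put_gfn(gfn):
        _refs["count"] -= 1

    def dbg_rw_mem(gfn, do_copy):
        page = get_gfn(gfn)
        try:
            if page is None:
                return -1              # error path: 'finally' still releases
            return do_copy(page)
        finally:
            put_gfn(gfn)               # released on success *and* failure
    ```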
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100
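The deferral described in this commit message can be modelled with a minimal sketch. This is not Xen's actual toggle_guest_mode() code; all names and the flag-based mechanism shown are illustrative assumptions about the pattern (record an update that cannot be applied while the vCPU is in user mode, then apply it on the next switch to kernel mode):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: a runstate-area update arriving while a 64-bit PV
 * vCPU is in user mode cannot be written through the kernel-only mapping,
 * so it is recorded and applied on the next switch to kernel mode. */
static bool in_kernel_mode, update_pending;
static int runstate_area, latest;

static void update_runstate(int v)
{
    latest = v;
    if (in_kernel_mode)
        runstate_area = v;      /* kernel mapping accessible: write now */
    else
        update_pending = true;  /* defer until the next mode toggle */
}

static void toggle_to_kernel_mode(void)
{
    in_kernel_mode = true;
    if (update_pending) {
        runstate_area = latest; /* catch up on the deferred write */
        update_pending = false;
    }
}
```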

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and may get X86EMUL_RETRY while handling that IO request.
    If at the same time a virtual vmexit is pending (for example, an interrupt
    to be injected into L1), the hypervisor will switch the vCPU context from
    L2 to L1. We are then in L1's context, but because of the X86EMUL_RETRY
    the hypervisor will retry the IO request later, and the retry will
    unfortunately happen in L1's context, which causes the problem. The fix is
    to allow no virtual vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100
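The rule stated above can be sketched as a small predicate. This is not the actual Xen code; the enum and function names are made up for illustration, and the point is only the gating logic (a pending IO request blocks any L1/L2 switch, so the retry runs in the right context):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: while an IO request is still in flight (the
 * emulator returned X86EMUL_RETRY), the vCPU must not switch between
 * L1 and L2; the pending virtual vmexit is simply deferred. */
enum io_state { IO_NONE, IO_PENDING };

static bool virtual_vmswitch_allowed(enum io_state s, bool vmexit_pending)
{
    if (s == IO_PENDING)
        return false;          /* the retry must run in L2's context */
    return vmexit_pending;     /* otherwise the switch may proceed */
}
```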

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point at the device's MMIO. But in
    the current logic, L0 does not distinguish whether MMIO comes from qemu or
    from a physical device when building the shadow EPT table, which is wrong.
    This patch sets up the correct shadow EPT entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded, but in the meantime prev has
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary, since the
    latter is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100
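The fixed insertion pattern described above (keep a local snapshot of the old head, write the element's link field before the cmpxchg, and compare against the snapshot rather than re-reading memory) can be sketched with C11 atomics. This is a simplified stand-in, not Xen's mctelem code; the structure and names are illustrative:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical minimal model of the fixed lock-free list push. */
struct elem { struct elem *next; int v; };

static _Atomic(struct elem *) head;  /* zero-initialized: empty list */

static void push(struct elem *e)
{
    struct elem *old = atomic_load(&head);
    do {
        e->next = old;  /* link before publishing, using the snapshot */
    } while (!atomic_compare_exchange_weak(&head, &old, e));
    /* On failure, 'old' is refreshed by the CAS itself, so success is
     * judged against the value we actually linked to, never against a
     * separate re-read that a consumer may have changed meanwhile. */
}
```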

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The 1st one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
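The bug pattern fixed here is a reference taken by get_gfn() that was not dropped on an error return, leaving the p2m lock held. A minimal sketch of the invariant, with stub names that are purely illustrative (not the real Xen API):

```c
#include <assert.h>

/* Hypothetical model: get_gfn takes a reference (and the p2m lock);
 * every exit path, including the error path, must drop it again. */
static int gfn_refs;

static void get_gfn_stub(void) { gfn_refs++; }
static void put_gfn_stub(void) { gfn_refs--; }

static int dbg_rw_guest_mem_stub(int mapping_fails)
{
    get_gfn_stub();
    if (mapping_fails) {
        put_gfn_stub();   /* the fix: release the ref on error too */
        return -1;
    }
    /* ... read or write the guest page here ... */
    put_gfn_stub();
    return 0;
}
```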
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 15:17:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 15:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9cJm-0004wy-RP; Sat, 01 Feb 2014 15:17:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9cJk-0004wt-VS
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 15:17:13 +0000
Received: from [85.158.137.68:17067] by server-8.bemta-3.messagelabs.com id
	E2/0C-16039-7FF0DE25; Sat, 01 Feb 2014 15:17:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391267829!12729605!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31910 invoked from network); 1 Feb 2014 15:17:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 15:17:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96929434"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 15:16:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 10:16:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9cJE-0007Gj-UY;
	Sat, 01 Feb 2014 15:16:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9cJE-0008T7-UA;
	Sat, 01 Feb 2014 15:16:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24696-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 15:16:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24696: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24696 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24696/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 11 guest-localmigrate.2 fail REGR. vs. 24546

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 qemuu                90d35066387e1d9c9deeda042c5a907cd57c11cf
baseline version:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 90d35066387e1d9c9deeda042c5a907cd57c11cf
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jan 10 15:56:33 2014 +0000

    xen_pt: Fix passthrough of device with ROM.
    
    QEMU does not need, and should not, allocate memory for the ROM of a
    passthrough PCI device. So this patch initializes that region like any
    other PCI BAR of a passthrough device.
    
    When a guest accesses the ROM, Xen takes care of the IO; QEMU is not
    involved in it.
    
    Xen sets a limit on the memory available to each guest, and allocating
    memory for a ROM can hit this limit.
    
    upstream-commit-id: 794798e36eda77802ce7cc7d7d6b1c65751e8a76
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reported-and-Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 16:15:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 16:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9dDx-0006fS-Q3; Sat, 01 Feb 2014 16:15:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W9dDw-0006fN-S0
	for xen-devel@lists.xen.org; Sat, 01 Feb 2014 16:15:17 +0000
Received: from [85.158.143.35:12942] by server-2.bemta-4.messagelabs.com id
	2C/46-10891-49D1DE25; Sat, 01 Feb 2014 16:15:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391271315!2424128!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16770 invoked from network); 1 Feb 2014 16:15:15 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Feb 2014 16:15:15 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391271315; l=781;
	s=domk; d=aepfle.de;
	h=Content-Type:MIME-Version:Subject:To:From:Date:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=twIboOhNoaJUpNn5LtubkhfY2fM=;
	b=ewxB+Y8PLd0h21vaql7q8ijs+K+cFx0o4e/e4IC1GpJWuFOR4EUF3YLaGi0IXRB4vJE
	bfiS5LKMEE49D0MckI/1ZCiEJE9UiwZVAfnZqVUP1ZdsvNsJli2x/V0UH9YTbAExRtUXf
	+2oaH5T+hnQapCgQFpxZ3txq3nEFRuF5qRY=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id 50330eq11GFFCpC
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate) for <xen-devel@lists.xen.org>;
	Sat, 1 Feb 2014 17:15:15 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 25A7F50269; Sat,  1 Feb 2014 17:15:13 +0100 (CET)
Date: Sat, 1 Feb 2014 17:15:13 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140201161513.GA13789@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Subject: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


I'm seeing build failures on aarch64 during 'make tools' because both
arch/arm64/include/uapi/asm/ptrace.h from the kernel source and
xen/include/public/arch-arm.h from xen-4.4 define PSR_MODE_EL3t with
slightly different strings.

I think the "if defined(__XEN_TOOLS__)" should be removed from
xen/include/public/arch-arm.h so that userland tools can pick up the
defines from /usr/include. Untested patch below.

Olaf

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 7496556..17422e6 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -316,7 +316,7 @@ typedef uint64_t xen_callback_t;
 
 #endif
 
-#if defined(__XEN__) || defined(__XEN_TOOLS__)
+#if defined(__XEN__)
 
 /* PSR bits (CPSR, SPSR)*/
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 16:15:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 16:15:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9dDx-0006fS-Q3; Sat, 01 Feb 2014 16:15:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W9dDw-0006fN-S0
	for xen-devel@lists.xen.org; Sat, 01 Feb 2014 16:15:17 +0000
Received: from [85.158.143.35:12942] by server-2.bemta-4.messagelabs.com id
	2C/46-10891-49D1DE25; Sat, 01 Feb 2014 16:15:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391271315!2424128!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16770 invoked from network); 1 Feb 2014 16:15:15 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Feb 2014 16:15:15 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391271315; l=781;
	s=domk; d=aepfle.de;
	h=Content-Type:MIME-Version:Subject:To:From:Date:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=twIboOhNoaJUpNn5LtubkhfY2fM=;
	b=ewxB+Y8PLd0h21vaql7q8ijs+K+cFx0o4e/e4IC1GpJWuFOR4EUF3YLaGi0IXRB4vJE
	bfiS5LKMEE49D0MckI/1ZCiEJE9UiwZVAfnZqVUP1ZdsvNsJli2x/V0UH9YTbAExRtUXf
	+2oaH5T+hnQapCgQFpxZ3txq3nEFRuF5qRY=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id 50330eq11GFFCpC
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate) for <xen-devel@lists.xen.org>;
	Sat, 1 Feb 2014 17:15:15 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 25A7F50269; Sat,  1 Feb 2014 17:15:13 +0100 (CET)
Date: Sat, 1 Feb 2014 17:15:13 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140201161513.GA13789@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Subject: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


I'm seeing build failures on aarch64 during 'make tools' because both
arch/arm64/include/uapi/asm/ptrace.h from the kernel source and
xen/include/public/arch-arm.h from xen-4.4 define PSR_MODE_EL3t with
slightly different strings.

I think the defined(__XEN_TOOLS__) check should be removed from
xen/include/public/arch-arm.h so that userland tools can pick up the
defines from /usr/include. Untested patch below.

Olaf

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 7496556..17422e6 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -316,7 +316,7 @@ typedef uint64_t xen_callback_t;
 
 #endif
 
-#if defined(__XEN__) || defined(__XEN_TOOLS__)
+#if defined(__XEN__)
 
 /* PSR bits (CPSR, SPSR)*/
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 17:25:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 17:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9eJf-00008i-3u; Sat, 01 Feb 2014 17:25:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W9eJe-00008d-3W
	for xen-devel@lists.xen.org; Sat, 01 Feb 2014 17:25:14 +0000
Received: from [193.109.254.147:45439] by server-3.bemta-14.messagelabs.com id
	23/61-00432-9FD2DE25; Sat, 01 Feb 2014 17:25:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391275511!1324158!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 811 invoked from network); 1 Feb 2014 17:25:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 17:25:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96948509"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 17:25:10 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Sat, 1 Feb 2014
	12:25:09 -0500
Message-ID: <1391275508.15093.16.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Sat, 1 Feb 2014 17:25:08 +0000
In-Reply-To: <20140201161513.GA13789@aepfle.de>
References: <20140201161513.GA13789@aepfle.de>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-01 at 17:15 +0100, Olaf Hering wrote:
> I'm seeing build failures on aarch64 during 'make tools' because both
> arch/arm64/include/uapi/asm/ptrace.h from the kernel source and
> xen/include/public/arch-arm.h from xen-4.4 define PSR_MODE_EL3t with
> slightly different strings.
> 
> I think the defined(__XEN_TOOLS__) check should be removed from
> xen/include/public/arch-arm.h so that userland tools can pick up the
> defines from /usr/include. Untested patch below.

IMHO this is a glibc bug -- see
https://bugs.launchpad.net/linaro-aarch64/+bug/1169164

Ian.

> 
> Olaf
> 
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 7496556..17422e6 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -316,7 +316,7 @@ typedef uint64_t xen_callback_t;
>  
>  #endif
>  
> -#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +#if defined(__XEN__)
>  
>  /* PSR bits (CPSR, SPSR)*/
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 19:40:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 19:40:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9gPn-0003Zn-VS; Sat, 01 Feb 2014 19:39:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>)
	id 1W9gPm-0003ZZ-9R; Sat, 01 Feb 2014 19:39:42 +0000
Received: from [193.109.254.147:57705] by server-8.bemta-14.messagelabs.com id
	4C/70-18529-D7D4DE25; Sat, 01 Feb 2014 19:39:41 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391283580!1352809!1
X-Originating-IP: [129.234.248.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMiA9PiA5ODA1MA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29705 invoked from network); 1 Feb 2014 19:39:41 -0000
Received: from hermes2.dur.ac.uk (HELO hermes2.dur.ac.uk) (129.234.248.2)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 1 Feb 2014 19:39:41 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s11JdPhU022620;
	Sat, 1 Feb 2014 19:39:29 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s11JdMOw005708
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 1 Feb 2014 19:39:22 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s11JdMgc011375;
	Sat, 1 Feb 2014 19:39:22 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s11JdKHS011370; Sat, 1 Feb 2014 19:39:21 GMT
Date: Sat, 1 Feb 2014 19:39:20 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Dario Faggioli <dario.faggioli@citrix.com>
In-Reply-To: <1391209094.13572.50.camel@Abyss>
Message-ID: <alpine.DEB.2.00.1402011938300.21894@procyon.dur.ac.uk>
References: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
	<009c01cf1ece$2739a820$75acf860$@yahoo.com>
	<1391209094.13572.50.camel@Abyss>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s11JdPhU022620
Cc: Eric Houby <ehouby@yahoo.com>, xen <xen@lists.fedoraproject.org>,
	xen-users@lists.xen.org, 'Russ Pavlicek' <russell.pavlicek@xenproject.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] REMINDER: Feb 3 is Xen Project Test Day
 for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 31 Jan 2014, Dario Faggioli wrote:

> On Fri, 2014-01-31 at 14:48 -0700, Eric Houby wrote:
>>> Next Monday, February 3, is the Test Day for Xen 4.4 Release Candidate 3.
>>>
>>> General Information about Test Days can be found here:
>>> http://wiki.xenproject.org/wiki/Xen_Test_Days
>>>
>>> and specific instructions for this Test Day are located here:
>>> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
>>>
>>
>> Russ,
>>
>> On the RC3 test instructions page, the link for the RC3 RPMs is pointing to
>> what looks like the RC2 RPMs from 1/16.  Will there be updated RPMs for this
>> test day?
>>
> Michael, what do you think? It's late I know... Sorry for that, but I'm
> travelling and couldn't direct your attention to this before.

There is an rc3 build at 
http://koji.fedoraproject.org/koji/taskinfo?taskID=6479953

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 19:44:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 19:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9gTx-00042q-8s; Sat, 01 Feb 2014 19:44:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1W9gTw-00042f-1L; Sat, 01 Feb 2014 19:44:00 +0000
Received: from [85.158.139.211:15012] by server-14.bemta-5.messagelabs.com id
	64/52-27598-F7E4DE25; Sat, 01 Feb 2014 19:43:59 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391283837!1036923!1
X-Originating-IP: [209.85.215.51]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_23,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9136 invoked from network); 1 Feb 2014 19:43:58 -0000
Received: from mail-la0-f51.google.com (HELO mail-la0-f51.google.com)
	(209.85.215.51)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 19:43:58 -0000
Received: by mail-la0-f51.google.com with SMTP id c6so4475063lan.10
	for <multiple recipients>; Sat, 01 Feb 2014 11:43:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=ybUhQYGNt8tqMO5fk+0Qz+Zm+YhnNJt2KTUlc339ghM=;
	b=RnVmIO9t93KgRkKwoCFOew3PK1SKtZpRUPoboAqOPLXk5q6orH/IVHS0CUpYcCJ1EC
	K20XBqpeK75yv57M020UmWAXBB4ACa9jdJZhCi4WlDQUhmmouX9Hyewh58BtSFmhefzM
	UYN3Kccjksf5IiXszx/D0NIERBAX6JfTATpOx2QcQS7lJ4DCS4Sm/cQBz89/rhZ4/qdW
	ZQWrES3x/BsdW/0U8nQS6OqGnCTQavg0iSp5CdNaBMuT2btWFKJ4I+oqGJTjkgwCEz76
	7i9/NfKuXNo53fY1Zr7jfmAGLY5XdIHefffvszAMHdw7KGQgUCIir5m3k5bN030pyYIY
	5g0g==
MIME-Version: 1.0
X-Received: by 10.112.132.102 with SMTP id ot6mr6633619lbb.27.1391283837234;
	Sat, 01 Feb 2014 11:43:57 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Sat, 1 Feb 2014 11:43:57 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Sat, 1 Feb 2014 11:43:57 -0800 (PST)
In-Reply-To: <alpine.DEB.2.00.1402011938300.21894@procyon.dur.ac.uk>
References: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
	<009c01cf1ece$2739a820$75acf860$@yahoo.com>
	<1391209094.13572.50.camel@Abyss>
	<alpine.DEB.2.00.1402011938300.21894@procyon.dur.ac.uk>
Date: Sat, 1 Feb 2014 14:43:57 -0500
X-Google-Sender-Auth: 0KHNHkz0_CWUL9vxtcnkuLPkqhc
Message-ID: <CAHehzX1_1Ce+HCPiyM-tQm-i0Rt611JTHsd38hcM=0H09A37fw@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: M A Young <m.a.young@durham.ac.uk>
Cc: xen-users@lists.xen.org, xen <xen@lists.fedoraproject.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	xen-devel@lists.xen.org, Eric Houby <ehouby@yahoo.com>
Subject: Re: [Xen-devel] [Xen-users] REMINDER: Feb 3 is Xen Project Test Day
	for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6329101997891920599=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6329101997891920599==
Content-Type: multipart/alternative; boundary=047d7b3431eae6762404f15d827e

--047d7b3431eae6762404f15d827e
Content-Type: text/plain; charset=ISO-8859-1

Michael,

Splendid, thank you!

I have updated the Wiki page.
 On Feb 1, 2014 2:39 PM, "M A Young" <m.a.young@durham.ac.uk> wrote:

> On Fri, 31 Jan 2014, Dario Faggioli wrote:
>
>  On Fri, 2014-01-31 at 14:48 -0700, Eric Houby wrote:
>>
>>>> Next Monday, February 3, is the Test Day for Xen 4.4 Release Candidate
>>>> 3.
>>>>
>>>> General Information about Test Days can be found here:
>>>> http://wiki.xenproject.org/wiki/Xen_Test_Days
>>>>
>>>> and specific instructions for this Test Day are located here:
>>>> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
>>>>
>>>>
>>> Russ,
>>>
>>> On the RC3 test instructions page, the link for the RC3 RPMs is
>>> pointing to
>>> what looks like the RC2 RPMs from 1/16.  Will there be updated RPMs for
>>> this
>>> test day?
>>>
>>>  Michael, what do you think? It's late I know... Sorry for that, but I'm
>> travelling and couldn't direct your attention to this before.
>>
>
> There is an rc3 build at http://koji.fedoraproject.org/
> koji/taskinfo?taskID=6479953
>
>         Michael Young
>

--047d7b3431eae6762404f15d827e--


--===============6329101997891920599==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6329101997891920599==--


>
> There is an rc3 build at http://koji.fedoraproject.org/
> koji/taskinfo?taskID=6479953
>
>         Michael Young
>

--047d7b3431eae6762404f15d827e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<p dir=3D"ltr">Michael,</p>
<p dir=3D"ltr">Splendid, thank you!</p>
<p dir=3D"ltr">I have updated the Wiki page.<br>
</p>
<div class=3D"gmail_quote">On Feb 1, 2014 2:39 PM, &quot;M A Young&quot; &l=
t;<a href=3D"mailto:m.a.young@durham.ac.uk">m.a.young@durham.ac.uk</a>&gt; =
wrote:<br type=3D"attribution"><blockquote class=3D"gmail_quote" style=3D"m=
argin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Fri, 31 Jan 2014, Dario Faggioli wrote:<br>
<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
On Fri, 2014-01-31 at 14:48 -0700, Eric Houby wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><blockquote class=3D"gmail_quote" style=3D"m=
argin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Next Monday, February 3, is the Test Day for Xen 4.4. Release Candidate 3.<=
br>
<br>
General Information about Test Days can be found here:<br>
<a href=3D"http://wiki.xenproject.org/wiki/Xen_Test_Days" target=3D"_blank"=
>http://wiki.xenproject.org/<u></u>wiki/Xen_Test_Days</a><br>
<br>
and specific instructions for this Test Day are located here:<br>
<a href=3D"http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions" t=
arget=3D"_blank">http://wiki.xenproject.org/<u></u>wiki/Xen_4.4_RC3_test_<u=
></u>instructions</a><br>
<br>
</blockquote>
<br>
Russ,<br>
<br>
On the RC3 test instructions page, the link for the RC3 RPMS are pointing t=
o<br>
what looks like the RC2 RPMs from 1/16. =A0Will there be updated RPMs for t=
his<br>
test day?<br>
<br>
</blockquote>
Michael, what do you think? It&#39;s late I know... Sorry for that, but I&#=
39;m<br>
travelling and couldn&#39;t direct your attention to this before.<br>
</blockquote>
<br>
There is an rc3 build at <a href=3D"http://koji.fedoraproject.org/koji/task=
info?taskID=3D6479953" target=3D"_blank">http://koji.fedoraproject.org/<u><=
/u>koji/taskinfo?taskID=3D6479953</a><br>
<br>
=A0 =A0 =A0 =A0 Michael Young<br>
</blockquote></div>

--047d7b3431eae6762404f15d827e--


--===============6329101997891920599==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6329101997891920599==--


From xen-devel-bounces@lists.xen.org Sat Feb 01 20:05:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 20:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9goX-0005Mt-DV; Sat, 01 Feb 2014 20:05:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9goV-0005Mo-Kp
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 20:05:15 +0000
Received: from [193.109.254.147:40419] by server-5.bemta-14.messagelabs.com id
	24/D0-16688-A735DE25; Sat, 01 Feb 2014 20:05:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391285112!1359414!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14642 invoked from network); 1 Feb 2014 20:05:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 20:05:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96963862"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 01 Feb 2014 20:05:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 15:05:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9goQ-00010f-KH;
	Sat, 01 Feb 2014 20:05:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9goO-0004tu-DB;
	Sat, 01 Feb 2014 20:05:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24699-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 1 Feb 2014 20:05:08 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24699: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24699 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24699/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv           9 guest-start                 fail pass in 24687
 test-amd64-amd64-xl-sedf     11 guest-localmigrate fail in 24687 pass in 24699

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f
baseline version:
 xen                  b06c0fd1e6be40843084442ebdb377b16f110602

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=0037ec360b8792f966acc154e06ac9f627b00f9f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 0037ec360b8792f966acc154e06ac9f627b00f9f
+ branch=xen-4.2-testing
+ revision=0037ec360b8792f966acc154e06ac9f627b00f9f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 0037ec360b8792f966acc154e06ac9f627b00f9f:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   b06c0fd..0037ec3  0037ec360b8792f966acc154e06ac9f627b00f9f -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 01 21:03:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 21:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9hiv-0006eV-I0; Sat, 01 Feb 2014 21:03:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W9hit-0006eQ-Uf
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 21:03:32 +0000
Received: from [85.158.137.68:65366] by server-8.bemta-3.messagelabs.com id
	79/29-16039-3216DE25; Sat, 01 Feb 2014 21:03:31 +0000
X-Env-Sender: peter.maydell@linaro.org

From xen-devel-bounces@lists.xen.org Sat Feb 01 21:03:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Feb 2014 21:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9hiv-0006eV-I0; Sat, 01 Feb 2014 21:03:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1W9hit-0006eQ-Uf
	for xen-devel@lists.xensource.com; Sat, 01 Feb 2014 21:03:32 +0000
Received: from [85.158.137.68:65366] by server-8.bemta-3.messagelabs.com id
	79/29-16039-3216DE25; Sat, 01 Feb 2014 21:03:31 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391288609!11961784!1
X-Originating-IP: [209.85.215.46]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15957 invoked from network); 1 Feb 2014 21:03:30 -0000
Received: from mail-la0-f46.google.com (HELO mail-la0-f46.google.com)
	(209.85.215.46)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	1 Feb 2014 21:03:30 -0000
Received: by mail-la0-f46.google.com with SMTP id b8so4406017lan.19
	for <xen-devel@lists.xensource.com>;
	Sat, 01 Feb 2014 13:03:29 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=qEnsvMcIHI++uWfubAs+Wla5rTV/UAqNLor8TvHPU8w=;
	b=EojSRyQ4752kDLaqlMv7dDggN13lYcZ3K6gpLfNUqwJnSes5EaUWEysoS4sTON2z7H
	kyfhgYk7D0CXo/oS7PB6r755o45JDHd65IP9MD7nFdjhmaiiI6UAd27HBUarchSiOzJz
	n76yJTg6Kut3k4f0PRbP9tcX7QH+kxvnPkGc9kPAfxLm8q4frUO7tM5br9j+KJ8m5FLV
	HqNb0//7eyaJtAmP9t+0AqprmUxJvxYit+behderBhJmoik8iUJRAnSjW2XkF1S5Wakc
	w3rhacDSRbAdXkLsxEoXbAf0iG/04sGH8QToWLCsMGyasn3B0iZy003wZ1zdxm8sijlm
	ygxA==
X-Gm-Message-State: ALoCoQl0wJj/YrkoC+TzVVAavXjBlAKk4m+Hk21+pzaGicbe8C7GySJR0uSMF58RFzPtNEfOoDy6
X-Received: by 10.112.144.35 with SMTP id sj3mr2192339lbb.44.1391288609143;
	Sat, 01 Feb 2014 13:03:29 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.199.162 with HTTP; Sat, 1 Feb 2014 13:03:08 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401301422560.4373@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Sat, 1 Feb 2014 21:03:08 +0000
Message-ID: <CAFEAcA8+BnfiZvoeC19OLPiJzu3FO4ee2QJUunNXnAykT4m0OQ@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com Devel" <xen-devel@lists.xensource.com>,
	qemu-stable <qemu-stable@nongnu.org>,
	QEMU Developers <qemu-devel@nongnu.org>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Anthony PERARD <Anthony.Perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PULL 0/1] xen-140130
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 30 January 2014 14:24, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> The following changes since commit 0169c511554cb0014a00290b0d3d26c31a49818f:
>
>   Merge remote-tracking branch 'qemu-kvm/uq/master' into staging (2014-01-24 15:52:44 -0800)
>
> are available in the git repository at:
>
>
>   git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-140130
>
> for you to fetch changes up to 360e607b88a23d378f6efaa769c76d26f538234d:
>
>   address_space_translate: do not cross page boundaries (2014-01-30 14:20:45 +0000)
>
> ----------------------------------------------------------------
> Stefano Stabellini (1):
>       address_space_translate: do not cross page boundaries
>
>  exec.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)

Applied, thanks.

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 00:08:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 00:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9kbD-0002hG-7I; Sun, 02 Feb 2014 00:07:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9kbA-0002hB-VP
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 00:07:45 +0000
Received: from [193.109.254.147:14540] by server-1.bemta-14.messagelabs.com id
	C7/F8-15438-05C8DE25; Sun, 02 Feb 2014 00:07:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391299662!1375938!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19795 invoked from network); 2 Feb 2014 00:07:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 00:07:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98931557"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 00:07:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 19:07:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9kb6-0002CD-Mu;
	Sun, 02 Feb 2014 00:07:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9kb6-0000BS-EV;
	Sun, 02 Feb 2014 00:07:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24703-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Feb 2014 00:07:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24703: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24703 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24703/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  4 xen-install              fail pass in 24690
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail pass in 24702-bisect
 test-amd64-amd64-xl-sedf-pin  9 guest-start        fail in 24690 pass in 24703
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail in 24690 pass in 24703
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail in 24690 pass in 24703

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24702 like 24683

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488
baseline version:
 xen                  a731bb25c072004a7c073625eb6ef14cc9011c74

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=feee1ace547cf6247a358d082dd64fa762be2488
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable feee1ace547cf6247a358d082dd64fa762be2488
+ branch=xen-unstable
+ revision=feee1ace547cf6247a358d082dd64fa762be2488
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git feee1ace547cf6247a358d082dd64fa762be2488:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   a731bb2..feee1ac  feee1ace547cf6247a358d082dd64fa762be2488 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 00:54:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 00:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9lKK-0003xs-NM; Sun, 02 Feb 2014 00:54:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9lKI-0003xn-B4
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 00:54:22 +0000
Received: from [85.158.143.35:13652] by server-2.bemta-4.messagelabs.com id
	90/D5-10891-D379DE25; Sun, 02 Feb 2014 00:54:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391302459!2450417!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4462 invoked from network); 2 Feb 2014 00:54:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 00:54:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="96989503"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Feb 2014 00:54:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 19:54:18 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9lKE-0002QP-1Y;
	Sun, 02 Feb 2014 00:54:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9lKD-0004GP-VR;
	Sun, 02 Feb 2014 00:54:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24708-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Feb 2014 00:54:17 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24708: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24708 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24708/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24553

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 qemuu                90d35066387e1d9c9deeda042c5a907cd57c11cf
baseline version:
 qemuu                11f6a1cedb8d759fd64d7dd5db95b747591f2ca7

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=90d35066387e1d9c9deeda042c5a907cd57c11cf
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 90d35066387e1d9c9deeda042c5a907cd57c11cf
+ branch=qemu-upstream-unstable
+ revision=90d35066387e1d9c9deeda042c5a907cd57c11cf
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git 90d35066387e1d9c9deeda042c5a907cd57c11cf:master
Counting objects: 9, done.
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 2.38 KiB, done.
Total 5 (delta 2), reused 3 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   11f6a1c..90d3506  90d35066387e1d9c9deeda042c5a907cd57c11cf -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 02:20:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 02:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9mex-0003uk-Eq; Sun, 02 Feb 2014 02:19:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9meu-0003uf-LS
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 02:19:45 +0000
Received: from [85.158.137.68:28848] by server-12.bemta-3.messagelabs.com id
	33/3A-01674-F3BADE25; Sun, 02 Feb 2014 02:19:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391307580!12816569!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23637 invoked from network); 2 Feb 2014 02:19:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 02:19:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98942342"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 02:19:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 21:19:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9meo-0002qT-Gm;
	Sun, 02 Feb 2014 02:19:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9meo-0001RM-5l;
	Sun, 02 Feb 2014 02:19:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24709-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Feb 2014 02:19:38 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24709: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24709 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24709/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591
 build-i386-oldkern            3 host-build-prep  fail in 24692 REGR. vs. 24591
 build-amd64-oldkern           3 host-build-prep  fail in 24692 REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            9 guest-start                 fail pass in 24692

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction in order to handle an IO
    access directly, and may get X86EMUL_RETRY while handling that IO
    request.  If a virtual vmexit is pending at the same time (for example,
    an interrupt to be injected into L1), the hypervisor switches the vCPU
    context from L2 to L1.  The retry of the IO request then happens in
    L1's context, which is wrong.  The fix is to allow no virtual
    vmexit/vmentry while an IO request is pending.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    With nested VT-d, an L2 guest accesses the physical device directly.
    For such accesses the shadow EPT table should point at the device's
    MMIO.  But the current logic, when building the shadow EPT table, does
    not distinguish MMIO emulated by qemu from MMIO belonging to a physical
    device, which is wrong.  This patch sets up the correct shadow EPT
    entries for such MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Consider two threads: T1, inserting on the committed list; and T2,
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads pointer to prev again and compare with result
       from the cmpxchg which succeeded but in the meantime prev changed
       in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since it is
    already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
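The missing put_gfn() is an instance of a general pattern: a reference taken at function entry must be released on every exit path, including errors. A minimal sketch of the buggy and fixed shapes, with mock get_gfn()/put_gfn() counters standing in for the real p2m lock (the function bodies and the `bad_mfn` parameter are hypothetical, not Xen's real signatures):

```c
#include <assert.h>
#include <errno.h>

/* Counter modeling the p2m lock's recursion count. */
static int gfn_refcount;

static void get_gfn(void) { gfn_refcount++; }
static void put_gfn(void) { gfn_refcount--; }

/* Buggy shape: the early error return leaks the reference, so the next
 * locker on another PCPU hangs (or trips the preempt_count assertion). */
static int dbg_rw_guest_mem_buggy(int bad_mfn)
{
    get_gfn();
    if (bad_mfn)
        return -EFAULT;          /* leak: put_gfn() never runs */
    put_gfn();
    return 0;
}

/* Fixed shape: a single exit path always releases the reference. */
static int dbg_rw_guest_mem_fixed(int bad_mfn)
{
    int rc = 0;

    get_gfn();
    if (bad_mfn)
        rc = -EFAULT;
    put_gfn();                   /* balanced on success and error alike */
    return rc;
}
```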
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 02:20:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 02:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9mex-0003uk-Eq; Sun, 02 Feb 2014 02:19:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9meu-0003uf-LS
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 02:19:45 +0000
Received: from [85.158.137.68:28848] by server-12.bemta-3.messagelabs.com id
	33/3A-01674-F3BADE25; Sun, 02 Feb 2014 02:19:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391307580!12816569!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23637 invoked from network); 2 Feb 2014 02:19:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 02:19:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98942342"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 02:19:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 1 Feb 2014 21:19:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9meo-0002qT-Gm;
	Sun, 02 Feb 2014 02:19:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9meo-0001RM-5l;
	Sun, 02 Feb 2014 02:19:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24709-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Feb 2014 02:19:38 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24709: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24709 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24709/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24591
 build-i386-oldkern            3 host-build-prep  fail in 24692 REGR. vs. 24591
 build-amd64-oldkern           3 host-build-prep  fail in 24692 REGR. vs. 24591

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            9 guest-start                 fail pass in 24692

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c450908dc9168c3f20787aab268fcc295feaed7d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jan 29 11:56:17 2014 +0100

    x86: don't drop guest visible state updates when 64-bit PV guest is in user mode
    
    Since 64-bit PV uses separate kernel and user mode page tables, kernel
    addresses (as usually provided via VCPUOP_register_runstate_memory_area
    and possibly via VCPUOP_register_vcpu_time_memory_area) aren't
    necessarily accessible when the respective updating occurs. Add logic
    for toggle_guest_mode() to take care of this (if necessary) the next
    time the vCPU switches to kernel mode.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 231d7f4098c8ac9cdb78f18fcb820d8618c8b0c2
    master date: 2014-01-23 10:30:08 +0100

commit affb7e6bc3d3db4880613cf012b8f6cee0fd9c07
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:55:41 2014 +0100

    Nested VMX: prohibit virtual vmentry/vmexit during IO emulation
    
    Sometimes L0 needs to decode an L2 instruction to handle an IO access
    directly, and may get X86EMUL_RETRY when handling this IO request. If at
    the same time there is a virtual vmexit pending (for example, an interrupt
    pending to inject to L1), the hypervisor will switch the VCPU context from
    L2 to L1. We are then already in L1's context, but since we just got an
    X86EMUL_RETRY, the hypervisor will retry the IO request later, and
    unfortunately the retry will happen in L1's context, which causes the
    problem. The fix is that while there is a pending IO request, no virtual
    vmexit/vmentry is allowed.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Jun Nakajima <jun.nakajima@intel.com>
    master commit: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    master date: 2014-01-23 10:27:34 +0100

commit a4c215abc86dad8ccca4992f14f62550e5c02cf6
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:54:32 2014 +0100

    common/sysctl: Don't leak status in SYSCTL_page_offline_op
    
    In addition, 'copyback' should be cleared even in the error case.
    
    Also fix the indentation of the arguments to copy_to_guest() to help clarify
    that the 'ret = -EFAULT' is not part of the condition.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Keir Fraser <keir@xen.org>
    master commit: efd8ff0a04740a698b2b8b2b9adccd639e0fa6c9
    master date: 2014-01-20 09:48:11 +0100

commit f7d6d69edc9c98d25c2b2300eb925bec637d0b2e
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Wed Jan 29 11:54:02 2014 +0100

    nested EPT: fixing wrong handling for L2 guest's direct mmio access
    
    An L2 guest may access a physical device directly (nested VT-d). For such
    accesses, the shadow EPT table should point at the device's MMIO. But in
    the current logic, L0 does not distinguish whether the MMIO comes from
    qemu or from a physical device when building the shadow EPT table.
    This is wrong. This patch sets up the correct shadow EPT table for such
    MMIO ranges.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>
    master commit: 0b988ba711171b39aed9851cfe90fded50f775c5
    master date: 2014-01-17 16:00:21 +0100

commit 9c5b7fb63a79570e4bc14fcbe2d15a23a0f1b433
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Wed Jan 29 11:53:22 2014 +0100

    mce: fix race condition in mctelem_xchg_head
    
    The function (mctelem_xchg_head()) used to exchange mce telemetry
    list heads is racy.  It may write to the head twice, with the second
    write linking to an element in the wrong state.
    
    Suppose there are two threads: T1 inserting on the committed list, and T2
    trying to consume it.
    
    1. T1 starts inserting an element (A), sets prev pointer (mcte_prev).
    2. T1 is interrupted after the cmpxchg succeeded.
    3. T2 gets the list and changes element A and updates the commit list
       head.
    4. T1 resumes, reads the prev pointer again and compares it with the
       result of the cmpxchg, which succeeded; but in the meantime prev has
       changed in memory.
    5. T1 thinks the cmpxchg failed and goes around the loop again,
       linking head to A again.
    
    To solve the race, use a temporary variable for the prev pointer.
    
    *linkp (which points to a field in the element) must be updated before
    the cmpxchg(), as after a successful cmpxchg the element might be
    immediately removed and reinitialized.
    
    The wmb() prior to the cmpxchgptr() call is not necessary since the
    latter is already a full memory barrier.  This wmb() is thus removed.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
    master commit: e9af61b969906976188609379183cb304935f448
    master date: 2014-01-17 15:58:27 +0100

commit df75a2685c21158817092e34ee20cbf7ca770e75
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 29 11:52:14 2014 +0100

    dbg_rw_guest_mem: need to call put_gfn in error path
    
    Using a 1G hvm domU (in grub) and gdbsx:
    
    (gdb) set arch i8086
    warning: A handler for the OS ABI "GNU/Linux" is not built into this configuration
    of GDB.  Attempting to continue with the default i8086 settings.
    
    The target architecture is assumed to be i8086
    (gdb) target remote localhost:9999
    Remote debugging using localhost:9999
    Remote debugging from host 127.0.0.1
    0x0000d475 in ?? ()
    (gdb) x/1xh 0x6ae9168b
    
    Will reproduce this bug.
    
    With a debug=y build you will get:
    
    Assertion '!preempt_count()' failed at preempt.c:37
    
    For a debug=n build you will get a dom0 VCPU hung (at some point) in:
    
             [ffff82c4c0126eec] _write_lock+0x3c/0x50
              ffff82c4c01e43a0  __get_gfn_type_access+0x150/0x230
              ffff82c4c0158885  dbg_rw_mem+0x115/0x360
              ffff82c4c0158fc8  arch_do_domctl+0x4b8/0x22f0
              ffff82c4c01709ed  get_page+0x2d/0x100
              ffff82c4c01031aa  do_domctl+0x2ba/0x11e0
              ffff82c4c0179662  do_mmuext_op+0x8d2/0x1b20
              ffff82c4c0183598  __update_vcpu_system_time+0x288/0x340
              ffff82c4c015c719  continue_nonidle_domain+0x9/0x30
              ffff82c4c012938b  add_entry+0x4b/0xb0
              ffff82c4c02223f9  syscall_enter+0xa9/0xae
    
    And gdb output:
    
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     0x3024
    (gdb) x/1xh 0x6ae9168b
    0x6ae9168b:     Ignoring packet error, continuing...
    Reply contains invalid hex digit 116
    
    The first one worked because the p2m.lock is recursive and the PCPU
    had not yet changed.
    
    crash reports (for example):
    
    crash> mm_rwlock_t 0xffff83083f913010
    struct mm_rwlock_t {
      lock = {
        raw = {
          lock = 2147483647
        },
        debug = {<No data fields>}
      },
      unlock_level = 0,
      recurse_count = 1,
      locker = 1,
      locker_function = 0xffff82c4c022c640 <__func__.13514> "__get_gfn_type_access"
    }
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    master commit: 3dbab7a8bf4bef1bb2967cb3a8c7ed2146482ab3
    master date: 2014-01-10 17:45:01 +0100
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 07:24:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 07:24:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9rOv-0002Vv-L8; Sun, 02 Feb 2014 07:23:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1W9rOv-0002Vq-3b
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 07:23:33 +0000
Received: from [85.158.139.211:59537] by server-9.bemta-5.messagelabs.com id
	2E/C1-11237-472FDE25; Sun, 02 Feb 2014 07:23:32 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391325809!1095306!1
X-Originating-IP: [209.85.160.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17245 invoked from network); 2 Feb 2014 07:23:31 -0000
Received: from mail-pb0-f48.google.com (HELO mail-pb0-f48.google.com)
	(209.85.160.48)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 07:23:31 -0000
Received: by mail-pb0-f48.google.com with SMTP id rr13so5917679pbb.21
	for <xen-devel@lists.xen.org>; Sat, 01 Feb 2014 23:23:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=Nm2flcfaYs0CnFcXPtNDiRtEgQFc/ZLwcMu5gfZ4eaE=;
	b=buBivZiEY9vNeV7qBHBYJIAp7LjlkcYYP/yS/VqV4Fg7dreI/Fnte62w3msEHEOg8J
	Mai36+ds0/TCDqiqXQComRos23vd8RJBks1cZalSlOGLF9aRT2qFy92uUryyulbhuxXs
	eJ9D0bEAeMhDW5P4jS9J+IFfCb7v9Hs1GrQdilb6H3P44SwSUT6f0b+MDVn8Lvkhiehc
	rqPy4+kLhupquYbm0MNKcSQk3UrhuHBvvmi3j7pyhI/joAj7e596VejHIlZlG6XjN2b7
	HpKWFaFjJ10JBPdjgftUduBX7mJq73rfhmpKuRvcWud6TmTsPcU8RZEBiuyUltxOBAyM
	G5PA==
X-Received: by 10.68.204.161 with SMTP id kz1mr58940pbc.156.1391325809271;
	Sat, 01 Feb 2014 23:23:29 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id
	lh13sm112275319pab.4.2014.02.01.23.23.26 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Sat, 01 Feb 2014 23:23:27 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Sat, 01 Feb 2014 23:23:25 -0800
Date: Sat, 1 Feb 2014 23:23:25 -0800
From: Matt Wilson <msw@linux.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>
	<20130422.154139.1046488577191797292.davem@davemloft.net>
	<20130422195335.GA30755@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20130422195335.GA30755@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Stefan Bader <stefan.bader@canonical.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > Date: Mon, 22 Apr 2013 13:20:39 +0100
> > 
> > > This series is now rebased onto net-next.
> > > 
> > > We would also like to ask you to queue it for stable-ish tree. I can do the
> > > backport if necessary.
> > 
> > All applied, but this was a disaster.
> > 
> 
> Thanks, I misunderstood the workflow.
> 
> > If you want bug fixes propagated into -stable you submit them to 'net'
> > from the beginning.
> > 
> > There is no other method by which to do this.
> > 
> > By merging all of these changes to net-next, you will now have to get
> > them accepted again into 'net', and then (and only then) can you make
> > a request for -stable inclusion.
> > 
> 
> Understood. Will submit them against 'net' later.

Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
to account for max TCP header) at all related to the "skb rides the
rocket" related TX packet drops reported against 3.8.x kernels?

https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474

It seems like there are still some outstanding bugs in various -stable
releases.

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 08:20:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 08:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9sHV-0004Id-KF; Sun, 02 Feb 2014 08:19:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W9sHT-0004IY-Px
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 08:19:56 +0000
Received: from [85.158.143.35:11057] by server-3.bemta-4.messagelabs.com id
	C5/02-11539-BAFFDE25; Sun, 02 Feb 2014 08:19:55 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391329194!2485663!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8027 invoked from network); 2 Feb 2014 08:19:54 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Feb 2014 08:19:54 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391329194; l=1042;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=2AHMtK1BiNpkA2txv9JlNb/UI6I=;
	b=pCkIvHhdWfUprIHI0zZ6u7pyZ514ezNsmm4VgB7CkbEc1641mpBfKk1UWDEG/FDUvG9
	RGsMx/Febj+MayvHSJYBzP5MfNGuTdLxIviqXcDUY4cbCHgm+e8FiIfzpsb64QmH5WQTI
	FOiuQaOXCsgK/+s4aREYFP287XovlS7RAnw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id i06b14q128JsImK
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Sun, 2 Feb 2014 09:19:54 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id A7D9D50269; Sun,  2 Feb 2014 09:19:53 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Sun,  2 Feb 2014 09:19:51 +0100
Message-Id: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] tools/gdbsx: define format strings for aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

gx_main.c: In function '_do_qRcmd_req':
gx_main.c:119:13: error: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64_t' [-Werror=format=]
             sprintf(buf1, "pgd3val set to: "XGF64"\n", pgd3val);
             ^
gx_main.c:121:13: error: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64_t' [-Werror=format=]
             sprintf(buf1, "Invalid  pgd3val "XGF64"\n", pgd3val);

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/debugger/gdbsx/xg/xg_public.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/debugger/gdbsx/xg/xg_public.h b/tools/debugger/gdbsx/xg/xg_public.h
index 6236d08..046e21b 100644
--- a/tools/debugger/gdbsx/xg/xg_public.h
+++ b/tools/debugger/gdbsx/xg/xg_public.h
@@ -23,7 +23,7 @@
 #define XGTRC1(...)  \
            do {(xgtrc_on==2) ? (xgtrc(__FUNCTION__,__VA_ARGS__)):0;} while (0)
 
-#if defined(__x86_64__)
+#if defined(__x86_64__) || defined(__aarch64__)
     #define  XGFM64  "%lx"
     #define  XGF64   "%016lx"
 #else

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 08:57:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 08:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9srC-00052a-7W; Sun, 02 Feb 2014 08:56:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1W9soT-00051t-Ei
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 08:54:01 +0000
Received: from [193.109.254.147:23036] by server-13.bemta-14.messagelabs.com
	id BE/4B-01226-8A70EE25; Sun, 02 Feb 2014 08:54:00 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391331237!1395676!1
X-Originating-IP: [98.138.229.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6355 invoked from network); 2 Feb 2014 08:53:59 -0000
Received: from nm35.bullet.mail.ne1.yahoo.com (HELO
	nm35.bullet.mail.ne1.yahoo.com) (98.138.229.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Feb 2014 08:53:59 -0000
Received: from [127.0.0.1] by nm35.bullet.mail.ne1.yahoo.com with NNFMP;
	02 Feb 2014 08:53:56 -0000
Received: from [98.138.101.128] by nm35.bullet.mail.ne1.yahoo.com with NNFMP;
	02 Feb 2014 08:51:01 -0000
Received: from [98.139.212.153] by tm16.bullet.mail.ne1.yahoo.com with NNFMP;
	02 Feb 2014 08:51:01 -0000
Received: from [98.139.212.250] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	02 Feb 2014 08:51:01 -0000
Received: from [127.0.0.1] by omp1059.mail.bf1.yahoo.com with NNFMP;
	02 Feb 2014 08:51:01 -0000
X-Yahoo-Newman-Property: ymail-4
X-Yahoo-Newman-Id: 590922.22631.bm@omp1059.mail.bf1.yahoo.com
Received: (qmail 87592 invoked by uid 60001); 2 Feb 2014 08:51:01 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391331061; bh=AbDSyuOM1ZGNPN7dEt62EgX/zERl5XkLh9TPge6ogZY=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=gP/A4o3arqzal0lcMNis49dauO2YGzIwS4RWxM2LwCsDONGUDud/QGdiiYFSKRH1h88Vz/VkHbaurh8ul3WOuw2+OiTEWHlGaPlnhvkhQeb0g+aEsZfUO4c6hx8lgNuoCloys67Klae8AmcAHSJwb++hVrxRErH4pAwZYSgBYPk=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:MIME-Version:Content-Type;
	b=y1iBDgpf0x7phAZq9XYdsO2gMnke5hb+RCKXk3mGDxoasr+zmnlAi5gxsADsiRDgKILS/IxQistxb1SGxTr23PF58NeFUNeYAAeDYU4+Me0ZSEuh5meHZcKQqOpvOMQL/i8lhd95f3w/b/R7Y902HO1MNZklFMfM0vfU5iht3Jo=;
X-YMail-OSG: YF8FZLAVM1mnj6rhIhCmIe0ssHQrH6VFWDrrN_k.GQmQMf.
	pFKK5dWgex_x0Pp1uM2Q9P1XnTEdhk8I2GKN05BpZ04bNzkAdRQLZcKonJyL
	qhH_VeA9HTZwxbEuMHW_b8T5LTUf4OwVK1ecbYGLn0ig720qiKZFbIbFaFbN
	LNg4e3LqTvYLITATAZLFrN_gcLsSpHuRCFYiIjqs9iuPYl8CQZGCMRAOgZVD
	6vEx6rcXrtxKh9KItWo.WCCBsdTdLB.HBoeDBoZNXtTJb_aGotVVmG.LqMhi
	PIjsQTOhnU_M.zIEnum8CUNd403RH9_zQpzspNM.Sjo84btnlmAdeDGjOt5f
	It6t6YSLmc4mHu3QAWQkWuAakEe_aQWuB.abe5FGZcaTJJhhI7uumpRPO2wR
	7U.GZNo_FiDVYoQZXtD63iRiwHIsYL4yALmBl.C7mOhpaPwxfQ8zR8r1kUMO
	nuqcHuERmFeq5CUwlC9hb24mCw40DCwRghiXAPbdczxi4OK0bym858kdZpYQ
	6rOiw3UEiV98vgFeCoMby75CSMk48T1TvUvAyIOsV
Received: from [192.227.225.3] by web161802.mail.bf1.yahoo.com via HTTP;
	Sun, 02 Feb 2014 00:51:01 PST
X-Rocket-MIMEInfo: 002.001,
	SGVsbG8sCkkgY2hlY2sgdGhpcyBmdW5jdGlvbiB4Y19kb21haW5fc2F2ZS5jLCBhbmQgYXR0ZW50aW9uIGF0IGxpbmUgMTE4NSwgZnVuY3Rpb24gc25wcmludGYoKSBhbW91bnRzIGl0ZXIsIHNlbnRfdGhpc19pdGVyLCBza2lwX3RoaXNfaXRlciBmb3IgcHJpbnQuIChpbiB4ZW4gNC4xLjIgKQpidXQgaSBkb24ndCBrbm93IHdoZXJlIGluIGFtb3VudHMgcHJpbnQhISEgOi0owqAKQXJlIGV4Y2VwdCBmaWxlIHhlbmQubG9nIHdoZXJlIGlzIGFub3RoZXIgZm9yIHByaW50IGFuZCBkZWJ1Z2dlZD8KwqAKQWRlbCABMAEBAQE-
X-Mailer: YahooMailWebService/0.8.174.629
Message-ID: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
Date: Sun, 2 Feb 2014 00:51:01 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Kai Huang <dev.kai.huang@gmail.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen <xen-devel@lists.xen.org>
MIME-Version: 1.0
X-Mailman-Approved-At: Sun, 02 Feb 2014 08:56:48 +0000
Subject: [Xen-devel] function snprintf() in xen_save_domain.c for debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2818776504986446499=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2818776504986446499==
Content-Type: multipart/alternative; boundary="-2096837515-1912454375-1391331061=:24599"

---2096837515-1912454375-1391331061=:24599
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Hello,
I am looking at xc_domain_save.c (in Xen 4.1.2). At line 1185, snprintf()
formats iter, sent_this_iter and skip_this_iter for printing, but I cannot
find where that output actually ends up. Apart from the file xend.log, is
there another place where it is printed that I can use for debugging?

Adel Amani
M.Sc. Candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir
---2096837515-1912454375-1391331061=:24599--


--===============2818776504986446499==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2818776504986446499==--


From xen-devel-bounces@lists.xen.org Sun Feb 02 08:57:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 08:57:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9srB-00052S-SF; Sun, 02 Feb 2014 08:56:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1W9pRM-0007ne-6p
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 05:17:56 +0000
Received: from [85.158.139.211:4607] by server-2.bemta-5.messagelabs.com id
	87/A7-23037-305DDE25; Sun, 02 Feb 2014 05:17:55 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391318273!1074385!1
X-Originating-IP: [72.30.239.153]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16852 invoked from network); 2 Feb 2014 05:17:54 -0000
Received: from nm39-vm9.bullet.mail.bf1.yahoo.com (HELO
	nm39-vm9.bullet.mail.bf1.yahoo.com) (72.30.239.153)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Feb 2014 05:17:54 -0000
Received: from [66.196.81.170] by nm39.bullet.mail.bf1.yahoo.com with NNFMP;
	02 Feb 2014 05:17:53 -0000
Received: from [98.139.211.203] by tm16.bullet.mail.bf1.yahoo.com with NNFMP;
	02 Feb 2014 05:17:53 -0000
Received: from [127.0.0.1] by smtp212.mail.bf1.yahoo.com with NNFMP;
	02 Feb 2014 05:17:53 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391318273; bh=1NY0Dyi0sar0luAlvL5FZmC/MugxXnXzGA2p1uCSoGo=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:From:To:Cc:References:In-Reply-To:Subject:Date:Message-ID:MIME-Version:Content-Type:Content-Transfer-Encoding:X-Mailer:Thread-Index:Content-Language;
	b=loFfarQtLEZqFW+fBHzo5jYpevscsF4U0viEdtmIFI0H8RJhURSZFx2jUQeIBpjEdgsjDV0e3ZX0YzrcbQYPsSzY+t/rmUUgOxNxUHg+YutsG9fSGUwPT/p6URwMgRX2ciVFJW4t3CpK9BM7YllMv0zCk/uwImf5hEO9ipbBS0o=
X-Yahoo-Newman-Id: 56007.19763.bm@smtp212.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: lCjxo5YVM1k9kgwWBGO0qMFA7551d7U5b2tM8vYzMMxrtrZ
	QxYz4fVXcXvar84j7i9e9qpw9jCFYqPx.VrU9rdVFPuQTkZtZuSm.TkAGkkv
	3AA.m9b5G1LMGxGpGHb.U8jHZt61.SjYna7yHWhV9S_ObEglBiX.r6k0x4qf
	czd0LOdGu4u3q3DJwuGXViAPWjq8Fmix8SS04_p0Pupg17CRljVfa_Szt7Yj
	ptq7.hByfovBa4lNfoJa01FGY0sJiG29pBJGhLuiLrDYNiBddZwYhMcxCQ4F
	fc7r3PmUTuoLx3MI_NvuAti7R24K7qImil7vJXE0y3AUZaGroPKjluzEayBu
	kis4WRTamCUvHtPspe2m6sy5xRdlINVczLDMmckMtpXZLJO3FVrBBSRiHnhq
	_p6kgNMVBo705PXv8RLszOOBElkzM_UKgFmgXQwcJj0k7OjJOR8thBAkQ01I
	d7HV3UN6ar1VWOkRBUcptqVNryoZiNq0inXKwuoiFafM5_uZL5GrQeJ8Gbjj
	1jWTheTVY1DQgaoxWW_9zUlrQJSo02zRfCglLu.k4_yZcKukdOTri_j48.7F
	2enNnXOvjj5ldlQw9H2m9SuOXY8vfS87w_rWNRHm6vvIU4eOtvsBW3vq1QTp
	kJwlWSL_YjwCpw5_kRB9nhsnHybzjWkoJxMG0spI6CZ7FieDIyLKm
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from phobos (ehouby@71.196.207.87 with plain
	[98.139.211.125])
	by smtp212.mail.bf1.yahoo.com with SMTP; 01 Feb 2014 21:17:52 -0800 PST
From: "Eric Houby" <ehouby@yahoo.com>
To: "'M A Young'" <m.a.young@durham.ac.uk>,
	"'Dario Faggioli'" <dario.faggioli@citrix.com>
References: <CAHehzX1O3y1iEXfhHKXozo70bWLTfcy08qMJdW15UHbBA0fjcA@mail.gmail.com>
	<009c01cf1ece$2739a820$75acf860$@yahoo.com>
	<1391209094.13572.50.camel@Abyss>
	<alpine.DEB.2.00.1402011938300.21894@procyon.dur.ac.uk>
In-Reply-To: <alpine.DEB.2.00.1402011938300.21894@procyon.dur.ac.uk>
Date: Sat, 1 Feb 2014 22:17:55 -0700
Message-ID: <020f01cf1fd6$21bd24e0$65376ea0$@yahoo.com>
MIME-Version: 1.0
X-Mailer: Microsoft Outlook 15.0
Thread-Index: AQFjy0dgAXkVho5qIRcBav+bbzk7pQL9gb5iARhwTyIB7O92SptH/liQ
Content-Language: en-us
X-Mailman-Approved-At: Sun, 02 Feb 2014 08:56:48 +0000
Cc: xen-users@lists.xen.org, 'xen' <xen@lists.fedoraproject.org>,
	'Russ Pavlicek' <russell.pavlicek@xenproject.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [Xen-users] REMINDER: Feb 3 is Xen Project Test Day
	for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


> There is an rc3 build at
> http://koji.fedoraproject.org/koji/taskinfo?taskID=6479953
> 
>  	Michael Young


Thanks.  I am playing with the build now.

-Eric


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 09:56:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 09:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9tn3-0006Yi-IO; Sun, 02 Feb 2014 09:56:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9tn2-0006Yd-OK
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 09:56:37 +0000
Received: from [85.158.137.68:39630] by server-6.bemta-3.messagelabs.com id
	21/55-09180-3561EE25; Sun, 02 Feb 2014 09:56:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391334992!9165282!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19519 invoked from network); 2 Feb 2014 09:56:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 09:56:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="97041322"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 02 Feb 2014 09:56:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 2 Feb 2014 04:56:30 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9tmw-0005A1-G9;
	Sun, 02 Feb 2014 09:56:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9tmw-0005CI-6U;
	Sun, 02 Feb 2014 09:56:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24715-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Feb 2014 09:56:30 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24715: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24715 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24715/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 8 guest-saverestore fail pass in 24703
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 24703
 test-amd64-i386-freebsd10-amd64  4 xen-install     fail in 24703 pass in 24715
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24703 pass in 24702-bisect

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 12 guest-localmigrate/x10 fail in 24702 blocked in 24715

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24703 never pass

version targeted for testing:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488
baseline version:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 10:01:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 10:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9trA-0006vi-8f; Sun, 02 Feb 2014 10:00:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W9tr9-0006vc-70
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 10:00:51 +0000
Received: from [85.158.137.68:55727] by server-15.bemta-3.messagelabs.com id
	B2/6F-19263-2571EE25; Sun, 02 Feb 2014 10:00:50 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391335249!11661657!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30883 invoked from network); 2 Feb 2014 10:00:49 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.160)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Feb 2014 10:00:49 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391335249; l=760;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=nbXYanwh3qziJw453CswID1fDzM=;
	b=Fjsdqu7DuGPPoobkEpIhc98nurOSdMvkczB9ATV9RDe8cr4jebvCboYQ6CpalG1fAsR
	424uxAaUgcNchrtb7GIK+/NnErFQ5h+YO4sbQ0KbZoz+R7uTdXyICZxoj93d0ABVmWiph
	+8wFb0WWCUA8KWwJnn3MG2pEvV6jhI3QPdc=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id I034f8q12A0jKJp
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Sun, 2 Feb 2014 11:00:45 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id D7B4E50269; Sun,  2 Feb 2014 11:00:44 +0100 (CET)
Date: Sun, 2 Feb 2014 11:00:44 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140202100044.GA5898@aepfle.de>
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Kai Huang <dev.kai.huang@gmail.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Feb 02, Adel Amani wrote:

> I checked the function in xc_domain_save.c and looked at line 1185,
> where snprintf() formats the amounts iter, sent_this_iter and
> skip_this_iter for printing (in Xen 4.1.2), but I don't know where
> these amounts are printed! :-(
> Apart from the file xend.log, is there anywhere else to print and debug?

The output in 4.1 is sent to xend.log, but only if a logger function is
registered. Follow the code from tools/xcutils/xc_save.c:main to the
actual xc_report_progress_start call in tools/libxc/xc_domain_save.c;
you will note that xc_interface_open is called without a logger, which
means no output is printed.

For an example of what a logger can look like, see the xc_interface_open
call in tools/xenpaging/xenpaging.c.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 10:25:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 10:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9uEH-0007WJ-MJ; Sun, 02 Feb 2014 10:24:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W9uEG-0007WC-DU
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 10:24:44 +0000
Received: from [193.109.254.147:31004] by server-6.bemta-14.messagelabs.com id
	84/DD-03396-BEC1EE25; Sun, 02 Feb 2014 10:24:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391336681!1423133!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30144 invoked from network); 2 Feb 2014 10:24:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 10:24:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98984985"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 10:24:12 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Sun, 2 Feb 2014
	05:24:11 -0500
Message-ID: <1391336650.15093.23.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Sun, 2 Feb 2014 10:24:10 +0000
In-Reply-To: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 09:19 +0100, Olaf Hering wrote:
> gx_main.c: In function '_do_qRcmd_req':
> gx_main.c:119:13: error: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64_t' [-Werror=format=]
>              sprintf(buf1, "pgd3val set to: "XGF64"\n", pgd3val);
>              ^
> gx_main.c:121:13: error: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64_t' [-Werror=format=]
>              sprintf(buf1, "Invalid  pgd3val "XGF64"\n", pgd3val);
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

I suppose there is no harm in this, but is there any chance that gdbsx
would actually work on arm64 without significant actual work going into
it?

Also, you forgot to CC the gdbsx maintainer.

(why doesn't this code use stdint.h?)

> ---
>  tools/debugger/gdbsx/xg/xg_public.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_public.h b/tools/debugger/gdbsx/xg/xg_public.h
> index 6236d08..046e21b 100644
> --- a/tools/debugger/gdbsx/xg/xg_public.h
> +++ b/tools/debugger/gdbsx/xg/xg_public.h
> @@ -23,7 +23,7 @@
>  #define XGTRC1(...)  \
>             do {(xgtrc_on==2) ? (xgtrc(__FUNCTION__,__VA_ARGS__)):0;} while (0)
>  
> -#if defined(__x86_64__)
> +#if defined(__x86_64__) || defined(__aarch64__)
>      #define  XGFM64  "%lx"
>      #define  XGF64   "%016lx"
>  #else



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 09:19 +0100, Olaf Hering wrote:
> gx_main.c: In function '_do_qRcmd_req':
> gx_main.c:119:13: error: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64_t' [-Werror=format=]
>              sprintf(buf1, "pgd3val set to: "XGF64"\n", pgd3val);
>              ^
> gx_main.c:121:13: error: format '%llx' expects argument of type 'long long unsigned int', but argument 3 has type 'uint64_t' [-Werror=format=]
>              sprintf(buf1, "Invalid  pgd3val "XGF64"\n", pgd3val);
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

I suppose there is no harm in this, but is there any chance that gdbsx
would actually work on arm64 without significant actual work going into
it?

Also, you forgot to CC the gdbsx maintainer.

(why doesn't this code use stdint.h?)

> ---
>  tools/debugger/gdbsx/xg/xg_public.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/debugger/gdbsx/xg/xg_public.h b/tools/debugger/gdbsx/xg/xg_public.h
> index 6236d08..046e21b 100644
> --- a/tools/debugger/gdbsx/xg/xg_public.h
> +++ b/tools/debugger/gdbsx/xg/xg_public.h
> @@ -23,7 +23,7 @@
>  #define XGTRC1(...)  \
>             do {(xgtrc_on==2) ? (xgtrc(__FUNCTION__,__VA_ARGS__)):0;} while (0)
>  
> -#if defined(__x86_64__)
> +#if defined(__x86_64__) || defined(__aarch64__)
>      #define  XGFM64  "%lx"
>      #define  XGF64   "%016lx"
>  #else



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 10:29:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 10:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9uIM-0007pH-Bw; Sun, 02 Feb 2014 10:28:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1W9uIL-0007pC-IQ
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 10:28:57 +0000
Received: from [193.109.254.147:49901] by server-16.bemta-14.messagelabs.com
	id 22/3C-21945-8ED1EE25; Sun, 02 Feb 2014 10:28:56 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391336936!1431112!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31588 invoked from network); 2 Feb 2014 10:28:56 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Feb 2014 10:28:56 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391336936; l=378;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=XKBVReqve2zjYD9R32N0K2+wwgs=;
	b=W6jaf+EM2Rc8EdFHYNgxx/BkA5UcizgZXROft6W7msRmfG+Bn/nWkzsZX53Zsn85ZAE
	ppDzVffgOx6aaTDp//WV2+rnx6BvOafOo2zFGDhqMXvxWc+V66wmJ/Lt8mPefIs630GWU
	GfXUkTuVy3aYaePAEoWhpcDpPU0zKC/suNE=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id z05b7dq12ASrHb5
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Sun, 2 Feb 2014 11:28:53 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id EE74150269; Sun,  2 Feb 2014 11:28:52 +0100 (CET)
Date: Sun, 2 Feb 2014 11:28:52 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140202102852.GA9984@aepfle.de>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
	<1391336650.15093.23.camel@hastur.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391336650.15093.23.camel@hastur.hellion.org.uk>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Feb 02, Ian Campbell wrote:

> I suppose there is no harm in this, but is there any chance that gdbsx
> would actually work on arm64 without significant actual work going into
> it?

I have no idea. But I just noticed it's built only due to this line in
our xen.spec:

make -C tools/debugger/gdbsx

Perhaps this should be guarded by a %ifarch x86_64.
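[Such a guard in the spec file might look like this (a sketch; the surrounding xen.spec context is an assumption):]

```
%ifarch x86_64
make -C tools/debugger/gdbsx
%endif
```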

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 10:30:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 10:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9uJV-0007ud-Ra; Sun, 02 Feb 2014 10:30:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W9uJU-0007uT-MC
	for xen-devel@lists.xenproject.org; Sun, 02 Feb 2014 10:30:09 +0000
Received: from [85.158.139.211:14235] by server-10.bemta-5.messagelabs.com id
	6F/6C-08578-E2E1EE25; Sun, 02 Feb 2014 10:30:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391337003!1090894!1
X-Originating-IP: [209.85.160.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26572 invoked from network); 2 Feb 2014 10:30:05 -0000
Received: from mail-pb0-f52.google.com (HELO mail-pb0-f52.google.com)
	(209.85.160.52)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 10:30:05 -0000
Received: by mail-pb0-f52.google.com with SMTP id jt11so6032836pbb.39
	for <xen-devel@lists.xenproject.org>;
	Sun, 02 Feb 2014 02:30:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=sJFdTATWuFx3yuen15noFJeXcD0MPfksxhol8HHpDqQ=;
	b=He0F7bFNnwxX8Q66EbIOolIUaIvgjmB5Gru3RA1GEZEVaAfjhG41O9MInJ2YaVyd7X
	429WNkhTH5xzgus43cEAtbYNUcWq3uYgVCOPY7pWqMAy8xZHt88ckaZbUgyt4I4CoY/v
	l5+mJ2CvDqSRaxTnzN8CkSLGQdNuGiTwjV6iJdol652uQdTbofSMcW4ffNpoLsEuGfxe
	hN1ITul/AO2wTR5ZUFWzNJ3mSELvUnIMfenPKoCSF8s00dX5NCBu4bVUDUutyXo0L/1P
	mJCXjTiC0F/pQHWSE075C4MXCFAU6iihOJg7iPkxU3/AT6iqgxBXYT3gkThmuiUd36GV
	bgXQ==
X-Gm-Message-State: ALoCoQkbqVHUrEQsgKir461D5pPZ4uhhDjfKz7Z9x7zpulSzCdV6xT2TMHoK5v33eke2Y9t6fSvm
X-Received: by 10.66.188.203 with SMTP id gc11mr30722087pac.63.1391337003160; 
	Sun, 02 Feb 2014 02:30:03 -0800 (PST)
Received: from localhost.localdomain ([2001:67c:1810:f054:2677:3ff:fe14:448])
	by mx.google.com with ESMTPSA id
	z10sm115701577pas.6.2014.02.02.02.30.00 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 02 Feb 2014 02:30:02 -0800 (PST)
Message-ID: <52EE1E26.2040308@linaro.org>
Date: Sun, 02 Feb 2014 10:29:58 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 23/01/14 23:34, Stefano Stabellini wrote:
> On Thu, 23 Jan 2014, Zoltan Kiss wrote:
>> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
>> for blkback and future netback patches it just causes a lock contention, as
>> those pages never go to userspace. Therefore this series does the following:
>> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>>    parameter m2p_override
>> - based on m2p_override either they follow the original behaviour, or just set
>>    the private flag and call set_phys_to_machine
>> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>>    m2p_override false
>> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
>>
>> It also removes a stray space from page.h and changes ret to 0 if
>> XENFEAT_auto_translated_physmap, as that is the only possible return value
>> there.
>>
>> v2:
>> - move the storing of the old mfn in page->index to gnttab_map_refs
>> - move the function header update to a separate patch
>>
>> v3:
>> - a new approach to retain old behaviour where it is needed
>> - squash the patches into one
>>
>> v4:
>> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
>> - clear page->private before doing anything with the page, so m2p_find_override
>>    won't race with this
>>
>> v5:
>> - change return value handling in __gnttab_[un]map_refs
>> - remove a stray space in page.h
>> - add detail why ret = 0 now at some places
>>
>> v6:
>> - don't pass pfn to m2p* functions, just get it locally
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>> Suggested-by: David Vrabel <david.vrabel@citrix.com>
>
> Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Hello,

This patch is breaking Linux compilation on ARM:

drivers/xen/grant-table.c: In function '__gnttab_map_refs':
drivers/xen/grant-table.c:989:3: error: implicit declaration of function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
   if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
   ^
drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
drivers/xen/grant-table.c:1054:3: error: implicit declaration of function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
   mfn = get_phys_to_machine(pfn);
   ^
drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT' undeclared (first use in this function)
   if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
                                           ^
drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is reported only once for each function it appears in
drivers/xen/grant-table.c:1068:9: error: too many arguments to function 'm2p_remove_override'
         mfn);
         ^
In file included from include/xen/page.h:4:0,
                 from drivers/xen/grant-table.c:48:
/local/home/julien/works/midway/linux/arch/arm/include/asm/xen/page.h:106:19: note: declared here
 static inline int m2p_remove_override(struct page *page, bool clear_pte)
                   ^
cc1: some warnings being treated as errors
cc1: some warnings being treated as errors

>
>
>
>>   arch/x86/include/asm/xen/page.h     |    5 +-
>>   arch/x86/xen/p2m.c                  |   17 +------
>>   drivers/block/xen-blkback/blkback.c |   15 +++---
>>   drivers/xen/gntdev.c                |   13 +++--
>>   drivers/xen/grant-table.c           |   89 +++++++++++++++++++++++++++++++-----
>>   include/xen/grant_table.h           |    8 +++-
>>   6 files changed, 101 insertions(+), 46 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
>> index b913915..ce47243 100644
>> --- a/arch/x86/include/asm/xen/page.h
>> +++ b/arch/x86/include/asm/xen/page.h
>> @@ -52,7 +52,8 @@ extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>>   extern int m2p_add_override(unsigned long mfn, struct page *page,
>>   			    struct gnttab_map_grant_ref *kmap_op);
>>   extern int m2p_remove_override(struct page *page,
>> -				struct gnttab_map_grant_ref *kmap_op);
>> +			       struct gnttab_map_grant_ref *kmap_op,
>> +			       unsigned long mfn);
>>   extern struct page *m2p_find_override(unsigned long mfn);
>>   extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>>
>> @@ -121,7 +122,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>>   		pfn = m2p_find_override_pfn(mfn, ~0);
>>   	}
>>
>> -	/*
>> +	/*
>>   	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>>   	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>>   	 * valid entry for it.
>> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
>> index 2ae8699..bd4724b 100644
>> --- a/arch/x86/xen/p2m.c
>> +++ b/arch/x86/xen/p2m.c
>> @@ -888,13 +888,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>>   					"m2p_add_override: pfn %lx not mapped", pfn))
>>   			return -EINVAL;
>>   	}
>> -	WARN_ON(PagePrivate(page));
>> -	SetPagePrivate(page);
>> -	set_page_private(page, mfn);
>> -	page->index = pfn_to_mfn(pfn);
>> -
>> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>> -		return -ENOMEM;
>>
>>   	if (kmap_op != NULL) {
>>   		if (!PageHighMem(page)) {
>> @@ -933,19 +926,16 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>>   }
>>   EXPORT_SYMBOL_GPL(m2p_add_override);
>>   int m2p_remove_override(struct page *page,
>> -		struct gnttab_map_grant_ref *kmap_op)
>> +			struct gnttab_map_grant_ref *kmap_op,
>> +			unsigned long mfn)
>>   {
>>   	unsigned long flags;
>> -	unsigned long mfn;
>>   	unsigned long pfn;
>>   	unsigned long uninitialized_var(address);
>>   	unsigned level;
>>   	pte_t *ptep = NULL;
>>
>>   	pfn = page_to_pfn(page);
>> -	mfn = get_phys_to_machine(pfn);
>> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
>> -		return -EINVAL;
>>
>>   	if (!PageHighMem(page)) {
>>   		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>> @@ -959,10 +949,7 @@ int m2p_remove_override(struct page *page,
>>   	spin_lock_irqsave(&m2p_override_lock, flags);
>>   	list_del(&page->lru);
>>   	spin_unlock_irqrestore(&m2p_override_lock, flags);
>> -	WARN_ON(!PagePrivate(page));
>> -	ClearPagePrivate(page);
>>
>> -	set_phys_to_machine(pfn, page->index);
>>   	if (kmap_op != NULL) {
>>   		if (!PageHighMem(page)) {
>>   			struct multicall_space mcs;
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index 6620b73..875025f 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>>
>>   		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>>   			!rb_next(&persistent_gnt->node)) {
>> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
>> -				segs_to_unmap);
>> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>>   			BUG_ON(ret);
>>   			put_free_pages(blkif, pages, segs_to_unmap);
>>   			segs_to_unmap = 0;
>> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>>   		pages[segs_to_unmap] = persistent_gnt->page;
>>
>>   		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
>> -				segs_to_unmap);
>> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>>   			BUG_ON(ret);
>>   			put_free_pages(blkif, pages, segs_to_unmap);
>>   			segs_to_unmap = 0;
>> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>>   		kfree(persistent_gnt);
>>   	}
>>   	if (segs_to_unmap > 0) {
>> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
>> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>>   		BUG_ON(ret);
>>   		put_free_pages(blkif, pages, segs_to_unmap);
>>   	}
>> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>>   				    GNTMAP_host_map, pages[i]->handle);
>>   		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>>   		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
>> -			                        invcount);
>> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>>   			BUG_ON(ret);
>>   			put_free_pages(blkif, unmap_pages, invcount);
>>   			invcount = 0;
>>   		}
>>   	}
>>   	if (invcount) {
>> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
>> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>>   		BUG_ON(ret);
>>   		put_free_pages(blkif, unmap_pages, invcount);
>>   	}
>> @@ -740,7 +737,7 @@ again:
>>   	}
>>
>>   	if (segs_to_map) {
>> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
>> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>>   		BUG_ON(ret);
>>   	}
>>
>> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
>> index e41c79c..e652c0e 100644
>> --- a/drivers/xen/gntdev.c
>> +++ b/drivers/xen/gntdev.c
>> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>>   	}
>>
>>   	pr_debug("map %d+%d\n", map->index, map->count);
>> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
>> -			map->pages, map->count);
>> +	err = gnttab_map_refs_userspace(map->map_ops,
>> +					use_ptemod ? map->kmap_ops : NULL,
>> +					map->pages,
>> +					map->count);
>>   	if (err)
>>   		return err;
>>
>> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>>   		}
>>   	}
>>
>> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
>> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
>> -			pages);
>> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
>> +					  use_ptemod ? map->kmap_ops + offset : NULL,
>> +					  map->pages + offset,
>> +					  pages);
>>   	if (err)
>>   		return err;
>>
>> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>> index aa846a4..e4ddfeb 100644
>> --- a/drivers/xen/grant-table.c
>> +++ b/drivers/xen/grant-table.c
>> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>>   }
>>   EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>>
>> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   		    struct gnttab_map_grant_ref *kmap_ops,
>> -		    struct page **pages, unsigned int count)
>> +		    struct page **pages, unsigned int count,
>> +		    bool m2p_override)
>>   {
>>   	int i, ret;
>>   	bool lazy = false;
>>   	pte_t *pte;
>> -	unsigned long mfn;
>> +	unsigned long mfn, pfn;
>>
>> +	BUG_ON(kmap_ops && !m2p_override);
>>   	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>>   	if (ret)
>>   		return ret;
>> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>>   					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>>   		}
>> -		return ret;
>> +		return 0;
>>   	}
>>
>> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>> +	if (m2p_override &&
>> +	    !in_interrupt() &&
>> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>>   		arch_enter_lazy_mmu_mode();
>>   		lazy = true;
>>   	}
>> @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   		} else {
>>   			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>>   		}
>> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		pfn = page_to_pfn(pages[i]);
>> +
>> +		WARN_ON(PagePrivate(pages[i]));
>> +		SetPagePrivate(pages[i]);
>> +		set_page_private(pages[i], mfn);
>> +
>> +		pages[i]->index = pfn_to_mfn(pfn);
>> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>> +		if (m2p_override)
>> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> +					       &kmap_ops[i] : NULL);
>>   		if (ret)
>>   			goto out;
>>   	}
>> @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>
>>   	return ret;
>>   }
>> +
>> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>> +		    struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>> +}
>>   EXPORT_SYMBOL_GPL(gnttab_map_refs);
>>
>> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>> +			      struct gnttab_map_grant_ref *kmap_ops,
>> +			      struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
>> +}
>> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
>> +
>> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>   		      struct gnttab_map_grant_ref *kmap_ops,
>> -		      struct page **pages, unsigned int count)
>> +		      struct page **pages, unsigned int count,
>> +		      bool m2p_override)
>>   {
>>   	int i, ret;
>>   	bool lazy = false;
>> +	unsigned long pfn, mfn;
>>
>> +	BUG_ON(kmap_ops && !m2p_override);
>>   	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>>   	if (ret)
>>   		return ret;
>> @@ -958,17 +991,33 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>   			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>>   					INVALID_P2M_ENTRY);
>>   		}
>> -		return ret;
>> +		return 0;
>>   	}
>>
>> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>> +	if (m2p_override &&
>> +	    !in_interrupt() &&
>> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>>   		arch_enter_lazy_mmu_mode();
>>   		lazy = true;
>>   	}
>>
>>   	for (i = 0; i < count; i++) {
>> -		ret = m2p_remove_override(pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		pfn = page_to_pfn(pages[i]);
>> +		mfn = get_phys_to_machine(pfn);
>> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>> +			ret = -EINVAL;
>> +			goto out;
>> +		}
>> +
>> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
>> +		WARN_ON(!PagePrivate(pages[i]));
>> +		ClearPagePrivate(pages[i]);
>> +		set_phys_to_machine(pfn, pages[i]->index);
>> +		if (m2p_override)
>> +			ret = m2p_remove_override(pages[i],
>> +						  kmap_ops ?
>> +						   &kmap_ops[i] : NULL,
>> +						  mfn);
>>   		if (ret)
>>   			goto out;
>>   	}
>> @@ -979,8 +1028,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>
>>   	return ret;
>>   }
>> +
>> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
>> +		    struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>> +}
>>   EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>>
>> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
>> +				struct gnttab_map_grant_ref *kmap_ops,
>> +				struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
>> +}
>> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
>> +
>>   static unsigned nr_status_frames(unsigned nr_grant_frames)
>>   {
>>   	BUG_ON(grefs_per_grant_frame == 0);
>> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
>> index 694dcaf..9a919b1 100644
>> --- a/include/xen/grant_table.h
>> +++ b/include/xen/grant_table.h
>> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>>   #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>>   =

>>   int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>> -		    struct gnttab_map_grant_ref *kmap_ops,
>>   		    struct page **pages, unsigned int count);
>> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>> +			      struct gnttab_map_grant_ref *kmap_ops,
>> +			      struct page **pages, unsigned int count);
>>   int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>> -		      struct gnttab_map_grant_ref *kunmap_ops,
>>   		      struct page **pages, unsigned int count);
>> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
>> +				struct gnttab_map_grant_ref *kunmap_ops,
>> +				struct page **pages, unsigned int count);
>>
>>   /* Perform a batch of grant map/copy operations. Retry every batch slot
>>    * for which the hypervisor returns GNTST_eagain. This is typically due
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 10:30:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 10:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9uJV-0007ud-Ra; Sun, 02 Feb 2014 10:30:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1W9uJU-0007uT-MC
	for xen-devel@lists.xenproject.org; Sun, 02 Feb 2014 10:30:09 +0000
Received: from [85.158.139.211:14235] by server-10.bemta-5.messagelabs.com id
	6F/6C-08578-E2E1EE25; Sun, 02 Feb 2014 10:30:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391337003!1090894!1
X-Originating-IP: [209.85.160.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26572 invoked from network); 2 Feb 2014 10:30:05 -0000
Received: from mail-pb0-f52.google.com (HELO mail-pb0-f52.google.com)
	(209.85.160.52)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 10:30:05 -0000
Received: by mail-pb0-f52.google.com with SMTP id jt11so6032836pbb.39
	for <xen-devel@lists.xenproject.org>;
	Sun, 02 Feb 2014 02:30:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=sJFdTATWuFx3yuen15noFJeXcD0MPfksxhol8HHpDqQ=;
	b=He0F7bFNnwxX8Q66EbIOolIUaIvgjmB5Gru3RA1GEZEVaAfjhG41O9MInJ2YaVyd7X
	429WNkhTH5xzgus43cEAtbYNUcWq3uYgVCOPY7pWqMAy8xZHt88ckaZbUgyt4I4CoY/v
	l5+mJ2CvDqSRaxTnzN8CkSLGQdNuGiTwjV6iJdol652uQdTbofSMcW4ffNpoLsEuGfxe
	hN1ITul/AO2wTR5ZUFWzNJ3mSELvUnIMfenPKoCSF8s00dX5NCBu4bVUDUutyXo0L/1P
	mJCXjTiC0F/pQHWSE075C4MXCFAU6iihOJg7iPkxU3/AT6iqgxBXYT3gkThmuiUd36GV
	bgXQ==
X-Gm-Message-State: ALoCoQkbqVHUrEQsgKir461D5pPZ4uhhDjfKz7Z9x7zpulSzCdV6xT2TMHoK5v33eke2Y9t6fSvm
X-Received: by 10.66.188.203 with SMTP id gc11mr30722087pac.63.1391337003160; 
	Sun, 02 Feb 2014 02:30:03 -0800 (PST)
Received: from localhost.localdomain ([2001:67c:1810:f054:2677:3ff:fe14:448])
	by mx.google.com with ESMTPSA id
	z10sm115701577pas.6.2014.02.02.02.30.00 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 02 Feb 2014 02:30:02 -0800 (PST)
Message-ID: <52EE1E26.2040308@linaro.org>
Date: Sun, 02 Feb 2014 10:29:58 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 23/01/14 23:34, Stefano Stabellini wrote:
> On Thu, 23 Jan 2014, Zoltan Kiss wrote:
>> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
>> for blkback and future netback patches it just causes lock contention, as
>> those pages never go to userspace. Therefore this series does the following:
>> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>>    parameter m2p_override
>> - based on m2p_override either they follow the original behaviour, or just set
>>    the private flag and call set_phys_to_machine
>> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>>    m2p_override false
>> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
>>
>> It also removes a stray space from page.h and changes ret to 0 if
>> XENFEAT_auto_translated_physmap, as that is the only possible return value
>> there.
>>
>> v2:
>> - move the storing of the old mfn in page->index to gnttab_map_refs
>> - move the function header update to a separate patch
>>
>> v3:
>> - a new approach to retain old behaviour where it needed
>> - squash the patches into one
>>
>> v4:
>> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
>> - clear page->private before doing anything with the page, so m2p_find_override
>>    won't race with this
>>
>> v5:
>> - change return value handling in __gnttab_[un]map_refs
>> - remove a stray space in page.h
>> - add detail why ret = 0 now at some places
>>
>> v6:
>> - don't pass pfn to m2p* functions, just get it locally
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>> Suggested-by: David Vrabel <david.vrabel@citrix.com>
>
> Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Hello,

This patch is breaking Linux compilation on ARM:

drivers/xen/grant-table.c: In function ‘__gnttab_map_refs’:
drivers/xen/grant-table.c:989:3: error: implicit declaration of function ‘FOREIGN_FRAME’ [-Werror=implicit-function-declaration]
   if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
   ^
drivers/xen/grant-table.c: In function ‘__gnttab_unmap_refs’:
drivers/xen/grant-table.c:1054:3: error: implicit declaration of function ‘get_phys_to_machine’ [-Werror=implicit-function-declaration]
   mfn = get_phys_to_machine(pfn);
   ^
drivers/xen/grant-table.c:1055:43: error: ‘FOREIGN_FRAME_BIT’ undeclared (first use in this function)
   if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
                                           ^
drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is reported only once for each function it appears in
drivers/xen/grant-table.c:1068:9: error: too many arguments to function ‘m2p_remove_override’
         mfn);
         ^
In file included from include/xen/page.h:4:0,
                 from drivers/xen/grant-table.c:48:
/local/home/julien/works/midway/linux/arch/arm/include/asm/xen/page.h:106:19: note: declared here
 static inline int m2p_remove_override(struct page *page, bool clear_pte)
                   ^
cc1: some warnings being treated as errors

>
>
>
>>   arch/x86/include/asm/xen/page.h     |    5 +-
>>   arch/x86/xen/p2m.c                  |   17 +------
>>   drivers/block/xen-blkback/blkback.c |   15 +++---
>>   drivers/xen/gntdev.c                |   13 +++--
>>   drivers/xen/grant-table.c           |   89 ++++++++++++++++++++++++++++++++-----
>>   include/xen/grant_table.h           |    8 +++-
>>   6 files changed, 101 insertions(+), 46 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
>> index b913915..ce47243 100644
>> --- a/arch/x86/include/asm/xen/page.h
>> +++ b/arch/x86/include/asm/xen/page.h
>> @@ -52,7 +52,8 @@ extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>>   extern int m2p_add_override(unsigned long mfn, struct page *page,
>>   			    struct gnttab_map_grant_ref *kmap_op);
>>   extern int m2p_remove_override(struct page *page,
>> -				struct gnttab_map_grant_ref *kmap_op);
>> +			       struct gnttab_map_grant_ref *kmap_op,
>> +			       unsigned long mfn);
>>   extern struct page *m2p_find_override(unsigned long mfn);
>>   extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>>
>> @@ -121,7 +122,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>>   		pfn = m2p_find_override_pfn(mfn, ~0);
>>   	}
>>
>> -	/*
>> +	/*
>>   	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>>   	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>>   	 * valid entry for it.
>> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
>> index 2ae8699..bd4724b 100644
>> --- a/arch/x86/xen/p2m.c
>> +++ b/arch/x86/xen/p2m.c
>> @@ -888,13 +888,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>>   					"m2p_add_override: pfn %lx not mapped", pfn))
>>   			return -EINVAL;
>>   	}
>> -	WARN_ON(PagePrivate(page));
>> -	SetPagePrivate(page);
>> -	set_page_private(page, mfn);
>> -	page->index = pfn_to_mfn(pfn);
>> -
>> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
>> -		return -ENOMEM;
>>
>>   	if (kmap_op != NULL) {
>>   		if (!PageHighMem(page)) {
>> @@ -933,19 +926,16 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>>   }
>>   EXPORT_SYMBOL_GPL(m2p_add_override);
>>   int m2p_remove_override(struct page *page,
>> -		struct gnttab_map_grant_ref *kmap_op)
>> +			struct gnttab_map_grant_ref *kmap_op,
>> +			unsigned long mfn)
>>   {
>>   	unsigned long flags;
>> -	unsigned long mfn;
>>   	unsigned long pfn;
>>   	unsigned long uninitialized_var(address);
>>   	unsigned level;
>>   	pte_t *ptep = NULL;
>>
>>   	pfn = page_to_pfn(page);
>> -	mfn = get_phys_to_machine(pfn);
>> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
>> -		return -EINVAL;
>>
>>   	if (!PageHighMem(page)) {
>>   		address = (unsigned long)__va(pfn << PAGE_SHIFT);
>> @@ -959,10 +949,7 @@ int m2p_remove_override(struct page *page,
>>   	spin_lock_irqsave(&m2p_override_lock, flags);
>>   	list_del(&page->lru);
>>   	spin_unlock_irqrestore(&m2p_override_lock, flags);
>> -	WARN_ON(!PagePrivate(page));
>> -	ClearPagePrivate(page);
>>
>> -	set_phys_to_machine(pfn, page->index);
>>   	if (kmap_op != NULL) {
>>   		if (!PageHighMem(page)) {
>>   			struct multicall_space mcs;
>> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
>> index 6620b73..875025f 100644
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>>
>>   		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>>   			!rb_next(&persistent_gnt->node)) {
>> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
>> -				segs_to_unmap);
>> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>>   			BUG_ON(ret);
>>   			put_free_pages(blkif, pages, segs_to_unmap);
>>   			segs_to_unmap =3D 0;
>> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>>   		pages[segs_to_unmap] = persistent_gnt->page;
>>
>>   		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
>> -				segs_to_unmap);
>> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>>   			BUG_ON(ret);
>>   			put_free_pages(blkif, pages, segs_to_unmap);
>>   			segs_to_unmap =3D 0;
>> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>>   		kfree(persistent_gnt);
>>   	}
>>   	if (segs_to_unmap > 0) {
>> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
>> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>>   		BUG_ON(ret);
>>   		put_free_pages(blkif, pages, segs_to_unmap);
>>   	}
>> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>>   				    GNTMAP_host_map, pages[i]->handle);
>>   		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>>   		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
>> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
>> -			                        invcount);
>> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>>   			BUG_ON(ret);
>>   			put_free_pages(blkif, unmap_pages, invcount);
>>   			invcount =3D 0;
>>   		}
>>   	}
>>   	if (invcount) {
>> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
>> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>>   		BUG_ON(ret);
>>   		put_free_pages(blkif, unmap_pages, invcount);
>>   	}
>> @@ -740,7 +737,7 @@ again:
>>   	}
>>
>>   	if (segs_to_map) {
>> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
>> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>>   		BUG_ON(ret);
>>   	}
>>
>> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
>> index e41c79c..e652c0e 100644
>> --- a/drivers/xen/gntdev.c
>> +++ b/drivers/xen/gntdev.c
>> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>>   	}
>>
>>   	pr_debug("map %d+%d\n", map->index, map->count);
>> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
>> -			map->pages, map->count);
>> +	err = gnttab_map_refs_userspace(map->map_ops,
>> +					use_ptemod ? map->kmap_ops : NULL,
>> +					map->pages,
>> +					map->count);
>>   	if (err)
>>   		return err;
>>
>> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>>   		}
>>   	}
>>
>> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
>> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
>> -			pages);
>> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
>> +					  use_ptemod ? map->kmap_ops + offset : NULL,
>> +					  map->pages + offset,
>> +					  pages);
>>   	if (err)
>>   		return err;
>>
>> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
>> index aa846a4..e4ddfeb 100644
>> --- a/drivers/xen/grant-table.c
>> +++ b/drivers/xen/grant-table.c
>> @@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>>   }
>>   EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>>
>> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   		    struct gnttab_map_grant_ref *kmap_ops,
>> -		    struct page **pages, unsigned int count)
>> +		    struct page **pages, unsigned int count,
>> +		    bool m2p_override)
>>   {
>>   	int i, ret;
>>   	bool lazy = false;
>>   	pte_t *pte;
>> -	unsigned long mfn;
>> +	unsigned long mfn, pfn;
>>
>> +	BUG_ON(kmap_ops && !m2p_override);
>>   	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>>   	if (ret)
>>   		return ret;
>> @@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>>   					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>>   		}
>> -		return ret;
>> +		return 0;
>>   	}
>>
>> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>> +	if (m2p_override &&
>> +	    !in_interrupt() &&
>> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>>   		arch_enter_lazy_mmu_mode();
>>   		lazy =3D true;
>>   	}
>> @@ -927,8 +931,20 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>   		} else {
>>   			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>>   		}
>> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		pfn = page_to_pfn(pages[i]);
>> +
>> +		WARN_ON(PagePrivate(pages[i]));
>> +		SetPagePrivate(pages[i]);
>> +		set_page_private(pages[i], mfn);
>> +
>> +		pages[i]->index = pfn_to_mfn(pfn);
>> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>> +			ret = -ENOMEM;
>> +			goto out;
>> +		}
>> +		if (m2p_override)
>> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
>> +					       &kmap_ops[i] : NULL);
>>   		if (ret)
>>   			goto out;
>>   	}
>> @@ -939,15 +955,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>>
>>   	return ret;
>>   }
>> +
>> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>> +		    struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>> +}
>>   EXPORT_SYMBOL_GPL(gnttab_map_refs);
>>
>> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>> +			      struct gnttab_map_grant_ref *kmap_ops,
>> +			      struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
>> +}
>> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
>> +
>> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>   		      struct gnttab_map_grant_ref *kmap_ops,
>> -		      struct page **pages, unsigned int count)
>> +		      struct page **pages, unsigned int count,
>> +		      bool m2p_override)
>>   {
>>   	int i, ret;
>>   	bool lazy = false;
>> +	unsigned long pfn, mfn;
>>
>> +	BUG_ON(kmap_ops && !m2p_override);
>>   	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>>   	if (ret)
>>   		return ret;
>> @@ -958,17 +991,33 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>   			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>>   					INVALID_P2M_ENTRY);
>>   		}
>> -		return ret;
>> +		return 0;
>>   	}
>>
>> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>> +	if (m2p_override &&
>> +	    !in_interrupt() &&
>> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>>   		arch_enter_lazy_mmu_mode();
>>   		lazy = true;
>>   	}
>>
>>   	for (i = 0; i < count; i++) {
>> -		ret = m2p_remove_override(pages[i], kmap_ops ?
>> -				       &kmap_ops[i] : NULL);
>> +		pfn = page_to_pfn(pages[i]);
>> +		mfn = get_phys_to_machine(pfn);
>> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>> +			ret = -EINVAL;
>> +			goto out;
>> +		}
>> +
>> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
>> +		WARN_ON(!PagePrivate(pages[i]));
>> +		ClearPagePrivate(pages[i]);
>> +		set_phys_to_machine(pfn, pages[i]->index);
>> +		if (m2p_override)
>> +			ret = m2p_remove_override(pages[i],
>> +						  kmap_ops ?
>> +						   &kmap_ops[i] : NULL,
>> +						  mfn);
>>   		if (ret)
>>   			goto out;
>>   	}
>> @@ -979,8 +1028,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>>
>>   	return ret;
>>   }
>> +
>> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
>> +		    struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>> +}
>>   EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>>
>> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
>> +				struct gnttab_map_grant_ref *kmap_ops,
>> +				struct page **pages, unsigned int count)
>> +{
>> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
>> +}
>> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
>> +
>>   static unsigned nr_status_frames(unsigned nr_grant_frames)
>>   {
>>   	BUG_ON(grefs_per_grant_frame == 0);
>> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
>> index 694dcaf..9a919b1 100644
>> --- a/include/xen/grant_table.h
>> +++ b/include/xen/grant_table.h
>> @@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
>>   #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>>
>>   int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>> -		    struct gnttab_map_grant_ref *kmap_ops,
>>   		    struct page **pages, unsigned int count);
>> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
>> +			      struct gnttab_map_grant_ref *kmap_ops,
>> +			      struct page **pages, unsigned int count);
>>   int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>> -		      struct gnttab_map_grant_ref *kunmap_ops,
>>   		      struct page **pages, unsigned int count);
>> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
>> +				struct gnttab_map_grant_ref *kunmap_ops,
>> +				struct page **pages, unsigned int count);
>>
>>   /* Perform a batch of grant map/copy operations. Retry every batch slot
>>    * for which the hypervisor returns GNTST_eagain. This is typically due
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/
>>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 10:36:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 10:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9uPW-00087Q-OQ; Sun, 02 Feb 2014 10:36:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1W9uPU-00087L-UG
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 10:36:21 +0000
Received: from [85.158.143.35:51290] by server-1.bemta-4.messagelabs.com id
	5D/A1-31661-4AF1EE25; Sun, 02 Feb 2014 10:36:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391337378!2513060!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25560 invoked from network); 2 Feb 2014 10:36:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 10:36:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98986161"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 10:36:18 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Sun, 2 Feb 2014
	05:36:17 -0500
Message-ID: <1391337376.15093.24.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Sun, 2 Feb 2014 10:36:16 +0000
In-Reply-To: <20140202102852.GA9984@aepfle.de>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
	<1391336650.15093.23.camel@hastur.hellion.org.uk>
	<20140202102852.GA9984@aepfle.de>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 11:28 +0100, Olaf Hering wrote:
> On Sun, Feb 02, Ian Campbell wrote:
> 
> > I suppose there is no harm in this, but is there any chance that gdbsx
> > would actually work on arm64 without significant actual work going into
> > it?
> 
> I have no idea. But I just noticed it's built only due to this line in
> our xen.spec:
> 
> make -C tools/debugger/gdbsx
> 
> Perhaps this should be guarded by a %ifarch x86_64.

I suspect that right now this would be wise (maybe i386 too?) --
hopefully Mukesh can advise.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 11:21:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 11:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9v6M-0000s5-KO; Sun, 02 Feb 2014 11:20:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1W9v6L-0000s0-8t
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 11:20:37 +0000
Received: from [85.158.139.211:52964] by server-15.bemta-5.messagelabs.com id
	A6/AA-24395-40A2EE25; Sun, 02 Feb 2014 11:20:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391340033!1106712!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31693 invoked from network); 2 Feb 2014 11:20:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 11:20:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,760,1384300800"; d="scan'208";a="98990194"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 11:20:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 2 Feb 2014 06:20:32 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1W9v6G-0005Za-9a;
	Sun, 02 Feb 2014 11:20:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1W9v6G-0000TV-3b;
	Sun, 02 Feb 2014 11:20:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24716-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 2 Feb 2014 11:20:32 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24716: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24716 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24716/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24591

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d
baseline version:
 xen                  0ac5c121734c5055ba2b500b7f515a71800c7b20

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jun Nakajima <jun.nakajima@intel.com>
  Keir Fraser <keir@xen.org>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=c450908dc9168c3f20787aab268fcc295feaed7d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing c450908dc9168c3f20787aab268fcc295feaed7d
+ branch=xen-4.3-testing
+ revision=c450908dc9168c3f20787aab268fcc295feaed7d
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git c450908dc9168c3f20787aab268fcc295feaed7d:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   0ac5c12..c450908  c450908dc9168c3f20787aab268fcc295feaed7d -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 15:07:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 15:07:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1W9yd4-0005sR-5r; Sun, 02 Feb 2014 15:06:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W9yd2-0005sM-N2
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 15:06:36 +0000
Received: from [85.158.143.35:43099] by server-3.bemta-4.messagelabs.com id
	D4/8A-11539-CFE5EE25; Sun, 02 Feb 2014 15:06:36 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391353594!2542089!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32631 invoked from network); 2 Feb 2014 15:06:35 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-14.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	2 Feb 2014 15:06:35 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1W9ycz-0003hF-Rg
	for xen-devel@lists.xensource.com; Sun, 02 Feb 2014 07:06:33 -0800
Date: Sun, 2 Feb 2014 07:06:33 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1391353593813-5721087.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] xen and SMP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

I would like to know what the Xen architecture is on SMP - is there a Xen
instance on each processor?

best regards 



--
View this message in context: http://xen.1045712.n5.nabble.com/xen-and-SMP-tp5721087.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 18:35:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 18:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WA1ss-0002MH-FD; Sun, 02 Feb 2014 18:35:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WA1sr-0002MC-32
	for xen-devel@lists.xen.org; Sun, 02 Feb 2014 18:35:09 +0000
Received: from [85.158.143.35:46703] by server-2.bemta-4.messagelabs.com id
	29/49-10891-CDF8EE25; Sun, 02 Feb 2014 18:35:08 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391366105!2534985!1
X-Originating-IP: [81.169.146.219]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2606 invoked from network); 2 Feb 2014 18:35:05 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.219)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 2 Feb 2014 18:35:05 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391366105; l=226;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=yvOQvnBGNVsXycKbyM6dtc+hgTU=;
	b=j3Ci3VRwFn2o3xNdjqcLPBSW1yL0hzbpVdLZkbEHYGEAkyJA8I/eglamXLq+yeCKTTT
	Zs+kKW98bTnI91JJuQed0yPHkW3Le80t2f6JMd7XFdueuY1pi/QBdDzlGXkmC/g2MqftE
	RhZbnLVYryNbQ25s9qIdYipRZq3cy+/BCfg=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id q03d35q12IZ5LJL
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Sun, 2 Feb 2014 19:35:05 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 9E6BB50269; Sun,  2 Feb 2014 19:35:04 +0100 (CET)
Date: Sun, 2 Feb 2014 19:35:04 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140202183504.GA10385@aepfle.de>
References: <20140201161513.GA13789@aepfle.de>
	<1391275508.15093.16.camel@hastur.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391275508.15093.16.camel@hastur.hellion.org.uk>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 01, Ian Campbell wrote:

> https://bugs.launchpad.net/linaro-aarch64/+bug/1169164

Is this actually on anyone's radar? That bug is almost a year old; it's
still broken with a snapshot as of 7 Jan 2014.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 01, Ian Campbell wrote:

> https://bugs.launchpad.net/linaro-aarch64/+bug/1169164

Is this actually on anyone's radar? That bug is almost a year old; it's
still broken with a snapshot as of 7 Jan 2014.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 18:55:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 18:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WA2Br-0002rf-8q; Sun, 02 Feb 2014 18:54:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WA2Bp-0002ra-Vp
	for xen-devel@lists.xenproject.org; Sun, 02 Feb 2014 18:54:46 +0000
Received: from [193.109.254.147:46115] by server-16.bemta-14.messagelabs.com
	id 3D/A7-21945-5749EE25; Sun, 02 Feb 2014 18:54:45 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391367283!1469825!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9185 invoked from network); 2 Feb 2014 18:54:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 18:54:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,767,1384300800"; d="scan'208";a="99040643"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 02 Feb 2014 18:54:42 +0000
Received: from [10.68.14.36] (10.68.14.36) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Sun, 2 Feb 2014
	13:54:41 -0500
Message-ID: <52EE93F0.1020508@citrix.com>
Date: Sun, 2 Feb 2014 19:52:32 +0100
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, Stefano Stabellini
	<stefano.stabellini@eu.citrix.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
	<52EE1E26.2040308@linaro.org>
In-Reply-To: <52EE1E26.2040308@linaro.org>
X-Originating-IP: [10.68.14.36]
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/02/14 11:29, Julien Grall wrote:
> Hello,
>
> This patch is breaking Linux compilation on ARM:
>
> drivers/xen/grant-table.c: In function '__gnttab_map_refs':
> drivers/xen/grant-table.c:989:3: error: implicit declaration of
> function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
>     if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>     ^
> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
> function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
>     mfn = get_phys_to_machine(pfn);
>     ^
> drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT'
> undeclared (first use in this function)
>     if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>                                             ^
> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
> reported only once for each function it appears in
> drivers/xen/grant-table.c:1068:9: error: too many arguments to
> function 'm2p_remove_override'
>           mfn);
>           ^
> In file included from include/xen/page.h:4:0,
>                   from drivers/xen/grant-table.c:48:
> /local/home/julien/works/midway/linux/arch/arm/include/asm/xen/page.h:106:19:
> note: declared here
>   static inline int m2p_remove_override(struct page *page, bool clear_pte)
>                     ^
> cc1: some warnings being treated as errors

Hi,

That's bad indeed. I think the best solution is to put the parts that
were moved from x86/p2m.c to grant-table.c behind an #ifdef x86.

David, Stefano, what do you think?

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 22:30:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 22:30:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WA5Xe-00007s-2v; Sun, 02 Feb 2014 22:29:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jwboyer@gmail.com>) id 1WA5Xc-00007n-Rf
	for xen-devel@lists.xenproject.org; Sun, 02 Feb 2014 22:29:29 +0000
Received: from [85.158.143.35:45567] by server-1.bemta-4.messagelabs.com id
	9D/AF-31661-8C6CEE25; Sun, 02 Feb 2014 22:29:28 +0000
X-Env-Sender: jwboyer@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391380166!2559395!1
X-Originating-IP: [209.85.214.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4780 invoked from network); 2 Feb 2014 22:29:27 -0000
Received: from mail-ob0-f172.google.com (HELO mail-ob0-f172.google.com)
	(209.85.214.172)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 22:29:27 -0000
Received: by mail-ob0-f172.google.com with SMTP id vb8so7203925obc.3
	for <xen-devel@lists.xenproject.org>;
	Sun, 02 Feb 2014 14:29:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:cc:content-type; 
	bh=vE3YTySYFhcO1b3xlapDI1FFRVWdPj8DH8zOQA66jzs=;
	b=TwoTP6Zv6XC3L7mn71MfYK3506fIroUZrf1TJlioTawyvKqzmvLYV6jug0fc7TM7hI
	nWHDQuWXaxyKjSY5YnXap0UUncnPyI33PgF5ZhcV6Grntm1ssf1UfVg1ifg92sNnTP4J
	xa3Vu3WNJ2kxszYRoaNQrmzZZsgkoq9mx4alBITokMLAMtSTYRQKGg6qKqTWYVM7EGFU
	hooQEd9OQRCSJXyOEszSPfX7QP5vtg639nri1P0SrXSNGU1wrjVNkW/HF0AK2W1XV1UM
	df64qRFrkpyRAKO4HZvGn08fc/OybAqQ6WsLHfqUpeGfpppEQq9r1NIfTnG6LgY7x34B
	ORyg==
MIME-Version: 1.0
X-Received: by 10.60.80.137 with SMTP id r9mr27667094oex.30.1391380165757;
	Sun, 02 Feb 2014 14:29:25 -0800 (PST)
Received: by 10.76.27.197 with HTTP; Sun, 2 Feb 2014 14:29:25 -0800 (PST)
Date: Sun, 2 Feb 2014 17:29:25 -0500
X-Google-Sender-Auth: DnCM5-EAS_78EU8YH8U6ugjgQEE
Message-ID: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
From: Josh Boyer <jwboyer@fedoraproject.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xenproject.org,
	"Linux-Kernel@Vger. Kernel. Org" <linux-kernel@vger.kernel.org>
Subject: [Xen-devel] Xen build error on ARM with 3.14 merge window kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

With v3.13-11147-gb399c46 I'm seeing the following build errors for
Xen on ARM.  I haven't been able to test Linus' recent tree yet, but I
was wondering if anyone had seen this yet.

josh

drivers/xen/grant-table.c: In function '__gnttab_map_refs':
drivers/xen/grant-table.c:989:3: error: implicit declaration of
function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
   if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
   ^
drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
drivers/xen/grant-table.c:1054:3: error: implicit declaration of
function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
   mfn = get_phys_to_machine(pfn);
   ^
drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT'
undeclared (first use in this function)
   if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
                                           ^
drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
reported only once for each function it appears in
drivers/xen/grant-table.c:1068:9: error: too many arguments to
function 'm2p_remove_override'
         mfn);
         ^
In file included from include/xen/page.h:4:0,
                 from drivers/xen/grant-table.c:48:
/home/jwboyer/rpmbuild/BUILD/kernel-3.13.fc20/linux-3.14.0-0.rc0.git20.1.fc20.armv7hl/arch/arm/include/asm/xen/page.h:106:19:
note: declared here
 static inline int m2p_remove_override(struct page *page, bool clear_pte)
                   ^
cc1: some warnings being treated as errors
make[2]: *** [drivers/xen/grant-table.o] Error 1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 22:46:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 22:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WA5ne-0000U4-O9; Sun, 02 Feb 2014 22:46:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mattd@bugfuzz.com>) id 1WA5nc-0000Tz-CZ
	for xen-devel@lists.xenproject.org; Sun, 02 Feb 2014 22:46:00 +0000
Received: from [85.158.139.211:5971] by server-16.bemta-5.messagelabs.com id
	F1/84-05060-7AACEE25; Sun, 02 Feb 2014 22:45:59 +0000
X-Env-Sender: mattd@bugfuzz.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391381157!1165357!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7561 invoked from network); 2 Feb 2014 22:45:58 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 22:45:58 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so9032444qaq.39
	for <xen-devel@lists.xenproject.org>;
	Sun, 02 Feb 2014 14:45:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=PkWpZurKKZzL2vG1xms5SxJbRmBBIhKm4qsGdKbO978=;
	b=YkjyumkqEfZNplSYvuMAJqHC5VoYUFtUGb3IAJ5RZaaVgCpIVUc7Ult41KSG8I2UQP
	EeXNKD7wRHgHpkfbFRqhHSFwSksqf0OYJFbt7TUMcC+88JoaeowRYn2WZmqCqWUPNRBV
	XPALtk2BmPLPTiRZ59dOt3NnFJCrAz2qZBGRFBYMlpaBBP0XUtEQZmRILQo3rqGE56TQ
	exxT8x+5YYwUqhXVw86XDeMGhB3imM1np15zfNxFhWq3sheIznwC1VlNmcFL1hb9GdyC
	v1wmTZYt0J6FkpjuZmFxgljNwZFRkyAd/HFJmo1xQ2+1Y9v+5BkZQRiJZ1e58uFOHiDm
	Pu7A==
X-Gm-Message-State: ALoCoQmk4wgMG3/6sYKErQM4Cj/y8m4rQxXmn+np5SMxuo5+u73LLso9e+k4kBTVQW8ynANpbOdx
MIME-Version: 1.0
X-Received: by 10.224.113.204 with SMTP id b12mr50929866qaq.35.1391381157437; 
	Sun, 02 Feb 2014 14:45:57 -0800 (PST)
Received: by 10.96.6.38 with HTTP; Sun, 2 Feb 2014 14:45:57 -0800 (PST)
X-Originating-IP: [121.99.39.170]
In-Reply-To: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
References: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
Date: Mon, 3 Feb 2014 11:45:57 +1300
Message-ID: <CAD3Canf=nK+uS=sbxAP5-_ikFfxE3Ta-L5R2Y0tkLYAVT=J2KA@mail.gmail.com>
From: Matthew Daley <mattd@bugfuzz.com>
To: Josh Boyer <jwboyer@fedoraproject.org>
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Linux-Kernel@Vger. Kernel. Org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] Xen build error on ARM with 3.14 merge window
	kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 3, 2014 at 11:29 AM, Josh Boyer <jwboyer@fedoraproject.org> wrote:
> Hi All,
>
> With v3.13-11147-gb399c46 I'm seeing the following build errors for
> Xen on ARM.  I haven't been able to test Linus' recent tree yet, but I
> was wondering if anyone had seen this yet.

It's just recently been brought up. Check out message
<52EE1E26.2040308@linaro.org>:
http://thread.gmane.org/gmane.linux.kernel/1634907/focus=1638931

- Matthew

>
> josh
>
> drivers/xen/grant-table.c: In function '__gnttab_map_refs':
> drivers/xen/grant-table.c:989:3: error: implicit declaration of
> function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
>    if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>    ^
> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
> function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
>    mfn = get_phys_to_machine(pfn);
>    ^
> drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT'
> undeclared (first use in this function)
>    if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>                                            ^
> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
> reported only once for each function it appears in
> drivers/xen/grant-table.c:1068:9: error: too many arguments to
> function 'm2p_remove_override'
>          mfn);
>          ^
> In file included from include/xen/page.h:4:0,
>                  from drivers/xen/grant-table.c:48:
> /home/jwboyer/rpmbuild/BUILD/kernel-3.13.fc20/linux-3.14.0-0.rc0.git20.1.fc20.armv7hl/arch/arm/include/asm/xen/page.h:106:19:
> note: declared here
>  static inline int m2p_remove_override(struct page *page, bool clear_pte)
>                    ^
> cc1: some warnings being treated as errors
> make[2]: *** [drivers/xen/grant-table.o] Error 1
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 02 22:46:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Feb 2014 22:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WA5ne-0000U4-O9; Sun, 02 Feb 2014 22:46:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mattd@bugfuzz.com>) id 1WA5nc-0000Tz-CZ
	for xen-devel@lists.xenproject.org; Sun, 02 Feb 2014 22:46:00 +0000
Received: from [85.158.139.211:5971] by server-16.bemta-5.messagelabs.com id
	F1/84-05060-7AACEE25; Sun, 02 Feb 2014 22:45:59 +0000
X-Env-Sender: mattd@bugfuzz.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391381157!1165357!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7561 invoked from network); 2 Feb 2014 22:45:58 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	2 Feb 2014 22:45:58 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so9032444qaq.39
	for <xen-devel@lists.xenproject.org>;
	Sun, 02 Feb 2014 14:45:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=PkWpZurKKZzL2vG1xms5SxJbRmBBIhKm4qsGdKbO978=;
	b=YkjyumkqEfZNplSYvuMAJqHC5VoYUFtUGb3IAJ5RZaaVgCpIVUc7Ult41KSG8I2UQP
	EeXNKD7wRHgHpkfbFRqhHSFwSksqf0OYJFbt7TUMcC+88JoaeowRYn2WZmqCqWUPNRBV
	XPALtk2BmPLPTiRZ59dOt3NnFJCrAz2qZBGRFBYMlpaBBP0XUtEQZmRILQo3rqGE56TQ
	exxT8x+5YYwUqhXVw86XDeMGhB3imM1np15zfNxFhWq3sheIznwC1VlNmcFL1hb9GdyC
	v1wmTZYt0J6FkpjuZmFxgljNwZFRkyAd/HFJmo1xQ2+1Y9v+5BkZQRiJZ1e58uFOHiDm
	Pu7A==
X-Gm-Message-State: ALoCoQmk4wgMG3/6sYKErQM4Cj/y8m4rQxXmn+np5SMxuo5+u73LLso9e+k4kBTVQW8ynANpbOdx
MIME-Version: 1.0
X-Received: by 10.224.113.204 with SMTP id b12mr50929866qaq.35.1391381157437; 
	Sun, 02 Feb 2014 14:45:57 -0800 (PST)
Received: by 10.96.6.38 with HTTP; Sun, 2 Feb 2014 14:45:57 -0800 (PST)
X-Originating-IP: [121.99.39.170]
In-Reply-To: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
References: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
Date: Mon, 3 Feb 2014 11:45:57 +1300
Message-ID: <CAD3Canf=nK+uS=sbxAP5-_ikFfxE3Ta-L5R2Y0tkLYAVT=J2KA@mail.gmail.com>
From: Matthew Daley <mattd@bugfuzz.com>
To: Josh Boyer <jwboyer@fedoraproject.org>
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Linux-Kernel@Vger. Kernel. Org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] Xen build error on ARM with 3.14 merge window
	kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 3, 2014 at 11:29 AM, Josh Boyer <jwboyer@fedoraproject.org> wrote:
> Hi All,
>
> With v3.13-11147-gb399c46 I'm seeing the following build errors for
> Xen on ARM.  I haven't been able to test Linus' most recent tree yet,
> but I was wondering if anyone else had seen this.

It's just recently been brought up. Check out message
<52EE1E26.2040308@linaro.org>:
http://thread.gmane.org/gmane.linux.kernel/1634907/focus=1638931

- Matthew

>
> josh
>
> drivers/xen/grant-table.c: In function '__gnttab_map_refs':
> drivers/xen/grant-table.c:989:3: error: implicit declaration of
> function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
>    if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>    ^
> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
> function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
>    mfn = get_phys_to_machine(pfn);
>    ^
> drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT'
> undeclared (first use in this function)
>    if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>                                            ^
> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
> reported only once for each function it appears in
> drivers/xen/grant-table.c:1068:9: error: too many arguments to
> function 'm2p_remove_override'
>          mfn);
>          ^
> In file included from include/xen/page.h:4:0,
>                  from drivers/xen/grant-table.c:48:
> /home/jwboyer/rpmbuild/BUILD/kernel-3.13.fc20/linux-3.14.0-0.rc0.git20.1.fc20.armv7hl/arch/arm/include/asm/xen/page.h:106:19:
> note: declared here
>  static inline int m2p_remove_override(struct page *page, bool clear_pte)
>                    ^
> cc1: some warnings being treated as errors
> make[2]: *** [drivers/xen/grant-table.o] Error 1
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 04:56:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 04:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WABa3-0004x6-NG; Mon, 03 Feb 2014 04:56:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1WABa2-0004wq-1R; Mon, 03 Feb 2014 04:56:22 +0000
Received: from [85.158.139.211:17159] by server-14.bemta-5.messagelabs.com id
	8E/74-27598-5712FE25; Mon, 03 Feb 2014 04:56:21 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391403379!1179311!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=2.5 required=7.0 tests=RCVD_BY_IP,
  SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18867 invoked from network); 3 Feb 2014 04:56:20 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 04:56:20 -0000
Received: by mail-lb0-f175.google.com with SMTP id p9so4990447lbv.34
	for <multiple recipients>; Sun, 02 Feb 2014 20:56:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=/lRVhGbU47JDu4Dse7ejDTaqdkIX79zGvUO4VOTT8mE=;
	b=kQKsYwa7u/8E1tydNAveFJxS3IyrjKSgBoPGrejVgYD0NYlSQ0/QHzzojR3STyX+eY
	3qs3mANftKmc3djvyaezS0c41lINJhjB1K0ZfKSjauROc+/3ECfNe+KxedJNh4/5JOUV
	TOcFQJFy3TxXu1H0bWCEC0gS31AyiB3/DGNuXwFobP1g6dNVTo54uKvCsT3OQjFXBwJ5
	EFOtMB8W98LiD16aGzVjNCaN0XbNwX/fgnldgYLiKEbS56GuCp0O02Go6XG4ANWtxo3q
	cdOiRNytcx42RF3algvZ2ZxIcR+trhZ9eRo0z/GzGLZhIswaI+jFIOf4yCSlJSzp17gu
	qkAQ==
MIME-Version: 1.0
X-Received: by 10.152.23.132 with SMTP id m4mr130340laf.34.1391403379196; Sun,
	02 Feb 2014 20:56:19 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Sun, 2 Feb 2014 20:56:19 -0800 (PST)
Date: Sun, 2 Feb 2014 23:56:19 -0500
X-Google-Sender-Auth: avoPZVMD7YRdL0URAnPY1d_MBio
Message-ID: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org, 
	cl-mirage@lists.cam.ac.uk, xs-devel@lists.xenserver.org, 
	xen-api@lists.xen.org
Subject: [Xen-devel] Today is Xen Project Test Day for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a reminder that today is the Xen Project Test Day for Xen 4.4 RC3.

RC3 is the first release candidate to include a testable PVH.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions

Developers: please consider monitoring the Freenode IRC channel
#xentest today to make sure that people are able to build and test the
code.

Hope to see you today on #xentest!

Russ

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 06:55:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 06:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WADR0-00082J-9i; Mon, 03 Feb 2014 06:55:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jaceksburghardt@gmail.com>)
	id 1WADQw-000820-Oz; Mon, 03 Feb 2014 06:55:09 +0000
Received: from [85.158.137.68:36051] by server-2.bemta-3.messagelabs.com id
	DE/ED-06531-94D3FE25; Mon, 03 Feb 2014 06:55:05 +0000
X-Env-Sender: jaceksburghardt@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391410503!12118036!1
X-Originating-IP: [209.85.216.170]
X-SpamReason: No, hits=2.7 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,SUSPICIOUS_RECIPS,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5979 invoked from network); 3 Feb 2014 06:55:04 -0000
Received: from mail-qc0-f170.google.com (HELO mail-qc0-f170.google.com)
	(209.85.216.170)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 06:55:04 -0000
Received: by mail-qc0-f170.google.com with SMTP id e9so10767653qcy.29
	for <multiple recipients>; Sun, 02 Feb 2014 22:55:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ilCNj6kMSsLAUASfThuplg7Pq1XaWc66/W5JRj8GB8s=;
	b=kFW6xh2p7W36o9AEoWAEvbf4tXVEr02XK7gSrccTRWyiIQR//gTRhOSmCpxhOnUREq
	IxwQbT4j6egIaN7mUW9j2W4fEF1hkeOjdb/AkcjuX87u2PX7/kTpxGxLTfhMw30BBebr
	DRChgToXfhEdFVW0bCV62FODBMj+Sj4rVSb9SbbKgzB55jqGbNly35szuM+AkT33p/zl
	szeRxhnCPJw42EkeweC+XAh+vHBEU179FZHIRlAq2SWNQC4YWJSICVhGuPMpE8CLudYF
	TZTZHF3FIU+6rGZf+L+6Tu22nHmeaM8KxGFwq05uZY1Jiziso3zMJmYG/h+YWR18ML7O
	PwjA==
MIME-Version: 1.0
X-Received: by 10.224.137.5 with SMTP id u5mr1152003qat.12.1391410502931; Sun,
	02 Feb 2014 22:55:02 -0800 (PST)
Received: by 10.140.83.180 with HTTP; Sun, 2 Feb 2014 22:55:02 -0800 (PST)
In-Reply-To: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
References: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
Date: Sun, 2 Feb 2014 23:55:02 -0700
Message-ID: <CAHyyzzQmkaA8L8GrqXcHb-Or81zvXC1vyQJm8RsJr=tXi1zVBg@mail.gmail.com>
From: jacek burghardt <jaceksburghardt@gmail.com>
To: Russ Pavlicek <russell.pavlicek@xenproject.org>
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Xen-API] Today is Xen Project Test Day for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0723976461855398737=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0723976461855398737==
Content-Type: multipart/alternative; boundary=001a11c28604c3973c04f17b0010

--001a11c28604c3973c04f17b0010
Content-Type: text/plain; charset=ISO-8859-1

Well, I was testing RC3. I am running Server 2012 with 12 GB of RAM and
SharePoint. I had installed all Microsoft updates, and then Server 2012 would
crash qemu (visible in dmesg). I had to lower the assigned RAM to 10 GB before
I could complete the patch installation. It seems that Xen 4.4 RC3 is limited
to 10 GB of RAM.
Is Xen switching to 64-bit qemu from i386? It seems that many chipset-support
improvements were made in the 64-bit version of qemu, and it uses a 256 KB
SeaBIOS.
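If this really is a memory-size threshold, a minimal way to pin it down is to vary only the guest's memory setting in its xl domain configuration and retest. The fragment below is a hypothetical config: the name, disk path, and the 10 GB threshold are taken from the report above and are assumptions, not verified values:

```
# Hypothetical HVM guest config for reproducing the reported crash;
# 12288 MiB (12 GB) reportedly crashed qemu on 4.4 RC3, 10240 MiB worked.
builder = "hvm"
name    = "win2012-test"
memory  = 10240
vcpus   = 4
disk    = [ "phy:/dev/vg0/win2012,hda,w" ]
vnc     = 1
```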


On Sun, Feb 2, 2014 at 9:56 PM, Russ Pavlicek <
russell.pavlicek@xenproject.org> wrote:

> This is a reminder that today is the Xen Project Test Day for Xen 4.4 RC3.
>
> RC3 is the first release candidate to include a testable PVH.
>
> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
>
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
>
> Developers: please consider monitoring the Freenode IRC channel
> #xentest today to make sure that people are able to build and test the
> code.
>
> Hope to see you today on #xentest!
>
> Russ
>
> _______________________________________________
> Xen-api mailing list
> Xen-api@lists.xen.org
> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
>

--001a11c28604c3973c04f17b0010
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Well I was testing rc3 and I am running server 2012 with 1=
2 gb of ram and share point. I had installed all Microsoft updates and then=
 I would get server 2012 crashing qemu in dmesg. I had to lower ram assigne=
d =A0to 10 gb and I was able to complete patches installation. It seems tha=
t xen 4.4 rc3 is limited to 10gb of ram.=A0<div>

Is xen switching to qemu 64 from =A0i386, it seems that many improvements i=
n chip set support were made =A0 with 64 bit version of qemu and it uses 25=
6KB seabios.=A0</div></div><div class=3D"gmail_extra"><br><br><div class=3D=
"gmail_quote">
On Sun, Feb 2, 2014 at 9:56 PM, Russ Pavlicek <span dir=3D"ltr">&lt;<a href=
=3D"mailto:russell.pavlicek@xenproject.org" target=3D"_blank">russell.pavli=
cek@xenproject.org</a>&gt;</span> wrote:<br><blockquote class=3D"gmail_quot=
e" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
This is a reminder that today is the Xen Project Test Day for Xen 4.4 RC3.<=
br>
<br>
RC3 is the first release candidate to include a testable PVH.<br>
<br>
General Information about Test Days can be found here:<br>
<a href=3D"http://wiki.xenproject.org/wiki/Xen_Test_Days" target=3D"_blank"=
>http://wiki.xenproject.org/wiki/Xen_Test_Days</a><br>
<br>
and specific instructions for this Test Day are located here:<br>
<a href=3D"http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions" t=
arget=3D"_blank">http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructi=
ons</a><br>
<br>
Developers: please consider monitoring the Freenode IRC channel<br>
#xentest today to make sure that people are able to build and test the<br>
code.<br>
<br>
Hope to see you today on #xentest!<br>
<br>
Russ<br>
<br>
_______________________________________________<br>
Xen-api mailing list<br>
<a href=3D"mailto:Xen-api@lists.xen.org">Xen-api@lists.xen.org</a><br>
<a href=3D"http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api" target=3D=
"_blank">http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api</a><br>
</blockquote></div><br></div>

--001a11c28604c3973c04f17b0010--


--===============0723976461855398737==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0723976461855398737==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 07:35:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 07:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAE3M-00014t-Lj; Mon, 03 Feb 2014 07:34:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAE3L-00014o-5u
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 07:34:47 +0000
Received: from [85.158.137.68:28268] by server-2.bemta-3.messagelabs.com id
	86/31-06531-6964FE25; Mon, 03 Feb 2014 07:34:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391412883!12958626!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28623 invoked from network); 3 Feb 2014 07:34:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 07:34:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,770,1384300800"; d="scan'208";a="99122781"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 07:34:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 02:34:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAE3G-00035e-OZ;
	Mon, 03 Feb 2014 07:34:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAE3G-0007QX-Hn;
	Mon, 03 Feb 2014 07:34:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24718-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Feb 2014 07:34:42 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24718: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24718 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24718/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 24715
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)           broken pass in 24715
 test-amd64-i386-pv            3 host-install(3)           broken pass in 24715
 test-amd64-i386-freebsd10-i386  3 host-install(3)         broken pass in 24715
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24715
From xen-devel-bounces@lists.xen.org Mon Feb 03 07:35:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 07:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAE3M-00014t-Lj; Mon, 03 Feb 2014 07:34:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAE3L-00014o-5u
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 07:34:47 +0000
Received: from [85.158.137.68:28268] by server-2.bemta-3.messagelabs.com id
	86/31-06531-6964FE25; Mon, 03 Feb 2014 07:34:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391412883!12958626!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28623 invoked from network); 3 Feb 2014 07:34:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 07:34:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,770,1384300800"; d="scan'208";a="99122781"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 07:34:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 02:34:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAE3G-00035e-OZ;
	Mon, 03 Feb 2014 07:34:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAE3G-0007QX-Hn;
	Mon, 03 Feb 2014 07:34:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24718-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Feb 2014 07:34:42 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24718: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24718 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24718/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)           broken pass in 24715
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)           broken pass in 24715
 test-amd64-i386-pv            3 host-install(3)           broken pass in 24715
 test-amd64-i386-freebsd10-i386  3 host-install(3)         broken pass in 24715
 test-amd64-i386-xl-winxpsp3-vcpus1  9 guest-localmigrate    fail pass in 24715
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 24715
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install     fail pass in 24715
 test-amd64-i386-xl-win7-amd64 12 guest-localmigrate/x10     fail pass in 24703
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 8 guest-saverestore fail in 24715 pass in 24718
 test-amd64-i386-xl-win7-amd64  7 windows-install   fail in 24715 pass in 24718
 test-amd64-i386-freebsd10-amd64  4 xen-install     fail in 24703 pass in 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 10 guest-saverestore.2 fail in 24703 pass in 24718

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24715 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24715 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24715 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24703 never pass

version targeted for testing:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488
baseline version:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 07:50:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 07:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAEIQ-0001cl-Gr; Mon, 03 Feb 2014 07:50:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAEIP-0001cg-I2
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 07:50:22 +0000
Received: from [85.158.143.35:10641] by server-1.bemta-4.messagelabs.com id
	87/DC-31661-C3A4FE25; Mon, 03 Feb 2014 07:50:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391413820!2627676!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19501 invoked from network); 3 Feb 2014 07:50:20 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Feb 2014 07:50:20 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Feb 2014 07:50:20 +0000
Message-Id: <52EF5848020000780011899F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 03 Feb 2014 07:50:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52E92D580200007800117FC1@nat28.tlf.novell.com>
	<20140131165643.GE23648@phenom.dumpdata.com>
In-Reply-To: <20140131165643.GE23648@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] x86: PV SMAP for 64-bit guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 31.01.14 at 17:56, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>> Hence before going further with this approach (for now I only got
>> it to the point that an un-patched Linux is unaffected, i.e. I didn't
>> code up the Linux side yet) I would be interested to hear people's
>> opinions on whether the performance cost is worth it, or whether
>> instead we should consider PVH the one and only route towards
>> gaining that extra level of security.
> 
> Would we get this feature for 'free' if we do PVH? Meaning there
> is not much to modify in the Linux kernel to make it work in PVH mode?

That's my expectation, yes.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 09:40:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 09:40:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAG0l-0004bJ-7r; Mon, 03 Feb 2014 09:40:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAG0j-0004bE-AM
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 09:40:13 +0000
Received: from [85.158.143.35:9922] by server-2.bemta-4.messagelabs.com id
	C4/96-10891-CF36FE25; Mon, 03 Feb 2014 09:40:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391420409!2673246!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8975 invoked from network); 3 Feb 2014 09:40:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 09:40:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97210199"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 09:40:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	04:40:08 -0500
Message-ID: <1391420407.10515.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 3 Feb 2014 09:40:07 +0000
In-Reply-To: <52EB9F450200007800118618@nat28.tlf.novell.com>
References: <52EB9F450200007800118618@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-31 at 12:04 +0000, Jan Beulich wrote:
> Roger,
> 
> so you introduced this, yet looking in a little closer detail I can't seem
> to understand why: struct blkif_request_segment is identical in layout,
> the sole difference between the two is that in the new structure the
> padding field has a name, whereas in the old one it doesn't.

Is this something to do with Linux' use of __attribute__((packed)) once
again causing confusion? (I really hope not API deviation...)

> I'd really like to get rid of this redundant type again, unless there's a
> reason for it to be there which I'm overlooking.
> 
> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 09:49:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 09:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAG9u-00051M-UZ; Mon, 03 Feb 2014 09:49:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1WAG9t-00051E-JG
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 09:49:41 +0000
Received: from [85.158.139.211:15260] by server-2.bemta-5.messagelabs.com id
	55/CD-23037-4366FE25; Mon, 03 Feb 2014 09:49:40 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391420978!1250178!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28385 invoked from network); 3 Feb 2014 09:49:39 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 09:49:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s139naH7025383
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 09:49:37 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s139nXWP015124
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 3 Feb 2014 09:49:35 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s139nXtq018197; Mon, 3 Feb 2014 09:49:33 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 01:49:32 -0800
Date: Mon, 3 Feb 2014 10:49:12 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203094912.GA5273@olila.local.net-space.pl>
References: <52CBC700.1060602@zynstra.com> <52CE7E67.5080708@oracle.com>
	<52D64B87.6000400@zynstra.com> <52D69E0B.5020006@oracle.com>
	<52D6B8B6.5070302@zynstra.com> <52D7346A.3000300@oracle.com>
	<52E7E594.2050104@zynstra.com> <52E911CA.9020700@oracle.com>
	<52E91404.30602@zynstra.com>
	<20140131165654.GF23648@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140131165654.GF23648@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: James Dingwall <james.dingwall@zynstra.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On Fri, Jan 31, 2014 at 11:56:54AM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 29, 2014 at 02:45:24PM +0000, James Dingwall wrote:
> > Bob Liu wrote:
> > >On 01/29/2014 01:15 AM, James Dingwall wrote:
> > >>Bob Liu wrote:
> > >>>I have made a patch that reserves an extra 10% of the original total
> > >>>memory; this way I think we can make the system much more reliable in
> > >>>all cases. Could you please test it? You don't need to set
> > >>>selfballoon_reserved_mb yourself any more.
> > >>I have to say that with this patch the situation has definitely
> > >>improved.  I have been running it with 3.12.[78] and 3.13 and pushing it
> > >>quite hard for the last 10 days or so.  Unfortunately yesterday I got an
> > >Good news!
> > >
> > >>OOM during a compile (link) of webkit-gtk.  I think your patch is part
> > >>of the solution but I'm not sure if the other bit is simply to be more
> > >>generous with the guest memory allocation or something else.  Having
> > >>tested with memory = 512  and no tmem I get an OOM with the same
> > >>compile, with memory = 1024 and no tmem the compile completes ok (both
> > >>cases without maxmem).  As my domains are usually started with memory =
> > >>512 and maxmem = 1024 it seems that there should be sufficient with my

Hmmm... James, how do you build webkit-gtk? With a simple "make" or with "make -j"?
Could you confirm that none of webkit-gtk's "subjobs" use "make -j"?

> > >But I think from the beginning tmem/balloon driver can't expand guest
> > >memory from size 'memory' to 'maxmem' automatically.
> > I am carrying this patch for libxl (4.3.1) because maxmem wasn't
> > being honoured.

James, what do you mean by "maxmem wasn't being honoured"?

> Daniel,
>
> Weren't you working on a similar patch? Do you recall what happened to it?

Yep, and it was not applied because it was not mature enough. Additionally,
this patch is part of a bigger puzzle. I have it on my TODO list but I would
like to finish the EFI stuff first. However, if you wish I could come back to
this work (if it is more important than EFI right now).

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:08:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGS0-0005TA-UV; Mon, 03 Feb 2014 10:08:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAGRz-0005T5-Fb
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:08:23 +0000
Received: from [193.109.254.147:48466] by server-1.bemta-14.messagelabs.com id
	1C/45-15438-69A6FE25; Mon, 03 Feb 2014 10:08:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391422099!1593429!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31749 invoked from network); 3 Feb 2014 10:08:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:08:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97216123"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:08:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:08:14 -0500
Message-ID: <1391422093.10515.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Mon, 3 Feb 2014 10:08:13 +0000
In-Reply-To: <2D42DA4B-E56F-4354-BC87-F304DCA35F94@gmail.com>
References: <2D42DA4B-E56F-4354-BC87-F304DCA35F94@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-31 at 20:48 +0400, Igor Kozhukhov wrote:
> Hi All,
> 
> I am trying to load a PV guest with:
> 
> virt-install --debug --paravirt --name cos6x64 --ram 800 --network bridge  --disk path=/dev/zvol/dsk/rpool/xen/cos01,driver=phy --location http://merlin.fit.vutbr.cz/mirrors/centos/6.5/os/x86_64/ --nographics
> 
> Could you please help me debug these issues with PV creation:
> 
> POST operation failed: xend_post: error from xen daemon: (xend.err
> "Error creating domain: (1, 'Internal error', 'panic:
> xc_dom_boot.c:197: xc_dom_boot_domU_map: failed to mmap domU pages
> 0x1000+0x1040 [mmap, errno=6 (No such device or address)]')")
> 
> I want to find out what I have to check/update: the Xen sources or the illumos sources.

The error message comes from xc_dom_boot.c which is part of Xen. A call
to xc_map_foreign_ranges has failed. This is abstracted out for
different platforms via the xenctrlosdep.h interface.

Are you using xc_solaris.c or have you created a new xc_illumos.c? It is
a pretty good bet that xc_solaris has bitrotten over the years. In
either case my gut feeling is that the issue is either in that code or
on the kernel side in the "privcmd" driver which it calls into.

BTW -- I'd strongly recommend that you switch to using the "xl"
toolstack -- xend is obsolete and deprecated, so using it for a new port
is not recommended, and it will obscure some of the error messages etc.

For the initial debugging phases I would also suggest avoiding higher-level
tools such as libvirt (again because they can obscure some of the
lower-level debugging) and using the "xl create" interface directly. A
simple config file would be something like:
	name = "test"
	kernel = "/path/to/vmlinuz"
	extra = "console=hvc0"
Where:
        wget -O /path/to/vmlinuz http://merlin.fit.vutbr.cz/mirrors/centos/6.5/os/x86_64/isolinux/vmlinuz
is the kernel to use.

Launch with 
	xl -vvv create /path/to/cfg
and you'll get plenty of debug output.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:13:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGWU-0005mc-CD; Mon, 03 Feb 2014 10:13:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WAGWS-0005mW-U0
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 10:13:01 +0000
Received: from [85.158.139.211:62498] by server-16.bemta-5.messagelabs.com id
	BF/F7-05060-CAB6FE25; Mon, 03 Feb 2014 10:13:00 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391422379!1251486!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32120 invoked from network); 3 Feb 2014 10:12:59 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Feb 2014 10:12:59 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WAGWL-0004oX-Ry; Mon, 03 Feb 2014 10:12:53 +0000
Date: Mon, 3 Feb 2014 11:12:53 +0100
From: Tim Deegan <tim@xen.org>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140203101253.GA16514@deinos.phlegethon.org>
References: <20131216174728.2ba3ad9a@mantra.us.oracle.com>
	<52B01C88020000780010E042@nat28.tlf.novell.com>
	<20131217101957.GB32721@deinos.phlegethon.org>
	<20131217184412.2372eb45@mantra.us.oracle.com>
	<20131218100958.GB24792@deinos.phlegethon.org>
	<20131218165152.GO24792@deinos.phlegethon.org>
	<20140131183838.27075db3@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140131183838.27075db3@mantra.us.oracle.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [RFC PATCH] PVH: cleanup of p2m upon p2m destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:38 -0800 on 31 Jan (1391189918), Mukesh Rathor wrote:
> On Wed, 18 Dec 2013 17:51:52 +0100
> Tim Deegan <tim@xen.org> wrote:
> 
> > At 11:09 +0100 on 18 Dec (1387361398), Tim Deegan wrote:
> > > > An alternative might be to just create a link list then and walk
> > > > it. In general, foreign mappings should be very small, so the
> > > > overhead of 16 bytes per page for the link list might not be too
> > > > bad. I will code it if there is no disagreement from any
> > > > maintainer... everyone has different ideas :)...
> > > 
> > > I think it would be best to walk the p2m trie (i.e. bounded by
> > > amount of RAM, rather than max GFN) and do it preemptably.  I'll
> > > look into something like that for the mem_sharing loop today, and
> > > foreign mapping code can reuse it.
> > 
> > What I've ended up with is making p2m_change_entry_type_global()
> > preemptible (which is a bigger task but will be needed as domains get
> > bigger).  Do you think that using that function to switch all mappings
> > from p2m_foreign to p2m_invalid, appropriately late in the teardown,
> > will be good enough for what you need?
> > 
> > Cheers,
> > 
> > Tim.
> 
> Finally, coming back to this, the answer is yes. Looks like all I need
> to do is:
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 9faa663..268a8a2 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -470,6 +470,10 @@ void p2m_teardown(struct p2m_domain *p2m)
>  
>      p2m_lock(p2m);
>  
> +    /* pvh: we must release refcnt on all foreign pages */
> +    if ( is_pvh_domain(d) )
> +        p2m_change_entry_type_global(d, p2m_map_foreign, p2m_invalid);
> +
>      /* Try to unshare any remaining shared p2m entries. Safeguard
>       * Since relinquish_shared_pages should have done the work. */ 
>      for ( gfn=0; gfn < p2m->max_mapped_pfn; gfn++ )

That looks right.  Sorting out how to make it restartable is on my
TODO list, along with other similar code.

> In this call, the new atomic_write_ept_entry() will DTRT:
> 
> static inline void atomic_write_ept_entry(ept_entry_t *entryptr,
>                                           const ept_entry_t *new)
> {
>     if ( p2m_is_foreign(new->sa_p2mt) )
>     {
>         struct page_info *page;
>         struct domain *fdom;
> 
>         ASSERT(mfn_valid(new->mfn));
>         page = mfn_to_page(new->mfn);
>         fdom = page_get_owner(page);
>         get_page(page, fdom);
>     }
>     if ( p2m_is_foreign(entryptr->sa_p2mt) )
>         put_page(mfn_to_page(entryptr->mfn));
> 
>     write_atomic(&entryptr->epte, new->epte);
> }

Yep.  The write_atomic() should happen before the put_page(), so we
don't need to think about race conditions (see, e.g. shadow_set_l1e()
for the idiom), but otherwise that looks fine. 

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:17:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:17:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGaU-0005ua-20; Mon, 03 Feb 2014 10:17:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAGaS-0005uU-K7
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 10:17:08 +0000
Received: from [85.158.137.68:20124] by server-10.bemta-3.messagelabs.com id
	BA/F8-07302-3AC6FE25; Mon, 03 Feb 2014 10:17:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391422625!12931477!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30163 invoked from network); 3 Feb 2014 10:17:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:17:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97217659"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:17:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:17:05 -0500
Message-ID: <1391422624.10515.27.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Miguel Clara <miguelmclara@gmail.com>
Date: Mon, 3 Feb 2014 10:17:04 +0000
In-Reply-To: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
> I'm testing live migration without shared storage (I use LVM on both sides)
> 
> 
> Issuing "xl migrate" worked nicely and the machine was migrated to the second host
> 
> However I see this in the second host log:
> 
> [ 1502.563251] EXT4-fs error (device xvda1):
> htree_dirblock_to_tree:892: inode #136303: block 533250: comm
> run-parts: bad entry in directory: rec_len is smaller than minimal -
> offset=0(0), inode=0, rec_len=0, name_len=0
> Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
> (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
> 533250: comm run-parts: bad entry in directory: rec_len is smaller
> than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
> 
> I also get errors like:
> -bash: /bin/ping: cannot execute binary file
> 
> Is this to be expected when using non-shared storage?

Yes. If the underlying disk is not the same device between both hosts
then all bets are off and all sorts of bad things will happen. Think
about it -- what would you expect to happen to an OS if a disk suddenly
started returning completely different data from what was written to it?

What you have seen seems like a plausible outcome.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:28:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:28:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGlC-0006Tg-31; Mon, 03 Feb 2014 10:28:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAGlA-0006Ta-7Z
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:28:12 +0000
Received: from [193.109.254.147:37420] by server-3.bemta-14.messagelabs.com id
	7A/AC-00432-B3F6FE25; Mon, 03 Feb 2014 10:28:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391423289!1594620!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21260 invoked from network); 3 Feb 2014 10:28:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:28:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="99152607"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 10:28:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:28:08 -0500
Message-ID: <1391423287.10515.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <monaka@monami-ya.jp>
Date: Mon, 3 Feb 2014 10:28:07 +0000
In-Reply-To: <CAH4w_GwwVDucnUFA1pH2kpYfSg6dZErs-Lu3Y9--fcLBEELL+Q@mail.gmail.com>
References: <CAH4w_GwwVDucnUFA1pH2kpYfSg6dZErs-Lu3Y9--fcLBEELL+Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG]Cannot build mini-os-x86_64-c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-01 at 17:55 +0900, Masaki Muranaka wrote:
> Hello,
> 
> 
> I got a build error. The log looks like this:
> /home/azureuser/xen/stubdom/mini-os-x86_64-c/test.o: In function
> `app_main':
> 
> /home/azureuser/xen/extras/mini-os/test.c:542: multiple definition of
> `app_main'
> /home/azureuser/xen/stubdom/mini-os-x86_64-c/main.o:/home/azureuser/xen/extras/mini-os/main.c:187: first defined here
> make[1]: *** [/home/azureuser/xen/stubdom/mini-os-x86_64-c/mini-os]
> Error 1
> make[1]: Leaving directory `/home/azureuser/xen/extras/mini-os'
> make: *** [c-stubdom] Error 2
> 
> 
> 
> 
> I think stubdom/c/minios.cfg should be changed like this.

That sounds plausible. I'm not sure what would then include the tests at
that point though.

If Samuel agrees with the patch, could you please formally submit it
according to http://wiki.xen.org/wiki/Submitting_Xen_Patches



> 
> 
> diff --git a/stubdom/c/minios.cfg b/stubdom/c/minios.cfg
> index e1faee5..f2a3178 100644
> --- a/stubdom/c/minios.cfg
> +++ b/stubdom/c/minios.cfg
> @@ -1 +1 @@
> -CONFIG_TEST=y
> +CONFIG_TEST=n
> 
> 
> 
> 
> Thanks,
> 
> 
> -- 
> Masaki Muranaka
> Monami-ya LLC, Japan.
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:30:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGnR-0006dn-KS; Mon, 03 Feb 2014 10:30:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WAGnQ-0006dg-RS
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:30:33 +0000
Received: from [193.109.254.147:9662] by server-7.bemta-14.messagelabs.com id
	25/0E-23424-8CF6FE25; Mon, 03 Feb 2014 10:30:32 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391423430!1569055!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3929 invoked from network); 3 Feb 2014 10:30:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:30:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97220408"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:30:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 05:30:29 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WAGnM-00070p-WE;
	Mon, 03 Feb 2014 10:30:28 +0000
Date: Mon, 3 Feb 2014 10:30:28 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Matt Wilson <msw@linux.com>
Message-ID: <20140203103028.GC713@zion.uk.xensource.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>
	<20130422.154139.1046488577191797292.davem@davemloft.net>
	<20130422195335.GA30755@zion.uk.xensource.com>
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Stefan Bader <stefan.bader@canonical.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
> > On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> > > From: Wei Liu <wei.liu2@citrix.com>
> > > Date: Mon, 22 Apr 2013 13:20:39 +0100
> > > 
> > > > This series is now rebased onto net-next.
> > > > 
> > > > We would also like to ask you to queue it for stable-ish tree. I can do the
> > > > backport if necessary.
> > > 
> > > All applied, but this was a disaster.
> > > 
> > 
> > Thanks, I misunderstood the workflow.
> > 
> > > If you want bug fixes propagated into -stable you submit them to 'net'
> > > from the beginning.
> > > 
> > > There is no other method by which to do this.
> > > 
> > > By merging all of these changes to net-next, you will now have to get
> > > them accepted again into 'net', and then (and only then) can you make
> > > a request for -stable inclusion.
> > > 
> > 
> > Understood. Will submit them against 'net' later.
> 
> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
> to account for max TCP header) at all related to the "skb rides the
> rocket" related TX packet drops reported against 3.8.x kernels?
> 
> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
> 
> It seems like there are still some outstanding bugs in various -stable
> releases.
> 

As far as I can remember, Ian and I requested the relevant patches be
backported in May, after the series had settled in mainline for some time.

<1369734465.3469.52.camel@zakaz.uk.xensource.com>

The series was backported to the 3.9.y stable tree; 3.8.y didn't pick
them up.

Wei.

> --msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:30:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGnd-0006g4-6l; Mon, 03 Feb 2014 10:30:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAGnc-0006ff-4u
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:30:44 +0000
Received: from [85.158.139.211:37148] by server-12.bemta-5.messagelabs.com id
	C6/F3-15415-3DF6FE25; Mon, 03 Feb 2014 10:30:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391423440!1257322!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10184 invoked from network); 3 Feb 2014 10:30:42 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 10:30:42 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13AUbN9015323
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 10:30:40 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13AUTtA027256
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 10:30:33 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13AUSR9001601; Mon, 3 Feb 2014 10:30:28 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 02:30:26 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <20140203094912.GA5273@olila.local.net-space.pl>
References: <52CBC700.1060602@zynstra.com> <52CE7E67.5080708@oracle.com>
	<52D64B87.6000400@zynstra.com> <52D69E0B.5020006@oracle.com>
	<52D6B8B6.5070302@zynstra.com> <52D7346A.3000300@oracle.com>
	<52E7E594.2050104@zynstra.com> <52E911CA.9020700@oracle.com>
	<52E91404.30602@zynstra.com>
	<20140131165654.GF23648@phenom.dumpdata.com>
	<20140203094912.GA5273@olila.local.net-space.pl>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 03 Feb 2014 05:30:23 -0500
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <93299bc6-290d-4f18-a85f-b66276dcb3c6@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: James Dingwall <james.dingwall@zynstra.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On February 3, 2014 4:49:12 AM EST, Daniel Kiper <daniel.kiper@oracle.com> wrote:
>Hi,
>
>On Fri, Jan 31, 2014 at 11:56:54AM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Jan 29, 2014 at 02:45:24PM +0000, James Dingwall wrote:
>> > Bob Liu wrote:
>> > >On 01/29/2014 01:15 AM, James Dingwall wrote:
>> > >>Bob Liu wrote:
>> > >>>I have made a patch by reserving an extra 10% of the original
>> > >>>total memory; this way I think we can make the system much more
>> > >>>reliable in all cases. Could you please have a test? You don't
>> > >>>need to set selfballoon_reserved_mb by yourself any more.
>> > >>I have to say that with this patch the situation has definitely
>> > >>improved.  I have been running it with 3.12.[78] and 3.13 and
>> > >>pushing it quite hard for the last 10 days or so.  Unfortunately
>> > >>yesterday I got an
>> > >Good news!
>> > >
>> > >>OOM during a compile (link) of webkit-gtk.  I think your patch
>> > >>is part of the solution but I'm not sure if the other bit is
>> > >>simply to be more generous with the guest memory allocation or
>> > >>something else.  Having tested with memory = 512 and no tmem I
>> > >>get an OOM with the same compile, with memory = 1024 and no tmem
>> > >>the compile completes ok (both cases without maxmem).  As my
>> > >>domains are usually started with memory = 512 and maxmem = 1024
>> > >>it seems that there should be sufficient with my
>
>Hmmm... James, how do you build webkit-gtk? Just a simple "make" or
>"make -j"? Could you confirm that none of webkit-gtk's "subjobs" use
>"make -j"?
>
>> > >But I think from the beginning the tmem/balloon driver can't
>> > >expand guest memory from size 'memory' to 'maxmem' automatically.
>> > I am carrying this patch for libxl (4.3.1) because maxmem wasn't
>> > being honoured.
>
>James, what do you mean by "maxmem wasn't being honoured"?
>
>> Daniel,
>>
>> Weren't you working on a similar patch? Do you recall what happened
>> to it?
>
>Yep, and it was not applied because it was not mature enough.
>Additionally, this patch is part of a bigger puzzle. I have it on my
>todo list but I would like to finish the EFI stuff first. However, if
>you wish I could get back to this work (if it is more important than
>EFI right now).

No - EFI is paramount. I was wondering if you had posted some patches in the past that triggered a discussion about this? Perhaps James can pick up the work and make it mature?
>
>Daniel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:30:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGnd-0006g4-6l; Mon, 03 Feb 2014 10:30:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAGnc-0006ff-4u
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:30:44 +0000
Received: from [85.158.139.211:37148] by server-12.bemta-5.messagelabs.com id
	C6/F3-15415-3DF6FE25; Mon, 03 Feb 2014 10:30:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391423440!1257322!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10184 invoked from network); 3 Feb 2014 10:30:42 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 10:30:42 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13AUbN9015323
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 10:30:40 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13AUTtA027256
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 10:30:33 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13AUSR9001601; Mon, 3 Feb 2014 10:30:28 GMT
Received: from android-9dbdf07b9dd1a1ab.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 02:30:26 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <20140203094912.GA5273@olila.local.net-space.pl>
References: <52CBC700.1060602@zynstra.com> <52CE7E67.5080708@oracle.com>
	<52D64B87.6000400@zynstra.com> <52D69E0B.5020006@oracle.com>
	<52D6B8B6.5070302@zynstra.com> <52D7346A.3000300@oracle.com>
	<52E7E594.2050104@zynstra.com> <52E911CA.9020700@oracle.com>
	<52E91404.30602@zynstra.com>
	<20140131165654.GF23648@phenom.dumpdata.com>
	<20140203094912.GA5273@olila.local.net-space.pl>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 03 Feb 2014 05:30:23 -0500
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <93299bc6-290d-4f18-a85f-b66276dcb3c6@email.android.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: James Dingwall <james.dingwall@zynstra.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On February 3, 2014 4:49:12 AM EST, Daniel Kiper <daniel.kiper@oracle.com> wrote:
>Hi,
>
>On Fri, Jan 31, 2014 at 11:56:54AM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Jan 29, 2014 at 02:45:24PM +0000, James Dingwall wrote:
>> > Bob Liu wrote:
>> > >On 01/29/2014 01:15 AM, James Dingwall wrote:
>> > >>Bob Liu wrote:
>> > >>>I have made a patch by reserving extra 10% of original total
>> > >>>memory, by this way I think we can make the system much more
>> > >>>reliable in all cases.
>> > >>>Could you please have a test? You don't need to set
>> > >>>selfballoon_reserved_mb by yourself any more.
>> > >>I have to say that with this patch the situation has definitely
>> > >>improved.  I have been running it with 3.12.[78] and 3.13 and
>> > >>pushing it quite hard for the last 10 days or so.  Unfortunately
>> > >>yesterday I got an
>> > >Good news!
>> > >
>> > >>OOM during a compile (link) of webkit-gtk.  I think your patch is
>> > >>part of the solution but I'm not sure if the other bit is simply
>> > >>to be more generous with the guest memory allocation or something
>> > >>else.  Having tested with memory = 512 and no tmem I get an OOM
>> > >>with the same compile, with memory = 1024 and no tmem the compile
>> > >>completes ok (both cases without maxmem).  As my domains are
>> > >>usually started with memory = 512 and maxmem = 1024 it seems that
>> > >>there should be sufficient with my
>
>Hmmm... James, how do you build webkit-gtk? Just a simple "make" or
>"make -j"?
>Could you confirm that none of webkit-gtk's "subjobs" use "make -j"?
>
>> > >But I think from the beginning tmem/balloon driver can't expand
>> > >guest memory from size 'memory' to 'maxmem' automatically.
>> > I am carrying this patch for libxl (4.3.1) because maxmem wasn't
>> > being honoured.
>
>James, what do you mean by "maxmem wasn't being honoured"?
>
>> Daniel,
>>
>> Weren't you working on a similar patch? Do you recall what happened
>> to it?
>
>Yep, and it was not applied because it was not mature enough.
>Additionally, this patch is part of a bigger puzzle. I have it on my
>todo list but I would like to finish the EFI stuff first. However, if
>you wish I could get back to this work (if it is more important than
>EFI right now).

No - EFI is paramount. I was wondering if you had posted some patches in the past that triggered a discussion about this? Perhaps James can pick up the work and make it mature?
>
>Daniel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:36:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGso-0007CN-Oy; Mon, 03 Feb 2014 10:36:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WAGsC-00079B-8N; Mon, 03 Feb 2014 10:35:28 +0000
Received: from [85.158.139.211:6131] by server-4.bemta-5.messagelabs.com id
	73/AD-08092-FE07FE25; Mon, 03 Feb 2014 10:35:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391423724!1250300!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17852 invoked from network); 3 Feb 2014 10:35:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:35:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97221326"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:35:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:35:23 -0500
Message-ID: <1391423722.10515.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: jacek burghardt <jaceksburghardt@gmail.com>
Date: Mon, 3 Feb 2014 10:35:22 +0000
In-Reply-To: <CAHyyzzQmkaA8L8GrqXcHb-Or81zvXC1vyQJm8RsJr=tXi1zVBg@mail.gmail.com>
References: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
	<CAHyyzzQmkaA8L8GrqXcHb-Or81zvXC1vyQJm8RsJr=tXi1zVBg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
X-Mailman-Approved-At: Mon, 03 Feb 2014 10:36:05 +0000
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Russ Pavlicek <russell.pavlicek@xenproject.org>
Subject: Re: [Xen-devel] [Xen-API] Today is Xen Project Test Day for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(putting all lists except -users to bcc)

On Sun, 2014-02-02 at 23:55 -0700, jacek burghardt wrote:
> Well I was testing rc3 and I am running server 2012 with 12 gb of ram
> and share point. I had installed all Microsoft updates and then I
> would get server 2012 crashing qemu in dmesg.

Please report as a bug in a new thread providing all the relevant
information suggested by
http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen

>  I had to lower ram assigned  to 10 gb and I was able to complete
> patches installation. It seems that xen 4.4 rc3 is limited to 10gb of
> ram. 
> Is xen switching to qemu 64 from  i386, it seems that many
> improvements in chip set support were made   with 64 bit version of
> qemu and it uses 256KB seabios. 

Xen does not use the CPU emulation capabilities of qemu. Xen only uses
the device emulation parts of qemu, which are the same for both i386 and
amd64.

Therefore the distinction between the i386 and amd64 versions of qemu
is irrelevant to our use case.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:36:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGt9-0007Fe-5x; Mon, 03 Feb 2014 10:36:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sgruszka@redhat.com>) id 1WAGt7-0007FH-SL
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 10:36:26 +0000
Received: from [85.158.137.68:37096] by server-11.bemta-3.messagelabs.com id
	02/C0-04255-9217FE25; Mon, 03 Feb 2014 10:36:25 +0000
X-Env-Sender: sgruszka@redhat.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391423781!12169472!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21908 invoked from network); 3 Feb 2014 10:36:21 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-31.messagelabs.com with SMTP;
	3 Feb 2014 10:36:21 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s13AWZ4i017744
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 05:36:01 -0500
Received: from localhost (vpn1-6-147.ams2.redhat.com [10.36.6.147])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s13AA3N0023555; Mon, 3 Feb 2014 05:10:35 -0500
Date: Mon, 3 Feb 2014 11:12:16 +0100
From: Stanislaw Gruszka <sgruszka@redhat.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203101215.GA1725@redhat.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
	<20140131160140.GC23648@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140131160140.GC23648@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, David Rientjes <rientjes@google.com>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Jan 31, 2014 at 11:01:40AM -0500, Konrad Rzeszutek Wilk wrote:
> Perhaps by using 'subsys_system_register' and stick it there?

This will not call the ->resume callback, as it is only invoked for
devices, so an additional dummy device is needed, for example:

struct device xap_dev = {
        .init_name = "xen-acpi-processor-dev",
        .bus = &xap_bus,
};
...
subsys_system_register(&xap_bus, NULL);
device_register(&xap_dev);

But I'm not sure that is a good solution. It creates some unnecessary
sysfs directories and files. Additionally, it can restore CPU C-states
after some other drivers have resumed, which perhaps require proper
C-states.

Hence maybe adding a direct notification from the Xen core resume path
would be a better idea (proposed patch below). Please let me know what
you think; I'll provide whichever solution you choose to the bug
reporters for testing.

Thanks
Stanislaw

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index 624e8dc..96e4173 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -13,6 +13,7 @@
 #include <linux/freezer.h>
 #include <linux/syscore_ops.h>
 #include <linux/export.h>
+#include <linux/notifier.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -46,6 +47,20 @@ struct suspend_info {
 	void (*post)(int cancelled);
 };
 
+static RAW_NOTIFIER_HEAD(xen_resume_notifier);
+
+void xen_resume_notifier_register(struct notifier_block *nb)
+{
+	raw_notifier_chain_register(&xen_resume_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(xen_resume_notifier_register);
+
+void xen_resume_notifier_unregister(struct notifier_block *nb)
+{
+	raw_notifier_chain_unregister(&xen_resume_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(xen_resume_notifier_unregister);
+
 #ifdef CONFIG_HIBERNATE_CALLBACKS
 static void xen_hvm_post_suspend(int cancelled)
 {
@@ -152,6 +167,8 @@ static void do_suspend(void)
 
 	err = stop_machine(xen_suspend, &si, cpumask_of(0));
 
+	raw_notifier_call_chain(&xen_resume_notifier, 0, NULL);
+
 	dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
 
 	if (err) {
diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
index 7231859..82358d1 100644
--- a/drivers/xen/xen-acpi-processor.c
+++ b/drivers/xen/xen-acpi-processor.c
@@ -27,10 +27,10 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/types.h>
-#include <linux/syscore_ops.h>
 #include <linux/acpi.h>
 #include <acpi/processor.h>
 #include <xen/xen.h>
+#include <xen/xen-ops.h>
 #include <xen/interface/platform.h>
 #include <asm/xen/hypercall.h>
 
@@ -495,14 +495,15 @@ static int xen_upload_processor_pm_data(void)
 	return rc;
 }
 
-static void xen_acpi_processor_resume(void)
+static int xen_acpi_processor_resume(struct notifier_block *nb,
+				     unsigned long action, void *data)
 {
 	bitmap_zero(acpi_ids_done, nr_acpi_bits);
-	xen_upload_processor_pm_data();
+	return xen_upload_processor_pm_data();
 }
 
-static struct syscore_ops xap_syscore_ops = {
-	.resume	= xen_acpi_processor_resume,
+struct notifier_block xen_acpi_processor_resume_nb = {
+	.notifier_call = xen_acpi_processor_resume,
 };
 
 static int __init xen_acpi_processor_init(void)
@@ -555,7 +556,7 @@ static int __init xen_acpi_processor_init(void)
 	if (rc)
 		goto err_unregister;
 
-	register_syscore_ops(&xap_syscore_ops);
+	xen_resume_notifier_register(&xen_acpi_processor_resume_nb);
 
 	return 0;
 err_unregister:
@@ -574,7 +575,7 @@ static void __exit xen_acpi_processor_exit(void)
 {
 	int i;
 
-	unregister_syscore_ops(&xap_syscore_ops);
+	xen_resume_notifier_unregister(&xen_acpi_processor_resume_nb);
 	kfree(acpi_ids_done);
 	kfree(acpi_id_present);
 	kfree(acpi_id_cst_present);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6412358 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -16,6 +16,9 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);
 
+void xen_resume_notifier_register(struct notifier_block *nb);
+void xen_resume_notifier_unregister(struct notifier_block *nb);
+
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:39:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAGvz-0007kW-Q6; Mon, 03 Feb 2014 10:39:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAGvx-0007kL-Qt
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:39:22 +0000
Received: from [85.158.139.211:10361] by server-6.bemta-5.messagelabs.com id
	50/2E-14342-9D17FE25; Mon, 03 Feb 2014 10:39:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391423958!1244104!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9356 invoked from network); 3 Feb 2014 10:39:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:39:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97222449"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:39:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:39:18 -0500
Message-ID: <1391423956.10515.39.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Mon, 3 Feb 2014 10:39:16 +0000
In-Reply-To: <20140203103028.GC713@zion.uk.xensource.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>
	<20130422.154139.1046488577191797292.davem@davemloft.net>
	<20130422195335.GA30755@zion.uk.xensource.com>
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>
	<20140203103028.GC713@zion.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Matt Wilson <msw@linux.com>, Stefan Bader <stefan.bader@canonical.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
> > On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
> > > On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
> > > > From: Wei Liu <wei.liu2@citrix.com>
> > > > Date: Mon, 22 Apr 2013 13:20:39 +0100
> > > > 
> > > > > This series is now rebased onto net-next.
> > > > > 
> > > > > We would also like to ask you to queue it for stable-ish tree. I can do the
> > > > > backport if necessary.
> > > > 
> > > > All applied, but this was a disaster.
> > > > 
> > > 
> > > Thanks, I misunderstood the workflow.
> > > 
> > > > If you want bug fixes propagated into -stable you submit them to 'net'
> > > > from the beginning.
> > > > 
> > > > There is no other method by which to do this.
> > > > 
> > > > By merging all of these changes to net-next, you will now have to get
> > > > them accepted again into 'net', and then (and only then) can you make
> > > > a request for -stable inclusion.
> > > > 
> > > 
> > > Understood. Will submit them against 'net' later.
> > 
> > Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
> > to account for max TCP header) at all related to the "skb rides the
> > rocket" related TX packet drops reported against 3.8.x kernels?
> > 
> > https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
> > 
> > It seems like there are still some outstanding bugs in various -stable
> > releases.
> > 
> 
> As far as I can remember, Ian and I requested the relevant patches be
> backported in May, after the series had settled in mainline for some time.
> 
> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
> 
> The series was backported to the 3.9.y stable tree. 3.8.y didn't pick
> it up.

The stable maintainers don't keep every tree indefinitely, usually only for
a couple of releases after the next mainline release (I suppose you can
find the official policy online somewhere). Presumably these fixes came
too late for the 3.8.y branch.

Longterm stable trees are an exception and get backports for longer, but I
don't think 3.8 is one of those.

If anyone wants further backports then they will need to speak to the
Linux stable maintainers, although they should probably expect a "this
stable tree is now closed" type response for 3.8.

Or perhaps the above link implies that Canonical are supporting their
own LTS of Linux 3.8.y -- in which case the request should be made to
whoever that maintainer is.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:46:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAH35-00081k-Om; Mon, 03 Feb 2014 10:46:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAH34-00081d-Bm
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:46:42 +0000
Received: from [193.109.254.147:10173] by server-16.bemta-14.messagelabs.com
	id 11/0E-21945-0937FE25; Mon, 03 Feb 2014 10:46:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391424398!1600744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14286 invoked from network); 3 Feb 2014 10:46:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:46:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="99155998"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 10:46:38 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:46:37 -0500
Message-ID: <1391424397.10515.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Mon, 3 Feb 2014 10:46:37 +0000
In-Reply-To: <20140202183504.GA10385@aepfle.de>
References: <20140201161513.GA13789@aepfle.de>
	<1391275508.15093.16.camel@hastur.hellion.org.uk>
	<20140202183504.GA10385@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 19:35 +0100, Olaf Hering wrote:
> On Sat, Feb 01, Ian Campbell wrote:
> 
> > https://bugs.launchpad.net/linaro-aarch64/+bug/1169164
> 
> Is this actually on anyone's radar? That bug is almost a year old; it's
> still broken with a snapshot as of 7 Jan 2014.

Yeah, I've been meaning to ping it. Looks like it recently got moved
from Low to Medium priority so it seems to be back on the radar (I think
Julien pinged it within Linaro).

I've also been having this issue with the openSUSE aarch64 distro images
but haven't got round to reporting it. Since I assume you have the
necessary Bugzilla accounts etc. and are also experiencing it, I wonder if
I could prevail on you to report it there too?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:50:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAH6S-0008Cj-Dp; Mon, 03 Feb 2014 10:50:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefan.bader@canonical.com>) id 1WAH6Q-0008Cd-Lo
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:50:10 +0000
Received: from [85.158.143.35:9287] by server-3.bemta-4.messagelabs.com id
	FB/F5-11539-2647FE25; Mon, 03 Feb 2014 10:50:10 +0000
X-Env-Sender: stefan.bader@canonical.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391424609!2687674!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8528 invoked from network); 3 Feb 2014 10:50:09 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-16.tower-21.messagelabs.com with SMTP;
	3 Feb 2014 10:50:09 -0000
Received: from hsi-kbw-078-042-118-041.hsi3.kabel-badenwuerttemberg.de
	([78.42.118.41] helo=[192.168.2.5])
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
	(envelope-from <stefan.bader@canonical.com>)
	id 1WAH6M-0003Up-37; Mon, 03 Feb 2014 10:50:06 +0000
Message-ID: <52EF7453.2080207@canonical.com>
Date: Mon, 03 Feb 2014 11:49:55 +0100
From: Stefan Bader <stefan.bader@canonical.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>	
	<20130422.154139.1046488577191797292.davem@davemloft.net>	
	<20130422195335.GA30755@zion.uk.xensource.com>	
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>	
	<20140203103028.GC713@zion.uk.xensource.com>
	<1391423956.10515.39.camel@kazak.uk.xensource.com>
In-Reply-To: <1391423956.10515.39.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.5.2
Cc: Matt Wilson <msw@linux.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2894225507270448007=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============2894225507270448007==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="OD55E1sxcXkK3wJdNn93hKmnu8RfsAoTw"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--OD55E1sxcXkK3wJdNn93hKmnu8RfsAoTw
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.02.2014 11:39, Ian Campbell wrote:
> On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
>> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
>>> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
>>>> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>> Date: Mon, 22 Apr 2013 13:20:39 +0100
>>>>>
>>>>>> This series is now rebased onto net-next.
>>>>>>
>>>>>> We would also like to ask you to queue it for stable-ish tree. I can do the
>>>>>> backport if necessary.
>>>>>
>>>>> All applied, but this was a disaster.
>>>>>
>>>>
>>>> Thanks, I misunderstood the workflow.
>>>>
>>>>> If you want bug fixes propagated into -stable you submit them to 'net'
>>>>> from the beginning.
>>>>>
>>>>> There is no other method by which to do this.
>>>>>
>>>>> By merging all of these changes to net-next, you will now have to get
>>>>> them accepted again into 'net', and then (and only then) can you make
>>>>> a request for -stable inclusion.
>>>>>
>>>>
>>>> Understood. Will submit them against 'net' later.
>>>
>>> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
>>> to account for max TCP header) at all related to the "skb rides the
>>> rocket" related TX packet drops reported against 3.8.x kernels?
>>>
>>> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
>>>
>>> It seems like there are still some outstanding bugs in various -stable
>>> releases.
>>>
>>
>> As far as I can remember Ian and I requested relavant patches be
>> backported in May, after these series settled in mainline for some time.
>>
>> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>>
>> These series was backported to 3.9.y-stable tree. 3.8.y didn't pick them
>> up.
>
> The stable guys don't maintain every tree indefinitely, usually only for
> a couple of releases after the next mainline release or something (I
> suppose you can find the official policy online somewhere). Presumably
> these fixes came too late for the 3.8.y branch.
>
> Longterm stable trees are an exception and get longer backports, I don't
> think 3.8 is one of those though.
>
> If anyone wants further backports then they will need to speak to the
> Linux stable maintainers, although they should probably expect a "this
> stable tree is now closed" type response for 3.8.
>
> Or perhaps the above link implies that Canonical are supporting their
> own LTS of Linux 3.8.y -- in which case the request should be made to
> whoever that maintainer is.
>
> Ian.
>
Yeah, it would be a Canonical maintained longterm tree. I am just checking
to verify which ones are missing the series. I will send out a request to
pull them in after that.

-Stefan


--OD55E1sxcXkK3wJdNn93hKmnu8RfsAoTw
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJS73RcAAoJEOhnXe7L7s6jUY8P/3q+txRSlaWhvaQ87uDE8gEW
E4xOyAhZxxvHzILJTAK+DCSYMDBXoOqB5GxqmyUNVz4IYCOPUf+z7C8W2tVCYyqb
Xb6dC0ryfJBxC+4uqdhUsp27/rbrzVIgcQGkrVNt52pxdaqqSeJ3MLovC9+NLJDQ
cp8lbmOnuYFfpb4/rrqQszfYiUyPYqYsp07XcCZurxN/5o39IfbqW5l8YSSmEVgi
B+ni1ENi6siJcK4O0pCRaVBz9h432vL9nvVb/YrEbP1S2GxyeJ4YLPxxw9DG96XO
XwJK0q5zz3fyToLasRGXDvCzw4cRe2yP0BEJpLPtXpzXPRHHK3Pau9F9WG+51HGR
qqWTBQemtpHwQgsrhGoL5LGtt7kzv0BoaIDqCILmaUhWNDcrFZEzfnirFqM1Vuxg
B4fhGk1dPD/F6G2/ZzLAy8SO/2WvgervVGNP634CPhNGTjw7BoJ3xxjL4Q9sdFMB
3hn3BlgxtpFl9yD6AxyyvZDyQXcnFl3jYPpW4pP8AFqkHNGl0y4xhZ/qc9nhnBd4
NbylhCjgZXPvENuH+cojbvn+6AZuGXnFQF0m1brnP9TCSuDYD1QSH0MzEibUY827
VlrlQEO/+CEHnqLLGfHaHyrwntQ2RZwNARmqqavN9upqEdj6XUlm71KU+RpxhUQ3
DWlll8as6Mj2aXymh3YG
=3h2r
-----END PGP SIGNATURE-----

--OD55E1sxcXkK3wJdNn93hKmnu8RfsAoTw--


--===============2894225507270448007==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2894225507270448007==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 10:50:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAH6S-0008Cj-Dp; Mon, 03 Feb 2014 10:50:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefan.bader@canonical.com>) id 1WAH6Q-0008Cd-Lo
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:50:10 +0000
Received: from [85.158.143.35:9287] by server-3.bemta-4.messagelabs.com id
	FB/F5-11539-2647FE25; Mon, 03 Feb 2014 10:50:10 +0000
X-Env-Sender: stefan.bader@canonical.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391424609!2687674!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8528 invoked from network); 3 Feb 2014 10:50:09 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-16.tower-21.messagelabs.com with SMTP;
	3 Feb 2014 10:50:09 -0000
Received: from hsi-kbw-078-042-118-041.hsi3.kabel-badenwuerttemberg.de
	([78.42.118.41] helo=[192.168.2.5])
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
	(envelope-from <stefan.bader@canonical.com>)
	id 1WAH6M-0003Up-37; Mon, 03 Feb 2014 10:50:06 +0000
Message-ID: <52EF7453.2080207@canonical.com>
Date: Mon, 03 Feb 2014 11:49:55 +0100
From: Stefan Bader <stefan.bader@canonical.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>	
	<20130422.154139.1046488577191797292.davem@davemloft.net>	
	<20130422195335.GA30755@zion.uk.xensource.com>	
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>	
	<20140203103028.GC713@zion.uk.xensource.com>
	<1391423956.10515.39.camel@kazak.uk.xensource.com>
In-Reply-To: <1391423956.10515.39.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.5.2
Cc: Matt Wilson <msw@linux.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2894225507270448007=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============2894225507270448007==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="OD55E1sxcXkK3wJdNn93hKmnu8RfsAoTw"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--OD55E1sxcXkK3wJdNn93hKmnu8RfsAoTw
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.02.2014 11:39, Ian Campbell wrote:
> On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
>> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
>>> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
>>>> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>> Date: Mon, 22 Apr 2013 13:20:39 +0100
>>>>>
>>>>>> This series is now rebased onto net-next.
>>>>>>
>>>>>> We would also like to ask you to queue it for stable-ish tree. I c=
an do the
>>>>>> backport if necessary.
>>>>>
>>>>> All applied, but this was a disaster.
>>>>>
>>>>
>>>> Thanks, I misunderstood the workflow.
>>>>
>>>>> If you want bug fixes propagated into -stable you submit them to 'n=
et'
>>>>> from the beginning.
>>>>>
>>>>> There is no other method by which to do this.
>>>>>
>>>>> By merging all of these changes to net-next, you will now have to g=
et
>>>>> them accepted again into 'net', and then (and only then) can you ma=
ke
>>>>> a request for -stable inclusion.
>>>>>
>>>>
>>>> Understood. Will submit them against 'net' later.
>>>
>>> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
>>> to account for max TCP header) at all related to the "skb rides the
>>> rocket" related TX packet drops reported against 3.8.x kernels?
>>>
>>> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/11954=
74
>>>
>>> It seems like there are still some outstanding bugs in various -stabl=
e
>>> releases.
>>>
>>
>> As far as I can remember Ian and I requested relavant patches be
>> backported in May, after these series settled in mainline for some tim=
e.
>>
>> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>>
>> These series was backported to 3.9.y-stable tree. 3.8.y didn't pick th=
em
>> up.
>=20
> The stable guys don't maintain every tree indefinitely, usually only fo=
r
> a couple of releases after the next mainline release or something (I
> suppose you can find the official policy online somewhere). Presumably
> these fixes came too late for the 3.8.y branch.
>=20
> Longterm stable trees are an exception and get longer backports, I don'=
t
> think 3.8 is one of those though.
>=20
> If anyone wants further backports then they will need to speak to the
> Linux stable maintainers, although they should probably expect a "this
> stable tree is now closed" type response for 3.8.
>=20
> Or perhaps the above link implies that Canonical are supporting their
> own LTS of Linux 3.8.y -- in which case the request should be made to
> whoever that maintainer is.
>=20
> Ian.
>=20
Yeah, it would be a Canonical maintained longterm tree. I am just checking to
verify which ones are missing the series. I will send out a request to pull them
in after that.

-Stefan




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Mon Feb 03 10:52:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:52:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAH8x-0008LA-8H; Mon, 03 Feb 2014 10:52:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAH8u-0008L2-GD
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 10:52:45 +0000
Received: from [85.158.137.68:4232] by server-2.bemta-3.messagelabs.com id
	5A/0A-06531-BF47FE25; Mon, 03 Feb 2014 10:52:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391424761!12938521!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18273 invoked from network); 3 Feb 2014 10:52:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:52:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97225438"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:52:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:52:40 -0500
Message-ID: <1391424759.10515.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xennn <openbg@abv.bg>
Date: Mon, 3 Feb 2014 10:52:39 +0000
In-Reply-To: <1391353593813-5721087.post@n5.nabble.com>
References: <1391353593813-5721087.post@n5.nabble.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] xen and SMP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 07:06 -0800, xennn wrote:
> I would like to know what the Xen architecture is on SMP - is there a Xen
> instance on each processor?

No, this is not how a typical SMP OS is structured.

Please read http://wiki.xen.org/wiki/Asking_Xen_Devel_Questions , it
might help you to frame your questions better.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:55:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHBF-0008TF-Pc; Mon, 03 Feb 2014 10:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1WAHBE-0008T1-6W
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:55:08 +0000
Received: from [85.158.139.211:56362] by server-16.bemta-5.messagelabs.com id
	54/BE-05060-B857FE25; Mon, 03 Feb 2014 10:55:07 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391424905!1264895!1
X-Originating-IP: [209.85.217.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1286 invoked from network); 3 Feb 2014 10:55:06 -0000
Received: from mail-lb0-f181.google.com (HELO mail-lb0-f181.google.com)
	(209.85.217.181)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:55:06 -0000
Received: by mail-lb0-f181.google.com with SMTP id z5so5263579lbh.12
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 02:55:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=/p3zPzEpUi5Oaugvaq1GLjj2KrNlIeLm0OpM/RimSlA=;
	b=M3NtjiWLGbEG155YYvJO+T9yfpZglBhG2OocDyK6zHTbw2C2OWiH0yNv4z4l0p1gAu
	Tur/FeX68O7htyguY6htBE4gPl7t/mg13AiRmnmIV65SOzc/XvFSqlM25tpWlnu7lF4O
	NxB1+euSDxxPGcHc7wCVT7sIdHKN1ubyuKMHOBNsALz26rFiDSp5ubQfAM5ELbqBA5Dw
	93DJsVFIEU0yZII5Jlz1Fn1+e55biMsbRsKE4nrJQ+xyxl9egnWqeqXMm7shkDTRcM4Z
	8SCNzLuU8PmnO/OdG1WULoaalij6X2dUpdIpLvxknQskZOT/7n2EVcdcpqdjtRkQukZR
	Wdvw==
X-Received: by 10.153.9.72 with SMTP id dq8mr131906lad.60.1391424905487;
	Mon, 03 Feb 2014 02:55:05 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id w2sm29369925lad.4.2014.02.03.02.55.03
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 02:55:04 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <1391422093.10515.24.camel@kazak.uk.xensource.com>
Date: Mon, 3 Feb 2014 14:55:01 +0400
Message-Id: <0A4C59BE-A033-4200-88C2-A1E5A84E44BA@gmail.com>
References: <2D42DA4B-E56F-4354-BC87-F304DCA35F94@gmail.com>
	<1391422093.10515.24.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On Feb 3, 2014, at 2:08 PM, Ian Campbell wrote:

> On Fri, 2014-01-31 at 20:48 +0400, Igor Kozhukhov wrote:
>> Hi All,
>> 
>> I try to load a PV guest with:
>> 
>> virt-install --debug --paravirt --name cos6x64 --ram 800 --network bridge  --disk path=/dev/zvol/dsk/rpool/xen/cos01,driver=phy --location http://merlin.fit.vutbr.cz/mirrors/centos/6.5/os/x86_64/ --nographics
>> 
>> Could you please help me debug issues with PV creation:
>> 
>> POST operation failed: xend_post: error from xen daemon: (xend.err
>> "Error creating domain: (1, 'Internal error', 'panic:
>> xc_dom_boot.c:197: xc_dom_boot_domU_map: failed to mmap domU pages
>> 0x1000+0x1040 [mmap, errno=6 (No such device or address)]')")
>> 
>> I want to find out what I have to check/update: the Xen sources or the illumos sources.
> 
> The error message comes from xc_dom_boot.c which is part of Xen. A call
> to xc_map_foreign_ranges has failed. This is abstracted out for
> different platforms via the xenctrlosdep.h interface.
> 
> Are you using xc_solaris.c or have you created a new xc_illumos.c? It is
> a pretty good bet that xc_solaris has bitrotten over the years. In
> either case my gut feeling is that the issue is either in that code or
> on the kernel side in the "privcmd" driver which it calls into.

I'm using xc_solaris.c, because illumos is based on Solaris and has an open API.
For now I have found and fixed my problems: I'm able to load PV guests: dilos PV on dilos-xen-4.3-dom0, Linux CentOS 6.4 and Debian 7.0 64-bit.
I'll send info later with a new ISO and instructions for testing.
'xl' now works well.
I'll work on getting my changes into Xen upstream.

> BTW -- I'd strongly recommend that you switch to using the "xl"
> toolstack -- xend is obsolete and deprecated so using it for a new port
> is not recommended, it will obscure some of the error messages etc.
> 
> For the initial debugging phases I would also suggest to avoid higher
> level tools such as libvirt (again because they can obscure some of the
> lower level debugging) and to use the "xl create" interface directly. A
> simple config file would be something like:
> 	name = "test"
> 	kernel = "/path/to/vmlinuz"
> 	extra = "console=hvc"
> Where:
>        wget -O /path/to/vmlinuz http://merlin.fit.vutbr.cz/mirrors/centos/6.5/os/x86_64/isolinux/vmlinuz
> is the kernel to use.
> 
> Launch with 
> 	xl -vvv create /path/to/cfg
> and you'll get plenty of debug output.
> 
> Ian.
> 
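The xl workflow Ian outlines above can be collected into a short script. This is only a sketch: the /tmp paths are placeholders, `console=hvc` is taken verbatim from the thread, and the `wget`/`xl` steps are left commented because they need a network mirror and a running dom0:

```shell
#!/bin/sh
# Minimal PV guest config, as suggested above; paths are illustrative.
cat > /tmp/test.cfg <<'EOF'
name = "test"
kernel = "/tmp/vmlinuz"
extra = "console=hvc"
EOF

# Fetch the CentOS installer kernel named in the thread:
#   wget -O /tmp/vmlinuz http://merlin.fit.vutbr.cz/mirrors/centos/6.5/os/x86_64/isolinux/vmlinuz
# Launch with verbose debug output:
#   xl -vvv create /tmp/test.cfg
echo "wrote /tmp/test.cfg"
```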


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:56:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHCb-00008F-9o; Mon, 03 Feb 2014 10:56:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WAHCZ-00007z-Jj
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:56:31 +0000
Received: from [85.158.143.35:62585] by server-3.bemta-4.messagelabs.com id
	FF/23-11539-FD57FE25; Mon, 03 Feb 2014 10:56:31 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391424990!2690116!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16627 invoked from network); 3 Feb 2014 10:56:30 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 10:56:30 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391424990; l=495;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=e2Tk1VqebSPy4TUJVcq/CnYehU4=;
	b=j+mRd6/rgEQ4FWwxU+KPMLrX8wMSLlKAshF9GUurcatZMhQ//ekeo4AZKGfevr41QCA
	wC2q0Caay8P6WcCKPbrdiNa6hQEWiFT/jSTSOmV+yR7OSzYja5ug8PT0JmFQyFmsJIEam
	3oOPQTfH5NBKk7f6GQgnzJHww737fLYS3vo=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id 603f5bq13AuUS06
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 3 Feb 2014 11:56:30 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 9DA0150269; Mon,  3 Feb 2014 11:56:29 +0100 (CET)
Date: Mon, 3 Feb 2014 11:56:29 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140203105629.GA8266@aepfle.de>
References: <20140201161513.GA13789@aepfle.de>
	<1391275508.15093.16.camel@hastur.hellion.org.uk>
	<20140202183504.GA10385@aepfle.de>
	<1391424397.10515.46.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391424397.10515.46.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, Ian Campbell wrote:

> I've also been having this issue on the opensuse aarch64 distro images
> but not gotten round to reporting it. Since I assume you have the
> necessary bugzilla accounts etc and are also experiencing it I wonder if
> I could prevail on you to report it there too?

I can certainly help with reporting bugs. But I have no ARM hardware; I'm
just trying to adjust xen.spec so that it's not x86-only anymore, to get
fewer "pkg build failed" mails.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:57:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHDc-0000EL-ON; Mon, 03 Feb 2014 10:57:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAHDa-0000E7-Ny
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 10:57:34 +0000
Received: from [85.158.137.68:14452] by server-12.bemta-3.messagelabs.com id
	DE/00-01674-D167FE25; Mon, 03 Feb 2014 10:57:33 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391425051!9338098!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7510 invoked from network); 3 Feb 2014 10:57:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:57:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97226482"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:57:30 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:57:30 -0500
Message-ID: <52EF7618.7030402@citrix.com>
Date: Mon, 3 Feb 2014 10:57:28 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>	<52EE1E26.2040308@linaro.org>
	<52EE93F0.1020508@citrix.com>
In-Reply-To: <52EE93F0.1020508@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/02/14 18:52, Zoltan Kiss wrote:
> On 02/02/14 11:29, Julien Grall wrote:
>> Hello,
>>
>> This patch is breaking Linux compilation on ARM:
>>
>> drivers/xen/grant-table.c: In function ‘__gnttab_map_refs’:
>> drivers/xen/grant-table.c:989:3: error: implicit declaration of
>> function ‘FOREIGN_FRAME’ [-Werror=implicit-function-declaration]
>>     if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>>     ^
>> drivers/xen/grant-table.c: In function ‘__gnttab_unmap_refs’:
>> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
>> function ‘get_phys_to_machine’ [-Werror=implicit-function-declaration]
>>     mfn = get_phys_to_machine(pfn);
>>     ^
>> drivers/xen/grant-table.c:1055:43: error: ‘FOREIGN_FRAME_BIT’
>> undeclared (first use in this function)
>>     if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>>                                             ^
>> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
>> reported only once for each function it appears in
>> drivers/xen/grant-table.c:1068:9: error: too many arguments to
>> function ‘m2p_remove_override’
>>           mfn);
>>           ^
>> In file included from include/xen/page.h:4:0,
>>                   from drivers/xen/grant-table.c:48:
>> /local/home/julien/works/midway/linux/arch/arm/include/asm/xen/page.h:106:19:
>> note: declared here
>>   static inline int m2p_remove_override(struct page *page, bool
>> clear_pte)
>>                     ^
>> cc1: some warnings being treated as errors
>
> Hi,
>
> That's bad indeed. I think the best solution is to put those parts
> behind an #ifdef x86. The ones moved from x86/p2m.c to grant-table.c.
> David, Stefano, what do you think?

I don't think we want (more) #ifdef CONFIG_X86 in grant-table.c; the
arch-specific bits will have to be factored out into their own functions
with suitable stubs provided for ARM.

But, this patch went in late and it's clearly not ready. So I think it
should be reverted and we should aim to get it sorted out for 3.15.

Konrad/Stefano (if you agree) please revert
08ece5bb2312b4510b161a6ef6682f37f4eac8a1 and send a pull request.

Konrad, I also think you should look at adding an ARM build to your test
system (I thought you had this already).

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 10:59:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 10:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHFo-0000PJ-9q; Mon, 03 Feb 2014 10:59:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAHFm-0000PC-VJ
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 10:59:51 +0000
Received: from [85.158.139.211:26325] by server-6.bemta-5.messagelabs.com id
	39/C5-14342-6A67FE25; Mon, 03 Feb 2014 10:59:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391425188!1255612!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7923 invoked from network); 3 Feb 2014 10:59:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 10:59:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97226947"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 10:59:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	05:59:47 -0500
Message-ID: <1391425186.10515.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Mon, 3 Feb 2014 10:59:46 +0000
In-Reply-To: <0A4C59BE-A033-4200-88C2-A1E5A84E44BA@gmail.com>
References: <2D42DA4B-E56F-4354-BC87-F304DCA35F94@gmail.com>
	<1391422093.10515.24.camel@kazak.uk.xensource.com>
	<0A4C59BE-A033-4200-88C2-A1E5A84E44BA@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 14:55 +0400, Igor Kozhukhov wrote:
> i'm using xc_solaris.c - because illumos based on solaris and has open API.

I suspect this was the case, thanks for confirming.

> for now - i have found and fixed my problems: i'm able to load PV guests: dilos PV on dilos-xen-4.3-dom0, Linux CentOS 6.4 and Debian 7.0 64bits.

Excellent!

Is Illumos usable as a domU on a Linux dom0 as well?

> I'll send info later with new ISO and instruction for testing.
> 'xl' now work well.
> i'll work on provide my changes to Xen upstream.

Thank you!

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:00:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHGo-0000XI-Of; Mon, 03 Feb 2014 11:00:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAHGn-0000X8-Hw
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:00:53 +0000
Received: from [193.109.254.147:52851] by server-2.bemta-14.messagelabs.com id
	B7/27-01236-4E67FE25; Mon, 03 Feb 2014 11:00:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391425250!1601373!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31200 invoked from network); 3 Feb 2014 11:00:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:00:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97227252"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 11:00:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	06:00:49 -0500
Message-ID: <1391425248.10515.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Mon, 3 Feb 2014 11:00:48 +0000
In-Reply-To: <20140203105629.GA8266@aepfle.de>
References: <20140201161513.GA13789@aepfle.de>
	<1391275508.15093.16.camel@hastur.hellion.org.uk>
	<20140202183504.GA10385@aepfle.de>
	<1391424397.10515.46.camel@kazak.uk.xensource.com>
	<20140203105629.GA8266@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] arm64: error: PSR_MODE_EL3t redefined
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 11:56 +0100, Olaf Hering wrote:
> On Mon, Feb 03, Ian Campbell wrote:
> 
> > I've also been having this issue on the opensuse aarch64 distro images
> > but not gotten round to reporting it. Since I assume you have the
> > necessary bugzilla accounts etc and are also experiencing it I wonder if
> > I could prevail on you to report it there too?
> 
> I can certainly help with reporting bugs.

Thanks.

>  But I have no ARM hardware,
> just trying to adjust xen.spec so that it's not x86 only anymore.
> To get less "pkg build failed" mails.

This isn't really a h/w issue -- more of a build environment one. I
actually do most of my builds using qemu-aarch64-user-static in a chroot
on my main workstation ;-)
Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:08:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHO3-0000oq-OF; Mon, 03 Feb 2014 11:08:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefan.bader@canonical.com>) id 1WAHO1-0000ol-Qu
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:08:22 +0000
Received: from [85.158.137.68:38251] by server-17.bemta-3.messagelabs.com id
	7C/E7-22569-5A87FE25; Mon, 03 Feb 2014 11:08:21 +0000
X-Env-Sender: stefan.bader@canonical.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391425700!12990975!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5022 invoked from network); 3 Feb 2014 11:08:20 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-2.tower-31.messagelabs.com with SMTP;
	3 Feb 2014 11:08:20 -0000
Received: from hsi-kbw-078-042-118-041.hsi3.kabel-badenwuerttemberg.de
	([78.42.118.41] helo=[192.168.2.5])
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
	(envelope-from <stefan.bader@canonical.com>)
	id 1WAHNy-00041M-2Y; Mon, 03 Feb 2014 11:08:18 +0000
Message-ID: <52EF78A0.5050301@canonical.com>
Date: Mon, 03 Feb 2014 12:08:16 +0100
From: Stefan Bader <stefan.bader@canonical.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>	
	<20130422.154139.1046488577191797292.davem@davemloft.net>	
	<20130422195335.GA30755@zion.uk.xensource.com>	
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>	
	<20140203103028.GC713@zion.uk.xensource.com>
	<1391423956.10515.39.camel@kazak.uk.xensource.com>
	<52EF7453.2080207@canonical.com>
In-Reply-To: <52EF7453.2080207@canonical.com>
X-Enigmail-Version: 1.5.2
Cc: Matt Wilson <msw@linux.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9170118933806053132=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============9170118933806053132==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.02.2014 11:49, Stefan Bader wrote:
> On 03.02.2014 11:39, Ian Campbell wrote:
>> On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
>>> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
>>>> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
>>>>> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
>>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>>> Date: Mon, 22 Apr 2013 13:20:39 +0100
>>>>>>
>>>>>>> This series is now rebased onto net-next.
>>>>>>>
>>>>>>> We would also like to ask you to queue it for stable-ish tree. I can do the
>>>>>>> backport if necessary.
>>>>>>
>>>>>> All applied, but this was a disaster.
>>>>>>
>>>>>
>>>>> Thanks, I misunderstood the workflow.
>>>>>
>>>>>> If you want bug fixes propagated into -stable you submit them to 'net'
>>>>>> from the beginning.
>>>>>>
>>>>>> There is no other method by which to do this.
>>>>>>
>>>>>> By merging all of these changes to net-next, you will now have to get
>>>>>> them accepted again into 'net', and then (and only then) can you make
>>>>>> a request for -stable inclusion.
>>>>>>
>>>>>
>>>>> Understood. Will submit them against 'net' later.
>>>>
>>>> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
>>>> to account for max TCP header) at all related to the "skb rides the
>>>> rocket" related TX packet drops reported against 3.8.x kernels?
>>>>
>>>> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
>>>>
>>>> It seems like there are still some outstanding bugs in various -stable
>>>> releases.
>>>>
>>>
>>> As far as I can remember Ian and I requested relevant patches be
>>> backported in May, after these series settled in mainline for some time.
>>>
>>> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>>>
>>> These series was backported to 3.9.y-stable tree. 3.8.y didn't pick them
up.
>>
>> The stable guys don't maintain every tree indefinitely, usually only for
>> a couple of releases after the next mainline release or something (I
>> suppose you can find the official policy online somewhere). Presumably
>> these fixes came too late for the 3.8.y branch.
>>
>> Longterm stable trees are an exception and get longer backports, I don't
>> think 3.8 is one of those though.
>>
>> If anyone wants further backports then they will need to speak to the
>> Linux stable maintainers, although they should probably expect a "this
>> stable tree is now closed" type response for 3.8.
>>
>> Or perhaps the above link implies that Canonical are supporting their
>> own LTS of Linux 3.8.y -- in which case the request should be made to
>> whoever that maintainer is.
>>
>> Ian.
>>
> Yeah, it would be a Canonical maintained longterm tree. I am just checking to
> verify which ones are missing the series. I will send out a request to pull them
> in after that.
>
> -Stefan
>

It turns out that most of the series was applied to the 3.8.y.z longterm we look
after and through that made its way into the Raring kernel which is based on
that. Only the first patch of the series fails to apply. But that is only
changing an error message which actually looks to be correct in that series.

http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y

-Stefan


--et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJS73igAAoJEOhnXe7L7s6j9owP/1DWIyvvkoj1wdar5zeD8Wlx
jKfkRIwyfOqhz4Zh9MrztXFX76Tg2sjNJs9VqL9+k8QqBgEFcOg0hdMYRcw2bd4Z
Gusf4uP4g1nwfJEABs00MmoGlbVG+/LS/EGa/Mv/H8XIgjDpq9JPpva/7qppQKh8
pbdFtYBhKX9laBdFKtbZCSUs0ZmHpfgmY3VJ+/VQkiTaT6jkSTsA86yJBetgMsbz
IF1ob9tGCF7/qgheQAzeeshc5lrnACe/eUSFiPOoCJ8QiqDd8YUgqiHIrlCG5KuV
rXj1TtNkZihtfNlIo27zV7CtcXwBPDBWcz7veqfT7mCws9lWdcn1FS9Insn9r24O
UMSrUexKwAKgVyIQY8B3C3r9LEarR4LGlXKG7HWqJRffIRmLPQI/9XpZDU8o421N
3hohr3Y+66FtOgDIgKQow6N6x4+IOMjQNB3rmP0QaSEN1sibMg53mYuraPcvP8re
QynN2XNQXfRJngs8Pr5UtXPFEdNHQQ72y/dImnJ8fsBKcEdDbUSWowlSj0qioadZ
sHKtrDBlccS8CdxvJOpWp4RRdYh+SmQzRSFlF29ykOWMjNNGG3HCUNH973APLEyv
iaX7cNWy3cGZBxvT1JGldrlpfzWgqYPOre4GTYmQ90bTJHbx62pEOS7aR/5FplEZ
1PD293c2NNxn+a0YKBnP
=MiQv
-----END PGP SIGNATURE-----

--et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR--


--===============9170118933806053132==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9170118933806053132==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 11:08:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHO3-0000oq-OF; Mon, 03 Feb 2014 11:08:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stefan.bader@canonical.com>) id 1WAHO1-0000ol-Qu
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:08:22 +0000
Received: from [85.158.137.68:38251] by server-17.bemta-3.messagelabs.com id
	7C/E7-22569-5A87FE25; Mon, 03 Feb 2014 11:08:21 +0000
X-Env-Sender: stefan.bader@canonical.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391425700!12990975!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5022 invoked from network); 3 Feb 2014 11:08:20 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-2.tower-31.messagelabs.com with SMTP;
	3 Feb 2014 11:08:20 -0000
Received: from hsi-kbw-078-042-118-041.hsi3.kabel-badenwuerttemberg.de
	([78.42.118.41] helo=[192.168.2.5])
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
	(envelope-from <stefan.bader@canonical.com>)
	id 1WAHNy-00041M-2Y; Mon, 03 Feb 2014 11:08:18 +0000
Message-ID: <52EF78A0.5050301@canonical.com>
Date: Mon, 03 Feb 2014 12:08:16 +0100
From: Stefan Bader <stefan.bader@canonical.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Wei Liu <wei.liu2@citrix.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>	
	<20130422.154139.1046488577191797292.davem@davemloft.net>	
	<20130422195335.GA30755@zion.uk.xensource.com>	
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>	
	<20140203103028.GC713@zion.uk.xensource.com>
	<1391423956.10515.39.camel@kazak.uk.xensource.com>
	<52EF7453.2080207@canonical.com>
In-Reply-To: <52EF7453.2080207@canonical.com>
X-Enigmail-Version: 1.5.2
Cc: Matt Wilson <msw@linux.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9170118933806053132=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============9170118933806053132==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 03.02.2014 11:49, Stefan Bader wrote:
> On 03.02.2014 11:39, Ian Campbell wrote:
>> On Mon, 2014-02-03 at 10:30 +0000, Wei Liu wrote:
>>> On Sat, Feb 01, 2014 at 11:23:25PM -0800, Matt Wilson wrote:
>>>> On Mon, Apr 22, 2013 at 08:53:35PM +0100, Wei Liu wrote:
>>>>> On Mon, Apr 22, 2013 at 08:41:39PM +0100, David Miller wrote:
>>>>>> From: Wei Liu <wei.liu2@citrix.com>
>>>>>> Date: Mon, 22 Apr 2013 13:20:39 +0100
>>>>>>
>>>>>>> This series is now rebased onto net-next.
>>>>>>>
>>>>>>> We would also like to ask you to queue it for a stable-ish tree. I can do the
>>>>>>> backport if necessary.
>>>>>>
>>>>>> All applied, but this was a disaster.
>>>>>>
>>>>>
>>>>> Thanks, I misunderstood the workflow.
>>>>>
>>>>>> If you want bug fixes propagated into -stable you submit them to 'net'
>>>>>> from the beginning.
>>>>>>
>>>>>> There is no other method by which to do this.
>>>>>>
>>>>>> By merging all of these changes to net-next, you will now have to get
>>>>>> them accepted again into 'net', and then (and only then) can you make
>>>>>> a request for -stable inclusion.
>>>>>>
>>>>>
>>>>> Understood. Will submit them against 'net' later.
>>>>
>>>> Did this ever happen? Is 9ecd1a75 (xen-netfront: reduce gso_max_size
>>>> to account for max TCP header) at all related to the "skb rides the
>>>> rocket" related TX packet drops reported against 3.8.x kernels?
>>>>
>>>> https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1195474
>>>>
>>>> It seems like there are still some outstanding bugs in various -stable
>>>> releases.
>>>>
>>>
>>> As far as I can remember, Ian and I requested relevant patches be
>>> backported in May, after these series settled in mainline for some time.
>>>
>>> <1369734465.3469.52.camel@zakaz.uk.xensource.com>
>>>
>>> These series were backported to the 3.9.y stable tree. 3.8.y didn't pick them
>>> up.
>>
>> The stable guys don't maintain every tree indefinitely, usually only for
>> a couple of releases after the next mainline release or something (I
>> suppose you can find the official policy online somewhere). Presumably
>> these fixes came too late for the 3.8.y branch.
>>
>> Longterm stable trees are an exception and get longer backports; I don't
>> think 3.8 is one of those though.
>>
>> If anyone wants further backports then they will need to speak to the
>> Linux stable maintainers, although they should probably expect a "this
>> stable tree is now closed" type response for 3.8.
>>
>> Or perhaps the above link implies that Canonical are supporting their
>> own LTS of Linux 3.8.y -- in which case the request should be made to
>> whoever that maintainer is.
>>
>> Ian.
>>
> Yeah, it would be a Canonical-maintained longterm tree. I am just checking to
> verify which ones are missing the series. I will send out a request to pull them
> in after that.
>
> -Stefan
>

It turns out that most of the series was applied to the 3.8.y.z longterm tree we
look after, and through that it made its way into the Raring kernel, which is
based on that. Only the first patch of the series fails to apply. But that is
only changing an error message which actually looks to be correct in that
series.

http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y

-Stefan


--et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.11 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQIcBAEBCgAGBQJS73igAAoJEOhnXe7L7s6j9owP/1DWIyvvkoj1wdar5zeD8Wlx
jKfkRIwyfOqhz4Zh9MrztXFX76Tg2sjNJs9VqL9+k8QqBgEFcOg0hdMYRcw2bd4Z
Gusf4uP4g1nwfJEABs00MmoGlbVG+/LS/EGa/Mv/H8XIgjDpq9JPpva/7qppQKh8
pbdFtYBhKX9laBdFKtbZCSUs0ZmHpfgmY3VJ+/VQkiTaT6jkSTsA86yJBetgMsbz
IF1ob9tGCF7/qgheQAzeeshc5lrnACe/eUSFiPOoCJ8QiqDd8YUgqiHIrlCG5KuV
rXj1TtNkZihtfNlIo27zV7CtcXwBPDBWcz7veqfT7mCws9lWdcn1FS9Insn9r24O
UMSrUexKwAKgVyIQY8B3C3r9LEarR4LGlXKG7HWqJRffIRmLPQI/9XpZDU8o421N
3hohr3Y+66FtOgDIgKQow6N6x4+IOMjQNB3rmP0QaSEN1sibMg53mYuraPcvP8re
QynN2XNQXfRJngs8Pr5UtXPFEdNHQQ72y/dImnJ8fsBKcEdDbUSWowlSj0qioadZ
sHKtrDBlccS8CdxvJOpWp4RRdYh+SmQzRSFlF29ykOWMjNNGG3HCUNH973APLEyv
iaX7cNWy3cGZBxvT1JGldrlpfzWgqYPOre4GTYmQ90bTJHbx62pEOS7aR/5FplEZ
1PD293c2NNxn+a0YKBnP
=MiQv
-----END PGP SIGNATURE-----

--et0SBvBJlFCOvNpXuCDD1V5CfAIwkUNIR--


--===============9170118933806053132==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9170118933806053132==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 11:12:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:12:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHRq-00012j-M6; Mon, 03 Feb 2014 11:12:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAHRp-00012e-B2
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:12:17 +0000
Received: from [193.109.254.147:63818] by server-13.bemta-14.messagelabs.com
	id 6A/26-01226-0997FE25; Mon, 03 Feb 2014 11:12:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391425934!1610314!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8732 invoked from network); 3 Feb 2014 11:12:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:12:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="97230044"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 11:12:14 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	06:12:14 -0500
Message-ID: <1391425932.10515.53.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefan Bader <stefan.bader@canonical.com>
Date: Mon, 3 Feb 2014 11:12:12 +0000
In-Reply-To: <52EF78A0.5050301@canonical.com>
References: <1366633243-17775-1-git-send-email-wei.liu2@citrix.com>
	<20130422.154139.1046488577191797292.davem@davemloft.net>
	<20130422195335.GA30755@zion.uk.xensource.com>
	<20140202072323.GA12154@u109add4315675089e695.ant.amazon.com>
	<20140203103028.GC713@zion.uk.xensource.com>
	<1391423956.10515.39.camel@kazak.uk.xensource.com>
	<52EF7453.2080207@canonical.com> <52EF78A0.5050301@canonical.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Matt Wilson <msw@linux.com>, Wei Liu <wei.liu2@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH net-next V7 0/4] Bundle fixes for Xen
 netfront / netback
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 12:08 +0100, Stefan Bader wrote:
> It turns out that most of the series was applied to the 3.8.y.z longterm we look
> after and through that made its way into the Raring kernel which is based on
> that. Only the first patch of the series fails to apply. But that is only
> changing an error message which actually looks to be correct in that series.
> 
> http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.8.y

So all is good -- thanks!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:13:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:13:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHTL-000182-5O; Mon, 03 Feb 2014 11:13:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAHTJ-00017v-Nk
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:13:49 +0000
Received: from [193.109.254.147:33646] by server-12.bemta-14.messagelabs.com
	id 20/5B-17220-DE97FE25; Mon, 03 Feb 2014 11:13:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391426026!1602139!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22727 invoked from network); 3 Feb 2014 11:13:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:13:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="99161694"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 11:13:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 06:13:44 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAHTE-0007pL-6e;
	Mon, 03 Feb 2014 11:13:44 +0000
Date: Mon, 3 Feb 2014 11:13:35 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52EF7618.7030402@citrix.com>
Message-ID: <alpine.DEB.2.02.1402031103070.4373@kaball.uk.xensource.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
	<52EE1E26.2040308@linaro.org> <52EE93F0.1020508@citrix.com>
	<52EF7618.7030402@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1122221688-1391425897=:4373"
Content-ID: <alpine.DEB.2.02.1402031111440.4373@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1122221688-1391425897=:4373
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1402031111441.4373@kaball.uk.xensource.com>

On Mon, 3 Feb 2014, David Vrabel wrote:
> On 02/02/14 18:52, Zoltan Kiss wrote:
> > On 02/02/14 11:29, Julien Grall wrote:
> >> Hello,
> >>
> >> This patch is breaking Linux compilation on ARM:
> >>
> >> drivers/xen/grant-table.c: In function ‘__gnttab_map_refs’:
> >> drivers/xen/grant-table.c:989:3: error: implicit declaration of
> >> function ‘FOREIGN_FRAME’ [-Werror=implicit-function-declaration]
> >>     if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> >>     ^
> >> drivers/xen/grant-table.c: In function ‘__gnttab_unmap_refs’:
> >> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
> >> function ‘get_phys_to_machine’ [-Werror=implicit-function-declaration]
> >>     mfn = get_phys_to_machine(pfn);
> >>     ^
> >> drivers/xen/grant-table.c:1055:43: error: ‘FOREIGN_FRAME_BIT’
> >> undeclared (first use in this function)
> >>     if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> >>                                             ^
> >> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
> >> reported only once for each function it appears in
> >> drivers/xen/grant-table.c:1068:9: error: too many arguments to
> >> function ‘m2p_remove_override’
> >>           mfn);
> >>           ^
> >> In file included from include/xen/page.h:4:0,
> >>                   from drivers/xen/grant-table.c:48:
> >> /local/home/julien/works/midway/linux/arch/arm/include/asm/xen/page.h:106:19:
> >> note: declared here
> >>   static inline int m2p_remove_override(struct page *page, bool
> >> clear_pte)
> >>                     ^
> >> cc1: some warnings being treated as errors
> >
> > Hi,
> >
> > That's bad indeed. I think the best solution is to put those parts
> > behind an #ifdef x86. The ones moved from x86/p2m.c to grant-table.c.
> > David, Stefano, what do you think?
>
> I don't think we want (more) #ifdef CONFIG_X86 in grant-table.c, and the
> arch-specific bits will have to be factored out into their own functions
> with suitable stubs provided for ARM.

We certainly don't want more ifdefs like that.


> But, this patch went in late and it's clearly not ready. So I think it
> should be reverted and we should aim to get it sorted out for 3.15.
>
> Konrad/Stefano (if you agree) please revert
> 08ece5bb2312b4510b161a6ef6682f37f4eac8a1 and send a pull request.

Unfortunately I have to agree: fixing the prototype of
m2p_remove_override and replacing get_phys_to_machine with pfn_to_mfn is
easy.
However, FOREIGN_FRAME is an x86-ism and I don't feel comfortable with
adding yet another #define under arch/arm/xen just to deal with x86
stuff that spills into common code.

Sorry for not spotting this earlier.


> Konrad, I also think you should look at adding an ARM build to your test
> system (I thought you had this already).

Let's talk about how to set it up offline.
--1342847746-1122221688-1391425897=:4373
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1122221688-1391425897=:4373--


From xen-devel-bounces@lists.xen.org Mon Feb 03 11:14:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHU9-0001Co-Js; Mon, 03 Feb 2014 11:14:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAHU8-0001Cc-8A
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:14:40 +0000
Received: from [193.109.254.147:47299] by server-1.bemta-14.messagelabs.com id
	00/40-15438-F1A7FE25; Mon, 03 Feb 2014 11:14:39 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391426077!1606588!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16658 invoked from network); 3 Feb 2014 11:14:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:14:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="99161880"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 11:14:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 06:14:36 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAHU4-0007qA-Ei;
	Mon, 03 Feb 2014 11:14:36 +0000
Date: Mon, 3 Feb 2014 11:14:27 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Josh Boyer <jwboyer@fedoraproject.org>
In-Reply-To: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402031114080.4373@kaball.uk.xensource.com>
References: <CA+5PVA4BGAN79T-k2_PBYgFeJYuJ5GB0z7+-2isAgpcbGjWN0Q@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Linux-Kernel@Vger. Kernel. Org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] Xen build error on ARM with 3.14 merge window
	kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2 Feb 2014, Josh Boyer wrote:
> Hi All,
> 
> With v3.13-11147-gb399c46 I'm seeing the following build errors for
> Xen on ARM.  I haven't been able to test Linus' recent tree yet, but I
> was wondering if anyone had seen this yet.
> 
> josh

Thanks for the report, we'll fix it as soon as possible.


> drivers/xen/grant-table.c: In function '__gnttab_map_refs':
> drivers/xen/grant-table.c:989:3: error: implicit declaration of
> function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
>    if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
>    ^
> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
> function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
>    mfn = get_phys_to_machine(pfn);
>    ^
> drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT'
> undeclared (first use in this function)
>    if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
>                                            ^
> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
> reported only once for each function it appears in
> drivers/xen/grant-table.c:1068:9: error: too many arguments to
> function 'm2p_remove_override'
>          mfn);
>          ^
> In file included from include/xen/page.h:4:0,
>                  from drivers/xen/grant-table.c:48:
> /home/jwboyer/rpmbuild/BUILD/kernel-3.13.fc20/linux-3.14.0-0.rc0.git20.1.fc20.armv7hl/arch/arm/include/asm/xen/page.h:106:19:
> note: declared here
>  static inline int m2p_remove_override(struct page *page, bool clear_pte)
>                    ^
> cc1: some warnings being treated as errors
> make[2]: *** [drivers/xen/grant-table.o] Error 1
> 


From xen-devel-bounces@lists.xen.org Mon Feb 03 11:20:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:20:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHZv-0001vK-G2; Mon, 03 Feb 2014 11:20:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <james.dingwall@zynstra.com>) id 1WAHZu-0001vF-TR
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:20:39 +0000
Received: from [85.158.143.35:51167] by server-2.bemta-4.messagelabs.com id
	F5/1C-10891-68B7FE25; Mon, 03 Feb 2014 11:20:38 +0000
X-Env-Sender: james.dingwall@zynstra.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391426437!2683029!1
X-Originating-IP: [213.199.154.80]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32527 invoked from network); 3 Feb 2014 11:20:37 -0000
Received: from mail-db3lp0080.outbound.protection.outlook.com (HELO
	emea01-db3-obe.outbound.protection.outlook.com) (213.199.154.80)
	by server-8.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	3 Feb 2014 11:20:37 -0000
Received: from AMSPRD0310HT003.eurprd03.prod.outlook.com (10.255.40.38) by
	DB4PR03MB572.eurprd03.prod.outlook.com (10.141.238.27) with Microsoft
	SMTP Server (TLS) id 15.0.868.8; Mon, 3 Feb 2014 11:20:36 +0000
Received: from [192.168.10.196] (193.63.64.25) by pod51013.outlook.com
	(10.255.40.38) with Microsoft SMTP Server (TLS) id 14.16.411.0;
	Mon, 3 Feb 2014 11:20:35 +0000
Message-ID: <52EF7B81.4080601@zynstra.com>
Date: Mon, 3 Feb 2014 11:20:33 +0000
From: James Dingwall <james.dingwall@zynstra.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:26.0) Gecko/20100101 Firefox/26.0 SeaMonkey/2.23
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
References: <52CBC700.1060602@zynstra.com> <52CE7E67.5080708@oracle.com>
	<52D64B87.6000400@zynstra.com> <52D69E0B.5020006@oracle.com>
	<52D6B8B6.5070302@zynstra.com> <52D7346A.3000300@oracle.com>
	<52E7E594.2050104@zynstra.com> <52E911CA.9020700@oracle.com>
	<52E91404.30602@zynstra.com>
	<20140131165654.GF23648@phenom.dumpdata.com>
	<20140203094912.GA5273@olila.local.net-space.pl>
In-Reply-To: <20140203094912.GA5273@olila.local.net-space.pl>
X-Originating-IP: [193.63.64.25]
X-Forefront-PRVS: 01110342A5
X-Forefront-Antispam-Report: SFV:NSPM;
	SFS:(10019001)(6009001)(377454003)(479174003)(51704005)(199002)(189002)(24454002)(80022001)(66066001)(36756003)(63696002)(47776003)(23756003)(76482001)(31966008)(46102001)(92726001)(81816001)(15975445006)(83506001)(47446002)(74502001)(83072002)(74662001)(81686001)(56776001)(54316002)(59766001)(77982001)(59896001)(64126003)(79102001)(15202345003)(74366001)(85852003)(56816005)(93516002)(80976001)(86362001)(47736001)(49866001)(50986001)(47976001)(80316001)(90146001)(85306002)(93136001)(19580395003)(50466002)(83322001)(81542001)(94946001)(94316002)(69226001)(74706001)(92566001)(51856001)(33656001)(74876001)(53806001)(54356001)(76796001)(76786001)(4396001)(81342001)(87936001);
	DIR:OUT; SFP:1102; SCL:1; SRVR:DB4PR03MB572;
	H:AMSPRD0310HT003.eurprd03.prod.outlook.com; CLIP:193.63.64.25;
	FPR:CCDFF215.A4DA5DC9.30F1BD7B.82E5B6F1.204B3; InfoNoRecordsA:1;
	MX:1; LANG:en; 
X-OriginatorOrg: zynstra.com
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel Kiper wrote:
> Hi,
>
> On Fri, Jan 31, 2014 at 11:56:54AM -0500, Konrad Rzeszutek Wilk wrote:
>> On Wed, Jan 29, 2014 at 02:45:24PM +0000, James Dingwall wrote:
>>> Bob Liu wrote:
>>>> On 01/29/2014 01:15 AM, James Dingwall wrote:
>>>>> Bob Liu wrote:
>>>>>> I have made a patch reserving an extra 10% of the original total memory;
>>>>>> this way I think we can make the system much more reliable in all cases.
>>>>>> Could you please have a test? You don't need to set
>>>>>> selfballoon_reserved_mb by yourself any more.
>>>>> I have to say that with this patch the situation has definitely
>>>>> improved.  I have been running it with 3.12.[78] and 3.13 and pushing it
>>>>> quite hard for the last 10 days or so.  Unfortunately yesterday I got an
>>>> Good news!
>>>>
>>>>> OOM during a compile (link) of webkit-gtk.  I think your patch is part
>>>>> of the solution but I'm not sure if the other bit is simply to be more
>>>>> generous with the guest memory allocation or something else.  Having
>>>>> tested with memory = 512  and no tmem I get an OOM with the same
>>>>> compile, with memory = 1024 and no tmem the compile completes ok (both
>>>>> cases without maxmem).  As my domains are usually started with memory =
>>>>> 512 and maxmem = 1024 it seems that there should be sufficient with my
> Hmmm... James, how do you build webkit-gtk? Just a simple "make" or "make -j"?
> Could you confirm that none of webkit-gtk's "subjobs" use "make -j"?
My guest domain is Gentoo and I have MAKEOPTS="-j2" set in make.conf and 
according to the build log for webkit-gtk this is used unchanged:
 >>> Source configured.
 >>> Compiling source in 
/var/tmp/portage/net-libs/webkit-gtk-2.0.4/work/webkitgtk-2.0.4 ...
make -j2

I wouldn't read anything in particular into it being webkit, as I have 
seen similar failures from other large compiles (gcc, glibc, kdelibs, ...).
>
>>>> But I think from the beginning tmem/balloon driver can't expand guest
>>>> memory from size 'memory' to 'maxmem' automatically.
>>> I am carrying this patch for libxl (4.3.1) because maxmem wasn't
>>> being honoured.
> James, what do you mean by "maxmem wasn't being honoured"?
http://lists.xen.org/archives/html/xen-devel/2013-10/msg02228.html - the 
guest would never balloon above the value of memory when maxmem was set 
and was > memory in the configuration file.  There were some discussions 
about the correctness of this patch, but the only place I could see 
maxmem having an effect was in the parsing of the config parameters when 
setting up the guest domain.  IIRC, running xl mem-max (which changes 
the same parameter once the guest domain is running) did result in the 
expected balloon-up behaviour to maxmem.  So the discrepancy between how 
xl treats maxmem in the config file and how it behaves after an explicit 
xl mem-max was what I had noted and what this patch resolved.  It would 
be easy to repeat those tests if necessary.
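For reference, the split being described is just two lines in the
guest's xl config; the domain name and sizes here are illustrative, not
taken from the setup above:

```
# Guest starts at `memory` and, when ballooning behaves as intended,
# may grow up to `maxmem`:
name   = "guest0"
memory = 512     # initial allocation, in MiB
maxmem = 1024    # ceiling the balloon driver may reach
```

At runtime, `xl mem-max <domain> <MiB>` adjusts the same ceiling, which
is the path that was observed to behave correctly.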

James
>
>> Daniel,
>>
>> Weren't you working on a similar patch? Do you recall what happened to it?
> Yep, and it was not applied because it was not mature enough.  Additionally,
> this patch is part of a bigger puzzle.  I have it on my todo list but I would
> like to finish the EFI stuff first.  However, if you wish I could get back to
> this work (if it is more important than EFI right now).
>
> Daniel


From xen-devel-bounces@lists.xen.org Mon Feb 03 11:25:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHeQ-0002Ge-Ez; Mon, 03 Feb 2014 11:25:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WAHeO-0002GM-VP; Mon, 03 Feb 2014 11:25:17 +0000
Received: from [85.158.139.211:36211] by server-7.bemta-5.messagelabs.com id
	99/A5-14867-B9C7FE25; Mon, 03 Feb 2014 11:25:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391426713!1257993!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29996 invoked from network); 3 Feb 2014 11:25:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:25:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="99164283"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 11:25:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	06:25:12 -0500
Message-ID: <1391426710.10515.56.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Russ Pavlicek <russell.pavlicek@xenproject.org>
Date: Mon, 3 Feb 2014 11:25:10 +0000
In-Reply-To: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
References: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-api@lists.xen.org, "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Today is Xen Project Test Day for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 23:56 -0500, Russ Pavlicek wrote:
> This is a reminder that today is the Xen Project Test Day for Xen 4.4 RC3.
> 
> RC3 is the first release candidate to include a testable PVH.
> 
> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
> 
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
> 
> Developers: please consider monitoring the Freenode IRC channel
> #xentest today to make sure that people are able to build and test the
> code.

Note that Freenode is currently subject to a DDOS which is making it
hard to join/stay in the channel.

https://twitter.com/freenodestaff/status/430272930078273536

Hopefully it will be resolved through the day.

Ian.

> 
> Hope to see you today on #xentest!
> 
> Russ
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Mon Feb 03 11:25:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:25:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHeQ-0002Ge-Ez; Mon, 03 Feb 2014 11:25:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WAHeO-0002GM-VP; Mon, 03 Feb 2014 11:25:17 +0000
Received: from [85.158.139.211:36211] by server-7.bemta-5.messagelabs.com id
	99/A5-14867-B9C7FE25; Mon, 03 Feb 2014 11:25:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391426713!1257993!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29996 invoked from network); 3 Feb 2014 11:25:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:25:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,771,1384300800"; d="scan'208";a="99164283"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 11:25:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	06:25:12 -0500
Message-ID: <1391426710.10515.56.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Russ Pavlicek <russell.pavlicek@xenproject.org>
Date: Mon, 3 Feb 2014 11:25:10 +0000
In-Reply-To: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
References: <CAHehzX1ZZSP2j0v5DUzOc4Mifxy-Za0X7f2qGUxrKVJVyBvFxQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-api@lists.xen.org, "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Today is Xen Project Test Day for 4.4 RC3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-02 at 23:56 -0500, Russ Pavlicek wrote:
> This is a reminder that today is the Xen Project Test Day for Xen 4.4 RC3.
> 
> RC3 is the first release candidate to include a testable PVH.
> 
> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
> 
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
> 
> Developers: please consider monitoring the Freenode IRC channel
> #xentest today to make sure that people are able to build and test the
> code.

Note that Freenode is currently subject to a DDOS which is making it
hard to join/stay in the channel.

https://twitter.com/freenodestaff/status/430272930078273536

Hopefully it will be resolved over the course of the day.

Ian.

> 
> Hope to see you today on #xentest!
> 
> Russ
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:28:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHhA-0002ZW-Nt; Mon, 03 Feb 2014 11:28:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WAHh9-0002ZF-TU
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 11:28:08 +0000
Received: from [85.158.137.68:2341] by server-14.bemta-3.messagelabs.com id
	8A/72-08196-74D7FE25; Mon, 03 Feb 2014 11:28:07 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391426885!12184164!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20438 invoked from network); 3 Feb 2014 11:28:06 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-16.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	3 Feb 2014 11:28:06 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WAHh6-0000QR-F6
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 03:28:04 -0800
Date: Mon, 3 Feb 2014 03:28:04 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1391426884439-5721094.post@n5.nabble.com>
In-Reply-To: <1391353593813-5721087.post@n5.nabble.com>
References: <1391353593813-5721087.post@n5.nabble.com>
MIME-Version: 1.0
Subject: Re: [Xen-devel] xen and SMP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

What is Xen's design on SMP systems?
Is it an asymmetric VMM?
I can't find documentation answering this question.

Best Regards 



--
View this message in context: http://xen.1045712.n5.nabble.com/xen-and-SMP-tp5721087p5721094.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:36:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:36:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHpJ-00034a-OD; Mon, 03 Feb 2014 11:36:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAHpI-00034V-HZ
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:36:32 +0000
Received: from [85.158.137.68:48295] by server-7.bemta-3.messagelabs.com id
	46/23-13775-F3F7FE25; Mon, 03 Feb 2014 11:36:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391427390!11823613!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23669 invoked from network); 3 Feb 2014 11:36:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Feb 2014 11:36:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 03 Feb 2014 11:36:35 +0000
Message-Id: <52EF8D4A0200007800118A67@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 03 Feb 2014 11:36:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <52EB9F450200007800118618@nat28.tlf.novell.com>
	<1391420407.10515.10.camel@kazak.uk.xensource.com>
In-Reply-To: <1391420407.10515.10.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.02.14 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Fri, 2014-01-31 at 12:04 +0000, Jan Beulich wrote:
>> Roger,
>> 
>> so you introduced this, yet looking in a little closer detail I can't seem
>> to understand why: struct blkif_request_segment is identical in layout,
>> the sole difference between the two is that in the new structure the
>> padding field has a name, whereas in the old one it doesn't.
> 
> Is this something to do with Linux' use of __attribute__((packed)) once
> again causing confusion? (I really hope not API deviation...)

Yes, I think it has to do with the way Linux defines these
structures: my assumption is that the embedded (but not so
attributed) definition of struct blkif_request_segment inside
struct blkif_request_rw was assumed to also be packed (which it
isn't, or else upstream Linux front-/backends wouldn't work with
other back-/frontends), thus apparently making it necessary to
have an "aligned" (i.e. un-packed) variant thereof.

Jan

>> I'd really like to get rid of this redundant type again, unless there's a
>> reason for it to be there which I'm overlooking.
>> 
>> Jan
>> 
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHrl-0003G9-Fj; Mon, 03 Feb 2014 11:39:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHrk-0003Fy-3V
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:04 +0000
Received: from [193.109.254.147:9103] by server-5.bemta-14.messagelabs.com id
	13/CE-16688-7DF7FE25; Mon, 03 Feb 2014 11:39:03 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391427541!1608673!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjU3MDM5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17399 invoked from network); 3 Feb 2014 11:39:02 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-27.messagelabs.com with SMTP;
	3 Feb 2014 11:39:02 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga102.ch.intel.com with ESMTP; 03 Feb 2014 03:39:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,771,1384329600"; d="scan'208";a="468983526"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 03 Feb 2014 03:38:58 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:26 +0800
Message-Id: <1391427391-120790-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 1/6] x86: detect and initialize Cache QoS
	Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Detect platform QoS feature status and enumerate the resource types,
one of which is to monitor the L3 cache occupancy.

Also introduce a Xen grub command line parameter to control the
QoS feature status.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/misc/xen-command-line.markdown |    7 ++
 xen/arch/x86/Makefile               |    1 +
 xen/arch/x86/pqos.c                 |  156 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                |    3 +
 xen/include/asm-x86/cpufeature.h    |    1 +
 xen/include/asm-x86/pqos.h          |   43 ++++++++++
 6 files changed, 211 insertions(+)
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 15aa404..7751ffe 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -770,6 +770,13 @@ This option can be specified more than once (up to 8 times at present).
 ### ple\_window
 > `= <integer>`
 
+### pqos (Intel)
+> `= List of ( <boolean> | cqm:<boolean> | cqm_max_rmid:<integer> )`
+
+> Default: `pqos=1,cqm:1,cqm_max_rmid:255`
+
+Configure platform QoS services.
+
 ### reboot
 > `= t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..54962e0 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += pqos.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
new file mode 100644
index 0000000..ba0de37
--- /dev/null
+++ b/xen/arch/x86/pqos.c
@@ -0,0 +1,156 @@
+/*
+ * pqos.c: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#include <asm/processor.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <asm/pqos.h>
+
+static bool_t __initdata opt_pqos = 1;
+static bool_t __initdata opt_cqm = 1;
+static unsigned int __initdata opt_cqm_max_rmid = 255;
+
+static void __init parse_pqos_param(char *s)
+{
+    char *ss, *val_str;
+    int val;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        val = parse_bool(s);
+        if ( val >= 0 )
+            opt_pqos = val;
+        else
+        {
+            val = !!strncmp(s, "no-", 3);
+            if ( !val )
+                s += 3;
+
+            val_str = strchr(s, ':');
+            if ( val_str )
+                *val_str++ = '\0';
+
+            if ( val_str && !strcmp(s, "cqm") &&
+                 (val = parse_bool(val_str)) >= 0 )
+                opt_cqm = val;
+            else if ( val_str && !strcmp(s, "cqm_max_rmid") )
+                opt_cqm_max_rmid = simple_strtoul(val_str, NULL, 0);
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+custom_param("pqos", parse_pqos_param);
+
+struct pqos_cqm __read_mostly *cqm = NULL;
+
+static void __init init_cqm(void)
+{
+    unsigned int rmid;
+    unsigned int eax, edx;
+    unsigned int cqm_pages;
+    unsigned int i;
+
+    if ( !opt_cqm_max_rmid )
+        return;
+
+    cqm = xzalloc(struct pqos_cqm);
+    if ( !cqm )
+        return;
+
+    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
+    if ( !(edx & QOS_MONITOR_EVTID_L3) )
+        goto out;
+
+    cqm->min_rmid = 1;
+    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
+
+    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
+    if ( !cqm->rmid_to_dom )
+        goto out;
+
+    /* Reserve RMID 0 for all domains not being monitored */
+    cqm->rmid_to_dom[0] = DOMID_XEN;
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+
+    /* Allocate CQM buffer size in initialization stage */
+    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
+                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
+                PAGE_SIZE + 1;
+    cqm->buffer_size = cqm_pages * PAGE_SIZE;
+
+    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
+    if ( !cqm->buffer )
+    {
+        xfree(cqm->rmid_to_dom);
+        goto out;
+    }
+    memset(cqm->buffer, 0, cqm->buffer_size);
+
+    for ( i = 0; i < cqm_pages; i++ )
+        share_xen_page_with_privileged_guests(
+            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),
+            XENSHARE_readonly);
+
+    spin_lock_init(&cqm->cqm_lock);
+
+    cqm->used_rmid = 0;
+
+    printk(XENLOG_INFO "Cache QoS Monitoring Enabled.\n");
+
+    return;
+
+out:
+    xfree(cqm);
+    cqm = NULL;
+}
+
+static void __init init_qos_monitor(void)
+{
+    unsigned int qm_features;
+    unsigned int eax, ebx, ecx;
+
+    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
+        return;
+
+    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
+
+    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
+        init_cqm();
+}
+
+void __init init_platform_qos(void)
+{
+    if ( !opt_pqos )
+        return;
+
+    init_qos_monitor();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..639528f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,6 +48,7 @@
 #include <asm/setup.h>
 #include <xen/cpu.h>
 #include <asm/nmi.h>
+#include <asm/pqos.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
@@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     domain_unpause_by_systemcontroller(dom0);
 
+    init_platform_qos();
+
     reset_stack_and_jump(init_done);
 }
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..ca59668 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -147,6 +147,7 @@
 #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
+#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
new file mode 100644
index 0000000..0a8065c
--- /dev/null
+++ b/xen/include/asm-x86/pqos.h
@@ -0,0 +1,43 @@
+/*
+ * pqos.h: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef ASM_PQOS_H
+#define ASM_PQOS_H
+
+#include <public/xen.h>
+#include <xen/spinlock.h>
+
+/* QoS Resource Type Enumeration */
+#define QOS_MONITOR_TYPE_L3            0x2
+
+/* QoS Monitoring Event ID */
+#define QOS_MONITOR_EVTID_L3           0x1
+
+struct pqos_cqm {
+    spinlock_t cqm_lock;
+    uint64_t *buffer;
+    unsigned int min_rmid;
+    unsigned int max_rmid;
+    unsigned int used_rmid;
+    unsigned int upscaling_factor;
+    unsigned int buffer_size;
+    domid_t *rmid_to_dom;
+};
+extern struct pqos_cqm *cqm;
+
+void init_platform_qos(void);
+
+#endif
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+    unsigned int eax, edx;
+    unsigned int cqm_pages;
+    unsigned int i;
+
+    if ( !opt_cqm_max_rmid )
+        return;
+
+    cqm = xzalloc(struct pqos_cqm);
+    if ( !cqm )
+        return;
+
+    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
+    if ( !(edx & QOS_MONITOR_EVTID_L3) )
+        goto out;
+
+    cqm->min_rmid = 1;
+    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
+
+    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
+    if ( !cqm->rmid_to_dom )
+        goto out;
+
+    /* Reserve RMID 0 for all domains not being monitored */
+    cqm->rmid_to_dom[0] = DOMID_XEN;
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+
+    /* Allocate CQM buffer size in initialization stage */
+    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
+                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
+                PAGE_SIZE + 1;
+    cqm->buffer_size = cqm_pages * PAGE_SIZE;
+
+    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
+    if ( !cqm->buffer )
+    {
+        xfree(cqm->rmid_to_dom);
+        goto out;
+    }
+    memset(cqm->buffer, 0, cqm->buffer_size);
+
+    for ( i = 0; i < cqm_pages; i++ )
+        share_xen_page_with_privileged_guests(
+            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),
+            XENSHARE_readonly);
+
+    spin_lock_init(&cqm->cqm_lock);
+
+    cqm->used_rmid = 0;
+
+    printk(XENLOG_INFO "Cache QoS Monitoring Enabled.\n");
+
+    return;
+
+out:
+    xfree(cqm);
+    cqm = NULL;
+}
+
+static void __init init_qos_monitor(void)
+{
+    unsigned int qm_features;
+    unsigned int eax, ebx, ecx;
+
+    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
+        return;
+
+    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
+
+    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
+        init_cqm();
+}
+
+void __init init_platform_qos(void)
+{
+    if ( !opt_pqos )
+        return;
+
+    init_qos_monitor();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..639528f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,6 +48,7 @@
 #include <asm/setup.h>
 #include <xen/cpu.h>
 #include <asm/nmi.h>
+#include <asm/pqos.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
@@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     domain_unpause_by_systemcontroller(dom0);
 
+    init_platform_qos();
+
     reset_stack_and_jump(init_done);
 }
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..ca59668 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -147,6 +147,7 @@
 #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
+#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
new file mode 100644
index 0000000..0a8065c
--- /dev/null
+++ b/xen/include/asm-x86/pqos.h
@@ -0,0 +1,43 @@
+/*
+ * pqos.h: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef ASM_PQOS_H
+#define ASM_PQOS_H
+
+#include <public/xen.h>
+#include <xen/spinlock.h>
+
+/* QoS Resource Type Enumeration */
+#define QOS_MONITOR_TYPE_L3            0x2
+
+/* QoS Monitoring Event ID */
+#define QOS_MONITOR_EVTID_L3           0x1
+
+struct pqos_cqm {
+    spinlock_t cqm_lock;
+    uint64_t *buffer;
+    unsigned int min_rmid;
+    unsigned int max_rmid;
+    unsigned int used_rmid;
+    unsigned int upscaling_factor;
+    unsigned int buffer_size;
+    domid_t *rmid_to_dom;
+};
+extern struct pqos_cqm *cqm;
+
+void init_platform_qos(void);
+
+#endif
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHrn-0003Gs-SW; Mon, 03 Feb 2014 11:39:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHrm-0003GI-8m
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:06 +0000
Received: from [85.158.137.68:6224] by server-1.bemta-3.messagelabs.com id
	4F/74-17293-9DF7FE25; Mon, 03 Feb 2014 11:39:05 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391427543!12980745!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8559 invoked from network); 3 Feb 2014 11:39:04 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-5.tower-31.messagelabs.com with SMTP;
	3 Feb 2014 11:39:04 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 03 Feb 2014 03:39:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,771,1384329600"; d="scan'208";a="475079217"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga002.fm.intel.com with ESMTP; 03 Feb 2014 03:39:00 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:27 +0800
Message-Id: <1391427391-120790-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 2/6] x86: dynamically attach/detach CQM
	service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add hypervisor-side support for dynamically attaching and detaching the
CQM service for a given guest.

When the CQM service is attached to a guest, the system allocates an
RMID for it. When the service is detached or the guest is shut down,
the RMID is reclaimed for future use.
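
The RMID lifecycle can be modelled in isolation; a toy sketch (the
table size, domid constants and function names belong to this sketch,
and only the allocation pattern mirrors `alloc_cqm_rmid()` /
`free_cqm_rmid()` below, minus the locking and error codes):

```c
#include <assert.h>

#define MAX_RMID      4        /* small pool for demonstration */
#define DOMID_INVALID 0x7FF4   /* slot is free */
#define DOMID_XEN     0x7FF2   /* owner of the reserved RMID 0 */

/* RMID 0 is reserved for "not monitored"; the rest start out free. */
static int rmid_to_dom[MAX_RMID + 1];

static void rmid_table_init(void)
{
    rmid_to_dom[0] = DOMID_XEN;
    for ( int r = 1; r <= MAX_RMID; r++ )
        rmid_to_dom[r] = DOMID_INVALID;
}

/* Returns the allocated RMID, or 0 when the pool is exhausted
 * (the patch additionally reports -EUSERS in that case). */
static int alloc_rmid(int domid)
{
    for ( int r = 1; r <= MAX_RMID; r++ )
        if ( rmid_to_dom[r] == DOMID_INVALID )
        {
            rmid_to_dom[r] = domid;
            return r;
        }
    return 0;
}

static void free_rmid(int rmid)
{
    if ( rmid )                /* never free the reserved RMID 0 */
        rmid_to_dom[rmid] = DOMID_INVALID;
}
```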

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c        |    3 +++
 xen/arch/x86/domctl.c        |   28 ++++++++++++++++++++
 xen/arch/x86/pqos.c          |   60 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |    2 ++
 xen/include/asm-x86/pqos.h   |   12 +++++++++
 xen/include/public/domctl.h  |   11 ++++++++
 6 files changed, 116 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 16f2b50..2656204 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -60,6 +60,7 @@
 #include <xen/numa.h>
 #include <xen/iommu.h>
 #include <compat/vcpu.h>
+#include <asm/pqos.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 DEFINE_PER_CPU(unsigned long, cr4);
@@ -612,6 +613,8 @@ void arch_domain_destroy(struct domain *d)
 
     free_xenheap_page(d->shared_info);
     cleanup_domain_irq_mapping(d);
+
+    free_cqm_rmid(d);
 }
 
 unsigned long pv_guest_cr4_fixup(const struct vcpu *v, unsigned long guest_cr4)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..7219011 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,6 +35,7 @@
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
+#include <asm/pqos.h>
 
 static int gdbsx_guest_mem_io(
     domid_t domid, struct xen_domctl_gdbsx_memio *iop)
@@ -1245,6 +1246,33 @@ long arch_do_domctl(
     }
     break;
 
+    case XEN_DOMCTL_attach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else
+            ret = alloc_cqm_rmid(d);
+    }
+    break;
+
+    case XEN_DOMCTL_detach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else if ( d->arch.pqos_cqm_rmid > 0 )
+        {
+            free_cqm_rmid(d);
+            ret = 0;
+        }
+        else
+            ret = -ENOENT;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index ba0de37..eb469ac 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <xen/init.h>
 #include <xen/mm.h>
+#include <xen/spinlock.h>
 #include <asm/pqos.h>
 
 static bool_t __initdata opt_pqos = 1;
@@ -145,6 +146,65 @@ void __init init_platform_qos(void)
     init_qos_monitor();
 }
 
+int alloc_cqm_rmid(struct domain *d)
+{
+    int rc = 0;
+    unsigned int rmid;
+
+    ASSERT(system_supports_cqm());
+
+    spin_lock(&cqm->cqm_lock);
+
+    if ( d->arch.pqos_cqm_rmid > 0 )
+    {
+        rc = -EEXIST;
+        goto out;
+    }
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
+            continue;
+
+        cqm->rmid_to_dom[rmid] = d->domain_id;
+        break;
+    }
+
+    /* No CQM RMID available, assign RMID=0 by default */
+    if ( rmid > cqm->max_rmid )
+    {
+        rmid = 0;
+        rc = -EUSERS;
+    }
+    else
+        cqm->used_rmid++;
+
+    d->arch.pqos_cqm_rmid = rmid;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+
+    return rc;
+}
+
+void free_cqm_rmid(struct domain *d)
+{
+    unsigned int rmid;
+
+    spin_lock(&cqm->cqm_lock);
+    rmid = d->arch.pqos_cqm_rmid;
+    /* We do not free system reserved "RMID=0" */
+    if ( rmid == 0 )
+        goto out;
+
+    cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+    d->arch.pqos_cqm_rmid = 0;
+    cqm->used_rmid--;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..662714d 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -313,6 +313,8 @@ struct arch_domain
     spinlock_t e820_lock;
     struct e820entry *e820;
     unsigned int nr_e820;
+
+    unsigned int pqos_cqm_rmid;       /* CQM RMID assigned to the domain */
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 0a8065c..f25037d 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -16,6 +16,7 @@
  */
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
+#include <xen/sched.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -38,6 +39,17 @@ struct pqos_cqm {
 };
 extern struct pqos_cqm *cqm;
 
+static inline bool_t system_supports_cqm(void)
+{
+    return !!cqm;
+}
+
+/* IA32_QM_CTR */
+#define IA32_QM_CTR_ERROR_MASK         (0x3ul << 62)
+
 void init_platform_qos(void);
 
+int alloc_cqm_rmid(struct domain *d);
+void free_cqm_rmid(struct domain *d);
+
 #endif
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f8d9293 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,14 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_qos_type {
+#define _XEN_DOMCTL_pqos_cqm      0
+#define XEN_DOMCTL_pqos_cqm       (1U<<_XEN_DOMCTL_pqos_cqm)
+    uint64_t flags;
+};
+typedef struct xen_domctl_qos_type xen_domctl_qos_type_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_qos_type_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +962,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_attach_pqos                   71
+#define XEN_DOMCTL_detach_pqos                   72
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1014,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
+        struct xen_domctl_qos_type          qos_type;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHrt-0003JW-AS; Mon, 03 Feb 2014 11:39:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHrr-0003Ir-Rt
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:12 +0000
Received: from [85.158.139.211:57102] by server-11.bemta-5.messagelabs.com id
	99/F1-23886-FDF7FE25; Mon, 03 Feb 2014 11:39:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391427549!1259978!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3623 invoked from network); 3 Feb 2014 11:39:10 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-206.messagelabs.com with SMTP;
	3 Feb 2014 11:39:10 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 03 Feb 2014 03:34:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,771,1384329600"; d="scan'208";a="477104811"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga002.jf.intel.com with ESMTP; 03 Feb 2014 03:39:05 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:29 +0800
Message-Id: <1391427391-120790-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 4/6] x86: enable CQM monitoring for each
	domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the CQM service is attached to a domain, its RMID is programmed into
hardware whenever one of the domain's vcpus is scheduled in. When the
vcpu is scheduled out, RMID 0 (system reserved) is programmed instead.
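
A minimal model of the RMID field update performed at context switch
(the MSR read/write is replaced by a plain value; `count_order()` is
this sketch's stand-in for Xen's `get_count_order()`, and the mask
derivation follows the patch rather than any particular SDM field
width):

```c
#include <assert.h>
#include <stdint.h>

/* Smallest n with (1 << n) >= x, i.e. ceil(log2(x)). */
static unsigned int count_order(unsigned int x)
{
    unsigned int n = 0;
    while ( (1ull << n) < x )
        n++;
    return n;
}

/* Model of cqm_assoc_rmid(): replace only the low RMID bits of the
 * IA32_PQR_ASSOC value, leaving all upper bits untouched. */
static uint64_t assoc_rmid(uint64_t pqr, unsigned int rmid,
                           unsigned int max_rmid)
{
    uint64_t rmid_mask = ~(~0ull << count_order(max_rmid));

    return (pqr & ~rmid_mask) | (rmid & rmid_mask);
}
```

The write is skipped in the patch when the value is unchanged, which
matters because this runs on every context switch.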

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c           |    5 +++++
 xen/arch/x86/pqos.c             |   14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |    1 +
 3 files changed, 20 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
 
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
 custom_param("pqos", parse_pqos_param);
 
 struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
 
 static void __init init_cqm(void)
 {
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
 
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHrt-0003JW-AS; Mon, 03 Feb 2014 11:39:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHrr-0003Ir-Rt
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:12 +0000
Received: from [85.158.139.211:57102] by server-11.bemta-5.messagelabs.com id
	99/F1-23886-FDF7FE25; Mon, 03 Feb 2014 11:39:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391427549!1259978!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3623 invoked from network); 3 Feb 2014 11:39:10 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-206.messagelabs.com with SMTP;
	3 Feb 2014 11:39:10 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 03 Feb 2014 03:34:56 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,771,1384329600"; d="scan'208";a="477104811"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga002.jf.intel.com with ESMTP; 03 Feb 2014 03:39:05 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:29 +0800
Message-Id: <1391427391-120790-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 4/6] x86: enable CQM monitoring for each
	domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the CQM service is attached to a domain, its associated RMID is
written to hardware for monitoring whenever one of the domain's vCPUs
is scheduled in. When the vCPU is scheduled out, RMID 0 (reserved for
the system) is written instead.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c           |    5 +++++
 xen/arch/x86/pqos.c             |   14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |    1 +
 3 files changed, 20 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
 
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
 custom_param("pqos", parse_pqos_param);
 
 struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
 
 static void __init init_cqm(void)
 {
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
 
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHs4-0003NR-NY; Mon, 03 Feb 2014 11:39:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHs3-0003Ml-2m
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:23 +0000
Received: from [193.109.254.147:6875] by server-10.bemta-14.messagelabs.com id
	60/55-10711-AEF7FE25; Mon, 03 Feb 2014 11:39:22 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391427541!1608673!2
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjU3MDM5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17901 invoked from network); 3 Feb 2014 11:39:05 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-27.messagelabs.com with SMTP;
	3 Feb 2014 11:39:05 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga102.ch.intel.com with ESMTP; 03 Feb 2014 03:39:04 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,771,1384329600"; d="scan'208";a="468983555"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 03 Feb 2014 03:39:02 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:28 +0800
Message-Id: <1391427391-120790-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 3/6] x86: collect CQM information from all
	sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect CQM information (L3 cache occupancy) from all sockets.
An upper-layer application can parse the returned data structure to
obtain a guest's L3 cache occupancy on particular sockets.

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/pqos.c             |   43 ++++++++++++++++++++++++++++
 xen/arch/x86/sysctl.c           |   59 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/msr-index.h |    4 +++
 xen/include/asm-x86/pqos.h      |    4 +++
 xen/include/public/sysctl.h     |   11 ++++++++
 5 files changed, 121 insertions(+)

diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index eb469ac..2cde56e 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -15,6 +15,7 @@
  * more details.
  */
 #include <asm/processor.h>
+#include <asm/msr.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/spinlock.h>
@@ -205,6 +206,48 @@ out:
     spin_unlock(&cqm->cqm_lock);
 }
 
+static void read_cqm_data(void *arg)
+{
+    uint64_t cqm_data;
+    unsigned int rmid;
+    int socket = cpu_to_socket(smp_processor_id());
+    unsigned long i;
+
+    ASSERT(system_supports_cqm());
+
+    if ( socket < 0 )
+        return;
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
+            continue;
+
+        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
+        rdmsrl(MSR_IA32_QMC, cqm_data);
+
+        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
+        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
+            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;
+    }
+}
+
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
+{
+    unsigned int nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+    unsigned int nr_rmids = cqm->max_rmid + 1;
+
+    /* Read CQM data in current CPU */
+    read_cqm_data(NULL);
+    /* Issue IPI to other CPUs to read CQM data */
+    on_selected_cpus(cpu_cqmdata_map, read_cqm_data, NULL, 1);
+
+    /* Copy the rmid_to_dom info to the buffer */
+    memcpy(cqm->buffer + nr_sockets * nr_rmids, cqm->rmid_to_dom,
+           sizeof(domid_t) * (cqm->max_rmid + 1));
+
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..5391800 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -28,6 +28,7 @@
 #include <xen/nodemask.h>
 #include <xen/cpu.h>
 #include <xsm/xsm.h>
+#include <asm/pqos.h>
 
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 
@@ -66,6 +67,30 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+/* Select one random CPU for each socket. Current CPU's socket is excluded */
+static void select_socket_cpu(cpumask_t *cpu_bitmap)
+{
+    int i;
+    unsigned int cpu;
+    int socket, socket_curr = cpu_to_socket(smp_processor_id());
+    DECLARE_BITMAP(sockets, NR_CPUS);
+
+    bitmap_zero(sockets, NR_CPUS);
+    if (socket_curr >= 0)
+        set_bit(socket_curr, sockets);
+
+    cpumask_clear(cpu_bitmap);
+    for ( i = 0; i < NR_CPUS; i++ )
+    {
+        socket = cpu_to_socket(i);
+        if ( socket < 0 || test_and_set_bit(socket, sockets) )
+            continue;
+        cpu = cpumask_any(per_cpu(cpu_core_mask, i));
+        if ( cpu < nr_cpu_ids )
+            cpumask_set_cpu(cpu, cpu_bitmap);
+    }
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +126,40 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_getcqminfo:
+    {
+        cpumask_var_t cpu_cqmdata_map;
+
+        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
+        {
+            ret = -ENOMEM;
+            break;
+        }
+
+        if ( !system_supports_cqm() )
+        {
+            ret = -ENODEV;
+            free_cpumask_var(cpu_cqmdata_map);
+            break;
+        }
+
+        memset(cqm->buffer, 0, cqm->buffer_size);
+
+        select_socket_cpu(cpu_cqmdata_map);
+        get_cqm_info(cpu_cqmdata_map);
+
+        sysctl->u.getcqminfo.buffer_mfn = virt_to_mfn(cqm->buffer);
+        sysctl->u.getcqminfo.size = cqm->buffer_size;
+        sysctl->u.getcqminfo.nr_rmids = cqm->max_rmid + 1;
+        sysctl->u.getcqminfo.nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+
+        if ( __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+
+        free_cpumask_var(cpu_cqmdata_map);
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e3ff10c 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -489,4 +489,8 @@
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
 
+/* Platform QoS register */
+#define MSR_IA32_QOSEVTSEL             0x00000c8d
+#define MSR_IA32_QMC                   0x00000c8e
+
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index f25037d..87820d5 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -17,6 +17,8 @@
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
 #include <xen/sched.h>
+#include <xen/cpumask.h>
+#include <public/domctl.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -51,5 +53,7 @@ void init_platform_qos(void);
 
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
+void cqm_assoc_rmid(unsigned int rmid);
 
 #endif
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..335b1d9 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -632,6 +632,15 @@ struct xen_sysctl_coverage_op {
 typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
+struct xen_sysctl_getcqminfo {
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xen_sysctl_getcqminfo xen_sysctl_getcqminfo_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_getcqminfo_t);
+
 
 struct xen_sysctl {
     uint32_t cmd;
@@ -654,6 +663,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_getcqminfo                    21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +685,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_getcqminfo        getcqminfo;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHs7-0003PC-DC; Mon, 03 Feb 2014 11:39:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHs5-0003O5-Mz
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:26 +0000
Received: from [193.109.254.147:7328] by server-4.bemta-14.messagelabs.com id
	16/D9-32066-CEF7FE25; Mon, 03 Feb 2014 11:39:24 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391427563!1605592!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4748 invoked from network); 3 Feb 2014 11:39:23 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-4.tower-27.messagelabs.com with SMTP;
	3 Feb 2014 11:39:23 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 03 Feb 2014 03:35:07 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,772,1384329600"; d="scan'208";a="448850719"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 03 Feb 2014 03:39:10 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:31 +0800
Message-Id: <1391427391-120790-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 6/6] tools: enable Cache QoS Monitoring
	feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce two new xl commands to attach/detach the CQM service for a guest:
$ xl pqos-attach cqm domid
$ xl pqos-detach cqm domid

Introduce one new xl command to retrieve guest CQM information:
$ xl pqos-list cqm

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/libxc/xc_domain.c     |   37 ++++++++++++
 tools/libxc/xenctrl.h       |   12 ++++
 tools/libxl/Makefile        |    3 +-
 tools/libxl/libxl.h         |    4 ++
 tools/libxl/libxl_pqos.c    |  132 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl |    7 +++
 tools/libxl/xl.h            |    3 +
 tools/libxl/xl_cmdimpl.c    |  111 ++++++++++++++++++++++++++++++++++++
 tools/libxl/xl_cmdtable.c   |   15 +++++
 9 files changed, 323 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_pqos.c

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..bcdffd2 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1776,6 +1776,43 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_attach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_detach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
+{
+    int ret = 0;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_getcqminfo;
+    if ( xc_sysctl(xch, &sysctl) < 0 )
+        ret = -1;
+    else
+    {
+        info->buffer_mfn = sysctl.u.getcqminfo.buffer_mfn;
+        info->size = sysctl.u.getcqminfo.size;
+        info->nr_rmids = sysctl.u.getcqminfo.nr_rmids;
+        info->nr_sockets = sysctl.u.getcqminfo.nr_sockets;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..f4eb198 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2427,4 +2427,16 @@ int xc_kexec_load(xc_interface *xch, uint8_t type, uint16_t arch,
  */
 int xc_kexec_unload(xc_interface *xch, int type);
 
+struct xc_cqminfo
+{
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xc_cqminfo xc_cqminfo_t;
+
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info);
 #endif /* XENCTRL_H */
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..8beb7f8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -76,7 +76,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
 			libxl_json.o libxl_aoutils.o libxl_numa.o \
 			libxl_save_callout.o _libxl_save_msgs_callout.o \
-			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
+			libxl_qmp.o libxl_event.o libxl_fork.o libxl_pqos.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
 $(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f3d2202 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
 int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
 int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
 
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);
+
 /* misc */
 
 /* Each of these sets or clears the flag according to whether the
diff --git a/tools/libxl/libxl_pqos.c b/tools/libxl/libxl_pqos.c
new file mode 100644
index 0000000..664a891
--- /dev/null
+++ b/tools/libxl/libxl_pqos.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2014      Intel Corporation
+ * Author Jiongxi Li <jiongxi.li@intel.com>
+ * Author Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+
+static const char * const msg[] = {
+    [EINVAL] = "invalid QoS resource type! Supported types: \"cqm\"",
+    [ENODEV] = "CQM is not supported in this system.",
+    [EEXIST] = "CQM is already attached to this domain.",
+    [ENOENT] = "CQM is not attached to this domain.",
+    [EUSERS] = "there is no free CQM RMID available.",
+    [ESRCH]  = "is this Domain ID valid?",
+};
+
+static void libxl_pqos_err_msg(libxl_ctx *ctx, int err)
+{
+    GC_INIT(ctx);
+
+    switch (err) {
+    case EINVAL:
+    case ENODEV:
+    case EEXIST:
+    case EUSERS:
+    case ESRCH:
+    case ENOENT:
+        LOGE(ERROR, "%s", msg[err]);
+        break;
+    default:
+        LOGE(ERROR, "errno: %d", err);
+    }
+
+    GC_FREE;
+}
+
+static int libxl_pqos_type2flags(const char * qos_type, uint64_t *flags)
+{
+    int rc = 0;
+
+    if (!strcmp(qos_type, "cqm"))
+        *flags |= XEN_DOMCTL_pqos_cqm;
+    else
+        rc = -1;
+
+    return rc;
+}
+
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_attach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_detach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo)
+{
+    int ret;
+    xc_cqminfo_t xcinfo;
+    GC_INIT(ctx);
+
+    ret = xc_domain_getcqminfo(ctx->xch, &xcinfo);
+    if (ret < 0) {
+        LOGE(ERROR, "getting domain cqm info");
+        return;
+    }
+
+    xlinfo->buffer_virt = (uint64_t)xc_map_foreign_range(ctx->xch, DOMID_XEN,
+                              xcinfo.size, PROT_READ, xcinfo.buffer_mfn);
+    if (xlinfo->buffer_virt == 0) {
+        LOGE(ERROR, "Failed to map cqm buffers");
+        return;
+    }
+    xlinfo->size = xcinfo.size;
+    xlinfo->nr_rmids = xcinfo.nr_rmids;
+    xlinfo->nr_sockets = xcinfo.nr_sockets;
+
+    GC_FREE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..43c0f48 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -596,3 +596,10 @@ libxl_event = Struct("event",[
                                  ])),
            ("domain_create_console_available", Struct(None, [])),
            ]))])
+
+libxl_cqminfo = Struct("cqminfo", [
+    ("buffer_virt",    uint64),
+    ("size",           uint32),
+    ("nr_rmids",       uint32),
+    ("nr_sockets",     uint32),
+    ])
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..4362b96 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -106,6 +106,9 @@ int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 int main_devd(int argc, char **argv);
+int main_pqosattach(int argc, char **argv);
+int main_pqosdetach(int argc, char **argv);
+int main_pqoslist(int argc, char **argv);
 
 void help(const char *command);
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..2e0b822 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7364,6 +7364,117 @@ out:
     return ret;
 }
 
+int main_pqosattach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-attach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_attach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+int main_pqosdetach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-detach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_detach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+static void print_cqm_info(const libxl_cqminfo *info)
+{
+    unsigned int i, j, k;
+    char *domname;
+    int print_header;
+    int cqm_domains = 0;
+    uint16_t *rmid_to_dom;
+    uint64_t *l3c_data;
+    uint32_t first_domain = 0;
+    unsigned int num_domains = 1024;
+
+    if (info->nr_rmids == 0) {
+        printf("System doesn't support CQM.\n");
+        return;
+    }
+
+    print_header = 1;
+    l3c_data = (uint64_t *)(info->buffer_virt);
+    rmid_to_dom = (uint16_t *)(info->buffer_virt +
+                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
+    for (i = first_domain; i < (first_domain + num_domains); i++) {
+        for (j = 0; j < info->nr_rmids; j++) {
+            if (rmid_to_dom[j] != i)
+                continue;
+
+            if (print_header) {
+                printf("Name                                        ID");
+                for (k = 0; k < info->nr_sockets; k++)
+                    printf("\tSocketID\tL3C_Usage");
+                print_header = 0;
+            }
+
+            domname = libxl_domid_to_name(ctx, i);
+            printf("\n%-40s %5d", domname, i);
+            free(domname);
+            cqm_domains++;
+
+            for (k = 0; k < info->nr_sockets; k++)
+                printf("%10u %16lu     ",
+                        k, l3c_data[info->nr_rmids * k + j]);
+        }
+    }
+    if (!cqm_domains)
+        printf("No RMID is assigned to domains.\n");
+    else
+        printf("\n");
+
+    printf("\nRMID count %5d\tRMID available %5d\n",
+           info->nr_rmids, info->nr_rmids - cqm_domains - 1);
+}
+
+int main_pqoslist(int argc, char **argv)
+{
+    int opt;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-list", 1) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+
+    if (!strcmp(qos_type, "cqm")) {
+        libxl_cqminfo info;
+        libxl_map_cqm_buffer(ctx, &info);
+        print_cqm_info(&info);
+    } else {
+        fprintf(stderr, "QoS resource type supported is: cqm.\n");
+        help("pqos-list");
+        return 2;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..d4af4a9 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
       "[options]",
       "-F                      Run in the foreground",
     },
+    { "pqos-attach",
+      &main_pqosattach, 0, 1,
+      "Allocate and map qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-detach",
+      &main_pqosdetach, 0, 1,
+      "Relinquish qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-list",
+      &main_pqoslist, 0, 0,
+      "List qos information for all domains",
+      "<Resource>",
+    },
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:27 2014
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:31 +0800
Message-Id: <1391427391-120790-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 6/6] tools: enable Cache QoS Monitoring
	feature for libxl/libxc

Introduce two new xl commands to attach/detach the CQM service for a guest
$ xl pqos-attach cqm domid
$ xl pqos-detach cqm domid

Introduce one new xl command to retrieve guest CQM information
$ xl pqos-list cqm

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/libxc/xc_domain.c     |   37 ++++++++++++
 tools/libxc/xenctrl.h       |   12 ++++
 tools/libxl/Makefile        |    3 +-
 tools/libxl/libxl.h         |    4 ++
 tools/libxl/libxl_pqos.c    |  132 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl |    7 +++
 tools/libxl/xl.h            |    3 +
 tools/libxl/xl_cmdimpl.c    |  111 ++++++++++++++++++++++++++++++++++++
 tools/libxl/xl_cmdtable.c   |   15 +++++
 9 files changed, 323 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_pqos.c

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..bcdffd2 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1776,6 +1776,43 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_attach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_detach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
+{
+    int ret = 0;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_getcqminfo;
+    if ( xc_sysctl(xch, &sysctl) < 0 )
+        ret = -1;
+    else
+    {
+        info->buffer_mfn = sysctl.u.getcqminfo.buffer_mfn;
+        info->size = sysctl.u.getcqminfo.size;
+        info->nr_rmids = sysctl.u.getcqminfo.nr_rmids;
+        info->nr_sockets = sysctl.u.getcqminfo.nr_sockets;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..f4eb198 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2427,4 +2427,16 @@ int xc_kexec_load(xc_interface *xch, uint8_t type, uint16_t arch,
  */
 int xc_kexec_unload(xc_interface *xch, int type);
 
+struct xc_cqminfo
+{
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xc_cqminfo xc_cqminfo_t;
+
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info);
 #endif /* XENCTRL_H */
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..8beb7f8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -76,7 +76,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
 			libxl_json.o libxl_aoutils.o libxl_numa.o \
 			libxl_save_callout.o _libxl_save_msgs_callout.o \
-			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
+			libxl_qmp.o libxl_event.o libxl_fork.o libxl_pqos.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
 $(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f3d2202 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
 int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
 int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
 
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);
+
 /* misc */
 
 /* Each of these sets or clears the flag according to whether the
diff --git a/tools/libxl/libxl_pqos.c b/tools/libxl/libxl_pqos.c
new file mode 100644
index 0000000..664a891
--- /dev/null
+++ b/tools/libxl/libxl_pqos.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2014      Intel Corporation
+ * Author Jiongxi Li <jiongxi.li@intel.com>
+ * Author Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+
+static const char * const msg[] = {
+    [EINVAL] = "invalid QoS resource type! Supported types: \"cqm\"",
+    [ENODEV] = "CQM is not supported in this system.",
+    [EEXIST] = "CQM is already attached to this domain.",
+    [ENOENT] = "CQM is not attached to this domain.",
+    [EUSERS] = "there is no free CQM RMID available.",
+    [ESRCH]  = "is this Domain ID valid?",
+};
+
+static void libxl_pqos_err_msg(libxl_ctx *ctx, int err)
+{
+    GC_INIT(ctx);
+
+    switch (err) {
+    case EINVAL:
+    case ENODEV:
+    case EEXIST:
+    case EUSERS:
+    case ESRCH:
+    case ENOENT:
+        LOGE(ERROR, "%s", msg[err]);
+        break;
+    default:
+        LOGE(ERROR, "errno: %d", err);
+    }
+
+    GC_FREE;
+}
+
+static int libxl_pqos_type2flags(const char * qos_type, uint64_t *flags)
+{
+    int rc = 0;
+
+    if (!strcmp(qos_type, "cqm"))
+        *flags |= XEN_DOMCTL_pqos_cqm;
+    else
+        rc = -1;
+
+    return rc;
+}
+
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_attach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_detach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo)
+{
+    int ret;
+    xc_cqminfo_t xcinfo;
+    GC_INIT(ctx);
+
+    ret = xc_domain_getcqminfo(ctx->xch, &xcinfo);
+    if (ret < 0) {
+        LOGE(ERROR, "getting domain cqm info");
+        return;
+    }
+
+    xlinfo->buffer_virt = (uint64_t)xc_map_foreign_range(ctx->xch, DOMID_XEN,
+                              xcinfo.size, PROT_READ, xcinfo.buffer_mfn);
+    if (xlinfo->buffer_virt == 0) {
+        LOGE(ERROR, "Failed to map cqm buffers");
+        return;
+    }
+    xlinfo->size = xcinfo.size;
+    xlinfo->nr_rmids = xcinfo.nr_rmids;
+    xlinfo->nr_sockets = xcinfo.nr_sockets;
+
+    GC_FREE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..43c0f48 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -596,3 +596,10 @@ libxl_event = Struct("event",[
                                  ])),
            ("domain_create_console_available", Struct(None, [])),
            ]))])
+
+libxl_cqminfo = Struct("cqminfo", [
+    ("buffer_virt",    uint64),
+    ("size",           uint32),
+    ("nr_rmids",       uint32),
+    ("nr_sockets",     uint32),
+    ])
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..4362b96 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -106,6 +106,9 @@ int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 int main_devd(int argc, char **argv);
+int main_pqosattach(int argc, char **argv);
+int main_pqosdetach(int argc, char **argv);
+int main_pqoslist(int argc, char **argv);
 
 void help(const char *command);
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..2e0b822 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7364,6 +7364,117 @@ out:
     return ret;
 }
 
+int main_pqosattach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-attach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_attach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+int main_pqosdetach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-detach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_detach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+static void print_cqm_info(const libxl_cqminfo *info)
+{
+    unsigned int i, j, k;
+    char *domname;
+    int print_header;
+    int cqm_domains = 0;
+    uint16_t *rmid_to_dom;
+    uint64_t *l3c_data;
+    uint32_t first_domain = 0;
+    unsigned int num_domains = 1024;
+
+    if (info->nr_rmids == 0) {
+        printf("System doesn't support CQM.\n");
+        return;
+    }
+
+    print_header = 1;
+    l3c_data = (uint64_t *)(info->buffer_virt);
+    rmid_to_dom = (uint16_t *)(info->buffer_virt +
+                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
+    for (i = first_domain; i < (first_domain + num_domains); i++) {
+        for (j = 0; j < info->nr_rmids; j++) {
+            if (rmid_to_dom[j] != i)
+                continue;
+
+            if (print_header) {
+                printf("Name                                        ID");
+                for (k = 0; k < info->nr_sockets; k++)
+                    printf("\tSocketID\tL3C_Usage");
+                print_header = 0;
+            }
+
+            domname = libxl_domid_to_name(ctx, i);
+            printf("\n%-40s %5d", domname, i);
+            free(domname);
+            cqm_domains++;
+
+            for (k = 0; k < info->nr_sockets; k++)
+                printf("%10u %16lu     ",
+                        k, l3c_data[info->nr_rmids * k + j]);
+        }
+    }
+    if (!cqm_domains)
+        printf("No RMID is assigned to domains.\n");
+    else
+        printf("\n");
+
+    printf("\nRMID count %5d\tRMID available %5d\n",
+           info->nr_rmids, info->nr_rmids - cqm_domains - 1);
+}
+
+int main_pqoslist(int argc, char **argv)
+{
+    int opt;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-list", 1) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+
+    if (!strcmp(qos_type, "cqm")) {
+        libxl_cqminfo info;
+        libxl_map_cqm_buffer(ctx, &info);
+        print_cqm_info(&info);
+    } else {
+        fprintf(stderr, "QoS resource type supported is: cqm.\n");
+        help("pqos-list");
+        return 2;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..d4af4a9 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
       "[options]",
       "-F                      Run in the foreground",
     },
+    { "pqos-attach",
+      &main_pqosattach, 0, 1,
+      "Allocate and map qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-detach",
+      &main_pqosdetach, 0, 1,
+      "Relinquish qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-list",
+      &main_pqoslist, 0, 0,
+      "List qos information for all domains",
+      "<Resource>",
+    },
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:29 2014
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:30 +0800
Message-Id: <1391427391-120790-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 5/6] xsm: add platform QoS related xsm
	policies

Add XSM policies for the pqos attach/detach and get-CQM-info
hypercalls.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 ++++-
 xen/xsm/flask/hooks.c                        |    8 ++++++++
 xen/xsm/flask/policy/access_vectors          |   17 ++++++++++++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index dedc035..1f683af 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			getscheduler getvcpuinfo getvcpuextstate getaddrsize
 			getaffinity setaffinity };
-	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim  set_max_evtchn };
+	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn pqos_op };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index bb59fe8..115fcfe 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -64,6 +64,9 @@ allow dom0_t xen_t:xen {
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op tmem_op
 	tmem_control getscheduler setscheduler
 };
+allow dom0_t xen_t:xen2 {
+	pqos_op
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
@@ -76,7 +79,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn
+	set_cpuid gettsc settsc setscheduler set_max_evtchn pqos_op
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..6ee7771 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -730,6 +730,10 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_attach_pqos:
+    case XEN_DOMCTL_detach_pqos:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__PQOS_OP);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -785,6 +789,10 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_numainfo:
         return domain_has_xen(current->domain, XEN__PHYSINFO);
 
+    case XEN_SYSCTL_getcqminfo:
+        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
+                                    XEN2__PQOS_OP, NULL);
+
     default:
         printk("flask_sysctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..91af8b2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -3,9 +3,9 @@
 #
 # class class_name { permission_name ... }
 
-# Class xen consists of dom0-only operations dealing with the hypervisor itself.
-# Unless otherwise specified, the source is the domain executing the hypercall,
-# and the target is the xen initial sid (type xen_t).
+# Class xen and xen2 consists of dom0-only operations dealing with the
+# hypervisor itself. Unless otherwise specified, the source is the domain
+# executing the hypercall, and the target is the xen initial sid (type xen_t).
 class xen
 {
 # XENPF_settime
@@ -75,6 +75,14 @@ class xen
     setscheduler
 }
 
+# This is a continuation of class xen, since only 32 permissions can be
+# defined per class
+class xen2
+{
+# XEN_SYSCTL_getcqminfo
+    pqos_op
+}
+
 # Classes domain and domain2 consist of operations that a domain performs on
 # another domain or on itself.  Unless otherwise specified, the source is the
 # domain executing the hypercall, and the target is the domain being operated on
@@ -196,6 +204,9 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_attach_pqos
+# XEN_DOMCTL_detach_pqos
+    pqos_op
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHs8-0003QD-Sz; Mon, 03 Feb 2014 11:39:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHs6-0003OQ-KK
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:26 +0000
Received: from [193.109.254.147:10133] by server-13.bemta-14.messagelabs.com
	id 2B/59-01226-DEF7FE25; Mon, 03 Feb 2014 11:39:25 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391427541!1608673!3
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjU3MDM5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18039 invoked from network); 3 Feb 2014 11:39:09 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-10.tower-27.messagelabs.com with SMTP;
	3 Feb 2014 11:39:09 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga102.ch.intel.com with ESMTP; 03 Feb 2014 03:39:09 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,771,1384329600"; d="scan'208";a="468983571"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 03 Feb 2014 03:39:07 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:30 +0800
Message-Id: <1391427391-120790-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 5/6] xsm: add platform QoS related xsm
	policies
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add xsm policies for the attach/detach pqos service hypercalls and the
get CQM info hypercall.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 ++++-
 xen/xsm/flask/hooks.c                        |    8 ++++++++
 xen/xsm/flask/policy/access_vectors          |   17 ++++++++++++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index dedc035..1f683af 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			getscheduler getvcpuinfo getvcpuextstate getaddrsize
 			getaffinity setaffinity };
-	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim  set_max_evtchn };
+	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn pqos_op };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index bb59fe8..115fcfe 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -64,6 +64,9 @@ allow dom0_t xen_t:xen {
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op tmem_op
 	tmem_control getscheduler setscheduler
 };
+allow dom0_t xen_t:xen2 {
+	pqos_op
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
@@ -76,7 +79,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn
+	set_cpuid gettsc settsc setscheduler set_max_evtchn pqos_op
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..6ee7771 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -730,6 +730,10 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_attach_pqos:
+    case XEN_DOMCTL_detach_pqos:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__PQOS_OP);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -785,6 +789,10 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_numainfo:
         return domain_has_xen(current->domain, XEN__PHYSINFO);
 
+    case XEN_SYSCTL_getcqminfo:
+        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
+                                    XEN2__PQOS_OP, NULL);
+
     default:
         printk("flask_sysctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..91af8b2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -3,9 +3,9 @@
 #
 # class class_name { permission_name ... }
 
-# Class xen consists of dom0-only operations dealing with the hypervisor itself.
-# Unless otherwise specified, the source is the domain executing the hypercall,
-# and the target is the xen initial sid (type xen_t).
+# Class xen and xen2 consists of dom0-only operations dealing with the
+# hypervisor itself. Unless otherwise specified, the source is the domain
+# executing the hypercall, and the target is the xen initial sid (type xen_t).
 class xen
 {
 # XENPF_settime
@@ -75,6 +75,14 @@ class xen
     setscheduler
 }
 
+# This is a continuation of class xen, since only 32 permissions can be
+# defined per class
+class xen2
+{
+# XEN_SYSCTL_getcqminfo
+    pqos_op
+}
+
 # Classes domain and domain2 consist of operations that a domain performs on
 # another domain or on itself.  Unless otherwise specified, the source is the
 # domain executing the hypercall, and the target is the domain being operated on
@@ -196,6 +204,9 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_attach_pqos
+# XEN_DOMCTL_detach_pqos
+    pqos_op
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:39:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAHsA-0003Rx-Fq; Mon, 03 Feb 2014 11:39:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WAHs8-0003Pr-C9
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:39:28 +0000
Received: from [193.109.254.147:16450] by server-11.bemta-14.messagelabs.com
	id 22/EF-24604-FEF7FE25; Mon, 03 Feb 2014 11:39:27 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391427566!1605987!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24189 invoked from network); 3 Feb 2014 11:39:26 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-16.tower-27.messagelabs.com with SMTP;
	3 Feb 2014 11:39:26 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 03 Feb 2014 03:39:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,772,1384329600"; d="scan'208";a="448850546"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 03 Feb 2014 03:38:56 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:36:25 +0800
Message-Id: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v7 0/6] enable Cache QoS Monitoring (CQM) feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v6:
 - Address comments from Jan Beulich, including:
   * Remove the unnecessary CPUID feature check.
   * Remove the unnecessary socket_cpu_map.
   * Spin_lock related changes, avoid spin_lock_irqsave().
   * Use readonly mapping to pass cqm data between Xen/Userspace,
     to avoid data copying.
   * Optimize RDMSR/WRMSR logic to avoid unnecessary calls.
   * Misc fixes including __read_mostly prefix, return value, etc.

Changes from v5:
 - Address comments from Dario Faggioli, including:
   * Define a new libxl_cqminfo structure to avoid reference of xc
     structure in libxl functions.
   * Use LOGE() instead of the LIBXL__LOG() functions.

Changes from v4:
 - When comparing the xl cqm parameter, use strcmp instead of strncmp;
   otherwise "xl pqos-attach cqmabcd domid" would be accepted as a
   valid command line.
 - Address comments from Andrew Cooper, including:
   * Adjust the pqos parameter parsing function.
   * Modify the pqos related documentation.
   * Add a check for opt_cqm_max_rmid in initialization code.
   * Do not IPI CPU that is in same socket with current CPU.
 - Address comments from Dario Faggioli, including:
   * Fix a typo in exported symbols.
   * Return correct libxl error code for qos related functions.
   * Abstract the error printing logic into a function.
 - Address comment from Daniel De Graaf, including:
   * Add a return value for the pqos related check.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Modify the GPLv2 related file header, remove the address.

Changes from v3:
 - Use structure to better organize CQM related global variables.
 - Address comments from Andrew Cooper, including:
   * Remove the domain creation flag for CQM RMID allocation.
   * Adjust the boot parameter format, use custom_param().
   * Add documentation for the newly added boot parameter.
   * Change QoS type flag to be uint64_t.
   * Initialize the per-socket cpu bitmap at system boot time.
   * Remove get_cqm_avail() function.
   * Misc format changes.
 - Address comment from Daniel De Graaf, including:
   * Use avc_current_has_perm() for XEN2__PQOS_OP that belongs to SECCLASS_XEN2.

Changes from v2:
 - Address comments from Andrew Cooper, including:
   * Merging tools stack changes into one patch.
   * Reduce the IPI number to one per socket.
   * Change structures for CQM data exchange between tools and Xen.
   * Misc format/variable/function name changes.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Simplify the error printing logic.
   * Add xsm checks for the newly added hypercalls.

Changes from v1:
 - Address comments from Andrew Cooper, including:
   * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
   * Change some structure element order to save packing cost.
   * Correct some function's return value.
   * Some programming style changes.
   * ...

Future generations of Intel Xeon processors may offer a monitoring capability
in each logical processor to measure specific quality-of-service metrics, for
example Cache QoS Monitoring to obtain L3 cache occupancy. For detailed
information, please refer to Intel SDM chapter 17.14.

Cache QoS Monitoring provides a layer of abstraction between applications and
logical processors through the use of Resource Monitoring IDs (RMIDs).
In the Xen design, each guest in the system can be assigned an RMID
independently, while RMID=0 is reserved for domains that don't have the CQM
service enabled. When any of the domain's vcpus is scheduled on a logical
processor, the domain's RMID is activated by programming its value into a
specific MSR, and when the vcpu is scheduled out, RMID=0 is programmed into
that MSR. The Cache QoS hardware tracks cache utilization of memory accesses
according to the RMIDs and reports the monitored data via a counter register.
With this solution, we can learn how much L3 cache is used by a certain guest.

To attach the CQM service to a certain guest, two approaches are provided:
1) Create the guest with "pqos_cqm=1" set in its configuration file.
2) Use "xl pqos-attach cqm domid" for a running guest.

To detach the CQM service from a guest, users can:
1) Use "xl pqos-detach cqm domid" for a running guest.
2) Destroy the guest; destruction also detaches the CQM service.

To get the L3 cache usage, users can use the following command:
$ xl pqos-list cqm

The data below is an example showing how the CQM related data is exposed to
the end user.

[root@localhost]# xl pqos-list cqm
Name               ID  SocketID        L3C_Usage       SocketID        L3C_Usage
Domain-0            0         0         20127744              1         25231360
ExampleHVMDomain    1         0          3211264              1         10551296

RMID count    56        RMID available    53

Dongxiao Xu (6):
  x86: detect and initialize Cache QoS Monitoring feature
  x86: dynamically attach/detach CQM service for a guest
  x86: collect CQM information from all sockets
  x86: enable CQM monitoring for each domain RMID
  xsm: add platform QoS related xsm policies
  tools: enable Cache QoS Monitoring feature for libxl/libxc

 docs/misc/xen-command-line.markdown          |    7 +
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 +-
 tools/libxc/xc_domain.c                      |   37 ++++
 tools/libxc/xenctrl.h                        |   12 ++
 tools/libxl/Makefile                         |    3 +-
 tools/libxl/libxl.h                          |    4 +
 tools/libxl/libxl_pqos.c                     |  132 +++++++++++++
 tools/libxl/libxl_types.idl                  |    7 +
 tools/libxl/xl.h                             |    3 +
 tools/libxl/xl_cmdimpl.c                     |  111 +++++++++++
 tools/libxl/xl_cmdtable.c                    |   15 ++
 xen/arch/x86/Makefile                        |    1 +
 xen/arch/x86/domain.c                        |    8 +
 xen/arch/x86/domctl.c                        |   28 +++
 xen/arch/x86/pqos.c                          |  273 ++++++++++++++++++++++++++
 xen/arch/x86/setup.c                         |    3 +
 xen/arch/x86/sysctl.c                        |   59 ++++++
 xen/include/asm-x86/cpufeature.h             |    1 +
 xen/include/asm-x86/domain.h                 |    2 +
 xen/include/asm-x86/msr-index.h              |    5 +
 xen/include/asm-x86/pqos.h                   |   59 ++++++
 xen/include/public/domctl.h                  |   11 ++
 xen/include/public/sysctl.h                  |   11 ++
 xen/xsm/flask/hooks.c                        |    8 +
 xen/xsm/flask/policy/access_vectors          |   17 +-
 26 files changed, 818 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxl/libxl_pqos.c
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:48:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAI0o-0004Uj-V5; Mon, 03 Feb 2014 11:48:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAI0n-0004Ue-Tq
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:48:26 +0000
Received: from [193.109.254.147:41697] by server-12.bemta-14.messagelabs.com
	id 64/BB-17220-9028FE25; Mon, 03 Feb 2014 11:48:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391428103!1604991!1
X-Originating-IP: [66.165.176.63]
From xen-devel-bounces@lists.xen.org Mon Feb 03 11:48:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAI0o-0004Uj-V5; Mon, 03 Feb 2014 11:48:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAI0n-0004Ue-Tq
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:48:26 +0000
Received: from [193.109.254.147:41697] by server-12.bemta-14.messagelabs.com
	id 64/BB-17220-9028FE25; Mon, 03 Feb 2014 11:48:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391428103!1604991!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30296 invoked from network); 3 Feb 2014 11:48:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:48:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97240135"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 11:48:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 06:48:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAI0X-0004Kd-HQ;
	Mon, 03 Feb 2014 11:48:09 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAI0X-0000VW-Ac;
	Mon, 03 Feb 2014 11:48:09 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21231.33273.164782.738071@mariner.uk.xensource.com>
Date: Mon, 3 Feb 2014 11:48:09 +0000
To: Don Slutz <dslutz@verizon.com>
In-Reply-To: <52EC2C9C.9090202@terremark.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Don Slutz writes ("Re: [Xen-devel] 4.4.0-rc3 tagged"):
> On CentOS release 5.10 (Final) I hit QEMU bug #1257099:

CC'ing Stefano, who is in charge of the Xen qemu-upstream tree.

Thanks,
Ian.

> lt LINK libcacard.la
> /usr/bin/ld: libcacard/.libs/vcard.o: relocation R_X86_64_PC32 against `vcard_buffer_response_delete' can not be used when making a shared object; recompile with -fPIC
> /usr/bin/ld: final link failed: Bad value
> collect2: ld returned 1 exit status
> make[3]: *** [libcacard.la] Error 1
> make[3]: Leaving directory `/home/don/xen-4.4.0-rc3/tools/qemu-xen-dir'
> make[2]: *** [subdir-all-qemu-xen-dir] Error 2
> make[2]: Leaving directory `/home/don/xen-4.4.0-rc3/tools'
> make[1]: *** [subdirs-install] Error 2
> make[1]: Leaving directory `/home/don/xen-4.4.0-rc3/tools'
> make: *** [install-tools] Error 2
> 
> See https://bugs.launchpad.net/bugs/1257099
> 
> Based on
> 
> https://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01826.html
> 
> it should make it into QEMU at some point.
> 
> So now I either change tools/Makefile to include "--disable-smartcard-nss" for QEMU or use the patch:
> 
> 
>  From c6ce0e32c09979ba5d7d0d416293fbc700372c61 Mon Sep 17 00:00:00 2001
> From: Don Slutz <dslutz@verizon.com>
> Date: Fri, 31 Jan 2014 20:59:28 +0000
> Subject: [PATCH] tools/Makefile: Change QEMU_XEN_ENABLE_DEBUG to an add to
>   allow for additional QEMU options.
> 
> This is currently needed to work around QEMU bug #1257099 on CentOS 5.10
> 
> I.E. via
> 
> export QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>   tools/Makefile | 4 +---
>   1 file changed, 1 insertion(+), 3 deletions(-)
> 
> diff --git a/tools/Makefile b/tools/Makefile
> index 00c69ee..a3b8a7e 100644
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -174,9 +174,7 @@ qemu-xen-dir-force-update:
>          fi
>   
>   ifeq ($(debug),y)
> -QEMU_XEN_ENABLE_DEBUG := --enable-debug --enable-trace-backend=stderr
> -else
> -QEMU_XEN_ENABLE_DEBUG :=
> +QEMU_XEN_ENABLE_DEBUG += --enable-debug --enable-trace-backend=stderr
>   endif
>   
>   subdir-all-qemu-xen-dir: qemu-xen-dir-find
> -- 
> 1.8.2.1
> 
> 
> and also:
> 
> export QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss
> 
> 
> This gets me to:
> 
> Parsing /home/don/xen/tools/ocaml/libs/xl/../../../../tools/libxl/libxl_types.idl
>   MLDEP
> make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> make[7]: Entering directory `/home/don/xen/tools/ocaml/libs/xl'
>   MLC      xenlight.cmo
>   MLA      xenlight.cma
>   CC       xenlight_stubs.o
> cc1: warnings being treated as errors
> xenlight_stubs.c: In function 'Defbool_val':
> xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
> xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
> xenlight_stubs.c: In function 'String_option_val':
> xenlight_stubs.c:379: error: expected expression before 'char'
> xenlight_stubs.c: In function 'aohow_val':
> xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
> make[7]: *** [xenlight_stubs.o] Error 1
> make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> make[6]: *** [subdir-install-xl] Error 2
> make[6]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> make[5]: *** [subdirs-install] Error 2
> make[5]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> make[4]: *** [subdir-install-libs] Error 2
> make[4]: Leaving directory `/home/don/xen/tools/ocaml'
> make[3]: *** [subdirs-install] Error 2
> make[3]: Leaving directory `/home/don/xen/tools/ocaml'
> make[2]: *** [subdir-install-ocaml] Error 2
> make[2]: Leaving directory `/home/don/xen/tools'
> make[1]: *** [subdirs-install] Error 2
> make[1]: Leaving directory `/home/don/xen/tools'
> make: *** [install-tools] Error 2
> 
> 
> Not sure how to work around this.
>      -Don Slutz
> 
> 
> 
> On 01/31/14 07:14, Ian Jackson wrote:
> > We've just tagged 4.4.0-rc3, please test and report bugs.
> >
> > The tarball can be downloaded here:
> >
> > http://bits.xensource.com/oss-xen/release/4.4.0-rc3/xen-4.4.0-rc3.tar.gz
> >
> > Ian.
> >
> > (PS: Due to an oversight by me, the version number in the xen/Makefile
> > is still "-rc2", so the message printed at startup by 4.4.0-rc3 claims
> > that it's "4.4.0-rc2".  Sorry about that.)
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
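The quoted patch's ':=' to '+=' switch matters because make's '+=' appends to a value inherited from the environment instead of clobbering it. A minimal sketch of that behaviour (editor's illustration using a throwaway Makefile, not the real tools/Makefile):

```shell
# Build a one-rule Makefile mirroring the patched tools/Makefile snippet.
mkfile=$(mktemp)
printf 'ifeq ($(debug),y)\nQEMU_XEN_ENABLE_DEBUG += --enable-debug\nendif\nall:\n\t@echo $(QEMU_XEN_ENABLE_DEBUG)\n' > "$mkfile"

# With '+=', the user's exported flag survives and the debug flags are
# appended after it:
QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss make -s -f "$mkfile" debug=y
# prints: --disable-smartcard-nss --enable-debug
```

With the original ':=' the exported --disable-smartcard-nss would have been overwritten whenever debug=y, which is exactly what the patch avoids.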

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:49:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:49:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAI1l-0004Zu-DC; Mon, 03 Feb 2014 11:49:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAI1j-0004Zk-Nm
	for Xen-devel@lists.xensource.com; Mon, 03 Feb 2014 11:49:24 +0000
Received: from [85.158.139.211:41090] by server-4.bemta-5.messagelabs.com id
	C8/35-08092-2428FE25; Mon, 03 Feb 2014 11:49:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391428160!1281446!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22333 invoked from network); 3 Feb 2014 11:49:22 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 11:49:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13BnHZq020723
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 11:49:18 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13BnG8F009993
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 11:49:17 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13BnGDi002116; Mon, 3 Feb 2014 11:49:16 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 03:49:15 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 964461C0972; Mon,  3 Feb 2014 06:49:14 -0500 (EST)
Date: Mon, 3 Feb 2014 06:49:14 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Message-ID: <20140203114914.GA3350@phenom.dumpdata.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 29, 2014 at 04:15:18PM -0800, Mukesh Rathor wrote:
> We need to set cr4 flags for APs that are already set for BSP.

The title is missing the 'xen' part.

I rewrote it a bit and I think this should go in 3.14.

David, Boris: It is not the full fix, as there are other parts needed
to make a PVH guest use 2MB or 1GB pages - but this fixes an obvious
bug.



>From 797ea6812ff0a90cce966a4ff6bad57cbadc43b5 Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Wed, 29 Jan 2014 16:15:18 -0800
Subject: [PATCH] xen/pvh: set CR4 flags for APs

The CR4 flags (PSE, PGE) are set for the BSP, but they are
not set for the APs. As such, fix it up to make sure
the APs have them set as well.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..201d09a 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
 	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
 	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
 	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
+
+	if (!cpu)
+		return;
+	/*
+	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
+	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
+	*/
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	if (cpu_has_pge)
+		set_in_cr4(X86_CR4_PGE);
 }
 
 /*
-- 
1.8.3.1


> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  arch/x86/xen/enlighten.c |   12 ++++++++++++
>  1 files changed, 12 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..201d09a 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
>  	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
>  	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
>  	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
> +
> +	if (!cpu)
> +		return;
> +	/*
> +	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
> +	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
> +	*/
> +	if (cpu_has_pse)
> +		set_in_cr4(X86_CR4_PSE);
> +
> +	if (cpu_has_pge)
> +		set_in_cr4(X86_CR4_PGE);
>  }
>  
>  /*
> -- 
> 1.7.2.3
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:51:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAI49-0004vU-0b; Mon, 03 Feb 2014 11:51:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAI47-0004vM-78
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:51:51 +0000
Received: from [193.109.254.147:15579] by server-10.bemta-14.messagelabs.com
	id A1/7B-10711-6D28FE25; Mon, 03 Feb 2014 11:51:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391428308!1617304!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30763 invoked from network); 3 Feb 2014 11:51:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 11:51:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13Bogfx021950
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 11:50:43 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Bofip013686
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 11:50:42 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Boflt027470; Mon, 3 Feb 2014 11:50:41 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 03:50:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 47C971C0972; Mon,  3 Feb 2014 06:50:40 -0500 (EST)
Date: Mon, 3 Feb 2014 06:50:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140203115040.GB3400@phenom.dumpdata.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
	<52EE1E26.2040308@linaro.org> <52EE93F0.1020508@citrix.com>
	<52EF7618.7030402@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52EF7618.7030402@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 10:57:28AM +0000, David Vrabel wrote:
> On 02/02/14 18:52, Zoltan Kiss wrote:
> > On 02/02/14 11:29, Julien Grall wrote:
> >> Hello,
> >>
> >> This patch is breaking Linux compilation on ARM:
> >>
> >> drivers/xen/grant-table.c: In function '__gnttab_map_refs':
> >> drivers/xen/grant-table.c:989:3: error: implicit declaration of
> >> function 'FOREIGN_FRAME' [-Werror=implicit-function-declaration]
> >>     if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> >>     ^
> >> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> >> drivers/xen/grant-table.c:1054:3: error: implicit declaration of
> >> function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
> >>     mfn = get_phys_to_machine(pfn);
> >>     ^
> >> drivers/xen/grant-table.c:1055:43: error: 'FOREIGN_FRAME_BIT'
> >> undeclared (first use in this function)
> >>     if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> >>                                             ^
> >> drivers/xen/grant-table.c:1055:43: note: each undeclared identifier is
> >> reported only once for each function it appears in
> >> drivers/xen/grant-table.c:1068:9: error: too many arguments to
> >> function 'm2p_remove_override'
> >>           mfn);
> >>           ^
> >> In file included from include/xen/page.h:4:0,
> >>                   from drivers/xen/grant-table.c:48:
> >> /local/home/julien/works/midway/linux/arch/arm/include/asm/xen/page.h:106:19:
> >> note: declared here
> >>   static inline int m2p_remove_override(struct page *page, bool
> >> clear_pte)
> >>                     ^
> >> cc1: some warnings being treated as errors
> > 
> > Hi,
> > 
> > That's bad indeed. I think the best solution is to put those parts
> > behind an #ifdef x86. The ones moved from x86/p2m.c to grant-table.c.
> > David, Stefano, what do you think?
> 
> I don't think we want (more) #ifdef CONFIG_X86 in grant-table.c and the
> arch-specific bits will have to factored out into their own functions
> with suitable stubs provided for ARM.
> 
> But, this patch went in late and it's clearly not ready. So I think it
> should be reverted and we should aim to get it sorted out for 3.15.
> 
> Konrad/Stefano (if you agree) please revert
> 08ece5bb2312b4510b161a6ef6682f37f4eac8a1 and send a pull request.

OK, queued up. I also put on the xen/cr4 patch on the queue - just sent
an email with it.

> 
> Konrad, I also think you should look at adding an ARM build to your test
> system (I thought you had this already).
> 
> David
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:51:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAI49-0004vU-0b; Mon, 03 Feb 2014 11:51:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAI47-0004vM-78
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 11:51:51 +0000
Received: from [193.109.254.147:15579] by server-10.bemta-14.messagelabs.com
	id A1/7B-10711-6D28FE25; Mon, 03 Feb 2014 11:51:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391428308!1617304!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30763 invoked from network); 3 Feb 2014 11:51:49 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 11:51:49 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13Bogfx021950
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 11:50:43 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Bofip013686
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 11:50:42 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Boflt027470; Mon, 3 Feb 2014 11:50:41 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 03:50:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 47C971C0972; Mon,  3 Feb 2014 06:50:40 -0500 (EST)
Date: Mon, 3 Feb 2014 06:50:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140203115040.GB3400@phenom.dumpdata.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>
	<52EE1E26.2040308@linaro.org> <52EE93F0.1020508@citrix.com>
	<52EF7618.7030402@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52EF7618.7030402@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gTW9uLCBGZWIgMDMsIDIwMTQgYXQgMTA6NTc6MjhBTSArMDAwMCwgRGF2aWQgVnJhYmVsIHdy
b3RlOgo+IE9uIDAyLzAyLzE0IDE4OjUyLCBab2x0YW4gS2lzcyB3cm90ZToKPiA+IE9uIDAyLzAy
LzE0IDExOjI5LCBKdWxpZW4gR3JhbGwgd3JvdGU6Cj4gPj4gSGVsbG8sCj4gPj4KPiA+PiBUaGlz
IHBhdGNoIGlzIGJyZWFraW5nIExpbnV4IGNvbXBpbGF0aW9uIG9uIEFSTToKPiA+Pgo+ID4+IGRy
aXZlcnMveGVuL2dyYW50LXRhYmxlLmM6IEluIGZ1bmN0aW9uIOKAmF9fZ250dGFiX21hcF9yZWZz
4oCZOgo+ID4+IGRyaXZlcnMveGVuL2dyYW50LXRhYmxlLmM6OTg5OjM6IGVycm9yOiBpbXBsaWNp
dCBkZWNsYXJhdGlvbiBvZgo+ID4+IGZ1bmN0aW9uIOKAmEZPUkVJR05fRlJBTUXigJkgWy1XZXJy
b3I9aW1wbGljaXQtZnVuY3Rpb24tZGVjbGFyYXRpb25dCj4gPj4gICAgIGlmICh1bmxpa2VseSgh
c2V0X3BoeXNfdG9fbWFjaGluZShwZm4sIEZPUkVJR05fRlJBTUUobWZuKSkpKSB7Cj4gPj4gICAg
IF4KPiA+PiBkcml2ZXJzL3hlbi9ncmFudC10YWJsZS5jOiBJbiBmdW5jdGlvbiDigJhfX2dudHRh
Yl91bm1hcF9yZWZz4oCZOgo+ID4+IGRyaXZlcnMveGVuL2dyYW50LXRhYmxlLmM6MTA1NDozOiBl
cnJvcjogaW1wbGljaXQgZGVjbGFyYXRpb24gb2YKPiA+PiBmdW5jdGlvbiDigJhnZXRfcGh5c190
b19tYWNoaW5l4oCZIFstV2Vycm9yPWltcGxpY2l0LWZ1bmN0aW9uLWRlY2xhcmF0aW9uXQo+ID4+
ICAgICBtZm4gPSBnZXRfcGh5c190b19tYWNoaW5lKHBmbik7Cj4gPj4gICAgIF4KPiA+PiBkcml2
ZXJzL3hlbi9ncmFudC10YWJsZS5jOjEwNTU6NDM6IGVycm9yOiDigJhGT1JFSUdOX0ZSQU1FX0JJ
VOKAmQo+ID4+IHVuZGVjbGFyZWQgKGZpcnN0IHVzZSBpbiB0aGlzIGZ1bmN0aW9uKQo+ID4+ICAg
ICBpZiAobWZuID09IElOVkFMSURfUDJNX0VOVFJZIHx8ICEobWZuICYgRk9SRUlHTl9GUkFNRV9C
SVQpKSB7Cj4gPj4gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBe
Cj4gPj4gZHJpdmVycy94ZW4vZ3JhbnQtdGFibGUuYzoxMDU1OjQzOiBub3RlOiBlYWNoIHVuZGVj
bGFyZWQgaWRlbnRpZmllciBpcwo+ID4+IHJlcG9ydGVkIG9ubHkgb25jZSBmb3IgZWFjaCBmdW5j
dGlvbiBpdCBhcHBlYXJzIGluCj4gPj4gZHJpdmVycy94ZW4vZ3JhbnQtdGFibGUuYzoxMDY4Ojk6
IGVycm9yOiB0b28gbWFueSBhcmd1bWVudHMgdG8KPiA+PiBmdW5jdGlvbiDigJhtMnBfcmVtb3Zl
X292ZXJyaWRl4oCZCj4gPj4gICAgICAgICAgIG1mbik7Cj4gPj4gICAgICAgICAgIF4KPiA+PiBJ
biBmaWxlIGluY2x1ZGVkIGZyb20gaW5jbHVkZS94ZW4vcGFnZS5oOjQ6MCwKPiA+PiAgICAgICAg
ICAgICAgICAgICBmcm9tIGRyaXZlcnMveGVuL2dyYW50LXRhYmxlLmM6NDg6Cj4gPj4gL2xvY2Fs
L2hvbWUvanVsaWVuL3dvcmtzL21pZHdheS9saW51eC9hcmNoL2FybS9pbmNsdWRlL2FzbS94ZW4v
cGFnZS5oOjEwNjoxOToKPiA+PiBub3RlOiBkZWNsYXJlZCBoZXJlCj4gPj4gICBzdGF0aWMgaW5s
aW5lIGludCBtMnBfcmVtb3ZlX292ZXJyaWRlKHN0cnVjdCBwYWdlICpwYWdlLCBib29sCj4gPj4g
Y2xlYXJfcHRlKQo+ID4+ICAgICAgICAgICAgICAgICAgICAgXgo+ID4+IGNjMTogc29tZSB3YXJu
aW5ncyBiZWluZyB0cmVhdGVkIGFzIGVycm9ycwo+ID4gCj4gPiBIaSwKPiA+IAo+ID4gVGhhdCdz
IGJhZCBpbmRlZWQuIEkgdGhpbmsgdGhlIGJlc3Qgc29sdXRpb24gaXMgdG8gcHV0IHRob3NlIHBh
cnRzCj4gPiBiZWhpbmQgYW4gI2lmZGVmIHg4Ni4gVGhlIG9uZXMgbW92ZWQgZnJvbSB4ODYvcDJt
LmMgdG8gZ3JhbnQtdGFibGUuYy4KPiA+IERhdmlkLCBTdGVmYW5vLCB3aGF0IGRvIHlvdSB0aGlu
az8KPiAKPiBJIGRvbid0IHRoaW5rIHdlIHdhbnQgKG1vcmUpICNpZmRlZiBDT05GSUdfWDg2IGlu
IGdyYW50LXRhYmxlLmMgYW5kIHRoZQo+IGFyY2gtc3BlY2lmaWMgYml0cyB3aWxsIGhhdmUgdG8g
ZmFjdG9yZWQgb3V0IGludG8gdGhlaXIgb3duIGZ1bmN0aW9ucwo+IHdpdGggc3VpdGFibGUgc3R1
YnMgcHJvdmlkZWQgZm9yIEFSTS4KPiAKPiBCdXQsIHRoaXMgcGF0Y2ggd2VudCBpbiBsYXRlIGFu
ZCBpdCdzIGNsZWFybHkgbm90IHJlYWR5LiBTbyBJIHRoaW5rIGl0Cj4gc2hvdWxkIGJlIHJldmVy
dGVkIGFuZCB3ZSBzaG91bGQgYWltIHRvIGdldCBpdCBzb3J0ZWQgb3V0IGZvciAzLjE1Lgo+IAo+
IEtvbnJhZC9TdGVmYW5vIChpZiB5b3UgYWdyZWUpIHBsZWFzZSByZXZlcnQKPiAwOGVjZTViYjIz
MTJiNDUxMGIxNjFhNmVmNjY4MmYzN2Y0ZWFjOGExIGFuZCBzZW5kIGEgcHVsbCByZXF1ZXN0LgoK
T0ssIHF1ZXVlZCB1cC4gSSBhbHNvIHB1dCBvbiB0aGUgeGVuL2NyNCBwYXRjaCBvbiB0aGUgcXVl
dWUgLSBqdXN0IHNlbnQKYW4gZW1haWwgd2l0aCBpdC4KCj4gCj4gS29ucmFkLCBJIGFsc28gdGhp
bmsgeW91IHNob3VsZCBsb29rIGF0IGFkZGluZyBhbiBBUk0gYnVpbGQgdG8geW91ciB0ZXN0Cj4g
c3lzdGVtIChJIHRob3VnaHQgeW91IGhhZCB0aGlzIGFscmVhZHkpLgo+IAo+IERhdmlkCj4gCj4g
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KPiBYZW4tZGV2
ZWwgbWFpbGluZyBsaXN0Cj4gWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKPiBodHRwOi8vbGlzdHMu
eGVuLm9yZy94ZW4tZGV2ZWwKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcK
aHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Mon Feb 03 11:59:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 11:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAIBZ-00057Z-00; Mon, 03 Feb 2014 11:59:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAIBW-00057U-Mi
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 11:59:31 +0000
Received: from [193.109.254.147:58373] by server-13.bemta-14.messagelabs.com
	id 93/4F-01226-2A48FE25; Mon, 03 Feb 2014 11:59:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391428767!1620509!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5744 invoked from network); 3 Feb 2014 11:59:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 11:59:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97243442"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 11:59:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 06:59:27 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAIBS-00006X-Nb;
	Mon, 03 Feb 2014 11:59:26 +0000
Message-ID: <52EF849E.7080706@citrix.com>
Date: Mon, 3 Feb 2014 11:59:26 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dongxiao Xu <dongxiao.xu@intel.com>
References: <1391427391-120790-1-git-send-email-dongxiao.xu@intel.com>
	<1391427391-120790-4-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1391427391-120790-4-git-send-email-dongxiao.xu@intel.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v7 3/6] x86: collect CQM information from
	all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 11:36, Dongxiao Xu wrote:
> Collect CQM information (L3 cache occupancy) from all sockets.
> Upper layer application can parse the data structure to get the
> information of guest's L3 cache occupancy on certain sockets.
>
> Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> ---
>  xen/arch/x86/pqos.c             |   43 ++++++++++++++++++++++++++++
>  xen/arch/x86/sysctl.c           |   59 +++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/msr-index.h |    4 +++
>  xen/include/asm-x86/pqos.h      |    4 +++
>  xen/include/public/sysctl.h     |   11 ++++++++
>  5 files changed, 121 insertions(+)
>
> diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
> index eb469ac..2cde56e 100644
> --- a/xen/arch/x86/pqos.c
> +++ b/xen/arch/x86/pqos.c
> @@ -15,6 +15,7 @@
>   * more details.
>   */
>  #include <asm/processor.h>
> +#include <asm/msr.h>
>  #include <xen/init.h>
>  #include <xen/mm.h>
>  #include <xen/spinlock.h>
> @@ -205,6 +206,48 @@ out:
>      spin_unlock(&cqm->cqm_lock);
>  }
>  
> +static void read_cqm_data(void *arg)
> +{
> +    uint64_t cqm_data;
> +    unsigned int rmid;
> +    int socket = cpu_to_socket(smp_processor_id());
> +    unsigned long i;
> +
> +    ASSERT(system_supports_cqm());
> +
> +    if ( socket < 0 )
> +        return;
> +
> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> +    {
> +        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
> +            continue;
> +
> +        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
> +        rdmsrl(MSR_IA32_QMC, cqm_data);
> +
> +        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
> +        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
> +            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;
> +    }
> +}
> +
> +void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
> +{
> +    unsigned int nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
> +    unsigned int nr_rmids = cqm->max_rmid + 1;
> +
> +    /* Read CQM data in current CPU */
> +    read_cqm_data(NULL);
> +    /* Issue IPI to other CPUs to read CQM data */
> +    on_selected_cpus(cpu_cqmdata_map, read_cqm_data, NULL, 1);
> +
> +    /* Copy the rmid_to_dom info to the buffer */
> +    memcpy(cqm->buffer + nr_sockets * nr_rmids, cqm->rmid_to_dom,
> +           sizeof(domid_t) * (cqm->max_rmid + 1));
> +
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
> index 15d4b91..5391800 100644
> --- a/xen/arch/x86/sysctl.c
> +++ b/xen/arch/x86/sysctl.c
> @@ -28,6 +28,7 @@
>  #include <xen/nodemask.h>
>  #include <xen/cpu.h>
>  #include <xsm/xsm.h>
> +#include <asm/pqos.h>
>  
>  #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
>  
> @@ -66,6 +67,30 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
>          pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
>  }
>  
> +/* Select one random CPU for each socket. Current CPU's socket is excluded */
> +static void select_socket_cpu(cpumask_t *cpu_bitmap)
> +{
> +    int i;
> +    unsigned int cpu;
> +    int socket, socket_curr = cpu_to_socket(smp_processor_id());
> +    DECLARE_BITMAP(sockets, NR_CPUS);
> +
> +    bitmap_zero(sockets, NR_CPUS);
> +    if (socket_curr >= 0)
> +        set_bit(socket_curr, sockets);
> +
> +    cpumask_clear(cpu_bitmap);
> +    for ( i = 0; i < NR_CPUS; i++ )
> +    {
> +        socket = cpu_to_socket(i);
> +        if ( socket < 0 || test_and_set_bit(socket, sockets) )
> +            continue;
> +        cpu = cpumask_any(per_cpu(cpu_core_mask, i));
> +        if ( cpu < nr_cpu_ids )
> +            cpumask_set_cpu(cpu, cpu_bitmap);
> +    }
> +}
> +
>  long arch_do_sysctl(
>      struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
>  {
> @@ -101,6 +126,40 @@ long arch_do_sysctl(
>      }
>      break;
>  
> +    case XEN_SYSCTL_getcqminfo:
> +    {
> +        cpumask_var_t cpu_cqmdata_map;
> +
> +        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
> +        {
> +            ret = -ENOMEM;
> +            break;
> +        }
> +
> +        if ( !system_supports_cqm() )
> +        {
> +            ret = -ENODEV;
> +            free_cpumask_var(cpu_cqmdata_map);
> +            break;
> +        }

Check for -ENODEV first, to avoid pointless memory allocation.

~Andrew

> +
> +        memset(cqm->buffer, 0, cqm->buffer_size);
> +
> +        select_socket_cpu(cpu_cqmdata_map);
> +        get_cqm_info(cpu_cqmdata_map);
> +
> +        sysctl->u.getcqminfo.buffer_mfn = virt_to_mfn(cqm->buffer);
> +        sysctl->u.getcqminfo.size = cqm->buffer_size;
> +        sysctl->u.getcqminfo.nr_rmids = cqm->max_rmid + 1;
> +        sysctl->u.getcqminfo.nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
> +
> +        if ( __copy_to_guest(u_sysctl, sysctl, 1) )
> +            ret = -EFAULT;
> +
> +        free_cpumask_var(cpu_cqmdata_map);
> +    }
> +    break;
> +
>      default:
>          ret = -ENOSYS;
>          break;
> diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
> index fc9fbc6..e3ff10c 100644
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -489,4 +489,8 @@
>  /* Geode defined MSRs */
>  #define MSR_GEODE_BUSCONT_CONF0		0x00001900
>  
> +/* Platform QoS register */
> +#define MSR_IA32_QOSEVTSEL             0x00000c8d
> +#define MSR_IA32_QMC                   0x00000c8e
> +
>  #endif /* __ASM_MSR_INDEX_H */
> diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
> index f25037d..87820d5 100644
> --- a/xen/include/asm-x86/pqos.h
> +++ b/xen/include/asm-x86/pqos.h
> @@ -17,6 +17,8 @@
>  #ifndef ASM_PQOS_H
>  #define ASM_PQOS_H
>  #include <xen/sched.h>
> +#include <xen/cpumask.h>
> +#include <public/domctl.h>
>  
>  #include <public/xen.h>
>  #include <xen/spinlock.h>
> @@ -51,5 +53,7 @@ void init_platform_qos(void);
>  
>  int alloc_cqm_rmid(struct domain *d);
>  void free_cqm_rmid(struct domain *d);
> +void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
> +void cqm_assoc_rmid(unsigned int rmid);

This function prototype lives in the next patch alongside its
implementation.

~Andrew

>  
>  #endif
> diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
> index 8437d31..335b1d9 100644
> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -632,6 +632,15 @@ struct xen_sysctl_coverage_op {
>  typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
>  
> +struct xen_sysctl_getcqminfo {
> +    uint64_aligned_t buffer_mfn;
> +    uint32_t size;
> +    uint32_t nr_rmids;
> +    uint32_t nr_sockets;
> +};
> +typedef struct xen_sysctl_getcqminfo xen_sysctl_getcqminfo_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_sysctl_getcqminfo_t);
> +
>  
>  struct xen_sysctl {
>      uint32_t cmd;
> @@ -654,6 +663,7 @@ struct xen_sysctl {
>  #define XEN_SYSCTL_cpupool_op                    18
>  #define XEN_SYSCTL_scheduler_op                  19
>  #define XEN_SYSCTL_coverage_op                   20
> +#define XEN_SYSCTL_getcqminfo                    21
>      uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
>      union {
>          struct xen_sysctl_readconsole       readconsole;
> @@ -675,6 +685,7 @@ struct xen_sysctl {
>          struct xen_sysctl_cpupool_op        cpupool_op;
>          struct xen_sysctl_scheduler_op      scheduler_op;
>          struct xen_sysctl_coverage_op       coverage_op;
> +        struct xen_sysctl_getcqminfo        getcqminfo;
>          uint8_t                             pad[128];
>      } u;
>  };


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 12:00:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 12:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAICE-0005CX-Fg; Mon, 03 Feb 2014 12:00:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAICB-0005BP-GA
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 12:00:11 +0000
Received: from [193.109.254.147:48786] by server-7.bemta-14.messagelabs.com id
	5D/AA-23424-AC48FE25; Mon, 03 Feb 2014 12:00:10 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391428808!1623015!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16956 invoked from network); 3 Feb 2014 12:00:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 12:00:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97243582"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 12:00:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 07:00:07 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAIC7-00006n-AL;
	Mon, 03 Feb 2014 12:00:07 +0000
Date: Mon, 3 Feb 2014 11:59:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <52D65856.6050901@redhat.com>
Message-ID: <alpine.DEB.2.02.1402031159380.4373@kaball.uk.xensource.com>
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
	<52D65856.6050901@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org, Don Slutz <dslutz@verizon.com>,
	1257099@bugs.launchpad.net, Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 15 Jan 2014, Paolo Bonzini wrote:
> Il 03/01/2014 03:12, Don Slutz ha scritto:
> > Adjust TMPO and add TMPB, TMPL, and TMPA.  libtool needs the names
> > to be fixed (TMPB).
> > 
> > Add new functions do_libtool and libtool_prog.
> > 
> > Add check for broken gcc and libtool.
> > 
> > Signed-off-by: Don Slutz <dslutz@verizon.com>
> > ---
> > Was posted as an attachment.
> > 
> > https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg02678.html
> > 
> >  configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
> >  1 file changed, 62 insertions(+), 1 deletion(-)
> > 
> > diff --git a/configure b/configure
> > index edfea95..852d021 100755
> > --- a/configure
> > +++ b/configure
> > @@ -12,7 +12,10 @@ else
> >  fi
> >  
> >  TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
> > -TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
> > +TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
> > +TMPO="${TMPDIR1}/${TMPB}.o"
> > +TMPL="${TMPDIR1}/${TMPB}.lo"
> > +TMPA="${TMPDIR1}/lib${TMPB}.la"
> >  TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
> >  
> >  # NB: do not call "exit" in the trap handler; this is buggy with some shells;
> > @@ -86,6 +89,38 @@ compile_prog() {
> >    do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
> >  }
> >  
> > +do_libtool() {
> > +    local mode=$1
> > +    shift
> > +    # Run the compiler, capturing its output to the log.
> > +    echo $libtool $mode --tag=CC $cc "$@" >> config.log
> > +    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
> > +    # Test passed. If this is an --enable-werror build, rerun
> > +    # the test with -Werror and bail out if it fails. This
> > +    # makes warning-generating-errors in configure test code
> > +    # obvious to developers.
> > +    if test "$werror" != "yes"; then
> > +        return 0
> > +    fi
> > +    # Don't bother rerunning the compile if we were already using -Werror
> > +    case "$*" in
> > +        *-Werror*)
> > +           return 0
> > +        ;;
> > +    esac
> > +    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
> > +    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
> > +    error_exit "configure test passed without -Werror but failed with -Werror." \
> > +        "This is probably a bug in the configure script. The failing command" \
> > +        "will be at the bottom of config.log." \
> > +        "You can run configure with --disable-werror to bypass this check."
> > +}
> > +
> > +libtool_prog() {
> > +    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
> > +    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
> > +}
> > +
> >  # symbolically link $1 to $2.  Portable version of "ln -sf".
> >  symlink() {
> >    rm -rf "$2"
> > @@ -1367,6 +1402,32 @@ EOF
> >    fi
> >  fi
> >  
> > +# check for broken gcc and libtool in RHEL5
> > +if test -n "$libtool" -a "$pie" != "no" ; then
> > +  cat > $TMPC <<EOF
> > +
> > +void *f(unsigned char *buf, int len);
> > +void *g(unsigned char *buf, int len);
> > +
> > +void *
> > +f(unsigned char *buf, int len)
> > +{
> > +    return (void*)0L;
> > +}
> > +
> > +void *
> > +g(unsigned char *buf, int len)
> > +{
> > +    return f(buf, len);
> > +}
> > +
> > +EOF
> > +  if ! libtool_prog; then
> > +    echo "Disabling libtool due to broken toolchain support"
> > +    libtool=
> > +  fi
> > +fi
> > +
> >  ##########################################
> >  # __sync_fetch_and_and requires at least -march=i486. Many toolchains
> >  # use i686 as default anyway, but for those that don't, an explicit
> > 
> 
> I'm applying this to a "configure" branch on my github repository.  Thanks!

Paolo, did this patch ever make it upstream? If so, do you have a commit
id?
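
For anyone wanting to reproduce the RHEL5 probe from the quoted patch by hand, the check boils down to roughly this sketch (not the actual configure code; the `libtool` and `cc` locations are assumptions):

```shell
# Run one compile step through libtool with -fPIE, as the patch's
# libtool_prog does, and report whether the toolchain copes.
workdir=$(mktemp -d)
src="$workdir/conftest.c"
printf 'void *f(void) { return (void *)0L; }\n' > "$src"
cc=${CC:-cc}
if command -v libtool >/dev/null 2>&1 &&
   libtool --mode=compile --tag=CC "$cc" -c -fPIE -DPIE \
           -o "$workdir/conftest.o" "$src" >/dev/null 2>&1; then
    result="libtool+PIE OK"
else
    result="libtool+PIE broken or unavailable"
fi
echo "$result"
rm -rf "$workdir"
```

On a broken CentOS 5 toolchain the compile step fails, which is exactly the condition under which the patch clears `$libtool` and falls back to plain object files.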

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 12:01:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 12:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAIDV-0005P6-H1; Mon, 03 Feb 2014 12:01:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAIDU-0005Oq-RG
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 12:01:33 +0000
Received: from [193.109.254.147:57989] by server-12.bemta-14.messagelabs.com
	id C6/05-17220-C158FE25; Mon, 03 Feb 2014 12:01:32 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391428889!1623487!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3661 invoked from network); 3 Feb 2014 12:01:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 12:01:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97244102"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 12:01:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 07:01:29 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAIDQ-00008M-Qc;
	Mon, 03 Feb 2014 12:01:28 +0000
Date: Mon, 3 Feb 2014 12:01:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
In-Reply-To: <21231.33273.164782.738071@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014, Ian Jackson wrote:
> Don Slutz writes ("Re: [Xen-devel] 4.4.0-rc3 tagged"):
> > On CentOS release 5.10 (Final) I hit QEMU bug #1257099:
> 
> CC'ing Stefano, who is in charge of the Xen qemu-upstream tree.

This is the bug that you fixed with the patch linked below, but it looks
like the patch never made it upstream.

Let me ping Paolo on this.


> Thanks,
> Ian.
> 
> > lt LINK libcacard.la
> > /usr/bin/ld: libcacard/.libs/vcard.o: relocation R_X86_64_PC32 against `vcard_buffer_response_delete' can not be used when making a shared object; recompile with -fPIC
> > /usr/bin/ld: final link failed: Bad value
> > collect2: ld returned 1 exit status
> > make[3]: *** [libcacard.la] Error 1
> > make[3]: Leaving directory `/home/don/xen-4.4.0-rc3/tools/qemu-xen-dir'
> > make[2]: *** [subdir-all-qemu-xen-dir] Error 2
> > make[2]: Leaving directory `/home/don/xen-4.4.0-rc3/tools'
> > make[1]: *** [subdirs-install] Error 2
> > make[1]: Leaving directory `/home/don/xen-4.4.0-rc3/tools'
> > make: *** [install-tools] Error 2
> > 
> > See https://bugs.launchpad.net/bugs/1257099
> > 
> > Based on
> > 
> > https://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01826.html
> > 
> > it should make it into QEMU at some point.
> > 
> > So now I either change tools/Makefile to include "--disable-smartcard-nss" for QEMU or use the patch:
> > 
> > 
> >  From c6ce0e32c09979ba5d7d0d416293fbc700372c61 Mon Sep 17 00:00:00 2001
> > From: Don Slutz <dslutz@verizon.com>
> > Date: Fri, 31 Jan 2014 20:59:28 +0000
> > Subject: [PATCH] tools/Makefile: Change QEMU_XEN_ENABLE_DEBUG to an add to
> >   allow for additional QEMU options.
> > 
> > This is currently needed to work around QEMU bug #1257099 on CentOS 5.10
> > 
> > i.e. via
> > 
> > export QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss
> > 
> > Signed-off-by: Don Slutz <dslutz@verizon.com>
> > ---
> >   tools/Makefile | 4 +---
> >   1 file changed, 1 insertion(+), 3 deletions(-)
> > 
> > diff --git a/tools/Makefile b/tools/Makefile
> > index 00c69ee..a3b8a7e 100644
> > --- a/tools/Makefile
> > +++ b/tools/Makefile
> > @@ -174,9 +174,7 @@ qemu-xen-dir-force-update:
> >          fi
> >   
> >   ifeq ($(debug),y)
> > -QEMU_XEN_ENABLE_DEBUG := --enable-debug --enable-trace-backend=stderr
> > -else
> > -QEMU_XEN_ENABLE_DEBUG :=
> > +QEMU_XEN_ENABLE_DEBUG += --enable-debug --enable-trace-backend=stderr
> >   endif
> >   
> >   subdir-all-qemu-xen-dir: qemu-xen-dir-find
> > -- 
> > 1.8.2.1
> > 
> > 
> > and also:
> > 
> > export QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss
> > 
> > 
> > This gets me to:
> > 
> > Parsing /home/don/xen/tools/ocaml/libs/xl/../../../../tools/libxl/libxl_types.idl
> >   MLDEP
> > make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> > make[7]: Entering directory `/home/don/xen/tools/ocaml/libs/xl'
> >   MLC      xenlight.cmo
> >   MLA      xenlight.cma
> >   CC       xenlight_stubs.o
> > cc1: warnings being treated as errors
> > xenlight_stubs.c: In function 'Defbool_val':
> > xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
> > xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
> > xenlight_stubs.c: In function 'String_option_val':
> > xenlight_stubs.c:379: error: expected expression before 'char'
> > xenlight_stubs.c: In function 'aohow_val':
> > xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
> > make[7]: *** [xenlight_stubs.o] Error 1
> > make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> > make[6]: *** [subdir-install-xl] Error 2
> > make[6]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> > make[5]: *** [subdirs-install] Error 2
> > make[5]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> > make[4]: *** [subdir-install-libs] Error 2
> > make[4]: Leaving directory `/home/don/xen/tools/ocaml'
> > make[3]: *** [subdirs-install] Error 2
> > make[3]: Leaving directory `/home/don/xen/tools/ocaml'
> > make[2]: *** [subdir-install-ocaml] Error 2
> > make[2]: Leaving directory `/home/don/xen/tools'
> > make[1]: *** [subdirs-install] Error 2
> > make[1]: Leaving directory `/home/don/xen/tools'
> > make: *** [install-tools] Error 2
> > 
> > 
> > Not sure how to work around this.
> >      -Don Slutz
> > 
> > 
> > 
> > On 01/31/14 07:14, Ian Jackson wrote:
> > > We've just tagged 4.4.0-rc3, please test and report bugs.
> > >
> > > The tarball can be downloaded here:
> > >
> > > http://bits.xensource.com/oss-xen/release/4.4.0-rc3/xen-4.4.0-rc3.tar.gz
> > >
> > > Ian.
> > >
> > > (PS: Due to an oversight by me, the version number in the xen/Makefile
> > > is still "-rc2", so the message printed at startup by 4.4.0-rc3 claims
> > > that it's "4.4.0-rc2".  Sorry about that.)
> > >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> > 
> 
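
The quoted Makefile change (`:=` to `+=`) can be demonstrated in isolation: with `+=`, a value passed in from the environment survives and the debug flags are appended to it, whereas `:=` would discard the environment value. A minimal GNU make sketch:

```shell
# Build a throwaway Makefile using '+=' and pass the variable in via the
# environment, as the workaround suggests with --disable-smartcard-nss.
workdir=$(mktemp -d)
printf 'QEMU_XEN_ENABLE_DEBUG += --enable-debug\nall:\n\t@echo $(QEMU_XEN_ENABLE_DEBUG)\n' \
    > "$workdir/Makefile"
out=$(cd "$workdir" && QEMU_XEN_ENABLE_DEBUG=--disable-smartcard-nss make -s)
echo "$out"
rm -rf "$workdir"
```

GNU make appends to environment-origin variables on `+=`, so the echoed value contains both the user-supplied option and the debug flags.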

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	b=RYcxZYkAySg2CENwF7ZKFSmSCyHj/RNmWiDKZhlCPX+WRaf1ko29yuPh7Tl4iCgan3qyk017t0ZtUYhOBBxEnpUspA4QcT5aIm3cxA4TwPt/RODfykRXECgyCdNbcD0ywgyOuiYl2czlyztSG79j50KK1ZPtDrEpxWVxSxuatrw=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=gbvBzMkvsFuJhj8eFej08rQwSrEqa2+LO245IhbAohptIXFHQrAT5bx+PpDf36RGEWJz92h3wRfcPt45j1KsnPyacEHUfCf2qi8RCys2G9clp9Ejo0ZKT9KNIEoeIyC8dKVRLspi2RhjeeGdARCI2NlZIJDtJKArJkvcMmF0JeU=;
X-YMail-OSG: M_GlIYQVM1ku_9Q2HGKrtBhMVa4cYJjIse0N5CAuextmrkv
	pWoNXtSY8phoVw9ZGTwUciFuUxsMewjnvqJ6Qm6JoKhk94JF8s8O9vSoxg6g
	GNHvHOuLqDyLQJmsXcV9P8Paob4RnFcPYtYMyLYNczkZKLeazt0R0kSTbREW
	ewa9uOE0gQpMS2ZorBltFu7so5ZkMLHjdlEq3LwvGCH201FUvWQMWIh2Y3Ek
	d0l1S150EtToMzp3igmMS5GewoUkATL5ghh4aa2DLyiiB2xIzRhgpxRwjyHV
	nJBlRLjLyIa_EWWrPIfOVHOaMm5TNaLlSpA5XbL_Pwt8JK_.R3.yQZ_060Xx
	ySECPrQcs0uUfH7d3ktxCFEJTv0SFFcYd8Faask4xR3fQPQEeaiLceO_DpuF
	vTib7zWQOcNpbpCstjZOA_Ent35gMDNGkNEiYP9HfNArfAduZYc4i1WQ.xE3
	lWzCmx6IpdJxQDn8DVSuOendy7d3PdTElmH8OL7P5MlkWRk5JGUUaPlGTzCp
	2Rj_dzPl6M4Dr6PG6jWzviRhDmb6AOZtPDFX5tVT4QM7VPUdHiBXzFDDCYgp
	VRiU7FJ4nQigDoBky_w--
Received: from [192.227.225.3] by web161806.mail.bf1.yahoo.com via HTTP;
	Mon, 03 Feb 2014 04:56:10 PST
X-Rocket-MIMEInfo: 002.001,
	VGhhbmtzLCBob3cgaSBkZWZpbmUgbG9nZ2VyIGZvcsKgeGNfaW50ZXJmYWNlX29wZW4gdG8gb3V0cHV0IHByaW50PyEKY2FuIGkgdXNlIG9mIGNvZGUgeGNfc2F2ZS5jIGluIHhlbiA0LjMuMSBmb3IgbG9nZ2VyIGluIHhlbiA0LjEuMj8hCsKgCkFkZWwgQW1hbmkKTS5TYy4gQ2FuZGlkYXRlQENvbXB1dGVyIEVuZ2luZWVyaW5nIERlcGFydG1lbnQsIFVuaXZlcnNpdHkgb2YgSXNmYWhhbgpFbWFpbDogQS5BbWFuaUBlbmcudWkuYWMuaXIKCgoKT24gU3VuZGF5LCBGZWJydWFyeSAyLCAyMDE0IDE6MzAgUE0sIE8BMAEBAQE-
X-Mailer: YahooMailWebService/0.8.174.629
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
Message-ID: <1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
Date: Mon, 3 Feb 2014 04:56:10 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140202100044.GA5898@aepfle.de>
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 03 Feb 2014 13:08:48 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0641910218065150884=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0641910218065150884==
Content-Type: multipart/alternative; boundary="1665047788-284586231-1391432170=:33697"

--1665047788-284586231-1391432170=:33697
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Thanks, how do I define a logger for xc_interface_open to get printed output?
Can I use the logger code from xc_save.c in Xen 4.3.1 for Xen 4.1.2?

Adel Amani
M.Sc. Candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir



On Sunday, February 2, 2014 1:30 PM, Olaf Hering <olaf@aepfle.de> wrote:

On Sun, Feb 02, Adel Amani wrote:

> I checked the function in xc_domain_save.c and looked at line 1185,
> where snprintf() formats iter, sent_this_iter and skip_this_iter for
> printing (in Xen 4.1.2), but I don't know where these values get
> printed. Apart from xend.log, is there another place to look for
> print/debug output?

The output in 4.1 is sent to xend.log, but only if a logger function is
registered. Please follow the code from tools/xcutils/xc_save.c:main to
the actual xc_report_progress_start call in
tools/libxc/xc_domain_save.c; as you will note, xc_interface_open is
called without a logger, which means no output is printed.

For an example of how a logger could look, see the xc_interface_open
call in tools/xenpaging/xenpaging.c.

Olaf
--1665047788-284586231-1391432170=:33697--


--===============0641910218065150884==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0641910218065150884==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 13:11:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:11:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAJJX-0007b3-Dx; Mon, 03 Feb 2014 13:11:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WAJJW-0007aw-Ev
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 13:11:50 +0000
Received: from [85.158.137.68:26305] by server-16.bemta-3.messagelabs.com id
	FC/4B-29917-5959FE25; Mon, 03 Feb 2014 13:11:49 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391433108!9376505!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32740 invoked from network); 3 Feb 2014 13:11:48 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.160)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 13:11:48 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391433108; l=457;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=yz9/D1PB0KDV/7bwQxRMn/vkh+o=;
	b=RwGAap1/HFoWSxHE/lLmoxa7unjDomTRwNrezk8nKd81gpdzOxITLBbzwDULVJ+3A1b
	2heRg3AzyKob+unlRoC8sGcELL96tm2zm4UhwmRAsrzxzve1PEJkiowZYZghFSDNmqOHP
	zV+LNRjs6x4Ut0QPtPxdgEIiDxtlIE1BV40=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id 605bf8q13DBjTRN
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 3 Feb 2014 14:11:45 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id D0C9450269; Mon,  3 Feb 2014 14:11:44 +0100 (CET)
Date: Mon, 3 Feb 2014 14:11:44 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140203131144.GA31275@aepfle.de>
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, Adel Amani wrote:

> Thanks, how do I define a logger for xc_interface_open to get printed output?

See the example I gave in my reply; I have quoted it again below for
your convenience.

> Can I use the logger code from xc_save.c in Xen 4.3.1 for Xen 4.1.2?

If all required changes are backported, most likely yes.

Olaf


> For an example how a logger could look like see the xc_interface_open
> call in tools/xenpaging/xenpaging.c.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 13:25:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAJWU-0007xs-Q5; Mon, 03 Feb 2014 13:25:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAJWS-0007xn-Eh
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 13:25:13 +0000
Received: from [193.109.254.147:21656] by server-14.bemta-14.messagelabs.com
	id 09/3D-29228-7B89FE25; Mon, 03 Feb 2014 13:25:11 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391433908!1620573!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9553 invoked from network); 3 Feb 2014 13:25:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:25:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="99198402"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 13:25:08 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 08:25:07 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>
Date: Mon, 3 Feb 2014 13:24:58 +0000
Message-ID: <1391433898-10533-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v7] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this patch does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override they either follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now wrappers which call __gnttab_[un]map_refs with
  m2p_override false
- the new functions gnttab_[un]map_refs_userspace provide the old behaviour

It also removes a stray space from page.h and changes ret to 0 in the
XENFEAT_auto_translated_physmap case, as that is the only possible return
value there.
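As a rough illustration of the split described above, the wrapper pattern
looks like the following. This is a minimal, self-contained sketch, not the
kernel code itself: the *_stub helpers and the struct bodies are stand-ins
for the real p2m machinery, and error handling is reduced to early returns.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for the real kernel types (hypothetical, for illustration only). */
struct gnttab_map_grant_ref { int dummy; };
struct page { int dummy; };

/* Common p2m update done for every mapping (stub for set_foreign_p2m_mapping). */
static int set_foreign_p2m_mapping_stub(struct page *page)
{
	(void)page;
	return 0;
}

/* Lock-taking override path needed only for userspace mappings (stub). */
static int m2p_add_override_stub(struct page *page,
				 struct gnttab_map_grant_ref *kmap_op)
{
	(void)page; (void)kmap_op;
	return 0;
}

/* Core worker: m2p_override selects between the two behaviours. */
static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
			     struct gnttab_map_grant_ref *kmap_ops,
			     struct page **pages, unsigned int count,
			     bool m2p_override)
{
	unsigned int i;
	int ret;

	(void)map_ops;
	for (i = 0; i < count; i++) {
		ret = set_foreign_p2m_mapping_stub(pages[i]);
		if (ret)
			return ret;
		if (m2p_override) {
			ret = m2p_add_override_stub(pages[i],
						    kmap_ops ? &kmap_ops[i] : NULL);
			if (ret)
				return ret;
		}
	}
	return 0;
}

/* Kernel-only callers (blkback, netback) skip the override entirely. */
int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
		    struct page **pages, unsigned int count)
{
	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
}

/* gntdev keeps the old behaviour via the _userspace variant. */
int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
			      struct gnttab_map_grant_ref *kmap_ops,
			      struct page **pages, unsigned int count)
{
	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
}
```

The unmap side follows the same shape, with restore_foreign_p2m_mapping and
m2p_remove_override in place of the two helpers above.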

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain the old behaviour where it is needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add details on why ret = 0 now in some places

v6:
- don't pass pfn to m2p* functions, just get it locally

v7:
- the previous version broke the build on ARM, as there is no need for those
  p2m changes there. I've moved them into arch-specific functions, which are
  stubs on ARM

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/arm/include/asm/xen/page.h     |   15 ++++++-
 arch/x86/include/asm/xen/page.h     |    7 +++-
 arch/x86/xen/p2m.c                  |   49 ++++++++++++++++-------
 drivers/block/xen-blkback/blkback.c |   15 +++----
 drivers/xen/gntdev.c                |   13 +++---
 drivers/xen/grant-table.c           |   75 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 7 files changed, 136 insertions(+), 46 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index 3759cac..97fa5f9 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -97,13 +97,26 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
 	return NULL;
 }
 
+static inline int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	return 0;
+}
+
 static inline int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
 {
 	return 0;
 }
 
-static inline int m2p_remove_override(struct page *page, bool clear_pte)
+static inline int restore_foreign_p2m_mapping(struct page *page,
+					      unsigned long mfn)
+{
+	return 0;
+}
+
+static inline int m2p_remove_override(struct page *page,
+				      struct gnttab_map_grant_ref *kmap_op,
+				      unsigned long mfn)
 {
 	return 0;
 }
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index b913915..e3fa4b8 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,13 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
+extern int set_foreign_p2m_mapping(struct page *page, unsigned long mfn);
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
+extern int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +124,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 2ae8699..524ace6 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -870,6 +870,22 @@ static unsigned long mfn_hash(unsigned long mfn)
 	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
 }
 
+int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	WARN_ON(PagePrivate(page));
+	SetPagePrivate(page);
+	set_page_private(page, mfn);
+
+	page->index = pfn_to_mfn(pfn);
+	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
@@ -888,13 +904,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -932,20 +941,33 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
+
+int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
+		return -EINVAL;
+
+	set_page_private(page, INVALID_P2M_ENTRY);
+	WARN_ON(!PagePrivate(page));
+	ClearPagePrivate(page);
+	set_phys_to_machine(pfn, page->index);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restore_foreign_p2m_mapping);
+
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -959,10 +981,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index e41c79c..e652c0e 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index aa846a4..9a0be47 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -880,15 +880,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -907,10 +909,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -927,8 +931,14 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+
+		ret = set_foreign_p2m_mapping(pages[i], mfn);
+		if (ret)
+			goto out;
+
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
 		if (ret)
 			goto out;
 	}
@@ -939,15 +949,32 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
+	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -958,17 +985,27 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		mfn = get_phys_to_machine(page_to_pfn(pages[i]));
+		ret = restore_foreign_p2m_mapping(pages[i], mfn);
+		if (ret)
+			goto out;
+
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -979,8 +1016,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 694dcaf..9a919b1 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -184,11 +184,15 @@ unsigned int gnttab_max_grant_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 13:27:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAJYh-00085U-JG; Mon, 03 Feb 2014 13:27:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAJYg-00085O-Uc
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 13:27:31 +0000
Received: from [193.109.254.147:32779] by server-16.bemta-14.messagelabs.com
	id 42/F7-21945-2499FE25; Mon, 03 Feb 2014 13:27:30 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391434048!1641330!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23052 invoked from network); 3 Feb 2014 13:27:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:27:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97269764"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 13:27:28 +0000
Received: from [10.68.14.37] (10.68.14.37) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	08:27:27 -0500
Message-ID: <52EF993D.1000701@citrix.com>
Date: Mon, 3 Feb 2014 14:27:25 +0100
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1390512224-27296-1-git-send-email-zoltan.kiss@citrix.com>	<alpine.DEB.2.02.1401232333280.15917@kaball.uk.xensource.com>	<52EE1E26.2040308@linaro.org>
	<52EE93F0.1020508@citrix.com> <52EF7618.7030402@citrix.com>
In-Reply-To: <52EF7618.7030402@citrix.com>
X-Originating-IP: [10.68.14.37]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	netdev@vger.kernel.org, Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v6] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 11:57, David Vrabel wrote:
>>
>> Hi,
>>
>> That's bad indeed. I think the best solution is to put those parts
>> behind an #ifdef x86. The ones moved from x86/p2m.c to grant-table.c.
>> David, Stefano, what do you think?
> I don't think we want (more) #ifdef CONFIG_X86 in grant-table.c, and the
> arch-specific bits will have to be factored out into their own functions,
> with suitable stubs provided for ARM.
>
I've just sent v7 with stubs; I guess that's what you suggested. 
Please review it; I'm especially curious about your thoughts on the 
new function name.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 13:47:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAJsA-0000Do-Io; Mon, 03 Feb 2014 13:47:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAJs8-0000Dj-N8
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 13:47:37 +0000
Received: from [85.158.143.35:36531] by server-2.bemta-4.messagelabs.com id
	3B/3E-10891-7FD9FE25; Mon, 03 Feb 2014 13:47:35 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391435253!2757647!1
X-Originating-IP: [209.85.216.180]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2009 invoked from network); 3 Feb 2014 13:47:34 -0000
Received: from mail-qc0-f180.google.com (HELO mail-qc0-f180.google.com)
	(209.85.216.180)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:47:34 -0000
Received: by mail-qc0-f180.google.com with SMTP id i17so10895364qcy.25
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 05:47:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=IABxOxMSpB6xRSWw4e59QFT7yAzPEGcHxuGb4s8cbl8=;
	b=OmY1FHs/mdpdFVU1wB0H/GkAR9AWg+2AFfePa2yp6resB9EVPMYQbx5tS3lfm1UmdC
	er2cH1GJOh81fUs44b9j+CsvFW6f5I57XmDlgG4wuEAddihwvfJfQnqUoIEESsD/oCHR
	IVT5xq9GWdOyjrc1SUpHT0PerWT0NfIJMvltto1tzmCx3Po4MjxAr0SB+gbUiqYki8rN
	KKMZ7DpdlerPABMMIzCItwZwuEGcaA27GqygXPi3gK9wXf3c1meDsKdwy15ezYFxbmmL
	lQc7fVs4OCyJVbqzYr+DgaPqDdHibv7T4YR8KQqN5y3xc9PmUqJt2DvBaPjK0n+dwrNd
	Frbg==
MIME-Version: 1.0
X-Received: by 10.140.94.74 with SMTP id f68mr53237124qge.64.1391435252880;
	Mon, 03 Feb 2014 05:47:32 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 05:47:32 -0800 (PST)
Date: Mon, 3 Feb 2014 22:47:32 +0900
Message-ID: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=001a113a332af9ed1b04f180c397
Cc: jbeulich@suse.com, xiantao.zhang@intel.com, suravee.suthikulpanit@amd.com
Subject: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a113a332af9ed1b04f180c397
Content-Type: text/plain; charset=ISO-8859-1

My AMD APU-based system crashes completely when I try to use HVM
domUs. I asked earlier on the users list
(http://lists.xen.org/archives/html/xen-users/2013-11/msg00063.html) and
was recommended to ask here.
I also found the same problem described with another AMD APU here:
http://lists.xen.org/archives/html/xen-devel/2013-08/msg01395.html

My system crashes every time I start an HVM domU. However, if I use Xen
compiled with debug info, it runs stably for at least several hours (not
tested for longer).

My system is openSUSE 13.1 with Xen 4.4.
My hardware:

ASRock  FM2A75 Pro4
AMD A8-6600K APU
Gigabyte Radeon 7850
8 GB DDR3 1600 MHz

I've tested with a fresh Xen 4.4 build, and it crashes my system just as stable Xen 4.3 does.

I set up a PCI serial console and captured the Xen log (the === ===
lines were added by me). The Xen and dom0 dmesg logs are also attached.

--001a113a332af9ed1b04f180c397
Content-Type: text/plain; charset=US-ASCII; name="xen-serial-new.log"
Content-Disposition: attachment; filename="xen-serial-new.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sbozi0

WGVuIDQuNC4wXzAyLTI5Ny4xCihYRU4pIFhlbiB2ZXJzaW9uIDQuNC4wXzAyLTI5Ny4xIChhYnVp
bGRAKSAoZ2NjIChTVVNFIExpbnV4KSA0LjguMSAyMDEzMDkwOSBbZ2NjLTRfOC1icmFuY2ggcmV2
aXNpb24gMjAyMzg4XSkgZGVidWc9biBUdWUgSmFuIDI4IDE2OjA4OjQ4IFVUQyAyMDE0CihYRU4p
IExhdGVzdCBDaGFuZ2VTZXQ6IAooWEVOKSBCb290bG9hZGVyOiBHUlVCMiAyLjAwCihYRU4pIENv
bW1hbmQgbGluZTogbG9nbHZsPWFsbCBpb21tdT1kZWJ1Zyx2ZXJib3NlIGFwaWNfdmVyYm9zaXR5
PWRlYnVnIGNvbnNvbGU9Y29tMSBjb20xPTExNTIwMCw4bjEscGNpCihYRU4pIFZpZGVvIGluZm9y
bWF0aW9uOgooWEVOKSAgVkdBIGlzIHRleHQgbW9kZSA4MHgyNSwgZm9udCA4eDE2CihYRU4pICBW
QkUvRERDIG1ldGhvZHM6IFYyOyBFRElEIHRyYW5zZmVyIHRpbWU6IDEgc2Vjb25kcwooWEVOKSBE
aXNjIGluZm9ybWF0aW9uOgooWEVOKSAgRm91bmQgMyBNQlIgc2lnbmF0dXJlcwooWEVOKSAgRm91
bmQgNCBFREQgaW5mb3JtYXRpb24gc3RydWN0dXJlcwooWEVOKSBYZW4tZTgyMCBSQU0gbWFwOgoo
WEVOKSAgMDAwMDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWU4MDAgKHVzYWJsZSkKKFhFTikg
IDAwMDAwMDAwMDAwOWU4MDAgLSAwMDAwMDAwMDAwMGEwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAw
MDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAw
MDAwMDAxMDAwMDAgLSAwMDAwMDAwMDhkNjhiMDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDhk
NjhiMDAwIC0gMDAwMDAwMDA4ZGQwYTAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDhkZDBh
MDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpCihYRU4pICAwMDAwMDAwMDhlMDVhMDAw
IC0gMDAwMDAwMDA4ZWE0NTAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDhlYTQ1MDAwIC0g
MDAwMDAwMDA4ZWE0NjAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDA4ZWE0NjAwMCAtIDAwMDAw
MDAwOGVjNGMwMDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDA4ZWM0YzAwMCAtIDAwMDAwMDAw
OGYwNjQwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwOGYwNjQwMDAgLSAwMDAwMDAwMDhmN2Yz
MDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwOGY3ZjMwMDAgLSAwMDAwMDAwMDhmODAwMDAw
ICh1c2FibGUpCihYRU4pICAwMDAwMDAwMGZlYzAwMDAwIC0gMDAwMDAwMDBmZWMwMTAwMCAocmVz
ZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGZlYzEwMDAwIC0gMDAwMDAwMDBmZWMxMTAwMCAocmVzZXJ2
ZWQpCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0gMDAwMDAwMDBmZWQwMTAwMCAocmVzZXJ2ZWQp
CihYRU4pICAwMDAwMDAwMGZlZDgwMDAwIC0gMDAwMDAwMDBmZWQ5MDAwMCAocmVzZXJ2ZWQpCihY
RU4pICAwMDAwMDAwMGZmODAwMDAwIC0gMDAwMDAwMDEwMDAwMDAwMCAocmVzZXJ2ZWQpCihYRU4p
ICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAwMDI1MDAwMDAwMCAodXNhYmxlKQooWEVOKSBBQ1BJ
OiBSU0RQIDAwMEYwNDkwLCAwMDI0IChyMiBBTEFTS0EpCihYRU4pIEFDUEk6IFhTRFQgOEUwNEEw
NzgsIDAwNzQgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQooWEVO
KSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAwMEY0IChyNCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkg
QU1JICAgICAxMDAxMykKKFhFTikgQUNQSSBXYXJuaW5nICh0YmZhZHQtMDQ2NCk6IE9wdGlvbmFs
IGZpZWxkICJQbTJDb250cm9sQmxvY2siIGhhcyB6ZXJvIGFkZHJlc3Mgb3IgbGVuZ3RoOiAwMDAw
MDAwMDAwMDAwMDAwLzEgWzIwMDcwMTI2XQooWEVOKSBBQ1BJOiBEU0RUIDhFMDRBMTg4LCA1RjlF
IChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAgIDAgSU5UTCAyMDA1MTExNykKKFhFTikgQUNQSTog
RkFDUyA4RTA1MkU4MCwgMDA0MAooWEVOKSBBQ1BJOiBBUElDIDhFMDUwMjIwLCAwMDcyIChyMyBB
TEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhFTikgQUNQSTogRlBEVCA4
RTA1MDI5OCwgMDA0NCAocjEgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMp
CihYRU4pIEFDUEk6IE1DRkcgOEUwNTAyRTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3
MjAwOSBNU0ZUICAgIDEwMDEzKQooWEVOKSBBQ1BJOiBBQUZUIDhFMDUwMzIwLCAwMEU3IChyMSBB
TEFTS0EgT0VNQUFGVCAgIDEwNzIwMDkgTVNGVCAgICAgICA5NykKKFhFTikgQUNQSTogSFBFVCA4
RTA1MDQwOCwgMDAzOCAocjEgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgICAgIDUp
CihYRU4pIEFDUEk6IElWUlMgOEUwNTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAg
ICAgMSBBTUQgICAgICAgICAwKQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwNEIwLCAwQTYwIChyMSAg
ICBBTUQgQU5OQVBVUk4gICAgICAgIDEgQU1EICAgICAgICAgMSkKKFhFTikgQUNQSTogU1NEVCA4
RTA1MEYxMCwgMDRCNyAocjIgICAgQU1EIEFOTkFQVVJOICAgICAgICAxIE1TRlQgIDQwMDAwMDAp
CihYRU4pIEFDUEk6IENSQVQgOEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAg
ICAgMSBBTUQgICAgICAgICAxKQooWEVOKSBTeXN0ZW0gUkFNOiA3NjQyTUIgKDc4MjU3MjBrQikK
KFhFTikgTm8gTlVNQSBjb25maWd1cmF0aW9uIGZvdW5kCihYRU4pIEZha2luZyBhIG5vZGUgYXQg
MDAwMDAwMDAwMDAwMDAwMC0wMDAwMDAwMjUwMDAwMDAwCihYRU4pIERvbWFpbiBoZWFwIGluaXRp
YWxpc2VkCihYRU4pIGZvdW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmZDkwMAooWEVOKSBETUkgMi43
IHByZXNlbnQuCihYRU4pIEFQSUMgYm9vdCBzdGF0ZSBpcyAneGFwaWMnCihYRU4pIFVzaW5nIEFQ
SUMgZHJpdmVyIGRlZmF1bHQKKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgKKFhF
TikgQUNQSTogU0xFRVAgSU5GTzogcG0xeF9jbnRbODA0LDBdLCBwbTF4X2V2dFs4MDAsMF0KKFhF
TikgQUNQSTogMzIvNjRYIEZBQ1MgYWRkcmVzcyBtaXNtYXRjaCBpbiBGQURUIC0gOGUwNTJlODAv
MDAwMDAwMDAwMDAwMDAwMCwgdXNpbmcgMzIKKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVw
X3ZlY1s4ZTA1MmU4Y10sIHZlY19zaXplWzIwXQooWEVOKSBBQ1BJOiBMb2NhbCBBUElDIGFkZHJl
c3MgMHhmZWUwMDAwMAooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAxXSBsYXBpY19pZFsw
eDEwXSBlbmFibGVkKQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYKKFhF
TikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgxMV0gZW5hYmxlZCkKKFhF
TikgUHJvY2Vzc29yICMxNyA1OjMgQVBJQyB2ZXJzaW9uIDE2CihYRU4pIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjMTgg
NTozIEFQSUMgdmVyc2lvbiAxNgooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBp
Y19pZFsweDEzXSBlbmFibGVkKQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElDIHZlcnNpb24g
MTYKKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4
MV0pCihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwNV0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lf
YmFzZVswXSkKKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24gMzMsIGFkZHJlc3Mg
MHhmZWMwMDAwMCwgR1NJIDAtMjMKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19p
cnEgMCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAw
IGJ1c19pcnEgOSBnbG9iYWxfaXJxIDkgbG93IGxldmVsKQooWEVOKSBBQ1BJOiBJUlEwIHVzZWQg
Ynkgb3ZlcnJpZGUuCihYRU4pIEFDUEk6IElSUTIgdXNlZCBieSBvdmVycmlkZS4KKFhFTikgQUNQ
STogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLgooWEVOKSBFbmFibGluZyBBUElDIG1vZGU6ICBGbGF0
LiAgVXNpbmcgMSBJL08gQVBJQ3MKKFhFTikgQUNQSTogSFBFVCBpZDogMHgxMDIyODIxMCBiYXNl
OiAweGZlZDAwMDAwCihYRU4pIEVSU1QgdGFibGUgd2FzIG5vdCBmb3VuZAooWEVOKSBVc2luZyBB
Q1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24KKFhFTikgU01QOiBB
bGxvd2luZyA0IENQVXMgKDAgaG90cGx1ZyBDUFVzKQooWEVOKSBOUl9DUFVTOjUxMiBucl9jcHVt
YXNrX2JpdHM6NjQKKFhFTikgbWFwcGVkIEFQSUMgdG8gZmZmZjgyY2ZmZmJmYjAwMCAoZmVlMDAw
MDApCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyY2ZmZmJmYTAwMCAoZmVjMDAwMDApCihY
RU4pIElSUSBsaW1pdHM6IDI0IEdTSSwgNzYwIE1TSS9NU0ktWAooWEVOKSBVc2luZyBzY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCihYRU4pIERldGVjdGVkIDM4OTMuMDIy
IE1IeiBwcm9jZXNzb3IuCihYRU4pIEluaXRpbmcgbWVtb3J5IHNoYXJpbmcuCihYRU4pIHhzdGF0
ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAweDNjMCBhbmQgc3RhdGVzOiAweDQwMDAwMDAwMDAw
MDAwMDcKKFhFTikgQU1EIEZhbTE1aCBtYWNoaW5lIGNoZWNrIHJlcG9ydGluZyBlbmFibGVkCihY
RU4pIFBDSTogTUNGRyBjb25maWd1cmF0aW9uIDA6IGJhc2UgZTAwMDAwMDAgc2VnbWVudCAwMDAw
IGJ1c2VzIDAwIC0gZmYKKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAw
IGJ1cyAwMC1mZgooWEVOKSBBTUQtVmk6IEZvdW5kIE1TSSBjYXBhYmlsaXR5IGJsb2NrIGF0IDB4
NTQKKFhFTikgQU1ELVZpOiBBQ1BJIFRhYmxlOgooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZS
UwooWEVOKSBBTUQtVmk6ICBMZW5ndGggMHg3MAooWEVOKSBBTUQtVmk6ICBSZXZpc2lvbiAweDIK
KFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0gMHhlOAooWEVOKSBBTUQtVmk6ICBPRU1fSWQgQU1ECihY
RU4pIEFNRC1WaTogIE9FTV9UYWJsZV9JZCBBTk5BUFVSTgooWEVOKSBBTUQtVmk6ICBPRU1fUmV2
aXNpb24gMHgxCihYRU4pIEFNRC1WaTogIENyZWF0b3JfSWQgQU1EIAooWEVOKSBBTUQtVmk6ICBD
cmVhdG9yX1JldmlzaW9uIDAKKFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxh
Z3MgMHhmZSBsZW4gMHg0MCBpZCAweDIKKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTog
dHlwZSAweDMgaWQgMHg4IGZsYWdzIDAKKFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDgg
LT4gMHhmZmZlCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAw
eDIwMCBmbGFncyAwCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHgyMDAgLT4gMHgyZmYg
YWxpYXMgMHhhNAooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDAgaWQgMCBm
bGFncyAwCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZs
YWdzIDAKKFhFTikgQU1ELVZpOiBJVkhEIFNwZWNpYWw6IDAwMDA6MDA6MTQuMCB2YXJpZXR5IDB4
MiBoYW5kbGUgMAooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDggaWQg
MCBmbGFncyAweGQ3CihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFy
aWV0eSAweDEgaGFuZGxlIDB4NQooWEVOKSBBTUQtVmk6IElPTU1VIEV4dGVuZGVkIEZlYXR1cmVz
OgooWEVOKSAgLSBQcmVmZXRjaCBQYWdlcyBDb21tYW5kCihYRU4pICAtIFBlcmlwaGVyYWwgUGFn
ZSBTZXJ2aWNlIFJlcXVlc3QKKFhFTikgIC0gR3Vlc3QgVHJhbnNsYXRpb24KKFhFTikgIC0gSW52
YWxpZGF0ZSBBbGwgQ29tbWFuZAooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4KKFhFTikg
QU1ELVZpOiBHdWVzdCBUcmFuc2xhdGlvbiBFbmFibGVkLgooWEVOKSBBTUQtVmk6IElPTU1VIDAg
RW5hYmxlZC4KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQKKFhFTikgIC0gRG9tMCBt
b2RlOiBSZWxheGVkCihYRU4pIEludGVycnVwdCByZW1hcHBpbmcgZW5hYmxlZAooWEVOKSBHZXR0
aW5nIFZFUlNJT046IDgwMDUwMDEwCihYRU4pIEdldHRpbmcgVkVSU0lPTjogODAwNTAwMTAKKFhF
TikgR2V0dGluZyBJRDogMTAwMDAwMDAKKFhFTikgR2V0dGluZyBMVlQwOiA3MDAKKFhFTikgR2V0
dGluZyBMVlQxOiA0MDAKKFhFTikgZW5hYmxlZCBFeHRJTlQgb24gQ1BVIzAKKFhFTikgRU5BQkxJ
TkcgSU8tQVBJQyBJUlFzCihYRU4pICAtPiBVc2luZyBvbGQgQUNLIG1ldGhvZAooWEVOKSBpbml0
IElPX0FQSUMgSVJRcwooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBpbikgNS0wLCA1LTE2LCA1LTE3
LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5vdCBjb25uZWN0ZWQuCihYRU4p
IC4uVElNRVI6IHZlY3Rvcj0weEYwIGFwaWMxPTAgcGluMT0yIGFwaWMyPS0xIHBpbjI9LTEKKFhF
TikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4KKFhFTikgbnVtYmVyIG9mIElPLUFQSUMg
IzUgcmVnaXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uCihYRU4pIElPIEFQSUMgIzUuLi4uLi4KKFhFTikgLi4uLiByZWdpc3RlciAjMDA6
IDA1MDAwMDAwCihYRU4pIC4uLi4uLi4gICAgOiBwaHlzaWNhbCBBUElDIGlkOiAwNQooWEVOKSAu
Li4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTogMAooWEVOKSAuLi4uLi4uICAgIDogTFRTICAgICAg
ICAgIDogMAooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzgwMjEKKFhFTikgLi4uLi4uLiAg
ICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6IFBS
USBpbXBsZW1lbnRlZDogMQooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAy
MQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMjogMDUwMDAwMDAKKFhFTikgLi4uLi4uLiAgICAgOiBh
cmJpdHJhdGlvbjogMDUKKFhFTikgLi4uLiByZWdpc3RlciAjMDM6IDA1MDE4MDIxCihYRU4pIC4u
Li4uLi4gICAgIDogQm9vdCBEVCAgICA6IDEKKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFi
bGU6CihYRU4pICBOUiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZl
Y3Q6ICAgCihYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwCihYRU4pICAwMSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMw
CihYRU4pICAwMiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEYwCihY
RU4pICAwMyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDM4CihYRU4p
ICAwNCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwCihYRU4pICAw
NSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQ4CihYRU4pICAwNiAw
MDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDUwCihYRU4pICAwNyAwMDEg
MDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4CihYRU4pICAwOCAwMDEgMDEg
IDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDYwCihYRU4pICAwOSAwMDEgMDEgIDEg
ICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAwICAgIDAwCihYRU4pICAwYSAwMDEgMDEgIDAgICAg
MCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEYxCihYRU4pICAwYiAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDcwCihYRU4pICAwYyAwMDEgMDEgIDAgICAgMCAgICAw
ICAgMCAgIDAgICAgMSAgICAxICAgIDc4CihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDg4CihYRU4pICAwZSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDkwCihYRU4pICAwZiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDk4CihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAxICAgIDMwCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAxICAgIDMwCihYRU4pICAxMiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAx
ICAgIDMwCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMw
CihYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwCihY
RU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwCihYRU4p
ICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwCihYRU4pIFVz
aW5nIHZlY3Rvci1iYXNlZCBpbmRleGluZwooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOgooWEVO
KSBJUlEyNDAgLT4gMDoyCihYRU4pIElSUTQ4IC0+IDA6MQooWEVOKSBJUlE1NiAtPiAwOjMKKFhF
TikgSVJRNjQgLT4gMDo0CihYRU4pIElSUTcyIC0+IDA6NQooWEVOKSBJUlE4MCAtPiAwOjYKKFhF
TikgSVJRODggLT4gMDo3CihYRU4pIElSUTk2IC0+IDA6OAooWEVOKSBJUlExMDQgLT4gMDo5CihY
RU4pIElSUTI0MSAtPiAwOjEwCihYRU4pIElSUTExMiAtPiAwOjExCihYRU4pIElSUTEyMCAtPiAw
OjEyCihYRU4pIElSUTEzNiAtPiAwOjEzCihYRU4pIElSUTE0NCAtPiAwOjE0CihYRU4pIElSUTE1
MiAtPiAwOjE1CihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLiBkb25l
LgooWEVOKSBVc2luZyBsb2NhbCBBUElDIHRpbWVyIGludGVycnVwdHMuCihYRU4pIGNhbGlicmF0
aW5nIEFQSUMgdGltZXIgLi4uCihYRU4pIC4uLi4uIENQVSBjbG9jayBzcGVlZCBpcyAzODkzLjAw
MzMgTUh6LgooWEVOKSAuLi4uLiBob3N0IGJ1cyBjbG9jayBzcGVlZCBpcyA5OS44MjA2IE1Iei4K
KFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM5CihYRU4pIFBsYXRmb3JtIHRpbWVyIGlzIDE0
LjMxOE1IeiBIUEVUCihYRU4pIEFsbG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgMzIgS2lCLgooWEVO
KSBIVk06IEFTSURzIGVuYWJsZWQuCihYRU4pIFNWTTogU3VwcG9ydGVkIGFkdmFuY2VkIGZlYXR1
cmVzOgooWEVOKSAgLSBOZXN0ZWQgUGFnZSBUYWJsZXMgKE5QVCkKKFhFTikgIC0gTGFzdCBCcmFu
Y2ggUmVjb3JkIChMQlIpIFZpcnR1YWxpc2F0aW9uCihYRU4pICAtIE5leHQtUklQIFNhdmVkIG9u
ICNWTUVYSVQKKFhFTikgIC0gVk1DQiBDbGVhbiBCaXRzCihYRU4pICAtIERlY29kZUFzc2lzdHMK
KFhFTikgIC0gUGF1c2UtSW50ZXJjZXB0IEZpbHRlcgooWEVOKSAgLSBUU0MgUmF0ZSBNU1IKKFhF
TikgSFZNOiBTVk0gZW5hYmxlZAooWEVOKSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAo
SEFQKSBkZXRlY3RlZAooWEVOKSBIVk06IEhBUCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCCihY
RU4pIEhWTTogUFZIIG1vZGUgbm90IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtCihYRU4pIG1h
c2tlZCBFeHRJTlQgb24gQ1BVIzEKKFhFTikgbWljcm9jb2RlOiBDUFUxIGNvbGxlY3RfY3B1X2lu
Zm86IHBhdGNoX2lkPTB4NjAwMTExOQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyCihYRU4p
IG1pY3JvY29kZTogQ1BVMiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0wCihYRU4pIG1hc2tl
ZCBFeHRJTlQgb24gQ1BVIzMKKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86
IHBhdGNoX2lkPTAKKFhFTikgQnJvdWdodCB1cCA0IENQVXMKKFhFTikgQUNQSSBzbGVlcCBtb2Rl
czogUzMKKFhFTikgTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZy
ZXF1ZW5jeQooWEVOKSBtY2hlY2tfcG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0
YXJ0ZWQuCihYRU4pIG10cnI6IHlvdXIgQ1BVcyBoYWQgaW5jb25zaXN0ZW50IHZhcmlhYmxlIE1U
UlIgc2V0dGluZ3MKKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVw
IGFsbCBDUFVzLgooWEVOKSBtdHJyOiBjb3JyZWN0ZWQgY29uZmlndXJhdGlvbi4KKFhFTikgKioq
IExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNv
bXBhdDMyCihYRU4pICBEb20wIGtlcm5lbDogNjQtYml0LCBsc2IsIHBhZGRyIDB4MjAwMCAtPiAw
eGMwNTAwMAooWEVOKSBQSFlTSUNBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6CihYRU4pICBEb20wIGFs
bG9jLjogICAwMDAwMDAwMjIzMDAwMDAwLT4wMDAwMDAwMjI0MDAwMDAwICgxNzQyMDgzIHBhZ2Vz
IHRvIGJlIGFsbG9jYXRlZCkKKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGNmNzUwMDAt
PjAwMDAwMDAyNGZmZmZlMDAKKFhFTikgVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6CihYRU4p
ICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgwMDAyMDAwLT5mZmZmZmZmZjgwYzA1MDAwCihYRU4p
ICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICBQaHlzLU1hY2ggbWFwOiBmZmZmZWEwMDAwMDAwMDAwLT5mZmZmZWEwMDAwZDZhYzcwCihYRU4p
ICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgwYzA1MDAwLT5mZmZmZmZmZjgwYzA1NGI0CihYRU4p
ICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgwYzA2MDAwLT5mZmZmZmZmZjgwYzExMDAwCihYRU4p
ICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgwYzExMDAwLT5mZmZmZmZmZjgwYzEyMDAwCihYRU4p
ICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZmZjgxMDAwMDAwCihYRU4p
ICBFTlRSWSBBRERSRVNTOiBmZmZmZmZmZjgwMDAyMDAwCihYRU4pIERvbTAgaGFzIG1heGltdW0g
NCBWQ1BVcwooWEVOKSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MDAuMAoo
WEVOKSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDowMC4yCihYRU4pIHNldHVw
IDAwMDA6MDA6MDAuMiBmb3IgZDAgZmFpbGVkICgtMTkpCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5LCB0eXBlID0gMHgxLCByb290IHRhYmxl
ID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEwLCB0eXBlID0gMHgyLCByb290
IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgwLCB0eXBlID0gMHgx
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgxLCB0eXBl
ID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDg4
LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2lu
ZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweDkwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAs
IHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZp
Y2UgaWQgPSAweDkyLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxl
OiBkZXZpY2UgaWQgPSAweDk4LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAs
IGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRm
YzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGExLCB0eXBlID0gMHg3LCByb290IHRh
YmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEzLCB0eXBlID0gMHg3LCBy
b290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVO
KSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE0LCB0eXBlID0g
MHg1LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
MwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE1LCB0
eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eGE4LCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweGFhLCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGFiLCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGMwLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMxLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgy
MjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMyLCB0eXBlID0gMHg2LCByb290IHRhYmxl
ID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMzLCB0eXBlID0gMHg2LCByb290
IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGM0LCB0eXBlID0gMHg2
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGM1LCB0eXBl
ID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEw
MCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHgxMDEsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4MjMwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweDIzOCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgzMDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NDAwLCB0eXBlID0gMHgxLCByb290IHRh
YmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwMCwgdHlwZSA9IDB4MSwg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhF
TikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4KKFhFTikgSW5pdGlhbCBsb3cgbWVtb3J5IHZp
cnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFs
bAooWEVOKSBHdWVzdCBMb2dsZXZlbDogTm90aGluZyAoUmF0ZS1saW1pdGVkOiBFcnJvcnMgYW5k
IHdhcm5pbmdzKQooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScg
dGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikKKFhFTikgRnJlZWQgMjkya0IgaW5p
dCBtZW1vcnkuCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwMDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAw
MDAwLgooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAwMC4K
KFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAw
MDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAuCihYRU4p
IHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwMDAwMDA0MTMg
ZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAwMDAwLgooWEVOKSBQQ0k6
IFVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBidXMgMDAtZmYKKFhFTikgbW0uYzo4MDk6IGQw
OiBGb3JjaW5nIHJlYWQtb25seSBhY2Nlc3MgdG8gTUZOIGUwMDAyCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjIKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MDEuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAyLjAKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTAuMQooWEVOKSBT
Ui1JT1YgZGV2aWNlIDAwMDA6MDA6MTEuMCBoYXMgaXRzIHZpcnR1YWwgZnVuY3Rpb25zIGFscmVh
ZHkgZW5hYmxlZCAoMDFhYikKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMS4wCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTIuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjEyLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTMuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjAKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MTQuMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjQKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxNC41CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMAooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE1LjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
NS4zCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMAooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjE4LjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4yCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4
LjQKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC41CihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDE6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjEKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMjowNi4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDI6MDcu
MAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAzOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowNDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDU6MDAuMAooWEVOKSBJT0FQ
SUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xNiAtPiAweGEwIC0+IElSUSAxNiBNb2Rl
OjEgQWN0aXZlOjEpCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE3
IC0+IDB4YTggLT4gSVJRIDE3IE1vZGU6MSBBY3RpdmU6MSkKKFhFTikgSU9BUElDWzBdOiBTZXQg
UENJIHJvdXRpbmcgZW50cnkgKDUtMTggLT4gMHhiMCAtPiBJUlEgMTggTW9kZToxIEFjdGl2ZTox
KQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOSAtPiAweGI4IC0+
IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5n
IGVudHJ5ICg1LTIxIC0+IDB4YzAgLT4gSVJRIDIxIE1vZGU6MSBBY3RpdmU6MSkKKFhFTikgSU9B
UElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDUtMjIgLT4gMHhjOCAtPiBJUlEgMjIgTW9k
ZToxIEFjdGl2ZToxKQpbICAgMTYuMjYyOTg5XSBVbmFibGUgdG8gcmVhZCBzeXNycSBjb2RlIGlu
IGNvbnRyb2wvc3lzcnEKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkKKFhFTikgQVBJ
QyBlcnJvciBvbiBDUFUyOiAwMCg0MCkKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUzOiAwMCg0MCkK
KFhFTikgQVBJQyBlcnJvciBvbiBDUFUwOiAwMCg0MCkKPT09IGh2bSBkb20gc3RhcnRlZCAgPT09
PQooWEVOKSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUgPSAw
eDEwMDAyOQooWEVOKSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFi
bGUgPSAweDEwMDFiZQooWEVOKSBBTUQtVmk6IERpc2FibGU6IGRldmljZSBpZCA9IDB4OCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFi
bGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MTAwMWJlMDAw
LCBkb21haW4gPSAyLCBwYWdpbmcgbW9kZSA9IDQKKFhFTikgQU1ELVZpOiBSZS1hc3NpZ24gMDAw
MDowMDowMS4wIGZyb20gZG9tMCB0byBkb20yCihYRU4pIEFNRC1WaTogRGlzYWJsZTogZGV2aWNl
IGlkID0gMHg5LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5LCB0eXBlID0gMHgxLCByb290IHRhYmxl
ID0gMHgxMDAxYmUwMDAsIGRvbWFpbiA9IDIsIHBhZ2luZyBtb2RlID0gNAooWEVOKSBBTUQtVmk6
IFJlLWFzc2lnbiAwMDAwOjAwOjAxLjEgZnJvbSBkb20wIHRvIGRvbTIKPT09IHdob2xlIHN5c3Rl
bSBjcmFzaGVkID09PSAK
--001a113a332af9ed1b04f180c397
Content-Type: application/octet-stream; name="xl4.4-dmesg"
Content-Disposition: attachment; filename="xl4.4-dmesg"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sjf4m1

IFhlbiA0LjQuMF8wMi0yOTcuMQooWEVOKSBYZW4gdmVyc2lvbiA0LjQuMF8wMi0yOTcuMSAoYWJ1
aWxkQCkgKGdjYyAoU1VTRSBMaW51eCkgNC44LjEgMjAxMzA5MDkgW2djYy00XzgtYnJhbmNoIHJl
dmlzaW9uIDIwMjM4OF0pIGRlYnVnPW4gVHVlIEphbiAyOCAxNjowODo0OCBVVEMgMjAxNAooWEVO
KSBMYXRlc3QgQ2hhbmdlU2V0OiAKKFhFTikgQm9vdGxvYWRlcjogR1JVQjIgMi4wMAooWEVOKSBD
b21tYW5kIGxpbmU6IGxvZ2x2bD1hbGwgaW9tbXU9ZGVidWcsdmVyYm9zZSBhcGljX3ZlcmJvc2l0
eT1kZWJ1ZyBjb25zb2xlPWNvbTEgY29tMT0xMTUyMDAsOG4xLHBjaQooWEVOKSBWaWRlbyBpbmZv
cm1hdGlvbjoKKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNgooWEVOKSAg
VkJFL0REQyBtZXRob2RzOiBWMjsgRURJRCB0cmFuc2ZlciB0aW1lOiAxIHNlY29uZHMKKFhFTikg
RGlzYyBpbmZvcm1hdGlvbjoKKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMKKFhFTikgIEZv
dW5kIDQgRUREIGluZm9ybWF0aW9uIHN0cnVjdHVyZXMKKFhFTikgWGVuLWU4MjAgUkFNIG1hcDoK
KFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1c2FibGUpCihYRU4p
ICAwMDAwMDAwMDAwMDllODAwIC0gMDAwMDAwMDAwMDBhMDAwMCAocmVzZXJ2ZWQpCihYRU4pICAw
MDAwMDAwMDAwMGUwMDAwIC0gMDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAw
MDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDA4
ZDY4YjAwMCAtIDAwMDAwMDAwOGRkMGEwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDA4ZGQw
YTAwMCAtIDAwMDAwMDAwOGUwNWEwMDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDA4ZTA1YTAw
MCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDA4ZWE0NTAwMCAt
IDAwMDAwMDAwOGVhNDYwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwOGVhNDYwMDAgLSAwMDAw
MDAwMDhlYzRjMDAwIChBQ1BJIE5WUykKKFhFTikgIDAwMDAwMDAwOGVjNGMwMDAgLSAwMDAwMDAw
MDhmMDY0MDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDhmMDY0MDAwIC0gMDAwMDAwMDA4Zjdm
MzAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDhmN2YzMDAwIC0gMDAwMDAwMDA4ZjgwMDAw
MCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDBmZWMwMDAwMCAtIDAwMDAwMDAwZmVjMDEwMDAgKHJl
c2VydmVkKQooWEVOKSAgMDAwMDAwMDBmZWMxMDAwMCAtIDAwMDAwMDAwZmVjMTEwMDAgKHJlc2Vy
dmVkKQooWEVOKSAgMDAwMDAwMDBmZWQwMDAwMCAtIDAwMDAwMDAwZmVkMDEwMDAgKHJlc2VydmVk
KQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAwMDAwMDAwZmVkOTAwMDAgKHJlc2VydmVkKQoo
WEVOKSAgMDAwMDAwMDBmZjgwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQooWEVO
KSAgMDAwMDAwMDEwMDAwMTAwMCAtIDAwMDAwMDAyNTAwMDAwMDAgKHVzYWJsZSkKKFhFTikgQUNQ
STogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIgQUxBU0tBKQooWEVOKSBBQ1BJOiBYU0RUIDhFMDRB
MDc4LCAwMDc0IChyMSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhF
TikgQUNQSTogRkFDUCA4RTA1MDEyOCwgMDBGNCAocjQgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5
IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEkgV2FybmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25h
bCBmaWVsZCAiUG0yQ29udHJvbEJsb2NrIiBoYXMgemVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAw
MDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEyNl0KKFhFTikgQUNQSTogRFNEVCA4RTA0QTE4OCwgNUY5
RSAocjIgQUxBU0tBICAgIEEgTSBJICAgICAgICAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6
IEZBQ1MgOEUwNTJFODAsIDAwNDAKKFhFTikgQUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMg
QUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IEZQRFQg
OEUwNTAyOTgsIDAwNDQgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEz
KQooWEVOKSBBQ1BJOiBNQ0ZHIDhFMDUwMkUwLCAwMDNDIChyMSBBTEFTS0EgICAgQSBNIEkgIDEw
NzIwMDkgTVNGVCAgICAxMDAxMykKKFhFTikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEg
QUxBU0tBIE9FTUFBRlQgICAxMDcyMDA5IE1TRlQgICAgICAgOTcpCihYRU4pIEFDUEk6IEhQRVQg
OEUwNTA0MDgsIDAwMzggKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgICAgICA1
KQooWEVOKSBBQ1BJOiBJVlJTIDhFMDUwNDQwLCAwMDcwIChyMiAgICBBTUQgQU5OQVBVUk4gICAg
ICAgIDEgQU1EICAgICAgICAgMCkKKFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEg
ICAgQU1EIEFOTkFQVVJOICAgICAgICAxIEFNRCAgICAgICAgIDEpCihYRU4pIEFDUEk6IFNTRFQg
OEUwNTBGMTAsIDA0QjcgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBNU0ZUICA0MDAwMDAw
KQooWEVOKSBBQ1BJOiBDUkFUIDhFMDUxM0M4LCAwMkY4IChyMSAgICBBTUQgQU5OQVBVUk4gICAg
ICAgIDEgQU1EICAgICAgICAgMSkKKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0Ip
CihYRU4pIE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZAooWEVOKSBGYWtpbmcgYSBub2RlIGF0
IDAwMDAwMDAwMDAwMDAwMDAtMDAwMDAwMDI1MDAwMDAwMAooWEVOKSBEb21haW4gaGVhcCBpbml0
aWFsaXNlZAooWEVOKSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgMDAwZmQ5MDAKKFhFTikgRE1JIDIu
NyBwcmVzZW50LgooWEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJwooWEVOKSBVc2luZyBB
UElDIGRyaXZlciBkZWZhdWx0CihYRU4pIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4ODA4CihY
RU4pIEFDUEk6IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdCihY
RU4pIEFDUEk6IDMyLzY0WCBGQUNTIGFkZHJlc3MgbWlzbWF0Y2ggaW4gRkFEVCAtIDhlMDUyZTgw
LzAwMDAwMDAwMDAwMDAwMDAsIHVzaW5nIDMyCihYRU4pIEFDUEk6ICAgICAgICAgICAgIHdha2V1
cF92ZWNbOGUwNTJlOGNdLCB2ZWNfc2l6ZVsyMF0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDAKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRb
MHgxMF0gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMxNiA1OjMgQVBJQyB2ZXJzaW9uIDE2CihY
RU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQpCihY
RU4pIFByb2Nlc3NvciAjMTcgNTozIEFQSUMgdmVyc2lvbiAxNgooWEVOKSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDAzXSBsYXBpY19pZFsweDEyXSBlbmFibGVkKQooWEVOKSBQcm9jZXNzb3IgIzE4
IDU6MyBBUElDIHZlcnNpb24gMTYKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFw
aWNfaWRbMHgxM10gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMxOSA1OjMgQVBJQyB2ZXJzaW9u
IDE2CihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVkZ2UgbGludFsw
eDFdKQooWEVOKSBBQ1BJOiBJT0FQSUMgKGlkWzB4MDVdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3Np
X2Jhc2VbMF0pCihYRU4pIElPQVBJQ1swXTogYXBpY19pZCA1LCB2ZXJzaW9uIDMzLCBhZGRyZXNz
IDB4ZmVjMDAwMDAsIEdTSSAwLTIzCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChidXMgMCBidXNf
aXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChidXMg
MCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZlbCkKKFhFTikgQUNQSTogSVJRMCB1c2Vk
IGJ5IG92ZXJyaWRlLgooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuCihYRU4pIEFD
UEk6IElSUTkgdXNlZCBieSBvdmVycmlkZS4KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgRmxh
dC4gIFVzaW5nIDEgSS9PIEFQSUNzCihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4MTAyMjgyMTAgYmFz
ZTogMHhmZWQwMDAwMAooWEVOKSBFUlNUIHRhYmxlIHdhcyBub3QgZm91bmQKKFhFTikgVXNpbmcg
QUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9ybWF0aW9uCihYRU4pIFNNUDog
QWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykKKFhFTikgTlJfQ1BVUzo1MTIgbnJfY3B1
bWFza19iaXRzOjY0CihYRU4pIG1hcHBlZCBBUElDIHRvIGZmZmY4MmNmZmZiZmIwMDAgKGZlZTAw
MDAwKQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4MmNmZmZiZmEwMDAgKGZlYzAwMDAwKQoo
WEVOKSBJUlEgbGltaXRzOiAyNCBHU0ksIDc2MCBNU0kvTVNJLVgKKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBEZXRlY3RlZCAzODkzLjAx
MyBNSHogcHJvY2Vzc29yLgooWEVOKSBJbml0aW5nIG1lbW9yeSBzaGFyaW5nLgooWEVOKSB4c3Rh
dGVfaW5pdDogdXNpbmcgY250eHRfc2l6ZTogMHgzYzAgYW5kIHN0YXRlczogMHg0MDAwMDAwMDAw
MDAwMDA3CihYRU4pIEFNRCBGYW0xNWggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZAoo
WEVOKSBQQ0k6IE1DRkcgY29uZmlndXJhdGlvbiAwOiBiYXNlIGUwMDAwMDAwIHNlZ21lbnQgMDAw
MCBidXNlcyAwMCAtIGZmCihYRU4pIFBDSTogTm90IHVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAw
MCBidXMgMDAtZmYKKFhFTikgQU1ELVZpOiBGb3VuZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAw
eDU0CihYRU4pIEFNRC1WaTogQUNQSSBUYWJsZToKKFhFTikgQU1ELVZpOiAgU2lnbmF0dXJlIElW
UlMKKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4NzAKKFhFTikgQU1ELVZpOiAgUmV2aXNpb24gMHgy
CihYRU4pIEFNRC1WaTogIENoZWNrU3VtIDB4ZTgKKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRAoo
WEVOKSBBTUQtVmk6ICBPRU1fVGFibGVfSWQgQU5OQVBVUk4KKFhFTikgQU1ELVZpOiAgT0VNX1Jl
dmlzaW9uIDB4MQooWEVOKSBBTUQtVmk6ICBDcmVhdG9yX0lkIEFNRCAKKFhFTikgQU1ELVZpOiAg
Q3JlYXRvcl9SZXZpc2lvbiAwCihYRU4pIEFNRC1WaTogSVZSUyBCbG9jazogdHlwZSAweDEwIGZs
YWdzIDB4ZmUgbGVuIDB4NDAgaWQgMHgyCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6
IHR5cGUgMHgzIGlkIDB4OCBmbGFncyAwCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHg4
IC0+IDB4ZmZmZQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDMgaWQg
MHgyMDAgZmxhZ3MgMAooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZm
IGFsaWFzIDB4YTQKKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAwIGlkIDAg
ZmxhZ3MgMAooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDggaWQgMCBm
bGFncyAwCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAw
eDIgaGFuZGxlIDAKKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDQ4IGlk
IDAgZmxhZ3MgMHhkNwooWEVOKSBBTUQtVmk6IElWSEQgU3BlY2lhbDogMDAwMDowMDoxNC4wIHZh
cmlldHkgMHgxIGhhbmRsZSAweDUKKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJl
czoKKFhFTikgIC0gUHJlZmV0Y2ggUGFnZXMgQ29tbWFuZAooWEVOKSAgLSBQZXJpcGhlcmFsIFBh
Z2UgU2VydmljZSBSZXF1ZXN0CihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uCihYRU4pICAtIElu
dmFsaWRhdGUgQWxsIENvbW1hbmQKKFhFTikgQU1ELVZpOiBQUFIgTG9nIEVuYWJsZWQuCihYRU4p
IEFNRC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4KKFhFTikgQU1ELVZpOiBJT01NVSAw
IEVuYWJsZWQuCihYRU4pIEkvTyB2aXJ0dWFsaXNhdGlvbiBlbmFibGVkCihYRU4pICAtIERvbTAg
bW9kZTogUmVsYXhlZAooWEVOKSBJbnRlcnJ1cHQgcmVtYXBwaW5nIGVuYWJsZWQKKFhFTikgR2V0
dGluZyBWRVJTSU9OOiA4MDA1MDAxMAooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwCihY
RU4pIEdldHRpbmcgSUQ6IDEwMDAwMDAwCihYRU4pIEdldHRpbmcgTFZUMDogNzAwCihYRU4pIEdl
dHRpbmcgTFZUMTogNDAwCihYRU4pIGVuYWJsZWQgRXh0SU5UIG9uIENQVSMwCihYRU4pIEVOQUJM
SU5HIElPLUFQSUMgSVJRcwooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBtZXRob2QKKFhFTikgaW5p
dCBJT19BUElDIElSUXMKKFhFTikgIElPLUFQSUMgKGFwaWNpZC1waW4pIDUtMCwgNS0xNiwgNS0x
NywgNS0xOCwgNS0xOSwgNS0yMCwgNS0yMSwgNS0yMiwgNS0yMyBub3QgY29ubmVjdGVkLgooWEVO
KSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4yPS0xCihY
RU4pIG51bWJlciBvZiBNUCBJUlEgc291cmNlczogMTUuCihYRU4pIG51bWJlciBvZiBJTy1BUElD
ICM1IHJlZ2lzdGVyczogMjQuCihYRU4pIHRlc3RpbmcgdGhlIElPIEFQSUMuLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLgooWEVOKSBJTyBBUElDICM1Li4uLi4uCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAw
OiAwNTAwMDAwMAooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwgQVBJQyBpZDogMDUKKFhFTikg
Li4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAKKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAg
ICAgICA6IDAKKFhFTikgLi4uLiByZWdpc3RlciAjMDE6IDAwMTc4MDIxCihYRU4pIC4uLi4uLi4g
ICAgIDogbWF4IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAwMTcKKFhFTikgLi4uLi4uLiAgICAgOiBQ
UlEgaW1wbGVtZW50ZWQ6IDEKKFhFTikgLi4uLi4uLiAgICAgOiBJTyBBUElDIHZlcnNpb246IDAw
MjEKKFhFTikgLi4uLiByZWdpc3RlciAjMDI6IDA1MDAwMDAwCihYRU4pIC4uLi4uLi4gICAgIDog
YXJiaXRyYXRpb246IDA1CihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAzOiAwNTAxODAyMQooWEVOKSAu
Li4uLi4uICAgICA6IEJvb3QgRFQgICAgOiAxCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRh
YmxlOgooWEVOKSAgTlIgTG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBW
ZWN0OiAgIAooWEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAg
ICAzMAooWEVOKSAgMDEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAz
MAooWEVOKSAgMDIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMAoo
WEVOKSAgMDMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOAooWEVO
KSAgMDQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0MAooWEVOKSAg
MDUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OAooWEVOKSAgMDYg
MDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1MAooWEVOKSAgMDcgMDAx
IDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1OAooWEVOKSAgMDggMDAxIDAx
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA2MAooWEVOKSAgMDkgMDAxIDAxICAx
ICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMCAgICAwMAooWEVOKSAgMGEgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3MAooWEVOKSAgMGMgMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA3OAooWEVOKSAgMGQgMDAxIDAxICAwICAgIDAgICAgMCAg
IDAgICAwICAgIDEgICAgMSAgICA4OAooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA5MAooWEVOKSAgMGYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA5OAooWEVOKSAgMTAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMSAgICAzMAooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMSAgICAzMAooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MSAgICAzMAooWEVOKSAgMTMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAg
ICAzMAooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MAooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMAoo
WEVOKSAgMTYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMAooWEVO
KSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMAooWEVOKSBV
c2luZyB2ZWN0b3ItYmFzZWQgaW5kZXhpbmcKKFhFTikgSVJRIHRvIHBpbiBtYXBwaW5nczoKKFhF
TikgSVJRMjQwIC0+IDA6MgooWEVOKSBJUlE0OCAtPiAwOjEKKFhFTikgSVJRNTYgLT4gMDozCihY
RU4pIElSUTY0IC0+IDA6NAooWEVOKSBJUlE3MiAtPiAwOjUKKFhFTikgSVJRODAgLT4gMDo2CihY
RU4pIElSUTg4IC0+IDA6NwooWEVOKSBJUlE5NiAtPiAwOjgKKFhFTikgSVJRMTA0IC0+IDA6OQoo
WEVOKSBJUlEyNDEgLT4gMDoxMAooWEVOKSBJUlExMTIgLT4gMDoxMQooWEVOKSBJUlExMjAgLT4g
MDoxMgooWEVOKSBJUlExMzYgLT4gMDoxMwooWEVOKSBJUlExNDQgLT4gMDoxNAooWEVOKSBJUlEx
NTIgLT4gMDoxNQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4gZG9u
ZS4KKFhFTikgVXNpbmcgbG9jYWwgQVBJQyB0aW1lciBpbnRlcnJ1cHRzLgooWEVOKSBjYWxpYnJh
dGluZyBBUElDIHRpbWVyIC4uLgooWEVOKSAuLi4uLiBDUFUgY2xvY2sgc3BlZWQgaXMgMzg5Mi45
OTgxIE1Iei4KKFhFTikgLi4uLi4gaG9zdCBidXMgY2xvY2sgc3BlZWQgaXMgOTkuODIwNCBNSHou
CihYRU4pIC4uLi4uIGJ1c19zY2FsZSA9IDB4NjYzOQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAx
NC4zMThNSHogSFBFVAooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDMyIEtpQi4KKFhF
TikgSFZNOiBBU0lEcyBlbmFibGVkLgooWEVOKSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0
dXJlczoKKFhFTikgIC0gTmVzdGVkIFBhZ2UgVGFibGVzIChOUFQpCihYRU4pICAtIExhc3QgQnJh
bmNoIFJlY29yZCAoTEJSKSBWaXJ0dWFsaXNhdGlvbgooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBv
biAjVk1FWElUCihYRU4pICAtIFZNQ0IgQ2xlYW4gQml0cwooWEVOKSAgLSBEZWNvZGVBc3Npc3Rz
CihYRU4pICAtIFBhdXNlLUludGVyY2VwdCBGaWx0ZXIKKFhFTikgIC0gVFNDIFJhdGUgTVNSCihY
RU4pIEhWTTogU1ZNIGVuYWJsZWQKKFhFTikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcg
KEhBUCkgZGV0ZWN0ZWQKKFhFTikgSFZNOiBIQVAgcGFnZSBzaXplczogNGtCLCAyTUIsIDFHQgoo
WEVOKSBIVk06IFBWSCBtb2RlIG5vdCBzdXBwb3J0ZWQgb24gdGhpcyBwbGF0Zm9ybQooWEVOKSBt
YXNrZWQgRXh0SU5UIG9uIENQVSMxCihYRU4pIG1pY3JvY29kZTogQ1BVMSBjb2xsZWN0X2NwdV9p
bmZvOiBwYXRjaF9pZD0weDYwMDExMTkKKFhFTikgbWFza2VkIEV4dElOVCBvbiBDUFUjMgooWEVO
KSBtaWNyb2NvZGU6IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MAooWEVOKSBtYXNr
ZWQgRXh0SU5UIG9uIENQVSMzCihYRU4pIG1pY3JvY29kZTogQ1BVMyBjb2xsZWN0X2NwdV9pbmZv
OiBwYXRjaF9pZD0wCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzCihYRU4pIEFDUEkgc2xlZXAgbW9k
ZXM6IFMzCihYRU4pIE1DQTogVXNlIGh3IHRocmVzaG9sZGluZyB0byBhZGp1c3QgcG9sbGluZyBm
cmVxdWVuY3kKKFhFTikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBz
dGFydGVkLgooWEVOKSBtdHJyOiB5b3VyIENQVXMgaGFkIGluY29uc2lzdGVudCB2YXJpYWJsZSBN
VFJSIHNldHRpbmdzCihYRU4pIG10cnI6IHByb2JhYmx5IHlvdXIgQklPUyBkb2VzIG5vdCBzZXR1
cCBhbGwgQ1BVcy4KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uCihYRU4pICoq
KiBMT0FESU5HIERPTUFJTiAwICoqKgooWEVOKSAgWGVuICBrZXJuZWw6IDY0LWJpdCwgbHNiLCBj
b21wYXQzMgooWEVOKSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgbHNiLCBwYWRkciAweDIwMDAgLT4g
MHhjMDUwMDAKKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOgooWEVOKSAgRG9tMCBh
bGxvYy46ICAgMDAwMDAwMDIyMzAwMDAwMC0+MDAwMDAwMDIyNDAwMDAwMCAoMTc0MjA4MyBwYWdl
cyB0byBiZSBhbGxvY2F0ZWQpCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRjZjc1MDAw
LT4wMDAwMDAwMjRmZmZmZTAwCihYRU4pIFZJUlRVQUwgTUVNT1JZIEFSUkFOR0VNRU5UOgooWEVO
KSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4MDAwMjAwMC0+ZmZmZmZmZmY4MGMwNTAwMAooWEVO
KSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDAwMDAwMDAwMC0+MDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgUGh5cy1NYWNoIG1hcDogZmZmZmVhMDAwMDAwMDAwMC0+ZmZmZmVhMDAwMGQ2YWM3MAooWEVO
KSAgU3RhcnQgaW5mbzogICAgZmZmZmZmZmY4MGMwNTAwMC0+ZmZmZmZmZmY4MGMwNTRiNAooWEVO
KSAgUGFnZSB0YWJsZXM6ICAgZmZmZmZmZmY4MGMwNjAwMC0+ZmZmZmZmZmY4MGMxMTAwMAooWEVO
KSAgQm9vdCBzdGFjazogICAgZmZmZmZmZmY4MGMxMTAwMC0+ZmZmZmZmZmY4MGMxMjAwMAooWEVO
KSAgVE9UQUw6ICAgICAgICAgZmZmZmZmZmY4MDAwMDAwMC0+ZmZmZmZmZmY4MTAwMDAwMAooWEVO
KSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MDAwMjAwMAooWEVOKSBEb20wIGhhcyBtYXhpbXVt
IDQgVkNQVXMKKFhFTikgQU1ELVZpOiBTa2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjAwLjAK
KFhFTikgQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2aWNlIDAwMDA6MDA6MDAuMgooWEVOKSBzZXR1
cCAwMDAwOjAwOjAwLjIgZm9yIGQwIGZhaWxlZCAoLTE5KQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgxMCwgdHlwZSA9IDB4Miwgcm9v
dCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikg
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MCwgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMK
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MSwgdHlw
ZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4
OCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHg5MCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2
aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJs
ZTogZGV2aWNlIGlkID0gMHg5OCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAw
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5YSwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0
ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9
IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMSwgdHlwZSA9IDB4Nywgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMywgdHlwZSA9IDB4Nywg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhF
TikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9
IDB4NSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9
IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNSwg
dHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHhhOCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHhhYSwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHhhYiwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhjMCwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjMSwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjMiwgdHlwZSA9IDB4Niwgcm9vdCB0YWJs
ZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjMywgdHlwZSA9IDB4Niwgcm9v
dCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikg
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNCwgdHlwZSA9IDB4
Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMK
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNSwgdHlw
ZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgx
MDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4MTAxLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDIzMCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHgyMzgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZj
MzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwgdHlwZSA9IDB4MSwgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MDAsIHR5cGUgPSAweDEs
IHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihY
RU4pIFNjcnViYmluZyBGcmVlIFJBTTogLmRvbmUuCihYRU4pIEluaXRpYWwgbG93IG1lbW9yeSB2
aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLgooWEVOKSBTdGQuIExvZ2xldmVsOiBB
bGwKKFhFTikgR3Vlc3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFu
ZCB3YXJuaW5ncykKKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBlICdDVFJMLWEn
IHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pCihYRU4pIEZyZWVkIDI5MmtCIGlu
aXQgbWVtb3J5LgooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAw
MDAwMC4KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAu
CihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwMDAw
MDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAwMDAwLgooWEVO
KSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEz
IGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAwMC4KKFhFTikgUENJ
OiBVc2luZyBNQ0ZHIGZvciBzZWdtZW50IDAwMDAgYnVzIDAwLWZmCihYRU4pIG1tLmM6ODA5OiBk
MDogRm9yY2luZyByZWFkLW9ubHkgYWNjZXNzIHRvIE1GTiBlMDAwMgooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4yCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDEuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjAxLjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMi4wCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEwLjEKKFhFTikg
U1ItSU9WIGRldmljZSAwMDAwOjAwOjExLjAgaGFzIGl0cyB2aXJ0dWFsIGZ1bmN0aW9ucyBhbHJl
YWR5IGVuYWJsZWQgKDAxYWIpCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTEuMAooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxMi4yCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTMuMAooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjEzLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4wCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTQuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjE0LjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC40CihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTQuNQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE1LjAKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4yCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MTUuMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjAKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxOC4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMgooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
OC40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNQooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAxOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTowMC4xCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDI6MDYuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAyOjA3
LjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMzowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDQ6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjA1OjAwLjAKKFhFTikgSU9B
UElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDUtMTYgLT4gMHhhMCAtPiBJUlEgMTYgTW9k
ZToxIEFjdGl2ZToxKQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0x
NyAtPiAweGE4IC0+IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpCihYRU4pIElPQVBJQ1swXTogU2V0
IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE4IC0+IDB4YjAgLT4gSVJRIDE4IE1vZGU6MSBBY3RpdmU6
MSkKKFhFTikgSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDUtMTkgLT4gMHhiOCAt
PiBJUlEgMTkgTW9kZToxIEFjdGl2ZToxKQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGlu
ZyBlbnRyeSAoNS0yMSAtPiAweGMwIC0+IElSUSAyMSBNb2RlOjEgQWN0aXZlOjEpCihYRU4pIElP
QVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTIyIC0+IDB4YzggLT4gSVJRIDIyIE1v
ZGU6MSBBY3RpdmU6MSkKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUzOiAwMCg0MCkKKFhFTikgQVBJ
QyBlcnJvciBvbiBDUFUwOiAwMCg0MCkKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUyOiAwMCg0MCkK
KFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkKKFhFTikgQU1ELVZpOiBTaGFyZSBwMm0g
dGFibGUgd2l0aCBpb21tdTogcDJtIHRhYmxlID0gMHgxY2FhYzAK
--001a113a332af9ed1b04f180c397
Content-Type: application/octet-stream; name=dom0-dmesg
Content-Disposition: attachment; filename=dom0-dmesg
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sjlye2

WyAgICAwLjAwMDAwMF0gQlJLIFsweDAwYWRkMDAwLCAweDAwYWRkZmZmXSBQVUQKWyAgICAwLjAw
MDAwMF0gQlJLIFsweDAwYWRlMDAwLCAweDAwYWRlZmZmXSBQTUQKWyAgICAwLjAwMDAwMF0gQlJL
IFsweDAwYWRmMDAwLCAweDAwYWU1ZmZmXSBQVEUKWyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5n
IGNncm91cCBzdWJzeXMgY3B1c2V0ClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAg
c3Vic3lzIGNwdQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVh
Y2N0ClsgICAgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gMy4xMS42LTQteGVuIChnZWVrb0BidWls
ZGhvc3QpIChnY2MgdmVyc2lvbiA0LjguMSAyMDEzMDkwOSBbZ2NjLTRfOC1icmFuY2ggcmV2aXNp
b24gMjAyMzg4XSAoU1VTRSBMaW51eCkgKSAjMSBTTVAgV2VkIE9jdCAzMCAxODowNDo1NiBVVEMg
MjAxMyAoZTZkNGEyNykKWyAgICAwLjAwMDAwMF0gQ29tbWFuZCBsaW5lOiByb290PVVVSUQ9N2Nh
Yzg2ZDItNzk2ZC00ZmZmLTkyNmUtYzZkNjUyMTliNTRjIHJvIHF1aWV0IHF1aWV0IHJlc3VtZT0v
ZGV2L2Rpc2svYnktaWQvYXRhLVNUMzMyMDYyMEFTXzVRRjVEUk1QLXBhcnQ2IHNwbGFzaD1zaWxl
bnQgcXVpZXQgc2hvd29wdHMKWyAgICAwLjAwMDAwMF0gWGVuLXByb3ZpZGVkIG1hY2hpbmUgbWVt
b3J5IG1hcDoKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgw
MDAwMDAwMDAwMDllN2ZmXSB1c2FibGUKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAw
MDAwMDAwOWU4MDAtMHgwMDAwMDAwMDAwMDlmZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBC
SU9TOiBbbWVtIDB4MDAwMDAwMDAwMDBlMDAwMC0weDAwMDAwMDAwMDAwZmZmZmZdIHJlc2VydmVk
ClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMDAwMTAwMDAwLTB4MDAwMDAwMDA4
ZDY4YWZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMDhkNjhi
MDAwLTB4MDAwMDAwMDA4ZGQwOWZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUzogW21l
bSAweDAwMDAwMDAwOGRkMGEwMDAtMHgwMDAwMDAwMDhlMDU5ZmZmXSBBQ1BJIE5WUwpbICAgIDAu
MDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAwMDAwMDA4ZTA1YTAwMC0weDAwMDAwMDAwOGVhNDRmZmZd
IHJlc2VydmVkClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMDhlYTQ1MDAwLTB4
MDAwMDAwMDA4ZWE0NWZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAw
MDAwMDhlYTQ2MDAwLTB4MDAwMDAwMDA4ZWM0YmZmZl0gQUNQSSBOVlMKWyAgICAwLjAwMDAwMF0g
QklPUzogW21lbSAweDAwMDAwMDAwOGVjNGMwMDAtMHgwMDAwMDAwMDhmMDYzZmZmXSB1c2FibGUK
WyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwOGYwNjQwMDAtMHgwMDAwMDAwMDhm
N2YyZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAwMDAwMDA4Zjdm
MzAwMC0weDAwMDAwMDAwOGY3ZmZmZmZdIHVzYWJsZQpbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVt
IDB4MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAwZmVjMDBmZmZdIHJlc2VydmVkClsgICAgMC4w
MDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMGZlYzEwMDAwLTB4MDAwMDAwMDBmZWMxMGZmZl0g
cmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwZmVkMDAwMDAtMHgw
MDAwMDAwMGZlZDAwZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAw
MDAwMDBmZWQ4MDAwMC0weDAwMDAwMDAwZmVkOGZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAwMDBd
IEJJT1M6IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAwMDBmZWVmZmZmZl0gcmVzZXJ2
ZWQKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwZmY4MDAwMDAtMHgwMDAwMDAw
MGZmZmZmZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAwMDAwMDEw
MDAwMTAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVzYWJsZQpbICAgIDAuMDAwMDAwXSBCSU9TOiBb
bWVtIDB4MDAwMDAwZmQwMDAwMDAwMC0weDAwMDAwMGZmZmZmZmZmZmZdIHJlc2VydmVkClsgICAg
MC4wMDAwMDBdIGU4MjA6IFhlbi1wcm92aWRlZCBwaHlzaWNhbCBSQU0gbWFwOgpbICAgIDAuMDAw
MDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDFhZGQ4ZGZmZl0gdXNh
YmxlClsgICAgMC4wMDAwMDBdIE5YIChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2
ZQpbICAgIDAuMDAwMDAwXSBTTUJJT1MgMi43IHByZXNlbnQuClsgICAgMC4wMDAwMDBdIERNSTog
VG8gQmUgRmlsbGVkIEJ5IE8uRS5NLiBUbyBCZSBGaWxsZWQgQnkgTy5FLk0uL0ZNMkE3NSBQcm80
LCBCSU9TIFAyLjQwIDA3LzExLzIwMTMKWyAgICAwLjAwMDAwMF0gZTgyMDogbGFzdF9wZm4gPSAw
eDFhZGQ4ZSBtYXhfYXJjaF9wZm4gPSAweDgwMDAwMDAwClsgICAgMC4wMDAwMDBdIGU4MjA6IGxh
c3RfcGZuID0gMHgxMDAwMDAgbWF4X2FyY2hfcGZuID0gMHg4MDAwMDAwMApbICAgIDAuMDAwMDAw
XSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgW21lbSAweDAwMGZkOTAwLTB4MDAwZmQ5MGZdIG1hcHBl
ZCBhdCBbZmZmZmZmZmZmZjVlZjkwMF0KWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGlu
ZzogW21lbSAweDFhZDIwMDAwMC0weDFhZDNmZmZmZl0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgx
YWQyMDAwMDAtMHgxYWQzZmZmZmZdIHBhZ2UgNGsKWyAgICAwLjAwMDAwMF0gQlJLIFsweDAwYWU3
MDAwLCAweDAwYWU3ZmZmXSBQR1RBQkxFClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMGFlODAwMCwg
MHgwMGFlOGZmZl0gUEdUQUJMRQpbICAgIDAuMDAwMDAwXSBpbml0X21lbW9yeV9tYXBwaW5nOiBb
bWVtIDB4MWFjMDAwMDAwLTB4MWFkMWZmZmZmXQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDFhYzAw
MDAwMC0weDFhZDFmZmZmZl0gcGFnZSA0awpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDBhZTkwMDAs
IDB4MDBhZTlmZmZdIFBHVEFCTEUKWyAgICAwLjAwMDAwMF0gQlJLIFsweDAwYWVhMDAwLCAweDAw
YWVhZmZmXSBQR1RBQkxFClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMGFlYjAwMCwgMHgwMGFlYmZm
Zl0gUEdUQUJMRQpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDBhZWMwMDAsIDB4MDBhZWNmZmZdIFBH
VEFCTEUKWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAweDE4MDAwMDAw
MC0weDFhYmZmZmZmZl0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgxODAwMDAwMDAtMHgxYWJmZmZm
ZmZdIHBhZ2UgNGsKWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAweDAw
MDAwMDAwLTB4MTdmZmZmZmZmXQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDAwMDAwMDAwLTB4MTdm
ZmZmZmZmXSBwYWdlIDRrClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0g
MHgxYWQ0MDAwMDAtMHgxYWRkOGRmZmZdClsgICAgMC4wMDAwMDBdICBbbWVtIDB4MWFkNDAwMDAw
LTB4MWFkZDhkZmZmXSBwYWdlIDRrClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IFttZW0gMHgwMTAw
MDAwMC0weDA0MDhhZmZmXQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBSU0RQIDAwMDAwMDAwMDAwZjA0
OTAgMDAwMjQgKHYwMiBBTEFTS0EpClsgICAgMC4wMDAwMDBdIEFDUEk6IFhTRFQgMDAwMDAwMDA4
ZTA0YTA3OCAwMDA3NCAodjAxIEFMQVNLQSAgICBBIE0gSSAwMTA3MjAwOSBBTUkgIDAwMDEwMDEz
KQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNQIDAwMDAwMDAwOGUwNTAxMjggMDAwRjQgKHYwNCBB
TEFTS0EgICAgQSBNIEkgMDEwNzIwMDkgQU1JICAwMDAxMDAxMykKWyAgICAwLjAwMDAwMF0gQUNQ
SSBCSU9TIFdhcm5pbmcgKGJ1Zyk6IE9wdGlvbmFsIEZBRFQgZmllbGQgUG0yQ29udHJvbEJsb2Nr
IGhhcyB6ZXJvIGFkZHJlc3Mgb3IgbGVuZ3RoOiAweDAwMDAwMDAwMDAwMDAwMDAvMHgxICgyMDEz
MDUxNy90YmZhZHQtNjAzKQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBEU0RUIDAwMDAwMDAwOGUwNGEx
ODggMDVGOUUgKHYwMiBBTEFTS0EgICAgQSBNIEkgMDAwMDAwMDAgSU5UTCAyMDA1MTExNykKWyAg
ICAwLjAwMDAwMF0gQUNQSTogRkFDUyAwMDAwMDAwMDhlMDUyZTgwIDAwMDQwClsgICAgMC4wMDAw
MDBdIEFDUEk6IEFQSUMgMDAwMDAwMDA4ZTA1MDIyMCAwMDA3MiAodjAzIEFMQVNLQSAgICBBIE0g
SSAwMTA3MjAwOSBBTUkgIDAwMDEwMDEzKQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGUERUIDAwMDAw
MDAwOGUwNTAyOTggMDAwNDQgKHYwMSBBTEFTS0EgICAgQSBNIEkgMDEwNzIwMDkgQU1JICAwMDAx
MDAxMykKWyAgICAwLjAwMDAwMF0gQUNQSTogTUNGRyAwMDAwMDAwMDhlMDUwMmUwIDAwMDNDICh2
MDEgQUxBU0tBICAgIEEgTSBJIDAxMDcyMDA5IE1TRlQgMDAwMTAwMTMpClsgICAgMC4wMDAwMDBd
IEFDUEk6IEFBRlQgMDAwMDAwMDA4ZTA1MDMyMCAwMDBFNyAodjAxIEFMQVNLQSBPRU1BQUZUICAw
MTA3MjAwOSBNU0ZUIDAwMDAwMDk3KQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBIUEVUIDAwMDAwMDAw
OGUwNTA0MDggMDAwMzggKHYwMSBBTEFTS0EgICAgQSBNIEkgMDEwNzIwMDkgQU1JICAwMDAwMDAw
NSkKWyAgICAwLjAwMDAwMF0gQUNQSTogSVZSUyAwMDAwMDAwMDhlMDUwNDQwIDAwMDcwICh2MDIg
ICAgQU1EIEFOTkFQVVJOIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDApClsgICAgMC4wMDAwMDBdIEFD
UEk6IFNTRFQgMDAwMDAwMDA4ZTA1MDRiMCAwMEE2MCAodjAxICAgIEFNRCBBTk5BUFVSTiAwMDAw
MDAwMSBBTUQgIDAwMDAwMDAxKQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTU0RUIDAwMDAwMDAwOGUw
NTBmMTAgMDA0QjcgKHYwMiAgICBBTUQgQU5OQVBVUk4gMDAwMDAwMDEgTVNGVCAwNDAwMDAwMCkK
WyAgICAwLjAwMDAwMF0gQUNQSTogQ1JBVCAwMDAwMDAwMDhlMDUxM2M4IDAwMkY4ICh2MDEgICAg
QU1EIEFOTkFQVVJOIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDEpClsgICAgMC4wMDAwMDBdIFpvbmUg
cmFuZ2VzOgpbICAgIDAuMDAwMDAwXSAgIERNQSAgICAgIFttZW0gMHgwMDAwMDAwMC0weDAwZmZm
ZmZmXQpbICAgIDAuMDAwMDAwXSAgIERNQTMyICAgIFttZW0gMHgwMTAwMDAwMC0weGZmZmZmZmZm
XQpbICAgIDAuMDAwMDAwXSAgIE5vcm1hbCAgIFttZW0gMHgxMDAwMDAwMDAtMHgxYWRkOGRmZmZd
ClsgICAgMC4wMDAwMDBdIE1vdmFibGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlClsgICAgMC4w
MDAwMDBdIEVhcmx5IG1lbW9yeSBub2RlIHJhbmdlcwpbICAgIDAuMDAwMDAwXSAgIG5vZGUgICAw
OiBbbWVtIDB4MDAwMDAwMDAtMHgxYWRkOGRmZmZdClsgICAgMC4wMDAwMDBdIE9uIG5vZGUgMCB0
b3RhbHBhZ2VzOiAxNzYwNjU0ClsgICAgMC4wMDAwMDBdIGZyZWVfYXJlYV9pbml0X25vZGU6IG5v
ZGUgMCwgcGdkYXQgZmZmZmZmZmY4MDk2MjQ0MCwgbm9kZV9tZW1fbWFwIGZmZmY4ODAxYTY4ODgw
MDAKWyAgICAwLjAwMDAwMF0gICBETUEgem9uZTogNTYgcGFnZXMgdXNlZCBmb3IgbWVtbWFwClsg
ICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDAgcGFnZXMgcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0g
ICBETUEgem9uZTogNDA5NiBwYWdlcywgTElGTyBiYXRjaDowClsgICAgMC4wMDAwMDBdICAgRE1B
MzIgem9uZTogMTQyODAgcGFnZXMgdXNlZCBmb3IgbWVtbWFwClsgICAgMC4wMDAwMDBdICAgRE1B
MzIgem9uZTogMTA0NDQ4MCBwYWdlcywgTElGTyBiYXRjaDozMQpbICAgIDAuMDAwMDAwXSAgIE5v
cm1hbCB6b25lOiA5NzM2IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcApbICAgIDAuMDAwMDAwXSAgIE5v
cm1hbCB6b25lOiA3MTIwNzggcGFnZXMsIExJRk8gYmF0Y2g6MzEKWyAgICAwLjAwMDAwMF0gQUNQ
STogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgxMF0gZW5hYmxlZCkKWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgxMV0gZW5hYmxlZCkK
WyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRbMHgxMl0g
ZW5hYmxlZCkKWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNf
aWRbMHgxM10gZW5hYmxlZCkKWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lk
WzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMC4wMDAwMDBdIEFDUEk6IElPQVBJQyAo
aWRbMHgwNV0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lfYmFzZVswXSkKWyAgICAwLjAwMDAwMF0g
SU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJ
IDAtMjMKWyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBn
bG9iYWxfaXJxIDIgZGZsIGRmbCkKWyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1
cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJxIDkgbG93IGxldmVsKQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuClsgICAgMC4wMDAwMDBdIEFDUEk6IElSUTIgdXNlZCBi
eSBvdmVycmlkZS4KWyAgICAwLjAwMDAwMF0gQUNQSTogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLgpb
ICAgIDAuMDAwMDAwXSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5m
b3JtYXRpb24KWyAgICAwLjAwMDAwMF0gZTgyMDogW21lbSAweDhmODAwMDAwLTB4ZmViZmZmZmZd
IGF2YWlsYWJsZSBmb3IgUENJIGRldmljZXMKWyAgICAwLjAwMDAwMF0gc2V0dXBfcGVyY3B1OiBO
Ul9DUFVTOjUxMiBucl9jcHVtYXNrX2JpdHM6NTEyIG5yX2NwdV9pZHM6NCBucl9ub2RlX2lkczox
ClsgICAgMC4wMDAwMDBdIFBFUkNQVTogRW1iZWRkZWQgMTkgcGFnZXMvY3B1IEBmZmZmODgwMWE1
ODAwMDAwIHM0ODA2NCByODE5MiBkMjE1NjggdTUyNDI4OApbICAgIDAuMDAwMDAwXSBwY3B1LWFs
bG9jOiBzNDgwNjQgcjgxOTIgZDIxNTY4IHU1MjQyODggYWxsb2M9MSoyMDk3MTUyClsgICAgMC4w
MDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwIDEgMiAzIApbICAgIDAuMDAwMDAwXSBTd2FwcGluZyBN
Rk5zIGZvciBQRk4gOTg4IGFuZCAxYTU4MDcgKE1GTiAyMjM5ODggYW5kIDExMGY3KQpbICAgIDAu
MDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0eSBncm91cGlu
ZyBvbi4gIFRvdGFsIHBhZ2VzOiAxNzM2NTgyClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5k
IGxpbmU6IHJvb3Q9VVVJRD03Y2FjODZkMi03OTZkLTRmZmYtOTI2ZS1jNmQ2NTIxOWI1NGMgcm8g
cXVpZXQgcXVpZXQgcmVzdW1lPS9kZXYvZGlzay9ieS1pZC9hdGEtU1QzMzIwNjIwQVNfNVFGNURS
TVAtcGFydDYgc3BsYXNoPXNpbGVudCBxdWlldCBzaG93b3B0cwpbICAgIDAuMDAwMDAwXSBQSUQg
aGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjogMywgMzI3NjggYnl0ZXMpClsgICAgMC4w
MDAwMDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEwNDg1NzYgKG9yZGVyOiAx
MSwgODM4ODYwOCBieXRlcykKWyAgICAwLjAwMDAwMF0gSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBl
bnRyaWVzOiA1MjQyODggKG9yZGVyOiAxMCwgNDE5NDMwNCBieXRlcykKWyAgICAwLjAwMDAwMF0g
eHNhdmU6IGVuYWJsZWQgeHN0YXRlX2J2IDB4NywgY250eHQgc2l6ZSAweDM0MApbICAgIDAuMDAw
MDAwXSBhbGxvY2F0ZWQgMjgxNzA0NjQgYnl0ZXMgb2YgcGFnZV9jZ3JvdXAKWyAgICAwLjAwMDAw
MF0gcGxlYXNlIHRyeSAnY2dyb3VwX2Rpc2FibGU9bWVtb3J5JyBvcHRpb24gaWYgeW91IGRvbid0
IHdhbnQgbWVtb3J5IGNncm91cHMKWyAgICAwLjAwMDAwMF0gU29mdHdhcmUgSU8gVExCIGVuYWJs
ZWQ6IAogQXBlcnR1cmU6ICAgICA2NCBtZWdhYnl0ZXMKIEFkZHJlc3Mgc2l6ZTogMjcgYml0cwog
S2VybmVsIHJhbmdlOiBmZmZmODgwMTlmMTIyMDAwIC0gZmZmZjg4MDFhMzEyMjAwMApbICAgIDAu
MDAwMDAwXSBQQ0ktRE1BOiBVc2luZyBzb2Z0d2FyZSBib3VuY2UgYnVmZmVyaW5nIGZvciBJTyAo
U1dJT1RMQikKWyAgICAwLjAwMDAwMF0gTWVtb3J5OiA2NzMxNjc2Sy83MDQyNjE2SyBhdmFpbGFi
bGUgKDUyNzRLIGtlcm5lbCBjb2RlLCA1NTNLIHJ3ZGF0YSwgMzg3Nksgcm9kYXRhLCA0OTJLIGlu
aXQsIDg2OEsgYnNzLCAzMTA5NDBLIHJlc2VydmVkKQpbICAgIDAuMDAwMDAwXSBIaWVyYXJjaGlj
YWwgUkNVIGltcGxlbWVudGF0aW9uLgpbICAgIDAuMDAwMDAwXSAJUkNVIGR5bnRpY2staWRsZSBn
cmFjZS1wZXJpb2QgYWNjZWxlcmF0aW9uIGlzIGVuYWJsZWQuClsgICAgMC4wMDAwMDBdIAlSQ1Ug
cmVzdHJpY3RpbmcgQ1BVcyBmcm9tIE5SX0NQVVM9NTEyIHRvIG5yX2NwdV9pZHM9NC4KWyAgICAw
LjAwMDAwMF0gCU9mZmxvYWQgUkNVIGNhbGxiYWNrcyBmcm9tIGFsbCBDUFVzClsgICAgMC4wMDAw
MDBdIAlPZmZsb2FkIFJDVSBjYWxsYmFja3MgZnJvbSBDUFVzOiAwLTUxMS4KWyAgICAwLjAwMDAw
MF0gbnJfcGlycXM6IDQwClsgICAgMC4wMDAwMDBdIE5SX0lSUVM6NjczMjggbnJfaXJxczoyNzky
IDE2ClsgICAgMC4wMDAwMDBdIFhlbiByZXBvcnRlZDogMzg5My4wMTIgTUh6IHByb2Nlc3Nvci4K
WyAgICAwLjAwMDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUKWyAgICAwLjAwMDAwMF0g
Y29uc29sZSBbdHR5MF0gZW5hYmxlZApbICAgIDAuMDAwMDAwXSBjb25zb2xlIFt4dmMtMV0gZW5h
YmxlZApbICAgIDAuMDgwMDAxXSBDYWxpYnJhdGluZyBkZWxheSB1c2luZyB0aW1lciBzcGVjaWZp
YyByb3V0aW5lLi4gNzg5My4xNiBCb2dvTUlQUyAobHBqPTE1Nzg2MzI1KQpbICAgIDAuMDgwMDA0
XSBwaWRfbWF4OiBkZWZhdWx0OiAzMjc2OCBtaW5pbXVtOiAzMDEKWyAgICAwLjA4MDAyN10gU2Vj
dXJpdHkgRnJhbWV3b3JrIGluaXRpYWxpemVkClsgICAgMC4wODAwNDJdIEFwcEFybW9yOiBBcHBB
cm1vciBpbml0aWFsaXplZApbICAgIDAuMDgwMDUxXSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDI1NgpbICAgIDAuMDgwMTg3XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBtZW1v
cnkKWyAgICAwLjA4MDE5OF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgZGV2aWNlcwpbICAg
IDAuMDgwMjAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBmcmVlemVyClsgICAgMC4wODAy
MDFdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIG5ldF9jbHMKWyAgICAwLjA4MDIwMl0gSW5p
dGlhbGl6aW5nIGNncm91cCBzdWJzeXMgYmxraW8KWyAgICAwLjA4MDIwNF0gSW5pdGlhbGl6aW5n
IGNncm91cCBzdWJzeXMgcGVyZl9ldmVudApbICAgIDAuMDgwMjI5XSBtY2U6IENQVSBzdXBwb3J0
cyAyIE1DRSBiYW5rcwpbICAgIDAuMDgwMjQ2XSBMYXN0IGxldmVsIGlUTEIgZW50cmllczogNEtC
IDUxMiwgMk1CIDEwMjQsIDRNQiA1MTIKTGFzdCBsZXZlbCBkVExCIGVudHJpZXM6IDRLQiAxMDI0
LCAyTUIgMTAyNCwgNE1CIDUxMgp0bGJfZmx1c2hhbGxfc2hpZnQ6IDUKWyAgICAwLjEwOTkwOV0g
QUNQSTogQ29yZSByZXZpc2lvbiAyMDEzMDUxNwpbICAgIDAuMTE2NDQ4XSBBQ1BJOiBBbGwgQUNQ
SSBUYWJsZXMgc3VjY2Vzc2Z1bGx5IGFjcXVpcmVkClsgICAgMC4xMjA2OTRdIFNNUCBhbHRlcm5h
dGl2ZXM6IHN3aXRjaGluZyB0byBTTVAgY29kZQpbICAgIDAuMTUwNjQ5XSBCcm91Z2h0IHVwIDQg
Q1BVcwpbICAgIDAuMTUwNzA3XSBkZXZ0bXBmczogaW5pdGlhbGl6ZWQKWyAgICAwLjE1MDcwN10g
UlRDIHRpbWU6IDIyOjI5OjUyLCBkYXRlOiAwMi8wMy8xNApbICAgIDAuMTUwNzA3XSBORVQ6IFJl
Z2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDE2ClsgICAgMC4xNTA3MDddIEFDUEk6IGJ1cyB0eXBl
IFBDSSByZWdpc3RlcmVkClsgICAgMC4xNTA3MDddIGFjcGlwaHA6IEFDUEkgSG90IFBsdWcgUENJ
IENvbnRyb2xsZXIgRHJpdmVyIHZlcnNpb246IDAuNQpbICAgIDAuMTUwNzA3XSBQQ0k6IE1NQ09O
RklHIGZvciBkb21haW4gMDAwMCBbYnVzIDAwLWZmXSBhdCBbbWVtIDB4ZTAwMDAwMDAtMHhlZmZm
ZmZmZl0gKGJhc2UgMHhlMDAwMDAwMCkKWyAgICAwLjE1MDcwN10gUENJOiBub3QgdXNpbmcgTU1D
T05GSUcKWyAgICAwLjE1MDcwN10gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUgMSBmb3Ig
YmFzZSBhY2Nlc3MKWyAgICAwLjE1MDcwN10gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUg
MSBmb3IgZXh0ZW5kZWQgYWNjZXNzClsgICAgMC4xNTIwMjldIGJpbzogY3JlYXRlIHNsYWIgPGJp
by0wPiBhdCAwClsgICAgMC4xNTIwNzJdIEFDUEk6IEFkZGVkIF9PU0koTW9kdWxlIERldmljZSkK
WyAgICAwLjE1MjA3NF0gQUNQSTogQWRkZWQgX09TSShQcm9jZXNzb3IgRGV2aWNlKQpbICAgIDAu
MTUyMDc1XSBBQ1BJOiBBZGRlZCBfT1NJKDMuMCBfU0NQIEV4dGVuc2lvbnMpClsgICAgMC4xNTIw
NzddIEFDUEk6IEFkZGVkIF9PU0koUHJvY2Vzc29yIEFnZ3JlZ2F0b3IgRGV2aWNlKQpbICAgIDAu
MTUyODkyXSBBQ1BJOiBFQzogTG9vayB1cCBFQyBpbiBEU0RUClsgICAgMC4xNTM3NjVdIEFDUEk6
IEV4ZWN1dGVkIDEgYmxvY2tzIG9mIG1vZHVsZS1sZXZlbCBleGVjdXRhYmxlIEFNTCBjb2RlClsg
ICAgMC4xNTczNzNdIFtGaXJtd2FyZSBCdWddOiBBQ1BJOiBCSU9TIF9PU0koTGludXgpIHF1ZXJ5
IGlnbm9yZWQKWyAgICAwLjE1ODExN10gQUNQSTogSW50ZXJwcmV0ZXIgZW5hYmxlZApbICAgIDAu
MTU4MTIyXSBBQ1BJIEV4Y2VwdGlvbjogQUVfTk9UX0ZPVU5ELCBXaGlsZSBldmFsdWF0aW5nIFNs
ZWVwIFN0YXRlIFtcX1MxX10gKDIwMTMwNTE3L2h3eGZhY2UtNTcxKQpbICAgIDAuMTU4MTI2XSBB
Q1BJIEV4Y2VwdGlvbjogQUVfTk9UX0ZPVU5ELCBXaGlsZSBldmFsdWF0aW5nIFNsZWVwIFN0YXRl
IFtcX1MyX10gKDIwMTMwNTE3L2h3eGZhY2UtNTcxKQpbICAgIDAuMTU4MTM1XSBBQ1BJOiAoc3Vw
cG9ydHMgUzAgUzMgUzUpClsgICAgMC4xNTgxMzZdIEFDUEk6IFVzaW5nIElPQVBJQyBmb3IgaW50
ZXJydXB0IHJvdXRpbmcKWyAgICAwLjE1ODM2Nl0gUENJOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAw
MDAgW2J1cyAwMC1mZl0gYXQgW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmZdIChiYXNlIDB4ZTAw
MDAwMDApClsgICAgMC4xNTg0MTZdIFBDSTogTU1DT05GSUcgYXQgW21lbSAweGUwMDAwMDAwLTB4
ZWZmZmZmZmZdIHJlc2VydmVkIGluIEFDUEkgbW90aGVyYm9hcmQgcmVzb3VyY2VzClsgICAgMC4y
NTY3NzddIFBDSTogVXNpbmcgaG9zdCBicmlkZ2Ugd2luZG93cyBmcm9tIEFDUEk7IGlmIG5lY2Vz
c2FyeSwgdXNlICJwY2k9bm9jcnMiIGFuZCByZXBvcnQgYSBidWcKWyAgICAwLjI1Njg0M10gQUNQ
STogTm8gZG9jayBkZXZpY2VzIGZvdW5kLgpbICAgIDAuMjY1ODYzXSBBQ1BJOiBQQ0kgUm9vdCBC
cmlkZ2UgW1BDSTBdIChkb21haW4gMDAwMCBbYnVzIDAwLWZmXSkKWyAgICAwLjI2NjExMF0gYWNw
aSBQTlAwQTAzOjAwOiBSZXF1ZXN0aW5nIEFDUEkgX09TQyBjb250cm9sICgweDFkKQpbICAgIDAu
MjY2NDQ1XSBhY3BpIFBOUDBBMDM6MDA6IEFDUEkgX09TQyBjb250cm9sICgweDE5KSBncmFudGVk
ClsgICAgMC4yNjY5NDVdIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDowMApbICAgIDAuMjY2
OTQ4XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtidXMgMDAtZmZdClsgICAg
MC4yNjY5NTBdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2lvICAweDAwMDAt
MHgwM2FmXQpbICAgIDAuMjY2OTUxXSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNl
IFtpbyAgMHgwM2UwLTB4MGNmN10KWyAgICAwLjI2Njk1M10gcGNpX2J1cyAwMDAwOjAwOiByb290
IGJ1cyByZXNvdXJjZSBbaW8gIDB4MDNiMC0weDAzZGZdClsgICAgMC4yNjY5NTRdIHBjaV9idXMg
MDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2lvICAweDBkMDAtMHhmZmZmXQpbICAgIDAuMjY2
OTU2XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFttZW0gMHgwMDBhMDAwMC0w
eDAwMGJmZmZmXQpbICAgIDAuMjY2OTU4XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291
cmNlIFttZW0gMHgwMDBjMDAwMC0weDAwMGRmZmZmXQpbICAgIDAuMjY2OTU5XSBwY2lfYnVzIDAw
MDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFttZW0gMHhiMDAwMDAwMC0weGZmZmZmZmZmXQpbICAg
IDAuMjY2OTcwXSBwY2kgMDAwMDowMDowMC4wOiBbMTAyMjoxNDEwXSB0eXBlIDAwIGNsYXNzIDB4
MDYwMDAwClsgICAgMC4yNjcxMjFdIHBjaSAwMDAwOjAwOjAwLjI6IFsxMDIyOjE0MTldIHR5cGUg
MDAgY2xhc3MgMHgwODA2MDAKWyAgICAwLjI2NzI4OV0gcGNpIDAwMDA6MDA6MDEuMDogWzEwMDI6
OTkwZV0gdHlwZSAwMCBjbGFzcyAweDAzMDAwMApbICAgIDAuMjY3MzAzXSBwY2kgMDAwMDowMDow
MS4wOiByZWcgMHgxMDogW21lbSAweGIwMDAwMDAwLTB4YmZmZmZmZmYgcHJlZl0KWyAgICAwLjI2
NzMxM10gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTQ6IFtpbyAgMHhmMDAwLTB4ZjBmZl0KWyAg
ICAwLjI2NzMyM10gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTg6IFttZW0gMHhmZjcwMDAwMC0w
eGZmNzNmZmZmXQpbICAgIDAuMjY3Mzk5XSBwY2kgMDAwMDowMDowMS4wOiBzdXBwb3J0cyBEMSBE
MgpbICAgIDAuMjY3NDc3XSBwY2kgMDAwMDowMDowMS4xOiBbMTAwMjo5OTAyXSB0eXBlIDAwIGNs
YXNzIDB4MDQwMzAwClsgICAgMC4yNjc0OTBdIHBjaSAwMDAwOjAwOjAxLjE6IHJlZyAweDEwOiBb
bWVtIDB4ZmY3NDAwMDAtMHhmZjc0M2ZmZl0KWyAgICAwLjI2NzU4MV0gcGNpIDAwMDA6MDA6MDEu
MTogc3VwcG9ydHMgRDEgRDIKWyAgICAwLjI2NzY2NF0gcGNpIDAwMDA6MDA6MDIuMDogWzEwMjI6
MTQxMl0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDAuMjY3NzU0XSBwY2kgMDAwMDowMDow
Mi4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDAuMjY3ODA5XSBw
Y2kgMDAwMDowMDowMi4wOiBTeXN0ZW0gd2FrZXVwIGRpc2FibGVkIGJ5IEFDUEkKWyAgICAwLjI2
Nzg4OF0gcGNpIDAwMDA6MDA6MTAuMDogWzEwMjI6NzgxMl0gdHlwZSAwMCBjbGFzcyAweDBjMDMz
MApbICAgIDAuMjY3OTE0XSBwY2kgMDAwMDowMDoxMC4wOiByZWcgMHgxMDogW21lbSAweGZmNzQ2
MDAwLTB4ZmY3NDdmZmYgNjRiaXRdClsgICAgMC4yNjgwMDVdIHBjaSAwMDAwOjAwOjEwLjA6IFBN
RSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkClsgICAgMC4yNjgwNzNdIHBjaSAwMDAw
OjAwOjEwLjA6IFN5c3RlbSB3YWtldXAgZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAuMjY4MTI4XSBw
Y2kgMDAwMDowMDoxMC4xOiBbMTAyMjo3ODEyXSB0eXBlIDAwIGNsYXNzIDB4MGMwMzMwClsgICAg
MC4yNjgxNTVdIHBjaSAwMDAwOjAwOjEwLjE6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NDQwMDAtMHhm
Zjc0NWZmZiA2NGJpdF0KWyAgICAwLjI2ODI5N10gcGNpIDAwMDA6MDA6MTAuMTogUE1FIyBzdXBw
b3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAwLjI2ODM1OV0gcGNpIDAwMDA6MDA6MTAu
MTogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBieSBBQ1BJClsgICAgMC4yNjg0MTFdIHBjaSAwMDAw
OjAwOjExLjA6IFsxMDIyOjc4MDFdIHR5cGUgMDAgY2xhc3MgMHgwMTA2MDEKWyAgICAwLjI2ODQz
N10gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MTA6IFtpbyAgMHhmMTkwLTB4ZjE5N10KWyAgICAw
LjI2ODQ1MF0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MTQ6IFtpbyAgMHhmMTgwLTB4ZjE4M10K
WyAgICAwLjI2ODQ2Ml0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MTg6IFtpbyAgMHhmMTcwLTB4
ZjE3N10KWyAgICAwLjI2ODQ3NV0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MWM6IFtpbyAgMHhm
MTYwLTB4ZjE2M10KWyAgICAwLjI2ODQ4N10gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MjA6IFtp
byAgMHhmMTUwLTB4ZjE1Zl0KWyAgICAwLjI2ODUwMF0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4
MjQ6IFttZW0gMHhmZjc0ZDAwMC0weGZmNzRkN2ZmXQpbICAgIDAuMjY4NjM3XSBwY2kgMDAwMDow
MDoxMi4wOiBbMTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yNjg2NTVd
IHBjaSAwMDAwOjAwOjEyLjA6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NGMwMDAtMHhmZjc0Y2ZmZl0K
WyAgICAwLjI2ODc3NV0gcGNpIDAwMDA6MDA6MTIuMDogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBi
eSBBQ1BJClsgICAgMC4yNjg4MjBdIHBjaSAwMDAwOjAwOjEyLjI6IFsxMDIyOjc4MDhdIHR5cGUg
MDAgY2xhc3MgMHgwYzAzMjAKWyAgICAwLjI2ODg0NV0gcGNpIDAwMDA6MDA6MTIuMjogcmVnIDB4
MTA6IFttZW0gMHhmZjc0YjAwMC0weGZmNzRiMGZmXQpbICAgIDAuMjY4OTYxXSBwY2kgMDAwMDow
MDoxMi4yOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjY4OTYzXSBwY2kgMDAwMDowMDoxMi4yOiBQ
TUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90ClsgICAgMC4yNjkwMTJdIHBjaSAwMDAw
OjAwOjEyLjI6IFN5c3RlbSB3YWtldXAgZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAuMjY5MDU2XSBw
Y2kgMDAwMDowMDoxMy4wOiBbMTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAg
MC4yNjkwNzRdIHBjaSAwMDAwOjAwOjEzLjA6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NGEwMDAtMHhm
Zjc0YWZmZl0KWyAgICAwLjI2OTIxNl0gcGNpIDAwMDA6MDA6MTMuMDogU3lzdGVtIHdha2V1cCBk
aXNhYmxlZCBieSBBQ1BJClsgICAgMC4yNjkyNjFdIHBjaSAwMDAwOjAwOjEzLjI6IFsxMDIyOjc4
MDhdIHR5cGUgMDAgY2xhc3MgMHgwYzAzMjAKWyAgICAwLjI2OTI4Nl0gcGNpIDAwMDA6MDA6MTMu
MjogcmVnIDB4MTA6IFttZW0gMHhmZjc0OTAwMC0weGZmNzQ5MGZmXQpbICAgIDAuMjY5NDAyXSBw
Y2kgMDAwMDowMDoxMy4yOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjY5NDA0XSBwY2kgMDAwMDow
MDoxMy4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90ClsgICAgMC4yNjk0NTRd
IHBjaSAwMDAwOjAwOjEzLjI6IFN5c3RlbSB3YWtldXAgZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAu
MjY5NDk3XSBwY2kgMDAwMDowMDoxNC4wOiBbMTAyMjo3ODBiXSB0eXBlIDAwIGNsYXNzIDB4MGMw
NTAwClsgICAgMC4yNjk2NTJdIHBjaSAwMDAwOjAwOjE0LjE6IFsxMDIyOjc4MGNdIHR5cGUgMDAg
Y2xhc3MgMHgwMTAxOGEKWyAgICAwLjI2OTY3MF0gcGNpIDAwMDA6MDA6MTQuMTogcmVnIDB4MTA6
IFtpbyAgMHhmMTQwLTB4ZjE0N10KWyAgICAwLjI2OTY4M10gcGNpIDAwMDA6MDA6MTQuMTogcmVn
IDB4MTQ6IFtpbyAgMHhmMTMwLTB4ZjEzM10KWyAgICAwLjI2OTY5Nl0gcGNpIDAwMDA6MDA6MTQu
MTogcmVnIDB4MTg6IFtpbyAgMHhmMTIwLTB4ZjEyN10KWyAgICAwLjI2OTcwOV0gcGNpIDAwMDA6
MDA6MTQuMTogcmVnIDB4MWM6IFtpbyAgMHhmMTEwLTB4ZjExM10KWyAgICAwLjI2OTcyMV0gcGNp
IDAwMDA6MDA6MTQuMTogcmVnIDB4MjA6IFtpbyAgMHhmMTAwLTB4ZjEwZl0KWyAgICAwLjI2OTgy
NV0gcGNpIDAwMDA6MDA6MTQuMzogWzEwMjI6NzgwZV0gdHlwZSAwMCBjbGFzcyAweDA2MDEwMApb
ICAgIDAuMjcwNzI0XSBwY2kgMDAwMDowMDoxNC40OiBbMTAyMjo3ODBmXSB0eXBlIDAxIGNsYXNz
IDB4MDYwNDAxClsgICAgMC4yNzA4MTBdIHBjaSAwMDAwOjAwOjE0LjQ6IFN5c3RlbSB3YWtldXAg
ZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAuMjcwODQ1XSBwY2kgMDAwMDowMDoxNC41OiBbMTAyMjo3
ODA5XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yNzA4NjNdIHBjaSAwMDAwOjAwOjE0
LjU6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NDgwMDAtMHhmZjc0OGZmZl0KWyAgICAwLjI3MDk4MF0g
cGNpIDAwMDA6MDA6MTQuNTogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBieSBBQ1BJClsgICAgMC4y
NzEwMjZdIHBjaSAwMDAwOjAwOjE1LjA6IFsxMDIyOjQzYTBdIHR5cGUgMDEgY2xhc3MgMHgwNjA0
MDAKWyAgICAwLjI3MTEzN10gcGNpIDAwMDA6MDA6MTUuMDogc3VwcG9ydHMgRDEgRDIKWyAgICAw
LjI3MTE5Ml0gcGNpIDAwMDA6MDA6MTUuMDogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBieSBBQ1BJ
ClsgICAgMC4yNzEyMzVdIHBjaSAwMDAwOjAwOjE1LjI6IFsxMDIyOjQzYTJdIHR5cGUgMDEgY2xh
c3MgMHgwNjA0MDAKWyAgICAwLjI3MTM0Nl0gcGNpIDAwMDA6MDA6MTUuMjogc3VwcG9ydHMgRDEg
RDIKWyAgICAwLjI3MTQwMV0gcGNpIDAwMDA6MDA6MTUuMjogU3lzdGVtIHdha2V1cCBkaXNhYmxl
ZCBieSBBQ1BJClsgICAgMC4yNzE0NDFdIHBjaSAwMDAwOjAwOjE1LjM6IFsxMDIyOjQzYTNdIHR5
cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAwLjI3MTU1Ml0gcGNpIDAwMDA6MDA6MTUuMzogc3Vw
cG9ydHMgRDEgRDIKWyAgICAwLjI3MTYwN10gcGNpIDAwMDA6MDA6MTUuMzogU3lzdGVtIHdha2V1
cCBkaXNhYmxlZCBieSBBQ1BJClsgICAgMC4yNzE2NDhdIHBjaSAwMDAwOjAwOjE4LjA6IFsxMDIy
OjE0MDBdIHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgICAwLjI3MTc0N10gcGNpIDAwMDA6MDA6
MTguMTogWzEwMjI6MTQwMV0gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgIDAuMjcxODQ1XSBw
Y2kgMDAwMDowMDoxOC4yOiBbMTAyMjoxNDAyXSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsgICAg
MC4yNzE5NDFdIHBjaSAwMDAwOjAwOjE4LjM6IFsxMDIyOjE0MDNdIHR5cGUgMDAgY2xhc3MgMHgw
NjAwMDAKWyAgICAwLjI3MjAwMF0gcGNpIDAwMDA6MDA6MTguNDogWzEwMjI6MTQwNF0gdHlwZSAw
MCBjbGFzcyAweDA2MDAwMApbICAgIDAuMjcyMDkzXSBwY2kgMDAwMDowMDoxOC41OiBbMTAyMjox
NDA1XSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsgICAgMC4yNzIyNzhdIHBjaSAwMDAwOjAxOjAw
LjA6IFsxMDAyOjY4MTldIHR5cGUgMDAgY2xhc3MgMHgwMzAwMDAKWyAgICAwLjI3MjI5OF0gcGNp
IDAwMDA6MDE6MDAuMDogcmVnIDB4MTA6IFttZW0gMHhjMDAwMDAwMC0weGNmZmZmZmZmIDY0Yml0
IHByZWZdClsgICAgMC4yNzIzMTVdIHBjaSAwMDAwOjAxOjAwLjA6IHJlZyAweDE4OiBbbWVtIDB4
ZmY2MDAwMDAtMHhmZjYzZmZmZiA2NGJpdF0KWyAgICAwLjI3MjMyNl0gcGNpIDAwMDA6MDE6MDAu
MDogcmVnIDB4MjA6IFtpbyAgMHhlMDAwLTB4ZTBmZl0KWyAgICAwLjI3MjM0N10gcGNpIDAwMDA6
MDE6MDAuMDogcmVnIDB4MzA6IFttZW0gMHhmZjY0MDAwMC0weGZmNjVmZmZmIHByZWZdClsgICAg
MC4yNzI0MTFdIHBjaSAwMDAwOjAxOjAwLjA6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yNzI0MTNd
IHBjaSAwMDAwOjAxOjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDEgRDIgRDNob3QKWyAgICAw
LjI3MjQ4N10gcGNpIDAwMDA6MDE6MDAuMTogWzEwMDI6YWFiMF0gdHlwZSAwMCBjbGFzcyAweDA0
MDMwMApbICAgIDAuMjcyNTA3XSBwY2kgMDAwMDowMTowMC4xOiByZWcgMHgxMDogW21lbSAweGZm
NjYwMDAwLTB4ZmY2NjNmZmYgNjRiaXRdClsgICAgMC4yNzI2MTldIHBjaSAwMDAwOjAxOjAwLjE6
IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yODAwMzJdIHBjaSAwMDAwOjAwOjAyLjA6IFBDSSBicmlk
Z2UgdG8gW2J1cyAwMV0KWyAgICAwLjI4MDA0MV0gcGNpIDAwMDA6MDA6MDIuMDogICBicmlkZ2Ug
d2luZG93IFtpbyAgMHhlMDAwLTB4ZWZmZl0KWyAgICAwLjI4MDA0NV0gcGNpIDAwMDA6MDA6MDIu
MDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmZjYwMDAwMC0weGZmNmZmZmZmXQpbICAgIDAuMjgw
MDUxXSBwY2kgMDAwMDowMDowMi4wOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGMwMDAwMDAwLTB4
Y2ZmZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjI4MDExOV0gcGNpIDAwMDA6MDI6MDYuMDogWzEx
MDI6MDAwN10gdHlwZSAwMCBjbGFzcyAweDA0MDEwMApbICAgIDAuMjgwMTUzXSBwY2kgMDAwMDow
MjowNi4wOiByZWcgMHgxMDogW2lvICAweGQwMDAtMHhkMDFmXQpbICAgIDAuMjgwMjgwXSBwY2kg
MDAwMDowMjowNi4wOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjgwMzQ1XSBwY2kgMDAwMDowMjow
Ny4wOiBbOTcxMDo5ODM1XSB0eXBlIDAwIGNsYXNzIDB4MDcwMDAyClsgICAgMC4yODAzNjhdIHBj
aSAwMDAwOjAyOjA3LjA6IHJlZyAweDEwOiBbaW8gIDB4ZDA3MC0weGQwNzddClsgICAgMC4yODAz
ODNdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDE0OiBbaW8gIDB4ZDA2MC0weGQwNjddClsgICAg
MC4yODAzOThdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDE4OiBbaW8gIDB4ZDA1MC0weGQwNTdd
ClsgICAgMC4yODA0MTRdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDFjOiBbaW8gIDB4ZDA0MC0w
eGQwNDddClsgICAgMC4yODA0MjldIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDIwOiBbaW8gIDB4
ZDAzMC0weGQwMzddClsgICAgMC4yODA0NDRdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDI0OiBb
aW8gIDB4ZDAyMC0weGQwMmZdClsgICAgMC4yODA1NTFdIHBjaSAwMDAwOjAwOjE0LjQ6IFBDSSBi
cmlkZ2UgdG8gW2J1cyAwMl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDU1Nl0gcGNp
IDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHhkMDAwLTB4ZGZmZl0KWyAgICAw
LjI4MDU2NF0gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwMDAwLTB4
MDNhZl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDU2Nl0gcGNpIDAwMDA6MDA6MTQu
NDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwM2UwLTB4MGNmN10gKHN1YnRyYWN0aXZlIGRlY29k
ZSkKWyAgICAwLjI4MDU2N10gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAg
MHgwM2IwLTB4MDNkZl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDU2OV0gcGNpIDAw
MDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwZDAwLTB4ZmZmZl0gKHN1YnRyYWN0
aXZlIGRlY29kZSkKWyAgICAwLjI4MDU3MF0gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2lu
ZG93IFttZW0gMHgwMDBhMDAwMC0weDAwMGJmZmZmXSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAg
IDAuMjgwNTcyXSBwY2kgMDAwMDowMDoxNC40OiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweDAwMGMw
MDAwLTB4MDAwZGZmZmZdIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMC4yODA1NzNdIHBjaSAw
MDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4YjAwMDAwMDAtMHhmZmZmZmZmZl0g
KHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDY4MF0gcGNpIDAwMDA6MDM6MDAuMDogWzEx
MzE6NzE2MF0gdHlwZSAwMCBjbGFzcyAweDA0ODAwMApbICAgIDAuMjgwNzE0XSBwY2kgMDAwMDow
MzowMC4wOiByZWcgMHgxMDogW21lbSAweGZmNTAwMDAwLTB4ZmY1ZmZmZmYgNjRiaXRdClsgICAg
MC4yODA4ODldIHBjaSAwMDAwOjAzOjAwLjA6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yODA4OTBd
IHBjaSAwMDAwOjAzOjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIKWyAgICAwLjI4
MDk1NF0gcGNpIDAwMDA6MDM6MDAuMDogZGlzYWJsaW5nIEFTUE0gb24gcHJlLTEuMSBQQ0llIGRl
dmljZS4gIFlvdSBjYW4gZW5hYmxlIGl0IHdpdGggJ3BjaWVfYXNwbT1mb3JjZScKWyAgICAwLjI4
MDk2Nl0gcGNpIDAwMDA6MDA6MTUuMDogUENJIGJyaWRnZSB0byBbYnVzIDAzXQpbICAgIDAuMjgw
OTc3XSBwY2kgMDAwMDowMDoxNS4wOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGZmNTAwMDAwLTB4
ZmY1ZmZmZmZdClsgICAgMC4yODEwODddIHBjaSAwMDAwOjA0OjAwLjA6IFsxYjZmOjcwNTJdIHR5
cGUgMDAgY2xhc3MgMHgwYzAzMzAKWyAgICAwLjI4MTExOV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVn
IDB4MTA6IFttZW0gMHhmZjQwMDAwMC0weGZmNDA3ZmZmIDY0Yml0XQpbICAgIDAuMjgxMjk5XSBw
Y2kgMDAwMDowNDowMC4wOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjgxMzAwXSBwY2kgMDAwMDow
NDowMC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90IEQzY29sZApbICAgIDAu
Mjg4MDM1XSBwY2kgMDAwMDowMDoxNS4yOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDRdClsgICAgMC4y
ODgwNDhdIHBjaSAwMDAwOjAwOjE1LjI6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZmY0MDAwMDAt
MHhmZjRmZmZmZl0KWyAgICAwLjI4ODE3M10gcGNpIDAwMDA6MDU6MDAuMDogWzEwZWM6ODE2OF0g
dHlwZSAwMCBjbGFzcyAweDAyMDAwMApbICAgIDAuMjg4MTk3XSBwY2kgMDAwMDowNTowMC4wOiBy
ZWcgMHgxMDogW2lvICAweGMwMDAtMHhjMGZmXQpbICAgIDAuMjg4MjM1XSBwY2kgMDAwMDowNTow
MC4wOiByZWcgMHgxODogW21lbSAweGQwMDA0MDAwLTB4ZDAwMDRmZmYgNjRiaXQgcHJlZl0KWyAg
ICAwLjI4ODI2MF0gcGNpIDAwMDA6MDU6MDAuMDogcmVnIDB4MjA6IFttZW0gMHhkMDAwMDAwMC0w
eGQwMDAzZmZmIDY0Yml0IHByZWZdClsgICAgMC4yODgzNjZdIHBjaSAwMDAwOjA1OjAwLjA6IHN1
cHBvcnRzIEQxIEQyClsgICAgMC4yODgzNjddIHBjaSAwMDAwOjA1OjAwLjA6IFBNRSMgc3VwcG9y
dGVkIGZyb20gRDAgRDEgRDIgRDNob3QgRDNjb2xkClsgICAgMC4yOTYwMzRdIHBjaSAwMDAwOjAw
OjE1LjM6IFBDSSBicmlkZ2UgdG8gW2J1cyAwNV0KWyAgICAwLjI5NjA0NF0gcGNpIDAwMDA6MDA6
MTUuMzogICBicmlkZ2Ugd2luZG93IFtpbyAgMHhjMDAwLTB4Y2ZmZl0KWyAgICAwLjI5NjA1NV0g
cGNpIDAwMDA6MDA6MTUuMzogICBicmlkZ2Ugd2luZG93IFttZW0gMHhkMDAwMDAwMC0weGQwMGZm
ZmZmIDY0Yml0IHByZWZdClsgICAgMC4yOTYwOTBdIHBjaV9idXMgMDAwMDowMDogb24gTlVNQSBu
b2RlIDAKWyAgICAwLjI5Njc0OF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktBXSAoSVJR
cyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5Njg0Nl0gQUNQSTogUENJIEludGVycnVw
dCBMaW5rIFtMTktCXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5Njk1MF0g
QUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktDXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkg
KjAKWyAgICAwLjI5NzA1OV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktEXSAoSVJRcyA0
IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzE0MF0gQUNQSTogUENJIEludGVycnVwdCBM
aW5rIFtMTktFXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzIwM10gQUNQ
STogUENJIEludGVycnVwdCBMaW5rIFtMTktGXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAK
WyAgICAwLjI5NzI2Nl0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktHXSAoSVJRcyA0IDUg
NyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzMyOV0gQUNQSTogUENJIEludGVycnVwdCBMaW5r
IFtMTktIXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzQ2Nl0gQUNQSTog
XF9TQl8uUENJMDogbm90aWZ5IGhhbmRsZXIgaXMgaW5zdGFsbGVkClsgICAgMC4yOTc0OTZdIEZv
dW5kIDEgYWNwaSByb290IGRldmljZXMKWyAgICAwLjI5NzYyMV0gcGNpIDAwMDA6MDA6MDAuMjog
R1NJMTY6IGxldmVsLWxvdwpbICAgIDAuMjk3NzAzXSBwY2kgMDAwMDowMDowMS4wOiBHU0kxNzog
bGV2ZWwtbG93ClsgICAgMC4yOTc3ODVdIHBjaSAwMDAwOjAwOjAxLjE6IEdTSTE4OiBsZXZlbC1s
b3cKWyAgICAwLjI5ODExNV0gcGNpIDAwMDA6MDA6MTEuMDogR1NJMTk6IGxldmVsLWxvdwpbICAg
IDAuMjk4ODY1XSBwY2kgMDAwMDowMjowNi4wOiBHU0kyMTogbGV2ZWwtbG93ClsgICAgMC4yOTg5
MDJdIHBjaSAwMDAwOjAyOjA3LjA6IEdTSTIyOiBsZXZlbC1sb3cKWyAgICAwLjI5ODk5MV0gdmdh
YXJiOiBkZXZpY2UgYWRkZWQ6IFBDSTowMDAwOjAwOjAxLjAsZGVjb2Rlcz1pbyttZW0sb3ducz1t
ZW0sbG9ja3M9bm9uZQpbICAgIDAuMjk4OTkxXSB2Z2FhcmI6IGRldmljZSBhZGRlZDogUENJOjAw
MDA6MDE6MDAuMCxkZWNvZGVzPWlvK21lbSxvd25zPWlvK21lbSxsb2Nrcz1ub25lClsgICAgMC4y
OTg5OTFdIHZnYWFyYjogbG9hZGVkClsgICAgMC4yOTg5OTFdIHZnYWFyYjogYnJpZGdlIGNvbnRy
b2wgcG9zc2libGUgMDAwMDowMTowMC4wClsgICAgMC4yOTg5OTFdIHZnYWFyYjogbm8gYnJpZGdl
IGNvbnRyb2wgcG9zc2libGUgMDAwMDowMDowMS4wClsgICAgMC4yOTg5OTFdIHhlbl9tZW06IElu
aXRpYWxpc2luZyBiYWxsb29uIGRyaXZlci4KWyAgICAwLjI5ODk5MV0gU0NTSSBzdWJzeXN0ZW0g
aW5pdGlhbGl6ZWQKWyAgICAwLjI5ODk5MV0gQUNQSTogYnVzIHR5cGUgQVRBIHJlZ2lzdGVyZWQK
WyAgICAwLjI5ODk5MV0gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuClsgICAgMC4yOTg5OTFd
IFBDSTogVXNpbmcgQUNQSSBmb3IgSVJRIHJvdXRpbmcKWyAgICAwLjMxMTc3MV0gUENJOiBwY2lf
Y2FjaGVfbGluZV9zaXplIHNldCB0byA2NCBieXRlcwpbICAgIDAuMzExODg1XSBlODIwOiByZXNl
cnZlIFJBTSBidWZmZXIgW21lbSAweDAwMDllODAwLTB4MDAwOWZmZmZdClsgICAgMC4zMTE4ODdd
IGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZlciBbbWVtIDB4OGQ2OGIwMDAtMHg4ZmZmZmZmZl0KWyAg
ICAwLjMxMTg4OV0gZTgyMDogcmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0gMHg4ZWE0NjAwMC0weDhm
ZmZmZmZmXQpbICAgIDAuMzExODkwXSBlODIwOiByZXNlcnZlIFJBTSBidWZmZXIgW21lbSAweDhm
MDY0MDAwLTB4OGZmZmZmZmZdClsgICAgMC4zMTE4OTFdIGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZl
ciBbbWVtIDB4OGY4MDAwMDAtMHg4ZmZmZmZmZl0KWyAgICAwLjMxMTk1OF0gTmV0TGFiZWw6IElu
aXRpYWxpemluZwpbICAgIDAuMzExOTU5XSBOZXRMYWJlbDogIGRvbWFpbiBoYXNoIHNpemUgPSAx
MjgKWyAgICAwLjMxMTk2MF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxBQkVMRUQgQ0lQU092
NApbICAgIDAuMzExOTY4XSBOZXRMYWJlbDogIHVubGFiZWxlZCB0cmFmZmljIGFsbG93ZWQgYnkg
ZGVmYXVsdApbICAgIDAuMzEyMDAwXSBTd2l0Y2hlZCB0byBjbG9ja3NvdXJjZSB4ZW4KWyAgICAw
LjMxMjAwMF0gQXBwQXJtb3I6IEFwcEFybW9yIEZpbGVzeXN0ZW0gRW5hYmxlZApbICAgIDAuMzEy
MDAwXSBwbnA6IFBuUCBBQ1BJIGluaXQKWyAgICAwLjMxMjAwMF0gQUNQSTogYnVzIHR5cGUgUE5Q
IHJlZ2lzdGVyZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZTAwMDAwMDAt
MHhlZmZmZmZmZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAw
OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMGMwMSAoYWN0aXZlKQpbICAgIDAu
MzEyMDAwXSBzeXN0ZW0gMDA6MDE6IFttZW0gMHg5MDAwMDAwMC0weGFmZmZmZmZmXSBoYXMgYmVl
biByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDE6IFBsdWcgYW5kIFBsYXkgQUNQ
SSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDow
MjogW21lbSAweGZlYjgwMDAwLTB4ZmViZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4z
MTIwMDBdIHN5c3RlbSAwMDowMjogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBj
MDIgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MDRkMC0weDA0
ZDFdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lvICAw
eDA0MGJdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lv
ICAweDA0ZDZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzog
W2lvICAweDBjMDAtMHgwYzAxXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0
ZW0gMDA6MDM6IFtpbyAgMHgwYzE0XSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBz
eXN0ZW0gMDA6MDM6IFtpbyAgMHgwYzUwLTB4MGM1MV0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAw
LjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGM1Ml0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAg
ICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGM2Y10gaGFzIGJlZW4gcmVzZXJ2ZWQK
WyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGM2Zl0gaGFzIGJlZW4gcmVzZXJ2
ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGNkMC0weDBjZDFdIGhhcyBi
ZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lvICAweDBjZDItMHgw
Y2QzXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtpbyAg
MHgwY2Q0LTB4MGNkNV0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAw
OjAzOiBbaW8gIDB4MGNkNi0weDBjZDddIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBd
IHN5c3RlbSAwMDowMzogW2lvICAweDBjZDgtMHgwY2RmXSBoYXMgYmVlbiByZXNlcnZlZApbICAg
IDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtpbyAgMHgwODAwLTB4MDg5Zl0gY291bGQgbm90IGJl
IHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lvICAweDBiMjAtMHgwYjNm
XSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtpbyAgMHgw
OTAwLTB4MDkwZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAz
OiBbaW8gIDB4MDkxMC0weDA5MWZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5
c3RlbSAwMDowMzogW2lvICAweGZlMDAtMHhmZWZlXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAu
MzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWMwMDAwMC0weGZlYzAwZmZmXSBoYXMgYmVl
biByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWUwMDAwMC0w
eGZlZTAwZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6
IFttZW0gMHhmZWQ4MDAwMC0weGZlZDhmZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEy
MDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWQ2MTAwMC0weGZlZDcwZmZmXSBoYXMgYmVlbiBy
ZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWMxMDAwMC0weGZl
YzEwZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtt
ZW0gMHhmZWQwMDAwMC0weGZlZDAwZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAw
XSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZjgwMDAwMC0weGZmZmZmZmZmXSBoYXMgYmVlbiByZXNl
cnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZp
Y2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowNDogW2lv
ICAweDAyOTAtMHgwMjlmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0g
MDA6MDQ6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsg
ICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowNTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURz
IFBOUDBjMDIgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gcG5wIDAwOjA2OiBbZG1hIDRdClsgICAg
MC4zMTIwMDBdIHBucCAwMDowNjogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDAy
MDAgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gcG5wIDAwOjA3OiBQbHVnIGFuZCBQbGF5IEFDUEkg
ZGV2aWNlLCBJRHMgUE5QMGIwMCAoYWN0aXZlKQpbICAgIDAuMzEyMDAwXSBwbnAgMDA6MDg6IFBs
dWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwODAwIChhY3RpdmUpClsgICAgMC4zMTIw
MDBdIHN5c3RlbSAwMDowOTogW2lvICAweDA0ZDAtMHgwNGQxXSBoYXMgYmVlbiByZXNlcnZlZApb
ICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDk6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElE
cyBQTlAwYzAyIChhY3RpdmUpClsgICAgMC4zMTIwMDBdIHBucCAwMDowYTogUGx1ZyBhbmQgUGxh
eSBBQ1BJIGRldmljZSwgSURzIFBOUDBjMDQgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gcG5wIDAw
OjBiOiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMDEwMyAoYWN0aXZlKQpbICAg
IDAuMzEyMDAwXSBwbnA6IFBuUCBBQ1BJOiBmb3VuZCAxMiBkZXZpY2VzClsgICAgMC4zMTIwMDBd
IEFDUEk6IGJ1cyB0eXBlIFBOUCB1bnJlZ2lzdGVyZWQKWyAgICAwLjMxNDA4M10gcGNpIDAwMDA6
MDA6MDIuMDogUENJIGJyaWRnZSB0byBbYnVzIDAxXQpbICAgIDAuMzE0MDg4XSBwY2kgMDAwMDow
MDowMi4wOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweGUwMDAtMHhlZmZmXQpbICAgIDAuMzE0MDkz
XSBwY2kgMDAwMDowMDowMi4wOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGZmNjAwMDAwLTB4ZmY2
ZmZmZmZdClsgICAgMC4zMTQwOTddIHBjaSAwMDAwOjAwOjAyLjA6ICAgYnJpZGdlIHdpbmRvdyBb
bWVtIDB4YzAwMDAwMDAtMHhjZmZmZmZmZiA2NGJpdCBwcmVmXQpbICAgIDAuMzE0MTAzXSBwY2kg
MDAwMDowMDoxNC40OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDJdClsgICAgMC4zMTQxMDZdIHBjaSAw
MDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbaW8gIDB4ZDAwMC0weGRmZmZdClsgICAgMC4z
MTQxMjJdIHBjaSAwMDAwOjAwOjE1LjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwM10KWyAgICAwLjMx
NDEyOF0gcGNpIDAwMDA6MDA6MTUuMDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmZjUwMDAwMC0w
eGZmNWZmZmZmXQpbICAgIDAuMzE0MTM3XSBwY2kgMDAwMDowMDoxNS4yOiBQQ0kgYnJpZGdlIHRv
IFtidXMgMDRdClsgICAgMC4zMTQxNDNdIHBjaSAwMDAwOjAwOjE1LjI6ICAgYnJpZGdlIHdpbmRv
dyBbbWVtIDB4ZmY0MDAwMDAtMHhmZjRmZmZmZl0KWyAgICAwLjMxNDE1M10gcGNpIDAwMDA6MDA6
MTUuMzogUENJIGJyaWRnZSB0byBbYnVzIDA1XQpbICAgIDAuMzE0MTU2XSBwY2kgMDAwMDowMDox
NS4zOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweGMwMDAtMHhjZmZmXQpbICAgIDAuMzE0MTY0XSBw
Y2kgMDAwMDowMDoxNS4zOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGQwMDAwMDAwLTB4ZDAwZmZm
ZmYgNjRiaXQgcHJlZl0KWyAgICAwLjMxNDUyMl0gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0
IFtpbyAgMHgwMDAwLTB4MDNhZl0KWyAgICAwLjMxNDUyNF0gcGNpX2J1cyAwMDAwOjAwOiByZXNv
dXJjZSA1IFtpbyAgMHgwM2UwLTB4MGNmN10KWyAgICAwLjMxNDUyNV0gcGNpX2J1cyAwMDAwOjAw
OiByZXNvdXJjZSA2IFtpbyAgMHgwM2IwLTB4MDNkZl0KWyAgICAwLjMxNDUyN10gcGNpX2J1cyAw
MDAwOjAwOiByZXNvdXJjZSA3IFtpbyAgMHgwZDAwLTB4ZmZmZl0KWyAgICAwLjMxNDUyOF0gcGNp
X2J1cyAwMDAwOjAwOiByZXNvdXJjZSA4IFttZW0gMHgwMDBhMDAwMC0weDAwMGJmZmZmXQpbICAg
IDAuMzE0NTMwXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDkgW21lbSAweDAwMGMwMDAwLTB4
MDAwZGZmZmZdClsgICAgMC4zMTQ1MzFdIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgMTAgW21l
bSAweGIwMDAwMDAwLTB4ZmZmZmZmZmZdClsgICAgMC4zMTQ1MzNdIHBjaV9idXMgMDAwMDowMTog
cmVzb3VyY2UgMCBbaW8gIDB4ZTAwMC0weGVmZmZdClsgICAgMC4zMTQ1MzRdIHBjaV9idXMgMDAw
MDowMTogcmVzb3VyY2UgMSBbbWVtIDB4ZmY2MDAwMDAtMHhmZjZmZmZmZl0KWyAgICAwLjMxNDUz
Nl0gcGNpX2J1cyAwMDAwOjAxOiByZXNvdXJjZSAyIFttZW0gMHhjMDAwMDAwMC0weGNmZmZmZmZm
IDY0Yml0IHByZWZdClsgICAgMC4zMTQ1MzddIHBjaV9idXMgMDAwMDowMjogcmVzb3VyY2UgMCBb
aW8gIDB4ZDAwMC0weGRmZmZdClsgICAgMC4zMTQ1MzldIHBjaV9idXMgMDAwMDowMjogcmVzb3Vy
Y2UgNCBbaW8gIDB4MDAwMC0weDAzYWZdClsgICAgMC4zMTQ1NDBdIHBjaV9idXMgMDAwMDowMjog
cmVzb3VyY2UgNSBbaW8gIDB4MDNlMC0weDBjZjddClsgICAgMC4zMTQ1NDJdIHBjaV9idXMgMDAw
MDowMjogcmVzb3VyY2UgNiBbaW8gIDB4MDNiMC0weDAzZGZdClsgICAgMC4zMTQ1NDNdIHBjaV9i
dXMgMDAwMDowMjogcmVzb3VyY2UgNyBbaW8gIDB4MGQwMC0weGZmZmZdClsgICAgMC4zMTQ1NDRd
IHBjaV9idXMgMDAwMDowMjogcmVzb3VyY2UgOCBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZl0K
WyAgICAwLjMxNDU0Nl0gcGNpX2J1cyAwMDAwOjAyOiByZXNvdXJjZSA5IFttZW0gMHgwMDBjMDAw
MC0weDAwMGRmZmZmXQpbICAgIDAuMzE0NTQ3XSBwY2lfYnVzIDAwMDA6MDI6IHJlc291cmNlIDEw
IFttZW0gMHhiMDAwMDAwMC0weGZmZmZmZmZmXQpbICAgIDAuMzE0NTQ5XSBwY2lfYnVzIDAwMDA6
MDM6IHJlc291cmNlIDEgW21lbSAweGZmNTAwMDAwLTB4ZmY1ZmZmZmZdClsgICAgMC4zMTQ1NTBd
IHBjaV9idXMgMDAwMDowNDogcmVzb3VyY2UgMSBbbWVtIDB4ZmY0MDAwMDAtMHhmZjRmZmZmZl0K
WyAgICAwLjMxNDU1Ml0gcGNpX2J1cyAwMDAwOjA1OiByZXNvdXJjZSAwIFtpbyAgMHhjMDAwLTB4
Y2ZmZl0KWyAgICAwLjMxNDU1M10gcGNpX2J1cyAwMDAwOjA1OiByZXNvdXJjZSAyIFttZW0gMHhk
MDAwMDAwMC0weGQwMGZmZmZmIDY0Yml0IHByZWZdClsgICAgMC4zMTQ2MzFdIE5FVDogUmVnaXN0
ZXJlZCBwcm90b2NvbCBmYW1pbHkgMgpbICAgIDAuMzE0ODAwXSBUQ1AgZXN0YWJsaXNoZWQgaGFz
aCB0YWJsZSBlbnRyaWVzOiA2NTUzNiAob3JkZXI6IDgsIDEwNDg1NzYgYnl0ZXMpClsgICAgMC4z
MTUwMTRdIFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA4LCAxMDQ4
NTc2IGJ5dGVzKQpbICAgIDAuMzE1MjM3XSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQgKGVz
dGFibGlzaGVkIDY1NTM2IGJpbmQgNjU1MzYpClsgICAgMC4zMTUyNzFdIFRDUDogcmVubyByZWdp
c3RlcmVkClsgICAgMC4zMTUyNzRdIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVy
OiA2LCAyNjIxNDQgYnl0ZXMpClsgICAgMC4zMTUzMjZdIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50
cmllczogNDA5NiAob3JkZXI6IDYsIDI2MjE0NCBieXRlcykKWyAgICAwLjMxNTQ3M10gTkVUOiBS
ZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxClsgICAgMC4zMTU1MDldIHBjaSAwMDAwOjAwOjAx
LjA6IEJvb3QgdmlkZW8gZGV2aWNlClsgICAgMC41NjAyNzRdIHBjaSAwMDAwOjAxOjAwLjA6IEJv
b3QgdmlkZW8gZGV2aWNlClsgICAgMC41NjA0NTFdIFBDSTogQ0xTIDY0IGJ5dGVzLCBkZWZhdWx0
IDY0ClsgICAgMC41NjA0OTJdIFVucGFja2luZyBpbml0cmFtZnMuLi4KWyAgICAwLjYwMjE4Ml0g
RnJlZWluZyBpbml0cmQgbWVtb3J5OiA0OTcwOEsgKGZmZmY4ODAwMDEwMDAwMDAgLSBmZmZmODgw
MDA0MDhiMDAwKQpbICAgIDAuNjAyNTE0XSBhdWRpdDogaW5pdGlhbGl6aW5nIG5ldGxpbmsgc29j
a2V0IChkaXNhYmxlZCkKWyAgICAwLjYwMjUyNV0gdHlwZT0yMDAwIGF1ZGl0KDEzOTE0NjY1OTEu
NjAwOjEpOiBpbml0aWFsaXplZApbICAgIDAuNjI1NjcwXSB6YnVkOiBsb2FkZWQKWyAgICAwLjYy
NTg1NV0gVkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjUuMgpbICAgIDAuNjI1ODgwXSBEcXVvdC1j
YWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXIgMCwgNDA5NiBieXRlcykKWyAgICAw
LjYyNjEyMF0gbXNnbW5pIGhhcyBiZWVuIHNldCB0byAxMzgzNgpbICAgIDAuNjI2NDA4XSBCbG9j
ayBsYXllciBTQ1NJIGdlbmVyaWMgKGJzZykgZHJpdmVyIHZlcnNpb24gMC40IGxvYWRlZCAobWFq
b3IgMjUyKQpbICAgIDAuNjI2NDUxXSBpbyBzY2hlZHVsZXIgbm9vcCByZWdpc3RlcmVkClsgICAg
MC42MjY0NTJdIGlvIHNjaGVkdWxlciBkZWFkbGluZSByZWdpc3RlcmVkClsgICAgMC42MjY0NzJd
IGlvIHNjaGVkdWxlciBjZnEgcmVnaXN0ZXJlZCAoZGVmYXVsdCkKWyAgICAwLjYyNjc2M10gcGNp
X2hvdHBsdWc6IFBDSSBIb3QgUGx1ZyBQQ0kgQ29yZSB2ZXJzaW9uOiAwLjUKWyAgICAwLjYyNjc3
M10gcGNpZWhwOiBQQ0kgRXhwcmVzcyBIb3QgUGx1ZyBDb250cm9sbGVyIERyaXZlciB2ZXJzaW9u
OiAwLjQKWyAgICAwLjYyNjg0M10gR0hFUzogSEVTVCBpcyBub3QgZW5hYmxlZCEKWyAgICAwLjYy
Njk1NV0gTm9uLXZvbGF0aWxlIG1lbW9yeSBkcml2ZXIgdjEuMwpbICAgIDAuNjI2OTU5XSBMaW51
eCBhZ3BnYXJ0IGludGVyZmFjZSB2MC4xMDMKWyAgICAwLjYyNzM4OF0gWGVuIHZpcnR1YWwgY29u
c29sZSBzdWNjZXNzZnVsbHkgaW5zdGFsbGVkIGFzIHh2YzAKWyAgICAwLjYyNzQ1Ml0gYWhjaSAw
MDAwOjAwOjExLjA6IHZlcnNpb24gMy4wClsgICAgMC42Mjc2NTZdIGFoY2kgMDAwMDowMDoxMS4w
OiBpcnEgNDQgKDI3NikgLi4uIDQ3ICgyNzkpIGZvciBNU0kKWyAgICAwLjYyNzcxN10gYWhjaSAw
MDAwOjAwOjExLjA6IEFIQ0kgMDAwMS4wMzAwIDMyIHNsb3RzIDMgcG9ydHMgNiBHYnBzIDB4NyBp
bXBsIFNBVEEgbW9kZQpbICAgIDAuNjI3NzE5XSBhaGNpIDAwMDA6MDA6MTEuMDogZmxhZ3M6IDY0
Yml0IG5jcSBzbnRmIGlsY2sgbGVkIGNsbyBwbXAgcGlvIHNsdW0gcGFydCBzeHMgClsgICAgMC42
MjgyMjFdIHNjc2kwIDogYWhjaQpbICAgIDAuNjI4MzIyXSBzY3NpMSA6IGFoY2kKWyAgICAwLjYy
ODM2OV0gc2NzaTIgOiBhaGNpClsgICAgMC42Mjg0MDhdIGF0YTE6IFNBVEEgbWF4IFVETUEvMTMz
IGFiYXIgbTIwNDhAMHhmZjc0ZDAwMCBwb3J0IDB4ZmY3NGQxMDAgaXJxIDQ0ClsgICAgMC42Mjg0
MTBdIGF0YTI6IFNBVEEgbWF4IFVETUEvMTMzIGFiYXIgbTIwNDhAMHhmZjc0ZDAwMCBwb3J0IDB4
ZmY3NGQxODAgaXJxIDQ1ClsgICAgMC42Mjg0MTJdIGF0YTM6IFNBVEEgbWF4IFVETUEvMTMzIGFi
YXIgbTIwNDhAMHhmZjc0ZDAwMCBwb3J0IDB4ZmY3NGQyMDAgaXJxIDQ2ClsgICAgMC42Mjg0NjVd
IGk4MDQyOiBQTlA6IE5vIFBTLzIgY29udHJvbGxlciBmb3VuZC4gUHJvYmluZyBwb3J0cyBkaXJl
Y3RseS4KWyAgICAwLjYzMTE1N10gc2VyaW86IGk4MDQyIEtCRCBwb3J0IGF0IDB4NjAsMHg2NCBp
cnEgMQpbICAgIDAuNjMxMTY1XSBzZXJpbzogaTgwNDIgQVVYIHBvcnQgYXQgMHg2MCwweDY0IGly
cSAxMgpbICAgIDAuNjMxMjg2XSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZv
ciBhbGwgbWljZQpbICAgIDAuNjMxMzgwXSBydGNfY21vcyAwMDowNzogUlRDIGNhbiB3YWtlIGZy
b20gUzQKWyAgICAwLjYzMTUyMF0gcnRjX2Ntb3MgMDA6MDc6IHJ0YyBjb3JlOiByZWdpc3RlcmVk
IHJ0Y19jbW9zIGFzIHJ0YzAKWyAgICAwLjYzMTU1Ml0gcnRjX2Ntb3MgMDA6MDc6IGFsYXJtcyB1
cCB0byBvbmUgbW9udGgsIHkzaywgMTE0IGJ5dGVzIG52cmFtClsgICAgMC42MzE1NThdIGxlZHRy
aWctY3B1OiByZWdpc3RlcmVkIHRvIGluZGljYXRlIGFjdGl2aXR5IG9uIENQVXMKWyAgICAwLjYz
MTU3MV0gaGlkcmF3OiByYXcgSElEIGV2ZW50cyBkcml2ZXIgKEMpIEppcmkgS29zaW5hClsgICAg
MC42MzE2OTBdIFRDUDogY3ViaWMgcmVnaXN0ZXJlZApbICAgIDAuNjMxNzY1XSBORVQ6IFJlZ2lz
dGVyZWQgcHJvdG9jb2wgZmFtaWx5IDEwClsgICAgMC42MzE4OThdIEtleSB0eXBlIGRuc19yZXNv
bHZlciByZWdpc3RlcmVkClsgICAgMC42MzE5NzldIE1DRTogYmluZCB2aXJxIGZvciBET00wIGxv
Z2dpbmcKWyAgICAwLjYzMTk5OF0gTUNFX0RPTTBfTE9HOiBlbnRlciBkb20wIG1jZSB2SVJRIGhh
bmRsZXIKWyAgICAwLjYzMjAwMF0gTUNFX0RPTTBfTE9HOiBObyBtb3JlIHVyZ2VudCBkYXRhClsg
ICAgMC42MzIwMDFdIE1DRV9ET00wX0xPRzogTm8gbW9yZSBub251cmdlbnQgZGF0YQpbICAgIDAu
NjMyMTI3XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9uIDEKWyAgICAwLjYzMjY3NV0gICBN
YWdpYyBudW1iZXI6IDI6NDYwOjUwMQpbICAgIDAuNjMyNzQxXSBydGNfY21vcyAwMDowNzogc2V0
dGluZyBzeXN0ZW0gY2xvY2sgdG8gMjAxNC0wMi0wMyAyMjoyOTo1MiBVVEMgKDEzOTE0NjY1OTIp
ClsgICAgMS4xMjAxMDFdIGF0YTI6IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMg
U0NvbnRyb2wgMzAwKQpbICAgIDEuMTIwMTI0XSBhdGEzOiBTQVRBIGxpbmsgdXAgMy4wIEdicHMg
KFNTdGF0dXMgMTIzIFNDb250cm9sIDMwMCkKWyAgICAxLjEyMDE0M10gYXRhMTogU0FUQSBsaW5r
IHVwIDMuMCBHYnBzIChTU3RhdHVzIDEyMyBTQ29udHJvbCAzMDApClsgICAgMS4xMjEzNzFdIGF0
YTIuMDA6IEFUQS04OiBIaXRhY2hpIEhEUzVDMzAyMEFMQTYzMiwgTUw2T0E1ODAsIG1heCBVRE1B
LzEzMwpbICAgIDEuMTIxMzc0XSBhdGEyLjAwOiAzOTA3MDI5MTY4IHNlY3RvcnMsIG11bHRpIDE2
OiBMQkE0OCBOQ1EgKGRlcHRoIDMxLzMyKSwgQUEKWyAgICAxLjEyMTczNl0gYXRhMS4wMDogQVRB
LTg6IFNUMzUwMDMyMEFTLCBTRDFBLCBtYXggVURNQS8xMzMKWyAgICAxLjEyMTczOV0gYXRhMS4w
MDogOTc2NzczMTY4IHNlY3RvcnMsIG11bHRpIDE2OiBMQkE0OCBOQ1EgKGRlcHRoIDMxLzMyKQpb
ICAgIDEuMTIyNjczXSBhdGEyLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMwpbICAgIDEuMTIz
NzQwXSBhdGExLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMwpbICAgIDEuMTIzODQ5XSBzY3Np
IDA6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIFNUMzUwMDMyMEFTICAgICAgU0Qx
QSBQUTogMCBBTlNJOiA1ClsgICAgMS4xMjQwMjddIHNkIDA6MDowOjA6IFtzZGFdIDk3Njc3MzE2
OCA1MTItYnl0ZSBsb2dpY2FsIGJsb2NrczogKDUwMCBHQi80NjUgR2lCKQpbICAgIDEuMTI0MDgx
XSBzZCAwOjA6MDowOiBbc2RhXSBXcml0ZSBQcm90ZWN0IGlzIG9mZgpbICAgIDEuMTI0MDg0XSBz
ZCAwOjA6MDowOiBbc2RhXSBNb2RlIFNlbnNlOiAwMCAzYSAwMCAwMApbICAgIDEuMTI0MTE2XSBz
Y3NpIDE6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIEhpdGFjaGkgSERTNUMzMDIg
TUw2TyBQUTogMCBBTlNJOiA1ClsgICAgMS4xMjQxMTddIHNkIDA6MDowOjA6IFtzZGFdIFdyaXRl
IGNhY2hlOiBlbmFibGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBP
IG9yIEZVQQpbICAgIDEuMTI0MjY2XSBzZCAxOjA6MDowOiBbc2RiXSAzOTA3MDI5MTY4IDUxMi1i
eXRlIGxvZ2ljYWwgYmxvY2tzOiAoMi4wMCBUQi8xLjgxIFRpQikKWyAgICAxLjEyNDMwM10gc2Qg
MTowOjA6MDogW3NkYl0gV3JpdGUgUHJvdGVjdCBpcyBvZmYKWyAgICAxLjEyNDMxMF0gc2QgMTow
OjA6MDogW3NkYl0gTW9kZSBTZW5zZTogMDAgM2EgMDAgMDAKWyAgICAxLjEyNDMyOF0gc2QgMTow
OjA6MDogW3NkYl0gV3JpdGUgY2FjaGU6IGVuYWJsZWQsIHJlYWQgY2FjaGU6IGVuYWJsZWQsIGRv
ZXNuJ3Qgc3VwcG9ydCBEUE8gb3IgRlVBClsgICAgMS4xMjU3OTNdICBzZGI6IHNkYjEKWyAgICAx
LjEyNjAwNV0gc2QgMTowOjA6MDogW3NkYl0gQXR0YWNoZWQgU0NTSSBkaXNrClsgICAgMS4xNjcy
MzBdIGF0YTMuMDA6IEhQQSBkZXRlY3RlZDogY3VycmVudCA2MjUxNDAzMzUsIG5hdGl2ZSA2MjUx
NDI0NDgKWyAgICAxLjE2NzIzNV0gYXRhMy4wMDogQVRBLTc6IFNUMzMyMDYyMEFTLCAzLkFBSywg
bWF4IFVETUEvMTMzClsgICAgMS4xNjcyMzZdIGF0YTMuMDA6IDYyNTE0MDMzNSBzZWN0b3JzLCBt
dWx0aSAxNjogTEJBNDggTkNRIChkZXB0aCAzMS8zMikKWyAgICAxLjE2OTE4OV0gIHNkYTogc2Rh
MSBzZGE0IDwgc2RhNSBzZGE2ID4KWyAgICAxLjE2OTQzOF0gc2QgMDowOjA6MDogW3NkYV0gQXR0
YWNoZWQgU0NTSSBkaXNrClsgICAgMS4yMjU1MzddIGF0YTMuMDA6IGNvbmZpZ3VyZWQgZm9yIFVE
TUEvMTMzClsgICAgMS4yMjU2MjJdIHNjc2kgMjowOjA6MDogRGlyZWN0LUFjY2VzcyAgICAgQVRB
ICAgICAgU1QzMzIwNjIwQVMgICAgICAzLkFBIFBROiAwIEFOU0k6IDUKWyAgICAxLjIyNTczNV0g
c2QgMjowOjA6MDogW3NkY10gNjI1MTQwMzM1IDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tzOiAoMzIw
IEdCLzI5OCBHaUIpClsgICAgMS4yMjU3NzldIHNkIDI6MDowOjA6IFtzZGNdIFdyaXRlIFByb3Rl
Y3QgaXMgb2ZmClsgICAgMS4yMjU3ODFdIHNkIDI6MDowOjA6IFtzZGNdIE1vZGUgU2Vuc2U6IDAw
IDNhIDAwIDAwClsgICAgMS4yMjU3OTRdIHNkIDI6MDowOjA6IFtzZGNdIFdyaXRlIGNhY2hlOiBl
bmFibGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQpb
ICAgIDEuMzM3NTc4XSAgc2RjOiBzZGMxIHNkYzIgPCBzZGM1IHNkYzYgc2RjNyBzZGM4IHNkYzkg
PiBzZGMzIHNkYzQKWyAgICAxLjMzODAzN10gc2QgMjowOjA6MDogW3NkY10gQXR0YWNoZWQgU0NT
SSBkaXNrClsgICAgMS4zMzgzMDhdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBtZW1vcnk6IDQ5Mksg
KGZmZmY4ODAwMDA5ODEwMDAgLSBmZmZmODgwMDAwOWZjMDAwKQpbICAgIDEuMzM4MzEwXSBXcml0
ZSBwcm90ZWN0aW5nIHRoZSBrZXJuZWwgcmVhZC1vbmx5IGRhdGE6IDkxNjRrClsgICAgMS4zODY2
ODhdIHBjaWJhY2sgMDAwMDowMDowMS4wOiBzZWl6aW5nIGRldmljZQpbICAgIDEuMzg2Njk5XSBw
Y2liYWNrIDAwMDA6MDA6MDEuMTogc2VpemluZyBkZXZpY2UKWyAgICAxLjQxNjI1OF0gcGNpYmFj
ayAwMDAwOjAwOjAxLjA6IGVuYWJsaW5nIGRldmljZSAoMDAwNiAtPiAwMDA3KQpbICAgIDEuNDQ4
MzE2XSBwY2liYWNrOiBiYWNrZW5kIGlzIHZwY2kKWyAgICAxLjQ1MDY5N10gSW5pdGlhbGlzaW5n
IHZpcnR1YWwgZXRoZXJuZXQgZHJpdmVyLgpbICAgIDEuNDU4Njc5XSBlbWM6IGRldmljZSBoYW5k
bGVyIHJlZ2lzdGVyZWQKWyAgICAxLjQ2MDc1NV0gcmRhYzogZGV2aWNlIGhhbmRsZXIgcmVnaXN0
ZXJlZApbICAgIDEuNDYyNjAzXSBocF9zdzogZGV2aWNlIGhhbmRsZXIgcmVnaXN0ZXJlZApbICAg
IDEuNDY0NjQ3XSBhbHVhOiBkZXZpY2UgaGFuZGxlciByZWdpc3RlcmVkClsgICAgMS40NzA4NjZd
IHN5c3RlbWQtdWRldmRbMTIzXTogc3RhcnRpbmcgdmVyc2lvbiAyMDgKWyAgICAxLjQ4NDQ0OV0g
W2RybV0gSW5pdGlhbGl6ZWQgZHJtIDEuMS4wIDIwMDYwODEwClsgICAgMS40OTIwMzBdIEFDUEk6
IGJ1cyB0eXBlIFVTQiByZWdpc3RlcmVkClsgICAgMS40OTIwNzddIHVzYmNvcmU6IHJlZ2lzdGVy
ZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKWyAgICAxLjQ5MjA4N10gdXNiY29yZTogcmVn
aXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKWyAgICAxLjQ5Mzk3Ml0gdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IKWyAgICAxLjQ5NDY3MV0geGhjaV9oY2Qg
MDAwMDowMDoxMC4wOiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDEuNDk0Njc4XSB4aGNpX2hj
ZCAwMDAwOjAwOjEwLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1i
ZXIgMQpbICAgIDEuNDk0NzE1XSBRVUlSSzogRW5hYmxlIEFNRCBQTEwgZml4ClsgICAgMS40OTUw
MDldIHhoY2lfaGNkIDAwMDA6MDA6MTAuMDogaXJxIDQ5ICgyNzUpIGZvciBNU0kvTVNJLVgKWyAg
ICAxLjQ5NTA1NF0geGhjaV9oY2QgMDAwMDowMDoxMC4wOiBpcnEgNTAgKDI3NCkgZm9yIE1TSS9N
U0ktWApbICAgIDEuNDk1MDk5XSB4aGNpX2hjZCAwMDAwOjAwOjEwLjA6IGlycSA1MSAoMjczKSBm
b3IgTVNJL01TSS1YClsgICAgMS40OTUxNDJdIHhoY2lfaGNkIDAwMDA6MDA6MTAuMDogaXJxIDUy
ICgyNzIpIGZvciBNU0kvTVNJLVgKWyAgICAxLjQ5NTE4Nl0geGhjaV9oY2QgMDAwMDowMDoxMC4w
OiBpcnEgNTMgKDI3MSkgZm9yIE1TSS9NU0ktWApbICAgIDEuNDk1MzEwXSB1c2IgdXNiMTogTmV3
IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyClsgICAgMS40
OTUzMTJdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0y
LCBTZXJpYWxOdW1iZXI9MQpbICAgIDEuNDk1MzE0XSB1c2IgdXNiMTogUHJvZHVjdDogeEhDSSBI
b3N0IENvbnRyb2xsZXIKWyAgICAxLjQ5NTMxNl0gdXNiIHVzYjE6IE1hbnVmYWN0dXJlcjogTGlu
dXggMy4xMS42LTQteGVuIHhoY2lfaGNkClsgICAgMS40OTUzMThdIHVzYiB1c2IxOiBTZXJpYWxO
dW1iZXI6IDAwMDA6MDA6MTAuMApbICAgIDEuNDk1NDIzXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50
IGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgICAxLjQ5NTQyN10geEhDSSB4aGNpX2NoZWNrX2JhbmR3
aWR0aCBjYWxsZWQgZm9yIHJvb3QgaHViClsgICAgMS40OTU0NzJdIGh1YiAxLTA6MS4wOiBVU0Ig
aHViIGZvdW5kClsgICAgMS40OTU0ODFdIGh1YiAxLTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkClsg
ICAgMS40OTU1ODJdIHhoY2lfaGNkIDAwMDA6MDA6MTAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIK
WyAgICAxLjQ5NTU4Nl0geGhjaV9oY2QgMDAwMDowMDoxMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3Rl
cmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDIKWyAgICAxLjQ5NTk0M10gZWhjaV9oY2Q6IFVTQiAy
LjAgJ0VuaGFuY2VkJyBIb3N0IENvbnRyb2xsZXIgKEVIQ0kpIERyaXZlcgpbICAgIDEuNDk2MTU1
XSBlaGNpLXBjaTogRUhDSSBQQ0kgcGxhdGZvcm0gZHJpdmVyClsgICAgMS40OTgzMTJdIHVzYiB1
c2IyOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDMK
WyAgICAxLjQ5ODMxNl0gdXNiIHVzYjI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQ
cm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMS40OTgzMThdIHVzYiB1c2IyOiBQcm9kdWN0
OiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDEuNDk4MzIwXSB1c2IgdXNiMjogTWFudWZhY3R1
cmVyOiBMaW51eCAzLjExLjYtNC14ZW4geGhjaV9oY2QKWyAgICAxLjQ5ODMyMV0gdXNiIHVzYjI6
IFNlcmlhbE51bWJlcjogMDAwMDowMDoxMC4wClsgICAgMS40OTg0MzldIHhIQ0kgeGhjaV9hZGRf
ZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAgIDEuNDk4NDQxXSB4SENJIHhoY2lfY2hl
Y2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgICAxLjQ5ODQ2NF0gaHViIDItMDox
LjA6IFVTQiBodWIgZm91bmQKWyAgICAxLjQ5ODQ3MF0gaHViIDItMDoxLjA6IDIgcG9ydHMgZGV0
ZWN0ZWQKWyAgICAxLjUwNDYwMl0gb2hjaV9oY2Q6IFVTQiAxLjEgJ09wZW4nIEhvc3QgQ29udHJv
bGxlciAoT0hDSSkgRHJpdmVyClsgICAgMS41MDU0NDZdIG9oY2ktcGNpOiBPSENJIFBDSSBwbGF0
Zm9ybSBkcml2ZXIKWyAgICAxLjUxMTA1Nl0gW2RybV0gcmFkZW9uIGtlcm5lbCBtb2Rlc2V0dGlu
ZyBlbmFibGVkLgpbICAgIDEuNTI4MzU4XSB4aGNpX2hjZCAwMDAwOjAwOjEwLjE6IHhIQ0kgSG9z
dCBDb250cm9sbGVyClsgICAgMS41MjgzNjddIHhoY2lfaGNkIDAwMDA6MDA6MTAuMTogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgICAgMS41Mjg2NTZdIHho
Y2lfaGNkIDAwMDA6MDA6MTAuMTogaXJxIDU0ICgyNzApIGZvciBNU0kvTVNJLVgKWyAgICAxLjUy
ODcwMl0geGhjaV9oY2QgMDAwMDowMDoxMC4xOiBpcnEgNTUgKDI2OSkgZm9yIE1TSS9NU0ktWApb
ICAgIDEuNTI4NzQ3XSB4aGNpX2hjZCAwMDAwOjAwOjEwLjE6IGlycSA1NiAoMjY4KSBmb3IgTVNJ
L01TSS1YClsgICAgMS41Mjg3OTJdIHhoY2lfaGNkIDAwMDA6MDA6MTAuMTogaXJxIDU3ICgyNjcp
IGZvciBNU0kvTVNJLVgKWyAgICAxLjUyODgzN10geGhjaV9oY2QgMDAwMDowMDoxMC4xOiBpcnEg
NTggKDI2NikgZm9yIE1TSS9NU0ktWApbICAgIDEuNTI4OTY0XSB1c2IgdXNiMzogTmV3IFVTQiBk
ZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyClsgICAgMS41Mjg5Njdd
IHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJp
YWxOdW1iZXI9MQpbICAgIDEuNTI4OTY5XSB1c2IgdXNiMzogUHJvZHVjdDogeEhDSSBIb3N0IENv
bnRyb2xsZXIKWyAgICAxLjUyODk3MF0gdXNiIHVzYjM6IE1hbnVmYWN0dXJlcjogTGludXggMy4x
MS42LTQteGVuIHhoY2lfaGNkClsgICAgMS41Mjg5NzJdIHVzYiB1c2IzOiBTZXJpYWxOdW1iZXI6
IDAwMDA6MDA6MTAuMQpbICAgIDEuNTI5MDQzXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNhbGxl
ZCBmb3Igcm9vdCBodWIKWyAgICAxLjUyOTA0NV0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0aCBj
YWxsZWQgZm9yIHJvb3QgaHViClsgICAgMS41MjkwNjZdIGh1YiAzLTA6MS4wOiBVU0IgaHViIGZv
dW5kClsgICAgMS41MjkwNzNdIGh1YiAzLTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkClsgICAgMS41
MjkxNDddIHhoY2lfaGNkIDAwMDA6MDA6MTAuMTogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAx
LjUyOTE1MV0geGhjaV9oY2QgMDAwMDowMDoxMC4xOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBh
c3NpZ25lZCBidXMgbnVtYmVyIDQKWyAgICAxLjUzMTk3MF0gdXNiIHVzYjQ6IE5ldyBVU0IgZGV2
aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMwpbICAgIDEuNTMxOTcyXSB1
c2IgdXNiNDogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTEKWyAgICAxLjUzMTk3NF0gdXNiIHVzYjQ6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250
cm9sbGVyClsgICAgMS41MzE5NzZdIHVzYiB1c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTEu
Ni00LXhlbiB4aGNpX2hjZApbICAgIDEuNTMxOTc3XSB1c2IgdXNiNDogU2VyaWFsTnVtYmVyOiAw
MDAwOjAwOjEwLjEKWyAgICAxLjUzMjAyN10geEhDSSB4aGNpX2FkZF9lbmRwb2ludCBjYWxsZWQg
Zm9yIHJvb3QgaHViClsgICAgMS41MzIwMjldIHhIQ0kgeGhjaV9jaGVja19iYW5kd2lkdGggY2Fs
bGVkIGZvciByb290IGh1YgpbICAgIDEuNTMyMDQ5XSBodWIgNC0wOjEuMDogVVNCIGh1YiBmb3Vu
ZApbICAgIDEuNTMyMDc0XSBodWIgNC0wOjEuMDogMiBwb3J0cyBkZXRlY3RlZApbICAgIDEuNTUy
NDA0XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMS41
NTI0MTRdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciA1ClsgICAgMS41NTI0MjBdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjog
YXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJv
dW5kClsgICAgMS41NTI0MzVdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogZGVidWcgcG9ydCAxClsg
ICAgMS41NTI1MDldIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogaXJxIDE3LCBpbyBtZW0gMHhmZjc0
YjAwMApbICAgIDEuNTY0MDk2XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IFVTQiAyLjAgc3RhcnRl
ZCwgRUhDSSAxLjAwClsgICAgMS41NjQxMjNdIHVzYiB1c2I1OiBOZXcgVVNCIGRldmljZSBmb3Vu
ZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIKWyAgICAxLjU2NDEyNl0gdXNiIHVzYjU6
IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0x
ClsgICAgMS41NjQxMjldIHVzYiB1c2I1OiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcgpb
ICAgIDEuNTY0MTMxXSB1c2IgdXNiNTogTWFudWZhY3R1cmVyOiBMaW51eCAzLjExLjYtNC14ZW4g
ZWhjaV9oY2QKWyAgICAxLjU2NDEzMl0gdXNiIHVzYjU6IFNlcmlhbE51bWJlcjogMDAwMDowMDox
Mi4yClsgICAgMS41NjQyODhdIGh1YiA1LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS41NjQy
OTJdIGh1YiA1LTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAgMS41NjQzOTZdIHhoY2lfaGNk
IDAwMDA6MDQ6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAxLjU2NDQwOV0geGhjaV9o
Y2QgMDAwMDowNDowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVt
YmVyIDYKWyAgICAxLjU2NDYyOV0geGhjaV9oY2QgMDAwMDowNDowMC4wOiBpcnEgNTkgKDI2NSkg
Zm9yIE1TSS9NU0ktWApbICAgIDEuNTY0NzA2XSB1c2IgdXNiNjogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyClsgICAgMS41NjQ3MDhdIHVzYiB1c2I2
OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9
MQpbICAgIDEuNTY0NzEwXSB1c2IgdXNiNjogUHJvZHVjdDogeEhDSSBIb3N0IENvbnRyb2xsZXIK
WyAgICAxLjU2NDcxMV0gdXNiIHVzYjY6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMS42LTQteGVu
IHhoY2lfaGNkClsgICAgMS41NjQ3MTJdIHVzYiB1c2I2OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDQ6
MDAuMApbICAgIDEuNTY0Nzk0XSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNhbGxlZCBmb3Igcm9v
dCBodWIKWyAgICAxLjU2NDc5Nl0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0aCBjYWxsZWQgZm9y
IHJvb3QgaHViClsgICAgMS41NjQ4MjBdIGh1YiA2LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAg
MS41NjQ4MzBdIGh1YiA2LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgICAgMS41NjQ5MzZdIHho
Y2lfaGNkIDAwMDA6MDQ6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAxLjU2NDkzOV0g
eGhjaV9oY2QgMDAwMDowNDowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBi
dXMgbnVtYmVyIDcKWyAgICAxLjU2NDk1N10gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIGZvdW5k
LCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMwpbICAgIDEuNTY0OTU5XSB1c2IgdXNiNzog
TmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEK
WyAgICAxLjU2NDk2MF0gdXNiIHVzYjc6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsg
ICAgMS41NjQ5NjFdIHVzYiB1c2I3OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTEuNi00LXhlbiB4
aGNpX2hjZApbICAgIDEuNTY0OTYzXSB1c2IgdXNiNzogU2VyaWFsTnVtYmVyOiAwMDAwOjA0OjAw
LjAKWyAgICAxLjU2NDk4Nl0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBFSENJIEhvc3QgQ29udHJv
bGxlcgpbICAgIDEuNTY1MDEwXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNhbGxlZCBmb3Igcm9v
dCBodWIKWyAgICAxLjU2NTAxMl0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0aCBjYWxsZWQgZm9y
IHJvb3QgaHViClsgICAgMS41NjUwMzJdIGh1YiA3LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAg
MS41NjUwNDFdIGh1YiA3LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgICAgMS41NjUxMThdIGVo
Y2ktcGNpIDAwMDA6MDA6MTMuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVz
IG51bWJlciA4ClsgICAgMS41NjUxMjVdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogYXBwbHlpbmcg
QU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJvdW5kClsgICAg
MS41NjUxNDFdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogZGVidWcgcG9ydCAxClsgICAgMS41NjUx
OTJdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogaXJxIDE3LCBpbyBtZW0gMHhmZjc0OTAwMApbICAg
IDEuNTc2MTAyXSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAx
LjAwClsgICAgMS41NzYxMzFdIHVzYiB1c2I4OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5k
b3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIKWyAgICAxLjU3NjEzM10gdXNiIHVzYjg6IE5ldyBVU0Ig
ZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMS41
NzYxMzVdIHVzYiB1c2I4OiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDEuNTc2
MTM2XSB1c2IgdXNiODogTWFudWZhY3R1cmVyOiBMaW51eCAzLjExLjYtNC14ZW4gZWhjaV9oY2QK
WyAgICAxLjU3NjEzOF0gdXNiIHVzYjg6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxMy4yClsgICAg
MS41NzYzMzddIGh1YiA4LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS41NzYzNDFdIGh1YiA4
LTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAgMS41NzY2MzJdIG9oY2ktcGNpIDAwMDA6MDA6
MTIuMDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsgICAgMS41NzY2NDBdIG9oY2ktcGNpIDAw
MDA6MDA6MTIuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciA5
ClsgICAgMS41NzY2OThdIG9oY2ktcGNpIDAwMDA6MDA6MTIuMDogaXJxIDE4LCBpbyBtZW0gMHhm
Zjc0YzAwMApbICAgIDEuNjM2MTUwXSB1c2IgdXNiOTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlk
VmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxClsgICAgMS42MzYxNTRdIHVzYiB1c2I5OiBOZXcg
VVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAg
IDEuNjM2MTU2XSB1c2IgdXNiOTogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsg
ICAgMS42MzYxNTddIHVzYiB1c2I5OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTEuNi00LXhlbiBv
aGNpX2hjZApbICAgIDEuNjM2MTU5XSB1c2IgdXNiOTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEy
LjAKWyAgICAxLjYzNjI3M10gaHViIDktMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgICAxLjYzNjI3
OV0gaHViIDktMDoxLjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgICAxLjYzNjUwOV0gb2hjaS1wY2kg
MDAwMDowMDoxMy4wOiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgICAxLjYzNjUxNF0gb2hj
aS1wY2kgMDAwMDowMDoxMy4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMg
bnVtYmVyIDEwClsgICAgMS42MzY1MzddIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogaXJxIDE4LCBp
byBtZW0gMHhmZjc0YTAwMApbICAgIDEuNjk2MTUxXSB1c2IgdXNiMTA6IE5ldyBVU0IgZGV2aWNl
IGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMQpbICAgIDEuNjk2MTU1XSB1c2Ig
dXNiMTA6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51
bWJlcj0xClsgICAgMS42OTYxNTddIHVzYiB1c2IxMDogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBj
b250cm9sbGVyClsgICAgMS42OTYxNThdIHVzYiB1c2IxMDogTWFudWZhY3R1cmVyOiBMaW51eCAz
LjExLjYtNC14ZW4gb2hjaV9oY2QKWyAgICAxLjY5NjE2MF0gdXNiIHVzYjEwOiBTZXJpYWxOdW1i
ZXI6IDAwMDA6MDA6MTMuMApbICAgIDEuNjk2Mjc1XSBodWIgMTAtMDoxLjA6IFVTQiBodWIgZm91
bmQKWyAgICAxLjY5NjI4MV0gaHViIDEwLTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAgMS42
OTY1MTNdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsg
ICAgMS42OTY1MThdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogbmV3IFVTQiBidXMgcmVnaXN0ZXJl
ZCwgYXNzaWduZWQgYnVzIG51bWJlciAxMQpbICAgIDEuNjk2NTQwXSBvaGNpLXBjaSAwMDAwOjAw
OjE0LjU6IGlycSAxOCwgaW8gbWVtIDB4ZmY3NDgwMDAKWyAgICAxLjc1NjE0OF0gdXNiIHVzYjEx
OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEKWyAg
ICAxLjc1NjE1Ml0gdXNiIHVzYjExOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJv
ZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgIDEuNzU2MTU0XSB1c2IgdXNiMTE6IFByb2R1Y3Q6
IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDEuNzU2MTU1XSB1c2IgdXNiMTE6IE1hbnVm
YWN0dXJlcjogTGludXggMy4xMS42LTQteGVuIG9oY2lfaGNkClsgICAgMS43NTYxNTZdIHVzYiB1
c2IxMTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjE0LjUKWyAgICAxLjc1NjI4M10gaHViIDExLTA6
MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS43NTYyODldIGh1YiAxMS0wOjEuMDogMiBwb3J0cyBk
ZXRlY3RlZApbICAgIDEuNzU2ODQzXSBbZHJtXSBpbml0aWFsaXppbmcga2VybmVsIG1vZGVzZXR0
aW5nIChQSVRDQUlSTiAweDEwMDI6MHg2ODE5IDB4MTQ1ODoweDI1NTMpLgpbICAgIDEuNzU2OTg0
XSBbZHJtXSByZWdpc3RlciBtbWlvIGJhc2U6IDB4RkY2MDAwMDAKWyAgICAxLjc1Njk4Nl0gW2Ry
bV0gcmVnaXN0ZXIgbW1pbyBzaXplOiAyNjIxNDQKWyAgICAxLjc1NzIxN10gQVRPTSBCSU9TOiBH
VgpbICAgIDEuNzU3Mjg0XSByYWRlb24gMDAwMDowMTowMC4wOiBWUkFNOiAyMDQ4TSAweDAwMDAw
MDAwMDAwMDAwMDAgLSAweDAwMDAwMDAwN0ZGRkZGRkYgKDIwNDhNIHVzZWQpClsgICAgMS43NTcy
ODZdIHJhZGVvbiAwMDAwOjAxOjAwLjA6IEdUVDogNTEyTSAweDAwMDAwMDAwODAwMDAwMDAgLSAw
eDAwMDAwMDAwOUZGRkZGRkYKWyAgICAxLjc1NzI4OF0gW2RybV0gRGV0ZWN0ZWQgVlJBTSBSQU09
MjA0OE0sIEJBUj0yNTZNClsgICAgMS43NTcyODldIFtkcm1dIFJBTSB3aWR0aCAyNTZiaXRzIERE
UgpbICAgIDEuNzU3MzUyXSBbVFRNXSBab25lICBrZXJuZWw6IEF2YWlsYWJsZSBncmFwaGljcyBt
ZW1vcnk6IDM1NDIzMTIga2lCClsgICAgMS43NTczNTRdIFtUVE1dIFpvbmUgICBkbWEzMjogQXZh
aWxhYmxlIGdyYXBoaWNzIG1lbW9yeTogMjA5NzE1MiBraUIKWyAgICAxLjc1NzM1NV0gW1RUTV0g
SW5pdGlhbGl6aW5nIHBvb2wgYWxsb2NhdG9yClsgICAgMS43NTczNTldIFtUVE1dIEluaXRpYWxp
emluZyBETUEgcG9vbCBhbGxvY2F0b3IKWyAgICAxLjc1NzM5Ml0gW2RybV0gcmFkZW9uOiAyMDQ4
TSBvZiBWUkFNIG1lbW9yeSByZWFkeQpbICAgIDEuNzU3Mzk0XSBbZHJtXSByYWRlb246IDUxMk0g
b2YgR1RUIG1lbW9yeSByZWFkeS4KWyAgICAxLjc1ODUwNV0gW2RybV0gR0FSVDogbnVtIGNwdSBw
YWdlcyAxMzEwNzIsIG51bSBncHUgcGFnZXMgMTMxMDcyClsgICAgMS43NTkxNzFdIFtkcm1dIHBy
b2JpbmcgZ2VuIDIgY2FwcyBmb3IgZGV2aWNlIDEwMjI6MTQxMiA9IDcwMGQwMi82ClsgICAgMS43
NTkxNzZdIFtkcm1dIFBDSUUgZ2VuIDIgbGluayBzcGVlZHMgYWxyZWFkeSBlbmFibGVkClsgICAg
MS43NjYxMzddIFtkcm1dIExvYWRpbmcgUElUQ0FJUk4gTWljcm9jb2RlClsgICAgMS45ODgwOTNd
IHVzYiA2LTQ6IG5ldyBmdWxsLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgeGhjaV9o
Y2QKWyAgICAyLjA2MjEwNl0gdXNiIDYtNDogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9y
PTA0NWUsIGlkUHJvZHVjdD0wMjkxClsgICAgMi4wNjIxMDldIHVzYiA2LTQ6IE5ldyBVU0IgZGV2
aWNlIHN0cmluZ3M6IE1mcj0wLCBQcm9kdWN0PTAsIFNlcmlhbE51bWJlcj0wClsgICAgMi4yMDQx
NzhdIFtkcm1dIFBDSUUgR0FSVCBvZiA1MTJNIGVuYWJsZWQgKHRhYmxlIGF0IDB4MDAwMDAwMDAw
MDI3NjAwMCkuClsgICAgMi4yMDQzMzRdIHJhZGVvbiAwMDAwOjAxOjAwLjA6IFdCIGVuYWJsZWQK
WyAgICAyLjIwNDMzN10gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcg
MCB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAwYzAwIGFuZCBjcHUgYWRkciAweGZmZmY4ODAx
OTk4NWNjMDAKWyAgICAyLjIwNDMzOV0gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVy
IG9uIHJpbmcgMSB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAwYzA0IGFuZCBjcHUgYWRkciAw
eGZmZmY4ODAxOTk4NWNjMDQKWyAgICAyLjIwNDM0MF0gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVu
Y2UgZHJpdmVyIG9uIHJpbmcgMiB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAwYzA4IGFuZCBj
cHUgYWRkciAweGZmZmY4ODAxOTk4NWNjMDgKWyAgICAyLjIwNDM0Ml0gcmFkZW9uIDAwMDA6MDE6
MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgMyB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAw
YzBjIGFuZCBjcHUgYWRkciAweGZmZmY4ODAxOTk4NWNjMGMKWyAgICAyLjIwNDM0NF0gcmFkZW9u
IDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNCB1c2UgZ3B1IGFkZHIgMHgwMDAw
MDAwMDgwMDAwYzEwIGFuZCBjcHUgYWRkciAweGZmZmY4ODAxOTk4NWNjMTAKWyAgICAyLjIwNTMx
MV0gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNSB1c2UgZ3B1IGFk
ZHIgMHgwMDAwMDAwMDAwMDc1YTE4IGFuZCBjcHUgYWRkciAweGZmZmZjOTAwMTAyMzVhMTgKWyAg
ICAyLjIwNTMxM10gW2RybV0gU3VwcG9ydHMgdmJsYW5rIHRpbWVzdGFtcCBjYWNoaW5nIFJldiAx
ICgxMC4xMC4yMDEwKS4KWyAgICAyLjIwNTMxNF0gW2RybV0gRHJpdmVyIHN1cHBvcnRzIHByZWNp
c2UgdmJsYW5rIHRpbWVzdGFtcCBxdWVyeS4KWyAgICAyLjIwNTM2NF0gcmFkZW9uIDAwMDA6MDE6
MDAuMDogaXJxIDYwICgyNjQpIGZvciBNU0kvTVNJLVgKWyAgICAyLjIwNTM4MF0gcmFkZW9uIDAw
MDA6MDE6MDAuMDogcmFkZW9uOiB1c2luZyBNU0kuClsgICAgMi4yMDU0MTZdIFtkcm1dIHJhZGVv
bjogaXJxIGluaXRpYWxpemVkLgpbICAgIDIuMjI3MDk4XSBbZHJtXSByaW5nIHRlc3Qgb24gMCBz
dWNjZWVkZWQgaW4gMyB1c2VjcwpbICAgIDIuMjI3MTA1XSBbZHJtXSByaW5nIHRlc3Qgb24gMSBz
dWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuMjI3MTEwXSBbZHJtXSByaW5nIHRlc3Qgb24gMiBz
dWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuMjI3MTc0XSBbZHJtXSByaW5nIHRlc3Qgb24gMyBz
dWNjZWVkZWQgaW4gMiB1c2VjcwpbICAgIDIuMjI3MTg0XSBbZHJtXSByaW5nIHRlc3Qgb24gNCBz
dWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuNDE1MzkyXSBbZHJtXSByaW5nIHRlc3Qgb24gNSBz
dWNjZWVkZWQgaW4gMiB1c2VjcwpbICAgIDIuNDE1Mzk3XSBbZHJtXSBVVkQgaW5pdGlhbGl6ZWQg
c3VjY2Vzc2Z1bGx5LgpbICAgIDIuNDIzMTg5XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcgMCBzdWNj
ZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMjU3XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcgMSBz
dWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMzE5XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcg
MiBzdWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMzUxXSBbZHJtXSBpYiB0ZXN0IG9uIHJp
bmcgMyBzdWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMzgxXSBbZHJtXSBpYiB0ZXN0IG9u
IHJpbmcgNCBzdWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuNDI4MDcxXSB1c2IgOC00OiBuZXcg
aGlnaC1zcGVlZCBVU0IgZGV2aWNlIG51bWJlciAyIHVzaW5nIGVoY2ktcGNpClsgICAgMi41ODAx
MjNdIFtkcm1dIGliIHRlc3Qgb24gcmluZyA1IHN1Y2NlZWRlZApbICAgIDIuNTgwODMxXSBbZHJt
XSBSYWRlb24gRGlzcGxheSBDb25uZWN0b3JzClsgICAgMi41ODA4MzJdIFtkcm1dIENvbm5lY3Rv
ciAwOgpbICAgIDIuNTgwODM0XSBbZHJtXSAgIERQLTEKWyAgICAyLjU4MDgzNV0gW2RybV0gICBI
UEQ0ClsgICAgMi41ODA4MzZdIFtkcm1dICAgRERDOiAweDY1MzAgMHg2NTMwIDB4NjUzNCAweDY1
MzQgMHg2NTM4IDB4NjUzOCAweDY1M2MgMHg2NTNjClsgICAgMi41ODA4MzddIFtkcm1dICAgRW5j
b2RlcnM6ClsgICAgMi41ODA4MzhdIFtkcm1dICAgICBERlAxOiBJTlRFUk5BTF9VTklQSFkyClsg
ICAgMi41ODA4MzldIFtkcm1dIENvbm5lY3RvciAxOgpbICAgIDIuNTgwODM5XSBbZHJtXSAgIERQ
LTIKWyAgICAyLjU4MDg0MF0gW2RybV0gICBIUEQ1ClsgICAgMi41ODA4NDFdIFtkcm1dICAgRERD
OiAweDY1NDAgMHg2NTQwIDB4NjU0NCAweDY1NDQgMHg2NTQ4IDB4NjU0OCAweDY1NGMgMHg2NTRj
ClsgICAgMi41ODA4NDJdIFtkcm1dICAgRW5jb2RlcnM6ClsgICAgMi41ODA4NDNdIFtkcm1dICAg
ICBERlAyOiBJTlRFUk5BTF9VTklQSFkyClsgICAgMi41ODA4NDRdIFtkcm1dIENvbm5lY3RvciAy
OgpbICAgIDIuNTgwODQ0XSBbZHJtXSAgIEhETUktQS0xClsgICAgMi41ODA4NDVdIFtkcm1dICAg
SFBEMQpbICAgIDIuNTgwODQ2XSBbZHJtXSAgIEREQzogMHg2NTUwIDB4NjU1MCAweDY1NTQgMHg2
NTU0IDB4NjU1OCAweDY1NTggMHg2NTVjIDB4NjU1YwpbICAgIDIuNTgwODQ3XSBbZHJtXSAgIEVu
Y29kZXJzOgpbICAgIDIuNTgwODQ4XSBbZHJtXSAgICAgREZQMzogSU5URVJOQUxfVU5JUEhZMQpb
ICAgIDIuNTgwODQ4XSBbZHJtXSBDb25uZWN0b3IgMzoKWyAgICAyLjU4MDg0OV0gW2RybV0gICBE
VkktSS0xClsgICAgMi41ODA4NTBdIFtkcm1dICAgSFBENgpbICAgIDIuNTgwODUxXSBbZHJtXSAg
IEREQzogMHg2NTgwIDB4NjU4MCAweDY1ODQgMHg2NTg0IDB4NjU4OCAweDY1ODggMHg2NThjIDB4
NjU4YwpbICAgIDIuNTgwODUyXSBbZHJtXSAgIEVuY29kZXJzOgpbICAgIDIuNTgwODUyXSBbZHJt
XSAgICAgREZQNDogSU5URVJOQUxfVU5JUEhZClsgICAgMi41ODA4NTNdIFtkcm1dICAgICBDUlQx
OiBJTlRFUk5BTF9LTERTQ1BfREFDMQpbICAgIDIuNTgwOTA5XSBbZHJtXSBJbnRlcm5hbCB0aGVy
bWFsIGNvbnRyb2xsZXIgd2l0aCBmYW4gY29udHJvbApbICAgIDIuNTgwOTQ4XSBbZHJtXSByYWRl
b246IHBvd2VyIG1hbmFnZW1lbnQgaW5pdGlhbGl6ZWQKWyAgICAyLjYzOTIyNF0gW2RybV0gZmIg
bWFwcGFibGUgYXQgMHhDMTM4ODAwMApbICAgIDIuNjM5MjI3XSBbZHJtXSB2cmFtIGFwcGVyIGF0
IDB4QzAwMDAwMDAKWyAgICAyLjYzOTIyOF0gW2RybV0gc2l6ZSA4Mjk0NDAwClsgICAgMi42Mzky
MjldIFtkcm1dIGZiIGRlcHRoIGlzIDI0ClsgICAgMi42MzkyMzBdIFtkcm1dICAgIHBpdGNoIGlz
IDc2ODAKWyAgICAyLjY2MTQyNV0gQ29uc29sZTogc3dpdGNoaW5nIHRvIGNvbG91ciBmcmFtZSBi
dWZmZXIgZGV2aWNlIDI0MHg2NwpbICAgIDIuNjY1MDQ5XSByYWRlb24gMDAwMDowMTowMC4wOiBm
YjA6IHJhZGVvbmRybWZiIGZyYW1lIGJ1ZmZlciBkZXZpY2UKWyAgICAyLjY2NTA1MV0gcmFkZW9u
IDAwMDA6MDE6MDAuMDogcmVnaXN0ZXJlZCBwYW5pYyBub3RpZmllcgpbICAgIDIuNjY1MDU1XSBb
ZHJtXSBJbml0aWFsaXplZCByYWRlb24gMi4zNC4wIDIwMDgwNTI4IGZvciAwMDAwOjAxOjAwLjAg
b24gbWlub3IgMApbICAgIDIuNzA5MzM5XSB1c2IgOC00OiBOZXcgVVNCIGRldmljZSBmb3VuZCwg
aWRWZW5kb3I9MDQ4ZCwgaWRQcm9kdWN0PTEzMzYKWyAgICAyLjcwOTM0M10gdXNiIDgtNDogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTMKWyAg
ICAyLjcwOTM0NV0gdXNiIDgtNDogUHJvZHVjdDogTWFzcyBTdG9yYWdlIERldmljZQpbICAgIDIu
NzA5MzQ2XSB1c2IgOC00OiBNYW51ZmFjdHVyZXI6IEdlbmVyaWMgICAKWyAgICAyLjcwOTM0OF0g
dXNiIDgtNDogU2VyaWFsTnVtYmVyOiAwMDAwMDAwMDAwMDAwNgpbICAgIDIuNzc3MzU4XSB4b3I6
IG1lYXN1cmluZyBzb2Z0d2FyZSBjaGVja3N1bSBzcGVlZApbICAgIDIuODE2MDU3XSAgICA4cmVn
cyAgICAgOiAxNjI1Ni4wMDAgTUIvc2VjClsgICAgMi44NDQwOTFdIHVzYiA5LTM6IG5ldyBsb3ct
c3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBvaGNpLXBjaQpbICAgIDIuODU2MDU3XSAg
ICA4cmVnc19wcmVmZXRjaDogMTQ1MzQuMDAwIE1CL3NlYwpbICAgIDIuODk2MDU3XSAgICAzMnJl
Z3MgICAgOiAxMzA4Ny4wMDAgTUIvc2VjClsgICAgMi45MzYwNTddICAgIDMycmVnc19wcmVmZXRj
aDogMTA5MjEuMDAwIE1CL3NlYwpbICAgIDIuOTc2MDU5XSAgICBnZW5lcmljX3NzZTogIDgwNzcu
MDAwIE1CL3NlYwpbICAgIDMuMDEzMTU3XSB1c2IgOS0zOiBOZXcgVVNCIGRldmljZSBmb3VuZCwg
aWRWZW5kb3I9MDlkYSwgaWRQcm9kdWN0PTAwMGEKWyAgICAzLjAxMzE2MV0gdXNiIDktMzogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTAKWyAg
ICAzLjAxMzE2M10gdXNiIDktMzogUHJvZHVjdDogUFMvMitVU0IgTW91c2UKWyAgICAzLjAxMzE2
NF0gdXNiIDktMzogTWFudWZhY3R1cmVyOiBBNFRlY2gKWyAgICAzLjAxNjAxOV0gICAgcHJlZmV0
Y2g2NC1zc2U6ICA4MjY5LjAwMCBNQi9zZWMKWyAgICAzLjAyMDMzMV0gaW5wdXQ6IEE0VGVjaCBQ
Uy8yK1VTQiBNb3VzZSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTIuMC91c2I5Lzkt
My85LTM6MS4wL2lucHV0L2lucHV0MApbICAgIDMuMDIwNTMzXSBhNHRlY2ggMDAwMzowOURBOjAw
MEEuMDAwMTogaW5wdXQsaGlkcmF3MDogVVNCIEhJRCB2MS4xMCBNb3VzZSBbQTRUZWNoIFBTLzIr
VVNCIE1vdXNlXSBvbiB1c2ItMDAwMDowMDoxMi4wLTMvaW5wdXQwClsgICAgMy4wMjA1NTNdIHVz
YmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiaGlkClsgICAgMy4wMjA1
NTRdIHVzYmhpZDogVVNCIEhJRCBjb3JlIGRyaXZlcgpbICAgIDMuMDU2MDU4XSAgICBhdnggICAg
ICAgOiAgNDEyNC4wMDAgTUIvc2VjClsgICAgMy4wNTYwNjBdIHhvcjogdXNpbmcgZnVuY3Rpb246
IDhyZWdzICgxNjI1Ni4wMDAgTUIvc2VjKQpbICAgIDMuMTI0MDYwXSByYWlkNjogc3NlMngxICAg
IDcyNDEgTUIvcwpbICAgIDMuMTQ4MDk0XSB1c2IgOS00OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZp
Y2UgbnVtYmVyIDMgdXNpbmcgb2hjaS1wY2kKWyAgICAzLjE5MjA1OV0gcmFpZDY6IHNzZTJ4MiAg
IDExMzM1IE1CL3MKWyAgICAzLjI2MDA2MF0gcmFpZDY6IHNzZTJ4NCAgIDEyOTA5IE1CL3MKWyAg
ICAzLjI2MDA2MV0gcmFpZDY6IHVzaW5nIGFsZ29yaXRobSBzc2UyeDQgKDEyOTA5IE1CL3MpClsg
ICAgMy4yNjAwNjJdIHJhaWQ2OiB1c2luZyBzc3NlM3gyIHJlY292ZXJ5IGFsZ29yaXRobQpbICAg
IDMuMjYzMTYzXSBiaW86IGNyZWF0ZSBzbGFiIDxiaW8tMT4gYXQgMQpbICAgIDMuMjYzMzc0XSBC
dHJmcyBsb2FkZWQKWyAgICAzLjMzMjE1Nl0gdXNiIDktNDogTmV3IFVTQiBkZXZpY2UgZm91bmQs
IGlkVmVuZG9yPTA0NWUsIGlkUHJvZHVjdD0wMGRkClsgICAgMy4zMzIxNjRdIHVzYiA5LTQ6IE5l
dyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0wClsg
ICAgMy4zMzIxNjZdIHVzYiA5LTQ6IFByb2R1Y3Q6IENvbWZvcnQgQ3VydmUgS2V5Ym9hcmQgMjAw
MApbICAgIDMuMzMyMTY3XSB1c2IgOS00OiBNYW51ZmFjdHVyZXI6IE1pY3Jvc29mdApbICAgIDMu
MzU4MjEwXSBpbnB1dDogTWljcm9zb2Z0IENvbWZvcnQgQ3VydmUgS2V5Ym9hcmQgMjAwMCBhcyAv
ZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTIuMC91c2I5LzktNC85LTQ6MS4wL2lucHV0L2lu
cHV0MQpbICAgIDMuMzU4MzcyXSBoaWQtZ2VuZXJpYyAwMDAzOjA0NUU6MDBERC4wMDAyOiBpbnB1
dCxoaWRyYXcxOiBVU0IgSElEIHYxLjExIEtleWJvYXJkIFtNaWNyb3NvZnQgQ29tZm9ydCBDdXJ2
ZSBLZXlib2FyZCAyMDAwXSBvbiB1c2ItMDAwMDowMDoxMi4wLTQvaW5wdXQwClsgICAgMy4zNjAy
NDhdIGlucHV0OiBNaWNyb3NvZnQgQ29tZm9ydCBDdXJ2ZSBLZXlib2FyZCAyMDAwIGFzIC9kZXZp
Y2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxMi4wL3VzYjkvOS00LzktNDoxLjEvaW5wdXQvaW5wdXQy
ClsgICAgMy4zNjA0NjBdIGhpZC1nZW5lcmljIDAwMDM6MDQ1RTowMERELjAwMDM6IGlucHV0LGhp
ZHJhdzI6IFVTQiBISUQgdjEuMTEgRGV2aWNlIFtNaWNyb3NvZnQgQ29tZm9ydCBDdXJ2ZSBLZXli
b2FyZCAyMDAwXSBvbiB1c2ItMDAwMDowMDoxMi4wLTQvaW5wdXQxClsgICAgNS42ODczOTddIEVY
VDQtZnMgKHNkYzMpOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRlcmVkIGRhdGEgbW9kZS4g
T3B0czogYWNsLHVzZXJfeGF0dHIKWyAgICA1Ljk4NTgwMV0gRVhUNC1mcyAoc2RjMyk6IHJlLW1v
dW50ZWQuIE9wdHM6IGFjbCx1c2VyX3hhdHRyClsgICAgNi44MjEzOTFdIHN5c3RlbWRbMV06IHN5
c3RlbWQgMjA4IHJ1bm5pbmcgaW4gc3lzdGVtIG1vZGUuICgrUEFNICtMSUJXUkFQICtBVURJVCAr
U0VMSU5VWCAtSU1BICtTWVNWSU5JVCArTElCQ1JZUFRTRVRVUCArR0NSWVBUICtBQ0wgK1haKQpb
ICAgIDYuODIxNDQ2XSBzeXN0ZW1kWzFdOiBEZXRlY3RlZCB2aXJ0dWFsaXphdGlvbiAneGVuJy4K
WyAgICA3LjExNjYxMl0gc3lzdGVtZFsxXTogSW5zZXJ0ZWQgbW9kdWxlICdhdXRvZnM0JwpbICAg
IDcuMTI5MzI5XSBzeXN0ZW1kWzFdOiBTZXQgaG9zdG5hbWUgdG8gPGxpbnV4LWI1MmQ+LgpbICAg
IDcuNTQ5Nzg0XSBkZXZpY2UtbWFwcGVyOiB1ZXZlbnQ6IHZlcnNpb24gMS4wLjMKWyAgICA3LjU0
OTg1NV0gZGV2aWNlLW1hcHBlcjogaW9jdGw6IDQuMjUuMC1pb2N0bCAoMjAxMy0wNi0yNikgaW5p
dGlhbGlzZWQ6IGRtLWRldmVsQHJlZGhhdC5jb20KWyAgICA3LjU1MDkwMF0gTFZNOiBBY3RpdmF0
aW9uIGdlbmVyYXRvciBzdWNjZXNzZnVsbHkgY29tcGxldGVkLgpbICAgIDguNTM3MTE5XSBzeXN0
ZW1kWzFdOiBTdGFydGVkIENvbGxlY3QgUmVhZC1BaGVhZCBEYXRhLgpbICAgIDguNTM3MTMxXSBz
eXN0ZW1kWzFdOiBTdGFydGVkIFJlcGxheSBSZWFkLUFoZWFkIERhdGEuClsgICAgOC41MzcxMzld
IHN5c3RlbWRbMV06IEV4cGVjdGluZyBkZXZpY2UgZGV2LXh2Yy0xLmRldmljZS4uLgpbICAgIDgu
NTM3MjI2XSBzeXN0ZW1kWzFdOiBFeHBlY3RpbmcgZGV2aWNlIGRldi14dmMwLmRldmljZS4uLgpb
ICAgIDguNTM3MjgzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBTeXN0ZW0gVGltZSBTeW5jaHJvbml6
ZWQuClsgICAgOC41MzczNDJdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IFN5c3RlbSBUaW1l
IFN5bmNocm9uaXplZC4KWyAgICA4LjUzNzM1MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgUmVtb3Rl
IEZpbGUgU3lzdGVtcyAoUHJlKS4KWyAgICA4LjUzNzQwNl0gc3lzdGVtZFsxXTogUmVhY2hlZCB0
YXJnZXQgUmVtb3RlIEZpbGUgU3lzdGVtcyAoUHJlKS4KWyAgICA4LjUzNzQxM10gc3lzdGVtZFsx
XTogU3RhcnRpbmcgUmVtb3RlIEZpbGUgU3lzdGVtcy4KWyAgICA4LjUzNzQ2N10gc3lzdGVtZFsx
XTogUmVhY2hlZCB0YXJnZXQgUmVtb3RlIEZpbGUgU3lzdGVtcy4KWyAgICA4LjUzNzQ3Nl0gc3lz
dGVtZFsxXTogU3RhcnRpbmcgU3lzbG9nIFNvY2tldC4KWyAgICA4LjUzNzU2Nl0gc3lzdGVtZFsx
XTogTGlzdGVuaW5nIG9uIFN5c2xvZyBTb2NrZXQuClsgICAgOC41Mzc1NzVdIHN5c3RlbWRbMV06
IFN0YXJ0aW5nIERlbGF5ZWQgU2h1dGRvd24gU29ja2V0LgpbICAgIDguNTM3NjQ1XSBzeXN0ZW1k
WzFdOiBMaXN0ZW5pbmcgb24gRGVsYXllZCBTaHV0ZG93biBTb2NrZXQuClsgICAgOC41Mzc2NTNd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIC9kZXYvaW5pdGN0bCBDb21wYXRpYmlsaXR5IE5hbWVkIFBp
cGUuClsgICAgOC41Mzc3MjVdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiAvZGV2L2luaXRjdGwg
Q29tcGF0aWJpbGl0eSBOYW1lZCBQaXBlLgpbICAgIDguNTM3NzM3XSBzeXN0ZW1kWzFdOiBTdGFy
dGluZyBKb3VybmFsIFNvY2tldC4KWyAgICA4LjUzNzgzNV0gc3lzdGVtZFsxXTogTGlzdGVuaW5n
IG9uIEpvdXJuYWwgU29ja2V0LgpbICAgIDguNTgwMDcwXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBM
b2FkIEtlcm5lbCBNb2R1bGVzLi4uClsgICAgOC41ODExNDJdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IENyZWF0ZSBsaXN0IG9mIHJlcXVpcmVkIHN0YXRpYyBkZXZpY2Ugbm9kZXMgZm9yIHRoZSBjdXJy
ZW50IGtlcm5lbC4uLgpbICAgIDguNTgxNzcxXSBzeXN0ZW1kWzFdOiBNb3VudGluZyBEZWJ1ZyBG
aWxlIFN5c3RlbS4uLgpbICAgIDguNTgyNDQ1XSBzeXN0ZW1kWzFdOiBNb3VudGVkIEh1Z2UgUGFn
ZXMgRmlsZSBTeXN0ZW0uClsgICAgOC41ODI0OTRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFNldHVw
IFZpcnR1YWwgQ29uc29sZS4uLgpbICAgIDguNTgzMjAwXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBD
cmVhdGUgZHluYW1pYyBydWxlIGZvciAvZGV2L3Jvb3QgbGluay4uLgpbICAgIDguNTgzOTE5XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBKb3VybmFsIFNlcnZpY2UuLi4KWyAgICA4LjU4NDcxNl0gc3lz
dGVtZFsxXTogU3RhcnRlZCBKb3VybmFsIFNlcnZpY2UuClsgICAgOC45ODU4MjNdIHN5c3RlbWQt
am91cm5hbGRbMjgzXTogVmFjdXVtaW5nIGRvbmUsIGZyZWVkIDAgYnl0ZXMKWyAgICA5LjI2OTIy
OF0gc2QgMDowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMCB0eXBlIDAKWyAgICA5LjI2
OTI1OF0gc2QgMTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMSB0eXBlIDAKWyAgICA5
LjI2OTI4NV0gc2QgMjowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMiB0eXBlIDAKWyAg
ICA5LjgxNjEyNV0gRVhUNC1mcyAoc2RjMyk6IHJlLW1vdW50ZWQuIE9wdHM6IGFjbCx1c2VyX3hh
dHRyClsgICAgOS44NzY0ODRdIHN5c3RlbWQtdWRldmRbMzIwXTogc3RhcnRpbmcgdmVyc2lvbiAy
MDgKWyAgIDEwLjgwNTg4Nl0gcGlpeDRfc21idXMgMDAwMDowMDoxNC4wOiBTTUJ1cyBIb3N0IENv
bnRyb2xsZXIgYXQgMHhiMDAsIHJldmlzaW9uIDAKWyAgIDEwLjgwODU1NV0gc2NzaTMgOiBwYXRh
X2F0aWl4cApbICAgMTAuODA5MDU1XSBzY3NpNCA6IHBhdGFfYXRpaXhwClsgICAxMC44MDk0ODJd
IGF0YTQ6IFBBVEEgbWF4IFVETUEvMTAwIGNtZCAweDFmMCBjdGwgMHgzZjYgYm1kbWEgMHhmMTAw
IGlycSAxNApbICAgMTAuODA5NDg0XSBhdGE1OiBQQVRBIG1heCBVRE1BLzEwMCBjbWQgMHgxNzAg
Y3RsIDB4Mzc2IGJtZG1hIDB4ZjEwOCBpcnEgMTUKWyAgIDEwLjgwOTk3NF0gc2hwY2hwOiBTdGFu
ZGFyZCBIb3QgUGx1ZyBQQ0kgQ29udHJvbGxlciBEcml2ZXIgdmVyc2lvbjogMC40ClsgICAxMC44
NzI5NjRdIGlucHV0OiBQb3dlciBCdXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvZGV2aWNl
OjAwL1BOUDBDMEM6MDAvaW5wdXQvaW5wdXQzClsgICAxMC44NzMwMzFdIEFDUEk6IFBvd2VyIEJ1
dHRvbiBbUFdSQl0KWyAgIDEwLjg3MzA2OF0gaW5wdXQ6IFBvd2VyIEJ1dHRvbiBhcyAvZGV2aWNl
cy9MTlhTWVNUTTowMC9MTlhQV1JCTjowMC9pbnB1dC9pbnB1dDQKWyAgIDEwLjg3MzA5M10gQUNQ
STogUG93ZXIgQnV0dG9uIFtQV1JGXQpbICAgMTEuMjM4MjQxXSByODE2OSBHaWdhYml0IEV0aGVy
bmV0IGRyaXZlciAyLjNMSy1OQVBJIGxvYWRlZApbICAgMTEuMjM4NTQ3XSByODE2OSAwMDAwOjA1
OjAwLjA6IGlycSA2MSAoMjYzKSBmb3IgTVNJL01TSS1YClsgICAxMS4yMzg2OTRdIHI4MTY5IDAw
MDA6MDU6MDAuMCBldGgwOiBSVEw4MTY4ZXZsLzgxMTFldmwgYXQgMHhmZmZmYzkwMDAwMDJhMDAw
LCBiYzo1ZjpmNDo4YjoyNjo4MSwgWElEIDBjOTAwODAwIElSUSA2MQpbICAgMTEuMjM4Njk2XSBy
ODE2OSAwMDAwOjA1OjAwLjAgZXRoMDoganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTIwMCBieXRl
cywgdHggY2hlY2tzdW1taW5nOiBrb10KWyAgIDExLjI5MDkzOV0gaW5wdXQ6IFBDIFNwZWFrZXIg
YXMgL2RldmljZXMvcGxhdGZvcm0vcGNzcGtyL2lucHV0L2lucHV0NQpbICAgMTEuMzE1NTUzXSB0
YnM2OTgyZmU6IG1vZHVsZSBsaWNlbnNlICdUdXJib1NpZ2h0IFByb3ByaWV0YXJ5OiB3d3cudGJz
ZHR2LmNvbScgdGFpbnRzIGtlcm5lbC4KWyAgIDExLjMxNTU1N10gRGlzYWJsaW5nIGxvY2sgZGVi
dWdnaW5nIGR1ZSB0byBrZXJuZWwgdGFpbnQKWyAgIDExLjM3MjI3OV0gU2VyaWFsOiA4MjUwLzE2
NTUwIGRyaXZlciwgMzIgcG9ydHMsIElSUSBzaGFyaW5nIGRpc2FibGVkClsgICAxMS4zOTYzNjld
IDAwMDA6MDI6MDcuMDogdHR5UzUgYXQgSS9PIDB4ZDA2MCAoaXJxID0gMjIpIGlzIGEgMTY1NTBB
ClsgICAxMS42NTQxMjZdIHVzYi1zdG9yYWdlIDgtNDoxLjA6IFVTQiBNYXNzIFN0b3JhZ2UgZGV2
aWNlIGRldGVjdGVkClsgICAxMS42NTQyMjNdIHNjc2k1IDogdXNiLXN0b3JhZ2UgOC00OjEuMApb
ICAgMTEuNjU0Mjg3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVz
Yi1zdG9yYWdlClsgICAxMS43Mzg0MTJdIGlucHV0OiBYYm94IDM2MCBXaXJlbGVzcyBSZWNlaXZl
ciAoWEJPWCkgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE1LjIvMDAwMDowNDowMC4w
L3VzYjYvNi00LzYtNDoxLjAvaW5wdXQvaW5wdXQ2ClsgICAxMS43Mzg1MTldIGlucHV0OiBYYm94
IDM2MCBXaXJlbGVzcyBSZWNlaXZlciAoWEJPWCkgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAw
OjAwOjE1LjIvMDAwMDowNDowMC4wL3VzYjYvNi00LzYtNDoxLjIvaW5wdXQvaW5wdXQ3ClsgICAx
MS43Mzg1OTJdIGlucHV0OiBYYm94IDM2MCBXaXJlbGVzcyBSZWNlaXZlciAoWEJPWCkgYXMgL2Rl
dmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE1LjIvMDAwMDowNDowMC4wL3VzYjYvNi00LzYtNDox
LjQvaW5wdXQvaW5wdXQ4ClsgICAxMS43Mzg2NjJdIGlucHV0OiBYYm94IDM2MCBXaXJlbGVzcyBS
ZWNlaXZlciAoWEJPWCkgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE1LjIvMDAwMDow
NDowMC4wL3VzYjYvNi00LzYtNDoxLjYvaW5wdXQvaW5wdXQ5ClsgICAxMS43Mzg3MTFdIHVzYmNv
cmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgeHBhZApbICAgMTEuODg1NzI0XSBz
bmQtY2EwMTA2OiBNb2RlbCAxMDBhIFJldiAwMDAwMDAwMCBTZXJpYWwgMTAwYTExMDIKWyAgIDEy
LjM3NjA5Nl0gUmVnaXN0ZXJlZCBJUiBrZXltYXAgcmMtdGJzLW5lYwpbICAgMTIuMzc2MTcyXSBp
bnB1dDogc2FhNzE2eCBJUiAoVHVyYm9TaWdodCBUQlMgNjIyMCkgYXMgL2RldmljZXMvcGNpMDAw
MDowMC8wMDAwOjAwOjE1LjAvMDAwMDowMzowMC4wL3JjL3JjMC9pbnB1dDEwClsgICAxMi4zNzYy
MjhdIHJjMDogc2FhNzE2eCBJUiAoVHVyYm9TaWdodCBUQlMgNjIyMCkgYXMgL2RldmljZXMvcGNp
MDAwMDowMC8wMDAwOjAwOjE1LjAvMDAwMDowMzowMC4wL3JjL3JjMApbICAgMTIuMzc2Mjc3XSBE
VkI6IHJlZ2lzdGVyaW5nIG5ldyBhZGFwdGVyIChTQUE3MTZ4IGR2YiBhZGFwdGVyKQpbICAgMTIu
NDQxNzE1XSBJUiBORUMgcHJvdG9jb2wgaGFuZGxlciBpbml0aWFsaXplZApbICAgMTIuNDU0MDkx
XSB0ZGExODIxMjogTlhQIFREQTE4MjEySE4gc3VjY2Vzc2Z1bGx5IGlkZW50aWZpZWQuClsgICAx
Mi40NTQwOTddIERWQjogcmVnaXN0ZXJpbmcgYWRhcHRlciAwIGZyb250ZW5kIDAgKFNvbnkgQ1hE
MjgyMFIgKERWQi1UL1QyKSkuLi4KWyAgIDEyLjQ2NjQxMl0gcHBkZXY6IHVzZXItc3BhY2UgcGFy
YWxsZWwgcG9ydCBkcml2ZXIKWyAgIDEyLjQ3NjIzMV0gc3lzdGVtZC11ZGV2ZFszNDVdOiByZW5h
bWVkIG5ldHdvcmsgaW50ZXJmYWNlIGV0aDAgdG8gZW5wNXMwClsgICAxMi41NDIyMThdIElSIFJD
NSh4KSBwcm90b2NvbCBoYW5kbGVyIGluaXRpYWxpemVkClsgICAxMi41OTcwNDldIElSIFJDNiBw
cm90b2NvbCBoYW5kbGVyIGluaXRpYWxpemVkClsgICAxMi42NTI3OTJdIHNjc2kgNTowOjA6MDog
RGlyZWN0LUFjY2VzcyAgICAgR2VuZXJpYyAgU3RvcmFnZSBEZXZpY2UgICAwLjAwIFBROiAwIEFO
U0k6IDIKWyAgIDEyLjY1Mjk3M10gc2QgNTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNn
MyB0eXBlIDAKWyAgIDEyLjY1NjM1OF0gc2QgNTowOjA6MDogW3NkZF0gQXR0YWNoZWQgU0NTSSBy
ZW1vdmFibGUgZGlzawpbICAgMTMuMTE0MzA0XSBJUiBKVkMgcHJvdG9jb2wgaGFuZGxlciBpbml0
aWFsaXplZApbICAgMTMuMjY1MTcwXSBJUiBTb255IHByb3RvY29sIGhhbmRsZXIgaW5pdGlhbGl6
ZWQKWyAgIDEzLjQ4MDg1Ml0gQUxTQSBoZGFfaW50ZWwuYzozMTE2IDAwMDA6MDE6MDAuMTogSGFu
ZGxlIFZHQS1zd2l0Y2hlcm9vIGF1ZGlvIGNsaWVudApbICAgMTMuNDgwODU2XSBBTFNBIGhkYV9p
bnRlbC5jOjMzMTcgMDAwMDowMTowMC4xOiBVc2luZyBMUElCIHBvc2l0aW9uIGZpeApbICAgMTMu
NDgwODU3XSBBTFNBIGhkYV9pbnRlbC5jOjM0MzggMDAwMDowMTowMC4xOiBGb3JjZSB0byBub24t
c25vb3AgbW9kZQpbICAgMTMuNDgwOTM5XSBzbmRfaGRhX2ludGVsIDAwMDA6MDE6MDAuMTogaXJx
IDYyICgyNjIpIGZvciBNU0kvTVNJLVgKWyAgIDEzLjQ4MzU4OV0gQUxTQSBoZGFfaW50ZWwuYzox
Nzg3IDAwMDA6MDE6MDAuMTogRW5hYmxlIHN5bmNfd3JpdGUgZm9yIHN0YWJsZSBjb21tdW5pY2F0
aW9uClsgICAxMy41ODk1NjddIGlucHV0OiBNQ0UgSVIgS2V5Ym9hcmQvTW91c2UgKHNhYTcxNngp
IGFzIC9kZXZpY2VzL3ZpcnR1YWwvaW5wdXQvaW5wdXQxMQpbICAgMTMuNTkwMTI0XSBJUiBNQ0Ug
S2V5Ym9hcmQvbW91c2UgcHJvdG9jb2wgaGFuZGxlciBpbml0aWFsaXplZApbICAgMTMuNjUxMDY5
XSBzeXN0ZW1kLWpvdXJuYWxkWzI4M106IFJlY2VpdmVkIHJlcXVlc3QgdG8gZmx1c2ggcnVudGlt
ZSBqb3VybmFsIGZyb20gUElEIDEKWyAgIDEzLjc2NjY5MV0gaW5wdXQ6IEhEQSBBVEkgSERNSSBI
RE1JL0RQLHBjbT0xMSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MDIuMC8wMDAwOjAx
OjAwLjEvc291bmQvY2FyZDEvaW5wdXQxMgpbICAgMTMuNzY2Nzg0XSBpbnB1dDogSERBIEFUSSBI
RE1JIEhETUkvRFAscGNtPTEwIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDowMi4wLzAw
MDA6MDE6MDAuMS9zb3VuZC9jYXJkMS9pbnB1dDEzClsgICAxMy43NjY4NTBdIGlucHV0OiBIREEg
QVRJIEhETUkgSERNSS9EUCxwY209OSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MDIu
MC8wMDAwOjAxOjAwLjEvc291bmQvY2FyZDEvaW5wdXQxNApbICAgMTMuNzY2OTA1XSBpbnB1dDog
SERBIEFUSSBIRE1JIEhETUkvRFAscGNtPTggYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAw
OjAyLjAvMDAwMDowMTowMC4xL3NvdW5kL2NhcmQxL2lucHV0MTUKWyAgIDEzLjc2Njk2NV0gaW5w
dXQ6IEhEQSBBVEkgSERNSSBIRE1JL0RQLHBjbT03IGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAw
MDowMDowMi4wLzAwMDA6MDE6MDAuMS9zb3VuZC9jYXJkMS9pbnB1dDE2ClsgICAxMy43NjcwMjJd
IGlucHV0OiBIREEgQVRJIEhETUkgSERNSS9EUCxwY209MyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAw
LzAwMDA6MDA6MDIuMC8wMDAwOjAxOjAwLjEvc291bmQvY2FyZDEvaW5wdXQxNwpbICAgMTMuODU3
NjU3XSBsaXJjX2RldjogSVIgUmVtb3RlIENvbnRyb2wgZHJpdmVyIHJlZ2lzdGVyZWQsIG1ham9y
IDI1MCAKWyAgIDEzLjg2NDI5OF0gcmMgcmMwOiBsaXJjX2RldjogZHJpdmVyIGlyLWxpcmMtY29k
ZWMgKHNhYTcxNngpIHJlZ2lzdGVyZWQgYXQgbWlub3IgPSAwClsgICAxMy44NjQzMDFdIElSIExJ
UkMgYnJpZGdlIGhhbmRsZXIgaW5pdGlhbGl6ZWQKWyAgIDE0LjMyMTY0Ml0gQWRkaW5nIDE1NzI5
NDhrIHN3YXAgb24gL2Rldi9zZGM2LiAgUHJpb3JpdHk6LTEgZXh0ZW50czoxIGFjcm9zczoxNTcy
OTQ4ayBGUwpbICAgMTQuNzM1NTk2XSB0eXBlPTE0MDAgYXVkaXQoMTM5MTQ2NjYwNi41OTY6Mik6
IGFwcGFybW9yPSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIve3Vzci8s
fWJpbi9waW5nIiBwaWQ9NTExIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0Ljc1OTU3MV0g
dHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2MDYuNjIwOjMpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVy
YXRpb249InByb2ZpbGVfbG9hZCIgbmFtZT0iL3NiaW4va2xvZ2QiIHBpZD01MjAgY29tbT0iYXBw
YXJtb3JfcGFyc2VyIgpbICAgMTQuNzc3OTExXSB0eXBlPTE0MDAgYXVkaXQoMTM5MTQ2NjYwNi42
NDA6NCk6IGFwcGFybW9yPSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIv
c2Jpbi9zeXNsb2ctbmciIHBpZD01MjQgY29tbT0iYXBwYXJtb3JfcGFyc2VyIgpbICAgMTQuODAx
Mzk2XSB0eXBlPTE0MDAgYXVkaXQoMTM5MTQ2NjYwNi42NjQ6NSk6IGFwcGFybW9yPSJTVEFUVVMi
IG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIvc2Jpbi9zeXNsb2dkIiBwaWQ9NTI4IGNv
bW09ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0Ljg1MDI1N10gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0
NjY2MDYuNzEyOjYpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIg
bmFtZT0iL3Vzci9saWIvYXBhY2hlMi9tcG0tcHJlZm9yay9hcGFjaGUyIiBwaWQ9NTMzIGNvbW09
ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0Ljg1MDQ4MF0gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2
MDYuNzEyOjcpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgbmFt
ZT0iL3Vzci9saWIvYXBhY2hlMi9tcG0tcHJlZm9yay9hcGFjaGUyLy9ERUZBVUxUX1VSSSIgcGlk
PTUzMyBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAxNC44NTA2NjldIHR5cGU9MTQwMCBhdWRp
dCgxMzkxNDY2NjA2LjcxMjo4KTogYXBwYXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxl
X2xvYWQiIG5hbWU9Ii91c3IvbGliL2FwYWNoZTIvbXBtLXByZWZvcmsvYXBhY2hlMi8vSEFORExJ
TkdfVU5UUlVTVEVEX0lOUFVUIiBwaWQ9NTMzIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0
Ljg1MDgzNF0gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2MDYuNzEyOjkpOiBhcHBhcm1vcj0iU1RB
VFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgbmFtZT0iL3Vzci9saWIvYXBhY2hlMi9tcG0t
cHJlZm9yay9hcGFjaGUyLy9waHBzeXNpbmZvIiBwaWQ9NTMzIGNvbW09ImFwcGFybW9yX3BhcnNl
ciIKWyAgIDE0Ljg2OTA1NV0gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2MDYuNzMyOjEwKTogYXBw
YXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIG5hbWU9Ii91c3IvbGliL2Rv
dmVjb3QvZGVsaXZlciIgcGlkPTUzNyBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAxNC44ODk2
MzJdIHR5cGU9MTQwMCBhdWRpdCgxMzkxNDY2NjA2Ljc1MjoxMSk6IGFwcGFybW9yPSJTVEFUVVMi
IG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIvdXNyL2xpYi9kb3ZlY290L2RvdmVjb3Qt
YXV0aCIgcGlkPTU0MSBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAxNi4xOTY0NDRdIHhlbjpl
dnRjaG46IEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3RhbGxlZApbICAgMTYuNzczODEwXSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYmJhY2sKWyAgIDE2Ljg3NjY0
Ml0gbmJkOiByZWdpc3RlcmVkIGRldmljZSBhdCBtYWpvciA0MwpbICAgMTcuMTg0Mzc0XSBVbmFi
bGUgdG8gcmVhZCBzeXNycSBjb2RlIGluIGNvbnRyb2wvc3lzcnEKWyAgIDE5LjYyMzg1M10gdmdh
YXJiOiBkZXZpY2UgY2hhbmdlZCBkZWNvZGVzOiBQQ0k6MDAwMDowMTowMC4wLG9sZGRlY29kZXM9
aW8rbWVtLGRlY29kZXM9bm9uZTpvd25zPWlvK21lbQpbICAgMTkuNjIzODU2XSB2Z2FhcmI6IHRy
YW5zZmVycmluZyBvd25lciBmcm9tIFBDSTowMDAwOjAxOjAwLjAgdG8gUENJOjAwMDA6MDA6MDEu
MApbICAgMjEuNjUwMTQ0XSByODE2OSAwMDAwOjA1OjAwLjAgZW5wNXMwOiBsaW5rIGRvd24KWyAg
IDIxLjY1MDE5MF0gSVB2NjogQUREUkNPTkYoTkVUREVWX1VQKTogZW5wNXMwOiBsaW5rIGlzIG5v
dCByZWFkeQpbICAgMjEuNjUwMTk0XSByODE2OSAwMDAwOjA1OjAwLjAgZW5wNXMwOiBsaW5rIGRv
d24KWyAgIDIxLjc5MTU2Nl0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNwpbICAg
MjYuNDc3NTk1XSByODE2OSAwMDAwOjA1OjAwLjAgZW5wNXMwOiBsaW5rIHVwClsgICAyNi40Nzc2
MDVdIElQdjY6IEFERFJDT05GKE5FVERFVl9DSEFOR0UpOiBlbnA1czA6IGxpbmsgYmVjb21lcyBy
ZWFkeQpbICAgNDAuOTQ3NTE3XSBCcmlkZ2UgZmlyZXdhbGxpbmcgcmVnaXN0ZXJlZApbICAgNDAu
OTU3NTQ1XSBkZXZpY2UgZW5wNXMwIGVudGVyZWQgcHJvbWlzY3VvdXMgbW9kZQpbICAgNDAuOTYz
MTA2XSBicjA6IHBvcnQgMShlbnA1czApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQpbICAgNDAu
OTYzMTIyXSBicjA6IHBvcnQgMShlbnA1czApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQpbICAg
NDMuMDE3ODQ2XSBFYnRhYmxlcyB2Mi4wIHJlZ2lzdGVyZWQKWyAgIDQzLjA4NjY4MV0gaXBfdGFi
bGVzOiAoQykgMjAwMC0yMDA2IE5ldGZpbHRlciBDb3JlIFRlYW0KWyAgIDQzLjEwNzgxNF0gaXA2
X3RhYmxlczogKEMpIDIwMDAtMjAwNiBOZXRmaWx0ZXIgQ29yZSBUZWFtClsgIDM0MC42ODI4Mjdd
IEJsdWV0b290aDogQ29yZSB2ZXIgMi4xNgpbICAzNDAuNjgyODUxXSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDMxClsgIDM0MC42ODI4NTJdIEJsdWV0b290aDogSENJIGRldmljZSBh
bmQgY29ubmVjdGlvbiBtYW5hZ2VyIGluaXRpYWxpemVkClsgIDM0MC42ODI4NjBdIEJsdWV0b290
aDogSENJIHNvY2tldCBsYXllciBpbml0aWFsaXplZApbICAzNDAuNjgyODYyXSBCbHVldG9vdGg6
IEwyQ0FQIHNvY2tldCBsYXllciBpbml0aWFsaXplZApbICAzNDAuNjgyODcwXSBCbHVldG9vdGg6
IFNDTyBzb2NrZXQgbGF5ZXIgaW5pdGlhbGl6ZWQKWyAgMzQwLjcwNjk3Ml0gQmx1ZXRvb3RoOiBC
TkVQIChFdGhlcm5ldCBFbXVsYXRpb24pIHZlciAxLjMKWyAgMzQwLjcwNjk3Nl0gQmx1ZXRvb3Ro
OiBCTkVQIGZpbHRlcnM6IHByb3RvY29sIG11bHRpY2FzdApbICAzNDAuNzA2OTg0XSBCbHVldG9v
dGg6IEJORVAgc29ja2V0IGxheWVyIGluaXRpYWxpemVkClsgIDM0My44MjU0MzVdIGZ1c2UgaW5p
dCAoQVBJIHZlcnNpb24gNy4yMikK
--001a113a332af9ed1b04f180c397
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a113a332af9ed1b04f180c397--


From xen-devel-bounces@lists.xen.org Mon Feb 03 13:47:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAJsA-0000Do-Io; Mon, 03 Feb 2014 13:47:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAJs8-0000Dj-N8
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 13:47:37 +0000
Received: from [85.158.143.35:36531] by server-2.bemta-4.messagelabs.com id
	3B/3E-10891-7FD9FE25; Mon, 03 Feb 2014 13:47:35 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391435253!2757647!1
X-Originating-IP: [209.85.216.180]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2009 invoked from network); 3 Feb 2014 13:47:34 -0000
Received: from mail-qc0-f180.google.com (HELO mail-qc0-f180.google.com)
	(209.85.216.180)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:47:34 -0000
Received: by mail-qc0-f180.google.com with SMTP id i17so10895364qcy.25
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 05:47:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=IABxOxMSpB6xRSWw4e59QFT7yAzPEGcHxuGb4s8cbl8=;
	b=OmY1FHs/mdpdFVU1wB0H/GkAR9AWg+2AFfePa2yp6resB9EVPMYQbx5tS3lfm1UmdC
	er2cH1GJOh81fUs44b9j+CsvFW6f5I57XmDlgG4wuEAddihwvfJfQnqUoIEESsD/oCHR
	IVT5xq9GWdOyjrc1SUpHT0PerWT0NfIJMvltto1tzmCx3Po4MjxAr0SB+gbUiqYki8rN
	KKMZ7DpdlerPABMMIzCItwZwuEGcaA27GqygXPi3gK9wXf3c1meDsKdwy15ezYFxbmmL
	lQc7fVs4OCyJVbqzYr+DgaPqDdHibv7T4YR8KQqN5y3xc9PmUqJt2DvBaPjK0n+dwrNd
	Frbg==
MIME-Version: 1.0
X-Received: by 10.140.94.74 with SMTP id f68mr53237124qge.64.1391435252880;
	Mon, 03 Feb 2014 05:47:32 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 05:47:32 -0800 (PST)
Date: Mon, 3 Feb 2014 22:47:32 +0900
Message-ID: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=001a113a332af9ed1b04f180c397
Cc: jbeulich@suse.com, xiantao.zhang@intel.com, suravee.suthikulpanit@amd.com
Subject: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a113a332af9ed1b04f180c397
Content-Type: text/plain; charset=ISO-8859-1

My system, based on an AMD APU, completely crashes when trying to use HVM
domUs. I asked earlier on the users list
http://lists.xen.org/archives/html/xen-users/2013-11/msg00063.html but
was recommended to ask here.
I've also found the same problem described with another AMD APU here:
http://lists.xen.org/archives/html/xen-devel/2013-08/msg01395.html

My system crashes every time I start an HVM domU. However, if I use Xen
compiled with debug info, it runs stably for at least several hours (not
tested for longer runs).

My system is openSUSE 13.1 with Xen 4.4.
My hardware is:

ASRock FM2A75 Pro4
AMD A8-6600K APU
Gigabyte Radeon 7850
8 GB DDR3-1600

I've tested with fresh Xen 4.4, and it crashes my system, as does stable Xen 4.3.

I've set up a PCI serial console and captured the Xen log (the === ===
lines were added by myself). Xen and dom0 dmesg logs are also attached.
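For anyone wanting to reproduce this capture setup: on a GRUB2-based openSUSE install, the hypervisor options can be set in /etc/default/grub. The option string below matches the "Command line:" entry visible in the attached xen-serial-new.log; the GRUB_CMDLINE_XEN_DEFAULT variable name is the openSUSE/GRUB2 convention and may differ on other distros.

```shell
# /etc/default/grub -- Xen hypervisor options used for this serial capture.
# "console=com1 com1=115200,8n1,pci" directs Xen's console to a PCI serial
# card at 115200 baud, 8N1; the remaining flags raise log verbosity.
GRUB_CMDLINE_XEN_DEFAULT="loglvl=all iommu=debug,verbose apic_verbosity=debug console=com1 com1=115200,8n1,pci"
```

After editing, regenerate the bootloader config (e.g. grub2-mkconfig -o /boot/grub2/grub.cfg on openSUSE) and read the serial port on the capturing machine at 115200 8N1.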

--001a113a332af9ed1b04f180c397
Content-Type: text/plain; charset=US-ASCII; name="xen-serial-new.log"
Content-Disposition: attachment; filename="xen-serial-new.log"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sbozi0

WGVuIDQuNC4wXzAyLTI5Ny4xCihYRU4pIFhlbiB2ZXJzaW9uIDQuNC4wXzAyLTI5Ny4xIChhYnVp
bGRAKSAoZ2NjIChTVVNFIExpbnV4KSA0LjguMSAyMDEzMDkwOSBbZ2NjLTRfOC1icmFuY2ggcmV2
aXNpb24gMjAyMzg4XSkgZGVidWc9biBUdWUgSmFuIDI4IDE2OjA4OjQ4IFVUQyAyMDE0CihYRU4p
IExhdGVzdCBDaGFuZ2VTZXQ6IAooWEVOKSBCb290bG9hZGVyOiBHUlVCMiAyLjAwCihYRU4pIENv
bW1hbmQgbGluZTogbG9nbHZsPWFsbCBpb21tdT1kZWJ1Zyx2ZXJib3NlIGFwaWNfdmVyYm9zaXR5
PWRlYnVnIGNvbnNvbGU9Y29tMSBjb20xPTExNTIwMCw4bjEscGNpCihYRU4pIFZpZGVvIGluZm9y
bWF0aW9uOgooWEVOKSAgVkdBIGlzIHRleHQgbW9kZSA4MHgyNSwgZm9udCA4eDE2CihYRU4pICBW
QkUvRERDIG1ldGhvZHM6IFYyOyBFRElEIHRyYW5zZmVyIHRpbWU6IDEgc2Vjb25kcwooWEVOKSBE
aXNjIGluZm9ybWF0aW9uOgooWEVOKSAgRm91bmQgMyBNQlIgc2lnbmF0dXJlcwooWEVOKSAgRm91
bmQgNCBFREQgaW5mb3JtYXRpb24gc3RydWN0dXJlcwooWEVOKSBYZW4tZTgyMCBSQU0gbWFwOgoo
WEVOKSAgMDAwMDAwMDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWU4MDAgKHVzYWJsZSkKKFhFTikg
IDAwMDAwMDAwMDAwOWU4MDAgLSAwMDAwMDAwMDAwMGEwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAw
MDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAw
MDAwMDAxMDAwMDAgLSAwMDAwMDAwMDhkNjhiMDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDhk
NjhiMDAwIC0gMDAwMDAwMDA4ZGQwYTAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDhkZDBh
MDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpCihYRU4pICAwMDAwMDAwMDhlMDVhMDAw
IC0gMDAwMDAwMDA4ZWE0NTAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDhlYTQ1MDAwIC0g
MDAwMDAwMDA4ZWE0NjAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDA4ZWE0NjAwMCAtIDAwMDAw
MDAwOGVjNGMwMDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDA4ZWM0YzAwMCAtIDAwMDAwMDAw
OGYwNjQwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwOGYwNjQwMDAgLSAwMDAwMDAwMDhmN2Yz
MDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwOGY3ZjMwMDAgLSAwMDAwMDAwMDhmODAwMDAw
ICh1c2FibGUpCihYRU4pICAwMDAwMDAwMGZlYzAwMDAwIC0gMDAwMDAwMDBmZWMwMTAwMCAocmVz
ZXJ2ZWQpCihYRU4pICAwMDAwMDAwMGZlYzEwMDAwIC0gMDAwMDAwMDBmZWMxMTAwMCAocmVzZXJ2
ZWQpCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0gMDAwMDAwMDBmZWQwMTAwMCAocmVzZXJ2ZWQp
CihYRU4pICAwMDAwMDAwMGZlZDgwMDAwIC0gMDAwMDAwMDBmZWQ5MDAwMCAocmVzZXJ2ZWQpCihY
RU4pICAwMDAwMDAwMGZmODAwMDAwIC0gMDAwMDAwMDEwMDAwMDAwMCAocmVzZXJ2ZWQpCihYRU4p
ICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAwMDI1MDAwMDAwMCAodXNhYmxlKQooWEVOKSBBQ1BJ
OiBSU0RQIDAwMEYwNDkwLCAwMDI0IChyMiBBTEFTS0EpCihYRU4pIEFDUEk6IFhTRFQgOEUwNEEw
NzgsIDAwNzQgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEzKQooWEVO
KSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAwMEY0IChyNCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkg
QU1JICAgICAxMDAxMykKKFhFTikgQUNQSSBXYXJuaW5nICh0YmZhZHQtMDQ2NCk6IE9wdGlvbmFs
IGZpZWxkICJQbTJDb250cm9sQmxvY2siIGhhcyB6ZXJvIGFkZHJlc3Mgb3IgbGVuZ3RoOiAwMDAw
MDAwMDAwMDAwMDAwLzEgWzIwMDcwMTI2XQooWEVOKSBBQ1BJOiBEU0RUIDhFMDRBMTg4LCA1RjlF
IChyMiBBTEFTS0EgICAgQSBNIEkgICAgICAgIDAgSU5UTCAyMDA1MTExNykKKFhFTikgQUNQSTog
RkFDUyA4RTA1MkU4MCwgMDA0MAooWEVOKSBBQ1BJOiBBUElDIDhFMDUwMjIwLCAwMDcyIChyMyBB
TEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhFTikgQUNQSTogRlBEVCA4
RTA1MDI5OCwgMDA0NCAocjEgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMp
CihYRU4pIEFDUEk6IE1DRkcgOEUwNTAyRTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3
MjAwOSBNU0ZUICAgIDEwMDEzKQooWEVOKSBBQ1BJOiBBQUZUIDhFMDUwMzIwLCAwMEU3IChyMSBB
TEFTS0EgT0VNQUFGVCAgIDEwNzIwMDkgTVNGVCAgICAgICA5NykKKFhFTikgQUNQSTogSFBFVCA4
RTA1MDQwOCwgMDAzOCAocjEgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgICAgIDUp
CihYRU4pIEFDUEk6IElWUlMgOEUwNTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAg
ICAgMSBBTUQgICAgICAgICAwKQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwNEIwLCAwQTYwIChyMSAg
ICBBTUQgQU5OQVBVUk4gICAgICAgIDEgQU1EICAgICAgICAgMSkKKFhFTikgQUNQSTogU1NEVCA4
RTA1MEYxMCwgMDRCNyAocjIgICAgQU1EIEFOTkFQVVJOICAgICAgICAxIE1TRlQgIDQwMDAwMDAp
CihYRU4pIEFDUEk6IENSQVQgOEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAg
ICAgMSBBTUQgICAgICAgICAxKQooWEVOKSBTeXN0ZW0gUkFNOiA3NjQyTUIgKDc4MjU3MjBrQikK
KFhFTikgTm8gTlVNQSBjb25maWd1cmF0aW9uIGZvdW5kCihYRU4pIEZha2luZyBhIG5vZGUgYXQg
MDAwMDAwMDAwMDAwMDAwMC0wMDAwMDAwMjUwMDAwMDAwCihYRU4pIERvbWFpbiBoZWFwIGluaXRp
YWxpc2VkCihYRU4pIGZvdW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmZDkwMAooWEVOKSBETUkgMi43
IHByZXNlbnQuCihYRU4pIEFQSUMgYm9vdCBzdGF0ZSBpcyAneGFwaWMnCihYRU4pIFVzaW5nIEFQ
SUMgZHJpdmVyIGRlZmF1bHQKKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgKKFhF
TikgQUNQSTogU0xFRVAgSU5GTzogcG0xeF9jbnRbODA0LDBdLCBwbTF4X2V2dFs4MDAsMF0KKFhF
TikgQUNQSTogMzIvNjRYIEZBQ1MgYWRkcmVzcyBtaXNtYXRjaCBpbiBGQURUIC0gOGUwNTJlODAv
MDAwMDAwMDAwMDAwMDAwMCwgdXNpbmcgMzIKKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVw
X3ZlY1s4ZTA1MmU4Y10sIHZlY19zaXplWzIwXQooWEVOKSBBQ1BJOiBMb2NhbCBBUElDIGFkZHJl
c3MgMHhmZWUwMDAwMAooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDAxXSBsYXBpY19pZFsw
eDEwXSBlbmFibGVkKQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYKKFhF
TikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgxMV0gZW5hYmxlZCkKKFhF
TikgUHJvY2Vzc29yICMxNyA1OjMgQVBJQyB2ZXJzaW9uIDE2CihYRU4pIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpCihYRU4pIFByb2Nlc3NvciAjMTgg
NTozIEFQSUMgdmVyc2lvbiAxNgooWEVOKSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBp
Y19pZFsweDEzXSBlbmFibGVkKQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElDIHZlcnNpb24g
MTYKKFhFTikgQUNQSTogTEFQSUNfTk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4
MV0pCihYRU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwNV0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lf
YmFzZVswXSkKKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24gMzMsIGFkZHJlc3Mg
MHhmZWMwMDAwMCwgR1NJIDAtMjMKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19p
cnEgMCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAw
IGJ1c19pcnEgOSBnbG9iYWxfaXJxIDkgbG93IGxldmVsKQooWEVOKSBBQ1BJOiBJUlEwIHVzZWQg
Ynkgb3ZlcnJpZGUuCihYRU4pIEFDUEk6IElSUTIgdXNlZCBieSBvdmVycmlkZS4KKFhFTikgQUNQ
STogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLgooWEVOKSBFbmFibGluZyBBUElDIG1vZGU6ICBGbGF0
LiAgVXNpbmcgMSBJL08gQVBJQ3MKKFhFTikgQUNQSTogSFBFVCBpZDogMHgxMDIyODIxMCBiYXNl
OiAweGZlZDAwMDAwCihYRU4pIEVSU1QgdGFibGUgd2FzIG5vdCBmb3VuZAooWEVOKSBVc2luZyBB
Q1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24KKFhFTikgU01QOiBB
bGxvd2luZyA0IENQVXMgKDAgaG90cGx1ZyBDUFVzKQooWEVOKSBOUl9DUFVTOjUxMiBucl9jcHVt
YXNrX2JpdHM6NjQKKFhFTikgbWFwcGVkIEFQSUMgdG8gZmZmZjgyY2ZmZmJmYjAwMCAoZmVlMDAw
MDApCihYRU4pIG1hcHBlZCBJT0FQSUMgdG8gZmZmZjgyY2ZmZmJmYTAwMCAoZmVjMDAwMDApCihY
RU4pIElSUSBsaW1pdHM6IDI0IEdTSSwgNzYwIE1TSS9NU0ktWAooWEVOKSBVc2luZyBzY2hlZHVs
ZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpCihYRU4pIERldGVjdGVkIDM4OTMuMDIy
IE1IeiBwcm9jZXNzb3IuCihYRU4pIEluaXRpbmcgbWVtb3J5IHNoYXJpbmcuCihYRU4pIHhzdGF0
ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAweDNjMCBhbmQgc3RhdGVzOiAweDQwMDAwMDAwMDAw
MDAwMDcKKFhFTikgQU1EIEZhbTE1aCBtYWNoaW5lIGNoZWNrIHJlcG9ydGluZyBlbmFibGVkCihY
RU4pIFBDSTogTUNGRyBjb25maWd1cmF0aW9uIDA6IGJhc2UgZTAwMDAwMDAgc2VnbWVudCAwMDAw
IGJ1c2VzIDAwIC0gZmYKKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAw
IGJ1cyAwMC1mZgooWEVOKSBBTUQtVmk6IEZvdW5kIE1TSSBjYXBhYmlsaXR5IGJsb2NrIGF0IDB4
NTQKKFhFTikgQU1ELVZpOiBBQ1BJIFRhYmxlOgooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZS
UwooWEVOKSBBTUQtVmk6ICBMZW5ndGggMHg3MAooWEVOKSBBTUQtVmk6ICBSZXZpc2lvbiAweDIK
KFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0gMHhlOAooWEVOKSBBTUQtVmk6ICBPRU1fSWQgQU1ECihY
RU4pIEFNRC1WaTogIE9FTV9UYWJsZV9JZCBBTk5BUFVSTgooWEVOKSBBTUQtVmk6ICBPRU1fUmV2
aXNpb24gMHgxCihYRU4pIEFNRC1WaTogIENyZWF0b3JfSWQgQU1EIAooWEVOKSBBTUQtVmk6ICBD
cmVhdG9yX1JldmlzaW9uIDAKKFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxh
Z3MgMHhmZSBsZW4gMHg0MCBpZCAweDIKKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTog
dHlwZSAweDMgaWQgMHg4IGZsYWdzIDAKKFhFTikgQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweDgg
LT4gMHhmZmZlCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAw
eDIwMCBmbGFncyAwCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHgyMDAgLT4gMHgyZmYg
YWxpYXMgMHhhNAooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDAgaWQgMCBm
bGFncyAwCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZs
YWdzIDAKKFhFTikgQU1ELVZpOiBJVkhEIFNwZWNpYWw6IDAwMDA6MDA6MTQuMCB2YXJpZXR5IDB4
MiBoYW5kbGUgMAooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDggaWQg
MCBmbGFncyAweGQ3CihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFy
aWV0eSAweDEgaGFuZGxlIDB4NQooWEVOKSBBTUQtVmk6IElPTU1VIEV4dGVuZGVkIEZlYXR1cmVz
OgooWEVOKSAgLSBQcmVmZXRjaCBQYWdlcyBDb21tYW5kCihYRU4pICAtIFBlcmlwaGVyYWwgUGFn
ZSBTZXJ2aWNlIFJlcXVlc3QKKFhFTikgIC0gR3Vlc3QgVHJhbnNsYXRpb24KKFhFTikgIC0gSW52
YWxpZGF0ZSBBbGwgQ29tbWFuZAooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4KKFhFTikg
QU1ELVZpOiBHdWVzdCBUcmFuc2xhdGlvbiBFbmFibGVkLgooWEVOKSBBTUQtVmk6IElPTU1VIDAg
RW5hYmxlZC4KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQKKFhFTikgIC0gRG9tMCBt
b2RlOiBSZWxheGVkCihYRU4pIEludGVycnVwdCByZW1hcHBpbmcgZW5hYmxlZAooWEVOKSBHZXR0
aW5nIFZFUlNJT046IDgwMDUwMDEwCihYRU4pIEdldHRpbmcgVkVSU0lPTjogODAwNTAwMTAKKFhF
TikgR2V0dGluZyBJRDogMTAwMDAwMDAKKFhFTikgR2V0dGluZyBMVlQwOiA3MDAKKFhFTikgR2V0
dGluZyBMVlQxOiA0MDAKKFhFTikgZW5hYmxlZCBFeHRJTlQgb24gQ1BVIzAKKFhFTikgRU5BQkxJ
TkcgSU8tQVBJQyBJUlFzCihYRU4pICAtPiBVc2luZyBvbGQgQUNLIG1ldGhvZAooWEVOKSBpbml0
IElPX0FQSUMgSVJRcwooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBpbikgNS0wLCA1LTE2LCA1LTE3
LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5vdCBjb25uZWN0ZWQuCihYRU4p
IC4uVElNRVI6IHZlY3Rvcj0weEYwIGFwaWMxPTAgcGluMT0yIGFwaWMyPS0xIHBpbjI9LTEKKFhF
TikgbnVtYmVyIG9mIE1QIElSUSBzb3VyY2VzOiAxNS4KKFhFTikgbnVtYmVyIG9mIElPLUFQSUMg
IzUgcmVnaXN0ZXJzOiAyNC4KKFhFTikgdGVzdGluZyB0aGUgSU8gQVBJQy4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uCihYRU4pIElPIEFQSUMgIzUuLi4uLi4KKFhFTikgLi4uLiByZWdpc3RlciAjMDA6
IDA1MDAwMDAwCihYRU4pIC4uLi4uLi4gICAgOiBwaHlzaWNhbCBBUElDIGlkOiAwNQooWEVOKSAu
Li4uLi4uICAgIDogRGVsaXZlcnkgVHlwZTogMAooWEVOKSAuLi4uLi4uICAgIDogTFRTICAgICAg
ICAgIDogMAooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMTogMDAxNzgwMjEKKFhFTikgLi4uLi4uLiAg
ICAgOiBtYXggcmVkaXJlY3Rpb24gZW50cmllczogMDAxNwooWEVOKSAuLi4uLi4uICAgICA6IFBS
USBpbXBsZW1lbnRlZDogMQooWEVOKSAuLi4uLi4uICAgICA6IElPIEFQSUMgdmVyc2lvbjogMDAy
MQooWEVOKSAuLi4uIHJlZ2lzdGVyICMwMjogMDUwMDAwMDAKKFhFTikgLi4uLi4uLiAgICAgOiBh
cmJpdHJhdGlvbjogMDUKKFhFTikgLi4uLiByZWdpc3RlciAjMDM6IDA1MDE4MDIxCihYRU4pIC4u
Li4uLi4gICAgIDogQm9vdCBEVCAgICA6IDEKKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFi
bGU6CihYRU4pICBOUiBMb2cgUGh5IE1hc2sgVHJpZyBJUlIgUG9sIFN0YXQgRGVzdCBEZWxpIFZl
Y3Q6ICAgCihYRU4pICAwMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwCihYRU4pICAwMSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMw
CihYRU4pICAwMiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEYwCihY
RU4pICAwMyAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDM4CihYRU4p
ICAwNCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwCihYRU4pICAw
NSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQ4CihYRU4pICAwNiAw
MDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDUwCihYRU4pICAwNyAwMDEg
MDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4CihYRU4pICAwOCAwMDEgMDEg
IDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDYwCihYRU4pICAwOSAwMDEgMDEgIDEg
ICAgMSAgICAwICAgMSAgIDAgICAgMSAgICAwICAgIDAwCihYRU4pICAwYSAwMDEgMDEgIDAgICAg
MCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIEYxCihYRU4pICAwYiAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDcwCihYRU4pICAwYyAwMDEgMDEgIDAgICAgMCAgICAw
ICAgMCAgIDAgICAgMSAgICAxICAgIDc4CihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDg4CihYRU4pICAwZSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAg
IDAgICAgMSAgICAxICAgIDkwCihYRU4pICAwZiAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDk4CihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAg
MCAgICAxICAgIDMwCihYRU4pICAxMSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAxICAgIDMwCihYRU4pICAxMiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAx
ICAgIDMwCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwCihYRU4pICAxNCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMw
CihYRU4pICAxNSAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwCihY
RU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwCihYRU4p
ICAxNyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwCihYRU4pIFVz
aW5nIHZlY3Rvci1iYXNlZCBpbmRleGluZwooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOgooWEVO
KSBJUlEyNDAgLT4gMDoyCihYRU4pIElSUTQ4IC0+IDA6MQooWEVOKSBJUlE1NiAtPiAwOjMKKFhF
TikgSVJRNjQgLT4gMDo0CihYRU4pIElSUTcyIC0+IDA6NQooWEVOKSBJUlE4MCAtPiAwOjYKKFhF
TikgSVJRODggLT4gMDo3CihYRU4pIElSUTk2IC0+IDA6OAooWEVOKSBJUlExMDQgLT4gMDo5CihY
RU4pIElSUTI0MSAtPiAwOjEwCihYRU4pIElSUTExMiAtPiAwOjExCihYRU4pIElSUTEyMCAtPiAw
OjEyCihYRU4pIElSUTEzNiAtPiAwOjEzCihYRU4pIElSUTE0NCAtPiAwOjE0CihYRU4pIElSUTE1
MiAtPiAwOjE1CihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLiBkb25l
LgooWEVOKSBVc2luZyBsb2NhbCBBUElDIHRpbWVyIGludGVycnVwdHMuCihYRU4pIGNhbGlicmF0
aW5nIEFQSUMgdGltZXIgLi4uCihYRU4pIC4uLi4uIENQVSBjbG9jayBzcGVlZCBpcyAzODkzLjAw
MzMgTUh6LgooWEVOKSAuLi4uLiBob3N0IGJ1cyBjbG9jayBzcGVlZCBpcyA5OS44MjA2IE1Iei4K
KFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM5CihYRU4pIFBsYXRmb3JtIHRpbWVyIGlzIDE0
LjMxOE1IeiBIUEVUCihYRU4pIEFsbG9jYXRlZCBjb25zb2xlIHJpbmcgb2YgMzIgS2lCLgooWEVO
KSBIVk06IEFTSURzIGVuYWJsZWQuCihYRU4pIFNWTTogU3VwcG9ydGVkIGFkdmFuY2VkIGZlYXR1
cmVzOgooWEVOKSAgLSBOZXN0ZWQgUGFnZSBUYWJsZXMgKE5QVCkKKFhFTikgIC0gTGFzdCBCcmFu
Y2ggUmVjb3JkIChMQlIpIFZpcnR1YWxpc2F0aW9uCihYRU4pICAtIE5leHQtUklQIFNhdmVkIG9u
ICNWTUVYSVQKKFhFTikgIC0gVk1DQiBDbGVhbiBCaXRzCihYRU4pICAtIERlY29kZUFzc2lzdHMK
KFhFTikgIC0gUGF1c2UtSW50ZXJjZXB0IEZpbHRlcgooWEVOKSAgLSBUU0MgUmF0ZSBNU1IKKFhF
TikgSFZNOiBTVk0gZW5hYmxlZAooWEVOKSBIVk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAo
SEFQKSBkZXRlY3RlZAooWEVOKSBIVk06IEhBUCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCCihY
RU4pIEhWTTogUFZIIG1vZGUgbm90IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtCihYRU4pIG1h
c2tlZCBFeHRJTlQgb24gQ1BVIzEKKFhFTikgbWljcm9jb2RlOiBDUFUxIGNvbGxlY3RfY3B1X2lu
Zm86IHBhdGNoX2lkPTB4NjAwMTExOQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyCihYRU4p
IG1pY3JvY29kZTogQ1BVMiBjb2xsZWN0X2NwdV9pbmZvOiBwYXRjaF9pZD0wCihYRU4pIG1hc2tl
ZCBFeHRJTlQgb24gQ1BVIzMKKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86
IHBhdGNoX2lkPTAKKFhFTikgQnJvdWdodCB1cCA0IENQVXMKKFhFTikgQUNQSSBzbGVlcCBtb2Rl
czogUzMKKFhFTikgTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZy
ZXF1ZW5jeQooWEVOKSBtY2hlY2tfcG9sbDogTWFjaGluZSBjaGVjayBwb2xsaW5nIHRpbWVyIHN0
YXJ0ZWQuCihYRU4pIG10cnI6IHlvdXIgQ1BVcyBoYWQgaW5jb25zaXN0ZW50IHZhcmlhYmxlIE1U
UlIgc2V0dGluZ3MKKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVw
IGFsbCBDUFVzLgooWEVOKSBtdHJyOiBjb3JyZWN0ZWQgY29uZmlndXJhdGlvbi4KKFhFTikgKioq
IExPQURJTkcgRE9NQUlOIDAgKioqCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNv
bXBhdDMyCihYRU4pICBEb20wIGtlcm5lbDogNjQtYml0LCBsc2IsIHBhZGRyIDB4MjAwMCAtPiAw
eGMwNTAwMAooWEVOKSBQSFlTSUNBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6CihYRU4pICBEb20wIGFs
bG9jLjogICAwMDAwMDAwMjIzMDAwMDAwLT4wMDAwMDAwMjI0MDAwMDAwICgxNzQyMDgzIHBhZ2Vz
IHRvIGJlIGFsbG9jYXRlZCkKKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGNmNzUwMDAt
PjAwMDAwMDAyNGZmZmZlMDAKKFhFTikgVklSVFVBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6CihYRU4p
ICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgwMDAyMDAwLT5mZmZmZmZmZjgwYzA1MDAwCihYRU4p
ICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihYRU4p
ICBQaHlzLU1hY2ggbWFwOiBmZmZmZWEwMDAwMDAwMDAwLT5mZmZmZWEwMDAwZDZhYzcwCihYRU4p
ICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgwYzA1MDAwLT5mZmZmZmZmZjgwYzA1NGI0CihYRU4p
ICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgwYzA2MDAwLT5mZmZmZmZmZjgwYzExMDAwCihYRU4p
ICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgwYzExMDAwLT5mZmZmZmZmZjgwYzEyMDAwCihYRU4p
ICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZmZjgxMDAwMDAwCihYRU4p
ICBFTlRSWSBBRERSRVNTOiBmZmZmZmZmZjgwMDAyMDAwCihYRU4pIERvbTAgaGFzIG1heGltdW0g
NCBWQ1BVcwooWEVOKSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MDAuMAoo
WEVOKSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2UgMDAwMDowMDowMC4yCihYRU4pIHNldHVw
IDAwMDA6MDA6MDAuMiBmb3IgZDAgZmFpbGVkICgtMTkpCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5LCB0eXBlID0gMHgxLCByb290IHRhYmxl
ID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEwLCB0eXBlID0gMHgyLCByb290
IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgwLCB0eXBlID0gMHgx
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgxLCB0eXBl
ID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDg4
LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2lu
ZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweDkwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAs
IHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZp
Y2UgaWQgPSAweDkyLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxl
OiBkZXZpY2UgaWQgPSAweDk4LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAs
IGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRm
YzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGExLCB0eXBlID0gMHg3LCByb290IHRh
YmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGEzLCB0eXBlID0gMHg3LCBy
b290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVO
KSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE0LCB0eXBlID0g
MHg1LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0g
MwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE1LCB0
eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eGE4LCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweGFhLCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGFiLCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGMwLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMxLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgy
MjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMyLCB0eXBlID0gMHg2LCByb290IHRhYmxl
ID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMzLCB0eXBlID0gMHg2LCByb290
IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGM0LCB0eXBlID0gMHg2
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGM1LCB0eXBl
ID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEw
MCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHgxMDEsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4MjMwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweDIzOCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgzMDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NDAwLCB0eXBlID0gMHgxLCByb290IHRh
YmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwMCwgdHlwZSA9IDB4MSwg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhF
TikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4KKFhFTikgSW5pdGlhbCBsb3cgbWVtb3J5IHZp
cnEgdGhyZXNob2xkIHNldCBhdCAweDQwMDAgcGFnZXMuCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFs
bAooWEVOKSBHdWVzdCBMb2dsZXZlbDogTm90aGluZyAoUmF0ZS1saW1pdGVkOiBFcnJvcnMgYW5k
IHdhcm5pbmdzKQooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+IERPTTAgKHR5cGUgJ0NUUkwtYScg
dGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikKKFhFTikgRnJlZWQgMjkya0IgaW5p
dCBtZW1vcnkuCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAw
MDAwMDAwMDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAw
MDAwLgooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAw
MDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAwMC4K
KFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAw
MDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAuCihYRU4p
IHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwMDAwMDA0MTMg
ZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAwMDAwLgooWEVOKSBQQ0k6
IFVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBidXMgMDAtZmYKKFhFTikgbW0uYzo4MDk6IGQw
OiBGb3JjaW5nIHJlYWQtb25seSBhY2Nlc3MgdG8gTUZOIGUwMDAyCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjIKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MDEuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAyLjAKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTAuMQooWEVOKSBT
Ui1JT1YgZGV2aWNlIDAwMDA6MDA6MTEuMCBoYXMgaXRzIHZpcnR1YWwgZnVuY3Rpb25zIGFscmVh
ZHkgZW5hYmxlZCAoMDFhYikKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMS4wCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTIuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjEyLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTMuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjAKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MTQuMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjQKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxNC41CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMAooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE1LjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
NS4zCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMAooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjE4LjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4yCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4
LjQKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC41CihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDE6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjEKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMjowNi4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDI6MDcu
MAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAzOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowNDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDU6MDAuMAooWEVOKSBJT0FQ
SUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xNiAtPiAweGEwIC0+IElSUSAxNiBNb2Rl
OjEgQWN0aXZlOjEpCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE3
IC0+IDB4YTggLT4gSVJRIDE3IE1vZGU6MSBBY3RpdmU6MSkKKFhFTikgSU9BUElDWzBdOiBTZXQg
UENJIHJvdXRpbmcgZW50cnkgKDUtMTggLT4gMHhiMCAtPiBJUlEgMTggTW9kZToxIEFjdGl2ZTox
KQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOSAtPiAweGI4IC0+
IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5n
IGVudHJ5ICg1LTIxIC0+IDB4YzAgLT4gSVJRIDIxIE1vZGU6MSBBY3RpdmU6MSkKKFhFTikgSU9B
UElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDUtMjIgLT4gMHhjOCAtPiBJUlEgMjIgTW9k
ZToxIEFjdGl2ZToxKQpbICAgMTYuMjYyOTg5XSBVbmFibGUgdG8gcmVhZCBzeXNycSBjb2RlIGlu
IGNvbnRyb2wvc3lzcnEKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkKKFhFTikgQVBJ
QyBlcnJvciBvbiBDUFUyOiAwMCg0MCkKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUzOiAwMCg0MCkK
KFhFTikgQVBJQyBlcnJvciBvbiBDUFUwOiAwMCg0MCkKPT09IGh2bSBkb20gc3RhcnRlZCAgPT09
PQooWEVOKSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUgPSAw
eDEwMDAyOQooWEVOKSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFi
bGUgPSAweDEwMDFiZQooWEVOKSBBTUQtVmk6IERpc2FibGU6IGRldmljZSBpZCA9IDB4OCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFi
bGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MTAwMWJlMDAw
LCBkb21haW4gPSAyLCBwYWdpbmcgbW9kZSA9IDQKKFhFTikgQU1ELVZpOiBSZS1hc3NpZ24gMDAw
MDowMDowMS4wIGZyb20gZG9tMCB0byBkb20yCihYRU4pIEFNRC1WaTogRGlzYWJsZTogZGV2aWNl
IGlkID0gMHg5LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5LCB0eXBlID0gMHgxLCByb290IHRhYmxl
ID0gMHgxMDAxYmUwMDAsIGRvbWFpbiA9IDIsIHBhZ2luZyBtb2RlID0gNAooWEVOKSBBTUQtVmk6
IFJlLWFzc2lnbiAwMDAwOjAwOjAxLjEgZnJvbSBkb20wIHRvIGRvbTIKPT09IHdob2xlIHN5c3Rl
bSBjcmFzaGVkID09PSAK
--001a113a332af9ed1b04f180c397
Content-Type: application/octet-stream; name="xl4.4-dmesg"
Content-Disposition: attachment; filename="xl4.4-dmesg"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sjf4m1

IFhlbiA0LjQuMF8wMi0yOTcuMQooWEVOKSBYZW4gdmVyc2lvbiA0LjQuMF8wMi0yOTcuMSAoYWJ1
aWxkQCkgKGdjYyAoU1VTRSBMaW51eCkgNC44LjEgMjAxMzA5MDkgW2djYy00XzgtYnJhbmNoIHJl
dmlzaW9uIDIwMjM4OF0pIGRlYnVnPW4gVHVlIEphbiAyOCAxNjowODo0OCBVVEMgMjAxNAooWEVO
KSBMYXRlc3QgQ2hhbmdlU2V0OiAKKFhFTikgQm9vdGxvYWRlcjogR1JVQjIgMi4wMAooWEVOKSBD
b21tYW5kIGxpbmU6IGxvZ2x2bD1hbGwgaW9tbXU9ZGVidWcsdmVyYm9zZSBhcGljX3ZlcmJvc2l0
eT1kZWJ1ZyBjb25zb2xlPWNvbTEgY29tMT0xMTUyMDAsOG4xLHBjaQooWEVOKSBWaWRlbyBpbmZv
cm1hdGlvbjoKKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNgooWEVOKSAg
VkJFL0REQyBtZXRob2RzOiBWMjsgRURJRCB0cmFuc2ZlciB0aW1lOiAxIHNlY29uZHMKKFhFTikg
RGlzYyBpbmZvcm1hdGlvbjoKKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMKKFhFTikgIEZv
dW5kIDQgRUREIGluZm9ybWF0aW9uIHN0cnVjdHVyZXMKKFhFTikgWGVuLWU4MjAgUkFNIG1hcDoK
KFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1c2FibGUpCihYRU4p
ICAwMDAwMDAwMDAwMDllODAwIC0gMDAwMDAwMDAwMDBhMDAwMCAocmVzZXJ2ZWQpCihYRU4pICAw
MDAwMDAwMDAwMGUwMDAwIC0gMDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAw
MDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDA4
ZDY4YjAwMCAtIDAwMDAwMDAwOGRkMGEwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDA4ZGQw
YTAwMCAtIDAwMDAwMDAwOGUwNWEwMDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDA4ZTA1YTAw
MCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDA4ZWE0NTAwMCAt
IDAwMDAwMDAwOGVhNDYwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwOGVhNDYwMDAgLSAwMDAw
MDAwMDhlYzRjMDAwIChBQ1BJIE5WUykKKFhFTikgIDAwMDAwMDAwOGVjNGMwMDAgLSAwMDAwMDAw
MDhmMDY0MDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDhmMDY0MDAwIC0gMDAwMDAwMDA4Zjdm
MzAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDhmN2YzMDAwIC0gMDAwMDAwMDA4ZjgwMDAw
MCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDBmZWMwMDAwMCAtIDAwMDAwMDAwZmVjMDEwMDAgKHJl
c2VydmVkKQooWEVOKSAgMDAwMDAwMDBmZWMxMDAwMCAtIDAwMDAwMDAwZmVjMTEwMDAgKHJlc2Vy
dmVkKQooWEVOKSAgMDAwMDAwMDBmZWQwMDAwMCAtIDAwMDAwMDAwZmVkMDEwMDAgKHJlc2VydmVk
KQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAwMDAwMDAwZmVkOTAwMDAgKHJlc2VydmVkKQoo
WEVOKSAgMDAwMDAwMDBmZjgwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgKHJlc2VydmVkKQooWEVO
KSAgMDAwMDAwMDEwMDAwMTAwMCAtIDAwMDAwMDAyNTAwMDAwMDAgKHVzYWJsZSkKKFhFTikgQUNQ
STogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIgQUxBU0tBKQooWEVOKSBBQ1BJOiBYU0RUIDhFMDRB
MDc4LCAwMDc0IChyMSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykKKFhF
TikgQUNQSTogRkFDUCA4RTA1MDEyOCwgMDBGNCAocjQgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5
IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEkgV2FybmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25h
bCBmaWVsZCAiUG0yQ29udHJvbEJsb2NrIiBoYXMgemVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAw
MDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEyNl0KKFhFTikgQUNQSTogRFNEVCA4RTA0QTE4OCwgNUY5
RSAocjIgQUxBU0tBICAgIEEgTSBJICAgICAgICAwIElOVEwgMjAwNTExMTcpCihYRU4pIEFDUEk6
IEZBQ1MgOEUwNTJFODAsIDAwNDAKKFhFTikgQUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMg
QUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpCihYRU4pIEFDUEk6IEZQRFQg
OEUwNTAyOTgsIDAwNDQgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgIDEwMDEz
KQooWEVOKSBBQ1BJOiBNQ0ZHIDhFMDUwMkUwLCAwMDNDIChyMSBBTEFTS0EgICAgQSBNIEkgIDEw
NzIwMDkgTVNGVCAgICAxMDAxMykKKFhFTikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEg
QUxBU0tBIE9FTUFBRlQgICAxMDcyMDA5IE1TRlQgICAgICAgOTcpCihYRU4pIEFDUEk6IEhQRVQg
OEUwNTA0MDgsIDAwMzggKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBBTUkgICAgICAgICA1
KQooWEVOKSBBQ1BJOiBJVlJTIDhFMDUwNDQwLCAwMDcwIChyMiAgICBBTUQgQU5OQVBVUk4gICAg
ICAgIDEgQU1EICAgICAgICAgMCkKKFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEg
ICAgQU1EIEFOTkFQVVJOICAgICAgICAxIEFNRCAgICAgICAgIDEpCihYRU4pIEFDUEk6IFNTRFQg
OEUwNTBGMTAsIDA0QjcgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBNU0ZUICA0MDAwMDAw
KQooWEVOKSBBQ1BJOiBDUkFUIDhFMDUxM0M4LCAwMkY4IChyMSAgICBBTUQgQU5OQVBVUk4gICAg
ICAgIDEgQU1EICAgICAgICAgMSkKKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0Ip
CihYRU4pIE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZAooWEVOKSBGYWtpbmcgYSBub2RlIGF0
IDAwMDAwMDAwMDAwMDAwMDAtMDAwMDAwMDI1MDAwMDAwMAooWEVOKSBEb21haW4gaGVhcCBpbml0
aWFsaXNlZAooWEVOKSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgMDAwZmQ5MDAKKFhFTikgRE1JIDIu
NyBwcmVzZW50LgooWEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJwooWEVOKSBVc2luZyBB
UElDIGRyaXZlciBkZWZhdWx0CihYRU4pIEFDUEk6IFBNLVRpbWVyIElPIFBvcnQ6IDB4ODA4CihY
RU4pIEFDUEk6IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdCihY
RU4pIEFDUEk6IDMyLzY0WCBGQUNTIGFkZHJlc3MgbWlzbWF0Y2ggaW4gRkFEVCAtIDhlMDUyZTgw
LzAwMDAwMDAwMDAwMDAwMDAsIHVzaW5nIDMyCihYRU4pIEFDUEk6ICAgICAgICAgICAgIHdha2V1
cF92ZWNbOGUwNTJlOGNdLCB2ZWNfc2l6ZVsyMF0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDAKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRb
MHgxMF0gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMxNiA1OjMgQVBJQyB2ZXJzaW9uIDE2CihY
RU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQpCihY
RU4pIFByb2Nlc3NvciAjMTcgNTozIEFQSUMgdmVyc2lvbiAxNgooWEVOKSBBQ1BJOiBMQVBJQyAo
YWNwaV9pZFsweDAzXSBsYXBpY19pZFsweDEyXSBlbmFibGVkKQooWEVOKSBQcm9jZXNzb3IgIzE4
IDU6MyBBUElDIHZlcnNpb24gMTYKKFhFTikgQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFw
aWNfaWRbMHgxM10gZW5hYmxlZCkKKFhFTikgUHJvY2Vzc29yICMxOSA1OjMgQVBJQyB2ZXJzaW9u
IDE2CihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVkZ2UgbGludFsw
eDFdKQooWEVOKSBBQ1BJOiBJT0FQSUMgKGlkWzB4MDVdIGFkZHJlc3NbMHhmZWMwMDAwMF0gZ3Np
X2Jhc2VbMF0pCihYRU4pIElPQVBJQ1swXTogYXBpY19pZCA1LCB2ZXJzaW9uIDMzLCBhZGRyZXNz
IDB4ZmVjMDAwMDAsIEdTSSAwLTIzCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChidXMgMCBidXNf
aXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChidXMg
MCBidXNfaXJxIDkgZ2xvYmFsX2lycSA5IGxvdyBsZXZlbCkKKFhFTikgQUNQSTogSVJRMCB1c2Vk
IGJ5IG92ZXJyaWRlLgooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuCihYRU4pIEFD
UEk6IElSUTkgdXNlZCBieSBvdmVycmlkZS4KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgRmxh
dC4gIFVzaW5nIDEgSS9PIEFQSUNzCihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4MTAyMjgyMTAgYmFz
ZTogMHhmZWQwMDAwMAooWEVOKSBFUlNUIHRhYmxlIHdhcyBub3QgZm91bmQKKFhFTikgVXNpbmcg
QUNQSSAoTUFEVCkgZm9yIFNNUCBjb25maWd1cmF0aW9uIGluZm9ybWF0aW9uCihYRU4pIFNNUDog
QWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykKKFhFTikgTlJfQ1BVUzo1MTIgbnJfY3B1
bWFza19iaXRzOjY0CihYRU4pIG1hcHBlZCBBUElDIHRvIGZmZmY4MmNmZmZiZmIwMDAgKGZlZTAw
MDAwKQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4MmNmZmZiZmEwMDAgKGZlYzAwMDAwKQoo
WEVOKSBJUlEgbGltaXRzOiAyNCBHU0ksIDc2MCBNU0kvTVNJLVgKKFhFTikgVXNpbmcgc2NoZWR1
bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciAoY3JlZGl0KQooWEVOKSBEZXRlY3RlZCAzODkzLjAx
MyBNSHogcHJvY2Vzc29yLgooWEVOKSBJbml0aW5nIG1lbW9yeSBzaGFyaW5nLgooWEVOKSB4c3Rh
dGVfaW5pdDogdXNpbmcgY250eHRfc2l6ZTogMHgzYzAgYW5kIHN0YXRlczogMHg0MDAwMDAwMDAw
MDAwMDA3CihYRU4pIEFNRCBGYW0xNWggbWFjaGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZAoo
WEVOKSBQQ0k6IE1DRkcgY29uZmlndXJhdGlvbiAwOiBiYXNlIGUwMDAwMDAwIHNlZ21lbnQgMDAw
MCBidXNlcyAwMCAtIGZmCihYRU4pIFBDSTogTm90IHVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAw
MCBidXMgMDAtZmYKKFhFTikgQU1ELVZpOiBGb3VuZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAw
eDU0CihYRU4pIEFNRC1WaTogQUNQSSBUYWJsZToKKFhFTikgQU1ELVZpOiAgU2lnbmF0dXJlIElW
UlMKKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4NzAKKFhFTikgQU1ELVZpOiAgUmV2aXNpb24gMHgy
CihYRU4pIEFNRC1WaTogIENoZWNrU3VtIDB4ZTgKKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRAoo
WEVOKSBBTUQtVmk6ICBPRU1fVGFibGVfSWQgQU5OQVBVUk4KKFhFTikgQU1ELVZpOiAgT0VNX1Jl
dmlzaW9uIDB4MQooWEVOKSBBTUQtVmk6ICBDcmVhdG9yX0lkIEFNRCAKKFhFTikgQU1ELVZpOiAg
Q3JlYXRvcl9SZXZpc2lvbiAwCihYRU4pIEFNRC1WaTogSVZSUyBCbG9jazogdHlwZSAweDEwIGZs
YWdzIDB4ZmUgbGVuIDB4NDAgaWQgMHgyCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6
IHR5cGUgMHgzIGlkIDB4OCBmbGFncyAwCihYRU4pIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHg4
IC0+IDB4ZmZmZQooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDMgaWQg
MHgyMDAgZmxhZ3MgMAooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZm
IGFsaWFzIDB4YTQKKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAwIGlkIDAg
ZmxhZ3MgMAooWEVOKSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDggaWQgMCBm
bGFncyAwCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAw
eDIgaGFuZGxlIDAKKFhFTikgQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDQ4IGlk
IDAgZmxhZ3MgMHhkNwooWEVOKSBBTUQtVmk6IElWSEQgU3BlY2lhbDogMDAwMDowMDoxNC4wIHZh
cmlldHkgMHgxIGhhbmRsZSAweDUKKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJl
czoKKFhFTikgIC0gUHJlZmV0Y2ggUGFnZXMgQ29tbWFuZAooWEVOKSAgLSBQZXJpcGhlcmFsIFBh
Z2UgU2VydmljZSBSZXF1ZXN0CihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uCihYRU4pICAtIElu
dmFsaWRhdGUgQWxsIENvbW1hbmQKKFhFTikgQU1ELVZpOiBQUFIgTG9nIEVuYWJsZWQuCihYRU4p
IEFNRC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4KKFhFTikgQU1ELVZpOiBJT01NVSAw
IEVuYWJsZWQuCihYRU4pIEkvTyB2aXJ0dWFsaXNhdGlvbiBlbmFibGVkCihYRU4pICAtIERvbTAg
bW9kZTogUmVsYXhlZAooWEVOKSBJbnRlcnJ1cHQgcmVtYXBwaW5nIGVuYWJsZWQKKFhFTikgR2V0
dGluZyBWRVJTSU9OOiA4MDA1MDAxMAooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwCihY
RU4pIEdldHRpbmcgSUQ6IDEwMDAwMDAwCihYRU4pIEdldHRpbmcgTFZUMDogNzAwCihYRU4pIEdl
dHRpbmcgTFZUMTogNDAwCihYRU4pIGVuYWJsZWQgRXh0SU5UIG9uIENQVSMwCihYRU4pIEVOQUJM
SU5HIElPLUFQSUMgSVJRcwooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBtZXRob2QKKFhFTikgaW5p
dCBJT19BUElDIElSUXMKKFhFTikgIElPLUFQSUMgKGFwaWNpZC1waW4pIDUtMCwgNS0xNiwgNS0x
NywgNS0xOCwgNS0xOSwgNS0yMCwgNS0yMSwgNS0yMiwgNS0yMyBub3QgY29ubmVjdGVkLgooWEVO
KSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0tMSBwaW4yPS0xCihY
RU4pIG51bWJlciBvZiBNUCBJUlEgc291cmNlczogMTUuCihYRU4pIG51bWJlciBvZiBJTy1BUElD
ICM1IHJlZ2lzdGVyczogMjQuCihYRU4pIHRlc3RpbmcgdGhlIElPIEFQSUMuLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLgooWEVOKSBJTyBBUElDICM1Li4uLi4uCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAw
OiAwNTAwMDAwMAooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwgQVBJQyBpZDogMDUKKFhFTikg
Li4uLi4uLiAgICA6IERlbGl2ZXJ5IFR5cGU6IDAKKFhFTikgLi4uLi4uLiAgICA6IExUUyAgICAg
ICAgICA6IDAKKFhFTikgLi4uLiByZWdpc3RlciAjMDE6IDAwMTc4MDIxCihYRU4pIC4uLi4uLi4g
ICAgIDogbWF4IHJlZGlyZWN0aW9uIGVudHJpZXM6IDAwMTcKKFhFTikgLi4uLi4uLiAgICAgOiBQ
UlEgaW1wbGVtZW50ZWQ6IDEKKFhFTikgLi4uLi4uLiAgICAgOiBJTyBBUElDIHZlcnNpb246IDAw
MjEKKFhFTikgLi4uLiByZWdpc3RlciAjMDI6IDA1MDAwMDAwCihYRU4pIC4uLi4uLi4gICAgIDog
YXJiaXRyYXRpb246IDA1CihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAzOiAwNTAxODAyMQooWEVOKSAu
Li4uLi4uICAgICA6IEJvb3QgRFQgICAgOiAxCihYRU4pIC4uLi4gSVJRIHJlZGlyZWN0aW9uIHRh
YmxlOgooWEVOKSAgTlIgTG9nIFBoeSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBW
ZWN0OiAgIAooWEVOKSAgMDAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAg
ICAzMAooWEVOKSAgMDEgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAz
MAooWEVOKSAgMDIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMAoo
WEVOKSAgMDMgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICAzOAooWEVO
KSAgMDQgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0MAooWEVOKSAg
MDUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OAooWEVOKSAgMDYg
MDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1MAooWEVOKSAgMDcgMDAx
IDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA1OAooWEVOKSAgMDggMDAxIDAx
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA2MAooWEVOKSAgMDkgMDAxIDAxICAx
ICAgIDEgICAgMCAgIDEgICAwICAgIDEgICAgMCAgICAwMAooWEVOKSAgMGEgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAg
ICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA3MAooWEVOKSAgMGMgMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA3OAooWEVOKSAgMGQgMDAxIDAxICAwICAgIDAgICAgMCAg
IDAgICAwICAgIDEgICAgMSAgICA4OAooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA5MAooWEVOKSAgMGYgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAw
ICAgIDEgICAgMSAgICA5OAooWEVOKSAgMTAgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAg
IDAgICAgMSAgICAzMAooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAg
ICAgMSAgICAzMAooWEVOKSAgMTIgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MSAgICAzMAooWEVOKSAgMTMgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAg
ICAzMAooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MAooWEVOKSAgMTUgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMAoo
WEVOKSAgMTYgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMAooWEVO
KSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMAooWEVOKSBV
c2luZyB2ZWN0b3ItYmFzZWQgaW5kZXhpbmcKKFhFTikgSVJRIHRvIHBpbiBtYXBwaW5nczoKKFhF
TikgSVJRMjQwIC0+IDA6MgooWEVOKSBJUlE0OCAtPiAwOjEKKFhFTikgSVJRNTYgLT4gMDozCihY
RU4pIElSUTY0IC0+IDA6NAooWEVOKSBJUlE3MiAtPiAwOjUKKFhFTikgSVJRODAgLT4gMDo2CihY
RU4pIElSUTg4IC0+IDA6NwooWEVOKSBJUlE5NiAtPiAwOjgKKFhFTikgSVJRMTA0IC0+IDA6OQoo
WEVOKSBJUlEyNDEgLT4gMDoxMAooWEVOKSBJUlExMTIgLT4gMDoxMQooWEVOKSBJUlExMjAgLT4g
MDoxMgooWEVOKSBJUlExMzYgLT4gMDoxMwooWEVOKSBJUlExNDQgLT4gMDoxNAooWEVOKSBJUlEx
NTIgLT4gMDoxNQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4gZG9u
ZS4KKFhFTikgVXNpbmcgbG9jYWwgQVBJQyB0aW1lciBpbnRlcnJ1cHRzLgooWEVOKSBjYWxpYnJh
dGluZyBBUElDIHRpbWVyIC4uLgooWEVOKSAuLi4uLiBDUFUgY2xvY2sgc3BlZWQgaXMgMzg5Mi45
OTgxIE1Iei4KKFhFTikgLi4uLi4gaG9zdCBidXMgY2xvY2sgc3BlZWQgaXMgOTkuODIwNCBNSHou
CihYRU4pIC4uLi4uIGJ1c19zY2FsZSA9IDB4NjYzOQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAx
NC4zMThNSHogSFBFVAooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDMyIEtpQi4KKFhF
TikgSFZNOiBBU0lEcyBlbmFibGVkLgooWEVOKSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0
dXJlczoKKFhFTikgIC0gTmVzdGVkIFBhZ2UgVGFibGVzIChOUFQpCihYRU4pICAtIExhc3QgQnJh
bmNoIFJlY29yZCAoTEJSKSBWaXJ0dWFsaXNhdGlvbgooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBv
biAjVk1FWElUCihYRU4pICAtIFZNQ0IgQ2xlYW4gQml0cwooWEVOKSAgLSBEZWNvZGVBc3Npc3Rz
CihYRU4pICAtIFBhdXNlLUludGVyY2VwdCBGaWx0ZXIKKFhFTikgIC0gVFNDIFJhdGUgTVNSCihY
RU4pIEhWTTogU1ZNIGVuYWJsZWQKKFhFTikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcg
KEhBUCkgZGV0ZWN0ZWQKKFhFTikgSFZNOiBIQVAgcGFnZSBzaXplczogNGtCLCAyTUIsIDFHQgoo
WEVOKSBIVk06IFBWSCBtb2RlIG5vdCBzdXBwb3J0ZWQgb24gdGhpcyBwbGF0Zm9ybQooWEVOKSBt
YXNrZWQgRXh0SU5UIG9uIENQVSMxCihYRU4pIG1pY3JvY29kZTogQ1BVMSBjb2xsZWN0X2NwdV9p
bmZvOiBwYXRjaF9pZD0weDYwMDExMTkKKFhFTikgbWFza2VkIEV4dElOVCBvbiBDUFUjMgooWEVO
KSBtaWNyb2NvZGU6IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MAooWEVOKSBtYXNr
ZWQgRXh0SU5UIG9uIENQVSMzCihYRU4pIG1pY3JvY29kZTogQ1BVMyBjb2xsZWN0X2NwdV9pbmZv
OiBwYXRjaF9pZD0wCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzCihYRU4pIEFDUEkgc2xlZXAgbW9k
ZXM6IFMzCihYRU4pIE1DQTogVXNlIGh3IHRocmVzaG9sZGluZyB0byBhZGp1c3QgcG9sbGluZyBm
cmVxdWVuY3kKKFhFTikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBz
dGFydGVkLgooWEVOKSBtdHJyOiB5b3VyIENQVXMgaGFkIGluY29uc2lzdGVudCB2YXJpYWJsZSBN
VFJSIHNldHRpbmdzCihYRU4pIG10cnI6IHByb2JhYmx5IHlvdXIgQklPUyBkb2VzIG5vdCBzZXR1
cCBhbGwgQ1BVcy4KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uCihYRU4pICoq
KiBMT0FESU5HIERPTUFJTiAwICoqKgooWEVOKSAgWGVuICBrZXJuZWw6IDY0LWJpdCwgbHNiLCBj
b21wYXQzMgooWEVOKSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgbHNiLCBwYWRkciAweDIwMDAgLT4g
MHhjMDUwMDAKKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOgooWEVOKSAgRG9tMCBh
bGxvYy46ICAgMDAwMDAwMDIyMzAwMDAwMC0+MDAwMDAwMDIyNDAwMDAwMCAoMTc0MjA4MyBwYWdl
cyB0byBiZSBhbGxvY2F0ZWQpCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRjZjc1MDAw
LT4wMDAwMDAwMjRmZmZmZTAwCihYRU4pIFZJUlRVQUwgTUVNT1JZIEFSUkFOR0VNRU5UOgooWEVO
KSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4MDAwMjAwMC0+ZmZmZmZmZmY4MGMwNTAwMAooWEVO
KSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDAwMDAwMDAwMC0+MDAwMDAwMDAwMDAwMDAwMAooWEVO
KSAgUGh5cy1NYWNoIG1hcDogZmZmZmVhMDAwMDAwMDAwMC0+ZmZmZmVhMDAwMGQ2YWM3MAooWEVO
KSAgU3RhcnQgaW5mbzogICAgZmZmZmZmZmY4MGMwNTAwMC0+ZmZmZmZmZmY4MGMwNTRiNAooWEVO
KSAgUGFnZSB0YWJsZXM6ICAgZmZmZmZmZmY4MGMwNjAwMC0+ZmZmZmZmZmY4MGMxMTAwMAooWEVO
KSAgQm9vdCBzdGFjazogICAgZmZmZmZmZmY4MGMxMTAwMC0+ZmZmZmZmZmY4MGMxMjAwMAooWEVO
KSAgVE9UQUw6ICAgICAgICAgZmZmZmZmZmY4MDAwMDAwMC0+ZmZmZmZmZmY4MTAwMDAwMAooWEVO
KSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MDAwMjAwMAooWEVOKSBEb20wIGhhcyBtYXhpbXVt
IDQgVkNQVXMKKFhFTikgQU1ELVZpOiBTa2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjAwLjAK
KFhFTikgQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2aWNlIDAwMDA6MDA6MDAuMgooWEVOKSBzZXR1
cCAwMDAwOjAwOjAwLjIgZm9yIGQwIGZhaWxlZCAoLTE5KQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgxMCwgdHlwZSA9IDB4Miwgcm9v
dCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikg
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MCwgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMK
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4MSwgdHlw
ZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4
OCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHg5MCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2
aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJs
ZTogZGV2aWNlIGlkID0gMHg5OCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAw
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5YSwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0
ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9
IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMSwgdHlwZSA9IDB4Nywgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMywgdHlwZSA9IDB4Nywg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhF
TikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9
IDB4NSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9
IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNSwg
dHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHhhOCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHhhYSwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHhhYiwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhjMCwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjMSwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjMiwgdHlwZSA9IDB4Niwgcm9vdCB0YWJs
ZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjMywgdHlwZSA9IDB4Niwgcm9v
dCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikg
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNCwgdHlwZSA9IDB4
Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMK
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNSwgdHlw
ZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgx
MDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4MTAxLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDIzMCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHgyMzgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZj
MzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwgdHlwZSA9IDB4MSwgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg1MDAsIHR5cGUgPSAweDEs
IHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihY
RU4pIFNjcnViYmluZyBGcmVlIFJBTTogLmRvbmUuCihYRU4pIEluaXRpYWwgbG93IG1lbW9yeSB2
aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLgooWEVOKSBTdGQuIExvZ2xldmVsOiBB
bGwKKFhFTikgR3Vlc3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFu
ZCB3YXJuaW5ncykKKFhFTikgKioqIFNlcmlhbCBpbnB1dCAtPiBET00wICh0eXBlICdDVFJMLWEn
IHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCB0byBYZW4pCihYRU4pIEZyZWVkIDI5MmtCIGlu
aXQgbWVtb3J5LgooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAw
MDAwMDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAw
MDAwMC4KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAw
MDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAu
CihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAwMDAw
MDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAwMDAwLgooWEVO
KSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEz
IGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAwMC4KKFhFTikgUENJ
OiBVc2luZyBNQ0ZHIGZvciBzZWdtZW50IDAwMDAgYnVzIDAwLWZmCihYRU4pIG1tLmM6ODA5OiBk
MDogRm9yY2luZyByZWFkLW9ubHkgYWNjZXNzIHRvIE1GTiBlMDAwMgooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4yCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDEuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjAxLjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMi4wCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEwLjEKKFhFTikg
U1ItSU9WIGRldmljZSAwMDAwOjAwOjExLjAgaGFzIGl0cyB2aXJ0dWFsIGZ1bmN0aW9ucyBhbHJl
YWR5IGVuYWJsZWQgKDAxYWIpCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTEuMAooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxMi4yCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTMuMAooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjEzLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4wCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTQuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjE0LjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC40CihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTQuNQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE1LjAKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4yCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MTUuMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjAKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxOC4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMgooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
OC40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNQooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAxOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTowMC4xCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDI6MDYuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAyOjA3
LjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMzowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDQ6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjA1OjAwLjAKKFhFTikgSU9B
UElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDUtMTYgLT4gMHhhMCAtPiBJUlEgMTYgTW9k
ZToxIEFjdGl2ZToxKQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0x
NyAtPiAweGE4IC0+IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpCihYRU4pIElPQVBJQ1swXTogU2V0
IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE4IC0+IDB4YjAgLT4gSVJRIDE4IE1vZGU6MSBBY3RpdmU6
MSkKKFhFTikgSU9BUElDWzBdOiBTZXQgUENJIHJvdXRpbmcgZW50cnkgKDUtMTkgLT4gMHhiOCAt
PiBJUlEgMTkgTW9kZToxIEFjdGl2ZToxKQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGlu
ZyBlbnRyeSAoNS0yMSAtPiAweGMwIC0+IElSUSAyMSBNb2RlOjEgQWN0aXZlOjEpCihYRU4pIElP
QVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTIyIC0+IDB4YzggLT4gSVJRIDIyIE1v
ZGU6MSBBY3RpdmU6MSkKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUzOiAwMCg0MCkKKFhFTikgQVBJ
QyBlcnJvciBvbiBDUFUwOiAwMCg0MCkKKFhFTikgQVBJQyBlcnJvciBvbiBDUFUyOiAwMCg0MCkK
KFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkKKFhFTikgQU1ELVZpOiBTaGFyZSBwMm0g
dGFibGUgd2l0aCBpb21tdTogcDJtIHRhYmxlID0gMHgxY2FhYzAK
--001a113a332af9ed1b04f180c397
Content-Type: application/octet-stream; name=dom0-dmesg
Content-Disposition: attachment; filename=dom0-dmesg
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sjlye2

WyAgICAwLjAwMDAwMF0gQlJLIFsweDAwYWRkMDAwLCAweDAwYWRkZmZmXSBQVUQKWyAgICAwLjAw
MDAwMF0gQlJLIFsweDAwYWRlMDAwLCAweDAwYWRlZmZmXSBQTUQKWyAgICAwLjAwMDAwMF0gQlJL
IFsweDAwYWRmMDAwLCAweDAwYWU1ZmZmXSBQVEUKWyAgICAwLjAwMDAwMF0gSW5pdGlhbGl6aW5n
IGNncm91cCBzdWJzeXMgY3B1c2V0ClsgICAgMC4wMDAwMDBdIEluaXRpYWxpemluZyBjZ3JvdXAg
c3Vic3lzIGNwdQpbICAgIDAuMDAwMDAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBjcHVh
Y2N0ClsgICAgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gMy4xMS42LTQteGVuIChnZWVrb0BidWls
ZGhvc3QpIChnY2MgdmVyc2lvbiA0LjguMSAyMDEzMDkwOSBbZ2NjLTRfOC1icmFuY2ggcmV2aXNp
b24gMjAyMzg4XSAoU1VTRSBMaW51eCkgKSAjMSBTTVAgV2VkIE9jdCAzMCAxODowNDo1NiBVVEMg
MjAxMyAoZTZkNGEyNykKWyAgICAwLjAwMDAwMF0gQ29tbWFuZCBsaW5lOiByb290PVVVSUQ9N2Nh
Yzg2ZDItNzk2ZC00ZmZmLTkyNmUtYzZkNjUyMTliNTRjIHJvIHF1aWV0IHF1aWV0IHJlc3VtZT0v
ZGV2L2Rpc2svYnktaWQvYXRhLVNUMzMyMDYyMEFTXzVRRjVEUk1QLXBhcnQ2IHNwbGFzaD1zaWxl
bnQgcXVpZXQgc2hvd29wdHMKWyAgICAwLjAwMDAwMF0gWGVuLXByb3ZpZGVkIG1hY2hpbmUgbWVt
b3J5IG1hcDoKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwMDAwMDAwMDAtMHgw
MDAwMDAwMDAwMDllN2ZmXSB1c2FibGUKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAw
MDAwMDAwOWU4MDAtMHgwMDAwMDAwMDAwMDlmZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBC
SU9TOiBbbWVtIDB4MDAwMDAwMDAwMDBlMDAwMC0weDAwMDAwMDAwMDAwZmZmZmZdIHJlc2VydmVk
ClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMDAwMTAwMDAwLTB4MDAwMDAwMDA4
ZDY4YWZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMDhkNjhi
MDAwLTB4MDAwMDAwMDA4ZGQwOWZmZl0gcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUzogW21l
bSAweDAwMDAwMDAwOGRkMGEwMDAtMHgwMDAwMDAwMDhlMDU5ZmZmXSBBQ1BJIE5WUwpbICAgIDAu
MDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAwMDAwMDA4ZTA1YTAwMC0weDAwMDAwMDAwOGVhNDRmZmZd
IHJlc2VydmVkClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMDhlYTQ1MDAwLTB4
MDAwMDAwMDA4ZWE0NWZmZl0gdXNhYmxlClsgICAgMC4wMDAwMDBdIEJJT1M6IFttZW0gMHgwMDAw
MDAwMDhlYTQ2MDAwLTB4MDAwMDAwMDA4ZWM0YmZmZl0gQUNQSSBOVlMKWyAgICAwLjAwMDAwMF0g
QklPUzogW21lbSAweDAwMDAwMDAwOGVjNGMwMDAtMHgwMDAwMDAwMDhmMDYzZmZmXSB1c2FibGUK
WyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwOGYwNjQwMDAtMHgwMDAwMDAwMDhm
N2YyZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAwMDAwMDA4Zjdm
MzAwMC0weDAwMDAwMDAwOGY3ZmZmZmZdIHVzYWJsZQpbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVt
IDB4MDAwMDAwMDBmZWMwMDAwMC0weDAwMDAwMDAwZmVjMDBmZmZdIHJlc2VydmVkClsgICAgMC4w
MDAwMDBdIEJJT1M6IFttZW0gMHgwMDAwMDAwMGZlYzEwMDAwLTB4MDAwMDAwMDBmZWMxMGZmZl0g
cmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwZmVkMDAwMDAtMHgw
MDAwMDAwMGZlZDAwZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAw
MDAwMDBmZWQ4MDAwMC0weDAwMDAwMDAwZmVkOGZmZmZdIHJlc2VydmVkClsgICAgMC4wMDAwMDBd
IEJJT1M6IFttZW0gMHgwMDAwMDAwMGZlZTAwMDAwLTB4MDAwMDAwMDBmZWVmZmZmZl0gcmVzZXJ2
ZWQKWyAgICAwLjAwMDAwMF0gQklPUzogW21lbSAweDAwMDAwMDAwZmY4MDAwMDAtMHgwMDAwMDAw
MGZmZmZmZmZmXSByZXNlcnZlZApbICAgIDAuMDAwMDAwXSBCSU9TOiBbbWVtIDB4MDAwMDAwMDEw
MDAwMTAwMC0weDAwMDAwMDAyNGZmZmZmZmZdIHVzYWJsZQpbICAgIDAuMDAwMDAwXSBCSU9TOiBb
bWVtIDB4MDAwMDAwZmQwMDAwMDAwMC0weDAwMDAwMGZmZmZmZmZmZmZdIHJlc2VydmVkClsgICAg
MC4wMDAwMDBdIGU4MjA6IFhlbi1wcm92aWRlZCBwaHlzaWNhbCBSQU0gbWFwOgpbICAgIDAuMDAw
MDAwXSBYZW46IFttZW0gMHgwMDAwMDAwMDAwMDAwMDAwLTB4MDAwMDAwMDFhZGQ4ZGZmZl0gdXNh
YmxlClsgICAgMC4wMDAwMDBdIE5YIChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb246IGFjdGl2
ZQpbICAgIDAuMDAwMDAwXSBTTUJJT1MgMi43IHByZXNlbnQuClsgICAgMC4wMDAwMDBdIERNSTog
VG8gQmUgRmlsbGVkIEJ5IE8uRS5NLiBUbyBCZSBGaWxsZWQgQnkgTy5FLk0uL0ZNMkE3NSBQcm80
LCBCSU9TIFAyLjQwIDA3LzExLzIwMTMKWyAgICAwLjAwMDAwMF0gZTgyMDogbGFzdF9wZm4gPSAw
eDFhZGQ4ZSBtYXhfYXJjaF9wZm4gPSAweDgwMDAwMDAwClsgICAgMC4wMDAwMDBdIGU4MjA6IGxh
c3RfcGZuID0gMHgxMDAwMDAgbWF4X2FyY2hfcGZuID0gMHg4MDAwMDAwMApbICAgIDAuMDAwMDAw
XSBmb3VuZCBTTVAgTVAtdGFibGUgYXQgW21lbSAweDAwMGZkOTAwLTB4MDAwZmQ5MGZdIG1hcHBl
ZCBhdCBbZmZmZmZmZmZmZjVlZjkwMF0KWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGlu
ZzogW21lbSAweDFhZDIwMDAwMC0weDFhZDNmZmZmZl0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgx
YWQyMDAwMDAtMHgxYWQzZmZmZmZdIHBhZ2UgNGsKWyAgICAwLjAwMDAwMF0gQlJLIFsweDAwYWU3
MDAwLCAweDAwYWU3ZmZmXSBQR1RBQkxFClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMGFlODAwMCwg
MHgwMGFlOGZmZl0gUEdUQUJMRQpbICAgIDAuMDAwMDAwXSBpbml0X21lbW9yeV9tYXBwaW5nOiBb
bWVtIDB4MWFjMDAwMDAwLTB4MWFkMWZmZmZmXQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDFhYzAw
MDAwMC0weDFhZDFmZmZmZl0gcGFnZSA0awpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDBhZTkwMDAs
IDB4MDBhZTlmZmZdIFBHVEFCTEUKWyAgICAwLjAwMDAwMF0gQlJLIFsweDAwYWVhMDAwLCAweDAw
YWVhZmZmXSBQR1RBQkxFClsgICAgMC4wMDAwMDBdIEJSSyBbMHgwMGFlYjAwMCwgMHgwMGFlYmZm
Zl0gUEdUQUJMRQpbICAgIDAuMDAwMDAwXSBCUksgWzB4MDBhZWMwMDAsIDB4MDBhZWNmZmZdIFBH
VEFCTEUKWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAweDE4MDAwMDAw
MC0weDFhYmZmZmZmZl0KWyAgICAwLjAwMDAwMF0gIFttZW0gMHgxODAwMDAwMDAtMHgxYWJmZmZm
ZmZdIHBhZ2UgNGsKWyAgICAwLjAwMDAwMF0gaW5pdF9tZW1vcnlfbWFwcGluZzogW21lbSAweDAw
MDAwMDAwLTB4MTdmZmZmZmZmXQpbICAgIDAuMDAwMDAwXSAgW21lbSAweDAwMDAwMDAwLTB4MTdm
ZmZmZmZmXSBwYWdlIDRrClsgICAgMC4wMDAwMDBdIGluaXRfbWVtb3J5X21hcHBpbmc6IFttZW0g
MHgxYWQ0MDAwMDAtMHgxYWRkOGRmZmZdClsgICAgMC4wMDAwMDBdICBbbWVtIDB4MWFkNDAwMDAw
LTB4MWFkZDhkZmZmXSBwYWdlIDRrClsgICAgMC4wMDAwMDBdIFJBTURJU0s6IFttZW0gMHgwMTAw
MDAwMC0weDA0MDhhZmZmXQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBSU0RQIDAwMDAwMDAwMDAwZjA0
OTAgMDAwMjQgKHYwMiBBTEFTS0EpClsgICAgMC4wMDAwMDBdIEFDUEk6IFhTRFQgMDAwMDAwMDA4
ZTA0YTA3OCAwMDA3NCAodjAxIEFMQVNLQSAgICBBIE0gSSAwMTA3MjAwOSBBTUkgIDAwMDEwMDEz
KQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGQUNQIDAwMDAwMDAwOGUwNTAxMjggMDAwRjQgKHYwNCBB
TEFTS0EgICAgQSBNIEkgMDEwNzIwMDkgQU1JICAwMDAxMDAxMykKWyAgICAwLjAwMDAwMF0gQUNQ
SSBCSU9TIFdhcm5pbmcgKGJ1Zyk6IE9wdGlvbmFsIEZBRFQgZmllbGQgUG0yQ29udHJvbEJsb2Nr
IGhhcyB6ZXJvIGFkZHJlc3Mgb3IgbGVuZ3RoOiAweDAwMDAwMDAwMDAwMDAwMDAvMHgxICgyMDEz
MDUxNy90YmZhZHQtNjAzKQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBEU0RUIDAwMDAwMDAwOGUwNGEx
ODggMDVGOUUgKHYwMiBBTEFTS0EgICAgQSBNIEkgMDAwMDAwMDAgSU5UTCAyMDA1MTExNykKWyAg
ICAwLjAwMDAwMF0gQUNQSTogRkFDUyAwMDAwMDAwMDhlMDUyZTgwIDAwMDQwClsgICAgMC4wMDAw
MDBdIEFDUEk6IEFQSUMgMDAwMDAwMDA4ZTA1MDIyMCAwMDA3MiAodjAzIEFMQVNLQSAgICBBIE0g
SSAwMTA3MjAwOSBBTUkgIDAwMDEwMDEzKQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBGUERUIDAwMDAw
MDAwOGUwNTAyOTggMDAwNDQgKHYwMSBBTEFTS0EgICAgQSBNIEkgMDEwNzIwMDkgQU1JICAwMDAx
MDAxMykKWyAgICAwLjAwMDAwMF0gQUNQSTogTUNGRyAwMDAwMDAwMDhlMDUwMmUwIDAwMDNDICh2
MDEgQUxBU0tBICAgIEEgTSBJIDAxMDcyMDA5IE1TRlQgMDAwMTAwMTMpClsgICAgMC4wMDAwMDBd
IEFDUEk6IEFBRlQgMDAwMDAwMDA4ZTA1MDMyMCAwMDBFNyAodjAxIEFMQVNLQSBPRU1BQUZUICAw
MTA3MjAwOSBNU0ZUIDAwMDAwMDk3KQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBIUEVUIDAwMDAwMDAw
OGUwNTA0MDggMDAwMzggKHYwMSBBTEFTS0EgICAgQSBNIEkgMDEwNzIwMDkgQU1JICAwMDAwMDAw
NSkKWyAgICAwLjAwMDAwMF0gQUNQSTogSVZSUyAwMDAwMDAwMDhlMDUwNDQwIDAwMDcwICh2MDIg
ICAgQU1EIEFOTkFQVVJOIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDApClsgICAgMC4wMDAwMDBdIEFD
UEk6IFNTRFQgMDAwMDAwMDA4ZTA1MDRiMCAwMEE2MCAodjAxICAgIEFNRCBBTk5BUFVSTiAwMDAw
MDAwMSBBTUQgIDAwMDAwMDAxKQpbICAgIDAuMDAwMDAwXSBBQ1BJOiBTU0RUIDAwMDAwMDAwOGUw
NTBmMTAgMDA0QjcgKHYwMiAgICBBTUQgQU5OQVBVUk4gMDAwMDAwMDEgTVNGVCAwNDAwMDAwMCkK
WyAgICAwLjAwMDAwMF0gQUNQSTogQ1JBVCAwMDAwMDAwMDhlMDUxM2M4IDAwMkY4ICh2MDEgICAg
QU1EIEFOTkFQVVJOIDAwMDAwMDAxIEFNRCAgMDAwMDAwMDEpClsgICAgMC4wMDAwMDBdIFpvbmUg
cmFuZ2VzOgpbICAgIDAuMDAwMDAwXSAgIERNQSAgICAgIFttZW0gMHgwMDAwMDAwMC0weDAwZmZm
ZmZmXQpbICAgIDAuMDAwMDAwXSAgIERNQTMyICAgIFttZW0gMHgwMTAwMDAwMC0weGZmZmZmZmZm
XQpbICAgIDAuMDAwMDAwXSAgIE5vcm1hbCAgIFttZW0gMHgxMDAwMDAwMDAtMHgxYWRkOGRmZmZd
ClsgICAgMC4wMDAwMDBdIE1vdmFibGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlClsgICAgMC4w
MDAwMDBdIEVhcmx5IG1lbW9yeSBub2RlIHJhbmdlcwpbICAgIDAuMDAwMDAwXSAgIG5vZGUgICAw
OiBbbWVtIDB4MDAwMDAwMDAtMHgxYWRkOGRmZmZdClsgICAgMC4wMDAwMDBdIE9uIG5vZGUgMCB0
b3RhbHBhZ2VzOiAxNzYwNjU0ClsgICAgMC4wMDAwMDBdIGZyZWVfYXJlYV9pbml0X25vZGU6IG5v
ZGUgMCwgcGdkYXQgZmZmZmZmZmY4MDk2MjQ0MCwgbm9kZV9tZW1fbWFwIGZmZmY4ODAxYTY4ODgw
MDAKWyAgICAwLjAwMDAwMF0gICBETUEgem9uZTogNTYgcGFnZXMgdXNlZCBmb3IgbWVtbWFwClsg
ICAgMC4wMDAwMDBdICAgRE1BIHpvbmU6IDAgcGFnZXMgcmVzZXJ2ZWQKWyAgICAwLjAwMDAwMF0g
ICBETUEgem9uZTogNDA5NiBwYWdlcywgTElGTyBiYXRjaDowClsgICAgMC4wMDAwMDBdICAgRE1B
MzIgem9uZTogMTQyODAgcGFnZXMgdXNlZCBmb3IgbWVtbWFwClsgICAgMC4wMDAwMDBdICAgRE1B
MzIgem9uZTogMTA0NDQ4MCBwYWdlcywgTElGTyBiYXRjaDozMQpbICAgIDAuMDAwMDAwXSAgIE5v
cm1hbCB6b25lOiA5NzM2IHBhZ2VzIHVzZWQgZm9yIG1lbW1hcApbICAgIDAuMDAwMDAwXSAgIE5v
cm1hbCB6b25lOiA3MTIwNzggcGFnZXMsIExJRk8gYmF0Y2g6MzEKWyAgICAwLjAwMDAwMF0gQUNQ
STogTEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgxMF0gZW5hYmxlZCkKWyAgICAwLjAw
MDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgxMV0gZW5hYmxlZCkK
WyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRbMHgxMl0g
ZW5hYmxlZCkKWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNf
aWRbMHgxM10gZW5hYmxlZCkKWyAgICAwLjAwMDAwMF0gQUNQSTogTEFQSUNfTk1JIChhY3BpX2lk
WzB4ZmZdIGhpZ2ggZWRnZSBsaW50WzB4MV0pClsgICAgMC4wMDAwMDBdIEFDUEk6IElPQVBJQyAo
aWRbMHgwNV0gYWRkcmVzc1sweGZlYzAwMDAwXSBnc2lfYmFzZVswXSkKWyAgICAwLjAwMDAwMF0g
SU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24gMzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJ
IDAtMjMKWyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEgMCBn
bG9iYWxfaXJxIDIgZGZsIGRmbCkKWyAgICAwLjAwMDAwMF0gQUNQSTogSU5UX1NSQ19PVlIgKGJ1
cyAwIGJ1c19pcnEgOSBnbG9iYWxfaXJxIDkgbG93IGxldmVsKQpbICAgIDAuMDAwMDAwXSBBQ1BJ
OiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuClsgICAgMC4wMDAwMDBdIEFDUEk6IElSUTIgdXNlZCBi
eSBvdmVycmlkZS4KWyAgICAwLjAwMDAwMF0gQUNQSTogSVJROSB1c2VkIGJ5IG92ZXJyaWRlLgpb
ICAgIDAuMDAwMDAwXSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5m
b3JtYXRpb24KWyAgICAwLjAwMDAwMF0gZTgyMDogW21lbSAweDhmODAwMDAwLTB4ZmViZmZmZmZd
IGF2YWlsYWJsZSBmb3IgUENJIGRldmljZXMKWyAgICAwLjAwMDAwMF0gc2V0dXBfcGVyY3B1OiBO
Ul9DUFVTOjUxMiBucl9jcHVtYXNrX2JpdHM6NTEyIG5yX2NwdV9pZHM6NCBucl9ub2RlX2lkczox
ClsgICAgMC4wMDAwMDBdIFBFUkNQVTogRW1iZWRkZWQgMTkgcGFnZXMvY3B1IEBmZmZmODgwMWE1
ODAwMDAwIHM0ODA2NCByODE5MiBkMjE1NjggdTUyNDI4OApbICAgIDAuMDAwMDAwXSBwY3B1LWFs
bG9jOiBzNDgwNjQgcjgxOTIgZDIxNTY4IHU1MjQyODggYWxsb2M9MSoyMDk3MTUyClsgICAgMC4w
MDAwMDBdIHBjcHUtYWxsb2M6IFswXSAwIDEgMiAzIApbICAgIDAuMDAwMDAwXSBTd2FwcGluZyBN
Rk5zIGZvciBQRk4gOTg4IGFuZCAxYTU4MDcgKE1GTiAyMjM5ODggYW5kIDExMGY3KQpbICAgIDAu
MDAwMDAwXSBCdWlsdCAxIHpvbmVsaXN0cyBpbiBab25lIG9yZGVyLCBtb2JpbGl0eSBncm91cGlu
ZyBvbi4gIFRvdGFsIHBhZ2VzOiAxNzM2NTgyClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5k
IGxpbmU6IHJvb3Q9VVVJRD03Y2FjODZkMi03OTZkLTRmZmYtOTI2ZS1jNmQ2NTIxOWI1NGMgcm8g
cXVpZXQgcXVpZXQgcmVzdW1lPS9kZXYvZGlzay9ieS1pZC9hdGEtU1QzMzIwNjIwQVNfNVFGNURS
TVAtcGFydDYgc3BsYXNoPXNpbGVudCBxdWlldCBzaG93b3B0cwpbICAgIDAuMDAwMDAwXSBQSUQg
aGFzaCB0YWJsZSBlbnRyaWVzOiA0MDk2IChvcmRlcjogMywgMzI3NjggYnl0ZXMpClsgICAgMC4w
MDAwMDBdIERlbnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEwNDg1NzYgKG9yZGVyOiAx
MSwgODM4ODYwOCBieXRlcykKWyAgICAwLjAwMDAwMF0gSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBl
bnRyaWVzOiA1MjQyODggKG9yZGVyOiAxMCwgNDE5NDMwNCBieXRlcykKWyAgICAwLjAwMDAwMF0g
eHNhdmU6IGVuYWJsZWQgeHN0YXRlX2J2IDB4NywgY250eHQgc2l6ZSAweDM0MApbICAgIDAuMDAw
MDAwXSBhbGxvY2F0ZWQgMjgxNzA0NjQgYnl0ZXMgb2YgcGFnZV9jZ3JvdXAKWyAgICAwLjAwMDAw
MF0gcGxlYXNlIHRyeSAnY2dyb3VwX2Rpc2FibGU9bWVtb3J5JyBvcHRpb24gaWYgeW91IGRvbid0
IHdhbnQgbWVtb3J5IGNncm91cHMKWyAgICAwLjAwMDAwMF0gU29mdHdhcmUgSU8gVExCIGVuYWJs
ZWQ6IAogQXBlcnR1cmU6ICAgICA2NCBtZWdhYnl0ZXMKIEFkZHJlc3Mgc2l6ZTogMjcgYml0cwog
S2VybmVsIHJhbmdlOiBmZmZmODgwMTlmMTIyMDAwIC0gZmZmZjg4MDFhMzEyMjAwMApbICAgIDAu
MDAwMDAwXSBQQ0ktRE1BOiBVc2luZyBzb2Z0d2FyZSBib3VuY2UgYnVmZmVyaW5nIGZvciBJTyAo
U1dJT1RMQikKWyAgICAwLjAwMDAwMF0gTWVtb3J5OiA2NzMxNjc2Sy83MDQyNjE2SyBhdmFpbGFi
bGUgKDUyNzRLIGtlcm5lbCBjb2RlLCA1NTNLIHJ3ZGF0YSwgMzg3Nksgcm9kYXRhLCA0OTJLIGlu
aXQsIDg2OEsgYnNzLCAzMTA5NDBLIHJlc2VydmVkKQpbICAgIDAuMDAwMDAwXSBIaWVyYXJjaGlj
YWwgUkNVIGltcGxlbWVudGF0aW9uLgpbICAgIDAuMDAwMDAwXSAJUkNVIGR5bnRpY2staWRsZSBn
cmFjZS1wZXJpb2QgYWNjZWxlcmF0aW9uIGlzIGVuYWJsZWQuClsgICAgMC4wMDAwMDBdIAlSQ1Ug
cmVzdHJpY3RpbmcgQ1BVcyBmcm9tIE5SX0NQVVM9NTEyIHRvIG5yX2NwdV9pZHM9NC4KWyAgICAw
LjAwMDAwMF0gCU9mZmxvYWQgUkNVIGNhbGxiYWNrcyBmcm9tIGFsbCBDUFVzClsgICAgMC4wMDAw
MDBdIAlPZmZsb2FkIFJDVSBjYWxsYmFja3MgZnJvbSBDUFVzOiAwLTUxMS4KWyAgICAwLjAwMDAw
MF0gbnJfcGlycXM6IDQwClsgICAgMC4wMDAwMDBdIE5SX0lSUVM6NjczMjggbnJfaXJxczoyNzky
IDE2ClsgICAgMC4wMDAwMDBdIFhlbiByZXBvcnRlZDogMzg5My4wMTIgTUh6IHByb2Nlc3Nvci4K
WyAgICAwLjAwMDAwMF0gQ29uc29sZTogY29sb3VyIFZHQSsgODB4MjUKWyAgICAwLjAwMDAwMF0g
Y29uc29sZSBbdHR5MF0gZW5hYmxlZApbICAgIDAuMDAwMDAwXSBjb25zb2xlIFt4dmMtMV0gZW5h
YmxlZApbICAgIDAuMDgwMDAxXSBDYWxpYnJhdGluZyBkZWxheSB1c2luZyB0aW1lciBzcGVjaWZp
YyByb3V0aW5lLi4gNzg5My4xNiBCb2dvTUlQUyAobHBqPTE1Nzg2MzI1KQpbICAgIDAuMDgwMDA0
XSBwaWRfbWF4OiBkZWZhdWx0OiAzMjc2OCBtaW5pbXVtOiAzMDEKWyAgICAwLjA4MDAyN10gU2Vj
dXJpdHkgRnJhbWV3b3JrIGluaXRpYWxpemVkClsgICAgMC4wODAwNDJdIEFwcEFybW9yOiBBcHBB
cm1vciBpbml0aWFsaXplZApbICAgIDAuMDgwMDUxXSBNb3VudC1jYWNoZSBoYXNoIHRhYmxlIGVu
dHJpZXM6IDI1NgpbICAgIDAuMDgwMTg3XSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBtZW1v
cnkKWyAgICAwLjA4MDE5OF0gSW5pdGlhbGl6aW5nIGNncm91cCBzdWJzeXMgZGV2aWNlcwpbICAg
IDAuMDgwMjAwXSBJbml0aWFsaXppbmcgY2dyb3VwIHN1YnN5cyBmcmVlemVyClsgICAgMC4wODAy
MDFdIEluaXRpYWxpemluZyBjZ3JvdXAgc3Vic3lzIG5ldF9jbHMKWyAgICAwLjA4MDIwMl0gSW5p
dGlhbGl6aW5nIGNncm91cCBzdWJzeXMgYmxraW8KWyAgICAwLjA4MDIwNF0gSW5pdGlhbGl6aW5n
IGNncm91cCBzdWJzeXMgcGVyZl9ldmVudApbICAgIDAuMDgwMjI5XSBtY2U6IENQVSBzdXBwb3J0
cyAyIE1DRSBiYW5rcwpbICAgIDAuMDgwMjQ2XSBMYXN0IGxldmVsIGlUTEIgZW50cmllczogNEtC
IDUxMiwgMk1CIDEwMjQsIDRNQiA1MTIKTGFzdCBsZXZlbCBkVExCIGVudHJpZXM6IDRLQiAxMDI0
LCAyTUIgMTAyNCwgNE1CIDUxMgp0bGJfZmx1c2hhbGxfc2hpZnQ6IDUKWyAgICAwLjEwOTkwOV0g
QUNQSTogQ29yZSByZXZpc2lvbiAyMDEzMDUxNwpbICAgIDAuMTE2NDQ4XSBBQ1BJOiBBbGwgQUNQ
SSBUYWJsZXMgc3VjY2Vzc2Z1bGx5IGFjcXVpcmVkClsgICAgMC4xMjA2OTRdIFNNUCBhbHRlcm5h
dGl2ZXM6IHN3aXRjaGluZyB0byBTTVAgY29kZQpbICAgIDAuMTUwNjQ5XSBCcm91Z2h0IHVwIDQg
Q1BVcwpbICAgIDAuMTUwNzA3XSBkZXZ0bXBmczogaW5pdGlhbGl6ZWQKWyAgICAwLjE1MDcwN10g
UlRDIHRpbWU6IDIyOjI5OjUyLCBkYXRlOiAwMi8wMy8xNApbICAgIDAuMTUwNzA3XSBORVQ6IFJl
Z2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDE2ClsgICAgMC4xNTA3MDddIEFDUEk6IGJ1cyB0eXBl
IFBDSSByZWdpc3RlcmVkClsgICAgMC4xNTA3MDddIGFjcGlwaHA6IEFDUEkgSG90IFBsdWcgUENJ
IENvbnRyb2xsZXIgRHJpdmVyIHZlcnNpb246IDAuNQpbICAgIDAuMTUwNzA3XSBQQ0k6IE1NQ09O
RklHIGZvciBkb21haW4gMDAwMCBbYnVzIDAwLWZmXSBhdCBbbWVtIDB4ZTAwMDAwMDAtMHhlZmZm
ZmZmZl0gKGJhc2UgMHhlMDAwMDAwMCkKWyAgICAwLjE1MDcwN10gUENJOiBub3QgdXNpbmcgTU1D
T05GSUcKWyAgICAwLjE1MDcwN10gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUgMSBmb3Ig
YmFzZSBhY2Nlc3MKWyAgICAwLjE1MDcwN10gUENJOiBVc2luZyBjb25maWd1cmF0aW9uIHR5cGUg
MSBmb3IgZXh0ZW5kZWQgYWNjZXNzClsgICAgMC4xNTIwMjldIGJpbzogY3JlYXRlIHNsYWIgPGJp
by0wPiBhdCAwClsgICAgMC4xNTIwNzJdIEFDUEk6IEFkZGVkIF9PU0koTW9kdWxlIERldmljZSkK
WyAgICAwLjE1MjA3NF0gQUNQSTogQWRkZWQgX09TSShQcm9jZXNzb3IgRGV2aWNlKQpbICAgIDAu
MTUyMDc1XSBBQ1BJOiBBZGRlZCBfT1NJKDMuMCBfU0NQIEV4dGVuc2lvbnMpClsgICAgMC4xNTIw
NzddIEFDUEk6IEFkZGVkIF9PU0koUHJvY2Vzc29yIEFnZ3JlZ2F0b3IgRGV2aWNlKQpbICAgIDAu
MTUyODkyXSBBQ1BJOiBFQzogTG9vayB1cCBFQyBpbiBEU0RUClsgICAgMC4xNTM3NjVdIEFDUEk6
IEV4ZWN1dGVkIDEgYmxvY2tzIG9mIG1vZHVsZS1sZXZlbCBleGVjdXRhYmxlIEFNTCBjb2RlClsg
ICAgMC4xNTczNzNdIFtGaXJtd2FyZSBCdWddOiBBQ1BJOiBCSU9TIF9PU0koTGludXgpIHF1ZXJ5
IGlnbm9yZWQKWyAgICAwLjE1ODExN10gQUNQSTogSW50ZXJwcmV0ZXIgZW5hYmxlZApbICAgIDAu
MTU4MTIyXSBBQ1BJIEV4Y2VwdGlvbjogQUVfTk9UX0ZPVU5ELCBXaGlsZSBldmFsdWF0aW5nIFNs
ZWVwIFN0YXRlIFtcX1MxX10gKDIwMTMwNTE3L2h3eGZhY2UtNTcxKQpbICAgIDAuMTU4MTI2XSBB
Q1BJIEV4Y2VwdGlvbjogQUVfTk9UX0ZPVU5ELCBXaGlsZSBldmFsdWF0aW5nIFNsZWVwIFN0YXRl
IFtcX1MyX10gKDIwMTMwNTE3L2h3eGZhY2UtNTcxKQpbICAgIDAuMTU4MTM1XSBBQ1BJOiAoc3Vw
cG9ydHMgUzAgUzMgUzUpClsgICAgMC4xNTgxMzZdIEFDUEk6IFVzaW5nIElPQVBJQyBmb3IgaW50
ZXJydXB0IHJvdXRpbmcKWyAgICAwLjE1ODM2Nl0gUENJOiBNTUNPTkZJRyBmb3IgZG9tYWluIDAw
MDAgW2J1cyAwMC1mZl0gYXQgW21lbSAweGUwMDAwMDAwLTB4ZWZmZmZmZmZdIChiYXNlIDB4ZTAw
MDAwMDApClsgICAgMC4xNTg0MTZdIFBDSTogTU1DT05GSUcgYXQgW21lbSAweGUwMDAwMDAwLTB4
ZWZmZmZmZmZdIHJlc2VydmVkIGluIEFDUEkgbW90aGVyYm9hcmQgcmVzb3VyY2VzClsgICAgMC4y
NTY3NzddIFBDSTogVXNpbmcgaG9zdCBicmlkZ2Ugd2luZG93cyBmcm9tIEFDUEk7IGlmIG5lY2Vz
c2FyeSwgdXNlICJwY2k9bm9jcnMiIGFuZCByZXBvcnQgYSBidWcKWyAgICAwLjI1Njg0M10gQUNQ
STogTm8gZG9jayBkZXZpY2VzIGZvdW5kLgpbICAgIDAuMjY1ODYzXSBBQ1BJOiBQQ0kgUm9vdCBC
cmlkZ2UgW1BDSTBdIChkb21haW4gMDAwMCBbYnVzIDAwLWZmXSkKWyAgICAwLjI2NjExMF0gYWNw
aSBQTlAwQTAzOjAwOiBSZXF1ZXN0aW5nIEFDUEkgX09TQyBjb250cm9sICgweDFkKQpbICAgIDAu
MjY2NDQ1XSBhY3BpIFBOUDBBMDM6MDA6IEFDUEkgX09TQyBjb250cm9sICgweDE5KSBncmFudGVk
ClsgICAgMC4yNjY5NDVdIFBDSSBob3N0IGJyaWRnZSB0byBidXMgMDAwMDowMApbICAgIDAuMjY2
OTQ4XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtidXMgMDAtZmZdClsgICAg
MC4yNjY5NTBdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2lvICAweDAwMDAt
MHgwM2FmXQpbICAgIDAuMjY2OTUxXSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNl
IFtpbyAgMHgwM2UwLTB4MGNmN10KWyAgICAwLjI2Njk1M10gcGNpX2J1cyAwMDAwOjAwOiByb290
IGJ1cyByZXNvdXJjZSBbaW8gIDB4MDNiMC0weDAzZGZdClsgICAgMC4yNjY5NTRdIHBjaV9idXMg
MDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2lvICAweDBkMDAtMHhmZmZmXQpbICAgIDAuMjY2
OTU2XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFttZW0gMHgwMDBhMDAwMC0w
eDAwMGJmZmZmXQpbICAgIDAuMjY2OTU4XSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291
cmNlIFttZW0gMHgwMDBjMDAwMC0weDAwMGRmZmZmXQpbICAgIDAuMjY2OTU5XSBwY2lfYnVzIDAw
MDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFttZW0gMHhiMDAwMDAwMC0weGZmZmZmZmZmXQpbICAg
IDAuMjY2OTcwXSBwY2kgMDAwMDowMDowMC4wOiBbMTAyMjoxNDEwXSB0eXBlIDAwIGNsYXNzIDB4
MDYwMDAwClsgICAgMC4yNjcxMjFdIHBjaSAwMDAwOjAwOjAwLjI6IFsxMDIyOjE0MTldIHR5cGUg
MDAgY2xhc3MgMHgwODA2MDAKWyAgICAwLjI2NzI4OV0gcGNpIDAwMDA6MDA6MDEuMDogWzEwMDI6
OTkwZV0gdHlwZSAwMCBjbGFzcyAweDAzMDAwMApbICAgIDAuMjY3MzAzXSBwY2kgMDAwMDowMDow
MS4wOiByZWcgMHgxMDogW21lbSAweGIwMDAwMDAwLTB4YmZmZmZmZmYgcHJlZl0KWyAgICAwLjI2
NzMxM10gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTQ6IFtpbyAgMHhmMDAwLTB4ZjBmZl0KWyAg
ICAwLjI2NzMyM10gcGNpIDAwMDA6MDA6MDEuMDogcmVnIDB4MTg6IFttZW0gMHhmZjcwMDAwMC0w
eGZmNzNmZmZmXQpbICAgIDAuMjY3Mzk5XSBwY2kgMDAwMDowMDowMS4wOiBzdXBwb3J0cyBEMSBE
MgpbICAgIDAuMjY3NDc3XSBwY2kgMDAwMDowMDowMS4xOiBbMTAwMjo5OTAyXSB0eXBlIDAwIGNs
YXNzIDB4MDQwMzAwClsgICAgMC4yNjc0OTBdIHBjaSAwMDAwOjAwOjAxLjE6IHJlZyAweDEwOiBb
bWVtIDB4ZmY3NDAwMDAtMHhmZjc0M2ZmZl0KWyAgICAwLjI2NzU4MV0gcGNpIDAwMDA6MDA6MDEu
MTogc3VwcG9ydHMgRDEgRDIKWyAgICAwLjI2NzY2NF0gcGNpIDAwMDA6MDA6MDIuMDogWzEwMjI6
MTQxMl0gdHlwZSAwMSBjbGFzcyAweDA2MDQwMApbICAgIDAuMjY3NzU0XSBwY2kgMDAwMDowMDow
Mi4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzaG90IEQzY29sZApbICAgIDAuMjY3ODA5XSBw
Y2kgMDAwMDowMDowMi4wOiBTeXN0ZW0gd2FrZXVwIGRpc2FibGVkIGJ5IEFDUEkKWyAgICAwLjI2
Nzg4OF0gcGNpIDAwMDA6MDA6MTAuMDogWzEwMjI6NzgxMl0gdHlwZSAwMCBjbGFzcyAweDBjMDMz
MApbICAgIDAuMjY3OTE0XSBwY2kgMDAwMDowMDoxMC4wOiByZWcgMHgxMDogW21lbSAweGZmNzQ2
MDAwLTB4ZmY3NDdmZmYgNjRiaXRdClsgICAgMC4yNjgwMDVdIHBjaSAwMDAwOjAwOjEwLjA6IFBN
RSMgc3VwcG9ydGVkIGZyb20gRDAgRDNob3QgRDNjb2xkClsgICAgMC4yNjgwNzNdIHBjaSAwMDAw
OjAwOjEwLjA6IFN5c3RlbSB3YWtldXAgZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAuMjY4MTI4XSBw
Y2kgMDAwMDowMDoxMC4xOiBbMTAyMjo3ODEyXSB0eXBlIDAwIGNsYXNzIDB4MGMwMzMwClsgICAg
MC4yNjgxNTVdIHBjaSAwMDAwOjAwOjEwLjE6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NDQwMDAtMHhm
Zjc0NWZmZiA2NGJpdF0KWyAgICAwLjI2ODI5N10gcGNpIDAwMDA6MDA6MTAuMTogUE1FIyBzdXBw
b3J0ZWQgZnJvbSBEMCBEM2hvdCBEM2NvbGQKWyAgICAwLjI2ODM1OV0gcGNpIDAwMDA6MDA6MTAu
MTogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBieSBBQ1BJClsgICAgMC4yNjg0MTFdIHBjaSAwMDAw
OjAwOjExLjA6IFsxMDIyOjc4MDFdIHR5cGUgMDAgY2xhc3MgMHgwMTA2MDEKWyAgICAwLjI2ODQz
N10gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MTA6IFtpbyAgMHhmMTkwLTB4ZjE5N10KWyAgICAw
LjI2ODQ1MF0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MTQ6IFtpbyAgMHhmMTgwLTB4ZjE4M10K
WyAgICAwLjI2ODQ2Ml0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MTg6IFtpbyAgMHhmMTcwLTB4
ZjE3N10KWyAgICAwLjI2ODQ3NV0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MWM6IFtpbyAgMHhm
MTYwLTB4ZjE2M10KWyAgICAwLjI2ODQ4N10gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4MjA6IFtp
byAgMHhmMTUwLTB4ZjE1Zl0KWyAgICAwLjI2ODUwMF0gcGNpIDAwMDA6MDA6MTEuMDogcmVnIDB4
MjQ6IFttZW0gMHhmZjc0ZDAwMC0weGZmNzRkN2ZmXQpbICAgIDAuMjY4NjM3XSBwY2kgMDAwMDow
MDoxMi4wOiBbMTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yNjg2NTVd
IHBjaSAwMDAwOjAwOjEyLjA6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NGMwMDAtMHhmZjc0Y2ZmZl0K
WyAgICAwLjI2ODc3NV0gcGNpIDAwMDA6MDA6MTIuMDogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBi
eSBBQ1BJClsgICAgMC4yNjg4MjBdIHBjaSAwMDAwOjAwOjEyLjI6IFsxMDIyOjc4MDhdIHR5cGUg
MDAgY2xhc3MgMHgwYzAzMjAKWyAgICAwLjI2ODg0NV0gcGNpIDAwMDA6MDA6MTIuMjogcmVnIDB4
MTA6IFttZW0gMHhmZjc0YjAwMC0weGZmNzRiMGZmXQpbICAgIDAuMjY4OTYxXSBwY2kgMDAwMDow
MDoxMi4yOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjY4OTYzXSBwY2kgMDAwMDowMDoxMi4yOiBQ
TUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90ClsgICAgMC4yNjkwMTJdIHBjaSAwMDAw
OjAwOjEyLjI6IFN5c3RlbSB3YWtldXAgZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAuMjY5MDU2XSBw
Y2kgMDAwMDowMDoxMy4wOiBbMTAyMjo3ODA3XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAg
MC4yNjkwNzRdIHBjaSAwMDAwOjAwOjEzLjA6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NGEwMDAtMHhm
Zjc0YWZmZl0KWyAgICAwLjI2OTIxNl0gcGNpIDAwMDA6MDA6MTMuMDogU3lzdGVtIHdha2V1cCBk
aXNhYmxlZCBieSBBQ1BJClsgICAgMC4yNjkyNjFdIHBjaSAwMDAwOjAwOjEzLjI6IFsxMDIyOjc4
MDhdIHR5cGUgMDAgY2xhc3MgMHgwYzAzMjAKWyAgICAwLjI2OTI4Nl0gcGNpIDAwMDA6MDA6MTMu
MjogcmVnIDB4MTA6IFttZW0gMHhmZjc0OTAwMC0weGZmNzQ5MGZmXQpbICAgIDAuMjY5NDAyXSBw
Y2kgMDAwMDowMDoxMy4yOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjY5NDA0XSBwY2kgMDAwMDow
MDoxMy4yOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90ClsgICAgMC4yNjk0NTRd
IHBjaSAwMDAwOjAwOjEzLjI6IFN5c3RlbSB3YWtldXAgZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAu
MjY5NDk3XSBwY2kgMDAwMDowMDoxNC4wOiBbMTAyMjo3ODBiXSB0eXBlIDAwIGNsYXNzIDB4MGMw
NTAwClsgICAgMC4yNjk2NTJdIHBjaSAwMDAwOjAwOjE0LjE6IFsxMDIyOjc4MGNdIHR5cGUgMDAg
Y2xhc3MgMHgwMTAxOGEKWyAgICAwLjI2OTY3MF0gcGNpIDAwMDA6MDA6MTQuMTogcmVnIDB4MTA6
IFtpbyAgMHhmMTQwLTB4ZjE0N10KWyAgICAwLjI2OTY4M10gcGNpIDAwMDA6MDA6MTQuMTogcmVn
IDB4MTQ6IFtpbyAgMHhmMTMwLTB4ZjEzM10KWyAgICAwLjI2OTY5Nl0gcGNpIDAwMDA6MDA6MTQu
MTogcmVnIDB4MTg6IFtpbyAgMHhmMTIwLTB4ZjEyN10KWyAgICAwLjI2OTcwOV0gcGNpIDAwMDA6
MDA6MTQuMTogcmVnIDB4MWM6IFtpbyAgMHhmMTEwLTB4ZjExM10KWyAgICAwLjI2OTcyMV0gcGNp
IDAwMDA6MDA6MTQuMTogcmVnIDB4MjA6IFtpbyAgMHhmMTAwLTB4ZjEwZl0KWyAgICAwLjI2OTgy
NV0gcGNpIDAwMDA6MDA6MTQuMzogWzEwMjI6NzgwZV0gdHlwZSAwMCBjbGFzcyAweDA2MDEwMApb
ICAgIDAuMjcwNzI0XSBwY2kgMDAwMDowMDoxNC40OiBbMTAyMjo3ODBmXSB0eXBlIDAxIGNsYXNz
IDB4MDYwNDAxClsgICAgMC4yNzA4MTBdIHBjaSAwMDAwOjAwOjE0LjQ6IFN5c3RlbSB3YWtldXAg
ZGlzYWJsZWQgYnkgQUNQSQpbICAgIDAuMjcwODQ1XSBwY2kgMDAwMDowMDoxNC41OiBbMTAyMjo3
ODA5XSB0eXBlIDAwIGNsYXNzIDB4MGMwMzEwClsgICAgMC4yNzA4NjNdIHBjaSAwMDAwOjAwOjE0
LjU6IHJlZyAweDEwOiBbbWVtIDB4ZmY3NDgwMDAtMHhmZjc0OGZmZl0KWyAgICAwLjI3MDk4MF0g
cGNpIDAwMDA6MDA6MTQuNTogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBieSBBQ1BJClsgICAgMC4y
NzEwMjZdIHBjaSAwMDAwOjAwOjE1LjA6IFsxMDIyOjQzYTBdIHR5cGUgMDEgY2xhc3MgMHgwNjA0
MDAKWyAgICAwLjI3MTEzN10gcGNpIDAwMDA6MDA6MTUuMDogc3VwcG9ydHMgRDEgRDIKWyAgICAw
LjI3MTE5Ml0gcGNpIDAwMDA6MDA6MTUuMDogU3lzdGVtIHdha2V1cCBkaXNhYmxlZCBieSBBQ1BJ
ClsgICAgMC4yNzEyMzVdIHBjaSAwMDAwOjAwOjE1LjI6IFsxMDIyOjQzYTJdIHR5cGUgMDEgY2xh
c3MgMHgwNjA0MDAKWyAgICAwLjI3MTM0Nl0gcGNpIDAwMDA6MDA6MTUuMjogc3VwcG9ydHMgRDEg
RDIKWyAgICAwLjI3MTQwMV0gcGNpIDAwMDA6MDA6MTUuMjogU3lzdGVtIHdha2V1cCBkaXNhYmxl
ZCBieSBBQ1BJClsgICAgMC4yNzE0NDFdIHBjaSAwMDAwOjAwOjE1LjM6IFsxMDIyOjQzYTNdIHR5
cGUgMDEgY2xhc3MgMHgwNjA0MDAKWyAgICAwLjI3MTU1Ml0gcGNpIDAwMDA6MDA6MTUuMzogc3Vw
cG9ydHMgRDEgRDIKWyAgICAwLjI3MTYwN10gcGNpIDAwMDA6MDA6MTUuMzogU3lzdGVtIHdha2V1
cCBkaXNhYmxlZCBieSBBQ1BJClsgICAgMC4yNzE2NDhdIHBjaSAwMDAwOjAwOjE4LjA6IFsxMDIy
OjE0MDBdIHR5cGUgMDAgY2xhc3MgMHgwNjAwMDAKWyAgICAwLjI3MTc0N10gcGNpIDAwMDA6MDA6
MTguMTogWzEwMjI6MTQwMV0gdHlwZSAwMCBjbGFzcyAweDA2MDAwMApbICAgIDAuMjcxODQ1XSBw
Y2kgMDAwMDowMDoxOC4yOiBbMTAyMjoxNDAyXSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsgICAg
MC4yNzE5NDFdIHBjaSAwMDAwOjAwOjE4LjM6IFsxMDIyOjE0MDNdIHR5cGUgMDAgY2xhc3MgMHgw
NjAwMDAKWyAgICAwLjI3MjAwMF0gcGNpIDAwMDA6MDA6MTguNDogWzEwMjI6MTQwNF0gdHlwZSAw
MCBjbGFzcyAweDA2MDAwMApbICAgIDAuMjcyMDkzXSBwY2kgMDAwMDowMDoxOC41OiBbMTAyMjox
NDA1XSB0eXBlIDAwIGNsYXNzIDB4MDYwMDAwClsgICAgMC4yNzIyNzhdIHBjaSAwMDAwOjAxOjAw
LjA6IFsxMDAyOjY4MTldIHR5cGUgMDAgY2xhc3MgMHgwMzAwMDAKWyAgICAwLjI3MjI5OF0gcGNp
IDAwMDA6MDE6MDAuMDogcmVnIDB4MTA6IFttZW0gMHhjMDAwMDAwMC0weGNmZmZmZmZmIDY0Yml0
IHByZWZdClsgICAgMC4yNzIzMTVdIHBjaSAwMDAwOjAxOjAwLjA6IHJlZyAweDE4OiBbbWVtIDB4
ZmY2MDAwMDAtMHhmZjYzZmZmZiA2NGJpdF0KWyAgICAwLjI3MjMyNl0gcGNpIDAwMDA6MDE6MDAu
MDogcmVnIDB4MjA6IFtpbyAgMHhlMDAwLTB4ZTBmZl0KWyAgICAwLjI3MjM0N10gcGNpIDAwMDA6
MDE6MDAuMDogcmVnIDB4MzA6IFttZW0gMHhmZjY0MDAwMC0weGZmNjVmZmZmIHByZWZdClsgICAg
MC4yNzI0MTFdIHBjaSAwMDAwOjAxOjAwLjA6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yNzI0MTNd
IHBjaSAwMDAwOjAxOjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDEgRDIgRDNob3QKWyAgICAw
LjI3MjQ4N10gcGNpIDAwMDA6MDE6MDAuMTogWzEwMDI6YWFiMF0gdHlwZSAwMCBjbGFzcyAweDA0
MDMwMApbICAgIDAuMjcyNTA3XSBwY2kgMDAwMDowMTowMC4xOiByZWcgMHgxMDogW21lbSAweGZm
NjYwMDAwLTB4ZmY2NjNmZmYgNjRiaXRdClsgICAgMC4yNzI2MTldIHBjaSAwMDAwOjAxOjAwLjE6
IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yODAwMzJdIHBjaSAwMDAwOjAwOjAyLjA6IFBDSSBicmlk
Z2UgdG8gW2J1cyAwMV0KWyAgICAwLjI4MDA0MV0gcGNpIDAwMDA6MDA6MDIuMDogICBicmlkZ2Ug
d2luZG93IFtpbyAgMHhlMDAwLTB4ZWZmZl0KWyAgICAwLjI4MDA0NV0gcGNpIDAwMDA6MDA6MDIu
MDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmZjYwMDAwMC0weGZmNmZmZmZmXQpbICAgIDAuMjgw
MDUxXSBwY2kgMDAwMDowMDowMi4wOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGMwMDAwMDAwLTB4
Y2ZmZmZmZmYgNjRiaXQgcHJlZl0KWyAgICAwLjI4MDExOV0gcGNpIDAwMDA6MDI6MDYuMDogWzEx
MDI6MDAwN10gdHlwZSAwMCBjbGFzcyAweDA0MDEwMApbICAgIDAuMjgwMTUzXSBwY2kgMDAwMDow
MjowNi4wOiByZWcgMHgxMDogW2lvICAweGQwMDAtMHhkMDFmXQpbICAgIDAuMjgwMjgwXSBwY2kg
MDAwMDowMjowNi4wOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjgwMzQ1XSBwY2kgMDAwMDowMjow
Ny4wOiBbOTcxMDo5ODM1XSB0eXBlIDAwIGNsYXNzIDB4MDcwMDAyClsgICAgMC4yODAzNjhdIHBj
aSAwMDAwOjAyOjA3LjA6IHJlZyAweDEwOiBbaW8gIDB4ZDA3MC0weGQwNzddClsgICAgMC4yODAz
ODNdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDE0OiBbaW8gIDB4ZDA2MC0weGQwNjddClsgICAg
MC4yODAzOThdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDE4OiBbaW8gIDB4ZDA1MC0weGQwNTdd
ClsgICAgMC4yODA0MTRdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDFjOiBbaW8gIDB4ZDA0MC0w
eGQwNDddClsgICAgMC4yODA0MjldIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDIwOiBbaW8gIDB4
ZDAzMC0weGQwMzddClsgICAgMC4yODA0NDRdIHBjaSAwMDAwOjAyOjA3LjA6IHJlZyAweDI0OiBb
aW8gIDB4ZDAyMC0weGQwMmZdClsgICAgMC4yODA1NTFdIHBjaSAwMDAwOjAwOjE0LjQ6IFBDSSBi
cmlkZ2UgdG8gW2J1cyAwMl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDU1Nl0gcGNp
IDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHhkMDAwLTB4ZGZmZl0KWyAgICAw
LjI4MDU2NF0gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwMDAwLTB4
MDNhZl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDU2Nl0gcGNpIDAwMDA6MDA6MTQu
NDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwM2UwLTB4MGNmN10gKHN1YnRyYWN0aXZlIGRlY29k
ZSkKWyAgICAwLjI4MDU2N10gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAg
MHgwM2IwLTB4MDNkZl0gKHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDU2OV0gcGNpIDAw
MDA6MDA6MTQuNDogICBicmlkZ2Ugd2luZG93IFtpbyAgMHgwZDAwLTB4ZmZmZl0gKHN1YnRyYWN0
aXZlIGRlY29kZSkKWyAgICAwLjI4MDU3MF0gcGNpIDAwMDA6MDA6MTQuNDogICBicmlkZ2Ugd2lu
ZG93IFttZW0gMHgwMDBhMDAwMC0weDAwMGJmZmZmXSAoc3VidHJhY3RpdmUgZGVjb2RlKQpbICAg
IDAuMjgwNTcyXSBwY2kgMDAwMDowMDoxNC40OiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweDAwMGMw
MDAwLTB4MDAwZGZmZmZdIChzdWJ0cmFjdGl2ZSBkZWNvZGUpClsgICAgMC4yODA1NzNdIHBjaSAw
MDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4YjAwMDAwMDAtMHhmZmZmZmZmZl0g
KHN1YnRyYWN0aXZlIGRlY29kZSkKWyAgICAwLjI4MDY4MF0gcGNpIDAwMDA6MDM6MDAuMDogWzEx
MzE6NzE2MF0gdHlwZSAwMCBjbGFzcyAweDA0ODAwMApbICAgIDAuMjgwNzE0XSBwY2kgMDAwMDow
MzowMC4wOiByZWcgMHgxMDogW21lbSAweGZmNTAwMDAwLTB4ZmY1ZmZmZmYgNjRiaXRdClsgICAg
MC4yODA4ODldIHBjaSAwMDAwOjAzOjAwLjA6IHN1cHBvcnRzIEQxIEQyClsgICAgMC4yODA4OTBd
IHBjaSAwMDAwOjAzOjAwLjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDEgRDIKWyAgICAwLjI4
MDk1NF0gcGNpIDAwMDA6MDM6MDAuMDogZGlzYWJsaW5nIEFTUE0gb24gcHJlLTEuMSBQQ0llIGRl
dmljZS4gIFlvdSBjYW4gZW5hYmxlIGl0IHdpdGggJ3BjaWVfYXNwbT1mb3JjZScKWyAgICAwLjI4
MDk2Nl0gcGNpIDAwMDA6MDA6MTUuMDogUENJIGJyaWRnZSB0byBbYnVzIDAzXQpbICAgIDAuMjgw
OTc3XSBwY2kgMDAwMDowMDoxNS4wOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGZmNTAwMDAwLTB4
ZmY1ZmZmZmZdClsgICAgMC4yODEwODddIHBjaSAwMDAwOjA0OjAwLjA6IFsxYjZmOjcwNTJdIHR5
cGUgMDAgY2xhc3MgMHgwYzAzMzAKWyAgICAwLjI4MTExOV0gcGNpIDAwMDA6MDQ6MDAuMDogcmVn
IDB4MTA6IFttZW0gMHhmZjQwMDAwMC0weGZmNDA3ZmZmIDY0Yml0XQpbICAgIDAuMjgxMjk5XSBw
Y2kgMDAwMDowNDowMC4wOiBzdXBwb3J0cyBEMSBEMgpbICAgIDAuMjgxMzAwXSBwY2kgMDAwMDow
NDowMC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQxIEQyIEQzaG90IEQzY29sZApbICAgIDAu
Mjg4MDM1XSBwY2kgMDAwMDowMDoxNS4yOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDRdClsgICAgMC4y
ODgwNDhdIHBjaSAwMDAwOjAwOjE1LjI6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4ZmY0MDAwMDAt
MHhmZjRmZmZmZl0KWyAgICAwLjI4ODE3M10gcGNpIDAwMDA6MDU6MDAuMDogWzEwZWM6ODE2OF0g
dHlwZSAwMCBjbGFzcyAweDAyMDAwMApbICAgIDAuMjg4MTk3XSBwY2kgMDAwMDowNTowMC4wOiBy
ZWcgMHgxMDogW2lvICAweGMwMDAtMHhjMGZmXQpbICAgIDAuMjg4MjM1XSBwY2kgMDAwMDowNTow
MC4wOiByZWcgMHgxODogW21lbSAweGQwMDA0MDAwLTB4ZDAwMDRmZmYgNjRiaXQgcHJlZl0KWyAg
ICAwLjI4ODI2MF0gcGNpIDAwMDA6MDU6MDAuMDogcmVnIDB4MjA6IFttZW0gMHhkMDAwMDAwMC0w
eGQwMDAzZmZmIDY0Yml0IHByZWZdClsgICAgMC4yODgzNjZdIHBjaSAwMDAwOjA1OjAwLjA6IHN1
cHBvcnRzIEQxIEQyClsgICAgMC4yODgzNjddIHBjaSAwMDAwOjA1OjAwLjA6IFBNRSMgc3VwcG9y
dGVkIGZyb20gRDAgRDEgRDIgRDNob3QgRDNjb2xkClsgICAgMC4yOTYwMzRdIHBjaSAwMDAwOjAw
OjE1LjM6IFBDSSBicmlkZ2UgdG8gW2J1cyAwNV0KWyAgICAwLjI5NjA0NF0gcGNpIDAwMDA6MDA6
MTUuMzogICBicmlkZ2Ugd2luZG93IFtpbyAgMHhjMDAwLTB4Y2ZmZl0KWyAgICAwLjI5NjA1NV0g
cGNpIDAwMDA6MDA6MTUuMzogICBicmlkZ2Ugd2luZG93IFttZW0gMHhkMDAwMDAwMC0weGQwMGZm
ZmZmIDY0Yml0IHByZWZdClsgICAgMC4yOTYwOTBdIHBjaV9idXMgMDAwMDowMDogb24gTlVNQSBu
b2RlIDAKWyAgICAwLjI5Njc0OF0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktBXSAoSVJR
cyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5Njg0Nl0gQUNQSTogUENJIEludGVycnVw
dCBMaW5rIFtMTktCXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5Njk1MF0g
QUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktDXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkg
KjAKWyAgICAwLjI5NzA1OV0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktEXSAoSVJRcyA0
IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzE0MF0gQUNQSTogUENJIEludGVycnVwdCBM
aW5rIFtMTktFXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzIwM10gQUNQ
STogUENJIEludGVycnVwdCBMaW5rIFtMTktGXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAK
WyAgICAwLjI5NzI2Nl0gQUNQSTogUENJIEludGVycnVwdCBMaW5rIFtMTktHXSAoSVJRcyA0IDUg
NyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzMyOV0gQUNQSTogUENJIEludGVycnVwdCBMaW5r
IFtMTktIXSAoSVJRcyA0IDUgNyAxMCAxMSAxNCAxNSkgKjAKWyAgICAwLjI5NzQ2Nl0gQUNQSTog
XF9TQl8uUENJMDogbm90aWZ5IGhhbmRsZXIgaXMgaW5zdGFsbGVkClsgICAgMC4yOTc0OTZdIEZv
dW5kIDEgYWNwaSByb290IGRldmljZXMKWyAgICAwLjI5NzYyMV0gcGNpIDAwMDA6MDA6MDAuMjog
R1NJMTY6IGxldmVsLWxvdwpbICAgIDAuMjk3NzAzXSBwY2kgMDAwMDowMDowMS4wOiBHU0kxNzog
bGV2ZWwtbG93ClsgICAgMC4yOTc3ODVdIHBjaSAwMDAwOjAwOjAxLjE6IEdTSTE4OiBsZXZlbC1s
b3cKWyAgICAwLjI5ODExNV0gcGNpIDAwMDA6MDA6MTEuMDogR1NJMTk6IGxldmVsLWxvdwpbICAg
IDAuMjk4ODY1XSBwY2kgMDAwMDowMjowNi4wOiBHU0kyMTogbGV2ZWwtbG93ClsgICAgMC4yOTg5
MDJdIHBjaSAwMDAwOjAyOjA3LjA6IEdTSTIyOiBsZXZlbC1sb3cKWyAgICAwLjI5ODk5MV0gdmdh
YXJiOiBkZXZpY2UgYWRkZWQ6IFBDSTowMDAwOjAwOjAxLjAsZGVjb2Rlcz1pbyttZW0sb3ducz1t
ZW0sbG9ja3M9bm9uZQpbICAgIDAuMjk4OTkxXSB2Z2FhcmI6IGRldmljZSBhZGRlZDogUENJOjAw
MDA6MDE6MDAuMCxkZWNvZGVzPWlvK21lbSxvd25zPWlvK21lbSxsb2Nrcz1ub25lClsgICAgMC4y
OTg5OTFdIHZnYWFyYjogbG9hZGVkClsgICAgMC4yOTg5OTFdIHZnYWFyYjogYnJpZGdlIGNvbnRy
b2wgcG9zc2libGUgMDAwMDowMTowMC4wClsgICAgMC4yOTg5OTFdIHZnYWFyYjogbm8gYnJpZGdl
IGNvbnRyb2wgcG9zc2libGUgMDAwMDowMDowMS4wClsgICAgMC4yOTg5OTFdIHhlbl9tZW06IElu
aXRpYWxpc2luZyBiYWxsb29uIGRyaXZlci4KWyAgICAwLjI5ODk5MV0gU0NTSSBzdWJzeXN0ZW0g
aW5pdGlhbGl6ZWQKWyAgICAwLjI5ODk5MV0gQUNQSTogYnVzIHR5cGUgQVRBIHJlZ2lzdGVyZWQK
WyAgICAwLjI5ODk5MV0gbGliYXRhIHZlcnNpb24gMy4wMCBsb2FkZWQuClsgICAgMC4yOTg5OTFd
IFBDSTogVXNpbmcgQUNQSSBmb3IgSVJRIHJvdXRpbmcKWyAgICAwLjMxMTc3MV0gUENJOiBwY2lf
Y2FjaGVfbGluZV9zaXplIHNldCB0byA2NCBieXRlcwpbICAgIDAuMzExODg1XSBlODIwOiByZXNl
cnZlIFJBTSBidWZmZXIgW21lbSAweDAwMDllODAwLTB4MDAwOWZmZmZdClsgICAgMC4zMTE4ODdd
IGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZlciBbbWVtIDB4OGQ2OGIwMDAtMHg4ZmZmZmZmZl0KWyAg
ICAwLjMxMTg4OV0gZTgyMDogcmVzZXJ2ZSBSQU0gYnVmZmVyIFttZW0gMHg4ZWE0NjAwMC0weDhm
ZmZmZmZmXQpbICAgIDAuMzExODkwXSBlODIwOiByZXNlcnZlIFJBTSBidWZmZXIgW21lbSAweDhm
MDY0MDAwLTB4OGZmZmZmZmZdClsgICAgMC4zMTE4OTFdIGU4MjA6IHJlc2VydmUgUkFNIGJ1ZmZl
ciBbbWVtIDB4OGY4MDAwMDAtMHg4ZmZmZmZmZl0KWyAgICAwLjMxMTk1OF0gTmV0TGFiZWw6IElu
aXRpYWxpemluZwpbICAgIDAuMzExOTU5XSBOZXRMYWJlbDogIGRvbWFpbiBoYXNoIHNpemUgPSAx
MjgKWyAgICAwLjMxMTk2MF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxBQkVMRUQgQ0lQU092
NApbICAgIDAuMzExOTY4XSBOZXRMYWJlbDogIHVubGFiZWxlZCB0cmFmZmljIGFsbG93ZWQgYnkg
ZGVmYXVsdApbICAgIDAuMzEyMDAwXSBTd2l0Y2hlZCB0byBjbG9ja3NvdXJjZSB4ZW4KWyAgICAw
LjMxMjAwMF0gQXBwQXJtb3I6IEFwcEFybW9yIEZpbGVzeXN0ZW0gRW5hYmxlZApbICAgIDAuMzEy
MDAwXSBwbnA6IFBuUCBBQ1BJIGluaXQKWyAgICAwLjMxMjAwMF0gQUNQSTogYnVzIHR5cGUgUE5Q
IHJlZ2lzdGVyZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAwOiBbbWVtIDB4ZTAwMDAwMDAt
MHhlZmZmZmZmZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAw
OiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMGMwMSAoYWN0aXZlKQpbICAgIDAu
MzEyMDAwXSBzeXN0ZW0gMDA6MDE6IFttZW0gMHg5MDAwMDAwMC0weGFmZmZmZmZmXSBoYXMgYmVl
biByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDE6IFBsdWcgYW5kIFBsYXkgQUNQ
SSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDow
MjogW21lbSAweGZlYjgwMDAwLTB4ZmViZmZmZmZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4z
MTIwMDBdIHN5c3RlbSAwMDowMjogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDBj
MDIgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MDRkMC0weDA0
ZDFdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lvICAw
eDA0MGJdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lv
ICAweDA0ZDZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzog
W2lvICAweDBjMDAtMHgwYzAxXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0
ZW0gMDA6MDM6IFtpbyAgMHgwYzE0XSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBz
eXN0ZW0gMDA6MDM6IFtpbyAgMHgwYzUwLTB4MGM1MV0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAw
LjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGM1Ml0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAg
ICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGM2Y10gaGFzIGJlZW4gcmVzZXJ2ZWQK
WyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGM2Zl0gaGFzIGJlZW4gcmVzZXJ2
ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAzOiBbaW8gIDB4MGNkMC0weDBjZDFdIGhhcyBi
ZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lvICAweDBjZDItMHgw
Y2QzXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtpbyAg
MHgwY2Q0LTB4MGNkNV0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAw
OjAzOiBbaW8gIDB4MGNkNi0weDBjZDddIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBd
IHN5c3RlbSAwMDowMzogW2lvICAweDBjZDgtMHgwY2RmXSBoYXMgYmVlbiByZXNlcnZlZApbICAg
IDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtpbyAgMHgwODAwLTB4MDg5Zl0gY291bGQgbm90IGJl
IHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowMzogW2lvICAweDBiMjAtMHgwYjNm
XSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtpbyAgMHgw
OTAwLTB4MDkwZl0gaGFzIGJlZW4gcmVzZXJ2ZWQKWyAgICAwLjMxMjAwMF0gc3lzdGVtIDAwOjAz
OiBbaW8gIDB4MDkxMC0weDA5MWZdIGhhcyBiZWVuIHJlc2VydmVkClsgICAgMC4zMTIwMDBdIHN5
c3RlbSAwMDowMzogW2lvICAweGZlMDAtMHhmZWZlXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAu
MzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWMwMDAwMC0weGZlYzAwZmZmXSBoYXMgYmVl
biByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWUwMDAwMC0w
eGZlZTAwZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6
IFttZW0gMHhmZWQ4MDAwMC0weGZlZDhmZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEy
MDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWQ2MTAwMC0weGZlZDcwZmZmXSBoYXMgYmVlbiBy
ZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZWMxMDAwMC0weGZl
YzEwZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFtt
ZW0gMHhmZWQwMDAwMC0weGZlZDAwZmZmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAw
XSBzeXN0ZW0gMDA6MDM6IFttZW0gMHhmZjgwMDAwMC0weGZmZmZmZmZmXSBoYXMgYmVlbiByZXNl
cnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDM6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZp
Y2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsgICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowNDogW2lv
ICAweDAyOTAtMHgwMjlmXSBoYXMgYmVlbiByZXNlcnZlZApbICAgIDAuMzEyMDAwXSBzeXN0ZW0g
MDA6MDQ6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwYzAyIChhY3RpdmUpClsg
ICAgMC4zMTIwMDBdIHN5c3RlbSAwMDowNTogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURz
IFBOUDBjMDIgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gcG5wIDAwOjA2OiBbZG1hIDRdClsgICAg
MC4zMTIwMDBdIHBucCAwMDowNjogUGx1ZyBhbmQgUGxheSBBQ1BJIGRldmljZSwgSURzIFBOUDAy
MDAgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gcG5wIDAwOjA3OiBQbHVnIGFuZCBQbGF5IEFDUEkg
ZGV2aWNlLCBJRHMgUE5QMGIwMCAoYWN0aXZlKQpbICAgIDAuMzEyMDAwXSBwbnAgMDA6MDg6IFBs
dWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElEcyBQTlAwODAwIChhY3RpdmUpClsgICAgMC4zMTIw
MDBdIHN5c3RlbSAwMDowOTogW2lvICAweDA0ZDAtMHgwNGQxXSBoYXMgYmVlbiByZXNlcnZlZApb
ICAgIDAuMzEyMDAwXSBzeXN0ZW0gMDA6MDk6IFBsdWcgYW5kIFBsYXkgQUNQSSBkZXZpY2UsIElE
cyBQTlAwYzAyIChhY3RpdmUpClsgICAgMC4zMTIwMDBdIHBucCAwMDowYTogUGx1ZyBhbmQgUGxh
eSBBQ1BJIGRldmljZSwgSURzIFBOUDBjMDQgKGFjdGl2ZSkKWyAgICAwLjMxMjAwMF0gcG5wIDAw
OjBiOiBQbHVnIGFuZCBQbGF5IEFDUEkgZGV2aWNlLCBJRHMgUE5QMDEwMyAoYWN0aXZlKQpbICAg
IDAuMzEyMDAwXSBwbnA6IFBuUCBBQ1BJOiBmb3VuZCAxMiBkZXZpY2VzClsgICAgMC4zMTIwMDBd
IEFDUEk6IGJ1cyB0eXBlIFBOUCB1bnJlZ2lzdGVyZWQKWyAgICAwLjMxNDA4M10gcGNpIDAwMDA6
MDA6MDIuMDogUENJIGJyaWRnZSB0byBbYnVzIDAxXQpbICAgIDAuMzE0MDg4XSBwY2kgMDAwMDow
MDowMi4wOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweGUwMDAtMHhlZmZmXQpbICAgIDAuMzE0MDkz
XSBwY2kgMDAwMDowMDowMi4wOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGZmNjAwMDAwLTB4ZmY2
ZmZmZmZdClsgICAgMC4zMTQwOTddIHBjaSAwMDAwOjAwOjAyLjA6ICAgYnJpZGdlIHdpbmRvdyBb
bWVtIDB4YzAwMDAwMDAtMHhjZmZmZmZmZiA2NGJpdCBwcmVmXQpbICAgIDAuMzE0MTAzXSBwY2kg
MDAwMDowMDoxNC40OiBQQ0kgYnJpZGdlIHRvIFtidXMgMDJdClsgICAgMC4zMTQxMDZdIHBjaSAw
MDAwOjAwOjE0LjQ6ICAgYnJpZGdlIHdpbmRvdyBbaW8gIDB4ZDAwMC0weGRmZmZdClsgICAgMC4z
MTQxMjJdIHBjaSAwMDAwOjAwOjE1LjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwM10KWyAgICAwLjMx
NDEyOF0gcGNpIDAwMDA6MDA6MTUuMDogICBicmlkZ2Ugd2luZG93IFttZW0gMHhmZjUwMDAwMC0w
eGZmNWZmZmZmXQpbICAgIDAuMzE0MTM3XSBwY2kgMDAwMDowMDoxNS4yOiBQQ0kgYnJpZGdlIHRv
IFtidXMgMDRdClsgICAgMC4zMTQxNDNdIHBjaSAwMDAwOjAwOjE1LjI6ICAgYnJpZGdlIHdpbmRv
dyBbbWVtIDB4ZmY0MDAwMDAtMHhmZjRmZmZmZl0KWyAgICAwLjMxNDE1M10gcGNpIDAwMDA6MDA6
MTUuMzogUENJIGJyaWRnZSB0byBbYnVzIDA1XQpbICAgIDAuMzE0MTU2XSBwY2kgMDAwMDowMDox
NS4zOiAgIGJyaWRnZSB3aW5kb3cgW2lvICAweGMwMDAtMHhjZmZmXQpbICAgIDAuMzE0MTY0XSBw
Y2kgMDAwMDowMDoxNS4zOiAgIGJyaWRnZSB3aW5kb3cgW21lbSAweGQwMDAwMDAwLTB4ZDAwZmZm
ZmYgNjRiaXQgcHJlZl0KWyAgICAwLjMxNDUyMl0gcGNpX2J1cyAwMDAwOjAwOiByZXNvdXJjZSA0
IFtpbyAgMHgwMDAwLTB4MDNhZl0KWyAgICAwLjMxNDUyNF0gcGNpX2J1cyAwMDAwOjAwOiByZXNv
dXJjZSA1IFtpbyAgMHgwM2UwLTB4MGNmN10KWyAgICAwLjMxNDUyNV0gcGNpX2J1cyAwMDAwOjAw
OiByZXNvdXJjZSA2IFtpbyAgMHgwM2IwLTB4MDNkZl0KWyAgICAwLjMxNDUyN10gcGNpX2J1cyAw
MDAwOjAwOiByZXNvdXJjZSA3IFtpbyAgMHgwZDAwLTB4ZmZmZl0KWyAgICAwLjMxNDUyOF0gcGNp
X2J1cyAwMDAwOjAwOiByZXNvdXJjZSA4IFttZW0gMHgwMDBhMDAwMC0weDAwMGJmZmZmXQpbICAg
IDAuMzE0NTMwXSBwY2lfYnVzIDAwMDA6MDA6IHJlc291cmNlIDkgW21lbSAweDAwMGMwMDAwLTB4
MDAwZGZmZmZdClsgICAgMC4zMTQ1MzFdIHBjaV9idXMgMDAwMDowMDogcmVzb3VyY2UgMTAgW21l
bSAweGIwMDAwMDAwLTB4ZmZmZmZmZmZdClsgICAgMC4zMTQ1MzNdIHBjaV9idXMgMDAwMDowMTog
cmVzb3VyY2UgMCBbaW8gIDB4ZTAwMC0weGVmZmZdClsgICAgMC4zMTQ1MzRdIHBjaV9idXMgMDAw
MDowMTogcmVzb3VyY2UgMSBbbWVtIDB4ZmY2MDAwMDAtMHhmZjZmZmZmZl0KWyAgICAwLjMxNDUz
Nl0gcGNpX2J1cyAwMDAwOjAxOiByZXNvdXJjZSAyIFttZW0gMHhjMDAwMDAwMC0weGNmZmZmZmZm
IDY0Yml0IHByZWZdClsgICAgMC4zMTQ1MzddIHBjaV9idXMgMDAwMDowMjogcmVzb3VyY2UgMCBb
aW8gIDB4ZDAwMC0weGRmZmZdClsgICAgMC4zMTQ1MzldIHBjaV9idXMgMDAwMDowMjogcmVzb3Vy
Y2UgNCBbaW8gIDB4MDAwMC0weDAzYWZdClsgICAgMC4zMTQ1NDBdIHBjaV9idXMgMDAwMDowMjog
cmVzb3VyY2UgNSBbaW8gIDB4MDNlMC0weDBjZjddClsgICAgMC4zMTQ1NDJdIHBjaV9idXMgMDAw
MDowMjogcmVzb3VyY2UgNiBbaW8gIDB4MDNiMC0weDAzZGZdClsgICAgMC4zMTQ1NDNdIHBjaV9i
dXMgMDAwMDowMjogcmVzb3VyY2UgNyBbaW8gIDB4MGQwMC0weGZmZmZdClsgICAgMC4zMTQ1NDRd
IHBjaV9idXMgMDAwMDowMjogcmVzb3VyY2UgOCBbbWVtIDB4MDAwYTAwMDAtMHgwMDBiZmZmZl0K
WyAgICAwLjMxNDU0Nl0gcGNpX2J1cyAwMDAwOjAyOiByZXNvdXJjZSA5IFttZW0gMHgwMDBjMDAw
MC0weDAwMGRmZmZmXQpbICAgIDAuMzE0NTQ3XSBwY2lfYnVzIDAwMDA6MDI6IHJlc291cmNlIDEw
IFttZW0gMHhiMDAwMDAwMC0weGZmZmZmZmZmXQpbICAgIDAuMzE0NTQ5XSBwY2lfYnVzIDAwMDA6
MDM6IHJlc291cmNlIDEgW21lbSAweGZmNTAwMDAwLTB4ZmY1ZmZmZmZdClsgICAgMC4zMTQ1NTBd
IHBjaV9idXMgMDAwMDowNDogcmVzb3VyY2UgMSBbbWVtIDB4ZmY0MDAwMDAtMHhmZjRmZmZmZl0K
WyAgICAwLjMxNDU1Ml0gcGNpX2J1cyAwMDAwOjA1OiByZXNvdXJjZSAwIFtpbyAgMHhjMDAwLTB4
Y2ZmZl0KWyAgICAwLjMxNDU1M10gcGNpX2J1cyAwMDAwOjA1OiByZXNvdXJjZSAyIFttZW0gMHhk
MDAwMDAwMC0weGQwMGZmZmZmIDY0Yml0IHByZWZdClsgICAgMC4zMTQ2MzFdIE5FVDogUmVnaXN0
ZXJlZCBwcm90b2NvbCBmYW1pbHkgMgpbICAgIDAuMzE0ODAwXSBUQ1AgZXN0YWJsaXNoZWQgaGFz
aCB0YWJsZSBlbnRyaWVzOiA2NTUzNiAob3JkZXI6IDgsIDEwNDg1NzYgYnl0ZXMpClsgICAgMC4z
MTUwMTRdIFRDUCBiaW5kIGhhc2ggdGFibGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA4LCAxMDQ4
NTc2IGJ5dGVzKQpbICAgIDAuMzE1MjM3XSBUQ1A6IEhhc2ggdGFibGVzIGNvbmZpZ3VyZWQgKGVz
dGFibGlzaGVkIDY1NTM2IGJpbmQgNjU1MzYpClsgICAgMC4zMTUyNzFdIFRDUDogcmVubyByZWdp
c3RlcmVkClsgICAgMC4zMTUyNzRdIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDQwOTYgKG9yZGVy
OiA2LCAyNjIxNDQgYnl0ZXMpClsgICAgMC4zMTUzMjZdIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50
cmllczogNDA5NiAob3JkZXI6IDYsIDI2MjE0NCBieXRlcykKWyAgICAwLjMxNTQ3M10gTkVUOiBS
ZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxClsgICAgMC4zMTU1MDldIHBjaSAwMDAwOjAwOjAx
LjA6IEJvb3QgdmlkZW8gZGV2aWNlClsgICAgMC41NjAyNzRdIHBjaSAwMDAwOjAxOjAwLjA6IEJv
b3QgdmlkZW8gZGV2aWNlClsgICAgMC41NjA0NTFdIFBDSTogQ0xTIDY0IGJ5dGVzLCBkZWZhdWx0
IDY0ClsgICAgMC41NjA0OTJdIFVucGFja2luZyBpbml0cmFtZnMuLi4KWyAgICAwLjYwMjE4Ml0g
RnJlZWluZyBpbml0cmQgbWVtb3J5OiA0OTcwOEsgKGZmZmY4ODAwMDEwMDAwMDAgLSBmZmZmODgw
MDA0MDhiMDAwKQpbICAgIDAuNjAyNTE0XSBhdWRpdDogaW5pdGlhbGl6aW5nIG5ldGxpbmsgc29j
a2V0IChkaXNhYmxlZCkKWyAgICAwLjYwMjUyNV0gdHlwZT0yMDAwIGF1ZGl0KDEzOTE0NjY1OTEu
NjAwOjEpOiBpbml0aWFsaXplZApbICAgIDAuNjI1NjcwXSB6YnVkOiBsb2FkZWQKWyAgICAwLjYy
NTg1NV0gVkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjUuMgpbICAgIDAuNjI1ODgwXSBEcXVvdC1j
YWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXIgMCwgNDA5NiBieXRlcykKWyAgICAw
LjYyNjEyMF0gbXNnbW5pIGhhcyBiZWVuIHNldCB0byAxMzgzNgpbICAgIDAuNjI2NDA4XSBCbG9j
ayBsYXllciBTQ1NJIGdlbmVyaWMgKGJzZykgZHJpdmVyIHZlcnNpb24gMC40IGxvYWRlZCAobWFq
b3IgMjUyKQpbICAgIDAuNjI2NDUxXSBpbyBzY2hlZHVsZXIgbm9vcCByZWdpc3RlcmVkClsgICAg
MC42MjY0NTJdIGlvIHNjaGVkdWxlciBkZWFkbGluZSByZWdpc3RlcmVkClsgICAgMC42MjY0NzJd
IGlvIHNjaGVkdWxlciBjZnEgcmVnaXN0ZXJlZCAoZGVmYXVsdCkKWyAgICAwLjYyNjc2M10gcGNp
X2hvdHBsdWc6IFBDSSBIb3QgUGx1ZyBQQ0kgQ29yZSB2ZXJzaW9uOiAwLjUKWyAgICAwLjYyNjc3
M10gcGNpZWhwOiBQQ0kgRXhwcmVzcyBIb3QgUGx1ZyBDb250cm9sbGVyIERyaXZlciB2ZXJzaW9u
OiAwLjQKWyAgICAwLjYyNjg0M10gR0hFUzogSEVTVCBpcyBub3QgZW5hYmxlZCEKWyAgICAwLjYy
Njk1NV0gTm9uLXZvbGF0aWxlIG1lbW9yeSBkcml2ZXIgdjEuMwpbICAgIDAuNjI2OTU5XSBMaW51
eCBhZ3BnYXJ0IGludGVyZmFjZSB2MC4xMDMKWyAgICAwLjYyNzM4OF0gWGVuIHZpcnR1YWwgY29u
c29sZSBzdWNjZXNzZnVsbHkgaW5zdGFsbGVkIGFzIHh2YzAKWyAgICAwLjYyNzQ1Ml0gYWhjaSAw
MDAwOjAwOjExLjA6IHZlcnNpb24gMy4wClsgICAgMC42Mjc2NTZdIGFoY2kgMDAwMDowMDoxMS4w
OiBpcnEgNDQgKDI3NikgLi4uIDQ3ICgyNzkpIGZvciBNU0kKWyAgICAwLjYyNzcxN10gYWhjaSAw
MDAwOjAwOjExLjA6IEFIQ0kgMDAwMS4wMzAwIDMyIHNsb3RzIDMgcG9ydHMgNiBHYnBzIDB4NyBp
bXBsIFNBVEEgbW9kZQpbICAgIDAuNjI3NzE5XSBhaGNpIDAwMDA6MDA6MTEuMDogZmxhZ3M6IDY0
Yml0IG5jcSBzbnRmIGlsY2sgbGVkIGNsbyBwbXAgcGlvIHNsdW0gcGFydCBzeHMgClsgICAgMC42
MjgyMjFdIHNjc2kwIDogYWhjaQpbICAgIDAuNjI4MzIyXSBzY3NpMSA6IGFoY2kKWyAgICAwLjYy
ODM2OV0gc2NzaTIgOiBhaGNpClsgICAgMC42Mjg0MDhdIGF0YTE6IFNBVEEgbWF4IFVETUEvMTMz
IGFiYXIgbTIwNDhAMHhmZjc0ZDAwMCBwb3J0IDB4ZmY3NGQxMDAgaXJxIDQ0ClsgICAgMC42Mjg0
MTBdIGF0YTI6IFNBVEEgbWF4IFVETUEvMTMzIGFiYXIgbTIwNDhAMHhmZjc0ZDAwMCBwb3J0IDB4
ZmY3NGQxODAgaXJxIDQ1ClsgICAgMC42Mjg0MTJdIGF0YTM6IFNBVEEgbWF4IFVETUEvMTMzIGFi
YXIgbTIwNDhAMHhmZjc0ZDAwMCBwb3J0IDB4ZmY3NGQyMDAgaXJxIDQ2ClsgICAgMC42Mjg0NjVd
IGk4MDQyOiBQTlA6IE5vIFBTLzIgY29udHJvbGxlciBmb3VuZC4gUHJvYmluZyBwb3J0cyBkaXJl
Y3RseS4KWyAgICAwLjYzMTE1N10gc2VyaW86IGk4MDQyIEtCRCBwb3J0IGF0IDB4NjAsMHg2NCBp
cnEgMQpbICAgIDAuNjMxMTY1XSBzZXJpbzogaTgwNDIgQVVYIHBvcnQgYXQgMHg2MCwweDY0IGly
cSAxMgpbICAgIDAuNjMxMjg2XSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZv
ciBhbGwgbWljZQpbICAgIDAuNjMxMzgwXSBydGNfY21vcyAwMDowNzogUlRDIGNhbiB3YWtlIGZy
b20gUzQKWyAgICAwLjYzMTUyMF0gcnRjX2Ntb3MgMDA6MDc6IHJ0YyBjb3JlOiByZWdpc3RlcmVk
IHJ0Y19jbW9zIGFzIHJ0YzAKWyAgICAwLjYzMTU1Ml0gcnRjX2Ntb3MgMDA6MDc6IGFsYXJtcyB1
cCB0byBvbmUgbW9udGgsIHkzaywgMTE0IGJ5dGVzIG52cmFtClsgICAgMC42MzE1NThdIGxlZHRy
aWctY3B1OiByZWdpc3RlcmVkIHRvIGluZGljYXRlIGFjdGl2aXR5IG9uIENQVXMKWyAgICAwLjYz
MTU3MV0gaGlkcmF3OiByYXcgSElEIGV2ZW50cyBkcml2ZXIgKEMpIEppcmkgS29zaW5hClsgICAg
MC42MzE2OTBdIFRDUDogY3ViaWMgcmVnaXN0ZXJlZApbICAgIDAuNjMxNzY1XSBORVQ6IFJlZ2lz
dGVyZWQgcHJvdG9jb2wgZmFtaWx5IDEwClsgICAgMC42MzE4OThdIEtleSB0eXBlIGRuc19yZXNv
bHZlciByZWdpc3RlcmVkClsgICAgMC42MzE5NzldIE1DRTogYmluZCB2aXJxIGZvciBET00wIGxv
Z2dpbmcKWyAgICAwLjYzMTk5OF0gTUNFX0RPTTBfTE9HOiBlbnRlciBkb20wIG1jZSB2SVJRIGhh
bmRsZXIKWyAgICAwLjYzMjAwMF0gTUNFX0RPTTBfTE9HOiBObyBtb3JlIHVyZ2VudCBkYXRhClsg
ICAgMC42MzIwMDFdIE1DRV9ET00wX0xPRzogTm8gbW9yZSBub251cmdlbnQgZGF0YQpbICAgIDAu
NjMyMTI3XSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2ZXJzaW9uIDEKWyAgICAwLjYzMjY3NV0gICBN
YWdpYyBudW1iZXI6IDI6NDYwOjUwMQpbICAgIDAuNjMyNzQxXSBydGNfY21vcyAwMDowNzogc2V0
dGluZyBzeXN0ZW0gY2xvY2sgdG8gMjAxNC0wMi0wMyAyMjoyOTo1MiBVVEMgKDEzOTE0NjY1OTIp
ClsgICAgMS4xMjAxMDFdIGF0YTI6IFNBVEEgbGluayB1cCA2LjAgR2JwcyAoU1N0YXR1cyAxMzMg
U0NvbnRyb2wgMzAwKQpbICAgIDEuMTIwMTI0XSBhdGEzOiBTQVRBIGxpbmsgdXAgMy4wIEdicHMg
KFNTdGF0dXMgMTIzIFNDb250cm9sIDMwMCkKWyAgICAxLjEyMDE0M10gYXRhMTogU0FUQSBsaW5r
IHVwIDMuMCBHYnBzIChTU3RhdHVzIDEyMyBTQ29udHJvbCAzMDApClsgICAgMS4xMjEzNzFdIGF0
YTIuMDA6IEFUQS04OiBIaXRhY2hpIEhEUzVDMzAyMEFMQTYzMiwgTUw2T0E1ODAsIG1heCBVRE1B
LzEzMwpbICAgIDEuMTIxMzc0XSBhdGEyLjAwOiAzOTA3MDI5MTY4IHNlY3RvcnMsIG11bHRpIDE2
OiBMQkE0OCBOQ1EgKGRlcHRoIDMxLzMyKSwgQUEKWyAgICAxLjEyMTczNl0gYXRhMS4wMDogQVRB
LTg6IFNUMzUwMDMyMEFTLCBTRDFBLCBtYXggVURNQS8xMzMKWyAgICAxLjEyMTczOV0gYXRhMS4w
MDogOTc2NzczMTY4IHNlY3RvcnMsIG11bHRpIDE2OiBMQkE0OCBOQ1EgKGRlcHRoIDMxLzMyKQpb
ICAgIDEuMTIyNjczXSBhdGEyLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMwpbICAgIDEuMTIz
NzQwXSBhdGExLjAwOiBjb25maWd1cmVkIGZvciBVRE1BLzEzMwpbICAgIDEuMTIzODQ5XSBzY3Np
IDA6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIFNUMzUwMDMyMEFTICAgICAgU0Qx
QSBQUTogMCBBTlNJOiA1ClsgICAgMS4xMjQwMjddIHNkIDA6MDowOjA6IFtzZGFdIDk3Njc3MzE2
OCA1MTItYnl0ZSBsb2dpY2FsIGJsb2NrczogKDUwMCBHQi80NjUgR2lCKQpbICAgIDEuMTI0MDgx
XSBzZCAwOjA6MDowOiBbc2RhXSBXcml0ZSBQcm90ZWN0IGlzIG9mZgpbICAgIDEuMTI0MDg0XSBz
ZCAwOjA6MDowOiBbc2RhXSBNb2RlIFNlbnNlOiAwMCAzYSAwMCAwMApbICAgIDEuMTI0MTE2XSBz
Y3NpIDE6MDowOjA6IERpcmVjdC1BY2Nlc3MgICAgIEFUQSAgICAgIEhpdGFjaGkgSERTNUMzMDIg
TUw2TyBQUTogMCBBTlNJOiA1ClsgICAgMS4xMjQxMTddIHNkIDA6MDowOjA6IFtzZGFdIFdyaXRl
IGNhY2hlOiBlbmFibGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBP
IG9yIEZVQQpbICAgIDEuMTI0MjY2XSBzZCAxOjA6MDowOiBbc2RiXSAzOTA3MDI5MTY4IDUxMi1i
eXRlIGxvZ2ljYWwgYmxvY2tzOiAoMi4wMCBUQi8xLjgxIFRpQikKWyAgICAxLjEyNDMwM10gc2Qg
MTowOjA6MDogW3NkYl0gV3JpdGUgUHJvdGVjdCBpcyBvZmYKWyAgICAxLjEyNDMxMF0gc2QgMTow
OjA6MDogW3NkYl0gTW9kZSBTZW5zZTogMDAgM2EgMDAgMDAKWyAgICAxLjEyNDMyOF0gc2QgMTow
OjA6MDogW3NkYl0gV3JpdGUgY2FjaGU6IGVuYWJsZWQsIHJlYWQgY2FjaGU6IGVuYWJsZWQsIGRv
ZXNuJ3Qgc3VwcG9ydCBEUE8gb3IgRlVBClsgICAgMS4xMjU3OTNdICBzZGI6IHNkYjEKWyAgICAx
LjEyNjAwNV0gc2QgMTowOjA6MDogW3NkYl0gQXR0YWNoZWQgU0NTSSBkaXNrClsgICAgMS4xNjcy
MzBdIGF0YTMuMDA6IEhQQSBkZXRlY3RlZDogY3VycmVudCA2MjUxNDAzMzUsIG5hdGl2ZSA2MjUx
NDI0NDgKWyAgICAxLjE2NzIzNV0gYXRhMy4wMDogQVRBLTc6IFNUMzMyMDYyMEFTLCAzLkFBSywg
bWF4IFVETUEvMTMzClsgICAgMS4xNjcyMzZdIGF0YTMuMDA6IDYyNTE0MDMzNSBzZWN0b3JzLCBt
dWx0aSAxNjogTEJBNDggTkNRIChkZXB0aCAzMS8zMikKWyAgICAxLjE2OTE4OV0gIHNkYTogc2Rh
MSBzZGE0IDwgc2RhNSBzZGE2ID4KWyAgICAxLjE2OTQzOF0gc2QgMDowOjA6MDogW3NkYV0gQXR0
YWNoZWQgU0NTSSBkaXNrClsgICAgMS4yMjU1MzddIGF0YTMuMDA6IGNvbmZpZ3VyZWQgZm9yIFVE
TUEvMTMzClsgICAgMS4yMjU2MjJdIHNjc2kgMjowOjA6MDogRGlyZWN0LUFjY2VzcyAgICAgQVRB
ICAgICAgU1QzMzIwNjIwQVMgICAgICAzLkFBIFBROiAwIEFOU0k6IDUKWyAgICAxLjIyNTczNV0g
c2QgMjowOjA6MDogW3NkY10gNjI1MTQwMzM1IDUxMi1ieXRlIGxvZ2ljYWwgYmxvY2tzOiAoMzIw
IEdCLzI5OCBHaUIpClsgICAgMS4yMjU3NzldIHNkIDI6MDowOjA6IFtzZGNdIFdyaXRlIFByb3Rl
Y3QgaXMgb2ZmClsgICAgMS4yMjU3ODFdIHNkIDI6MDowOjA6IFtzZGNdIE1vZGUgU2Vuc2U6IDAw
IDNhIDAwIDAwClsgICAgMS4yMjU3OTRdIHNkIDI6MDowOjA6IFtzZGNdIFdyaXRlIGNhY2hlOiBl
bmFibGVkLCByZWFkIGNhY2hlOiBlbmFibGVkLCBkb2Vzbid0IHN1cHBvcnQgRFBPIG9yIEZVQQpb
ICAgIDEuMzM3NTc4XSAgc2RjOiBzZGMxIHNkYzIgPCBzZGM1IHNkYzYgc2RjNyBzZGM4IHNkYzkg
PiBzZGMzIHNkYzQKWyAgICAxLjMzODAzN10gc2QgMjowOjA6MDogW3NkY10gQXR0YWNoZWQgU0NT
SSBkaXNrClsgICAgMS4zMzgzMDhdIEZyZWVpbmcgdW51c2VkIGtlcm5lbCBtZW1vcnk6IDQ5Mksg
KGZmZmY4ODAwMDA5ODEwMDAgLSBmZmZmODgwMDAwOWZjMDAwKQpbICAgIDEuMzM4MzEwXSBXcml0
ZSBwcm90ZWN0aW5nIHRoZSBrZXJuZWwgcmVhZC1vbmx5IGRhdGE6IDkxNjRrClsgICAgMS4zODY2
ODhdIHBjaWJhY2sgMDAwMDowMDowMS4wOiBzZWl6aW5nIGRldmljZQpbICAgIDEuMzg2Njk5XSBw
Y2liYWNrIDAwMDA6MDA6MDEuMTogc2VpemluZyBkZXZpY2UKWyAgICAxLjQxNjI1OF0gcGNpYmFj
ayAwMDAwOjAwOjAxLjA6IGVuYWJsaW5nIGRldmljZSAoMDAwNiAtPiAwMDA3KQpbICAgIDEuNDQ4
MzE2XSBwY2liYWNrOiBiYWNrZW5kIGlzIHZwY2kKWyAgICAxLjQ1MDY5N10gSW5pdGlhbGlzaW5n
IHZpcnR1YWwgZXRoZXJuZXQgZHJpdmVyLgpbICAgIDEuNDU4Njc5XSBlbWM6IGRldmljZSBoYW5k
bGVyIHJlZ2lzdGVyZWQKWyAgICAxLjQ2MDc1NV0gcmRhYzogZGV2aWNlIGhhbmRsZXIgcmVnaXN0
ZXJlZApbICAgIDEuNDYyNjAzXSBocF9zdzogZGV2aWNlIGhhbmRsZXIgcmVnaXN0ZXJlZApbICAg
IDEuNDY0NjQ3XSBhbHVhOiBkZXZpY2UgaGFuZGxlciByZWdpc3RlcmVkClsgICAgMS40NzA4NjZd
IHN5c3RlbWQtdWRldmRbMTIzXTogc3RhcnRpbmcgdmVyc2lvbiAyMDgKWyAgICAxLjQ4NDQ0OV0g
W2RybV0gSW5pdGlhbGl6ZWQgZHJtIDEuMS4wIDIwMDYwODEwClsgICAgMS40OTIwMzBdIEFDUEk6
IGJ1cyB0eXBlIFVTQiByZWdpc3RlcmVkClsgICAgMS40OTIwNzddIHVzYmNvcmU6IHJlZ2lzdGVy
ZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKWyAgICAxLjQ5MjA4N10gdXNiY29yZTogcmVn
aXN0ZXJlZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKWyAgICAxLjQ5Mzk3Ml0gdXNiY29yZTog
cmVnaXN0ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IKWyAgICAxLjQ5NDY3MV0geGhjaV9oY2Qg
MDAwMDowMDoxMC4wOiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDEuNDk0Njc4XSB4aGNpX2hj
ZCAwMDAwOjAwOjEwLjA6IG5ldyBVU0IgYnVzIHJlZ2lzdGVyZWQsIGFzc2lnbmVkIGJ1cyBudW1i
ZXIgMQpbICAgIDEuNDk0NzE1XSBRVUlSSzogRW5hYmxlIEFNRCBQTEwgZml4ClsgICAgMS40OTUw
MDldIHhoY2lfaGNkIDAwMDA6MDA6MTAuMDogaXJxIDQ5ICgyNzUpIGZvciBNU0kvTVNJLVgKWyAg
ICAxLjQ5NTA1NF0geGhjaV9oY2QgMDAwMDowMDoxMC4wOiBpcnEgNTAgKDI3NCkgZm9yIE1TSS9N
U0ktWApbICAgIDEuNDk1MDk5XSB4aGNpX2hjZCAwMDAwOjAwOjEwLjA6IGlycSA1MSAoMjczKSBm
b3IgTVNJL01TSS1YClsgICAgMS40OTUxNDJdIHhoY2lfaGNkIDAwMDA6MDA6MTAuMDogaXJxIDUy
ICgyNzIpIGZvciBNU0kvTVNJLVgKWyAgICAxLjQ5NTE4Nl0geGhjaV9oY2QgMDAwMDowMDoxMC4w
OiBpcnEgNTMgKDI3MSkgZm9yIE1TSS9NU0ktWApbICAgIDEuNDk1MzEwXSB1c2IgdXNiMTogTmV3
IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyClsgICAgMS40
OTUzMTJdIHVzYiB1c2IxOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0y
LCBTZXJpYWxOdW1iZXI9MQpbICAgIDEuNDk1MzE0XSB1c2IgdXNiMTogUHJvZHVjdDogeEhDSSBI
b3N0IENvbnRyb2xsZXIKWyAgICAxLjQ5NTMxNl0gdXNiIHVzYjE6IE1hbnVmYWN0dXJlcjogTGlu
dXggMy4xMS42LTQteGVuIHhoY2lfaGNkClsgICAgMS40OTUzMThdIHVzYiB1c2IxOiBTZXJpYWxO
dW1iZXI6IDAwMDA6MDA6MTAuMApbICAgIDEuNDk1NDIzXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50
IGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgICAxLjQ5NTQyN10geEhDSSB4aGNpX2NoZWNrX2JhbmR3
aWR0aCBjYWxsZWQgZm9yIHJvb3QgaHViClsgICAgMS40OTU0NzJdIGh1YiAxLTA6MS4wOiBVU0Ig
aHViIGZvdW5kClsgICAgMS40OTU0ODFdIGh1YiAxLTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkClsg
ICAgMS40OTU1ODJdIHhoY2lfaGNkIDAwMDA6MDA6MTAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIK
WyAgICAxLjQ5NTU4Nl0geGhjaV9oY2QgMDAwMDowMDoxMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3Rl
cmVkLCBhc3NpZ25lZCBidXMgbnVtYmVyIDIKWyAgICAxLjQ5NTk0M10gZWhjaV9oY2Q6IFVTQiAy
LjAgJ0VuaGFuY2VkJyBIb3N0IENvbnRyb2xsZXIgKEVIQ0kpIERyaXZlcgpbICAgIDEuNDk2MTU1
XSBlaGNpLXBjaTogRUhDSSBQQ0kgcGxhdGZvcm0gZHJpdmVyClsgICAgMS40OTgzMTJdIHVzYiB1
c2IyOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDMK
WyAgICAxLjQ5ODMxNl0gdXNiIHVzYjI6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQ
cm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMS40OTgzMThdIHVzYiB1c2IyOiBQcm9kdWN0
OiB4SENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDEuNDk4MzIwXSB1c2IgdXNiMjogTWFudWZhY3R1
cmVyOiBMaW51eCAzLjExLjYtNC14ZW4geGhjaV9oY2QKWyAgICAxLjQ5ODMyMV0gdXNiIHVzYjI6
IFNlcmlhbE51bWJlcjogMDAwMDowMDoxMC4wClsgICAgMS40OTg0MzldIHhIQ0kgeGhjaV9hZGRf
ZW5kcG9pbnQgY2FsbGVkIGZvciByb290IGh1YgpbICAgIDEuNDk4NDQxXSB4SENJIHhoY2lfY2hl
Y2tfYmFuZHdpZHRoIGNhbGxlZCBmb3Igcm9vdCBodWIKWyAgICAxLjQ5ODQ2NF0gaHViIDItMDox
LjA6IFVTQiBodWIgZm91bmQKWyAgICAxLjQ5ODQ3MF0gaHViIDItMDoxLjA6IDIgcG9ydHMgZGV0
ZWN0ZWQKWyAgICAxLjUwNDYwMl0gb2hjaV9oY2Q6IFVTQiAxLjEgJ09wZW4nIEhvc3QgQ29udHJv
bGxlciAoT0hDSSkgRHJpdmVyClsgICAgMS41MDU0NDZdIG9oY2ktcGNpOiBPSENJIFBDSSBwbGF0
Zm9ybSBkcml2ZXIKWyAgICAxLjUxMTA1Nl0gW2RybV0gcmFkZW9uIGtlcm5lbCBtb2Rlc2V0dGlu
ZyBlbmFibGVkLgpbICAgIDEuNTI4MzU4XSB4aGNpX2hjZCAwMDAwOjAwOjEwLjE6IHhIQ0kgSG9z
dCBDb250cm9sbGVyClsgICAgMS41MjgzNjddIHhoY2lfaGNkIDAwMDA6MDA6MTAuMTogbmV3IFVT
QiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAzClsgICAgMS41Mjg2NTZdIHho
Y2lfaGNkIDAwMDA6MDA6MTAuMTogaXJxIDU0ICgyNzApIGZvciBNU0kvTVNJLVgKWyAgICAxLjUy
ODcwMl0geGhjaV9oY2QgMDAwMDowMDoxMC4xOiBpcnEgNTUgKDI2OSkgZm9yIE1TSS9NU0ktWApb
ICAgIDEuNTI4NzQ3XSB4aGNpX2hjZCAwMDAwOjAwOjEwLjE6IGlycSA1NiAoMjY4KSBmb3IgTVNJ
L01TSS1YClsgICAgMS41Mjg3OTJdIHhoY2lfaGNkIDAwMDA6MDA6MTAuMTogaXJxIDU3ICgyNjcp
IGZvciBNU0kvTVNJLVgKWyAgICAxLjUyODgzN10geGhjaV9oY2QgMDAwMDowMDoxMC4xOiBpcnEg
NTggKDI2NikgZm9yIE1TSS9NU0ktWApbICAgIDEuNTI4OTY0XSB1c2IgdXNiMzogTmV3IFVTQiBk
ZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyClsgICAgMS41Mjg5Njdd
IHVzYiB1c2IzOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJp
YWxOdW1iZXI9MQpbICAgIDEuNTI4OTY5XSB1c2IgdXNiMzogUHJvZHVjdDogeEhDSSBIb3N0IENv
bnRyb2xsZXIKWyAgICAxLjUyODk3MF0gdXNiIHVzYjM6IE1hbnVmYWN0dXJlcjogTGludXggMy4x
MS42LTQteGVuIHhoY2lfaGNkClsgICAgMS41Mjg5NzJdIHVzYiB1c2IzOiBTZXJpYWxOdW1iZXI6
IDAwMDA6MDA6MTAuMQpbICAgIDEuNTI5MDQzXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNhbGxl
ZCBmb3Igcm9vdCBodWIKWyAgICAxLjUyOTA0NV0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0aCBj
YWxsZWQgZm9yIHJvb3QgaHViClsgICAgMS41MjkwNjZdIGh1YiAzLTA6MS4wOiBVU0IgaHViIGZv
dW5kClsgICAgMS41MjkwNzNdIGh1YiAzLTA6MS4wOiAyIHBvcnRzIGRldGVjdGVkClsgICAgMS41
MjkxNDddIHhoY2lfaGNkIDAwMDA6MDA6MTAuMTogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAx
LjUyOTE1MV0geGhjaV9oY2QgMDAwMDowMDoxMC4xOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBh
c3NpZ25lZCBidXMgbnVtYmVyIDQKWyAgICAxLjUzMTk3MF0gdXNiIHVzYjQ6IE5ldyBVU0IgZGV2
aWNlIGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMwpbICAgIDEuNTMxOTcyXSB1
c2IgdXNiNDogTmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFs
TnVtYmVyPTEKWyAgICAxLjUzMTk3NF0gdXNiIHVzYjQ6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250
cm9sbGVyClsgICAgMS41MzE5NzZdIHVzYiB1c2I0OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTEu
Ni00LXhlbiB4aGNpX2hjZApbICAgIDEuNTMxOTc3XSB1c2IgdXNiNDogU2VyaWFsTnVtYmVyOiAw
MDAwOjAwOjEwLjEKWyAgICAxLjUzMjAyN10geEhDSSB4aGNpX2FkZF9lbmRwb2ludCBjYWxsZWQg
Zm9yIHJvb3QgaHViClsgICAgMS41MzIwMjldIHhIQ0kgeGhjaV9jaGVja19iYW5kd2lkdGggY2Fs
bGVkIGZvciByb290IGh1YgpbICAgIDEuNTMyMDQ5XSBodWIgNC0wOjEuMDogVVNCIGh1YiBmb3Vu
ZApbICAgIDEuNTMyMDc0XSBodWIgNC0wOjEuMDogMiBwb3J0cyBkZXRlY3RlZApbICAgIDEuNTUy
NDA0XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IEVIQ0kgSG9zdCBDb250cm9sbGVyClsgICAgMS41
NTI0MTRdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNz
aWduZWQgYnVzIG51bWJlciA1ClsgICAgMS41NTI0MjBdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjog
YXBwbHlpbmcgQU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJv
dW5kClsgICAgMS41NTI0MzVdIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogZGVidWcgcG9ydCAxClsg
ICAgMS41NTI1MDldIGVoY2ktcGNpIDAwMDA6MDA6MTIuMjogaXJxIDE3LCBpbyBtZW0gMHhmZjc0
YjAwMApbICAgIDEuNTY0MDk2XSBlaGNpLXBjaSAwMDAwOjAwOjEyLjI6IFVTQiAyLjAgc3RhcnRl
ZCwgRUhDSSAxLjAwClsgICAgMS41NjQxMjNdIHVzYiB1c2I1OiBOZXcgVVNCIGRldmljZSBmb3Vu
ZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIKWyAgICAxLjU2NDEyNl0gdXNiIHVzYjU6
IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0x
ClsgICAgMS41NjQxMjldIHVzYiB1c2I1OiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcgpb
ICAgIDEuNTY0MTMxXSB1c2IgdXNiNTogTWFudWZhY3R1cmVyOiBMaW51eCAzLjExLjYtNC14ZW4g
ZWhjaV9oY2QKWyAgICAxLjU2NDEzMl0gdXNiIHVzYjU6IFNlcmlhbE51bWJlcjogMDAwMDowMDox
Mi4yClsgICAgMS41NjQyODhdIGh1YiA1LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS41NjQy
OTJdIGh1YiA1LTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAgMS41NjQzOTZdIHhoY2lfaGNk
IDAwMDA6MDQ6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAxLjU2NDQwOV0geGhjaV9o
Y2QgMDAwMDowNDowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMgbnVt
YmVyIDYKWyAgICAxLjU2NDYyOV0geGhjaV9oY2QgMDAwMDowNDowMC4wOiBpcnEgNTkgKDI2NSkg
Zm9yIE1TSS9NU0ktWApbICAgIDEuNTY0NzA2XSB1c2IgdXNiNjogTmV3IFVTQiBkZXZpY2UgZm91
bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAyClsgICAgMS41NjQ3MDhdIHVzYiB1c2I2
OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9
MQpbICAgIDEuNTY0NzEwXSB1c2IgdXNiNjogUHJvZHVjdDogeEhDSSBIb3N0IENvbnRyb2xsZXIK
WyAgICAxLjU2NDcxMV0gdXNiIHVzYjY6IE1hbnVmYWN0dXJlcjogTGludXggMy4xMS42LTQteGVu
IHhoY2lfaGNkClsgICAgMS41NjQ3MTJdIHVzYiB1c2I2OiBTZXJpYWxOdW1iZXI6IDAwMDA6MDQ6
MDAuMApbICAgIDEuNTY0Nzk0XSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNhbGxlZCBmb3Igcm9v
dCBodWIKWyAgICAxLjU2NDc5Nl0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0aCBjYWxsZWQgZm9y
IHJvb3QgaHViClsgICAgMS41NjQ4MjBdIGh1YiA2LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAg
MS41NjQ4MzBdIGh1YiA2LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgICAgMS41NjQ5MzZdIHho
Y2lfaGNkIDAwMDA6MDQ6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXIKWyAgICAxLjU2NDkzOV0g
eGhjaV9oY2QgMDAwMDowNDowMC4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBi
dXMgbnVtYmVyIDcKWyAgICAxLjU2NDk1N10gdXNiIHVzYjc6IE5ldyBVU0IgZGV2aWNlIGZvdW5k
LCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMwpbICAgIDEuNTY0OTU5XSB1c2IgdXNiNzog
TmV3IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTMsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTEK
WyAgICAxLjU2NDk2MF0gdXNiIHVzYjc6IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyClsg
ICAgMS41NjQ5NjFdIHVzYiB1c2I3OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTEuNi00LXhlbiB4
aGNpX2hjZApbICAgIDEuNTY0OTYzXSB1c2IgdXNiNzogU2VyaWFsTnVtYmVyOiAwMDAwOjA0OjAw
LjAKWyAgICAxLjU2NDk4Nl0gZWhjaS1wY2kgMDAwMDowMDoxMy4yOiBFSENJIEhvc3QgQ29udHJv
bGxlcgpbICAgIDEuNTY1MDEwXSB4SENJIHhoY2lfYWRkX2VuZHBvaW50IGNhbGxlZCBmb3Igcm9v
dCBodWIKWyAgICAxLjU2NTAxMl0geEhDSSB4aGNpX2NoZWNrX2JhbmR3aWR0aCBjYWxsZWQgZm9y
IHJvb3QgaHViClsgICAgMS41NjUwMzJdIGh1YiA3LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAg
MS41NjUwNDFdIGh1YiA3LTA6MS4wOiA0IHBvcnRzIGRldGVjdGVkClsgICAgMS41NjUxMThdIGVo
Y2ktcGNpIDAwMDA6MDA6MTMuMjogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVz
IG51bWJlciA4ClsgICAgMS41NjUxMjVdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogYXBwbHlpbmcg
QU1EIFNCNzAwL1NCODAwL0h1ZHNvbi0yLzMgRUhDSSBkdW1teSBxaCB3b3JrYXJvdW5kClsgICAg
MS41NjUxNDFdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogZGVidWcgcG9ydCAxClsgICAgMS41NjUx
OTJdIGVoY2ktcGNpIDAwMDA6MDA6MTMuMjogaXJxIDE3LCBpbyBtZW0gMHhmZjc0OTAwMApbICAg
IDEuNTc2MTAyXSBlaGNpLXBjaSAwMDAwOjAwOjEzLjI6IFVTQiAyLjAgc3RhcnRlZCwgRUhDSSAx
LjAwClsgICAgMS41NzYxMzFdIHVzYiB1c2I4OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5k
b3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDIKWyAgICAxLjU3NjEzM10gdXNiIHVzYjg6IE5ldyBVU0Ig
ZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0xClsgICAgMS41
NzYxMzVdIHVzYiB1c2I4OiBQcm9kdWN0OiBFSENJIEhvc3QgQ29udHJvbGxlcgpbICAgIDEuNTc2
MTM2XSB1c2IgdXNiODogTWFudWZhY3R1cmVyOiBMaW51eCAzLjExLjYtNC14ZW4gZWhjaV9oY2QK
WyAgICAxLjU3NjEzOF0gdXNiIHVzYjg6IFNlcmlhbE51bWJlcjogMDAwMDowMDoxMy4yClsgICAg
MS41NzYzMzddIGh1YiA4LTA6MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS41NzYzNDFdIGh1YiA4
LTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAgMS41NzY2MzJdIG9oY2ktcGNpIDAwMDA6MDA6
MTIuMDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsgICAgMS41NzY2NDBdIG9oY2ktcGNpIDAw
MDA6MDA6MTIuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciA5
ClsgICAgMS41NzY2OThdIG9oY2ktcGNpIDAwMDA6MDA6MTIuMDogaXJxIDE4LCBpbyBtZW0gMHhm
Zjc0YzAwMApbICAgIDEuNjM2MTUwXSB1c2IgdXNiOTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlk
VmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAxClsgICAgMS42MzYxNTRdIHVzYiB1c2I5OiBOZXcg
VVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAg
IDEuNjM2MTU2XSB1c2IgdXNiOTogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsg
ICAgMS42MzYxNTddIHVzYiB1c2I5OiBNYW51ZmFjdHVyZXI6IExpbnV4IDMuMTEuNi00LXhlbiBv
aGNpX2hjZApbICAgIDEuNjM2MTU5XSB1c2IgdXNiOTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjEy
LjAKWyAgICAxLjYzNjI3M10gaHViIDktMDoxLjA6IFVTQiBodWIgZm91bmQKWyAgICAxLjYzNjI3
OV0gaHViIDktMDoxLjA6IDUgcG9ydHMgZGV0ZWN0ZWQKWyAgICAxLjYzNjUwOV0gb2hjaS1wY2kg
MDAwMDowMDoxMy4wOiBPSENJIFBDSSBob3N0IGNvbnRyb2xsZXIKWyAgICAxLjYzNjUxNF0gb2hj
aS1wY2kgMDAwMDowMDoxMy4wOiBuZXcgVVNCIGJ1cyByZWdpc3RlcmVkLCBhc3NpZ25lZCBidXMg
bnVtYmVyIDEwClsgICAgMS42MzY1MzddIG9oY2ktcGNpIDAwMDA6MDA6MTMuMDogaXJxIDE4LCBp
byBtZW0gMHhmZjc0YTAwMApbICAgIDEuNjk2MTUxXSB1c2IgdXNiMTA6IE5ldyBVU0IgZGV2aWNl
IGZvdW5kLCBpZFZlbmRvcj0xZDZiLCBpZFByb2R1Y3Q9MDAwMQpbICAgIDEuNjk2MTU1XSB1c2Ig
dXNiMTA6IE5ldyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0zLCBQcm9kdWN0PTIsIFNlcmlhbE51
bWJlcj0xClsgICAgMS42OTYxNTddIHVzYiB1c2IxMDogUHJvZHVjdDogT0hDSSBQQ0kgaG9zdCBj
b250cm9sbGVyClsgICAgMS42OTYxNThdIHVzYiB1c2IxMDogTWFudWZhY3R1cmVyOiBMaW51eCAz
LjExLjYtNC14ZW4gb2hjaV9oY2QKWyAgICAxLjY5NjE2MF0gdXNiIHVzYjEwOiBTZXJpYWxOdW1i
ZXI6IDAwMDA6MDA6MTMuMApbICAgIDEuNjk2Mjc1XSBodWIgMTAtMDoxLjA6IFVTQiBodWIgZm91
bmQKWyAgICAxLjY5NjI4MV0gaHViIDEwLTA6MS4wOiA1IHBvcnRzIGRldGVjdGVkClsgICAgMS42
OTY1MTNdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogT0hDSSBQQ0kgaG9zdCBjb250cm9sbGVyClsg
ICAgMS42OTY1MThdIG9oY2ktcGNpIDAwMDA6MDA6MTQuNTogbmV3IFVTQiBidXMgcmVnaXN0ZXJl
ZCwgYXNzaWduZWQgYnVzIG51bWJlciAxMQpbICAgIDEuNjk2NTQwXSBvaGNpLXBjaSAwMDAwOjAw
OjE0LjU6IGlycSAxOCwgaW8gbWVtIDB4ZmY3NDgwMDAKWyAgICAxLjc1NjE0OF0gdXNiIHVzYjEx
OiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MWQ2YiwgaWRQcm9kdWN0PTAwMDEKWyAg
ICAxLjc1NjE1Ml0gdXNiIHVzYjExOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJv
ZHVjdD0yLCBTZXJpYWxOdW1iZXI9MQpbICAgIDEuNzU2MTU0XSB1c2IgdXNiMTE6IFByb2R1Y3Q6
IE9IQ0kgUENJIGhvc3QgY29udHJvbGxlcgpbICAgIDEuNzU2MTU1XSB1c2IgdXNiMTE6IE1hbnVm
YWN0dXJlcjogTGludXggMy4xMS42LTQteGVuIG9oY2lfaGNkClsgICAgMS43NTYxNTZdIHVzYiB1
c2IxMTogU2VyaWFsTnVtYmVyOiAwMDAwOjAwOjE0LjUKWyAgICAxLjc1NjI4M10gaHViIDExLTA6
MS4wOiBVU0IgaHViIGZvdW5kClsgICAgMS43NTYyODldIGh1YiAxMS0wOjEuMDogMiBwb3J0cyBk
ZXRlY3RlZApbICAgIDEuNzU2ODQzXSBbZHJtXSBpbml0aWFsaXppbmcga2VybmVsIG1vZGVzZXR0
aW5nIChQSVRDQUlSTiAweDEwMDI6MHg2ODE5IDB4MTQ1ODoweDI1NTMpLgpbICAgIDEuNzU2OTg0
XSBbZHJtXSByZWdpc3RlciBtbWlvIGJhc2U6IDB4RkY2MDAwMDAKWyAgICAxLjc1Njk4Nl0gW2Ry
bV0gcmVnaXN0ZXIgbW1pbyBzaXplOiAyNjIxNDQKWyAgICAxLjc1NzIxN10gQVRPTSBCSU9TOiBH
VgpbICAgIDEuNzU3Mjg0XSByYWRlb24gMDAwMDowMTowMC4wOiBWUkFNOiAyMDQ4TSAweDAwMDAw
MDAwMDAwMDAwMDAgLSAweDAwMDAwMDAwN0ZGRkZGRkYgKDIwNDhNIHVzZWQpClsgICAgMS43NTcy
ODZdIHJhZGVvbiAwMDAwOjAxOjAwLjA6IEdUVDogNTEyTSAweDAwMDAwMDAwODAwMDAwMDAgLSAw
eDAwMDAwMDAwOUZGRkZGRkYKWyAgICAxLjc1NzI4OF0gW2RybV0gRGV0ZWN0ZWQgVlJBTSBSQU09
MjA0OE0sIEJBUj0yNTZNClsgICAgMS43NTcyODldIFtkcm1dIFJBTSB3aWR0aCAyNTZiaXRzIERE
UgpbICAgIDEuNzU3MzUyXSBbVFRNXSBab25lICBrZXJuZWw6IEF2YWlsYWJsZSBncmFwaGljcyBt
ZW1vcnk6IDM1NDIzMTIga2lCClsgICAgMS43NTczNTRdIFtUVE1dIFpvbmUgICBkbWEzMjogQXZh
aWxhYmxlIGdyYXBoaWNzIG1lbW9yeTogMjA5NzE1MiBraUIKWyAgICAxLjc1NzM1NV0gW1RUTV0g
SW5pdGlhbGl6aW5nIHBvb2wgYWxsb2NhdG9yClsgICAgMS43NTczNTldIFtUVE1dIEluaXRpYWxp
emluZyBETUEgcG9vbCBhbGxvY2F0b3IKWyAgICAxLjc1NzM5Ml0gW2RybV0gcmFkZW9uOiAyMDQ4
TSBvZiBWUkFNIG1lbW9yeSByZWFkeQpbICAgIDEuNzU3Mzk0XSBbZHJtXSByYWRlb246IDUxMk0g
b2YgR1RUIG1lbW9yeSByZWFkeS4KWyAgICAxLjc1ODUwNV0gW2RybV0gR0FSVDogbnVtIGNwdSBw
YWdlcyAxMzEwNzIsIG51bSBncHUgcGFnZXMgMTMxMDcyClsgICAgMS43NTkxNzFdIFtkcm1dIHBy
b2JpbmcgZ2VuIDIgY2FwcyBmb3IgZGV2aWNlIDEwMjI6MTQxMiA9IDcwMGQwMi82ClsgICAgMS43
NTkxNzZdIFtkcm1dIFBDSUUgZ2VuIDIgbGluayBzcGVlZHMgYWxyZWFkeSBlbmFibGVkClsgICAg
MS43NjYxMzddIFtkcm1dIExvYWRpbmcgUElUQ0FJUk4gTWljcm9jb2RlClsgICAgMS45ODgwOTNd
IHVzYiA2LTQ6IG5ldyBmdWxsLXNwZWVkIFVTQiBkZXZpY2UgbnVtYmVyIDIgdXNpbmcgeGhjaV9o
Y2QKWyAgICAyLjA2MjEwNl0gdXNiIDYtNDogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9y
PTA0NWUsIGlkUHJvZHVjdD0wMjkxClsgICAgMi4wNjIxMDldIHVzYiA2LTQ6IE5ldyBVU0IgZGV2
aWNlIHN0cmluZ3M6IE1mcj0wLCBQcm9kdWN0PTAsIFNlcmlhbE51bWJlcj0wClsgICAgMi4yMDQx
NzhdIFtkcm1dIFBDSUUgR0FSVCBvZiA1MTJNIGVuYWJsZWQgKHRhYmxlIGF0IDB4MDAwMDAwMDAw
MDI3NjAwMCkuClsgICAgMi4yMDQzMzRdIHJhZGVvbiAwMDAwOjAxOjAwLjA6IFdCIGVuYWJsZWQK
WyAgICAyLjIwNDMzN10gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcg
MCB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAwYzAwIGFuZCBjcHUgYWRkciAweGZmZmY4ODAx
OTk4NWNjMDAKWyAgICAyLjIwNDMzOV0gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVy
IG9uIHJpbmcgMSB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAwYzA0IGFuZCBjcHUgYWRkciAw
eGZmZmY4ODAxOTk4NWNjMDQKWyAgICAyLjIwNDM0MF0gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVu
Y2UgZHJpdmVyIG9uIHJpbmcgMiB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAwYzA4IGFuZCBj
cHUgYWRkciAweGZmZmY4ODAxOTk4NWNjMDgKWyAgICAyLjIwNDM0Ml0gcmFkZW9uIDAwMDA6MDE6
MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgMyB1c2UgZ3B1IGFkZHIgMHgwMDAwMDAwMDgwMDAw
YzBjIGFuZCBjcHUgYWRkciAweGZmZmY4ODAxOTk4NWNjMGMKWyAgICAyLjIwNDM0NF0gcmFkZW9u
IDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNCB1c2UgZ3B1IGFkZHIgMHgwMDAw
MDAwMDgwMDAwYzEwIGFuZCBjcHUgYWRkciAweGZmZmY4ODAxOTk4NWNjMTAKWyAgICAyLjIwNTMx
MV0gcmFkZW9uIDAwMDA6MDE6MDAuMDogZmVuY2UgZHJpdmVyIG9uIHJpbmcgNSB1c2UgZ3B1IGFk
ZHIgMHgwMDAwMDAwMDAwMDc1YTE4IGFuZCBjcHUgYWRkciAweGZmZmZjOTAwMTAyMzVhMTgKWyAg
ICAyLjIwNTMxM10gW2RybV0gU3VwcG9ydHMgdmJsYW5rIHRpbWVzdGFtcCBjYWNoaW5nIFJldiAx
ICgxMC4xMC4yMDEwKS4KWyAgICAyLjIwNTMxNF0gW2RybV0gRHJpdmVyIHN1cHBvcnRzIHByZWNp
c2UgdmJsYW5rIHRpbWVzdGFtcCBxdWVyeS4KWyAgICAyLjIwNTM2NF0gcmFkZW9uIDAwMDA6MDE6
MDAuMDogaXJxIDYwICgyNjQpIGZvciBNU0kvTVNJLVgKWyAgICAyLjIwNTM4MF0gcmFkZW9uIDAw
MDA6MDE6MDAuMDogcmFkZW9uOiB1c2luZyBNU0kuClsgICAgMi4yMDU0MTZdIFtkcm1dIHJhZGVv
bjogaXJxIGluaXRpYWxpemVkLgpbICAgIDIuMjI3MDk4XSBbZHJtXSByaW5nIHRlc3Qgb24gMCBz
dWNjZWVkZWQgaW4gMyB1c2VjcwpbICAgIDIuMjI3MTA1XSBbZHJtXSByaW5nIHRlc3Qgb24gMSBz
dWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuMjI3MTEwXSBbZHJtXSByaW5nIHRlc3Qgb24gMiBz
dWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuMjI3MTc0XSBbZHJtXSByaW5nIHRlc3Qgb24gMyBz
dWNjZWVkZWQgaW4gMiB1c2VjcwpbICAgIDIuMjI3MTg0XSBbZHJtXSByaW5nIHRlc3Qgb24gNCBz
dWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuNDE1MzkyXSBbZHJtXSByaW5nIHRlc3Qgb24gNSBz
dWNjZWVkZWQgaW4gMiB1c2VjcwpbICAgIDIuNDE1Mzk3XSBbZHJtXSBVVkQgaW5pdGlhbGl6ZWQg
c3VjY2Vzc2Z1bGx5LgpbICAgIDIuNDIzMTg5XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcgMCBzdWNj
ZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMjU3XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcgMSBz
dWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMzE5XSBbZHJtXSBpYiB0ZXN0IG9uIHJpbmcg
MiBzdWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMzUxXSBbZHJtXSBpYiB0ZXN0IG9uIHJp
bmcgMyBzdWNjZWVkZWQgaW4gMCB1c2VjcwpbICAgIDIuNDIzMzgxXSBbZHJtXSBpYiB0ZXN0IG9u
IHJpbmcgNCBzdWNjZWVkZWQgaW4gMSB1c2VjcwpbICAgIDIuNDI4MDcxXSB1c2IgOC00OiBuZXcg
aGlnaC1zcGVlZCBVU0IgZGV2aWNlIG51bWJlciAyIHVzaW5nIGVoY2ktcGNpClsgICAgMi41ODAx
MjNdIFtkcm1dIGliIHRlc3Qgb24gcmluZyA1IHN1Y2NlZWRlZApbICAgIDIuNTgwODMxXSBbZHJt
XSBSYWRlb24gRGlzcGxheSBDb25uZWN0b3JzClsgICAgMi41ODA4MzJdIFtkcm1dIENvbm5lY3Rv
ciAwOgpbICAgIDIuNTgwODM0XSBbZHJtXSAgIERQLTEKWyAgICAyLjU4MDgzNV0gW2RybV0gICBI
UEQ0ClsgICAgMi41ODA4MzZdIFtkcm1dICAgRERDOiAweDY1MzAgMHg2NTMwIDB4NjUzNCAweDY1
MzQgMHg2NTM4IDB4NjUzOCAweDY1M2MgMHg2NTNjClsgICAgMi41ODA4MzddIFtkcm1dICAgRW5j
b2RlcnM6ClsgICAgMi41ODA4MzhdIFtkcm1dICAgICBERlAxOiBJTlRFUk5BTF9VTklQSFkyClsg
ICAgMi41ODA4MzldIFtkcm1dIENvbm5lY3RvciAxOgpbICAgIDIuNTgwODM5XSBbZHJtXSAgIERQ
LTIKWyAgICAyLjU4MDg0MF0gW2RybV0gICBIUEQ1ClsgICAgMi41ODA4NDFdIFtkcm1dICAgRERD
OiAweDY1NDAgMHg2NTQwIDB4NjU0NCAweDY1NDQgMHg2NTQ4IDB4NjU0OCAweDY1NGMgMHg2NTRj
ClsgICAgMi41ODA4NDJdIFtkcm1dICAgRW5jb2RlcnM6ClsgICAgMi41ODA4NDNdIFtkcm1dICAg
ICBERlAyOiBJTlRFUk5BTF9VTklQSFkyClsgICAgMi41ODA4NDRdIFtkcm1dIENvbm5lY3RvciAy
OgpbICAgIDIuNTgwODQ0XSBbZHJtXSAgIEhETUktQS0xClsgICAgMi41ODA4NDVdIFtkcm1dICAg
SFBEMQpbICAgIDIuNTgwODQ2XSBbZHJtXSAgIEREQzogMHg2NTUwIDB4NjU1MCAweDY1NTQgMHg2
NTU0IDB4NjU1OCAweDY1NTggMHg2NTVjIDB4NjU1YwpbICAgIDIuNTgwODQ3XSBbZHJtXSAgIEVu
Y29kZXJzOgpbICAgIDIuNTgwODQ4XSBbZHJtXSAgICAgREZQMzogSU5URVJOQUxfVU5JUEhZMQpb
ICAgIDIuNTgwODQ4XSBbZHJtXSBDb25uZWN0b3IgMzoKWyAgICAyLjU4MDg0OV0gW2RybV0gICBE
VkktSS0xClsgICAgMi41ODA4NTBdIFtkcm1dICAgSFBENgpbICAgIDIuNTgwODUxXSBbZHJtXSAg
IEREQzogMHg2NTgwIDB4NjU4MCAweDY1ODQgMHg2NTg0IDB4NjU4OCAweDY1ODggMHg2NThjIDB4
NjU4YwpbICAgIDIuNTgwODUyXSBbZHJtXSAgIEVuY29kZXJzOgpbICAgIDIuNTgwODUyXSBbZHJt
XSAgICAgREZQNDogSU5URVJOQUxfVU5JUEhZClsgICAgMi41ODA4NTNdIFtkcm1dICAgICBDUlQx
OiBJTlRFUk5BTF9LTERTQ1BfREFDMQpbICAgIDIuNTgwOTA5XSBbZHJtXSBJbnRlcm5hbCB0aGVy
bWFsIGNvbnRyb2xsZXIgd2l0aCBmYW4gY29udHJvbApbICAgIDIuNTgwOTQ4XSBbZHJtXSByYWRl
b246IHBvd2VyIG1hbmFnZW1lbnQgaW5pdGlhbGl6ZWQKWyAgICAyLjYzOTIyNF0gW2RybV0gZmIg
bWFwcGFibGUgYXQgMHhDMTM4ODAwMApbICAgIDIuNjM5MjI3XSBbZHJtXSB2cmFtIGFwcGVyIGF0
IDB4QzAwMDAwMDAKWyAgICAyLjYzOTIyOF0gW2RybV0gc2l6ZSA4Mjk0NDAwClsgICAgMi42Mzky
MjldIFtkcm1dIGZiIGRlcHRoIGlzIDI0ClsgICAgMi42MzkyMzBdIFtkcm1dICAgIHBpdGNoIGlz
IDc2ODAKWyAgICAyLjY2MTQyNV0gQ29uc29sZTogc3dpdGNoaW5nIHRvIGNvbG91ciBmcmFtZSBi
dWZmZXIgZGV2aWNlIDI0MHg2NwpbICAgIDIuNjY1MDQ5XSByYWRlb24gMDAwMDowMTowMC4wOiBm
YjA6IHJhZGVvbmRybWZiIGZyYW1lIGJ1ZmZlciBkZXZpY2UKWyAgICAyLjY2NTA1MV0gcmFkZW9u
IDAwMDA6MDE6MDAuMDogcmVnaXN0ZXJlZCBwYW5pYyBub3RpZmllcgpbICAgIDIuNjY1MDU1XSBb
ZHJtXSBJbml0aWFsaXplZCByYWRlb24gMi4zNC4wIDIwMDgwNTI4IGZvciAwMDAwOjAxOjAwLjAg
b24gbWlub3IgMApbICAgIDIuNzA5MzM5XSB1c2IgOC00OiBOZXcgVVNCIGRldmljZSBmb3VuZCwg
aWRWZW5kb3I9MDQ4ZCwgaWRQcm9kdWN0PTEzMzYKWyAgICAyLjcwOTM0M10gdXNiIDgtNDogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTMKWyAg
ICAyLjcwOTM0NV0gdXNiIDgtNDogUHJvZHVjdDogTWFzcyBTdG9yYWdlIERldmljZQpbICAgIDIu
NzA5MzQ2XSB1c2IgOC00OiBNYW51ZmFjdHVyZXI6IEdlbmVyaWMgICAKWyAgICAyLjcwOTM0OF0g
dXNiIDgtNDogU2VyaWFsTnVtYmVyOiAwMDAwMDAwMDAwMDAwNgpbICAgIDIuNzc3MzU4XSB4b3I6
IG1lYXN1cmluZyBzb2Z0d2FyZSBjaGVja3N1bSBzcGVlZApbICAgIDIuODE2MDU3XSAgICA4cmVn
cyAgICAgOiAxNjI1Ni4wMDAgTUIvc2VjClsgICAgMi44NDQwOTFdIHVzYiA5LTM6IG5ldyBsb3ct
c3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyBvaGNpLXBjaQpbICAgIDIuODU2MDU3XSAg
ICA4cmVnc19wcmVmZXRjaDogMTQ1MzQuMDAwIE1CL3NlYwpbICAgIDIuODk2MDU3XSAgICAzMnJl
Z3MgICAgOiAxMzA4Ny4wMDAgTUIvc2VjClsgICAgMi45MzYwNTddICAgIDMycmVnc19wcmVmZXRj
aDogMTA5MjEuMDAwIE1CL3NlYwpbICAgIDIuOTc2MDU5XSAgICBnZW5lcmljX3NzZTogIDgwNzcu
MDAwIE1CL3NlYwpbICAgIDMuMDEzMTU3XSB1c2IgOS0zOiBOZXcgVVNCIGRldmljZSBmb3VuZCwg
aWRWZW5kb3I9MDlkYSwgaWRQcm9kdWN0PTAwMGEKWyAgICAzLjAxMzE2MV0gdXNiIDktMzogTmV3
IFVTQiBkZXZpY2Ugc3RyaW5nczogTWZyPTEsIFByb2R1Y3Q9MiwgU2VyaWFsTnVtYmVyPTAKWyAg
ICAzLjAxMzE2M10gdXNiIDktMzogUHJvZHVjdDogUFMvMitVU0IgTW91c2UKWyAgICAzLjAxMzE2
NF0gdXNiIDktMzogTWFudWZhY3R1cmVyOiBBNFRlY2gKWyAgICAzLjAxNjAxOV0gICAgcHJlZmV0
Y2g2NC1zc2U6ICA4MjY5LjAwMCBNQi9zZWMKWyAgICAzLjAyMDMzMV0gaW5wdXQ6IEE0VGVjaCBQ
Uy8yK1VTQiBNb3VzZSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTIuMC91c2I5Lzkt
My85LTM6MS4wL2lucHV0L2lucHV0MApbICAgIDMuMDIwNTMzXSBhNHRlY2ggMDAwMzowOURBOjAw
MEEuMDAwMTogaW5wdXQsaGlkcmF3MDogVVNCIEhJRCB2MS4xMCBNb3VzZSBbQTRUZWNoIFBTLzIr
VVNCIE1vdXNlXSBvbiB1c2ItMDAwMDowMDoxMi4wLTMvaW5wdXQwClsgICAgMy4wMjA1NTNdIHVz
YmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgdXNiaGlkClsgICAgMy4wMjA1
NTRdIHVzYmhpZDogVVNCIEhJRCBjb3JlIGRyaXZlcgpbICAgIDMuMDU2MDU4XSAgICBhdnggICAg
ICAgOiAgNDEyNC4wMDAgTUIvc2VjClsgICAgMy4wNTYwNjBdIHhvcjogdXNpbmcgZnVuY3Rpb246
IDhyZWdzICgxNjI1Ni4wMDAgTUIvc2VjKQpbICAgIDMuMTI0MDYwXSByYWlkNjogc3NlMngxICAg
IDcyNDEgTUIvcwpbICAgIDMuMTQ4MDk0XSB1c2IgOS00OiBuZXcgbG93LXNwZWVkIFVTQiBkZXZp
Y2UgbnVtYmVyIDMgdXNpbmcgb2hjaS1wY2kKWyAgICAzLjE5MjA1OV0gcmFpZDY6IHNzZTJ4MiAg
IDExMzM1IE1CL3MKWyAgICAzLjI2MDA2MF0gcmFpZDY6IHNzZTJ4NCAgIDEyOTA5IE1CL3MKWyAg
ICAzLjI2MDA2MV0gcmFpZDY6IHVzaW5nIGFsZ29yaXRobSBzc2UyeDQgKDEyOTA5IE1CL3MpClsg
ICAgMy4yNjAwNjJdIHJhaWQ2OiB1c2luZyBzc3NlM3gyIHJlY292ZXJ5IGFsZ29yaXRobQpbICAg
IDMuMjYzMTYzXSBiaW86IGNyZWF0ZSBzbGFiIDxiaW8tMT4gYXQgMQpbICAgIDMuMjYzMzc0XSBC
dHJmcyBsb2FkZWQKWyAgICAzLjMzMjE1Nl0gdXNiIDktNDogTmV3IFVTQiBkZXZpY2UgZm91bmQs
IGlkVmVuZG9yPTA0NWUsIGlkUHJvZHVjdD0wMGRkClsgICAgMy4zMzIxNjRdIHVzYiA5LTQ6IE5l
dyBVU0IgZGV2aWNlIHN0cmluZ3M6IE1mcj0xLCBQcm9kdWN0PTIsIFNlcmlhbE51bWJlcj0wClsg
ICAgMy4zMzIxNjZdIHVzYiA5LTQ6IFByb2R1Y3Q6IENvbWZvcnQgQ3VydmUgS2V5Ym9hcmQgMjAw
MApbICAgIDMuMzMyMTY3XSB1c2IgOS00OiBNYW51ZmFjdHVyZXI6IE1pY3Jvc29mdApbICAgIDMu
MzU4MjEwXSBpbnB1dDogTWljcm9zb2Z0IENvbWZvcnQgQ3VydmUgS2V5Ym9hcmQgMjAwMCBhcyAv
ZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MTIuMC91c2I5LzktNC85LTQ6MS4wL2lucHV0L2lu
cHV0MQpbICAgIDMuMzU4MzcyXSBoaWQtZ2VuZXJpYyAwMDAzOjA0NUU6MDBERC4wMDAyOiBpbnB1
dCxoaWRyYXcxOiBVU0IgSElEIHYxLjExIEtleWJvYXJkIFtNaWNyb3NvZnQgQ29tZm9ydCBDdXJ2
ZSBLZXlib2FyZCAyMDAwXSBvbiB1c2ItMDAwMDowMDoxMi4wLTQvaW5wdXQwClsgICAgMy4zNjAy
NDhdIGlucHV0OiBNaWNyb3NvZnQgQ29tZm9ydCBDdXJ2ZSBLZXlib2FyZCAyMDAwIGFzIC9kZXZp
Y2VzL3BjaTAwMDA6MDAvMDAwMDowMDoxMi4wL3VzYjkvOS00LzktNDoxLjEvaW5wdXQvaW5wdXQy
ClsgICAgMy4zNjA0NjBdIGhpZC1nZW5lcmljIDAwMDM6MDQ1RTowMERELjAwMDM6IGlucHV0LGhp
ZHJhdzI6IFVTQiBISUQgdjEuMTEgRGV2aWNlIFtNaWNyb3NvZnQgQ29tZm9ydCBDdXJ2ZSBLZXli
b2FyZCAyMDAwXSBvbiB1c2ItMDAwMDowMDoxMi4wLTQvaW5wdXQxClsgICAgNS42ODczOTddIEVY
VDQtZnMgKHNkYzMpOiBtb3VudGVkIGZpbGVzeXN0ZW0gd2l0aCBvcmRlcmVkIGRhdGEgbW9kZS4g
T3B0czogYWNsLHVzZXJfeGF0dHIKWyAgICA1Ljk4NTgwMV0gRVhUNC1mcyAoc2RjMyk6IHJlLW1v
dW50ZWQuIE9wdHM6IGFjbCx1c2VyX3hhdHRyClsgICAgNi44MjEzOTFdIHN5c3RlbWRbMV06IHN5
c3RlbWQgMjA4IHJ1bm5pbmcgaW4gc3lzdGVtIG1vZGUuICgrUEFNICtMSUJXUkFQICtBVURJVCAr
U0VMSU5VWCAtSU1BICtTWVNWSU5JVCArTElCQ1JZUFRTRVRVUCArR0NSWVBUICtBQ0wgK1haKQpb
ICAgIDYuODIxNDQ2XSBzeXN0ZW1kWzFdOiBEZXRlY3RlZCB2aXJ0dWFsaXphdGlvbiAneGVuJy4K
WyAgICA3LjExNjYxMl0gc3lzdGVtZFsxXTogSW5zZXJ0ZWQgbW9kdWxlICdhdXRvZnM0JwpbICAg
IDcuMTI5MzI5XSBzeXN0ZW1kWzFdOiBTZXQgaG9zdG5hbWUgdG8gPGxpbnV4LWI1MmQ+LgpbICAg
IDcuNTQ5Nzg0XSBkZXZpY2UtbWFwcGVyOiB1ZXZlbnQ6IHZlcnNpb24gMS4wLjMKWyAgICA3LjU0
OTg1NV0gZGV2aWNlLW1hcHBlcjogaW9jdGw6IDQuMjUuMC1pb2N0bCAoMjAxMy0wNi0yNikgaW5p
dGlhbGlzZWQ6IGRtLWRldmVsQHJlZGhhdC5jb20KWyAgICA3LjU1MDkwMF0gTFZNOiBBY3RpdmF0
aW9uIGdlbmVyYXRvciBzdWNjZXNzZnVsbHkgY29tcGxldGVkLgpbICAgIDguNTM3MTE5XSBzeXN0
ZW1kWzFdOiBTdGFydGVkIENvbGxlY3QgUmVhZC1BaGVhZCBEYXRhLgpbICAgIDguNTM3MTMxXSBz
eXN0ZW1kWzFdOiBTdGFydGVkIFJlcGxheSBSZWFkLUFoZWFkIERhdGEuClsgICAgOC41MzcxMzld
IHN5c3RlbWRbMV06IEV4cGVjdGluZyBkZXZpY2UgZGV2LXh2Yy0xLmRldmljZS4uLgpbICAgIDgu
NTM3MjI2XSBzeXN0ZW1kWzFdOiBFeHBlY3RpbmcgZGV2aWNlIGRldi14dmMwLmRldmljZS4uLgpb
ICAgIDguNTM3MjgzXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBTeXN0ZW0gVGltZSBTeW5jaHJvbml6
ZWQuClsgICAgOC41MzczNDJdIHN5c3RlbWRbMV06IFJlYWNoZWQgdGFyZ2V0IFN5c3RlbSBUaW1l
IFN5bmNocm9uaXplZC4KWyAgICA4LjUzNzM1MF0gc3lzdGVtZFsxXTogU3RhcnRpbmcgUmVtb3Rl
IEZpbGUgU3lzdGVtcyAoUHJlKS4KWyAgICA4LjUzNzQwNl0gc3lzdGVtZFsxXTogUmVhY2hlZCB0
YXJnZXQgUmVtb3RlIEZpbGUgU3lzdGVtcyAoUHJlKS4KWyAgICA4LjUzNzQxM10gc3lzdGVtZFsx
XTogU3RhcnRpbmcgUmVtb3RlIEZpbGUgU3lzdGVtcy4KWyAgICA4LjUzNzQ2N10gc3lzdGVtZFsx
XTogUmVhY2hlZCB0YXJnZXQgUmVtb3RlIEZpbGUgU3lzdGVtcy4KWyAgICA4LjUzNzQ3Nl0gc3lz
dGVtZFsxXTogU3RhcnRpbmcgU3lzbG9nIFNvY2tldC4KWyAgICA4LjUzNzU2Nl0gc3lzdGVtZFsx
XTogTGlzdGVuaW5nIG9uIFN5c2xvZyBTb2NrZXQuClsgICAgOC41Mzc1NzVdIHN5c3RlbWRbMV06
IFN0YXJ0aW5nIERlbGF5ZWQgU2h1dGRvd24gU29ja2V0LgpbICAgIDguNTM3NjQ1XSBzeXN0ZW1k
WzFdOiBMaXN0ZW5pbmcgb24gRGVsYXllZCBTaHV0ZG93biBTb2NrZXQuClsgICAgOC41Mzc2NTNd
IHN5c3RlbWRbMV06IFN0YXJ0aW5nIC9kZXYvaW5pdGN0bCBDb21wYXRpYmlsaXR5IE5hbWVkIFBp
cGUuClsgICAgOC41Mzc3MjVdIHN5c3RlbWRbMV06IExpc3RlbmluZyBvbiAvZGV2L2luaXRjdGwg
Q29tcGF0aWJpbGl0eSBOYW1lZCBQaXBlLgpbICAgIDguNTM3NzM3XSBzeXN0ZW1kWzFdOiBTdGFy
dGluZyBKb3VybmFsIFNvY2tldC4KWyAgICA4LjUzNzgzNV0gc3lzdGVtZFsxXTogTGlzdGVuaW5n
IG9uIEpvdXJuYWwgU29ja2V0LgpbICAgIDguNTgwMDcwXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBM
b2FkIEtlcm5lbCBNb2R1bGVzLi4uClsgICAgOC41ODExNDJdIHN5c3RlbWRbMV06IFN0YXJ0aW5n
IENyZWF0ZSBsaXN0IG9mIHJlcXVpcmVkIHN0YXRpYyBkZXZpY2Ugbm9kZXMgZm9yIHRoZSBjdXJy
ZW50IGtlcm5lbC4uLgpbICAgIDguNTgxNzcxXSBzeXN0ZW1kWzFdOiBNb3VudGluZyBEZWJ1ZyBG
aWxlIFN5c3RlbS4uLgpbICAgIDguNTgyNDQ1XSBzeXN0ZW1kWzFdOiBNb3VudGVkIEh1Z2UgUGFn
ZXMgRmlsZSBTeXN0ZW0uClsgICAgOC41ODI0OTRdIHN5c3RlbWRbMV06IFN0YXJ0aW5nIFNldHVw
IFZpcnR1YWwgQ29uc29sZS4uLgpbICAgIDguNTgzMjAwXSBzeXN0ZW1kWzFdOiBTdGFydGluZyBD
cmVhdGUgZHluYW1pYyBydWxlIGZvciAvZGV2L3Jvb3QgbGluay4uLgpbICAgIDguNTgzOTE5XSBz
eXN0ZW1kWzFdOiBTdGFydGluZyBKb3VybmFsIFNlcnZpY2UuLi4KWyAgICA4LjU4NDcxNl0gc3lz
dGVtZFsxXTogU3RhcnRlZCBKb3VybmFsIFNlcnZpY2UuClsgICAgOC45ODU4MjNdIHN5c3RlbWQt
am91cm5hbGRbMjgzXTogVmFjdXVtaW5nIGRvbmUsIGZyZWVkIDAgYnl0ZXMKWyAgICA5LjI2OTIy
OF0gc2QgMDowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMCB0eXBlIDAKWyAgICA5LjI2
OTI1OF0gc2QgMTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMSB0eXBlIDAKWyAgICA5
LjI2OTI4NV0gc2QgMjowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNnMiB0eXBlIDAKWyAg
ICA5LjgxNjEyNV0gRVhUNC1mcyAoc2RjMyk6IHJlLW1vdW50ZWQuIE9wdHM6IGFjbCx1c2VyX3hh
dHRyClsgICAgOS44NzY0ODRdIHN5c3RlbWQtdWRldmRbMzIwXTogc3RhcnRpbmcgdmVyc2lvbiAy
MDgKWyAgIDEwLjgwNTg4Nl0gcGlpeDRfc21idXMgMDAwMDowMDoxNC4wOiBTTUJ1cyBIb3N0IENv
bnRyb2xsZXIgYXQgMHhiMDAsIHJldmlzaW9uIDAKWyAgIDEwLjgwODU1NV0gc2NzaTMgOiBwYXRh
X2F0aWl4cApbICAgMTAuODA5MDU1XSBzY3NpNCA6IHBhdGFfYXRpaXhwClsgICAxMC44MDk0ODJd
IGF0YTQ6IFBBVEEgbWF4IFVETUEvMTAwIGNtZCAweDFmMCBjdGwgMHgzZjYgYm1kbWEgMHhmMTAw
IGlycSAxNApbICAgMTAuODA5NDg0XSBhdGE1OiBQQVRBIG1heCBVRE1BLzEwMCBjbWQgMHgxNzAg
Y3RsIDB4Mzc2IGJtZG1hIDB4ZjEwOCBpcnEgMTUKWyAgIDEwLjgwOTk3NF0gc2hwY2hwOiBTdGFu
ZGFyZCBIb3QgUGx1ZyBQQ0kgQ29udHJvbGxlciBEcml2ZXIgdmVyc2lvbjogMC40ClsgICAxMC44
NzI5NjRdIGlucHV0OiBQb3dlciBCdXR0b24gYXMgL2RldmljZXMvTE5YU1lTVE06MDAvZGV2aWNl
OjAwL1BOUDBDMEM6MDAvaW5wdXQvaW5wdXQzClsgICAxMC44NzMwMzFdIEFDUEk6IFBvd2VyIEJ1
dHRvbiBbUFdSQl0KWyAgIDEwLjg3MzA2OF0gaW5wdXQ6IFBvd2VyIEJ1dHRvbiBhcyAvZGV2aWNl
cy9MTlhTWVNUTTowMC9MTlhQV1JCTjowMC9pbnB1dC9pbnB1dDQKWyAgIDEwLjg3MzA5M10gQUNQ
STogUG93ZXIgQnV0dG9uIFtQV1JGXQpbICAgMTEuMjM4MjQxXSByODE2OSBHaWdhYml0IEV0aGVy
bmV0IGRyaXZlciAyLjNMSy1OQVBJIGxvYWRlZApbICAgMTEuMjM4NTQ3XSByODE2OSAwMDAwOjA1
OjAwLjA6IGlycSA2MSAoMjYzKSBmb3IgTVNJL01TSS1YClsgICAxMS4yMzg2OTRdIHI4MTY5IDAw
MDA6MDU6MDAuMCBldGgwOiBSVEw4MTY4ZXZsLzgxMTFldmwgYXQgMHhmZmZmYzkwMDAwMDJhMDAw
LCBiYzo1ZjpmNDo4YjoyNjo4MSwgWElEIDBjOTAwODAwIElSUSA2MQpbICAgMTEuMjM4Njk2XSBy
ODE2OSAwMDAwOjA1OjAwLjAgZXRoMDoganVtYm8gZmVhdHVyZXMgW2ZyYW1lczogOTIwMCBieXRl
cywgdHggY2hlY2tzdW1taW5nOiBrb10KWyAgIDExLjI5MDkzOV0gaW5wdXQ6IFBDIFNwZWFrZXIg
YXMgL2RldmljZXMvcGxhdGZvcm0vcGNzcGtyL2lucHV0L2lucHV0NQpbICAgMTEuMzE1NTUzXSB0
YnM2OTgyZmU6IG1vZHVsZSBsaWNlbnNlICdUdXJib1NpZ2h0IFByb3ByaWV0YXJ5OiB3d3cudGJz
ZHR2LmNvbScgdGFpbnRzIGtlcm5lbC4KWyAgIDExLjMxNTU1N10gRGlzYWJsaW5nIGxvY2sgZGVi
dWdnaW5nIGR1ZSB0byBrZXJuZWwgdGFpbnQKWyAgIDExLjM3MjI3OV0gU2VyaWFsOiA4MjUwLzE2
NTUwIGRyaXZlciwgMzIgcG9ydHMsIElSUSBzaGFyaW5nIGRpc2FibGVkClsgICAxMS4zOTYzNjld
IDAwMDA6MDI6MDcuMDogdHR5UzUgYXQgSS9PIDB4ZDA2MCAoaXJxID0gMjIpIGlzIGEgMTY1NTBB
ClsgICAxMS42NTQxMjZdIHVzYi1zdG9yYWdlIDgtNDoxLjA6IFVTQiBNYXNzIFN0b3JhZ2UgZGV2
aWNlIGRldGVjdGVkClsgICAxMS42NTQyMjNdIHNjc2k1IDogdXNiLXN0b3JhZ2UgOC00OjEuMApb
ICAgMTEuNjU0Mjg3XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVz
Yi1zdG9yYWdlClsgICAxMS43Mzg0MTJdIGlucHV0OiBYYm94IDM2MCBXaXJlbGVzcyBSZWNlaXZl
ciAoWEJPWCkgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE1LjIvMDAwMDowNDowMC4w
L3VzYjYvNi00LzYtNDoxLjAvaW5wdXQvaW5wdXQ2ClsgICAxMS43Mzg1MTldIGlucHV0OiBYYm94
IDM2MCBXaXJlbGVzcyBSZWNlaXZlciAoWEJPWCkgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAw
OjAwOjE1LjIvMDAwMDowNDowMC4wL3VzYjYvNi00LzYtNDoxLjIvaW5wdXQvaW5wdXQ3ClsgICAx
MS43Mzg1OTJdIGlucHV0OiBYYm94IDM2MCBXaXJlbGVzcyBSZWNlaXZlciAoWEJPWCkgYXMgL2Rl
dmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE1LjIvMDAwMDowNDowMC4wL3VzYjYvNi00LzYtNDox
LjQvaW5wdXQvaW5wdXQ4ClsgICAxMS43Mzg2NjJdIGlucHV0OiBYYm94IDM2MCBXaXJlbGVzcyBS
ZWNlaXZlciAoWEJPWCkgYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAwOjE1LjIvMDAwMDow
NDowMC4wL3VzYjYvNi00LzYtNDoxLjYvaW5wdXQvaW5wdXQ5ClsgICAxMS43Mzg3MTFdIHVzYmNv
cmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIgeHBhZApbICAgMTEuODg1NzI0XSBz
bmQtY2EwMTA2OiBNb2RlbCAxMDBhIFJldiAwMDAwMDAwMCBTZXJpYWwgMTAwYTExMDIKWyAgIDEy
LjM3NjA5Nl0gUmVnaXN0ZXJlZCBJUiBrZXltYXAgcmMtdGJzLW5lYwpbICAgMTIuMzc2MTcyXSBp
bnB1dDogc2FhNzE2eCBJUiAoVHVyYm9TaWdodCBUQlMgNjIyMCkgYXMgL2RldmljZXMvcGNpMDAw
MDowMC8wMDAwOjAwOjE1LjAvMDAwMDowMzowMC4wL3JjL3JjMC9pbnB1dDEwClsgICAxMi4zNzYy
MjhdIHJjMDogc2FhNzE2eCBJUiAoVHVyYm9TaWdodCBUQlMgNjIyMCkgYXMgL2RldmljZXMvcGNp
MDAwMDowMC8wMDAwOjAwOjE1LjAvMDAwMDowMzowMC4wL3JjL3JjMApbICAgMTIuMzc2Mjc3XSBE
VkI6IHJlZ2lzdGVyaW5nIG5ldyBhZGFwdGVyIChTQUE3MTZ4IGR2YiBhZGFwdGVyKQpbICAgMTIu
NDQxNzE1XSBJUiBORUMgcHJvdG9jb2wgaGFuZGxlciBpbml0aWFsaXplZApbICAgMTIuNDU0MDkx
XSB0ZGExODIxMjogTlhQIFREQTE4MjEySE4gc3VjY2Vzc2Z1bGx5IGlkZW50aWZpZWQuClsgICAx
Mi40NTQwOTddIERWQjogcmVnaXN0ZXJpbmcgYWRhcHRlciAwIGZyb250ZW5kIDAgKFNvbnkgQ1hE
MjgyMFIgKERWQi1UL1QyKSkuLi4KWyAgIDEyLjQ2NjQxMl0gcHBkZXY6IHVzZXItc3BhY2UgcGFy
YWxsZWwgcG9ydCBkcml2ZXIKWyAgIDEyLjQ3NjIzMV0gc3lzdGVtZC11ZGV2ZFszNDVdOiByZW5h
bWVkIG5ldHdvcmsgaW50ZXJmYWNlIGV0aDAgdG8gZW5wNXMwClsgICAxMi41NDIyMThdIElSIFJD
NSh4KSBwcm90b2NvbCBoYW5kbGVyIGluaXRpYWxpemVkClsgICAxMi41OTcwNDldIElSIFJDNiBw
cm90b2NvbCBoYW5kbGVyIGluaXRpYWxpemVkClsgICAxMi42NTI3OTJdIHNjc2kgNTowOjA6MDog
RGlyZWN0LUFjY2VzcyAgICAgR2VuZXJpYyAgU3RvcmFnZSBEZXZpY2UgICAwLjAwIFBROiAwIEFO
U0k6IDIKWyAgIDEyLjY1Mjk3M10gc2QgNTowOjA6MDogQXR0YWNoZWQgc2NzaSBnZW5lcmljIHNn
MyB0eXBlIDAKWyAgIDEyLjY1NjM1OF0gc2QgNTowOjA6MDogW3NkZF0gQXR0YWNoZWQgU0NTSSBy
ZW1vdmFibGUgZGlzawpbICAgMTMuMTE0MzA0XSBJUiBKVkMgcHJvdG9jb2wgaGFuZGxlciBpbml0
aWFsaXplZApbICAgMTMuMjY1MTcwXSBJUiBTb255IHByb3RvY29sIGhhbmRsZXIgaW5pdGlhbGl6
ZWQKWyAgIDEzLjQ4MDg1Ml0gQUxTQSBoZGFfaW50ZWwuYzozMTE2IDAwMDA6MDE6MDAuMTogSGFu
ZGxlIFZHQS1zd2l0Y2hlcm9vIGF1ZGlvIGNsaWVudApbICAgMTMuNDgwODU2XSBBTFNBIGhkYV9p
bnRlbC5jOjMzMTcgMDAwMDowMTowMC4xOiBVc2luZyBMUElCIHBvc2l0aW9uIGZpeApbICAgMTMu
NDgwODU3XSBBTFNBIGhkYV9pbnRlbC5jOjM0MzggMDAwMDowMTowMC4xOiBGb3JjZSB0byBub24t
c25vb3AgbW9kZQpbICAgMTMuNDgwOTM5XSBzbmRfaGRhX2ludGVsIDAwMDA6MDE6MDAuMTogaXJx
IDYyICgyNjIpIGZvciBNU0kvTVNJLVgKWyAgIDEzLjQ4MzU4OV0gQUxTQSBoZGFfaW50ZWwuYzox
Nzg3IDAwMDA6MDE6MDAuMTogRW5hYmxlIHN5bmNfd3JpdGUgZm9yIHN0YWJsZSBjb21tdW5pY2F0
aW9uClsgICAxMy41ODk1NjddIGlucHV0OiBNQ0UgSVIgS2V5Ym9hcmQvTW91c2UgKHNhYTcxNngp
IGFzIC9kZXZpY2VzL3ZpcnR1YWwvaW5wdXQvaW5wdXQxMQpbICAgMTMuNTkwMTI0XSBJUiBNQ0Ug
S2V5Ym9hcmQvbW91c2UgcHJvdG9jb2wgaGFuZGxlciBpbml0aWFsaXplZApbICAgMTMuNjUxMDY5
XSBzeXN0ZW1kLWpvdXJuYWxkWzI4M106IFJlY2VpdmVkIHJlcXVlc3QgdG8gZmx1c2ggcnVudGlt
ZSBqb3VybmFsIGZyb20gUElEIDEKWyAgIDEzLjc2NjY5MV0gaW5wdXQ6IEhEQSBBVEkgSERNSSBI
RE1JL0RQLHBjbT0xMSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MDIuMC8wMDAwOjAx
OjAwLjEvc291bmQvY2FyZDEvaW5wdXQxMgpbICAgMTMuNzY2Nzg0XSBpbnB1dDogSERBIEFUSSBI
RE1JIEhETUkvRFAscGNtPTEwIGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAwMDowMDowMi4wLzAw
MDA6MDE6MDAuMS9zb3VuZC9jYXJkMS9pbnB1dDEzClsgICAxMy43NjY4NTBdIGlucHV0OiBIREEg
QVRJIEhETUkgSERNSS9EUCxwY209OSBhcyAvZGV2aWNlcy9wY2kwMDAwOjAwLzAwMDA6MDA6MDIu
MC8wMDAwOjAxOjAwLjEvc291bmQvY2FyZDEvaW5wdXQxNApbICAgMTMuNzY2OTA1XSBpbnB1dDog
SERBIEFUSSBIRE1JIEhETUkvRFAscGNtPTggYXMgL2RldmljZXMvcGNpMDAwMDowMC8wMDAwOjAw
OjAyLjAvMDAwMDowMTowMC4xL3NvdW5kL2NhcmQxL2lucHV0MTUKWyAgIDEzLjc2Njk2NV0gaW5w
dXQ6IEhEQSBBVEkgSERNSSBIRE1JL0RQLHBjbT03IGFzIC9kZXZpY2VzL3BjaTAwMDA6MDAvMDAw
MDowMDowMi4wLzAwMDA6MDE6MDAuMS9zb3VuZC9jYXJkMS9pbnB1dDE2ClsgICAxMy43NjcwMjJd
IGlucHV0OiBIREEgQVRJIEhETUkgSERNSS9EUCxwY209MyBhcyAvZGV2aWNlcy9wY2kwMDAwOjAw
LzAwMDA6MDA6MDIuMC8wMDAwOjAxOjAwLjEvc291bmQvY2FyZDEvaW5wdXQxNwpbICAgMTMuODU3
NjU3XSBsaXJjX2RldjogSVIgUmVtb3RlIENvbnRyb2wgZHJpdmVyIHJlZ2lzdGVyZWQsIG1ham9y
IDI1MCAKWyAgIDEzLjg2NDI5OF0gcmMgcmMwOiBsaXJjX2RldjogZHJpdmVyIGlyLWxpcmMtY29k
ZWMgKHNhYTcxNngpIHJlZ2lzdGVyZWQgYXQgbWlub3IgPSAwClsgICAxMy44NjQzMDFdIElSIExJ
UkMgYnJpZGdlIGhhbmRsZXIgaW5pdGlhbGl6ZWQKWyAgIDE0LjMyMTY0Ml0gQWRkaW5nIDE1NzI5
NDhrIHN3YXAgb24gL2Rldi9zZGM2LiAgUHJpb3JpdHk6LTEgZXh0ZW50czoxIGFjcm9zczoxNTcy
OTQ4ayBGUwpbICAgMTQuNzM1NTk2XSB0eXBlPTE0MDAgYXVkaXQoMTM5MTQ2NjYwNi41OTY6Mik6
IGFwcGFybW9yPSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIve3Vzci8s
fWJpbi9waW5nIiBwaWQ9NTExIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0Ljc1OTU3MV0g
dHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2MDYuNjIwOjMpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVy
YXRpb249InByb2ZpbGVfbG9hZCIgbmFtZT0iL3NiaW4va2xvZ2QiIHBpZD01MjAgY29tbT0iYXBw
YXJtb3JfcGFyc2VyIgpbICAgMTQuNzc3OTExXSB0eXBlPTE0MDAgYXVkaXQoMTM5MTQ2NjYwNi42
NDA6NCk6IGFwcGFybW9yPSJTVEFUVVMiIG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIv
c2Jpbi9zeXNsb2ctbmciIHBpZD01MjQgY29tbT0iYXBwYXJtb3JfcGFyc2VyIgpbICAgMTQuODAx
Mzk2XSB0eXBlPTE0MDAgYXVkaXQoMTM5MTQ2NjYwNi42NjQ6NSk6IGFwcGFybW9yPSJTVEFUVVMi
IG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIvc2Jpbi9zeXNsb2dkIiBwaWQ9NTI4IGNv
bW09ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0Ljg1MDI1N10gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0
NjY2MDYuNzEyOjYpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIg
bmFtZT0iL3Vzci9saWIvYXBhY2hlMi9tcG0tcHJlZm9yay9hcGFjaGUyIiBwaWQ9NTMzIGNvbW09
ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0Ljg1MDQ4MF0gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2
MDYuNzEyOjcpOiBhcHBhcm1vcj0iU1RBVFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgbmFt
ZT0iL3Vzci9saWIvYXBhY2hlMi9tcG0tcHJlZm9yay9hcGFjaGUyLy9ERUZBVUxUX1VSSSIgcGlk
PTUzMyBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAxNC44NTA2NjldIHR5cGU9MTQwMCBhdWRp
dCgxMzkxNDY2NjA2LjcxMjo4KTogYXBwYXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxl
X2xvYWQiIG5hbWU9Ii91c3IvbGliL2FwYWNoZTIvbXBtLXByZWZvcmsvYXBhY2hlMi8vSEFORExJ
TkdfVU5UUlVTVEVEX0lOUFVUIiBwaWQ9NTMzIGNvbW09ImFwcGFybW9yX3BhcnNlciIKWyAgIDE0
Ljg1MDgzNF0gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2MDYuNzEyOjkpOiBhcHBhcm1vcj0iU1RB
VFVTIiBvcGVyYXRpb249InByb2ZpbGVfbG9hZCIgbmFtZT0iL3Vzci9saWIvYXBhY2hlMi9tcG0t
cHJlZm9yay9hcGFjaGUyLy9waHBzeXNpbmZvIiBwaWQ9NTMzIGNvbW09ImFwcGFybW9yX3BhcnNl
ciIKWyAgIDE0Ljg2OTA1NV0gdHlwZT0xNDAwIGF1ZGl0KDEzOTE0NjY2MDYuNzMyOjEwKTogYXBw
YXJtb3I9IlNUQVRVUyIgb3BlcmF0aW9uPSJwcm9maWxlX2xvYWQiIG5hbWU9Ii91c3IvbGliL2Rv
dmVjb3QvZGVsaXZlciIgcGlkPTUzNyBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAxNC44ODk2
MzJdIHR5cGU9MTQwMCBhdWRpdCgxMzkxNDY2NjA2Ljc1MjoxMSk6IGFwcGFybW9yPSJTVEFUVVMi
IG9wZXJhdGlvbj0icHJvZmlsZV9sb2FkIiBuYW1lPSIvdXNyL2xpYi9kb3ZlY290L2RvdmVjb3Qt
YXV0aCIgcGlkPTU0MSBjb21tPSJhcHBhcm1vcl9wYXJzZXIiClsgICAxNi4xOTY0NDRdIHhlbjpl
dnRjaG46IEV2ZW50LWNoYW5uZWwgZGV2aWNlIGluc3RhbGxlZApbICAgMTYuNzczODEwXSB1c2Jj
b3JlOiByZWdpc3RlcmVkIG5ldyBpbnRlcmZhY2UgZHJpdmVyIHVzYmJhY2sKWyAgIDE2Ljg3NjY0
Ml0gbmJkOiByZWdpc3RlcmVkIGRldmljZSBhdCBtYWpvciA0MwpbICAgMTcuMTg0Mzc0XSBVbmFi
bGUgdG8gcmVhZCBzeXNycSBjb2RlIGluIGNvbnRyb2wvc3lzcnEKWyAgIDE5LjYyMzg1M10gdmdh
YXJiOiBkZXZpY2UgY2hhbmdlZCBkZWNvZGVzOiBQQ0k6MDAwMDowMTowMC4wLG9sZGRlY29kZXM9
aW8rbWVtLGRlY29kZXM9bm9uZTpvd25zPWlvK21lbQpbICAgMTkuNjIzODU2XSB2Z2FhcmI6IHRy
YW5zZmVycmluZyBvd25lciBmcm9tIFBDSTowMDAwOjAxOjAwLjAgdG8gUENJOjAwMDA6MDA6MDEu
MApbICAgMjEuNjUwMTQ0XSByODE2OSAwMDAwOjA1OjAwLjAgZW5wNXMwOiBsaW5rIGRvd24KWyAg
IDIxLjY1MDE5MF0gSVB2NjogQUREUkNPTkYoTkVUREVWX1VQKTogZW5wNXMwOiBsaW5rIGlzIG5v
dCByZWFkeQpbICAgMjEuNjUwMTk0XSByODE2OSAwMDAwOjA1OjAwLjAgZW5wNXMwOiBsaW5rIGRv
d24KWyAgIDIxLjc5MTU2Nl0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29sIGZhbWlseSAxNwpbICAg
MjYuNDc3NTk1XSByODE2OSAwMDAwOjA1OjAwLjAgZW5wNXMwOiBsaW5rIHVwClsgICAyNi40Nzc2
MDVdIElQdjY6IEFERFJDT05GKE5FVERFVl9DSEFOR0UpOiBlbnA1czA6IGxpbmsgYmVjb21lcyBy
ZWFkeQpbICAgNDAuOTQ3NTE3XSBCcmlkZ2UgZmlyZXdhbGxpbmcgcmVnaXN0ZXJlZApbICAgNDAu
OTU3NTQ1XSBkZXZpY2UgZW5wNXMwIGVudGVyZWQgcHJvbWlzY3VvdXMgbW9kZQpbICAgNDAuOTYz
MTA2XSBicjA6IHBvcnQgMShlbnA1czApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQpbICAgNDAu
OTYzMTIyXSBicjA6IHBvcnQgMShlbnA1czApIGVudGVyZWQgZm9yd2FyZGluZyBzdGF0ZQpbICAg
NDMuMDE3ODQ2XSBFYnRhYmxlcyB2Mi4wIHJlZ2lzdGVyZWQKWyAgIDQzLjA4NjY4MV0gaXBfdGFi
bGVzOiAoQykgMjAwMC0yMDA2IE5ldGZpbHRlciBDb3JlIFRlYW0KWyAgIDQzLjEwNzgxNF0gaXA2
X3RhYmxlczogKEMpIDIwMDAtMjAwNiBOZXRmaWx0ZXIgQ29yZSBUZWFtClsgIDM0MC42ODI4Mjdd
IEJsdWV0b290aDogQ29yZSB2ZXIgMi4xNgpbICAzNDAuNjgyODUxXSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDMxClsgIDM0MC42ODI4NTJdIEJsdWV0b290aDogSENJIGRldmljZSBh
bmQgY29ubmVjdGlvbiBtYW5hZ2VyIGluaXRpYWxpemVkClsgIDM0MC42ODI4NjBdIEJsdWV0b290
aDogSENJIHNvY2tldCBsYXllciBpbml0aWFsaXplZApbICAzNDAuNjgyODYyXSBCbHVldG9vdGg6
IEwyQ0FQIHNvY2tldCBsYXllciBpbml0aWFsaXplZApbICAzNDAuNjgyODcwXSBCbHVldG9vdGg6
IFNDTyBzb2NrZXQgbGF5ZXIgaW5pdGlhbGl6ZWQKWyAgMzQwLjcwNjk3Ml0gQmx1ZXRvb3RoOiBC
TkVQIChFdGhlcm5ldCBFbXVsYXRpb24pIHZlciAxLjMKWyAgMzQwLjcwNjk3Nl0gQmx1ZXRvb3Ro
OiBCTkVQIGZpbHRlcnM6IHByb3RvY29sIG11bHRpY2FzdApbICAzNDAuNzA2OTg0XSBCbHVldG9v
dGg6IEJORVAgc29ja2V0IGxheWVyIGluaXRpYWxpemVkClsgIDM0My44MjU0MzVdIGZ1c2UgaW5p
dCAoQVBJIHZlcnNpb24gNy4yMikK
--001a113a332af9ed1b04f180c397
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a113a332af9ed1b04f180c397--


From xen-devel-bounces@lists.xen.org Mon Feb 03 13:51:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:51:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAJwD-0000Y4-Hc; Mon, 03 Feb 2014 13:51:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAJwB-0000Xz-A8
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 13:51:47 +0000
Received: from [193.109.254.147:54371] by server-7.bemta-14.messagelabs.com id
	F9/FF-23424-2FE9FE25; Mon, 03 Feb 2014 13:51:46 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391435504!1649719!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15431 invoked from network); 3 Feb 2014 13:51:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:51:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="99205105"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 13:51:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 08:51:43 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAJw7-0001gi-LH;
	Mon, 03 Feb 2014 13:51:43 +0000
Message-ID: <52EF9EEF.8050301@citrix.com>
Date: Mon, 3 Feb 2014 13:51:43 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
In-Reply-To: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: suravee.suthikulpanit@amd.com, xen-devel@lists.xensource.com,
	xiantao.zhang@intel.com, jbeulich@suse.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 13:47, Vitaliy Tomin wrote:
> My AMD APU based system completely crashes when trying to use HVM
> domUs. I asked earlier on the xen-users list
> http://lists.xen.org/archives/html/xen-users/2013-11/msg00063.html but
> was recommended to ask here.
> I've also found the same problem described with another AMD APU here:
> http://lists.xen.org/archives/html/xen-devel/2013-08/msg01395.html
>
> My system crashes every time I start an HVM domU. But if I use Xen
> compiled with debug info it runs stably, at least for hours (not
> tested over a longer run).
>
> My system is openSUSE 13.1  with Xen 4.4
> My hardware is
>
> ASRock  FM2A75 Pro4
> AMD A8-6600K APU
> Gigabyte Radeon 7850
> 8 GB DDR3 1600 MHz
>
> I've tested with a fresh Xen 4.4, and it crashes my system just as stable Xen 4.3 does.
>
> I've set up a PCI serial console and captured the Xen log (=== ===
> lines added by myself). Xen and dom0 dmesg logs are also attached.

Can you compile Xen with debug=y?  This will turn on all assertions,
which might help narrow down the issue.

Do you have an lspci -tv and -vv for the system?

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 13:58:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAK2t-0000k7-H2; Mon, 03 Feb 2014 13:58:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAK2r-0000k1-KT
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 13:58:42 +0000
Received: from [85.158.137.68:18593] by server-15.bemta-3.messagelabs.com id
	DC/DA-19263-090AFE25; Mon, 03 Feb 2014 13:58:40 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391435918!9377303!1
X-Originating-IP: [209.85.216.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3162 invoked from network); 3 Feb 2014 13:58:39 -0000
Received: from mail-qa0-f54.google.com (HELO mail-qa0-f54.google.com)
	(209.85.216.54)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:58:39 -0000
Received: by mail-qa0-f54.google.com with SMTP id i13so10032353qae.13
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 05:58:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=Mjv59cKKz1nKZUX9nmtGef4qbWbwfHaRh57oqLYT8E0=;
	b=z6fEJlyVFkX8s2MGeyFMy7ohkMpYhBbhiU0VbpOlZ8qknT8NlcCZfNDKGzajXkrxiz
	5OHcD5l9tz2jsXbsFOY9+4WiNYibhc5/xKFyGM2fEoEoza4WPmgYMuBUixtENazixOON
	DjRr0ChA2mbk0PvaJo+W8SJJ472ZIZ+S9yvBEJIk7pGNYlyF0/K6hX0Bc1ZiuMi4RZkq
	7AmCGQ3grxeZSHxiatT9dE09oY5tktLsjy6z7y3YwQJIEORFF1SAkpy4lqnu5yz3G45j
	NPpRHiRccnfC9tuaKgZJ9v+O0ZofOkhC8hKpKb752auNVwoitJMdQO3zA3CNmnLlBEJr
	+vOg==
MIME-Version: 1.0
X-Received: by 10.229.139.199 with SMTP id f7mr56229434qcu.2.1391435918238;
	Mon, 03 Feb 2014 05:58:38 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 05:58:37 -0800 (PST)
In-Reply-To: <52EF9EEF.8050301@citrix.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
Date: Mon, 3 Feb 2014 22:58:37 +0900
Message-ID: <CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=001a11c3e454a2907604f180eb92
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11c3e454a2907604f180eb92
Content-Type: text/plain; charset=ISO-8859-1

lspci output attached.

I have never managed to crash the system with debug=y, but I can
provide a serial log captured with debug=y while an HVM domain was
running.

On Mon, Feb 3, 2014 at 10:51 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 03/02/14 13:47, Vitaliy Tomin wrote:
>> My AMD APU based system completely crashes when trying to use HVM
>> domUs. I asked earlier on the xen-users list
>> http://lists.xen.org/archives/html/xen-users/2013-11/msg00063.html but
>> was recommended to ask here.
>> I've also found the same problem described with another AMD APU here:
>> http://lists.xen.org/archives/html/xen-devel/2013-08/msg01395.html
>>
>> My system crashes every time I start an HVM domU. But if I use Xen
>> compiled with debug info it runs stably, at least for hours (not
>> tested over a longer run).
>>
>> My system is openSUSE 13.1  with Xen 4.4
>> My hardware is
>>
>> ASRock  FM2A75 Pro4
>> AMD A8-6600K APU
>> Gigabyte Radeon 7850
>> 8 GB DDR3 1600 MHz
>>
>> I've tested with a fresh Xen 4.4, and it crashes my system just as stable Xen 4.3 does.
>>
>> I've set up a PCI serial console and captured the Xen log (=== ===
>> lines added by myself). Xen and dom0 dmesg logs are also attached.
>
> Can you compile Xen with debug=y?  This will turn on all assertions,
> which might help narrow down the issue.
>
> Do you have an lspci -tv and -vv for the system?
>
> ~Andrew
>

--001a11c3e454a2907604f180eb92
Content-Type: application/octet-stream; name=lspci-tv
Content-Disposition: attachment; filename=lspci-tv
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sz3ni0

LVswMDAwOjAwXS0rLTAwLjAgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFt
aWx5IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3NvciBSb290IENvbXBsZXgKICAgICAgICAg
ICArLTAwLjIgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAo
TW9kZWxzIDEwaC0xZmgpIEkvTyBNZW1vcnkgTWFuYWdlbWVudCBVbml0CiAgICAgICAgICAgKy0w
MS4wICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTUQvQVRJXSBSaWNobGFuZCBbUmFk
ZW9uIEhEIDg1NzBEXQogICAgICAgICAgICstMDEuMSAgQWR2YW5jZWQgTWljcm8gRGV2aWNlcywg
SW5jLiBbQU1EL0FUSV0gVHJpbml0eSBIRE1JIEF1ZGlvIENvbnRyb2xsZXIKICAgICAgICAgICAr
LTAyLjAtWzAxXS0tKy0wMC4wICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTUQvQVRJ
XSBQaXRjYWlybiBQUk8gW1JhZGVvbiBIRCA3ODUwXQogICAgICAgICAgIHwgICAgICAgICAgICBc
LTAwLjEgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRC9BVEldIENhcGUgVmVyZGUv
UGl0Y2Fpcm4gSERNSSBBdWRpbyBbUmFkZW9uIEhEIDc3MDAvNzgwMCBTZXJpZXNdCiAgICAgICAg
ICAgKy0xMC4wICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZDSCBVU0IgWEhD
SSBDb250cm9sbGVyCiAgICAgICAgICAgKy0xMC4xICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJ
bmMuIFtBTURdIEZDSCBVU0IgWEhDSSBDb250cm9sbGVyCiAgICAgICAgICAgKy0xMS4wICBBZHZh
bmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZDSCBTQVRBIENvbnRyb2xsZXIgW0FIQ0kg
bW9kZV0KICAgICAgICAgICArLTEyLjAgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FN
RF0gRkNIIFVTQiBPSENJIENvbnRyb2xsZXIKICAgICAgICAgICArLTEyLjIgIEFkdmFuY2VkIE1p
Y3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBFSENJIENvbnRyb2xsZXIKICAgICAgICAg
ICArLTEzLjAgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBPSENJ
IENvbnRyb2xsZXIKICAgICAgICAgICArLTEzLjIgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIElu
Yy4gW0FNRF0gRkNIIFVTQiBFSENJIENvbnRyb2xsZXIKICAgICAgICAgICArLTE0LjAgIEFkdmFu
Y2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFNNQnVzIENvbnRyb2xsZXIKICAgICAg
ICAgICArLTE0LjEgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIElERSBD
b250cm9sbGVyCiAgICAgICAgICAgKy0xNC4zICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMu
IFtBTURdIEZDSCBMUEMgQnJpZGdlCiAgICAgICAgICAgKy0xNC40LVswMl0tLSstMDYuMCAgQ3Jl
YXRpdmUgTGFicyBDQTAxMDYgU291bmRibGFzdGVyCiAgICAgICAgICAgfCAgICAgICAgICAgIFwt
MDcuMCAgTW9zQ2hpcCBTZW1pY29uZHVjdG9yIFRlY2hub2xvZ3kgTHRkLiBQQ0kgOTgzNSBNdWx0
aS1JL08gQ29udHJvbGxlcgogICAgICAgICAgICstMTQuNSAgQWR2YW5jZWQgTWljcm8gRGV2aWNl
cywgSW5jLiBbQU1EXSBGQ0ggVVNCIE9IQ0kgQ29udHJvbGxlcgogICAgICAgICAgICstMTUuMC1b
MDNdLS0tLTAwLjAgIFBoaWxpcHMgU2VtaWNvbmR1Y3RvcnMgU0FBNzE2MAogICAgICAgICAgICst
MTUuMi1bMDRdLS0tLTAwLjAgIEV0cm9uIFRlY2hub2xvZ3ksIEluYy4gRUoxODgvRUoxOTggVVNC
IDMuMCBIb3N0IENvbnRyb2xsZXIKICAgICAgICAgICArLTE1LjMtWzA1XS0tLS0wMC4wICBSZWFs
dGVrIFNlbWljb25kdWN0b3IgQ28uLCBMdGQuIFJUTDgxMTEvODE2OC84NDExIFBDSSBFeHByZXNz
IEdpZ2FiaXQgRXRoZXJuZXQgQ29udHJvbGxlcgogICAgICAgICAgICstMTguMCAgQWR2YW5jZWQg
TWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgUHJv
Y2Vzc29yIEZ1bmN0aW9uIDAKICAgICAgICAgICArLTE4LjEgIEFkdmFuY2VkIE1pY3JvIERldmlj
ZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3NvciBGdW5j
dGlvbiAxCiAgICAgICAgICAgKy0xOC4yICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtB
TURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgRnVuY3Rpb24gMgogICAg
ICAgICAgICstMTguMyAgQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkg
MTVoIChNb2RlbHMgMTBoLTFmaCkgUHJvY2Vzc29yIEZ1bmN0aW9uIDMKICAgICAgICAgICArLTE4
LjQgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAoTW9kZWxz
IDEwaC0xZmgpIFByb2Nlc3NvciBGdW5jdGlvbiA0CiAgICAgICAgICAgXC0xOC41ICBBZHZhbmNl
ZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQ
cm9jZXNzb3IgRnVuY3Rpb24gNQo=
--001a11c3e454a2907604f180eb92
Content-Type: application/octet-stream; name=lspci-vv
Content-Disposition: attachment; filename=lspci-vv
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sz9701

MDA6MDAuMCBIb3N0IGJyaWRnZTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBG
YW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgUHJvY2Vzc29yIFJvb3QgQ29tcGxleAoJU3Vic3lz
dGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2UgMTQxMAoJQ29udHJvbDogSS9PLSBNZW0r
IEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGlu
Zy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwLSA2Nk1IeisgVURGLSBGYXN0
QjJCLSBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNF
UlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMAoKMDA6MDAuMiBJT01NVTogQWR2YW5jZWQgTWlj
cm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgSS9PIE1l
bW9yeSBNYW5hZ2VtZW50IFVuaXQKCVN1YnN5c3RlbTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywg
SW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgSS9PIE1lbW9yeSBNYW5hZ2Vt
ZW50IFVuaXQKCUNvbnRyb2w6IEkvTy0gTWVtLSBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lO
Vi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglT
dGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFi
b3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwCglJ
bnRlcnJ1cHQ6IHBpbiBBIHJvdXRlZCB0byBJUlEgMTEKCUNhcGFiaWxpdGllczogWzQwXSBTZWN1
cmUgZGV2aWNlIDw/PgoJQ2FwYWJpbGl0aWVzOiBbNTRdIE1TSTogRW5hYmxlKyBDb3VudD0xLzEg
TWFza2FibGUtIDY0Yml0KwoJCUFkZHJlc3M6IDAwMDAwMDAwZmVlMDEwMGMgIERhdGE6IDQxMjgK
CUNhcGFiaWxpdGllczogWzY0XSBIeXBlclRyYW5zcG9ydDogTVNJIE1hcHBpbmcgRW5hYmxlKyBG
aXhlZCsKCjAwOjAxLjAgVkdBIGNvbXBhdGlibGUgY29udHJvbGxlcjogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EL0FUSV0gUmljaGxhbmQgW1JhZGVvbiBIRCA4NTcwRF0gKHByb2ct
aWYgMDAgW1ZHQSBjb250cm9sbGVyXSkKCVN1YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24g
RGV2aWNlIDk5MDEKCUNvbnRyb2w6IEkvTysgTWVtLSBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVt
V0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgt
CglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglJbnRlcnJ1cHQ6
IHBpbiBBIHJvdXRlZCB0byBJUlEgMTcKCVJlZ2lvbiAwOiBNZW1vcnkgYXQgYjAwMDAwMDAgKDMy
LWJpdCwgcHJlZmV0Y2hhYmxlKSBbZGlzYWJsZWRdIFtzaXplPTI1Nk1dCglSZWdpb24gMTogSS9P
IHBvcnRzIGF0IGYwMDAgW3NpemU9MjU2XQoJUmVnaW9uIDI6IE1lbW9yeSBhdCBmZjcwMDAwMCAo
MzItYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbZGlzYWJsZWRdIFtzaXplPTI1NktdCglFeHBhbnNp
b24gUk9NIGF0IDx1bmFzc2lnbmVkPiBbZGlzYWJsZWRdCglDYXBhYmlsaXRpZXM6IFs1MF0gUG93
ZXIgTWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4
Q3VycmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAg
Tm9Tb2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVz
OiBbNThdIEV4cHJlc3MgKHYyKSBSb290IENvbXBsZXggSW50ZWdyYXRlZCBFbmRwb2ludCwgTVNJ
IDAwCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kg
TDBzIDw0dXMsIEwxIHVubGltaXRlZAoJCQlFeHRUYWcrIFJCRSsgRkxSZXNldC0KCQlEZXZDdGw6
CVJlcG9ydCBlcnJvcnM6IENvcnJlY3RhYmxlLSBOb24tRmF0YWwtIEZhdGFsLSBVbnN1cHBvcnRl
ZC0KCQkJUmx4ZE9yZCsgRXh0VGFnLSBQaGFudEZ1bmMtIEF1eFB3ci0gTm9Tbm9vcCsKCQkJTWF4
UGF5bG9hZCAxMjggYnl0ZXMsIE1heFJlYWRSZXEgMTI4IGJ5dGVzCgkJRGV2U3RhOglDb3JyRXJy
LSBVbmNvcnJFcnItIEZhdGFsRXJyLSBVbnN1cHBSZXEtIEF1eFB3ci0gVHJhbnNQZW5kLQoJCUxu
a0NhcDoJUG9ydCAjMCwgU3BlZWQgdW5rbm93biwgV2lkdGggeDAsIEFTUE0gdW5rbm93biwgTGF0
ZW5jeSBMMCA8NjRucywgTDEgPDF1cwoJCQlDbG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3
Tm90LQoJCUxua0N0bDoJQVNQTSBEaXNhYmxlZDsgRGlzYWJsZWQtIFJldHJhaW4tIENvbW1DbGst
CgkJCUV4dFN5bmNoLSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0ludC0KCQlMbmtT
dGE6CVNwZWVkIHVua25vd24sIFdpZHRoIHgwLCBUckVyci0gVHJhaW4tIFNsb3RDbGstIERMQWN0
aXZlLSBCV01nbXQtIEFCV01nbXQtCgkJRGV2Q2FwMjogQ29tcGxldGlvbiBUaW1lb3V0OiBOb3Qg
U3VwcG9ydGVkLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBOb3QgU3VwcG9ydGVkCgkJRGV2Q3Rs
MjogQ29tcGxldGlvbiBUaW1lb3V0OiA1MHVzIHRvIDUwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBP
QkZGIERpc2FibGVkCgkJTG5rQ3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDIuNUdUL3MsIEVudGVy
Q29tcGxpYW5jZS0gU3BlZWREaXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRp
bmcgUmFuZ2UsIEVudGVyTW9kaWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29t
cGxpYW5jZSBEZS1lbXBoYXNpczogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMg
TGV2ZWw6IC02ZEIsIEVxdWFsaXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJ
CQkgRXF1YWxpemF0aW9uUGhhc2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXph
dGlvblJlcXVlc3QtCglDYXBhYmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBN
YXNrYWJsZS0gNjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJ
Q2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IElEPTAw
MDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2liYWNrCglLZXJu
ZWwgbW9kdWxlczogcmFkZW9uCgowMDowMS4xIEF1ZGlvIGRldmljZTogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EL0FUSV0gVHJpbml0eSBIRE1JIEF1ZGlvIENvbnRyb2xsZXIKCVN1
YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDk5MDIKCUNvbnRyb2w6IEkvTy0g
TWVtLSBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3Rl
cHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0g
RmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+
U0VSUi0gPFBFUlItIElOVHgtCglJbnRlcnJ1cHQ6IHBpbiBCIHJvdXRlZCB0byBJUlEgMTgKCVJl
Z2lvbiAwOiBNZW1vcnkgYXQgZmY3NDAwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW2Rp
c2FibGVkXSBbc2l6ZT0xNktdCglDYXBhYmlsaXRpZXM6IFs1MF0gUG93ZXIgTWFuYWdlbWVudCB2
ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4Q3VycmVudD0wbUEgUE1F
KEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9Tb2Z0UnN0LSBQTUUt
RW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBbNThdIEV4cHJlc3Mg
KHYyKSBSb290IENvbXBsZXggSW50ZWdyYXRlZCBFbmRwb2ludCwgTVNJIDAwCgkJRGV2Q2FwOglN
YXhQYXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIDw0dXMsIEwxIHVu
bGltaXRlZAoJCQlFeHRUYWcrIFJCRSsgRkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6
IENvcnJlY3RhYmxlLSBOb24tRmF0YWwtIEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZCsg
RXh0VGFnLSBQaGFudEZ1bmMtIEF1eFB3ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAxMjggYnl0
ZXMsIE1heFJlYWRSZXEgMTI4IGJ5dGVzCgkJRGV2U3RhOglDb3JyRXJyLSBVbmNvcnJFcnItIEZh
dGFsRXJyLSBVbnN1cHBSZXEtIEF1eFB3ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMCwg
U3BlZWQgdW5rbm93biwgV2lkdGggeDAsIEFTUE0gdW5rbm93biwgTGF0ZW5jeSBMMCA8NjRucywg
TDEgPDF1cwoJCQlDbG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3Tm90LQoJCUxua0N0bDoJ
QVNQTSBEaXNhYmxlZDsgRGlzYWJsZWQtIFJldHJhaW4tIENvbW1DbGstCgkJCUV4dFN5bmNoLSBD
bG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0ludC0KCQlMbmtTdGE6CVNwZWVkIHVua25v
d24sIFdpZHRoIHgwLCBUckVyci0gVHJhaW4tIFNsb3RDbGstIERMQWN0aXZlLSBCV01nbXQtIEFC
V01nbXQtCgkJRGV2Q2FwMjogQ29tcGxldGlvbiBUaW1lb3V0OiBOb3QgU3VwcG9ydGVkLCBUaW1l
b3V0RGlzLSwgTFRSLSwgT0JGRiBOb3QgU3VwcG9ydGVkCgkJRGV2Q3RsMjogQ29tcGxldGlvbiBU
aW1lb3V0OiA1MHVzIHRvIDUwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkCgkJ
TG5rQ3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDIuNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3Bl
ZWREaXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVy
TW9kaWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBo
YXNpczogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVx
dWFsaXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9u
UGhhc2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglD
YXBhYmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQr
CgkJQWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBb
MTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IElEPTAwMDEgUmV2PTEgTGVuPTAx
MCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2liYWNrCglLZXJuZWwgbW9kdWxlczogc25k
X2hkYV9pbnRlbAoKMDA6MDIuMCBQQ0kgYnJpZGdlOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJ
bmMuIFtBTURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgUm9vdCBQb3J0
IChwcm9nLWlmIDAwIFtOb3JtYWwgZGVjb2RlXSkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0
ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlIt
IEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFy
RXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlIt
IElOVHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVzCglCdXM6IHByaW1h
cnk9MDAsIHNlY29uZGFyeT0wMSwgc3Vib3JkaW5hdGU9MDEsIHNlYy1sYXRlbmN5PTAKCUkvTyBi
ZWhpbmQgYnJpZGdlOiAwMDAwZTAwMC0wMDAwZWZmZgoJTWVtb3J5IGJlaGluZCBicmlkZ2U6IGZm
NjAwMDAwLWZmNmZmZmZmCglQcmVmZXRjaGFibGUgbWVtb3J5IGJlaGluZCBicmlkZ2U6IDAwMDAw
MDAwYzAwMDAwMDAtMDAwMDAwMDBjZmZmZmZmZgoJU2Vjb25kYXJ5IHN0YXR1czogNjZNSHotIEZh
c3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPFNF
UlItIDxQRVJSLQoJQnJpZGdlQ3RsOiBQYXJpdHktIFNFUlItIE5vSVNBLSBWR0ErIE1BYm9ydC0g
PlJlc2V0LSBGYXN0QjJCLQoJCVByaURpc2NUbXItIFNlY0Rpc2NUbXItIERpc2NUbXJTdGF0LSBE
aXNjVG1yU0VSUkVuLQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lv
biAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMS0gRDItIEF1eEN1cnJlbnQ9MG1BIFBNRShEMCss
RDEtLEQyLSxEM2hvdCssRDNjb2xkKykKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJs
ZS0gRFNlbD0wIERTY2FsZT0wIFBNRS0KCUNhcGFiaWxpdGllczogWzU4XSBFeHByZXNzICh2Mikg
Um9vdCBQb3J0IChTbG90KyksIE1TSSAwMAoJCURldkNhcDoJTWF4UGF5bG9hZCAyNTYgYnl0ZXMs
IFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8NjRucywgTDEgPDF1cwoJCQlFeHRUYWcrIFJCRSsg
RkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6IENvcnJlY3RhYmxlLSBOb24tRmF0YWwt
IEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZC0gRXh0VGFnLSBQaGFudEZ1bmMtIEF1eFB3
ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAyNTYgYnl0ZXMsIE1heFJlYWRSZXEgNTEyIGJ5dGVz
CgkJRGV2U3RhOglDb3JyRXJyLSBVbmNvcnJFcnItIEZhdGFsRXJyLSBVbnN1cHBSZXEtIEF1eFB3
ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMCwgU3BlZWQgNUdUL3MsIFdpZHRoIHgxNiwg
QVNQTSBMMHMgTDEsIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMKCQkJQ2xvY2tQTS0gU3VycHJp
c2UtIExMQWN0UmVwKyBCd05vdCsKCQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IFJDQiA2NCBieXRl
cyBEaXNhYmxlZC0gUmV0cmFpbi0gQ29tbUNsaysKCQkJRXh0U3luY2gtIENsb2NrUE0tIEF1dFdp
ZERpcy0gQldJbnQtIEF1dEJXSW50LQoJCUxua1N0YToJU3BlZWQgNUdUL3MsIFdpZHRoIHgxNiwg
VHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZSsgQldNZ210KyBBQldNZ210LQoJCVNsdENh
cDoJQXR0bkJ0bi0gUHdyQ3RybC0gTVJMLSBBdHRuSW5kLSBQd3JJbmQtIEhvdFBsdWctIFN1cnBy
aXNlLQoJCQlTbG90ICMwLCBQb3dlckxpbWl0IDAuMDAwVzsgSW50ZXJsb2NrLSBOb0NvbXBsKwoJ
CVNsdEN0bDoJRW5hYmxlOiBBdHRuQnRuLSBQd3JGbHQtIE1STC0gUHJlc0RldC0gQ21kQ3BsdC0g
SFBJcnEtIExpbmtDaGctCgkJCUNvbnRyb2w6IEF0dG5JbmQgVW5rbm93biwgUHdySW5kIFVua25v
d24sIFBvd2VyLSBJbnRlcmxvY2stCgkJU2x0U3RhOglTdGF0dXM6IEF0dG5CdG4tIFBvd2VyRmx0
LSBNUkwtIENtZENwbHQtIFByZXNEZXQrIEludGVybG9jay0KCQkJQ2hhbmdlZDogTVJMLSBQcmVz
RGV0LSBMaW5rU3RhdGUtCgkJUm9vdEN0bDogRXJyQ29ycmVjdGFibGUtIEVyck5vbi1GYXRhbC0g
RXJyRmF0YWwtIFBNRUludEVuYS0gQ1JTVmlzaWJsZS0KCQlSb290Q2FwOiBDUlNWaXNpYmxlLQoJ
CVJvb3RTdGE6IFBNRSBSZXFJRCAwMDAwLCBQTUVTdGF0dXMtIFBNRVBlbmRpbmctCgkJRGV2Q2Fw
MjogQ29tcGxldGlvbiBUaW1lb3V0OiBSYW5nZSBBQkNELCBUaW1lb3V0RGlzKywgTFRSLSwgT0JG
RiBOb3QgU3VwcG9ydGVkIEFSSUZ3ZC0KCQlEZXZDdGwyOiBDb21wbGV0aW9uIFRpbWVvdXQ6IDY1
bXMgdG8gMjEwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkIEFSSUZ3ZC0KCQlM
bmtDdGwyOiBUYXJnZXQgTGluayBTcGVlZDogNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3BlZWRE
aXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVyTW9k
aWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBoYXNp
czogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVxdWFs
aXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9uUGhh
c2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglDYXBh
YmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQrCgkJ
QWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBbYjBd
IFN1YnN5c3RlbTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBEZXZpY2UgMTIz
NAoJQ2FwYWJpbGl0aWVzOiBbYjhdIEh5cGVyVHJhbnNwb3J0OiBNU0kgTWFwcGluZyBFbmFibGUr
IEZpeGVkKwoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRp
b246IElEPTAwMDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2ll
cG9ydAoJS2VybmVsIG1vZHVsZXM6IHNocGNocAoKMDA6MTAuMCBVU0IgY29udHJvbGxlcjogQWR2
YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGQ0ggVVNCIFhIQ0kgQ29udHJvbGxlciAo
cmV2IDAzKSAocHJvZy1pZiAzMCBbWEhDSV0pCglTdWJzeXN0ZW06IEFkdmFuY2VkIE1pY3JvIERl
dmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBYSENJIENvbnRyb2xsZXIKCUNvbnRyb2w6IEkvTy0g
TWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3Rl
cHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0g
RmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+
U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVz
CglJbnRlcnJ1cHQ6IHBpbiBBIHJvdXRlZCB0byBJUlEgMTgKCVJlZ2lvbiAwOiBNZW1vcnkgYXQg
ZmY3NDYwMDAgKDY0LWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9OEtdCglDYXBhYmlsaXRp
ZXM6IFs1MF0gUG93ZXIgTWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0kt
IEQxLSBEMi0gQXV4Q3VycmVudD0wbUEgUE1FKEQwKyxEMS0sRDItLEQzaG90KyxEM2NvbGQrKQoJ
CVN0YXR1czogRDAgTm9Tb2Z0UnN0KyBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJ
Q2FwYWJpbGl0aWVzOiBbNzBdIE1TSTogRW5hYmxlLSBDb3VudD0xLzggTWFza2FibGUtIDY0Yml0
KwoJCUFkZHJlc3M6IDAwMDAwMDAwMDAwMDAwMDAgIERhdGE6IDAwMDAKCUNhcGFiaWxpdGllczog
WzkwXSBNU0ktWDogRW5hYmxlKyBDb3VudD04IE1hc2tlZC0KCQlWZWN0b3IgdGFibGU6IEJBUj0w
IG9mZnNldD0wMDAwMTAwMAoJCVBCQTogQkFSPTAgb2Zmc2V0PTAwMDAxMDgwCglDYXBhYmlsaXRp
ZXM6IFthMF0gRXhwcmVzcyAodjIpIFJvb3QgQ29tcGxleCBJbnRlZ3JhdGVkIEVuZHBvaW50LCBN
U0kgMDAKCQlEZXZDYXA6CU1heFBheWxvYWQgMTI4IGJ5dGVzLCBQaGFudEZ1bmMgMCwgTGF0ZW5j
eSBMMHMgdW5saW1pdGVkLCBMMSB1bmxpbWl0ZWQKCQkJRXh0VGFnLSBSQkUrIEZMUmVzZXQtCgkJ
RGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3JyZWN0YWJsZS0gTm9uLUZhdGFsLSBGYXRhbC0gVW5z
dXBwb3J0ZWQtCgkJCVJseGRPcmQtIEV4dFRhZy0gUGhhbnRGdW5jLSBBdXhQd3ItIE5vU25vb3Ar
CgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBNYXhSZWFkUmVxIDUxMiBieXRlcwoJCURldlN0YToJ
Q29yckVyci0gVW5jb3JyRXJyLSBGYXRhbEVyci0gVW5zdXBwUmVxLSBBdXhQd3IrIFRyYW5zUGVu
ZC0KCQlMbmtDYXA6CVBvcnQgIzAsIFNwZWVkIHVua25vd24sIFdpZHRoIHgwLCBBU1BNIHVua25v
d24sIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMKCQkJQ2xvY2tQTS0gU3VycHJpc2UtIExMQWN0
UmVwLSBCd05vdC0KCQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IERpc2FibGVkLSBSZXRyYWluLSBD
b21tQ2xrLQoJCQlFeHRTeW5jaC0gQ2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQt
CgkJTG5rU3RhOglTcGVlZCB1bmtub3duLCBXaWR0aCB4MCwgVHJFcnItIFRyYWluLSBTbG90Q2xr
LSBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJCURldkNhcDI6IENvbXBsZXRpb24gVGltZW91
dDogTm90IFN1cHBvcnRlZCwgVGltZW91dERpcyssIExUUissIE9CRkYgTm90IFN1cHBvcnRlZAoJ
CURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNTB1cyB0byA1MG1zLCBUaW1lb3V0RGlzLSwg
TFRSLSwgT0JGRiBEaXNhYmxlZAoJCUxua0N0bDI6IFRhcmdldCBMaW5rIFNwZWVkOiAyLjVHVC9z
LCBFbnRlckNvbXBsaWFuY2UtIFNwZWVkRGlzLQoJCQkgVHJhbnNtaXQgTWFyZ2luOiBOb3JtYWwg
T3BlcmF0aW5nIFJhbmdlLCBFbnRlck1vZGlmaWVkQ29tcGxpYW5jZS0gQ29tcGxpYW5jZVNPUy0K
CQkJIENvbXBsaWFuY2UgRGUtZW1waGFzaXM6IC02ZEIKCQlMbmtTdGEyOiBDdXJyZW50IERlLWVt
cGhhc2lzIExldmVsOiAtNmRCLCBFcXVhbGl6YXRpb25Db21wbGV0ZS0sIEVxdWFsaXphdGlvblBo
YXNlMS0KCQkJIEVxdWFsaXphdGlvblBoYXNlMi0sIEVxdWFsaXphdGlvblBoYXNlMy0sIExpbmtF
cXVhbGl6YXRpb25SZXF1ZXN0LQoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBMYXRlbmN5IFRvbGVy
YW5jZSBSZXBvcnRpbmcKCQlNYXggc25vb3AgbGF0ZW5jeTogMG5zCgkJTWF4IG5vIHNub29wIGxh
dGVuY3k6IDBucwoJS2VybmVsIGRyaXZlciBpbiB1c2U6IHhoY2lfaGNkCglLZXJuZWwgbW9kdWxl
czogeGhjaV9oY2QKCjAwOjEwLjEgVVNCIGNvbnRyb2xsZXI6IEFkdmFuY2VkIE1pY3JvIERldmlj
ZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBYSENJIENvbnRyb2xsZXIgKHJldiAwMykgKHByb2ctaWYg
MzAgW1hIQ0ldKQoJU3Vic3lzdGVtOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURd
IEZDSCBVU0IgWEhDSSBDb250cm9sbGVyCglDb250cm9sOiBJL08tIE1lbSsgQnVzTWFzdGVyKyBT
cGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0
QjJCLSBEaXNJTlR4KwoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0g
REVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4
LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBwaW4g
QiByb3V0ZWQgdG8gSVJRIDE3CglSZWdpb24gMDogTWVtb3J5IGF0IGZmNzQ0MDAwICg2NC1iaXQs
IG5vbi1wcmVmZXRjaGFibGUpIFtzaXplPThLXQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1h
bmFnZW1lbnQgdmVyc2lvbiAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMS0gRDItIEF1eEN1cnJl
bnQ9MG1BIFBNRShEMCssRDEtLEQyLSxEM2hvdCssRDNjb2xkKykKCQlTdGF0dXM6IEQwIE5vU29m
dFJzdCsgUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0wIFBNRSsKCUNhcGFiaWxpdGllczogWzcw
XSBNU0k6IEVuYWJsZS0gQ291bnQ9MS84IE1hc2thYmxlLSA2NGJpdCsKCQlBZGRyZXNzOiAwMDAw
MDAwMDAwMDAwMDAwICBEYXRhOiAwMDAwCglDYXBhYmlsaXRpZXM6IFs5MF0gTVNJLVg6IEVuYWJs
ZSsgQ291bnQ9OCBNYXNrZWQtCgkJVmVjdG9yIHRhYmxlOiBCQVI9MCBvZmZzZXQ9MDAwMDEwMDAK
CQlQQkE6IEJBUj0wIG9mZnNldD0wMDAwMTA4MAoJQ2FwYWJpbGl0aWVzOiBbYTBdIEV4cHJlc3Mg
KHYyKSBSb290IENvbXBsZXggSW50ZWdyYXRlZCBFbmRwb2ludCwgTVNJIDAwCgkJRGV2Q2FwOglN
YXhQYXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIHVubGltaXRlZCwg
TDEgdW5saW1pdGVkCgkJCUV4dFRhZy0gUkJFKyBGTFJlc2V0LQoJCURldkN0bDoJUmVwb3J0IGVy
cm9yczogQ29ycmVjdGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlSbHhk
T3JkLSBFeHRUYWctIFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXlsb2FkIDEy
OCBieXRlcywgTWF4UmVhZFJlcSA1MTIgYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVuY29yckVy
ci0gRmF0YWxFcnItIFVuc3VwcFJlcS0gQXV4UHdyKyBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0
ICMwLCBTcGVlZCB1bmtub3duLCBXaWR0aCB4MCwgQVNQTSB1bmtub3duLCBMYXRlbmN5IEwwIDw2
NG5zLCBMMSA8MXVzCgkJCUNsb2NrUE0tIFN1cnByaXNlLSBMTEFjdFJlcC0gQndOb3QtCgkJTG5r
Q3RsOglBU1BNIERpc2FibGVkOyBEaXNhYmxlZC0gUmV0cmFpbi0gQ29tbUNsay0KCQkJRXh0U3lu
Y2gtIENsb2NrUE0tIEF1dFdpZERpcy0gQldJbnQtIEF1dEJXSW50LQoJCUxua1N0YToJU3BlZWQg
dW5rbm93biwgV2lkdGggeDAsIFRyRXJyLSBUcmFpbi0gU2xvdENsay0gRExBY3RpdmUtIEJXTWdt
dC0gQUJXTWdtdC0KCQlEZXZDYXAyOiBDb21wbGV0aW9uIFRpbWVvdXQ6IE5vdCBTdXBwb3J0ZWQs
IFRpbWVvdXREaXMrLCBMVFIrLCBPQkZGIE5vdCBTdXBwb3J0ZWQKCQlEZXZDdGwyOiBDb21wbGV0
aW9uIFRpbWVvdXQ6IDUwdXMgdG8gNTBtcywgVGltZW91dERpcy0sIExUUi0sIE9CRkYgRGlzYWJs
ZWQKCQlMbmtDdGwyOiBUYXJnZXQgTGluayBTcGVlZDogMi41R1QvcywgRW50ZXJDb21wbGlhbmNl
LSBTcGVlZERpcy0KCQkJIFRyYW5zbWl0IE1hcmdpbjogTm9ybWFsIE9wZXJhdGluZyBSYW5nZSwg
RW50ZXJNb2RpZmllZENvbXBsaWFuY2UtIENvbXBsaWFuY2VTT1MtCgkJCSBDb21wbGlhbmNlIERl
LWVtcGhhc2lzOiAtNmRCCgkJTG5rU3RhMjogQ3VycmVudCBEZS1lbXBoYXNpcyBMZXZlbDogLTZk
QiwgRXF1YWxpemF0aW9uQ29tcGxldGUtLCBFcXVhbGl6YXRpb25QaGFzZTEtCgkJCSBFcXVhbGl6
YXRpb25QaGFzZTItLCBFcXVhbGl6YXRpb25QaGFzZTMtLCBMaW5rRXF1YWxpemF0aW9uUmVxdWVz
dC0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiB4aGNpX2hjZAoJS2VybmVsIG1vZHVsZXM6IHhoY2lf
aGNkCgowMDoxMS4wIFNBVEEgY29udHJvbGxlcjogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5j
LiBbQU1EXSBGQ0ggU0FUQSBDb250cm9sbGVyIFtBSENJIG1vZGVdIChyZXYgNDApIChwcm9nLWlm
IDAxIFtBSENJIDEuMF0pCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9uIERldmljZSA3
ODAxCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZH
QVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3RhdHVz
OiBDYXArIDY2TUh6KyBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPW1lZGl1bSA+VEFib3J0
LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAzMgoJSW50
ZXJydXB0OiBwaW4gQSByb3V0ZWQgdG8gSVJRIDQ0CglSZWdpb24gMDogSS9PIHBvcnRzIGF0IGYx
OTAgW3NpemU9OF0KCVJlZ2lvbiAxOiBJL08gcG9ydHMgYXQgZjE4MCBbc2l6ZT00XQoJUmVnaW9u
IDI6IEkvTyBwb3J0cyBhdCBmMTcwIFtzaXplPThdCglSZWdpb24gMzogSS9PIHBvcnRzIGF0IGYx
NjAgW3NpemU9NF0KCVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgZjE1MCBbc2l6ZT0xNl0KCVJlZ2lv
biA1OiBNZW1vcnkgYXQgZmY3NGQwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9
MktdCglDYXBhYmlsaXRpZXM6IFs1MF0gTVNJOiBFbmFibGUrIENvdW50PTQvNCBNYXNrYWJsZS0g
NjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDBmZWUwODAwYyAgRGF0YTogNDAwMAoJQ2FwYWJpbGl0
aWVzOiBbNzBdIFNBVEEgSEJBIHYxLjAgSW5DZmdTcGFjZQoJS2VybmVsIGRyaXZlciBpbiB1c2U6
IGFoY2kKCjAwOjEyLjAgVVNCIGNvbnRyb2xsZXI6IEFkdmFuY2VkIE1pY3JvIERldmljZXMsIElu
Yy4gW0FNRF0gRkNIIFVTQiBPSENJIENvbnRyb2xsZXIgKHJldiAxMSkgKHByb2ctaWYgMTAgW09I
Q0ldKQoJU3Vic3lzdGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2UgNzgwNwoJQ29udHJv
bDogSS9PKyBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFy
RXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwLSA2Nk1I
eisgVURGLSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9ydC0gPFRBYm9ydC0g
PE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMzIsIENhY2hlIExpbmUgU2l6
ZTogNjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAxOAoJUmVnaW9uIDA6
IE1lbW9yeSBhdCBmZjc0YzAwMCAoMzItYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT00S10K
CUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBvaGNpLXBjaQoJS2VybmVsIG1vZHVsZXM6IG9oY2lfcGNp
CgowMDoxMi4yIFVTQiBjb250cm9sbGVyOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtB
TURdIEZDSCBVU0IgRUhDSSBDb250cm9sbGVyIChyZXYgMTEpIChwcm9nLWlmIDIwIFtFSENJXSkK
CVN1YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDc4MDgKCUNvbnRyb2w6IEkv
TysgTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVisgVkdBU25vb3AtIFBhckVyci0g
U3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHorIFVE
Ri0gRmFzdEIyQisgUGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJv
cnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDMyLCBDYWNoZSBMaW5lIFNpemU6IDY0
IGJ5dGVzCglJbnRlcnJ1cHQ6IHBpbiBCIHJvdXRlZCB0byBJUlEgMTcKCVJlZ2lvbiAwOiBNZW1v
cnkgYXQgZmY3NGIwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9MjU2XQoJQ2Fw
YWJpbGl0aWVzOiBbYzBdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lvbiAyCgkJRmxhZ3M6IFBNRUNs
ay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJlbnQ9MG1BIFBNRShEMCssRDErLEQyKyxEM2hvdCssRDNj
b2xkLSkKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0w
IFBNRS0KCQlCcmlkZ2U6IFBNLSBCMysKCUNhcGFiaWxpdGllczogW2U0XSBEZWJ1ZyBwb3J0OiBC
QVI9MSBvZmZzZXQ9MDBlMAoJS2VybmVsIGRyaXZlciBpbiB1c2U6IGVoY2ktcGNpCglLZXJuZWwg
bW9kdWxlczogZWhjaV9wY2kKCjAwOjEzLjAgVVNCIGNvbnRyb2xsZXI6IEFkdmFuY2VkIE1pY3Jv
IERldmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBPSENJIENvbnRyb2xsZXIgKHJldiAxMSkgKHBy
b2ctaWYgMTAgW09IQ0ldKQoJU3Vic3lzdGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2Ug
NzgwNwoJQ29udHJvbDogSS9PKyBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBW
R0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1
czogQ2FwLSA2Nk1IeisgVURGLSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9y
dC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMzIsIENh
Y2hlIExpbmUgU2l6ZTogNjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAx
OAoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmZjc0YTAwMCAoMzItYml0LCBub24tcHJlZmV0Y2hhYmxl
KSBbc2l6ZT00S10KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBvaGNpLXBjaQoJS2VybmVsIG1vZHVs
ZXM6IG9oY2lfcGNpCgowMDoxMy4yIFVTQiBjb250cm9sbGVyOiBBZHZhbmNlZCBNaWNybyBEZXZp
Y2VzLCBJbmMuIFtBTURdIEZDSCBVU0IgRUhDSSBDb250cm9sbGVyIChyZXYgMTEpIChwcm9nLWlm
IDIwIFtFSENJXSkKCVN1YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDc4MDgK
CUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVisgVkdBU25v
b3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENh
cCsgNjZNSHorIFVERi0gRmFzdEIyQisgUGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxU
QWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDMyLCBDYWNoZSBM
aW5lIFNpemU6IDY0IGJ5dGVzCglJbnRlcnJ1cHQ6IHBpbiBCIHJvdXRlZCB0byBJUlEgMTcKCVJl
Z2lvbiAwOiBNZW1vcnkgYXQgZmY3NDkwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3Np
emU9MjU2XQoJQ2FwYWJpbGl0aWVzOiBbYzBdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lvbiAyCgkJ
RmxhZ3M6IFBNRUNsay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJlbnQ9MG1BIFBNRShEMCssRDErLEQy
KyxEM2hvdCssRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJsZS0gRFNl
bD0wIERTY2FsZT0wIFBNRS0KCQlCcmlkZ2U6IFBNLSBCMysKCUNhcGFiaWxpdGllczogW2U0XSBE
ZWJ1ZyBwb3J0OiBCQVI9MSBvZmZzZXQ9MDBlMAoJS2VybmVsIGRyaXZlciBpbiB1c2U6IGVoY2kt
cGNpCglLZXJuZWwgbW9kdWxlczogZWhjaV9wY2kKCjAwOjE0LjAgU01CdXM6IEFkdmFuY2VkIE1p
Y3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFNNQnVzIENvbnRyb2xsZXIgKHJldiAxNCkKCVN1
YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDc4MGIKCUNvbnRyb2w6IEkvTysg
TWVtKyBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3Rl
cHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcC0gNjZNSHorIFVERi0g
RmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQt
ID5TRVJSLSA8UEVSUi0gSU5UeC0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwaWl4NF9zbWJ1cwoJ
S2VybmVsIG1vZHVsZXM6IGkyY19waWl4NAoKMDA6MTQuMSBJREUgaW50ZXJmYWNlOiBBZHZhbmNl
ZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZDSCBJREUgQ29udHJvbGxlciAocHJvZy1pZiA4
YSBbTWFzdGVyIFNlY1AgUHJpUF0pCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9uIERl
dmljZSA3ODBjCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJ
TlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJ
U3RhdHVzOiBDYXAtIDY2TUh6KyBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPW1lZGl1bSA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAz
MgoJSW50ZXJydXB0OiBwaW4gQiByb3V0ZWQgdG8gSVJRIDE3CglSZWdpb24gMDogSS9PIHBvcnRz
IGF0IDAxZjAgW3NpemU9OF0KCVJlZ2lvbiAxOiBJL08gcG9ydHMgYXQgMDNmNAoJUmVnaW9uIDI6
IEkvTyBwb3J0cyBhdCAwMTcwIFtzaXplPThdCglSZWdpb24gMzogSS9PIHBvcnRzIGF0IDAzNzQK
CVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgZjEwMCBbc2l6ZT0xNl0KCUtlcm5lbCBkcml2ZXIgaW4g
dXNlOiBwYXRhX2F0aWl4cAoJS2VybmVsIG1vZHVsZXM6IHBhdGFfYXRpaXhwLCBwYXRhX2FjcGks
IGF0YV9nZW5lcmljCgowMDoxNC4zIElTQSBicmlkZ2U6IEFkdmFuY2VkIE1pY3JvIERldmljZXMs
IEluYy4gW0FNRF0gRkNIIExQQyBCcmlkZ2UgKHJldiAxMSkKCVN1YnN5c3RlbTogQVNSb2NrIElu
Y29ycG9yYXRpb24gRGV2aWNlIDc4MGUKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIrIFNw
ZWNDeWNsZSsgTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RC
MkItIERpc0lOVHgtCglTdGF0dXM6IENhcC0gNjZNSHorIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBE
RVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5U
eC0KCUxhdGVuY3k6IDAKCjAwOjE0LjQgUENJIGJyaWRnZTogQWR2YW5jZWQgTWljcm8gRGV2aWNl
cywgSW5jLiBbQU1EXSBGQ0ggUENJIEJyaWRnZSAocmV2IDQwKSAocHJvZy1pZiAwMSBbU3VidHJh
Y3RpdmUgZGVjb2RlXSkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0g
TWVtV0lOVi0gVkdBU25vb3ArIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lO
VHgtCglTdGF0dXM6IENhcC0gNjZNSHorIFVERi0gRmFzdEIyQisgUGFyRXJyLSBERVZTRUw9bWVk
aXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVu
Y3k6IDY0CglCdXM6IHByaW1hcnk9MDAsIHNlY29uZGFyeT0wMiwgc3Vib3JkaW5hdGU9MDIsIHNl
Yy1sYXRlbmN5PTY0CglJL08gYmVoaW5kIGJyaWRnZTogMDAwMGQwMDAtMDAwMGRmZmYKCVNlY29u
ZGFyeSBzdGF0dXM6IDY2TUh6LSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9y
dC0gPFRBYm9ydC0gPE1BYm9ydCsgPFNFUlItIDxQRVJSLQoJQnJpZGdlQ3RsOiBQYXJpdHktIFNF
UlItIE5vSVNBLSBWR0EtIE1BYm9ydC0gPlJlc2V0LSBGYXN0QjJCLQoJCVByaURpc2NUbXItIFNl
Y0Rpc2NUbXItIERpc2NUbXJTdGF0LSBEaXNjVG1yU0VSUkVuLQoKMDA6MTQuNSBVU0IgY29udHJv
bGxlcjogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGQ0ggVVNCIE9IQ0kgQ29u
dHJvbGxlciAocmV2IDExKSAocHJvZy1pZiAxMCBbT0hDSV0pCglTdWJzeXN0ZW06IEFTUm9jayBJ
bmNvcnBvcmF0aW9uIERldmljZSA3ODA5CglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBT
cGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0
QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6KyBVREYtIEZhc3RCMkIrIFBhckVyci0g
REVWU0VMPW1lZGl1bSA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElO
VHgtCglMYXRlbmN5OiAzMiwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBw
aW4gQyByb3V0ZWQgdG8gSVJRIDE4CglSZWdpb24gMDogTWVtb3J5IGF0IGZmNzQ4MDAwICgzMi1i
aXQsIG5vbi1wcmVmZXRjaGFibGUpIFtzaXplPTRLXQoJS2VybmVsIGRyaXZlciBpbiB1c2U6IG9o
Y2ktcGNpCglLZXJuZWwgbW9kdWxlczogb2hjaV9wY2kKCjAwOjE1LjAgUENJIGJyaWRnZTogQWR2
YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBIdWRzb24gUENJIHRvIFBDSSBicmlkZ2Ug
KFBDSUUgcG9ydCAwKSAocHJvZy1pZiAwMCBbTm9ybWFsIGRlY29kZV0pCglDb250cm9sOiBJL08r
IE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0
ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYt
IEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0g
PlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRl
cwoJQnVzOiBwcmltYXJ5PTAwLCBzZWNvbmRhcnk9MDMsIHN1Ym9yZGluYXRlPTAzLCBzZWMtbGF0
ZW5jeT0wCglNZW1vcnkgYmVoaW5kIGJyaWRnZTogZmY1MDAwMDAtZmY1ZmZmZmYKCVNlY29uZGFy
eSBzdGF0dXM6IDY2TUh6LSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxU
QWJvcnQtIDxNQWJvcnQrIDxTRVJSLSA8UEVSUi0KCUJyaWRnZUN0bDogUGFyaXR5LSBTRVJSLSBO
b0lTQS0gVkdBLSBNQWJvcnQtID5SZXNldC0gRmFzdEIyQi0KCQlQcmlEaXNjVG1yLSBTZWNEaXNj
VG1yLSBEaXNjVG1yU3RhdC0gRGlzY1RtclNFUlJFbi0KCUNhcGFiaWxpdGllczogWzUwXSBQb3dl
ciBNYW5hZ2VtZW50IHZlcnNpb24gMwoJCUZsYWdzOiBQTUVDbGstIERTSS0gRDErIEQyKyBBdXhD
dXJyZW50PTBtQSBQTUUoRDAtLEQxLSxEMi0sRDNob3QtLEQzY29sZC0pCgkJU3RhdHVzOiBEMCBO
b1NvZnRSc3QtIFBNRS1FbmFibGUtIERTZWw9MCBEU2NhbGU9MCBQTUUtCglDYXBhYmlsaXRpZXM6
IFs1OF0gRXhwcmVzcyAodjIpIFJvb3QgUG9ydCAoU2xvdC0pLCBNU0kgMDAKCQlEZXZDYXA6CU1h
eFBheWxvYWQgMTI4IGJ5dGVzLCBQaGFudEZ1bmMgMCwgTGF0ZW5jeSBMMHMgPDY0bnMsIEwxIDwx
dXMKCQkJRXh0VGFnKyBSQkUrIEZMUmVzZXQtCgkJRGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3Jy
ZWN0YWJsZS0gTm9uLUZhdGFsLSBGYXRhbC0gVW5zdXBwb3J0ZWQtCgkJCVJseGRPcmQtIEV4dFRh
Zy0gUGhhbnRGdW5jLSBBdXhQd3ItIE5vU25vb3ArCgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBN
YXhSZWFkUmVxIDEyOCBieXRlcwoJCURldlN0YToJQ29yckVyci0gVW5jb3JyRXJyLSBGYXRhbEVy
ci0gVW5zdXBwUmVxLSBBdXhQd3ItIFRyYW5zUGVuZC0KCQlMbmtDYXA6CVBvcnQgIzAsIFNwZWVk
IDIuNUdUL3MsIFdpZHRoIHgxLCBBU1BNIEwwcyBMMSwgTGF0ZW5jeSBMMCA8NjRucywgTDEgPDF1
cwoJCQlDbG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXArIEJ3Tm90KwoJCUxua0N0bDoJQVNQTSBE
aXNhYmxlZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrLQoJCQlFeHRT
eW5jaC0gQ2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVl
ZCAyLjVHVC9zLCBXaWR0aCB4MSwgVHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZSsgQldN
Z210LSBBQldNZ210LQoJCVJvb3RDdGw6IEVyckNvcnJlY3RhYmxlLSBFcnJOb24tRmF0YWwtIEVy
ckZhdGFsLSBQTUVJbnRFbmEtIENSU1Zpc2libGUtCgkJUm9vdENhcDogQ1JTVmlzaWJsZS0KCQlS
b290U3RhOiBQTUUgUmVxSUQgMDAwMCwgUE1FU3RhdHVzLSBQTUVQZW5kaW5nLQoJCURldkNhcDI6
IENvbXBsZXRpb24gVGltZW91dDogUmFuZ2UgQUJDRCwgVGltZW91dERpcyssIExUUi0sIE9CRkYg
Tm90IFN1cHBvcnRlZCBBUklGd2QtCgkJRGV2Q3RsMjogQ29tcGxldGlvbiBUaW1lb3V0OiA2NW1z
IHRvIDIxMG1zLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBEaXNhYmxlZCBBUklGd2QtCgkJTG5r
Q3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDIuNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3BlZWRE
aXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVyTW9k
aWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBoYXNp
czogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVxdWFs
aXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9uUGhh
c2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglDYXBh
YmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQrCgkJ
QWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBbYjBd
IFN1YnN5c3RlbTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBEZXZpY2UgMDAw
MAoJQ2FwYWJpbGl0aWVzOiBbYjhdIEh5cGVyVHJhbnNwb3J0OiBNU0kgTWFwcGluZyBFbmFibGUr
IEZpeGVkKwoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRp
b246IElEPTAwMDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2ll
cG9ydAoJS2VybmVsIG1vZHVsZXM6IHNocGNocAoKMDA6MTUuMiBQQ0kgYnJpZGdlOiBBZHZhbmNl
ZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEh1ZHNvbiBQQ0kgdG8gUENJIGJyaWRnZSAoUENJ
RSBwb3J0IDIpIChwcm9nLWlmIDAwIFtOb3JtYWwgZGVjb2RlXSkKCUNvbnRyb2w6IEkvTysgTWVt
KyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBp
bmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFz
dEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VS
Ui0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVzCglC
dXM6IHByaW1hcnk9MDAsIHNlY29uZGFyeT0wNCwgc3Vib3JkaW5hdGU9MDQsIHNlYy1sYXRlbmN5
PTAKCU1lbW9yeSBiZWhpbmQgYnJpZGdlOiBmZjQwMDAwMC1mZjRmZmZmZgoJU2Vjb25kYXJ5IHN0
YXR1czogNjZNSHotIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9y
dC0gPE1BYm9ydC0gPFNFUlItIDxQRVJSLQoJQnJpZGdlQ3RsOiBQYXJpdHktIFNFUlItIE5vSVNB
LSBWR0EtIE1BYm9ydC0gPlJlc2V0LSBGYXN0QjJCLQoJCVByaURpc2NUbXItIFNlY0Rpc2NUbXIt
IERpc2NUbXJTdGF0LSBEaXNjVG1yU0VSUkVuLQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1h
bmFnZW1lbnQgdmVyc2lvbiAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJl
bnQ9MG1BIFBNRShEMC0sRDEtLEQyLSxEM2hvdC0sRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29m
dFJzdC0gUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0wIFBNRS0KCUNhcGFiaWxpdGllczogWzU4
XSBFeHByZXNzICh2MikgUm9vdCBQb3J0IChTbG90LSksIE1TSSAwMAoJCURldkNhcDoJTWF4UGF5
bG9hZCAxMjggYnl0ZXMsIFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8NjRucywgTDEgPDF1cwoJ
CQlFeHRUYWcrIFJCRSsgRkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6IENvcnJlY3Rh
YmxlLSBOb24tRmF0YWwtIEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZC0gRXh0VGFnLSBQ
aGFudEZ1bmMtIEF1eFB3ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAxMjggYnl0ZXMsIE1heFJl
YWRSZXEgMTI4IGJ5dGVzCgkJRGV2U3RhOglDb3JyRXJyLSBVbmNvcnJFcnItIEZhdGFsRXJyLSBV
bnN1cHBSZXEtIEF1eFB3ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMiwgU3BlZWQgNUdU
L3MsIFdpZHRoIHgxLCBBU1BNIEwwcyBMMSwgTGF0ZW5jeSBMMCA8NjRucywgTDEgPDF1cwoJCQlD
bG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXArIEJ3Tm90KwoJCUxua0N0bDoJQVNQTSBEaXNhYmxl
ZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrKwoJCQlFeHRTeW5jaC0g
Q2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCA1R1Qv
cywgV2lkdGggeDEsIFRyRXJyLSBUcmFpbi0gU2xvdENsaysgRExBY3RpdmUrIEJXTWdtdCsgQUJX
TWdtdC0KCQlSb290Q3RsOiBFcnJDb3JyZWN0YWJsZS0gRXJyTm9uLUZhdGFsLSBFcnJGYXRhbC0g
UE1FSW50RW5hLSBDUlNWaXNpYmxlLQoJCVJvb3RDYXA6IENSU1Zpc2libGUtCgkJUm9vdFN0YTog
UE1FIFJlcUlEIDAwMDAsIFBNRVN0YXR1cy0gUE1FUGVuZGluZy0KCQlEZXZDYXAyOiBDb21wbGV0
aW9uIFRpbWVvdXQ6IFJhbmdlIEFCQ0QsIFRpbWVvdXREaXMrLCBMVFItLCBPQkZGIE5vdCBTdXBw
b3J0ZWQgQVJJRndkLQoJCURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNjVtcyB0byAyMTBt
cywgVGltZW91dERpcy0sIExUUi0sIE9CRkYgRGlzYWJsZWQgQVJJRndkLQoJCUxua0N0bDI6IFRh
cmdldCBMaW5rIFNwZWVkOiA1R1QvcywgRW50ZXJDb21wbGlhbmNlLSBTcGVlZERpcy0KCQkJIFRy
YW5zbWl0IE1hcmdpbjogTm9ybWFsIE9wZXJhdGluZyBSYW5nZSwgRW50ZXJNb2RpZmllZENvbXBs
aWFuY2UtIENvbXBsaWFuY2VTT1MtCgkJCSBDb21wbGlhbmNlIERlLWVtcGhhc2lzOiAtNmRCCgkJ
TG5rU3RhMjogQ3VycmVudCBEZS1lbXBoYXNpcyBMZXZlbDogLTMuNWRCLCBFcXVhbGl6YXRpb25D
b21wbGV0ZS0sIEVxdWFsaXphdGlvblBoYXNlMS0KCQkJIEVxdWFsaXphdGlvblBoYXNlMi0sIEVx
dWFsaXphdGlvblBoYXNlMy0sIExpbmtFcXVhbGl6YXRpb25SZXF1ZXN0LQoJQ2FwYWJpbGl0aWVz
OiBbYTBdIE1TSTogRW5hYmxlLSBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0KwoJCUFkZHJlc3M6
IDAwMDAwMDAwMDAwMDAwMDAgIERhdGE6IDAwMDAKCUNhcGFiaWxpdGllczogW2IwXSBTdWJzeXN0
ZW06IEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRGV2aWNlIDAwMDAKCUNhcGFi
aWxpdGllczogW2I4XSBIeXBlclRyYW5zcG9ydDogTVNJIE1hcHBpbmcgRW5hYmxlKyBGaXhlZCsK
CUNhcGFiaWxpdGllczogWzEwMCB2MV0gVmVuZG9yIFNwZWNpZmljIEluZm9ybWF0aW9uOiBJRD0w
MDAxIFJldj0xIExlbj0wMTAgPD8+CglLZXJuZWwgZHJpdmVyIGluIHVzZTogcGNpZXBvcnQKCUtl
cm5lbCBtb2R1bGVzOiBzaHBjaHAKCjAwOjE1LjMgUENJIGJyaWRnZTogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EXSBIdWRzb24gUENJIHRvIFBDSSBicmlkZ2UgKFBDSUUgcG9ydCAz
KSAocHJvZy1pZiAwMCBbTm9ybWFsIGRlY29kZV0pCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFz
dGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJS
LSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYtIEZhc3RCMkItIFBh
ckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJS
LSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRlcwoJQnVzOiBwcmlt
YXJ5PTAwLCBzZWNvbmRhcnk9MDUsIHN1Ym9yZGluYXRlPTA1LCBzZWMtbGF0ZW5jeT0wCglJL08g
YmVoaW5kIGJyaWRnZTogMDAwMGMwMDAtMDAwMGNmZmYKCVByZWZldGNoYWJsZSBtZW1vcnkgYmVo
aW5kIGJyaWRnZTogMDAwMDAwMDBkMDAwMDAwMC0wMDAwMDAwMGQwMGZmZmZmCglTZWNvbmRhcnkg
c3RhdHVzOiA2Nk1Iei0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFi
b3J0LSA8TUFib3J0LSA8U0VSUi0gPFBFUlItCglCcmlkZ2VDdGw6IFBhcml0eS0gU0VSUi0gTm9J
U0EtIFZHQS0gTUFib3J0LSA+UmVzZXQtIEZhc3RCMkItCgkJUHJpRGlzY1Rtci0gU2VjRGlzY1Rt
ci0gRGlzY1RtclN0YXQtIERpc2NUbXJTRVJSRW4tCglDYXBhYmlsaXRpZXM6IFs1MF0gUG93ZXIg
TWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4Q3Vy
cmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9T
b2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBb
NThdIEV4cHJlc3MgKHYyKSBSb290IFBvcnQgKFNsb3QtKSwgTVNJIDAwCgkJRGV2Q2FwOglNYXhQ
YXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIDw2NG5zLCBMMSA8MXVz
CgkJCUV4dFRhZysgUkJFKyBGTFJlc2V0LQoJCURldkN0bDoJUmVwb3J0IGVycm9yczogQ29ycmVj
dGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlSbHhkT3JkLSBFeHRUYWct
IFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXlsb2FkIDEyOCBieXRlcywgTWF4
UmVhZFJlcSAxMjggYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVuY29yckVyci0gRmF0YWxFcnIt
IFVuc3VwcFJlcS0gQXV4UHdyLSBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0ICMzLCBTcGVlZCA1
R1QvcywgV2lkdGggeDEsIEFTUE0gTDBzIEwxLCBMYXRlbmN5IEwwIDw2NG5zLCBMMSA8MXVzCgkJ
CUNsb2NrUE0tIFN1cnByaXNlLSBMTEFjdFJlcCsgQndOb3QrCgkJTG5rQ3RsOglBU1BNIERpc2Fi
bGVkOyBSQ0IgNjQgYnl0ZXMgRGlzYWJsZWQtIFJldHJhaW4tIENvbW1DbGsrCgkJCUV4dFN5bmNo
LSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0ludC0KCQlMbmtTdGE6CVNwZWVkIDIu
NUdUL3MsIFdpZHRoIHgxLCBUckVyci0gVHJhaW4tIFNsb3RDbGsrIERMQWN0aXZlKyBCV01nbXQr
IEFCV01nbXQtCgkJUm9vdEN0bDogRXJyQ29ycmVjdGFibGUtIEVyck5vbi1GYXRhbC0gRXJyRmF0
YWwtIFBNRUludEVuYS0gQ1JTVmlzaWJsZS0KCQlSb290Q2FwOiBDUlNWaXNpYmxlLQoJCVJvb3RT
dGE6IFBNRSBSZXFJRCAwMDAwLCBQTUVTdGF0dXMtIFBNRVBlbmRpbmctCgkJRGV2Q2FwMjogQ29t
cGxldGlvbiBUaW1lb3V0OiBSYW5nZSBBQkNELCBUaW1lb3V0RGlzKywgTFRSLSwgT0JGRiBOb3Qg
U3VwcG9ydGVkIEFSSUZ3ZC0KCQlEZXZDdGwyOiBDb21wbGV0aW9uIFRpbWVvdXQ6IDY1bXMgdG8g
MjEwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkIEFSSUZ3ZC0KCQlMbmtDdGwy
OiBUYXJnZXQgTGluayBTcGVlZDogNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3BlZWREaXMtCgkJ
CSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVyTW9kaWZpZWRD
b21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBoYXNpczogLTZk
QgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC0zLjVkQiwgRXF1YWxpemF0
aW9uQ29tcGxldGUtLCBFcXVhbGl6YXRpb25QaGFzZTEtCgkJCSBFcXVhbGl6YXRpb25QaGFzZTIt
LCBFcXVhbGl6YXRpb25QaGFzZTMtLCBMaW5rRXF1YWxpemF0aW9uUmVxdWVzdC0KCUNhcGFiaWxp
dGllczogW2EwXSBNU0k6IEVuYWJsZS0gQ291bnQ9MS8xIE1hc2thYmxlLSA2NGJpdCsKCQlBZGRy
ZXNzOiAwMDAwMDAwMDAwMDAwMDAwICBEYXRhOiAwMDAwCglDYXBhYmlsaXRpZXM6IFtiMF0gU3Vi
c3lzdGVtOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIERldmljZSAwMDAwCglD
YXBhYmlsaXRpZXM6IFtiOF0gSHlwZXJUcmFuc3BvcnQ6IE1TSSBNYXBwaW5nIEVuYWJsZSsgRml4
ZWQrCglDYXBhYmlsaXRpZXM6IFsxMDAgdjFdIFZlbmRvciBTcGVjaWZpYyBJbmZvcm1hdGlvbjog
SUQ9MDAwMSBSZXY9MSBMZW49MDEwIDw/PgoJS2VybmVsIGRyaXZlciBpbiB1c2U6IHBjaWVwb3J0
CglLZXJuZWwgbW9kdWxlczogc2hwY2hwCgowMDoxOC4wIEhvc3QgYnJpZGdlOiBBZHZhbmNlZCBN
aWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9j
ZXNzb3IgRnVuY3Rpb24gMAoJQ29udHJvbDogSS9PLSBNZW0tIEJ1c01hc3Rlci0gU3BlY0N5Y2xl
LSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlz
SU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1m
YXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCjAwOjE4
LjEgSG9zdCBicmlkZ2U6IEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5
IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3NvciBGdW5jdGlvbiAxCglDb250cm9sOiBJL08t
IE1lbS0gQnVzTWFzdGVyLSBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0
ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6LSBVREYt
IEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0g
PlNFUlItIDxQRVJSLSBJTlR4LQoKMDA6MTguMiBIb3N0IGJyaWRnZTogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgUHJvY2Vzc29y
IEZ1bmN0aW9uIDIKCUNvbnRyb2w6IEkvTy0gTWVtLSBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVt
V0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgt
CglTdGF0dXM6IENhcC0gNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCgowMDoxOC4zIEhv
c3QgYnJpZGdlOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWlseSAxNWgg
KE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgRnVuY3Rpb24gMwoJQ29udHJvbDogSS9PLSBNZW0t
IEJ1c01hc3Rlci0gU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGlu
Zy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0
QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJS
LSA8UEVSUi0gSU5UeC0KCUNhcGFiaWxpdGllczogW2YwXSBTZWN1cmUgZGV2aWNlIDw/PgoJS2Vy
bmVsIGRyaXZlciBpbiB1c2U6IGsxMHRlbXAKCUtlcm5lbCBtb2R1bGVzOiBrMTB0ZW1wCgowMDox
OC40IEhvc3QgYnJpZGdlOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWls
eSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgRnVuY3Rpb24gNAoJQ29udHJvbDogSS9P
LSBNZW0tIEJ1c01hc3Rlci0gU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBT
dGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwLSA2Nk1Iei0gVURG
LSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQt
ID5TRVJSLSA8UEVSUi0gSU5UeC0KCjAwOjE4LjUgSG9zdCBicmlkZ2U6IEFkdmFuY2VkIE1pY3Jv
IERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3Nv
ciBGdW5jdGlvbiA1CglDb250cm9sOiBJL08tIE1lbS0gQnVzTWFzdGVyLSBTcGVjQ3ljbGUtIE1l
bVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4
LQoJU3RhdHVzOiBDYXAtIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3Qg
PlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoKMDE6MDAuMCBW
R0EgY29tcGF0aWJsZSBjb250cm9sbGVyOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtB
TUQvQVRJXSBQaXRjYWlybiBQUk8gW1JhZGVvbiBIRCA3ODUwXSAocHJvZy1pZiAwMCBbVkdBIGNv
bnRyb2xsZXJdKQoJU3Vic3lzdGVtOiBHaWdhYnl0ZSBUZWNobm9sb2d5IENvLiwgTHRkIERldmlj
ZSAyNTUzCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYt
IFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3Rh
dHVzOiBDYXArIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9y
dC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2Fj
aGUgTGluZSBTaXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBwaW4gQSByb3V0ZWQgdG8gSVJRIDYw
CglSZWdpb24gMDogTWVtb3J5IGF0IGMwMDAwMDAwICg2NC1iaXQsIHByZWZldGNoYWJsZSkgW3Np
emU9MjU2TV0KCVJlZ2lvbiAyOiBNZW1vcnkgYXQgZmY2MDAwMDAgKDY0LWJpdCwgbm9uLXByZWZl
dGNoYWJsZSkgW3NpemU9MjU2S10KCVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgZTAwMCBbc2l6ZT0y
NTZdCglFeHBhbnNpb24gUk9NIGF0IGZmNjQwMDAwIFtkaXNhYmxlZF0gW3NpemU9MTI4S10KCUNh
cGFiaWxpdGllczogWzQ4XSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IExlbj0wOCA8Pz4K
CUNhcGFiaWxpdGllczogWzUwXSBQb3dlciBNYW5hZ2VtZW50IHZlcnNpb24gMwoJCUZsYWdzOiBQ
TUVDbGstIERTSS0gRDErIEQyKyBBdXhDdXJyZW50PTBtQSBQTUUoRDAtLEQxKyxEMissRDNob3Qr
LEQzY29sZC0pCgkJU3RhdHVzOiBEMCBOb1NvZnRSc3QtIFBNRS1FbmFibGUtIERTZWw9MCBEU2Nh
bGU9MCBQTUUtCglDYXBhYmlsaXRpZXM6IFs1OF0gRXhwcmVzcyAodjIpIExlZ2FjeSBFbmRwb2lu
dCwgTVNJIDAwCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDI1NiBieXRlcywgUGhhbnRGdW5jIDAsIExh
dGVuY3kgTDBzIDw0dXMsIEwxIHVubGltaXRlZAoJCQlFeHRUYWcrIEF0dG5CdG4tIEF0dG5JbmQt
IFB3ckluZC0gUkJFKyBGTFJlc2V0LQoJCURldkN0bDoJUmVwb3J0IGVycm9yczogQ29ycmVjdGFi
bGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlSbHhkT3JkLSBFeHRUYWctIFBo
YW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXlsb2FkIDI1NiBieXRlcywgTWF4UmVh
ZFJlcSA1MTIgYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnIrIFVuY29yckVyci0gRmF0YWxFcnItIFVu
c3VwcFJlcSsgQXV4UHdyLSBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0ICMwLCBTcGVlZCA4R1Qv
cywgV2lkdGggeDE2LCBBU1BNIEwwcyBMMSwgTGF0ZW5jeSBMMCA8NjRucywgTDEgPDF1cwoJCQlD
bG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3Tm90LQoJCUxua0N0bDoJQVNQTSBEaXNhYmxl
ZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrKwoJCQlFeHRTeW5jaC0g
Q2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCA1R1Qv
cywgV2lkdGggeDE2LCBUckVyci0gVHJhaW4tIFNsb3RDbGsrIERMQWN0aXZlLSBCV01nbXQtIEFC
V01nbXQtCgkJRGV2Q2FwMjogQ29tcGxldGlvbiBUaW1lb3V0OiBOb3QgU3VwcG9ydGVkLCBUaW1l
b3V0RGlzLSwgTFRSLSwgT0JGRiBOb3QgU3VwcG9ydGVkCgkJRGV2Q3RsMjogQ29tcGxldGlvbiBU
aW1lb3V0OiA1MHVzIHRvIDUwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkCgkJ
TG5rQ3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDhHVC9zLCBFbnRlckNvbXBsaWFuY2UtIFNwZWVk
RGlzLQoJCQkgVHJhbnNtaXQgTWFyZ2luOiBOb3JtYWwgT3BlcmF0aW5nIFJhbmdlLCBFbnRlck1v
ZGlmaWVkQ29tcGxpYW5jZS0gQ29tcGxpYW5jZVNPUy0KCQkJIENvbXBsaWFuY2UgRGUtZW1waGFz
aXM6IC02ZEIKCQlMbmtTdGEyOiBDdXJyZW50IERlLWVtcGhhc2lzIExldmVsOiAtNmRCLCBFcXVh
bGl6YXRpb25Db21wbGV0ZS0sIEVxdWFsaXphdGlvblBoYXNlMS0KCQkJIEVxdWFsaXphdGlvblBo
YXNlMi0sIEVxdWFsaXphdGlvblBoYXNlMy0sIExpbmtFcXVhbGl6YXRpb25SZXF1ZXN0LQoJQ2Fw
YWJpbGl0aWVzOiBbYTBdIE1TSTogRW5hYmxlKyBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0KwoJ
CUFkZHJlc3M6IDAwMDAwMDAwZmVlMDQwMGMgIERhdGE6IDQwMDAKCUNhcGFiaWxpdGllczogWzEw
MCB2MV0gVmVuZG9yIFNwZWNpZmljIEluZm9ybWF0aW9uOiBJRD0wMDAxIFJldj0xIExlbj0wMTAg
PD8+CglDYXBhYmlsaXRpZXM6IFsxNTAgdjJdIEFkdmFuY2VkIEVycm9yIFJlcG9ydGluZwoJCVVF
U3RhOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBS
eE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlvbC0KCQlVRU1zazoJRExQLSBTREVT
LSBUTFAtIEZDUC0gQ21wbHRUTy0gQ21wbHRBYnJ0LSBVbnhDbXBsdC0gUnhPRi0gTWFsZlRMUC0g
RUNSQy0gVW5zdXBSZXEtIEFDU1Zpb2wtCgkJVUVTdnJ0OglETFArIFNERVMrIFRMUC0gRkNQKyBD
bXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBSeE9GKyBNYWxmVExQKyBFQ1JDLSBVbnN1cFJl
cS0gQUNTVmlvbC0KCQlDRVN0YToJUnhFcnItIEJhZFRMUC0gQmFkRExMUC0gUm9sbG92ZXItIFRp
bWVvdXQtIE5vbkZhdGFsRXJyKwoJCUNFTXNrOglSeEVyci0gQmFkVExQLSBCYWRETExQLSBSb2xs
b3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnIrCgkJQUVSQ2FwOglGaXJzdCBFcnJvciBQb2ludGVy
OiAwMCwgR2VuQ2FwKyBDR2VuRW4tIENoa0NhcCsgQ2hrRW4tCglDYXBhYmlsaXRpZXM6IFsyNzAg
djFdICMxOQoJQ2FwYWJpbGl0aWVzOiBbMmIwIHYxXSBBZGRyZXNzIFRyYW5zbGF0aW9uIFNlcnZp
Y2UgKEFUUykKCQlBVFNDYXA6CUludmFsaWRhdGUgUXVldWUgRGVwdGg6IDAwCgkJQVRTQ3RsOglF
bmFibGUtLCBTbWFsbGVzdCBUcmFuc2xhdGlvbiBVbml0OiAwMAoJQ2FwYWJpbGl0aWVzOiBbMmMw
IHYxXSAjMTMKCUNhcGFiaWxpdGllczogWzJkMCB2MV0gIzFiCglLZXJuZWwgZHJpdmVyIGluIHVz
ZTogcmFkZW9uCglLZXJuZWwgbW9kdWxlczogcmFkZW9uCgowMTowMC4xIEF1ZGlvIGRldmljZTog
QWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EL0FUSV0gQ2FwZSBWZXJkZS9QaXRjYWly
biBIRE1JIEF1ZGlvIFtSYWRlb24gSEQgNzcwMC83ODAwIFNlcmllc10KCVN1YnN5c3RlbTogR2ln
YWJ5dGUgVGVjaG5vbG9neSBDby4sIEx0ZCBEZXZpY2UgYWFiMAoJQ29udHJvbDogSS9PKyBNZW0r
IEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGlu
Zy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeCsKCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0
QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJS
LSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDAsIENhY2hlIExpbmUgU2l6ZTogNjQgYnl0ZXMKCUlu
dGVycnVwdDogcGluIEIgcm91dGVkIHRvIElSUSA2MgoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmZjY2
MDAwMCAoNjQtYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT0xNktdCglDYXBhYmlsaXRpZXM6
IFs0OF0gVmVuZG9yIFNwZWNpZmljIEluZm9ybWF0aW9uOiBMZW49MDggPD8+CglDYXBhYmlsaXRp
ZXM6IFs1MF0gUG93ZXIgTWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0kt
IEQxKyBEMisgQXV4Q3VycmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJ
CVN0YXR1czogRDAgTm9Tb2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJ
Q2FwYWJpbGl0aWVzOiBbNThdIEV4cHJlc3MgKHYyKSBMZWdhY3kgRW5kcG9pbnQsIE1TSSAwMAoJ
CURldkNhcDoJTWF4UGF5bG9hZCAyNTYgYnl0ZXMsIFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8
NHVzLCBMMSB1bmxpbWl0ZWQKCQkJRXh0VGFnKyBBdHRuQnRuLSBBdHRuSW5kLSBQd3JJbmQtIFJC
RSsgRkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6IENvcnJlY3RhYmxlLSBOb24tRmF0
YWwtIEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZC0gRXh0VGFnLSBQaGFudEZ1bmMtIEF1
eFB3ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAyNTYgYnl0ZXMsIE1heFJlYWRSZXEgNTEyIGJ5
dGVzCgkJRGV2U3RhOglDb3JyRXJyKyBVbmNvcnJFcnItIEZhdGFsRXJyLSBVbnN1cHBSZXErIEF1
eFB3ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMCwgU3BlZWQgOEdUL3MsIFdpZHRoIHgx
NiwgQVNQTSBMMHMgTDEsIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMKCQkJQ2xvY2tQTS0gU3Vy
cHJpc2UtIExMQWN0UmVwLSBCd05vdC0KCQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IFJDQiA2NCBi
eXRlcyBEaXNhYmxlZC0gUmV0cmFpbi0gQ29tbUNsaysKCQkJRXh0U3luY2gtIENsb2NrUE0tIEF1
dFdpZERpcy0gQldJbnQtIEF1dEJXSW50LQoJCUxua1N0YToJU3BlZWQgNUdUL3MsIFdpZHRoIHgx
NiwgVHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJCURl
dkNhcDI6IENvbXBsZXRpb24gVGltZW91dDogTm90IFN1cHBvcnRlZCwgVGltZW91dERpcy0sIExU
Ui0sIE9CRkYgTm90IFN1cHBvcnRlZAoJCURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNTB1
cyB0byA1MG1zLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBEaXNhYmxlZAoJCUxua1N0YTI6IEN1
cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVxdWFsaXphdGlvbkNvbXBsZXRlLSwgRXF1
YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9uUGhhc2UyLSwgRXF1YWxpemF0aW9uUGhh
c2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglDYXBhYmlsaXRpZXM6IFthMF0gTVNJOiBF
bmFibGUrIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDBmZWUw
MjAwYyAgRGF0YTogNDAwMAoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMg
SW5mb3JtYXRpb246IElEPTAwMDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUNhcGFiaWxpdGllczogWzE1
MCB2Ml0gQWR2YW5jZWQgRXJyb3IgUmVwb3J0aW5nCgkJVUVTdGE6CURMUC0gU0RFUy0gVExQLSBG
Q1AtIENtcGx0VE8tIENtcGx0QWJydC0gVW54Q21wbHQtIFJ4T0YtIE1hbGZUTFAtIEVDUkMtIFVu
c3VwUmVxLSBBQ1NWaW9sLQoJCVVFTXNrOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBD
bXBsdEFicnQtIFVueENtcGx0LSBSeE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlv
bC0KCQlVRVN2cnQ6CURMUCsgU0RFUysgVExQLSBGQ1ArIENtcGx0VE8tIENtcGx0QWJydC0gVW54
Q21wbHQtIFJ4T0YrIE1hbGZUTFArIEVDUkMtIFVuc3VwUmVxLSBBQ1NWaW9sLQoJCUNFU3RhOglS
eEVyci0gQmFkVExQLSBCYWRETExQLSBSb2xsb3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnIrCgkJ
Q0VNc2s6CVJ4RXJyLSBCYWRUTFAtIEJhZERMTFAtIFJvbGxvdmVyLSBUaW1lb3V0LSBOb25GYXRh
bEVycisKCQlBRVJDYXA6CUZpcnN0IEVycm9yIFBvaW50ZXI6IDAwLCBHZW5DYXArIENHZW5Fbi0g
Q2hrQ2FwKyBDaGtFbi0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBzbmRfaGRhX2ludGVsCglLZXJu
ZWwgbW9kdWxlczogc25kX2hkYV9pbnRlbAoKMDI6MDYuMCBNdWx0aW1lZGlhIGF1ZGlvIGNvbnRy
b2xsZXI6IENyZWF0aXZlIExhYnMgQ0EwMTA2IFNvdW5kYmxhc3RlcgoJU3Vic3lzdGVtOiBDcmVh
dGl2ZSBMYWJzIFNCMDU3MCBbU0IgQXVkaWd5IFNFXQoJQ29udHJvbDogSS9PKyBNZW0tIEJ1c01h
c3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VS
Ui0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0QjJCKyBQ
YXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQ
RVJSLSBJTlR4LQoJTGF0ZW5jeTogMzIgKDUwMG5zIG1pbiwgNTAwMG5zIG1heCkKCUludGVycnVw
dDogcGluIEEgcm91dGVkIHRvIElSUSAyMQoJUmVnaW9uIDA6IEkvTyBwb3J0cyBhdCBkMDAwIFtz
aXplPTMyXQoJQ2FwYWJpbGl0aWVzOiBbZGNdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lvbiAyCgkJ
RmxhZ3M6IFBNRUNsay0gRFNJKyBEMSsgRDIrIEF1eEN1cnJlbnQ9MG1BIFBNRShEMC0sRDEtLEQy
LSxEM2hvdC0sRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJsZS0gRFNl
bD0wIERTY2FsZT0wIFBNRS0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBzbmRfY2EwMTA2CglLZXJu
ZWwgbW9kdWxlczogc25kX2NhMDEwNgoKMDI6MDcuMCBTZXJpYWwgY29udHJvbGxlcjogTW9zQ2hp
cCBTZW1pY29uZHVjdG9yIFRlY2hub2xvZ3kgTHRkLiBQQ0kgOTgzNSBNdWx0aS1JL08gQ29udHJv
bGxlciAocmV2IDAxKSAocHJvZy1pZiAwMiBbMTY1NTBdKQoJU3Vic3lzdGVtOiBMU0kgTG9naWMg
LyBTeW1iaW9zIExvZ2ljIDJTICgxNkM1NTAgVUFSVCkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNN
YXN0ZXItIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNF
UlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcC0gNjZNSHotIFVERi0gRmFzdEIyQisg
UGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8
UEVSUi0gSU5UeCsKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAyMgoJUmVnaW9uIDA6
IEkvTyBwb3J0cyBhdCBkMDcwIFtzaXplPThdCglSZWdpb24gMTogSS9PIHBvcnRzIGF0IGQwNjAg
W3NpemU9OF0KCVJlZ2lvbiAyOiBJL08gcG9ydHMgYXQgZDA1MCBbc2l6ZT04XQoJUmVnaW9uIDM6
IEkvTyBwb3J0cyBhdCBkMDQwIFtzaXplPThdCglSZWdpb24gNDogSS9PIHBvcnRzIGF0IGQwMzAg
W3NpemU9OF0KCVJlZ2lvbiA1OiBJL08gcG9ydHMgYXQgZDAyMCBbc2l6ZT0xNl0KCUtlcm5lbCBk
cml2ZXIgaW4gdXNlOiBzZXJpYWwKCUtlcm5lbCBtb2R1bGVzOiA4MjUwX3BjaSwgcGFycG9ydF9z
ZXJpYWwKCjAzOjAwLjAgTXVsdGltZWRpYSBjb250cm9sbGVyOiBQaGlsaXBzIFNlbWljb25kdWN0
b3JzIFNBQTcxNjAgKHJldiAwMykKCVN1YnN5c3RlbTogRGV2aWNlIDYyMjA6MDAwMgoJQ29udHJv
bDogSS9PKyBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFy
RXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1I
ei0gVURGLSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxN
QWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDAsIENhY2hlIExpbmUgU2l6ZTog
NjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAxNgoJUmVnaW9uIDA6IE1l
bW9yeSBhdCBmZjUwMDAwMCAoNjQtYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT0xTV0KCUNh
cGFiaWxpdGllczogWzQwXSBNU0k6IEVuYWJsZS0gQ291bnQ9MS8zMiBNYXNrYWJsZS0gNjRiaXQr
CgkJQWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBb
NTBdIEV4cHJlc3MgKHYxKSBFbmRwb2ludCwgTVNJIDAwCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDEy
OCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIDwyNTZucywgTDEgPDF1cwoJCQlFeHRU
YWctIEF0dG5CdG4tIEF0dG5JbmQtIFB3ckluZC0gUkJFLSBGTFJlc2V0LQoJCURldkN0bDoJUmVw
b3J0IGVycm9yczogQ29ycmVjdGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJ
CQlSbHhkT3JkLSBFeHRUYWctIFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXls
b2FkIDEyOCBieXRlcywgTWF4UmVhZFJlcSAxMjggYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVu
Y29yckVyci0gRmF0YWxFcnItIFVuc3VwcFJlcS0gQXV4UHdyLSBUcmFuc1BlbmQtCgkJTG5rQ2Fw
OglQb3J0ICMxLCBTcGVlZCAyLjVHVC9zLCBXaWR0aCB4MSwgQVNQTSBMMHMgTDEsIExhdGVuY3kg
TDAgPDR1cywgTDEgPDY0dXMKCQkJQ2xvY2tQTS0gU3VycHJpc2UtIExMQWN0UmVwLSBCd05vdC0K
CQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IFJDQiAxMjggYnl0ZXMgRGlzYWJsZWQtIFJldHJhaW4t
IENvbW1DbGstCgkJCUV4dFN5bmNoLSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0lu
dC0KCQlMbmtTdGE6CVNwZWVkIDIuNUdUL3MsIFdpZHRoIHgxLCBUckVyci0gVHJhaW4tIFNsb3RD
bGstIERMQWN0aXZlLSBCV01nbXQtIEFCV01nbXQtCglDYXBhYmlsaXRpZXM6IFs3NF0gUG93ZXIg
TWFuYWdlbWVudCB2ZXJzaW9uIDIKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4Q3Vy
cmVudD0wbUEgUE1FKEQwKyxEMSssRDIrLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9T
b2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBb
ODBdIFZlbmRvciBTcGVjaWZpYyBJbmZvcm1hdGlvbjogTGVuPTUwIDw/PgoJQ2FwYWJpbGl0aWVz
OiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IElEPTAwMDAgUmV2PTAgTGVu
PTA4OCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBTQUE3MTZ4IFRCUwoJS2VybmVsIG1vZHVs
ZXM6IHNhYTcxNnhfdGJzX2R2YgoKMDQ6MDAuMCBVU0IgY29udHJvbGxlcjogRXRyb24gVGVjaG5v
bG9neSwgSW5jLiBFSjE4OC9FSjE5OCBVU0IgMy4wIEhvc3QgQ29udHJvbGxlciAocHJvZy1pZiAz
MCBbWEhDSV0pCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9uIERldmljZSA3MDUyCglD
b250cm9sOiBJL08tIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29w
LSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3RhdHVzOiBDYXAr
IDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9y
dC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBT
aXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBwaW4gQSByb3V0ZWQgdG8gSVJRIDU5CglSZWdpb24g
MDogTWVtb3J5IGF0IGZmNDAwMDAwICg2NC1iaXQsIG5vbi1wcmVmZXRjaGFibGUpIFtzaXplPTMy
S10KCUNhcGFiaWxpdGllczogWzUwXSBQb3dlciBNYW5hZ2VtZW50IHZlcnNpb24gMwoJCUZsYWdz
OiBQTUVDbGstIERTSS0gRDErIEQyKyBBdXhDdXJyZW50PTBtQSBQTUUoRDArLEQxKyxEMissRDNo
b3QrLEQzY29sZCspCgkJU3RhdHVzOiBEMCBOb1NvZnRSc3QrIFBNRS1FbmFibGUtIERTZWw9MCBE
U2NhbGU9MCBQTUUtCglDYXBhYmlsaXRpZXM6IFs3MF0gTVNJOiBFbmFibGUrIENvdW50PTEvNCBN
YXNrYWJsZSsgNjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDBmZWUwMTAwYyAgRGF0YTogNDAwMAoJ
CU1hc2tpbmc6IDAwMDAwMDBlICBQZW5kaW5nOiAwMDAwMDAwMAoJQ2FwYWJpbGl0aWVzOiBbYTBd
IEV4cHJlc3MgKHYyKSBFbmRwb2ludCwgTVNJIDAxCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDEwMjQg
Ynl0ZXMsIFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8NjRucywgTDEgPDF1cwoJCQlFeHRUYWcr
IEF0dG5CdG4tIEF0dG5JbmQtIFB3ckluZC0gUkJFKyBGTFJlc2V0KwoJCURldkN0bDoJUmVwb3J0
IGVycm9yczogQ29ycmVjdGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlS
bHhkT3JkLSBFeHRUYWctIFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKyBGTFJlc2V0LQoJCQlN
YXhQYXlsb2FkIDEyOCBieXRlcywgTWF4UmVhZFJlcSA1MTIgYnl0ZXMKCQlEZXZTdGE6CUNvcnJF
cnItIFVuY29yckVyci0gRmF0YWxFcnItIFVuc3VwcFJlcS0gQXV4UHdyKyBUcmFuc1BlbmQtCgkJ
TG5rQ2FwOglQb3J0ICMwLCBTcGVlZCA1R1QvcywgV2lkdGggeDEsIEFTUE0gTDBzIEwxLCBMYXRl
bmN5IEwwIDwxdXMsIEwxIDw2NHVzCgkJCUNsb2NrUE0rIFN1cnByaXNlLSBMTEFjdFJlcC0gQndO
b3QtCgkJTG5rQ3RsOglBU1BNIERpc2FibGVkOyBSQ0IgNjQgYnl0ZXMgRGlzYWJsZWQtIFJldHJh
aW4tIENvbW1DbGsrCgkJCUV4dFN5bmNoLSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRC
V0ludC0KCQlMbmtTdGE6CVNwZWVkIDVHVC9zLCBXaWR0aCB4MSwgVHJFcnItIFRyYWluLSBTbG90
Q2xrKyBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJCURldkNhcDI6IENvbXBsZXRpb24gVGlt
ZW91dDogTm90IFN1cHBvcnRlZCwgVGltZW91dERpcy0sIExUUi0sIE9CRkYgTm90IFN1cHBvcnRl
ZAoJCURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNTB1cyB0byA1MG1zLCBUaW1lb3V0RGlz
LSwgTFRSLSwgT0JGRiBEaXNhYmxlZAoJCUxua0N0bDI6IFRhcmdldCBMaW5rIFNwZWVkOiA1R1Qv
cywgRW50ZXJDb21wbGlhbmNlLSBTcGVlZERpcy0KCQkJIFRyYW5zbWl0IE1hcmdpbjogTm9ybWFs
IE9wZXJhdGluZyBSYW5nZSwgRW50ZXJNb2RpZmllZENvbXBsaWFuY2UtIENvbXBsaWFuY2VTT1Mt
CgkJCSBDb21wbGlhbmNlIERlLWVtcGhhc2lzOiAtNmRCCgkJTG5rU3RhMjogQ3VycmVudCBEZS1l
bXBoYXNpcyBMZXZlbDogLTZkQiwgRXF1YWxpemF0aW9uQ29tcGxldGUtLCBFcXVhbGl6YXRpb25Q
aGFzZTEtCgkJCSBFcXVhbGl6YXRpb25QaGFzZTItLCBFcXVhbGl6YXRpb25QaGFzZTMtLCBMaW5r
RXF1YWxpemF0aW9uUmVxdWVzdC0KCUNhcGFiaWxpdGllczogWzEwMCB2MV0gQWR2YW5jZWQgRXJy
b3IgUmVwb3J0aW5nCgkJVUVTdGE6CURMUC0gU0RFUy0gVExQLSBGQ1AtIENtcGx0VE8tIENtcGx0
QWJydC0gVW54Q21wbHQtIFJ4T0YtIE1hbGZUTFAtIEVDUkMtIFVuc3VwUmVxLSBBQ1NWaW9sLQoJ
CVVFTXNrOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0
LSBSeE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlvbC0KCQlVRVN2cnQ6CURMUCsg
U0RFUy0gVExQLSBGQ1ArIENtcGx0VE8tIENtcGx0QWJydC0gVW54Q21wbHQtIFJ4T0YrIE1hbGZU
TFArIEVDUkMtIFVuc3VwUmVxLSBBQ1NWaW9sLQoJCUNFU3RhOglSeEVycisgQmFkVExQLSBCYWRE
TExQLSBSb2xsb3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnItCgkJQ0VNc2s6CVJ4RXJyLSBCYWRU
TFAtIEJhZERMTFAtIFJvbGxvdmVyLSBUaW1lb3V0LSBOb25GYXRhbEVycisKCQlBRVJDYXA6CUZp
cnN0IEVycm9yIFBvaW50ZXI6IDE0LCBHZW5DYXArIENHZW5Fbi0gQ2hrQ2FwKyBDaGtFbi0KCUNh
cGFiaWxpdGllczogWzE5MCB2MV0gRGV2aWNlIFNlcmlhbCBOdW1iZXIgMDEtMDEtMDEtMDEtMDEt
MDEtMDEtMDEKCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiB4aGNpX2hjZAoJS2VybmVsIG1vZHVsZXM6
IHhoY2lfaGNkCgowNTowMC4wIEV0aGVybmV0IGNvbnRyb2xsZXI6IFJlYWx0ZWsgU2VtaWNvbmR1
Y3RvciBDby4sIEx0ZC4gUlRMODExMS84MTY4Lzg0MTEgUENJIEV4cHJlc3MgR2lnYWJpdCBFdGhl
cm5ldCBDb250cm9sbGVyIChyZXYgMDYpCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9u
IE1vdGhlcmJvYXJkIChvbmUgb2YgbWFueSkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIr
IFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZh
c3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJy
LSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElO
VHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVzCglJbnRlcnJ1cHQ6IHBp
biBBIHJvdXRlZCB0byBJUlEgNjEKCVJlZ2lvbiAwOiBJL08gcG9ydHMgYXQgYzAwMCBbc2l6ZT0y
NTZdCglSZWdpb24gMjogTWVtb3J5IGF0IGQwMDA0MDAwICg2NC1iaXQsIHByZWZldGNoYWJsZSkg
W3NpemU9NEtdCglSZWdpb24gNDogTWVtb3J5IGF0IGQwMDAwMDAwICg2NC1iaXQsIHByZWZldGNo
YWJsZSkgW3NpemU9MTZLXQoJQ2FwYWJpbGl0aWVzOiBbNDBdIFBvd2VyIE1hbmFnZW1lbnQgdmVy
c2lvbiAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJlbnQ9Mzc1bUEgUE1F
KEQwKyxEMSssRDIrLEQzaG90KyxEM2NvbGQrKQoJCVN0YXR1czogRDAgTm9Tb2Z0UnN0KyBQTUUt
RW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBbNTBdIE1TSTogRW5h
YmxlKyBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0KwoJCUFkZHJlc3M6IDAwMDAwMDAwZmVlMDQw
MGMgIERhdGE6IDQwMDAKCUNhcGFiaWxpdGllczogWzcwXSBFeHByZXNzICh2MikgRW5kcG9pbnQs
IE1TSSAwMQoJCURldkNhcDoJTWF4UGF5bG9hZCAxMjggYnl0ZXMsIFBoYW50RnVuYyAwLCBMYXRl
bmN5IEwwcyA8NTEybnMsIEwxIDw2NHVzCgkJCUV4dFRhZy0gQXR0bkJ0bi0gQXR0bkluZC0gUHdy
SW5kLSBSQkUrIEZMUmVzZXQtCgkJRGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3JyZWN0YWJsZS0g
Tm9uLUZhdGFsLSBGYXRhbC0gVW5zdXBwb3J0ZWQtCgkJCVJseGRPcmQtIEV4dFRhZy0gUGhhbnRG
dW5jLSBBdXhQd3ItIE5vU25vb3AtCgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBNYXhSZWFkUmVx
IDQwOTYgYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVuY29yckVyci0gRmF0YWxFcnItIFVuc3Vw
cFJlcS0gQXV4UHdyKyBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0ICMwLCBTcGVlZCAyLjVHVC9z
LCBXaWR0aCB4MSwgQVNQTSBMMHMgTDEsIExhdGVuY3kgTDAgdW5saW1pdGVkLCBMMSA8NjR1cwoJ
CQlDbG9ja1BNKyBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3Tm90LQoJCUxua0N0bDoJQVNQTSBEaXNh
YmxlZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrKwoJCQlFeHRTeW5j
aC0gQ2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCAy
LjVHVC9zLCBXaWR0aCB4MSwgVHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZS0gQldNZ210
LSBBQldNZ210LQoJCURldkNhcDI6IENvbXBsZXRpb24gVGltZW91dDogUmFuZ2UgQUJDRCwgVGlt
ZW91dERpcyssIExUUi0sIE9CRkYgTm90IFN1cHBvcnRlZAoJCURldkN0bDI6IENvbXBsZXRpb24g
VGltZW91dDogNTB1cyB0byA1MG1zLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBEaXNhYmxlZAoJ
CUxua0N0bDI6IFRhcmdldCBMaW5rIFNwZWVkOiAyLjVHVC9zLCBFbnRlckNvbXBsaWFuY2UtIFNw
ZWVkRGlzLQoJCQkgVHJhbnNtaXQgTWFyZ2luOiBOb3JtYWwgT3BlcmF0aW5nIFJhbmdlLCBFbnRl
ck1vZGlmaWVkQ29tcGxpYW5jZS0gQ29tcGxpYW5jZVNPUy0KCQkJIENvbXBsaWFuY2UgRGUtZW1w
aGFzaXM6IC02ZEIKCQlMbmtTdGEyOiBDdXJyZW50IERlLWVtcGhhc2lzIExldmVsOiAtNmRCLCBF
cXVhbGl6YXRpb25Db21wbGV0ZS0sIEVxdWFsaXphdGlvblBoYXNlMS0KCQkJIEVxdWFsaXphdGlv
blBoYXNlMi0sIEVxdWFsaXphdGlvblBoYXNlMy0sIExpbmtFcXVhbGl6YXRpb25SZXF1ZXN0LQoJ
Q2FwYWJpbGl0aWVzOiBbYjBdIE1TSS1YOiBFbmFibGUtIENvdW50PTQgTWFza2VkLQoJCVZlY3Rv
ciB0YWJsZTogQkFSPTQgb2Zmc2V0PTAwMDAwMDAwCgkJUEJBOiBCQVI9NCBvZmZzZXQ9MDAwMDA4
MDAKCUNhcGFiaWxpdGllczogW2QwXSBWaXRhbCBQcm9kdWN0IERhdGEKCQlObyBlbmQgdGFnIGZv
dW5kCglDYXBhYmlsaXRpZXM6IFsxMDAgdjFdIEFkdmFuY2VkIEVycm9yIFJlcG9ydGluZwoJCVVF
U3RhOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBS
eE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlvbC0KCQlVRU1zazoJRExQLSBTREVT
LSBUTFAtIEZDUC0gQ21wbHRUTy0gQ21wbHRBYnJ0LSBVbnhDbXBsdC0gUnhPRi0gTWFsZlRMUC0g
RUNSQy0gVW5zdXBSZXEtIEFDU1Zpb2wtCgkJVUVTdnJ0OglETFArIFNERVMrIFRMUC0gRkNQKyBD
bXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBSeE9GKyBNYWxmVExQKyBFQ1JDLSBVbnN1cFJl
cS0gQUNTVmlvbC0KCQlDRVN0YToJUnhFcnItIEJhZFRMUC0gQmFkRExMUC0gUm9sbG92ZXItIFRp
bWVvdXQtIE5vbkZhdGFsRXJyKwoJCUNFTXNrOglSeEVyci0gQmFkVExQLSBCYWRETExQLSBSb2xs
b3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnIrCgkJQUVSQ2FwOglGaXJzdCBFcnJvciBQb2ludGVy
OiAwMCwgR2VuQ2FwKyBDR2VuRW4tIENoa0NhcCsgQ2hrRW4tCglDYXBhYmlsaXRpZXM6IFsxNDAg
djFdIFZpcnR1YWwgQ2hhbm5lbAoJCUNhcHM6CUxQRVZDPTAgUmVmQ2xrPTEwMG5zIFBBVEVudHJ5
Qml0cz0xCgkJQXJiOglGaXhlZC0gV1JSMzItIFdSUjY0LSBXUlIxMjgtCgkJQ3RybDoJQXJiU2Vs
ZWN0PUZpeGVkCgkJU3RhdHVzOglJblByb2dyZXNzLQoJCVZDMDoJQ2FwczoJUEFUT2Zmc2V0PTAw
IE1heFRpbWVTbG90cz0xIFJlalNub29wVHJhbnMtCgkJCUFyYjoJRml4ZWQtIFdSUjMyLSBXUlI2
NC0gV1JSMTI4LSBUV1JSMTI4LSBXUlIyNTYtCgkJCUN0cmw6CUVuYWJsZSsgSUQ9MCBBcmJTZWxl
Y3Q9Rml4ZWQgVEMvVkM9MDEKCQkJU3RhdHVzOglOZWdvUGVuZGluZy0gSW5Qcm9ncmVzcy0KCUNh
cGFiaWxpdGllczogWzE2MCB2MV0gRGV2aWNlIFNlcmlhbCBOdW1iZXIgMDEtMDAtMDAtMDAtNjgt
NGMtZTAtMDAKCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiByODE2OQoJS2VybmVsIG1vZHVsZXM6IHI4
MTY5Cgo=
--001a11c3e454a2907604f180eb92
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a11c3e454a2907604f180eb92--


From xen-devel-bounces@lists.xen.org Mon Feb 03 13:58:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 13:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAK2t-0000k7-H2; Mon, 03 Feb 2014 13:58:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAK2r-0000k1-KT
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 13:58:42 +0000
Received: from [85.158.137.68:18593] by server-15.bemta-3.messagelabs.com id
	DC/DA-19263-090AFE25; Mon, 03 Feb 2014 13:58:40 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391435918!9377303!1
X-Originating-IP: [209.85.216.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3162 invoked from network); 3 Feb 2014 13:58:39 -0000
Received: from mail-qa0-f54.google.com (HELO mail-qa0-f54.google.com)
	(209.85.216.54)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 13:58:39 -0000
Received: by mail-qa0-f54.google.com with SMTP id i13so10032353qae.13
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 05:58:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=Mjv59cKKz1nKZUX9nmtGef4qbWbwfHaRh57oqLYT8E0=;
	b=z6fEJlyVFkX8s2MGeyFMy7ohkMpYhBbhiU0VbpOlZ8qknT8NlcCZfNDKGzajXkrxiz
	5OHcD5l9tz2jsXbsFOY9+4WiNYibhc5/xKFyGM2fEoEoza4WPmgYMuBUixtENazixOON
	DjRr0ChA2mbk0PvaJo+W8SJJ472ZIZ+S9yvBEJIk7pGNYlyF0/K6hX0Bc1ZiuMi4RZkq
	7AmCGQ3grxeZSHxiatT9dE09oY5tktLsjy6z7y3YwQJIEORFF1SAkpy4lqnu5yz3G45j
	NPpRHiRccnfC9tuaKgZJ9v+O0ZofOkhC8hKpKb752auNVwoitJMdQO3zA3CNmnLlBEJr
	+vOg==
MIME-Version: 1.0
X-Received: by 10.229.139.199 with SMTP id f7mr56229434qcu.2.1391435918238;
	Mon, 03 Feb 2014 05:58:38 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 05:58:37 -0800 (PST)
In-Reply-To: <52EF9EEF.8050301@citrix.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
Date: Mon, 3 Feb 2014 22:58:37 +0900
Message-ID: <CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=001a11c3e454a2907604f180eb92
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a11c3e454a2907604f180eb92
Content-Type: text/plain; charset=ISO-8859-1

lspci output attached.

I have never managed to crash the system with debug=y, but I can provide
a serial log captured with debug=y while an HVM domain was running.

On Mon, Feb 3, 2014 at 10:51 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 03/02/14 13:47, Vitaliy Tomin wrote:
>> My system, based on an AMD APU, completely crashes when trying to use HVM
>> domUs. I asked earlier on the users list,
>> http://lists.xen.org/archives/html/xen-users/2013-11/msg00063.html, and
>> was recommended to ask here.
>> I've also found the same problem described with another AMD APU here:
>> http://lists.xen.org/archives/html/xen-devel/2013-08/msg01395.html
>>
>> My system crashes every time I start an HVM domU. But if I use Xen
>> compiled with debug info it works stably, at least for hours (not
>> tested for longer runs).
>>
>> My system is openSUSE 13.1  with Xen 4.4
>> My hardware is
>>
>> ASRock  FM2A75 Pro4
>> AMD A8-6600K APU
>> Gigabyte Radeon 7850
>> 8 GB DDR3 1600 MHz
>>
>> I've tested with fresh Xen 4.4 and it crashes my system, as does stable Xen 4.3.
>>
>> I've set up a PCI serial console and captured the Xen log (the === ===
>> lines were added by myself). Xen and dom0 dmesg logs are also attached.
>
> Can you compile Xen with debug=y?  This will turn on all assertions,
> which might help narrow down the issue.
>
> Do you have an lspci -tv and -vv for the system?
>
> ~Andrew
>

--001a11c3e454a2907604f180eb92
Content-Type: application/octet-stream; name=lspci-tv
Content-Disposition: attachment; filename=lspci-tv
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sz3ni0

LVswMDAwOjAwXS0rLTAwLjAgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFt
aWx5IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3NvciBSb290IENvbXBsZXgKICAgICAgICAg
ICArLTAwLjIgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAo
TW9kZWxzIDEwaC0xZmgpIEkvTyBNZW1vcnkgTWFuYWdlbWVudCBVbml0CiAgICAgICAgICAgKy0w
MS4wICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTUQvQVRJXSBSaWNobGFuZCBbUmFk
ZW9uIEhEIDg1NzBEXQogICAgICAgICAgICstMDEuMSAgQWR2YW5jZWQgTWljcm8gRGV2aWNlcywg
SW5jLiBbQU1EL0FUSV0gVHJpbml0eSBIRE1JIEF1ZGlvIENvbnRyb2xsZXIKICAgICAgICAgICAr
LTAyLjAtWzAxXS0tKy0wMC4wICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTUQvQVRJ
XSBQaXRjYWlybiBQUk8gW1JhZGVvbiBIRCA3ODUwXQogICAgICAgICAgIHwgICAgICAgICAgICBc
LTAwLjEgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRC9BVEldIENhcGUgVmVyZGUv
UGl0Y2Fpcm4gSERNSSBBdWRpbyBbUmFkZW9uIEhEIDc3MDAvNzgwMCBTZXJpZXNdCiAgICAgICAg
ICAgKy0xMC4wICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZDSCBVU0IgWEhD
SSBDb250cm9sbGVyCiAgICAgICAgICAgKy0xMC4xICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJ
bmMuIFtBTURdIEZDSCBVU0IgWEhDSSBDb250cm9sbGVyCiAgICAgICAgICAgKy0xMS4wICBBZHZh
bmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZDSCBTQVRBIENvbnRyb2xsZXIgW0FIQ0kg
bW9kZV0KICAgICAgICAgICArLTEyLjAgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FN
RF0gRkNIIFVTQiBPSENJIENvbnRyb2xsZXIKICAgICAgICAgICArLTEyLjIgIEFkdmFuY2VkIE1p
Y3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBFSENJIENvbnRyb2xsZXIKICAgICAgICAg
ICArLTEzLjAgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBPSENJ
IENvbnRyb2xsZXIKICAgICAgICAgICArLTEzLjIgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIElu
Yy4gW0FNRF0gRkNIIFVTQiBFSENJIENvbnRyb2xsZXIKICAgICAgICAgICArLTE0LjAgIEFkdmFu
Y2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFNNQnVzIENvbnRyb2xsZXIKICAgICAg
ICAgICArLTE0LjEgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIElERSBD
b250cm9sbGVyCiAgICAgICAgICAgKy0xNC4zICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMu
IFtBTURdIEZDSCBMUEMgQnJpZGdlCiAgICAgICAgICAgKy0xNC40LVswMl0tLSstMDYuMCAgQ3Jl
YXRpdmUgTGFicyBDQTAxMDYgU291bmRibGFzdGVyCiAgICAgICAgICAgfCAgICAgICAgICAgIFwt
MDcuMCAgTW9zQ2hpcCBTZW1pY29uZHVjdG9yIFRlY2hub2xvZ3kgTHRkLiBQQ0kgOTgzNSBNdWx0
aS1JL08gQ29udHJvbGxlcgogICAgICAgICAgICstMTQuNSAgQWR2YW5jZWQgTWljcm8gRGV2aWNl
cywgSW5jLiBbQU1EXSBGQ0ggVVNCIE9IQ0kgQ29udHJvbGxlcgogICAgICAgICAgICstMTUuMC1b
MDNdLS0tLTAwLjAgIFBoaWxpcHMgU2VtaWNvbmR1Y3RvcnMgU0FBNzE2MAogICAgICAgICAgICst
MTUuMi1bMDRdLS0tLTAwLjAgIEV0cm9uIFRlY2hub2xvZ3ksIEluYy4gRUoxODgvRUoxOTggVVNC
IDMuMCBIb3N0IENvbnRyb2xsZXIKICAgICAgICAgICArLTE1LjMtWzA1XS0tLS0wMC4wICBSZWFs
dGVrIFNlbWljb25kdWN0b3IgQ28uLCBMdGQuIFJUTDgxMTEvODE2OC84NDExIFBDSSBFeHByZXNz
IEdpZ2FiaXQgRXRoZXJuZXQgQ29udHJvbGxlcgogICAgICAgICAgICstMTguMCAgQWR2YW5jZWQg
TWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgUHJv
Y2Vzc29yIEZ1bmN0aW9uIDAKICAgICAgICAgICArLTE4LjEgIEFkdmFuY2VkIE1pY3JvIERldmlj
ZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3NvciBGdW5j
dGlvbiAxCiAgICAgICAgICAgKy0xOC4yICBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtB
TURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgRnVuY3Rpb24gMgogICAg
ICAgICAgICstMTguMyAgQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkg
MTVoIChNb2RlbHMgMTBoLTFmaCkgUHJvY2Vzc29yIEZ1bmN0aW9uIDMKICAgICAgICAgICArLTE4
LjQgIEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAoTW9kZWxz
IDEwaC0xZmgpIFByb2Nlc3NvciBGdW5jdGlvbiA0CiAgICAgICAgICAgXC0xOC41ICBBZHZhbmNl
ZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQ
cm9jZXNzb3IgRnVuY3Rpb24gNQo=
--001a11c3e454a2907604f180eb92
Content-Type: application/octet-stream; name=lspci-vv
Content-Disposition: attachment; filename=lspci-vv
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7sz9701

MDA6MDAuMCBIb3N0IGJyaWRnZTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBG
YW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgUHJvY2Vzc29yIFJvb3QgQ29tcGxleAoJU3Vic3lz
dGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2UgMTQxMAoJQ29udHJvbDogSS9PLSBNZW0r
IEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGlu
Zy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwLSA2Nk1IeisgVURGLSBGYXN0
QjJCLSBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNF
UlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMAoKMDA6MDAuMiBJT01NVTogQWR2YW5jZWQgTWlj
cm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgSS9PIE1l
bW9yeSBNYW5hZ2VtZW50IFVuaXQKCVN1YnN5c3RlbTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywg
SW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgSS9PIE1lbW9yeSBNYW5hZ2Vt
ZW50IFVuaXQKCUNvbnRyb2w6IEkvTy0gTWVtLSBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lO
Vi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglT
dGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFi
b3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwCglJ
bnRlcnJ1cHQ6IHBpbiBBIHJvdXRlZCB0byBJUlEgMTEKCUNhcGFiaWxpdGllczogWzQwXSBTZWN1
cmUgZGV2aWNlIDw/PgoJQ2FwYWJpbGl0aWVzOiBbNTRdIE1TSTogRW5hYmxlKyBDb3VudD0xLzEg
TWFza2FibGUtIDY0Yml0KwoJCUFkZHJlc3M6IDAwMDAwMDAwZmVlMDEwMGMgIERhdGE6IDQxMjgK
CUNhcGFiaWxpdGllczogWzY0XSBIeXBlclRyYW5zcG9ydDogTVNJIE1hcHBpbmcgRW5hYmxlKyBG
aXhlZCsKCjAwOjAxLjAgVkdBIGNvbXBhdGlibGUgY29udHJvbGxlcjogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EL0FUSV0gUmljaGxhbmQgW1JhZGVvbiBIRCA4NTcwRF0gKHByb2ct
aWYgMDAgW1ZHQSBjb250cm9sbGVyXSkKCVN1YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24g
RGV2aWNlIDk5MDEKCUNvbnRyb2w6IEkvTysgTWVtLSBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVt
V0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgt
CglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglJbnRlcnJ1cHQ6
IHBpbiBBIHJvdXRlZCB0byBJUlEgMTcKCVJlZ2lvbiAwOiBNZW1vcnkgYXQgYjAwMDAwMDAgKDMy
LWJpdCwgcHJlZmV0Y2hhYmxlKSBbZGlzYWJsZWRdIFtzaXplPTI1Nk1dCglSZWdpb24gMTogSS9P
IHBvcnRzIGF0IGYwMDAgW3NpemU9MjU2XQoJUmVnaW9uIDI6IE1lbW9yeSBhdCBmZjcwMDAwMCAo
MzItYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbZGlzYWJsZWRdIFtzaXplPTI1NktdCglFeHBhbnNp
b24gUk9NIGF0IDx1bmFzc2lnbmVkPiBbZGlzYWJsZWRdCglDYXBhYmlsaXRpZXM6IFs1MF0gUG93
ZXIgTWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4
Q3VycmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAg
Tm9Tb2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVz
OiBbNThdIEV4cHJlc3MgKHYyKSBSb290IENvbXBsZXggSW50ZWdyYXRlZCBFbmRwb2ludCwgTVNJ
IDAwCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kg
TDBzIDw0dXMsIEwxIHVubGltaXRlZAoJCQlFeHRUYWcrIFJCRSsgRkxSZXNldC0KCQlEZXZDdGw6
CVJlcG9ydCBlcnJvcnM6IENvcnJlY3RhYmxlLSBOb24tRmF0YWwtIEZhdGFsLSBVbnN1cHBvcnRl
ZC0KCQkJUmx4ZE9yZCsgRXh0VGFnLSBQaGFudEZ1bmMtIEF1eFB3ci0gTm9Tbm9vcCsKCQkJTWF4
UGF5bG9hZCAxMjggYnl0ZXMsIE1heFJlYWRSZXEgMTI4IGJ5dGVzCgkJRGV2U3RhOglDb3JyRXJy
LSBVbmNvcnJFcnItIEZhdGFsRXJyLSBVbnN1cHBSZXEtIEF1eFB3ci0gVHJhbnNQZW5kLQoJCUxu
a0NhcDoJUG9ydCAjMCwgU3BlZWQgdW5rbm93biwgV2lkdGggeDAsIEFTUE0gdW5rbm93biwgTGF0
ZW5jeSBMMCA8NjRucywgTDEgPDF1cwoJCQlDbG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3
Tm90LQoJCUxua0N0bDoJQVNQTSBEaXNhYmxlZDsgRGlzYWJsZWQtIFJldHJhaW4tIENvbW1DbGst
CgkJCUV4dFN5bmNoLSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0ludC0KCQlMbmtT
dGE6CVNwZWVkIHVua25vd24sIFdpZHRoIHgwLCBUckVyci0gVHJhaW4tIFNsb3RDbGstIERMQWN0
aXZlLSBCV01nbXQtIEFCV01nbXQtCgkJRGV2Q2FwMjogQ29tcGxldGlvbiBUaW1lb3V0OiBOb3Qg
U3VwcG9ydGVkLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBOb3QgU3VwcG9ydGVkCgkJRGV2Q3Rs
MjogQ29tcGxldGlvbiBUaW1lb3V0OiA1MHVzIHRvIDUwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBP
QkZGIERpc2FibGVkCgkJTG5rQ3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDIuNUdUL3MsIEVudGVy
Q29tcGxpYW5jZS0gU3BlZWREaXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRp
bmcgUmFuZ2UsIEVudGVyTW9kaWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29t
cGxpYW5jZSBEZS1lbXBoYXNpczogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMg
TGV2ZWw6IC02ZEIsIEVxdWFsaXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJ
CQkgRXF1YWxpemF0aW9uUGhhc2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXph
dGlvblJlcXVlc3QtCglDYXBhYmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBN
YXNrYWJsZS0gNjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJ
Q2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IElEPTAw
MDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2liYWNrCglLZXJu
ZWwgbW9kdWxlczogcmFkZW9uCgowMDowMS4xIEF1ZGlvIGRldmljZTogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EL0FUSV0gVHJpbml0eSBIRE1JIEF1ZGlvIENvbnRyb2xsZXIKCVN1
YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDk5MDIKCUNvbnRyb2w6IEkvTy0g
TWVtLSBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3Rl
cHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0g
RmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+
U0VSUi0gPFBFUlItIElOVHgtCglJbnRlcnJ1cHQ6IHBpbiBCIHJvdXRlZCB0byBJUlEgMTgKCVJl
Z2lvbiAwOiBNZW1vcnkgYXQgZmY3NDAwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW2Rp
c2FibGVkXSBbc2l6ZT0xNktdCglDYXBhYmlsaXRpZXM6IFs1MF0gUG93ZXIgTWFuYWdlbWVudCB2
ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4Q3VycmVudD0wbUEgUE1F
KEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9Tb2Z0UnN0LSBQTUUt
RW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBbNThdIEV4cHJlc3Mg
KHYyKSBSb290IENvbXBsZXggSW50ZWdyYXRlZCBFbmRwb2ludCwgTVNJIDAwCgkJRGV2Q2FwOglN
YXhQYXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIDw0dXMsIEwxIHVu
bGltaXRlZAoJCQlFeHRUYWcrIFJCRSsgRkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6
IENvcnJlY3RhYmxlLSBOb24tRmF0YWwtIEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZCsg
RXh0VGFnLSBQaGFudEZ1bmMtIEF1eFB3ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAxMjggYnl0
ZXMsIE1heFJlYWRSZXEgMTI4IGJ5dGVzCgkJRGV2U3RhOglDb3JyRXJyLSBVbmNvcnJFcnItIEZh
dGFsRXJyLSBVbnN1cHBSZXEtIEF1eFB3ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMCwg
U3BlZWQgdW5rbm93biwgV2lkdGggeDAsIEFTUE0gdW5rbm93biwgTGF0ZW5jeSBMMCA8NjRucywg
TDEgPDF1cwoJCQlDbG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3Tm90LQoJCUxua0N0bDoJ
QVNQTSBEaXNhYmxlZDsgRGlzYWJsZWQtIFJldHJhaW4tIENvbW1DbGstCgkJCUV4dFN5bmNoLSBD
bG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0ludC0KCQlMbmtTdGE6CVNwZWVkIHVua25v
d24sIFdpZHRoIHgwLCBUckVyci0gVHJhaW4tIFNsb3RDbGstIERMQWN0aXZlLSBCV01nbXQtIEFC
V01nbXQtCgkJRGV2Q2FwMjogQ29tcGxldGlvbiBUaW1lb3V0OiBOb3QgU3VwcG9ydGVkLCBUaW1l
b3V0RGlzLSwgTFRSLSwgT0JGRiBOb3QgU3VwcG9ydGVkCgkJRGV2Q3RsMjogQ29tcGxldGlvbiBU
aW1lb3V0OiA1MHVzIHRvIDUwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkCgkJ
TG5rQ3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDIuNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3Bl
ZWREaXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVy
TW9kaWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBo
YXNpczogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVx
dWFsaXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9u
UGhhc2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglD
YXBhYmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQr
CgkJQWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBb
MTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IElEPTAwMDEgUmV2PTEgTGVuPTAx
MCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2liYWNrCglLZXJuZWwgbW9kdWxlczogc25k
X2hkYV9pbnRlbAoKMDA6MDIuMCBQQ0kgYnJpZGdlOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJ
bmMuIFtBTURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgUm9vdCBQb3J0
IChwcm9nLWlmIDAwIFtOb3JtYWwgZGVjb2RlXSkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0
ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlIt
IEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFy
RXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlIt
IElOVHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVzCglCdXM6IHByaW1h
cnk9MDAsIHNlY29uZGFyeT0wMSwgc3Vib3JkaW5hdGU9MDEsIHNlYy1sYXRlbmN5PTAKCUkvTyBi
ZWhpbmQgYnJpZGdlOiAwMDAwZTAwMC0wMDAwZWZmZgoJTWVtb3J5IGJlaGluZCBicmlkZ2U6IGZm
NjAwMDAwLWZmNmZmZmZmCglQcmVmZXRjaGFibGUgbWVtb3J5IGJlaGluZCBicmlkZ2U6IDAwMDAw
MDAwYzAwMDAwMDAtMDAwMDAwMDBjZmZmZmZmZgoJU2Vjb25kYXJ5IHN0YXR1czogNjZNSHotIEZh
c3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPFNF
UlItIDxQRVJSLQoJQnJpZGdlQ3RsOiBQYXJpdHktIFNFUlItIE5vSVNBLSBWR0ErIE1BYm9ydC0g
PlJlc2V0LSBGYXN0QjJCLQoJCVByaURpc2NUbXItIFNlY0Rpc2NUbXItIERpc2NUbXJTdGF0LSBE
aXNjVG1yU0VSUkVuLQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lv
biAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMS0gRDItIEF1eEN1cnJlbnQ9MG1BIFBNRShEMCss
RDEtLEQyLSxEM2hvdCssRDNjb2xkKykKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJs
ZS0gRFNlbD0wIERTY2FsZT0wIFBNRS0KCUNhcGFiaWxpdGllczogWzU4XSBFeHByZXNzICh2Mikg
Um9vdCBQb3J0IChTbG90KyksIE1TSSAwMAoJCURldkNhcDoJTWF4UGF5bG9hZCAyNTYgYnl0ZXMs
IFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8NjRucywgTDEgPDF1cwoJCQlFeHRUYWcrIFJCRSsg
RkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6IENvcnJlY3RhYmxlLSBOb24tRmF0YWwt
IEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZC0gRXh0VGFnLSBQaGFudEZ1bmMtIEF1eFB3
ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAyNTYgYnl0ZXMsIE1heFJlYWRSZXEgNTEyIGJ5dGVz
CgkJRGV2U3RhOglDb3JyRXJyLSBVbmNvcnJFcnItIEZhdGFsRXJyLSBVbnN1cHBSZXEtIEF1eFB3
ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMCwgU3BlZWQgNUdUL3MsIFdpZHRoIHgxNiwg
QVNQTSBMMHMgTDEsIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMKCQkJQ2xvY2tQTS0gU3VycHJp
c2UtIExMQWN0UmVwKyBCd05vdCsKCQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IFJDQiA2NCBieXRl
cyBEaXNhYmxlZC0gUmV0cmFpbi0gQ29tbUNsaysKCQkJRXh0U3luY2gtIENsb2NrUE0tIEF1dFdp
ZERpcy0gQldJbnQtIEF1dEJXSW50LQoJCUxua1N0YToJU3BlZWQgNUdUL3MsIFdpZHRoIHgxNiwg
VHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZSsgQldNZ210KyBBQldNZ210LQoJCVNsdENh
cDoJQXR0bkJ0bi0gUHdyQ3RybC0gTVJMLSBBdHRuSW5kLSBQd3JJbmQtIEhvdFBsdWctIFN1cnBy
aXNlLQoJCQlTbG90ICMwLCBQb3dlckxpbWl0IDAuMDAwVzsgSW50ZXJsb2NrLSBOb0NvbXBsKwoJ
CVNsdEN0bDoJRW5hYmxlOiBBdHRuQnRuLSBQd3JGbHQtIE1STC0gUHJlc0RldC0gQ21kQ3BsdC0g
SFBJcnEtIExpbmtDaGctCgkJCUNvbnRyb2w6IEF0dG5JbmQgVW5rbm93biwgUHdySW5kIFVua25v
d24sIFBvd2VyLSBJbnRlcmxvY2stCgkJU2x0U3RhOglTdGF0dXM6IEF0dG5CdG4tIFBvd2VyRmx0
LSBNUkwtIENtZENwbHQtIFByZXNEZXQrIEludGVybG9jay0KCQkJQ2hhbmdlZDogTVJMLSBQcmVz
RGV0LSBMaW5rU3RhdGUtCgkJUm9vdEN0bDogRXJyQ29ycmVjdGFibGUtIEVyck5vbi1GYXRhbC0g
RXJyRmF0YWwtIFBNRUludEVuYS0gQ1JTVmlzaWJsZS0KCQlSb290Q2FwOiBDUlNWaXNpYmxlLQoJ
CVJvb3RTdGE6IFBNRSBSZXFJRCAwMDAwLCBQTUVTdGF0dXMtIFBNRVBlbmRpbmctCgkJRGV2Q2Fw
MjogQ29tcGxldGlvbiBUaW1lb3V0OiBSYW5nZSBBQkNELCBUaW1lb3V0RGlzKywgTFRSLSwgT0JG
RiBOb3QgU3VwcG9ydGVkIEFSSUZ3ZC0KCQlEZXZDdGwyOiBDb21wbGV0aW9uIFRpbWVvdXQ6IDY1
bXMgdG8gMjEwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkIEFSSUZ3ZC0KCQlM
bmtDdGwyOiBUYXJnZXQgTGluayBTcGVlZDogNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3BlZWRE
aXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVyTW9k
aWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBoYXNp
czogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVxdWFs
aXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9uUGhh
c2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglDYXBh
YmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQrCgkJ
QWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBbYjBd
IFN1YnN5c3RlbTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBEZXZpY2UgMTIz
NAoJQ2FwYWJpbGl0aWVzOiBbYjhdIEh5cGVyVHJhbnNwb3J0OiBNU0kgTWFwcGluZyBFbmFibGUr
IEZpeGVkKwoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRp
b246IElEPTAwMDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2ll
cG9ydAoJS2VybmVsIG1vZHVsZXM6IHNocGNocAoKMDA6MTAuMCBVU0IgY29udHJvbGxlcjogQWR2
YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGQ0ggVVNCIFhIQ0kgQ29udHJvbGxlciAo
cmV2IDAzKSAocHJvZy1pZiAzMCBbWEhDSV0pCglTdWJzeXN0ZW06IEFkdmFuY2VkIE1pY3JvIERl
dmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBYSENJIENvbnRyb2xsZXIKCUNvbnRyb2w6IEkvTy0g
TWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3Rl
cHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0g
RmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+
U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVz
CglJbnRlcnJ1cHQ6IHBpbiBBIHJvdXRlZCB0byBJUlEgMTgKCVJlZ2lvbiAwOiBNZW1vcnkgYXQg
ZmY3NDYwMDAgKDY0LWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9OEtdCglDYXBhYmlsaXRp
ZXM6IFs1MF0gUG93ZXIgTWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0kt
IEQxLSBEMi0gQXV4Q3VycmVudD0wbUEgUE1FKEQwKyxEMS0sRDItLEQzaG90KyxEM2NvbGQrKQoJ
CVN0YXR1czogRDAgTm9Tb2Z0UnN0KyBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJ
Q2FwYWJpbGl0aWVzOiBbNzBdIE1TSTogRW5hYmxlLSBDb3VudD0xLzggTWFza2FibGUtIDY0Yml0
KwoJCUFkZHJlc3M6IDAwMDAwMDAwMDAwMDAwMDAgIERhdGE6IDAwMDAKCUNhcGFiaWxpdGllczog
WzkwXSBNU0ktWDogRW5hYmxlKyBDb3VudD04IE1hc2tlZC0KCQlWZWN0b3IgdGFibGU6IEJBUj0w
IG9mZnNldD0wMDAwMTAwMAoJCVBCQTogQkFSPTAgb2Zmc2V0PTAwMDAxMDgwCglDYXBhYmlsaXRp
ZXM6IFthMF0gRXhwcmVzcyAodjIpIFJvb3QgQ29tcGxleCBJbnRlZ3JhdGVkIEVuZHBvaW50LCBN
U0kgMDAKCQlEZXZDYXA6CU1heFBheWxvYWQgMTI4IGJ5dGVzLCBQaGFudEZ1bmMgMCwgTGF0ZW5j
eSBMMHMgdW5saW1pdGVkLCBMMSB1bmxpbWl0ZWQKCQkJRXh0VGFnLSBSQkUrIEZMUmVzZXQtCgkJ
RGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3JyZWN0YWJsZS0gTm9uLUZhdGFsLSBGYXRhbC0gVW5z
dXBwb3J0ZWQtCgkJCVJseGRPcmQtIEV4dFRhZy0gUGhhbnRGdW5jLSBBdXhQd3ItIE5vU25vb3Ar
CgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBNYXhSZWFkUmVxIDUxMiBieXRlcwoJCURldlN0YToJ
Q29yckVyci0gVW5jb3JyRXJyLSBGYXRhbEVyci0gVW5zdXBwUmVxLSBBdXhQd3IrIFRyYW5zUGVu
ZC0KCQlMbmtDYXA6CVBvcnQgIzAsIFNwZWVkIHVua25vd24sIFdpZHRoIHgwLCBBU1BNIHVua25v
d24sIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMKCQkJQ2xvY2tQTS0gU3VycHJpc2UtIExMQWN0
UmVwLSBCd05vdC0KCQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IERpc2FibGVkLSBSZXRyYWluLSBD
b21tQ2xrLQoJCQlFeHRTeW5jaC0gQ2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQt
CgkJTG5rU3RhOglTcGVlZCB1bmtub3duLCBXaWR0aCB4MCwgVHJFcnItIFRyYWluLSBTbG90Q2xr
LSBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJCURldkNhcDI6IENvbXBsZXRpb24gVGltZW91
dDogTm90IFN1cHBvcnRlZCwgVGltZW91dERpcyssIExUUissIE9CRkYgTm90IFN1cHBvcnRlZAoJ
CURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNTB1cyB0byA1MG1zLCBUaW1lb3V0RGlzLSwg
TFRSLSwgT0JGRiBEaXNhYmxlZAoJCUxua0N0bDI6IFRhcmdldCBMaW5rIFNwZWVkOiAyLjVHVC9z
LCBFbnRlckNvbXBsaWFuY2UtIFNwZWVkRGlzLQoJCQkgVHJhbnNtaXQgTWFyZ2luOiBOb3JtYWwg
T3BlcmF0aW5nIFJhbmdlLCBFbnRlck1vZGlmaWVkQ29tcGxpYW5jZS0gQ29tcGxpYW5jZVNPUy0K
CQkJIENvbXBsaWFuY2UgRGUtZW1waGFzaXM6IC02ZEIKCQlMbmtTdGEyOiBDdXJyZW50IERlLWVt
cGhhc2lzIExldmVsOiAtNmRCLCBFcXVhbGl6YXRpb25Db21wbGV0ZS0sIEVxdWFsaXphdGlvblBo
YXNlMS0KCQkJIEVxdWFsaXphdGlvblBoYXNlMi0sIEVxdWFsaXphdGlvblBoYXNlMy0sIExpbmtF
cXVhbGl6YXRpb25SZXF1ZXN0LQoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBMYXRlbmN5IFRvbGVy
YW5jZSBSZXBvcnRpbmcKCQlNYXggc25vb3AgbGF0ZW5jeTogMG5zCgkJTWF4IG5vIHNub29wIGxh
dGVuY3k6IDBucwoJS2VybmVsIGRyaXZlciBpbiB1c2U6IHhoY2lfaGNkCglLZXJuZWwgbW9kdWxl
czogeGhjaV9oY2QKCjAwOjEwLjEgVVNCIGNvbnRyb2xsZXI6IEFkdmFuY2VkIE1pY3JvIERldmlj
ZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBYSENJIENvbnRyb2xsZXIgKHJldiAwMykgKHByb2ctaWYg
MzAgW1hIQ0ldKQoJU3Vic3lzdGVtOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURd
IEZDSCBVU0IgWEhDSSBDb250cm9sbGVyCglDb250cm9sOiBJL08tIE1lbSsgQnVzTWFzdGVyKyBT
cGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0
QjJCLSBEaXNJTlR4KwoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0g
REVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4
LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBwaW4g
QiByb3V0ZWQgdG8gSVJRIDE3CglSZWdpb24gMDogTWVtb3J5IGF0IGZmNzQ0MDAwICg2NC1iaXQs
IG5vbi1wcmVmZXRjaGFibGUpIFtzaXplPThLXQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1h
bmFnZW1lbnQgdmVyc2lvbiAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMS0gRDItIEF1eEN1cnJl
bnQ9MG1BIFBNRShEMCssRDEtLEQyLSxEM2hvdCssRDNjb2xkKykKCQlTdGF0dXM6IEQwIE5vU29m
dFJzdCsgUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0wIFBNRSsKCUNhcGFiaWxpdGllczogWzcw
XSBNU0k6IEVuYWJsZS0gQ291bnQ9MS84IE1hc2thYmxlLSA2NGJpdCsKCQlBZGRyZXNzOiAwMDAw
MDAwMDAwMDAwMDAwICBEYXRhOiAwMDAwCglDYXBhYmlsaXRpZXM6IFs5MF0gTVNJLVg6IEVuYWJs
ZSsgQ291bnQ9OCBNYXNrZWQtCgkJVmVjdG9yIHRhYmxlOiBCQVI9MCBvZmZzZXQ9MDAwMDEwMDAK
CQlQQkE6IEJBUj0wIG9mZnNldD0wMDAwMTA4MAoJQ2FwYWJpbGl0aWVzOiBbYTBdIEV4cHJlc3Mg
KHYyKSBSb290IENvbXBsZXggSW50ZWdyYXRlZCBFbmRwb2ludCwgTVNJIDAwCgkJRGV2Q2FwOglN
YXhQYXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIHVubGltaXRlZCwg
TDEgdW5saW1pdGVkCgkJCUV4dFRhZy0gUkJFKyBGTFJlc2V0LQoJCURldkN0bDoJUmVwb3J0IGVy
cm9yczogQ29ycmVjdGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlSbHhk
T3JkLSBFeHRUYWctIFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXlsb2FkIDEy
OCBieXRlcywgTWF4UmVhZFJlcSA1MTIgYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVuY29yckVy
ci0gRmF0YWxFcnItIFVuc3VwcFJlcS0gQXV4UHdyKyBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0
ICMwLCBTcGVlZCB1bmtub3duLCBXaWR0aCB4MCwgQVNQTSB1bmtub3duLCBMYXRlbmN5IEwwIDw2
NG5zLCBMMSA8MXVzCgkJCUNsb2NrUE0tIFN1cnByaXNlLSBMTEFjdFJlcC0gQndOb3QtCgkJTG5r
Q3RsOglBU1BNIERpc2FibGVkOyBEaXNhYmxlZC0gUmV0cmFpbi0gQ29tbUNsay0KCQkJRXh0U3lu
Y2gtIENsb2NrUE0tIEF1dFdpZERpcy0gQldJbnQtIEF1dEJXSW50LQoJCUxua1N0YToJU3BlZWQg
dW5rbm93biwgV2lkdGggeDAsIFRyRXJyLSBUcmFpbi0gU2xvdENsay0gRExBY3RpdmUtIEJXTWdt
dC0gQUJXTWdtdC0KCQlEZXZDYXAyOiBDb21wbGV0aW9uIFRpbWVvdXQ6IE5vdCBTdXBwb3J0ZWQs
IFRpbWVvdXREaXMrLCBMVFIrLCBPQkZGIE5vdCBTdXBwb3J0ZWQKCQlEZXZDdGwyOiBDb21wbGV0
aW9uIFRpbWVvdXQ6IDUwdXMgdG8gNTBtcywgVGltZW91dERpcy0sIExUUi0sIE9CRkYgRGlzYWJs
ZWQKCQlMbmtDdGwyOiBUYXJnZXQgTGluayBTcGVlZDogMi41R1QvcywgRW50ZXJDb21wbGlhbmNl
LSBTcGVlZERpcy0KCQkJIFRyYW5zbWl0IE1hcmdpbjogTm9ybWFsIE9wZXJhdGluZyBSYW5nZSwg
RW50ZXJNb2RpZmllZENvbXBsaWFuY2UtIENvbXBsaWFuY2VTT1MtCgkJCSBDb21wbGlhbmNlIERl
LWVtcGhhc2lzOiAtNmRCCgkJTG5rU3RhMjogQ3VycmVudCBEZS1lbXBoYXNpcyBMZXZlbDogLTZk
QiwgRXF1YWxpemF0aW9uQ29tcGxldGUtLCBFcXVhbGl6YXRpb25QaGFzZTEtCgkJCSBFcXVhbGl6
YXRpb25QaGFzZTItLCBFcXVhbGl6YXRpb25QaGFzZTMtLCBMaW5rRXF1YWxpemF0aW9uUmVxdWVz
dC0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiB4aGNpX2hjZAoJS2VybmVsIG1vZHVsZXM6IHhoY2lf
aGNkCgowMDoxMS4wIFNBVEEgY29udHJvbGxlcjogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5j
LiBbQU1EXSBGQ0ggU0FUQSBDb250cm9sbGVyIFtBSENJIG1vZGVdIChyZXYgNDApIChwcm9nLWlm
IDAxIFtBSENJIDEuMF0pCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9uIERldmljZSA3
ODAxCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZH
QVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3RhdHVz
OiBDYXArIDY2TUh6KyBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPW1lZGl1bSA+VEFib3J0
LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAzMgoJSW50
ZXJydXB0OiBwaW4gQSByb3V0ZWQgdG8gSVJRIDQ0CglSZWdpb24gMDogSS9PIHBvcnRzIGF0IGYx
OTAgW3NpemU9OF0KCVJlZ2lvbiAxOiBJL08gcG9ydHMgYXQgZjE4MCBbc2l6ZT00XQoJUmVnaW9u
IDI6IEkvTyBwb3J0cyBhdCBmMTcwIFtzaXplPThdCglSZWdpb24gMzogSS9PIHBvcnRzIGF0IGYx
NjAgW3NpemU9NF0KCVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgZjE1MCBbc2l6ZT0xNl0KCVJlZ2lv
biA1OiBNZW1vcnkgYXQgZmY3NGQwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9
MktdCglDYXBhYmlsaXRpZXM6IFs1MF0gTVNJOiBFbmFibGUrIENvdW50PTQvNCBNYXNrYWJsZS0g
NjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDBmZWUwODAwYyAgRGF0YTogNDAwMAoJQ2FwYWJpbGl0
aWVzOiBbNzBdIFNBVEEgSEJBIHYxLjAgSW5DZmdTcGFjZQoJS2VybmVsIGRyaXZlciBpbiB1c2U6
IGFoY2kKCjAwOjEyLjAgVVNCIGNvbnRyb2xsZXI6IEFkdmFuY2VkIE1pY3JvIERldmljZXMsIElu
Yy4gW0FNRF0gRkNIIFVTQiBPSENJIENvbnRyb2xsZXIgKHJldiAxMSkgKHByb2ctaWYgMTAgW09I
Q0ldKQoJU3Vic3lzdGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2UgNzgwNwoJQ29udHJv
bDogSS9PKyBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFy
RXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwLSA2Nk1I
eisgVURGLSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9ydC0gPFRBYm9ydC0g
PE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMzIsIENhY2hlIExpbmUgU2l6
ZTogNjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAxOAoJUmVnaW9uIDA6
IE1lbW9yeSBhdCBmZjc0YzAwMCAoMzItYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT00S10K
CUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBvaGNpLXBjaQoJS2VybmVsIG1vZHVsZXM6IG9oY2lfcGNp
CgowMDoxMi4yIFVTQiBjb250cm9sbGVyOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtB
TURdIEZDSCBVU0IgRUhDSSBDb250cm9sbGVyIChyZXYgMTEpIChwcm9nLWlmIDIwIFtFSENJXSkK
CVN1YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDc4MDgKCUNvbnRyb2w6IEkv
TysgTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVisgVkdBU25vb3AtIFBhckVyci0g
U3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHorIFVE
Ri0gRmFzdEIyQisgUGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJv
cnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDMyLCBDYWNoZSBMaW5lIFNpemU6IDY0
IGJ5dGVzCglJbnRlcnJ1cHQ6IHBpbiBCIHJvdXRlZCB0byBJUlEgMTcKCVJlZ2lvbiAwOiBNZW1v
cnkgYXQgZmY3NGIwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3NpemU9MjU2XQoJQ2Fw
YWJpbGl0aWVzOiBbYzBdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lvbiAyCgkJRmxhZ3M6IFBNRUNs
ay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJlbnQ9MG1BIFBNRShEMCssRDErLEQyKyxEM2hvdCssRDNj
b2xkLSkKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0w
IFBNRS0KCQlCcmlkZ2U6IFBNLSBCMysKCUNhcGFiaWxpdGllczogW2U0XSBEZWJ1ZyBwb3J0OiBC
QVI9MSBvZmZzZXQ9MDBlMAoJS2VybmVsIGRyaXZlciBpbiB1c2U6IGVoY2ktcGNpCglLZXJuZWwg
bW9kdWxlczogZWhjaV9wY2kKCjAwOjEzLjAgVVNCIGNvbnRyb2xsZXI6IEFkdmFuY2VkIE1pY3Jv
IERldmljZXMsIEluYy4gW0FNRF0gRkNIIFVTQiBPSENJIENvbnRyb2xsZXIgKHJldiAxMSkgKHBy
b2ctaWYgMTAgW09IQ0ldKQoJU3Vic3lzdGVtOiBBU1JvY2sgSW5jb3Jwb3JhdGlvbiBEZXZpY2Ug
NzgwNwoJQ29udHJvbDogSS9PKyBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBW
R0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1
czogQ2FwLSA2Nk1IeisgVURGLSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9y
dC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMzIsIENh
Y2hlIExpbmUgU2l6ZTogNjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAx
OAoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmZjc0YTAwMCAoMzItYml0LCBub24tcHJlZmV0Y2hhYmxl
KSBbc2l6ZT00S10KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBvaGNpLXBjaQoJS2VybmVsIG1vZHVs
ZXM6IG9oY2lfcGNpCgowMDoxMy4yIFVTQiBjb250cm9sbGVyOiBBZHZhbmNlZCBNaWNybyBEZXZp
Y2VzLCBJbmMuIFtBTURdIEZDSCBVU0IgRUhDSSBDb250cm9sbGVyIChyZXYgMTEpIChwcm9nLWlm
IDIwIFtFSENJXSkKCVN1YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDc4MDgK
CUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVisgVkdBU25v
b3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENh
cCsgNjZNSHorIFVERi0gRmFzdEIyQisgUGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxU
QWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDMyLCBDYWNoZSBM
aW5lIFNpemU6IDY0IGJ5dGVzCglJbnRlcnJ1cHQ6IHBpbiBCIHJvdXRlZCB0byBJUlEgMTcKCVJl
Z2lvbiAwOiBNZW1vcnkgYXQgZmY3NDkwMDAgKDMyLWJpdCwgbm9uLXByZWZldGNoYWJsZSkgW3Np
emU9MjU2XQoJQ2FwYWJpbGl0aWVzOiBbYzBdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lvbiAyCgkJ
RmxhZ3M6IFBNRUNsay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJlbnQ9MG1BIFBNRShEMCssRDErLEQy
KyxEM2hvdCssRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJsZS0gRFNl
bD0wIERTY2FsZT0wIFBNRS0KCQlCcmlkZ2U6IFBNLSBCMysKCUNhcGFiaWxpdGllczogW2U0XSBE
ZWJ1ZyBwb3J0OiBCQVI9MSBvZmZzZXQ9MDBlMAoJS2VybmVsIGRyaXZlciBpbiB1c2U6IGVoY2kt
cGNpCglLZXJuZWwgbW9kdWxlczogZWhjaV9wY2kKCjAwOjE0LjAgU01CdXM6IEFkdmFuY2VkIE1p
Y3JvIERldmljZXMsIEluYy4gW0FNRF0gRkNIIFNNQnVzIENvbnRyb2xsZXIgKHJldiAxNCkKCVN1
YnN5c3RlbTogQVNSb2NrIEluY29ycG9yYXRpb24gRGV2aWNlIDc4MGIKCUNvbnRyb2w6IEkvTysg
TWVtKyBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3Rl
cHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcC0gNjZNSHorIFVERi0g
RmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQt
ID5TRVJSLSA8UEVSUi0gSU5UeC0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwaWl4NF9zbWJ1cwoJ
S2VybmVsIG1vZHVsZXM6IGkyY19waWl4NAoKMDA6MTQuMSBJREUgaW50ZXJmYWNlOiBBZHZhbmNl
ZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZDSCBJREUgQ29udHJvbGxlciAocHJvZy1pZiA4
YSBbTWFzdGVyIFNlY1AgUHJpUF0pCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9uIERl
dmljZSA3ODBjCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJ
TlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJ
U3RhdHVzOiBDYXAtIDY2TUh6KyBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPW1lZGl1bSA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAz
MgoJSW50ZXJydXB0OiBwaW4gQiByb3V0ZWQgdG8gSVJRIDE3CglSZWdpb24gMDogSS9PIHBvcnRz
IGF0IDAxZjAgW3NpemU9OF0KCVJlZ2lvbiAxOiBJL08gcG9ydHMgYXQgMDNmNAoJUmVnaW9uIDI6
IEkvTyBwb3J0cyBhdCAwMTcwIFtzaXplPThdCglSZWdpb24gMzogSS9PIHBvcnRzIGF0IDAzNzQK
CVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgZjEwMCBbc2l6ZT0xNl0KCUtlcm5lbCBkcml2ZXIgaW4g
dXNlOiBwYXRhX2F0aWl4cAoJS2VybmVsIG1vZHVsZXM6IHBhdGFfYXRpaXhwLCBwYXRhX2FjcGks
IGF0YV9nZW5lcmljCgowMDoxNC4zIElTQSBicmlkZ2U6IEFkdmFuY2VkIE1pY3JvIERldmljZXMs
IEluYy4gW0FNRF0gRkNIIExQQyBCcmlkZ2UgKHJldiAxMSkKCVN1YnN5c3RlbTogQVNSb2NrIElu
Y29ycG9yYXRpb24gRGV2aWNlIDc4MGUKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIrIFNw
ZWNDeWNsZSsgTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RC
MkItIERpc0lOVHgtCglTdGF0dXM6IENhcC0gNjZNSHorIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBE
RVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5U
eC0KCUxhdGVuY3k6IDAKCjAwOjE0LjQgUENJIGJyaWRnZTogQWR2YW5jZWQgTWljcm8gRGV2aWNl
cywgSW5jLiBbQU1EXSBGQ0ggUENJIEJyaWRnZSAocmV2IDQwKSAocHJvZy1pZiAwMSBbU3VidHJh
Y3RpdmUgZGVjb2RlXSkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0g
TWVtV0lOVi0gVkdBU25vb3ArIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lO
VHgtCglTdGF0dXM6IENhcC0gNjZNSHorIFVERi0gRmFzdEIyQisgUGFyRXJyLSBERVZTRUw9bWVk
aXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVu
Y3k6IDY0CglCdXM6IHByaW1hcnk9MDAsIHNlY29uZGFyeT0wMiwgc3Vib3JkaW5hdGU9MDIsIHNl
Yy1sYXRlbmN5PTY0CglJL08gYmVoaW5kIGJyaWRnZTogMDAwMGQwMDAtMDAwMGRmZmYKCVNlY29u
ZGFyeSBzdGF0dXM6IDY2TUh6LSBGYXN0QjJCKyBQYXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9y
dC0gPFRBYm9ydC0gPE1BYm9ydCsgPFNFUlItIDxQRVJSLQoJQnJpZGdlQ3RsOiBQYXJpdHktIFNF
UlItIE5vSVNBLSBWR0EtIE1BYm9ydC0gPlJlc2V0LSBGYXN0QjJCLQoJCVByaURpc2NUbXItIFNl
Y0Rpc2NUbXItIERpc2NUbXJTdGF0LSBEaXNjVG1yU0VSUkVuLQoKMDA6MTQuNSBVU0IgY29udHJv
bGxlcjogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBGQ0ggVVNCIE9IQ0kgQ29u
dHJvbGxlciAocmV2IDExKSAocHJvZy1pZiAxMCBbT0hDSV0pCglTdWJzeXN0ZW06IEFTUm9jayBJ
bmNvcnBvcmF0aW9uIERldmljZSA3ODA5CglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBT
cGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0
QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6KyBVREYtIEZhc3RCMkIrIFBhckVyci0g
REVWU0VMPW1lZGl1bSA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElO
VHgtCglMYXRlbmN5OiAzMiwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBw
aW4gQyByb3V0ZWQgdG8gSVJRIDE4CglSZWdpb24gMDogTWVtb3J5IGF0IGZmNzQ4MDAwICgzMi1i
aXQsIG5vbi1wcmVmZXRjaGFibGUpIFtzaXplPTRLXQoJS2VybmVsIGRyaXZlciBpbiB1c2U6IG9o
Y2ktcGNpCglLZXJuZWwgbW9kdWxlczogb2hjaV9wY2kKCjAwOjE1LjAgUENJIGJyaWRnZTogQWR2
YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBIdWRzb24gUENJIHRvIFBDSSBicmlkZ2Ug
KFBDSUUgcG9ydCAwKSAocHJvZy1pZiAwMCBbTm9ybWFsIGRlY29kZV0pCglDb250cm9sOiBJL08r
IE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0
ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYt
IEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0g
PlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRl
cwoJQnVzOiBwcmltYXJ5PTAwLCBzZWNvbmRhcnk9MDMsIHN1Ym9yZGluYXRlPTAzLCBzZWMtbGF0
ZW5jeT0wCglNZW1vcnkgYmVoaW5kIGJyaWRnZTogZmY1MDAwMDAtZmY1ZmZmZmYKCVNlY29uZGFy
eSBzdGF0dXM6IDY2TUh6LSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxU
QWJvcnQtIDxNQWJvcnQrIDxTRVJSLSA8UEVSUi0KCUJyaWRnZUN0bDogUGFyaXR5LSBTRVJSLSBO
b0lTQS0gVkdBLSBNQWJvcnQtID5SZXNldC0gRmFzdEIyQi0KCQlQcmlEaXNjVG1yLSBTZWNEaXNj
VG1yLSBEaXNjVG1yU3RhdC0gRGlzY1RtclNFUlJFbi0KCUNhcGFiaWxpdGllczogWzUwXSBQb3dl
ciBNYW5hZ2VtZW50IHZlcnNpb24gMwoJCUZsYWdzOiBQTUVDbGstIERTSS0gRDErIEQyKyBBdXhD
dXJyZW50PTBtQSBQTUUoRDAtLEQxLSxEMi0sRDNob3QtLEQzY29sZC0pCgkJU3RhdHVzOiBEMCBO
b1NvZnRSc3QtIFBNRS1FbmFibGUtIERTZWw9MCBEU2NhbGU9MCBQTUUtCglDYXBhYmlsaXRpZXM6
IFs1OF0gRXhwcmVzcyAodjIpIFJvb3QgUG9ydCAoU2xvdC0pLCBNU0kgMDAKCQlEZXZDYXA6CU1h
eFBheWxvYWQgMTI4IGJ5dGVzLCBQaGFudEZ1bmMgMCwgTGF0ZW5jeSBMMHMgPDY0bnMsIEwxIDwx
dXMKCQkJRXh0VGFnKyBSQkUrIEZMUmVzZXQtCgkJRGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3Jy
ZWN0YWJsZS0gTm9uLUZhdGFsLSBGYXRhbC0gVW5zdXBwb3J0ZWQtCgkJCVJseGRPcmQtIEV4dFRh
Zy0gUGhhbnRGdW5jLSBBdXhQd3ItIE5vU25vb3ArCgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBN
YXhSZWFkUmVxIDEyOCBieXRlcwoJCURldlN0YToJQ29yckVyci0gVW5jb3JyRXJyLSBGYXRhbEVy
ci0gVW5zdXBwUmVxLSBBdXhQd3ItIFRyYW5zUGVuZC0KCQlMbmtDYXA6CVBvcnQgIzAsIFNwZWVk
IDIuNUdUL3MsIFdpZHRoIHgxLCBBU1BNIEwwcyBMMSwgTGF0ZW5jeSBMMCA8NjRucywgTDEgPDF1
cwoJCQlDbG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXArIEJ3Tm90KwoJCUxua0N0bDoJQVNQTSBE
aXNhYmxlZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrLQoJCQlFeHRT
eW5jaC0gQ2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVl
ZCAyLjVHVC9zLCBXaWR0aCB4MSwgVHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZSsgQldN
Z210LSBBQldNZ210LQoJCVJvb3RDdGw6IEVyckNvcnJlY3RhYmxlLSBFcnJOb24tRmF0YWwtIEVy
ckZhdGFsLSBQTUVJbnRFbmEtIENSU1Zpc2libGUtCgkJUm9vdENhcDogQ1JTVmlzaWJsZS0KCQlS
b290U3RhOiBQTUUgUmVxSUQgMDAwMCwgUE1FU3RhdHVzLSBQTUVQZW5kaW5nLQoJCURldkNhcDI6
IENvbXBsZXRpb24gVGltZW91dDogUmFuZ2UgQUJDRCwgVGltZW91dERpcyssIExUUi0sIE9CRkYg
Tm90IFN1cHBvcnRlZCBBUklGd2QtCgkJRGV2Q3RsMjogQ29tcGxldGlvbiBUaW1lb3V0OiA2NW1z
IHRvIDIxMG1zLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBEaXNhYmxlZCBBUklGd2QtCgkJTG5r
Q3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDIuNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3BlZWRE
aXMtCgkJCSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVyTW9k
aWZpZWRDb21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBoYXNp
czogLTZkQgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVxdWFs
aXphdGlvbkNvbXBsZXRlLSwgRXF1YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9uUGhh
c2UyLSwgRXF1YWxpemF0aW9uUGhhc2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglDYXBh
YmlsaXRpZXM6IFthMF0gTVNJOiBFbmFibGUtIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQrCgkJ
QWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBbYjBd
IFN1YnN5c3RlbTogQWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EXSBEZXZpY2UgMDAw
MAoJQ2FwYWJpbGl0aWVzOiBbYjhdIEh5cGVyVHJhbnNwb3J0OiBNU0kgTWFwcGluZyBFbmFibGUr
IEZpeGVkKwoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRp
b246IElEPTAwMDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBwY2ll
cG9ydAoJS2VybmVsIG1vZHVsZXM6IHNocGNocAoKMDA6MTUuMiBQQ0kgYnJpZGdlOiBBZHZhbmNl
ZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEh1ZHNvbiBQQ0kgdG8gUENJIGJyaWRnZSAoUENJ
RSBwb3J0IDIpIChwcm9nLWlmIDAwIFtOb3JtYWwgZGVjb2RlXSkKCUNvbnRyb2w6IEkvTysgTWVt
KyBCdXNNYXN0ZXIrIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBp
bmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFz
dEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VS
Ui0gPFBFUlItIElOVHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVzCglC
dXM6IHByaW1hcnk9MDAsIHNlY29uZGFyeT0wNCwgc3Vib3JkaW5hdGU9MDQsIHNlYy1sYXRlbmN5
PTAKCU1lbW9yeSBiZWhpbmQgYnJpZGdlOiBmZjQwMDAwMC1mZjRmZmZmZgoJU2Vjb25kYXJ5IHN0
YXR1czogNjZNSHotIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9y
dC0gPE1BYm9ydC0gPFNFUlItIDxQRVJSLQoJQnJpZGdlQ3RsOiBQYXJpdHktIFNFUlItIE5vSVNB
LSBWR0EtIE1BYm9ydC0gPlJlc2V0LSBGYXN0QjJCLQoJCVByaURpc2NUbXItIFNlY0Rpc2NUbXIt
IERpc2NUbXJTdGF0LSBEaXNjVG1yU0VSUkVuLQoJQ2FwYWJpbGl0aWVzOiBbNTBdIFBvd2VyIE1h
bmFnZW1lbnQgdmVyc2lvbiAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJl
bnQ9MG1BIFBNRShEMC0sRDEtLEQyLSxEM2hvdC0sRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29m
dFJzdC0gUE1FLUVuYWJsZS0gRFNlbD0wIERTY2FsZT0wIFBNRS0KCUNhcGFiaWxpdGllczogWzU4
XSBFeHByZXNzICh2MikgUm9vdCBQb3J0IChTbG90LSksIE1TSSAwMAoJCURldkNhcDoJTWF4UGF5
bG9hZCAxMjggYnl0ZXMsIFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8NjRucywgTDEgPDF1cwoJ
CQlFeHRUYWcrIFJCRSsgRkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6IENvcnJlY3Rh
YmxlLSBOb24tRmF0YWwtIEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZC0gRXh0VGFnLSBQ
aGFudEZ1bmMtIEF1eFB3ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAxMjggYnl0ZXMsIE1heFJl
YWRSZXEgMTI4IGJ5dGVzCgkJRGV2U3RhOglDb3JyRXJyLSBVbmNvcnJFcnItIEZhdGFsRXJyLSBV
bnN1cHBSZXEtIEF1eFB3ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMiwgU3BlZWQgNUdU
L3MsIFdpZHRoIHgxLCBBU1BNIEwwcyBMMSwgTGF0ZW5jeSBMMCA8NjRucywgTDEgPDF1cwoJCQlD
bG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXArIEJ3Tm90KwoJCUxua0N0bDoJQVNQTSBEaXNhYmxl
ZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrKwoJCQlFeHRTeW5jaC0g
Q2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCA1R1Qv
cywgV2lkdGggeDEsIFRyRXJyLSBUcmFpbi0gU2xvdENsaysgRExBY3RpdmUrIEJXTWdtdCsgQUJX
TWdtdC0KCQlSb290Q3RsOiBFcnJDb3JyZWN0YWJsZS0gRXJyTm9uLUZhdGFsLSBFcnJGYXRhbC0g
UE1FSW50RW5hLSBDUlNWaXNpYmxlLQoJCVJvb3RDYXA6IENSU1Zpc2libGUtCgkJUm9vdFN0YTog
UE1FIFJlcUlEIDAwMDAsIFBNRVN0YXR1cy0gUE1FUGVuZGluZy0KCQlEZXZDYXAyOiBDb21wbGV0
aW9uIFRpbWVvdXQ6IFJhbmdlIEFCQ0QsIFRpbWVvdXREaXMrLCBMVFItLCBPQkZGIE5vdCBTdXBw
b3J0ZWQgQVJJRndkLQoJCURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNjVtcyB0byAyMTBt
cywgVGltZW91dERpcy0sIExUUi0sIE9CRkYgRGlzYWJsZWQgQVJJRndkLQoJCUxua0N0bDI6IFRh
cmdldCBMaW5rIFNwZWVkOiA1R1QvcywgRW50ZXJDb21wbGlhbmNlLSBTcGVlZERpcy0KCQkJIFRy
YW5zbWl0IE1hcmdpbjogTm9ybWFsIE9wZXJhdGluZyBSYW5nZSwgRW50ZXJNb2RpZmllZENvbXBs
aWFuY2UtIENvbXBsaWFuY2VTT1MtCgkJCSBDb21wbGlhbmNlIERlLWVtcGhhc2lzOiAtNmRCCgkJ
TG5rU3RhMjogQ3VycmVudCBEZS1lbXBoYXNpcyBMZXZlbDogLTMuNWRCLCBFcXVhbGl6YXRpb25D
b21wbGV0ZS0sIEVxdWFsaXphdGlvblBoYXNlMS0KCQkJIEVxdWFsaXphdGlvblBoYXNlMi0sIEVx
dWFsaXphdGlvblBoYXNlMy0sIExpbmtFcXVhbGl6YXRpb25SZXF1ZXN0LQoJQ2FwYWJpbGl0aWVz
OiBbYTBdIE1TSTogRW5hYmxlLSBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0KwoJCUFkZHJlc3M6
IDAwMDAwMDAwMDAwMDAwMDAgIERhdGE6IDAwMDAKCUNhcGFiaWxpdGllczogW2IwXSBTdWJzeXN0
ZW06IEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRGV2aWNlIDAwMDAKCUNhcGFi
aWxpdGllczogW2I4XSBIeXBlclRyYW5zcG9ydDogTVNJIE1hcHBpbmcgRW5hYmxlKyBGaXhlZCsK
CUNhcGFiaWxpdGllczogWzEwMCB2MV0gVmVuZG9yIFNwZWNpZmljIEluZm9ybWF0aW9uOiBJRD0w
MDAxIFJldj0xIExlbj0wMTAgPD8+CglLZXJuZWwgZHJpdmVyIGluIHVzZTogcGNpZXBvcnQKCUtl
cm5lbCBtb2R1bGVzOiBzaHBjaHAKCjAwOjE1LjMgUENJIGJyaWRnZTogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EXSBIdWRzb24gUENJIHRvIFBDSSBicmlkZ2UgKFBDSUUgcG9ydCAz
KSAocHJvZy1pZiAwMCBbTm9ybWFsIGRlY29kZV0pCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFz
dGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJS
LSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXArIDY2TUh6LSBVREYtIEZhc3RCMkItIFBh
ckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJS
LSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBTaXplOiA2NCBieXRlcwoJQnVzOiBwcmlt
YXJ5PTAwLCBzZWNvbmRhcnk9MDUsIHN1Ym9yZGluYXRlPTA1LCBzZWMtbGF0ZW5jeT0wCglJL08g
YmVoaW5kIGJyaWRnZTogMDAwMGMwMDAtMDAwMGNmZmYKCVByZWZldGNoYWJsZSBtZW1vcnkgYmVo
aW5kIGJyaWRnZTogMDAwMDAwMDBkMDAwMDAwMC0wMDAwMDAwMGQwMGZmZmZmCglTZWNvbmRhcnkg
c3RhdHVzOiA2Nk1Iei0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFi
b3J0LSA8TUFib3J0LSA8U0VSUi0gPFBFUlItCglCcmlkZ2VDdGw6IFBhcml0eS0gU0VSUi0gTm9J
U0EtIFZHQS0gTUFib3J0LSA+UmVzZXQtIEZhc3RCMkItCgkJUHJpRGlzY1Rtci0gU2VjRGlzY1Rt
ci0gRGlzY1RtclN0YXQtIERpc2NUbXJTRVJSRW4tCglDYXBhYmlsaXRpZXM6IFs1MF0gUG93ZXIg
TWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4Q3Vy
cmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9T
b2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBb
NThdIEV4cHJlc3MgKHYyKSBSb290IFBvcnQgKFNsb3QtKSwgTVNJIDAwCgkJRGV2Q2FwOglNYXhQ
YXlsb2FkIDEyOCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIDw2NG5zLCBMMSA8MXVz
CgkJCUV4dFRhZysgUkJFKyBGTFJlc2V0LQoJCURldkN0bDoJUmVwb3J0IGVycm9yczogQ29ycmVj
dGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlSbHhkT3JkLSBFeHRUYWct
IFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXlsb2FkIDEyOCBieXRlcywgTWF4
UmVhZFJlcSAxMjggYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVuY29yckVyci0gRmF0YWxFcnIt
IFVuc3VwcFJlcS0gQXV4UHdyLSBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0ICMzLCBTcGVlZCA1
R1QvcywgV2lkdGggeDEsIEFTUE0gTDBzIEwxLCBMYXRlbmN5IEwwIDw2NG5zLCBMMSA8MXVzCgkJ
CUNsb2NrUE0tIFN1cnByaXNlLSBMTEFjdFJlcCsgQndOb3QrCgkJTG5rQ3RsOglBU1BNIERpc2Fi
bGVkOyBSQ0IgNjQgYnl0ZXMgRGlzYWJsZWQtIFJldHJhaW4tIENvbW1DbGsrCgkJCUV4dFN5bmNo
LSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0ludC0KCQlMbmtTdGE6CVNwZWVkIDIu
NUdUL3MsIFdpZHRoIHgxLCBUckVyci0gVHJhaW4tIFNsb3RDbGsrIERMQWN0aXZlKyBCV01nbXQr
IEFCV01nbXQtCgkJUm9vdEN0bDogRXJyQ29ycmVjdGFibGUtIEVyck5vbi1GYXRhbC0gRXJyRmF0
YWwtIFBNRUludEVuYS0gQ1JTVmlzaWJsZS0KCQlSb290Q2FwOiBDUlNWaXNpYmxlLQoJCVJvb3RT
dGE6IFBNRSBSZXFJRCAwMDAwLCBQTUVTdGF0dXMtIFBNRVBlbmRpbmctCgkJRGV2Q2FwMjogQ29t
cGxldGlvbiBUaW1lb3V0OiBSYW5nZSBBQkNELCBUaW1lb3V0RGlzKywgTFRSLSwgT0JGRiBOb3Qg
U3VwcG9ydGVkIEFSSUZ3ZC0KCQlEZXZDdGwyOiBDb21wbGV0aW9uIFRpbWVvdXQ6IDY1bXMgdG8g
MjEwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkIEFSSUZ3ZC0KCQlMbmtDdGwy
OiBUYXJnZXQgTGluayBTcGVlZDogNUdUL3MsIEVudGVyQ29tcGxpYW5jZS0gU3BlZWREaXMtCgkJ
CSBUcmFuc21pdCBNYXJnaW46IE5vcm1hbCBPcGVyYXRpbmcgUmFuZ2UsIEVudGVyTW9kaWZpZWRD
b21wbGlhbmNlLSBDb21wbGlhbmNlU09TLQoJCQkgQ29tcGxpYW5jZSBEZS1lbXBoYXNpczogLTZk
QgoJCUxua1N0YTI6IEN1cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC0zLjVkQiwgRXF1YWxpemF0
aW9uQ29tcGxldGUtLCBFcXVhbGl6YXRpb25QaGFzZTEtCgkJCSBFcXVhbGl6YXRpb25QaGFzZTIt
LCBFcXVhbGl6YXRpb25QaGFzZTMtLCBMaW5rRXF1YWxpemF0aW9uUmVxdWVzdC0KCUNhcGFiaWxp
dGllczogW2EwXSBNU0k6IEVuYWJsZS0gQ291bnQ9MS8xIE1hc2thYmxlLSA2NGJpdCsKCQlBZGRy
ZXNzOiAwMDAwMDAwMDAwMDAwMDAwICBEYXRhOiAwMDAwCglDYXBhYmlsaXRpZXM6IFtiMF0gU3Vi
c3lzdGVtOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIERldmljZSAwMDAwCglD
YXBhYmlsaXRpZXM6IFtiOF0gSHlwZXJUcmFuc3BvcnQ6IE1TSSBNYXBwaW5nIEVuYWJsZSsgRml4
ZWQrCglDYXBhYmlsaXRpZXM6IFsxMDAgdjFdIFZlbmRvciBTcGVjaWZpYyBJbmZvcm1hdGlvbjog
SUQ9MDAwMSBSZXY9MSBMZW49MDEwIDw/PgoJS2VybmVsIGRyaXZlciBpbiB1c2U6IHBjaWVwb3J0
CglLZXJuZWwgbW9kdWxlczogc2hwY2hwCgowMDoxOC4wIEhvc3QgYnJpZGdlOiBBZHZhbmNlZCBN
aWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWlseSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9j
ZXNzb3IgRnVuY3Rpb24gMAoJQ29udHJvbDogSS9PLSBNZW0tIEJ1c01hc3Rlci0gU3BlY0N5Y2xl
LSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlz
SU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1m
YXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCjAwOjE4
LjEgSG9zdCBicmlkZ2U6IEFkdmFuY2VkIE1pY3JvIERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5
IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3NvciBGdW5jdGlvbiAxCglDb250cm9sOiBJL08t
IE1lbS0gQnVzTWFzdGVyLSBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0
ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4LQoJU3RhdHVzOiBDYXAtIDY2TUh6LSBVREYt
IEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0g
PlNFUlItIDxQRVJSLSBJTlR4LQoKMDA6MTguMiBIb3N0IGJyaWRnZTogQWR2YW5jZWQgTWljcm8g
RGV2aWNlcywgSW5jLiBbQU1EXSBGYW1pbHkgMTVoIChNb2RlbHMgMTBoLTFmaCkgUHJvY2Vzc29y
IEZ1bmN0aW9uIDIKCUNvbnRyb2w6IEkvTy0gTWVtLSBCdXNNYXN0ZXItIFNwZWNDeWNsZS0gTWVt
V0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZhc3RCMkItIERpc0lOVHgt
CglTdGF0dXM6IENhcC0gNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJyLSBERVZTRUw9ZmFzdCA+
VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElOVHgtCgowMDoxOC4zIEhv
c3QgYnJpZGdlOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWlseSAxNWgg
KE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgRnVuY3Rpb24gMwoJQ29udHJvbDogSS9PLSBNZW0t
IEJ1c01hc3Rlci0gU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGlu
Zy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0
QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJS
LSA8UEVSUi0gSU5UeC0KCUNhcGFiaWxpdGllczogW2YwXSBTZWN1cmUgZGV2aWNlIDw/PgoJS2Vy
bmVsIGRyaXZlciBpbiB1c2U6IGsxMHRlbXAKCUtlcm5lbCBtb2R1bGVzOiBrMTB0ZW1wCgowMDox
OC40IEhvc3QgYnJpZGdlOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtBTURdIEZhbWls
eSAxNWggKE1vZGVscyAxMGgtMWZoKSBQcm9jZXNzb3IgRnVuY3Rpb24gNAoJQ29udHJvbDogSS9P
LSBNZW0tIEJ1c01hc3Rlci0gU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBT
dGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwLSA2Nk1Iei0gVURG
LSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQt
ID5TRVJSLSA8UEVSUi0gSU5UeC0KCjAwOjE4LjUgSG9zdCBicmlkZ2U6IEFkdmFuY2VkIE1pY3Jv
IERldmljZXMsIEluYy4gW0FNRF0gRmFtaWx5IDE1aCAoTW9kZWxzIDEwaC0xZmgpIFByb2Nlc3Nv
ciBGdW5jdGlvbiA1CglDb250cm9sOiBJL08tIE1lbS0gQnVzTWFzdGVyLSBTcGVjQ3ljbGUtIE1l
bVdJTlYtIFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4
LQoJU3RhdHVzOiBDYXAtIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3Qg
PlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoKMDE6MDAuMCBW
R0EgY29tcGF0aWJsZSBjb250cm9sbGVyOiBBZHZhbmNlZCBNaWNybyBEZXZpY2VzLCBJbmMuIFtB
TUQvQVRJXSBQaXRjYWlybiBQUk8gW1JhZGVvbiBIRCA3ODUwXSAocHJvZy1pZiAwMCBbVkdBIGNv
bnRyb2xsZXJdKQoJU3Vic3lzdGVtOiBHaWdhYnl0ZSBUZWNobm9sb2d5IENvLiwgTHRkIERldmlj
ZSAyNTUzCglDb250cm9sOiBJL08rIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYt
IFZHQVNub29wLSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3Rh
dHVzOiBDYXArIDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9y
dC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2Fj
aGUgTGluZSBTaXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBwaW4gQSByb3V0ZWQgdG8gSVJRIDYw
CglSZWdpb24gMDogTWVtb3J5IGF0IGMwMDAwMDAwICg2NC1iaXQsIHByZWZldGNoYWJsZSkgW3Np
emU9MjU2TV0KCVJlZ2lvbiAyOiBNZW1vcnkgYXQgZmY2MDAwMDAgKDY0LWJpdCwgbm9uLXByZWZl
dGNoYWJsZSkgW3NpemU9MjU2S10KCVJlZ2lvbiA0OiBJL08gcG9ydHMgYXQgZTAwMCBbc2l6ZT0y
NTZdCglFeHBhbnNpb24gUk9NIGF0IGZmNjQwMDAwIFtkaXNhYmxlZF0gW3NpemU9MTI4S10KCUNh
cGFiaWxpdGllczogWzQ4XSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IExlbj0wOCA8Pz4K
CUNhcGFiaWxpdGllczogWzUwXSBQb3dlciBNYW5hZ2VtZW50IHZlcnNpb24gMwoJCUZsYWdzOiBQ
TUVDbGstIERTSS0gRDErIEQyKyBBdXhDdXJyZW50PTBtQSBQTUUoRDAtLEQxKyxEMissRDNob3Qr
LEQzY29sZC0pCgkJU3RhdHVzOiBEMCBOb1NvZnRSc3QtIFBNRS1FbmFibGUtIERTZWw9MCBEU2Nh
bGU9MCBQTUUtCglDYXBhYmlsaXRpZXM6IFs1OF0gRXhwcmVzcyAodjIpIExlZ2FjeSBFbmRwb2lu
dCwgTVNJIDAwCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDI1NiBieXRlcywgUGhhbnRGdW5jIDAsIExh
dGVuY3kgTDBzIDw0dXMsIEwxIHVubGltaXRlZAoJCQlFeHRUYWcrIEF0dG5CdG4tIEF0dG5JbmQt
IFB3ckluZC0gUkJFKyBGTFJlc2V0LQoJCURldkN0bDoJUmVwb3J0IGVycm9yczogQ29ycmVjdGFi
bGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlSbHhkT3JkLSBFeHRUYWctIFBo
YW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXlsb2FkIDI1NiBieXRlcywgTWF4UmVh
ZFJlcSA1MTIgYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnIrIFVuY29yckVyci0gRmF0YWxFcnItIFVu
c3VwcFJlcSsgQXV4UHdyLSBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0ICMwLCBTcGVlZCA4R1Qv
cywgV2lkdGggeDE2LCBBU1BNIEwwcyBMMSwgTGF0ZW5jeSBMMCA8NjRucywgTDEgPDF1cwoJCQlD
bG9ja1BNLSBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3Tm90LQoJCUxua0N0bDoJQVNQTSBEaXNhYmxl
ZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrKwoJCQlFeHRTeW5jaC0g
Q2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCA1R1Qv
cywgV2lkdGggeDE2LCBUckVyci0gVHJhaW4tIFNsb3RDbGsrIERMQWN0aXZlLSBCV01nbXQtIEFC
V01nbXQtCgkJRGV2Q2FwMjogQ29tcGxldGlvbiBUaW1lb3V0OiBOb3QgU3VwcG9ydGVkLCBUaW1l
b3V0RGlzLSwgTFRSLSwgT0JGRiBOb3QgU3VwcG9ydGVkCgkJRGV2Q3RsMjogQ29tcGxldGlvbiBU
aW1lb3V0OiA1MHVzIHRvIDUwbXMsIFRpbWVvdXREaXMtLCBMVFItLCBPQkZGIERpc2FibGVkCgkJ
TG5rQ3RsMjogVGFyZ2V0IExpbmsgU3BlZWQ6IDhHVC9zLCBFbnRlckNvbXBsaWFuY2UtIFNwZWVk
RGlzLQoJCQkgVHJhbnNtaXQgTWFyZ2luOiBOb3JtYWwgT3BlcmF0aW5nIFJhbmdlLCBFbnRlck1v
ZGlmaWVkQ29tcGxpYW5jZS0gQ29tcGxpYW5jZVNPUy0KCQkJIENvbXBsaWFuY2UgRGUtZW1waGFz
aXM6IC02ZEIKCQlMbmtTdGEyOiBDdXJyZW50IERlLWVtcGhhc2lzIExldmVsOiAtNmRCLCBFcXVh
bGl6YXRpb25Db21wbGV0ZS0sIEVxdWFsaXphdGlvblBoYXNlMS0KCQkJIEVxdWFsaXphdGlvblBo
YXNlMi0sIEVxdWFsaXphdGlvblBoYXNlMy0sIExpbmtFcXVhbGl6YXRpb25SZXF1ZXN0LQoJQ2Fw
YWJpbGl0aWVzOiBbYTBdIE1TSTogRW5hYmxlKyBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0KwoJ
CUFkZHJlc3M6IDAwMDAwMDAwZmVlMDQwMGMgIERhdGE6IDQwMDAKCUNhcGFiaWxpdGllczogWzEw
MCB2MV0gVmVuZG9yIFNwZWNpZmljIEluZm9ybWF0aW9uOiBJRD0wMDAxIFJldj0xIExlbj0wMTAg
PD8+CglDYXBhYmlsaXRpZXM6IFsxNTAgdjJdIEFkdmFuY2VkIEVycm9yIFJlcG9ydGluZwoJCVVF
U3RhOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBS
eE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlvbC0KCQlVRU1zazoJRExQLSBTREVT
LSBUTFAtIEZDUC0gQ21wbHRUTy0gQ21wbHRBYnJ0LSBVbnhDbXBsdC0gUnhPRi0gTWFsZlRMUC0g
RUNSQy0gVW5zdXBSZXEtIEFDU1Zpb2wtCgkJVUVTdnJ0OglETFArIFNERVMrIFRMUC0gRkNQKyBD
bXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBSeE9GKyBNYWxmVExQKyBFQ1JDLSBVbnN1cFJl
cS0gQUNTVmlvbC0KCQlDRVN0YToJUnhFcnItIEJhZFRMUC0gQmFkRExMUC0gUm9sbG92ZXItIFRp
bWVvdXQtIE5vbkZhdGFsRXJyKwoJCUNFTXNrOglSeEVyci0gQmFkVExQLSBCYWRETExQLSBSb2xs
b3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnIrCgkJQUVSQ2FwOglGaXJzdCBFcnJvciBQb2ludGVy
OiAwMCwgR2VuQ2FwKyBDR2VuRW4tIENoa0NhcCsgQ2hrRW4tCglDYXBhYmlsaXRpZXM6IFsyNzAg
djFdICMxOQoJQ2FwYWJpbGl0aWVzOiBbMmIwIHYxXSBBZGRyZXNzIFRyYW5zbGF0aW9uIFNlcnZp
Y2UgKEFUUykKCQlBVFNDYXA6CUludmFsaWRhdGUgUXVldWUgRGVwdGg6IDAwCgkJQVRTQ3RsOglF
bmFibGUtLCBTbWFsbGVzdCBUcmFuc2xhdGlvbiBVbml0OiAwMAoJQ2FwYWJpbGl0aWVzOiBbMmMw
IHYxXSAjMTMKCUNhcGFiaWxpdGllczogWzJkMCB2MV0gIzFiCglLZXJuZWwgZHJpdmVyIGluIHVz
ZTogcmFkZW9uCglLZXJuZWwgbW9kdWxlczogcmFkZW9uCgowMTowMC4xIEF1ZGlvIGRldmljZTog
QWR2YW5jZWQgTWljcm8gRGV2aWNlcywgSW5jLiBbQU1EL0FUSV0gQ2FwZSBWZXJkZS9QaXRjYWly
biBIRE1JIEF1ZGlvIFtSYWRlb24gSEQgNzcwMC83ODAwIFNlcmllc10KCVN1YnN5c3RlbTogR2ln
YWJ5dGUgVGVjaG5vbG9neSBDby4sIEx0ZCBEZXZpY2UgYWFiMAoJQ29udHJvbDogSS9PKyBNZW0r
IEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGlu
Zy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeCsKCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0
QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJS
LSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDAsIENhY2hlIExpbmUgU2l6ZTogNjQgYnl0ZXMKCUlu
dGVycnVwdDogcGluIEIgcm91dGVkIHRvIElSUSA2MgoJUmVnaW9uIDA6IE1lbW9yeSBhdCBmZjY2
MDAwMCAoNjQtYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT0xNktdCglDYXBhYmlsaXRpZXM6
IFs0OF0gVmVuZG9yIFNwZWNpZmljIEluZm9ybWF0aW9uOiBMZW49MDggPD8+CglDYXBhYmlsaXRp
ZXM6IFs1MF0gUG93ZXIgTWFuYWdlbWVudCB2ZXJzaW9uIDMKCQlGbGFnczogUE1FQ2xrLSBEU0kt
IEQxKyBEMisgQXV4Q3VycmVudD0wbUEgUE1FKEQwLSxEMS0sRDItLEQzaG90LSxEM2NvbGQtKQoJ
CVN0YXR1czogRDAgTm9Tb2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJ
Q2FwYWJpbGl0aWVzOiBbNThdIEV4cHJlc3MgKHYyKSBMZWdhY3kgRW5kcG9pbnQsIE1TSSAwMAoJ
CURldkNhcDoJTWF4UGF5bG9hZCAyNTYgYnl0ZXMsIFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8
NHVzLCBMMSB1bmxpbWl0ZWQKCQkJRXh0VGFnKyBBdHRuQnRuLSBBdHRuSW5kLSBQd3JJbmQtIFJC
RSsgRkxSZXNldC0KCQlEZXZDdGw6CVJlcG9ydCBlcnJvcnM6IENvcnJlY3RhYmxlLSBOb24tRmF0
YWwtIEZhdGFsLSBVbnN1cHBvcnRlZC0KCQkJUmx4ZE9yZC0gRXh0VGFnLSBQaGFudEZ1bmMtIEF1
eFB3ci0gTm9Tbm9vcCsKCQkJTWF4UGF5bG9hZCAyNTYgYnl0ZXMsIE1heFJlYWRSZXEgNTEyIGJ5
dGVzCgkJRGV2U3RhOglDb3JyRXJyKyBVbmNvcnJFcnItIEZhdGFsRXJyLSBVbnN1cHBSZXErIEF1
eFB3ci0gVHJhbnNQZW5kLQoJCUxua0NhcDoJUG9ydCAjMCwgU3BlZWQgOEdUL3MsIFdpZHRoIHgx
NiwgQVNQTSBMMHMgTDEsIExhdGVuY3kgTDAgPDY0bnMsIEwxIDwxdXMKCQkJQ2xvY2tQTS0gU3Vy
cHJpc2UtIExMQWN0UmVwLSBCd05vdC0KCQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IFJDQiA2NCBi
eXRlcyBEaXNhYmxlZC0gUmV0cmFpbi0gQ29tbUNsaysKCQkJRXh0U3luY2gtIENsb2NrUE0tIEF1
dFdpZERpcy0gQldJbnQtIEF1dEJXSW50LQoJCUxua1N0YToJU3BlZWQgNUdUL3MsIFdpZHRoIHgx
NiwgVHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJCURl
dkNhcDI6IENvbXBsZXRpb24gVGltZW91dDogTm90IFN1cHBvcnRlZCwgVGltZW91dERpcy0sIExU
Ui0sIE9CRkYgTm90IFN1cHBvcnRlZAoJCURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNTB1
cyB0byA1MG1zLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBEaXNhYmxlZAoJCUxua1N0YTI6IEN1
cnJlbnQgRGUtZW1waGFzaXMgTGV2ZWw6IC02ZEIsIEVxdWFsaXphdGlvbkNvbXBsZXRlLSwgRXF1
YWxpemF0aW9uUGhhc2UxLQoJCQkgRXF1YWxpemF0aW9uUGhhc2UyLSwgRXF1YWxpemF0aW9uUGhh
c2UzLSwgTGlua0VxdWFsaXphdGlvblJlcXVlc3QtCglDYXBhYmlsaXRpZXM6IFthMF0gTVNJOiBF
bmFibGUrIENvdW50PTEvMSBNYXNrYWJsZS0gNjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDBmZWUw
MjAwYyAgRGF0YTogNDAwMAoJQ2FwYWJpbGl0aWVzOiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMg
SW5mb3JtYXRpb246IElEPTAwMDEgUmV2PTEgTGVuPTAxMCA8Pz4KCUNhcGFiaWxpdGllczogWzE1
MCB2Ml0gQWR2YW5jZWQgRXJyb3IgUmVwb3J0aW5nCgkJVUVTdGE6CURMUC0gU0RFUy0gVExQLSBG
Q1AtIENtcGx0VE8tIENtcGx0QWJydC0gVW54Q21wbHQtIFJ4T0YtIE1hbGZUTFAtIEVDUkMtIFVu
c3VwUmVxLSBBQ1NWaW9sLQoJCVVFTXNrOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBD
bXBsdEFicnQtIFVueENtcGx0LSBSeE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlv
bC0KCQlVRVN2cnQ6CURMUCsgU0RFUysgVExQLSBGQ1ArIENtcGx0VE8tIENtcGx0QWJydC0gVW54
Q21wbHQtIFJ4T0YrIE1hbGZUTFArIEVDUkMtIFVuc3VwUmVxLSBBQ1NWaW9sLQoJCUNFU3RhOglS
eEVyci0gQmFkVExQLSBCYWRETExQLSBSb2xsb3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnIrCgkJ
Q0VNc2s6CVJ4RXJyLSBCYWRUTFAtIEJhZERMTFAtIFJvbGxvdmVyLSBUaW1lb3V0LSBOb25GYXRh
bEVycisKCQlBRVJDYXA6CUZpcnN0IEVycm9yIFBvaW50ZXI6IDAwLCBHZW5DYXArIENHZW5Fbi0g
Q2hrQ2FwKyBDaGtFbi0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBzbmRfaGRhX2ludGVsCglLZXJu
ZWwgbW9kdWxlczogc25kX2hkYV9pbnRlbAoKMDI6MDYuMCBNdWx0aW1lZGlhIGF1ZGlvIGNvbnRy
b2xsZXI6IENyZWF0aXZlIExhYnMgQ0EwMTA2IFNvdW5kYmxhc3RlcgoJU3Vic3lzdGVtOiBDcmVh
dGl2ZSBMYWJzIFNCMDU3MCBbU0IgQXVkaWd5IFNFXQoJQ29udHJvbDogSS9PKyBNZW0tIEJ1c01h
c3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFyRXJyLSBTdGVwcGluZy0gU0VS
Ui0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1Iei0gVURGLSBGYXN0QjJCKyBQ
YXJFcnItIERFVlNFTD1tZWRpdW0gPlRBYm9ydC0gPFRBYm9ydC0gPE1BYm9ydC0gPlNFUlItIDxQ
RVJSLSBJTlR4LQoJTGF0ZW5jeTogMzIgKDUwMG5zIG1pbiwgNTAwMG5zIG1heCkKCUludGVycnVw
dDogcGluIEEgcm91dGVkIHRvIElSUSAyMQoJUmVnaW9uIDA6IEkvTyBwb3J0cyBhdCBkMDAwIFtz
aXplPTMyXQoJQ2FwYWJpbGl0aWVzOiBbZGNdIFBvd2VyIE1hbmFnZW1lbnQgdmVyc2lvbiAyCgkJ
RmxhZ3M6IFBNRUNsay0gRFNJKyBEMSsgRDIrIEF1eEN1cnJlbnQ9MG1BIFBNRShEMC0sRDEtLEQy
LSxEM2hvdC0sRDNjb2xkLSkKCQlTdGF0dXM6IEQwIE5vU29mdFJzdC0gUE1FLUVuYWJsZS0gRFNl
bD0wIERTY2FsZT0wIFBNRS0KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBzbmRfY2EwMTA2CglLZXJu
ZWwgbW9kdWxlczogc25kX2NhMDEwNgoKMDI6MDcuMCBTZXJpYWwgY29udHJvbGxlcjogTW9zQ2hp
cCBTZW1pY29uZHVjdG9yIFRlY2hub2xvZ3kgTHRkLiBQQ0kgOTgzNSBNdWx0aS1JL08gQ29udHJv
bGxlciAocmV2IDAxKSAocHJvZy1pZiAwMiBbMTY1NTBdKQoJU3Vic3lzdGVtOiBMU0kgTG9naWMg
LyBTeW1iaW9zIExvZ2ljIDJTICgxNkM1NTAgVUFSVCkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNN
YXN0ZXItIFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNF
UlItIEZhc3RCMkItIERpc0lOVHgtCglTdGF0dXM6IENhcC0gNjZNSHotIFVERi0gRmFzdEIyQisg
UGFyRXJyLSBERVZTRUw9bWVkaXVtID5UQWJvcnQtIDxUQWJvcnQtIDxNQWJvcnQtID5TRVJSLSA8
UEVSUi0gSU5UeCsKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAyMgoJUmVnaW9uIDA6
IEkvTyBwb3J0cyBhdCBkMDcwIFtzaXplPThdCglSZWdpb24gMTogSS9PIHBvcnRzIGF0IGQwNjAg
W3NpemU9OF0KCVJlZ2lvbiAyOiBJL08gcG9ydHMgYXQgZDA1MCBbc2l6ZT04XQoJUmVnaW9uIDM6
IEkvTyBwb3J0cyBhdCBkMDQwIFtzaXplPThdCglSZWdpb24gNDogSS9PIHBvcnRzIGF0IGQwMzAg
W3NpemU9OF0KCVJlZ2lvbiA1OiBJL08gcG9ydHMgYXQgZDAyMCBbc2l6ZT0xNl0KCUtlcm5lbCBk
cml2ZXIgaW4gdXNlOiBzZXJpYWwKCUtlcm5lbCBtb2R1bGVzOiA4MjUwX3BjaSwgcGFycG9ydF9z
ZXJpYWwKCjAzOjAwLjAgTXVsdGltZWRpYSBjb250cm9sbGVyOiBQaGlsaXBzIFNlbWljb25kdWN0
b3JzIFNBQTcxNjAgKHJldiAwMykKCVN1YnN5c3RlbTogRGV2aWNlIDYyMjA6MDAwMgoJQ29udHJv
bDogSS9PKyBNZW0rIEJ1c01hc3RlcisgU3BlY0N5Y2xlLSBNZW1XSU5WLSBWR0FTbm9vcC0gUGFy
RXJyLSBTdGVwcGluZy0gU0VSUi0gRmFzdEIyQi0gRGlzSU5UeC0KCVN0YXR1czogQ2FwKyA2Nk1I
ei0gVURGLSBGYXN0QjJCLSBQYXJFcnItIERFVlNFTD1mYXN0ID5UQWJvcnQtIDxUQWJvcnQtIDxN
QWJvcnQtID5TRVJSLSA8UEVSUi0gSU5UeC0KCUxhdGVuY3k6IDAsIENhY2hlIExpbmUgU2l6ZTog
NjQgYnl0ZXMKCUludGVycnVwdDogcGluIEEgcm91dGVkIHRvIElSUSAxNgoJUmVnaW9uIDA6IE1l
bW9yeSBhdCBmZjUwMDAwMCAoNjQtYml0LCBub24tcHJlZmV0Y2hhYmxlKSBbc2l6ZT0xTV0KCUNh
cGFiaWxpdGllczogWzQwXSBNU0k6IEVuYWJsZS0gQ291bnQ9MS8zMiBNYXNrYWJsZS0gNjRiaXQr
CgkJQWRkcmVzczogMDAwMDAwMDAwMDAwMDAwMCAgRGF0YTogMDAwMAoJQ2FwYWJpbGl0aWVzOiBb
NTBdIEV4cHJlc3MgKHYxKSBFbmRwb2ludCwgTVNJIDAwCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDEy
OCBieXRlcywgUGhhbnRGdW5jIDAsIExhdGVuY3kgTDBzIDwyNTZucywgTDEgPDF1cwoJCQlFeHRU
YWctIEF0dG5CdG4tIEF0dG5JbmQtIFB3ckluZC0gUkJFLSBGTFJlc2V0LQoJCURldkN0bDoJUmVw
b3J0IGVycm9yczogQ29ycmVjdGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJ
CQlSbHhkT3JkLSBFeHRUYWctIFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKwoJCQlNYXhQYXls
b2FkIDEyOCBieXRlcywgTWF4UmVhZFJlcSAxMjggYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVu
Y29yckVyci0gRmF0YWxFcnItIFVuc3VwcFJlcS0gQXV4UHdyLSBUcmFuc1BlbmQtCgkJTG5rQ2Fw
OglQb3J0ICMxLCBTcGVlZCAyLjVHVC9zLCBXaWR0aCB4MSwgQVNQTSBMMHMgTDEsIExhdGVuY3kg
TDAgPDR1cywgTDEgPDY0dXMKCQkJQ2xvY2tQTS0gU3VycHJpc2UtIExMQWN0UmVwLSBCd05vdC0K
CQlMbmtDdGw6CUFTUE0gRGlzYWJsZWQ7IFJDQiAxMjggYnl0ZXMgRGlzYWJsZWQtIFJldHJhaW4t
IENvbW1DbGstCgkJCUV4dFN5bmNoLSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRCV0lu
dC0KCQlMbmtTdGE6CVNwZWVkIDIuNUdUL3MsIFdpZHRoIHgxLCBUckVyci0gVHJhaW4tIFNsb3RD
bGstIERMQWN0aXZlLSBCV01nbXQtIEFCV01nbXQtCglDYXBhYmlsaXRpZXM6IFs3NF0gUG93ZXIg
TWFuYWdlbWVudCB2ZXJzaW9uIDIKCQlGbGFnczogUE1FQ2xrLSBEU0ktIEQxKyBEMisgQXV4Q3Vy
cmVudD0wbUEgUE1FKEQwKyxEMSssRDIrLEQzaG90LSxEM2NvbGQtKQoJCVN0YXR1czogRDAgTm9T
b2Z0UnN0LSBQTUUtRW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBb
ODBdIFZlbmRvciBTcGVjaWZpYyBJbmZvcm1hdGlvbjogTGVuPTUwIDw/PgoJQ2FwYWJpbGl0aWVz
OiBbMTAwIHYxXSBWZW5kb3IgU3BlY2lmaWMgSW5mb3JtYXRpb246IElEPTAwMDAgUmV2PTAgTGVu
PTA4OCA8Pz4KCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiBTQUE3MTZ4IFRCUwoJS2VybmVsIG1vZHVs
ZXM6IHNhYTcxNnhfdGJzX2R2YgoKMDQ6MDAuMCBVU0IgY29udHJvbGxlcjogRXRyb24gVGVjaG5v
bG9neSwgSW5jLiBFSjE4OC9FSjE5OCBVU0IgMy4wIEhvc3QgQ29udHJvbGxlciAocHJvZy1pZiAz
MCBbWEhDSV0pCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9uIERldmljZSA3MDUyCglD
b250cm9sOiBJL08tIE1lbSsgQnVzTWFzdGVyKyBTcGVjQ3ljbGUtIE1lbVdJTlYtIFZHQVNub29w
LSBQYXJFcnItIFN0ZXBwaW5nLSBTRVJSLSBGYXN0QjJCLSBEaXNJTlR4KwoJU3RhdHVzOiBDYXAr
IDY2TUh6LSBVREYtIEZhc3RCMkItIFBhckVyci0gREVWU0VMPWZhc3QgPlRBYm9ydC0gPFRBYm9y
dC0gPE1BYm9ydC0gPlNFUlItIDxQRVJSLSBJTlR4LQoJTGF0ZW5jeTogMCwgQ2FjaGUgTGluZSBT
aXplOiA2NCBieXRlcwoJSW50ZXJydXB0OiBwaW4gQSByb3V0ZWQgdG8gSVJRIDU5CglSZWdpb24g
MDogTWVtb3J5IGF0IGZmNDAwMDAwICg2NC1iaXQsIG5vbi1wcmVmZXRjaGFibGUpIFtzaXplPTMy
S10KCUNhcGFiaWxpdGllczogWzUwXSBQb3dlciBNYW5hZ2VtZW50IHZlcnNpb24gMwoJCUZsYWdz
OiBQTUVDbGstIERTSS0gRDErIEQyKyBBdXhDdXJyZW50PTBtQSBQTUUoRDArLEQxKyxEMissRDNo
b3QrLEQzY29sZCspCgkJU3RhdHVzOiBEMCBOb1NvZnRSc3QrIFBNRS1FbmFibGUtIERTZWw9MCBE
U2NhbGU9MCBQTUUtCglDYXBhYmlsaXRpZXM6IFs3MF0gTVNJOiBFbmFibGUrIENvdW50PTEvNCBN
YXNrYWJsZSsgNjRiaXQrCgkJQWRkcmVzczogMDAwMDAwMDBmZWUwMTAwYyAgRGF0YTogNDAwMAoJ
CU1hc2tpbmc6IDAwMDAwMDBlICBQZW5kaW5nOiAwMDAwMDAwMAoJQ2FwYWJpbGl0aWVzOiBbYTBd
IEV4cHJlc3MgKHYyKSBFbmRwb2ludCwgTVNJIDAxCgkJRGV2Q2FwOglNYXhQYXlsb2FkIDEwMjQg
Ynl0ZXMsIFBoYW50RnVuYyAwLCBMYXRlbmN5IEwwcyA8NjRucywgTDEgPDF1cwoJCQlFeHRUYWcr
IEF0dG5CdG4tIEF0dG5JbmQtIFB3ckluZC0gUkJFKyBGTFJlc2V0KwoJCURldkN0bDoJUmVwb3J0
IGVycm9yczogQ29ycmVjdGFibGUtIE5vbi1GYXRhbC0gRmF0YWwtIFVuc3VwcG9ydGVkLQoJCQlS
bHhkT3JkLSBFeHRUYWctIFBoYW50RnVuYy0gQXV4UHdyLSBOb1Nub29wKyBGTFJlc2V0LQoJCQlN
YXhQYXlsb2FkIDEyOCBieXRlcywgTWF4UmVhZFJlcSA1MTIgYnl0ZXMKCQlEZXZTdGE6CUNvcnJF
cnItIFVuY29yckVyci0gRmF0YWxFcnItIFVuc3VwcFJlcS0gQXV4UHdyKyBUcmFuc1BlbmQtCgkJ
TG5rQ2FwOglQb3J0ICMwLCBTcGVlZCA1R1QvcywgV2lkdGggeDEsIEFTUE0gTDBzIEwxLCBMYXRl
bmN5IEwwIDwxdXMsIEwxIDw2NHVzCgkJCUNsb2NrUE0rIFN1cnByaXNlLSBMTEFjdFJlcC0gQndO
b3QtCgkJTG5rQ3RsOglBU1BNIERpc2FibGVkOyBSQ0IgNjQgYnl0ZXMgRGlzYWJsZWQtIFJldHJh
aW4tIENvbW1DbGsrCgkJCUV4dFN5bmNoLSBDbG9ja1BNLSBBdXRXaWREaXMtIEJXSW50LSBBdXRC
V0ludC0KCQlMbmtTdGE6CVNwZWVkIDVHVC9zLCBXaWR0aCB4MSwgVHJFcnItIFRyYWluLSBTbG90
Q2xrKyBETEFjdGl2ZS0gQldNZ210LSBBQldNZ210LQoJCURldkNhcDI6IENvbXBsZXRpb24gVGlt
ZW91dDogTm90IFN1cHBvcnRlZCwgVGltZW91dERpcy0sIExUUi0sIE9CRkYgTm90IFN1cHBvcnRl
ZAoJCURldkN0bDI6IENvbXBsZXRpb24gVGltZW91dDogNTB1cyB0byA1MG1zLCBUaW1lb3V0RGlz
LSwgTFRSLSwgT0JGRiBEaXNhYmxlZAoJCUxua0N0bDI6IFRhcmdldCBMaW5rIFNwZWVkOiA1R1Qv
cywgRW50ZXJDb21wbGlhbmNlLSBTcGVlZERpcy0KCQkJIFRyYW5zbWl0IE1hcmdpbjogTm9ybWFs
IE9wZXJhdGluZyBSYW5nZSwgRW50ZXJNb2RpZmllZENvbXBsaWFuY2UtIENvbXBsaWFuY2VTT1Mt
CgkJCSBDb21wbGlhbmNlIERlLWVtcGhhc2lzOiAtNmRCCgkJTG5rU3RhMjogQ3VycmVudCBEZS1l
bXBoYXNpcyBMZXZlbDogLTZkQiwgRXF1YWxpemF0aW9uQ29tcGxldGUtLCBFcXVhbGl6YXRpb25Q
aGFzZTEtCgkJCSBFcXVhbGl6YXRpb25QaGFzZTItLCBFcXVhbGl6YXRpb25QaGFzZTMtLCBMaW5r
RXF1YWxpemF0aW9uUmVxdWVzdC0KCUNhcGFiaWxpdGllczogWzEwMCB2MV0gQWR2YW5jZWQgRXJy
b3IgUmVwb3J0aW5nCgkJVUVTdGE6CURMUC0gU0RFUy0gVExQLSBGQ1AtIENtcGx0VE8tIENtcGx0
QWJydC0gVW54Q21wbHQtIFJ4T0YtIE1hbGZUTFAtIEVDUkMtIFVuc3VwUmVxLSBBQ1NWaW9sLQoJ
CVVFTXNrOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0
LSBSeE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlvbC0KCQlVRVN2cnQ6CURMUCsg
U0RFUy0gVExQLSBGQ1ArIENtcGx0VE8tIENtcGx0QWJydC0gVW54Q21wbHQtIFJ4T0YrIE1hbGZU
TFArIEVDUkMtIFVuc3VwUmVxLSBBQ1NWaW9sLQoJCUNFU3RhOglSeEVycisgQmFkVExQLSBCYWRE
TExQLSBSb2xsb3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnItCgkJQ0VNc2s6CVJ4RXJyLSBCYWRU
TFAtIEJhZERMTFAtIFJvbGxvdmVyLSBUaW1lb3V0LSBOb25GYXRhbEVycisKCQlBRVJDYXA6CUZp
cnN0IEVycm9yIFBvaW50ZXI6IDE0LCBHZW5DYXArIENHZW5Fbi0gQ2hrQ2FwKyBDaGtFbi0KCUNh
cGFiaWxpdGllczogWzE5MCB2MV0gRGV2aWNlIFNlcmlhbCBOdW1iZXIgMDEtMDEtMDEtMDEtMDEt
MDEtMDEtMDEKCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiB4aGNpX2hjZAoJS2VybmVsIG1vZHVsZXM6
IHhoY2lfaGNkCgowNTowMC4wIEV0aGVybmV0IGNvbnRyb2xsZXI6IFJlYWx0ZWsgU2VtaWNvbmR1
Y3RvciBDby4sIEx0ZC4gUlRMODExMS84MTY4Lzg0MTEgUENJIEV4cHJlc3MgR2lnYWJpdCBFdGhl
cm5ldCBDb250cm9sbGVyIChyZXYgMDYpCglTdWJzeXN0ZW06IEFTUm9jayBJbmNvcnBvcmF0aW9u
IE1vdGhlcmJvYXJkIChvbmUgb2YgbWFueSkKCUNvbnRyb2w6IEkvTysgTWVtKyBCdXNNYXN0ZXIr
IFNwZWNDeWNsZS0gTWVtV0lOVi0gVkdBU25vb3AtIFBhckVyci0gU3RlcHBpbmctIFNFUlItIEZh
c3RCMkItIERpc0lOVHgrCglTdGF0dXM6IENhcCsgNjZNSHotIFVERi0gRmFzdEIyQi0gUGFyRXJy
LSBERVZTRUw9ZmFzdCA+VEFib3J0LSA8VEFib3J0LSA8TUFib3J0LSA+U0VSUi0gPFBFUlItIElO
VHgtCglMYXRlbmN5OiAwLCBDYWNoZSBMaW5lIFNpemU6IDY0IGJ5dGVzCglJbnRlcnJ1cHQ6IHBp
biBBIHJvdXRlZCB0byBJUlEgNjEKCVJlZ2lvbiAwOiBJL08gcG9ydHMgYXQgYzAwMCBbc2l6ZT0y
NTZdCglSZWdpb24gMjogTWVtb3J5IGF0IGQwMDA0MDAwICg2NC1iaXQsIHByZWZldGNoYWJsZSkg
W3NpemU9NEtdCglSZWdpb24gNDogTWVtb3J5IGF0IGQwMDAwMDAwICg2NC1iaXQsIHByZWZldGNo
YWJsZSkgW3NpemU9MTZLXQoJQ2FwYWJpbGl0aWVzOiBbNDBdIFBvd2VyIE1hbmFnZW1lbnQgdmVy
c2lvbiAzCgkJRmxhZ3M6IFBNRUNsay0gRFNJLSBEMSsgRDIrIEF1eEN1cnJlbnQ9Mzc1bUEgUE1F
KEQwKyxEMSssRDIrLEQzaG90KyxEM2NvbGQrKQoJCVN0YXR1czogRDAgTm9Tb2Z0UnN0KyBQTUUt
RW5hYmxlLSBEU2VsPTAgRFNjYWxlPTAgUE1FLQoJQ2FwYWJpbGl0aWVzOiBbNTBdIE1TSTogRW5h
YmxlKyBDb3VudD0xLzEgTWFza2FibGUtIDY0Yml0KwoJCUFkZHJlc3M6IDAwMDAwMDAwZmVlMDQw
MGMgIERhdGE6IDQwMDAKCUNhcGFiaWxpdGllczogWzcwXSBFeHByZXNzICh2MikgRW5kcG9pbnQs
IE1TSSAwMQoJCURldkNhcDoJTWF4UGF5bG9hZCAxMjggYnl0ZXMsIFBoYW50RnVuYyAwLCBMYXRl
bmN5IEwwcyA8NTEybnMsIEwxIDw2NHVzCgkJCUV4dFRhZy0gQXR0bkJ0bi0gQXR0bkluZC0gUHdy
SW5kLSBSQkUrIEZMUmVzZXQtCgkJRGV2Q3RsOglSZXBvcnQgZXJyb3JzOiBDb3JyZWN0YWJsZS0g
Tm9uLUZhdGFsLSBGYXRhbC0gVW5zdXBwb3J0ZWQtCgkJCVJseGRPcmQtIEV4dFRhZy0gUGhhbnRG
dW5jLSBBdXhQd3ItIE5vU25vb3AtCgkJCU1heFBheWxvYWQgMTI4IGJ5dGVzLCBNYXhSZWFkUmVx
IDQwOTYgYnl0ZXMKCQlEZXZTdGE6CUNvcnJFcnItIFVuY29yckVyci0gRmF0YWxFcnItIFVuc3Vw
cFJlcS0gQXV4UHdyKyBUcmFuc1BlbmQtCgkJTG5rQ2FwOglQb3J0ICMwLCBTcGVlZCAyLjVHVC9z
LCBXaWR0aCB4MSwgQVNQTSBMMHMgTDEsIExhdGVuY3kgTDAgdW5saW1pdGVkLCBMMSA8NjR1cwoJ
CQlDbG9ja1BNKyBTdXJwcmlzZS0gTExBY3RSZXAtIEJ3Tm90LQoJCUxua0N0bDoJQVNQTSBEaXNh
YmxlZDsgUkNCIDY0IGJ5dGVzIERpc2FibGVkLSBSZXRyYWluLSBDb21tQ2xrKwoJCQlFeHRTeW5j
aC0gQ2xvY2tQTS0gQXV0V2lkRGlzLSBCV0ludC0gQXV0QldJbnQtCgkJTG5rU3RhOglTcGVlZCAy
LjVHVC9zLCBXaWR0aCB4MSwgVHJFcnItIFRyYWluLSBTbG90Q2xrKyBETEFjdGl2ZS0gQldNZ210
LSBBQldNZ210LQoJCURldkNhcDI6IENvbXBsZXRpb24gVGltZW91dDogUmFuZ2UgQUJDRCwgVGlt
ZW91dERpcyssIExUUi0sIE9CRkYgTm90IFN1cHBvcnRlZAoJCURldkN0bDI6IENvbXBsZXRpb24g
VGltZW91dDogNTB1cyB0byA1MG1zLCBUaW1lb3V0RGlzLSwgTFRSLSwgT0JGRiBEaXNhYmxlZAoJ
CUxua0N0bDI6IFRhcmdldCBMaW5rIFNwZWVkOiAyLjVHVC9zLCBFbnRlckNvbXBsaWFuY2UtIFNw
ZWVkRGlzLQoJCQkgVHJhbnNtaXQgTWFyZ2luOiBOb3JtYWwgT3BlcmF0aW5nIFJhbmdlLCBFbnRl
ck1vZGlmaWVkQ29tcGxpYW5jZS0gQ29tcGxpYW5jZVNPUy0KCQkJIENvbXBsaWFuY2UgRGUtZW1w
aGFzaXM6IC02ZEIKCQlMbmtTdGEyOiBDdXJyZW50IERlLWVtcGhhc2lzIExldmVsOiAtNmRCLCBF
cXVhbGl6YXRpb25Db21wbGV0ZS0sIEVxdWFsaXphdGlvblBoYXNlMS0KCQkJIEVxdWFsaXphdGlv
blBoYXNlMi0sIEVxdWFsaXphdGlvblBoYXNlMy0sIExpbmtFcXVhbGl6YXRpb25SZXF1ZXN0LQoJ
Q2FwYWJpbGl0aWVzOiBbYjBdIE1TSS1YOiBFbmFibGUtIENvdW50PTQgTWFza2VkLQoJCVZlY3Rv
ciB0YWJsZTogQkFSPTQgb2Zmc2V0PTAwMDAwMDAwCgkJUEJBOiBCQVI9NCBvZmZzZXQ9MDAwMDA4
MDAKCUNhcGFiaWxpdGllczogW2QwXSBWaXRhbCBQcm9kdWN0IERhdGEKCQlObyBlbmQgdGFnIGZv
dW5kCglDYXBhYmlsaXRpZXM6IFsxMDAgdjFdIEFkdmFuY2VkIEVycm9yIFJlcG9ydGluZwoJCVVF
U3RhOglETFAtIFNERVMtIFRMUC0gRkNQLSBDbXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBS
eE9GLSBNYWxmVExQLSBFQ1JDLSBVbnN1cFJlcS0gQUNTVmlvbC0KCQlVRU1zazoJRExQLSBTREVT
LSBUTFAtIEZDUC0gQ21wbHRUTy0gQ21wbHRBYnJ0LSBVbnhDbXBsdC0gUnhPRi0gTWFsZlRMUC0g
RUNSQy0gVW5zdXBSZXEtIEFDU1Zpb2wtCgkJVUVTdnJ0OglETFArIFNERVMrIFRMUC0gRkNQKyBD
bXBsdFRPLSBDbXBsdEFicnQtIFVueENtcGx0LSBSeE9GKyBNYWxmVExQKyBFQ1JDLSBVbnN1cFJl
cS0gQUNTVmlvbC0KCQlDRVN0YToJUnhFcnItIEJhZFRMUC0gQmFkRExMUC0gUm9sbG92ZXItIFRp
bWVvdXQtIE5vbkZhdGFsRXJyKwoJCUNFTXNrOglSeEVyci0gQmFkVExQLSBCYWRETExQLSBSb2xs
b3Zlci0gVGltZW91dC0gTm9uRmF0YWxFcnIrCgkJQUVSQ2FwOglGaXJzdCBFcnJvciBQb2ludGVy
OiAwMCwgR2VuQ2FwKyBDR2VuRW4tIENoa0NhcCsgQ2hrRW4tCglDYXBhYmlsaXRpZXM6IFsxNDAg
djFdIFZpcnR1YWwgQ2hhbm5lbAoJCUNhcHM6CUxQRVZDPTAgUmVmQ2xrPTEwMG5zIFBBVEVudHJ5
Qml0cz0xCgkJQXJiOglGaXhlZC0gV1JSMzItIFdSUjY0LSBXUlIxMjgtCgkJQ3RybDoJQXJiU2Vs
ZWN0PUZpeGVkCgkJU3RhdHVzOglJblByb2dyZXNzLQoJCVZDMDoJQ2FwczoJUEFUT2Zmc2V0PTAw
IE1heFRpbWVTbG90cz0xIFJlalNub29wVHJhbnMtCgkJCUFyYjoJRml4ZWQtIFdSUjMyLSBXUlI2
NC0gV1JSMTI4LSBUV1JSMTI4LSBXUlIyNTYtCgkJCUN0cmw6CUVuYWJsZSsgSUQ9MCBBcmJTZWxl
Y3Q9Rml4ZWQgVEMvVkM9MDEKCQkJU3RhdHVzOglOZWdvUGVuZGluZy0gSW5Qcm9ncmVzcy0KCUNh
cGFiaWxpdGllczogWzE2MCB2MV0gRGV2aWNlIFNlcmlhbCBOdW1iZXIgMDEtMDAtMDAtMDAtNjgt
NGMtZTAtMDAKCUtlcm5lbCBkcml2ZXIgaW4gdXNlOiByODE2OQoJS2VybmVsIG1vZHVsZXM6IHI4
MTY5Cgo=
--001a11c3e454a2907604f180eb92
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a11c3e454a2907604f180eb92--


From xen-devel-bounces@lists.xen.org Mon Feb 03 14:00:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAK50-0000wx-9y; Mon, 03 Feb 2014 14:00:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1WAK4y-0000wr-Rz
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 14:00:53 +0000
Received: from [85.158.139.211:4919] by server-11.bemta-5.messagelabs.com id
	22/DA-23886-411AFE25; Mon, 03 Feb 2014 14:00:52 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391436049!1325630!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10553 invoked from network); 3 Feb 2014 14:00:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 14:00:51 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13E0mNV030287
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 14:00:48 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13E0ltU023601
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 14:00:47 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13E0lTb008029; Mon, 3 Feb 2014 14:00:47 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 06:00:46 -0800
Date: Mon, 3 Feb 2014 15:00:42 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: James Dingwall <james.dingwall@zynstra.com>
Message-ID: <20140203140042.GD5436@olila.local.net-space.pl>
References: <52D64B87.6000400@zynstra.com> <52D69E0B.5020006@oracle.com>
	<52D6B8B6.5070302@zynstra.com> <52D7346A.3000300@oracle.com>
	<52E7E594.2050104@zynstra.com> <52E911CA.9020700@oracle.com>
	<52E91404.30602@zynstra.com>
	<20140131165654.GF23648@phenom.dumpdata.com>
	<20140203094912.GA5273@olila.local.net-space.pl>
	<52EF7B81.4080601@zynstra.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52EF7B81.4080601@zynstra.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Kernel 3.11 / 3.12 OOM killer and Xen ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 11:20:33AM +0000, James Dingwall wrote:
> Daniel Kiper wrote:

[...]

> >Hmmm... James, how do you build webkit-gtk? Just a simple "make" or "make -j"?
> >Could you confirm that webkit-gtk does not use "make -j" in any "subjobs"?
> My guest domain is Gentoo and I have MAKEOPTS="-j2" set in make.conf
> and according to the build log for webkit-gtk this is used
> unchanged:
> >>> Source configured.
> >>> Compiling source in
> /var/tmp/portage/net-libs/webkit-gtk-2.0.4/work/webkitgtk-2.0.4 ...
> make -j2
>
> I wouldn't read anything in particular into it being webkit, as I
> have seen similar from other large compiles (gcc, glibc, kdelibs, ...)

Thanks, that makes sense. I was afraid that somewhere "make -j" without
an argument had been called.
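The distinction matters because "make -j" with no number places no limit on
the number of parallel jobs, while "make -jN" caps them at N. A minimal
sketch of the difference (the Makefile and paths here are hypothetical, not
from the webkit-gtk build):

```shell
# Generate a trivial two-target Makefile; a real build tree has many more.
printf 'all: a b\na:\n\t@echo built a\nb:\n\t@echo built b\n' > /tmp/demo.mk

# Bounded parallelism: at most 2 jobs at once (what MAKEOPTS="-j2" gives).
make -f /tmp/demo.mk -j2 all

# Unbounded: 'make -j' with no argument starts every available job at
# once, which on a large source tree can exhaust a ballooned guest's
# memory and trigger the OOM killer.
# make -f /tmp/demo.mk -j all
```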

> >>>>But I think from the beginning tmem/balloon driver can't expand guest
> >>>>memory from size 'memory' to 'maxmem' automatically.
> >>>I am carrying this patch for libxl (4.3.1) because maxmem wasn't
> >>>being honoured.
> >James, what do you mean by "maxmem wasn't being honoured"?
> http://lists.xen.org/archives/html/xen-devel/2013-10/msg02228.html -
> the guest would never balloon above the value of 'memory' when
> 'maxmem' was set and was > 'memory' in the configuration file.
> There were some discussions about the correctness of this patch, but
> the only place I could see an impact of maxmem was the parsing of
> the config parameters during setup of the guest domain. IIRC the
> xl mem-max command, which changes the same parameter once the guest
> domain is running, resulted in the balloon-up behavior to maxmem
> working as expected. So the discrepancy between how xl behaves with
> maxmem in the config and the execution of xl mem-max was what I had
> noted and what this patch resolved. It would be easy to repeat those
> tests if necessary.
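The discrepancy described above can be illustrated with a hedged sketch of
the two paths (a guest config fragment plus the runtime xl commands; the
domain name and memory values are hypothetical, not taken from this thread):

```
# Hypothetical guest config: the guest boots with 2 GiB but is meant to
# be able to balloon up to 4 GiB. Whether it can reach maxmem without
# further intervention is the behavior under discussion.
memory = 2048   # initial allocation, in MiB
maxmem = 4096   # ceiling, in MiB

# Runtime path that reportedly did work as expected:
#   xl mem-max <domain> 4096   # raise the hypervisor-enforced ceiling
#   xl mem-set <domain> 4096   # ask the balloon driver to grow to it
```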

Please look for "[PATCH v4 0/4] libxl: memory management patches",
"[PATCH v2 0/2] xen/balloon: Extension and fix" and earlier related
threads. They contain an almost hammered-out and agreed solution for
this issue (you will also find an explanation for this xl behavior
there; note that xm behavior was different). However, I was not able
to finish these patches due to other stuff on my plate. This issue is
still on my todo list. If you would like to work on these patches, go
ahead. I am happy to help but I am not able to work on them right now.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:08:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKC6-00018h-In; Mon, 03 Feb 2014 14:08:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WAKC5-00018c-3H
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:08:13 +0000
Received: from [85.158.137.68:8302] by server-17.bemta-3.messagelabs.com id
	4D/40-22569-CC2AFE25; Mon, 03 Feb 2014 14:08:12 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391436491!12944425!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15769 invoked from network); 3 Feb 2014 14:08:11 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 3 Feb 2014 14:08:11 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:51011 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WAKBl-0008GV-VG; Mon, 03 Feb 2014 15:07:54 +0100
Date: Mon, 3 Feb 2014 15:08:08 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <594708521.20140203150808@eikelenboom.it>
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
In-Reply-To: <CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Monday, February 3, 2014, 2:58:37 PM, you wrote:

> lspci output attached.

> I have never managed to crash the system with debug=y, but I can provide
> a serial log captured with debug=y and an HVM domain up and running.

Does it start with debug=n, but without trying to pass through the PCI device (the graphics core of the APU?) to the HVM?


> On Mon, Feb 3, 2014 at 10:51 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 03/02/14 13:47, Vitaliy Tomin wrote:
>>> My system based on AMD APU completely crashes when trying to use HVM
>>> domUs. I've asked earlier on user list
>>> http://lists.xen.org/archives/html/xen-users/2013-11/msg00063.html but
> >>> was recommended to ask here.
> >>> I've also found the same problem described with another AMD APU here
>>> http://lists.xen.org/archives/html/xen-devel/2013-08/msg01395.html
>>>
> >>> My system crashes every time I start an HVM domU. But if I use Xen
> >>> compiled with debug info it runs stably for at least hours (not
> >>> tested for a longer run).
>>>
>>> My system is openSUSE 13.1  with Xen 4.4
>>> My hardware is
>>>
>>> ASRock  FM2A75 Pro4
>>> AMD A8-6600K APU
>>> Gigabyte Radeon 7850
>>> 8 Gb DDR3 1600Mhz
>>>
> >>> I've tested with fresh Xen 4.4 and it crashes my system, as does stable Xen 4.3.
>>>
> >>> I've set up a PCI serial console and captured the Xen log (the === ===
> >>> lines were added by myself). Xen and dom0 dmesg logs are also attached.
>>
>> Can you compile Xen with debug=y ?  This will turn on all assertions,
>> which might help narrow down the issue.
>>
>> Do you have an lspci -tv and -vv for the system?
>>
>> ~Andrew
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:09:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKD4-0001ES-1H; Mon, 03 Feb 2014 14:09:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAKD2-0001E6-2I
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:09:12 +0000
Received: from [85.158.137.68:33038] by server-9.bemta-3.messagelabs.com id
	26/13-10184-703AFE25; Mon, 03 Feb 2014 14:09:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391436549!13025363!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19122 invoked from network); 3 Feb 2014 14:09:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:09:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="99211445"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 14:09:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 09:09:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAKCy-00023L-3q;
	Mon, 03 Feb 2014 14:09:08 +0000
Message-ID: <52EFA304.4000206@citrix.com>
Date: Mon, 3 Feb 2014 14:09:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
In-Reply-To: <CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, Jan Beulich <JBeulich@suse.com>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 13:58, Vitaliy Tomin wrote:
> lspci output attached.
>
> I have never managed to crash the system with debug=y, but I can provide
> a serial log captured with debug=y and an HVM domain up and running.

That is a curious data point - it would imply that debug mode is doing
something which non-debug mode fails to do.

Could you provide the log please?

Can you explain "=== whole system crashed ===" a little more?

Given the lack of stack trace or any hint of a problem from Xen, is it a
system hang?  Does adding "watchdog" to the Xen command line cause the
failure to change?


Looking back at the debug=n serial log in combination with the PCI topology,

(XEN) AMD-Vi: No iommu for device 0000:00:00.2
(XEN) setup 0000:00:00.2 for d0 failed (-19)

Device 00:00.2 is the IOMMU itself.  I would have thought applying IOMMU
translation to the IOMMU is going to end in tears. Suravee: can you
comment on this?  Is the IOMMU expected to have an IVRS entry?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:14:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKIK-0001ji-Qr; Mon, 03 Feb 2014 14:14:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAKIJ-0001jd-Bx
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:14:39 +0000
Received: from [85.158.143.35:34172] by server-1.bemta-4.messagelabs.com id
	A0/36-31661-E44AFE25; Mon, 03 Feb 2014 14:14:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391436876!2759536!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 580 invoked from network); 3 Feb 2014 14:14:37 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 14:14:37 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13EEXqP029728
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 14:14:34 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13EEW8t002922
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 14:14:32 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13EEV5o016719; Mon, 3 Feb 2014 14:14:31 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 06:14:30 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9EF1D1BFA0B; Mon,  3 Feb 2014 09:14:29 -0500 (EST)
Date: Mon, 3 Feb 2014 09:14:29 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stanislaw Gruszka <sgruszka@redhat.com>
Message-ID: <20140203141429.GD3400@phenom.dumpdata.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
	<20140131160140.GC23648@phenom.dumpdata.com>
	<20140203101215.GA1725@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140203101215.GA1725@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, David Rientjes <rientjes@google.com>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 11:12:16AM +0100, Stanislaw Gruszka wrote:
> On Fri, Jan 31, 2014 at 11:01:40AM -0500, Konrad Rzeszutek Wilk wrote:
> > Perhaps by using 'subsys_system_register' and stick it there?
> 
> This will not call the ->resume callback, as it is only called for
> devices, so an additional dummy device is needed, for example:
> 
> struct device xap_dev = {
>         .init_name = "xen-acpi-processor-dev",
>         .bus = &xap_bus,
> };
> ...
> subsys_system_register(&xap_bus, NULL);
> device_register(&xap_dev);
> 
> But I'm not sure that is a good solution. It creates some unnecessary
> sysfs directories and files. Additionally, it can restore CPU C-states
> after some other drivers resume, which perhaps require proper C-states.

Yes.
> 
> Hence maybe adding a direct notification from the Xen core resume path
> would be a better idea (proposed patch below). Please let me know what
> you think; I'll provide whichever solution you choose to the bug
> reporters for testing.

Let me think about it for a day or so.

Thanks!
> 
> Thanks
> Stanislaw
> 
> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> index 624e8dc..96e4173 100644
> --- a/drivers/xen/manage.c
> +++ b/drivers/xen/manage.c
> @@ -13,6 +13,7 @@
>  #include <linux/freezer.h>
>  #include <linux/syscore_ops.h>
>  #include <linux/export.h>
> +#include <linux/notifier.h>
>  
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> @@ -46,6 +47,20 @@ struct suspend_info {
>  	void (*post)(int cancelled);
>  };
>  
> +static RAW_NOTIFIER_HEAD(xen_resume_notifier);
> +
> +void xen_resume_notifier_register(struct notifier_block *nb)
> +{
> +	raw_notifier_chain_register(&xen_resume_notifier, nb);
> +}
> +EXPORT_SYMBOL_GPL(xen_resume_notifier_register);
> +
> +void xen_resume_notifier_unregister(struct notifier_block *nb)
> +{
> +	raw_notifier_chain_unregister(&xen_resume_notifier, nb);
> +}
> +EXPORT_SYMBOL_GPL(xen_resume_notifier_unregister);
> +
>  #ifdef CONFIG_HIBERNATE_CALLBACKS
>  static void xen_hvm_post_suspend(int cancelled)
>  {
> @@ -152,6 +167,8 @@ static void do_suspend(void)
>  
>  	err = stop_machine(xen_suspend, &si, cpumask_of(0));
>  
> +	raw_notifier_call_chain(&xen_resume_notifier, 0, NULL);
> +
>  	dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
>  
>  	if (err) {
> diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
> index 7231859..82358d1 100644
> --- a/drivers/xen/xen-acpi-processor.c
> +++ b/drivers/xen/xen-acpi-processor.c
> @@ -27,10 +27,10 @@
>  #include <linux/init.h>
>  #include <linux/module.h>
>  #include <linux/types.h>
> -#include <linux/syscore_ops.h>
>  #include <linux/acpi.h>
>  #include <acpi/processor.h>
>  #include <xen/xen.h>
> +#include <xen/xen-ops.h>
>  #include <xen/interface/platform.h>
>  #include <asm/xen/hypercall.h>
>  
> @@ -495,14 +495,15 @@ static int xen_upload_processor_pm_data(void)
>  	return rc;
>  }
>  
> -static void xen_acpi_processor_resume(void)
> +static int xen_acpi_processor_resume(struct notifier_block *nb,
> +				     unsigned long action, void *data)
>  {
>  	bitmap_zero(acpi_ids_done, nr_acpi_bits);
> -	xen_upload_processor_pm_data();
> +	return xen_upload_processor_pm_data();
>  }
>  
> -static struct syscore_ops xap_syscore_ops = {
> -	.resume	= xen_acpi_processor_resume,
> +struct notifier_block xen_acpi_processor_resume_nb = {
> +	.notifier_call = xen_acpi_processor_resume,
>  };
>  
>  static int __init xen_acpi_processor_init(void)
> @@ -555,7 +556,7 @@ static int __init xen_acpi_processor_init(void)
>  	if (rc)
>  		goto err_unregister;
>  
> -	register_syscore_ops(&xap_syscore_ops);
> +	xen_resume_notifier_register(&xen_acpi_processor_resume_nb);
>  
>  	return 0;
>  err_unregister:
> @@ -574,7 +575,7 @@ static void __exit xen_acpi_processor_exit(void)
>  {
>  	int i;
>  
> -	unregister_syscore_ops(&xap_syscore_ops);
> +	xen_resume_notifier_unregister(&xen_acpi_processor_resume_nb);
>  	kfree(acpi_ids_done);
>  	kfree(acpi_id_present);
>  	kfree(acpi_id_cst_present);
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index fb2ea8f..6412358 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -16,6 +16,9 @@ void xen_mm_unpin_all(void);
>  void xen_timer_resume(void);
>  void xen_arch_resume(void);
>  
> +void xen_resume_notifier_register(struct notifier_block *nb);
> +void xen_resume_notifier_unregister(struct notifier_block *nb);
> +
>  int xen_setup_shutdown_event(void);
>  
>  extern unsigned long *xen_contiguous_bitmap;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKXT-0002It-Fn; Mon, 03 Feb 2014 14:30:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAKXR-0002Io-6q
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:30:17 +0000
Received: from [85.158.143.35:16785] by server-2.bemta-4.messagelabs.com id
	C8/F7-10891-8F7AFE25; Mon, 03 Feb 2014 14:30:16 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391437813!2766305!1
X-Originating-IP: [209.85.216.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12530 invoked from network); 3 Feb 2014 14:30:14 -0000
Received: from mail-qc0-f182.google.com (HELO mail-qc0-f182.google.com)
	(209.85.216.182)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:30:14 -0000
Received: by mail-qc0-f182.google.com with SMTP id c9so11063476qcz.41
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 06:30:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=mIQpO5p5ZNOVY9O13QaNNJsyZblLTrAYr03RTpi8vx4=;
	b=dfh6kvLZGDklJw19y5ZcI0JP4cBSA/RTE0D2MjbeeaIwPZOx/BgzsONQyEiND5D/4z
	1i1De6TL3cUTlTtiRN8bGUdnfl/xS7uOQYtXLGMSqAI7jkqXJRws2tOLUPm5aH7iVtfe
	IoKRUNEpQvxOxSgqkrPKGYFyTRCEp2NqOoUbYJW1jEUDNnTidRvl6jXe/9NfsLltzrE5
	npIvSfuW4w1j815Wg9MYYrG5xIq/HK9Q+hGvIel0xpF4jtgcvginuXE9xJKjmEg+d1LQ
	cuI9gN8nGGosyrphsgVcShOe8icKZc/OqFcmJpSfyq1rvSsSSkm8qvOZitzYj0vB5afj
	QV9g==
MIME-Version: 1.0
X-Received: by 10.224.14.2 with SMTP id e2mr55982494qaa.73.1391437352335; Mon,
	03 Feb 2014 06:22:32 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 06:22:32 -0800 (PST)
In-Reply-To: <CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
Date: Mon, 3 Feb 2014 23:22:32 +0900
Message-ID: <CABPT1LsY+0DPVrohc+=qwrw921pcT_Vmo9dr8nw9nAx3Gvf9ug@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7bdc8b6e1d134804f1814178
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bdc8b6e1d134804f1814178
Content-Type: text/plain; charset=ISO-8859-1

>Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?

No, it crashes even with an empty HVM domain (no OS, no disk images, no network).

>Can you explain "=== whole system crashed ===" a little more.

It means the system instantly rebooted. Black screen, no messages, no
image on screen; the next thing I see is the POST of my real hardware.

A log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.


On Mon, Feb 3, 2014 at 11:21 PM, Vitaliy Tomin <highwaystar.ru@gmail.com> wrote:
>>Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
>
> No, it crashes even with an empty HVM domain (no OS, no disk images, no network).
>
>>Can you explain "=== whole system crashed ===" a little more.
>
> It means the system instantly rebooted. Black screen, no messages, no
> image on screen; the next thing I see is the POST of my real hardware.
>
> A log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.
>
>
> On Mon, Feb 3, 2014 at 11:09 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 03/02/14 13:58, Vitaliy Tomin wrote:
>>> lspci output attached.
>>>
>>> I have never managed to crash system with debug=y, but I can provide
>>> serial log captured with debug=y and HVM domain running up.
>>
>> That is a curious data point - it would imply that debug mode is doing
>> something which non-debug mode fails to do.
>>
>> Could you provide the log please?
>>
>> Can you explain "=== whole system crashed ===" a little more.
>>
>> Given the lack of stack trace or any hint of a problem from Xen, is it a
>> system hang?  Does adding "watchdog" to the Xen command line cause the
>> failure to change?
>>
>>
>> Looking back at the debug=n serial log in combination with the PCI topology,
>>
>> (XEN) AMD-Vi: No iommu for device 0000:00:00.2
>> (XEN) setup 0000:00:00.2 for d0 failed (-19)
>>
>> Device 00:00.2 is the IOMMU itself.  I would have thought applying IOMMU
>> translation to the IOMMU is going to end in tears. Suravee; Can you
>> comment about this?  is the IOMMU expected to have an IVRS entry?
>>
>> ~Andrew

--047d7bdc8b6e1d134804f1814178
Content-Type: application/octet-stream; name="screenlog.0"
Content-Disposition: attachment; filename="screenlog.0"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7tvifl1

[base64-encoded attachment "screenlog.0": serial boot log of Xen 4.4.0_02-297.1 built with debug=y, showing AMD-Vi/IVRS parsing, IOMMU and interrupt-remapping setup, I/O page-table setup for dom0 (including the "No iommu for device 0000:00:00.2" / "setup 0000:00:00.2 for d0 failed (-19)" messages discussed above), and an openSUSE 13.1 dom0 booting normally; log truncated]
ZW4gKHh2YzApLg0NCg0NCg0NCmxpbnV4LWI1MmQgbG9naW46IChYRU4pIEFNRC1WaTogU2hhcmUg
cDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MjE5Yzk1DQooWEVOKSBBTUQtVmk6
IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUgPSAweDIwYjJkMQ0KKFhFTikg
aW8uYzoyODA6IGQyOiBiaW5kOiBtX2dzaT0xNyBnX2dzaT0zNiBkZXZpY2U9NSBpbnR4PTANCihY
RU4pIEFNRC1WaTogRGlzYWJsZTogZGV2aWNlIGlkID0gMHg4LCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4OCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjBiMmQxMDAwLCBkb21haW4gPSAyLCBw
YWdpbmcgbW9kZSA9IDQNCihYRU4pIEFNRC1WaTogUmUtYXNzaWduIDAwMDA6MDA6MDEuMCBmcm9t
IGRvbTAgdG8gZG9tMg0KKFhFTikgaW8uYzoyODA6IGQyOiBiaW5kOiBtX2dzaT0xOCBnX2dzaT00
MSBkZXZpY2U9NiBpbnR4PTENCihYRU4pIEFNRC1WaTogRGlzYWJsZTogZGV2aWNlIGlkID0gMHg5
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBh
Z2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjBi
MmQxMDAwLCBkb21haW4gPSAyLCBwYWdpbmcgbW9kZSA9IDQNCihYRU4pIEFNRC1WaTogUmUtYXNz
aWduIDAwMDA6MDA6MDEuMSBmcm9tIGRvbTAgdG8gZG9tMg0KKGQyKSBIVk0gTG9hZGVyDQooZDIp
IERldGVjdGVkIFhlbiB2NC40LjBfMDItMjk3LjENCihkMikgWGVuYnVzIHJpbmdzIEAweGZlZmZj
MDAwLCBldmVudCBjaGFubmVsIDMNCihkMikgU3lzdGVtIHJlcXVlc3RlZCBTZWFCSU9TDQooZDIp
IENQVSBzcGVlZCBpcyAzODkzIE1Ieg0KKGQyKSBSZWxvY2F0aW5nIGd1ZXN0IG1lbW9yeSBmb3Ig
bG93bWVtIE1NSU8gc3BhY2UgZGlzYWJsZWQNCihYRU4pIGlycS5jOjI3MDogRG9tMiBQQ0kgbGlu
ayAwIGNoYW5nZWQgMCAtPiA1DQooZDIpIFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1DQoo
WEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMSBjaGFuZ2VkIDAgLT4gMTANCihkMikgUENJ
LUlTQSBsaW5rIDEgcm91dGVkIHRvIElSUTEwDQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxp
bmsgMiBjaGFuZ2VkIDAgLT4gMTENCihkMikgUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTEx
DQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQ0KKGQyKSBQ
Q0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKGQyKSBwY2kgZGV2IDAxOjIgSU5URC0+SVJR
NQ0KKGQyKSBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTANCihkMikgcGNpIGRldiAwMjowIElOVEEt
PklSUTExDQooZDIpIHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1DQooZDIpIHBjaSBkZXYgMDU6MCBJ
TlRBLT5JUlExMA0KKGQyKSBwY2kgZGV2IDA2OjAgSU5UQi0+SVJRNQ0KKGQyKSBObyBSQU0gaW4g
aGlnaCBtZW1vcnk7IHNldHRpbmcgaGlnaF9tZW0gcmVzb3VyY2UgYmFzZSB0byAxMDAwMDAwMDAN
CihkMikgcGNpIGRldiAwNTowIGJhciAxMCBzaXplIDAxMDAwMDAwMDogMGUwMDAwMDA4DQooWEVO
KSBtZW1vcnlfbWFwOmFkZDogZG9tMiBnZm49ZTAwMDAgbWZuPWIwMDAwIG5yPTEwMDAwDQooZDIp
IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDIwMDAwMDA6IDBmMDAwMDAwOA0KKGQyKSBwY2kg
ZGV2IDAyOjAgYmFyIDE0IHNpemUgMDAxMDAwMDAwOiAwZjIwMDAwMDgNCihkMikgcGNpIGRldiAw
NDowIGJhciAzMCBzaXplIDAwMDA0MDAwMDogMGYzMDAwMDAwDQooWEVOKSBtZW1vcnlfbWFwOmFk
ZDogZG9tMiBnZm49ZjMwNDAgbWZuPWZmNzAwIG5yPTQwDQooZDIpIHBjaSBkZXYgMDU6MCBiYXIg
MTggc2l6ZSAwMDAwNDAwMDA6IDBmMzA0MDAwMA0KKGQyKSBwY2kgZGV2IDAzOjAgYmFyIDMwIHNp
emUgMDAwMDEwMDAwOiAwZjMwODAwMDANCihkMikgcGNpIGRldiAwNjowIGJhciAxMCBzaXplIDAw
MDAwNDAwMDogMGYzMDkwMDAwDQooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMiBnZm49ZjMwOTAg
bWZuPWZmNzQwIG5yPTQNCihkMikgcGNpIGRldiAwMzowIGJhciAxNCBzaXplIDAwMDAwMTAwMDog
MGYzMDk0MDAwDQooZDIpIHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSAwMDAwMDAxMDA6IDAwMDAw
YzAwMQ0KKGQyKSBwY2kgZGV2IDA0OjAgYmFyIDEwIHNpemUgMDAwMDAwMTAwOiAwMDAwMGMxMDEN
CihkMikgcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAwMDAwMDEwMDogMGYzMDk1MDAwDQooWEVO
KSBpb3BvcnRfbWFwOmFkZDogZG9tMiBncG9ydD1jMjAwIG1wb3J0PWYwMDAgbnI9MTAwDQooZDIp
IHBjaSBkZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwMDAxMDA6IDAwMDAwYzIwMQ0KKGQyKSBwY2kg
ZGV2IDAxOjIgYmFyIDIwIHNpemUgMDAwMDAwMDIwOiAwMDAwMGMzMDENCihkMikgcGNpIGRldiAw
MToxIGJhciAyMCBzaXplIDAwMDAwMDAxMDogMDAwMDBjMzIxDQooZDIpIE11bHRpcHJvY2Vzc29y
IGluaXRpYWxpc2F0aW9uOg0KKGQyKSAgLSBDUFUwIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQg
TVRSUnMgLi4uIHZhciBNVFJScyBbMy84XSAuLi4gZG9uZS4NCihkMikgVGVzdGluZyBIVk0gZW52
aXJvbm1lbnQ6DQooZDIpICAtIFJFUCBJTlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBh
c3NlZA0KKGQyKSAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkDQooZDIpIFBh
c3NlZCAyIG9mIDIgdGVzdHMNCihkMikgV3JpdGluZyBTTUJJT1MgdGFibGVzIC4uLg0KKGQyKSBM
b2FkaW5nIFNlYUJJT1MgLi4uDQooZDIpIENyZWF0aW5nIE1QIHRhYmxlcyAuLi4NCihkMikgTG9h
ZGluZyBBQ1BJIC4uLg0KKGQyKSB2bTg2IFRTUyBhdCBmYzAwYTAwMA0KKGQyKSBCSU9TIG1hcDoN
CihkMikgIDEwMDAwLTEwMGQzOiBTY3JhdGNoIHNwYWNlDQooZDIpICBlMDAwMC1mZmZmZjogTWFp
biBCSU9TDQooZDIpIEU4MjAgdGFibGU6DQooZDIpICBbMDBdOiAwMDAwMDAwMDowMDAwMDAwMCAt
IDAwMDAwMDAwOjAwMGEwMDAwOiBSQU0NCihkMikgIEhPTEU6IDAwMDAwMDAwOjAwMGEwMDAwIC0g
MDAwMDAwMDA6MDAwZTAwMDANCihkMikgIFswMV06IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAwMDAw
MDA6MDAxMDAwMDA6IFJFU0VSVkVEDQooZDIpICBbMDJdOiAwMDAwMDAwMDowMDEwMDAwMCAtIDAw
MDAwMDAwOjNmODAwMDAwOiBSQU0NCihkMikgIEhPTEU6IDAwMDAwMDAwOjNmODAwMDAwIC0gMDAw
MDAwMDA6ZmMwMDAwMDANCihkMikgIFswM106IDAwMDAwMDAwOmZjMDAwMDAwIC0gMDAwMDAwMDE6
MDAwMDAwMDA6IFJFU0VSVkVEDQooZDIpIEludm9raW5nIFNlYUJJT1MgLi4uDQooZDIpIFNlYUJJ
T1MgKHZlcnNpb24gPy0yMDE0MDEyOF8xNTU1NTQtYnVpbGQyMikNCihkMikgDQooZDIpIEZvdW5k
IFhlbiBoeXBlcnZpc29yIHNpZ25hdHVyZSBhdCA0MDAwMDAwMA0KKGQyKSB4ZW46IGNvcHkgZTgy
MC4uLg0KKGQyKSBSYW0gU2l6ZT0weDNmODAwMDAwICgweDAwMDAwMDAwMDAwMDAwMDAgaGlnaCkN
CihkMikgUmVsb2NhdGluZyBsb3cgZGF0YSBmcm9tIDB4MDAwZTQ0ZDAgdG8gMHgwMDBlZjc5MCAo
c2l6ZSAyMTUzKQ0KKGQyKSBSZWxvY2F0aW5nIGluaXQgZnJvbSAweDAwMGU0ZDM5IHRvIDB4M2Y3
ZTMxMTAgKHNpemUgNTI2ODcpDQooZDIpIENQVSBNaHo9Mzg5Mw0KKGQyKSBGb3VuZCAxMCBQQ0kg
ZGV2aWNlcyAobWF4IFBDSSBidXMgaXMgMDApDQooZDIpIEFsbG9jYXRlZCBYZW4gaHlwZXJjYWxs
IHBhZ2UgYXQgM2Y3ZmYwMDANCihkMikgRGV0ZWN0ZWQgWGVuIHY0LjQuMF8wMi0yOTcuMQ0KKGQy
KSB4ZW46IGNvcHkgQklPUyB0YWJsZXMuLi4NCihkMikgQ29weWluZyBTTUJJT1MgZW50cnkgcG9p
bnQgZnJvbSAweDAwMDEwMDEwIHRvIDB4MDAwZmUwMzANCihkMikgQ29weWluZyBNUFRBQkxFIGZy
b20gMHhmYzAwMTE0MC9mYzAwMTE1MCB0byAweDAwMGZkZjUwDQooZDIpIENvcHlpbmcgUElSIGZy
b20gMHgwMDAxMDAzMCB0byAweDAwMGZkZWQwDQooZDIpIENvcHlpbmcgQUNQSSBSU0RQIGZyb20g
MHgwMDAxMDBiMCB0byAweDAwMGZkZWEwDQooZDIpIFNjYW4gZm9yIFZHQSBvcHRpb24gcm9tDQoo
ZDIpIFJ1bm5pbmcgb3B0aW9uIHJvbSBhdCBjMDAwOjAwMDMNCihkMikgVHVybmluZyBvbiB2Z2Eg
dGV4dCBtb2RlIGNvbnNvbGUNCihkMikgU2VhQklPUyAodmVyc2lvbiA/LTIwMTQwMTI4XzE1NTU1
NC1idWlsZDIyKQ0KKGQyKSANCihkMikgVUhDSSBpbml0IG9uIGRldiAwMDowMS4yIChpbz1jMzAw
KQ0KKGQyKSBGb3VuZCAwIGxwdCBwb3J0cw0KKGQyKSBGb3VuZCAxIHNlcmlhbCBwb3J0cw0KKGQy
KSBBVEEgY29udHJvbGxlciAxIGF0IDFmMC8zZjQvYzMyMCAoaXJxIDE0IGRldiA5KQ0KKGQyKSBB
VEEgY29udHJvbGxlciAyIGF0IDE3MC8zNzQvYzMyOCAoaXJxIDE1IGRldiA5KQ0KKGQyKSBhdGEw
LTA6IFFFTVUgSEFSRERJU0sgQVRBLTcgSGFyZC1EaXNrICgyNTYwIE1pQnl0ZXMpDQooZDIpIFNl
YXJjaGluZyBib290b3JkZXIgZm9yOiAvcGNpQGkwY2Y4LypAMSwxL2RyaXZlQDAvZGlza0AwDQoo
ZDIpIFBTMiBrZXlib2FyZCBpbml0aWFsaXplZA0KKGQyKSBBbGwgdGhyZWFkcyBjb21wbGV0ZS4N
CihkMikgU2NhbiBmb3Igb3B0aW9uIHJvbXMNCihkMikgUnVubmluZyBvcHRpb24gcm9tIGF0IGM5
MDA6MDAwMw0KKGQyKSBwbW0gY2FsbCBhcmcxPTENCihkMikgcG1tIGNhbGwgYXJnMT0wDQooZDIp
IHBtbSBjYWxsIGFyZzE9MQ0KKGQyKSBwbW0gY2FsbCBhcmcxPTANCihkMikgU2VhcmNoaW5nIGJv
b3RvcmRlciBmb3I6IC9wY2lAaTBjZjgvKkA0DQooZDIpIFByZXNzIEYxMiBmb3IgYm9vdCBtZW51
Lg0KKGQyKSANCihkMikgZHJpdmUgMHgwMDBmZGU1MDogUENIUz01MjAxLzE2LzYzIHRyYW5zbGF0
aW9uPWxhcmdlIExDSFM9NjUwLzEyOC82MyBzPTUyNDI4ODANCihkMikgU3BhY2UgYXZhaWxhYmxl
IGZvciBVTUI6IDAwMGNhMDAwLTAwMGVmMDAwDQooZDIpIFJldHVybmVkIDYxNDQwIGJ5dGVzIG9m
IFpvbmVIaWdoDQooZDIpIGU4MjAgbWFwIGhhcyA2IGl0ZW1zOg0KKGQyKSAgIDA6IDAwMDAwMDAw
MDAwMDAwMDAgLSAwMDAwMDAwMDAwMDlmYzAwID0gMSBSQU0NCihkMikgICAxOiAwMDAwMDAwMDAw
MDlmYzAwIC0gMDAwMDAwMDAwMDBhMDAwMCA9IDIgUkVTRVJWRUQNCihkMikgICAyOiAwMDAwMDAw
MDAwMGYwMDAwIC0gMDAwMDAwMDAwMDEwMDAwMCA9IDIgUkVTRVJWRUQNCihkMikgICAzOiAwMDAw
MDAwMDAwMTAwMDAwIC0gMDAwMDAwMDAzZjdmZjAwMCA9IDEgUkFNDQooZDIpICAgNDogMDAwMDAw
MDAzZjdmZjAwMCAtIDAwMDAwMDAwM2Y4MDAwMDAgPSAyIFJFU0VSVkVEDQooZDIpICAgNTogMDAw
MDAwMDBmYzAwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgPSAyIFJFU0VSVkVEDQooZDIpIGVudGVy
IGhhbmRsZV8xOToNCihkMikgICBOVUxMDQooZDIpIEJvb3RpbmcgZnJvbSBIYXJkIERpc2suLi4N
CihkMikgQm9vdGluZyBmcm9tIDAwMDA6N2MwMA0KKFhFTikgc3RkdmdhLmM6MTUwOmQyIGVudGVy
aW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKGQyKSBwbnAgY2FsbCBhcmcxPTANCihYRU4p
IHN0ZHZnYS5jOjE1NDpkMiBsZWF2aW5nIHN0ZHZnYQ0KKFhFTikgc3RkdmdhLmM6MTUwOmQyIGVu
dGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikgaXJxLmM6MjcwOiBEb20yIFBD
SSBsaW5rIDAgY2hhbmdlZCA1IC0+IDANCihYRU4pIGlycS5jOjI3MDogRG9tMiBQQ0kgbGluayAx
IGNoYW5nZWQgMTAgLT4gMA0KKFhFTikgaXJxLmM6MjcwOiBEb20yIFBDSSBsaW5rIDIgY2hhbmdl
ZCAxMSAtPiAwDQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMyBjaGFuZ2VkIDUgLT4g
MA0KKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTIgZ2ZuPWUwMDAwIG1mbj1iMDAwMCBucj0x
MDAwMA0KKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTIgZ2ZuPWYzMDQwIG1mbj1mZjcwMCBu
cj00MA0KKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTIgZ3BvcnQ9YzIwMCBtcG9ydD1mMDAw
IG5yPTEwMA0KKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWUwMDAwIG1mbj1iMDAwMCBu
cj0xMDAwMA0KKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWYzMDQwIG1mbj1mZjcwMCBu
cj00MA0KKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTIgZ3BvcnQ9YzIwMCBtcG9ydD1mMDAwIG5y
PTEwMA0KKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTIgZ2ZuPWYzMDkwIG1mbj1mZjc0MCBu
cj00DQooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMiBnZm49ZjMwOTAgbWZuPWZmNzQwIG5yPTQN
CihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20yIGdmbj1mMzA5MCBtZm49ZmY3NDAgbnI9NA0K
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWYzMDkwIG1mbj1mZjc0MCBucj00DQooWEVO
KSBzdGR2Z2EuYzoxNTQ6ZDIgbGVhdmluZyBzdGR2Z2ENCg==
--047d7bdc8b6e1d134804f1814178
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bdc8b6e1d134804f1814178--


From xen-devel-bounces@lists.xen.org Mon Feb 03 14:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKXT-0002It-Fn; Mon, 03 Feb 2014 14:30:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAKXR-0002Io-6q
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:30:17 +0000
Received: from [85.158.143.35:16785] by server-2.bemta-4.messagelabs.com id
	C8/F7-10891-8F7AFE25; Mon, 03 Feb 2014 14:30:16 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391437813!2766305!1
X-Originating-IP: [209.85.216.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12530 invoked from network); 3 Feb 2014 14:30:14 -0000
Received: from mail-qc0-f182.google.com (HELO mail-qc0-f182.google.com)
	(209.85.216.182)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:30:14 -0000
Received: by mail-qc0-f182.google.com with SMTP id c9so11063476qcz.41
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 06:30:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=mIQpO5p5ZNOVY9O13QaNNJsyZblLTrAYr03RTpi8vx4=;
	b=dfh6kvLZGDklJw19y5ZcI0JP4cBSA/RTE0D2MjbeeaIwPZOx/BgzsONQyEiND5D/4z
	1i1De6TL3cUTlTtiRN8bGUdnfl/xS7uOQYtXLGMSqAI7jkqXJRws2tOLUPm5aH7iVtfe
	IoKRUNEpQvxOxSgqkrPKGYFyTRCEp2NqOoUbYJW1jEUDNnTidRvl6jXe/9NfsLltzrE5
	npIvSfuW4w1j815Wg9MYYrG5xIq/HK9Q+hGvIel0xpF4jtgcvginuXE9xJKjmEg+d1LQ
	cuI9gN8nGGosyrphsgVcShOe8icKZc/OqFcmJpSfyq1rvSsSSkm8qvOZitzYj0vB5afj
	QV9g==
MIME-Version: 1.0
X-Received: by 10.224.14.2 with SMTP id e2mr55982494qaa.73.1391437352335; Mon,
	03 Feb 2014 06:22:32 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 06:22:32 -0800 (PST)
In-Reply-To: <CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
Date: Mon, 3 Feb 2014 23:22:32 +0900
Message-ID: <CABPT1LsY+0DPVrohc+=qwrw921pcT_Vmo9dr8nw9nAx3Gvf9ug@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7bdc8b6e1d134804f1814178
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bdc8b6e1d134804f1814178
Content-Type: text/plain; charset=ISO-8859-1

>Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?

No, it crashes even with an empty HVM domain (no OS, no disk images, no network).

>Can you explain "=== whole system crashed ===" a little more.

It means the system instantly rebooted: black screen, no messages, no
image on screen; the next thing I see is the POST of my real hardware.

A log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.


On Mon, Feb 3, 2014 at 11:21 PM, Vitaliy Tomin <highwaystar.ru@gmail.com> wrote:
>>Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
>
> No, it crashes even with an empty HVM domain (no OS, no disk images, no network).
>
>>Can you explain "=== whole system crashed ===" a little more.
>
> It means the system instantly rebooted: black screen, no messages, no
> image on screen; the next thing I see is the POST of my real hardware.
>
> A log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.
>
>
> On Mon, Feb 3, 2014 at 11:09 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 03/02/14 13:58, Vitaliy Tomin wrote:
>>> lspci output attached.
>>>
>>> I have never managed to crash system with debug=y, but I can provide
>>> serial log captured with debug=y and HVM domain running up.
>>
>> That is a curious data point - it would imply that debug mode is doing
>> something which non-debug mode fails to do.
>>
>> Could you provide the log please?
>>
>> Can you explain "=== whole system crashed ===" a little more.
>>
>> Given the lack of stack trace or any hint of a problem from Xen, is it a
>> system hang?  Does adding "watchdog" to the Xen command line cause the
>> failure to change?
>>
>>
>> Looking back at the debug=n serial log in combination with the PCI topology,
>>
>> (XEN) AMD-Vi: No iommu for device 0000:00:00.2
>> (XEN) setup 0000:00:00.2 for d0 failed (-19)
>>
>> Device 00:00.2 is the IOMMU itself.  I would have thought applying IOMMU
>> translation to the IOMMU is going to end in tears. Suravee; Can you
>> comment about this?  is the IOMMU expected to have an IVRS entry?
>>
>> ~Andrew

--047d7bdc8b6e1d134804f1814178
Content-Type: application/octet-stream; name="screenlog.0"
Content-Disposition: attachment; filename="screenlog.0"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7tvifl1

IFhlbiA0LjQuMF8wMi0yOTcuMQ0KKFhFTikgWGVuIHZlcnNpb24gNC40LjBfMDItMjk3LjEgKGFi
dWlsZEApIChnY2MgKFNVU0UgTGludXgpIDQuOC4xIDIwMTMwOTA5IFtnY2MtNF84LWJyYW5jaCBy
ZXZpc2lvbiAyMDIzODhdKSBkZWJ1Zz15IFR1ZSBKYW4gMjggMTY6MDY6MDcgVVRDIDIwMTQNCihY
RU4pIExhdGVzdCBDaGFuZ2VTZXQ6IA0KKFhFTikgQm9vdGxvYWRlcjogR1JVQjIgMi4wMA0KKFhF
TikgQ29tbWFuZCBsaW5lOiBsb2dsdmw9YWxsIGlvbW11PWRlYnVnLHZlcmJvc2UgYXBpY192ZXJi
b3NpdHk9ZGVidWcgY29uc29sZT1jb20xIGNvbTE9MTE1MjAwLDhuMSxwY2kNCihYRU4pIFZpZGVv
IGluZm9ybWF0aW9uOg0KKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNg0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRz
DQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMN
CihYRU4pICBGb3VuZCA0IEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVOKSBYZW4tZTgy
MCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1
c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZTgwMCAtIDAwMDAwMDAwMDAwYTAwMDAgKHJlc2Vy
dmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZl
ZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQ0K
KFhFTikgIDAwMDAwMDAwOGQ2OGIwMDAgLSAwMDAwMDAwMDhkZDBhMDAwIChyZXNlcnZlZCkNCihY
RU4pICAwMDAwMDAwMDhkZDBhMDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpDQooWEVO
KSAgMDAwMDAwMDA4ZTA1YTAwMCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQ0KKFhFTikg
IDAwMDAwMDAwOGVhNDUwMDAgLSAwMDAwMDAwMDhlYTQ2MDAwICh1c2FibGUpDQooWEVOKSAgMDAw
MDAwMDA4ZWE0NjAwMCAtIDAwMDAwMDAwOGVjNGMwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAw
MDAwOGVjNGMwMDAgLSAwMDAwMDAwMDhmMDY0MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4
ZjA2NDAwMCAtIDAwMDAwMDAwOGY3ZjMwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwOGY3
ZjMwMDAgLSAwMDAwMDAwMDhmODAwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBmZWMwMDAw
MCAtIDAwMDAwMDAwZmVjMDEwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMTAwMDAg
LSAwMDAwMDAwMGZlYzExMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0g
MDAwMDAwMDBmZWQwMTAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAw
MDAwMDAwZmVkOTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmY4MDAwMDAgLSAwMDAw
MDAwMTAwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAw
MDI1MDAwMDAwMCAodXNhYmxlKQ0KKFhFTikgQUNQSTogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIg
QUxBU0tBKQ0KKFhFTikgQUNQSTogWFNEVCA4RTA0QTA3OCwgMDA3NCAocjEgQUxBU0tBICAgIEEg
TSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAw
MEY0IChyNCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFD
UEkgV2FybmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25hbCBmaWVsZCAiUG0yQ29udHJvbEJsb2Nr
IiBoYXMgemVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEy
Nl0NCihYRU4pIEFDUEk6IERTRFQgOEUwNEExODgsIDVGOUUgKHIyIEFMQVNLQSAgICBBIE0gSSAg
ICAgICAgMCBJTlRMIDIwMDUxMTE3KQ0KKFhFTikgQUNQSTogRkFDUyA4RTA1MkU4MCwgMDA0MA0K
KFhFTikgQUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMgQUxBU0tBICAgIEEgTSBJICAxMDcy
MDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGUERUIDhFMDUwMjk4LCAwMDQ0IChyMSBB
TEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEk6IE1DRkcg
OEUwNTAyRTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgIDEwMDEz
KQ0KKFhFTikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEgQUxBU0tBIE9FTUFBRlQgICAx
MDcyMDA5IE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBIUEVUIDhFMDUwNDA4LCAwMDM4IChy
MSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAgICAgNSkNCihYRU4pIEFDUEk6IElW
UlMgOEUwNTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAg
ICAwKQ0KKFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEgICAgQU1EIEFOTkFQVVJO
ICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwRjEwLCAwNEI3
IChyMiAgICBBTUQgQU5OQVBVUk4gICAgICAgIDEgTVNGVCAgNDAwMDAwMCkNCihYRU4pIEFDUEk6
IENSQVQgOEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAg
ICAgICAxKQ0KKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0IpDQooWEVOKSBObyBO
VU1BIGNvbmZpZ3VyYXRpb24gZm91bmQNCihYRU4pIEZha2luZyBhIG5vZGUgYXQgMDAwMDAwMDAw
MDAwMDAwMC0wMDAwMDAwMjUwMDAwMDAwDQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZA0K
KFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZkOTAwDQooWEVOKSBETUkgMi43IHByZXNl
bnQuDQooWEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJw0KKFhFTikgVXNpbmcgQVBJQyBk
cml2ZXIgZGVmYXVsdA0KKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNCihYRU4p
IEFDUEk6IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdDQooWEVO
KSBBQ1BJOiAzMi82NFggRkFDUyBhZGRyZXNzIG1pc21hdGNoIGluIEZBRFQgLSA4ZTA1MmU4MC8w
MDAwMDAwMDAwMDAwMDAwLCB1c2luZyAzMg0KKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVw
X3ZlY1s4ZTA1MmU4Y10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lk
WzB4MTBdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYN
CihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQp
DQooWEVOKSBQcm9jZXNzb3IgIzE3IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNz
b3IgIzE4IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MDRdIGxhcGljX2lkWzB4MTNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElD
IHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVk
Z2UgbGludFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA1XSBhZGRyZXNzWzB4ZmVj
MDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24g
MzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZS
IChidXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVOKSBB
Q1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3Zl
cnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFibGlu
ZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMSBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQRVQg
aWQ6IDB4MTAyMjgyMTAgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgRVJTVCB0YWJsZSB3YXMgbm90
IGZvdW5kDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5m
b3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykNCihY
RU4pIE5SX0NQVVM6NTEyIG5yX2NwdW1hc2tfYml0czo2NA0KKFhFTikgbWFwcGVkIEFQSUMgdG8g
ZmZmZjgyY2ZmZmJmYjAwMCAoZmVlMDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4
MmNmZmZiZmEwMDAgKGZlYzAwMDAwKQ0KKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCA3NjAgTVNJ
L01TSS1YDQooWEVOKSBVc2luZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVk
aXQpDQooWEVOKSBEZXRlY3RlZCAzODkzLjA0MyBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGlu
ZyBtZW1vcnkgc2hhcmluZy4NCihYRU4pIHhzdGF0ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAw
eDNjMCBhbmQgc3RhdGVzOiAweDQwMDAwMDAwMDAwMDAwMDcNCihYRU4pIEFNRCBGYW0xNWggbWFj
aGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRp
b24gMDogYmFzZSBlMDAwMDAwMCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJ
OiBOb3QgdXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZp
OiBGb3VuZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkg
VGFibGU6DQooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVu
Z3RoIDB4NzANCihYRU4pIEFNRC1WaTogIFJldmlzaW9uIDB4Mg0KKFhFTikgQU1ELVZpOiAgQ2hl
Y2tTdW0gMHhlOA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRA0KKFhFTikgQU1ELVZpOiAgT0VN
X1RhYmxlX0lkIEFOTkFQVVJODQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgxDQooWEVO
KSBBTUQtVmk6ICBDcmVhdG9yX0lkIEFNRCANCihYRU4pIEFNRC1WaTogIENyZWF0b3JfUmV2aXNp
b24gMA0KKFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxhZ3MgMHhmZSBsZW4g
MHg0MCBpZCAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlk
IDB4OCBmbGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OCAtPiAweGZmZmUN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAweDIwMCBmbGFn
cyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZmIGFsaWFzIDB4
YTQNCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMCBpZCAwIGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDIgaGFu
ZGxlIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZs
YWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0
eSAweDEgaGFuZGxlIDB4NQ0KKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJlczoN
CihYRU4pICAtIFByZWZldGNoIFBhZ2VzIENvbW1hbmQNCihYRU4pICAtIFBlcmlwaGVyYWwgUGFn
ZSBTZXJ2aWNlIFJlcXVlc3QNCihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uDQooWEVOKSAgLSBJ
bnZhbGlkYXRlIEFsbCBDb21tYW5kDQooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4NCihY
RU4pIEFNRC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4NCihYRU4pIEFNRC1WaTogSU9N
TVUgMCBFbmFibGVkLg0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAt
IERvbTAgbW9kZTogUmVsYXhlZA0KKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkDQoo
WEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwDQooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgw
MDUwMDEwDQooWEVOKSBHZXR0aW5nIElEOiAxMDAwMDAwMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3
MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUj
MA0KKFhFTikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzDQooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBt
ZXRob2QNCihYRU4pIGluaXQgSU9fQVBJQyBJUlFzDQooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBp
bikgNS0wLCA1LTE2LCA1LTE3LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5v
dCBjb25uZWN0ZWQuDQooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBh
cGljMj0tMSBwaW4yPS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhF
TikgbnVtYmVyIG9mIElPLUFQSUMgIzUgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIHRlc3RpbmcgdGhl
IElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLg0KKFhFTikgSU8gQVBJQyAjNS4uLi4uLg0K
KFhFTikgLi4uLiByZWdpc3RlciAjMDA6IDA1MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgIDogcGh5
c2ljYWwgQVBJQyBpZDogMDUNCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwDQoo
WEVOKSAuLi4uLi4uICAgIDogTFRTICAgICAgICAgIDogMA0KKFhFTikgLi4uLiByZWdpc3RlciAj
MDE6IDAwMTc4MDIxDQooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVz
OiAwMDE3DQooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMQ0KKFhFTikgLi4u
Li4uLiAgICAgOiBJTyBBUElDIHZlcnNpb246IDAwMjENCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAy
OiAwNTAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICAgOiBhcmJpdHJhdGlvbjogMDUNCihYRU4pIC4u
Li4gcmVnaXN0ZXIgIzAzOiAwNTAxODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBCb290IERUICAg
IDogMQ0KKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBo
eSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAw
MCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAwMSAwMDEg
MDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMwDQooWEVOKSAgMDIgMDAxIDAx
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMA0KKFhFTikgIDAzIDAwMSAwMSAg
MCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgNCihYRU4pICAwNCAwMDEgMDEgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDUgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA2IDAwMSAwMSAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwNyAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDggMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA2MA0KKFhFTikgIDA5IDAwMSAwMSAgMSAgICAxICAgIDAg
ICAxICAgMCAgICAxICAgIDAgICAgMDANCihYRU4pICAwYSAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIEYxDQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA3MA0KKFhFTikgIDBjIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgNzgNCihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDg4DQooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA5MA0KKFhFTikgIDBmIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgOTgNCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAxICAgIDMwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MSAgICAzMA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEg
ICAgMzANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzAN
CihYRU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQoo
WEVOKSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhF
TikgVXNpbmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nDQooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdz
Og0KKFhFTikgSVJRMjQwIC0+IDA6Mg0KKFhFTikgSVJRNDggLT4gMDoxDQooWEVOKSBJUlE1NiAt
PiAwOjMNCihYRU4pIElSUTY0IC0+IDA6NA0KKFhFTikgSVJRNzIgLT4gMDo1DQooWEVOKSBJUlE4
MCAtPiAwOjYNCihYRU4pIElSUTg4IC0+IDA6Nw0KKFhFTikgSVJROTYgLT4gMDo4DQooWEVOKSBJ
UlExMDQgLT4gMDo5DQooWEVOKSBJUlEyNDEgLT4gMDoxMA0KKFhFTikgSVJRMTEyIC0+IDA6MTEN
CihYRU4pIElSUTEyMCAtPiAwOjEyDQooWEVOKSBJUlExMzYgLT4gMDoxMw0KKFhFTikgSVJRMTQ0
IC0+IDA6MTQNCihYRU4pIElSUTE1MiAtPiAwOjE1DQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4gZG9uZS4NCihYRU4pIFVzaW5nIGxvY2FsIEFQSUMgdGltZXIgaW50
ZXJydXB0cy4NCihYRU4pIGNhbGlicmF0aW5nIEFQSUMgdGltZXIgLi4uDQooWEVOKSAuLi4uLiBD
UFUgY2xvY2sgc3BlZWQgaXMgMzg5Mi45MjA2IE1Iei4NCihYRU4pIC4uLi4uIGhvc3QgYnVzIGNs
b2NrIHNwZWVkIGlzIDk5LjgxODQgTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM3
DQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVO
KSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdl
IFRhYmxlcyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxp
c2F0aW9uDQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSAgLSBWTUNC
IENsZWFuIEJpdHMNCihYRU4pICAtIERlY29kZUFzc2lzdHMNCihYRU4pICAtIFBhdXNlLUludGVy
Y2VwdCBGaWx0ZXINCihYRU4pICAtIFRTQyBSYXRlIE1TUg0KKFhFTikgSFZNOiBTVk0gZW5hYmxl
ZA0KKFhFTikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihY
RU4pIEhWTTogSEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIEhWTTogUFZIIG1v
ZGUgbm90IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtDQooWEVOKSBtYXNrZWQgRXh0SU5UIG9u
IENQVSMxDQooWEVOKSBtaWNyb2NvZGU6IENQVTEgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9
MHg2MDAxMTE5DQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVOKSBtaWNyb2NvZGU6
IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MA0KKFhFTikgbWFza2VkIEV4dElOVCBv
biBDUFUjMw0KKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lk
PTANCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzDQooWEVOKSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0K
KFhFTikgTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZyZXF1ZW5j
eQ0KKFhFTikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBzdGFydGVk
Lg0KKFhFTikgbXRycjogeW91ciBDUFVzIGhhZCBpbmNvbnNpc3RlbnQgdmFyaWFibGUgTVRSUiBz
ZXR0aW5ncw0KKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVwIGFs
bCBDUFVzLg0KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uDQooWEVOKSAqKiog
TE9BRElORyBET01BSU4gMCAqKioNCihYRU4pIGVsZl9wYXJzZV9iaW5hcnk6IHBoZHI6IHBhZGRy
PTB4MjAwMCBtZW1zej0weDhmMzAwMA0KKFhFTikgZWxmX3BhcnNlX2JpbmFyeTogcGhkcjogcGFk
ZHI9MHg4ZjUwMDAgbWVtc3o9MHg4YjBmMA0KKFhFTikgZWxmX3BhcnNlX2JpbmFyeTogcGhkcjog
cGFkZHI9MHg5ODEwMDAgbWVtc3o9MHhiYmMwDQooWEVOKSBlbGZfcGFyc2VfYmluYXJ5OiBwaGRy
OiBwYWRkcj0weDk4ZDAwMCBtZW1zej0weDI3ODAwMA0KKFhFTikgZWxmX3BhcnNlX2JpbmFyeTog
bWVtb3J5OiAweDIwMDAgLT4gMHhjMDUwMDANCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogR1VF
U1RfT1MgPSAibGludXgiDQooWEVOKSBlbGZfeGVuX3BhcnNlX25vdGU6IEdVRVNUX1ZFUlNJT04g
PSAiMi42Ig0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBYRU5fVkVSU0lPTiA9ICJ4ZW4tMy4w
Ig0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBWSVJUX0JBU0UgPSAweGZmZmZmZmZmODAwMDAw
MDANCihYRU4pIGVsZl94ZW5fcGFyc2Vfbm90ZTogUEFERFJfT0ZGU0VUID0gMHgwDQooWEVOKSBl
bGZfeGVuX3BhcnNlX25vdGU6IEVOVFJZID0gMHhmZmZmZmZmZjgwMDAyMDAwDQooWEVOKSBlbGZf
eGVuX3BhcnNlX25vdGU6IEhZUEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZmZjgwMDAzMDAwDQooWEVO
KSBlbGZfeGVuX3BhcnNlX25vdGU6IHVua25vd24geGVuIGVsZiBub3RlICgweGQpDQooWEVOKSBl
bGZfeGVuX3BhcnNlX25vdGU6IE1PRF9TVEFSVF9QRk4gPSAweDENCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogSU5JVF9QMk0gPSAweGZmZmZlYTAwMDAwMDAwMDANCihYRU4pIGVsZl94ZW5fcGFy
c2Vfbm90ZTogRkVBVFVSRVMgPSAid3JpdGFibGVfcGFnZV90YWJsZXN8d3JpdGFibGVfZGVzY3Jp
cHRvcl90YWJsZXN8YXV0b190cmFuc2xhdGVkX3BoeXNtYXB8c3VwZXJ2aXNvcl9tb2RlX2tlcm5l
bHxkb20wIg0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBTVVBQT1JURURfRkVBVFVSRVMgPSAw
eDgwZg0KKFhFTikgZWxmX3hlbl9wYXJzZV9ub3RlOiBMT0FERVIgPSAiZ2VuZXJpYyINCihYRU4p
IGVsZl94ZW5fcGFyc2Vfbm90ZTogU1VTUEVORF9DQU5DRUwgPSAweDENCihYRU4pIGVsZl94ZW5f
YWRkcl9jYWxjX2NoZWNrOiBhZGRyZXNzZXM6DQooWEVOKSAgICAgdmlydF9iYXNlICAgICAgICA9
IDB4ZmZmZmZmZmY4MDAwMDAwMA0KKFhFTikgICAgIGVsZl9wYWRkcl9vZmZzZXQgPSAweDANCihY
RU4pICAgICB2aXJ0X29mZnNldCAgICAgID0gMHhmZmZmZmZmZjgwMDAwMDAwDQooWEVOKSAgICAg
dmlydF9rc3RhcnQgICAgICA9IDB4ZmZmZmZmZmY4MDAwMjAwMA0KKFhFTikgICAgIHZpcnRfa2Vu
ZCAgICAgICAgPSAweGZmZmZmZmZmODBjMDUwMDANCihYRU4pICAgICB2aXJ0X2VudHJ5ICAgICAg
ID0gMHhmZmZmZmZmZjgwMDAyMDAwDQooWEVOKSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4ZmZm
ZmVhMDAwMDAwMDAwMA0KKFhFTikgIFhlbiAga2VybmVsOiA2NC1iaXQsIGxzYiwgY29tcGF0MzIN
CihYRU4pICBEb20wIGtlcm5lbDogNjQtYml0LCBsc2IsIHBhZGRyIDB4MjAwMCAtPiAweGMwNTAw
MA0KKFhFTikgUEhZU0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikgIERvbTAgYWxsb2Mu
OiAgIDAwMDAwMDAyMjMwMDAwMDAtPjAwMDAwMDAyMjQwMDAwMDAgKDE3NDE5MTggcGFnZXMgdG8g
YmUgYWxsb2NhdGVkKQ0KKFhFTikgIEluaXQuIHJhbWRpc2s6IDAwMDAwMDAyNGNmNzUwMDAtPjAw
MDAwMDAyNGZmZmZlMDANCihYRU4pIFZJUlRVQUwgTUVNT1JZIEFSUkFOR0VNRU5UOg0KKFhFTikg
IExvYWRlZCBrZXJuZWw6IGZmZmZmZmZmODAwMDIwMDAtPmZmZmZmZmZmODBjMDUwMDANCihYRU4p
ICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwDQooWEVO
KSAgUGh5cy1NYWNoIG1hcDogZmZmZmVhMDAwMDAwMDAwMC0+ZmZmZmVhMDAwMGQ2YTc0OA0KKFhF
TikgIFN0YXJ0IGluZm86ICAgIGZmZmZmZmZmODBjMDUwMDAtPmZmZmZmZmZmODBjMDU0YjQNCihY
RU4pICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgwYzA2MDAwLT5mZmZmZmZmZjgwYzExMDAwDQoo
WEVOKSAgQm9vdCBzdGFjazogICAgZmZmZmZmZmY4MGMxMTAwMC0+ZmZmZmZmZmY4MGMxMjAwMA0K
KFhFTikgIFRPVEFMOiAgICAgICAgIGZmZmZmZmZmODAwMDAwMDAtPmZmZmZmZmZmODEwMDAwMDAN
CihYRU4pICBFTlRSWSBBRERSRVNTOiBmZmZmZmZmZjgwMDAyMDAwDQooWEVOKSBEb20wIGhhcyBt
YXhpbXVtIDQgVkNQVXMNCihYRU4pIGVsZl9sb2FkX2JpbmFyeTogcGhkciAwIGF0IDB4ZmZmZmZm
ZmY4MDAwMjAwMCAtPiAweGZmZmZmZmZmODA4ZjUwMDANCihYRU4pIGVsZl9sb2FkX2JpbmFyeTog
cGhkciAxIGF0IDB4ZmZmZmZmZmY4MDhmNTAwMCAtPiAweGZmZmZmZmZmODA5ODAwZjANCihYRU4p
IGVsZl9sb2FkX2JpbmFyeTogcGhkciAyIGF0IDB4ZmZmZmZmZmY4MDk4MTAwMCAtPiAweGZmZmZm
ZmZmODA5OGNiYzANCihYRU4pIGVsZl9sb2FkX2JpbmFyeTogcGhkciAzIGF0IDB4ZmZmZmZmZmY4
MDk4ZDAwMCAtPiAweGZmZmZmZmZmODBhMDQwMDANCihYRU4pIEFNRC1WaTogU2tpcHBpbmcgaG9z
dCBicmlkZ2UgMDAwMDowMDowMC4wDQooWEVOKSBBTUQtVmk6IE5vIGlvbW11IGZvciBkZXZpY2Ug
MDAwMDowMDowMC4yDQooWEVOKSBzZXR1cCAwMDAwOjAwOjAwLjIgZm9yIGQwIGZhaWxlZCAoLTE5
KQ0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4LCB0
eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0g
MHg5LCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHgxMCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI1MTNiMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4ODAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNTEzYjAwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweDgxLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjUx
M2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg4OCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9
IDB4MjI1MTNiMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTAsIHR5cGUgPSAweDcsIHJvb3Qg
dGFibGUgPSAweDIyNTEzYjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkyLCB0eXBlID0gMHg3
LCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5OCwgdHlw
ZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI1MTNiMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4
OWEsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNTEzYjAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweGEwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHhhMSwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI1MTNiMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4YTMsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNTEz
YjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE0LCB0eXBlID0gMHg1LCByb290IHRhYmxlID0g
MHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNSwgdHlwZSA9IDB4Nywgcm9vdCB0
YWJsZSA9IDB4MjI1MTNiMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTgsIHR5cGUgPSAweDIs
IHJvb3QgdGFibGUgPSAweDIyNTEzYjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGFhLCB0eXBl
ID0gMHgyLCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhh
YiwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4MjI1MTNiMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YzAsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDIyNTEzYjAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGMxLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhjMiwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI1MTNi
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzMsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAw
eDIyNTEzYjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGM0LCB0eXBlID0gMHg2LCByb290IHRh
YmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNSwgdHlwZSA9IDB4Niwg
cm9vdCB0YWJsZSA9IDB4MjI1MTNiMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihY
RU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAwLCB0eXBl
ID0gMHgxLCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgx
MDEsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNTEzYjAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweDIzMCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI1MTNiMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4MjM4LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjUxM2IwMDAs
IGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHgzMDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIy
NTEzYjAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4MjI1MTNiMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1W
aTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NTAwLCB0eXBlID0gMHgxLCBy
b290IHRhYmxlID0gMHgyMjUxM2IwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhF
TikgU2NydWJiaW5nIEZyZWUgUkFNOiAuZG9uZS4NCihYRU4pIEluaXRpYWwgbG93IG1lbW9yeSB2
aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLg0KKFhFTikgU3RkLiBMb2dsZXZlbDog
QWxsDQooWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsDQooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+
IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikN
CihYRU4pIEZyZWVkIDI1NmtCIGluaXQgbWVtb3J5Lg0KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERv
bWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAw
MDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAuDQooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWlu
IGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAw
IHRvIDB4ODAwODAwMDAwMTAwMDAwMC4NCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0
ZW1wdGVkIFdSTVNSIDAwMDAwMDAwMDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8g
MHg4MDA4MDAwMDAxMDAwMDAwLg0KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0
ZWQgV1JNU1IgMDAwMDAwMDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgw
MDgwMDAwMDEwMDAwMDAuDQooWEVOKSBQQ0k6IFVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBi
dXMgMDAtZmYNCihYRU4pIG1tLmM6ODA5OiBkMDogRm9yY2luZyByZWFkLW9ubHkgYWNjZXNzIHRv
IE1GTiBlMDAwMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4wDQooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjAwLjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDEu
MA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4xDQooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjAyLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTAuMA0KKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMC4xDQooWEVOKSBTUi1JT1YgZGV2aWNlIDAwMDA6MDA6
MTEuMCBoYXMgaXRzIHZpcnR1YWwgZnVuY3Rpb25zIGFscmVhZHkgZW5hYmxlZCAoMDFhYikNCihY
RU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxMi4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjINCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MDA6MTMuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4y
DQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjANCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDA6MTQuMQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4zDQooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjQNCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MTQuNQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4wDQooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjE1LjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMw0KKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjE4LjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxOC4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQN
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNQ0KKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDowMTowMC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjENCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDI6MDYuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMjow
Ny4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAzOjAwLjANCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDQ6MDAuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNTowMC4wDQooWEVO
KSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xNiAtPiAweGEwIC0+IElSUSAx
NiBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRy
eSAoNS0xNyAtPiAweGE4IC0+IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNb
MF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOCAtPiAweGIwIC0+IElSUSAxOCBNb2RlOjEg
QWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOSAt
PiAweGI4IC0+IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQ
Q0kgcm91dGluZyBlbnRyeSAoNS0yMSAtPiAweGMwIC0+IElSUSAyMSBNb2RlOjEgQWN0aXZlOjEp
DQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0yMiAtPiAweGM4IC0+
IElSUSAyMiBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBBUElDIGVycm9yIG9uIENQVTA6IDAwKDQw
KQ0KKFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkNCihYRU4pIEFQSUMgZXJyb3Igb24g
Q1BVMzogMDAoNDApDQooWEVOKSBBUElDIGVycm9yIG9uIENQVTI6IDAwKDQwKQ0KG1tyG1tIG1tK
DQ0NCldlbGNvbWUgdG8gb3BlblNVU0UgMTMuMSAiQm90dGxlIiAtIEtlcm5lbCAzLjExLjYtNC14
ZW4gKHh2YzApLg0NCg0NCg0NCmxpbnV4LWI1MmQgbG9naW46IChYRU4pIEFNRC1WaTogU2hhcmUg
cDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MjE5Yzk1DQooWEVOKSBBTUQtVmk6
IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUgPSAweDIwYjJkMQ0KKFhFTikg
aW8uYzoyODA6IGQyOiBiaW5kOiBtX2dzaT0xNyBnX2dzaT0zNiBkZXZpY2U9NSBpbnR4PTANCihY
RU4pIEFNRC1WaTogRGlzYWJsZTogZGV2aWNlIGlkID0gMHg4LCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9
IDB4OCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjBiMmQxMDAwLCBkb21haW4gPSAyLCBw
YWdpbmcgbW9kZSA9IDQNCihYRU4pIEFNRC1WaTogUmUtYXNzaWduIDAwMDA6MDA6MDEuMCBmcm9t
IGRvbTAgdG8gZG9tMg0KKFhFTikgaW8uYzoyODA6IGQyOiBiaW5kOiBtX2dzaT0xOCBnX2dzaT00
MSBkZXZpY2U9NiBpbnR4PTENCihYRU4pIEFNRC1WaTogRGlzYWJsZTogZGV2aWNlIGlkID0gMHg5
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBh
Z2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjBi
MmQxMDAwLCBkb21haW4gPSAyLCBwYWdpbmcgbW9kZSA9IDQNCihYRU4pIEFNRC1WaTogUmUtYXNz
aWduIDAwMDA6MDA6MDEuMSBmcm9tIGRvbTAgdG8gZG9tMg0KKGQyKSBIVk0gTG9hZGVyDQooZDIp
IERldGVjdGVkIFhlbiB2NC40LjBfMDItMjk3LjENCihkMikgWGVuYnVzIHJpbmdzIEAweGZlZmZj
MDAwLCBldmVudCBjaGFubmVsIDMNCihkMikgU3lzdGVtIHJlcXVlc3RlZCBTZWFCSU9TDQooZDIp
IENQVSBzcGVlZCBpcyAzODkzIE1Ieg0KKGQyKSBSZWxvY2F0aW5nIGd1ZXN0IG1lbW9yeSBmb3Ig
bG93bWVtIE1NSU8gc3BhY2UgZGlzYWJsZWQNCihYRU4pIGlycS5jOjI3MDogRG9tMiBQQ0kgbGlu
ayAwIGNoYW5nZWQgMCAtPiA1DQooZDIpIFBDSS1JU0EgbGluayAwIHJvdXRlZCB0byBJUlE1DQoo
WEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMSBjaGFuZ2VkIDAgLT4gMTANCihkMikgUENJ
LUlTQSBsaW5rIDEgcm91dGVkIHRvIElSUTEwDQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxp
bmsgMiBjaGFuZ2VkIDAgLT4gMTENCihkMikgUENJLUlTQSBsaW5rIDIgcm91dGVkIHRvIElSUTEx
DQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMyBjaGFuZ2VkIDAgLT4gNQ0KKGQyKSBQ
Q0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQ0KKGQyKSBwY2kgZGV2IDAxOjIgSU5URC0+SVJR
NQ0KKGQyKSBwY2kgZGV2IDAxOjMgSU5UQS0+SVJRMTANCihkMikgcGNpIGRldiAwMjowIElOVEEt
PklSUTExDQooZDIpIHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1DQooZDIpIHBjaSBkZXYgMDU6MCBJ
TlRBLT5JUlExMA0KKGQyKSBwY2kgZGV2IDA2OjAgSU5UQi0+SVJRNQ0KKGQyKSBObyBSQU0gaW4g
aGlnaCBtZW1vcnk7IHNldHRpbmcgaGlnaF9tZW0gcmVzb3VyY2UgYmFzZSB0byAxMDAwMDAwMDAN
CihkMikgcGNpIGRldiAwNTowIGJhciAxMCBzaXplIDAxMDAwMDAwMDogMGUwMDAwMDA4DQooWEVO
KSBtZW1vcnlfbWFwOmFkZDogZG9tMiBnZm49ZTAwMDAgbWZuPWIwMDAwIG5yPTEwMDAwDQooZDIp
IHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDIwMDAwMDA6IDBmMDAwMDAwOA0KKGQyKSBwY2kg
ZGV2IDAyOjAgYmFyIDE0IHNpemUgMDAxMDAwMDAwOiAwZjIwMDAwMDgNCihkMikgcGNpIGRldiAw
NDowIGJhciAzMCBzaXplIDAwMDA0MDAwMDogMGYzMDAwMDAwDQooWEVOKSBtZW1vcnlfbWFwOmFk
ZDogZG9tMiBnZm49ZjMwNDAgbWZuPWZmNzAwIG5yPTQwDQooZDIpIHBjaSBkZXYgMDU6MCBiYXIg
MTggc2l6ZSAwMDAwNDAwMDA6IDBmMzA0MDAwMA0KKGQyKSBwY2kgZGV2IDAzOjAgYmFyIDMwIHNp
emUgMDAwMDEwMDAwOiAwZjMwODAwMDANCihkMikgcGNpIGRldiAwNjowIGJhciAxMCBzaXplIDAw
MDAwNDAwMDogMGYzMDkwMDAwDQooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMiBnZm49ZjMwOTAg
bWZuPWZmNzQwIG5yPTQNCihkMikgcGNpIGRldiAwMzowIGJhciAxNCBzaXplIDAwMDAwMTAwMDog
MGYzMDk0MDAwDQooZDIpIHBjaSBkZXYgMDI6MCBiYXIgMTAgc2l6ZSAwMDAwMDAxMDA6IDAwMDAw
YzAwMQ0KKGQyKSBwY2kgZGV2IDA0OjAgYmFyIDEwIHNpemUgMDAwMDAwMTAwOiAwMDAwMGMxMDEN
CihkMikgcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAwMDAwMDEwMDogMGYzMDk1MDAwDQooWEVO
KSBpb3BvcnRfbWFwOmFkZDogZG9tMiBncG9ydD1jMjAwIG1wb3J0PWYwMDAgbnI9MTAwDQooZDIp
IHBjaSBkZXYgMDU6MCBiYXIgMTQgc2l6ZSAwMDAwMDAxMDA6IDAwMDAwYzIwMQ0KKGQyKSBwY2kg
ZGV2IDAxOjIgYmFyIDIwIHNpemUgMDAwMDAwMDIwOiAwMDAwMGMzMDENCihkMikgcGNpIGRldiAw
MToxIGJhciAyMCBzaXplIDAwMDAwMDAxMDogMDAwMDBjMzIxDQooZDIpIE11bHRpcHJvY2Vzc29y
IGluaXRpYWxpc2F0aW9uOg0KKGQyKSAgLSBDUFUwIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQg
TVRSUnMgLi4uIHZhciBNVFJScyBbMy84XSAuLi4gZG9uZS4NCihkMikgVGVzdGluZyBIVk0gZW52
aXJvbm1lbnQ6DQooZDIpICAtIFJFUCBJTlNCIGFjcm9zcyBwYWdlIGJvdW5kYXJpZXMgLi4uIHBh
c3NlZA0KKGQyKSAgLSBHUyBiYXNlIE1TUnMgYW5kIFNXQVBHUyAuLi4gcGFzc2VkDQooZDIpIFBh
c3NlZCAyIG9mIDIgdGVzdHMNCihkMikgV3JpdGluZyBTTUJJT1MgdGFibGVzIC4uLg0KKGQyKSBM
b2FkaW5nIFNlYUJJT1MgLi4uDQooZDIpIENyZWF0aW5nIE1QIHRhYmxlcyAuLi4NCihkMikgTG9h
ZGluZyBBQ1BJIC4uLg0KKGQyKSB2bTg2IFRTUyBhdCBmYzAwYTAwMA0KKGQyKSBCSU9TIG1hcDoN
CihkMikgIDEwMDAwLTEwMGQzOiBTY3JhdGNoIHNwYWNlDQooZDIpICBlMDAwMC1mZmZmZjogTWFp
biBCSU9TDQooZDIpIEU4MjAgdGFibGU6DQooZDIpICBbMDBdOiAwMDAwMDAwMDowMDAwMDAwMCAt
IDAwMDAwMDAwOjAwMGEwMDAwOiBSQU0NCihkMikgIEhPTEU6IDAwMDAwMDAwOjAwMGEwMDAwIC0g
MDAwMDAwMDA6MDAwZTAwMDANCihkMikgIFswMV06IDAwMDAwMDAwOjAwMGUwMDAwIC0gMDAwMDAw
MDA6MDAxMDAwMDA6IFJFU0VSVkVEDQooZDIpICBbMDJdOiAwMDAwMDAwMDowMDEwMDAwMCAtIDAw
MDAwMDAwOjNmODAwMDAwOiBSQU0NCihkMikgIEhPTEU6IDAwMDAwMDAwOjNmODAwMDAwIC0gMDAw
MDAwMDA6ZmMwMDAwMDANCihkMikgIFswM106IDAwMDAwMDAwOmZjMDAwMDAwIC0gMDAwMDAwMDE6
MDAwMDAwMDA6IFJFU0VSVkVEDQooZDIpIEludm9raW5nIFNlYUJJT1MgLi4uDQooZDIpIFNlYUJJ
T1MgKHZlcnNpb24gPy0yMDE0MDEyOF8xNTU1NTQtYnVpbGQyMikNCihkMikgDQooZDIpIEZvdW5k
IFhlbiBoeXBlcnZpc29yIHNpZ25hdHVyZSBhdCA0MDAwMDAwMA0KKGQyKSB4ZW46IGNvcHkgZTgy
MC4uLg0KKGQyKSBSYW0gU2l6ZT0weDNmODAwMDAwICgweDAwMDAwMDAwMDAwMDAwMDAgaGlnaCkN
CihkMikgUmVsb2NhdGluZyBsb3cgZGF0YSBmcm9tIDB4MDAwZTQ0ZDAgdG8gMHgwMDBlZjc5MCAo
c2l6ZSAyMTUzKQ0KKGQyKSBSZWxvY2F0aW5nIGluaXQgZnJvbSAweDAwMGU0ZDM5IHRvIDB4M2Y3
ZTMxMTAgKHNpemUgNTI2ODcpDQooZDIpIENQVSBNaHo9Mzg5Mw0KKGQyKSBGb3VuZCAxMCBQQ0kg
ZGV2aWNlcyAobWF4IFBDSSBidXMgaXMgMDApDQooZDIpIEFsbG9jYXRlZCBYZW4gaHlwZXJjYWxs
IHBhZ2UgYXQgM2Y3ZmYwMDANCihkMikgRGV0ZWN0ZWQgWGVuIHY0LjQuMF8wMi0yOTcuMQ0KKGQy
KSB4ZW46IGNvcHkgQklPUyB0YWJsZXMuLi4NCihkMikgQ29weWluZyBTTUJJT1MgZW50cnkgcG9p
bnQgZnJvbSAweDAwMDEwMDEwIHRvIDB4MDAwZmUwMzANCihkMikgQ29weWluZyBNUFRBQkxFIGZy
b20gMHhmYzAwMTE0MC9mYzAwMTE1MCB0byAweDAwMGZkZjUwDQooZDIpIENvcHlpbmcgUElSIGZy
b20gMHgwMDAxMDAzMCB0byAweDAwMGZkZWQwDQooZDIpIENvcHlpbmcgQUNQSSBSU0RQIGZyb20g
MHgwMDAxMDBiMCB0byAweDAwMGZkZWEwDQooZDIpIFNjYW4gZm9yIFZHQSBvcHRpb24gcm9tDQoo
ZDIpIFJ1bm5pbmcgb3B0aW9uIHJvbSBhdCBjMDAwOjAwMDMNCihkMikgVHVybmluZyBvbiB2Z2Eg
dGV4dCBtb2RlIGNvbnNvbGUNCihkMikgU2VhQklPUyAodmVyc2lvbiA/LTIwMTQwMTI4XzE1NTU1
NC1idWlsZDIyKQ0KKGQyKSANCihkMikgVUhDSSBpbml0IG9uIGRldiAwMDowMS4yIChpbz1jMzAw
KQ0KKGQyKSBGb3VuZCAwIGxwdCBwb3J0cw0KKGQyKSBGb3VuZCAxIHNlcmlhbCBwb3J0cw0KKGQy
KSBBVEEgY29udHJvbGxlciAxIGF0IDFmMC8zZjQvYzMyMCAoaXJxIDE0IGRldiA5KQ0KKGQyKSBB
VEEgY29udHJvbGxlciAyIGF0IDE3MC8zNzQvYzMyOCAoaXJxIDE1IGRldiA5KQ0KKGQyKSBhdGEw
LTA6IFFFTVUgSEFSRERJU0sgQVRBLTcgSGFyZC1EaXNrICgyNTYwIE1pQnl0ZXMpDQooZDIpIFNl
YXJjaGluZyBib290b3JkZXIgZm9yOiAvcGNpQGkwY2Y4LypAMSwxL2RyaXZlQDAvZGlza0AwDQoo
ZDIpIFBTMiBrZXlib2FyZCBpbml0aWFsaXplZA0KKGQyKSBBbGwgdGhyZWFkcyBjb21wbGV0ZS4N
CihkMikgU2NhbiBmb3Igb3B0aW9uIHJvbXMNCihkMikgUnVubmluZyBvcHRpb24gcm9tIGF0IGM5
MDA6MDAwMw0KKGQyKSBwbW0gY2FsbCBhcmcxPTENCihkMikgcG1tIGNhbGwgYXJnMT0wDQooZDIp
IHBtbSBjYWxsIGFyZzE9MQ0KKGQyKSBwbW0gY2FsbCBhcmcxPTANCihkMikgU2VhcmNoaW5nIGJv
b3RvcmRlciBmb3I6IC9wY2lAaTBjZjgvKkA0DQooZDIpIFByZXNzIEYxMiBmb3IgYm9vdCBtZW51
Lg0KKGQyKSANCihkMikgZHJpdmUgMHgwMDBmZGU1MDogUENIUz01MjAxLzE2LzYzIHRyYW5zbGF0
aW9uPWxhcmdlIExDSFM9NjUwLzEyOC82MyBzPTUyNDI4ODANCihkMikgU3BhY2UgYXZhaWxhYmxl
IGZvciBVTUI6IDAwMGNhMDAwLTAwMGVmMDAwDQooZDIpIFJldHVybmVkIDYxNDQwIGJ5dGVzIG9m
IFpvbmVIaWdoDQooZDIpIGU4MjAgbWFwIGhhcyA2IGl0ZW1zOg0KKGQyKSAgIDA6IDAwMDAwMDAw
MDAwMDAwMDAgLSAwMDAwMDAwMDAwMDlmYzAwID0gMSBSQU0NCihkMikgICAxOiAwMDAwMDAwMDAw
MDlmYzAwIC0gMDAwMDAwMDAwMDBhMDAwMCA9IDIgUkVTRVJWRUQNCihkMikgICAyOiAwMDAwMDAw
MDAwMGYwMDAwIC0gMDAwMDAwMDAwMDEwMDAwMCA9IDIgUkVTRVJWRUQNCihkMikgICAzOiAwMDAw
MDAwMDAwMTAwMDAwIC0gMDAwMDAwMDAzZjdmZjAwMCA9IDEgUkFNDQooZDIpICAgNDogMDAwMDAw
MDAzZjdmZjAwMCAtIDAwMDAwMDAwM2Y4MDAwMDAgPSAyIFJFU0VSVkVEDQooZDIpICAgNTogMDAw
MDAwMDBmYzAwMDAwMCAtIDAwMDAwMDAxMDAwMDAwMDAgPSAyIFJFU0VSVkVEDQooZDIpIGVudGVy
IGhhbmRsZV8xOToNCihkMikgICBOVUxMDQooZDIpIEJvb3RpbmcgZnJvbSBIYXJkIERpc2suLi4N
CihkMikgQm9vdGluZyBmcm9tIDAwMDA6N2MwMA0KKFhFTikgc3RkdmdhLmM6MTUwOmQyIGVudGVy
aW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKGQyKSBwbnAgY2FsbCBhcmcxPTANCihYRU4p
IHN0ZHZnYS5jOjE1NDpkMiBsZWF2aW5nIHN0ZHZnYQ0KKFhFTikgc3RkdmdhLmM6MTUwOmQyIGVu
dGVyaW5nIHN0ZHZnYSBhbmQgY2FjaGluZyBtb2Rlcw0KKFhFTikgaXJxLmM6MjcwOiBEb20yIFBD
SSBsaW5rIDAgY2hhbmdlZCA1IC0+IDANCihYRU4pIGlycS5jOjI3MDogRG9tMiBQQ0kgbGluayAx
IGNoYW5nZWQgMTAgLT4gMA0KKFhFTikgaXJxLmM6MjcwOiBEb20yIFBDSSBsaW5rIDIgY2hhbmdl
ZCAxMSAtPiAwDQooWEVOKSBpcnEuYzoyNzA6IERvbTIgUENJIGxpbmsgMyBjaGFuZ2VkIDUgLT4g
MA0KKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTIgZ2ZuPWUwMDAwIG1mbj1iMDAwMCBucj0x
MDAwMA0KKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTIgZ2ZuPWYzMDQwIG1mbj1mZjcwMCBu
cj00MA0KKFhFTikgaW9wb3J0X21hcDpyZW1vdmU6IGRvbTIgZ3BvcnQ9YzIwMCBtcG9ydD1mMDAw
IG5yPTEwMA0KKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWUwMDAwIG1mbj1iMDAwMCBu
cj0xMDAwMA0KKFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWYzMDQwIG1mbj1mZjcwMCBu
cj00MA0KKFhFTikgaW9wb3J0X21hcDphZGQ6IGRvbTIgZ3BvcnQ9YzIwMCBtcG9ydD1mMDAwIG5y
PTEwMA0KKFhFTikgbWVtb3J5X21hcDpyZW1vdmU6IGRvbTIgZ2ZuPWYzMDkwIG1mbj1mZjc0MCBu
cj00DQooWEVOKSBtZW1vcnlfbWFwOmFkZDogZG9tMiBnZm49ZjMwOTAgbWZuPWZmNzQwIG5yPTQN
CihYRU4pIG1lbW9yeV9tYXA6cmVtb3ZlOiBkb20yIGdmbj1mMzA5MCBtZm49ZmY3NDAgbnI9NA0K
KFhFTikgbWVtb3J5X21hcDphZGQ6IGRvbTIgZ2ZuPWYzMDkwIG1mbj1mZjc0MCBucj00DQooWEVO
KSBzdGR2Z2EuYzoxNTQ6ZDIgbGVhdmluZyBzdGR2Z2ENCg==
--047d7bdc8b6e1d134804f1814178
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bdc8b6e1d134804f1814178--


From xen-devel-bounces@lists.xen.org Mon Feb 03 14:32:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:32:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKZa-0002Qn-Pi; Mon, 03 Feb 2014 14:32:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAKZa-0002Qg-57
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 14:32:30 +0000
Received: from [85.158.143.35:3832] by server-3.bemta-4.messagelabs.com id
	6C/C9-11539-D78AFE25; Mon, 03 Feb 2014 14:32:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391437947!2766140!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20536 invoked from network); 3 Feb 2014 14:32:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:32:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="99220061"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 14:32:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 09:32:26 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAKZW-0002XG-BC;
	Mon, 03 Feb 2014 14:32:26 +0000
Message-ID: <52EFA87A.2000008@citrix.com>
Date: Mon, 3 Feb 2014 14:32:26 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
In-Reply-To: <CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Sander Eikelenboom <linux@eikelenboom.it>, Jan Beulich <JBeulich@suse.com>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	Xen-devel List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Can you please reply-to-all to keep this on the list.

On 03/02/14 14:21, Vitaliy Tomin wrote:
>> Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
> No, it crashes even with an empty HVM domain (no OS, no disk images, no network)

What about without PCI passthrough?  In all cases you appear to be
passing the embedded graphics through to an HVM domain.

>
>> Can you explain "=== whole system crashed ===" a little more.
> It means the system instantly rebooted. Black screen, no messages, no
> image on screen; the next thing I see is the POST of my real hardware.
>
> Log of Xen run with debug=y is attached; the HVM dom ran and there was no crash.

Ok - so something forced a system reset.  Even more curious that debug
mode is fine while non-debug is fatal.

~Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:36:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKdQ-0002bt-IF; Mon, 03 Feb 2014 14:36:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAKdO-0002bk-PP
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:36:27 +0000
Received: from [85.158.137.68:63902] by server-9.bemta-3.messagelabs.com id
	61/8A-10184-969AFE25; Mon, 03 Feb 2014 14:36:25 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391438183!13003547!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8128 invoked from network); 3 Feb 2014 14:36:24 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:36:24 -0000
Received: by mail-qa0-f45.google.com with SMTP id ii20so10084362qab.32
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 06:36:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=0QbdaKJzcPn3O6RtYFlRIuBe8fdynhQ52ow/ky1FGfY=;
	b=mFtw7MgiiWzppEIvFPJes8jHXTFNKCa9wqr5V51JlPerBAGWLAHGqCwrlItKz0GLm3
	jnNmqnjrVlT644Y7qCzasn39VtAAjBZcCBhbUcw0Pt3eTylvYKClJfGM7Pan07QYij0n
	Y5zbUxZDMqXi+fYk+hPV+K7tZGXiXn7pAQhLlBxNMgbTbPvSqnCdqj7JAsUwgbL8QH1b
	xgAzZW3L7ZWpjzzH6YTxCzb8KfknsMd1MEy/ymHWHXfW0nG4uCpX687kusaZGx7L+/uD
	LT6ER7OjaIuLN5705KWFHfDBzJ97/Qx/meQ7Y1DLWpsCKhd880eazAowQ70GN7ByZ6Xu
	1eUg==
MIME-Version: 1.0
X-Received: by 10.140.96.245 with SMTP id k108mr53940083qge.60.1391438183644; 
	Mon, 03 Feb 2014 06:36:23 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 06:36:23 -0800 (PST)
In-Reply-To: <52EFA87A.2000008@citrix.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
Date: Mon, 3 Feb 2014 23:36:23 +0900
Message-ID: <CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=001a113b9ce4a9d8dc04f1817265
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a113b9ce4a9d8dc04f1817265
Content-Type: text/plain; charset=ISO-8859-1

> What about without PCI passthrough?  In all cases you appear to be
> passing the embedded graphics through to an HVM domain

The next log was captured with the following HVM domain config:

name = 'blank'
builder = 'hvm'
memory = 1024

Only these 3 lines. It takes longer to crash, about 5-10 minutes; with
an OS in the HVM domain it takes less than a minute.

Now trying to capture a log with the watchdog enabled.
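For reference, a minimal sketch of how the watchdog could be enabled here. The file path, the grub2-mkconfig invocation, and the exact set of pre-existing Xen options are assumptions based on the command line visible in the attached log; "watchdog" is the standard Xen hypervisor boot parameter for the NMI watchdog:

```shell
# /etc/default/grub (assumed location on openSUSE): append "watchdog"
# to the existing Xen hypervisor options from the boot log
GRUB_CMDLINE_XEN_DEFAULT="loglvl=all iommu=debug,verbose apic_verbosity=debug console=com1 com1=115200,8n1,pci watchdog"

# Then regenerate the GRUB2 config and reboot (command assumed for openSUSE):
# grub2-mkconfig -o /boot/grub2/grub.cfg
```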

On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> Can you please reply-to-all to keep this on the list.
>
> On 03/02/14 14:21, Vitaliy Tomin wrote:
>>> Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
>> No, it crashes even with an empty HVM domain (no OS, no disk images, no network)
>
> What about without PCI passthrough?  In all cases you appear to be
> passing the embedded graphics through to an HVM domain
>
>>
>>> Can you explain "=== whole system crashed ===" a little more.
>> It means the system instantly rebooted. Black screen, no messages, no
>> image on screen; the next thing I see is the POST of my real hardware.
>>
>> A log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.
>
> Ok - so something forced a system reset.  Even more curious that debug
> mode is fine while non-debug is fatal.
>
> ~Andrew
>
>

--001a113b9ce4a9d8dc04f1817265
Content-Type: application/octet-stream; name="screenlog.0"
Content-Disposition: attachment; filename="screenlog.0"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7uc4nn0

IFhlbiA0LjQuMF8wMi0yOTcuMQ0KKFhFTikgWGVuIHZlcnNpb24gNC40LjBfMDItMjk3LjEgKGFi
dWlsZEApIChnY2MgKFNVU0UgTGludXgpIDQuOC4xIDIwMTMwOTA5IFtnY2MtNF84LWJyYW5jaCBy
ZXZpc2lvbiAyMDIzODhdKSBkZWJ1Zz1uIFR1ZSBKYW4gMjggMTY6MDg6NDggVVRDIDIwMTQNCihY
RU4pIExhdGVzdCBDaGFuZ2VTZXQ6IA0KKFhFTikgQm9vdGxvYWRlcjogR1JVQjIgMi4wMA0KKFhF
TikgQ29tbWFuZCBsaW5lOiBsb2dsdmw9YWxsIGlvbW11PWRlYnVnLHZlcmJvc2UgYXBpY192ZXJi
b3NpdHk9ZGVidWcgY29uc29sZT1jb20xIGNvbTE9MTE1MjAwLDhuMSxwY2kNCihYRU4pIFZpZGVv
IGluZm9ybWF0aW9uOg0KKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNg0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRz
DQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMN
CihYRU4pICBGb3VuZCA0IEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVOKSBYZW4tZTgy
MCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1
c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZTgwMCAtIDAwMDAwMDAwMDAwYTAwMDAgKHJlc2Vy
dmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZl
ZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQ0K
KFhFTikgIDAwMDAwMDAwOGQ2OGIwMDAgLSAwMDAwMDAwMDhkZDBhMDAwIChyZXNlcnZlZCkNCihY
RU4pICAwMDAwMDAwMDhkZDBhMDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpDQooWEVO
KSAgMDAwMDAwMDA4ZTA1YTAwMCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQ0KKFhFTikg
IDAwMDAwMDAwOGVhNDUwMDAgLSAwMDAwMDAwMDhlYTQ2MDAwICh1c2FibGUpDQooWEVOKSAgMDAw
MDAwMDA4ZWE0NjAwMCAtIDAwMDAwMDAwOGVjNGMwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAw
MDAwOGVjNGMwMDAgLSAwMDAwMDAwMDhmMDY0MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4
ZjA2NDAwMCAtIDAwMDAwMDAwOGY3ZjMwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwOGY3
ZjMwMDAgLSAwMDAwMDAwMDhmODAwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBmZWMwMDAw
MCAtIDAwMDAwMDAwZmVjMDEwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMTAwMDAg
LSAwMDAwMDAwMGZlYzExMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0g
MDAwMDAwMDBmZWQwMTAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAw
MDAwMDAwZmVkOTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmY4MDAwMDAgLSAwMDAw
MDAwMTAwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAw
MDI1MDAwMDAwMCAodXNhYmxlKQ0KKFhFTikgQUNQSTogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIg
QUxBU0tBKQ0KKFhFTikgQUNQSTogWFNEVCA4RTA0QTA3OCwgMDA3NCAocjEgQUxBU0tBICAgIEEg
TSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAw
MEY0IChyNCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFD
UEkgV2FybmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25hbCBmaWVsZCAiUG0yQ29udHJvbEJsb2Nr
IiBoYXMgemVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEy
Nl0NCihYRU4pIEFDUEk6IERTRFQgOEUwNEExODgsIDVGOUUgKHIyIEFMQVNLQSAgICBBIE0gSSAg
ICAgICAgMCBJTlRMIDIwMDUxMTE3KQ0KKFhFTikgQUNQSTogRkFDUyA4RTA1MkU4MCwgMDA0MA0K
KFhFTikgQUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMgQUxBU0tBICAgIEEgTSBJICAxMDcy
MDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGUERUIDhFMDUwMjk4LCAwMDQ0IChyMSBB
TEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEk6IE1DRkcg
OEUwNTAyRTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgIDEwMDEz
KQ0KKFhFTikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEgQUxBU0tBIE9FTUFBRlQgICAx
MDcyMDA5IE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBIUEVUIDhFMDUwNDA4LCAwMDM4IChy
MSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAgICAgNSkNCihYRU4pIEFDUEk6IElW
UlMgOEUwNTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAg
ICAwKQ0KKFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEgICAgQU1EIEFOTkFQVVJO
ICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwRjEwLCAwNEI3
IChyMiAgICBBTUQgQU5OQVBVUk4gICAgICAgIDEgTVNGVCAgNDAwMDAwMCkNCihYRU4pIEFDUEk6
IENSQVQgOEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAg
ICAgICAxKQ0KKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0IpDQooWEVOKSBObyBO
VU1BIGNvbmZpZ3VyYXRpb24gZm91bmQNCihYRU4pIEZha2luZyBhIG5vZGUgYXQgMDAwMDAwMDAw
MDAwMDAwMC0wMDAwMDAwMjUwMDAwMDAwDQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZA0K
KFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZkOTAwDQooWEVOKSBETUkgMi43IHByZXNl
bnQuDQooWEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJw0KKFhFTikgVXNpbmcgQVBJQyBk
cml2ZXIgZGVmYXVsdA0KKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNCihYRU4p
IEFDUEk6IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdDQooWEVO
KSBBQ1BJOiAzMi82NFggRkFDUyBhZGRyZXNzIG1pc21hdGNoIGluIEZBRFQgLSA4ZTA1MmU4MC8w
MDAwMDAwMDAwMDAwMDAwLCB1c2luZyAzMg0KKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVw
X3ZlY1s4ZTA1MmU4Y10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lk
WzB4MTBdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYN
CihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQp
DQooWEVOKSBQcm9jZXNzb3IgIzE3IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNz
b3IgIzE4IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MDRdIGxhcGljX2lkWzB4MTNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElD
IHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVk
Z2UgbGludFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA1XSBhZGRyZXNzWzB4ZmVj
MDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24g
MzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZS
IChidXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVOKSBB
Q1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3Zl
cnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFibGlu
ZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMSBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQRVQg
aWQ6IDB4MTAyMjgyMTAgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgRVJTVCB0YWJsZSB3YXMgbm90
IGZvdW5kDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5m
b3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykNCihY
RU4pIE5SX0NQVVM6NTEyIG5yX2NwdW1hc2tfYml0czo2NA0KKFhFTikgbWFwcGVkIEFQSUMgdG8g
ZmZmZjgyY2ZmZmJmYjAwMCAoZmVlMDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4
MmNmZmZiZmEwMDAgKGZlYzAwMDAwKQ0KKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCA3NjAgTVNJ
L01TSS1YDQooWEVOKSBVc2luZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVk
aXQpDQooWEVOKSBEZXRlY3RlZCAzODkyLjk4OCBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGlu
ZyBtZW1vcnkgc2hhcmluZy4NCihYRU4pIHhzdGF0ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAw
eDNjMCBhbmQgc3RhdGVzOiAweDQwMDAwMDAwMDAwMDAwMDcNCihYRU4pIEFNRCBGYW0xNWggbWFj
aGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRp
b24gMDogYmFzZSBlMDAwMDAwMCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJ
OiBOb3QgdXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZp
OiBGb3VuZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkg
VGFibGU6DQooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVu
Z3RoIDB4NzANCihYRU4pIEFNRC1WaTogIFJldmlzaW9uIDB4Mg0KKFhFTikgQU1ELVZpOiAgQ2hl
Y2tTdW0gMHhlOA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRA0KKFhFTikgQU1ELVZpOiAgT0VN
X1RhYmxlX0lkIEFOTkFQVVJODQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgxDQooWEVO
KSBBTUQtVmk6ICBDcmVhdG9yX0lkIEFNRCANCihYRU4pIEFNRC1WaTogIENyZWF0b3JfUmV2aXNp
b24gMA0KKFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxhZ3MgMHhmZSBsZW4g
MHg0MCBpZCAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlk
IDB4OCBmbGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OCAtPiAweGZmZmUN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAweDIwMCBmbGFn
cyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZmIGFsaWFzIDB4
YTQNCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMCBpZCAwIGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDIgaGFu
ZGxlIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZs
YWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0
eSAweDEgaGFuZGxlIDB4NQ0KKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJlczoN
CihYRU4pICAtIFByZWZldGNoIFBhZ2VzIENvbW1hbmQNCihYRU4pICAtIFBlcmlwaGVyYWwgUGFn
ZSBTZXJ2aWNlIFJlcXVlc3QNCihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uDQooWEVOKSAgLSBJ
bnZhbGlkYXRlIEFsbCBDb21tYW5kDQooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4NCihY
RU4pIEFNRC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4NCihYRU4pIEFNRC1WaTogSU9N
TVUgMCBFbmFibGVkLg0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAt
IERvbTAgbW9kZTogUmVsYXhlZA0KKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkDQoo
WEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwDQooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgw
MDUwMDEwDQooWEVOKSBHZXR0aW5nIElEOiAxMDAwMDAwMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3
MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUj
MA0KKFhFTikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzDQooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBt
ZXRob2QNCihYRU4pIGluaXQgSU9fQVBJQyBJUlFzDQooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBp
bikgNS0wLCA1LTE2LCA1LTE3LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5v
dCBjb25uZWN0ZWQuDQooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBh
cGljMj0tMSBwaW4yPS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhF
TikgbnVtYmVyIG9mIElPLUFQSUMgIzUgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIHRlc3RpbmcgdGhl
IElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLg0KKFhFTikgSU8gQVBJQyAjNS4uLi4uLg0K
KFhFTikgLi4uLiByZWdpc3RlciAjMDA6IDA1MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgIDogcGh5
c2ljYWwgQVBJQyBpZDogMDUNCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwDQoo
WEVOKSAuLi4uLi4uICAgIDogTFRTICAgICAgICAgIDogMA0KKFhFTikgLi4uLiByZWdpc3RlciAj
MDE6IDAwMTc4MDIxDQooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVz
OiAwMDE3DQooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMQ0KKFhFTikgLi4u
Li4uLiAgICAgOiBJTyBBUElDIHZlcnNpb246IDAwMjENCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAy
OiAwNTAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICAgOiBhcmJpdHJhdGlvbjogMDUNCihYRU4pIC4u
Li4gcmVnaXN0ZXIgIzAzOiAwNTAxODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBCb290IERUICAg
IDogMQ0KKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBo
eSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAw
MCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAwMSAwMDEg
MDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMwDQooWEVOKSAgMDIgMDAxIDAx
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMA0KKFhFTikgIDAzIDAwMSAwMSAg
MCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgNCihYRU4pICAwNCAwMDEgMDEgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDUgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA2IDAwMSAwMSAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwNyAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDggMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA2MA0KKFhFTikgIDA5IDAwMSAwMSAgMSAgICAxICAgIDAg
ICAxICAgMCAgICAxICAgIDAgICAgMDANCihYRU4pICAwYSAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIEYxDQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA3MA0KKFhFTikgIDBjIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgNzgNCihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDg4DQooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA5MA0KKFhFTikgIDBmIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgOTgNCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAxICAgIDMwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MSAgICAzMA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEg
ICAgMzANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzAN
CihYRU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQoo
WEVOKSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhF
TikgVXNpbmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nDQooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdz
Og0KKFhFTikgSVJRMjQwIC0+IDA6Mg0KKFhFTikgSVJRNDggLT4gMDoxDQooWEVOKSBJUlE1NiAt
PiAwOjMNCihYRU4pIElSUTY0IC0+IDA6NA0KKFhFTikgSVJRNzIgLT4gMDo1DQooWEVOKSBJUlE4
MCAtPiAwOjYNCihYRU4pIElSUTg4IC0+IDA6Nw0KKFhFTikgSVJROTYgLT4gMDo4DQooWEVOKSBJ
UlExMDQgLT4gMDo5DQooWEVOKSBJUlEyNDEgLT4gMDoxMA0KKFhFTikgSVJRMTEyIC0+IDA6MTEN
CihYRU4pIElSUTEyMCAtPiAwOjEyDQooWEVOKSBJUlExMzYgLT4gMDoxMw0KKFhFTikgSVJRMTQ0
IC0+IDA6MTQNCihYRU4pIElSUTE1MiAtPiAwOjE1DQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4gZG9uZS4NCihYRU4pIFVzaW5nIGxvY2FsIEFQSUMgdGltZXIgaW50
ZXJydXB0cy4NCihYRU4pIGNhbGlicmF0aW5nIEFQSUMgdGltZXIgLi4uDQooWEVOKSAuLi4uLiBD
UFUgY2xvY2sgc3BlZWQgaXMgMzg5Mi45MjI5IE1Iei4NCihYRU4pIC4uLi4uIGhvc3QgYnVzIGNs
b2NrIHNwZWVkIGlzIDk5LjgxODUgTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM3
DQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVO
KSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdl
IFRhYmxlcyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxp
c2F0aW9uDQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSAgLSBWTUNC
IENsZWFuIEJpdHMNCihYRU4pICAtIERlY29kZUFzc2lzdHMNCihYRU4pICAtIFBhdXNlLUludGVy
Y2VwdCBGaWx0ZXINCihYRU4pICAtIFRTQyBSYXRlIE1TUg0KKFhFTikgSFZNOiBTVk0gZW5hYmxl
ZA0KKFhFTikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihY
RU4pIEhWTTogSEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIEhWTTogUFZIIG1v
ZGUgbm90IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtDQooWEVOKSBtYXNrZWQgRXh0SU5UIG9u
IENQVSMxDQooWEVOKSBtaWNyb2NvZGU6IENQVTEgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9
MHg2MDAxMTE5DQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVOKSBtaWNyb2NvZGU6
IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MA0KKFhFTikgbWFza2VkIEV4dElOVCBv
biBDUFUjMw0KKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lk
PTANCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzDQooWEVOKSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0K
KFhFTikgTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZyZXF1ZW5j
eQ0KKFhFTikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBzdGFydGVk
Lg0KKFhFTikgbXRycjogeW91ciBDUFVzIGhhZCBpbmNvbnNpc3RlbnQgdmFyaWFibGUgTVRSUiBz
ZXR0aW5ncw0KKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVwIGFs
bCBDUFVzLg0KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uDQooWEVOKSAqKiog
TE9BRElORyBET01BSU4gMCAqKioNCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNv
bXBhdDMyDQooWEVOKSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgbHNiLCBwYWRkciAweDIwMDAgLT4g
MHhjMDUwMDANCihYRU4pIFBIWVNJQ0FMIE1FTU9SWSBBUlJBTkdFTUVOVDoNCihYRU4pICBEb20w
IGFsbG9jLjogICAwMDAwMDAwMjIzMDAwMDAwLT4wMDAwMDAwMjI0MDAwMDAwICgxNzQyMDgzIHBh
Z2VzIHRvIGJlIGFsbG9jYXRlZCkNCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRjZjc1
MDAwLT4wMDAwMDAwMjRmZmZmZTAwDQooWEVOKSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoN
CihYRU4pICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgwMDAyMDAwLT5mZmZmZmZmZjgwYzA1MDAw
DQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDAwMDAwMDAwMC0+MDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgIFBoeXMtTWFjaCBtYXA6IGZmZmZlYTAwMDAwMDAwMDAtPmZmZmZlYTAwMDBkNmFj
NzANCihYRU4pICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgwYzA1MDAwLT5mZmZmZmZmZjgwYzA1
NGI0DQooWEVOKSAgUGFnZSB0YWJsZXM6ICAgZmZmZmZmZmY4MGMwNjAwMC0+ZmZmZmZmZmY4MGMx
MTAwMA0KKFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODBjMTEwMDAtPmZmZmZmZmZmODBj
MTIwMDANCihYRU4pICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZmZjgx
MDAwMDAwDQooWEVOKSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MDAwMjAwMA0KKFhFTikgRG9t
MCBoYXMgbWF4aW11bSA0IFZDUFVzDQooWEVOKSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdl
IDAwMDA6MDA6MDAuMA0KKFhFTikgQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2aWNlIDAwMDA6MDA6
MDAuMg0KKFhFTikgc2V0dXAgMDAwMDowMDowMC4yIGZvciBkMCBmYWlsZWQgKC0xOSkNCihYRU4p
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlw
ZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4
MTAsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweDgwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHg4MSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4ODgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZj
MzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTgsIHR5cGUgPSAweDcs
IHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCB0eXBl
ID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhh
MCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YTEsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGEzLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9IDB4NSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTUsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE4LCB0eXBlID0gMHgyLCByb290IHRh
YmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhYSwgdHlwZSA9IDB4Miwg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihY
RU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YWIsIHR5cGUg
PSAweDIsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMw
LCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2lu
ZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHhjMSwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YzIsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGMzLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNCwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzUsIHR5cGUgPSAweDYsIHJvb3QgdGFi
bGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEwMCwgdHlwZSA9IDB4MSwg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihY
RU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAxLCB0eXBl
ID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgy
MzAsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweDIzOCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4MzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAs
IGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHg0MDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIy
NGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFNjcnVi
YmluZyBGcmVlIFJBTTogLmRvbmUuDQooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJl
c2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4NCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFsbA0KKFhF
TikgR3Vlc3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFuZCB3YXJu
aW5ncykNCihYRU4pICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1hJyB0aHJl
ZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikgRnJlZWQgMjkya0IgaW5pdCBt
ZW1vcnkuDQooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAw
MC4NCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
MDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAwMDAwLg0K
KFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAw
MDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAuDQooWEVO
KSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEz
IGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFBD
STogVXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgbW0uYzo4MDk6
IGQwOiBGb3JjaW5nIHJlYWQtb25seSBhY2Nlc3MgdG8gTUZOIGUwMDAyDQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAwOjAwLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDAuMg0K
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjAxLjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDIuMA0KKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDoxMC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEw
LjENCihYRU4pIFNSLUlPViBkZXZpY2UgMDAwMDowMDoxMS4wIGhhcyBpdHMgdmlydHVhbCBmdW5j
dGlvbnMgYWxyZWFkeSBlbmFibGVkICgwMWFiKQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxMS4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjANCihYRU4pIFBDSSBhZGQg
ZGV2aWNlIDAwMDA6MDA6MTIuMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wDQoo
WEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEzLjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MDA6MTQuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4xDQooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjE0LjMNCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTQu
NA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC41DQooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjE1LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMg0KKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjE4LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMQ0KKFhFTikgUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDoxOC4yDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjMNCihY
RU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxOC41DQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjANCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MDE6MDAuMQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMjowNi4w
DQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAyOjA3LjANCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDM6MDAuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4wDQooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjA1OjAwLjANCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0
aW5nIGVudHJ5ICg1LTE2IC0+IDB4YTAgLT4gSVJRIDE2IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4p
IElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE3IC0+IDB4YTggLT4gSVJRIDE3
IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5
ICg1LTE4IC0+IDB4YjAgLT4gSVJRIDE4IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIElPQVBJQ1sw
XTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE5IC0+IDB4YjggLT4gSVJRIDE5IE1vZGU6MSBB
Y3RpdmU6MSkNCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTIxIC0+
IDB4YzAgLT4gSVJRIDIxIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIElPQVBJQ1swXTogU2V0IFBD
SSByb3V0aW5nIGVudHJ5ICg1LTIyIC0+IDB4YzggLT4gSVJRIDIyIE1vZGU6MSBBY3RpdmU6MSkN
ClsgICAxNS40MTAzMDBdIFVuYWJsZSB0byByZWFkIHN5c3JxIGNvZGUgaW4gY29udHJvbC9zeXNy
cQ0KKFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkNCihYRU4pIEFQSUMgZXJyb3Igb24g
Q1BVMDogMDAoNDApDQooWEVOKSBBUElDIGVycm9yIG9uIENQVTI6IDAwKDQwKQ0KKFhFTikgQVBJ
QyBlcnJvciBvbiBDUFUzOiAwMCg0MCkNChtbchtbSBtbSg0NDQpXZWxjb21lIHRvIG9wZW5TVVNF
IDEzLjEgIkJvdHRsZSIgLSBLZXJuZWwgMy4xMS42LTQteGVuICh4dmMwKS4NDQoNDQoNDQpsaW51
eC1iNTJkIGxvZ2luOiAoWEVOKSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBw
Mm0gdGFibGUgPSAweDFjYWY2Mw0KKFhFTikgQU1ELVZpOiBTaGFyZSBwMm0gdGFibGUgd2l0aCBp
b21tdTogcDJtIHRhYmxlID0gMHgxMDAxN2MNCihYRU4pIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxl
IHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWNhNTJiDQooWEVOKSBBTUQtVmk6IFNoYXJlIHAy
bSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUgPSAweDFjYTUzZQ0KKFhFTikgQU1ELVZpOiBT
aGFyZSBwMm0gdGFibGUgd2l0aCBpb21tdTogcDJtIHRhYmxlID0gMHgxMDAxNGINCihYRU4pIEFN
RC1WaTogU2hhcmUgcDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWY2YjU3DQo=
--001a113b9ce4a9d8dc04f1817265
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a113b9ce4a9d8dc04f1817265--


From xen-devel-bounces@lists.xen.org Mon Feb 03 14:36:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKdQ-0002bt-IF; Mon, 03 Feb 2014 14:36:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WAKdO-0002bk-PP
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:36:27 +0000
Received: from [85.158.137.68:63902] by server-9.bemta-3.messagelabs.com id
	61/8A-10184-969AFE25; Mon, 03 Feb 2014 14:36:25 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391438183!13003547!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8128 invoked from network); 3 Feb 2014 14:36:24 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:36:24 -0000
Received: by mail-qa0-f45.google.com with SMTP id ii20so10084362qab.32
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 06:36:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=0QbdaKJzcPn3O6RtYFlRIuBe8fdynhQ52ow/ky1FGfY=;
	b=mFtw7MgiiWzppEIvFPJes8jHXTFNKCa9wqr5V51JlPerBAGWLAHGqCwrlItKz0GLm3
	jnNmqnjrVlT644Y7qCzasn39VtAAjBZcCBhbUcw0Pt3eTylvYKClJfGM7Pan07QYij0n
	Y5zbUxZDMqXi+fYk+hPV+K7tZGXiXn7pAQhLlBxNMgbTbPvSqnCdqj7JAsUwgbL8QH1b
	xgAzZW3L7ZWpjzzH6YTxCzb8KfknsMd1MEy/ymHWHXfW0nG4uCpX687kusaZGx7L+/uD
	LT6ER7OjaIuLN5705KWFHfDBzJ97/Qx/meQ7Y1DLWpsCKhd880eazAowQ70GN7ByZ6Xu
	1eUg==
MIME-Version: 1.0
X-Received: by 10.140.96.245 with SMTP id k108mr53940083qge.60.1391438183644; 
	Mon, 03 Feb 2014 06:36:23 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 06:36:23 -0800 (PST)
In-Reply-To: <52EFA87A.2000008@citrix.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
Date: Mon, 3 Feb 2014 23:36:23 +0900
Message-ID: <CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=001a113b9ce4a9d8dc04f1817265
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--001a113b9ce4a9d8dc04f1817265
Content-Type: text/plain; charset=ISO-8859-1

>What about without PCI passthrough?  In all cases you appear to be
passing the embedded graphics through to an HVM domain

Next log captured with following hvm domain config:

name = 'blank'
builder = 'hvm'
memory = 1024

Only 3 lines. It takes longer to crash, about 5-10 minutes; with an OS in
the HVM domain it takes less than a minute.

Now trying to capture a log with the watchdog added.

On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> Can you please reply-to-all to keep this on the list.
>
> On 03/02/14 14:21, Vitaliy Tomin wrote:
>>> Does it start with debug=n, but without trying to pass through the PCI device (the graphics core of the APU) to the HVM?
>> No, it crashes even with an empty HVM domain (no OS, no disk images, no network)
>
> What about without PCI passthrough?  In all cases you appear to be
> passing the embedded graphics through to an HVM domain
>
>>
>>> Can you explain "=== whole system crashed ===" a little more.
>> It means the system instantly rebooted. Black screen, no messages, no
>> image on screen; the next thing I see is the POST of my real hardware.
>>
>> Log of Xen run with debug=y is attached; an HVM domain was run and there was no crash.
>
> Ok - so something forced a system reset.  Even more curious that debug
> mode is fine while non-debug is fatal.
>
> ~Andrew
>
>

--001a113b9ce4a9d8dc04f1817265
Content-Type: application/octet-stream; name="screenlog.0"
Content-Disposition: attachment; filename="screenlog.0"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7uc4nn0

IFhlbiA0LjQuMF8wMi0yOTcuMQ0KKFhFTikgWGVuIHZlcnNpb24gNC40LjBfMDItMjk3LjEgKGFi
dWlsZEApIChnY2MgKFNVU0UgTGludXgpIDQuOC4xIDIwMTMwOTA5IFtnY2MtNF84LWJyYW5jaCBy
ZXZpc2lvbiAyMDIzODhdKSBkZWJ1Zz1uIFR1ZSBKYW4gMjggMTY6MDg6NDggVVRDIDIwMTQNCihY
RU4pIExhdGVzdCBDaGFuZ2VTZXQ6IA0KKFhFTikgQm9vdGxvYWRlcjogR1JVQjIgMi4wMA0KKFhF
TikgQ29tbWFuZCBsaW5lOiBsb2dsdmw9YWxsIGlvbW11PWRlYnVnLHZlcmJvc2UgYXBpY192ZXJi
b3NpdHk9ZGVidWcgY29uc29sZT1jb20xIGNvbTE9MTE1MjAwLDhuMSxwY2kNCihYRU4pIFZpZGVv
IGluZm9ybWF0aW9uOg0KKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNg0K
KFhFTikgIFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRz
DQooWEVOKSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMN
CihYRU4pICBGb3VuZCA0IEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVOKSBYZW4tZTgy
MCBSQU0gbWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1
c2FibGUpDQooWEVOKSAgMDAwMDAwMDAwMDA5ZTgwMCAtIDAwMDAwMDAwMDAwYTAwMDAgKHJlc2Vy
dmVkKQ0KKFhFTikgIDAwMDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZl
ZCkNCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQ0K
KFhFTikgIDAwMDAwMDAwOGQ2OGIwMDAgLSAwMDAwMDAwMDhkZDBhMDAwIChyZXNlcnZlZCkNCihY
RU4pICAwMDAwMDAwMDhkZDBhMDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpDQooWEVO
KSAgMDAwMDAwMDA4ZTA1YTAwMCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQ0KKFhFTikg
IDAwMDAwMDAwOGVhNDUwMDAgLSAwMDAwMDAwMDhlYTQ2MDAwICh1c2FibGUpDQooWEVOKSAgMDAw
MDAwMDA4ZWE0NjAwMCAtIDAwMDAwMDAwOGVjNGMwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAw
MDAwOGVjNGMwMDAgLSAwMDAwMDAwMDhmMDY0MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4
ZjA2NDAwMCAtIDAwMDAwMDAwOGY3ZjMwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwOGY3
ZjMwMDAgLSAwMDAwMDAwMDhmODAwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBmZWMwMDAw
MCAtIDAwMDAwMDAwZmVjMDEwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMTAwMDAg
LSAwMDAwMDAwMGZlYzExMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0g
MDAwMDAwMDBmZWQwMTAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAw
MDAwMDAwZmVkOTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmY4MDAwMDAgLSAwMDAw
MDAwMTAwMDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAw
MDI1MDAwMDAwMCAodXNhYmxlKQ0KKFhFTikgQUNQSTogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIg
QUxBU0tBKQ0KKFhFTikgQUNQSTogWFNEVCA4RTA0QTA3OCwgMDA3NCAocjEgQUxBU0tBICAgIEEg
TSBJICAxMDcyMDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAw
MEY0IChyNCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFD
UEkgV2FybmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25hbCBmaWVsZCAiUG0yQ29udHJvbEJsb2Nr
IiBoYXMgemVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEy
Nl0NCihYRU4pIEFDUEk6IERTRFQgOEUwNEExODgsIDVGOUUgKHIyIEFMQVNLQSAgICBBIE0gSSAg
ICAgICAgMCBJTlRMIDIwMDUxMTE3KQ0KKFhFTikgQUNQSTogRkFDUyA4RTA1MkU4MCwgMDA0MA0K
KFhFTikgQUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMgQUxBU0tBICAgIEEgTSBJICAxMDcy
MDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGUERUIDhFMDUwMjk4LCAwMDQ0IChyMSBB
TEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEk6IE1DRkcg
OEUwNTAyRTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgIDEwMDEz
KQ0KKFhFTikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEgQUxBU0tBIE9FTUFBRlQgICAx
MDcyMDA5IE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBIUEVUIDhFMDUwNDA4LCAwMDM4IChy
MSBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAgICAgNSkNCihYRU4pIEFDUEk6IElW
UlMgOEUwNTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAg
ICAwKQ0KKFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEgICAgQU1EIEFOTkFQVVJO
ICAgICAgICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwRjEwLCAwNEI3
IChyMiAgICBBTUQgQU5OQVBVUk4gICAgICAgIDEgTVNGVCAgNDAwMDAwMCkNCihYRU4pIEFDUEk6
IENSQVQgOEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAg
ICAgICAxKQ0KKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0IpDQooWEVOKSBObyBO
VU1BIGNvbmZpZ3VyYXRpb24gZm91bmQNCihYRU4pIEZha2luZyBhIG5vZGUgYXQgMDAwMDAwMDAw
MDAwMDAwMC0wMDAwMDAwMjUwMDAwMDAwDQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZA0K
KFhFTikgZm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZkOTAwDQooWEVOKSBETUkgMi43IHByZXNl
bnQuDQooWEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJw0KKFhFTikgVXNpbmcgQVBJQyBk
cml2ZXIgZGVmYXVsdA0KKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNCihYRU4p
IEFDUEk6IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdDQooWEVO
KSBBQ1BJOiAzMi82NFggRkFDUyBhZGRyZXNzIG1pc21hdGNoIGluIEZBRFQgLSA4ZTA1MmU4MC8w
MDAwMDAwMDAwMDAwMDAwLCB1c2luZyAzMg0KKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVw
X3ZlY1s4ZTA1MmU4Y10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRy
ZXNzIDB4ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lk
WzB4MTBdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYN
CihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQp
DQooWEVOKSBQcm9jZXNzb3IgIzE3IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExB
UElDIChhY3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNz
b3IgIzE4IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4
MDRdIGxhcGljX2lkWzB4MTNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElD
IHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVk
Z2UgbGludFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA1XSBhZGRyZXNzWzB4ZmVj
MDAwMDBdIGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24g
MzMsIGFkZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZS
IChidXMgMCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRf
U1JDX09WUiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVOKSBB
Q1BJOiBJUlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3Zl
cnJpZGUuDQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFibGlu
ZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMSBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQRVQg
aWQ6IDB4MTAyMjgyMTAgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgRVJTVCB0YWJsZSB3YXMgbm90
IGZvdW5kDQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5m
b3JtYXRpb24NCihYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykNCihY
RU4pIE5SX0NQVVM6NTEyIG5yX2NwdW1hc2tfYml0czo2NA0KKFhFTikgbWFwcGVkIEFQSUMgdG8g
ZmZmZjgyY2ZmZmJmYjAwMCAoZmVlMDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4
MmNmZmZiZmEwMDAgKGZlYzAwMDAwKQ0KKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCA3NjAgTVNJ
L01TSS1YDQooWEVOKSBVc2luZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVk
aXQpDQooWEVOKSBEZXRlY3RlZCAzODkyLjk4OCBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGlu
ZyBtZW1vcnkgc2hhcmluZy4NCihYRU4pIHhzdGF0ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAw
eDNjMCBhbmQgc3RhdGVzOiAweDQwMDAwMDAwMDAwMDAwMDcNCihYRU4pIEFNRCBGYW0xNWggbWFj
aGluZSBjaGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRp
b24gMDogYmFzZSBlMDAwMDAwMCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJ
OiBOb3QgdXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZp
OiBGb3VuZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkg
VGFibGU6DQooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVu
Z3RoIDB4NzANCihYRU4pIEFNRC1WaTogIFJldmlzaW9uIDB4Mg0KKFhFTikgQU1ELVZpOiAgQ2hl
Y2tTdW0gMHhlOA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRA0KKFhFTikgQU1ELVZpOiAgT0VN
X1RhYmxlX0lkIEFOTkFQVVJODQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgxDQooWEVO
KSBBTUQtVmk6ICBDcmVhdG9yX0lkIEFNRCANCihYRU4pIEFNRC1WaTogIENyZWF0b3JfUmV2aXNp
b24gMA0KKFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxhZ3MgMHhmZSBsZW4g
MHg0MCBpZCAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlk
IDB4OCBmbGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OCAtPiAweGZmZmUN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAweDIwMCBmbGFn
cyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZmIGFsaWFzIDB4
YTQNCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMCBpZCAwIGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDIgaGFu
ZGxlIDANCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZs
YWdzIDB4ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0
eSAweDEgaGFuZGxlIDB4NQ0KKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJlczoN
CihYRU4pICAtIFByZWZldGNoIFBhZ2VzIENvbW1hbmQNCihYRU4pICAtIFBlcmlwaGVyYWwgUGFn
ZSBTZXJ2aWNlIFJlcXVlc3QNCihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uDQooWEVOKSAgLSBJ
bnZhbGlkYXRlIEFsbCBDb21tYW5kDQooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4NCihY
RU4pIEFNRC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4NCihYRU4pIEFNRC1WaTogSU9N
TVUgMCBFbmFibGVkLg0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAt
IERvbTAgbW9kZTogUmVsYXhlZA0KKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkDQoo
WEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwDQooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgw
MDUwMDEwDQooWEVOKSBHZXR0aW5nIElEOiAxMDAwMDAwMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3
MDANCihYRU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUj
MA0KKFhFTikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzDQooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBt
ZXRob2QNCihYRU4pIGluaXQgSU9fQVBJQyBJUlFzDQooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBp
bikgNS0wLCA1LTE2LCA1LTE3LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5v
dCBjb25uZWN0ZWQuDQooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBh
cGljMj0tMSBwaW4yPS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhF
TikgbnVtYmVyIG9mIElPLUFQSUMgIzUgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIHRlc3RpbmcgdGhl
IElPIEFQSUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLg0KKFhFTikgSU8gQVBJQyAjNS4uLi4uLg0K
KFhFTikgLi4uLiByZWdpc3RlciAjMDA6IDA1MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgIDogcGh5
c2ljYWwgQVBJQyBpZDogMDUNCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwDQoo
WEVOKSAuLi4uLi4uICAgIDogTFRTICAgICAgICAgIDogMA0KKFhFTikgLi4uLiByZWdpc3RlciAj
MDE6IDAwMTc4MDIxDQooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVz
OiAwMDE3DQooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMQ0KKFhFTikgLi4u
Li4uLiAgICAgOiBJTyBBUElDIHZlcnNpb246IDAwMjENCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAy
OiAwNTAwMDAwMA0KKFhFTikgLi4uLi4uLiAgICAgOiBhcmJpdHJhdGlvbjogMDUNCihYRU4pIC4u
Li4gcmVnaXN0ZXIgIzAzOiAwNTAxODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBCb290IERUICAg
IDogMQ0KKFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBo
eSBNYXNrIFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAw
MCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAwMSAwMDEg
MDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMwDQooWEVOKSAgMDIgMDAxIDAx
ICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMA0KKFhFTikgIDAzIDAwMSAwMSAg
MCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgNCihYRU4pICAwNCAwMDEgMDEgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDUgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA2IDAwMSAwMSAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwNyAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDggMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA2MA0KKFhFTikgIDA5IDAwMSAwMSAgMSAgICAxICAgIDAg
ICAxICAgMCAgICAxICAgIDAgICAgMDANCihYRU4pICAwYSAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIEYxDQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA3MA0KKFhFTikgIDBjIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAg
MCAgICAxICAgIDEgICAgNzgNCihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIDg4DQooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA5MA0KKFhFTikgIDBmIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgOTgNCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAg
ICAxICAgIDMwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAg
MSAgICAzMA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEg
ICAgMzANCihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwDQooWEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MA0KKFhFTikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzAN
CihYRU4pICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQoo
WEVOKSAgMTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhF
TikgVXNpbmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nDQooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdz
Og0KKFhFTikgSVJRMjQwIC0+IDA6Mg0KKFhFTikgSVJRNDggLT4gMDoxDQooWEVOKSBJUlE1NiAt
PiAwOjMNCihYRU4pIElSUTY0IC0+IDA6NA0KKFhFTikgSVJRNzIgLT4gMDo1DQooWEVOKSBJUlE4
MCAtPiAwOjYNCihYRU4pIElSUTg4IC0+IDA6Nw0KKFhFTikgSVJROTYgLT4gMDo4DQooWEVOKSBJ
UlExMDQgLT4gMDo5DQooWEVOKSBJUlEyNDEgLT4gMDoxMA0KKFhFTikgSVJRMTEyIC0+IDA6MTEN
CihYRU4pIElSUTEyMCAtPiAwOjEyDQooWEVOKSBJUlExMzYgLT4gMDoxMw0KKFhFTikgSVJRMTQ0
IC0+IDA6MTQNCihYRU4pIElSUTE1MiAtPiAwOjE1DQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4gZG9uZS4NCihYRU4pIFVzaW5nIGxvY2FsIEFQSUMgdGltZXIgaW50
ZXJydXB0cy4NCihYRU4pIGNhbGlicmF0aW5nIEFQSUMgdGltZXIgLi4uDQooWEVOKSAuLi4uLiBD
UFUgY2xvY2sgc3BlZWQgaXMgMzg5Mi45MjI5IE1Iei4NCihYRU4pIC4uLi4uIGhvc3QgYnVzIGNs
b2NrIHNwZWVkIGlzIDk5LjgxODUgTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM3
DQooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVO
KSBTVk06IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdl
IFRhYmxlcyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxp
c2F0aW9uDQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSAgLSBWTUNC
IENsZWFuIEJpdHMNCihYRU4pICAtIERlY29kZUFzc2lzdHMNCihYRU4pICAtIFBhdXNlLUludGVy
Y2VwdCBGaWx0ZXINCihYRU4pICAtIFRTQyBSYXRlIE1TUg0KKFhFTikgSFZNOiBTVk0gZW5hYmxl
ZA0KKFhFTikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihY
RU4pIEhWTTogSEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIEhWTTogUFZIIG1v
ZGUgbm90IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtDQooWEVOKSBtYXNrZWQgRXh0SU5UIG9u
IENQVSMxDQooWEVOKSBtaWNyb2NvZGU6IENQVTEgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9
MHg2MDAxMTE5DQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVOKSBtaWNyb2NvZGU6
IENQVTIgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MA0KKFhFTikgbWFza2VkIEV4dElOVCBv
biBDUFUjMw0KKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lk
PTANCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzDQooWEVOKSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0K
KFhFTikgTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZyZXF1ZW5j
eQ0KKFhFTikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBzdGFydGVk
Lg0KKFhFTikgbXRycjogeW91ciBDUFVzIGhhZCBpbmNvbnNpc3RlbnQgdmFyaWFibGUgTVRSUiBz
ZXR0aW5ncw0KKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVwIGFs
bCBDUFVzLg0KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uDQooWEVOKSAqKiog
TE9BRElORyBET01BSU4gMCAqKioNCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNv
bXBhdDMyDQooWEVOKSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgbHNiLCBwYWRkciAweDIwMDAgLT4g
MHhjMDUwMDANCihYRU4pIFBIWVNJQ0FMIE1FTU9SWSBBUlJBTkdFTUVOVDoNCihYRU4pICBEb20w
IGFsbG9jLjogICAwMDAwMDAwMjIzMDAwMDAwLT4wMDAwMDAwMjI0MDAwMDAwICgxNzQyMDgzIHBh
Z2VzIHRvIGJlIGFsbG9jYXRlZCkNCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRjZjc1
MDAwLT4wMDAwMDAwMjRmZmZmZTAwDQooWEVOKSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoN
CihYRU4pICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgwMDAyMDAwLT5mZmZmZmZmZjgwYzA1MDAw
DQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDAwMDAwMDAwMC0+MDAwMDAwMDAwMDAwMDAw
MA0KKFhFTikgIFBoeXMtTWFjaCBtYXA6IGZmZmZlYTAwMDAwMDAwMDAtPmZmZmZlYTAwMDBkNmFj
NzANCihYRU4pICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgwYzA1MDAwLT5mZmZmZmZmZjgwYzA1
NGI0DQooWEVOKSAgUGFnZSB0YWJsZXM6ICAgZmZmZmZmZmY4MGMwNjAwMC0+ZmZmZmZmZmY4MGMx
MTAwMA0KKFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODBjMTEwMDAtPmZmZmZmZmZmODBj
MTIwMDANCihYRU4pICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZmZjgx
MDAwMDAwDQooWEVOKSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MDAwMjAwMA0KKFhFTikgRG9t
MCBoYXMgbWF4aW11bSA0IFZDUFVzDQooWEVOKSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdl
IDAwMDA6MDA6MDAuMA0KKFhFTikgQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2aWNlIDAwMDA6MDA6
MDAuMg0KKFhFTikgc2V0dXAgMDAwMDowMDowMC4yIGZvciBkMCBmYWlsZWQgKC0xOSkNCihYRU4p
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlw
ZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4
MTAsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweDgwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHg4MSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4ODgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZj
MzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTgsIHR5cGUgPSAweDcs
IHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCB0eXBl
ID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhh
MCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YTEsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGEzLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9IDB4NSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YTUsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE4LCB0eXBlID0gMHgyLCByb290IHRh
YmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1E
LVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhYSwgdHlwZSA9IDB4Miwg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihY
RU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YWIsIHR5cGUg
PSAweDIsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMw
LCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2lu
ZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlk
ID0gMHhjMSwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YzIsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9t
YWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGMzLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhjNCwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0
dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzUsIHR5cGUgPSAweDYsIHJvb3QgdGFi
bGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEwMCwgdHlwZSA9IDB4MSwg
cm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihY
RU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAxLCB0eXBl
ID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgy
MzAsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweDIzOCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4MzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAs
IGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFn
ZSB0YWJsZTogZGV2aWNlIGlkID0gMHg0MDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIy
NGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFNjcnVi
YmluZyBGcmVlIFJBTTogLmRvbmUuDQooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJl
c2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4NCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFsbA0KKFhF
TikgR3Vlc3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFuZCB3YXJu
aW5ncykNCihYRU4pICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1hJyB0aHJl
ZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQ0KKFhFTikgRnJlZWQgMjkya0IgaW5pdCBt
ZW1vcnkuDQooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAw
MDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAw
MC4NCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVkIFdSTVNSIDAwMDAwMDAw
MDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4MDAwMDAxMDAwMDAwLg0K
KFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAw
MDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAwMDEwMDAwMDAuDQooWEVO
KSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVtcHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEz
IGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4ODAwODAwMDAwMTAwMDAwMC4NCihYRU4pIFBD
STogVXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgbW0uYzo4MDk6
IGQwOiBGb3JjaW5nIHJlYWQtb25seSBhY2Nlc3MgdG8gTUZOIGUwMDAyDQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAwOjAwLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDAuMg0K
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjAxLjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDIuMA0KKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDoxMC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEw
LjENCihYRU4pIFNSLUlPViBkZXZpY2UgMDAwMDowMDoxMS4wIGhhcyBpdHMgdmlydHVhbCBmdW5j
dGlvbnMgYWxyZWFkeSBlbmFibGVkICgwMWFiKQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxMS4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjANCihYRU4pIFBDSSBhZGQg
ZGV2aWNlIDAwMDA6MDA6MTIuMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wDQoo
WEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEzLjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MDA6MTQuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4xDQooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjAwOjE0LjMNCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTQu
NA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC41DQooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAwOjE1LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMg0KKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjE4LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMQ0KKFhFTikgUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDoxOC4yDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjMNCihY
RU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxOC41DQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjANCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MDE6MDAuMQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMjowNi4w
DQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAyOjA3LjANCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MDM6MDAuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4wDQooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjA1OjAwLjANCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0
aW5nIGVudHJ5ICg1LTE2IC0+IDB4YTAgLT4gSVJRIDE2IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4p
IElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE3IC0+IDB4YTggLT4gSVJRIDE3
IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5
ICg1LTE4IC0+IDB4YjAgLT4gSVJRIDE4IE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIElPQVBJQ1sw
XTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTE5IC0+IDB4YjggLT4gSVJRIDE5IE1vZGU6MSBB
Y3RpdmU6MSkNCihYRU4pIElPQVBJQ1swXTogU2V0IFBDSSByb3V0aW5nIGVudHJ5ICg1LTIxIC0+
IDB4YzAgLT4gSVJRIDIxIE1vZGU6MSBBY3RpdmU6MSkNCihYRU4pIElPQVBJQ1swXTogU2V0IFBD
SSByb3V0aW5nIGVudHJ5ICg1LTIyIC0+IDB4YzggLT4gSVJRIDIyIE1vZGU6MSBBY3RpdmU6MSkN
ClsgICAxNS40MTAzMDBdIFVuYWJsZSB0byByZWFkIHN5c3JxIGNvZGUgaW4gY29udHJvbC9zeXNy
cQ0KKFhFTikgQVBJQyBlcnJvciBvbiBDUFUxOiAwMCg0MCkNCihYRU4pIEFQSUMgZXJyb3Igb24g
Q1BVMDogMDAoNDApDQooWEVOKSBBUElDIGVycm9yIG9uIENQVTI6IDAwKDQwKQ0KKFhFTikgQVBJ
QyBlcnJvciBvbiBDUFUzOiAwMCg0MCkNChtbchtbSBtbSg0NDQpXZWxjb21lIHRvIG9wZW5TVVNF
IDEzLjEgIkJvdHRsZSIgLSBLZXJuZWwgMy4xMS42LTQteGVuICh4dmMwKS4NDQoNDQoNDQpsaW51
eC1iNTJkIGxvZ2luOiAoWEVOKSBBTUQtVmk6IFNoYXJlIHAybSB0YWJsZSB3aXRoIGlvbW11OiBw
Mm0gdGFibGUgPSAweDFjYWY2Mw0KKFhFTikgQU1ELVZpOiBTaGFyZSBwMm0gdGFibGUgd2l0aCBp
b21tdTogcDJtIHRhYmxlID0gMHgxMDAxN2MNCihYRU4pIEFNRC1WaTogU2hhcmUgcDJtIHRhYmxl
IHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWNhNTJiDQooWEVOKSBBTUQtVmk6IFNoYXJlIHAy
bSB0YWJsZSB3aXRoIGlvbW11OiBwMm0gdGFibGUgPSAweDFjYTUzZQ0KKFhFTikgQU1ELVZpOiBT
aGFyZSBwMm0gdGFibGUgd2l0aCBpb21tdTogcDJtIHRhYmxlID0gMHgxMDAxNGINCihYRU4pIEFN
RC1WaTogU2hhcmUgcDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWY2YjU3DQo=
--001a113b9ce4a9d8dc04f1817265
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--001a113b9ce4a9d8dc04f1817265--


From xen-devel-bounces@lists.xen.org Mon Feb 03 14:42:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKim-0002yU-K1; Mon, 03 Feb 2014 14:42:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAKil-0002yP-Mz
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:41:59 +0000
Received: from [193.109.254.147:19465] by server-2.bemta-14.messagelabs.com id
	79/83-01236-7BAAFE25; Mon, 03 Feb 2014 14:41:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391438517!1645340!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1314 invoked from network); 3 Feb 2014 14:41:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:41:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="99223682"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 14:41:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 09:41:56 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAKii-0002ft-3h;
	Mon, 03 Feb 2014 14:41:56 +0000
Message-ID: <52EFAAB4.4010607@citrix.com>
Date: Mon, 3 Feb 2014 14:41:56 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
In-Reply-To: <CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 14:36, Vitaliy Tomin wrote:
>> What about without PCI passthrough?  In all cases you appear to be
>> passing the embedded graphics through to an HVM domain
>
> The next log was captured with the following HVM domain config:
>
> name = 'blank'
> builder = 'hvm'
> memory = 1024
>
> Only 3 lines. It takes longer to crash, about 5-10 minutes; with an OS in
> the HVM domain it takes less than a minute.
>
> Now trying to capture a log with the watchdog added.

Don't bother - my suggestion of a watchdog was for a hang, not a reset.

Looking further through the logs,

(XEN) APIC error on CPU1: 00(40)
(XEN) APIC error on CPU0: 00(40)
(XEN) APIC error on CPU2: 00(40)
(XEN) APIC error on CPU3: 00(40)

These show that "Received illegal vector" has been indicated on all CPUs,
despite a comment in the code which states "This interrupt should never
happen with our APIC/SMP architecture".
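(For readers unfamiliar with the format: the "00(40)" in those log lines is the local APIC Error Status Register value in hex. A minimal sketch, not from this thread, of decoding the ESR bits as defined by the x86 APIC architecture:)

```python
# Decode a local APIC Error Status Register (ESR) value into the error
# names defined by the x86 APIC architecture (one name per set bit).
ESR_BITS = {
    0: "Send Checksum Error",
    1: "Receive Checksum Error",
    2: "Send Accept Error",
    3: "Receive Accept Error",
    4: "Redirectable IPI",
    5: "Send Illegal Vector",
    6: "Received Illegal Vector",
    7: "Illegal Register Address",
}

def decode_esr(esr: int) -> list:
    """Return the error names for each bit set in the 8-bit ESR value."""
    return [name for bit, name in ESR_BITS.items() if esr & (1 << bit)]

# The "APIC error on CPUn: 00(40)" lines above report ESR = 0x40:
print(decode_esr(0x40))  # ['Received Illegal Vector']
```

Bit 6 (0x40) is exactly the "Received Illegal Vector" condition referred to above.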

This appears to consistently happen after updating the PCI routing
entries on the first IOAPIC.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 14:36, Vitaliy Tomin wrote:
>> What about without PCI passthrough?  In all cases you appear to be
> passing the embedded graphics through to an HVM domain
>
> Next log captured with following hvm domain config:
>
> name = 'blank'
> builder = 'hvm'
> memory = 1024
>
> Only 3 lines. It takes longer to crash, about 5-10 minutes; with an OS
> in the HVM guest it takes less than a minute.
>
> Now trying to make log with watchdog added

Don't bother - my suggestion of watchdog was for a hang, not a reset.

Looking further through the logs,

(XEN) APIC error on CPU1: 00(40)
(XEN) APIC error on CPU0: 00(40)
(XEN) APIC error on CPU2: 00(40)
(XEN) APIC error on CPU3: 00(40)

These show that "Received illegal vector" has been flagged on all CPUs,
despite a comment in the code which states "This interrupt should never
happen with our APIC/SMP architecture".
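[Editorial note: the hex value in those log lines is the local APIC Error
Status Register (ESR), and 0x40 is exactly the "Received Illegal Vector"
bit. A minimal decoder sketch, with bit names taken from the Intel SDM's
ESR layout (the helper name is mine):]

```python
# Decode an x86 local APIC Error Status Register (ESR) value such as the
# 0x40 seen in "APIC error on CPUn: 00(40)".  Bit layout per the Intel SDM.
ESR_BITS = {
    0: "Send Checksum Error",
    1: "Receive Checksum Error",
    2: "Send Accept Error",
    3: "Receive Accept Error",
    4: "Redirectable IPI",
    5: "Send Illegal Vector",
    6: "Received Illegal Vector",
    7: "Illegal Register Address",
}

def decode_esr(value):
    """Return the names of the error bits set in an ESR value."""
    return [name for bit, name in ESR_BITS.items() if value & (1 << bit)]

print(decode_esr(0x40))  # -> ['Received Illegal Vector']
```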

This appears to consistently happen after updating the PCI routing
entries on the first IOAPIC.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:45:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:45:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKlv-00035F-8e; Mon, 03 Feb 2014 14:45:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAKlt-000358-SG
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 14:45:14 +0000
Received: from [85.158.143.35:33387] by server-2.bemta-4.messagelabs.com id
	C7/E6-10891-97BAFE25; Mon, 03 Feb 2014 14:45:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391438710!2751629!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1865 invoked from network); 3 Feb 2014 14:45:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:45:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97301180"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 14:45:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 09:45:07 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAKln-0005GA-Km;
	Mon, 03 Feb 2014 14:45:07 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAKln-0004ez-Bm;
	Mon, 03 Feb 2014 14:45:07 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21231.43890.994780.905896@mariner.uk.xensource.com>
Date: Mon, 3 Feb 2014 14:45:06 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52EBC598.6030600@suse.com>
References: <52EB3C47.9050902@suse.com>
	<21227.37990.762802.574421@mariner.uk.xensource.com>
	<21227.48873.879761.827214@mariner.uk.xensource.com>
	<52EBC598.6030600@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] libvirt libxl timer handling issue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: libvirt libxl timer handling issue"):
> Ian Jackson wrote:
> > I think this is due to libxl_event.c not clearing the ->func member of
> > its timeout structs when the timeout occurs.  TBH it's surprising that
> > this hasn't caused more trouble and I haven't been able to test this
> > so I'm not sure.

I have now tested this.  I can confirm that this bug is real, and I
understand why we haven't spotted it before.  My fix is correct, I
think.
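[Editorial note: the failure mode described above - a timeout struct whose
callback pointer is not cleared when the timeout fires, so later code still
treats it as pending - can be sketched in a few lines. All names here are
hypothetical; this illustrates the pattern, not libxl's actual code:]

```python
# Stale-callback pattern: by convention, func being non-None means "this
# timeout is still registered".  If the event loop forgets to clear func
# when the timeout fires, cleanup code will wrongly try to deregister (or
# even re-invoke) a timeout that has already occurred.
class Timeout:
    def __init__(self, func):
        self.func = func                      # non-None == "registered"

    def fire_buggy(self):
        self.func()                           # BUG: func left set after firing

    def fire_fixed(self):
        func, self.func = self.func, None     # clear the marker, then invoke
        func()

fired = []
t = Timeout(lambda: fired.append("tick"))
t.fire_fixed()
assert t.func is None     # cleanup now correctly sees it as no longer pending
```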

> > But please take a look at
> >   git://xenbits.xen.org/people/iwj/xen.git#wip.timeout-func0

I have rebased that ref.

> Ok, thanks. I'll give this a try in a bit.

Good luck and please let me know.

> Well, I should spend some time becoming familiar with this part of libxl
> so I can help fix issues too, instead of whining about them :).

TBH I'm hoping that this lot will only need debugging once ...

I'm going to post v3 of my event fixes series RSN.
(This time for sure...)

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:55:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKvZ-0003Rm-G6; Mon, 03 Feb 2014 14:55:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAKvX-0003Rh-J7
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:55:11 +0000
Received: from [85.158.137.68:10037] by server-9.bemta-3.messagelabs.com id
	F6/AF-10184-ECDAFE25; Mon, 03 Feb 2014 14:55:10 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391439308!13055520!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19728 invoked from network); 3 Feb 2014 14:55:09 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 14:55:09 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13Et6F3014630
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 14:55:07 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Et5D4024570
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 3 Feb 2014 14:55:06 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Et4cf024514; Mon, 3 Feb 2014 14:55:05 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 06:55:04 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B11BE1BFA0B; Mon,  3 Feb 2014 09:55:03 -0500 (EST)
Date: Mon, 3 Feb 2014 09:55:03 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
Message-ID: <20140203145503.GA3864@phenom.dumpdata.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 11:36:23PM +0900, Vitaliy Tomin wrote:
> >What about without PCI passthrough?  In all cases you appear to be
> passing the embedded graphics through to an HVM domain
> 
> Next log captured with following hvm domain config:
> 
> name = 'blank'
> builder = 'hvm'
> memory = 1024
> 
> Only 3 lines. It takes longer to crash, about 5-10 minutes; with an OS
> in the HVM guest it takes less than a minute.

You might want to add 'sync_console' to your Xen command line. That should
give you a bit more data (I hope).
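[Editorial note: sync_console is an option on the hypervisor's own command
line, not dom0's kernel line. Assuming a GRUB2 setup (the file path and the
other options shown are assumptions, adjust to the actual system), that
would look roughly like:]

```
# /etc/default/grub  (GRUB2; exact path and variable name vary by distro)
# Append sync_console so hypervisor console output is flushed synchronously
# and is not lost when the box resets.
GRUB_CMDLINE_XEN_DEFAULT="loglvl=all guest_loglvl=all sync_console"

# Then regenerate the bootloader config and reboot, e.g.:
#   update-grub            (Debian/Ubuntu)
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```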
> 
> Now trying to make log with watchdog added
> 
> On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
> > Can you please reply-to-all to keep this on the list.
> >
> > On 03/02/14 14:21, Vitaliy Tomin wrote:
> >>> Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
> >> No it crashes even with empty HVM domain (no os, no disk images no network)
> >
> > What about without PCI passthrough?  In all cases you appear to be
> > passing the embedded graphics through to an HVM domain
> >
> >>
> >>> Can you explain "=== whole system crashed ===" a little more.
> >> It means the system instantly rebooted. Black screen, no messages, no
> >> image on screen; the next thing I see is the POST of my real hardware.
> >>
> >> Log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.
> >
> > Ok - so something forced a system reset.  Even more curious that debug
> > mode is fine while non-debug is fatal.
> >
> > ~Andrew
> >
> >


> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:57:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKy9-0003YH-2T; Mon, 03 Feb 2014 14:57:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAKy7-0003Y9-NU
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:57:52 +0000
Received: from [85.158.137.68:10589] by server-13.bemta-3.messagelabs.com id
	FD/AA-26923-E6EAFE25; Mon, 03 Feb 2014 14:57:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391439468!13067724!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29810 invoked from network); 3 Feb 2014 14:57:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:57:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97306013"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 14:57:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 09:57:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAKy3-0005K6-BC;
	Mon, 03 Feb 2014 14:57:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAKy3-0005Ei-4o;
	Mon, 03 Feb 2014 14:57:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24719-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Feb 2014 14:57:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24719: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24719 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24719/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 24718

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24718
 test-amd64-i386-freebsd10-i386  3 host-install(3)            broken like 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24718
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24718

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f
baseline version:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 04d31ea1b1caeac7f77b5d18910761abd540545f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 3 09:31:03 2014 +0100

    QEMU_UPSTREAM_REVISION -> master again
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 14:57:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 14:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAKy9-0003YH-2T; Mon, 03 Feb 2014 14:57:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAKy7-0003Y9-NU
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 14:57:52 +0000
Received: from [85.158.137.68:10589] by server-13.bemta-3.messagelabs.com id
	FD/AA-26923-E6EAFE25; Mon, 03 Feb 2014 14:57:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391439468!13067724!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29810 invoked from network); 3 Feb 2014 14:57:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 14:57:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,772,1384300800"; d="scan'208";a="97306013"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 14:57:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 09:57:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAKy3-0005K6-BC;
	Mon, 03 Feb 2014 14:57:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAKy3-0005Ei-4o;
	Mon, 03 Feb 2014 14:57:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24719-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Feb 2014 14:57:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24719: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24719 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24719/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           3 host-install(3)         broken REGR. vs. 24718

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24718
 test-amd64-i386-freebsd10-i386  3 host-install(3)            broken like 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail REGR. vs. 24718
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24718

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f
baseline version:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 04d31ea1b1caeac7f77b5d18910761abd540545f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 3 09:31:03 2014 +0100

    QEMU_UPSTREAM_REVISION -> master again
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 15:10:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 15:10:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAL9v-000407-CN; Mon, 03 Feb 2014 15:10:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAL9t-0003zy-9m
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 15:10:01 +0000
Received: from [193.109.254.147:9334] by server-16.bemta-14.messagelabs.com id
	8B/02-21945-841BFE25; Mon, 03 Feb 2014 15:10:00 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391440199!1681604!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2300 invoked from network); 3 Feb 2014 15:09:59 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 15:09:59 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so12091263wgh.5
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 07:09:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:in-reply-to:references:mime-version:content-type
	:content-transfer-encoding:subject:from:date:to:cc:message-id;
	bh=MlHpsePjRLXv4XLz11FBxFOBgiEvwd5Fv9u7cStPGoU=;
	b=nn7RpRTlV4uyL0YXtO0/JLLqr93Rxfik+rB1VV+87UmEK9Zbx9eFs5XIZeSFSjUvt1
	r6xUl/7pZcnWVabQESH3NsAw93VHfgGNxSRQcoxYsDRnQiX7+hGFP9kKQGBvS4S4Mu6q
	vDeeSztjS/yiS6Ix+n2+prr8gOld2j6Mg60F8ly4SFtqOcaOtiMyVf5POttEwqfWUOit
	foZflz/GmArOwO/HOSCS1c6RgD4DXS4cXaXXKgoDZZ9RFQuEKTAilmq1VAgs9BR6SXaS
	mPsfq38slPvCwckn8Deb1hm14JNoeWnaylX5BmDqE+rvGUB9UBn6PyEywlpR1D/ZDJct
	V5Tw==
X-Received: by 10.194.175.66 with SMTP id by2mr1553484wjc.59.1391440198894;
	Mon, 03 Feb 2014 07:09:58 -0800 (PST)
Received: from ?IPV6:2001:470:7b2f:0:e01b:a53:6155:e339?
	([2001:470:7b2f:0:e01b:a53:6155:e339])
	by mx.google.com with ESMTPSA id h9sm28267120wjz.16.2014.02.03.07.09.57
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 07:09:58 -0800 (PST)
User-Agent: K-9 Mail for Android
In-Reply-To: <1391422624.10515.27.camel@kazak.uk.xensource.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
From: "Mike C." <miguelmclara@gmail.com>
Date: Mon, 03 Feb 2014 15:09:53 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
	second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2383904269643908690=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2383904269643908690==
Content-Type: multipart/alternative; boundary="----5DBNT15NF4JS9IUA7ZDU3OCHFYLPFP"
Content-Transfer-Encoding: 8bit

------5DBNT15NF4JS9IUA7ZDU3OCHFYLPFP
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
 charset=UTF-8

Makes sense...

Is there documentation about xl migrate and xl remus?

Say I want to migrate a guest but pause it first?

I could also snapshot the LVM volume, but that doesn't save the memory, and the domain would have to be offline.

So if I want to migrate but don't have shared storage, what's the best approach? DRBD?

Thanks
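
(A minimal sketch of the pause-then-migrate flow asked about above; "guest1" and "desthost" are hypothetical names, and this assumes xl's standard migrate-over-ssh usage rather than anything specific to this setup:)

```shell
# Pause the guest so its state stops changing, then migrate it.
# xl transfers the paused domain's memory image to the destination host.
xl pause guest1
xl migrate guest1 desthost
```

Note that, as Ian points out below, none of this helps unless the guest's disk contents are actually identical on both hosts (shared storage or replicated block devices).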


On February 3, 2014 10:17:04 AM GMT, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
>> I'm testing live migration without shared storage (I use LVM at both
>sides)
>> 
>> 
>> Issuing "xl migrate" worked nice and the machine was migrated to the
>second host
>> 
>> However I see this in the second host log:
>> 
>> [ 1502.563251] EXT4-fs error (device xvda1):
>> htree_dirblock_to_tree:892: inode #136303: block 533250: comm
>> run-parts: bad entry in directory: rec_len is smaller than minimal -
>> offset=0(0), inode=0, rec_len=0, name_len=0
>> Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
>> (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
>> 533250: comm run-parts: bad entry in directory: rec_len is smaller
>> than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
>> 
>> I also get errors like:
>> -bash: /bin/ping: cannot execute binary file
>> 
>> Is this to be expected when not using shared storage?
>
>Yes. If the underlying disk is not the same device between both hosts
>then all bets are off and all sorts of bad things will happen. Think
>about it -- what would you expect to happen to an OS if a disk suddenly
>started returning completely different data to what was written to it.
>
>What you have seen seems like a plausible outcome.
>
>Ian.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
------5DBNT15NF4JS9IUA7ZDU3OCHFYLPFP
Content-Type: text/html;
 charset=utf-8
Content-Transfer-Encoding: 8bit

<html><head></head><body>Makes sense...<br>
<br>
Is there documentation about xl migrate and xl remus?<br>
<br>
Say I want to migrate a guest but pause it first?<br>
<br>
I could also snapshot the LVM volume, but that doesn&#39;t save the memory, and the domain would have to be offline.<br>
<br>
So if I want to migrate but don&#39;t have shared storage, what&#39;s the best approach? DRBD?<br>
<br>
Thanks<br>
<br><br><div class="gmail_quote">On February 3, 2014 10:17:04 AM GMT, Ian Campbell &lt;Ian.Campbell@citrix.com&gt; wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<pre class="k9mail">On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:<br /><blockquote class="gmail_quote" style="margin: 0pt 0pt 1ex 0.8ex; border-left: 1px solid #729fcf; padding-left: 1ex;"> I'm testing live migration without shared storage (I use LVM at both sides)<br /> <br /> <br /> Issuing "xl migrate" worked nice and the machine was migrated to the second host<br /> <br /> However I see this in the second host log:<br /> <br /> [ 1502.563251] EXT4-fs error (device xvda1):<br /> htree_dirblock_to_tree:892: inode #136303: block 533250: comm<br /> run-parts: bad entry in directory: rec_len is smaller than minimal -<br /> offset=0(0), inode=0, rec_len=0, name_len=0<br /> Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error<br /> (device xvda1): htree_dirblock_to_tree:892: inode #136303: block<br /> 533250: comm run-parts: bad entry in directory: rec_len is smaller<br /> than minimal - offset=0(0), inode=0, rec_len=0, name_len=0<br /> <br /> I also get errors
like:<br /> -bash: /bin/ping: cannot execute binary file<br /> <br /> Is this to be expected when not using shared storage?<br /></blockquote><br />Yes. If the underlying disk is not the same device between both hosts<br />then all bets are off and all sorts of bad things will happen. Think<br />about it -- what would you expect to happen to an OS if a disk suddenly<br />started returning completely different data to what was written to it.<br /><br />What you have seen seems like a plausible outcome.<br /><br />Ian.<br /><br /></pre></blockquote></div><br>
-- <br>
Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>
------5DBNT15NF4JS9IUA7ZDU3OCHFYLPFP--



--===============2383904269643908690==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2383904269643908690==--



From xen-devel-bounces@lists.xen.org Mon Feb 03 15:19:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 15:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WALIY-0004Th-KE; Mon, 03 Feb 2014 15:18:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WALIW-0004Tc-LA
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 15:18:57 +0000
Received: from [193.109.254.147:40198] by server-7.bemta-14.messagelabs.com id
	88/42-23424-063BFE25; Mon, 03 Feb 2014 15:18:56 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391440734!1684175!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16931 invoked from network); 3 Feb 2014 15:18:55 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 15:18:55 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so11162889qcx.23
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 07:18:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=DXLvy0sMQRexBlSB3zhtqHeLA1EBSVGssx7aFMcbNrU=;
	b=fB6SPQFpSQJ83HTn3Cnwac4Bqn+xzHzxUhHQ4o9G0AH94PSI9PvGOMsE96m8ScIg2+
	Yg1Ap8ePYkXrtFNBKWkBHUdTKwlPnee+jFBeoF/bsFHX6gbTi70k7TI0oqTxbh+dGC+G
	zLMsy/lSZ3ZMR7GbX97EUAPUQinzzcjtya1kbCtEKo5vGOaL5iO2RMdWMtZ6/HUD9qQZ
	YNLb6ueOAm74X2ap5B+kEWcooVLYsteIUbQNOxZ3mlqUjdtYJQETTzlwp4GuHQaLRx00
	gt9O4j7seEcSw55WmLcu6is7BUuGjUKM5ps9OnnPFIXlf92CdGcd2dgubvASxWnBD/Qh
	hWEw==
MIME-Version: 1.0
X-Received: by 10.224.14.2 with SMTP id e2mr56472036qaa.73.1391440733632; Mon,
	03 Feb 2014 07:18:53 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 07:18:53 -0800 (PST)
In-Reply-To: <20140203145503.GA3864@phenom.dumpdata.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
	<20140203145503.GA3864@phenom.dumpdata.com>
Date: Tue, 4 Feb 2014 00:18:53 +0900
Message-ID: <CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7bdc8b6ea7917704f1820a75
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bdc8b6ea7917704f1820a75
Content-Type: text/plain; charset=ISO-8859-1

>You might want to add 'sync_console' on your Xen line. That should
>give you a bit more of data (I hope?)

Here is a log captured with sync_console and an empty HVM config.
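
(For reference, a sketch of the kind of GRUB2 boot entry this implies; the path and menu entry are illustrative, but the Xen options match those visible in the attached log:)

```
menuentry 'Xen (serial debug)' {
    # sync_console makes console output synchronous, so messages are not
    # lost when the box resets; the com1 settings route them to serial.
    multiboot /boot/xen.gz sync_console loglvl=all iommu=debug,verbose apic_verbosity=debug console=com1 com1=115200,8n1,pci
    module /boot/vmlinuz root=/dev/sda1
}
```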

On Mon, Feb 3, 2014 at 11:55 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Mon, Feb 03, 2014 at 11:36:23PM +0900, Vitaliy Tomin wrote:
>> >What about without PCI passthrough?  In all cases you appear to be
>> passing the embedded graphics through to an HVM domain
>>
>> Next log captured with following hvm domain config:
>>
>> name = 'blank'
>> builder = 'hvm'
>> memory = 1024
>>
>> Only 3 lines. It takes longer to crash, about 5-10 minutes; with an OS in
>> the HVM domain it takes less than a minute.
>
> You might want to add 'sync_console' on your Xen line. That should
> give you a bit more of data (I hope?)
>>
>> Now trying to capture a log with the watchdog added
>>
>> On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
>> <andrew.cooper3@citrix.com> wrote:
>> > Can you please reply-to-all to keep this on the list.
>> >
>> > On 03/02/14 14:21, Vitaliy Tomin wrote:
>> >>> Does it start with debug=n, but without trying to pass through the PCI device (the graphics core of the APU?) to the HVM?
>> >> No, it crashes even with an empty HVM domain (no OS, no disk images, no network)
>> >
>> > What about without PCI passthrough?  In all cases you appear to be
>> > passing the embedded graphics through to an HVM domain
>> >
>> >>
>> >>> Can you explain "=== whole system crashed ===" a little more.
>> >> It means the system instantly rebooted. Black screen, no messages, no
>> >> image on screen; the next thing I see is the POST of my real hardware.
>> >>
>> >> Log of Xen run with debug=y is attached; the HVM domain ran and there was no crash.
>> >
>> > Ok - so something forced a system reset.  Even more curious that debug
>> > mode is fine while non-debug is fatal.
>> >
>> > ~Andrew
>> >
>> >
>
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>

--047d7bdc8b6ea7917704f1820a75
Content-Type: application/octet-stream; name="screenlog.0"
Content-Disposition: attachment; filename="screenlog.0"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7vuyjv0

IFhlbiA0LjQuMF8wMi0yOTcuMQ0KKFhFTikgWGVuIHZlcnNpb24gNC40LjBfMDItMjk3LjEgKGFi
dWlsZEApIChnY2MgKFNVU0UgTGludXgpIDQuOC4xIDIwMTMwOTA5IFtnY2MtNF84LWJyYW5jaCBy
ZXZpc2lvbiAyMDIzODhdKSBkZWJ1Zz1uIFR1ZSBKYW4gMjggMTY6MDg6NDggVVRDIDIwMTQNCihY
RU4pIExhdGVzdCBDaGFuZ2VTZXQ6IA0KKFhFTikgQ29uc29sZSBvdXRwdXQgaXMgc3luY2hyb25v
dXMuDQooWEVOKSBCb290bG9hZGVyOiBHUlVCMiAyLjAwDQooWEVOKSBDb21tYW5kIGxpbmU6IHN5
bmNfY29uc29sZSBsb2dsdmw9YWxsIGlvbW11PWRlYnVnLHZlcmJvc2UgYXBpY192ZXJib3NpdHk9
ZGVidWcgY29uc29sZT1jb20xIGNvbTE9MTE1MjAwLDhuMSxwY2kNCihYRU4pIFZpZGVvIGluZm9y
bWF0aW9uOg0KKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNg0KKFhFTikg
IFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRzDQooWEVO
KSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMNCihYRU4p
ICBGb3VuZCA0IEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVOKSBYZW4tZTgyMCBSQU0g
bWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1c2FibGUp
DQooWEVOKSAgMDAwMDAwMDAwMDA5ZTgwMCAtIDAwMDAwMDAwMDAwYTAwMDAgKHJlc2VydmVkKQ0K
KFhFTikgIDAwMDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkNCihY
RU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQ0KKFhFTikg
IDAwMDAwMDAwOGQ2OGIwMDAgLSAwMDAwMDAwMDhkZDBhMDAwIChyZXNlcnZlZCkNCihYRU4pICAw
MDAwMDAwMDhkZDBhMDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpDQooWEVOKSAgMDAw
MDAwMDA4ZTA1YTAwMCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAw
MDAwOGVhNDUwMDAgLSAwMDAwMDAwMDhlYTQ2MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4
ZWE0NjAwMCAtIDAwMDAwMDAwOGVjNGMwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAwMDAwOGVj
NGMwMDAgLSAwMDAwMDAwMDhmMDY0MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4ZjA2NDAw
MCAtIDAwMDAwMDAwOGY3ZjMwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwOGY3ZjMwMDAg
LSAwMDAwMDAwMDhmODAwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBmZWMwMDAwMCAtIDAw
MDAwMDAwZmVjMDEwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMTAwMDAgLSAwMDAw
MDAwMGZlYzExMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0gMDAwMDAw
MDBmZWQwMTAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAwMDAwMDAw
ZmVkOTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmY4MDAwMDAgLSAwMDAwMDAwMTAw
MDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAwMDI1MDAw
MDAwMCAodXNhYmxlKQ0KKFhFTikgQUNQSTogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIgQUxBU0tB
KQ0KKFhFTikgQUNQSTogWFNEVCA4RTA0QTA3OCwgMDA3NCAocjEgQUxBU0tBICAgIEEgTSBJICAx
MDcyMDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAwMEY0IChy
NCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEkgV2Fy
bmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25hbCBmaWVsZCAiUG0yQ29udHJvbEJsb2NrIiBoYXMg
emVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEyNl0NCihY
RU4pIEFDUEk6IERTRFQgOEUwNEExODgsIDVGOUUgKHIyIEFMQVNLQSAgICBBIE0gSSAgICAgICAg
MCBJTlRMIDIwMDUxMTE3KQ0KKFhFTikgQUNQSTogRkFDUyA4RTA1MkU4MCwgMDA0MA0KKFhFTikg
QUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFN
SSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGUERUIDhFMDUwMjk4LCAwMDQ0IChyMSBBTEFTS0Eg
ICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEk6IE1DRkcgOEUwNTAy
RTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgIDEwMDEzKQ0KKFhF
TikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEgQUxBU0tBIE9FTUFBRlQgICAxMDcyMDA5
IE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBIUEVUIDhFMDUwNDA4LCAwMDM4IChyMSBBTEFT
S0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAgICAgNSkNCihYRU4pIEFDUEk6IElWUlMgOEUw
NTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAgICAwKQ0K
KFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEgICAgQU1EIEFOTkFQVVJOICAgICAg
ICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwRjEwLCAwNEI3IChyMiAg
ICBBTUQgQU5OQVBVUk4gICAgICAgIDEgTVNGVCAgNDAwMDAwMCkNCihYRU4pIEFDUEk6IENSQVQg
OEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAgICAx
KQ0KKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0IpDQooWEVOKSBObyBOVU1BIGNv
bmZpZ3VyYXRpb24gZm91bmQNCihYRU4pIEZha2luZyBhIG5vZGUgYXQgMDAwMDAwMDAwMDAwMDAw
MC0wMDAwMDAwMjUwMDAwMDAwDQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZA0KKFhFTikg
Zm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZkOTAwDQooWEVOKSBETUkgMi43IHByZXNlbnQuDQoo
WEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJw0KKFhFTikgVXNpbmcgQVBJQyBkcml2ZXIg
ZGVmYXVsdA0KKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNCihYRU4pIEFDUEk6
IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdDQooWEVOKSBBQ1BJ
OiAzMi82NFggRkFDUyBhZGRyZXNzIG1pc21hdGNoIGluIEZBRFQgLSA4ZTA1MmU4MC8wMDAwMDAw
MDAwMDAwMDAwLCB1c2luZyAzMg0KKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVwX3ZlY1s4
ZTA1MmU4Y10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNzIDB4
ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lkWzB4MTBd
IGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4p
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQpDQooWEVO
KSBQcm9jZXNzb3IgIzE3IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE4
IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxh
cGljX2lkWzB4MTNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElDIHZlcnNp
b24gMTYNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVkZ2UgbGlu
dFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA1XSBhZGRyZXNzWzB4ZmVjMDAwMDBd
IGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24gMzMsIGFk
ZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChidXMg
MCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRfU1JDX09W
UiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVOKSBBQ1BJOiBJ
UlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUu
DQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFibGluZyBBUElD
IG1vZGU6ICBGbGF0LiAgVXNpbmcgMSBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4
MTAyMjgyMTAgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgRVJTVCB0YWJsZSB3YXMgbm90IGZvdW5k
DQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRp
b24NCihYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykNCihYRU4pIE5S
X0NQVVM6NTEyIG5yX2NwdW1hc2tfYml0czo2NA0KKFhFTikgbWFwcGVkIEFQSUMgdG8gZmZmZjgy
Y2ZmZmJmYjAwMCAoZmVlMDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4MmNmZmZi
ZmEwMDAgKGZlYzAwMDAwKQ0KKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCA3NjAgTVNJL01TSS1Y
DQooWEVOKSBVc2luZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQoo
WEVOKSBEZXRlY3RlZCAzODkzLjA0MyBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGluZyBtZW1v
cnkgc2hhcmluZy4NCihYRU4pIHhzdGF0ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAweDNjMCBh
bmQgc3RhdGVzOiAweDQwMDAwMDAwMDAwMDAwMDcNCihYRU4pIEFNRCBGYW0xNWggbWFjaGluZSBj
aGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDog
YmFzZSBlMDAwMDAwMCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJOiBOb3Qg
dXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZpOiBGb3Vu
ZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkgVGFibGU6
DQooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4
NzANCihYRU4pIEFNRC1WaTogIFJldmlzaW9uIDB4Mg0KKFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0g
MHhlOA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRA0KKFhFTikgQU1ELVZpOiAgT0VNX1RhYmxl
X0lkIEFOTkFQVVJODQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgxDQooWEVOKSBBTUQt
Vmk6ICBDcmVhdG9yX0lkIEFNRCANCihYRU4pIEFNRC1WaTogIENyZWF0b3JfUmV2aXNpb24gMA0K
KFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxhZ3MgMHhmZSBsZW4gMHg0MCBp
ZCAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4OCBm
bGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OCAtPiAweGZmZmUNCihYRU4p
IEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAweDIwMCBmbGFncyAwDQoo
WEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZmIGFsaWFzIDB4YTQNCihY
RU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMCBpZCAwIGZsYWdzIDANCihYRU4p
IEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDANCihYRU4p
IEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDIgaGFuZGxlIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDB4
ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDEg
aGFuZGxlIDB4NQ0KKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJlczoNCihYRU4p
ICAtIFByZWZldGNoIFBhZ2VzIENvbW1hbmQNCihYRU4pICAtIFBlcmlwaGVyYWwgUGFnZSBTZXJ2
aWNlIFJlcXVlc3QNCihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uDQooWEVOKSAgLSBJbnZhbGlk
YXRlIEFsbCBDb21tYW5kDQooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4NCihYRU4pIEFN
RC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4NCihYRU4pIEFNRC1WaTogSU9NTVUgMCBF
bmFibGVkLg0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAtIERvbTAg
bW9kZTogUmVsYXhlZA0KKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkDQooWEVOKSBH
ZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwDQooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEw
DQooWEVOKSBHZXR0aW5nIElEOiAxMDAwMDAwMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3MDANCihY
RU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUjMA0KKFhF
TikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzDQooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBtZXRob2QN
CihYRU4pIGluaXQgSU9fQVBJQyBJUlFzDQooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBpbikgNS0w
LCA1LTE2LCA1LTE3LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5vdCBjb25u
ZWN0ZWQuDQooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0t
MSBwaW4yPS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhFTikgbnVt
YmVyIG9mIElPLUFQSUMgIzUgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIHRlc3RpbmcgdGhlIElPIEFQ
SUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLg0KKFhFTikgSU8gQVBJQyAjNS4uLi4uLg0KKFhFTikg
Li4uLiByZWdpc3RlciAjMDA6IDA1MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwg
QVBJQyBpZDogMDUNCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwDQooWEVOKSAu
Li4uLi4uICAgIDogTFRTICAgICAgICAgIDogMA0KKFhFTikgLi4uLiByZWdpc3RlciAjMDE6IDAw
MTc4MDIxDQooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVzOiAwMDE3
DQooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMQ0KKFhFTikgLi4uLi4uLiAg
ICAgOiBJTyBBUElDIHZlcnNpb246IDAwMjENCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAyOiAwNTAw
MDAwMA0KKFhFTikgLi4uLi4uLiAgICAgOiBhcmJpdHJhdGlvbjogMDUNCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAzOiAwNTAxODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBCb290IERUICAgIDogMQ0K
KFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBoeSBNYXNr
IFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAwMCAwMCAg
MSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAwMSAwMDEgMDEgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMwDQooWEVOKSAgMDIgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMA0KKFhFTikgIDAzIDAwMSAwMSAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgNCihYRU4pICAwNCAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDUgMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA2IDAwMSAwMSAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwNyAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDggMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA2MA0KKFhFTikgIDA5IDAwMSAwMSAgMSAgICAxICAgIDAgICAxICAg
MCAgICAxICAgIDAgICAgMDANCihYRU4pICAwYSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIEYxDQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA3MA0KKFhFTikgIDBjIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgNzgNCihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAg
ICAxICAgIDg4DQooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICA5MA0KKFhFTikgIDBmIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgOTgNCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzAN
CihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQoo
WEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhF
TikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4p
ICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSAg
MTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhFTikgVXNp
bmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nDQooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOg0KKFhF
TikgSVJRMjQwIC0+IDA6Mg0KKFhFTikgSVJRNDggLT4gMDoxDQooWEVOKSBJUlE1NiAtPiAwOjMN
CihYRU4pIElSUTY0IC0+IDA6NA0KKFhFTikgSVJRNzIgLT4gMDo1DQooWEVOKSBJUlE4MCAtPiAw
OjYNCihYRU4pIElSUTg4IC0+IDA6Nw0KKFhFTikgSVJROTYgLT4gMDo4DQooWEVOKSBJUlExMDQg
LT4gMDo5DQooWEVOKSBJUlEyNDEgLT4gMDoxMA0KKFhFTikgSVJRMTEyIC0+IDA6MTENCihYRU4p
IElSUTEyMCAtPiAwOjEyDQooWEVOKSBJUlExMzYgLT4gMDoxMw0KKFhFTikgSVJRMTQ0IC0+IDA6
MTQNCihYRU4pIElSUTE1MiAtPiAwOjE1DQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4gZG9uZS4NCihYRU4pIFVzaW5nIGxvY2FsIEFQSUMgdGltZXIgaW50ZXJydXB0
cy4NCihYRU4pIGNhbGlicmF0aW5nIEFQSUMgdGltZXIgLi4uDQooWEVOKSAuLi4uLiBDUFUgY2xv
Y2sgc3BlZWQgaXMgMzg5Mi45MTcyIE1Iei4NCihYRU4pIC4uLi4uIGhvc3QgYnVzIGNsb2NrIHNw
ZWVkIGlzIDk5LjgxODMgTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM3DQooWEVO
KSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVkIGNvbnNv
bGUgcmluZyBvZiAzMiBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVOKSBTVk06
IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdlIFRhYmxl
cyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxpc2F0aW9u
DQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSAgLSBWTUNCIENsZWFu
IEJpdHMNCihYRU4pICAtIERlY29kZUFzc2lzdHMNCihYRU4pICAtIFBhdXNlLUludGVyY2VwdCBG
aWx0ZXINCihYRU4pICAtIFRTQyBSYXRlIE1TUg0KKFhFTikgSFZNOiBTVk0gZW5hYmxlZA0KKFhF
TikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihYRU4pIEhW
TTogSEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIEhWTTogUFZIIG1vZGUgbm90
IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtDQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMx
DQooWEVOKSBtaWNyb2NvZGU6IENQVTEgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHg2MDAx
MTE5DQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVOKSBtaWNyb2NvZGU6IENQVTIg
Y29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MA0KKFhFTikgbWFza2VkIEV4dElOVCBvbiBDUFUj
Mw0KKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTANCihY
RU4pIEJyb3VnaHQgdXAgNCBDUFVzDQooWEVOKSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0KKFhFTikg
TUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZyZXF1ZW5jeQ0KKFhF
TikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBzdGFydGVkLg0KKFhF
TikgbXRycjogeW91ciBDUFVzIGhhZCBpbmNvbnNpc3RlbnQgdmFyaWFibGUgTVRSUiBzZXR0aW5n
cw0KKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVwIGFsbCBDUFVz
Lg0KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uDQooWEVOKSAqKiogTE9BRElO
RyBET01BSU4gMCAqKioNCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNvbXBhdDMy
DQooWEVOKSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgbHNiLCBwYWRkciAweDIwMDAgLT4gMHhjMDUw
MDANCihYRU4pIFBIWVNJQ0FMIE1FTU9SWSBBUlJBTkdFTUVOVDoNCihYRU4pICBEb20wIGFsbG9j
LjogICAwMDAwMDAwMjIzMDAwMDAwLT4wMDAwMDAwMjI0MDAwMDAwICgxNzQyMDgzIHBhZ2VzIHRv
IGJlIGFsbG9jYXRlZCkNCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRjZjc1MDAwLT4w
MDAwMDAwMjRmZmZmZTAwDQooWEVOKSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoNCihYRU4p
ICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgwMDAyMDAwLT5mZmZmZmZmZjgwYzA1MDAwDQooWEVO
KSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDAwMDAwMDAwMC0+MDAwMDAwMDAwMDAwMDAwMA0KKFhF
TikgIFBoeXMtTWFjaCBtYXA6IGZmZmZlYTAwMDAwMDAwMDAtPmZmZmZlYTAwMDBkNmFjNzANCihY
RU4pICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgwYzA1MDAwLT5mZmZmZmZmZjgwYzA1NGI0DQoo
WEVOKSAgUGFnZSB0YWJsZXM6ICAgZmZmZmZmZmY4MGMwNjAwMC0+ZmZmZmZmZmY4MGMxMTAwMA0K
KFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODBjMTEwMDAtPmZmZmZmZmZmODBjMTIwMDAN
CihYRU4pICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZmZjgxMDAwMDAw
DQooWEVOKSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MDAwMjAwMA0KKFhFTikgRG9tMCBoYXMg
bWF4aW11bSA0IFZDUFVzDQooWEVOKSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6
MDA6MDAuMA0KKFhFTikgQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2aWNlIDAwMDA6MDA6MDAuMg0K
KFhFTikgc2V0dXAgMDAwMDowMDowMC4yIGZvciBkMCBmYWlsZWQgKC0xOSkNCihYRU4pIEFNRC1W
aTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4MSwgcm9v
dCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4p
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHR5
cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eDgwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHg4MSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4ODgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRm
YzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9
IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTgsIHR5cGUgPSAweDcsIHJvb3Qg
dGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCB0eXBlID0gMHg3
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMCwgdHlw
ZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4
YTEsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweGEzLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9IDB4NSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4YTUsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZj
MzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE4LCB0eXBlID0gMHgyLCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhYSwgdHlwZSA9IDB4Miwgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YWIsIHR5cGUgPSAweDIs
IHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMwLCB0eXBl
ID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhj
MSwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YzIsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGMzLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhjNCwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzUsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEwMCwgdHlwZSA9IDB4MSwgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAxLCB0eXBlID0gMHgx
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyMzAsIHR5
cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eDIzOCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmlj
ZSBpZCA9IDB4MzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJs
ZTogZGV2aWNlIGlkID0gMHg0MDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFNjcnViYmluZyBG
cmVlIFJBTTogLmRvbmUuDQooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQg
c2V0IGF0IDB4NDAwMCBwYWdlcy4NCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFsbA0KKFhFTikgR3Vl
c3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFuZCB3YXJuaW5ncykN
CihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihY
RU4pICoqKioqKiogV0FSTklORzogQ09OU09MRSBPVVRQVVQgSVMgU1lOQ0hST05PVVMNCihYRU4p
ICoqKioqKiogVGhpcyBvcHRpb24gaXMgaW50ZW5kZWQgdG8gYWlkIGRlYnVnZ2luZyBvZiBYZW4g
YnkgZW5zdXJpbmcNCihYRU4pICoqKioqKiogdGhhdCBhbGwgb3V0cHV0IGlzIHN5bmNocm9ub3Vz
bHkgZGVsaXZlcmVkIG9uIHRoZSBzZXJpYWwgbGluZS4NCihYRU4pICoqKioqKiogSG93ZXZlciBp
dCBjYW4gaW50cm9kdWNlIFNJR05JRklDQU5UIGxhdGVuY2llcyBhbmQgYWZmZWN0DQooWEVOKSAq
KioqKioqIHRpbWVrZWVwaW5nLiBJdCBpcyBOT1QgcmVjb21tZW5kZWQgZm9yIHByb2R1Y3Rpb24g
dXNlIQ0KKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Kg0KKFhFTikgMy4uLiAyLi4uIDEuLi4gDQooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+IERPTTAg
KHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikNCihYRU4p
IEZyZWVkIDI5MmtCIGluaXQgbWVtb3J5Lg0KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0
byAweDgwMDgwMDAwMDEwMDAwMDAuDQooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4
ODAwODAwMDAwMTAwMDAwMC4NCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwMDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4
MDAwMDAxMDAwMDAwLg0KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAw
MDEwMDAwMDAuDQooWEVOKSBQQ0k6IFVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBidXMgMDAt
ZmYNCihYRU4pIG1tLmM6ODA5OiBkMDogRm9yY2luZyByZWFkLW9ubHkgYWNjZXNzIHRvIE1GTiBl
MDAwMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4wDQooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjAwLjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDEuMA0KKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4xDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjAyLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTAuMA0KKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxMC4xDQooWEVOKSBTUi1JT1YgZGV2aWNlIDAwMDA6MDA6MTEuMCBo
YXMgaXRzIHZpcnR1YWwgZnVuY3Rpb25zIGFscmVhZHkgZW5hYmxlZCAoMDFhYikNCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDA6MTEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
Mi4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjINCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTMuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4yDQooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
MDA6MTQuMQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4zDQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAwOjE0LjQNCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTQuNQ0K
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjE1LjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMw0KKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDoxOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4
LjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxOC4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQNCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MTowMC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjENCihYRU4pIFBDSSBhZGQg
ZGV2aWNlIDAwMDA6MDI6MDYuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMjowNy4wDQoo
WEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAzOjAwLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MDQ6MDAuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNTowMC4wDQooWEVOKSBJT0FQ
SUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xNiAtPiAweGEwIC0+IElSUSAxNiBNb2Rl
OjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0x
NyAtPiAweGE4IC0+IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOCAtPiAweGIwIC0+IElSUSAxOCBNb2RlOjEgQWN0aXZl
OjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOSAtPiAweGI4
IC0+IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91
dGluZyBlbnRyeSAoNS0yMSAtPiAweGMwIC0+IElSUSAyMSBNb2RlOjEgQWN0aXZlOjEpDQooWEVO
KSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0yMiAtPiAweGM4IC0+IElSUSAy
MiBNb2RlOjEgQWN0aXZlOjEpDQpbICAgMTguMzkxODQyXSBVbmFibGUgdG8gcmVhZCBzeXNycSBj
b2RlIGluIGNvbnRyb2wvc3lzcnENCihYRU4pIEFQSUMgZXJyb3Igb24gQ1BVMTogMDAoNDApDQoo
WEVOKSBBUElDIGVycm9yIG9uIENQVTM6IDAwKDQwKQ0KKFhFTikgQVBJQyBlcnJvciBvbiBDUFUw
OiAwMCg0MCkNCihYRU4pIEFQSUMgZXJyb3Igb24gQ1BVMjogMDAoNDApDQobW3IbW0gbW0oNDQ0K
V2VsY29tZSB0byBvcGVuU1VTRSAxMy4xICJCb3R0bGUiIC0gS2VybmVsIDMuMTEuNi00LXhlbiAo
eHZjMCkuDQ0KDQ0KDQ0KbGludXgtYjUyZCBsb2dpbjogKFhFTikgQU1ELVZpOiBTaGFyZSBwMm0g
dGFibGUgd2l0aCBpb21tdTogcDJtIHRhYmxlID0gMHgxZDk1YmUNCihYRU4pIEFNRC1WaTogU2hh
cmUgcDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWQxMzBhDQo=
--047d7bdc8b6ea7917704f1820a75
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bdc8b6ea7917704f1820a75--


From xen-devel-bounces@lists.xen.org Mon Feb 03 15:19:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 15:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WALIY-0004Th-KE; Mon, 03 Feb 2014 15:18:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WALIW-0004Tc-LA
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 15:18:57 +0000
Received: from [193.109.254.147:40198] by server-7.bemta-14.messagelabs.com id
	88/42-23424-063BFE25; Mon, 03 Feb 2014 15:18:56 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391440734!1684175!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16931 invoked from network); 3 Feb 2014 15:18:55 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 15:18:55 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so11162889qcx.23
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 07:18:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=DXLvy0sMQRexBlSB3zhtqHeLA1EBSVGssx7aFMcbNrU=;
	b=fB6SPQFpSQJ83HTn3Cnwac4Bqn+xzHzxUhHQ4o9G0AH94PSI9PvGOMsE96m8ScIg2+
	Yg1Ap8ePYkXrtFNBKWkBHUdTKwlPnee+jFBeoF/bsFHX6gbTi70k7TI0oqTxbh+dGC+G
	zLMsy/lSZ3ZMR7GbX97EUAPUQinzzcjtya1kbCtEKo5vGOaL5iO2RMdWMtZ6/HUD9qQZ
	YNLb6ueOAm74X2ap5B+kEWcooVLYsteIUbQNOxZ3mlqUjdtYJQETTzlwp4GuHQaLRx00
	gt9O4j7seEcSw55WmLcu6is7BUuGjUKM5ps9OnnPFIXlf92CdGcd2dgubvASxWnBD/Qh
	hWEw==
MIME-Version: 1.0
X-Received: by 10.224.14.2 with SMTP id e2mr56472036qaa.73.1391440733632; Mon,
	03 Feb 2014 07:18:53 -0800 (PST)
Received: by 10.224.30.77 with HTTP; Mon, 3 Feb 2014 07:18:53 -0800 (PST)
In-Reply-To: <20140203145503.GA3864@phenom.dumpdata.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
	<20140203145503.GA3864@phenom.dumpdata.com>
Date: Tue, 4 Feb 2014 00:18:53 +0900
Message-ID: <CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Content-Type: multipart/mixed; boundary=047d7bdc8b6ea7917704f1820a75
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--047d7bdc8b6ea7917704f1820a75
Content-Type: text/plain; charset=ISO-8859-1

>You might want to add 'sync_console' on your Xen line. That should
give you a bit more of data (I hope?)

Here is  log captured with sync_console and empty hvm config.

On Mon, Feb 3, 2014 at 11:55 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Mon, Feb 03, 2014 at 11:36:23PM +0900, Vitaliy Tomin wrote:
>> >What about without PCI passthrough?  In all cases you appear to be
>> passing the embedded graphics through to an HVM domain
>>
>> Next log captured with following hvm domain config:
>>
>> name = 'blank'
>> builder = 'hvm'
>> memory = 1024
>>
>> Only 3 lines. It takes longer to crash, about 5-10 minutes, with OS in
>> hvm it takes less than a minute.
>
> You might want to add 'sync_console' on your Xen line. That should
> give you a bit more of data (I hope?)
>>
>> Now trying to make log with watchdog added
>>
>> On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
>> <andrew.cooper3@citrix.com> wrote:
>> > Can you please reply-to-all to keep this on the list.
>> >
>> > On 03/02/14 14:21, Vitaliy Tomin wrote:
>> >>> Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
>> >> No, it crashes even with an empty HVM domain (no OS, no disk images, no network)
>> >
>> > What about without PCI passthrough?  In all cases you appear to be
>> > passing the embedded graphics through to an HVM domain
>> >
>> >>
>> >>> Can you explain "=== whole system crashed ===" a little more.
>> >> It means the system instantly rebooted. Black screen, no messages, no
>> >> image on screen; the next thing I see is the POST of my real hardware.
>> >>
>> >> Log of Xen run with debug=y attached; the HVM dom ran and there was no crash.
>> >
>> > Ok - so something forced a system reset.  Even more curious that debug
>> > mode is fine while non-debug is fatal.
>> >
>> > ~Andrew
>> >
>> >
>
>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>

--047d7bdc8b6ea7917704f1820a75
Content-Type: application/octet-stream; name="screenlog.0"
Content-Disposition: attachment; filename="screenlog.0"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hr7vuyjv0

IFhlbiA0LjQuMF8wMi0yOTcuMQ0KKFhFTikgWGVuIHZlcnNpb24gNC40LjBfMDItMjk3LjEgKGFi
dWlsZEApIChnY2MgKFNVU0UgTGludXgpIDQuOC4xIDIwMTMwOTA5IFtnY2MtNF84LWJyYW5jaCBy
ZXZpc2lvbiAyMDIzODhdKSBkZWJ1Zz1uIFR1ZSBKYW4gMjggMTY6MDg6NDggVVRDIDIwMTQNCihY
RU4pIExhdGVzdCBDaGFuZ2VTZXQ6IA0KKFhFTikgQ29uc29sZSBvdXRwdXQgaXMgc3luY2hyb25v
dXMuDQooWEVOKSBCb290bG9hZGVyOiBHUlVCMiAyLjAwDQooWEVOKSBDb21tYW5kIGxpbmU6IHN5
bmNfY29uc29sZSBsb2dsdmw9YWxsIGlvbW11PWRlYnVnLHZlcmJvc2UgYXBpY192ZXJib3NpdHk9
ZGVidWcgY29uc29sZT1jb20xIGNvbTE9MTE1MjAwLDhuMSxwY2kNCihYRU4pIFZpZGVvIGluZm9y
bWF0aW9uOg0KKFhFTikgIFZHQSBpcyB0ZXh0IG1vZGUgODB4MjUsIGZvbnQgOHgxNg0KKFhFTikg
IFZCRS9EREMgbWV0aG9kczogVjI7IEVESUQgdHJhbnNmZXIgdGltZTogMSBzZWNvbmRzDQooWEVO
KSBEaXNjIGluZm9ybWF0aW9uOg0KKFhFTikgIEZvdW5kIDMgTUJSIHNpZ25hdHVyZXMNCihYRU4p
ICBGb3VuZCA0IEVERCBpbmZvcm1hdGlvbiBzdHJ1Y3R1cmVzDQooWEVOKSBYZW4tZTgyMCBSQU0g
bWFwOg0KKFhFTikgIDAwMDAwMDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwMDllODAwICh1c2FibGUp
DQooWEVOKSAgMDAwMDAwMDAwMDA5ZTgwMCAtIDAwMDAwMDAwMDAwYTAwMDAgKHJlc2VydmVkKQ0K
KFhFTikgIDAwMDAwMDAwMDAwZTAwMDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkNCihY
RU4pICAwMDAwMDAwMDAwMTAwMDAwIC0gMDAwMDAwMDA4ZDY4YjAwMCAodXNhYmxlKQ0KKFhFTikg
IDAwMDAwMDAwOGQ2OGIwMDAgLSAwMDAwMDAwMDhkZDBhMDAwIChyZXNlcnZlZCkNCihYRU4pICAw
MDAwMDAwMDhkZDBhMDAwIC0gMDAwMDAwMDA4ZTA1YTAwMCAoQUNQSSBOVlMpDQooWEVOKSAgMDAw
MDAwMDA4ZTA1YTAwMCAtIDAwMDAwMDAwOGVhNDUwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAw
MDAwOGVhNDUwMDAgLSAwMDAwMDAwMDhlYTQ2MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4
ZWE0NjAwMCAtIDAwMDAwMDAwOGVjNGMwMDAgKEFDUEkgTlZTKQ0KKFhFTikgIDAwMDAwMDAwOGVj
NGMwMDAgLSAwMDAwMDAwMDhmMDY0MDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDA4ZjA2NDAw
MCAtIDAwMDAwMDAwOGY3ZjMwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwOGY3ZjMwMDAg
LSAwMDAwMDAwMDhmODAwMDAwICh1c2FibGUpDQooWEVOKSAgMDAwMDAwMDBmZWMwMDAwMCAtIDAw
MDAwMDAwZmVjMDEwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmVjMTAwMDAgLSAwMDAw
MDAwMGZlYzExMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMGZlZDAwMDAwIC0gMDAwMDAw
MDBmZWQwMTAwMCAocmVzZXJ2ZWQpDQooWEVOKSAgMDAwMDAwMDBmZWQ4MDAwMCAtIDAwMDAwMDAw
ZmVkOTAwMDAgKHJlc2VydmVkKQ0KKFhFTikgIDAwMDAwMDAwZmY4MDAwMDAgLSAwMDAwMDAwMTAw
MDAwMDAwIChyZXNlcnZlZCkNCihYRU4pICAwMDAwMDAwMTAwMDAxMDAwIC0gMDAwMDAwMDI1MDAw
MDAwMCAodXNhYmxlKQ0KKFhFTikgQUNQSTogUlNEUCAwMDBGMDQ5MCwgMDAyNCAocjIgQUxBU0tB
KQ0KKFhFTikgQUNQSTogWFNEVCA4RTA0QTA3OCwgMDA3NCAocjEgQUxBU0tBICAgIEEgTSBJICAx
MDcyMDA5IEFNSSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGQUNQIDhFMDUwMTI4LCAwMEY0IChy
NCBBTEFTS0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEkgV2Fy
bmluZyAodGJmYWR0LTA0NjQpOiBPcHRpb25hbCBmaWVsZCAiUG0yQ29udHJvbEJsb2NrIiBoYXMg
emVybyBhZGRyZXNzIG9yIGxlbmd0aDogMDAwMDAwMDAwMDAwMDAwMC8xIFsyMDA3MDEyNl0NCihY
RU4pIEFDUEk6IERTRFQgOEUwNEExODgsIDVGOUUgKHIyIEFMQVNLQSAgICBBIE0gSSAgICAgICAg
MCBJTlRMIDIwMDUxMTE3KQ0KKFhFTikgQUNQSTogRkFDUyA4RTA1MkU4MCwgMDA0MA0KKFhFTikg
QUNQSTogQVBJQyA4RTA1MDIyMCwgMDA3MiAocjMgQUxBU0tBICAgIEEgTSBJICAxMDcyMDA5IEFN
SSAgICAgMTAwMTMpDQooWEVOKSBBQ1BJOiBGUERUIDhFMDUwMjk4LCAwMDQ0IChyMSBBTEFTS0Eg
ICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAxMDAxMykNCihYRU4pIEFDUEk6IE1DRkcgOEUwNTAy
RTAsIDAwM0MgKHIxIEFMQVNLQSAgICBBIE0gSSAgMTA3MjAwOSBNU0ZUICAgIDEwMDEzKQ0KKFhF
TikgQUNQSTogQUFGVCA4RTA1MDMyMCwgMDBFNyAocjEgQUxBU0tBIE9FTUFBRlQgICAxMDcyMDA5
IE1TRlQgICAgICAgOTcpDQooWEVOKSBBQ1BJOiBIUEVUIDhFMDUwNDA4LCAwMDM4IChyMSBBTEFT
S0EgICAgQSBNIEkgIDEwNzIwMDkgQU1JICAgICAgICAgNSkNCihYRU4pIEFDUEk6IElWUlMgOEUw
NTA0NDAsIDAwNzAgKHIyICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAgICAwKQ0K
KFhFTikgQUNQSTogU1NEVCA4RTA1MDRCMCwgMEE2MCAocjEgICAgQU1EIEFOTkFQVVJOICAgICAg
ICAxIEFNRCAgICAgICAgIDEpDQooWEVOKSBBQ1BJOiBTU0RUIDhFMDUwRjEwLCAwNEI3IChyMiAg
ICBBTUQgQU5OQVBVUk4gICAgICAgIDEgTVNGVCAgNDAwMDAwMCkNCihYRU4pIEFDUEk6IENSQVQg
OEUwNTEzQzgsIDAyRjggKHIxICAgIEFNRCBBTk5BUFVSTiAgICAgICAgMSBBTUQgICAgICAgICAx
KQ0KKFhFTikgU3lzdGVtIFJBTTogNzY0Mk1CICg3ODI1NzIwa0IpDQooWEVOKSBObyBOVU1BIGNv
bmZpZ3VyYXRpb24gZm91bmQNCihYRU4pIEZha2luZyBhIG5vZGUgYXQgMDAwMDAwMDAwMDAwMDAw
MC0wMDAwMDAwMjUwMDAwMDAwDQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZA0KKFhFTikg
Zm91bmQgU01QIE1QLXRhYmxlIGF0IDAwMGZkOTAwDQooWEVOKSBETUkgMi43IHByZXNlbnQuDQoo
WEVOKSBBUElDIGJvb3Qgc3RhdGUgaXMgJ3hhcGljJw0KKFhFTikgVXNpbmcgQVBJQyBkcml2ZXIg
ZGVmYXVsdA0KKFhFTikgQUNQSTogUE0tVGltZXIgSU8gUG9ydDogMHg4MDgNCihYRU4pIEFDUEk6
IFNMRUVQIElORk86IHBtMXhfY250WzgwNCwwXSwgcG0xeF9ldnRbODAwLDBdDQooWEVOKSBBQ1BJ
OiAzMi82NFggRkFDUyBhZGRyZXNzIG1pc21hdGNoIGluIEZBRFQgLSA4ZTA1MmU4MC8wMDAwMDAw
MDAwMDAwMDAwLCB1c2luZyAzMg0KKFhFTikgQUNQSTogICAgICAgICAgICAgd2FrZXVwX3ZlY1s4
ZTA1MmU4Y10sIHZlY19zaXplWzIwXQ0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNzIDB4
ZmVlMDAwMDANCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDFdIGxhcGljX2lkWzB4MTBd
IGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE2IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4p
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDJdIGxhcGljX2lkWzB4MTFdIGVuYWJsZWQpDQooWEVO
KSBQcm9jZXNzb3IgIzE3IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChh
Y3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MTJdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE4
IDU6MyBBUElDIHZlcnNpb24gMTYNCihYRU4pIEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDRdIGxh
cGljX2lkWzB4MTNdIGVuYWJsZWQpDQooWEVOKSBQcm9jZXNzb3IgIzE5IDU6MyBBUElDIHZlcnNp
b24gMTYNCihYRU4pIEFDUEk6IExBUElDX05NSSAoYWNwaV9pZFsweGZmXSBoaWdoIGVkZ2UgbGlu
dFsweDFdKQ0KKFhFTikgQUNQSTogSU9BUElDIChpZFsweDA1XSBhZGRyZXNzWzB4ZmVjMDAwMDBd
IGdzaV9iYXNlWzBdKQ0KKFhFTikgSU9BUElDWzBdOiBhcGljX2lkIDUsIHZlcnNpb24gMzMsIGFk
ZHJlc3MgMHhmZWMwMDAwMCwgR1NJIDAtMjMNCihYRU4pIEFDUEk6IElOVF9TUkNfT1ZSIChidXMg
MCBidXNfaXJxIDAgZ2xvYmFsX2lycSAyIGRmbCBkZmwpDQooWEVOKSBBQ1BJOiBJTlRfU1JDX09W
UiAoYnVzIDAgYnVzX2lycSA5IGdsb2JhbF9pcnEgOSBsb3cgbGV2ZWwpDQooWEVOKSBBQ1BJOiBJ
UlEwIHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUu
DQooWEVOKSBBQ1BJOiBJUlE5IHVzZWQgYnkgb3ZlcnJpZGUuDQooWEVOKSBFbmFibGluZyBBUElD
IG1vZGU6ICBGbGF0LiAgVXNpbmcgMSBJL08gQVBJQ3MNCihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4
MTAyMjgyMTAgYmFzZTogMHhmZWQwMDAwMA0KKFhFTikgRVJTVCB0YWJsZSB3YXMgbm90IGZvdW5k
DQooWEVOKSBVc2luZyBBQ1BJIChNQURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRp
b24NCihYRU4pIFNNUDogQWxsb3dpbmcgNCBDUFVzICgwIGhvdHBsdWcgQ1BVcykNCihYRU4pIE5S
X0NQVVM6NTEyIG5yX2NwdW1hc2tfYml0czo2NA0KKFhFTikgbWFwcGVkIEFQSUMgdG8gZmZmZjgy
Y2ZmZmJmYjAwMCAoZmVlMDAwMDApDQooWEVOKSBtYXBwZWQgSU9BUElDIHRvIGZmZmY4MmNmZmZi
ZmEwMDAgKGZlYzAwMDAwKQ0KKFhFTikgSVJRIGxpbWl0czogMjQgR1NJLCA3NjAgTVNJL01TSS1Y
DQooWEVOKSBVc2luZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQgU2NoZWR1bGVyIChjcmVkaXQpDQoo
WEVOKSBEZXRlY3RlZCAzODkzLjA0MyBNSHogcHJvY2Vzc29yLg0KKFhFTikgSW5pdGluZyBtZW1v
cnkgc2hhcmluZy4NCihYRU4pIHhzdGF0ZV9pbml0OiB1c2luZyBjbnR4dF9zaXplOiAweDNjMCBh
bmQgc3RhdGVzOiAweDQwMDAwMDAwMDAwMDAwMDcNCihYRU4pIEFNRCBGYW0xNWggbWFjaGluZSBj
aGVjayByZXBvcnRpbmcgZW5hYmxlZA0KKFhFTikgUENJOiBNQ0ZHIGNvbmZpZ3VyYXRpb24gMDog
YmFzZSBlMDAwMDAwMCBzZWdtZW50IDAwMDAgYnVzZXMgMDAgLSBmZg0KKFhFTikgUENJOiBOb3Qg
dXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAwMC1mZg0KKFhFTikgQU1ELVZpOiBGb3Vu
ZCBNU0kgY2FwYWJpbGl0eSBibG9jayBhdCAweDU0DQooWEVOKSBBTUQtVmk6IEFDUEkgVGFibGU6
DQooWEVOKSBBTUQtVmk6ICBTaWduYXR1cmUgSVZSUw0KKFhFTikgQU1ELVZpOiAgTGVuZ3RoIDB4
NzANCihYRU4pIEFNRC1WaTogIFJldmlzaW9uIDB4Mg0KKFhFTikgQU1ELVZpOiAgQ2hlY2tTdW0g
MHhlOA0KKFhFTikgQU1ELVZpOiAgT0VNX0lkIEFNRA0KKFhFTikgQU1ELVZpOiAgT0VNX1RhYmxl
X0lkIEFOTkFQVVJODQooWEVOKSBBTUQtVmk6ICBPRU1fUmV2aXNpb24gMHgxDQooWEVOKSBBTUQt
Vmk6ICBDcmVhdG9yX0lkIEFNRCANCihYRU4pIEFNRC1WaTogIENyZWF0b3JfUmV2aXNpb24gMA0K
KFhFTikgQU1ELVZpOiBJVlJTIEJsb2NrOiB0eXBlIDB4MTAgZmxhZ3MgMHhmZSBsZW4gMHg0MCBp
ZCAweDINCihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4OCBm
bGFncyAwDQooWEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4OCAtPiAweGZmZmUNCihYRU4p
IEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0MyBpZCAweDIwMCBmbGFncyAwDQoo
WEVOKSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4MjAwIC0+IDB4MmZmIGFsaWFzIDB4YTQNCihY
RU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMCBpZCAwIGZsYWdzIDANCihYRU4p
IEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDANCihYRU4p
IEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDIgaGFuZGxlIDAN
CihYRU4pIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHg0OCBpZCAwIGZsYWdzIDB4
ZDcNCihYRU4pIEFNRC1WaTogSVZIRCBTcGVjaWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDEg
aGFuZGxlIDB4NQ0KKFhFTikgQU1ELVZpOiBJT01NVSBFeHRlbmRlZCBGZWF0dXJlczoNCihYRU4p
ICAtIFByZWZldGNoIFBhZ2VzIENvbW1hbmQNCihYRU4pICAtIFBlcmlwaGVyYWwgUGFnZSBTZXJ2
aWNlIFJlcXVlc3QNCihYRU4pICAtIEd1ZXN0IFRyYW5zbGF0aW9uDQooWEVOKSAgLSBJbnZhbGlk
YXRlIEFsbCBDb21tYW5kDQooWEVOKSBBTUQtVmk6IFBQUiBMb2cgRW5hYmxlZC4NCihYRU4pIEFN
RC1WaTogR3Vlc3QgVHJhbnNsYXRpb24gRW5hYmxlZC4NCihYRU4pIEFNRC1WaTogSU9NTVUgMCBF
bmFibGVkLg0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGVuYWJsZWQNCihYRU4pICAtIERvbTAg
bW9kZTogUmVsYXhlZA0KKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkDQooWEVOKSBH
ZXR0aW5nIFZFUlNJT046IDgwMDUwMDEwDQooWEVOKSBHZXR0aW5nIFZFUlNJT046IDgwMDUwMDEw
DQooWEVOKSBHZXR0aW5nIElEOiAxMDAwMDAwMA0KKFhFTikgR2V0dGluZyBMVlQwOiA3MDANCihY
RU4pIEdldHRpbmcgTFZUMTogNDAwDQooWEVOKSBlbmFibGVkIEV4dElOVCBvbiBDUFUjMA0KKFhF
TikgRU5BQkxJTkcgSU8tQVBJQyBJUlFzDQooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBtZXRob2QN
CihYRU4pIGluaXQgSU9fQVBJQyBJUlFzDQooWEVOKSAgSU8tQVBJQyAoYXBpY2lkLXBpbikgNS0w
LCA1LTE2LCA1LTE3LCA1LTE4LCA1LTE5LCA1LTIwLCA1LTIxLCA1LTIyLCA1LTIzIG5vdCBjb25u
ZWN0ZWQuDQooWEVOKSAuLlRJTUVSOiB2ZWN0b3I9MHhGMCBhcGljMT0wIHBpbjE9MiBhcGljMj0t
MSBwaW4yPS0xDQooWEVOKSBudW1iZXIgb2YgTVAgSVJRIHNvdXJjZXM6IDE1Lg0KKFhFTikgbnVt
YmVyIG9mIElPLUFQSUMgIzUgcmVnaXN0ZXJzOiAyNC4NCihYRU4pIHRlc3RpbmcgdGhlIElPIEFQ
SUMuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLg0KKFhFTikgSU8gQVBJQyAjNS4uLi4uLg0KKFhFTikg
Li4uLiByZWdpc3RlciAjMDA6IDA1MDAwMDAwDQooWEVOKSAuLi4uLi4uICAgIDogcGh5c2ljYWwg
QVBJQyBpZDogMDUNCihYRU4pIC4uLi4uLi4gICAgOiBEZWxpdmVyeSBUeXBlOiAwDQooWEVOKSAu
Li4uLi4uICAgIDogTFRTICAgICAgICAgIDogMA0KKFhFTikgLi4uLiByZWdpc3RlciAjMDE6IDAw
MTc4MDIxDQooWEVOKSAuLi4uLi4uICAgICA6IG1heCByZWRpcmVjdGlvbiBlbnRyaWVzOiAwMDE3
DQooWEVOKSAuLi4uLi4uICAgICA6IFBSUSBpbXBsZW1lbnRlZDogMQ0KKFhFTikgLi4uLi4uLiAg
ICAgOiBJTyBBUElDIHZlcnNpb246IDAwMjENCihYRU4pIC4uLi4gcmVnaXN0ZXIgIzAyOiAwNTAw
MDAwMA0KKFhFTikgLi4uLi4uLiAgICAgOiBhcmJpdHJhdGlvbjogMDUNCihYRU4pIC4uLi4gcmVn
aXN0ZXIgIzAzOiAwNTAxODAyMQ0KKFhFTikgLi4uLi4uLiAgICAgOiBCb290IERUICAgIDogMQ0K
KFhFTikgLi4uLiBJUlEgcmVkaXJlY3Rpb24gdGFibGU6DQooWEVOKSAgTlIgTG9nIFBoeSBNYXNr
IFRyaWcgSVJSIFBvbCBTdGF0IERlc3QgRGVsaSBWZWN0OiAgIA0KKFhFTikgIDAwIDAwMCAwMCAg
MSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4pICAwMSAwMDEgMDEgIDAg
ICAgMCAgICAwICAgMCAgIDAgICAgMSAgICAxICAgIDMwDQooWEVOKSAgMDIgMDAxIDAxICAwICAg
IDAgICAgMCAgIDAgICAwICAgIDEgICAgMSAgICBGMA0KKFhFTikgIDAzIDAwMSAwMSAgMCAgICAw
ICAgIDAgICAwICAgMCAgICAxICAgIDEgICAgMzgNCihYRU4pICAwNCAwMDEgMDEgIDAgICAgMCAg
ICAwICAgMCAgIDAgICAgMSAgICAxICAgIDQwDQooWEVOKSAgMDUgMDAxIDAxICAwICAgIDAgICAg
MCAgIDAgICAwICAgIDEgICAgMSAgICA0OA0KKFhFTikgIDA2IDAwMSAwMSAgMCAgICAwICAgIDAg
ICAwICAgMCAgICAxICAgIDEgICAgNTANCihYRU4pICAwNyAwMDEgMDEgIDAgICAgMCAgICAwICAg
MCAgIDAgICAgMSAgICAxICAgIDU4DQooWEVOKSAgMDggMDAxIDAxICAwICAgIDAgICAgMCAgIDAg
ICAwICAgIDEgICAgMSAgICA2MA0KKFhFTikgIDA5IDAwMSAwMSAgMSAgICAxICAgIDAgICAxICAg
MCAgICAxICAgIDAgICAgMDANCihYRU4pICAwYSAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAg
ICAgMSAgICAxICAgIEYxDQooWEVOKSAgMGIgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAg
IDEgICAgMSAgICA3MA0KKFhFTikgIDBjIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAx
ICAgIDEgICAgNzgNCihYRU4pICAwZCAwMDEgMDEgIDAgICAgMCAgICAwICAgMCAgIDAgICAgMSAg
ICAxICAgIDg4DQooWEVOKSAgMGUgMDAxIDAxICAwICAgIDAgICAgMCAgIDAgICAwICAgIDEgICAg
MSAgICA5MA0KKFhFTikgIDBmIDAwMSAwMSAgMCAgICAwICAgIDAgICAwICAgMCAgICAxICAgIDEg
ICAgOTgNCihYRU4pICAxMCAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAg
IDMwDQooWEVOKSAgMTEgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAz
MA0KKFhFTikgIDEyIDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzAN
CihYRU4pICAxMyAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQoo
WEVOKSAgMTQgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhF
TikgIDE1IDAwMCAwMCAgMSAgICAwICAgIDAgICAwICAgMCAgICAwICAgIDEgICAgMzANCihYRU4p
ICAxNiAwMDAgMDAgIDEgICAgMCAgICAwICAgMCAgIDAgICAgMCAgICAxICAgIDMwDQooWEVOKSAg
MTcgMDAwIDAwICAxICAgIDAgICAgMCAgIDAgICAwICAgIDAgICAgMSAgICAzMA0KKFhFTikgVXNp
bmcgdmVjdG9yLWJhc2VkIGluZGV4aW5nDQooWEVOKSBJUlEgdG8gcGluIG1hcHBpbmdzOg0KKFhF
TikgSVJRMjQwIC0+IDA6Mg0KKFhFTikgSVJRNDggLT4gMDoxDQooWEVOKSBJUlE1NiAtPiAwOjMN
CihYRU4pIElSUTY0IC0+IDA6NA0KKFhFTikgSVJRNzIgLT4gMDo1DQooWEVOKSBJUlE4MCAtPiAw
OjYNCihYRU4pIElSUTg4IC0+IDA6Nw0KKFhFTikgSVJROTYgLT4gMDo4DQooWEVOKSBJUlExMDQg
LT4gMDo5DQooWEVOKSBJUlEyNDEgLT4gMDoxMA0KKFhFTikgSVJRMTEyIC0+IDA6MTENCihYRU4p
IElSUTEyMCAtPiAwOjEyDQooWEVOKSBJUlExMzYgLT4gMDoxMw0KKFhFTikgSVJRMTQ0IC0+IDA6
MTQNCihYRU4pIElSUTE1MiAtPiAwOjE1DQooWEVOKSAuLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4gZG9uZS4NCihYRU4pIFVzaW5nIGxvY2FsIEFQSUMgdGltZXIgaW50ZXJydXB0
cy4NCihYRU4pIGNhbGlicmF0aW5nIEFQSUMgdGltZXIgLi4uDQooWEVOKSAuLi4uLiBDUFUgY2xv
Y2sgc3BlZWQgaXMgMzg5Mi45MTcyIE1Iei4NCihYRU4pIC4uLi4uIGhvc3QgYnVzIGNsb2NrIHNw
ZWVkIGlzIDk5LjgxODMgTUh6Lg0KKFhFTikgLi4uLi4gYnVzX3NjYWxlID0gMHg2NjM3DQooWEVO
KSBQbGF0Zm9ybSB0aW1lciBpcyAxNC4zMThNSHogSFBFVA0KKFhFTikgQWxsb2NhdGVkIGNvbnNv
bGUgcmluZyBvZiAzMiBLaUIuDQooWEVOKSBIVk06IEFTSURzIGVuYWJsZWQuDQooWEVOKSBTVk06
IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoNCihYRU4pICAtIE5lc3RlZCBQYWdlIFRhYmxl
cyAoTlBUKQ0KKFhFTikgIC0gTGFzdCBCcmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxpc2F0aW9u
DQooWEVOKSAgLSBOZXh0LVJJUCBTYXZlZCBvbiAjVk1FWElUDQooWEVOKSAgLSBWTUNCIENsZWFu
IEJpdHMNCihYRU4pICAtIERlY29kZUFzc2lzdHMNCihYRU4pICAtIFBhdXNlLUludGVyY2VwdCBG
aWx0ZXINCihYRU4pICAtIFRTQyBSYXRlIE1TUg0KKFhFTikgSFZNOiBTVk0gZW5hYmxlZA0KKFhF
TikgSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkgZGV0ZWN0ZWQNCihYRU4pIEhW
TTogSEFQIHBhZ2Ugc2l6ZXM6IDRrQiwgMk1CLCAxR0INCihYRU4pIEhWTTogUFZIIG1vZGUgbm90
IHN1cHBvcnRlZCBvbiB0aGlzIHBsYXRmb3JtDQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMx
DQooWEVOKSBtaWNyb2NvZGU6IENQVTEgY29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MHg2MDAx
MTE5DQooWEVOKSBtYXNrZWQgRXh0SU5UIG9uIENQVSMyDQooWEVOKSBtaWNyb2NvZGU6IENQVTIg
Y29sbGVjdF9jcHVfaW5mbzogcGF0Y2hfaWQ9MA0KKFhFTikgbWFza2VkIEV4dElOVCBvbiBDUFUj
Mw0KKFhFTikgbWljcm9jb2RlOiBDUFUzIGNvbGxlY3RfY3B1X2luZm86IHBhdGNoX2lkPTANCihY
RU4pIEJyb3VnaHQgdXAgNCBDUFVzDQooWEVOKSBBQ1BJIHNsZWVwIG1vZGVzOiBTMw0KKFhFTikg
TUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVzdCBwb2xsaW5nIGZyZXF1ZW5jeQ0KKFhF
TikgbWNoZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBzdGFydGVkLg0KKFhF
TikgbXRycjogeW91ciBDUFVzIGhhZCBpbmNvbnNpc3RlbnQgdmFyaWFibGUgTVRSUiBzZXR0aW5n
cw0KKFhFTikgbXRycjogcHJvYmFibHkgeW91ciBCSU9TIGRvZXMgbm90IHNldHVwIGFsbCBDUFVz
Lg0KKFhFTikgbXRycjogY29ycmVjdGVkIGNvbmZpZ3VyYXRpb24uDQooWEVOKSAqKiogTE9BRElO
RyBET01BSU4gMCAqKioNCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBsc2IsIGNvbXBhdDMy
DQooWEVOKSAgRG9tMCBrZXJuZWw6IDY0LWJpdCwgbHNiLCBwYWRkciAweDIwMDAgLT4gMHhjMDUw
MDANCihYRU4pIFBIWVNJQ0FMIE1FTU9SWSBBUlJBTkdFTUVOVDoNCihYRU4pICBEb20wIGFsbG9j
LjogICAwMDAwMDAwMjIzMDAwMDAwLT4wMDAwMDAwMjI0MDAwMDAwICgxNzQyMDgzIHBhZ2VzIHRv
IGJlIGFsbG9jYXRlZCkNCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMjRjZjc1MDAwLT4w
MDAwMDAwMjRmZmZmZTAwDQooWEVOKSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoNCihYRU4p
ICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgwMDAyMDAwLT5mZmZmZmZmZjgwYzA1MDAwDQooWEVO
KSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMDAwMDAwMDAwMC0+MDAwMDAwMDAwMDAwMDAwMA0KKFhF
TikgIFBoeXMtTWFjaCBtYXA6IGZmZmZlYTAwMDAwMDAwMDAtPmZmZmZlYTAwMDBkNmFjNzANCihY
RU4pICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgwYzA1MDAwLT5mZmZmZmZmZjgwYzA1NGI0DQoo
WEVOKSAgUGFnZSB0YWJsZXM6ICAgZmZmZmZmZmY4MGMwNjAwMC0+ZmZmZmZmZmY4MGMxMTAwMA0K
KFhFTikgIEJvb3Qgc3RhY2s6ICAgIGZmZmZmZmZmODBjMTEwMDAtPmZmZmZmZmZmODBjMTIwMDAN
CihYRU4pICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZmZjgxMDAwMDAw
DQooWEVOKSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MDAwMjAwMA0KKFhFTikgRG9tMCBoYXMg
bWF4aW11bSA0IFZDUFVzDQooWEVOKSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6
MDA6MDAuMA0KKFhFTikgQU1ELVZpOiBObyBpb21tdSBmb3IgZGV2aWNlIDAwMDA6MDA6MDAuMg0K
KFhFTikgc2V0dXAgMDAwMDowMDowMC4yIGZvciBkMCBmYWlsZWQgKC0xOSkNCihYRU4pIEFNRC1W
aTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OCwgdHlwZSA9IDB4MSwgcm9v
dCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4p
IEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OSwgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMN
CihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHR5
cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eDgwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNl
IGlkID0gMHg4MSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6
IGRldmljZSBpZCA9IDB4ODgsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRm
YzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJ
L08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9
IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTgsIHR5cGUgPSAweDcsIHJvb3Qg
dGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDlhLCB0eXBlID0gMHg3
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMCwgdHlw
ZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4
YTEsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFn
aW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweGEzLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9IDB4NSwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2Ug
dGFibGU6IGRldmljZSBpZCA9IDB4YTUsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZj
MzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE4LCB0eXBlID0gMHgyLCByb290IHRhYmxlID0g
MHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBT
ZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhYSwgdHlwZSA9IDB4Miwgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YWIsIHR5cGUgPSAweDIs
IHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQoo
WEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMwLCB0eXBl
ID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2Rl
ID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhj
MSwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YzIsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0g
MCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGMzLCB0eXBlID0gMHg2LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHhjNCwgdHlwZSA9IDB4Niwgcm9vdCB0YWJsZSA9IDB4MjI0ZmMz
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9P
IHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4YzUsIHR5cGUgPSAweDYsIHJvb3QgdGFibGUgPSAw
eDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDEwMCwgdHlwZSA9IDB4MSwgcm9vdCB0
YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAxLCB0eXBlID0gMHgx
LCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMw0K
KFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHgyMzAsIHR5
cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1v
ZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eDIzOCwgdHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMNCihYRU4pIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmlj
ZSBpZCA9IDB4MzAwLCB0eXBlID0gMHgxLCByb290IHRhYmxlID0gMHgyMjRmYzMwMDAsIGRvbWFp
biA9IDAsIHBhZ2luZyBtb2RlID0gMw0KKFhFTikgQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJs
ZTogZGV2aWNlIGlkID0gMHg0MDAsIHR5cGUgPSAweDEsIHJvb3QgdGFibGUgPSAweDIyNGZjMzAw
MCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzDQooWEVOKSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4
MjI0ZmMzMDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMNCihYRU4pIFNjcnViYmluZyBG
cmVlIFJBTTogLmRvbmUuDQooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQg
c2V0IGF0IDB4NDAwMCBwYWdlcy4NCihYRU4pIFN0ZC4gTG9nbGV2ZWw6IEFsbA0KKFhFTikgR3Vl
c3QgTG9nbGV2ZWw6IE5vdGhpbmcgKFJhdGUtbGltaXRlZDogRXJyb3JzIGFuZCB3YXJuaW5ncykN
CihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihY
RU4pICoqKioqKiogV0FSTklORzogQ09OU09MRSBPVVRQVVQgSVMgU1lOQ0hST05PVVMNCihYRU4p
ICoqKioqKiogVGhpcyBvcHRpb24gaXMgaW50ZW5kZWQgdG8gYWlkIGRlYnVnZ2luZyBvZiBYZW4g
YnkgZW5zdXJpbmcNCihYRU4pICoqKioqKiogdGhhdCBhbGwgb3V0cHV0IGlzIHN5bmNocm9ub3Vz
bHkgZGVsaXZlcmVkIG9uIHRoZSBzZXJpYWwgbGluZS4NCihYRU4pICoqKioqKiogSG93ZXZlciBp
dCBjYW4gaW50cm9kdWNlIFNJR05JRklDQU5UIGxhdGVuY2llcyBhbmQgYWZmZWN0DQooWEVOKSAq
KioqKioqIHRpbWVrZWVwaW5nLiBJdCBpcyBOT1QgcmVjb21tZW5kZWQgZm9yIHByb2R1Y3Rpb24g
dXNlIQ0KKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
Kg0KKFhFTikgMy4uLiAyLi4uIDEuLi4gDQooWEVOKSAqKiogU2VyaWFsIGlucHV0IC0+IERPTTAg
KHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0IHRvIFhlbikNCihYRU4p
IEZyZWVkIDI5MmtCIGluaXQgbWVtb3J5Lg0KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBh
dHRlbXB0ZWQgV1JNU1IgMDAwMDAwMDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0
byAweDgwMDgwMDAwMDEwMDAwMDAuDQooWEVOKSB0cmFwcy5jOjI1MTY6ZDAgRG9tYWluIGF0dGVt
cHRlZCBXUk1TUiAwMDAwMDAwMDAwMDAwNDEzIGZyb20gMHhjMDA4MDAwMDAxMDAwMDAwIHRvIDB4
ODAwODAwMDAwMTAwMDAwMC4NCihYRU4pIHRyYXBzLmM6MjUxNjpkMCBEb21haW4gYXR0ZW1wdGVk
IFdSTVNSIDAwMDAwMDAwMDAwMDA0MTMgZnJvbSAweGMwMDgwMDAwMDEwMDAwMDAgdG8gMHg4MDA4
MDAwMDAxMDAwMDAwLg0KKFhFTikgdHJhcHMuYzoyNTE2OmQwIERvbWFpbiBhdHRlbXB0ZWQgV1JN
U1IgMDAwMDAwMDAwMDAwMDQxMyBmcm9tIDB4YzAwODAwMDAwMTAwMDAwMCB0byAweDgwMDgwMDAw
MDEwMDAwMDAuDQooWEVOKSBQQ0k6IFVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBidXMgMDAt
ZmYNCihYRU4pIG1tLmM6ODA5OiBkMDogRm9yY2luZyByZWFkLW9ubHkgYWNjZXNzIHRvIE1GTiBl
MDAwMg0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMC4wDQooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjAwLjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDEuMA0KKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMS4xDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjAwOjAyLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTAuMA0KKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxMC4xDQooWEVOKSBTUi1JT1YgZGV2aWNlIDAwMDA6MDA6MTEuMCBo
YXMgaXRzIHZpcnR1YWwgZnVuY3Rpb25zIGFscmVhZHkgZW5hYmxlZCAoMDFhYikNCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MDA6MTEuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
Mi4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjEyLjINCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTMuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4yDQooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE0LjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
MDA6MTQuMQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4zDQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjAwOjE0LjQNCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTQuNQ0K
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjAwOjE1LjINCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTUuMw0KKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDowMDoxOC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4
LjENCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguMg0KKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxOC4zDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjE4LjQNCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTguNQ0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MTowMC4wDQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjENCihYRU4pIFBDSSBhZGQg
ZGV2aWNlIDAwMDA6MDI6MDYuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMjowNy4wDQoo
WEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAzOjAwLjANCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MDQ6MDAuMA0KKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowNTowMC4wDQooWEVOKSBJT0FQ
SUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xNiAtPiAweGEwIC0+IElSUSAxNiBNb2Rl
OjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0x
NyAtPiAweGE4IC0+IElSUSAxNyBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNl
dCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOCAtPiAweGIwIC0+IElSUSAxOCBNb2RlOjEgQWN0aXZl
OjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0xOSAtPiAweGI4
IC0+IElSUSAxOSBNb2RlOjEgQWN0aXZlOjEpDQooWEVOKSBJT0FQSUNbMF06IFNldCBQQ0kgcm91
dGluZyBlbnRyeSAoNS0yMSAtPiAweGMwIC0+IElSUSAyMSBNb2RlOjEgQWN0aXZlOjEpDQooWEVO
KSBJT0FQSUNbMF06IFNldCBQQ0kgcm91dGluZyBlbnRyeSAoNS0yMiAtPiAweGM4IC0+IElSUSAy
MiBNb2RlOjEgQWN0aXZlOjEpDQpbICAgMTguMzkxODQyXSBVbmFibGUgdG8gcmVhZCBzeXNycSBj
b2RlIGluIGNvbnRyb2wvc3lzcnENCihYRU4pIEFQSUMgZXJyb3Igb24gQ1BVMTogMDAoNDApDQoo
WEVOKSBBUElDIGVycm9yIG9uIENQVTM6IDAwKDQwKQ0KKFhFTikgQVBJQyBlcnJvciBvbiBDUFUw
OiAwMCg0MCkNCihYRU4pIEFQSUMgZXJyb3Igb24gQ1BVMjogMDAoNDApDQobW3IbW0gbW0oNDQ0K
V2VsY29tZSB0byBvcGVuU1VTRSAxMy4xICJCb3R0bGUiIC0gS2VybmVsIDMuMTEuNi00LXhlbiAo
eHZjMCkuDQ0KDQ0KDQ0KbGludXgtYjUyZCBsb2dpbjogKFhFTikgQU1ELVZpOiBTaGFyZSBwMm0g
dGFibGUgd2l0aCBpb21tdTogcDJtIHRhYmxlID0gMHgxZDk1YmUNCihYRU4pIEFNRC1WaTogU2hh
cmUgcDJtIHRhYmxlIHdpdGggaW9tbXU6IHAybSB0YWJsZSA9IDB4MWQxMzBhDQo=
--047d7bdc8b6ea7917704f1820a75
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--047d7bdc8b6ea7917704f1820a75--


From xen-devel-bounces@lists.xen.org Mon Feb 03 15:46:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 15:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WALim-000542-CF; Mon, 03 Feb 2014 15:46:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WALik-00053x-Gu
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 15:46:02 +0000
Received: from [193.109.254.147:38130] by server-16.bemta-14.messagelabs.com
	id 22/C2-21945-9B9BFE25; Mon, 03 Feb 2014 15:46:01 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391442359!1683109!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20747 invoked from network); 3 Feb 2014 15:46:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 15:46:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13FjuG3031281
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 15:45:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13FjtBR000668
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 3 Feb 2014 15:45:56 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13FjtiX023965; Mon, 3 Feb 2014 15:45:55 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 07:45:54 -0800
Message-ID: <52EFB9FA.7010905@oracle.com>
Date: Mon, 03 Feb 2014 10:47:06 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
	<20140203145503.GA3864@phenom.dumpdata.com>
	<CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
In-Reply-To: <CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/2014 10:18 AM, Vitaliy Tomin wrote:
>> You might want to add 'sync_console' on your Xen line. That should
>> give you a bit more of data (I hope?)
>
> Here is a log captured with sync_console and an empty HVM config.

 > (XEN) SR-IOV device 0000:00:11.0 has its virtual functions already
 > enabled (01ab)

11.0 is the SATA controller on the FCH and I don't believe it's an SR-IOV
device. I don't think it even has extended config space.
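(One way to sanity-check that from dom0, assuming pciutils and sysfs access; both commands only read state:)

```shell
# Look for an SR-IOV capability on 00:11.0; prints nothing if absent
lspci -s 00:11.0 -vvv | grep -i 'SR-IOV'

# Size of the config space exposed via sysfs:
# 256 = conventional PCI config space, 4096 = PCIe extended config space
stat -c %s /sys/bus/pci/devices/0000:00:11.0/config
```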

-boris

>
> On Mon, Feb 3, 2014 at 11:55 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Mon, Feb 03, 2014 at 11:36:23PM +0900, Vitaliy Tomin wrote:
>>>> What about without PCI passthrough?  In all cases you appear to be
>>>> passing the embedded graphics through to an HVM domain
>>>
>>> Next log captured with the following HVM domain config:
>>>
>>> name = 'blank'
>>> builder = 'hvm'
>>> memory = 1024
>>>
>>> Only 3 lines. It takes longer to crash, about 5-10 minutes; with an OS
>>> in the HVM it takes less than a minute.
>> You might want to add 'sync_console' on your Xen line. That should
>> give you a bit more of data (I hope?)
>>> Now trying to make a log with the watchdog added
>>>
>>> On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
>>> <andrew.cooper3@citrix.com> wrote:
>>>> Can you please reply-to-all to keep this on the list.
>>>>
>>>> On 03/02/14 14:21, Vitaliy Tomin wrote:
>>>>>> Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
>>>>> No it crashes even with empty HVM domain (no os, no disk images no network)
>>>> What about without PCI passthrough?  In all cases you appear to be
>>>> passing the embedded graphics through to an HVM domain
>>>>
>>>>>> Can you explain "=== whole system crashed ===" a little more.
>>>>> It means the system instantly rebooted. Black screen, no messages, no
>>>>> image on screen; the next thing I see is the POST of my real hardware.
>>>>>
>>>>> Log of xen run with debug=y attached; the hvm dom ran and no crash.
>>>> Ok - so something forced a system reset.  Even more curious that debug
>>>> mode is fine while non-debug is fatal.
>>>>
>>>> ~Andrew
>>>>
>>>>
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 15:46:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 15:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WALim-000542-CF; Mon, 03 Feb 2014 15:46:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WALik-00053x-Gu
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 15:46:02 +0000
Received: from [193.109.254.147:38130] by server-16.bemta-14.messagelabs.com
	id 22/C2-21945-9B9BFE25; Mon, 03 Feb 2014 15:46:01 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391442359!1683109!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20747 invoked from network); 3 Feb 2014 15:46:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 15:46:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13FjuG3031281
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 15:45:57 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13FjtBR000668
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 3 Feb 2014 15:45:56 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13FjtiX023965; Mon, 3 Feb 2014 15:45:55 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 07:45:54 -0800
Message-ID: <52EFB9FA.7010905@oracle.com>
Date: Mon, 03 Feb 2014 10:47:06 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Vitaliy Tomin <highwaystar.ru@gmail.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
	<20140203145503.GA3864@phenom.dumpdata.com>
	<CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
In-Reply-To: <CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/2014 10:18 AM, Vitaliy Tomin wrote:
>> You might want to add 'sync_console' on your Xen line. That should
> give you a bit more of data (I hope?)
>
> Here is a log captured with sync_console and an empty hvm config.

 > (XEN) SR-IOV device 0000:00:11.0 has its virtual functions already 
enabled (01ab)

11.0 is the SATA controller on the FCH, and I don't believe it's an
SR-IOV device. I don't think it even has extended config space.

-boris

>
> On Mon, Feb 3, 2014 at 11:55 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Mon, Feb 03, 2014 at 11:36:23PM +0900, Vitaliy Tomin wrote:
>>>> What about without PCI passthrough?  In all cases you appear to be
>>> passing the embedded graphics through to an HVM domain
>>>
>>> Next log captured with following hvm domain config:
>>>
>>> name = 'blank'
>>> builder = 'hvm'
>>> memory = 1024
>>>
>>> Only 3 lines. It takes longer to crash, about 5-10 minutes, with OS in
>>> hvm it takes less than a minute.
>> You might want to add 'sync_console' on your Xen line. That should
>> give you a bit more of data (I hope?)
>>> Now trying to make log with watchdog added
>>>
>>> On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
>>> <andrew.cooper3@citrix.com> wrote:
>>>> Can you please reply-to-all to keep this on the list.
>>>>
>>>> On 03/02/14 14:21, Vitaliy Tomin wrote:
>>>>>> Does it start with debug=n, but without trying to passthrough the pci device (the graphics core of the apu?) to the hvm ?
>>>>> No it crashes even with empty HVM domain (no os, no disk images no network)
>>>> What about without PCI passthrough?  In all cases you appear to be
>>>> passing the embedded graphics through to an HVM domain
>>>>
>>>>>> Can you explain "=== whole system crashed ===" a little more.
>>>>> It means the system instantly rebooted. Black screen, no messages, no
>>>>> image on screen; the next thing I see is the POST of my real hardware.
>>>>>
>>>>> Log of xen run with debug=y attached; the hvm dom ran and no crash.
>>>> Ok - so something forced a system reset.  Even more curious that debug
>>>> mode is fine while non-debug is fatal.
>>>>
>>>> ~Andrew
>>>>
>>>>
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMB1-0006Jc-3F; Mon, 03 Feb 2014 16:15:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAz-0006J5-Gr
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:13 +0000
Received: from [85.158.143.35:25261] by server-1.bemta-4.messagelabs.com id
	21/95-31661-090CFE25; Mon, 03 Feb 2014 16:15:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444106!2789402!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26130 invoked from network); 3 Feb 2014 16:15:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348920"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005i6-A6;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xd-32;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:47 +0000
Message-ID: <1391444091-22796-15-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 14/18] libxl: fork: Make SIGCHLD self-pipe
	nonblocking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the new libxl__pipe_nonblock and _close functions, rather than
open-coding the same logic.  Now the pipe is nonblocking, which avoids
a race that could result in libxl deadlocking in a multithreaded
program.
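The race being fixed comes from the classic self-pipe trick: a SIGCHLD
handler writes a byte into a pipe that the event loop polls, and if the
write end can block (pipe full), the handler can hang the process.  A
minimal, self-contained POSIX sketch of the idea (an illustration, not
libxl's actual libxl__pipe_nonblock implementation) showing how a
nonblocking pipe turns that blocking write into a harmless EAGAIN:

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Create a pipe and put both ends into nonblocking mode, roughly what
 * a helper like libxl__pipe_nonblock would do.  Returns 0 on success. */
static int pipe_nonblock(int fds[2])
{
    if (pipe(fds))
        return -1;
    for (int i = 0; i < 2; i++) {
        int flags = fcntl(fds[i], F_GETFL);
        if (flags < 0 || fcntl(fds[i], F_SETFL, flags | O_NONBLOCK) < 0) {
            close(fds[0]);
            close(fds[1]);
            return -1;
        }
    }
    return 0;
}

/* Simulate a signal handler poking the self-pipe until it is full.
 * With O_NONBLOCK the final write fails with EAGAIN instead of
 * blocking forever, which is the deadlock the patch avoids. */
static int fill_until_eagain(int wfd)
{
    char c = 0;
    while (write(wfd, &c, 1) == 1)
        ;
    return errno;
}
```

A handler using such a pipe can simply ignore the EAGAIN: one unread
byte is enough to wake the event loop, so dropping further wakeups is
harmless.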

Reported-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl.c      |    6 +-----
 tools/libxl/libxl_fork.c |   12 +++---------
 2 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 4679b51..3730074 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -171,11 +171,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
      * them; we wish the application good luck with understanding
      * this if and when it reaps them. */
     libxl__sigchld_notneeded(gc);
-
-    if (ctx->sigchld_selfpipe[0] >= 0) {
-        close(ctx->sigchld_selfpipe[0]);
-        close(ctx->sigchld_selfpipe[1]);
-    }
+    libxl__pipe_close(ctx->sigchld_selfpipe);
 
     pthread_mutex_destroy(&ctx->lock);
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2432512..1d0017b 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -343,17 +343,11 @@ void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 
 int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 {
-    int r, rc;
+    int rc;
 
     if (CTX->sigchld_selfpipe[0] < 0) {
-        r = pipe(CTX->sigchld_selfpipe);
-        if (r) {
-            CTX->sigchld_selfpipe[0] = -1;
-            LIBXL__LOG_ERRNO(CTX, LIBXL__LOG_ERROR,
-                             "failed to create sigchld pipe");
-            rc = ERROR_FAIL;
-            goto out;
-        }
+        rc = libxl__pipe_nonblock(CTX, CTX->sigchld_selfpipe);
+        if (rc) goto out;
     }
     if (!libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_register(gc, &CTX->sigchld_selfpipe_efd,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMAw-0006Id-H4; Mon, 03 Feb 2014 16:15:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAu-0006IW-Sx
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:09 +0000
Received: from [85.158.143.35:56976] by server-3.bemta-4.messagelabs.com id
	C3/BE-11539-C80CFE25; Mon, 03 Feb 2014 16:15:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444106!2789402!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25624 invoked from network); 3 Feb 2014 16:15:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348829"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005iC-MH;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xn-GL;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:49 +0000
Message-ID: <1391444091-22796-17-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 16/18] libxl: events: timedereg internal unit
	test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Test timeout deregistration idempotency.  In the current tree this
test fails because ev->func is not cleared, meaning that a timeout
can be removed from the list more than once, corrupting the list.

It is necessary to use multiple timeouts to demonstrate this bug,
because removing the very same entry twice from a list in quick
succession, without modifying the list in other ways in between,
doesn't actually corrupt the list: removing an entry from a
doubly-linked list just copies the next and back pointers from the
disappearing entry into its neighbours.
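The corruption scenario is easy to reproduce with a plain circular
doubly-linked list (a generic sketch, not libxl's actual list code):
deleting an entry splices its neighbours together but leaves the
entry's own prev/next pointers stale, so deleting it a second time
after the list has changed re-splices through those stale pointers.

```c
#include <stddef.h>

struct node {
    struct node *prev, *next;
};

/* Circular list with a sentinel head. */
static void list_init(struct node *h) { h->prev = h->next = h; }

static void list_add(struct node *h, struct node *n)
{
    n->next = h->next;
    n->prev = h;
    h->next->prev = n;
    h->next = n;
}

/* Splice the neighbours together; n's own pointers are left stale,
 * which is what makes a non-idempotent double delete dangerous. */
static void list_del(struct node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
}
```

Deleting the same entry twice in immediate succession is a no-op, as
the commit message notes; but delete A, add B, then delete A again,
and A's stale pointers splice B straight out of the list, which is why
the test needs several timeouts to expose the missing ev->func
clearing.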

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 .gitignore                         |    1 +
 tools/libxl/Makefile               |    2 +-
 tools/libxl/libxl_test_timedereg.c |   97 ++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_test_timedereg.h |    9 ++++
 tools/libxl/test_timedereg.c       |   11 ++++
 5 files changed, 119 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_test_timedereg.c
 create mode 100644 tools/libxl/libxl_test_timedereg.h
 create mode 100644 tools/libxl/test_timedereg.c

diff --git a/.gitignore b/.gitignore
index 3504584..db3b083 100644
--- a/.gitignore
+++ b/.gitignore
@@ -361,6 +361,7 @@ tools/libxl/testidl
 tools/libxl/testidl.c
 tools/libxl/*.pyc
 tools/libxl/libxl-save-helper
+tools/libxl/test_timedereg
 tools/blktap2/control/tap-ctl
 tools/firmware/etherboot/eb-roms.h
 tools/firmware/etherboot/gpxe-git-snapshot.tar.gz
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index f04cba7..1dccbf0 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -79,7 +79,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-LIBXL_TESTS +=
+LIBXL_TESTS += timedereg
 # Each entry FOO in LIBXL_TESTS has two main .c files:
 #   libxl_test_FOO.c  "inside libxl" code to support the test case
 #   test_FOO.c        "outside libxl" code to exercise the test case
diff --git a/tools/libxl/libxl_test_timedereg.c b/tools/libxl/libxl_test_timedereg.c
new file mode 100644
index 0000000..a44639f
--- /dev/null
+++ b/tools/libxl/libxl_test_timedereg.c
@@ -0,0 +1,97 @@
+/*
+ * timedereg test case for the libxl event system
+ *
+ * To run this test:
+ *    ./test_timedereg
+ * Success:
+ *    program takes a few seconds, prints some debugging output and exits 0
+ * Failure:
+ *    crash
+ *
+ * set up [0]-group timeouts 0 1 2
+ * wait for timeout 1 to occur
+ * deregister 0 and 2.  1 is supposed to be deregistered already
+ * register [1]-group 0 1 2
+ * deregister 1 (should be a no-op)
+ * wait for [1]-group 0 1 2 in turn
+ * on final callback assert that all have been deregistered
+ */
+
+#include "libxl_internal.h"
+
+#include "libxl_test_timedereg.h"
+
+#define NTIMES 3
+static const int ms[2][NTIMES] = { { 2000,1000,2000 }, { 1000,2000,3000 } };
+static libxl__ev_time et[2][NTIMES];
+static libxl__ao *tao;
+static int seq;
+
+static void occurs(libxl__egc *egc, libxl__ev_time *ev,
+                   const struct timeval *requested_abs);
+
+static void regs(libxl__gc *gc, int j)
+{
+    int rc, i;
+    LOG(DEBUG,"regs(%d)", j);
+    for (i=0; i<NTIMES; i++) {
+        rc = libxl__ev_time_register_rel(gc, &et[j][i], occurs, ms[j][i]);
+        assert(!rc);
+    }    
+}
+
+int libxl_test_timedereg(libxl_ctx *ctx, libxl_asyncop_how *ao_how)
+{
+    int i;
+    AO_CREATE(ctx, 0, ao_how);
+
+    tao = ao;
+
+    for (i=0; i<NTIMES; i++) {
+        libxl__ev_time_init(&et[0][i]);
+        libxl__ev_time_init(&et[1][i]);
+    }
+
+    regs(gc, 0);
+
+    return AO_INPROGRESS;
+}
+
+static void occurs(libxl__egc *egc, libxl__ev_time *ev,
+                   const struct timeval *requested_abs)
+{
+    EGC_GC;
+    int i;
+
+    int off = ev - &et[0][0];
+    LOG(DEBUG,"occurs[%d][%d] seq=%d", off/NTIMES, off%NTIMES, seq);
+
+    switch (seq) {
+    case 0:
+        assert(ev == &et[0][1]);
+        libxl__ev_time_deregister(gc, &et[0][0]);
+        libxl__ev_time_deregister(gc, &et[0][2]);
+        regs(gc, 1);
+        libxl__ev_time_deregister(gc, &et[0][1]);
+        break;
+
+    case 1:
+    case 2:
+        assert(ev == &et[1][seq-1]);
+        break;
+        
+    case 3:
+        assert(ev == &et[1][2]);
+        for (i=0; i<NTIMES; i++) {
+            assert(!libxl__ev_time_isregistered(&et[0][i]));
+            assert(!libxl__ev_time_isregistered(&et[1][i]));
+        }
+        libxl__ao_complete(egc, tao, 0);
+        return;
+
+    default:
+        abort();
+    }
+
+    seq++;
+}
diff --git a/tools/libxl/libxl_test_timedereg.h b/tools/libxl/libxl_test_timedereg.h
new file mode 100644
index 0000000..9547dba
--- /dev/null
+++ b/tools/libxl/libxl_test_timedereg.h
@@ -0,0 +1,9 @@
+#ifndef TEST_TIMEDEREG_H
+#define TEST_TIMEDEREG_H
+
+#include <pthread.h>
+
+int libxl_test_timedereg(libxl_ctx *ctx, libxl_asyncop_how *ao_how)
+    LIBXL_EXTERNAL_CALLERS_ONLY;
+
+#endif /*TEST_TIMEDEREG_H*/
diff --git a/tools/libxl/test_timedereg.c b/tools/libxl/test_timedereg.c
new file mode 100644
index 0000000..0081ce3
--- /dev/null
+++ b/tools/libxl/test_timedereg.c
@@ -0,0 +1,11 @@
+#include "test_common.h"
+#include "libxl_test_timedereg.h"
+
+int main(int argc, char **argv) {
+    int rc;
+
+    test_common_setup(XTL_DEBUG);
+
+    rc = libxl_test_timedereg(ctx, 0);
+    assert(!rc);
+}
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMB0-0006JP-EM; Mon, 03 Feb 2014 16:15:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAy-0006J1-Rb
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:13 +0000
Received: from [193.109.254.147:54142] by server-7.bemta-14.messagelabs.com id
	DF/D4-23424-090CFE25; Mon, 03 Feb 2014 16:15:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391444101!1691945!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12633 invoked from network); 3 Feb 2014 16:15:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348798"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hc-8r;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005wp-1N;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:37 +0000
Message-ID: <1391444091-22796-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 04/18] libxl: fork: Document
	libxl_sigchld_owner_libxl better
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_sigchld_owner_libxl ought to have been mentioned in the list of
options for chldowner.  Since it's the default, move the description
of its behaviour into the description of that option.
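For the mainloop mode discussed here, the application owns SIGCHLD and
must reap children itself and hand their exit statuses back to libxl.
A self-contained sketch of such a nonblocking reap loop (plain
waitpid, no libxl; in a real application each reaped (pid, status)
pair would be passed to libxl_childproc_exited):

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Reap every child that has already exited, without blocking.
 * Exit codes are recorded into codes[]; returns the number reaped.
 * An application in libxl_sigchld_owner_mainloop mode would instead
 * forward each (pid, status) pair to libxl_childproc_exited(). */
static int reap_children(int *codes, int max)
{
    int n = 0, status;
    pid_t pid;

    while (n < max && (pid = waitpid(-1, &status, WNOHANG)) > 0)
        codes[n++] = WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    return n;
}
```

WNOHANG makes the loop safe to call from the application's SIGCHLD
handling path: it drains whatever children have exited and returns
immediately instead of blocking the event loop.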

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index ff0b2fa..4f72c4b 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -442,9 +442,26 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  * For programs which run their own children alongside libxl's:
  *
  *     A program which does this must call libxl_childproc_setmode.
- *     There are two options:
+ *     There are three options:
  * 
+ *     libxl_sigchld_owner_libxl:
+ *
+ *       While any libxl operation which might use child processes
+ *       is running, works like libxl_sigchld_owner_libxl_always;
+ *       but, deinstalls the handler the rest of the time.
+ *
+ *       In this mode, the application, while it uses any libxl
+ *       operation which might create or use child processes (see
+ *       above):
+ *           - Must not have any child processes running.
+ *           - Must not install a SIGCHLD handler.
+ *           - Must not reap any children.
+ *
+ *       This is the default (i.e. if setmode is not called, or 0 is
+ *       passed for hooks).
+ *
  *     libxl_sigchld_owner_mainloop:
+ *
  *       The application must install a SIGCHLD handler and reap (at
  *       least) all of libxl's children and pass their exit status to
  *       libxl by calling libxl_childproc_exited.  (If the application
@@ -452,17 +469,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       on each ctx.)
  *
  *     libxl_sigchld_owner_libxl_always:
+ *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
  *       statues.  The application must have only one libxl_ctx
  *       configured this way.
- *
- * An application which fails to call setmode, or which passes 0 for
- * hooks, while it uses any libxl operation which might
- * create or use child processes (see above):
- *   - Must not have any child processes running.
- *   - Must not install a SIGCHLD handler.
- *   - Must not reap any children.
  */
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMAx-0006Io-1r; Mon, 03 Feb 2014 16:15:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAw-0006Ic-6q
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:10 +0000
Received: from [193.109.254.147:53666] by server-9.bemta-14.messagelabs.com id
	42/47-24895-D80CFE25; Mon, 03 Feb 2014 16:15:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391444101!1691945!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12291 invoked from network); 3 Feb 2014 16:15:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348764"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:14:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005i9-Gv;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xi-9U;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:48 +0000
Message-ID: <1391444091-22796-16-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 15/18] libxl: events: Makefile builds internal
	unit tests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We provide a new LIBXL_TESTS facility in the Makefile.
Also provide some helpful common routines for unit tests to use.

We don't want to put the weird test case entrypoints and the weird
test case code in the main libxl.so library.  Symbol hiding prevents
us from simply linking the libxl_test_FOO.o in later.  So instead we
provide a special library libxenlight_test.so which is used only
locally.

There are not yet any test cases defined; that will come in the next
patch.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile      |   32 ++++++++++++++++++++++++++++----
 tools/libxl/test_common.c |   15 +++++++++++++++
 tools/libxl/test_common.h |   14 ++++++++++++++
 3 files changed, 57 insertions(+), 4 deletions(-)
 create mode 100644 tools/libxl/test_common.c
 create mode 100644 tools/libxl/test_common.h

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..f04cba7 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -79,7 +79,23 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-$(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
+LIBXL_TESTS +=
+# Each entry FOO in LIBXL_TESTS has two main .c files:
+#   libxl_test_FOO.c  "inside libxl" code to support the test case
+#   test_FOO.c        "outside libxl" code to exercise the test case
+# Conventionally there will also be:
+#   libxl_test_FOO.h  interface between the "inside" and "outside" parts
+# The "inside libxl" file is compiled exactly like a piece of libxl, and the
+# "outside libxl" file is compiled exactly like a piece of application
+# code.  They must share information via explicit libxl entrypoints.
+# Unlike proper parts of libxl, it is permissible for libxl_test_FOO.c
+# to use private global variables for its state.
+
+LIBXL_TEST_OBJS += $(foreach t, $(LIBXL_TESTS),libxl_test_$t.o)
+TEST_PROG_OBJS += $(foreach t, $(LIBXL_TESTS),test_$t.o) test_common.o
+TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
+
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
@@ -95,7 +111,7 @@ CFLAGS_XL += $(CFLAGS_libxenlight)
 CFLAGS_XL += -Wshadow
 
 XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
-$(XL_OBJS) _libxl.api-for-check: \
+$(XL_OBJS) $(TEST_PROG_OBJS) _libxl.api-for-check: \
             CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
 $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
@@ -109,10 +125,12 @@ testidl.c: libxl_types.idl gentest.py libxl.h $(AUTOINCS)
 	mv testidl.c.new testidl.c
 
 .PHONY: all
-all: $(CLIENTS) libxenlight.so libxenlight.a libxlutil.so libxlutil.a \
+all: $(CLIENTS) $(TEST_PROGS) \
+		libxenlight.so libxenlight.a libxlutil.so libxlutil.a \
 	$(AUTOSRCS) $(AUTOINCS)
 
-$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS): \
+$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS) \
+		$(LIBXL_TEST_OBJS): \
 	$(AUTOINCS) libxl.api-ok
 
 %.c %.h:: %.y
@@ -175,6 +193,9 @@ libxenlight.so.$(MAJOR): libxenlight.so.$(MAJOR).$(MINOR)
 libxenlight.so.$(MAJOR).$(MINOR): $(LIBXL_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
 
+libxenlight_test.so: $(LIBXL_OBJS) $(LIBXL_TEST_OBJS)
+	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight_test.so $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
+
 libxenlight.a: $(LIBXL_OBJS)
 	$(AR) rcs libxenlight.a $^
 
@@ -193,6 +214,9 @@ libxlutil.a: $(LIBXLU_OBJS)
 xl: $(XL_OBJS) libxlutil.so libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) libxlutil.so $(LDLIBS_libxenlight) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
 
+test_%: test_%.o test_common.o libxlutil.so libxenlight_test.so
+	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS_libxenlight) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
+
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(APPEND_LDFLAGS)
 
diff --git a/tools/libxl/test_common.c b/tools/libxl/test_common.c
new file mode 100644
index 0000000..83b94eb
--- /dev/null
+++ b/tools/libxl/test_common.c
@@ -0,0 +1,15 @@
+#include "test_common.h"
+
+libxl_ctx *ctx;
+
+void test_common_setup(int level)
+{
+    xentoollog_logger_stdiostream *logger_s
+        = xtl_createlogger_stdiostream(stderr, level,  0);
+    assert(logger_s);
+
+    xentoollog_logger *logger = (xentoollog_logger*)logger_s;
+
+    int rc = libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, logger);
+    assert(!rc);
+}    
diff --git a/tools/libxl/test_common.h b/tools/libxl/test_common.h
new file mode 100644
index 0000000..8b2471e
--- /dev/null
+++ b/tools/libxl/test_common.h
@@ -0,0 +1,14 @@
+#ifndef TEST_COMMON_H
+#define TEST_COMMON_H
+
+#include "libxl.h"
+
+#include <assert.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+void test_common_setup(int level);
+
+extern libxl_ctx *ctx;
+
+#endif /*TEST_COMMON_H*/
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMAu-0006IQ-4G; Mon, 03 Feb 2014 16:15:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAs-0006II-CN
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:06 +0000
Received: from [85.158.143.35:24320] by server-2.bemta-4.messagelabs.com id
	1E/B8-10891-980CFE25; Mon, 03 Feb 2014 16:15:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444103!2789386!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25324 invoked from network); 3 Feb 2014 16:15:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348801"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hf-Fw;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005wu-8G;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:38 +0000
Message-ID: <1391444091-22796-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 05/18] libxl: fork: assert that chldmode is right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In libxl_childproc_reaped, check that the chldmode is as expected.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 7b84765..85db2fb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
 {
     EGC_INIT(ctx);
     CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
     int rc = childproc_reaped(egc, pid, status);
     CTX_UNLOCK;
     EGC_FREE;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMB4-0006Kf-QZ; Mon, 03 Feb 2014 16:15:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMB2-0006Jm-59
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:16 +0000
Received: from [85.158.143.35:61717] by server-3.bemta-4.messagelabs.com id
	EE/EE-11539-390CFE25; Mon, 03 Feb 2014 16:15:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444106!2789402!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26216 invoked from network); 3 Feb 2014 16:15:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348922"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005i3-3d;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005xY-So;
	Mon, 03 Feb 2014 16:14:58 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:46 +0000
Message-ID: <1391444091-22796-14-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Ian
	Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 13/18] libxl: events: Break out
	libxl__pipe_nonblock, _close
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Break the pipe creation and destruction out of the poller code into
two new functions, libxl__pipe_nonblock and libxl__pipe_close.
Also change the direct use of pipe() to libxl_pipe().

No functional change overall, apart from minor differences in the
exact log messages.

Also move libxl__self_pipe_wakeup and libxl__self_pipe_eatall into the
new pipe utilities section in libxl_event.c; this is pure code motion.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

--
v3: Mention that we switched pipe() -> libxl_pipe()
---
 tools/libxl/libxl_event.c    |  104 ++++++++++++++++++++++++++----------------
 tools/libxl/libxl_internal.h |    9 ++++
 2 files changed, 73 insertions(+), 40 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 1c48fee..93f8fdc 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -1271,26 +1271,81 @@ int libxl_event_check(libxl_ctx *ctx, libxl_event **event_r,
 }
 
 /*
- * Manipulation of pollers
+ * Utilities for pipes (specifically, useful for self-pipes)
  */
 
-int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
+void libxl__pipe_close(int fds[2])
+{
+    if (fds[0] >= 0) close(fds[0]);
+    if (fds[1] >= 0) close(fds[1]);
+    fds[0] = fds[1] = -1;
+}
+
+int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2])
 {
     int r, rc;
-    p->fd_polls = 0;
-    p->fd_rindices = 0;
 
-    r = pipe(p->wakeup_pipe);
+    r = libxl_pipe(ctx, fds);
     if (r) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "cannot create poller pipe");
+        fds[0] = fds[1] = -1;
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = libxl_fd_set_nonblock(ctx, p->wakeup_pipe[0], 1);
+    rc = libxl_fd_set_nonblock(ctx, fds[0], 1);
     if (rc) goto out;
 
-    rc = libxl_fd_set_nonblock(ctx, p->wakeup_pipe[1], 1);
+    rc = libxl_fd_set_nonblock(ctx, fds[1], 1);
+    if (rc) goto out;
+
+    return 0;
+
+ out:
+    libxl__pipe_close(fds);
+    return rc;
+}
+
+int libxl__self_pipe_wakeup(int fd)
+{
+    static const char buf[1] = "";
+
+    for (;;) {
+        int r = write(fd, buf, 1);
+        if (r==1) return 0;
+        assert(r==-1);
+        if (errno == EINTR) continue;
+        if (errno == EWOULDBLOCK) return 0;
+        assert(errno);
+        return errno;
+    }
+}
+
+int libxl__self_pipe_eatall(int fd)
+{
+    char buf[256];
+    for (;;) {
+        int r = read(fd, buf, sizeof(buf));
+        if (r == sizeof(buf)) continue;
+        if (r >= 0) return 0;
+        assert(r == -1);
+        if (errno == EINTR) continue;
+        if (errno == EWOULDBLOCK) return 0;
+        assert(errno);
+        return errno;
+    }
+}
+
+/*
+ * Manipulation of pollers
+ */
+
+int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
+{
+    int rc;
+    p->fd_polls = 0;
+    p->fd_rindices = 0;
+
+    rc = libxl__pipe_nonblock(ctx, p->wakeup_pipe);
     if (rc) goto out;
 
     return 0;
@@ -1302,8 +1357,7 @@ int libxl__poller_init(libxl_ctx *ctx, libxl__poller *p)
 
 void libxl__poller_dispose(libxl__poller *p)
 {
-    if (p->wakeup_pipe[1] > 0) close(p->wakeup_pipe[1]);
-    if (p->wakeup_pipe[0] > 0) close(p->wakeup_pipe[0]);
+    libxl__pipe_close(p->wakeup_pipe);
     free(p->fd_polls);
     free(p->fd_rindices);
 }
@@ -1347,36 +1401,6 @@ void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p)
     if (e) LIBXL__EVENT_DISASTER(egc, "cannot poke watch pipe", e, 0);
 }
 
-int libxl__self_pipe_wakeup(int fd)
-{
-    static const char buf[1] = "";
-
-    for (;;) {
-        int r = write(fd, buf, 1);
-        if (r==1) return 0;
-        assert(r==-1);
-        if (errno == EINTR) continue;
-        if (errno == EWOULDBLOCK) return 0;
-        assert(errno);
-        return errno;
-    }
-}
-
-int libxl__self_pipe_eatall(int fd)
-{
-    char buf[256];
-    for (;;) {
-        int r = read(fd, buf, sizeof(buf));
-        if (r == sizeof(buf)) continue;
-        if (r >= 0) return 0;
-        assert(r == -1);
-        if (errno == EINTR) continue;
-        if (errno == EWOULDBLOCK) return 0;
-        assert(errno);
-        return errno;
-    }
-}
-
 /*
  * Main event loop iteration
  */
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 8429448..9d17586 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -509,6 +509,15 @@ _hidden char *libxl__strndup(libxl__gc *gc_opt, const char *c, size_t n) NN1;
  * string. (similar to a gc'd dirname(3)). */
 _hidden char *libxl__dirname(libxl__gc *gc_opt, const char *s) NN1;
 
+/* Make a pipe and set both ends nonblocking.  On error, nothing
+ * is left open and both fds[]==-1, and a message is logged.
+ * Useful for self-pipes. */
+_hidden int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2]);
+/* Closes the pipe fd(s).  Either or both of fds[] may be -1 meaning
+ * `not open'.  Ignores any errors.  Sets fds[] to -1. */
+_hidden void libxl__pipe_close(int fds[2]);
+
+
 /* Each of these logs errors and returns a libxl error code.
  * They do not mind if path is already removed.
  * For _file, path must not be a directory; for _directory it must be. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMAw-0006Id-H4; Mon, 03 Feb 2014 16:15:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAu-0006IW-Sx
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:09 +0000
Received: from [85.158.143.35:56976] by server-3.bemta-4.messagelabs.com id
	C3/BE-11539-C80CFE25; Mon, 03 Feb 2014 16:15:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444106!2789402!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25624 invoked from network); 3 Feb 2014 16:15:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348829"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005iC-MH;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xn-GL;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:49 +0000
Message-ID: <1391444091-22796-17-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 16/18] libxl: events: timedereg internal unit
	test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Test timeout deregistration idempotency.  In the current tree this
test fails because ev->func is not cleared, meaning that a timeout
can be removed from the list more than once, corrupting the list.

It is necessary to use multiple timeouts to demonstrate this bug,
because removing the very same entry twice in quick succession,
without modifying the list in other ways in between, doesn't actually
corrupt the list: removing an entry from a doubly-linked list just
copies the next and back pointers from the disappearing entry into
its neighbours.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 .gitignore                         |    1 +
 tools/libxl/Makefile               |    2 +-
 tools/libxl/libxl_test_timedereg.c |   97 ++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_test_timedereg.h |    9 ++++
 tools/libxl/test_timedereg.c       |   11 ++++
 5 files changed, 119 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_test_timedereg.c
 create mode 100644 tools/libxl/libxl_test_timedereg.h
 create mode 100644 tools/libxl/test_timedereg.c

diff --git a/.gitignore b/.gitignore
index 3504584..db3b083 100644
--- a/.gitignore
+++ b/.gitignore
@@ -361,6 +361,7 @@ tools/libxl/testidl
 tools/libxl/testidl.c
 tools/libxl/*.pyc
 tools/libxl/libxl-save-helper
+tools/libxl/test_timedereg
 tools/blktap2/control/tap-ctl
 tools/firmware/etherboot/eb-roms.h
 tools/firmware/etherboot/gpxe-git-snapshot.tar.gz
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index f04cba7..1dccbf0 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -79,7 +79,7 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-LIBXL_TESTS +=
+LIBXL_TESTS += timedereg
 # Each entry FOO in LIBXL_TESTS has two main .c files:
 #   libxl_test_FOO.c  "inside libxl" code to support the test case
 #   test_FOO.c        "outside libxl" code to exercise the test case
diff --git a/tools/libxl/libxl_test_timedereg.c b/tools/libxl/libxl_test_timedereg.c
new file mode 100644
index 0000000..a44639f
--- /dev/null
+++ b/tools/libxl/libxl_test_timedereg.c
@@ -0,0 +1,97 @@
+/*
+ * timedereg test case for the libxl event system
+ *
+ * To run this test:
+ *    ./test_timedereg
+ * Success:
+ *    program takes a few seconds, prints some debugging output and exits 0
+ * Failure:
+ *    crash
+ *
+ * set up [0]-group timeouts 0 1 2
+ * wait for timeout 1 to occur
+ * deregister 0 and 2.  1 is supposed to be deregistered already
+ * register [1]-group 0 1 2
+ * deregister 1 (should be a no-op)
+ * wait for [1]-group 0 1 2 in turn
+ * on final callback assert that all have been deregistered
+ */
+
+#include "libxl_internal.h"
+
+#include "libxl_test_timedereg.h"
+
+#define NTIMES 3
+static const int ms[2][NTIMES] = { { 2000,1000,2000 }, { 1000,2000,3000 } };
+static libxl__ev_time et[2][NTIMES];
+static libxl__ao *tao;
+static int seq;
+
+static void occurs(libxl__egc *egc, libxl__ev_time *ev,
+                   const struct timeval *requested_abs);
+
+static void regs(libxl__gc *gc, int j)
+{
+    int rc, i;
+    LOG(DEBUG,"regs(%d)", j);
+    for (i=0; i<NTIMES; i++) {
+        rc = libxl__ev_time_register_rel(gc, &et[j][i], occurs, ms[j][i]);
+        assert(!rc);
+    }    
+}
+
+int libxl_test_timedereg(libxl_ctx *ctx, libxl_asyncop_how *ao_how)
+{
+    int i;
+    AO_CREATE(ctx, 0, ao_how);
+
+    tao = ao;
+
+    for (i=0; i<NTIMES; i++) {
+        libxl__ev_time_init(&et[0][i]);
+        libxl__ev_time_init(&et[1][i]);
+    }
+
+    regs(gc, 0);
+
+    return AO_INPROGRESS;
+}
+
+static void occurs(libxl__egc *egc, libxl__ev_time *ev,
+                   const struct timeval *requested_abs)
+{
+    EGC_GC;
+    int i;
+
+    int off = ev - &et[0][0];
+    LOG(DEBUG,"occurs[%d][%d] seq=%d", off/NTIMES, off%NTIMES, seq);
+
+    switch (seq) {
+    case 0:
+        assert(ev == &et[0][1]);
+        libxl__ev_time_deregister(gc, &et[0][0]);
+        libxl__ev_time_deregister(gc, &et[0][2]);
+        regs(gc, 1);
+        libxl__ev_time_deregister(gc, &et[0][1]);
+        break;
+
+    case 1:
+    case 2:
+        assert(ev == &et[1][seq-1]);
+        break;
+        
+    case 3:
+        assert(ev == &et[1][2]);
+        for (i=0; i<NTIMES; i++) {
+            assert(!libxl__ev_time_isregistered(&et[0][i]));
+            assert(!libxl__ev_time_isregistered(&et[1][i]));
+        }
+        libxl__ao_complete(egc, tao, 0);
+        return;
+
+    default:
+        abort();
+    }
+
+    seq++;
+}
diff --git a/tools/libxl/libxl_test_timedereg.h b/tools/libxl/libxl_test_timedereg.h
new file mode 100644
index 0000000..9547dba
--- /dev/null
+++ b/tools/libxl/libxl_test_timedereg.h
@@ -0,0 +1,9 @@
+#ifndef TEST_TIMEDEREG_H
+#define TEST_TIMEDEREG_H
+
+#include <pthread.h>
+
+int libxl_test_timedereg(libxl_ctx *ctx, libxl_asyncop_how *ao_how)
+    LIBXL_EXTERNAL_CALLERS_ONLY;
+
+#endif /*TEST_TIMEDEREG_H*/
diff --git a/tools/libxl/test_timedereg.c b/tools/libxl/test_timedereg.c
new file mode 100644
index 0000000..0081ce3
--- /dev/null
+++ b/tools/libxl/test_timedereg.c
@@ -0,0 +1,11 @@
+#include "test_common.h"
+#include "libxl_test_timedereg.h"
+
+int main(int argc, char **argv) {
+    int rc;
+
+    test_common_setup(XTL_DEBUG);
+
+    rc = libxl_test_timedereg(ctx, 0);
+    assert(!rc);
+}
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-    if (p->wakeup_pipe[0] > 0) close(p->wakeup_pipe[0]);
+    libxl__pipe_close(p->wakeup_pipe);
     free(p->fd_polls);
     free(p->fd_rindices);
 }
@@ -1347,36 +1401,6 @@ void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p)
     if (e) LIBXL__EVENT_DISASTER(egc, "cannot poke watch pipe", e, 0);
 }
 
-int libxl__self_pipe_wakeup(int fd)
-{
-    static const char buf[1] = "";
-
-    for (;;) {
-        int r = write(fd, buf, 1);
-        if (r==1) return 0;
-        assert(r==-1);
-        if (errno == EINTR) continue;
-        if (errno == EWOULDBLOCK) return 0;
-        assert(errno);
-        return errno;
-    }
-}
-
-int libxl__self_pipe_eatall(int fd)
-{
-    char buf[256];
-    for (;;) {
-        int r = read(fd, buf, sizeof(buf));
-        if (r == sizeof(buf)) continue;
-        if (r >= 0) return 0;
-        assert(r == -1);
-        if (errno == EINTR) continue;
-        if (errno == EWOULDBLOCK) return 0;
-        assert(errno);
-        return errno;
-    }
-}
-
 /*
  * Main event loop iteration
  */
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 8429448..9d17586 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -509,6 +509,15 @@ _hidden char *libxl__strndup(libxl__gc *gc_opt, const char *c, size_t n) NN1;
  * string. (similar to a gc'd dirname(3)). */
 _hidden char *libxl__dirname(libxl__gc *gc_opt, const char *s) NN1;
 
+/* Make a pipe and set both ends nonblocking.  On error, nothing
+ * is left open and both fds[]==-1, and a message is logged.
+ * Useful for self-pipes. */
+_hidden int libxl__pipe_nonblock(libxl_ctx *ctx, int fds[2]);
+/* Closes the pipe fd(s).  Either or both of fds[] may be -1 meaning
+ * `not open'.  Ignores any errors.  Sets fds[] to -1. */
+_hidden void libxl__pipe_close(int fds[2]);
+
+
 /* Each of these logs errors and returns a libxl error code.
  * They do not mind if path is already removed.
  * For _file, path must not be a directory; for _directory it must be. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMAu-0006IQ-4G; Mon, 03 Feb 2014 16:15:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAs-0006II-CN
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:06 +0000
Received: from [85.158.143.35:24320] by server-2.bemta-4.messagelabs.com id
	1E/B8-10891-980CFE25; Mon, 03 Feb 2014 16:15:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444103!2789386!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25324 invoked from network); 3 Feb 2014 16:15:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348801"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hf-Fw;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005wu-8G;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:38 +0000
Message-ID: <1391444091-22796-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 05/18] libxl: fork: assert that chldmode is right
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In libxl_childproc_reaped, check that the chldmode is as expected.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 7b84765..85db2fb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -322,6 +322,8 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
 {
     EGC_INIT(ctx);
     CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
     int rc = childproc_reaped(egc, pid, status);
     CTX_UNLOCK;
     EGC_FREE;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMB0-0006JP-EM; Mon, 03 Feb 2014 16:15:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAy-0006J1-Rb
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:13 +0000
Received: from [193.109.254.147:54142] by server-7.bemta-14.messagelabs.com id
	DF/D4-23424-090CFE25; Mon, 03 Feb 2014 16:15:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391444101!1691945!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12633 invoked from network); 3 Feb 2014 16:15:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348798"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hc-8r;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005wp-1N;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:37 +0000
Message-ID: <1391444091-22796-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 04/18] libxl: fork: Document
	libxl_sigchld_owner_libxl better
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_sigchld_owner_libxl ought to have been mentioned in the list of
options for chldowner.  Since it's the default, move the description
of its behaviour into the description of that option.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index ff0b2fa..4f72c4b 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -442,9 +442,26 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  * For programs which run their own children alongside libxl's:
  *
  *     A program which does this must call libxl_childproc_setmode.
- *     There are two options:
+ *     There are three options:
  * 
+ *     libxl_sigchld_owner_libxl:
+ *
+ *       While any libxl operation which might use child processes
+ *       is running, works like libxl_sigchld_owner_libxl_always;
+ *       but, deinstalls the handler the rest of the time.
+ *
+ *       In this mode, the application, while it uses any libxl
+ *       operation which might create or use child processes (see
+ *       above):
+ *           - Must not have any child processes running.
+ *           - Must not install a SIGCHLD handler.
+ *           - Must not reap any children.
+ *
+ *       This is the default (i.e. if setmode is not called, or 0 is
+ *       passed for hooks).
+ *
  *     libxl_sigchld_owner_mainloop:
+ *
  *       The application must install a SIGCHLD handler and reap (at
  *       least) all of libxl's children and pass their exit status to
  *       libxl by calling libxl_childproc_exited.  (If the application
@@ -452,17 +469,11 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       on each ctx.)
  *
  *     libxl_sigchld_owner_libxl_always:
+ *
  *       The application expects libxl to reap all of its children,
  *       and provides a callback to be notified of their exit
 *       statuses.  The application must have only one libxl_ctx
  *       configured this way.
- *
- * An application which fails to call setmode, or which passes 0 for
- * hooks, while it uses any libxl operation which might
- * create or use child processes (see above):
- *   - Must not have any child processes running.
- *   - Must not install a SIGCHLD handler.
- *   - Must not reap any children.
  */
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMAx-0006Io-1r; Mon, 03 Feb 2014 16:15:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAw-0006Ic-6q
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:10 +0000
Received: from [193.109.254.147:53666] by server-9.bemta-14.messagelabs.com id
	42/47-24895-D80CFE25; Mon, 03 Feb 2014 16:15:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391444101!1691945!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12291 invoked from network); 3 Feb 2014 16:15:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348764"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:14:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005i9-Gv;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xi-9U;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:48 +0000
Message-ID: <1391444091-22796-16-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 15/18] libxl: events: Makefile builds internal
	unit tests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We provide a new LIBXL_TESTS facility in the Makefile.
Also provide some helpful common routines for unit tests to use.

We don't want to put the weird test case entrypoints and the weird
test case code in the main libxl.so library.  Symbol hiding prevents
us from simply directly linking the libxl_test_FOO.o in later.  So
instead we provide a special library libxenlight_test.so which is used
only locally.

There are not yet any test cases defined; that will come in the next
patch.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile      |   32 ++++++++++++++++++++++++++++----
 tools/libxl/test_common.c |   15 +++++++++++++++
 tools/libxl/test_common.h |   14 ++++++++++++++
 3 files changed, 57 insertions(+), 4 deletions(-)
 create mode 100644 tools/libxl/test_common.c
 create mode 100644 tools/libxl/test_common.h

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..f04cba7 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -79,7 +79,23 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
-$(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
+LIBXL_TESTS +=
+# Each entry FOO in LIBXL_TESTS has two main .c files:
+#   libxl_test_FOO.c  "inside libxl" code to support the test case
+#   test_FOO.c        "outside libxl" code to exercise the test case
+# Conventionally there will also be:
+#   libxl_test_FOO.h  interface between the "inside" and "outside" parts
+# The "inside libxl" file is compiled exactly like a piece of libxl, and the
+# "outside libxl" file is compiled exactly like a piece of application
+# code.  They must share information via explicit libxl entrypoints.
+# Unlike proper parts of libxl, it is permissible for libxl_test_FOO.c
+# to use private global variables for its state.
+
+LIBXL_TEST_OBJS += $(foreach t, $(LIBXL_TESTS),libxl_test_$t.o)
+TEST_PROG_OBJS += $(foreach t, $(LIBXL_TESTS),test_$t.o) test_common.o
+TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
+
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
 	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
@@ -95,7 +111,7 @@ CFLAGS_XL += $(CFLAGS_libxenlight)
 CFLAGS_XL += -Wshadow
 
 XL_OBJS = xl.o xl_cmdimpl.o xl_cmdtable.o xl_sxp.o
-$(XL_OBJS) _libxl.api-for-check: \
+$(XL_OBJS) $(TEST_PROG_OBJS) _libxl.api-for-check: \
             CFLAGS += $(CFLAGS_libxenctrl) # For xentoollog.h
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
 $(XL_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h # libxl_json.h needs it.
@@ -109,10 +125,12 @@ testidl.c: libxl_types.idl gentest.py libxl.h $(AUTOINCS)
 	mv testidl.c.new testidl.c
 
 .PHONY: all
-all: $(CLIENTS) libxenlight.so libxenlight.a libxlutil.so libxlutil.a \
+all: $(CLIENTS) $(TEST_PROGS) \
+		libxenlight.so libxenlight.a libxlutil.so libxlutil.a \
 	$(AUTOSRCS) $(AUTOINCS)
 
-$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS): \
+$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS) \
+		$(LIBXL_TEST_OBJS): \
 	$(AUTOINCS) libxl.api-ok
 
 %.c %.h:: %.y
@@ -175,6 +193,9 @@ libxenlight.so.$(MAJOR): libxenlight.so.$(MAJOR).$(MINOR)
 libxenlight.so.$(MAJOR).$(MINOR): $(LIBXL_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
 
+libxenlight_test.so: $(LIBXL_OBJS) $(LIBXL_TEST_OBJS)
+	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight_test.so $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
+
 libxenlight.a: $(LIBXL_OBJS)
 	$(AR) rcs libxenlight.a $^
 
@@ -193,6 +214,9 @@ libxlutil.a: $(LIBXLU_OBJS)
 xl: $(XL_OBJS) libxlutil.so libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) libxlutil.so $(LDLIBS_libxenlight) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
 
+test_%: test_%.o test_common.o libxlutil.so libxenlight_test.so
+	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS_libxenlight) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
+
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(APPEND_LDFLAGS)
 
diff --git a/tools/libxl/test_common.c b/tools/libxl/test_common.c
new file mode 100644
index 0000000..83b94eb
--- /dev/null
+++ b/tools/libxl/test_common.c
@@ -0,0 +1,15 @@
+#include "test_common.h"
+
+libxl_ctx *ctx;
+
+void test_common_setup(int level)
+{
+    xentoollog_logger_stdiostream *logger_s
+        = xtl_createlogger_stdiostream(stderr, level,  0);
+    assert(logger_s);
+
+    xentoollog_logger *logger = (xentoollog_logger*)logger_s;
+
+    int rc = libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, logger);
+    assert(!rc);
+}    
diff --git a/tools/libxl/test_common.h b/tools/libxl/test_common.h
new file mode 100644
index 0000000..8b2471e
--- /dev/null
+++ b/tools/libxl/test_common.h
@@ -0,0 +1,14 @@
+#ifndef TEST_COMMON_H
+#define TEST_COMMON_H
+
+#include "libxl.h"
+
+#include <assert.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+void test_common_setup(int level);
+
+extern libxl_ctx *ctx;
+
+#endif /*TEST_COMMON_H*/
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMB1-0006Jc-3F; Mon, 03 Feb 2014 16:15:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMAz-0006J5-Gr
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:15:13 +0000
Received: from [85.158.143.35:25261] by server-1.bemta-4.messagelabs.com id
	21/95-31661-090CFE25; Mon, 03 Feb 2014 16:15:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391444106!2789402!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26130 invoked from network); 3 Feb 2014 16:15:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97348920"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005i6-A6;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xd-32;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:47 +0000
Message-ID: <1391444091-22796-15-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH 14/18] libxl: fork: Make SIGCHLD self-pipe
	nonblocking
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the new libxl__pipe_nonblock and _close functions, rather than
open coding the same logic.  Now the pipe is nonblocking, which avoids
a race which could result in libxl deadlocking in a multithreaded
program.

Reported-by: Jim Fehlig <jfehlig@suse.com>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl.c      |    6 +-----
 tools/libxl/libxl_fork.c |   12 +++---------
 2 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 4679b51..3730074 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -171,11 +171,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
      * them; we wish the application good luck with understanding
      * this if and when it reaps them. */
     libxl__sigchld_notneeded(gc);
-
-    if (ctx->sigchld_selfpipe[0] >= 0) {
-        close(ctx->sigchld_selfpipe[0]);
-        close(ctx->sigchld_selfpipe[1]);
-    }
+    libxl__pipe_close(ctx->sigchld_selfpipe);
 
     pthread_mutex_destroy(&ctx->lock);
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2432512..1d0017b 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -343,17 +343,11 @@ void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 
 int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 {
-    int r, rc;
+    int rc;
 
     if (CTX->sigchld_selfpipe[0] < 0) {
-        r = pipe(CTX->sigchld_selfpipe);
-        if (r) {
-            CTX->sigchld_selfpipe[0] = -1;
-            LIBXL__LOG_ERRNO(CTX, LIBXL__LOG_ERROR,
-                             "failed to create sigchld pipe");
-            rc = ERROR_FAIL;
-            goto out;
-        }
+        rc = libxl__pipe_nonblock(CTX, CTX->sigchld_selfpipe);
+        if (rc) goto out;
     }
     if (!libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_register(gc, &CTX->sigchld_selfpipe_efd,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDF-0006qW-MP; Mon, 03 Feb 2014 16:17:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDE-0006q2-Lc
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:32 +0000
Received: from [85.158.143.35:47491] by server-3.bemta-4.messagelabs.com id
	57/53-11539-C11CFE25; Mon, 03 Feb 2014 16:17:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391444248!2789323!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18515 invoked from network); 3 Feb 2014 16:17:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270829"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:01 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005hT-Lj;
	Mon, 03 Feb 2014 16:14:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005wa-E3;
	Mon, 03 Feb 2014 16:14:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:34 +0000
Message-ID: <1391444091-22796-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 01/18] libxl: fork: Break out checked_waitpid
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a simple error-handling wrapper for waitpid.  We're going to
want to call waitpid from another site, and this avoids duplicating
the error handling.

No functional change in this patch.  (Technically, we used to check
chldmode_ours again in the EINTR case and no longer do, but the mode
cannot have changed in the meantime because we hold the libxl ctx
lock throughout.)

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 4ae9f94..2252370 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -155,6 +155,22 @@ int libxl__carefd_fd(const libxl__carefd *cf)
  * Actual child process handling
  */
 
+/* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
+static pid_t checked_waitpid(libxl__egc *egc, pid_t want, int *status)
+{
+    for (;;) {
+        pid_t got = waitpid(want, status, WNOHANG);
+        if (got != -1)
+            return got;
+        if (errno == ECHILD)
+            return got;
+        if (errno == EINTR)
+            continue;
+        LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        return 0;
+    }
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents);
 
@@ -331,16 +347,10 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
-        pid_t pid = waitpid(-1, &status, WNOHANG);
-
-        if (pid == 0) return;
+        pid_t pid = checked_waitpid(egc, -1, &status);
 
-        if (pid == -1) {
-            if (errno == ECHILD) return;
-            if (errno == EINTR) continue;
-            LIBXL__EVENT_DISASTER(egc, "waitpid() failed", errno, 0);
+        if (pid == 0 || pid == -1 /* ECHILD */)
             return;
-        }
 
         int rc = childproc_reaped(egc, pid, status);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDF-0006qK-9K; Mon, 03 Feb 2014 16:17:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDD-0006pn-G9
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:31 +0000
Received: from [85.158.143.35:27482] by server-3.bemta-4.messagelabs.com id
	DB/43-11539-A11CFE25; Mon, 03 Feb 2014 16:17:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391444248!2789323!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18353 invoked from network); 3 Feb 2014 16:17:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270781"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:14:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:14:56 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005hQ-Et;
	Mon, 03 Feb 2014 16:14:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005wX-79;
	Mon, 03 Feb 2014 16:14:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:33 +0000
Message-ID: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
	libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the latest version of my libxl event fixes apropos of Jim's
libvirt testing.

  at  01/18 libxl: fork: Break out checked_waitpid
  at  02/18 libxl: fork: Break out childproc_reaped_ours
  at  03/18 libxl: fork: Clarify docs for libxl_sigchld_owner
  at  04/18 libxl: fork: Document libxl_sigchld_owner_libxl better
  at  05/18 libxl: fork: assert that chldmode is right
  at  06/18 libxl: fork: Provide libxl_childproc_sigchld_occurred
  at  07/18 libxl: fork: Provide ..._always_selective_reap
  at  08/18 libxl: fork: Provide LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
  at  09/18 libxl: fork: Rename sigchld handler functions
  at  10/18 libxl: fork: Break out sigchld_installhandler_core
 * t  11/18 libxl: fork: Break out sigchld_sethandler_raw
 1at  12/18 libxl: fork: Share SIGCHLD handler amongst ctxs
 +at  13/18 libxl: events: Break out libxl__pipe_nonblock, _close
  at  14/18 libxl: fork: Make SIGCHLD self-pipe nonblocking
 N    15/18 libxl: events: Makefile builds internal unit tests
 N    16/18 libxl: events: timedereg internal unit test
 n    17/18 libxl: timeouts: Break out time_occurs
 n    18/18 libxl: timeouts: Record deregistration when one occurs

Notes:
   a    acked by Ian Campbell
   +    modified description in this patch
   1    this patch was combined from two patches which were part
         of "v2.1"; it is consequently modified in v3 relative to v2
   N    entirely new patch
   n    new in this series, but previously passed to Jim for testing
   t    AIUI up to here has been tested by Jim and improves (but does
        not entirely eliminate) the problems

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDH-0006rr-LE; Mon, 03 Feb 2014 16:17:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDG-0006qR-0V
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:34 +0000
Received: from [85.158.139.211:63970] by server-4.bemta-5.messagelabs.com id
	C4/F3-08092-D11CFE25; Mon, 03 Feb 2014 16:17:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3314 invoked from network); 3 Feb 2014 16:17:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270830"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:01 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005hW-Ra;
	Mon, 03 Feb 2014 16:14:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005wf-L9;
	Mon, 03 Feb 2014 16:14:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:35 +0000
Message-ID: <1391444091-22796-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 02/18] libxl: fork: Break out
	childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're going to want to perform this reaping step again at a new call
site, so break it out into a helper.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2252370..7b84765 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -290,6 +290,15 @@ static int perhaps_installhandler(libxl__gc *gc, bool creating)
     return 0;
 }
 
+static void childproc_reaped_ours(libxl__egc *egc, libxl__ev_child *ch,
+                                 int status)
+{
+    pid_t pid = ch->pid;
+    LIBXL_LIST_REMOVE(ch, entry);
+    ch->pid = -1;
+    ch->callback(egc, ch, pid, status);
+}
+
 static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
 {
     EGC_GC;
@@ -303,9 +312,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
     return ERROR_UNKNOWN_CHILD;
 
  found:
-    LIBXL_LIST_REMOVE(ch, entry);
-    ch->pid = -1;
-    ch->callback(egc, ch, pid, status);
+    childproc_reaped_ours(egc, ch, status);
 
     perhaps_removehandler(gc);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDH-0006rI-5y; Mon, 03 Feb 2014 16:17:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDF-0006qH-Mr
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:33 +0000
Received: from [85.158.143.35:27701] by server-2.bemta-4.messagelabs.com id
	7C/4D-10891-C11CFE25; Mon, 03 Feb 2014 16:17:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391444248!2789323!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18565 invoked from network); 3 Feb 2014 16:17:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270831"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hZ-1y;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005wk-Qz;
	Mon, 03 Feb 2014 16:14:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:36 +0000
Message-ID: <1391444091-22796-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 03/18] libxl: fork: Clarify docs for
	libxl_sigchld_owner
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Clarify that libxl_sigchld_owner_libxl causes libxl to reap all the
process's children, and clarify the wording of the description of
libxl_sigchld_owner_libxl_always.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_event.h |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 6261f99..ff0b2fa 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -467,7 +467,8 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
 
 
 typedef enum {
-    /* libxl owns SIGCHLD whenever it has a child. */
+    /* libxl owns SIGCHLD whenever it has a child, and reaps
+     * all children, including those not spawned by libxl. */
     libxl_sigchld_owner_libxl,
 
     /* Application promises to call libxl_childproc_exited but NOT
@@ -476,7 +477,7 @@ typedef enum {
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
-     * relying on libxl's event loop for reaping its own children. */
+     * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
 } libxl_sigchld_owner;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDH-0006rr-LE; Mon, 03 Feb 2014 16:17:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDG-0006qR-0V
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:34 +0000
Received: from [85.158.139.211:63970] by server-4.bemta-5.messagelabs.com id
	C4/F3-08092-D11CFE25; Mon, 03 Feb 2014 16:17:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3314 invoked from network); 3 Feb 2014 16:17:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270830"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:01 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005hW-Ra;
	Mon, 03 Feb 2014 16:14:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAi-0005wf-L9;
	Mon, 03 Feb 2014 16:14:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:35 +0000
Message-ID: <1391444091-22796-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 02/18] libxl: fork: Break out
	childproc_reaped_ours
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're going to want to do this again at a new call site.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 2252370..7b84765 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -290,6 +290,15 @@ static int perhaps_installhandler(libxl__gc *gc, bool creating)
     return 0;
 }
 
+static void childproc_reaped_ours(libxl__egc *egc, libxl__ev_child *ch,
+                                 int status)
+{
+    pid_t pid = ch->pid;
+    LIBXL_LIST_REMOVE(ch, entry);
+    ch->pid = -1;
+    ch->callback(egc, ch, pid, status);
+}
+
 static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
 {
     EGC_GC;
@@ -303,9 +312,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
     return ERROR_UNKNOWN_CHILD;
 
  found:
-    LIBXL_LIST_REMOVE(ch, entry);
-    ch->pid = -1;
-    ch->callback(egc, ch, pid, status);
+    childproc_reaped_ours(egc, ch, status);
 
     perhaps_removehandler(gc);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDL-0006v7-2Y; Mon, 03 Feb 2014 16:17:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDG-0006qs-JE
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:34 +0000
Received: from [85.158.143.35:27792] by server-3.bemta-4.messagelabs.com id
	EC/63-11539-D11CFE25; Mon, 03 Feb 2014 16:17:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391444248!2789323!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18614 invoked from network); 3 Feb 2014 16:17:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270841"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hi-NS;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005wz-FL;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:39 +0000
Message-ID: <1391444091-22796-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 06/18] libxl: fork: Provide
	libxl_childproc_sigchld_occurred
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which don't keep track of all their child processes
in a manner suitable for coherent dispatch of their termination.  In
such a situation, nothing in the whole process may call wait, or
waitpid(-1,,).  Doing so reaps processes belonging to other parts of
the application and there is then no way to deliver the exit status to
the right place.

To support this, provide a facility for such an application to ask
libxl to call waitpid on each of its children individually.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl_event.h |   29 +++++++++++++++++++++++++----
 tools/libxl/libxl_fork.c  |   45 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 70 insertions(+), 4 deletions(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 4f72c4b..3c93955 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -482,9 +482,10 @@ typedef enum {
      * all children, including those not spawned by libxl. */
     libxl_sigchld_owner_libxl,
 
-    /* Application promises to call libxl_childproc_exited but NOT
-     * from within a signal handler.  libxl will not itself arrange to
-     * (un)block or catch SIGCHLD. */
+    /* Application promises to discover when SIGCHLD occurs and call
+     * libxl_childproc_exited or libxl_childproc_sigchld_occurred (but
+     * NOT from within a signal handler).  libxl will not itself
+     * arrange to (un)block or catch SIGCHLD. */
     libxl_sigchld_owner_mainloop,
 
     /* libxl owns SIGCHLD all the time, and the application is
@@ -542,7 +543,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 
 /*
  * This function is for an application which owns SIGCHLD and which
- * therefore reaps all of the process's children.
+ * reaps all of the process's children, and dispatches the exit status
+ * to the correct place inside the application.
  *
  * May be called only by an application which has called setmode with
  * chldowner == libxl_sigchld_owner_mainloop.  If pid was a process started
@@ -558,6 +560,25 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
 int libxl_childproc_reaped(libxl_ctx *ctx, pid_t, int status)
                            LIBXL_EXTERNAL_CALLERS_ONLY;
 
+/*
+ * This function is for an application which owns SIGCHLD but which
+ * doesn't keep track of all of its own children in a manner suitable
+ * for reaping all of them and then dispatching them.
+ *
+ * Such an application must notify libxl, by calling this
+ * function, that a SIGCHLD occurred.  libxl will then check all its
+ * children, reap any that are ready, and take any action necessary -
+ * but it will not reap anything else.
+ *
+ * May be called only by an application which has called setmode with
+ * chldowner == libxl_sigchld_owner_mainloop.
+ *
+ * May NOT be called from within a signal handler which might
+ * interrupt any libxl operation (just like libxl_childproc_reaped).
+ */
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+                           LIBXL_EXTERNAL_CALLERS_ONLY;
+
 
 /*
  * An application which initialises a libxl_ctx in a parent process
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 85db2fb..b2325e0 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -330,6 +330,51 @@ int libxl_childproc_reaped(libxl_ctx *ctx, pid_t pid, int status)
     return rc;
 }
 
+static void childproc_checkall(libxl__egc *egc)
+{
+    EGC_GC;
+    libxl__ev_child *ch;
+
+    for (;;) {
+        int status;
+        pid_t got;
+
+        LIBXL_LIST_FOREACH(ch, &CTX->children, entry) {
+            got = checked_waitpid(egc, ch->pid, &status);
+            if (got)
+                goto found;
+        }
+        /* not found */
+        return;
+
+    found:
+        if (got == -1) {
+            LIBXL__EVENT_DISASTER
+                (egc, "waitpid() gave ECHILD but we have a child",
+                 ECHILD, 0);
+            /* it must have finished but we don't know its status */
+            status = 255<<8; /* no wait.h macro for this! */
+            assert(WIFEXITED(status));
+            assert(WEXITSTATUS(status)==255);
+            assert(!WIFSIGNALED(status));
+            assert(!WIFSTOPPED(status));
+        }
+        childproc_reaped_ours(egc, ch, status);
+        /* we need to restart the loop, as children may have been edited */
+    }
+}
+
+void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
+{
+    EGC_INIT(ctx);
+    CTX_LOCK;
+    assert(CTX->childproc_hooks->chldowner
+           == libxl_sigchld_owner_mainloop);
+    childproc_checkall(egc);
+    CTX_UNLOCK;
+    EGC_FREE;
+}
+
 static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
                                      int fd, short events, short revents)
 {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDL-0006w8-Sy; Mon, 03 Feb 2014 16:17:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDG-0006qz-SX
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:35 +0000
Received: from [85.158.139.211:52564] by server-17.bemta-5.messagelabs.com id
	1D/34-31975-E11CFE25; Mon, 03 Feb 2014 16:17:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3499 invoked from network); 3 Feb 2014 16:17:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270846"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hl-UR;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005x4-Mt;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:40 +0000
Message-ID: <1391444091-22796-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 07/18] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which want to use libxl in an event-driven mode but
which do not integrate child termination into their event system, but
instead reap all their own children synchronously.

In such an application libxl must own SIGCHLD but avoid reaping any
children that don't belong to libxl.

Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
behaviour.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

---
v2: Document the new mode in the big "Subprocess handling" comment.
---
 tools/libxl/libxl_event.h |   11 +++++++++++
 tools/libxl/libxl_fork.c  |    7 +++++++
 2 files changed, 18 insertions(+)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3c93955..824ac88 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -474,6 +474,12 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       and provides a callback to be notified of their exit
  *       statues.  The application must have only one libxl_ctx
  *       configured this way.
+ *
+ *     libxl_sigchld_owner_libxl_always_selective_reap:
+ *
+ *       The application expects to reap all of its own children
+ *       synchronously, and does not use SIGCHLD.  libxl is
+ *       to install a SIGCHLD handler.
  */
 
 
@@ -491,6 +497,11 @@ typedef enum {
     /* libxl owns SIGCHLD all the time, and the application is
      * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
+
+    /* libxl owns SIGCHLD all the time, but it must only reap its own
+     * children.  The application will reap its own children
+     * synchronously with waitpid, without the assistance of SIGCHLD. */
+    libxl_sigchld_owner_libxl_always_selective_reap,
 } libxl_sigchld_owner;
 
 typedef struct {
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b2325e0..16e17f6 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     case libxl_sigchld_owner_mainloop:
         return 0;
     case libxl_sigchld_owner_libxl_always:
+    case libxl_sigchld_owner_libxl_always_selective_reap:
         return 1;
     }
     abort();
@@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
     int e = libxl__self_pipe_eatall(selfpipe);
     if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
 
+    if (CTX->childproc_hooks->chldowner
+        == libxl_sigchld_owner_libxl_always_selective_reap) {
+        childproc_checkall(egc);
+        return;
+    }
+
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
         pid_t pid = checked_waitpid(egc, -1, &status);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDL-0006w8-Sy; Mon, 03 Feb 2014 16:17:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDG-0006qz-SX
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:35 +0000
Received: from [85.158.139.211:52564] by server-17.bemta-5.messagelabs.com id
	1D/34-31975-E11CFE25; Mon, 03 Feb 2014 16:17:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3499 invoked from network); 3 Feb 2014 16:17:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270846"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005hl-UR;
	Mon, 03 Feb 2014 16:14:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005x4-Mt;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:40 +0000
Message-ID: <1391444091-22796-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 07/18] libxl: fork: Provide
	..._always_selective_reap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Applications exist which want to use libxl in an event-driven mode
but which do not integrate child termination into their event system;
instead, they reap all their own children synchronously.

In such an application libxl must own SIGCHLD but avoid reaping any
children that don't belong to libxl.

Provide libxl_sigchld_owner_libxl_always_selective_reap which has this
behaviour.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

---
v2: Document the new mode in the big "Subprocess handling" comment.
---
 tools/libxl/libxl_event.h |   11 +++++++++++
 tools/libxl/libxl_fork.c  |    7 +++++++
 2 files changed, 18 insertions(+)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 3c93955..824ac88 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -474,6 +474,12 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *       and provides a callback to be notified of their exit
 *       statuses.  The application must have only one libxl_ctx
  *       configured this way.
+ *
+ *     libxl_sigchld_owner_libxl_always_selective_reap:
+ *
+ *       The application expects to reap all of its own children
+ *       synchronously, and does not use SIGCHLD.  libxl is
+ *       to install a SIGCHLD handler.
  */
 
 
@@ -491,6 +497,11 @@ typedef enum {
     /* libxl owns SIGCHLD all the time, and the application is
      * relying on libxl's event loop for reaping its children too. */
     libxl_sigchld_owner_libxl_always,
+
+    /* libxl owns SIGCHLD all the time, but it must only reap its own
+     * children.  The application will reap its own children
+     * synchronously with waitpid, without the assistance of SIGCHLD. */
+    libxl_sigchld_owner_libxl_always_selective_reap,
 } libxl_sigchld_owner;
 
 typedef struct {
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index b2325e0..16e17f6 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -268,6 +268,7 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     case libxl_sigchld_owner_mainloop:
         return 0;
     case libxl_sigchld_owner_libxl_always:
+    case libxl_sigchld_owner_libxl_always_selective_reap:
         return 1;
     }
     abort();
@@ -398,6 +399,12 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
     int e = libxl__self_pipe_eatall(selfpipe);
     if (e) LIBXL__EVENT_DISASTER(egc, "read sigchld pipe", e, 0);
 
+    if (CTX->childproc_hooks->chldowner
+        == libxl_sigchld_owner_libxl_always_selective_reap) {
+        childproc_checkall(egc);
+        return;
+    }
+
     while (chldmode_ours(CTX, 0) /* in case the app changes the mode */) {
         int status;
         pid_t pid = checked_waitpid(egc, -1, &status);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDM-0006x9-IU; Mon, 03 Feb 2014 16:17:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDH-0006r4-0j
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:35 +0000
Received: from [193.109.254.147:14324] by server-15.bemta-14.messagelabs.com
	id 59/37-10839-E11CFE25; Mon, 03 Feb 2014 16:17:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391444251!1688638!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11008 invoked from network); 3 Feb 2014 16:17:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270847"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005ho-3w;
	Mon, 03 Feb 2014 16:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAj-0005x9-Tr;
	Mon, 03 Feb 2014 16:14:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:41 +0000
Message-ID: <1391444091-22796-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 08/18] libxl: fork: Provide
	LIBXL_HAVE_SIGCHLD_SELECTIVE_REAP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is the feature test macro for libxl_childproc_sigchld_occurred
and libxl_sigchld_owner_libxl_always_selective_reap.

It is split out into this separate patch because a single feature
test is sensible: we do not intend anyone to release or ship libxl
versions with one of these features but not the other.  Keeping the
two features in separate patches is clearer, and this arrangement
makes the actual code easier to read.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.h |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..1ac34c3 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP
+ *
+ * If this is defined:
+ *
+ * Firstly, the enum libxl_sigchld_owner (in libxl_event.h) has the
+ * value libxl_sigchld_owner_libxl_always_selective_reap which may be
+ * passed to libxl_childproc_setmode in hooks->chldowner.
+ *
+ * Secondly, the function libxl_childproc_sigchld_occurred exists.
+ */
+#define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDN-0006xt-1S; Mon, 03 Feb 2014 16:17:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDH-0006rG-K5
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:36 +0000
Received: from [85.158.143.35:47823] by server-2.bemta-4.messagelabs.com id
	5B/5D-10891-E11CFE25; Mon, 03 Feb 2014 16:17:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391444248!2789323!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18660 invoked from network); 3 Feb 2014 16:17:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270859"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005hx-Lt;
	Mon, 03 Feb 2014 16:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005xO-Gd;
	Mon, 03 Feb 2014 16:14:58 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:44 +0000
Message-ID: <1391444091-22796-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 11/18] libxl: fork: Break out
	sigchld_sethandler_raw
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to want to introduce another call site in the final
substantive patch.

Pure code motion; no functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>

---
v3: Remove now-unused variables from sigchld_installhandler_core
---
 tools/libxl/libxl_fork.c |   23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index ce8e8eb..084d86a 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -182,6 +182,19 @@ static void sigchld_handler(int signo)
     errno = esave;
 }
 
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
+{
+    struct sigaction ours;
+    int r;
+
+    memset(&ours,0,sizeof(ours));
+    ours.sa_handler = handler;
+    sigemptyset(&ours.sa_mask);
+    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
+    r = sigaction(SIGCHLD, &ours, old);
+    assert(!r);
+}
+
 static void sigchld_removehandler_core(void)
 {
     struct sigaction was;
@@ -196,18 +209,10 @@ static void sigchld_removehandler_core(void)
 
 static void sigchld_installhandler_core(libxl__gc *gc)
 {
-    struct sigaction ours;
-    int r;
-
     assert(!sigchld_owner);
     sigchld_owner = CTX;
 
-    memset(&ours,0,sizeof(ours));
-    ours.sa_handler = sigchld_handler;
-    sigemptyset(&ours.sa_mask);
-    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
-    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
-    assert(!r);
+    sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
     assert(((void)"application must negotiate with libxl about SIGCHLD",
             !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDN-0006yQ-FH; Mon, 03 Feb 2014 16:17:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDH-0006rc-Re
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:36 +0000
Received: from [85.158.139.211:52660] by server-6.bemta-5.messagelabs.com id
	A3/8A-14342-F11CFE25; Mon, 03 Feb 2014 16:17:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3652 invoked from network); 3 Feb 2014 16:17:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270851"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005hr-AN;
	Mon, 03 Feb 2014 16:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005xE-3N;
	Mon, 03 Feb 2014 16:14:58 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:42 +0000
Message-ID: <1391444091-22796-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 09/18] libxl: fork: Rename sigchld handler
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to change these functions so that different libxl ctxs
can share a single SIGCHLD handler.  Rename them now to names which
don't imply unconditional handler installation or removal.

Also note in the comments that they are idempotent.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.c          |    2 +-
 tools/libxl/libxl_fork.c     |   22 +++++++++++-----------
 tools/libxl/libxl_internal.h |    4 ++--
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..4679b51 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -170,7 +170,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
     /* If we have outstanding children, then the application inherits
      * them; we wish the application good luck with understanding
      * this if and when it reaps them. */
-    libxl__sigchld_removehandler(gc);
+    libxl__sigchld_notneeded(gc);
 
     if (ctx->sigchld_selfpipe[0] >= 0) {
         close(ctx->sigchld_selfpipe[0]);
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 16e17f6..a15af8e 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -194,7 +194,7 @@ static void sigchld_removehandler_core(void)
     sigchld_owner = 0;
 }
 
-void libxl__sigchld_removehandler(libxl__gc *gc) /* non-reentrant */
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int rc;
 
@@ -210,7 +210,7 @@ void libxl__sigchld_removehandler(libxl__gc *gc) /* non-reentrant */
     }
 }
 
-int libxl__sigchld_installhandler(libxl__gc *gc) /* non-reentrant */
+int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int r, rc;
 
@@ -274,18 +274,18 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     abort();
 }
 
-static void perhaps_removehandler(libxl__gc *gc)
+static void perhaps_sigchld_notneeded(libxl__gc *gc)
 {
     if (!chldmode_ours(CTX, 0))
-        libxl__sigchld_removehandler(gc);
+        libxl__sigchld_notneeded(gc);
 }
 
-static int perhaps_installhandler(libxl__gc *gc, bool creating)
+static int perhaps_sigchld_needed(libxl__gc *gc, bool creating)
 {
     int rc;
 
     if (chldmode_ours(CTX, creating)) {
-        rc = libxl__sigchld_installhandler(gc);
+        rc = libxl__sigchld_needed(gc);
         if (rc) return rc;
     }
     return 0;
@@ -314,7 +314,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
  found:
     childproc_reaped_ours(egc, ch, status);
 
-    perhaps_removehandler(gc);
+    perhaps_sigchld_notneeded(gc);
 
     return 0;
 }
@@ -445,7 +445,7 @@ pid_t libxl__ev_child_fork(libxl__gc *gc, libxl__ev_child *ch,
     CTX_LOCK;
     int rc;
 
-    perhaps_installhandler(gc, 1);
+    perhaps_sigchld_needed(gc, 1);
 
     pid_t pid =
         CTX->childproc_hooks->fork_replacement
@@ -473,7 +473,7 @@ pid_t libxl__ev_child_fork(libxl__gc *gc, libxl__ev_child *ch,
     rc = pid;
 
  out:
-    perhaps_removehandler(gc);
+    perhaps_sigchld_notneeded(gc);
     CTX_UNLOCK;
     return rc;
 }
@@ -492,8 +492,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
     ctx->childproc_hooks = hooks;
     ctx->childproc_user = user;
 
-    perhaps_removehandler(gc);
-    perhaps_installhandler(gc, 0); /* idempotent, ok to ignore errors for now */
+    perhaps_sigchld_notneeded(gc);
+    perhaps_sigchld_needed(gc, 0); /* idempotent, ok to ignore errors for now */
 
     CTX_UNLOCK;
     GC_FREE;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..fba681d 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -838,8 +838,8 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 
 /* Internal to fork and child reaping machinery */
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
-int libxl__sigchld_installhandler(libxl__gc*); /* non-reentrant; logs errs */
-void libxl__sigchld_removehandler(libxl__gc*); /* non-reentrant */
+int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
+void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDN-0006yQ-FH; Mon, 03 Feb 2014 16:17:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDH-0006rc-Re
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:36 +0000
Received: from [85.158.139.211:52660] by server-6.bemta-5.messagelabs.com id
	A3/8A-14342-F11CFE25; Mon, 03 Feb 2014 16:17:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3652 invoked from network); 3 Feb 2014 16:17:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270851"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005hr-AN;
	Mon, 03 Feb 2014 16:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005xE-3N;
	Mon, 03 Feb 2014 16:14:58 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:42 +0000
Message-ID: <1391444091-22796-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 09/18] libxl: fork: Rename sigchld handler
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to change these functions so that different libxl ctxs
can share a single SIGCHLD handler.  Rename them now to names which
do not imply unconditional handler installation or removal.

Also note in the comments that they are idempotent.

No functional change.
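
The new names express a requirement rather than an action.  Purely as
an illustration (none of these identifiers are libxl's own), the
needed/notneeded idiom amounts to guarding the real install/remove
work behind a flag:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for the real handler (un)installation work. */
static int installs, removes;
static bool handler_installed;

/* Idempotent: callers declare that the handler is needed; a second
 * call is a no-op rather than an error. */
static void sigchld_needed(void)
{
    if (handler_installed)
        return;
    installs++;                 /* real code would call sigaction() here */
    handler_installed = true;
}

static void sigchld_notneeded(void)
{
    if (!handler_installed)
        return;
    removes++;                  /* real code would restore the old handler */
    handler_installed = false;
}
```

With install/remove semantics a second call would be a bug; with
needed/notneeded semantics the requirement is simply already satisfied.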

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl.c          |    2 +-
 tools/libxl/libxl_fork.c     |   22 +++++++++++-----------
 tools/libxl/libxl_internal.h |    4 ++--
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..4679b51 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -170,7 +170,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
     /* If we have outstanding children, then the application inherits
      * them; we wish the application good luck with understanding
      * this if and when it reaps them. */
-    libxl__sigchld_removehandler(gc);
+    libxl__sigchld_notneeded(gc);
 
     if (ctx->sigchld_selfpipe[0] >= 0) {
         close(ctx->sigchld_selfpipe[0]);
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 16e17f6..a15af8e 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -194,7 +194,7 @@ static void sigchld_removehandler_core(void)
     sigchld_owner = 0;
 }
 
-void libxl__sigchld_removehandler(libxl__gc *gc) /* non-reentrant */
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int rc;
 
@@ -210,7 +210,7 @@ void libxl__sigchld_removehandler(libxl__gc *gc) /* non-reentrant */
     }
 }
 
-int libxl__sigchld_installhandler(libxl__gc *gc) /* non-reentrant */
+int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int r, rc;
 
@@ -274,18 +274,18 @@ static bool chldmode_ours(libxl_ctx *ctx, bool creating)
     abort();
 }
 
-static void perhaps_removehandler(libxl__gc *gc)
+static void perhaps_sigchld_notneeded(libxl__gc *gc)
 {
     if (!chldmode_ours(CTX, 0))
-        libxl__sigchld_removehandler(gc);
+        libxl__sigchld_notneeded(gc);
 }
 
-static int perhaps_installhandler(libxl__gc *gc, bool creating)
+static int perhaps_sigchld_needed(libxl__gc *gc, bool creating)
 {
     int rc;
 
     if (chldmode_ours(CTX, creating)) {
-        rc = libxl__sigchld_installhandler(gc);
+        rc = libxl__sigchld_needed(gc);
         if (rc) return rc;
     }
     return 0;
@@ -314,7 +314,7 @@ static int childproc_reaped(libxl__egc *egc, pid_t pid, int status)
  found:
     childproc_reaped_ours(egc, ch, status);
 
-    perhaps_removehandler(gc);
+    perhaps_sigchld_notneeded(gc);
 
     return 0;
 }
@@ -445,7 +445,7 @@ pid_t libxl__ev_child_fork(libxl__gc *gc, libxl__ev_child *ch,
     CTX_LOCK;
     int rc;
 
-    perhaps_installhandler(gc, 1);
+    perhaps_sigchld_needed(gc, 1);
 
     pid_t pid =
         CTX->childproc_hooks->fork_replacement
@@ -473,7 +473,7 @@ pid_t libxl__ev_child_fork(libxl__gc *gc, libxl__ev_child *ch,
     rc = pid;
 
  out:
-    perhaps_removehandler(gc);
+    perhaps_sigchld_notneeded(gc);
     CTX_UNLOCK;
     return rc;
 }
@@ -492,8 +492,8 @@ void libxl_childproc_setmode(libxl_ctx *ctx, const libxl_childproc_hooks *hooks,
     ctx->childproc_hooks = hooks;
     ctx->childproc_user = user;
 
-    perhaps_removehandler(gc);
-    perhaps_installhandler(gc, 0); /* idempotent, ok to ignore errors for now */
+    perhaps_sigchld_notneeded(gc);
+    perhaps_sigchld_needed(gc, 0); /* idempotent, ok to ignore errors for now */
 
     CTX_UNLOCK;
     GC_FREE;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..fba681d 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -838,8 +838,8 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 
 /* Internal to fork and child reaping machinery */
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
-int libxl__sigchld_installhandler(libxl__gc*); /* non-reentrant; logs errs */
-void libxl__sigchld_removehandler(libxl__gc*); /* non-reentrant */
+int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
+void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDN-0006zR-Vq; Mon, 03 Feb 2014 16:17:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDI-0006rw-Co
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:36 +0000
Received: from [193.109.254.147:14432] by server-3.bemta-14.messagelabs.com id
	6F/20-00432-F11CFE25; Mon, 03 Feb 2014 16:17:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391444251!1688638!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11136 invoked from network); 3 Feb 2014 16:17:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270860"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005i0-TN;
	Mon, 03 Feb 2014 16:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005xT-LJ;
	Mon, 03 Feb 2014 16:14:58 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:45 +0000
Message-ID: <1391444091-22796-13-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 12/18] libxl: fork: Share SIGCHLD handler
	amongst ctxs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Previously, an application with multiple libxl ctxs in multiple
threads had to plumb SIGCHLD through to each ctx itself.  Instead,
permit multiple libxl ctxs to share the SIGCHLD handler.

We keep a list of all the ctxs which are interested in SIGCHLD and
notify all of their self-pipes.

In more detail:

 * sigchld_owner, the ctx* of the SIGCHLD owner, is replaced by
   sigchld_users, a list of SIGCHLD users.

 * Each ctx keeps track of whether it is on the users list, so that
   libxl__sigchld_needed and libxl__sigchld_notneeded, instead of
   idempotently installing and removing the handler, now idempotently
   add or remove the ctx from the list.

   We ensure that we always have the SIGCHLD handler installed
   iff the sigchld_users list is nonempty.  To make this a bit
   easier we make sigchld_installhandler_core and
   sigchld_removehandler_core idempotent.

   Specifically, the call sites for sigchld_installhandler_core and
   sigchld_removehandler_core are updated to manipulate sigchld_users
   and only call the install or remove functions as applicable.

 * In the signal handler we walk the list of SIGCHLD users and write
   to each of their self-pipes.  That means that we need to arrange to
   defer SIGCHLD when we are manipulating the list (to avoid the
   signal handler interrupting our list manipulation); this is quite
   tiresome to arrange.

   The code as written will, on the first installation of the SIGCHLD
   handler, firstly install the real handler, then immediately replace
   it with the deferral handler.  Doing it this way makes the code
   clearer as it makes the SIGCHLD deferral machinery much more
   self-contained (and hence easier to reason about).

 * The first part of libxl__sigchld_notneeded is broken out into a new
   function sigchld_user_remove (which is also needed for postfork).
   And of course that first part of the function is now rather
   different, as explained above.

 * sigchld_installhandler_core no longer takes the gc argument,
   because it now deals with SIGCHLD for all ctxs.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
v3: Include bugfixes from "Fixup SIGCHLD sharing" patch:

    * Use a mutex for defer_sigchld, to guard against concurrency
      between the thread calling defer_sigchld and an instance of the
      primary signal handler on another thread.

    * libxl_sigchld_owner_libxl_always is incompatible with SIGCHLD
      sharing.  Document this correctly.

    Fix "have have" error in comment.

    Move removal of newly unused variables to previous patch.

v2.1: Provide feature test macro LIBXL_HAVE_SIGCHLD_SHARING
---
 tools/libxl/libxl.h          |    9 +++
 tools/libxl/libxl_event.h    |   14 ++--
 tools/libxl/libxl_fork.c     |  153 ++++++++++++++++++++++++++++++++++++------
 tools/libxl/libxl_internal.h |    3 +
 4 files changed, 153 insertions(+), 26 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 1ac34c3..0b992d1 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -422,6 +422,15 @@
  */
 #define LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP 1
 
+/*
+ * LIBXL_HAVE_SIGCHLD_SHARING
+ *
+ * If this is defined, it is permissible for multiple libxl ctxs
+ * to simultaneously "own" SIGCHLD.  See "Subprocess handling"
+ * in libxl_event.h.
+ */
+#define LIBXL_HAVE_SIGCHLD_SHARING 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index 824ac88..ca43cb9 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -470,16 +470,18 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
  *
  *     libxl_sigchld_owner_libxl_always:
  *
- *       The application expects libxl to reap all of its children,
- *       and provides a callback to be notified of their exit
- *       statues.  The application must have only one libxl_ctx
- *       configured this way.
+ *       The application expects this libxl ctx to reap all of the
+ *       process's children, and provides a callback to be notified of
+ *       their exit statuses.  The application must have only one
+ *       libxl_ctx configured this way.
  *
  *     libxl_sigchld_owner_libxl_always_selective_reap:
  *
  *       The application expects to reap all of its own children
- *       synchronously, and does not use SIGCHLD.  libxl is
- *       to install a SIGCHLD handler.
+ *       synchronously, and does not use SIGCHLD.  libxl is to install
+ *       a SIGCHLD handler.  The application may have multiple
+ *       libxl_ctxs configured this way; in which case all of its ctxs
+ *       must be so configured.
  */
 
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 084d86a..2432512 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -46,11 +46,19 @@ static int atfork_registered;
 static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
     LIBXL_LIST_HEAD_INITIALIZER(carefds);
 
-/* non-null iff installed, protected by no_forking */
-static libxl_ctx *sigchld_owner;
+/* Protected against concurrency by no_forking.  sigchld_users is
+ * protected against being interrupted by SIGCHLD (and thus read
+ * asynchronously by the signal handler) by sigchld_defer (see
+ * below). */
+static bool sigchld_installed; /* 0 means not */
+static pthread_mutex_t sigchld_defer_mutex = PTHREAD_MUTEX_INITIALIZER;
+static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
+    LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
 static struct sigaction sigchld_saved_action;
 
-static void sigchld_removehandler_core(void);
+static void sigchld_removehandler_core(void); /* idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx); /* idempotent */
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old);
 
 static void atfork_lock(void)
 {
@@ -126,8 +134,7 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
     }
     LIBXL_LIST_INIT(&carefds);
 
-    if (sigchld_owner)
-        sigchld_removehandler_core();
+    sigchld_user_remove(ctx);
 
     atfork_unlock();
 }
@@ -152,7 +159,8 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 }
 
 /*
- * Actual child process handling
+ * Low-level functions for child process handling, including
+ * the main SIGCHLD handler.
  */
 
 /* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
@@ -176,9 +184,22 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
 static void sigchld_handler(int signo)
 {
+    /* This function has to be reentrant!  Luckily it is. */
+
+    libxl_ctx *notify;
     int esave = errno;
-    int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
-    assert(!e); /* errors are probably EBADF, very bad */
+
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
+
+    LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
+        int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
+        assert(!e); /* errors are probably EBADF, very bad */
+    }
+
+    r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
     errno = esave;
 }
 
@@ -195,22 +216,89 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
     assert(!r);
 }
 
-static void sigchld_removehandler_core(void)
+/*
+ * SIGCHLD deferral
+ *
+ * sigchld_defer and sigchld_release are a bit like using sigprocmask
+ * to block the signal only they work for the whole process.  Sadly
+ * this has to be done by setting a special handler that records the
+ * "pendingness" of the signal here in the program.  How tedious.
+ *
+ * A property of this approach is that the signal handler itself
+ * must be reentrant (see the comment in release_sigchld).
+ *
+ * Callers have the atfork_lock so there is no risk of concurrency
+ * within these functions, aside from the risk of being interrupted by
+ * the signal.  We use sigchld_defer_mutex to guard against the
+ * possibility of the real signal handler being still running on
+ * another thread.
+ */
+
+static volatile sig_atomic_t sigchld_occurred_while_deferred;
+
+static void sigchld_handler_when_deferred(int signo)
+{
+    sigchld_occurred_while_deferred = 1;
+}
+
+static void defer_sigchld(void)
+{
+    assert(sigchld_installed);
+
+    sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
+
+    /* Now _this thread_ cannot any longer be interrupted by the
+     * signal, so we can take the mutex without risk of deadlock.  If
+     * another thread is in the signal handler, either it or we will
+     * block and wait for the other. */
+
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
+}
+
+static void release_sigchld(void)
+{
+    assert(sigchld_installed);
+
+    int r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
+    sigchld_sethandler_raw(sigchld_handler, 0);
+    if (sigchld_occurred_while_deferred) {
+        sigchld_occurred_while_deferred = 0;
+        /* We might get another SIGCHLD here, in which case
+         * sigchld_handler will be interrupted and re-entered.
+         * This is OK. */
+        sigchld_handler(SIGCHLD);
+    }
+}
+
+/*
+ * Meat of the child process handling.
+ */
+
+static void sigchld_removehandler_core(void) /* idempotent */
 {
     struct sigaction was;
     int r;
     
+    if (!sigchld_installed)
+        return;
+
     r = sigaction(SIGCHLD, &sigchld_saved_action, &was);
     assert(!r);
     assert(!(was.sa_flags & SA_SIGINFO));
     assert(was.sa_handler == sigchld_handler);
-    sigchld_owner = 0;
+
+    sigchld_installed = 0;
 }
 
-static void sigchld_installhandler_core(libxl__gc *gc)
+static void sigchld_installhandler_core(void) /* idempotent */
 {
-    assert(!sigchld_owner);
-    sigchld_owner = CTX;
+    if (sigchld_installed)
+        return;
+
+    sigchld_installed = 1;
 
     sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
@@ -220,15 +308,32 @@ static void sigchld_installhandler_core(libxl__gc *gc)
              sigchld_saved_action.sa_handler == SIG_IGN)));
 }
 
-void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx) /* idempotent */
 {
-    int rc;
+    if (!ctx->sigchld_user_registered)
+        return;
 
     atfork_lock();
-    if (sigchld_owner == CTX)
+    defer_sigchld();
+
+    LIBXL_LIST_REMOVE(ctx, sigchld_users_entry);
+
+    release_sigchld();
+
+    if (LIBXL_LIST_EMPTY(&sigchld_users))
         sigchld_removehandler_core();
+
     atfork_unlock();
 
+    ctx->sigchld_user_registered = 0;
+}
+
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+{
+    int rc;
+
+    sigchld_user_remove(CTX);
+
     if (libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, 0);
         if (rc)
@@ -259,12 +364,20 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, POLLIN);
         if (rc) goto out;
     }
+    if (!CTX->sigchld_user_registered) {
+        atfork_lock();
 
-    atfork_lock();
-    if (sigchld_owner != CTX) {
-        sigchld_installhandler_core(gc);
+        sigchld_installhandler_core();
+
+        defer_sigchld();
+
+        LIBXL_LIST_INSERT_HEAD(&sigchld_users, CTX, sigchld_users_entry);
+
+        release_sigchld();
+        atfork_unlock();
+
+        CTX->sigchld_user_registered = 1;
     }
-    atfork_unlock();
 
     rc = 0;
  out:
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index fba681d..8429448 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -355,6 +355,8 @@ struct libxl__ctx {
     int sigchld_selfpipe[2]; /* [0]==-1 means handler not installed */
     libxl__ev_fd sigchld_selfpipe_efd;
     LIBXL_LIST_HEAD(, libxl__ev_child) children;
+    bool sigchld_user_registered;
+    LIBXL_LIST_ENTRY(libxl_ctx) sigchld_users_entry;
 
     libxl_version_info version_info;
 };
@@ -840,6 +842,7 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
 int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
 void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
+void libxl__sigchld_check_stale_handler(void);
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4



- *       and provides a callback to be notified of their exit
- *       statues.  The application must have only one libxl_ctx
- *       configured this way.
+ *       The application expects this libxl ctx to reap all of the
+ *       process's children, and provides a callback to be notified of
+ *       their exit statuses.  The application must have only one
+ *       libxl_ctx configured this way.
  *
  *     libxl_sigchld_owner_libxl_always_selective_reap:
  *
  *       The application expects to reap all of its own children
- *       synchronously, and does not use SIGCHLD.  libxl is
- *       to install a SIGCHLD handler.
+ *       synchronously, and does not use SIGCHLD.  libxl is to install
+ *       a SIGCHLD handler.  The application may have multiple
+ *       libxl_ctxs configured this way; in which case all of its ctxs
+ *       must be so configured.
  */
 
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 084d86a..2432512 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -46,11 +46,19 @@ static int atfork_registered;
 static LIBXL_LIST_HEAD(, libxl__carefd) carefds =
     LIBXL_LIST_HEAD_INITIALIZER(carefds);
 
-/* non-null iff installed, protected by no_forking */
-static libxl_ctx *sigchld_owner;
+/* Protected against concurrency by no_forking.  sigchld_users is
+ * protected against being interrupted by SIGCHLD (and thus read
+ * asynchronously by the signal handler) by sigchld_defer (see
+ * below). */
+static bool sigchld_installed; /* 0 means not */
+static pthread_mutex_t sigchld_defer_mutex = PTHREAD_MUTEX_INITIALIZER;
+static LIBXL_LIST_HEAD(, libxl_ctx) sigchld_users =
+    LIBXL_LIST_HEAD_INITIALIZER(sigchld_users);
 static struct sigaction sigchld_saved_action;
 
-static void sigchld_removehandler_core(void);
+static void sigchld_removehandler_core(void); /* idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx); /* idempotent */
+static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old);
 
 static void atfork_lock(void)
 {
@@ -126,8 +134,7 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
     }
     LIBXL_LIST_INIT(&carefds);
 
-    if (sigchld_owner)
-        sigchld_removehandler_core();
+    sigchld_user_remove(ctx);
 
     atfork_unlock();
 }
@@ -152,7 +159,8 @@ int libxl__carefd_fd(const libxl__carefd *cf)
 }
 
 /*
- * Actual child process handling
+ * Low-level functions for child process handling, including
+ * the main SIGCHLD handler.
  */
 
 /* Like waitpid(,,WNOHANG) but handles all errors except ECHILD. */
@@ -176,9 +184,22 @@ static void sigchld_selfpipe_handler(libxl__egc *egc, libxl__ev_fd *ev,
 
 static void sigchld_handler(int signo)
 {
+    /* This function has to be reentrant!  Luckily it is. */
+
+    libxl_ctx *notify;
     int esave = errno;
-    int e = libxl__self_pipe_wakeup(sigchld_owner->sigchld_selfpipe[1]);
-    assert(!e); /* errors are probably EBADF, very bad */
+
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
+
+    LIBXL_LIST_FOREACH(notify, &sigchld_users, sigchld_users_entry) {
+        int e = libxl__self_pipe_wakeup(notify->sigchld_selfpipe[1]);
+        assert(!e); /* errors are probably EBADF, very bad */
+    }
+
+    r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
     errno = esave;
 }
 
@@ -195,22 +216,89 @@ static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
     assert(!r);
 }
 
-static void sigchld_removehandler_core(void)
+/*
+ * SIGCHLD deferral
+ *
+ * sigchld_defer and sigchld_release are a bit like using sigprocmask
+ * to block the signal only they work for the whole process.  Sadly
+ * this has to be done by setting a special handler that records the
+ * "pendingness" of the signal here in the program.  How tedious.
+ *
+ * A property of this approach is that the signal handler itself
+ * must be reentrant (see the comment in release_sigchld).
+ *
+ * Callers have the atfork_lock so there is no risk of concurrency
+ * within these functions, aside from the risk of being interrupted by
+ * the signal.  We use sigchld_defer_mutex to guard against the
+ * possibility of the real signal handler being still running on
+ * another thread.
+ */
+
+static volatile sig_atomic_t sigchld_occurred_while_deferred;
+
+static void sigchld_handler_when_deferred(int signo)
+{
+    sigchld_occurred_while_deferred = 1;
+}
+
+static void defer_sigchld(void)
+{
+    assert(sigchld_installed);
+
+    sigchld_sethandler_raw(sigchld_handler_when_deferred, 0);
+
+    /* Now _this thread_ cannot any longer be interrupted by the
+     * signal, so we can take the mutex without risk of deadlock.  If
+     * another thread is in the signal handler, either it or we will
+     * block and wait for the other. */
+
+    int r = pthread_mutex_lock(&sigchld_defer_mutex);
+    assert(!r);
+}
+
+static void release_sigchld(void)
+{
+    assert(sigchld_installed);
+
+    int r = pthread_mutex_unlock(&sigchld_defer_mutex);
+    assert(!r);
+
+    sigchld_sethandler_raw(sigchld_handler, 0);
+    if (sigchld_occurred_while_deferred) {
+        sigchld_occurred_while_deferred = 0;
+        /* We might get another SIGCHLD here, in which case
+         * sigchld_handler will be interrupted and re-entered.
+         * This is OK. */
+        sigchld_handler(SIGCHLD);
+    }
+}
+
+/*
+ * Meat of the child process handling.
+ */
+
+static void sigchld_removehandler_core(void) /* idempotent */
 {
     struct sigaction was;
     int r;
     
+    if (!sigchld_installed)
+        return;
+
     r = sigaction(SIGCHLD, &sigchld_saved_action, &was);
     assert(!r);
     assert(!(was.sa_flags & SA_SIGINFO));
     assert(was.sa_handler == sigchld_handler);
-    sigchld_owner = 0;
+
+    sigchld_installed = 0;
 }
 
-static void sigchld_installhandler_core(libxl__gc *gc)
+static void sigchld_installhandler_core(void) /* idempotent */
 {
-    assert(!sigchld_owner);
-    sigchld_owner = CTX;
+    if (sigchld_installed)
+        return;
+
+    sigchld_installed = 1;
 
     sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
 
@@ -220,15 +308,32 @@ static void sigchld_installhandler_core(libxl__gc *gc)
              sigchld_saved_action.sa_handler == SIG_IGN)));
 }
 
-void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+static void sigchld_user_remove(libxl_ctx *ctx) /* idempotent */
 {
-    int rc;
+    if (!ctx->sigchld_user_registered)
+        return;
 
     atfork_lock();
-    if (sigchld_owner == CTX)
+    defer_sigchld();
+
+    LIBXL_LIST_REMOVE(ctx, sigchld_users_entry);
+
+    release_sigchld();
+
+    if (LIBXL_LIST_EMPTY(&sigchld_users))
         sigchld_removehandler_core();
+
     atfork_unlock();
 
+    ctx->sigchld_user_registered = 0;
+}
+
+void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
+{
+    int rc;
+
+    sigchld_user_remove(CTX);
+
     if (libxl__ev_fd_isregistered(&CTX->sigchld_selfpipe_efd)) {
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, 0);
         if (rc)
@@ -259,12 +364,20 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
         rc = libxl__ev_fd_modify(gc, &CTX->sigchld_selfpipe_efd, POLLIN);
         if (rc) goto out;
     }
+    if (!CTX->sigchld_user_registered) {
+        atfork_lock();
 
-    atfork_lock();
-    if (sigchld_owner != CTX) {
-        sigchld_installhandler_core(gc);
+        sigchld_installhandler_core();
+
+        defer_sigchld();
+
+        LIBXL_LIST_INSERT_HEAD(&sigchld_users, CTX, sigchld_users_entry);
+
+        release_sigchld();
+        atfork_unlock();
+
+        CTX->sigchld_user_registered = 1;
     }
-    atfork_unlock();
 
     rc = 0;
  out:
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index fba681d..8429448 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -355,6 +355,8 @@ struct libxl__ctx {
     int sigchld_selfpipe[2]; /* [0]==-1 means handler not installed */
     libxl__ev_fd sigchld_selfpipe_efd;
     LIBXL_LIST_HEAD(, libxl__ev_child) children;
+    bool sigchld_user_registered;
+    LIBXL_LIST_ENTRY(libxl_ctx) sigchld_users_entry;
 
     libxl_version_info version_info;
 };
@@ -840,6 +842,7 @@ _hidden void libxl__poller_wakeup(libxl__egc *egc, libxl__poller *p);
 extern const libxl_childproc_hooks libxl__childproc_default_hooks;
 int libxl__sigchld_needed(libxl__gc*); /* non-reentrant idempotent, logs errs */
 void libxl__sigchld_notneeded(libxl__gc*); /* non-reentrant idempotent */
+void libxl__sigchld_check_stale_handler(void);
 int libxl__self_pipe_wakeup(int fd); /* returns 0 or -1 setting errno */
 int libxl__self_pipe_eatall(int fd); /* returns 0 or -1 setting errno */
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDO-00070M-G4; Mon, 03 Feb 2014 16:17:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDI-0006se-UN
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:37 +0000
Received: from [85.158.143.35:31999] by server-2.bemta-4.messagelabs.com id
	74/6D-10891-021CFE25; Mon, 03 Feb 2014 16:17:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391444248!2789323!6
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18692 invoked from network); 3 Feb 2014 16:17:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270853"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005hu-HE;
	Mon, 03 Feb 2014 16:14:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAk-0005xJ-9m;
	Mon, 03 Feb 2014 16:14:58 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:43 +0000
Message-ID: <1391444091-22796-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 10/18] libxl: fork: Break out
	sigchld_installhandler_core
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pure code motion.  This is going to make the final substantive patch
easier to read.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_fork.c |   38 ++++++++++++++++++++++----------------
 1 file changed, 22 insertions(+), 16 deletions(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index a15af8e..ce8e8eb 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -194,6 +194,27 @@ static void sigchld_removehandler_core(void)
     sigchld_owner = 0;
 }
 
+static void sigchld_installhandler_core(libxl__gc *gc)
+{
+    struct sigaction ours;
+    int r;
+
+    assert(!sigchld_owner);
+    sigchld_owner = CTX;
+
+    memset(&ours,0,sizeof(ours));
+    ours.sa_handler = sigchld_handler;
+    sigemptyset(&ours.sa_mask);
+    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
+    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
+    assert(!r);
+
+    assert(((void)"application must negotiate with libxl about SIGCHLD",
+            !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
+            (sigchld_saved_action.sa_handler == SIG_DFL ||
+             sigchld_saved_action.sa_handler == SIG_IGN)));
+}
+
 void libxl__sigchld_notneeded(libxl__gc *gc) /* non-reentrant, idempotent */
 {
     int rc;
@@ -236,22 +257,7 @@ int libxl__sigchld_needed(libxl__gc *gc) /* non-reentrant, idempotent */
 
     atfork_lock();
     if (sigchld_owner != CTX) {
-        struct sigaction ours;
-
-        assert(!sigchld_owner);
-        sigchld_owner = CTX;
-
-        memset(&ours,0,sizeof(ours));
-        ours.sa_handler = sigchld_handler;
-        sigemptyset(&ours.sa_mask);
-        ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
-        r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
-        assert(!r);
-
-        assert(((void)"application must negotiate with libxl about SIGCHLD",
-                !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&
-                (sigchld_saved_action.sa_handler == SIG_DFL ||
-                 sigchld_saved_action.sa_handler == SIG_IGN)));
+        sigchld_installhandler_core(gc);
     }
     atfork_unlock();
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDP-00071P-2Y; Mon, 03 Feb 2014 16:17:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDJ-0006tO-GY
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:37 +0000
Received: from [193.109.254.147:42241] by server-3.bemta-14.messagelabs.com id
	F4/30-00432-021CFE25; Mon, 03 Feb 2014 16:17:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391444251!1688638!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11241 invoked from network); 3 Feb 2014 16:17:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270874"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005iF-T5;
	Mon, 03 Feb 2014 16:14:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xs-Le;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:50 +0000
Message-ID: <1391444091-22796-18-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Ian Jackson <ijackson@chiark.greenend.org.uk>
Subject: [Xen-devel] [PATCH 17/18] libxl: timeouts: Break out time_occurs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <Ian.Jackson@eu.citrix.com>

Bring together the two places where etime->func() is called into a new
function time_occurs.  For one call site this is pure code motion.
For the other the only semantic change is the introduction of a new
debugging message.

Signed-off-by: Ian Jackson <ijackson@chiark.greenend.org.uk>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.c |   18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 93f8fdc..5a99932 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -381,6 +381,15 @@ void libxl__ev_time_deregister(libxl__gc *gc, libxl__ev_time *ev)
     return;
 }
 
+static void time_occurs(libxl__egc *egc, libxl__ev_time *etime)
+{
+    DBG("ev_time=%p occurs abs=%lu.%06lu",
+        etime, (unsigned long)etime->abs.tv_sec,
+        (unsigned long)etime->abs.tv_usec);
+
+    etime->func(egc, etime, &etime->abs);
+}
+
 
 /*
  * xenstore watches
@@ -1007,11 +1016,7 @@ static void afterpoll_internal(libxl__egc *egc, libxl__poller *poller,
 
         time_deregister(gc, etime);
 
-        DBG("ev_time=%p occurs abs=%lu.%06lu",
-            etime, (unsigned long)etime->abs.tv_sec,
-            (unsigned long)etime->abs.tv_usec);
-
-        etime->func(egc, etime, &etime->abs);
+        time_occurs(egc, etime);
     }
 }
 
@@ -1092,7 +1097,8 @@ void libxl_osevent_occurred_timeout(libxl_ctx *ctx, void *for_libxl)
     assert(!ev->infinite);
 
     LIBXL_TAILQ_REMOVE(&CTX->etimes, ev, entry);
-    ev->func(egc, ev, &ev->abs);
+
+    time_occurs(egc, ev);
 
  out:
     CTX_UNLOCK;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:17:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDP-000739-V2; Mon, 03 Feb 2014 16:17:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDK-0006ty-IB
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:17:38 +0000
Received: from [85.158.139.211:52724] by server-11.bemta-5.messagelabs.com id
	3C/F5-23886-121CFE25; Mon, 03 Feb 2014 16:17:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391444250!1361854!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3772 invoked from network); 3 Feb 2014 16:17:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:17:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99270884"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:15:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:15:05 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAm-0005iI-3D;
	Mon, 03 Feb 2014 16:15:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMAl-0005xx-SR;
	Mon, 03 Feb 2014 16:14:59 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 3 Feb 2014 16:14:51 +0000
Message-ID: <1391444091-22796-19-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Ian Jackson <ijackson@chiark.greenend.org.uk>
Subject: [Xen-devel] [PATCH 18/18] libxl: timeouts: Record deregistration
	when one occurs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <Ian.Jackson@eu.citrix.com>

When a timeout has occurred, it is deregistered.  However, we failed
to record this fact by updating etime->func.  As a result,
libxl__ev_time_isregistered would say `true' for a timeout which has
already happened.

The results are that we might try to have the timeout occur again
(causing problems for the call site), and/or corrupt the timeout list.

This fixes the timedereg event system unit test.
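
[Editor's note: the fix uses a common idiom for one-shot callbacks: snapshot the
function pointer, clear it so registration checks see the timer as inactive,
then invoke the saved pointer (which may legitimately re-register the same
timer). A minimal standalone sketch of the idiom; the names ev_timer,
timer_fire and demo_cb are illustrative, not taken from libxl:]

```c
#include <assert.h>
#include <stddef.h>

typedef struct ev_timer ev_timer;
typedef void ev_timer_cb(ev_timer *t);

struct ev_timer {
    ev_timer_cb *func;   /* non-NULL iff the timer is registered */
};

static int ev_timer_isregistered(const ev_timer *t)
{
    return t->func != NULL;
}

static int seen_unregistered;

/* Example callback: observes that, by the time it runs, the timer
 * already reads as deregistered. */
static void demo_cb(ev_timer *t)
{
    seen_unregistered = !ev_timer_isregistered(t);
}

/* Fire a one-shot timer: record the deregistration *before* calling
 * back, so the callback (and anything it calls) sees consistent state
 * and may safely re-register the same timer. */
static void timer_fire(ev_timer *t)
{
    ev_timer_cb *func = t->func;
    t->func = NULL;                 /* mark deregistered first */
    func(t);
}
```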

Signed-off-by: Ian Jackson <ijackson@chiark.greenend.org.uk>
Cc: Jim Fehlig <jfehlig@suse.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/libxl_event.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_event.c b/tools/libxl/libxl_event.c
index 5a99932..ea8c744 100644
--- a/tools/libxl/libxl_event.c
+++ b/tools/libxl/libxl_event.c
@@ -387,7 +387,9 @@ static void time_occurs(libxl__egc *egc, libxl__ev_time *etime)
         etime, (unsigned long)etime->abs.tv_sec,
         (unsigned long)etime->abs.tv_usec);
 
-    etime->func(egc, etime, &etime->abs);
+    libxl__ev_time_callback *func = etime->func;
+    etime->func = 0;
+    func(egc, etime, &etime->abs);
 }
 
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:18:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMDr-0007YZ-Ln; Mon, 03 Feb 2014 16:18:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAMDq-0007X4-Kk
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 16:18:10 +0000
Received: from [85.158.143.35:38754] by server-1.bemta-4.messagelabs.com id
	72/FA-31661-141CFE25; Mon, 03 Feb 2014 16:18:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391444287!2806632!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 636 invoked from network); 3 Feb 2014 16:18:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:18:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99271705"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:16:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 11:16:25 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMC9-0005iu-Aq;
	Mon, 03 Feb 2014 16:16:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAMC9-0005zR-30;
	Mon, 03 Feb 2014 16:16:25 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21231.49368.953196.741218@mariner.uk.xensource.com>
Date: Mon, 3 Feb 2014 16:16:24 +0000
To: <xen-devel@lists.xensource.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>, Jim
	Fehlig <jfehlig@suse.com>
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
	libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("[PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
> This is the latest version of my libxl event fixes apropos of Jim's
> libvirt testing.

This can also be found here:
  git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v3

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:59:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMrb-0001eS-GV; Mon, 03 Feb 2014 16:59:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAMra-0001e4-L1
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 16:59:15 +0000
Received: from [193.109.254.147:2029] by server-1.bemta-14.messagelabs.com id
	22/65-15438-1EACFE25; Mon, 03 Feb 2014 16:59:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391446751!1710138!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24640 invoked from network); 3 Feb 2014 16:59:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:59:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99292323"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:58:58 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 11:58:57 -0500
Message-ID: <52EFCACF.6080604@citrix.com>
Date: Mon, 3 Feb 2014 17:58:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
In-Reply-To: <52E8CF540200007800117D88@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 09:52, Jan Beulich wrote:
>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req 
>> *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		/*
>> +		 * Make sure the request is freed before releasing blkif,
>> +		 * or there could be a race between free_req and the
>> +		 * cleanup done in xen_blkif_free during shutdown.
>> +		 *
>> +		 * NB: The fact that we might try to wake up pending_free_wq
>> +		 * before drain_complete (in case there's a drain going on)
>> +		 * it's not a problem with our current implementation
>> +		 * because we can assure there's no thread waiting on
>> +		 * pending_free_wq if there's a drain going on, but it has
>> +		 * to be taken into account if the current model is changed.
>> +		 */
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
>>  	}
>>  }
> 
> The put is still too early imo - you're explicitly accessing a field in
> the structure immediately afterwards. This may not be an issue at
> present, but I think it's at least a latent one.
> 
> Apart from that, the two if()s would - at least to me - be more
> clear if combined into one.

In order to get rid of the race I had to introduce yet another atomic_t
in the xen_blkif struct, which is something I don't really like, but I
could not see any other way to solve this. If that's fine I will resend
the series; here is the reworked patch:

---
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index dcfe49f..d9b8cd3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -945,7 +945,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
 	do {
 		/* The initial value is one, and one refcnt taken at the
 		 * start of the xen_blkif_schedule thread. */
-		if (atomic_read(&blkif->refcnt) <= 2)
+		if (atomic_read(&blkif->inflight) == 0)
 			break;
 		wait_for_completion_interruptible_timeout(
 				&blkif->drain_complete, HZ);
@@ -985,17 +985,30 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	 * the proper response on the ring.
 	 */
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
-		xen_blkbk_unmap(pending_req->blkif,
+		struct xen_blkif *blkif = pending_req->blkif;
+
+		xen_blkbk_unmap(blkif,
 		                pending_req->segments,
 		                pending_req->nr_pages);
-		make_response(pending_req->blkif, pending_req->id,
+		make_response(blkif, pending_req->id,
 			      pending_req->operation, pending_req->status);
-		xen_blkif_put(pending_req->blkif);
-		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
-			if (atomic_read(&pending_req->blkif->drain))
-				complete(&pending_req->blkif->drain_complete);
+		free_req(blkif, pending_req);
+		/*
+		 * Make sure the request is freed before releasing blkif,
+		 * or there could be a race between free_req and the
+		 * cleanup done in xen_blkif_free during shutdown.
+		 *
+		 * NB: The fact that we might try to wake up pending_free_wq
+		 * before drain_complete (in case there's a drain going on)
+		 * it's not a problem with our current implementation
+		 * because we can assure there's no thread waiting on
+		 * pending_free_wq if there's a drain going on, but it has
+		 * to be taken into account if the current model is changed.
+		 */
+		if (atomic_dec_and_test(&blkif->inflight) && atomic_read(&blkif->drain)) {
+			complete(&blkif->drain_complete);
 		}
-		free_req(pending_req->blkif, pending_req);
+		xen_blkif_put(blkif);
 	}
 }
 
@@ -1249,6 +1262,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	 * below (in "!bio") if we are handling a BLKIF_OP_DISCARD.
 	 */
 	xen_blkif_get(blkif);
+	atomic_inc(&blkif->inflight);
 
 	for (i = 0; i < nseg; i++) {
 		while ((bio == NULL) ||
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index f733d76..e40326a 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -278,6 +278,7 @@ struct xen_blkif {
 	/* for barrier (drain) requests */
 	struct completion	drain_complete;
 	atomic_t		drain;
+	atomic_t		inflight;
 	/* One thread per one blkif. */
 	struct task_struct	*xenblkd;
 	unsigned int		waiting_reqs;
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 8afef67..84973c6 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -128,6 +128,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	INIT_LIST_HEAD(&blkif->persistent_purge_list);
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
+	atomic_set(&blkif->inflight, 0);
 
 	INIT_LIST_HEAD(&blkif->pending_free);
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 16:59:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 16:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMrb-0001eS-GV; Mon, 03 Feb 2014 16:59:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAMra-0001e4-L1
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 16:59:15 +0000
Received: from [193.109.254.147:2029] by server-1.bemta-14.messagelabs.com id
	22/65-15438-1EACFE25; Mon, 03 Feb 2014 16:59:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391446751!1710138!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24640 invoked from network); 3 Feb 2014 16:59:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 16:59:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99292323"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 16:58:58 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 11:58:57 -0500
Message-ID: <52EFCACF.6080604@citrix.com>
Date: Mon, 3 Feb 2014 17:58:55 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
In-Reply-To: <52E8CF540200007800117D88@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 29/01/14 09:52, Jan Beulich wrote:
>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>> --- a/drivers/block/xen-blkback/blkback.c
>> +++ b/drivers/block/xen-blkback/blkback.c
>> @@ -985,17 +985,31 @@ static void __end_block_io_op(struct pending_req 
>> *pending_req, int error)
>>  	 * the proper response on the ring.
>>  	 */
>>  	if (atomic_dec_and_test(&pending_req->pendcnt)) {
>> -		xen_blkbk_unmap(pending_req->blkif,
>> +		struct xen_blkif *blkif = pending_req->blkif;
>> +
>> +		xen_blkbk_unmap(blkif,
>>  		                pending_req->segments,
>>  		                pending_req->nr_pages);
>> -		make_response(pending_req->blkif, pending_req->id,
>> +		make_response(blkif, pending_req->id,
>>  			      pending_req->operation, pending_req->status);
>> -		xen_blkif_put(pending_req->blkif);
>> -		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
>> -			if (atomic_read(&pending_req->blkif->drain))
>> -				complete(&pending_req->blkif->drain_complete);
>> +		free_req(blkif, pending_req);
>> +		/*
>> +		 * Make sure the request is freed before releasing blkif,
>> +		 * or there could be a race between free_req and the
>> +		 * cleanup done in xen_blkif_free during shutdown.
>> +		 *
>> +		 * NB: The fact that we might try to wake up pending_free_wq
>> +		 * before drain_complete (in case there is a drain going on)
>> +		 * is not a problem with our current implementation, because
>> +		 * we can ensure that there is no thread waiting on
>> +		 * pending_free_wq if a drain is in progress; but this must
>> +		 * be taken into account if the current model is changed.
>> +		 */
>> +		xen_blkif_put(blkif);
>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>> +			if (atomic_read(&blkif->drain))
>> +				complete(&blkif->drain_complete);
>>  		}
>> -		free_req(pending_req->blkif, pending_req);
>>  	}
>>  }
> 
> The put is still too early imo - you're explicitly accessing a field in
> the structure immediately afterwards. This may not be an issue at
> present, but I think it's at least a latent one.
> 
> Apart from that, the two if()s would - at least to me - be more
> clear if combined into one.

In order to get rid of the race I had to introduce yet another atomic_t
in the xen_blkif struct, which is something I don't really like, but I
could not see any other way to solve this. If that's fine I will resend
the series; here is the reworked patch:

---
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index dcfe49f..d9b8cd3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -945,7 +945,7 @@ static void xen_blk_drain_io(struct xen_blkif *blkif)
 	do {
 		/* The initial value is one, and one refcnt taken at the
 		 * start of the xen_blkif_schedule thread. */
-		if (atomic_read(&blkif->refcnt) <= 2)
+		if (atomic_read(&blkif->inflight) == 0)
 			break;
 		wait_for_completion_interruptible_timeout(
 				&blkif->drain_complete, HZ);
@@ -985,17 +985,30 @@ static void __end_block_io_op(struct pending_req *pending_req, int error)
 	 * the proper response on the ring.
 	 */
 	if (atomic_dec_and_test(&pending_req->pendcnt)) {
-		xen_blkbk_unmap(pending_req->blkif,
+		struct xen_blkif *blkif = pending_req->blkif;
+
+		xen_blkbk_unmap(blkif,
 		                pending_req->segments,
 		                pending_req->nr_pages);
-		make_response(pending_req->blkif, pending_req->id,
+		make_response(blkif, pending_req->id,
 			      pending_req->operation, pending_req->status);
-		xen_blkif_put(pending_req->blkif);
-		if (atomic_read(&pending_req->blkif->refcnt) <= 2) {
-			if (atomic_read(&pending_req->blkif->drain))
-				complete(&pending_req->blkif->drain_complete);
+		free_req(blkif, pending_req);
+		/*
+		 * Make sure the request is freed before releasing blkif,
+		 * or there could be a race between free_req and the
+		 * cleanup done in xen_blkif_free during shutdown.
+		 *
+		 * NB: The fact that we might try to wake up pending_free_wq
+		 * before drain_complete (in case there is a drain going on)
+		 * is not a problem with our current implementation, because
+		 * we can ensure that there is no thread waiting on
+		 * pending_free_wq if a drain is in progress; but this must
+		 * be taken into account if the current model is changed.
+		 */
+		if (atomic_dec_and_test(&blkif->inflight) && atomic_read(&blkif->drain)) {
+			complete(&blkif->drain_complete);
 		}
-		free_req(pending_req->blkif, pending_req);
+		xen_blkif_put(blkif);
 	}
 }
 
@@ -1249,6 +1262,7 @@ static int dispatch_rw_block_io(struct xen_blkif *blkif,
 	 * below (in "!bio") if we are handling a BLKIF_OP_DISCARD.
 	 */
 	xen_blkif_get(blkif);
+	atomic_inc(&blkif->inflight);
 
 	for (i = 0; i < nseg; i++) {
 		while ((bio == NULL) ||
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index f733d76..e40326a 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -278,6 +278,7 @@ struct xen_blkif {
 	/* for barrier (drain) requests */
 	struct completion	drain_complete;
 	atomic_t		drain;
+	atomic_t		inflight;
 	/* One thread per one blkif. */
 	struct task_struct	*xenblkd;
 	unsigned int		waiting_reqs;
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 8afef67..84973c6 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -128,6 +128,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	INIT_LIST_HEAD(&blkif->persistent_purge_list);
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
+	atomic_set(&blkif->inflight, 0);
 
 	INIT_LIST_HEAD(&blkif->pending_free);
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuH-0001tx-9t; Mon, 03 Feb 2014 17:02:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuF-0001sn-Cx
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:01:59 +0000
Received: from [193.109.254.147:51868] by server-15.bemta-14.messagelabs.com
	id 29/68-10839-68BCFE25; Mon, 03 Feb 2014 17:01:58 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391446915!1699231!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23899 invoked from network); 3 Feb 2014 17:01:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370986"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-Fx;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:33 +0000
Message-ID: <1391446897-21998-5-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 4/8] x86/xen: only warn once if bad MFNs are
	found during setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

In xen_add_extra_mem(), if the WARN() checks for bad MFNs trigger, it is
likely that they will trigger a lot, spamming the log.

Use WARN_ONCE() instead.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/setup.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0982233..2afe55e 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -89,10 +89,10 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
 
-		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
+		if (WARN_ONCE(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
 			continue;
-		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
-			pfn, mfn);
+		WARN_ONCE(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
+			  pfn, mfn);
 
 		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 	}
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuF-0001so-30; Mon, 03 Feb 2014 17:01:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuE-0001sc-1t
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:01:58 +0000
Received: from [193.109.254.147:50717] by server-10.bemta-14.messagelabs.com
	id 78/5A-10711-58BCFE25; Mon, 03 Feb 2014 17:01:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391446915!1699231!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23732 invoked from network); 3 Feb 2014 17:01:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370983"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-C5;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:29 +0000
Message-ID: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCHv4 0/8] x86/xen: fixes for mapping high MMIO
	regions (and remove _PAGE_IOMAP)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[ x86 maintainers, this is predominantly a Xen series, but the end
result is that the _PAGE_IOMAP PTE flag is removed. See patch #8. ]

This is a fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were specifying
_PAGE_IOMAP, which meant no valid MFN could be found and the resulting
PTEs would be set as not present, causing subsequent faults.

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
region after the end of the E820 map and the region beyond the end of
the p2m.  Ballooned frames are still marked as missing in the p2m as
before.

As a follow-on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
not use the _PAGE_IOMAP PTE flag, and MFN-to-PFN and PFN-to-MFN
translations will now do the right thing for all I/O regions.  This
means the Xen-specific _PAGE_IOMAP can be removed.

This series has been tested (in dom0) on all unique machines we have
in our test lab (~100 machines), some of which have PCI devices with
BARs above the end of RAM.

Note this does not fix a 32-bit dom0 trying to access BARs above 16 TB,
as this is caused by MFNs/PFNs being limited to 32 bits (unsigned
long).

You may find it useful to apply patch #3 to more easily review the
updated p2m diagram.

Changes in v4:
- fix p2m_mid_identity initialization.

Changes in v3 (not posted):
- use correct end of e820
- fix xen_remap_domain_mfn_range()

Changes in v2:
- fix to actually set end-of-RAM to 512 GiB region as 1:1.
- introduce p2m_mid_identity to efficiently store large 1:1 regions.
- Split the _PAGE_IOMAP patch into Xen and generic x86 halves.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuG-0001tR-Gu; Mon, 03 Feb 2014 17:02:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuE-0001sl-UN
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:01:59 +0000
Received: from [193.109.254.147:8422] by server-8.bemta-14.messagelabs.com id
	2F/69-18529-68BCFE25; Mon, 03 Feb 2014 17:01:58 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391446915!1699231!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23839 invoked from network); 3 Feb 2014 17:01:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370984"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-Ch;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:30 +0000
Message-ID: <1391446897-21998-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 1/8] x86/xen: rename early_p2m_alloc() and
	early_p2m_alloc_middle()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

early_p2m_alloc_middle() allocates a new leaf page and
early_p2m_alloc() allocates a new middle page.  This is confusing.

Swap the names so they match what the functions actually do.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 8009acb..437f13e 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -596,7 +596,7 @@ static bool alloc_p2m(unsigned long pfn)
 	return true;
 }
 
-static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary)
+static bool __init early_alloc_p2m(unsigned long pfn, bool check_boundary)
 {
 	unsigned topidx, mididx, idx;
 	unsigned long *p2m;
@@ -638,7 +638,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary
 	return true;
 }
 
-static bool __init early_alloc_p2m(unsigned long pfn)
+static bool __init early_alloc_p2m_middle(unsigned long pfn)
 {
 	unsigned topidx = p2m_top_index(pfn);
 	unsigned long *mid_mfn_p;
@@ -663,7 +663,7 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
 		/* Note: we don't set mid_mfn_p[midix] here,
-		 * look in early_alloc_p2m_middle */
+		 * look in early_alloc_p2m() */
 	}
 	return true;
 }
@@ -739,7 +739,7 @@ found:
 
 	/* This shouldn't happen */
 	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
-		early_alloc_p2m(set_pfn);
+		early_alloc_p2m_middle(set_pfn);
 
 	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
 		return false;
@@ -754,13 +754,13 @@ found:
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
-		if (!early_alloc_p2m(pfn))
+		if (!early_alloc_p2m_middle(pfn))
 			return false;
 
 		if (early_can_reuse_p2m_middle(pfn, mfn))
 			return __set_phys_to_machine(pfn, mfn);
 
-		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
+		if (!early_alloc_p2m(pfn, false /* boundary crossover OK!*/))
 			return false;
 
 		if (!__set_phys_to_machine(pfn, mfn))
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuF-0001so-30; Mon, 03 Feb 2014 17:01:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuE-0001sc-1t
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:01:58 +0000
Received: from [193.109.254.147:50717] by server-10.bemta-14.messagelabs.com
	id 78/5A-10711-58BCFE25; Mon, 03 Feb 2014 17:01:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391446915!1699231!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23732 invoked from network); 3 Feb 2014 17:01:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370983"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-C5;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:29 +0000
Message-ID: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCHv4 0/8] x86/xen: fixes for mapping high MMIO
	regions (and remove _PAGE_IOMAP)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[ x86 maintainers, this is predominately a Xen series but the end
result is the _PAGE_IOMAP PTE flag is removed. See patch #8. ]

This is a fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were specifying
_PAGE_IOMAP, which meant no valid MFN could be found and the resulting
PTEs were set as not present, causing subsequent faults.

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
region after the end of the E820 map and the region beyond the end of
the p2m.  Ballooned frames are still marked as missing in the p2m as
before.

As a follow on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
not use the _PAGE_IOMAP PTE flag, and MFN to PFN and PFN to MFN
translations will now do the right thing for all I/O regions.  This
means the Xen-specific _PAGE_IOMAP flag can be removed.

This series has been tested (in dom0) on all unique machines we have
in our test lab (~100 machines), some of which have PCI devices with
BARs above the end of RAM.

Note this does not fix a 32-bit dom0 trying to access BARs above 16 TB
as this is caused by MFNs/PFNs being limited to 32 bits (unsigned
long).

You may find it useful to apply patch #3 to more easily review the
updated p2m diagram.

Changes in v4:
- fix p2m_mid_identity initialization.

Changes in v3 (not posted):
- use correct end of e820
- fix xen_remap_domain_mfn_range()

Changes in v2:
- fix to actually set end-of-RAM to 512 GiB region as 1:1.
- introduce p2m_mid_identity to efficiently store large 1:1 regions.
- Split the _PAGE_IOMAP patch into Xen and generic x86 halves.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuG-0001tg-T7; Mon, 03 Feb 2014 17:02:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuF-0001sm-6J
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:01:59 +0000
Received: from [85.158.143.35:15066] by server-2.bemta-4.messagelabs.com id
	5D/D6-10891-68BCFE25; Mon, 03 Feb 2014 17:01:58 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391446916!2809991!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29514 invoked from network); 3 Feb 2014 17:01:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370985"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-Ee;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:31 +0000
Message-ID: <1391446897-21998-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 2/8] x86/xen: fix set_phys_range_identity() if
	pfn_e > MAX_P2M_PFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Allow set_phys_range_identity() to work with a range that overlaps
MAX_P2M_PFN by clamping pfn_e to MAX_P2M_PFN.
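The clamped-range logic can be sketched as a standalone toy helper
(MAX_P2M_PFN here is a made-up value, not the kernel constant, and the
helper only models which PFNs would be touched):

```c
#include <assert.h>

/* Illustrative stand-in for the kernel's MAX_P2M_PFN. */
#define MAX_P2M_PFN 1000UL

/* Number of PFNs in [pfn_s, pfn_e) the function would actually touch,
 * mirroring the early exits and the new clamp in the patch. */
static unsigned long identity_range_len(unsigned long pfn_s,
					unsigned long pfn_e)
{
	if (pfn_s >= MAX_P2M_PFN)	/* start beyond the p2m: nothing to do */
		return 0;
	if (pfn_s > pfn_e)		/* inverted range */
		return 0;
	if (pfn_e > MAX_P2M_PFN)	/* clamp the end instead of bailing */
		pfn_e = MAX_P2M_PFN;
	return pfn_e - pfn_s;
}
```

Before this change, a range whose end overlapped MAX_P2M_PFN was
rejected outright; with the clamp, the in-bounds part is still set.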

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 437f13e..8969b7c 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -774,7 +774,7 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 {
 	unsigned long pfn;
 
-	if (unlikely(pfn_s >= MAX_P2M_PFN || pfn_e >= MAX_P2M_PFN))
+	if (unlikely(pfn_s >= MAX_P2M_PFN))
 		return 0;
 
 	if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
@@ -783,6 +783,9 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_s > pfn_e)
 		return 0;
 
+	if (pfn_e > MAX_P2M_PFN)
+		pfn_e = MAX_P2M_PFN;
+
 	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
 		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
 		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuJ-0001vj-T6; Mon, 03 Feb 2014 17:02:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuF-0001sz-PL
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:01:59 +0000
Received: from [85.158.143.35:15162] by server-1.bemta-4.messagelabs.com id
	9D/24-31661-78BCFE25; Mon, 03 Feb 2014 17:01:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391446916!2809991!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29557 invoked from network); 3 Feb 2014 17:01:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370990"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-Gb;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:34 +0000
Message-ID: <1391446897-21998-6-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 5/8] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.
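The lookup change can be sketched with a toy model (the bit value and
names below are illustrative stand-ins for the kernel's
IDENTITY_FRAME()/INVALID_P2M_ENTRY, not the real definitions):

```c
#include <assert.h>

/* Toy sketch: PFNs beyond the end of the p2m now report as
 * identity-mapped rather than invalid. */
#define TOY_MAX_P2M_PFN   16UL
#define TOY_IDENTITY_BIT  (1UL << 62)	/* made-up identity marker */
#define TOY_IDENTITY(pfn) ((pfn) | TOY_IDENTITY_BIT)
#define TOY_INVALID       (~0UL)

static unsigned long toy_p2m[TOY_MAX_P2M_PFN];	/* 0 == missing here */

static unsigned long toy_get_phys_to_machine(unsigned long pfn)
{
	if (pfn >= TOY_MAX_P2M_PFN)
		return TOY_IDENTITY(pfn);	/* was: return TOY_INVALID */
	return toy_p2m[pfn] ? toy_p2m[pfn] : TOY_INVALID;
}
```

A PFN above the p2m (e.g. a high PCI BAR frame) now yields an identity
frame, so a usable 1:1 MFN can be derived instead of failing the lookup.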

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |    9 +++++++++
 2 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 3f45c27..98c32f6 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -507,7 +507,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2afe55e..210426a 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -469,6 +469,15 @@ char * __init xen_memory_setup(void)
 	}
 
 	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(map[i-1].addr / PAGE_SIZE, ~0ul);
+
+	/*
 	 * In domU, the ISA region is normal, usable memory, but we
 	 * reserve ISA memory anyway because too many things poke
 	 * about in there.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuK-0001wC-AE; Mon, 03 Feb 2014 17:02:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuG-0001tF-5R
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:02:00 +0000
Received: from [193.109.254.147:51948] by server-13.bemta-14.messagelabs.com
	id 06/65-01226-78BCFE25; Mon, 03 Feb 2014 17:01:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391446915!1699231!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23983 invoked from network); 3 Feb 2014 17:01:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370993"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-HH;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:35 +0000
Message-ID: <1391446897-21998-7-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 6/8] x86/xen: do not use _PAGE_IOMAP in
	xen_remap_domain_mfn_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

_PAGE_IOMAP is used in xen_remap_domain_mfn_range() to prevent the
pfn_pte() call in remap_area_mfn_pte_fn() from using the p2m to translate
the MFN.  If mfn_pte() is used instead, the p2m look up is avoided and
the use of _PAGE_IOMAP is no longer needed.
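The distinction can be sketched with a toy model (names and layout are
illustrative, not the kernel's): pfn_pte() translates its argument
through the p2m, which is wrong when the caller already holds a foreign
machine frame, while mfn_pte() takes the value as-is.

```c
#include <assert.h>

/* Toy p2m: index is the PFN, value is the MFN it maps to. */
#define TOY_P2M_SIZE 8UL
static unsigned long toy_p2m[TOY_P2M_SIZE] = { 7, 6, 5, 4, 3, 2, 1, 0 };

static unsigned long toy_pfn_pte(unsigned long pfn)
{
	/* A foreign MFN has no entry here, so this lookup is wrong
	 * for it: the caller would get an invalid frame back. */
	return pfn < TOY_P2M_SIZE ? toy_p2m[pfn] : ~0UL;
}

static unsigned long toy_mfn_pte(unsigned long mfn)
{
	return mfn;	/* no translation: already a machine frame */
}
```

Since mfn_pte() never consults the p2m, the _PAGE_IOMAP hint that
suppressed the translation is no longer needed on this path.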

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 2423ef0..875ca45 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2523,7 +2523,7 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(mfn_pte(rmd->mfn++, rmd->prot));
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2548,8 +2548,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuK-0001wu-PU; Mon, 03 Feb 2014 17:02:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuG-0001tK-Mo
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:02:00 +0000
Received: from [85.158.143.35:55185] by server-3.bemta-4.messagelabs.com id
	76/DF-11539-78BCFE25; Mon, 03 Feb 2014 17:01:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391446916!2809991!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29606 invoked from network); 3 Feb 2014 17:01:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370994"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-Hs;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:36 +0000
Message-ID: <1391446897-21998-8-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 7/8] x86/xen: do not use _PAGE_IOMAP PTE flag
	for I/O mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Since mfn_to_pfn() returns the correct PFN for identity mappings (as
used for MMIO regions), the use of _PAGE_IOMAP is not required in
pte_mfn_to_pfn().

Do not set the _PAGE_IOMAP flag in pte_pfn_to_mfn() and do not use it
in pte_mfn_to_pfn().

This will allow _PAGE_IOMAP to be removed, making it available for
future use.
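
The pte_pfn_to_mfn() simplification in the first hunk can be sketched
on its own (the bit positions below are assumptions for illustration;
the kernel derives the real ones from BITS_PER_LONG): the
INVALID_P2M_ENTRY test must come first, because an all-ones invalid
entry also has the identity bit set, and after that test both marker
bits can be stripped in a single mask.

```c
#include <assert.h>

/* Assumed encodings for illustration only. */
#define FOREIGN_FRAME_BIT  (1UL << 31)
#define IDENTITY_FRAME_BIT (1UL << 30)
#define INVALID_P2M_ENTRY  (~0UL)

/* Simplified core of the new pte_pfn_to_mfn(): reject an invalid
 * entry first, then strip both marker bits in one mask. */
static unsigned long clean_mfn(unsigned long mfn)
{
    if (mfn == INVALID_P2M_ENTRY)
        return 0;
    return mfn & ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
}
```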

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |   48 ++++--------------------------------------------
 1 files changed, 4 insertions(+), 44 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 875ca45..85fd96a 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -399,38 +399,14 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
 			mfn = 0;
 			flags = 0;
-		} else {
-			/*
-			 * Paramount to do this test _after_ the
-			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
-			 * IDENTITY_FRAME_BIT resolves to true.
-			 */
-			mfn &= ~FOREIGN_FRAME_BIT;
-			if (mfn & IDENTITY_FRAME_BIT) {
-				mfn &= ~IDENTITY_FRAME_BIT;
-				flags |= _PAGE_IOMAP;
-			}
-		}
+		} else
+			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
-static pteval_t iomap_pte(pteval_t val)
-{
-	if (val & _PAGE_PRESENT) {
-		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
-		pteval_t flags = val & PTE_FLAGS_MASK;
-
-		/* We assume the pte frame number is a MFN, so
-		   just use it as-is. */
-		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
-	}
-
-	return val;
-}
-
 __visible pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
@@ -441,9 +417,6 @@ __visible pteval_t xen_pte_val(pte_t pte)
 		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
 	}
 #endif
-	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
-		return pteval;
-
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
 
 __visible pte_t xen_make_pte(pteval_t pte)
 {
-	phys_addr_t addr = (pte & PTE_PFN_MASK);
 #if 0
 	/* If Linux is trying to set a WC pte, then map to the Xen WC.
 	 * If _PAGE_PAT is set, then it probably means it is really
@@ -496,19 +468,7 @@ __visible pte_t xen_make_pte(pteval_t pte)
 			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
 	}
 #endif
-	/*
-	 * Unprivileged domains are allowed to do IOMAPpings for
-	 * PCI passthrough, but not map ISA space.  The ISA
-	 * mappings are just dummy local mappings to keep other
-	 * parts of the kernel happy.
-	 */
-	if (unlikely(pte & _PAGE_IOMAP) &&
-	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
-		pte = iomap_pte(pte);
-	} else {
-		pte &= ~_PAGE_IOMAP;
-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2096,7 +2056,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuL-0001xY-7Q; Mon, 03 Feb 2014 17:02:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuI-0001tL-6j
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:02:03 +0000
Received: from [85.158.143.35:55200] by server-1.bemta-4.messagelabs.com id
	18/34-31661-88BCFE25; Mon, 03 Feb 2014 17:02:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391446918!2793035!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1016 invoked from network); 3 Feb 2014 17:01:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97370996"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-Jk;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:37 +0000
Message-ID: <1391446897-21998-9-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 8/8] x86: remove the Xen-specific _PAGE_IOMAP
	PTE flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
that were used to map I/O regions that are 1:1 in the p2m.  This
allowed Xen to obtain the correct PFN when converting the MFNs read
from a PTE back to their PFN.

Xen guests no longer use _PAGE_IOMAP for this.  Instead, mfn_to_pfn()
returns the correct PFN by using a combination of the m2p and p2m to
determine if an MFN corresponds to a 1:1 mapping in the p2m.

Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
future uses of the PTE flag.
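
The effect on __supported_pte_mask can be checked in isolation (the
auxiliary flag values here are assumptions; only the bit-10 value is
taken from the hunks below): before this patch the bare-metal mask
stripped bit 10 from every constructed PTE, and afterwards all bits
pass through, freeing bit 10 for reuse.

```c
#include <assert.h>
#include <stdint.h>

/* Bit 10, the flag this patch frees (value from the hunks below). */
#define _PAGE_IOMAP ((uint64_t)1 << 10)

/* __supported_pte_mask filters every PTE the kernel constructs. */
static uint64_t filter_pte(uint64_t pte, uint64_t supported_mask)
{
    return pte & supported_mask;
}

/* Before (64-bit): bit 10 stripped unless Xen re-enabled it. */
#define OLD_SUPPORTED (~_PAGE_IOMAP)
/* After: all bits pass through; bit 10 is available again. */
#define NEW_SUPPORTED (~(uint64_t)0)
```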

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
--
This depends on the preceding Xen changes, so I think it would be
best if this was acked by an x86 maintainer and merged via the Xen
tree.
---
 arch/x86/include/asm/pgtable_types.h |   12 ++++++------
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/mm/init_64.c                |    2 +-
 arch/x86/pci/i386.c                  |    2 --
 arch/x86/xen/enlighten.c             |    2 --
 5 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 1aa9ccd..b821f37 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -17,7 +17,7 @@
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
 #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
-#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
+#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
 #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
 #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
@@ -41,7 +41,7 @@
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
-#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
+#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
 #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
@@ -164,10 +164,10 @@
 #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
-#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
+#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
+#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
+#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
+#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index e395048..af7259c 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -537,7 +537,7 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 /* user-defined highmem size */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index f35c66c..9e6fa6d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
  * around without checking the pgd every time.
  */
 
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
+pteval_t __supported_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 int force_personality32;
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index db6b1ab..1f642d6 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		 */
 		prot |= _PAGE_CACHE_UC_MINUS;
 
-	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
-
 	vma->vm_page_prot = __pgprot(prot);
 
 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..97e0080 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1543,8 +1543,6 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
-	__supported_pte_mask |= _PAGE_IOMAP;
-
 	/*
 	 * Prevent page tables from being allocated in highmem, even
 	 * if CONFIG_HIGHPTE is enabled.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:02:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMuL-0001yK-Mg; Mon, 03 Feb 2014 17:02:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAMuH-0001tc-7s
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:02:03 +0000
Received: from [193.109.254.147:50997] by server-4.bemta-14.messagelabs.com id
	08/6F-32066-88BCFE25; Mon, 03 Feb 2014 17:02:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391446915!1699231!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24053 invoked from network); 3 Feb 2014 17:01:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:01:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97371000"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:01:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:01:54 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAMuA-00056S-FI;
	Mon, 03 Feb 2014 17:01:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 3 Feb 2014 17:01:32 +0000
Message-ID: <1391446897-21998-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 3/8] x86/xen: compactly store large identity
	ranges in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Large (multi-GB) identity ranges currently require a unique middle page
(filled with p2m_identity entries) per 1 GB region.

Similar to the common p2m_mid_missing middle page for large missing
regions, introduce a p2m_mid_identity page (filled with p2m_identity
entries) which can be used instead.

set_phys_range_identity() thus only needs to allocate new middle pages
at the beginning and end of the range.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |  140 +++++++++++++++++++++++++++++++++------------------
 1 files changed, 90 insertions(+), 50 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 8969b7c..3f45c27 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -36,7 +36,7 @@
  *  pfn_to_mfn(0xc0000)=0xc0000
  *
  * The benefit of this is, that we can assume for non-RAM regions (think
- * PCI BARs, or ACPI spaces), we can create mappings easily b/c we
+ * PCI BARs, or ACPI spaces), we can create mappings easily because we
  * get the PFN value to match the MFN.
  *
  * For this to work efficiently we have one new page p2m_identity and
@@ -60,7 +60,7 @@
  * There is also a digram of the P2M at the end that can help.
  * Imagine your E820 looking as so:
  *
- *                    1GB                                           2GB
+ *                    1GB                                           2GB    4GB
  * /-------------------+---------\/----\         /----------\    /---+-----\
  * | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
  * \-------------------+---------/\----/         \----------/    \---+-----/
@@ -77,9 +77,8 @@
  * of the PFN and the end PFN (263424 and 512256 respectively). The first step
  * is to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
  * covers 512^2 of page estate (1GB) and in case the start or end PFN is not
- * aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn
- * to end pfn.  We reserve_brk top leaf pages if they are missing (means they
- * point to p2m_mid_missing).
+ * aligned on 512^2*PAGE_SIZE (1GB) we reserve_brk new middle and leaf pages as
+ * required to split any existing p2m_mid_missing middle pages.
  *
  * With the E820 example above, 263424 is not 1GB aligned so we allocate a
  * reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
@@ -88,7 +87,7 @@
  * Next stage is to determine if we need to do a more granular boundary check
  * on the 4MB (or 2MB depending on architecture) off the start and end pfn's.
  * We check if the start pfn and end pfn violate that boundary check, and if
- * so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
+ * so reserve_brk a (p2m[x][y]) leaf page. This way we have a much finer
  * granularity of setting which PFNs are missing and which ones are identity.
  * In our example 263424 and 512256 both fail the check so we reserve_brk two
  * pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing"
@@ -102,9 +101,10 @@
  *
  * The next step is to walk from the start pfn to the end pfn setting
  * the IDENTITY_FRAME_BIT on each PFN. This is done in set_phys_range_identity.
- * If we find that the middle leaf is pointing to p2m_missing we can swap it
- * over to p2m_identity - this way covering 4MB (or 2MB) PFN space.  At this
- * point we do not need to worry about boundary aligment (so no need to
+ * If we find that the middle entry is pointing to p2m_missing we can swap it
+ * over to p2m_identity - this way covering 4MB (or 2MB) PFN space (and
+ * similarly swapping p2m_mid_missing for p2m_mid_identity for larger regions).
+ * At this point we do not need to worry about boundary alignment (so no need to
  * reserve_brk a middle page, figure out which PFNs are "missing" and which
  * ones are identity), as that has been done earlier.  If we find that the
  * middle leaf is not occupied by p2m_identity or p2m_missing, we dereference
@@ -118,6 +118,9 @@
  * considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
  * contain the INVALID_P2M_ENTRY value and are considered "missing."
  *
+ * Finally, the region beyond the end of the E820 (4 GB in this example)
+ * is set to be identity (in case there are MMIO regions placed here).
+ *
  * This is what the p2m ends up looking (for the E820 above) with this
  * fabulous drawing:
  *
@@ -129,21 +132,27 @@
  *  |-----|    \                      | [p2m_identity]+\\    | ....            |
  *  |  2  |--\  \-------------------->|  ...          | \\   \----------------/
  *  |-----|   \                       \---------------/  \\
- *  |  3  |\   \                                          \\  p2m_identity
- *  |-----| \   \-------------------->/---------------\   /-----------------\
- *  | ..  +->+                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
- *  \-----/ /                         | [p2m_identity]+-->| ..., ~0         |
- *         / /---------------\        | ....          |   \-----------------/
- *        /  | IDENTITY[@0]  |      /-+-[x], ~0, ~0.. |
- *       /   | IDENTITY[@256]|<----/  \---------------/
- *      /    | ~0, ~0, ....  |
- *     |     \---------------/
- *     |
- *   p2m_mid_missing           p2m_missing
- * /-----------------\     /------------\
- * | [p2m_missing]   +---->| ~0, ~0, ~0 |
- * | [p2m_missing]   +---->| ..., ~0    |
- * \-----------------/     \------------/
+ *  |  3  |-\  \                                          \\  p2m_identity [1]
+ *  |-----|  \  \-------------------->/---------------\   /-----------------\
+ *  | ..  |\  |                       | [p2m_identity]+-->| ~0, ~0, ~0, ... |
+ *  \-----/ | |                       | [p2m_identity]+-->| ..., ~0         |
+ *          | |                       | ....          |   \-----------------/
+ *          | |                       +-[x], ~0, ~0.. +\
+ *          | |                       \---------------/ \
+ *          | |                                          \-> /---------------\
+ *          | V  p2m_mid_missing       p2m_missing           | IDENTITY[@0]  |
+ *          | /-----------------\     /------------\         | IDENTITY[@256]|
+ *          | | [p2m_missing]   +---->| ~0, ~0, ...|         | ~0, ~0, ....  |
+ *          | | [p2m_missing]   +---->| ..., ~0    |         \---------------/
+ *          | | ...             |     \------------/
+ *          | \-----------------/
+ *          |
+ *          |     p2m_mid_identity
+ *          |   /-----------------\
+ *          \-->| [p2m_identity]  +---->[1]
+ *              | [p2m_identity]  +---->[1]
+ *              | ...             |
+ *              \-----------------/
  *
  * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
  */
@@ -187,13 +196,15 @@ static RESERVE_BRK_ARRAY(unsigned long, p2m_top_mfn, P2M_TOP_PER_PAGE);
 static RESERVE_BRK_ARRAY(unsigned long *, p2m_top_mfn_p, P2M_TOP_PER_PAGE);
 
 static RESERVE_BRK_ARRAY(unsigned long, p2m_identity, P2M_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_identity, P2M_MID_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long, p2m_mid_identity_mfn, P2M_MID_PER_PAGE);
 
 RESERVE_BRK(p2m_mid, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 
 /* We might hit two boundary violations at the start and end, at max each
  * boundary violation will require three middle nodes. */
-RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
+RESERVE_BRK(p2m_mid_extra, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
@@ -242,20 +253,20 @@ static void p2m_top_mfn_p_init(unsigned long **top)
 		top[i] = p2m_mid_missing_mfn;
 }
 
-static void p2m_mid_init(unsigned long **mid)
+static void p2m_mid_init(unsigned long **mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = p2m_missing;
+		mid[i] = leaf;
 }
 
-static void p2m_mid_mfn_init(unsigned long *mid)
+static void p2m_mid_mfn_init(unsigned long *mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = virt_to_mfn(p2m_missing);
+		mid[i] = virt_to_mfn(leaf);
 }
 
 static void p2m_init(unsigned long *p2m)
@@ -286,7 +297,9 @@ void __ref xen_build_mfn_list_list(void)
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_identity_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 
 		p2m_top_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
 		p2m_top_mfn_p_init(p2m_top_mfn_p);
@@ -295,7 +308,8 @@ void __ref xen_build_mfn_list_list(void)
 		p2m_top_mfn_init(p2m_top_mfn);
 	} else {
 		/* Reinitialise, mfn's all change after migration */
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 	}
 
 	for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += P2M_PER_PAGE) {
@@ -327,7 +341,7 @@ void __ref xen_build_mfn_list_list(void)
 			 * it too late.
 			 */
 			mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_mfn_init(mid_mfn_p);
+			p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 			p2m_top_mfn_p[topidx] = mid_mfn_p;
 		}
@@ -365,16 +379,17 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_init(p2m_missing);
+	p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_init(p2m_identity);
 
 	p2m_mid_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_mid_init(p2m_mid_missing);
+	p2m_mid_init(p2m_mid_missing, p2m_missing);
+	p2m_mid_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_mid_init(p2m_mid_identity, p2m_identity);
 
 	p2m_top = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_top_init(p2m_top);
 
-	p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_init(p2m_identity);
-
 	/*
 	 * The domain builder gives us a pre-constructed p2m array in
 	 * mfn_list for all the pages initially given to us, so we just
@@ -386,7 +401,7 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 		if (p2m_top[topidx] == p2m_mid_missing) {
 			unsigned long **mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_init(mid);
+			p2m_mid_init(mid, p2m_missing);
 
 			p2m_top[topidx] = mid;
 		}
@@ -545,7 +560,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid)
 			return false;
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		if (cmpxchg(top_p, p2m_mid_missing, mid) != p2m_mid_missing)
 			free_p2m_page(mid);
@@ -565,7 +580,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid_mfn)
 			return false;
 
-		p2m_mid_mfn_init(mid_mfn);
+		p2m_mid_mfn_init(mid_mfn, p2m_missing);
 
 		missing_mfn = virt_to_mfn(p2m_mid_missing_mfn);
 		mid_mfn_mfn = virt_to_mfn(mid_mfn);
@@ -649,7 +664,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	if (mid == p2m_mid_missing) {
 		mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		p2m_top[topidx] = mid;
 
@@ -658,7 +673,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	/* And the save/restore P2M tables.. */
 	if (mid_mfn_p == p2m_mid_missing_mfn) {
 		mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(mid_mfn_p);
+		p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
@@ -769,6 +784,24 @@ bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	return true;
 }
+
+static void __init early_split_p2m(unsigned long pfn)
+{
+	unsigned long mididx, idx;
+
+	mididx = p2m_mid_index(pfn);
+	idx = p2m_index(pfn);
+
+	/*
+	 * Allocate new middle and leaf pages if this pfn lies in the
+	 * middle of one.
+	 */
+	if (mididx || idx)
+		early_alloc_p2m_middle(pfn);
+	if (idx)
+		early_alloc_p2m(pfn, false);
+}
+
 unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 				      unsigned long pfn_e)
 {
@@ -786,15 +819,8 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_e > MAX_P2M_PFN)
 		pfn_e = MAX_P2M_PFN;
 
-	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
-		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
-		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-	{
-		WARN_ON(!early_alloc_p2m(pfn));
-	}
-
-	early_alloc_p2m_middle(pfn_s, true);
-	early_alloc_p2m_middle(pfn_e, true);
+	early_split_p2m(pfn_s);
+	early_split_p2m(pfn_e);
 
 	for (pfn = pfn_s; pfn < pfn_e; pfn++)
 		if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
@@ -828,8 +854,22 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	/* For sparse holes were the p2m leaf has real PFN along with
 	 * PCI holes, stick in the PFN as the MFN value.
+	 *
+	 * set_phys_range_identity() will have allocated new middle
+	 * and leaf pages as required, so an existing p2m_mid_missing
+	 * or p2m_missing means that the whole range will be identity and
+	 * these can be switched to p2m_mid_identity or p2m_identity.
 	 */
 	if (mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)) {
+		if (p2m_top[topidx] == p2m_mid_identity)
+			return true;
+
+		if (p2m_top[topidx] == p2m_mid_missing) {
+			WARN_ON(cmpxchg(&p2m_top[topidx], p2m_mid_missing,
+					p2m_mid_identity) != p2m_mid_missing);
+			return true;
+		}
+
 		if (p2m_top[topidx][mididx] == p2m_identity)
 			return true;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:04:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMwj-0002vS-JJ; Mon, 03 Feb 2014 17:04:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAMwi-0002v7-Fg
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:04:32 +0000
Received: from [85.158.139.211:43185] by server-12.bemta-5.messagelabs.com id
	10/BF-15415-F1CCFE25; Mon, 03 Feb 2014 17:04:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391447069!1372845!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11103 invoked from network); 3 Feb 2014 17:04:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:04:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="97372182"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 17:04:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Mon, 3 Feb 2014
	12:04:22 -0500
Message-ID: <1391447061.10515.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Mon, 3 Feb 2014 17:04:21 +0000
In-Reply-To: <CAAHg+HgnijUNKaV6bGOk4xQq1FybWBtBy+9ou=pzq6cqRBOoGA@mail.gmail.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
	<1390931022.31814.32.camel@kazak.uk.xensource.com>
	<CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
	<1390999123.31814.96.camel@kazak.uk.xensource.com>
	<CAAHg+HgnijUNKaV6bGOk4xQq1FybWBtBy+9ou=pzq6cqRBOoGA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 11:38 +0530, Pranavkumar Sawargaonkar wrote:
> Hi Ian,
> 
> On 29 January 2014 18:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-01-28 at 23:27 +0530, Pranavkumar Sawargaonkar wrote:
> >
> >> > I also don't see any patch to linux/Documentation/devicetree/bindings,
> >> > as was requested in that posting from 6 months ago. Where can I find
> >> > that?
> >> >
> >> > It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
> >> > hasn't landed?
> >> Yeah, it is dangling, and since the new patch is already posted I think we
> >> can wait for the final DT bindings.
> >
> > It seems from the thread that the final bindings are going to differ
> > significantly from what is implemented in Xen and proposed in the above
> > thread. (with a syscon driver that the reset driver references).
> >
> >> >> Now if you want this to be fixed , i can quickly submit a V7 in which
> >> >> mask field will be just hard-coded to 1 hence xen code will always
> >> >> work even if linux code does gets changed.
> >> >
> >> > Looks like the Linux driver uses 0xffffffff if the mask isn't given --
> >> > that seems like a good approach.
> >> >
> >> > I think we'll just have to accept that until the binding is specified
> >> > and documented (in linux/Documentation/devicetree/bindings) then we may
> >> > have to be prepared to change the Xen implementation to match the final
> >> > spec without regard to backwards compat. If we aren't happy with that
> >> > then I should revert the patch now and we will have to live without
> >> > reboot support in the meantime.
> >> Please do not revert the patch, I think we can go ahead with the current patch.
> >> Once the linux side is concluded I will fix minor changes in the xen code
> >> based on the new DT bindings.
> >
> > It doesn't sound to me like it is going to be minor changes.
> Yes, the bindings are changed in the new drivers, but now the question is what
> to do in the current state where the new driver is not submitted?
> 
> My take is we have 3 options:
> 1. Keep current reboot driver in xen as it is and use it with old
> bindings. (since that is the one merged in linux)
> 2. I will send a new patch (will take 1hr max for me to do it) with
> addresses hardcoded instead of reading them from the dts.
>     This will help xen to have a reboot driver for xgene.
> 3. Remove this driver completely from xen as of now.

None of the options are brilliant :-/

I think on balance #2 is probably the way to go.

#1 would set a precedent for using formally undefined bindings which I
think we should avoid.

#3 has obvious downsides, but given that we have already accepted the
functionality it seems a shame to revert it entirely.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:04:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMx0-000303-0i; Mon, 03 Feb 2014 17:04:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAMwy-0002zU-90
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:04:48 +0000
Received: from [85.158.139.211:6728] by server-3.bemta-5.messagelabs.com id
	B7/DC-13671-F2CCFE25; Mon, 03 Feb 2014 17:04:47 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391447085!1368374!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10476 invoked from network); 3 Feb 2014 17:04:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:04:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,773,1384300800"; d="scan'208";a="99295376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 03 Feb 2014 17:04:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 3 Feb 2014 12:04:44 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAMwu-0005AK-4o;
	Mon, 03 Feb 2014 17:04:44 +0000
Message-ID: <52EFCC2C.4080106@citrix.com>
Date: Mon, 3 Feb 2014 17:04:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] Xen-4.4-rc3 testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Over the weekend, Xen 4.4 had a nightly run from XenRT, and I am pleased to
announce that there appear to be no regressions between Xen-4.4 and the
version of Xen-4.3 which is our current main trunk.

This includes standard lifecycle testing on all XenServer supported
operating systems (All supported versions of Windows, Most supported
versions of Debian, Ubuntu, RHEL, CentOS, OEL, SLES), as well as
migration from older versions of XenServer (Xen-3.4 and Xen-4.1 based
versions).  Only qemu-trad is tested; qemu-upstream is still a 'TODO'
item to be used in XenServer.

Furthermore, there appear to be no new regressions in
bootability/stability across our set of hardware.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:07:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMzZ-0003Pm-Ju; Mon, 03 Feb 2014 17:07:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1WAMzY-0003PW-Bf
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 17:07:28 +0000
Received: from [85.158.137.68:39962] by server-16.bemta-3.messagelabs.com id
	09/D7-29917-FCCCFE25; Mon, 03 Feb 2014 17:07:27 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391447246!11928141!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16853 invoked from network); 3 Feb 2014 17:07:27 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:07:27 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so10292271qac.22
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 09:07:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:subject:date:message-id;
	bh=6ZgEOz9nF6kLzgRqdL56unJgsP0lH3mJyJ3+DdWjnMo=;
	b=YMlpgERssWGwYIJ3JWDnKmgeNfVKpX/Jf9/daCt0JRUTclwJ+88P+vqc0m0b/QFGXr
	T3YvCLcc4gsmWlW9Oh2g6OkOnSP1JbTUDFB77swQ0oZUc6gMQ9zeGVBgqTq1x90OkxzX
	UT8S6N6OOZvSAlmMZsjU61BL/THocU2UnSJ5hM1bpb4nPcImL01XCFQfdrI4/yZiqtLH
	SLxbYwc7nP7pWspxJuQR6jN5UoV2bbaud+yvFNAbcZXrTFw1dylXynqweeAQ17x8jbuX
	6MceretGq6YfoWckERnycIBWDdFX3XKNvRfI/EaJFEQWrYXogNXKSsrIMw0GRjXJtBzt
	/Mfw==
X-Received: by 10.224.172.133 with SMTP id l5mr58805684qaz.25.1391447245768;
	Mon, 03 Feb 2014 09:07:25 -0800 (PST)
Received: from build-external.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPSA id d7sm58180309qad.10.2014.02.03.09.07.24
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:07:25 -0800 (PST)
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: jbeulich@suse.com, george.dunlap@eu.citrix.com,
	xen-devel@lists.xenproject.org, jun.nakajima@intel.com,
	mukesh.rathor@oracle.com, yang.z.zhang@Intel.com
Date: Mon,  3 Feb 2014 12:03:20 -0500
Message-Id: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
Subject: [Xen-devel] [PATCH] Xen 4.4-rc3 regression with PVH compared to Xen
	4.4-rc2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I am hereby requesting a Xen 4.4 exemption for this bug-fix.

The PVH feature is considered experimental, but it would be good to have
it working out of the box without crashing the hypervisor.

Sadly that is not the case, as 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
causes a NULL pointer dereference when starting a guest with 'pvh=1'
in the guest config.

There are two ways of fixing this:
a) Add a '!xen_pvh_domain()' or '!xen_pvh_vcpu(current)' check in the path, or
b) Check for get_ioreq() returning NULL. This is actually done in other places
    in the hypervisor - so I chose to piggyback on that.

Thank you!


 xen/arch/x86/hvm/vmx/vvmx.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)


Konrad Rzeszutek Wilk (1):
      pvh: Fix regression caused by assumption that HVM paths MUST use io-backend device.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:07:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMzb-0003QA-0g; Mon, 03 Feb 2014 17:07:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1WAMzZ-0003Pj-D9
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 17:07:29 +0000
Received: from [85.158.143.35:37603] by server-1.bemta-4.messagelabs.com id
	07/7C-31661-0DCCFE25; Mon, 03 Feb 2014 17:07:28 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391447246!2807279!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26537 invoked from network); 3 Feb 2014 17:07:27 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:07:27 -0000
Received: by mail-qc0-f172.google.com with SMTP id c9so11749627qcz.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 09:07:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=bp4ZzDUz0t4H9eqixSFE78F9lAIVUtZe4ILsbV9H1p8=;
	b=xfUOf/eA9eyl+ZsjqlL3Kvwbo34iK6JA9tFt19Fm+5vll43Kpu0V/mFKGqUJ6pqFX0
	zH+C6u6urxjj5U0b6ThAd9mjf/KYQfHq0td3LHl3AH+rCjzRb/LCo1oejFvYZYWKwEFn
	hb2gGid5q9WnRrVow6dvuaGohHFOp0CbcHDGr0kbhtafsDnktmJKCPflq2Sq5r6pO7Nn
	6MgpV8AcKqOhEba7gd7lY+JI2E8IEh5Bkikgeawu8YS3xQH7qWjNZ7v8IYJIgAOFHTAk
	YgI3cBmq1jXPAzJCjalhFgl9ZxurkVCdF5C6YLzm6i/Kn5CGFKcQ5tJ/hDrJQb+ps6Df
	W2iQ==
X-Received: by 10.224.130.131 with SMTP id t3mr58074633qas.90.1391447246657;
	Mon, 03 Feb 2014 09:07:26 -0800 (PST)
Received: from build-external.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPSA id d7sm58180309qad.10.2014.02.03.09.07.25
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:07:26 -0800 (PST)
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: jbeulich@suse.com, george.dunlap@eu.citrix.com,
	xen-devel@lists.xenproject.org, jun.nakajima@intel.com,
	mukesh.rathor@oracle.com, yang.z.zhang@Intel.com
Date: Mon,  3 Feb 2014 12:03:21 -0500
Message-Id: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption that
	HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case - which means that we do not
necessarily have the IO-backend device (QEMU) enabled.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..2f516c9 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:07:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMzZ-0003Pm-Ju; Mon, 03 Feb 2014 17:07:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1WAMzY-0003PW-Bf
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 17:07:28 +0000
Received: from [85.158.137.68:39962] by server-16.bemta-3.messagelabs.com id
	09/D7-29917-FCCCFE25; Mon, 03 Feb 2014 17:07:27 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391447246!11928141!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16853 invoked from network); 3 Feb 2014 17:07:27 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:07:27 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so10292271qac.22
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 09:07:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:subject:date:message-id;
	bh=6ZgEOz9nF6kLzgRqdL56unJgsP0lH3mJyJ3+DdWjnMo=;
	b=YMlpgERssWGwYIJ3JWDnKmgeNfVKpX/Jf9/daCt0JRUTclwJ+88P+vqc0m0b/QFGXr
	T3YvCLcc4gsmWlW9Oh2g6OkOnSP1JbTUDFB77swQ0oZUc6gMQ9zeGVBgqTq1x90OkxzX
	UT8S6N6OOZvSAlmMZsjU61BL/THocU2UnSJ5hM1bpb4nPcImL01XCFQfdrI4/yZiqtLH
	SLxbYwc7nP7pWspxJuQR6jN5UoV2bbaud+yvFNAbcZXrTFw1dylXynqweeAQ17x8jbuX
	6MceretGq6YfoWckERnycIBWDdFX3XKNvRfI/EaJFEQWrYXogNXKSsrIMw0GRjXJtBzt
	/Mfw==
X-Received: by 10.224.172.133 with SMTP id l5mr58805684qaz.25.1391447245768;
	Mon, 03 Feb 2014 09:07:25 -0800 (PST)
Received: from build-external.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPSA id d7sm58180309qad.10.2014.02.03.09.07.24
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:07:25 -0800 (PST)
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: jbeulich@suse.com, george.dunlap@eu.citrix.com,
	xen-devel@lists.xenproject.org, jun.nakajima@intel.com,
	mukesh.rathor@oracle.com, yang.z.zhang@Intel.com
Date: Mon,  3 Feb 2014 12:03:20 -0500
Message-Id: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
Subject: [Xen-devel] [PATCH] Xen 4.4-rc3 regression with PVH compared to Xen
	4.4-rc2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I am hereby requesting a Xen 4.4 exemption for this bug-fix.

The PVH feature is considered experimental, but it would be good to have
it working out of the box without crashing the hypervisor.

Sadly that is not the case, as commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
causes a NULL pointer dereference when starting a guest with 'pvh=1'
in the guest config.

There are two ways of fixing this:
a) add an '!xen_pvh_domain()' or '!xen_pvh_vcpu(current)' check in the path, or
b) check for get_ioreq() returning NULL. This is already done in other places
    in the hypervisor - so I chose to piggyback on that.
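The short-circuit NULL guard of option (b) can be sketched as follows. This is a minimal standalone illustration with simplified stand-in types, not the real Xen structures; only the names and the guard pattern mirror the patch.

```c
#include <stddef.h>

/* Simplified stand-ins for the hypervisor types (hypothetical layout). */
enum { STATE_IOREQ_NONE = 0, STATE_IOREQ_READY = 1 };

typedef struct ioreq {
    int state;
} ioreq_t;

typedef struct vcpu {
    ioreq_t *ioreq;   /* NULL for a PVH vCPU: no QEMU io-backend attached */
} vcpu_t;

static ioreq_t *get_ioreq(vcpu_t *v)
{
    return v->ioreq;  /* may be NULL when no io-backend device exists */
}

/* Option (b): test the pointer before touching ->state, so a PVH vCPU
 * (ioreq == NULL) is treated the same as "no I/O emulation in flight". */
static int io_in_flight(vcpu_t *v)
{
    return get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE;
}
```

The `&&` short-circuit guarantees the `->state` load never happens through a NULL pointer, which is exactly what the one-line patch below adds.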

Thank you!


 xen/arch/x86/hvm/vmx/vvmx.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)


Konrad Rzeszutek Wilk (1):
      pvh: Fix regression caused by assumption that HVM paths MUST use io-backend device.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:07:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAMzb-0003QA-0g; Mon, 03 Feb 2014 17:07:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ketuzsezr@gmail.com>) id 1WAMzZ-0003Pj-D9
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 17:07:29 +0000
Received: from [85.158.143.35:37603] by server-1.bemta-4.messagelabs.com id
	07/7C-31661-0DCCFE25; Mon, 03 Feb 2014 17:07:28 +0000
X-Env-Sender: ketuzsezr@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391447246!2807279!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26537 invoked from network); 3 Feb 2014 17:07:27 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:07:27 -0000
Received: by mail-qc0-f172.google.com with SMTP id c9so11749627qcz.31
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 09:07:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=bp4ZzDUz0t4H9eqixSFE78F9lAIVUtZe4ILsbV9H1p8=;
	b=xfUOf/eA9eyl+ZsjqlL3Kvwbo34iK6JA9tFt19Fm+5vll43Kpu0V/mFKGqUJ6pqFX0
	zH+C6u6urxjj5U0b6ThAd9mjf/KYQfHq0td3LHl3AH+rCjzRb/LCo1oejFvYZYWKwEFn
	hb2gGid5q9WnRrVow6dvuaGohHFOp0CbcHDGr0kbhtafsDnktmJKCPflq2Sq5r6pO7Nn
	6MgpV8AcKqOhEba7gd7lY+JI2E8IEh5Bkikgeawu8YS3xQH7qWjNZ7v8IYJIgAOFHTAk
	YgI3cBmq1jXPAzJCjalhFgl9ZxurkVCdF5C6YLzm6i/Kn5CGFKcQ5tJ/hDrJQb+ps6Df
	W2iQ==
X-Received: by 10.224.130.131 with SMTP id t3mr58074633qas.90.1391447246657;
	Mon, 03 Feb 2014 09:07:26 -0800 (PST)
Received: from build-external.dumpdata.com
	(50-195-21-189-static.hfc.comcastbusiness.net. [50.195.21.189])
	by mx.google.com with ESMTPSA id d7sm58180309qad.10.2014.02.03.09.07.25
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:07:26 -0800 (PST)
From: Konrad Rzeszutek Wilk <konrad@kernel.org>
To: jbeulich@suse.com, george.dunlap@eu.citrix.com,
	xen-devel@lists.xenproject.org, jun.nakajima@intel.com,
	mukesh.rathor@oracle.com, yang.z.zhang@Intel.com
Date: Mon,  3 Feb 2014 12:03:21 -0500
Message-Id: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
X-Mailer: git-send-email 1.7.7.6
In-Reply-To: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
Subject: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption that
	HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are taken only by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.
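The small faulting linear address (0x1e) is the signature of reading a struct field through a NULL pointer: the CPU faults at the field's offset from address zero. The sketch below uses a purely hypothetical layout (not the real Xen ioreq_t) just to show how such an address arises.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout -- NOT the real ioreq_t. It only demonstrates that
 * ((struct *)NULL)->field faults at the field's offset from the start of
 * the struct, producing a tiny "pagetable walk" address like 0x1e. */
typedef struct fake_ioreq {
    uint64_t addr;      /* offset 0x00 */
    uint64_t data;      /* offset 0x08 */
    uint32_t count;     /* offset 0x10 */
    uint32_t size;      /* offset 0x14 */
    uint8_t  flags[6];  /* offsets 0x18..0x1d */
    uint8_t  state;     /* offset 0x1e: the address the CPU would fault on */
} fake_ioreq_t;

/* Address accessed by a hypothetical ((fake_ioreq_t *)NULL)->state read. */
static uintptr_t null_deref_address(void)
{
    return offsetof(fake_ioreq_t, state);
}
```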

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..2f516c9 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:17:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:17:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAN9C-0003xB-5R; Mon, 03 Feb 2014 17:17:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAN9A-0003x6-06
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:17:24 +0000
Received: from [193.109.254.147:24562] by server-14.bemta-14.messagelabs.com
	id E1/A1-29228-32FCFE25; Mon, 03 Feb 2014 17:17:23 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391447841!1710506!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3276 invoked from network); 3 Feb 2014 17:17:22 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 17:17:22 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13HH94L011509
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 17:17:10 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s13HH88a018996
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 17:17:08 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13HH7dX018963; Mon, 3 Feb 2014 17:17:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 09:17:07 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 278A91BFA0B; Mon,  3 Feb 2014 12:17:06 -0500 (EST)
Date: Mon, 3 Feb 2014 12:17:06 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140203171706.GA6043@phenom.dumpdata.com>
References: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391446897-21998-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: x86@kernel.org, xen-devel@lists.xen.org, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] [PATCHv4 0/8] x86/xen: fixes for mapping high MMIO
 regions (and remove _PAGE_IOMAP)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 05:01:29PM +0000, David Vrabel wrote:
> [ x86 maintainers, this is predominately a Xen series but the end
> result is the _PAGE_IOMAP PTE flag is removed. See patch #8. ]
> 
> This is a fix for the problems with mapping high MMIO regions in certain
> cases (e.g., the RDMA drivers), as not all mappers were specifying
> _PAGE_IOMAP, which meant no valid MFN could be found and the resulting
> PTEs would be set as not present, causing subsequent faults.
> 
> It assumes that anything that isn't RAM (whether ballooned out or not)
> is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
> region after the end of the E820 map and the region beyond the end of
> the p2m.  Ballooned frames are still marked as missing in the p2m as
> before.
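
The identity rule described in the paragraph above can be sketched as a toy lookup. This is a minimal illustration under assumed names and a toy p2m table, not the real Linux p2m code:

```c
#define INVALID_MFN (~0UL)   /* "missing" marker, as for ballooned frames */
#define MAX_RAM_PFN 4UL      /* toy end of the E820 RAM map */

/* Toy p2m table: entry 2 is ballooned out and therefore missing. */
static unsigned long p2m[MAX_RAM_PFN] = { 10, 11, INVALID_MFN, 13 };

static unsigned long pfn_to_mfn_sketch(unsigned long pfn)
{
    if (pfn >= MAX_RAM_PFN)   /* beyond E820/p2m: assume I/O, map 1:1 */
        return pfn;
    return p2m[pfn];          /* RAM: normal lookup; ballooned stays missing */
}
```
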
> 
> As a follow-on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
> not use the _PAGE_IOMAP PTE flag, so MFN-to-PFN and PFN-to-MFN
> translations will now do the right thing for all I/O regions.  This
> means the Xen-specific _PAGE_IOMAP can be removed.
> 
> This series has been tested (in dom0) on all unique machines we have
> in our test lab (~100 machines), some of which have PCI devices with
> BARs above the end of RAM.

This looks good to me and thank you for taking a stab at it.

In the coming weeks I can run some more tests with this patchset where:
 - it is used in a PV PCI domU.
 - it is used in a PV PCI domU with backend devices (say, a NIC
   is exported to it and said NIC is the backend for other PV devices).
 - it is used on a regular PV guest while doing ballooning, migration, etc.
 - get cycles on the box that has InfiniBand MMIOs in the 60TB range.
  
> 
> Note this does not fix a 32-bit dom0 trying to access BARs above 16 TB,
> as this is caused by MFNs/PFNs being limited to 32 bits (unsigned
> long).
> 
> You may find it useful to apply patch #3 to more easily review the
> updated p2m diagram.
> 
> Changes in v4:
> - fix p2m_mid_identity initialization.
> 
> Changes in v3 (not posted):
> - use correct end of e820
> - fix xen_remap_domain_mfn_range()
> 
> Changes in v2:
> - fix to actually set end-of-RAM to 512 GiB region as 1:1.
> - introduce p2m_mid_identity to efficiently store large 1:1 regions.
> - Split the _PAGE_IOMAP patch into Xen and generic x86 halves.
> 
> David
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:25:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANGm-0004Vh-3z; Mon, 03 Feb 2014 17:25:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WANGl-0004Vc-8j
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:25:15 +0000
Received: from [193.109.254.147:20624] by server-15.bemta-14.messagelabs.com
	id 12/CA-10839-AF0DFE25; Mon, 03 Feb 2014 17:25:14 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391448312!1703677!1
X-Originating-IP: [209.85.216.177]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29056 invoked from network); 3 Feb 2014 17:25:13 -0000
Received: from mail-qc0-f177.google.com (HELO mail-qc0-f177.google.com)
	(209.85.216.177)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:25:13 -0000
Received: by mail-qc0-f177.google.com with SMTP id i8so11492758qcq.36
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:25:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=YP97tcUgecTajN9gDJAVPm1KVfUlG5Wnq6xpBJWzr5Q=;
	b=LuB1BHsRlUQJIGGImFXHH0mtDdGp4QfK37uq5o/vHioT0bhcP3jkHKg0FCDfZKVWVu
	+UR9x2rhhV9kELtZDdrXcTsT8qlg/HzITx1u/c3FwIgxEISnl2TvipMlCfqMlKxizzsR
	A/sBDKnq9uaOPe2koniPjALRAFYI39lJ3dAS6mOBEC7dNw6AusPbnH99UHRtN13Td2xt
	iWk5xXvRni4G8TpAiRCrNjFjbASLlDGK0cu1wSO9VOlrg1xQSfBU+BisXh2Ssz5lHy+p
	eTySnU/qDJ4YzPTk+319t/SvOMxF0vonEEn2PY7k4Et102Wa1EcUzVS9SsHDR8uOBDNP
	wT8w==
X-Gm-Message-State: ALoCoQnNElBGMlfSOauc1ryOHQ9/TH4EmtlMbPCKeIL40euLH+6X+ijQq92I3BDYkWICuVNDBZQS
MIME-Version: 1.0
X-Received: by 10.224.72.72 with SMTP id l8mr58467413qaj.51.1391448311920;
	Mon, 03 Feb 2014 09:25:11 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Mon, 3 Feb 2014 09:25:11 -0800 (PST)
In-Reply-To: <1391447061.10515.63.camel@kazak.uk.xensource.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
	<1390931022.31814.32.camel@kazak.uk.xensource.com>
	<CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
	<1390999123.31814.96.camel@kazak.uk.xensource.com>
	<CAAHg+HgnijUNKaV6bGOk4xQq1FybWBtBy+9ou=pzq6cqRBOoGA@mail.gmail.com>
	<1391447061.10515.63.camel@kazak.uk.xensource.com>
Date: Mon, 3 Feb 2014 22:55:11 +0530
Message-ID: <CAAHg+Hj4S7LFEQdur4Zg6hgGfm67ffMUePyT8EVa4JivUn_9sg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 3 February 2014 22:34, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-01-30 at 11:38 +0530, Pranavkumar Sawargaonkar wrote:
>> Hi Ian,
>>
>> On 29 January 2014 18:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Tue, 2014-01-28 at 23:27 +0530, Pranavkumar Sawargaonkar wrote:
>> >
>> >> > I also don't see any patch to linux/Documentation/devicetree/bindings,
>> >> > as was requested in that posting from 6 months ago. Where can I find
>> >> > that?
>> >> >
>> >> > It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
>> >> > hasn't landed?
>> >> Yeah, it is dangling, and since the new patch is already posted I think we
>> >> can wait for the final DT bindings.
>> >
>> > It seems from the thread that the final bindings are going to differ
>> > significantly from what is implemented in Xen and proposed in the above
>> > thread. (with a syscon driver that the reset driver references).
>> >
>> >> >> Now if you want this to be fixed, I can quickly submit a V7 in which
>> >> >> the mask field will be hard-coded to 1, so the Xen code will always
>> >> >> work even if the Linux code does get changed.
>> >> >
>> >> > Looks like the Linux driver uses 0xffffffff if the mask isn't given --
>> >> > that seems like a good approach.
>> >> >
>> >> > I think we'll just have to accept that until the binding is specified
>> >> > and documented (in linux/Documentation/devicetree/bindings) then we may
>> >> > have to be prepared to change the Xen implementation to match the final
>> >> > spec without regard to backwards compat. If we aren't happy with that
>> >> > then I should revert the patch now and we will have to live without
>> >> > reboot support in the meantime.
>> >> Please do not revert the patch; I think we can go ahead with the current patch.
>> >> Once the Linux side is concluded I will fix the minor changes in the Xen code
>> >> based on the new DT bindings.
>> >
>> > It doesn't sound to me like it is going to be minor changes.
>> Yes, the bindings are changed in the new driver, but the question now is what to do
>> in the current state, where the new driver is not yet submitted?
>>
>> My take is we have 3 options:
>> 1. Keep the current reboot driver in Xen as it is and use it with the old
>> bindings (since that is the one merged in Linux).
>> 2. I will send a new patch (it will take 1 hr max for me to do it) with
>> the addresses hardcoded instead of reading them from the dts.
>>     This will help Xen to have a reboot driver for xgene.
>> 3. Remove this driver completely from Xen for now.
>
> None of the options are brilliant :-/
>
> I think on balance #2 is probably the way to go.
Even I think this is the best way among all these patchy options :P
Tomorrow I will send a new patch that removes the dts stuff from the Xen
driver, so that it will always work irrespective of the Linux side.
Once the Linux side comes to a conclusion and is merged, I will reintroduce
the dts stuff in this driver.

>
> #1 would set a precedent for using formally undefined bindings which I
> think we should avoid.
>
> #3 has obvious downsides, but given that we have already accepted the
> functionality it seems a shame to revert it entirely.
>
> Ian.
>
-
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:25:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANGm-0004Vh-3z; Mon, 03 Feb 2014 17:25:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WANGl-0004Vc-8j
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:25:15 +0000
Received: from [193.109.254.147:20624] by server-15.bemta-14.messagelabs.com
	id 12/CA-10839-AF0DFE25; Mon, 03 Feb 2014 17:25:14 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391448312!1703677!1
X-Originating-IP: [209.85.216.177]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29056 invoked from network); 3 Feb 2014 17:25:13 -0000
Received: from mail-qc0-f177.google.com (HELO mail-qc0-f177.google.com)
	(209.85.216.177)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:25:13 -0000
Received: by mail-qc0-f177.google.com with SMTP id i8so11492758qcq.36
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:25:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=YP97tcUgecTajN9gDJAVPm1KVfUlG5Wnq6xpBJWzr5Q=;
	b=LuB1BHsRlUQJIGGImFXHH0mtDdGp4QfK37uq5o/vHioT0bhcP3jkHKg0FCDfZKVWVu
	+UR9x2rhhV9kELtZDdrXcTsT8qlg/HzITx1u/c3FwIgxEISnl2TvipMlCfqMlKxizzsR
	A/sBDKnq9uaOPe2koniPjALRAFYI39lJ3dAS6mOBEC7dNw6AusPbnH99UHRtN13Td2xt
	iWk5xXvRni4G8TpAiRCrNjFjbASLlDGK0cu1wSO9VOlrg1xQSfBU+BisXh2Ssz5lHy+p
	eTySnU/qDJ4YzPTk+319t/SvOMxF0vonEEn2PY7k4Et102Wa1EcUzVS9SsHDR8uOBDNP
	wT8w==
X-Gm-Message-State: ALoCoQnNElBGMlfSOauc1ryOHQ9/TH4EmtlMbPCKeIL40euLH+6X+ijQq92I3BDYkWICuVNDBZQS
MIME-Version: 1.0
X-Received: by 10.224.72.72 with SMTP id l8mr58467413qaj.51.1391448311920;
	Mon, 03 Feb 2014 09:25:11 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Mon, 3 Feb 2014 09:25:11 -0800 (PST)
In-Reply-To: <1391447061.10515.63.camel@kazak.uk.xensource.com>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>
	<1390924071.7753.115.camel@kazak.uk.xensource.com>
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>
	<1390931022.31814.32.camel@kazak.uk.xensource.com>
	<CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>
	<1390999123.31814.96.camel@kazak.uk.xensource.com>
	<CAAHg+HgnijUNKaV6bGOk4xQq1FybWBtBy+9ou=pzq6cqRBOoGA@mail.gmail.com>
	<1391447061.10515.63.camel@kazak.uk.xensource.com>
Date: Mon, 3 Feb 2014 22:55:11 +0530
Message-ID: <CAAHg+Hj4S7LFEQdur4Zg6hgGfm67ffMUePyT8EVa4JivUn_9sg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 3 February 2014 22:34, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-01-30 at 11:38 +0530, Pranavkumar Sawargaonkar wrote:
>> Hi Ian,
>>
>> On 29 January 2014 18:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Tue, 2014-01-28 at 23:27 +0530, Pranavkumar Sawargaonkar wrote:
>> >
>> >> > I also don't see any patch to linux/Documentation/devicetree/bindings,
>> >> > as was requested in that posting from 6 months ago. Where can I find
>> >> > that?
>> >> >
>> >> > It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
>> >> > hasn't landed?
>> >> Yeah, it is dangling, and since a new patch has already been posted I
>> >> think we can wait for the final DT bindings.
>> >
>> > It seems from the thread that the final bindings are going to differ
>> > significantly from what is implemented in Xen and proposed in the above
>> > thread. (with a syscon driver that the reset driver references).
>> >
>> >> >> Now if you want this to be fixed, I can quickly submit a V7 in which
>> >> >> the mask field is simply hard-coded to 1, so the Xen code will always
>> >> >> work even if the Linux code does get changed.
>> >> >
>> >> > Looks like the Linux driver uses 0xffffffff if the mask isn't given --
>> >> > that seems like a good approach.
>> >> >
>> >> > I think we'll just have to accept that until the binding is specified
>> >> > and documented (in linux/Documentation/devicetree/bindings) then we may
>> >> > have to be prepared to change the Xen implementation to match the final
>> >> > spec without regard to backwards compat. If we aren't happy with that
>> >> > then I should revert the patch now and we will have to live without
>> >> > reboot support in the meantime.
>> >> Please do not revert the patch; I think we can go ahead with the current patch.
>> >> Once the Linux side is concluded, I will make the minor changes in the Xen
>> >> code based on the new DT bindings.
>> >
>> > It doesn't sound to me like these are going to be minor changes.
>> Yes, the bindings have changed in the new driver, but the question now is
>> what to do in the current state, where the new driver has not been
>> submitted yet.
>>
>> My take is that we have three options:
>> 1. Keep the current reboot driver in Xen as it is and use it with the old
>> bindings (since those are the ones merged in Linux).
>> 2. I will send a new patch (it will take me 1 hour at most) with the
>> addresses hard-coded instead of read from the DTS.
>>     This will let Xen keep a reboot driver for X-Gene.
>> 3. Remove this driver from Xen completely for now.
>
> None of the options are brilliant :-/
>
> I think on balance #2 is probably the way to go.
Even I think this is the best way among all these patchy options :P
Tomorrow I will send a new patch that removes the DTS parsing from the
Xen driver, so that it always works irrespective of the Linux side.
Once the Linux side reaches a conclusion and the bindings are merged, I
will reintroduce the DTS handling in this driver.

>
> #1 would set a precedent for using formally undefined bindings which I
> think we should avoid.
>
> #3 has obvious downsides, but given that we have already accepted the
> functionality it seems a shame to revert it entirely.
>
> Ian.
>
-
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:33:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:33:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANO3-0004fV-1z; Mon, 03 Feb 2014 17:32:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1WANO2-0004fQ-3m
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:32:46 +0000
Received: from [85.158.139.211:31982] by server-3.bemta-5.messagelabs.com id
	B3/66-13671-DB2DFE25; Mon, 03 Feb 2014 17:32:45 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391448762!1358958!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=2.9 required=7.0 tests=HTML_MESSAGE,
	HTML_OBFUSCATE_05_10,INTERRUPTUS,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18482 invoked from network); 3 Feb 2014 17:32:43 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Feb 2014 17:32:43 -0000
Received: from mail-ob0-f180.google.com ([209.85.214.180]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUu/SubMBYcDbEGwklynVt2h2R2ZbGWGF@postini.com;
	Mon, 03 Feb 2014 09:32:43 PST
Received: by mail-ob0-f180.google.com with SMTP id wp4so8010024obc.25
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:32:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=DqTV2TITtlwbuIV8dVsQW7dRF01aa2nzfHtNdfO0oOs=;
	b=CqazZCL5xRm8isGU0ieAzzDs2FxbtkMiynvgnUFf6x+SQFnMO1wZdhlxsR2J90YYRV
	PjdogWPQ6iNv+X12UBwQXUgPSw4/AuACL8LlFV8b8sNvV1wF0DfLkjxbXLLXy9DYvExP
	Y2wi8lb0H179tDmHfXMe8hFhtuVHN3VKgbxyE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=DqTV2TITtlwbuIV8dVsQW7dRF01aa2nzfHtNdfO0oOs=;
	b=ftQuJgWFGbKLIHIX5/MELywjnux/sWUcm2hocL2x/d0v6CtTgWxhohvRhaIGsnmms/
	aZCP5RF6kO/EIlYzqYMWUwWSRtRIMCUbUseSPhPMjrJcyz7hG27LaYYKVr+wJ/XcrEBY
	MMEoEGw+6YXRIIpBbb8WMlOOfXwzi+wduB+xTbvBmVgzD8VVR9pXKNc9QdBDz7S824TG
	ryusengZfTV+bImE1brsQv0TX+k3IAvkT+x3/LxdGndw359LrR3f+tPdyABh4btI44tg
	0MpkL2cmDnYoV8nojZ7unO49dQB655ox1F1JkbfPgBQ5WBOvukEmM2+dGOqcKyL4Dtb4
	5Nzg==
X-Gm-Message-State: ALoCoQnU1dANr5Dwh4j1q2lQyDnmb/68xfMQx1C3/oHRLGxDKuKykAodXfQywPyk03/dUREDQ2GHc6gZm7a/9WL6Td+16115tDSOzqL+WEOTFVF9CMeM2Anq/AyRXZO/ViaCXkjxwyooo0iD1AT2mDOi5NH8vayWbw3AkmWsQFqxXLB9sYJ0YNQ=
X-Received: by 10.60.148.197 with SMTP id tu5mr30672753oeb.11.1391448761482;
	Mon, 03 Feb 2014 09:32:41 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.148.197 with SMTP id tu5mr30672724oeb.11.1391448761136;
	Mon, 03 Feb 2014 09:32:41 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Mon, 3 Feb 2014 09:32:41 -0800 (PST)
Date: Mon, 3 Feb 2014 19:32:41 +0200
Message-ID: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1279582804627214262=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1279582804627214262==
Content-Type: multipart/alternative; boundary=047d7b2e3d0021f08804f183e982

--047d7b2e3d0021f08804f183e982
Content-Type: text/plain; charset=ISO-8859-1

Hi,

Has anyone used xentrace on ARM with HVM domains? As far as I can tell, it
fails to map the trace buffers from the Xen restricted heap:

An xc_map_foreign_batch() call with DOMID_XEN permissions leads to
xenmem_add_to_physmap_one() and then to rcu_lock_domain_by_any_id(), which
fails to find DOMID_XEN in the domain hash (and it does not appear that
dummy domains are ever added to this hash). In fact, I don't see how this
could work at all, since there are no obvious checks for either the
architecture or the domain type (PV or HVM) along the way.

Any advice is appreciated, thanks in advance.

Suikov Pavlo
GlobalLogic
M +38.066.667.1296  S psujkov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--047d7b2e3d0021f08804f183e982--


--===============1279582804627214262==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1279582804627214262==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 17:34:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANQ8-0004li-R6; Mon, 03 Feb 2014 17:34:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WANQ7-0004lc-6F
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:34:55 +0000
Received: from [85.158.143.35:8794] by server-3.bemta-4.messagelabs.com id
	C6/4D-11539-E33DFE25; Mon, 03 Feb 2014 17:34:54 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391448891!2809384!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10533 invoked from network); 3 Feb 2014 17:34:53 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 17:34:53 -0000
Received: from mail-ee0-f46.google.com ([74.125.83.46]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUu/TO73g3jA7Ye4hbZ4EW4WpXZbtpoib@postini.com;
	Mon, 03 Feb 2014 09:34:53 PST
Received: by mail-ee0-f46.google.com with SMTP id c13so3721585eek.5
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:34:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google; h=from:to:subject:date:message-id;
	bh=zqL5PFIB1laPj3mSl1iXNfpX+MwGADAygY3KmTyCjIM=;
	b=cDX1G7W/Pz+aK2qisrfBIUlV9NZXQDC63jwq0fKbLsFN6ZN9smq9zvi3BNUmYS4LjY
	gryXKWBQmSDHjvpLgol0xrbL4izp5trKzBwbxz8v3mlRu44UGfxJdRu+ikUvyqsCjawt
	P0asKa+T1yO7kCVsJEn7dtJlXs0STBrnY2wK8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id;
	bh=zqL5PFIB1laPj3mSl1iXNfpX+MwGADAygY3KmTyCjIM=;
	b=DOidNxygpVhRFUygp2jA7wfOoxvC7fNkfg0wjkL0MAjcYS6mBKunl6thrWjFElVmck
	I0SsVod/AnqsaHkTFA5/SYLpJcYO+YN4iEq5KLKeHXTVIKsXKoMVLRddeNJ42Sk4sMuF
	EK3cpKXuv/NKVMIhbYZ4UFFUAWhz8A3ZOZxVxsFtxjEu/42tg1RL8oTPy63MB7KsDRmS
	AoSRwoHlI0FTfXWGjv/q4GMSeiEr+AiYW8MuPZvUBj0kwZNN2yQEjGC9qZ7IC7rFlxe/
	Ugz366yRjo1G4JWKdyZ5LJlI+KuS8jSg+uzhlRNaREBF3OyLTUC5IB42avB+yT+ZO1A+
	ingQ==
X-Gm-Message-State: ALoCoQlp3oEtrweaIadqGaAY8ELJfgtYNs9eIJToX/bv5wVzbr8yrd4wWctxYiOOv6fsVjswQIDeG6/Or8mtZGWRxsZPXkViUn2uL13YvUQaRbFk56Y/uwK09VDmXkPzmM9lxo8oxUIjQUvUUMWFjgBCRUx4dWvXreBN97CZQ1mKhXskxWdOpyA=
X-Received: by 10.14.95.134 with SMTP id p6mr4431401eef.73.1391448889086;
	Mon, 03 Feb 2014 09:34:49 -0800 (PST)
X-Received: by 10.14.95.134 with SMTP id p6mr4431392eef.73.1391448888975;
	Mon, 03 Feb 2014 09:34:48 -0800 (PST)
Received: from uglx0186693.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id 8sm77783295eeq.15.2014.02.03.09.34.46
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:34:47 -0800 (PST)
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:33:48 +0200
Message-Id: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The possible deadlock scenario is explained below:

non interrupt context:    interrupt context       interrupt context
                          (CPU0):                 (CPU1):
vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
  |                         |                       |
  vgic_disable_irqs()       ...                     ...
    |                         |                       |
    gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
    |  ...                      |                       |
    |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
    |  ...                        ...                     ...
    |  ... <----------------.---- spin_lock_irqsave(...)  ...
    |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
    |  ...                  . .       Oops! The lock has already been taken.
    |  spin_unlock(...)     . .
    |  ...                  . .
    gic_irq_disable()       . .
       ...                  . .
       spin_lock(...)       . .
       ...                  . .
       ... <----------------. .
       ... <------------------.
       ...
       spin_unlock(...)

Since gic_remove_from_queues() and gic_irq_disable() are called from
non-interrupt context and acquire the same lock as gic_set_guest_irq(),
which is called from interrupt context, we must disable interrupts in
these functions to avoid possible deadlocks.

Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
---
 xen/arch/arm/gic.c |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index c44a4d0..7d83b0c 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
 static void gic_irq_disable(struct irq_desc *desc)
 {
     int irq = desc->irq;
+    unsigned long flags;
 
-    spin_lock(&desc->lock);
+    spin_lock_irqsave(&desc->lock, flags);
     spin_lock(&gic.lock);
     /* Disable routing */
     GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
     desc->status |= IRQ_DISABLED;
     spin_unlock(&gic.lock);
-    spin_unlock(&desc->lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
 }
 
 static unsigned int gic_irq_startup(struct irq_desc *desc)
@@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
 void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
+    unsigned long flags;
 
-    spin_lock(&gic.lock);
+    spin_lock_irqsave(&gic.lock, flags);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock(&gic.lock);
+    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:34:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANQ8-0004li-R6; Mon, 03 Feb 2014 17:34:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WANQ7-0004lc-6F
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:34:55 +0000
Received: from [85.158.143.35:8794] by server-3.bemta-4.messagelabs.com id
	C6/4D-11539-E33DFE25; Mon, 03 Feb 2014 17:34:54 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391448891!2809384!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10533 invoked from network); 3 Feb 2014 17:34:53 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 17:34:53 -0000
Received: from mail-ee0-f46.google.com ([74.125.83.46]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUu/TO73g3jA7Ye4hbZ4EW4WpXZbtpoib@postini.com;
	Mon, 03 Feb 2014 09:34:53 PST
Received: by mail-ee0-f46.google.com with SMTP id c13so3721585eek.5
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:34:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google; h=from:to:subject:date:message-id;
	bh=zqL5PFIB1laPj3mSl1iXNfpX+MwGADAygY3KmTyCjIM=;
	b=cDX1G7W/Pz+aK2qisrfBIUlV9NZXQDC63jwq0fKbLsFN6ZN9smq9zvi3BNUmYS4LjY
	gryXKWBQmSDHjvpLgol0xrbL4izp5trKzBwbxz8v3mlRu44UGfxJdRu+ikUvyqsCjawt
	P0asKa+T1yO7kCVsJEn7dtJlXs0STBrnY2wK8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:subject:date:message-id;
	bh=zqL5PFIB1laPj3mSl1iXNfpX+MwGADAygY3KmTyCjIM=;
	b=DOidNxygpVhRFUygp2jA7wfOoxvC7fNkfg0wjkL0MAjcYS6mBKunl6thrWjFElVmck
	I0SsVod/AnqsaHkTFA5/SYLpJcYO+YN4iEq5KLKeHXTVIKsXKoMVLRddeNJ42Sk4sMuF
	EK3cpKXuv/NKVMIhbYZ4UFFUAWhz8A3ZOZxVxsFtxjEu/42tg1RL8oTPy63MB7KsDRmS
	AoSRwoHlI0FTfXWGjv/q4GMSeiEr+AiYW8MuPZvUBj0kwZNN2yQEjGC9qZ7IC7rFlxe/
	Ugz366yRjo1G4JWKdyZ5LJlI+KuS8jSg+uzhlRNaREBF3OyLTUC5IB42avB+yT+ZO1A+
	ingQ==
X-Gm-Message-State: ALoCoQlp3oEtrweaIadqGaAY8ELJfgtYNs9eIJToX/bv5wVzbr8yrd4wWctxYiOOv6fsVjswQIDeG6/Or8mtZGWRxsZPXkViUn2uL13YvUQaRbFk56Y/uwK09VDmXkPzmM9lxo8oxUIjQUvUUMWFjgBCRUx4dWvXreBN97CZQ1mKhXskxWdOpyA=
X-Received: by 10.14.95.134 with SMTP id p6mr4431401eef.73.1391448889086;
	Mon, 03 Feb 2014 09:34:49 -0800 (PST)
X-Received: by 10.14.95.134 with SMTP id p6mr4431392eef.73.1391448888975;
	Mon, 03 Feb 2014 09:34:48 -0800 (PST)
Received: from uglx0186693.synapse.com ([195.238.92.241])
	by mx.google.com with ESMTPSA id 8sm77783295eeq.15.2014.02.03.09.34.46
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:34:47 -0800 (PST)
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: xen-devel@lists.xen.org
Date: Mon,  3 Feb 2014 19:33:48 +0200
Message-Id: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
X-Mailer: git-send-email 1.7.9.5
Subject: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The possible deadlock scenario is explained below:

non interrupt context:    interrupt contex        interrupt context
                          (CPU0):                 (CPU1):
vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
  |                         |                       |
  vgic_disable_irqs()       ...                     ...
    |                         |                       |
    gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
    |  ...                      |                       |
    |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
    |  ...                        ...                     ...
    |  ... <----------------.---- spin_lock_irqsave(...)  ...
    |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
    |  ...                  . .       Oops! The lock has already been taken.
    |  spin_unlock(...)     . .
    |  ...                  . .
    gic_irq_disable()       . .
       ...                  . .
       spin_lock(...)       . .
       ...                  . .
       ... <----------------. .
       ... <------------------.
       ...
       spin_unlock(...)

Since gic_remove_from_queues() and gic_irq_disable() are called from
non-interrupt context, and they acquire the same lock as gic_set_guest_irq(),
which is called from interrupt context, we must disable interrupts in these
functions to avoid possible deadlocks.

Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
---
 xen/arch/arm/gic.c |   10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index c44a4d0..7d83b0c 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
 static void gic_irq_disable(struct irq_desc *desc)
 {
     int irq = desc->irq;
+    unsigned long flags;
 
-    spin_lock(&desc->lock);
+    spin_lock_irqsave(&desc->lock, flags);
     spin_lock(&gic.lock);
     /* Disable routing */
     GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
     desc->status |= IRQ_DISABLED;
     spin_unlock(&gic.lock);
-    spin_unlock(&desc->lock);
+    spin_unlock_irqrestore(&desc->lock, flags);
 }
 
 static unsigned int gic_irq_startup(struct irq_desc *desc)
@@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
 void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
+    unsigned long flags;
 
-    spin_lock(&gic.lock);
+    spin_lock_irqsave(&gic.lock, flags);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock(&gic.lock);
+    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 17:51:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 17:51:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANfy-0005cy-GE; Mon, 03 Feb 2014 17:51:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WANfx-0005ct-Pj
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 17:51:18 +0000
Received: from [85.158.139.211:17078] by server-8.bemta-5.messagelabs.com id
	FE/FE-05298-517DFE25; Mon, 03 Feb 2014 17:51:17 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391449876!1375777!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6378 invoked from network); 3 Feb 2014 17:51:16 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 17:51:16 -0000
Received: by mail-ea0-f181.google.com with SMTP id m10so3874171eaj.26
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:51:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=/iJQZS3XII4UVpfw30Yg9GvfdnBwZZrQLE0QhKqVQAc=;
	b=ZJzt9SGLgNyHIKUVLqhkvmhdlZzxkQOtJIqBxQT7ofivro4GBczWBrNbncW98H2PHx
	NK6bYFM9H08AtunKoR3TJ2G/5CNJ6ugm9D6Ac+JiE7wSycyI4HoLbayDa6V7QnSZMzwe
	BqRVIu3gNZxgHekGDTxQv0pXnzshki+hJAHoYCnf1IBqGjP3OoX/wfVMk3zJ1GCMfTye
	mkDFISKff/c7AGUzFSBFbM3TdEg337DgfkGIzpuYQqMyJN9f2kkq8/sJ6kXtG8hyRBcX
	aKGgHfrVt+PN6efcebO4QSZEncc/yKrdOKZhHuE7Vfr/Zy+cGaPHn+ag26GhWl9SaBvq
	Ms7Q==
X-Gm-Message-State: ALoCoQmjm4Xn1lQCXX7kQPWq+CPdmhexSSQ9+iqRNDcrgBpAE841TEVgDsxU51izwHYNOU+fWMOm
X-Received: by 10.14.93.199 with SMTP id l47mr9200347eef.58.1391449876194;
	Mon, 03 Feb 2014 09:51:16 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm21217367eeg.5.2014.02.03.09.51.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 09:51:15 -0800 (PST)
Message-ID: <52EFD711.5060201@linaro.org>
Date: Mon, 03 Feb 2014 17:51:13 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
	gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(+ Xen ARM maintainers)

Hello Oleksandr,

Thanks for the patch. For next time, can you add the Xen ARM maintainers
in cc? With the amount of mail in the mailing list, your mail could be
lost easily. :)

On 02/03/2014 05:33 PM, Oleksandr Tyshchenko wrote:
> The possible deadlock scenario is explained below:
> 
> non interrupt context:    interrupt context       interrupt context
>                           (CPU0):                 (CPU1):
> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>   |                         |                       |
>   vgic_disable_irqs()       ...                     ...
>     |                         |                       |
>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>     |  ...                      |                       |
>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>     |  ...                        ...                     ...
>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>     |  ...                  . .       Oops! The lock has already been taken.
>     |  spin_unlock(...)     . .
>     |  ...                  . .
>     gic_irq_disable()       . .
>        ...                  . .
>        spin_lock(...)       . .
>        ...                  . .
>        ... <----------------. .
>        ... <------------------.
>        ...
>        spin_unlock(...)
> 
> Since gic_remove_from_queues() and gic_irq_disable() are called from
> non-interrupt context, and they acquire the same lock as gic_set_guest_irq(),
> which is called from interrupt context, we must disable interrupts in these
> functions to avoid possible deadlocks.
> 
> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

I think this patch should have a release exception for Xen 4.4. It fixes
a race condition in interrupt management.

> ---
>  xen/arch/arm/gic.c |   10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index c44a4d0..7d83b0c 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>  static void gic_irq_disable(struct irq_desc *desc)
>  {
>      int irq = desc->irq;
> +    unsigned long flags;
>  
> -    spin_lock(&desc->lock);
> +    spin_lock_irqsave(&desc->lock, flags);
>      spin_lock(&gic.lock);
>      /* Disable routing */
>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>      desc->status |= IRQ_DISABLED;
>      spin_unlock(&gic.lock);
> -    spin_unlock(&desc->lock);
> +    spin_unlock_irqrestore(&desc->lock, flags);
>  }
>  
>  static unsigned int gic_irq_startup(struct irq_desc *desc)
> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>  {
>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
> +    unsigned long flags;
>  
> -    spin_lock(&gic.lock);
> +    spin_lock_irqsave(&gic.lock, flags);
>      if ( !list_empty(&p->lr_queue) )
>          list_del_init(&p->lr_queue);
> -    spin_unlock(&gic.lock);
> +    spin_unlock_irqrestore(&gic.lock, flags);
>  }
>  
>  void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 18:00:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 18:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANog-0005qC-Bf; Mon, 03 Feb 2014 18:00:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WANoP-0005mZ-9p
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 18:00:01 +0000
Received: from [85.158.143.35:64013] by server-2.bemta-4.messagelabs.com id
	57/9C-10891-029DFE25; Mon, 03 Feb 2014 18:00:00 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391450397!2814435!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9496 invoked from network); 3 Feb 2014 17:59:59 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 17:59:59 -0000
Received: from mail-vb0-f44.google.com ([209.85.212.44]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUu/ZHW2V3u07NUpjtNq64lHy04/eZxGC@postini.com;
	Mon, 03 Feb 2014 09:59:59 PST
Received: by mail-vb0-f44.google.com with SMTP id f12so4906571vbg.17
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:59:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=W42XtXzdWxZVzvCd0jKynJavZTqgGGIMQsj+3oUunD4=;
	b=Pit+On/Ns11UbbO9CW8/aIaUuFk+YgxWJ5+vE8Hdj5mcRhyxODfkgU+I1840UthzkD
	1qlClJl1zk9xtoRQjNBK5FC82PV1/wuwtorpAucpKjCni9AX69k9ww1fzdB6zqDbViLm
	j3Wpuh2O5zpapNuYDsaBGOUtJ+kCleXLS4I5U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=W42XtXzdWxZVzvCd0jKynJavZTqgGGIMQsj+3oUunD4=;
	b=A29tivslYRBD0kpBdrG+3lYBcrQOYH+EcUrrokyfNfrhXEs2UjM5CYJOHduHYH0K7z
	hWNyrcUW+mFTsSRfwCm5GaixECrUNji9eqKhgV4H4vnlB8Gou2pusJlP2aRYMnIZTpy8
	rIIbAW+lltbYg7N4DGchIvS5+oUOhVljVYC4ojtCicb6wvv6da/KGPz5G/q9Bzwy3CaN
	nMul6CNa4tqA1k3uzXpeISbFk2MIHpvf8A7HSfNQ5vwjRCTQjVjoRZ+8SEndtbMoq4Hs
	vqt8vELjInlC/WZjv2XnPr1FjSXBt0VHwMRQKefLQW8OPLKXnJITDnyyi7cl1nfCw19K
	HTEw==
X-Gm-Message-State: ALoCoQnqNl27X0WnhpwNpPeSyF5p55Cti4pttZxbZgY8iU4BUAKHOLIGJUHev/o/S0Gzp0CoeJCvYBoy4QHatZaOLCcXQ1grnfLuXtqE3HSV+fTrL/q+7PWEQ1ddYojm8L3c/1/BneZrcyeDVtojujCTuJscQ+v7OXZrbLVYTjettzd/CQC5BUE=
X-Received: by 10.220.92.135 with SMTP id r7mr29348033vcm.11.1391450396701;
	Mon, 03 Feb 2014 09:59:56 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.92.135 with SMTP id r7mr29348027vcm.11.1391450396615;
	Mon, 03 Feb 2014 09:59:56 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Mon, 3 Feb 2014 09:59:56 -0800 (PST)
In-Reply-To: <52EFD711.5060201@linaro.org>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52EFD711.5060201@linaro.org>
Date: Mon, 3 Feb 2014 19:59:56 +0200
Message-ID: <CAJEb2DEUypYOV-fVEtOe6Cg-Fpa1p2KXy=e4QUG9sLDeWoEb7g@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
	gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, all.

On Mon, Feb 3, 2014 at 7:51 PM, Julien Grall <julien.grall@linaro.org> wrote:
> (+ Xen ARM maintainers)
>
> Hello Oleksandr,
>
> Thanks for the patch. For next time, can you add the Xen ARM maintainers
> in cc? With the amount of mail in the mailing list, your mail could be
> lost easily. :)
Yes, I can.
>
> On 02/03/2014 05:33 PM, Oleksandr Tyshchenko wrote:
>> The possible deadlock scenario is explained below:
>>
>> non interrupt context:    interrupt context       interrupt context
>>                           (CPU0):                 (CPU1):
>> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>>   |                         |                       |
>>   vgic_disable_irqs()       ...                     ...
>>     |                         |                       |
>>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>>     |  ...                      |                       |
>>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>>     |  ...                        ...                     ...
>>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>>     |  ...                  . .       Oops! The lock has already been taken.
>>     |  spin_unlock(...)     . .
>>     |  ...                  . .
>>     gic_irq_disable()       . .
>>        ...                  . .
>>        spin_lock(...)       . .
>>        ...                  . .
>>        ... <----------------. .
>>        ... <------------------.
>>        ...
>>        spin_unlock(...)
>>
>> Since gic_remove_from_queues() and gic_irq_disable() are called from
>> non-interrupt context, and they acquire the same lock as gic_set_guest_irq(),
>> which is called from interrupt context, we must disable interrupts in these
>> functions to avoid possible deadlocks.
>>
>> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
>
> I think this patch should have a release exception for Xen 4.4. It fixes
> a race condition in interrupt management.
>
>> ---
>>  xen/arch/arm/gic.c |   10 ++++++----
>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index c44a4d0..7d83b0c 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>>  static void gic_irq_disable(struct irq_desc *desc)
>>  {
>>      int irq = desc->irq;
>> +    unsigned long flags;
>>
>> -    spin_lock(&desc->lock);
>> +    spin_lock_irqsave(&desc->lock, flags);
>>      spin_lock(&gic.lock);
>>      /* Disable routing */
>>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>>      desc->status |= IRQ_DISABLED;
>>      spin_unlock(&gic.lock);
>> -    spin_unlock(&desc->lock);
>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>  }
>>
>>  static unsigned int gic_irq_startup(struct irq_desc *desc)
>> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>  {
>>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>> +    unsigned long flags;
>>
>> -    spin_lock(&gic.lock);
>> +    spin_lock_irqsave(&gic.lock, flags);
>>      if ( !list_empty(&p->lr_queue) )
>>          list_del_init(&p->lr_queue);
>> -    spin_unlock(&gic.lock);
>> +    spin_unlock_irqrestore(&gic.lock, flags);
>>  }
>>
>>  void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>
>
>
> --
> Julien Grall

Thank you.

-- 

Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 18:00:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 18:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WANog-0005qC-Bf; Mon, 03 Feb 2014 18:00:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WANoP-0005mZ-9p
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 18:00:01 +0000
Received: from [85.158.143.35:64013] by server-2.bemta-4.messagelabs.com id
	57/9C-10891-029DFE25; Mon, 03 Feb 2014 18:00:00 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391450397!2814435!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9496 invoked from network); 3 Feb 2014 17:59:59 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 17:59:59 -0000
Received: from mail-vb0-f44.google.com ([209.85.212.44]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUu/ZHW2V3u07NUpjtNq64lHy04/eZxGC@postini.com;
	Mon, 03 Feb 2014 09:59:59 PST
Received: by mail-vb0-f44.google.com with SMTP id f12so4906571vbg.17
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 09:59:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=W42XtXzdWxZVzvCd0jKynJavZTqgGGIMQsj+3oUunD4=;
	b=Pit+On/Ns11UbbO9CW8/aIaUuFk+YgxWJ5+vE8Hdj5mcRhyxODfkgU+I1840UthzkD
	1qlClJl1zk9xtoRQjNBK5FC82PV1/wuwtorpAucpKjCni9AX69k9ww1fzdB6zqDbViLm
	j3Wpuh2O5zpapNuYDsaBGOUtJ+kCleXLS4I5U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=W42XtXzdWxZVzvCd0jKynJavZTqgGGIMQsj+3oUunD4=;
	b=A29tivslYRBD0kpBdrG+3lYBcrQOYH+EcUrrokyfNfrhXEs2UjM5CYJOHduHYH0K7z
	hWNyrcUW+mFTsSRfwCm5GaixECrUNji9eqKhgV4H4vnlB8Gou2pusJlP2aRYMnIZTpy8
	rIIbAW+lltbYg7N4DGchIvS5+oUOhVljVYC4ojtCicb6wvv6da/KGPz5G/q9Bzwy3CaN
	nMul6CNa4tqA1k3uzXpeISbFk2MIHpvf8A7HSfNQ5vwjRCTQjVjoRZ+8SEndtbMoq4Hs
	vqt8vELjInlC/WZjv2XnPr1FjSXBt0VHwMRQKefLQW8OPLKXnJITDnyyi7cl1nfCw19K
	HTEw==
X-Gm-Message-State: ALoCoQnqNl27X0WnhpwNpPeSyF5p55Cti4pttZxbZgY8iU4BUAKHOLIGJUHev/o/S0Gzp0CoeJCvYBoy4QHatZaOLCcXQ1grnfLuXtqE3HSV+fTrL/q+7PWEQ1ddYojm8L3c/1/BneZrcyeDVtojujCTuJscQ+v7OXZrbLVYTjettzd/CQC5BUE=
X-Received: by 10.220.92.135 with SMTP id r7mr29348033vcm.11.1391450396701;
	Mon, 03 Feb 2014 09:59:56 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.92.135 with SMTP id r7mr29348027vcm.11.1391450396615;
	Mon, 03 Feb 2014 09:59:56 -0800 (PST)
Received: by 10.220.181.1 with HTTP; Mon, 3 Feb 2014 09:59:56 -0800 (PST)
In-Reply-To: <52EFD711.5060201@linaro.org>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52EFD711.5060201@linaro.org>
Date: Mon, 3 Feb 2014 19:59:56 +0200
Message-ID: <CAJEb2DEUypYOV-fVEtOe6Cg-Fpa1p2KXy=e4QUG9sLDeWoEb7g@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
	gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, all.

On Mon, Feb 3, 2014 at 7:51 PM, Julien Grall <julien.grall@linaro.org> wrote:
> (+ Xen ARM maintainers)
>
> Hello Oleksandr,
>
> Thanks for the patch. For next time, can you add the Xen ARM maintainers
> in cc? With the amount of mail in the mailing list, your mail could be
> lost easily. :)
Yes, I can.
>
> On 02/03/2014 05:33 PM, Oleksandr Tyshchenko wrote:
>> The possible deadlock scenario is explained below:
>>
>> non interrupt context:    interrupt contex        interrupt context
>>                           (CPU0):                 (CPU1):
>> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>>   |                         |                       |
>>   vgic_disable_irqs()       ...                     ...
>>     |                         |                       |
>>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>>     |  ...                      |                       |
>>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>>     |  ...                        ...                     ...
>>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>>     |  ...                  . .       Oops! The lock has already been taken.
>>     |  spin_unlock(...)     . .
>>     |  ...                  . .
>>     gic_irq_disable()       . .
>>        ...                  . .
>>        spin_lock(...)       . .
>>        ...                  . .
>>        ... <----------------. .
>>        ... <------------------.
>>        ...
>>        spin_unlock(...)
>>
>> Since gic_remove_from_queues() and gic_irq_disable() are called from
>> non-interrupt context and acquire the same lock as gic_set_guest_irq(),
>> which is called from interrupt context, we must disable interrupts in
>> these functions to avoid possible deadlocks.
>>
>> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
>
> I think this patch should have a release exception for Xen 4.4. It fixes
> a race condition in the interrupt management.
>
>> ---
>>  xen/arch/arm/gic.c |   10 ++++++----
>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index c44a4d0..7d83b0c 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>>  static void gic_irq_disable(struct irq_desc *desc)
>>  {
>>      int irq = desc->irq;
>> +    unsigned long flags;
>>
>> -    spin_lock(&desc->lock);
>> +    spin_lock_irqsave(&desc->lock, flags);
>>      spin_lock(&gic.lock);
>>      /* Disable routing */
>>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>>      desc->status |= IRQ_DISABLED;
>>      spin_unlock(&gic.lock);
>> -    spin_unlock(&desc->lock);
>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>  }
>>
>>  static unsigned int gic_irq_startup(struct irq_desc *desc)
>> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>  {
>>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>> +    unsigned long flags;
>>
>> -    spin_lock(&gic.lock);
>> +    spin_lock_irqsave(&gic.lock, flags);
>>      if ( !list_empty(&p->lr_queue) )
>>          list_del_init(&p->lr_queue);
>> -    spin_unlock(&gic.lock);
>> +    spin_unlock_irqrestore(&gic.lock, flags);
>>  }
>>
>>  void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>
>
>
> --
> Julien Grall

Thank you.

-- 

Oleksandr Tyshchenko | Embedded Developer
GlobalLogic

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 18:52:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 18:52:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAOcp-0007Xv-Hr; Mon, 03 Feb 2014 18:52:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1WAOco-0007Xq-8v
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 18:52:06 +0000
Received: from [85.158.143.35:63399] by server-3.bemta-4.messagelabs.com id
	B6/45-11539-555EFE25; Mon, 03 Feb 2014 18:52:05 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391453524!2830522!1
X-Originating-IP: [209.85.215.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9403 invoked from network); 3 Feb 2014 18:52:05 -0000
Received: from mail-la0-f46.google.com (HELO mail-la0-f46.google.com)
	(209.85.215.46)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 18:52:05 -0000
Received: by mail-la0-f46.google.com with SMTP id b8so5750114lan.33
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 10:52:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:cc:to:mime-version;
	bh=BRg7MXS88mtgwniQTtgzOLGfbFgJaG09AjUXrm/GYYQ=;
	b=wsc93drBEK8k0xMRPZX1+NKxzP7hPHN8sifRNw+pai/XjO7Xb1AbZb/6zfyw7bot/O
	yt4lU2NLrZqE3pTi38821izrI/rRjp2Du6OWTz9+qJBwN33NIFSiRoNfoLb+KAeJTZBq
	MQ3XsoL9/MfdSU/UgfHfscweHi2EXyUU9XWBWz+QWRrZpp9XJMW4gRY3nMIumeugB/fh
	M5+X7H6dALhy8IFl5w7aiAjVH9Vn4HUsTDwDyaHU8/4cKSfxHxEIbFF70FonN28vhIJ9
	nio96Jgv+Hz4UO6J5IMwlhJKZ/046tfq4gNiK9YGLyuf6zSmu++WvovsHHiuNdvx9qB0
	yrGw==
X-Received: by 10.152.4.68 with SMTP id i4mr10555245lai.8.1391453524154;
	Mon, 03 Feb 2014 10:52:04 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44]) by mx.google.com with ESMTPSA id
	bn5sm22350260lbc.10.2014.02.03.10.52.03 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 10:52:03 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Mon, 3 Feb 2014 22:52:01 +0400
Message-Id: <53D5BEB0-82B1-40B4-B4C1-7EAA97BA9276@gmail.com>
To: dilos-dev@lists.illumos.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello All,

I have published the first dilos-xen-4.3-dom0 ISO.

You can find info on how to play with it here:
http://www.dilos.org/news/2014-02-03

2Dario: you can post this info to the Xen blog site :)
or copy/paste it from the DilOS site.

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:27:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPAv-0008LA-0n; Mon, 03 Feb 2014 19:27:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAPAs-0008L5-U0
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 19:27:19 +0000
Received: from [85.158.143.35:17563] by server-1.bemta-4.messagelabs.com id
	DC/90-31661-69DEFE25; Mon, 03 Feb 2014 19:27:18 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391455635!2831992!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27786 invoked from network); 3 Feb 2014 19:27:16 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 19:27:16 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13JQ8cK010283
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 19:26:09 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s13JQ7SD029034
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 19:26:07 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13JQ7FG029013; Mon, 3 Feb 2014 19:26:07 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 11:26:06 -0800
Date: Mon, 3 Feb 2014 11:26:05 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad@kernel.org>
Message-ID: <20140203112605.66306ae9@mantra.us.oracle.com>
In-Reply-To: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org,
	yang.z.zhang@intel.com, jun.nakajima@intel.com, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH] Xen 4.4-rc3 regression with PVH compared to
 Xen 4.4-rc2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon,  3 Feb 2014 12:03:20 -0500
Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:

> I am hereby requesting a Xen 4.4 exemption for this bug-fix.
> 
> The PVH feature is considered experimental, but it would be good to
> have it working out of the box without crashing the hypervisor.
> 
> Sadly, that is not the case: 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
> "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
> causes a NULL pointer dereference when starting a guest with 'pvh=1'
> in the guest config.
> 
> There are two ways of fixing this:
> a) add a '!xen_pvh_domain()' or '!xen_pvh_vcpu(current)' check in the
> path, or b) check for get_ioreq() returning NULL. The latter is already
> done in other places in the hypervisor - so I chose to piggyback on that.
> 

I was about to send this patch on Friday:

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..563b02f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
      * A pending IO emulation may still not be finished. In this case,
      * no virtual vmswitch is allowed. Or else, the following IO
      * emulation will be handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is



Then I realized that even after the above fix it is still crashing for
me... debugging right now. JFYI.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:30:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:30:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPDg-0000Bb-KO; Mon, 03 Feb 2014 19:30:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAPDf-0000BU-Cu
	for Xen-devel@lists.xensource.com; Mon, 03 Feb 2014 19:30:11 +0000
Received: from [85.158.137.68:65046] by server-16.bemta-3.messagelabs.com id
	F7/D8-29917-24EEFE25; Mon, 03 Feb 2014 19:30:10 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391455808!13018499!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30456 invoked from network); 3 Feb 2014 19:30:09 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 19:30:09 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13JU5ha024622
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 19:30:06 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13JU3G5014542
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 19:30:05 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13JU3cJ022457; Mon, 3 Feb 2014 19:30:03 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 11:30:02 -0800
Date: Mon, 3 Feb 2014 11:30:01 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203113001.2433a77c@mantra.us.oracle.com>
In-Reply-To: <20140203114914.GA3350@phenom.dumpdata.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
	<20140203114914.GA3350@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014 06:49:14 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Wed, Jan 29, 2014 at 04:15:18PM -0800, Mukesh Rathor wrote:
> > We need to set cr4 flags for APs that are already set for BSP.
> 
> The title is missing the 'xen' part.

The patch is for Linux, not Xen.

> I rewrote it a bit and I think this should go in 3.14.
> 
> David, Boris: It is not the full fix, as there are other parts needed to
> make a PVH guest use 2MB or 1GB pages - but this fixes an obvious
> bug.
> 
> 
> 
> From 797ea6812ff0a90cce966a4ff6bad57cbadc43b5 Mon Sep 17 00:00:00 2001
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> Date: Wed, 29 Jan 2014 16:15:18 -0800
> Subject: [PATCH] xen/pvh: set CR4 flags for APs
> 
> The Xen ABI sets said flags for the BSP, but it does

No, it does not. I have said it a few times: it is set by
probe_page_size_mask() (which is in Linux) for the BSP. The comment
below also says so.

thanks
mukesh

> not do that for the CR4. As such fix it up to make
> sure we have that flag set.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/enlighten.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..201d09a 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
>  	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
>  	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
>  	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
> +
> +	if (!cpu)
> +		return;
> +	/*
> +	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
> +	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
> +	*/
> +	if (cpu_has_pse)
> +		set_in_cr4(X86_CR4_PSE);
> +
> +	if (cpu_has_pge)
> +		set_in_cr4(X86_CR4_PGE);
>  }
>  
>  /*


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:30:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:30:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPDg-0000Bb-KO; Mon, 03 Feb 2014 19:30:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAPDf-0000BU-Cu
	for Xen-devel@lists.xensource.com; Mon, 03 Feb 2014 19:30:11 +0000
Received: from [85.158.137.68:65046] by server-16.bemta-3.messagelabs.com id
	F7/D8-29917-24EEFE25; Mon, 03 Feb 2014 19:30:10 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391455808!13018499!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30456 invoked from network); 3 Feb 2014 19:30:09 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 19:30:09 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13JU5ha024622
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 19:30:06 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13JU3G5014542
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 19:30:05 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13JU3cJ022457; Mon, 3 Feb 2014 19:30:03 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 11:30:02 -0800
Date: Mon, 3 Feb 2014 11:30:01 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203113001.2433a77c@mantra.us.oracle.com>
In-Reply-To: <20140203114914.GA3350@phenom.dumpdata.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
	<20140203114914.GA3350@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014 06:49:14 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Wed, Jan 29, 2014 at 04:15:18PM -0800, Mukesh Rathor wrote:
> > We need to set cr4 flags for APs that are already set for BSP.
> 
> The title is missing the 'xen' part.

The patch is for linux, not xen.

> I rewrote it a bit and I think this should go in 3.14.
> 
> David, Boris: It is not the full fix as there are other parts to
> make an PVH guest use 2MB or 1GB pages- but this fixes an obvious
> bug.
> 
> 
> 
> From 797ea6812ff0a90cce966a4ff6bad57cbadc43b5 Mon Sep 17 00:00:00 2001
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> Date: Wed, 29 Jan 2014 16:15:18 -0800
> Subject: [PATCH] xen/pvh: set CR4 flags for APs
> 
> The Xen ABI sets said flags for the BSP, but it does

NO it does not. I said it few times, it's set by probe_page_size_mask
(which is in linux) for the BSP. The comment below also says it.

thanks
mukesh

> not do that for the CR4. As such fix it up to make
> sure we have that flag set.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/enlighten.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..201d09a 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
>  	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
>  	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
>  	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
> +
> +	if (!cpu)
> +		return;
> +	/*
> +	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
> +	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
> +	*/
> +	if (cpu_has_pse)
> +		set_in_cr4(X86_CR4_PSE);
> +
> +	if (cpu_has_pge)
> +		set_in_cr4(X86_CR4_PGE);
>  }
>  
>  /*


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:53:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:53:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPZc-0000nd-45; Mon, 03 Feb 2014 19:52:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAPZa-0000nY-Dt
	for Xen-devel@lists.xensource.com; Mon, 03 Feb 2014 19:52:50 +0000
Received: from [193.109.254.147:49018] by server-8.bemta-14.messagelabs.com id
	74/79-18529-193FFE25; Mon, 03 Feb 2014 19:52:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391457167!1733051!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29247 invoked from network); 3 Feb 2014 19:52:48 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 19:52:48 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13Jqhe7018334
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 19:52:44 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13JqgOD008131
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 19:52:43 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Jqg91020661; Mon, 3 Feb 2014 19:52:42 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 11:52:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D77D11BFA0B; Mon,  3 Feb 2014 14:52:40 -0500 (EST)
Date: Mon, 3 Feb 2014 14:52:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140203195240.GA10738@phenom.dumpdata.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
	<20140203114914.GA3350@phenom.dumpdata.com>
	<20140203113001.2433a77c@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140203113001.2433a77c@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 11:30:01AM -0800, Mukesh Rathor wrote:
> On Mon, 3 Feb 2014 06:49:14 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Wed, Jan 29, 2014 at 04:15:18PM -0800, Mukesh Rathor wrote:
> > > We need to set cr4 flags for APs that are already set for BSP.
> > 
> > The title is missing the 'xen' part.
> 
> The patch is for linux, not xen.

Right. And hence you need to prefix the title with 'xen',
otherwise it won't be obvious from the Linux log line which
component of the Linux tree it is for.

> 
> > I rewrote it a bit and I think this should go in 3.14.
> > 
> > David, Boris: It is not the full fix as there are other parts to
> > make a PVH guest use 2MB or 1GB pages - but this fixes an obvious
> > bug.
> > 
> > 
> > 
> > From 797ea6812ff0a90cce966a4ff6bad57cbadc43b5 Mon Sep 17 00:00:00 2001
> > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > Date: Wed, 29 Jan 2014 16:15:18 -0800
> > Subject: [PATCH] xen/pvh: set CR4 flags for APs
> > 
> > The Xen ABI sets said flags for the BSP, but it does
> 
> NO it does not. As I've said a few times, it's set by probe_page_size_mask()
> (which is in Linux) for the BSP. The comment below also says it.

Where does it set it for APs? Can we piggyback on that?


> 
> thanks
> mukesh
> 
> > not do that for the CR4. As such fix it up to make
> > sure we have that flag set.
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > ---
> >  arch/x86/xen/enlighten.c | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> > 
> > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > index a4d7b64..201d09a 100644
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
> >  	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
> >  	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
> >  	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
> > +
> > +	if (!cpu)
> > +		return;
> > +	/*
> > +	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
> > +	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
> > +	*/
> > +	if (cpu_has_pse)
> > +		set_in_cr4(X86_CR4_PSE);
> > +
> > +	if (cpu_has_pge)
> > +		set_in_cr4(X86_CR4_PGE);
> >  }
> >  
> >  /*
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:55:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPbt-0000sx-Lz; Mon, 03 Feb 2014 19:55:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAPbs-0000so-2p
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 19:55:12 +0000
Received: from [193.109.254.147:63215] by server-11.bemta-14.messagelabs.com
	id 8A/56-24604-F14FFE25; Mon, 03 Feb 2014 19:55:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391457309!1739172!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23812 invoked from network); 3 Feb 2014 19:55:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 19:55:10 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13Js2XZ009967
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 19:54:02 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s13Js0VX008439
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 19:54:01 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13Js0u4023356; Mon, 3 Feb 2014 19:54:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 11:54:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D82F11BFA0B; Mon,  3 Feb 2014 14:53:58 -0500 (EST)
Date: Mon, 3 Feb 2014 14:53:58 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>, roger.pau@citrix.com
Message-ID: <20140203195358.GB10738@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<20140203112605.66306ae9@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140203112605.66306ae9@mantra.us.oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: jbeulich@suse.com, george.dunlap@eu.citrix.com,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@intel.com, yang.z.zhang@intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] Xen 4.4-rc3 regression with PVH compared to
	Xen 4.4-rc2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 11:26:05AM -0800, Mukesh Rathor wrote:
> On Mon,  3 Feb 2014 12:03:20 -0500
> Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> 
> > I am hereby requesting a Xen 4.4 exemption for this bug-fix.
> > 
> > The PVH feature is considered experimental, but it would be good to
> > have it working out of the box without crashing the hypervisor.
> > 
> > Sadly that is not the case, as 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
> > "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
> > causes a NULL pointer dereference when starting a guest with 'pvh=1'
> > in the guest config.
> > 
> > There are two ways of fixing this:
> > a). Add an '!xen_pvh_domain()' or '!xen_pvh_vcpu(current)' check in
> > the path, or
> > b). Check for the ioreq being NULL. This is actually done in other
> > places in the hypervisor - so I chose to piggyback on that.
> > 
> 
> I was about to send this patch on friday:
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index d2ba435..563b02f 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
>      struct vcpu *v = current;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    ioreq_t *ioreq = get_ioreq(v);
>  
>      /*
>       * a pending IO emualtion may still no finished. In this case,
>       * no virtual vmswith is allowed. Or else, the following IO
>       * emulation will handled in a wrong VCPU context.
>       */
> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> +    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
>          return;
>      /*
>       * a softirq may interrupt us between a virtual vmentry is
> 
> 
> 
> when I realized even after the above fix it is still crashing  for
> me... debugging right now. JFYI.

Are you doing it on a 'virgin' 4.4-rc3 or with your extra patches?

Also adding Roger so that he does not have to debug this crash.
> 
> thanks
> Mukesh
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:58:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:58:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPfK-0001FF-DN; Mon, 03 Feb 2014 19:58:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAPfJ-0001F5-4n
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 19:58:45 +0000
Received: from [85.158.143.35:8609] by server-2.bemta-4.messagelabs.com id
	F4/A8-10891-4F4FFE25; Mon, 03 Feb 2014 19:58:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391457522!2841919!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10925 invoked from network); 3 Feb 2014 19:58:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 19:58:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,774,1384300800"; d="scan'208";a="97444886"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 19:58:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 14:58:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAPfE-0006oL-UK;
	Mon, 03 Feb 2014 19:58:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAPfE-00014j-2i;
	Mon, 03 Feb 2014 19:58:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24721-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Feb 2014 19:58:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24721: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24721 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24721/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 qemuu                a41087bc7110e8378cd49ddd06aa7c9d361f3673
baseline version:
 qemuu                90d35066387e1d9c9deeda042c5a907cd57c11cf

------------------------------------------------------------
People who touched revisions under test:
  Anthony Perard <anthony.perard@citrix.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=a41087bc7110e8378cd49ddd06aa7c9d361f3673
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable a41087bc7110e8378cd49ddd06aa7c9d361f3673
+ branch=qemu-upstream-unstable
+ revision=a41087bc7110e8378cd49ddd06aa7c9d361f3673
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git a41087bc7110e8378cd49ddd06aa7c9d361f3673:master
Counting objects: 1   
Counting objects: 5, done.
Compressing objects:  33% (1/3)   
Compressing objects:  66% (2/3)   
Compressing objects: 100% (3/3)   
Compressing objects: 100% (3/3), done.
Writing objects:  33% (1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 988 bytes, done.
Total 3 (delta 2), reused 1 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   90d3506..a41087b  a41087bc7110e8378cd49ddd06aa7c9d361f3673 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 19:58:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 19:58:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPfK-0001FF-DN; Mon, 03 Feb 2014 19:58:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAPfJ-0001F5-4n
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 19:58:45 +0000
Received: from [85.158.143.35:8609] by server-2.bemta-4.messagelabs.com id
	F4/A8-10891-4F4FFE25; Mon, 03 Feb 2014 19:58:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391457522!2841919!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10925 invoked from network); 3 Feb 2014 19:58:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 19:58:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,774,1384300800"; d="scan'208";a="97444886"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 03 Feb 2014 19:58:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 14:58:41 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAPfE-0006oL-UK;
	Mon, 03 Feb 2014 19:58:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAPfE-00014j-2i;
	Mon, 03 Feb 2014 19:58:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24721-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 3 Feb 2014 19:58:40 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24721: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24721 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24721/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 qemuu                a41087bc7110e8378cd49ddd06aa7c9d361f3673
baseline version:
 qemuu                90d35066387e1d9c9deeda042c5a907cd57c11cf

------------------------------------------------------------
People who touched revisions under test:
  Anthony Perard <anthony.perard@citrix.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=a41087bc7110e8378cd49ddd06aa7c9d361f3673
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable a41087bc7110e8378cd49ddd06aa7c9d361f3673
+ branch=qemu-upstream-unstable
+ revision=a41087bc7110e8378cd49ddd06aa7c9d361f3673
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git a41087bc7110e8378cd49ddd06aa7c9d361f3673:master
Counting objects: 1   
Counting objects: 5, done.
Compressing objects:  33% (1/3)   
Compressing objects:  66% (2/3)   
Compressing objects: 100% (3/3)   
Compressing objects: 100% (3/3), done.
Writing objects:  33% (1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 988 bytes, done.
Total 3 (delta 2), reused 1 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   90d3506..a41087b  a41087bc7110e8378cd49ddd06aa7c9d361f3673 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:03:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPjf-0001Tw-7c; Mon, 03 Feb 2014 20:03:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAPjd-0001Tq-VG
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 20:03:14 +0000
Received: from [85.158.143.35:45548] by server-1.bemta-4.messagelabs.com id
	68/B8-31661-106FFE25; Mon, 03 Feb 2014 20:03:13 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391457791!2834983!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11421 invoked from network); 3 Feb 2014 20:03:12 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 20:03:12 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13K23Sf018890
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 20:02:03 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13K1xEI013084
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 20:02:00 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13K1xdb001846; Mon, 3 Feb 2014 20:01:59 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 12:01:57 -0800
Date: Mon, 3 Feb 2014 12:01:56 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203120156.57dce238@mantra.us.oracle.com>
In-Reply-To: <20140203195358.GB10738@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<20140203112605.66306ae9@mantra.us.oracle.com>
	<20140203195358.GB10738@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: jbeulich@suse.com, george.dunlap@eu.citrix.com,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@intel.com, yang.z.zhang@intel.com,
	xen-devel@lists.xenproject.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] Xen 4.4-rc3 regression with PVH compared to
 Xen 4.4-rc2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014 14:53:58 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Mon, Feb 03, 2014 at 11:26:05AM -0800, Mukesh Rathor wrote:
> > On Mon,  3 Feb 2014 12:03:20 -0500
> > Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> > 
> > > I am hereby requesting an Xen 4.4 exemption for this bug-fix.
> > > 
> > > The PVH feature is considered experimental, but it would be good
> > > to have it working out of the box without crashing the hypervisor.
> > > 
> > > Sadly that is not the case as
> > > 09bb434748af9bfe3f7fca4b6eef721a7d5042a4 "Nested VMX: prohibit
> > > virtual vmentry/vmexit during IO emulation" casues an NULL
> > > pointer dereference when starting a guest with 'pvh=1' in the
> > > guest config.
> > > 
> > > There are two ways of fixing this:
> > > a). Add a '!xen_pvh_domain()' or '!xen_pvh_vcpu(current)' check in
> > > the path, or b). check for get_ioreq() returning NULL. This is
> > > actually done in other places in the hypervisor - so I chose to
> > > piggyback on that.
> > > 
> > 
> > I was about to send this patch on friday:
> > 
> > diff --git a/xen/arch/x86/hvm/vmx/vvmx.c
> > b/xen/arch/x86/hvm/vmx/vvmx.c index d2ba435..563b02f 100644
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
> >      struct vcpu *v = current;
> >      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> >      struct cpu_user_regs *regs = guest_cpu_user_regs();
> > +    ioreq_t *ioreq = get_ioreq(v);
> >  
> >      /*
> >       * a pending IO emualtion may still no finished. In this case,
> >       * no virtual vmswith is allowed. Or else, the following IO
> >       * emulation will handled in a wrong VCPU context.
> >       */
> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> > +    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
> >          return;
> >      /*
> >       * a softirq may interrupt us between a virtual vmentry is
> > 
> > 
> > 
> > when I realized that even after the above fix it is still crashing
> > for me... debugging right now. JFYI.
> 
> Are you doing it on a 'virgin' 4.4-rc3 or with your extra patches?

Actually, it's with my extra dom0 patches.

Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPlT-0001ab-OA; Mon, 03 Feb 2014 20:05:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAPlR-0001aV-VY
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 20:05:06 +0000
Received: from [85.158.143.35:7571] by server-2.bemta-4.messagelabs.com id
	D2/FC-10891-176FFE25; Mon, 03 Feb 2014 20:05:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391457903!2842914!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30840 invoked from network); 3 Feb 2014 20:05:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 20:05:04 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13K50T2031758
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 20:05:01 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13K4xG7021600
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 20:05:00 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13K4xhK010515; Mon, 3 Feb 2014 20:04:59 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 12:04:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3B7371BFA0B; Mon,  3 Feb 2014 15:04:58 -0500 (EST)
Date: Mon, 3 Feb 2014 15:04:58 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org
Message-ID: <20140203200458.GA11413@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: roger.pau@citrix.com
Subject: [Xen-devel] Xen PVH and 'xl save' bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This hadn't been tested and unsurprisingly we get:

[  190.091108] calling  vbd-51712+ @ 37, parent: none
[  190.091632] call vbd-51712+ returned 0 after 0 usecs
[  190.092167] PM: freeze of devices complete after 18.587 msecs
[  190.092690] suspending xenstore...
[  190.093272] PM: late freeze of devices complete after 0.062 msecs
[  190.093881] PM: noirq freeze of devices complete after 0.073 msecs
[  190.094539] PM: Calling mce_syscore_suspend+0x0/0x70
[  190.095043] PM: Calling timekeeping_suspend+0x0/0x180
[  190.095470] PM: Calling save_ioapic_entries+0x0/0xb0
[  190.095470] PM: Calling i8259A_suspend+0x0/0x30
[  190.095470] PM: Calling fw_suspend+0x0/0x20
[  190.095470] PM: Calling lapic_suspend+0x0/0x1b0

Entering kdb (current=0xffff88003c806090, pid 9) on processor 0 Oops: (null)
due to oops @ 0xffffffff81045e5b
dCPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc1upstream-00019-g4809e80-dirty #2
dtask: ffff88003c806090 ti: ffff88003c810000 task.ti: ffff88003c810000
dRIP: 0010:[<ffffffff81045e5b>]  [<ffffffff81045e5b>] xen_arch_pre_suspend+0x12b/0x170
dRSP: 0018:ffff88003c811d18  EFLAGS: 00010082
dRAX: ffffffffffffffda RBX: 000000000000777d RCX: ffffc90000342000
dRDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffff570000
dRBP: ffff88003c811d38 R08: 0000000000000000 R09: 0000000000000001
dR10: 0000000000007ff0 R11: 000000000000057a R12: ffff88000777b000
dR13: 000000000000777d R14: ffff88003ca61d7c R15: ffff88003fc0ea01
dFS:  0000000000000000(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
dCS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
dCR2: 00007fc373dd0dd0 CR3: 0000000001c0c000 CR4: 00000000000406f0
dDR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
dDR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
dStack:
 ffff88003c811d28 ffff88003ca61de8 0000000000000000 0000000000000003
 ffff88003c811d48 ffffffff813f30f3 ffff88003c811d78 ffffffff813f3045
 ffff88003c811d78 ffffffff810d7898 ffff88003ca61d58 0000000000000296
dCall Trace:
more> 

I haven't yet dug into this, but will send more patches on this thread.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAPlT-0001ab-OA; Mon, 03 Feb 2014 20:05:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAPlR-0001aV-VY
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 20:05:06 +0000
Received: from [85.158.143.35:7571] by server-2.bemta-4.messagelabs.com id
	D2/FC-10891-176FFE25; Mon, 03 Feb 2014 20:05:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391457903!2842914!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30840 invoked from network); 3 Feb 2014 20:05:04 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 20:05:04 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13K50T2031758
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 20:05:01 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13K4xG7021600
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 20:05:00 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13K4xhK010515; Mon, 3 Feb 2014 20:04:59 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 12:04:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3B7371BFA0B; Mon,  3 Feb 2014 15:04:58 -0500 (EST)
Date: Mon, 3 Feb 2014 15:04:58 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xenproject.org
Message-ID: <20140203200458.GA11413@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: roger.pau@citrix.com
Subject: [Xen-devel] Xen PVH and 'xl save' bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Doing 'xl save' on a PVH guest hadn't been tested, and unsurprisingly we get:

[  190.091108] calling  vbd-51712+ @ 37, parent: none
[  190.091632] call vbd-51712+ returned 0 after 0 usecs
[  190.092167] PM: freeze of devices complete after 18.587 msecs
[  190.092690] suspending xenstore...
[  190.093272] PM: late freeze of devices complete after 0.062 msecs
[  190.093881] PM: noirq freeze of devices complete after 0.073 msecs
[  190.094539] PM: Calling mce_syscore_suspend+0x0/0x70
[  190.095043] PM: Calling timekeeping_suspend+0x0/0x180
[  190.095470] PM: Calling save_ioapic_entries+0x0/0xb0
[  190.095470] PM: Calling i8259A_suspend+0x0/0x30
[  190.095470] PM: Calling fw_suspend+0x0/0x20
[  190.095470] PM: Calling lapic_suspend+0x0/0x1b0

Entering kdb (current=0xffff88003c806090, pid 9) on processor 0 Oops: (null)
due to oops @ 0xffffffff81045e5b
dCPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc1upstream-00019-g4809e80-dirty #2
dtask: ffff88003c806090 ti: ffff88003c810000 task.ti: ffff88003c810000
dRIP: 0010:[<ffffffff81045e5b>]  [<ffffffff81045e5b>] xen_arch_pre_suspend+0x12b/0x170
dRSP: 0018:ffff88003c811d18  EFLAGS: 00010082
dRAX: ffffffffffffffda RBX: 000000000000777d RCX: ffffc90000342000
dRDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffff570000
dRBP: ffff88003c811d38 R08: 0000000000000000 R09: 0000000000000001
dR10: 0000000000007ff0 R11: 000000000000057a R12: ffff88000777b000
dR13: 000000000000777d R14: ffff88003ca61d7c R15: ffff88003fc0ea01
dFS:  0000000000000000(0000) GS:ffff88003fc00000(0000) knlGS:0000000000000000
dCS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
dCR2: 00007fc373dd0dd0 CR3: 0000000001c0c000 CR4: 00000000000406f0
dDR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
dDR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
dStack:
 ffff88003c811d28 ffff88003ca61de8 0000000000000000 0000000000000003
 ffff88003c811d48 ffffffff813f30f3 ffff88003c811d78 ffffffff813f3045
 ffff88003c811d78 ffffffff810d7898 ffff88003ca61d58 0000000000000296
dCall Trace:
more> 

I haven't dug into this yet, but will post more patches on this thread.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:33:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAQCQ-0002Kg-E4; Mon, 03 Feb 2014 20:32:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAQCP-0002Kb-6H
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 20:32:57 +0000
Received: from [193.109.254.147:20708] by server-4.bemta-14.messagelabs.com id
	91/19-32066-8FCFFE25; Mon, 03 Feb 2014 20:32:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391459575!1743591!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8644 invoked from network); 3 Feb 2014 20:32:55 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 20:32:55 -0000
Received: by mail-ee0-f45.google.com with SMTP id b15so3801466eek.18
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 12:32:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=clFcXFXFjbcD+zV2tK9b39PuxcpJQvG3I5zSmdGy1L0=;
	b=hdtBxAn1TOd7imGPx0IsTVr/2tPmKDucHRyZcogA+7lDwBHcHanvKv9syP467g7J7a
	AIIYftf+6YaTU5O61Si/J+J94uBx/YswMoBO0P1DDgb+/gutBMzLe7K9c9vSpWavPmHB
	F6lo/Z9JsQ75OlYtmOPeBXvztg5vlImnAQWpCa/660/Av1+G7WGGrpElfUaKTzm8cr4B
	QkVvujBC6AToXTvvNwR8EDS2K8GTjc+tyS39X/nDosAEBJUpBYHlgwRuFawxQJhvTh8v
	k3nD59UHQY2EulGIIW4uBHRLs5OgA/KuZkyyi/TDUqr/Heb/zEfjF71fSnCMFYRp4Zex
	vbQg==
X-Gm-Message-State: ALoCoQkMQ7AzdsMV9qpNsJy+1GbxF3tFMeO7kSoLXEpgvfZwTDmDlUqoNx9Gpj4WwkAAWxBALV8i
X-Received: by 10.14.194.131 with SMTP id m3mr46423128een.2.1391459575551;
	Mon, 03 Feb 2014 12:32:55 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	d43sm79780679eep.18.2014.02.03.12.32.54 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 12:32:54 -0800 (PST)
Message-ID: <52EFFCF5.5070108@linaro.org>
Date: Mon, 03 Feb 2014 20:32:53 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
In-Reply-To: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/2014 05:32 PM, Pavlo Suikov wrote:
> Hi,

Hello,

> has anyone used xentrace on arm with HVM domains? As far as I observe,
> it fails to map trace buffers from Xen restricted heap:

> xc_map_foreign_batch() call with DOMID_XEN permissions leads to
> xenmem_add_to_physmap_one() and then to rcu_lock_domain_by_any_id(),
> which fails to find DOMID_XEN in the domain hash (and it doesn't seem at
> all that dummy domains are added to this hash). Actually I don't see how
> this could work at all since there are no obvious checks for either arch
> or domain type (PV or HVM) along the way.

After a quick look at Xen, it seems that xentrace only worked for
x86 PV (the issue will be the same on PVH). It worked because a PV
domain uses the mmu_update hypercall, and in that path x86 has a
specific case for DOMID_XEN (see get_pg_owner).

To support xentrace on ARM, we will need at least:
  - to replace rcu_lock_domain_by_any_id() with a similar function
  - to add stubs for trace in arm code

BTW, when I tried xentrace, I got the following errors in the kernel
log:
Failed to map pfn to mfn rc:0:-3 pfn:1e9d0 mfn:fdfbe
xen_privcmd: unable to unmap MFN range: leaking 1 pages. rc=-2

Is it normal that Linux is trying to unmap a page that has failed to map
earlier?

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:44:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAQN1-0002kX-Ps; Mon, 03 Feb 2014 20:43:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAQN0-0002kS-Cy
	for Xen-devel@lists.xensource.com; Mon, 03 Feb 2014 20:43:54 +0000
Received: from [85.158.137.68:38862] by server-11.bemta-3.messagelabs.com id
	1A/41-04255-98FFFE25; Mon, 03 Feb 2014 20:43:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391460231!13124310!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8634 invoked from network); 3 Feb 2014 20:43:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 20:43:52 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13Khn5c003069
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 20:43:49 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s13Khmx8017422
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 20:43:48 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13Khl24017406; Mon, 3 Feb 2014 20:43:48 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 12:43:47 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4577A1BFA0B; Mon,  3 Feb 2014 15:43:46 -0500 (EST)
Date: Mon, 3 Feb 2014 15:43:46 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140203204346.GA12728@phenom.dumpdata.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
	<20140203114914.GA3350@phenom.dumpdata.com>
	<20140203113001.2433a77c@mantra.us.oracle.com>
	<20140203195240.GA10738@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140203195240.GA10738@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 02:52:40PM -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Feb 03, 2014 at 11:30:01AM -0800, Mukesh Rathor wrote:
> > On Mon, 3 Feb 2014 06:49:14 -0500
> > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > 
> > > On Wed, Jan 29, 2014 at 04:15:18PM -0800, Mukesh Rathor wrote:
> > > > We need to set cr4 flags for APs that are already set for BSP.
> > > 
> > > The title is missing the 'xen' part.
> > 
> > The patch is for linux, not xen.
> 
> Right. And hence you need to prefix the title with 'xen' in it
> otherwise it won't be obvious from the Linux log line for what
> component of the Linux tree it is.
> 
> > 
> > > I rewrote it a bit and I think this should go in 3.14.
> > > 
> > > David, Boris: It is not the full fix as there are other parts to
> > > make an PVH guest use 2MB or 1GB pages- but this fixes an obvious
> > > bug.
> > > 
> > > 
> > > 
> > > From 797ea6812ff0a90cce966a4ff6bad57cbadc43b5 Mon Sep 17 00:00:00 2001
> > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > Date: Wed, 29 Jan 2014 16:15:18 -0800
> > > Subject: [PATCH] xen/pvh: set CR4 flags for APs
> > > 
> > > The Xen ABI sets said flags for the BSP, but it does
> > 
> > NO it does not. I said it few times, it's set by probe_page_size_mask
> > (which is in linux) for the BSP. The comment below also says it.
> 
> Where does it set it for APs? Can we piggyback on that?

And since I am in a hurry to fix a build regression, I did the research
myself - but this kind of information needs to be in the commit message.

Here is what I have - please comment, as I want to send a git pull to
Linux within the hour.

>From 125ef07fd58e963cc286554f6536e46c9712033c Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Wed, 29 Jan 2014 16:15:18 -0800
Subject: [PATCH] xen/pvh: set CR4 flags for APs

During bootup, 'probe_page_size_mask' sets these CR4 flags, but only
for the BSP. For AP processors they are not set, because we do not
use 'secondary_startup_64', which baremetal kernels use. Instead,
set them in this function, which Xen PVH uses during startup for
both AP and BSP processors.

As such, fix it up to make sure we have those flags set.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 arch/x86/xen/enlighten.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..201d09a 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
 	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
 	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
 	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
+
+	if (!cpu)
+		return;
+	/*
+	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
+	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
+	*/
+	if (cpu_has_pse)
+		set_in_cr4(X86_CR4_PSE);
+
+	if (cpu_has_pge)
+		set_in_cr4(X86_CR4_PGE);
 }
 
 /*
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:51:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAQUV-0002vs-O4; Mon, 03 Feb 2014 20:51:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAQUT-0002vn-DK
	for Xen-devel@lists.xensource.com; Mon, 03 Feb 2014 20:51:37 +0000
Received: from [193.109.254.147:24042] by server-16.bemta-14.messagelabs.com
	id 26/41-21945-85100F25; Mon, 03 Feb 2014 20:51:36 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391460694!1742408!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23879 invoked from network); 3 Feb 2014 20:51:36 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 20:51:36 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13KpWrc012193
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 20:51:33 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13KpVAn010656
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 20:51:32 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s13KpVbn006073; Mon, 3 Feb 2014 20:51:31 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 12:51:30 -0800
Date: Mon, 3 Feb 2014 12:51:29 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203125129.0fb5e5e6@mantra.us.oracle.com>
In-Reply-To: <20140203204346.GA12728@phenom.dumpdata.com>
References: <1391040918-11722-1-git-send-email-mukesh.rathor@oracle.com>
	<1391040918-11722-2-git-send-email-mukesh.rathor@oracle.com>
	<20140203114914.GA3350@phenom.dumpdata.com>
	<20140203113001.2433a77c@mantra.us.oracle.com>
	<20140203195240.GA10738@phenom.dumpdata.com>
	<20140203204346.GA12728@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com,
	Xen-devel@lists.xensource.com, david.vrabel@citrix.com,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] pvh: set cr4 flags for APs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014 15:43:46 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Mon, Feb 03, 2014 at 02:52:40PM -0500, Konrad Rzeszutek Wilk wrote:
> > On Mon, Feb 03, 2014 at 11:30:01AM -0800, Mukesh Rathor wrote:
> > > On Mon, 3 Feb 2014 06:49:14 -0500
> > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > 
> > > > On Wed, Jan 29, 2014 at 04:15:18PM -0800, Mukesh Rathor wrote:
> > > > > We need to set cr4 flags for APs that are already set for BSP.
> > > > 
> > > > The title is missing the 'xen' part.
> > > 
> > > The patch is for linux, not xen.
> > 
> > Right. And hence you need to prefix the title with 'xen',
> > otherwise it won't be obvious from the Linux log line which
> > component of the Linux tree it is for.
> > 
> > > 
> > > > I rewrote it a bit and I think this should go in 3.14.
> > > > 
> > > > David, Boris: It is not the full fix as there are other parts to
> > > > make a PVH guest use 2MB or 1GB pages - but this fixes an
> > > > obvious bug.
> > > > 
> > > > 
> > > > 
> > > > From 797ea6812ff0a90cce966a4ff6bad57cbadc43b5 Mon Sep 17 00:00:00 2001
> > > > From: Mukesh Rathor <mukesh.rathor@oracle.com>
> > > > Date: Wed, 29 Jan 2014 16:15:18 -0800
> > > > Subject: [PATCH] xen/pvh: set CR4 flags for APs
> > > > 
> > > > The Xen ABI sets said flags for the BSP, but it does
> > > 
> > > NO it does not. I said it a few times: it's set by
> > > probe_page_size_mask (which is in linux) for the BSP. The comment
> > > below also says it.
> > 
> > Where does it set it for APs? Can we piggyback on that?
> 
> And since I am in a hurry to fix a build regression I did the
> research myself - but this kind of information needs to be in the
> commit message.
> 
> Here is what I have, please comment as I want to send a git pull to
> Linux within the hour.
> 
> From 125ef07fd58e963cc286554f6536e46c9712033c Mon Sep 17 00:00:00 2001
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> Date: Wed, 29 Jan 2014 16:15:18 -0800
> Subject: [PATCH] xen/pvh: set CR4 flags for APs
> 
> During bootup the BSP sets these CR4 flags in
> 'probe_page_size_mask', but for the AP processors they are
> not set there, as we do not use 'secondary_startup_64', which
> the baremetal kernels use. Instead, set them in this
> function, which Xen PVH uses during startup for both the AP
> and BSP processors.
> 
> As such, fix it up to make sure we have these flags set.

That's good enough for me.

Mukesh


> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  arch/x86/xen/enlighten.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> index a4d7b64..201d09a 100644
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -1473,6 +1473,18 @@ static void xen_pvh_set_cr_flags(int cpu)
>  	 * X86_CR0_TS, X86_CR0_PE, X86_CR0_ET are set by Xen for HVM guests
>  	 * (which PVH shared codepaths), while X86_CR0_PG is for PVH. */
>  	write_cr0(read_cr0() | X86_CR0_MP | X86_CR0_NE | X86_CR0_WP | X86_CR0_AM);
> +
> +	if (!cpu)
> +		return;
> +	/*
> +	 * For BSP, PSE PGE are set in probe_page_size_mask(), for APs
> +	 * set them here. For all, OSFXSR OSXMMEXCPT are set in fpu_init.
> +	*/
> +	if (cpu_has_pse)
> +		set_in_cr4(X86_CR4_PSE);
> +
> +	if (cpu_has_pge)
> +		set_in_cr4(X86_CR4_PGE);
>  }
>  
>  /*


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 20:54:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 20:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAQWq-0003B0-B8; Mon, 03 Feb 2014 20:54:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAQWo-0003Av-G1
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 20:54:03 +0000
Received: from [85.158.137.68:35689] by server-2.bemta-3.messagelabs.com id
	6B/01-06531-9E100F25; Mon, 03 Feb 2014 20:54:01 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391460839!11949392!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1992 invoked from network); 3 Feb 2014 20:54:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 3 Feb 2014 20:54:00 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13KrsGJ022309
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 20:53:55 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13KrqZm010308
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 3 Feb 2014 20:53:53 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13KrqbY012408; Mon, 3 Feb 2014 20:53:52 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 12:53:51 -0800
Date: Mon, 3 Feb 2014 12:53:50 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140203125350.023b5436@mantra.us.oracle.com>
In-Reply-To: <20140203200458.GA11413@phenom.dumpdata.com>
References: <20140203200458.GA11413@phenom.dumpdata.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] Xen PVH and 'xl save' bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014 15:04:58 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

I wouldn't call it a "bug". It's an unimplemented sub-feature... saying
bug could be misleading to some people who'd then expect it to work
under certain circumstances, IMO :). Also, it may create the
impression that PVH is buggy :)... 

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 21:42:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 21:42:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WARHp-0004NV-C8; Mon, 03 Feb 2014 21:42:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WARHn-0004NQ-T8
	for xen-devel@lists.xensource.com; Mon, 03 Feb 2014 21:42:36 +0000
Received: from [85.158.139.211:58221] by server-7.bemta-5.messagelabs.com id
	55/11-14867-B4D00F25; Mon, 03 Feb 2014 21:42:35 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391463752!1413318!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17226 invoked from network); 3 Feb 2014 21:42:34 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 3 Feb 2014 21:42:34 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s13LfQ43003436
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 3 Feb 2014 21:41:27 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13LfPcX029492
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 3 Feb 2014 21:41:26 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s13LfPGH026668; Mon, 3 Feb 2014 21:41:25 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 13:41:24 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C32BA1BFA0B; Mon,  3 Feb 2014 16:41:23 -0500 (EST)
Date: Mon, 3 Feb 2014 16:41:23 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org
Message-ID: <20140203214123.GA13515@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-3.14-rc1-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7468456249649287540=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============7468456249649287540==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="wRRV7LY7NUeQGEoC"
Content-Disposition: inline


--wRRV7LY7NUeQGEoC
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.14-rc1-tag

which has a revert of a fix that caused a build regression on ARM (with Xen enabled),
and a tiny little fix for Xen PVH where we did not set the CR4 flags properly.

Please pull!


 arch/x86/include/asm/xen/page.h     |  5 +--
 arch/x86/xen/enlighten.c            | 12 +++++
 arch/x86/xen/p2m.c                  | 17 ++++++-
 drivers/block/xen-blkback/blkback.c | 15 ++++---
 drivers/xen/gntdev.c                | 13 +++---
 drivers/xen/grant-table.c           | 89 ++++++-------------------------------
 include/xen/grant_table.h           |  8 +---
 7 files changed, 58 insertions(+), 101 deletions(-)


Konrad Rzeszutek Wilk (1):
      Revert "xen/grant-table: Avoid m2p_override during mapping"

Mukesh Rathor (1):
      xen/pvh: set CR4 flags for APs


--wRRV7LY7NUeQGEoC
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJS8Az/AAoJEFjIrFwIi8fJgDwIAJvMEm+1YCWyh0EN4u0/5Pag
+EMilLuvZN76WHL0zFdPNW+sEP3fF+MB+cYz9s4E0FS63eJ5UUXGhUJ45DRgm7yc
zn2wDl8FVkrB3XZvYrL22OU2gDE+ikndcInzUmOgNcvC6jkvtMpXnTGhLRnl9SRC
XsCWP6IN234izbNMmJRytgZzAYlNyZKwrYAgWo96qYJ2ARwu2NnS+X+NuZ+3fJCX
pRuaQ9CQz39+TqI183q00KyNwjoey5xFIwVmf10IudlpwwkkQPPoFBx9LrlxeXcp
YMZgh/M6LLbDrHMc7Ucl+CoPDFEKVjXjak9IL6HAngixBtQTyxtiqQY+ilRv9GU=
=XUnV
-----END PGP SIGNATURE-----

--wRRV7LY7NUeQGEoC--


--===============7468456249649287540==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7468456249649287540==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 22:17:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 22:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WARpP-00058s-EM; Mon, 03 Feb 2014 22:17:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WARpN-00058n-5e
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 22:17:17 +0000
Received: from [193.109.254.147:18181] by server-3.bemta-14.messagelabs.com id
	87/96-00432-C6510F25; Mon, 03 Feb 2014 22:17:16 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391465834!1758432!1
X-Originating-IP: [209.85.220.176]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13051 invoked from network); 3 Feb 2014 22:17:15 -0000
Received: from mail-vc0-f176.google.com (HELO mail-vc0-f176.google.com)
	(209.85.220.176)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 22:17:15 -0000
Received: by mail-vc0-f176.google.com with SMTP id la4so5144789vcb.21
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 14:17:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Z9kO5jA87uOsApkLUcTGVEHVu8QxnvaEEQyGSDVnQ58=;
	b=T8h/Qm/Sr+4GiTamDqLy7D1pezRU3V7HSwZ+LhOVKs0Nefg4RGY/DfMA/BdwXxruju
	zSzEwrGtNuvhN5MjWQ7y5HjtYPVGrMUvMRi3CaD0svIHFCGD42WiITXW0kLcWtb2gM3p
	sriHnmwSd8ILhiVxEUBr0jefsllyqKM2heVF0PG0GXs2PMaipI+/VsnoEudZdyWMyScD
	aJ4urpEvkuEBZA/IKgKzz14/+L8UgqYaEROTecDo25HwqIkt5O+YerlNEvePLei3/UK4
	KohVAOtnwqzkAJSz6NNmBsabYELz1AQTI9p4KZ5vj6oDT00US8oNG/g843HeTTv4VCh8
	o6mg==
MIME-Version: 1.0
X-Received: by 10.220.99.7 with SMTP id s7mr7153135vcn.19.1391465833662; Mon,
	03 Feb 2014 14:17:13 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Mon, 3 Feb 2014 14:17:13 -0800 (PST)
Date: Mon, 3 Feb 2014 14:17:13 -0800
Message-ID: <CAMnwyJ2NtBDw0Fw4-zPVWm4Vi96gdL7z8PjV6ke=esbjXKFwew@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Enabling kdump on SuSE 11 SP3 Xen resulting in error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7549770801257968727=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7549770801257968727==
Content-Type: multipart/alternative; boundary=001a11c1def2bb8efa04f187e206

--001a11c1def2bb8efa04f187e206
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I'm trying to enable kdump on SuSE 11 SP3 with a Xen configuration on our
cards, but I'm running into problems.

If I don't specify KDUMP_DUMPFORMAT=ELF, it does not work and does not even
try to capture a vmcore.

If I specify KDUMP_DUMPLEVEL=0, then it does work, but it takes a very long
time. I tried KDUMP_DUMPLEVEL=30 or 10, but it complained that there was not
enough space, even though I have about 4.8 GB free, with crashkernel
configured as 256M@64M. Here's the output from /proc/cmdline:

lc-1:~ # cat /proc/cmdline
crashkernel=256M@64M root=/dev/sda5 earlyprintk=serial,ttyS0,115200n8
resume=/dev/sda5 splash=silent showopts  console=ttyS0,115200n8

lc-1:~ # grep crashkernel /boot/efi/efi/SuSE/xen.cfg
kernel=vmlinuz-3.0.93-0.8-xen  crashkernel=256M@64M root=/dev/sda5
earlyprintk=serial,ttyS0,115200n8 resume=/dev/sda5 splash=silent showopts
 console=ttyS0,115200n8
options=console=com1 com1=115200 dom0_mem=8192m iommu=1,sharept
extra_guest_irqs=80 reboot=efi crashkernel=256M@64M
lc-1:~ #
lc-1:/var/crash # df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda7       5.7G  641M  4.8G  12% /var
ssc-lc-1:/var/crash #


*kdump configured with KDUMP_DUMPLEVEL=10*

Extracting dmesg
-------------------------------------------------------------------------------
Specify '-E' option for Xen.
Commandline parameter is invalid.
Try `makedumpfile --help' for more information.

makedumpfile Failed.
Running makedumpfile --dump-dmesg /proc/vmcore failed (1).
Saving dump using makedumpfile
-------------------------------------------------------------------------------
Copying data                       : [ 22 %]
Can't write the dump file(/root/var/crash/2014-02-07-14:05/vmcore). No
space left on device

makedumpfile Failed.
Dump too large. Aborting. Check KDUMP_FREE_DISK_SIZE.
Running makedumpfile  -d 10 -E /proc/vmcore failed (1).
statfs() on /root/var/crash/2014-02-07-14:05 failed. (No such file or
directory)
Error in fopen for /root/var/crash/2014-02-07-14:05/makedumpfile-R.pl (No
such file or directory)
Error in fopen for /root/var/crash/2014-02-07-14:05/README.txt (No such
file or directory)
Error in fopen for
/root/var/crash/2014-02-07-14:05/System.map-3.0.93-0.8-xen (No such file or
directory)
Last command failed (1).
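[Editor's aside: the -d values being tried above (0, 10, 30) are a bitmask of
page types that makedumpfile excludes from the dump, per makedumpfile(8). A
minimal sketch decoding it, to show why -d 0 produces a huge, slow dump while
-d 10 or -d 30 should be much smaller:]

```shell
# Decode a makedumpfile -d (dump_level) bitmask, per makedumpfile(8):
# bit 1 = zero pages, 2 = non-private cache, 4 = private cache,
# bit 8 = user-process data, 16 = free pages.
decode_dumplevel() {
    level=$1; out=""
    [ $((level & 1))  -ne 0 ] && out="$out zero"
    [ $((level & 2))  -ne 0 ] && out="$out cache"
    [ $((level & 4))  -ne 0 ] && out="$out cache-private"
    [ $((level & 8))  -ne 0 ] && out="$out user"
    [ $((level & 16)) -ne 0 ] && out="$out free"
    echo "excludes:$out"
}

decode_dumplevel 10   # drops non-private cache and user pages
decode_dumplevel 0    # drops nothing: full-size, slow dump
```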


*KDUMP configured with KDUMP_DUMPFORMAT="compressed"*

Extracting dmesg
-------------------------------------------------------------------------------
Specify '-E' option for Xen.
Commandline parameter is invalid.
Try `makedumpfile --help' for more information.

makedumpfile Failed.
Running makedumpfile --dump-dmesg /proc/vmcore failed (1).
Saving dump using makedumpfile
-------------------------------------------------------------------------------
Specify '-E' option for Xen.
Commandline parameter is invalid.
Try `makedumpfile --help' for more information.

makedumpfile Failed.
Running makedumpfile  -d 0 -c /proc/vmcore failed (1).
Saving makedumpfile-R.pl       Finished.
Generating rearrange script    Finished.
Generating README              Finished.
Copying System.map             Finished.
Copying kernel                 Finished.
INFO: Cannot find debug information: Unable to find debuginfo file.
Last command failed (1).
[   13.000455] ehci_hcd 0000:00:1d.0: PCI INT A disabled
[   13.005587] Disabling non-boot CPUs ...
[   13.009436] Restarting system.
[   13.012476] machine restart


Let me know how to get around this problem.

Thanks,
/Saurabh

*/etc/sysconfig/kdump file :-*

KDUMPTOOL_FLAGS=""
KDUMP_COMMANDLINE=""
KDUMP_COMMANDLINE_APPEND=""
KDUMP_CONTINUE_ON_ERROR="true"
KDUMP_COPY_KERNEL="yes"
KDUMP_CPUS=1
*KDUMP_DUMPFORMAT="ELF"*
KDUMP_DUMPLEVEL=10
KDUMP_FREE_DISK_SIZE=64
KDUMP_HOST_KEY=""
KDUMP_IMMEDIATE_REBOOT="yes"
KDUMP_KEEP_OLD_DUMPS=5
KDUMP_KERNELVER=""
KDUMP_NETCONFIG="auto"
KDUMP_NOTIFICATION_CC=""
KDUMP_NOTIFICATION_TO=""
KDUMP_POSTSCRIPT=""
KDUMP_PRESCRIPT=""
KDUMP_REQUIRED_PROGRAMS=""
KDUMP_SAVEDIR="file:///var/crash"
KDUMP_SMTP_PASSWORD=""
KDUMP_SMTP_SERVER=""
KDUMP_SMTP_USER=""
KDUMP_TRANSFER=""
KDUMP_VERBOSE=3
KEXEC_OPTIONS=""
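[Editor's aside: the "Specify '-E' option for Xen." lines in the logs above
indicate that makedumpfile requires ELF output (-E) for a Xen /proc/vmcore,
which is why the -c (compressed) invocation aborts. A hedged sketch of a
manual pre-flight check mirroring the KDUMP_FREE_DISK_SIZE idea before
dumping; paths and sizes are illustrative, and the real kdump scripts'
logic differs:]

```shell
# have_space DIR NEED_MB: succeed if DIR's filesystem has at least
# NEED_MB megabytes available (KDUMP_FREE_DISK_SIZE is given in MB).
have_space() {
    dir=$1; need_mb=$2
    # -P gives POSIX one-line-per-filesystem output; column 4 is avail KB.
    avail_kb=$(df -Pk "$dir" 2>/dev/null | awk 'NR==2 {print $4}')
    [ "${avail_kb:-0}" -ge $((need_mb * 1024)) ]
}

if have_space /var/crash 64; then
    echo "ok to dump"
    # makedumpfile -E -d 10 /proc/vmcore /var/crash/vmcore  # -E: ELF, for Xen
else
    echo "not enough free space; raise KDUMP_FREE_DISK_SIZE or clean up"
fi
```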

--001a11c1def2bb8efa04f187e206--


--===============7549770801257968727==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7549770801257968727==--


From xen-devel-bounces@lists.xen.org Mon Feb 03 22:48:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 22:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WASJb-0005uN-E3; Mon, 03 Feb 2014 22:48:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WASJZ-0005uA-SD
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 22:48:30 +0000
Received: from [85.158.137.68:42937] by server-9.bemta-3.messagelabs.com id
	DE/59-10184-DBC10F25; Mon, 03 Feb 2014 22:48:29 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391467706!11963237!1
X-Originating-IP: [209.85.220.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23048 invoked from network); 3 Feb 2014 22:48:28 -0000
Received: from mail-pa0-f52.google.com (HELO mail-pa0-f52.google.com)
	(209.85.220.52)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 22:48:28 -0000
Received: by mail-pa0-f52.google.com with SMTP id bj1so7708452pad.11
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 14:48:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=mK2KsReRwtu6/2n7YhAaHISZT7DFpjCC2OOVpVC4XLI=;
	b=n8XpDDmW2Hgvsg7WmEmq7sotSXV0iW4yXE5qXs/Dp4d8f3VlCH0tTsUOC+ncjAJ3IQ
	gtaL4MzIqEOpH9xItXm11tE8vlVNzr8iiGo/S3Yb+tOymBzYKIX07F+46Oiw0SD8AdiC
	B+tiuvztNJCpD//uZLj39ljhtI3OjmVsPE/ciS0P/bYwoxwRjXjmLw2aMOFQISt98bmK
	eKG2QqcmCR1mLqt6XgSZ5lbJIM3iG6+9G7Ma7Wm1pYadY6+B6GQhorprYNiNf4RQdh1l
	af8hJpaz9Ja/wluPFyUr4NzGzc5uFJbt7Z+BMRMTMdthMlKE2//7qet+rrzIg9ZzN3HV
	7skQ==
X-Received: by 10.66.41.106 with SMTP id e10mr39994233pal.109.1391467706164;
	Mon, 03 Feb 2014 14:48:26 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-185.amazon.com. [54.240.196.185])
	by mx.google.com with ESMTPSA id
	iq10sm59603570pbc.14.2014.02.03.14.48.23 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 14:48:25 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Mon, 03 Feb 2014 14:48:22 -0800
Date: Mon, 3 Feb 2014 14:48:22 -0800
From: Matt Wilson <msw@linux.com>
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Message-ID: <20140203224820.GA25942@u109add4315675089e695.ant.amazon.com>
References: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1205617825-10042-1-git-send-email-aravind.gopalakrishnan@amd.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 0/2] Fix AMD threshold register definitions
 and activate vmce_amd_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Mar 15, 2008 at 04:50:23PM -0500, Aravind Gopalakrishnan wrote:
                ^^^^

Please check your system clock.

--msw

> Patch 1: Deals with correcting AMD threshold register definitions.
> Patch 2: Fixes mask in vmce.c to allow vmce_amd_* functions to handle access to
>          AMD thresholding registers.
> 
> Aravind Gopalakrishnan (2):
>   hvm,svm: Update AMD Thresholding MSR definitions
>   mcheck,vmce: Allow vmce_amd_* functions to handle AMD thresolding
>     MSRs
> 
>  xen/arch/x86/cpu/mcheck/vmce.c  |    4 ++--
>  xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
>  xen/include/asm-x86/msr-index.h |    6 +++---
>  3 files changed, 11 insertions(+), 9 deletions(-)
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Mar 15, 2008 at 04:50:23PM -0500, Aravind Gopalakrishnan wrote:
                ^^^^

Please check your system clock.

--msw

> Patch 1: Deals with correcting AMD threshold register definitions.
> Patch 2: Fixes mask in vmce.c to allow vmce_amd_* functions to handle access to
>          AMD thresholding registers.
> 
> Aravind Gopalakrishnan (2):
>   hvm,svm: Update AMD Thresholding MSR definitions
>   mcheck,vmce: Allow vmce_amd_* functions to handle AMD thresholding
>     MSRs
> 
>  xen/arch/x86/cpu/mcheck/vmce.c  |    4 ++--
>  xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
>  xen/include/asm-x86/msr-index.h |    6 +++---
>  3 files changed, 11 insertions(+), 9 deletions(-)
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 03 23:46:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Feb 2014 23:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WATDT-0007Ia-DJ; Mon, 03 Feb 2014 23:46:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WATDR-0007IV-2z
	for xen-devel@lists.xenproject.org; Mon, 03 Feb 2014 23:46:13 +0000
Received: from [85.158.139.211:27533] by server-12.bemta-5.messagelabs.com id
	3B/89-15415-44A20F25; Mon, 03 Feb 2014 23:46:12 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391471171!1406864!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9649 invoked from network); 3 Feb 2014 23:46:11 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 3 Feb 2014 23:46:11 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:56792 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WATD5-0002WB-Pb; Tue, 04 Feb 2014 00:45:51 +0100
Date: Tue, 4 Feb 2014 00:46:08 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1265435524.20140204004608@eikelenboom.it>
To: Michael S. Tsirkin <mst@redhat.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] Commit 9e047b982452c633882b486682966c1d97097015 (piix4:
	add acpi pci hotplug support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Grmbll my fat fingers hit the send shortcut too soon by accident ..
let's try again ..

Hi Michael,

A git bisect showed that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.

commit 9e047b982452c633882b486682966c1d97097015
Author: Michael S. Tsirkin <mst@redhat.com>
Date:   Mon Oct 14 18:01:20 2013 +0300

    piix4: add acpi pci hotplug support

    Add support for acpi pci hotplug using the
    new infrastructure.
    PIIX4 legacy interface is maintained as is for
    machine types 1.7 and older.

    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>


The error is not very verbose:

libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.

So it seems there is an issue with preserving the legacy interface.

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 01:18:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 01:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAUe8-00059B-93; Tue, 04 Feb 2014 01:17:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WAUe5-000596-UP
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 01:17:50 +0000
Received: from [85.158.143.35:21154] by server-1.bemta-4.messagelabs.com id
	EC/64-31661-CBF30F25; Tue, 04 Feb 2014 01:17:48 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391476666!2870734!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19375 invoked from network); 4 Feb 2014 01:17:47 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 01:17:47 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s141GdQV016231
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 01:16:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s141GcoX021140
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 01:16:38 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s141GcEf027667; Tue, 4 Feb 2014 01:16:38 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 03 Feb 2014 17:16:37 -0800
Date: Mon, 3 Feb 2014 17:16:35 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140203171635.6132cf7d@mantra.us.oracle.com>
In-Reply-To: <20140203112605.66306ae9@mantra.us.oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<20140203112605.66306ae9@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: jun.nakajima@intel.com, george.dunlap@eu.citrix.com,
	Konrad Rzeszutek Wilk <konrad@kernel.org>, jbeulich@suse.com,
	yang.z.zhang@intel.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] Xen 4.4-rc3 regression with PVH compared to
 Xen 4.4-rc2.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014 11:26:05 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> On Mon,  3 Feb 2014 12:03:20 -0500
> Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> 
> > I am hereby requesting a Xen 4.4 exemption for this bug-fix.
> > 
> > The PVH feature is considered experimental, but it would be good to
> > have it working out of the box without crashing the hypervisor.
> > 
> > Sadly that is not the case as
> > 09bb434748af9bfe3f7fca4b6eef721a7d5042a4 "Nested VMX: prohibit
> > virtual vmentry/vmexit during IO emulation" causes a NULL pointer
> > dereference when starting a guest with 'pvh=1' in the guest config.
> > 
> > There are two ways of fixing this:
> > a). Add a '!xen_pvh_domain()' or '!xen_pvh_vcpu(current)' check in the
> > path, or b). Check for get_ioreq() being NULL. This is actually done in
> > other places in the hypervisor - so I chose to piggyback on that.
> > 
> 
> I was about to send this patch on friday:
> 
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index d2ba435..563b02f 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
>      struct vcpu *v = current;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    ioreq_t *ioreq = get_ioreq(v);
>  
>      /*
>       * a pending IO emualtion may still no finished. In this case,
>       * no virtual vmswith is allowed. Or else, the following IO
>       * emulation will handled in a wrong VCPU context.
>       */
> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> +    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
>          return;
>      /*
>       * a softirq may interrupt us between a virtual vmentry is
> 
> 
> 
> when I realized even after the above fix it is still crashing  for
> me... debugging right now. JFYI.

Ok, the crash in nvmx_switch_guest() even after the above fix is a
different issue, for which I am preparing a separate patch. So this
patch should be applied; however, rather than calling get_ioreq() twice
as in Konrad's patch, I recommend my code above, which calls it once.
get_ioreq() is inlined now and would probably be optimized, but the
function could change in many ways in the future.

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 01:35:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 01:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAUv8-0005WD-4D; Tue, 04 Feb 2014 01:35:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAUv6-0005W8-H2
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 01:35:24 +0000
Received: from [85.158.139.211:34717] by server-11.bemta-5.messagelabs.com id
	51/E9-23886-BD340F25; Tue, 04 Feb 2014 01:35:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391477721!1430366!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14409 invoked from network); 4 Feb 2014 01:35:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 01:35:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,776,1384300800"; d="scan'208";a="97556540"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 01:34:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 20:34:49 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAUuX-00016e-3w;
	Tue, 04 Feb 2014 01:34:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAUuW-0008Cn-T9;
	Tue, 04 Feb 2014 01:34:49 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24722-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 01:34:48 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24722: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24722 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24722/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 24719
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 24719
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken pass in 24719
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24719
 test-amd64-amd64-pv           3 host-install(3)  broken in 24719 pass in 24722

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  7 windows-install    fail like 24690
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10  fail like 24690
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24715
 test-amd64-i386-freebsd10-i386  3 host-install(3)   broken in 24719 like 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24719 REGR. vs. 24718
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24719 like 24718

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24719 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24719 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24719 never pass

version targeted for testing:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f
baseline version:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=04d31ea1b1caeac7f77b5d18910761abd540545f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 04d31ea1b1caeac7f77b5d18910761abd540545f
+ branch=xen-unstable
+ revision=04d31ea1b1caeac7f77b5d18910761abd540545f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 04d31ea1b1caeac7f77b5d18910761abd540545f:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   feee1ac..04d31ea  04d31ea1b1caeac7f77b5d18910761abd540545f -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 01:35:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 01:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAUv8-0005WD-4D; Tue, 04 Feb 2014 01:35:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAUv6-0005W8-H2
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 01:35:24 +0000
Received: from [85.158.139.211:34717] by server-11.bemta-5.messagelabs.com id
	51/E9-23886-BD340F25; Tue, 04 Feb 2014 01:35:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391477721!1430366!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14409 invoked from network); 4 Feb 2014 01:35:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 01:35:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,776,1384300800"; d="scan'208";a="97556540"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 01:34:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 3 Feb 2014 20:34:49 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAUuX-00016e-3w;
	Tue, 04 Feb 2014 01:34:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAUuW-0008Cn-T9;
	Tue, 04 Feb 2014 01:34:49 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24722-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 01:34:48 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24722: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24722 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24722/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 24719
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)     broken pass in 24719
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken pass in 24719
 test-amd64-i386-xl-winxpsp3-vcpus1  8 guest-saverestore     fail pass in 24719
 test-amd64-amd64-pv           3 host-install(3)  broken in 24719 pass in 24722

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  7 windows-install    fail like 24690
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10  fail like 24690
 test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24715
 test-amd64-i386-freebsd10-i386  3 host-install(3)   broken in 24719 like 24718
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 9 guest-localmigrate fail in 24719 REGR. vs. 24718
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24719 like 24718

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24719 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24719 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24719 never pass

version targeted for testing:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f
baseline version:
 xen                  feee1ace547cf6247a358d082dd64fa762be2488

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=04d31ea1b1caeac7f77b5d18910761abd540545f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 04d31ea1b1caeac7f77b5d18910761abd540545f
+ branch=xen-unstable
+ revision=04d31ea1b1caeac7f77b5d18910761abd540545f
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 04d31ea1b1caeac7f77b5d18910761abd540545f:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   feee1ac..04d31ea  04d31ea1b1caeac7f77b5d18910761abd540545f -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 02:46:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 02:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAW18-0007ZT-8W; Tue, 04 Feb 2014 02:45:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAW17-0007ZO-8a
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 02:45:41 +0000
Received: from [85.158.137.68:34269] by server-5.bemta-3.messagelabs.com id
	14/2C-04712-35450F25; Tue, 04 Feb 2014 02:45:39 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391481938!13140064!1
X-Originating-IP: [74.125.82.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23967 invoked from network); 4 Feb 2014 02:45:38 -0000
Received: from mail-wg0-f45.google.com (HELO mail-wg0-f45.google.com)
	(74.125.82.45)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 02:45:38 -0000
Received: by mail-wg0-f45.google.com with SMTP id n12so12478820wgh.24
	for <xen-devel@lists.xensource.com>;
	Mon, 03 Feb 2014 18:45:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=BdJ0ymuDUr75onH2UCYdRSJeyykmE03/Nz2cAoj77uA=;
	b=D8T3mPFlzqBv4T2mTADfzJy4skD/Y749cKHaNgbifpTp3K4a7C8l2bzSfX19VP+F+H
	jUlRaWGI7nvtZV+O5FyVjaZa79NuzjkPI43wz95+bfljBC53iOru3stDVnwUZ7IyU5Hr
	UgFl09I2nkNM3y/PaNF9EJ0c/AZZ54FfwNaXCW5ISmfbh3tcDlngzcC3PDtb/tJlB8sq
	0Dlkp1zIaRYPNi8LwQghD54NxgieFzdLsX2dPARR9aW0hBXrwKEMcMkMUC6wn5WaIGHE
	uUC78Nj/nfP6i1S2ypWT+v69iMk+zX1kgPyuDdhvAgrYc1iLOe9A5989//TF28DTfTeu
	It5A==
X-Received: by 10.180.219.44 with SMTP id pl12mr10941734wic.12.1391481938108; 
	Mon, 03 Feb 2014 18:45:38 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Mon, 3 Feb 2014 18:45:18 -0800 (PST)
In-Reply-To: <17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 02:45:18 +0000
Message-ID: <CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The wiki mentions "NBD", but if I understand it correctly, this means the
storage stays on "host1" and all NBD does is connect back to that host, so
the disk is never copied to host2?

Am I correct to assume this?

I have a guest that I need to move from host1 to host2, or back again if
needed. There's no problem stopping it first, but I guess that is not what
the "migrate" command is for!

And for a guest where I do want to keep a backup with Remus, is shared
storage still a must? The wiki http://wiki.xen.org/wiki/Remus states:

-> Shared storage is not required

But if "migrate" doesn't work without it, how would Remus?

Sorry for all the questions; I'm just trying to understand this better, and
there's really no documentation about xl migrate and xl remus!

thanks

On Mon, Feb 3, 2014 at 3:09 PM, Mike C. <miguelmclara@gmail.com> wrote:
> Makes sense...
>
> Is there documentation about xl migrate and xl remus?
>
> Say I want to migrate a guest but pause it first?
>
> I could also snapshot the LVM volume, but that doesn't save the memory, and
> the domain would have to be offline.
>
> So if I want to migrate but don't have shared storage, what's the best
> approach, DRBD?
>
> Thanks
>
>
>
> On February 3, 2014 10:17:04 AM GMT, Ian Campbell <Ian.Campbell@citrix.com>
> wrote:
>>
>> On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
>>>
>>>  I'm testing live migration without shared storage (I use LVM at both
>>> sides)
>>>
>>>
>>>  Issuing "xl migrate" worked nicely and the machine was migrated to the
>>> second host.
>>>
>>>  However I see this in the second host log:
>>>
>>>  [ 1502.563251] EXT4-fs error (device xvda1):
>>>  htree_dirblock_to_tree:892: inode #136303: block 533250: comm
>>>  run-parts: bad entry in directory: rec_len is smaller than minimal -
>>>  offset=0(0), inode=0, rec_len=0, name_len=0
>>>  Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
>>>  (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
>>>  533250: comm run-parts: bad entry in directory: rec_len is smaller
>>>  than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
>>>
>>>  I also get errors
>>> like:
>>>  -bash: /bin/ping: cannot execute binary file
>>>
>>>  Is this to be expected when using non-shared storage?
>>
>>
>> Yes. If the underlying disk is not the same device on both hosts,
>> then all bets are off and all sorts of bad things will happen. Think
>> about it -- what would you expect to happen to an OS if a disk suddenly
>> started returning completely different data from what was written to it?
>>
>> What you have seen seems like a plausible outcome.
>>
>> Ian.
>>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 04:33:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 04:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAXgj-0001MK-7p; Tue, 04 Feb 2014 04:32:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1WAXgh-0001MD-2O
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 04:32:43 +0000
Received: from [85.158.139.211:4828] by server-12.bemta-5.messagelabs.com id
	1E/52-15415-A6D60F25; Tue, 04 Feb 2014 04:32:42 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391488360!1451430!1
X-Originating-IP: [209.85.216.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2645 invoked from network); 4 Feb 2014 04:32:41 -0000
Received: from mail-qa0-f45.google.com (HELO mail-qa0-f45.google.com)
	(209.85.216.45)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 04:32:41 -0000
Received: by mail-qa0-f45.google.com with SMTP id ii20so11230754qab.4
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 20:32:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=zULxmnTayoeYZHmwwBFO1Ti37Jjf2os2J9YwXvTComA=;
	b=vtPBFJBR6hztgKtzDatKQDNRWuH53tf9YATE9LS3mLFIG5pScChf1oUP4QnUTS/a4l
	H0+HnrokVB8e1GSN/LlurE7hBLvbbuzoNO8dwbBBd6vJz70z8YrGAVMIAN0ZTAvo14gy
	GUGxH+Ca6GWXNKxItIoOEAYJugYjY0rZ/aT1INJUlhJplLviKXoq0XMGLHkg2lQvBsxT
	y1a8Dvcz0FgRjVh6afLALDUevCZaLv1v52FLHQzhipeoy3roEV9Z+L+1dhfNYLE3v5YS
	evhlgOE5pg1BoJIJjtl2XIMGE2zJN0phg47paxKq2eXN95qlLwSjBuCssy3NINmyhHlI
	ibSg==
X-Received: by 10.224.51.74 with SMTP id c10mr62865142qag.33.1391488360033;
	Mon, 03 Feb 2014 20:32:40 -0800 (PST)
Received: from yakj.usersys.redhat.com
	(net-37-117-154-249.cust.vodafonedsl.it. [37.117.154.249])
	by mx.google.com with ESMTPSA id z1sm63316094qaz.18.2014.02.03.20.32.36
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 20:32:38 -0800 (PST)
Message-ID: <52F06D63.8090003@redhat.com>
Date: Tue, 04 Feb 2014 05:32:35 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <CA+aC4kvbAMpfgLecNecOL+B7q=dVT3xWx02V5zTOtsfHVF8vRQ@mail.gmail.com>	<52CAEF54.7030901@suse.de>
	<52CB17D1.2060400@redhat.com>	<20140107123417.GG10654@zion.uk.xensource.com>	<52CC01F6.6050502@redhat.com>	<20140121182745.GA23328@zion.uk.xensource.com>	<52DF9B76.8060807@redhat.com>	<20140122160940.GC24675@zion.uk.xensource.com>	<52E0DCDD.60405@redhat.com>	<20140123135440.GF24675@zion.uk.xensource.com>
	<20140123162329.GG24675@zion.uk.xensource.com>
In-Reply-To: <20140123162329.GG24675@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	Anthony Liguori <anthony@codemonkey.ws>,
	=?ISO-8859-1?Q?Andreas_F=E4rber?= <afaerber@suse.de>
Subject: Re: [Xen-devel] Project idea: make QEMU more flexible
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 23/01/2014 17:23, Wei Liu ha scritto:
> On Thu, Jan 23, 2014 at 01:54:40PM +0000, Wei Liu wrote:
>> On Thu, Jan 23, 2014 at 10:11:57AM +0100, Paolo Bonzini wrote:
>>> Il 22/01/2014 17:09, Wei Liu ha scritto:
>>>> On Wed, Jan 22, 2014 at 11:20:38AM +0100, Paolo Bonzini wrote:
>>>>> Il 21/01/2014 19:27, Wei Liu ha scritto:
>>>>>>>>
>>>>>>>> Googling "disable tcg" would have provided an answer, but the patches
>>>>>>>> were old enough to be basically useless.  I'll refresh the current
>>>>>>>> version in the next few days.  Currently I am (or try to be) on
>>>>>>>> vacation, so I cannot really say when, but I'll do my best. :)
>>>>>>>>
>>>>>> Hi Paolo, any update?
>>>>>
>>>>> Oops, sorry, I thought I had sent that out.  It's in the disable-tcg
>>>>> branch on my github repository.
>>>>>
>>>>
>>>> Unfortunately your branch didn't work when I enabled TCG support. If I
>>>> use "--disable-tcg" with configure then it works fine.
>>>
>>> Branch fixed.
>>>
>>
>> Yes, it's fixed for the case I reported. Thanks.
>>
>> But it is now broken with the following rune:
>> ./configure --enable-kvm --disable-tcg --target-list=i386-softmmu
>> --disable-xen --enable-debug
>>
>>   LINK  i386-softmmu/qemu-system-i386
>>   cpus.o: In function `cpu_signal':
>>   /local/scratch/qemu/cpus.c:569: undefined reference to `exit_request'
>>   cpus.o: In function `tcg_cpu_exec':
>>   /local/scratch/qemu/cpus.c:1257: undefined reference to `cpu_x86_exec'
>>   cpus.o: In function `tcg_exec_all':
>>   /local/scratch/qemu/cpus.c:1282: undefined reference to `exit_request'
>>   /local/scratch/qemu/cpus.c:1299: undefined reference to `exit_request'
>>   exec.o: In function `tlb_reset_dirty_range_all':
>>   /local/scratch/qemu/exec.c:736: undefined reference to
>>   `cpu_tlb_reset_dirty_all'
>>   collect2: error: ld returned 1 exit status
>>   make[1]: *** [qemu-system-i386] Error 1
>>   make: *** [subdir-i386-softmmu] Error 2
>>
>> --enable-debug is the one to blame. Without that it links successfully.
>>
>> Wei.
>>
>
> Finally I figured out what was wrong. Your patch series was relying on
> the compiler to aggressively optimize away unused code.
>
> So when --enable-debug is set, the compiler won't optimize away the dead
> code, hence those undefined references. With any -O optimization level
> your series compiles successfully.
>
> Feel free to integrate my patch below, or fix those errors in the way
> you see appropriate.

Thanks!  I added stubs for all three undefined symbols in tcg-stub.c.

Another way to fix it would be -ffunction-sections/-Wl,--gc-sections 
(which shaves 200k more out of the .text section), but that breaks glibc 
static linking.
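The failure mode described above can be reproduced in isolation: a call to an undefined symbol inside a provably dead branch links only when the compiler folds the branch away, which -O0 (what --enable-debug selects) does not do. A minimal sketch, assuming a GCC/binutils toolchain (the file and symbol names here are invented for illustration, not taken from the QEMU tree):

```shell
# A reference to an undefined symbol, reachable only through a branch
# that constant propagation can prove dead.
cat > dead.c <<'EOF'
extern void missing_symbol(void);   /* declared but defined nowhere */

static const int use_tcg = 0;       /* stands in for "TCG disabled" */

int main(void)
{
    if (use_tcg)                    /* dead once use_tcg is propagated */
        missing_symbol();
    return 0;
}
EOF

# With optimization the branch is folded away, the call disappears from
# the object file, and the link succeeds.
gcc -O2 dead.c -o dead-opt

# At -O0 the call survives into the object file, so the linker reports
# an undefined reference to `missing_symbol'.
gcc -O0 dead.c -o dead-noopt || echo "link fails at -O0, as expected"
```

Providing stubs for the symbols (as done in tcg-stub.c) makes the link succeed regardless of optimization level, which is why it is the more robust fix than relying on dead-code elimination.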

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 06:06:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 06:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAZ8o-00045O-SP; Tue, 04 Feb 2014 06:05:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WAZ8o-00045J-24
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 06:05:50 +0000
Received: from [85.158.137.68:51828] by server-9.bemta-3.messagelabs.com id
	C1/4C-10184-D3380F25; Tue, 04 Feb 2014 06:05:49 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391493946!13135963!1
X-Originating-IP: [209.85.160.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24166 invoked from network); 4 Feb 2014 06:05:48 -0000
Received: from mail-pb0-f53.google.com (HELO mail-pb0-f53.google.com)
	(209.85.160.53)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 06:05:48 -0000
Received: by mail-pb0-f53.google.com with SMTP id md12so7997257pbc.12
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 22:05:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=8CgLRhxGK4jczJ70UuNRQgAr1z6JIuB/ep5hiK440Ko=;
	b=DBdhxJeLp4ciqwU/LBKM20uusgDjNh5BWxsn6+W0feSy2I6BtXGbfUmwe6Z512owr4
	L1hTKON1SabvExYczcn7qSg5NX/dYd7ZsauTJGgOlGmX4csMRylaXDK7enB9ISIGoKVI
	LXEtZSguWw9Se32Rl20hGBZakiPeJbTtxxjZhRYvujVJCHS4aQ6qjSlxmkZv1N5nFJka
	KD7nmiJnSOJS/yfNClloKOjfD/rUvMFk+dfFAXmWyR/h84q9BaPQfumwOeqnQ3e/NwRz
	v3iWNumyqmp1Aufeh9TceOA13ieBUvxiKC+cmRrMPoUHI9jxYWu8Qs8msCAjZZH+dCnP
	Zh4Q==
X-Gm-Message-State: ALoCoQmvmEZ5CsSwdwDQhRe59d1tVI6mirgznTOMNmwJFlJJ6M02fRrewXnE9ZKrMXePq0aGy8Wu
X-Received: by 10.68.196.195 with SMTP id io3mr42535544pbc.6.1391493946020;
	Mon, 03 Feb 2014 22:05:46 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id ja8sm21837262pbd.3.2014.02.03.22.05.42
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 22:05:45 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Tue,  4 Feb 2014 11:35:32 +0530
Message-Id: <1391493932-31268-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH] xen: arm: platforms: Remove determining reset
	specific values from dts for XGENE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes reading the reset-specific values (address, size and mask)
from the dts and uses values defined in the code instead.
This is because the xgene reset driver submitted to Linux is currently going
through a change (not yet accepted), and the new driver introduces a new set
of dts bindings for reset.
Hence, until the Linux driver settles, we use hardcoded values instead of
reading them from the dts, so that the Xen code does not break during the
Linux transition.

Ref:
http://lists.xen.org/archives/html/xen-devel/2014-01/msg02256.html
http://www.gossamer-threads.com/lists/linux/kernel/1845585

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/platforms/xgene-storm.c |   43 +++++++++-------------------------
 1 file changed, 11 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
index 4fc185b..1da9b36 100644
--- a/xen/arch/arm/platforms/xgene-storm.c
+++ b/xen/arch/arm/platforms/xgene-storm.c
@@ -25,6 +25,11 @@
 #include <asm/io.h>
 #include <asm/gic.h>
 
+/* XGENE RESET Specific defines */
+#define XGENE_RESET_ADDR        0x17000014UL
+#define XGENE_RESET_SIZE        0x100
+#define XGENE_RESET_MASK        0x1
+
 /* Variables to save reset address of soc during platform initialization. */
 static u64 reset_addr, reset_size;
 static u32 reset_mask;
@@ -141,38 +146,12 @@ static void xgene_storm_reset(void)
 
 static int xgene_storm_init(void)
 {
-    static const struct dt_device_match reset_ids[] __initconst =
-    {
-        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
-        {},
-    };
-    struct dt_device_node *dev;
-    int res;
-
-    dev = dt_find_matching_node(NULL, reset_ids);
-    if ( !dev )
-    {
-        printk("XGENE: Unable to find a compatible reset node in the device tree");
-        return 0;
-    }
-
-    dt_device_set_used_by(dev, DOMID_XEN);
-
-    /* Retrieve base address and size */
-    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
-    if ( res )
-    {
-        printk("XGENE: Unable to retrieve the base address for reset\n");
-        return 0;
-    }
-
-    /* Get reset mask */
-    res = dt_property_read_u32(dev, "mask", &reset_mask);
-    if ( !res )
-    {
-        printk("XGENE: Unable to retrieve the reset mask\n");
-        return 0;
-    }
+    /* TBD: Once Linux side device tree bindings are finalized retrieve
+     * these values from dts.
+     */
+    reset_addr = XGENE_RESET_ADDR;
+    reset_size = XGENE_RESET_SIZE;
+    reset_mask = XGENE_RESET_MASK;
 
     reset_vals_valid = true;
     return 0;
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 06:32:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 06:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAZXz-0004nV-6m; Tue, 04 Feb 2014 06:31:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WAZXx-0004nQ-EJ
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 06:31:49 +0000
Received: from [85.158.137.68:42317] by server-7.bemta-3.messagelabs.com id
	E4/55-13775-45980F25; Tue, 04 Feb 2014 06:31:48 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391495506!12017346!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18317 invoked from network); 4 Feb 2014 06:31:48 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 06:31:48 -0000
Received: by mail-pb0-f45.google.com with SMTP id un15so8056131pbc.32
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 22:31:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=VMTPzenCr1hfnIq2K/L1SNWhAJJq7Bz4iw0MO/tiNTs=;
	b=PBX9X7h/VC997vrjAgTQpTkeHAKcr1NKowSpH6el+kB5BbzB6wlM8agQ7aPMN5u7QI
	52T2zN0E1hoTxepy6ozURqwojG/aaHMzxmgLP19nRPZ9L2ilrHhi1kYDUCdlTlh6GCKr
	xmtXzHHOXGp1REONLgdc+2uc9Vd8Jc6sBABtzJNK/fMhfA6AP9AeLAG477Yh2ubOykfr
	svx+QD5SlGH/RMn+l0pN+oI1GDFEv/NhVcdXzICaNo1w2z/ViC8/DYlUpQQqIq5SMK9+
	PhGRB/5fb2DPx79Fwk8oON/F1MSZdfvpg7Op6mi1oyfHETAfAXvtq6zDc+NC3r+sJPQ7
	vfHQ==
X-Received: by 10.68.20.1 with SMTP id j1mr8067532pbe.148.1391495505499;
	Mon, 03 Feb 2014 22:31:45 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id
	rb6sm26940063pbb.41.2014.02.03.22.31.41 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 22:31:44 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Mon, 03 Feb 2014 22:31:40 -0800
Date: Mon, 3 Feb 2014 22:31:40 -0800
From: Matt Wilson <msw@linux.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20140204063139.GA19177@u109add4315675089e695.ant.amazon.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<20140128193837.GA32072@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140128193837.GA32072@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: linux-kernel@vger.kernel.org, Matthew Rushton <mrushton@amazon.com>,
	david.vrabel@citrix.com, Matt Wilson <msw@amazon.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 03:38:37PM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 28, 2014 at 06:43:32PM +0100, Roger Pau Monne wrote:
> > blkback bug fixes for memory leaks (patches 1 and 2) and a race 
> > (patch 3).
> 
> They all look OK to me. I've stuck them in my 'stable/for-jens-3.14'
> branch and am testing them now (hadn't pushed it yet).
> 
> Matt and Matt,
> 
> Could you take a look at the other two patches as well?

Sure, though somehow you didn't address your message to us, so I
didn't see it until today.

Matt Rushton did some review and testing on an earlier version that
came out fine. We'll give the final series a test since there was
still a bit of rework.

--msw

> David, Boris,
> 
> Are you OK with pushing those patches out to Jens Axboe if nobody
> gives an NACK by Friday?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 06:32:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 06:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAZXz-0004nV-6m; Tue, 04 Feb 2014 06:31:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WAZXx-0004nQ-EJ
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 06:31:49 +0000
Received: from [85.158.137.68:42317] by server-7.bemta-3.messagelabs.com id
	E4/55-13775-45980F25; Tue, 04 Feb 2014 06:31:48 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391495506!12017346!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18317 invoked from network); 4 Feb 2014 06:31:48 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 06:31:48 -0000
Received: by mail-pb0-f45.google.com with SMTP id un15so8056131pbc.32
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 22:31:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=VMTPzenCr1hfnIq2K/L1SNWhAJJq7Bz4iw0MO/tiNTs=;
	b=PBX9X7h/VC997vrjAgTQpTkeHAKcr1NKowSpH6el+kB5BbzB6wlM8agQ7aPMN5u7QI
	52T2zN0E1hoTxepy6ozURqwojG/aaHMzxmgLP19nRPZ9L2ilrHhi1kYDUCdlTlh6GCKr
	xmtXzHHOXGp1REONLgdc+2uc9Vd8Jc6sBABtzJNK/fMhfA6AP9AeLAG477Yh2ubOykfr
	svx+QD5SlGH/RMn+l0pN+oI1GDFEv/NhVcdXzICaNo1w2z/ViC8/DYlUpQQqIq5SMK9+
	PhGRB/5fb2DPx79Fwk8oON/F1MSZdfvpg7Op6mi1oyfHETAfAXvtq6zDc+NC3r+sJPQ7
	vfHQ==
X-Received: by 10.68.20.1 with SMTP id j1mr8067532pbe.148.1391495505499;
	Mon, 03 Feb 2014 22:31:45 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id
	rb6sm26940063pbb.41.2014.02.03.22.31.41 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 22:31:44 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Mon, 03 Feb 2014 22:31:40 -0800
Date: Mon, 3 Feb 2014 22:31:40 -0800
From: Matt Wilson <msw@linux.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20140204063139.GA19177@u109add4315675089e695.ant.amazon.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<20140128193837.GA32072@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140128193837.GA32072@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: linux-kernel@vger.kernel.org, Matthew Rushton <mrushton@amazon.com>,
	david.vrabel@citrix.com, Matt Wilson <msw@amazon.com>,
	xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Jan 28, 2014 at 03:38:37PM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jan 28, 2014 at 06:43:32PM +0100, Roger Pau Monne wrote:
> > blkback bug fixes for memory leaks (patches 1 and 2) and a race 
> > (patch 3).
> 
> They all look OK to me. I've stuck them in my 'stable/for-jens-3.14'
> branch and am testing them now (haven't pushed it yet).
> 
> Matt and Matt,
> 
> Could you take a look at the other two patches as well?

Sure, though somehow you didn't address your message to us, so I
didn't see it until today.

Matt Rushton did some review and testing on an earlier version that
came out fine. We'll give the final series a test since there was
still a bit of rework.

--msw

> David, Boris,
> 
> Are you OK with pushing those patches out to Jens Axboe if nobody
> gives an NACK by Friday?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 06:37:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 06:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAZcw-0004wy-VU; Tue, 04 Feb 2014 06:36:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WAZcv-0004ws-M5
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 06:36:57 +0000
Received: from [193.109.254.147:32506] by server-14.bemta-14.messagelabs.com
	id B6/27-29228-88A80F25; Tue, 04 Feb 2014 06:36:56 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391495814!1801062!1
X-Originating-IP: [209.85.192.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24817 invoked from network); 4 Feb 2014 06:36:55 -0000
Received: from mail-pd0-f173.google.com (HELO mail-pd0-f173.google.com)
	(209.85.192.173)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 06:36:55 -0000
Received: by mail-pd0-f173.google.com with SMTP id y10so7827889pdj.4
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 22:36:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=/a9JnZk8OUjZnZE2r4p+qW4/2VReij7IBSs6FO4qpJg=;
	b=umS+GhUQWNNmUXFEpY4HdJQuDniKTD9PEkZUn1LTSZGzYMzhRGCYfRhi4FNGSjMTyr
	xpSuArJ5OFI5vylxVMU2SWa//WecZN7C6JnpiCb35L2ItdtLnfKHRfbe57Nqxy+4grXf
	4BJvZx8oqQs6S1FdBfDd9zbGky9ffcK43LYoZESbQDStAc/xGGStkgzevSNMhgGJW0BU
	J2tRay5GaU1VHGpwZ/bfa74Yp5YcOEfk5wAza8dhGEQbxBArONGIxykK7zkqhVYPPyzD
	ugZXjcQl01lNdr4f4iOXKHUk7ofrSqpDawQfm61rP1pCSdHWKyHeaWfPzyGE8MKMjnUF
	EjmA==
X-Received: by 10.68.204.161 with SMTP id kz1mr7953716pbc.156.1391495813417;
	Mon, 03 Feb 2014 22:36:53 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id
	jk16sm62689591pbb.34.2014.02.03.22.36.50 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 22:36:52 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Mon, 03 Feb 2014 22:36:48 -0800
Date: Mon, 3 Feb 2014 22:36:48 -0800
From: Matt Wilson <msw@linux.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140204063648.GB19177@u109add4315675089e695.ant.amazon.com>
References: <1391433898-10533-1-git-send-email-zoltan.kiss@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391433898-10533-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	Anthony Liguori <aliguori@amazon.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Matt Wilson <msw@amazon.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v7] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 01:24:58PM +0000, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> It also removes a stray space from page.h and changes ret to 0 if
> XENFEAT_auto_translated_physmap, as that is the only possible return value
> there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it is needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> v7:
> - the previous version broke the build on ARM, as there is no need for those p2m
>   changes. I've put them into arch specific functions, which are stubs on arm
>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Suggested-by: David Vrabel <david.vrabel@citrix.com>

You're still forgetting that this was originally proposed by Anthony
Liguori <aliguori@amazon.com>.

https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 06:58:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 06:58:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAZxs-0005ba-2M; Tue, 04 Feb 2014 06:58:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WAZxr-0005bV-0z
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 06:58:35 +0000
Received: from [85.158.139.211:24322] by server-14.bemta-5.messagelabs.com id
	B9/1D-27598-A9F80F25; Tue, 04 Feb 2014 06:58:34 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391497111!1465718!1
X-Originating-IP: [209.85.216.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1422 invoked from network); 4 Feb 2014 06:58:32 -0000
Received: from mail-qc0-f176.google.com (HELO mail-qc0-f176.google.com)
	(209.85.216.176)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 06:58:32 -0000
Received: by mail-qc0-f176.google.com with SMTP id e16so12886082qcx.7
	for <xen-devel@lists.xenproject.org>;
	Mon, 03 Feb 2014 22:58:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=HoJv0OWr6Rg42tpv51MH8pSjDvI4Qof564mMiTcFUh8=;
	b=kFtSybC2TZHeeOZcQtkKDPEtvMpvA55TWy5AkqDxGkvXJX0Zbgx28Yd6AV3KKYG0HZ
	7YZywxGdG0gUYWZM0tQgqBLAF4ZanpeEWvRJHk2bXteOUCs3P7S7C5qJ/A83wSYh73Vm
	e1uyvQhUr54cxS30RVO7J/lkmvx5Pj3CxRM7n387DpaO1dO0oFd7SsBmVzi3gI9ADErj
	k4X6aw3E/L29Jg79Rv2bDcJwkVbHSpiZsIm/S+IFr3J1TM/PEBPfmQiY/qtKYXb9vS2c
	erzo3HpJUrdCZzczXu3HhaF/99XNkFMF7XERPTVLEFl7zcC1O0E7fv1YUEe5HBTK86xj
	/lcQ==
MIME-Version: 1.0
X-Received: by 10.224.12.82 with SMTP id w18mr30147597qaw.28.1391497111489;
	Mon, 03 Feb 2014 22:58:31 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Mon, 3 Feb 2014 22:58:31 -0800 (PST)
In-Reply-To: <CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
References: <20140121232708.GA29787@amazon.com>
	<20140122014908.GG18164@kroah.com>
	<CA+55aFw7fTFJtOAa+RETGSL7ZXZE4Ysk9+Xmg6_5yyLkwRtcTw@mail.gmail.com>
	<20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140124133830.GU4963@suse.de>
	<CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
Date: Tue, 4 Feb 2014 01:58:31 -0500
Message-ID: <CAEr7rXgCoYW7O0E38YGCThUKgmxYFfLfYP-x_KSzVBLOCiHeDg@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
	Michel Lespinasse <walken@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Steven Noonan <steven@uplinklabs.net>, Alex Thorlton <athorlton@sgi.com>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	David Vrabel <david.vrabel@citrix.com>, Vlastimil Babka <vbabka@suse.cz>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Borkmann <borkmann@iogearbox.net>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [BISECTED] Linux 3.12.7 introduces page map
	handling regression
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Jan 26, 2014 at 1:02 PM, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
> On Fri, Jan 24, 2014 at 8:38 AM, Mel Gorman <mgorman@suse.de> wrote:
>> On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
>>> >> >> <SNIP>
>>> >> >>
>>> >> >> This dump doesn't look dramatically different, either.
>>> >> >>
>>> >> >>>
>>> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
>>> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>>> >> >>> turned on?
>>> >> >>
>>> >> >>
>>> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>>> >> >> mean not enabled at runtime?
>>> >> >>
>>> >> >> [1]
>>> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Elena
>>>
>>> I was able to reproduce this consistently, also with the latest mm
>>> patches from yesterday.
>>> Can you please try this:
>>>
>>
>> Thanks Elena,
>>
>>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>>> index ce563be..76dcf96 100644
>>> --- a/arch/x86/xen/mmu.c
>>> +++ b/arch/x86/xen/mmu.c
>>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
>>> *mm, unsigned long addr,
>>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>>  {
>>> -       if (val & _PAGE_PRESENT) {
>>> +       if ((val & _PAGE_PRESENT) || ((val &
>>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>>
>>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>>
>>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>>  {
>>> -       if (val & _PAGE_PRESENT) {
>>> +       if ((val & _PAGE_PRESENT) || ((val &
>>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>>                 unsigned long mfn;
>>
>> Would reusing pte_present be an option? Ordinarily I expect that
>> PAGE_NUMA/PAGE_PROTNONE is only set if PAGE_PRESENT is not set and pte_present
>> is defined as
>>
>> static inline int pte_present(pte_t a)
>> {
>>         return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
>>                                _PAGE_NUMA);
>> }
>>
>> So it looks like it would work. Of course it would need to be split to
>> reuse it within xen if pte_present was split to have a pteval_present
>> helper like so
>>
>> static inline int pteval_present(pteval_t val)
>> {
>>         /*
>>          * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
>>          * way clearly states that the intent is that a protnone and numa
>>          * hinting ptes are considered present for the purposes of
>>          * pagetable operations like zapping, protection changes, gup etc.
>>          */
>>         return val & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
>> }
>>
>> static inline int pte_present(pte_t pte)
>> {
>>         return pteval_present(pte_flags(pte));
>> }
>>
>> If Xen is doing some other tricks with _PAGE_PRESENT then it might be
>> ruled out as an option. If so, then maybe it could still be made a
>> little clearer for future reference?
>
> Yes, sure, it should work, I tried it.
> Thank you Mel.
>
>>
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index c1d406f..ff621de 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>
>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>
>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>                 unsigned long mfn;
>> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
>> index 8e4f41d..693fe00 100644
>> --- a/include/asm-generic/pgtable.h
>> +++ b/include/asm-generic/pgtable.h
>> @@ -654,10 +654,14 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
>>   * (because _PAGE_PRESENT is not set).
>>   */
>>  #ifndef pte_numa
>> +static inline int pteval_numa(pteval_t pteval)
>> +{
>> +       return (pteval & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
>> +}
>> +
>>  static inline int pte_numa(pte_t pte)
>>  {
>> -       return (pte_flags(pte) &
>> -               (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
>> +       return pteval_numa(pte_flags(pte));
>>  }
>>  #endif
>>
>
>
>
> --
> Elena

Here are two variants of this change. The first adds a check for the
_PAGE_NUMA flag in the Xen pte translations.
The second adds the pteval_present helper proposed by Mel (comments are
left untouched :) together with a patch that uses pteval_present in the
Xen pte translations.
Mel, you can pick either of these two if they look fine and the Xen
maintainers are OK with the Xen change (Konrad, David?)

Subject: [PATCH] xen: add _PAGE_NUMA for pte translations

Adds a check for the _PAGE_NUMA flag in Xen guest pte address
translations. This resolves the reported issues and will be
essential for NUMA support in Xen guests with the future vNUMA
patches.

Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 2423ef0..c804d58 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
*mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                unsigned long pfn = mfn_to_pfn(mfn);

@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)

 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;
                unsigned long mfn;
-- 
1.7.10.4




Subject: [PATCH] mm: adds pteval_present

As suggested by Mel Gorman in https://lkml.org/lkml/2014/1/24/174.
Adds pteval_present to clarify that protnone and NUMA-hinting ptes are
considered present for the purposes of pagetable operations like
zapping, protection changes, gup etc.

Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
 arch/x86/include/asm/pgtable.h |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index bbc8b12..205b00d 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -445,10 +445,20 @@ static inline int pte_same(pte_t a, pte_t b)
        return a.pte == b.pte;
 }

+static inline int pteval_present(pteval_t pteval)
+{
+       /*
+        * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
+        * way clearly states that the intent is that a protnone and numa
+        * hinting ptes are considered present for the purposes of
+        * pagetable operations like zapping, protection changes, gup etc.
+        */
+       return pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
+}
+
 static inline int pte_present(pte_t a)
 {
-       return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
-                              _PAGE_NUMA);
+       return pteval_present(pte_flags(a));
 }

 #define pte_accessible pte_accessible
-- 
1.7.10.4


Subject: [PATCH] xen: use pteval_present for xen pte translations

Uses pteval_present for Xen pte translations, as suggested
by Mel Gorman in https://lkml.org/lkml/2014/1/24/174.
This also takes the _PAGE_NUMA flag into account.

Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 2423ef0..256282e 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
*mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if (pteval_present(val)) {
                unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                unsigned long pfn = mfn_to_pfn(mfn);

@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)

 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if (pteval_present(val)) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;
                unsigned long mfn;
-- 
1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> On Thu, Jan 23, 2014 at 11:23:37AM -0500, Elena Ufimtseva wrote:
>>> >> >> <SNIP>
>>> >> >>
>>> >> >> This dump doesn't look dramatically different, either.
>>> >> >>
>>> >> >>>
>>> >> >>> The other question is - how is AutoNUMA running when it is not enabled?
>>> >> >>> Shouldn't those _PAGE_NUMA ops be nops when AutoNUMA hasn't even been
>>> >> >>> turned on?
>>> >> >>
>>> >> >>
>>> >> >> Well, NUMA_BALANCING is enabled in the kernel config[1], but I presume you
>>> >> >> mean not enabled at runtime?
>>> >> >>
>>> >> >> [1]
>>> >> >> http://git.uplinklabs.net/snoonan/projects/archlinux/ec2/ec2-packages.git/tree/linux-ec2/config.x86_64
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Elena
>>>
>>> I was able to reproduce this consistently, also with the latest mm
>>> patches from yesterday.
>>> Can you please try this:
>>>
>>
>> Thanks Elena,
>>
>>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>>> index ce563be..76dcf96 100644
>>> --- a/arch/x86/xen/mmu.c
>>> +++ b/arch/x86/xen/mmu.c
>>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
>>> *mm, unsigned long addr,
>>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>>  {
>>> -       if (val & _PAGE_PRESENT) {
>>> +       if ((val & _PAGE_PRESENT) || ((val &
>>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>>
>>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>>
>>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>>  {
>>> -       if (val & _PAGE_PRESENT) {
>>> +       if ((val & _PAGE_PRESENT) || ((val &
>>> (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
>>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>>                 unsigned long mfn;
>>
>> Would reusing pte_present be an option? Ordinarily I expect that
>> PAGE_NUMA/PAGE_PROTNONE is only set if PAGE_PRESENT is not set and pte_present
>> is defined as
>>
>> static inline int pte_present(pte_t a)
>> {
>>         return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
>>                                _PAGE_NUMA);
>> }
>>
>> So it looks like it would work. Of course it would need to be split to
>> reuse it within xen if pte_present was split to have a pteval_present
>> helper like so
>>
>> static inline int pteval_present(pteval_t val)
>> {
>>         /*
>>          * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
>>          * way clearly states that the intent is that a protnone and numa
>>          * hinting ptes are considered present for the purposes of
>>          * pagetable operations like zapping, protection changes, gup etc.
>>          */
>>         return val & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
>> }
>>
>> static inline int pte_present(pte_t pte)
>> {
>>         return pteval_present(pte_flags(pte));
>> }
>>
>> If Xen is doing some other tricks with _PAGE_PRESENT then it might be
>> ruled out as an option. If so, then maybe it could still be made a
>> little clearer for future reference?
>
> Yes, sure, it should work, I tried it.
> Thank you Mel.
>
>>
>>
>> diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
>> index c1d406f..ff621de 100644
>> --- a/arch/x86/xen/mmu.c
>> +++ b/arch/x86/xen/mmu.c
>> @@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
>>  /* Assume pteval_t is equivalent to all the other *val_t types. */
>>  static pteval_t pte_mfn_to_pfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>>                 unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 unsigned long pfn = mfn_to_pfn(mfn);
>>
>> @@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
>>
>>  static pteval_t pte_pfn_to_mfn(pteval_t val)
>>  {
>> -       if (val & _PAGE_PRESENT) {
>> +       if ((val & _PAGE_PRESENT) || pteval_numa(val)) {
>>                 unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
>>                 pteval_t flags = val & PTE_FLAGS_MASK;
>>                 unsigned long mfn;
>> diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
>> index 8e4f41d..693fe00 100644
>> --- a/include/asm-generic/pgtable.h
>> +++ b/include/asm-generic/pgtable.h
>> @@ -654,10 +654,14 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
>>   * (because _PAGE_PRESENT is not set).
>>   */
>>  #ifndef pte_numa
>> +static inline int pteval_numa(pteval_t pteval)
>> +{
>> +       return (pteval & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
>> +}
>> +
>>  static inline int pte_numa(pte_t pte)
>>  {
>> -       return (pte_flags(pte) &
>> -               (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
>> +       return pteval_numa(pte_flags(pte));
>>  }
>>  #endif
>>
>
>
>
> --
> Elena

Here are two variants of this change. The first adds a check for the
_PAGE_NUMA flag in the Xen pte translations.
The second adds the pteval_present helper proposed by Mel (comments are
left untouched :) together with a patch for the Xen pte translations
that uses pteval_present.
Mel, you can pick either of these if they look fine and the Xen
maintainers are ok with the Xen change (Konrad, David?)
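For reference, the bit logic both variants are aiming for can be sketched as a standalone C snippet. The _PAGE_* values below are illustrative placeholders rather than the real x86 masks (those live in arch/x86/include/asm/pgtable_types.h, where _PAGE_NUMA aliases _PAGE_PROTNONE), so treat this as a sketch of the intended semantics, not kernel code:

```c
#include <stdint.h>

typedef uint64_t pteval_t;

/* Placeholder bit positions -- illustrative only. */
#define _PAGE_PRESENT  ((pteval_t)1 << 0)
#define _PAGE_PROTNONE ((pteval_t)1 << 8)
#define _PAGE_NUMA     _PAGE_PROTNONE

/* A NUMA hinting pte has _PAGE_NUMA set and _PAGE_PRESENT clear. */
int pteval_numa(pteval_t val)
{
	return (val & (_PAGE_NUMA | _PAGE_PRESENT)) == _PAGE_NUMA;
}

/* "Present" for pagetable operations: really present, protnone,
 * or a NUMA hinting pte. */
int pteval_present(pteval_t val)
{
	return (val & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA)) != 0;
}
```

Either predicate makes the Xen pte translation accept NUMA hinting ptes as well as plain present ones; the `== _PAGE_NUMA` form is preferable to a bare `val & (_PAGE_NUMA|_PAGE_PRESENT)` test because it states explicitly that a hinting pte means "NUMA set, PRESENT clear".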

Subject: [PATCH] xen: add _PAGE_NUMA for pte translations

Add a check for the _PAGE_NUMA flag in Xen guest pte address
translations. This resolves the reported issues and will be
essential for NUMA support in Xen guests with the future
vNUMA patches.

Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 2423ef0..c804d58 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
*mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                unsigned long pfn = mfn_to_pfn(mfn);

@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)

 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if ((val & _PAGE_PRESENT) || ((val & (_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA)) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;
                unsigned long mfn;
-- 
1.7.10.4




Subject: [PATCH] mm: add pteval_present

As suggested by Mel Gorman in https://lkml.org/lkml/2014/1/24/174,
add pteval_present to clarify that protnone and numa hinting ptes are
considered present for the purposes of pagetable operations like
zapping, protection changes, gup etc.

Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
 arch/x86/include/asm/pgtable.h |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index bbc8b12..205b00d 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -445,10 +445,20 @@ static inline int pte_same(pte_t a, pte_t b)
        return a.pte == b.pte;
 }

+static inline int pteval_present(pteval_t pteval)
+{
+       /*
+        * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
+        * way clearly states that the intent is that protnone and numa
+        * hinting ptes are considered present for the purposes of
+        * pagetable operations like zapping, protection changes, gup etc.
+        */
+       return pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
+}
+
 static inline int pte_present(pte_t a)
 {
-       return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
-                              _PAGE_NUMA);
+       return pteval_present(pte_flags(a));
 }

 #define pte_accessible pte_accessible
-- 
1.7.10.4


Subject: [PATCH] xen: use pteval_present for xen pte translations

Use pteval_present for Xen pte translations, as suggested
by Mel Gorman in https://lkml.org/lkml/2014/1/24/174.
This also takes the _PAGE_NUMA flag into account.

Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
---
 arch/x86/xen/mmu.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 2423ef0..256282e 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct
*mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if (pteval_present(val)) {
                unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                unsigned long pfn = mfn_to_pfn(mfn);

@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)

 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-       if (val & _PAGE_PRESENT) {
+       if (pteval_present(val)) {
                unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
                pteval_t flags = val & PTE_FLAGS_MASK;
                unsigned long mfn;
-- 
1.7.10.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 07:48:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 07:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAajj-0006uM-EP; Tue, 04 Feb 2014 07:48:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAajh-0006uH-8y
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 07:48:02 +0000
Received: from [85.158.137.68:35641] by server-10.bemta-3.messagelabs.com id
	8A/18-07302-03B90F25; Tue, 04 Feb 2014 07:48:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391500077!13174609!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29956 invoked from network); 4 Feb 2014 07:47:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 07:47:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,777,1384300800"; d="scan'208";a="97623124"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 07:47:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 02:47:56 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAajc-00030N-78;
	Tue, 04 Feb 2014 07:47:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAajc-0006Ax-0x;
	Tue, 04 Feb 2014 07:47:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24723-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 07:47:56 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24723: tolerable trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24723 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24723/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2    3 host-install(3)           broken pass in 24722
 test-amd64-amd64-xl-sedf     16 guest-start.2               fail pass in 24722
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24722
 test-amd64-amd64-xl           3 host-install(3)  broken in 24722 pass in 24723
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken in 24722 pass in 24723
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 7 windows-install fail in 24722 pass in 24723
 test-amd64-i386-xl-winxpsp3-vcpus1 8 guest-saverestore fail in 24722 pass in 24723
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail in 24722 pass in 24723
 test-amd64-i386-xl-win7-amd64  7 windows-install   fail in 24722 pass in 24723

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)        broken like 24722
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)     broken in 24722 like 24719

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24722 never pass

version targeted for testing:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f
baseline version:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:02:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAaxp-0007us-G7; Tue, 04 Feb 2014 08:02:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAaxo-0007un-8p
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:02:36 +0000
Received: from [85.158.139.211:37056] by server-11.bemta-5.messagelabs.com id
	8A/B4-23886-B9E90F25; Tue, 04 Feb 2014 08:02:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391500941!1458637!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9345 invoked from network); 4 Feb 2014 08:02:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:02:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:02:22 +0000
Message-Id: <52F0AC990200007800118E09@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:02:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "=?UTF-8?Q?Roger=20Pau=20Monn=C3=A9?=" <roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
	<52EFCACF.6080604@citrix.com>
In-Reply-To: <52EFCACF.6080604@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.02.14 at 17:58, Roger Pau Monné<roger.pau@citrix.com> wrote:
> On 29/01/14 09:52, Jan Beulich wrote:
>>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>> +		free_req(blkif, pending_req);
>>> +		/*
>>> +		 * Make sure the request is freed before releasing blkif,
>>> +		 * or there could be a race between free_req and the
>>> +		 * cleanup done in xen_blkif_free during shutdown.
>>> +		 *
>>> +		 * NB: The fact that we might try to wake up pending_free_wq
>>> +		 * before drain_complete (in case there's a drain going on)
>>> +		 * it's not a problem with our current implementation
>>> +		 * because we can assure there's no thread waiting on
>>> +		 * pending_free_wq if there's a drain going on, but it has
>>> +		 * to be taken into account if the current model is changed.
>>> +		 */
>>> +		xen_blkif_put(blkif);
>>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>>> +			if (atomic_read(&blkif->drain))
>>> +				complete(&blkif->drain_complete);
>>>  		}
>>> -		free_req(pending_req->blkif, pending_req);
>>>  	}
>>>  }
>> 
>> The put is still too early imo - you're explicitly accessing field in the
>> structure immediately afterwards. This may not be an issue at
>> present, but I think it's at least a latent one.
>> 
>> Apart from that, the two if()s would - at least to me - be more
>> clear if combined into one.
> 
> In order to get rid of the race I had to introduce yet another atomic_t 
> in xen_blkif struct, which is something I don't really like, but I 
> could not see any other way to solve this. If that's fine I will resend 
> the series, here is the reworked patch:

Mind explaining why you can't simply move the xen_blkif_put()
down between the if() and the free_ref().

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:02:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAay1-0007vo-SV; Tue, 04 Feb 2014 08:02:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <skiselkov.ml@gmail.com>) id 1WAOui-00087R-Ih
	for xen-devel@lists.xen.org; Mon, 03 Feb 2014 19:10:36 +0000
Received: from [85.158.139.211:40494] by server-15.bemta-5.messagelabs.com id
	B4/3C-24395-BA9EFE25; Mon, 03 Feb 2014 19:10:35 +0000
X-Env-Sender: skiselkov.ml@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391454635!1374407!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17535 invoked from network); 3 Feb 2014 19:10:35 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	3 Feb 2014 19:10:35 -0000
Received: by mail-we0-f173.google.com with SMTP id x55so2925491wes.32
	for <xen-devel@lists.xen.org>; Mon, 03 Feb 2014 11:10:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=mv6aMwi4XxzYBbuwKEX4H3tExA9282AEa3wu6NJrBlk=;
	b=We1/K6bmzLgmAPtmMPLhwaYKk0KJNKySWcBelq07stFVFG2I//3yqY+fA10had0jGS
	+wW5ABN3wW/pru+iLNrHiQesZEsKYvFz0J8UVSSGFjori+68Ky1971VdP7La0QZzN7Jx
	fc9+nxgbaEwcbyBVzDU9DphoES/It5qBT0f49yRCcyZ050FLJJdyBpoMMZZ4DRSVxhFS
	kbqK9NJ8pg2wC30YBO48jQAthLmcK4w6SDo2jx1RTcXkyBH5gWvY7ctmkBYzJ6S0ji34
	OQnIrkvsDK7MaXo960kTZotCDrBr2cQCLxYd4F6gLv3KNiTDDsHm8OhsTvSrm33jXWdg
	kdEA==
X-Received: by 10.180.99.39 with SMTP id en7mr9931769wib.10.1391454634874;
	Mon, 03 Feb 2014 11:10:34 -0800 (PST)
Received: from [192.168.17.21] (35.161.17.46.bridgep.com. [46.17.161.35])
	by mx.google.com with ESMTPSA id p1sm28459950wie.1.2014.02.03.11.10.33
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 03 Feb 2014 11:10:33 -0800 (PST)
Message-ID: <52EFE9A8.1040601@gmail.com>
Date: Mon, 03 Feb 2014 19:10:32 +0000
From: Saso Kiselkov <skiselkov.ml@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Igor Kozhukhov <ikozhukhov@gmail.com>, dilos-dev@lists.illumos.org
References: <53D5BEB0-82B1-40B4-B4C1-7EAA97BA9276@gmail.com>
In-Reply-To: <53D5BEB0-82B1-40B4-B4C1-7EAA97BA9276@gmail.com>
X-Mailman-Approved-At: Tue, 04 Feb 2014 08:02:49 +0000
Cc: illumos-dev Developer <developer@lists.illumos.org>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [developer] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/3/14, 6:52 PM, Igor Kozhukhov wrote:
> Hello All,
> 
> I have published first dilos-xen-4.3-dom0 ISO.
> 
> you can find info about how to play with it here:
> http://www.dilos.org/news/2014-02-03
> 
> 2Dariao: you can post this info to Xen blogs site :)
> or copy/paste from the DilOS site.

Nice work Igor!

Cheers,
-- 
Saso

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:16:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:16:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbAg-0008P6-8D; Tue, 04 Feb 2014 08:15:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbAe-0008P1-1P
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:15:52 +0000
Received: from [85.158.143.35:18075] by server-3.bemta-4.messagelabs.com id
	25/EF-11539-7B1A0F25; Tue, 04 Feb 2014 08:15:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391501750!2928193!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23005 invoked from network); 4 Feb 2014 08:15:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:15:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:15:51 +0000
Message-Id: <52F0AFC30200007800118E1C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:15:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC8FB67A3.1__="
Cc: Keir Fraser <keir@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] blkif: drop struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC8FB67A3.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

Commit 5148b7b5 ("blkif: add indirect descriptors interface to public
headers") added this without really explaining why it is needed: The
structure is identical to struct blkif_request_segment apart from the
padding field not being given a name in the pre-existing type. Their
size and alignment - which are what is relevant - are identical as long
as __alignof__(uint32_t) == 4 (which I think we rely upon in various
other places, so we can take as given).

Also correct a few minor glitches in the description, including for it
to no longer assume PAGE_SIZE == 4096.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -487,13 +487,13 @@
  * it's less than the number provided by the backend. The indirect_grefs field
  * in blkif_request_indirect should be filled by the frontend with the
  * grant references of the pages that are holding the indirect segments.
- * This pages are filled with an array of blkif_request_segment_aligned
- * that hold the information about the segments. The number of indirect
- * pages to use is determined by the maximum number of segments
- * an indirect request contains. Every indirect page can contain a maximum
- * of 512 segments (PAGE_SIZE/sizeof(blkif_request_segment_aligned)),
- * so to calculate the number of indirect pages to use we have to do
- * ceil(indirect_segments/512).
+ * These pages are filled with an array of blkif_request_segment that hold the
+ * information about the segments. The number of indirect pages to use is
+ * determined by the number of segments an indirect request contains. Every
+ * indirect page can contain a maximum of
+ * (PAGE_SIZE / sizeof(struct blkif_request_segment)) segments, so to
+ * calculate the number of indirect pages to use we have to do
+ * ceil(indirect_segments / (PAGE_SIZE / sizeof(struct blkif_request_segment))).
  *
  * If a backend does not recognize BLKIF_OP_INDIRECT, it should *not*
  * create the "feature-max-indirect-segments" node!
@@ -569,14 +569,6 @@ struct blkif_request_indirect {
 };
 typedef struct blkif_request_indirect blkif_request_indirect_t;
 
-struct blkif_request_segment_aligned {
-    grant_ref_t gref;            /* reference to I/O buffer frame        */
-    /* @first_sect: first sector in frame to transfer (inclusive).   */
-    /* @last_sect: last sector in frame to transfer (inclusive).     */
-    uint8_t     first_sect, last_sect;
-    uint16_t    _pad; /* padding to make it 8 bytes, so it's cache-aligned */
-};
-
 struct blkif_response {
     uint64_t        id;              /* copied from request */
     uint8_t         operation;       /* copied from request */




--=__PartC8FB67A3.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC8FB67A3.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 04 08:16:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:16:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbAg-0008P6-8D; Tue, 04 Feb 2014 08:15:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbAe-0008P1-1P
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:15:52 +0000
Received: from [85.158.143.35:18075] by server-3.bemta-4.messagelabs.com id
	25/EF-11539-7B1A0F25; Tue, 04 Feb 2014 08:15:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391501750!2928193!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23005 invoked from network); 4 Feb 2014 08:15:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:15:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:15:51 +0000
Message-Id: <52F0AFC30200007800118E1C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:15:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC8FB67A3.1__="
Cc: Keir Fraser <keir@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] blkif: drop struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC8FB67A3.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Commit 5148b7b5 ("blkif: add indirect descriptors interface to public
headers") added this without really explaining why it is needed: The
structure is identical to struct blkif_request_segment apart from the
padding field not being given a name in the pre-existing type. Their
size and alignment - which are what is relevant - are identical as long
as __alignof__(uint32_t) =3D=3D 4 (which I think we rely upon in various
other places, so we can take as given).

Also correct a few minor glitches in the description, including for it
to no longer assume PAGE_SIZE =3D=3D 4096.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/public/io/blkif.h
+++ b/xen/include/public/io/blkif.h
@@ -487,13 +487,13 @@
  * it's less than the number provided by the backend. The indirect_grefs =
field
  * in blkif_request_indirect should be filled by the frontend with the
  * grant references of the pages that are holding the indirect segments.
- * This pages are filled with an array of blkif_request_segment_aligned
- * that hold the information about the segments. The number of indirect
- * pages to use is determined by the maximum number of segments
- * an indirect request contains. Every indirect page can contain a =
maximum
- * of 512 segments (PAGE_SIZE/sizeof(blkif_request_segment_aligned)),
- * so to calculate the number of indirect pages to use we have to do
- * ceil(indirect_segments/512).
+ * These pages are filled with an array of blkif_request_segment that =
hold the
+ * information about the segments. The number of indirect pages to use is
+ * determined by the number of segments an indirect request contains. =
Every
+ * indirect page can contain a maximum of
+ * (PAGE_SIZE / sizeof(struct blkif_request_segment)) segments, so to
+ * calculate the number of indirect pages to use we have to do
+ * ceil(indirect_segments / (PAGE_SIZE / sizeof(struct blkif_request_segme=
nt))).
  *
  * If a backend does not recognize BLKIF_OP_INDIRECT, it should *not*
  * create the "feature-max-indirect-segments" node!
@@ -569,14 +569,6 @@ struct blkif_request_indirect {
 };
 typedef struct blkif_request_indirect blkif_request_indirect_t;
 
-struct blkif_request_segment_aligned {
-    grant_ref_t gref;            /* reference to I/O buffer frame        */
-    /* @first_sect: first sector in frame to transfer (inclusive).   */
-    /* @last_sect: last sector in frame to transfer (inclusive).     */
-    uint8_t     first_sect, last_sect;
-    uint16_t    _pad; /* padding to make it 8 bytes, so it's cache-aligned */
-};
-
 struct blkif_response {
     uint64_t        id;              /* copied from request */
     uint8_t         operation;       /* copied from request */




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Tue Feb 04 08:16:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:16:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbB3-0008QG-Ly; Tue, 04 Feb 2014 08:16:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAbB2-0008Q6-I8
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:16:16 +0000
Received: from [85.158.137.68:17442] by server-14.bemta-3.messagelabs.com id
	58/52-08196-FC1A0F25; Tue, 04 Feb 2014 08:16:15 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391501773!12028018!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27277 invoked from network); 4 Feb 2014 08:16:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 08:16:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99546463"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 08:16:08 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 03:16:07 -0500
Message-ID: <52F0A1C7.2010607@citrix.com>
Date: Tue, 4 Feb 2014 09:16:07 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
	<52EFCACF.6080604@citrix.com>
	<52F0AC990200007800118E09@nat28.tlf.novell.com>
In-Reply-To: <52F0AC990200007800118E09@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 09:02, Jan Beulich wrote:
>>>> On 03.02.14 at 17:58, Roger Pau Monné<roger.pau@citrix.com> wrote:
>> On 29/01/14 09:52, Jan Beulich wrote:
>>>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>>> +		free_req(blkif, pending_req);
>>>> +		/*
>>>> +		 * Make sure the request is freed before releasing blkif,
>>>> +		 * or there could be a race between free_req and the
>>>> +		 * cleanup done in xen_blkif_free during shutdown.
>>>> +		 *
>>>> +		 * NB: The fact that we might try to wake up pending_free_wq
>>>> +		 * before drain_complete (in case there's a drain going on)
>>>> +		 * it's not a problem with our current implementation
>>>> +		 * because we can assure there's no thread waiting on
>>>> +		 * pending_free_wq if there's a drain going on, but it has
>>>> +		 * to be taken into account if the current model is changed.
>>>> +		 */
>>>> +		xen_blkif_put(blkif);
>>>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>>>> +			if (atomic_read(&blkif->drain))
>>>> +				complete(&blkif->drain_complete);
>>>>  		}
>>>> -		free_req(pending_req->blkif, pending_req);
>>>>  	}
>>>>  }
>>>
>>> The put is still too early imo - you're explicitly accessing field in the
>>> structure immediately afterwards. This may not be an issue at
>>> present, but I think it's at least a latent one.
>>>
>>> Apart from that, the two if()s would - at least to me - be more
>>> clear if combined into one.
>>
>> In order to get rid of the race I had to introduce yet another atomic_t
>> in xen_blkif struct, which is something I don't really like, but I
>> could not see any other way to solve this. If that's fine I will resend
>> the series, here is the reworked patch:
>
> Mind explaining why you can't simply move the xen_blkif_put()
> down between the if() and the free_ref().

You mean doing something like:

if (atomic_read(&blkif->refcnt) <= 3) {
	if (atomic_read(&blkif->drain))
		complete(&blkif->drain_complete);
}
xen_blkif_put(blkif);
free_req(blkif, pending_req);

This would not be safe, because we are comparing refcnt before
decrementing it, and also we are not doing it atomically (the dec and
compare). If for example we have two inflight requests, both could
perform the atomic_read of refcnt, see there's still another inflight
request, and then both decrement, without anyone calling complete.

There's also the issue that with this approach as we are freeing the
request after we have put blkif, which is a race with the cleanup being
done in xen_blkif_free.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 04 08:20:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbEp-0000ON-C5; Tue, 04 Feb 2014 08:20:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAbEn-0000OB-VS
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:20:10 +0000
Received: from [193.109.254.147:34273] by server-1.bemta-14.messagelabs.com id
	D9/31-15438-9B2A0F25; Tue, 04 Feb 2014 08:20:09 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391502007!1831274!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12023 invoked from network); 4 Feb 2014 08:20:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 08:20:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97630490"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 08:20:06 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 03:20:05 -0500
Message-ID: <52F0A2B6.4080307@citrix.com>
Date: Tue, 4 Feb 2014 09:20:06 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
References: <52EB9F450200007800118618@nat28.tlf.novell.com>
	<1391420407.10515.10.camel@kazak.uk.xensource.com>
	<52EF8D4A0200007800118A67@nat28.tlf.novell.com>
In-Reply-To: <52EF8D4A0200007800118A67@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 03/02/14 12:36, Jan Beulich wrote:
>>>> On 03.02.14 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Fri, 2014-01-31 at 12:04 +0000, Jan Beulich wrote:
>>> Roger,
>>>
>>> so you introduced this, yet looking in a little closer detail I can't seem
>>> to understand why: struct blkif_request_segment is identical in layout,
>>> the sole difference between the two is that in the new structure the
>>> padding field has a name, whereas in the old one it doesn't.
>>
>> Is this something to do with Linux' use of __attribute__((packed)) once
>> again causing confusion? (I really hope not API deviation...)
> 
> Yes, I think it has to do with Linux'es way of defining these
> structures: My assumption is that the embedded (but not such
> attributed) definition of struct blkif_request_segment inside struct
> struct blkif_request_rw was assumed to also be packed (which it
> isn't, or else upstream Linux front-/backends wouldn't work with
> other back-/frontends), thus apparently making it necessary to
> have an "aligned" (i.e. un-packed) variant thereof.

Yes, this is my fault for wrongly assuming struct blkif_request_segment
inside struct blkif_request_rw was also packed, which it is not (or else
it would break with non Linux backends). Thanks for sending the Xen side
patch, I will take care of the Linux side if it's fine with you.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:25:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbJs-0000au-AD; Tue, 04 Feb 2014 08:25:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbJq-0000ap-5B
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:25:22 +0000
Received: from [85.158.137.68:6388] by server-3.bemta-3.messagelabs.com id
	FA/BF-14520-0F3A0F25; Tue, 04 Feb 2014 08:25:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391502319!13184625!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30418 invoked from network); 4 Feb 2014 08:25:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:25:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:25:18 +0000
Message-Id: <52F0B1FC0200007800118E50@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:25:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"=?UTF-8?Q?Roger=20Pau=20Monn=C3=A9?=" <roger.pau@citrix.com>
References: <52EB9F450200007800118618@nat28.tlf.novell.com>
	<1391420407.10515.10.camel@kazak.uk.xensource.com>
	<52EF8D4A0200007800118A67@nat28.tlf.novell.com>
	<52F0A2B6.4080307@citrix.com>
In-Reply-To: <52F0A2B6.4080307@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 09:20, Roger Pau Monné<roger.pau@citrix.com> wrote:
> On 03/02/14 12:36, Jan Beulich wrote:
>>>>> On 03.02.14 at 10:40, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> On Fri, 2014-01-31 at 12:04 +0000, Jan Beulich wrote:
>>>> Roger,
>>>>
>>>> so you introduced this, yet looking in a little closer detail I can't seem
>>>> to understand why: struct blkif_request_segment is identical in layout,
>>>> the sole difference between the two is that in the new structure the
>>>> padding field has a name, whereas in the old one it doesn't.
>>>
>>> Is this something to do with Linux' use of __attribute__((packed)) once
>>> again causing confusion? (I really hope not API deviation...)
>>
>> Yes, I think it has to do with Linux'es way of defining these
>> structures: My assumption is that the embedded (but not such
>> attributed) definition of struct blkif_request_segment inside struct
>> struct blkif_request_rw was assumed to also be packed (which it
>> isn't, or else upstream Linux front-/backends wouldn't work with
>> other back-/frontends), thus apparently making it necessary to
>> have an "aligned" (i.e. un-packed) variant thereof.
>
> Yes, this is my fault for wrongly assuming struct blkif_request_segment
> inside struct blkif_request_rw was also packed, which it is not (or else
> it would break with non Linux backends). Thanks for sending the Xen side
> patch, I will take care of the Linux side if it's fine with you.

Please do.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:25:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbJs-0000au-AD; Tue, 04 Feb 2014 08:25:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbJq-0000ap-5B
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:25:22 +0000
Received: from [85.158.137.68:6388] by server-3.bemta-3.messagelabs.com id
	FA/BF-14520-0F3A0F25; Tue, 04 Feb 2014 08:25:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391502319!13184625!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30418 invoked from network); 4 Feb 2014 08:25:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:25:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:25:18 +0000
Message-Id: <52F0B1FC0200007800118E50@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:25:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"=?UTF-8?Q?Roger=20Pau=20Monn=C3=A9?=" <roger.pau@citrix.com>
References: <52EB9F450200007800118618@nat28.tlf.novell.com>
	<1391420407.10515.10.camel@kazak.uk.xensource.com>
	<52EF8D4A0200007800118A67@nat28.tlf.novell.com>
	<52F0A2B6.4080307@citrix.com>
In-Reply-To: <52F0A2B6.4080307@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pj4+IE9uIDA0LjAyLjE0IGF0IDA5OjIwLCBSb2dlciBQYXUgTW9ubsOpPHJvZ2VyLnBhdUBjaXRy
aXguY29tPiB3cm90ZToKPiBPbiAwMy8wMi8xNCAxMjozNiwgSmFuIEJldWxpY2ggd3JvdGU6Cj4+
Pj4+IE9uIDAzLjAyLjE0IGF0IDEwOjQwLCBJYW4gQ2FtcGJlbGwgPElhbi5DYW1wYmVsbEBjaXRy
aXguY29tPiB3cm90ZToKPj4+IE9uIEZyaSwgMjAxNC0wMS0zMSBhdCAxMjowNCArMDAwMCwgSmFu
IEJldWxpY2ggd3JvdGU6Cj4+Pj4gUm9nZXIsCj4+Pj4KPj4+PiBzbyB5b3UgaW50cm9kdWNlZCB0
aGlzLCB5ZXQgbG9va2luZyBpbiBhIGxpdHRsZSBjbG9zZXIgZGV0YWlsIEkgY2FuJ3Qgc2VlbQo+
Pj4+IHRvIHVuZGVyc3RhbmQgd2h5OiBzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50IGlzIGlk
ZW50aWNhbCBpbiBsYXlvdXQsCj4+Pj4gdGhlIHNvbGUgZGlmZmVyZW5jZSBiZXR3ZWVuIHRoZSB0
d28gaXMgdGhhdCBpbiB0aGUgbmV3IHN0cnVjdHVyZSB0aGUKPj4+PiBwYWRkaW5nIGZpZWxkIGhh
cyBhIG5hbWUsIHdoZXJlYXMgaW4gdGhlIG9sZCBvbmUgaXQgZG9lc24ndC4KPj4+Cj4+PiBJcyB0
aGlzIHNvbWV0aGluZyB0byBkbyB3aXRoIExpbnV4JyB1c2Ugb2YgX19hdHRyaWJ1dGVfXygocGFj
a2VkKSkgb25jZQo+Pj4gYWdhaW4gY2F1c2luZyBjb25mdXNpb24/IChJIHJlYWxseSBob3BlIG5v
dCBBUEkgZGV2aWF0aW9uLi4uKQo+PiAKPj4gWWVzLCBJIHRoaW5rIGl0IGhhcyB0byBkbyB3aXRo
IExpbnV4J2VzIHdheSBvZiBkZWZpbmluZyB0aGVzZQo+PiBzdHJ1Y3R1cmVzOiBNeSBhc3N1bXB0
aW9uIGlzIHRoYXQgdGhlIGVtYmVkZGVkIChidXQgbm90IHN1Y2gKPj4gYXR0cmlidXRlZCkgZGVm
aW5pdGlvbiBvZiBzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50IGluc2lkZSBzdHJ1Y3QKPj4g
c3RydWN0IGJsa2lmX3JlcXVlc3Rfcncgd2FzIGFzc3VtZWQgdG8gYWxzbyBiZSBwYWNrZWQgKHdo
aWNoIGl0Cj4+IGlzbid0LCBvciBlbHNlIHVwc3RyZWFtIExpbnV4IGZyb250LS9iYWNrZW5kcyB3
b3VsZG4ndCB3b3JrIHdpdGgKPj4gb3RoZXIgYmFjay0vZnJvbnRlbmRzKSwgdGh1cyBhcHBhcmVu
dGx5IG1ha2luZyBpdCBuZWNlc3NhcnkgdG8KPj4gaGF2ZSBhbiAiYWxpZ25lZCIgKGkuZS4gdW4t
cGFja2VkKSB2YXJpYW50IHRoZXJlb2YuCj4gCj4gWWVzLCB0aGlzIGlzIG15IGZhdWx0IGZvciB3
cm9uZ2x5IGFzc3VtaW5nIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQKPiBpbnNpZGUgc3Ry
dWN0IGJsa2lmX3JlcXVlc3Rfcncgd2FzIGFsc28gcGFja2VkLCB3aGljaCBpdCBpcyBub3QgKG9y
IGVsc2UKPiBpdCB3b3VsZCBicmVhayB3aXRoIG5vbiBMaW51eCBiYWNrZW5kcykuIFRoYW5rcyBm
b3Igc2VuZGluZyB0aGUgWGVuIHNpZGUKPiBwYXRjaCwgSSB3aWxsIHRha2UgY2FyZSBvZiB0aGUg
TGludXggc2lkZSBpZiBpdCdzIGZpbmUgd2l0aCB5b3UuCgpQbGVhc2UgZG8uCgpKYW4KCl9fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWls
aW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVu
LWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:25:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbK4-0000c7-MZ; Tue, 04 Feb 2014 08:25:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAbK4-0000by-51
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:25:36 +0000
Received: from [193.109.254.147:5700] by server-11.bemta-14.messagelabs.com id
	A5/19-24604-FF3A0F25; Tue, 04 Feb 2014 08:25:35 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391502333!1832495!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27142 invoked from network); 4 Feb 2014 08:25:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 08:25:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97632098"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 08:25:33 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 03:25:33 -0500
Message-ID: <52F0A3FB.8070108@citrix.com>
Date: Tue, 4 Feb 2014 09:25:31 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52F0AFC30200007800118E1C@nat28.tlf.novell.com>
In-Reply-To: <52F0AFC30200007800118E1C@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] blkif: drop struct blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 09:15, Jan Beulich wrote:
> Commit 5148b7b5 ("blkif: add indirect descriptors interface to public
> headers") added this without really explaining why it is needed: The
> structure is identical to struct blkif_request_segment apart from the
> padding field not being given a name in the pre-existing type. Their
> size and alignment - which are what is relevant - are identical as long
> as __alignof__(uint32_t) =3D=3D 4 (which I think we rely upon in various
> other places, so we can take as given).
>
> Also correct a few minor glitches in the description, including for it
> to no longer assume PAGE_SIZE =3D=3D 4096.
> =

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Thanks for the patch.

Acked-by: Roger Pau Monn=E9 <roger.pau@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:32:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbQH-0000sI-Rj; Tue, 04 Feb 2014 08:32:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbQF-0000sC-UB
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:32:00 +0000
Received: from [85.158.139.211:41430] by server-3.bemta-5.messagelabs.com id
	4D/B3-13671-F75A0F25; Tue, 04 Feb 2014 08:31:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391502718!1470129!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12505 invoked from network); 4 Feb 2014 08:31:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:31:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:31:58 +0000
Message-Id: <52F0B38A0200007800118E62@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:31:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "=?UTF-8?Q?Roger=20Pau=20Monn=C3=A9?=" <roger.pau@citrix.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
	<52EFCACF.6080604@citrix.com>
	<52F0AC990200007800118E09@nat28.tlf.novell.com>
	<52F0A1C7.2010607@citrix.com>
In-Reply-To: <52F0A1C7.2010607@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, DavidVrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pj4+IE9uIDA0LjAyLjE0IGF0IDA5OjE2LCBSb2dlciBQYXUgTW9ubsOpPHJvZ2VyLnBhdUBjaXRy
aXguY29tPiB3cm90ZToKPiBPbiAwNC8wMi8xNCAwOTowMiwgSmFuIEJldWxpY2ggd3JvdGU6Cj4+
Pj4+IE9uIDAzLjAyLjE0IGF0IDE3OjU4LCBSb2dlciBQYXUgTW9ubsOpPHJvZ2VyLnBhdUBjaXRy
aXguY29tPiB3cm90ZToKPj4+IE9uIDI5LzAxLzE0IDA5OjUyLCBKYW4gQmV1bGljaCB3cm90ZToK
Pj4+Pj4+PiBPbiAyOC4wMS4xNCBhdCAxODo0MywgUm9nZXIgUGF1IE1vbm5lIDxyb2dlci5wYXVA
Y2l0cml4LmNvbT4gd3JvdGU6Cj4+Pj4+ICsJCWZyZWVfcmVxKGJsa2lmLCBwZW5kaW5nX3JlcSk7
Cj4+Pj4+ICsJCS8qCj4+Pj4+ICsJCSAqIE1ha2Ugc3VyZSB0aGUgcmVxdWVzdCBpcyBmcmVlZCBi
ZWZvcmUgcmVsZWFzaW5nIGJsa2lmLAo+Pj4+PiArCQkgKiBvciB0aGVyZSBjb3VsZCBiZSBhIHJh
Y2UgYmV0d2VlbiBmcmVlX3JlcSBhbmQgdGhlCj4+Pj4+ICsJCSAqIGNsZWFudXAgZG9uZSBpbiB4
ZW5fYmxraWZfZnJlZSBkdXJpbmcgc2h1dGRvd24uCj4+Pj4+ICsJCSAqCj4+Pj4+ICsJCSAqIE5C
OiBUaGUgZmFjdCB0aGF0IHdlIG1pZ2h0IHRyeSB0byB3YWtlIHVwIHBlbmRpbmdfZnJlZV93cQo+
Pj4+PiArCQkgKiBiZWZvcmUgZHJhaW5fY29tcGxldGUgKGluIGNhc2UgdGhlcmUncyBhIGRyYWlu
IGdvaW5nIG9uKQo+Pj4+PiArCQkgKiBpdCdzIG5vdCBhIHByb2JsZW0gd2l0aCBvdXIgY3VycmVu
dCBpbXBsZW1lbnRhdGlvbgo+Pj4+PiArCQkgKiBiZWNhdXNlIHdlIGNhbiBhc3N1cmUgdGhlcmUn
cyBubyB0aHJlYWQgd2FpdGluZyBvbgo+Pj4+PiArCQkgKiBwZW5kaW5nX2ZyZWVfd3EgaWYgdGhl
cmUncyBhIGRyYWluIGdvaW5nIG9uLCBidXQgaXQgaGFzCj4+Pj4+ICsJCSAqIHRvIGJlIHRha2Vu
IGludG8gYWNjb3VudCBpZiB0aGUgY3VycmVudCBtb2RlbCBpcyBjaGFuZ2VkLgo+Pj4+PiArCQkg
Ki8KPj4+Pj4gKwkJeGVuX2Jsa2lmX3B1dChibGtpZik7Cj4+Pj4+ICsJCWlmIChhdG9taWNfcmVh
ZCgmYmxraWYtPnJlZmNudCkgPD0gMikgewo+Pj4+PiArCQkJaWYgKGF0b21pY19yZWFkKCZibGtp
Zi0+ZHJhaW4pKQo+Pj4+PiArCQkJCWNvbXBsZXRlKCZibGtpZi0+ZHJhaW5fY29tcGxldGUpOwo+
Pj4+PiAgCQl9Cj4+Pj4+IC0JCWZyZWVfcmVxKHBlbmRpbmdfcmVxLT5ibGtpZiwgcGVuZGluZ19y
ZXEpOwo+Pj4+PiAgCX0KPj4+Pj4gIH0KPj4+Pgo+Pj4+IFRoZSBwdXQgaXMgc3RpbGwgdG9vIGVh
cmx5IGltbyAtIHlvdSdyZSBleHBsaWNpdGx5IGFjY2Vzc2luZyBmaWVsZCBpbiB0aGUKPj4+PiBz
dHJ1Y3R1cmUgaW1tZWRpYXRlbHkgYWZ0ZXJ3YXJkcy4gVGhpcyBtYXkgbm90IGJlIGFuIGlzc3Vl
IGF0Cj4+Pj4gcHJlc2VudCwgYnV0IEkgdGhpbmsgaXQncyBhdCBsZWFzdCBhIGxhdGVudCBvbmUu
Cj4+Pj4KPj4+PiBBcGFydCBmcm9tIHRoYXQsIHRoZSB0d28gaWYoKXMgd291bGQgLSBhdCBsZWFz
dCB0byBtZSAtIGJlIG1vcmUKPj4+PiBjbGVhciBpZiBjb21iaW5lZCBpbnRvIG9uZS4KPj4+Cj4+
PiBJbiBvcmRlciB0byBnZXQgcmlkIG9mIHRoZSByYWNlIEkgaGFkIHRvIGludHJvZHVjZSB5ZXQg
YW5vdGhlciBhdG9taWNfdCAKPj4+IGluIHhlbl9ibGtpZiBzdHJ1Y3QsIHdoaWNoIGlzIHNvbWV0
aGluZyBJIGRvbid0IHJlYWxseSBsaWtlLCBidXQgSSAKPj4+IGNvdWxkIG5vdCBzZWUgYW55IG90
aGVyIHdheSB0byBzb2x2ZSB0aGlzLiBJZiB0aGF0J3MgZmluZSBJIHdpbGwgcmVzZW5kIAo+Pj4g
dGhlIHNlcmllcywgaGVyZSBpcyB0aGUgcmV3b3JrZWQgcGF0Y2g6Cj4+IAo+PiBNaW5kIGV4cGxh
aW5pbmcgd2h5IHlvdSBjYW4ndCBzaW1wbHkgbW92ZSB0aGUgeGVuX2Jsa2lmX3B1dCgpCj4+IGRv
d24gYmV0d2VlbiB0aGUgaWYoKSBhbmQgdGhlIGZyZWVfcmVmKCkuCj4gCj4gWW91IG1lYW4gZG9p
bmcgc29tZXRoaW5nIGxpa2U6Cj4gCj4gaWYgKGF0b21pY19yZWFkKCZibGtpZi0+cmVmY250KSA8
PSAzKSB7Cj4gCWlmIChhdG9taWNfcmVhZCgmYmxraWYtPmRyYWluKSkKPiAJCWNvbXBsZXRlKCZi
bGtpZi0+ZHJhaW5fY29tcGxldGUpOwo+IH0KPiB4ZW5fYmxraWZfcHV0KGJsa2lmKTsKPiBmcmVl
X3JlcShibGtpZiwgcGVuZGluZ19yZXEpOwoKQWN0dWFsbHksIEkgZ290IHRoZSBkZXNjcmlwdGlv
biB3cm9uZy4gSSByZWFsbHkgbWVhbnQKCmZyZWVfcmVxKCk7CmlmIChhdG9taWNfcmVhZCAuLi4p
Cgljb21wbGV0ZSgpOwp4ZW5fYmxraWZfcHV0KCk7CgpKYW4KCl9fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRl
dmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbXE-0001QW-1x; Tue, 04 Feb 2014 08:39:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAbXC-0001QR-TT
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:39:11 +0000
Received: from [85.158.139.211:35935] by server-1.bemta-5.messagelabs.com id
	58/7E-12859-E27A0F25; Tue, 04 Feb 2014 08:39:10 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391503147!1465158!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14920 invoked from network); 4 Feb 2014 08:39:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 08:39:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99553275"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 08:39:07 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 03:39:06 -0500
Message-ID: <52F0A729.5070009@citrix.com>
Date: Tue, 4 Feb 2014 09:39:05 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
	<52EFCACF.6080604@citrix.com>
	<52F0AC990200007800118E09@nat28.tlf.novell.com>
	<52F0A1C7.2010607@citrix.com>
	<52F0B38A0200007800118E62@nat28.tlf.novell.com>
In-Reply-To: <52F0B38A0200007800118E62@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, DavidVrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDQvMDIvMTQgMDk6MzEsIEphbiBCZXVsaWNoIHdyb3RlOgo+Pj4+IE9uIDA0LjAyLjE0IGF0
IDA5OjE2LCBSb2dlciBQYXUgTW9ubsOpPHJvZ2VyLnBhdUBjaXRyaXguY29tPiB3cm90ZToKPj4g
T24gMDQvMDIvMTQgMDk6MDIsIEphbiBCZXVsaWNoIHdyb3RlOgo+Pj4+Pj4gT24gMDMuMDIuMTQg
YXQgMTc6NTgsIFJvZ2VyIFBhdSBNb25uw6k8cm9nZXIucGF1QGNpdHJpeC5jb20+IHdyb3RlOgo+
Pj4+IE9uIDI5LzAxLzE0IDA5OjUyLCBKYW4gQmV1bGljaCB3cm90ZToKPj4+Pj4+Pj4gT24gMjgu
MDEuMTQgYXQgMTg6NDMsIFJvZ2VyIFBhdSBNb25uZSA8cm9nZXIucGF1QGNpdHJpeC5jb20+IHdy
b3RlOgo+Pj4+Pj4gKwkJZnJlZV9yZXEoYmxraWYsIHBlbmRpbmdfcmVxKTsKPj4+Pj4+ICsJCS8q
Cj4+Pj4+PiArCQkgKiBNYWtlIHN1cmUgdGhlIHJlcXVlc3QgaXMgZnJlZWQgYmVmb3JlIHJlbGVh
c2luZyBibGtpZiwKPj4+Pj4+ICsJCSAqIG9yIHRoZXJlIGNvdWxkIGJlIGEgcmFjZSBiZXR3ZWVu
IGZyZWVfcmVxIGFuZCB0aGUKPj4+Pj4+ICsJCSAqIGNsZWFudXAgZG9uZSBpbiB4ZW5fYmxraWZf
ZnJlZSBkdXJpbmcgc2h1dGRvd24uCj4+Pj4+PiArCQkgKgo+Pj4+Pj4gKwkJICogTkI6IFRoZSBm
YWN0IHRoYXQgd2UgbWlnaHQgdHJ5IHRvIHdha2UgdXAgcGVuZGluZ19mcmVlX3dxCj4+Pj4+PiAr
CQkgKiBiZWZvcmUgZHJhaW5fY29tcGxldGUgKGluIGNhc2UgdGhlcmUncyBhIGRyYWluIGdvaW5n
IG9uKQo+Pj4+Pj4gKwkJICogaXQncyBub3QgYSBwcm9ibGVtIHdpdGggb3VyIGN1cnJlbnQgaW1w
From xen-devel-bounces@lists.xen.org Tue Feb 04 08:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbXE-0001QW-1x; Tue, 04 Feb 2014 08:39:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAbXC-0001QR-TT
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:39:11 +0000
Received: from [85.158.139.211:35935] by server-1.bemta-5.messagelabs.com id
	58/7E-12859-E27A0F25; Tue, 04 Feb 2014 08:39:10 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391503147!1465158!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14920 invoked from network); 4 Feb 2014 08:39:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 08:39:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99553275"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 08:39:07 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 03:39:06 -0500
Message-ID: <52F0A729.5070009@citrix.com>
Date: Tue, 4 Feb 2014 09:39:05 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<1390931015-5490-4-git-send-email-roger.pau@citrix.com>
	<52E8CF540200007800117D88@nat28.tlf.novell.com>
	<52EFCACF.6080604@citrix.com>
	<52F0AC990200007800118E09@nat28.tlf.novell.com>
	<52F0A1C7.2010607@citrix.com>
	<52F0B38A0200007800118E62@nat28.tlf.novell.com>
In-Reply-To: <52F0B38A0200007800118E62@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, DavidVrabel <david.vrabel@citrix.com>,
	Matt Wilson <msw@amazon.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 09:31, Jan Beulich wrote:
>>>> On 04.02.14 at 09:16, Roger Pau Monné<roger.pau@citrix.com> wrote:
>> On 04/02/14 09:02, Jan Beulich wrote:
>>>>>> On 03.02.14 at 17:58, Roger Pau Monné<roger.pau@citrix.com> wrote:
>>>> On 29/01/14 09:52, Jan Beulich wrote:
>>>>>>>> On 28.01.14 at 18:43, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>>>>> +		free_req(blkif, pending_req);
>>>>>> +		/*
>>>>>> +		 * Make sure the request is freed before releasing blkif,
>>>>>> +		 * or there could be a race between free_req and the
>>>>>> +		 * cleanup done in xen_blkif_free during shutdown.
>>>>>> +		 *
>>>>>> +		 * NB: The fact that we might try to wake up pending_free_wq
>>>>>> +		 * before drain_complete (in case there's a drain going on)
>>>>>> +		 * it's not a problem with our current implementation
>>>>>> +		 * because we can assure there's no thread waiting on
>>>>>> +		 * pending_free_wq if there's a drain going on, but it has
>>>>>> +		 * to be taken into account if the current model is changed.
>>>>>> +		 */
>>>>>> +		xen_blkif_put(blkif);
>>>>>> +		if (atomic_read(&blkif->refcnt) <= 2) {
>>>>>> +			if (atomic_read(&blkif->drain))
>>>>>> +				complete(&blkif->drain_complete);
>>>>>>  		}
>>>>>> -		free_req(pending_req->blkif, pending_req);
>>>>>>  	}
>>>>>>  }
>>>>>
>>>>> The put is still too early imo - you're explicitly accessing field in the
>>>>> structure immediately afterwards. This may not be an issue at
>>>>> present, but I think it's at least a latent one.
>>>>>
>>>>> Apart from that, the two if()s would - at least to me - be more
>>>>> clear if combined into one.
>>>>
>>>> In order to get rid of the race I had to introduce yet another atomic_t
>>>> in xen_blkif struct, which is something I don't really like, but I
>>>> could not see any other way to solve this. If that's fine I will resend
>>>> the series, here is the reworked patch:
>>>
>>> Mind explaining why you can't simply move the xen_blkif_put()
>>> down between the if() and the free_ref().
>>
>> You mean doing something like:
>>
>> if (atomic_read(&blkif->refcnt) <= 3) {
>> 	if (atomic_read(&blkif->drain))
>> 		complete(&blkif->drain_complete);
>> }
>> xen_blkif_put(blkif);
>> free_req(blkif, pending_req);
>
> Actually, I got the description wrong. I really meant
>
> free_req();
> if (atomic_read ...)
> 	complete();
> xen_blkif_put();

IMHO this is still a race, since we evaluate refcnt before decrementing
it. If we have for example 2 in flight requests, both could read refcnt,
both could see it's greater than 3 (so no one would call complete), and
then both will decrement it, without anyone actually calling complete.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 08:55:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 08:55:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbmd-0001mr-MW; Tue, 04 Feb 2014 08:55:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbmZ-0001mj-My
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 08:55:06 +0000
Received: from [193.109.254.147:28919] by server-5.bemta-14.messagelabs.com id
	5E/9F-16688-7EAA0F25; Tue, 04 Feb 2014 08:55:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391504102!1834278!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5835 invoked from network); 4 Feb 2014 08:55:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 08:55:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 08:55:07 +0000
Message-Id: <52F0B8F30200007800118E81@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 08:54:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad@kernel.org>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
In-Reply-To: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.02.14 at 18:03, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
>       * no virtual vmswith is allowed. Or else, the following IO
>       * emulation will handled in a wrong VCPU context.
>       */
> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> +    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )

As Mukesh pointed out, calling get_ioreq() twice is inefficient.

But to me it's not clear whether a PVH vCPU getting here is wrong
in the first place, i.e. I would think the above condition should be
|| rather than && (after all, even if nested HVM one day became
supported for PVH, there not being an ioreq would still seem to be
a clear indication of no further work to be done here).

Of course, if done that way, the corresponding comment would
benefit from being extended accordingly.

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 09:03:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 09:03:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAbun-0002BF-4u; Tue, 04 Feb 2014 09:03:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAbuk-0002BA-Qh
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 09:03:31 +0000
Received: from [193.109.254.147:52776] by server-16.bemta-14.messagelabs.com
	id A4/0C-21945-2ECA0F25; Tue, 04 Feb 2014 09:03:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391504609!1841915!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9819 invoked from network); 4 Feb 2014 09:03:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 09:03:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 09:03:37 +0000
Message-Id: <52F0BAEC0200007800118E8E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 09:03:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Saurabh Mishra" <saurabh.globe@gmail.com>
References: <CAMnwyJ2NtBDw0Fw4-zPVWm4Vi96gdL7z8PjV6ke=esbjXKFwew@mail.gmail.com>
In-Reply-To: <CAMnwyJ2NtBDw0Fw4-zPVWm4Vi96gdL7z8PjV6ke=esbjXKFwew@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Enabling kdump on SuSE 11 SP3 Xen resulting in error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 03.02.14 at 23:17, Saurabh Mishra <saurabh.globe@gmail.com> wrote:
> I'm trying to enable kdump on SuSE 11 SP3 with Xen configuration on our
> cards but having problem.

In general issues like this are not to be dealt with here, but by
SUSE support.

> If I don't specify KDUMP_DUMPFORMAT=ELF, it does not work and does not try
> to take vmcore.
> 
> If I specify KDUMP_DUMPLEVEL=0, then it does work but it takes very long
> time. I tried using KDUMP_DUMPLEVEL=30 or 10 but it complaint that there is
> not much space even though I've about 4.8 GB space with crashkernel
> configured as 256M@64M.

If the dump kernel comes up and the dumping takes long (or you
have other issues with dumping), then this is the wrong forum in
any case: Xen is no longer involved in this operation (the dump
kernel runs on bare hardware).

> Here's output from /proc/cmdline :-
> 
> lc-1:~ # cat /proc/cmdline
> crashkernel=256M@64M root=/dev/sda5 earlyprintk=serial,ttyS0,115200n8
> resume=/dev/sda5 splash=silent showopts  console=ttyS0,115200n8
> 
> lc-1:~ # grep crashkernel /boot/efi/efi/SuSE/xen.cfg
> kernel=vmlinuz-3.0.93-0.8-xen  crashkernel=256M@64M root=/dev/sda5
> earlyprintk=serial,ttyS0,115200n8 resume=/dev/sda5 splash=silent showopts
>  console=ttyS0,115200n8
> options=console=com1 com1=115200 dom0_mem=8192m iommu=1,sharept
> extra_guest_irqs=80 reboot=efi crashkernel=256M@64M

Now this makes things really interesting: Dumping in a UEFI
environment is generally unsupported (this is actually being worked
on upstream afaik). Whether the dump kernel can come up properly
at all depends on the specific characteristics of your firmware
implementation. It could in particular be that the dump kernel has
no way of finding ACPI tables, and hence setup of interrupts and
the like may not work as expected/needed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 09:44:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 09:44:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAcXU-0003Dr-FI; Tue, 04 Feb 2014 09:43:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAcXT-0003Dm-9j
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 09:43:31 +0000
Received: from [193.109.254.147:18749] by server-5.bemta-14.messagelabs.com id
	6E/0A-16688-246B0F25; Tue, 04 Feb 2014 09:43:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391507008!1851105!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23458 invoked from network); 4 Feb 2014 09:43:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 09:43:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99570250"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 09:43:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	04:43:27 -0500
Message-ID: <1391507006.10515.68.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Tue, 4 Feb 2014 09:43:26 +0000
In-Reply-To: <1391493932-31268-1-git-send-email-pranavkumar@linaro.org>
References: <1391493932-31268-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: platforms: Remove determining
 reset specific values from dts for XGENE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 11:35 +0530, Pranavkumar Sawargaonkar wrote:
> This patch removes reading reset specific values (address, size and mask) from dts
> and uses values defined in the code now.
> This is because currently xgene reset driver (submitted in linux) is going through
> a change (which is not yet accepted), this new driver has a new type of dts bindings
> for reset.
> Hence till linux driver comes to some conclusion, we will use hardcoded values instead
> of reading from dts so that xen code will not break due to the linux transition.
> 
> Ref:
> http://lists.xen.org/archives/html/xen-devel/2014-01/msg02256.html
> http://www.gossamer-threads.com/lists/linux/kernel/1845585
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

George -- I'd like to take this into 4.4 to avoid shipping a Xen which
relies on an unagreed DTS binding (which is an ABI of sorts).

Ian.

> ---
>  xen/arch/arm/platforms/xgene-storm.c |   43 +++++++++-------------------------
>  1 file changed, 11 insertions(+), 32 deletions(-)
> 
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 4fc185b..1da9b36 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -25,6 +25,11 @@
>  #include <asm/io.h>
>  #include <asm/gic.h>
>  
> +/* XGENE RESET Specific defines */
> +#define XGENE_RESET_ADDR        0x17000014UL
> +#define XGENE_RESET_SIZE        0x100
> +#define XGENE_RESET_MASK        0x1
> +
>  /* Variables to save reset address of soc during platform initialization. */
>  static u64 reset_addr, reset_size;
>  static u32 reset_mask;
> @@ -141,38 +146,12 @@ static void xgene_storm_reset(void)
>  
>  static int xgene_storm_init(void)
>  {
> -    static const struct dt_device_match reset_ids[] __initconst =
> -    {
> -        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
> -        {},
> -    };
> -    struct dt_device_node *dev;
> -    int res;
> -
> -    dev = dt_find_matching_node(NULL, reset_ids);
> -    if ( !dev )
> -    {
> -        printk("XGENE: Unable to find a compatible reset node in the device tree");
> -        return 0;
> -    }
> -
> -    dt_device_set_used_by(dev, DOMID_XEN);
> -
> -    /* Retrieve base address and size */
> -    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> -    if ( res )
> -    {
> -        printk("XGENE: Unable to retrieve the base address for reset\n");
> -        return 0;
> -    }
> -
> -    /* Get reset mask */
> -    res = dt_property_read_u32(dev, "mask", &reset_mask);
> -    if ( !res )
> -    {
> -        printk("XGENE: Unable to retrieve the reset mask\n");
> -        return 0;
> -    }
> +    /* TBD: Once Linux side device tree bindings are finalized retrieve
> +     * these values from dts.
> +     */
> +    reset_addr = XGENE_RESET_ADDR;
> +    reset_size = XGENE_RESET_SIZE;
> +    reset_mask = XGENE_RESET_MASK;
>  
>      reset_vals_valid = true;
>      return 0;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 11:35 +0530, Pranavkumar Sawargaonkar wrote:
> This patch removes reading the reset-specific values (address, size and mask)
> from the dts and instead uses values defined in the code.
> This is because the xgene reset driver submitted to Linux is currently going
> through a change (not yet accepted), and the new driver comes with a new type
> of dts binding for reset.
> Hence, until the Linux driver settles, we will use hardcoded values instead of
> reading them from the dts, so that the Xen code does not break during the
> Linux transition.
> 
> Ref:
> http://lists.xen.org/archives/html/xen-devel/2014-01/msg02256.html
> http://www.gossamer-threads.com/lists/linux/kernel/1845585
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

George -- I'd like to take this into 4.4 to avoid shipping a Xen which
relies on an unagreed DTS binding (which is an ABI of sorts).

Ian.

> ---
>  xen/arch/arm/platforms/xgene-storm.c |   43 +++++++++-------------------------
>  1 file changed, 11 insertions(+), 32 deletions(-)
> 
> diff --git a/xen/arch/arm/platforms/xgene-storm.c b/xen/arch/arm/platforms/xgene-storm.c
> index 4fc185b..1da9b36 100644
> --- a/xen/arch/arm/platforms/xgene-storm.c
> +++ b/xen/arch/arm/platforms/xgene-storm.c
> @@ -25,6 +25,11 @@
>  #include <asm/io.h>
>  #include <asm/gic.h>
>  
> +/* XGENE RESET Specific defines */
> +#define XGENE_RESET_ADDR        0x17000014UL
> +#define XGENE_RESET_SIZE        0x100
> +#define XGENE_RESET_MASK        0x1
> +
>  /* Variables to save reset address of soc during platform initialization. */
>  static u64 reset_addr, reset_size;
>  static u32 reset_mask;
> @@ -141,38 +146,12 @@ static void xgene_storm_reset(void)
>  
>  static int xgene_storm_init(void)
>  {
> -    static const struct dt_device_match reset_ids[] __initconst =
> -    {
> -        DT_MATCH_COMPATIBLE("apm,xgene-reboot"),
> -        {},
> -    };
> -    struct dt_device_node *dev;
> -    int res;
> -
> -    dev = dt_find_matching_node(NULL, reset_ids);
> -    if ( !dev )
> -    {
> -        printk("XGENE: Unable to find a compatible reset node in the device tree");
> -        return 0;
> -    }
> -
> -    dt_device_set_used_by(dev, DOMID_XEN);
> -
> -    /* Retrieve base address and size */
> -    res = dt_device_get_address(dev, 0, &reset_addr, &reset_size);
> -    if ( res )
> -    {
> -        printk("XGENE: Unable to retrieve the base address for reset\n");
> -        return 0;
> -    }
> -
> -    /* Get reset mask */
> -    res = dt_property_read_u32(dev, "mask", &reset_mask);
> -    if ( !res )
> -    {
> -        printk("XGENE: Unable to retrieve the reset mask\n");
> -        return 0;
> -    }
> +    /* TBD: Once Linux side device tree bindings are finalized retrieve
> +     * these values from dts.
> +     */
> +    reset_addr = XGENE_RESET_ADDR;
> +    reset_size = XGENE_RESET_SIZE;
> +    reset_mask = XGENE_RESET_MASK;
>  
>      reset_vals_valid = true;
>      return 0;
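For reference, the quoted diff boils down to the following init contract. This is a user-space restatement of `xgene_storm_init()` as it looks after the patch (the constants and globals mirror the Xen file; the `_model` suffix and the standalone build are illustrative, not the Xen code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hardcoded values, as in the patch above. */
#define XGENE_RESET_ADDR        0x17000014UL
#define XGENE_RESET_SIZE        0x100
#define XGENE_RESET_MASK        0x1

static uint64_t reset_addr, reset_size;
static uint32_t reset_mask;
static bool reset_vals_valid;

/* After the patch the dts lookup is gone: the constants are taken
 * as-is and the validity flag is set so the reset path knows the
 * values are usable. */
static int xgene_storm_init_model(void)
{
    reset_addr = XGENE_RESET_ADDR;
    reset_size = XGENE_RESET_SIZE;
    reset_mask = XGENE_RESET_MASK;
    reset_vals_valid = true;
    return 0;
}
```

The design point Ian acks below: because the values no longer come from the device tree, no unagreed DTS binding is baked into the released hypervisor.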




From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDE-0004Z1-HE; Tue, 04 Feb 2014 10:26:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdDD-0004YN-EJ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 10:26:39 +0000
Received: from [85.158.137.68:10313] by server-16.bemta-3.messagelabs.com id
	6A/2B-29917-D50C0F25; Tue, 04 Feb 2014 10:26:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391509594!13234845!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19830 invoked from network); 4 Feb 2014 10:26:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97665478"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:22 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCw-00037o-2Y;
	Tue, 04 Feb 2014 10:26:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:12 +0100
Message-ID: <1391509575-3949-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/4] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Matt Rushton <mrushton@amazon.com>

Currently shrink_free_pagepool() is called before the pages used for
persistent grants are released via free_persistent_gnts(). This
results in a memory leak when a VBD that uses persistent grants is
torn down.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: linux-kernel@vger.kernel.org
Cc: xen-devel@lists.xen.org
Cc: Anthony Liguori <aliguori@amazon.com>
Signed-off-by: Matt Rushton <mrushton@amazon.com>
Signed-off-by: Matt Wilson <msw@amazon.com>
---
 drivers/block/xen-blkback/blkback.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..30ef7b3 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -625,9 +625,6 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
-	/* Since we are shutting down remove all pages from the buffer */
-	shrink_free_pagepool(blkif, 0 /* All */);
-
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -636,6 +633,9 @@ purge_gnt_list:
 	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
 	blkif->persistent_gnt_c = 0;
 
+	/* Since we are shutting down remove all pages from the buffer */
+	shrink_free_pagepool(blkif, 0 /* All */);
+
 	if (log_stats)
 		print_stats(blkif);
 
-- 
1.7.7.5 (Apple Git-26)
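The leak this patch fixes is a pure ordering problem: the free-page pool was drained before the persistent-grant pages were returned to it, so those pages were stranded. A toy model of the two teardown orders (hypothetical names; in blkback the two steps are shrink_free_pagepool() and free_persistent_gnts()):

```c
/* Toy page pool: counts pages parked in the pool and pages freed. */
struct pool {
    int pages;   /* currently in the pool        */
    int freed;   /* handed back to the allocator */
};

/* shrink_free_pagepool() analogue: frees everything in the pool. */
static void pool_drain(struct pool *p)
{
    p->freed += p->pages;
    p->pages = 0;
}

/* free_persistent_gnts() analogue: users return their pages to the pool. */
static void users_release(struct pool *p, int n)
{
    p->pages += n;
}

/* Buggy order: drain first, then users return pages -> pages stranded. */
static int leaked_after_buggy_teardown(int in_pool, int in_use)
{
    struct pool p = { in_pool, 0 };
    pool_drain(&p);
    users_release(&p, in_use);
    return p.pages;   /* anything left here is leaked */
}

/* Fixed order (what the patch does): release users first, then drain. */
static int leaked_after_fixed_teardown(int in_pool, int in_use)
{
    struct pool p = { in_pool, 0 };
    users_release(&p, in_use);
    pool_drain(&p);
    return p.pages;
}
```

With 4 pages in the pool and 2 held by persistent grants, the buggy order strands 2 pages; the fixed order strands none.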

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDA-0004Y8-78; Tue, 04 Feb 2014 10:26:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD8-0004Xg-OM
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:34 +0000
Received: from [85.158.143.35:9068] by server-1.bemta-4.messagelabs.com id
	14/0C-31661-A50C0F25; Tue, 04 Feb 2014 10:26:34 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391509592!2978400!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26354 invoked from network); 4 Feb 2014 10:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579772"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:22 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCw-00037o-PH;
	Tue, 04 Feb 2014 10:26:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:13 +0100
Message-ID: <1391509575-3949-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/4] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I've at least identified two possible memory leaks in blkback, both
related to the shutdown path of a VBD:

- blkback doesn't wait for any pending purge work to finish before
  cleaning the list of free_pages. The purge work will call
  put_free_pages and thus we might end up with pages being added to
  the free_pages list after we have emptied it. Fix this by making
  sure there's no pending purge work before exiting
  xen_blkif_schedule, and moving the free_page cleanup code to
  xen_blkif_free.
- blkback doesn't wait for pending requests to end before cleaning
  persistent grants and the list of free_pages. Again this can add
  pages to the free_pages list or persistent grants to the
  persistent_gnts red-black tree. Fixed by moving the persistent
  grants and free_pages cleanup code to xen_blkif_free.

Also, add some checks in xen_blkif_free to make sure we are cleaning
everything.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 drivers/block/xen-blkback/blkback.c |   27 ++++++++++++++++++---------
 drivers/block/xen-blkback/common.h  |    1 +
 drivers/block/xen-blkback/xenbus.c  |   12 ++++++++++++
 3 files changed, 31 insertions(+), 9 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 30ef7b3..dcfe49f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -375,7 +375,7 @@ static void purge_persistent_gnt(struct xen_blkif *blkif)
 
 	pr_debug(DRV_PFX "Going to purge %u persistent grants\n", num_clean);
 
-	INIT_LIST_HEAD(&blkif->persistent_purge_list);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
 	root = &blkif->persistent_gnts;
 purge_list:
 	foreach_grant_safe(persistent_gnt, n, root, node) {
@@ -625,6 +625,23 @@ purge_gnt_list:
 			print_stats(blkif);
 	}
 
+	/* Drain pending purge work */
+	flush_work(&blkif->persistent_purge_work);
+
+	if (log_stats)
+		print_stats(blkif);
+
+	blkif->xenblkd = NULL;
+	xen_blkif_put(blkif);
+
+	return 0;
+}
+
+/*
+ * Remove persistent grants and empty the pool of free pages
+ */
+void xen_blkbk_free_caches(struct xen_blkif *blkif)
+{
 	/* Free all persistent grant pages */
 	if (!RB_EMPTY_ROOT(&blkif->persistent_gnts))
 		free_persistent_gnts(blkif, &blkif->persistent_gnts,
@@ -635,14 +652,6 @@ purge_gnt_list:
 
 	/* Since we are shutting down remove all pages from the buffer */
 	shrink_free_pagepool(blkif, 0 /* All */);
-
-	if (log_stats)
-		print_stats(blkif);
-
-	blkif->xenblkd = NULL;
-	xen_blkif_put(blkif);
-
-	return 0;
 }
 
 /*
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 8d88075..f733d76 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -376,6 +376,7 @@ int xen_blkif_xenbus_init(void);
 irqreturn_t xen_blkif_be_int(int irq, void *dev_id);
 int xen_blkif_schedule(void *arg);
 int xen_blkif_purge_persistent(void *arg);
+void xen_blkbk_free_caches(struct xen_blkif *blkif);
 
 int xen_blkbk_flush_diskcache(struct xenbus_transaction xbt,
 			      struct backend_info *be, int state);
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index c2014a0..8afef67 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -125,6 +125,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->persistent_gnts.rb_node = NULL;
 	spin_lock_init(&blkif->free_pages_lock);
 	INIT_LIST_HEAD(&blkif->free_pages);
+	INIT_LIST_HEAD(&blkif->persistent_purge_list);
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 
@@ -259,6 +260,17 @@ static void xen_blkif_free(struct xen_blkif *blkif)
 	if (!atomic_dec_and_test(&blkif->refcnt))
 		BUG();
 
+	/* Remove all persistent grants and the cache of ballooned pages. */
+	xen_blkbk_free_caches(blkif);
+
+	/* Make sure everything is drained before shutting down */
+	BUG_ON(blkif->persistent_gnt_c != 0);
+	BUG_ON(atomic_read(&blkif->persistent_gnt_in_use) != 0);
+	BUG_ON(blkif->free_pages_num != 0);
+	BUG_ON(!list_empty(&blkif->persistent_purge_list));
+	BUG_ON(!list_empty(&blkif->free_pages));
+	BUG_ON(!RB_EMPTY_ROOT(&blkif->persistent_gnts));
+
 	/* Check that there is no request in use */
 	list_for_each_entry_safe(req, n, &blkif->pending_free, free_list) {
 		list_del(&req->free_list);
-- 
1.7.7.5 (Apple Git-26)

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdD9-0004Y1-QJ; Tue, 04 Feb 2014 10:26:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD8-0004Xf-Br
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:34 +0000
Received: from [193.109.254.147:29492] by server-11.bemta-14.messagelabs.com
	id F2/0A-24604-950C0F25; Tue, 04 Feb 2014 10:26:33 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391509590!1840463!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18800 invoked from network); 4 Feb 2014 10:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579771"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:21 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCv-00037o-Eg;
	Tue, 04 Feb 2014 10:26:21 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:11 +0100
Message-ID: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series contains blkback bug fixes for memory leaks (patches 1 and
2) and a race (patch 3). Patch 4 removes blkif_request_segment_aligned,
since its memory layout is exactly the same as blkif_request_segment's,
and should introduce no functional change.

All patches should be backported to the stable branches; although the
last one is not a functional change, it is still nice to have for code
correctness.
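The layout claim behind patch 4 can be checked at compile time. A sketch of the two segment structs as the series describes them (field types assumed from the blkif interface; `grant_ref_t` is a 32-bit grant reference; this is an illustration, not a verbatim copy of the header):

```c
#include <stddef.h>
#include <stdint.h>

typedef uint32_t grant_ref_t;

struct blkif_request_segment {
	grant_ref_t gref;        /* reference to I/O buffer frame     */
	uint8_t     first_sect;  /* first sector in frame to transfer */
	uint8_t     last_sect;   /* last sector in frame to transfer  */
};

/* The _aligned variant merely names the tail padding that the plain
 * struct already has implicitly; the memory layout is identical. */
struct blkif_request_segment_aligned {
	grant_ref_t gref;
	uint8_t     first_sect;
	uint8_t     last_sect;
	uint16_t    _pad;        /* padding made explicit */
};
```

Because the sizes and field offsets match, expressions like PAGE_SIZE/sizeof(...) give the same segments-per-frame count for either struct, which is why dropping the `_aligned` variant is not a functional change.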


From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDA-0004YF-Ij; Tue, 04 Feb 2014 10:26:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD9-0004Xl-99
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:35 +0000
Received: from [193.109.254.147:10917] by server-15.bemta-14.messagelabs.com
	id 27/CB-10839-A50C0F25; Tue, 04 Feb 2014 10:26:34 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391509590!1840463!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18889 invoked from network); 4 Feb 2014 10:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579774"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:23 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCx-00037o-TT;
	Tue, 04 Feb 2014 10:26:23 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:15 +0100
Message-ID: <1391509575-3949-5-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Matt Wilson <msw@amazon.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/4] xen-blkif: drop struct
	blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This was wrongly introduced in commit 402b27f9, the only difference
between blkif_request_segment_aligned and blkif_request_segment is
that the former has a named padding, while both share the same
memory layout.

Also correct a few minor glitches in the description, including for it
to no longer assume PAGE_SIZE == 4096.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
[Description fix by Jan Beulich]
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reported-by: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Matt Rushton <mrushton@amazon.com>
Cc: Matt Wilson <msw@amazon.com>
---
 drivers/block/xen-blkback/blkback.c |    2 +-
 drivers/block/xen-blkback/common.h  |    2 +-
 drivers/block/xen-blkfront.c        |    6 +++---
 include/xen/interface/io/blkif.h    |   34 ++++++++++++++--------------------
 4 files changed, 19 insertions(+), 25 deletions(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 394fa2e..e612627 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -847,7 +847,7 @@ static int xen_blkbk_parse_indirect(struct blkif_request *req,
 	struct grant_page **pages = pending_req->indirect_pages;
 	struct xen_blkif *blkif = pending_req->blkif;
 	int indirect_grefs, rc, n, nseg, i;
-	struct blkif_request_segment_aligned *segments = NULL;
+	struct blkif_request_segment *segments = NULL;
 
 	nseg = pending_req->nr_pages;
 	indirect_grefs = INDIRECT_PAGES(nseg);
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index e40326a..9eb34e2 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -57,7 +57,7 @@
 #define MAX_INDIRECT_SEGMENTS 256
 
 #define SEGS_PER_INDIRECT_FRAME \
-	(PAGE_SIZE/sizeof(struct blkif_request_segment_aligned))
+	(PAGE_SIZE/sizeof(struct blkif_request_segment))
 #define MAX_INDIRECT_PAGES \
 	((MAX_INDIRECT_SEGMENTS + SEGS_PER_INDIRECT_FRAME - 1)/SEGS_PER_INDIRECT_FRAME)
 #define INDIRECT_PAGES(_segs) \
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c4a4c90..7d09dfc 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -162,7 +162,7 @@ static DEFINE_SPINLOCK(minor_lock);
 #define DEV_NAME	"xvd"	/* name in /dev */
 
 #define SEGS_PER_INDIRECT_FRAME \
-	(PAGE_SIZE/sizeof(struct blkif_request_segment_aligned))
+	(PAGE_SIZE/sizeof(struct blkif_request_segment))
 #define INDIRECT_GREFS(_segs) \
 	((_segs + SEGS_PER_INDIRECT_FRAME - 1)/SEGS_PER_INDIRECT_FRAME)
 
@@ -393,7 +393,7 @@ static int blkif_queue_request(struct request *req)
 	unsigned long id;
 	unsigned int fsect, lsect;
 	int i, ref, n;
-	struct blkif_request_segment_aligned *segments = NULL;
+	struct blkif_request_segment *segments = NULL;
 
 	/*
 	 * Used to store if we are able to queue the request by just using
@@ -550,7 +550,7 @@ static int blkif_queue_request(struct request *req)
 			} else {
 				n = i % SEGS_PER_INDIRECT_FRAME;
 				segments[n] =
-					(struct blkif_request_segment_aligned) {
+					(struct blkif_request_segment) {
							.gref       = ref,
							.first_sect = fsect,
							.last_sect  = lsect };
diff --git a/include/xen/interface/io/blkif.h b/include/xen/interface/io/blkif.h
index ae665ac..32ec05a 100644
--- a/include/xen/interface/io/blkif.h
+++ b/include/xen/interface/io/blkif.h
@@ -113,13 +113,13 @@ typedef uint64_t blkif_sector_t;
  * it's less than the number provided by the backend. The indirect_grefs field
  * in blkif_request_indirect should be filled by the frontend with the
  * grant references of the pages that are holding the indirect segments.
- * This pages are filled with an array of blkif_request_segment_aligned
- * that hold the information about the segments. The number of indirect
- * pages to use is determined by the maximum number of segments
- * a indirect request contains. Every indirect page can contain a maximum
- * of 512 segments (PAGE_SIZE/sizeof(blkif_request_segment_aligned)),
- * so to calculate the number of indirect pages to use we have to do
- * ceil(indirect_segments/512).
+ * These pages are filled with an ar
cmF5IG9mIGJsa2lmX3JlcXVlc3Rfc2VnbWVudCB0aGF0IGhvbGQgdGhlCisgKiBpbmZvcm1hdGlv
biBhYm91dCB0aGUgc2VnbWVudHMuIFRoZSBudW1iZXIgb2YgaW5kaXJlY3QgcGFnZXMgdG8gdXNl
IGlzCisgKiBkZXRlcm1pbmVkIGJ5IHRoZSBudW1iZXIgb2Ygc2VnbWVudHMgYW4gaW5kaXJlY3Qg
cmVxdWVzdCBjb250YWlucy4gRXZlcnkKKyAqIGluZGlyZWN0IHBhZ2UgY2FuIGNvbnRhaW4gYSBt
YXhpbXVtIG9mCisgKiAoUEFHRV9TSVpFIC8gc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1ZXN0X3Nl
Z21lbnQpKSBzZWdtZW50cywgc28gdG8KKyAqIGNhbGN1bGF0ZSB0aGUgbnVtYmVyIG9mIGluZGly
ZWN0IHBhZ2VzIHRvIHVzZSB3ZSBoYXZlIHRvIGRvCisgKiBjZWlsKGluZGlyZWN0X3NlZ21lbnRz
IC8gKFBBR0VfU0laRSAvIHNpemVvZihzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50KSkpLgog
ICoKICAqIElmIGEgYmFja2VuZCBkb2VzIG5vdCByZWNvZ25pemUgQkxLSUZfT1BfSU5ESVJFQ1Qs
IGl0IHNob3VsZCAqbm90KgogICogY3JlYXRlIHRoZSAiZmVhdHVyZS1tYXgtaW5kaXJlY3Qtc2Vn
bWVudHMiIG5vZGUhCkBAIC0xMzUsMTMgKzEzNSwxMiBAQCB0eXBlZGVmIHVpbnQ2NF90IGJsa2lm
X3NlY3Rvcl90OwogCiAjZGVmaW5lIEJMS0lGX01BWF9JTkRJUkVDVF9QQUdFU19QRVJfUkVRVUVT
VCA4CiAKLXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCB7Ci0JZ3JhbnRfcmVm
X3QgZ3JlZjsgICAgICAgIC8qIHJlZmVyZW5jZSB0byBJL08gYnVmZmVyIGZyYW1lICAgICAgICAq
LwotCS8qIEBmaXJzdF9zZWN0OiBmaXJzdCBzZWN0b3IgaW4gZnJhbWUgdG8gdHJhbnNmZXIgKGlu
Y2x1c2l2ZSkuICAgKi8KLQkvKiBAbGFzdF9zZWN0OiBsYXN0IHNlY3RvciBpbiBmcmFtZSB0byB0
cmFuc2ZlciAoaW5jbHVzaXZlKS4gICAgICovCi0JdWludDhfdCAgICAgZmlyc3Rfc2VjdCwgbGFz
dF9zZWN0OwotCXVpbnQxNl90ICAgIF9wYWQ7IC8qIHBhZGRpbmcgdG8gbWFrZSBpdCA4IGJ5dGVz
LCBzbyBpdCdzIGNhY2hlLWFsaWduZWQgKi8KLX0gX19hdHRyaWJ1dGVfXygoX19wYWNrZWRfXykp
Oworc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCB7CisJCWdyYW50X3JlZl90IGdyZWY7ICAg
ICAgICAvKiByZWZlcmVuY2UgdG8gSS9PIGJ1ZmZlciBmcmFtZSAgICAgICAgKi8KKwkJLyogQGZp
cnN0X3NlY3Q6IGZpcnN0IHNlY3RvciBpbiBmcmFtZSB0byB0cmFuc2ZlciAoaW5jbHVzaXZlKS4g
ICAqLworCQkvKiBAbGFzdF9zZWN0OiBsYXN0IHNlY3RvciBpbiBmcmFtZSB0byB0cmFuc2ZlciAo
aW5jbHVzaXZlKS4gICAgICovCisJCXVpbnQ4X3QgICAgIGZpcnN0X3NlY3QsIGxhc3Rfc2VjdDsK
K307CiAKIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3J3IHsKIAl1aW50OF90ICAgICAgICBucl9zZWdt
ZW50czsgIC8qIG51bWJlciBvZiBzZWdtZW50cyAgICAgICAgICAgICAgICAgICAqLwpAQCAtMTUx
LDEyICsxNTAsNyBAQCBzdHJ1Y3QgYmxraWZfcmVxdWVzdF9ydyB7CiAjZW5kaWYKIAl1aW50NjRf
dCAgICAgICBpZDsgICAgICAgICAgIC8qIHByaXZhdGUgZ3Vlc3QgdmFsdWUsIGVjaG9lZCBpbiBy
ZXNwICAqLwogCWJsa2lmX3NlY3Rvcl90IHNlY3Rvcl9udW1iZXI7Lyogc3RhcnQgc2VjdG9yIGlk
eCBvbiBkaXNrIChyL3cgb25seSkgICovCi0Jc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCB7
Ci0JCWdyYW50X3JlZl90IGdyZWY7ICAgICAgICAvKiByZWZlcmVuY2UgdG8gSS9PIGJ1ZmZlciBm
cmFtZSAgICAgICAgKi8KLQkJLyogQGZpcnN0X3NlY3Q6IGZpcnN0IHNlY3RvciBpbiBmcmFtZSB0
byB0cmFuc2ZlciAoaW5jbHVzaXZlKS4gICAqLwotCQkvKiBAbGFzdF9zZWN0OiBsYXN0IHNlY3Rv
ciBpbiBmcmFtZSB0byB0cmFuc2ZlciAoaW5jbHVzaXZlKS4gICAgICovCi0JCXVpbnQ4X3QgICAg
IGZpcnN0X3NlY3QsIGxhc3Rfc2VjdDsKLQl9IHNlZ1tCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JF
UVVFU1RdOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQgc2VnW0JMS0lGX01BWF9TRUdN
RU5UU19QRVJfUkVRVUVTVF07CiB9IF9fYXR0cmlidXRlX18oKF9fcGFja2VkX18pKTsKIAogc3Ry
dWN0IGJsa2lmX3JlcXVlc3RfZGlzY2FyZCB7Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoK
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVs
IG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9y
Zy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdD9-0004Xq-EV; Tue, 04 Feb 2014 10:26:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD7-0004Xa-LA
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:33 +0000
Received: from [193.109.254.147:29416] by server-10.bemta-14.messagelabs.com
	id 5C/65-10711-850C0F25; Tue, 04 Feb 2014 10:26:32 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391509590!1840463!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18626 invoked from network); 4 Feb 2014 10:26:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579773"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:23 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCx-00037o-BI;
	Tue, 04 Feb 2014 10:26:23 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:14 +0100
Message-ID: <1391509575-3949-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SW50cm9kdWNlIGEgbmV3IHZhcmlhYmxlIHRvIGtlZXAgdHJhY2sgb2YgdGhlIG51bWJlciBvZiBp
bi1mbGlnaHQKcmVxdWVzdHMuIFdlIG5lZWQgdG8gbWFrZSBzdXJlIHRoYXQgd2hlbiB4ZW5fYmxr
aWZfcHV0IGlzIGNhbGxlZCB0aGUKcmVxdWVzdCBoYXMgYWxyZWFkeSBiZWVuIGZyZWVkIGFuZCB3
ZSBjYW4gc2FmZWx5IGZyZWUgeGVuX2Jsa2lmLCB3aGljaAp3YXMgbm90IHRoZSBjYXNlIGJlZm9y
ZS4KClNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29t
PgpDYzogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxrb25yYWQud2lsa0BvcmFjbGUuY29tPgpDYzog
RGF2aWQgVnJhYmVsIDxkYXZpZC52cmFiZWxAY2l0cml4LmNvbT4KQ2M6IEJvcmlzIE9zdHJvdnNr
eSA8Ym9yaXMub3N0cm92c2t5QG9yYWNsZS5jb20+CkNjOiBNYXR0IFJ1c2h0b24gPG1ydXNodG9u
QGFtYXpvbi5jb20+CkNjOiBNYXR0IFdpbHNvbiA8bXN3QGFtYXpvbi5jb20+CkNjOiBJYW4gQ2Ft
cGJlbGwgPElhbi5DYW1wYmVsbEBjaXRyaXguY29tPgotLS0KIGRyaXZlcnMvYmxvY2sveGVuLWJs
a2JhY2svYmxrYmFjay5jIHwgICAzMiArKysrKysrKysrKysrKysrKysrKysrLS0tLS0tLS0tLQog
ZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaCAgfCAgICAxICsKIGRyaXZlcnMvYmxv
Y2sveGVuLWJsa2JhY2sveGVuYnVzLmMgIHwgICAgMSArCiAzIGZpbGVzIGNoYW5nZWQsIDI0IGlu
c2VydGlvbnMoKyksIDEwIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sv
eGVuLWJsa2JhY2svYmxrYmFjay5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNr
LmMKaW5kZXggZGNmZTQ5Zi4uMzk0ZmEyZSAxMDA2NDQKLS0tIGEvZHJpdmVycy9ibG9jay94ZW4t
YmxrYmFjay9ibGtiYWNrLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNr
LmMKQEAgLTk0Myw5ICs5NDMsNyBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxrX2RyYWluX2lvKHN0cnVj
dCB4ZW5fYmxraWYgKmJsa2lmKQogewogCWF0b21pY19zZXQoJmJsa2lmLT5kcmFpbiwgMSk7CiAJ
ZG8gewotCQkvKiBUaGUgaW5pdGlhbCB2YWx1ZSBpcyBvbmUsIGFuZCBvbmUgcmVmY250IHRha2Vu
IGF0IHRoZQotCQkgKiBzdGFydCBvZiB0aGUgeGVuX2Jsa2lmX3NjaGVkdWxlIHRocmVhZC4gKi8K
LQkJaWYgKGF0b21pY19yZWFkKCZibGtpZi0+cmVmY250KSA8PSAyKQorCQlpZiAoYXRvbWljX3Jl
YWQoJmJsa2lmLT5pbmZsaWdodCkgPT0gMCkKIAkJCWJyZWFrOwogCQl3YWl0X2Zvcl9jb21wbGV0
aW9uX2ludGVycnVwdGlibGVfdGltZW91dCgKIAkJCQkmYmxraWYtPmRyYWluX2NvbXBsZXRlLCBI
Wik7CkBAIC05ODUsMTcgKzk4MywzMCBAQCBzdGF0aWMgdm9pZCBfX2VuZF9ibG9ja19pb19vcChz
dHJ1Y3QgcGVuZGluZ19yZXEgKnBlbmRpbmdfcmVxLCBpbnQgZXJyb3IpCiAJICogdGhlIHByb3Bl
ciByZXNwb25zZSBvbiB0aGUgcmluZy4KIAkgKi8KIAlpZiAoYXRvbWljX2RlY19hbmRfdGVzdCgm
cGVuZGluZ19yZXEtPnBlbmRjbnQpKSB7Ci0JCXhlbl9ibGtia191bm1hcChwZW5kaW5nX3JlcS0+
YmxraWYsCisJCXN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmID0gcGVuZGluZ19yZXEtPmJsa2lmOwor
CisJCXhlbl9ibGtia191bm1hcChibGtpZiwKIAkJICAgICAgICAgICAgICAgIHBlbmRpbmdfcmVx
LT5zZWdtZW50cywKIAkJICAgICAgICAgICAgICAgIHBlbmRpbmdfcmVxLT5ucl9wYWdlcyk7Ci0J
CW1ha2VfcmVzcG9uc2UocGVuZGluZ19yZXEtPmJsa2lmLCBwZW5kaW5nX3JlcS0+aWQsCisJCW1h
a2VfcmVzcG9uc2UoYmxraWYsIHBlbmRpbmdfcmVxLT5pZCwKIAkJCSAgICAgIHBlbmRpbmdfcmVx
LT5vcGVyYXRpb24sIHBlbmRpbmdfcmVxLT5zdGF0dXMpOwotCQl4ZW5fYmxraWZfcHV0KHBlbmRp
bmdfcmVxLT5ibGtpZik7Ci0JCWlmIChhdG9taWNfcmVhZCgmcGVuZGluZ19yZXEtPmJsa2lmLT5y
ZWZjbnQpIDw9IDIpIHsKLQkJCWlmIChhdG9taWNfcmVhZCgmcGVuZGluZ19yZXEtPmJsa2lmLT5k
cmFpbikpCi0JCQkJY29tcGxldGUoJnBlbmRpbmdfcmVxLT5ibGtpZi0+ZHJhaW5fY29tcGxldGUp
OworCQlmcmVlX3JlcShibGtpZiwgcGVuZGluZ19yZXEpOworCQkvKgorCQkgKiBNYWtlIHN1cmUg
dGhlIHJlcXVlc3QgaXMgZnJlZWQgYmVmb3JlIHJlbGVhc2luZyBibGtpZiwKKwkJICogb3IgdGhl
cmUgY291bGQgYmUgYSByYWNlIGJldHdlZW4gZnJlZV9yZXEgYW5kIHRoZQorCQkgKiBjbGVhbnVw
IGRvbmUgaW4geGVuX2Jsa2lmX2ZyZWUgZHVyaW5nIHNodXRkb3duLgorCQkgKgorCQkgKiBOQjog
VGhlIGZhY3QgdGhhdCB3ZSBtaWdodCB0cnkgdG8gd2FrZSB1cCBwZW5kaW5nX2ZyZWVfd3EKKwkJ
ICogYmVmb3JlIGRyYWluX2NvbXBsZXRlIChpbiBjYXNlIHRoZXJlJ3MgYSBkcmFpbiBnb2luZyBv
bikKKwkJICogaXQncyBub3QgYSBwcm9ibGVtIHdpdGggb3VyIGN1cnJlbnQgaW1wbGVtZW50YXRp
b24KKwkJICogYmVjYXVzZSB3ZSBjYW4gYXNzdXJlIHRoZXJlJ3Mgbm8gdGhyZWFkIHdhaXRpbmcg
b24KKwkJICogcGVuZGluZ19mcmVlX3dxIGlmIHRoZXJlJ3MgYSBkcmFpbiBnb2luZyBvbiwgYnV0
IGl0IGhhcworCQkgKiB0byBiZSB0YWtlbiBpbnRvIGFjY291bnQgaWYgdGhlIGN1cnJlbnQgbW9k
ZWwgaXMgY2hhbmdlZC4KKwkJICovCisJCWlmIChhdG9taWNfZGVjX2FuZF90ZXN0KCZibGtpZi0+
aW5mbGlnaHQpICYmIGF0b21pY19yZWFkKCZibGtpZi0+ZHJhaW4pKSB7CisJCQljb21wbGV0ZSgm
YmxraWYtPmRyYWluX2NvbXBsZXRlKTsKIAkJfQotCQlmcmVlX3JlcShwZW5kaW5nX3JlcS0+Ymxr
aWYsIHBlbmRpbmdfcmVxKTsKKwkJeGVuX2Jsa2lmX3B1dChibGtpZik7CiAJfQogfQogCkBAIC0x
MjQ5LDYgKzEyNjAsNyBAQCBzdGF0aWMgaW50IGRpc3BhdGNoX3J3X2Jsb2NrX2lvKHN0cnVjdCB4
ZW5fYmxraWYgKmJsa2lmLAogCSAqIGJlbG93IChpbiAiIWJpbyIpIGlmIHdlIGFyZSBoYW5kbGlu
ZyBhIEJMS0lGX09QX0RJU0NBUkQuCiAJICovCiAJeGVuX2Jsa2lmX2dldChibGtpZik7CisJYXRv
bWljX2luYygmYmxraWYtPmluZmxpZ2h0KTsKIAogCWZvciAoaSA9IDA7IGkgPCBuc2VnOyBpKysp
IHsKIAkJd2hpbGUgKChiaW8gPT0gTlVMTCkgfHwKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sv
eGVuLWJsa2JhY2svY29tbW9uLmggYi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2NvbW1vbi5o
CmluZGV4IGY3MzNkNzYuLmU0MDMyNmEgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJs
a2JhY2svY29tbW9uLmgKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaApA
QCAtMjc4LDYgKzI3OCw3IEBAIHN0cnVjdCB4ZW5fYmxraWYgewogCS8qIGZvciBiYXJyaWVyIChk
cmFpbikgcmVxdWVzdHMgKi8KIAlzdHJ1Y3QgY29tcGxldGlvbglkcmFpbl9jb21wbGV0ZTsKIAlh
dG9taWNfdAkJZHJhaW47CisJYXRvbWljX3QJCWluZmxpZ2h0OwogCS8qIE9uZSB0aHJlYWQgcGVy
IG9uZSBibGtpZi4gKi8KIAlzdHJ1Y3QgdGFza19zdHJ1Y3QJKnhlbmJsa2Q7CiAJdW5zaWduZWQg
aW50CQl3YWl0aW5nX3JlcXM7CmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNr
L3hlbmJ1cy5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYwppbmRleCA4YWZl
ZjY3Li44NDk3M2M2IDEwMDY0NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1
cy5jCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMKQEAgLTEyOCw2ICsx
MjgsNyBAQCBzdGF0aWMgc3RydWN0IHhlbl9ibGtpZiAqeGVuX2Jsa2lmX2FsbG9jKGRvbWlkX3Qg
ZG9taWQpCiAJSU5JVF9MSVNUX0hFQUQoJmJsa2lmLT5wZXJzaXN0ZW50X3B1cmdlX2xpc3QpOwog
CWJsa2lmLT5mcmVlX3BhZ2VzX251bSA9IDA7CiAJYXRvbWljX3NldCgmYmxraWYtPnBlcnNpc3Rl
bnRfZ250X2luX3VzZSwgMCk7CisJYXRvbWljX3NldCgmYmxraWYtPmluZmxpZ2h0LCAwKTsKIAog
CUlOSVRfTElTVF9IRUFEKCZibGtpZi0+cGVuZGluZ19mcmVlKTsKIAotLSAKMS43LjcuNSAoQXBw
bGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDov
L2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDC-0004YS-VN; Tue, 04 Feb 2014 10:26:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdDB-0004YM-Dm
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:37 +0000
Received: from [85.158.143.35:9388] by server-1.bemta-4.messagelabs.com id
	BE/1C-31661-C50C0F25; Tue, 04 Feb 2014 10:26:36 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391509594!2987792!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15326 invoked from network); 4 Feb 2014 10:26:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97665478"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:22 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCw-00037o-2Y;
	Tue, 04 Feb 2014 10:26:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:12 +0100
Message-ID: <1391509575-3949-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/4] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RnJvbTogTWF0dCBSdXNodG9uIDxtcnVzaHRvbkBhbWF6b24uY29tPgoKQ3VycmVudGx5IHNocmlu
a19mcmVlX3BhZ2Vwb29sKCkgaXMgY2FsbGVkIGJlZm9yZSB0aGUgcGFnZXMgdXNlZCBmb3IKcGVy
c2lzdGVudCBncmFudHMgYXJlIHJlbGVhc2VkIHZpYSBmcmVlX3BlcnNpc3RlbnRfZ250cygpLiBU
aGlzCnJlc3VsdHMgaW4gYSBtZW1vcnkgbGVhayB3aGVuIGEgVkJEIHRoYXQgdXNlcyBwZXJzaXN0
ZW50IGdyYW50cyBpcwp0b3JuIGRvd24uCgpDYzogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxrb25y
YWQud2lsa0BvcmFjbGUuY29tPgpDYzogIlJvZ2VyIFBhdSBNb25uw6kiIDxyb2dlci5wYXVAY2l0
cml4LmNvbT4KQ2M6IElhbiBDYW1wYmVsbCA8SWFuLkNhbXBiZWxsQGNpdHJpeC5jb20+CkNjOiBE
YXZpZCBWcmFiZWwgPGRhdmlkLnZyYWJlbEBjaXRyaXguY29tPgpDYzogbGludXgta2VybmVsQHZn
ZXIua2VybmVsLm9yZwpDYzogeGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKQ2M6IEFudGhvbnkgTGln
dW9yaSA8YWxpZ3VvcmlAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogTWF0dCBSdXNodG9uIDxt
cnVzaHRvbkBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5OiBNYXR0IFdpbHNvbiA8bXN3QGFtYXpv
bi5jb20+Ci0tLQogZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgfCAgICA2ICsr
Ky0tLQogMSBmaWxlcyBjaGFuZ2VkLCAzIGluc2VydGlvbnMoKyksIDMgZGVsZXRpb25zKC0pCgpk
aWZmIC0tZ2l0IGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgYi9kcml2ZXJz
L2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYwppbmRleCA2NjIwYjczLi4zMGVmN2IzIDEwMDY0
NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYworKysgYi9kcml2ZXJz
L2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYwpAQCAtNjI1LDkgKzYyNSw2IEBAIHB1cmdlX2du
dF9saXN0OgogCQkJcHJpbnRfc3RhdHMoYmxraWYpOwogCX0KIAotCS8qIFNpbmNlIHdlIGFyZSBz
aHV0dGluZyBkb3duIHJlbW92ZSBhbGwgcGFnZXMgZnJvbSB0aGUgYnVmZmVyICovCi0Jc2hyaW5r
X2ZyZWVfcGFnZXBvb2woYmxraWYsIDAgLyogQWxsICovKTsKLQogCS8qIEZyZWUgYWxsIHBlcnNp
c3RlbnQgZ3JhbnQgcGFnZXMgKi8KIAlpZiAoIVJCX0VNUFRZX1JPT1QoJmJsa2lmLT5wZXJzaXN0
ZW50X2dudHMpKQogCQlmcmVlX3BlcnNpc3RlbnRfZ250cyhibGtpZiwgJmJsa2lmLT5wZXJzaXN0
ZW50X2dudHMsCkBAIC02MzYsNiArNjMzLDkgQEAgcHVyZ2VfZ250X2xpc3Q6CiAJQlVHX09OKCFS
Ql9FTVBUWV9ST09UKCZibGtpZi0+cGVyc2lzdGVudF9nbnRzKSk7CiAJYmxraWYtPnBlcnNpc3Rl
bnRfZ250X2MgPSAwOwogCisJLyogU2luY2Ugd2UgYXJlIHNodXR0aW5nIGRvd24gcmVtb3ZlIGFs
bCBwYWdlcyBmcm9tIHRoZSBidWZmZXIgKi8KKwlzaHJpbmtfZnJlZV9wYWdlcG9vbChibGtpZiwg
MCAvKiBBbGwgKi8pOworCiAJaWYgKGxvZ19zdGF0cykKIAkJcHJpbnRfc3RhdHMoYmxraWYpOwog
Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDA-0004Y8-78; Tue, 04 Feb 2014 10:26:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD8-0004Xg-OM
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:34 +0000
Received: from [85.158.143.35:9068] by server-1.bemta-4.messagelabs.com id
	14/0C-31661-A50C0F25; Tue, 04 Feb 2014 10:26:34 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391509592!2978400!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26354 invoked from network); 4 Feb 2014 10:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579772"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:22 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCw-00037o-PH;
	Tue, 04 Feb 2014 10:26:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:13 +0100
Message-ID: <1391509575-3949-3-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 2/4] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SSd2ZSBhdCBsZWFzdCBpZGVudGlmaWVkIHR3byBwb3NzaWJsZSBtZW1vcnkgbGVha3MgaW4gYmxr
YmFjaywgYm90aApyZWxhdGVkIHRvIHRoZSBzaHV0ZG93biBwYXRoIG9mIGEgVkJEOgoKLSBibGti
YWNrIGRvZXNuJ3Qgd2FpdCBmb3IgYW55IHBlbmRpbmcgcHVyZ2Ugd29yayB0byBmaW5pc2ggYmVm
b3JlCiAgY2xlYW5pbmcgdGhlIGxpc3Qgb2YgZnJlZV9wYWdlcy4gVGhlIHB1cmdlIHdvcmsgd2ls
bCBjYWxsCiAgcHV0X2ZyZWVfcGFnZXMgYW5kIHRodXMgd2UgbWlnaHQgZW5kIHVwIHdpdGggcGFn
ZXMgYmVpbmcgYWRkZWQgdG8KICB0aGUgZnJlZV9wYWdlcyBsaXN0IGFmdGVyIHdlIGhhdmUgZW1w
dGllZCBpdC4gRml4IHRoaXMgYnkgbWFraW5nCiAgc3VyZSB0aGVyZSdzIG5vIHBlbmRpbmcgcHVy
Z2Ugd29yayBiZWZvcmUgZXhpdGluZwogIHhlbl9ibGtpZl9zY2hlZHVsZSwgYW5kIG1vdmluZyB0
aGUgZnJlZV9wYWdlIGNsZWFudXAgY29kZSB0bwogIHhlbl9ibGtpZl9mcmVlLgotIGJsa2JhY2sg
ZG9lc24ndCB3YWl0IGZvciBwZW5kaW5nIHJlcXVlc3RzIHRvIGVuZCBiZWZvcmUgY2xlYW5pbmcK
ICBwZXJzaXN0ZW50IGdyYW50cyBhbmQgdGhlIGxpc3Qgb2YgZnJlZV9wYWdlcy4gQWdhaW4gdGhp
cyBjYW4gYWRkCiAgcGFnZXMgdG8gdGhlIGZyZWVfcGFnZXMgbGlzdCBvciBwZXJzaXN0ZW50IGdy
YW50cyB0byB0aGUKICBwZXJzaXN0ZW50X2dudHMgcmVkLWJsYWNrIHRyZWUuIEZpeGVkIGJ5IG1v
dmluZyB0aGUgcGVyc2lzdGVudAogIGdyYW50cyBhbmQgZnJlZV9wYWdlcyBjbGVhbnVwIGNvZGUg
dG8geGVuX2Jsa2lmX2ZyZWUuCgpBbHNvLCBhZGQgc29tZSBjaGVja3MgaW4geGVuX2Jsa2lmX2Zy
ZWUgdG8gbWFrZSBzdXJlIHdlIGFyZSBjbGVhbmluZwpldmVyeXRoaW5nLgoKU2lnbmVkLW9mZi1i
eTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+CkNjOiBLb25yYWQgUnpl
c3p1dGVrIFdpbGsgPGtvbnJhZC53aWxrQG9yYWNsZS5jb20+CkNjOiBEYXZpZCBWcmFiZWwgPGRh
dmlkLnZyYWJlbEBjaXRyaXguY29tPgpDYzogQm9yaXMgT3N0cm92c2t5IDxib3Jpcy5vc3Ryb3Zz
a3lAb3JhY2xlLmNvbT4KQ2M6IE1hdHQgUnVzaHRvbiA8bXJ1c2h0b25AYW1hem9uLmNvbT4KQ2M6
IE1hdHQgV2lsc29uIDxtc3dAYW1hem9uLmNvbT4KQ2M6IElhbiBDYW1wYmVsbCA8SWFuLkNhbXBi
ZWxsQGNpdHJpeC5jb20+Ci0tLQogZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMg
fCAgIDI3ICsrKysrKysrKysrKysrKysrKy0tLS0tLS0tLQogZHJpdmVycy9ibG9jay94ZW4tYmxr
YmFjay9jb21tb24uaCAgfCAgICAxICsKIGRyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVz
LmMgIHwgICAxMiArKysrKysrKysrKysKIDMgZmlsZXMgY2hhbmdlZCwgMzEgaW5zZXJ0aW9ucygr
KSwgOSBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNr
L2Jsa2JhY2suYyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jCmluZGV4IDMw
ZWY3YjMuLmRjZmU0OWYgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxr
YmFjay5jCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jCkBAIC0zNzUs
NyArMzc1LDcgQEAgc3RhdGljIHZvaWQgcHVyZ2VfcGVyc2lzdGVudF9nbnQoc3RydWN0IHhlbl9i
bGtpZiAqYmxraWYpCiAKIAlwcl9kZWJ1ZyhEUlZfUEZYICJHb2luZyB0byBwdXJnZSAldSBwZXJz
aXN0ZW50IGdyYW50c1xuIiwgbnVtX2NsZWFuKTsKIAotCUlOSVRfTElTVF9IRUFEKCZibGtpZi0+
cGVyc2lzdGVudF9wdXJnZV9saXN0KTsKKwlCVUdfT04oIWxpc3RfZW1wdHkoJmJsa2lmLT5wZXJz
aXN0ZW50X3B1cmdlX2xpc3QpKTsKIAlyb290ID0gJmJsa2lmLT5wZXJzaXN0ZW50X2dudHM7CiBw
dXJnZV9saXN0OgogCWZvcmVhY2hfZ3JhbnRfc2FmZShwZXJzaXN0ZW50X2dudCwgbiwgcm9vdCwg
bm9kZSkgewpAQCAtNjI1LDYgKzYyNSwyMyBAQCBwdXJnZV9nbnRfbGlzdDoKIAkJCXByaW50X3N0
YXRzKGJsa2lmKTsKIAl9CiAKKwkvKiBEcmFpbiBwZW5kaW5nIHB1cmdlIHdvcmsgKi8KKwlmbHVz
aF93b3JrKCZibGtpZi0+cGVyc2lzdGVudF9wdXJnZV93b3JrKTsKKworCWlmIChsb2dfc3RhdHMp
CisJCXByaW50X3N0YXRzKGJsa2lmKTsKKworCWJsa2lmLT54ZW5ibGtkID0gTlVMTDsKKwl4ZW5f
YmxraWZfcHV0KGJsa2lmKTsKKworCXJldHVybiAwOworfQorCisvKgorICogUmVtb3ZlIHBlcnNp
c3RlbnQgZ3JhbnRzIGFuZCBlbXB0eSB0aGUgcG9vbCBvZiBmcmVlIHBhZ2VzCisgKi8KK3ZvaWQg
eGVuX2Jsa2JrX2ZyZWVfY2FjaGVzKHN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmKQorewogCS8qIEZy
ZWUgYWxsIHBlcnNpc3RlbnQgZ3JhbnQgcGFnZXMgKi8KIAlpZiAoIVJCX0VNUFRZX1JPT1QoJmJs
a2lmLT5wZXJzaXN0ZW50X2dudHMpKQogCQlmcmVlX3BlcnNpc3RlbnRfZ250cyhibGtpZiwgJmJs
a2lmLT5wZXJzaXN0ZW50X2dudHMsCkBAIC02MzUsMTQgKzY1Miw2IEBAIHB1cmdlX2dudF9saXN0
OgogCiAJLyogU2luY2Ugd2UgYXJlIHNodXR0aW5nIGRvd24gcmVtb3ZlIGFsbCBwYWdlcyBmcm9t
IHRoZSBidWZmZXIgKi8KIAlzaHJpbmtfZnJlZV9wYWdlcG9vbChibGtpZiwgMCAvKiBBbGwgKi8p
OwotCi0JaWYgKGxvZ19zdGF0cykKLQkJcHJpbnRfc3RhdHMoYmxraWYpOwotCi0JYmxraWYtPnhl
bmJsa2QgPSBOVUxMOwotCXhlbl9ibGtpZl9wdXQoYmxraWYpOwotCi0JcmV0dXJuIDA7CiB9CiAK
IC8qCmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2NvbW1vbi5oIGIvZHJp
dmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaAppbmRleCA4ZDg4MDc1Li5mNzMzZDc2IDEw
MDY0NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2NvbW1vbi5oCisrKyBiL2RyaXZl
cnMvYmxvY2sveGVuLWJsa2JhY2svY29tbW9uLmgKQEAgLTM3Niw2ICszNzYsNyBAQCBpbnQgeGVu
X2Jsa2lmX3hlbmJ1c19pbml0KHZvaWQpOwogaXJxcmV0dXJuX3QgeGVuX2Jsa2lmX2JlX2ludChp
bnQgaXJxLCB2b2lkICpkZXZfaWQpOwogaW50IHhlbl9ibGtpZl9zY2hlZHVsZSh2b2lkICphcmcp
OwogaW50IHhlbl9ibGtpZl9wdXJnZV9wZXJzaXN0ZW50KHZvaWQgKmFyZyk7Cit2b2lkIHhlbl9i
bGtia19mcmVlX2NhY2hlcyhzdHJ1Y3QgeGVuX2Jsa2lmICpibGtpZik7CiAKIGludCB4ZW5fYmxr
YmtfZmx1c2hfZGlza2NhY2hlKHN0cnVjdCB4ZW5idXNfdHJhbnNhY3Rpb24geGJ0LAogCQkJICAg
ICAgc3RydWN0IGJhY2tlbmRfaW5mbyAqYmUsIGludCBzdGF0ZSk7CmRpZmYgLS1naXQgYS9kcml2
ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFj
ay94ZW5idXMuYwppbmRleCBjMjAxNGEwLi44YWZlZjY3IDEwMDY0NAotLS0gYS9kcml2ZXJzL2Js
b2NrL3hlbi1ibGtiYWNrL3hlbmJ1cy5jCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sv
eGVuYnVzLmMKQEAgLTEyNSw2ICsxMjUsNyBAQCBzdGF0aWMgc3RydWN0IHhlbl9ibGtpZiAqeGVu
X2Jsa2lmX2FsbG9jKGRvbWlkX3QgZG9taWQpCiAJYmxraWYtPnBlcnNpc3RlbnRfZ250cy5yYl9u
b2RlID0gTlVMTDsKIAlzcGluX2xvY2tfaW5pdCgmYmxraWYtPmZyZWVfcGFnZXNfbG9jayk7CiAJ
SU5JVF9MSVNUX0hFQUQoJmJsa2lmLT5mcmVlX3BhZ2VzKTsKKwlJTklUX0xJU1RfSEVBRCgmYmxr
aWYtPnBlcnNpc3RlbnRfcHVyZ2VfbGlzdCk7CiAJYmxraWYtPmZyZWVfcGFnZXNfbnVtID0gMDsK
IAlhdG9taWNfc2V0KCZibGtpZi0+cGVyc2lzdGVudF9nbnRfaW5fdXNlLCAwKTsKIApAQCAtMjU5
LDYgKzI2MCwxNyBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxraWZfZnJlZShzdHJ1Y3QgeGVuX2Jsa2lm
ICpibGtpZikKIAlpZiAoIWF0b21pY19kZWNfYW5kX3Rlc3QoJmJsa2lmLT5yZWZjbnQpKQogCQlC
VUcoKTsKIAorCS8qIFJlbW92ZSBhbGwgcGVyc2lzdGVudCBncmFudHMgYW5kIHRoZSBjYWNoZSBv
ZiBiYWxsb29uZWQgcGFnZXMuICovCisJeGVuX2Jsa2JrX2ZyZWVfY2FjaGVzKGJsa2lmKTsKKwor
CS8qIE1ha2Ugc3VyZSBldmVyeXRoaW5nIGlzIGRyYWluZWQgYmVmb3JlIHNodXR0aW5nIGRvd24g
Ki8KKwlCVUdfT04oYmxraWYtPnBlcnNpc3RlbnRfZ250X2MgIT0gMCk7CisJQlVHX09OKGF0b21p
Y19yZWFkKCZibGtpZi0+cGVyc2lzdGVudF9nbnRfaW5fdXNlKSAhPSAwKTsKKwlCVUdfT04oYmxr
aWYtPmZyZWVfcGFnZXNfbnVtICE9IDApOworCUJVR19PTighbGlzdF9lbXB0eSgmYmxraWYtPnBl
cnNpc3RlbnRfcHVyZ2VfbGlzdCkpOworCUJVR19PTighbGlzdF9lbXB0eSgmYmxraWYtPmZyZWVf
cGFnZXMpKTsKKwlCVUdfT04oIVJCX0VNUFRZX1JPT1QoJmJsa2lmLT5wZXJzaXN0ZW50X2dudHMp
KTsKKwogCS8qIENoZWNrIHRoYXQgdGhlcmUgaXMgbm8gcmVxdWVzdCBpbiB1c2UgKi8KIAlsaXN0
X2Zvcl9lYWNoX2VudHJ5X3NhZmUocmVxLCBuLCAmYmxraWYtPnBlbmRpbmdfZnJlZSwgZnJlZV9s
aXN0KSB7CiAJCWxpc3RfZGVsKCZyZXEtPmZyZWVfbGlzdCk7Ci0tIAoxLjcuNy41IChBcHBsZSBH
aXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18K
WGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlz
dHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDA-0004YF-Ij; Tue, 04 Feb 2014 10:26:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD9-0004Xl-99
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:35 +0000
Received: from [193.109.254.147:10917] by server-15.bemta-14.messagelabs.com
	id 27/CB-10839-A50C0F25; Tue, 04 Feb 2014 10:26:34 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391509590!1840463!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18889 invoked from network); 4 Feb 2014 10:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579774"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:23 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCx-00037o-TT;
	Tue, 04 Feb 2014 10:26:23 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:15 +0100
Message-ID: <1391509575-3949-5-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Matt Wilson <msw@amazon.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/4] xen-blkif: drop struct
	blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyB3YXMgd3JvbmdseSBpbnRyb2R1Y2VkIGluIGNvbW1pdCA0MDJiMjdmOSwgdGhlIG9ubHkg
ZGlmZmVyZW5jZQpiZXR3ZWVuIGJsa2lmX3JlcXVlc3Rfc2VnbWVudF9hbGlnbmVkIGFuZCBibGtp
Zl9yZXF1ZXN0X3NlZ21lbnQgaXMKdGhhdCB0aGUgZm9ybWVyIGhhcyBhIG5hbWVkIHBhZGRpbmcs
IHdoaWxlIGJvdGggc2hhcmUgdGhlIHNhbWUKbWVtb3J5IGxheW91dC4KCkFsc28gY29ycmVjdCBh
IGZldyBtaW5vciBnbGl0Y2hlcyBpbiB0aGUgZGVzY3JpcHRpb24sIGluY2x1ZGluZyBmb3IgaXQK
dG8gbm8gbG9uZ2VyIGFzc3VtZSBQQUdFX1NJWkUgPT0gNDA5Ni4KClNpZ25lZC1vZmYtYnk6IFJv
Z2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgpbRGVzY3JpcHRpb24gZml4IGJ5
IEphbiBCZXVsaWNoXQpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5j
b20+ClJlcG9ydGVkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkNjOiBLb25y
YWQgUnplc3p1dGVrIFdpbGsgPGtvbnJhZC53aWxrQG9yYWNsZS5jb20+CkNjOiBEYXZpZCBWcmFi
ZWwgPGRhdmlkLnZyYWJlbEBjaXRyaXguY29tPgpDYzogQm9yaXMgT3N0cm92c2t5IDxib3Jpcy5v
c3Ryb3Zza3lAb3JhY2xlLmNvbT4KQ2M6IE1hdHQgUnVzaHRvbiA8bXJ1c2h0b25AYW1hem9uLmNv
bT4KQ2M6IE1hdHQgV2lsc29uIDxtc3dAYW1hem9uLmNvbT4KLS0tCiBkcml2ZXJzL2Jsb2NrL3hl
bi1ibGtiYWNrL2Jsa2JhY2suYyB8ICAgIDIgKy0KIGRyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sv
Y29tbW9uLmggIHwgICAgMiArLQogZHJpdmVycy9ibG9jay94ZW4tYmxrZnJvbnQuYyAgICAgICAg
fCAgICA2ICsrKy0tLQogaW5jbHVkZS94ZW4vaW50ZXJmYWNlL2lvL2Jsa2lmLmggICAgfCAgIDM0
ICsrKysrKysrKysrKysrLS0tLS0tLS0tLS0tLS0tLS0tLS0KIDQgZmlsZXMgY2hhbmdlZCwgMTkg
aW5zZXJ0aW9ucygrKSwgMjUgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy9ibG9j
ay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgYi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2Jh
Y2suYwppbmRleCAzOTRmYTJlLi5lNjEyNjI3IDEwMDY0NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hl
bi1ibGtiYWNrL2Jsa2JhY2suYworKysgYi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2Jh
Y2suYwpAQCAtODQ3LDcgKzg0Nyw3IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2JrX3BhcnNlX2luZGly
ZWN0KHN0cnVjdCBibGtpZl9yZXF1ZXN0ICpyZXEsCiAJc3RydWN0IGdyYW50X3BhZ2UgKipwYWdl
cyA9IHBlbmRpbmdfcmVxLT5pbmRpcmVjdF9wYWdlczsKIAlzdHJ1Y3QgeGVuX2Jsa2lmICpibGtp
ZiA9IHBlbmRpbmdfcmVxLT5ibGtpZjsKIAlpbnQgaW5kaXJlY3RfZ3JlZnMsIHJjLCBuLCBuc2Vn
LCBpOwotCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCAqc2VnbWVudHMgPSBO
VUxMOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQgKnNlZ21lbnRzID0gTlVMTDsKIAog
CW5zZWcgPSBwZW5kaW5nX3JlcS0+bnJfcGFnZXM7CiAJaW5kaXJlY3RfZ3JlZnMgPSBJTkRJUkVD
VF9QQUdFUyhuc2VnKTsKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svY29t
bW9uLmggYi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2NvbW1vbi5oCmluZGV4IGU0MDMyNmEu
LjllYjM0ZTIgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svY29tbW9uLmgK
KysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaApAQCAtNTcsNyArNTcsNyBA
QAogI2RlZmluZSBNQVhfSU5ESVJFQ1RfU0VHTUVOVFMgMjU2CiAKICNkZWZpbmUgU0VHU19QRVJf
SU5ESVJFQ1RfRlJBTUUgXAotCShQQUdFX1NJWkUvc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1ZXN0
X3NlZ21lbnRfYWxpZ25lZCkpCisJKFBBR0VfU0laRS9zaXplb2Yoc3RydWN0IGJsa2lmX3JlcXVl
c3Rfc2VnbWVudCkpCiAjZGVmaW5lIE1BWF9JTkRJUkVDVF9QQUdFUyBcCiAJKChNQVhfSU5ESVJF
Q1RfU0VHTUVOVFMgKyBTRUdTX1BFUl9JTkRJUkVDVF9GUkFNRSAtIDEpL1NFR1NfUEVSX0lORElS
RUNUX0ZSQU1FKQogI2RlZmluZSBJTkRJUkVDVF9QQUdFUyhfc2VncykgXApkaWZmIC0tZ2l0IGEv
ZHJpdmVycy9ibG9jay94ZW4tYmxrZnJvbnQuYyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2Zyb250
LmMKaW5kZXggYzRhNGM5MC4uN2QwOWRmYyAxMDA2NDQKLS0tIGEvZHJpdmVycy9ibG9jay94ZW4t
YmxrZnJvbnQuYworKysgYi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtmcm9udC5jCkBAIC0xNjIsNyAr
MTYyLDcgQEAgc3RhdGljIERFRklORV9TUElOTE9DSyhtaW5vcl9sb2NrKTsKICNkZWZpbmUgREVW
X05BTUUJInh2ZCIJLyogbmFtZSBpbiAvZGV2ICovCiAKICNkZWZpbmUgU0VHU19QRVJfSU5ESVJF
Q1RfRlJBTUUgXAotCShQQUdFX1NJWkUvc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21l
bnRfYWxpZ25lZCkpCisJKFBBR0VfU0laRS9zaXplb2Yoc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2Vn
bWVudCkpCiAjZGVmaW5lIElORElSRUNUX0dSRUZTKF9zZWdzKSBcCiAJKChfc2VncyArIFNFR1Nf
UEVSX0lORElSRUNUX0ZSQU1FIC0gMSkvU0VHU19QRVJfSU5ESVJFQ1RfRlJBTUUpCiAKQEAgLTM5
Myw3ICszOTMsNyBAQCBzdGF0aWMgaW50IGJsa2lmX3F1ZXVlX3JlcXVlc3Qoc3RydWN0IHJlcXVl
c3QgKnJlcSkKIAl1bnNpZ25lZCBsb25nIGlkOwogCXVuc2lnbmVkIGludCBmc2VjdCwgbHNlY3Q7
CiAJaW50IGksIHJlZiwgbjsKLQlzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50X2FsaWduZWQg
KnNlZ21lbnRzID0gTlVMTDsKKwlzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50ICpzZWdtZW50
cyA9IE5VTEw7CiAKIAkvKgogCSAqIFVzZWQgdG8gc3RvcmUgaWYgd2UgYXJlIGFibGUgdG8gcXVl
dWUgdGhlIHJlcXVlc3QgYnkganVzdCB1c2luZwpAQCAtNTUwLDcgKzU1MCw3IEBAIHN0YXRpYyBp
bnQgYmxraWZfcXVldWVfcmVxdWVzdChzdHJ1Y3QgcmVxdWVzdCAqcmVxKQogCQkJfSBlbHNlIHsK
IAkJCQluID0gaSAlIFNFR1NfUEVSX0lORElSRUNUX0ZSQU1FOwogCQkJCXNlZ21lbnRzW25dID0K
LQkJCQkJKHN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCkgeworCQkJCQkoc3Ry
dWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCkgewogCQkJCQkJCS5ncmVmICAgICAgID0gcmVmLAog
CQkJCQkJCS5maXJzdF9zZWN0ID0gZnNlY3QsCiAJCQkJCQkJLmxhc3Rfc2VjdCAgPSBsc2VjdCB9
OwpkaWZmIC0tZ2l0IGEvaW5jbHVkZS94ZW4vaW50ZXJmYWNlL2lvL2Jsa2lmLmggYi9pbmNsdWRl
L3hlbi9pbnRlcmZhY2UvaW8vYmxraWYuaAppbmRleCBhZTY2NWFjLi4zMmVjMDVhIDEwMDY0NAot
LS0gYS9pbmNsdWRlL3hlbi9pbnRlcmZhY2UvaW8vYmxraWYuaAorKysgYi9pbmNsdWRlL3hlbi9p
bnRlcmZhY2UvaW8vYmxraWYuaApAQCAtMTEzLDEzICsxMTMsMTMgQEAgdHlwZWRlZiB1aW50NjRf
dCBibGtpZl9zZWN0b3JfdDsKICAqIGl0J3MgbGVzcyB0aGFuIHRoZSBudW1iZXIgcHJvdmlkZWQg
YnkgdGhlIGJhY2tlbmQuIFRoZSBpbmRpcmVjdF9ncmVmcyBmaWVsZAogICogaW4gYmxraWZfcmVx
dWVzdF9pbmRpcmVjdCBzaG91bGQgYmUgZmlsbGVkIGJ5IHRoZSBmcm9udGVuZCB3aXRoIHRoZQog
ICogZ3JhbnQgcmVmZXJlbmNlcyBvZiB0aGUgcGFnZXMgdGhhdCBhcmUgaG9sZGluZyB0aGUgaW5k
aXJlY3Qgc2VnbWVudHMuCi0gKiBUaGlzIHBhZ2VzIGFyZSBmaWxsZWQgd2l0aCBhbiBhcnJheSBv
ZiBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZAotICogdGhhdCBob2xkIHRoZSBpbmZvcm1h
dGlvbiBhYm91dCB0aGUgc2VnbWVudHMuIFRoZSBudW1iZXIgb2YgaW5kaXJlY3QKLSAqIHBhZ2Vz
IHRvIHVzZSBpcyBkZXRlcm1pbmVkIGJ5IHRoZSBtYXhpbXVtIG51bWJlciBvZiBzZWdtZW50cwot
ICogYSBpbmRpcmVjdCByZXF1ZXN0IGNvbnRhaW5zLiBFdmVyeSBpbmRpcmVjdCBwYWdlIGNhbiBj
b250YWluIGEgbWF4aW11bQotICogb2YgNTEyIHNlZ21lbnRzIChQQUdFX1NJWkUvc2l6ZW9mKGJs
a2lmX3JlcXVlc3Rfc2VnbWVudF9hbGlnbmVkKSksCi0gKiBzbyB0byBjYWxjdWxhdGUgdGhlIG51
bWJlciBvZiBpbmRpcmVjdCBwYWdlcyB0byB1c2Ugd2UgaGF2ZSB0byBkbwotICogY2VpbChpbmRp
cmVjdF9zZWdtZW50cy81MTIpLgorICogVGhlc2UgcGFnZXMgYXJlIGZpbGxlZCB3aXRoIGFuIGFy
cmF5IG9mIGJsa2lmX3JlcXVlc3Rfc2VnbWVudCB0aGF0IGhvbGQgdGhlCisgKiBpbmZvcm1hdGlv
biBhYm91dCB0aGUgc2VnbWVudHMuIFRoZSBudW1iZXIgb2YgaW5kaXJlY3QgcGFnZXMgdG8gdXNl
IGlzCisgKiBkZXRlcm1pbmVkIGJ5IHRoZSBudW1iZXIgb2Ygc2VnbWVudHMgYW4gaW5kaXJlY3Qg
cmVxdWVzdCBjb250YWlucy4gRXZlcnkKKyAqIGluZGlyZWN0IHBhZ2UgY2FuIGNvbnRhaW4gYSBt
YXhpbXVtIG9mCisgKiAoUEFHRV9TSVpFIC8gc2l6ZW9mKHN0cnVjdCBibGtpZl9yZXF1ZXN0X3Nl
Z21lbnQpKSBzZWdtZW50cywgc28gdG8KKyAqIGNhbGN1bGF0ZSB0aGUgbnVtYmVyIG9mIGluZGly
ZWN0IHBhZ2VzIHRvIHVzZSB3ZSBoYXZlIHRvIGRvCisgKiBjZWlsKGluZGlyZWN0X3NlZ21lbnRz
IC8gKFBBR0VfU0laRSAvIHNpemVvZihzdHJ1Y3QgYmxraWZfcmVxdWVzdF9zZWdtZW50KSkpLgog
ICoKICAqIElmIGEgYmFja2VuZCBkb2VzIG5vdCByZWNvZ25pemUgQkxLSUZfT1BfSU5ESVJFQ1Qs
IGl0IHNob3VsZCAqbm90KgogICogY3JlYXRlIHRoZSAiZmVhdHVyZS1tYXgtaW5kaXJlY3Qtc2Vn
bWVudHMiIG5vZGUhCkBAIC0xMzUsMTMgKzEzNSwxMiBAQCB0eXBlZGVmIHVpbnQ2NF90IGJsa2lm
X3NlY3Rvcl90OwogCiAjZGVmaW5lIEJMS0lGX01BWF9JTkRJUkVDVF9QQUdFU19QRVJfUkVRVUVT
VCA4CiAKLXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCB7Ci0JZ3JhbnRfcmVm
X3QgZ3JlZjsgICAgICAgIC8qIHJlZmVyZW5jZSB0byBJL08gYnVmZmVyIGZyYW1lICAgICAgICAq
LwotCS8qIEBmaXJzdF9zZWN0OiBmaXJzdCBzZWN0b3IgaW4gZnJhbWUgdG8gdHJhbnNmZXIgKGlu
Y2x1c2l2ZSkuICAgKi8KLQkvKiBAbGFzdF9zZWN0OiBsYXN0IHNlY3RvciBpbiBmcmFtZSB0byB0
cmFuc2ZlciAoaW5jbHVzaXZlKS4gICAgICovCi0JdWludDhfdCAgICAgZmlyc3Rfc2VjdCwgbGFz
dF9zZWN0OwotCXVpbnQxNl90ICAgIF9wYWQ7IC8qIHBhZGRpbmcgdG8gbWFrZSBpdCA4IGJ5dGVz
LCBzbyBpdCdzIGNhY2hlLWFsaWduZWQgKi8KLX0gX19hdHRyaWJ1dGVfXygoX19wYWNrZWRfXykp
Oworc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCB7CisJCWdyYW50X3JlZl90IGdyZWY7ICAg
ICAgICAvKiByZWZlcmVuY2UgdG8gSS9PIGJ1ZmZlciBmcmFtZSAgICAgICAgKi8KKwkJLyogQGZp
cnN0X3NlY3Q6IGZpcnN0IHNlY3RvciBpbiBmcmFtZSB0byB0cmFuc2ZlciAoaW5jbHVzaXZlKS4g
ICAqLworCQkvKiBAbGFzdF9zZWN0OiBsYXN0IHNlY3RvciBpbiBmcmFtZSB0byB0cmFuc2ZlciAo
aW5jbHVzaXZlKS4gICAgICovCisJCXVpbnQ4X3QgICAgIGZpcnN0X3NlY3QsIGxhc3Rfc2VjdDsK
K307CiAKIHN0cnVjdCBibGtpZl9yZXF1ZXN0X3J3IHsKIAl1aW50OF90ICAgICAgICBucl9zZWdt
ZW50czsgIC8qIG51bWJlciBvZiBzZWdtZW50cyAgICAgICAgICAgICAgICAgICAqLwpAQCAtMTUx
LDEyICsxNTAsNyBAQCBzdHJ1Y3QgYmxraWZfcmVxdWVzdF9ydyB7CiAjZW5kaWYKIAl1aW50NjRf
dCAgICAgICBpZDsgICAgICAgICAgIC8qIHByaXZhdGUgZ3Vlc3QgdmFsdWUsIGVjaG9lZCBpbiBy
ZXNwICAqLwogCWJsa2lmX3NlY3Rvcl90IHNlY3Rvcl9udW1iZXI7Lyogc3RhcnQgc2VjdG9yIGlk
eCBvbiBkaXNrIChyL3cgb25seSkgICovCi0Jc3RydWN0IGJsa2lmX3JlcXVlc3Rfc2VnbWVudCB7
Ci0JCWdyYW50X3JlZl90IGdyZWY7ICAgICAgICAvKiByZWZlcmVuY2UgdG8gSS9PIGJ1ZmZlciBm
cmFtZSAgICAgICAgKi8KLQkJLyogQGZpcnN0X3NlY3Q6IGZpcnN0IHNlY3RvciBpbiBmcmFtZSB0
byB0cmFuc2ZlciAoaW5jbHVzaXZlKS4gICAqLwotCQkvKiBAbGFzdF9zZWN0OiBsYXN0IHNlY3Rv
ciBpbiBmcmFtZSB0byB0cmFuc2ZlciAoaW5jbHVzaXZlKS4gICAgICovCi0JCXVpbnQ4X3QgICAg
IGZpcnN0X3NlY3QsIGxhc3Rfc2VjdDsKLQl9IHNlZ1tCTEtJRl9NQVhfU0VHTUVOVFNfUEVSX1JF
UVVFU1RdOworCXN0cnVjdCBibGtpZl9yZXF1ZXN0X3NlZ21lbnQgc2VnW0JMS0lGX01BWF9TRUdN
RU5UU19QRVJfUkVRVUVTVF07CiB9IF9fYXR0cmlidXRlX18oKF9fcGFja2VkX18pKTsKIAogc3Ry
dWN0IGJsa2lmX3JlcXVlc3RfZGlzY2FyZCB7Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoK
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVs
IG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9y
Zy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdDE-0004Z1-HE; Tue, 04 Feb 2014 10:26:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdDD-0004YN-EJ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 10:26:39 +0000
Received: from [85.158.137.68:10313] by server-16.bemta-3.messagelabs.com id
	6A/2B-29917-D50C0F25; Tue, 04 Feb 2014 10:26:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391509594!13234845!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19830 invoked from network); 4 Feb 2014 10:26:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97665478"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:22 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCw-00037o-2Y;
	Tue, 04 Feb 2014 10:26:22 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:12 +0100
Message-ID: <1391509575-3949-2-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org,
	Matt Rushton <mrushton@amazon.com>, David Vrabel <david.vrabel@citrix.com>,
	Anthony Liguori <aliguori@amazon.com>, Matt Wilson <msw@amazon.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 1/4] xen-blkback: fix memory leak when
	persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RnJvbTogTWF0dCBSdXNodG9uIDxtcnVzaHRvbkBhbWF6b24uY29tPgoKQ3VycmVudGx5IHNocmlu
a19mcmVlX3BhZ2Vwb29sKCkgaXMgY2FsbGVkIGJlZm9yZSB0aGUgcGFnZXMgdXNlZCBmb3IKcGVy
c2lzdGVudCBncmFudHMgYXJlIHJlbGVhc2VkIHZpYSBmcmVlX3BlcnNpc3RlbnRfZ250cygpLiBU
aGlzCnJlc3VsdHMgaW4gYSBtZW1vcnkgbGVhayB3aGVuIGEgVkJEIHRoYXQgdXNlcyBwZXJzaXN0
ZW50IGdyYW50cyBpcwp0b3JuIGRvd24uCgpDYzogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxrb25y
YWQud2lsa0BvcmFjbGUuY29tPgpDYzogIlJvZ2VyIFBhdSBNb25uw6kiIDxyb2dlci5wYXVAY2l0
cml4LmNvbT4KQ2M6IElhbiBDYW1wYmVsbCA8SWFuLkNhbXBiZWxsQGNpdHJpeC5jb20+CkNjOiBE
YXZpZCBWcmFiZWwgPGRhdmlkLnZyYWJlbEBjaXRyaXguY29tPgpDYzogbGludXgta2VybmVsQHZn
ZXIua2VybmVsLm9yZwpDYzogeGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKQ2M6IEFudGhvbnkgTGln
dW9yaSA8YWxpZ3VvcmlAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogTWF0dCBSdXNodG9uIDxt
cnVzaHRvbkBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5OiBNYXR0IFdpbHNvbiA8bXN3QGFtYXpv
bi5jb20+Ci0tLQogZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgfCAgICA2ICsr
Ky0tLQogMSBmaWxlcyBjaGFuZ2VkLCAzIGluc2VydGlvbnMoKyksIDMgZGVsZXRpb25zKC0pCgpk
aWZmIC0tZ2l0IGEvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNrLmMgYi9kcml2ZXJz
L2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYwppbmRleCA2NjIwYjczLi4zMGVmN2IzIDEwMDY0
NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYworKysgYi9kcml2ZXJz
L2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYwpAQCAtNjI1LDkgKzYyNSw2IEBAIHB1cmdlX2du
dF9saXN0OgogCQkJcHJpbnRfc3RhdHMoYmxraWYpOwogCX0KIAotCS8qIFNpbmNlIHdlIGFyZSBz
aHV0dGluZyBkb3duIHJlbW92ZSBhbGwgcGFnZXMgZnJvbSB0aGUgYnVmZmVyICovCi0Jc2hyaW5r
X2ZyZWVfcGFnZXBvb2woYmxraWYsIDAgLyogQWxsICovKTsKLQogCS8qIEZyZWUgYWxsIHBlcnNp
c3RlbnQgZ3JhbnQgcGFnZXMgKi8KIAlpZiAoIVJCX0VNUFRZX1JPT1QoJmJsa2lmLT5wZXJzaXN0
ZW50X2dudHMpKQogCQlmcmVlX3BlcnNpc3RlbnRfZ250cyhibGtpZiwgJmJsa2lmLT5wZXJzaXN0
ZW50X2dudHMsCkBAIC02MzYsNiArNjMzLDkgQEAgcHVyZ2VfZ250X2xpc3Q6CiAJQlVHX09OKCFS
Ql9FTVBUWV9ST09UKCZibGtpZi0+cGVyc2lzdGVudF9nbnRzKSk7CiAJYmxraWYtPnBlcnNpc3Rl
bnRfZ250X2MgPSAwOwogCisJLyogU2luY2Ugd2UgYXJlIHNodXR0aW5nIGRvd24gcmVtb3ZlIGFs
bCBwYWdlcyBmcm9tIHRoZSBidWZmZXIgKi8KKwlzaHJpbmtfZnJlZV9wYWdlcG9vbChibGtpZiwg
MCAvKiBBbGwgKi8pOworCiAJaWYgKGxvZ19zdGF0cykKIAkJcHJpbnRfc3RhdHMoYmxraWYpOwog
Ci0tIAoxLjcuNy41IChBcHBsZSBHaXQtMjYpCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdD9-0004Y1-QJ; Tue, 04 Feb 2014 10:26:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD8-0004Xf-Br
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:34 +0000
Received: from [193.109.254.147:29492] by server-11.bemta-14.messagelabs.com
	id F2/0A-24604-950C0F25; Tue, 04 Feb 2014 10:26:33 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391509590!1840463!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18800 invoked from network); 4 Feb 2014 10:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579771"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:21 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCv-00037o-Eg;
	Tue, 04 Feb 2014 10:26:21 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:11 +0100
Message-ID: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series contains blkback bug fixes for memory leaks (patches 1 and 
2) and a shutdown race (patch 3). Patch 4 removes 
blkif_request_segment_aligned, whose memory layout is identical to 
blkif_request_segment's, so it introduces no functional change.

All patches should be backported to stable branches; although the last 
one is not a functional change, it is still worth having for code 
correctness.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdD9-0004Xq-EV; Tue, 04 Feb 2014 10:26:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAdD7-0004Xa-LA
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:26:33 +0000
Received: from [193.109.254.147:29416] by server-10.bemta-14.messagelabs.com
	id 5C/65-10711-850C0F25; Tue, 04 Feb 2014 10:26:32 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391509590!1840463!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18626 invoked from network); 4 Feb 2014 10:26:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:26:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="99579773"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 10:26:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 05:26:23 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WAdCx-00037o-BI;
	Tue, 04 Feb 2014 10:26:23 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Date: Tue, 4 Feb 2014 11:26:14 +0100
Message-ID: <1391509575-3949-4-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, Matt Rushton <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>, Matt Wilson <msw@amazon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SW50cm9kdWNlIGEgbmV3IHZhcmlhYmxlIHRvIGtlZXAgdHJhY2sgb2YgdGhlIG51bWJlciBvZiBp
bi1mbGlnaHQKcmVxdWVzdHMuIFdlIG5lZWQgdG8gbWFrZSBzdXJlIHRoYXQgd2hlbiB4ZW5fYmxr
aWZfcHV0IGlzIGNhbGxlZCB0aGUKcmVxdWVzdCBoYXMgYWxyZWFkeSBiZWVuIGZyZWVkIGFuZCB3
ZSBjYW4gc2FmZWx5IGZyZWUgeGVuX2Jsa2lmLCB3aGljaAp3YXMgbm90IHRoZSBjYXNlIGJlZm9y
ZS4KClNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29t
PgpDYzogS29ucmFkIFJ6ZXN6dXRlayBXaWxrIDxrb25yYWQud2lsa0BvcmFjbGUuY29tPgpDYzog
RGF2aWQgVnJhYmVsIDxkYXZpZC52cmFiZWxAY2l0cml4LmNvbT4KQ2M6IEJvcmlzIE9zdHJvdnNr
eSA8Ym9yaXMub3N0cm92c2t5QG9yYWNsZS5jb20+CkNjOiBNYXR0IFJ1c2h0b24gPG1ydXNodG9u
QGFtYXpvbi5jb20+CkNjOiBNYXR0IFdpbHNvbiA8bXN3QGFtYXpvbi5jb20+CkNjOiBJYW4gQ2Ft
cGJlbGwgPElhbi5DYW1wYmVsbEBjaXRyaXguY29tPgotLS0KIGRyaXZlcnMvYmxvY2sveGVuLWJs
a2JhY2svYmxrYmFjay5jIHwgICAzMiArKysrKysrKysrKysrKysrKysrKysrLS0tLS0tLS0tLQog
ZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaCAgfCAgICAxICsKIGRyaXZlcnMvYmxv
Y2sveGVuLWJsa2JhY2sveGVuYnVzLmMgIHwgICAgMSArCiAzIGZpbGVzIGNoYW5nZWQsIDI0IGlu
c2VydGlvbnMoKyksIDEwIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sv
eGVuLWJsa2JhY2svYmxrYmFjay5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNr
LmMKaW5kZXggZGNmZTQ5Zi4uMzk0ZmEyZSAxMDA2NDQKLS0tIGEvZHJpdmVycy9ibG9jay94ZW4t
YmxrYmFjay9ibGtiYWNrLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9ibGtiYWNr
LmMKQEAgLTk0Myw5ICs5NDMsNyBAQCBzdGF0aWMgdm9pZCB4ZW5fYmxrX2RyYWluX2lvKHN0cnVj
dCB4ZW5fYmxraWYgKmJsa2lmKQogewogCWF0b21pY19zZXQoJmJsa2lmLT5kcmFpbiwgMSk7CiAJ
ZG8gewotCQkvKiBUaGUgaW5pdGlhbCB2YWx1ZSBpcyBvbmUsIGFuZCBvbmUgcmVmY250IHRha2Vu
IGF0IHRoZQotCQkgKiBzdGFydCBvZiB0aGUgeGVuX2Jsa2lmX3NjaGVkdWxlIHRocmVhZC4gKi8K
LQkJaWYgKGF0b21pY19yZWFkKCZibGtpZi0+cmVmY250KSA8PSAyKQorCQlpZiAoYXRvbWljX3Jl
YWQoJmJsa2lmLT5pbmZsaWdodCkgPT0gMCkKIAkJCWJyZWFrOwogCQl3YWl0X2Zvcl9jb21wbGV0
aW9uX2ludGVycnVwdGlibGVfdGltZW91dCgKIAkJCQkmYmxraWYtPmRyYWluX2NvbXBsZXRlLCBI
Wik7CkBAIC05ODUsMTcgKzk4MywzMCBAQCBzdGF0aWMgdm9pZCBfX2VuZF9ibG9ja19pb19vcChz
dHJ1Y3QgcGVuZGluZ19yZXEgKnBlbmRpbmdfcmVxLCBpbnQgZXJyb3IpCiAJICogdGhlIHByb3Bl
ciByZXNwb25zZSBvbiB0aGUgcmluZy4KIAkgKi8KIAlpZiAoYXRvbWljX2RlY19hbmRfdGVzdCgm
cGVuZGluZ19yZXEtPnBlbmRjbnQpKSB7Ci0JCXhlbl9ibGtia191bm1hcChwZW5kaW5nX3JlcS0+
YmxraWYsCisJCXN0cnVjdCB4ZW5fYmxraWYgKmJsa2lmID0gcGVuZGluZ19yZXEtPmJsa2lmOwor
CisJCXhlbl9ibGtia191bm1hcChibGtpZiwKIAkJICAgICAgICAgICAgICAgIHBlbmRpbmdfcmVx
LT5zZWdtZW50cywKIAkJICAgICAgICAgICAgICAgIHBlbmRpbmdfcmVxLT5ucl9wYWdlcyk7Ci0J
CW1ha2VfcmVzcG9uc2UocGVuZGluZ19yZXEtPmJsa2lmLCBwZW5kaW5nX3JlcS0+aWQsCisJCW1h
a2VfcmVzcG9uc2UoYmxraWYsIHBlbmRpbmdfcmVxLT5pZCwKIAkJCSAgICAgIHBlbmRpbmdfcmVx
LT5vcGVyYXRpb24sIHBlbmRpbmdfcmVxLT5zdGF0dXMpOwotCQl4ZW5fYmxraWZfcHV0KHBlbmRp
bmdfcmVxLT5ibGtpZik7Ci0JCWlmIChhdG9taWNfcmVhZCgmcGVuZGluZ19yZXEtPmJsa2lmLT5y
ZWZjbnQpIDw9IDIpIHsKLQkJCWlmIChhdG9taWNfcmVhZCgmcGVuZGluZ19yZXEtPmJsa2lmLT5k
cmFpbikpCi0JCQkJY29tcGxldGUoJnBlbmRpbmdfcmVxLT5ibGtpZi0+ZHJhaW5fY29tcGxldGUp
OworCQlmcmVlX3JlcShibGtpZiwgcGVuZGluZ19yZXEpOworCQkvKgorCQkgKiBNYWtlIHN1cmUg
dGhlIHJlcXVlc3QgaXMgZnJlZWQgYmVmb3JlIHJlbGVhc2luZyBibGtpZiwKKwkJICogb3IgdGhl
cmUgY291bGQgYmUgYSByYWNlIGJldHdlZW4gZnJlZV9yZXEgYW5kIHRoZQorCQkgKiBjbGVhbnVw
IGRvbmUgaW4geGVuX2Jsa2lmX2ZyZWUgZHVyaW5nIHNodXRkb3duLgorCQkgKgorCQkgKiBOQjog
VGhlIGZhY3QgdGhhdCB3ZSBtaWdodCB0cnkgdG8gd2FrZSB1cCBwZW5kaW5nX2ZyZWVfd3EKKwkJ
ICogYmVmb3JlIGRyYWluX2NvbXBsZXRlIChpbiBjYXNlIHRoZXJlJ3MgYSBkcmFpbiBnb2luZyBv
bikKKwkJICogaXQncyBub3QgYSBwcm9ibGVtIHdpdGggb3VyIGN1cnJlbnQgaW1wbGVtZW50YXRp
b24KKwkJICogYmVjYXVzZSB3ZSBjYW4gYXNzdXJlIHRoZXJlJ3Mgbm8gdGhyZWFkIHdhaXRpbmcg
b24KKwkJICogcGVuZGluZ19mcmVlX3dxIGlmIHRoZXJlJ3MgYSBkcmFpbiBnb2luZyBvbiwgYnV0
IGl0IGhhcworCQkgKiB0byBiZSB0YWtlbiBpbnRvIGFjY291bnQgaWYgdGhlIGN1cnJlbnQgbW9k
ZWwgaXMgY2hhbmdlZC4KKwkJICovCisJCWlmIChhdG9taWNfZGVjX2FuZF90ZXN0KCZibGtpZi0+
aW5mbGlnaHQpICYmIGF0b21pY19yZWFkKCZibGtpZi0+ZHJhaW4pKSB7CisJCQljb21wbGV0ZSgm
YmxraWYtPmRyYWluX2NvbXBsZXRlKTsKIAkJfQotCQlmcmVlX3JlcShwZW5kaW5nX3JlcS0+Ymxr
aWYsIHBlbmRpbmdfcmVxKTsKKwkJeGVuX2Jsa2lmX3B1dChibGtpZik7CiAJfQogfQogCkBAIC0x
MjQ5LDYgKzEyNjAsNyBAQCBzdGF0aWMgaW50IGRpc3BhdGNoX3J3X2Jsb2NrX2lvKHN0cnVjdCB4
ZW5fYmxraWYgKmJsa2lmLAogCSAqIGJlbG93IChpbiAiIWJpbyIpIGlmIHdlIGFyZSBoYW5kbGlu
ZyBhIEJMS0lGX09QX0RJU0NBUkQuCiAJICovCiAJeGVuX2Jsa2lmX2dldChibGtpZik7CisJYXRv
bWljX2luYygmYmxraWYtPmluZmxpZ2h0KTsKIAogCWZvciAoaSA9IDA7IGkgPCBuc2VnOyBpKysp
IHsKIAkJd2hpbGUgKChiaW8gPT0gTlVMTCkgfHwKZGlmZiAtLWdpdCBhL2RyaXZlcnMvYmxvY2sv
eGVuLWJsa2JhY2svY29tbW9uLmggYi9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2NvbW1vbi5o
CmluZGV4IGY3MzNkNzYuLmU0MDMyNmEgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJs
a2JhY2svY29tbW9uLmgKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay9jb21tb24uaApA
QCAtMjc4LDYgKzI3OCw3IEBAIHN0cnVjdCB4ZW5fYmxraWYgewogCS8qIGZvciBiYXJyaWVyIChk
cmFpbikgcmVxdWVzdHMgKi8KIAlzdHJ1Y3QgY29tcGxldGlvbglkcmFpbl9jb21wbGV0ZTsKIAlh
dG9taWNfdAkJZHJhaW47CisJYXRvbWljX3QJCWluZmxpZ2h0OwogCS8qIE9uZSB0aHJlYWQgcGVy
IG9uZSBibGtpZi4gKi8KIAlzdHJ1Y3QgdGFza19zdHJ1Y3QJKnhlbmJsa2Q7CiAJdW5zaWduZWQg
aW50CQl3YWl0aW5nX3JlcXM7CmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNr
L3hlbmJ1cy5jIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMuYwppbmRleCA4YWZl
ZjY3Li44NDk3M2M2IDEwMDY0NAotLS0gYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL3hlbmJ1
cy5jCisrKyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMKQEAgLTEyOCw2ICsx
MjgsNyBAQCBzdGF0aWMgc3RydWN0IHhlbl9ibGtpZiAqeGVuX2Jsa2lmX2FsbG9jKGRvbWlkX3Qg
ZG9taWQpCiAJSU5JVF9MSVNUX0hFQUQoJmJsa2lmLT5wZXJzaXN0ZW50X3B1cmdlX2xpc3QpOwog
CWJsa2lmLT5mcmVlX3BhZ2VzX251bSA9IDA7CiAJYXRvbWljX3NldCgmYmxraWYtPnBlcnNpc3Rl
bnRfZ250X2luX3VzZSwgMCk7CisJYXRvbWljX3NldCgmYmxraWYtPmluZmxpZ2h0LCAwKTsKIAog
CUlOSVRfTElTVF9IRUFEKCZibGtpZi0+cGVuZGluZ19mcmVlKTsKIAotLSAKMS43LjcuNSAoQXBw
bGUgR2l0LTI2KQoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDov
L2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:30:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:30:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdGp-0005Pr-7K; Tue, 04 Feb 2014 10:30:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAdGn-0005PW-Ay
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:30:21 +0000
Received: from [85.158.139.211:49917] by server-13.bemta-5.messagelabs.com id
	35/6A-18801-C31C0F25; Tue, 04 Feb 2014 10:30:20 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391509817!1526170!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5251 invoked from network); 4 Feb 2014 10:30:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:30:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97666866"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:30:16 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	05:30:16 -0500
Message-ID: <52F0C135.5050300@citrix.com>
Date: Tue, 4 Feb 2014 10:30:13 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<1391509575-3949-2-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1391509575-3949-2-git-send-email-roger.pau@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org, Matt Rushton <mrushton@amazon.com>,
	Anthony Liguori <aliguori@amazon.com>,
	xen-devel@lists.xenproject.org, Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] [PATCH v2 1/4] xen-blkback: fix memory leak when
 persistent grants are used
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 10:26, Roger Pau Monne wrote:
> From: Matt Rushton <mrushton@amazon.com>
> 
> Currently shrink_free_pagepool() is called before the pages used for
> persistent grants are released via free_persistent_gnts(). This
> results in a memory leak when a VBD that uses persistent grants is
> torn down.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:32:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdJD-0005kl-BG; Tue, 04 Feb 2014 10:32:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAdJB-0005kd-BC
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:32:49 +0000
Received: from [85.158.143.35:8583] by server-1.bemta-4.messagelabs.com id
	85/4B-31661-0D1C0F25; Tue, 04 Feb 2014 10:32:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391509966!2962155!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2108 invoked from network); 4 Feb 2014 10:32:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:32:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97667455"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:32:46 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	05:32:46 -0500
Message-ID: <52F0C1CC.6040706@citrix.com>
Date: Tue, 4 Feb 2014 10:32:44 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<1391509575-3949-3-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1391509575-3949-3-git-send-email-roger.pau@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, Matt Wilson <msw@amazon.com>,
	xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH v2 2/4] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 10:26, Roger Pau Monne wrote:
> I've at least identified two possible memory leaks in blkback, both
> related to the shutdown path of a VBD:
> 
> - blkback doesn't wait for any pending purge work to finish before
>   cleaning the list of free_pages. The purge work will call
>   put_free_pages and thus we might end up with pages being added to
>   the free_pages list after we have emptied it. Fix this by making
>   sure there's no pending purge work before exiting
>   xen_blkif_schedule, and moving the free_page cleanup code to
>   xen_blkif_free.
> - blkback doesn't wait for pending requests to end before cleaning
>   persistent grants and the list of free_pages. Again this can add
>   pages to the free_pages list or persistent grants to the
>   persistent_gnts red-black tree. Fixed by moving the persistent
>   grants and free_pages cleanup code to xen_blkif_free.
> 
> Also, add some checks in xen_blkif_free to make sure we are cleaning
> everything.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

> +	if (log_stats)
> +		print_stats(blkif);

Unrelated to this series, but this log_stats stuff can be removed.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:34:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdKh-0005uo-Qu; Tue, 04 Feb 2014 10:34:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAdKg-0005uM-92
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:34:22 +0000
Received: from [85.158.139.211:36411] by server-9.bemta-5.messagelabs.com id
	04/AE-11237-D22C0F25; Tue, 04 Feb 2014 10:34:21 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391510059!1504682!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5581 invoked from network); 4 Feb 2014 10:34:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:34:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97668226"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:34:18 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	05:34:18 -0500
Message-ID: <52F0C228.2080901@citrix.com>
Date: Tue, 4 Feb 2014 10:34:16 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<1391509575-3949-4-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1391509575-3949-4-git-send-email-roger.pau@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>, Matt Wilson <msw@amazon.com>,
	xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 10:26, Roger Pau Monne wrote:
> Introduce a new variable to keep track of the number of in-flight
> requests. We need to make sure that when xen_blkif_put is called the
> request has already been freed and we can safely free xen_blkif, which
> was not the case before.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 10:35:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 10:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdLl-00061p-9t; Tue, 04 Feb 2014 10:35:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAdLk-00061e-41
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 10:35:28 +0000
Received: from [85.158.139.211:5240] by server-10.bemta-5.messagelabs.com id
	16/2A-08578-F62C0F25; Tue, 04 Feb 2014 10:35:27 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391510125!1518721!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22511 invoked from network); 4 Feb 2014 10:35:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 10:35:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,778,1384300800"; d="scan'208";a="97668895"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 10:35:24 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	05:35:24 -0500
Message-ID: <52F0C26A.30601@citrix.com>
Date: Tue, 4 Feb 2014 10:35:22 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<1391509575-3949-5-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1391509575-3949-5-git-send-email-roger.pau@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Matt Wilson <msw@amazon.com>, linux-kernel@vger.kernel.org,
	Matt Rushton <mrushton@amazon.com>,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH v2 4/4] xen-blkif: drop struct
	blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 10:26, Roger Pau Monne wrote:
> This was wrongly introduced in commit 402b27f9; the only difference
> between blkif_request_segment_aligned and blkif_request_segment is
> that the former has named padding, while both share the same memory
> layout.
> 
> Also correct a few minor glitches in the description, including making
> it no longer assume PAGE_SIZE == 4096.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:11:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdtt-0007Ar-C2; Tue, 04 Feb 2014 11:10:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WAdtr-0007Ak-N5
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 11:10:44 +0000
Received: from [85.158.143.35:35552] by server-1.bemta-4.messagelabs.com id
	28/BC-31661-3BAC0F25; Tue, 04 Feb 2014 11:10:43 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391512239!2975919!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_10_20,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16996 invoked from network); 4 Feb 2014 11:10:40 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 11:10:40 -0000
Received: by mail-we0-f170.google.com with SMTP id w62so3977894wes.29
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 03:10:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type;
	bh=hRXMV7wKwVZzAjFar3f5mhJ+OOlxb6d80PRFhw8im+k=;
	b=ZVE4Cvxn8/dbFM2dCVm3r3pdpoL6SZa/NkTEO4L0vL3LzPx9MyBCM5V0F0/czFmMFt
	0RteXWjnLJ6jReZopXLKeLYOkMzEgGaM95apdejdcCEYzUs7fh1j9sYHtmxmIv3o+nVu
	uqjv6cfxC4ZSPX2HQhFK30wMqiKGhmcmyf1VSYvtIEbDTQasowjz7eEpsKkwnkugiJse
	cZ/LYD7y8hIqoTg/l3OgOjwV5mLThGi//jVSwyPoeYda+GzeR3nlZ7SzNADFkSM3lhuS
	mcuH/MXweHxyDsoopqkJ9klHY2z6xOer0LodHa427nEnBRPA4ey+Q2KRIeVflIk2JFCO
	CVTg==
X-Gm-Message-State: ALoCoQndaC64jiRxvSVNVQwT1EnK+KYfgpMNQGW5b6YSCOiLgUn7+PlZ1CQLOiTsYBzetvTwn2r1
X-Received: by 10.180.185.197 with SMTP id fe5mr12133392wic.56.1391512239601; 
	Tue, 04 Feb 2014 03:10:39 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id eo4sm34768216wib.9.2014.02.04.03.10.37
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 03:10:38 -0800 (PST)
Message-ID: <52F0CAAE.6030803@m2r.biz>
Date: Tue, 04 Feb 2014 12:10:38 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>
References: <52CC2936.4050900@m2r.biz>
In-Reply-To: <52CC2936.4050900@m2r.biz>
X-Forwarded-Message-Id: <52CC2936.4050900@m2r.biz>
Content-Type: multipart/mixed; boundary="------------000205090700010505010907"
Subject: [Xen-devel] Fwd: Re: MSI regression with xen hvm domUs using virtio
	devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------000205090700010505010907
Content-Type: multipart/alternative;
 boundary="------------040607000107060807010309"


--------------040607000107060807010309
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit

I'm forwarding this to xen-devel too.
I have also tried with the latest xen-unstable (commit
feee1ace547cf6247a358d082dd64fa762be2488) and the latest qemu upstream
unstable (commit 8cfc114a2f293c40077d1bdb7500b29db359ca22), but the
problem persists.

Il 03/01/2014 22:14, Konrad Rzeszutek Wilk ha scritto:
> On Thu, Dec 19, 2013 at 01:03:35PM +0100, Fabio Fantoni wrote:
>> Hi, sorry for bothering you.
>> Virtio devices work on xen hvm domUs on windows and on linux with
>> old kernel (for example Squeeze with kernel 2.6.32), while on
>> Precise, Wheezy, Saucy and Sid with kernel >=3.2 they work only with
>> pci=nomsi on the kernel boot line.
> OK.
>> I tried debian experimental kernel (3.12.3) with this patch you made:
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0e4ccb1505a9e29c50170742ce26ac4655baab2d
>> but the problem persists.
> Right, that fixes a different issue.
>> If you know something else that I can try to solve this problem I'm
>> all ears.
>> Otherwise where could I report this problem?
> So, what is it that does not work?

Thanks for your reply.
Virtio devices in Xen HVM domUs don't work with kernel > 2.6.32 unless
pci=nomsi is added to the Linux kernel boot line. (The regression lies
somewhere between 2.6.32 and 3.2; I have not tested kernels in between.)
I tested with spice vdagent (which uses virtio-serial) and virtio-net.
I've used spice vdagent since the end of 2011, mainly on Windows domUs.
It works correctly, also with the Xen PV drivers loaded, and is in Xen
upstream:
http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=17b29c1cd830acf8b8ecbc6080264cc5b8ad3c6f
Virtio-net can be enabled simply in the xl cfg, for example:
vif=['model=virtio-net-pci,bridge=xenbr0'], even though it is not
officially supported in Xen upstream for now; it has worked since Wei
Liu's "virtio on xen" project:
http://wiki.xen.org/wiki/Virtio_On_Xen
Virtio on Xen originally (in 2011) required pci=nomsi in order to work,
but after this QEMU patch by Wei Liu (see below) it works out of the box:
http://git.qemu.org/?p=qemu.git;a=commit;h=f1dbf015dfb0aa7f66f710a1f1bc58b662951de2
Regarding virtio-net there is also a regression in qemu 1.6, which I
have narrowed down with bisect (one commit between 4 Jul 2013 and
22 Jul 2013), but that is slightly off topic.

What I'm mainly interested in is solving this regression with
kernel > 2.6.32 and virtio on Xen, so that vdagent works out of the box
in all cases (also with newer kernels, without pci=nomsi).


>   Do you see errors on the console?
> Does /proc/interrupts show any number of MSI interrupts going up?
> Is there something obvious in /var/log/xen/qemu-* ?

Only these:
xc: error: linux_gnttab_set_max_grants: ioctl SET_MAX_GRANTS failed (22
= Invalid argument): Internal error
xen be: qdisk-768: xc_gnttab_set_max_grants failed: Invalid argument
but I think they are unrelated to this problem; they also appear on
domUs without virtio devices.

>
> What does the lspci look like in your earlier kernsl (The ones that
> worked?) OR your /proc/interrupts?

/proc/interrupts differs between the old kernel (working without
pci=nomsi) and the new kernel (with and without pci=nomsi).

Attached are some logs from 3 tests:
- squeeze (debian 6 with kernel 2.6.32) with vdagent and virtio-net,
working (*-squeeze.txt), cmdline: pci=nomsi xen_emul_unplug=never (for
virtio-net)
- precise (ubuntu 12.04 LTS) with vdagent and virtio-net, working
(*-precise-nomsi.txt), cmdline: pci=nomsi xen_emul_unplug=never (for
virtio-net)
- precise (ubuntu 12.04 LTS) with vdagent, not working
(*-precise-onlyvdagent.txt), cmdline: (blank)

The third test also includes xl -vvv create and xl dmesg logs; it uses
only vdagent, so that everything also works with the Xen PV drivers
enabled.

When virtio is not working I don't see any particular error, only that
vdagent fails because /dev/virtio-ports/com.redhat.spice.0 is not
created.

The only thing I have noticed is the difference in interrupt type on
the virtio devices between the squeeze test (PCI-MSI-edge), the working
precise test (xen-pirq-ioapic-level) and the non-working precise test
(xen-pirq-msi-x).

If you need more details and/or tests tell me and I'll post them.

Thanks for any reply.


--------------040607000107060807010309--

--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="proc-interrupts-squeeze.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="proc-interrupts-squeeze.txt"

            CPU0       CPU1       
   0:         22          0   IO-APIC-edge      timer
   1:         47          0   IO-APIC-edge      i8042
   6:          2          0   IO-APIC-edge      floppy
   8:          1          0   IO-APIC-edge      rtc0
   9:          0          0   IO-APIC-fasteoi   acpi
  12:        102          0   IO-APIC-edge      i8042
  14:       1373          0   IO-APIC-edge      ata_piix
  15:          0          0   IO-APIC-edge      ata_piix
  16:       2360          0  xen-percpu-virq      timer0
  17:          0       1992  xen-percpu-virq      timer1
  18:          0          0   xen-dyn-event     xenbus
  48:          0          0   PCI-MSI-edge      virtio1-config
  49:        580          0   PCI-MSI-edge      virtio1-input
  50:          1          0   PCI-MSI-edge      virtio1-output
  51:          0          0   PCI-MSI-edge      virtio0-config
  52:          0          0   PCI-MSI-edge      virtio0-input
 NMI:          0          0   Non-maskable interrupts
 LOC:          0          0   Local timer interrupts
 SPU:          0          0   Spurious interrupts
 PMI:          0          0   Performance monitoring interrupts
 PND:          0          0   Performance pending work
 RES:       1134       1152   Rescheduling interrupts
 CAL:          6        480   Function call interrupts
 TLB:        160        279   TLB shootdowns
 TRM:          0          0   Thermal event interrupts
 THR:          0          0   Threshold APIC interrupts
 MCE:          0          0   Machine check exceptions
 MCP:          2          2   Machine check polls
 ERR:          0
 MIS:          0


--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="lspci-precise-nomsi.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci-precise-nomsi.txt"

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
	Region 1: [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
	Region 2: [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
	Region 3: [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
	Region 4: I/O ports at c140 [size=16]
	Kernel driver in use: ata_piix

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 9
	Kernel modules: i2c-piix4

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 11
	Region 0: I/O ports at c000 [size=256]
	Region 1: Memory at f0000000 (32-bit, prefetchable) [size=16M]

00:03.0 Communication controller: Red Hat, Inc Virtio console
	Subsystem: Red Hat, Inc Device 0003
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 28
	Region 0: I/O ports at c100 [size=32]
	Region 1: Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable- Count=32 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci

00:04.0 VGA compatible controller: Device 1234:1111 (prog-if 00 [VGA controller])
	Subsystem: Red Hat, Inc Device 1100
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: Memory at f1000000 (32-bit, prefetchable) [size=16M]
	Region 2: Memory at f2021000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2000000 [disabled] [size=64K]

00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
	Subsystem: Red Hat, Inc Device 0001
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 36
	Region 0: I/O ports at c120 [size=32]
	Region 1: Memory at f2022000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2010000 [disabled] [size=64K]
	Capabilities: [40] MSI-X: Enable- Count=3 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci



--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="proc-interrupts-precise-nomsi.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="proc-interrupts-precise-nomsi.txt"

            CPU0       CPU1       
   0:         27          0   IO-APIC-edge      timer
   1:         56          0  xen-pirq-ioapic-edge  i8042
   6:          2          0  xen-pirq-ioapic-edge  floppy
   8:          2          0  xen-pirq-ioapic-edge  rtc0
   9:          0          0   IO-APIC-fasteoi   acpi
  12:        142          0  xen-pirq-ioapic-edge  i8042
  14:      11019          0   IO-APIC-edge      ata_piix
  15:          0          0   IO-APIC-edge      ata_piix
  28:        724          0  xen-pirq-ioapic-level  virtio0
  36:        680          0  xen-pirq-ioapic-level  virtio1
  64:      11202          0  xen-percpu-virq      timer0
  65:       6148          0  xen-percpu-ipi       resched0
  66:          0          0  xen-percpu-ipi       callfunc0
  67:          0          0  xen-percpu-virq      debug0
  68:        113          0  xen-percpu-ipi       callfuncsingle0
  69:          0          0  xen-percpu-ipi       spinlock0
  70:          0          0  xen-percpu-ipi       spinlock1
  71:          0      10132  xen-percpu-virq      timer1
  72:          0       6033  xen-percpu-ipi       resched1
  73:          0          0  xen-percpu-ipi       callfunc1
  74:          0          0  xen-percpu-virq      debug1
  75:          0       5452  xen-percpu-ipi       callfuncsingle1
  76:          0          0   xen-dyn-event     xenbus
 NMI:          0          0   Non-maskable interrupts
 LOC:          0          0   Local timer interrupts
 SPU:          0          0   Spurious interrupts
 PMI:          0          0   Performance monitoring interrupts
 IWI:          0          0   IRQ work interrupts
 RES:       6148       6033   Rescheduling interrupts
 CAL:        113       5452   Function call interrupts
 TLB:        921        709   TLB shootdowns
 TRM:          0          0   Thermal event interrupts
 THR:          0          0   Threshold APIC interrupts
 MCE:          0          0   Machine check exceptions
 MCP:          3          3   Machine check polls
 ERR:          0
 MIS:          0


--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="xl-create-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xl-create-precise-onlyvdagent.txt"

xl -vvv create /etc/xen/PRECISEHVM.cfg
Parsing config from /etc/xen/PRECISEHVM.cfg
libxl: debug: libxl_create.c:1339:do_domain_create: ao 0x1bda1a0: create: how=(nil) callback=(nil) poller=0x1bda660
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:197:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_create.c:783:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bda9e8: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=10, free_memkb=10064
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 8 cpus and 10064 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x19f268
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x29f268
xc: detail: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000029f268
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->000000003f000000
  ENTRY ADDRESS: 0000000000100000
xc: detail: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001f7
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x7f6c081b9000 -> 0x7f6c0834f0ed
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdc440: deregister unregistered
libxl: debug: libxl_dm.c:1316:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   6
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-6,server,nowait
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   PRECISEHVM
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -k
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   it
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -spice
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   port=6002,tls-port=0,addr=0.0.0.0,disable-ticketing,agent-mouse=on,disable-copy-paste
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   virtio-serial
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   spicevmc,id=vdagent,name=vdagent
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   virtserialport,chardev=vdagent,name=com.redhat.spice.0
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   VGA
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   order=c
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   2,maxcpus=2
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:1b:18:e3
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif6.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   1008
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   file=/mnt/vm/disks/PRECISEHVM.disk1.xm,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1353:do_domain_create: ao 0x1bda1a0: inprogress: poller=0x1bda660, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: event epath=/local/domain/0/device-model/6/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: event epath=/local/domain/0/device-model/6/state
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdac20: deregister unregistered
libxl: debug: libxl_qmp.c:697:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-6
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:547:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:547:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:547:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: event epath=/local/domain/0/backend/vif/6/0/state
libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/6/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: event epath=/local/domain/0/backend/vif/6/0/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/6/0/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf808: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf890: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf890: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf890: deregister unregistered
libxl: debug: libxl_event.c:1730:libxl__ao_progress_report: ao 0x1bda1a0: progress report: ignored
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x1bda1a0: complete, rc=0
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x1bda1a0: destroy
xc: debug: hypercall buffer: total allocations:523 total releases:523
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:515 misses:4 toobig:4

--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="xl-dmesg-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xl-dmesg-precise-onlyvdagent.txt"

(d6) HVM Loader
(d6) Detected Xen v4.4-unstable
(d6) Xenbus rings @0xfeffc000, event channel 4
(d6) System requested SeaBIOS
(d6) CPU speed is 2661 MHz
(d6) Relocating guest memory for lowmem MMIO space disabled
(XEN) irq.c:270: Dom6 PCI link 0 changed 0 -> 5
(d6) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom6 PCI link 1 changed 0 -> 10
(d6) PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom6 PCI link 2 changed 0 -> 11
(d6) PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom6 PCI link 3 changed 0 -> 5
(d6) PCI-ISA link 3 routed to IRQ5
(d6) pci dev 01:3 INTA->IRQ10
(d6) pci dev 02:0 INTA->IRQ11
(d6) pci dev 03:0 INTA->IRQ5
(d6) pci dev 05:0 INTA->IRQ10
(d6) No RAM in high memory; setting high_mem resource base to 100000000
(d6) pci dev 02:0 bar 14 size 001000000: 0f0000008
(d6) pci dev 04:0 bar 10 size 001000000: 0f1000008
(d6) pci dev 04:0 bar 30 size 000010000: 0f2000000
(d6) pci dev 05:0 bar 30 size 000010000: 0f2010000
(d6) pci dev 03:0 bar 14 size 000001000: 0f2020000
(d6) pci dev 04:0 bar 18 size 000001000: 0f2021000
(d6) pci dev 02:0 bar 10 size 000000100: 00000c001
(d6) pci dev 05:0 bar 10 size 000000100: 00000c101
(d6) pci dev 05:0 bar 14 size 000000100: 0f2022000
(d6) pci dev 03:0 bar 10 size 000000020: 00000c201
(d6) pci dev 01:1 bar 20 size 000000010: 00000c221
(d6) Multiprocessor initialisation:
(d6)  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d6)  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d6) Testing HVM environment:
(d6)  - REP INSB across page boundaries ... passed
(d6)  - GS base MSRs and SWAPGS ... passed
(d6) Passed 2 of 2 tests
(d6) Writing SMBIOS tables ...
(d6) Loading SeaBIOS ...
(d6) Creating MP tables ...
(d6) Loading ACPI ...
(d6) vm86 TSS at fc00a080
(d6) BIOS map:
(d6)  10000-100d3: Scratch space
(d6)  e0000-fffff: Main BIOS
(d6) E820 table:
(d6)  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(d6)  HOLE: 00000000:000a0000 - 00000000:000e0000
(d6)  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d6)  [02]: 00000000:00100000 - 00000000:3f000000: RAM
(d6)  HOLE: 00000000:3f000000 - 00000000:fc000000
(d6)  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d6) Invoking SeaBIOS ...
(d6) SeaBIOS (version debian/1.7.3-2-0-gd3d3d64-dirty-20131205_122724-testXG01OX)
(d6) 
(d6) Found Xen hypervisor signature at 40000000
(d6) xen: copy e820...
(d6) Relocating init from 0x000e1f51 to 0x3efe05b0 (size 63875)
(d6) CPU Mhz=2661
(d6) Found 8 PCI devices (max PCI bus is 00)
(d6) Allocated Xen hypercall page at 3efff000
(d6) Detected Xen v4.4-unstable
(d6) xen: copy BIOS tables...
(d6) Copying SMBIOS entry point from 0x00010010 to 0x000f18c0
(d6) Copying MPTABLE from 0xfc001170/fc001180 to 0x000f17c0
(d6) Copying PIR from 0x00010030 to 0x000f1740
(d6) Copying ACPI RSDP from 0x000100b0 to 0x000f1710
(d6) Using pmtimer, ioport 0xb008, freq 3579 kHz
(d6) Scan for VGA option rom
(d6) WARNING! Found unaligned PCI rom (vd=1234:1111)
(d6) Running option rom at c000:0003
(XEN) stdvga.c:147:d6 entering stdvga and caching modes
(d6) Turning on vga text mode console
(d6) SeaBIOS (version debian/1.7.3-2-0-gd3d3d64-dirty-20131205_122724-testXG01OX)
(d6) Machine UUID 6238c6a6-e67a-49a1-a98a-112549d4bb06
(d6) Found 0 lpt ports
(d6) Found 0 serial ports
(d6) ATA controller 1 at 1f0/3f4/0 (irq 14 dev 9)
(d6) ATA controller 2 at 170/374/0 (irq 15 dev 9)
(d6) ata0-0: QEMU HARDDISK ATA-7 Hard-Disk (10000 MiBytes)
(d6) Searching bootorder for: /pci@i0cf8/*@1,1/drive@0/disk@0
(d6) PS2 keyboard initialized
(d6) All threads complete.
(d6) Scan for option roms
(d6) Running option rom at ca00:0003
(d6) pmm call arg1=1
(d6) pmm call arg1=0
(d6) pmm call arg1=1
(d6) pmm call arg1=0
(d6) Searching bootorder for: /pci@i0cf8/*@5
(d6) 
(d6) Press F12 for boot menu.
(d6) 
(d6) Searching bootorder for: HALT
(d6) drive 0x000f16c0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63 s=20480000
(d6) Space available for UMB: cb000-ef000, f0000-f16c0
(d6) Returned 61440 bytes of ZoneHigh
(d6) e820 map has 6 items:
(d6)   0: 0000000000000000 - 000000000009fc00 = 1 RAM
(d6)   1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
(d6)   2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
(d6)   3: 0000000000100000 - 000000003efff000 = 1 RAM
(d6)   4: 000000003efff000 - 000000003f000000 = 2 RESERVED
(d6)   5: 00000000fc000000 - 0000000100000000 = 2 RESERVED
(d6) enter handle_19:
(d6)   NULL
(d6) Booting from Hard Disk...
(d6) Booting from 0000:7c00
(XEN) irq.c:375: Dom6 callback via changed to Direct Vector 0xf3
(XEN) irq.c:270: Dom6 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom6 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom6 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom6 PCI link 3 changed 5 -> 0
(XEN) grant_table.c:289:d0 Increased maptrack size to 2 frames

--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="lspci-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci-precise-onlyvdagent.txt"

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
	Region 1: [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
	Region 2: [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
	Region 3: [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
	Region 4: I/O ports at c220 [size=16]
	Kernel driver in use: ata_piix

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 9
	Kernel modules: i2c-piix4

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 24
	Region 0: I/O ports at c000 [size=256]
	Region 1: Memory at f0000000 (32-bit, prefetchable) [size=16M]
	Kernel driver in use: xen-platform-pci

00:03.0 Communication controller: Red Hat, Inc Virtio console
	Subsystem: Red Hat, Inc Device 0003
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 28
	Region 0: I/O ports at c200 [size=32]
	Region 1: Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable+ Count=32 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci

00:04.0 VGA compatible controller: Device 1234:1111 (prog-if 00 [VGA controller])
	Subsystem: Red Hat, Inc Device 1100
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: Memory at f1000000 (32-bit, prefetchable) [size=16M]
	Region 2: Memory at f2021000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2000000 [disabled] [size=64K]



--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="proc-interrupts-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="proc-interrupts-precise-onlyvdagent.txt"

            CPU0       CPU1       
   0:         26          0   IO-APIC-edge      timer
   1:         10          0  xen-pirq-ioapic-edge  i8042
   6:          2          0  xen-pirq-ioapic-edge  floppy
   8:          2          0  xen-pirq-ioapic-edge  rtc0
   9:          0          0   IO-APIC-fasteoi   acpi
  12:       2882          0  xen-pirq-ioapic-edge  i8042
  14:          0          0   IO-APIC-edge      ata_piix
  15:          0          0   IO-APIC-edge      ata_piix
  64:      38268          0  xen-percpu-virq      timer0
  65:      12302          0  xen-percpu-ipi       resched0
  66:          0          0  xen-percpu-ipi       callfunc0
  67:          0          0  xen-percpu-virq      debug0
  68:         95          0  xen-percpu-ipi       callfuncsingle0
  69:          0          0  xen-percpu-ipi       spinlock0
  70:          0          0  xen-percpu-ipi       spinlock1
  71:          0      29568  xen-percpu-virq      timer1
  72:          0      14375  xen-percpu-ipi       resched1
  73:          0          0  xen-percpu-ipi       callfunc1
  74:          0          0  xen-percpu-virq      debug1
  75:          0        104  xen-percpu-ipi       callfuncsingle1
  76:        306          0   xen-dyn-event     xenbus
  77:       5790          0   xen-dyn-event     blkif
  78:       8987          0   xen-dyn-event     eth0
  79:          0          0  xen-pirq-msi-x     virtio0-config
  80:          0          0  xen-pirq-msi-x     virtio0-virtqueues
  81:         47          0   xen-dyn-event     vkbd
 NMI:          0          0   Non-maskable interrupts
 LOC:          0          0   Local timer interrupts
 SPU:          0          0   Spurious interrupts
 PMI:          0          0   Performance monitoring interrupts
 IWI:          0          0   IRQ work interrupts
 RES:      12302      14375   Rescheduling interrupts
 CAL:         95        104   Function call interrupts
 TLB:        770        933   TLB shootdowns
 TRM:          0          0   Thermal event interrupts
 THR:          0          0   Threshold APIC interrupts
 MCE:          0          0   Machine check exceptions
 MCP:         28         28   Machine check polls
 ERR:          0
 MIS:          0


--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="lsipci-squeeze.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lsipci-squeeze.txt"

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
	Region 1: [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
	Region 2: [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
	Region 3: [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
	Region 4: I/O ports at c140 [size=16]
	Kernel driver in use: ata_piix

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 9

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 11
	Region 0: I/O ports at c000 [size=256]
	Region 1: Memory at f0000000 (32-bit, prefetchable) [size=16M]

00:03.0 Communication controller: Red Hat, Inc Virtio console
	Subsystem: Red Hat, Inc Device 0003
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 28
	Region 0: I/O ports at c100 [size=32]
	Region 1: Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable+ Count=32 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci

00:04.0 VGA compatible controller: Technical Corp. Device 1111 (prog-if 00 [VGA controller])
	Subsystem: Red Hat, Inc Device 1100
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: Memory at f1000000 (32-bit, prefetchable) [size=16M]
	Region 2: Memory at f2021000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2000000 [disabled] [size=64K]

00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
	Subsystem: Red Hat, Inc Device 0001
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 36
	Region 0: I/O ports at c120 [size=32]
	Region 1: Memory at f2022000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2010000 [disabled] [size=64K]
	Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci



--------------000205090700010505010907
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------000205090700010505010907--


From xen-devel-bounces@lists.xen.org Tue Feb 04 11:11:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdtt-0007Ar-C2; Tue, 04 Feb 2014 11:10:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WAdtr-0007Ak-N5
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 11:10:44 +0000
Received: from [85.158.143.35:35552] by server-1.bemta-4.messagelabs.com id
	28/BC-31661-3BAC0F25; Tue, 04 Feb 2014 11:10:43 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391512239!2975919!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_10_20,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16996 invoked from network); 4 Feb 2014 11:10:40 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 11:10:40 -0000
Received: by mail-we0-f170.google.com with SMTP id w62so3977894wes.29
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 03:10:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type;
	bh=hRXMV7wKwVZzAjFar3f5mhJ+OOlxb6d80PRFhw8im+k=;
	b=ZVE4Cvxn8/dbFM2dCVm3r3pdpoL6SZa/NkTEO4L0vL3LzPx9MyBCM5V0F0/czFmMFt
	0RteXWjnLJ6jReZopXLKeLYOkMzEgGaM95apdejdcCEYzUs7fh1j9sYHtmxmIv3o+nVu
	uqjv6cfxC4ZSPX2HQhFK30wMqiKGhmcmyf1VSYvtIEbDTQasowjz7eEpsKkwnkugiJse
	cZ/LYD7y8hIqoTg/l3OgOjwV5mLThGi//jVSwyPoeYda+GzeR3nlZ7SzNADFkSM3lhuS
	mcuH/MXweHxyDsoopqkJ9klHY2z6xOer0LodHa427nEnBRPA4ey+Q2KRIeVflIk2JFCO
	CVTg==
X-Gm-Message-State: ALoCoQndaC64jiRxvSVNVQwT1EnK+KYfgpMNQGW5b6YSCOiLgUn7+PlZ1CQLOiTsYBzetvTwn2r1
X-Received: by 10.180.185.197 with SMTP id fe5mr12133392wic.56.1391512239601; 
	Tue, 04 Feb 2014 03:10:39 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id eo4sm34768216wib.9.2014.02.04.03.10.37
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 03:10:38 -0800 (PST)
Message-ID: <52F0CAAE.6030803@m2r.biz>
Date: Tue, 04 Feb 2014 12:10:38 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xensource.com>
References: <52CC2936.4050900@m2r.biz>
In-Reply-To: <52CC2936.4050900@m2r.biz>
X-Forwarded-Message-Id: <52CC2936.4050900@m2r.biz>
Content-Type: multipart/mixed; boundary="------------000205090700010505010907"
Subject: [Xen-devel] Fwd: Re: MSI regression with xen hvm domUs using virtio
	devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------000205090700010505010907
Content-Type: multipart/alternative;
 boundary="------------040607000107060807010309"


--------------040607000107060807010309
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: 7bit

I'm forwarding this to xen-devel too.
I have also tried with the latest xen-unstable (commit
feee1ace547cf6247a358d082dd64fa762be2488) and also with the latest qemu
upstream unstable (commit 8cfc114a2f293c40077d1bdb7500b29db359ca22), but
the problem persists.

Il 03/01/2014 22:14, Konrad Rzeszutek Wilk ha scritto:
> On Thu, Dec 19, 2013 at 01:03:35PM +0100, Fabio Fantoni wrote:
>> Hi, sorry for bothering you.
>> Virtio devices work on xen hvm domUs on windows and on linux with
>> old kernel (for example Squeeze with kernel 2.6.32), while on
>> Precise, Wheezy, Saucy and Sid with kernel >=3.2 they work only with
>> pci=nomsi on the kernel boot line.
> OK.
>> I tried debian experimental kernel (3.12.3) with this patch you made:
>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=0e4ccb1505a9e29c50170742ce26ac4655baab2d
>> but the problem persists.
> Right, that fixes a different issue.
>> If you know something else that I can try to solve this problem I'm
>> all ears.
>> Otherwise where could I report this problem?
> So, what is it that does not work?

Thanks for your reply.
None of the virtio devices work with Xen HVM domUs on kernels > 2.6.32
without adding pci=nomsi to the Linux kernel boot line. (The regression
could be anywhere between 2.6.32 and 3.2; I have not tested kernels in
between.)
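As a sketch of the workaround (Debian/Ubuntu GRUB layout inside the domU is an assumption on my part), the option goes on the guest's kernel command line:

```
# /etc/default/grub inside the domU (illustrative)
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nomsi"
# then run update-grub and reboot the domU
```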
I tested with spice vdagent (which uses virtio-serial) and virtio-net.
I've used spice vdagent since the end of 2011, mainly on Windows domUs. It
works correctly, also with the Xen PV drivers loaded, and support is in Xen upstream:
http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=17b29c1cd830acf8b8ecbc6080264cc5b8ad3c6f
Virtio-net can be enabled simply in the xl cfg, for example:
vif=['model=virtio-net-pci,bridge=xenbr0'], even though it is not officially
supported in Xen upstream for now; it has worked since Wei Liu's "virtio
on xen" project:
http://wiki.xen.org/wiki/Virtio_On_Xen
Virtio on Xen originally (in 2011) required pci=nomsi in order to work,
but after this QEMU patch by Wei Liu (below) it works out of the box:
http://git.qemu.org/?p=qemu.git;a=commit;h=f1dbf015dfb0aa7f66f710a1f1bc58b662951de2
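For context, a minimal xl domU config fragment using that vif line could look like the sketch below; the name and disk path are taken from the attached xl logs, while the memory, vcpus and bridge name are assumed example values:

```
# illustrative HVM domU config fragment (values are examples, not the
# exact config used in the tests)
name    = "PRECISEHVM"
builder = "hvm"
memory  = 1024
vcpus   = 2
disk    = ['/mnt/vm/disks/PRECISEHVM.disk1.xm,raw,hda,rw']
# virtio-net NIC provided by the device model; xenbr0 is an assumed bridge
vif     = ['model=virtio-net-pci,bridge=xenbr0']
```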
About virtio-net: there is also a regression in qemu 1.6, which I have
narrowed down with bisect to one commit between 4 Jul 2013 and 22 Jul 2013.
But that is a little off topic.

What I'm mainly interested in is solving this regression with kernels
> 2.6.32 and virtio on Xen, so that vdagent works out of the box in all
cases (also with newer kernels, without pci=nomsi).


>   Do you see errors on the console?
> Does /proc/interrupts show any number of MSI interrupts going up?
> Is there something obvious in /var/log/xen/qemu-* ?

Only these:
xc: error: linux_gnttab_set_max_grants: ioctl SET_MAX_GRANTS failed (22
= Invalid argument): Internal error
xen be: qdisk-768: xc_gnttab_set_max_grants failed: Invalid argument
but I think they are not related to this problem; they also appear on
domUs without virtio devices.

>
> What does the lspci look like in your earlier kernsl (The ones that
> worked?) OR your /proc/interrupts?

/proc/interrupts differs between the old kernel (working without
pci=nomsi) and the new kernel (both with and without pci=nomsi).

Attached are logs from 3 tests:
- squeeze (Debian 6 with kernel 2.6.32) with vdagent and virtio-net,
working (*-squeeze.txt), cmdline: pci=nomsi xen_emul_unplug=never (for
virtio-net)
- precise (Ubuntu 12.04 LTS) with vdagent and virtio-net, working
(*-precise-nomsi.txt), cmdline: pci=nomsi xen_emul_unplug=never (for
virtio-net)
- precise (Ubuntu 12.04 LTS) with vdagent only, not working
(*-precise-onlyvdagent.txt), cmdline: (blank)

The third test also includes the xl -vvv create and xl dmesg output; it
uses only vdagent so that everything keeps working with the Xen PV
drivers enabled as well.

When virtio is not working I did not see any particular error, only that
vdagent does not work because /dev/virtio-ports/com.redhat.spice.0 is
never created.
The only thing I have noticed is the difference in interrupt type on the
virtio devices between the squeeze test (PCI-MSI-edge), the working
precise test (xen-pirq-ioapic-level) and the non-working precise test
(xen-pirq-msi-x).
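For what it's worth, that chip-name column can be pulled out of /proc/interrupts mechanically. A minimal Python sketch, assuming the 2-CPU layout of the attached dumps (the sample lines below are modeled on them, not copied from a live system):

```python
def irq_chip(line):
    """Return (irq_label, chip_name) from one /proc/interrupts data line.

    Assumed layout, matching the attached 2-CPU dumps: an IRQ label,
    one count column per CPU, the interrupt-chip name, then the device.
    """
    fields = line.split()
    label = fields[0].rstrip(":")
    chip = fields[3]  # skip the label and the 2 per-CPU count columns
    return label, chip

# sample lines modeled on the attached /proc/interrupts dumps
broken  = "  79:          0          0  xen-pirq-msi-x     virtio0-config"
working = "  79:          0          0  PCI-MSI-edge       virtio0-config"

print(irq_chip(broken))   # ('79', 'xen-pirq-msi-x')
print(irq_chip(working))  # ('79', 'PCI-MSI-edge')
```

On a system with more CPUs the count-column offset (the hard-coded 3) would need to change accordingly.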

If you need more details and/or tests tell me and I'll post them.

Thanks for any reply.



--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="proc-interrupts-squeeze.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="proc-interrupts-squeeze.txt"

            CPU0       CPU1       
   0:         22          0   IO-APIC-edge      timer
   1:         47          0   IO-APIC-edge      i8042
   6:          2          0   IO-APIC-edge      floppy
   8:          1          0   IO-APIC-edge      rtc0
   9:          0          0   IO-APIC-fasteoi   acpi
  12:        102          0   IO-APIC-edge      i8042
  14:       1373          0   IO-APIC-edge      ata_piix
  15:          0          0   IO-APIC-edge      ata_piix
  16:       2360          0  xen-percpu-virq      timer0
  17:          0       1992  xen-percpu-virq      timer1
  18:          0          0   xen-dyn-event     xenbus
  48:          0          0   PCI-MSI-edge      virtio1-config
  49:        580          0   PCI-MSI-edge      virtio1-input
  50:          1          0   PCI-MSI-edge      virtio1-output
  51:          0          0   PCI-MSI-edge      virtio0-config
  52:          0          0   PCI-MSI-edge      virtio0-input
 NMI:          0          0   Non-maskable interrupts
 LOC:          0          0   Local timer interrupts
 SPU:          0          0   Spurious interrupts
 PMI:          0          0   Performance monitoring interrupts
 PND:          0          0   Performance pending work
 RES:       1134       1152   Rescheduling interrupts
 CAL:          6        480   Function call interrupts
 TLB:        160        279   TLB shootdowns
 TRM:          0          0   Thermal event interrupts
 THR:          0          0   Threshold APIC interrupts
 MCE:          0          0   Machine check exceptions
 MCP:          2          2   Machine check polls
 ERR:          0
 MIS:          0


--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="lspci-precise-nomsi.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci-precise-nomsi.txt"

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
	Region 1: [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
	Region 2: [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
	Region 3: [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
	Region 4: I/O ports at c140 [size=16]
	Kernel driver in use: ata_piix

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 9
	Kernel modules: i2c-piix4

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 11
	Region 0: I/O ports at c000 [size=256]
	Region 1: Memory at f0000000 (32-bit, prefetchable) [size=16M]

00:03.0 Communication controller: Red Hat, Inc Virtio console
	Subsystem: Red Hat, Inc Device 0003
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 28
	Region 0: I/O ports at c100 [size=32]
	Region 1: Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable- Count=32 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci

00:04.0 VGA compatible controller: Device 1234:1111 (prog-if 00 [VGA controller])
	Subsystem: Red Hat, Inc Device 1100
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: Memory at f1000000 (32-bit, prefetchable) [size=16M]
	Region 2: Memory at f2021000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2000000 [disabled] [size=64K]

00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
	Subsystem: Red Hat, Inc Device 0001
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 36
	Region 0: I/O ports at c120 [size=32]
	Region 1: Memory at f2022000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2010000 [disabled] [size=64K]
	Capabilities: [40] MSI-X: Enable- Count=3 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci



--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="proc-interrupts-precise-nomsi.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="proc-interrupts-precise-nomsi.txt"

            CPU0       CPU1       
   0:         27          0   IO-APIC-edge      timer
   1:         56          0  xen-pirq-ioapic-edge  i8042
   6:          2          0  xen-pirq-ioapic-edge  floppy
   8:          2          0  xen-pirq-ioapic-edge  rtc0
   9:          0          0   IO-APIC-fasteoi   acpi
  12:        142          0  xen-pirq-ioapic-edge  i8042
  14:      11019          0   IO-APIC-edge      ata_piix
  15:          0          0   IO-APIC-edge      ata_piix
  28:        724          0  xen-pirq-ioapic-level  virtio0
  36:        680          0  xen-pirq-ioapic-level  virtio1
  64:      11202          0  xen-percpu-virq      timer0
  65:       6148          0  xen-percpu-ipi       resched0
  66:          0          0  xen-percpu-ipi       callfunc0
  67:          0          0  xen-percpu-virq      debug0
  68:        113          0  xen-percpu-ipi       callfuncsingle0
  69:          0          0  xen-percpu-ipi       spinlock0
  70:          0          0  xen-percpu-ipi       spinlock1
  71:          0      10132  xen-percpu-virq      timer1
  72:          0       6033  xen-percpu-ipi       resched1
  73:          0          0  xen-percpu-ipi       callfunc1
  74:          0          0  xen-percpu-virq      debug1
  75:          0       5452  xen-percpu-ipi       callfuncsingle1
  76:          0          0   xen-dyn-event     xenbus
 NMI:          0          0   Non-maskable interrupts
 LOC:          0          0   Local timer interrupts
 SPU:          0          0   Spurious interrupts
 PMI:          0          0   Performance monitoring interrupts
 IWI:          0          0   IRQ work interrupts
 RES:       6148       6033   Rescheduling interrupts
 CAL:        113       5452   Function call interrupts
 TLB:        921        709   TLB shootdowns
 TRM:          0          0   Thermal event interrupts
 THR:          0          0   Threshold APIC interrupts
 MCE:          0          0   Machine check exceptions
 MCP:          3          3   Machine check polls
 ERR:          0
 MIS:          0


--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="xl-create-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xl-create-precise-onlyvdagent.txt"

xl -vvv create /etc/xen/PRECISEHVM.cfg
Parsing config from /etc/xen/PRECISEHVM.cfg
libxl: debug: libxl_create.c:1339:do_domain_create: ao 0x1bda1a0: create: how=(nil) callback=(nil) poller=0x1bda660
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:197:disk_try_backend: Disk vdev=hda, backend phy unsuitable as phys path not a block device
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend qdisk
libxl: debug: libxl_create.c:783:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bda9e8: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=10, free_memkb=10064
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 8 cpus and 10064 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x19f268
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x29f268
xc: detail: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000029f268
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->000000003f000000
  ENTRY ADDRESS: 0000000000100000
xc: detail: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001f7
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x7f6c081b9000 -> 0x7f6c0834f0ed
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=qdisk
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdc440: deregister unregistered
libxl: debug: libxl_dm.c:1316:libxl__spawn_local_dm: Spawning device-model /usr/lib/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   6
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-6,server,nowait
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   PRECISEHVM
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -k
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   it
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -spice
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   port=6002,tls-port=0,addr=0.0.0.0,disable-ticketing,agent-mouse=on,disable-copy-paste
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   virtio-serial
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   spicevmc,id=vdagent,name=vdagent
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   virtserialport,chardev=vdagent,name=com.redhat.spice.0
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   VGA
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   order=c
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   2,maxcpus=2
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:1b:18:e3
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif6.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   1008
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1318:libxl__spawn_local_dm:   file=/mnt/vm/disks/PRECISEHVM.disk1.xm,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1353:do_domain_create: ao 0x1bda1a0: inprogress: poller=0x1bda660, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: event epath=/local/domain/0/device-model/6/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: event epath=/local/domain/0/device-model/6/state
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1bdac20 wpath=/local/domain/0/device-model/6/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdac20: deregister unregistered
libxl: debug: libxl_qmp.c:697:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-6
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:547:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:547:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:547:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:297:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: event epath=/local/domain/0/backend/vif/6/0/state
libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/6/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: event epath=/local/domain/0/backend/vif/6/0/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/6/0/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1bdf808 wpath=/local/domain/0/backend/vif/6/0/state token=3/1: deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf808: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf890: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf890: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1bdf890: deregister unregistered
libxl: debug: libxl_event.c:1730:libxl__ao_progress_report: ao 0x1bda1a0: progress report: ignored
libxl: debug: libxl_event.c:1560:libxl__ao_complete: ao 0x1bda1a0: complete, rc=0
libxl: debug: libxl_event.c:1532:libxl__ao__destroy: ao 0x1bda1a0: destroy
xc: debug: hypercall buffer: total allocations:523 total releases:523
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:515 misses:4 toobig:4

--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="xl-dmesg-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="xl-dmesg-precise-onlyvdagent.txt"

(d6) HVM Loader
(d6) Detected Xen v4.4-unstable
(d6) Xenbus rings @0xfeffc000, event channel 4
(d6) System requested SeaBIOS
(d6) CPU speed is 2661 MHz
(d6) Relocating guest memory for lowmem MMIO space disabled
(XEN) irq.c:270: Dom6 PCI link 0 changed 0 -> 5
(d6) PCI-ISA link 0 routed to IRQ5
(XEN) irq.c:270: Dom6 PCI link 1 changed 0 -> 10
(d6) PCI-ISA link 1 routed to IRQ10
(XEN) irq.c:270: Dom6 PCI link 2 changed 0 -> 11
(d6) PCI-ISA link 2 routed to IRQ11
(XEN) irq.c:270: Dom6 PCI link 3 changed 0 -> 5
(d6) PCI-ISA link 3 routed to IRQ5
(d6) pci dev 01:3 INTA->IRQ10
(d6) pci dev 02:0 INTA->IRQ11
(d6) pci dev 03:0 INTA->IRQ5
(d6) pci dev 05:0 INTA->IRQ10
(d6) No RAM in high memory; setting high_mem resource base to 100000000
(d6) pci dev 02:0 bar 14 size 001000000: 0f0000008
(d6) pci dev 04:0 bar 10 size 001000000: 0f1000008
(d6) pci dev 04:0 bar 30 size 000010000: 0f2000000
(d6) pci dev 05:0 bar 30 size 000010000: 0f2010000
(d6) pci dev 03:0 bar 14 size 000001000: 0f2020000
(d6) pci dev 04:0 bar 18 size 000001000: 0f2021000
(d6) pci dev 02:0 bar 10 size 000000100: 00000c001
(d6) pci dev 05:0 bar 10 size 000000100: 00000c101
(d6) pci dev 05:0 bar 14 size 000000100: 0f2022000
(d6) pci dev 03:0 bar 10 size 000000020: 00000c201
(d6) pci dev 01:1 bar 20 size 000000010: 00000c221
(d6) Multiprocessor initialisation:
(d6)  - CPU0 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d6)  - CPU1 ... 36-bit phys ... fixed MTRRs ... var MTRRs [2/8] ... done.
(d6) Testing HVM environment:
(d6)  - REP INSB across page boundaries ... passed
(d6)  - GS base MSRs and SWAPGS ... passed
(d6) Passed 2 of 2 tests
(d6) Writing SMBIOS tables ...
(d6) Loading SeaBIOS ...
(d6) Creating MP tables ...
(d6) Loading ACPI ...
(d6) vm86 TSS at fc00a080
(d6) BIOS map:
(d6)  10000-100d3: Scratch space
(d6)  e0000-fffff: Main BIOS
(d6) E820 table:
(d6)  [00]: 00000000:00000000 - 00000000:000a0000: RAM
(d6)  HOLE: 00000000:000a0000 - 00000000:000e0000
(d6)  [01]: 00000000:000e0000 - 00000000:00100000: RESERVED
(d6)  [02]: 00000000:00100000 - 00000000:3f000000: RAM
(d6)  HOLE: 00000000:3f000000 - 00000000:fc000000
(d6)  [03]: 00000000:fc000000 - 00000001:00000000: RESERVED
(d6) Invoking SeaBIOS ...
(d6) SeaBIOS (version debian/1.7.3-2-0-gd3d3d64-dirty-20131205_122724-testXG01OX)
(d6) 
(d6) Found Xen hypervisor signature at 40000000
(d6) xen: copy e820...
(d6) Relocating init from 0x000e1f51 to 0x3efe05b0 (size 63875)
(d6) CPU Mhz=2661
(d6) Found 8 PCI devices (max PCI bus is 00)
(d6) Allocated Xen hypercall page at 3efff000
(d6) Detected Xen v4.4-unstable
(d6) xen: copy BIOS tables...
(d6) Copying SMBIOS entry point from 0x00010010 to 0x000f18c0
(d6) Copying MPTABLE from 0xfc001170/fc001180 to 0x000f17c0
(d6) Copying PIR from 0x00010030 to 0x000f1740
(d6) Copying ACPI RSDP from 0x000100b0 to 0x000f1710
(d6) Using pmtimer, ioport 0xb008, freq 3579 kHz
(d6) Scan for VGA option rom
(d6) WARNING! Found unaligned PCI rom (vd=1234:1111)
(d6) Running option rom at c000:0003
(XEN) stdvga.c:147:d6 entering stdvga and caching modes
(d6) Turning on vga text mode console
(d6) SeaBIOS (version debian/1.7.3-2-0-gd3d3d64-dirty-20131205_122724-testXG01OX)
(d6) Machine UUID 6238c6a6-e67a-49a1-a98a-112549d4bb06
(d6) Found 0 lpt ports
(d6) Found 0 serial ports
(d6) ATA controller 1 at 1f0/3f4/0 (irq 14 dev 9)
(d6) ATA controller 2 at 170/374/0 (irq 15 dev 9)
(d6) ata0-0: QEMU HARDDISK ATA-7 Hard-Disk (10000 MiBytes)
(d6) Searching bootorder for: /pci@i0cf8/*@1,1/drive@0/disk@0
(d6) PS2 keyboard initialized
(d6) All threads complete.
(d6) Scan for option roms
(d6) Running option rom at ca00:0003
(d6) pmm call arg1=1
(d6) pmm call arg1=0
(d6) pmm call arg1=1
(d6) pmm call arg1=0
(d6) Searching bootorder for: /pci@i0cf8/*@5
(d6) 
(d6) Press F12 for boot menu.
(d6) 
(d6) Searching bootorder for: HALT
(d6) drive 0x000f16c0: PCHS=16383/16/63 translation=lba LCHS=1024/255/63 s=20480000
(d6) Space available for UMB: cb000-ef000, f0000-f16c0
(d6) Returned 61440 bytes of ZoneHigh
(d6) e820 map has 6 items:
(d6)   0: 0000000000000000 - 000000000009fc00 = 1 RAM
(d6)   1: 000000000009fc00 - 00000000000a0000 = 2 RESERVED
(d6)   2: 00000000000f0000 - 0000000000100000 = 2 RESERVED
(d6)   3: 0000000000100000 - 000000003efff000 = 1 RAM
(d6)   4: 000000003efff000 - 000000003f000000 = 2 RESERVED
(d6)   5: 00000000fc000000 - 0000000100000000 = 2 RESERVED
(d6) enter handle_19:
(d6)   NULL
(d6) Booting from Hard Disk...
(d6) Booting from 0000:7c00
(XEN) irq.c:375: Dom6 callback via changed to Direct Vector 0xf3
(XEN) irq.c:270: Dom6 PCI link 0 changed 5 -> 0
(XEN) irq.c:270: Dom6 PCI link 1 changed 10 -> 0
(XEN) irq.c:270: Dom6 PCI link 2 changed 11 -> 0
(XEN) irq.c:270: Dom6 PCI link 3 changed 5 -> 0
(XEN) grant_table.c:289:d0 Increased maptrack size to 2 frames

--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="lspci-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lspci-precise-onlyvdagent.txt"

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
	Region 1: [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
	Region 2: [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
	Region 3: [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
	Region 4: I/O ports at c220 [size=16]
	Kernel driver in use: ata_piix

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 9
	Kernel modules: i2c-piix4

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 24
	Region 0: I/O ports at c000 [size=256]
	Region 1: Memory at f0000000 (32-bit, prefetchable) [size=16M]
	Kernel driver in use: xen-platform-pci

00:03.0 Communication controller: Red Hat, Inc Virtio console
	Subsystem: Red Hat, Inc Device 0003
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 28
	Region 0: I/O ports at c200 [size=32]
	Region 1: Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable+ Count=32 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci

00:04.0 VGA compatible controller: Device 1234:1111 (prog-if 00 [VGA controller])
	Subsystem: Red Hat, Inc Device 1100
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: Memory at f1000000 (32-bit, prefetchable) [size=16M]
	Region 2: Memory at f2021000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2000000 [disabled] [size=64K]



--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="proc-interrupts-precise-onlyvdagent.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="proc-interrupts-precise-onlyvdagent.txt"

            CPU0       CPU1       
   0:         26          0   IO-APIC-edge      timer
   1:         10          0  xen-pirq-ioapic-edge  i8042
   6:          2          0  xen-pirq-ioapic-edge  floppy
   8:          2          0  xen-pirq-ioapic-edge  rtc0
   9:          0          0   IO-APIC-fasteoi   acpi
  12:       2882          0  xen-pirq-ioapic-edge  i8042
  14:          0          0   IO-APIC-edge      ata_piix
  15:          0          0   IO-APIC-edge      ata_piix
  64:      38268          0  xen-percpu-virq      timer0
  65:      12302          0  xen-percpu-ipi       resched0
  66:          0          0  xen-percpu-ipi       callfunc0
  67:          0          0  xen-percpu-virq      debug0
  68:         95          0  xen-percpu-ipi       callfuncsingle0
  69:          0          0  xen-percpu-ipi       spinlock0
  70:          0          0  xen-percpu-ipi       spinlock1
  71:          0      29568  xen-percpu-virq      timer1
  72:          0      14375  xen-percpu-ipi       resched1
  73:          0          0  xen-percpu-ipi       callfunc1
  74:          0          0  xen-percpu-virq      debug1
  75:          0        104  xen-percpu-ipi       callfuncsingle1
  76:        306          0   xen-dyn-event     xenbus
  77:       5790          0   xen-dyn-event     blkif
  78:       8987          0   xen-dyn-event     eth0
  79:          0          0  xen-pirq-msi-x     virtio0-config
  80:          0          0  xen-pirq-msi-x     virtio0-virtqueues
  81:         47          0   xen-dyn-event     vkbd
 NMI:          0          0   Non-maskable interrupts
 LOC:          0          0   Local timer interrupts
 SPU:          0          0   Spurious interrupts
 PMI:          0          0   Performance monitoring interrupts
 IWI:          0          0   IRQ work interrupts
 RES:      12302      14375   Rescheduling interrupts
 CAL:         95        104   Function call interrupts
 TLB:        770        933   TLB shootdowns
 TRM:          0          0   Thermal event interrupts
 THR:          0          0   Threshold APIC interrupts
 MCE:          0          0   Machine check exceptions
 MCP:         28         28   Machine check polls
 ERR:          0
 MIS:          0


--------------000205090700010505010907
Content-Type: text/plain; charset=windows-1252;
 name="lsipci-squeeze.txt"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="lsipci-squeeze.txt"

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [Master])
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O+ Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: [virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
	Region 1: [virtual] Memory at 000003f0 (type 3, non-prefetchable) [size=1]
	Region 2: [virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
	Region 3: [virtual] Memory at 00000370 (type 3, non-prefetchable) [size=1]
	Region 4: I/O ports at c140 [size=16]
	Kernel driver in use: ata_piix

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
	Subsystem: Red Hat, Inc Qemu virtual machine
	Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 9

00:02.0 Unassigned class [ff80]: XenSource, Inc. Xen Platform Device (rev 01)
	Subsystem: XenSource, Inc. Xen Platform Device
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 11
	Region 0: I/O ports at c000 [size=256]
	Region 1: Memory at f0000000 (32-bit, prefetchable) [size=16M]

00:03.0 Communication controller: Red Hat, Inc Virtio console
	Subsystem: Red Hat, Inc Device 0003
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 28
	Region 0: I/O ports at c100 [size=32]
	Region 1: Memory at f2020000 (32-bit, non-prefetchable) [size=4K]
	Capabilities: [40] MSI-X: Enable+ Count=32 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci

00:04.0 VGA compatible controller: Technical Corp. Device 1111 (prog-if 00 [VGA controller])
	Subsystem: Red Hat, Inc Device 1100
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
	Status: Cap- 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Region 0: Memory at f1000000 (32-bit, prefetchable) [size=16M]
	Region 2: Memory at f2021000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2000000 [disabled] [size=64K]

00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
	Subsystem: Red Hat, Inc Device 0001
	Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
	Latency: 0
	Interrupt: pin A routed to IRQ 36
	Region 0: I/O ports at c120 [size=32]
	Region 1: Memory at f2022000 (32-bit, non-prefetchable) [size=4K]
	Expansion ROM at f2010000 [disabled] [size=64K]
	Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
		Vector table: BAR=1 offset=00000000
		PBA: BAR=1 offset=00000800
	Kernel driver in use: virtio-pci



--------------000205090700010505010907
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------000205090700010505010907--


From xen-devel-bounces@lists.xen.org Tue Feb 04 11:14:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAdxq-0007Jp-8u; Tue, 04 Feb 2014 11:14:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAdxo-0007Jk-O9
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:14:48 +0000
Received: from [85.158.139.211:6116] by server-7.bemta-5.messagelabs.com id
	D0/FA-14867-7ABC0F25; Tue, 04 Feb 2014 11:14:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391512486!1517598!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10758 invoked from network); 4 Feb 2014 11:14:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 11:14:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 11:14:46 +0000
Message-Id: <52F0D9B20200007800118F44@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 11:14:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-12-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-12-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 11/17] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> --- a/xen/arch/x86/hvm/vpmu.c
> +++ b/xen/arch/x86/hvm/vpmu.c
> @@ -400,6 +400,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>              return -EFAULT;
>          pvpmu_finish(current->domain, &pmu_params);
>          break;
> +
> +    case XENPMU_lvtpc_set:
> +        if ( copy_from_guest(&pmu_params, arg, 1) )
> +            return -EFAULT;
> +
> +        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);

Once again, please don't ignore (parts of) hypercall input values.
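
A minimal stand-alone sketch of the kind of check being asked for (the function name and plain-C setting are mine, not Xen's): rather than silently discarding the upper half of the 64-bit hypercall argument with a (uint32_t) cast, fail the call when the discarded bits are non-zero.

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical helper illustrating the review point: validate the
 * full 64-bit input before truncating it to the 32 bits actually
 * consumed, instead of ignoring the upper half. */
static int set_lvtpc_checked(uint64_t val, uint32_t *out)
{
    if ( val != (uint32_t)val )  /* discarded upper bits must be zero */
        return -EINVAL;

    *out = (uint32_t)val;
    return 0;
}
```

With this shape, a guest passing garbage in the upper bits gets -EINVAL back rather than having its input silently mangled.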

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:22:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAe5X-0007UR-7z; Tue, 04 Feb 2014 11:22:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAe5V-0007UM-AZ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:22:45 +0000
Received: from [85.158.139.211:31902] by server-15.bemta-5.messagelabs.com id
	45/51-24395-48DC0F25; Tue, 04 Feb 2014 11:22:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391512963!1516679!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15567 invoked from network); 4 Feb 2014 11:22:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 11:22:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 11:22:42 +0000
Message-Id: <52F0DB8F0200007800118F53@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 11:22:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-13-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-13-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 12/17] x86/VPMU: Handle PMU interrupts
 for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>  int vpmu_do_interrupt(struct cpu_user_regs *regs)
>  {
>      struct vcpu *v = current;
> -    struct vpmu_struct *vpmu = vcpu_vpmu(v);
> +    struct vpmu_struct *vpmu;
>  
> -    if ( vpmu->arch_vpmu_ops )
> +    /* dom0 will handle this interrupt */
> +    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
> +        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
> +
> +    vpmu = vcpu_vpmu(v);
> +    if ( !is_hvm_domain(v->domain) )
> +    {
> +        /* PV guest or dom0 is doing system profiling */
> +        const struct cpu_user_regs *gregs;
> +        int err;
> +
> +        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
> +            return 1;
> +
> +        /* PV guest will be reading PMU MSRs from xenpmu_data */
> +        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
> +        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
> +        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
> +
> +        /* Store appropriate registers in xenpmu_data */
> +        if ( is_pv_32bit_domain(current->domain) )
> +        {
> +            /*
> +             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
> +             * and therefore we treat it the same way as a non-priviledged
> +             * PV 32-bit domain.
> +             */
> +            struct compat_cpu_user_regs *cmp;
> +
> +            gregs = guest_cpu_user_regs();
> +
> +            cmp = (struct compat_cpu_user_regs *)
> +                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;

Deliberate type changes like this can be done more easily (and more
readably, as well as in a more forward-compatible way) using (void *).
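
A small sketch of the style being suggested; the struct names below are stand-ins for Xen's cpu_user_regs / compat_cpu_user_regs, and the buffer stands in for xenpmu_data->pmu.r.regs.

```c
#include <stdint.h>

/* Illustrative stand-ins for the native and compat register layouts. */
struct regs64 { uint64_t ax, bx; };
struct regs32 { uint32_t ax, bx; };

/* Converting through void * avoids spelling the target type in the
 * cast, so the line stays correct (and shorter) if either structure
 * type is later renamed. */
static uint32_t write_compat_ax(struct regs64 *native, uint32_t val)
{
    struct regs32 *cmp = (void *)native;  /* vs. (struct regs32 *)native */

    cmp->ax = val;
    return cmp->ax;
}
```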

> +            XLAT_cpu_user_regs(cmp, gregs);
> +        }
> +        else if ( !is_control_domain(current->domain) &&
> +                 !is_idle_vcpu(current) )
> +        {
> +            /* PV guest */
> +            gregs = guest_cpu_user_regs();
> +            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                   gregs, sizeof(struct cpu_user_regs));
> +        }
> +        else
> +            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                   regs, sizeof(struct cpu_user_regs));
> +
> +        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
> +        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
> +        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
> +
> +        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
> +        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
> +        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
> +
> +        send_guest_vcpu_virq(v, VIRQ_XENPMU);
> +
> +        return 1;
> +    }
> +    else if ( vpmu->arch_vpmu_ops )

If the previous (and only) if() branch returns unconditionally, using
"else if" is more confusing than clarifying imo (and in any case
needlessly grows the patch, even if just by a bit).
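
A minimal illustration (not Xen code) of the structural point: when the first branch ends in an unconditional return, a plain "if" below it is equivalent to "else if" and keeps the diff smaller.

```c
/* Toy example: the first branch always returns, so the second test
 * does not need to be an "else if". */
static int classify(int x)
{
    if ( x < 0 )
        return -1;   /* unconditional return ... */

    if ( x > 0 )     /* ... so no "else" is needed here */
        return 1;

    return 0;
}
```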

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:32:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:32:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeEX-0007sD-An; Tue, 04 Feb 2014 11:32:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAeEV-0007s8-Ey
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:32:03 +0000
Received: from [193.109.254.147:45188] by server-11.bemta-14.messagelabs.com
	id 45/6F-24604-2BFC0F25; Tue, 04 Feb 2014 11:32:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391513522!1891323!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24298 invoked from network); 4 Feb 2014 11:32:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 11:32:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 11:32:01 +0000
Message-Id: <52F0DDBE0200007800118F62@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 11:31:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> @@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>          err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
>          vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>  
> -        /* Store appropriate registers in xenpmu_data */
> -        if ( is_pv_32bit_domain(current->domain) )
> +        if ( !is_hvm_domain(current->domain) )
>          {
> -            /*
> -             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
> -             * and therefore we treat it the same way as a non-priviledged
> -             * PV 32-bit domain.
> -             */
> -            struct compat_cpu_user_regs *cmp;
> -
> -            gregs = guest_cpu_user_regs();
> -
> -            cmp = (struct compat_cpu_user_regs *)
> -                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> -            XLAT_cpu_user_regs(cmp, gregs);
> +            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;

The surrounding if checks !hvm, i.e. both PV and PVH can make it
here. But TF_kernel_mode is meaningful for PV only.
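
A stand-alone sketch of the guard this comment implies; is_pv and tf_kernel_mode below stand in for Xen's is_pv_domain() / TF_kernel_mode, and the PVH branch is only a placeholder.

```c
#include <stdbool.h>
#include <stdint.h>

/* TF_kernel_mode distinguishes kernel from user mode only for PV
 * vcpus, so a PVH vcpu must not consult it and needs some other
 * source for the reported privilege level. */
static uint16_t report_cs_rpl(bool is_pv, bool tf_kernel_mode)
{
    if ( !is_pv )
        return 3;  /* placeholder: real code must derive this for PVH */

    return tf_kernel_mode ? 0 : 3;
}
```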

> +
> +            /* Store appropriate registers in xenpmu_data */
> +            if ( is_pv_32bit_domain(current->domain) )
> +            {
> +                gregs = guest_cpu_user_regs();
> +
> +                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
> +                     !is_pv_32bit_domain(v->domain) )
> +                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                           gregs, sizeof(struct cpu_user_regs));
> +                else 
> +                {
> +                    /*
> +                     * 32-bit dom0 cannot process Xen's addresses (which are
> +                     * 64 bit) and therefore we treat it the same way as a
> +                     * non-priviledged PV 32-bit domain.
> +                     */
> +
> +                    struct compat_cpu_user_regs *cmp;
> +
> +                    cmp = (struct compat_cpu_user_regs *)
> +                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +                    XLAT_cpu_user_regs(cmp, gregs);
> +                }
> +            }
> +            else if ( !is_control_domain(current->domain) &&
> +                      !is_idle_vcpu(current) )
> +            {
> +                /* PV guest */
> +                gregs = guest_cpu_user_regs();
> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                       gregs, sizeof(struct cpu_user_regs));
> +            }
> +            else
> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                       regs, sizeof(struct cpu_user_regs));
> +
> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            gregs->cs = cs;

And now you store a NUL selector (i.e. just the RPL bits) into the
output field?

>          }
> -        else if ( !is_control_domain(current->domain) &&
> -                 !is_idle_vcpu(current) )
> +        else
>          {
> -            /* PV guest */
> +            /* HVM guest */
> +            struct segment_register cs;
> +
>              gregs = guest_cpu_user_regs();
>              memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>                     gregs, sizeof(struct cpu_user_regs));
> +
> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            gregs->cs = cs.attr.fields.dpl;

And here too? If that's intended, a code comment is a must.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> @@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>          err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
>          vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>  
> -        /* Store appropriate registers in xenpmu_data */
> -        if ( is_pv_32bit_domain(current->domain) )
> +        if ( !is_hvm_domain(current->domain) )
>          {
> -            /*
> -             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
> -             * and therefore we treat it the same way as a non-priviledged
> -             * PV 32-bit domain.
> -             */
> -            struct compat_cpu_user_regs *cmp;
> -
> -            gregs = guest_cpu_user_regs();
> -
> -            cmp = (struct compat_cpu_user_regs *)
> -                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> -            XLAT_cpu_user_regs(cmp, gregs);
> +            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;

The surrounding if checks !hvm, i.e. both PV and PVH can make it
here. But TF_kernel_mode is meaningful for PV only.

> +
> +            /* Store appropriate registers in xenpmu_data */
> +            if ( is_pv_32bit_domain(current->domain) )
> +            {
> +                gregs = guest_cpu_user_regs();
> +
> +                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
> +                     !is_pv_32bit_domain(v->domain) )
> +                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                           gregs, sizeof(struct cpu_user_regs));
> +                else 
> +                {
> +                    /*
> +                     * 32-bit dom0 cannot process Xen's addresses (which are
> +                     * 64 bit) and therefore we treat it the same way as a
> +                     * non-privileged PV 32-bit domain.
> +                     */
> +
> +                    struct compat_cpu_user_regs *cmp;
> +
> +                    cmp = (struct compat_cpu_user_regs *)
> +                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +                    XLAT_cpu_user_regs(cmp, gregs);
> +                }
> +            }
> +            else if ( !is_control_domain(current->domain) &&
> +                      !is_idle_vcpu(current) )
> +            {
> +                /* PV guest */
> +                gregs = guest_cpu_user_regs();
> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                       gregs, sizeof(struct cpu_user_regs));
> +            }
> +            else
> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
> +                       regs, sizeof(struct cpu_user_regs));
> +
> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            gregs->cs = cs;

And now you store a NUL selector (i.e. just the RPL bits) into the
output field?

>          }
> -        else if ( !is_control_domain(current->domain) &&
> -                 !is_idle_vcpu(current) )
> +        else
>          {
> -            /* PV guest */
> +            /* HVM guest */
> +            struct segment_register cs;
> +
>              gregs = guest_cpu_user_regs();
>              memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>                     gregs, sizeof(struct cpu_user_regs));
> +
> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> +            gregs->cs = cs.attr.fields.dpl;

And here too? If that's intended, a code comment is a must.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeGC-00086h-SH; Tue, 04 Feb 2014 11:33:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAeGC-00086Y-1a
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 11:33:48 +0000
Received: from [85.158.139.211:25542] by server-8.bemta-5.messagelabs.com id
	36/51-05298-B10D0F25; Tue, 04 Feb 2014 11:33:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391513624!1536889!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22002 invoked from network); 4 Feb 2014 11:33:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 11:33:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97685451"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 11:33:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	06:33:43 -0500
Message-ID: <1391513622.10515.75.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 11:33:42 +0000
In-Reply-To: <CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 02:45 +0000, Miguel Clara wrote:
> The wiki mentions "NBD" but if I got it right this means the storage
> is in "host1" and all NBD does is connect to that host, so the disk is
> never copied to disk 2?
> 
> Am I correct to assume this?

Yes. N is for network.

> I have a guest that I need to move from host1 to host2, or back if
> needed; there's no problem stopping it first, but I guess this is not
> what the "migrate" command is used for!
> 
> And for a guest where I do want to keep a backup with remus, is shared
> storage still a must? the wiki http://wiki.xen.org/wiki/Remus states:
> 
> -> Shared storage is not required
> 
> But if "migrate" doesn't work how would remus?

I think Remus with xend supports out of band storage synchronisation.
This has not yet been implemented for xl remus support.

> 
> Sorry for all the questions just trying to understand this better, and
> there's really no documentation about:
> xl migrate and xl remus!
> 
> thanks
> 
> On Mon, Feb 3, 2014 at 3:09 PM, Mike C. <miguelmclara@gmail.com> wrote:
> > Makes sense...
> >
> > Is there documentation about xl migrate and xl remus?
> >
> > Say I want to migrate a host but first pause it?
> >
> > I could also snapshot the lvm but that doesn't save the memory and the
> > domain would have to be offline.
> >
> > So if I want to migrate but don't have shared storage, what's the best
> > approach, DRBD?
> >
> > Thanks
> >
> >
> >
> > On February 3, 2014 10:17:04 AM GMT, Ian Campbell <Ian.Campbell@citrix.com>
> > wrote:
> >>
> >> On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
> >>>
> >>>  I'm testing live migration without shared storage (I use LVM on both
> >>> sides)
> >>>
> >>>
> >>>  Issuing "xl migrate" worked nicely and the machine was migrated to the
> >>> second host
> >>>
> >>>  However I see this in the second host log:
> >>>
> >>>  [ 1502.563251] EXT4-fs error (device xvda1):
> >>>  htree_dirblock_to_tree:892: inode #136303: block 533250: comm
> >>>  run-parts: bad entry in directory: rec_len is smaller than minimal -
> >>>  offset=0(0), inode=0, rec_len=0, name_len=0
> >>>  Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
> >>>  (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
> >>>  533250: comm run-parts: bad entry in directory: rec_len is smaller
> >>>  than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
> >>>
> >>>  I also get errors
> >>> like:
> >>>  -bash: /bin/ping: cannot execute binary file
> >>>
> >>>  Is this to be expected when using non-shared storage?
> >>
> >>
> >> Yes. If the underlying disk is not the same device between both hosts
> >> then all bets are off and all sorts of bad things will happen. Think
> >> about it -- what would you expect to happen to an OS if a disk suddenly
> >> started returning completely different data to what was written to it?
> >>
> >> What you have seen seems like a plausible outcome.
> >>
> >> Ian.
> >>
> >
> > --
> > Sent from my Android device with K-9 Mail. Please excuse my brevity.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:38:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeKn-0008Rn-KN; Tue, 04 Feb 2014 11:38:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAeKm-0008NF-BV
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:38:32 +0000
Received: from [193.109.254.147:29828] by server-9.bemta-14.messagelabs.com id
	07/0A-24895-731D0F25; Tue, 04 Feb 2014 11:38:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391513910!1893378!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19720 invoked from network); 4 Feb 2014 11:38:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 11:38:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 11:38:30 +0000
Message-Id: <52F0DF430200007800118F76@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 11:38:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-15-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-15-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 14/17] x86/VPMU: Save VPMU state for PV
 guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> Save VPMU state during context switch for both HVM and PV guests unless we
> are in PMU privileged mode (i.e. dom0 is doing all profiling) and the 
> switched
> out domain is not the control domain. The latter condition is needed because
> we may have just turned the privileged PMU mode on and thus need to save
> the last domain.

While this is understandable, ...

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1444,17 +1444,16 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>      }
>  
>      if (prev != next)
> -        update_runstate_area(prev);
> -
> -    if ( is_hvm_vcpu(prev) )
>      {
> -        if (prev != next)
> +        update_runstate_area(prev);
> +        if ( !(vpmu_mode & XENPMU_MODE_PRIV) ||
> +             !is_control_domain(prev->domain) )
>              vpmu_save(prev);

... I'd really like you to investigate ways to achieve the same effect
without this extra second condition added to the context switch path.
E.g. by synchronously issuing a save on all affected vCPU-s when
privileged mode gets turned on.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:41:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:41:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeNy-0000B1-8o; Tue, 04 Feb 2014 11:41:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAeNw-0000Aw-NO
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:41:49 +0000
Received: from [85.158.137.68:39254] by server-10.bemta-3.messagelabs.com id
	A8/FD-07302-AF1D0F25; Tue, 04 Feb 2014 11:41:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391514103!13282813!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19863 invoked from network); 4 Feb 2014 11:41:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 11:41:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="99599536"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 11:41:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 06:41:37 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAeNl-00048y-1x;
	Tue, 04 Feb 2014 11:41:37 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 11:41:36 +0000
Message-ID: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH FOR-4.5] xen: arm: avoid "PV" terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xen on ARM guests are neither PV nor HVM; they are just "guests". Avoid the
incorrect use of the term PV in the guest type macros.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
Common code still has is_pv_domain, is_hvm_domain and even is_pvh_domain.

These should be made x86 specific and common code should use specific feature
tests on e.g. paging mode, I might tackle that later.
---
 xen/arch/arm/arm64/domain.c  |    4 ++--
 xen/arch/arm/arm64/domctl.c  |    4 ++--
 xen/arch/arm/decode.c        |    2 +-
 xen/arch/arm/domain.c        |   18 +++++++++---------
 xen/arch/arm/domain_build.c  |    6 +++---
 xen/arch/arm/kernel.c        |    8 ++++----
 xen/arch/arm/traps.c         |   28 ++++++++++++++--------------
 xen/arch/arm/vpsci.c         |    4 ++--
 xen/arch/arm/vtimer.c        |    6 +++---
 xen/include/asm-arm/domain.h |   12 ++++++------
 10 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/xen/arch/arm/arm64/domain.c b/xen/arch/arm/arm64/domain.c
index 6990a7b..ccba21f 100644
--- a/xen/arch/arm/arm64/domain.c
+++ b/xen/arch/arm/arm64/domain.c
@@ -29,7 +29,7 @@ void vcpu_regs_hyp_to_user(const struct vcpu *vcpu,
 {
 #define C(hyp,user) regs->user = vcpu->arch.cpu_info->guest_cpu_user_regs.hyp
     ALLREGS;
-    if ( is_pv32_domain(vcpu->domain) )
+    if ( is_32bit_domain(vcpu->domain) )
     {
         ALLREGS32;
     }
@@ -45,7 +45,7 @@ void vcpu_regs_user_to_hyp(struct vcpu *vcpu,
 {
 #define C(hyp,user) vcpu->arch.cpu_info->guest_cpu_user_regs.hyp = regs->user
     ALLREGS;
-    if ( is_pv32_domain(vcpu->domain) )
+    if ( is_32bit_domain(vcpu->domain) )
     {
         ALLREGS32;
     }
diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index e2b4617..41e2562 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -35,9 +35,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         switch ( domctl->u.address_size.size )
         {
         case 32:
-            return switch_mode(d, DOMAIN_PV32);
+            return switch_mode(d, DOMAIN_32BIT);
         case 64:
-            return switch_mode(d, DOMAIN_PV64);
+            return switch_mode(d, DOMAIN_64BIT);
         default:
             return -EINVAL;
         }
diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 8880c39..9d237f8 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -151,7 +151,7 @@ bad_thumb:
 
 int decode_instruction(const struct cpu_user_regs *regs, struct hsr_dabt *dabt)
 {
-    if ( is_pv32_domain(current->domain) && regs->cpsr & PSR_THUMB )
+    if ( is_32bit_domain(current->domain) && regs->cpsr & PSR_THUMB )
         return decode_thumb(regs->pc, dabt);
 
     /* TODO: Handle ARM instruction */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..1e4f298 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -75,7 +75,7 @@ static void ctxt_switch_from(struct vcpu *p)
     /* Arch timer */
     virt_timer_save(p);
 
-    if ( is_pv32_domain(p->domain) && cpu_has_thumbee )
+    if ( is_32bit_domain(p->domain) && cpu_has_thumbee )
     {
         p->arch.teecr = READ_SYSREG32(TEECR32_EL1);
         p->arch.teehbr = READ_SYSREG32(TEEHBR32_EL1);
@@ -93,7 +93,7 @@ static void ctxt_switch_from(struct vcpu *p)
     p->arch.ttbcr = READ_SYSREG(TCR_EL1);
     p->arch.ttbr0 = READ_SYSREG64(TTBR0_EL1);
     p->arch.ttbr1 = READ_SYSREG64(TTBR1_EL1);
-    if ( is_pv32_domain(p->domain) )
+    if ( is_32bit_domain(p->domain) )
         p->arch.dacr = READ_SYSREG(DACR32_EL2);
     p->arch.par = READ_SYSREG64(PAR_EL1);
 #if defined(CONFIG_ARM_32)
@@ -116,7 +116,7 @@ static void ctxt_switch_from(struct vcpu *p)
     p->arch.esr = READ_SYSREG64(ESR_EL1);
 #endif
 
-    if ( is_pv32_domain(p->domain) )
+    if ( is_32bit_domain(p->domain) )
         p->arch.ifsr  = READ_SYSREG(IFSR32_EL2);
     p->arch.afsr0 = READ_SYSREG(AFSR0_EL1);
     p->arch.afsr1 = READ_SYSREG(AFSR1_EL1);
@@ -165,7 +165,7 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG64(n->arch.esr, ESR_EL1);
 #endif
 
-    if ( is_pv32_domain(n->domain) )
+    if ( is_32bit_domain(n->domain) )
         WRITE_SYSREG(n->arch.ifsr, IFSR32_EL2);
     WRITE_SYSREG(n->arch.afsr0, AFSR0_EL1);
     WRITE_SYSREG(n->arch.afsr1, AFSR1_EL1);
@@ -175,7 +175,7 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG(n->arch.ttbcr, TCR_EL1);
     WRITE_SYSREG64(n->arch.ttbr0, TTBR0_EL1);
     WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
-    if ( is_pv32_domain(n->domain) )
+    if ( is_32bit_domain(n->domain) )
         WRITE_SYSREG(n->arch.dacr, DACR32_EL2);
     WRITE_SYSREG64(n->arch.par, PAR_EL1);
 #if defined(CONFIG_ARM_32)
@@ -198,7 +198,7 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG(n->arch.tpidrro_el0, TPIDRRO_EL0);
     WRITE_SYSREG(n->arch.tpidr_el1, TPIDR_EL1);
 
-    if ( is_pv32_domain(n->domain) && cpu_has_thumbee )
+    if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
     {
         WRITE_SYSREG32(n->arch.teecr, TEECR32_EL1);
         WRITE_SYSREG32(n->arch.teehbr, TEEHBR32_EL1);
@@ -215,7 +215,7 @@ static void ctxt_switch_to(struct vcpu *n)
 
     isb();
 
-    if ( is_pv32_domain(n->domain) )
+    if ( is_32bit_domain(n->domain) )
         hcr &= ~HCR_RW;
     else
         hcr |= HCR_RW;
@@ -263,7 +263,7 @@ static void continue_new_vcpu(struct vcpu *prev)
 
     if ( is_idle_vcpu(current) )
         reset_stack_and_jump(idle_loop);
-    else if is_pv32_domain(current->domain)
+    else if is_32bit_domain(current->domain)
         /* check_wakeup_from_wait(); */
         reset_stack_and_jump(return_to_new_vcpu32);
     else
@@ -625,7 +625,7 @@ int arch_set_info_guest(
     struct vcpu_guest_context *ctxt = c.nat;
     struct vcpu_guest_core_regs *regs = &c.nat->user_regs;
 
-    if ( is_pv32_domain(v->domain) )
+    if ( is_32bit_domain(v->domain) )
     {
         if ( !is_guest_pv32_psr(regs->cpsr) )
             return -EINVAL;
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..c4db604 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -504,7 +504,7 @@ static int make_cpus_node(const struct domain *d, void *fdt,
                 return res;
         }
 
-        if ( is_pv64_domain(d) )
+        if ( is_64bit_domain(d) )
         {
             res = fdt_property_string(fdt, "enable-method", "psci");
             if ( res )
@@ -1021,7 +1021,7 @@ int construct_dom0(struct domain *d)
     p2m_load_VTTBR(d);
 #ifdef CONFIG_ARM_64
     d->arch.type = kinfo.type;
-    if ( is_pv32_domain(d) )
+    if ( is_32bit_domain(d) )
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~HCR_RW, HCR_EL2);
     else
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
@@ -1045,7 +1045,7 @@ int construct_dom0(struct domain *d)
 
     regs->pc = (register_t)kinfo.entry;
 
-    if ( is_pv32_domain(d) )
+    if ( is_32bit_domain(d) )
     {
         regs->cpsr = PSR_GUEST32_INIT;
 
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..8b6709c 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -209,7 +209,7 @@ static int kernel_try_zimage64_prepare(struct kernel_info *info,
     info->entry = info->zimage.load_addr;
     info->load = kernel_zimage_load;
 
-    info->type = DOMAIN_PV64;
+    info->type = DOMAIN_64BIT;
 
     return 0;
 }
@@ -281,7 +281,7 @@ static int kernel_try_zimage32_prepare(struct kernel_info *info,
     info->load = kernel_zimage_load;
 
 #ifdef CONFIG_ARM_64
-    info->type = DOMAIN_PV32;
+    info->type = DOMAIN_32BIT;
 #endif
 
     return 0;
@@ -329,9 +329,9 @@ static int kernel_try_elf_prepare(struct kernel_info *info,
 
 #ifdef CONFIG_ARM_64
     if ( elf_32bit(&info->elf.elf) )
-        info->type = DOMAIN_PV32;
+        info->type = DOMAIN_32BIT;
     else if ( elf_64bit(&info->elf.elf) )
-        info->type = DOMAIN_PV64;
+        info->type = DOMAIN_64BIT;
     else
     {
         printk("Unknown ELF class\n");
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea77cb8..21efd55 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -296,7 +296,7 @@ static void inject_undef32_exception(struct cpu_user_regs *regs)
     /* Saved PC points to the instruction past the faulting instruction. */
     uint32_t return_offset = is_thumb ? 2 : 4;
 
-    BUG_ON( !is_pv32_domain(current->domain) );
+    BUG_ON( !is_32bit_domain(current->domain) );
 
     /* Update processor mode */
     cpsr_switch_mode(regs, PSR_MODE_UND);
@@ -324,7 +324,7 @@ static void inject_abt32_exception(struct cpu_user_regs *regs,
     uint32_t return_offset = is_thumb ? 4 : 0;
     register_t fsr;
 
-    BUG_ON( !is_pv32_domain(current->domain) );
+    BUG_ON( !is_32bit_domain(current->domain) );
 
     cpsr_switch_mode(regs, PSR_MODE_ABT);
 
@@ -394,7 +394,7 @@ static void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len)
         .ec = HSR_EC_UNKNOWN,
     };
 
-    BUG_ON( is_pv32_domain(current->domain) );
+    BUG_ON( is_32bit_domain(current->domain) );
 
     regs->spsr_el1 = regs->cpsr;
     regs->elr_el1 = regs->pc;
@@ -431,7 +431,7 @@ static void inject_abt64_exception(struct cpu_user_regs *regs,
         esr.ec = prefetch
             ? HSR_EC_INSTR_ABORT_CURR_EL : HSR_EC_DATA_ABORT_CURR_EL;
 
-    BUG_ON( is_pv32_domain(current->domain) );
+    BUG_ON( is_32bit_domain(current->domain) );
 
     regs->spsr_el1 = regs->cpsr;
     regs->elr_el1 = regs->pc;
@@ -464,7 +464,7 @@ static void inject_iabt_exception(struct cpu_user_regs *regs,
                                   register_t addr,
                                   int instr_len)
 {
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             inject_pabt32_exception(regs, addr);
 #ifdef CONFIG_ARM_64
         else
@@ -476,7 +476,7 @@ static void inject_dabt_exception(struct cpu_user_regs *regs,
                                   register_t addr,
                                   int instr_len)
 {
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             inject_dabt32_exception(regs, addr);
 #ifdef CONFIG_ARM_64
         else
@@ -681,10 +681,10 @@ static void _show_registers(struct cpu_user_regs *regs,
 
     if ( guest_mode )
     {
-        if ( is_pv32_domain(v->domain) )
+        if ( is_32bit_domain(v->domain) )
             show_registers_32(regs, ctxt, guest_mode, v);
 #ifdef CONFIG_ARM_64
-        else if ( is_pv64_domain(v->domain) )
+        else if ( is_64bit_domain(v->domain) )
             show_registers_64(regs, ctxt, guest_mode, v);
 #endif
     }
@@ -1232,7 +1232,7 @@ static int check_conditional_instr(struct cpu_user_regs *regs, union hsr hsr)
     {
         unsigned long it;
 
-        BUG_ON( !is_pv32_domain(current->domain) || !(cpsr&PSR_THUMB) );
+        BUG_ON( !is_32bit_domain(current->domain) || !(cpsr&PSR_THUMB) );
 
         it = ( (cpsr >> (10-2)) & 0xfc) | ((cpsr >> 25) & 0x3 );
 
@@ -1257,10 +1257,10 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     unsigned long itbits, cond, cpsr = regs->cpsr;
 
     /* PSR_IT_MASK bits can only be set for 32-bit processors in Thumb mode. */
-    BUG_ON( (!is_pv32_domain(current->domain)||!(cpsr&PSR_THUMB))
+    BUG_ON( (!is_32bit_domain(current->domain)||!(cpsr&PSR_THUMB))
             && (cpsr&PSR_IT_MASK) );
 
-    if ( is_pv32_domain(current->domain) && (cpsr&PSR_IT_MASK) )
+    if ( is_32bit_domain(current->domain) && (cpsr&PSR_IT_MASK) )
     {
         /* The ITSTATE[7:0] block is contained in CPSR[15:10],CPSR[26:25]
          *
@@ -1721,12 +1721,12 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
         advance_pc(regs, hsr);
         break;
     case HSR_EC_CP15_32:
-        if ( ! is_pv32_domain(current->domain) )
+        if ( ! is_32bit_domain(current->domain) )
             goto bad_trap;
         do_cp15_32(regs, hsr);
         break;
     case HSR_EC_CP15_64:
-        if ( ! is_pv32_domain(current->domain) )
+        if ( ! is_32bit_domain(current->domain) )
             goto bad_trap;
         do_cp15_64(regs, hsr);
         break;
@@ -1756,7 +1756,7 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
         inject_undef64_exception(regs, hsr.len);
         break;
     case HSR_EC_SYSREG:
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             goto bad_trap;
         do_sysreg(regs, hsr);
         break;
diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
index c82884f..1ceb8cb 100644
--- a/xen/arch/arm/vpsci.c
+++ b/xen/arch/arm/vpsci.c
@@ -33,7 +33,7 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
         return PSCI_EINVAL;
 
     /* THUMB set is not allowed with 64-bit domain */
-    if ( is_pv64_domain(d) && is_thumb )
+    if ( is_64bit_domain(d) && is_thumb )
         return PSCI_EINVAL;
 
     if ( (ctxt = alloc_vcpu_guest_context()) == NULL )
@@ -47,7 +47,7 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
     ctxt->ttbr0 = 0;
     ctxt->ttbr1 = 0;
     ctxt->ttbcr = 0; /* Defined Reset Value */
-    if ( is_pv32_domain(d) )
+    if ( is_32bit_domain(d) )
         ctxt->user_regs.cpsr = PSR_GUEST32_INIT;
 #ifdef CONFIG_ARM_64
     else
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..3d6a721 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -266,16 +266,16 @@ int vtimer_emulate(struct cpu_user_regs *regs, union hsr hsr)
 
     switch (hsr.ec) {
     case HSR_EC_CP15_32:
-        if ( !is_pv32_domain(current->domain) )
+        if ( !is_32bit_domain(current->domain) )
             return 0;
         return vtimer_emulate_cp32(regs, hsr);
     case HSR_EC_CP15_64:
-        if ( !is_pv32_domain(current->domain) )
+        if ( !is_32bit_domain(current->domain) )
             return 0;
         return vtimer_emulate_cp64(regs, hsr);
 #ifdef CONFIG_ARM_64
     case HSR_EC_SYSREG:
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             return 0;
         return vtimer_emulate_sysreg(regs, hsr);
 #endif
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..65173d8 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -76,14 +76,14 @@ struct hvm_domain
 
 #ifdef CONFIG_ARM_64
 enum domain_type {
-    DOMAIN_PV32,
-    DOMAIN_PV64,
+    DOMAIN_32BIT,
+    DOMAIN_64BIT,
 };
-#define is_pv32_domain(d) ((d)->arch.type == DOMAIN_PV32)
-#define is_pv64_domain(d) ((d)->arch.type == DOMAIN_PV64)
+#define is_32bit_domain(d) ((d)->arch.type == DOMAIN_32BIT)
+#define is_64bit_domain(d) ((d)->arch.type == DOMAIN_64BIT)
 #else
-#define is_pv32_domain(d) (1)
-#define is_pv64_domain(d) (0)
+#define is_32bit_domain(d) (1)
+#define is_64bit_domain(d) (0)
 #endif
 
 extern int dom0_11_mapping;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:41:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:41:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeNy-0000B1-8o; Tue, 04 Feb 2014 11:41:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAeNw-0000Aw-NO
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:41:49 +0000
Received: from [85.158.137.68:39254] by server-10.bemta-3.messagelabs.com id
	A8/FD-07302-AF1D0F25; Tue, 04 Feb 2014 11:41:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391514103!13282813!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19863 invoked from network); 4 Feb 2014 11:41:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 11:41:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="99599536"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 11:41:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 06:41:37 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAeNl-00048y-1x;
	Tue, 04 Feb 2014 11:41:37 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 11:41:36 +0000
Message-ID: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH FOR-4.5] xen: arm: avoid "PV" terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xen on ARM guests are neither PV nor HVM; they are just "guests". Avoid the
incorrect use of the term "PV" in the guest type macros.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
Common code still has is_pv_domain, is_hvm_domain and even is_pvh_domain.

These should be made x86-specific, and common code should instead use specific
feature tests (e.g. on paging mode); I might tackle that later.
---
 xen/arch/arm/arm64/domain.c  |    4 ++--
 xen/arch/arm/arm64/domctl.c  |    4 ++--
 xen/arch/arm/decode.c        |    2 +-
 xen/arch/arm/domain.c        |   18 +++++++++---------
 xen/arch/arm/domain_build.c  |    6 +++---
 xen/arch/arm/kernel.c        |    8 ++++----
 xen/arch/arm/traps.c         |   28 ++++++++++++++--------------
 xen/arch/arm/vpsci.c         |    4 ++--
 xen/arch/arm/vtimer.c        |    6 +++---
 xen/include/asm-arm/domain.h |   12 ++++++------
 10 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/xen/arch/arm/arm64/domain.c b/xen/arch/arm/arm64/domain.c
index 6990a7b..ccba21f 100644
--- a/xen/arch/arm/arm64/domain.c
+++ b/xen/arch/arm/arm64/domain.c
@@ -29,7 +29,7 @@ void vcpu_regs_hyp_to_user(const struct vcpu *vcpu,
 {
 #define C(hyp,user) regs->user = vcpu->arch.cpu_info->guest_cpu_user_regs.hyp
     ALLREGS;
-    if ( is_pv32_domain(vcpu->domain) )
+    if ( is_32bit_domain(vcpu->domain) )
     {
         ALLREGS32;
     }
@@ -45,7 +45,7 @@ void vcpu_regs_user_to_hyp(struct vcpu *vcpu,
 {
 #define C(hyp,user) vcpu->arch.cpu_info->guest_cpu_user_regs.hyp = regs->user
     ALLREGS;
-    if ( is_pv32_domain(vcpu->domain) )
+    if ( is_32bit_domain(vcpu->domain) )
     {
         ALLREGS32;
     }
diff --git a/xen/arch/arm/arm64/domctl.c b/xen/arch/arm/arm64/domctl.c
index e2b4617..41e2562 100644
--- a/xen/arch/arm/arm64/domctl.c
+++ b/xen/arch/arm/arm64/domctl.c
@@ -35,9 +35,9 @@ long subarch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         switch ( domctl->u.address_size.size )
         {
         case 32:
-            return switch_mode(d, DOMAIN_PV32);
+            return switch_mode(d, DOMAIN_32BIT);
         case 64:
-            return switch_mode(d, DOMAIN_PV64);
+            return switch_mode(d, DOMAIN_64BIT);
         default:
             return -EINVAL;
         }
diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 8880c39..9d237f8 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -151,7 +151,7 @@ bad_thumb:
 
 int decode_instruction(const struct cpu_user_regs *regs, struct hsr_dabt *dabt)
 {
-    if ( is_pv32_domain(current->domain) && regs->cpsr & PSR_THUMB )
+    if ( is_32bit_domain(current->domain) && regs->cpsr & PSR_THUMB )
         return decode_thumb(regs->pc, dabt);
 
     /* TODO: Handle ARM instruction */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..1e4f298 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -75,7 +75,7 @@ static void ctxt_switch_from(struct vcpu *p)
     /* Arch timer */
     virt_timer_save(p);
 
-    if ( is_pv32_domain(p->domain) && cpu_has_thumbee )
+    if ( is_32bit_domain(p->domain) && cpu_has_thumbee )
     {
         p->arch.teecr = READ_SYSREG32(TEECR32_EL1);
         p->arch.teehbr = READ_SYSREG32(TEEHBR32_EL1);
@@ -93,7 +93,7 @@ static void ctxt_switch_from(struct vcpu *p)
     p->arch.ttbcr = READ_SYSREG(TCR_EL1);
     p->arch.ttbr0 = READ_SYSREG64(TTBR0_EL1);
     p->arch.ttbr1 = READ_SYSREG64(TTBR1_EL1);
-    if ( is_pv32_domain(p->domain) )
+    if ( is_32bit_domain(p->domain) )
         p->arch.dacr = READ_SYSREG(DACR32_EL2);
     p->arch.par = READ_SYSREG64(PAR_EL1);
 #if defined(CONFIG_ARM_32)
@@ -116,7 +116,7 @@ static void ctxt_switch_from(struct vcpu *p)
     p->arch.esr = READ_SYSREG64(ESR_EL1);
 #endif
 
-    if ( is_pv32_domain(p->domain) )
+    if ( is_32bit_domain(p->domain) )
         p->arch.ifsr  = READ_SYSREG(IFSR32_EL2);
     p->arch.afsr0 = READ_SYSREG(AFSR0_EL1);
     p->arch.afsr1 = READ_SYSREG(AFSR1_EL1);
@@ -165,7 +165,7 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG64(n->arch.esr, ESR_EL1);
 #endif
 
-    if ( is_pv32_domain(n->domain) )
+    if ( is_32bit_domain(n->domain) )
         WRITE_SYSREG(n->arch.ifsr, IFSR32_EL2);
     WRITE_SYSREG(n->arch.afsr0, AFSR0_EL1);
     WRITE_SYSREG(n->arch.afsr1, AFSR1_EL1);
@@ -175,7 +175,7 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG(n->arch.ttbcr, TCR_EL1);
     WRITE_SYSREG64(n->arch.ttbr0, TTBR0_EL1);
     WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
-    if ( is_pv32_domain(n->domain) )
+    if ( is_32bit_domain(n->domain) )
         WRITE_SYSREG(n->arch.dacr, DACR32_EL2);
     WRITE_SYSREG64(n->arch.par, PAR_EL1);
 #if defined(CONFIG_ARM_32)
@@ -198,7 +198,7 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG(n->arch.tpidrro_el0, TPIDRRO_EL0);
     WRITE_SYSREG(n->arch.tpidr_el1, TPIDR_EL1);
 
-    if ( is_pv32_domain(n->domain) && cpu_has_thumbee )
+    if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
     {
         WRITE_SYSREG32(n->arch.teecr, TEECR32_EL1);
         WRITE_SYSREG32(n->arch.teehbr, TEEHBR32_EL1);
@@ -215,7 +215,7 @@ static void ctxt_switch_to(struct vcpu *n)
 
     isb();
 
-    if ( is_pv32_domain(n->domain) )
+    if ( is_32bit_domain(n->domain) )
         hcr &= ~HCR_RW;
     else
         hcr |= HCR_RW;
@@ -263,7 +263,7 @@ static void continue_new_vcpu(struct vcpu *prev)
 
     if ( is_idle_vcpu(current) )
         reset_stack_and_jump(idle_loop);
-    else if is_pv32_domain(current->domain)
+    else if is_32bit_domain(current->domain)
         /* check_wakeup_from_wait(); */
         reset_stack_and_jump(return_to_new_vcpu32);
     else
@@ -625,7 +625,7 @@ int arch_set_info_guest(
     struct vcpu_guest_context *ctxt = c.nat;
     struct vcpu_guest_core_regs *regs = &c.nat->user_regs;
 
-    if ( is_pv32_domain(v->domain) )
+    if ( is_32bit_domain(v->domain) )
     {
         if ( !is_guest_pv32_psr(regs->cpsr) )
             return -EINVAL;
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..c4db604 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -504,7 +504,7 @@ static int make_cpus_node(const struct domain *d, void *fdt,
                 return res;
         }
 
-        if ( is_pv64_domain(d) )
+        if ( is_64bit_domain(d) )
         {
             res = fdt_property_string(fdt, "enable-method", "psci");
             if ( res )
@@ -1021,7 +1021,7 @@ int construct_dom0(struct domain *d)
     p2m_load_VTTBR(d);
 #ifdef CONFIG_ARM_64
     d->arch.type = kinfo.type;
-    if ( is_pv32_domain(d) )
+    if ( is_32bit_domain(d) )
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~HCR_RW, HCR_EL2);
     else
         WRITE_SYSREG(READ_SYSREG(HCR_EL2) | HCR_RW, HCR_EL2);
@@ -1045,7 +1045,7 @@ int construct_dom0(struct domain *d)
 
     regs->pc = (register_t)kinfo.entry;
 
-    if ( is_pv32_domain(d) )
+    if ( is_32bit_domain(d) )
     {
         regs->cpsr = PSR_GUEST32_INIT;
 
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..8b6709c 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -209,7 +209,7 @@ static int kernel_try_zimage64_prepare(struct kernel_info *info,
     info->entry = info->zimage.load_addr;
     info->load = kernel_zimage_load;
 
-    info->type = DOMAIN_PV64;
+    info->type = DOMAIN_64BIT;
 
     return 0;
 }
@@ -281,7 +281,7 @@ static int kernel_try_zimage32_prepare(struct kernel_info *info,
     info->load = kernel_zimage_load;
 
 #ifdef CONFIG_ARM_64
-    info->type = DOMAIN_PV32;
+    info->type = DOMAIN_32BIT;
 #endif
 
     return 0;
@@ -329,9 +329,9 @@ static int kernel_try_elf_prepare(struct kernel_info *info,
 
 #ifdef CONFIG_ARM_64
     if ( elf_32bit(&info->elf.elf) )
-        info->type = DOMAIN_PV32;
+        info->type = DOMAIN_32BIT;
     else if ( elf_64bit(&info->elf.elf) )
-        info->type = DOMAIN_PV64;
+        info->type = DOMAIN_64BIT;
     else
     {
         printk("Unknown ELF class\n");
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea77cb8..21efd55 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -296,7 +296,7 @@ static void inject_undef32_exception(struct cpu_user_regs *regs)
     /* Saved PC points to the instruction past the faulting instruction. */
     uint32_t return_offset = is_thumb ? 2 : 4;
 
-    BUG_ON( !is_pv32_domain(current->domain) );
+    BUG_ON( !is_32bit_domain(current->domain) );
 
     /* Update processor mode */
     cpsr_switch_mode(regs, PSR_MODE_UND);
@@ -324,7 +324,7 @@ static void inject_abt32_exception(struct cpu_user_regs *regs,
     uint32_t return_offset = is_thumb ? 4 : 0;
     register_t fsr;
 
-    BUG_ON( !is_pv32_domain(current->domain) );
+    BUG_ON( !is_32bit_domain(current->domain) );
 
     cpsr_switch_mode(regs, PSR_MODE_ABT);
 
@@ -394,7 +394,7 @@ static void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len)
         .ec = HSR_EC_UNKNOWN,
     };
 
-    BUG_ON( is_pv32_domain(current->domain) );
+    BUG_ON( is_32bit_domain(current->domain) );
 
     regs->spsr_el1 = regs->cpsr;
     regs->elr_el1 = regs->pc;
@@ -431,7 +431,7 @@ static void inject_abt64_exception(struct cpu_user_regs *regs,
         esr.ec = prefetch
             ? HSR_EC_INSTR_ABORT_CURR_EL : HSR_EC_DATA_ABORT_CURR_EL;
 
-    BUG_ON( is_pv32_domain(current->domain) );
+    BUG_ON( is_32bit_domain(current->domain) );
 
     regs->spsr_el1 = regs->cpsr;
     regs->elr_el1 = regs->pc;
@@ -464,7 +464,7 @@ static void inject_iabt_exception(struct cpu_user_regs *regs,
                                   register_t addr,
                                   int instr_len)
 {
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             inject_pabt32_exception(regs, addr);
 #ifdef CONFIG_ARM_64
         else
@@ -476,7 +476,7 @@ static void inject_dabt_exception(struct cpu_user_regs *regs,
                                   register_t addr,
                                   int instr_len)
 {
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             inject_dabt32_exception(regs, addr);
 #ifdef CONFIG_ARM_64
         else
@@ -681,10 +681,10 @@ static void _show_registers(struct cpu_user_regs *regs,
 
     if ( guest_mode )
     {
-        if ( is_pv32_domain(v->domain) )
+        if ( is_32bit_domain(v->domain) )
             show_registers_32(regs, ctxt, guest_mode, v);
 #ifdef CONFIG_ARM_64
-        else if ( is_pv64_domain(v->domain) )
+        else if ( is_64bit_domain(v->domain) )
             show_registers_64(regs, ctxt, guest_mode, v);
 #endif
     }
@@ -1232,7 +1232,7 @@ static int check_conditional_instr(struct cpu_user_regs *regs, union hsr hsr)
     {
         unsigned long it;
 
-        BUG_ON( !is_pv32_domain(current->domain) || !(cpsr&PSR_THUMB) );
+        BUG_ON( !is_32bit_domain(current->domain) || !(cpsr&PSR_THUMB) );
 
         it = ( (cpsr >> (10-2)) & 0xfc) | ((cpsr >> 25) & 0x3 );
 
@@ -1257,10 +1257,10 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     unsigned long itbits, cond, cpsr = regs->cpsr;
 
     /* PSR_IT_MASK bits can only be set for 32-bit processors in Thumb mode. */
-    BUG_ON( (!is_pv32_domain(current->domain)||!(cpsr&PSR_THUMB))
+    BUG_ON( (!is_32bit_domain(current->domain)||!(cpsr&PSR_THUMB))
             && (cpsr&PSR_IT_MASK) );
 
-    if ( is_pv32_domain(current->domain) && (cpsr&PSR_IT_MASK) )
+    if ( is_32bit_domain(current->domain) && (cpsr&PSR_IT_MASK) )
     {
         /* The ITSTATE[7:0] block is contained in CPSR[15:10],CPSR[26:25]
          *
@@ -1721,12 +1721,12 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
         advance_pc(regs, hsr);
         break;
     case HSR_EC_CP15_32:
-        if ( ! is_pv32_domain(current->domain) )
+        if ( ! is_32bit_domain(current->domain) )
             goto bad_trap;
         do_cp15_32(regs, hsr);
         break;
     case HSR_EC_CP15_64:
-        if ( ! is_pv32_domain(current->domain) )
+        if ( ! is_32bit_domain(current->domain) )
             goto bad_trap;
         do_cp15_64(regs, hsr);
         break;
@@ -1756,7 +1756,7 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
         inject_undef64_exception(regs, hsr.len);
         break;
     case HSR_EC_SYSREG:
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             goto bad_trap;
         do_sysreg(regs, hsr);
         break;
diff --git a/xen/arch/arm/vpsci.c b/xen/arch/arm/vpsci.c
index c82884f..1ceb8cb 100644
--- a/xen/arch/arm/vpsci.c
+++ b/xen/arch/arm/vpsci.c
@@ -33,7 +33,7 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
         return PSCI_EINVAL;
 
     /* THUMB set is not allowed with 64-bit domain */
-    if ( is_pv64_domain(d) && is_thumb )
+    if ( is_64bit_domain(d) && is_thumb )
         return PSCI_EINVAL;
 
     if ( (ctxt = alloc_vcpu_guest_context()) == NULL )
@@ -47,7 +47,7 @@ int do_psci_cpu_on(uint32_t vcpuid, register_t entry_point)
     ctxt->ttbr0 = 0;
     ctxt->ttbr1 = 0;
     ctxt->ttbcr = 0; /* Defined Reset Value */
-    if ( is_pv32_domain(d) )
+    if ( is_32bit_domain(d) )
         ctxt->user_regs.cpsr = PSR_GUEST32_INIT;
 #ifdef CONFIG_ARM_64
     else
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..3d6a721 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -266,16 +266,16 @@ int vtimer_emulate(struct cpu_user_regs *regs, union hsr hsr)
 
     switch (hsr.ec) {
     case HSR_EC_CP15_32:
-        if ( !is_pv32_domain(current->domain) )
+        if ( !is_32bit_domain(current->domain) )
             return 0;
         return vtimer_emulate_cp32(regs, hsr);
     case HSR_EC_CP15_64:
-        if ( !is_pv32_domain(current->domain) )
+        if ( !is_32bit_domain(current->domain) )
             return 0;
         return vtimer_emulate_cp64(regs, hsr);
 #ifdef CONFIG_ARM_64
     case HSR_EC_SYSREG:
-        if ( is_pv32_domain(current->domain) )
+        if ( is_32bit_domain(current->domain) )
             return 0;
         return vtimer_emulate_sysreg(regs, hsr);
 #endif
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..65173d8 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -76,14 +76,14 @@ struct hvm_domain
 
 #ifdef CONFIG_ARM_64
 enum domain_type {
-    DOMAIN_PV32,
-    DOMAIN_PV64,
+    DOMAIN_32BIT,
+    DOMAIN_64BIT,
 };
-#define is_pv32_domain(d) ((d)->arch.type == DOMAIN_PV32)
-#define is_pv64_domain(d) ((d)->arch.type == DOMAIN_PV64)
+#define is_32bit_domain(d) ((d)->arch.type == DOMAIN_32BIT)
+#define is_64bit_domain(d) ((d)->arch.type == DOMAIN_64BIT)
 #else
-#define is_pv32_domain(d) (1)
-#define is_pv64_domain(d) (0)
+#define is_32bit_domain(d) (1)
+#define is_64bit_domain(d) (0)
 #endif
 
 extern int dom0_11_mapping;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:45:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeQw-0000Li-4J; Tue, 04 Feb 2014 11:44:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mgorman@suse.de>) id 1WAeQu-0000LY-Qp
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 11:44:53 +0000
Received: from [193.109.254.147:18105] by server-5.bemta-14.messagelabs.com id
	91/47-16688-4B2D0F25; Tue, 04 Feb 2014 11:44:52 +0000
X-Env-Sender: mgorman@suse.de
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391514291!1895434!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18092 invoked from network); 4 Feb 2014 11:44:51 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 11:44:51 -0000
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id B42FDAC6E;
	Tue,  4 Feb 2014 11:44:50 +0000 (UTC)
Date: Tue, 4 Feb 2014 11:44:45 +0000
From: Mel Gorman <mgorman@suse.de>
To: Ingo Molnar <mingo@redhat.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140204114445.GM6732@suse.de>
References: <20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140124133830.GU4963@suse.de>
	<CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
	<CAEr7rXgCoYW7O0E38YGCThUKgmxYFfLfYP-x_KSzVBLOCiHeDg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXgCoYW7O0E38YGCThUKgmxYFfLfYP-x_KSzVBLOCiHeDg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>, Linux-X86 <x86@kernel.org>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>, Elena Ufimtseva <ufimtseva@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Steven Noonan <steven@uplinklabs.net>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [Xen-devel] [PATCH] xen: Properly account for
 _PAGE_NUMA during xen pte translations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Steven Noonan forwarded a user's report of a problem starting vsftpd on a
Xen paravirtualized guest, with this in dmesg:

[   60.654862] BUG: Bad page map in process vsftpd  pte:8000000493b88165 pmd:e9cc01067
[   60.654876] page:ffffea00124ee200 count:0 mapcount:-1 mapping:     (null) index:0x0
[   60.654879] page flags: 0x2ffc0000000014(referenced|dirty)
[   60.654885] addr:00007f97eea74000 vm_flags:00100071 anon_vma:ffff880e98f80380 mapping:          (null) index:7f97eea74
[   60.654890] CPU: 4 PID: 587 Comm: vsftpd Not tainted 3.12.7-1-ec2 #1
[   60.654893]  ffff880e9cc6ec38 ffff880e9cc61ca0 ffffffff814c763b 00007f97eea74000
[   60.654900]  ffff880e9cc61ce8 ffffffff8116784e 0000000000000000 0000000000000000
[   60.654906]  ffff880e9cc013a0 ffffea00124ee200 00007f97eea75000 ffff880e9cc61e10
[   60.654912] Call Trace:
[   60.654921]  [<ffffffff814c763b>] dump_stack+0x45/0x56
[   60.654928]  [<ffffffff8116784e>] print_bad_pte+0x22e/0x250
[   60.654933]  [<ffffffff81169073>] unmap_single_vma+0x583/0x890
[   60.654938]  [<ffffffff8116a405>] unmap_vmas+0x65/0x90
[   60.654942]  [<ffffffff81173795>] exit_mmap+0xc5/0x170
[   60.654948]  [<ffffffff8105d295>] mmput+0x65/0x100
[   60.654952]  [<ffffffff81062983>] do_exit+0x393/0x9e0
[   60.654955]  [<ffffffff810630dc>] do_group_exit+0xcc/0x140
[   60.654959]  [<ffffffff81063164>] SyS_exit_group+0x14/0x20
[   60.654965]  [<ffffffff814d602d>] system_call_fastpath+0x1a/0x1f
[   60.654968] Disabling lock debugging due to kernel taint
[   60.655191] BUG: Bad rss-counter state mm:ffff880e9ca60580 idx:0 val:-1
[   60.655196] BUG: Bad rss-counter state mm:ffff880e9ca60580 idx:1 val:1

The issue could not be reproduced under an HVM instance with the same kernel,
so it appears to be exclusive to paravirtual Xen guests. He bisected the
problem to commit 1667918b (mm: numa: clear numa hinting information on
mprotect) that was also included in 3.12-stable.

The problem was in how Xen translates PTEs: the translation was not
accounting for the _PAGE_NUMA bit. This patch splits pte_present to add
a pteval_present helper for use by Xen, so that both bare metal and Xen
use the same code when checking whether a PTE is present.

[mgorman@suse.de: Wrote changelog, proposed minor modifications]
Reported-and-tested-by: Steven Noonan <steven@uplinklabs.net>
Signed-off-by: Elena Ufimtseva <ufimtseva@gmail.com>
Signed-off-by: Mel Gorman <mgorman@suse.de>
Cc: stable@vger.kernel.org # 3.12+
---
 arch/x86/include/asm/pgtable.h | 14 ++++++++++++--
 arch/x86/xen/mmu.c             |  4 ++--
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index bbc8b12..19e3706 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -445,10 +445,20 @@ static inline int pte_same(pte_t a, pte_t b)
 	return a.pte == b.pte;
 }
 
+static inline int pteval_present(pteval_t pteval)
+{
+	/*
+	 * Yes Linus, _PAGE_PROTNONE == _PAGE_NUMA. Expressing it this
+	 * way clearly states that the intent is that a protnone and numa
+	 * hinting ptes are considered present for the purposes of
+	 * pagetable operations like zapping, protection changes, gup etc.
+	 */
+	return pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA);
+}
+
 static inline int pte_present(pte_t a)
 {
-	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
-			       _PAGE_NUMA);
+	return pteval_present(pte_flags(a));
 }
 
 #define pte_accessible pte_accessible
diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 2423ef0..256282e 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -365,7 +365,7 @@ void xen_ptep_modify_prot_commit(struct mm_struct *mm, unsigned long addr,
 /* Assume pteval_t is equivalent to all the other *val_t types. */
 static pteval_t pte_mfn_to_pfn(pteval_t val)
 {
-	if (val & _PAGE_PRESENT) {
+	if (pteval_present(val)) {
 		unsigned long mfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		unsigned long pfn = mfn_to_pfn(mfn);
 
@@ -381,7 +381,7 @@ static pteval_t pte_mfn_to_pfn(pteval_t val)
 
 static pteval_t pte_pfn_to_mfn(pteval_t val)
 {
-	if (val & _PAGE_PRESENT) {
+	if (pteval_present(val)) {
 		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
 		pteval_t flags = val & PTE_FLAGS_MASK;
 		unsigned long mfn;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:48:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:48:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeUn-0000aS-Fk; Tue, 04 Feb 2014 11:48:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAeUl-0000aJ-OC
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 11:48:51 +0000
Received: from [85.158.139.211:32074] by server-11.bemta-5.messagelabs.com id
	1E/B3-23886-3A3D0F25; Tue, 04 Feb 2014 11:48:51 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391514528!1549980!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29169 invoked from network); 4 Feb 2014 11:48:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 11:48:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97689259"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 11:48:48 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	06:48:47 -0500
Message-ID: <52F0D39D.4050409@citrix.com>
Date: Tue, 4 Feb 2014 11:48:45 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Mel Gorman <mgorman@suse.de>
References: <20140122032045.GA22182@falcon.amazon.com>
	<20140122050215.GC9931@konrad-lan.dumpdata.com>
	<20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140124133830.GU4963@suse.de>
	<CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
	<CAEr7rXgCoYW7O0E38YGCThUKgmxYFfLfYP-x_KSzVBLOCiHeDg@mail.gmail.com>
	<20140204114445.GM6732@suse.de>
In-Reply-To: <20140204114445.GM6732@suse.de>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Steven Noonan <steven@uplinklabs.net>, Linux-X86 <x86@kernel.org>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>, Ingo Molnar <mingo@redhat.com>,
	Elena Ufimtseva <ufimtseva@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH] xen: Properly account for
 _PAGE_NUMA during xen pte translations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 11:44, Mel Gorman wrote:
> Steven Noonan forwarded a users report where they had a problem starting
> vsftpd on a Xen paravirtualized guest, with this in dmesg:
> 
[...]
> 
> The issue could not be reproduced under an HVM instance with the same kernel,
> so it appears to be exclusive to paravirtual Xen guests. He bisected the
> problem to commit 1667918b (mm: numa: clear numa hinting information on
> mprotect) that was also included in 3.12-stable.
> 
> The problem was related to how xen translates ptes because it was not
> accounting for the _PAGE_NUMA bit. This patch splits pte_present to add
> a pteval_present helper for use by xen so both bare metal and xen use
> the same code when checking if a PTE is present.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:49:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeUy-0000cJ-TH; Tue, 04 Feb 2014 11:49:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAeUx-0000bw-IK
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:49:03 +0000
Received: from [85.158.143.35:23703] by server-3.bemta-4.messagelabs.com id
	48/3B-11539-EA3D0F25; Tue, 04 Feb 2014 11:49:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391514542!3012154!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27066 invoked from network); 4 Feb 2014 11:49:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 11:49:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 11:49:01 +0000
Message-Id: <52F0E1BB0200007800118F8E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 11:48:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-16-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-16-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 15/17] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:09, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> Add support for using NMIs as PMU interrupts.
> 
> Most of processing is still performed by vpmu_do_interrupt(). However, since
> certain operations are not NMI-safe we defer them to a softint that 
> vpmu_do_interrupt()
> will schedule:
> * For PV guests that would be send_guest_vcpu_virq() and 
> hvm_get_segment_register().

Makes no sense - why would hvm_get_segment_register() be of any
relevance to PV guests?

And then I'm still missing a reasonable level of analysis that the
previously non-NMI-only interrupt handler is now safe to use in NMI
context.

> +uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;

Considering that you store APIC_DM_NMI into this variable in the
NMI case, it needs to be named differently (or else I'd be tempted
to convert it to uint8_t the first time I stumble across it).

> +static void vpmu_send_nmi(struct vcpu *v)
> +{
> +    struct vlapic *vlapic = vcpu_vlapic(v);

Please ASSERT() that you have HVM data available before doing
anything that would be unsafe in PV (and maybe PVH?) context.
This will then at once serve as documentation, clarifying that the
function must only be used for suitable vCPU-s.

> +/* Process the softirq set by PMU NMI handler */
> +static void pmu_softnmi(void)
> +{
> +    struct cpu_user_regs *regs;
> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
> +
> +    if ( vpmu_mode & XENPMU_MODE_PRIV ||

() around the & please.

>  static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>  {
>      struct vcpu *v;
>      struct page_info *page;
>      uint64_t gmfn = params->d.val;
> -
> +    static int pvpmu_initted = 0;

bool_t? __read_mostly?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> vpmu_do_interrupt()
> will schedule:
> * For PV guests that would be send_guest_vcpu_virq() and 
> hvm_get_segment_register().

Makes no sense - why would hvm_get_segment_register() be of any
relevance to PV guests?

And then I'm still missing a reasonable level of analysis that the
previously non-NMI-only interrupt handler is now safe to use in NMI
context.

> +uint32_t vpmu_apic_vector = PMU_APIC_VECTOR;

Considering that you store APIC_DM_NMI into this variable in the
NMI case, it needs to be named differently (or else I'd be tempted
to convert it to uint8_t the first time I stumble across it).

> +static void vpmu_send_nmi(struct vcpu *v)
> +{
> +    struct vlapic *vlapic = vcpu_vlapic(v);

Please ASSERT() that you have HVM data available before doing
anything that would be unsafe in PV (and maybe PVH?) context.
This will then at once serve as documentation, clarifying that the
function must only be used for suitable vCPU-s.
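
As a sketch of the suggested shape (everything below is a stand-alone stub for illustration; the real code would use Xen's struct vcpu, ASSERT() and is_hvm_vcpu() rather than these stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

/* Stub stand-ins for Xen's struct vcpu and is_hvm_vcpu(). */
struct vcpu { bool is_hvm; };

static bool is_hvm_vcpu(const struct vcpu *v)
{
    return v->is_hvm;
}

/* Asserting the precondition up front both catches misuse in debug
 * builds and documents that only HVM vCPUs may reach this function. */
static void vpmu_send_nmi(struct vcpu *v)
{
    assert(is_hvm_vcpu(v));
    /* ... fetch the vLAPIC and deliver the NMI ... */
}
```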

> +/* Process the softirq set by PMU NMI handler */
> +static void pmu_softnmi(void)
> +{
> +    struct cpu_user_regs *regs;
> +    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
> +
> +    if ( vpmu_mode & XENPMU_MODE_PRIV ||

() around the & please.
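
For illustration, the parenthesized form would read as below (XENPMU_MODE_PRIV's value here is made up; since `&` already binds tighter than `||`, the parentheses don't change the parse, they make the intent explicit and silence compilers that warn about `&` inside `||`):

```c
#include <assert.h>

enum { XENPMU_MODE_PRIV = 1 << 1 };  /* illustrative value only */

/* Stand-in for the condition in pmu_softnmi(): take the privileged
 * path if the mode flag is set or some other condition holds. */
static int take_priv_path(unsigned int vpmu_mode, int other_cond)
{
    return (vpmu_mode & XENPMU_MODE_PRIV) || other_cond;
}
```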

>  static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
>  {
>      struct vcpu *v;
>      struct page_info *page;
>      uint64_t gmfn = params->d.val;
> -
> +    static int pvpmu_initted = 0;

bool_t? __read_mostly?
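
The suggested declaration would look something like this (bool_t and __read_mostly are Xen's own definitions, stubbed here so the fragment stands alone):

```c
#include <assert.h>

/* Stubs for Xen's bool_t and __read_mostly section attribute. */
typedef unsigned char bool_t;
#define __read_mostly /* __section(".data.read_mostly") in real Xen */

/* Written once on first initialization, read on every later call:
 * a natural candidate for bool_t in the read-mostly data section. */
static bool_t __read_mostly pvpmu_initted;
```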

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 11:51:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 11:51:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAeXX-0000ro-GS; Tue, 04 Feb 2014 11:51:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAeXW-0000rd-1J
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 11:51:42 +0000
Received: from [85.158.143.35:2667] by server-3.bemta-4.messagelabs.com id
	D1/E0-11539-D44D0F25; Tue, 04 Feb 2014 11:51:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391514698!3017653!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17655 invoked from network); 4 Feb 2014 11:51:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 11:51:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 11:51:37 +0000
Message-Id: <52F0E2570200007800118FA5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 11:51:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-17-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1390331342-3967-17-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 16/17] x86/VPMU: Support for PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.01.14 at 20:09, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> +        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
> +            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
> +                return 0;

Please fold chained if()s like this one.
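
Folded, the quoted hunk becomes a single condition; with the predicates stubbed as plain ints for illustration, the control flow is:

```c
#include <assert.h>

/* Stub flags standing in for is_pvh_domain(current->domain),
 * vpmu_mode & XENPMU_MODE_PRIV and do_interrupt(regs). */
static int vpmu_interrupt_result(int is_pvh, int priv_mode, int handled)
{
    /* One folded if() instead of two nested ones: */
    if ( is_pvh && !priv_mode && !handled )
        return 0;
    return 1;
}
```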

> @@ -237,7 +242,7 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>              else if ( !is_control_domain(current->domain) &&
>                        !is_idle_vcpu(current) )
>              {
> -                /* PV guest */
> +                /* PV(H) guest */

I would have expected PVH guests to use the HVM paths here, not
the PV ones. Can you clarify why you do it the other way around?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 12:39:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 12:39:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAfHT-0002e4-W9; Tue, 04 Feb 2014 12:39:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1WAfHR-0002dz-HP
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 12:39:11 +0000
Received: from [85.158.143.35:63152] by server-1.bemta-4.messagelabs.com id
	5F/FA-31661-C6FD0F25; Tue, 04 Feb 2014 12:39:08 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391517546!3033707!1
X-Originating-IP: [64.18.0.149]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9972 invoked from network); 4 Feb 2014 12:39:07 -0000
Received: from exprod5og117.obsmtp.com (HELO exprod5og117.obsmtp.com)
	(64.18.0.149)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 12:39:07 -0000
Received: from mail-oa0-f54.google.com ([209.85.219.54]) (using TLSv1) by
	exprod5ob117.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUvDfaQCzws7RwyxCSYeOMS3rsbKZZl/5@postini.com;
	Tue, 04 Feb 2014 04:39:07 PST
Received: by mail-oa0-f54.google.com with SMTP id i4so9746729oah.13
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 04:39:05 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=0HsE7Cbd5yT04xlxodZtFt2Y6jDhG0VMqxTY+Yks6LA=;
	b=USRDNZFpx3MgMRNkFPkBbG2Y+Ikz5rnU6swcRaTcDHvbhTUVoviOouU+nwkyHoPzW2
	ihsEdxsKeh5dCPKliJpvVM//sL7Js2P/8lfyyeIy2m7tVD+oNozY+qzU7EOJ4reaZ2Fh
	nmdWy8z3vOlybS2ZGBM5aqcshmeUa66U5E6Fs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=0HsE7Cbd5yT04xlxodZtFt2Y6jDhG0VMqxTY+Yks6LA=;
	b=c4kInzXw8zlqceIh8tqikGeTAqmHOUNpXivwXGjPwKAnsBMXf415TPO8SHrbqbSWvA
	xbvFrjAhcUNFyQ8FWYXaxb3B4lsa5mfH69HZJeiemf5AsaCJ4bjkGBURv2lE6EujPR8Z
	aORlEhYUnnZWxa2Oqi2MW9MzlAvtrWyJBri7Atb0U2ABR9HBxxVnS0x+/5EaEIxhun11
	CbCDlZmu+csX8M66Nausz+nuHWM0WHaJ/fk8yaEgeBpNsSSxtQW9/3aRUkGCahS5l7Rt
	9JLrCQ32NXueuXD1rIPs3OFKTZAA3dInTlfXOQpCB9PDq+I8PzcghOMoMfrc6eyQZzct
	8xIg==
X-Gm-Message-State: ALoCoQkj5A2ZqFj8G+VvXSm0adiBU21b1UoYcfmJfqhe/kV0jGdI29INXhktpXvLzc6Y+TxxDQlKwAz2V4+j0/qF0++cdXB9WHqxXdxtHG4C4GRoS6eKXslsUYhj0xtsXJmQkhOg9T6P3ANmJwXTFtZoU+3wmE3pT4jP6tgtotsP+2W+rpfYpj4=
X-Received: by 10.182.113.195 with SMTP id ja3mr7149053obb.46.1391517545284;
	Tue, 04 Feb 2014 04:39:05 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.113.195 with SMTP id ja3mr7149044obb.46.1391517545196;
	Tue, 04 Feb 2014 04:39:05 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Tue, 4 Feb 2014 04:39:05 -0800 (PST)
In-Reply-To: <52EFFCF5.5070108@linaro.org>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
	<52EFFCF5.5070108@linaro.org>
Date: Tue, 4 Feb 2014 14:39:05 +0200
Message-ID: <CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5844740207262264110=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5844740207262264110==
Content-Type: multipart/alternative; boundary=089e013d0db0fb029b04f193ec45

--089e013d0db0fb029b04f193ec45
Content-Type: text/plain; charset=ISO-8859-1

Well,

> To support xentrace on ARM, we will need at least:

I would readily do that if you give me some directions on where to look,
or a high-level explanation of:

> - to replace rcu_lock_domain_by_any_id() by a similar function

What semantics should this function have?

> - to add stubs for trace in arm code

Is there an example of what functionality should be stubbed in arm code?

Thanks for the answer and best regards,
  Pavlo

--089e013d0db0fb029b04f193ec45--


--===============5844740207262264110==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5844740207262264110==--


From xen-devel-bounces@lists.xen.org Tue Feb 04 13:02:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 13:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAfdh-0003HM-DQ; Tue, 04 Feb 2014 13:02:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAfdf-0003HE-Py
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 13:02:08 +0000
Received: from [85.158.137.68:10641] by server-9.bemta-3.messagelabs.com id
	00/21-10184-EC4E0F25; Tue, 04 Feb 2014 13:02:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391518926!13245537!1
X-Originating-IP: [74.125.82.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25612 invoked from network); 4 Feb 2014 13:02:06 -0000
Received: from mail-we0-f172.google.com (HELO mail-we0-f172.google.com)
	(74.125.82.172)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 13:02:06 -0000
Received: by mail-we0-f172.google.com with SMTP id p61so4115730wes.31
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 05:02:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=s06/6qb8wZRfwgGCSMrlsTQfIdu7C12N04+msQrOaJU=;
	b=iZDsQdzMkDsGVIeOWdHERh3HYS742Rf3tM/8iDbWSwWh8W8ZjAnuAeLPaUWEuY/+Iu
	qAdFXjpSfOaRsx1sGKzJbbxB1X+MuXB6m5cs8YPm286Q0mstZNmTruhmyuIpJrqaJnaP
	5uktf6yFU+jRhKel7QmOW+ksfr1ePH7Z8L1oQgpJzhmacf7ww9R+g02frhCgU9ciQD+J
	pSdRBAwwYXTrL/t/yc/zwRbStVDZK3oWSZdZqgBYwbGstq+EAj2oJVfJmv1jOZVdZhXT
	/Ru66SX0vMCn5ySas6tXlv1QgMB4pKXWEXdsM/Ez4BA/Rn9YlNyc6GiOPEQRqTGufFtS
	OtBw==
X-Gm-Message-State: ALoCoQnWa75aj2p/c1D1jwvmY7HHoQ1TLc9Bdu/p/WNWmXn0EgU9oTA6owYvY+kttp7LcTgOWJi/
X-Received: by 10.180.75.202 with SMTP id e10mr12536450wiw.50.1391518925944;
	Tue, 04 Feb 2014 05:02:05 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	ha1sm49725860wjc.23.2014.02.04.05.02.04 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 05:02:05 -0800 (PST)
Message-ID: <52F0E4CC.3020302@linaro.org>
Date: Tue, 04 Feb 2014 13:02:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH FOR-4.5] xen: arm: avoid "PV" terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 04/02/14 11:41, Ian Campbell wrote:
> Xen on ARM guests are neither PV nor HVM, they are just "guests". Avoid the
> incorrect use of the term pv in the guest type macros.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

I think there are some coding style errors (already present in the
current code), see below. Apart from that:

Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
> Common code still has is_pv_domain, is_hvm_domain and even is_pvh_domain.

I have a patch to get rid of is_pv_domain in common/grant-table.c. I 
will send it with my IOMMU patch series for ARM.

> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 635a9a4..1e4f298 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -263,7 +263,7 @@ static void continue_new_vcpu(struct vcpu *prev)
>
>       if ( is_idle_vcpu(current) )
>           reset_stack_and_jump(idle_loop);
> -    else if is_pv32_domain(current->domain)
> +    else if is_32bit_domain(current->domain)
        else if ( ... ) ?

> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index ea77cb8..21efd55 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1721,12 +1721,12 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
>           advance_pc(regs, hsr);
>           break;
>       case HSR_EC_CP15_32:
> -        if ( ! is_pv32_domain(current->domain) )
> +        if ( ! is_32bit_domain(current->domain) )
               ( !is_... ) ?

>               goto bad_trap;
>           do_cp15_32(regs, hsr);
>           break;
>       case HSR_EC_CP15_64:
> -        if ( ! is_pv32_domain(current->domain) )
> +        if ( ! is_32bit_domain(current->domain) )

Same here.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 13:02:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 13:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAfdh-0003HM-DQ; Tue, 04 Feb 2014 13:02:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAfdf-0003HE-Py
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 13:02:08 +0000
Received: from [85.158.137.68:10641] by server-9.bemta-3.messagelabs.com id
	00/21-10184-EC4E0F25; Tue, 04 Feb 2014 13:02:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391518926!13245537!1
X-Originating-IP: [74.125.82.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25612 invoked from network); 4 Feb 2014 13:02:06 -0000
Received: from mail-we0-f172.google.com (HELO mail-we0-f172.google.com)
	(74.125.82.172)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 13:02:06 -0000
Received: by mail-we0-f172.google.com with SMTP id p61so4115730wes.31
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 05:02:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=s06/6qb8wZRfwgGCSMrlsTQfIdu7C12N04+msQrOaJU=;
	b=iZDsQdzMkDsGVIeOWdHERh3HYS742Rf3tM/8iDbWSwWh8W8ZjAnuAeLPaUWEuY/+Iu
	qAdFXjpSfOaRsx1sGKzJbbxB1X+MuXB6m5cs8YPm286Q0mstZNmTruhmyuIpJrqaJnaP
	5uktf6yFU+jRhKel7QmOW+ksfr1ePH7Z8L1oQgpJzhmacf7ww9R+g02frhCgU9ciQD+J
	pSdRBAwwYXTrL/t/yc/zwRbStVDZK3oWSZdZqgBYwbGstq+EAj2oJVfJmv1jOZVdZhXT
	/Ru66SX0vMCn5ySas6tXlv1QgMB4pKXWEXdsM/Ez4BA/Rn9YlNyc6GiOPEQRqTGufFtS
	OtBw==
X-Gm-Message-State: ALoCoQnWa75aj2p/c1D1jwvmY7HHoQ1TLc9Bdu/p/WNWmXn0EgU9oTA6owYvY+kttp7LcTgOWJi/
X-Received: by 10.180.75.202 with SMTP id e10mr12536450wiw.50.1391518925944;
	Tue, 04 Feb 2014 05:02:05 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	ha1sm49725860wjc.23.2014.02.04.05.02.04 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 05:02:05 -0800 (PST)
Message-ID: <52F0E4CC.3020302@linaro.org>
Date: Tue, 04 Feb 2014 13:02:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH FOR-4.5] xen: arm: avoid "PV" terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 04/02/14 11:41, Ian Campbell wrote:
> Xen on ARM guests are neither PV nor HVM, they are just "guests". Avoid the
> incorrect use of the term pv in the guest type macros.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

I think there are some coding style errors (already present in the current 
code); see below. Apart from that:

Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
> Common code still has is_pv_domain, is_hvm_domain and even is_pvh_domain.

I have a patch to get rid of is_pv_domain in common/grant-table.c. I 
will send it with my IOMMU patch series for ARM.

> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 635a9a4..1e4f298 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -263,7 +263,7 @@ static void continue_new_vcpu(struct vcpu *prev)
>
>       if ( is_idle_vcpu(current) )
>           reset_stack_and_jump(idle_loop);
> -    else if is_pv32_domain(current->domain)
> +    else if is_32bit_domain(current->domain)
        else if ( ... ) ?

> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index ea77cb8..21efd55 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1721,12 +1721,12 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
>           advance_pc(regs, hsr);
>           break;
>       case HSR_EC_CP15_32:
> -        if ( ! is_pv32_domain(current->domain) )
> +        if ( ! is_32bit_domain(current->domain) )
               ( !is_... ) ?

>               goto bad_trap;
>           do_cp15_32(regs, hsr);
>           break;
>       case HSR_EC_CP15_64:
> -        if ( ! is_pv32_domain(current->domain) )
> +        if ( ! is_32bit_domain(current->domain) )

Same here.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 13:31:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 13:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAg61-000442-6Y; Tue, 04 Feb 2014 13:31:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAg5z-00043x-CM
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 13:31:23 +0000
Received: from [85.158.139.211:47019] by server-12.bemta-5.messagelabs.com id
	54/62-15415-AABE0F25; Tue, 04 Feb 2014 13:31:22 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391520681!1554793!1
X-Originating-IP: [74.125.82.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13848 invoked from network); 4 Feb 2014 13:31:22 -0000
Received: from mail-wg0-f53.google.com (HELO mail-wg0-f53.google.com)
	(74.125.82.53)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 13:31:22 -0000
Received: by mail-wg0-f53.google.com with SMTP id y10so12921231wgg.20
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 05:31:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=JVR+v6C5DyvqCl/pBqh+4p+riRN1ug1worAcCEgPY9A=;
	b=T6h3xAfErmA7fu0kYIXyW244fX27plQ25Bl/sdOhFa4ssUHmYfQw89otmbjLjD5pii
	39nUDYVg+DJcZpYrPcCNsjjfJwQTthTOs6L1rjGvAbHsSCo/mGJ9gdf93i4RKBlvEhsb
	d4knTFQ97m03X2coU9efv+pLBsHy0sYNvmt3jz5OwaewIPnkPpW3pMqLfBzcXklSnTOQ
	2NV8oK9sK8A/GBuPcLXPwWcMx1TbdBkUi05Ep6MI7WNh7riea4huKgQgizx9k/k2+ewS
	ZZ86iKLPSsxCY/TKsxxfdNLF2LekewaIr6Ao6eiCpX7itNbtHrPgu3+NjIfI0WNwtITQ
	eobQ==
X-Gm-Message-State: ALoCoQkOaS3/mUMw97LplfkzQl8+AEihKCesOX16FERoaMeYgIcgKO5PJ9Nbs7Ar8WsxJZ4i0z5t
X-Received: by 10.194.219.1 with SMTP id pk1mr8700476wjc.36.1391520681761;
	Tue, 04 Feb 2014 05:31:21 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id y13sm52656076wjr.8.2014.02.04.05.31.20
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 05:31:21 -0800 (PST)
Message-ID: <52F0EBA8.3000206@linaro.org>
Date: Tue, 04 Feb 2014 13:31:20 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pavlo Suikov <pavlo.suikov@globallogic.com>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>	<52EFFCF5.5070108@linaro.org>
	<CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>
In-Reply-To: <CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 04/02/14 12:39, Pavlo Suikov wrote:
> Well,

Hello,

Ian/Stefano may have a better answer than me on this part :).

>  > To support xentrace on ARM, we will need at least:
>
> I would readily do that if you give me some directions on where to look
> at, or a high-level explanation of:
>
>  > - to replace rcu_lock_domain_by_any_id() by a similar function
>
> What semantics should this function have?

I would partially copy get_pg_owner() (arch/x86/mm/mm.c) into the ARM code. 
The check "unlikely(paging_mode_translate(curr))" will always fail on ARM.

>  > - to add stubs for trace in arm code
>
> Is there an example of what functionality should be stubbed in arm code?
>

You can look at __trace_hypercall_entry on x86, and x86/entry.S where 
the function is called.

Sincerely yours,


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 13:58:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 13:58:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgVP-0004n2-6P; Tue, 04 Feb 2014 13:57:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgVN-0004mv-Oh
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 13:57:37 +0000
Received: from [85.158.137.68:54575] by server-17.bemta-3.messagelabs.com id
	56/61-22569-0D1F0F25; Tue, 04 Feb 2014 13:57:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391522253!13312837!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23248 invoked from network); 4 Feb 2014 13:57:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 13:57:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97731824"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 13:57:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	08:57:20 -0500
Message-ID: <1391522239.10515.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 4 Feb 2014 13:57:19 +0000
In-Reply-To: <52F0EBA8.3000206@linaro.org>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
	<52EFFCF5.5070108@linaro.org>
	<CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>
	<52F0EBA8.3000206@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Pavlo Suikov <pavlo.suikov@globallogic.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 13:31 +0000, Julien Grall wrote:
> 
> On 04/02/14 12:39, Pavlo Suikov wrote:
> > Well,
> 
> Hello,
> 
> Ian/Stefano may have a better answer than me on this part :).

I know next to nothing about this tracing stuff. Probably best just to
dig in and ask questions as issues arise.

> 
> >  > To support xentrace on ARM, we will need at least:
> >
> > I would readily do that if you give me some directions on where to look
> > at, or a high-level explanation of:
> >
> >  > - to replace rcu_lock_domain_by_any_id() by a similar function
> >
> > What semantics should this function have?
> 
> I would partially copy get_pg_owner() (arch/x86/mm/mm.c) into the ARM code. 
> The check "unlikely(paging_mode_translate(curr))" will always fail on ARM.

Depending on how much is shared, making it a common function might be
better.

> 
> >  > - to add stubs for trace in arm code
> >
> > Is there an example of what functionality should be stubbed in arm code?
> >
> 
> You can look at __trace_hypercall_entry on x86, and x86/entry.S where 
> the function is called.
> 
> Sincerely yours,
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:00:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgXp-0005I7-Db; Tue, 04 Feb 2014 14:00:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgXn-0005Hq-SG
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:00:08 +0000
Received: from [85.158.143.35:59104] by server-3.bemta-4.messagelabs.com id
	5E/D9-11539-662F0F25; Tue, 04 Feb 2014 14:00:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391522405!3044738!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6957 invoked from network); 4 Feb 2014 14:00:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:00:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="99642850"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 14:00:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	09:00:03 -0500
Message-ID: <1391522402.10515.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 4 Feb 2014 14:00:02 +0000
In-Reply-To: <52F0E4CC.3020302@linaro.org>
References: <1391514096-31805-1-git-send-email-ian.campbell@citrix.com>
	<52F0E4CC.3020302@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH FOR-4.5] xen: arm: avoid "PV" terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 13:02 +0000, Julien Grall wrote:
> 
> On 04/02/14 11:41, Ian Campbell wrote:
> > Xen on ARM guests are neither PV nor HVM, they are just "guests". Avoid the
> > incorrect use of the term pv in the guest type macros.
> >
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> I think there are some coding style errors (already present in the current 
> code),

Right, I mostly wrote this patch with sed...

>  see below. Apart from that:
> 
> Acked-by: Julien Grall <julien.grall@linaro.org>
> 
> > ---
> > Common code still has is_pv_domain, is_hvm_domain and even is_pvh_domain.
> 
> I have a patch to get rid of is_pv_domain in common/grant-table.c. I 
> will send it with my IOMMU patch series for ARM.
> 
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 635a9a4..1e4f298 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -263,7 +263,7 @@ static void continue_new_vcpu(struct vcpu *prev)
> >
> >       if ( is_idle_vcpu(current) )
> >           reset_stack_and_jump(idle_loop);
> > -    else if is_pv32_domain(current->domain)
> > +    else if is_32bit_domain(current->domain)
>         else if ( ... ) ?

Oh wow. This only compiles at all because the definition of
is_pv32_domain is enclosed in parentheses.

> 
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index ea77cb8..21efd55 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -1721,12 +1721,12 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
> >           advance_pc(regs, hsr);
> >           break;
> >       case HSR_EC_CP15_32:
> > -        if ( ! is_pv32_domain(current->domain) )
> > +        if ( ! is_32bit_domain(current->domain) )
>                ( !is_... ) ?
> 
> >               goto bad_trap;
> >           do_cp15_32(regs, hsr);
> >           break;
> >       case HSR_EC_CP15_64:
> > -        if ( ! is_pv32_domain(current->domain) )
> > +        if ( ! is_32bit_domain(current->domain) )
> 
> Same here.

Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtB-0006CR-7R; Tue, 04 Feb 2014 14:22:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtA-0006CJ-2o
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:12 +0000
Received: from [85.158.143.35:19182] by server-1.bemta-4.messagelabs.com id
	28/AC-31661-397F0F25; Tue, 04 Feb 2014 14:22:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391523728!3061808!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26992 invoked from network); 4 Feb 2014 14:22:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="99653228"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 14:21:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	09:21:43 -0500
Message-ID: <1391523701.5635.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:21:41 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>, Julien
	Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0/4 v2] xen/arm: fix guest builder cache
 coherency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes in v2:
        Flush on page alloc and do targeted flushes at domain build time
        rather than a big flush after domain build. This adds a new call
        to common code, which is stubbed out on x86. This avoids needing
        to worry about preemptability of the new domctl and also catches
        cases related to ballooning where things might not be flushed
        (e.g. a guest scrubs a page but doesn't clean the cache)
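The shape of the change described above can be sketched as follows. This is illustrative only, not the actual Xen interface: the hook name, signature, and flush counter are made up. The idea is that the common page-allocation path calls a per-architecture hook for each page as it is handed out, so ARM can clean the cache incrementally while x86 stubs the hook out as a no-op, and no big unpreemptible flush is needed after the whole domain is built.

```c
#include <assert.h>

static int flushes;     /* counts simulated cache cleans (test scaffolding) */

/* ARM: clean the cache lines covering the page so a guest running with
 * its caches disabled sees the scrubbed contents in RAM. */
static void arch_flush_page_to_ram_arm(unsigned long mfn)
{
    (void)mfn;
    flushes++;          /* stand-in for a real dcache clean by address */
}

/* x86: no such coherency problem, so the hook is a no-op stub. */
static void arch_flush_page_to_ram_x86(unsigned long mfn)
{
    (void)mfn;
}

/* Common allocator path: flush each page as it is allocated, which also
 * covers ballooning (pages returning to a guest later) for free. */
static void alloc_domheap_pages_sketch(unsigned long first_mfn, int nr,
                                       void (*flush)(unsigned long))
{
    for (int i = 0; i < nr; i++)
        flush(first_mfn + i);
}
```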

This has done 12000 boot loops on arm32 and 10000 on arm64.

Given the security aspect I would like to put this in 4.4.

Original blurb:

On ARM we need to take care of cache coherency for guests which we have
just built because they start with their caches disabled.

Our current strategy for dealing with this, which is to make guest
memory default to cacheable regardless of the in guest configuration
(the HCR.DC bit), is flawed because it doesn't handle guests which
enable their MMU before enabling their caches, which at least FreeBSD
does. (NB: Setting HCR.DC while the guest MMU is enabled is
UNPREDICTABLE, hence we must disable it when the guest turns its MMU
on).

There is also a security aspect here since the current strategy means
that a guest which enables its MMU before its caches can potentially see
unscrubbed data in RAM (because the scrubbed bytes are still held in the
cache).

As well as the new stuff this series removes the HCR.DC support and
performs two purely cosmetic renames.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtV-0006Ei-Ko; Tue, 04 Feb 2014 14:22:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtT-0006ED-JP
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:31 +0000
Received: from [85.158.139.211:4460] by server-2.bemta-5.messagelabs.com id
	80/1B-23037-6A7F0F25; Tue, 04 Feb 2014 14:22:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391523748!1583330!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2149 invoked from network); 4 Feb 2014 14:22:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97743707"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:22:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 09:22:26 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAgtN-0004zY-Q9;
	Tue, 04 Feb 2014 14:22:25 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:22:22 +0000
Message-ID: <1391523745-21139-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 1/4] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function hasn't been only about creating entries for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtW-0006F4-3i; Tue, 04 Feb 2014 14:22:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtU-0006EM-Ic
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:32 +0000
Received: from [85.158.139.211:48226] by server-16.bemta-5.messagelabs.com id
	D6/0D-05060-7A7F0F25; Tue, 04 Feb 2014 14:22:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391523748!1583330!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2298 invoked from network); 4 Feb 2014 14:22:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97743708"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:22:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 09:22:26 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAgtN-0004zY-Tt;
	Tue, 04 Feb 2014 14:22:25 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:22:23 +0000
Message-ID: <1391523745-21139-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 2/4] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has uses other than during relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+     * preemptible manner this is updated to track where to
+     * resume the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4
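The bookkeeping the renamed field participates in can be modelled in a few lines. This is a toy model with invented names, not Xen code: lowest_mapped_gfn starts at ULONG_MAX and only ever decreases (outside teardown), max_mapped_gfn starts at 0 and only ever increases, and together they bound the range relinquish_p2m_mapping() must walk.

```c
#include <assert.h>
#include <limits.h>

/* Illustrative model of the p2m low/high water marks. */
struct p2m_range {
    unsigned long lowest_mapped_gfn;    /* low-water mark of mapped gfns */
    unsigned long max_mapped_gfn;       /* high-water mark of mapped gfns */
};

static void p2m_range_init(struct p2m_range *r)
{
    r->lowest_mapped_gfn = ULONG_MAX;   /* nothing mapped yet */
    r->max_mapped_gfn = 0;
}

/* Record that gfns [sgfn, egfn) were mapped, widening the tracked range. */
static void p2m_range_note_mapping(struct p2m_range *r,
                                   unsigned long sgfn, unsigned long egfn)
{
    if (sgfn < r->lowest_mapped_gfn)
        r->lowest_mapped_gfn = sgfn;
    if (egfn > r->max_mapped_gfn)
        r->max_mapped_gfn = egfn;
}
```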


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtV-0006Ei-Ko; Tue, 04 Feb 2014 14:22:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtT-0006ED-JP
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:31 +0000
Received: from [85.158.139.211:4460] by server-2.bemta-5.messagelabs.com id
	80/1B-23037-6A7F0F25; Tue, 04 Feb 2014 14:22:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391523748!1583330!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2149 invoked from network); 4 Feb 2014 14:22:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97743707"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:22:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 09:22:26 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAgtN-0004zY-Q9;
	Tue, 04 Feb 2014 14:22:25 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:22:22 +0000
Message-ID: <1391523745-21139-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 1/4] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function hasn't been only about creating for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtW-0006F4-3i; Tue, 04 Feb 2014 14:22:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtU-0006EM-Ic
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:32 +0000
Received: from [85.158.139.211:48226] by server-16.bemta-5.messagelabs.com id
	D6/0D-05060-7A7F0F25; Tue, 04 Feb 2014 14:22:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391523748!1583330!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2298 invoked from network); 4 Feb 2014 14:22:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97743708"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:22:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 09:22:26 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAgtN-0004zY-Tt;
	Tue, 04 Feb 2014 14:22:25 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:22:23 +0000
Message-ID: <1391523745-21139-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 2/4] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has other uses other than during relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
 +    /* Lowest mapped gfn in the p2m. When releasing mapped gfns in a
 +     * preemptible manner this is updated to track where to resume
 +     * the search. Apart from during teardown this can only
 +     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtW-0006FU-L7; Tue, 04 Feb 2014 14:22:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtU-0006EO-Je
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:32 +0000
Received: from [85.158.143.35:23663] by server-1.bemta-4.messagelabs.com id
	14/4D-31661-7A7F0F25; Tue, 04 Feb 2014 14:22:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391523749!3058027!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 647 invoked from network); 4 Feb 2014 14:22:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97743721"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:22:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 09:22:26 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAgtO-0004zY-1U;
	Tue, 04 Feb 2014 14:22:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:22:24 +0000
Message-ID: <1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled and so we need to make sure
they see consistent data in RAM (requiring a cache clean) but also that they
do not have old stale data suddenly appear in the caches when they enable
their caches (requiring the invalidate).

This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time
since this will miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the page content). We need to
clean as well as invalidate to make sure that any scrubbing which has occurred
gets committed to real RAM. To achieve this add a new cacheflush_page function,
which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.
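
The new domctl refuses to flush an arbitrarily large range (see the v2 notes
below). As a standalone illustration of that bounds checking, here is a sketch
of the validation done in the XEN_DOMCTL_cacheflush hunk in
xen/arch/arm/domctl.c; MAX_ORDER, the xen_pfn_t typedef and the
order_from_pages() helper are stand-ins for Xen internals, assumed for this
example only:

```c
#include <stdint.h>
#include <errno.h>

/* Assumed stand-ins for Xen's definitions, for illustration only. */
#define MAX_ORDER 20
typedef uint64_t xen_pfn_t;

/* Smallest order such that (1UL << order) >= nr, in the spirit of
 * Xen's get_order_from_pages(). */
static unsigned int order_from_pages(unsigned long nr)
{
    unsigned int order = 0;

    while ( (1UL << order) < nr )
        order++;
    return order;
}

/* Mirrors the checks in the domctl handler: reject a reversed range,
 * and reject a range too large to flush without worrying about
 * preemption. On success the handler would go on to call
 * p2m_cache_flush(d, start_pfn, end_pfn). */
static int validate_cacheflush_range(xen_pfn_t start_pfn, xen_pfn_t end_pfn)
{
    if ( end_pfn < start_pfn )
        return -EINVAL;

    if ( order_from_pages(end_pfn - start_pfn) > MAX_ORDER )
        return -EINVAL;

    return 0;
}
```

So a toolstack flushing one page at a time (as xc_copy_to_domain_page() does
with dst_pfn, dst_pfn+1) always passes, while a request covering more than
2^MAX_ORDER pages is rejected rather than preempted.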

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
--
v2:
   Switch to cleaning at page allocation time + explicit flushing of the regions
   which the toolstack touches.

   Add XSM for new domctl.

   New domctl restricts the amount of space it is willing to flush, to avoid
   thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    3 +++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xenctrl.h               |    3 ++-
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |    9 +++++++++
 xen/arch/arm/p2m.c                  |   24 ++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |    3 +++
 xen/xsm/flask/policy/access_vectors |    2 ++
 15 files changed, 100 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3b3f2fb 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
 +     * enabled its caches. But let's be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, gnttab_gmfn + 1);
+
     return 0;
 }
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..306b414 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,9 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid,
+                         phys->first, phys->first + phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..092d610 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t end_pfn)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.end_pfn = end_pfn;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..556810f 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, dst_pfn+1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, dst_pfn+1);
     return 0;
 }
 
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..80c397e 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -453,7 +453,8 @@ int xc_domain_create(xc_interface *xch,
                      xen_domain_handle_t handle,
                      uint32_t flags,
                      uint32_t *pdomid);
-
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+			 xen_pfn_t start_pfn, xen_pfn_t end_pfn);
 
 /* Functions to produce a dump of a given domain
  *  xc_domain_dumpcore - produces a dump to a specified file
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..8916e49 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = domctl->u.cacheflush.end_pfn;
+
+        if ( e < s )
+            return -EINVAL;
+
+        if ( get_order_from_pages(e-s) > MAX_ORDER )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2f48347..0c1a7b9 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -338,6 +338,15 @@ unsigned long domain_page_map_to_mfn(const void *va)
 }
 #endif
 
+void cacheflush_page(unsigned long mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..d452814 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include <asm/gic.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
+#include <asm/page.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,14 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                        break;
+                    cacheflush_page(pte.p2m.base);
+                }
+                break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +634,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..16bb739 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        cacheflush_page(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..342dde8 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
+/* Flush the dcache for an entire page. */
+void cacheflush_page(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..271517f 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void cacheflush_page(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t
 perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..d8d8727 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush: [start, end). */
+    xen_pfn_t start_pfn, end_pfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..1345d7e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:22:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgtX-0006Gg-FK; Tue, 04 Feb 2014 14:22:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAgtV-0006Ed-OJ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:22:34 +0000
Received: from [85.158.143.35:23789] by server-2.bemta-4.messagelabs.com id
	0F/C2-10891-9A7F0F25; Tue, 04 Feb 2014 14:22:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391523749!3058027!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 796 invoked from network); 4 Feb 2014 14:22:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:22:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,779,1384300800"; d="scan'208";a="97743718"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:22:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 09:22:26 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAgtO-0004zY-4D;
	Tue, 04 Feb 2014 14:22:26 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 14:22:25 +0000
Message-ID: <1391523745-21139-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391523701.5635.6.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 4/4] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling its caches (SCTLR.C) first or
at the same time. It turns out that FreeBSD does this.

This has now been fixed (yet) another way (third time is the charm!) so remove
this support. The original commit contained some fixes which are still
relevant even with the revert of the bulk of the patch:
 - Correction to HSR_SYSREG_CRN_MASK
 - Rename of HSR_SYSCTL macros to avoid naming clash
 - Definition of some additional cp reg specifications

Since these are still useful they are not reverted.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: Move to end of series
    Do not revert useful bits
---
 xen/arch/arm/domain.c        |    7 --
 xen/arch/arm/traps.c         |  159 +-----------------------------------------
 xen/include/asm-arm/domain.h |    2 -
 3 files changed, 1 insertion(+), 167 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..124cccf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -475,7 +469,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8f2e82..ec51d1b 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
@@ -1635,6 +1477,7 @@ done:
     if (first) unmap_domain_page(first);
 }
 
+
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       union hsr hsr)
 {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:28:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAgzG-0007FY-CT; Tue, 04 Feb 2014 14:28:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mst@redhat.com>) id 1WAgzF-0007ES-4A
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 14:28:29 +0000
Received: from [85.158.139.211:35888] by server-8.bemta-5.messagelabs.com id
	E5/F6-05298-C09F0F25; Tue, 04 Feb 2014 14:28:28 +0000
X-Env-Sender: mst@redhat.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391524106!1595319!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7683 invoked from network); 4 Feb 2014 14:28:27 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-206.messagelabs.com with SMTP;
	4 Feb 2014 14:28:27 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14ERKGQ004451
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 09:27:20 -0500
Received: from redhat.com (vpn1-7-112.ams2.redhat.com [10.36.7.112])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with SMTP
	id s14ERHFH001143; Tue, 4 Feb 2014 09:27:18 -0500
Date: Tue, 4 Feb 2014 16:32:19 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140204143219.GA7561@redhat.com>
References: <1265435524.20140204004608@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1265435524.20140204004608@eikelenboom.it>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Commit 9e047b982452c633882b486682966c1d97097015
 (piix4: add acpi pci hotplug support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
> Grmbll my fat fingers hit the send shortcut too soon by accident ..
> let's try again ..
> 
> Hi Michael,
> 
> A git bisect showed that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
> 
> commit 9e047b982452c633882b486682966c1d97097015
> Author: Michael S. Tsirkin <mst@redhat.com>
> Date:   Mon Oct 14 18:01:20 2013 +0300
> 
>     piix4: add acpi pci hotplug support
> 
>     Add support for acpi pci hotplug using the
>     new infrastructure.
>     PIIX4 legacy interface is maintained as is for
>     machine types 1.7 and older.
> 
>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> 
> 
> The error is not very verbose:
> 
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> 
> So it seems there is an issue with preserving the legacy interface.


Which machine type is broken?
What's the command line used?
What's the value of has_acpi_build in hw/i386/pc_piix.c?
What happens if you add
-global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
?
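
For reference, the suggested property amounts to something like the following device-model invocation (illustrative only, not the reporter's actual command line; it forces the PIIX4 power-management device back to the legacy ACPI PCI hotplug interface):

```shell
# Illustrative invocation: disable the new bridge-capable ACPI PCI
# hotplug path on PIIX4_PM, reverting to the legacy (pre-1.8) interface.
qemu-system-x86_64 \
    -machine pc-i440fx-1.7 \
    -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off \
    "$@"   # remainder of the usual guest configuration
```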

> --
> Sander
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:31:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:31:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAh1z-0007c5-4B; Tue, 04 Feb 2014 14:31:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAh1x-0007br-AA
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 14:31:17 +0000
Received: from [85.158.143.35:50517] by server-3.bemta-4.messagelabs.com id
	CA/CE-11539-4B9F0F25; Tue, 04 Feb 2014 14:31:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391524274!3065265!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23198 invoked from network); 4 Feb 2014 14:31:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 14:31:15 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14EV0PK015107
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 14:31:01 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s14EUwoM023402
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 14:30:59 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14EUw28002549; Tue, 4 Feb 2014 14:30:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 06:30:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id DB4031BFA0B; Tue,  4 Feb 2014 09:30:55 -0500 (EST)
Date: Tue, 4 Feb 2014 09:30:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Matt Wilson <msw@linux.com>
Message-ID: <20140204143055.GA3853@phenom.dumpdata.com>
References: <1390931015-5490-1-git-send-email-roger.pau@citrix.com>
	<20140128193837.GA32072@andromeda.dapyr.net>
	<20140204063139.GA19177@u109add4315675089e695.ant.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140204063139.GA19177@u109add4315675089e695.ant.amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: linux-kernel@vger.kernel.org, Matthew Rushton <mrushton@amazon.com>,
	david.vrabel@citrix.com, Matt Wilson <msw@amazon.com>,
	xen-devel@lists.xenproject.org, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	boris.ostrovsky@oracle.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] xen-blkback: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 10:31:40PM -0800, Matt Wilson wrote:
> On Tue, Jan 28, 2014 at 03:38:37PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Tue, Jan 28, 2014 at 06:43:32PM +0100, Roger Pau Monne wrote:
> > > blkback bug fixes for memory leaks (patches 1 and 2) and a race 
> > > (patch 3).
> > 
> > They all look OK to me. I've stuck them in my 'stable/for-jens-3.14'
> > branch and are testing them now (hadn't pushed it yet).
> > 
> > Matt and Matt,
> > 
> > Could you take a look at the other two patches as well?
> 
> Sure, though somehow you didn't address your message to us, so I
> didn't see it until today.

Duh!
> 
> Matt Rushton did some review and testing on an earlier version that
> came out fine. We'll give the final series a test since there was
> still a bit of rework.

<nods>
> 
> --msw
> 
> > David, Boris,
> > 
> > Are you OK with pushing those patches out to Jens Axboe if nobody
> > gives an NACK by Friday?
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:39:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhA0-00089A-6R; Tue, 04 Feb 2014 14:39:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAh9y-000894-V6
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 14:39:35 +0000
Received: from [193.109.254.147:36964] by server-3.bemta-14.messagelabs.com id
	2B/D0-00432-6ABF0F25; Tue, 04 Feb 2014 14:39:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391524771!1949196!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9985 invoked from network); 4 Feb 2014 14:39:33 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 14:39:33 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14Ec5LM007383
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 14:38:06 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Ec3RW017004
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 14:38:04 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Ec3I6014540; Tue, 4 Feb 2014 14:38:03 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 06:38:03 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 795901BFA0B; Tue,  4 Feb 2014 09:38:01 -0500 (EST)
Date: Tue, 4 Feb 2014 09:38:01 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>,
	Linus Torvalds <torvalds@linux-foundation.org>
Message-ID: <20140204143801.GC3853@phenom.dumpdata.com>
References: <20140122072914.GA9283@orcus.uplinklabs.net>
	<52DFD5DB.6060603@iogearbox.net>
	<CAEr7rXhJf1tf2KUErsGgUyYNUpVwLYD=fKUf8aPm0Dcg21MuNQ@mail.gmail.com>
	<20140122203337.GA31908@orcus.uplinklabs.net>
	<CAEr7rXjge6rKzxbwy+0A6-5YhVZL9WGmaLrDYbE8H5hrtwq_4A@mail.gmail.com>
	<20140124133830.GU4963@suse.de>
	<CAEr7rXjmVp-F88zQnc4a2tSzBAJFshzR0FPkOBdo1jbQLOv+2A@mail.gmail.com>
	<CAEr7rXgCoYW7O0E38YGCThUKgmxYFfLfYP-x_KSzVBLOCiHeDg@mail.gmail.com>
	<20140204114445.GM6732@suse.de> <52F0D39D.4050409@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F0D39D.4050409@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Steven Noonan <steven@uplinklabs.net>, Linux-X86 <x86@kernel.org>,
	Linux Kernel mailing List <linux-kernel@vger.kernel.org>,
	Linux-MM <linux-mm@kvack.org>, Ingo Molnar <mingo@redhat.com>,
	Mel Gorman <mgorman@suse.de>, Elena Ufimtseva <ufimtseva@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH] Subject: [PATCH] xen: Properly account for
 _PAGE_NUMA during xen pte translations
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 11:48:45AM +0000, David Vrabel wrote:
> On 04/02/14 11:44, Mel Gorman wrote:
> > Steven Noonan forwarded a user's report of a problem starting
> > vsftpd on a Xen paravirtualized guest, with this in dmesg:
> > 
> [...]
> > 
> > The issue could not be reproduced under an HVM instance with the same kernel,
> > so it appears to be exclusive to paravirtual Xen guests. He bisected the
> > problem to commit 1667918b (mm: numa: clear numa hinting information on
> > mprotect) that was also included in 3.12-stable.
> > 
> > The problem was related to how xen translates ptes because it was not
> > accounting for the _PAGE_NUMA bit. This patch splits pte_present to add
> > a pteval_present helper for use by xen so both bare metal and xen use
> > the same code when checking if a PTE is present.
> 
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Thank you for fixing it!

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

I can ingest it through the Xen tree for rc2. Or let Linus handle it
if he prefers it.
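
The split described in the quoted commit message amounts to something like the sketch below: a raw-pteval helper that the Xen pte translation code can share with the regular pte_present() check. This is a minimal standalone illustration with made-up flag values, not the actual arch/x86 headers:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative flag values; the real definitions live in
 * arch/x86/include/asm/pgtable_types.h. */
typedef uint64_t pteval_t;
#define _PAGE_PRESENT  (1ULL << 0)
#define _PAGE_PROTNONE (1ULL << 8)
#define _PAGE_NUMA     _PAGE_PROTNONE  /* NUMA hinting reuses the bit */

typedef struct { pteval_t pte; } pte_t;

/* Operates on a bare pteval, so Xen's mfn/pfn translation helpers can
 * test "presence" before the value is wrapped in a pte_t.  A pte whose
 * _PAGE_PRESENT bit is clear may still be present to core mm if it is
 * a PROT_NONE or NUMA-hinted mapping. */
static inline int pteval_present(pteval_t pteval)
{
    return (pteval & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_NUMA)) != 0;
}

/* The existing check becomes a thin wrapper over the shared helper,
 * so bare metal and Xen agree on what counts as present. */
static inline int pte_present(pte_t a)
{
    return pteval_present(a.pte);
}
```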

> 
> Thanks.
> 
> David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:45:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhFV-0008Uw-Vm; Tue, 04 Feb 2014 14:45:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAhFU-0008Uk-6F
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:45:16 +0000
Received: from [85.158.143.35:36366] by server-2.bemta-4.messagelabs.com id
	D4/AF-10891-BFCF0F25; Tue, 04 Feb 2014 14:45:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391525112!3067599!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15088 invoked from network); 4 Feb 2014 14:45:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:45:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97754935"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:45:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	09:45:12 -0500
Message-ID: <1391525110.6497.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Date: Tue, 4 Feb 2014 14:45:10 +0000
In-Reply-To: <20140130155536.GA7250@citrix.com>
References: <20140130155536.GA7250@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v3] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 15:55 +0000, Joby Poriyath wrote:
> menuentry in grub2/grub.cfg uses the linux16 and initrd16 commands
> instead of linux and initrd. Due to this, a RHEL 7 (beta) guest failed
> to boot after installation.
> 
> In addition to this, menuentry has some options as well
> (--class red, --class gnu, etc).

In response to v2 Andrew said:

        This is not the reason for the change to the regex.  The problem with
        the regex is that RHEL7 menu entries have two different single-quote
        delimited strings on the same line, and the greedy grouping gets both
        strings, and the options in between.
        
You do not seem to have addressed this feedback.
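
To illustrate Andrew's point, here is a minimal sketch (using a shortened,
hypothetical menuentry line) of how the greedy group in GrubConf.py grabs
both quoted strings, while the non-greedy version from the patch stops at
the first closing quote:

```python
import re

# A RHEL 7-style menuentry line: two single-quoted strings with
# --class options in between (shortened for the example).
line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-3.10.0-advanced-d23b8b49' {")

# Greedy (.*) backtracks only to the LAST quote, so the title group
# swallows the first string, the options, and the second string.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
print(greedy.group(1))

# Non-greedy (.*?) stops at the first closing quote, leaving just
# the title in group 1 and the options in group 2.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)
print(lazy.group(1))
```

With the greedy pattern, group 1 runs all the way to the second quoted
string; with `(.*?)` it is just the entry title, which is what pygrub
needs.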

> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> v2: Added RHEL 7 grub.cfg in pygrub/examples
> v3: Tidied the commit message.

These should be below the first "---" (i.e. 4 lines further down...

> Kindly consider this patch for xen-4.4 as RHEL 7 (beta) fails
> to boot on Xen.
> ---

... here)

So should "Kindly...". The reason is that the git tools for applying
patches strip out anything after the "---", so putting metainformation
and commentary that isn't intended for the actual commit log there
saves the committer stripping it manually.

>  tools/pygrub/examples/rhel-7-beta.grub2 |  118 +++++++++++++++++++++++++++++++
>  tools/pygrub/src/GrubConf.py            |    4 +-
>  2 files changed, 121 insertions(+), 1 deletion(-)
>  create mode 100644 tools/pygrub/examples/rhel-7-beta.grub2
> 
> diff --git a/tools/pygrub/examples/rhel-7-beta.grub2 b/tools/pygrub/examples/rhel-7-beta.grub2
> new file mode 100644
> index 0000000..88f0f99
> --- /dev/null
> +++ b/tools/pygrub/examples/rhel-7-beta.grub2
> @@ -0,0 +1,118 @@
> +#
> +# DO NOT EDIT THIS FILE
> +#
> +# It is automatically generated by grub2-mkconfig using templates
> +# from /etc/grub.d and settings from /etc/default/grub
> +#
> +
> +### BEGIN /etc/grub.d/00_header ###
> +set pager=1
> +
> +if [ -s $prefix/grubenv ]; then
> +  load_env
> +fi
> +if [ "${next_entry}" ] ; then
> +   set default="${next_entry}"
> +   set next_entry=
> +   save_env next_entry
> +   set boot_once=true
> +else
> +   set default="${saved_entry}"
> +fi
> +
> +if [ x"${feature_menuentry_id}" = xy ]; then
> +  menuentry_id_option="--id"
> +else
> +  menuentry_id_option=""
> +fi
> +
> +export menuentry_id_option
> +
> +if [ "${prev_saved_entry}" ]; then
> +  set saved_entry="${prev_saved_entry}"
> +  save_env saved_entry
> +  set prev_saved_entry=
> +  save_env prev_saved_entry
> +  set boot_once=true
> +fi
> +
> +function savedefault {
> +  if [ -z "${boot_once}" ]; then
> +    saved_entry="${chosen}"
> +    save_env saved_entry
> +  fi
> +}
> +
> +function load_video {
> +  if [ x$feature_all_video_module = xy ]; then
> +    insmod all_video
> +  else
> +    insmod efi_gop
> +    insmod efi_uga
> +    insmod ieee1275_fb
> +    insmod vbe
> +    insmod vga
> +    insmod video_bochs
> +    insmod video_cirrus
> +  fi
> +}
> +
> +terminal_output console
> +set timeout=5
> +### END /etc/grub.d/00_header ###
> +
> +### BEGIN /etc/grub.d/10_linux ###
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +	load_video
> +	set gfxpayload=keep
> +	insmod gzio
> +	insmod part_msdos
> +	insmod xfs
> +	set root='hd0,msdos1'
> +	if [ x$feature_platform_search_hint = xy ]; then
> +	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +	else
> +	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +	fi
> +	linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
> +	initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
> +}
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +	load_video
> +	insmod gzio
> +	insmod part_msdos
> +	insmod xfs
> +	set root='hd0,msdos1'
> +	if [ x$feature_platform_search_hint = xy ]; then
> +	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +	else
> +	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +	fi
> +	linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
> +	initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
> +}
> +
> +### END /etc/grub.d/10_linux ###
> +
> +### BEGIN /etc/grub.d/20_linux_xen ###
> +### END /etc/grub.d/20_linux_xen ###
> +
> +### BEGIN /etc/grub.d/20_ppc_terminfo ###
> +### END /etc/grub.d/20_ppc_terminfo ###
> +
> +### BEGIN /etc/grub.d/30_os-prober ###
> +### END /etc/grub.d/30_os-prober ###
> +
> +### BEGIN /etc/grub.d/40_custom ###
> +# This file provides an easy way to add custom menu entries.  Simply type the
> +# menu entries you want to add after this comment.  Be careful not to change
> +# the 'exec tail' line above.
> +### END /etc/grub.d/40_custom ###
> +
> +### BEGIN /etc/grub.d/41_custom ###
> +if [ -f  ${config_directory}/custom.cfg ]; then
> +  source ${config_directory}/custom.cfg
> +elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
> +  source $prefix/custom.cfg;
> +fi
> +### END /etc/grub.d/41_custom ###
> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> index cb853c9..974cded 100644
> --- a/tools/pygrub/src/GrubConf.py
> +++ b/tools/pygrub/src/GrubConf.py
> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>                  
>      commands = {'set:root': 'root',
>                  'linux': 'kernel',
> +                'linux16': 'kernel',
>                  'initrd': 'initrd',
> +                'initrd16': 'initrd',
>                  'echo': None,
>                  'insmod': None,
>                  'search': None}
> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>                  continue
>  
>              # new image
> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>              if title_match:
>                  if img is not None:
>                      raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:47:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhHY-0000FX-5u; Tue, 04 Feb 2014 14:47:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAhHX-0000FM-0m
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 14:47:23 +0000
Received: from [85.158.137.68:53036] by server-16.bemta-3.messagelabs.com id
	B5/79-29917-A7DF0F25; Tue, 04 Feb 2014 14:47:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391525239!13304296!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11290 invoked from network); 4 Feb 2014 14:47:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 14:47:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97755738"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 14:46:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	09:46:51 -0500
Message-ID: <1391525210.6497.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Tue, 4 Feb 2014 14:46:50 +0000
In-Reply-To: <1391186147-15191-1-git-send-email-anthony.perard@citrix.com>
References: <1391186147-15191-1-git-send-email-anthony.perard@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Yun Wang <bimingery@gmail.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: Fix vcpu-set for PV guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-31 at 16:35 +0000, Anthony PERARD wrote:
> vcpu-set will try to use the HVM path (through QEMU) instead of the PV
> path (through xenstore) for a PV guest, if there is a QEMU running for
> this domain. This patch checks which kind of guest is running before
> doing any call.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
> 
> Yun, does this patch fix the issue with your PV guest?

Yun, any feedback on this patch?

George -- I think vcpu-set not working for PV guests is a bug worth
fixing in 4.4 so I intend to apply.

> 
> 
>  tools/libxl/libxl.c | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index 2845ca4..c4fe6af 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -4692,12 +4692,21 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
>  {
>      GC_INIT(ctx);
>      int rc;
> -    switch (libxl__device_model_version_running(gc, domid)) {
> -    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
> -        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
> +    switch (libxl__domain_type(gc, domid)) {
> +    case LIBXL_DOMAIN_TYPE_HVM:
> +        switch (libxl__device_model_version_running(gc, domid)) {
> +        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
> +            rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
> +            break;
> +        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> +            rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap);
> +            break;
> +        default:
> +            rc = ERROR_INVAL;
> +        }
>          break;
> -    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> -        rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap);
> +    case LIBXL_DOMAIN_TYPE_PV:
> +        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
>          break;
>      default:
>          rc = ERROR_INVAL;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 14:49:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 14:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhJp-0000hc-Od; Tue, 04 Feb 2014 14:49:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAhJo-0000hO-1L
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 14:49:44 +0000
Received: from [85.158.139.211:23743] by server-7.bemta-5.messagelabs.com id
	1B/C5-14867-70EF0F25; Tue, 04 Feb 2014 14:49:43 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391525380!1604493!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28776 invoked from network); 4 Feb 2014 14:49:42 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 14:49:42 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14Ema5B021085
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 14:48:36 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14EmYAJ022610
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 14:48:35 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14EmYAs013676; Tue, 4 Feb 2014 14:48:34 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 06:48:34 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2C2FB1BFA0B; Tue,  4 Feb 2014 09:48:33 -0500 (EST)
Date: Tue, 4 Feb 2014 09:48:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140204144833.GE3853@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F0B8F30200007800118E81@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: george.dunlap@eu.citrix.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 08:54:59AM +0000, Jan Beulich wrote:
> >>> On 03.02.14 at 18:03, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
> >       * no virtual vmswith is allowed. Or else, the following IO
> >       * emulation will handled in a wrong VCPU context.
> >       */
> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> > +    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )
> 
> As Mukesh pointed out, calling get_ioreq() twice is inefficient.
> 
> But to me it's not clear whether a PVH vCPU getting here is wrong
> in the first place, i.e. I would think the above condition should be
> || rather than && (after all, even if nested HVM one day became

I presume you mean like this:

	if ( !get_ioreq(v) || get_ioreq(v)->state != STATE_IOREQ_NONE )
		return;

If the Intel maintainers are OK with that I can do it that way (and only
do one get_ioreq(v) call) and expand the comment.

Or just take the simple route and squash Mukesh's patch into mine and
revisit this later - as I would prefer to make the minimal amount of
changes to any code during rc3.


> supported for PVH, there not being an ioreq would still seem to be
> a clear indication of no further work to be done here).
> 
> Of course, if done that way, the corresponding comment would
> benefit from being extended accordingly.
> 
> Jan
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:00:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:00:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhTs-00017G-Gm; Tue, 04 Feb 2014 15:00:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAhTr-000177-3x
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:00:07 +0000
Received: from [193.109.254.147:30184] by server-11.bemta-14.messagelabs.com
	id 28/88-24604-67001F25; Tue, 04 Feb 2014 15:00:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391526005!1950931!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10743 invoked from network); 4 Feb 2014 15:00:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:00:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 15:00:05 +0000
Message-Id: <52F10E8202000078001190A7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 15:00:02 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 15:22, Ian Campbell <ian.campbell@citrix.com> wrote:
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>          return -1;
>      }
>  
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But lets be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, gnttab_gmfn + 1);

Looking at this and further similar code I think it would be cleaner
for the xc interface to take a start MFN and a count, and for the
hypervisor interface to use an inclusive range (such that overflow
is not a problem).

> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>      return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>  }
>  
> +/* No cache maintenance required on x86 architecture. */
> +static inline void cacheflush_page(unsigned long mfn) {}

The function name is certainly sub-optimal: If I needed a page-range
cache flush and found a function named like this, I'd assume it does
what its name says - flush the page from the cache. sync_page() or
some such may be better suited to express that this is something
that may be a no-op on certain architectures.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:00:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhUc-00019d-Ct; Tue, 04 Feb 2014 15:00:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAhUb-00019T-Ch
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:00:53 +0000
Received: from [85.158.137.68:50369] by server-1.bemta-3.messagelabs.com id
	69/76-17293-4A001F25; Tue, 04 Feb 2014 15:00:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391526050!13273403!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2901 invoked from network); 4 Feb 2014 15:00:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:00:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97761557"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:00:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 10:00:49 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAhUW-0005Az-N2;
	Tue, 04 Feb 2014 15:00:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAhUW-00069p-FR;
	Tue, 04 Feb 2014 15:00:48 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24724-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 15:00:48 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24724: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24724 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24724/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 24723
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24723

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)         broken REGR. vs. 24723
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24722
 test-amd64-i386-xl-qemuu-win7-amd64  9 guest-localmigrate fail REGR. vs. 24723

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  af172d655c3900822d1f710ac13ee38ee9d482d2
baseline version:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit af172d655c3900822d1f710ac13ee38ee9d482d2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 4 09:22:12 2014 +0100

    x86/domctl: don't ignore errors from vmce_restore_vcpu()
    
    What started out as a simple cleanup patch (eliminating the redundant
    check of domctl->cmd before setting "copyback", which as a result
    turned the "ext_vcpucontext_out" label useless) revealed a bug in the
    handling of XEN_DOMCTL_set_ext_vcpucontext.
    
    Fix this, retaining the cleanup, and at once dropping a stale comment
    and an accompanying formatting issue.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit af172d655c3900822d1f710ac13ee38ee9d482d2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 4 09:22:12 2014 +0100

    x86/domctl: don't ignore errors from vmce_restore_vcpu()
    
    What started out as a simple cleanup patch (eliminating the redundant
    check of domctl->cmd before setting "copyback", which as a result
    turned the "ext_vcpucontext_out" label useless) revealed a bug in the
    handling of XEN_DOMCTL_set_ext_vcpucontext.
    
    Fix this, retaining the cleanup, and at once dropping a stale comment
    and an accompanying formatting issue.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:02:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhWX-0001JA-6Q; Tue, 04 Feb 2014 15:02:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAhWV-0001J5-EG
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:02:51 +0000
Received: from [193.109.254.147:28652] by server-6.bemta-14.messagelabs.com id
	32/13-03396-A1101F25; Tue, 04 Feb 2014 15:02:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391526170!1955760!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22775 invoked from network); 4 Feb 2014 15:02:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:02:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 15:02:49 +0000
Message-Id: <52F10F2402000078001190B7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 15:02:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
In-Reply-To: <20140204144833.GE3853@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 15:48, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Tue, Feb 04, 2014 at 08:54:59AM +0000, Jan Beulich wrote:
>> >>> On 03.02.14 at 18:03, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
>> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> > @@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
>> >       * no virtual vmswith is allowed. Or else, the following IO
>> >       * emulation will handled in a wrong VCPU context.
>> >       */
>> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>> > +    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )
>> 
>> As Mukesh pointed out, calling get_ioreq() twice is inefficient.
>> 
>> But to me it's not clear whether a PVH vCPU getting here is wrong
>> in the first place, i.e. I would think the above condition should be
>> || rather than && (after all, even if nested HVM one day became
> 
> I presume you mean like this:
> 
> 	if ( !get_ioreq(v) || get_ioreq(v)->state != STATE_IOREQ_NONE )
> 		return;
> 
> If the Intel maintainers are OK with that I can do that (and only
> do one get_ioreq(v) call) and expand the comment.
> 
> Or just take the simple route and squash Mukesh's patch into mine and
> revisit this later - as I would prefer to make the minimal amount of
> changes to any code during rc3.

Wasn't it that Mukesh's patch simply was yours with the two
get_ioreq()s folded by using a local variable?

Jan
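
[Editor's note: for illustration only, the "folded" variant Jan alludes to
(one get_ioreq() call held in a local, with the NULL check ordered before
the dereference) might look like the sketch below. The enum and struct
definitions here are hypothetical stand-ins, not Xen's real vcpu/ioreq
types; only the control flow is the point.]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real Xen types: a PVH vCPU has no
 * ioreq page, so get_ioreq() can return NULL for it. */
enum { STATE_IOREQ_NONE = 0, STATE_IOREQ_READY = 1 };
typedef struct { int state; } ioreq_t;
typedef struct { ioreq_t *ioreq; } vcpu_t;

static ioreq_t *get_ioreq(vcpu_t *v)
{
    return v->ioreq;
}

/* Folded variant: call get_ioreq() exactly once, keep the result in
 * a local, and check for NULL before dereferencing. Returns nonzero
 * when the caller should bail out (no ioreq page, or IO pending). */
static int should_bail(vcpu_t *v)
{
    const ioreq_t *p = get_ioreq(v);

    return !p || p->state != STATE_IOREQ_NONE;
}
```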


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:04:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhXx-0001Qs-M2; Tue, 04 Feb 2014 15:04:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAhXw-0001Qj-2b
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:04:20 +0000
Received: from [85.158.143.35:64689] by server-3.bemta-4.messagelabs.com id
	61/04-11539-37101F25; Tue, 04 Feb 2014 15:04:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391526255!3066898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30791 invoked from network); 4 Feb 2014 15:04:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:04:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97763649"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:04:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 10:04:14 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAhXp-0005C4-Ue;
	Tue, 04 Feb 2014 15:04:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAhXp-0003mg-OH;
	Tue, 04 Feb 2014 15:04:13 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21233.365.463588.210458@mariner.uk.xensource.com>
Date: Tue, 4 Feb 2014 15:04:13 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <52E7CBB8.7040904@eu.citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
	<52E7C320.7080402@eu.citrix.com>
	<1390921417.7753.106.camel@kazak.uk.xensource.com>
	<52E7CBB8.7040904@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on ARM"):
> If you want to check in this one:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

(from a tools PoV)

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:06:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhaF-0001cU-8B; Tue, 04 Feb 2014 15:06:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAhaD-0001cB-TV
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:06:42 +0000
Received: from [193.109.254.147:37719] by server-9.bemta-14.messagelabs.com id
	27/CC-24895-FF101F25; Tue, 04 Feb 2014 15:06:39 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391526397!1929457!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24959 invoked from network); 4 Feb 2014 15:06:39 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:06:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14F6V3S013137
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:06:32 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14F6UwX012363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 15:06:30 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14F6UXr012357; Tue, 4 Feb 2014 15:06:30 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:06:30 -0800
Message-ID: <52F1023E.2070902@oracle.com>
Date: Tue, 04 Feb 2014 10:07:42 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-12-git-send-email-boris.ostrovsky@oracle.com>
	<52F0D9B20200007800118F44@nat28.tlf.novell.com>
In-Reply-To: <52F0D9B20200007800118F44@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 11/17] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:14 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> --- a/xen/arch/x86/hvm/vpmu.c
>> +++ b/xen/arch/x86/hvm/vpmu.c
>> @@ -400,6 +400,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>               return -EFAULT;
>>           pvpmu_finish(current->domain, &pmu_params);
>>           break;
>> +
>> +    case XENPMU_lvtpc_set:
>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>> +            return -EFAULT;
>> +
>> +        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
> Once again, please don't ignore (parts of) hypercall input values.

I can actually pass this value in the shared area where I already have a 
uint32_t for LVTPC. It will also save us from doing the copy.

-boris
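
[Editor's note: Jan's objection is to the `(uint32_t)pmu_params.d.val`
cast silently discarding the upper half of a 64-bit hypercall input. A
common alternative is to reject any value that would not survive the
truncation; the sketch below shows the pattern. `set_lvtpc_checked()`
is a hypothetical helper, not Xen's actual code.]

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical helper: instead of casting a 64-bit hypercall argument
 * down to 32 bits and losing the upper half, fail with -EINVAL when
 * the upper 32 bits are not zero, so bad inputs are reported rather
 * than silently mangled. */
static int set_lvtpc_checked(uint64_t val, uint32_t *lvtpc)
{
    if (val != (uint64_t)(uint32_t)val)
        return -EINVAL;          /* upper 32 bits were set */
    *lvtpc = (uint32_t)val;
    return 0;
}
```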

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:06:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhaF-0001cU-8B; Tue, 04 Feb 2014 15:06:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAhaD-0001cB-TV
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:06:42 +0000
Received: from [193.109.254.147:37719] by server-9.bemta-14.messagelabs.com id
	27/CC-24895-FF101F25; Tue, 04 Feb 2014 15:06:39 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391526397!1929457!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24959 invoked from network); 4 Feb 2014 15:06:39 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:06:39 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14F6V3S013137
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:06:32 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14F6UwX012363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 15:06:30 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14F6UXr012357; Tue, 4 Feb 2014 15:06:30 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:06:30 -0800
Message-ID: <52F1023E.2070902@oracle.com>
Date: Tue, 04 Feb 2014 10:07:42 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-12-git-send-email-boris.ostrovsky@oracle.com>
	<52F0D9B20200007800118F44@nat28.tlf.novell.com>
In-Reply-To: <52F0D9B20200007800118F44@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 11/17] x86/VPMU: Add support for PMU
 register handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:14 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> --- a/xen/arch/x86/hvm/vpmu.c
>> +++ b/xen/arch/x86/hvm/vpmu.c
>> @@ -400,6 +400,14 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
>>               return -EFAULT;
>>           pvpmu_finish(current->domain, &pmu_params);
>>           break;
>> +
>> +    case XENPMU_lvtpc_set:
>> +        if ( copy_from_guest(&pmu_params, arg, 1) )
>> +            return -EFAULT;
>> +
>> +        vpmu_lvtpc_update((uint32_t)pmu_params.d.val);
> Once again, please don't ignore (parts of) hypercall input values.

I can actually pass this value in the shared area, where I already have a 
uint32_t for the LVTPC. That will also save us from doing the copy.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:07:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:07:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhan-0001gj-Mo; Tue, 04 Feb 2014 15:07:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WAham-0001gO-1j
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:07:16 +0000
Received: from [193.109.254.147:57351] by server-13.bemta-14.messagelabs.com
	id F5/97-01226-32201F25; Tue, 04 Feb 2014 15:07:15 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391526433!1948894!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20677 invoked from network); 4 Feb 2014 15:07:13 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 4 Feb 2014 15:07:13 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53940 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WAhaL-0007Qf-0S; Tue, 04 Feb 2014 16:06:50 +0100
Date: Tue, 4 Feb 2014 16:07:08 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1047320768.20140204160708@eikelenboom.it>
To: "Michael S. Tsirkin" <mst@redhat.com>
In-Reply-To: <20140204143219.GA7561@redhat.com>
References: <1265435524.20140204004608@eikelenboom.it>
	<20140204143219.GA7561@redhat.com>
MIME-Version: 1.0
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Commit 9e047b982452c633882b486682966c1d97097015
	(piix4: add acpi pci hotplug support) seems to break Xen
	pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, February 4, 2014, 3:32:19 PM, you wrote:

> On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
>> Grmbll my fat fingers hit the send shortcut too soon by accident ..
>> let's try again ..
>> 
>> Hi Michael,
>> 
>> A git bisect turned out that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
>> 
>> commit 9e047b982452c633882b486682966c1d97097015
>> Author: Michael S. Tsirkin <mst@redhat.com>
>> Date:   Mon Oct 14 18:01:20 2013 +0300
>> 
>>     piix4: add acpi pci hotplug support
>> 
>>     Add support for acpi pci hotplug using the
>>     new infrastructure.
>>     PIIX4 legacy interface is maintained as is for
>>     machine types 1.7 and older.
>> 
>>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>> 
>> 
>> The error is not very verbose :
>> 
>> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
>> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
>> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
>> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
>> 
>> So it seems there is an issue with preserving the legacy interface.


> Which machine type is broken?

xenfv

> What's the command line used?

See below for the output from creating the guest.

Strange thing is:
char device redirected to /dev/pts/15 (label serial0)
vgabios-cirrus.bin: ROM id 101300b8 / PCI id 101300b8
efi-e1000.rom: ROM id 8086100e / PCI id 8086100e
VNC server running on `127.0.0.1:5900'
xen_platform: changed ro/rw state of ROM memory area. now is rw state.
[00:05.0] xen_pt_initfn: Assigning real physical device 06:01.0 to devfn 0x28
[00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fd000 type: 0)
[00:05.0] xen_pt_pci_intx: intx=1
[00:05.0] xen_pt_initfn: Real physical device 06:01.0 registered successfully!
[00:05.0] xen_pt_pci_intx: intx=1
[00:05.0] xen_pt_initfn: Assigning real physical device 06:01.1 to devfn 0x28
[00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fe000 type: 0)
[00:05.0] xen_pt_pci_intx: intx=2
[00:05.0] xen_pt_initfn: Real physical device 06:01.1 registered successfully!
[00:05.0] xen_pt_pci_intx: intx=2
[00:05.0] xen_pt_initfn: Assigning real physical device 06:01.2 to devfn 0x28
[00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0xfe0ffc00 type: 0)
[00:05.0] xen_pt_pci_intx: intx=3
[00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
[00:05.0] xen_pt_pci_intx: intx=3
[00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
[00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
[00:05.0] xen_pt_pci_intx: intx=1
[00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
[00:05.0] xen_pt_pci_intx: intx=1
[00:05.0] xen_pt_msi_set_enable: disabling MSI.

It does log success for registering the PCI devices .. however QMP returns that the device initialization failed.
And an lspci in the guest also doesn't show the devices.

> What's the value of has_acpi_build in hw/i386/pc_piix.c?
static bool has_acpi_build = true;

> What happens if you add
> -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off

That makes it work again ...
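(For anyone hitting this from the archive: one way to apply that workaround under libxl, assuming the `device_model_args` option of xl.cfg is available in your toolstack, is to pass the flag through to the spawned QEMU from the guest config. Untested sketch:)

```
# Hypothetical xl guest-config fragment: forward the workaround flag
# to qemu-system-i386 until the passthrough regression is fixed.
device_model_args = [ '-global',
                      'PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off' ]
```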


>> --
>> Sander
>> 


Parsing config from /etc/xen/domU/production/security.cfg
libxl: debug: libxl_create.c:1348:do_domain_create: ao 0x1415900: create: how=(nil) callback=(nil) poller=0x1415960
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hdb, using backend phy
libxl: debug: libxl_create.c:803:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415ce8: deregister unregistered
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9efa8
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19efa8
xc: detail: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019efa8
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100000
xc: detail: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x7fdc08ee7000 -> 0x7fdc08f7ce2d
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: register slotnum=3
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: register slotnum=2
libxl: debug: libxl_create.c:1362:do_domain_create: ao 0x1415900: inprogress: poller=0x1415960, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: event epath=/local/domain/0/backend/vbd/30/768/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/768/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14171e8: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/768/state token=3/0: empty slot
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: event epath=/local/domain/0/backend/vbd/30/832/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/832/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: deregister slotnum=2
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1418238: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/832/state token=2/1: empty slot
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
libxl: debug: libxl_dm.c:1308:libxl__spawn_local_dm: Spawning device-model /usr/local/lib/xen/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   /usr/local/lib/xen/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   30
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-30,server,nowait
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -nodefaults
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   security
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   cirrus-vga
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -global
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   vga.vram_size_mb=8
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   order=c
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   4,maxcpus=4
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   e1000,id=nic0,netdev=net0,mac=00:16:3e:a0:72:69
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif30.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -machine
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   1016
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security_data,if=ide,index=1,media=disk,format=raw,cache=writeback
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: register slotnum=2
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: deregister slotnum=2
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415f20: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: register slotnum=2
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: deregister slotnum=2
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419d68: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-06_01.0",
        "hostaddr": "0000:06:01.0"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-06_01.1",
        "hostaddr": "0000:06:01.1"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-06_01.2",
        "hostaddr": "0000:06:01.2"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-08_00.0",
        "hostaddr": "0000:08:00.0"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x1415900: progress report: ignored
libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x1415900: complete, rc=0
libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x1415900: destroy
xc: debug: hypercall buffer: total allocations:530 total releases:530
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:526 misses:2 toobig:2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: deregister slotnum=2
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415f20: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 3
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: register slotnum=2
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: deregister slotnum=2
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419d68: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-06_01.0",
        "hostaddr": "0000:06:01.0"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-06_01.1",
        "hostaddr": "0000:06:01.1"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-06_01.2",
        "hostaddr": "0000:06:01.2"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-08_00.0",
        "hostaddr": "0000:08:00.0"
    }
}
'
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x1415900: progress report: ignored
libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x1415900: complete, rc=0
libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x1415900: destroy
xc: debug: hypercall buffer: total allocations:530 total releases:530
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:526 misses:2 toobig:2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:08:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhbc-0001nL-Fb; Tue, 04 Feb 2014 15:08:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAhbb-0001n4-SM
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:08:08 +0000
Received: from [85.158.137.68:60176] by server-17.bemta-3.messagelabs.com id
	3C/8F-22569-65201F25; Tue, 04 Feb 2014 15:08:06 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391526483!12506823!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5662 invoked from network); 4 Feb 2014 15:08:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:08:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99675779"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:08:00 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:08:00 -0500
Message-ID: <52F1024D.7070302@citrix.com>
Date: Tue, 4 Feb 2014 15:07:57 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>
References: <1391433898-10533-1-git-send-email-zoltan.kiss@citrix.com>
	<20140204063648.GB19177@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140204063648.GB19177@u109add4315675089e695.ant.amazon.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	Anthony Liguori <aliguori@amazon.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Matt Wilson <msw@amazon.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v7] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 06:36, Matt Wilson wrote:
> On Mon, Feb 03, 2014 at 01:24:58PM +0000, Zoltan Kiss wrote:
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>> Suggested-by: David Vrabel <david.vrabel@citrix.com>
>
> You're still forgetting that this was originally proposed by Anthony
> Liguori <aliguori@amazon.com>.
>
> https://lkml.kernel.org/r/1384307336-5328-1-git-send-email-anthony@codemonkey.ws

Sure, sorry. I've talked with David; I will add Anthony as Original-by 
instead of David's Suggested-by line.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:08:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhc5-0001uL-UD; Tue, 04 Feb 2014 15:08:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1WAhc4-0001tZ-Sz
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:08:37 +0000
Received: from [85.158.137.68:5656] by server-13.bemta-3.messagelabs.com id
	CA/02-26923-47201F25; Tue, 04 Feb 2014 15:08:36 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391526513!13285030!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1655 invoked from network); 4 Feb 2014 15:08:35 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:08:35 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s14F7S4l001602
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Tue, 4 Feb 2014 10:07:28 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s14F7Rdu001600;
	Tue, 4 Feb 2014 10:07:27 -0500
Date: Tue, 4 Feb 2014 11:07:27 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140204150727.GA1529@andromeda.dapyr.net>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.9i
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, arnd@arndb.de,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Olof Johansson <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v9 0/5] xen/arm/arm64: CONFIG_PARAVIRT and
	stolen ticks accounting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 09, 2014 at 06:15:07PM +0000, Stefano Stabellini wrote:
> Hi all,
> this patch series introduces stolen ticks accounting for Xen on ARM and
> ARM64.
> Stolen ticks are clocksource ticks that have been "stolen" from the cpu,
> typically because Linux is running in a virtual machine and the vcpu has
> been descheduled.
> To account for these ticks we introduce CONFIG_PARAVIRT and pv_time_ops
> so that we can make use of:
> 
> kernel/sched/cputime.c:steal_account_process_tick
> 
> 
> Changes in v9:
> - added back missing new files from patches;
> - fix compilation on avr32 (remove patch #5, revert to previous version
>   of patch #2).
> 
> 
> 
> Stefano Stabellini (5):
>       xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
>       kernel: missing include in cputime.c
>       arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
>       arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
>       xen/arm: account for stolen ticks
> 
>  arch/arm/Kconfig                  |   20 ++++++++
>  arch/arm/include/asm/paravirt.h   |   20 ++++++++
>  arch/arm/kernel/Makefile          |    2 +
>  arch/arm/kernel/paravirt.c        |   25 ++++++++++
>  arch/arm/xen/enlighten.c          |   21 +++++++++
>  arch/arm64/Kconfig                |   20 ++++++++
>  arch/arm64/include/asm/paravirt.h |   20 ++++++++
>  arch/arm64/kernel/Makefile        |    1 +
>  arch/arm64/kernel/paravirt.c      |   25 ++++++++++
>  arch/ia64/xen/time.c              |   48 +++----------------
>  arch/x86/xen/time.c               |   76 +------------------------------
>  drivers/xen/Makefile              |    2 +-
>  drivers/xen/time.c                |   91 +++++++++++++++++++++++++++++++++++++
>  include/xen/xen-ops.h             |    5 ++
>  kernel/sched/cputime.c            |    3 ++
>  15 files changed, 261 insertions(+), 118 deletions(-)
>  create mode 100644 arch/arm/include/asm/paravirt.h
>  create mode 100644 arch/arm/kernel/paravirt.c
>  create mode 100644 arch/arm64/include/asm/paravirt.h
>  create mode 100644 arch/arm64/kernel/paravirt.c
>  create mode 100644 drivers/xen/time.c
> 
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git lost_ticks_9

I tried to merge it on top of 3.14-rc1 + stable/for-linus-3.14 (which
has the revert of "xen/grant-table: Avoid m2p_override during mapping").


And I get:
konrad@phenom:~/linux$ git merge stefano/lost_ticks_9
Auto-merging drivers/xen/Makefile
CONFLICT (content): Merge conflict in drivers/xen/Makefile
Auto-merging arch/x86/xen/time.c
CONFLICT (modify/delete): arch/ia64/xen/time.c deleted in HEAD and
modified in stefano/lost_ticks_9. Version stefano/lost_ticks_9 of
arch/ia64/xen/time.c left in tree.
Auto-merging arch/arm64/kernel/Makefile
CONFLICT (content): Merge conflict in arch/arm64/kernel/Makefile
Auto-merging arch/arm64/Kconfig
Auto-merging arch/arm/xen/enlighten.c
CONFLICT (content): Merge conflict in arch/arm/xen/enlighten.c
Auto-merging arch/arm/Kconfig
CONFLICT (content): Merge conflict in arch/arm/Kconfig
Automatic merge failed; fix conflicts and then commit the result.


I presume that is mostly due to David's FIFO queue patches.

Could you kindly rebase it on top of 3.14-rc1 and also tack on
Catalin Marinas' Ack on the patches?

Thank you!

> 
> 
> Cheers,
> 
> Stefano
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:15:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:15:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhiT-0002Wg-32; Tue, 04 Feb 2014 15:15:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1WAhiP-0002WU-MI
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:15:11 +0000
Received: from [85.158.137.68:51273] by server-11.bemta-3.messagelabs.com id
	84/F2-04255-CF301F25; Tue, 04 Feb 2014 15:15:08 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391526906!13234242!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32691 invoked from network); 4 Feb 2014 15:15:07 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:15:07 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s14FF1NG001838
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Tue, 4 Feb 2014 10:15:01 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s14FF1MM001835;
	Tue, 4 Feb 2014 10:15:01 -0500
Date: Tue, 4 Feb 2014 11:15:01 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Roger Pau Monne <roger.pau@citrix.com>, mrushton@amazon.com, msw@amazon.com
Message-ID: <20140204151501.GA1781@andromeda.dapyr.net>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
User-Agent: Mutt/1.5.9i
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 11:26:11AM +0100, Roger Pau Monne wrote:
> This series contains blkback bug fixes for memory leaks (patches 1 and
> 2) and a race (patch 3). Patch 4 removes blkif_request_segment_aligned,
> since its memory layout is exactly the same as blkif_request_segment,
> and should introduce no functional change.
> 
> All patches should be backported to stable branches; although the last
> one is not a functional change, it would still be nice to have for
> code correctness.

Matt and Matt, could you guys kindly take a look as well? Thank you!

> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:17:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhkx-0002ka-Dm; Tue, 04 Feb 2014 15:17:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1WAhkv-0002kR-Ec
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:17:45 +0000
Received: from [85.158.143.35:9653] by server-2.bemta-4.messagelabs.com id
	D1/CD-10891-89401F25; Tue, 04 Feb 2014 15:17:44 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391527063!3080298!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3061 invoked from network); 4 Feb 2014 15:17:43 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-4.tower-21.messagelabs.com with SMTP;
	4 Feb 2014 15:17:43 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14FGbCu000609
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 10:16:37 -0500
Received: from thinkpad (vpn-234-22.phx2.redhat.com [10.3.234.22])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s14FGYE8028583; Tue, 4 Feb 2014 10:16:34 -0500
Date: Tue, 4 Feb 2014 16:16:31 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140204161631.1762ea4d@thinkpad>
In-Reply-To: <1047320768.20140204160708@eikelenboom.it>
References: <1265435524.20140204004608@eikelenboom.it>
	<20140204143219.GA7561@redhat.com>
	<1047320768.20140204160708@eikelenboom.it>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] Commit
 9e047b982452c633882b486682966c1d97097015 (piix4: add acpi pci hotplug
 support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014 16:07:08 +0100
Sander Eikelenboom <linux@eikelenboom.it> wrote:

> 
> Tuesday, February 4, 2014, 3:32:19 PM, you wrote:
> 
> > On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
> >> Grmbll my fat fingers hit the send shortcut too soon by accident ..
> >> let's try again ..
> >> 
> >> Hi Michael,
> >> 
> >> A git bisect showed that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
> >> 
> >> commit 9e047b982452c633882b486682966c1d97097015
> >> Author: Michael S. Tsirkin <mst@redhat.com>
> >> Date:   Mon Oct 14 18:01:20 2013 +0300
> >> 
> >>     piix4: add acpi pci hotplug support
> >> 
> >>     Add support for acpi pci hotplug using the
> >>     new infrastructure.
> >>     PIIX4 legacy interface is maintained as is for
> >>     machine types 1.7 and older.
> >> 
> >>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >> 
> >> 
> >> The error is not very verbose :
> >> 
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> 
> >> So it seems there is an issue with preserving the legacy interface.
> 
> 
> > Which machine type is broken?
> 
> xenfv
> 
> > What's the command line used?
> 
> See below the output of the creation of the guest
> 
> Strange thing is:
> char device redirected to /dev/pts/15 (label serial0)
> vgabios-cirrus.bin: ROM id 101300b8 / PCI id 101300b8
> efi-e1000.rom: ROM id 8086100e / PCI id 8086100e
> VNC server running on `127.0.0.1:5900'
> xen_platform: changed ro/rw state of ROM memory area. now is rw state.
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fd000 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 06:01.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.1 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fe000 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=2
> [00:05.0] xen_pt_initfn: Real physical device 06:01.1 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=2
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.2 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0xfe0ffc00 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_msi_set_enable: disabling MSI.
> 
> It does log success for registering the pci devices .. however QMP returns that the device initialization failed.
> And an lspci in the guest also doesn't show the devices.
> 
> > What's the value of has_acpi_build in hw/i386/pc_piix.c?
> static bool has_acpi_build = true;
> 
> > What happens if you add
> > -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
> 
> That makes it work again ...
Does it still work with
http://www.mail-archive.com/qemu-devel@nongnu.org/msg213815.html
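For reference, the workaround Michael suggested corresponds to adding the following to the QEMU invocation (fragment only; the rest of the command line is the one libxl already builds, as logged below):

```
qemu-system-i386 ... -machine xenfv \
    -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
```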


> 
> 
> >> --
> >> Sander
> >> 
> 
> 
> Parsing config from /etc/xen/domU/production/security.cfg
> libxl: debug: libxl_create.c:1348:do_domain_create: ao 0x1415900: create: how=(nil) callback=(nil) poller=0x1415960
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hdb, using backend phy
> libxl: debug: libxl_create.c:803:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415ce8: deregister unregistered
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9efa8
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19efa8
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->000000000019efa8
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100000
> xc: detail: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7fdc08ee7000 -> 0x7fdc08f7ce2d
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: register slotnum=2
> libxl: debug: libxl_create.c:1362:do_domain_create: ao 0x1415900: inprogress: poller=0x1415960, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: event epath=/local/domain/0/backend/vbd/30/768/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14171e8: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/768/state token=3/0: empty slot
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: event epath=/local/domain/0/backend/vbd/30/832/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/832/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1418238: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/832/state token=2/1: empty slot
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_dm.c:1308:libxl__spawn_local_dm: Spawning device-model /usr/local/lib/xen/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   /usr/local/lib/xen/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   30
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-30,server,nowait
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -nodefaults
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   security
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   cirrus-vga
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   4,maxcpus=4
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   e1000,id=nic0,netdev=net0,mac=00:16:3e:a0:72:69
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif30.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -machine
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security_data,if=ide,index=1,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415f20: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 3
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419d68: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.0",
>         "hostaddr": "0000:06:01.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.1",
>         "hostaddr": "0000:06:01.1"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.2",
>         "hostaddr": "0000:06:01.2"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-08_00.0",
>         "hostaddr": "0000:08:00.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x1415900: progress report: ignored
> libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x1415900: complete, rc=0
> libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x1415900: destroy
> xc: debug: hypercall buffer: total allocations:530 total releases:530
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:526 misses:2 toobig:2
> 
> 
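For anyone trying to reproduce this by hand, the failing QMP exchange in the log above can be sketched in a few lines of Python. The helper below is purely illustrative (it is not libxl code and only builds the JSON payload, it does not talk to a QMP socket); the driver name and the id/hostaddr formats are copied from the log:

```python
import json

def qmp_device_add(bdf, cmd_id=2):
    """Build the QMP 'device_add' command libxl sends for PCI passthrough.

    `bdf` is the host address in domain:bus:dev.fn form, e.g. "0000:06:01.0".
    The "pci-pt-06_01.0"-style id is derived from it as in the log above.
    """
    return {
        "execute": "device_add",
        "id": cmd_id,
        "arguments": {
            "driver": "xen-pci-passthrough",
            # "0000:06:01.0" -> "06_01.0"
            "id": "pci-pt-%s" % bdf.split(":", 1)[1].replace(":", "_"),
            "hostaddr": bdf,
        },
    }

# Reproduces the first device_add command in the log:
print(json.dumps(qmp_device_add("0000:06:01.0"), indent=4))
```

Sending this over the `/var/run/xen/qmp-libxl-<domid>` UNIX socket (after `qmp_capabilities`) is what triggers the "Device initialization failed" error above.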


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> [00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_msi_set_enable: disabling MSI.
> 
> It does log success for registering the pci devices .. however QMP returns that the device initialization failed.
> And an lspci in the guest also doesn't show the devices.
> 
> > What's the value of has_acpi_build in hw/i386/pc_piix.c?
> static bool has_acpi_build = true;
> 
> > What happens if you add
> > -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
> 
> That makes it work again ...
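[For reference, a hypothetical sketch of what the full device-model invocation with that workaround might look like, based on the libxl-spawned command line quoted below; the trailing arguments are elided and would come from the guest config:]

```shell
# Sketch only: force the PIIX4 legacy ACPI hotplug interface by turning
# off the new bridge-based ACPI PCI hotplug support (the -global option
# suggested above). Remaining arguments as generated by libxl.
qemu-system-i386 -machine xenfv \
    -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off \
    ...
```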
Does it still work with
http://www.mail-archive.com/qemu-devel@nongnu.org/msg213815.html


> 
> 
> >> --
> >> Sander
> >> 
> 
> 
> Parsing config from /etc/xen/domU/production/security.cfg
> libxl: debug: libxl_create.c:1348:do_domain_create: ao 0x1415900: create: how=(nil) callback=(nil) poller=0x1415960
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hdb, using backend phy
> libxl: debug: libxl_create.c:803:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415ce8: deregister unregistered
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9efa8
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19efa8
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->000000000019efa8
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100000
> xc: detail: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7fdc08ee7000 -> 0x7fdc08f7ce2d
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: register slotnum=2
> libxl: debug: libxl_create.c:1362:do_domain_create: ao 0x1415900: inprogress: poller=0x1415960, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: event epath=/local/domain/0/backend/vbd/30/768/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14171e8: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/768/state token=3/0: empty slot
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: event epath=/local/domain/0/backend/vbd/30/832/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/832/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1418238: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/832/state token=2/1: empty slot
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_dm.c:1308:libxl__spawn_local_dm: Spawning device-model /usr/local/lib/xen/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   /usr/local/lib/xen/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   30
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-30,server,nowait
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -nodefaults
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   security
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   cirrus-vga
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   4,maxcpus=4
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   e1000,id=nic0,netdev=net0,mac=00:16:3e:a0:72:69
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif30.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -machine
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security_data,if=ide,index=1,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415f20: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 3
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419d68: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.0",
>         "hostaddr": "0000:06:01.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.1",
>         "hostaddr": "0000:06:01.1"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.2",
>         "hostaddr": "0000:06:01.2"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-08_00.0",
>         "hostaddr": "0000:08:00.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x1415900: progress report: ignored
> libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x1415900: complete, rc=0
> libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x1415900: destroy
> xc: debug: hypercall buffer: total allocations:530 total releases:530
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:526 misses:2 toobig:2
> 
> 


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:21:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhoa-0003Oi-9m; Tue, 04 Feb 2014 15:21:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAhoY-0003OX-Kl
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:21:30 +0000
Received: from [85.158.139.211:42389] by server-10.bemta-5.messagelabs.com id
	50/6A-08578-97501F25; Tue, 04 Feb 2014 15:21:29 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391527287!1615472!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4204 invoked from network); 4 Feb 2014 15:21:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:21:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99685872"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:21:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 10:21:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAhoU-0002H6-Ad;
	Tue, 04 Feb 2014 15:21:26 +0000
Date: Tue, 4 Feb 2014 15:21:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
In-Reply-To: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Message-ID: <alpine.DEB.2.02.1402041519570.4373@kaball.uk.xensource.com>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
 gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 3 Feb 2014, Oleksandr Tyshchenko wrote:
> The possible deadlock scenario is explained below:
> 
> non interrupt context:    interrupt context       interrupt context
>                           (CPU0):                 (CPU1):
> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>   |                         |                       |
>   vgic_disable_irqs()       ...                     ...
>     |                         |                       |
>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>     |  ...                      |                       |
>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>     |  ...                        ...                     ...
>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>     |  ...                  . .       Oops! The lock has already been taken.
>     |  spin_unlock(...)     . .
>     |  ...                  . .
>     gic_irq_disable()       . .
>        ...                  . .
>        spin_lock(...)       . .
>        ...                  . .
>        ... <----------------. .
>        ... <------------------.
>        ...
>        spin_unlock(...)
> 
> Since gic_remove_from_queues() and gic_irq_disable() are called from
> non-interrupt context and acquire the same lock as gic_set_guest_irq(),
> which is called from interrupt context, we must disable interrupts in
> these functions to avoid possible deadlocks.
> 
> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>

nice work Oleksandr!

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  xen/arch/arm/gic.c |   10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index c44a4d0..7d83b0c 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>  static void gic_irq_disable(struct irq_desc *desc)
>  {
>      int irq = desc->irq;
> +    unsigned long flags;
>  
> -    spin_lock(&desc->lock);
> +    spin_lock_irqsave(&desc->lock, flags);
>      spin_lock(&gic.lock);
>      /* Disable routing */
>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>      desc->status |= IRQ_DISABLED;
>      spin_unlock(&gic.lock);
> -    spin_unlock(&desc->lock);
> +    spin_unlock_irqrestore(&desc->lock, flags);
>  }
>  
>  static unsigned int gic_irq_startup(struct irq_desc *desc)
> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>  {
>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
> +    unsigned long flags;
>  
> -    spin_lock(&gic.lock);
> +    spin_lock_irqsave(&gic.lock, flags);
>      if ( !list_empty(&p->lr_queue) )
>          list_del_init(&p->lr_queue);
> -    spin_unlock(&gic.lock);
> +    spin_unlock_irqrestore(&gic.lock, flags);
>  }
>  
>  void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> -- 
> 1.7.9.5
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:26:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAht2-0003cz-25; Tue, 04 Feb 2014 15:26:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAht0-0003ct-Rr
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:26:07 +0000
Received: from [193.109.254.147:4684] by server-9.bemta-14.messagelabs.com id
	8F/82-24895-E8601F25; Tue, 04 Feb 2014 15:26:06 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391527501!1927597!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16849 invoked from network); 4 Feb 2014 15:26:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:26:04 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14FOpjp023093
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:24:52 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FOoQi014245
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 15:24:51 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s14FOnBR004689; Tue, 4 Feb 2014 15:24:49 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:24:49 -0800
Message-ID: <52F1068A.6060500@oracle.com>
Date: Tue, 04 Feb 2014 10:26:02 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-13-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DB8F0200007800118F53@nat28.tlf.novell.com>
In-Reply-To: <52F0DB8F0200007800118F53@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 12/17] x86/VPMU: Handle PMU interrupts
	for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:22 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>   int vpmu_do_interrupt(struct cpu_user_regs *regs)
>>   {
>>       struct vcpu *v = current;
>> -    struct vpmu_struct *vpmu = vcpu_vpmu(v);
>> +    struct vpmu_struct *vpmu;
>>   
>> -    if ( vpmu->arch_vpmu_ops )
>> +    /* dom0 will handle this interrupt */
>> +    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
>> +        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
>> +
>> +    vpmu = vcpu_vpmu(v);
>> +    if ( !is_hvm_domain(v->domain) )
>> +    {
>> +        /* PV guest or dom0 is doing system profiling */
>> +        const struct cpu_user_regs *gregs;
>> +        int err;
>> +
>> +        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
>> +            return 1;
>> +
>> +        /* PV guest will be reading PMU MSRs from xenpmu_data */
>> +        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>> +        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
>> +        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>> +
>> +        /* Store appropriate registers in xenpmu_data */
>> +        if ( is_pv_32bit_domain(current->domain) )
>> +        {
>> +            /*
>> +             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
>> +             * and therefore we treat it the same way as a non-priviledged
>> +             * PV 32-bit domain.
>> +             */
>> +            struct compat_cpu_user_regs *cmp;
>> +
>> +            gregs = guest_cpu_user_regs();
>> +
>> +            cmp = (struct compat_cpu_user_regs *)
>> +                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
> Deliberate type changes like this can easily (and more readably as
> well as more forward compatibly) be done using (void *).
>
>> +            XLAT_cpu_user_regs(cmp, gregs);
>> +        }
>> +        else if ( !is_control_domain(current->domain) &&
>> +                 !is_idle_vcpu(current) )
>> +        {
>> +            /* PV guest */
>> +            gregs = guest_cpu_user_regs();
>> +            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                   gregs, sizeof(struct cpu_user_regs));
>> +        }
>> +        else
>> +            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                   regs, sizeof(struct cpu_user_regs));
>> +
>> +        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
>> +        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
>> +        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
>> +
>> +        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
>> +        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
>> +        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
>> +
>> +        send_guest_vcpu_virq(v, VIRQ_XENPMU);
>> +
>> +        return 1;
>> +    }
>> +    else if ( vpmu->arch_vpmu_ops )
> If the previous (and only) if() branch returns unconditionally, using
> "else if" is more confusing than clarifying imo (and in any case
> needlessly growing the patch, even if just by a bit).

Not sure I understand what you are saying here.

Here is the code structure:

int vpmu_do_interrupt(struct cpu_user_regs *regs)
{
    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
    {
        // work
        return 1;
    }
    else if ( vpmu->arch_vpmu_ops )
    {
        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
            return 0;

        // other work
        return 1;
    }

    return 0;
}

What do you propose?


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:27:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhtq-0003hs-G3; Tue, 04 Feb 2014 15:26:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAhtp-0003hh-8E
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:26:57 +0000
Received: from [85.158.137.68:46851] by server-3.bemta-3.messagelabs.com id
	1C/0D-14520-0C601F25; Tue, 04 Feb 2014 15:26:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391527615!13281522!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30685 invoked from network); 4 Feb 2014 15:26:55 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:26:55 -0000
Received: by mail-ee0-f42.google.com with SMTP id b15so2532066eek.15
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 07:26:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ig331GDdQ1dle+EqpcCCttMNz8GkVh6SBD8AUaZiJtk=;
	b=Mbo2H+I1Np9lmqoB8Ufrfre00GL/yosAcB2weumWQu8Q754wCQCB82HcvmdUYVXNNb
	H23Snv2Wz9mRrFyXA6pxMKyGtQ8jVy4xOfBAGm9UIH5CC74EmeFDWLY5n7F7jLkd69Wa
	ok62HyqZ2K2f12S3SY/veR1Oy5/K27vCRtBy5faPhi5hE3kU1y3+DPlLKQPvZTizMF+z
	+H41J/sTZ3AHA4uCxwzUV3DUdzNDniIHJXhNcHnf7JLugoRsjgCbREQTlY0A11NfVnbu
	zVN4R/mlmcnnZd7vyXP9q2xhhurKSw0B4ZbcFe1T8c0CEpputO84LNO2J7wGcT6k94Vz
	ZX/A==
X-Gm-Message-State: ALoCoQk78J0YTr4C9tBq7mnEsY92n0SH0Agw2h9BPgcbMuc/e9HdVM8PAFGq2vGEoW9AvM7sBXEh
X-Received: by 10.15.23.194 with SMTP id h42mr13865402eeu.32.1391527615572;
	Tue, 04 Feb 2014 07:26:55 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm89824996eet.6.2014.02.04.07.26.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:26:53 -0800 (PST)
Message-ID: <52F106B5.5050102@linaro.org>
Date: Tue, 04 Feb 2014 15:26:45 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
	<52E7C320.7080402@eu.citrix.com>
	<1390921417.7753.106.camel@kazak.uk.xensource.com>
	<52E7CBB8.7040904@eu.citrix.com>
	<21233.365.463588.210458@mariner.uk.xensource.com>
In-Reply-To: <21233.365.463588.210458@mariner.uk.xensource.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 03:04 PM, Ian Jackson wrote:
> George Dunlap writes ("Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on ARM"):
>> If you want to check in this one:
>>
>> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Julien Grall <julien.grall@linaro.org>


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:28:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhvQ-0003tX-6i; Tue, 04 Feb 2014 15:28:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WAhtv-0003ia-KR
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:27:04 +0000
Received: from [85.158.139.211:52184] by server-8.bemta-5.messagelabs.com id
	14/0F-05298-6C601F25; Tue, 04 Feb 2014 15:27:02 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391527620!1608511!1
X-Originating-IP: [98.139.213.161]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17849 invoked from network); 4 Feb 2014 15:27:02 -0000
Received: from nm24-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm24-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.161)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:27:02 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391527620; bh=fJzzkguvgNq58MSUWPXCQDfVuvhgPD173gbud72tnOI=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Date:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=hS0A3k7801ulQvHKT+OXAF/YSmedAtgmMWn8msW32XwXcrvKZAXFrpgFenfOK/QLByNnIczKlzkD0TiWM9/DH+S36QGwjCBaCVIYSTA9RwGEA1DtJHuwOqFjZT18eg9KKBWZA6F+/VISNzTnRfKvHEzzH7lRBikckuNKS+Mffn8=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=Vp4DKS0GHEefNQOj28/taVexj0ZlWhfgGEXMMgrWsrB7a8cT0MjpJJQnxj75KwjKTQG2pO8TJ403XRfTPDejCZ9+FDO7xvGU7c95cbe8OvnHJzdBvcPqPXeHBcaljTUANyyLER3nBbLVXwqpdbxVA5cyBDHf9UpXtZHLiGeWhiQ=;
Received: from [98.139.212.151] by nm24.bullet.mail.bf1.yahoo.com with NNFMP;
	04 Feb 2014 15:27:00 -0000
Received: from [98.139.211.192] by tm8.bullet.mail.bf1.yahoo.com with NNFMP;
	04 Feb 2014 15:27:00 -0000
Received: from [127.0.0.1] by smtp201.mail.bf1.yahoo.com with NNFMP;
	04 Feb 2014 15:27:00 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391527620; bh=fJzzkguvgNq58MSUWPXCQDfVuvhgPD173gbud72tnOI=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Date:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=mfmap7dWyAFHHjuB375hHqc/5Uw+GU+88pKQCov/bGIj6EPUIDeXgm6EODnwlA/liYS60n5RGpDvdXyex1+0t8b64KVUJFl0+yWOtXHZ2KddPXnJeUOrKhK1r12qR3ngb36OJa+M+jhk5OKQRPu0PVrcW6XPOtfELztoh752wt0=
X-Yahoo-Newman-Id: 682250.61665.bm@smtp201.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: lb0yiGkVM1lmWT4VM0pj4TUvx2tscUsj1il8C9.ZdYHRHgm
	Fv5dh9NBsG6InDeIMM3kxOaawzK_uuicnXjNwB.pur9Fm3SOqrevulhxAjtf
	.S.aZiLlJKc3Llgf1wSW7xt4Y5Fw.IRKCKUJXTavfHgCuuDhXLkLWEGUy43p
	MGpURlp5BaAZCCPn4FRcgXR8LuzRYKkE4rRlgQRfNxqEiglK1nePQLvJO9oz
	fYTpEWVg7jEm_yDWIZQBtsKNnCs4ro06HO.nCEt9.PcGeZ82bAZXAzfIOrSW
	P.LSzU7aJSane6KFM8T.cVrWZw0e5hC5Cxi1lORxPlrV3s97wfN9uhrMEPgY
	85QzzvvODhTLKD3HQxq0uENp8C8Nry6adJovdH5tRfp6DSJJtLVzVjfctXD1
	ztfD21HDxehcQphUxQcchxICT2jf5KdfWOGPCsmWTAjeCc9XDU_Mo3okIz_o
	lb2GNwWjIEz3nJQIg1XJkb.HgVPirO8zt16eoEwzVqD58sa7DHZhNDfX8oIF
	cniAqRF2uhaMrZnoLCsfeGyb4upMVQlHz.SwWDopGOdGjmjwW
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.139.211.125])
	by smtp201.mail.bf1.yahoo.com with SMTP; 04 Feb 2014 07:27:00 -0800 PST
Message-ID: <1391527619.2441.16.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Date: Tue, 04 Feb 2014 08:26:59 -0700
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-Mailman-Approved-At: Tue, 04 Feb 2014 15:28:35 +0000
Subject: [Xen-devel] [TestDay] xen 4.4 RC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xen list,

I have a clean F20 install with the RC3 RPMs and see an error when
starting a VM.  A similar issue was seen with the RC2 RPMs.


[root@xen ~]# xl create f20.xl
Parsing config from f20.xl
libxl: error: libxl_create.c:1054:domcreate_launch_dm: unable to add disk devices
libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device model pid in /local/domain/2/image/device-model-pid
libxl: error: libxl.c:1425:libxl__destroy_domid: libxl__destroy_device_model failed for 2
libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable to get my domid
libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy failed for 2
[root@xen ~]# 
[root@xen ~]# /usr/bin/xenstore-write "/local/domain/0/domid" 0
[root@xen ~]# 
[root@xen ~]# xl create f20.xl
Parsing config from f20.xl
[root@xen ~]# 
[root@xen ~]# 
[root@xen ~]# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  2047     2     r-----      19.7
f20                                          3  4095     1     r-----       4.1


After running the xenstore-write command VMs start up without issue.

Thanks,

Eric



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xen list,

I have a clean F20 install with the RC3 RPMs and see an error when
starting a VM.  A similar issue was seen with the RC2 RPMs.


[root@xen ~]# xl create f20.xl
Parsing config from f20.xl
libxl: error: libxl_create.c:1054:domcreate_launch_dm: unable to add
disk devices
libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device
model pid in /local/domain/2/image/device-model-pid
libxl: error: libxl.c:1425:libxl__destroy_domid:
libxl__destroy_device_model failed for 2
libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable
to get my domid
libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy
failed for 2
[root@xen ~]# 
[root@xen ~]# /usr/bin/xenstore-write "/local/domain/0/domid" 0
[root@xen ~]# 
[root@xen ~]# xl create f20.xl
Parsing config from f20.xl
[root@xen ~]# 
[root@xen ~]# 
[root@xen ~]# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  2047     2     r-----      19.7
f20                                          3  4095     1     r-----       4.1


After running the xenstore-write command VMs start up without issue.
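A minimal sketch of scripting that workaround on an affected host: `ensure_domid_key` is a hypothetical helper (not part of this report); it takes the reader and writer commands as parameters, which in real use would be the stock `xenstore-read` and `xenstore-write` tools, and only rewrites dom0's domid key when it cannot be read back.

```shell
# Hypothetical guard around the workaround above: rewrite /local/domain/0/domid
# only when reading it fails. $1 = reader command, $2 = writer command
# (xenstore-read / xenstore-write in practice).
ensure_domid_key() {
    key=/local/domain/0/domid
    if "$1" "$key" >/dev/null 2>&1; then
        echo present                      # key readable, nothing to do
    else
        "$2" "$key" 0 && echo rewrote     # re-seed dom0's domid, as above
    fi
}
```

On an affected host this would be run as `ensure_domid_key xenstore-read xenstore-write` before `xl create`.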

Thanks,

Eric



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:31:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhy7-0004HN-TV; Tue, 04 Feb 2014 15:31:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WAhy6-0004H9-7k
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:31:22 +0000
Received: from [85.158.143.35:47267] by server-3.bemta-4.messagelabs.com id
	42/77-11539-9C701F25; Tue, 04 Feb 2014 15:31:21 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391527872!3061386!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11102 invoked from network); 4 Feb 2014 15:31:12 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:31:12 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so4216098wes.23
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 07:31:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Xk1MsNqBRRL+K3CKy0mBRhaF41xnFY9KxVi0exohZeg=;
	b=GJzzhLwEu6ViBRTwuKHl3a6CduwFhTidDxtcEJL0q0AWHdSjy256s95rB7wc/Zd3p5
	/jLKQu++OQSPLwulJFuAgz7oB9ib6lbB4YEhRni+jToKSD5Q4op18o6AFZvr6jEFvahS
	LiDnDUTWCwlUL9moN6nBLkBayjijR2L6pqEPtyZc6M334yCqAPiEfTCUmDKc4htJDt+P
	jfb45sXii6debl5dtu/8jRiW9bsyQk0kgWOGxpYKik/xsv4pObwwxhfew0SgPGpuFCtF
	OUvkCMYPrV1i3cnKSDpWx/9E/M5ixcbfa0/Ch6DZyxWjoYoGFxuFAEoNX7EN3JOQ0LBK
	o9qQ==
X-Gm-Message-State: ALoCoQn2dHIfN4XPgEnwfirYKd/Q1F3UiEP3q7gZJ55iHBU/Q+rPwooN+gIDgGCffo5kj2W/oCBq
X-Received: by 10.180.21.166 with SMTP id w6mr13000031wie.31.1391527872294;
	Tue, 04 Feb 2014 07:31:12 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id e5sm53542078wja.15.2014.02.04.07.31.10
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:31:11 -0800 (PST)
Message-ID: <52F107C0.2030509@m2r.biz>
Date: Tue, 04 Feb 2014 16:31:12 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Paolo Bonzini <pbonzini@redhat.com>
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>	<52D65856.6050901@redhat.com>
	<alpine.DEB.2.02.1402031159380.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402031159380.4373@kaball.uk.xensource.com>
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	1257099@bugs.launchpad.net, Don Slutz <dslutz@verizon.com>,
	qemu-devel@nongnu.org, Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 03/02/2014 12:59, Stefano Stabellini ha scritto:
> On Wed, 15 Jan 2014, Paolo Bonzini wrote:
>> Il 03/01/2014 03:12, Don Slutz ha scritto:
>>> Adjust TMPO and added TMPB, TMPL, and TMPA.  libtool needs the names
>>> to be fixed (TMPB).
>>>
>>> Add new functions do_libtool and libtool_prog.
>>>
>>> Add check for broken gcc and libtool.
>>>
>>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>>> ---
>>> Was posted as an attachment.
>>>
>>> https://lists.gnu.org/archive/html/qemu-devel/2013-12/msg02678.html
>>>
>>>   configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>>   1 file changed, 62 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/configure b/configure
>>> index edfea95..852d021 100755
>>> --- a/configure
>>> +++ b/configure
>>> @@ -12,7 +12,10 @@ else
>>>   fi
>>>   
>>>   TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
>>> -TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
>>> +TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
>>> +TMPO="${TMPDIR1}/${TMPB}.o"
>>> +TMPL="${TMPDIR1}/${TMPB}.lo"
>>> +TMPA="${TMPDIR1}/lib${TMPB}.la"
>>>   TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
>>>   
>>>   # NB: do not call "exit" in the trap handler; this is buggy with some shells;
>>> @@ -86,6 +89,38 @@ compile_prog() {
>>>     do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
>>>   }
>>>   
>>> +do_libtool() {
>>> +    local mode=$1
>>> +    shift
>>> +    # Run the compiler, capturing its output to the log.
>>> +    echo $libtool $mode --tag=CC $cc "$@" >> config.log
>>> +    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
>>> +    # Test passed. If this is an --enable-werror build, rerun
>>> +    # the test with -Werror and bail out if it fails. This
>>> +    # makes warning-generating-errors in configure test code
>>> +    # obvious to developers.
>>> +    if test "$werror" != "yes"; then
>>> +        return 0
>>> +    fi
>>> +    # Don't bother rerunning the compile if we were already using -Werror
>>> +    case "$*" in
>>> +        *-Werror*)
>>> +           return 0
>>> +        ;;
>>> +    esac
>>> +    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
>>> +    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
>>> +    error_exit "configure test passed without -Werror but failed with -Werror." \
>>> +        "This is probably a bug in the configure script. The failing command" \
>>> +        "will be at the bottom of config.log." \
>>> +        "You can run configure with --disable-werror to bypass this check."
>>> +}
>>> +
>>> +libtool_prog() {
>>> +    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
>>> +    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
>>> +}
>>> +
>>>   # symbolically link $1 to $2.  Portable version of "ln -sf".
>>>   symlink() {
>>>     rm -rf "$2"
>>> @@ -1367,6 +1402,32 @@ EOF
>>>     fi
>>>   fi
>>>   
>>> +# check for broken gcc and libtool in RHEL5
>>> +if test -n "$libtool" -a "$pie" != "no" ; then
>>> +  cat > $TMPC <<EOF
>>> +
>>> +void *f(unsigned char *buf, int len);
>>> +void *g(unsigned char *buf, int len);
>>> +
>>> +void *
>>> +f(unsigned char *buf, int len)
>>> +{
>>> +    return (void*)0L;
>>> +}
>>> +
>>> +void *
>>> +g(unsigned char *buf, int len)
>>> +{
>>> +    return f(buf, len);
>>> +}
>>> +
>>> +EOF
>>> +  if ! libtool_prog; then
>>> +    echo "Disabling libtool due to broken toolchain support"
>>> +    libtool=
>>> +  fi
>>> +fi
>>> +
>>>   ##########################################
>>>   # __sync_fetch_and_and requires at least -march=i486. Many toolchains
>>>   # use i686 as default anyway, but for those that don't, an explicit
>>>
>> I'm applying this to a "configure" branch on my github repository.  Thanks!
> Paolo, did this patch ever make it upstream? If so, do you have a commit
> id?

I searched for it in the upstream qemu git (master branch, now with qemu 2.0 in 
development) and I did not find it.
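One generic way to answer that kind of question (a sketch, not something used in this thread; `find_patch` is a hypothetical helper) is to grep the upstream log for the patch subject:

```shell
# find_patch PATTERN [DIR]: list commits whose subject matches PATTERN.
# Empty output suggests the patch never landed on the checked-out branch.
find_patch() {
    git -C "${2:-.}" log --oneline --grep="$1"
}
```

For this patch that would be something like `find_patch 'Disable libtool' ~/src/qemu`.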



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:31:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:31:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhye-0004MM-Ga; Tue, 04 Feb 2014 15:31:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1WAhyc-0004Lt-LF
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:31:55 +0000
Received: from [85.158.137.68:14640] by server-12.bemta-3.messagelabs.com id
	25/33-01674-8E701F25; Tue, 04 Feb 2014 15:31:52 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391527910!13238783!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28469 invoked from network); 4 Feb 2014 15:31:51 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-31.messagelabs.com with SMTP;
	4 Feb 2014 15:31:51 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14FUgRo032190
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 10:30:45 -0500
Received: from thinkpad (vpn-234-22.phx2.redhat.com [10.3.234.22])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s14FUSvZ007706; Tue, 4 Feb 2014 10:30:30 -0500
Date: Tue, 4 Feb 2014 16:30:24 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140204163024.2042e8d3@thinkpad>
In-Reply-To: <1047320768.20140204160708@eikelenboom.it>
References: <1265435524.20140204004608@eikelenboom.it>
	<20140204143219.GA7561@redhat.com>
	<1047320768.20140204160708@eikelenboom.it>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] Commit
 9e047b982452c633882b486682966c1d97097015 (piix4: add acpi pci hotplug
 support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014 16:07:08 +0100
Sander Eikelenboom <linux@eikelenboom.it> wrote:

> 
> Tuesday, February 4, 2014, 3:32:19 PM, you wrote:
> 
> > On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
> >> Grmbll my fat fingers hit the send shortcut too soon by accident ..
> >> let's try again ..
> >> 
> >> Hi Michael,
> >> 
> >> A git bisect turned out that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
> >> 
> >> commit 9e047b982452c633882b486682966c1d97097015
> >> Author: Michael S. Tsirkin <mst@redhat.com>
> >> Date:   Mon Oct 14 18:01:20 2013 +0300
> >> 
> >>     piix4: add acpi pci hotplug support
> >> 
> >>     Add support for acpi pci hotplug using the
> >>     new infrastructure.
> >>     PIIX4 legacy interface is maintained as is for
> >>     machine types 1.7 and older.
> >> 
> >>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >> 
> >> 
> >> The error is not very verbose :
> >> 
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> 
> >> So it seems there is an issue with preserving the legacy interface.
> 
> 
> > Which machine type is broken?
> 
> xenfv
> 
> > What's the command line used?
> 
> See below the output of the creation of the guest
> 
> Strange thing is:
> char device redirected to /dev/pts/15 (label serial0)
> vgabios-cirrus.bin: ROM id 101300b8 / PCI id 101300b8
> efi-e1000.rom: ROM id 8086100e / PCI id 8086100e
> VNC server running on `127.0.0.1:5900'
> xen_platform: changed ro/rw state of ROM memory area. now is rw state.
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fd000 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 06:01.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.1 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fe000 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=2
> [00:05.0] xen_pt_initfn: Real physical device 06:01.1 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=2
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.2 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0xfe0ffc00 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_msi_set_enable: disabling MSI.
> 
> It does log success for registering the pci devices .. however QMP reports that the device initialization failed.
> And an lspci in the guest also doesn't show the devices.
> 
> > What's the value of has_acpi_build in hw/i386/pc_piix.c?
> static bool has_acpi_build = true;
> 
> > What happens if you add
> > -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
> 
> That makes it work again ...
Looks like a missing bsel property; could you run qemu with the following
debug patch to make sure that's the case?
(run without -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off)

diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
index 4345f5d..fc72cc9 100644
--- a/hw/acpi/pcihp.c
+++ b/hw/acpi/pcihp.c
@@ -192,6 +192,7 @@ int acpi_pcihp_device_hotplug(AcpiPciHpState *s, PCIDevice *dev,
 {
     int slot = PCI_SLOT(dev->devfn);
     int bsel = acpi_pcihp_get_bsel(dev->bus);
+    fprintf(stderr, "bsel: %d, bus: %s\n", bsel, dev->bus->qbus.name);
     if (bsel < 0) {
         return -1;
     }


> 
> 
> >> --
> >> Sander
> >> 
> 
> 
> Parsing config from /etc/xen/domU/production/security.cfg
> libxl: debug: libxl_create.c:1348:do_domain_create: ao 0x1415900: create: how=(nil) callback=(nil) poller=0x1415960
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hdb, using backend phy
> libxl: debug: libxl_create.c:803:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415ce8: deregister unregistered
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9efa8
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19efa8
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->000000000019efa8
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100000
> xc: detail: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7fdc08ee7000 -> 0x7fdc08f7ce2d
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: register slotnum=2
> libxl: debug: libxl_create.c:1362:do_domain_create: ao 0x1415900: inprogress: poller=0x1415960, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: event epath=/local/domain/0/backend/vbd/30/768/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14171e8: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/768/state token=3/0: empty slot
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: event epath=/local/domain/0/backend/vbd/30/832/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/832/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1418238: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/832/state token=2/1: empty slot
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_dm.c:1308:libxl__spawn_local_dm: Spawning device-model /usr/local/lib/xen/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   /usr/local/lib/xen/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   30
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-30,server,nowait
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -nodefaults
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   security
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   cirrus-vga
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   4,maxcpus=4
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   e1000,id=nic0,netdev=net0,mac=00:16:3e:a0:72:69
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif30.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -machine
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security_data,if=ide,index=1,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415f20: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 3
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419d68: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.0",
>         "hostaddr": "0000:06:01.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.1",
>         "hostaddr": "0000:06:01.1"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.2",
>         "hostaddr": "0000:06:01.2"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-08_00.0",
>         "hostaddr": "0000:08:00.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x1415900: progress report: ignored
> libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x1415900: complete, rc=0
> libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x1415900: destroy
> xc: debug: hypercall buffer: total allocations:530 total releases:530
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:526 misses:2 toobig:2
> 
> 


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:31:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:31:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhye-0004MM-Ga; Tue, 04 Feb 2014 15:31:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1WAhyc-0004Lt-LF
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:31:55 +0000
Received: from [85.158.137.68:14640] by server-12.bemta-3.messagelabs.com id
	25/33-01674-8E701F25; Tue, 04 Feb 2014 15:31:52 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391527910!13238783!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28469 invoked from network); 4 Feb 2014 15:31:51 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-31.messagelabs.com with SMTP;
	4 Feb 2014 15:31:51 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14FUgRo032190
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 10:30:45 -0500
Received: from thinkpad (vpn-234-22.phx2.redhat.com [10.3.234.22])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s14FUSvZ007706; Tue, 4 Feb 2014 10:30:30 -0500
Date: Tue, 4 Feb 2014 16:30:24 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140204163024.2042e8d3@thinkpad>
In-Reply-To: <1047320768.20140204160708@eikelenboom.it>
References: <1265435524.20140204004608@eikelenboom.it>
	<20140204143219.GA7561@redhat.com>
	<1047320768.20140204160708@eikelenboom.it>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] Commit
 9e047b982452c633882b486682966c1d97097015 (piix4: add acpi pci hotplug
 support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014 16:07:08 +0100
Sander Eikelenboom <linux@eikelenboom.it> wrote:

> 
> Tuesday, February 4, 2014, 3:32:19 PM, you wrote:
> 
> > On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
> >> Grmbll my fat fingers hit the send shortcut too soon by accident ..
> >> let's try again ..
> >> 
> >> Hi Michael,
> >> 
> >> A git bisect turned out that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
> >> 
> >> commit 9e047b982452c633882b486682966c1d97097015
> >> Author: Michael S. Tsirkin <mst@redhat.com>
> >> Date:   Mon Oct 14 18:01:20 2013 +0300
> >> 
> >>     piix4: add acpi pci hotplug support
> >> 
> >>     Add support for acpi pci hotplug using the
> >>     new infrastructure.
> >>     PIIX4 legacy interface is maintained as is for
> >>     machine types 1.7 and older.
> >> 
> >>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> >> 
> >> 
> >> The error is not very verbose :
> >> 
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> >> 
> >> So it seems there is an issue with preserving the legacy interface.
> 
> 
> > Which machine type is broken?
> 
> xenfv
> 
> > What's the command line used?
> 
> See below the output of the creation of the guest
> 
> Strange thing is:
> char device redirected to /dev/pts/15 (label serial0)
> vgabios-cirrus.bin: ROM id 101300b8 / PCI id 101300b8
> efi-e1000.rom: ROM id 8086100e / PCI id 8086100e
> VNC server running on `127.0.0.1:5900'
> xen_platform: changed ro/rw state of ROM memory area. now is rw state.
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fd000 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 06:01.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.1 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fe000 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=2
> [00:05.0] xen_pt_initfn: Real physical device 06:01.1 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=2
> [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.2 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0xfe0ffc00 type: 0)
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=3
> [00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
> [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
> [00:05.0] xen_pt_pci_intx: intx=1
> [00:05.0] xen_pt_msi_set_enable: disabling MSI.
> 
> It does log success for registering the PCI devices; however, QMP reports that device initialization failed.
> And an lspci in the guest also doesn't show the devices.
> 
> > What's the value of has_acpi_build in hw/i386/pc_piix.c?
> static bool has_acpi_build = true;
> 
> > What happens if you add
> > -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
> 
> That makes it work again ...
Looks like a missing bsel property. Could you run QEMU with the following
debug patch to make sure that's the case?
(Run without -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off.)

diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
index 4345f5d..fc72cc9 100644
--- a/hw/acpi/pcihp.c
+++ b/hw/acpi/pcihp.c
@@ -192,6 +192,7 @@ int acpi_pcihp_device_hotplug(AcpiPciHpState *s, PCIDevice *dev,
 {
     int slot = PCI_SLOT(dev->devfn);
     int bsel = acpi_pcihp_get_bsel(dev->bus);
+    fprintf(stderr, "bsel: %d, bus: %s\n", bsel, dev->bus->qbus.name);
     if (bsel < 0) {
         return -1;
     }


> 
> 
> >> --
> >> Sander
> >> 
> 
> 
> Parsing config from /etc/xen/domU/production/security.cfg
> libxl: debug: libxl_create.c:1348:do_domain_create: ao 0x1415900: create: how=(nil) callback=(nil) poller=0x1415960
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk vdev=hdb, using backend phy
> libxl: debug: libxl_create.c:803:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415ce8: deregister unregistered
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9efa8
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19efa8
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->000000000019efa8
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100000
> xc: detail: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7fdc08ee7000 -> 0x7fdc08f7ce2d
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk vdev=hdb spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: register slotnum=2
> libxl: debug: libxl_create.c:1362:do_domain_create: ao 0x1415900: inprogress: poller=0x1415960, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: event epath=/local/domain/0/backend/vbd/30/768/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x14171e8 wpath=/local/domain/0/backend/vbd/30/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14171e8: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/768/state token=3/0: empty slot
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: event epath=/local/domain/0/backend/vbd/30/832/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vbd/30/832/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1418238 wpath=/local/domain/0/backend/vbd/30/832/state token=2/1: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1418238: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:472:watchfd_callback: watch epath=/local/domain/0/backend/vbd/30/832/state token=2/1: empty slot
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1417270: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x14182c0: deregister unregistered
> libxl: debug: libxl_dm.c:1308:libxl__spawn_local_dm: Spawning device-model /usr/local/lib/xen/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   /usr/local/lib/xen/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   30
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-30,server,nowait
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -nodefaults
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   security
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   cirrus-vga
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   4,maxcpus=4
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   e1000,id=nic0,netdev=net0,mac=00:16:3e:a0:72:69
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif30.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -machine
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1310:libxl__spawn_local_dm:   file=/dev/xen_vms/security_data,if=ide,index=1,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: event epath=/local/domain/0/device-model/30/state
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1415f20 wpath=/local/domain/0/device-model/30/state token=2/2: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1415f20: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 3
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: register slotnum=2
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: event epath=/local/domain/0/backend/vif/30/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback: backend /local/domain/0/backend/vif/30/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch w=0x1419d68 wpath=/local/domain/0/backend/vif/30/0/state token=2/3: deregister slotnum=2
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419d68: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch w=0x1419df0: deregister unregistered
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.0",
>         "hostaddr": "0000:06:01.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.1",
>         "hostaddr": "0000:06:01.1"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-06_01.2",
>         "hostaddr": "0000:06:01.2"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-30
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-08_00.0",
>         "hostaddr": "0000:08:00.0"
>     }
> }
> '
> libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: error
> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x1415900: progress report: ignored
> libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x1415900: complete, rc=0
> libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x1415900: destroy
> xc: debug: hypercall buffer: total allocations:530 total releases:530
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:526 misses:2 toobig:2
> 
> 


-- 
Regards,
  Igor

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:32:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhyr-0004OV-Tm; Tue, 04 Feb 2014 15:32:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAhyq-0004O6-1C
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:32:08 +0000
Received: from [193.109.254.147:31295] by server-10.bemta-14.messagelabs.com
	id 6B/D2-10711-7F701F25; Tue, 04 Feb 2014 15:32:07 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391527926!1937357!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24253 invoked from network); 4 Feb 2014 15:32:06 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:32:06 -0000
Received: by mail-ea0-f181.google.com with SMTP id m10so4504324eaj.12
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 07:32:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=k09tvt1dSQQ1ZFrLvdEvWSMzldv/F/zJP0sYMk8Wp1c=;
	b=a62ZmTWswuxcmVZ3b+VokzLyv8eivWDHAbYJGAEq+g380Rx0QvfZ+lk+s3/Lqtmy1I
	t1cVkmWuYfmrGAdwbAqhUjKBt1jwYgYCVAGySISUI5MaaYmAQy5BmRu44qPtMrxcyLtO
	HHxQzEX2BB1zAcFc808coJec/V5Br+DatAhS3ehqQGVibuVhLrf8672dmm1mWhHJGnNd
	O3fvROMgZTIgkdirZslZcrzLisVb1SY+TB1LYGYhrMoy0plvlr6DQSr5fOAc4fK/9+Pn
	o6aMLZcMPD9HJaO0xa4KP8+AJyyk+Pq9FIbOTdaUYP9sfg3FKh57WmD3xA1/oz2+sUbU
	NjhQ==
X-Gm-Message-State: ALoCoQlPFODm1TvHrgclYBoMtYHOZq9V7LDyto8kA+41tgywbutspZWnopYjwfLKcFZ1XFsm9Iux
X-Received: by 10.15.55.193 with SMTP id v41mr4057411eew.80.1391527926190;
	Tue, 04 Feb 2014 07:32:06 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v7sm89733417eel.2.2014.02.04.07.32.04
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:32:05 -0800 (PST)
Message-ID: <52F107F3.9080302@linaro.org>
Date: Tue, 04 Feb 2014 15:32:03 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1401291144410.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401291144410.4373@kaball.uk.xensource.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1 0/2] xen/arm: maintenance_interrupt SMP
 fix
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 01/29/2014 11:46 AM, Stefano Stabellini wrote:
> Thinking twice about it, it might be the only acceptable change for 4.4.

On reflection, if Ian's patch to prioritize the IPI
(http://www.gossamer-threads.com/lists/xen/devel/315342?do=post_view_threaded)
is not accepted for Xen 4.4, this patch might be useful for Oleksandr.

Can you send it with a commit message and a Signed-off-by?

> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..af96a31 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +795,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>  
>      level = dt_irq_is_level_triggered(irq);
>  
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);

I would add a TODO before the function and perhaps explain why...

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:33:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:33:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAhzj-0004X7-DB; Tue, 04 Feb 2014 15:33:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAhzh-0004Wo-AE
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:33:01 +0000
Received: from [85.158.143.35:7625] by server-3.bemta-4.messagelabs.com id
	15/7B-11539-C2801F25; Tue, 04 Feb 2014 15:33:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391527978!3084826!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8028 invoked from network); 4 Feb 2014 15:32:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:32:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97782781"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:32:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 10:32:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAhzc-0005Kq-TU;
	Tue, 04 Feb 2014 15:32:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAhzc-0003qj-Kf;
	Tue, 04 Feb 2014 15:32:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21233.2088.315549.191502@mariner.uk.xensource.com>
Date: Tue, 4 Feb 2014 15:32:56 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
	guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
> 
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has occurred
> gets committed to real RAM. To achieve this add a new cacheflush_page function,
> which is a stub on x86.

> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..306b414 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,9 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>          prev->next = phys->next;
>      else
>          dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid,
> +                         phys->first, phys->first + phys->count);
>  }

The approach you are taking here is that for pages explicitly mapped
by some libxc caller, you do the flush on unmap.  But what about
callers who don't unmap?  Are there callers which don't unmap and
which instead rely on memory coherency assumptions which aren't
true on ARM?

Aside from this question, the code in this patch looks fine to me.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:33:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi01-0004bH-1H; Tue, 04 Feb 2014 15:33:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAhzy-0004ac-IP
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:33:19 +0000
Received: from [193.109.254.147:51505] by server-16.bemta-14.messagelabs.com
	id 31/D9-21945-D3801F25; Tue, 04 Feb 2014 15:33:17 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391527971!1965676!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8378 invoked from network); 4 Feb 2014 15:32:52 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:32:52 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so4218743wes.23
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 07:32:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:in-reply-to:references:mime-version
	:content-transfer-encoding:content-type:subject:from:date:to:cc
	:message-id; bh=dsg8x+XUKtOgqHX223FO8WcypZGXz4u9qRXlUjdfwd8=;
	b=xHdhaLsMS6jYD/sdGMV/hWij+0dZ9gMXqgY9HfgGM5nKyl/CHcGuMvapsBaLthneNJ
	BmpsuYcgldclhDnlGPpfT+7SUmR4Npk5jFgbs8ffZUw1blC3iPnb2oQMl9EcHFgBOVak
	gRckoU7UoOUksBuCuHa0ZQecBqBBR+DgtBLyiI5uYSrFLYp7Si54A1OxjYs3sZ0sxFEX
	GwbAdaO1HJDt1X+GDCD0xY5DsKDfuL1rLS1jlk8fjmJ5bTm836M1oNY2t0lBdrgkNy2f
	wb/4vOJg5pakBA8P8HwLJlZJmp8G//DUMD2MS/6eD3kUP66+BRLMPgFRgRnYfqEwDBxL
	TWLQ==
X-Received: by 10.180.12.65 with SMTP id w1mr13076185wib.58.1391527971756;
	Tue, 04 Feb 2014 07:32:51 -0800 (PST)
Received: from [10.10.250.6] (20.234.54.77.rev.vodafone.pt. [77.54.234.20])
	by mx.google.com with ESMTPSA id u6sm36785484wif.6.2014.02.04.07.32.50
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:32:51 -0800 (PST)
User-Agent: K-9 Mail for Android
In-Reply-To: <1391513622.10515.75.camel@kazak.uk.xensource.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
From: "Mike C." <miguelmclara@gmail.com>
Date: Tue, 04 Feb 2014 15:32:41 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
	second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On February 4, 2014 11:33:42 AM GMT, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>On Tue, 2014-02-04 at 02:45 +0000, Miguel Clara wrote:
>> The wiki mentions "NBD" but if I got it right this means the storage
>> is in "host1" and all NBD does is connect to that host, so the disk
>> is never copied to host 2?
>> 
>> Am I correct to assume this?
>
>Yes. N is for network.
>
>> I have a guest that I need to move from host1 to host2, or back if
>> needed; there's no problem stopping it first, but I guess
>> this is not what the "migrate" command is used for!
>> 
>> And for a guest where I do want to keep a backup with Remus, is shared
>> storage still a must? The wiki http://wiki.xen.org/wiki/Remus states:
>> 
>> -> Shared storage is not required
>> 
>> But if "migrate" doesn't work, how would Remus?
>
>I think Remus with xend supports out-of-band storage synchronisation.
>This has not yet been implemented for xl remus support.
>

And would xl remus work if the disks use a DRBD backend which in turn points to LVM?

Thanks

>> 
>> Sorry for all the questions, just trying to understand this better, and
>> there's really no documentation about
>> xl migrate and xl remus!
>> 
>> thanks
>> 
>> On Mon, Feb 3, 2014 at 3:09 PM, Mike C. <miguelmclara@gmail.com>
>wrote:
>> > Makes sense...
>> >
>> > Is there documentation about xl migrate and xl remus?
>> >
>> > Say I want to migrate a host but first pause it?
>> >
>> > I could also snapshot the LVM, but that doesn't save the memory and
>> > the domain would have to be offline.
>> >
>> > So if I want to migrate but don't have shared storage, what's the
>> > best approach, DRBD?
>> >
>> > Thanks
>> >
>> >
>> >
>> > On February 3, 2014 10:17:04 AM GMT, Ian Campbell
><Ian.Campbell@citrix.com>
>> > wrote:
>> >>
>> >> On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
>> >>>
>> >>>  I'm testing live migration without shared storage (I use LVM on
>> >>> both sides)
>> >>>
>> >>>
>> >>>  Issuing "xl migrate" worked nicely and the machine was migrated
>> >>> to the second host
>> >>>
>> >>>  However I see this in the second host log:
>> >>>
>> >>>  [ 1502.563251] EXT4-fs error (device xvda1):
>> >>>  htree_dirblock_to_tree:892: inode #136303: block 533250: comm
>> >>>  run-parts: bad entry in directory: rec_len is smaller than
>> >>>  minimal - offset=0(0), inode=0, rec_len=0, name_len=0
>> >>>  Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
>> >>>  (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
>> >>>  533250: comm run-parts: bad entry in directory: rec_len is smaller
>> >>>  than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
>> >>>
>> >>>  I also get errors
>> >>> like:
>> >>>  -bash: /bin/ping: cannot execute binary file
>> >>>
>> >>>  Is this to be expected when using non-shared storage?
>> >>
>> >>
>> >> Yes. If the underlying disk is not the same device on both hosts
>> >> then all bets are off and all sorts of bad things will happen. Think
>> >> about it -- what would you expect to happen to an OS if a disk
>> >> suddenly started returning completely different data from what was
>> >> written to it.
>> >>
>> >> What you have seen seems like a plausible outcome.
>> >>
>> >> Ian.
>> >>
>> >
>> > --
>> > Sent from my Android device with K-9 Mail. Please excuse my
>> > brevity.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:33:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi01-0004bH-1H; Tue, 04 Feb 2014 15:33:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAhzy-0004ac-IP
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:33:19 +0000
Received: from [193.109.254.147:51505] by server-16.bemta-14.messagelabs.com
	id 31/D9-21945-D3801F25; Tue, 04 Feb 2014 15:33:17 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391527971!1965676!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8378 invoked from network); 4 Feb 2014 15:32:52 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:32:52 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so4218743wes.23
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 07:32:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:in-reply-to:references:mime-version
	:content-transfer-encoding:content-type:subject:from:date:to:cc
	:message-id; bh=dsg8x+XUKtOgqHX223FO8WcypZGXz4u9qRXlUjdfwd8=;
	b=xHdhaLsMS6jYD/sdGMV/hWij+0dZ9gMXqgY9HfgGM5nKyl/CHcGuMvapsBaLthneNJ
	BmpsuYcgldclhDnlGPpfT+7SUmR4Npk5jFgbs8ffZUw1blC3iPnb2oQMl9EcHFgBOVak
	gRckoU7UoOUksBuCuHa0ZQecBqBBR+DgtBLyiI5uYSrFLYp7Si54A1OxjYs3sZ0sxFEX
	GwbAdaO1HJDt1X+GDCD0xY5DsKDfuL1rLS1jlk8fjmJ5bTm836M1oNY2t0lBdrgkNy2f
	wb/4vOJg5pakBA8P8HwLJlZJmp8G//DUMD2MS/6eD3kUP66+BRLMPgFRgRnYfqEwDBxL
	TWLQ==
X-Received: by 10.180.12.65 with SMTP id w1mr13076185wib.58.1391527971756;
	Tue, 04 Feb 2014 07:32:51 -0800 (PST)
Received: from [10.10.250.6] (20.234.54.77.rev.vodafone.pt. [77.54.234.20])
	by mx.google.com with ESMTPSA id u6sm36785484wif.6.2014.02.04.07.32.50
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:32:51 -0800 (PST)
User-Agent: K-9 Mail for Android
In-Reply-To: <1391513622.10515.75.camel@kazak.uk.xensource.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
From: "Mike C." <miguelmclara@gmail.com>
Date: Tue, 04 Feb 2014 15:32:41 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
	second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On February 4, 2014 11:33:42 AM GMT, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>On Tue, 2014-02-04 at 02:45 +0000, Miguel Clara wrote:
>> The wiki mentions "NBD" but if I got it right this means the storage
>> is in "host1" and all ndb does is connect to that host, so the disk
>is
>> never copied to disk 2?
>> 
>> Am I correct to assume this?
>
>Yes. N is for network.
>
>> I have a host where what I need is to move it from host1 to host2, or
>> reverse if needed, there's no problem stopping it first, but I guess
>> this is not what the "migrate" command is used for!
>> 
>> And for guest where I do wan to keep a backup with remus, is shared
>> storage still a must? the wiki http://wiki.xen.org/wiki/Remus states:
>> 
>> -> Shared storage is not required
>> 
>> But if "migrate" doesn't work how would remus?
>
>I think Remus with xend supports out of band storage synchronisation.
>This has not yet been implemented for xl remus support.
>

And would xl remus work if the disks are on a DRBD backend which in turn points to LVM?

Thanks

>> 
>> Sorry for all the questions just trying to understand this better,
>and
>> there's really no documentation about:
>> xl migrate and xl remus!
>> 
>> thanks
>> 
>> On Mon, Feb 3, 2014 at 3:09 PM, Mike C. <miguelmclara@gmail.com>
>wrote:
>> > Makes sense...
>> >
>> > Is there documentation about xl migrate and xl remus?
>> >
>> > Say I want to migrate a host but first pause it?
>> >
>> > I could also snapshot the lvm but that doesn't save the memory and
>the
>> > domain would have to be offline.
>> >
>> > So if I want to migrate but don't have shared storage, what's the
>best
>> > approach drdb?
>> >
>> > Thanks
>> >
>> >
>> >
>> > On February 3, 2014 10:17:04 AM GMT, Ian Campbell
><Ian.Campbell@citrix.com>
>> > wrote:
>> >>
>> >> On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
>> >>>
>> >>>  I'm testing live migration without shared storage (I use LVM at
>both
>> >>> sides)
>> >>>
>> >>>
>> >>>  Issuing "xl migrate" worked nice and the machine was migrated to
>the
>> >>> second host
>> >>>
>> >>>  However I see this in the second host log:
>> >>>
>> >>>  [ 1502.563251] EXT4-fs error (device xvda1):
>> >>>  htree_dirblock_to_tree:892: inode #136303: block 533250: comm
>> >>>  run-parts: bad entry in directory: rec_len is smaller than
>minimal -
>> >>>  offset=0(0), inode=0, rec_len=0, name_len=0
>> >>>  Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
>> >>>  (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
>> >>>  533250: comm run-parts: bad entry in directory: rec_len is
>smaller
>> >>>  than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
>> >>>
>> >>>  I also get errors
>> >>> like:
>> >>>  -bash: /bin/ping: cannot execute binary file
>> >>>
>> >>>  Is this to be expect on using none shared storage?
>> >>
>> >>
>> >> Yes. If the underlying disk is not the same device between both
>hosts
>> >> then all bets are off and all sorts of bad things will be happen.
>Think
>> >> about it -- what would you expect to happen to an OS if a disk
>suddenly
>> >> started returning completely different data to what was written to
>it.
>> >>
>> >> What you have seen seems like a plausible outcome.
>> >>
>> >> Ian.
>> >>
>> >
>> > --
>> > Sent from my Android device with K-9 Mail. Please excuse my
>brevity.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:34:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi0r-0004mB-Gk; Tue, 04 Feb 2014 15:34:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAi0q-0004lx-GZ
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:34:12 +0000
Received: from [85.158.137.68:53008] by server-6.bemta-3.messagelabs.com id
	48/EB-09180-37801F25; Tue, 04 Feb 2014 15:34:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391528048!13350938!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13815 invoked from network); 4 Feb 2014 15:34:09 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:34:09 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14FX1kl016822
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:33:02 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FX0R3007097
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 15:33:00 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FX0G8007089; Tue, 4 Feb 2014 15:33:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:33:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B683E1C0972; Tue,  4 Feb 2014 10:32:58 -0500 (EST)
Date: Tue, 4 Feb 2014 10:32:58 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140204153258.GA6847@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F10F2402000078001190B7@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: george.dunlap@eu.citrix.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
> >>> On 04.02.14 at 15:48, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Feb 04, 2014 at 08:54:59AM +0000, Jan Beulich wrote:
> >> >>> On 03.02.14 at 18:03, Konrad Rzeszutek Wilk <konrad@kernel.org> wrote:
> >> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> >> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> >> > @@ -1400,7 +1400,7 @@ void nvmx_switch_guest(void)
> >> >       * no virtual vmswith is allowed. Or else, the following IO
> >> >       * emulation will handled in a wrong VCPU context.
> >> >       */
> >> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> >> > +    if ( get_ioreq(v) && get_ioreq(v)->state != STATE_IOREQ_NONE )
> >> 
> >> As Mukesh pointed out, calling get_ioreq() twice is inefficient.
> >> 
> >> But to me it's not clear whether a PVH vCPU getting here is wrong
> >> in the first place, i.e. I would think the above condition should be
> >> || rather than && (after all, even if nested HVM one day became
> > 
> > I presume you mean like this:
> > 
> > 	if ( !get_ioreq(v) || get_ioreq(v)->state != STATE_IOREQ_NONE )
> > 		return;
> > 
> > If the Intel maintainers are OK with that I can do it that (and only
> > do one get_ioreq(v) call) and expand the comment.
> > 
> > Or just take the simple route and squash Mukesh's patch in mine and
> > revist this later - as I would prefer to make the minimal amount of
> > changes to any code in during rc3.
> 
> Wasn't it that Mukesh's patch simply was yours with the two
> get_ioreq()s folded by using a local variable?

Yes. Like so:

>From 6864786da87c4d67998a8ef46a3d289ac85074a9 Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With the PVH
enabled that is no longer the case - which means that we do not have
to have the IO-backend device (QEMU) enabled.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an io based backend.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..143b7a2 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
-
+    ioreq_t *p = get_ioreq(v);
     /*
      * a pending IO emualtion may still no finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( p && p->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:35:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi1t-0004yS-0U; Tue, 04 Feb 2014 15:35:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAi1r-0004y4-9B
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:35:15 +0000
Received: from [193.109.254.147:23040] by server-4.bemta-14.messagelabs.com id
	80/F6-32066-2B801F25; Tue, 04 Feb 2014 15:35:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391528112!1965872!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21782 invoked from network); 4 Feb 2014 15:35:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:35:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97784310"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:35:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:35:11 -0500
Message-ID: <1391528110.6497.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mike C. <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 15:35:10 +0000
In-Reply-To: <08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:32 +0000, Mike C. wrote:
> 
> On February 4, 2014 11:33:42 AM GMT, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >On Tue, 2014-02-04 at 02:45 +0000, Miguel Clara wrote:
> >> The wiki mentions "NBD" but if I got it right this means the storage
> >> is in "host1" and all ndb does is connect to that host, so the disk
> >is
> >> never copied to disk 2?
> >> 
> >> Am I correct to assume this?
> >
> >Yes. N is for network.
> >
> >> I have a host where what I need is to move it from host1 to host2, or
> >> reverse if needed, there's no problem stopping it first, but I guess
> >> this is not what the "migrate" command is used for!
> >> 
> >> And for guest where I do wan to keep a backup with remus, is shared
> >> storage still a must? the wiki http://wiki.xen.org/wiki/Remus states:
> >> 
> >> -> Shared storage is not required
> >> 
> >> But if "migrate" doesn't work how would remus?
> >
> >I think Remus with xend supports out of band storage synchronisation.
> >This has not yet been implemented for xl remus support.
> >
> 
> And would xl remus work if the disks are on a DRBD backend which in turn points to LVM?

I don't know, CCing Shriram the maintainer.

> 
> Thanks
> 
> >> 
> >> Sorry for all the questions just trying to understand this better,
> >and
> >> there's really no documentation about:
> >> xl migrate and xl remus!
> >> 
> >> thanks
> >> 
> >> On Mon, Feb 3, 2014 at 3:09 PM, Mike C. <miguelmclara@gmail.com>
> >wrote:
> >> > Makes sense...
> >> >
> >> > Is there documentation about xl migrate and xl remus?
> >> >
> >> > Say I want to migrate a host but first pause it?
> >> >
> >> > I could also snapshot the lvm but that doesn't save the memory and
> >the
> >> > domain would have to be offline.
> >> >
> >> > So if I want to migrate but don't have shared storage, what's the
> >best
> >> > approach drdb?
> >> >
> >> > Thanks
> >> >
> >> >
> >> >
> >> > On February 3, 2014 10:17:04 AM GMT, Ian Campbell
> ><Ian.Campbell@citrix.com>
> >> > wrote:
> >> >>
> >> >> On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
> >> >>>
> >> >>>  I'm testing live migration without shared storage (I use LVM at
> >both
> >> >>> sides)
> >> >>>
> >> >>>
> >> >>>  Issuing "xl migrate" worked nice and the machine was migrated to
> >the
> >> >>> second host
> >> >>>
> >> >>>  However I see this in the second host log:
> >> >>>
> >> >>>  [ 1502.563251] EXT4-fs error (device xvda1):
> >> >>>  htree_dirblock_to_tree:892: inode #136303: block 533250: comm
> >> >>>  run-parts: bad entry in directory: rec_len is smaller than
> >minimal -
> >> >>>  offset=0(0), inode=0, rec_len=0, name_len=0
> >> >>>  Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
> >> >>>  (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
> >> >>>  533250: comm run-parts: bad entry in directory: rec_len is
> >smaller
> >> >>>  than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
> >> >>>
> >> >>>  I also get errors
> >> >>> like:
> >> >>>  -bash: /bin/ping: cannot execute binary file
> >> >>>
> >> >>>  Is this to be expect on using none shared storage?
> >> >>
> >> >>
> >> >> Yes. If the underlying disk is not the same device between both
> >hosts
> >> >> then all bets are off and all sorts of bad things will be happen.
> >Think
> >> >> about it -- what would you expect to happen to an OS if a disk
> >suddenly
> >> >> started returning completely different data to what was written to
> >it.
> >> >>
> >> >> What you have seen seems like a plausible outcome.
> >> >>
> >> >> Ian.
> >> >>
> >> >
> >> > --
> >> > Sent from my Android device with K-9 Mail. Please excuse my
> >brevity.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:35:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi1t-0004yS-0U; Tue, 04 Feb 2014 15:35:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAi1r-0004y4-9B
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 15:35:15 +0000
Received: from [193.109.254.147:23040] by server-4.bemta-14.messagelabs.com id
	80/F6-32066-2B801F25; Tue, 04 Feb 2014 15:35:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391528112!1965872!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21782 invoked from network); 4 Feb 2014 15:35:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:35:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97784310"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:35:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:35:11 -0500
Message-ID: <1391528110.6497.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mike C. <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 15:35:10 +0000
In-Reply-To: <08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:32 +0000, Mike C. wrote:
> 
> On February 4, 2014 11:33:42 AM GMT, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> >On Tue, 2014-02-04 at 02:45 +0000, Miguel Clara wrote:
> >> The wiki mentions "NBD" but if I got it right this means the storage
> >> is in "host1" and all ndb does is connect to that host, so the disk
> >is
> >> never copied to disk 2?
> >> 
> >> Am I correct to assume this?
> >
> >Yes. N is for network.
> >
> >> I have a host where what I need is to move it from host1 to host2, or
> >> reverse if needed, there's no problem stopping it first, but I guess
> >> this is not what the "migrate" command is used for!
> >> 
> >> And for guest where I do wan to keep a backup with remus, is shared
> >> storage still a must? the wiki http://wiki.xen.org/wiki/Remus states:
> >> 
> >> -> Shared storage is not required
> >> 
> >> But if "migrate" doesn't work how would remus?
> >
> >I think Remus with xend supports out of band storage synchronisation.
> >This has not yet been implemented for xl remus support.
> >
> 
> And would xl remus work if the disks are drdb 'backend' which in turn point to LVM?

I don't know, CCing Shriram the maintainer.

> 
> Thanks
> 
> >> 
> >> Sorry for all the questions just trying to understand this better,
> >and
> >> there's really no documentation about:
> >> xl migrate and xl remus!
> >> 
> >> thanks
> >> 
> >> On Mon, Feb 3, 2014 at 3:09 PM, Mike C. <miguelmclara@gmail.com>
> >wrote:
> >> > Makes sense...
> >> >
> >> > Is there documentation about xl migrate and xl remus?
> >> >
> >> > Say I want to migrate a host but first pause it?
> >> >
> >> > I could also snapshot the lvm but that doesn't save the memory and
> >the
> >> > domain would have to be offline.
> >> >
> >> > So if I want to migrate but don't have shared storage, what's the
> >best
> >> > approach drdb?
> >> >
> >> > Thanks
> >> >
> >> >
> >> >
> >> > On February 3, 2014 10:17:04 AM GMT, Ian Campbell
> ><Ian.Campbell@citrix.com>
> >> > wrote:
> >> >>
> >> >> On Sat, 2014-02-01 at 01:32 +0000, Miguel Clara wrote:
> >> >>>
> >> >>>  I'm testing live migration without shared storage (I use LVM at
> >both
> >> >>> sides)
> >> >>>
> >> >>>
> >> >>>  Issuing "xl migrate" worked nice and the machine was migrated to
> >the
> >> >>> second host
> >> >>>
> >> >>>  However I see this in the second host log:
> >> >>>
> >> >>>  [ 1502.563251] EXT4-fs error (device xvda1):
> >> >>>  htree_dirblock_to_tree:892: inode #136303: block 533250: comm
> >> >>>  run-parts: bad entry in directory: rec_len is smaller than
> >> >>>  minimal - offset=0(0), inode=0, rec_len=0, name_len=0
> >> >>>  Jan 31 17:17:01 remus-test kernel: [ 1502.563251] EXT4-fs error
> >> >>>  (device xvda1): htree_dirblock_to_tree:892: inode #136303: block
> >> >>>  533250: comm run-parts: bad entry in directory: rec_len is
> >> >>>  smaller than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
> >> >>>
> >> >>>  I also get errors
> >> >>> like:
> >> >>>  -bash: /bin/ping: cannot execute binary file
> >> >>>
> >> >>>  Is this to be expected when not using shared storage?
> >> >>
> >> >>
> >> >> Yes. If the underlying disk is not the same device on both hosts
> >> >> then all bets are off and all sorts of bad things will happen.
> >> >> Think about it -- what would you expect to happen to an OS if a
> >> >> disk suddenly started returning completely different data from
> >> >> what was written to it?
> >> >>
> >> >> What you have seen seems like a plausible outcome.
> >> >>
> >> >> Ian.
> >> >>
> >> >
> >> > --
> >> > Sent from my Android device with K-9 Mail. Please excuse my brevity.
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:36:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi3I-0005Cz-RA; Tue, 04 Feb 2014 15:36:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WAi3H-0005Cg-In
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:36:44 +0000
Received: from [85.158.139.211:22968] by server-9.bemta-5.messagelabs.com id
	0B/9D-11237-A0901F25; Tue, 04 Feb 2014 15:36:42 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391528199!1614220!1
X-Originating-IP: [64.18.0.180]
X-SpamReason: No, hits=1.5 required=7.0 tests=HOT_NASTY,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25606 invoked from network); 4 Feb 2014 15:36:41 -0000
Received: from exprod5og105.obsmtp.com (HELO exprod5og105.obsmtp.com)
	(64.18.0.180)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:36:41 -0000
Received: from mail-vb0-f53.google.com ([209.85.212.53]) (using TLSv1) by
	exprod5ob105.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUvEJB51QdymFVAGOb0VKfgpBKo2FoaK2@postini.com;
	Tue, 04 Feb 2014 07:36:41 PST
Received: by mail-vb0-f53.google.com with SMTP id p17so5735217vbe.40
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 07:36:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=9UeS6S9wyDpQ7/pyp18I+KweyQnj2sPxBDRsTYoWJKg=;
	b=iIY0MMVbHPuuVO2V5N4SuNK91cM8LKygpCsvysf8HqqspYqzw/6eGAAC0yVaqX28W8
	KG3XzrBGJrTSZBmJDrRxmbzcmJN9fACkxSqM1jf4OaWo1jzJZlIMCOHP7W42QfzNtbSK
	xZcsCZ8KyHmdaQju6rzyHoHxh90OTQ5lv90/A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=9UeS6S9wyDpQ7/pyp18I+KweyQnj2sPxBDRsTYoWJKg=;
	b=OKLPSpP0qXvpKCGNLoyzZwJOTj14Akuhu/SBHmuP1WfW2EFUDv1wSBTOWQ9wIh928w
	+Ug7Bxt3+LGWgHcXfHK+TdWmtEUaJvUjTbmb0g1tj2kyGLdd5ZROVIf8NzPhV8MTf7JF
	bIUy7Q7RIqwyA1me3QpxAO95YhtECAJdjDL1qOkXGnTghFZynWg34LU9yNVN0+gqcc8R
	IgHMKWWQKkfWdTpJtNjWF4Q/A+mJznGwWUHiPRSvYmCEgqjRkkMoJXBGxx/A2if70AJl
	6siO3RwEz09RGqmOgCV2bKwMwgHDqbWOdecani/4/XoUVUMjzS5gbF6jJIhe1HlRi24r
	64Fw==
X-Gm-Message-State: ALoCoQmWQSr0068fSQkU48sI8rRXGcRC8fc+B5uh9wAbTwWPr+kQmebg4/mraOZ9fM5gIDiZB4lAkqpAwIDXiUFph6yN+ri870gYW6XbepivA0v/va235awyg5PKTnPXwNDnRRHkSiq3y6WAwTiA2jG13zU3iue4G+WvDi+OyXs6oFInMjXSE/8=
X-Received: by 10.221.26.10 with SMTP id rk10mr33704601vcb.0.1391528199070;
	Tue, 04 Feb 2014 07:36:39 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.221.26.10 with SMTP id rk10mr33704599vcb.0.1391528198977;
	Tue, 04 Feb 2014 07:36:38 -0800 (PST)
Received: by 10.220.146.73 with HTTP; Tue, 4 Feb 2014 07:36:38 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1402041519570.4373@kaball.uk.xensource.com>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<alpine.DEB.2.02.1402041519570.4373@kaball.uk.xensource.com>
Date: Tue, 4 Feb 2014 17:36:38 +0200
Message-ID: <CAJEb2DFY9m7DPzZD5Kzt-EMqwsEKP0X__7K_hAYP0Aep5M8keA@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
	gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks.
Through the thread "xen/arm: maintenance_interrupt SMP fix", I found
out a lot of interesting things.

On Tue, Feb 4, 2014 at 5:21 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Mon, 3 Feb 2014, Oleksandr Tyshchenko wrote:
>> The possible deadlock scenario is explained below:
>>
>> non interrupt context:    interrupt context       interrupt context
>>                           (CPU0):                 (CPU1):
>> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>>   |                         |                       |
>>   vgic_disable_irqs()       ...                     ...
>>     |                         |                       |
>>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>>     |  ...                      |                       |
>>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>>     |  ...                        ...                     ...
>>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>>     |  ...                  . .       Oops! The lock has already been taken.
>>     |  spin_unlock(...)     . .
>>     |  ...                  . .
>>     gic_irq_disable()       . .
>>        ...                  . .
>>        spin_lock(...)       . .
>>        ...                  . .
>>        ... <----------------. .
>>        ... <------------------.
>>        ...
>>        spin_unlock(...)
>>
>> Since gic_remove_from_queues() and gic_irq_disable() are called from
>> non-interrupt context and acquire the same lock as gic_set_guest_irq(),
>> which is called from interrupt context, we must disable interrupts in
>> these functions to avoid possible deadlocks.
>>
>> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>
> nice work Oleksandr!
>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
>
>>  xen/arch/arm/gic.c |   10 ++++++----
>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index c44a4d0..7d83b0c 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>>  static void gic_irq_disable(struct irq_desc *desc)
>>  {
>>      int irq = desc->irq;
>> +    unsigned long flags;
>>
>> -    spin_lock(&desc->lock);
>> +    spin_lock_irqsave(&desc->lock, flags);
>>      spin_lock(&gic.lock);
>>      /* Disable routing */
>>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>>      desc->status |= IRQ_DISABLED;
>>      spin_unlock(&gic.lock);
>> -    spin_unlock(&desc->lock);
>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>  }
>>
>>  static unsigned int gic_irq_startup(struct irq_desc *desc)
>> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>  {
>>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>> +    unsigned long flags;
>>
>> -    spin_lock(&gic.lock);
>> +    spin_lock_irqsave(&gic.lock, flags);
>>      if ( !list_empty(&p->lr_queue) )
>>          list_del_init(&p->lr_queue);
>> -    spin_unlock(&gic.lock);
>> +    spin_unlock_irqrestore(&gic.lock, flags);
>>  }
>>
>>  void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>> --
>> 1.7.9.5
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:37:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:37:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi4D-0005L4-Bx; Tue, 04 Feb 2014 15:37:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAi4B-0005Kr-Ql
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:37:40 +0000
Received: from [85.158.137.68:38758] by server-3.bemta-3.messagelabs.com id
	7E/2F-14520-24901F25; Tue, 04 Feb 2014 15:37:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391528258!13240410!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10927 invoked from network); 4 Feb 2014 15:37:38 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:37:38 -0000
Received: by mail-ea0-f179.google.com with SMTP id q10so3764691ead.10
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 07:37:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=aum3192yk7jo0Ji6SY/IK5AWMzgWg34ODqPHn0Fff/8=;
	b=FIX7oqVTX46C9ZfQUnpMxVlr6k0mJMzhBeNMOABX/HEk2iidHQFGjx8GgHuHV6A8Q7
	CBn+p4kO24Ph1PSg8JPXxZDO0kKvc6aCeNneX03tJJJ1ionXg1YRzga0WE2+aCDaPu0o
	ReHbyODmatAGOofeDhPB/B7MmmVPQCg+92u7t7nDe7wr4IqIl/q1vXiqo1/Mt6LwR7Fw
	82hotj4NR3R1WV3gxbHCOtvJSrEPeaPl2PvTsfUiMWVi9Zh9pdPAHRTJDZ78oqoXupyf
	KyXkgWO/IXqAaO4XSnL3tRKL1NsBOn4ZDQX3kSHsUwuGQ9SaF3wfPpgzc47x4ihccnst
	/GVw==
X-Gm-Message-State: ALoCoQklDyMpvW+crroiLC77s/aj18D/68BM843RqkfgOLTH/iAzLP1oBMfScnevZXMkKprSj0+w
X-Received: by 10.14.95.134 with SMTP id p6mr4093593eef.73.1391528258036;
	Tue, 04 Feb 2014 07:37:38 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	k41sm79924007een.19.2014.02.04.07.37.36 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:37:37 -0800 (PST)
Message-ID: <52F10940.9000109@linaro.org>
Date: Tue, 04 Feb 2014 15:37:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-4-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391523745-21139-4-git-send-email-ian.campbell@citrix.com>
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] Revert "xen: arm: force guest memory
 accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 02:22 PM, Ian Campbell wrote:
> This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.
> 
> This approach has a shortcoming in that it breaks when a guest enables its
> MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first/at the
> same time. It turns out that FreeBSD does this.
> 
> This has now been fixed (yet) another way (third time is the charm!) so remove
> this support. The original commit contained some fixes which are still
> relevant even with the revert of the bulk of the patch:
>  - Correction to HSR_SYSREG_CRN_MASK
>  - Rename of HSR_SYSCTL macros to avoid naming clash
>  - Definition of some additional cp reg specifications
> 
> Since these are still useful they are not reverted.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Except for the spurious line toward the end of the patch:

Acked-by: Julien Grall <julien.grall@linaro.org>

> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index b8f2e82..ec51d1b 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c

[..]

>      default:
>          printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
>                 sysreg.read ? "mrs" : "msr",
> @@ -1635,6 +1477,7 @@ done:
>      if (first) unmap_domain_page(first);
>  }
>  
> +

Spurious change?

>  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
>                                        union hsr hsr)


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:37:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi4R-0005OV-08; Tue, 04 Feb 2014 15:37:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAi4P-0005Np-IZ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:37:53 +0000
Received: from [85.158.137.68:43005] by server-12.bemta-3.messagelabs.com id
	EA/EE-01674-05901F25; Tue, 04 Feb 2014 15:37:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391528270!13314648!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22896 invoked from network); 4 Feb 2014 15:37:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:37:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97786036"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:37:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:37:43 -0500
Message-ID: <1391528262.6497.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 15:37:42 +0000
In-Reply-To: <21233.2088.315549.191502@mariner.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<21233.2088.315549.191502@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:32 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> > Guests are initially started with caches disabled, and so we need to make sure
> > they see consistent data in RAM (requiring a cache clean) but also that they
> > do not have stale data suddenly appear in the caches when they enable
> > their caches (requiring the invalidate).
> > 
> > This can be split into two halves. First we must flush each page as it is
> > allocated to the guest. It is not sufficient to do the flush at scrub time
> > since this will miss pages which are ballooned out by the guest (where the
> > guest must scrub if it cares about not leaking the page content). We need to
> > clean as well as invalidate to make sure that any scrubbing which has occurred
> > gets committed to real RAM. To achieve this add a new cacheflush_page function,
> > which is a stub on x86.
> 
> > diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> > index 77a4e64..306b414 100644
> > --- a/tools/libxc/xc_dom_core.c
> > +++ b/tools/libxc/xc_dom_core.c
> > @@ -603,6 +603,9 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
> >          prev->next = phys->next;
> >      else
> >          dom->phys_pages = phys->next;
> > +
> > +    xc_domain_cacheflush(dom->xch, dom->guest_domid,
> > +                         phys->first, phys->first + phys->count);
> >  }
> 
> The approach you are taking here is that for pages explicitly mapped
> by some libxc caller, you do the flush on unmap.  But what about
> callers who don't unmap ?  Are there callers which don't unmap and
> which instead are relying on memory coherency assumptions which aren't
> true on arm ?

Callers which don't unmap would be leaking mappings, and would therefore
be buggy in a long-running toolstack.

Also, after the initial start-of-day period we require that the guest
enable its caches.

> Aside from this question, the code in this patch looks fine to me.

Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:41:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi82-0005oB-OV; Tue, 04 Feb 2014 15:41:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WAi81-0005o2-Oi
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:41:38 +0000
Received: from [85.158.139.211:44119] by server-1.bemta-5.messagelabs.com id
	E3/F9-12859-03A01F25; Tue, 04 Feb 2014 15:41:36 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391528494!1594510!1
X-Originating-IP: [98.139.212.165]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1049 invoked from network); 4 Feb 2014 15:41:35 -0000
Received: from nm6.bullet.mail.bf1.yahoo.com (HELO
	nm6.bullet.mail.bf1.yahoo.com) (98.139.212.165)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:41:35 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391528493; bh=kAk5aR+Lxyeo2XyAA55nH8mB6bXqVl5oGXfpC10pDk0=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Date:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=Hv/aUcVcQ6VXwkwOZUeGPsWcej0D3REKuyTVUOkyJ1XZjApA8s+AoIizr9lHQhXvGi9B+6i4+xT7LoE2pJOuiMx7rya4PbTzVtlSBPIOtDuNGafkc/F+zwv7G0n/mM++v0rzcZylfE/8XMpu2k/LwBWdlXnQ3T0pn8ZHYvD3hYs=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=g/na7qO7YBrY94OWFBS319+lX4+AavnMo0hJ5OiTw+fFfbew2Ds2XW4/ZbLOvoTceNGNklDDauY/QruerEOsl5g3Wo8sZ4WF5KmHCxYc/iqG24Rh8BSYK2BqyDaZltXyAm3WE3onRuxMWL5MnSEKfxWMg1rst52cGKql/JsRXu8=;
Received: from [98.139.215.140] by nm6.bullet.mail.bf1.yahoo.com with NNFMP;
	04 Feb 2014 15:41:33 -0000
Received: from [98.139.213.9] by tm11.bullet.mail.bf1.yahoo.com with NNFMP;
	04 Feb 2014 15:41:33 -0000
Received: from [127.0.0.1] by smtp109.mail.bf1.yahoo.com with NNFMP;
	04 Feb 2014 15:41:33 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391528493; bh=kAk5aR+Lxyeo2XyAA55nH8mB6bXqVl5oGXfpC10pDk0=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Date:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=dRxlnHtDdOQTKXEW+q0yruROIJJ+FPbiGPJEUPPLMo0q3h6kJq2Y7D48L5orJ7n4ARUMJIdrpNnBkHgfkfyGm+nTqUH6c1YdpNbbq2uP4NbYPRveoVqGTR2LFw9cmbbGVkWxr/vdUXYzoTpbJYHvZ6QhRglgwbcq80i95eepKyQ=
X-Yahoo-Newman-Id: 801538.91599.bm@smtp109.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: FH07NU4VM1k73EdnC0bDLi_bfLqfOg.9JzlnF8W2CTAuw7_
	MfzCUwyTNBwKijqbgrNpmre5bVpSGq930erhpMJLwzaeFBxv0.4NmvmBlMFQ
	sIyHJDucEhjcRuldL0DW_rhouqEC5ucz1caZNcwdOjKcKOzaAjUF3ixFaGjv
	xvkRhXNvkhcIbGZe6IzWeLszX98QCa6OGGQLy_UWb0hM.P.9xl7s9Ti5.plv
	ti3OW89TpNfXgdXa_h4Yp7Fa0PW5NCn6ndl.UPZjzxaDRr1ZucCG5wAJnwwt
	ruWd.83LViRIZ_qHdo9QLXUlyd.NAE4NF7vba56L3kjqCtRgaJVrvJA.LU94
	32hiqK_.nEKkIl.EH326mQaqwe8Q6mdqCpj9.EgpBeTtX8OW2x4OsOQAclCB
	E4afpCcslsDM_Mxugr57z5My07kXyS72Q_nSI_Q6CztDLHrF363CzKhhua.g
	fuULUZRA2hPyr7GYo.E_XSLyCnqh3xJ3KbI4RzS5J0gPS3e4V99UoQSCfU5S
	Aq1KzTIT7TqhOsve3gB8O56dO3vY3_aTGs.6j04sTNwPBhsf5.IGOf6PgZec
	pTyX.PtDFF42HWxUW_P7VpZIaCRrgUUwblZn3mqfqWIvKIHx8rJeDadKta4M
	FI9fkY7F5DoWe_hn2ihP8exZyrd2e3YwV1IJ82SRoWRq6irqt3cZSBvvDwUs
	4VOZcnVWursvVrd_Ek8ZTj.lxNRQock0-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.139.211.125])
	by smtp109.mail.bf1.yahoo.com with SMTP; 04 Feb 2014 07:41:33 -0800 PST
Message-ID: <1391528492.2441.26.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Date: Tue, 04 Feb 2014 08:41:32 -0700
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Subject: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Xen list,

I am trying to boot an F20 guest and connect using Spice, but have run
into an issue.

My VM config file includes:
spice = 1
spicehost='0.0.0.0'
spiceport=6001
spicedisable_ticketing=1


Is Spice supported with qemu-xen-traditional? 
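[Editorial sketch, not part of the thread: per the xl.cfg documentation, libxl's Spice options are tied to the upstream device model rather than qemu-xen-traditional, so a working config would normally also select it explicitly. Values below are illustrative only.]

```
device_model_version = "qemu-xen"
spice = 1
spicehost = "0.0.0.0"
spiceport = 6001
spicedisable_ticketing = 1
```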


[root@xen ~]# xl -vvv create f20.xl
Parsing config from f20.xl
libxl: debug: libxl_create.c:1342:do_domain_create: ao 0x795750: create:
how=(nil) callback=(nil) poller=0x794e10
libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault:
qemu-xen is unavailable, use qemu-xen-traditional instead: No such file
or directory
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk
vdev=hda, using backend phy
libxl: debug: libxl_create.c:797:initiate_domain_create: running
bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
domain, skipping bootloader
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x795af8: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=4,
free_memkb=14131
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
candidate with 1 nodes, 8 cpus and 14131 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9e704
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19e704
xc: detail: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->000000000019e704
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->00000000ffc00000
  ENTRY ADDRESS: 0000000000100000
xc: detail: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000003fd
  1GB PAGES: 0x0000000000000002
xc: detail: elf_load_binary: phdr 0 at 0x7f87b7100000 -> 0x7f87b719558d
libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=phy
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x796f68 wpath=/local/domain/0/backend/vbd/3/768/state token=3/0:
register slotnum=3
libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x795750:
inprogress: poller=0x794e10, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x796f68
wpath=/local/domain/0/backend/vbd/3/768/state token=3/0: event
epath=/local/domain/0/backend/vbd/3/768/state
libxl: debug: libxl_event.c:646:devstate_watch_callback:
backend /local/domain/0/backend/vbd/3/768/state wanted state 2 still
waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x796f68
wpath=/local/domain/0/backend/vbd/3/768/state token=3/0: event
epath=/local/domain/0/backend/vbd/3/768/state
libxl: debug: libxl_event.c:642:devstate_watch_callback:
backend /local/domain/0/backend/vbd/3/768/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
w=0x796f68 wpath=/local/domain/0/backend/vbd/3/768/state token=3/0:
deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x796f68: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
script: /etc/xen/scripts/block add
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x796ff0: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x796ff0: deregister unregistered
libxl: debug: libxl_dm.c:1303:libxl__spawn_local_dm: Spawning
device-model /usr/lib/xen/bin/qemu-dm with arguments:
libxl: debug:
libxl_dm.c:1305:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -d
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   3
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -domain-name
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   f20
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -videoram
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   4
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   c
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -acpi
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -vcpus
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -vcpu_avail
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   0x03
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -net
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x795d30 wpath=/local/domain/0/device-model/3/state token=3/1:
register slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x795d30
wpath=/local/domain/0/device-model/3/state token=3/1: event
epath=/local/domain/0/device-model/3/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x795d30
wpath=/local/domain/0/device-model/3/state token=3/1: event
epath=/local/domain/0/device-model/3/state
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
w=0x795d30 wpath=/local/domain/0/device-model/3/state token=3/1:
deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x795d30: deregister unregistered
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x79a038 wpath=/local/domain/0/backend/vif/3/0/state token=3/2:
register slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x79a038
wpath=/local/domain/0/backend/vif/3/0/state token=3/2: event
epath=/local/domain/0/backend/vif/3/0/state
libxl: debug: libxl_event.c:646:devstate_watch_callback:
backend /local/domain/0/backend/vif/3/0/state wanted state 2 still
waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x79a038
wpath=/local/domain/0/backend/vif/3/0/state token=3/2: event
epath=/local/domain/0/backend/vif/3/0/state
libxl: debug: libxl_event.c:642:devstate_watch_callback:
backend /local/domain/0/backend/vif/3/0/state wanted state 2 ok
libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
w=0x79a038 wpath=/local/domain/0/backend/vif/3/0/state token=3/2:
deregister slotnum=3
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x79a038: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x79a0c0: deregister unregistered
libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
script: /etc/xen/scripts/vif-bridge add
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x79a0c0: deregister unregistered
libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
w=0x79a0c0: deregister unregistered
libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x795750:
progress report: ignored
libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x795750:
complete, rc=0
libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x795750:
destroy
xc: debug: hypercall buffer: total allocations:1097 total releases:1097
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:1089 misses:4 toobig:4
[root@xen ~]# 

The QEMU command line created is:

/usr/lib/xen/bin/qemu-dm -d 3 -domain-name f20 -videoram 4 -boot c -acpi
-vcpus 2 -vcpu_avail 0x03 -net
nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000 -net
tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no -M xenfv



Thanks,

Eric









_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-vcpus 2 -vcpu_avail 0x03 -net
nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000 -net
tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no -M xenfv



Thanks,

Eric









_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:43:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:43:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi9T-0005ug-94; Tue, 04 Feb 2014 15:43:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAi9R-0005uQ-TP
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:43:06 +0000
Received: from [193.109.254.147:43177] by server-9.bemta-14.messagelabs.com id
	1E/61-24895-98A01F25; Tue, 04 Feb 2014 15:43:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391528583!1968253!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29812 invoked from network); 4 Feb 2014 15:43:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:43:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99698754"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:43:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:43:02 -0500
Message-ID: <1391528580.6497.36.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 4 Feb 2014 15:43:00 +0000
In-Reply-To: <52F10E8202000078001190A7@nat28.tlf.novell.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<52F10E8202000078001190A7@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:00 +0000, Jan Beulich wrote:
> >>> On 04.02.14 at 15:22, Ian Campbell <ian.campbell@citrix.com> wrote:
> > --- a/tools/libxc/xc_dom_boot.c
> > +++ b/tools/libxc/xc_dom_boot.c
> > @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
> >          return -1;
> >      }
> >  
> > +    /* Guest shouldn't really touch its grant table until it has
> > +     * enabled its caches. But lets be nice. */
> > +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, gnttab_gmfn + 1);
> 
> Looking at this and further similar code I think it would be cleaner
> for the xc interface to take a start MFN and a count, and for the
> hypervisor interface to use an inclusive range (such that overflow
> is not a problem).

I think you might be right, I'll give that a go.

> 
> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
> >      return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
> >  }
> >  
> > +/* No cache maintenance required on x86 architecture. */
> > +static inline void cacheflush_page(unsigned long mfn) {}
> 
> The function name is certainly sub-optimal:

Yes, I should have said that I wasn't really happy with it in the
context of x86.

>  If I needed a page-range
> cache flush and found a function named like this, I'd assume it does
> what its name says - flush the page from the cache. sync_page() or
> some such may be better suited to express that this is something
> that may be a no-op on certain architectures.

sync_page is a much better suggestion. Perhaps even
sync_page_to/from_cache, though I suppose that might actually be more
confusing.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:43:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAi9d-0005wJ-ME; Tue, 04 Feb 2014 15:43:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAi9b-0005vt-QK
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:43:15 +0000
Received: from [85.158.143.35:52038] by server-2.bemta-4.messagelabs.com id
	95/5C-10891-39A01F25; Tue, 04 Feb 2014 15:43:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391528593!3095942!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24968 invoked from network); 4 Feb 2014 15:43:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:43:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97789858"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:43:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:43:11 -0500
Message-ID: <1391528589.6497.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 15:43:09 +0000
In-Reply-To: <21233.365.463588.210458@mariner.uk.xensource.com>
References: <1389886079-6855-1-git-send-email-ian.campbell@citrix.com>
	<1390906058.7753.37.camel@kazak.uk.xensource.com>
	<CAFLBxZZb6QTjbMOAp3K=x8QWCBYQzbFZ-Lgs2xrG9bVA9SbuKw@mail.gmail.com>
	<1390919466.7753.97.camel@kazak.uk.xensource.com>
	<52E7C320.7080402@eu.citrix.com>
	<1390921417.7753.106.camel@kazak.uk.xensource.com>
	<52E7CBB8.7040904@eu.citrix.com>
	<21233.365.463588.210458@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, Tim
	Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on
 ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:04 +0000, Ian Jackson wrote:
> George Dunlap writes ("Re: [Xen-devel] [PATCH] tools: libxl: do not set the PoD target on ARM"):
> > If you want to check in this one:
> > 
> > Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Between that, Julien's ack, and George's Release-ack: Applied.

Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:46:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiCO-0006HQ-L3; Tue, 04 Feb 2014 15:46:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mst@redhat.com>) id 1WAiCN-0006HG-KA
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:46:07 +0000
Received: from [193.109.254.147:46546] by server-12.bemta-14.messagelabs.com
	id D8/4F-17220-E3B01F25; Tue, 04 Feb 2014 15:46:06 +0000
X-Env-Sender: mst@redhat.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391528765!1934153!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10890 invoked from network); 4 Feb 2014 15:46:06 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-27.messagelabs.com with SMTP;
	4 Feb 2014 15:46:06 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14FiwoJ000362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 10:44:59 -0500
Received: from redhat.com (vpn1-7-112.ams2.redhat.com [10.36.7.112])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with SMTP
	id s14Fitat007874; Tue, 4 Feb 2014 10:44:56 -0500
Date: Tue, 4 Feb 2014 17:49:57 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Message-ID: <20140204154957.GE8617@redhat.com>
References: <1265435524.20140204004608@eikelenboom.it>
	<20140204143219.GA7561@redhat.com>
	<1047320768.20140204160708@eikelenboom.it>
	<20140204163024.2042e8d3@thinkpad>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140204163024.2042e8d3@thinkpad>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Sander Eikelenboom <linux@eikelenboom.it>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [Qemu-devel] Commit
 9e047b982452c633882b486682966c1d97097015 (piix4: add acpi pci hotplug
 support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 04:30:24PM +0100, Igor Mammedov wrote:
> On Tue, 4 Feb 2014 16:07:08 +0100
> Sander Eikelenboom <linux@eikelenboom.it> wrote:
> 
> > 
> > Tuesday, February 4, 2014, 3:32:19 PM, you wrote:
> > 
> > > On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
> > >> Grmbll my fat fingers hit the send shortcut too soon by accident ..
> > >> let's try again ..
> > >> 
> > >> Hi Michael,
> > >> 
> > >> A git bisect turned out that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
> > >> 
> > >> commit 9e047b982452c633882b486682966c1d97097015
> > >> Author: Michael S. Tsirkin <mst@redhat.com>
> > >> Date:   Mon Oct 14 18:01:20 2013 +0300
> > >> 
> > >>     piix4: add acpi pci hotplug support
> > >> 
> > >>     Add support for acpi pci hotplug using the
> > >>     new infrastructure.
> > >>     PIIX4 legacy interface is maintained as is for
> > >>     machine types 1.7 and older.
> > >> 
> > >>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > >> 
> > >> 
> > >> The error is not very verbose :
> > >> 
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> 
> > >> So it seems there is an issue with preserving the legacy interface.
> > 
> > 
> > > Which machine type is broken?
> > 
> > xenfv
> > 
> > > What's the command line used?
> > 
> > See below the output of the creation of the guest
> > 
> > Strange thing is:
> > char device redirected to /dev/pts/15 (label serial0)
> > vgabios-cirrus.bin: ROM id 101300b8 / PCI id 101300b8
> > efi-e1000.rom: ROM id 8086100e / PCI id 8086100e
> > VNC server running on `127.0.0.1:5900'
> > xen_platform: changed ro/rw state of ROM memory area. now is rw state.
> > [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.0 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fd000 type: 0)
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_initfn: Real physical device 06:01.0 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.1 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fe000 type: 0)
> > [00:05.0] xen_pt_pci_intx: intx=2
> > [00:05.0] xen_pt_initfn: Real physical device 06:01.1 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=2
> > [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.2 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0xfe0ffc00 type: 0)
> > [00:05.0] xen_pt_pci_intx: intx=3
> > [00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=3
> > [00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_msi_set_enable: disabling MSI.
> > 
> > It does log success for registering the PCI devices .. however QMP reports that the device initialization failed.
> > And an lspci in the guest also doesn't show the devices.
> > 
> > > What's the value of has_acpi_build in hw/i386/pc_piix.c?
> > static bool has_acpi_build = true;
> > 
> > > What happens if you add
> > > -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
> > 
> > That makes it work again ...
> looks like a missing bsel property,
> could you run qemu with the following debug patch to make sure that's the case:
> (run without -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off)
> 
> diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> index 4345f5d..fc72cc9 100644
> --- a/hw/acpi/pcihp.c
> +++ b/hw/acpi/pcihp.c
> @@ -192,6 +192,7 @@ int acpi_pcihp_device_hotplug(AcpiPciHpState *s, PCIDevice *dev,
> {
>      int slot = PCI_SLOT(dev->devfn);
>      int bsel = acpi_pcihp_get_bsel(dev->bus);
> +    fprintf(stderr, "bsel: %d, bus: %s\n", bsel, dev->bus->qbus.name);
>      if (bsel < 0) {
>          return -1;
>      }

And if this is the issue, take a look at
acpi_set_pci_info: does find_i440fx fail
for Xen?

-- 
MST

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:46:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiCO-0006HQ-L3; Tue, 04 Feb 2014 15:46:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mst@redhat.com>) id 1WAiCN-0006HG-KA
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:46:07 +0000
Received: from [193.109.254.147:46546] by server-12.bemta-14.messagelabs.com
	id D8/4F-17220-E3B01F25; Tue, 04 Feb 2014 15:46:06 +0000
X-Env-Sender: mst@redhat.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391528765!1934153!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10890 invoked from network); 4 Feb 2014 15:46:06 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-27.messagelabs.com with SMTP;
	4 Feb 2014 15:46:06 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14FiwoJ000362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 10:44:59 -0500
Received: from redhat.com (vpn1-7-112.ams2.redhat.com [10.36.7.112])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with SMTP
	id s14Fitat007874; Tue, 4 Feb 2014 10:44:56 -0500
Date: Tue, 4 Feb 2014 17:49:57 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Igor Mammedov <imammedo@redhat.com>
Message-ID: <20140204154957.GE8617@redhat.com>
References: <1265435524.20140204004608@eikelenboom.it>
	<20140204143219.GA7561@redhat.com>
	<1047320768.20140204160708@eikelenboom.it>
	<20140204163024.2042e8d3@thinkpad>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140204163024.2042e8d3@thinkpad>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Sander Eikelenboom <linux@eikelenboom.it>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [Qemu-devel] Commit
 9e047b982452c633882b486682966c1d97097015 (piix4: add acpi pci hotplug
 support) seems to break Xen pci-passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 04:30:24PM +0100, Igor Mammedov wrote:
> On Tue, 4 Feb 2014 16:07:08 +0100
> Sander Eikelenboom <linux@eikelenboom.it> wrote:
> 
> > 
> > Tuesday, February 4, 2014, 3:32:19 PM, you wrote:
> > 
> > > On Tue, Feb 04, 2014 at 12:46:08AM +0100, Sander Eikelenboom wrote:
> > >> Grmbll my fat fingers hit the send shortcut too soon by accident ..
> > >> let's try again ..
> > >> 
> > >> Hi Michael,
> > >> 
> > >> A git bisect showed that commit 9e047b982452c633882b486682966c1d97097015 breaks pci-passthrough on Xen.
> > >> 
> > >> commit 9e047b982452c633882b486682966c1d97097015
> > >> Author: Michael S. Tsirkin <mst@redhat.com>
> > >> Date:   Mon Oct 14 18:01:20 2013 +0300
> > >> 
> > >>     piix4: add acpi pci hotplug support
> > >> 
> > >>     Add support for acpi pci hotplug using the
> > >>     new infrastructure.
> > >>     PIIX4 legacy interface is maintained as is for
> > >>     machine types 1.7 and older.
> > >> 
> > >>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > >> 
> > >> 
> > >> The error is not very verbose :
> > >> 
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> libxl: error: libxl_qmp.c:287:qmp_handle_error_response: received an error message from QMP server: Device initialization failed.
> > >> 
> > >> So it seems there is an issue with preserving the legacy interface.
> > 
> > 
> > > Which machine type is broken?
> > 
> > xenfv
> > 
> > > What's the command line used?
> > 
> > See below the output of the creation of the guest
> > 
> > Strange thing is:
> > char device redirected to /dev/pts/15 (label serial0)
> > vgabios-cirrus.bin: ROM id 101300b8 / PCI id 101300b8
> > efi-e1000.rom: ROM id 8086100e / PCI id 8086100e
> > VNC server running on `127.0.0.1:5900'
> > xen_platform: changed ro/rw state of ROM memory area. now is rw state.
> > [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.0 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fd000 type: 0)
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_initfn: Real physical device 06:01.0 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.1 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00001000 base_addr=0xfe0fe000 type: 0)
> > [00:05.0] xen_pt_pci_intx: intx=2
> > [00:05.0] xen_pt_initfn: Real physical device 06:01.1 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=2
> > [00:05.0] xen_pt_initfn: Assigning real physical device 06:01.2 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00000100 base_addr=0xfe0ffc00 type: 0)
> > [00:05.0] xen_pt_pci_intx: intx=3
> > [00:05.0] xen_pt_initfn: Real physical device 06:01.2 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=3
> > [00:05.0] xen_pt_initfn: Assigning real physical device 08:00.0 to devfn 0x28
> > [00:05.0] xen_pt_register_regions: IO region 0 registered (size=0x00200000 base_addr=0xfe200000 type: 0x4)
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_initfn: Real physical device 08:00.0 registered successfully!
> > [00:05.0] xen_pt_pci_intx: intx=1
> > [00:05.0] xen_pt_msi_set_enable: disabling MSI.
> > 
> > It does log success for registering the PCI devices .. however QMP returns that the device initialization failed.
> > And an lspci in the guest also doesn't show the devices.
> > 
> > > What's the value of has_acpi_build in hw/i386/pc_piix.c?
> > static bool has_acpi_build = true;
> > 
> > > What happens if you add
> > > -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off
> > 
> > That makes it work again ...
> Looks like a missing bsel property;
> could you run QEMU with the following debug patch to confirm that this is the case?
> (run without -global PIIX4_PM.acpi-pci-hotplug-with-bridge-support=off)
> 
> diff --git a/hw/acpi/pcihp.c b/hw/acpi/pcihp.c
> index 4345f5d..fc72cc9 100644
> --- a/hw/acpi/pcihp.c
> +++ b/hw/acpi/pcihp.c
> @@ -192,6 +192,7 @@ int acpi_pcihp_device_hotplug(AcpiPciHpState *s, PCIDevice *dev,
> {
>      int slot = PCI_SLOT(dev->devfn);
>      int bsel = acpi_pcihp_get_bsel(dev->bus);
> +    fprintf(stderr, "bsel: %d, bus: %s\n", bsel, dev->bus->qbus.name);
>      if (bsel < 0) {
>          return -1;
>      }

And if this is the issue, take a look at
acpi_set_pci_info: does find_i440fx fail
for Xen?

-- 
MST

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:47:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiD8-0006Q0-AI; Tue, 04 Feb 2014 15:46:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAiD6-0006Pj-Kr
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:46:52 +0000
Received: from [85.158.137.68:57230] by server-3.bemta-3.messagelabs.com id
	38/7E-14520-B6B01F25; Tue, 04 Feb 2014 15:46:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391528811!9678117!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30884 invoked from network); 4 Feb 2014 15:46:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:46:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 15:46:50 +0000
Message-Id: <52F119780200007800119172@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 15:46:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
In-Reply-To: <20140204153258.GA6847@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>> Wasn't it that Mukesh's patch simply was yours with the two
>> get_ioreq()s folded by using a local variable?
> 
> Yes. As so

Thanks. Except that ...

> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>      struct vcpu *v = current;
>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> -
> +    ioreq_t *p = get_ioreq(v);

... you don't want to drop the blank line, and naming the new
variable "ioreq" would seem preferable.

>      /*
>       * a pending IO emualtion may still no finished. In this case,
>       * no virtual vmswith is allowed. Or else, the following IO
>       * emulation will handled in a wrong VCPU context.
>       */
> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> +    if ( p && p->state != STATE_IOREQ_NONE )

And, as said before, I'd think "!p ||" instead of "p &&" would be
the right thing here. Yang, Jun?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:49:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:49:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiF7-0006hK-Rt; Tue, 04 Feb 2014 15:48:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAiF7-0006gy-A3
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:48:57 +0000
Received: from [85.158.137.68:36780] by server-10.bemta-3.messagelabs.com id
	5C/FA-07302-8EB01F25; Tue, 04 Feb 2014 15:48:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391528935!13354855!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11229 invoked from network); 4 Feb 2014 15:48:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:48:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 15:48:55 +0000
Message-Id: <52F119F50200007800119175@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 15:48:53 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<52F10E8202000078001190A7@nat28.tlf.novell.com>
	<1391528580.6497.36.camel@kazak.uk.xensource.com>
In-Reply-To: <1391528580.6497.36.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 16:43, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> sync_page is a much better suggestion. Perhaps even
> sync_page_to/from_cache, I suppose that might actually be more
> confusing.

Hmm, on one hand it's even more precise, but otoh - what would a
sync_page_to_cache() look like?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:50:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiGX-0006zD-Cs; Tue, 04 Feb 2014 15:50:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiGV-0006yc-N2
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:50:23 +0000
Received: from [85.158.139.211:64150] by server-3.bemta-5.messagelabs.com id
	6C/4D-13671-E3C01F25; Tue, 04 Feb 2014 15:50:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391529020!1618235!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16589 invoked from network); 4 Feb 2014 15:50:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:50:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97795925"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:50:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:50:19 -0500
Message-ID: <1391529018.6497.39.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 4 Feb 2014 15:50:18 +0000
In-Reply-To: <1390574921.2124.87.camel@kazak.uk.xensource.com>
References: <1390573387-15386-1-git-send-email-ian.campbell@citrix.com>
	<52E27A63.6030903@linaro.org>
	<1390574921.2124.87.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org, tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: correct use of find_next_bit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 14:48 +0000, Ian Campbell wrote:
> On Fri, 2014-01-24 at 14:36 +0000, Julien Grall wrote:
> > On 01/24/2014 02:23 PM, Ian Campbell wrote:
> > > find_next_bit takes a "const unsigned long *" but forcing a cast of an
> > > "uint32_t *" throws away the alignment constraints and ends up causing an
> > > alignment fault on arm64 if the input happened to be 4 but not 8 byte aligned.
> > > 
> > > Instead of casting use a temporary variable of the right type.
> > > 
> > > I've had a look around for similar constructs and the only thing I found was
> > > maintenance_interrupt which casts a uint64_t down to an unsigned long, which
> > > although perhaps not best advised is safe I think.
> > > 
> > > This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
> > > is just coincidental due to subtle changes to the stack layout etc.
> > > 
> > > Reported-by: Fu Wei <fu.wei@linaro.org>
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Good catch! Do you plan to apply this patch for Xen 4.4?
> 
> Yes, I think it is a suitable bug fix.

Applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:50:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiGg-000721-R9; Tue, 04 Feb 2014 15:50:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiGe-00071I-Nc
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:50:33 +0000
Received: from [193.109.254.147:28392] by server-14.bemta-14.messagelabs.com
	id A0/CC-29228-74C01F25; Tue, 04 Feb 2014 15:50:31 +0000
From xen-devel-bounces@lists.xen.org Tue Feb 04 15:50:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiGg-000721-R9; Tue, 04 Feb 2014 15:50:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiGe-00071I-Nc
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 15:50:33 +0000
Received: from [193.109.254.147:28392] by server-14.bemta-14.messagelabs.com
	id A0/CC-29228-74C01F25; Tue, 04 Feb 2014 15:50:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391529030!1963809!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22008 invoked from network); 4 Feb 2014 15:50:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:50:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99703410"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:50:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:50:28 -0500
Message-ID: <1391529027.6497.40.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 4 Feb 2014 15:50:27 +0000
In-Reply-To: <1391206965-25727-1-git-send-email-julien.grall@linaro.org>
References: <1391206965-25727-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Directly return NULL if Xen fails
 to allocate domain struct
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-31 at 22:22 +0000, Julien Grall wrote:
> The current implementation of alloc_domain_struct dereferences the newly
> allocated pointer even if the allocation has failed.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked + applied. thanks.
> 
> ---
> 
> This is a bug fix for Xen 4.4. Without this patch, if Xen runs out of
> memory it will segfault because it tries to dereference a NULL
> pointer.

In general you need to CC George for this sort of thing. I've applied it
this time since I think it is an uncontroversial fix.
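The fix being discussed amounts to checking the allocation result before touching it. A minimal compilable sketch of that pattern (the `struct domain` layout and function body here are stand-ins for illustration, not the actual Xen code):

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in domain structure, just to make the sketch self-contained. */
struct domain { int id; };

/* Sketch of the pattern the patch restores: return NULL on allocation
 * failure instead of dereferencing the failed allocation. */
struct domain *alloc_domain_struct(void)
{
    struct domain *d = malloc(sizeof(*d));

    if ( d == NULL )          /* the buggy version skipped this check */
        return NULL;

    memset(d, 0, sizeof(*d)); /* only touch d once we know it exists */
    return d;
}
```
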



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:50:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiGt-00075z-97; Tue, 04 Feb 2014 15:50:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAiGr-00075E-Oj
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:50:45 +0000
Received: from [85.158.137.68:42237] by server-15.bemta-3.messagelabs.com id
	E0/67-19263-55C01F25; Tue, 04 Feb 2014 15:50:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391529044!13362033!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8739 invoked from network); 4 Feb 2014 15:50:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 15:50:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 15:50:43 +0000
Message-Id: <52F11A610200007800119178@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 15:50:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-13-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DB8F0200007800118F53@nat28.tlf.novell.com>
	<52F1068A.6060500@oracle.com>
In-Reply-To: <52F1068A.6060500@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 12/17] x86/VPMU: Handle PMU interrupts
 for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 16:26, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/04/2014 06:22 AM, Jan Beulich wrote:
>>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> +        return 1;
>>> +    }
>>> +    else if ( vpmu->arch_vpmu_ops )
>> If the previous (and only) if() branch returns unconditionally, using
>> "else if" is more confusing then clarifying imo (and in any case
>> needlessly growing the patch, even if just by a bit).
> 
> Not sure I understand what you are saying here.
> 
> Here is the code structure:
> 
> int vpmu_do_interrupt(struct cpu_user_regs *regs)
> {
>     if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
>     {
>         // work
>         return 1;
>     }
>     else if ( vpmu->arch_vpmu_ops )
>     {

      if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
      {
          // work
          return 1;
      }
      if ( vpmu->arch_vpmu_ops )
      {
          ...
Jan
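Jan's point is that after an unconditional `return`, the `else` is dead syntax: both shapes are semantically identical. A small compilable example of the equivalence (hypothetical stand-in functions, not the vpmu code):

```c
/* Both functions behave identically for every input, because the first
 * branch returns unconditionally; the second form just reads plainer. */
int with_else(int priv, int has_ops)
{
    if ( priv )
        return 1;
    else if ( has_ops )
        return 2;
    return 0;
}

int without_else(int priv, int has_ops)
{
    if ( priv )
        return 1;
    if ( has_ops )   /* plain if: the previous branch already returned */
        return 2;
    return 0;
}
```
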


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 04 15:50:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiGw-00077S-OI; Tue, 04 Feb 2014 15:50:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiGv-00076Z-W2
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:50:50 +0000
Received: from [85.158.143.35:9941] by server-1.bemta-4.messagelabs.com id
	48/04-31661-85C01F25; Tue, 04 Feb 2014 15:50:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391529046!3094074!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32521 invoked from network); 4 Feb 2014 15:50:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:50:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99703619"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:50:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:50:45 -0500
Message-ID: <1391529044.6497.41.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Feb 2014 15:50:44 +0000
In-Reply-To: <52EA63E2.9020603@eu.citrix.com>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
	<20140129183349.GA14312@phenom.dumpdata.com>
	<52EA63E2.9020603@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 14:38 +0000, George Dunlap wrote:
> On 01/29/2014 06:33 PM, Konrad Rzeszutek Wilk wrote:
> > On Mon, Jan 27, 2014 at 05:53:38PM +0000, Wei Liu wrote:
> >> The original code is wrong because:
> >> * claim mode wants to know the total number of pages needed, while the
> >>   original code provides the additional number of pages needed.
> >> * if PoD is enabled, memory will already be allocated by the time we try
> >>   to claim memory.
> >>
> >> So the fix would be:
> >> * move the claim before the actual memory allocation.
> >> * pass the right number of pages to the hypervisor.
> >>
> >> The "right number of pages" is the number of pages of target memory
> >> minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
> >> This fixes bug #32.
> >>
> >> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> >> Cc: Konrad Wilk <konrad.wilk@oracle.com>
> >
> > And also 'Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>'
> 
> Right, well nobody has shouted, so:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied. Thanks all.
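The ordering fix described in the quoted commit message can be sketched as follows. All names here (the stub calls, the `VGA_HOLE_SIZE` value, `build_hvm`) are hypothetical stand-ins for illustration, not the real libxc API:

```c
#define VGA_HOLE_SIZE 32UL  /* pages; illustrative value, not the real one */

/* Stubs standing in for the hypervisor interface, so the sketch runs. */
unsigned long claimed;
int claim_pages(unsigned long nr)    { claimed = nr; return 0; }
int allocate_memory(unsigned long n) { (void)n; return 0; }

/* The ordering the patch establishes: claim the *total* footprint
 * (target minus the VGA hole) before any allocation happens, so PoD
 * allocations cannot run ahead of the claim. */
int build_hvm(unsigned long target_pages, int claim_enabled)
{
    unsigned long nr_pages = target_pages - VGA_HOLE_SIZE;

    if ( claim_enabled && claim_pages(nr_pages) != 0 )
        return -1;                  /* fail early: claim not satisfiable */

    return allocate_memory(target_pages);
}
```
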



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiIa-0007Vp-RE; Tue, 04 Feb 2014 15:52:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiIa-0007VR-1s
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:32 +0000
Received: from [85.158.139.211:28924] by server-4.bemta-5.messagelabs.com id
	C1/B5-08092-FBC01F25; Tue, 04 Feb 2014 15:52:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391529148!1618851!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 716 invoked from network); 4 Feb 2014 15:52:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:52:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99704303"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:52:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:52:23 -0500
Message-ID: <1391529143.6497.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 4 Feb 2014 15:52:23 +0000
In-Reply-To: <1390987206.31814.37.camel@kazak.uk.xensource.com>
References: <1390932736-10568-1-git-send-email-olaf@aepfle.de>
	<1390987206.31814.37.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: update check-xl-disk-parse to handle
 backend_domname
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-01-29 at 09:20 +0000, Ian Campbell wrote:
> On Tue, 2014-01-28 at 19:12 +0100, Olaf Hering wrote:
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> 
> Thanks.
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> I think it doesn't need a freeze exception, so I'll commit it when I
> next go through my queue.

Done, thanks!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiIa-0007Ve-EY; Tue, 04 Feb 2014 15:52:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiIZ-0007VA-8c
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:31 +0000
Received: from [85.158.139.211:28800] by server-17.bemta-5.messagelabs.com id
	2F/A3-31975-EBC01F25; Tue, 04 Feb 2014 15:52:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391529148!1618851!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 612 invoked from network); 4 Feb 2014 15:52:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:52:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99704147"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:52:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:52:09 -0500
Message-ID: <1391529127.6497.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Feb 2014 15:52:07 +0000
In-Reply-To: <1391529044.6497.41.camel@kazak.uk.xensource.com>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
	<20140129183349.GA14312@phenom.dumpdata.com>
	<52EA63E2.9020603@eu.citrix.com>
	<1391529044.6497.41.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

close 32
thanks

> > >> This fixes bug #32.

Closed.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Cc: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxc: fix claim mode when creating HVM
	guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

close 32
thanks

> > >> This fixes bug #32.

Closed.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiIe-0007Xq-8F; Tue, 04 Feb 2014 15:52:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1WAiId-0007WS-1S
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:35 +0000
Received: from [85.158.137.68:33082] by server-9.bemta-3.messagelabs.com id
	BC/23-10184-2CC01F25; Tue, 04 Feb 2014 15:52:34 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391529153!9682226!1
X-Originating-IP: [209.85.217.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31931 invoked from network); 4 Feb 2014 15:52:33 -0000
Received: from mail-lb0-f179.google.com (HELO mail-lb0-f179.google.com)
	(209.85.217.179)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:52:33 -0000
Received: by mail-lb0-f179.google.com with SMTP id l4so6551360lbv.38
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 07:52:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=J9mX470diOp+8XY2rO57q6RTJuL7L/J+/mx0iIbB60Q=;
	b=ergXFyBT2h5gRRjx7sOB9Ol9Z+yipuU4jzKIq7/PltjkgiD5VO+/U5+RRmVz3B8Mv/
	rMfbqJwUORzR4xP4wkwOcl5YM6eNXi8J7JbhqbU5pcAuhuEkq0AT3qOArjLXMhgfkkbX
	at4tBYwZzXsRbLllIRmEt7SCuMLjKicdI+dVWWwEzrOsmAUhqwqE6u1jirauHJeXhvlj
	7csWOxSoTUo+RGkFsSNCpCidYnm4kJcasbGMR3+o15QX7969T2DZOCWBG2XNoeD4D8xu
	rueRltiJ5aWRX1p9UKQrPmNe99egzO9FKW5KE2k1/8lCHxGUG/xncY81LYAYmxQIvsz9
	uZ0w==
X-Received: by 10.112.126.164 with SMTP id mz4mr1865198lbb.52.1391529152830;
	Tue, 04 Feb 2014 07:52:32 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44])
	by mx.google.com with ESMTPSA id y2sm1021469lal.10.2014.02.04.07.52.31
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:52:32 -0800 (PST)
Mime-Version: 1.0 (Apple Message framework v1283)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
In-Reply-To: <1391425186.10515.51.camel@kazak.uk.xensource.com>
Date: Tue, 4 Feb 2014 19:52:30 +0400
Message-Id: <D20DA3A3-422C-4AAF-93D2-DB917EED6BAA@gmail.com>
References: <2D42DA4B-E56F-4354-BC87-F304DCA35F94@gmail.com>
	<1391422093.10515.24.camel@kazak.uk.xensource.com>
	<0A4C59BE-A033-4200-88C2-A1E5A84E44BA@gmail.com>
	<1391425186.10515.51.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1283)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xen-4.3 port to illumos based platform
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 3, 2014, at 2:59 PM, Ian Campbell wrote:

> On Mon, 2014-02-03 at 14:55 +0400, Igor Kozhukhov wrote:
>> i'm using xc_solaris.c - because illumos based on solaris and has open API.
> 
> I suspect this was the case, thanks for confirming.
> 
>> for now - i have found and fixed my problems: i'm able to load PV guests: dilos PV on dilos-xen-4.3-dom0, Linux CentOS 6.4 and Debian 7.0 64-bit.
> 
> Excellent!
> 
> Is Illumos usable as a domU on a Linux dom0 as well?

i didn't test it myself, but i have reports of it working with Xen-4.2 based on CentOS.
i have tested dilos-xen-3.4-dom0 with Linux PV/HVM guests - they work well, and Windows HVM guests work well too.
i have updated libfsimage with ZFS support from illumos, so we can now use pygrub for illumos based VMs.

>> I'll send info later with new ISO and instruction for testing.
>> 'xl' now work well.
>> i'll work on provide my changes to Xen upstream.

i've sent an email with info about it - i hope for feedback about testing :)

> Thank you!
> 
> Ian.


-Igor


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiIi-0007as-Tw; Tue, 04 Feb 2014 15:52:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiIi-0007aC-6W
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:40 +0000
Received: from [85.158.143.35:23562] by server-2.bemta-4.messagelabs.com id
	2D/BC-10891-7CC01F25; Tue, 04 Feb 2014 15:52:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391529157!3080651!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21544 invoked from network); 4 Feb 2014 15:52:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:52:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99704465"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:52:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:52:36 -0500
Message-ID: <1391529155.6497.44.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Tue, 4 Feb 2014 15:52:35 +0000
In-Reply-To: <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
References: <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] doc: Better documentation about the
 usbdevice=['host:bus.addr'] format
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 16:03 +0000, Anthony PERARD wrote:
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  docs/man/xl.cfg.pod.5 | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 9941395..9c0b438 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1251,6 +1251,10 @@ Host devices can also be passed through in this way, by specifying
>  host:USBID, where USBID is of the form xxxx:yyyy.  The USBID can
>  typically be found by using lsusb or usb-devices.
>  
> +If you wish to use the "host:bus.addr" format, remove any leading '0' from the
> +bus and addr. For example, for the USB device on bus 008 dev 002, you will
> +write "host:8.2".

I tweaked this to "you should write...", then acked + applied.

Thanks.
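The zero-stripping rule in the hunk above can be sketched as a small helper; this is an illustration only, and `usb_host_spec` and its arguments are made-up names, not part of xl:

```python
def usb_host_spec(bus: str, addr: str) -> str:
    """Build an xl usbdevice 'host:bus.addr' string from the zero-padded
    bus and device numbers that lsusb prints (e.g. 'Bus 008 Device 002').
    Parsing as base-10 integers drops the leading zeros."""
    return "host:%d.%d" % (int(bus, 10), int(addr, 10))

# lsusb reports "Bus 008 Device 002" -> usbdevice entry "host:8.2"
print(usb_host_spec("008", "002"))
```

So the device on bus 008, dev 002 maps to `usbdevice=['host:8.2']`, matching the documentation change.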



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiIr-0007gZ-Nz; Tue, 04 Feb 2014 15:52:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAiIp-0007f3-KJ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:47 +0000
Received: from [193.109.254.147:5744] by server-15.bemta-14.messagelabs.com id
	FF/7D-10839-ECC01F25; Tue, 04 Feb 2014 15:52:46 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391529164!1942749!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28901 invoked from network); 4 Feb 2014 15:52:46 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:52:46 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14Fqbod011816
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:52:38 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FqbGm004374
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 15:52:37 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Fqawe004343; Tue, 4 Feb 2014 15:52:36 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:52:36 -0800
Message-ID: <52F10D0D.2050908@oracle.com>
Date: Tue, 04 Feb 2014 10:53:49 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DDBE0200007800118F62@nat28.tlf.novell.com>
In-Reply-To: <52F0DDBE0200007800118F62@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:31 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> @@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>>           err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
>>           vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>>   
>> -        /* Store appropriate registers in xenpmu_data */
>> -        if ( is_pv_32bit_domain(current->domain) )
>> +        if ( !is_hvm_domain(current->domain) )
>>           {
>> -            /*
>> -             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
>> -             * and therefore we treat it the same way as a non-priviledged
>> -             * PV 32-bit domain.
>> -             */
>> -            struct compat_cpu_user_regs *cmp;
>> -
>> -            gregs = guest_cpu_user_regs();
>> -
>> -            cmp = (struct compat_cpu_user_regs *)
>> -                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> -            XLAT_cpu_user_regs(cmp, gregs);
>> +            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
> The surrounding if checks !hvm, i.e. both PV and PVH can make it
> here. But TF_kernel_mode is meaningful for PV only.

As of this patch PVH doesn't work, so you won't get into this code path,
and the later patch (#16) addresses this. (Although it unnecessarily
calculates cs in the line above for PVH, only to call
hvm_get_segment_register() later; I'll move it into the !pvh clause there.)

>
>> +
>> +            /* Store appropriate registers in xenpmu_data */
>> +            if ( is_pv_32bit_domain(current->domain) )
>> +            {
>> +                gregs = guest_cpu_user_regs();
>> +
>> +                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
>> +                     !is_pv_32bit_domain(v->domain) )
>> +                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                           gregs, sizeof(struct cpu_user_regs));
>> +                else
>> +                {
>> +                    /*
>> +                     * 32-bit dom0 cannot process Xen's addresses (which are
>> +                     * 64 bit) and therefore we treat it the same way as a
>> +                     * non-priviledged PV 32-bit domain.
>> +                     */
>> +
>> +                    struct compat_cpu_user_regs *cmp;
>> +
>> +                    cmp = (struct compat_cpu_user_regs *)
>> +                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +                    XLAT_cpu_user_regs(cmp, gregs);
>> +                }
>> +            }
>> +            else if ( !is_control_domain(current->domain) &&
>> +                      !is_idle_vcpu(current) )
>> +            {
>> +                /* PV guest */
>> +                gregs = guest_cpu_user_regs();
>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                       gregs, sizeof(struct cpu_user_regs));
>> +            }
>> +            else
>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                       regs, sizeof(struct cpu_user_regs));
>> +
>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            gregs->cs = cs;
> And now you store a NUL selector (i.e. just the RPL bits) into the
> output field?
>>           }
>> -        else if ( !is_control_domain(current->domain) &&
>> -                 !is_idle_vcpu(current) )
>> +        else
>>           {
>> -            /* PV guest */
>> +            /* HVM guest */
>> +            struct segment_register cs;
>> +
>>               gregs = guest_cpu_user_regs();
>>               memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>                      gregs, sizeof(struct cpu_user_regs));
>> +
>> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            gregs->cs = cs.attr.fields.dpl;
> And here too? If that's intended, a code comment is a must.

This is an HVM-only path; PVH and PV don't go here, so cs should be valid.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiIr-0007gZ-Nz; Tue, 04 Feb 2014 15:52:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAiIp-0007f3-KJ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:47 +0000
Received: from [193.109.254.147:5744] by server-15.bemta-14.messagelabs.com id
	FF/7D-10839-ECC01F25; Tue, 04 Feb 2014 15:52:46 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391529164!1942749!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28901 invoked from network); 4 Feb 2014 15:52:46 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:52:46 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14Fqbod011816
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:52:38 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FqbGm004374
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 15:52:37 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Fqawe004343; Tue, 4 Feb 2014 15:52:36 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:52:36 -0800
Message-ID: <52F10D0D.2050908@oracle.com>
Date: Tue, 04 Feb 2014 10:53:49 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DDBE0200007800118F62@nat28.tlf.novell.com>
In-Reply-To: <52F0DDBE0200007800118F62@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:31 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> @@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>>           err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
>>           vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
>>   
>> -        /* Store appropriate registers in xenpmu_data */
>> -        if ( is_pv_32bit_domain(current->domain) )
>> +        if ( !is_hvm_domain(current->domain) )
>>           {
>> -            /*
>> -             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
>> -             * and therefore we treat it the same way as a non-priviledged
>> -             * PV 32-bit domain.
>> -             */
>> -            struct compat_cpu_user_regs *cmp;
>> -
>> -            gregs = guest_cpu_user_regs();
>> -
>> -            cmp = (struct compat_cpu_user_regs *)
>> -                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> -            XLAT_cpu_user_regs(cmp, gregs);
>> +            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
> The surrounding if checks !hvm, i.e. both PV and PVH can make it
> here. But TF_kernel_mode is meaningful for PV only.

As of this patch PVH doesn't work, so you won't get into this code path, 
and the later patch (#16) addresses this. (Although it unnecessarily 
calculates cs in the line above for PVH, only to call 
hvm_get_segment_register() later. I'll move that into the !pvh clause there.)

>
>> +
>> +            /* Store appropriate registers in xenpmu_data */
>> +            if ( is_pv_32bit_domain(current->domain) )
>> +            {
>> +                gregs = guest_cpu_user_regs();
>> +
>> +                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
>> +                     !is_pv_32bit_domain(v->domain) )
>> +                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                           gregs, sizeof(struct cpu_user_regs));
>> +                else
>> +                {
>> +                    /*
>> +                     * 32-bit dom0 cannot process Xen's addresses (which are
>> +                     * 64 bit) and therefore we treat it the same way as a
>> +                     * non-priviledged PV 32-bit domain.
>> +                     */
>> +
>> +                    struct compat_cpu_user_regs *cmp;
>> +
>> +                    cmp = (struct compat_cpu_user_regs *)
>> +                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +                    XLAT_cpu_user_regs(cmp, gregs);
>> +                }
>> +            }
>> +            else if ( !is_control_domain(current->domain) &&
>> +                      !is_idle_vcpu(current) )
>> +            {
>> +                /* PV guest */
>> +                gregs = guest_cpu_user_regs();
>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                       gregs, sizeof(struct cpu_user_regs));
>> +            }
>> +            else
>> +                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>> +                       regs, sizeof(struct cpu_user_regs));
>> +
>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            gregs->cs = cs;
> And now you store a NUL selector (i.e. just the RPL bits) into the
> output field?
>>           }
>> -        else if ( !is_control_domain(current->domain) &&
>> -                 !is_idle_vcpu(current) )
>> +        else
>>           {
>> -            /* PV guest */
>> +            /* HVM guest */
>> +            struct segment_register cs;
>> +
>>               gregs = guest_cpu_user_regs();
>>               memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>                      gregs, sizeof(struct cpu_user_regs));
>> +
>> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>> +            gregs->cs = cs.attr.fields.dpl;
> And here too? If that's intended, a code comment is a must.

This is an HVM-only path; PV and PVH don't go here, so cs should be valid.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:52:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiJ0-0007mY-5u; Tue, 04 Feb 2014 15:52:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAiIy-0007lA-LG
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:52:56 +0000
Received: from [85.158.139.211:30984] by server-7.bemta-5.messagelabs.com id
	97/06-14867-7DC01F25; Tue, 04 Feb 2014 15:52:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391529175!1612467!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5309 invoked from network); 4 Feb 2014 15:52:55 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:52:55 -0000
Received: by mail-ee0-f41.google.com with SMTP id e51so2191378eek.28
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 07:52:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=3saflPGjao9WWI/dWoGCtxX3nPESCzGD0deG9FVOopY=;
	b=cKJtZ3ciPu+r/WU5TdGKkLo0Iif0n7ARvvoWP4eXKY+X65BnQx+RWfzhMu+09o+tfV
	a3+wMYJ1ntNCu185wc34kmhzJxtmtdap4PXLXUKCdmAxEz0DJ2ZbOYenlTct11yJeQ3U
	bwenbYnMRbz926m6YijL++bu57gZea6uIVk5QD0O8Fe0Q9I1DfLYz9arP6QfxtVoOw65
	qNH5y6ERQh6LvZoehNn/YlUYu+e0J1pzsVCA8zS17Lwzn/uwHA0KMaD/8wx5rYOdHtB7
	xdL5SFlqogEHZoOenKnQiDRm0UQvEfjZa41CqUKTel1FDu/PCWTIs9vHvKRrbcDFcPkh
	GmQA==
X-Gm-Message-State: ALoCoQljaIOvQ+f61T2ssXCKolP5+AvrEDFezmWguVJAOaaZRRHpL+owz1sfOYTPeH9WGgqe+GqJ
X-Received: by 10.15.111.201 with SMTP id cj49mr15496589eeb.56.1391529174880; 
	Tue, 04 Feb 2014 07:52:54 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id j41sm8787317eeg.10.2014.02.04.07.52.52
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 07:52:54 -0800 (PST)
Message-ID: <52F10CD4.1050000@linaro.org>
Date: Tue, 04 Feb 2014 15:52:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52EFD711.5060201@linaro.org>
In-Reply-To: <52EFD711.5060201@linaro.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
	gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Forgot to add George.

On 02/03/2014 05:51 PM, Julien Grall wrote:
> (+ Xen ARM maintainers)
> 
> Hello Oleksandr,
> 
> Thanks for the patch. Next time, can you add the Xen ARM maintainers
> on cc? With the amount of mail on the list, your patch could easily be
> lost. :)
> 
> On 02/03/2014 05:33 PM, Oleksandr Tyshchenko wrote:
>> The possible deadlock scenario is explained below:
>>
>> non interrupt context:    interrupt context       interrupt context
>>                           (CPU0):                 (CPU1):
>> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>>   |                         |                       |
>>   vgic_disable_irqs()       ...                     ...
>>     |                         |                       |
>>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>>     |  ...                      |                       |
>>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>>     |  ...                        ...                     ...
>>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>>     |  ...                  . .       Oops! The lock has already been taken.
>>     |  spin_unlock(...)     . .
>>     |  ...                  . .
>>     gic_irq_disable()       . .
>>        ...                  . .
>>        spin_lock(...)       . .
>>        ...                  . .
>>        ... <----------------. .
>>        ... <------------------.
>>        ...
>>        spin_unlock(...)
>>
>> Since gic_remove_from_queues() and gic_irq_disable() are called from
>> non-interrupt context and acquire the same lock as gic_set_guest_irq(),
>> which is called from interrupt context, we must disable interrupts in
>> these functions to avoid possible deadlocks.
>>
>> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
> 
> I think this patch should have a release exception for Xen 4.4. It fixes
> a race condition in interrupt management.
> 
>> ---
>>  xen/arch/arm/gic.c |   10 ++++++----
>>  1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index c44a4d0..7d83b0c 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>>  static void gic_irq_disable(struct irq_desc *desc)
>>  {
>>      int irq = desc->irq;
>> +    unsigned long flags;
>>  
>> -    spin_lock(&desc->lock);
>> +    spin_lock_irqsave(&desc->lock, flags);
>>      spin_lock(&gic.lock);
>>      /* Disable routing */
>>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>>      desc->status |= IRQ_DISABLED;
>>      spin_unlock(&gic.lock);
>> -    spin_unlock(&desc->lock);
>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>  }
>>  
>>  static unsigned int gic_irq_startup(struct irq_desc *desc)
>> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>>  void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>  {
>>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>> +    unsigned long flags;
>>  
>> -    spin_lock(&gic.lock);
>> +    spin_lock_irqsave(&gic.lock, flags);
>>      if ( !list_empty(&p->lr_queue) )
>>          list_del_init(&p->lr_queue);
>> -    spin_unlock(&gic.lock);
>> +    spin_unlock_irqrestore(&gic.lock, flags);
>>  }
>>  
>>  void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>
> 
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:55:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiL3-0008ON-Qb; Tue, 04 Feb 2014 15:55:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiL2-0008Ny-5G
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:55:04 +0000
Received: from [193.109.254.147:30523] by server-11.bemta-14.messagelabs.com
	id 90/FF-24604-75D01F25; Tue, 04 Feb 2014 15:55:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391529301!1970087!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23013 invoked from network); 4 Feb 2014 15:55:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:55:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99705966"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:55:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:55:00 -0500
Message-ID: <1391529299.6497.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Date: Tue, 4 Feb 2014 15:54:59 +0000
In-Reply-To: <1391529155.6497.44.camel@kazak.uk.xensource.com>
References: <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
	<1391529155.6497.44.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] doc: Better documentation about the
 usbdevice=['host:bus.addr'] format
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

graft 15 <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
thanks

On Tue, 2014-02-04 at 15:52 +0000, Ian Campbell wrote:
> On Tue, 2014-01-28 at 16:03 +0000, Anthony PERARD wrote:
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> >  docs/man/xl.cfg.pod.5 | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> > index 9941395..9c0b438 100644
> > --- a/docs/man/xl.cfg.pod.5
> > +++ b/docs/man/xl.cfg.pod.5
> > @@ -1251,6 +1251,10 @@ Host devices can also be passed through in this way, by specifying
> >  host:USBID, where USBID is of the form xxxx:yyyy.  The USBID can
> >  typically be found by using lsusb or usb-devices.
> >  
> > +If you wish to use the "host:bus.addr" format, remove any leading '0' from the
> > +bus and addr. For example, for the USB device on bus 008 dev 002, you will
> > +write "host:8.2".
> 
> I tweaked this to "you should write...", then acked + applied.

There is an associated bug but I think it should remain open until the
underlying issue is fixed, rather than just documented as suggested
here.

If you think this issue should be closed please send email to xen-devel
+ bcc xen@bugs.xenproject.org which begins:
        close 15
        thanks
(unindented)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:55:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiLc-0008WP-GD; Tue, 04 Feb 2014 15:55:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiLa-0008Vz-RP
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:55:38 +0000
Received: from [193.109.254.147:26253] by server-14.bemta-14.messagelabs.com
	id C1/64-29228-A7D01F25; Tue, 04 Feb 2014 15:55:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391529336!1943523!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
From xen-devel-bounces@lists.xen.org Tue Feb 04 15:55:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiLc-0008WP-GD; Tue, 04 Feb 2014 15:55:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiLa-0008Vz-RP
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:55:38 +0000
Received: from [193.109.254.147:26253] by server-14.bemta-14.messagelabs.com
	id C1/64-29228-A7D01F25; Tue, 04 Feb 2014 15:55:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391529336!1943523!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17265 invoked from network); 4 Feb 2014 15:55:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99706437"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 15:55:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	10:55:35 -0500
Message-ID: <1391529334.6497.48.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 4 Feb 2014 15:55:34 +0000
In-Reply-To: <52E79757.9090708@eu.citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-2-git-send-email-andrew.cooper3@citrix.com>
	<52E79757.9090708@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 1/2] tools/libxc: goto correct label on
 error paths
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 11:41 +0000, George Dunlap wrote:
> On 01/27/2014 04:25 PM, Andrew Cooper wrote:
> > Both of these "goto finish;" statements are actually errors, and need to "goto
> > out;" instead, which will correctly destroy the domain and return an error,
> > rather than trying to finish the migration (and in at least one scenario,
> > return success).
> >
> > Signed-off-by: Andrew Cooper<andrew.cooper3@citrix.com>
> > CC: Ian Campbell<Ian.Campbell@citrix.com>
> > CC: Ian Jackson<Ian.Jackson@eu.citrix.com>
> > CC: George Dunlap<george.dunlap@eu.citrix.com>
> 
> Right -- I can't imagine any goodness coming from jumping to "finish" in 
> those cases...
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:55:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiLk-00007Z-Tr; Tue, 04 Feb 2014 15:55:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WAiLj-000074-O7
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:55:47 +0000
Received: from [193.109.254.147:44824] by server-7.bemta-14.messagelabs.com id
	A4/F0-23424-38D01F25; Tue, 04 Feb 2014 15:55:47 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391529345!1961653!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 718 invoked from network); 4 Feb 2014 15:55:46 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 4 Feb 2014 15:55:46 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WAiPt-0003HJ-H6; Tue, 04 Feb 2014 16:00:05 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1391529605.12602@bugs.xenproject.org>
References: <1390845218-823-1-git-send-email-wei.liu2@citrix.com>
	<20140129183349.GA14312@phenom.dumpdata.com>
	<52EA63E2.9020603@eu.citrix.com>
	<1391529044.6497.41.camel@kazak.uk.xensource.com>
	<1391529127.6497.42.camel@kazak.uk.xensource.com>
In-Reply-To: <1391529127.6497.42.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 04 Feb 2014 16:00:05 +0000
Subject: [Xen-devel] Processed: Re: [PATCH] libxc: fix claim mode when
	creating HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> close 32
Closing bug #32
> thanks
Finished processing.

Modified/created Bugs:
 - 32: http://bugs.xenproject.org/xen/bug/32

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:55:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiLm-00008u-BE; Tue, 04 Feb 2014 15:55:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WAiLl-00007c-DA
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:55:49 +0000
Received: from [85.158.143.35:54410] by server-3.bemta-4.messagelabs.com id
	4D/55-11539-48D01F25; Tue, 04 Feb 2014 15:55:48 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391529347!3081627!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21632 invoked from network); 4 Feb 2014 15:55:48 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 4 Feb 2014 15:55:48 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WAiPv-0003HT-MU; Tue, 04 Feb 2014 16:00:07 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1391529605.12603@bugs.xenproject.org>
References: <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
	<1391529155.6497.44.camel@kazak.uk.xensource.com>
	<1391529299.6497.46.camel@kazak.uk.xensource.com>
In-Reply-To: <1391529299.6497.46.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 04 Feb 2014 16:00:07 +0000
Subject: [Xen-devel] Processed: Re: [PATCH] doc: Better documentation about
 the usbdevice=['host:bus.addr'] format
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> graft 15 <1390924983-4864-1-git-send-email-anthony.perard@citrix.com>
Graft `<1390924983-4864-1-git-send-email-anthony.perard@citrix.com>' onto #15
> thanks
Finished processing.

Modified/created Bugs:
 - 15: http://bugs.xenproject.org/xen/bug/15

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:55:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiLn-00009v-1N; Tue, 04 Feb 2014 15:55:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAiLl-00007v-O9
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:55:49 +0000
Received: from [85.158.143.35:32607] by server-1.bemta-4.messagelabs.com id
	15/4D-31661-58D01F25; Tue, 04 Feb 2014 15:55:49 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391529346!3081625!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21668 invoked from network); 4 Feb 2014 15:55:48 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 15:55:48 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14FtQWW000895
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 15:55:26 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FtO0l000342
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 15:55:24 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14FtOJO000327; Tue, 4 Feb 2014 15:55:24 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 07:55:23 -0800
Message-ID: <52F10DB4.9070407@oracle.com>
Date: Tue, 04 Feb 2014 10:56:36 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-15-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DF430200007800118F76@nat28.tlf.novell.com>
In-Reply-To: <52F0DF430200007800118F76@nat28.tlf.novell.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 14/17] x86/VPMU: Save VPMU state for PV
 guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:38 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> Save VPMU state during context switch for both HVM and PV guests unless we
>> are in PMU privileged mode (i.e. dom0 is doing all profiling) and the
>> switched
>> out domain is not the control domain. The latter condition is needed because
>> we may have just turned the privileged PMU mode on and thus need to save the
>> last domain.
> While this is understandable, ...
>
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1444,17 +1444,16 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>>       }
>>   
>>       if (prev != next)
>> -        update_runstate_area(prev);
>> -
>> -    if ( is_hvm_vcpu(prev) )
>>       {
>> -        if (prev != next)
>> +        update_runstate_area(prev);
>> +        if ( !(vpmu_mode & XENPMU_MODE_PRIV) ||
>> +             !is_control_domain(prev->domain) )
>>               vpmu_save(prev);
> ... I'd really like you to investigate ways to achieve the same effect
> without this extra second condition added to the context switch path.
> E.g. by synchronously issuing a save on all affected vCPU-s when
> privileged mode gets turned on.

Yes, I should do something like that.

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 15:56:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 15:56:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiMD-0000MN-Sv; Tue, 04 Feb 2014 15:56:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAiMC-0000LZ-EG
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 15:56:16 +0000
Received: from [85.158.137.68:28248] by server-17.bemta-3.messagelabs.com id
	88/47-22569-F9D01F25; Tue, 04 Feb 2014 15:56:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391529373!12165407!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19012 invoked from network); 4 Feb 2014 15:56:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 15:56:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97800443"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 15:55:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 10:55:58 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAiLu-0005Ry-G8;
	Tue, 04 Feb 2014 15:55:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAiLu-0004RA-7e;
	Tue, 04 Feb 2014 15:55:58 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21233.3469.939694.871033@mariner.uk.xensource.com>
Date: Tue, 4 Feb 2014 15:55:57 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391528262.6497.35.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<21233.2088.315549.191502@mariner.uk.xensource.com>
	<1391528262.6497.35.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> On Tue, 2014-02-04 at 15:32 +0000, Ian Jackson wrote:
> > The approach you are taking here is that for pages explicitly mapped
> > by some libxc caller, you do the flush on unmap.  But what about
> > callers who don't unmap ?  Are there callers which don't unmap and
> > which instead are relying on memory coherency assumptions which aren't
> > true on arm ?
> 
> Callers which don't unmap would be leaking mappings and therefore buggy
> in a long running toolstack.

What I mean is that they might map the guest pages, and expect to
exchange data with the guest through the pages while they were still
mapped ...

> Also after the initial start of day period we require that the guest
> enable its caches.

... but before the guest enables caching.
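To make that window concrete, here is a toy model (illustrative names only, not real Xen/libxc APIs) of why data the toolstack writes through a cacheable dom0 mapping stays invisible to a guest that reads RAM with its caches still disabled, until the dirty lines are cleaned, which is what the flush-on-unmap in this patch provides:

```c
#include <assert.h>
#include <string.h>

/*
 * Toy model of the coherency hazard under discussion (illustrative
 * only -- these are not real Xen/libxc APIs).  The toolstack writes
 * guest memory through a cacheable dom0 mapping; a guest running
 * with caches disabled reads RAM directly, so it sees stale data
 * until the dirty lines are written back -- hence flushing on unmap
 * (or the by-VMID flush after domain build).
 */
static char ram[32];    /* what an uncached reader (the guest) sees */
static char cache[32];  /* dirty write-back cache lines held by dom0 */
static int dirty;

/* Toolstack writes land in the cache, not (yet) in RAM. */
static void toolstack_write(const char *msg)
{
    strncpy(cache, msg, sizeof(cache) - 1);
    dirty = 1;
}

/* Clean (write back) the dirty lines, as done on unmap. */
static void flush_on_unmap(void)
{
    if (dirty) {
        memcpy(ram, cache, sizeof(ram));
        dirty = 0;
    }
}

static const char *guest_read_uncached(void)
{
    return ram;
}
```

The hazard Ian describes is exactly a caller that keeps the mapping (never triggering the flush) while the guest is still in the uncached window.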

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:01:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:01:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiRB-0001V8-T3; Tue, 04 Feb 2014 16:01:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAiR9-0001Uy-RL
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:01:24 +0000
Received: from [85.158.139.211:31748] by server-12.bemta-5.messagelabs.com id
	F0/A7-15415-3DE01F25; Tue, 04 Feb 2014 16:01:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391529682!1603963!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7922 invoked from network); 4 Feb 2014 16:01:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 16:01:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 16:01:21 +0000
Message-Id: <52F11CDF02000078001191EB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 16:01:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DDBE0200007800118F62@nat28.tlf.novell.com>
	<52F10D0D.2050908@oracle.com>
In-Reply-To: <52F10D0D.2050908@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 16:53, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/04/2014 06:31 AM, Jan Beulich wrote:
>>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>> +            gregs->cs = cs;
>> And now you store a NUL selector (i.e. just the RPL bits) into the
>> output field?
>>>           }
>>> -        else if ( !is_control_domain(current->domain) &&
>>> -                 !is_idle_vcpu(current) )
>>> +        else
>>>           {
>>> -            /* PV guest */
>>> +            /* HVM guest */
>>> +            struct segment_register cs;
>>> +
>>>               gregs = guest_cpu_user_regs();
>>>               memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>>                      gregs, sizeof(struct cpu_user_regs));
>>> +
>>> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>> +            gregs->cs = cs.attr.fields.dpl;
>> And here too? If that's intended, a code comment is a must.
> 
> This is an HVM-only path; PVH and PV don't go here, so cs should be valid.

Isn't that reply of mine a few lines up against the PV code path? And
why would the selector being wrong for HVM be okay?
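For context on the selector objection (a sketch with illustrative helpers, not Xen code): bits 0-1 of an x86 segment selector are the RPL, bit 2 the TI flag, and bits 3-15 the descriptor table index, so storing only a privilege level (0-3) into a field that normally holds a full selector yields a null selector carrying nothing but RPL bits:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative x86 segment-selector field extraction (not Xen code):
 * bits 0-1 = RPL, bit 2 = TI, bits 3-15 = descriptor table index.
 */
static uint16_t selector_rpl(uint16_t sel)   { return sel & 0x3; }
static uint16_t selector_ti(uint16_t sel)    { return (sel >> 2) & 0x1; }
static uint16_t selector_index(uint16_t sel) { return sel >> 3; }
```

For example, the common 64-bit user code selector 0x33 has index 6 and RPL 3; writing just a DPL of 3 into the cs field instead produces selector 0x0003, index 0 -- a null selector with only the RPL bits set, which is the objection above.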

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:01:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiRe-0001X6-AH; Tue, 04 Feb 2014 16:01:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WAiRc-0001Wv-Kc
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:01:52 +0000
Received: from [85.158.143.35:55067] by server-2.bemta-4.messagelabs.com id
	86/3D-10891-0FE01F25; Tue, 04 Feb 2014 16:01:52 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391529710!3072006!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3989 invoked from network); 4 Feb 2014 16:01:51 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:01:51 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so4453905eak.2
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 08:01:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=KRrFKl82EhdGg8UzKnxlQ8I9ojE9FjPi1aZ/XXFSEn0=;
	b=GDEmfTT31vY5Mw3Wo+ys5z0CH7Qu2fn2uomVdRSR3zT1fuGnJKuF6KQc/GbvQt9Nq0
	2YeViQE6a19wYzhPQLdoMlGYWeGy+MIsVf14Jhh3Oczvr5yal1w+Jhwu1wDWD7Jy+MGR
	d9NwvDNskVuyJSkMSm0QQrqCZH2NB/HPYSNFy+Z5smKYrlr4LZt+hbwrlxVA12MhCZCc
	xoxdbS9kgmfv15J5zRV0Je00umyHp3N2zkAtjNsd2j/6x7vQi/3uu0wdpE60+LcP4lSE
	pY0N44g90tPznAFGiXDIaHghFkZXsgkvgQwr8IR/CDPFPKxzHYMMMtrDC1+hNPLeIG7l
	wQ4w==
X-Gm-Message-State: ALoCoQmBvVZdLAK2yTVQB1tNn9gPw7vS+AZxtFs63pptUs1gjXkTAwR8MlgquREqhlfGGt4VPi2Z
X-Received: by 10.15.61.7 with SMTP id h7mr16234656eex.49.1391529710764;
	Tue, 04 Feb 2014 08:01:50 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id m9sm76835335eeh.3.2014.02.04.08.01.49
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 08:01:50 -0800 (PST)
Message-ID: <52F10EEF.7050402@m2r.biz>
Date: Tue, 04 Feb 2014 17:01:51 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: ehouby@yahoo.com, xen-devel@lists.xen.org, 
 xen@lists.fedoraproject.org
References: <1391528492.2441.26.camel@astar.houby.net>
In-Reply-To: <1391528492.2441.26.camel@astar.houby.net>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 04/02/2014 16:41, Eric Houby ha scritto:
> Xen list,
>
> I am trying to boot a F20 guest and connect using Spice but have run
> into an issue.
>
> My VM config file includes:
> spice = 1
> spicehost='0.0.0.0'
> spiceport=6001
> spicedisable_ticketing=1
>
>
> Is Spice supported with qemu-xen-traditional?

No, only with upstream qemu; and if you compile xen and qemu from source
you must also enable spice support in the qemu build. For example, in my
xen build tests I add:

tools/Makefile
@@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
          --datadir=$(SHAREDIR)/qemu-xen \
          --localstatedir=/var \
          --disable-kvm \
+        --enable-spice \
+        --enable-usb-redir \
          --disable-docs \
          --disable-guest-agent \
          --python=$(PYTHON) \

If you use upstream qemu from a distribution package, it probably already
has spice built in; on debian, for example, I have tested it and it works.

>
>
> [root@xen ~]# xl -vvv create f20.xl
> Parsing config from f20.xl
> libxl: debug: libxl_create.c:1342:do_domain_create: ao 0x795750: create:
> how=(nil) callback=(nil) poller=0x794e10
> libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault:
> qemu-xen is unavailable, use qemu-xen-traditional instead: No such file
> or directory
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk
> vdev=hda, using backend phy
> libxl: debug: libxl_create.c:797:initiate_domain_create: running
> bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x795af8: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=4,
> free_memkb=14131
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> candidate with 1 nodes, 8 cpus and 14131 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9e704
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19e704
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
>    Loader:        0000000000100000->000000000019e704
>    Modules:       0000000000000000->0000000000000000
>    TOTAL:         0000000000000000->00000000ffc00000
>    ENTRY ADDRESS: 0000000000100000
> xc: detail: PHYSICAL MEMORY ALLOCATION:
>    4KB PAGES: 0x0000000000000200
>    2MB PAGES: 0x00000000000003fd
>    1GB PAGES: 0x0000000000000002
> xc: detail: elf_load_binary: phdr 0 at 0x7f87b7100000 -> 0x7f87b719558d
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x796f68 wpath=/local/domain/0/backend/vbd/3/768/state token=3/0:
> register slotnum=3
> libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x795750:
> inprogress: poller=0x794e10, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x796f68
> wpath=/local/domain/0/backend/vbd/3/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/3/768/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback:
> backend /local/domain/0/backend/vbd/3/768/state wanted state 2 still
> waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x796f68
> wpath=/local/domain/0/backend/vbd/3/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/3/768/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback:
> backend /local/domain/0/backend/vbd/3/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
> w=0x796f68 wpath=/local/domain/0/backend/vbd/3/768/state token=3/0:
> deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x796f68: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
> script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x796ff0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x796ff0: deregister unregistered
> libxl: debug: libxl_dm.c:1303:libxl__spawn_local_dm: Spawning
> device-model /usr/lib/xen/bin/qemu-dm with arguments:
> libxl: debug:
> libxl_dm.c:1305:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -d
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   3
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -domain-name
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   f20
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -videoram
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   4
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   c
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -acpi
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -vcpus
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -vcpu_avail
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   0x03
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -net
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
> nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -net
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
> tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x795d30 wpath=/local/domain/0/device-model/3/state token=3/1:
> register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x795d30
> wpath=/local/domain/0/device-model/3/state token=3/1: event
> epath=/local/domain/0/device-model/3/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x795d30
> wpath=/local/domain/0/device-model/3/state token=3/1: event
> epath=/local/domain/0/device-model/3/state
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
> w=0x795d30 wpath=/local/domain/0/device-model/3/state token=3/1:
> deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x795d30: deregister unregistered
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x79a038 wpath=/local/domain/0/backend/vif/3/0/state token=3/2:
> register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x79a038
> wpath=/local/domain/0/backend/vif/3/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/3/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback:
> backend /local/domain/0/backend/vif/3/0/state wanted state 2 still
> waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x79a038
> wpath=/local/domain/0/backend/vif/3/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/3/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback:
> backend /local/domain/0/backend/vif/3/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
> w=0x79a038 wpath=/local/domain/0/backend/vif/3/0/state token=3/2:
> deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a038: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
> script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a0c0: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
> script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a0c0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a0c0: deregister unregistered
> libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x795750:
> progress report: ignored
> libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x795750:
> complete, rc=0
> libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x795750:
> destroy
> xc: debug: hypercall buffer: total allocations:1097 total releases:1097
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:1089 misses:4 toobig:4
> [root@xen ~]#
>
> The QEMU command line created is:
>
> /usr/lib/xen/bin/qemu-dm -d 3 -domain-name f20 -videoram 4 -boot c -acpi
> -vcpus 2 -vcpu_avail 0x03 -net
> nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000 -net
> tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no -M xenfv
>
>
>
> Thanks,
>
> Eric
>
>
>
>
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:01:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiRe-0001X6-AH; Tue, 04 Feb 2014 16:01:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WAiRc-0001Wv-Kc
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:01:52 +0000
Received: from [85.158.143.35:55067] by server-2.bemta-4.messagelabs.com id
	86/3D-10891-0FE01F25; Tue, 04 Feb 2014 16:01:52 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391529710!3072006!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3989 invoked from network); 4 Feb 2014 16:01:51 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:01:51 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so4453905eak.2
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 08:01:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=KRrFKl82EhdGg8UzKnxlQ8I9ojE9FjPi1aZ/XXFSEn0=;
	b=GDEmfTT31vY5Mw3Wo+ys5z0CH7Qu2fn2uomVdRSR3zT1fuGnJKuF6KQc/GbvQt9Nq0
	2YeViQE6a19wYzhPQLdoMlGYWeGy+MIsVf14Jhh3Oczvr5yal1w+Jhwu1wDWD7Jy+MGR
	d9NwvDNskVuyJSkMSm0QQrqCZH2NB/HPYSNFy+Z5smKYrlr4LZt+hbwrlxVA12MhCZCc
	xoxdbS9kgmfv15J5zRV0Je00umyHp3N2zkAtjNsd2j/6x7vQi/3uu0wdpE60+LcP4lSE
	pY0N44g90tPznAFGiXDIaHghFkZXsgkvgQwr8IR/CDPFPKxzHYMMMtrDC1+hNPLeIG7l
	wQ4w==
X-Gm-Message-State: ALoCoQmBvVZdLAK2yTVQB1tNn9gPw7vS+AZxtFs63pptUs1gjXkTAwR8MlgquREqhlfGGt4VPi2Z
X-Received: by 10.15.61.7 with SMTP id h7mr16234656eex.49.1391529710764;
	Tue, 04 Feb 2014 08:01:50 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id m9sm76835335eeh.3.2014.02.04.08.01.49
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 08:01:50 -0800 (PST)
Message-ID: <52F10EEF.7050402@m2r.biz>
Date: Tue, 04 Feb 2014 17:01:51 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: ehouby@yahoo.com, xen-devel@lists.xen.org, 
 xen@lists.fedoraproject.org
References: <1391528492.2441.26.camel@astar.houby.net>
In-Reply-To: <1391528492.2441.26.camel@astar.houby.net>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/2014 16:41, Eric Houby wrote:
> Xen list,
>
> I am trying to boot a F20 guest and connect using Spice but have run
> into an issue.
>
> My VM config file includes:
> spice = 1
> spicehost='0.0.0.0'
> spiceport=6001
> spicedisable_ticketing=1
>
>
> Is Spice supported with qemu-xen-traditional?

No, Spice is supported only with upstream qemu. If you compile Xen and 
qemu from source, you also need to enable Spice support in the qemu 
build; for example, in my Xen build tests I add:

tools/Makefile
@@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
          --datadir=$(SHAREDIR)/qemu-xen \
          --localstatedir=/var \
          --disable-kvm \
+        --enable-spice \
+        --enable-usb-redir \
          --disable-docs \
          --disable-guest-agent \
          --python=$(PYTHON) \

If you use upstream qemu from a distribution package, it probably 
already has Spice built in; for example, on Debian I have already tested 
it and it works.
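
Since libxl silently falls back to qemu-xen-traditional when the upstream binary is missing (as the log below shows), it can help to sanity-check the config before booting. This is only an illustrative sketch: the `check_spice_config` helper and its warning text are hypothetical, not part of xl, and the check is simplified (real xl defaults to qemu-xen for HVM and only falls back at runtime):

```python
def check_spice_config(cfg_text):
    """Parse a minimal xl-style config and warn if Spice is requested
    without an explicit device_model_version of 'qemu-xen' (upstream),
    since qemu-xen-traditional has no Spice support."""
    opts = {}
    for line in cfg_text.splitlines():
        line = line.split('#')[0].strip()        # drop comments
        if '=' in line:
            key, val = line.split('=', 1)
            opts[key.strip()] = val.strip().strip('\'"')
    warnings = []
    if opts.get('spice') == '1' and opts.get('device_model_version') != 'qemu-xen':
        warnings.append("spice=1 needs device_model_version='qemu-xen'")
    return warnings

cfg = """
spice = 1
spicehost='0.0.0.0'
spiceport=6001
spicedisable_ticketing=1
"""
print(check_spice_config(cfg))  # → ["spice=1 needs device_model_version='qemu-xen'"]
```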

>
>
> [root@xen ~]# xl -vvv create f20.xl
> Parsing config from f20.xl
> libxl: debug: libxl_create.c:1342:do_domain_create: ao 0x795750: create:
> how=(nil) callback=(nil) poller=0x794e10
> libxl: verbose: libxl_create.c:134:libxl__domain_build_info_setdefault:
> qemu-xen is unavailable, use qemu-xen-traditional instead: No such file
> or directory
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:286:libxl__device_disk_set_backend: Disk
> vdev=hda, using backend phy
> libxl: debug: libxl_create.c:797:initiate_domain_create: running
> bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> domain, skipping bootloader
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x795af8: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> placement candidate found: nr_nodes=1, nr_cpus=8, nr_vcpus=4,
> free_memkb=14131
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> candidate with 1 nodes, 8 cpus and 14131 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0x9e704
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x19e704
> xc: detail: VIRTUAL MEMORY ARRANGEMENT:
>    Loader:        0000000000100000->000000000019e704
>    Modules:       0000000000000000->0000000000000000
>    TOTAL:         0000000000000000->00000000ffc00000
>    ENTRY ADDRESS: 0000000000100000
> xc: detail: PHYSICAL MEMORY ALLOCATION:
>    4KB PAGES: 0x0000000000000200
>    2MB PAGES: 0x00000000000003fd
>    1GB PAGES: 0x0000000000000002
> xc: detail: elf_load_binary: phdr 0 at 0x7f87b7100000 -> 0x7f87b719558d
> libxl: debug: libxl_device.c:251:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x796f68 wpath=/local/domain/0/backend/vbd/3/768/state token=3/0:
> register slotnum=3
> libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x795750:
> inprogress: poller=0x794e10, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x796f68
> wpath=/local/domain/0/backend/vbd/3/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/3/768/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback:
> backend /local/domain/0/backend/vbd/3/768/state wanted state 2 still
> waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x796f68
> wpath=/local/domain/0/backend/vbd/3/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/3/768/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback:
> backend /local/domain/0/backend/vbd/3/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
> w=0x796f68 wpath=/local/domain/0/backend/vbd/3/768/state token=3/0:
> deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x796f68: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
> script: /etc/xen/scripts/block add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x796ff0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x796ff0: deregister unregistered
> libxl: debug: libxl_dm.c:1303:libxl__spawn_local_dm: Spawning
> device-model /usr/lib/xen/bin/qemu-dm with arguments:
> libxl: debug:
> libxl_dm.c:1305:libxl__spawn_local_dm:   /usr/lib/xen/bin/qemu-dm
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -d
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   3
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -domain-name
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   f20
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -videoram
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   4
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   c
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -acpi
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -vcpus
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -vcpu_avail
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   0x03
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -net
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
> nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -net
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:
> tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1305:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x795d30 wpath=/local/domain/0/device-model/3/state token=3/1:
> register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x795d30
> wpath=/local/domain/0/device-model/3/state token=3/1: event
> epath=/local/domain/0/device-model/3/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x795d30
> wpath=/local/domain/0/device-model/3/state token=3/1: event
> epath=/local/domain/0/device-model/3/state
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
> w=0x795d30 wpath=/local/domain/0/device-model/3/state token=3/1:
> deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x795d30: deregister unregistered
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x79a038 wpath=/local/domain/0/backend/vif/3/0/state token=3/2:
> register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x79a038
> wpath=/local/domain/0/backend/vif/3/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/3/0/state
> libxl: debug: libxl_event.c:646:devstate_watch_callback:
> backend /local/domain/0/backend/vif/3/0/state wanted state 2 still
> waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x79a038
> wpath=/local/domain/0/backend/vif/3/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/3/0/state
> libxl: debug: libxl_event.c:642:devstate_watch_callback:
> backend /local/domain/0/backend/vif/3/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:595:libxl__ev_xswatch_deregister: watch
> w=0x79a038 wpath=/local/domain/0/backend/vif/3/0/state token=3/2:
> deregister slotnum=3
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a038: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
> script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a0c0: deregister unregistered
> libxl: debug: libxl_device.c:1022:device_hotplug: calling hotplug
> script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a0c0: deregister unregistered
> libxl: debug: libxl_event.c:607:libxl__ev_xswatch_deregister: watch
> w=0x79a0c0: deregister unregistered
> libxl: debug: libxl_event.c:1729:libxl__ao_progress_report: ao 0x795750:
> progress report: ignored
> libxl: debug: libxl_event.c:1559:libxl__ao_complete: ao 0x795750:
> complete, rc=0
> libxl: debug: libxl_event.c:1531:libxl__ao__destroy: ao 0x795750:
> destroy
> xc: debug: hypercall buffer: total allocations:1097 total releases:1097
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:1089 misses:4 toobig:4
> [root@xen ~]#
>
> The QEMU command line created is:
>
> /usr/lib/xen/bin/qemu-dm -d 3 -domain-name f20 -videoram 4 -boot c -acpi
> -vcpus 2 -vcpu_avail 0x03 -net
> nic,vlan=0,macaddr=00:16:00:00:11:22,model=e1000 -net
> tap,vlan=0,ifname=vif3.0-emu,bridge=br0,script=no,downscript=no -M xenfv
>
>
>
> Thanks,
>
> Eric
>
>
>
>
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:07:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiX5-0001pe-8t; Tue, 04 Feb 2014 16:07:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiX3-0001pZ-Ml
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:07:29 +0000
Received: from [193.109.254.147:58449] by server-16.bemta-14.messagelabs.com
	id 32/EE-21945-14011F25; Tue, 04 Feb 2014 16:07:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391530046!1975105!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4072 invoked from network); 4 Feb 2014 16:07:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:07:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99714611"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 16:07:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	11:07:01 -0500
Message-ID: <1391530020.6497.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 4 Feb 2014 16:07:00 +0000
In-Reply-To: <52F10940.9000109@linaro.org>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-4-git-send-email-ian.campbell@citrix.com>
	<52F10940.9000109@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] Revert "xen: arm: force guest memory
 accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:37 +0000, Julien Grall wrote:
> On 02/04/2014 02:22 PM, Ian Campbell wrote:
> > This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.
> > 
> > This approach has a shortcoming in that it breaks when a guest enables its
> > MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first/at the
> > same time. It turns out that FreeBSD does this.
> > 
> > This has now been fixed (yet) another way (third time is the charm!) so remove
> > this support. The original commit contained some fixes which are still
> > relevant even with the revert of the bulk of the patch:
> >  - Correction to HSR_SYSREG_CRN_MASK
> >  - Rename of HSR_SYSCTL macros to avoid naming clash
> >  - Definition of some additional cp reg specifications
> > 
> > Since these are still useful they are not reverted.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Except the spurious line toward the end of the patch:

Seems it was spuriously removed by the original commit, so this was just
reverting that. But I think it is correct to keep it out, so I have done
so.

> Acked-by: Julien Grall <julien.grall@linaro.org>

Thanks.

> 
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index b8f2e82..ec51d1b 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> 
> [..]
> 
> >      default:
> >          printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
> >                 sysreg.read ? "mrs" : "msr",
> > @@ -1635,6 +1477,7 @@ done:
> >      if (first) unmap_domain_page(first);
> >  }
> >  
> > +
> 
> Spurious change?
> 
> >  static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
> >                                        union hsr hsr)
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:08:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:08:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiXj-0001sm-On; Tue, 04 Feb 2014 16:08:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiXh-0001sW-Qo
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:08:09 +0000
Received: from [85.158.137.68:40466] by server-7.bemta-3.messagelabs.com id
	BD/44-13775-96011F25; Tue, 04 Feb 2014 16:08:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391530086!9683961!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29572 invoked from network); 4 Feb 2014 16:08:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:08:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97810848"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 16:07:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	11:07:57 -0500
Message-ID: <1391530076.6497.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 4 Feb 2014 16:07:56 +0000
In-Reply-To: <52F119F50200007800119175@nat28.tlf.novell.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<52F10E8202000078001190A7@nat28.tlf.novell.com>
	<1391528580.6497.36.camel@kazak.uk.xensource.com>
	<52F119F50200007800119175@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:48 +0000, Jan Beulich wrote:
> >>> On 04.02.14 at 16:43, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > sync_page is a much better suggestion. Perhaps even
> > sync_page_to/from_cache, I suppose that might actually be more
> > confusing.
> 
> Hmm, on one hand it's even more precise, but otoh - what would a
> sync_page_to_cache() look like?

That suggests moving something into the cache, where this function is
actually pushing something out of the cache and into RAM.

sync_page_to_ram then? I think that is better than _from_cache.
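
The semantics being named here can be sketched roughly as follows. This is illustrative only: the function name is the one proposed in this thread, but the loop shape, constants, and the barrier stand-in are assumptions (on real ARM hardware each line would be cleaned to the point of coherency with a `dc cvac`-style instruction, not a compiler barrier):

```c
#define PAGE_SIZE  4096
#define CACHE_LINE 64

static char demo_page[PAGE_SIZE];

/* Sketch of sync_page_to_ram(): walk a page one cache line at a time,
 * pushing each dirty line out to RAM so a guest running with caches
 * disabled sees the data. Returns the number of lines maintained. */
static int sync_page_to_ram(void *page)
{
    volatile char *p = page;
    int flushed = 0;
    for (int off = 0; off < PAGE_SIZE; off += CACHE_LINE) {
        (void)p[off];           /* touch the line being maintained */
        __sync_synchronize();   /* stand-in for the per-line clean op */
        flushed++;
    }
    return flushed;
}
```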

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:09:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiYw-00022R-Sg; Tue, 04 Feb 2014 16:09:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAiYu-00022E-Sk
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:09:25 +0000
Received: from [85.158.139.211:51157] by server-9.bemta-5.messagelabs.com id
	64/97-11237-4B011F25; Tue, 04 Feb 2014 16:09:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391530162!1623886!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12886 invoked from network); 4 Feb 2014 16:09:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:09:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97811759"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 16:09:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	11:09:21 -0500
Message-ID: <1391530160.6497.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 16:09:20 +0000
In-Reply-To: <21233.3469.939694.871033@mariner.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<21233.2088.315549.191502@mariner.uk.xensource.com>
	<1391528262.6497.35.camel@kazak.uk.xensource.com>
	<21233.3469.939694.871033@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:55 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> > On Tue, 2014-02-04 at 15:32 +0000, Ian Jackson wrote:
> > > The approach you are taking here is that for pages explicitly mapped
> > > by some libxc caller, you do the flush on unmap.  But what about
> > > callers who don't unmap ?  Are there callers which don't unmap and
> > > which instead are relying on memory coherency assumptions which aren't
> > > true on arm ?
> > 
> > Callers which don't unmap would be leaking mappings and therefore buggy
> > in a long running toolstack.
> 
> What I mean is that they might map the guest pages, and expect to
> exchange data with the guest through the pages while they were still
> mapped ...
> 
> > Also after the initial start of day period we require that the guest
> > enable its caches.
> 
> ... but before the guest enables caching.

We basically rule that out in the ABI requirements.

The cases where this would happen are things like xenstore and
xenconsole but those are driven from the guest end, which requires the
caches to be on.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<1391528262.6497.35.camel@kazak.uk.xensource.com>
	<21233.3469.939694.871033@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 15:55 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> > On Tue, 2014-02-04 at 15:32 +0000, Ian Jackson wrote:
> > > The approach you are taking here is that for pages explicitly mapped
> > > by some libxc caller, you do the flush on unmap.  But what about
> > > callers who don't unmap ?  Are there callers which don't unmap and
> > > which instead are relying on memory coherency assumptions which aren't
> > > true on arm ?
> > 
> > Callers which don't unmap would be leaking mappings and therefore buggy
> > in a long running toolstack.
> 
> What I mean is that they might map the guest pages, and expect to
> exchange data with the guest through the pages while they were still
> mapped ...
> 
> > Also after the initial start of day period we require that the guest
> > enable its caches.
> 
> ... but before the guest enables caching.

We basically rule that out in the ABI requirements.

The cases where this would happen are things like xenstore and
xenconsole but those are driven from the guest end, which requires the
caches to be on.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:13:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAicP-0002JC-JH; Tue, 04 Feb 2014 16:13:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAicN-0002Ix-56
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:12:59 +0000
Received: from [85.158.139.211:3189] by server-3.bemta-5.messagelabs.com id
	7D/34-13671-A8111F25; Tue, 04 Feb 2014 16:12:58 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391530375!1624436!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 447 invoked from network); 4 Feb 2014 16:12:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:12:57 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14GCU5a025792
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 16:12:30 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s14GCTFR025940
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 16:12:29 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GCSZ2006211; Tue, 4 Feb 2014 16:12:29 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 08:12:28 -0800
Message-ID: <52F111B4.8030805@oracle.com>
Date: Tue, 04 Feb 2014 11:13:40 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DDBE0200007800118F62@nat28.tlf.novell.com>
	<52F10D0D.2050908@oracle.com>
	<52F11CDF02000078001191EB@nat28.tlf.novell.com>
In-Reply-To: <52F11CDF02000078001191EB@nat28.tlf.novell.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 11:01 AM, Jan Beulich wrote:
>>>> On 04.02.14 at 16:53, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 02/04/2014 06:31 AM, Jan Beulich wrote:
>>>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>>> +            gregs->cs = cs;
>>> And now you store a NUL selector (i.e. just the RPL bits) into the
>>> output field?
>>>>            }
>>>> -        else if ( !is_control_domain(current->domain) &&
>>>> -                 !is_idle_vcpu(current) )
>>>> +        else
>>>>            {
>>>> -            /* PV guest */
>>>> +            /* HVM guest */
>>>> +            struct segment_register cs;
>>>> +
>>>>                gregs = guest_cpu_user_regs();
>>>>                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>>>                       gregs, sizeof(struct cpu_user_regs));
>>>> +
>>>> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
>>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>>> +            gregs->cs = cs.attr.fields.dpl;
>>> And here too? If that's intended, a code comment is a must.
>> This is HVM-only path, PVH or PV don't go here so cs should be valid.
> Isn't the reply of mine a few lines up in PV code? And why would
> the selector being wrong for HVM be okay?


This clause is for privileged profiling: we are in the PV clause even
though the interrupt is taken by an HVM guest.

The diff is somewhat difficult to follow, so here is the flow:

    // in privileged mode 'v' is dom0's CPU.
    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
    {
         if ( !is_hvm_domain(current->domain) )
         {
              // either PV (including dom0) or Xen is interrupted
         }
         else
         {
              // This is the clause we are discussing. 'current' is HVM
              hvm_get_segment_register(current, x86_seg_cs, &cs);
         }
         send_guest_vcpu_virq(v, VIRQ_XENPMU);
         return 1;
    }

So I think CS should be correct for the guest, no?


-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:13:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAicR-0002JR-PO; Tue, 04 Feb 2014 16:13:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAicN-0002Iy-AW
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:12:59 +0000
Received: from [193.109.254.147:58721] by server-7.bemta-14.messagelabs.com id
	E7/7A-23424-A8111F25; Tue, 04 Feb 2014 16:12:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391530376!1975904!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3408 invoked from network); 4 Feb 2014 16:12:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:12:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97813970"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 16:12:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	11:12:55 -0500
Message-ID: <1391530374.6497.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <ehouby@yahoo.com>
Date: Tue, 4 Feb 2014 16:12:54 +0000
In-Reply-To: <1391527619.2441.16.camel@astar.houby.net>
References: <1391527619.2441.16.camel@astar.houby.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen@lists.fedoraproject.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 08:26 -0700, Eric Houby wrote:
> Xen list,
> 
> I have a clean F20 install with the RC3 RPMs and see an error when
> starting a VM.  A similar issue was seen with the RC2 RPMs.

Thanks for the report.

> [root@xen ~]# xl create f20.xl
> Parsing config from f20.xl
> libxl: error: libxl_create.c:1054:domcreate_launch_dm: unable to add
> disk devices
> libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device
> model pid in /local/domain/2/image/device-model-pid
> libxl: error: libxl.c:1425:libxl__destroy_domid:
> libxl__destroy_device_model failed for 2
> libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable
> to get my domid
> libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy
> failed for 2
> [root@xen ~]# 
> [root@xen ~]# /usr/bin/xenstore-write "/local/domain/0/domid" 0
> [root@xen ~]# 
> [root@xen ~]# xl create f20.xl
> Parsing config from f20.xl
> [root@xen ~]# 
> [root@xen ~]# 
> [root@xen ~]# xl list
> Name                                        ID   Mem VCPUs	State	Time(s)
> Domain-0                                     0  2047     2     r-----
> 19.7
> f20                                          3  4095     1     r-----
> 4.1
> 
> 
> After running the xenstore-write command VMs start up without issue.

This is a bug in the Fedora versions of the initscripts (or I suppose
these days they are systemd unit files or whatever). The sysvinit
scripts shipped with Xen itself already include this write.

I see you've already CCd the appropriate Fedora list so I'll leave it to
them to let you know what the end fix should be...

We should probably consider taking some unit files into the xen tree, if
someone wants to submit a set?
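
For anyone putting such a set together, a minimal sketch of what one
unit might look like (the unit name, the ordering on xenstored, and the
tool path are assumptions here, not an existing in-tree file):

```ini
# xen-init-dom0.service (hypothetical name): perform the start-of-day
# xenstore writes -- including /local/domain/0/domid, whose absence
# causes the failure above -- once xenstored is running.
[Unit]
Description=Xen dom0 xenstore initialisation (sketch)
Requires=xenstored.service
After=xenstored.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/xenstore-write /local/domain/0/domid 0

[Install]
WantedBy=multi-user.target
```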

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:16:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:16:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAifr-0002jX-Re; Tue, 04 Feb 2014 16:16:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAifq-0002jM-0V
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:16:34 +0000
Received: from [85.158.139.211:19325] by server-10.bemta-5.messagelabs.com id
	1C/C6-08578-16211F25; Tue, 04 Feb 2014 16:16:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391530591!1629797!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27974 invoked from network); 4 Feb 2014 16:16:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:16:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99719864"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 16:16:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 11:16:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAifm-0005YF-4b;
	Tue, 04 Feb 2014 16:16:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WAifl-0004Tg-To;
	Tue, 04 Feb 2014 16:16:29 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21233.4701.783696.80971@mariner.uk.xensource.com>
Date: Tue, 4 Feb 2014 16:16:29 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391530160.6497.52.camel@kazak.uk.xensource.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<21233.2088.315549.191502@mariner.uk.xensource.com>
	<1391528262.6497.35.camel@kazak.uk.xensource.com>
	<21233.3469.939694.871033@mariner.uk.xensource.com>
	<1391530160.6497.52.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> On Tue, 2014-02-04 at 15:55 +0000, Ian Jackson wrote:
> > What I mean is that they might map the guest pages, and expect to
> > exchange data with the guest through the pages while they were still
> > mapped ...
> > 
> > > Also after the initial start of day period we require that the guest
> > > enable its caches.
> > 
> > ... but before the guest enables caching.
> 
> We basically rule that out in the ABI requirements.
> 
> The cases where this would happen are things like xenstore and
> xenconsole but those are driven from the guest end, which requires the
> caches to be on.

OK, good.  Thanks for the explanation.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH v2 3/4] xen/arm: clean and invalidate all guest caches by VMID after domain build."):
> On Tue, 2014-02-04 at 15:55 +0000, Ian Jackson wrote:
> > What I mean is that they might map the guest pages, and expect to
> > exchange data with the guest through the pages while they were still
> > mapped ...
> > 
> > > Also after the initial start of day period we require that the guest
> > > enable its caches.
> > 
> > ... but before the guest enables caching.
> 
> We basically rule that out in the ABI requirements.
> 
> The cases where this would happen are things like xenstore and
> xenconsole but those are driven from the guest end, which requires the
> caches to be on.

OK, good.  Thanks for the explanation.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:20:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAijY-00032Z-2F; Tue, 04 Feb 2014 16:20:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAijW-00032R-N6
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:20:22 +0000
Received: from [193.109.254.147:4279] by server-10.bemta-14.messagelabs.com id
	5C/50-10711-64311F25; Tue, 04 Feb 2014 16:20:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391530820!1963906!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12424 invoked from network); 4 Feb 2014 16:20:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:20:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="99722642"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 16:20:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 11:20:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAijT-0003op-51;
	Tue, 04 Feb 2014 16:20:19 +0000
Date: Tue, 4 Feb 2014 16:20:09 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

gic_route_irq_to_guest routes all IRQs to
cpumask_of(smp_processor_id()), but in practice it is always called on
cpu0. To avoid confusion and possible issues in case someone modifies
the code and reassigns a particular irq to a cpu other than cpu0,
hardcode cpumask_of(0).

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index e6257a7..8854800 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
 
     level = dt_irq_is_level_triggered(irq);
 
-    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
-                           0xa0);
+    /* TODO: handle routing irqs to cpus != cpu0 */
+    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
 
     retval = __setup_irq(desc, irq->irq, action);
     if (retval) {

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:23:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAimC-0003FL-VQ; Tue, 04 Feb 2014 16:23:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAim5-0003E9-Vf
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:23:06 +0000
Received: from [85.158.137.68:24811] by server-2.bemta-3.messagelabs.com id
	AF/A7-06531-5E311F25; Tue, 04 Feb 2014 16:23:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391530978!13297243!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22239 invoked from network); 4 Feb 2014 16:22:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:22:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97819961"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 16:22:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	11:22:38 -0500
Message-ID: <1391530957.6497.56.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 4 Feb 2014 16:22:37 +0000
In-Reply-To: <20140130173058.GA12133@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: anthony.perard@citrix.com, George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel@lists.xen.org, Ian.Jackson@eu.citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-01-30 at 18:30 +0100, Olaf Hering wrote:

George, any thoughts on:

> > TBH -- if you (==suse I guess?) are contemplating carrying this as a
> > backport even before 4.4 is out the door we should probably be at least
> > considering a freeze exception for 4.4. George CCd for input. (I
> > appreciate that "backport=>freeze exception" is a potentially slippery
> > slope/ripe for abuse...)
> 
> It would make less work for SUSE if this change were incorporated
> into 4.4, and later replaced with the "final" version I sent out today.
> However, it's small and will be easy to port forward to 4.4.X.
> 
> The risk of including such a change is small, as it requires a patched
> qemu which actually does discard (1.7?) and a patched frontend driver
> (pvops 3.15?) before the codepaths it enables are actually executed.
> 
> Olaf
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:23:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAimC-0003F0-EF; Tue, 04 Feb 2014 16:23:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WAim6-0003EA-18
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:23:06 +0000
Received: from [85.158.137.68:24914] by server-16.bemta-3.messagelabs.com id
	03/0C-29917-5E311F25; Tue, 04 Feb 2014 16:23:01 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391530979!13331788!1
X-Originating-IP: [209.85.212.54]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26595 invoked from network); 4 Feb 2014 16:23:00 -0000
Received: from mail-vb0-f54.google.com (HELO mail-vb0-f54.google.com)
	(209.85.212.54)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:23:00 -0000
Received: by mail-vb0-f54.google.com with SMTP id w20so5961480vbb.13
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 08:22:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=WAGHxxZ4Novn0Hroeyb1RW3MQdeFe/usxMtYwU0LZIQ=;
	b=lYrxToG2omR+A7XQk2ckIqVbsin8i1uz9qp2yiAVco7X20KZf8I1dB9nHOAJSizhms
	9OjpTpiWW+6HzjQd2R5HibYwSvHWUZ9rDyqoN1A2fI7zwAFndN04SZNyMj+fV1RATW9t
	kY43GF8hmZ0eIaGnvkl0tmx6VXeYhLilkT6AbFTSz78aNOe1RmIbDHeUwsYNGTjdgjmk
	jNvYgnyq+aNQai776K4hgG2NW3o+nEc64KPh/N3DRNC1eoNqYhbk10sICx+Hq9bc0a5b
	QfbNfgSxEYBS+UD26WkQYGrrlFjao7wEsYpBP5LSMyeKkGZXaO/l612893Av01OlDnCW
	nP1Q==
MIME-Version: 1.0
X-Received: by 10.52.157.8 with SMTP id wi8mr543622vdb.46.1391530978802; Tue,
	04 Feb 2014 08:22:58 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Tue, 4 Feb 2014 08:22:58 -0800 (PST)
In-Reply-To: <52F0BAEC0200007800118E8E@nat28.tlf.novell.com>
References: <CAMnwyJ2NtBDw0Fw4-zPVWm4Vi96gdL7z8PjV6ke=esbjXKFwew@mail.gmail.com>
	<52F0BAEC0200007800118E8E@nat28.tlf.novell.com>
Date: Tue, 4 Feb 2014 08:22:58 -0800
Message-ID: <CAMnwyJ03+DKS3y+gY-SE6V+MQBg7nymNqzbcF-mUZ4XRrM=Yzw@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Enabling kdump on SuSE 11 SP3 Xen resulting in error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6309748938392674270=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6309748938392674270==
Content-Type: multipart/alternative; boundary=089e0160c5e4af80e304f1970d92

--089e0160c5e4af80e304f1970d92
Content-Type: text/plain; charset=ISO-8859-1

Ok. Thanks. I'll engage our internal support channel with SuSE. However, I
see these docs on the Novell site:

http://www.novell.com/support/kb/doc.php?id=7001383


   1. Note:  As of 12/01/08 makedumpfile currently only supports the "ELF"
   format and a dumplevel setting of "0".  Check the /etc/sysconfig/kdump file
   and verify KDUMP_DUMPLEVEL="0" and KDUMP_DUMPFORMAT="ELF".


It seems it still holds true with SuSE 11 SP3.

Thanks,
/Saurabh
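
The check the Novell note describes can be sketched as follows. A throwaway
sample file is used here so the sketch is self-contained; on a real SLES
system the file to inspect is /etc/sysconfig/kdump.

```shell
# Sketch: verify the two kdump settings the Novell note asks for.
# A temporary sample config stands in for /etc/sysconfig/kdump.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
KDUMP_DUMPLEVEL="0"
KDUMP_DUMPFORMAT="ELF"
EOF

# Print the two relevant settings; both lines should appear.
grep -E '^KDUMP_(DUMPLEVEL|DUMPFORMAT)=' "$CFG"

rm -f "$CFG"
```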


On Tue, Feb 4, 2014 at 1:03 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 03.02.14 at 23:17, Saurabh Mishra <saurabh.globe@gmail.com> wrote:
> > I'm trying to enable kdump on SuSE 11 SP3 with a Xen configuration on our
> > cards but am having problems.
>
> In general issues like this are not to be dealt with here, but by
> SUSE support.
>
> > If I don't specify KDUMP_DUMPFORMAT=ELF, it does not work and does not
> > try to take vmcore.
> >
> > If I specify KDUMP_DUMPLEVEL=0, then it does work but it takes a very long
> > time. I tried using KDUMP_DUMPLEVEL=30 or 10 but it complained that there
> > is not enough space even though I've about 4.8 GB of space with crashkernel
> > configured as 256M@64M.
>
> If the dump kernel comes up and the dumping takes long (or you
> have other issues with dumping), then this is the wrong forum in
> any case: Xen is no longer involved in this operation (the dump
> kernel runs on bare hardware).
>
> > Here's output from /proc/cmdline :-
> >
> > lc-1:~ # cat /proc/cmdline
> > crashkernel=256M@64M root=/dev/sda5 earlyprintk=serial,ttyS0,115200n8
> > resume=/dev/sda5 splash=silent showopts  console=ttyS0,115200n8
> >
> > lc-1:~ # grep crashkernel /boot/efi/efi/SuSE/xen.cfg
> > kernel=vmlinuz-3.0.93-0.8-xen  crashkernel=256M@64M root=/dev/sda5
> > earlyprintk=serial,ttyS0,115200n8 resume=/dev/sda5 splash=silent showopts
> >  console=ttyS0,115200n8
> > options=console=com1 com1=115200 dom0_mem=8192m iommu=1,sharept
> > extra_guest_irqs=80 reboot=efi crashkernel=256M@64M
>
> Now this makes things really interesting: Dumping in a UEFI
> environment is generally unsupported (this is actually being worked
> on upstream afaik). Whether the dump kernel can come up properly
> at all depends on the specific characteristics of your firmware
> implementation. It could in particular be that the dump kernel has
> no way of finding ACPI tables, and hence setup of interrupts and
> the like may not work as expected/needed.
>
> Jan
>
>

--089e0160c5e4af80e304f1970d92--


--===============6309748938392674270==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6309748938392674270==--


From xen-devel-bounces@lists.xen.org Tue Feb 04 16:23:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAimC-0003F0-EF; Tue, 04 Feb 2014 16:23:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WAim6-0003EA-18
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:23:06 +0000
Received: from [85.158.137.68:24914] by server-16.bemta-3.messagelabs.com id
	03/0C-29917-5E311F25; Tue, 04 Feb 2014 16:23:01 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391530979!13331788!1
X-Originating-IP: [209.85.212.54]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26595 invoked from network); 4 Feb 2014 16:23:00 -0000
Received: from mail-vb0-f54.google.com (HELO mail-vb0-f54.google.com)
	(209.85.212.54)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:23:00 -0000
Received: by mail-vb0-f54.google.com with SMTP id w20so5961480vbb.13
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 08:22:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=WAGHxxZ4Novn0Hroeyb1RW3MQdeFe/usxMtYwU0LZIQ=;
	b=lYrxToG2omR+A7XQk2ckIqVbsin8i1uz9qp2yiAVco7X20KZf8I1dB9nHOAJSizhms
	9OjpTpiWW+6HzjQd2R5HibYwSvHWUZ9rDyqoN1A2fI7zwAFndN04SZNyMj+fV1RATW9t
	kY43GF8hmZ0eIaGnvkl0tmx6VXeYhLilkT6AbFTSz78aNOe1RmIbDHeUwsYNGTjdgjmk
	jNvYgnyq+aNQai776K4hgG2NW3o+nEc64KPh/N3DRNC1eoNqYhbk10sICx+Hq9bc0a5b
	QfbNfgSxEYBS+UD26WkQYGrrlFjao7wEsYpBP5LSMyeKkGZXaO/l612893Av01OlDnCW
	nP1Q==
MIME-Version: 1.0
X-Received: by 10.52.157.8 with SMTP id wi8mr543622vdb.46.1391530978802; Tue,
	04 Feb 2014 08:22:58 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Tue, 4 Feb 2014 08:22:58 -0800 (PST)
In-Reply-To: <52F0BAEC0200007800118E8E@nat28.tlf.novell.com>
References: <CAMnwyJ2NtBDw0Fw4-zPVWm4Vi96gdL7z8PjV6ke=esbjXKFwew@mail.gmail.com>
	<52F0BAEC0200007800118E8E@nat28.tlf.novell.com>
Date: Tue, 4 Feb 2014 08:22:58 -0800
Message-ID: <CAMnwyJ03+DKS3y+gY-SE6V+MQBg7nymNqzbcF-mUZ4XRrM=Yzw@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Enabling kdump on SuSE 11 SP3 Xen resulting in error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6309748938392674270=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6309748938392674270==
Content-Type: multipart/alternative; boundary=089e0160c5e4af80e304f1970d92

--089e0160c5e4af80e304f1970d92
Content-Type: text/plain; charset=ISO-8859-1

Ok, thanks. I'll engage our internal support channel with SuSE. However, I
see this doc on the Novell site:

http://www.novell.com/support/kb/doc.php?id=7001383


   1. Note:  As of 12/01/08 makedumpfile currently only supports the "ELF"
   format and a dumplevel setting of "0".  Check /etc/sysconfig/kdump file
   and verify KDUMP_DUMPLEVEL="0" and KDUMP_DUMPFORMAT="ELF".


It seems it still holds true with SuSE 11 SP3.
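[For reference, the settings the KB article asks for would amount to the
following in /etc/sysconfig/kdump -- a sketch based on the quoted article,
not verified against SP3:]

```shell
# /etc/sysconfig/kdump -- values per the Novell KB article quoted above.
# makedumpfile (as of that article) only supports the ELF format and
# dump level 0, i.e. a full, unfiltered vmcore (hence the long dump time).
KDUMP_DUMPLEVEL="0"
KDUMP_DUMPFORMAT="ELF"
```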

Thanks,
/Saurabh


On Tue, Feb 4, 2014 at 1:03 AM, Jan Beulich <JBeulich@suse.com> wrote:

> >>> On 03.02.14 at 23:17, Saurabh Mishra <saurabh.globe@gmail.com> wrote:
> > I'm trying to enable kdump on SuSE 11 SP3 with Xen configuration on our
> > cards but am having problems.
>
> In general issues like this are not to be dealt with here, but by
> SUSE support.
>
> > If I don't specify KDUMP_DUMPFORMAT=ELF, it does not work and does not try
> > to take vmcore.
> >
> > If I specify KDUMP_DUMPLEVEL=0, then it does work but it takes a very long
> > time. I tried using KDUMP_DUMPLEVEL=30 or 10 but it complained that there is
> > not much space even though I've about 4.8 GB space with crashkernel
> > configured as 256M@64M.
>
> If the dump kernel comes up and the dumping takes long (or you
> have other issues with dumping), then this is the wrong forum in
> any case: Xen is no longer involved in this operation (the dump
> kernel runs on bare hardware).
>
> > Here's output from /proc/cmdline :-
> >
> > lc-1:~ # cat /proc/cmdline
> > crashkernel=256M@64M root=/dev/sda5 earlyprintk=serial,ttyS0,115200n8
> > resume=/dev/sda5 splash=silent showopts  console=ttyS0,115200n8
> >
> > lc-1:~ # grep crashkernel /boot/efi/efi/SuSE/xen.cfg
> > kernel=vmlinuz-3.0.93-0.8-xen  crashkernel=256M@64M root=/dev/sda5
> > earlyprintk=serial,ttyS0,115200n8 resume=/dev/sda5 splash=silent showopts
> >  console=ttyS0,115200n8
> > options=console=com1 com1=115200 dom0_mem=8192m iommu=1,sharept
> > extra_guest_irqs=80 reboot=efi crashkernel=256M@64M
>
> Now this makes things really interesting: Dumping in a UEFI
> environment is generally unsupported (this is actually being worked
> on upstream afaik). Whether the dump kernel can come up properly
> at all depends on the specific characteristics of your firmware
> implementation. It could in particular be that the dump kernel has
> no way of finding ACPI tables, and hence setup of interrupts and
> the like may not work as expected/needed.
>
> Jan
>
>

--089e0160c5e4af80e304f1970d92--


--===============6309748938392674270==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6309748938392674270==--


From xen-devel-bounces@lists.xen.org Tue Feb 04 16:29:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAis8-0003mQ-Q2; Tue, 04 Feb 2014 16:29:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WAis7-0003mL-Ho
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 16:29:15 +0000
Received: from [193.109.254.147:6508] by server-8.bemta-14.messagelabs.com id
	4C/61-18529-A5511F25; Tue, 04 Feb 2014 16:29:14 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391531353!1970763!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2543 invoked from network); 4 Feb 2014 16:29:14 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-7.tower-27.messagelabs.com with SMTP;
	4 Feb 2014 16:29:14 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s14GS8mj022775
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 11:28:08 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-34.ams2.redhat.com
	[10.36.112.34])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s14GS5eQ032643; Tue, 4 Feb 2014 11:28:06 -0500
Message-ID: <52F11515.7000102@redhat.com>
Date: Tue, 04 Feb 2014 17:28:05 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
	<52D65856.6050901@redhat.com>
	<alpine.DEB.2.02.1402031159380.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402031159380.4373@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	qemu-devel@nongnu.org, Don Slutz <dslutz@verizon.com>,
	1257099@bugs.launchpad.net, Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 03/02/2014 12:59, Stefano Stabellini ha scritto:
>> I'm applying this to a "configure" branch on my github repository.  Thanks!
>
> Paolo, did this patch ever make it upstream? If so, do you have a commit
> id?

It's still in my branch, where it is commit fcfd805b.  As soon as I get 
a go/no-go for the modules patches I'll post it.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAitM-0003rD-By; Tue, 04 Feb 2014 16:30:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAitK-0003r0-Tc
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:30:31 +0000
Received: from [85.158.137.68:17969] by server-12.bemta-3.messagelabs.com id
	64/65-01674-6A511F25; Tue, 04 Feb 2014 16:30:30 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391531427!9692218!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12144 invoked from network); 4 Feb 2014 16:30:29 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:30:29 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14GUK8Q031450
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 16:30:21 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GUJ4C020643
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 16:30:19 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GUJVQ020632; Tue, 4 Feb 2014 16:30:19 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 08:30:18 -0800
Message-ID: <52F115E3.3050109@oracle.com>
Date: Tue, 04 Feb 2014 11:31:31 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-16-git-send-email-boris.ostrovsky@oracle.com>
	<52F0E1BB0200007800118F8E@nat28.tlf.novell.com>
In-Reply-To: <52F0E1BB0200007800118F8E@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 15/17] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:48 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:09, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> Add support for using NMIs as PMU interrupts.
>>
>> Most of the processing is still performed by vpmu_do_interrupt(). However,
>> since certain operations are not NMI-safe we defer them to a softint that
>> vpmu_do_interrupt() will schedule:
>> * For PV guests that would be send_guest_vcpu_virq() and
>> hvm_get_segment_register().
> Makes no sense - why would hvm_get_segment_register() be of any
> relevance to PV guests?

Poorly written explanation. What I meant here is that if we are in 
privileged profiling mode and the interrupted guest is an HVM one then 
we'll need to get CS for that guest, not for the guest doing profiling 
(i.e. dom0). I'll rewrite this.

>
> And then I'm still missing a reasonable level of analysis that the
> previously non-NMI-only interrupt handler is now safe to use in NMI
> context.

How about this?

With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and 
vlapic accesses for HVM moved to a softint, the only routines/macros that 
vpmu_do_interrupt() calls in NMI mode are:
* memcpy()
* querying domain type (is_XX_domain())
* guest_cpu_user_regs()
* XLAT_cpu_user_regs()
* raise_softirq()
* vcpu_vpmu()
* vpmu_ops->arch_vpmu_save()
* vpmu_ops->do_interrupt() (in the future for PVH support)

The latter two can only access PMU MSRs.
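[The split described above can be sketched as follows -- illustrative C
only, not actual Xen code, with invented names such as vpmu_nmi_handler:
the NMI handler restricts itself to NMI-safe work (capturing state,
flagging deferred work), and the non-NMI-safe calls run later from
softirq context.]

```c
#include <assert.h>

/* State handed from NMI context to softirq context. */
struct pmu_snapshot {
    unsigned long counter;  /* captured PMU state (memcpy-style copy) */
    int pending;            /* deferred-work flag */
};

static struct pmu_snapshot snap;
static int virq_sent;       /* stands in for the guest-visible side effect */

/* NMI context: only plain memory writes and flag raising, nothing that
 * could take a lock or re-enter non-reentrant code. */
static void vpmu_nmi_handler(unsigned long hw_counter)
{
    snap.counter = hw_counter;
    snap.pending = 1;       /* equivalent of raise_softirq() */
}

/* Softirq context: safe place for calls like send_guest_vcpu_virq()
 * or hvm_get_segment_register(), modelled here by setting virq_sent. */
static void vpmu_softirq_handler(void)
{
    if (snap.pending) {
        virq_sent = 1;
        snap.pending = 0;
    }
}
```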

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:31:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAiue-0003yA-VV; Tue, 04 Feb 2014 16:31:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WAiue-0003y4-Ad
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 16:31:52 +0000
Received: from [193.109.254.147:9647] by server-14.bemta-14.messagelabs.com id
	28/6D-29228-7F511F25; Tue, 04 Feb 2014 16:31:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391531509!1975048!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24417 invoked from network); 4 Feb 2014 16:31:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:31:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,780,1384300800"; d="scan'208";a="97825759"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 16:31:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 11:31:48 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WAiuZ-000415-U7;
	Tue, 04 Feb 2014 16:31:47 +0000
Date: Tue, 4 Feb 2014 16:31:37 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <52F11515.7000102@redhat.com>
Message-ID: <alpine.DEB.2.02.1402041630440.4373@kaball.uk.xensource.com>
References: <1388715166-5868-1-git-send-email-dslutz@verizon.com>
	<52D65856.6050901@redhat.com>
	<alpine.DEB.2.02.1402031159380.4373@kaball.uk.xensource.com>
	<52F11515.7000102@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	1257099@bugs.launchpad.net, Don Slutz <dslutz@verizon.com>,
	qemu-devel@nongnu.org, Richard Henderson <rth@twiddle.net>
Subject: Re: [Xen-devel] [BUGFIX][PATCH v2] configure: Disable libtool if
 -fPIE does not work with it (bug #1257099)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014, Paolo Bonzini wrote:
> Il 03/02/2014 12:59, Stefano Stabellini ha scritto:
> > > I'm applying this to a "configure" branch on my github repository.
> > > Thanks!
> > 
> > Paolo, did this patch ever make it upstream? If so, do you have a commit
> > id?
> 
> It's still in my branch, where it is commit fcfd805b.  As soon as I get a
> go/no-go for the modules patches I'll post it.
 
In that case I might have to apply the patch to the qemu-xen tree for
the Xen 4.4 release before the commit reaches QEMU upstream.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:32:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:32:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAivU-00043f-EE; Tue, 04 Feb 2014 16:32:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAivT-00043Q-7B
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:32:43 +0000
Received: from [85.158.139.211:7624] by server-6.bemta-5.messagelabs.com id
	2B/37-14342-A2611F25; Tue, 04 Feb 2014 16:32:42 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391531561!1607997!1
X-Originating-IP: [74.125.82.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23217 invoked from network); 4 Feb 2014 16:32:41 -0000
Received: from mail-we0-f177.google.com (HELO mail-we0-f177.google.com)
	(74.125.82.177)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 16:32:41 -0000
Received: by mail-we0-f177.google.com with SMTP id t61so4330679wes.22
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 08:32:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=tQy76k6NWda2ihJuox2RF7rfhSEpX/4/Orch191T8/o=;
	b=ORZfyedgJ/9SDHl9/diYSOaU22YhveHCfsIgUL412/VlUdg7sis1Qh2/IxVHaRaUA9
	NnXU03mbmpsKpa6l/0sYEzz22fgp/7JhhMoDIKPpkA9P5TPaapwpcAsQice7cm1KhmUC
	oa9E3TOBUIy5OjJkhj1UT1+7+XtTr2XkrW8mnA6U0OCXmjI0mx4oR9DC6SIkteRtTT2y
	6dxd8auxJzVR0SmOOWIbvKwlusUYPYA2nPRShI2fXE/Sy1ftLyv5qHm7DwaUNwhIU7CJ
	o9s/S/2UEdNZ8NXpB5ZQ5PYBmARt/IijPvXDfcDrYJbf045qN5REXHM/qI/Tn51559GL
	PD+Q==
X-Gm-Message-State: ALoCoQmSNfNHfAci9fWOq8VFQYj9ml9J+vL9d3xJh+jmRyYCapxvbY1bVBH91VPUugerAeo51nMM
X-Received: by 10.180.95.162 with SMTP id dl2mr13464964wib.17.1391531561408;
	Tue, 04 Feb 2014 08:32:41 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	bj3sm53961750wjb.14.2014.02.04.08.32.40 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 08:32:40 -0800 (PST)
Message-ID: <52F11627.4020005@linaro.org>
Date: Tue, 04 Feb 2014 16:32:39 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
> gic_route_irq_to_guest routes all IRQs to
> cpumask_of(smp_processor_id()), but actually it is always called on cpu0.
> To avoid confusion and possible issues in case someone modified the code
> and reassigned a particular irq to a cpu other than cpu0, hardcode
> cpumask_of(0).
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..8854800 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>  
>      level = dt_irq_is_level_triggered(irq);
>  
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    /* TODO: handle routing irqs to cpus != cpu0 */
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>  
>      retval = __setup_irq(desc, irq->irq, action);
>      if (retval) {
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:39:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAj20-0004Wk-Bx; Tue, 04 Feb 2014 16:39:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAj1y-0004Wf-AG
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:39:26 +0000
Received: from [85.158.137.68:64514] by server-5.bemta-3.messagelabs.com id
	D0/7A-04712-DB711F25; Tue, 04 Feb 2014 16:39:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391531964!13375835!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22552 invoked from network); 4 Feb 2014 16:39:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 16:39:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 16:39:23 +0000
Message-Id: <52F125C90200007800119256@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 16:39:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-14-git-send-email-boris.ostrovsky@oracle.com>
	<52F0DDBE0200007800118F62@nat28.tlf.novell.com>
	<52F10D0D.2050908@oracle.com>
	<52F11CDF02000078001191EB@nat28.tlf.novell.com>
	<52F111B4.8030805@oracle.com>
In-Reply-To: <52F111B4.8030805@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 17:13, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/04/2014 11:01 AM, Jan Beulich wrote:
>>>>> On 04.02.14 at 16:53, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> On 02/04/2014 06:31 AM, Jan Beulich wrote:
>>>>>>> On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>>>> +            gregs->cs = cs;
>>>> And now you store a NUL selector (i.e. just the RPL bits) into the
>>>> output field?
>>>>>            }
>>>>> -        else if ( !is_control_domain(current->domain) &&
>>>>> -                 !is_idle_vcpu(current) )
>>>>> +        else
>>>>>            {
>>>>> -            /* PV guest */
>>>>> +            /* HVM guest */
>>>>> +            struct segment_register cs;
>>>>> +
>>>>>                gregs = guest_cpu_user_regs();
>>>>>                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
>>>>>                       gregs, sizeof(struct cpu_user_regs));
>>>>> +
>>>>> +            hvm_get_segment_register(current, x86_seg_cs, &cs);
>>>>> +            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
>>>>> +            gregs->cs = cs.attr.fields.dpl;
>>>> And here too? If that's intended, a code comment is a must.
>>> This is HVM-only path, PVH or PV don't go here so cs should be valid.
>> Isn't the reply of mine a few lines up in PV code? And why would
>> the selector being wrong for HVM be okay?
> 
> 
> This clause is for privileged profiling: we are in PV clause even though 
> the interrupt is taken by an HVM guest.
> 
> The diff is somewhat difficult to follow so here is the flow:
> 
>     // in privileged mode 'v' is dom0's CPU.
>     if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
>     {
>          if ( !is_hvm_domain(current->domain) )
>          {
>               // either PV (including dom0) or Xen is interrupted
>          }
>          else
>          {
>               // This is the clause we are discussing. 'current' is HVM
>               hvm_get_segment_register(current, x86_seg_cs, &cs);
>          }
>          send_guest_vcpu_virq(v, VIRQ_XENPMU);
>          return 1;
>     }
> 
> So I think CS should be correct for the guest, no?

Honestly - I can't tell. All I can tell is that there's a bogus setting
of cs to 0 or 3 (depending on whether in kernel mode) or to the
dpl field of the descriptor read from the hardware. All of which
are wrong without a clear comment stating why doing it this way
is okay/acceptable/necessary-for-the-time-being.

So either drop this bogus code, or comment it properly.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:41:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:41:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAj4H-0004gT-H8; Tue, 04 Feb 2014 16:41:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAj4G-0004gK-DA
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:41:48 +0000
Received: from [85.158.137.68:41295] by server-3.bemta-3.messagelabs.com id
	6D/88-14520-B4811F25; Tue, 04 Feb 2014 16:41:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391532106!13376344!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1295 invoked from network); 4 Feb 2014 16:41:47 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 16:41:47 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 16:41:46 +0000
Message-Id: <52F126570200007800119259@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 16:41:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-16-git-send-email-boris.ostrovsky@oracle.com>
	<52F0E1BB0200007800118F8E@nat28.tlf.novell.com>
	<52F115E3.3050109@oracle.com>
In-Reply-To: <52F115E3.3050109@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 15/17] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 17:31, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/04/2014 06:48 AM, Jan Beulich wrote:
>> And then I'm still missing a reasonable level of analysis that the
>> previously non-NMI-only interrupt handler is now safe to use in NMI
>> context.
> 
> How about this?

Looks okay, except ...

> With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and 
> vlapic accesses for HVM moved to softirq, the only routines/macros that 
> vpmu_do_interrupt() calls in NMI mode are:
> * memcpy()
> * querying domain type (is_XX_domain())
> * guest_cpu_user_regs()
> * XLAT_cpu_user_regs()
> * raise_softirq()
> * vcpu_vpmu()
> * vpmu_ops->arch_vpmu_save()
> * vpmu_ops->do_interrupt() (in the future for PVH support)
> 
> The latter two can only access PMU MSRs.

... that this additionally needs to exclude things like
{rd,wr}msr_safe() (i.e. stuff raising exceptions that normally
get recovered from).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 17:31, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/04/2014 06:48 AM, Jan Beulich wrote:
>> And then I'm still missing a reasonable level of analysis that the
>> previously non-NMI-only interrupt handler is now safe to use in NMI
>> context.
> 
> How about this?

Looks okay, except ...

> With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and 
> vlapic accesses for HVM moved to softirq, the only routines/macros that 
> vpmu_do_interrupt() calls in NMI mode are:
> * memcpy()
> * querying domain type (is_XX_domain())
> * guest_cpu_user_regs()
> * XLAT_cpu_user_regs()
> * raise_softirq()
> * vcpu_vpmu()
> * vpmu_ops->arch_vpmu_save()
> * vpmu_ops->do_interrupt() (in the future for PVH support)
> 
> The latter two can only access PMU MSRs.

... that this additionally needs to exclude things like
{rd,wr}msr_safe() (i.e. stuff raising exceptions that normally
get recovered from).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:43:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAj5V-0004pt-0L; Tue, 04 Feb 2014 16:43:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WAj5S-0004pe-2o
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:43:03 +0000
Received: from [85.158.137.68:45412] by server-11.bemta-3.messagelabs.com id
	EF/9D-04255-59811F25; Tue, 04 Feb 2014 16:43:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391532180!12533335!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24969 invoked from network); 4 Feb 2014 16:43:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 4 Feb 2014 16:43:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 04 Feb 2014 16:43:00 +0000
Message-Id: <52F126A1020000780011927A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 04 Feb 2014 16:42:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1391523701.5635.6.camel@kazak.uk.xensource.com>
	<1391523745-21139-3-git-send-email-ian.campbell@citrix.com>
	<52F10E8202000078001190A7@nat28.tlf.novell.com>
	<1391528580.6497.36.camel@kazak.uk.xensource.com>
	<52F119F50200007800119175@nat28.tlf.novell.com>
	<1391530076.6497.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1391530076.6497.50.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 04.02.14 at 17:07, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-04 at 15:48 +0000, Jan Beulich wrote:
>> >>> On 04.02.14 at 16:43, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > sync_page is a much better suggestion. Perhaps even
>> > sync_page_to/from_cache, though I suppose that might actually be more
>> > confusing.
>> 
>> Hmm, on one hand it's even more precise, but otoh - what would a
>> sync_page_to_cache() look like?
> 
> That suggests moving something into the cache,

My problem is that I don't see how one would do this in a way
that _guarantees_ that at the end the data is in cache.

> where this function is
> actually pushing something out of the cache and into RAM.
> 
> sync_page_to_ram then? I think that is better than _from_cache.

Fine with me.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:43:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAj63-0004xW-MD; Tue, 04 Feb 2014 16:43:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAj62-0004xF-BJ
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:43:38 +0000
Received: from [85.158.139.211:54594] by server-4.bemta-5.messagelabs.com id
	D5/1E-08092-9B811F25; Tue, 04 Feb 2014 16:43:37 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391532215!1611086!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23125 invoked from network); 4 Feb 2014 16:43:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:43:36 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14GhTIX017661
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 16:43:30 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GhSD6027922
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 16:43:28 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GhRoX010050; Tue, 4 Feb 2014 16:43:27 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 08:43:27 -0800
Message-ID: <52F118F8.6040202@oracle.com>
Date: Tue, 04 Feb 2014 11:44:40 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-17-git-send-email-boris.ostrovsky@oracle.com>
	<52F0E2570200007800118FA5@nat28.tlf.novell.com>
In-Reply-To: <52F0E2570200007800118FA5@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 16/17] x86/VPMU: Support for PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:51 AM, Jan Beulich wrote:
>>>> On 21.01.14 at 20:09, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> +        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) )
>> +            if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
>> +                return 0;
> Please fold chained if()s like this one.
>
>> @@ -237,7 +242,7 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
>>               else if ( !is_control_domain(current->domain) &&
>>                         !is_idle_vcpu(current) )
>>               {
>> -                /* PV guest */
>> +                /* PV(H) guest */
> I would have expected PVH guests to use the HVM paths here, not
> the PV ones. Can you clarify why you do it the other way around?

I could go either way, but because PVH uses event channels for interrupts 
I went with the PV path. To use the HVM route I'd need to send NMIs to 
the guest, and that is currently not quite working.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAj6f-00054h-3y; Tue, 04 Feb 2014 16:44:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAj6c-00054G-W7
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 16:44:15 +0000
Received: from [193.109.254.147:62494] by server-5.bemta-14.messagelabs.com id
	DC/9A-16688-ED811F25; Tue, 04 Feb 2014 16:44:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391532251!1978323!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30710 invoked from network); 4 Feb 2014 16:44:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:44:12 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14Gh2gH017158
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 16:43:03 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Gh0Db026778
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 16:43:01 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Gh0Pm008951; Tue, 4 Feb 2014 16:43:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 08:42:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7CCBA1C0972; Tue,  4 Feb 2014 11:42:58 -0500 (EST)
Date: Tue, 4 Feb 2014 11:42:58 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140204164258.GB7443@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="oyUTqETQ0mS9luUI"
Content-Disposition: inline
In-Reply-To: <52F119780200007800119172@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: george.dunlap@eu.citrix.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
> >>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
> >> Wasn't it that Mukesh's patch simply was yours with the two
> >> get_ioreq()s folded by using a local variable?
> > 
> > Yes. As so
> 
> Thanks. Except that ...
> 
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
> >      struct vcpu *v = current;
> >      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> >      struct cpu_user_regs *regs = guest_cpu_user_regs();
> > -
> > +    ioreq_t *p = get_ioreq(v);
> 
> ... you don't want to drop the blank line, and naming the new
> variable "ioreq" would seem preferable.
> 
> >      /*
> >       * a pending IO emualtion may still no finished. In this case,
> >       * no virtual vmswith is allowed. Or else, the following IO
> >       * emulation will handled in a wrong VCPU context.
> >       */
> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> > +    if ( p && p->state != STATE_IOREQ_NONE )
> 
> And, as said before, I'd think "!p ||" instead of "p &&" would be
> the right thing here. Yang, Jun?

I have two patches - the simpler one, which is pretty straightforward,
and the one you suggested. Either one fixes PVH guests. I also did
bootup tests with HVM guests to make sure they worked.

Attached and inline.


>From 47a5554201f0bc1778e5bcbde8c39088707f727f Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With the PVH
enabled that is no longer the case - which means that we do not have
to have the IO-backend device (QEMU) enabled.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an io based backend.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..563b02f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
      * a pending IO emualtion may still no finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6



>From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With the PVH
enabled that is no longer the case - which means that we do not have
to have the IO-backend device (QEMU) enabled.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an io based backend. In the case that the
PVH guest does run an HVM guest inside it - we need to do
further work to support this - and for now the check will
bail us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6

> 
> Jan
> 

--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pvh-Fix-regression-caused-by-assumption-that-HVM-pat.patch"

>From 47a5554201f0bc1778e5bcbde8c39088707f727f Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..563b02f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
      * a pending IO emualtion may still no finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pvh-Fix-regression-due-to-assumption-that-HVM-paths-.patch"

>From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend. In the case that the
PVH guest does run an HVM guest inside it, we need to do
further work to support this, and for now the check will
bail us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--oyUTqETQ0mS9luUI--


From xen-devel-bounces@lists.xen.org Tue Feb 04 16:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAj6f-00054h-3y; Tue, 04 Feb 2014 16:44:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAj6c-00054G-W7
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 16:44:15 +0000
Received: from [193.109.254.147:62494] by server-5.bemta-14.messagelabs.com id
	DC/9A-16688-ED811F25; Tue, 04 Feb 2014 16:44:14 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391532251!1978323!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30710 invoked from network); 4 Feb 2014 16:44:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:44:12 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14Gh2gH017158
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 16:43:03 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Gh0Db026778
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 16:43:01 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14Gh0Pm008951; Tue, 4 Feb 2014 16:43:00 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 08:42:59 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7CCBA1C0972; Tue,  4 Feb 2014 11:42:58 -0500 (EST)
Date: Tue, 4 Feb 2014 11:42:58 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140204164258.GB7443@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="oyUTqETQ0mS9luUI"
Content-Disposition: inline
In-Reply-To: <52F119780200007800119172@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: george.dunlap@eu.citrix.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
> >>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
> >> Wasn't it that Mukesh's patch simply was yours with the two
> >> get_ioreq()s folded by using a local variable?
> > 
> > Yes. As so
> 
> Thanks. Except that ...
> 
> > --- a/xen/arch/x86/hvm/vmx/vvmx.c
> > +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> > @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
> >      struct vcpu *v = current;
> >      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> >      struct cpu_user_regs *regs = guest_cpu_user_regs();
> > -
> > +    ioreq_t *p = get_ioreq(v);
> 
> ... you don't want to drop the blank line, and naming the new
> variable "ioreq" would seem preferable.
> 
> >      /*
> >       * a pending IO emualtion may still no finished. In this case,
> >       * no virtual vmswith is allowed. Or else, the following IO
> >       * emulation will handled in a wrong VCPU context.
> >       */
> > -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> > +    if ( p && p->state != STATE_IOREQ_NONE )
> 
> And, as said before, I'd think "!p ||" instead of "p &&" would be
> the right thing here. Yang, Jun?

I have two patches: the simpler one, which is pretty straightforward,
and the one you suggested. Either one fixes PVH guests. I also did
boot-up tests with HVM guests to make sure they still work.

Attached and inline.


>From 47a5554201f0bc1778e5bcbde8c39088707f727f Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..563b02f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
      * a pending IO emualtion may still no finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6



>From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend. In the case that the
PVH guest does run an HVM guest inside it, we need to do
further work to support this, and for now the check will
bail us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6

> 
> Jan
> 

--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pvh-Fix-regression-caused-by-assumption-that-HVM-pat.patch"

>From 47a5554201f0bc1778e5bcbde8c39088707f727f Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression caused by assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..563b02f 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,14 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
      * a pending IO emualtion may still no finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
      * emulation will handled in a wrong VCPU context.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( ioreq && ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pvh-Fix-regression-due-to-assumption-that-HVM-paths-.patch"

>From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
 use io-backend device.

The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
"Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case, which means the IO-backend
device (QEMU) may not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend. In the case that the
PVH guest does run an HVM guest inside it, we need to do
further work to support this, and for now the check will
bail us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


--oyUTqETQ0mS9luUI
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--oyUTqETQ0mS9luUI--


From xen-devel-bounces@lists.xen.org Tue Feb 04 16:49:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAjBl-0005Zr-Vn; Tue, 04 Feb 2014 16:49:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WAjBk-0005Zk-9v
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:49:32 +0000
Received: from [85.158.143.35:23424] by server-1.bemta-4.messagelabs.com id
	31/2C-31661-B1A11F25; Tue, 04 Feb 2014 16:49:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391532569!3104676!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12726 invoked from network); 4 Feb 2014 16:49:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:49:30 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14GnNEQ013119
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 16:49:24 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GnMIk026269
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 4 Feb 2014 16:49:23 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14GnM6w026243; Tue, 4 Feb 2014 16:49:22 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 08:49:21 -0800
Message-ID: <52F11A55.4020708@oracle.com>
Date: Tue, 04 Feb 2014 11:50:29 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390331342-3967-1-git-send-email-boris.ostrovsky@oracle.com>
	<1390331342-3967-16-git-send-email-boris.ostrovsky@oracle.com>
	<52F0E1BB0200007800118F8E@nat28.tlf.novell.com>
	<52F115E3.3050109@oracle.com>
	<52F126570200007800119259@nat28.tlf.novell.com>
In-Reply-To: <52F126570200007800119259@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, jun.nakajima@intel.com
Subject: Re: [Xen-devel] [PATCH v4 15/17] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 11:41 AM, Jan Beulich wrote:
>>>> On 04.02.14 at 17:31, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 02/04/2014 06:48 AM, Jan Beulich wrote:
>>> And then I'm still missing a reasonable level of analysis that the
>>> previously non-NMI-only interrupt handler is now safe to use in NMI
>>> context.
>> How about this?
> Looks okay, except ...
>
>> With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and
>> vlapic accesses for HVM moved to softirq, the only routines/macros that
>> vpmu_do_interrupt() calls in NMI mode are:
>> * memcpy()
>> * querying domain type (is_XX_domain())
>> * guest_cpu_user_regs()
>> * XLAT_cpu_user_regs()
>> * raise_softirq()
>> * vcpu_vpmu()
>> * vpmu_ops->arch_vpmu_save()
>> * vpmu_ops->do_interrupt() (in the future for PVH support)
>>
>> The latter two can only access PMU MSRs.
> ... that this additionally needs to exclude things like
> {rd,wr}msr_safe() (i.e. stuff raising exceptions that normally
> get recovered from).


I'll add that. And probably a comment in vendor-specific code to remind 
people that these routines need to be NMI-safe.


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 16:57:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 16:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAjIr-0005vH-Ut; Tue, 04 Feb 2014 16:56:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WAjIq-0005vB-Pt
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 16:56:53 +0000
Received: from [85.158.139.211:47439] by server-14.bemta-5.messagelabs.com id
	8D/C6-27598-4DB11F25; Tue, 04 Feb 2014 16:56:52 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391533009!1614401!1
X-Originating-IP: [64.18.0.20]
X-SpamReason: No, hits=1.5 required=7.0 tests=HOT_NASTY,RCVD_BY_IP,
	SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5400 invoked from network); 4 Feb 2014 16:56:51 -0000
Received: from exprod5og110.obsmtp.com (HELO exprod5og110.obsmtp.com)
	(64.18.0.20)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 16:56:51 -0000
Received: from mail-ve0-f173.google.com ([209.85.128.173]) (using TLSv1) by
	exprod5ob110.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUvEb0aayEp7Bv01sRhOTBIg3o7tixduR@postini.com;
	Tue, 04 Feb 2014 08:56:51 PST
Received: by mail-ve0-f173.google.com with SMTP id oz11so6300442veb.32
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 08:56:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=fUd+nbS+Vf6s3UTMXTwZAgkOkhLT1AAdGshuob+MwD8=;
	b=BDI7XFi24RRhddKSwgiV1SZ36GZEi9adLlSWBdPpWUfqGXJmLhI6NYY5m6Uq9gXWbu
	91Ut4TjCyGQO5YNrYu65Ga5QiAAwWkwkDmJwzwPR8Rl4GDvuRlrrsQXD6NroLoJ/vrf2
	Ft8SqtTK5sO2D3xLn1uH/5CzcbkbqDIgJQI1c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=fUd+nbS+Vf6s3UTMXTwZAgkOkhLT1AAdGshuob+MwD8=;
	b=Eh4PJXpC2un4wGEJXz0cdtcauB21InykDDI46aMFfBnH9RQbel2nCjKakSa0Y/kYGe
	pnlFC5c55E7BQHFgjMQIJiF0Us6tiMGEXbGkLS4yRJq2lQhrQX+l72c0SSnSSsi9Nsl5
	k1h8txVrkKeD2FDFV5ut7C4i5fpkDBZ3h2FJmFpTO9Y66FUHWnYt3rOfJSxQZ4buk/+S
	Bkf0ts02pTB89FAyTpQ8JDHMC5vbEp4mt3bPoX3ql/tHVf6tF+XEs7Fg4+f7JjpZZwVe
	l4YSdhplivSk9zLi/w7+3HEl0qzDiKevkjWf8i0hvx52X3sSN/zQH8w78MmQNHpyAGb2
	JJpQ==
X-Gm-Message-State: ALoCoQmAJ59aMIEQprvCQgsgJfOMQrGs8E/pMVpK1OVNLZCZ55us/wnV+3dyEnXhJPNHmDjB0scQuXSsUmHWx5BxQqQWEnhl+92zgT/EZXDml/GgntOoDszRbDZVpvybjZihHIaBbjxA8SzZmU4mZRT3txNFAOmSAiDhDWKQcU70MPJyRlwujoU=
X-Received: by 10.220.97.145 with SMTP id l17mr749587vcn.35.1391533008584;
	Tue, 04 Feb 2014 08:56:48 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.97.145 with SMTP id l17mr749576vcn.35.1391533008426;
	Tue, 04 Feb 2014 08:56:48 -0800 (PST)
Received: by 10.220.146.73 with HTTP; Tue, 4 Feb 2014 08:56:48 -0800 (PST)
In-Reply-To: <52F11627.4020005@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
	<52F11627.4020005@linaro.org>
Date: Tue, 4 Feb 2014 18:56:48 +0200
Message-ID: <CAJEb2DFQ+cwAC1ehMSF+0dn_TN9P0kb_mNQGZcyT_mSrJDPKKw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xen.org,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks. I need this patch.

On Tue, Feb 4, 2014 at 6:32 PM, Julien Grall <julien.grall@linaro.org> wrote:
> On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
>> gic_route_irq_to_guest routes all IRQs to
>> cpumask_of(smp_processor_id()), but actually it is always called on cpu0.
>> To avoid confusion and possible issues in case someone modified the code
>> and reassigned a particular irq to a cpu other than cpu0, hardcode
>> cpumask_of(0).
>>
>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
>
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index e6257a7..8854800 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>>
>>      level = dt_irq_is_level_triggered(irq);
>>
>> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
>> -                           0xa0);
>> +    /* TODO: handle routing irqs to cpus != cpu0 */
>> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>>
>>      retval = __setup_irq(desc, irq->irq, action);
>>      if (retval) {
>>
>
>
> --
> Julien Grall



-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 17:18:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 17:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAjcu-0006vG-E2; Tue, 04 Feb 2014 17:17:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAjcs-0006tZ-Qz
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 17:17:35 +0000
Received: from [85.158.143.35:12500] by server-1.bemta-4.messagelabs.com id
	CB/C5-31661-DA021F25; Tue, 04 Feb 2014 17:17:33 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391534251!3103894!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28781 invoked from network); 4 Feb 2014 17:17:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 17:17:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97852162"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 17:17:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 12:17:00 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAjcK-0004hf-09;
	Tue, 04 Feb 2014 17:17:00 +0000
Message-ID: <52F1208B.6040809@citrix.com>
Date: Tue, 4 Feb 2014 17:16:59 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/01/14 16:25, Andrew Cooper wrote:
> The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
> most part is left alone until success, at which point it is set to 0.
>
> There is a separate 'frc' which for the most part is used to check function
> calls, keeping errors separate from 'rc'.
>
> For a toolstack which sets callbacks->toolstack_restore(), and the function
> returns 0, any subsequent error will end up with code flow going to "out;",
> resulting in the migration being declared a success.
>
> For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
> 'frc', even though their use of 'rc' is currently safe.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>

Ping?

>
> ---
>
> Regarding 4.4: If the two "for consistency" changes to
> xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
> without affecting the bugfix nature of the patch, but I would argue that
> leaving some examples of "rc = function_call()" leaves a bad precedent which
> is likely to lead to similar bugs in the future.
> ---
>  tools/libxc/xc_domain_restore.c |   19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 5ba47d7..817054d 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -2240,9 +2240,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          memcpy(ctx->live_p2m, ctx->p2m, dinfo->p2m_size * sizeof(xen_pfn_t));
>      munmap(ctx->live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
>  
> -    rc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
> -                            console_domid, store_domid);
> -    if (rc != 0)
> +    frc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
> +                             console_domid, store_domid);
> +    if (frc != 0)
>      {
>          ERROR("error seeding grant table");
>          goto out;
> @@ -2257,16 +2257,15 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>      {
>          if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
>          {
> -            rc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
> -                        callbacks->data);
> +            frc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
> +                                               callbacks->data);
>              free(tdata.data);
> -            if ( rc < 0 )
> +            if ( frc < 0 )
>              {
>                  PERROR("error calling toolstack_restore");
>                  goto out;
>              }
>          } else {
> -            rc = -1;
>              ERROR("toolstack data available but no callback provided\n");
>              free(tdata.data);
>              goto out;
> @@ -2326,9 +2325,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          goto out;
>      }
>  
> -    rc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
> -                                console_domid, store_domid);
> -    if (rc != 0)
> +    frc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
> +                                 console_domid, store_domid);
> +    if (frc != 0)
>      {
>          ERROR("error seeding grant table");
>          goto out;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 17:22:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 17:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAjhv-0007GC-Kb; Tue, 04 Feb 2014 17:22:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAjhu-0007G5-Gu
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 17:22:46 +0000
Received: from [85.158.139.211:36147] by server-11.bemta-5.messagelabs.com id
	E9/2E-23886-5E121F25; Tue, 04 Feb 2014 17:22:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391534563!1617086!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15532 invoked from network); 4 Feb 2014 17:22:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 17:22:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97855207"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 17:22:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	12:22:42 -0500
Message-ID: <1391534561.6497.60.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 4 Feb 2014 17:22:41 +0000
In-Reply-To: <52F1208B.6040809@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
	<52F1208B.6040809@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 17:16 +0000, Andrew Cooper wrote:
> >                  goto out;
> >              }
> >          } else {
> > -            rc = -1;

Mostly looks good but I'm not sure about this change.

We get here on input error (toolstack data available but no callback
provided) which is neither migration success nor failure, it's a bug in
the caller. So arguably returning a result distinct from ordinary
success/failure makes sense.

I'd have thought it ought to set errno (to EINVAL perhaps) too, but
let's not mess with that now.


Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 17:22:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 17:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAjhv-0007GC-Kb; Tue, 04 Feb 2014 17:22:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAjhu-0007G5-Gu
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 17:22:46 +0000
Received: from [85.158.139.211:36147] by server-11.bemta-5.messagelabs.com id
	E9/2E-23886-5E121F25; Tue, 04 Feb 2014 17:22:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391534563!1617086!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15532 invoked from network); 4 Feb 2014 17:22:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 17:22:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97855207"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 17:22:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	12:22:42 -0500
Message-ID: <1391534561.6497.60.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 4 Feb 2014 17:22:41 +0000
In-Reply-To: <52F1208B.6040809@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
	<52F1208B.6040809@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 17:16 +0000, Andrew Cooper wrote:
> >                  goto out;
> >              }
> >          } else {
> > -            rc = -1;

Mostly looks good, but I'm not sure about this change.

We get here on an input error (toolstack data available but no callback
provided), which is neither migration success nor failure; it's a bug in
the caller. So arguably returning a value distinct from both success and
ordinary failure makes sense.

I'd have thought it ought to set errno (to EINVAL, perhaps) too, but
let's not mess with that now.


Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 17:27:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 17:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAjm1-0007dQ-Be; Tue, 04 Feb 2014 17:27:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAjlz-0007dL-Fo
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 17:26:59 +0000
Received: from [85.158.139.211:48683] by server-3.bemta-5.messagelabs.com id
	7A/3D-13671-2E221F25; Tue, 04 Feb 2014 17:26:58 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391534814!1615545!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9147 invoked from network); 4 Feb 2014 17:26:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 17:26:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="99759963"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 17:26:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 12:26:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WAjls-0004wR-R8;
	Tue, 04 Feb 2014 17:26:52 +0000
Message-ID: <52F122DC.8080407@citrix.com>
Date: Tue, 4 Feb 2014 17:26:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
	<52F1208B.6040809@citrix.com>
	<1391534561.6497.60.camel@kazak.uk.xensource.com>
In-Reply-To: <1391534561.6497.60.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 17:22, Ian Campbell wrote:
> On Tue, 2014-02-04 at 17:16 +0000, Andrew Cooper wrote:
>>>                  goto out;
>>>              }
>>>          } else {
>>> -            rc = -1;
> Mostly looks good, but I'm not sure about this change.
>
> We get here on an input error (toolstack data available but no callback
> provided), which is neither migration success nor failure; it's a bug in
> the caller. So arguably returning a value distinct from both success and
> ordinary failure makes sense.
>
> I'd have thought it ought to set errno (to EINVAL, perhaps) too, but
> let's not mess with that now.
>
>
> Ian.
>

Hilariously, it turns out that xc_domain_restore() is specified to
return 0 on success and -1 on failure.  From what I can tell, this is
the sole action which would cause xc_domain_restore() to return anything
other than 0 or 1.

I think fixing this should fall into the bucket of "sanitisation of
libxc error paths".

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 17:43:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 17:43:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAk28-000899-RN; Tue, 04 Feb 2014 17:43:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAk26-00088v-J6
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 17:43:38 +0000
Received: from [193.109.254.147:34381] by server-4.bemta-14.messagelabs.com id
	E6/38-32066-9C621F25; Tue, 04 Feb 2014 17:43:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391535815!1980540!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12323 invoked from network); 4 Feb 2014 17:43:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 17:43:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="99768218"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 17:43:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	12:43:34 -0500
Message-ID: <1391535813.6497.61.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 4 Feb 2014 17:43:33 +0000
In-Reply-To: <52F122DC.8080407@citrix.com>
References: <1390839925-28088-1-git-send-email-andrew.cooper3@citrix.com>
	<1390839925-28088-3-git-send-email-andrew.cooper3@citrix.com>
	<52F1208B.6040809@citrix.com>
	<1391534561.6497.60.camel@kazak.uk.xensource.com>
	<52F122DC.8080407@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch 2/2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 17:26 +0000, Andrew Cooper wrote:
> On 04/02/14 17:22, Ian Campbell wrote:
> > On Tue, 2014-02-04 at 17:16 +0000, Andrew Cooper wrote:
> >>>                  goto out;
> >>>              }
> >>>          } else {
> >>> -            rc = -1;
> > Mostly looks good, but I'm not sure about this change.
> >
> > We get here on an input error (toolstack data available but no callback
> > provided), which is neither migration success nor failure; it's a bug in
> > the caller. So arguably returning a value distinct from both success and
> > ordinary failure makes sense.
> >
> > I'd have thought it ought to set errno (to EINVAL, perhaps) too, but
> > let's not mess with that now.
> >
> >
> > Ian.
> >
> 
> Hilariously, it turns out that xc_domain_restore() is specified to
> return 0 on success and -1 on failure.  From what I can tell, this is
> the sole action which would cause xc_domain_restore() to return anything
> other than 0 or 1.
> 
> I think fixing this should fall into the bucket of "sanitisation of
> libxc error paths".

Right, so please leave the rc = -1 alone.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 18:01:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 18:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAkJE-0000j6-FS; Tue, 04 Feb 2014 18:01:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WAkJD-0000iz-8J
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 18:01:19 +0000
Received: from [193.109.254.147:21568] by server-7.bemta-14.messagelabs.com id
	25/78-23424-EEA21F25; Tue, 04 Feb 2014 18:01:18 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391536874!1983634!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15422 invoked from network); 4 Feb 2014 18:01:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 18:01:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97872658"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 18:01:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 13:01:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WAkJ5-0005xF-2p; Tue, 04 Feb 2014 18:01:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 18:01:10 +0000
Message-ID: <1391536870-22809-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391535813.6497.61.camel@kazak.uk.xensource.com>
References: <1391535813.6497.61.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2] tools/libxc: Prevent erroneous success from
	xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
most part is left alone until success, at which point it is set to 0.

There is a separate 'frc' which for the most part is used to check function
calls, keeping errors separate from 'rc'.

For a toolstack which sets callbacks->toolstack_restore() and whose callback
returns 0, 'rc' is overwritten with that 0, so any subsequent error that
jumps to "out" results in the migration being declared a success.

For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
'frc', even though their use of 'rc' is currently safe.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>

---

Changes in v2:
 * Don't drop rc = -1 from toolstack_restore().

Regarding 4.4: If the two "for consistency" changes to
xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
without affecting the bugfix nature of the patch, but I would argue that
leaving some examples of "rc = function_call()" sets a bad precedent which
is likely to lead to similar bugs in the future.
---
 tools/libxc/xc_domain_restore.c |   18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
index 5ba47d7..1f6ce50 100644
--- a/tools/libxc/xc_domain_restore.c
+++ b/tools/libxc/xc_domain_restore.c
@@ -2240,9 +2240,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         memcpy(ctx->live_p2m, ctx->p2m, dinfo->p2m_size * sizeof(xen_pfn_t));
     munmap(ctx->live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
 
-    rc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
-                            console_domid, store_domid);
-    if (rc != 0)
+    frc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
+                             console_domid, store_domid);
+    if (frc != 0)
     {
         ERROR("error seeding grant table");
         goto out;
@@ -2257,10 +2257,10 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
     {
         if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
         {
-            rc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
-                        callbacks->data);
+            frc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
+                                               callbacks->data);
             free(tdata.data);
-            if ( rc < 0 )
+            if ( frc < 0 )
             {
                 PERROR("error calling toolstack_restore");
                 goto out;
@@ -2326,9 +2326,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         goto out;
     }
 
-    rc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
-                                console_domid, store_domid);
-    if (rc != 0)
+    frc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
+                                 console_domid, store_domid);
+    if (frc != 0)
     {
         ERROR("error seeding grant table");
         goto out;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 18:10:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 18:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAkSH-0000zA-KP; Tue, 04 Feb 2014 18:10:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pcj@citrix.com>) id 1WAkSF-0000z5-OA
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 18:10:40 +0000
Received: from [85.158.137.68:30724] by server-14.bemta-3.messagelabs.com id
	E4/63-08196-E1D21F25; Tue, 04 Feb 2014 18:10:38 +0000
X-Env-Sender: pcj@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391537436!13385774!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17384 invoked from network); 4 Feb 2014 18:10:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 18:10:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97878692"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 18:10:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 13:10:35 -0500
Received: from joby-pc.uk.xensource.com ([10.80.2.72])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<pcj@citrix.com>)	id 1WAkSB-00064s-Is; Tue, 04 Feb 2014 18:10:35 +0000
Received: from pcj by joby-pc.uk.xensource.com with local (Exim 4.80)
	(envelope-from <pcj@citrix.com>)	id 1WAkSB-0001Nt-7D; Tue, 04 Feb 2014
	18:10:35 +0000
Date: Tue, 4 Feb 2014 18:10:35 +0000
From: Joby Poriyath <joby.poriyath@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140204181023.GA5293@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7 has
 new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The menuentry blocks in RHEL 7's grub2/grub.cfg use the linux16 and initrd16
commands instead of linux and initrd. Because of this, a RHEL 7 (beta) guest
failed to boot after installation.

In addition, RHEL 7 menu entries have two different single-quote delimited
strings on the same line, and the greedy grouping in the menuentry regex
captures both strings, plus the options in between, as the title.
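
The greedy-vs-lazy difference can be seen directly with Python's re module on a shortened RHEL 7 style line (the line below is a trimmed illustration, not the exact grub.cfg content):

```python
import re

# A RHEL 7 style menuentry line: two single-quoted strings on one line.
line = ("menuentry 'Red Hat Enterprise Linux' --class red "
        "$menuentry_id_option 'gnulinux-advanced-d23b8b49' {")

# Greedy (.*): extends to the LAST quote, so the title group swallows
# the second quoted string and all the options in between.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)

# Lazy (.*?): stops at the FIRST closing quote, yielding just the title.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

print(greedy.group(1))  # title plus options plus the id string
print(lazy.group(1))    # just the title
```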

Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: george.dunlap@citrix.com
---
v2: Added RHEL 7 grub.cfg in pygrub/examples
v3 & v4: Tidied the commit message based on Andrew Cooper's feedback

Kindly consider this patch for xen-4.4 as RHEL 7 (beta) fails to boot
on Xen.

 tools/pygrub/examples/rhel-7-beta.grub2 |  118 +++++++++++++++++++++++++++++++
 tools/pygrub/src/GrubConf.py            |    4 +-
 2 files changed, 121 insertions(+), 1 deletion(-)
 create mode 100644 tools/pygrub/examples/rhel-7-beta.grub2

diff --git a/tools/pygrub/examples/rhel-7-beta.grub2 b/tools/pygrub/examples/rhel-7-beta.grub2
new file mode 100644
index 0000000..88f0f99
--- /dev/null
+++ b/tools/pygrub/examples/rhel-7-beta.grub2
@@ -0,0 +1,118 @@
+#
+# DO NOT EDIT THIS FILE
+#
+# It is automatically generated by grub2-mkconfig using templates
+# from /etc/grub.d and settings from /etc/default/grub
+#
+
+### BEGIN /etc/grub.d/00_header ###
+set pager=1
+
+if [ -s $prefix/grubenv ]; then
+  load_env
+fi
+if [ "${next_entry}" ] ; then
+   set default="${next_entry}"
+   set next_entry=
+   save_env next_entry
+   set boot_once=true
+else
+   set default="${saved_entry}"
+fi
+
+if [ x"${feature_menuentry_id}" = xy ]; then
+  menuentry_id_option="--id"
+else
+  menuentry_id_option=""
+fi
+
+export menuentry_id_option
+
+if [ "${prev_saved_entry}" ]; then
+  set saved_entry="${prev_saved_entry}"
+  save_env saved_entry
+  set prev_saved_entry=
+  save_env prev_saved_entry
+  set boot_once=true
+fi
+
+function savedefault {
+  if [ -z "${boot_once}" ]; then
+    saved_entry="${chosen}"
+    save_env saved_entry
+  fi
+}
+
+function load_video {
+  if [ x$feature_all_video_module = xy ]; then
+    insmod all_video
+  else
+    insmod efi_gop
+    insmod efi_uga
+    insmod ieee1275_fb
+    insmod vbe
+    insmod vga
+    insmod video_bochs
+    insmod video_cirrus
+  fi
+}
+
+terminal_output console
+set timeout=5
+### END /etc/grub.d/00_header ###
+
+### BEGIN /etc/grub.d/10_linux ###
+menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
+	load_video
+	set gfxpayload=keep
+	insmod gzio
+	insmod part_msdos
+	insmod xfs
+	set root='hd0,msdos1'
+	if [ x$feature_platform_search_hint = xy ]; then
+	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
+	else
+	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
+	fi
+	linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
+	initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
+}
+menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
+	load_video
+	insmod gzio
+	insmod part_msdos
+	insmod xfs
+	set root='hd0,msdos1'
+	if [ x$feature_platform_search_hint = xy ]; then
+	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
+	else
+	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
+	fi
+	linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
+	initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
+}
+
+### END /etc/grub.d/10_linux ###
+
+### BEGIN /etc/grub.d/20_linux_xen ###
+### END /etc/grub.d/20_linux_xen ###
+
+### BEGIN /etc/grub.d/20_ppc_terminfo ###
+### END /etc/grub.d/20_ppc_terminfo ###
+
+### BEGIN /etc/grub.d/30_os-prober ###
+### END /etc/grub.d/30_os-prober ###
+
+### BEGIN /etc/grub.d/40_custom ###
+# This file provides an easy way to add custom menu entries.  Simply type the
+# menu entries you want to add after this comment.  Be careful not to change
+# the 'exec tail' line above.
+### END /etc/grub.d/40_custom ###
+
+### BEGIN /etc/grub.d/41_custom ###
+if [ -f  ${config_directory}/custom.cfg ]; then
+  source ${config_directory}/custom.cfg
+elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
+  source $prefix/custom.cfg;
+fi
+### END /etc/grub.d/41_custom ###
diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
index cb853c9..974cded 100644
--- a/tools/pygrub/src/GrubConf.py
+++ b/tools/pygrub/src/GrubConf.py
@@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
                 
     commands = {'set:root': 'root',
                 'linux': 'kernel',
+                'linux16': 'kernel',
                 'initrd': 'initrd',
+                'initrd16': 'initrd',
                 'echo': None,
                 'insmod': None,
                 'search': None}
@@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
                 continue
 
             # new image
-            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
+            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
             if title_match:
                 if img is not None:
                     raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 18:19:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 18:19:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAkah-0001gZ-ML; Tue, 04 Feb 2014 18:19:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <martin@xn--hrling-vxa.se>) id 1WAkag-0001gU-SO
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 18:19:23 +0000
Received: from [85.158.143.35:60330] by server-1.bemta-4.messagelabs.com id
	3B/97-31661-A2F21F25; Tue, 04 Feb 2014 18:19:22 +0000
X-Env-Sender: martin@xn--hrling-vxa.se
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391537959!3126981!1
X-Originating-IP: [209.85.128.181]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20261 invoked from network); 4 Feb 2014 18:19:20 -0000
Received: from mail-ve0-f181.google.com (HELO mail-ve0-f181.google.com)
	(209.85.128.181)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 18:19:20 -0000
Received: by mail-ve0-f181.google.com with SMTP id cz12so6222739veb.40
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 10:19:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:date:message-id:subject:from
	:to:content-type;
	bh=ZWRKCzAo50X8qkbFM3VBmResqTXBmgLWwsmmROFUbtU=;
	b=LYdX5xQZm6RfLpUe1BSDp0Sp+rvg+S1h37zrZg2XH0tyBf4H+FXKyZLMfAYRiXPa3X
	0TQdkkcRWcMMtcUnlxy2jaf1fWstbHtM0ZMyJ6YjtdVcuakgdHGihDzRBE76b+cHgws+
	EdFrTZRLlQ1PNTePoQ2EIXtAalPcTH/RyPUMTZzTPTFcXuc+I2j9ENbqFw1LtjVFUCJO
	5Wcrh9p2HoQDcnw2DeXF/fsjovy3IrfaB1elFN64IKNsGqHetMaRpTWHHj4t6rnZlqlK
	cjxMLo+0zjKqReVuRRaWu8zV7wDGuS7l55Ao26OerBGRtzDTvhDQHURqOmJuoj6E3kN5
	K57A==
X-Gm-Message-State: ALoCoQlVo4GRYz7ZW8rcv3IyyTIeQBarWdQ54tR2/Ievs2uaL3buY/6FqnQAuZ/8JkRvCGdI8YPf
MIME-Version: 1.0
X-Received: by 10.58.188.78 with SMTP id fy14mr7259946vec.23.1391537956129;
	Tue, 04 Feb 2014 10:19:16 -0800 (PST)
Received: by 10.220.113.205 with HTTP; Tue, 4 Feb 2014 10:19:16 -0800 (PST)
X-Originating-IP: [46.59.53.235]
Date: Tue, 4 Feb 2014 19:19:16 +0100
X-Google-Sender-Auth: 9FBiEYuyLD9J79llkNjMZQpeFBw
Message-ID: <CABg547VOFCYPiENWX7EBuoou1A2=wV7_TYjFA+-1FjiQ=PVHfw@mail.gmail.com>
From: =?ISO-8859-1?Q?Martin_=D6hrling?= <xen@xn--hrling-vxa.se>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Inconsistent use of libxl__device_pci_reset() when
 adding/removing all functions of a device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5226278209081400486=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5226278209081400486==
Content-Type: multipart/alternative; boundary=089e013a12f09124b304f198ad7d

--089e013a12f09124b304f198ad7d
Content-Type: text/plain; charset=ISO-8859-1

I tried to use VGA passthrough with an AMD card and ran into the known issue
of dom0 crashing on domU reboot. I intend to debug the issue and have started
to read up on the PCI passthrough code. The following observations may not
describe an error, but perhaps someone could shed some light on the design
intentions.

My first impression was that libxl__device_pci_reset() is a function-level
reset. It is called each time a single function of a device is added or
removed. A device-level reset would break other domUs if other functions of
the device are passed through to them.

The strange thing is that there is only a single libxl__device_pci_reset()
call when pcidev->vfunc_mask is set to LIBXL_PCI_FUNC_ALL, which gives the
impression that the function is assumed to be a device-level reset.

Is the attempt to reset a function, when all the other functions belong to
pciback, detected and replaced by a device reset?

Best Regards,
Martin

--089e013a12f09124b304f198ad7d--


--===============5226278209081400486==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5226278209081400486==--



From xen-devel-bounces@lists.xen.org Tue Feb 04 18:42:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 18:42:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAkwm-00028b-8G; Tue, 04 Feb 2014 18:42:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WAkwl-00028W-IW
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 18:42:11 +0000
Received: from [193.109.254.147:57901] by server-10.bemta-14.messagelabs.com
	id 69/82-10711-28431F25; Tue, 04 Feb 2014 18:42:10 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391539329!1979729!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 520 invoked from network); 4 Feb 2014 18:42:10 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 18:42:10 -0000
Received: from smtphost4.dur.ac.uk (smtphost4.dur.ac.uk [129.234.252.4])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s14IfucK021169;
	Tue, 4 Feb 2014 18:42:00 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost4.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s14IfqWT001730
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 18:41:52 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s14IfqhC032558;
	Tue, 4 Feb 2014 18:41:52 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s14IfpVC032547; Tue, 4 Feb 2014 18:41:51 GMT
Date: Tue, 4 Feb 2014 18:41:51 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391530374.6497.55.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
References: <1391527619.2441.16.camel@astar.houby.net>
	<1391530374.6497.55.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s14IfucK021169
Cc: ehouby@yahoo.com, xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014, Ian Campbell wrote:

> On Tue, 2014-02-04 at 08:26 -0700, Eric Houby wrote:
>> Xen list,
>>
>> I have a clean F20 install with the RC3 RPMs and see an error when
>> starting a VM.  A similar issue was seen with the RC2 RPMs.
>
> Thanks for the report.
>
>> [root@xen ~]# xl create f20.xl
>> Parsing config from f20.xl
>> libxl: error: libxl_create.c:1054:domcreate_launch_dm: unable to add
>> disk devices
>> libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device
>> model pid in /local/domain/2/image/device-model-pid
>> libxl: error: libxl.c:1425:libxl__destroy_domid:
>> libxl__destroy_device_model failed for 2
>> libxl: error: libxl_device.c:780:libxl__initiate_device_remove: unable
>> to get my domid
>> libxl: error: libxl.c:1461:devices_destroy_cb: libxl__devices_destroy
>> failed for 2
>> [root@xen ~]#
>> [root@xen ~]# /usr/bin/xenstore-write "/local/domain/0/domid" 0
>> [root@xen ~]#
>> [root@xen ~]# xl create f20.xl
>> Parsing config from f20.xl
>> [root@xen ~]#
>> [root@xen ~]#
>> [root@xen ~]# xl list
>> Name                                        ID   Mem VCPUs	State	Time(s)
>> Domain-0                                     0  2047     2     r-----
>> 19.7
>> f20                                          3  4095     1     r-----
>> 4.1
>>
>>
>> After running the xenstore-write command VMs start up without issue.
>
> This is a bug in the Fedora versions of the initscripts (or I suppose
> these days they are systemd unit files or whatever). The sysvinit
> scripts shipped with Xen itself have this write in already.

I am puzzled by this as the systemd files in the Fedora RC3 test build do 
indeed run
/usr/bin/xenstore-write "/local/domain/0/domid" 0
so I am not sure why it isn't being set in this case. It certainly works 
for me.

> We should probably consider taking some unit files into the xen tree, if
> someone wants to submit a set?

I can submit a set, which starts services individually rather than using a 
unified xencommons-style start file. I didn't find a good way to reproduce 
the xendomains script, so I ended up running an edited version of the 
sysvinit script with a systemd wrapper file.
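
For what it's worth, a wrapper of roughly the shape described above could look like the following; the unit name, paths, and ordering here are illustrative assumptions, not the actual Fedora or xen.git files:

```ini
# Illustrative sketch only -- unit name, paths and dependencies are
# assumptions, not the real Fedora or xen.git unit files.
[Unit]
Description=Xen domains started at boot (sysvinit wrapper)
After=xenstored.service xenconsoled.service

[Service]
Type=oneshot
RemainAfterExit=true
# Reuse the edited sysvinit script rather than reimplementing it:
ExecStart=/etc/init.d/xendomains start
ExecStop=/etc/init.d/xendomains stop

[Install]
WantedBy=multi-user.target
```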

Would it make sense to have some sort of configure option in tools to 
choose between sysvinit and systemd?

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 04 18:51:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 18:51:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAl5J-0002do-Ik; Tue, 04 Feb 2014 18:51:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAl5I-0002dj-DT
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 18:51:00 +0000
Received: from [85.158.137.68:32615] by server-12.bemta-3.messagelabs.com id
	38/EF-01674-39631F25; Tue, 04 Feb 2014 18:50:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391539857!13399136!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10382 invoked from network); 4 Feb 2014 18:50:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 18:50:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="99799262"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 18:50:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 13:50:56 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAl5E-0006og-6d;
	Tue, 04 Feb 2014 18:50:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 18:50:26 +0000
Message-ID: <1391539826-30962-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: netdev@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"David S. Miller" <davem@davemloft.net>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv1 net] xen-netfront: handle backend CLOSED
	without CLOSING
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Backend drivers shouldn't transition to CLOSED unless the frontend is
CLOSED.  If a backend does transition to CLOSED too soon then the
frontend may not see the CLOSING state and will not shut down properly.

So, treat an unexpected backend CLOSED state the same as CLOSING.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
Cc: netdev@vger.kernel.org
Cc: "David S. Miller" <davem@davemloft.net>
---
 drivers/net/xen-netfront.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index ff04d4f..f9daa9e 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1832,7 +1832,6 @@ static void netback_changed(struct xenbus_device *dev,
 	case XenbusStateReconfiguring:
 	case XenbusStateReconfigured:
 	case XenbusStateUnknown:
-	case XenbusStateClosed:
 		break;
 
 	case XenbusStateInitWait:
@@ -1847,6 +1846,10 @@ static void netback_changed(struct xenbus_device *dev,
 		netdev_notify_peers(netdev);
 		break;
 
+	case XenbusStateClosed:
+		if (dev->state == XenbusStateClosed)
+			break;
+		/* Missed the backend's CLOSING state -- fallthrough */
 	case XenbusStateClosing:
 		xenbus_frontend_closed(dev);
 		break;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 04 18:54:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 18:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAl8N-0002kg-7b; Tue, 04 Feb 2014 18:54:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAl8L-0002kX-A1
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 18:54:09 +0000
Received: from [85.158.139.211:5011] by server-17.bemta-5.messagelabs.com id
	65/02-31975-05731F25; Tue, 04 Feb 2014 18:54:08 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391540045!1652173!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15629 invoked from network); 4 Feb 2014 18:54:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 18:54:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97897742"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 18:54:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 4 Feb 2014 13:54:04 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WAl8G-0006r1-EO;
	Tue, 04 Feb 2014 18:54:04 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 4 Feb 2014 18:53:56 +0000
Message-ID: <1391540036-31087-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jens Axboe <axboe@kernel.dk>, linux-kernel@vger.kernel.org,
	stable@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [Xen-devel] [PATCH] xen-blkfront: handle backend CLOSED without
	CLOSING
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Backend drivers shouldn't transition to CLOSED unless the frontend is
CLOSED.  If a backend does transition to CLOSED too soon then the
frontend may not see the CLOSING state and will not shut down properly.

So, treat an unexpected backend CLOSED state the same as CLOSING.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: stable@vger.kernel.org
---
Cc: Jens Axboe <axboe@kernel.dk>
---
 drivers/block/xen-blkfront.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 8dcfb54..0471d63 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1904,13 +1904,16 @@ static void blkback_changed(struct xenbus_device *dev,
 	case XenbusStateReconfiguring:
 	case XenbusStateReconfigured:
 	case XenbusStateUnknown:
-	case XenbusStateClosed:
 		break;
 
 	case XenbusStateConnected:
 		blkfront_connect(info);
 		break;
 
+	case XenbusStateClosed:
+		if (dev->state == XenbusStateClosed)
+			break;
+		/* Missed the backend's Closing state -- fallthrough */
 	case XenbusStateClosing:
 		blkfront_closing(info);
 		break;
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 04 19:19:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:19:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAlWi-00042c-VU; Tue, 04 Feb 2014 19:19:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAlWg-00042S-WF
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 19:19:19 +0000
Received: from [85.158.143.35:11267] by server-2.bemta-4.messagelabs.com id
	67/14-10891-63D31F25; Tue, 04 Feb 2014 19:19:18 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391541543!3127320!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25729 invoked from network); 4 Feb 2014 19:19:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 19:19:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97909537"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 19:19:02 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	14:19:01 -0500
Message-ID: <52F13D24.5090405@citrix.com>
Date: Tue, 4 Feb 2014 19:19:00 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
	<20140120163814.GE11681@zion.uk.xensource.com>
In-Reply-To: <20140120163814.GE11681@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/01/14 16:38, Wei Liu wrote:
> On Wed, Jan 15, 2014 at 05:11:07PM +0000, Zoltan Kiss wrote:
>> The recent patch to fix receive side flow control (11b57f) solved the
>> spinning-thread problem, but it caused another one. The receive side can
>> stall if:
>> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
>> - [INTERRUPT] an interrupt happens and sets rx_event to true
>> - [THREAD] then xenvif_kthread sets rx_event to false
>> - [THREAD] rx_work_todo doesn't return true anymore
>>
>> Also, if an interrupt is sent but there is still no room in the ring, it
>> takes quite a long time until xenvif_rx_action realizes it. This patch
>> ditches those two variables and reworks rx_work_todo. If the thread finds
>> it can't fit more skbs into the ring, it saves the last slot estimate in
>> rx_last_skb_slots; otherwise that stays 0. Then rx_work_todo checks
>> whether:
>> - there is something to send to the ring (as before)
>> - there is space for the topmost packet in the queue
>>
>> I think that's a more natural and optimal thing to test than two bools
>> which are set somewhere else.
>>
>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>
> Sorry for the delay.
>
> Paul, thanks for reviewing.
>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Hi,

This patch hasn't made it to net-next yet, maybe because the subject 
doesn't suggest that this is a bugfix. I suggest applying it as soon as 
possible; otherwise netback will be quite broken.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 19:20:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAlXb-00046x-TZ; Tue, 04 Feb 2014 19:20:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WAlXa-00046h-0V
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 19:20:14 +0000
Received: from [85.158.143.35:9351] by server-3.bemta-4.messagelabs.com id
	34/30-11539-D6D31F25; Tue, 04 Feb 2014 19:20:13 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391541610!3139584!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6495 invoked from network); 4 Feb 2014 19:20:12 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 19:20:12 -0000
Received: from mail-ig0-f178.google.com (mail-ig0-f178.google.com
	[209.85.213.178]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s14JK9Oh018670
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xensource.com>; Tue, 4 Feb 2014 11:20:09 -0800
Received: by mail-ig0-f178.google.com with SMTP id uq10so9042932igb.5
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 11:20:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=dmxn7jj1m7Y7CESHs0YsETujXjNhVQWOYWOoG87cBJE=;
	b=Am7878hN3L1+g20lV+qKpfjNGN4tWmEu/wUL+4hgP2TIIyPrSOXxsH93cxKXDeufoJ
	3CMqMSDbwq7ge15bxSRL6y8739Nnw7SvRotajBdhWXZaGp6GHCNHo40FZZ4xrn/1tflG
	RMKcZ0ik6ufZ/4U7fbuXWZU3pIYG2P8Q4Wv5vhCANK0fNkQTK50ECt8grwVrpNqHS5qj
	rWx+wE52pxCG453IYTZMAuIwRuS6ArO1/sO4/BiTeQTvZs+Kjnk1qD6Mr/90efxiPF6B
	JMOm7JFuPzX1JRnglxcb0BhZdcHdclunVJxGxmtgm3befrX6KZAnXZin1/5hfjJPq7ry
	gOqQ==
X-Received: by 10.42.92.194 with SMTP id u2mr30488211icm.19.1391541608085;
	Tue, 04 Feb 2014 11:20:08 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Tue, 4 Feb 2014 11:19:21 -0800 (PST)
In-Reply-To: <1391528110.6497.32.camel@kazak.uk.xensource.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Tue, 4 Feb 2014 11:19:21 -0800
Message-ID: <CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
To: "Mike C." <miguelmclara@gmail.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5458616034061950372=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5458616034061950372==
Content-Type: multipart/alternative; boundary=90e6ba3fcf8f3d81d204f19987ba

--90e6ba3fcf8f3d81d204f19987ba
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 4, 2014 at 7:35 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

>
> > >> I have a host where what I need is to move it from host1 to host2, or
> > >> reverse if needed, there's no problem stopping it first, but I guess
> > >> this is not what the "migrate" command is used for!
> > >>
>

If you just need to migrate an entire VM (memory and disk) from one host
to another without shared storage, there are several alternatives. One
way is to use DRBD-based disk backends (yes, they can run on top of LVM)
with protocol C (synchronous disk replication) to keep the disks on both
hosts synchronized.

Since the disk is always synchronized, you can just use the xl migrate
command to move the memory state from one host to the other, with very
little downtime.

I would suggest reading the DRBD documentation on Xen live migration.
IIRC their current example is based on xend, but it applies equally well
to xl, as xl also supports DRBD-based disk backends.

shriram
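A minimal sketch of what this setup could look like with xl. The guest name, config path, and DRBD resource name "r0" are illustrative assumptions, not taken from this thread; the resource would be configured identically on both hosts.

```
# /etc/xen/guest.cfg -- illustrative fragment, not from the thread.
# "r0" is a hypothetical DRBD resource, synchronized with protocol C
# between host1 and host2.
name = "guest"
disk = [ 'drbd:r0,xvda,w' ]

# With the disks kept in sync by DRBD, only memory state needs to move:
#   xl migrate guest host2
```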

--90e6ba3fcf8f3d81d204f19987ba--


--===============5458616034061950372==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5458616034061950372==--


From xen-devel-bounces@lists.xen.org Tue Feb 04 19:33:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAljp-0004eh-LO; Tue, 04 Feb 2014 19:32:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WAljo-0004ec-OH
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 19:32:52 +0000
Received: from [193.109.254.147:40006] by server-16.bemta-14.messagelabs.com
	id DE/92-21945-36041F25; Tue, 04 Feb 2014 19:32:51 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391542371!2000260!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8056 invoked from network); 4 Feb 2014 19:32:51 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 19:32:51 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s14JWdFs028443;
	Tue, 4 Feb 2014 19:32:43 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s14JWaAA005444
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 19:32:36 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s14JWZpG013522;
	Tue, 4 Feb 2014 19:32:36 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s14JWZi9013518; Tue, 4 Feb 2014 19:32:35 GMT
Date: Tue, 4 Feb 2014 19:32:35 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
Message-ID: <alpine.DEB.2.00.1402041927330.26392@procyon.dur.ac.uk>
References: <1391527619.2441.16.camel@astar.houby.net>
	<1391530374.6497.55.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s14JWdFs028443
Cc: ehouby@yahoo.com, xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014, M A Young wrote:

> I am puzzled by this as the systemd files in the Fedora RC3 test build do 
> indeed run
> /usr/bin/xenstore-write "/local/domain/0/domid" 0
> so I am not sure why it isn't being set in this case. It certainly works for 
> me.

I have now worked out what the problem is. There are alternate 
systemd files, one for xenstored and one for oxenstored (if it is 
installed), and I only added the line
/usr/bin/xenstore-write "/local/domain/0/domid" 0
to the xenstored one, so if the rpm containing oxenstored was installed 
it would produce the behaviour reported in this thread.
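For illustration, the fix amounts to adding the same write to the oxenstored unit. This is a hypothetical sketch: the unit file name and the use of ExecStartPost are assumptions, not quoted from the Fedora package.

```
# Illustrative drop-in for the oxenstored service unit (the xenstored
# unit already had the equivalent line, per the message above).
[Service]
ExecStartPost=/usr/bin/xenstore-write "/local/domain/0/domid" 0
```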

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 19:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAlyP-0004rV-5p; Tue, 04 Feb 2014 19:47:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mchan@broadcom.com>) id 1WAlyN-0004rQ-Mj
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 19:47:55 +0000
Received: from [85.158.139.211:15479] by server-1.bemta-5.messagelabs.com id
	76/8E-12859-AE341F25; Tue, 04 Feb 2014 19:47:54 +0000
X-Env-Sender: mchan@broadcom.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391543272!1635106!1
X-Originating-IP: [216.31.210.62]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 310 invoked from network); 4 Feb 2014 19:47:53 -0000
Received: from mail-gw1-out.broadcom.com (HELO mail-gw1-out.broadcom.com)
	(216.31.210.62) by server-5.tower-206.messagelabs.com with SMTP;
	4 Feb 2014 19:47:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384329600"; d="scan'208";a="13103775"
Received: from irvexchcas08.broadcom.com (HELO
	IRVEXCHCAS08.corp.ad.broadcom.com) ([10.9.208.57])
	by mail-gw1-out.broadcom.com with ESMTP; 04 Feb 2014 12:13:56 -0800
Received: from IRVEXCHSMTP3.corp.ad.broadcom.com (10.9.207.53) by
	IRVEXCHCAS08.corp.ad.broadcom.com (10.9.208.57) with Microsoft SMTP
	Server (TLS) id 14.3.174.1; Tue, 4 Feb 2014 11:47:52 -0800
Received: from mail-irva-13.broadcom.com (10.10.10.20) by
	IRVEXCHSMTP3.corp.ad.broadcom.com (10.9.207.53) with Microsoft SMTP
	Server id 14.3.174.1; Tue, 4 Feb 2014 11:47:51 -0800
Received: from [10.12.137.144] (dhcp-10-12-137-144.irv.broadcom.com
	[10.12.137.144])	by mail-irva-13.broadcom.com (Postfix) with ESMTP id
	B2B21246A4;	Tue,  4 Feb 2014 11:47:51 -0800 (PST)
From: Michael Chan <mchan@broadcom.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <52EBA51E.808@citrix.com>
References: <52EAA31B.1090606@schaman.hu>
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
	<52EBA51E.808@citrix.com>
Date: Tue, 4 Feb 2014 11:47:51 -0800
Message-ID: <1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
MIME-Version: 1.0
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	"David S. Miller" <davem@davemloft.net>, e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Zoltan Kiss <zoltan.kiss@schaman.hu>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-31 at 14:29 +0100, Zoltan Kiss wrote: 
> [ 5417.275472] WARNING: at net/sched/sch_generic.c:255 
> dev_watchdog+0x156/0x1f0()
> [ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out 

The dump shows an internal IRQ pending on MSI-X vector 2, which matches
the queue number that is timing out.  I don't know what happened to the
MSI-X interrupt or why the driver is not seeing it.  Do you see an IRQ
error message from the kernel a few seconds before the tx timeout message?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 19:54:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAm53-0005Be-7l; Tue, 04 Feb 2014 19:54:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAm51-0005BU-SV
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 19:54:48 +0000
Received: from [85.158.139.211:47346] by server-3.bemta-5.messagelabs.com id
	1D/79-13671-78541F25; Tue, 04 Feb 2014 19:54:47 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391543684!1655439!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8726 invoked from network); 4 Feb 2014 19:54:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 19:54:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97926588"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 19:54:43 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 14:54:43 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: <ian.campbell@citrix.com>, <wei.liu2@citrix.com>,
	<xen-devel@lists.xenproject.org>, <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <jonathan.davies@citrix.com>
Date: Tue, 4 Feb 2014 19:54:37 +0000
Message-ID: <1391543677-24039-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH net v3] xen-netback: Fix Rx stall due to race
	condition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The recent patch to fix receive side flow control
(11b57f90257c1d6a91cee720151b69e0c2020cf6: xen-netback: stop vif thread
spinning if frontend is unresponsive) solved the spinning thread problem,
but introduced another one. The receive side can stall, if:
- [THREAD] xenvif_rx_action sets rx_queue_stopped to true
- [INTERRUPT] an interrupt happens and sets rx_event to true
- [THREAD] then xenvif_kthread sets rx_event to false
- [THREAD] rx_work_todo doesn't return true anymore

Also, if an interrupt is sent but there is still no room in the ring, it
takes quite a long time until xenvif_rx_action realizes it. This patch
ditches those two variables and reworks rx_work_todo. If the thread finds
it can't fit more skbs into the ring, it saves the last slot estimate into
rx_last_skb_slots; otherwise that stays 0. Then rx_work_todo checks whether:
- there is something to send to the ring (as before)
- there is space for the topmost packet in the queue

I think that is a more natural and optimal thing to test than two booleans
which are set somewhere else.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
---
v2:
- Improve description of the problem

v3:
- Change subject and resend against net to emphasize this is a bugfix

 drivers/net/xen-netback/common.h    |    6 +-----
 drivers/net/xen-netback/interface.c |    1 -
 drivers/net/xen-netback/netback.c   |   16 ++++++----------
 3 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 4c76bcb..ae413a2 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -143,11 +143,7 @@ struct xenvif {
 	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
-	bool rx_queue_stopped;
-	/* Set when the RX interrupt is triggered by the frontend.
-	 * The worker thread may need to wake the queue.
-	 */
-	bool rx_event;
+	RING_IDX rx_last_skb_slots;
 
 	/* This array is allocated seperately as it is large */
 	struct gnttab_copy *grant_copy_op;
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index b9de31e..7669d49 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -100,7 +100,6 @@ static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
 	struct xenvif *vif = dev_id;
 
-	vif->rx_event = true;
 	xenvif_kick_thread(vif);
 
 	return IRQ_HANDLED;
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 2738563..bb241d0 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -477,7 +477,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 	unsigned long offset;
 	struct skb_cb_overlay *sco;
 	bool need_to_notify = false;
-	bool ring_full = false;
 
 	struct netrx_pending_operations npo = {
 		.copy  = vif->grant_copy_op,
@@ -487,7 +486,7 @@ static void xenvif_rx_action(struct xenvif *vif)
 	skb_queue_head_init(&rxq);
 
 	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
-		int max_slots_needed;
+		RING_IDX max_slots_needed;
 		int i;
 
 		/* We need a cheap worse case estimate for the number of
@@ -510,9 +509,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
 			skb_queue_head(&vif->rx_queue, skb);
 			need_to_notify = true;
-			ring_full = true;
+			vif->rx_last_skb_slots = max_slots_needed;
 			break;
-		}
+		} else
+			vif->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
 		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
@@ -523,8 +523,6 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
 
-	vif->rx_queue_stopped = !npo.copy_prod && ring_full;
-
 	if (!npo.copy_prod)
 		goto done;
 
@@ -1727,8 +1725,8 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 
 static inline int rx_work_todo(struct xenvif *vif)
 {
-	return (!skb_queue_empty(&vif->rx_queue) && !vif->rx_queue_stopped) ||
-		vif->rx_event;
+	return !skb_queue_empty(&vif->rx_queue) &&
+	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
 }
 
 static inline int tx_work_todo(struct xenvif *vif)
@@ -1814,8 +1812,6 @@ int xenvif_kthread(void *data)
 		if (!skb_queue_empty(&vif->rx_queue))
 			xenvif_rx_action(vif);
 
-		vif->rx_event = false;
-
 		if (skb_queue_empty(&vif->rx_queue) &&
 		    netif_queue_stopped(vif->dev))
 			xenvif_start_queue(vif);

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 19:55:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAm60-0005Ql-Ml; Tue, 04 Feb 2014 19:55:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAm5z-0005Qb-8r
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 19:55:47 +0000
Received: from [85.158.137.68:18365] by server-9.bemta-3.messagelabs.com id
	2A/4A-10184-2C541F25; Tue, 04 Feb 2014 19:55:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391543744!13392813!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3047 invoked from network); 4 Feb 2014 19:55:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 19:55:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97927001"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 19:55:44 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	14:55:43 -0500
Message-ID: <52F145BE.3090606@citrix.com>
Date: Tue, 4 Feb 2014 19:55:42 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
	<20140120163814.GE11681@zion.uk.xensource.com>
	<52F13D24.5090405@citrix.com>
In-Reply-To: <52F13D24.5090405@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 19:19, Zoltan Kiss wrote:
> On 20/01/14 16:38, Wei Liu wrote:
>> On Wed, Jan 15, 2014 at 05:11:07PM +0000, Zoltan Kiss wrote:
>>> The recent patch to fix receive side flow control (11b57f) solved the
>>> spinning
>>> thread problem, however caused an another one. The receive side can
>>> stall, if:
>>> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
>>> - [INTERRUPT] interrupt happens, and sets rx_event to true
>>> - [THREAD] then xenvif_kthread sets rx_event to false
>>> - [THREAD] rx_work_todo doesn't return true anymore
>>>
>>> Also, if interrupt sent but there is still no room in the ring, it
>>> take quite a
>>> long time until xenvif_rx_action realize it. This patch ditch that
>>> two variable,
>>> and rework rx_work_todo. If the thread finds it can't fit more skb's
>>> into the
>>> ring, it saves the last slot estimation into rx_last_skb_slots,
>>> otherwise it's
>>> kept as 0. Then rx_work_todo will check if:
>>> - there is something to send to the ring (like before)
>>> - there is space for the topmost packet in the queue
>>>
>>> I think that's more natural and optimal thing to test than two bool
>>> which are
>>> set somewhere else.
>>>
>>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>>
>> Sorry for the delay.
>>
>> Paul, thanks for reviewing.
>>
>> Acked-by: Wei Liu <wei.liu2@citrix.com>
>
> Hi,
>
> This patch haven't made it to net-next yet, maybe because the subject
> doesn't suggest that this is a bugfix. I suggest to apply it as soon as
> possible, otherwise netback will be quite broken.

I've reposted it with a clearer subject; sorry for being too vague.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 19:55:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAm60-0005Ql-Ml; Tue, 04 Feb 2014 19:55:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAm5z-0005Qb-8r
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 19:55:47 +0000
Received: from [85.158.137.68:18365] by server-9.bemta-3.messagelabs.com id
	2A/4A-10184-2C541F25; Tue, 04 Feb 2014 19:55:46 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391543744!13392813!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3047 invoked from network); 4 Feb 2014 19:55:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 19:55:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,781,1384300800"; d="scan'208";a="97927001"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 19:55:44 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	14:55:43 -0500
Message-ID: <52F145BE.3090606@citrix.com>
Date: Tue, 4 Feb 2014 19:55:42 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1389805867-22409-1-git-send-email-zoltan.kiss@citrix.com>
	<20140120163814.GE11681@zion.uk.xensource.com>
	<52F13D24.5090405@citrix.com>
In-Reply-To: <52F13D24.5090405@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [PATCH net-next v2] xen-netback: Rework rx_work_todo
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 19:19, Zoltan Kiss wrote:
> On 20/01/14 16:38, Wei Liu wrote:
>> On Wed, Jan 15, 2014 at 05:11:07PM +0000, Zoltan Kiss wrote:
>>> The recent patch to fix receive side flow control (11b57f) solved the
>>> spinning thread problem, but caused another one. The receive side can
>>> stall if:
>>> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
>>> - [INTERRUPT] an interrupt happens and sets rx_event to true
>>> - [THREAD] then xenvif_kthread sets rx_event to false
>>> - [THREAD] rx_work_todo doesn't return true anymore
>>>
>>> Also, if an interrupt is sent but there is still no room in the ring,
>>> it takes quite a long time until xenvif_rx_action realizes it. This
>>> patch ditches those two variables and reworks rx_work_todo. If the
>>> thread finds it can't fit more skbs into the ring, it saves the last
>>> slot estimate into rx_last_skb_slots; otherwise that stays 0. Then
>>> rx_work_todo checks whether:
>>> - there is something to send to the ring (as before)
>>> - there is space for the topmost packet in the queue
>>>
>>> I think that's a more natural and optimal thing to test than two bools
>>> which are set somewhere else.
>>>
>>> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
>>
>> Sorry for the delay.
>>
>> Paul, thanks for reviewing.
>>
>> Acked-by: Wei Liu <wei.liu2@citrix.com>
>
> Hi,
>
> This patch hasn't made it to net-next yet, maybe because the subject
> doesn't suggest that it is a bugfix. I suggest applying it as soon as
> possible; otherwise netback will be quite broken.

I've reposted it with a clearer subject; sorry for being too vague.

Zoli
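The reworked check described in the commit message above can be modelled with a small sketch. This is an illustrative simplification, not the actual netback code: the struct and its ring-space fields are hypothetical stand-ins for xenvif and its ring accounting.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical, simplified model of the reworked logic. The field and
 * function names echo the patch description; the ring accounting here
 * is a stand-in for the real xenvif structures. */
struct vif_model {
	int queue_len;         /* skbs waiting to be pushed into the ring */
	int ring_free_slots;   /* free slots currently available in the ring */
	int rx_last_skb_slots; /* slot estimate saved when the last skb didn't fit */
};

/* When xenvif_rx_action can't fit the topmost skb, it records the slot
 * estimate; while everything fits, the estimate is kept at 0. */
static void rx_action_stalled(struct vif_model *v, int needed_slots)
{
	v->rx_last_skb_slots = needed_slots;
}

/* rx_work_todo: there is something to send, and either nothing has
 * stalled (no saved estimate) or the ring now has enough space for
 * the topmost skb. */
static bool rx_work_todo(const struct vif_model *v)
{
	if (v->queue_len == 0)
		return false;
	if (v->rx_last_skb_slots == 0)
		return true;
	return v->ring_free_slots >= v->rx_last_skb_slots;
}
```

Compared with the old rx_queue_stopped/rx_event pair, the thread re-evaluates actual ring space on each check, so a consumed interrupt can no longer leave the condition permanently false.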

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 19:56:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 19:56:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAm6G-0005TB-3v; Tue, 04 Feb 2014 19:56:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WAm6D-0005Sl-Jw
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 19:56:02 +0000
Received: from [193.109.254.147:5036] by server-16.bemta-14.messagelabs.com id
	CA/8F-21945-0D541F25; Tue, 04 Feb 2014 19:56:00 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391543758!2016145!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3095 invoked from network); 4 Feb 2014 19:55:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 19:55:59 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s14JtrlR023845
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 4 Feb 2014 19:55:54 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14JtqwF013279
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 4 Feb 2014 19:55:53 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s14JtqvA013276; Tue, 4 Feb 2014 19:55:52 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 04 Feb 2014 11:55:52 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 892F91C0972; Tue,  4 Feb 2014 14:55:51 -0500 (EST)
Date: Tue, 4 Feb 2014 14:55:51 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Martin =?iso-8859-1?Q?=D6hrling?= <xen@xn--hrling-vxa.se>,
	linux@eikelenboom.it
Message-ID: <20140204195551.GA14122@phenom.dumpdata.com>
References: <CABg547VOFCYPiENWX7EBuoou1A2=wV7_TYjFA+-1FjiQ=PVHfw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CABg547VOFCYPiENWX7EBuoou1A2=wV7_TYjFA+-1FjiQ=PVHfw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Inconsistent use of libxl__device_pci_reset() when
 adding/removing all functions of a device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 07:19:16PM +0100, Martin Öhrling wrote:
> I tried to use VGA passthrough with an AMD card and ran into the issues
> with a dom0 crash on domU reboot. My intention is to debug the issue, and I
> have started to read up on the code for PCI passthrough. The following
> observations may not be an error, but perhaps someone could shed some light
> on the design intentions.
>
> My first impression was that libxl__device_pci_reset() is a function-level
> reset function. It is called each time a single function of a device is
> added or removed. A device reset would break other domUs if other
> functions of the device are passed through to those domUs.
>
> The strange thing is that there is only a single libxl__device_pci_reset()
> call when pcidev->vfunc_mask is set to LIBXL_PCI_FUNC_ALL. I'm getting the
> impression that the function is assumed to be a device reset function.
>
> Is the attempt to reset a function, when all other functions belong to
> pciback, detected and replaced by a device reset?

Yes with these patches:
https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?h=devel/xen-pciback.slot_and_bus.v0

But the last one seems to hang pciback when the device is unbound.
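The asymmetry Martin points out (one reset call per function, collapsing into a single call when all functions are selected) can be captured in a tiny model. This is purely illustrative: the 8-function mask width and the MODEL_PCI_FUNC_ALL value are assumptions for the sketch, not libxl's real definitions.

```c
#include <assert.h>

/* Assumed stand-in for libxl's LIBXL_PCI_FUNC_ALL; a PCI device has
 * at most 8 functions, so all-ones over 8 bits selects every one. */
#define MODEL_PCI_FUNC_ALL 0xffu

/* How many reset calls the add/remove path issues for a given function
 * mask, per the behaviour described in the thread: one call per selected
 * function normally, but a single call when all functions are selected
 * (implicitly treating it as a whole-device reset). */
static int reset_calls_for_mask(unsigned vfunc_mask)
{
	if (vfunc_mask == MODEL_PCI_FUNC_ALL)
		return 1;               /* single, device-level reset */

	int calls = 0;
	for (unsigned fn = 0; fn < 8; fn++)
		if (vfunc_mask & (1u << fn))
			calls++;        /* per-function reset */
	return calls;
}
```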

>

> Best Regards,
> Martin

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 20:05:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 20:05:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAmF2-00063G-E6; Tue, 04 Feb 2014 20:05:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WAmF0-00063B-DE
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 20:05:06 +0000
Received: from [193.109.254.147:6684] by server-11.bemta-14.messagelabs.com id
	7B/16-24604-1F741F25; Tue, 04 Feb 2014 20:05:05 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391544304!2018285!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22732 invoked from network); 4 Feb 2014 20:05:04 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 4 Feb 2014 20:05:04 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:56463 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WAmEa-0005YY-ED; Tue, 04 Feb 2014 21:04:40 +0100
Date: Tue, 4 Feb 2014 21:05:01 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1022741734.20140204210501@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140204195551.GA14122@phenom.dumpdata.com>
References: <CABg547VOFCYPiENWX7EBuoou1A2=wV7_TYjFA+-1FjiQ=PVHfw@mail.gmail.com>
	<20140204195551.GA14122@phenom.dumpdata.com>
MIME-Version: 1.0
Cc: =?iso-8859-1?Q?Martin_=D6hrling?= <xen@xn--hrling-vxa.se>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Inconsistent use of libxl__device_pci_reset() when
	adding/removing all functions of a device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, February 4, 2014, 8:55:51 PM, you wrote:

> On Tue, Feb 04, 2014 at 07:19:16PM +0100, Martin Öhrling wrote:
>> I tried to use VGA passthrough with an AMD card and ran into the issues
>> with a dom0 crash on domU reboot. My intention is to debug the issue, and I
>> have started to read up on the code for PCI passthrough. The following
>> observations may not be an error, but perhaps someone could shed some light
>> on the design intentions.
>>
>> My first impression was that libxl__device_pci_reset() is a function-level
>> reset function. It is called each time a single function of a device is
>> added or removed. A device reset would break other domUs if other
>> functions of the device are passed through to those domUs.
>>
>> The strange thing is that there is only a single libxl__device_pci_reset()
>> call when pcidev->vfunc_mask is set to LIBXL_PCI_FUNC_ALL. I'm getting the
>> impression that the function is assumed to be a device reset function.
>>
>> Is the attempt to reset a function, when all other functions belong to
>> pciback, detected and replaced by a device reset?

> Yes with these patches:
> https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?h=devel/xen-pciback.slot_and_bus.v0

> But the last one seems to hang pciback when the device is unbound.

Another thing I'm seeing now, sometimes, is the "irq 16: nobody cared"
after shutting down a guest which has a PCI device passed through. I don't
know for sure if that only happens since running with those 4 patches from
that tree (it could perhaps be due to a patch changing some order, opening
a very small race window there).

But these patches do improve the other case, in which it was crashing when
using xl pci-assignable-remove.

>> Best Regards,
>> Martin

>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 20:27:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 20:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAmaX-0006fv-8e; Tue, 04 Feb 2014 20:27:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAmaV-0006fq-NL
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 20:27:20 +0000
Received: from [193.109.254.147:27948] by server-11.bemta-14.messagelabs.com
	id 22/52-24604-62D41F25; Tue, 04 Feb 2014 20:27:18 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391545637!2011670!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10826 invoked from network); 4 Feb 2014 20:27:18 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 20:27:18 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so13166158wgh.17
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 12:27:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=/V3E7WSH7CUtvY+QeY+iMk0y3yBjliHGArWJZfNV//8=;
	b=0nkrENXZFqnKaBKlRgycgM5+8NUvMVFKewVtIwg1zBF4Ypy5j8O6BfzhdZ84OZM9Es
	SPx+pi07+EFFojXqSHthUfRP4d7zVdQ3x7HtZBo/WF8L57oCEfProfvsWhgsJlSsG0n4
	yvk34PcwuhQduFAqLBQeYABDyWVyTdIE3BHc2HQJ4I6y3AT3p59c+T2QYUrTbRWXAQ39
	Z81QChBn6HauyxswW4XCDiSu+gvLGZlNE+WF1ETDHAC9fDNVjiPhRQ33frt6OvQj7UGd
	prEhfG/lu9PyDE2J84ZYWCVUhIqfXR5Y+3/hP1tsd8NFSbHkG1eMZ9UMu1ne6mH6Y2gm
	O0OA==
X-Received: by 10.194.57.239 with SMTP id l15mr3565559wjq.40.1391545637511;
	Tue, 04 Feb 2014 12:27:17 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Tue, 4 Feb 2014 12:26:57 -0800 (PST)
In-Reply-To: <CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 20:26:57 +0000
Message-ID: <CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
To: rshriram@cs.ubc.ca
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks to you both for the feedback. I will test xl migrate/remus with DRBD
and post back!

As for offline migration, I tried xl save + a simple scp, which seems to do
the trick; that works for non-shared-storage migration as long as the
domain is stopped (which is what save does anyway).


On Tue, Feb 4, 2014 at 7:19 PM, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
> On Tue, Feb 4, 2014 at 7:35 AM, Ian Campbell <Ian.Campbell@citrix.com>
> wrote:
>>
>> > >> I have a host where what I need is to move it from host1 to host2, or
>> > >> reverse if needed; there's no problem stopping it first, but I guess
>> > >> this is not what the "migrate" command is used for!
>
> If you just need to migrate an entire VM (memory & disk) from one host to
> another, without using shared storage, then there are several alternatives.
> One way is to use DRBD-based disk backends (yes, they can run on top of
> LVM) and use protocol C (synchronous disk replication) to keep the disks
> on both hosts synchronized.
>
> Since the disk is always synchronized, you can just use the xl migrate
> command to migrate the memory state from one host to another, with very
> little downtime.
>
> I would suggest reading up on the DRBD documentation regarding Xen live
> migration. IIRC, their current example is based on xend, but it applies
> equally well to xl, as xl also supports DRBD-based disk backends.
>
> shriram
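The setup Shriram describes can be sketched roughly as below. This is an illustrative fragment under assumptions: the resource name r0, the backing devices, the host names/addresses, and the drbd: disk prefix (which relies on DRBD's block-drbd hotplug script being installed) are placeholders, not details taken from this thread or verified against any particular DRBD version.

```
# /etc/drbd.d/r0.res -- hypothetical DRBD resource using protocol C
# (synchronous replication) so both hosts always hold identical disks
resource r0 {
    protocol C;
    on host1 {
        device    /dev/drbd0;
        disk      /dev/vg0/domu-disk;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on host2 {
        device    /dev/drbd0;
        disk      /dev/vg0/domu-disk;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}

# In the guest's xl config, point the disk at the DRBD resource:
#   disk = [ 'drbd:r0,xvda,w' ]
#
# With the disks kept synchronized, live migration is then just:
#   xl migrate guestname host2
```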

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 20:27:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 20:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAmaX-0006fv-8e; Tue, 04 Feb 2014 20:27:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAmaV-0006fq-NL
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 20:27:20 +0000
Received: from [193.109.254.147:27948] by server-11.bemta-14.messagelabs.com
	id 22/52-24604-62D41F25; Tue, 04 Feb 2014 20:27:18 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391545637!2011670!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10826 invoked from network); 4 Feb 2014 20:27:18 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 20:27:18 -0000
Received: by mail-wg0-f50.google.com with SMTP id l18so13166158wgh.17
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 12:27:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=/V3E7WSH7CUtvY+QeY+iMk0y3yBjliHGArWJZfNV//8=;
	b=0nkrENXZFqnKaBKlRgycgM5+8NUvMVFKewVtIwg1zBF4Ypy5j8O6BfzhdZ84OZM9Es
	SPx+pi07+EFFojXqSHthUfRP4d7zVdQ3x7HtZBo/WF8L57oCEfProfvsWhgsJlSsG0n4
	yvk34PcwuhQduFAqLBQeYABDyWVyTdIE3BHc2HQJ4I6y3AT3p59c+T2QYUrTbRWXAQ39
	Z81QChBn6HauyxswW4XCDiSu+gvLGZlNE+WF1ETDHAC9fDNVjiPhRQ33frt6OvQj7UGd
	prEhfG/lu9PyDE2J84ZYWCVUhIqfXR5Y+3/hP1tsd8NFSbHkG1eMZ9UMu1ne6mH6Y2gm
	O0OA==
X-Received: by 10.194.57.239 with SMTP id l15mr3565559wjq.40.1391545637511;
	Tue, 04 Feb 2014 12:27:17 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Tue, 4 Feb 2014 12:26:57 -0800 (PST)
In-Reply-To: <CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 20:26:57 +0000
Message-ID: <CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
To: rshriram@cs.ubc.ca
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks both for the feedback; I will test xl migrate/remus with DRBD
and post back!

As for offline migration, xl save plus a simple scp of the state file
seems to do the trick. That works for non-shared-storage migration as
long as the domain is stopped (which is what save does anyway).
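[Sketched out, with hypothetical domain name, paths, and target host; depending on how the domain was created, xl restore may also need the domain's config file:]

```
# on host1: save the domain's memory state (by default this also stops the domain)
xl save mydomain /var/tmp/mydomain.save
scp /var/tmp/mydomain.save host2:/var/tmp/

# the disk image must also be copied, or otherwise made available, on host2

# on host2: recreate the domain from the saved state
xl restore /var/tmp/mydomain.save
```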


On Tue, Feb 4, 2014 at 7:19 PM, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
> On Tue, Feb 4, 2014 at 7:35 AM, Ian Campbell <Ian.Campbell@citrix.com>
> wrote:
>>
>>
>> > >> I have a host where what I need is to move it from host1 to host2, or
>> > >> reverse if needed, there's no problem stopping it first, but I guess
>> > >> this is not what the "migrate" command is used for!
>> > >>
>
>
> If you just need to migrate an entire VM (memory & disk) from one host
> to another without using shared storage, then there are several
> alternatives. One way is to use DRBD-based disk backends (yes, they can
> run on top of LVM) and use protocol C (synchronous disk replication) to
> keep the disks at both hosts synchronized.
>
> Since the disk is always synchronized, you can just use the xl migrate
> command to migrate the memory state from one host to another, with very
> little downtime.
>
> I would suggest reading up on the DRBD documentation regarding Xen live
> migration. IIRC, their current example is based on xend, but it applies
> equally well to xl, as xl also supports DRBD-based disk backends.
>
> shriram
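[For reference, a minimal DRBD resource of the kind described above might look like the following; this is a hedged sketch in DRBD 8.x syntax, and the resource name, hostnames, backing volumes, and addresses are all hypothetical:]

```
resource vm0-disk {
  protocol C;                      # synchronous: a write completes only
                                   # once it has reached both hosts
  on host1 {
    device    /dev/drbd0;
    disk      /dev/vg0/vm0-disk;   # e.g. an LVM logical volume
    address   192.0.2.1:7788;
    meta-disk internal;
  }
  on host2 {
    device    /dev/drbd0;
    disk      /dev/vg0/vm0-disk;
    address   192.0.2.2:7788;
    meta-disk internal;
  }
}
```

[With both nodes connected and up to date, the guest's disk configuration can point at /dev/drbd0 on either host, so xl migrate only has to move the memory state.]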

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 21:15:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 21:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAnLB-000865-Dy; Tue, 04 Feb 2014 21:15:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <congxumail@gmail.com>) id 1WAnLA-000860-0X
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 21:15:32 +0000
Received: from [85.158.143.35:55691] by server-3.bemta-4.messagelabs.com id
	04/24-11539-37851F25; Tue, 04 Feb 2014 21:15:31 +0000
X-Env-Sender: congxumail@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391548528!3125712!1
X-Originating-IP: [209.85.160.44]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13644 invoked from network); 4 Feb 2014 21:15:30 -0000
Received: from mail-pb0-f44.google.com (HELO mail-pb0-f44.google.com)
	(209.85.160.44)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 21:15:30 -0000
Received: by mail-pb0-f44.google.com with SMTP id rq2so8998314pbb.3
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 13:15:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=VGa04jafJ5IKnzZ1yyWH2AOjgU4eSk11rPiYZeKzuVY=;
	b=utNgRjiJLvugaUXykDaQMJhikXQuju3gx5yjOlM2TCwJifrLc6qujfNasy2F6a6bPu
	k6g5a/rMZMZ1jEEPZ38n50B7rhBOAksEryDaQzB6qMlkE5SCjkrdmzFOSl3GdeMhFp73
	soOtbU0Iy+KMjIB82ytvxtikCEQJNTzpsCEdgbD/FjJOuMz9TdUnGKZGVC94cfQkMHou
	JWokTAWv4aZrNKDfjPMadBh2W63gvjTnCjIVhWS3RBObywmb5GJa3Qa+usZoCDFx9rIc
	wx+9lcsO1Pc9f9J+0D7C82sARD/QJSAeN6MweO+qWOotjgyw3Olb2R1W/K1kBCtwRA6f
	beQg==
MIME-Version: 1.0
X-Received: by 10.68.171.229 with SMTP id ax5mr46303956pbc.125.1391548528211; 
	Tue, 04 Feb 2014 13:15:28 -0800 (PST)
Received: by 10.68.190.97 with HTTP; Tue, 4 Feb 2014 13:15:28 -0800 (PST)
Date: Tue, 4 Feb 2014 16:15:28 -0500
Message-ID: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
From: xu cong <congxumail@gmail.com>
To: xen-devel@lists.xensource.com
Subject: [Xen-devel] Does blkback bypass Dom0's memory buffer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5700218981720152586=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5700218981720152586==
Content-Type: multipart/alternative; boundary=047d7bacb2e6b63b0a04f19b238e

--047d7bacb2e6b63b0a04f19b238e
Content-Type: text/plain; charset=ISO-8859-1

Does the blkback thread bypass Dom0's memory buffer? If I write in DomU
without O_DIRECT, the data is buffered in DomU's page cache and then
flushed to disk. Will it be buffered again in Dom0's memory? How about
netback? Thanks.

Regards,
Cong
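[To observe the O_DIRECT distinction the question draws from user space, a minimal sketch follows. It only demonstrates the flag inside one domain; whether blkback adds a second layer of caching in Dom0 generally depends on the backend, e.g. a raw phy device versus a file-backed disk.]

```shell
# A buffered write followed by an O_DIRECT write of the same size.
# O_DIRECT bypasses the writer's own page cache; some filesystems
# (e.g. tmpfs) do not support it, so fall back gracefully.
f=$(mktemp)

# Buffered write: data lands in the writer's page cache first.
dd if=/dev/zero of="$f" bs=4096 count=256 status=none

# O_DIRECT write: the page cache of the writing domain is bypassed.
if dd if=/dev/zero of="$f" bs=4096 count=256 oflag=direct status=none 2>/dev/null; then
    echo "O_DIRECT write succeeded"
else
    echo "O_DIRECT not supported on this filesystem"
fi
rm -f "$f"
```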

--047d7bacb2e6b63b0a04f19b238e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Does blkback thread bypass Dom0&#39;s memory buffer? If I =
write in DomU without O_Direct, the data will be buffered in DomU&#39;s mem=
ory cache then be flushed to disk. Will them be buffered in Dom0&#39;s memo=
ry again? How about the netback? Thanks.<div>
<br></div><div>Regards,</div><div>Cong</div></div>

--047d7bacb2e6b63b0a04f19b238e--


--===============5700218981720152586==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5700218981720152586==--


From xen-devel-bounces@lists.xen.org Tue Feb 04 21:29:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 21:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAnY9-0000AD-QK; Tue, 04 Feb 2014 21:28:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAnY8-0000A8-SD
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 21:28:57 +0000
Received: from [85.158.143.35:15116] by server-2.bemta-4.messagelabs.com id
	8C/8E-10891-89B51F25; Tue, 04 Feb 2014 21:28:56 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391549334!3141509!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4272 invoked from network); 4 Feb 2014 21:28:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 21:28:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,782,1384300800"; d="scan'208";a="97966246"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 21:28:53 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	16:28:52 -0500
Message-ID: <52F15B93.4000000@citrix.com>
Date: Tue, 4 Feb 2014 21:28:51 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>, Stefano Stabellini
	<stefano.stabellini@eu.citrix.com>, Julien Grall <julien.grall@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Subject: [Xen-devel] Xen on ARM: upstream kernel compile fails with defconfig
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I'm trying to do a default ARM build to verify that my grant mapping 
patches don't break the build. I'm using the latest net-next tree as a 
basis, and I'm trying to build it based on this howto:

http://wiki.xenproject.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/Allwinner

(Note: the actual CPU doesn't matter to me, I just want a default build 
to succeed. Also, net-next is only used because that was what I had 
checked out, and I assume it should work as well.)

I'm doing cross-compilation, so my make invocation looks like this:

PATH="/local/repo/arm/gcc-linaro-arm-linux-gnueabihf-4.8-2013.10_linux/bin/:$PATH" make -j8 O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage

For the config I've tried sunxi_defconfig (which actually doesn't have 
Xen enabled) and arndale_ubuntu_defconfig from Julien's repo:

http://wiki.xenproject.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/Arndale

However, in both cases the build fails quite early with the same error:

   CC      arch/arm/kernel/asm-offsets.s
In file included from /local/repo/linux-net-next/include/linux/cache.h:5:0,
                  from /local/repo/linux-net-next/include/linux/printk.h:8,
                  from /local/repo/linux-net-next/include/linux/kernel.h:13,
                  from /local/repo/linux-net-next/include/linux/sched.h:15,
                  from /local/repo/linux-net-next/arch/arm/kernel/asm-offsets.c:13:
/local/repo/linux-net-next/include/linux/prefetch.h: In function 'prefetch_range':
/local/repo/linux-net-next/arch/arm/include/asm/cache.h:7:25: error: 'CONFIG_ARM_L1_CACHE_SHIFT' undeclared (first use in this function)
  #define L1_CACHE_SHIFT  CONFIG_ARM_L1_CACHE_SHIFT

So far I've figured out that autoconf.h, where the CONFIG_ symbols are 
defined, is not included here. But I'm not that familiar with the kernel 
build system, so I'm a bit stuck. Probably I'm doing something trivially 
wrong; can someone help me?
Btw. on Julien's repo from the Arndale page I could compile, but that's 
too old for my patch to apply. My main goal is to compile an upstream 
kernel with Xen, and then check if my patch breaks it.
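[A hedged debugging suggestion; paths assume the out-of-tree output directory used above. The CONFIG_ symbols the error complains about come from the generated include/generated/autoconf.h, which only exists once a valid .config is present in the O= directory, so checking for both is a quick way to see whether the configuration step took effect there:]

```
# run from the source tree, against the O= output directory used above
grep CONFIG_ARM_L1_CACHE_SHIFT ../o-arm/.config
grep CONFIG_ARM_L1_CACHE_SHIFT ../o-arm/include/generated/autoconf.h

# if either is missing, regenerate the config in the same output
# directory, e.g. with the intended defconfig:
make O=../o-arm ARCH=arm sunxi_defconfig
```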

Regards,

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 21:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 21:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAncN-0000Kt-PK; Tue, 04 Feb 2014 21:33:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAncL-0000Ko-NK
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 21:33:17 +0000
Received: from [193.109.254.147:20855] by server-10.bemta-14.messagelabs.com
	id 44/56-10711-D9C51F25; Tue, 04 Feb 2014 21:33:17 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391549594!2022799!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2222 invoked from network); 4 Feb 2014 21:33:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 21:33:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,782,1384300800"; d="scan'208";a="97968399"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 21:32:57 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	16:32:56 -0500
Message-ID: <52F15C85.7050200@citrix.com>
Date: Tue, 4 Feb 2014 21:32:53 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Zoltan Kiss <zoltan.kiss@schaman.hu>
References: <52EAA31B.1090606@schaman.hu>
	<20140131185619.GB27553@zion.uk.xensource.com>
In-Reply-To: <20140131185619.GB27553@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	Michael Chan <mchan@broadcom.com>, e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/01/14 18:56, Wei Liu wrote:
> On Thu, Jan 30, 2014 at 07:08:11PM +0000, Zoltan Kiss wrote:
>> Hi,
>>
>> I've experienced some queue timeout problems, as mentioned in the
>> subject, with igb and bnx2 cards; I haven't seen them on other cards
>> so far. I'm using XenServer with a 3.10 Dom0 kernel (though igb was
>> already updated to the latest version), and there are Windows guests
>> sending data through these cards. I noticed these problems in XenRT
>> test runs. I know they usually mean a lost-interrupt problem or some
>> other hardware error, but in my case they started to appear more
>> often, and they are likely connected to my netback grant mapping
>> patches, which cause skbs with huge (~64KB) linear buffers to appear
>> more often.
>> The reason for that is an old problem in the ring protocol: originally
>> the maximum number of slots was tied to MAX_SKB_FRAGS, as every slot
>> ended up as a frag of the skb. When this value was changed, netback
>> had to cope with the situation by coalescing the packets into fewer
>> frags.
>> My patch series takes a different approach: the leftover slots (pages)
>> are assigned to the frags of a new skb, and that skb is stashed on the
>> frag_list of the first one. Then, before sending it off to the stack,
>> netback calls skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC,
>> __GFP_NOWARN), which basically creates a new skb and copies all the
>> data into it. As far as I understand, that puts everything into the
>> linear buffer, which can amount to 64KB at most. The original skb is
>> then freed, and the new one is sent to the stack.
>
> Just my two cents: if this is the case, you can try calling
> skb_copy_expand on every SKB netback receives, to manually create SKBs
> with ~64KB linear buffers, and see how it goes...

I've tried it, and it did break everything in a similar way, so that's a 
strong clue that the problem lies here. I've rewritten that part of my 
patches to make a smaller modification, based on Malcolm's idea: netback 
pulls the first frag into the linear buffer, then moves a frag from the 
frag_list skb into the first one. That seems to help, but so far I have 
only one relevant test result; I'm waiting for more.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 21:33:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 21:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAncN-0000Kt-PK; Tue, 04 Feb 2014 21:33:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WAncL-0000Ko-NK
	for xen-devel@lists.xenproject.org; Tue, 04 Feb 2014 21:33:17 +0000
Received: from [193.109.254.147:20855] by server-10.bemta-14.messagelabs.com
	id 44/56-10711-D9C51F25; Tue, 04 Feb 2014 21:33:17 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391549594!2022799!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2222 invoked from network); 4 Feb 2014 21:33:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 21:33:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,782,1384300800"; d="scan'208";a="97968399"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 04 Feb 2014 21:32:57 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Tue, 4 Feb 2014
	16:32:56 -0500
Message-ID: <52F15C85.7050200@citrix.com>
Date: Tue, 4 Feb 2014 21:32:53 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Zoltan Kiss <zoltan.kiss@schaman.hu>
References: <52EAA31B.1090606@schaman.hu>
	<20140131185619.GB27553@zion.uk.xensource.com>
In-Reply-To: <20140131185619.GB27553@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	Michael Chan <mchan@broadcom.com>, e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 31/01/14 18:56, Wei Liu wrote:
> On Thu, Jan 30, 2014 at 07:08:11PM +0000, Zoltan Kiss wrote:
>> Hi,
>>
>> I've experienced some queue timeout problems mentioned in the
>> subject with igb and bnx2 cards. I haven't seen them on other cards
>> so far. I'm using XenServer with 3.10 Dom0 kernel (however igb were
>> already updated to latest version), and there are Windows guests
>> sending data through these cards. I noticed these problems in XenRT
>> test runs, and I know that they usually mean some lost interrupt
>> problem or other hardware error, but in my case they started to
>> appear more often, and they are likely connected to my netback grant
>> mapping patches. These patches causing skb's with huge (~64kb)
>> linear buffers to appear more often.
>> The reason for that is an old problem in the ring protocol:
>> originally the maximum amount of slots were linked to MAX_SKB_FRAGS,
>> as every slot ended up as a frag of the skb. When this value were
>> changed, netback had to cope with the situation by coalescing the
>> packets into fewer frags.
>> My patch series takes a different approach: the leftover slots
>> (pages) are assigned to a new skb's frags, and that skb is stashed
>> on the frag_list of the first one. Then, before sending it off to
>> the stack, it calls skb = skb_copy_expand(skb, 0, 0,
>> GFP_ATOMIC | __GFP_NOWARN), which basically creates a new skb and
>> copies all the data into it. As far as I understand, it puts
>> everything into the linear buffer, which can amount to 64KB at most.
>> The original skb is then freed, and this new one is sent to the
>> stack.
>
> Just my two cents, if it is this case, you can try to call
> skb_copy_expand on every SKB netback receives to manually create SKBs
> with ~64KB linear buffer to see how it goes...

I've tried it, and it did break everything in a similar way, so that's a 
strong clue that the problem lies here. I've rewritten that part of my 
patches to do less modification, based on Malcolm's idea: netback pulls 
the first frag into the linear buffer, then moves a frag from the 
frag_list skb into the first one. That seems to help, but so far I have 
only one relevant test result; I'm waiting for more.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 22:01:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 22:01:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAo3E-0000xD-BH; Tue, 04 Feb 2014 22:01:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAo3A-0000x8-Kn
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 22:01:00 +0000
Received: from [85.158.139.211:43157] by server-2.bemta-5.messagelabs.com id
	9C/16-23037-B1361F25; Tue, 04 Feb 2014 22:00:59 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391551258!1673516!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25171 invoked from network); 4 Feb 2014 22:00:59 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 22:00:59 -0000
Received: by mail-wg0-f49.google.com with SMTP id a1so13252759wgh.28
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 14:00:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=HH4lpWiYATVlz/TjiPKCmZlr9KatFUCUfyH74kkE7Po=;
	b=NTa5FaqkMD27yf/Ou/E1zL3K65gxhRQK5s+3q6sJRyAqX61ZX3udSUTuR5Hsp2RPfX
	wua+vrFGCDZ/xweOSea5SJokVYMgy+r5qwOXDxM3fqZ91+8Cm61Hf7MPtEgsPjmy2mY0
	S0JbQRxRxPXyP7LbHMeFHCaDa6i2fOIFNEJq/YiOdZukU4tgsIKJ9a5BZEmY8bH5AdKE
	KBZOooJqhafYeRulC/rEYQXwG2bGTRdUsIoQB8CDgXfgBpYJynbF1F6qBXmnnDWr5tdo
	FRqMxmc+2uuCgrtXluRlSSw+LfeJ2UotDuaWG01qaIGL6L21b0pjGs0o04vPPn89hb+H
	F0Jw==
X-Received: by 10.195.13.234 with SMTP id fb10mr3738584wjd.50.1391551258580;
	Tue, 04 Feb 2014 14:00:58 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Tue, 4 Feb 2014 14:00:38 -0800 (PST)
In-Reply-To: <CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 22:00:38 +0000
Message-ID: <CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
To: rshriram@cs.ubc.ca
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I noticed /etc/xen/scripts doesn't include the 'block-drbd' script;
does this come with Xen or the drbd package?

Xen 4.3.1 was compiled from source, but drbd was installed via apt-get
on Debian (v8.3).

Thanks

On Tue, Feb 4, 2014 at 8:26 PM, Miguel Clara <miguelmclara@gmail.com> wrote:
> Thanks both for the feedback, I will test xl migrate/remus with DRBD
> and post back!
>
> As for offline migration, xl save plus a simple scp seems to do the
> trick, which works for non-shared-storage migration as long as the
> domain is stopped (which is what save does anyway).
>
>
> On Tue, Feb 4, 2014 at 7:19 PM, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
>> On Tue, Feb 4, 2014 at 7:35 AM, Ian Campbell <Ian.Campbell@citrix.com>
>> wrote:
>>>
>>>
>>> > >> I have a host where what I need is to move it from host1 to host2, or
>>> > >> reverse if needed, there's no problem stopping it first, but I guess
>>> > >> this is not what the "migrate" command is used for!
>>> > >>
>>
>>
>> If you just need to migrate an entire VM (memory & disk) from one
>> host to another without using shared storage, then there are several
>> alternatives. One way is to use DRBD-based disk backends (yes, they
>> can run on top of LVM) and use protocol C (synchronous disk
>> replication) to keep the disks at both hosts synchronized.
>>
>> Since the disks are always synchronized, you can just use the xl
>> migrate command to migrate the memory state from one host to the
>> other, with very little downtime.
>>
>> I would suggest reading up on the DRBD documentation regarding Xen
>> live migration. IIRC, their current example is based on xend, but it
>> applies equally well to xl, as xl also supports drbd-based disk
>> backends.
>>
>> shriram
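[For reference, a minimal sketch of the DRBD setup described above. The resource name, hostnames, devices and addresses below are placeholders, not taken from this thread.]

```
# /etc/drbd.d/vm1.res -- hypothetical resource definition
resource vm1 {
  protocol C;              # synchronous replication: a write completes
                           # only after both hosts have it on disk
  on host1 {
    device    /dev/drbd1;
    disk      /dev/vg0/vm1-disk;   # DRBD can sit on top of an LVM volume
    address   192.168.0.1:7789;
    meta-disk internal;
  }
  on host2 {
    device    /dev/drbd1;
    disk      /dev/vg0/vm1-disk;
    address   192.168.0.2:7789;
    meta-disk internal;
  }
}

# The guest config then points its disk at the DRBD resource (handled
# by the block-drbd hotplug script), e.g.:
#   disk = [ 'drbd:vm1,xvda,w' ]
# With the disks kept in sync, migrating the memory state is just:
#   xl migrate vm1 host2
```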

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 22:03:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 22:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAo5j-00013g-Kr; Tue, 04 Feb 2014 22:03:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WAo5i-00013Y-Ai
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 22:03:38 +0000
Received: from [193.109.254.147:13619] by server-3.bemta-14.messagelabs.com id
	72/D8-00432-9B361F25; Tue, 04 Feb 2014 22:03:37 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391551415!2015065!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11728 invoked from network); 4 Feb 2014 22:03:36 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 4 Feb 2014 22:03:36 -0000
Received: from mail-ig0-f170.google.com (mail-ig0-f170.google.com
	[209.85.213.170]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s14M3YDQ002790
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <xen-devel@lists.xensource.com>; Tue, 4 Feb 2014 14:03:34 -0800
Received: by mail-ig0-f170.google.com with SMTP id m12so11354320iga.1
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 14:03:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=G99rZxmzWeNnPuziggsmNYuHmlroErVGSpLD/wGZhw8=;
	b=buO332e/wXO5jHzS1Va68sH5JYFZdACbNzz8A0t+XSsdJKWp5jd1LtL+gz5o/uTZeF
	oyWnHh5ywCVwgkhrZp/blKkXs/PNyHEBIomohywCqbAwtqe+rD9wkUgob5XBoYjDT2gX
	cnHoVRNSqwOXx/3hxmsqKeuw6Q7iwLUmFcC+ZC4p7YYgGccci1/otVdNWXj16bTJoycx
	rs8pJ4si8Aj/oM//wiztJF4TpWDdAJSF9NOJHUOIdE7c2kItqYNUyM0mtSDrJpwQ8BnD
	A7s9GcMumU6aijt+uA3ba/Bj1/uP9nV1CyQlp1oMb4F4nU7nFDDbNVWTgaRTAP+zn3DK
	johw==
X-Received: by 10.42.228.65 with SMTP id jd1mr2966922icb.62.1391551413049;
	Tue, 04 Feb 2014 14:03:33 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Tue, 4 Feb 2014 14:02:52 -0800 (PST)
In-Reply-To: <CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Tue, 4 Feb 2014 14:02:52 -0800
Message-ID: <CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
To: Miguel Clara <miguelmclara@gmail.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3642679683120983773=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3642679683120983773==
Content-Type: multipart/alternative; boundary=001a1132e4cca9636a04f19bcfd0

--001a1132e4cca9636a04f19bcfd0
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com> wrote:

> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
> does this come with xen or the drbd package?
>
> Xen 4.3.1 was compiled from source, but drbd was installed via apt-get
> on Debian (v8.3)
>
>
It comes with the drbd package AFAIK

--001a1132e4cca9636a04f19bcfd0
Content-Type: text/html; charset=ISO-8859-1

<div dir="ltr">On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <span dir="ltr">&lt;<a href="mailto:miguelmclara@gmail.com" target="_blank">miguelmclara@gmail.com</a>&gt;</span> wrote:<br><div class="gmail_extra"><div class="gmail_quote">

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I noticed /etc/xen/scripts doesn&#39;t include the &#39;block-drbd&#39; script,<br>
does this come with xen or the drbd package?<br>
<br>
Xen 4.3.1 was compiled from source, but drdb is installed from apt-get<br>
on debian (v 8.3)<br>
<br></blockquote><div><br></div><div>It comes with the drbd package AFAIK</div><div><br></div></div></div></div>

--001a1132e4cca9636a04f19bcfd0--


--===============3642679683120983773==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3642679683120983773==--


From xen-devel-bounces@lists.xen.org Tue Feb 04 22:27:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 22:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAoSb-0001oo-5u; Tue, 04 Feb 2014 22:27:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAoSZ-0001oj-UO
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 22:27:16 +0000
Received: from [85.158.137.68:16579] by server-6.bemta-3.messagelabs.com id
	EA/4B-09180-34961F25; Tue, 04 Feb 2014 22:27:15 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391552834!13414910!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16432 invoked from network); 4 Feb 2014 22:27:14 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 22:27:14 -0000
Received: by mail-we0-f181.google.com with SMTP id w61so4667377wes.26
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 14:27:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=5EIW3gwzgUCigrmyE5Dgk/YyzWpk7zRuAdHfKcIw9X8=;
	b=WmWKiw9opTzgg+UC/SQdyqWeSFZkqynq3et5oKqVK1qy5y3/brv3527QA5dEilX+vU
	12W1Hw+19k7tALZaGuImHvE2z973x50+im3QheXrKkJe9mMtalEaBtJ/sY2CUsz9DuG0
	gh+UWvUMc8PUsd5GBoWXDYCgfpZ/BxxmrWZrurMtrQnu2epsKfm7LE1xiUv8h4HJGtpP
	9DhMp0LMmGKJIN7zAEUhni2Hk3p9xTGHDbbS6UL/TSx7rCMJ4QUg/X+NKQjJz4X9PUjF
	x2oWZiUNYAVwadJgcFvVBE/unbwVQimr1myZSFJQiavhPSaphZ2lnw+tg0frEfk2w6pH
	PMsg==
X-Received: by 10.194.63.228 with SMTP id j4mr11072829wjs.34.1391552834311;
	Tue, 04 Feb 2014 14:27:14 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Tue, 4 Feb 2014 14:26:54 -0800 (PST)
In-Reply-To: <CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 22:26:54 +0000
Message-ID: <CADGo8CWNZtfjR3F_93JVCr+g4cf=R3kH2_7HLWasqCiKBWVtZQ@mail.gmail.com>
To: rshriram@cs.ubc.ca
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

dpkg-query -L drbd8-utils | grep block
/etc/xen/scripts/block-drbd

Seems true, but it's not there... Reinstalled, same thing! Weird; I'll
have to report it on drbd-user@lists.linbit.com.

On Tue, Feb 4, 2014 at 10:02 PM, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
> On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com> wrote:
>>
>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
>> does this come with xen or the drbd package?
>>
>> Xen 4.3.1 was compiled from source, but drbd was installed via apt-get
>> on Debian (v8.3)
>>
>
> It comes with the drbd package AFAIK
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 22:27:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 22:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAoSb-0001oo-5u; Tue, 04 Feb 2014 22:27:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAoSZ-0001oj-UO
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 22:27:16 +0000
Received: from [85.158.137.68:16579] by server-6.bemta-3.messagelabs.com id
	EA/4B-09180-34961F25; Tue, 04 Feb 2014 22:27:15 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391552834!13414910!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16432 invoked from network); 4 Feb 2014 22:27:14 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 22:27:14 -0000
Received: by mail-we0-f181.google.com with SMTP id w61so4667377wes.26
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 14:27:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=5EIW3gwzgUCigrmyE5Dgk/YyzWpk7zRuAdHfKcIw9X8=;
	b=WmWKiw9opTzgg+UC/SQdyqWeSFZkqynq3et5oKqVK1qy5y3/brv3527QA5dEilX+vU
	12W1Hw+19k7tALZaGuImHvE2z973x50+im3QheXrKkJe9mMtalEaBtJ/sY2CUsz9DuG0
	gh+UWvUMc8PUsd5GBoWXDYCgfpZ/BxxmrWZrurMtrQnu2epsKfm7LE1xiUv8h4HJGtpP
	9DhMp0LMmGKJIN7zAEUhni2Hk3p9xTGHDbbS6UL/TSx7rCMJ4QUg/X+NKQjJz4X9PUjF
	x2oWZiUNYAVwadJgcFvVBE/unbwVQimr1myZSFJQiavhPSaphZ2lnw+tg0frEfk2w6pH
	PMsg==
X-Received: by 10.194.63.228 with SMTP id j4mr11072829wjs.34.1391552834311;
	Tue, 04 Feb 2014 14:27:14 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Tue, 4 Feb 2014 14:26:54 -0800 (PST)
In-Reply-To: <CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 4 Feb 2014 22:26:54 +0000
Message-ID: <CADGo8CWNZtfjR3F_93JVCr+g4cf=R3kH2_7HLWasqCiKBWVtZQ@mail.gmail.com>
To: rshriram@cs.ubc.ca
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

dpkg-query -L drbd8-utils |grep block
/etc/xen/scripts/block-drbd



Seems true, but it's not there... I re-installed, same thing! Weird; I'll
have to report it on drbd-user@lists.linbit.com.
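The missing-script check above can be done mechanically; a sketch, where the path is the default hotplug-script directory for a source-built Xen and the reinstall commands are assumptions from this thread, not verified against the Debian packaging:

```shell
script=/etc/xen/scripts/block-drbd   # assumed default path for a source-built Xen
if [ -x "$script" ]; then
    echo "block-drbd present"
else
    echo "block-drbd missing"
    # possible fixes (assumptions, not verified):
    #   apt-get install --reinstall drbd8-utils
    #   or copy scripts/block-drbd from the drbd source tree into /etc/xen/scripts
fi
```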

On Tue, Feb 4, 2014 at 10:02 PM, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
> On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com> wrote:
>>
>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script;
>> does this come with Xen or with the drbd package?
>>
>> Xen 4.3.1 was compiled from source, but drbd is installed from apt-get
>> on Debian (v 8.3)
>>
>
> It comes with the drbd package AFAIK
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 23:18:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 23:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WApG5-00032H-9M; Tue, 04 Feb 2014 23:18:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WApG3-00032C-Ss
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 23:18:24 +0000
Received: from [85.158.139.211:37292] by server-12.bemta-5.messagelabs.com id
	EC/63-15415-F3571F25; Tue, 04 Feb 2014 23:18:23 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391555902!1689213!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20160 invoked from network); 4 Feb 2014 23:18:22 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 23:18:22 -0000
Received: by mail-wg0-f46.google.com with SMTP id x12so13330779wgg.13
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 15:18:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:content-type:content-transfer-encoding;
	bh=q6n3TuAwNGYmUkySOhI86VzlW7ejPUrOREnsAvjHO/g=;
	b=LyHu4858RCLWEvFKKnxjb0YaMtwUvPeTkWX9lIFTjQz3uJVcLGo+eENMNZ8wPeS2KZ
	nwExl0nq9JCP/mwf/I9+QWbRSwzD4zlhAr2GC0xoO3GOP9zThYRk7+bvoUFA+NkuFJF2
	/+PRETYQZSuFfi/2lp+Ufyjb/eU/6jLGAdug2zZlfN5wqZfnGdcUjmgw9SMudqpUphO3
	TLVoiWOMxKLZ6XKd9sL+WwfeSybJAJ4N4DLE7BaO5GLueLs3bDQdjreA+UT89BRIpTav
	bplDAoiqiRik6TICite7B9GjMNfgMtWcYZg+YLyt8HkbFkgFTvdYed6yfzh+OXizC9qC
	jeQg==
X-Gm-Message-State: ALoCoQl+962Pi6JeOALq9aIAMeQKezcKLIItch27IfUQ1J496Z9+6rRE3TX4edS9JfWZjPf+OjYq
X-Received: by 10.180.210.171 with SMTP id mv11mr124052wic.44.1391555901928;
	Tue, 04 Feb 2014 15:18:21 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	xt1sm56487897wjb.17.2014.02.04.15.18.20 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 15:18:21 -0800 (PST)
Message-ID: <52F1753C.3010508@linaro.org>
Date: Tue, 04 Feb 2014 23:18:20 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: david.vrabel@citrix.com
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello David,

I'm currently trying to run Linux 3.14-rc1 as a guest on Xen on ARM (Xen 4.4-rc3).

I'm hitting multiple issues with your event channel patch series, on both the Linux and the Xen side.
I also tried Linux 3.14-rc1 as dom0, but that was worse (unable to create guests).

I'm using a simple guest config:
kernel="/root/zImage"
memory=32
name="test"
vcpus=1
autoballon="off"
extra="console=hvc0"
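On the dom0, such a guest would be created roughly as follows (a sketch: the config is copied verbatim from above, including the `autoballon` spelling, and `xl` itself is not executed here):

```shell
# Write the guest config from the mail to a file.
cat > /tmp/test.cfg <<'EOF'
kernel="/root/zImage"
memory=32
name="test"
vcpus=1
autoballon="off"
extra="console=hvc0"
EOF
grep -c '=' /tmp/test.cfg        # quick sanity check: 6 option lines
# on the dom0 one would then run:  xl create -c /tmp/test.cfg
```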

If everything were OK, I would expect Linux to boot and then report that it is unable to find the root filesystem.
But here, Linux is stuck.

On the Linux side, after bisecting, I found that the offending commit is:
    xen/events: remove unnecessary init_evtchn_cpu_bindings()
    
    Because the guest-side binding of an event to a VCPU (i.e., setting
    the local per-cpu masks) is always explicitly done after an event
    channel is bound to a port, there is no need to initialize all
    possible events as bound to VCPU 0 at start of day or after a resume.
    
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

With this patch applied, the function __xen_evtchn_do_upcall is never able
to find any pending event (pendings_bits == 0 every time).
It seems the second part of init_evtchn_cpu_bindings is necessary on ARM.
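The bisection mentioned above can be driven automatically with `git bisect run`; a sketch on a throwaway repository (the real run would use the Linux tree, with a guest boot test as the predicate):

```shell
set -e
# Build a demo history of 5 commits where commit 4 introduces a "regression".
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email bisect@example.com
git config user.name bisect
git config advice.detachedHead false
for i in 1 2 3 4 5; do
    echo "$i" > state
    if [ "$i" -ge 4 ]; then echo broken >> state; fi   # commit 4 introduces the bug
    git add state
    git commit -qm "commit $i"
done
git bisect start HEAD HEAD~4 >/dev/null     # bad = tip, good = oldest commit
# predicate: exit 0 (good) when the tree does not contain the regression
git bisect run sh -c '! grep -q broken state' >/dev/null 2>&1
git show -s --format='first bad: %s' refs/bisect/bad
```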

Now, if I use Linux 3.14-rc1 as a guest and try to destroy the domain,
I get the following Xen trace:

(XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
(XEN) Xen BUG at mm.c:334
(XEN) CPU1: Unexpected Trap: Undefined Instruction
(XEN) ----[ Xen-4.4-rc2  arm32  debug=y  Tainted:    C ]----
(XEN) CPU:    1
(XEN) PC:     0023f7d0 __bug+0x28/0x44
(XEN) CPSR:   2000015a MODE:Hypervisor
(XEN)      R0: 002646dc R1: 00000003 R2: 3fd21d80 R3: 00000fff
(XEN)      R4: 002612b4 R5: 0000014e R6: 00000c00 R7: 00000000
(XEN)      R8: 4007f080 R9: 9ed7e000 R10:7e9ed6e8 R11:7ffdfd64 R12:00000004
(XEN) HYP: SP: 7ffdfd5c LR: 0023f7d0
(XEN) 
(XEN)   VTCR_EL2: 80002558
(XEN)  VTTBR_EL2: 00010002f9ffc000
(XEN) 
(XEN)  SCTLR_EL2: 30cd187f
(XEN)    HCR_EL2: 0000000000282835
(XEN)  TTBR0_EL2: 00000000be016000
(XEN) 
(XEN)    ESR_EL2: 00000000
(XEN)  HPFAR_EL2: 0000000000fff110
(XEN)      HDFAR: a0800f00
(XEN)      HIFAR: 00000000
(XEN) 
(XEN) Xen stack trace from sp=7ffdfd5c:
(XEN)    00000001 7ffdfd74 00247d6c 40076000 10011000 7ffdfd84 0020b17c 40076000
(XEN)    4007f000 7ffdfd94 0020b1d0 40025b70 40076000 7ffdfda4 0020bd3c 4007f000
(XEN)    00000000 7ffdfdc4 0020b024 00000000 8000da84 4007f000 76f9a004 7ffdff58
(XEN)    00000005 7ffdfddc 00207f80 00000001 8000da84 4007f000 76f9a004 7ffdfeec
(XEN)    00206050 00002002 00000000 00002100 00000000 00000000 00000000 00000000
(XEN)    00000000 00000000 00000000 000be077 00000000 2e3022e0 00000000 00000001
(XEN)    00000000 00000000 238c3fd2 604c3b53 fe4aec89 e4988389 00000000 00000002
(XEN)    00000009 76ef0003 76fb7680 76ecf000 00000000 7e9ed7cc 00000001 76fb3140
(XEN)    00000001 00000005 00000000 00036718 76df3018 0003e740 00000001 7e9ed79c
(XEN)    76fb2000 76f20000 0005e770 76f23c70 76fb24c0 00000000 76c00740 7e9ed80c
(XEN)    76fa5857 00000000 00000001 00000005 00000000 7e9ed7bc 76efe04c 00000001
(XEN)    00035830 00035030 00000003 7ffdfedc 8000da84 00000ea1 00000005 7ffdff58
(XEN)    00000005 9ed7e000 7e9ed6e8 7ffdff54 0024cee0 76fb7578 0022c540 0022c334
(XEN)    00000001 002ae000 002b1ff0 002e7614 400238d8 0000000d 00000000 7ffdff3c
(XEN)    7e9ed594 9ecad680 00000005 00305000 00000005 9ed7e000 76fb7578 9ecad680
(XEN)    00000005 00305000 00000005 9ed7e000 7e9ed6e8 7ffdff58 0024f6d0 76f9a004
(XEN)    76f23c70 76fb7578 76fb3140 76fb7578 9ecad680 00000005 00305000 00000005
(XEN)    9ed7e000 7e9ed6e8 9f4172d0 00000024 ffffffff 76f141e4 8000da84 60000013
(XEN)    00000000 7e9ed6ac 80551900 80011cc0 9ed7feac 801f6e9c 8055190c 80011fc0
(XEN)    80551918 80011ea0 00000000 00000000 00000000 00000000 00000000 00000000
(XEN) Xen call trace:
(XEN)    [<0023f7d0>] __bug+0x28/0x44 (PC)
(XEN)    [<0023f7d0>] __bug+0x28/0x44 (LR)
(XEN)    [<00247d6c>] domain_page_map_to_mfn+0x50/0xb4
(XEN)    [<0020b17c>] unmap_guest_page+0x20/0x54
(XEN)    [<0020b1d0>] cleanup_control_block+0x20/0x34
(XEN)    [<0020bd3c>] evtchn_fifo_destroy+0x2c/0x6c
(XEN)    [<0020b024>] evtchn_destroy+0x1a8/0x1b0
(XEN)    [<00207f80>] domain_kill+0x60/0x128
(XEN)    [<00206050>] do_domctl+0xa7c/0x1104
(XEN)    [<0024cee0>] do_trap_hypervisor+0xad8/0xd78
(XEN)    [<0024f6d0>] return_from_trap+0/0x4
(XEN) 

I will try to provide more details on the Xen bug tomorrow.

Sincerely yours,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 23:35:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 23:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WApWm-0003ZU-2Z; Tue, 04 Feb 2014 23:35:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WApWl-0003ZP-6Y
	for xen-devel@lists.xen.org; Tue, 04 Feb 2014 23:35:39 +0000
Received: from [85.158.143.35:45538] by server-1.bemta-4.messagelabs.com id
	28/16-31661-A4971F25; Tue, 04 Feb 2014 23:35:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391556937!3167309!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9262 invoked from network); 4 Feb 2014 23:35:37 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 23:35:37 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm4so66008wib.14
	for <xen-devel@lists.xen.org>; Tue, 04 Feb 2014 15:35:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=zrgmTQggQvMcmhrkhP0M0KIyYb/5zO0W9P92fqUDdoM=;
	b=IgBzFaSyccwvbyXJ3iI+JCt21NyP6PueQp58ikgNEKsQb7etFFTbWhtuVbdv+My5O0
	dXm6nDLlvegx0e1sv+R+AoDNy13b00VoKy7FDDDrEgFZcK83bQonYrqtTBteJlpVGAbp
	RQ1HG6kDPESyUlWIhsd2arlZizOIdTDhU1Jw9oS4RN0bhrn9Zl0ZkV7u7Y9dGm3IWTtV
	pysFwPoaAMAEnombV2jywnGFppoT4F8gNJcFABBnchc/FanzyHNPuS4MCzfMDY53HRgz
	rlzuzwYvne84foSQIfB7Oaum3dRFFdrMPXAzPNJr1F7+gdT/+NNVyqJQK/jjfmfnka07
	IEEg==
X-Gm-Message-State: ALoCoQlwJ43C02mRxUHuAhi/lxcLxCnzSzOUpdaFj97W0urS/Qho0lZsc/IjrC2KlDMIls/uVJYg
X-Received: by 10.194.85.75 with SMTP id f11mr3985909wjz.47.1391556937347;
	Tue, 04 Feb 2014 15:35:37 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id e18sm40251740wic.4.2014.02.04.15.35.36
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 15:35:36 -0800 (PST)
Message-ID: <52F17947.3040606@linaro.org>
Date: Tue, 04 Feb 2014 23:35:35 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <52F15B93.4000000@citrix.com>
In-Reply-To: <52F15B93.4000000@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: upstream kernel compile fails with
	defconfig
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(Add Ian Campbell for sunxi bits)

Hello Zoltan,

On 04/02/14 21:28, Zoltan Kiss wrote:
> However, in both cases the build fails quite early with the same error:
>
>    CC      arch/arm/kernel/asm-offsets.s
> In file included from /local/repo/linux-net-next/include/linux/cache.h:5:0,
>                  from /local/repo/linux-net-next/include/linux/printk.h:8,
>                  from /local/repo/linux-net-next/include/linux/kernel.h:13,
>                  from /local/repo/linux-net-next/include/linux/sched.h:15,
>                  from /local/repo/linux-net-next/arch/arm/kernel/asm-offsets.c:13:
> /local/repo/linux-net-next/include/linux/prefetch.h: In function 'prefetch_range':
> /local/repo/linux-net-next/arch/arm/include/asm/cache.h:7:25: error: 'CONFIG_ARM_L1_CACHE_SHIFT' undeclared (first use in this function)
>   #define L1_CACHE_SHIFT  CONFIG_ARM_L1_CACHE_SHIFT

I can't reproduce that error with net-next + your v7. Instead I get the
following error:

drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
drivers/xen/grant-table.c:1047:3: error: implicit declaration of function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
   mfn = get_phys_to_machine(page_to_pfn(pages[i]));
   ^

I'm using this config for Linux: http://xenbits.xen.org/people/julieng/config-midway
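Before building with a downloaded config like that, it can be worth checking that the Xen options are enabled; a hypothetical sketch (the wget URL comes from the mail above; the fragment written below is a stand-in, since nothing is actually downloaded here):

```shell
# wget http://xenbits.xen.org/people/julieng/config-midway -O /tmp/config-midway
# Stand-in fragment for the downloaded config:
cat > /tmp/config-midway <<'EOF'
CONFIG_XEN=y
CONFIG_ARM_L1_CACHE_SHIFT=6
EOF
grep -q '^CONFIG_XEN=y' /tmp/config-midway && echo "Xen guest support enabled"
# then e.g.:  cp /tmp/config-midway .config && make ARCH=arm olddefconfig && make ARCH=arm zImage
```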

> So far I figured out that autoconf.h, where those CONFIG_ symbols are
> defined, is not included here. But I'm not that familiar with the kernel
> build system, so I'm a bit stuck here. Probably I'm doing something
> trivially wrong; can someone help me?
> Btw. on Julien's repo from the Arndale page I could compile, but that's
> too old for my patch to apply. My main goal is to compile an upstream
> kernel with Xen, and then check if my patch breaks it.

The kernel repo for the Arndale on the wiki page is completely out of date.
I will update the wiki page to point directly at the Linaro tree.

Sincerely yours,


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 04 23:44:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Feb 2014 23:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WApet-0003wK-94; Tue, 04 Feb 2014 23:44:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WApeq-0003wF-Ts
	for xen-devel@lists.xensource.com; Tue, 04 Feb 2014 23:44:01 +0000
Received: from [193.109.254.147:37499] by server-16.bemta-14.messagelabs.com
	id 1F/0B-21945-04B71F25; Tue, 04 Feb 2014 23:44:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391557437!2034610!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17021 invoked from network); 4 Feb 2014 23:43:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	4 Feb 2014 23:43:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,782,1384300800"; d="scan'208";a="99913382"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 04 Feb 2014 23:43:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 4 Feb 2014 18:43:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WApem-0007ju-Jw;
	Tue, 04 Feb 2014 23:43:56 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WApem-0004dG-BV;
	Tue, 04 Feb 2014 23:43:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24727-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 4 Feb 2014 23:43:56 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24727: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24727 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24727/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 24723

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-amd64  3 host-install(3)        broken pass in 24724
 test-amd64-i386-pv            3 host-install(3)           broken pass in 24724
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)  broken in 24724 pass in 24727
 test-amd64-i386-xl-qemuu-win7-amd64 9 guest-localmigrate fail in 24724 pass in 24727
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 24724 pass in 24727

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-i386  3 host-install(3)            broken like 24719
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)     broken in 24724 like 24722

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  af172d655c3900822d1f710ac13ee38ee9d482d2
baseline version:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit af172d655c3900822d1f710ac13ee38ee9d482d2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 4 09:22:12 2014 +0100

    x86/domctl: don't ignore errors from vmce_restore_vcpu()
    
    What started out as a simple cleanup patch (eliminating the redundant
    check of domctl->cmd before setting "copyback", which as a result
    turned the "ext_vcpucontext_out" label useless) revealed a bug in the
    handling of XEN_DOMCTL_set_ext_vcpucontext.
    
    Fix this, retaining the cleanup, and at once dropping a stale comment
    and an accompanying formatting issue.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 00:10:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 00:10:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAq3y-000578-SU; Wed, 05 Feb 2014 00:09:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liuw@liuw.name>) id 1WAq3x-000573-Rk
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 00:09:58 +0000
Received: from [85.158.143.35:43691] by server-3.bemta-4.messagelabs.com id
	F7/54-11539-55181F25; Wed, 05 Feb 2014 00:09:57 +0000
X-Env-Sender: liuw@liuw.name
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391558996!3161862!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3979 invoked from network); 5 Feb 2014 00:09:56 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 00:09:56 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so83973wib.6
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 16:09:56 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=WDjn3cFutr2f/5QZhOQB8sDJsKWwpWQN+1RXNgihFzE=;
	b=dOURX0X8lVnxLC9AD+L8N9ALGyl1YinMQrWKtv97kcaign1feZshVn5+IRvuQR4Bl0
	8G5rJq9LbYA7pa82sA2BJOjt6TdWw2lX8gL5G90xEQNeGlvYJ6uFpEc63skTn6jrqQvs
	u3JqlCG5vvPkHPuqh03goChadSiKtZJEESzZkhvaEln9TZWK2ZdYrzyA3dsw0McQUqto
	xniP9rC42Phxpjc9nnAjfm1vZ3JsMj9WhvmouspCBWz28GivZ+zXEFZwlxZOADbbqRB5
	wTuoobV6MNGoQQ5Lz3huYFhCp7k7BF91W20GIYFUumO6bod1l6d3vLclb5YN8TcY9RaR
	6Ncg==
X-Gm-Message-State: ALoCoQmahiRSwwwKLukUYT82usxb70SLGFp9ANSU48xWF/F0ssZOj7XAXJyS8upMBWSLeqm6w9TW
X-Received: by 10.194.60.103 with SMTP id g7mr4102279wjr.37.1391558996094;
	Tue, 04 Feb 2014 16:09:56 -0800 (PST)
MIME-Version: 1.0
Received: by 10.180.38.73 with HTTP; Tue, 4 Feb 2014 16:09:25 -0800 (PST)
In-Reply-To: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
References: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
From: Wei Liu <liuw@liuw.name>
Date: Wed, 5 Feb 2014 00:09:25 +0000
Message-ID: <CAOsiSVWbGVNYEjt5nDEif3HM=5=vfwSSxMhK34xYU4RHzv8pLw@mail.gmail.com>
To: xu cong <congxumail@gmail.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Does blkback bypass Dom0's memory buffer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 4, 2014 at 9:15 PM, xu cong <congxumail@gmail.com> wrote:
> Does the blkback thread bypass Dom0's memory buffer? If I write in DomU
> without O_DIRECT, the data will be buffered in DomU's page cache and then
> flushed to disk. Will it be buffered in Dom0's memory again? How about
> netback? Thanks.
>

Why is netback involved? I don't quite get your question. Block devices
have a completely different data path from network devices.

Wei.

> Regards,
> Cong
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 00:24:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 00:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAqHM-0005Tt-Mu; Wed, 05 Feb 2014 00:23:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <congxumail@gmail.com>) id 1WAqHL-0005To-2B
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 00:23:47 +0000
Received: from [85.158.143.35:62128] by server-1.bemta-4.messagelabs.com id
	D9/79-31661-29481F25; Wed, 05 Feb 2014 00:23:46 +0000
X-Env-Sender: congxumail@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391559823!3163294!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29781 invoked from network); 5 Feb 2014 00:23:45 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 00:23:45 -0000
Received: by mail-pb0-f41.google.com with SMTP id up15so9237525pbc.0
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 16:23:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=iaWArQo84Anq9k3FN+bqEFhBd0uy5ReqnGYVfJjEtXM=;
	b=FAe2lCVwp1J28TrStwtOy267c7d5Oge4QHzhyFlExX43tLSqq9zldjVHvTWuumgAqQ
	cJprjL8n+6skpi1qP63DF9OZg4ALQ4wiJ7fG4TwcldxpSXCcJhKgrl99wNXSz6Sd4X2e
	yola55PfIJmo7NgEwPTEEHWsS3twYqlQppldrBjahpGkmGOE48wfydge2oSasoCmPlN+
	9RGHm9p9RecWZb7qxjpyE/SqvAd3Ag8A0ASZXRsqX+xV1nXFZEO6cfmIRYftg/zF1eA+
	HQSpT8v3hrs6hoEs8KYaANRM5UJQ9VdOHtcHmklJbwz1Z7GfjAcZWLqVtdOX/UctR0uh
	fvrA==
MIME-Version: 1.0
X-Received: by 10.68.171.229 with SMTP id ax5mr47136481pbc.125.1391559823167; 
	Tue, 04 Feb 2014 16:23:43 -0800 (PST)
Received: by 10.68.190.97 with HTTP; Tue, 4 Feb 2014 16:23:43 -0800 (PST)
In-Reply-To: <CAOsiSVWbGVNYEjt5nDEif3HM=5=vfwSSxMhK34xYU4RHzv8pLw@mail.gmail.com>
References: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
	<CAOsiSVWbGVNYEjt5nDEif3HM=5=vfwSSxMhK34xYU4RHzv8pLw@mail.gmail.com>
Date: Tue, 4 Feb 2014 19:23:43 -0500
Message-ID: <CA+hYhXvesPkNHCTWyh8F8V0YF31O_DxvuqJ-K7EQbW9Hv1+Qyg@mail.gmail.com>
From: xu cong <congxumail@gmail.com>
To: Wei Liu <liuw@liuw.name>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Does blkback bypass Dom0's memory buffer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4469611298538584750=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4469611298538584750==
Content-Type: multipart/alternative; boundary=047d7bacb2e6f195fe04f19dc473

--047d7bacb2e6f195fe04f19dc473
Content-Type: text/plain; charset=ISO-8859-1

Yes, my question is confusing. Please ignore the last sentence. I just
wonder whether the block backend thread will use Dom0's memory buffer or not.
Thanks.


2014-02-04 Wei Liu <liuw@liuw.name>:

> On Tue, Feb 4, 2014 at 9:15 PM, xu cong <congxumail@gmail.com> wrote:
> > Does blkback thread bypass Dom0's memory buffer? If I write in DomU
> without
> > O_Direct, the data will be buffered in DomU's memory cache then be
> flushed
> > to disk. Will they be buffered in Dom0's memory again? How about the
> > netback? Thanks.
> >
>
> Why is netback involved? I don't quite get your question. Block device
> has a completely different data path from network.
>
> Wei.
>
> > Regards,
> > Cong
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >
>

--047d7bacb2e6f195fe04f19dc473
Content-Type: text/html; charset=ISO-8859-1

<div dir="ltr">Yes, my question is confusing. Please ignore the last sentence. I just wonder whether the block backend thread will use Dom0&#39;s memory buffer or not. Thanks.<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">
2014-02-04 Wei Liu <span dir="ltr">&lt;<a href="mailto:liuw@liuw.name" target="_blank">liuw@liuw.name</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">On Tue, Feb 4, 2014 at 9:15 PM, xu cong &lt;<a href="mailto:congxumail@gmail.com">congxumail@gmail.com</a>&gt; wrote:<br>
&gt; Does blkback thread bypass Dom0&#39;s memory buffer? If I write in DomU without<br>
&gt; O_Direct, the data will be buffered in DomU&#39;s memory cache then be flushed<br>
&gt; to disk. Will they be buffered in Dom0&#39;s memory again? How about the<br>
&gt; netback? Thanks.<br>
&gt;<br>
<br>
</div></div>Why is netback involved? I don&#39;t quite get your question. Block device<br>
has a completely different data path from network.<br>
<br>
Wei.<br>
<br>
&gt; Regards,<br>
&gt; Cong<br>
&gt;<br>
&gt; _______________________________________________<br>
&gt; Xen-devel mailing list<br>
&gt; <a href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
&gt; <a href="http://lists.xen.org/xen-devel" target="_blank">http://lists.xen.org/xen-devel</a><br>
&gt;<br>
</blockquote></div><br></div>

--047d7bacb2e6f195fe04f19dc473--


--===============4469611298538584750==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4469611298538584750==--
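
The exchange above hinges on the O_DIRECT flag: without it, a write lands in the writer's page cache and is flushed later; with it, the kernel submits the I/O straight to the block layer. Below is a minimal sketch of the flag itself (file name and the 4096-byte alignment are arbitrary choices for illustration); it cannot show whether Dom0 caches the data again, since that depends on the backend configuration (e.g. a phy: device versus a file-backed disk).

```python
import mmap
import os

def write_direct(path, data, align=4096):
    """Write `data` to `path` with O_DIRECT where possible.

    Returns "direct" if the kernel accepted O_DIRECT, "buffered" if we
    fell back to an ordinary write (flag missing on this platform, or
    the filesystem rejects it, e.g. tmpfs). O_DIRECT requires the
    buffer address, file offset and length to be block-aligned, hence
    the page-aligned anonymous mmap used as the I/O buffer.
    """
    assert len(data) % align == 0, "O_DIRECT needs a block-aligned length"
    flag = getattr(os, "O_DIRECT", None)  # Linux-specific flag
    if flag is not None:
        try:
            fd = os.open(path, os.O_WRONLY | os.O_CREAT | flag, 0o600)
        except OSError:
            pass  # filesystem refused O_DIRECT at open time
        else:
            try:
                buf = mmap.mmap(-1, len(data))  # anonymous mmap: page-aligned
                buf.write(data)
                os.pwrite(fd, buf, 0)  # direct write, bypasses page cache
                return "direct"
            except OSError:
                pass  # direct write refused at write time
            finally:
                os.close(fd)
    # Fallback: ordinary buffered write through the page cache.
    with open(path, "wb") as f:
        f.write(data)
    return "buffered"
```

On a filesystem that rejects O_DIRECT the helper falls back to a buffered write and reports it, so the caller can tell which path was taken.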


From xen-devel-bounces@lists.xen.org Wed Feb 05 00:30:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 00:30:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAqNe-0005oD-Ec; Wed, 05 Feb 2014 00:30:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <congxumail@gmail.com>) id 1WAqNc-0005o7-N4
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 00:30:17 +0000
Received: from [85.158.137.68:3361] by server-9.bemta-3.messagelabs.com id
	7C/3D-10184-71681F25; Wed, 05 Feb 2014 00:30:15 +0000
X-Env-Sender: congxumail@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391560212!13359425!1
X-Originating-IP: [209.85.160.49]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7544 invoked from network); 5 Feb 2014 00:30:14 -0000
Received: from mail-pb0-f49.google.com (HELO mail-pb0-f49.google.com)
	(209.85.160.49)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 00:30:14 -0000
Received: by mail-pb0-f49.google.com with SMTP id up15so9094724pbc.8
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 16:30:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=jK9dYPRAV8zhpgEISr3CiLqs2q3v503rsGlPpm6aBOo=;
	b=y2uI2P1656Et5NTXgrSyBr4ybxBEKC9LlWdn6YwyOl3v3utB1hW+xuQ5bPZfyXztFF
	XINxTy51xgblxuVeskAEwiPeIzZXkt1h6oGDjtqshIn7vwjFH038YAAM8itsaGUTHUrg
	XXmYx+cv/QL1DKXa7ked7sROO+RWRURiZ6xZdV/MLsyEJdMfEXCi8OdFg8xYiZJVugh+
	Fe8HdiNYIfjAcR5qwv8PqVE8cgl8cN+PiAfZ2eoWFXNSGsaSFuU/+SMsi7gb3HetE4Vx
	5X6xF1iZ/crA+NxJvNYc/qGh3KCNBYKbH/ZzYey/Cgdj7FME5pRHs5wOtBScVa3DE/eJ
	Mkqg==
MIME-Version: 1.0
X-Received: by 10.66.175.4 with SMTP id bw4mr47330295pac.56.1391560212386;
	Tue, 04 Feb 2014 16:30:12 -0800 (PST)
Received: by 10.68.190.97 with HTTP; Tue, 4 Feb 2014 16:30:12 -0800 (PST)
In-Reply-To: <CAOsiSVWbGVNYEjt5nDEif3HM=5=vfwSSxMhK34xYU4RHzv8pLw@mail.gmail.com>
References: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
	<CAOsiSVWbGVNYEjt5nDEif3HM=5=vfwSSxMhK34xYU4RHzv8pLw@mail.gmail.com>
Date: Tue, 4 Feb 2014 19:30:12 -0500
Message-ID: <CA+hYhXsAaSwjzxTczkChRW3A0UTG_t9QNkLWaRUH-w2Qou_XDQ@mail.gmail.com>
From: xu cong <congxumail@gmail.com>
To: Wei Liu <liuw@liuw.name>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Does blkback bypass Dom0's memory buffer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3016547209833910346=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3016547209833910346==
Content-Type: multipart/alternative; boundary=047d7bea385a2498d504f19ddc14

--047d7bea385a2498d504f19ddc14
Content-Type: text/plain; charset=ISO-8859-1

Yes, my question is confusing. Please ignore the last sentence. I just
wonder whether the block backend thread will use Dom0's memory buffer or not.
Thanks.


2014-02-04 Wei Liu <liuw@liuw.name>:

> On Tue, Feb 4, 2014 at 9:15 PM, xu cong <congxumail@gmail.com> wrote:
> > Does blkback thread bypass Dom0's memory buffer? If I write in DomU
> without
> > O_Direct, the data will be buffered in DomU's memory cache then be
> flushed
> > to disk. Will they be buffered in Dom0's memory again? How about the
> > netback? Thanks.
> >
>
> Why is netback involved? I don't quite get your question. Block device
> has a completely different data path from network.
>
> Wei.
>
> > Regards,
> > Cong
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> >
>

--047d7bea385a2498d504f19ddc14
Content-Type: text/html; charset=ISO-8859-1

<div dir="ltr">Yes, my question is confusing. Please ignore the last sentence. I just wonder whether the block backend thread will use Dom0&#39;s memory buffer or not. Thanks.<br></div><div class="gmail_extra"><br><br><div class="gmail_quote">
2014-02-04 Wei Liu <span dir="ltr">&lt;<a href="mailto:liuw@liuw.name" target="_blank">liuw@liuw.name</a>&gt;</span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="HOEnZb"><div class="h5">On Tue, Feb 4, 2014 at 9:15 PM, xu cong &lt;<a href="mailto:congxumail@gmail.com">congxumail@gmail.com</a>&gt; wrote:<br>
&gt; Does blkback thread bypass Dom0&#39;s memory buffer? If I write in DomU without<br>
&gt; O_Direct, the data will be buffered in DomU&#39;s memory cache then be flushed<br>
&gt; to disk. Will they be buffered in Dom0&#39;s memory again? How about the<br>
&gt; netback? Thanks.<br>
&gt;<br>
<br>
</div></div>Why is netback involved? I don&#39;t quite get your question. Block device<br>
has a completely different data path from network.<br>
<br>
Wei.<br>
<br>
&gt; Regards,<br>
&gt; Cong<br>
&gt;<br>
&gt; _______________________________________________<br>
&gt; Xen-devel mailing list<br>
&gt; <a href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
&gt; <a href="http://lists.xen.org/xen-devel" target="_blank">http://lists.xen.org/xen-devel</a><br>
&gt;<br>
</blockquote></div><br></div>

--047d7bea385a2498d504f19ddc14--


--===============3016547209833910346==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3016547209833910346==--


From xen-devel-bounces@lists.xen.org Wed Feb 05 03:20:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 03:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAt2F-0005CG-Ju; Wed, 05 Feb 2014 03:20:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WAt2C-0005CB-VW
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 03:20:21 +0000
Received: from [85.158.137.68:7152] by server-14.bemta-3.messagelabs.com id
	0B/F6-08196-4FDA1F25; Wed, 05 Feb 2014 03:20:20 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391570419!9768138!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30912 invoked from network); 5 Feb 2014 03:20:19 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 03:20:19 -0000
Received: by mail-we0-f173.google.com with SMTP id x55so5026500wes.32
	for <xen-devel@lists.xensource.com>;
	Tue, 04 Feb 2014 19:20:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:in-reply-to:references:mime-version:content-type
	:content-transfer-encoding:subject:from:date:to:cc:message-id;
	bh=jUsyvC691DRS7L+2BB9qn6L1rptgEwpsHUhvYN6JjkQ=;
	b=beGLK/exnaWdFJvqsCTogGRvFCWs1+hc3FVIVFCXUmPkRTxrn58dZOx1ikdK94Is0y
	uo2SO/AK1F0iipnLr4Gw6SxVe+9d77nbchcY6uIj70NT+Mbmfw2Dc6wQ7QEm4plHeofZ
	MDV58GwuvTTWoo3LNONWSmPAgyqTa9djV5ZJVVsHuL1J6EAe6n0jaFgF1IQT0hne2Bqu
	tgvpjaQi/YVhiNl4r1dPzB2nUOq3JLkhVfEnstq33rI6UMkrnMp/u0j42gjFy8OiolKa
	CsmyU5dz10wRkbXYHgn0RhHSTA4aRy33MHJQ+Oy9iuKHlUMMbuSoEQYe82brP0/+n4Wq
	4gqw==
X-Received: by 10.194.60.37 with SMTP id e5mr9777071wjr.32.1391570418623;
	Tue, 04 Feb 2014 19:20:18 -0800 (PST)
Received: from ?IPV6:2001:470:7b2f:0:5cf5:841e:ab80:f862?
	([2001:470:7b2f:0:5cf5:841e:ab80:f862])
	by mx.google.com with ESMTPSA id eo4sm41591511wib.9.2014.02.04.19.20.17
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 04 Feb 2014 19:20:18 -0800 (PST)
User-Agent: K-9 Mail for Android
In-Reply-To: <CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
MIME-Version: 1.0
From: "Mike C." <miguelmclara@gmail.com>
Date: Wed, 05 Feb 2014 03:20:10 +0000
To: rshriram@cs.ubc.ca,Shriram Rajagopalan <rshriram@cs.ubc.ca>
Message-ID: <5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
	second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8758349515033605614=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8758349515033605614==
Content-Type: multipart/alternative; boundary="----XL4V5F1FINVP39OQWWEP7HNDN0LM6Q"
Content-Transfer-Encoding: 8bit

------XL4V5F1FINVP39OQWWEP7HNDN0LM6Q
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
 charset=UTF-8

Fixed this, but it seems using drbd: in the disk config doesn't work with pygrub....

Does this make sense?

I found an old bug report, but this is Debian squeeze Xen 4.3.

It seems to work fine booting into the installer, but if I use pygrub it doesn't find the drbd device.



On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
>On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com>
>wrote:
>
>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
>> does this come with xen or the drbd package?
>>
>> Xen 4.3.1 was compiled from source, but drbd is installed from
>apt-get
>> on debian (v 8.3)
>>
>>
>It comes with the drbd package AFAIK

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
------XL4V5F1FINVP39OQWWEP7HNDN0LM6Q
Content-Type: text/html;
 charset=utf-8
Content-Transfer-Encoding: 8bit

<html><head></head><body>Fixed this, but it seems using drbd: in the disk config doesn't work with pygrub....<br>
<br>
Does this make sense?<br>
<br>
I found an old bug report, but this is Debian squeeze Xen 4.3.<br>
<br>
It seems to work fine booting into the installer, but if I use pygrub it doesn't find the drbd device.<br>
<br>
<br><br><div class="gmail_quote">On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan &lt;rshriram@cs.ubc.ca&gt; wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div dir="ltr">On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <span dir="ltr">&lt;<a href="mailto:miguelmclara@gmail.com" target="_blank">miguelmclara@gmail.com</a>&gt;</span> wrote:<br /><div class="gmail_extra"><div class="gmail_quote">

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I noticed /etc/xen/scripts doesn&#39;t include the &#39;block-drbd&#39; script,<br />
does this come with xen or the drbd package?<br />
<br />
Xen 4.3.1 was compiled from source, but drbd is installed from apt-get<br />
on debian (v 8.3)<br />
<br /></blockquote><div><br /></div><div>It comes with the drbd package AFAIK</div><div><br /></div></div></div></div>
</blockquote></div><br>
-- <br>
Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>
------XL4V5F1FINVP39OQWWEP7HNDN0LM6Q--



--===============8758349515033605614==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8758349515033605614==--
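
For context on the drbd:/pygrub problem discussed above: the DRBD documentation describes a drbd: disk prefix for Xen, resolved by the block-drbd hotplug script that ships with the drbd package (installed under /etc/xen/scripts). A sketch of the kind of guest config involved follows; the resource name r0 is made up, the syntax is the xm-style form from the DRBD docs, and, as this thread reports, pygrub under the xl toolstack may fail to resolve the device this way.

```
# Hypothetical guest config fragment ('r0' is a made-up DRBD resource
# name). The block-drbd script maps the resource to its /dev/drbdN
# device and promotes it to primary before the guest starts.
disk       = [ 'drbd:r0,xvda,w' ]
bootloader = 'pygrub'
```

With xl, an alternative worth trying is naming the script explicitly (e.g. a disk spec using script=block-drbd with the resource as the target), though whether that interacts correctly with pygrub is exactly what is in question here.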



Content-Transfer-Encoding: 8bit

------XL4V5F1FINVP39OQWWEP7HNDN0LM6Q
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
 charset=UTF-8

Fixed this, but it seems using drbd: in the disk config doesn't work with pygrub....

Does this make sense?

I found an old bug report, but this is Debian squeeze with Xen 4.3.

It seems to work fine booting into the installer, but if I use pygrub it doesn't find the drbd device.
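
(For reference, this is the kind of disk stanza involved — a sketch only; "r0" is a hypothetical DRBD resource name:)

```
# xl guest config fragment (sketch; "r0" is a hypothetical resource name)
disk       = [ 'drbd:r0,xvda,w' ]
bootloader = 'pygrub'
```

The drbd: prefix is resolved by the block-drbd hotplug script in dom0, which may explain why pygrub (which also runs in dom0, before the guest starts) cannot see the device.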



On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan <rshriram@cs.ubc.ca> wrote:
>On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com>
>wrote:
>
>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
>> does this come with xen or the drbd package?
>>
>> Xen 4.3.1 was compiled from source, but drbd is installed from
>apt-get
>> on debian (v 8.3)
>>
>>
>It comes with the drbd package AFAIK

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
------XL4V5F1FINVP39OQWWEP7HNDN0LM6Q
Content-Type: text/html;
 charset=utf-8
Content-Transfer-Encoding: 8bit

<html><head></head><body>Fixed this, but it seems using drbd: in the disk config doesn't work with pygrub....<br>
<br>
Does this make sense?<br>
<br>
I found an old bug report, but this is Debian squeeze with Xen 4.3.<br>
<br>
It seems to work fine booting into the installer, but if I use pygrub it doesn't find the drbd device.<br>
<br>
<br><br><div class="gmail_quote">On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan &lt;rshriram@cs.ubc.ca&gt; wrote:<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div dir="ltr">On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <span dir="ltr">&lt;<a href="mailto:miguelmclara@gmail.com" target="_blank">miguelmclara@gmail.com</a>&gt;</span> wrote:<br /><div class="gmail_extra"><div class="gmail_quote">

<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I noticed /etc/xen/scripts doesn&#39;t include the &#39;block-drbd&#39; script,<br />
does this come with xen or the drbd package?<br />
<br />
Xen 4.3.1 was compiled from source, but drbd is installed from apt-get<br />
on debian (v 8.3)<br />
<br /></blockquote><div><br /></div><div>It comes with the drbd package AFAIK</div><div><br /></div></div></div></div>
</blockquote></div><br>
-- <br>
Sent from my Android device with K-9 Mail. Please excuse my brevity.</body></html>
------XL4V5F1FINVP39OQWWEP7HNDN0LM6Q--



--===============8758349515033605614==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8758349515033605614==--



From xen-devel-bounces@lists.xen.org Wed Feb 05 04:00:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 04:00:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAtep-0006DQ-Ha; Wed, 05 Feb 2014 04:00:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WAteo-0006DL-Ay
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 04:00:14 +0000
Received: from [85.158.143.35:21990] by server-3.bemta-4.messagelabs.com id
	20/D5-11539-D47B1F25; Wed, 05 Feb 2014 04:00:13 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391572811!3188309!1
X-Originating-IP: [98.139.213.161]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19941 invoked from network); 5 Feb 2014 04:00:12 -0000
Received: from nm24-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm24-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.161)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 04:00:12 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391572811; bh=xCrLOChMJsE20MJUDA1TIdpQ6Bg0y9eW0SBCXMc91QE=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=lq1f9nj+B3KPH/j8qsQpKRPsrjuvRBZm2iNUEc0TG8fa58jej0gwrvmO4ZocbRSZp/yVlrsdX9fK1Bddr7GuuTHuTaowQwS0PZnMU40HV/+xM8wY7Eg4QIKPo+njmUqjq65YrcfZYrJy4aKycLFa2UrfPfKIEEMuQDwIEZ/ES/E=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=Zsud+608nZsI0Qad3VZgqzY2pRTHlpAYYAUnOzuQYNWlGG8loGEW2FsV+TtNjzqLqc62P/vnFfE+sG4O2muYGA3KxjlzE76GnndExlJ1fHnLJ2S9RfcJ6543aqCgT5K+hRoPM99eJbiwNhPyYKm1Otn7CSQStvNflem0/X6YemI=;
Received: from [98.139.212.150] by nm24.bullet.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 04:00:11 -0000
Received: from [98.139.213.9] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 04:00:11 -0000
Received: from [127.0.0.1] by smtp109.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 04:00:11 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391572811; bh=xCrLOChMJsE20MJUDA1TIdpQ6Bg0y9eW0SBCXMc91QE=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=V5jcQtT3LilS05n5DftgBlHQ9+X/sGbQQX5I1uf11TrlIKgS05dJ1iDJ+ADhaVKZPQ5ImB1DmOvQZBcBC8gJqV26Oc1pLAI15RiT1Qe428XWPtsdWyTLH3UuUi43WHLoWV2WpLO4x0jirI8N8V4aQnjGzrw7f7q+cWBwRsgAOvY=
X-Yahoo-Newman-Id: 307849.87225.bm@smtp109.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: RvmfSM0VM1kVkd_ye3syhhVBkaFXrAZ6HMn.U63fKKSXACp
	Ungld3o0806N.xSHhEIk8EgUm.ugejNSTsmnP17kOFBQuXQ69xl83Glv7Ufd
	Xv9lBjAAeetHh91X9E8cVdYA6xnCShmQft70FCKM2T1Yt8sK5zqGo4O7sU3b
	XUDScUyWRC4uvE1RNVSjvrxv2lq.SLOsnMsccHA8EkNFOLtcO_SkX3n6y4fP
	zY52.j32K4O8BxaV18p062pwOaB3db76UQBm0t9MZvPpU2gboE0mfLGCqGOF
	2nDZyLzAwsBljfcLf_41VcB1nQRkjeJmeyL7ZFSCmc59NwF5A.Cfwl6UTCVu
	gSjeNdFz5YEfEq40XzPEqEMbUcWuoTUiTl5dyBKPFaFsMTWZ4BX5Q4UT1NpS
	D6p0PJgwuWN0zzvJ74a84bNgVFMOSeLSf7eEEkFEhF_JiCBZxZgGxwO8NjPo
	7Ns3iXELNYY7UrY8PuC_nHpYCgx9xle4abZ_Cjmo.wfxcNGePWva2PfAd6VA
	lDMuXNQI26a9_x7_rAX0Kvri54EtqxhJswK.KVaAu7UzPxUmd.HH7ElM8vWH
	NtQJfY07TDtiFNIm3p6BBNqZopK2WSvxBIHdDJ7kbi_HhKRObq05I8dIu_mT
	BbWzZ4d_PIgW2qJqLbOU_sjh3htQeRefY9dpkOgl8DbcP.Uyg7DvPmzJhJPE
	9ZMS2g8qwbE7bHr6xYvXP9Dnrk7TBOi7tjvujqIxMGpy_s21LJSXUX2hxuIi
	JXfuklkLJyrDjkSD2JgyCXr1NEc1Mn9pyHVJL9VbO0.Xx0HQ-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp109.mail.bf1.yahoo.com with SMTP; 04 Feb 2014 20:00:11 -0800 PST
Message-ID: <1391572808.2441.37.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date: Tue, 04 Feb 2014 21:00:08 -0700
In-Reply-To: <52F10EEF.7050402@m2r.biz>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: xen@lists.fedoraproject.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
> Il 04/02/2014 16:41, Eric Houby ha scritto:
> > Xen list,
> >
> > I am trying to boot a F20 guest and connect using Spice but have run
> > into an issue.
> >
> > My VM config file includes:
> > spice = 1
> > spicehost='0.0.0.0'
> > spiceport=6001
> > spicedisable_ticketing=1
> >
> >
> > Is Spice supported with qemu-xen-traditional?
> 
> No, only with upstream qemu; and if you compile xen and qemu from source you 
> also need to enable spice support in the qemu build, for example in my xen 
> build tests I add:
> 
> tools/Makefile
> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>           --datadir=$(SHAREDIR)/qemu-xen \
>           --localstatedir=/var \
>           --disable-kvm \
> +        --enable-spice \
> +        --enable-usb-redir \
>           --disable-docs \
>           --disable-guest-agent \
>           --python=$(PYTHON) \
> 
> If you use upstream qemu from a distribution package, it probably already 
> has spice built in; for example, on debian I've already tested it and it works.
> 

It is my understanding that the qemu package in F20 does not support xen
so I compiled xen from source per the RC3 Test Day instructions and the
instructions here:

http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source

After adding --enable-spice and --enable-usb-redir to tools/Makefile I
see the following error when I make xen:

ERROR: User requested feature spice
       configure was not able to find it

The build finishes and xen works fine but spice obviously does not work.
More complete log is below.

Thanks,

Eric


if test -d git://xenbits.xen.org/qemu-upstream-unstable.git ; then \
        mkdir -p qemu-xen-dir; \
else \
        export GIT=git; \
        /root/src/xen/tools/../scripts/git-checkout.sh
git://xenbits.xen.org/qemu-upstream-unstable.git q
emu-xen-4.4.0-rc3 qemu-xen-dir ; \
fi
if test -d git://xenbits.xen.org/qemu-upstream-unstable.git ; then \
        source=git://xenbits.xen.org/qemu-upstream-unstable.git; \
else \
        source=.; \
fi; \
cd qemu-xen-dir; \
$source/configure --enable-xen --target-list=i386-softmmu \
        --enable-debug --enable-trace-backend=stderr \
        --prefix=/usr/local \
        --source-path=$source \
        --extra-cflags="-I/root/src/xen/tools/../tools/include \
        -I/root/src/xen/tools/../tools/libxc \
        -I/root/src/xen/tools/../tools/xenstore \
        -I/root/src/xen/tools/../tools/xenstore/compat \
        " \
        --extra-ldflags="-L/root/src/xen/tools/../tools/libxc \
        -L/root/src/xen/tools/../tools/xenstore" \
        --bindir=/usr/local/lib/xen/bin \
        --datadir=/usr/local/share/qemu-xen \
        --localstatedir=/var \
        --disable-kvm \
        --enable-spice \
        --enable-usb-redir \
        --disable-docs \
        --disable-guest-agent \
        --python=python \
        ; \
make all

ERROR: User requested feature spice
       configure was not able to find it

make[3]: Entering directory `/root/src/xen/tools/qemu-xen-dir-remote'
make[3]: Leaving directory `/root/src/xen/tools/qemu-xen-dir-remote'
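
For what it's worth, that "configure was not able to find it" failure normally means the spice development libraries are missing or not visible to pkg-config; qemu's configure of this era probes for spice roughly like this (a sketch, assuming pkg-config is how spice is located):

```shell
# Hypothetical check mirroring qemu's configure probe for spice: if either
# probe fails, configure reports "User requested feature spice /
# configure was not able to find it".
if pkg-config --exists spice-server 2>/dev/null && \
   pkg-config --exists spice-protocol 2>/dev/null; then
    echo "spice devel libraries found"
else
    echo "spice devel libraries missing"
fi
```

On Fedora the relevant packages should be spice-server-devel and spice-protocol; on Debian, libspice-server-dev.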








_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 04:43:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 04:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAuKH-00078U-IE; Wed, 05 Feb 2014 04:43:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1WAuKF-00078P-Fq
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 04:43:03 +0000
Received: from [85.158.143.35:37586] by server-1.bemta-4.messagelabs.com id
	39/04-31661-651C1F25; Wed, 05 Feb 2014 04:43:02 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391575381!3202712!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13093 invoked from network); 5 Feb 2014 04:43:02 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-13.tower-21.messagelabs.com with SMTP;
	5 Feb 2014 04:43:02 -0000
Received: from localhost (74-93-104-98-Washington.hfc.comcastbusiness.net
	[74.93.104.98]) (Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 07D13587204;
	Tue,  4 Feb 2014 20:43:00 -0800 (PST)
Date: Tue, 04 Feb 2014 20:43:00 -0800 (PST)
Message-Id: <20140204.204300.1246992450475475633.davem@davemloft.net>
To: david.vrabel@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1391539826-30962-1-git-send-email-david.vrabel@citrix.com>
References: <1391539826-30962-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
Cc: netdev@vger.kernel.org, boris.ostrovsky@oracle.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv1 net] xen-netfront: handle backend CLOSED
 without CLOSING
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 4 Feb 2014 18:50:26 +0000

> From: David Vrabel <david.vrabel@citrix.com>
> 
> Backend drivers shouldn't transition to CLOSED unless the frontend is
> CLOSED.  If a backend does transition to CLOSED too soon then the
> frontend may not see the CLOSING state and will not properly shut down.
> 
> So, treat an unexpected backend CLOSED state the same as CLOSING.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

Applied, thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 05:49:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 05:49:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAvKN-0000CS-MD; Wed, 05 Feb 2014 05:47:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WAvKM-0000CN-2B
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 05:47:14 +0000
Received: from [85.158.143.35:6589] by server-2.bemta-4.messagelabs.com id
	45/BF-10891-C50D1F25; Wed, 05 Feb 2014 05:47:08 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391579225!3190238!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10047 invoked from network); 5 Feb 2014 05:47:07 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 05:47:07 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Tue, 04 Feb 2014 22:47:00 -0700
Message-ID: <52F1D050.9040405@suse.com>
Date: Tue, 04 Feb 2014 22:46:56 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<21231.49368.953196.741218@mariner.uk.xensource.com>
In-Reply-To: <21231.49368.953196.741218@mariner.uk.xensource.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson wrote:
> Ian Jackson writes ("[PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
>   
>> This is the latest version of my libxl event fixes apropos of Jim's
>> libvirt testing.
>>     
>
> This can also be found here:
>   git://xenbits.xen.org/people/iwj/xen.git#wip.enumerate-pids-v3
>   

I haven't reviewed this version of the series, but have been running it
in my test setup for several hours now without any problems.  The test
setup includes some fixes on the libvirt side, which I will post
tomorrow after fixing up the commit messages.  I'll cc xen-devel in case
you have some comments on those fixes.

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 05:54:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 05:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAvRH-0000Ug-Ix; Wed, 05 Feb 2014 05:54:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WAvRG-0000Ua-90
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 05:54:22 +0000
Received: from [85.158.139.211:26801] by server-13.bemta-5.messagelabs.com id
	70/22-18801-D02D1F25; Wed, 05 Feb 2014 05:54:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391579658!1690474!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29837 invoked from network); 5 Feb 2014 05:54:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 05:54:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,784,1384300800"; d="scan'208";a="98089090"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 05:53:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 00:53:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WAvQr-00031F-4h;
	Wed, 05 Feb 2014 05:53:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WAvQn-0006TG-N2;
	Wed, 05 Feb 2014 05:53:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24728-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 05:53:53 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24728: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24728 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24728/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64  3 host-install(3)      broken REGR. vs. 24723

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      3 host-install(3)         broken REGR. vs. 24723
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)        broken like 24723
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install         fail like 24719

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ff1745d5882b7356ea423709919e46e55c31b615
baseline version:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ff1745d5882b7356ea423709919e46e55c31b615
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Thu Jan 16 15:27:59 2014 +0000

    tools: libxl: do not set the PoD target on ARM
    
    ARM does not implement PoD and so returns ENOSYS from XENMEM_set_pod_target.
    
    The correct solution here would be to check for ENOSYS in libxl; unfortunately,
    xc_domain_set_pod_target suffers from the same broken error reporting as the
    rest of libxc and throws away the errno.
    
    So for now conditionally define xc_domain_set_pod_target to return success
    (which is what PoD does if nothing needs doing). xc_domain_get_pod_target sets
    errno==-1 and returns -1, which matches the broken error reporting of the
    existing function. It appears to have no in-tree callers in any case.
    
    The conditional should be removed once libxc has been fixed.
    
    This makes ballooning (xl mem-set) work for ARM domains.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>
    Cc: george.dunlap@citrix.com

commit 5224a733d3bd4d0db3548712047506c50487085e
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Fri Jan 24 14:23:07 2014 +0000

    xen: arm: correct use of find_next_bit
    
    find_next_bit takes a "const unsigned long *", but forcing a cast of a
    "uint32_t *" throws away the alignment constraints and ends up causing an
    alignment fault on arm64 if the input happens to be 4- but not 8-byte aligned.
    
    Instead of casting, use a temporary variable of the right type.
    
    I've had a look around for similar constructs and the only thing I found was
    maintenance_interrupt, which casts a uint64_t down to an unsigned long; that,
    although perhaps not best advised, is safe I think.
    
    This was observed with the AArch64 Linaro toolchain 2013.12 but I think that
    is just coincidental due to subtle changes to the stack layout etc.
    
    Reported-by: Fu Wei <fu.wei@linaro.org>
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Julien Grall <julien.grall@linaro.org>
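
[Editor's note: the alignment hazard and fix described in the commit message above can be sketched as follows. This is an illustrative example with a toy stand-in for find_next_bit(), not the actual Xen bitops code.]

```c
#include <stdint.h>

/* Toy stand-in for the bitops helper: scans one unsigned long for the
 * next set bit at or after 'offset'; returns 'size' if none is found.
 * The real find_next_bit() takes a "const unsigned long *", which on
 * arm64 must be 8-byte aligned. */
static unsigned long find_next_bit(const unsigned long *addr,
                                   unsigned long size, unsigned long offset)
{
    unsigned long i;
    for (i = offset; i < size; i++)
        if (*addr & (1UL << i))
            return i;
    return size;
}

/* The bug: casting a "uint32_t *" to "const unsigned long *" discards
 * the alignment constraint (uint32_t needs only 4-byte alignment), so an
 * input that is 4- but not 8-byte aligned can fault on arm64.  The fix:
 * copy the value into a temporary of the right type, which the compiler
 * aligns correctly. */
static unsigned long first_set_bit32(uint32_t mask)
{
    unsigned long tmp = mask;            /* properly aligned temporary */
    return find_next_bit(&tmp, 32, 0);
}
```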

commit ea527eda9e1f7a8dcd4cf799c01c4b11468e952f
Author: Julien Grall <julien.grall@linaro.org>
Date:   Fri Jan 31 22:22:45 2014 +0000

    xen/arm: Directly return NULL if Xen fails to allocate domain struct
    
    The current implementation of alloc_domain_struct dereferences the newly
    allocated pointer even if the allocation has failed.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 46b5f0fd1fe7a49fb993fbad8a1fa232e2253afc
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Mon Jan 27 17:53:38 2014 +0000

    libxc: fix claim mode when creating HVM guest
    
    The original code is wrong because:
    * claim mode wants to know the total number of pages needed, while the
      original code provides only the additional number of pages needed.
    * if PoD is enabled, memory will already be allocated by the time we try
      to claim memory.
    
    So the fix would be:
    * move claim mode before actual memory allocation.
    * pass the right number of pages to the hypervisor.
    
    The "right number of pages" is the number of pages of target memory
    minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
    
    This fixes bug #32.
    
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reviewed-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Ian Jackson <ian.jackson@eu.citrix.com>
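
[Editor's note: the page accounting described in the commit message above amounts to the following sketch. The VGA_HOLE_SIZE value and helper name here are illustrative assumptions, not the real libxc definitions.]

```c
#define VGA_HOLE_SIZE 0x20UL    /* pages; illustrative value, not libxc's */

/* Claim the guest's total page count minus the VGA hole, regardless of
 * whether PoD is enabled -- and compute this *before* any allocation,
 * since with PoD memory is already allocated by the time the old code
 * tried to claim. */
static unsigned long pages_to_claim(unsigned long target_pages)
{
    return target_pages - VGA_HOLE_SIZE;
}
```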

commit 3f4218742f5efc7c89fc1c61af546942f9d2dbb8
Author: Olaf Hering <olaf@aepfle.de>
Date:   Tue Jan 28 19:12:16 2014 +0100

    xl: update check-xl-disk-parse to handle backend_domname
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 834eec157a20746135054d519b8e4ccd3f6a8088
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Tue Jan 28 16:03:03 2014 +0000

    doc: Better documentation about the usbdevice=['host:bus.addr'] format
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    [ ijc -- minor wording tweak ]

commit c2ba706c44813342269eb5cb2288552dc5f2a9a7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jan 27 16:25:24 2014 +0000

    tools/libxc: goto correct label on error paths
    
    Both of these "goto finish;" statements are actually errors, and need to "goto
    out;" instead, which will correctly destroy the domain and return an error,
    rather than trying to finish the migration (and in at least one scenario,
    return success).
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>

commit af172d655c3900822d1f710ac13ee38ee9d482d2
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 4 09:22:12 2014 +0100

    x86/domctl: don't ignore errors from vmce_restore_vcpu()
    
    What started out as a simple cleanup patch (eliminating the redundant
    check of domctl->cmd before setting "copyback", which as a result
    rendered the "ext_vcpucontext_out" label useless) revealed a bug in the
    handling of XEN_DOMCTL_set_ext_vcpucontext.
    
    Fix this, retaining the cleanup, and at once dropping a stale comment
    and an accompanying formatting issue.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

    and an accompanying formatting issue.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 07:59:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 07:59:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAxNv-0003U6-JS; Wed, 05 Feb 2014 07:59:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WAwJp-0001tc-ER
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 06:50:45 +0000
Received: from [85.158.143.35:9493] by server-1.bemta-4.messagelabs.com id
	C8/71-31661-34FD1F25; Wed, 05 Feb 2014 06:50:43 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391583041!3200024!1
X-Originating-IP: [216.109.115.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 766 invoked from network); 5 Feb 2014 06:50:42 -0000
Received: from nm47-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm47-vm1.bullet.mail.bf1.yahoo.com) (216.109.115.124)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 06:50:42 -0000
Received: from [98.139.212.150] by nm47.bullet.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 06:50:41 -0000
Received: from [98.139.212.246] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 06:50:41 -0000
Received: from [127.0.0.1] by omp1055.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 06:50:40 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 994483.78095.bm@omp1055.mail.bf1.yahoo.com
Received: (qmail 54337 invoked by uid 60001); 5 Feb 2014 06:50:40 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391583040; bh=HbBQPwAtTasg8GK7rBpkzSt94jqFdO3j7MB4Spvd6UA=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=OsMmxTpMq4chzz0T6DXFipW8lDE0BlWdYOo37RWIrb3XQg529uA3S0WhzKQHOnSRFcY0sOel1RBqGxCaE8kO+JyN1MmynakeNB6ZsqqZAmuQmkAMMytv7JvyoNPKoaclaVBOHVqkbGHBQ2OxchbuqlEJnPqVlqYqE8ZDveYoq9E=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=KBSeHB5+g39/5gn75nihWdlckpMoa6HqpLursIV9pXaUe0bVj6ZayXds9wI835hscgIVg+y0ylZSPSm8+eYYw/hpiUKBdLhNcf1UKrEMQt8ZEwj+aZ08Rox1lAL5LgHZVpj1wuIBNHKASrDsspun0VI049ENCcs7/QNEtea4RKk=;
X-YMail-OSG: orJVOMwVM1nx.dT9UplXNVQQNdr8OMGfyHaCatTiZgaLBNu
	HUF5QDDTmQbt47TuvJOQOiNVtvPZhM2C1eieQwiYFIA11_m8QqVy_3LCFjVc
	Ghf0bIqSApgcFPgqBWdaGPGjZ.joyzp0U4Podg.yEkSvLBKZ4HMYNUYTd1Sb
	D7jHQlfdNoZDy__m49XbC6fSrZZXp4m2HmP_CuEeRfDP18tlh_yVfbJSsXUG
	eiAjONJDuK8dByeaoEqz_Iz3uF23M2hoorg0DH.TnQdKMJlBHoo8X1z7G0v.
	RJF1C80gNxePRDEp7exNCRgpO168IRibCSrh.pjt7g8oNvaDqD2Cf1JIG364
	nhJ.8zq4l8tKHFp30hiN3f.WgkH5C8LIzKcgwdvJtLL5SGB6Uqv2MJq5owK3
	15qkKl6zjG2mVacG8M6Czjuy_P4GapCEJN300NQ87Esk7lkGY_ewsiFLElEM
	Bjc75Zr1YIhYqFkKk5h36t7NW758B_GmqJbUrEFukf4nIk0O81buvprjYE7s
	eF6yEwvMEPwxEHcw546sBkbaUZm9gzg--
Received: from [192.227.225.3] by web161802.mail.bf1.yahoo.com via HTTP;
	Tue, 04 Feb 2014 22:50:40 PST
X-Rocket-MIMEInfo: 002.001,
	SGVsbG8gTXIgT2xhZiwKSSB0cmllZCBmb3IgY2hhbmdlIGNvZGUgeGNfc2F2ZS5jIHRvIGF0dGVudGlvbsKgYmFja3BvcnRlZCwgQnV0IG5vdCBhbnN3ZXIgbWUgOi0oLi4uClNob3VsZCBJIGNoYW5nZSBjb2RlIHhjX2RvbWFpbl9zYXZlLmMgPyBpbiB0aGlzIGNvZGUgdXNlZCBmdW5jdGlvbiAnc3RhdGljIGludCBwcmludF9zdGF0cyguLi4pJyAuCkkgYXR0YWNoZSBmaWxlIHhlbmQubG9nIGxhdGVyIHVzZSBvZsKgQ29kZSBjaGFuZ2luZyB4Y19zYXZlLmM6bWFpbjoKCmludAptYWluKGludCBhcmdjLCBjaGEBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.174.629
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
Message-ID: <1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
Date: Tue, 4 Feb 2014 22:50:40 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140203131144.GA31275@aepfle.de>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="-2096837515-1980809517-1391583040=:24823"
X-Mailman-Approved-At: Wed, 05 Feb 2014 07:59:01 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---2096837515-1980809517-1391583040=:24823
Content-Type: multipart/alternative; boundary="-2096837515-1230453173-1391583040=:24823"

---2096837515-1230453173-1391583040=:24823
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

Hello Mr Olaf,
I tried to change the code of xc_save.c following the backported changes, but it gave me no answer :-(...
Should I change the code of xc_domain_save.c instead? That code uses the function 'static int print_stats(...)'.
I attach the file xend.log produced after using the changed xc_save.c:main:

int
main(int argc, char **argv)
{
    unsigned int maxit, max_f, lflags; //Change, added lflags...
    int io_fd, ret, port;
    struct save_callbacks callbacks;
    xentoollog_level lvl; //added...
    xentoollog_logger *l; //added...

    if (argc != 6)
        errx(1, "usage: %s iofd domid maxit maxf flags", argv[0]);

    io_fd = atoi(argv[1]);
    si.domid = atoi(argv[2]);
    maxit = atoi(argv[3]);
    max_f = atoi(argv[4]);
    si.flags = atoi(argv[5]);

    si.suspend_evtchn = -1;

    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL; //added...
    lflags = XTL_STDIOSTREAM_SHOW_PID | XTL_STDIOSTREAM_HIDE_PROGRESS; //added...
    l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags); //added...
    si.xch = xc_interface_open(l, 0, 0); //Change, original: si.xch = xc_interface_open(0,0,0);
    if (!si.xch)
        errx(1, "failed to open control interface");
    si.xce = xc_evtchn_open(NULL, 0);
    if (si.xce == NULL)
        warnx("failed to open event channel handle");
    else
    {
        port = xs_suspend_evtchn_port(si.domid);

        if (port < 0)
            warnx("failed to get the suspend evtchn port\n");
        else
        {
            si.suspend_evtchn =
                xc_suspend_evtchn_init(si.xch, si.xce, si.domid, port);

            if (si.suspend_evtchn < 0)
                warnx("suspend event channel initialization failed, "
                      "using slow path");
        }
    }
    memset(&callbacks, 0, sizeof(callbacks));
    callbacks.suspend = suspend;
    callbacks.switch_qemu_logdirty = switch_qemu_logdirty;
    ret = xc_domain_save(si.xch, io_fd, si.domid, maxit, max_f, si.flags,
                         &callbacks, !!(si.flags & XCFLAGS_HVM));
    // in xen 4.3.1 the parameter vm_generationid_addr was added to
    // xc_domain_save(...), but it is not needed here

    if (si.suspend_evtchn > 0)
        xc_suspend_evtchn_release(si.xch, si.xce, si.domid, si.suspend_evtchn);

    if (si.xce > 0)
        xc_evtchn_close(si.xce);

    xc_interface_close(si.xch);

    return ret;
}

Adel Amani
M.Sc. Candidate @ Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir



On Monday, February 3, 2014 4:41 PM, Olaf Hering <olaf@aepfle.de> wrote:

On Mon, Feb 03, Adel Amani wrote:

> Thanks, how do I define a logger for xc_interface_open to get output printed?!

See the example I gave in my reply. I quoted it again (see below) for
your convenience.

> Can I use the code of xc_save.c from xen 4.3.1 for the logger in xen 4.1.2?!

If all required changes are backported, most likely yes.


Olaf


> For an example of how a logger could look, see the xc_interface_open
> call in tools/xenpaging/xenpaging.c.
---2096837515-1230453173-1391583040=:24823--
---2096837515-1980809517-1391583040=:24823
Content-Type: application/octet-stream; name="xend.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="xend.log"

IEZpbGUgIi91c3IvbG9jYWwvbGliL3B5dGhvbjIuNy9kaXN0LXBhY2thZ2Vz
L3hlbi94ZW5kL1hlbmRDaGVja3BvaW50LnB5IiwgbGluZSAzNTgsIGluIHJl
c3RvcmUKICAgIHJhaXNlIGV4bgpWbUVycm9yOiBEaXNrIGltYWdlIGRvZXMg
bm90IGV4aXN0OiAvdmFyL2xpYi9saWJ2aXJ0L2ltYWdlcy91YnVudHUxMS5p
bWcKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKFhlbmREb21h
aW5JbmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25h
bWUnLCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5k
X3N0YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUn
XSwgWyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0n
LCBbJ2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBb
J3NlcmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBb
J2Jvb3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywg
W11dLCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScs
ICc6MC4wJ10sIFsnZmRhJywgJyddLCBbJ2ZkYicsICcnXSwgWydndWVzdF9v
c190eXBlJywgJ2RlZmF1bHQnXSwgWydoYXAnLCAxXSwgWydocGV0JywgMF0s
IFsnaXNhJywgMF0sIFsna2V5bWFwJywgJyddLCBbJ2xvY2FsdGltZScsIDBd
LCBbJ25vZ3JhcGhpYycsIDBdLCBbJ29wZW5nbCcsIDFdLCBbJ29vcycsIDFd
LCBbJ3BhZScsIDFdLCBbJ3BjaScsIFtdXSwgWydwY2lfbXNpdHJhbnNsYXRl
JywgMV0sIFsncGNpX3Bvd2VyX21nbXQnLCAwXSwgWydydGNfdGltZW9mZnNl
dCcsIDBdLCBbJ3NkbCcsIDBdLCBbJ3NvdW5kaHcnLCAnJ10sIFsnc3Rkdmdh
JywgMF0sIFsndGltZXJfbW9kZScsIDFdLCBbJ3VzYicsIDFdLCBbJ3VzYmRl
dmljZScsIFsnaG9zdDoxMjVmOmM5NmEnXV0sIFsndmNwdXMnLCAxXSwgWyd2
bmMnLCAxXSwgWyd2bmN1bnVzZWQnLCAxXSwgWyd2aXJpZGlhbicsIDBdLCBb
J3ZwdF9hbGlnbicsIDFdLCBbJ3hhdXRob3JpdHknLCAnL3Jvb3QvLlhhdXRo
b3JpdHknXSwgWyd4ZW5fcGxhdGZvcm1fcGNpJywgMV0sIFsnbWVtb3J5X3No
YXJpbmcnLCAwXSwgWyd2bmNwYXNzd2QnLCAnWFhYWFhYWFgnXSwgWyd0c2Nf
bW9kZScsIDBdLCBbJ25vbWlncmF0ZScsIDBdXV0sIFsnczNfaW50ZWdyaXR5
JywgMV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3VuYW1lJywgJ2ZpbGU6L3Zh
ci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1MTEuaW1nJ10sIFsnZGV2Jywg
J2hkYSddLCBbJ21vZGUnLCAndyddXV0sIFsnZGV2aWNlJywgWyd2YmQnLCBb
J3VuYW1lJywgJ3BoeTovZGV2L2Nkcm9tJ10sIFsnZGV2JywgJ2hkYzpjZHJv
bSddLCBbJ21vZGUnLCAnciddXV0sIFsnZGV2aWNlJywgWyd2aWYnLCBbJ2Jy
aWRnZScsICd4ZW5icjAnXSwgWyd0eXBlJywgJ2lvZW11J11dXV0pClsyMDE0
LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzoy
NDk4KSBYZW5kRG9tYWluSW5mby5jb25zdHJ1Y3REb21haW4KWzIwMTQtMDIt
MDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGJhbGxvb246MTg3KSBCYWxsb29u
OiAyNzQ4MzA0IEtpQiBmcmVlOyBuZWVkIDE2Mzg0OyBkb25lLgpbMjAxNC0w
Mi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoWGVuZERvbWFpbjo0NzYpIEFk
ZGluZyBEb21haW46IDMKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVC
VUcgKFhlbmREb21haW5JbmZvOjI4MzYpIFhlbmREb21haW5JbmZvLmluaXRE
b21haW46IDMgMjU2ClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVH
IChpbWFnZTozMzkpIE5vIFZOQyBwYXNzd2QgY29uZmlndXJlZCBmb3IgdmZi
IGFjY2VzcwpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1h
Z2U6ODkxKSBhcmdzOiBib290LCB2YWw6IGMKWzIwMTQtMDItMDUgMDg6MTA6
NTYgMTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZmRhLCB2YWw6IE5v
bmUKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGltYWdlOjg5
MSkgYXJnczogZmRiLCB2YWw6IE5vbmUKWzIwMTQtMDItMDUgMDg6MTA6NTYg
MTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogc291bmRodywgdmFsOiBO
b25lClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChpbWFnZTo4
OTEpIGFyZ3M6IGxvY2FsdGltZSwgdmFsOiAwClsyMDE0LTAyLTA1IDA4OjEw
OjU2IDE0MjVdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNlcmlhbCwgdmFs
OiBbJ3B0eSddClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChp
bWFnZTo4OTEpIGFyZ3M6IHN0ZC12Z2EsIHZhbDogMApbMjAxNC0wMi0wNSAw
ODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBpc2EsIHZh
bDogMApbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBhY3BpLCB2YWw6IDEKWzIwMTQtMDItMDUgMDg6MTA6NTYg
MTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogdXNiLCB2YWw6IDEKWzIw
MTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJn
czogdXNiZGV2aWNlLCB2YWw6IFsnaG9zdDoxMjVmOmM5NmEnXQpbMjAxNC0w
Mi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBn
ZnhfcGFzc3RocnUsIHZhbDogTm9uZQpbMjAxNC0wMi0wNSAwODoxMDo1NiAx
NDI1XSBJTkZPIChpbWFnZTo4MjIpIE5lZWQgdG8gY3JlYXRlIHBsYXRmb3Jt
IGRldmljZS5bZG9taWQ6M10KWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0g
REVCVUcgKFhlbmREb21haW5JbmZvOjI4NjMpIF9pbml0RG9tYWluOnNoYWRv
d19tZW1vcnk9MHgwLCBtZW1vcnlfc3RhdGljX21heD0weDQwMDAwMDAwLCBt
ZW1vcnlfc3RhdGljX21pbj0weDAuClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0
MjVdIElORk8gKGltYWdlOjE4MikgYnVpbGREb21haW4gb3M9aHZtIGRvbT0z
IHZjcHVzPTEKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGlt
YWdlOjk0OSkgZG9taWQgICAgICAgICAgPSAzClsyMDE0LTAyLTA1IDA4OjEw
OjU2IDE0MjVdIERFQlVHIChpbWFnZTo5NTApIGltYWdlICAgICAgICAgID0g
L3Vzci9saWIveGVuL2Jvb3QvaHZtbG9hZGVyClsyMDE0LTAyLTA1IDA4OjEw
OjU2IDE0MjVdIERFQlVHIChpbWFnZTo5NTEpIHN0b3JlX2V2dGNobiAgID0g
MgpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6OTUy
KSBtZW1zaXplICAgICAgICA9IDEwMjQKWzIwMTQtMDItMDUgMDg6MTA6NTYg
MTQyNV0gREVCVUcgKGltYWdlOjk1MykgdGFyZ2V0ICAgICAgICAgPSAxMDI0
ClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChpbWFnZTo5NTQp
IHZjcHVzICAgICAgICAgID0gMQpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1
XSBERUJVRyAoaW1hZ2U6OTU1KSB2Y3B1X2F2YWlsICAgICA9IDEKWzIwMTQt
MDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGltYWdlOjk1NikgYWNwaSAg
ICAgICAgICAgPSAxClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVH
IChpbWFnZTo5NTcpIGFwaWMgICAgICAgICAgID0gMQpbMjAxNC0wMi0wNSAw
ODoxMDo1NiAxNDI1XSBJTkZPIChYZW5kRG9tYWluSW5mbzoyMzU3KSBjcmVh
dGVEZXZpY2U6IHZmYiA6IHsndm5jdW51c2VkJzogMSwgJ290aGVyX2NvbmZp
Zyc6IHsndm5jdW51c2VkJzogMSwgJ3ZuYyc6ICcxJ30sICd2bmMnOiAnMScs
ICd1dWlkJzogJzU0MzRiMWU3LTk3YWItYjk5ZS1kOWQ2LTMzMWI5MTE1MTQ0
Zid9ClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChEZXZDb250
cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUnOiAn
MScsICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92ZmIvMy8wJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2Rl
dmljZS92ZmIvMC4KWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeyd2
bmN1bnVzZWQnOiAnMScsICdkb21haW4nOiAndWJ1bnR1MTEnLCAnZnJvbnRl
bmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92ZmIvMCcsICd1dWlkJzog
JzU0MzRiMWU3LTk3YWItYjk5ZS1kOWQ2LTMzMWI5MTE1MTQ0ZicsICdmcm9u
dGVuZC1pZCc6ICczJywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAn
dm5jJzogJzEnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92ZmIvMy8w
LgpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBJTkZPIChYZW5kRG9tYWlu
SW5mbzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZiZCA6IHsndXVpZCc6ICdiMDNj
OTNjOC0yYWIzLTA0MGUtODljZS00NWFjN2ZmZWUzMTcnLCAnYm9vdGFibGUn
OiAxLCAnZHJpdmVyJzogJ3BhcmF2aXJ0dWFsaXNlZCcsICdkZXYnOiAnaGRh
JywgJ3VuYW1lJzogJ2ZpbGU6L3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndyd9ClsyMDE0LTAyLTA1IDA4OjEwOjU2
IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVy
OiB3cml0aW5nIHsnYmFja2VuZC1pZCc6ICcwJywgJ3ZpcnR1YWwtZGV2aWNl
JzogJzc2OCcsICdkZXZpY2UtdHlwZSc6ICdkaXNrJywgJ3N0YXRlJzogJzEn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy83
NjgnfSB0byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZiZC83NjguClsyMDE0
LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjk3
KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTEx
JywgJ2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzc2
OCcsICd1dWlkJzogJ2IwM2M5M2M4LTJhYjMtMDQwZS04OWNlLTQ1YWM3ZmZl
ZTMxNycsICdib290YWJsZSc6ICcxJywgJ2Rldic6ICdoZGEnLCAnc3RhdGUn
OiAnMScsICdwYXJhbXMnOiAnL3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndycsICdvbmxpbmUnOiAnMScsICdmcm9u
dGVuZC1pZCc6ICczJywgJ3R5cGUnOiAnZmlsZSd9IHRvIC9sb2NhbC9kb21h
aW4vMC9iYWNrZW5kL3ZiZC8zLzc2OC4KWzIwMTQtMDItMDUgMDg6MTA6NTcg
MTQyNV0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2aWNl
OiB2YmQgOiB7J3V1aWQnOiAnNTU1ZDIwZmMtODZjYS04NjJmLTExOGEtOWM4
ODYxZWMwNDJiJywgJ2Jvb3RhYmxlJzogMCwgJ2RyaXZlcic6ICdwYXJhdmly
dHVhbGlzZWQnLCAnZGV2JzogJ2hkYzpjZHJvbScsICd1bmFtZSc6ICdwaHk6
L2Rldi9jZHJvbScsICdtb2RlJzogJ3InfQpbMjAxNC0wMi0wNSAwODoxMDo1
NyAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxl
cjogd3JpdGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmlj
ZSc6ICc1NjMyJywgJ2RldmljZS10eXBlJzogJ2Nkcm9tJywgJ3N0YXRlJzog
JzEnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQv
My81NjMyJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQvNTYzMi4K
WzIwMTQtMDItMDUgMDg6MTA6NTcgMTQyNV0gREVCVUcgKERldkNvbnRyb2xs
ZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1
bnR1MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92
YmQvNTYzMicsICd1dWlkJzogJzU1NWQyMGZjLTg2Y2EtODYyZi0xMThhLTlj
ODg2MWVjMDQyYicsICdib290YWJsZSc6ICcwJywgJ2Rldic6ICdoZGMnLCAn
c3RhdGUnOiAnMScsICdwYXJhbXMnOiAnL2Rldi9jZHJvbScsICdtb2RlJzog
J3InLCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQtaWQnOiAnMycsICd0eXBl
JzogJ3BoeSd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC8zLzU2
MzIuClsyMDE0LTAyLTA1IDA4OjEwOjU3IDE0MjVdIElORk8gKFhlbmREb21h
aW5JbmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmlmIDogeydicmlkZ2UnOiAn
eGVuYnIwJywgJ21hYyc6ICcwMDoxNjozZToyMzplYzo2MicsICd0eXBlJzog
J2lvZW11JywgJ3V1aWQnOiAnZjg4MDAxMWEtYzIwNC05MjdiLWQ5NjQtMGEy
YzA3N2I3YTlmJ30KWzIwMTQtMDItMDUgMDg6MTA6NTcgMTQyNV0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydz
dGF0ZSc6ICcxJywgJ2JhY2tlbmQtaWQnOiAnMCcsICdiYWNrZW5kJzogJy9s
b2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi8zLzAnfSB0byAvbG9jYWwvZG9t
YWluLzMvZGV2aWNlL3ZpZi8wLgpbMjAxNC0wMi0wNSAwODoxMDo1NyAxNDI1
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J2JyaWRnZSc6ICd4ZW5icjAnLCAnZG9tYWluJzogJ3VidW50dTEx
JywgJ2hhbmRsZSc6ICcwJywgJ3V1aWQnOiAnZjg4MDAxMWEtYzIwNC05Mjdi
LWQ5NjQtMGEyYzA3N2I3YTlmJywgJ3NjcmlwdCc6ICcvZXRjL3hlbi9zY3Jp
cHRzL3ZpZi1icmlkZ2UnLCAnbWFjJzogJzAwOjE2OjNlOjIzOmVjOjYyJywg
J2Zyb250ZW5kLWlkJzogJzMnLCAnc3RhdGUnOiAnMScsICdvbmxpbmUnOiAn
MScsICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZpZi8w
JywgJ3R5cGUnOiAnaW9lbXUnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2Vu
ZC92aWYvMy8wLgpbMjAxNC0wMi0wNSAwODoxMDo1NyAxNDI1XSBJTkZPIChp
bWFnZTo0MTgpIHNwYXduaW5nIGRldmljZSBtb2RlbHM6IC91c3IvbGliL3hl
bi9iaW4vcWVtdS1kbSBbJy91c3IvbGliL3hlbi9iaW4vcWVtdS1kbScsICct
ZCcsICczJywgJy1kb21haW4tbmFtZScsICd1YnVudHUxMScsICctdmlkZW9y
YW0nLCAnNCcsICctdm5jJywgJzEyNy4wLjAuMTowJywgJy12bmN1bnVzZWQn
LCAnLXZjcHVzJywgJzEnLCAnLXZjcHVfYXZhaWwnLCAnMHgxJywgJy1ib290
JywgJ2MnLCAnLXNlcmlhbCcsICdwdHknLCAnLWFjcGknLCAnLXVzYicsICct
dXNiZGV2aWNlJywgIlsnaG9zdDoxMjVmOmM5NmEnXSIsICctbmV0JywgJ25p
Yyx2bGFuPTEsbWFjYWRkcj0wMDoxNjozZToyMzplYzo2Mixtb2RlbD1ydGw4
MTM5JywgJy1uZXQnLCAndGFwLHZsYW49MSxpZm5hbWU9dGFwMy4wLGJyaWRn
ZT14ZW5icjAnLCAnLU0nLCAneGVuZnYnXQpbMjAxNC0wMi0wNSAwODoxMDo1
NyAxNDI1XSBJTkZPIChpbWFnZTo0NjcpIGRldmljZSBtb2RlbCBwaWQ6IDQw
NTUKWzIwMTQtMDItMDUgMDg6MTA6NTcgMTQyNV0gSU5GTyAoaW1hZ2U6NTkw
KSB3YWl0aW5nIGZvciBzZW50aW5lbF9maWZvClsyMDE0LTAyLTA1IDA4OjEw
OjU3IDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzozNDIwKSBTdG9yaW5n
IFZNIGRldGFpbHM6IHsnb25feGVuZF9zdG9wJzogJ2lnbm9yZScsICdwb29s
X25hbWUnOiAnUG9vbC0wJywgJ3NoYWRvd19tZW1vcnknOiAnOScsICd1dWlk
JzogJzYwMjQ1YTU0LWZmZGMtYzZjZS00ZGU0LTFlMjEwY2I2NjdjNycsICdv
bl9yZWJvb3QnOiAncmVzdGFydCcsICdzdGFydF90aW1lJzogJzEzOTE1NzUy
NTcuNTMnLCAnb25fcG93ZXJvZmYnOiAnZGVzdHJveScsICdib290bG9hZGVy
X2FyZ3MnOiAnJywgJ29uX3hlbmRfc3RhcnQnOiAnaWdub3JlJywgJ29uX2Ny
YXNoJzogJ3Jlc3RhcnQnLCAneGVuZC9yZXN0YXJ0X2NvdW50JzogJzAnLCAn
dmNwdXMnOiAnMScsICd2Y3B1X2F2YWlsJzogJzEnLCAnYm9vdGxvYWRlcic6
ICcnLCAnaW1hZ2UnOiAiKGh2bSAoa2VybmVsICcnKSAoc3VwZXJwYWdlcyAw
KSAodmlkZW9yYW0gNCkgKGhwZXQgMCkgKHN0ZHZnYSAwKSAobG9hZGVyIC91
c3IvbGliL3hlbi9ib290L2h2bWxvYWRlcikgKHhlbl9wbGF0Zm9ybV9wY2kg
MSkgKG9wZW5nbCAxKSAocnRjX3RpbWVvZmZzZXQgMCkgKHBjaSAoKSkgKGhh
cCAxKSAobG9jYWx0aW1lIDApICh0aW1lcl9tb2RlIDEpIChwY2lfbXNpdHJh
bnNsYXRlIDEpIChvb3MgMSkgKGFwaWMgMSkgKHNkbCAwKSAodXNiZGV2aWNl
IChob3N0OjEyNWY6Yzk2YSkpIChkaXNwbGF5IDowLjApICh2cHRfYWxpZ24g
MSkgKHNlcmlhbCBwdHkpICh2bmN1bnVzZWQgMSkgKGJvb3QgYykgKHBhZSAx
KSAodmlyaWRpYW4gMCkgKGFjcGkgMSkgKHZuYyAxKSAobm9ncmFwaGljIDAp
IChub21pZ3JhdGUgMCkgKHVzYiAxKSAodHNjX21vZGUgMCkgKGd1ZXN0X29z
X3R5cGUgZGVmYXVsdCkgKGRldmljZV9tb2RlbCAvdXNyL2xpYi94ZW4vYmlu
L3FlbXUtZG0pIChwY2lfcG93ZXJfbWdtdCAwKSAoeGF1dGhvcml0eSAvcm9v
dC8uWGF1dGhvcml0eSkgKGlzYSAwKSAobm90ZXMgKFNVU1BFTkRfQ0FOQ0VM
IDEpKSkiLCAnbmFtZSc6ICd1YnVudHUxMSd9ClsyMDE0LTAyLTA1IDA4OjEw
OjU3IDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxNzk0KSBTdG9yaW5n
IGRvbWFpbiBkZXRhaWxzOiB7J2NvbnNvbGUvcG9ydCc6ICczJywgJ2Rlc2Ny
aXB0aW9uJzogJycsICdjb25zb2xlL2xpbWl0JzogJzEwNDg1NzYnLCAnc3Rv
cmUvcG9ydCc6ICcyJywgJ3ZtJzogJy92bS82MDI0NWE1NC1mZmRjLWM2Y2Ut
NGRlNC0xZTIxMGNiNjY3YzcnLCAnZG9taWQnOiAnMycsICdpbWFnZS9zdXNw
ZW5kLWNhbmNlbCc6ICcxJywgJ2NwdS8wL2F2YWlsYWJpbGl0eSc6ICdvbmxp
bmUnLCAnbWVtb3J5L3RhcmdldCc6ICcxMDQ4NTc2JywgJ2NvbnRyb2wvcGxh
dGZvcm0tZmVhdHVyZS1tdWx0aXByb2Nlc3Nvci1zdXNwZW5kJzogJzEnLCAn
c3RvcmUvcmluZy1yZWYnOiAnMTA0NDQ3NicsICdjb25zb2xlL3R5cGUnOiAn
aW9lbXUnLCAnbmFtZSc6ICd1YnVudHUxMSd9ClsyMDE0LTAyLTA1IDA4OjEw
OjU3IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9s
bGVyOiB3cml0aW5nIHsnc3RhdGUnOiAnMScsICdiYWNrZW5kLWlkJzogJzAn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xl
LzMvMCd9IHRvIC9sb2NhbC9kb21haW4vMy9kZXZpY2UvY29uc29sZS8wLgpb
MjAxNC0wMi0wNSAwODoxMDo1NyAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxl
cjo5NykgRGV2Q29udHJvbGxlcjogd3JpdGluZyB7J2RvbWFpbic6ICd1YnVu
dHUxMScsICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL2Nv
bnNvbGUvMCcsICd1dWlkJzogJ2EzNzk5ZDAyLWM1M2EtYTRkOS0zYTQzLWFj
MzFhYzE0NWZlZCcsICdmcm9udGVuZC1pZCc6ICczJywgJ3N0YXRlJzogJzEn
LCAnbG9jYXRpb24nOiAnMycsICdvbmxpbmUnOiAnMScsICdwcm90b2NvbCc6
ICd2dDEwMCd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL2NvbnNvbGUv
My8wLgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29u
dHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdGFwMi4KWzIwMTQt
MDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5
KSBXYWl0aW5nIGZvciBkZXZpY2VzIHZpZi4KWzIwMTQtMDItMDUgMDg6MTA6
NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTQ0KSBXYWl0aW5nIGZv
ciAwLgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTg4MSkgWGVuZERvbWFpbkluZm8uaGFuZGxlU2h1dGRvd25X
YXRjaApbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29u
dHJvbGxlcjo2MjgpIGhvdHBsdWdTdGF0dXNDYWxsYmFjayAvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92aWYvMy8wL2hvdHBsdWctc3RhdHVzLgpbMjAxNC0w
Mi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjo2NDIp
IGhvdHBsdWdTdGF0dXNDYWxsYmFjayAxLgpbMjAxNC0wMi0wNSAwODoxMDo1
OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9y
IGRldmljZXMgdmtiZC4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVC
VUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIGlv
cG9ydHMuClsyMDE0LTAyLTA1IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZD
b250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyB0YXAuClsyMDE0
LTAyLTA1IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjEz
OSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2aWYyLgpbMjAxNC0wMi0wNSAwODox
MDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcg
Zm9yIGRldmljZXMgY29uc29sZS4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQy
NV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTQ0KSBXYWl0aW5nIGZvciAwLgpb
MjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxl
cjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdnNjc2kuClsyMDE0LTAyLTA1
IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2Fp
dGluZyBmb3IgZGV2aWNlcyB2YmQuClsyMDE0LTAyLTA1IDA4OjEwOjU4IDE0
MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgNzY4
LgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJv
bGxlcjo2MjgpIGhvdHBsdWdTdGF0dXNDYWxsYmFjayAvbG9jYWwvZG9tYWlu
LzAvYmFja2VuZC92YmQvMy83NjgvaG90cGx1Zy1zdGF0dXMuClsyMDE0LTAy
LTA1IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjY0Mikg
aG90cGx1Z1N0YXR1c0NhbGxiYWNrIDEuClsyMDE0LTAyLTA1IDA4OjEwOjU4
IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3Ig
NTYzMi4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNv
bnRyb2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2Rv
bWFpbi8wL2JhY2tlbmQvdmJkLzMvNTYzMi9ob3RwbHVnLXN0YXR1cy4KWzIw
MTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6
NjQyKSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgMS4KWzIwMTQtMDItMDUgMDg6
MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5n
IGZvciBkZXZpY2VzIGlycS4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0g
REVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2Vz
IHZmYi4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNv
bnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHBjaS4KWzIwMTQt
MDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5
KSBXYWl0aW5nIGZvciBkZXZpY2VzIHZ1c2IuClsyMDE0LTAyLTA1IDA4OjEw
OjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBm
b3IgZGV2aWNlcyB2dHBtLgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBJ
TkZPIChYZW5kRG9tYWluOjEyMjUpIERvbWFpbiB1YnVudHUxMSAoMykgdW5w
YXVzZWQuClsyMDE0LTAyLTA1IDA4OjE4OjM1IDE0MjVdIERFQlVHIChYZW5k
Q2hlY2twb2ludDoxMjQpIFt4Y19zYXZlXTogL3Vzci9saWIveGVuL2Jpbi94
Y19zYXZlIDI2IDMgMCAwIDUKWzIwMTQtMDItMDUgMDg6MTg6MzUgMTQyNV0g
SU5GTyAoWGVuZENoZWNrcG9pbnQ6NDIzKSB4Y19zYXZlOiBmYWlsZWQgdG8g
Z2V0IHRoZSBzdXNwZW5kIGV2dGNobiBwb3J0ClsyMDE0LTAyLTA1IDA4OjE4
OjM1IDE0MjVdIElORk8gKFhlbmRDaGVja3BvaW50OjQyMykgClsyMDE0LTAy
LTA1IDA4OjIwOjIxIDE0MjVdIERFQlVHIChYZW5kQ2hlY2twb2ludDozOTQp
IHN1c3BlbmQKWzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhl
bmRDaGVja3BvaW50OjEyNykgSW4gc2F2ZUlucHV0SGFuZGxlciBzdXNwZW5k
ClsyMDE0LTAyLTA1IDA4OjIwOjIxIDE0MjVdIERFQlVHIChYZW5kQ2hlY2tw
b2ludDoxMjkpIFN1c3BlbmRpbmcgMyAuLi4KWzIwMTQtMDItMDUgMDg6MjA6
MjEgMTQyNV0gREVCVUcgKFhlbmREb21haW5JbmZvOjUyNCkgWGVuZERvbWFp
bkluZm8uc2h1dGRvd24oc3VzcGVuZCkKWzIwMTQtMDItMDUgMDg6MjA6MjEg
MTQyNV0gREVCVUcgKFhlbmREb21haW5JbmZvOjE4ODEpIFhlbmREb21haW5J
bmZvLmhhbmRsZVNodXRkb3duV2F0Y2gKWzIwMTQtMDItMDUgMDg6MjA6MjEg
MTQyNV0gSU5GTyAoWGVuZERvbWFpbkluZm86NTQxKSBIVk0gc2F2ZTpyZW1v
dGUgc2h1dGRvd24gZG9tIDMhClsyMDE0LTAyLTA1IDA4OjIwOjIxIDE0MjVd
IElORk8gKFhlbmRDaGVja3BvaW50OjEzNSkgRG9tYWluIDMgc3VzcGVuZGVk
LgpbMjAxNC0wMi0wNSAwODoyMDoyMSAxNDI1XSBJTkZPIChYZW5kRG9tYWlu
SW5mbzoyMDc4KSBEb21haW4gaGFzIHNodXRkb3duOiBuYW1lPW1pZ3JhdGlu
Zy11YnVudHUxMSBpZD0zIHJlYXNvbj1zdXNwZW5kLgpbMjAxNC0wMi0wNSAw
ODoyMDoyMSAxNDI1XSBJTkZPIChpbWFnZTo1MzgpIHNpZ25hbERldmljZU1v
ZGVsOnJlc3RvcmUgZG0gc3RhdGUgdG8gcnVubmluZwpbMjAxNC0wMi0wNSAw
ODoyMDoyMSAxNDI1XSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6MTQ0KSBXcml0
dGVuIGRvbmUKWzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhl
bmREb21haW5JbmZvOjMwNzEpIFhlbmREb21haW5JbmZvLmRlc3Ryb3k6IGRv
bWlkPTMKWzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjI0MDEpIERlc3Ryb3lpbmcgZGV2aWNlIG1vZGVsClsyMDE0
LTAyLTA1IDA4OjIwOjIxIDE0MjVdIElORk8gKGltYWdlOjYxNSkgbWlncmF0
aW5nLXVidW50dTExIGRldmljZSBtb2RlbCB0ZXJtaW5hdGVkClsyMDE0LTAy
LTA1IDA4OjIwOjIxIDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDA4
KSBSZWxlYXNpbmcgZGV2aWNlcwpbMjAxNC0wMi0wNSAwODoyMDoyMSAxNDI1
XSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmlmLzAK
WzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhlbmREb21haW5J
bmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6IGRldmlj
ZUNsYXNzID0gdmlmLCBkZXZpY2UgPSB2aWYvMApbMjAxNC0wMi0wNSAwODoy
MDoyMSAxNDI1XSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3Zp
bmcgY29uc29sZS8wClsyMDE0LTAyLTA1IDA4OjIwOjIxIDE0MjVdIERFQlVH
IChYZW5kRG9tYWluSW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95
RGV2aWNlOiBkZXZpY2VDbGFzcyA9IGNvbnNvbGUsIGRldmljZSA9IGNvbnNv
bGUvMApbMjAxNC0wMi0wNSAwODoyMDoyMSAxNDI1XSBERUJVRyAoWGVuZERv
bWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmJkLzc2OApbMjAxNC0wMi0wNSAw
ODoyMDoyMSAxNDI1XSBERUJVRyAoWGVuZERvbWFpbkluZm86MTI3NikgWGVu
ZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2aWNlQ2xhc3MgPSB2YmQs
IGRldmljZSA9IHZiZC83NjgKWzIwMTQtMDItMDUgMDg6MjA6MjIgMTQyNV0g
REVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZiZC81NjMy
ClsyMDE0LTAyLTA1IDA4OjIwOjIyIDE0MjVdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZp
Y2VDbGFzcyA9IHZiZCwgZGV2aWNlID0gdmJkLzU2MzIKWzIwMTQtMDItMDUg
MDg6MjA6MjIgMTQyNV0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJl
bW92aW5nIHZmYi8wClsyMDE0LTAyLTA1IDA4OjIwOjIyIDE0MjVdIERFQlVH
IChYZW5kRG9tYWluSW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95
RGV2aWNlOiBkZXZpY2VDbGFzcyA9IHZmYiwgZGV2aWNlID0gdmZiLzAK

---2096837515-1980809517-1391583040=:24823
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

---2096837515-1980809517-1391583040=:24823--


From xen-devel-bounces@lists.xen.org Wed Feb 05 07:59:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 07:59:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAxNv-0003U6-JS; Wed, 05 Feb 2014 07:59:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WAwJp-0001tc-ER
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 06:50:45 +0000
Received: from [85.158.143.35:9493] by server-1.bemta-4.messagelabs.com id
	C8/71-31661-34FD1F25; Wed, 05 Feb 2014 06:50:43 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391583041!3200024!1
X-Originating-IP: [216.109.115.124]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 766 invoked from network); 5 Feb 2014 06:50:42 -0000
Received: from nm47-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm47-vm1.bullet.mail.bf1.yahoo.com) (216.109.115.124)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 06:50:42 -0000
Received: from [98.139.212.150] by nm47.bullet.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 06:50:41 -0000
Received: from [98.139.212.246] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 06:50:41 -0000
Received: from [127.0.0.1] by omp1055.mail.bf1.yahoo.com with NNFMP;
	05 Feb 2014 06:50:40 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 994483.78095.bm@omp1055.mail.bf1.yahoo.com
Received: (qmail 54337 invoked by uid 60001); 5 Feb 2014 06:50:40 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391583040; bh=HbBQPwAtTasg8GK7rBpkzSt94jqFdO3j7MB4Spvd6UA=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=OsMmxTpMq4chzz0T6DXFipW8lDE0BlWdYOo37RWIrb3XQg529uA3S0WhzKQHOnSRFcY0sOel1RBqGxCaE8kO+JyN1MmynakeNB6ZsqqZAmuQmkAMMytv7JvyoNPKoaclaVBOHVqkbGHBQ2OxchbuqlEJnPqVlqYqE8ZDveYoq9E=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=KBSeHB5+g39/5gn75nihWdlckpMoa6HqpLursIV9pXaUe0bVj6ZayXds9wI835hscgIVg+y0ylZSPSm8+eYYw/hpiUKBdLhNcf1UKrEMQt8ZEwj+aZ08Rox1lAL5LgHZVpj1wuIBNHKASrDsspun0VI049ENCcs7/QNEtea4RKk=;
X-YMail-OSG: orJVOMwVM1nx.dT9UplXNVQQNdr8OMGfyHaCatTiZgaLBNu
	HUF5QDDTmQbt47TuvJOQOiNVtvPZhM2C1eieQwiYFIA11_m8QqVy_3LCFjVc
	Ghf0bIqSApgcFPgqBWdaGPGjZ.joyzp0U4Podg.yEkSvLBKZ4HMYNUYTd1Sb
	D7jHQlfdNoZDy__m49XbC6fSrZZXp4m2HmP_CuEeRfDP18tlh_yVfbJSsXUG
	eiAjONJDuK8dByeaoEqz_Iz3uF23M2hoorg0DH.TnQdKMJlBHoo8X1z7G0v.
	RJF1C80gNxePRDEp7exNCRgpO168IRibCSrh.pjt7g8oNvaDqD2Cf1JIG364
	nhJ.8zq4l8tKHFp30hiN3f.WgkH5C8LIzKcgwdvJtLL5SGB6Uqv2MJq5owK3
	15qkKl6zjG2mVacG8M6Czjuy_P4GapCEJN300NQ87Esk7lkGY_ewsiFLElEM
	Bjc75Zr1YIhYqFkKk5h36t7NW758B_GmqJbUrEFukf4nIk0O81buvprjYE7s
	eF6yEwvMEPwxEHcw546sBkbaUZm9gzg--
Received: from [192.227.225.3] by web161802.mail.bf1.yahoo.com via HTTP;
	Tue, 04 Feb 2014 22:50:40 PST
X-Rocket-MIMEInfo: 002.001,
	SGVsbG8gTXIgT2xhZiwKSSB0cmllZCBmb3IgY2hhbmdlIGNvZGUgeGNfc2F2ZS5jIHRvIGF0dGVudGlvbsKgYmFja3BvcnRlZCwgQnV0IG5vdCBhbnN3ZXIgbWUgOi0oLi4uClNob3VsZCBJIGNoYW5nZSBjb2RlIHhjX2RvbWFpbl9zYXZlLmMgPyBpbiB0aGlzIGNvZGUgdXNlZCBmdW5jdGlvbiAnc3RhdGljIGludCBwcmludF9zdGF0cyguLi4pJyAuCkkgYXR0YWNoZSBmaWxlIHhlbmQubG9nIGxhdGVyIHVzZSBvZsKgQ29kZSBjaGFuZ2luZyB4Y19zYXZlLmM6bWFpbjoKCmludAptYWluKGludCBhcmdjLCBjaGEBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.174.629
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
Message-ID: <1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
Date: Tue, 4 Feb 2014 22:50:40 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140203131144.GA31275@aepfle.de>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="-2096837515-1980809517-1391583040=:24823"
X-Mailman-Approved-At: Wed, 05 Feb 2014 07:59:01 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---2096837515-1980809517-1391583040=:24823
Content-Type: multipart/alternative; boundary="-2096837515-1230453173-1391583040=:24823"

---2096837515-1230453173-1391583040=:24823
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit

Hello Mr Olaf,
I tried to change the code of xc_save.c to match the backported version, but it still does not answer me :-(...
Should I change xc_domain_save.c instead? That file uses the function 'static int print_stats(...)'.
I attach the xend.log file produced after changing xc_save.c:main() as follows:

int
main(int argc, char **argv)
{
    unsigned int maxit, max_f, lflags; //Change, added lflags...
    int io_fd, ret, port;
    struct save_callbacks callbacks;
    xentoollog_level lvl; //added...
    xentoollog_logger *l; //added...

    if (argc != 6)
        errx(1, "usage: %s iofd domid maxit maxf flags", argv[0]);

    io_fd = atoi(argv[1]);
    si.domid = atoi(argv[2]);
    maxit = atoi(argv[3]);
    max_f = atoi(argv[4]);
    si.flags = atoi(argv[5]);

    si.suspend_evtchn = -1;

    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL; //added...
    lflags = XTL_STDIOSTREAM_SHOW_PID | XTL_STDIOSTREAM_HIDE_PROGRESS; //added...
    l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags); //added...
    si.xch = xc_interface_open(l, 0, 0); //Change, original: si.xch = xc_interface_open(0, 0, 0);
    if (!si.xch)
        errx(1, "failed to open control interface");
    si.xce = xc_evtchn_open(NULL, 0);
    if (si.xce == NULL)
        warnx("failed to open event channel handle");
    else
    {
        port = xs_suspend_evtchn_port(si.domid);

        if (port < 0)
            warnx("failed to get the suspend evtchn port\n");
        else
        {
            si.suspend_evtchn =
                xc_suspend_evtchn_init(si.xch, si.xce, si.domid, port);

            if (si.suspend_evtchn < 0)
                warnx("suspend event channel initialization failed, "
                      "using slow path");
        }
    }
    memset(&callbacks, 0, sizeof(callbacks));
    callbacks.suspend = suspend;
    callbacks.switch_qemu_logdirty = switch_qemu_logdirty;
    ret = xc_domain_save(si.xch, io_fd, si.domid, maxit, max_f, si.flags,
                         &callbacks, !!(si.flags & XCFLAGS_HVM));
    //in Xen 4.3.1 a vm_generationid_addr parameter was added to xc_domain_save(...), but it is not needed here

    if (si.suspend_evtchn > 0)
        xc_suspend_evtchn_release(si.xch, si.xce, si.domid, si.suspend_evtchn);

    if (si.xce > 0)
        xc_evtchn_close(si.xce);

    xc_interface_close(si.xch);

    return ret;
}

Adel Amani
M.Sc. Candidate@Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir



On Monday, February 3, 2014 4:41 PM, Olaf Hering <olaf@aepfle.de> wrote:

On Mon, Feb 03, Adel Amani wrote:

> Thanks, how i define logger for xc_interface_open to output print?!

See the example I gave in my reply. I quoted it again (see below) for
your convenience.

> can i use of code xc_save.c in xen 4.3.1 for logger in xen 4.1.2?!

If all required changes are backported, most likely yes.


Olaf


> For an example how a logger could look like see the xc_interface_open
> call in tools/xenpaging/xenpaging.c.
---2096837515-1230453173-1391583040=:24823--
---2096837515-1980809517-1391583040=:24823
Content-Type: application/octet-stream; name="xend.log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="xend.log"

IEZpbGUgIi91c3IvbG9jYWwvbGliL3B5dGhvbjIuNy9kaXN0LXBhY2thZ2Vz
L3hlbi94ZW5kL1hlbmRDaGVja3BvaW50LnB5IiwgbGluZSAzNTgsIGluIHJl
c3RvcmUKICAgIHJhaXNlIGV4bgpWbUVycm9yOiBEaXNrIGltYWdlIGRvZXMg
bm90IGV4aXN0OiAvdmFyL2xpYi9saWJ2aXJ0L2ltYWdlcy91YnVudHUxMS5p
bWcKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKFhlbmREb21h
aW5JbmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25h
bWUnLCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5k
X3N0YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUn
XSwgWyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0n
LCBbJ2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBb
J3NlcmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBb
J2Jvb3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywg
W11dLCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScs
ICc6MC4wJ10sIFsnZmRhJywgJyddLCBbJ2ZkYicsICcnXSwgWydndWVzdF9v
c190eXBlJywgJ2RlZmF1bHQnXSwgWydoYXAnLCAxXSwgWydocGV0JywgMF0s
IFsnaXNhJywgMF0sIFsna2V5bWFwJywgJyddLCBbJ2xvY2FsdGltZScsIDBd
LCBbJ25vZ3JhcGhpYycsIDBdLCBbJ29wZW5nbCcsIDFdLCBbJ29vcycsIDFd
LCBbJ3BhZScsIDFdLCBbJ3BjaScsIFtdXSwgWydwY2lfbXNpdHJhbnNsYXRl
JywgMV0sIFsncGNpX3Bvd2VyX21nbXQnLCAwXSwgWydydGNfdGltZW9mZnNl
dCcsIDBdLCBbJ3NkbCcsIDBdLCBbJ3NvdW5kaHcnLCAnJ10sIFsnc3Rkdmdh
JywgMF0sIFsndGltZXJfbW9kZScsIDFdLCBbJ3VzYicsIDFdLCBbJ3VzYmRl
dmljZScsIFsnaG9zdDoxMjVmOmM5NmEnXV0sIFsndmNwdXMnLCAxXSwgWyd2
bmMnLCAxXSwgWyd2bmN1bnVzZWQnLCAxXSwgWyd2aXJpZGlhbicsIDBdLCBb
J3ZwdF9hbGlnbicsIDFdLCBbJ3hhdXRob3JpdHknLCAnL3Jvb3QvLlhhdXRo
b3JpdHknXSwgWyd4ZW5fcGxhdGZvcm1fcGNpJywgMV0sIFsnbWVtb3J5X3No
YXJpbmcnLCAwXSwgWyd2bmNwYXNzd2QnLCAnWFhYWFhYWFgnXSwgWyd0c2Nf
bW9kZScsIDBdLCBbJ25vbWlncmF0ZScsIDBdXV0sIFsnczNfaW50ZWdyaXR5
JywgMV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3VuYW1lJywgJ2ZpbGU6L3Zh
ci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1MTEuaW1nJ10sIFsnZGV2Jywg
J2hkYSddLCBbJ21vZGUnLCAndyddXV0sIFsnZGV2aWNlJywgWyd2YmQnLCBb
J3VuYW1lJywgJ3BoeTovZGV2L2Nkcm9tJ10sIFsnZGV2JywgJ2hkYzpjZHJv
bSddLCBbJ21vZGUnLCAnciddXV0sIFsnZGV2aWNlJywgWyd2aWYnLCBbJ2Jy
aWRnZScsICd4ZW5icjAnXSwgWyd0eXBlJywgJ2lvZW11J11dXV0pClsyMDE0
LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzoy
NDk4KSBYZW5kRG9tYWluSW5mby5jb25zdHJ1Y3REb21haW4KWzIwMTQtMDIt
MDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGJhbGxvb246MTg3KSBCYWxsb29u
OiAyNzQ4MzA0IEtpQiBmcmVlOyBuZWVkIDE2Mzg0OyBkb25lLgpbMjAxNC0w
Mi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoWGVuZERvbWFpbjo0NzYpIEFk
ZGluZyBEb21haW46IDMKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVC
VUcgKFhlbmREb21haW5JbmZvOjI4MzYpIFhlbmREb21haW5JbmZvLmluaXRE
b21haW46IDMgMjU2ClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVH
IChpbWFnZTozMzkpIE5vIFZOQyBwYXNzd2QgY29uZmlndXJlZCBmb3IgdmZi
IGFjY2VzcwpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1h
Z2U6ODkxKSBhcmdzOiBib290LCB2YWw6IGMKWzIwMTQtMDItMDUgMDg6MTA6
NTYgMTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZmRhLCB2YWw6IE5v
bmUKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGltYWdlOjg5
MSkgYXJnczogZmRiLCB2YWw6IE5vbmUKWzIwMTQtMDItMDUgMDg6MTA6NTYg
MTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogc291bmRodywgdmFsOiBO
b25lClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChpbWFnZTo4
OTEpIGFyZ3M6IGxvY2FsdGltZSwgdmFsOiAwClsyMDE0LTAyLTA1IDA4OjEw
OjU2IDE0MjVdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNlcmlhbCwgdmFs
OiBbJ3B0eSddClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChp
bWFnZTo4OTEpIGFyZ3M6IHN0ZC12Z2EsIHZhbDogMApbMjAxNC0wMi0wNSAw
ODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBpc2EsIHZh
bDogMApbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBhY3BpLCB2YWw6IDEKWzIwMTQtMDItMDUgMDg6MTA6NTYg
MTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogdXNiLCB2YWw6IDEKWzIw
MTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGltYWdlOjg5MSkgYXJn
czogdXNiZGV2aWNlLCB2YWw6IFsnaG9zdDoxMjVmOmM5NmEnXQpbMjAxNC0w
Mi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBn
ZnhfcGFzc3RocnUsIHZhbDogTm9uZQpbMjAxNC0wMi0wNSAwODoxMDo1NiAx
NDI1XSBJTkZPIChpbWFnZTo4MjIpIE5lZWQgdG8gY3JlYXRlIHBsYXRmb3Jt
IGRldmljZS5bZG9taWQ6M10KWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0g
REVCVUcgKFhlbmREb21haW5JbmZvOjI4NjMpIF9pbml0RG9tYWluOnNoYWRv
d19tZW1vcnk9MHgwLCBtZW1vcnlfc3RhdGljX21heD0weDQwMDAwMDAwLCBt
ZW1vcnlfc3RhdGljX21pbj0weDAuClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0
MjVdIElORk8gKGltYWdlOjE4MikgYnVpbGREb21haW4gb3M9aHZtIGRvbT0z
IHZjcHVzPTEKWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGlt
YWdlOjk0OSkgZG9taWQgICAgICAgICAgPSAzClsyMDE0LTAyLTA1IDA4OjEw
OjU2IDE0MjVdIERFQlVHIChpbWFnZTo5NTApIGltYWdlICAgICAgICAgID0g
L3Vzci9saWIveGVuL2Jvb3QvaHZtbG9hZGVyClsyMDE0LTAyLTA1IDA4OjEw
OjU2IDE0MjVdIERFQlVHIChpbWFnZTo5NTEpIHN0b3JlX2V2dGNobiAgID0g
MgpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBERUJVRyAoaW1hZ2U6OTUy
KSBtZW1zaXplICAgICAgICA9IDEwMjQKWzIwMTQtMDItMDUgMDg6MTA6NTYg
MTQyNV0gREVCVUcgKGltYWdlOjk1MykgdGFyZ2V0ICAgICAgICAgPSAxMDI0
ClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChpbWFnZTo5NTQp
IHZjcHVzICAgICAgICAgID0gMQpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1
XSBERUJVRyAoaW1hZ2U6OTU1KSB2Y3B1X2F2YWlsICAgICA9IDEKWzIwMTQt
MDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcgKGltYWdlOjk1NikgYWNwaSAg
ICAgICAgICAgPSAxClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVH
IChpbWFnZTo5NTcpIGFwaWMgICAgICAgICAgID0gMQpbMjAxNC0wMi0wNSAw
ODoxMDo1NiAxNDI1XSBJTkZPIChYZW5kRG9tYWluSW5mbzoyMzU3KSBjcmVh
dGVEZXZpY2U6IHZmYiA6IHsndm5jdW51c2VkJzogMSwgJ290aGVyX2NvbmZp
Zyc6IHsndm5jdW51c2VkJzogMSwgJ3ZuYyc6ICcxJ30sICd2bmMnOiAnMScs
ICd1dWlkJzogJzU0MzRiMWU3LTk3YWItYjk5ZS1kOWQ2LTMzMWI5MTE1MTQ0
Zid9ClsyMDE0LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChEZXZDb250
cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUnOiAn
MScsICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92ZmIvMy8wJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2Rl
dmljZS92ZmIvMC4KWzIwMTQtMDItMDUgMDg6MTA6NTYgMTQyNV0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeyd2
bmN1bnVzZWQnOiAnMScsICdkb21haW4nOiAndWJ1bnR1MTEnLCAnZnJvbnRl
bmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92ZmIvMCcsICd1dWlkJzog
JzU0MzRiMWU3LTk3YWItYjk5ZS1kOWQ2LTMzMWI5MTE1MTQ0ZicsICdmcm9u
dGVuZC1pZCc6ICczJywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAn
dm5jJzogJzEnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92ZmIvMy8w
LgpbMjAxNC0wMi0wNSAwODoxMDo1NiAxNDI1XSBJTkZPIChYZW5kRG9tYWlu
SW5mbzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZiZCA6IHsndXVpZCc6ICdiMDNj
OTNjOC0yYWIzLTA0MGUtODljZS00NWFjN2ZmZWUzMTcnLCAnYm9vdGFibGUn
OiAxLCAnZHJpdmVyJzogJ3BhcmF2aXJ0dWFsaXNlZCcsICdkZXYnOiAnaGRh
JywgJ3VuYW1lJzogJ2ZpbGU6L3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndyd9ClsyMDE0LTAyLTA1IDA4OjEwOjU2
IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVy
OiB3cml0aW5nIHsnYmFja2VuZC1pZCc6ICcwJywgJ3ZpcnR1YWwtZGV2aWNl
JzogJzc2OCcsICdkZXZpY2UtdHlwZSc6ICdkaXNrJywgJ3N0YXRlJzogJzEn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy83
NjgnfSB0byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZiZC83NjguClsyMDE0
LTAyLTA1IDA4OjEwOjU2IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjk3
KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTEx
JywgJ2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzc2
OCcsICd1dWlkJzogJ2IwM2M5M2M4LTJhYjMtMDQwZS04OWNlLTQ1YWM3ZmZl
ZTMxNycsICdib290YWJsZSc6ICcxJywgJ2Rldic6ICdoZGEnLCAnc3RhdGUn
OiAnMScsICdwYXJhbXMnOiAnL3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndycsICdvbmxpbmUnOiAnMScsICdmcm9u
dGVuZC1pZCc6ICczJywgJ3R5cGUnOiAnZmlsZSd9IHRvIC9sb2NhbC9kb21h
aW4vMC9iYWNrZW5kL3ZiZC8zLzc2OC4KWzIwMTQtMDItMDUgMDg6MTA6NTcg
MTQyNV0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2aWNl
OiB2YmQgOiB7J3V1aWQnOiAnNTU1ZDIwZmMtODZjYS04NjJmLTExOGEtOWM4
ODYxZWMwNDJiJywgJ2Jvb3RhYmxlJzogMCwgJ2RyaXZlcic6ICdwYXJhdmly
dHVhbGlzZWQnLCAnZGV2JzogJ2hkYzpjZHJvbScsICd1bmFtZSc6ICdwaHk6
L2Rldi9jZHJvbScsICdtb2RlJzogJ3InfQpbMjAxNC0wMi0wNSAwODoxMDo1
NyAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxl
cjogd3JpdGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmlj
ZSc6ICc1NjMyJywgJ2RldmljZS10eXBlJzogJ2Nkcm9tJywgJ3N0YXRlJzog
JzEnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQv
My81NjMyJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQvNTYzMi4K
WzIwMTQtMDItMDUgMDg6MTA6NTcgMTQyNV0gREVCVUcgKERldkNvbnRyb2xs
ZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1
bnR1MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92
YmQvNTYzMicsICd1dWlkJzogJzU1NWQyMGZjLTg2Y2EtODYyZi0xMThhLTlj
ODg2MWVjMDQyYicsICdib290YWJsZSc6ICcwJywgJ2Rldic6ICdoZGMnLCAn
c3RhdGUnOiAnMScsICdwYXJhbXMnOiAnL2Rldi9jZHJvbScsICdtb2RlJzog
J3InLCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQtaWQnOiAnMycsICd0eXBl
JzogJ3BoeSd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC8zLzU2
MzIuClsyMDE0LTAyLTA1IDA4OjEwOjU3IDE0MjVdIElORk8gKFhlbmREb21h
aW5JbmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmlmIDogeydicmlkZ2UnOiAn
eGVuYnIwJywgJ21hYyc6ICcwMDoxNjozZToyMzplYzo2MicsICd0eXBlJzog
J2lvZW11JywgJ3V1aWQnOiAnZjg4MDAxMWEtYzIwNC05MjdiLWQ5NjQtMGEy
YzA3N2I3YTlmJ30KWzIwMTQtMDItMDUgMDg6MTA6NTcgMTQyNV0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydz
dGF0ZSc6ICcxJywgJ2JhY2tlbmQtaWQnOiAnMCcsICdiYWNrZW5kJzogJy9s
b2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi8zLzAnfSB0byAvbG9jYWwvZG9t
YWluLzMvZGV2aWNlL3ZpZi8wLgpbMjAxNC0wMi0wNSAwODoxMDo1NyAxNDI1
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J2JyaWRnZSc6ICd4ZW5icjAnLCAnZG9tYWluJzogJ3VidW50dTEx
JywgJ2hhbmRsZSc6ICcwJywgJ3V1aWQnOiAnZjg4MDAxMWEtYzIwNC05Mjdi
LWQ5NjQtMGEyYzA3N2I3YTlmJywgJ3NjcmlwdCc6ICcvZXRjL3hlbi9zY3Jp
cHRzL3ZpZi1icmlkZ2UnLCAnbWFjJzogJzAwOjE2OjNlOjIzOmVjOjYyJywg
J2Zyb250ZW5kLWlkJzogJzMnLCAnc3RhdGUnOiAnMScsICdvbmxpbmUnOiAn
MScsICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZpZi8w
JywgJ3R5cGUnOiAnaW9lbXUnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2Vu
ZC92aWYvMy8wLgpbMjAxNC0wMi0wNSAwODoxMDo1NyAxNDI1XSBJTkZPIChp
bWFnZTo0MTgpIHNwYXduaW5nIGRldmljZSBtb2RlbHM6IC91c3IvbGliL3hl
bi9iaW4vcWVtdS1kbSBbJy91c3IvbGliL3hlbi9iaW4vcWVtdS1kbScsICct
ZCcsICczJywgJy1kb21haW4tbmFtZScsICd1YnVudHUxMScsICctdmlkZW9y
YW0nLCAnNCcsICctdm5jJywgJzEyNy4wLjAuMTowJywgJy12bmN1bnVzZWQn
LCAnLXZjcHVzJywgJzEnLCAnLXZjcHVfYXZhaWwnLCAnMHgxJywgJy1ib290
JywgJ2MnLCAnLXNlcmlhbCcsICdwdHknLCAnLWFjcGknLCAnLXVzYicsICct
dXNiZGV2aWNlJywgIlsnaG9zdDoxMjVmOmM5NmEnXSIsICctbmV0JywgJ25p
Yyx2bGFuPTEsbWFjYWRkcj0wMDoxNjozZToyMzplYzo2Mixtb2RlbD1ydGw4
MTM5JywgJy1uZXQnLCAndGFwLHZsYW49MSxpZm5hbWU9dGFwMy4wLGJyaWRn
ZT14ZW5icjAnLCAnLU0nLCAneGVuZnYnXQpbMjAxNC0wMi0wNSAwODoxMDo1
NyAxNDI1XSBJTkZPIChpbWFnZTo0NjcpIGRldmljZSBtb2RlbCBwaWQ6IDQw
NTUKWzIwMTQtMDItMDUgMDg6MTA6NTcgMTQyNV0gSU5GTyAoaW1hZ2U6NTkw
KSB3YWl0aW5nIGZvciBzZW50aW5lbF9maWZvClsyMDE0LTAyLTA1IDA4OjEw
OjU3IDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzozNDIwKSBTdG9yaW5n
IFZNIGRldGFpbHM6IHsnb25feGVuZF9zdG9wJzogJ2lnbm9yZScsICdwb29s
X25hbWUnOiAnUG9vbC0wJywgJ3NoYWRvd19tZW1vcnknOiAnOScsICd1dWlk
JzogJzYwMjQ1YTU0LWZmZGMtYzZjZS00ZGU0LTFlMjEwY2I2NjdjNycsICdv
bl9yZWJvb3QnOiAncmVzdGFydCcsICdzdGFydF90aW1lJzogJzEzOTE1NzUy
NTcuNTMnLCAnb25fcG93ZXJvZmYnOiAnZGVzdHJveScsICdib290bG9hZGVy
X2FyZ3MnOiAnJywgJ29uX3hlbmRfc3RhcnQnOiAnaWdub3JlJywgJ29uX2Ny
YXNoJzogJ3Jlc3RhcnQnLCAneGVuZC9yZXN0YXJ0X2NvdW50JzogJzAnLCAn
dmNwdXMnOiAnMScsICd2Y3B1X2F2YWlsJzogJzEnLCAnYm9vdGxvYWRlcic6
ICcnLCAnaW1hZ2UnOiAiKGh2bSAoa2VybmVsICcnKSAoc3VwZXJwYWdlcyAw
KSAodmlkZW9yYW0gNCkgKGhwZXQgMCkgKHN0ZHZnYSAwKSAobG9hZGVyIC91
c3IvbGliL3hlbi9ib290L2h2bWxvYWRlcikgKHhlbl9wbGF0Zm9ybV9wY2kg
MSkgKG9wZW5nbCAxKSAocnRjX3RpbWVvZmZzZXQgMCkgKHBjaSAoKSkgKGhh
cCAxKSAobG9jYWx0aW1lIDApICh0aW1lcl9tb2RlIDEpIChwY2lfbXNpdHJh
bnNsYXRlIDEpIChvb3MgMSkgKGFwaWMgMSkgKHNkbCAwKSAodXNiZGV2aWNl
IChob3N0OjEyNWY6Yzk2YSkpIChkaXNwbGF5IDowLjApICh2cHRfYWxpZ24g
MSkgKHNlcmlhbCBwdHkpICh2bmN1bnVzZWQgMSkgKGJvb3QgYykgKHBhZSAx
KSAodmlyaWRpYW4gMCkgKGFjcGkgMSkgKHZuYyAxKSAobm9ncmFwaGljIDAp
IChub21pZ3JhdGUgMCkgKHVzYiAxKSAodHNjX21vZGUgMCkgKGd1ZXN0X29z
X3R5cGUgZGVmYXVsdCkgKGRldmljZV9tb2RlbCAvdXNyL2xpYi94ZW4vYmlu
L3FlbXUtZG0pIChwY2lfcG93ZXJfbWdtdCAwKSAoeGF1dGhvcml0eSAvcm9v
dC8uWGF1dGhvcml0eSkgKGlzYSAwKSAobm90ZXMgKFNVU1BFTkRfQ0FOQ0VM
IDEpKSkiLCAnbmFtZSc6ICd1YnVudHUxMSd9ClsyMDE0LTAyLTA1IDA4OjEw
OjU3IDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxNzk0KSBTdG9yaW5n
IGRvbWFpbiBkZXRhaWxzOiB7J2NvbnNvbGUvcG9ydCc6ICczJywgJ2Rlc2Ny
aXB0aW9uJzogJycsICdjb25zb2xlL2xpbWl0JzogJzEwNDg1NzYnLCAnc3Rv
cmUvcG9ydCc6ICcyJywgJ3ZtJzogJy92bS82MDI0NWE1NC1mZmRjLWM2Y2Ut
NGRlNC0xZTIxMGNiNjY3YzcnLCAnZG9taWQnOiAnMycsICdpbWFnZS9zdXNw
ZW5kLWNhbmNlbCc6ICcxJywgJ2NwdS8wL2F2YWlsYWJpbGl0eSc6ICdvbmxp
bmUnLCAnbWVtb3J5L3RhcmdldCc6ICcxMDQ4NTc2JywgJ2NvbnRyb2wvcGxh
dGZvcm0tZmVhdHVyZS1tdWx0aXByb2Nlc3Nvci1zdXNwZW5kJzogJzEnLCAn
c3RvcmUvcmluZy1yZWYnOiAnMTA0NDQ3NicsICdjb25zb2xlL3R5cGUnOiAn
aW9lbXUnLCAnbmFtZSc6ICd1YnVudHUxMSd9ClsyMDE0LTAyLTA1IDA4OjEw
OjU3IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9s
bGVyOiB3cml0aW5nIHsnc3RhdGUnOiAnMScsICdiYWNrZW5kLWlkJzogJzAn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xl
LzMvMCd9IHRvIC9sb2NhbC9kb21haW4vMy9kZXZpY2UvY29uc29sZS8wLgpb
MjAxNC0wMi0wNSAwODoxMDo1NyAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxl
cjo5NykgRGV2Q29udHJvbGxlcjogd3JpdGluZyB7J2RvbWFpbic6ICd1YnVu
dHUxMScsICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL2Nv
bnNvbGUvMCcsICd1dWlkJzogJ2EzNzk5ZDAyLWM1M2EtYTRkOS0zYTQzLWFj
MzFhYzE0NWZlZCcsICdmcm9udGVuZC1pZCc6ICczJywgJ3N0YXRlJzogJzEn
LCAnbG9jYXRpb24nOiAnMycsICdvbmxpbmUnOiAnMScsICdwcm90b2NvbCc6
ICd2dDEwMCd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL2NvbnNvbGUv
My8wLgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29u
dHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdGFwMi4KWzIwMTQt
MDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5
KSBXYWl0aW5nIGZvciBkZXZpY2VzIHZpZi4KWzIwMTQtMDItMDUgMDg6MTA6
NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTQ0KSBXYWl0aW5nIGZv
ciAwLgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTg4MSkgWGVuZERvbWFpbkluZm8uaGFuZGxlU2h1dGRvd25X
YXRjaApbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29u
dHJvbGxlcjo2MjgpIGhvdHBsdWdTdGF0dXNDYWxsYmFjayAvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92aWYvMy8wL2hvdHBsdWctc3RhdHVzLgpbMjAxNC0w
Mi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjo2NDIp
IGhvdHBsdWdTdGF0dXNDYWxsYmFjayAxLgpbMjAxNC0wMi0wNSAwODoxMDo1
OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9y
IGRldmljZXMgdmtiZC4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVC
VUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIGlv
cG9ydHMuClsyMDE0LTAyLTA1IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZD
b250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyB0YXAuClsyMDE0
LTAyLTA1IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjEz
OSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2aWYyLgpbMjAxNC0wMi0wNSAwODox
MDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcg
Zm9yIGRldmljZXMgY29uc29sZS4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQy
NV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTQ0KSBXYWl0aW5nIGZvciAwLgpb
MjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJvbGxl
cjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdnNjc2kuClsyMDE0LTAyLTA1
IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2Fp
dGluZyBmb3IgZGV2aWNlcyB2YmQuClsyMDE0LTAyLTA1IDA4OjEwOjU4IDE0
MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgNzY4
LgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBERUJVRyAoRGV2Q29udHJv
bGxlcjo2MjgpIGhvdHBsdWdTdGF0dXNDYWxsYmFjayAvbG9jYWwvZG9tYWlu
LzAvYmFja2VuZC92YmQvMy83NjgvaG90cGx1Zy1zdGF0dXMuClsyMDE0LTAy
LTA1IDA4OjEwOjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjY0Mikg
aG90cGx1Z1N0YXR1c0NhbGxiYWNrIDEuClsyMDE0LTAyLTA1IDA4OjEwOjU4
IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3Ig
NTYzMi4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNv
bnRyb2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2Rv
bWFpbi8wL2JhY2tlbmQvdmJkLzMvNTYzMi9ob3RwbHVnLXN0YXR1cy4KWzIw
MTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6
NjQyKSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgMS4KWzIwMTQtMDItMDUgMDg6
MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5n
IGZvciBkZXZpY2VzIGlycS4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0g
REVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2Vz
IHZmYi4KWzIwMTQtMDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNv
bnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHBjaS4KWzIwMTQt
MDItMDUgMDg6MTA6NTggMTQyNV0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5
KSBXYWl0aW5nIGZvciBkZXZpY2VzIHZ1c2IuClsyMDE0LTAyLTA1IDA4OjEw
OjU4IDE0MjVdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBm
b3IgZGV2aWNlcyB2dHBtLgpbMjAxNC0wMi0wNSAwODoxMDo1OCAxNDI1XSBJ
TkZPIChYZW5kRG9tYWluOjEyMjUpIERvbWFpbiB1YnVudHUxMSAoMykgdW5w
YXVzZWQuClsyMDE0LTAyLTA1IDA4OjE4OjM1IDE0MjVdIERFQlVHIChYZW5k
Q2hlY2twb2ludDoxMjQpIFt4Y19zYXZlXTogL3Vzci9saWIveGVuL2Jpbi94
Y19zYXZlIDI2IDMgMCAwIDUKWzIwMTQtMDItMDUgMDg6MTg6MzUgMTQyNV0g
SU5GTyAoWGVuZENoZWNrcG9pbnQ6NDIzKSB4Y19zYXZlOiBmYWlsZWQgdG8g
Z2V0IHRoZSBzdXNwZW5kIGV2dGNobiBwb3J0ClsyMDE0LTAyLTA1IDA4OjE4
OjM1IDE0MjVdIElORk8gKFhlbmRDaGVja3BvaW50OjQyMykgClsyMDE0LTAy
LTA1IDA4OjIwOjIxIDE0MjVdIERFQlVHIChYZW5kQ2hlY2twb2ludDozOTQp
IHN1c3BlbmQKWzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhl
bmRDaGVja3BvaW50OjEyNykgSW4gc2F2ZUlucHV0SGFuZGxlciBzdXNwZW5k
ClsyMDE0LTAyLTA1IDA4OjIwOjIxIDE0MjVdIERFQlVHIChYZW5kQ2hlY2tw
b2ludDoxMjkpIFN1c3BlbmRpbmcgMyAuLi4KWzIwMTQtMDItMDUgMDg6MjA6
MjEgMTQyNV0gREVCVUcgKFhlbmREb21haW5JbmZvOjUyNCkgWGVuZERvbWFp
bkluZm8uc2h1dGRvd24oc3VzcGVuZCkKWzIwMTQtMDItMDUgMDg6MjA6MjEg
MTQyNV0gREVCVUcgKFhlbmREb21haW5JbmZvOjE4ODEpIFhlbmREb21haW5J
bmZvLmhhbmRsZVNodXRkb3duV2F0Y2gKWzIwMTQtMDItMDUgMDg6MjA6MjEg
MTQyNV0gSU5GTyAoWGVuZERvbWFpbkluZm86NTQxKSBIVk0gc2F2ZTpyZW1v
dGUgc2h1dGRvd24gZG9tIDMhClsyMDE0LTAyLTA1IDA4OjIwOjIxIDE0MjVd
IElORk8gKFhlbmRDaGVja3BvaW50OjEzNSkgRG9tYWluIDMgc3VzcGVuZGVk
LgpbMjAxNC0wMi0wNSAwODoyMDoyMSAxNDI1XSBJTkZPIChYZW5kRG9tYWlu
SW5mbzoyMDc4KSBEb21haW4gaGFzIHNodXRkb3duOiBuYW1lPW1pZ3JhdGlu
Zy11YnVudHUxMSBpZD0zIHJlYXNvbj1zdXNwZW5kLgpbMjAxNC0wMi0wNSAw
ODoyMDoyMSAxNDI1XSBJTkZPIChpbWFnZTo1MzgpIHNpZ25hbERldmljZU1v
ZGVsOnJlc3RvcmUgZG0gc3RhdGUgdG8gcnVubmluZwpbMjAxNC0wMi0wNSAw
ODoyMDoyMSAxNDI1XSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6MTQ0KSBXcml0
dGVuIGRvbmUKWzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhl
bmREb21haW5JbmZvOjMwNzEpIFhlbmREb21haW5JbmZvLmRlc3Ryb3k6IGRv
bWlkPTMKWzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjI0MDEpIERlc3Ryb3lpbmcgZGV2aWNlIG1vZGVsClsyMDE0
LTAyLTA1IDA4OjIwOjIxIDE0MjVdIElORk8gKGltYWdlOjYxNSkgbWlncmF0
aW5nLXVidW50dTExIGRldmljZSBtb2RlbCB0ZXJtaW5hdGVkClsyMDE0LTAy
LTA1IDA4OjIwOjIxIDE0MjVdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDA4
KSBSZWxlYXNpbmcgZGV2aWNlcwpbMjAxNC0wMi0wNSAwODoyMDoyMSAxNDI1
XSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmlmLzAK
WzIwMTQtMDItMDUgMDg6MjA6MjEgMTQyNV0gREVCVUcgKFhlbmREb21haW5J
bmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6IGRldmlj
ZUNsYXNzID0gdmlmLCBkZXZpY2UgPSB2aWYvMApbMjAxNC0wMi0wNSAwODoy
MDoyMSAxNDI1XSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3Zp
bmcgY29uc29sZS8wClsyMDE0LTAyLTA1IDA4OjIwOjIxIDE0MjVdIERFQlVH
IChYZW5kRG9tYWluSW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95
RGV2aWNlOiBkZXZpY2VDbGFzcyA9IGNvbnNvbGUsIGRldmljZSA9IGNvbnNv
bGUvMApbMjAxNC0wMi0wNSAwODoyMDoyMSAxNDI1XSBERUJVRyAoWGVuZERv
bWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmJkLzc2OApbMjAxNC0wMi0wNSAw
ODoyMDoyMSAxNDI1XSBERUJVRyAoWGVuZERvbWFpbkluZm86MTI3NikgWGVu
ZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2aWNlQ2xhc3MgPSB2YmQs
IGRldmljZSA9IHZiZC83NjgKWzIwMTQtMDItMDUgMDg6MjA6MjIgMTQyNV0g
REVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZiZC81NjMy
ClsyMDE0LTAyLTA1IDA4OjIwOjIyIDE0MjVdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZp
Y2VDbGFzcyA9IHZiZCwgZGV2aWNlID0gdmJkLzU2MzIKWzIwMTQtMDItMDUg
MDg6MjA6MjIgMTQyNV0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJl
bW92aW5nIHZmYi8wClsyMDE0LTAyLTA1IDA4OjIwOjIyIDE0MjVdIERFQlVH
IChYZW5kRG9tYWluSW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95
RGV2aWNlOiBkZXZpY2VDbGFzcyA9IHZmYiwgZGV2aWNlID0gdmZiLzAK

---2096837515-1980809517-1391583040=:24823
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

---2096837515-1980809517-1391583040=:24823--
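
The logger setup Olaf points to in tools/xenpaging/xenpaging.c can be sketched
roughly as below. This is a minimal, untested example, assuming a Xen 4.1-era
libxenctrl with xentoollog support; it is not the exact xenpaging code.

```c
/* Minimal sketch: open a libxc handle with a logger so library messages
 * are printed, modelled on tools/xenpaging/xenpaging.c.
 * Assumes Xen's xentoollog/libxenctrl headers (Xen 4.1 or later). */
#include <stdio.h>
#include <xenctrl.h>

int main(void)
{
    /* Create a stdio-stream logger that prints XTL_DEBUG and above
     * to stderr (third argument: open flags, 0 = defaults). */
    xentoollog_logger_stdiostream *lg =
        xtl_createlogger_stdiostream(stderr, XTL_DEBUG, 0);
    if (!lg)
        return 1;

    /* Hand the logger to xc_interface_open(); errors from libxc calls
     * are then reported through it instead of being discarded. */
    xc_interface *xch =
        xc_interface_open((xentoollog_logger *)lg, NULL, 0);
    if (!xch) {
        xtl_logger_destroy((xentoollog_logger *)lg);
        return 1;
    }

    /* ... use xch here ... */

    xc_interface_close(xch);
    xtl_logger_destroy((xentoollog_logger *)lg);
    return 0;
}
```

With this in place, failures inside libxc (such as the "failed to get the
suspend evtchn port" message in the attached xend.log) show up on stderr
rather than vanishing.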


From xen-devel-bounces@lists.xen.org Wed Feb 05 08:07:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 08:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAxVf-00047p-B8; Wed, 05 Feb 2014 08:07:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WAxVd-00047h-2t
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 08:07:01 +0000
Received: from [85.158.137.68:19940] by server-3.bemta-3.messagelabs.com id
	6E/C6-14520-421F1F25; Wed, 05 Feb 2014 08:07:00 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391587619!13483664!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9890 invoked from network); 5 Feb 2014 08:06:59 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 5 Feb 2014 08:06:59 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:49199 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WAxVA-0005gk-3M; Wed, 05 Feb 2014 09:06:32 +0100
Date: Wed, 5 Feb 2014 09:06:47 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <15510565929.20140205090647@eikelenboom.it>
To: Eric Houby <ehouby@yahoo.com>
In-Reply-To: <1391572808.2441.37.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
MIME-Version: 1.0
Cc: xen@lists.fedoraproject.org, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, February 5, 2014, 5:00:08 AM, you wrote:

> On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
>> Il 04/02/2014 16:41, Eric Houby ha scritto:
>> > Xen list,
>> >
>> > I am trying to boot a F20 guest and connect using Spice but have run
>> > into an issue.
>> >
>> > My VM config file includes:
>> > spice = 1
>> > spicehost='0.0.0.0'
>> > spiceport=6001
>> > spicedisable_ticketing=1
>> >
>> >
>> > Is Spice supported with qemu-xen-traditional?
>> 
>> No, only with upstream qemu; and if you compile xen and qemu from source
>> you must also enable spice support in the qemu build. For example, in my
>> xen build tests I add:
>> 
>> tools/Makefile
>> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>>           --datadir=$(SHAREDIR)/qemu-xen \
>>           --localstatedir=/var \
>>           --disable-kvm \
>> +        --enable-spice \
>> +        --enable-usb-redir \
>>           --disable-docs \
>>           --disable-guest-agent \
>>           --python=$(PYTHON) \
>> 
>> If you use upstream qemu from a distribution package, it probably already
>> has spice built in; on debian, for example, I've tested this and it works.
>> 

> It is my understanding that the qemu package in F20 does not support xen
> so I compiled xen from source per the RC3 Test Day instructions and the
> instructions here:

> http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source

> After adding --enable-spice and --enable-usb-redir to tools/Makefile I
> see the following error when I make xen:

> ERROR: User requested feature spice
>        configure was not able to find it

Do you have the libspice-dev packages installed for your distro ?
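
For reference, checking and installing them could look like the fragment
below. The package names are my assumption: "libspice-dev" is the
Debian-style name, while Fedora splits it into spice-server-devel and
spice-protocol.

```shell
# Assumed package names: Fedora uses spice-server-devel / spice-protocol,
# Debian/Ubuntu use libspice-server-dev.
sudo yum install spice-server-devel spice-protocol    # Fedora 20
# sudo apt-get install libspice-server-dev            # Debian/Ubuntu

# qemu's configure probes spice via pkg-config, so verify it is visible:
pkg-config --exists spice-server && pkg-config --modversion spice-server
```

If pkg-config reports a version, re-running the xen build should get past the
"User requested feature spice" configure error.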

> The build finishes and xen works fine but spice obviously does not work.
> More complete log is below.

> Thanks,

> Eric


> if test -d git://xenbits.xen.org/qemu-upstream-unstable.git ; then \
>         mkdir -p qemu-xen-dir; \
> else \
>         export GIT=git; \
>         /root/src/xen/tools/../scripts/git-checkout.sh
> git://xenbits.xen.org/qemu-upstream-unstable.git q
> emu-xen-4.4.0-rc3 qemu-xen-dir ; \
> fi
> if test -d git://xenbits.xen.org/qemu-upstream-unstable.git ; then \
>         source=git://xenbits.xen.org/qemu-upstream-unstable.git; \
> else \
>         source=.; \
> fi; \
> cd qemu-xen-dir; \
> $source/configure --enable-xen --target-list=i386-softmmu \
>         --enable-debug --enable-trace-backend=stderr \
>         --prefix=/usr/local \
>         --source-path=$source \
>         --extra-cflags="-I/root/src/xen/tools/../tools/include \
>         -I/root/src/xen/tools/../tools/libxc \
>         -I/root/src/xen/tools/../tools/xenstore \
>         -I/root/src/xen/tools/../tools/xenstore/compat \
>         " \
>         --extra-ldflags="-L/root/src/xen/tools/../tools/libxc \
>         -L/root/src/xen/tools/../tools/xenstore" \
>         --bindir=/usr/local/lib/xen/bin \
>         --datadir=/usr/local/share/qemu-xen \
>         --localstatedir=/var \
>         --disable-kvm \
>         --enable-spice \
>         --enable-usb-redir \
>         --disable-docs \
>         --disable-guest-agent \
>         --python=python \
>         ; \
> make all

> ERROR: User requested feature spice
>        configure was not able to find it

> make[3]: Entering directory `/root/src/xen/tools/qemu-xen-dir-remote'
> make[3]: Leaving directory `/root/src/xen/tools/qemu-xen-dir-remote'


From xen-devel-bounces@lists.xen.org Wed Feb 05 08:07:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 08:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAxVf-00047p-B8; Wed, 05 Feb 2014 08:07:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WAxVd-00047h-2t
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 08:07:01 +0000
Received: from [85.158.137.68:19940] by server-3.bemta-3.messagelabs.com id
	6E/C6-14520-421F1F25; Wed, 05 Feb 2014 08:07:00 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391587619!13483664!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9890 invoked from network); 5 Feb 2014 08:06:59 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 5 Feb 2014 08:06:59 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:49199 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WAxVA-0005gk-3M; Wed, 05 Feb 2014 09:06:32 +0100
Date: Wed, 5 Feb 2014 09:06:47 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <15510565929.20140205090647@eikelenboom.it>
To: Eric Houby <ehouby@yahoo.com>
In-Reply-To: <1391572808.2441.37.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
MIME-Version: 1.0
Cc: xen@lists.fedoraproject.org, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, February 5, 2014, 5:00:08 AM, you wrote:

> On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
>> Il 04/02/2014 16:41, Eric Houby ha scritto:
>> > Xen list,
>> >
>> > I am trying to boot a F20 guest and connect using Spice but have run
>> > into an issue.
>> >
>> > My VM config file includes:
>> > spice = 1
>> > spicehost='0.0.0.0'
>> > spiceport=6001
>> > spicedisable_ticketing=1
>> >
>> >
>> > Is Spice supported with qemu-xen-traditional?
>> 
>> No, only with upstream qemu and if compile xen and qemu from source you 
>> also enable spice support on qemu build, for example on my xen build 
>> tests I add:
>> 
>> tools/Makefile
>> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>>           --datadir=$(SHAREDIR)/qemu-xen \
>>           --localstatedir=/var \
>>           --disable-kvm \
>> +        --enable-spice \
>> +        --enable-usb-redir \
>>           --disable-docs \
>>           --disable-guest-agent \
>>           --python=$(PYTHON) \
>> 
>> If you use upstream qemu from distribution package probably have already 
>> spice build-in, for example, on debian I've already tested and working.
>> 

> It is my understanding that the qemu package in F20 does not support xen
> so I compiled xen from source per the RC3 Test Day instructions and the
> instructions here:

> http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source

> After adding --enable-spice and --enable-usb-redir to tools/Makefile I
> see the following error when I make xen:

> ERROR: User requested feature spice
>        configure was not able to find it

Do you have the libspice-dev packages installed for your distro ?

> The build finishes and xen works fine but spice obviously does not work.
> More complete log is below.

> Thanks,

> Eric


> if test -d git://xenbits.xen.org/qemu-upstream-unstable.git ; then \
>         mkdir -p qemu-xen-dir; \
> else \
>         export GIT=git; \
>         /root/src/xen/tools/../scripts/git-checkout.sh
> git://xenbits.xen.org/qemu-upstream-unstable.git q
> emu-xen-4.4.0-rc3 qemu-xen-dir ; \
> fi
> if test -d git://xenbits.xen.org/qemu-upstream-unstable.git ; then \
>         source=git://xenbits.xen.org/qemu-upstream-unstable.git; \
> else \
>         source=.; \
> fi; \
> cd qemu-xen-dir; \
> $source/configure --enable-xen --target-list=i386-softmmu \
>         --enable-debug --enable-trace-backend=stderr \
>         --prefix=/usr/local \
>         --source-path=$source \
>         --extra-cflags="-I/root/src/xen/tools/../tools/include \
>         -I/root/src/xen/tools/../tools/libxc \
>         -I/root/src/xen/tools/../tools/xenstore \
>         -I/root/src/xen/tools/../tools/xenstore/compat \
>         " \
>         --extra-ldflags="-L/root/src/xen/tools/../tools/libxc \
>         -L/root/src/xen/tools/../tools/xenstore" \
>         --bindir=/usr/local/lib/xen/bin \
>         --datadir=/usr/local/share/qemu-xen \
>         --localstatedir=/var \
>         --disable-kvm \
>         --enable-spice \
>         --enable-usb-redir \
>         --disable-docs \
>         --disable-guest-agent \
>         --python=python \
>         ; \
> make all

> ERROR: User requested feature spice
>        configure was not able to find it

> make[3]: Entering directory `/root/src/xen/tools/qemu-xen-dir-remote'
> make[3]: Leaving directory `/root/src/xen/tools/qemu-xen-dir-remote'


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 08:47:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 08:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAy8V-00054l-Q5; Wed, 05 Feb 2014 08:47:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WAy8N-00054Z-0z
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 08:47:10 +0000
Received: from [193.109.254.147:23978] by server-1.bemta-14.messagelabs.com id
	B8/61-15438-68AF1F25; Wed, 05 Feb 2014 08:47:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391590020!2101411!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18158 invoked from network); 5 Feb 2014 08:47:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 08:47:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="100015879"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 08:47:00 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 03:46:59 -0500
Message-ID: <52F1FA82.6080101@citrix.com>
Date: Wed, 5 Feb 2014 09:46:58 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: xu cong <congxumail@gmail.com>, <xen-devel@lists.xensource.com>
References: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
In-Reply-To: <CA+hYhXvMiMn3i6oF=5dO_A2qHo=s=g+PDFeJ-9K=HmeNrkq8mw@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] Does blkback bypass Dom0's memory buffer?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 22:15, xu cong wrote:
> Does the blkback thread bypass Dom0's memory buffer? If I write in DomU
> without O_DIRECT, the data will be buffered in DomU's memory cache and
> then flushed to disk. Will it be buffered in Dom0's memory again? How
> about netback? Thanks.

Blkback does not do any buffering itself: the request is read from the
shared ring and passed to the underlying device using submit_bio. If you
want to make sure your data has hit the disk, you should issue a flush
operation (see "feature-flush-cache" in blkif.h).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 09:12:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 09:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAyX0-0005p7-L3; Wed, 05 Feb 2014 09:12:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAyWz-0005p2-2W
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 09:12:29 +0000
Received: from [85.158.137.68:17739] by server-12.bemta-3.messagelabs.com id
	37/E7-01674-C7002F25; Wed, 05 Feb 2014 09:12:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391591546!13471990!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17464 invoked from network); 5 Feb 2014 09:12:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 09:12:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="98125579"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 09:12:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 04:12:25 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WAyWv-00041E-4J;
	Wed, 05 Feb 2014 09:12:25 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 09:12:24 +0000
Message-ID: <1391591544-16072-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] xen: arm: remove innaccurate statement about
	multiboot module path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is the compatible string which matters, not the absolute path, and in any
case /chosen/module@N is more often used than /chosen/modules/module@N.

Reported-by: Fu Wei <fu.wei@linaro.org>
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 docs/misc/arm/device-tree/booting.txt |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 8da1e0b..07fde27 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -4,8 +4,7 @@ Dom0 kernel and ramdisk modules
 Xen is passed the dom0 kernel and initrd via a reference in the /chosen
 node of the device tree.
 
-Each node has the form /chosen/modules/module@<N> and contains the following
-properties:
+Each node contains the following properties:
 
 - compatible
 
-- 
1.7.10.4
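For reference, a sketch of what such a module node might look like in the device tree. This is an illustrative fragment only: the compatible strings shown ("xen,multiboot-module", "xen,linux-zimage") are my reading of the booting.txt bindings of this era, and the reg address/size and bootargs values are made-up placeholders.

```dts
/chosen {
        module@0 {
                /* The compatible string, not the node path, is what Xen
                 * keys on. */
                compatible = "xen,linux-zimage", "xen,multiboot-module";
                reg = <0x80000000 0x00800000>; /* placeholder address/size */
                bootargs = "console=hvc0 root=/dev/xvda1"; /* example only */
        };
};
```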


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 09:21:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 09:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAyfw-0006Ac-Ic; Wed, 05 Feb 2014 09:21:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAyfv-0006AQ-2R
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 09:21:43 +0000
Received: from [85.158.137.68:6956] by server-3.bemta-3.messagelabs.com id
	23/C6-14520-6A202F25; Wed, 05 Feb 2014 09:21:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391592100!13491942!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1201 invoked from network); 5 Feb 2014 09:21:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 09:21:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="98127378"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 09:21:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	04:21:39 -0500
Message-ID: <1391592098.6497.72.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 5 Feb 2014 09:21:38 +0000
In-Reply-To: <1391536870-22809-1-git-send-email-andrew.cooper3@citrix.com>
References: <1391535813.6497.61.camel@kazak.uk.xensource.com>
	<1391536870-22809-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 18:01 +0000, Andrew Cooper wrote:
> The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
> most part is left alone until success, at which point it is set to 0.
> 
> There is a separate 'frc' which for the most part is used to check function
> calls, keeping errors separate from 'rc'.
> 
> For a toolstack which sets callbacks->toolstack_restore(), and the function
> returns 0, any subsequent error will end up with code flow going to "out;",
> resulting in the migration being declared a success.
> 
> For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
> 'frc', even though their use of 'rc' is currently safe.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> 
> ---
> 
> Changes in v2:
>  * Don't drop rc = -1 from toolstack_restore().
> 
> Regarding 4.4: If the two "for consistency" changes to
> xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
> without affecting the bugfix nature of the patch, but I would argue that
> leaving some examples of "rc = function_call()" sets a bad precedent which
> is likely to lead to similar bugs in the future.
> ---
>  tools/libxc/xc_domain_restore.c |   18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/tools/libxc/xc_domain_restore.c b/tools/libxc/xc_domain_restore.c
> index 5ba47d7..1f6ce50 100644
> --- a/tools/libxc/xc_domain_restore.c
> +++ b/tools/libxc/xc_domain_restore.c
> @@ -2240,9 +2240,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          memcpy(ctx->live_p2m, ctx->p2m, dinfo->p2m_size * sizeof(xen_pfn_t));
>      munmap(ctx->live_p2m, P2M_FL_ENTRIES * PAGE_SIZE);
>  
> -    rc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
> -                            console_domid, store_domid);
> -    if (rc != 0)
> +    frc = xc_dom_gnttab_seed(xch, dom, *console_mfn, *store_mfn,
> +                             console_domid, store_domid);
> +    if (frc != 0)
>      {
>          ERROR("error seeding grant table");
>          goto out;
> @@ -2257,10 +2257,10 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>      {
>          if ( callbacks != NULL && callbacks->toolstack_restore != NULL )
>          {
> -            rc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
> -                        callbacks->data);
> +            frc = callbacks->toolstack_restore(dom, tdata.data, tdata.len,
> +                                               callbacks->data);
>              free(tdata.data);
> -            if ( rc < 0 )
> +            if ( frc < 0 )
>              {
>                  PERROR("error calling toolstack_restore");
>                  goto out;
> @@ -2326,9 +2326,9 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
>          goto out;
>      }
>  
> -    rc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
> -                                console_domid, store_domid);
> -    if (rc != 0)
> +    frc = xc_dom_gnttab_hvm_seed(xch, dom, *console_mfn, *store_mfn,
> +                                 console_domid, store_domid);
> +    if (frc != 0)
>      {
>          ERROR("error seeding grant table");
>          goto out;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 09:23:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 09:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAyhH-0006He-37; Wed, 05 Feb 2014 09:23:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAyhE-0006HP-NF
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 09:23:05 +0000
Received: from [85.158.137.68:28148] by server-13.bemta-3.messagelabs.com id
	DE/7C-26923-7F202F25; Wed, 05 Feb 2014 09:23:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391592181!13387451!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28042 invoked from network); 5 Feb 2014 09:23:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 09:23:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="100023121"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 09:23:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	04:23:00 -0500
Message-ID: <1391592179.6497.73.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Date: Wed, 5 Feb 2014 09:22:59 +0000
In-Reply-To: <20140204181023.GA5293@citrix.com>
References: <20140204181023.GA5293@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
> menuentry in grub2/grub.cfg uses the linux16 and initrd16 commands
> instead of linux and initrd. Because of this, a RHEL 7 (beta) guest
> failed to boot after installation.
> 
> In addition to this, RHEL 7 menu entries have two different single-quote
> delimited strings on the same line, and the greedy grouping in the
> menuentry parsing captures both strings plus the options in between.
> 
> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: george.dunlap@citrix.com

Acked-by: Ian Campbell <ian.campbell@citrix.com>

IMHO this can go into 4.4; unless George objects today, I shall commit it.

Ian.

> ---
> v2: Added RHEL 7 grub.cfg in pygrub/examples
> v3 & v4: Tidied the commit message based on Andrew Cooper's feedback
> 
> Kindly consider this patch for xen-4.4 as RHEL 7 (beta) fails to boot
> on Xen.
> 
>  tools/pygrub/examples/rhel-7-beta.grub2 |  118 +++++++++++++++++++++++++++++++
>  tools/pygrub/src/GrubConf.py            |    4 +-
>  2 files changed, 121 insertions(+), 1 deletion(-)
>  create mode 100644 tools/pygrub/examples/rhel-7-beta.grub2
> 
> diff --git a/tools/pygrub/examples/rhel-7-beta.grub2 b/tools/pygrub/examples/rhel-7-beta.grub2
> new file mode 100644
> index 0000000..88f0f99
> --- /dev/null
> +++ b/tools/pygrub/examples/rhel-7-beta.grub2
> @@ -0,0 +1,118 @@
> +#
> +# DO NOT EDIT THIS FILE
> +#
> +# It is automatically generated by grub2-mkconfig using templates
> +# from /etc/grub.d and settings from /etc/default/grub
> +#
> +
> +### BEGIN /etc/grub.d/00_header ###
> +set pager=1
> +
> +if [ -s $prefix/grubenv ]; then
> +  load_env
> +fi
> +if [ "${next_entry}" ] ; then
> +   set default="${next_entry}"
> +   set next_entry=
> +   save_env next_entry
> +   set boot_once=true
> +else
> +   set default="${saved_entry}"
> +fi
> +
> +if [ x"${feature_menuentry_id}" = xy ]; then
> +  menuentry_id_option="--id"
> +else
> +  menuentry_id_option=""
> +fi
> +
> +export menuentry_id_option
> +
> +if [ "${prev_saved_entry}" ]; then
> +  set saved_entry="${prev_saved_entry}"
> +  save_env saved_entry
> +  set prev_saved_entry=
> +  save_env prev_saved_entry
> +  set boot_once=true
> +fi
> +
> +function savedefault {
> +  if [ -z "${boot_once}" ]; then
> +    saved_entry="${chosen}"
> +    save_env saved_entry
> +  fi
> +}
> +
> +function load_video {
> +  if [ x$feature_all_video_module = xy ]; then
> +    insmod all_video
> +  else
> +    insmod efi_gop
> +    insmod efi_uga
> +    insmod ieee1275_fb
> +    insmod vbe
> +    insmod vga
> +    insmod video_bochs
> +    insmod video_cirrus
> +  fi
> +}
> +
> +terminal_output console
> +set timeout=5
> +### END /etc/grub.d/00_header ###
> +
> +### BEGIN /etc/grub.d/10_linux ###
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +	load_video
> +	set gfxpayload=keep
> +	insmod gzio
> +	insmod part_msdos
> +	insmod xfs
> +	set root='hd0,msdos1'
> +	if [ x$feature_platform_search_hint = xy ]; then
> +	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +	else
> +	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +	fi
> +	linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
> +	initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
> +}
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +	load_video
> +	insmod gzio
> +	insmod part_msdos
> +	insmod xfs
> +	set root='hd0,msdos1'
> +	if [ x$feature_platform_search_hint = xy ]; then
> +	  search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +	else
> +	  search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +	fi
> +	linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
> +	initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
> +}
> +
> +### END /etc/grub.d/10_linux ###
> +
> +### BEGIN /etc/grub.d/20_linux_xen ###
> +### END /etc/grub.d/20_linux_xen ###
> +
> +### BEGIN /etc/grub.d/20_ppc_terminfo ###
> +### END /etc/grub.d/20_ppc_terminfo ###
> +
> +### BEGIN /etc/grub.d/30_os-prober ###
> +### END /etc/grub.d/30_os-prober ###
> +
> +### BEGIN /etc/grub.d/40_custom ###
> +# This file provides an easy way to add custom menu entries.  Simply type the
> +# menu entries you want to add after this comment.  Be careful not to change
> +# the 'exec tail' line above.
> +### END /etc/grub.d/40_custom ###
> +
> +### BEGIN /etc/grub.d/41_custom ###
> +if [ -f  ${config_directory}/custom.cfg ]; then
> +  source ${config_directory}/custom.cfg
> +elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
> +  source $prefix/custom.cfg;
> +fi
> +### END /etc/grub.d/41_custom ###
> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> index cb853c9..974cded 100644
> --- a/tools/pygrub/src/GrubConf.py
> +++ b/tools/pygrub/src/GrubConf.py
> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>                  
>      commands = {'set:root': 'root',
>                  'linux': 'kernel',
> +                'linux16': 'kernel',
>                  'initrd': 'initrd',
> +                'initrd16': 'initrd',
>                  'echo': None,
>                  'insmod': None,
>                  'search': None}
> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>                  continue
>  
>              # new image
> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>              if title_match:
>                  if img is not None:
>                      raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
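To illustrate the one-character fix in the hunk above: with the greedy `(.*)`, a RHEL 7 style line containing two single-quoted strings has its first capture group swallow everything up to the *last* quote, while the non-greedy `(.*?)` stops at the first closing quote. A minimal standalone sketch (sample line abridged from the example config):

```python
import re

# Abridged RHEL 7 menuentry line: two single-quoted strings on one line.
line = ("menuentry 'Red Hat Enterprise Linux' --class red "
        "$menuentry_id_option 'gnulinux-advanced-d23b8b49' {")

# Old pattern: greedy first group.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
# New pattern: non-greedy first group.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

# Greedy grouping captures both quoted strings and the options between them.
print(greedy.group(1))
# Non-greedy grouping stops at the first closing quote: just the title.
print(lazy.group(1))
```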



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 09:28:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 09:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAymh-0006lU-0Y; Wed, 05 Feb 2014 09:28:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAymf-0006lL-UH
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 09:28:42 +0000
Received: from [85.158.143.35:32822] by server-2.bemta-4.messagelabs.com id
	34/98-10891-94402F25; Wed, 05 Feb 2014 09:28:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391592519!3238433!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12455 invoked from network); 5 Feb 2014 09:28:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 09:28:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="98128998"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 09:28:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	04:28:38 -0500
Message-ID: <1391592517.6497.76.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Wed, 5 Feb 2014 09:28:37 +0000
In-Reply-To: <alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
References: <1391527619.2441.16.camel@astar.houby.net>
	<1391530374.6497.55.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: ehouby@yahoo.com, xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 18:41 +0000, M A Young wrote:
> > We should probably consider taking some unit files into the xen tree, if
> > someone wants to submit a set?
> 
> I can submit a set, which starts services individually rather than 
> using a unified xencommons-style start file. I didn't find a good way 
> to reproduce the xendomains script, so I ended up running an edited 
> version of the sysvinit script with a systemd wrapper file.

I don't know what is conventional in systemd land but I have no problem
with that approach.

> Would it make sense to have some sort of configure option in tools to 
> choose between sysvinit and systemd?

AIUI systemd will DTWT and ignore the initscript if there is a systemd
unit file, so installing both seems to be fine to me, and is certain to
be less error prone.

Although perhaps the difference between a single xencommons unit and the
split units breaks that suppression logic?

What depends on the xenstored & xenconsole modules? Is it possible to
have a "metaunit" xencommons which a) causes everything to be started
and b) suppresses the initscript by having the same name?

Or maybe there is a systemd syntax you can use to suppress
xencommons.init?
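The "metaunit" idea could be sketched roughly as follows; this is a hypothetical unit file, and the `xenstored.service` / `xenconsoled.service` names are assumptions, not what the tree currently ships:

```ini
# /usr/lib/systemd/system/xencommons.service (hypothetical sketch)
# Sharing the initscript's name should make systemd prefer this unit
# over /etc/init.d/xencommons; it does nothing itself and only pulls
# in the split services.
[Unit]
Description=Xen common services (metaunit)
Requires=xenstored.service xenconsoled.service
After=xenstored.service xenconsoled.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true

[Install]
WantedBy=multi-user.target
```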

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 09:39:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 09:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAywi-0007Mc-5u; Wed, 05 Feb 2014 09:39:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAywg-0007MQ-W7
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 09:39:03 +0000
Received: from [193.109.254.147:14655] by server-16.bemta-14.messagelabs.com
	id A9/A2-21945-6B602F25; Wed, 05 Feb 2014 09:39:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391593140!2126606!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18622 invoked from network); 5 Feb 2014 09:39:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 09:39:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="98131471"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 09:39:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	04:38:59 -0500
Message-ID: <1391593138.6497.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Wed, 5 Feb 2014 09:38:58 +0000
In-Reply-To: <52F15B93.4000000@citrix.com>
References: <52F15B93.4000000@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen on ARM: upstream kernel compile fails with
 defconfig
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 21:28 +0000, Zoltan Kiss wrote:
> Hi,
> 
> I'm trying to do a default ARM build to verify that my grant mapping 
> patches don't break the build. I'm using the latest net-next tree as a 
> basis, and I'm trying to build it based on this howto:
> 
> http://wiki.xenproject.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/Allwinner
> 
> (note, the actual CPU doesn't matter to me, I just want a default build 
> to succeed. Also, net-next is only used because that was what I had 
> checked out, and I assume it should work as well)
> 
> I'm doing cross-compilation, so my make looks like this:
> 
> PATH="/local/repo/arm/gcc-linaro-arm-linux-gnueabihf-4.8-2013.10_linux/bin/:$PATH" 
> make -j8 O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage
> 
> For config I've tried sunxi_defconfig (which actually doesn't have Xen 
> enabled) and arndale_ubuntu_defconfig from Julien's repo:

I recommend starting from the upstream "multi_v7_defconfig" and then
enabling CONFIG_XEN.
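As a recipe sketch, the suggested flow might look like this (toolchain prefix and output directory taken from Zoltan's command line; the `scripts/config` step is one possible way to flip the option and has not been verified against this particular tree):

```shell
# Start from the upstream multi-platform ARMv7 defconfig...
make O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- multi_v7_defconfig
# ...enable Xen support on top of it...
./scripts/config --file ../o-arm/.config --enable XEN
# ...resolve any newly exposed dependencies, then build.
make O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- olddefconfig
make -j8 O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage
```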

> So far I have figured out that autoconf.h, where CONFIG_ is defined, is 
> not included here. But I'm not that familiar with the kernel build 
> system, so I'm a bit stuck. Probably I'm doing something trivially 
> wrong; can someone help me?

I can't see what -- autoconf.h is included via a direct "-include ...."
passed to the compiler so there aren't many ways for it to go wrong.

Perhaps try the oldconfig target before building?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 09:39:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 09:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAywi-0007Mc-5u; Wed, 05 Feb 2014 09:39:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAywg-0007MQ-W7
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 09:39:03 +0000
Received: from [193.109.254.147:14655] by server-16.bemta-14.messagelabs.com
	id A9/A2-21945-6B602F25; Wed, 05 Feb 2014 09:39:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391593140!2126606!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18622 invoked from network); 5 Feb 2014 09:39:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 09:39:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,785,1384300800"; d="scan'208";a="98131471"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 09:39:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	04:38:59 -0500
Message-ID: <1391593138.6497.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Wed, 5 Feb 2014 09:38:58 +0000
In-Reply-To: <52F15B93.4000000@citrix.com>
References: <52F15B93.4000000@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen on ARM: upstream kernel compile fails with
 defconfig
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-04 at 21:28 +0000, Zoltan Kiss wrote:
> Hi,
> 
> I'm trying to do a default ARM build to verify that my grant mapping 
> patches don't break the build. I'm using the latest net-next tree as a 
> basis, and I'm trying to build it based on this howto:
> 
> http://wiki.xenproject.org/wiki/Xen_ARMv7_with_Virtualization_Extensions/Allwinner
> 
> (note, the actual CPU doesn't matter to me, I just want a default build 
> to succeed. Also, net-next is only used because that was what was 
> checked out, and I assume it should work as well)
> 
> I'm doing cross-compilation, so my make looks like this:
> 
> PATH="/local/repo/arm/gcc-linaro-arm-linux-gnueabihf-4.8-2013.10_linux/bin/:$PATH" 
> make -j8 O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage
> 
> For config I've tried sunxi_defconfig (which actually doesn't have Xen 
> enabled) and arndale_ubuntu_defconfig from Julien's repo:

I recommend starting from the upstream "multi_v7_defconfig" and then
enabling CONFIG_XEN.
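Something along these lines (an untested sketch; the toolchain prefix and
O= directory are the ones from your quoted invocation, and the
scripts/config step is just one way of flipping the option):

```shell
# Untested sketch: start from multi_v7_defconfig, then switch on CONFIG_XEN.
export ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-

make O=../o-arm multi_v7_defconfig
# scripts/config lives in the kernel tree; -e enables a symbol by name.
./scripts/config --file ../o-arm/.config -e XEN
# Re-run the config machinery so dependent options get picked up too.
make O=../o-arm olddefconfig
make O=../o-arm -j8 zImage
```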

> So far I've figured out that autoconf.h, where CONFIG_ is defined, is 
> not included here. But I'm not that familiar with the kernel build 
> system, so I'm a bit stuck here. I'm probably doing something 
> trivially wrong; can someone help me?

I can't see what -- autoconf.h is included via a direct "-include ...."
passed to the compiler, so there aren't many ways for it to go wrong.

Perhaps try the oldconfig target before building?
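i.e. something like this (same tree, O= directory and toolchain as in the
quoted make command; oldconfig re-syncs the existing .config and
regenerates include/generated/autoconf.h):

```shell
# Untested: re-sync the .config and regenerate autoconf.h before building.
make O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- oldconfig
make O=../o-arm ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- -j8 zImage
```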

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 10:10:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 10:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAzQW-0008LD-KY; Wed, 05 Feb 2014 10:09:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAzQV-0008L8-Ew
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 10:09:51 +0000
Received: from [85.158.143.35:44661] by server-1.bemta-4.messagelabs.com id
	1F/87-31661-EED02F25; Wed, 05 Feb 2014 10:09:50 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391594988!3258014!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10143 invoked from network); 5 Feb 2014 10:09:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 10:09:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100033571"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 10:09:20 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	05:09:19 -0500
Message-ID: <52F20DCE.3080501@citrix.com>
Date: Wed, 5 Feb 2014 10:09:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <52F1753C.3010508@linaro.org>
In-Reply-To: <52F1753C.3010508@linaro.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 23:18, Julien Grall wrote:
> 
> Now, if I'm using Linux 3.14-rc1 as a guest and trying to destroy the domain,
> I get the following Xen trace:
> 
> (XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
[...]
> (XEN) Xen call trace:
> (XEN)    [<0023f7d0>] __bug+0x28/0x44 (PC)
> (XEN)    [<0023f7d0>] __bug+0x28/0x44 (LR)
> (XEN)    [<00247d6c>] domain_page_map_to_mfn+0x50/0xb4
> (XEN)    [<0020b17c>] unmap_guest_page+0x20/0x54
> (XEN)    [<0020b1d0>] cleanup_control_block+0x20/0x34
> (XEN)    [<0020bd3c>] evtchn_fifo_destroy+0x2c/0x6c
> (XEN)    [<0020b024>] evtchn_destroy+0x1a8/0x1b0
> (XEN)    [<00207f80>] domain_kill+0x60/0x128
> (XEN)    [<00206050>] do_domctl+0xa7c/0x1104
> (XEN)    [<0024cee0>] do_trap_hypervisor+0xad8/0xd78
> (XEN)    [<0024f6d0>] return_from_trap+0/0x4

This is because ARM's domain_page_map_to_mfn() doesn't work with pages
mapped with map_domain_page_global(), which uses vmap().

x86's implementation has

    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
    {
        pl1e = virt_to_xen_l1e(va);
        BUG_ON(!pl1e);
    }
    // ...
    return l1e_get_pfn(*pl1e);

So I think ARM's implementation needs something similar.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 10:33:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 10:33:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAzmg-0000SQ-N6; Wed, 05 Feb 2014 10:32:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAzme-0000SL-RT
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 10:32:45 +0000
Received: from [85.158.143.35:38657] by server-3.bemta-4.messagelabs.com id
	29/44-11539-C4312F25; Wed, 05 Feb 2014 10:32:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391596363!3268641!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11132 invoked from network); 5 Feb 2014 10:32:43 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 10:32:43 -0000
Received: by mail-wi0-f182.google.com with SMTP id f8so383515wiw.15
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 02:32:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=MA2qw4n5GzzGPod9d/VIYIH9VRY31vzK6XfGCWLsiBs=;
	b=mgPJS4CWLtWoCtPIi2b2wp8wZhVaw/wXoil0OOmaijvqLBfpEFJfAHq1oAXyVoH9KV
	glxju2ik7TdughQ8zEQZ21Kwt3S/WgVrRB2jkF18ki0om9J0KIkyR6dSTEBB1opFWFHR
	BowyziImGX4F9GgvZePHYL94nDF5J4ZI8wF3wehOMTlKHFlUJzaJHlD/czRH5Hq0c2mq
	js867AoMZN5LVwhw9xAp6Ke9hI9TQ6szu1BBcjX9f9juilxNDa2+THlQLoMLiDp3kf79
	0qVyUFZXrI1WcO0nit25pdjNh5BV/5zEkO8RAnuABvZLdU/sqEicv0EtNnLH72wDOQri
	M2oQ==
X-Gm-Message-State: ALoCoQlqERMM8PQeAvGZ4WwNPgsz/5MELkIa//QIwwg8OLaB6SQL4guUXwheybyK3BidJIEcfL/9
X-Received: by 10.181.8.66 with SMTP id di2mr1871847wid.43.1391596363455;
	Wed, 05 Feb 2014 02:32:43 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	ff9sm44409235wib.11.2014.02.05.02.32.42 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 02:32:42 -0800 (PST)
Message-ID: <52F21348.1070309@linaro.org>
Date: Wed, 05 Feb 2014 10:32:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391591544-16072-1-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391591544-16072-1-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: remove innaccurate statement
 about multiboot module path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 05/02/14 09:12, Ian Campbell wrote:
> It is the compatible string which matters, not the absolute path, and in any
> case /chosen/module@N is more often used than /chosen/modules/module@N.

I'm using /chosen/modules/module@N in my script :)

Is it for Xen 4.4?

>
> Reported-by: Fu Wei <fu.wei@linaro.org>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 10:34:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 10:34:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAznz-0000W4-Mq; Wed, 05 Feb 2014 10:34:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WAzns-0000Vp-V8
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 10:34:01 +0000
Received: from [193.109.254.147:18077] by server-3.bemta-14.messagelabs.com id
	B4/FD-00432-89312F25; Wed, 05 Feb 2014 10:34:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391596439!2150282!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6876 invoked from network); 5 Feb 2014 10:33:59 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 10:33:59 -0000
Received: by mail-wg0-f47.google.com with SMTP id m15so143350wgh.2
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 02:33:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=5anK/Tbyq4zqkmx7rvmgalMVk+tlLibNxhGZ/7kCd7Y=;
	b=ISj8L2EiFMslkUpOyJ/VCn8B/Jh1MvoyFbhtScmE6YgpnLixatqt2nMWAuuixMqi3Q
	JEf3ymu71YEbpejClaNNiBHhwBiMPYC20YoRFyGL2Kqis5Je/0aF8klP+sUzKb6CclPT
	zcIGRxX2KOXZpKfpN1uIQJhfRAcDC+QA6dYyQe+djUax9pMXGuef15FKh4miBMWF2DMw
	/Xhn0LIFdIrGYlG4ZPm7jBQDuAdJtJzsCz7/i2KzGZT+BQpiap0jqWwkKWMEczkUQVCh
	aaMI/04a1beAniHLaoYI+LD7Ju17nQ1EC/A2fc13SFlrNZO6Srg+o4hf9LDgUztaXKBb
	Htbw==
X-Gm-Message-State: ALoCoQlfxP2g8gOH5d6V1wyQIUr4vv0xjyIWLMzkE6pQ21QgGi/e2pTOtjdm0wNyr2NiXmra8v44
X-Received: by 10.194.60.37 with SMTP id e5mr717268wjr.32.1391596439451;
	Wed, 05 Feb 2014 02:33:59 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ux5sm60257449wjc.6.2014.02.05.02.33.56
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 02:33:58 -0800 (PST)
Message-ID: <52F21393.4050900@linaro.org>
Date: Wed, 05 Feb 2014 10:33:55 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <52F1753C.3010508@linaro.org> <52F20DCE.3080501@citrix.com>
In-Reply-To: <52F20DCE.3080501@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 05/02/14 10:09, David Vrabel wrote:
> On 04/02/14 23:18, Julien Grall wrote:
>>
>> Now, if I'm using Linux 3.14-rc1 as a guest and trying to destroy the domain,
>> I get the following Xen trace:
>>
>> (XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
> [...]
>> (XEN) Xen call trace:
>> (XEN)    [<0023f7d0>] __bug+0x28/0x44 (PC)
>> (XEN)    [<0023f7d0>] __bug+0x28/0x44 (LR)
>> (XEN)    [<00247d6c>] domain_page_map_to_mfn+0x50/0xb4
>> (XEN)    [<0020b17c>] unmap_guest_page+0x20/0x54
>> (XEN)    [<0020b1d0>] cleanup_control_block+0x20/0x34
>> (XEN)    [<0020bd3c>] evtchn_fifo_destroy+0x2c/0x6c
>> (XEN)    [<0020b024>] evtchn_destroy+0x1a8/0x1b0
>> (XEN)    [<00207f80>] domain_kill+0x60/0x128
>> (XEN)    [<00206050>] do_domctl+0xa7c/0x1104
>> (XEN)    [<0024cee0>] do_trap_hypervisor+0xad8/0xd78
>> (XEN)    [<0024f6d0>] return_from_trap+0/0x4
>
> This is because ARM's domain_page_map_to_mfn() doesn't work with pages
> mapped with map_domain_page_global() which uses vmap().
>
> x86's implementation has
>
>      if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
>      {
>          pl1e = virt_to_xen_l1e(va);
>          BUG_ON(!pl1e);
>      }
>      // ...
>      return l1e_get_pfn(*pl1e);
>
> So I think ARM's needs something similar.

Thanks David, I will take a look at it. For the first bug (in Linux), do 
you have any input?

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 10:36:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 10:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAzq0-0000ev-LJ; Wed, 05 Feb 2014 10:36:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WAzpx-0000ea-K2
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 10:36:11 +0000
Received: from [193.109.254.147:13243] by server-5.bemta-14.messagelabs.com id
	57/A5-16688-81412F25; Wed, 05 Feb 2014 10:36:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391596567!2141864!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3421 invoked from network); 5 Feb 2014 10:36:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 10:36:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98145007"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 10:36:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	05:36:05 -0500
Message-ID: <1391596564.6497.88.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 5 Feb 2014 10:36:04 +0000
In-Reply-To: <52F21348.1070309@linaro.org>
References: <1391591544-16072-1-git-send-email-ian.campbell@citrix.com>
	<52F21348.1070309@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: remove innaccurate statement
 about multiboot module path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 10:32 +0000, Julien Grall wrote:
> 
> On 05/02/14 09:12, Ian Campbell wrote:
> > It is the compatible string which matters, not the absolute path, and in any
> > case /chosen/module@N is more often used than /chosen/modules/module@N.
> 
> I'm using /chosen/modules/module@N in my script :)

I knew I'd regret making that statement ;-)

> 
> Is it for Xen 4.4?

I don't see any reason to hold off on docs improvements.

> 
> >
> > Reported-by: Fu Wei <fu.wei@linaro.org>
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 10:45:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 10:45:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WAzyr-00013q-6B; Wed, 05 Feb 2014 10:45:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WAzyq-00013l-HQ
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 10:45:20 +0000
Received: from [85.158.143.35:12196] by server-2.bemta-4.messagelabs.com id
	4E/9F-10891-F3612F25; Wed, 05 Feb 2014 10:45:19 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391597118!3282649!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11851 invoked from network); 5 Feb 2014 10:45:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 10:45:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98147293"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 10:45:17 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	05:45:17 -0500
Message-ID: <52F2163C.1000202@citrix.com>
Date: Wed, 5 Feb 2014 10:45:16 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <52F1753C.3010508@linaro.org>
In-Reply-To: <52F1753C.3010508@linaro.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	david.vrabel@citrix.com, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 23:18, Julien Grall wrote:
> Hello David,
> 
> I'm currently trying to use Linux 3.14-rc1 as a Linux guest on Xen on ARM (Xen 4.4-rc3).
> 
> I have multiple issues with your event channel patch series on both the Linux and Xen sides.
> I tried to use Linux 3.14-rc1 as dom0, but it was worse (unable to create guests).

I think there must be two issues here as both 2-level and FIFO events
are broken.

> I'm using a simple guest config:
> kernel="/root/zImage"
> memory=32
> name="test"
> vcpus=1
> autoballon="off"
> extra="console=hvc0"
> 
> If everything is OK, I should see that Linux is unable to find the root filesystem.
> But here, Linux is stuck.
> 
> From the Linux side, after bisecting, I found that the offending commit is:
>     xen/events: remove unnecessary init_evtchn_cpu_bindings()
>     
>     Because the guest-side binding of an event to a VCPU (i.e., setting
>     the local per-cpu masks) is always explicitly done after an event
>     channel is bound to a port, there is no need to initialize all
>     possible events as bound to VCPU 0 at start of day or after a resume.
>     
>     Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>     Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>     Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> 
> With this patch, the function __xen_evtchn_do_upcall won't be able
> to find any events (the pending bits are 0 every time).
> It seems the second part of init_evtchn_cpu_bindings() is necessary on ARM.

I think this is because binding an interdomain or allocating an unbound
event channel doesn't call bind_evtchn_to_cpu(evtchn, 0), which is required
to set the local VCPU masks.

I think this happened to work on x86 because during the generic irq
setup, the irq affinity is always set, which then binds the event channel
to the right VCPU.  I guess ARM's irq setup misses this step.

This shouldn't affect the FIFO-based events though since
evtchn_fifo_bind_to_cpu() is a no-op.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 10:52:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 10:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB05Y-0001ND-7k; Wed, 05 Feb 2014 10:52:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB05X-0001N8-5A
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 10:52:15 +0000
Received: from [85.158.143.35:23260] by server-2.bemta-4.messagelabs.com id
	90/9D-10891-ED712F25; Wed, 05 Feb 2014 10:52:14 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391597532!3275743!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7645 invoked from network); 5 Feb 2014 10:52:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 10:52:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100042469"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 10:52:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 05:52:11 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WAzxa-0002vM-SZ;
	Wed, 05 Feb 2014 10:44:02 +0000
Message-ID: <52F215E1.1040401@eu.citrix.com>
Date: Wed, 5 Feb 2014 10:43:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Anthony PERARD
	<anthony.perard@citrix.com>
References: <1391186147-15191-1-git-send-email-anthony.perard@citrix.com>
	<1391525210.6497.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1391525210.6497.9.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: Yun Wang <bimingery@gmail.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: Fix vcpu-set for PV guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 02:46 PM, Ian Campbell wrote:
> On Fri, 2014-01-31 at 16:35 +0000, Anthony PERARD wrote:
>> vcpu-set will try to use the HVM path (through QEMU) instead of the PV
>> path (through xenstore) for a PV guest, if there is a QEMU running for
>> this domain. This patch checks which kind of guest is running before
>> doing any call.
>>
>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
>> ---
>>
>> Yun, does this patch fix the issue with your PV guest?
> Yun, any feedback on this patch?
>
> George -- I think vcpu-set not working for PV guests is a bug worth
> fixing in 4.4 so I intend to apply.

Yes, please do.

  -George

>
>>
>>   tools/libxl/libxl.c | 19 ++++++++++++++-----
>>   1 file changed, 14 insertions(+), 5 deletions(-)
>>
>> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
>> index 2845ca4..c4fe6af 100644
>> --- a/tools/libxl/libxl.c
>> +++ b/tools/libxl/libxl.c
>> @@ -4692,12 +4692,21 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid, libxl_bitmap *cpumap)
>>   {
>>       GC_INIT(ctx);
>>       int rc;
>> -    switch (libxl__device_model_version_running(gc, domid)) {
>> -    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>> -        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
>> +    switch (libxl__domain_type(gc, domid)) {
>> +    case LIBXL_DOMAIN_TYPE_HVM:
>> +        switch (libxl__device_model_version_running(gc, domid)) {
>> +        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>> +            rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
>> +            break;
>> +        case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
>> +            rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap);
>> +            break;
>> +        default:
>> +            rc = ERROR_INVAL;
>> +        }
>>           break;
>> -    case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
>> -        rc = libxl__set_vcpuonline_qmp(gc, domid, cpumap);
>> +    case LIBXL_DOMAIN_TYPE_PV:
>> +        rc = libxl__set_vcpuonline_xenstore(gc, domid, cpumap);
>>           break;
>>       default:
>>           rc = ERROR_INVAL;
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 11:11:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 11:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB0Nc-0001y1-Ji; Wed, 05 Feb 2014 11:10:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB0Nb-0001xw-D5
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 11:10:55 +0000
Received: from [85.158.137.68:20924] by server-17.bemta-3.messagelabs.com id
	31/EF-22569-E3C12F25; Wed, 05 Feb 2014 11:10:54 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391598652!13535601!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5103 invoked from network); 5 Feb 2014 11:10:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 11:10:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98152625"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 11:10:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 06:10:51 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB0NX-0003YF-7x;
	Wed, 05 Feb 2014 11:10:51 +0000
Message-ID: <52F21C29.8090607@eu.citrix.com>
Date: Wed, 5 Feb 2014 11:10:33 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Olaf Hering <olaf@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>	
	<20140130162558.GA9033@aepfle.de>	
	<1391099505.9495.23.camel@kazak.uk.xensource.com>	
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
In-Reply-To: <1391530957.6497.56.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: anthony.perard@citrix.com, xen-devel@lists.xen.org,
	Ian.Jackson@eu.citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 04:22 PM, Ian Campbell wrote:
> On Thu, 2014-01-30 at 18:30 +0100, Olaf Hering wrote:
>
> George, any thoughts on:
>
>>> TBH -- if you (==suse I guess?) are contemplating carrying this as a
>>> backport even before 4.4 is out the door we should probably be at least
>>> considering a freeze exception for 4.4. George CCd for input. (I
>>> appreciate that "backport=>freeze exception" is a potentially slippery
>>> slope/ripe for abuse...)
>> It would make less work for SUSE if this change were incorporated
>> into 4.4, and later replaced with the "final" version I sent out today.
>> However, it's small and will be easy to port forward to 4.4.X.
>>
>> The risk of including such a change is small, as it requires a patched
>> qemu which actually does discard (1.7?) and a patched frontend driver
>> (pvops 3.15?) before the code paths it enables are actually executed.

Well it looks like in order to keep ABI compatibility (which I don't 
think we ever promised), you're introducing this weird hack with 
overloading a putative boolean value with a magic number?

I think the patch is really ugly.  I assume the reason you're attempting
to avoid breaking ABI compatibility is that we're so close to the
release? But if so, adding an ugly hack like this is worse, IMHO.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 11:21:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 11:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB0Y1-0002Mv-Ve; Wed, 05 Feb 2014 11:21:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB0Y0-0002Mq-PY
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 11:21:40 +0000
Received: from [85.158.143.35:34473] by server-2.bemta-4.messagelabs.com id
	DC/EA-10891-4CE12F25; Wed, 05 Feb 2014 11:21:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391599298!3295644!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15808 invoked from network); 5 Feb 2014 11:21:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 11:21:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98154841"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 11:21:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 06:21:37 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB0Xx-0004gN-Aw;
	Wed, 05 Feb 2014 11:21:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB0Xx-0007PV-1Z;
	Wed, 05 Feb 2014 11:21:37 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21234.7872.627751.632254@mariner.uk.xensource.com>
Date: Wed, 5 Feb 2014 11:21:36 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <52F1D050.9040405@suse.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<21231.49368.953196.741218@mariner.uk.xensource.com>
	<52F1D050.9040405@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
> I haven't reviewed this version of the series, but have been running it
> in my test setup for several hours now without any problems.  The test
> setup includes some fixes on the libvirt side, which I will post
> tomorrow after fixing up the commit messages.  I'll cc xen-devel in case
> you have some comments on those fixes.

Thanks!

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 11:36:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 11:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB0mI-0002ik-RI; Wed, 05 Feb 2014 11:36:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WB0mH-0002if-SA
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 11:36:26 +0000
Received: from [85.158.143.35:64209] by server-3.bemta-4.messagelabs.com id
	C9/8B-11539-93222F25; Wed, 05 Feb 2014 11:36:25 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391600184!3298600!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4975 invoked from network); 5 Feb 2014 11:36:24 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 11:36:24 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391600184; l=499;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=bjOoZdmVPwuHiA/BKGe6oFWdTWE=;
	b=XP8zWRu3gl+LW8AHBDFuf4C/3xKRGSRGJf9at2OoN2OmGlNIteH1/ynI5BHllR7nYdH
	J6UZtemdUeF+hBw/RLtB6MlZrayCkNJK/i0d7atQ1nmVfhErYvtQBmUHC+j+J9m+ameH8
	pVRx49OwqEEGzbmiVFQ0+fEZcezQ4QIxfTM=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id u01dadq15BaO1dS
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 5 Feb 2014 12:36:24 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 8E6AC50269; Wed,  5 Feb 2014 12:36:23 +0100 (CET)
Date: Wed, 5 Feb 2014 12:36:23 +0100
From: Olaf Hering <olaf@aepfle.de>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20140205113623.GA24025@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
	<52F21C29.8090607@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F21C29.8090607@eu.citrix.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: anthony.perard@citrix.com, xen-devel@lists.xen.org,
	Ian.Jackson@eu.citrix.com, Ian Campbell <Ian.Campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, George Dunlap wrote:

> Well it looks like in order to keep ABI compatibility (which I don't think
> we ever promised), you're introducing this weird hack with overloading a
> putative boolean value with a magic number?

Yes, that's the point. If libxl_device_disk changes then IMO the SONAME
has to change as well. And is 4.4-rc4 the right time to do that?
Likely not. I'm fine with carrying the 4.4 patch to achieve the result,
and putting the other version into 4.5.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 11:46:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 11:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB0vy-00033l-BU; Wed, 05 Feb 2014 11:46:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB0vw-00033g-Kl
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 11:46:24 +0000
Received: from [85.158.139.211:42421] by server-11.bemta-5.messagelabs.com id
	3A/E4-23886-F8422F25; Wed, 05 Feb 2014 11:46:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391600781!1813384!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20174 invoked from network); 5 Feb 2014 11:46:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 11:46:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100054453"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 11:46:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	06:46:16 -0500
Message-ID: <1391600775.6497.117.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 5 Feb 2014 11:46:15 +0000
In-Reply-To: <20140205113623.GA24025@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
	<52F21C29.8090607@eu.citrix.com> <20140205113623.GA24025@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, anthony.perard@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 12:36 +0100, Olaf Hering wrote:
> On Wed, Feb 05, George Dunlap wrote:
> 
> > Well it looks like in order to keep ABI compatibility (which I don't think
> > we ever promised), you're introducing this weird hack with overloading a
> > putative boolean value with a magic number?
> 
> Yes, that's the point. If libxl_device_disk changes then IMO the SONAME
> has to change as well. And is 4.4-rc4 the right time to do that?

The SONAME is currently 4.3, so it looks like we need to bump the SONAME
anyways (since it seems unlikely that we have not changed the ABI since
4.3). 

This normally happens fairly late on in the release cycle, although TBH
it should probably have happened by now.

> Likely not.

So perhaps not as unlikely ;-)

>  I'm fine with carrying the 4.4 patch to achieve the result, and
> putting the other version into 4.5.
> 
> Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 11:49:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 11:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB0z7-0003Lq-1G; Wed, 05 Feb 2014 11:49:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB0z4-0003Lj-Sk
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 11:49:39 +0000
Received: from [85.158.139.211:15408] by server-9.bemta-5.messagelabs.com id
	B0/07-11237-25522F25; Wed, 05 Feb 2014 11:49:38 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391600975!1786077!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20860 invoked from network); 5 Feb 2014 11:49:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 11:49:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98160929"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 11:49:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 06:49:33 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB0yz-0004NV-PQ;
	Wed, 05 Feb 2014 11:49:33 +0000
Message-ID: <52F2253B.9000000@eu.citrix.com>
Date: Wed, 5 Feb 2014 11:49:15 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Joby Poriyath
	<joby.poriyath@citrix.com>
References: <20140204181023.GA5293@citrix.com>
	<1391592179.6497.73.camel@kazak.uk.xensource.com>
In-Reply-To: <1391592179.6497.73.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 09:22 AM, Ian Campbell wrote:
> On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
>> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
>> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
>> boot after the installation.
>>
>> In addition to this, RHEL 7 menu entries have two different single-quote
>> delimited strings on the same line, and the greedy grouping for menuentry
>> parsing gets both strings, and the options in between.

So you're saying that adding the '?' just happens to change the match 
because of a quirk in the algorithms in the python library? That seems 
more like a hack than a proper fix; there may be other versions of 
python (future versions, for instance) where the new regexp will have 
the same effect as the old one, and we'll have another regression.

Even if the behavior described is part of the defined interface, I'd be 
wary of using this because future developers may not realize what it's 
for, or how to modify it properly to retain the properties it has now.
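
[The greedy-versus-non-greedy distinction at issue can be reproduced in a
few lines of Python; the menuentry line and patterns below are
illustrative only, not the actual pygrub code:]

```python
import re

# A grub.cfg-style line with two single-quoted strings and options between them.
line = "menuentry 'Red Hat Enterprise Linux' --class red --class os 'rhel-opt' {"

# Greedy: (.*) runs on to the LAST quote, so the capture swallows both
# quoted strings plus the options between them.
greedy = re.search(r"menuentry\s+'(.*)'", line)

# Non-greedy: (.*?) stops at the FIRST closing quote, capturing only the title.
lazy = re.search(r"menuentry\s+'(.*?)'", line)

print(greedy.group(1))  # Red Hat Enterprise Linux' --class red --class os 'rhel-opt
print(lazy.group(1))    # Red Hat Enterprise Linux
```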

>>
>> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Cc: george.dunlap@citrix.com
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> IMHO this can go into 4.4, unless George objects today I shall commit.

I'm a bit on the fence about this one.  If this had been sent a month 
ago, it would be a no-brainer.  It certainly looks like it should work 
just fine.  On the other hand, pygrub is an important bit of 
functionality, and I'm not sure how much testing it gets. But of course 
the XenServer XenRT tests probably exercise it fairly well (or else they 
wouldn't be submitting this patch).

The Register seems to think that RHEL will be released "in the first 
half of 2014", which would certainly be before 4.5.  But we should have 
another point release before then, with enough time to do better testing 
and (possibly) come up with a better solution to the regexp problem 
above (assuming my interpretation is correct).

I'm wondering though whether it would make more sense to save this for 
4.4.1.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:07:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1GB-0003u0-Rz; Wed, 05 Feb 2014 12:07:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB1GA-0003tv-Sq
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:07:19 +0000
Received: from [193.109.254.147:59001] by server-5.bemta-14.messagelabs.com id
	FC/51-16688-67922F25; Wed, 05 Feb 2014 12:07:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391602036!2172526!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9901 invoked from network); 5 Feb 2014 12:07:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:07:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100058930"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 12:07:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	07:07:15 -0500
Message-ID: <1391602034.6497.128.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 5 Feb 2014 12:07:14 +0000
In-Reply-To: <52F2253B.9000000@eu.citrix.com>
References: <20140204181023.GA5293@citrix.com>
	<1391592179.6497.73.camel@kazak.uk.xensource.com>
	<52F2253B.9000000@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 11:49 +0000, George Dunlap wrote:
> On 02/05/2014 09:22 AM, Ian Campbell wrote:
> > On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
> >> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
> >> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
> >> boot after the installation.
> >>
> >> In addition to this, RHEL 7 menu entries have two different single-quote
> >> delimited strings on the same line, and the greedy grouping for menuentry
> >> parsing gets both strings, and the options in between.
> 
> So you're saying that adding the '?' just happens to change the match 
> because of a quirk in the algorithms in the python library? That seems 
> more like a hack than a proper fix; there may be other versions of 
> python (future versions, for instance) where the new regexp will have 
> the same effect as the old one, and we'll have another regression.
> 
> Even if the behavior described is part of the defined interface,

I believe it is. Joby posted a link earlier. It also seems to be part of
the Perl re syntax -- and lots of things use Perl's regex syntax so I
think it is pretty "standard" (although I was not previously aware of it
either). Wikipedia's regex page talks about it too.
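
[A minimal Python illustration of the documented semantics; the pattern
here is unrelated to the actual pygrub regexp:]

```python
import re

# Lazy quantifiers (*?, +?, ??) are part of Python's documented 're' syntax,
# not an implementation accident: they always prefer the shortest match.
assert re.match(r"<(.*)>", "<a><b>").group(1) == "a><b"  # greedy: longest match
assert re.match(r"<(.*?)>", "<a><b>").group(1) == "a"    # lazy: shortest match
```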

>  I'd be 
> wary of using this because future developers may not realize what it's 
> for, or how to modify it properly to retain the properties it has now.

Hypothetical developer ignorance might call for a comment, but I think
avoiding language features which provide the semantics we need just
because they are a bit obscure would be a mistake.

> >> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> >> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Cc: george.dunlap@citrix.com
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > IMHO this can go into 4.4, unless George objects today I shall commit.
> 
> I'm a bit on the fence about this one.  If this had been sent a month 
> ago, it would be a no-brainer.  It certainly looks like it should work 
> just fine.  On the other hand, pygrub is an important bit of 
> functionality, and I'm not sure how much testing it gets. But of course 
> the XenServer XenRT tests probably exercise it fairly well (or else they 
> wouldn't be submitting this patch).

FWIW I intended to run it over the (admittedly small) set of test cases
in the tree as part of the commit process. I believe Joby has already
done so anyway.

> The Register seems to think that RHEL will be released "in the first 
> half of 2014", which would certainly be before 4.5.  But we should have 
> another point release before then, with enough time to do better testing 
> and (possibly) come up with a better solution to the regexp problem 
> above (assuming my interpretation is correct).
> 
> I'm wondering though whether it would make more sense to save this for 
> 4.4.1.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:16:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1OX-0004SA-Rq; Wed, 05 Feb 2014 12:15:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB1OW-0004S5-UP
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 12:15:57 +0000
Received: from [85.158.139.211:43963] by server-3.bemta-5.messagelabs.com id
	CD/0A-13671-C7B22F25; Wed, 05 Feb 2014 12:15:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391602553!1798951!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22860 invoked from network); 5 Feb 2014 12:15:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:15:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98169558"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 12:15:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 07:15:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WB1OS-0004wg-8k;
	Wed, 05 Feb 2014 12:15:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WB1OS-00056Q-7c;
	Wed, 05 Feb 2014 12:15:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24729-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 12:15:52 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24729: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24729 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24729/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)     broken pass in 24728
 test-amd64-amd64-xl-sedf      3 host-install(3)  broken in 24728 pass in 24729
 test-amd64-i386-freebsd10-amd64 3 host-install(3) broken in 24728 pass in 24729

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl           3 host-install(3)              broken like 24722
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24722
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)        broken like 24723
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24728 like 24719

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ff1745d5882b7356ea423709919e46e55c31b615
baseline version:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=ff1745d5882b7356ea423709919e46e55c31b615
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable ff1745d5882b7356ea423709919e46e55c31b615
+ branch=xen-unstable
+ revision=ff1745d5882b7356ea423709919e46e55c31b615
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git ff1745d5882b7356ea423709919e46e55c31b615:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   04d31ea..ff1745d  ff1745d5882b7356ea423709919e46e55c31b615 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:16:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1OX-0004SA-Rq; Wed, 05 Feb 2014 12:15:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB1OW-0004S5-UP
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 12:15:57 +0000
Received: from [85.158.139.211:43963] by server-3.bemta-5.messagelabs.com id
	CD/0A-13671-C7B22F25; Wed, 05 Feb 2014 12:15:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391602553!1798951!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22860 invoked from network); 5 Feb 2014 12:15:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:15:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98169558"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 12:15:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 07:15:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WB1OS-0004wg-8k;
	Wed, 05 Feb 2014 12:15:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WB1OS-00056Q-7c;
	Wed, 05 Feb 2014 12:15:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24729-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 12:15:52 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24729: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24729 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24729/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)     broken pass in 24728
 test-amd64-amd64-xl-sedf      3 host-install(3)  broken in 24728 pass in 24729
 test-amd64-i386-freebsd10-amd64 3 host-install(3) broken in 24728 pass in 24729

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl           3 host-install(3)              broken like 24722
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24722
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)        broken like 24723
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24728 like 24719

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ff1745d5882b7356ea423709919e46e55c31b615
baseline version:
 xen                  04d31ea1b1caeac7f77b5d18910761abd540545f

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wei.liu2@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=ff1745d5882b7356ea423709919e46e55c31b615
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable ff1745d5882b7356ea423709919e46e55c31b615
+ branch=xen-unstable
+ revision=ff1745d5882b7356ea423709919e46e55c31b615
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git ff1745d5882b7356ea423709919e46e55c31b615:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   04d31ea..ff1745d  ff1745d5882b7356ea423709919e46e55c31b615 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:21:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1TW-0004lr-P7; Wed, 05 Feb 2014 12:21:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB1TU-0004li-KA
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:21:05 +0000
Received: from [85.158.137.68:34817] by server-1.bemta-3.messagelabs.com id
	DC/D5-17293-FAC22F25; Wed, 05 Feb 2014 12:21:03 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391602861!13563646!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14552 invoked from network); 5 Feb 2014 12:21:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:21:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98170864"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 12:21:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 07:21:00 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB1TQ-0004qd-Ci;
	Wed, 05 Feb 2014 12:21:00 +0000
Message-ID: <52F22CAC.3050003@citrix.com>
Date: Wed, 5 Feb 2014 12:21:00 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <20140204181023.GA5293@citrix.com>
	<1391592179.6497.73.camel@kazak.uk.xensource.com>
	<52F2253B.9000000@eu.citrix.com>
In-Reply-To: <52F2253B.9000000@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 11:49, George Dunlap wrote:
> On 02/05/2014 09:22 AM, Ian Campbell wrote:
>> On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
>>> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
>>> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
>>> boot after the installation.
>>>
>>> In addition to this, RHEL 7 menu entries have two different
>>> single-quote
>>> delimited strings on the same line, and the greedy grouping for
>>> menuentry
>>> parsing gets both strings, and the options in between.
>
> So you're saying that adding the '?' just happens to change the match
> because of a quirk in the algorithms in the python library?

No - it is well-specified regex syntax.

Skimming the xend code, lazy matching gets moderate use.

It is even already used in pygrub itself:

    bootfsgroup = re.findall('zfs-bootfs=(.*?)[\s\,\"]', bootfsargs)
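The greedy-vs-lazy difference is easy to see on a RHEL 7-style menuentry line. (The line and the patterns below are illustrative sketches, not pygrub's actual regex.)

```python
import re

# A grub2 menuentry line with two single-quoted strings on the same line.
line = "menuentry 'Red Hat Enterprise Linux 7' --class rhel --id 'rhel-7' {"

# Greedy: (.*) runs to the LAST closing quote, swallowing both quoted
# strings and the options in between.
greedy = re.search(r"menuentry\s+'(.*)'", line).group(1)

# Lazy: (.*?) stops at the FIRST closing quote, capturing only the title.
lazy = re.search(r"menuentry\s+'(.*?)'", line).group(1)

print(greedy)  # Red Hat Enterprise Linux 7' --class rhel --id 'rhel-7
print(lazy)    # Red Hat Enterprise Linux 7
```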


> That seems more like a hack than a proper fix; there may be other
> versions of python (future versions, for instance) where the new
> regexp will have the same effect as the old one, and we'll have
> another regression.
>
> Even if the behavior described is part of the defined interface, I'd
> be wary of using this because future developers may not realize what
> it's for, or how to modify it properly to retain the properties it has
> now.

That is a matter of opinion, but I would disagree.  I personally use
lazy matching quite often, and encounter it moderately frequently in
others' code.

~Andrew

>
>>>
>>> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
>>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Cc: george.dunlap@citrix.com
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> IMHO this can go into 4.4; unless George objects today, I shall commit.
>
> I'm a bit on the fence about this one.  If this had been sent a month
> ago, it would be a no-brainer.  It certainly looks like it should work
> just fine.  On the other hand, pygrub is an important bit of
> functionality, and I'm not sure how much testing it gets. But of
> course the XenServer XenRT tests probably exercise it fairly well (or
> else they wouldn't be submitting this patch).
>
> The Register seems to think that RHEL will be released "in the first
> half of 2014", which would certainly be before 4.5.  But we should
> have another point release before then, with enough time to do better
> testing and (possibly) come up with a better solution to the regexp
> problem above (assuming my interpretation is correct).
>
> I'm wondering though whether it would make more sense to save this for
> 4.4.1.
>
>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:27:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1ZX-0004xJ-Bq; Wed, 05 Feb 2014 12:27:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WB1ZU-0004x7-Ps
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:27:17 +0000
Received: from [85.158.137.68:5395] by server-1.bemta-3.messagelabs.com id
	11/4F-17293-42E22F25; Wed, 05 Feb 2014 12:27:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391603235!13531979!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21383 invoked from network); 5 Feb 2014 12:27:15 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 12:27:15 -0000
From xen-devel-bounces@lists.xen.org Wed Feb 05 12:27:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1ZX-0004xJ-Bq; Wed, 05 Feb 2014 12:27:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WB1ZU-0004x7-Ps
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:27:17 +0000
Received: from [85.158.137.68:5395] by server-1.bemta-3.messagelabs.com id
	11/4F-17293-42E22F25; Wed, 05 Feb 2014 12:27:16 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391603235!13531979!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21383 invoked from network); 5 Feb 2014 12:27:15 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 12:27:15 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391603235; l=336;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=KgCS0H3S7dqvuN9cASXrXbLqHvs=;
	b=F2VEPKUbQM+4C/4t316ft3XQd0D6B9kRPt4bL3fGtzmy4qilanJnl8gw1lJkTRzhdoV
	Au3reEA3UQS0+mykRE+fHJGb4ypqf9uSVlX9MVQJPGjc24efFGnsIh937/xZYswROw+1p
	YNQPVAMK3bW1tsP/+nXGWEs5Tcb3tP8ffAI=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id R06df0q15CRFxH7
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 5 Feb 2014 13:27:15 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id AD82B50269; Wed,  5 Feb 2014 13:27:14 +0100 (CET)
Date: Wed, 5 Feb 2014 13:27:14 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140205122714.GA804@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
	<52F21C29.8090607@eu.citrix.com> <20140205113623.GA24025@aepfle.de>
	<1391600775.6497.117.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391600775.6497.117.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: George Dunlap <george.dunlap@eu.citrix.com>, anthony.perard@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, Ian Campbell wrote:

> The SONAME is currently 4.3, so it looks like we need to bump the SONAME
> anyways (since it seems unlikely that we have not changed the ABI since
> 4.3). 

Looking at 'git diff RELEASE-4.3.0.. tools/libxl/libxl.h', I see that at
least libxl_domain_create_restore got a new parameter.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:35:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1hJ-0005Yp-MB; Wed, 05 Feb 2014 12:35:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB1hH-0005YX-Io
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:35:19 +0000
Received: from [85.158.143.35:31460] by server-3.bemta-4.messagelabs.com id
	00/C0-11539-60032F25; Wed, 05 Feb 2014 12:35:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391603715!3305834!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9584 invoked from network); 5 Feb 2014 12:35:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:35:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98174252"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 12:34:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	07:34:47 -0500
Message-ID: <1391603685.6497.146.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 5 Feb 2014 12:34:45 +0000
In-Reply-To: <20140205122714.GA804@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
	<52F21C29.8090607@eu.citrix.com> <20140205113623.GA24025@aepfle.de>
	<1391600775.6497.117.camel@kazak.uk.xensource.com>
	<20140205122714.GA804@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, anthony.perard@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 13:27 +0100, Olaf Hering wrote:
> On Wed, Feb 05, Ian Campbell wrote:
> 
> > The SONAME is currently 4.3, so it looks like we need to bump the SONAME
> > anyways (since it seems unlikely that we have not changed the ABI since
> > 4.3). 
> 
> Looking at git diff RELEASE-4.3.0.. tools/libxl/libxl.h I see that at
> least libxl_domain_create_restore got a new parameter.

Right. Ian J said he was going to do a sweep for changes at some point
before the release.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:39:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1l5-0005zk-Dq; Wed, 05 Feb 2014 12:39:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WB1l4-0005zc-Ef
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:39:14 +0000
Received: from [193.109.254.147:32630] by server-14.bemta-14.messagelabs.com
	id BF/40-29228-1F032F25; Wed, 05 Feb 2014 12:39:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391603953!2173183!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1524 invoked from network); 5 Feb 2014 12:39:13 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 12:39:13 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391603953; l=675;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=l226ptPydGIjP/N778J3DFTJIB4=;
	b=iZu00Rm+gLxVBRjvojIpFbXZWYfDwL7mmCjGbeKj6em+0WfWk6sbPMYFo3M75GBwld7
	l8aDDVEjhS06QoyAtZPm/IUltt9nhPBVdEV4cMzAGPh/sl3H9z4EcilkBepLaFQ97bnbV
	jp2zAFZpczriF1mGDDzFwp5DUecXLiILSOk=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.23 AUTH) with ESMTPSA id u06602q15Cd905b
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 5 Feb 2014 13:39:09 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id E401850269; Wed,  5 Feb 2014 13:39:08 +0100 (CET)
Date: Wed, 5 Feb 2014 13:39:08 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140205123908.GA1198@aepfle.de>
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, Adel Amani wrote:

>     si.flags = atoi(argv[5]);

> lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG: XTL_DETAIL;
> lflags = XTL_STDIOSTREAM_SHOW_PID | XTL_STDIOSTREAM_HIDE_PROGRESS;
> l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags);
> si.xch = xc_interface_open(l,0,0);

Please check what XCFLAGS_DEBUG actually means, and whether that condition
can ever be true without also modifying the xend-related code.

For your exploration of how migration works internally, I guess it would
be easier to just write 'lvl = XTL_DEBUG;' and be done with it.
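[Editor's note: Olaf's point (that the ternary picks the debug level only when the caller's flag word actually has XCFLAGS_DEBUG set) can be modelled in a few lines of Python. The constant values below are hypothetical stand-ins for illustration, not the real definitions from xenctrl.h or xentoollog.h.]

```python
# Illustrative model of the quoted C logic:
#   lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL;
# Constant values here are hypothetical stand-ins, not the real
# xenctrl.h / xentoollog.h definitions.
XCFLAGS_DEBUG = 1 << 1
XTL_DETAIL = 4
XTL_DEBUG = 5

def pick_level(flags):
    # The debug level is chosen only if the caller set XCFLAGS_DEBUG;
    # with any other flags value this always falls back to XTL_DETAIL.
    return XTL_DEBUG if flags & XCFLAGS_DEBUG else XTL_DETAIL
```

Unless the caller (here, the xend side supplying argv[5]) sets XCFLAGS_DEBUG, pick_level() can never return XTL_DEBUG, which is why hard-coding 'lvl = XTL_DEBUG;' is the simpler route for experimentation.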

Other than that, the changes you made appear to be correct.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:43:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:43:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1p3-0006E4-CW; Wed, 05 Feb 2014 12:43:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB1p1-0006Dx-L3
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:43:19 +0000
Received: from [85.158.143.35:52923] by server-2.bemta-4.messagelabs.com id
	13/F3-10891-5E132F25; Wed, 05 Feb 2014 12:43:17 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391604195!3318868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7796 invoked from network); 5 Feb 2014 12:43:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:43:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98176681"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 12:43:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 07:43:14 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB1fR-00050s-RC;
	Wed, 05 Feb 2014 12:33:25 +0000
Message-ID: <52F22F83.50607@eu.citrix.com>
Date: Wed, 5 Feb 2014 12:33:07 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <20140204181023.GA5293@citrix.com>	
	<1391592179.6497.73.camel@kazak.uk.xensource.com>	
	<52F2253B.9000000@eu.citrix.com>
	<1391602034.6497.128.camel@kazak.uk.xensource.com>
In-Reply-To: <1391602034.6497.128.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 12:07 PM, Ian Campbell wrote:
> On Wed, 2014-02-05 at 11:49 +0000, George Dunlap wrote:
>> On 02/05/2014 09:22 AM, Ian Campbell wrote:
>>> On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
>>>> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
>>>> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
>>>> boot after the installation.
>>>>
>>>> In addition to this, RHEL 7 menu entries have two different single-quote
>>>> delimited strings on the same line, and the greedy grouping for menuentry
>>>> parsing gets both strings, and the options inbetween.
>> So you're saying that adding the '?' just happens to change the match
>> because of a quirk in the algorithms in the python library? That seems
>> more like a hack than a proper fix; there may be other versions of
>> python (future versions, for instance) where the new regexp will have
>> the same effect as the old one, and we'll have another regression.
>>
>> Even if the behavior described is part of the defined interface,
> I believe it is. Joby posted a link earlier. It also seems to be part of
> the Perl re syntax -- and lots of things use Perl's regex syntax so I
> think it is pretty "standard" (although I was not previously aware of it
> either). Wikipedia's regex page talks about it too.
>
>>   I'd be
>> wary of using this because future developers may not realize what it's
>> for, or how to modify it properly to retain the properties it has now.
> Hypothetical developer ignorance might call for a comment, but I think
> avoiding language features which provide the semantics we need just
> because they are a bit obscure would be a mistake.

Well, of course having something fixed in an obscure fashion is better 
than not having it fixed at all.  But having it fixed in a way which is 
more obvious and doesn't rely on quirks of internal algorithms is better 
still, if it can be done without being too ugly.  Wouldn't something 
like the following work, and be more robust?  (I haven't tested this; 
I'm just going on Joby's description of the problem.)

title_match = re.match('^menuentry ["\']([^"\']*)["\'] (.*){', l)


OTOH, if, as Andrew claims in another e-mail in this thread, ".*?" is a 
common idiom for exactly this (and it is common enough that I can well 
believe it), then it's even better to go with the idiom.
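[Editor's note: the behaviour George and Andrew are debating can be demonstrated directly. The sample menuentry line below is made up for illustration of a RHEL 7-style entry with two single-quoted strings on one line.]

```python
import re

# Hypothetical RHEL 7-style menuentry line with two single-quoted strings
line = "menuentry 'RHEL Server' --class red --class os 'gnulinux-simple' {"

# Old pattern: greedy .* runs to the LAST quote, swallowing the options
greedy = re.match(r"^menuentry ['\"](.*)['\"]", line)
# Patched pattern: non-greedy .*? stops at the FIRST closing quote
lazy = re.match(r"^menuentry ['\"](.*?)['\"]", line)
# George's suggested alternative: a negated character class
alt = re.match(r"^menuentry [\"']([^\"']*)[\"'] (.*){", line)

print(greedy.group(1))  # RHEL Server' --class red --class os 'gnulinux-simple
print(lazy.group(1))    # RHEL Server
print(alt.group(1))     # RHEL Server
```

Both the ".*?" patch and the negated-class alternative capture only the title; the documented semantics of "*?" (lazy repetition) in Python's re module make the patched behaviour a defined part of the interface rather than an algorithmic quirk.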

>
>>>> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
>>>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Cc: george.dunlap@citrix.com
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> IMHO this can go into 4.4, unless George objects today I shall commit.
>> I'm a bit on the fence about this one.  If this had been sent a month
>> ago, it would be a no-brainer.  It certainly looks like it should work
>> just fine.  On the other hand, pygrub is an important bit of
>> functionality, and I'm not sure how much testing it gets. But of course
>> the XenServer XenRT tests probably exercise it fairly well (or else they
>> wouldn't be submitting this patch).
> FWIW I intended to run it over the (admittedly small) set of test cases
> in the tree as part of the commit process. I believe Joby has already
> done so anyway.

Sure; I'm worried about the near-infinite number of other grub configs 
not in our tree. :-)  (Although the vast majority are likely to be 
represented by those generated by a distro's update-grub with the 
default parameters.)

On the whole, given that RHEL 7 is not yet out, and the timing of the 
patch, I'm inclined to say this should wait until 4.4.1, unless there's 
a reason that wouldn't be suitable.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:43:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:43:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1p3-0006E4-CW; Wed, 05 Feb 2014 12:43:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB1p1-0006Dx-L3
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:43:19 +0000
Received: from [85.158.143.35:52923] by server-2.bemta-4.messagelabs.com id
	13/F3-10891-5E132F25; Wed, 05 Feb 2014 12:43:17 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391604195!3318868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7796 invoked from network); 5 Feb 2014 12:43:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:43:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98176681"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 12:43:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 07:43:14 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB1fR-00050s-RC;
	Wed, 05 Feb 2014 12:33:25 +0000
Message-ID: <52F22F83.50607@eu.citrix.com>
Date: Wed, 5 Feb 2014 12:33:07 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <20140204181023.GA5293@citrix.com>	
	<1391592179.6497.73.camel@kazak.uk.xensource.com>	
	<52F2253B.9000000@eu.citrix.com>
	<1391602034.6497.128.camel@kazak.uk.xensource.com>
In-Reply-To: <1391602034.6497.128.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 12:07 PM, Ian Campbell wrote:
> On Wed, 2014-02-05 at 11:49 +0000, George Dunlap wrote:
>> On 02/05/2014 09:22 AM, Ian Campbell wrote:
>>> On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
>>>> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
>>>> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
>>>> boot after the installation.
>>>>
>>>> In addition to this, RHEL 7 menu entries have two different single-quote
>>>> delimited strings on the same line, and the greedy grouping for menuentry
>>>> parsing gets both strings, and the options in between.
>> So you're saying that adding the '?' just happens to change the match
>> because of a quirk in the algorithms in the python library? That seems
>> more like a hack than a proper fix; there may be other versions of
>> python (future versions, for instance) where the new regexp will have
>> the same effect as the old one, and we'll have another regression.
>>
>> Even if the behavior described is part of the defined interface,
> I believe it is. Joby posted a link earlier. It also seems to be part of
> the Perl re syntax -- and lots of things use Perl's regex syntax so I
> think it is pretty "standard" (although I was not previously aware of it
> either). Wikipedia's regex page talks about it too.
>
>>   I'd be
>> wary of using this because future developers may not realize what it's
>> for, or how to modify it properly to retain the properties it has now.
> Hypothetical developer ignorance might call for a comment, but I think
> avoiding language features which provide the semantics we need just
> because they are a bit obscure would be a mistake.

Well of course having something fixed in an obscure fashion is better 
than not having it fixed at all.  But having it fixed in a way which is 
more obvious and doesn't rely on quirks of internal algorithms is even 
better yet, if it can be done without being too ugly.  Wouldn't 
something like the following work, and be more robust? (I haven't tested 
this, I'm just going on Joby's description of the problem.)

title_match = re.match('^menuentry ["\']([^"\']*)["\'] (.*){', l)


OTOH, if as Andrew claims (in another e-mail in this thread), ".*?" is a 
common idiom for exactly that (and since it's so common, I can well 
believe there is an idiom) then it's even better to go with the idiom.
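[For readers following the regex discussion: the difference between the greedy `.*` grouping, the non-greedy `.*?` idiom from Joby's patch, and George's negated-character-class alternative can be sketched as below. The menuentry line is a hypothetical example constructed from the thread's description of RHEL 7 configs, not copied from an actual grub.cfg.]

```python
import re

# Hypothetical RHEL 7 style menuentry: two quoted strings on one line,
# with options in between.
line = ("menuentry 'Red Hat Enterprise Linux Server' --class red "
        "--class gnu-linux 'rhel-opts' {")

# Greedy: (.*) runs to the LAST quote, so the captured "title" swallows
# the options and the second quoted string too.
greedy = re.match(r"^menuentry ['\"](.*)['\"]", line)

# Non-greedy (the ".*?" idiom): stops at the FIRST closing quote,
# capturing only the title.
lazy = re.match(r"^menuentry ['\"](.*?)['\"]", line)

# George's alternative: a negated character class can't cross a quote,
# so it captures the same thing without relying on laziness.
alt = re.match(r"^menuentry [\"']([^\"']*)[\"'] (.*){", line)

print(greedy.group(1))  # title plus options plus second string
print(lazy.group(1))    # 'Red Hat Enterprise Linux Server'
print(alt.group(1))     # 'Red Hat Enterprise Linux Server'
```

Both the `.*?` and `[^"']*` forms capture just the first quoted string; they differ only in whether the "stop early" behaviour is expressed via lazy repetition or via the character class itself.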

>
>>>> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
>>>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Cc: george.dunlap@citrix.com
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> IMHO this can go into 4.4, unless George objects today I shall commit.
>> I'm a bit on the fence about this one.  If this had been sent a month
>> ago, it would be a no-brainer.  It certainly looks like it should work
>> just fine.  On the other hand, pygrub is an important bit of
>> functionality, and I'm not sure how much testing it gets. But of course
>> the XenServer XenRT tests probably exercise it fairly well (or else they
>> wouldn't be submitting this patch).
> FWIW I intended to run it over the (admittedly small) set of test cases
> in the tree as part of the commit process. I believe Joby has already
> done so anyway.

Sure; I'm worried about the near infinite number of other grub configs 
not in our tree. :-)  (Although the vast majority are likely to be 
represented by those generated by a distro's update-grub with the 
default parameters.)

On the whole, given that RHEL 7 is not yet out, and the timing of the 
patch, I'm inclined to say this should wait until 4.4.1, unless there's 
a reason that wouldn't be suitable.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 12:45:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 12:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB1rG-0006Ns-0Q; Wed, 05 Feb 2014 12:45:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB1rE-0006Ni-7Z
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 12:45:36 +0000
Received: from [85.158.143.35:50529] by server-3.bemta-4.messagelabs.com id
	E5/53-11539-F6232F25; Wed, 05 Feb 2014 12:45:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391604333!3325669!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11803 invoked from network); 5 Feb 2014 12:45:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 12:45:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100070702"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 12:45:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	07:45:32 -0500
Message-ID: <1391604332.6497.147.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 5 Feb 2014 12:45:32 +0000
In-Reply-To: <52F22F83.50607@eu.citrix.com>
References: <20140204181023.GA5293@citrix.com>
	<1391592179.6497.73.camel@kazak.uk.xensource.com>
	<52F2253B.9000000@eu.citrix.com>
	<1391602034.6497.128.camel@kazak.uk.xensource.com>
	<52F22F83.50607@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Joby Poriyath <joby.poriyath@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 12:33 +0000, George Dunlap wrote:
> On 02/05/2014 12:07 PM, Ian Campbell wrote:
> > On Wed, 2014-02-05 at 11:49 +0000, George Dunlap wrote:
> >> On 02/05/2014 09:22 AM, Ian Campbell wrote:
> >>> On Tue, 2014-02-04 at 18:10 +0000, Joby Poriyath wrote:
> >>>> menuentry in grub2/grub.cfg uses linux16 and initrd16 commands
> >>>> instead of linux and initrd. Due to this RHEL 7 (beta) guest failed to
> >>>> boot after the installation.
> >>>>
> >>>> In addition to this, RHEL 7 menu entries have two different single-quote
> >>>> delimited strings on the same line, and the greedy grouping for menuentry
> >>>> parsing gets both strings, and the options in between.
> >> So you're saying that adding the '?' just happens to change the match
> >> because of a quirk in the algorithms in the python library? That seems
> >> more like a hack than a proper fix; there may be other versions of
> >> python (future versions, for instance) where the new regexp will have
> >> the same effect as the old one, and we'll have another regression.
> >>
> >> Even if the behavior described is part of the defined interface,
> > I believe it is. Joby posted a link earlier. It also seems to be part of
> > the Perl re syntax -- and lots of things use Perl's regex syntax so I
> > think it is pretty "standard" (although I was not previously aware of it
> > either). Wikipedia's regex page talks about it too.
> >
> >>   I'd be
> >> wary of using this because future developers may not realize what it's
> >> for, or how to modify it properly to retain the properties it has now.
> > Hypothetical developer ignorance might call for a comment, but I think
> > avoiding language features which provide the semantics we need just
> > because they are a bit obscure would be a mistake.
> 
> Well of course having something fixed in an obscure fashion is better 
> than not having it fixed at all.  But having it fixed in a way which is 
> more obvious and doesn't rely on quirks of internal algorithms is even 
> better yet, if it can be done without being too ugly.  Wouldn't 
> something like the following work, and be more robust? (I haven't tested 
> this, I'm just going on Joby's description of the problem.)
> 
> title_match = re.match('^menuentry ["\']([^"\']*)["\'] (.*){', l)
> 
> 
> OTOH, if as Andrew claims 

As did I, right above.

> (in another e-mail in this thread), ".*?" is a 
> common idiom for exactly that (and since it's so common, I can well 
> believe there is an idiom) then it's even better to go with the idiom.

Yes, that was my point, this is apparently a common idiom.

> >>>> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> >>>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>> Cc: george.dunlap@citrix.com
> >>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >>>
> >>> IMHO this can go into 4.4, unless George objects today I shall commit.
> >> I'm a bit on the fence about this one.  If this had been sent a month
> >> ago, it would be a no-brainer.  It certainly looks like it should work
> >> just fine.  On the other hand, pygrub is an important bit of
> >> functionality, and I'm not sure how much testing it gets. But of course
> >> the XenServer XenRT tests probably exercise it fairly well (or else they
> >> wouldn't be submitting this patch).
> > FWIW I intended to run it over the (admittedly small) set of test cases
> > in the tree as part of the commit process. I believe Joby has already
> > done so anyway.
> 
> Sure; I'm worried about the near infinite number of other grub configs 
> not in our tree. :-)  (Although the vast majority are likely to be 
> represented by those generated by a distro's update-grub with the 
> default parameters.)
> 
> On the whole, given that RHEL 7 is not yet out, and the timing of the 
> patch, I'm inclined to say this should wait until 4.4.1, unless there's 
> a reason that wouldn't be suitable.
> 
>   -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 13:33:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 13:33:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB2b2-0000bt-Jv; Wed, 05 Feb 2014 13:32:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jajcus@jajcus.net>) id 1WB2b0-0000bo-3m
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 13:32:54 +0000
Received: from [193.109.254.147:14220] by server-16.bemta-14.messagelabs.com
	id D7/DD-21945-58D32F25; Wed, 05 Feb 2014 13:32:53 +0000
X-Env-Sender: jajcus@jajcus.net
X-Msg-Ref: server-7.tower-27.messagelabs.com!1391607172!2197536!1
X-Originating-IP: [84.205.176.49]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15795 invoked from network); 5 Feb 2014 13:32:52 -0000
Received: from tropek.jajcus.net (HELO tropek.jajcus.net) (84.205.176.49)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 13:32:52 -0000
Received: from [192.168.15.1] (eggsoft.sp.imz.pl [212.106.158.142])
	(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by tropek.jajcus.net (Postfix) with ESMTPSA id 27FE05002;
	Wed,  5 Feb 2014 14:32:51 +0100 (CET)
Message-ID: <52F23D82.2060306@jajcus.net>
Date: Wed, 05 Feb 2014 14:32:50 +0100
From: Jacek Konieczny <jajcus@jajcus.net>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
References: <1391527619.2441.16.camel@astar.houby.net>	<1391530374.6497.55.camel@kazak.uk.xensource.com>	<alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
	<1391592517.6497.76.camel@kazak.uk.xensource.com>
In-Reply-To: <1391592517.6497.76.camel@kazak.uk.xensource.com>
Cc: ehouby@yahoo.com, xen@lists.fedoraproject.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/14 10:28, Ian Campbell wrote:
> On Tue, 2014-02-04 at 18:41 +0000, M A Young wrote:
>>> We should probably consider taking some unit files into the xen tree, if
>>> someone wants to submit a set?
>>
>> I can submit a set, which start services individually rather than a
>> unified xencommons style start file. I didn't find a good way to
>> reproduce the xendomains script, so I ended up running an edited
>> version of the sysvinit script with a systemd wrapper file.
>

> I don't know what is conventional in systemd land but I have no problem
> with that approach.

For systemd, the much more natural way is to start and monitor each
daemon with a separate systemd unit. If some extra scripting is needed
(like setting xenstore values for dom0), that should be coded in a
separate script and then either:

- Called via ExecStartPre or ExecStartPost in the unit of the daemon the
scripting is for (e.g. filling xenstore would go to ExecStartPost of
xenstored.service), or
- Called via ExecStart in its own service unit.

Writing extra scripts may not be necessary when just a few commands
have to be started.

e.g. the xenstored.service from PLD Linux (based on some unit for
xenstored found on the Internet):

[Unit]
Description=Xenstored - daemon managing xenstore file system
Requires=proc-xen.mount var-lib-xenstored.mount
After=proc-xen.mount var-lib-xenstored.mount
Before=libvirtd.service libvirt-guests.service xendomains.service xend.service
RefuseManualStop=true
ConditionPathExists=/proc/xen

[Service]
Type=forking
Environment=XENSTORED_ARGS=
Environment=XENSTORED_ROOTDIR=/var/lib/xenstored
EnvironmentFile=-/etc/sysconfig/xenstored
PIDFile=/var/run/xenstored.pid
ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
ExecStartPre=-/bin/rm -f "$XENSTORED_ROOTDIR"/tdb*
ExecStart=/usr/sbin/xenstored --pid-file /var/run/xenstored.pid $XENSTORED_ARGS
ExecStartPost=/usr/bin/xenstore-write "/local/domain/0/name" "Domain-0"

[Install]
WantedBy=multi-user.target

-- 
Greets,
  Jacek

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 13:34:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 13:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB2co-0000gl-9q; Wed, 05 Feb 2014 13:34:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WB2cm-0000gZ-Ns
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 13:34:45 +0000
Received: from [85.158.143.35:38746] by server-2.bemta-4.messagelabs.com id
	D2/AE-10891-4FD32F25; Wed, 05 Feb 2014 13:34:44 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391607281!3319399!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16395 invoked from network); 5 Feb 2014 13:34:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 13:34:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98194882"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 13:34:40 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	08:34:39 -0500
Message-ID: <52F23DEE.7080601@citrix.com>
Date: Wed, 5 Feb 2014 13:34:38 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <52F1753C.3010508@linaro.org> <52F2163C.1000202@citrix.com>
In-Reply-To: <52F2163C.1000202@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 10:45, David Vrabel wrote:
> On 04/02/14 23:18, Julien Grall wrote:
>> Hello David,
>>
>> I'm currently trying to use Linux 3.14-rc1 as a guest on Xen on ARM (Xen 4.4-rc3).
>>
>> I have multiple issues with your event channel patch series, on both the Linux and Xen sides.
>> I also tried Linux 3.14-rc1 as dom0, but that was worse (unable to create guests).
> 
> I think there must be two issues here as both 2-level and FIFO events
> are broken.
> 
>> I'm using a simple guest config:
>> kernel="/root/zImage"
>> memory=32
>> name="test"
>> vcpus=1
>> autoballon="off"
>> extra="console=hvc0"
>>
>> If everything is OK, I should see Linux fail to find the root filesystem.
>> But here, Linux is stuck.
>>
>> From the Linux side, after bisecting, I found that the offending commit is:
>>     xen/events: remove unnecessary init_evtchn_cpu_bindings()
>>     
>>     Because the guest-side binding of an event to a VCPU (i.e., setting
>>     the local per-cpu masks) is always explicitly done after an event
>>     channel is bound to a port, there is no need to initialize all
>>     possible events as bound to VCPU 0 at start of day or after a resume.
>>     
>>     Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>>     Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>>     Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>
>> With this patch, the function __xen_evtchn_do_upcall is unable to
>> find any events (the pending bits are 0 every time).
>> It seems the second part of init_evtchn_cpu_bindings() is necessary on ARM.
> 
> I think this is because binding an interdomain or allocating an unbound
> event channel does not call bind_evtchn_to_cpu(evtchn, 0), which is
> required to set the local VCPU masks.
> 
> I think this happened to work on x86 because the irq affinity is always
> set during generic irq setup, which then binds the event channel to the
> right VCPU.  I guess ARM's irq setup misses this step.
> 
> This shouldn't affect the FIFO-based events though since
> evtchn_fifo_bind_to_cpu() is a no-op.
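
For readers unfamiliar with the 2-level ABI, an event is only delivered when
its bit is set in three bitmaps at once: pending, unmasked, and bound to the
local VCPU.  A minimal C sketch of that selection (illustrative names only,
not the kernel's; `deliverable` and its parameters are hypothetical):

```c
#include <stdint.h>

/* Sketch of 2-level event selection: an event channel fires on a CPU only
 * if it is pending, unmasked, AND set in that CPU's local binding bitmap.
 * If the per-cpu bitmap is never initialized (the bug discussed above),
 * this always evaluates to 0 and no events are ever seen. */
static uint64_t deliverable(uint64_t pending, uint64_t masked,
                            uint64_t cpu_bound)
{
    return pending & ~masked & cpu_bound;
}
```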

I think the following patch should fix the 2-level problems.

You can force the use of 2-level events by using the xen.fifo_events=0
Linux command line option.
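
A concrete way to apply this is to append the option to the guest's extra=
line in the config shown in the report (sketch; the rest of the file is
unchanged):

```
kernel="/root/zImage"
memory=32
name="test"
vcpus=1
extra="console=hvc0 xen.fifo_events=0"
```

With this in place the guest never negotiates the FIFO ABI, which isolates
the 2-level regression.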

8<-------------------------------------------------
xen/events: bind all new interdomain events to VCPU0

From: David Vrabel <david.vrabel@citrix.com>

Commit fc087e10734a4d3e40693fc099461ec1270b3fff (xen/events: remove
unnecessary init_evtchn_cpu_bindings()) causes a regression.

The kernel-side VCPU binding was not being correctly set for newly
allocated or bound interdomain events.  In ARM guests where 2-level
events were used, this would result in no interdomain events being
handled because the local VCPU masks would all be clear.

x86 guests would work because the irq affinity was set during irq
setup and this would set the correct kernel-side VCPU binding.

Fix this by properly initializing the kernel-side VCPU binding in
bind_evtchn_to_irq().

Reported-by: Julian Grall <julien.grall@linaro.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events/events_base.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..5cc1f78 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -862,6 +862,9 @@ int bind_evtchn_to_irq(unsigned int evtchn)
 			irq = ret;
 			goto out;
 		}
+
+		/* Newly bound event channels start off on VCPU0. */
+		bind_evtchn_to_cpu(evtchn, 0);
 	} else {
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
-- 
1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 13:44:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 13:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB2mI-000161-Tc; Wed, 05 Feb 2014 13:44:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WB2mH-00015o-54
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 13:44:33 +0000
Received: from [85.158.139.211:17250] by server-15.bemta-5.messagelabs.com id
	BD/EB-24395-04042F25; Wed, 05 Feb 2014 13:44:32 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391607871!1846261!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21973 invoked from network); 5 Feb 2014 13:44:31 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 13:44:31 -0000
Received: by mail-ee0-f54.google.com with SMTP id e53so239367eek.13
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 05:44:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=wEL7Tw2d1GrjQ1iFtbdeDcKJlfQoiEcnv/7tBEZyuvU=;
	b=dABkDphF0SaTC/DkE3OSdxtileTnw+gMqThjzreaSBJv3zEDEXHELEOt8sfSlfdBOA
	kWqEsA1Pcqy3lzS7TSFV/3HXyjxJzmDzKMP21TNKF3fxcQnNK/dwjiecuDsa1Vk3t2Lo
	bFNWGi4ubCmHgBVKxX6H4o0QMtqtLsc/f0+DIFX05xCTV+Q6YggmbQiX/u+4Ex86vszw
	50xODIBJFoN6GBKmdirpA3F3akNtzjQHVnbCRMtrHdvvR0Uq2HdeIzI8G3iFYp19xiEu
	v1DpCs5s2tzC78a0afxVAMGBBvKLwyhjPK9SGEiC/ykqFwMPx484xrCfQf4Ud6IPLtLe
	RNbA==
X-Gm-Message-State: ALoCoQkMcBFQms3D4w7u0+fdwFbOXg7/rINjgpba26+jVNq+qule53PBwcB3rKdFqApy8dg9w87o
X-Received: by 10.14.203.197 with SMTP id f45mr2093072eeo.90.1391607871386;
	Wed, 05 Feb 2014 05:44:31 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	y47sm54626569eel.14.2014.02.05.05.44.30 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 05:44:30 -0800 (PST)
Message-ID: <52F2403D.7080006@linaro.org>
Date: Wed, 05 Feb 2014 13:44:29 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <52F1753C.3010508@linaro.org> <52F2163C.1000202@citrix.com>
	<52F23DEE.7080601@citrix.com>
In-Reply-To: <52F23DEE.7080601@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 01:34 PM, David Vrabel wrote:

Hello David,

> I think the following patch should fix the 2-level problems.
> 
> You can force the use of 2-level events by using the xen.fifo_events=0
> Linux command line option.

Thanks for the patch. I'm now able to use 2-level events without issue
for a guest.
Now I need to look at the FIFO events when the domain is killed.

> 8<-------------------------------------------------
> xen/events: bind all new interdomain events to VCPU0
> 
> From: David Vrabel <david.vrabel@citrix.com>
> 
> Commit fc087e10734a4d3e40693fc099461ec1270b3fff (xen/events: remove
> unnecessary init_evtchn_cpu_bindings()) causes a regression.
> 
> The kernel-side VCPU binding was not being correctly set for newly
> allocated or bound interdomain events.  In ARM guests where 2-level
> events were used, this would result in no interdomain events being
> handled because the local VCPU masks would all be clear.
> 
> x86 guests would work because the irq affinity was set during irq
> setup and this would set the correct kernel-side VCPU binding.
> 
> Fix this by properly initializing the kernel-side VCPU binding in
> bind_evtchn_to_irq().
> 
> Reported-by: Julian Grall <julien.grall@linaro.org>

s/Julian/Julien/

> Signed-off-by: David Vrabel <david.vrabel@citrix.com>

Tested-by: Julien Grall <julien.grall@linaro.org>

Regards,

> ---
>  drivers/xen/events/events_base.c |    3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
> 
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index 4672e00..5cc1f78 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -862,6 +862,9 @@ int bind_evtchn_to_irq(unsigned int evtchn)
>  			irq = ret;
>  			goto out;
>  		}
> +
> +		/* Newly bound event channels start off on VCPU0. */
> +		bind_evtchn_to_cpu(evtchn, 0);
>  	} else {
>  		struct irq_info *info = info_for_irq(irq);
>  		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 13:52:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 13:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB2ta-0001Qu-3n; Wed, 05 Feb 2014 13:52:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WB2tZ-0001Qp-9D
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 13:52:05 +0000
Received: from [85.158.143.35:32086] by server-2.bemta-4.messagelabs.com id
	CA/2E-10891-40242F25; Wed, 05 Feb 2014 13:52:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391608323!3356464!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23464 invoked from network); 5 Feb 2014 13:52:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 13:52:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98200006"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 13:52:02 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	08:52:02 -0500
Message-ID: <52F24201.1070203@citrix.com>
Date: Wed, 5 Feb 2014 13:52:01 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <52F1753C.3010508@linaro.org> <52F2163C.1000202@citrix.com>
	<52F23DEE.7080601@citrix.com> <52F2403D.7080006@linaro.org>
In-Reply-To: <52F2403D.7080006@linaro.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 13:44, Julien Grall wrote:
> On 02/05/2014 01:34 PM, David Vrabel wrote:
> 
> Hello David,
> 
>> I think the following patch should fix the 2-level problems.
>>
>> You can force the use of 2-level events by using the xen.fifo_events=0
>> Linux command line option.
> 
> Thanks for the patch, I'm now able to use 2-level events without issue
> for a guest.

Good. Thanks for testing.

> Now, I need to look at the fifo events when the domain is killed.

Do FIFO events work apart from the crash on domain shutdown?

>> Reported-by: Julian Grall <julien.grall@linaro.org>
> 
> s/Julian/Julien/

Oops.  Sorry!

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 13:58:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 13:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB2zX-0001ln-PI; Wed, 05 Feb 2014 13:58:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WB2zW-0001li-AE
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 13:58:14 +0000
Received: from [85.158.143.35:10652] by server-2.bemta-4.messagelabs.com id
	7D/39-10891-57342F25; Wed, 05 Feb 2014 13:58:13 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391608693!3346489!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22251 invoked from network); 5 Feb 2014 13:58:13 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 13:58:13 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so246702eek.10
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 05:58:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=vGmjskqtU31ygu5pZu7fXtbzlo2Jji+5OyNCu0Vy920=;
	b=N/9zp7nzgGaER71vkkpae3JKvRfTfs3bQrDCZPWGfkqPGkpnZz1yvq+IXisfh/8B24
	FwFkRRt7AkCNyeWGQyz4MTI0sjBAZgpaI3mFfS4+QRaKwabXpZFDCrayLgF5h1OfCBzP
	MoXJwp+MGuQ9pA0O/Umx/wfoWM7DaIu1rZ9tAtdAtCzym0ueLmwh8gzHS5z9AC2S4bUR
	typ7ZLA/aV2X6g24DM0uWm8clXSaRrac/jBUA2n0vMUsyHzEcr6t4oiWmUs01wa/iXSX
	uAW5bx2ntPwv9oUKgQ1cAOyzksWYqNBTUhBKIatj9otQarj3UTjoZ8Fe7Xe1vjIRo2e+
	f/FQ==
X-Gm-Message-State: ALoCoQlvznJdVlvKUyZieGIOX59YUuuQgohwhuBHAqFY00f1MXr+nfHmAw2Kzr5i7i/8ebzS7eyn
X-Received: by 10.14.194.2 with SMTP id l2mr2108677een.39.1391608692959;
	Wed, 05 Feb 2014 05:58:12 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm102425561eeq.15.2014.02.05.05.58.11
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 05:58:12 -0800 (PST)
Message-ID: <52F24372.7020700@linaro.org>
Date: Wed, 05 Feb 2014 13:58:10 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <52F1753C.3010508@linaro.org> <52F2163C.1000202@citrix.com>
	<52F23DEE.7080601@citrix.com> <52F2403D.7080006@linaro.org>
	<52F24201.1070203@citrix.com>
In-Reply-To: <52F24201.1070203@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Multiple issues with event channel on Xen on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 01:52 PM, David Vrabel wrote:
> On 05/02/14 13:44, Julien Grall wrote:
>> On 02/05/2014 01:34 PM, David Vrabel wrote:
>>
>> Hello David,
>>
>>> I think the following patch should fix the 2-level problems.
>>>
>>> You can force the use of 2-level events by using the xen.fifo_events=0
>>> Linux command line option.
>>
>> Thanks for the patch, I'm now able to use 2-level events without issue
>> for a guest.
> 
> Good. Thanks for testing.
> 
>> Now, I need to look at the fifo events when the domain is killed.
> 
> Do FIFO events work apart from the crash on domain shutdown?

In the guest yes. I have to try dom0 now.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:09:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3AC-0002UA-Az; Wed, 05 Feb 2014 14:09:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WB3AB-0002Tt-2Q; Wed, 05 Feb 2014 14:09:15 +0000
Received: from [85.158.137.68:59153] by server-11.bemta-3.messagelabs.com id
	4D/DE-04255-A0642F25; Wed, 05 Feb 2014 14:09:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391609351!12399356!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14531 invoked from network); 5 Feb 2014 14:09:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:09:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98207648"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:09:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	09:09:10 -0500
Message-ID: <1391609348.6497.178.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <lars.kurth@xen.org>, Roger Pau Monne <roger.pau@citrix.com>, "Dario
	Faggioli" <dario.faggioli@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ben Guthro <benjamin.guthro@citrix.com>, "Andrew
	Cooper" <Andrew.Cooper3@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>, 
	Santosh Jodh <Santosh.Jodh@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:09:08 +0000
In-Reply-To: <52E7B6AF.3050604@xen.org>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	mirageos-devel@lists.xenproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 13:54 +0000, Lars Kurth wrote:
> Hi all,
> I have not gotten any reply to this thread. I saw Wei Lui and Andrés 
> Lagar-Cavilla make changes to the project list. Please go through the 
> items below and make changes as suggested. Otherwise, our chances to get 
> into GSoC 2014 will be relatively slim.

Going through the list, people listed as technical contacts for projects
with GSoC == yes (or unknown) are in the To line. Please reiterate your
interest in mentoring the project(s) and update or remove the entry as
necessary.

I skipped things added recently and I skipped "Xen Cloud Platform (XCP)
and XAPI projects", someone else can pick that up.

And to reiterate what Lars said:

> Add new work items : we ought to have a few sexy topics on say 
> Real-time, mobile and some of the other segments (assuming we can get
> HW)
[...]
> b) Anyone who has some kernel/linux/bsd/distro/qemu work-items, should 
> get these listed on the respective other programs. And we should link 
> to these from our project page.

The list:

Pasi:
      * Implement Xen PVSCSI support in xl/libxl toolstack
      * Implement Xen PVUSB support in xl/libxl toolstack

        Both have unanswered questions posed by Lars in January 2013.

Konrad:
      * Block backend/frontend improvements

        I suspect a bunch of these are done? (Also CC Roger, who may
        have done them...)

        Was in the list twice, they looked identical so I nuked one.

      * Utilize Intel QuickPath on network and block path.

        No comments etc, but sounds advanced for a GSoC student, plus
        its unclear when such hardware became available, are they likely
        to have it? It sounds like it might also be quite high end.

      * perf working with Xen

        Done/in progress by Boris I think

      * PAT writecombine fixup

        Did I see a fix for this go past? GSoC == unknown?

      * Parallel xenwatch

        Bit sparse on details, GSOC == unknown

      * Microcode uploader implementation

        Done I think?

      * Integrating NUMA and Tmem

        Lists Dan as co-maintainer -- Konrad do you want to propose this
        to the new tmem guy (I've forgotten his name)

      * Performance tools overhaul

        Bit vague. And has some of this been done?

      * "Upstream bugs"

        There were 4 of these, dating back to 2012, I don't think this
        list is a good place to track bugs and it seems like at least
        some of them are now obsolete. So I've nuked the lot. If they
        are still relevant I think it would be best to get them into the
        bug tracker.

Ben:
      * dom0 kgdb support

        Is this for GSoC?

George:

      * Introducing PowerClamp-like driver for Xen

        I don't think this has been done?

Dario:
      * NUMA effects on inter-VM communication and on multi-VM workloads

        I think this was under way as part of the GNOME Outreach
        program. In that case perhaps it needs updating to reflect what
        has been done and what still needs to be done?

      * Integrating NUMA and Tmem

        Lists Dan as co-maintainer -- covered under Konrad's name above.

      * Is Xen ready for the Real-Time/Embedded World?

        Sounds a bit blue sky? Now that there is active interest in this
        on ARM perhaps a few concrete projects could be proposed to
        replace it?

Andy:

      * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
        allocations

        Sounds too hard for a GSoC to me. Would need fleshing out in any
        case.

      * CPU/RAM/PCI diagram tool

        Does this not already exist somewhere?

Paul:

      * HVM per-event-channel interrupts

        Might be easier now that Windows PV drivers are opened up?

Roger:
      * Refactor Linux hotplug scripts

        You did some of this I think?

Ian C:

      * XL to XCP VM motion

        Perhaps this could be broadened into VM transport between XL and
        other things too -- e.g. libvirt?

Stefano:

      * VM Snapshots

        Still a good project I think

George:

      * Allowing guests to boot with a passed-through GPU as the primary
        display

        This seems like a bit of a rathole for a GSoC student to me...

      * Advanced Scheduling Parameters

        Still to do?

Santosh:

      * KDD (Windows Debugger Stub) enhancements

Dave

      * Create a tiny VM for easy load testing

        Someone was looking at this I think?

Ian J:

      * Testing PV and HVM installs of Debian using debian-installer
      * Testing NetBSD

        BSD is done I think, and I'm looking at Debian Installer stuff
        myself. I've removed these.


Phew!
Ian.



> Lars
> 
> On 20/01/2014 09:18, Lars Kurth wrote:
> > Hi all,
> >
> > the GSoC application deadline is coming up : Feb 2014. If we want to 
> > have any chance of getting accepted this year, we ought to get our 
> > project list into good shape. The project list and how the project and 
> > menters present themselves has a bigger impact on whether we get 
> > accepted than the actual application.
> >
> > Also, I would like to add a mentor section this year: a short bio, 
> > what the mentor cares about and a picture. This will help make the 
> > project list more real.
> >
> > We have *4 weeks* to do this. The bar for GSoC has been getting 
> > increasingly high. I know, we are tied down with Xen 4.4, but this is 
> > something you need to do if you want the Xen Project to participate.
> >
> > a) Please, update 
> > http://wiki.xenproject.org/wiki/Xen_Development_Projects urgently 
> > (these need to be in good shape *before* the application). What I need 
> > you to do is:
> > a.1) Remove items that are done
> > a.2) Add new work items : we ought to have a few sexy topics on say 
> > Real-time, mobile and some of the other segments (assuming we can get HW)
> > a.3) All project proposals need to be peer reviewed *and* clear ... 
> > The peer review process for projects we put in place last year worked 
> > well, by which we had past mentors sign of project proposals that were 
> > in good enough state.
> >
> > b) Anyone who has some kernel/linux/bsd/distro/qemu work-items, should 
> > get these listed on the respective other programs. And we should link 
> > to these from our project page.
> >
> > Best Regards
> > Lars
> > P.S.: I will also see whether we can participate as Xen Project under 
> > the LF GSoC program, but last year there was push-back and I don't 
> > expect this to change
> >
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:10:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3Bf-0002Zj-8z; Wed, 05 Feb 2014 14:10:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3Bd-0002ZU-AI
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 14:10:45 +0000
Received: from [85.158.139.211:9490] by server-14.bemta-5.messagelabs.com id
	A6/18-27598-46642F25; Wed, 05 Feb 2014 14:10:44 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391609442!1838615!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14831 invoked from network); 5 Feb 2014 14:10:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:10:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100099528"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:10:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:10:28 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3BM-0006Vk-SV;
	Wed, 05 Feb 2014 14:10:28 +0000
Message-ID: <52F24642.5000300@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:10:10 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/2014 04:14 PM, Ian Jackson wrote:
> This is the latest version of my libxl event fixes apropos of Jim's
> libvirt testing.

Did you have any opinions on the suitability of this for 4.4?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:13:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3EC-0002ji-Th; Wed, 05 Feb 2014 14:13:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WB3EC-0002ja-7o
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:13:24 +0000
Received: from [193.109.254.147:61247] by server-9.bemta-14.messagelabs.com id
	8E/B0-24895-30742F25; Wed, 05 Feb 2014 14:13:23 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391609601!2207807!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18871 invoked from network); 5 Feb 2014 14:13:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:13:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100100663"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:13:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:13:20 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WB3E8-0006Y9-NJ;
	Wed, 05 Feb 2014 14:13:20 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:13:10 +0000
Message-ID: <1391609590-4449-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Julien Grall <julien.grall@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv1] xen/events: bind all new interdomain events
	to VCPU0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Commit fc087e10734a4d3e40693fc099461ec1270b3fff (xen/events: remove
unnecessary init_evtchn_cpu_bindings()) causes a regression.

The kernel-side VCPU binding was not being correctly set for newly
allocated or bound interdomain events.  In ARM guests where 2-level
events were used, this would result in no interdomain events being
handled because the kernel-side VCPU masks would all be clear.

x86 guests would work because the irq affinity was set during irq
setup and this would set the correct kernel-side VCPU binding.

Fix this by properly initializing the kernel-side VCPU binding in
bind_evtchn_to_irq().

Reported-and-tested-by: Julien Grall <julien.grall@linaro.org>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events/events_base.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..f4a9e33 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -862,6 +862,8 @@ int bind_evtchn_to_irq(unsigned int evtchn)
 			irq = ret;
 			goto out;
 		}
+		/* New interdomain events are bound to VCPU 0. */
+		bind_evtchn_to_cpu(evtchn, 0);
 	} else {
 		struct irq_info *info = info_for_irq(irq);
 		WARN_ON(info == NULL || info->type != IRQT_EVTCHN);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3HR-0002uA-M2; Wed, 05 Feb 2014 14:16:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WB3HP-0002tx-Rm
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:16:44 +0000
Received: from [85.158.137.68:6839] by server-13.bemta-3.messagelabs.com id
	59/57-26923-BC742F25; Wed, 05 Feb 2014 14:16:43 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391609802!13596239!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20474 invoked from network); 5 Feb 2014 14:16:42 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:16:42 -0000
Received: by mail-ee0-f43.google.com with SMTP id c41so257850eek.30
	for <xen-devel@lists.xenproject.org>;
	Wed, 05 Feb 2014 06:16:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=1/dedXnaGuXPd8DPfQltG4MMxsI70bEVHavzTXnY5Gw=;
	b=j2A26zf73qwlzShu4i4/BFH8yWbjp/eqvGmUL/mupZ8QN3Kc3YDKlhCToD1LOctsWM
	lyZk3Mna/YKYie9XXiZ1B0Bxy1scygXQKGqVS2wRKnGZNvbQrG7RnaRKdsyNjIbHCqIo
	Slw6Zae+TFg5mjjeO1GP4aeD7rXqhrR5WI/TXw9xKUGeBhbmGEYd94r1IWXD2whYLnUD
	eHp5zh2xDBaNb+KRSOeUG3pOfaE8+fN2evjv9gdEryPNgBSpenOCLteO09Q8N4diN4k9
	fG1xP21ApWMWB5cJTssqsYoGwLpo6HToi61gxzPj6enh7sz36gh+8hR13CUrqJEVhRoM
	8dTQ==
X-Gm-Message-State: ALoCoQnfSglVzCV35UY4EF2xfYKnVenwbG2k2yAMByC8KD8H3DimeBwHLn3Ndjz8fMhDjYf7Munf
X-Received: by 10.15.21.2 with SMTP id c2mr2177716eeu.77.1391609801903;
	Wed, 05 Feb 2014 06:16:41 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm102062950eet.6.2014.02.05.06.16.40
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 05 Feb 2014 06:16:41 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Wed,  5 Feb 2014 14:16:34 +0000
Message-Id: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>, stefano.stabellini@citrix.com
Subject: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function domain_page_map_to_mfn can be used to translate a virtual
address mapped by either map_domain_page or map_domain_page_global.
The latter uses vmap to map the mfn, so for such addresses
domain_page_map_to_mfn will always fail because the address is not in
the DOMHEAP range.

Check whether the address is in the vmap range and use __pa to translate it.

This patch fixes guest shutdown when the event fifo is used.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: George Dunlap <george.dunlap@citrix.com>

---
    This is a bug fix for Xen 4.4. Without this patch, it's impossible to
use Linux 3.14 (and higher) as guest with the event fifo driver.
---
 xen/arch/arm/mm.c |   10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 127cce0..bdca68a 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -325,11 +325,15 @@ void unmap_domain_page(const void *va)
     local_irq_restore(flags);
 }
 
-unsigned long domain_page_map_to_mfn(const void *va)
+unsigned long domain_page_map_to_mfn(const void *ptr)
 {
+    unsigned long va = (unsigned long)ptr;
     lpae_t *map = this_cpu(xen_dommap);
-    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
-    unsigned long offset = ((unsigned long)va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
+    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
+    unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
+
+    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
+        return virt_to_mfn(va);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
     ASSERT(map[slot].pt.avail != 0);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:18:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3JO-00031A-8V; Wed, 05 Feb 2014 14:18:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ben.Guthro@citrix.com>)
	id 1WB3JM-00030v-Ea; Wed, 05 Feb 2014 14:18:44 +0000
Received: from [193.109.254.147:63230] by server-10.bemta-14.messagelabs.com
	id 22/D1-10711-34842F25; Wed, 05 Feb 2014 14:18:43 +0000
X-Env-Sender: Ben.Guthro@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391609921!2214250!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16595 invoked from network); 5 Feb 2014 14:18:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:18:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100103481"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:18:41 +0000
Received: from FTLPEX01CL01.citrite.net ([169.254.4.32]) by
	FTLPEX01CL03.citrite.net ([169.254.1.150]) with mapi id 14.02.0342.004;
	Wed, 5 Feb 2014 09:18:40 -0500
From: Ben Guthro <Ben.Guthro@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, "lars.kurth@xen.org"
	<lars.kurth@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant
	<Paul.Durrant@citrix.com>, Santosh Jodh <Santosh.Jodh@citrix.com>, "Ian
	Jackson" <Ian.Jackson@citrix.com>
Thread-Topic: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
Thread-Index: AQHPInvXBCUk6V9UM0G88s1y0zm8RZqmsyeN
Date: Wed, 5 Feb 2014 14:18:40 +0000
Message-ID: <CE24C0DFE1A2F443BA25C1A565397C1A2D700B@FTLPEX01CL01.citrite.net>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>,
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.9.154.239]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


________________________________________
From: Ian Campbell
Sent: Wednesday, February 05, 2014 9:09 AM
To: lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli; Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Santosh Jodh; Ian Jackson
Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014


> Ben:
>      * dom0 kgdb support
>
>        Is this for GSoC?

This has been on there a while, and could be a GSoC project - there has been some development in the Xen IPI support since this was written, so it may work better than advertised here.

That said, Friday is my last day working for Citrix (and consequently, on Xen) - and I'll be moving on to a start-up company again, so someone else would need to sponsor this project.

Naturally, I'll be available via my non-Citrix email address (ben@guthro.net).


Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:28:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3Sj-0003Rp-R9; Wed, 05 Feb 2014 14:28:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3Si-0003RY-Dn
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:28:24 +0000
Received: from [85.158.143.35:47337] by server-2.bemta-4.messagelabs.com id
	73/92-10891-78A42F25; Wed, 05 Feb 2014 14:28:23 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391610500!3363130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8867 invoked from network); 5 Feb 2014 14:28:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:28:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98216805"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:28:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:28:19 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3Sd-0006l4-LL;
	Wed, 05 Feb 2014 14:28:19 +0000
Message-ID: <52F24A71.7080102@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:28:01 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Olaf Hering <olaf@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
	<52F21C29.8090607@eu.citrix.com> <20140205113623.GA24025@aepfle.de>
In-Reply-To: <20140205113623.GA24025@aepfle.de>
X-DLP: MIA2
Cc: anthony.perard@citrix.com, xen-devel@lists.xen.org,
	Ian.Jackson@eu.citrix.com, Ian Campbell <Ian.Campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 11:36 AM, Olaf Hering wrote:
> On Wed, Feb 05, George Dunlap wrote:
>
>> Well it looks like in order to keep ABI compatibility (which I don't think
>> we ever promised), you're introducing this weird hack with overloading a
>> putative boolean value with a magic number?
> Yes, that's the point. If libxl_device_disk changes then IMO the SONAME
> has to change as well. And is 4.4-rc4 the right time to do that?
> Likely not. I'm fine with carrying the 4.4 patch to achieve the result,
> and putting the other version into 4.5.

Right, and so this exposes another risk of accepting a patch so late: 
that of making poor interface decisions in a rush which either libxl or 
programs written against it have to deal with for a long time.  (i.e., 
either we have to keep supporting the old interface, or the application 
developer has to special case the 4.4 interface if they want to be able 
to compile against it).

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:29:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3U0-0003k2-I5; Wed, 05 Feb 2014 14:29:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WB3Tz-0003jn-3p; Wed, 05 Feb 2014 14:29:43 +0000
Received: from [85.158.143.35:59956] by server-2.bemta-4.messagelabs.com id
	6D/A4-10891-6DA42F25; Wed, 05 Feb 2014 14:29:42 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391610580!3359190!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8287 invoked from network); 5 Feb 2014 14:29:41 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 14:29:41 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s15ETZ0w031233
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Feb 2014 14:29:36 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15ETYNJ029718
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 5 Feb 2014 14:29:34 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15ETYpI029712; Wed, 5 Feb 2014 14:29:34 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Feb 2014 06:29:33 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 66A531C0972; Wed,  5 Feb 2014 09:29:32 -0500 (EST)
Date: Wed, 5 Feb 2014 09:29:32 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ben Guthro <Ben.Guthro@citrix.com>
Message-ID: <20140205142932.GA3946@phenom.dumpdata.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<CE24C0DFE1A2F443BA25C1A565397C1A2D700B@FTLPEX01CL01.citrite.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CE24C0DFE1A2F443BA25C1A565397C1A2D700B@FTLPEX01CL01.citrite.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Paul Durrant <Paul.Durrant@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, 2014 at 02:18:40PM +0000, Ben Guthro wrote:
> 
> ________________________________________
> From: Ian Campbell
> Sent: Wednesday, February 05, 2014 9:09 AM
> To: lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli; Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Santosh Jodh; Ian Jackson
> Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-devel@lists.xenproject.org
> Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014
> 
> 
> > Ben:
> >      * dom0 kgdb support
> >
> >        Is this for GSoC?
> 
> This has been on there a while, and could be a GSoC project - there has been some development in the Xen IPI support since this was written, so it may work better than advertised here.

And it actually works!
> 
> That said, Fri is my last day working for Citrix (and consequently, on Xen) - and I'll be moving on to a Start-up company again, so someone else would need to sponsor this project.
> 
> Naturally, I'll be available via my Non-Citrix email address (ben@guthro.net)
> 
> 
> Ben

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:31:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:31:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3VI-0003q7-6O; Wed, 05 Feb 2014 14:31:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WB3VH-0003pq-HI; Wed, 05 Feb 2014 14:31:03 +0000
Received: from [85.158.137.68:12055] by server-1.bemta-3.messagelabs.com id
	EF/37-17293-62B42F25; Wed, 05 Feb 2014 14:31:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391610660!13601185!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10302 invoked from network); 5 Feb 2014 14:31:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:31:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100109328"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:30:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	09:30:58 -0500
Message-ID: <1391610656.6497.179.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 5 Feb 2014 14:30:56 +0000
In-Reply-To: <20140205142932.GA3946@phenom.dumpdata.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<CE24C0DFE1A2F443BA25C1A565397C1A2D700B@FTLPEX01CL01.citrite.net>
	<20140205142932.GA3946@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Dario
	Faggioli <dario.faggioli@citrix.com>, Ben Guthro <Ben.Guthro@citrix.com>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Paul Durrant <Paul.Durrant@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 09:29 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 05, 2014 at 02:18:40PM +0000, Ben Guthro wrote:
> > 
> > ________________________________________
> > From: Ian Campbell
> > Sent: Wednesday, February 05, 2014 9:09 AM
> > To: lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli; Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Santosh Jodh; Ian Jackson
> > Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-devel@lists.xenproject.org
> > Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014
> > 
> > 
> > > Ben:
> > >      * dom0 kgdb support
> > >
> > >        Is this for GSoC?
> > 
> > This has been on there a while, and could be a GSoC project - there has been some development in the Xen IPI support since this was written, so it may work better than advertised here.
> 
> And it actually works!

OK, I'll nuke it from the list then, thanks!

> > 
> > That said, Fri is my last day working for Citrix (and consequently,
> on Xen) - and I'll be moving on to a Start-up company again,

Good luck!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 09:29 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 05, 2014 at 02:18:40PM +0000, Ben Guthro wrote:
> > 
> > ________________________________________
> > From: Ian Campbell
> > Sent: Wednesday, February 05, 2014 9:09 AM
> > To: lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli; Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Santosh Jodh; Ian Jackson
> > Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-devel@lists.xenproject.org
> > Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014
> > 
> > 
> > > Ben:
> > >      * dom0 kgdb support
> > >
> > >        Is this for GSoC?
> > 
> > This has been on there a while, and could be a GSoC project - there has been some development in the Xen IPI support since this was written, so it may work better than advertised here.
> 
> And it actually works!

OK, I'll nuke it from the list then, thanks!

> > 
> > That said, Fri is my last day working for Citrix (and consequently,
> on Xen) - and I'll be moving on to a Start-up company again,

Good luck!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:35:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3ZM-00043u-6B; Wed, 05 Feb 2014 14:35:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WB3ZK-00043g-Hi; Wed, 05 Feb 2014 14:35:14 +0000
Received: from [85.158.137.68:15568] by server-16.bemta-3.messagelabs.com id
	89/D3-29917-12C42F25; Wed, 05 Feb 2014 14:35:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391610911!12416611!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10412 invoked from network); 5 Feb 2014 14:35:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 14:35:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s15EY6n4004448
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Feb 2014 14:34:06 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s15EY5Mu020127
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 5 Feb 2014 14:34:06 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15EY4NJ006146; Wed, 5 Feb 2014 14:34:05 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Feb 2014 06:34:04 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0BD121C0972; Wed,  5 Feb 2014 09:34:03 -0500 (EST)
Date: Wed, 5 Feb 2014 09:34:02 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140205143402.GB3946@phenom.dumpdata.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	mirageos-devel@lists.xenproject.org,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	lars.kurth@xen.org, Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Paul Durrant <paul.durrant@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, 2014 at 02:09:08PM +0000, Ian Campbell wrote:
> On Tue, 2014-01-28 at 13:54 +0000, Lars Kurth wrote:
> > Hi all,
> > I have not gotten any reply to this thread. I saw Wei Liu and Andrés
> > Lagar-Cavilla make changes to the project list. Please go through the
> > items below and make changes as suggested. Otherwise, our chances to get
> > into GSoC 2014 will be relatively slim.
> 
> Going through the list, people listed as technical contacts for projects
> with GSoC == yes (or unknown) are in the To line. Please reiterate your
> interest in mentoring the project(s) and update or remove the entry as
> necessary.
> 
> I skipped things added recently and I skipped "Xen Cloud Platform (XCP)
> and XAPI projects", someone else can pick that up.
> 
> And to reiterate what Lars said:
> 
> > Add new work items: we ought to have a few sexy topics on, say,
> > Real-time, mobile and some of the other segments (assuming we can get
> > HW)
> [...]
> > b) Anyone who has some kernel/linux/bsd/distro/qemu work-items should
> > get these listed on the respective other programs. And we should link
> > to these from our project page.
> 
> The list:
> 
> Pasi:
>       * Implement Xen PVSCSI support in xl/libxl toolstack
>       * Implement Xen PVUSB support in xl/libxl toolstack
> 
>         Both have unanswered questions posed by Lars in January 2013.
> 
> Konrad:
>       * Block backend/frontend improvements
> 
>         I suspect a bunch of these are done? (Also CC Roger, who may
>         have done them...)
> 
>         Was in the list twice, they looked identical so I nuked one.
> 

<nods>

>       * Utilize Intel QuickPath on network and block path.
> 
>         No comments etc, but sounds advanced for a GSoC student, plus
>         it's unclear when such hardware became available; are they likely
>         to have it? It sounds like it might also be quite high end.
> 
>       * perf working with Xen
> 
>         Done/in progress by Boris I think

<nods>

>       * PAT writecombine fixup
> 
>         Did I see a fix for this go past? GSoC == unknown?

No. Still looking for a victi^H^H^Hvolunteer.

>       * Parallel xenwatch
> 
>         Bit sparse on details, GSoC == unknown

That would still be nice. It came from talking to Matt from Amazon. He
was saying that having only one xenwatch thread slows things down for
a medium-to-big server with lots of guests. Making multiple
xenwatch threads process XenBus requests in parallel would be
a good improvement.

> 
>       * Microcode uploader implementation
> 
>         Done I think?

<nods>

>       * Integrating NUMA and Tmem
> 
>         Lists Dan as co-maintainer -- Konrad, do you want to propose this
>         to the new tmem guy? (I've forgotten his name)

Bob Liu. Yes, let's rope him in.

>       * Performance tools overhaul
> 
>         Bit vague. And has some of this been done?

I can't remember what that is.

>       * "Upstream bugs"
> 
>         There were 4 of these, dating back to 2012. I don't think this
>         list is a good place to track bugs, and it seems like at least
>         some of them are now obsolete. So I've nuked the lot. If they
>         are still relevant I think it would be best to get them into the
>         bug tracker.

OK, let's nuke them.

We could also add these:

VCPUOP_register_vcpu_time_memory_area support in the upstream Linux kernel.

MSI multi-vector support in the upstream Linux kernel.

Though those are mostly just putting the pieces together and reposting
them, so no "new" development.

> 
> Ben:
>       * dom0 kgdb support
> 
>         Is this for GSoC?
> 
> George:
> 
>       * Introducing PowerClamp-like driver for Xen
> 
>         I don't think this has been done?
> 
> Dario:
>       * NUMA effects on inter-VM communication and on multi-VM workloads
> 
>         I think this was under way as part of the GNOME Outreach
>         program. In that case perhaps it needs updating to reflect what
>         has been done and what still needs to be done?
> 
>       * Integrating NUMA and Tmem
> 
>         Lists Dan as co-maintainer -- covered under Konrad's name above.
> 
>       * Is Xen ready for the Real-Time/Embedded World?
> 
>         Sounds a bit blue sky? Now that there is active interest in this
>         on ARM, perhaps a few concrete projects could be proposed to
>         replace it?
> 
> Andy:
> 
>       * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
>         allocations
> 
>         Sounds too hard for a GSoC to me. Would need fleshing out in any
>         case.
> 
>       * CPU/RAM/PCI diagram tool
> 
>         Does this not already exist somewhere?
> 
> Paul:
> 
>       * HVM per-event-channel interrupts
> 
>         Might be easier now that Windows PV drivers are opened up?
> 
> Roger:
>       * Refactor Linux hotplug scripts
> 
>         You did some of this I think?
> 
> Ian C:
> 
>       * XL to XCP VM motion
> 
>         Perhaps this could be broadened into VM transport between XL and
>         other things too -- e.g. libvirt?
> 
> Stefano:
> 
>       * VM Snapshots
> 
>         Still a good project I think
> 
> George:
> 
>       * Allowing guests to boot with a passed-through GPU as the primary
>         display
> 
>         This seems like a bit of a rathole for a GSoC student to me...
> 
>       * Advanced Scheduling Parameters
> 
>         Still to do?
> 
> Santosh:
> 
>       * KDD (Windows Debugger Stub) enhancements
> 
> Dave:
> 
>       * Create a tiny VM for easy load testing
> 
>         Someone was looking at this I think?
> 
> Ian J:
> 
>       * Testing PV and HVM installs of Debian using debian-installer
>       * Testing NetBSD
> 
>         BSD is done I think, and I'm looking at Debian Installer stuff
>         myself. I've removed these.
> 
> 
> Phew!
> Ian.
> 
> 
> > Lars
> > 
> > On 20/01/2014 09:18, Lars Kurth wrote:
> > > Hi all,
> > >
> > > the GSoC application deadline is coming up: Feb 2014. If we want to
> > > have any chance of getting accepted this year, we ought to get our
> > > project list into good shape. The project list, and how the project
> > > and mentors present themselves, has a bigger impact on whether we get
> > > accepted than the actual application.
> > >
> > > Also, I would like to add a mentor section this year: a short bio,
> > > what the mentor cares about and a picture. This will help make the
> > > project list more real.
> > >
> > > We have *4 weeks* to do this. The bar for GSoC has been getting
> > > increasingly high. I know we are tied down with Xen 4.4, but this is
> > > something you need to do if you want the Xen Project to participate.
> > >
> > > a) Please update
> > > http://wiki.xenproject.org/wiki/Xen_Development_Projects urgently
> > > (these need to be in good shape *before* the application). What I need
> > > you to do is:
> > > a.1) Remove items that are done
> > > a.2) Add new work items: we ought to have a few sexy topics on, say,
> > > Real-time, mobile and some of the other segments (assuming we can get HW)
> > > a.3) All project proposals need to be peer reviewed *and* clear ...
> > > The peer review process for projects we put in place last year worked
> > > well, by which we had past mentors sign off project proposals that
> > > were in a good enough state.
> > >
> > > b) Anyone who has some kernel/linux/bsd/distro/qemu work-items should
> > > get these listed on the respective other programs. And we should link
> > > to these from our project page.
> > >
> > > Best Regards
> > > Lars
> > > P.S.: I will also see whether we can participate as Xen Project under
> > > the LF GSoC program, but last year there was push-back and I don't
> > > expect this to change
> > >
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:36:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3aI-00048h-Si; Wed, 05 Feb 2014 14:36:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3aH-00048Y-Kp
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:36:13 +0000
Received: from [193.109.254.147:63203] by server-14.bemta-14.messagelabs.com
	id B0/CB-29228-C5C42F25; Wed, 05 Feb 2014 14:36:12 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391610970!2203418!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23711 invoked from network); 5 Feb 2014 14:36:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:36:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100111436"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:36:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:36:09 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3aD-0006sO-BJ;
	Wed, 05 Feb 2014 14:36:09 +0000
Message-ID: <52F24C47.5070100@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:35:51 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Jan Beulich
	<JBeulich@suse.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
In-Reply-To: <20140204164258.GB7443@phenom.dumpdata.com>
X-DLP: MIA2
Cc: yang.z.zhang@Intel.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>>>> Wasn't it that Mukesh's patch simply was yours with the two
>>>> get_ioreq()s folded by using a local variable?
>>> Yes. As so
>> Thanks. Except that ...
>>
>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>>>       struct vcpu *v = current;
>>>       struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>       struct cpu_user_regs *regs = guest_cpu_user_regs();
>>> -
>>> +    ioreq_t *p = get_ioreq(v);
>> ... you don't want to drop the blank line, and naming the new
>> variable "ioreq" would seem preferable.
>>
>>>       /*
>>>        * a pending IO emualtion may still no finished. In this case,
>>>        * no virtual vmswith is allowed. Or else, the following IO
>>>        * emulation will handled in a wrong VCPU context.
>>>        */
>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>>> +    if ( p && p->state != STATE_IOREQ_NONE )
>> And, as said before, I'd think "!p ||" instead of "p &&" would be
>> the right thing here. Yang, Jun?
> I have two patches - the simpler one that is pretty straightforward
> and the one you suggested. Either one fixes PVH guests. I also did
> bootup tests with HVM guests to make sure they worked.
>
> Attached and inline.

But they do different things -- one does "ioreq && ioreq->state..." and 
the other does "!ioreq || ioreq->state...".  The first one is incorrect, 
AFAICT.

  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:39:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3d9-0004Wg-Hs; Wed, 05 Feb 2014 14:39:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3d8-0004WM-RG
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:39:11 +0000
Received: from [85.158.139.211:24786] by server-12.bemta-5.messagelabs.com id
	0B/E5-15415-E0D42F25; Wed, 05 Feb 2014 14:39:10 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391611147!1864024!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31461 invoked from network); 5 Feb 2014 14:39:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:39:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98221680"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:39:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:39:06 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3d5-0006uY-1k;
	Wed, 05 Feb 2014 14:39:07 +0000
Message-ID: <52F24CF8.5070500@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:38:48 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, Oleksandr Tyshchenko
	<oleksandr.tyshchenko@globallogic.com>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52EFD711.5060201@linaro.org> <52F10CD4.1050000@linaro.org>
In-Reply-To: <52F10CD4.1050000@linaro.org>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
	gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 03:52 PM, Julien Grall wrote:
> Forget to add George.
>
> On 02/03/2014 05:51 PM, Julien Grall wrote:
>> (+ Xen ARM maintainers)
>>
>> Hello Oleksandr,
>>
>> Thanks for the patch. For next time, can you add the Xen ARM maintainers
>> in cc? With the amount of mail in the mailing list, your mail could be
>> lost easily. :)
>>
>> On 02/03/2014 05:33 PM, Oleksandr Tyshchenko wrote:
>>> The possible deadlock scenario is explained below:
>>>
>>> non interrupt context:    interrupt context       interrupt context
>>>                            (CPU0):                 (CPU1):
>>> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>>>    |                         |                       |
>>>    vgic_disable_irqs()       ...                     ...
>>>      |                         |                       |
>>>      gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>>>      |  ...                      |                       |
>>>      |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>>>      |  ...                        ...                     ...
>>>      |  ... <----------------.---- spin_lock_irqsave(...)  ...
>>>      |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>>>      |  ...                  . .       Oops! The lock has already taken.
>>>      |  spin_unlock(...)     . .
>>>      |  ...                  . .
>>>      gic_irq_disable()       . .
>>>         ...                  . .
>>>         spin_lock(...)       . .
>>>         ...                  . .
>>>         ... <----------------. .
>>>         ... <------------------.
>>>         ...
>>>         spin_unlock(...)
>>>
>>> Since gic_remove_from_queues() and gic_irq_disable() are called from
>>> non-interrupt context and acquire the same lock as gic_set_guest_irq(),
>>> which is called from interrupt context, we must disable interrupts in
>>> these functions to avoid possible deadlocks.
>>>
>>> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9
>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
>> Acked-by: Julien Grall <julien.grall@linaro.org>
>>
>> I think this patch should have a release exception for Xen 4.4. It
>> fixes a race condition in interrupt management.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>>
>>> ---
>>>   xen/arch/arm/gic.c |   10 ++++++----
>>>   1 file changed, 6 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index c44a4d0..7d83b0c 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -147,14 +147,15 @@ static void gic_irq_enable(struct irq_desc *desc)
>>>   static void gic_irq_disable(struct irq_desc *desc)
>>>   {
>>>       int irq = desc->irq;
>>> +    unsigned long flags;
>>>   
>>> -    spin_lock(&desc->lock);
>>> +    spin_lock_irqsave(&desc->lock, flags);
>>>       spin_lock(&gic.lock);
>>>       /* Disable routing */
>>>       GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>>>       desc->status |= IRQ_DISABLED;
>>>       spin_unlock(&gic.lock);
>>> -    spin_unlock(&desc->lock);
>>> +    spin_unlock_irqrestore(&desc->lock, flags);
>>>   }
>>>   
>>>   static unsigned int gic_irq_startup(struct irq_desc *desc)
>>> @@ -658,11 +659,12 @@ static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
>>>   void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>>>   {
>>>       struct pending_irq *p = irq_to_pending(v, virtual_irq);
>>> +    unsigned long flags;
>>>   
>>> -    spin_lock(&gic.lock);
>>> +    spin_lock_irqsave(&gic.lock, flags);
>>>       if ( !list_empty(&p->lr_queue) )
>>>           list_del_init(&p->lr_queue);
>>> -    spin_unlock(&gic.lock);
>>> +    spin_unlock_irqrestore(&gic.lock, flags);
>>>   }
>>>   
>>>   void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>>>
>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:40:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:40:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3ee-0004yb-RC; Wed, 05 Feb 2014 14:40:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>)
	id 1WB3ed-0004y7-5P; Wed, 05 Feb 2014 14:40:43 +0000
Received: from [85.158.143.35:59771] by server-3.bemta-4.messagelabs.com id
	52/5D-11539-A6D42F25; Wed, 05 Feb 2014 14:40:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391611240!3367824!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25549 invoked from network); 5 Feb 2014 14:40:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:40:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100113523"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:40:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:40:13 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB3e9-0006vT-OJ;
	Wed, 05 Feb 2014 14:40:13 +0000
Message-ID: <52F24D4D.7040004@citrix.com>
Date: Wed, 5 Feb 2014 14:40:13 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	mirageos-devel@lists.xenproject.org,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, lars.kurth@xen.org,
	Paul Durrant <paul.durrant@citrix.com>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 14:09, Ian Campbell wrote:
>
> Andy:
>
>       * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
>         allocations
>
>         Sounds too hard for a GSoC to me. Would need fleshing out in any
>         case.

Malcolm made a prototype for this on the first day of the Hackathon.  It
can disappear.

>         
>       * CPU/RAM/PCI diagram tool
>
>         Does this not already exist somewhere?

Not as far as I (or my ability to google) am aware.

My furrowing into hwloc interacting with Xen and libxc is a start to all
of this, but it is still very much in my copious free time and there is
more than enough other work which could be done if someone were interested.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:42:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:42:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3g7-0005CE-P7; Wed, 05 Feb 2014 14:42:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WB3g5-0005Bd-Ow; Wed, 05 Feb 2014 14:42:13 +0000
Received: from [85.158.139.211:14820] by server-3.bemta-5.messagelabs.com id
	1E/33-13671-4CD42F25; Wed, 05 Feb 2014 14:42:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391611330!1862568!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8202 invoked from network); 5 Feb 2014 14:42:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:42:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100114587"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:42:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	09:42:09 -0500
Message-ID: <1391611327.6497.181.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 5 Feb 2014 14:42:07 +0000
In-Reply-To: <20140205143402.GB3946@phenom.dumpdata.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<20140205143402.GB3946@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	mirageos-devel@lists.xenproject.org,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Dario
	Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	lars.kurth@xen.org, Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Paul Durrant <paul.durrant@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 09:34 -0500, Konrad Rzeszutek Wilk wrote:
> >         
> >       * Parallel xenwatch
> > 
> >         Bit sparse on details, GSOC == unknown
> 
> That would still be nice. It came from talking to Matt from Amazon. He
> was saying that having only one xenwatch thread slows things down for
> a medium to big server with lots of guests. Making multiple
> xenwatch threads to process XenBus requests in parallel would be
> a good improvement.

Would need to be careful about exposing cases of insufficient locking in
the existing code.

> 
> > 
> >       * Microcode uploader implementation
> >         
> >         Done I think?
> 
> <nods>

Nuked
     
> >       * Performance tools overhaul
> > 
> >         Bit vague. And has some of this been done?
> 
> I can't remember what that is.

OK, I nuked it -- a new project can always be added if you remember what
it was.

> > 
> >       * "Upstream bugs"
> > 
> >         There were 4 of these, dating back to 2012, I don't think this
> >         list is a good place to track bugs and it seems like at least
> >         some of them are now obsolete. So I've nuked the lot. If they
> >         are still relevant I think it would be best to get them into the
> >         bug tracker.
> 
> OK, let's nuke them.

Done.

> 
> We could also add the:
> 
> VCPUOP_register_vcpu_time_memory_area support in Linux upstream kernel.
> 
> MSI multi-vector for Linux upstream kernel.
> 
> Though those are mostly just putting pieces together and reposting them,
> so no "new" development.

Not really suitable for this list then IMHO.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:43:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3hg-0005OU-Du; Wed, 05 Feb 2014 14:43:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB3he-0005OF-HQ
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 14:43:50 +0000
Received: from [85.158.137.68:29779] by server-1.bemta-3.messagelabs.com id
	8D/8C-17293-52E42F25; Wed, 05 Feb 2014 14:43:49 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391611427!13490236!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16886 invoked from network); 5 Feb 2014 14:43:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:43:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98223567"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:43:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 09:43:46 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB3hV-0005fk-6Q;
	Wed, 05 Feb 2014 14:43:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB3hU-00081c-U5;
	Wed, 05 Feb 2014 14:43:40 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 5 Feb 2014 14:43:38 +0000
Message-ID: <1391611418-30816-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools: Bump library SONAMEs for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There have been ABI/API changes in libxc.  Bump its MAJOR (which
affects libxenguest et al too.)

There have been ABI changes in libxl.  Bump its MAJOR.
(The API changes have been dealt with as we go along - there is
already a LIBXL_API_VERSION 0x040400.)

None of the other libraries have changed their interfaces.  I have
verified this by building the tools and searching the dist/install
tree for files matching *.so.*.  For each library that showed up, I
did this:
  git-diff RELEASE-4.3.0..staging -- `find tools/FOO/ -name \*.h`
where FOO is the corresponding source directory.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxc/Makefile |    2 +-
 tools/libxl/Makefile |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index f2d6e56..2cca2b2 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -1,7 +1,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR    = 4.3
+MAJOR    = 4.4
 MINOR    = 0
 
 CTRL_SRCS-y       :=
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..1410c44 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -5,7 +5,7 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR = 4.3
+MAJOR = 4.4
 MINOR = 0
 
 XLUMAJOR = 4.3
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:46:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3ju-0005cI-FA; Wed, 05 Feb 2014 14:46:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WB3jo-0005bV-SH; Wed, 05 Feb 2014 14:46:08 +0000
Received: from [85.158.143.35:52070] by server-1.bemta-4.messagelabs.com id
	77/5D-31661-CAE42F25; Wed, 05 Feb 2014 14:46:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391611562!3365194!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6655 invoked from network); 5 Feb 2014 14:46:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:46:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98224477"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:46:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	09:46:00 -0500
Message-ID: <1391611558.23098.2.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 5 Feb 2014 14:45:58 +0000
In-Reply-To: <52F24D4D.7040004@citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<52F24D4D.7040004@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	mirageos-devel@lists.xenproject.org, Dario
	Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, lars.kurth@xen.org, Paul
	Durrant <paul.durrant@citrix.com>, Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:40 +0000, Andrew Cooper wrote:
> On 05/02/14 14:09, Ian Campbell wrote:
> >
> > Andy:
> >
> >       * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
> >         allocations
> >
> >         Sounds too hard for a GSoC to me. Would need fleshing out in any
> >         case.
> 
> Malcolm made a prototype for this on the first day of the Hackathon.  It
> can disappear.

Removed.

> >       * CPU/RAM/PCI diagram tool
> >
> >         Does this not already exist somewhere?
> 
> Not as far as I (or my ability to google) am aware.
> 
> My furrowing into hwloc interacting with Xen and libxc is a start to all
> of this, but it is still very much in my copious free time and there is
> more than enough other work which could be done if someone were interested.

OK, left in place.

This could conceivably be done under another umbrella such as the Linux
one too, since it seems generic.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:48:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3m5-0005qg-7o; Wed, 05 Feb 2014 14:48:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3m3-0005qV-04
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:48:23 +0000
Received: from [85.158.137.68:24882] by server-11.bemta-3.messagelabs.com id
	A3/4E-04255-63F42F25; Wed, 05 Feb 2014 14:48:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391611701!9930406!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18417 invoked from network); 5 Feb 2014 14:48:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 14:48:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 14:48:20 +0000
Message-Id: <52F25D4202000078001196AB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 14:48:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 0/3] guest context management adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

1: x86: fix FS/GS base handling when using the fsgsbase feature
2: domctl: also pause domain for extended context updates
3: domctl: pause vCPU for context reads

Signed-off-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:49:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3mk-000693-NZ; Wed, 05 Feb 2014 14:49:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>)
	id 1WB3mj-00068j-Na; Wed, 05 Feb 2014 14:49:05 +0000
Received: from [85.158.143.35:18684] by server-2.bemta-4.messagelabs.com id
	99/98-10891-16F42F25; Wed, 05 Feb 2014 14:49:05 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391611742!3364999!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9818 invoked from network); 5 Feb 2014 14:49:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:49:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98225850"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:49:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:49:01 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB3mf-00073Y-KG;
	Wed, 05 Feb 2014 14:49:01 +0000
Message-ID: <52F24F5D.7020207@citrix.com>
Date: Wed, 5 Feb 2014 14:49:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<52F24D4D.7040004@citrix.com>
	<1391611558.23098.2.camel@kazak.uk.xensource.com>
In-Reply-To: <1391611558.23098.2.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	mirageos-devel@lists.xenproject.org,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, lars.kurth@xen.org,
	Paul Durrant <paul.durrant@citrix.com>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 14:45, Ian Campbell wrote:
> On Wed, 2014-02-05 at 14:40 +0000, Andrew Cooper wrote:
>> On 05/02/14 14:09, Ian Campbell wrote:
>>> Andy:
>>>
>>>       * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
>>>         allocations
>>>
>>>         Sounds too hard for a GSoC to me. Would need fleshing out in any
>>>         case.
>> Malcolm made a prototype for this on the first day of the Hackathon.  It
>> can disappear.
> Removed.
>
>>>       * CPU/RAM/PCI diagram tool
>>>
>>>         Does this not already exist somewhere?
>> Not as far as I (or my ability to google) am aware.
>>
>> My furrowing into hwloc interacting with Xen and libxc is a start to all
>> of this, but it is still very much in my copious free time and there is
>> more than enough other work which could be done if someone were interested.
> OK, left in place.
>
> This could conceivably be done under another umbrella such as the Linux
> one too, since it seems generic.
>
> Ian.
>
>

For native Linux, hwloc kinda already does this - certainly the CPU and
PCI bits.  Under Xen there are quite a few areas needing improvement,
which will require active development work.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:49:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3mu-0006C1-AL; Wed, 05 Feb 2014 14:49:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3ms-0006BX-M3
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:49:14 +0000
Received: from [85.158.139.211:56466] by server-9.bemta-5.messagelabs.com id
	65/E8-11237-96F42F25; Wed, 05 Feb 2014 14:49:13 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391611751!1867043!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17099 invoked from network); 5 Feb 2014 14:49:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:49:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100117252"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:49:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:49:10 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3mo-000748-MK;
	Wed, 05 Feb 2014 14:49:10 +0000
Message-ID: <52F24F54.7030209@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:48:52 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Pranavkumar Sawargaonkar
	<pranavkumar@linaro.org>
References: <1390822488-22183-1-git-send-email-pranavkumar@linaro.org>	
	<1390924071.7753.115.camel@kazak.uk.xensource.com>	
	<CAAHg+Hgvp-zgU5BN_EnbK31fPwTbty0QALPVZUjFYUYQVKBfxg@mail.gmail.com>	
	<1390931022.31814.32.camel@kazak.uk.xensource.com>	
	<CAAHg+Hg9sYHZKrkoMcnkQGes1WtFBo1WHO3oayrB7qaR_4X03g@mail.gmail.com>	
	<1390999123.31814.96.camel@kazak.uk.xensource.com>	
	<CAAHg+HgnijUNKaV6bGOk4xQq1FybWBtBy+9ou=pzq6cqRBOoGA@mail.gmail.com>
	<1391447061.10515.63.camel@kazak.uk.xensource.com>
In-Reply-To: <1391447061.10515.63.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: patches@apm.com, Patch Tracking <patches@linaro.org>,
	stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V6] xen: arm: platforms: Adding reset
 support for xgene arm64 platform.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/2014 05:04 PM, Ian Campbell wrote:
> On Thu, 2014-01-30 at 11:38 +0530, Pranavkumar Sawargaonkar wrote:
>> Hi Ian,
>>
>> On 29 January 2014 18:08, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>>> On Tue, 2014-01-28 at 23:27 +0530, Pranavkumar Sawargaonkar wrote:
>>>
>>>>> I also don't see any patch to linux/Documentation/devicetree/bindings,
>>>>> as was requested in that posting from 6 months ago. Where can I find
>>>>> that?
>>>>>
>>>>> It seems like the patch to arch/arm64/boot/dts/apm-storm.dtsi also
>>>>> hasn't landed?
>>>> Yeah it is dangling, and since the new patch has already been posted I
>>>> think we can wait for the final DT bindings.
>>> It seems from the thread that the final bindings are going to differ
>>> significantly from what is implemented in Xen and proposed in the above
>>> thread. (with a syscon driver that the reset driver references).
>>>
>>>>>> Now if you want this to be fixed , i can quickly submit a V7 in which
>>>>>> mask field will be just hard-coded to 1 hence xen code will always
>>>>>> work even if linux code does get changed.
>>>>> Looks like the Linux driver uses 0xffffffff if the mask isn't given --
>>>>> that seems like a good approach.
>>>>>
>>>>> I think we'll just have to accept that until the binding is specified
>>>>> and documented (in linux/Documentation/devicetree/bindings) then we may
>>>>> have to be prepared to change the Xen implementation to match the final
>>>>> spec without regard to backwards compat. If we aren't happy with that
>>>>> then I should revert the patch now and we will have to live without
>>>>> reboot support in the meantime.
>>>> Please do not revert the patch , I think we can go ahead with current patch.
>>>> Once linux side is concluded i will fix minor changes in xen code
>>>> based on the new DT bindings.
>>> It doesn't sound to me like these are going to be minor changes.
>> Yes, the bindings have changed in the new driver, but the question now is
>> what to do in the current state, where the new driver has not been submitted?
>>
>> My take is we have 3 opts :
>> 1. Keep current reboot driver in xen as it is and use it with old
>> bindings. (since that is the one merged in linux)
>> 2. I will send a new patch (will take 1hr max for me to do it) with
>> addresses hardcoded instead of reading it from dts.
>>      This will help for xen to have reboot driver for xgene.
>> 3. Remove this driver completely from xen as of now.
> None of the options are brilliant :-/
>
> I think on balance #2 is probably the way to go.
>
> #1 would set a precedent for using formally undefined bindings which I
> think we should avoid.
>
> #3 has obvious downsides, but given that we have already accepted the
> functionality it seems a shame to revert it entirely.

That sounds reasonable to me.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:51:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3pS-0006TU-72; Wed, 05 Feb 2014 14:51:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3pR-0006TJ-C5
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:51:53 +0000
Received: from [85.158.137.68:28932] by server-2.bemta-3.messagelabs.com id
	11/37-06531-80052F25; Wed, 05 Feb 2014 14:51:52 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391611910!13607692!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32290 invoked from network); 5 Feb 2014 14:51:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:51:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98227115"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:51:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:51:49 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3pN-000764-7Z;
	Wed, 05 Feb 2014 14:51:49 +0000
Message-ID: <52F24FF3.5050107@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:51:31 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <xen-devel@lists.xenproject.org>
References: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
X-DLP: MIA2
Cc: stefano.stabellini@citrix.com, George Dunlap <george.dunlap@citrix.com>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 02:16 PM, Julien Grall wrote:
> The function domain_page_map_to_mfn can be used to translate a virtual
> address mapped by both map_domain_page and map_domain_page_global.
> The latter uses vmap to map the mfn, therefore domain_page_map_to_mfn
> will always fail because the address is not in the DOMHEAP range.
>
> Check if the address is in vmap range and use __pa to translate it.
>
> This patch fixes guest shutdown when the event fifo is used.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

I assume this brings the arm paths into line with the x86 functionality?

  -George


>
> ---
>      This is a bug fix for Xen 4.4. Without this patch, it's impossible to
> use Linux 3.14 (and higher) as a guest with the event fifo driver.
> ---
>   xen/arch/arm/mm.c |   10 +++++++---
>   1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 127cce0..bdca68a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -325,11 +325,15 @@ void unmap_domain_page(const void *va)
>       local_irq_restore(flags);
>   }
>   
> -unsigned long domain_page_map_to_mfn(const void *va)
> +unsigned long domain_page_map_to_mfn(const void *ptr)
>   {
> +    unsigned long va = (unsigned long)ptr;
>       lpae_t *map = this_cpu(xen_dommap);
> -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> -    unsigned long offset = ((unsigned long)va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +    unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +
> +    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> +        return virt_to_mfn(va);
>   
>       ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>       ASSERT(map[slot].pt.avail != 0);
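
[Archive editor's note: for readers following along, the range check the
patch adds can be sketched in isolation.  This is an illustrative sketch
only, not Xen's actual code; the region bounds below are placeholder
values, and the real VMAP_VIRT_START/VMAP_VIRT_END come from Xen's arm
headers.]

```c
#include <stdint.h>

/*
 * Placeholder bounds for illustration only -- NOT the real Xen values.
 * The patch's idea: addresses inside the vmap region are translated
 * directly (virt_to_mfn / __pa), everything else falls through to the
 * per-CPU domheap slot lookup.
 */
#define VMAP_VIRT_START 0x10000000UL
#define VMAP_VIRT_END   0x20000000UL

/* Returns nonzero when va lies in the half-open vmap region. */
static int in_vmap_range(unsigned long va)
{
    return va >= VMAP_VIRT_START && va < VMAP_VIRT_END;
}
```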


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> The function domain_page_map_to_mfn can be used to translate a virtual
> address mapped by both map_domain_page and map_domain_page_global.
> The latter uses vmap to map the mfn, so domain_page_map_to_mfn
> always fails for such addresses because they are not in the DOMHEAP range.
>
> Check if the address is in vmap range and use __pa to translate it.
>
> This patch fixes guest shutdown when the event FIFO is used.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

I assume this brings the arm paths into line with the x86 functionality?

  -George


>
> ---
>      This is a bug fix for Xen 4.4. Without this patch, it's impossible to
> use Linux 3.14 (and higher) as a guest with the event FIFO driver.
> ---
>   xen/arch/arm/mm.c |   10 +++++++---
>   1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 127cce0..bdca68a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -325,11 +325,15 @@ void unmap_domain_page(const void *va)
>       local_irq_restore(flags);
>   }
>   
> -unsigned long domain_page_map_to_mfn(const void *va)
> +unsigned long domain_page_map_to_mfn(const void *ptr)
>   {
> +    unsigned long va = (unsigned long)ptr;
>       lpae_t *map = this_cpu(xen_dommap);
> -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> -    unsigned long offset = ((unsigned long)va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +    unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +
> +    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> +        return virt_to_mfn(va);
>   
>       ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>       ASSERT(map[slot].pt.avail != 0);
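For readers following the logic, the dispatch this patch introduces can be sketched standalone. The constants and the linear `virt_to_mfn` stand-in below are illustrative only, not Xen's actual memory layout:

```c
#include <assert.h>

/* Illustrative constants only -- the real values come from Xen's
 * per-architecture config headers and differ from these. */
#define PAGE_SHIFT          12
#define VMAP_VIRT_START     0x40000000UL
#define VMAP_VIRT_END       0x50000000UL
#define DOMHEAP_VIRT_START  0x80000000UL

/* Stand-in for virt_to_mfn(): vmap mappings translate linearly here. */
static unsigned long fake_virt_to_mfn(unsigned long va)
{
    return va >> PAGE_SHIFT;
}

/* Mirrors the shape of the patched domain_page_map_to_mfn(): addresses
 * in the vmap range (map_domain_page_global) take the linear path;
 * everything else falls through to the per-CPU domheap lookup, elided
 * here to plain arithmetic. */
static unsigned long map_to_mfn(const void *ptr)
{
    unsigned long va = (unsigned long)ptr;

    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
        return fake_virt_to_mfn(va);

    return (va - DOMHEAP_VIRT_START) >> PAGE_SHIFT;
}
```

The point of the fix is simply that the vmap-range test must run before the domheap slot arithmetic, which is meaningless for globally mapped addresses.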



From xen-devel-bounces@lists.xen.org Wed Feb 05 14:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3q9-0006aQ-Q9; Wed, 05 Feb 2014 14:52:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3q8-0006a4-Cr
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:52:36 +0000
Received: from [85.158.137.68:18409] by server-7.bemta-3.messagelabs.com id
	58/17-13775-33052F25; Wed, 05 Feb 2014 14:52:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391611954!13572678!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12444 invoked from network); 5 Feb 2014 14:52:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 14:52:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 14:52:33 +0000
Message-Id: <52F25E3F02000078001196E4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 14:52:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
In-Reply-To: <52F25D4202000078001196AB@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartA7940A3F.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 1/3] x86: fix FS/GS base handling when using the
 fsgsbase feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartA7940A3F.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

In that case, due to the respective instructions not being privileged,
we can't rely on our in-memory data to always be correct: While the
guest is running, it may change without us knowing about it. Therefore
we need to
- read the correct values from hardware during context switch out
  (save_segments())
- read the correct values from hardware during RDMSR emulation
- update in-memory values during guest mode change
  (toggle_guest_mode())

For completeness/consistency, WRMSR emulation is also being switched
to use wr[fg]sbase().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1251,6 +1251,15 @@ static void save_segments(struct vcpu *v
     regs->fs = read_segment_register(fs);
     regs->gs = read_segment_register(gs);
 
+    if ( cpu_has_fsgsbase && !is_pv_32bit_vcpu(v) )
+    {
+        v->arch.pv_vcpu.fs_base = __rdfsbase();
+        if ( v->arch.flags & TF_kernel_mode )
+            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
+        else
+            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
+    }
+
     if ( regs->ds )
         dirty_segment_mask |= DIRTY_DS;
 
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2382,15 +2382,13 @@ static int emulate_privileged_op(struct
         case MSR_FS_BASE:
             if ( is_pv_32on64_vcpu(v) )
                 goto fail;
-            if ( wrmsr_safe(MSR_FS_BASE, msr_content) )
-                goto fail;
+            wrfsbase(msr_content);
             v->arch.pv_vcpu.fs_base = msr_content;
             break;
         case MSR_GS_BASE:
             if ( is_pv_32on64_vcpu(v) )
                 goto fail;
-            if ( wrmsr_safe(MSR_GS_BASE, msr_content) )
-                goto fail;
+            wrgsbase(msr_content);
             v->arch.pv_vcpu.gs_base_kernel = msr_content;
             break;
         case MSR_SHADOW_GS_BASE:
@@ -2535,15 +2533,14 @@ static int emulate_privileged_op(struct
         case MSR_FS_BASE:
             if ( is_pv_32on64_vcpu(v) )
                 goto fail;
-            regs->eax = v->arch.pv_vcpu.fs_base & 0xFFFFFFFFUL;
-            regs->edx = v->arch.pv_vcpu.fs_base >> 32;
-            break;
+            val = cpu_has_fsgsbase ? __rdfsbase() : v->arch.pv_vcpu.fs_base;
+            goto rdmsr_writeback;
         case MSR_GS_BASE:
             if ( is_pv_32on64_vcpu(v) )
                 goto fail;
-            regs->eax = v->arch.pv_vcpu.gs_base_kernel & 0xFFFFFFFFUL;
-            regs->edx = v->arch.pv_vcpu.gs_base_kernel >> 32;
-            break;
+            val = cpu_has_fsgsbase ? __rdgsbase()
+                                   : v->arch.pv_vcpu.gs_base_kernel;
+            goto rdmsr_writeback;
         case MSR_SHADOW_GS_BASE:
             if ( is_pv_32on64_vcpu(v) )
                 goto fail;
--- a/xen/arch/x86/x86_64/traps.c
+++ b/xen/arch/x86/x86_64/traps.c
@@ -257,6 +257,13 @@ void toggle_guest_mode(struct vcpu *v)
 {
     if ( is_pv_32bit_vcpu(v) )
         return;
+    if ( cpu_has_fsgsbase )
+    {
+        if ( v->arch.flags & TF_kernel_mode )
+            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
+        else
+            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
+    }
     v->arch.flags ^= TF_kernel_mode;
     asm volatile ( "swapgs" );
     update_cr3(v);
--- a/xen/include/asm-x86/msr.h
+++ b/xen/include/asm-x86/msr.h
@@ -98,34 +98,52 @@ static inline int wrmsr_safe(unsigned in
 			  : "=a" (low), "=d" (high) \
 			  : "c" (counter))
 
-static inline unsigned long rdfsbase(void)
+static inline unsigned long __rdfsbase(void)
 {
     unsigned long base;
 
-    if ( cpu_has_fsgsbase )
 #ifdef HAVE_GAS_FSGSBASE
-        asm volatile ( "rdfsbase %0" : "=r" (base) );
+    asm volatile ( "rdfsbase %0" : "=r" (base) );
 #else
-        asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc0" : "=a" (base) );
+    asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc0" : "=a" (base) );
 #endif
-    else
-        rdmsrl(MSR_FS_BASE, base);
 
     return base;
 }
 
-static inline unsigned long rdgsbase(void)
+static inline unsigned long __rdgsbase(void)
 {
     unsigned long base;
 
-    if ( cpu_has_fsgsbase )
 #ifdef HAVE_GAS_FSGSBASE
-        asm volatile ( "rdgsbase %0" : "=r" (base) );
+    asm volatile ( "rdgsbase %0" : "=r" (base) );
 #else
-        asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc8" : "=a" (base) );
+    asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc8" : "=a" (base) );
 #endif
-    else
-        rdmsrl(MSR_GS_BASE, base);
+
+    return base;
+}
+
+static inline unsigned long rdfsbase(void)
+{
+    unsigned long base;
+
+    if ( cpu_has_fsgsbase )
+        return __rdfsbase();
+
+    rdmsrl(MSR_FS_BASE, base);
+
+    return base;
+}
+
+static inline unsigned long rdgsbase(void)
+{
+    unsigned long base;
+
+    if ( cpu_has_fsgsbase )
+        return __rdgsbase();
+
+    rdmsrl(MSR_GS_BASE, base);
 
     return base;
 }
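The msr.h restructuring above follows a common pattern: an unconditional raw accessor plus a feature-checking wrapper that falls back to the MSR path on older CPUs. A minimal model of that split follows; all names and variables here are stand-ins simulating hardware state, not Xen code:

```c
#include <assert.h>

static int cpu_has_fsgsbase;       /* boot-time feature flag */
static unsigned long hw_fs_base;   /* simulates the FS base register */
static unsigned long msr_fs_base;  /* simulates MSR_FS_BASE */

/* Raw accessor: models the unprivileged RDFSBASE instruction, used only
 * when the caller already knows the feature is available. */
static unsigned long model_rdfsbase_insn(void)
{
    return hw_fs_base;
}

/* Checking wrapper: models rdfsbase(), which keeps the feature test and
 * falls back to the rdmsrl(MSR_FS_BASE, ...) path. */
static unsigned long model_rdfsbase(void)
{
    if ( cpu_has_fsgsbase )
        return model_rdfsbase_insn();

    return msr_fs_base;
}
```

Splitting the two lets context-switch code call the raw form directly after its own feature check, while generic callers keep the safe wrapper.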



--=__PartA7940A3F.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--=__PartA7940A3F.1__=--


From xen-devel-bounces@lists.xen.org Wed Feb 05 14:52:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3qG-0006cQ-7g; Wed, 05 Feb 2014 14:52:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB3qE-0006bo-56
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:52:42 +0000
Received: from [193.109.254.147:23103] by server-14.bemta-14.messagelabs.com
	id 4F/76-29228-93052F25; Wed, 05 Feb 2014 14:52:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391611931!2234877!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32708 invoked from network); 5 Feb 2014 14:52:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:52:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100118485"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:52:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	09:52:10 -0500
Message-ID: <1391611929.23098.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 5 Feb 2014 14:52:09 +0000
In-Reply-To: <52F24F5D.7020207@citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<52F24D4D.7040004@citrix.com>
	<1391611558.23098.2.camel@kazak.uk.xensource.com>
	<52F24F5D.7020207@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(trimming cc, most people are presumably not interested)

On Wed, 2014-02-05 at 14:49 +0000, Andrew Cooper wrote:
> On 05/02/14 14:45, Ian Campbell wrote:
> > On Wed, 2014-02-05 at 14:40 +0000, Andrew Cooper wrote:
> >> On 05/02/14 14:09, Ian Campbell wrote:
> >>> Andy:
> >>>
> >>>       * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
> >>>         allocations
> >>>
> >>>         Sounds too hard for a GSoC to me. Would need fleshing out in any
> >>>         case.
> >> Malcolm made a prototype for this on the first day of the Hackathon.  It
> >> can disappear.
> > Removed.
> >
> >>>       * CPU/RAM/PCI diagram tool
> >>>
> >>>         Does this not already exist somewhere?
> >> Not as far as I (or my ability to google) am aware.
> >>
> >> My furrowing into hwloc interacting with Xen and libxc is a start to all
> >> of this, but it is still very much in my copious free time and there is
> >> more than enough other work which could be done if someone were interested.
> > OK, left in place.
> >
> > This could conceivably be done under another umbrella such as the Linux
> > one too, since it seems generic.
> >
> > Ian.
> >
> >
> 
> For native Linux, hwloc kinda does this already - certainly the
> CPU and PCI bits.

That's what I meant by "does this not already exist somewhere". So it
sounds like extending hwloc is the right answer; the blurb should
reflect this and list the specific things which it is lacking.

Can you update the description please?

>   Under Xen there are quite a few areas needing
> improvement, which will require active development work.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:54:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3rb-0006q1-Oz; Wed, 05 Feb 2014 14:54:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3ra-0006pf-A6
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:54:06 +0000
Received: from [193.109.254.147:60180] by server-10.bemta-14.messagelabs.com
	id C9/BA-10711-D8052F25; Wed, 05 Feb 2014 14:54:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391612044!2227316!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20354 invoked from network); 5 Feb 2014 14:54:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 14:54:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 14:54:04 +0000
Message-Id: <52F25E9902000078001196E8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 14:54:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
In-Reply-To: <52F25D4202000078001196AB@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0033AD98.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 2/3] domctl: also pause domain for extended
 context updates
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0033AD98.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

This is not just for consistency with "base" context updates, but
actually needed so that guest side accesses can't race with control
domain side updates.

This would have been a security issue if XSA-77 hadn't waived them on
the affected domctl operation.

While looking at the code I also spotted a redundant NULL check in the
"base" context update handling code, which is being removed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -853,6 +853,8 @@ long arch_do_domctl(
         }
         else
         {
+            if ( d == current->domain ) /* no domain_pause() */
+                break;
             ret = -EINVAL;
             if ( evc->size < offsetof(typeof(*evc), vmce) )
                 break;
@@ -861,6 +863,7 @@ long arch_do_domctl(
                 if ( !is_canonical_address(evc->sysenter_callback_eip) ||
                      !is_canonical_address(evc->syscall32_callback_eip) )
                     break;
+                domain_pause(d);
                 fixup_guest_code_selector(d, evc->sysenter_callback_cs);
                 v->arch.pv_vcpu.sysenter_callback_cs      =
                     evc->sysenter_callback_cs;
@@ -881,6 +884,8 @@ long arch_do_domctl(
                      (evc->syscall32_callback_cs & ~3) ||
                      evc->syscall32_callback_eip )
                 break;
+            else
+                domain_pause(d);
 
             BUILD_BUG_ON(offsetof(struct xen_domctl_ext_vcpucontext,
                                   mcg_cap) !=
@@ -899,6 +904,8 @@ long arch_do_domctl(
             }
             else
                 ret = 0;
+
+            domain_unpause(d);
         }
     }
     break;
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -334,10 +334,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         unsigned int vcpu = op->u.vcpucontext.vcpu;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
              (vcpu >= d->max_vcpus) || ((v = d->vcpu[vcpu]) == NULL) )




--=__Part0033AD98.1__=
Content-Type: text/plain; name="vcpu-set-context-pause.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="vcpu-set-context-pause.patch"

domctl: also pause domain for "extended" context updates

This is not just for consistency with "base" context updates, but
actually needed so that guest side accesses can't race with control
domain side updates.

This would have been a security issue if XSA-77 hadn't waived them on
the affected domctl operation.

While looking at the code I also spotted a redundant NULL check in the
"base" context update handling code, which is being removed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -853,6 +853,8 @@ long arch_do_domctl(
         }
         else
         {
+            if ( d == current->domain ) /* no domain_pause() */
+                break;
             ret = -EINVAL;
             if ( evc->size < offsetof(typeof(*evc), vmce) )
                 break;
@@ -861,6 +863,7 @@ long arch_do_domctl(
                 if ( !is_canonical_address(evc->sysenter_callback_eip) ||
                      !is_canonical_address(evc->syscall32_callback_eip) )
                     break;
+                domain_pause(d);
                 fixup_guest_code_selector(d, evc->sysenter_callback_cs);
                 v->arch.pv_vcpu.sysenter_callback_cs      =
                     evc->sysenter_callback_cs;
@@ -881,6 +884,8 @@ long arch_do_domctl(
                      (evc->syscall32_callback_cs & ~3) ||
                      evc->syscall32_callback_eip )
                 break;
+            else
+                domain_pause(d);
 
             BUILD_BUG_ON(offsetof(struct xen_domctl_ext_vcpucontext,
                                   mcg_cap) !=
@@ -899,6 +904,8 @@ long arch_do_domctl(
             }
             else
                 ret = 0;
+
+            domain_unpause(d);
         }
     }
     break;
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -334,10 +334,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         unsigned int vcpu = op->u.vcpucontext.vcpu;
         struct vcpu *v;
 
-        ret = -ESRCH;
-        if ( d == NULL )
-            break;
-
         ret = -EINVAL;
         if ( (d == current->domain) || /* no domain_pause() */
              (vcpu >= d->max_vcpus) || ((v = d->vcpu[vcpu]) == NULL) )
--=__Part0033AD98.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0033AD98.1__=--


From xen-devel-bounces@lists.xen.org Wed Feb 05 14:54:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3s4-0006vK-7x; Wed, 05 Feb 2014 14:54:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3s2-0006uw-Ld
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:54:34 +0000
Received: from [85.158.137.68:12695] by server-2.bemta-3.messagelabs.com id
	A0/AB-06531-9A052F25; Wed, 05 Feb 2014 14:54:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391612072!13493295!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12008 invoked from network); 5 Feb 2014 14:54:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 14:54:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 14:54:32 +0000
Message-Id: <52F25EB702000078001196EC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 14:54:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
In-Reply-To: <52F25D4202000078001196AB@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2F1C82B7.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 3/3] domctl: pause vCPU for context reads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2F1C82B7.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

"Base" context reads already paused the subject vCPU when being the
current one, but that special case isn't being properly dealt with
anyway (at the very least when x86's fsgsbase feature is in use), so
just disallow it.

"Extended" context reads so far didn't do any pausing.

While we can't avoid the reported data being stale by the time it
arrives at the caller, this way we at least guarantee that it is
consistent.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -819,7 +819,13 @@ long arch_do_domctl(
 
         if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
         {
+            if ( v == current ) /* no vcpu_pause() */
+                break;
+
             evc->size = sizeof(*evc);
+
+            vcpu_pause(v);
+
             if ( is_pv_domain(d) )
             {
                 evc->sysenter_callback_cs      =
@@ -849,6 +855,7 @@ long arch_do_domctl(
             evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
 
             ret = 0;
+            vcpu_unpause(v);
             copyback = 1;
         }
         else
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -675,11 +675,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         struct vcpu         *v;
 
         ret = -EINVAL;
-        if ( op->u.vcpucontext.vcpu >= d->max_vcpus )
-            goto getvcpucontext_out;
-
-        ret = -ESRCH;
-        if ( (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL )
+        if ( op->u.vcpucontext.vcpu >= d->max_vcpus ||
+             (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL ||
+             v == current ) /* no vcpu_pause() */
             goto getvcpucontext_out;
 
         ret = -ENODATA;
@@ -694,14 +692,12 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( (c.nat = xmalloc(struct vcpu_guest_context)) == NULL )
             goto getvcpucontext_out;
 
-        if ( v != current )
-            vcpu_pause(v);
+        vcpu_pause(v);
 
         arch_get_info_guest(v, c);
         ret = 0;
 
-        if ( v != current )
-            vcpu_unpause(v);
+        vcpu_unpause(v);
 
 #ifdef CONFIG_COMPAT
         if ( !is_pv_32on64_vcpu(v) )




--=__Part2F1C82B7.1__=
Content-Type: text/plain; name="vcpu-get-context-pause.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="vcpu-get-context-pause.patch"

domctl: pause vCPU for context reads

"Base" context reads already paused the subject vCPU when being the
current one, but that special case isn't being properly dealt with
anyway (at the very least when x86's fsgsbase feature is in use), so
just disallow it.

"Extended" context reads so far didn't do any pausing.

While we can't avoid the reported data being stale by the time it
arrives at the caller, this way we at least guarantee that it is
consistent.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -819,7 +819,13 @@ long arch_do_domctl(
 
         if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
         {
+            if ( v == current ) /* no vcpu_pause() */
+                break;
+
             evc->size = sizeof(*evc);
+
+            vcpu_pause(v);
+
             if ( is_pv_domain(d) )
             {
                 evc->sysenter_callback_cs      =
@@ -849,6 +855,7 @@ long arch_do_domctl(
             evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
 
             ret = 0;
+            vcpu_unpause(v);
             copyback = 1;
         }
         else
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -675,11 +675,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         struct vcpu         *v;
 
         ret = -EINVAL;
-        if ( op->u.vcpucontext.vcpu >= d->max_vcpus )
-            goto getvcpucontext_out;
-
-        ret = -ESRCH;
-        if ( (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL )
+        if ( op->u.vcpucontext.vcpu >= d->max_vcpus ||
+             (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL ||
+             v == current ) /* no vcpu_pause() */
             goto getvcpucontext_out;
 
         ret = -ENODATA;
@@ -694,14 +692,12 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( (c.nat = xmalloc(struct vcpu_guest_context)) == NULL )
             goto getvcpucontext_out;
 
-        if ( v != current )
-            vcpu_pause(v);
+        vcpu_pause(v);
 
         arch_get_info_guest(v, c);
         ret = 0;
 
-        if ( v != current )
-            vcpu_unpause(v);
+        vcpu_unpause(v);
 
 #ifdef CONFIG_COMPAT
         if ( !is_pv_32on64_vcpu(v) )
--=__Part2F1C82B7.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2F1C82B7.1__=--


From xen-devel-bounces@lists.xen.org Wed Feb 05 14:54:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:54:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3s4-0006vK-7x; Wed, 05 Feb 2014 14:54:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3s2-0006uw-Ld
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:54:34 +0000
Received: from [85.158.137.68:12695] by server-2.bemta-3.messagelabs.com id
	A0/AB-06531-9A052F25; Wed, 05 Feb 2014 14:54:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391612072!13493295!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12008 invoked from network); 5 Feb 2014 14:54:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 14:54:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 14:54:32 +0000
Message-Id: <52F25EB702000078001196EC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 14:54:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
In-Reply-To: <52F25D4202000078001196AB@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part2F1C82B7.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 3/3] domctl: pause vCPU for context reads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part2F1C82B7.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

"Base" context reads already paused the subject vCPU when being the
current one, but that special case isn't being properly dealt with
anyway (at the very least when x86's fsgsbase feature is in use), so
just disallow it.

"Extended" context reads so far didn't do any pausing.

While we can't avoid the reported data being stale by the time it
arrives at the caller, this way we at least guarantee that it is
consistent.
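
The pause-around-read discipline this patch enforces can be sketched in plain C; the struct fields and function names below are simplified stand-ins for the real Xen definitions, not the actual hypervisor code:

```c
/* Minimal sketch of the patch's invariant: reading another vCPU's
 * register context is only done while that vCPU is paused, and reads
 * of the *current* vCPU are simply refused (we cannot pause ourselves).
 * All names here are hypothetical stand-ins. */
#include <assert.h>

struct vcpu {
    int  pause_count;
    long fs_base;               /* one piece of guest register state */
};

static void vcpu_pause(struct vcpu *v)   { v->pause_count++; }
static void vcpu_unpause(struct vcpu *v) { v->pause_count--; }

/* Returns 0 and a consistent snapshot in *out, or -1 for a self-read
 * (analogous to the patch folding the v == current case into -EINVAL). */
static long get_context(struct vcpu *v, struct vcpu *current, long *out)
{
    if ( v == current )         /* no vcpu_pause() on self: disallow */
        return -1;
    vcpu_pause(v);
    *out = v->fs_base;          /* stable while pause_count > 0 */
    vcpu_unpause(v);
    return 0;
}
```

The point of the simplification is visible in the pause/unpause pair: once self-reads are rejected up front, the `v != current` guards around the pause calls become unnecessary.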

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -819,7 +819,13 @@ long arch_do_domctl(
 
         if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
         {
+            if ( v == current ) /* no vcpu_pause() */
+                break;
+
             evc->size = sizeof(*evc);
+
+            vcpu_pause(v);
+
             if ( is_pv_domain(d) )
             {
                 evc->sysenter_callback_cs      =
@@ -849,6 +855,7 @@ long arch_do_domctl(
             evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
 
             ret = 0;
+            vcpu_unpause(v);
             copyback = 1;
         }
         else
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -675,11 +675,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         struct vcpu         *v;
 
         ret = -EINVAL;
-        if ( op->u.vcpucontext.vcpu >= d->max_vcpus )
-            goto getvcpucontext_out;
-
-        ret = -ESRCH;
-        if ( (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL )
+        if ( op->u.vcpucontext.vcpu >= d->max_vcpus ||
+             (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL ||
+             v == current ) /* no vcpu_pause() */
             goto getvcpucontext_out;
 
         ret = -ENODATA;
@@ -694,14 +692,12 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
         if ( (c.nat = xmalloc(struct vcpu_guest_context)) == NULL )
             goto getvcpucontext_out;
 
-        if ( v != current )
-            vcpu_pause(v);
+        vcpu_pause(v);
 
         arch_get_info_guest(v, c);
         ret = 0;
 
-        if ( v != current )
-            vcpu_unpause(v);
+        vcpu_unpause(v);
 
 #ifdef CONFIG_COMPAT
         if ( !is_pv_32on64_vcpu(v) )




--=__Part2F1C82B7.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part2F1C82B7.1__=--


From xen-devel-bounces@lists.xen.org Wed Feb 05 14:55:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3sw-00075K-T7; Wed, 05 Feb 2014 14:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3sv-000754-Rd
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 14:55:29 +0000
Received: from [85.158.137.68:64486] by server-11.bemta-3.messagelabs.com id
	47/6C-04255-1E052F25; Wed, 05 Feb 2014 14:55:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391612128!13579062!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19222 invoked from network); 5 Feb 2014 14:55:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 14:55:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 14:55:27 +0000
Message-Id: <52F25EEE02000078001196EF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 14:55:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
In-Reply-To: <52F25D4202000078001196AB@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 0/3] guest context management adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 15:48, "Jan Beulich" <JBeulich@suse.com> wrote:
> 1: x86: fix FS/GS base handling when using the fsgsbase feature
> 2: domctl: also pause domain for extended context updates
> 3: domctl: pause vCPU for context reads
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

And I think it goes without saying that, these being bug fixes, I expect
them to be suitable for 4.4.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:56:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3tQ-0007BL-Al; Wed, 05 Feb 2014 14:56:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3tP-0007B4-AQ
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:55:59 +0000
Received: from [85.158.139.211:43707] by server-4.bemta-5.messagelabs.com id
	7E/11-08092-EF052F25; Wed, 05 Feb 2014 14:55:58 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391612156!1872531!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17849 invoked from network); 5 Feb 2014 14:55:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:55:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98229315"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 14:55:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:55:53 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3tJ-0007Aa-6f;
	Wed, 05 Feb 2014 14:55:53 +0000
Message-ID: <52F250E7.1090008@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:55:35 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xen.org>
References: <1391535813.6497.61.camel@kazak.uk.xensource.com>
	<1391536870-22809-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1391536870-22809-1-git-send-email-andrew.cooper3@citrix.com>
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 06:01 PM, Andrew Cooper wrote:
> The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
> most part is left alone until success, at which point it is set to 0.
>
> There is a separate 'frc' which for the most part is used to check function
> calls, keeping errors separate from 'rc'.
>
> For a toolstack which sets callbacks->toolstack_restore(), and the function
> returns 0, any subsequent error will end up with code flow going to "out;",
> resulting in the migration being declared a success.
>
> For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
> 'frc', even though their use of 'rc' is currently safe.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
>
> ---
>
> Changes in v2:
>   * Don't drop rc = -1 from toolstack_restore().
>
> Regarding 4.4: If the two "for consistency" changes to
> xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
> without affecting the bugfix nature of the patch, but I would argue that
> leaving some examples of "rc = function_call()" leaves a bad precedent which
> is likely to lead to similar bugs in the future.

Yes, these are all pretty clear bug fixes.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
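
The failure mode the commit message describes can be sketched with a plain-C reduction of the rc/frc convention; the step functions below are hypothetical stand-ins for the libxc calls, not the real xc_domain_restore code:

```c
/* Sketch of the 'rc' vs 'frc' convention: 'rc' stays pessimistic
 * (non-zero) until the whole operation has succeeded, while 'frc'
 * captures per-call results.  If a call had instead been checked via
 * "rc = call()", a successful call would leave rc == 0 and a later
 * error jumping to 'out' would be misreported as success. */
#include <assert.h>

static int toolstack_restore_stub(void) { return 0; }  /* succeeds */
static int later_step(int fail)         { return fail ? -1 : 0; }

static int restore(int fail_late)
{
    int rc = 1, frc;    /* rc == 1: failure until proven otherwise */

    frc = toolstack_restore_stub();
    if ( frc < 0 )
        goto out;       /* rc is still 1, so the error is reported */

    frc = later_step(fail_late);
    if ( frc < 0 )
        goto out;       /* a stale 0 in rc can no longer mask this */

    rc = 0;             /* success, set only at the very end */
 out:
    return rc;
}
```

With every intermediate call routed through `frc`, a late failure (`restore(1)`) still returns the pessimistic value, which is the behavior the patch restores.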


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:56:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3to-0007H5-QS; Wed, 05 Feb 2014 14:56:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB3tn-0007Gf-OM
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:56:23 +0000
Received: from [85.158.143.35:44290] by server-3.bemta-4.messagelabs.com id
	27/5C-11539-71152F25; Wed, 05 Feb 2014 14:56:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391612181!3372306!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10427 invoked from network); 5 Feb 2014 14:56:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:56:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100120646"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:56:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 09:56:20 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB3tk-0005k1-Aw;
	Wed, 05 Feb 2014 14:56:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB3tk-00085O-1p;
	Wed, 05 Feb 2014 14:56:20 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21234.20755.786798.217533@mariner.uk.xensource.com>
Date: Wed, 5 Feb 2014 14:56:19 +0000
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140205113623.GA24025@aepfle.de>
References: <1391083364-29483-1-git-send-email-olaf@aepfle.de>
	<20140130162558.GA9033@aepfle.de>
	<1391099505.9495.23.camel@kazak.uk.xensource.com>
	<20140130173058.GA12133@aepfle.de>
	<1391530957.6497.56.camel@kazak.uk.xensource.com>
	<52F21C29.8090607@eu.citrix.com> <20140205113623.GA24025@aepfle.de>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, stefano.stabellini@eu.citrix.com,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org,
	anthony.perard@citrix.com
Subject: Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard
 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard support to xl disk configuration"):
> On Wed, Feb 05, George Dunlap wrote:
> > Well it looks like in order to keep ABI compatibility (which I don't think
> > we ever promised), you're introducing this weird hack with overloading a
> > putative boolean value with a magic number?
> 
> Yes, that's the point. If libxl_device_disk changes then IMO the SONAME
> has to change as well. And is 4.4-rc4 the right time to do that?
> Likely not. I'm fine with carrying the 4.4 patch to achieve the result, and
> putting the other version into 4.5.

I think there is no reason not to change the ABI now.  RCs are
provided for testing purposes, not production use.

The real question is whether the actual functional patch ought to be
accepted.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 support to xl disk configuration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [Xen-devel] [PATCH for xen-4.4] libxl: add option for discard support to xl disk configuration"):
> On Wed, Feb 05, George Dunlap wrote:
> > Well it looks like in order to keep ABI compatibility (which I don't think
> > we ever promised), you're introducing this weird hack with overloading a
> > putative boolean value with a magic number?
> 
> Yes, that's the point. If libxl_device_disk changes then IMO the SONAME
> has to change as well. And is 4.4-rc4 the right time to do that?
> Likely not. I'm fine with carrying the 4.4 patch to achieve the result, and
> putting the other version into 4.5.

I think there is no reason not to change the ABI now.  RCs are
provided for testing purposes, not production use.

The real question is whether the actual functional patch ought to be
accepted.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 14:57:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 14:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3v8-0007Up-B9; Wed, 05 Feb 2014 14:57:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB3v6-0007UM-1U
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 14:57:44 +0000
Received: from [85.158.137.68:10137] by server-15.bemta-3.messagelabs.com id
	F3/13-19263-76152F25; Wed, 05 Feb 2014 14:57:43 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391612261!9934237!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31447 invoked from network); 5 Feb 2014 14:57:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 14:57:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100121112"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 14:57:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 09:57:40 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB3v2-0007CH-1f;
	Wed, 05 Feb 2014 14:57:40 +0000
Message-ID: <52F25151.207@eu.citrix.com>
Date: Wed, 5 Feb 2014 14:57:21 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Pranavkumar Sawargaonkar
	<pranavkumar@linaro.org>
References: <1391493932-31268-1-git-send-email-pranavkumar@linaro.org>
	<1391507006.10515.68.camel@kazak.uk.xensource.com>
In-Reply-To: <1391507006.10515.68.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: platforms: Remove determining
 reset specific values from dts for XGENE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/04/2014 09:43 AM, Ian Campbell wrote:
> On Tue, 2014-02-04 at 11:35 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch removes reading reset-specific values (address, size and mask) from the dts
>> and instead uses values defined in the code.
>> This is because the xgene reset driver (submitted to linux) is currently going through
>> a change (not yet accepted), and the new driver has a new type of dts binding
>> for reset.
>> Hence, until the linux driver settles, we will use hardcoded values instead
>> of reading from the dts, so that the xen code will not break during the linux transition.
>>
>> Ref:
>> http://lists.xen.org/archives/html/xen-devel/2014-01/msg02256.html
>> http://www.gossamer-threads.com/lists/linux/kernel/1845585
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> George -- I'd like to take this into 4.4 to avoid shipping a Xen which
> relies on an unagreed DTS binding (which is an ABI of sorts).

Is this the reboot binding one with 3 options, #2 of which was to hard-code
the values rather than using DT?

Assuming so:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:00:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3xX-0007sX-3z; Wed, 05 Feb 2014 15:00:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB3xV-0007sR-LF
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:00:13 +0000
Received: from [85.158.143.35:6043] by server-2.bemta-4.messagelabs.com id
	8D/7D-10891-DF152F25; Wed, 05 Feb 2014 15:00:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391612412!3369658!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7482 invoked from network); 5 Feb 2014 15:00:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 15:00:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 15:00:11 +0000
Message-Id: <52F26009020000780011971D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 15:00:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
In-Reply-To: <52F24C47.5070100@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: yang.z.zhang@Intel.com, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 15:35, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
>> I have two patches: a simpler one that is pretty straightforward
>> and the one you suggested. Either one fixes PVH guests. I also did
>> bootup tests with HVM guests to make sure they worked.
>>
>> Attached and inline.
> 
> But they do different things -- one does "ioreq && ioreq->state..." and 
> the other does "!ioreq || ioreq->state...".  The first one is incorrect, 
> AFAICT.

Not outright incorrect perhaps, but sub-optimal. That was the subject
of the discussion so far, and we're mainly waiting for the two copied
VMX guys to confirm that the second patch is correct.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:01:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB3yy-0008EW-GH; Wed, 05 Feb 2014 15:01:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB3yw-0008E4-SB
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 15:01:43 +0000
Received: from [85.158.139.211:13596] by server-1.bemta-5.messagelabs.com id
	6B/D2-12859-65252F25; Wed, 05 Feb 2014 15:01:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391612498!1851201!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14600 invoked from network); 5 Feb 2014 15:01:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:01:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="98231675"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 15:01:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	10:01:18 -0500
Message-ID: <1391612476.23098.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 5 Feb 2014 15:01:16 +0000
In-Reply-To: <52F25151.207@eu.citrix.com>
References: <1391493932-31268-1-git-send-email-pranavkumar@linaro.org>
	<1391507006.10515.68.camel@kazak.uk.xensource.com>
	<52F25151.207@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: platforms: Remove determining
 reset specific values from dts for XGENE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:57 +0000, George Dunlap wrote:
> On 02/04/2014 09:43 AM, Ian Campbell wrote:
> > On Tue, 2014-02-04 at 11:35 +0530, Pranavkumar Sawargaonkar wrote:
> >> This patch removes reading reset-specific values (address, size and mask) from the dts
> >> and instead uses values defined in the code.
> >> This is because the xgene reset driver (submitted to linux) is currently going through
> >> a change (not yet accepted), and the new driver has a new type of dts binding
> >> for reset.
> >> Hence, until the linux driver settles, we will use hardcoded values instead
> >> of reading from the dts, so that the xen code will not break during the linux transition.
> >>
> >> Ref:
> >> http://lists.xen.org/archives/html/xen-devel/2014-01/msg02256.html
> >> http://www.gossamer-threads.com/lists/linux/kernel/1845585
> >>
> >> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > George -- I'd like to take this into 4.4 to avoid shipping a Xen which
> > relies on an unagreed DTS binding (which is an ABI of sorts).
> 
> Is this the reboot binding one with 3 options, #2 of which was to hard-code
> the values rather than using DT?

Yes.

> Assuming so:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:03:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:03:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB40X-0008S7-MB; Wed, 05 Feb 2014 15:03:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB40X-0008Rw-4B
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 15:03:21 +0000
Received: from [193.109.254.147:62859] by server-11.bemta-14.messagelabs.com
	id 85/49-24604-8B252F25; Wed, 05 Feb 2014 15:03:20 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391612598!2224440!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29017 invoked from network); 5 Feb 2014 15:03:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:03:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; d="scan'208";a="100124322"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 15:03:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 10:03:12 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB40O-0005m1-64;
	Wed, 05 Feb 2014 15:03:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB40N-00088N-TY;
	Wed, 05 Feb 2014 15:03:11 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21234.21167.684304.970488@mariner.uk.xensource.com>
Date: Wed, 5 Feb 2014 15:03:11 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <52F24642.5000300@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F24642.5000300@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: [PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
> On 02/03/2014 04:14 PM, Ian Jackson wrote:
> > This is the latest version of my libxl event fixes apropos of Jim's
> > libvirt testing.
> 
> Did you have any opinions on the suitability of this for 4.4?

Sorry, I should have made that clear in the body text rather than just
the subject line.

I think this needs a freeze exception on the following grounds:

 * There is little change visible to non-eventy/thready callers and
   the risk of new races there is limited; basic functional testing
   ought to catch those errors.

 * The most prominent eventy/thready caller we are currently aware of
   is libvirt.  Without these changes it is nearly impossible to have
   a reliable libvirt.

 * These changes fall into five categories:

     - Bugfixes (3 or so);

     - The new SIGCHLD sharing feature
        "libxl: fork: Share SIGCHLD handler amongst ctxs";

     - New libxl innards unit test framework and a unit test;

     - Documentation improvements;

     - Changes with no functional effect which have been broken out
       from the above in order to facilitate review (lots, but
       all small and easily reviewable).

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:17:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:17:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4Dk-0000y3-8C; Wed, 05 Feb 2014 15:17:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB4Dj-0000xv-H7
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:16:59 +0000
Received: from [85.158.139.211:27593] by server-11.bemta-5.messagelabs.com id
	61/B3-23886-AE552F25; Wed, 05 Feb 2014 15:16:58 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391613416!1855334!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12002 invoked from network); 5 Feb 2014 15:16:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:16:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; 
	d="scan'208,217";a="100134542"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 15:16:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 10:16:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB42F-0007Jv-C3;
	Wed, 05 Feb 2014 15:05:07 +0000
Message-ID: <52F25323.9080208@citrix.com>
Date: Wed, 5 Feb 2014 15:05:07 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25E9902000078001196E8@nat28.tlf.novell.com>
In-Reply-To: <52F25E9902000078001196E8@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 2/3] domctl: also pause domain for extended
 context updates
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4778219406206390804=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4778219406206390804==
Content-Type: multipart/alternative;
	boundary="------------040702000603030909050407"

--------------040702000603030909050407
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 05/02/14 14:54, Jan Beulich wrote:
> This is not just for consistency with "base" context updates, but
> actually needed so that guest side accesses can't race with control
> domain side updates.
>
> This would have been a security issue if XSA-77 hadn't waived them on
> the affected domctl operation.
>
> While looking at the code I also spotted a redundant NULL check in the
> "base" context update handling code, which is being removed.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -853,6 +853,8 @@ long arch_do_domctl(
>          }
>          else
>          {
> +            if ( d == current->domain ) /* no domain_pause() */
> +                break;
>              ret = -EINVAL;
>              if ( evc->size < offsetof(typeof(*evc), vmce) )
>                  break;
> @@ -861,6 +863,7 @@ long arch_do_domctl(
>                  if ( !is_canonical_address(evc->sysenter_callback_eip) ||
>                       !is_canonical_address(evc->syscall32_callback_eip) )
>                      break;
> +                domain_pause(d);
>                  fixup_guest_code_selector(d, evc->sysenter_callback_cs);
>                  v->arch.pv_vcpu.sysenter_callback_cs      =
>                      evc->sysenter_callback_cs;
> @@ -881,6 +884,8 @@ long arch_do_domctl(
>                        (evc->syscall32_callback_cs & ~3) ||
>                        evc->syscall32_callback_eip )
>                  break;
> +            else
> +                domain_pause(d);
>  
>              BUILD_BUG_ON(offsetof(struct xen_domctl_ext_vcpucontext,
>                                    mcg_cap) !=
> @@ -899,6 +904,8 @@ long arch_do_domctl(
>              }
>              else
>                  ret = 0;
> +
> +            domain_unpause(d);
>          }
>      }
>      break;
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -334,10 +334,6 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          unsigned int vcpu = op->u.vcpucontext.vcpu;
>          struct vcpu *v;
>  
> -        ret = -ESRCH;
> -        if ( d == NULL )
> -            break;
> -
>          ret = -EINVAL;
>          if ( (d == current->domain) || /* no domain_pause() */
>               (vcpu >= d->max_vcpus) || ((v = d->vcpu[vcpu]) == NULL) )
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------040702000603030909050407--


--===============4778219406206390804==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4778219406206390804==--


From xen-devel-bounces@lists.xen.org Wed Feb 05 15:28:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4OT-0001Z1-1M; Wed, 05 Feb 2014 15:28:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WB4OR-0001Yv-Bp
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:28:03 +0000
Received: from [193.109.254.147:25090] by server-14.bemta-14.messagelabs.com
	id 7F/C2-29228-28852F25; Wed, 05 Feb 2014 15:28:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391614079!2234285!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4091 invoked from network); 5 Feb 2014 15:28:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 15:28:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s15FQqcC009283
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Feb 2014 15:26:52 GMT
From xen-devel-bounces@lists.xen.org Wed Feb 05 15:28:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4OT-0001Z1-1M; Wed, 05 Feb 2014 15:28:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WB4OR-0001Yv-Bp
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:28:03 +0000
Received: from [193.109.254.147:25090] by server-14.bemta-14.messagelabs.com
	id 7F/C2-29228-28852F25; Wed, 05 Feb 2014 15:28:02 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391614079!2234285!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4091 invoked from network); 5 Feb 2014 15:28:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 15:28:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s15FQqcC009283
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Feb 2014 15:26:52 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15FQpUo014587
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 5 Feb 2014 15:26:51 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15FQoKx014547; Wed, 5 Feb 2014 15:26:50 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Feb 2014 07:26:50 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 1F29D1C0972; Wed,  5 Feb 2014 10:26:49 -0500 (EST)
Date: Wed, 5 Feb 2014 10:26:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Message-ID: <20140205152649.GA5167@phenom.dumpdata.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F24C47.5070100@eu.citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Jan Beulich <JBeulich@suse.com>, Konrad Rzeszutek Wilk <konrad@kernel.org>,
	jun.nakajima@Intel.com, yang.z.zhang@Intel.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
> >On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
> >>>>>On 04.02.14 at 16:32, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >>>On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
> >>>>Wasn't it that Mukesh's patch simply was yours with the two
> >>>>get_ioreq()s folded by using a local variable?
> >>>Yes. As so
> >>Thanks. Except that ...
> >>
> >>>--- a/xen/arch/x86/hvm/vmx/vvmx.c
> >>>+++ b/xen/arch/x86/hvm/vmx/vvmx.c
> >>>@@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
> >>>      struct vcpu *v = current;
> >>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> >>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> >>>-
> >>>+    ioreq_t *p = get_ioreq(v);
> >>... you don't want to drop the blank line, and naming the new
> >>variable "ioreq" would seem preferable.
> >>
> >>>      /*
> >>>       * a pending IO emualtion may still no finished. In this case,
> >>>       * no virtual vmswith is allowed. Or else, the following IO
> >>>       * emulation will handled in a wrong VCPU context.
> >>>       */
> >>>-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> >>>+    if ( p && p->state != STATE_IOREQ_NONE )
> >>And, as said before, I'd think "!p ||" instead of "p &&" would be
> >>the right thing here. Yang, Jun?
> >I have two patches - the simpler one, which is pretty straightforward,
> >and the one you suggested. Either one fixes PVH guests. I also did
> >bootup tests with HVM guests to make sure they worked.
> >
> >Attached and inline.
> 
> But they do different things -- one does "ioreq && ioreq->state..."

Correct.
> and the other does "!ioreq || ioreq->state...".  The first one is
> incorrect, AFAICT.

Both of them fix the hypervisor blowing up with any PVH guest.
> 
>  -George
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:29:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:29:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4Pl-0001pa-IL; Wed, 05 Feb 2014 15:29:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB4Pk-0001nH-2O
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:29:24 +0000
Received: from [85.158.139.211:10241] by server-17.bemta-5.messagelabs.com id
	1E/20-31975-3D852F25; Wed, 05 Feb 2014 15:29:23 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391614160!1863761!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19422 invoked from network); 5 Feb 2014 15:29:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:29:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,786,1384300800"; 
	d="scan'208,217";a="100140362"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 15:29:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 10:29:19 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB4Pf-0007jw-Tj;
	Wed, 05 Feb 2014 15:29:19 +0000
Message-ID: <52F258CF.50706@citrix.com>
Date: Wed, 5 Feb 2014 15:29:19 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25EB702000078001196EC@nat28.tlf.novell.com>
In-Reply-To: <52F25EB702000078001196EC@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] domctl: pause vCPU for context reads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5996960502112969408=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5996960502112969408==
Content-Type: multipart/alternative;
	boundary="------------060902040206030101090001"

--------------060902040206030101090001
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 05/02/14 14:54, Jan Beulich wrote:
> "Base" context reads already paused the subject vCPU when being the
> current one, but that special case isn't being properly dealt with
> anyway (at the very least when x86's fsgsbase feature is in use), so
> just disallow it.
>
> "Extended" context reads so far didn't do any pausing.
>
> While we can't avoid the reported data being stale by the time it
> arrives at the caller, this way we at least guarantee that it is
> consistent.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Now I come to think about it, is this an ABI change, as we are now
disallowing a control domain to issue these hypercalls on itself?

~Andrew

>
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -819,7 +819,13 @@ long arch_do_domctl(
>  
>          if ( domctl->cmd == XEN_DOMCTL_get_ext_vcpucontext )
>          {
> +            if ( v == current ) /* no vcpu_pause() */
> +                break;
> +
>              evc->size = sizeof(*evc);
> +
> +            vcpu_pause(v);
> +
>              if ( is_pv_domain(d) )
>              {
>                  evc->sysenter_callback_cs      =
> @@ -849,6 +855,7 @@ long arch_do_domctl(
>              evc->vmce.mci_ctl2_bank1 = v->arch.vmce.bank[1].mci_ctl2;
>  
>              ret = 0;
> +            vcpu_unpause(v);
>              copyback = 1;
>          }
>          else
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -675,11 +675,9 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          struct vcpu         *v;
>  
>          ret = -EINVAL;
> -        if ( op->u.vcpucontext.vcpu >= d->max_vcpus )
> -            goto getvcpucontext_out;
> -
> -        ret = -ESRCH;
> -        if ( (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL )
> +        if ( op->u.vcpucontext.vcpu >= d->max_vcpus ||
> +             (v = d->vcpu[op->u.vcpucontext.vcpu]) == NULL ||
> +             v == current ) /* no vcpu_pause() */
>              goto getvcpucontext_out;
>  
>          ret = -ENODATA;
> @@ -694,14 +692,12 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xe
>          if ( (c.nat = xmalloc(struct vcpu_guest_context)) == NULL )
>              goto getvcpucontext_out;
>  
> -        if ( v != current )
> -            vcpu_pause(v);
> +        vcpu_pause(v);
>  
>          arch_get_info_guest(v, c);
>          ret = 0;
>  
> -        if ( v != current )
> -            vcpu_unpause(v);
> +        vcpu_unpause(v);
>  
>  #ifdef CONFIG_COMPAT
>          if ( !is_pv_32on64_vcpu(v) )
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------060902040206030101090001--


--===============5996960502112969408==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5996960502112969408==--


From xen-devel-bounces@lists.xen.org Wed Feb 05 15:35:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4Vl-0002BW-Kj; Wed, 05 Feb 2014 15:35:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WB4Vj-0002BR-Ve
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:35:36 +0000
Received: from [85.158.143.35:5163] by server-1.bemta-4.messagelabs.com id
	20/B9-31661-74A52F25; Wed, 05 Feb 2014 15:35:35 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391614534!3356808!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4182 invoked from network); 5 Feb 2014 15:35:34 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:35:34 -0000
From xen-devel-bounces@lists.xen.org Wed Feb 05 15:35:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4Vl-0002BW-Kj; Wed, 05 Feb 2014 15:35:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WB4Vj-0002BR-Ve
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:35:36 +0000
Received: from [85.158.143.35:5163] by server-1.bemta-4.messagelabs.com id
	20/B9-31661-74A52F25; Wed, 05 Feb 2014 15:35:35 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391614534!3356808!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4182 invoked from network); 5 Feb 2014 15:35:34 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:35:34 -0000
Received: by mail-we0-f170.google.com with SMTP id w62so421554wes.1
	for <xen-devel@lists.xenproject.org>;
	Wed, 05 Feb 2014 07:35:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=LrFnK0s7ODmgk5ndBokzrEilRXlQ9PiH9fjaB5MbyNU=;
	b=fxUGyLeUNHSwp64x1XU3tZHKhMr287xdmvTxeE9zA3OPfgF3cBC9Cabng9AGNXKulj
	zzUkUvu/Yu9PRuo7EMyKi71cFXSqOJi6p9BzShxrn8IjIWr2r7yDnVeI9PdvZQ+VSp+P
	ishvYerEq+HqIOY8YIz/8keED92UBTl+8v2Fymfpl/Hj1vnJSfZO85J3/1PuCUKr8aUR
	PxJdHBv2X4vz6ne3YIve0CBQFR1x2+X9IbFFmJFuyPAGXVxA+Z/hU79Cs0ExJhLAHETd
	utspf1s/x1M4QoThlNVLQDSjo1c+80Dg+r39KaS0kpVDokQr/Ul8fqJwMJNsAm8M8ITX
	1J4w==
X-Received: by 10.194.58.180 with SMTP id s20mr1910332wjq.54.1391614534404;
	Wed, 05 Feb 2014 07:35:34 -0800 (PST)
Received: from [172.16.10.73] (212.44.45.197.ip.redstone-isp.net.
	[212.44.45.197]) by mx.google.com with ESMTPSA id
	bm8sm62351429wjc.12.2014.02.05.07.35.29 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 07:35:33 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Wed, 05 Feb 2014 15:35:24 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <CF180ABC.703C5%keir.xen@gmail.com>
Thread-Topic: [PATCH 0/3] guest context management adjustments
Thread-Index: Ac8ih+Lqyd5+vgiFM0m46Ffvo4km0A==
In-Reply-To: <52F25D4202000078001196AB@nat28.tlf.novell.com>
Mime-version: 1.0
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/3] guest context management adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/2014 14:48, "Jan Beulich" <JBeulich@suse.com> wrote:

> 1: x86: fix FS/GS base handling when using the fsgsbase feature
> 2: domctl: also pause domain for extended context updates
> 3: domctl: pause vCPU for context reads
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:39:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4Zp-0002NE-Gs; Wed, 05 Feb 2014 15:39:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WB4Zo-0002N8-6q
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:39:48 +0000
Received: from [193.109.254.147:22011] by server-15.bemta-14.messagelabs.com
	id D1/CA-10839-34B52F25; Wed, 05 Feb 2014 15:39:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391614786!2249584!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11039 invoked from network); 5 Feb 2014 15:39:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 15:39:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 05 Feb 2014 15:39:46 +0000
Message-Id: <52F2695002000078001197AB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 05 Feb 2014 15:39:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25EB702000078001196EC@nat28.tlf.novell.com>
	<52F258CF.50706@citrix.com>
In-Reply-To: <52F258CF.50706@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] domctl: pause vCPU for context reads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 16:29, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 05/02/14 14:54, Jan Beulich wrote:
>> "Base" context reads already paused the subject vCPU when being the
>> current one, but that special case isn't being properly dealt with
>> anyway (at the very least when x86's fsgsbase feature is in use), so
>> just disallow it.
>>
>> "Extended" context reads so far didn't do any pausing.
>>
>> While we can't avoid the reported data being stale by the time it
>> arrives at the caller, this way we at least guarantee that it is
>> consistent.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Now I come to think about it, is this an ABI change, as we are now
> disallowing a control domain to issue these hypercalls on itself?

Of course it is, and intentionally so. As was patch 2. And imo it
was never correct to allow this.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4dY-0002h7-Mc; Wed, 05 Feb 2014 15:43:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB4dW-0002h0-Hv
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:43:38 +0000
Received: from [193.109.254.147:14616] by server-4.bemta-14.messagelabs.com id
	F0/41-32066-92C52F25; Wed, 05 Feb 2014 15:43:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391615015!2238704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14433 invoked from network); 5 Feb 2014 15:43:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:43:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98256214"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 15:43:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 10:43:34 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB4dS-0007xD-Ag;
	Wed, 05 Feb 2014 15:43:34 +0000
Message-ID: <52F25C26.10606@citrix.com>
Date: Wed, 5 Feb 2014 15:43:34 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25EB702000078001196EC@nat28.tlf.novell.com>
	<52F258CF.50706@citrix.com>
	<52F2695002000078001197AB@nat28.tlf.novell.com>
In-Reply-To: <52F2695002000078001197AB@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 3/3] domctl: pause vCPU for context reads
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 15:39, Jan Beulich wrote:
>>>> On 05.02.14 at 16:29, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 05/02/14 14:54, Jan Beulich wrote:
>>> "Base" context reads already paused the subject vCPU when being the
>>> current one, but that special case isn't being properly dealt with
>>> anyway (at the very least when x86's fsgsbase feature is in use), so
>>> just disallow it.
>>>
>>> "Extended" context reads so far didn't do any pausing.
>>>
>>> While we can't avoid the reported data being stale by the time it
>>> arrives at the caller, this way we at least guarantee that it is
>>> consistent.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Now I come to think about it, is this an ABI change, as we are now
>> disallowing a control domain to issue these hypercalls on itself?
> Of course it is, and intentionally so. As was patch 2. And imo it
> was never correct to allow this.
>
> Jan
>

It is certainly possible to get libxc to do this, but I would agree that
it has no valid use.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 15:51:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:51:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4kn-00035f-07; Wed, 05 Feb 2014 15:51:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB4kl-00035X-NJ
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:51:08 +0000
Received: from [85.158.143.35:2264] by server-3.bemta-4.messagelabs.com id
	B0/06-11539-BED52F25; Wed, 05 Feb 2014 15:51:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391615463!3378786!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31475 invoked from network); 5 Feb 2014 15:51:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:51:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208,217";a="98259883"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 15:51:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 10:50:59 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB4kd-00084Y-Je;
	Wed, 05 Feb 2014 15:50:59 +0000
Message-ID: <52F25DE3.4070306@citrix.com>
Date: Wed, 5 Feb 2014 15:50:59 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25E3F02000078001196E4@nat28.tlf.novell.com>
In-Reply-To: <52F25E3F02000078001196E4@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] x86: fix FS/GS base handling when using
 the fsgsbase feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4569693584749241606=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4569693584749241606==
Content-Type: multipart/alternative;
	boundary="------------020507090806000700020401"

--------------020507090806000700020401
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 05/02/14 14:52, Jan Beulich wrote:
> In that case, due to the respective instructions not being privileged,
> we can't rely on our in-memory data to always be correct: While the
> guest is running, it may change without us knowing about it. Therefore
> we need to
> - read the correct values from hardware during context switch out
>   (save_segments())
> - read the correct values from hardware during RDMSR emulation
> - update in-memory values during guest mode change
>   (toggle_guest_mode())
>
> For completeness/consistency, WRMSR emulation is also being switched
> to use wr[fg]sbase().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1251,6 +1251,15 @@ static void save_segments(struct vcpu *v
>      regs->fs = read_segment_register(fs);
>      regs->gs = read_segment_register(gs);
>  
> +    if ( cpu_has_fsgsbase && !is_pv_32bit_vcpu(v) )
> +    {
> +        v->arch.pv_vcpu.fs_base = __rdfsbase();
> +        if ( v->arch.flags & TF_kernel_mode )
> +            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
> +        else
> +            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
> +    }
> +
>      if ( regs->ds )
>          dirty_segment_mask |= DIRTY_DS;
>  
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2382,15 +2382,13 @@ static int emulate_privileged_op(struct 
>          case MSR_FS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            if ( wrmsr_safe(MSR_FS_BASE, msr_content) )
> -                goto fail;
> +            wrfsbase(msr_content);
>              v->arch.pv_vcpu.fs_base = msr_content;
>              break;
>          case MSR_GS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            if ( wrmsr_safe(MSR_GS_BASE, msr_content) )
> -                goto fail;
> +            wrgsbase(msr_content);
>              v->arch.pv_vcpu.gs_base_kernel = msr_content;
>              break;
>          case MSR_SHADOW_GS_BASE:
> @@ -2535,15 +2533,14 @@ static int emulate_privileged_op(struct 
>          case MSR_FS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            regs->eax = v->arch.pv_vcpu.fs_base & 0xFFFFFFFFUL;
> -            regs->edx = v->arch.pv_vcpu.fs_base >> 32;
> -            break;
> +            val = cpu_has_fsgsbase ? __rdfsbase() : v->arch.pv_vcpu.fs_base;
> +            goto rdmsr_writeback;
>          case MSR_GS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            regs->eax = v->arch.pv_vcpu.gs_base_kernel & 0xFFFFFFFFUL;
> -            regs->edx = v->arch.pv_vcpu.gs_base_kernel >> 32;
> -            break;
> +            val = cpu_has_fsgsbase ? __rdgsbase()
> +                                   : v->arch.pv_vcpu.gs_base_kernel;
> +            goto rdmsr_writeback;
>          case MSR_SHADOW_GS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -257,6 +257,13 @@ void toggle_guest_mode(struct vcpu *v)
>  {
>      if ( is_pv_32bit_vcpu(v) )
>          return;
> +    if ( cpu_has_fsgsbase )
> +    {
> +        if ( v->arch.flags & TF_kernel_mode )
> +            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
> +        else
> +            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
> +    }
>      v->arch.flags ^= TF_kernel_mode;
>      asm volatile ( "swapgs" );
>      update_cr3(v);
> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -98,34 +98,52 @@ static inline int wrmsr_safe(unsigned in
>  			  : "=a" (low), "=d" (high) \
>  			  : "c" (counter))
>  
> -static inline unsigned long rdfsbase(void)
> +static inline unsigned long __rdfsbase(void)
>  {
>      unsigned long base;
>  
> -    if ( cpu_has_fsgsbase )
>  #ifdef HAVE_GAS_FSGSBASE
> -        asm volatile ( "rdfsbase %0" : "=r" (base) );
> +    asm volatile ( "rdfsbase %0" : "=r" (base) );
>  #else
> -        asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc0" : "=a" (base) );
> +    asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc0" : "=a" (base) );
>  #endif
> -    else
> -        rdmsrl(MSR_FS_BASE, base);
>  
>      return base;
>  }
>  
> -static inline unsigned long rdgsbase(void)
> +static inline unsigned long __rdgsbase(void)
>  {
>      unsigned long base;
>  
> -    if ( cpu_has_fsgsbase )
>  #ifdef HAVE_GAS_FSGSBASE
> -        asm volatile ( "rdgsbase %0" : "=r" (base) );
> +    asm volatile ( "rdgsbase %0" : "=r" (base) );
>  #else
> -        asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc8" : "=a" (base) );
> +    asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc8" : "=a" (base) );
>  #endif
> -    else
> -        rdmsrl(MSR_GS_BASE, base);
> +
> +    return base;
> +}
> +
> +static inline unsigned long rdfsbase(void)
> +{
> +    unsigned long base;
> +
> +    if ( cpu_has_fsgsbase )
> +        return __rdfsbase();
> +
> +    rdmsrl(MSR_FS_BASE, base);
> +
> +    return base;
> +}
> +
> +static inline unsigned long rdgsbase(void)
> +{
> +    unsigned long base;
> +
> +    if ( cpu_has_fsgsbase )
> +        return __rdgsbase();
> +
> +    rdmsrl(MSR_GS_BASE, base);
>  
>      return base;
>  }
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------020507090806000700020401--


--===============4569693584749241606==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4569693584749241606==--


From xen-devel-bounces@lists.xen.org Wed Feb 05 15:51:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 15:51:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4kn-00035f-07; Wed, 05 Feb 2014 15:51:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB4kl-00035X-NJ
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 15:51:08 +0000
Received: from [85.158.143.35:2264] by server-3.bemta-4.messagelabs.com id
	B0/06-11539-BED52F25; Wed, 05 Feb 2014 15:51:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391615463!3378786!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31475 invoked from network); 5 Feb 2014 15:51:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 15:51:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208,217";a="98259883"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 15:51:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 10:50:59 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WB4kd-00084Y-Je;
	Wed, 05 Feb 2014 15:50:59 +0000
Message-ID: <52F25DE3.4070306@citrix.com>
Date: Wed, 5 Feb 2014 15:50:59 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25E3F02000078001196E4@nat28.tlf.novell.com>
In-Reply-To: <52F25E3F02000078001196E4@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 1/3] x86: fix FS/GS base handling when using
 the fsgsbase feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4569693584749241606=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4569693584749241606==
Content-Type: multipart/alternative;
	boundary="------------020507090806000700020401"

--------------020507090806000700020401
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 05/02/14 14:52, Jan Beulich wrote:
> In that case, due to the respective instructions not being privileged,
> we can't rely on our in-memory data to always be correct: While the
> guest is running, it may change without us knowing about it. Therefore
> we need to
> - read the correct values from hardware during context switch out
>   (save_segments())
> - read the correct values from hardware during RDMSR emulation
> - update in-memory values during guest mode change
>   (toggle_guest_mode())
>
> For completeness/consistency, WRMSR emulation is also being switched
> to use wr[fg]sbase().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1251,6 +1251,15 @@ static void save_segments(struct vcpu *v
>      regs->fs = read_segment_register(fs);
>      regs->gs = read_segment_register(gs);
>  
> +    if ( cpu_has_fsgsbase && !is_pv_32bit_vcpu(v) )
> +    {
> +        v->arch.pv_vcpu.fs_base = __rdfsbase();
> +        if ( v->arch.flags & TF_kernel_mode )
> +            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
> +        else
> +            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
> +    }
> +
>      if ( regs->ds )
>          dirty_segment_mask |= DIRTY_DS;
>  
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2382,15 +2382,13 @@ static int emulate_privileged_op(struct 
>          case MSR_FS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            if ( wrmsr_safe(MSR_FS_BASE, msr_content) )
> -                goto fail;
> +            wrfsbase(msr_content);
>              v->arch.pv_vcpu.fs_base = msr_content;
>              break;
>          case MSR_GS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            if ( wrmsr_safe(MSR_GS_BASE, msr_content) )
> -                goto fail;
> +            wrgsbase(msr_content);
>              v->arch.pv_vcpu.gs_base_kernel = msr_content;
>              break;
>          case MSR_SHADOW_GS_BASE:
> @@ -2535,15 +2533,14 @@ static int emulate_privileged_op(struct 
>          case MSR_FS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            regs->eax = v->arch.pv_vcpu.fs_base & 0xFFFFFFFFUL;
> -            regs->edx = v->arch.pv_vcpu.fs_base >> 32;
> -            break;
> +            val = cpu_has_fsgsbase ? __rdfsbase() : v->arch.pv_vcpu.fs_base;
> +            goto rdmsr_writeback;
>          case MSR_GS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> -            regs->eax = v->arch.pv_vcpu.gs_base_kernel & 0xFFFFFFFFUL;
> -            regs->edx = v->arch.pv_vcpu.gs_base_kernel >> 32;
> -            break;
> +            val = cpu_has_fsgsbase ? __rdgsbase()
> +                                   : v->arch.pv_vcpu.gs_base_kernel;
> +            goto rdmsr_writeback;
>          case MSR_SHADOW_GS_BASE:
>              if ( is_pv_32on64_vcpu(v) )
>                  goto fail;
> --- a/xen/arch/x86/x86_64/traps.c
> +++ b/xen/arch/x86/x86_64/traps.c
> @@ -257,6 +257,13 @@ void toggle_guest_mode(struct vcpu *v)
>  {
>      if ( is_pv_32bit_vcpu(v) )
>          return;
> +    if ( cpu_has_fsgsbase )
> +    {
> +        if ( v->arch.flags & TF_kernel_mode )
> +            v->arch.pv_vcpu.gs_base_kernel = __rdgsbase();
> +        else
> +            v->arch.pv_vcpu.gs_base_user = __rdgsbase();
> +    }
>      v->arch.flags ^= TF_kernel_mode;
>      asm volatile ( "swapgs" );
>      update_cr3(v);
> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -98,34 +98,52 @@ static inline int wrmsr_safe(unsigned in
>  			  : "=a" (low), "=d" (high) \
>  			  : "c" (counter))
>  
> -static inline unsigned long rdfsbase(void)
> +static inline unsigned long __rdfsbase(void)
>  {
>      unsigned long base;
>  
> -    if ( cpu_has_fsgsbase )
>  #ifdef HAVE_GAS_FSGSBASE
> -        asm volatile ( "rdfsbase %0" : "=r" (base) );
> +    asm volatile ( "rdfsbase %0" : "=r" (base) );
>  #else
> -        asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc0" : "=a" (base) );
> +    asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc0" : "=a" (base) );
>  #endif
> -    else
> -        rdmsrl(MSR_FS_BASE, base);
>  
>      return base;
>  }
>  
> -static inline unsigned long rdgsbase(void)
> +static inline unsigned long __rdgsbase(void)
>  {
>      unsigned long base;
>  
> -    if ( cpu_has_fsgsbase )
>  #ifdef HAVE_GAS_FSGSBASE
> -        asm volatile ( "rdgsbase %0" : "=r" (base) );
> +    asm volatile ( "rdgsbase %0" : "=r" (base) );
>  #else
> -        asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc8" : "=a" (base) );
> +    asm volatile ( ".byte 0xf3, 0x48, 0x0f, 0xae, 0xc8" : "=a" (base) );
>  #endif
> -    else
> -        rdmsrl(MSR_GS_BASE, base);
> +
> +    return base;
> +}
> +
> +static inline unsigned long rdfsbase(void)
> +{
> +    unsigned long base;
> +
> +    if ( cpu_has_fsgsbase )
> +        return __rdfsbase();
> +
> +    rdmsrl(MSR_FS_BASE, base);
> +
> +    return base;
> +}
> +
> +static inline unsigned long rdgsbase(void)
> +{
> +    unsigned long base;
> +
> +    if ( cpu_has_fsgsbase )
> +        return __rdgsbase();
> +
> +    rdmsrl(MSR_GS_BASE, base);
>  
>      return base;
>  }
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------020507090806000700020401--


--===============4569693584749241606==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4569693584749241606==--


From xen-devel-bounces@lists.xen.org Wed Feb 05 16:03:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4wc-0003uV-6U; Wed, 05 Feb 2014 16:03:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WB4wb-0003uQ-85
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 16:03:21 +0000
Received: from [85.158.137.68:42201] by server-6.bemta-3.messagelabs.com id
	A2/9E-09180-8C062F25; Wed, 05 Feb 2014 16:03:20 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391616198!13620083!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19809 invoked from network); 5 Feb 2014 16:03:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:03:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="100157004"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 11:03:13 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WB4wT-0008GU-N4;
	Wed, 05 Feb 2014 16:03:13 +0000
Message-ID: <52F260AF.6070607@eu.citrix.com>
Date: Wed, 5 Feb 2014 16:02:55 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52F25D4202000078001196AB@nat28.tlf.novell.com>
	<52F25EEE02000078001196EF@nat28.tlf.novell.com>
In-Reply-To: <52F25EEE02000078001196EF@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH 0/3] guest context management adjustments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 02:55 PM, Jan Beulich wrote:
>>>> On 05.02.14 at 15:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>> 1: x86: fix FS/GS base handling when using the fsgsbase feature
>> 2: domctl: also pause domain for extended context updates
>> 3: domctl: pause vCPU for context reads
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> And I think it goes without saying that, these being bug fixes, I expect
> them to be suitable for 4.4.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:03:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4wu-0003vr-KJ; Wed, 05 Feb 2014 16:03:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB4wt-0003vh-BY
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:03:39 +0000
Received: from [193.109.254.147:8830] by server-13.bemta-14.messagelabs.com id
	1D/7B-01226-AD062F25; Wed, 05 Feb 2014 16:03:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391616216!2255030!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22322 invoked from network); 5 Feb 2014 16:03:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:03:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98266275"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	11:03:35 -0500
Message-ID: <1391616214.23098.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 16:03:34 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>, Julien
	Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0/4 v3] xen/arm: fix guest builder cache
 coherency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George, this should go into 4.4 -- there is a security aspect.

Changes in v3:
                s/cacheflush_page/sync_page_to_ram/
                    
                xc interface takes a length instead of an end
                    
                make the domctl range inclusive.
                    
                make xc interface internal -- it isn't needed from libxl
                in the current design and it is easier to expose an
                interface in the future than to hide it.
                
Changes in v2:
        Flush on page alloc and do targeted flushes at domain build time
        rather than a big flush after domain build. This adds a new call
        to common code, which is stubbed out on x86. This avoids needing
        to worry about preemptibility of the new domctl and also catches
        cases related to ballooning where things might not be flushed
        (e.g. a guest scrubs a page but doesn't clean the cache).

This has done 12000 boot loops on arm32 and 10000 on arm64.

Given the security aspect I would like to put this in 4.4.

Original blurb:

On ARM we need to take care of cache coherency for guests which we have
just built because they start with their caches disabled.

Our current strategy for dealing with this, which is to make guest
memory default to cacheable regardless of the in-guest configuration
(the HCR.DC bit), is flawed because it doesn't handle guests which
enable their MMU before enabling their caches, which at least FreeBSD
does. (NB: Setting HCR.DC while the guest MMU is enabled is
UNPREDICTABLE, hence we must disable it when the guest turns its MMU
on.)

There is also a security aspect here since the current strategy means
that a guest which enables its MMU before its caches can potentially see
unscrubbed data in RAM (because the scrubbed bytes are still held in the
cache).

As well as the new stuff this series removes the HCR.DC support and
performs two purely cosmetic renames.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:04:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4xK-0003zk-SY; Wed, 05 Feb 2014 16:04:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB4xD-0003zC-UY
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:04:00 +0000
Received: from [85.158.137.68:9142] by server-11.bemta-3.messagelabs.com id
	30/B2-04255-FE062F25; Wed, 05 Feb 2014 16:03:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391616236!13567916!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2604 invoked from network); 5 Feb 2014 16:03:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:03:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="100157362"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 11:03:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WB4x9-00064X-7s;
	Wed, 05 Feb 2014 16:03:55 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 16:03:52 +0000
Message-ID: <1391616235-22703-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 1/4] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function hasn't been solely about creating entries for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:04:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4xL-00040O-LL; Wed, 05 Feb 2014 16:04:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB4xD-0003zE-Vc
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:04:00 +0000
Received: from [85.158.139.211:54086] by server-14.bemta-5.messagelabs.com id
	AD/B3-27598-FE062F25; Wed, 05 Feb 2014 16:03:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391616236!1892776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17360 invoked from network); 5 Feb 2014 16:03:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:03:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98266512"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 11:03:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WB4x9-00064X-Bb;
	Wed, 05 Feb 2014 16:03:55 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 16:03:53 +0000
Message-ID: <1391616235-22703-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 2/4] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has uses other than during relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfns in a
+     * preemptible manner this is updated to track where to resume
+     * the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:04:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4xL-00040O-LL; Wed, 05 Feb 2014 16:04:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB4xD-0003zE-Vc
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:04:00 +0000
Received: from [85.158.139.211:54086] by server-14.bemta-5.messagelabs.com id
	AD/B3-27598-FE062F25; Wed, 05 Feb 2014 16:03:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391616236!1892776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17360 invoked from network); 5 Feb 2014 16:03:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:03:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98266512"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 11:03:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WB4x9-00064X-Bb;
	Wed, 05 Feb 2014 16:03:55 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 16:03:53 +0000
Message-ID: <1391616235-22703-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 2/4] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has other uses other than during relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfns in a
+     * preemptible manner this is updated to record where to resume
+     * the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:04:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4xM-000413-6Y; Wed, 05 Feb 2014 16:04:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB4xF-0003zU-66
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:04:01 +0000
Received: from [85.158.139.211:37048] by server-2.bemta-5.messagelabs.com id
	EE/67-23037-0F062F25; Wed, 05 Feb 2014 16:04:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391616236!1892776!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17438 invoked from network); 5 Feb 2014 16:03:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:03:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98266513"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 11:03:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WB4x9-00064X-IF;
	Wed, 05 Feb 2014 16:03:55 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 16:03:55 +0000
Message-ID: <1391616235-22703-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v2 4/4] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, which disables HCR.DC) without enabling its caches (SCTLR.C)
first or at the same time. It turns out that FreeBSD does this.

This has now been fixed (yet) another way (third time is the charm!) so remove
this support. The original commit contained some fixes which are still
relevant even with the revert of the bulk of the patch:
 - Correction to HSR_SYSREG_CRN_MASK
 - Rename of HSR_SYSCTL macros to avoid naming clash
 - Definition of some additional cp reg specifications

Since these are still useful they are not reverted.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
v2: Move to end of series
    Do not revert useful bits
---
 xen/arch/arm/domain.c        |    7 --
 xen/arch/arm/traps.c         |  158 ------------------------------------------
 xen/include/asm-arm/domain.h |    2 -
 3 files changed, 167 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..124cccf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -475,7 +469,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8f2e82..a15b59e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:04:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:04:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB4xM-00041s-RE; Wed, 05 Feb 2014 16:04:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB4xF-0003za-Vv
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:04:03 +0000
Received: from [85.158.143.35:31443] by server-3.bemta-4.messagelabs.com id
	8B/AE-11539-1F062F25; Wed, 05 Feb 2014 16:04:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391616239!3377401!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4982 invoked from network); 5 Feb 2014 16:04:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:04:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98266516"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 16:03:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 11:03:55 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WB4x9-00064X-Fb;
	Wed, 05 Feb 2014 16:03:55 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 16:03:54 +0000
Message-ID: <1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled, so we need to make sure they
see consistent data in RAM (requiring a cache clean) but also that stale data
does not suddenly appear in the caches when they enable them (requiring the
invalidate).

This can be split into two halves. First, we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time,
since this would miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the page content). We need to
clean as well as invalidate to make sure that any scrubbing which has occurred
gets committed to real RAM. To achieve this, add a new sync_page_to_ram
function, which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
--
v3:
    s/cacheflush_page/sync_page_to_ram/

    xc interface takes a length instead of an end

    make the domctl range inclusive.

    make xc interface internal -- it isn't needed from libxl in the current
    design and it is easier to expose an interface in the future than to hide
    it.

v2:
   Switch to cleaning at page allocation time + explicit flushing of the
   regions which the toolstack touches.

   Add XSM for new domctl.

   New domctl restricts the amount of space it is willing to flush, to avoid
   thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    2 ++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xc_private.h            |    3 +++
 tools/libxc/xenctrl.h               |    1 -
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |    9 +++++++++
 xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |    3 +++
 xen/xsm/flask/policy/access_vectors |    2 ++
 16 files changed, 101 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3d4d107 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
+     * enabled its caches. But let's be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
+
     return 0;
 }
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..b9d1015 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..4e8e995 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.end_pfn = start_pfn + nr_pfns;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..33ed15b 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 92271c9..a610f0c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
+
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..026239f 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -454,7 +454,6 @@ int xc_domain_create(xc_interface *xch,
                      uint32_t flags,
                      uint32_t *pdomid);
 
-
 /* Functions to produce a dump of a given domain
  *  xc_domain_dumpcore - produces a dump to a specified file
  *  xc_domain_dumpcore_via_callback - produces a dump, using a specified
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..8916e49 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = domctl->u.cacheflush.end_pfn;
+
+        if ( e < s )
+            return -EINVAL;
+
+        if ( get_order_from_pages(e-s) > MAX_ORDER )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2f48347..b33bef5 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -338,6 +338,15 @@ unsigned long domain_page_map_to_mfn(const void *va)
 }
 #endif
 
+void sync_page_to_ram(unsigned long mfn)
+{
+    void *v = map_domain_page(mfn);
+
+    flush_xen_dcache_va_range(v, PAGE_SIZE);
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..86f13e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include <asm/gic.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
+#include <asm/page.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                        break;
+
+                    sync_page_to_ram(pte.p2m.base);
+                }
+                break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..c73c717 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        sync_page_to_ram(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..67d64c9 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
+/* Flush the dcache for an entire page. */
+void sync_page_to_ram(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..abe35fb 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void sync_page_to_ram(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t
 perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..497e2a3 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush: [start_pfn, end_pfn). */
+    xen_pfn_t start_pfn, end_pfn;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..1345d7e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:28:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB5Kh-0005m5-Bp; Wed, 05 Feb 2014 16:28:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WB5Kg-0005m0-2B
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 16:28:14 +0000
Received: from [85.158.137.68:10341] by server-17.bemta-3.messagelabs.com id
	EF/16-22569-D9662F25; Wed, 05 Feb 2014 16:28:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391617689!9957604!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9643 invoked from network); 5 Feb 2014 16:28:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:28:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98278893"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 16:27:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 5 Feb 2014 11:27:12 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WB5Jg-0000Aq-5g;
	Wed, 05 Feb 2014 16:27:12 +0000
Date: Wed, 5 Feb 2014 16:27:01 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1402051626480.4373@kaball.uk.xensource.com>
References: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 5 Feb 2014, Julien Grall wrote:
> The function domain_page_map_to_mfn can be used to translate a virtual
> address mapped by either map_domain_page or map_domain_page_global.
> The latter uses vmap to map the mfn, so domain_page_map_to_mfn
> will always fail on such addresses because they are not in the DOMHEAP
> range.
> 
> Check whether the address is in the vmap range and use __pa to
> translate it.
> 
> This patch fixes guest shutdown when the event fifo is used.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: George Dunlap <george.dunlap@citrix.com>

It looks good to me.

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


> ---
>     This is a bug fix for Xen 4.4. Without this patch, it's impossible to
> use Linux 3.14 (and higher) as a guest with the event fifo driver.
> ---
>  xen/arch/arm/mm.c |   10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 127cce0..bdca68a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -325,11 +325,15 @@ void unmap_domain_page(const void *va)
>      local_irq_restore(flags);
>  }
>  
> -unsigned long domain_page_map_to_mfn(const void *va)
> +unsigned long domain_page_map_to_mfn(const void *ptr)
>  {
> +    unsigned long va = (unsigned long)ptr;
>      lpae_t *map = this_cpu(xen_dommap);
> -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> -    unsigned long offset = ((unsigned long)va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +    unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +
> +    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> +        return virt_to_mfn(va);
>  
>      ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>      ASSERT(map[slot].pt.avail != 0);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:52:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:52:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB5i0-0000I2-B3; Wed, 05 Feb 2014 16:52:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jordi.cucurull@scytl.com>) id 1WB5hy-0000Hu-BA
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 16:52:18 +0000
Received: from [193.109.254.147:17742] by server-7.bemta-14.messagelabs.com id
	4E/F5-23424-14C62F25; Wed, 05 Feb 2014 16:52:17 +0000
X-Env-Sender: jordi.cucurull@scytl.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391619136!2240652!1
X-Originating-IP: [217.111.179.100]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10721 invoked from network); 5 Feb 2014 16:52:16 -0000
Received: from mail3.scytl.com (HELO mail3.scytl.com) (217.111.179.100)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 16:52:16 -0000
Received: from [10.0.16.210] (217.111.178.66) by mail3.scytl.com (Axigen)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPSA id 399182;
	Wed, 5 Feb 2014 17:52:16 +0100
Message-ID: <52F26C40.2060901@scytl.com>
Date: Wed, 05 Feb 2014 17:52:16 +0100
From: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
	Gecko/20130912 Thunderbird/17.0.9
MIME-Version: 1.0
To: xen-devel@lists.xenproject.org
X-Enigmail-Version: 1.6
Subject: [Xen-devel] Questions about the usage of the vTPM implemented in
	Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dear all,

I have recently configured a Xen 4.3 server with the vTPM enabled and a
guest virtual machine that takes advantage of it. After playing a bit
with it, I have a few questions:

1. According to the documentation, shutting down the vTPM stubdom only
requires a normal shutdown of the guest VM; the vTPM stubdom should then
shut down automatically. Nevertheless, when I shut down the guest, the
vTPM stubdom stays active and, moreover, when I start the guest again
the vTPM still holds the values from the previous instance of the guest.
Is this normal?

2. The documentation recommends avoiding access to the physical TPM from
Dom0 at the same time as from the vTPM Manager stubdom. Nevertheless, I
currently have IMA and TrouSerS enabled in Dom0 without any apparent
issue. Why is directly accessing the physical TPM from Dom0 not
recommended?

3. If directly accessing the physical TPM from Dom0 is not recommended,
what is the advisable way to check the integrity of this domain? With
solutions such as TBOOT and Intel TXT?

Best regards,
Jordi.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 16:53:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 16:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB5ig-0000Ls-PH; Wed, 05 Feb 2014 16:53:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB5if-0000Lj-5o
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 16:53:01 +0000
Received: from [85.158.139.211:49498] by server-17.bemta-5.messagelabs.com id
	CB/3C-31975-C6C62F25; Wed, 05 Feb 2014 16:53:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391619177!1906614!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4745 invoked from network); 5 Feb 2014 16:52:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 16:52:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="100183928"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 16:52:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 11:52:57 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WB5ia-0006Jz-MI;
	Wed, 05 Feb 2014 16:52:56 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 16:52:56 +0000
Message-ID: <1391619176-29362-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] Enable armhf tests for linux-linus
	branch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rather than whitelisting interesting branches, blacklist the historical
branches which are not interesting to armhf, ensuring that any new
branches will pick up armhf automatically.

This creates the expected build-armhf, build-armhf-pvops and
test-armhf-armhf-xl jobs for the linux-linus branch.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 make-flight | 6 ++++--
 mfi-common  | 6 ++++--
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/make-flight b/make-flight
index 092a0cf..056fc7a 100755
--- a/make-flight
+++ b/make-flight
@@ -80,8 +80,10 @@ test_matrix_branch_filter_callback () {
   case "$xenarch" in
   armhf)
         case "$branch" in
-        linux-arm-xen) ;;
-        linux-*) return 1;;
+        linux-3.0) return 1;;
+        linux-3.4) return 1;;
+        linux-mingo-tip-master) return 1;;
+        linux-*) ;;
         qemu-*) return 1;;
         esac
         ;;
diff --git a/mfi-common b/mfi-common
index 60802ef..8f56092 100644
--- a/mfi-common
+++ b/mfi-common
@@ -47,8 +47,10 @@ create_build_jobs () {
     case "$arch" in
     armhf)
       case "$branch" in
-      linux-arm-xen) ;;
-      linux-*) continue;;
+      linux-3.0) continue;;
+      linux-3.4) continue;;
+      linux-mingo-tip-master) continue;;
+      linux-*) ;;
       qemu-*) continue;;
       esac
       case "$xenbranch" in
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:03:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB5t7-0001OA-G8; Wed, 05 Feb 2014 17:03:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB5t5-0001Nw-CC
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:03:47 +0000
Received: from [85.158.143.35:63742] by server-2.bemta-4.messagelabs.com id
	C9/07-10891-2FE62F25; Wed, 05 Feb 2014 17:03:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391619824!3380143!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4319 invoked from network); 5 Feb 2014 17:03:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 17:03:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98297946"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 17:03:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 12:03:42 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB5t0-0006N7-HU;
	Wed, 05 Feb 2014 17:03:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WB5t0-0008UC-9E;
	Wed, 05 Feb 2014 17:03:42 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21234.28398.70777.976829@mariner.uk.xensource.com>
Date: Wed, 5 Feb 2014 17:03:42 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1391619176-29362-1-git-send-email-ian.campbell@citrix.com>
References: <1391619176-29362-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Enable armhf tests for linux-linus
	branch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] Enable armhf tests for linux-linus branch"):
> Rather than whitelisting interesting branches, blacklist the historical
> branches which are not interesting to armhf, ensuring that any new
> branches will pick up armhf automatically.
> 
> This creates the expected build-armhf, build-armhf-pvops and
> test-armhf-armhf-xl jobs for the linux-linus branch.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Can you push this right away?  My fix to make cr-daily-branch not try
to list the thousands of Linux contributors in each patch wants some
more local testing and may not make it into the push gate today.

If my patch is ready I can rebase it onto yours.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:10:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB5yx-0001fK-Js; Wed, 05 Feb 2014 17:09:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WB5yw-0001fE-Fl
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:09:50 +0000
Received: from [85.158.139.211:51863] by server-14.bemta-5.messagelabs.com id
	3F/93-27598-D5072F25; Wed, 05 Feb 2014 17:09:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391620187!1909982!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20583 invoked from network); 5 Feb 2014 17:09:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 17:09:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,787,1384300800"; d="scan'208";a="98301222"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 17:09:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	12:09:46 -0500
Message-ID: <1391620185.23098.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 17:09:45 +0000
In-Reply-To: <21234.28398.70777.976829@mariner.uk.xensource.com>
References: <1391619176-29362-1-git-send-email-ian.campbell@citrix.com>
	<21234.28398.70777.976829@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Enable armhf tests for linux-linus
	branch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 17:03 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] Enable armhf tests for linux-linus branch"):
> > Rather than whitelisting interesting branches, instead blacklist the historical
> > branches which are not interesting to armhf, ensuring that any new branches
> > will pick up armhf automatically.
> > 
> > This creates the expected build-armhf, build-armhf-pvops and
> > test-armhf-armhf-xl jobs for the linux-linus branch.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Can you push this right away ? 

Done. Thanks.

>  My fix to make cr-daily-branch not try
> to list the thousands of Linux contributors in each patch wants some
> more local testing and may not make it into the push gate today.
> 
> If my patch is ready I can rebase it onto yours.
> 
> Thanks,
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SE-0002ym-3X; Wed, 05 Feb 2014 17:40:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SC-0002yP-Tm
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:05 +0000
Received: from [193.109.254.147:17818] by server-8.bemta-14.messagelabs.com id
	B8/BA-18529-47772F25; Wed, 05 Feb 2014 17:40:04 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391622001!2242593!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21194 invoked from network); 5 Feb 2014 17:40:03 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 17:40:03 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:42 -0700
Message-Id: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
Cc: bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0/4] libxl: fixes related to concurrency
	improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While reviving old patches to add job support to the libxl driver,
testing revealed some problems that were difficult to encounter
in the current, more serialized processing approach used in the
driver.

The first patch is a bug fix, plugging leaks of libxlDomainObjPrivate
objects.  The second patch removes the list of libxl timer registrations
maintained in the driver - a hack I was never fond of.  The third patch
moves domain shutdown handling to a thread, instead of doing all the
shutdown work in the event handler.  The fourth patch fixes an issue wrt
child process handling discussed in this thread

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01553.html

Ian Jackson's latest patches on the libxl side are here

http://lists.xen.org/archives/html/xen-devel/2014-02/msg00124.html


Jim Fehlig (4):
  libxl: fix leaking libxlDomainObjPrivate
  libxl: remove list of timer registrations from libxlDomainObjPrivate
  libxl: handle domain shutdown events in a thread
  libxl: improve subprocess handling

 src/libxl/libxl_conf.h   |   5 +-
 src/libxl/libxl_domain.c | 102 ++++++++---------------------------
 src/libxl/libxl_domain.h |   8 +--
 src/libxl/libxl_driver.c | 135 +++++++++++++++++++++++++++++++----------------
 4 files changed, 115 insertions(+), 135 deletions(-)

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SG-0002ze-QX; Wed, 05 Feb 2014 17:40:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SF-0002yp-2H
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:07 +0000
Received: from [85.158.137.68:48643] by server-15.bemta-3.messagelabs.com id
	E5/B8-19263-57772F25; Wed, 05 Feb 2014 17:40:05 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391622002!9971754!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15853 invoked from network); 5 Feb 2014 17:40:04 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 17:40:04 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:45 -0700
Message-Id: <1391621986-7341-4-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
Cc: Jim Fehlig <jfehlig@suse.com>, bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3/4] libxl: handle domain shutdown events in a
	thread
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Handling the domain shutdown event within the event handler seems
a bit unfair to libxl's event machinery.  Domain "shutdown" could
take considerable time.  E.g. if the shutdown reason is reboot,
the domain must be reaped and then started again.

Spawn a shutdown handler thread to do this work, allowing libxl's
event machinery to go about its business.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 src/libxl/libxl_driver.c | 132 ++++++++++++++++++++++++++++++++---------------
 1 file changed, 89 insertions(+), 43 deletions(-)

diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index d639011..a1c6c0f 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -352,61 +352,107 @@ libxlVmReap(libxlDriverPrivatePtr driver,
 # define VIR_LIBXL_EVENT_CONST const
 #endif
 
+struct libxlShutdownThreadInfo
+{
+    virDomainObjPtr vm;
+    libxl_event *event;
+};
+
+
 static void
-libxlEventHandler(void *data, VIR_LIBXL_EVENT_CONST libxl_event *event)
+libxlDomainShutdownThread(void *opaque)
 {
     libxlDriverPrivatePtr driver = libxl_driver;
-    libxlDomainObjPrivatePtr priv = ((virDomainObjPtr)data)->privateData;
-    virDomainObjPtr vm = NULL;
+    struct libxlShutdownThreadInfo *shutdown_info = opaque;
+    virDomainObjPtr vm = shutdown_info->vm;
+    libxlDomainObjPrivatePtr priv = vm->privateData;
+    libxl_event *ev = shutdown_info->event;
+    libxl_ctx *ctx = priv->ctx;
     virObjectEventPtr dom_event = NULL;
-    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
-
-    if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
-        virDomainShutoffReason reason;
-
-        /*
-         * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
-         * after calling libxl_domain_suspend() are handled by it's callers.
-         */
-        if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
-            goto cleanup;
+    libxl_shutdown_reason xl_reason = ev->u.domain_shutdown.shutdown_reason;
+    virDomainShutoffReason reason;
 
-        vm = virDomainObjListFindByID(driver->domains, event->domid);
-        if (!vm)
-            goto cleanup;
+    virObjectLock(vm);
 
-        switch (xl_reason) {
-            case LIBXL_SHUTDOWN_REASON_POWEROFF:
-            case LIBXL_SHUTDOWN_REASON_CRASH:
-                if (xl_reason == LIBXL_SHUTDOWN_REASON_CRASH) {
-                    dom_event = virDomainEventLifecycleNewFromObj(vm,
-                                              VIR_DOMAIN_EVENT_STOPPED,
-                                              VIR_DOMAIN_EVENT_STOPPED_CRASHED);
-                    reason = VIR_DOMAIN_SHUTOFF_CRASHED;
-                } else {
-                    reason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
-                }
-                libxlVmReap(driver, vm, reason);
-                if (!vm->persistent) {
-                    virDomainObjListRemove(driver->domains, vm);
-                    vm = NULL;
-                }
-                break;
-            case LIBXL_SHUTDOWN_REASON_REBOOT:
-                libxlVmReap(driver, vm, VIR_DOMAIN_SHUTOFF_SHUTDOWN);
-                libxlVmStart(driver, vm, 0, -1);
-                break;
-            default:
-                VIR_INFO("Unhandled shutdown_reason %d", xl_reason);
-                break;
-        }
+    switch (xl_reason) {
+        case LIBXL_SHUTDOWN_REASON_POWEROFF:
+        case LIBXL_SHUTDOWN_REASON_CRASH:
+            if (xl_reason == LIBXL_SHUTDOWN_REASON_CRASH) {
+                dom_event = virDomainEventLifecycleNewFromObj(vm,
+                                           VIR_DOMAIN_EVENT_STOPPED,
+                                           VIR_DOMAIN_EVENT_STOPPED_CRASHED);
+                reason = VIR_DOMAIN_SHUTOFF_CRASHED;
+            } else {
+                reason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
+            }
+            libxlVmReap(driver, vm, reason);
+            if (!vm->persistent) {
+                virDomainObjListRemove(driver->domains, vm);
+                vm = NULL;
+            }
+            break;
+        case LIBXL_SHUTDOWN_REASON_REBOOT:
+            libxlVmReap(driver, vm, VIR_DOMAIN_SHUTOFF_SHUTDOWN);
+            libxlVmStart(driver, vm, 0, -1);
+            break;
+        default:
+            VIR_INFO("Unhandled shutdown_reason %d", xl_reason);
+            break;
     }
 
-cleanup:
     if (vm)
         virObjectUnlock(vm);
     if (dom_event)
         libxlDomainEventQueue(driver, dom_event);
+    libxl_event_free(ctx, ev);
+    VIR_FREE(shutdown_info);
+}
+
+static void
+libxlEventHandler(void *data, VIR_LIBXL_EVENT_CONST libxl_event *event)
+{
+    virDomainObjPtr vm = data;
+    libxlDomainObjPrivatePtr priv = vm->privateData;
+    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
+    struct libxlShutdownThreadInfo *shutdown_info;
+    virThread thread;
+
+    if (event->type != LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
+        VIR_INFO("Unhandled event type %d", event->type);
+        goto cleanup;
+    }
+
+    /*
+     * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
+     * after calling libxl_domain_suspend() are handled by its callers.
+     */
+    if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
+        goto cleanup;
+
+    /*
+     * Start a thread to handle shutdown.  We don't want to be tying up
+     * libxl's event machinery by doing a potentially lengthy shutdown.
+     */
+    if (VIR_ALLOC(shutdown_info) < 0)
+        goto cleanup;
+
+    shutdown_info->vm = data;
+    shutdown_info->event = (libxl_event *)event;
+    if (virThreadCreate(&thread, true, libxlDomainShutdownThread,
+                        shutdown_info) < 0) {
+        /*
+         * Not much we can do on error here except log it.
+         */
+        VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
+        goto cleanup;
+    }
+
+    /*
+     * libxl_event freed in shutdown thread
+     */
+    return;
+
+cleanup:
     /* Cast away any const */
     libxl_event_free(priv->ctx, (libxl_event *)event);
 }
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SG-0002ze-QX; Wed, 05 Feb 2014 17:40:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SF-0002yp-2H
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:07 +0000
Received: from [85.158.137.68:48643] by server-15.bemta-3.messagelabs.com id
	E5/B8-19263-57772F25; Wed, 05 Feb 2014 17:40:05 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391622002!9971754!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15853 invoked from network); 5 Feb 2014 17:40:04 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 17:40:04 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:45 -0700
Message-Id: <1391621986-7341-4-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
Cc: Jim Fehlig <jfehlig@suse.com>, bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3/4] libxl: handle domain shutdown events in a
	thread
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Handling the domain shutdown event within the event handler seems
a bit unfair to libxl's event machinery.  Domain "shutdown" could
take considerable time.  E.g. if the shutdown reason is reboot,
the domain must be reaped and then started again.

Spawn a shutdown handler thread to do this work, allowing libxl's
event machinery to go about its business.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 src/libxl/libxl_driver.c | 132 ++++++++++++++++++++++++++++++++---------------
 1 file changed, 89 insertions(+), 43 deletions(-)

diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index d639011..a1c6c0f 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -352,61 +352,107 @@ libxlVmReap(libxlDriverPrivatePtr driver,
 # define VIR_LIBXL_EVENT_CONST const
 #endif
 
+struct libxlShutdownThreadInfo
+{
+    virDomainObjPtr vm;
+    libxl_event *event;
+};
+
+
 static void
-libxlEventHandler(void *data, VIR_LIBXL_EVENT_CONST libxl_event *event)
+libxlDomainShutdownThread(void *opaque)
 {
     libxlDriverPrivatePtr driver = libxl_driver;
-    libxlDomainObjPrivatePtr priv = ((virDomainObjPtr)data)->privateData;
-    virDomainObjPtr vm = NULL;
+    struct libxlShutdownThreadInfo *shutdown_info = opaque;
+    virDomainObjPtr vm = shutdown_info->vm;
+    libxlDomainObjPrivatePtr priv = vm->privateData;
+    libxl_event *ev = shutdown_info->event;
+    libxl_ctx *ctx = priv->ctx;
     virObjectEventPtr dom_event = NULL;
-    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
-
-    if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
-        virDomainShutoffReason reason;
-
-        /*
-         * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
-         * after calling libxl_domain_suspend() are handled by it's callers.
-         */
-        if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
-            goto cleanup;
+    libxl_shutdown_reason xl_reason = ev->u.domain_shutdown.shutdown_reason;
+    virDomainShutoffReason reason;
 
-        vm = virDomainObjListFindByID(driver->domains, event->domid);
-        if (!vm)
-            goto cleanup;
+    virObjectLock(vm);
 
-        switch (xl_reason) {
-            case LIBXL_SHUTDOWN_REASON_POWEROFF:
-            case LIBXL_SHUTDOWN_REASON_CRASH:
-                if (xl_reason == LIBXL_SHUTDOWN_REASON_CRASH) {
-                    dom_event = virDomainEventLifecycleNewFromObj(vm,
-                                              VIR_DOMAIN_EVENT_STOPPED,
-                                              VIR_DOMAIN_EVENT_STOPPED_CRASHED);
-                    reason = VIR_DOMAIN_SHUTOFF_CRASHED;
-                } else {
-                    reason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
-                }
-                libxlVmReap(driver, vm, reason);
-                if (!vm->persistent) {
-                    virDomainObjListRemove(driver->domains, vm);
-                    vm = NULL;
-                }
-                break;
-            case LIBXL_SHUTDOWN_REASON_REBOOT:
-                libxlVmReap(driver, vm, VIR_DOMAIN_SHUTOFF_SHUTDOWN);
-                libxlVmStart(driver, vm, 0, -1);
-                break;
-            default:
-                VIR_INFO("Unhandled shutdown_reason %d", xl_reason);
-                break;
-        }
+    switch (xl_reason) {
+        case LIBXL_SHUTDOWN_REASON_POWEROFF:
+        case LIBXL_SHUTDOWN_REASON_CRASH:
+            if (xl_reason == LIBXL_SHUTDOWN_REASON_CRASH) {
+                dom_event = virDomainEventLifecycleNewFromObj(vm,
+                                           VIR_DOMAIN_EVENT_STOPPED,
+                                           VIR_DOMAIN_EVENT_STOPPED_CRASHED);
+                reason = VIR_DOMAIN_SHUTOFF_CRASHED;
+            } else {
+                reason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
+            }
+            libxlVmReap(driver, vm, reason);
+            if (!vm->persistent) {
+                virDomainObjListRemove(driver->domains, vm);
+                vm = NULL;
+            }
+            break;
+        case LIBXL_SHUTDOWN_REASON_REBOOT:
+            libxlVmReap(driver, vm, VIR_DOMAIN_SHUTOFF_SHUTDOWN);
+            libxlVmStart(driver, vm, 0, -1);
+            break;
+        default:
+            VIR_INFO("Unhandled shutdown_reason %d", xl_reason);
+            break;
     }
 
-cleanup:
     if (vm)
         virObjectUnlock(vm);
     if (dom_event)
         libxlDomainEventQueue(driver, dom_event);
+    libxl_event_free(ctx, ev);
+    VIR_FREE(shutdown_info);
+}
+
+static void
+libxlEventHandler(void *data, VIR_LIBXL_EVENT_CONST libxl_event *event)
+{
+    virDomainObjPtr vm = data;
+    libxlDomainObjPrivatePtr priv = vm->privateData;
+    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
+    struct libxlShutdownThreadInfo *shutdown_info;
+    virThread thread;
+
+    if (event->type != LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
+        VIR_INFO("Unhandled event type %d", event->type);
+        goto cleanup;
+    }
+
+    /*
+     * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
+     * after calling libxl_domain_suspend() are handled by its callers.
+     */
+    if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
+        goto cleanup;
+
+    /*
+     * Start a thread to handle shutdown.  We don't want to be tying up
+     * libxl's event machinery by doing a potentially lengthy shutdown.
+     */
+    if (VIR_ALLOC(shutdown_info) < 0)
+        goto cleanup;
+
+    shutdown_info->vm = data;
+    shutdown_info->event = (libxl_event *)event;
+    if (virThreadCreate(&thread, true, libxlDomainShutdownThread,
+                        shutdown_info) < 0) {
+        /*
+         * Not much we can do on error here except log it.
+         */
+        VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
+        goto cleanup;
+    }
+
+    /*
+     * libxl_event freed in shutdown thread
+     */
+    return;
+
+cleanup:
     /* Cast away any const */
     libxl_event_free(priv->ctx, (libxl_event *)event);
 }
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SJ-00030F-Tg; Wed, 05 Feb 2014 17:40:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SI-0002zc-55
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:10 +0000
Received: from [85.158.143.35:9997] by server-3.bemta-4.messagelabs.com id
	1D/44-11539-87772F25; Wed, 05 Feb 2014 17:40:08 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391622005!3412759!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19682 invoked from network); 5 Feb 2014 17:40:07 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 17:40:07 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:46 -0700
Message-Id: <1391621986-7341-5-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
Cc: Jim Fehlig <jfehlig@suse.com>, bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 4/4] libxl: improve subprocess handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If available, let libxl handle reaping any children it creates by
specifying libxl_sigchld_owner_libxl_always_selective_reap.  This
feature was added to improve subprocess handling in libxl when it is
used by an application, such as libvirt, that does not install its
own SIGCHLD handler:

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01555.html

Prior to this patch, it is possible to hit asserts in libxl when
reaping subprocesses, particularly during simultaneous operations
on multiple domains.  With this patch, and the corresponding changes
to libxl, I no longer see the asserts.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---

The libxl patch has not yet hit xen.git, but without it this patch
has no semantic change, only explicitly setting chldowner to the
default of libxl_sigchld_owner_libxl.

 src/libxl/libxl_domain.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index fbd6cab..eb2e50e 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -358,6 +358,14 @@ virDomainDefParserConfig libxlDomainDefParserConfig = {
     .devicesPostParseCallback = libxlDomainDeviceDefPostParse,
 };
 
+static const libxl_childproc_hooks libxl_child_hooks = {
+#ifdef LIBXL_HAVE_SIGCHLD_OWNER_SELECTIVE_REAP
+    .chldowner = libxl_sigchld_owner_libxl_always_selective_reap,
+#else
+    .chldowner = libxl_sigchld_owner_libxl,
+#endif
+};
+
 int
 libxlDomainObjPrivateInitCtx(virDomainObjPtr vm)
 {
@@ -395,6 +403,7 @@ libxlDomainObjPrivateInitCtx(virDomainObjPtr vm)
     }
 
     libxl_osevent_register_hooks(priv->ctx, &libxl_event_callbacks, priv);
+    libxl_childproc_setmode(priv->ctx, &libxl_child_hooks, priv);
 
     ret = 0;
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SE-0002ym-3X; Wed, 05 Feb 2014 17:40:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SC-0002yP-Tm
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:05 +0000
Received: from [193.109.254.147:17818] by server-8.bemta-14.messagelabs.com id
	B8/BA-18529-47772F25; Wed, 05 Feb 2014 17:40:04 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391622001!2242593!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21194 invoked from network); 5 Feb 2014 17:40:03 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 17:40:03 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:42 -0700
Message-Id: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
Cc: bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0/4] libxl: fixes related to concurrency
	improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

While reviving old patches to add job support to the libxl driver,
testing revealed some problems that were difficult to encounter
in the current, more serialized processing approach used in the
driver.

The first patch is a bug fix, plugging leaks of libxlDomainObjPrivate
objects.  The second patch removes the list of libxl timer registrations
maintained in the driver - a hack I was never fond of.  The third patch
moves domain shutdown handling to a thread, instead of doing all the
shutdown work in the event handler.  The fourth patch fixes an issue wrt
child process handling discussed in this thread

http://lists.xen.org/archives/html/xen-devel/2014-01/msg01553.html

Ian Jackson's latest patches on the libxl side are here

http://lists.xen.org/archives/html/xen-devel/2014-02/msg00124.html


Jim Fehlig (4):
  libxl: fix leaking libxlDomainObjPrivate
  libxl: remove list of timer registrations from libxlDomainObjPrivate
  libxl: handle domain shutdown events in a thread
  libxl: improve subprocess handling

 src/libxl/libxl_conf.h   |   5 +-
 src/libxl/libxl_domain.c | 102 ++++++++---------------------------
 src/libxl/libxl_domain.h |   8 +--
 src/libxl/libxl_driver.c | 135 +++++++++++++++++++++++++++++++----------------
 4 files changed, 115 insertions(+), 135 deletions(-)

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SF-0002yz-1S; Wed, 05 Feb 2014 17:40:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SC-0002yQ-V0
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:05 +0000
Received: from [85.158.143.35:26040] by server-2.bemta-4.messagelabs.com id
	6E/04-10891-47772F25; Wed, 05 Feb 2014 17:40:04 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391622001!3417965!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25617 invoked from network); 5 Feb 2014 17:40:03 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 17:40:03 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:43 -0700
Message-Id: <1391621986-7341-2-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
Cc: Jim Fehlig <jfehlig@suse.com>, bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1/4] libxl: fix leaking libxlDomainObjPrivate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When libxl registers an FD with the libxl driver, the refcnt of the
associated libxlDomainObjPrivate object is incremented. The refcnt
is decremented when libxl deregisters the FD.  But some FDs are only
deregistered when their libxl ctx is freed, which unfortunately is
done in the libxlDomainObjPrivate dispose function.  With references
held by the FDs, libxlDomainObjPrivate is never disposed.

I added the ref/unref in FD registration/deregistration when adding
the same in timer registration/deregistration.  For timers, this
is a simple approach to ensuring the libxlDomainObjPrivate is not
disposed prior to their expiration, which libxl guarantees will
occur.  It is not needed for FDs, and only causes
libxlDomainObjPrivate to leak.

This patch removes the reference on libxlDomainObjPrivate for FD
registrations, but retains them for timer registrations.  Tested on
the latest releases of Xen supported by the libxl driver:  4.2.3,
4.3.1, and 4.4.0 RC3.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 src/libxl/libxl_domain.c | 21 ++++++++++-----------
 1 file changed, 10 insertions(+), 11 deletions(-)

diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index e72c483..7efc13b 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -1,7 +1,7 @@
 /*
  * libxl_domain.c: libxl domain object private state
  *
- * Copyright (C) 2011-2013 SUSE LINUX Products GmbH, Nuernberg, Germany.
+ * Copyright (C) 2011-2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
@@ -92,7 +92,13 @@ libxlDomainObjPrivateOnceInit(void)
 VIR_ONCE_GLOBAL_INIT(libxlDomainObjPrivate)
 
 static void
-libxlDomainObjEventHookInfoFree(void *obj)
+libxlDomainObjFDEventHookInfoFree(void *obj)
+{
+    VIR_FREE(obj);
+}
+
+static void
+libxlDomainObjTimerEventHookInfoFree(void *obj)
 {
     libxlEventHookInfoPtr info = obj;
 
@@ -138,12 +144,6 @@ libxlDomainObjFDRegisterEventHook(void *priv,
         return -1;
 
     info->priv = priv;
-    /*
-     * Take a reference on the domain object.  Reference is dropped in
-     * libxlDomainObjEventHookInfoFree, ensuring the domain object outlives
-     * the fd event objects.
-     */
-    virObjectRef(info->priv);
     info->xl_priv = xl_priv;
 
     if (events & POLLIN)
@@ -152,9 +152,8 @@ libxlDomainObjFDRegisterEventHook(void *priv,
         vir_events |= VIR_EVENT_HANDLE_WRITABLE;
 
     info->id = virEventAddHandle(fd, vir_events, libxlDomainObjFDEventCallback,
-                                 info, libxlDomainObjEventHookInfoFree);
+                                 info, libxlDomainObjFDEventHookInfoFree);
     if (info->id < 0) {
-        virObjectUnref(info->priv);
         VIR_FREE(info);
         return -1;
     }
@@ -259,7 +258,7 @@ libxlDomainObjTimeoutRegisterEventHook(void *priv,
         timeout = res.tv_sec * 1000 + (res.tv_usec + 999) / 1000;
     }
     info->id = virEventAddTimeout(timeout, libxlDomainObjTimerCallback,
-                                  info, libxlDomainObjEventHookInfoFree);
+                                  info, libxlDomainObjTimerEventHookInfoFree);
     if (info->id < 0) {
         virObjectUnref(info->priv);
         VIR_FREE(info);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SF-0002zF-T0; Wed, 05 Feb 2014 17:40:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SE-0002yl-3A
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:06 +0000
Received: from [85.158.139.211:9442] by server-1.bemta-5.messagelabs.com id
	BB/AE-12859-57772F25; Wed, 05 Feb 2014 17:40:05 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391622001!1915590!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 344 invoked from network); 5 Feb 2014 17:40:03 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 17:40:03 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:44 -0700
Message-Id: <1391621986-7341-3-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
Cc: Jim Fehlig <jfehlig@suse.com>, bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2/4] libxl: remove list of timer registrations
	from libxlDomainObjPrivate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Due to some misunderstanding of requirements libxl places on timer
handling, I introduced the half-brained idea of maintaining a list
of timeouts that the driver could force to expire before freeing a
libxlDomainObjPrivate (and hence libxl_ctx).  But testing all
the latest versions of Xen supported by the libxl driver (4.2.3,
4.3.1, 4.4.0 RC3), I see that libxl will handle this just fine and
there is no need to force expiration behind libxl's back.  Indeed it
may be harmful to do so.

This patch removes the timer list, allowing libxl to handle cleanup
of its timer registrations.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
From xen-devel-bounces@lists.xen.org Wed Feb 05 17:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 17:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6SF-0002zF-T0; Wed, 05 Feb 2014 17:40:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WB6SE-0002yl-3A
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 17:40:06 +0000
Received: from [85.158.139.211:9442] by server-1.bemta-5.messagelabs.com id
	BB/AE-12859-57772F25; Wed, 05 Feb 2014 17:40:05 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391622001!1915590!1
X-Originating-IP: [137.65.250.26]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 344 invoked from network); 5 Feb 2014 17:40:03 -0000
Received: from victor.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.26)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 5 Feb 2014 17:40:03 -0000
Received: from jfehlig2.provo.novell.com (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (NOT encrypted);
	Wed, 05 Feb 2014 10:39:51 -0700
From: Jim Fehlig <jfehlig@suse.com>
To: libvir-list@redhat.com
Date: Wed,  5 Feb 2014 10:39:44 -0700
Message-Id: <1391621986-7341-3-git-send-email-jfehlig@suse.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
Cc: Jim Fehlig <jfehlig@suse.com>, bjzhang@suse.com, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2/4] libxl: remove list of timer registrations
	from libxlDomainObjPrivate
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Due to a misunderstanding of the requirements libxl places on timer
handling, I introduced the half-brained idea of maintaining a list
of timeouts that the driver could force to expire before freeing a
libxlDomainObjPrivate (and hence its libxl_ctx).  But after testing
all the latest versions of Xen supported by the libxl driver (4.2.3,
4.3.1, 4.4.0 RC3), I see that libxl handles this just fine and there
is no need to force expiration behind libxl's back.  Indeed, it may
be harmful to do so.

This patch removes the timer list, allowing libxl to handle cleanup
of its timer registrations.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---
 src/libxl/libxl_conf.h   |  5 +---
 src/libxl/libxl_domain.c | 72 +++---------------------------------------------
 src/libxl/libxl_domain.h |  8 +-----
 src/libxl/libxl_driver.c |  3 +-
 4 files changed, 7 insertions(+), 81 deletions(-)

diff --git a/src/libxl/libxl_conf.h b/src/libxl/libxl_conf.h
index f743541..ca7bc7d 100644
--- a/src/libxl/libxl_conf.h
+++ b/src/libxl/libxl_conf.h
@@ -1,7 +1,7 @@
 /*
  * libxl_conf.h: libxl configuration management
  *
- * Copyright (C) 2011-2013 SUSE LINUX Products GmbH, Nuernberg, Germany.
+ * Copyright (C) 2011-2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
  * Copyright (C) 2011 Univention GmbH.
  *
  * This library is free software; you can redistribute it and/or
@@ -115,9 +115,6 @@ struct _libxlDriverPrivate {
     virSysinfoDefPtr hostsysinfo;
 };
 
-typedef struct _libxlEventHookInfo libxlEventHookInfo;
-typedef libxlEventHookInfo *libxlEventHookInfoPtr;
-
 # define LIBXL_SAVE_MAGIC "libvirt-xml\n \0 \r"
 # define LIBXL_SAVE_VERSION 1
 
diff --git a/src/libxl/libxl_domain.c b/src/libxl/libxl_domain.c
index 7efc13b..fbd6cab 100644
--- a/src/libxl/libxl_domain.c
+++ b/src/libxl/libxl_domain.c
@@ -34,37 +34,9 @@
 #define VIR_FROM_THIS VIR_FROM_LIBXL
 
 
-/* Append an event registration to the list of registrations */
-#define LIBXL_EV_REG_APPEND(head, add)                 \
-    do {                                               \
-        libxlEventHookInfoPtr temp;                    \
-        if (head) {                                    \
-            temp = head;                               \
-            while (temp->next)                         \
-                temp = temp->next;                     \
-            temp->next = add;                          \
-        } else {                                       \
-            head = add;                                \
-        }                                              \
-    } while (0)
-
-/* Remove an event registration from the list of registrations */
-#define LIBXL_EV_REG_REMOVE(head, del)                 \
-    do {                                               \
-        libxlEventHookInfoPtr temp;                    \
-        if (head == del) {                             \
-            head = head->next;                         \
-        } else {                                       \
-            temp = head;                               \
-            while (temp->next && temp->next != del)    \
-                temp = temp->next;                     \
-            if (temp->next) {                          \
-                temp->next = del->next;                \
-            }                                          \
-        }                                              \
-    } while (0)
-
 /* Object used to store info related to libxl event registrations */
+typedef struct _libxlEventHookInfo libxlEventHookInfo;
+typedef libxlEventHookInfo *libxlEventHookInfoPtr;
 struct _libxlEventHookInfo {
     libxlEventHookInfoPtr next;
     libxlDomainObjPrivatePtr priv;
@@ -214,12 +186,7 @@ libxlDomainObjTimerCallback(int timer ATTRIBUTE_UNUSED, void *timer_info)
     virObjectUnlock(p);
     libxl_osevent_occurred_timeout(p->ctx, info->xl_priv);
     virObjectLock(p);
-    /*
-     * Timeout could have been freed while the lock was dropped.
-     * Only remove it from the list if it still exists.
-     */
-    if (virEventRemoveTimeout(info->id) == 0)
-        LIBXL_EV_REG_REMOVE(p->timerRegistrations, info);
+    virEventRemoveTimeout(info->id);
     virObjectUnlock(p);
 }
 
@@ -265,9 +232,6 @@ libxlDomainObjTimeoutRegisterEventHook(void *priv,
         return -1;
     }
 
-    virObjectLock(info->priv);
-    LIBXL_EV_REG_APPEND(info->priv->timerRegistrations, info);
-    virObjectUnlock(info->priv);
     *hndp = info;
 
     return 0;
@@ -306,12 +270,7 @@ libxlDomainObjTimeoutDeregisterEventHook(void *priv ATTRIBUTE_UNUSED,
     libxlDomainObjPrivatePtr p = info->priv;
 
     virObjectLock(p);
-    /*
-     * Only remove the timeout from the list if removal from the
-     * event loop is successful.
-     */
-    if (virEventRemoveTimeout(info->id) == 0)
-        LIBXL_EV_REG_REMOVE(p->timerRegistrations, info);
+    virEventRemoveTimeout(info->id);
     virObjectUnlock(p);
 }
 
@@ -443,26 +402,3 @@ cleanup:
     VIR_FREE(log_file);
     return ret;
 }
-
-void
-libxlDomainObjRegisteredTimeoutsCleanup(libxlDomainObjPrivatePtr priv)
-{
-    libxlEventHookInfoPtr info;
-
-    virObjectLock(priv);
-    info = priv->timerRegistrations;
-    while (info) {
-        /*
-         * libxl expects the event to be deregistered when calling
-         * libxl_osevent_occurred_timeout, but we dont want the event info
-         * destroyed.  Disable the timeout and only remove it after returning
-         * from libxl.
-         */
-        virEventUpdateTimeout(info->id, -1);
-        libxl_osevent_occurred_timeout(priv->ctx, info->xl_priv);
-        virEventRemoveTimeout(info->id);
-        info = info->next;
-    }
-    priv->timerRegistrations = NULL;
-    virObjectUnlock(priv);
-}
diff --git a/src/libxl/libxl_domain.h b/src/libxl/libxl_domain.h
index e4695ef..8565820 100644
--- a/src/libxl/libxl_domain.h
+++ b/src/libxl/libxl_domain.h
@@ -1,7 +1,7 @@
 /*
  * libxl_domain.h: libxl domain object private state
  *
- * Copyright (C) 2011-2013 SUSE LINUX Products GmbH, Nuernberg, Germany.
+ * Copyright (C) 2011-2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Lesser General Public
@@ -43,9 +43,6 @@ struct _libxlDomainObjPrivate {
     /* console */
     virChrdevsPtr devs;
     libxl_evgen_domain_death *deathW;
-
-    /* list of libxl timeout registrations */
-    libxlEventHookInfoPtr timerRegistrations;
 };
 
 
@@ -56,7 +53,4 @@ extern virDomainDefParserConfig libxlDomainDefParserConfig;
 int
 libxlDomainObjPrivateInitCtx(virDomainObjPtr vm);
 
-void
-libxlDomainObjRegisteredTimeoutsCleanup(libxlDomainObjPrivatePtr priv);
-
 #endif /* LIBXL_DOMAIN_H */
diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
index cb3deec..d639011 100644
--- a/src/libxl/libxl_driver.c
+++ b/src/libxl/libxl_driver.c
@@ -2,7 +2,7 @@
  * libxl_driver.c: core driver methods for managing libxenlight domains
  *
  * Copyright (C) 2006-2014 Red Hat, Inc.
- * Copyright (C) 2011-2013 SUSE LINUX Products GmbH, Nuernberg, Germany.
+ * Copyright (C) 2011-2014 SUSE LINUX Products GmbH, Nuernberg, Germany.
  * Copyright (C) 2011 Univention GmbH.
  *
  * This library is free software; you can redistribute it and/or
@@ -313,7 +313,6 @@ libxlVmCleanup(libxlDriverPrivatePtr driver,
         vm->newDef = NULL;
     }
 
-    libxlDomainObjRegisteredTimeoutsCleanup(priv);
     virObjectUnref(cfg);
 }
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 18:04:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 18:04:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB6pK-0004Tr-0p; Wed, 05 Feb 2014 18:03:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WB6pI-0004Tm-Ii
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 18:03:56 +0000
Received: from [85.158.139.211:53606] by server-3.bemta-5.messagelabs.com id
	BC/F1-13671-B0D72F25; Wed, 05 Feb 2014 18:03:55 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391623435!1916478!1
X-Originating-IP: [74.125.82.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11131 invoked from network); 5 Feb 2014 18:03:55 -0000
Received: from mail-wg0-f41.google.com (HELO mail-wg0-f41.google.com)
	(74.125.82.41)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 18:03:55 -0000
Received: by mail-wg0-f41.google.com with SMTP id n12so5660962wgh.2
	for <xen-devel@lists.xenproject.org>;
	Wed, 05 Feb 2014 10:03:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=SmGkLny/KZ3Pu35iMAMumZTORxgrVEv+Ir3Nwuv6vfo=;
	b=pGdj9K1FkoBND8ieKyE9xOZtNmXoHQy3P1Nb329ig7jAxZfe8Ee23fxv+DYG8aI00a
	guS2bMjOXPD6/2+vNInFXm1tl3lk7rNz3iWs1/I7BvG+wB57QU3pvGb1lHoKVboV1V9a
	UyQRC1qGH1/cRxM8MGUPOJHqa05lcBTC3UfUe/7hlihXUjW7piPX2RwPgJgLp90v0p7A
	3g8I3MbouKCTWuawfzJanf8hTS1Jh6Ho0rguGzR3QSF//6nDXq41GtV7aoehEWhsiwmZ
	+p7KTMeOK1NtSwBlZvQK82HzujUWQqkBtyvHdV5uszVdD2eSDYgfLY53TjuIaiEcjgZE
	6zQA==
MIME-Version: 1.0
X-Received: by 10.180.105.41 with SMTP id gj9mr18008928wib.28.1391623434792;
	Wed, 05 Feb 2014 10:03:54 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Wed, 5 Feb 2014 10:03:54 -0800 (PST)
In-Reply-To: <20140203125350.023b5436@mantra.us.oracle.com>
References: <20140203200458.GA11413@phenom.dumpdata.com>
	<20140203125350.023b5436@mantra.us.oracle.com>
Date: Wed, 5 Feb 2014 18:03:54 +0000
X-Google-Sender-Auth: 0lQX949yvcPT6gR0uzqgPdWXfmg
Message-ID: <CAFLBxZaZ_ykk17+xrG3yTYwCg34dLcUy7vohnVVU+RGTA5HOMA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Xen PVH and 'xl save' bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 3, 2014 at 8:53 PM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> On Mon, 3 Feb 2014 15:04:58 -0500
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>
> I wouldn't call it a "bug". It's an unimplemented sub-feature... saying
> bug could be misleading to some people who'd then expect it to work
> under certain circumstances IMO :). Also, it may create impression PVH
> is buggy :)...

Yes, save/restore is a special case of migration (or vice versa),
which is not yet supported.

I suppose a seatbelt stopping xl from attempting a migration might be
helpful, though it's not necessarily critical at this point.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 18:41:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 18:41:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB7PT-0005Ut-89; Wed, 05 Feb 2014 18:41:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WB7PR-0005Uo-G6
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 18:41:17 +0000
Received: from [85.158.143.35:23227] by server-3.bemta-4.messagelabs.com id
	27/EE-11539-CC582F25; Wed, 05 Feb 2014 18:41:16 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391625674!3429311!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3583 invoked from network); 5 Feb 2014 18:41:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 18:41:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98340037"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 18:41:11 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	13:41:11 -0500
Message-ID: <52F285C5.907@citrix.com>
Date: Wed, 5 Feb 2014 18:41:09 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <xen-devel@lists.xen.org>, Stefano
	Stabellini <stefano.stabellini@eu.citrix.com>
References: <52F15B93.4000000@citrix.com> <52F17947.3040606@linaro.org>
In-Reply-To: <52F17947.3040606@linaro.org>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: upstream kernel compile fails with
	defconfig
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 23:35, Julien Grall wrote:
> (Add Ian Campbell for sunxi bits)
>
> Hello Zoltan,
>
> On 04/02/14 21:28, Zoltan Kiss wrote:
>> However both cases the build fails quite early with the same error:
>>
>>     CC      arch/arm/kernel/asm-offsets.s
>> In file included from /local/repo/linux-net-next/include/linux/cache.h:5:0,
>>                    from /local/repo/linux-net-next/include/linux/printk.h:8,
>>                    from
>> /local/repo/linux-net-next/include/linux/kernel.h:13,
>>                    from /local/repo/linux-net-next/include/linux/sched.h:15,
>>                    from
>> /local/repo/linux-net-next/arch/arm/kernel/asm-offsets.c:13:
>> /local/repo/linux-net-next/include/linux/prefetch.h: In function
>> 'prefetch_range':
>> /local/repo/linux-net-next/arch/arm/include/asm/cache.h:7:25: error:
>> 'CONFIG_ARM_L1_CACHE_SHIFT' undeclared (first use in this function)
>>    #define L1_CACHE_SHIFT  CONFIG_ARM_L1_CACHE_SHIFT
>
> I can't get the same error with net-next + your V7. I have the following error:
>
> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> drivers/xen/grant-table.c:1047:3: error: implicit declaration of function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
>     mfn = get_phys_to_machine(page_to_pfn(pages[i]));
>     ^
>
> I'm using this config for Linux: http://xenbits.xen.org/people/julieng/config-midway

OK, I've fixed that, and will run some tests before sending v8. I managed to
fix the compile by checking out the mainline repo. Either net-next has
some issue, or my checked-out repo has some nasty failure (more likely :)

Thanks,

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 18:41:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 18:41:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB7PT-0005Ut-89; Wed, 05 Feb 2014 18:41:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WB7PR-0005Uo-G6
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 18:41:17 +0000
Received: from [85.158.143.35:23227] by server-3.bemta-4.messagelabs.com id
	27/EE-11539-CC582F25; Wed, 05 Feb 2014 18:41:16 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391625674!3429311!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3583 invoked from network); 5 Feb 2014 18:41:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 18:41:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98340037"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 18:41:11 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	13:41:11 -0500
Message-ID: <52F285C5.907@citrix.com>
Date: Wed, 5 Feb 2014 18:41:09 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <xen-devel@lists.xen.org>, Stefano
	Stabellini <stefano.stabellini@eu.citrix.com>
References: <52F15B93.4000000@citrix.com> <52F17947.3040606@linaro.org>
In-Reply-To: <52F17947.3040606@linaro.org>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen on ARM: upstream kernel compile fails with
	defconfig
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 23:35, Julien Grall wrote:
> (Add Ian Campbell for sunxi bits)
>
> Hello Zoltan,
>
> On 04/02/14 21:28, Zoltan Kiss wrote:
>> However, in both cases the build fails quite early with the same error:
>>
>>     CC      arch/arm/kernel/asm-offsets.s
>> In file included from /local/repo/linux-net-next/include/linux/cache.h:5:0,
>>                    from /local/repo/linux-net-next/include/linux/printk.h:8,
>>                    from /local/repo/linux-net-next/include/linux/kernel.h:13,
>>                    from /local/repo/linux-net-next/include/linux/sched.h:15,
>>                    from /local/repo/linux-net-next/arch/arm/kernel/asm-offsets.c:13:
>> /local/repo/linux-net-next/include/linux/prefetch.h: In function
>> 'prefetch_range':
>> /local/repo/linux-net-next/arch/arm/include/asm/cache.h:7:25: error:
>> 'CONFIG_ARM_L1_CACHE_SHIFT' undeclared (first use in this function)
>>    #define L1_CACHE_SHIFT  CONFIG_ARM_L1_CACHE_SHIFT
>
> I can't get the same error with net-next + your V7. I have the following error:
>
> drivers/xen/grant-table.c: In function '__gnttab_unmap_refs':
> drivers/xen/grant-table.c:1047:3: error: implicit declaration of function 'get_phys_to_machine' [-Werror=implicit-function-declaration]
>     mfn = get_phys_to_machine(page_to_pfn(pages[i]));
>     ^
>
> I'm using this config for Linux: http://xenbits.xen.org/people/julieng/config-midway

OK, I've fixed that, and will run some tests before sending v8. I managed to 
fix the compile by checking out the mainline repo. Either net-next has 
some issue, or my checked-out repo has some nasty failure (more likely :)

Thanks,

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 18:56:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 18:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB7dz-0005wI-F2; Wed, 05 Feb 2014 18:56:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB7dx-0005wD-7V
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 18:56:17 +0000
Received: from [85.158.143.35:43397] by server-1.bemta-4.messagelabs.com id
	91/31-31661-05982F25; Wed, 05 Feb 2014 18:56:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391626573!3425741!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3101 invoked from network); 5 Feb 2014 18:56:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 18:56:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98346184"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 18:55:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 13:55:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WB7dT-0006vi-KD;
	Wed, 05 Feb 2014 18:55:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WB7dT-0005I5-B7;
	Wed, 05 Feb 2014 18:55:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24734-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 18:55:47 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3634413117047852419=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3634413117047852419==
Content-Type: text/plain

flight 24734 linux-arm-xen real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24734/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass

version targeted for testing:
 linux                518e624ddfaef545408c19c30fff31bc64d6b346
baseline version:
 linux                d264bde089ceea20640d6d4472a0dcade9d2e199

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Aaro Koskinen <aaro.koskinen@iki.fi>
  Aaron Brown <aaron.f.brown@intel.com>
  Abhilash Kesavan <a.kesavan@samsung.com>
  Alan Cox <alan@linux.intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Mezin <mezin.alexander@gmail.com>
  Alexander van Heukelum <heukelum@fastmail.fm>
  Anatolij Gustschin <agust@denx.de>
  Andre Przywara <andre.przywara@linaro.org>
  Andreas Reis <andreas.reis@gmail.com>
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Bresticker <abrestic@chromium.org>
  Andrew Jones <drjones@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@amacapital.net>
  Anton Blanchard <anton@samba.org>
  Anton Vorontsov <anton@enomsg.org>
  Antonio Quartulli <antonio@meshcoding.com>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Ariel Elior <ariele@broadcom.com>
  Arron Wang <arron.wang@intel.com>
  Austin Boyle <boyle.austin@gmail.com>
  Ben Dooks <ben.dooks@codethink.co.uk>
  Ben Myers <bpm@sgi.com>
  Ben Skeggs <bskeggs@redhat.com>
  Ben Widawsky <ben@bwidawsk.net>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Betty Dall <betty.dall@hp.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Brian W Hart <hartb@linux.vnet.ibm.com>
  Bruce Allan <bruce.w.allan@intel.com>
  Bryan Wu <cooloney@gmail.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Christian Engelmayer <cengelma@gmx.at>
  Christian König <christian.koenig@amd.com>
  Christoph Paasch <christoph.paasch@uclouvain.be>
  Cong Wang <xiyou.wangcong@gmail.com>
  Curt Brune <curt@cumulusnetworks.com>
  Cédric Le Goater <clg@fr.ibm.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dcbw@redhat.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  Dave Ertman <davidx.m.ertman@intel.com>
  Dave Kleikamp <dave.kleikamp@oracle.com>
  David Ertman <davidx.m.ertman@intel.com>
  David Gibson <david@gibson.dropbear.id.au>
  David S. Miller <davem@davemloft.net>
  Dimitris Michailidis <dm@chelsio.com>
  Ding Tianhong <dingtianhong@huawei.com>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Dmitry Kravkov <dmitry@broadcom.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Don Skidmore <donald.c.skidmore@intel.com>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Eric Dumazet <edumazet@google.com>
  Eric Whitney <enwlinux@gmail.com>
  Erik Hugne <erik.hugne@ericsson.com>
  Fabio Estevam <fabio.estevam@freescale.com>
  Fan Du <fan.du@windriver.com>
  Felix Fietkau <nbd@openwrt.org>
  Flavio Leitner <fbl@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Gao feng <gaofeng@cn.fujitsu.com>
  Gavin Shan <shangw@linux.vnet.ibm.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Gerhard Sittig <gsi@denx.de>
  Gerrit Renker <gerrit@erg.abdn.ac.uk>
  Grant Likely <grant.likely@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregor Beck <gbeck@sernet.de>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Padovan <gustavo.padovan@collabora.co.uk>
  H. Nikolaus Schaller <hns@goldelico.com>
  H. Peter Anvin <hpa@linux.intel.com>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Hariprasad Shenai <hariprasad@chelsio.com>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Heiko Stuebner <heiko@sntech.de>
  Helge Deller <deller@gmx.de>
  Helmut Schaa <helmut.schaa@googlemail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Huacai Chen <chenhc@lemote.com>
  Hugh Dickins <hughd@google.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  Ivan Vecera <ivecera@redhat.com>
  Jamal Hadi Salim <jhs@mojatatu.com>
  James Hogan <james.hogan@imgtec.com>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jani Nikula <jani.nikula@intel.com>
  Jason Baron <jbaron@akamai.com>
  Jason Wang <jasowang@redhat.com>
  Javier Lopez <jlopex@cozybit.com>
  Jay Vosburgh <fubar@us.ibm.com>
  Jean Delvare <khali@linux-fr.org>
  Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Jeff Layton <jlayton@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jesse Barnes <jbarnes@virtuousgeek.org>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jie Liu <jeff.liu@oracle.com>
  Jiri Pirko <jiri@resnulli.us>
  Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
  Johan Hedberg <johan.hedberg@intel.com>
  Johannes Berg <johannes.berg@intel.com>
  John Crispin <blogic@openwrt.org>
  John David Anglin <dave.anglin@bell.net>
  John Fastabend <john.r.fastabend@intel.com.com>
  John Fastabend <john.r.fastabend@intel.com>
  John Stultz <john.stultz@linaro.org>
  John W. Linville <linville@tuxdriver.com>
  Jon Maloy <jon.maloy@ericsson.com>
  Jonghwa Lee <jonghwa3.lee@samsung.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Julian Anastasov <ja@ssi.bg>
  Kelly Doran <kel.p.doran@gmail.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Krzysztof Hałasa <khalasa@piap.pl>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Kumar Sanghvi <kumaras@chelsio.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Laura Abbott <lauraa@codeaurora.org>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Leigh Brown <leigh@solinno.co.uk>
  Li RongQing <roy.qing.li@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu, Chuansheng <chuansheng.liu@intel.com>
  Manish Chopra <manish.chopra@qlogic.com>
  Marcel Holtmann <marcel@holtmann.org>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marco Piazza <mpiazza@gmail.com>
  Marek Lindner <mareklindner@neomailbox.ch>
  Marek Olšák <marek.olsak@amd.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Matteo Facchinetti <matteo.facchinetti@sirius-es.it>
  Mel Gorman <mgorman@suse.de>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <mikey@neuling.org>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.cz>
  Michal Kalderon <michals@broadcom.com>
  Michal Schmidt <mschmidt@redhat.com>
  Michal Simek <michal.simek@xilinx.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Turquette <mturquette@linaro.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milo Kim <milo.kim@ti.com>
  Ming Lei <ming.lei@canonical.com>
  Ming Lei <tom.leiming@gmail.com>
  Mugunthan V N <mugunthanvnm@ti.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Neal Cardwell <ncardwell@google.com>
  Neil Horman <nhorman@tuxdriver.com>
  NeilBrown <neilb@suse.de>
  Nicolas Schichan <nschichan@freebox.fr>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Octavian Purdila <octavian.purdila@intel.com>
  Oleg Nesterov <oleg@redhat.com>
  Olof Johansson <olof@lixom.net>
  Oren Givon <oren.givon@intel.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <paul.durrant@citrix.com>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paulo Zanoni <paulo.r.zanoni@intel.com>
  Pekka Enberg <penberg@kernel.org>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Phil Schmitt <phillip.j.schmitt@intel.com>
  Qais Yousef <qais.yousef@imgtec.com>
  Qingshuai Tian <qingshuai.tian@intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rajesh B Prathipati <rprathip@linux.vnet.ibm.com>
  Richard Cochran <richardcochran@gmail.com>
  Richard Weinberger <richard@nod.at>
  Rik van Riel <riel@redhat.com>
  Rob Herring <rob.herring@calxeda.com>
  Rob Herring <robh@kernel.org>
  Robert Richter <rric@kernel.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Sachin Kamat <sachin.kamat@linaro.org>
  Sachin Prabhu <sprabhu@redhat.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Sathya Perla <sathya.perla@emulex.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Sebastian Ott <sebott@linux.vnet.ibm.com>
  Serge E. Hallyn <serge.hallyn@ubuntu.com>
  Serge Hallyn <serge.hallyn@canonical.com>
  Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Shahed Shaikh <shahed.shaikh@qlogic.com>
  Shirish Pargaonkar <spargaonkar@suse.com>
  Shuah Khan <shuah.kh@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Simon Wunderlich <sw@simonwunderlich.de>
  Soren Brinkmann <soren.brinkmann@xilinx.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steve Capper <steve.capper@linaro.org>
  Steve French <smfrench@gmail.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suresh Reddy <suresh.reddy@emulex.com>
  Taras Kondratiuk <taras.kondratiuk@linaro.org>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Tony Lindgren <tony@atomide.com>
  Toshi Kani <toshi.kani@hp.com>
  Ujjal Roy <royujjal@gmail.com>
  Vasundhara Volam <vasundhara.volam@emulex.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vince Bridgers <vbridgers2013@gmail.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vivek Goyal <vgoyal@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Weidong <wangweidong1@huawei.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Wei-Chun Chao <weichunc@plumgrid.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Will Deacon <will.deacon@arm.com>
  Wolfram Sang <wsa@the-dreams.de>
  Yaniv Rosner <yanivr@broadcom.com>
  Yasushi Asano <yasushi.asano@jp.fujitsu.com>
  Yijing Wang <wangyijing@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yuval Mintz <yuvalmin@broadcom.com>
------------------------------------------------------------

jobs:
 build-armhf                                                  pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-arm-xen
+ revision=518e624ddfaef545408c19c30fff31bc64d6b346
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-arm-xen 518e624ddfaef545408c19c30fff31bc64d6b346
+ branch=linux-arm-xen
+ revision=518e624ddfaef545408c19c30fff31bc64d6b346
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-arm-xen
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-arm-xen
++ : daily-cron.linux-arm-xen
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-arm-xen
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-arm-xen
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
+ : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
+ : linux-arm-xen
+ : linux-arm-xen
+ : linux-arm-xen
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-arm-xen
+ : tested/linux-arm-xen
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 518e624ddfaef545408c19c30fff31bc64d6b346:tested/linux-arm-xen
Counting objects: 3644, done.
Compressing objects: 100% (1281/1281), done.
Writing objects: 100% (2309/2309), 520.91 KiB, done.
Total 2309 (delta 1872), reused 1348 (delta 1027)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   d264bde..518e624  518e624ddfaef545408c19c30fff31bc64d6b346 -> tested/linux-arm-xen
+ exit 0


--===============3634413117047852419==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3634413117047852419==--

From xen-devel-bounces@lists.xen.org Wed Feb 05 18:56:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 18:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB7dz-0005wI-F2; Wed, 05 Feb 2014 18:56:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WB7dx-0005wD-7V
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 18:56:17 +0000
Received: from [85.158.143.35:43397] by server-1.bemta-4.messagelabs.com id
	91/31-31661-05982F25; Wed, 05 Feb 2014 18:56:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391626573!3425741!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3101 invoked from network); 5 Feb 2014 18:56:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 18:56:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98346184"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 18:55:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 13:55:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WB7dT-0006vi-KD;
	Wed, 05 Feb 2014 18:55:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WB7dT-0005I5-B7;
	Wed, 05 Feb 2014 18:55:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24734-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 18:55:47 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3634413117047852419=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3634413117047852419==
Content-Type: text/plain

flight 24734 linux-arm-xen real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24734/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass

version targeted for testing:
 linux                518e624ddfaef545408c19c30fff31bc64d6b346
baseline version:
 linux                d264bde089ceea20640d6d4472a0dcade9d2e199

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Aaro Koskinen <aaro.koskinen@iki.fi>
  Aaron Brown <aaron.f.brown@intel.com>
  Abhilash Kesavan <a.kesavan@samsung.com>
  Alan Cox <alan@linux.intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Mezin <mezin.alexander@gmail.com>
  Alexander van Heukelum <heukelum@fastmail.fm>
  Anatolij Gustschin <agust@denx.de>
  Andre Przywara <andre.przywara@linaro.org>
  Andreas Reis <andreas.reis@gmail.com>
  Andreas Rohner <andreas.rohner@gmx.net>
  Andrew Bresticker <abrestic@chromium.org>
  Andrew Jones <drjones@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Lutomirski <luto@amacapital.net>
  Anton Blanchard <anton@samba.org>
  Anton Vorontsov <anton@enomsg.org>
  Antonio Quartulli <antonio@meshcoding.com>
  Ard Biesheuvel <ard.biesheuvel@linaro.org>
  Ariel Elior <ariele@broadcom.com>
  Arron Wang <arron.wang@intel.com>
  Austin Boyle <boyle.austin@gmail.com>
  Ben Dooks <ben.dooks@codethink.co.uk>
  Ben Myers <bpm@sgi.com>
  Ben Skeggs <bskeggs@redhat.com>
  Ben Widawsky <ben@bwidawsk.net>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Betty Dall <betty.dall@hp.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Brian W Hart <hartb@linux.vnet.ibm.com>
  Bruce Allan <bruce.w.allan@intel.com>
  Bryan Wu <cooloney@gmail.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Christian Engelmayer <cengelma@gmx.at>
  Christian König <christian.koenig@amd.com>
  Christoph Paasch <christoph.paasch@uclouvain.be>
  Cong Wang <xiyou.wangcong@gmail.com>
  Curt Brune <curt@cumulusnetworks.com>
  Cédric Le Goater <clg@fr.ibm.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Williams <dcbw@redhat.com>
  Daniel Borkmann <dborkman@redhat.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  Dave Ertman <davidx.m.ertman@intel.com>
  Dave Kleikamp <dave.kleikamp@oracle.com>
  David Ertman <davidx.m.ertman@intel.com>
  David Gibson <david@gibson.dropbear.id.au>
  David S. Miller <davem@davemloft.net>
  Dimitris Michailidis <dm@chelsio.com>
  Ding Tianhong <dingtianhong@huawei.com>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Dmitry Kravkov <dmitry@broadcom.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Don Skidmore <donald.c.skidmore@intel.com>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Eric Dumazet <edumazet@google.com>
  Eric Whitney <enwlinux@gmail.com>
  Erik Hugne <erik.hugne@ericsson.com>
  Fabio Estevam <fabio.estevam@freescale.com>
  Fan Du <fan.du@windriver.com>
  Felix Fietkau <nbd@openwrt.org>
  Flavio Leitner <fbl@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@freescale.com>
  Gao feng <gaofeng@cn.fujitsu.com>
  Gavin Shan <shangw@linux.vnet.ibm.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Gerhard Sittig <gsi@denx.de>
  Gerrit Renker <gerrit@erg.abdn.ac.uk>
  Grant Likely <grant.likely@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregor Beck <gbeck@sernet.de>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Padovan <gustavo.padovan@collabora.co.uk>
  H. Nikolaus Schaller <hns@goldelico.com>
  H. Peter Anvin <hpa@linux.intel.com>
  H. Peter Anvin <hpa@zytor.com>
  Haiyang Zhang <haiyangz@microsoft.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Hariprasad Shenai <hariprasad@chelsio.com>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Heiko Stuebner <heiko@sntech.de>
  Helge Deller <deller@gmx.de>
  Helmut Schaa <helmut.schaa@googlemail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Huacai Chen <chenhc@lemote.com>
  Hugh Dickins <hughd@google.com>
  Ilia Mirkin <imirkin@alum.mit.edu>
  Ingo Molnar <mingo@kernel.org>
  Ivan Vecera <ivecera@redhat.com>
  Jamal Hadi Salim <jhs@mojatatu.com>
  James Hogan <james.hogan@imgtec.com>
  Jan Kara <jack@suse.cz>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jani Nikula <jani.nikula@intel.com>
  Jason Baron <jbaron@akamai.com>
  Jason Wang <jasowang@redhat.com>
  Javier Lopez <jlopex@cozybit.com>
  Jay Vosburgh <fubar@us.ibm.com>
  Jean Delvare <khali@linux-fr.org>
  Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Jeff Layton <jlayton@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jesse Barnes <jbarnes@virtuousgeek.org>
  Jiang Liu <jiang.liu@linux.intel.com>
  Jie Liu <jeff.liu@oracle.com>
  Jiri Pirko <jiri@resnulli.us>
  Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
  Johan Hedberg <johan.hedberg@intel.com>
  Johannes Berg <johannes.berg@intel.com>
  John Crispin <blogic@openwrt.org>
  John David Anglin <dave.anglin@bell.net>
  John Fastabend <john.r.fastabend@intel.com.com>
  John Fastabend <john.r.fastabend@intel.com>
  John Stultz <john.stultz@linaro.org>
  John W. Linville <linville@tuxdriver.com>
  Jon Maloy <jon.maloy@ericsson.com>
  Jonghwa Lee <jonghwa3.lee@samsung.com>
  Josh Boyer <jwboyer@fedoraproject.org>
  Julian Anastasov <ja@ssi.bg>
  Kelly Doran <kel.p.doran@gmail.com>
  Kirill Tkhai <tkhai@yandex.ru>
  Krzysztof Hałasa <khalasa@piap.pl>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Kumar Sanghvi <kumaras@chelsio.com>
  Lan Tianyu <tianyu.lan@intel.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Laura Abbott <lauraa@codeaurora.org>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Leigh Brown <leigh@solinno.co.uk>
  Li RongQing <roy.qing.li@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu, Chuansheng <chuansheng.liu@intel.com>
  Manish Chopra <manish.chopra@qlogic.com>
  Marcel Holtmann <marcel@holtmann.org>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marco Piazza <mpiazza@gmail.com>
  Marek Lindner <mareklindner@neomailbox.ch>
  Marek Olšák <marek.olsak@amd.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Mathy Vanhoef <vanhoefm@gmail.com>
  Matteo Facchinetti <matteo.facchinetti@sirius-es.it>
  Mel Gorman <mgorman@suse.de>
  Michael Chan <mchan@broadcom.com>
  Michael Neuling <mikey@neuling.org>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.cz>
  Michal Kalderon <michals@broadcom.com>
  Michal Schmidt <mschmidt@redhat.com>
  Michal Simek <michal.simek@xilinx.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Turquette <mturquette@linaro.org>
  Mikulas Patocka <mpatocka@redhat.com>
  Milo Kim <milo.kim@ti.com>
  Ming Lei <ming.lei@canonical.com>
  Ming Lei <tom.leiming@gmail.com>
  Mugunthan V N <mugunthanvnm@ti.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Neal Cardwell <ncardwell@google.com>
  Neil Horman <nhorman@tuxdriver.com>
  NeilBrown <neilb@suse.de>
  Nicolas Schichan <nschichan@freebox.fr>
  Nithin Nayak Sujir <nsujir@broadcom.com>
  Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
  Octavian Purdila <octavian.purdila@intel.com>
  Oleg Nesterov <oleg@redhat.com>
  Olof Johansson <olof@lixom.net>
  Oren Givon <oren.givon@intel.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <paul.durrant@citrix.com>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paulo Zanoni <paulo.r.zanoni@intel.com>
  Pekka Enberg <penberg@kernel.org>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Zijlstra <peterz@infradead.org>
  Phil Schmitt <phillip.j.schmitt@intel.com>
  Qais Yousef <qais.yousef@imgtec.com>
  Qingshuai Tian <qingshuai.tian@intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <zajec5@gmail.com>
  Rajesh B Prathipati <rprathip@linux.vnet.ibm.com>
  Richard Cochran <richardcochran@gmail.com>
  Richard Weinberger <richard@nod.at>
  Rik van Riel <riel@redhat.com>
  Rob Herring <rob.herring@calxeda.com>
  Rob Herring <robh@kernel.org>
  Robert Richter <rric@kernel.org>
  Russell King <rmk+kernel@arm.linux.org.uk>
  Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
  Sachin Kamat <sachin.kamat@linaro.org>
  Sachin Prabhu <sprabhu@redhat.com>
  Salva Peiró <speiro@ai2.upv.es>
  Samuel Ortiz <sameo@linux.intel.com>
  Santosh Shilimkar <santosh.shilimkar@ti.com>
  Sasha Levin <sasha.levin@oracle.com>
  Sathya Perla <sathya.perla@emulex.com>
  Scott Feldman <sfeldma@cumulusnetworks.com>
  Sebastian Ott <sebott@linux.vnet.ibm.com>
  Serge E. Hallyn <serge.hallyn@ubuntu.com>
  Serge Hallyn <serge.hallyn@canonical.com>
  Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
  Seung-Woo Kim <sw0312.kim@samsung.com>
  Shahed Shaikh <shahed.shaikh@qlogic.com>
  Shirish Pargaonkar <spargaonkar@suse.com>
  Shuah Khan <shuah.kh@samsung.com>
  Simon Guinot <sguinot@lacie.com>
  Simon Horman <horms+renesas@verge.net.au>
  Simon Horman <horms@verge.net.au>
  Simon Wunderlich <sw@simonwunderlich.de>
  Soren Brinkmann <soren.brinkmann@xilinx.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Stephen Boyd <sboyd@codeaurora.org>
  Stephen Warren <swarren@nvidia.com>
  Steve Capper <steve.capper@linaro.org>
  Steve French <smfrench@gmail.com>
  Steven Rostedt <rostedt@goodmis.org>
  Steven Whitehouse <swhiteho@redhat.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Sujith Manoharan <c_manoha@qca.qualcomm.com>
  Suresh Reddy <suresh.reddy@emulex.com>
  Taras Kondratiuk <taras.kondratiuk@linaro.org>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
  Thomas Gleixner <tglx@linutronix.de>
  Timo Teräs <timo.teras@iki.fi>
  Tomasz Figa <t.figa@samsung.com>
  Tony Lindgren <tony@atomide.com>
  Toshi Kani <toshi.kani@hp.com>
  Ujjal Roy <royujjal@gmail.com>
  Vasundhara Volam <vasundhara.volam@emulex.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vince Bridgers <vbridgers2013@gmail.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vivek Goyal <vgoyal@redhat.com>
  Vlad Yasevich <vyasevich@gmail.com>
  Vladimir Davydov <vdavydov@parallels.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Weidong <wangweidong1@huawei.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Wei-Chun Chao <weichunc@plumgrid.com>
  Wenliang Fan <fanwlexca@gmail.com>
  Will Deacon <will.deacon@arm.com>
  Wolfram Sang <wsa@the-dreams.de>
  Yaniv Rosner <yanivr@broadcom.com>
  Yasushi Asano <yasushi.asano@jp.fujitsu.com>
  Yijing Wang <wangyijing@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yuval Mintz <yuvalmin@broadcom.com>
------------------------------------------------------------

jobs:
 build-armhf                                                  pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-arm-xen
+ revision=518e624ddfaef545408c19c30fff31bc64d6b346
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-arm-xen 518e624ddfaef545408c19c30fff31bc64d6b346
+ branch=linux-arm-xen
+ revision=518e624ddfaef545408c19c30fff31bc64d6b346
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-arm-xen
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-arm-xen
++ : daily-cron.linux-arm-xen
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-arm-xen
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-arm-xen
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
+ : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
+ : linux-arm-xen
+ : linux-arm-xen
+ : linux-arm-xen
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-arm-xen
+ : tested/linux-arm-xen
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 518e624ddfaef545408c19c30fff31bc64d6b346:tested/linux-arm-xen
Counting objects: 3644, done.
Compressing objects: 100% (1281/1281), done.
Writing objects: 100% (2309/2309), 520.91 KiB, done.
Total 2309 (delta 1872), reused 1348 (delta 1027)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   d264bde..518e624  518e624ddfaef545408c19c30fff31bc64d6b346 -> tested/linux-arm-xen
+ exit 0


--===============3634413117047852419==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3634413117047852419==--

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:08:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:08:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB8le-0007kY-DA; Wed, 05 Feb 2014 20:08:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WB8ld-0007kT-Jf
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:08:17 +0000
Received: from [85.158.143.35:6089] by server-1.bemta-4.messagelabs.com id
	5A/35-31661-03A92F25; Wed, 05 Feb 2014 20:08:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391630894!3446490!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9880 invoked from network); 5 Feb 2014 20:08:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 20:08:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s15K7Aok027571
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Feb 2014 20:07:11 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15K7A93000812
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 5 Feb 2014 20:07:10 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s15K791q024374; Wed, 5 Feb 2014 20:07:09 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 05 Feb 2014 12:07:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 435EF1C0972; Wed,  5 Feb 2014 15:07:08 -0500 (EST)
Date: Wed, 5 Feb 2014 15:07:08 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140205200708.GA9278@phenom.dumpdata.com>
References: <20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
	<20140124215652.GA18710@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="vkogqOf2sHV7VnPd"
Content-Disposition: inline
In-Reply-To: <20140124215652.GA18710@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4 Was:Re:
 [PATCH] x86/msi: Validate the guest-identified PCI devices in
 pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--vkogqOf2sHV7VnPd
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

> And sure enough if I boot Xen without 'pci=assign-busses' it works just
> fine.
> 
> Ugh.
> 
> I wonder how Xen 4.3 would actually do the PCI passthrough - it booted with
> the 'assign-busses' - but I hadn't tried to do PCI passthrough of the
> PF device (the I210).

It did not work very well, especially with PCI devices.

> 
> If I do pass in '05:00.0' (new bus number) I wonder if it will use the IOMMU context
> with whatever '05:00.0' was _before_ the bus re-assignment, aka:
> 
> 05:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01) (prog-if 01 [Subtractive decode])
>         Flags: bus master, fast devsel, latency 0
>         Bus: primary=05, secondary=06, subordinate=07, sec-latency=32
>         Memory behind bridge: f1500000-f16fffff
>         Capabilities: [60] Subsystem: Super Micro Computer Inc Device 0805
>         Capabilities: [a0] Power Management version 3
> 
> Which I think would confuse Xen, as this is clearly labeled as a bridge,
> not a PCI device.
> 
> 
> The reason for me using 'pci=assign-busses' is that it looks to be
> the only option to use SR-IOV.

.. on this particular motherboard.

> 
> Which I suppose makes sense as it tries to create VFs right after its own bus id:
> 
> 
>   +-01.1-[02-03]--+-[0000:03]-+-10.0  Intel Corporation 82576 Virtual Function
>            |               |           +-10.1  Intel Corporation 82576 Virtual Function
>            |               |           +-10.2  Intel Corporation 82576 Virtual Function
>            |               |           +-10.3  Intel Corporation 82576 Virtual Function
>            |               |           +-10.4  Intel Corporation 82576 Virtual Function
>            |               |           +-10.5  Intel Corporation 82576 Virtual Function
>            |               |           +-10.6  Intel Corporation 82576 Virtual Function
>            |               |           +-10.7  Intel Corporation 82576 Virtual Function
>            |               |           +-11.0  Intel Corporation 82576 Virtual Function
>            |               |           +-11.1  Intel Corporation 82576 Virtual Function
>            |               |           +-11.2  Intel Corporation 82576 Virtual Function
>            |               |           +-11.3  Intel Corporation 82576 Virtual Function
>            |               |           +-11.4  Intel Corporation 82576 Virtual Function
>            |               |           \-11.5  Intel Corporation 82576 Virtual Function
>            |               \-[0000:02]-+-00.0  Intel Corporation 82576 Gigabit Network Connection
>            |                           \-00.1  Intel Corporation 82576 Gigabit Network Connection
> 
> 
> But why does it have to have the bus _right_ after its own? Can't it
> use one at the end of its bus-space? The bus right after it is occupied
> by another card (if I boot without 'pci=assign-busses').

Because you need to program the bridge to accept the bus requests for the
PF and VF bus numbers, hence the need to program the bridge to span more
bus numbers.
> 
> I do recall using this particular SR-IOV card on a different hardware
> a year ago or so. And it did work. I think that might be because
> there were no PCI cards _after_ the SR-IOV card.

That was because it was a motherboard with an SR-IOV-aware BIOS, and a
server one, while this is more geared towards .. budget servers?

Anyhow, what I discovered was that the attached patch does allow me to
boot with Xen. It is not pretty.

But I was thinking of fixing this in the hypervisor and realized there are
three ways of fixing it:

 1). Do the hypercall to delete/add devices and let the initial domain figure
     this out (which the attached Linux patch does).

 2). Be more aware of the bus2bridge topology when removing a PCI bridge or
     device. I had one bug where we ended up with this bus2bridge structure:

      6 -> 06:00.0
      7 -> 06:00.0
      8 -> 07:01.0

Which meant that for devices on buses 8, 7 and 6 we would never find the
upstream bridge. The reason is that 6 -> 06 points to itself, so
find_upstream_bridge ends up looping around 255 times and returns -1.
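A minimal standalone model of that failure mode (illustrative names, not
the Xen code) shows why a self-referential entry can never terminate
normally:

```c
#include <string.h>

/* Toy model of pseg->bus2bridge: each entry maps a secondary bus number
 * to the (bus, devfn) of the bridge that owns it.  map == 0 means no
 * bridge, i.e. a root bus. */
struct b2b { int map; unsigned char bus; unsigned char devfn; };
static struct b2b bus2bridge[256];

/* Walk upstream until we reach a bus with no bridge entry.  A stale
 * self-referential entry (bus N's bridge claimed to live on bus N)
 * never makes progress, so give up after 255 hops and return -1,
 * matching the behaviour described above. */
static int find_upstream_bridge(unsigned char *bus, unsigned char *devfn)
{
    int hops = 0;

    while ( bus2bridge[*bus].map )
    {
        if ( hops++ >= 255 )
            return -1;
        *devfn = bus2bridge[*bus].devfn;
        *bus = bus2bridge[*bus].bus;
    }
    return hops ? 1 : 0;
}
```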

Oddly enough the 06:00.0 device does get removed and then added (via
the hypercalls); the reason bus2bridge ends up with stale information
is that free_pdev copies the data but does not invalidate it.

I am not entirely sure I understand why we do that. In 'free_pdev' we do
this:

	for ( ; sec_bus <= sub_bus; sec_bus++ )
		pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];

and then:
 list_del(&pdev->alldevs_list);
 xfree(pdev->msix);
 xfree(pdev);

so if the device that is being deleted is the bridge - we point the
secondary and subordinate buses at the deleted device. But if the
deleted device is the upstream bridge, we end up leaving a stale
bus2bridge context.

That is OK normally, as 'alloc_pdev' would overwrite it (if the secondary
and subordinate did not change). But in the 'assign-busses' case they do
change, so we are left with a stale one.

This means that when the same device is added back (but with a new bus
value) we end up fixing the secondary-to-subordinate buses to point to
us (06). But '06', which used to be a secondary bus, still retains the
old value.

One way to fix this is to detect and correct it:

            spin_lock(&pseg->bus2bridge_lock);
            for ( ; sec_bus <= sub_bus; sec_bus++ )
                pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
            /* Check for infinite recursion where bus2bridge would point to
             * itself, aka:
             *    6 -> 06:00.0
             *    7 -> 06:00.0
             * and we are removing 06:00.0, but may have 07:00.0 devices.
             * We invalidate the 6 as the upstream bridge is effectively
             * removed. We cannot remove the 07 as the 06:00.0 might be
             * added right back in. */
            if ( pseg->bus2bridge[pdev->bus].map )
            {
                u8 bus, devfn;

                bus = pdev->bus;
                if ( __find_upstream_bridge( pseg->nr, &bus, &devfn, &sec_bus, 1 ) < 0 )
                {
                    /* Recursion detected! Invalidate ourselves. */
                    printk("%04x:%02x:%02x.%u recursed, clearing it",
                           pseg->nr, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
                    if ( pdev->bus != bus || devfn != pdev->devfn )
                        printk(" %02x:%02x supplied", pdev->bus, pdev->devfn);
                    printk("\n");
                    pseg->bus2bridge[pdev->bus].map = 0;
                }
            }
            spin_unlock(&pseg->bus2bridge_lock);

But I am not sure that is the right way of doing it. Anyhow, there was
another bad assumption through which 'assign-busses' crippled Xen
(see the second attachment).

 3). Trap on PCI_SECONDARY_BUS and PCI_SUBORDINATE_BUS writes and
     fix up the structures.

     I haven't attempted that, but it could also be done. That way Xen
     is aware of those changes and can update its PCI structures.
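For what option 3) might look like, here is a hypothetical sketch (names
and structure are mine, not Xen's): intercept writes to the bridge's
bus-number registers and mirror them into a shadow copy, so the topology
bookkeeping follows dom0's renumbering instead of going stale:

```c
#include <stdint.h>

#define PCI_SECONDARY_BUS    0x19  /* standard type-1 header offsets */
#define PCI_SUBORDINATE_BUS  0x1a

/* Per-bridge shadow of the bus window (illustrative structure). */
struct bridge_shadow { uint8_t sec; uint8_t sub; };

/* Called on a trapped 1-byte config write to a bridge; returns 1 if the
 * write touched a bus-number register (so the caller should also refresh
 * its bus2bridge entries), 0 otherwise. */
static int bridge_cfg_write(struct bridge_shadow *b,
                            unsigned int reg, uint8_t val)
{
    switch ( reg )
    {
    case PCI_SECONDARY_BUS:
        b->sec = val;
        return 1;
    case PCI_SUBORDINATE_BUS:
        b->sub = val;
        return 1;
    }
    return 0;   /* not a bus-number register */
}
```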

Thoughts?

--vkogqOf2sHV7VnPd
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-DRAFT-xen-pci-Re-add-all-PCI-devices-if-pci-assign-b.patch"

>From bee45c2613b1f827e2610d7f8d06989f3cd76907 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 5 Feb 2014 14:26:56 -0500
Subject: [PATCH] DRAFT xen/pci: Re-add all PCI devices if pci=assign-busses
 is used.

That parameter wreaks havoc with the Xen hypervisor. Its internal
structures end up being confused such that the 'upstream bridge'
information is lost.

As such, this patch re-programs the Xen hypervisor's PCI devices.
It does it in three steps:

 1). Before 'acpi_init' (which parses the ACPI DSDT for PCI devices),
     in register_xen_pci_notifier, we collect the BDFs of all of the
     active PCI devices.

 2). When 'acpi_init' has finished and has reprogrammed the bus
     numbers, we intersect the list of all of the PCI devices that
     Linux knows about with the list we created in step 1). The result
     is an array of BDFs which are orphaned - meaning they are no
     longer present on the machine - but which the Xen hypervisor is
     still holding on to, because Linux has not made the
     'xen_remove_device' call on them. The reason for that is
     explained later in this description[*1]. With the list of
     orphaned PCI devices and the ones we have added, we make
     the hypercalls to remove all the orphaned ones and all the
     ones that were added.
     At this stage Xen has no knowledge of any PCI devices.
 3). We add all of the PCI devices that Linux knows about.
     This way the view from Linux and Xen is synced when it comes
     to the PCI devices.

[*1]. Linux separates the PCI devices from PCI bridges in two
structures. That means that PCI devices know their slot and function
number, while the bus structure keeps track of the bus number. This
separation allows Linux to expand the bridge to span more bus
numbers, and the changes are only updated in the PCI bus structures.
The PCI devices are oblivious to this. Also, the notifier call chain
is only executed when a PCI device is added - and since this is
during early bootup, the notifier is not used to 'delete' the devices
that might have existed with the old bus numbers, because Linux
hasn't gotten around to enumerating them yet.
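The step 2) intersection can be sketched in isolation (illustrative
helpers standing in for the kernel's bitmap API, not the patch's actual
code): one bit per 16-bit BDF, set during the early scan and cleared for
every device Linux still knows about afterwards, so the bits left set are
the orphans:

```c
#include <stdint.h>

/* One bit per 16-bit BDF (bus in 15:8, devfn in 7:0). */
#define BDF_BITS (1u << 16)
static unsigned long bdf_map[BDF_BITS / (8 * sizeof(unsigned long))];

#define BDF_WORD(bdf) ((bdf) / (8 * sizeof(unsigned long)))
#define BDF_MASK(bdf) (1ul << ((bdf) % (8 * sizeof(unsigned long))))

/* Mark a BDF seen by the early (pre-renumbering) config-space scan. */
static void bdf_set(uint16_t bdf)   { bdf_map[BDF_WORD(bdf)] |=  BDF_MASK(bdf); }

/* Clear a BDF that Linux still knows about after renumbering. */
static void bdf_clear(uint16_t bdf) { bdf_map[BDF_WORD(bdf)] &= ~BDF_MASK(bdf); }

/* Any bit still set afterwards is an orphan Xen must be told to drop. */
static int bdf_test(uint16_t bdf)
{
    return !!(bdf_map[BDF_WORD(bdf)] & BDF_MASK(bdf));
}
```

The draft patch below does the same thing with set_bit/clear_bit/test_bit
over a kcalloc'd 64K-bit map.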

With this patch, pci=assign-busses works with the Xen hypervisor.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 drivers/xen/pci.c |  118 ++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 117 insertions(+), 1 deletions(-)

diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
index dd9c249..178de97 100644
--- a/drivers/xen/pci.c
+++ b/drivers/xen/pci.c
@@ -186,12 +186,104 @@ static struct notifier_block device_nb = {
 	.notifier_call = xen_pci_notifier,
 };
 
+#include <linux/device.h>
+static void __init walk_bus(struct pci_bus *bus, int (*fnc)(struct device *dev))
+{
+	struct pci_dev *dev;
+	struct pci_bus *child;
+
+	list_for_each_entry(dev, &bus->devices, bus_list) {
+		if (dev->subordinate)
+			continue; /* Scan bridges in the next loop */
+		(void)fnc(&dev->dev);
+	}
+	list_for_each_entry(child, &bus->children, node) {
+		dev = child->self;
+		if (dev)
+			(void)fnc(&dev->dev);
+		walk_bus(child, fnc);
+	}
+}
+static void __init walk_tree(int (*fnc)(struct device *dev))
+{
+	struct pci_bus *bus;
+
+	down_read(&pci_bus_sem);
+	list_for_each_entry(bus, &pci_root_buses, node)
+		walk_bus(bus, fnc);
+	up_read(&pci_bus_sem);
+}
+
+#include <asm/pci-direct.h>
+
+#define PCI_BUS(bdf)    (((bdf) >> 8) & 0xff)
+#define PCI_BDF(b,d,f)  ((((b) & 0xff) << 8) | PCI_DEVFN(d,f))
+#define PCI_DEVFN2(bdf) ((bdf) & 0xff)
+#define PCI_BDF2(b,df)  ((((b) & 0xff) << 8) | ((df) & 0xff))
+static unsigned long __initdata *pci_devs;
+
+static void __init check_device(int bus, int slot, int func)
+{
+	u16 class;
+
+	class = read_pci_config(bus, slot, func, PCI_CLASS_REVISION);
+	if (class == 0xffff)
+		return;
+
+	set_bit(PCI_BDF(bus, slot, func), pci_devs);
+}
+static int __init xen_prune_pci_devs(struct device *dev)
+{
+	struct pci_dev *pci_dev = to_pci_dev(dev);
+	u16 busdevfn;
+
+	busdevfn = PCI_BDF2(pci_dev->bus->number, pci_dev->devfn);
+	if (test_bit(busdevfn, pci_devs)) /* If present it is not orphaned */
+		clear_bit(busdevfn, pci_devs);
+	return 0;
+}
+static void __init xen_delete_orphaned_pci_devs(void)
+{
+	struct physdev_manage_pci manage_pci;
+	unsigned int i;
+
+	for_each_set_bit(i, pci_devs, PCI_BDF(-1, -1, -1) + 1) {
+		manage_pci.bus = PCI_BUS(i);
+		manage_pci.devfn = PCI_DEVFN2(i);
+		(void)HYPERVISOR_physdev_op(PHYSDEVOP_manage_pci_remove,
+					  &manage_pci);
+	}
+}
+
 static int __init register_xen_pci_notifier(void)
 {
+	int bus, slot, func, rc = 0;
+
 	if (!xen_initial_domain())
 		return 0;
 
-	return bus_register_notifier(&pci_bus_type, &device_nb);
+	rc = bus_register_notifier(&pci_bus_type, &device_nb);
+
+	if (!pcibios_assign_all_busses())
+		return rc;
+
+	if (!early_pci_allowed())
+		return rc;
+
+	/* 64K bits needed - we will revisit it in xen_pci_refresh */
+	pci_devs = kcalloc(BITS_TO_LONGS(PCI_BDF(-1, -1, -1) + 1), sizeof(unsigned long), GFP_KERNEL);
+	if (!pci_devs)
+		return rc;
+
+	/* Poor man's PCI discovery */
+	for (bus = 0; bus < 256; bus++) {
+		for (slot = 0; slot < 32; slot++) {
+			for (func = 0; func < 8; func++) {
+				check_device(bus, slot, func);
+			}
+		}
+	}
+	return rc;
 }
 
 arch_initcall(register_xen_pci_notifier);
@@ -241,3 +333,27 @@ static int __init xen_mcfg_late(void)
  */
 subsys_initcall_sync(xen_mcfg_late);
 #endif
+
+static int __init xen_pci_refresh(void)
+{
+	if (!xen_initial_domain())
+		return 0;
+
+	if (!pcibios_assign_all_busses())
+		return 0;
+
+	/* Update the list - so that we only have orphaned devices. */
+	walk_tree(&xen_prune_pci_devs);
+
+	/* Remove orphaned devices. */
+	xen_delete_orphaned_pci_devs();
+	/* Remove all existing ones */
+	walk_tree(&xen_remove_device);
+
+	/* Now the hypervisor has no PCI devices, so lets add them in */
+	walk_tree(&xen_add_device);
+
+	kfree(pci_devs);
+	return 0;
+}
+subsys_initcall_sync(xen_pci_refresh);
-- 
1.7.7.6


--vkogqOf2sHV7VnPd
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pci-Don-t-assume-the-removed-device-is-a-bridge.patch"

>From 76dc10b829f3beebd23c0c99dd653e50b429c5bd Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 11:48:16 -0500
Subject: [PATCH] pci: Don't assume the removed device is a bridge.

When we are instructed to remove a PCI device, it is usually
done from the initial domain via PHYSDEVOP_pci_device_remove or
PHYSDEVOP_manage_pci_remove. That is OK except in the case where
the initial domain has re-programmed the PCI bridges with a new
PCI_SUBORDINATE_BUS value causing the bus number to change.

That means a device that had been addressed via say this
BDF: 06:00.0 is now addressed via 09:00.0. Now assume that the
device that is being deleted is a bridge and it used to be
06:00.0 - but since the bus numbers are different any
reads done on the PCI_SUBORDINATE_BUS can return bogus values
(as we are now addressing a completely new device).

To guard against that we save away the subordinate and secondary
bus numbers the first time the device is introduced. Then when
the device is deleted we use those values instead of reading
from the PCI device.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c |    8 ++++----
 xen/include/xen/pci.h         |    4 ++++
 2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 6152370..4e73427 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -201,6 +201,8 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
             sub_bus = pci_conf_read8(pseg->nr, bus, PCI_SLOT(devfn),
                                      PCI_FUNC(devfn), PCI_SUBORDINATE_BUS);
 
+            pdev->info.bus.sec = sec_bus;
+            pdev->info.bus.sub = sub_bus;
             spin_lock(&pseg->bus2bridge_lock);
             for ( ; sec_bus <= sub_bus; sec_bus++ )
             {
@@ -265,10 +267,8 @@ static void free_pdev(struct pci_seg *pseg, struct pci_dev *pdev)
         case DEV_TYPE_LEGACY_PCI_BRIDGE:
             dev = PCI_SLOT(pdev->devfn);
             func = PCI_FUNC(pdev->devfn);
-            sec_bus = pci_conf_read8(pseg->nr, pdev->bus, dev, func,
-                                     PCI_SECONDARY_BUS);
-            sub_bus = pci_conf_read8(pseg->nr, pdev->bus, dev, func,
-                                     PCI_SUBORDINATE_BUS);
+            sec_bus = pdev->info.bus.sec;
+            sub_bus = pdev->info.bus.sub;
 
             spin_lock(&pseg->bus2bridge_lock);
             for ( ; sec_bus <= sub_bus; sec_bus++ )
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index cadb525..c3f6ee4 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -39,6 +39,10 @@ struct pci_dev_info {
         u8 bus;
         u8 devfn;
     } physfn;
+    struct {
+        u8 sec;
+        u8 sub;
+    } bus; /* Only set if device is a bridge */
 };
 
 struct pci_dev {
-- 
1.7.7.6


--vkogqOf2sHV7VnPd
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--vkogqOf2sHV7VnPd--


From xen-devel-bounces@lists.xen.org Wed Feb 05 20:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB90v-0008Ky-0F; Wed, 05 Feb 2014 20:24:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WB90t-0008Kt-HF
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 20:24:04 +0000
Received: from [193.109.254.147:57306] by server-14.bemta-14.messagelabs.com
	id 7D/41-29228-2ED92F25; Wed, 05 Feb 2014 20:24:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391631840!2303925!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8479 invoked from network); 5 Feb 2014 20:24:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 20:24:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="100280715"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 20:23:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	15:23:58 -0500
Message-ID: <52F29DDC.7010908@citrix.com>
Date: Wed, 5 Feb 2014 20:23:56 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Michael Chan <mchan@broadcom.com>
References: <52EAA31B.1090606@schaman.hu>	
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>	
	<52EBA51E.808@citrix.com>
	<1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
In-Reply-To: <1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 19:47, Michael Chan wrote:
> On Fri, 2014-01-31 at 14:29 +0100, Zoltan Kiss wrote:
>> [ 5417.275472] WARNING: at net/sched/sch_generic.c:255
>> dev_watchdog+0x156/0x1f0()
>> [ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
>
> The dump shows an internal IRQ pending on MSIX vector 2 which matches
> the queue number that is timing out.  I don't know what happened to
> the MSIX and why the driver is not seeing it.  Do you see an IRQ error
> message from the kernel a few seconds before the tx timeout message?

I haven't seen any IRQ-related error message. Note, this is on Xen 
4.3.1. Now I have new results with a reworked version of the patch; 
unfortunately it still has this issue. Here is a bnx2 dump, lspci output 
and some Xen debug output (MSI and interrupt bindings; I have more if 
needed).

[82099.288743] bnx2 0000:02:00.0 eth0: <--- start FTQ dump --->
[82099.288767] bnx2 0000:02:00.0 eth0: RV2P_PFTQ_CTL 00010002
[82099.288779] bnx2 0000:02:00.0 eth0: RV2P_TFTQ_CTL 00020000
[82099.288790] bnx2 0000:02:00.0 eth0: RV2P_MFTQ_CTL 00004000
[82099.288801] bnx2 0000:02:00.0 eth0: TBDR_FTQ_CTL 00404002
[82099.288812] bnx2 0000:02:00.0 eth0: TDMA_FTQ_CTL 00010002
[82099.288823] bnx2 0000:02:00.0 eth0: TXP_FTQ_CTL 00810002
[82099.288834] bnx2 0000:02:00.0 eth0: TXP_FTQ_CTL 01010002
[82099.288845] bnx2 0000:02:00.0 eth0: TPAT_FTQ_CTL 00010002
[82099.288878] bnx2 0000:02:00.0 eth0: RXP_CFTQ_CTL 00008000
[82099.288889] bnx2 0000:02:00.0 eth0: RXP_FTQ_CTL 00100002
[82099.288899] bnx2 0000:02:00.0 eth0: COM_COMXQ_FTQ_CTL 00010000
[82099.288911] bnx2 0000:02:00.0 eth0: COM_COMTQ_FTQ_CTL 00020000
[82099.288923] bnx2 0000:02:00.0 eth0: COM_COMQ_FTQ_CTL 00010000
[82099.288934] bnx2 0000:02:00.0 eth0: CP_CPQ_FTQ_CTL 00004000
[82099.288944] bnx2 0000:02:00.0 eth0: CPU states:
[82099.288960] bnx2 0000:02:00.0 eth0: 045000 mode b84c state 80005000 
evt_mask 500 pc 8001284 pc 8000cb8 instr 35690100
[82099.288984] bnx2 0000:02:00.0 eth0: 085000 mode b84c state 80001000 
evt_mask 500 pc 8000a58 pc 8000a4c instr 38420001
[82099.289007] bnx2 0000:02:00.0 eth0: 0c5000 mode b84c state 80001000 
evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
[82099.289030] bnx2 0000:02:00.0 eth0: 105000 mode b8cc state 80000000 
evt_mask 500 pc 8000a94 pc 8000a94 instr 8c420020
[82099.289063] bnx2 0000:02:00.0 eth0: 145000 mode b880 state 80000000 
evt_mask 500 pc 800d244 pc 8008aac instr 8c460000
[82099.289087] bnx2 0000:02:00.0 eth0: 185000 mode b8cc state 80000000 
evt_mask 500 pc 8000c6c pc 8000c6c instr 3c056000
[82099.289103] bnx2 0000:02:00.0 eth0: <--- end FTQ dump --->
[82099.289112] bnx2 0000:02:00.0 eth0: <--- start TBDC dump --->
[82099.289124] bnx2 0000:02:00.0 eth0: TBDC free cnt: 31
[82099.289133] bnx2 0000:02:00.0 eth0: LINE     CID  BIDX   CMD  VALIDS
[82099.289148] bnx2 0000:02:00.0 eth0: 00    000800  a3b8   00    [1]
[82099.289163] bnx2 0000:02:00.0 eth0: 01    001100  1b58   00    [0]
[82099.289178] bnx2 0000:02:00.0 eth0: 02    000800  a390   00    [0]
[82099.289193] bnx2 0000:02:00.0 eth0: 03    000800  a370   00    [0]
[82099.289217] bnx2 0000:02:00.0 eth0: 04    000800  a378   00    [0]
[82099.289232] bnx2 0000:02:00.0 eth0: 05    000800  a388   00    [0]
[82099.289247] bnx2 0000:02:00.0 eth0: 06    000800  a398   00    [0]
[82099.289262] bnx2 0000:02:00.0 eth0: 07    000800  a3a8   00    [0]
[82099.289277] bnx2 0000:02:00.0 eth0: 08    000800  a3b0   00    [0]
[82099.289291] bnx2 0000:02:00.0 eth0: 09    000800  a3b8   00    [0]
[82099.289306] bnx2 0000:02:00.0 eth0: 0a    000800  8c10   00    [0]
[82099.289321] bnx2 0000:02:00.0 eth0: 0b    000800  eaf0   00    [0]
[82099.289336] bnx2 0000:02:00.0 eth0: 0c    000800  eaf8   00    [0]
[82099.289351] bnx2 0000:02:00.0 eth0: 0d    001100  5e60   00    [0]
[82099.289365] bnx2 0000:02:00.0 eth0: 0e    001100  5e68   00    [0]
[82099.289380] bnx2 0000:02:00.0 eth0: 0f    001100  5e70   00    [0]
[82099.289395] bnx2 0000:02:00.0 eth0: 10    001100  5e88   00    [0]
[82099.289410] bnx2 0000:02:00.0 eth0: 11    001100  5e90   00    [0]
[82099.289425] bnx2 0000:02:00.0 eth0: 12    001100  5ee8   00    [0]
[82099.289440] bnx2 0000:02:00.0 eth0: 13    001100  5ef8   00    [0]
[82099.289454] bnx2 0000:02:00.0 eth0: 14    001100  5e00   00    [0]
[82099.289470] bnx2 0000:02:00.0 eth0: 15    001100  5a20   00    [0]
[82099.289485] bnx2 0000:02:00.0 eth0: 16    001100  59a8   00    [0]
[82099.289499] bnx2 0000:02:00.0 eth0: 17    001100  59b0   00    [0]
[82099.289514] bnx2 0000:02:00.0 eth0: 18    001100  59b8   00    [0]
[82099.289529] bnx2 0000:02:00.0 eth0: 19    001100  5a28   00    [0]
[82099.289544] bnx2 0000:02:00.0 eth0: 1a    001100  5a30   00    [0]
[82099.289559] bnx2 0000:02:00.0 eth0: 1b    000800  8c58   00    [0]
[82099.289573] bnx2 0000:02:00.0 eth0: 1c    000800  8c60   00    [0]
[82099.289588] bnx2 0000:02:00.0 eth0: 1d    055e80  dca8   fb    [0]
[82099.289603] bnx2 0000:02:00.0 eth0: 1e    1cf780  f7b8   af    [0]
[82099.289618] bnx2 0000:02:00.0 eth0: 1f    1dff80  efe0   bf    [0]
[82099.289629] bnx2 0000:02:00.0 eth0: <--- end TBDC dump --->
[82099.289644] bnx2 0000:02:00.0 eth0: DEBUG: intr_sem[0] PCI_CMD[00100406]
[82099.289661] bnx2 0000:02:00.0 eth0: DEBUG: PCI_PM[19002008] 
PCI_MISC_CFG[92000088]
[82099.289676] bnx2 0000:02:00.0 eth0: DEBUG: EMAC_TX_STATUS[0000000e] 
EMAC_RX_STATUS[00000000]
[82099.289691] bnx2 0000:02:00.0 eth0: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[82099.289703] bnx2 0000:02:00.0 eth0: DEBUG: 
HC_STATS_INTERRUPT_STATUS[01ee0000]
[82099.289716] bnx2 0000:02:00.0 eth0: DEBUG: PBA[00000000]
[82099.289726] bnx2 0000:02:00.0 eth0: <--- start MCP states dump --->
[82099.289739] bnx2 0000:02:00.0 eth0: DEBUG: MCP_STATE_P0[0003610e] 
MCP_STATE_P1[0003610e]
[82099.289756] bnx2 0000:02:00.0 eth0: DEBUG: MCP mode[0000b880] 
state[80008000] evt_mask[00000500]
[82099.289773] bnx2 0000:02:00.0 eth0: DEBUG: pc[0800b110] pc[0800aff0] 
instr[afbf0048]
[82099.289787] bnx2 0000:02:00.0 eth0: DEBUG: shmem states:
[82099.289800] bnx2 0000:02:00.0 eth0: DEBUG: drv_mb[0d000005] 
fw_mb[00000005] link_status[0010026f]
[82099.289820] bnx2 0000:02:00.0 eth0: DEBUG: 
dev_info_signature[44564903] reset_type[01005254]
[82099.289842] bnx2 0000:02:00.0 eth0: DEBUG: 000001c0: 01005254 
42530088 0003610e 00000000
[82099.289860] bnx2 0000:02:00.0 eth0: DEBUG: 000003cc: 44444444 
44444444 44444444 00000a28
[82099.289879] bnx2 0000:02:00.0 eth0: DEBUG: 000003dc: 000cffff 
00000000 ffff0000 00000000
[82099.289897] bnx2 0000:02:00.0 eth0: DEBUG: 000003ec: 00000000 
00000000 00000000 00000000
[82099.289911] bnx2 0000:02:00.0 eth0: DEBUG: 0x3fc[0000ffff]
[82099.289921] bnx2 0000:02:00.0 eth0: <--- end MCP states dump --->

lspci -s 02:00.0 -vv
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S 
Gigabit Ethernet (rev 20)
	Subsystem: Dell Device 045f
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 32
	Region 0: Memory at c8000000 (64-bit, non-prefetchable) [size=32M]
	Expansion ROM at c6000000 [disabled] [size=128K]
	Capabilities: [48] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] Vital Product Data
		Product Name: Broadcom NetXtreme II Ethernet Controller
		Read-only fields:
			[PN] Part number: BCM95709C0
			[EC] Engineering changes: 220197-2
			[SN] Serial number: 0123456789
			[MN] Manufacture ID: 31 30 32 38
			[V0] Vendor specific: 6.2.14
			[RV] Reserved: checksum good, 22 byte(s) reserved
		End
	Capabilities: [58] MSI: Enable- Count=1/16 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [a0] MSI-X: Enable+ Count=9 Masked-
		Vector table: BAR=0 offset=0000c000
		PBA: BAR=0 offset=0000e000
	Capabilities: [ac] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
		DevCtl:	Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr+ NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Latency L0 <2us, 
L1 <2us
			ClockPM- Surprise- LLActRep- BwNot-
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- 
BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not 
Supported
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF 
Disabled
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- 
ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, 
EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [100 v1] Device Serial Number b8-ac-6f-ff-fe-b4-17-20
	Capabilities: [110 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- 
MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+ UnxCmplt+ RxOF- 
MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP+ FCP+ CmpltTO+ CmpltAbrt- UnxCmplt- RxOF+ 
MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		CEMsk:	RxErr- BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [150 v1] Power Budgeting <?>
	Capabilities: [160 v1] Virtual Channel
		Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
		Arb:	Fixed- WRR32- WRR64- WRR128-
		Ctrl:	ArbSelect=Fixed
		Status:	InProgress-
		VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
			Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
			Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
			Status:	NegoPending- InProgress-
	Kernel driver in use: bnx2
	Kernel modules: bnx2



(XEN) [2014-02-05 20:15:20] MSI information:
(XEN) [2014-02-05 20:15:20]  IOMMU   56 vec=28  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/?
(XEN) [2014-02-05 20:15:20]  MSI     57 vec=d0  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     58 vec=d8  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     59 vec=21  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     60 vec=29  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     61 vec=31  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     62 vec=39  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   63 vec=62  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   64 vec=d7  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   65 vec=ba  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   66 vec=92  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   67 vec=3a  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   68 vec=b8  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   69 vec=2a  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   70 vec=33  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   71 vec=c2  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   72 vec=9a  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   73 vec=d0  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   74 vec=da  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   75 vec=b2  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   76 vec=6b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   77 vec=68  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   78 vec=b2  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   79 vec=53  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   80 vec=78  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   81 vec=4b  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   82 vec=a7  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   83 vec=63  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   84 vec=6f  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   85 vec=5b  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   86 vec=99  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   87 vec=a3  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   88 vec=73  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   89 vec=58  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   90 vec=aa  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   91 vec=38  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   92 vec=8f  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   93 vec=3c  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   94 vec=3b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   95 vec=ca  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   96 vec=a8  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   97 vec=32  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   98 vec=23  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   99 vec=94  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X  100 vec=7b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  101 vec=60  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  102 vec=a2  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  103 vec=4b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  104 vec=a1  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  105 vec=2d  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X  106 vec=43  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  107 vec=d2  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  108 vec=61  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  109 vec=21  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  110 vec=2b  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  111 vec=85  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:22] Guest interrupt information:
(XEN) [2014-02-05 20:15:22]    IRQ:   0 affinity:00000001 vec:f0 
type=IO-APIC-edge    status=00000000 timer_interrupt+0/0x18f
(XEN) [2014-02-05 20:15:22]    IRQ:   1 affinity:00000001 vec:30 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   3 affinity:00000001 vec:38 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   4 affinity:00000001 vec:f1 
type=IO-APIC-edge    status=00000000 ns16550_interrupt+0/0x73
(XEN) [2014-02-05 20:15:22]    IRQ:   5 affinity:00000001 vec:40 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   6 affinity:00000001 vec:48 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   7 affinity:00000001 vec:50 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   8 affinity:00000001 vec:58 
type=IO-APIC-edge    status=00000030 in-flight=0 domain-list=0:  8(---),
(XEN) [2014-02-05 20:15:22]    IRQ:   9 affinity:00000001 vec:60 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0:  9(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  10 affinity:00000001 vec:68 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  11 affinity:00000001 vec:70 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  12 affinity:00000001 vec:78 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  13 affinity:0000ffff vec:88 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  14 affinity:00000001 vec:90 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  15 affinity:00000001 vec:98 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  17 affinity:00000010 vec:59 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 17(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  18 affinity:00000010 vec:61 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 18(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  19 affinity:00000010 vec:b9 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 19(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  20 affinity:00000010 vec:71 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 20(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  21 affinity:00000400 vec:6d 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 21(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  32 affinity:0000ffff vec:79 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  33 affinity:0000ffff vec:89 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  36 affinity:0000ffff vec:41 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  38 affinity:0000ffff vec:99 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  39 affinity:0000ffff vec:a9 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  42 affinity:0000ffff vec:81 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  43 affinity:0000ffff vec:91 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  45 affinity:0000ffff vec:a1 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  47 affinity:0000ffff vec:b1 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  53 affinity:0000ffff vec:a0 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  56 affinity:0000ffff vec:28 
type=DMA_MSI         status=00000000 iommu_page_fault+0/0x12
(XEN) [2014-02-05 20:15:22]    IRQ:  57 affinity:00000001 vec:d0 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:311(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  58 affinity:00000001 vec:d8 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:310(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  59 affinity:00000001 vec:21 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:309(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  60 affinity:00000001 vec:29 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:308(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  61 affinity:00000001 vec:31 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:307(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  62 affinity:00000001 vec:39 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:306(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  63 affinity:00000004 vec:62 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:305(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  64 affinity:00000004 vec:d7 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:304(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  65 affinity:00000100 vec:ba 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:303(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  66 affinity:00000004 vec:92 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:302(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  67 affinity:00000002 vec:3a 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:301(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  68 affinity:00000004 vec:b8 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:300(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  69 affinity:00000001 vec:2a 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  70 affinity:00000002 vec:33 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:298(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  71 affinity:00000100 vec:c2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:297(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  72 affinity:00000004 vec:9a 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:296(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  73 affinity:00000004 vec:d0 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:295(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  74 affinity:00000004 vec:da 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:294(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  75 affinity:00000001 vec:b2 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  76 affinity:00000002 vec:6b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:292(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  77 affinity:00000004 vec:68 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:291(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  78 affinity:00000004 vec:b2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:290(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  79 affinity:00000002 vec:53 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:289(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  80 affinity:00000004 vec:78 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:288(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  81 affinity:00000001 vec:4b 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  82 affinity:00000004 vec:a7 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:286(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  83 affinity:00000004 vec:63 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:285(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  84 affinity:00000100 vec:6f 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:284(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  85 affinity:00000004 vec:5b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:283(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  86 affinity:00000004 vec:99 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:282(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  87 affinity:00000001 vec:a3 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  88 affinity:00000100 vec:73 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:280(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  89 affinity:00000004 vec:58 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:279(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  90 affinity:00000100 vec:aa 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:278(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  91 affinity:00000004 vec:38 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:277(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  92 affinity:00000004 vec:8f 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:276(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  93 affinity:00000001 vec:3c 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  94 affinity:00000002 vec:3b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:274(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  95 affinity:00000004 vec:ca 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:273(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  96 affinity:00000004 vec:a8 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:272(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  97 affinity:00000004 vec:32 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:271(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  98 affinity:00000100 vec:23 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:270(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  99 affinity:00000001 vec:94 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ: 100 affinity:00000002 vec:7b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:268(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 101 affinity:00000004 vec:60 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:267(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 102 affinity:00000004 vec:a2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:266(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 103 affinity:00000002 vec:4b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:265(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 104 affinity:00000002 vec:a1 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:264(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 105 affinity:00000001 vec:2d 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ: 106 affinity:00000100 vec:43 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:262(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 107 affinity:00000004 vec:d2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:261(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 108 affinity:00000100 vec:61 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:260(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 109 affinity:00000004 vec:21 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:259(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 110 affinity:00000100 vec:2b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:258(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 111 affinity:00000001 vec:85 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22] IO-APIC interrupt information:
(XEN) [2014-02-05 20:15:22]     IRQ  0 Vec240:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  2: vec=f0 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  1 Vec 48:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  1: vec=30 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  3 Vec 56:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  3: vec=38 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  4 Vec241:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  4: vec=f1 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  5 Vec 64:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  5: vec=40 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  6 Vec 72:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  6: vec=48 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  7 Vec 80:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  7: vec=50 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  8 Vec 88:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  8: vec=58 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  9 Vec 96:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  9: vec=60 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=L mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 10 Vec104:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 10: vec=68 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 11 Vec112:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 11: vec=70 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 12 Vec120:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 12: vec=78 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 13 Vec136:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 13: vec=88 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 14 Vec144:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 14: vec=90 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 15 Vec152:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 15: vec=98 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 17 Vec 89:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 17: vec=59 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 18 Vec 97:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 18: vec=61 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 19 Vec185:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 19: vec=b9 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 20 Vec113:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 20: vec=71 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 21 Vec109:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 21: vec=6d 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:2
(XEN) [2014-02-05 20:15:22]     IRQ 32 Vec121:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  0: vec=79 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 33 Vec137:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  1: vec=89 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 36 Vec 65:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  4: vec=41 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 38 Vec153:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  6: vec=99 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 39 Vec169:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  7: vec=a9 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 42 Vec129:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 10: vec=81 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 43 Vec145:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 11: vec=91 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 45 Vec161:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 13: vec=a1 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 47 Vec177:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 15: vec=b1 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 53 Vec160:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 21: vec=a0 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32



From xen-devel-bounces@lists.xen.org Wed Feb 05 20:24:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB90v-0008Ky-0F; Wed, 05 Feb 2014 20:24:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WB90t-0008Kt-HF
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 20:24:04 +0000
Received: from [193.109.254.147:57306] by server-14.bemta-14.messagelabs.com
	id 7D/41-29228-2ED92F25; Wed, 05 Feb 2014 20:24:02 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391631840!2303925!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8479 invoked from network); 5 Feb 2014 20:24:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 20:24:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="100280715"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 20:23:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	15:23:58 -0500
Message-ID: <52F29DDC.7010908@citrix.com>
Date: Wed, 5 Feb 2014 20:23:56 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Michael Chan <mchan@broadcom.com>
References: <52EAA31B.1090606@schaman.hu>	
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>	
	<52EBA51E.808@citrix.com>
	<1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
In-Reply-To: <1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 04/02/14 19:47, Michael Chan wrote:
> On Fri, 2014-01-31 at 14:29 +0100, Zoltan Kiss wrote:
>> [ 5417.275472] WARNING: at net/sched/sch_generic.c:255
>> dev_watchdog+0x156/0x1f0()
>> [ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
>
> The dump shows an internal IRQ pending on MSIX vector 2 which matches
> the queue number that is timing out.  I don't know what happened to
> the MSIX and why the driver is not seeing it.  Do you see an IRQ error
> message from the kernel a few seconds before the tx timeout message?

I haven't seen any IRQ-related error messages. Note that this is on Xen 
4.3.1. I now have new results with a reworked version of the patch; 
unfortunately, it still shows this issue. Below are a bnx2 dump, lspci 
output and some Xen debug output (MSI and interrupt bindings; I have 
more if needed).
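As a side note, one way to check whether a queue's MSI-X vector has stopped firing is to sample /proc/interrupts twice and diff the per-IRQ totals; a vector whose count never advances while its tx queue stalls is suspect. A minimal sketch (the helper name and the sample lines are hypothetical, not from this report):

```python
def irq_totals(text):
    """Parse /proc/interrupts-style text into {irq_name: total_count}."""
    totals = {}
    for line in text.splitlines():
        left, _, rest = line.partition(":")
        name = left.strip()
        if not name or not rest:
            continue  # skips the CPU header line, which has no colon
        counts = []
        for tok in rest.split():
            if tok.isdigit():
                counts.append(int(tok))
            else:
                break  # stop at the chip/handler description
        if counts:
            totals[name] = sum(counts)
    return totals

sample = """\
 63:  101  0  PCI-MSI-edge  eth0-0
 64:  202  5  PCI-MSI-edge  eth0-1
"""
print(irq_totals(sample))  # {'63': 101, '64': 207}
```

Taking two snapshots a few seconds apart and subtracting the totals per IRQ shows which vectors are still delivering.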

[82099.288743] bnx2 0000:02:00.0 eth0: <--- start FTQ dump --->
[82099.288767] bnx2 0000:02:00.0 eth0: RV2P_PFTQ_CTL 00010002
[82099.288779] bnx2 0000:02:00.0 eth0: RV2P_TFTQ_CTL 00020000
[82099.288790] bnx2 0000:02:00.0 eth0: RV2P_MFTQ_CTL 00004000
[82099.288801] bnx2 0000:02:00.0 eth0: TBDR_FTQ_CTL 00404002
[82099.288812] bnx2 0000:02:00.0 eth0: TDMA_FTQ_CTL 00010002
[82099.288823] bnx2 0000:02:00.0 eth0: TXP_FTQ_CTL 00810002
[82099.288834] bnx2 0000:02:00.0 eth0: TXP_FTQ_CTL 01010002
[82099.288845] bnx2 0000:02:00.0 eth0: TPAT_FTQ_CTL 00010002
[82099.288878] bnx2 0000:02:00.0 eth0: RXP_CFTQ_CTL 00008000
[82099.288889] bnx2 0000:02:00.0 eth0: RXP_FTQ_CTL 00100002
[82099.288899] bnx2 0000:02:00.0 eth0: COM_COMXQ_FTQ_CTL 00010000
[82099.288911] bnx2 0000:02:00.0 eth0: COM_COMTQ_FTQ_CTL 00020000
[82099.288923] bnx2 0000:02:00.0 eth0: COM_COMQ_FTQ_CTL 00010000
[82099.288934] bnx2 0000:02:00.0 eth0: CP_CPQ_FTQ_CTL 00004000
[82099.288944] bnx2 0000:02:00.0 eth0: CPU states:
[82099.288960] bnx2 0000:02:00.0 eth0: 045000 mode b84c state 80005000 
evt_mask 500 pc 8001284 pc 8000cb8 instr 35690100
[82099.288984] bnx2 0000:02:00.0 eth0: 085000 mode b84c state 80001000 
evt_mask 500 pc 8000a58 pc 8000a4c instr 38420001
[82099.289007] bnx2 0000:02:00.0 eth0: 0c5000 mode b84c state 80001000 
evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
[82099.289030] bnx2 0000:02:00.0 eth0: 105000 mode b8cc state 80000000 
evt_mask 500 pc 8000a94 pc 8000a94 instr 8c420020
[82099.289063] bnx2 0000:02:00.0 eth0: 145000 mode b880 state 80000000 
evt_mask 500 pc 800d244 pc 8008aac instr 8c460000
[82099.289087] bnx2 0000:02:00.0 eth0: 185000 mode b8cc state 80000000 
evt_mask 500 pc 8000c6c pc 8000c6c instr 3c056000
[82099.289103] bnx2 0000:02:00.0 eth0: <--- end FTQ dump --->
[82099.289112] bnx2 0000:02:00.0 eth0: <--- start TBDC dump --->
[82099.289124] bnx2 0000:02:00.0 eth0: TBDC free cnt: 31
[82099.289133] bnx2 0000:02:00.0 eth0: LINE     CID  BIDX   CMD  VALIDS
[82099.289148] bnx2 0000:02:00.0 eth0: 00    000800  a3b8   00    [1]
[82099.289163] bnx2 0000:02:00.0 eth0: 01    001100  1b58   00    [0]
[82099.289178] bnx2 0000:02:00.0 eth0: 02    000800  a390   00    [0]
[82099.289193] bnx2 0000:02:00.0 eth0: 03    000800  a370   00    [0]
[82099.289217] bnx2 0000:02:00.0 eth0: 04    000800  a378   00    [0]
[82099.289232] bnx2 0000:02:00.0 eth0: 05    000800  a388   00    [0]
[82099.289247] bnx2 0000:02:00.0 eth0: 06    000800  a398   00    [0]
[82099.289262] bnx2 0000:02:00.0 eth0: 07    000800  a3a8   00    [0]
[82099.289277] bnx2 0000:02:00.0 eth0: 08    000800  a3b0   00    [0]
[82099.289291] bnx2 0000:02:00.0 eth0: 09    000800  a3b8   00    [0]
[82099.289306] bnx2 0000:02:00.0 eth0: 0a    000800  8c10   00    [0]
[82099.289321] bnx2 0000:02:00.0 eth0: 0b    000800  eaf0   00    [0]
[82099.289336] bnx2 0000:02:00.0 eth0: 0c    000800  eaf8   00    [0]
[82099.289351] bnx2 0000:02:00.0 eth0: 0d    001100  5e60   00    [0]
[82099.289365] bnx2 0000:02:00.0 eth0: 0e    001100  5e68   00    [0]
[82099.289380] bnx2 0000:02:00.0 eth0: 0f    001100  5e70   00    [0]
[82099.289395] bnx2 0000:02:00.0 eth0: 10    001100  5e88   00    [0]
[82099.289410] bnx2 0000:02:00.0 eth0: 11    001100  5e90   00    [0]
[82099.289425] bnx2 0000:02:00.0 eth0: 12    001100  5ee8   00    [0]
[82099.289440] bnx2 0000:02:00.0 eth0: 13    001100  5ef8   00    [0]
[82099.289454] bnx2 0000:02:00.0 eth0: 14    001100  5e00   00    [0]
[82099.289470] bnx2 0000:02:00.0 eth0: 15    001100  5a20   00    [0]
[82099.289485] bnx2 0000:02:00.0 eth0: 16    001100  59a8   00    [0]
[82099.289499] bnx2 0000:02:00.0 eth0: 17    001100  59b0   00    [0]
[82099.289514] bnx2 0000:02:00.0 eth0: 18    001100  59b8   00    [0]
[82099.289529] bnx2 0000:02:00.0 eth0: 19    001100  5a28   00    [0]
[82099.289544] bnx2 0000:02:00.0 eth0: 1a    001100  5a30   00    [0]
[82099.289559] bnx2 0000:02:00.0 eth0: 1b    000800  8c58   00    [0]
[82099.289573] bnx2 0000:02:00.0 eth0: 1c    000800  8c60   00    [0]
[82099.289588] bnx2 0000:02:00.0 eth0: 1d    055e80  dca8   fb    [0]
[82099.289603] bnx2 0000:02:00.0 eth0: 1e    1cf780  f7b8   af    [0]
[82099.289618] bnx2 0000:02:00.0 eth0: 1f    1dff80  efe0   bf    [0]
[82099.289629] bnx2 0000:02:00.0 eth0: <--- end TBDC dump --->
[82099.289644] bnx2 0000:02:00.0 eth0: DEBUG: intr_sem[0] PCI_CMD[00100406]
[82099.289661] bnx2 0000:02:00.0 eth0: DEBUG: PCI_PM[19002008] 
PCI_MISC_CFG[92000088]
[82099.289676] bnx2 0000:02:00.0 eth0: DEBUG: EMAC_TX_STATUS[0000000e] 
EMAC_RX_STATUS[00000000]
[82099.289691] bnx2 0000:02:00.0 eth0: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[82099.289703] bnx2 0000:02:00.0 eth0: DEBUG: 
HC_STATS_INTERRUPT_STATUS[01ee0000]
[82099.289716] bnx2 0000:02:00.0 eth0: DEBUG: PBA[00000000]
[82099.289726] bnx2 0000:02:00.0 eth0: <--- start MCP states dump --->
[82099.289739] bnx2 0000:02:00.0 eth0: DEBUG: MCP_STATE_P0[0003610e] 
MCP_STATE_P1[0003610e]
[82099.289756] bnx2 0000:02:00.0 eth0: DEBUG: MCP mode[0000b880] 
state[80008000] evt_mask[00000500]
[82099.289773] bnx2 0000:02:00.0 eth0: DEBUG: pc[0800b110] pc[0800aff0] 
instr[afbf0048]
[82099.289787] bnx2 0000:02:00.0 eth0: DEBUG: shmem states:
[82099.289800] bnx2 0000:02:00.0 eth0: DEBUG: drv_mb[0d000005] 
fw_mb[00000005] link_status[0010026f]
[82099.289820] bnx2 0000:02:00.0 eth0: DEBUG: 
dev_info_signature[44564903] reset_type[01005254]
[82099.289842] bnx2 0000:02:00.0 eth0: DEBUG: 000001c0: 01005254 
42530088 0003610e 00000000
[82099.289860] bnx2 0000:02:00.0 eth0: DEBUG: 000003cc: 44444444 
44444444 44444444 00000a28
[82099.289879] bnx2 0000:02:00.0 eth0: DEBUG: 000003dc: 000cffff 
00000000 ffff0000 00000000
[82099.289897] bnx2 0000:02:00.0 eth0: DEBUG: 000003ec: 00000000 
00000000 00000000 00000000
[82099.289911] bnx2 0000:02:00.0 eth0: DEBUG: 0x3fc[0000ffff]
[82099.289921] bnx2 0000:02:00.0 eth0: <--- end MCP states dump --->

lspci -s 02:00.0 -vv
02:00.0 Ethernet controller: Broadcom Corporation NetXtreme II BCM5709S 
Gigabit Ethernet (rev 20)
	Subsystem: Dell Device 045f
	Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- 
Stepping- SERR- FastB2B- DisINTx+
	Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- 
<MAbort- >SERR- <PERR- INTx-
	Latency: 0, Cache Line Size: 64 bytes
	Interrupt: pin A routed to IRQ 32
	Region 0: Memory at c8000000 (64-bit, non-prefetchable) [size=32M]
	Expansion ROM at c6000000 [disabled] [size=128K]
	Capabilities: [48] Power Management version 3
		Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
		Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=1 PME-
	Capabilities: [50] Vital Product Data
		Product Name: Broadcom NetXtreme II Ethernet Controller
		Read-only fields:
			[PN] Part number: BCM95709C0
			[EC] Engineering changes: 220197-2
			[SN] Serial number: 0123456789
			[MN] Manufacture ID: 31 30 32 38
			[V0] Vendor specific: 6.2.14
			[RV] Reserved: checksum good, 22 byte(s) reserved
		End
	Capabilities: [58] MSI: Enable- Count=1/16 Maskable- 64bit+
		Address: 0000000000000000  Data: 0000
	Capabilities: [a0] MSI-X: Enable+ Count=9 Masked-
		Vector table: BAR=0 offset=0000c000
		PBA: BAR=0 offset=0000e000
	Capabilities: [ac] Express (v2) Endpoint, MSI 00
		DevCap:	MaxPayload 512 bytes, PhantFunc 0, Latency L0s <4us, L1 <64us
			ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
		DevCtl:	Report errors: Correctable- Non-Fatal+ Fatal+ Unsupported+
			RlxdOrd+ ExtTag- PhantFunc- AuxPwr+ NoSnoop+
			MaxPayload 256 bytes, MaxReadReq 512 bytes
		DevSta:	CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend-
		LnkCap:	Port #0, Speed 5GT/s, Width x4, ASPM L0s L1, Latency L0 <2us, 
L1 <2us
			ClockPM- Surprise- LLActRep- BwNot-
		LnkCtl:	ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+
			ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
		LnkSta:	Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- 
BWMgmt- ABWMgmt-
		DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not 
Supported
		DevCtl2: Completion Timeout: 65ms to 210ms, TimeoutDis-, LTR-, OBFF 
Disabled
		LnkCtl2: Target Link Speed: 2.5GT/s, EnterCompliance- SpeedDis-
			 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- 
ComplianceSOS-
			 Compliance De-emphasis: -6dB
		LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, 
EqualizationPhase1-
			 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
	Capabilities: [100 v1] Device Serial Number b8-ac-6f-ff-fe-b4-17-20
	Capabilities: [110 v1] Advanced Error Reporting
		UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- 
MalfTLP- ECRC- UnsupReq- ACSViol-
		UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt+ UnxCmplt+ RxOF- 
MalfTLP- ECRC- UnsupReq- ACSViol-
		UESvrt:	DLP+ SDES+ TLP+ FCP+ CmpltTO+ CmpltAbrt- UnxCmplt- RxOF+ 
MalfTLP+ ECRC+ UnsupReq- ACSViol-
		CESta:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
		CEMsk:	RxErr- BadTLP+ BadDLLP+ Rollover+ Timeout+ NonFatalErr+
		AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
	Capabilities: [150 v1] Power Budgeting <?>
	Capabilities: [160 v1] Virtual Channel
		Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
		Arb:	Fixed- WRR32- WRR64- WRR128-
		Ctrl:	ArbSelect=Fixed
		Status:	InProgress-
		VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
			Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
			Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=ff
			Status:	NegoPending- InProgress-
	Kernel driver in use: bnx2
	Kernel modules: bnx2



(XEN) [2014-02-05 20:15:20] MSI information:
(XEN) [2014-02-05 20:15:20]  IOMMU   56 vec=28  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/?
(XEN) [2014-02-05 20:15:20]  MSI     57 vec=d0  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     58 vec=d8  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     59 vec=21  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     60 vec=29  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     61 vec=31  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI     62 vec=39  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   63 vec=62  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   64 vec=d7  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   65 vec=ba  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   66 vec=92  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   67 vec=3a  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   68 vec=b8  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   69 vec=2a  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   70 vec=33  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   71 vec=c2  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   72 vec=9a  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   73 vec=d0  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   74 vec=da  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   75 vec=b2  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   76 vec=6b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   77 vec=68  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   78 vec=b2  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   79 vec=53  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   80 vec=78  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   81 vec=4b  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   82 vec=a7  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   83 vec=63  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   84 vec=6f  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   85 vec=5b  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   86 vec=99  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   87 vec=a3  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   88 vec=73  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   89 vec=58  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   90 vec=aa  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   91 vec=38  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   92 vec=8f  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   93 vec=3c  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X   94 vec=3b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   95 vec=ca  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   96 vec=a8  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   97 vec=32  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   98 vec=23  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   99 vec=94  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X  100 vec=7b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  101 vec=60  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  102 vec=a2  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  103 vec=4b  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  104 vec=a1  fixed  edge   assert 
phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  105 vec=2d  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:20]  MSI-X  106 vec=43  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  107 vec=d2  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  108 vec=61  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  109 vec=21  fixed  edge   assert 
phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  110 vec=2b  fixed  edge   assert 
phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X  111 vec=85  fixed  edge   assert 
phys    cpu dest=00000020 mask=1/1/1
(XEN) [2014-02-05 20:15:22] Guest interrupt information:
(XEN) [2014-02-05 20:15:22]    IRQ:   0 affinity:00000001 vec:f0 
type=IO-APIC-edge    status=00000000 timer_interrupt+0/0x18f
(XEN) [2014-02-05 20:15:22]    IRQ:   1 affinity:00000001 vec:30 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   3 affinity:00000001 vec:38 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   4 affinity:00000001 vec:f1 
type=IO-APIC-edge    status=00000000 ns16550_interrupt+0/0x73
(XEN) [2014-02-05 20:15:22]    IRQ:   5 affinity:00000001 vec:40 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   6 affinity:00000001 vec:48 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   7 affinity:00000001 vec:50 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:   8 affinity:00000001 vec:58 
type=IO-APIC-edge    status=00000030 in-flight=0 domain-list=0:  8(---),
(XEN) [2014-02-05 20:15:22]    IRQ:   9 affinity:00000001 vec:60 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0:  9(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  10 affinity:00000001 vec:68 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  11 affinity:00000001 vec:70 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  12 affinity:00000001 vec:78 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  13 affinity:0000ffff vec:88 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  14 affinity:00000001 vec:90 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  15 affinity:00000001 vec:98 
type=IO-APIC-edge    status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  17 affinity:00000010 vec:59 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 17(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  18 affinity:00000010 vec:61 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 18(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  19 affinity:00000010 vec:b9 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 19(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  20 affinity:00000010 vec:71 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 20(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  21 affinity:00000400 vec:6d 
type=IO-APIC-level   status=00000030 in-flight=0 domain-list=0: 21(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  32 affinity:0000ffff vec:79 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  33 affinity:0000ffff vec:89 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  36 affinity:0000ffff vec:41 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  38 affinity:0000ffff vec:99 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  39 affinity:0000ffff vec:a9 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  42 affinity:0000ffff vec:81 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  43 affinity:0000ffff vec:91 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  45 affinity:0000ffff vec:a1 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  47 affinity:0000ffff vec:b1 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  53 affinity:0000ffff vec:a0 
type=IO-APIC-level   status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  56 affinity:0000ffff vec:28 
type=DMA_MSI         status=00000000 iommu_page_fault+0/0x12
(XEN) [2014-02-05 20:15:22]    IRQ:  57 affinity:00000001 vec:d0 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:311(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  58 affinity:00000001 vec:d8 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:310(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  59 affinity:00000001 vec:21 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:309(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  60 affinity:00000001 vec:29 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:308(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  61 affinity:00000001 vec:31 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:307(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  62 affinity:00000001 vec:39 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:306(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  63 affinity:00000004 vec:62 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:305(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  64 affinity:00000004 vec:d7 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:304(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  65 affinity:00000100 vec:ba 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:303(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  66 affinity:00000004 vec:92 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:302(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  67 affinity:00000002 vec:3a 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:301(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  68 affinity:00000004 vec:b8 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:300(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  69 affinity:00000001 vec:2a 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  70 affinity:00000002 vec:33 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:298(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  71 affinity:00000100 vec:c2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:297(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  72 affinity:00000004 vec:9a 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:296(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  73 affinity:00000004 vec:d0 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:295(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  74 affinity:00000004 vec:da 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:294(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  75 affinity:00000001 vec:b2 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  76 affinity:00000002 vec:6b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:292(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  77 affinity:00000004 vec:68 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:291(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  78 affinity:00000004 vec:b2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:290(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  79 affinity:00000002 vec:53 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:289(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  80 affinity:00000004 vec:78 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:288(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  81 affinity:00000001 vec:4b 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  82 affinity:00000004 vec:a7 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:286(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  83 affinity:00000004 vec:63 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:285(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  84 affinity:00000100 vec:6f 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:284(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  85 affinity:00000004 vec:5b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:283(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  86 affinity:00000004 vec:99 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:282(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  87 affinity:00000001 vec:a3 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  88 affinity:00000100 vec:73 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:280(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  89 affinity:00000004 vec:58 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:279(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  90 affinity:00000100 vec:aa 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:278(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  91 affinity:00000004 vec:38 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:277(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  92 affinity:00000004 vec:8f 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:276(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  93 affinity:00000001 vec:3c 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ:  94 affinity:00000002 vec:3b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:274(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  95 affinity:00000004 vec:ca 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:273(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  96 affinity:00000004 vec:a8 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:272(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  97 affinity:00000004 vec:32 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:271(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  98 affinity:00000100 vec:23 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:270(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  99 affinity:00000001 vec:94 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ: 100 affinity:00000002 vec:7b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:268(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 101 affinity:00000004 vec:60 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:267(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 102 affinity:00000004 vec:a2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:266(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 103 affinity:00000002 vec:4b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:265(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 104 affinity:00000002 vec:a1 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:264(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 105 affinity:00000001 vec:2d 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22]    IRQ: 106 affinity:00000100 vec:43 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:262(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 107 affinity:00000004 vec:d2 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:261(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 108 affinity:00000100 vec:61 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:260(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 109 affinity:00000004 vec:21 
type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:259(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 110 affinity:00000100 vec:2b 
type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:258(---),
(XEN) [2014-02-05 20:15:22]    IRQ: 111 affinity:00000001 vec:85 
type=PCI-MSI/-X      status=00000002 mapped, unbound
(XEN) [2014-02-05 20:15:22] IO-APIC interrupt information:
(XEN) [2014-02-05 20:15:22]     IRQ  0 Vec240:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  2: vec=f0 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  1 Vec 48:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  1: vec=30 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  3 Vec 56:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  3: vec=38 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  4 Vec241:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  4: vec=f1 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  5 Vec 64:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  5: vec=40 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  6 Vec 72:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  6: vec=48 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  7 Vec 80:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  7: vec=50 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  8 Vec 88:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  8: vec=58 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ  9 Vec 96:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin  9: vec=60 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=L mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 10 Vec104:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 10: vec=68 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 11 Vec112:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 11: vec=70 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 12 Vec120:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 12: vec=78 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 13 Vec136:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 13: vec=88 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 14 Vec144:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 14: vec=90 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 15 Vec152:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 15: vec=98 
delivery=Fixed dest=P status=0 polarity=0 irr=0 trig=E mask=0 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 17 Vec 89:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 17: vec=59 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 18 Vec 97:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 18: vec=61 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 19 Vec185:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 19: vec=b9 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 20 Vec113:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 20: vec=71 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:50
(XEN) [2014-02-05 20:15:22]     IRQ 21 Vec109:
(XEN) [2014-02-05 20:15:22]       Apic 0x00, Pin 21: vec=6d 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=0 dest_id:2
(XEN) [2014-02-05 20:15:22]     IRQ 32 Vec121:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  0: vec=79 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 33 Vec137:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  1: vec=89 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 36 Vec 65:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  4: vec=41 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 38 Vec153:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  6: vec=99 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 39 Vec169:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin  7: vec=a9 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 42 Vec129:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 10: vec=81 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 43 Vec145:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 11: vec=91 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 45 Vec161:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 13: vec=a1 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 47 Vec177:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 15: vec=b1 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32
(XEN) [2014-02-05 20:15:22]     IRQ 53 Vec160:
(XEN) [2014-02-05 20:15:22]       Apic 0x01, Pin 21: vec=a0 
delivery=Fixed dest=P status=0 polarity=1 irr=0 trig=L mask=1 dest_id:32


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:28:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB94g-0008Tw-An; Wed, 05 Feb 2014 20:27:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WB94e-0008Tq-Rj
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 20:27:57 +0000
Received: from [85.158.143.35:35634] by server-2.bemta-4.messagelabs.com id
	EA/F8-10891-CCE92F25; Wed, 05 Feb 2014 20:27:56 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391632072!3441067!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14325 invoked from network); 5 Feb 2014 20:27:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 20:27:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98393106"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 20:27:52 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	15:27:49 -0500
Message-ID: <52F29EC2.7040503@citrix.com>
Date: Wed, 5 Feb 2014 20:27:46 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Michael Chan <mchan@broadcom.com>
References: <52EAA31B.1090606@schaman.hu>	
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>	
	<52EBA51E.808@citrix.com>
	<1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
	<52F29DDC.7010908@citrix.com>
In-Reply-To: <52F29DDC.7010908@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 20:23, Zoltan Kiss wrote:
> On 04/02/14 19:47, Michael Chan wrote:
>> On Fri, 2014-01-31 at 14:29 +0100, Zoltan Kiss wrote:
>>> [ 5417.275472] WARNING: at net/sched/sch_generic.c:255
>>> dev_watchdog+0x156/0x1f0()
>>> [ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
>>
>> The dump shows an internal IRQ pending on MSIX vector 2 which matches
>> the queue number that is timing out.  I don't know what happened to
>> the MSIX and why the driver is not seeing it.  Do you see an IRQ error
>> message from the kernel a few seconds before the tx timeout message?
>
> I haven't seen any IRQ related error message. Note, this is on Xen
> 4.3.1. Now I have new results with a reworked version of the patch,
> unfortunately it still has this issue. Here is a bnx2 dump, lspci output
> and some Xen debug output (MSI and interrupt bindings, I have more if
> needed).

And here is the watchdog message and the first dump, if it matters:

[10118.282007] ------------[ cut here ]------------
[10118.282018] WARNING: at net/sched/sch_generic.c:255 
dev_watchdog+0x156/0x1f0()
[10118.282021] NETDEV WATCHDOG: eth0 (bnx2): transmit queue 0 timed out
[10118.282024] Modules linked in: tun nfsv3 nfs_acl rpcsec_gss_krb5 
auth_rpcgss oid_registry nfsv4 nfs fscache lockd sunrpc ipv6 openvswitch(O)
frag_ipv4 xt_state nf_conntrack xt_tcpudp iptable_filter ip_tables 
x_tables sr_mod cdrom nls_utf8 isofs dm_multipath scsi_dh dm_mod 
usb_storage
lk_helper cryptd lrw aes_i586 xts gf128mul coretemp microcode 
hid_generic lpc_ich mfd_core ehci_pci ehci_hcd i7core_edac edac_core 
bnx2 sg hed u
scsi_transport_sas raid_class scsi_mod
[10118.282083] CPU: 2 PID: 0 Comm: swapper/2 Tainted: G           O 
3.10.11-0.xs1.8.50.175.377583 #1
[10118.282086] Hardware name: Dell Inc. PowerEdge M710HD/05GGXD, BIOS 
2.0.0 01/31/2011
[10118.282089]  000000ff ee0a5dd0 c1488cd3 ee0a5df8 c1046664 c1658a88 
ee0a5e24 000000ff
[10118.282097]  c13fc1c6 c13fc1c6 ec778000 00000000 00256a1c ee0a5e10 
c1046723 00000009
[10118.282104]  ee0a5e08 c1658a88 ee0a5e24 ee0a5e48 c13fc1c6 c16556e1 
000000ff c1658a88
[10118.282112] Call Trace:
[10118.282118]  [<c1488cd3>] dump_stack+0x16/0x1b
[10118.282125]  [<c1046664>] warn_slowpath_common+0x64/0x80
[10118.282129]  [<c13fc1c6>] ? dev_watchdog+0x156/0x1f0
[10118.282133]  [<c13fc1c6>] ? dev_watchdog+0x156/0x1f0
[10118.282137]  [<c1046723>] warn_slowpath_fmt+0x33/0x40
[10118.282141]  [<c13fc1c6>] dev_watchdog+0x156/0x1f0
[10118.282149]  [<c10549ce>] call_timer_fn+0x3e/0xf0
[10118.282155]  [<c10013a7>] ? xen_hypercall_sched_op+0x7/0x20
[10118.282159]  [<c13fc070>] ? __netdev_watchdog_up+0x60/0x60
[10118.282164]  [<c1055c1b>] run_timer_softirq+0x1ab/0x210
[10118.282169]  [<c10be4fd>] ? irq_get_irq_data+0xd/0x10
[10118.282176]  [<c130fb2d>] ? info_for_irq+0xd/0x20
[10118.282180]  [<c13fc070>] ? __netdev_watchdog_up+0x60/0x60
[10118.282184]  [<c104e3f4>] __do_softirq+0xc4/0x200
[10118.282189]  [<c1312316>] ? evtchn_fifo_handle_events+0xf6/0x120
[10118.282193]  [<c104e5bd>] irq_exit+0x3d/0x90
[10118.282198]  [<c130fe55>] xen_evtchn_do_upcall+0x25/0x40
[10118.282203]  [<c14935c7>] xen_do_upcall+0x7/0xc
[10118.282207]  [<c10013a7>] ? xen_hypercall_sched_op+0x7/0x20
[10118.282213]  [<c1007f12>] ? xen_safe_halt+0x12/0x20
[10118.282218]  [<c1015eff>] default_idle+0x3f/0xb0
[10118.282222]  [<c1015a17>] arch_cpu_idle+0x17/0x30
[10118.282229]  [<c108f591>] cpu_startup_entry+0x141/0x1f0
[10118.282234]  [<c147d11b>] cpu_bringup_and_idle+0x12/0x14
[10118.282237] ---[ end trace 25ed24391f6c7acd ]---
[10118.282242] bnx2 0000:02:00.0 eth0: <--- start FTQ dump --->
[10118.282267] bnx2 0000:02:00.0 eth0: RV2P_PFTQ_CTL 00010000
[10118.282277] bnx2 0000:02:00.0 eth0: RV2P_TFTQ_CTL 00020000
[10118.282288] bnx2 0000:02:00.0 eth0: RV2P_MFTQ_CTL 00004000
[10118.282298] bnx2 0000:02:00.0 eth0: TBDR_FTQ_CTL 00004002
[10118.282309] bnx2 0000:02:00.0 eth0: TDMA_FTQ_CTL 00010002
[10118.282319] bnx2 0000:02:00.0 eth0: TXP_FTQ_CTL 01810002
[10118.282330] bnx2 0000:02:00.0 eth0: TXP_FTQ_CTL 01810002
[10118.282340] bnx2 0000:02:00.0 eth0: TPAT_FTQ_CTL 00010002
[10118.282372] bnx2 0000:02:00.0 eth0: RXP_CFTQ_CTL 00008000
[10118.282383] bnx2 0000:02:00.0 eth0: RXP_FTQ_CTL 00100000
[10118.282392] bnx2 0000:02:00.0 eth0: COM_COMXQ_FTQ_CTL 00010000
[10118.282403] bnx2 0000:02:00.0 eth0: COM_COMTQ_FTQ_CTL 00020000
[10118.282414] bnx2 0000:02:00.0 eth0: COM_COMQ_FTQ_CTL 00010000
[10118.282425] bnx2 0000:02:00.0 eth0: CP_CPQ_FTQ_CTL 00004000
[10118.282434] bnx2 0000:02:00.0 eth0: CPU states:
[10118.282449] bnx2 0000:02:00.0 eth0: 045000 mode b84c state 80001000 
evt_mask 500 pc 8000844 pc 80012bc instr a0e00012
[10118.282471] bnx2 0000:02:00.0 eth0: 085000 mode b84c state 80001000 
evt_mask 500 pc 8000a50 pc 8000ac4 instr 38420001
[10118.282493] bnx2 0000:02:00.0 eth0: 0c5000 mode b84c state 80001000 
evt_mask 500 pc 8004c14 pc 8004c18 instr 32070001
[10118.282515] bnx2 0000:02:00.0 eth0: 105000 mode b8cc state 80000000 
evt_mask 500 pc 8000a9c pc 8000b28 instr 8c530000
[10118.282537] bnx2 0000:02:00.0 eth0: 145000 mode b880 state 80000000 
evt_mask 500 pc 800d1a8 pc 800af74 instr 441010a
[10118.282560] bnx2 0000:02:00.0 eth0: 185000 mode b8cc state 80000000 
evt_mask 500 pc 8000918 pc 8000928 instr 8f870048
[10118.282577] bnx2 0000:02:00.0 eth0: <--- end FTQ dump --->
[10118.282586] bnx2 0000:02:00.0 eth0: <--- start TBDC dump --->
[10118.282597] bnx2 0000:02:00.0 eth0: TBDC free cnt: 31
[10118.282607] bnx2 0000:02:00.0 eth0: LINE     CID  BIDX   CMD  VALIDS
[10118.282622] bnx2 0000:02:00.0 eth0: 00    000800  a3b8   00    [1]
[10118.282637] bnx2 0000:02:00.0 eth0: 01    001100  1b58   00    [0]
[10118.282652] bnx2 0000:02:00.0 eth0: 02    000800  a390   00    [0]
[10118.282667] bnx2 0000:02:00.0 eth0: 03    000800  a370   00    [0]
[10118.282682] bnx2 0000:02:00.0 eth0: 04    000800  a378   00    [0]
[10118.282696] bnx2 0000:02:00.0 eth0: 05    000800  a388   00    [0]
[10118.282711] bnx2 0000:02:00.0 eth0: 06    000800  a398   00    [0]
[10118.282726] bnx2 0000:02:00.0 eth0: 07    000800  a3a8   00    [0]
[10118.282741] bnx2 0000:02:00.0 eth0: 08    000800  a3b0   00    [0]
[10118.282756] bnx2 0000:02:00.0 eth0: 09    000800  a3b8   00    [0]
[10118.282771] bnx2 0000:02:00.0 eth0: 0a    000800  8c10   00    [0]
[10118.282785] bnx2 0000:02:00.0 eth0: 0b    000800  eaf0   00    [0]
[10118.282800] bnx2 0000:02:00.0 eth0: 0c    000800  eaf8   00    [0]
[10118.282815] bnx2 0000:02:00.0 eth0: 0d    001100  5e60   00    [0]
[10118.282830] bnx2 0000:02:00.0 eth0: 0e    001100  5e68   00    [0]
[10118.282845] bnx2 0000:02:00.0 eth0: 0f    001100  5e70   00    [0]
[10118.282860] bnx2 0000:02:00.0 eth0: 10    001100  5e88   00    [0]
[10118.282875] bnx2 0000:02:00.0 eth0: 11    001100  5e90   00    [0]
[10118.282890] bnx2 0000:02:00.0 eth0: 12    001100  5ee8   00    [0]
[10118.282905] bnx2 0000:02:00.0 eth0: 13    001100  5ef8   00    [0]
[10118.282920] bnx2 0000:02:00.0 eth0: 14    001100  5e00   00    [0]
[10118.282935] bnx2 0000:02:00.0 eth0: 15    001100  5a20   00    [0]
[10118.282950] bnx2 0000:02:00.0 eth0: 16    001100  59a8   00    [0]
[10118.282964] bnx2 0000:02:00.0 eth0: 17    001100  59b0   00    [0]
[10118.282979] bnx2 0000:02:00.0 eth0: 18    001100  59b8   00    [0]
[10118.282994] bnx2 0000:02:00.0 eth0: 19    001100  5a28   00    [0]
[10118.283009] bnx2 0000:02:00.0 eth0: 1a    001100  5a30   00    [0]
[10118.283024] bnx2 0000:02:00.0 eth0: 1b    000800  8c58   00    [0]
[10118.283038] bnx2 0000:02:00.0 eth0: 1c    000800  8c60   00    [0]
[10118.283053] bnx2 0000:02:00.0 eth0: 1d    055e80  dca8   fb    [0]
[10118.283068] bnx2 0000:02:00.0 eth0: 1e    1cf780  f7b8   af    [0]
[10118.283083] bnx2 0000:02:00.0 eth0: 1f    1dff80  efe0   bf    [0]
[10118.283094] bnx2 0000:02:00.0 eth0: <--- end TBDC dump --->
[10118.283111] bnx2 0000:02:00.0 eth0: DEBUG: intr_sem[0] PCI_CMD[00100406]
[10118.283128] bnx2 0000:02:00.0 eth0: DEBUG: PCI_PM[19002008] 
PCI_MISC_CFG[92000088]
[10118.283143] bnx2 0000:02:00.0 eth0: DEBUG: EMAC_TX_STATUS[00000008] 
EMAC_RX_STATUS[00000000]
[10118.283159] bnx2 0000:02:00.0 eth0: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[10118.283170] bnx2 0000:02:00.0 eth0: DEBUG: 
HC_STATS_INTERRUPT_STATUS[01fe0001]
[10118.283184] bnx2 0000:02:00.0 eth0: DEBUG: PBA[00000000]
[10118.283194] bnx2 0000:02:00.0 eth0: <--- start MCP states dump --->
[10118.283207] bnx2 0000:02:00.0 eth0: DEBUG: MCP_STATE_P0[0003610e] 
MCP_STATE_P1[0003610e]
[10118.283224] bnx2 0000:02:00.0 eth0: DEBUG: MCP mode[0000b880] 
state[80004000] evt_mask[00000500]
[10118.283242] bnx2 0000:02:00.0 eth0: DEBUG: pc[0800d244] pc[0800b0ac] 
instr[00000000]
[10118.283255] bnx2 0000:02:00.0 eth0: DEBUG: shmem states:
[10118.283268] bnx2 0000:02:00.0 eth0: DEBUG: drv_mb[0d000005] 
fw_mb[00000005] link_status[0010026f]
[10118.283283]  drv_pulse_mb[00002768]
[10118.283288] bnx2 0000:02:00.0 eth0: DEBUG: 
dev_info_signature[44564903] reset_type[01005254]
[10118.283302]  condition[0003610e]
[10118.283310] bnx2 0000:02:00.0 eth0: DEBUG: 000001c0: 01005254 
42530088 0003610e 00000000
[10118.283328] bnx2 0000:02:00.0 eth0: DEBUG: 000003cc: 44444444 
44444444 44444444 00000a28
[10118.283346] bnx2 0000:02:00.0 eth0: DEBUG: 000003dc: 000cffff 
00000000 ffff0000 00000000
[10118.283364] bnx2 0000:02:00.0 eth0: DEBUG: 000003ec: 00000000 
00000000 00000000 00000000
[10118.283379] bnx2 0000:02:00.0 eth0: DEBUG: 0x3fc[0000ffff]
[10118.283389] bnx2 0000:02:00.0 eth0: <--- end MCP states dump --->


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

[10118.282471] bnx2 0000:02:00.0 eth0: 085000 mode b84c state 80001000 
evt_mask 500 pc 8000a50 pc 8000ac4 instr 38420001
[10118.282493] bnx2 0000:02:00.0 eth0: 0c5000 mode b84c state 80001000 
evt_mask 500 pc 8004c14 pc 8004c18 instr 32070001
[10118.282515] bnx2 0000:02:00.0 eth0: 105000 mode b8cc state 80000000 
evt_mask 500 pc 8000a9c pc 8000b28 instr 8c530000
[10118.282537] bnx2 0000:02:00.0 eth0: 145000 mode b880 state 80000000 
evt_mask 500 pc 800d1a8 pc 800af74 instr 441010a
[10118.282560] bnx2 0000:02:00.0 eth0: 185000 mode b8cc state 80000000 
evt_mask 500 pc 8000918 pc 8000928 instr 8f870048
[10118.282577] bnx2 0000:02:00.0 eth0: <--- end FTQ dump --->
[10118.282586] bnx2 0000:02:00.0 eth0: <--- start TBDC dump --->
[10118.282597] bnx2 0000:02:00.0 eth0: TBDC free cnt: 31
[10118.282607] bnx2 0000:02:00.0 eth0: LINE     CID  BIDX   CMD  VALIDS
[10118.282622] bnx2 0000:02:00.0 eth0: 00    000800  a3b8   00    [1]
[10118.282637] bnx2 0000:02:00.0 eth0: 01    001100  1b58   00    [0]
[10118.282652] bnx2 0000:02:00.0 eth0: 02    000800  a390   00    [0]
[10118.282667] bnx2 0000:02:00.0 eth0: 03    000800  a370   00    [0]
[10118.282682] bnx2 0000:02:00.0 eth0: 04    000800  a378   00    [0]
[10118.282696] bnx2 0000:02:00.0 eth0: 05    000800  a388   00    [0]
[10118.282711] bnx2 0000:02:00.0 eth0: 06    000800  a398   00    [0]
[10118.282726] bnx2 0000:02:00.0 eth0: 07    000800  a3a8   00    [0]
[10118.282741] bnx2 0000:02:00.0 eth0: 08    000800  a3b0   00    [0]
[10118.282756] bnx2 0000:02:00.0 eth0: 09    000800  a3b8   00    [0]
[10118.282771] bnx2 0000:02:00.0 eth0: 0a    000800  8c10   00    [0]
[10118.282785] bnx2 0000:02:00.0 eth0: 0b    000800  eaf0   00    [0]
[10118.282800] bnx2 0000:02:00.0 eth0: 0c    000800  eaf8   00    [0]
[10118.282815] bnx2 0000:02:00.0 eth0: 0d    001100  5e60   00    [0]
[10118.282830] bnx2 0000:02:00.0 eth0: 0e    001100  5e68   00    [0]
[10118.282845] bnx2 0000:02:00.0 eth0: 0f    001100  5e70   00    [0]
[10118.282860] bnx2 0000:02:00.0 eth0: 10    001100  5e88   00    [0]
[10118.282875] bnx2 0000:02:00.0 eth0: 11    001100  5e90   00    [0]
[10118.282890] bnx2 0000:02:00.0 eth0: 12    001100  5ee8   00    [0]
[10118.282905] bnx2 0000:02:00.0 eth0: 13    001100  5ef8   00    [0]
[10118.282920] bnx2 0000:02:00.0 eth0: 14    001100  5e00   00    [0]
[10118.282935] bnx2 0000:02:00.0 eth0: 15    001100  5a20   00    [0]
[10118.282950] bnx2 0000:02:00.0 eth0: 16    001100  59a8   00    [0]
[10118.282964] bnx2 0000:02:00.0 eth0: 17    001100  59b0   00    [0]
[10118.282979] bnx2 0000:02:00.0 eth0: 18    001100  59b8   00    [0]
[10118.282994] bnx2 0000:02:00.0 eth0: 19    001100  5a28   00    [0]
[10118.283009] bnx2 0000:02:00.0 eth0: 1a    001100  5a30   00    [0]
[10118.283024] bnx2 0000:02:00.0 eth0: 1b    000800  8c58   00    [0]
[10118.283038] bnx2 0000:02:00.0 eth0: 1c    000800  8c60   00    [0]
[10118.283053] bnx2 0000:02:00.0 eth0: 1d    055e80  dca8   fb    [0]
[10118.283068] bnx2 0000:02:00.0 eth0: 1e    1cf780  f7b8   af    [0]
[10118.283083] bnx2 0000:02:00.0 eth0: 1f    1dff80  efe0   bf    [0]
[10118.283094] bnx2 0000:02:00.0 eth0: <--- end TBDC dump --->
[10118.283111] bnx2 0000:02:00.0 eth0: DEBUG: intr_sem[0] PCI_CMD[00100406]
[10118.283128] bnx2 0000:02:00.0 eth0: DEBUG: PCI_PM[19002008] 
PCI_MISC_CFG[92000088]
[10118.283143] bnx2 0000:02:00.0 eth0: DEBUG: EMAC_TX_STATUS[00000008] 
EMAC_RX_STATUS[00000000]
[10118.283159] bnx2 0000:02:00.0 eth0: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[10118.283170] bnx2 0000:02:00.0 eth0: DEBUG: 
HC_STATS_INTERRUPT_STATUS[01fe0001]
[10118.283184] bnx2 0000:02:00.0 eth0: DEBUG: PBA[00000000]
[10118.283194] bnx2 0000:02:00.0 eth0: <--- start MCP states dump --->
[10118.283207] bnx2 0000:02:00.0 eth0: DEBUG: MCP_STATE_P0[0003610e] 
MCP_STATE_P1[0003610e]
[10118.283224] bnx2 0000:02:00.0 eth0: DEBUG: MCP mode[0000b880] 
state[80004000] evt_mask[00000500]
[10118.283242] bnx2 0000:02:00.0 eth0: DEBUG: pc[0800d244] pc[0800b0ac] 
instr[00000000]
[10118.283255] bnx2 0000:02:00.0 eth0: DEBUG: shmem states:
[10118.283268] bnx2 0000:02:00.0 eth0: DEBUG: drv_mb[0d000005] 
fw_mb[00000005] link_status[0010026f]
[10118.283283]  drv_pulse_mb[00002768]
[10118.283288] bnx2 0000:02:00.0 eth0: DEBUG: 
dev_info_signature[44564903] reset_type[01005254]
[10118.283302]  condition[0003610e]
[10118.283310] bnx2 0000:02:00.0 eth0: DEBUG: 000001c0: 01005254 
42530088 0003610e 00000000
[10118.283328] bnx2 0000:02:00.0 eth0: DEBUG: 000003cc: 44444444 
44444444 44444444 00000a28
[10118.283346] bnx2 0000:02:00.0 eth0: DEBUG: 000003dc: 000cffff 
00000000 ffff0000 00000000
[10118.283364] bnx2 0000:02:00.0 eth0: DEBUG: 000003ec: 00000000 
00000000 00000000 00000000
[10118.283379] bnx2 0000:02:00.0 eth0: DEBUG: 0x3fc[0000ffff]
[10118.283389] bnx2 0000:02:00.0 eth0: <--- end MCP states dump --->


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:38:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Ex-0000XT-MV; Wed, 05 Feb 2014 20:38:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Ew-0000XO-EF
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:38:34 +0000
Received: from [85.158.139.211:2713] by server-8.bemta-5.messagelabs.com id
	88/27-05298-941A2F25; Wed, 05 Feb 2014 20:38:33 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391632711!1937835!1
X-Originating-IP: [216.32.180.16]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13387 invoked from network); 5 Feb 2014 20:38:32 -0000
Received: from va3ehsobe006.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.16)
	by server-15.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:38:32 -0000
Received: from mail130-va3-R.bigfish.com (10.7.14.231) by
	VA3EHSOBE002.bigfish.com (10.7.40.22) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:38:31 +0000
Received: from mail130-va3 (localhost [127.0.0.1])	by
	mail130-va3-R.bigfish.com (Postfix) with ESMTP id 64A343801AA;
	Wed,  5 Feb 2014 20:38:31 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(zz13e6Kzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h177df4h8275eh17326ah8275bh1de097h186068ha1495iz2dh839he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail130-va3 (localhost.localdomain [127.0.0.1]) by mail130-va3
	(MessageSwitch) id 1391632709523360_13497;
	Wed,  5 Feb 2014 20:38:29 +0000 (UTC)
Received: from VA3EHSMHS019.bigfish.com (unknown [10.7.14.251])	by
	mail130-va3.bigfish.com (Postfix) with ESMTP id 6CE2E360053;
	Wed,  5 Feb 2014 20:38:29 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by VA3EHSMHS019.bigfish.com
	(10.7.99.29) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	20:38:23 +0000
X-WSS-ID: 0N0JINU-08-BV3-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2937FD22015;	Wed,  5 Feb 2014 14:38:17 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:38:40 -0600
Received: from autotest-xen-olivehill.amd.com (10.180.168.240) by
	satlexdag05.amd.com (10.181.40.11) with Microsoft SMTP Server id
	14.2.328.9; Wed, 5 Feb 2014 15:38:21 -0500
From: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
To: <jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>
Date: Wed, 5 Feb 2014 14:59:32 -0600
Message-ID: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.8.1.2
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH] x86/AMD: Apply workaround for AMD F16h
	Erratum 792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The workaround for the erratum will only be present in BIOSes
released from January 2014 onwards, but initial production parts
already shipped in 2013. Since this leaves a coverage hole, we should
carry the fix in software in case the BIOS does not do the right
thing or an older BIOS is in use.

Refer to the Revision Guide for AMD F16h models 00h-0fh, document
51810 Rev. 3.04, November 2013, for details on the erratum.

Tested the patch on a Fam16h server platform; it works fine.

Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

---
 xen/arch/x86/cpu/amd.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 3307141..f2780c4 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -369,6 +369,7 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
 	u32 l, h;
 
 	unsigned long long value;
+	u32 pci_val;
 
 	/* Disable TLB flush filter by setting HWCR.FFDIS on K8
 	 * bit 6 of msr C001_0015
@@ -477,6 +478,35 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
 		       " all your (PV) guest kernels. ***\n");
 
 	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
+        /*
+         * Apply workaround for erratum 792
+         * Description:
+         * Processor does not ensure DRAM scrub read/write sequence
+         * is atomic wrt accesses to CC6 save state area. Therefore
+         * if a concurrent scrub read/write access is to the same
+         * address, the entry may appear as if it is not written.
+         * This quirk applies to Fam16h models 00h-0Fh.
+         *
+         * See "Revision Guide" for AMD F16h models 00h-0fh,
+         * document 51810 rev. 3.04, Nov 2013
+         *
+         * Equivalent Linux patch link:
+         * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
+         */
+        if (smp_processor_id() == 0) {
+            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
+            if (pci_val & 0x1f) {
+                pci_val &= ~(0x1f);
+                pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
+            }
+
+            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
+            if (pci_val & 0x1) {
+                pci_val &= ~(0x1);
+                pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
+            }
+        }
+
 		rdmsrl(MSR_AMD64_LS_CFG, value);
 		if (!(value & (1 << 15))) {
 			static bool_t warned;
-- 
1.8.1.2



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:41:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Ha-0000iI-9L; Wed, 05 Feb 2014 20:41:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9HZ-0000i7-0b
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:41:17 +0000
Received: from [85.158.143.35:33173] by server-3.bemta-4.messagelabs.com id
	D4/15-11539-CE1A2F25; Wed, 05 Feb 2014 20:41:16 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391632873!3436557!1
X-Originating-IP: [216.32.180.184]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7466 invoked from network); 5 Feb 2014 20:41:15 -0000
Received: from co1ehsobe001.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.184)
	by server-12.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:41:15 -0000
Received: from mail84-co1-R.bigfish.com (10.243.78.227) by
	CO1EHSOBE029.bigfish.com (10.243.66.94) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:41:13 +0000
Received: from mail84-co1 (localhost [127.0.0.1])	by mail84-co1-R.bigfish.com
	(Postfix) with ESMTP id 20187CC00A5;
	Wed,  5 Feb 2014 20:41:13 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(z579ehzbb2dI98dI9371I1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail84-co1 (localhost.localdomain [127.0.0.1]) by mail84-co1
	(MessageSwitch) id 1391632871254184_15331;
	Wed,  5 Feb 2014 20:41:11 +0000 (UTC)
Received: from CO1EHSMHS015.bigfish.com (unknown [10.243.78.244])	by
	mail84-co1.bigfish.com (Postfix) with ESMTP id 3047D70004C;
	Wed,  5 Feb 2014 20:41:11 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CO1EHSMHS015.bigfish.com
	(10.243.66.25) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 5 Feb 2014 20:41:11 +0000
X-WSS-ID: 0N0JISK-07-02J-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	28DA4CAE642;	Wed,  5 Feb 2014 14:41:07 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:41:26 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 15:41:08 -0500
Message-ID: <52F2A1E4.9030700@amd.com>
Date: Wed, 5 Feb 2014 14:41:08 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E7A17D020000780011784E@nat28.tlf.novell.com>
In-Reply-To: <52E7A17D020000780011784E@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2 V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/28/2014 5:24 AM, Jan Beulich wrote:
>>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>   
>>       *val = 0;
>>   
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
> As one of the other reviewers already said - 0xC0000000 would
> be better recognizable here.
>
> As to the 3 -> 0x13 change - I don't think this is conceptually
> correct. While at present we emulate only 2 banks, this had
> been different in the past and may become different again.
> Hence introducing a dis-contiguity after bank 3 is undesirable.
>
>

IMHO, including the '0x13' is necessary. The reason is that 0x413,
0xc0000408 and 0xc0000409 together form the set of MC4 thresholding
registers. Not including 0x13 in the mask would mean that accesses to
0x413 alone would not be handled (which would be confusing to someone
new looking into the mce codebase).

Also, (in response to Boris's comments) - AFAICT, this should not
affect the Intel codepath. Intel's vmce_* functions only care about
MSR_IA32_MC0_CTL2 = 0x00000280 (if bank 0) or 0x00000281 (if bank 1),
and having 0x13 in the mask does not affect the ability of the
bank_mce_[rd|wr]msr functions to call into the vmce_intel*
functions.
(I haven't tested this, so if someone could test and let me know,
that'd be great.)

I have addressed Christoph's and Boris.O's earlier comments too in the 
next version of this patch.


Thanks,
-Aravind.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:41:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Hb-0000iT-Lx; Wed, 05 Feb 2014 20:41:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9HZ-0000i9-LG
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:41:17 +0000
Received: from [85.158.143.35:33227] by server-1.bemta-4.messagelabs.com id
	84/F9-31661-CE1A2F25; Wed, 05 Feb 2014 20:41:16 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391632875!3439718!1
X-Originating-IP: [216.32.180.11]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 550 invoked from network); 5 Feb 2014 20:41:15 -0000
Received: from va3ehsobe001.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.11)
	by server-4.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:41:15 -0000
Received: from mail11-va3-R.bigfish.com (10.7.14.228) by
	VA3EHSOBE011.bigfish.com (10.7.40.61) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:41:14 +0000
Received: from mail11-va3 (localhost [127.0.0.1])	by mail11-va3-R.bigfish.com
	(Postfix) with ESMTP id 5FFF432007B;
	Wed,  5 Feb 2014 20:41:14 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -9
X-BigFish: VPS-9(z579ehzbb2dI98dI9371I148cIe0eah1432I111aIzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h17326ah8275bh1de097h186068h5eeeKz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail11-va3 (localhost.localdomain [127.0.0.1]) by mail11-va3
	(MessageSwitch) id 1391632872515586_27675;
	Wed,  5 Feb 2014 20:41:12 +0000 (UTC)
Received: from VA3EHSMHS031.bigfish.com (unknown [10.7.14.246])	by
	mail11-va3.bigfish.com (Postfix) with ESMTP id 6A7642004C;
	Wed,  5 Feb 2014 20:41:12 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS031.bigfish.com
	(10.7.99.41) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	20:41:08 +0000
X-WSS-ID: 0N0JISI-07-02H-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2FE49CAE646;	Wed,  5 Feb 2014 14:41:05 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:41:24 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 15:41:05 -0500
Message-ID: <52F2A1E1.1010800@amd.com>
Date: Wed, 5 Feb 2014 14:41:05 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E79FEB0200007800117840@nat28.tlf.novell.com>
In-Reply-To: <52E79FEB0200007800117840@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/2 V2] hvm,
 svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/28/2014 5:17 AM, Jan Beulich wrote:
>>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
>> MSR 0xC000040A is marked as reserved from Fam15 onwards and
>> MSR 0x413 is marked as an alias of MSR 0xC000040A in the Fam10 BKDG.
>> So remove the unnecessary definition of the reserved MSR and
>> use MSR_IA32_MCx_MISC() to define MSR 0x413.
> My Fam10 BKDG version doesn't say anything like this. Instead it
> says that the low 32 bits of all 4 registers are identical (i.e. all are
> aliasing 0x413), whereas the high 32 bits are different among all
> the four registers (with 0xc000040a having them all zero).

Thanks for pointing this out; it looks like MSR 0xc000040a is zeroed out 
completely:
(F3x178 (MSRC000_040A): RAZ.) (page 339 of 
http://support.amd.com/TechDocs/31116.pdf)
I have reworded the commit message so that it (hopefully) conveys this better.

>> Also, according to BKDG, MSR 0x413 is the first of the thresholding
>> registers; MSR 0xC0000408 and MSR 0xC0000409 are second and third
>> respectively. So rework the #define's accordingly.
>>
>> Fam15 Model 00h-0fh  BKDG reference:
>> http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf
> Higher model numbers appear to also have 0xc0000409 reserved...
>
>

Yes, thanks again for the pointer..

I have now reworked the code to account for the existence of the extended 
block of MC4_MISC registers.
(Note: 0xc0000409 is reserved in newer models of F15h, while both 
0xc0000408 and 0xc0000409 are reserved in F16h.)
a) If the registers exist in HW, then we continue to enforce the current 
policy of not emulating them, as we do not expose MC4 to the guest.
b) If they don't, we inject a #GP fault into the guest.

-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:41:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Ha-0000iI-9L; Wed, 05 Feb 2014 20:41:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9HZ-0000i7-0b
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:41:17 +0000
Received: from [85.158.143.35:33173] by server-3.bemta-4.messagelabs.com id
	D4/15-11539-CE1A2F25; Wed, 05 Feb 2014 20:41:16 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391632873!3436557!1
X-Originating-IP: [216.32.180.184]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7466 invoked from network); 5 Feb 2014 20:41:15 -0000
Received: from co1ehsobe001.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.184)
	by server-12.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:41:15 -0000
Received: from mail84-co1-R.bigfish.com (10.243.78.227) by
	CO1EHSOBE029.bigfish.com (10.243.66.94) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:41:13 +0000
Received: from mail84-co1 (localhost [127.0.0.1])	by mail84-co1-R.bigfish.com
	(Postfix) with ESMTP id 20187CC00A5;
	Wed,  5 Feb 2014 20:41:13 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(z579ehzbb2dI98dI9371I1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail84-co1 (localhost.localdomain [127.0.0.1]) by mail84-co1
	(MessageSwitch) id 1391632871254184_15331;
	Wed,  5 Feb 2014 20:41:11 +0000 (UTC)
Received: from CO1EHSMHS015.bigfish.com (unknown [10.243.78.244])	by
	mail84-co1.bigfish.com (Postfix) with ESMTP id 3047D70004C;
	Wed,  5 Feb 2014 20:41:11 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CO1EHSMHS015.bigfish.com
	(10.243.66.25) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 5 Feb 2014 20:41:11 +0000
X-WSS-ID: 0N0JISK-07-02J-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	28DA4CAE642;	Wed,  5 Feb 2014 14:41:07 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:41:26 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 15:41:08 -0500
Message-ID: <52F2A1E4.9030700@amd.com>
Date: Wed, 5 Feb 2014 14:41:08 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E7A17D020000780011784E@nat28.tlf.novell.com>
In-Reply-To: <52E7A17D020000780011784E@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2 V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/28/2014 5:24 AM, Jan Beulich wrote:
>>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>   
>>       *val = 0;
>>   
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
> As one of the other reviewers already said - 0xC0000000 would
> be better recognizable here.
>
> As to the 3 -> 0x13 change - I don't think this is conceptually
> correct. While at present we emulate only 2 banks, this had
> been different in the past and may become different again.
> Hence introducing a dis-contiguity after bank 3 is undesirable.
>
>

IMHO, including the '0x13' is necessary. The reason is that 0x413, 
0xc0000408 and 0xc0000409 together form the set of MC4 thresholding 
registers. Not including 0x13 in the mask would mean that accesses to 
0x413 alone would not be handled (which would be confusing to someone 
new looking into the mce codebase).

Also (in response to Boris's comments) - AFAICT, this should not affect 
the Intel codepath. Intel's vmce_* functions only care about 
MSR_IA32_MC0_CTL2 = 0x00000280 (if bank 0) or 0x00000281 (if bank 1), 
and having 0x13 in the mask does not affect the ability of the 
bank_mce_[rd|wr]msr functions to call into the vmce_intel_* functions.
(I haven't tested this, so if someone could test and let me know, that'd 
be great.)

I have addressed Christoph's and Boris.O's earlier comments too in the 
next version of this patch.


Thanks,
-Aravind.




From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9KH-0000yw-Ea; Wed, 05 Feb 2014 20:44:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB9KG-0000yl-8S
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 20:44:04 +0000
Received: from [85.158.139.211:37499] by server-16.bemta-5.messagelabs.com id
	C5/38-05060-392A2F25; Wed, 05 Feb 2014 20:44:03 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391633041!1934231!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30064 invoked from network); 5 Feb 2014 20:44:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 20:44:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98398059"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 20:44:01 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 5 Feb 2014 15:44:00 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	21:43:58 +0100
Message-ID: <52F2A282.5040502@citrix.com>
Date: Wed, 5 Feb 2014 20:43:46 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>, Michael Chan <mchan@broadcom.com>
References: <52EAA31B.1090606@schaman.hu>	
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>	
	<52EBA51E.808@citrix.com>
	<1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
	<52F29DDC.7010908@citrix.com>
In-Reply-To: <52F29DDC.7010908@citrix.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>, Peter
	P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/2014 20:23, Zoltan Kiss wrote:
> On 04/02/14 19:47, Michael Chan wrote:
>> On Fri, 2014-01-31 at 14:29 +0100, Zoltan Kiss wrote:
>>> [ 5417.275472] WARNING: at net/sched/sch_generic.c:255
>>> dev_watchdog+0x156/0x1f0()
>>> [ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
>>
>> The dump shows an internal IRQ pending on MSIX vector 2 which matches
>> the queue number that is timing out.  I don't know what happened to
>> the MSIX and why the driver is not seeing it.  Do you see an IRQ error
>> message from the kernel a few seconds before the tx timeout message?
>
> I haven't seen any IRQ related error message. Note, this is on Xen
> 4.3.1. Now I have new results with a reworked version of the patch,
> unfortunately it still has this issue. Here is a bnx2 dump, lspci
> output and some Xen debug output (MSI and interrupt bindings, I have
> more if needed).

You need debug-keys 'Q' as well to map between the PCI devices and Xen IRQs.
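
For anyone following along, debug keys can be sent through the toolstack
on the dom0 command line (assuming a standard xl setup):

```shell
# Ask the hypervisor to dump its PCI device <-> IRQ mapping (debug key
# 'Q'), then read the result from the hypervisor console ring:
xl debug-keys Q
xl dmesg | tail -n 50
```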

~Andrew


From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Kl-00012m-Sp; Wed, 05 Feb 2014 20:44:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Kj-00012Q-Px
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:44:33 +0000
Received: from [85.158.143.35:55035] by server-2.bemta-4.messagelabs.com id
	CE/E2-10891-1B2A2F25; Wed, 05 Feb 2014 20:44:33 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391633071!3446490!1
X-Originating-IP: [216.32.180.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21358 invoked from network); 5 Feb 2014 20:44:32 -0000
Received: from va3ehsobe002.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.12)
	by server-11.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:44:32 -0000
Received: from mail206-va3-R.bigfish.com (10.7.14.226) by
	VA3EHSOBE012.bigfish.com (10.7.40.62) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:44:31 +0000
Received: from mail206-va3 (localhost [127.0.0.1])	by
	mail206-va3-R.bigfish.com (Postfix) with ESMTP id EEFF14200E3;
	Wed,  5 Feb 2014 20:44:30 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail206-va3 (localhost.localdomain [127.0.0.1]) by mail206-va3
	(MessageSwitch) id 1391633068869343_16267;
	Wed,  5 Feb 2014 20:44:28 +0000 (UTC)
Received: from VA3EHSMHS018.bigfish.com (unknown [10.7.14.237])	by
	mail206-va3.bigfish.com (Postfix) with ESMTP id C57D4C0004A;
	Wed,  5 Feb 2014 20:44:28 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by VA3EHSMHS018.bigfish.com
	(10.7.99.28) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	20:44:26 +0000
X-WSS-ID: 0N0JIXX-08-08R-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	269E4D16063;	Wed,  5 Feb 2014 14:44:21 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:44:43 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 5 Feb 2014 15:44:25 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:28:14 -0600
Message-ID: <1391632096-6209-3-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 2/3 V3] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers. But due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we were wrongly masking off the top two bits and bit 4, which meant
accesses to those registers never made it to the vmce_amd_* functions.

This patch corrects the problem by modifying the mask so that the AMD
thresholding registers fall through to the 'default' case, which in
turn allows the vmce_amd_* functions to handle accesses to them.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>
---
 xen/arch/x86/cpu/mcheck/vmce.c |    6 ++++--
 xen/arch/x86/cpu/mcheck/vmce.h |    3 +++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..841bd46 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | AMD_MC4_MISC1_INCLUDE_MASK |
+                    AMD_MC4_EXTENDED_MISC_INCLUDE_MASK) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +211,8 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | AMD_MC4_MISC1_INCLUDE_MASK |
+                    AMD_MC4_EXTENDED_MISC_INCLUDE_MASK) )
     {
     case MSR_IA32_MC0_CTL:
         /*
diff --git a/xen/arch/x86/cpu/mcheck/vmce.h b/xen/arch/x86/cpu/mcheck/vmce.h
index 6b2c95a..5e9b091 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.h
+++ b/xen/arch/x86/cpu/mcheck/vmce.h
@@ -8,6 +8,9 @@ int vmce_init(struct cpuinfo_x86 *c);
 #define dom0_vmce_enabled() (dom0 && dom0->max_vcpus && dom0->vcpu[0] \
         && guest_enabled_event(dom0->vcpu[0], VIRQ_MCA))
 
+#define AMD_MC4_MISC1_INCLUDE_MASK          0x13
+#define AMD_MC4_EXTENDED_MISC_INCLUDE_MASK  0xc0000000
+
 int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn);
 
 int vmce_intel_rdmsr(const struct vcpu *, uint32_t msr, uint64_t *val);
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Kl-00012m-Sp; Wed, 05 Feb 2014 20:44:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Kj-00012Q-Px
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:44:33 +0000
Received: from [85.158.143.35:55035] by server-2.bemta-4.messagelabs.com id
	CE/E2-10891-1B2A2F25; Wed, 05 Feb 2014 20:44:33 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391633071!3446490!1
X-Originating-IP: [216.32.180.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21358 invoked from network); 5 Feb 2014 20:44:32 -0000
Received: from va3ehsobe002.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.12)
	by server-11.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:44:32 -0000
Received: from mail206-va3-R.bigfish.com (10.7.14.226) by
	VA3EHSOBE012.bigfish.com (10.7.40.62) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:44:31 +0000
Received: from mail206-va3 (localhost [127.0.0.1])	by
	mail206-va3-R.bigfish.com (Postfix) with ESMTP id EEFF14200E3;
	Wed,  5 Feb 2014 20:44:30 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail206-va3 (localhost.localdomain [127.0.0.1]) by mail206-va3
	(MessageSwitch) id 1391633068869343_16267;
	Wed,  5 Feb 2014 20:44:28 +0000 (UTC)
Received: from VA3EHSMHS018.bigfish.com (unknown [10.7.14.237])	by
	mail206-va3.bigfish.com (Postfix) with ESMTP id C57D4C0004A;
	Wed,  5 Feb 2014 20:44:28 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by VA3EHSMHS018.bigfish.com
	(10.7.99.28) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	20:44:26 +0000
X-WSS-ID: 0N0JIXX-08-08R-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	269E4D16063;	Wed,  5 Feb 2014 14:44:21 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:44:43 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 5 Feb 2014 15:44:25 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:28:14 -0600
Message-ID: <1391632096-6209-3-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 2/3 V3] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers, but the statement

    switch ( msr & (MSR_IA32_MC0_CTL | 3) )

wrongly masks off the top two bits and bit 4 of the MSR address, so
accesses to those registers never reach the vmce_amd_* functions.

Correct this by widening the mask so that the AMD thresholding
registers fall through to the 'default' case, which in turn lets the
vmce_amd_* functions handle the accesses.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>
---
 xen/arch/x86/cpu/mcheck/vmce.c |    6 ++++--
 xen/arch/x86/cpu/mcheck/vmce.h |    3 +++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..841bd46 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | AMD_MC4_MISC1_INCLUDE_MASK |
+                    AMD_MC4_EXTENDED_MISC_INCLUDE_MASK) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +211,8 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (MSR_IA32_MC0_CTL | AMD_MC4_MISC1_INCLUDE_MASK |
+                    AMD_MC4_EXTENDED_MISC_INCLUDE_MASK) )
     {
     case MSR_IA32_MC0_CTL:
         /*
diff --git a/xen/arch/x86/cpu/mcheck/vmce.h b/xen/arch/x86/cpu/mcheck/vmce.h
index 6b2c95a..5e9b091 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.h
+++ b/xen/arch/x86/cpu/mcheck/vmce.h
@@ -8,6 +8,9 @@ int vmce_init(struct cpuinfo_x86 *c);
 #define dom0_vmce_enabled() (dom0 && dom0->max_vcpus && dom0->vcpu[0] \
         && guest_enabled_event(dom0->vcpu[0], VIRQ_MCA))
 
+#define AMD_MC4_MISC1_INCLUDE_MASK          0x13
+#define AMD_MC4_EXTENDED_MISC_INCLUDE_MASK  0xc0000000
+
 int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn);
 
 int vmce_intel_rdmsr(const struct vcpu *, uint32_t msr, uint64_t *val);
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Kp-000142-9V; Wed, 05 Feb 2014 20:44:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Kn-00013X-V6
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:44:38 +0000
Received: from [193.109.254.147:19669] by server-1.bemta-14.messagelabs.com id
	A2/C2-15438-5B2A2F25; Wed, 05 Feb 2014 20:44:37 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391633075!2297713!1
X-Originating-IP: [216.32.180.30]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19166 invoked from network); 5 Feb 2014 20:44:36 -0000
Received: from va3ehsobe010.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.30)
	by server-10.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:44:36 -0000
Received: from mail31-va3-R.bigfish.com (10.7.14.247) by
	VA3EHSOBE008.bigfish.com (10.7.40.28) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:44:35 +0000
Received: from mail31-va3 (localhost [127.0.0.1])	by mail31-va3-R.bigfish.com
	(Postfix) with ESMTP id 2EB4E2C0168;
	Wed,  5 Feb 2014 20:44:35 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h17326ah8275bh1de097h186068h5eeeKz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail31-va3 (localhost.localdomain [127.0.0.1]) by mail31-va3
	(MessageSwitch) id 139163307363650_7245; Wed,  5 Feb 2014 20:44:33 +0000
	(UTC)
Received: from VA3EHSMHS044.bigfish.com (unknown [10.7.14.229])	by
	mail31-va3.bigfish.com (Postfix) with ESMTP id F23741A005C;
	Wed,  5 Feb 2014 20:44:32 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS044.bigfish.com
	(10.7.99.54) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	20:44:25 +0000
X-WSS-ID: 0N0JIXZ-07-084-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	29DEBCAE64D;	Wed,  5 Feb 2014 14:44:23 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:44:42 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 5 Feb 2014 15:44:24 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:28:13 -0600
Message-ID: <1391632096-6209-2-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 1/3 V3] hvm,
	svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Defining MSR 0xC000040A is unnecessary: it is marked reserved from
Fam15h onwards and zeroed on Fam10h.

Also, according to the BKDG, MSR 0x413 is the first of the
thresholding registers; MSR 0xC0000408 and MSR 0xC0000409 are the
second and third respectively. So rework the #defines accordingly and
use MSR_IA32_MCx_MISC() to define MSR 0x413.

Fam10 BKDG reference:
http://support.amd.com/TechDocs/31116.pdf

Fam15 Model 00h-0fh BKDG reference:
http://support.amd.com/TechDocs/42301_15h_Mod_00h-0Fh_BKDG.pdf

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>
---
 xen/arch/x86/hvm/svm/svm.c      |   10 ++++++----
 xen/include/asm-x86/msr-index.h |    6 +++---
 2 files changed, 9 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..155c66e 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
         break;
 
-    case MSR_IA32_MCx_MISC(4): /* Threshold register */
-    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
+    case MSR_F10_MC4_MISC1: /* Threshold register */
+    case MSR_F10_MC4_MISC2:
+    case MSR_F10_MC4_MISC3:
         /*
          * MCA/MCE: We report that the threshold register is unavailable
          * for OS use (locked by the BIOS).
@@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         vpmu_do_wrmsr(msr, msr_content);
         break;
 
-    case MSR_IA32_MCx_MISC(4): /* Threshold register */
-    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
+    case MSR_F10_MC4_MISC1: /* Threshold register */
+    case MSR_F10_MC4_MISC2:
+    case MSR_F10_MC4_MISC3:
         /*
          * MCA/MCE: Threshold register is reported to be locked, so we ignore
          * all write accesses. This behaviour matches real HW, so guests should
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e5ffbf2 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -219,9 +219,9 @@
 #define AMD64_NB_CFG_CF8_EXT_ENABLE_BIT	46
 
 /* AMD Family10h machine check MSRs */
-#define MSR_F10_MC4_MISC1		0xc0000408
-#define MSR_F10_MC4_MISC2		0xc0000409
-#define MSR_F10_MC4_MISC3		0xc000040A
+#define MSR_F10_MC4_MISC1		MSR_IA32_MCx_MISC(4)
+#define MSR_F10_MC4_MISC2		0xc0000408
+#define MSR_F10_MC4_MISC3		0xc0000409
 
 /* AMD Family10h Bus Unit MSRs */
 #define MSR_F10_BU_CFG 		0xc0011023
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Kr-00014w-1m; Wed, 05 Feb 2014 20:44:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Kp-00013u-88
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:44:39 +0000
Received: from [193.109.254.147:38407] by server-5.bemta-14.messagelabs.com id
	0D/F3-16688-6B2A2F25; Wed, 05 Feb 2014 20:44:38 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391633074!2296997!1
X-Originating-IP: [207.46.163.27]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27277 invoked from network); 5 Feb 2014 20:44:36 -0000
Received: from co9ehsobe004.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.27)
	by server-3.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:44:36 -0000
Received: from mail219-co9-R.bigfish.com (10.236.132.241) by
	CO9EHSOBE013.bigfish.com (10.236.130.76) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:44:33 +0000
Received: from mail219-co9 (localhost [127.0.0.1])	by
	mail219-co9-R.bigfish.com (Postfix) with ESMTP id C05902C0453;
	Wed,  5 Feb 2014 20:44:33 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail219-co9 (localhost.localdomain [127.0.0.1]) by mail219-co9
	(MessageSwitch) id 1391633071447079_10747;
	Wed,  5 Feb 2014 20:44:31 +0000 (UTC)
Received: from CO9EHSMHS020.bigfish.com (unknown [10.236.132.237])	by
	mail219-co9.bigfish.com (Postfix) with ESMTP id 67D47A004A;
	Wed,  5 Feb 2014 20:44:31 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO9EHSMHS020.bigfish.com
	(10.236.130.30) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 5 Feb 2014 20:44:28 +0000
X-WSS-ID: 0N0JIXZ-08-08T-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2FEBBD16063;	Wed,  5 Feb 2014 14:44:22 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:44:44 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 5 Feb 2014 15:44:26 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:28:15 -0600
Message-ID: <1391632096-6209-4-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 3/3 V3] mcheck,
	mce_amd: Verify presence of extended AMD_MC4_MISC registers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

MSR 0x413 is present in all families from Fam10h onwards, but the
extended block of MC4 MISC registers does not always exist. Rework
the vmce_amd_[wr|rd]msr functions to inject #GP into the guest when
the register does not exist in hardware.

If it does exist, continue the current policy of blocking access,
since this bank is not emulated: reads return 0 and writes are
ignored.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   64 +++++++++++++++++--------------------
 xen/arch/x86/cpu/mcheck/mce_amd.h |    3 ++
 2 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..b1ccda4 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -102,46 +102,40 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 	return mcheck_amd_famXX;
 }
 
+/* check for AMD MC4 extended MISC register presence */
+static inline int amd_thresholding_reg_present(uint32_t msr)
+{
+    uint64_t val;
+    rdmsr_safe(msr, val);
+    if ( val & (AMD_MC4_MISC_VAL_MASK | AMD_MC4_MISC_CNTP_MASK) )
+        return 1;
+
+    return 0;
+}
+
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    if ( msr != MSR_F10_MC4_MISC1 )
+    {
+        /* If not present, #GP fault, else do nothing as we don't emulate */
+        if ( !amd_thresholding_reg_present(msr) )
+            return -1;
+    }
+
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    if ( msr != MSR_F10_MC4_MISC1 )
+    {
+        /* If not present, #GP fault, else assign '0' as we don't emulate */
+        if ( !amd_thresholding_reg_present(msr) )
+            return -1;
+    }
+
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.h b/xen/arch/x86/cpu/mcheck/mce_amd.h
index 5d047e7..a6024fb 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.h
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.h
@@ -1,6 +1,9 @@
 #ifndef _MCHECK_AMD_H
 #define _MCHECK_AMD_H
 
+#define AMD_MC4_MISC_VAL_MASK           (1ULL << 63)
+#define AMD_MC4_MISC_CNTP_MASK          (1ULL << 62)
+
 enum mcheck_type amd_k8_mcheck_init(struct cpuinfo_x86 *c);
 enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c);
 
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Kr-00014w-1m; Wed, 05 Feb 2014 20:44:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Kp-00013u-88
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:44:39 +0000
Received: from [193.109.254.147:38407] by server-5.bemta-14.messagelabs.com id
	0D/F3-16688-6B2A2F25; Wed, 05 Feb 2014 20:44:38 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391633074!2296997!1
X-Originating-IP: [207.46.163.27]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27277 invoked from network); 5 Feb 2014 20:44:36 -0000
Received: from co9ehsobe004.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.27)
	by server-3.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:44:36 -0000
Received: from mail219-co9-R.bigfish.com (10.236.132.241) by
	CO9EHSOBE013.bigfish.com (10.236.130.76) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:44:33 +0000
Received: from mail219-co9 (localhost [127.0.0.1])	by
	mail219-co9-R.bigfish.com (Postfix) with ESMTP id C05902C0453;
	Wed,  5 Feb 2014 20:44:33 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail219-co9 (localhost.localdomain [127.0.0.1]) by mail219-co9
	(MessageSwitch) id 1391633071447079_10747;
	Wed,  5 Feb 2014 20:44:31 +0000 (UTC)
Received: from CO9EHSMHS020.bigfish.com (unknown [10.236.132.237])	by
	mail219-co9.bigfish.com (Postfix) with ESMTP id 67D47A004A;
	Wed,  5 Feb 2014 20:44:31 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO9EHSMHS020.bigfish.com
	(10.236.130.30) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 5 Feb 2014 20:44:28 +0000
X-WSS-ID: 0N0JIXZ-08-08T-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2FEBBD16063;	Wed,  5 Feb 2014 14:44:22 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:44:44 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 5 Feb 2014 15:44:26 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:28:15 -0600
Message-ID: <1391632096-6209-4-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 3/3 V3] mcheck,
	mce_amd: Verify presence of extended AMD_MC4_MISC registers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

MSR 0x413 is present in all families from F10 onwards, but the
extended block of MC4 MISC registers does not always exist. This
patch reworks the vmce_amd_[wr|rd]msr functions to return #GP to
the guest if the register does not exist in hardware.

If it does exist, we continue the current policy of blocking access,
as this bank is not emulated: reads return 0 and writes do nothing.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   64 +++++++++++++++++--------------------
 xen/arch/x86/cpu/mcheck/mce_amd.h |    3 ++
 2 files changed, 32 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..b1ccda4 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -102,46 +102,40 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 	return mcheck_amd_famXX;
 }
 
+/* check for AMD MC4 extended MISC register presence */
+static inline int amd_thresholding_reg_present(uint32_t msr)
+{
+    uint64_t val;
+    rdmsr_safe(msr, val);
+    if ( val & (AMD_MC4_MISC_VAL_MASK | AMD_MC4_MISC_CNTP_MASK) )
+        return 1;
+
+    return 0;
+}
+
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    if ( msr != MSR_F10_MC4_MISC1 )
+    {
+        /* If not present, #GP fault, else do nothing as we don't emulate */
+        if ( !amd_thresholding_reg_present(msr) )
+            return -1;
+    }
+
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    if ( msr != MSR_F10_MC4_MISC1 )
+    {
+        /* If not present, #GP fault, else assign '0' as we don't emulate */
+        if ( !amd_thresholding_reg_present(msr) )
+            return -1;
+    }
+
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.h b/xen/arch/x86/cpu/mcheck/mce_amd.h
index 5d047e7..a6024fb 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.h
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.h
@@ -1,6 +1,9 @@
 #ifndef _MCHECK_AMD_H
 #define _MCHECK_AMD_H
 
+#define AMD_MC4_MISC_VAL_MASK           (1ULL << 63)
+#define AMD_MC4_MISC_CNTP_MASK          (1ULL << 62)
+
 enum mcheck_type amd_k8_mcheck_init(struct cpuinfo_x86 *c);
 enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c);
 
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:44:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:44:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Kr-00015c-IJ; Wed, 05 Feb 2014 20:44:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9Kq-00014m-L0
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:44:40 +0000
Received: from [85.158.143.35:55377] by server-1.bemta-4.messagelabs.com id
	23/DB-31661-8B2A2F25; Wed, 05 Feb 2014 20:44:40 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391633078!3437026!1
X-Originating-IP: [216.32.180.30]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20242 invoked from network); 5 Feb 2014 20:44:39 -0000
Received: from va3ehsobe010.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.30)
	by server-12.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 20:44:39 -0000
Received: from mail67-va3-R.bigfish.com (10.7.14.253) by
	VA3EHSOBE012.bigfish.com (10.7.40.62) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 20:44:38 +0000
Received: from mail67-va3 (localhost [127.0.0.1])	by mail67-va3-R.bigfish.com
	(Postfix) with ESMTP id EF8CF300257;
	Wed,  5 Feb 2014 20:44:37 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchzz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail67-va3 (localhost.localdomain [127.0.0.1]) by mail67-va3
	(MessageSwitch) id 139163307322985_32287;
	Wed,  5 Feb 2014 20:44:33 +0000 (UTC)
Received: from VA3EHSMHS038.bigfish.com (unknown [10.7.14.246])	by
	mail67-va3.bigfish.com (Postfix) with ESMTP id 127CD220072;
	Wed,  5 Feb 2014 20:44:30 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS038.bigfish.com
	(10.7.99.48) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	20:44:23 +0000
X-WSS-ID: 0N0JIXY-07-083-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	29C3FCAE64D;	Wed,  5 Feb 2014 14:44:22 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 14:44:41 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 5 Feb 2014 15:44:22 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>
Date: Wed, 5 Feb 2014 14:28:12 -0600
Message-ID: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH 0/3 V3] Fix AMD threshold register definitions
	and activate vmce_amd_* functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Patch 1: Deals with correcting AMD threshold register definitions.
Patch 2: Fixes mask in vmce.c to allow vmce_amd_* functions to handle access to
         AMD thresholding registers.
Patch 3: Verify presence of the AMD extended block of MISC registers before we decide to emulate (or not)

Changes in V2:
    - Correct time skew on the V1 patch.
Changes in V3:
    - Use #defines for the masks (per C.Egger and Boris.O comments)
    - Reword commit message
    - Rework code to care for the differences in the BKDG wrt presence of extended MC4 MISC registers (per Jan comments)

Aravind Gopalakrishnan (3):
  hvm,svm: Update AMD Thresholding MSR definitions
  mcheck,vmce: Allow vmce_amd_* functions to handle AMD thresholding
    MSRs
  mcheck, mce_amd: Verify presence of extended AMD_MC4_MISC registers

 xen/arch/x86/cpu/mcheck/amd_f10.c |   64 +++++++++++++++++--------------------
 xen/arch/x86/cpu/mcheck/mce_amd.h |    3 ++
 xen/arch/x86/cpu/mcheck/vmce.c    |    6 ++--
 xen/arch/x86/cpu/mcheck/vmce.h    |    3 ++
 xen/arch/x86/hvm/svm/svm.c        |   10 +++---
 xen/include/asm-x86/msr-index.h   |    6 ++--
 6 files changed, 48 insertions(+), 44 deletions(-)

-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:55:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Ul-0001yH-3R; Wed, 05 Feb 2014 20:54:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB9Uj-0001yC-M6
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:54:53 +0000
Received: from [193.109.254.147:41379] by server-13.bemta-14.messagelabs.com
	id 18/49-01226-D15A2F25; Wed, 05 Feb 2014 20:54:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391633690!2298171!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31354 invoked from network); 5 Feb 2014 20:54:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 20:54:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98400714"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 20:54:49 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 5 Feb 2014 15:54:49 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	21:54:47 +0100
Message-ID: <52F2A512.30304@citrix.com>
Date: Wed, 5 Feb 2014 20:54:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
	<jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>
References: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/2014 20:59, Aravind Gopalakrishnan wrote:
> Workaround for the Erratum will be in BIOSes spun only after
> Jan 2014 onwards. But initial production parts shipped in 2013
> itself. Since there is a coverage hole, we should carry this fix
> in software in case BIOS does not do the right thing or someone
> is using old BIOS.
>
> Refer to Revision Guide for AMD F16h models 00h-0fh, document 51810
> Rev. 3.04, November2013 for details on the Erratum.
>
> Tested the patch on Fam16h server platform and it works fine.
>
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>
> ---
>  xen/arch/x86/cpu/amd.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 3307141..f2780c4 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -369,6 +369,7 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  	u32 l, h;
>  
>  	unsigned long long value;
> +	u32 pci_val;

Please move this to the scope created by the lower hunk.

>  
>  	/* Disable TLB flush filter by setting HWCR.FFDIS on K8
>  	 * bit 6 of msr C001_0015
> @@ -477,6 +478,35 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  		       " all your (PV) guest kernels. ***\n");
>  
>  	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
> +        /*
> +         * Apply workaround for erratum 792
> +         * Description:
> +         * Processor does not ensure DRAM scrub read/write sequence
> +         * is atomic wrt accesses to CC6 save state area. Therefore
> +         * if a concurrent scrub read/write access is to same address
> +         * the entry may appear as if it is not written. This quirk
> +         * applies to Fam16h models 00h-0Fh
> +         *
> +         * See "Revision Guide" for AMD F16h models 00h-0fh,
> +         * document 51810 rev. 3.04, Nov 2013
> +         *
> +         * Equivalent Linux patch link:
> +         * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
> +         */
> +        if (smp_processor_id() == 0) {
> +            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
> +            if (pci_val & 0x1f) {
> +                pci_val &= ~(0x1f);

0x1f is an int by default, so use a 'u' suffix to make it u32 for use
as a mask.  The brackets are not really needed.

> +                pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
> +            }
> +
> +            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
> +            if (pci_val & 0x1) {
> +                pci_val &= ~(0x1);
> +                pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
> +            }
> +        }
> +

Indentation needs some work.  This file derives from Linux so uses tabs
(at 8 spaces wide), rather than the Xen style of 4 spaces.

~Andrew

>  		rdmsrl(MSR_AMD64_LS_CFG, value);
>  		if (!(value & (1 << 15))) {
>  			static bool_t warned;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 20:55:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 20:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9Ul-0001yH-3R; Wed, 05 Feb 2014 20:54:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WB9Uj-0001yC-M6
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 20:54:53 +0000
Received: from [193.109.254.147:41379] by server-13.bemta-14.messagelabs.com
	id 18/49-01226-D15A2F25; Wed, 05 Feb 2014 20:54:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391633690!2298171!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31354 invoked from network); 5 Feb 2014 20:54:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 20:54:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98400714"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 20:54:49 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 5 Feb 2014 15:54:49 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	21:54:47 +0100
Message-ID: <52F2A512.30304@citrix.com>
Date: Wed, 5 Feb 2014 20:54:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
	<jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>
References: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/2014 20:59, Aravind Gopalakrishnan wrote:
> Workaround for the Erratum will be in BIOSes spun only after
> Jan 2014 onwards. But initial production parts shipped in 2013
> itself. Since there is a coverage hole, we should carry this fix
> in software in case BIOS does not do the right thing or someone
> is using old BIOS.
>
> Refer to Revision Guide for AMD F16h models 00h-0fh, document 51810
> Rev. 3.04, November2013 for details on the Erratum.
>
> Tested the patch on Fam16h server platform and it works fine.
>
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>
> ---
>  xen/arch/x86/cpu/amd.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 3307141..f2780c4 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -369,6 +369,7 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  	u32 l, h;
>  
>  	unsigned long long value;
> +	u32 pci_val;

Please move this to the scope created by the lower hunk.

>  
>  	/* Disable TLB flush filter by setting HWCR.FFDIS on K8
>  	 * bit 6 of msr C001_0015
> @@ -477,6 +478,35 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  		       " all your (PV) guest kernels. ***\n");
>  
>  	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
> +        /*
> +         * Apply workaround for erratum 792
> +         * Description:
> +         * Processor does not ensure DRAM scrub read/write sequence
> +         * is atomic wrt accesses to CC6 save state area. Therefore
> +         * if a concurrent scrub read/write access is to same address
> +         * the entry may appear as if it is not written. This quirk
> +         * applies to Fam16h models 00h-0Fh
> +         *
> +         * See "Revision Guide" for AMD F16h models 00h-0fh,
> +         * document 51810 rev. 3.04, Nov 2013
> +         *
> +         * Equivalent Linux patch link:
> +         * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
> +         */
> +        if (smp_processor_id() == 0) {
> +            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
> +            if (pci_val & 0x1f) {
> +                pci_val &= ~(0x1f);

0x1f is an int by default, so use a 'u' suffix to make it unsigned for
use as a mask.  The brackets are not really needed.

> +                pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
> +            }
> +
> +            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
> +            if (pci_val & 0x1) {
> +                pci_val &= ~(0x1);
> +                pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
> +            }
> +        }
> +

Indentation needs some work.  This file derives from Linux, so it uses
tabs (8 spaces wide) rather than the Xen style of 4 spaces.

~Andrew

>  		rdmsrl(MSR_AMD64_LS_CFG, value);
>  		if (!(value & (1 << 15))) {
>  			static bool_t warned;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 21:22:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 21:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9vO-0002qa-VC; Wed, 05 Feb 2014 21:22:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9vN-0002qV-Cc
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 21:22:25 +0000
Received: from [85.158.139.211:47654] by server-16.bemta-5.messagelabs.com id
	4E/51-05060-09BA2F25; Wed, 05 Feb 2014 21:22:24 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391635342!1936376!1
X-Originating-IP: [216.32.181.183]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29661 invoked from network); 5 Feb 2014 21:22:23 -0000
Received: from ch1ehsobe003.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.183)
	by server-3.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 21:22:23 -0000
Received: from mail196-ch1-R.bigfish.com (10.43.68.225) by
	CH1EHSOBE019.bigfish.com (10.43.70.76) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 21:22:22 +0000
Received: from mail196-ch1 (localhost [127.0.0.1])	by
	mail196-ch1-R.bigfish.com (Postfix) with ESMTP id 0EDE42603A2;
	Wed,  5 Feb 2014 21:22:22 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(z579ehzbb2dI98dI1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h944hd25hd2bhf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail196-ch1 (localhost.localdomain [127.0.0.1]) by mail196-ch1
	(MessageSwitch) id 1391635338833427_20603;
	Wed,  5 Feb 2014 21:22:18 +0000 (UTC)
Received: from CH1EHSMHS025.bigfish.com (snatpool2.int.messaging.microsoft.com
	[10.43.68.239])	by mail196-ch1.bigfish.com (Postfix) with ESMTP id
	C6716320067;	Wed,  5 Feb 2014 21:22:18 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CH1EHSMHS025.bigfish.com
	(10.43.70.25) with Microsoft SMTP Server id 14.16.227.3; Wed, 5 Feb 2014
	21:22:18 +0000
X-WSS-ID: 0N0JKP0-08-278-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2EC54D16063;	Wed,  5 Feb 2014 15:22:11 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 15:22:34 -0600
Received: from arav-dinar (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 16:22:15 -0500
Date: Wed, 5 Feb 2014 15:22:22 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140205212221.GA8837@arav-dinar>
References: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52F2A512.30304@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F2A512.30304@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: keir@xen.org, suravee.suthikulpanit@amd.com, jbeulich@suse.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/AMD: Apply workaround for AMD F16h
 Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, 2014 at 08:54:42PM +0000, Andrew Cooper wrote:
> On 05/02/2014 20:59, Aravind Gopalakrishnan wrote:
> >  	unsigned long long value;
> > +	u32 pci_val;
> 
> Please move this to the scope created by the lower hunk.
> 

Done

> > +            if (pci_val & 0x1f) {
> > +                pci_val &= ~(0x1f);
> 
> 0x1f is an int by default, so use a 'u' suffix to make it unsigned for
> use as a mask.  The brackets are not really needed.
> 

Done

> > +            }
> > +        }
> > +
> 
> Indentation needs some work.  This file derives from Linux, so it uses
> tabs (8 spaces wide) rather than the Xen style of 4 spaces.
> 

Done;
Sending out changes in V2.

Thanks,
Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 21:22:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 21:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WB9vZ-0002qz-CD; Wed, 05 Feb 2014 21:22:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WB9vX-0002qs-Tc
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 21:22:36 +0000
Received: from [85.158.139.211:51013] by server-12.bemta-5.messagelabs.com id
	AA/D7-15415-B9BA2F25; Wed, 05 Feb 2014 21:22:35 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391635352!1922162!1
X-Originating-IP: [216.32.180.184]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17319 invoked from network); 5 Feb 2014 21:22:34 -0000
Received: from co1ehsobe001.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.184)
	by server-13.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	5 Feb 2014 21:22:34 -0000
Received: from mail13-co1-R.bigfish.com (10.243.78.227) by
	CO1EHSOBE035.bigfish.com (10.243.66.100) with Microsoft SMTP Server id
	14.1.225.22; Wed, 5 Feb 2014 21:22:32 +0000
Received: from mail13-co1 (localhost [127.0.0.1])	by mail13-co1-R.bigfish.com
	(Postfix) with ESMTP id 423468203D2;
	Wed,  5 Feb 2014 21:22:32 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(zz13e6Kzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h177df4h8275eh17326ah8275bh1de097h186068ha1495iz2dh839he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail13-co1 (localhost.localdomain [127.0.0.1]) by mail13-co1
	(MessageSwitch) id 1391635349881998_29555;
	Wed,  5 Feb 2014 21:22:29 +0000 (UTC)
Received: from CO1EHSMHS008.bigfish.com (unknown [10.243.78.226])	by
	mail13-co1.bigfish.com (Postfix) with ESMTP id C8C98C4004C;
	Wed,  5 Feb 2014 21:22:29 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CO1EHSMHS008.bigfish.com
	(10.243.66.18) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 5 Feb 2014 21:22:29 +0000
X-WSS-ID: 0N0JKPE-07-291-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2225612C0004;	Wed,  5 Feb 2014 15:22:26 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 5 Feb 2014 15:22:45 -0600
Received: from autotest-xen-olivehill.amd.com (10.180.168.240) by
	SATLEXDAG02.amd.com (10.181.40.5) with Microsoft SMTP Server id
	14.2.328.9; Wed, 5 Feb 2014 16:22:27 -0500
From: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
To: <jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>, <andrew.cooper3@citrix.com>
Date: Wed, 5 Feb 2014 15:43:39 -0600
Message-ID: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.8.1.2
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH V2] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The workaround for the Erratum will only be in BIOSes spun from
Jan 2014 onwards, but initial production parts already shipped in
2013. Since there is a coverage hole, we should carry this fix in
software in case the BIOS does not do the right thing or someone
is using an old BIOS.

Refer to the Revision Guide for AMD F16h models 00h-0fh, document
51810 Rev. 3.04, November 2013, for details on the Erratum.

Tested the patch on a Fam16h server platform and it works fine.

Changes in V2 (per Andrew Cooper's comments):
	- Moved pci_val into the same scope
	- Reworked indentation to match Linux style

Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/cpu/amd.c | 30 ++++++++++++++++++++++++++++++
 1 file changed, 30 insertions(+)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 3307141..703bbda 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -477,6 +477,36 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
 		       " all your (PV) guest kernels. ***\n");
 
 	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
+		/*
+		 * Apply workaround for erratum 792
+		 * Description:
+		 * Processor does not ensure DRAM scrub read/write sequence
+		 * is atomic wrt accesses to CC6 save state area. Therefore
+		 * if a concurrent scrub read/write access is to same address
+		 * the entry may appear as if it is not written. This quirk
+		 * applies to Fam16h models 00h-0Fh
+		 *
+		 * See "Revision Guide" for AMD F16h models 00h-0fh,
+		 * document 51810 rev. 3.04, Nov 2013
+		 *
+		 * Equivalent Linux patch link:
+		 * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
+		 */
+		if (smp_processor_id() == 0) {
+			u32 pci_val;
+			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
+			if (pci_val & 0x1f) {
+				pci_val &= ~0x1fu;
+				pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
+			}
+
+			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
+			if (pci_val & 0x1) {
+				pci_val &= ~0x1u;
+				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
+			}
+		}
+
 		rdmsrl(MSR_AMD64_LS_CFG, value);
 		if (!(value & (1 << 15))) {
 			static bool_t warned;
-- 
1.8.1.2



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 21:28:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 21:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBA18-00035M-EX; Wed, 05 Feb 2014 21:28:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBA16-00035H-EL
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 21:28:20 +0000
Received: from [85.158.143.35:7119] by server-3.bemta-4.messagelabs.com id
	FA/90-11539-3FCA2F25; Wed, 05 Feb 2014 21:28:19 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391635694!3456659!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17680 invoked from network); 5 Feb 2014 21:28:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 21:28:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,788,1384300800"; d="scan'208";a="98411654"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 05 Feb 2014 21:28:12 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Wed, 5 Feb 2014 16:28:07 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Wed, 5 Feb 2014
	22:27:53 +0100
Message-ID: <52F2ACD8.9090902@citrix.com>
Date: Wed, 5 Feb 2014 21:27:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
	<jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>
References: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH V2] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/2014 21:43, Aravind Gopalakrishnan wrote:
> The workaround for the Erratum will only be in BIOSes spun from
> Jan 2014 onwards, but initial production parts already shipped in
> 2013. Since there is a coverage hole, we should carry this fix in
> software in case the BIOS does not do the right thing or someone
> is using an old BIOS.
>
> Refer to the Revision Guide for AMD F16h models 00h-0fh, document
> 51810 Rev. 3.04, November 2013, for details on the Erratum.
>
> Tested the patch on a Fam16h server platform and it works fine.
>
> Changes in V2 (per Andrew Cooper's comments):
> 	- Moved pci_val into the same scope
> 	- Reworked indentation to match Linux style
>
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
>  xen/arch/x86/cpu/amd.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 3307141..703bbda 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -477,6 +477,36 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  		       " all your (PV) guest kernels. ***\n");
>  
>  	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
> +		/*
> +		 * Apply workaround for erratum 792
> +		 * Description:
> +		 * Processor does not ensure DRAM scrub read/write sequence
> +		 * is atomic wrt accesses to CC6 save state area. Therefore
> +		 * if a concurrent scrub read/write access is to same address
> +		 * the entry may appear as if it is not written. This quirk
> +		 * applies to Fam16h models 00h-0Fh
> +		 *
> +		 * See "Revision Guide" for AMD F16h models 00h-0fh,
> +		 * document 51810 rev. 3.04, Nov 2013
> +		 *
> +		 * Equivalent Linux patch link:
> +		 * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
> +		 */
> +		if (smp_processor_id() == 0) {
> +			u32 pci_val;
> +			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
> +			if (pci_val & 0x1f) {
> +				pci_val &= ~0x1fu;
> +				pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
> +			}
> +
> +			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
> +			if (pci_val & 0x1) {
> +				pci_val &= ~0x1u;
> +				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
> +			}
> +		}
> +
>  		rdmsrl(MSR_AMD64_LS_CFG, value);
>  		if (!(value & (1 << 15))) {
>  			static bool_t warned;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>
References: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Subject: Re: [Xen-devel] [PATCH V2] x86/AMD: Apply workaround for AMD F16h
	Erratum 792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/2014 21:43, Aravind Gopalakrishnan wrote:
> A workaround for the erratum will only be present in BIOSes spun
> after Jan 2014, but initial production parts shipped in 2013.
> Since this leaves a coverage hole, carry the fix in software as
> well, in case the BIOS does not do the right thing or someone is
> running an old BIOS.
>
> Refer to Revision Guide for AMD F16h models 00h-0fh, document 51810
> Rev. 3.04, November 2013 for details on the Erratum.
>
> Tested the patch on a Fam16h server platform; it works fine.
>
> Changes in V2: (per Andrew.C comments)
> 	- Move pci_val into same scope
> 	- rework indentation to match linux style
>
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
>  xen/arch/x86/cpu/amd.c | 30 ++++++++++++++++++++++++++++++
>  1 file changed, 30 insertions(+)
>
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 3307141..703bbda 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -477,6 +477,36 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  		       " all your (PV) guest kernels. ***\n");
>  
>  	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
> +		/*
> +		 * Apply workaround for erratum 792
> +		 * Description:
> +		 * Processor does not ensure DRAM scrub read/write sequence
> +		 * is atomic wrt accesses to CC6 save state area. Therefore
> +		 * if a concurrent scrub read/write access is to same address
> +		 * the entry may appear as if it is not written. This quirk
> +		 * applies to Fam16h models 00h-0Fh
> +		 *
> +		 * See "Revision Guide" for AMD F16h models 00h-0fh,
> +		 * document 51810 rev. 3.04, Nov 2013
> +		 *
> +		 * Equivalent Linux patch link:
> +		 * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
> +		 */
> +		if (smp_processor_id() == 0) {
> +			u32 pci_val;
> +			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
> +			if (pci_val & 0x1f) {
> +				pci_val &= ~0x1fu;
> +				pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
> +			}
> +
> +			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
> +			if (pci_val & 0x1) {
> +				pci_val &= ~0x1u;
> +				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
> +			}
> +		}
> +
>  		rdmsrl(MSR_AMD64_LS_CFG, value);
>  		if (!(value & (1 << 15))) {
>  			static bool_t warned;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 21:32:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 21:32:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBA4u-0003RL-9B; Wed, 05 Feb 2014 21:32:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Robert.VanVossen@dornerworks.com>)
	id 1WBA4t-0003RG-LC
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 21:32:15 +0000
Received: from [193.109.254.147:44936] by server-10.bemta-14.messagelabs.com
	id E2/76-10711-FDDA2F25; Wed, 05 Feb 2014 21:32:15 +0000
X-Env-Sender: Robert.VanVossen@dornerworks.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391635933!2303753!1
X-Originating-IP: [12.207.209.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24583 invoked from network); 5 Feb 2014 21:32:14 -0000
Received: from unknown (HELO mail.dornerworks.com) (12.207.209.148)
	by server-10.tower-27.messagelabs.com with SMTP;
	5 Feb 2014 21:32:14 -0000
Received: from [172.27.12.69] (172.27.12.69) by Quimby.dw.local (172.27.1.90)
	with Microsoft SMTP Server (TLS) id 14.2.247.3;
	Wed, 5 Feb 2014 16:28:50 -0500
Message-ID: <52F2AD63.7030109@dornerworks.com>
Date: Wed, 5 Feb 2014 16:30:11 -0500
From: Robbie VanVossen <robert.vanvossen@dornerworks.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130620 Thunderbird/17.0.7
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
In-Reply-To: <1390230336.23576.24.camel@Solace>
X-Originating-IP: [172.27.12.69]
Cc: Pavlo Suikov <pavlo.suikov@globallogic.com>,
	Nate Studer <nate.studer@dornerworks.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 1/20/2014 10:05 AM, Dario Faggioli wrote:
>> Test program makes nothing but sleeping for 30 (5, 500) ms then
>> printing timestamp in an endless loop. 
>>
> Ok, so something similar to cyclictest, right?
> 
> https://rt.wiki.kernel.org/index.php/Cyclictest
> 
> I'm also investigating running it in a bunch of configurations...
> I'll have results that we can hopefully compare to yours very soon.
> 
> What about giving it a try yourself? I think standardizing on one
> specific tool (or a set of them) could be a good thing.

Dario,

We thought we would try to get some similar readings for the Arinc653 scheduler.
We followed your suggestions from this thread and have gotten some readings for
the following configurations:

----------------
Configuration 1 - Only Domain-0
Xen: 		4.4-rc2 - Arinc653 Scheduler
Domain-0: 	Ubuntu 12.04.1 - Linux 3.2.0-35

xl list -n:
Name                       ID   Mem VCPUs     State	Time(s)  NODE Affinity
Domain-0                    0  1535     1     r-----      100.2  all

xl vcpu-list:
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   r--     103.0  all

----------------
Configuration 2 - Domain-0 and Unscheduled guest
Xen: 		4.4-rc2 - Arinc653 Scheduler
Domain-0: 	Ubuntu 12.04.1 - Linux 3.2.0-35
dom1: 		Ubuntu 12.04.1 - Linux 3.2.0-35

xl list -n:
Name                  ID   Mem VCPUs	State	Time(s) NODE Affinity
Domain-0               0  1535     1    r-----     146.4 all
dom1                   1   512     1    ------       0.0 all

xl vcpu-list:
Name                  ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0               0     0    0   r--     147.4  all
dom1                   1     0    0   ---       0.0  all

----------------
Configuration 3 - Domain-0 and Scheduled guest (In separate CPU Pools)
Xen: 		4.4-rc2 - Credit Scheduler
Pool: Pool-0 - Credit Scheduler
Domain-0: 	Ubuntu 12.04.1 - Linux 3.2.0-35
Pool: arinc  - Arinc653 Scheduler
dom1: 		Ubuntu 12.04.1 - Linux 3.2.0-35

xl list -n:
Name                            ID   Mem VCPUs	State	Time(s) NODE Affinity
Domain-0                         0  1535     2  r-----   111.5  all
dom1                             1   512     1   b----     4.3  all

xl vcpu-list:
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   -b-      81.0  all
Domain-0                             0     1    0   r--      47.1  all
dom1                                 1     0    1   -b-       4.7  all

----------------

We used the following command to get results for a 30 millisecond (30,000us)
interval with 500 loops:

cyclictest -t1 -i 30000 -l 500 -q

Results:

+--------+--------+-----------+-------+-------+-------+
| Config | Domain | Scheduler |      Latency (us)     |
|        |        |           |   Min |   Max |   Avg |
+--------+--------+-----------+-------+-------+-------+
|      1 |      0 |  Arinc653 |    20 |   163 |    68 |
|      2 |      0 |  Arinc653 |    21 |   173 |    68 |
|      3 |      1 |  Arinc653 |    20 |   155 |    75 |
+--------+--------+-----------+-------+-------+-------+

It looks like we get negligible latencies in each of these simple
configurations.

Thanks,

-- 
---
Robbie VanVossen
DornerWorks, Ltd.
Embedded Systems Engineering

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 21:44:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 21:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBAG2-0003pJ-UC; Wed, 05 Feb 2014 21:43:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBAG1-0003pE-GL
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 21:43:45 +0000
Received: from [193.109.254.147:53448] by server-7.bemta-14.messagelabs.com id
	AE/D5-23424-090B2F25; Wed, 05 Feb 2014 21:43:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391636622!2301644!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2085 invoked from network); 5 Feb 2014 21:43:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 21:43:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,789,1384300800"; d="scan'208";a="100302800"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 05 Feb 2014 21:43:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 16:43:40 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBAFw-0007kK-Qo;
	Wed, 05 Feb 2014 21:43:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBAFw-0005wS-9P;
	Wed, 05 Feb 2014 21:43:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24733-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 5 Feb 2014 21:43:40 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24733: regressions -
	trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24733 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24733/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)   broken REGR. vs. 24721
 test-amd64-i386-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 24721

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391
baseline version:
 qemuu                a41087bc7110e8378cd49ddd06aa7c9d361f3673

------------------------------------------------------------
People who touched revisions under test:
  Don Slutz <address@hidden>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 027c412ff71ad8bff6e335cc7932857f4ea74391
Author: Don Slutz <address@hidden>
Date:   Sat Dec 14 19:43:56 2013 +0000

    configure: Disable libtool if -fPIE does not work with it (bug #1257099)
    
    Adjust TMPO and added TMPB, TMPL, and TMPA.  libtool needs the names
    to be fixed (TMPB).
    
    Add new functions do_libtool and libtool_prog.
    
    Add check for broken gcc and libtool.
    
    Signed-off-by: Don Slutz <address@hidden>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 22:18:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 22:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBAnL-0004yD-LT; Wed, 05 Feb 2014 22:18:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <srivatsa.bhat@linux.vnet.ibm.com>)
	id 1WBAnG-0004y5-7V
	for xen-devel@lists.xenproject.org; Wed, 05 Feb 2014 22:18:10 +0000
Received: from [193.109.254.147:46983] by server-5.bemta-14.messagelabs.com id
	27/AF-16688-D98B2F25; Wed, 05 Feb 2014 22:18:05 +0000
X-Env-Sender: srivatsa.bhat@linux.vnet.ibm.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391638679!2315321!1
X-Originating-IP: [202.81.31.144]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NCA9PiAzMTY3NTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6457 invoked from network); 5 Feb 2014 22:18:02 -0000
Received: from e23smtp02.au.ibm.com (HELO e23smtp02.au.ibm.com) (202.81.31.144)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 22:18:02 -0000
Received: from /spool/local
	by e23smtp02.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<srivatsa.bhat@linux.vnet.ibm.com>; Thu, 6 Feb 2014 08:17:56 +1000
Received: from d23dlp02.au.ibm.com (202.81.31.213)
	by e23smtp02.au.ibm.com (202.81.31.208) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Thu, 6 Feb 2014 08:17:53 +1000
Received: from d23relay05.au.ibm.com (d23relay05.au.ibm.com [9.190.235.152])
	by d23dlp02.au.ibm.com (Postfix) with ESMTP id 76E0C2BB0052
	for <xen-devel@lists.xenproject.org>;
	Thu,  6 Feb 2014 09:17:52 +1100 (EST)
Received: from d23av01.au.ibm.com (d23av01.au.ibm.com [9.190.234.96])
	by d23relay05.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s15LwM0X63045702
	for <xen-devel@lists.xenproject.org>; Thu, 6 Feb 2014 08:58:23 +1100
Received: from d23av01.au.ibm.com (localhost [127.0.0.1])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s15MHosF017486
	for <xen-devel@lists.xenproject.org>; Thu, 6 Feb 2014 09:17:51 +1100
Received: from srivatsabhat.in.ibm.com ([9.79.225.33])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s15MHhcF017411; Thu, 6 Feb 2014 09:17:45 +1100
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: paulus@samba.org, oleg@redhat.com, rusty@rustcorp.com.au,
	peterz@infradead.org, tglx@linutronix.de, akpm@linux-foundation.org
Date: Thu, 06 Feb 2014 03:42:32 +0530
Message-ID: <20140205221231.19080.66618.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20140205220251.19080.92336.stgit@srivatsabhat.in.ibm.com>
References: <20140205220251.19080.92336.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14020522-5490-0000-0000-000004E467C4
Cc: ego@linux.vnet.ibm.com, walken@google.com, linux@arm.linux.org.uk,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	"Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>,
	tj@kernel.org, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: [Xen-devel] [PATCH 44/51] xen,
	balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).
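
The failure mode can be reproduced outside the kernel. Here is a minimal,
runnable Python sketch of the ABBA inversion; the two locks are hypothetical
stand-ins for the cpu_add_remove_lock and cpu_hotplug.lock, and the acquire
timeouts exist only to make the deadlock observable instead of hanging the demo:

```python
import threading

# Hypothetical stand-ins for cpu_add_remove_lock and cpu_hotplug.lock.
lock_a = threading.Lock()
lock_b = threading.Lock()
barrier = threading.Barrier(2)  # ensure both threads hold their first lock
timed_out = []                  # records which side could not make progress

def registrant():
    # The buggy pattern: hold lock A, then try to take lock B.
    with lock_a:
        barrier.wait()
        if lock_b.acquire(timeout=0.2):
            lock_b.release()
        else:
            timed_out.append("registrant")

def hotplug_op():
    # A concurrent operation takes the same two locks in the opposite order.
    with lock_b:
        barrier.wait()
        if lock_a.acquire(timeout=0.2):
            lock_a.release()
        else:
            timed_out.append("hotplug")

threads = [threading.Thread(target=registrant),
           threading.Thread(target=hotplug_op)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With untimed acquisition both threads would block forever; with timeouts,
# at least one side is guaranteed to give up here.
assert timed_out
```

With the kernel's real (untimed) lock acquisition, both sides block forever,
which is exactly the hazard described above.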

Interestingly, the balloon code in xen can actually prevent double
initialization and hence can use the following simplified form of callback
registration:

	register_cpu_notifier(&foobar_cpu_notifier);

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	put_online_cpus();

A hotplug operation that occurs between registering the notifier and calling
get_online_cpus() won't disrupt anything, because the code takes care to
perform the memory allocations only once.

So reorganize the balloon code in xen this way to fix the deadlock with
callback registration.
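
The safety of this ordering rests on the initialization being idempotent. A
small runnable sketch (plain Python with illustrative names; the per-CPU
storage is modelled as a dict):

```python
# Models per_cpu(balloon_scratch_page, cpu) as a dict; names are illustrative.
scratch_page = {}
alloc_calls = 0

def alloc_scratch_page(cpu):
    """Idempotent, like alloc_balloon_scratch_page(): a no-op if already set."""
    global alloc_calls
    if scratch_page.get(cpu) is not None:
        return 0
    alloc_calls += 1
    scratch_page[cpu] = object()  # stands in for alloc_page(GFP_KERNEL)
    return 0

# A notifier that fires between register_cpu_notifier() and get_online_cpus()
# allocates the page for the new CPU...
alloc_scratch_page(0)
# ...and the later for_each_online_cpu() pass repeats the call harmlessly.
alloc_scratch_page(0)
assert alloc_calls == 1  # the page was allocated exactly once
```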

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 drivers/xen/balloon.c |   35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 37d06ea..afe1a3f 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned long start_pfn,
 	}
 }
 
+static int alloc_balloon_scratch_page(int cpu)
+{
+	if (per_cpu(balloon_scratch_page, cpu) != NULL)
+		return 0;
+
+	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
+	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
+		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+
 static int balloon_cpu_notify(struct notifier_block *self,
 				    unsigned long action, void *hcpu)
 {
 	int cpu = (long)hcpu;
 	switch (action) {
 	case CPU_UP_PREPARE:
-		if (per_cpu(balloon_scratch_page, cpu) != NULL)
-			break;
-		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
-		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
-			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		if (alloc_balloon_scratch_page(cpu))
 			return NOTIFY_BAD;
-		}
 		break;
 	default:
 		break;
@@ -624,15 +634,16 @@ static int __init balloon_init(void)
 		return -ENODEV;
 
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		for_each_online_cpu(cpu)
-		{
-			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
-			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
-				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		register_cpu_notifier(&balloon_cpu_notifier);
+
+		get_online_cpus();
+		for_each_online_cpu(cpu) {
+			if (alloc_balloon_scratch_page(cpu)) {
+				put_online_cpus();
 				return -ENOMEM;
 			}
 		}
-		register_cpu_notifier(&balloon_cpu_notifier);
+		put_online_cpus();
 	}
 
 	pr_info("Initialising balloon driver\n");


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 22:48:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 22:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBBGi-0005vY-80; Wed, 05 Feb 2014 22:48:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ksrujandas@gmail.com>) id 1WBBGh-0005vN-5C
	for xen-devel@lists.xensource.com; Wed, 05 Feb 2014 22:48:31 +0000
Received: from [85.158.143.35:30331] by server-3.bemta-4.messagelabs.com id
	88/8A-11539-EBFB2F25; Wed, 05 Feb 2014 22:48:30 +0000
X-Env-Sender: ksrujandas@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391640508!3437650!1
X-Originating-IP: [209.85.216.51]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7345 invoked from network); 5 Feb 2014 22:48:29 -0000
Received: from mail-qa0-f51.google.com (HELO mail-qa0-f51.google.com)
	(209.85.216.51)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	5 Feb 2014 22:48:29 -0000
Received: by mail-qa0-f51.google.com with SMTP id f11so1632151qae.10
	for <xen-devel@lists.xensource.com>;
	Wed, 05 Feb 2014 14:48:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Xsk40PucSVHoeL3XI8FB/oSz/wEsFjWMccg5oQXznP0=;
	b=rksACgxDrlpEm9DksU352PPKNfD1IxadUjanzrdOEZ1rE6ASfSc4cFG50VhE3+Jd8G
	OQgttXJ1iJ6O8WaAj6A+r/fHDrIQMKJAcmCkW19CgdrkSjCzXhHjd5KezKphlZWC5es5
	IJyqmIIi+PUUVfnsn2xa12SkwN0v/z7s0xt+04I+dNia0pJO1M/M+cU8IBL9qruWtu84
	KShh8VmU/skS5WjrE4JidBXKqojPBtk3HKp9YvRbBZayvC7m+tzuSWa0Mt4NBzw/9rCd
	nVkKs668s4WFgDLmhq6uEUllIo3WC7i3FK9SaKnCEzsCSzxRvqy98oaXhXGDycgfJb08
	GrGA==
MIME-Version: 1.0
X-Received: by 10.224.114.141 with SMTP id e13mr6983855qaq.65.1391640507835;
	Wed, 05 Feb 2014 14:48:27 -0800 (PST)
Received: by 10.140.85.111 with HTTP; Wed, 5 Feb 2014 14:48:27 -0800 (PST)
Date: Wed, 5 Feb 2014 16:48:27 -0600
Message-ID: <CAKLFbfxmjiMkkjATUbmgm6Qb=+xrVqQCp680do=a0chsNYKM-g@mail.gmail.com>
From: Srujan Kotikela <ksrujandas@gmail.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: [Xen-devel] DomU measured late launch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5423459431319707513=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5423459431319707513==
Content-Type: multipart/alternative; boundary=047d7bea44d41feac604f1b08ea2

--047d7bea44d41feac604f1b08ea2
Content-Type: text/plain; charset=ISO-8859-1

Hi,

Does Xen support measured late launch of a DomU (using Intel TXT or AMD
SVM)?

~ SDK

--047d7bea44d41feac604f1b08ea2
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi,<div><br></div><div>Does Xen support measured late laun=
ch of a DomU (using Intel TXT or AMD SVM)?=A0</div><div><br clear=3D"all"><=
div>~ SDK<br></div>
</div></div>

--047d7bea44d41feac604f1b08ea2--


--===============5423459431319707513==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5423459431319707513==--


From xen-devel-bounces@lists.xen.org Wed Feb 05 23:08:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 23:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBBZP-0006gQ-CR; Wed, 05 Feb 2014 23:07:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WBBZN-0006gK-Rm
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 23:07:50 +0000
Received: from [193.109.254.147:31915] by server-9.bemta-14.messagelabs.com id
	75/23-24895-544C2F25; Wed, 05 Feb 2014 23:07:49 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391641668!2286148!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15966 invoked from network); 5 Feb 2014 23:07:48 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 23:07:48 -0000
Received: from smtphost3.dur.ac.uk (smtphost3.dur.ac.uk [129.234.252.3])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s15N7YRe017006;
	Wed, 5 Feb 2014 23:07:39 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost3.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s15N7Rwx024873
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 5 Feb 2014 23:07:27 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s15N7ReH003920;
	Wed, 5 Feb 2014 23:07:27 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s15N7RA9003911; Wed, 5 Feb 2014 23:07:27 GMT
Date: Wed, 5 Feb 2014 23:07:27 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391592517.6497.76.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.00.1402052302070.6184@procyon.dur.ac.uk>
References: <1391527619.2441.16.camel@astar.houby.net>
	<1391530374.6497.55.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
	<1391592517.6497.76.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s15N7YRe017006
Cc: ehouby@yahoo.com, xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 5 Feb 2014, Ian Campbell wrote:

> On Tue, 2014-02-04 at 18:41 +0000, M A Young wrote:
>>> We should probably consider taking some unit files into the xen tree, if
>>> someone wants to submit a set?
>>
>> I can submit a set, which start services individually rather than a
>> unified xencommons style start file. I didn't find a good way to reproduce
>> the xendomains script, so I ended running an edited version of the
>> sysvinit script with a systemd wrapper file.
>
> I don't know what is conventional in systemd land but I have no problem
> with that approach.
>
>> Would it make sense to have some sort of configure option in tools to choose
>> between sysvinit and systemd?
>
> AIUI systemd will DTRT and ignore the initscript if there is a systemd
> unit file, so installing both seems to be fine to me, and is certain to
> be less error prone.
>
> Although perhaps the xencommons vs split units breaks that suppression
> logic?
>
> What depends on the xenstored & xenconsole modules? Is it possible to
> have a "metaunit" xencommons which a) causes everything to be started
> and b) suppresses the initscript by having the same name?
>
> Or maybe there is a systemd syntax you can use to suppress
> xencommons.init?

I don't know of an alternate way to suppress an init.d script, so I agree
it makes sense to have a dummy xencommons.service unit which will start
any other xen units, but doesn't do anything functional itself.
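
For illustration, such a metaunit might look like the sketch below; the unit
names and dependencies here are assumptions for the example, not the files
actually being proposed:

```ini
# xencommons.service (hypothetical sketch): shadows the xencommons initscript
# and pulls in the split per-service units without doing any work itself.
[Unit]
Description=Xen common setup (dummy metaunit)
Requires=xenstored.service xenconsoled.service
After=xenstored.service xenconsoled.service

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true

[Install]
WantedBy=multi-user.target
```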

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 05 23:08:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Feb 2014 23:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBBZP-0006gQ-CR; Wed, 05 Feb 2014 23:07:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WBBZN-0006gK-Rm
	for xen-devel@lists.xen.org; Wed, 05 Feb 2014 23:07:50 +0000
Received: from [193.109.254.147:31915] by server-9.bemta-14.messagelabs.com id
	75/23-24895-544C2F25; Wed, 05 Feb 2014 23:07:49 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391641668!2286148!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15966 invoked from network); 5 Feb 2014 23:07:48 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 5 Feb 2014 23:07:48 -0000
Received: from smtphost3.dur.ac.uk (smtphost3.dur.ac.uk [129.234.252.3])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s15N7YRe017006;
	Wed, 5 Feb 2014 23:07:39 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost3.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s15N7Rwx024873
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 5 Feb 2014 23:07:27 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s15N7ReH003920;
	Wed, 5 Feb 2014 23:07:27 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s15N7RA9003911; Wed, 5 Feb 2014 23:07:27 GMT
Date: Wed, 5 Feb 2014 23:07:27 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391592517.6497.76.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.00.1402052302070.6184@procyon.dur.ac.uk>
References: <1391527619.2441.16.camel@astar.houby.net>
	<1391530374.6497.55.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.00.1402041828250.26392@procyon.dur.ac.uk>
	<1391592517.6497.76.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s15N7YRe017006
Cc: ehouby@yahoo.com, xen-devel@lists.xen.org, xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] xen 4.4 FC3 xl create error
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 5 Feb 2014, Ian Campbell wrote:

> On Tue, 2014-02-04 at 18:41 +0000, M A Young wrote:
>>> We should probably consider taking some unit files into the xen tree, if
>>> someone wants to submit a set?
>>
>> I can submit a set, which start services individually rather than a
>> unified xencommons style start file. I didn't find a good way to reproduce
>> the xendomains script, so I ended running an edited version of the
>> sysvinit script with a systemd wrapper file.
>
> I don't know what is conventional in systemd land but I have no problem
> with that approach.
>
>> Would it make sense to have some sort of configure option in tools to choose
>> between sysvinit and systemd?
>
> AIUI systemd will DTRT and ignore the initscript if there is a systemd
> unit file, so installing both seems to be fine to me, and is certain to
> be less error prone.
>
> Although perhaps the xencommons vs split units breaks that suppression
> logic?
>
> What depends on the xenstored & xenconsole modules? Is it possible to
> have a "metaunit" xencommons which a) causes everything to be started
> and b) suppresses the initscript by having the same name?
>
> Or maybe there is a systemd syntax you can use to suppress
> xencommons.init?

I don't know of an alternative way to suppress an init.d script, so I agree 
it makes sense to have a dummy xencommons.service unit which starts 
the other xen units but doesn't do anything functional itself.
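For illustration, such a dummy meta-unit might look roughly like the sketch
below. This is only a sketch: the names xenstored.service and
xenconsoled.service are assumptions about what the split units would be
called, and the unit relies on sharing the name "xencommons" with the
initscript so that systemd's sysv compatibility logic ignores the latter.

```ini
# xencommons.service -- dummy aggregation unit (sketch).
# Shadows the xencommons initscript by having the same name, and pulls in
# the individual Xen daemon units instead of starting anything itself.
[Unit]
Description=Xen daemons (meta-unit)
Wants=xenstored.service xenconsoled.service
After=xenstored.service xenconsoled.service

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/true

[Install]
WantedBy=multi-user.target
```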

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 00:18:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 00:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBCf9-0000iI-Nu; Thu, 06 Feb 2014 00:17:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WBCf8-0000iD-Ek
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 00:17:50 +0000
Received: from [85.158.137.68:36031] by server-1.bemta-3.messagelabs.com id
	FF/2F-17293-DA4D2F25; Thu, 06 Feb 2014 00:17:49 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391645867!13695098!1
X-Originating-IP: [98.139.213.74]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12495 invoked from network); 6 Feb 2014 00:17:48 -0000
Received: from nm26-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm26-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.74)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 00:17:48 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391645867; bh=sXXMKtLBYbfIHMqbU5kP6iHECnv5EuKnxnHQZysJ4wM=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=G0OxLgJ6LdPUq5ZAaIfO/4qUFw0lySZSx+mRTmrPvlQcv6hQOYuMNqSzvdmxMb8oHHIMW6/D87B8dnxJ3xaIv7R+X/FuqvIDmWqG+0AWlFk9v0VMNOCGYEO00y77OucC1/fKEiP9lig1yJbEn1B5DQUoqyfp+BSkiRjz2Iti5pc=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=HdN5XG40hq2w9Ve7v5swXNlDiI1mA/QL5rYcZBjEhJSwEFEdLgjViJ0NA9udYSRKcutiMd5nogMYUHsbkSQhS7OVZf2WMUY4hf2WJtfLG5YgWnCqi1XwjL4K2gw74K1fiwbWiTc32ZfDcNWHv7v5AbdlalxVaJFnozXPk+H5t4E=;
Received: from [98.139.212.153] by nm26.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 00:17:47 -0000
Received: from [98.139.211.201] by tm10.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 00:17:46 -0000
Received: from [127.0.0.1] by smtp210.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 00:17:46 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391645866; bh=sXXMKtLBYbfIHMqbU5kP6iHECnv5EuKnxnHQZysJ4wM=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=QwAHFEr+aGkg5duCBTLbXqCBw+oK4/dsFpLky+I1ntXYTYjwYb/AL4h/SR+94h8a086/Eh5hsIJ7sk0iqt0So7/HMN0g8ANQfMPko+5wauxzaIhcIjTsTvgRbIEwtd8i/AQZXxbjUcB8ENZVXIoXal/jd65Zmq2DCZAtjhfidBM=
X-Yahoo-Newman-Id: 917242.3248.bm@smtp210.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: eBobRrUVM1lzEAEf4LX6Qk7BRCCGBGz64in.h_XtLYvJeEU
	2C3igsbbeQlnaUBDnLfN2F2GrC0B8miB.KllXKUcP_ZFe80jS.XWVpQzeJrm
	m3js1EzxiKThlLmWElEjFQ2s621_QAzxWJW3PWTIXOg_R.QqTnTKyI6CjOqC
	iUQY44h3o0L0VATohp8l6NIH3KoZr3lGWUriEflRa26Aa_ZSr5gY2ziDVS.z
	uJ0GYukS1cezohRSpI5ahl1yqhK9.u1BJUFNXeup7T0W4PVBETltaIO3U7gt
	AFZuidUpuJSKXyGoA4PsnWxpyZHZQ5SLLOC5.jMD_dHvrHmVcMWHGyhNz.aR
	d1x11PuvJgb2Em.2.7.yiguwZSS.KBfc_DVLw5TQwrM5JduKAL_v2DNMnILe
	SViv_U2S.yCQjfVPJhIt5jntE03P12LMlaw9cmJx_0PuQnQFxDC6BFIgOlMG
	alq85ioNCW1_gHlV_fMWH3AROBsORF6tj0AyEKxvs09V3rjd.qOkPqHF1KT1
	.m3S.PE9KV3.10pLn8YhBP9PXJHmRb3yf7cEGP_.ZQuBx7B.81pprtA9W.3J
	j.i3qM1hISt7IWSsPAQ0OfmVM90kYXwtWSH6vR0HqIVjWdX.wJwv7fBcfIBg
	FxdRzkTbZRs4X4oNLYoTgK9AW4v9cpkdcpCXLXjI-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.138.105.21])
	by smtp210.mail.bf1.yahoo.com with SMTP; 05 Feb 2014 16:17:46 -0800 PST
Message-ID: <1391645864.2751.9.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Wed, 05 Feb 2014 17:17:44 -0700
In-Reply-To: <15510565929.20140205090647@eikelenboom.it>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz> <1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: xen@lists.fedoraproject.org, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 09:06 +0100, Sander Eikelenboom wrote:
> Wednesday, February 5, 2014, 5:00:08 AM, you wrote:
> 
> > On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
> >> Il 04/02/2014 16:41, Eric Houby ha scritto:
> >> > Xen list,
> >> >
> >> > I am trying to boot a F20 guest and connect using Spice but have run
> >> > into an issue.
> >> >
> >> > My VM config file includes:
> >> > spice = 1
> >> > spicehost='0.0.0.0'
> >> > spiceport=6001
> >> > spicedisable_ticketing=1
> >> >
> >> >
> >> > Is Spice supported with qemu-xen-traditional?
> >> 
> >> No, only with upstream qemu, and if you compile xen and qemu from source 
> >> you must also enable spice support in the qemu build; for example, in my 
> >> xen build tests I add:
> >> 
> >> tools/Makefile
> >> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
> >>           --datadir=$(SHAREDIR)/qemu-xen \
> >>           --localstatedir=/var \
> >>           --disable-kvm \
> >> +        --enable-spice \
> >> +        --enable-usb-redir \
> >>           --disable-docs \
> >>           --disable-guest-agent \
> >>           --python=$(PYTHON) \
> >> 
> >> If you use upstream qemu from a distribution package it probably already 
> >> has spice built in; for example, on debian I've already tested it and it works.
> >> 
> 
> > It is my understanding that the qemu package in F20 does not support xen
> > so I compiled xen from source per the RC3 Test Day instructions and the
> > instructions here:
> 
> > http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source
> 
> > After adding --enable-spice and --enable-usb-redir to tools/Makefile I
> > see the following error when I make xen:
> 
> > ERROR: User requested feature spice
> >        configure was not able to find it
> 
> Do you have the libspice-dev packages installed for your distro ?
> 

I do now. I also have the usbredir-devel package. 

Thanks for the help.
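For anyone following along on Fedora: the package Sander mentions by its
Debian name (libspice-dev) goes by a different name there. A plausible
sequence is sketched below; the exact package names are an assumption, so
check with `yum search spice` on your system before relying on them.

```sh
# Install the Spice and USB-redirection development headers (Fedora naming
# assumed), then rebuild the tools so qemu's configure can find them.
yum install -y spice-server-devel usbredir-devel

# qemu's configure caches its results, so start from a clean tools tree.
cd xen-4.4
make -C tools clean
make -C tools
```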



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 00:31:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 00:31:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBCrm-0001IQ-SX; Thu, 06 Feb 2014 00:30:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1WBCrl-0001IL-Fv
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 00:30:53 +0000
Received: from [85.158.139.211:15103] by server-17.bemta-5.messagelabs.com id
	5C/45-31975-CB7D2F25; Thu, 06 Feb 2014 00:30:52 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391646651!1965749!1
X-Originating-IP: [209.85.217.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9501 invoked from network); 6 Feb 2014 00:30:52 -0000
Received: from mail-lb0-f181.google.com (HELO mail-lb0-f181.google.com)
	(209.85.217.181)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 00:30:52 -0000
Received: by mail-lb0-f181.google.com with SMTP id z5so908434lbh.40
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 16:30:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=p7sV1fQSwtqzyMwyoZRLXaqqLeFllObweSd5cZUSyZ8=;
	b=mdMg/YFv41RlD1Szbio36pJloRKQh5z5ABGwymPWeLN0PsyFgPNlg88wvAlCm/F0jg
	DtAvIiS3sEPjR0HckSGUneO5ouKyzPSJF3bpfFfocEufa0Nh8Fi9r7d93eO+xSv7CcwH
	NsnuIHqjwdLLqavUGtLzO9+INcM/OIjcY+6MGwErEmcmdgZgCkZDEgw9dE+p6400rc7d
	tbQmyYOeyAgNa46I8ORADLuinNZRBsHVvS6+sRTSmKo/eBKVgob0BTuKxTbnePqUCDK0
	6FNS82Ry8HRulfjcBZ2hf7yog+DlE8OULjkyWu6s4NtWw1kwt0ZVinDa3RXl+X/OtOxW
	A1kA==
X-Received: by 10.153.8.225 with SMTP id dn1mr3094723lad.17.1391646651516;
	Wed, 05 Feb 2014 16:30:51 -0800 (PST)
MIME-Version: 1.0
Received: by 10.152.128.226 with HTTP; Wed, 5 Feb 2014 16:30:31 -0800 (PST)
In-Reply-To: <1391572808.2441.37.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
From: Dario Faggioli <raistlin@linux.it>
Date: Thu, 6 Feb 2014 01:30:31 +0100
X-Google-Sender-Auth: Fn3Ef6VliqhegxNfKWI7_JSiShw
Message-ID: <CAAWQecu-gWouUgHOVTDXfNFsK4iNyGeWj82F61YnhQfwOiOcLQ@mail.gmail.com>
To: Eric Houby <ehouby@yahoo.com>
Cc: xen <xen@lists.fedoraproject.org>, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	M A Young <m.a.young@durham.ac.uk>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 5, 2014 at 5:00 AM, Eric Houby <ehouby@yahoo.com> wrote:
> It is my understanding that the qemu package in F20 does not support xen
> so I compiled xen from source per the RC3 Test Day instructions and the
> instructions here:
>
Yes, we figured out that was the case during the previous TestDay.
Michael, any news on this? Who should we be talking with to have Xen
enabled in the default Fedora qemu package build? I guess it won't be an
F20 thing anyway, but let's make sure we have that enabled in the
next Fedora release! :-)

Regards,
Dario

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 01:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 01:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBDcX-0006iw-G3; Thu, 06 Feb 2014 01:19:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WBDcV-0006ik-S7
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 01:19:12 +0000
Received: from [85.158.137.68:41873] by server-2.bemta-3.messagelabs.com id
	08/A3-06531-F03E2F25; Thu, 06 Feb 2014 01:19:11 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391649547!12515160!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12673 invoked from network); 6 Feb 2014 01:19:09 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 01:19:09 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s161IJJS029159
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Feb 2014 01:18:20 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s161IGPF025210
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Feb 2014 01:18:16 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s161IGUw025203; Thu, 6 Feb 2014 01:18:16 GMT
MIME-Version: 1.0
Message-ID: <7c7623ad-516f-4615-8923-c64ea203636c@default>
Date: Wed, 5 Feb 2014 17:18:16 -0800 (PST)
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: <srivatsa.bhat@linux.vnet.ibm.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: ego@linux.vnet.ibm.com, walken@google.com, linux@arm.linux.org.uk,
	akpm@linux-foundation.org, peterz@infradead.org,
	rusty@rustcorp.com.au, oleg@redhat.com,
	linux-kernel@vger.kernel.org, paulus@samba.org,
	david.vrabel@citrix.com, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: Re: [Xen-devel] [PATCH 44/51] xen,
	balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


----- srivatsa.bhat@linux.vnet.ibm.com wrote:

> Subsystems that want to register CPU hotplug callbacks, as well as
> perform
> initialization for the CPUs that are already online, often do it as
> shown
> below:
> 
> 	get_online_cpus();
> 
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
> 
> 	register_cpu_notifier(&foobar_cpu_notifier);
> 
> 	put_online_cpus();
> 
> This is wrong, since it is prone to ABBA deadlocks involving the
> cpu_add_remove_lock and the cpu_hotplug.lock (when running
> concurrently
> with CPU hotplug operations).
> 
> Interestingly, the balloon code in xen can actually prevent double
> initialization and hence can use the following simplified form of
> callback
> registration:
> 
> 	register_cpu_notifier(&foobar_cpu_notifier);
> 
> 	get_online_cpus();
> 
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
> 
> 	put_online_cpus();
> 
> A hotplug operation that occurs between registering the notifier and
> calling
> get_online_cpus(), won't disrupt anything, because the code takes care
> to
> perform the memory allocations only once.
> 
> So reorganize the balloon code in xen this way to fix the deadlock
> with
> callback registration.
> 
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> ---
> 
>  drivers/xen/balloon.c |   35 +++++++++++++++++++++++------------
>  1 file changed, 23 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 37d06ea..afe1a3f 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned
> long start_pfn,
>  	}
>  }
>  
> +static int alloc_balloon_scratch_page(int cpu)
> +{
> +	if (per_cpu(balloon_scratch_page, cpu) != NULL)
> +		return 0;
> +
> +	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> +	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> +		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
> cpu);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +
>  static int balloon_cpu_notify(struct notifier_block *self,
>  				    unsigned long action, void *hcpu)
>  {
>  	int cpu = (long)hcpu;
>  	switch (action) {
>  	case CPU_UP_PREPARE:
> -		if (per_cpu(balloon_scratch_page, cpu) != NULL)
> -			break;
> -		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
> cpu);
> +		if (alloc_balloon_scratch_page(cpu))
>  			return NOTIFY_BAD;
> -		}
>  		break;
>  	default:
>  		break;
> @@ -624,15 +634,16 @@ static int __init balloon_init(void)
>  		return -ENODEV;
>  
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for_each_online_cpu(cpu)
> -		{
> -			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
> cpu);
> +		register_cpu_notifier(&balloon_cpu_notifier);
> +
> +		get_online_cpus();
> +		for_each_online_cpu(cpu) {
> +			if (alloc_balloon_scratch_page(cpu)) {
> +				put_online_cpus();
>  				return -ENOMEM;


Not that the original code was doing a particularly thorough job of cleaning up on allocation failure, but if it couldn't get memory it would not register the notifier. So perhaps you should unregister it before returning here.

I am also not sure how we were susceptible to the deadlock here, since we didn't call get_online_cpus(). (We probably should have, but then the commit description should say so.)
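Concretely, the failure path with the suggested unregister would look
something like the sketch below (kernel-style pseudocode, not compile-tested;
it just adds unregister_cpu_notifier() to the error path of the patched
balloon_init() loop):

```c
/* Sketch: undo the notifier registration if scratch-page allocation
 * fails, so a failed init doesn't leave a stale notifier behind. */
register_cpu_notifier(&balloon_cpu_notifier);

get_online_cpus();
for_each_online_cpu(cpu) {
	if (alloc_balloon_scratch_page(cpu)) {
		put_online_cpus();
		unregister_cpu_notifier(&balloon_cpu_notifier);
		return -ENOMEM;
	}
}
put_online_cpus();
```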

-boris

>  			}
>  		}
> -		register_cpu_notifier(&balloon_cpu_notifier);
> +		put_online_cpus();
>  	}
>  
>  	pr_info("Initialising balloon driver\n");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

MIME-Version: 1.0
Message-ID: <7c7623ad-516f-4615-8923-c64ea203636c@default>
Date: Wed, 5 Feb 2014 17:18:16 -0800 (PST)
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: <srivatsa.bhat@linux.vnet.ibm.com>
X-Mailer: Zimbra on Oracle Beehive
Content-Disposition: inline
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: ego@linux.vnet.ibm.com, walken@google.com, linux@arm.linux.org.uk,
	akpm@linux-foundation.org, peterz@infradead.org,
	rusty@rustcorp.com.au, oleg@redhat.com,
	linux-kernel@vger.kernel.org, paulus@samba.org,
	david.vrabel@citrix.com, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: Re: [Xen-devel] [PATCH 44/51] xen,
	balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


----- srivatsa.bhat@linux.vnet.ibm.com wrote:

> Subsystems that want to register CPU hotplug callbacks, as well as
> perform
> initialization for the CPUs that are already online, often do it as
> shown
> below:
> 
> 	get_online_cpus();
> 
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
> 
> 	register_cpu_notifier(&foobar_cpu_notifier);
> 
> 	put_online_cpus();
> 
> This is wrong, since it is prone to ABBA deadlocks involving the
> cpu_add_remove_lock and the cpu_hotplug.lock (when running
> concurrently
> with CPU hotplug operations).
> 
> Interestingly, the balloon code in xen can actually prevent double
> initialization and hence can use the following simplified form of
> callback
> registration:
> 
> 	register_cpu_notifier(&foobar_cpu_notifier);
> 
> 	get_online_cpus();
> 
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
> 
> 	put_online_cpus();
> 
> A hotplug operation that occurs between registering the notifier and
> calling
> get_online_cpus(), won't disrupt anything, because the code takes care
> to
> perform the memory allocations only once.
> 
> So reorganize the balloon code in xen this way to fix the deadlock
> with
> callback registration.
> 
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> ---
> 
>  drivers/xen/balloon.c |   35 +++++++++++++++++++++++------------
>  1 file changed, 23 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 37d06ea..afe1a3f 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned
> long start_pfn,
>  	}
>  }
>  
> +static int alloc_balloon_scratch_page(int cpu)
> +{
> +	if (per_cpu(balloon_scratch_page, cpu) != NULL)
> +		return 0;
> +
> +	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> +	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> +		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
> cpu);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +
>  static int balloon_cpu_notify(struct notifier_block *self,
>  				    unsigned long action, void *hcpu)
>  {
>  	int cpu = (long)hcpu;
>  	switch (action) {
>  	case CPU_UP_PREPARE:
> -		if (per_cpu(balloon_scratch_page, cpu) != NULL)
> -			break;
> -		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
> cpu);
> +		if (alloc_balloon_scratch_page(cpu))
>  			return NOTIFY_BAD;
> -		}
>  		break;
>  	default:
>  		break;
> @@ -624,15 +634,16 @@ static int __init balloon_init(void)
>  		return -ENODEV;
>  
>  	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for_each_online_cpu(cpu)
> -		{
> -			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n",
> cpu);
> +		register_cpu_notifier(&balloon_cpu_notifier);
> +
> +		get_online_cpus();
> +		for_each_online_cpu(cpu) {
> +			if (alloc_balloon_scratch_page(cpu)) {
> +				put_online_cpus();
>  				return -ENOMEM;


Not that the original code was doing a particularly thorough job of cleaning up on allocation failure, but if it couldn't get memory it would not register the notifier. So perhaps you should unregister it before returning here.

I am also not sure how we were susceptible to the deadlock here, since we didn't call get_online_cpus(). (We probably should have, but then the commit description should say so.)

-boris

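[Editorial note: a minimal, compilable sketch of the error-path ordering Boris suggests above, i.e. unregistering the notifier (and freeing any pages already allocated) before returning on failure. The stub functions and names here are hypothetical stand-ins for the kernel APIs, not the real signatures.]

```c
#include <stdlib.h>

#define NR_CPUS 4

/* Hypothetical stand-ins for the kernel primitives. */
static int notifier_registered;
static void *scratch_page[NR_CPUS];

static void register_cpu_notifier_stub(void)   { notifier_registered = 1; }
static void unregister_cpu_notifier_stub(void) { notifier_registered = 0; }

/* Models alloc_balloon_scratch_page(): idempotent per-cpu allocation. */
static int alloc_scratch_page(int cpu)
{
    if (scratch_page[cpu])
        return 0;
    scratch_page[cpu] = malloc(4096);      /* stands in for alloc_page() */
    return scratch_page[cpu] ? 0 : -1;
}

/* Models balloon_init(); fail_at_cpu forces a failure for testing. */
static int balloon_init_model(int fail_at_cpu)
{
    int cpu;

    register_cpu_notifier_stub();
    /* get_online_cpus() would be taken here */
    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        if (cpu == fail_at_cpu || alloc_scratch_page(cpu)) {
            /* put_online_cpus() would be dropped here, then unwind:
             * undo the registration and free what was allocated. */
            unregister_cpu_notifier_stub();
            while (--cpu >= 0) {
                free(scratch_page[cpu]);
                scratch_page[cpu] = NULL;
            }
            return -1;
        }
    }
    /* put_online_cpus() */
    return 0;
}
```

Registering the notifier before walking the online CPUs is safe only because the allocation is idempotent (a page already allocated by the notifier is left alone), which is the property the commit message relies on.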
>  			}
>  		}
> -		register_cpu_notifier(&balloon_cpu_notifier);
> +		put_online_cpus();
>  	}
>  
>  	pr_info("Initialising balloon driver\n");

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 01:33:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 01:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBDqT-0007Dy-JD; Thu, 06 Feb 2014 01:33:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBDqS-0007Dt-DY
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 01:33:36 +0000
Received: from [193.109.254.147:24527] by server-4.bemta-14.messagelabs.com id
	53/3C-32066-F66E2F25; Thu, 06 Feb 2014 01:33:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391650413!2321064!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16189 invoked from network); 6 Feb 2014 01:33:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 01:33:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,790,1384300800"; d="scan'208";a="100351004"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 01:33:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 5 Feb 2014 20:33:32 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBDqO-0000S7-0J;
	Thu, 06 Feb 2014 01:33:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBDqN-0001VZ-PL;
	Thu, 06 Feb 2014 01:33:31 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24737-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 01:33:31 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 24737: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24737 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24737/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391
baseline version:
 qemuu                a41087bc7110e8378cd49ddd06aa7c9d361f3673

------------------------------------------------------------
People who touched revisions under test:
  Don Slutz <address@hidden>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=027c412ff71ad8bff6e335cc7932857f4ea74391
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 027c412ff71ad8bff6e335cc7932857f4ea74391
+ branch=qemu-upstream-unstable
+ revision=027c412ff71ad8bff6e335cc7932857f4ea74391
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git 027c412ff71ad8bff6e335cc7932857f4ea74391:master
Counting objects: 1   
Counting objects: 5, done.
Compressing objects:  33% (1/3)   
Compressing objects:  66% (2/3)   
Compressing objects: 100% (3/3)   
Compressing objects: 100% (3/3), done.
Writing objects:  33% (1/3)   
Writing objects:  66% (2/3)   
Writing objects: 100% (3/3)   
Writing objects: 100% (3/3), 30.67 KiB, done.
Total 3 (delta 1), reused 2 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   a41087b..027c412  027c412ff71ad8bff6e335cc7932857f4ea74391 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 03:38:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 03:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBFn0-0002Gy-DI; Thu, 06 Feb 2014 03:38:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WBFmz-0002Gt-A0
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 03:38:09 +0000
Received: from [85.158.143.35:44491] by server-2.bemta-4.messagelabs.com id
	72/60-10891-0A303F25; Thu, 06 Feb 2014 03:38:08 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391657887!3485370!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11945 invoked from network); 6 Feb 2014 03:38:07 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 03:38:07 -0000
Received: by mail-wg0-f51.google.com with SMTP id z12so879133wgg.18
	for <xen-devel@lists.xensource.com>;
	Wed, 05 Feb 2014 19:38:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=qb9zGtYm+GJhPTXbPDCfapG5AVD1IkNy5KwzwjOWc8I=;
	b=n355FmP9PhwxuaTvRWsWgkVhMDtr1RxozOugGWL4Okz7fBnp9l3KCMwOoYDkIQU6zd
	3WhkbLN2vOINBtKOSlbwSmwORGblP433ZfxQksg0efaXHb81NtO8teoj5nUG5e0lTHYy
	ZPpMd/7GRVuZnH9f4cNdni661jmHqgh3PY8s4ukpg3PHoXw1KGRmFk8s8zpjU/A6lFIu
	tO6Pz60/xcUltCDxGRhpCV+fxjdZukiglpv25mdq2BJQ1z1iVwbJCvWmOLn+LQF0WuAs
	Jv9rd09t8+qnaapmKLI1zIweVypfwPCE0VZRBzUPEA5MxRLqqT8tOT+Al+uonsvBB5mM
	3edg==
X-Received: by 10.180.37.193 with SMTP id a1mr5020483wik.52.1391657887314;
	Wed, 05 Feb 2014 19:38:07 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Wed, 5 Feb 2014 19:37:47 -0800 (PST)
In-Reply-To: <5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Thu, 6 Feb 2014 03:37:47 +0000
Message-ID: <CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
To: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry forgot to add the error:

# xl create test.cfg
Parsing config from test.cfg
libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader
failed - consult logfile /var/log/xen/bootloader.34.log
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
bootloader [10762] exited with error status 1
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -3

# cat /var/log/xen/bootloader.34.log
Traceback (most recent call last):
  File "/usr/local/lib/xen/bin/pygrub", line 844, in <module>
    part_offs = get_partition_offsets(file)
  File "/usr/local/lib/xen/bin/pygrub", line 105, in get_partition_offsets
    image_type = identify_disk_image(file)
  File "/usr/local/lib/xen/bin/pygrub", line 47, in identify_disk_image
    fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory: 'drbd-remus-test'

On Wed, Feb 5, 2014 at 3:20 AM, Mike C. <miguelmclara@gmail.com> wrote:
> Fixed this, but it seems using drbd: in the disk config doesn't work with
> pygrub...
>
> Does this make sense?
>
> I found an old bug report, but this is Debian Squeeze with Xen 4.3.
>
> It seems to work fine booting into the installer, but if I use pygrub it
> doesn't find the drbd device.
>
>
>
>
> On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan
> <rshriram@cs.ubc.ca> wrote:
>>
>> On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com>
>> wrote:
>>>
>>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
>>> does this come with xen or the drbd package?
>>>
>>> Xen 4.3.1 was compiled from source, but drbd is installed from apt-get
>>> on Debian (v 8.3)
>>>
>>
>> It comes with the drbd package AFAIK
>>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

In-Reply-To: <5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Thu, 6 Feb 2014 03:37:47 +0000
Message-ID: <CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
To: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry forgot to add the error:

# xl create test.cfg
Parsing config from test.cfg
libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader
failed - consult logfile /var/log/xen/bootloader.34.log
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
bootloader [10762] exited with error status 1
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -3

# cat /var/log/xen/bootloader.34.log
Traceback (most recent call last):
  File "/usr/local/lib/xen/bin/pygrub", line 844, in <module>
    part_offs = get_partition_offsets(file)
  File "/usr/local/lib/xen/bin/pygrub", line 105, in get_partition_offsets
    image_type = identify_disk_image(file)
  File "/usr/local/lib/xen/bin/pygrub", line 47, in identify_disk_image
    fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory: 'drbd-remus-test'
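The traceback above boils down to pygrub's very first step: `identify_disk_image` calls `os.open()` on the disk target string, and a drbd resource name is not a filesystem path. A minimal sketch of that failing step (written for Python 3 for illustration; pygrub of this era ran under Python 2, and the function name is the only thing taken from the traceback):

```python
import errno
import os

def identify_disk_image(path):
    """Mirror pygrub's first step: open the disk image read-only."""
    # A drbd resource name such as 'drbd-remus-test' is not a filesystem
    # path, so the open fails with ENOENT before pygrub can inspect it.
    fd = os.open(path, os.O_RDONLY)
    os.close(fd)

try:
    identify_disk_image('drbd-remus-test')
except OSError as e:
    assert e.errno == errno.ENOENT  # [Errno 2] No such file or directory
```

In other words, pygrub never gets far enough to parse the image; it is handed a name that only the block-drbd hotplug script knows how to turn into a device.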

On Wed, Feb 5, 2014 at 3:20 AM, Mike C. <miguelmclara@gmail.com> wrote:
> Fixed this, but it seems that using drbd: in the disk config doesn't work
> with pygrub....
>
> Does this make sense?
>
> I found an old bug report, but this is Debian squeeze with Xen 4.3.
>
> It seems to work fine booting into the installer, but if I use pygrub it
> doesn't find the drbd device.
>
>
>
>
> On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan
> <rshriram@cs.ubc.ca> wrote:
>>
>> On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com>
>> wrote:
>>>
>>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
>>> does this come with xen or the drbd package?
>>>
> >>> Xen 4.3.1 was compiled from source, but drbd is installed from apt-get
> >>> on Debian (v 8.3)
>>>
>>
>> It comes with the drbd package AFAIK
>>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
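For reference, the drbd: disk syntax under discussion looks like this in a guest config file (the resource name here is hypothetical); it relies on the block-drbd hotplug script, shipped with the DRBD package, being present in /etc/xen/scripts:

```
disk = [ 'drbd:myresource,xvda,w' ]
```

pygrub, however, appears to receive the raw target string before any block script has attached the device, which would explain the OSError above.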

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 04:24:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 04:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBGVI-0003fY-Vn; Thu, 06 Feb 2014 04:23:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WBGVH-0003fQ-Jn
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 04:23:56 +0000
Received: from [85.158.143.35:5872] by server-2.bemta-4.messagelabs.com id
	E8/23-10891-A5E03F25; Thu, 06 Feb 2014 04:23:54 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391660632!3471393!1
X-Originating-IP: [72.30.238.138]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30228 invoked from network); 6 Feb 2014 04:23:53 -0000
Received: from nm36-vm2.bullet.mail.bf1.yahoo.com (HELO
	nm36-vm2.bullet.mail.bf1.yahoo.com) (72.30.238.138)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 04:23:53 -0000
Received: from [98.139.212.149] by nm36.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 04:23:52 -0000
Received: from [98.139.211.201] by tm6.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 04:23:52 -0000
Received: from [127.0.0.1] by smtp210.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 04:23:52 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391660632; bh=/+GqNniwMnYse4BmjdTBLvR7hiWN1AMb9iBlSo0G1/A=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=XZWVcAWwmlfiysgTq32EHLTnQomfa/MQX/LqlQWCNs/8U0T/siuLzYvSohKjes2LrRac4zLSWqvqMuQHOwRSV2M2FJRsKmV7xrQbW7smBtHYu4juHeOVWZ9Ele1Ki9hvgNiT88l0R7E2PuUXk9DYlq5mDW8aU78jlftGF92H79g=
X-Yahoo-Newman-Id: 562048.39236.bm@smtp210.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 8l9ORfEVM1m_41mycLe7zRXqMFIdMWBlLPe3LdJFn5N0ATZ
	_DCV0w5N_l770Co30Cr_VwDmaOvF.7cTPwzVFZ.j.iVW80GgHCbVtajAR7.c
	sagufXfMBF3Dz7sMKRibWLvaTNuQcZm9VLSe6oEVUVzwzMA0cbXpXLzv2RmN
	s91lAeerUfWlsaW4AdmFrarVnXUVt.KWZWgFW7CDl3HepZFsa1Y85cx3gUA.
	W2IEJsuxKpJ.V7MyptQq6kezDcqd8m_frTOOSrv39VAGA_yxUpJ8lmtfEvcm
	Rcof2cV62ssD3pPeFLTessCYwI_JErbkWvNpHIYXWZ8d4e22j0wOsq0I3dKc
	yPpteK.McBS16GOH03iCpgmWbbbOP5Im8RAHm7F7aqFa6kdSAuaX86MufHLj
	KXe7pPFPt18SCLzHhKafaZ4MBP_wZvUcH5pfF4jcV0vAYlEXe3m..K_DcN0r
	6sZAbtlIN.3eW5AIiU3.wuSZTVebh68M2eEb2tXUs4WUfZAlf0EcREBSOmiJ
	EEvld7nHOZW42dGt_URODK_0br1lrZeIzGqnxpILjZIT7GfQRAClsefVB7XF
	09iwsNmGXvDhuuMxGp_d8XBx5YtgPlzsKPCkNbt..or2mLuDdKAN.mlcfY5v
	I_nGPdlvAPfquy3_qAUqaMDMFExLB3D0gxuQvzw--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp210.mail.bf1.yahoo.com with SMTP; 05 Feb 2014 20:23:52 -0800 PST
Message-ID: <1391660630.2751.12.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Wed, 05 Feb 2014 21:23:50 -0700
In-Reply-To: <1391645864.2751.9.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz> <1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: xen@lists.fedoraproject.org, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 17:17 -0700, Eric Houby wrote:
> On Wed, 2014-02-05 at 09:06 +0100, Sander Eikelenboom wrote:
> > Wednesday, February 5, 2014, 5:00:08 AM, you wrote:
> > 
> > > On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
> > >> On 04/02/2014 16:41, Eric Houby wrote:
> > >> > Xen list,
> > >> >
> > >> > I am trying to boot a F20 guest and connect using Spice but have run
> > >> > into an issue.
> > >> >
> > >> > My VM config file includes:
> > >> > spice = 1
> > >> > spicehost='0.0.0.0'
> > >> > spiceport=6001
> > >> > spicedisable_ticketing=1
> > >> >
> > >> >
> > >> > Is Spice supported with qemu-xen-traditional?
> > >> 
> > >> No, only with upstream qemu, and if you compile xen and qemu from source
> > >> you must also enable spice support in the qemu build; for example, in my
> > >> xen build tests I add:
> > >> 
> > >> tools/Makefile
> > >> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
> > >>           --datadir=$(SHAREDIR)/qemu-xen \
> > >>           --localstatedir=/var \
> > >>           --disable-kvm \
> > >> +        --enable-spice \
> > >> +        --enable-usb-redir \
> > >>           --disable-docs \
> > >>           --disable-guest-agent \
> > >>           --python=$(PYTHON) \
> > >> 
> > >> If you use upstream qemu from a distribution package, it probably already
> > >> has spice built in; for example, on Debian I have tested this and it works.
> > >> 
> > 
> > > It is my understanding that the qemu package in F20 does not support xen
> > > so I compiled xen from source per the RC3 Test Day instructions and the
> > > instructions here:
> > 
> > > http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source
> > 
> > > After adding --enable-spice and --enable-usb-redir to tools/Makefile I
> > > see the following error when I make xen:
> > 
> > > ERROR: User requested feature spice
> > >        configure was not able to find it
> > 
> > Do you have the libspice-dev packages installed for your distro ?
> > 
> 
> I do now. I also have the usbredir-devel package. 
> 
> Thanks for the help.
> 

Is there a knob for qxl support?

[root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
qemu-system-i386: -vga qxl: invalid option





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 04:58:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 04:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBH1w-0004UU-Pp; Thu, 06 Feb 2014 04:57:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WBH1u-0004UP-OE
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 04:57:38 +0000
Received: from [85.158.143.35:27655] by server-2.bemta-4.messagelabs.com id
	EA/21-10891-14613F25; Thu, 06 Feb 2014 04:57:37 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391662655!3499992!1
X-Originating-IP: [209.85.192.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28774 invoked from network); 6 Feb 2014 04:57:37 -0000
Received: from mail-pd0-f182.google.com (HELO mail-pd0-f182.google.com)
	(209.85.192.182)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 04:57:37 -0000
Received: by mail-pd0-f182.google.com with SMTP id v10so1242087pde.13
	for <xen-devel@lists.xenproject.org>;
	Wed, 05 Feb 2014 20:57:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=vBMsmyPijxUXfsWcXfmljHlj7x1t6QQK6lfvHue7igc=;
	b=FZBfUM7nFz7tRr3BmZu3FJ4RVWyAskQCLCuowRamAjA44v2stNjZRcvTw+nMKBzR0+
	dVeuattmaNY4o9ls1afJdL6m+Cma2DWgtTeelgfyhzJhWGFDjEFvn9Ev1yGf/VlT3gvq
	0lLhZ4WXYktaA5P0wRPM/isyiQU7ycs0xeHJLjMiWTKSHvXQgWj3HxOLudxSQYbbjFSm
	330L531VVADEXdoHsqVI598cnGD4Gour+pZLrta+yGEm2LbFOBe2jou6rTDWT9qsx96o
	fJOIBvCcFJhfOvKnwEWUg6cyxPbYWoHYMArU58sAOQ0yPoLYb4cUtdDpLe6w+od+d7Yg
	cUtQ==
X-Received: by 10.68.238.201 with SMTP id vm9mr9126839pbc.18.1391662654758;
	Wed, 05 Feb 2014 20:57:34 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id
	id1sm82371248pbc.11.2014.02.05.20.57.31 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 20:57:33 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Wed, 05 Feb 2014 20:57:30 -0800
Date: Wed, 5 Feb 2014 20:57:30 -0800
From: Matt Wilson <msw@linux.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Message-ID: <20140206045729.GA15901@u109add4315675089e695.ant.amazon.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<20140204151501.GA1781@andromeda.dapyr.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140204151501.GA1781@andromeda.dapyr.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: mrushton@amazon.com, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org, msw@amazon.com,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 04, 2014 at 11:15:01AM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Feb 04, 2014 at 11:26:11AM +0100, Roger Pau Monne wrote:
> > This series contains blkback bug fixes for memory leaks (patches 1 and
> > 2) and a race (patch 3). Patch 4 removes blkif_request_segment_aligned
> > since its memory layout is exactly the same as blkif_request_segment
> > and should introduce no functional change.
> > 
> > All patches should be backported to stable branches; although the last
> > one is not a functional change, it is still nice to have for code
> > correctness.
> 
> Matt and Matt, could you guys kindly take a look as well? Thank you!

Matt R. did some testing today and set up additional tests to run
overnight. He'll follow up after the overnight tests complete.

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 07:29:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 07:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBJOZ-0000BL-7V; Thu, 06 Feb 2014 07:29:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBJOY-0000BG-AS
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 07:29:10 +0000
Received: from [85.158.137.68:33949] by server-9.bemta-3.messagelabs.com id
	14/96-10184-5C933F25; Thu, 06 Feb 2014 07:29:09 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391671747!13733422!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18881 invoked from network); 6 Feb 2014 07:29:08 -0000
Received: from mail-pa0-f41.google.com (HELO mail-pa0-f41.google.com)
	(209.85.220.41)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 07:29:08 -0000
Received: by mail-pa0-f41.google.com with SMTP id fa1so1360708pad.28
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 23:29:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=xBybOY5H2n1bj+ApD/MvxwaFVpZaoS4QNHI7/Nqp2I8=;
	b=WNJJiYk74v8yJVmZo20pz/545UYNZTUAzvAWMptOGY16fHOyZyMCbMg+hEpGZVwcle
	Os4955gjk692MXqpm1WuVEG63dK/wpcTjTa8WOGBSf9u5Cn1ffIkG+dAc9KupfDVVvUd
	EwsijLx/zw1cRy744MqATwb1+fXR60Owg/tHqRP/HPkA46Ff99txgrrUctUhvnPLhUo0
	sA50TcVBZ6mUdxlrmpze2hp7phN8VRMDE2xud6F6RoSnw0Qz4uKei6eZoSvWlVq2s+QX
	qUjk0aHDDXnLoFrRbASN2r76B+ZaJr3ye2R28McOPz67/Mvf6EQFRbk7kXoPiIptwSS1
	vC9g==
X-Gm-Message-State: ALoCoQnuWUcxd/EogZ+3L6SJdIPNVbyh7OAtys1H7WslHPUER5pOr0Hke+t+s/A4yBKEeCML381m
X-Received: by 10.69.2.2 with SMTP id bk2mr10039602pbd.75.1391671746646;
	Wed, 05 Feb 2014 23:29:06 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id yd4sm224963pbc.13.2014.02.05.23.29.02
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 05 Feb 2014 23:29:05 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Thu,  6 Feb 2014 12:58:42 +0530
Message-Id: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds VFP save/restore support for arm64 across context switches.

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/arm64/vfp.h |    4 ++++
 2 files changed, 53 insertions(+)

diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 74e6a50..8c1479a 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -1,13 +1,62 @@
 #include <xen/sched.h>
 #include <asm/processor.h>
+#include <asm/cpufeature.h>
 #include <asm/vfp.h>
 
 void vfp_save_state(struct vcpu *v)
 {
     /* TODO: implement it */
+    if ( !cpu_has_fp )
+        return;
+
+    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
+                 "stp q2, q3, [%0, #16 * 2]\n\t"
+                 "stp q4, q5, [%0, #16 * 4]\n\t"
+                 "stp q6, q7, [%0, #16 * 6]\n\t"
+                 "stp q8, q9, [%0, #16 * 8]\n\t"
+                 "stp q10, q11, [%0, #16 * 10]\n\t"
+                 "stp q12, q13, [%0, #16 * 12]\n\t"
+                 "stp q14, q15, [%0, #16 * 14]\n\t"
+                 "stp q16, q17, [%0, #16 * 16]\n\t"
+                 "stp q18, q19, [%0, #16 * 18]\n\t"
+                 "stp q20, q21, [%0, #16 * 20]\n\t"
+                 "stp q22, q23, [%0, #16 * 22]\n\t"
+                 "stp q24, q25, [%0, #16 * 24]\n\t"
+                 "stp q26, q27, [%0, #16 * 26]\n\t"
+                 "stp q28, q29, [%0, #16 * 28]\n\t"
+                 "stp q30, q31, [%0, #16 * 30]\n\t"
+                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+
+    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
+    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
+    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
 }
 
 void vfp_restore_state(struct vcpu *v)
 {
     /* TODO: implement it */
+    if ( !cpu_has_fp )
+        return;
+
+    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
+                 "ldp q2, q3, [%0, #16 * 2]\n\t"
+                 "ldp q4, q5, [%0, #16 * 4]\n\t"
+                 "ldp q6, q7, [%0, #16 * 6]\n\t"
+                 "ldp q8, q9, [%0, #16 * 8]\n\t"
+                 "ldp q10, q11, [%0, #16 * 10]\n\t"
+                 "ldp q12, q13, [%0, #16 * 12]\n\t"
+                 "ldp q14, q15, [%0, #16 * 14]\n\t"
+                 "ldp q16, q17, [%0, #16 * 16]\n\t"
+                 "ldp q18, q19, [%0, #16 * 18]\n\t"
+                 "ldp q20, q21, [%0, #16 * 20]\n\t"
+                 "ldp q22, q23, [%0, #16 * 22]\n\t"
+                 "ldp q24, q25, [%0, #16 * 24]\n\t"
+                 "ldp q26, q27, [%0, #16 * 26]\n\t"
+                 "ldp q28, q29, [%0, #16 * 28]\n\t"
+                 "ldp q30, q31, [%0, #16 * 30]\n\t"
+                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+
+    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
+    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
+    WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
 }
diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
index 3733d2c..373f156 100644
--- a/xen/include/asm-arm/arm64/vfp.h
+++ b/xen/include/asm-arm/arm64/vfp.h
@@ -3,6 +3,10 @@
 
 struct vfp_state
 {
+    uint64_t fpregs[64];
+    uint32_t fpcr;
+    uint32_t fpexc32_el2;
+    uint32_t fpsr;
 };
 
 #endif /* _ARM_ARM64_VFP_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 07:35:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 07:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBJUi-0000LT-8A; Thu, 06 Feb 2014 07:35:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WBJUg-0000LN-3u
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 07:35:30 +0000
Received: from [85.158.137.68:13540] by server-12.bemta-3.messagelabs.com id
	BD/50-01674-14B33F25; Thu, 06 Feb 2014 07:35:29 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391672127!13653063!1
X-Originating-IP: [209.85.214.50]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15466 invoked from network); 6 Feb 2014 07:35:28 -0000
Received: from mail-bk0-f50.google.com (HELO mail-bk0-f50.google.com)
	(209.85.214.50)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 07:35:28 -0000
Received: by mail-bk0-f50.google.com with SMTP id w16so521287bkz.37
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 23:35:27 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=MTOGjlzPlQ2Cdivsio14n+H7geR3WShlSQBI1qu16x0=;
	b=kv1yLYrZlq3B5qyhtEn2jk9Kn3hF/Qw72r6Zjv/i9G/yxxLvZMzdSO+B08pxO796De
	GvwEE6ojoEq3Ugdhl0G0kZaExogkrYaH+xE7JmxWPs17Xyl4yP3bn8qgIHlPiOUQEv5v
	2vuwa8FCtCxVwHWVsFjn9sbLG4LFIKWRJiAORRxVMlbx+NWROiAFphPZhBOqdMeCeScU
	zDRds+seJx6KVB4lHA1yENpPWSvxhuTxOL8unkSxOriYWgzJI9BguQHU7639tPK01hds
	bCy/6EZWZMnRncK/JWUTTXJr8TN0/3hsjSo4lqP74tzTZZzY2oGpY1NXgTgzh+dPxOCj
	VBbw==
X-Gm-Message-State: ALoCoQndSPlxyvs3zkzYDDLMS/5HN82YPS5bOVZI6G2XLN1zlHjlZLmfE9IcGg/jWq/erxDLMX6I
MIME-Version: 1.0
X-Received: by 10.204.172.145 with SMTP id l17mr4142653bkz.26.1391672127505;
	Wed, 05 Feb 2014 23:35:27 -0800 (PST)
Received: by 10.204.103.194 with HTTP; Wed, 5 Feb 2014 23:35:27 -0800 (PST)
X-Originating-IP: [87.0.89.102]
In-Reply-To: <1391660630.2751.12.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
Date: Thu, 6 Feb 2014 08:35:27 +0100
Message-ID: <CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: ehouby@yahoo.com
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4687639335773584100=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4687639335773584100==
Content-Type: multipart/alternative; boundary=bcaec52c5f5bce1fe704f1b7ea4b

--bcaec52c5f5bce1fe704f1b7ea4b
Content-Type: text/plain; charset=ISO-8859-1

2014-02-06 5:23 GMT+01:00 Eric Houby <ehouby@yahoo.com>:

> On Wed, 2014-02-05 at 17:17 -0700, Eric Houby wrote:
> > On Wed, 2014-02-05 at 09:06 +0100, Sander Eikelenboom wrote:
> > > Wednesday, February 5, 2014, 5:00:08 AM, you wrote:
> > >
> > > > On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
> > > >> Il 04/02/2014 16:41, Eric Houby ha scritto:
> > > >> > Xen list,
> > > >> >
> > > >> > I am trying to boot a F20 guest and connect using Spice but have
> run
> > > >> > into an issue.
> > > >> >
> > > >> > My VM config file includes:
> > > >> > spice = 1
> > > >> > spicehost='0.0.0.0'
> > > >> > spiceport=6001
> > > >> > spicedisable_ticketing=1
> > > >> >
> > > >> >
> > > >> > Is Spice supported with qemu-xen-traditional?
> > > >>
> > > >> No, only with upstream qemu and if compile xen and qemu from source
> you
> > > >> also enable spice support on qemu build, for example on my xen build
> > > >> tests I add:
> > > >>
> > > >> tools/Makefile
> > > >> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
> > > >>           --datadir=$(SHAREDIR)/qemu-xen \
> > > >>           --localstatedir=/var \
> > > >>           --disable-kvm \
> > > >> +        --enable-spice \
> > > >> +        --enable-usb-redir \
> > > >>           --disable-docs \
> > > >>           --disable-guest-agent \
> > > >>           --python=$(PYTHON) \
> > > >>
> > > >> If you use upstream qemu from distribution package probably have
> already
> > > >> spice build-in, for example, on debian I've already tested and
> working.
> > > >>
> > >
> > > > It is my understanding that the qemu package in F20 does not support
> xen
> > > > so I compiled xen from source per the RC3 Test Day instructions and
> the
> > > > instructions here:
> > >
> > > > http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source
> > >
> > > > After adding --enable-spice and --enable-usb-redir to tools/Makefile
> I
> > > > see the following error when I make xen:
> > >
> > > > ERROR: User requested feature spice
> > > >        configure was not able to find it
> > >
> > > Do you have the libspice-dev packages installed for your distro ?
> > >
> >
> > I do now. I also have the usbredir-devel package.
> >
> > Thanks for the help.
> >
>
> Is there a knob for qxl support?
>
> [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
> qemu-system-i386: -vga qxl: invalid option
>
>
>
>
>
Here is a patch that adds qxl support in libxl, updated to Xen 4.4-rc3, if you
want to add it:
https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
Or you can simply compile from this branch, which is already set up for
spice/qxl testing:
https://github.com/Fantu/Xen/commits/rebase/m2r-testing

It is not upstream for now because something in Xen makes it not work on Linux
domUs with the qxl driver active, and makes it work with severe performance
problems on Windows domUs.
I spent several days without finding the exact cause of the problem :(
If you want, you can try it out and see if anything changes using Fedora
instead of Debian as dom0, different domU kernels, etc.
Maybe you will even find some new information/errors useful for solving the
problem.

--bcaec52c5f5bce1fe704f1b7ea4b--


--===============4687639335773584100==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4687639335773584100==--


From xen-devel-bounces@lists.xen.org Thu Feb 06 07:35:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 07:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBJUi-0000LT-8A; Thu, 06 Feb 2014 07:35:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WBJUg-0000LN-3u
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 07:35:30 +0000
Received: from [85.158.137.68:13540] by server-12.bemta-3.messagelabs.com id
	BD/50-01674-14B33F25; Thu, 06 Feb 2014 07:35:29 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391672127!13653063!1
X-Originating-IP: [209.85.214.50]
X-SpamReason: No, hits=1.4 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15466 invoked from network); 6 Feb 2014 07:35:28 -0000
Received: from mail-bk0-f50.google.com (HELO mail-bk0-f50.google.com)
	(209.85.214.50)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 07:35:28 -0000
Received: by mail-bk0-f50.google.com with SMTP id w16so521287bkz.37
	for <xen-devel@lists.xen.org>; Wed, 05 Feb 2014 23:35:27 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=MTOGjlzPlQ2Cdivsio14n+H7geR3WShlSQBI1qu16x0=;
	b=kv1yLYrZlq3B5qyhtEn2jk9Kn3hF/Qw72r6Zjv/i9G/yxxLvZMzdSO+B08pxO796De
	GvwEE6ojoEq3Ugdhl0G0kZaExogkrYaH+xE7JmxWPs17Xyl4yP3bn8qgIHlPiOUQEv5v
	2vuwa8FCtCxVwHWVsFjn9sbLG4LFIKWRJiAORRxVMlbx+NWROiAFphPZhBOqdMeCeScU
	zDRds+seJx6KVB4lHA1yENpPWSvxhuTxOL8unkSxOriYWgzJI9BguQHU7639tPK01hds
	bCy/6EZWZMnRncK/JWUTTXJr8TN0/3hsjSo4lqP74tzTZZzY2oGpY1NXgTgzh+dPxOCj
	VBbw==
X-Gm-Message-State: ALoCoQndSPlxyvs3zkzYDDLMS/5HN82YPS5bOVZI6G2XLN1zlHjlZLmfE9IcGg/jWq/erxDLMX6I
MIME-Version: 1.0
X-Received: by 10.204.172.145 with SMTP id l17mr4142653bkz.26.1391672127505;
	Wed, 05 Feb 2014 23:35:27 -0800 (PST)
Received: by 10.204.103.194 with HTTP; Wed, 5 Feb 2014 23:35:27 -0800 (PST)
X-Originating-IP: [87.0.89.102]
In-Reply-To: <1391660630.2751.12.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
Date: Thu, 6 Feb 2014 08:35:27 +0100
Message-ID: <CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: ehouby@yahoo.com
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4687639335773584100=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4687639335773584100==
Content-Type: multipart/alternative; boundary=bcaec52c5f5bce1fe704f1b7ea4b

--bcaec52c5f5bce1fe704f1b7ea4b
Content-Type: text/plain; charset=ISO-8859-1

2014-02-06 5:23 GMT+01:00 Eric Houby <ehouby@yahoo.com>:

> On Wed, 2014-02-05 at 17:17 -0700, Eric Houby wrote:
> > On Wed, 2014-02-05 at 09:06 +0100, Sander Eikelenboom wrote:
> > > Wednesday, February 5, 2014, 5:00:08 AM, you wrote:
> > >
> > > > On Tue, 2014-02-04 at 17:01 +0100, Fabio Fantoni wrote:
> > > >> Il 04/02/2014 16:41, Eric Houby ha scritto:
> > > >> > Xen list,
> > > >> >
> > > >> > I am trying to boot a F20 guest and connect using Spice but have
> run
> > > >> > into an issue.
> > > >> >
> > > >> > My VM config file includes:
> > > >> > spice = 1
> > > >> > spicehost='0.0.0.0'
> > > >> > spiceport=6001
> > > >> > spicedisable_ticketing=1
> > > >> >
> > > >> >
> > > >> > Is Spice supported with qemu-xen-traditional?
> > > >>
> > > >> No, only with upstream qemu and if compile xen and qemu from source
> you
> > > >> also enable spice support on qemu build, for example on my xen build
> > > >> tests I add:
> > > >>
> > > >> tools/Makefile
> > > >> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
> > > >>           --datadir=$(SHAREDIR)/qemu-xen \
> > > >>           --localstatedir=/var \
> > > >>           --disable-kvm \
> > > >> +        --enable-spice \
> > > >> +        --enable-usb-redir \
> > > >>           --disable-docs \
> > > >>           --disable-guest-agent \
> > > >>           --python=$(PYTHON) \
> > > >>
> > > >> If you use upstream qemu from distribution package probably have
> already
> > > >> spice build-in, for example, on debian I've already tested and
> working.
> > > >>
> > >
> > > > It is my understanding that the qemu package in F20 does not support
> xen
> > > > so I compiled xen from source per the RC3 Test Day instructions and
> the
> > > > instructions here:
> > >
> > > > http://wiki.xenproject.org/wiki/Compiling_Xen_From_Source
> > >
> > > > After adding --enable-spice and --enable-usb-redir to tools/Makefile
> I
> > > > see the following error when I make xen:
> > >
> > > > ERROR: User requested feature spice
> > > >        configure was not able to find it
> > >
> > > Do you have the libspice-dev packages installed for your distro ?
> > >
> >
> > I do now. I also have the usbredir-devel package.
> >
> > Thanks for the help.
> >
>
> Is there a knob for qxl support?
>
> [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
> qemu-system-i386: -vga qxl: invalid option
>
>
>
>
>
Here there is a patch that add qxl support in libxl updated to xen 4.4-rc3
if you want add it:
https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
Or you can simply compile from this already ready for spice/qxl testing:
https://github.com/Fantu/Xen/commits/rebase/m2r-testing

Is not upstream for now because there is something on xen that make it not
working on linux domUs with qxl driver active and working with high
performance problem on windows domUs.
I spent several days without finding the exact problem to be solved :(
If you want you can try it out and see if anything changes using Fedora instead
of Debian as dom0, differents kernel domUs etc.
Maybe you could even find some new informations/errors useful for solving the
problem.

--bcaec52c5f5bce1fe704f1b7ea4b--


--===============4687639335773584100==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4687639335773584100==--


From xen-devel-bounces@lists.xen.org Thu Feb 06 07:36:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 07:36:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBJVt-0000QH-N3; Thu, 06 Feb 2014 07:36:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBJVs-0000Q9-8D
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 07:36:44 +0000
Received: from [85.158.143.35:59123] by server-1.bemta-4.messagelabs.com id
	34/55-31661-B8B33F25; Thu, 06 Feb 2014 07:36:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391672201!3520327!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22758 invoked from network); 6 Feb 2014 07:36:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 07:36:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,792,1384300800"; d="scan'208";a="98510785"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 07:36:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 02:36:40 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBJVo-0002Hl-73;
	Thu, 06 Feb 2014 07:36:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBJVn-0004qj-Ac;
	Thu, 06 Feb 2014 07:36:39 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24739-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 07:36:39 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24739: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24739 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24739/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              2 host-install(2)         broken REGR. vs. 24729

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      3 host-install(3)              broken like 24728
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)        broken like 24729
 test-amd64-i386-qemuu-rhel6hvm-amd  3 host-install(3)        broken like 24729

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  ff1745d5882b7356ea423709919e46e55c31b615
baseline version:
 xen                  ff1745d5882b7356ea423709919e46e55c31b615

jobs:
 build-amd64-xend                                             broken  
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           broken  
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 08:37:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 08:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBKSJ-0002fo-8P; Thu, 06 Feb 2014 08:37:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1WBKSH-0002fi-92
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 08:37:05 +0000
Received: from [85.158.143.35:27061] by server-1.bemta-4.messagelabs.com id
	1D/70-31661-0B943F25; Thu, 06 Feb 2014 08:37:04 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391675823!3542954!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22006 invoked from network); 6 Feb 2014 08:37:03 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-15.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 08:37:03 -0000
Received: from cpc22-cmbg14-2-0-cust482.5-4.cable.virginm.net ([86.6.25.227]
	helo=[192.168.1.7]) by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1WBKS7-0004i9-GH; Thu, 06 Feb 2014 08:36:55 +0000
Message-ID: <1391675812.22033.2.camel@dagon.hellion.org.uk>
From: Ian Campbell <ijc@hellion.org.uk>
To: Gerd Hoffmann <kraxel@redhat.com>
Date: Thu, 06 Feb 2014 08:36:52 +0000
In-Reply-To: <1391675519.17309.27.camel@nilsson.home.kraxel.org>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
X-Mailer: Evolution 3.4.4-4+b1 
Mime-Version: 1.0
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5186497624973064391=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5186497624973064391==
Content-Type: multipart/signed; micalg="pgp-sha512";
	protocol="application/pgp-signature"; boundary="=-PedganKQ7gIk9JpW4WS9"


--=-PedganKQ7gIk9JpW4WS9
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: quoted-printable

(adding xen-devel too)
On Thu, 2014-02-06 at 09:31 +0100, Gerd Hoffmann wrote:
> > commit e144bb7af49ca8756b7222a75811f3b85b0bc1f5
> > Author: Gerd Hoffmann <kraxel@redhat.com>
> > Date:   Mon Jun 3 16:30:18 2013 +0200
> >
> >     usb: add xhci support
> >
> >     $subject says all.  Support for usb3 streams is not implemented yet,
> >     otherwise it is fully functional.  Tested all usb devices supported
> >     by qemu (keyboard, storage, usb hubs), except for usb attached scsi
> >     in usb3 mode (which needs streams).
> >
> >     Tested on qemu only, tagged with QEMU_HARDWARE because of that.
> >     Testing with physical hardware to be done.
>
> That commit made seabios size (default qemu config, gcc 4.7+) jump from
> 128k to 256k in size because the code didn't fit into 128k any more.
>
> Most likely the failure isn't related to xhci at all, but to the size
> change.
>
> You can try to turn off some features (hardware support) you don't need
> to make the bios image smaller.  1.7.4 also has a config option to
> explicitly set the image size you want.
>
> IIRC xen combines seabios and hvmloader into a single 256k image
> somehow, so it might make sense to set the seabios image size to
> something between 128k and 256k.  But better ask the xen people for
> details here.

I think this was fixed in Xen with
http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5f2875739beef3a75c7a7e8579b6cbcb464e61b3
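Concretely, Gerd's suggestion above maps to SeaBIOS build-time configuration along these lines (a sketch only; the exact Kconfig symbol names and accepted values should be checked against the SeaBIOS 1.7.4 tree):

```
# SeaBIOS .config fragment (illustrative)
CONFIG_ROM_SIZE=192           # target image size in KiB; 0 means auto-size
# CONFIG_USB_XHCI is not set  # drop xhci support to shrink the image
```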

I suppose this should be backported to the 4.3.x stable branch.

Ian.

--=-PedganKQ7gIk9JpW4WS9
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)

iQIcBAABCgAGBQJS80mkAAoJECouJZ9pWkbGZNsP/1G8xHB/QnerldYBKHTg/O+K
JCZ1L7u3l19gYU6+mMwYmNJPY4qPwi2czV3/KP9Um3DQ5PiuqjHkL+dMIxTtwAuI
21SCUtQw6/beIcwtelg3GOZqFUrfEeq4uqey/388/SnvkJYY3Uhub1rHKvgliVCv
E0iNzFo/0IwQGHgCGaOEdjqB+QbEqV1/ubHmHK/64aiW7R6gzl3S05bpHYTqXmIz
FTUb5AkRCfgHU5Dq8rnKD18H1oyaSeLnQh3SPzeolNchTY3fgQAS2aYSrcWtvQmb
wyVeLZbf1dD9jHhwxweXqa1vKOsDGjKMKDeliLk7Vb7MSUIaczwMI9GJTWPySZU6
JEVHlXQp6JKBWfAAFk2n6qiJ8P1H6y3vVQgLYbdBSt89lZgYYVm/nHtjECsGiui0
cS75Gsgdy4uvsI1jrVlBsNn3V9qTpCGdqUF/x1y244rznb8pcO5PA9VZDkYsW3gq
AVWeAB/bEu0gtDEPpb21QPpbBrNhRyeswpuW718mwhygrySHjB5B/af9Awgm7seA
iWEUKa2P+wxPd6QeQIsCJkEhmMth0hlmpDIRzyZzK7vRslpl5mnaHMDK6QG2f8Zd
2acKw8D3iksKyFlhDuIObPdl3N6TP5YGN2NKUJThOd9KvdtUiT+ATMvfaSwzVatj
GhIsfeNs05WZuBf1sWxT
=ST3G
-----END PGP SIGNATURE-----

--=-PedganKQ7gIk9JpW4WS9--



--===============5186497624973064391==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5186497624973064391==--



> >   =20
> >     Tested on qemu only, tagged with QEMU_HARDWARE because of that.
> >     Testing with physical hardware to be done.
>=20
> That commit made the seabios image (default qemu config, gcc 4.7+) jump from
> 128k to 256k because the code didn't fit into 128k any more.
>=20
> Most likely the failure isn't related to xhci at all, but to the size
> change.
>=20
> You can try to turn off some features (hardware support) you don't need
> to make the bios image smaller.  1.7.4 also has a config option to
> explicitly set the image size you want.
>=20
> IIRC xen combines seabios and hvmloader into a single 256k image
> somehow, so it might make sense to set the seabios image size to
> something between 128k and 256k.  But better ask the xen people for
> details here.

I think this was fixed in Xen with
http://xenbits.xen.org/gitweb/?p=3Dxen.git;a=3Dcommit;h=3D5f2875739beef3a75=
c7a7e8579b6cbcb464e61b3

I suppose this should be backported to the 4.3.x stable branch.

Ian.

--=-PedganKQ7gIk9JpW4WS9
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)

iQIcBAABCgAGBQJS80mkAAoJECouJZ9pWkbGZNsP/1G8xHB/QnerldYBKHTg/O+K
JCZ1L7u3l19gYU6+mMwYmNJPY4qPwi2czV3/KP9Um3DQ5PiuqjHkL+dMIxTtwAuI
21SCUtQw6/beIcwtelg3GOZqFUrfEeq4uqey/388/SnvkJYY3Uhub1rHKvgliVCv
E0iNzFo/0IwQGHgCGaOEdjqB+QbEqV1/ubHmHK/64aiW7R6gzl3S05bpHYTqXmIz
FTUb5AkRCfgHU5Dq8rnKD18H1oyaSeLnQh3SPzeolNchTY3fgQAS2aYSrcWtvQmb
wyVeLZbf1dD9jHhwxweXqa1vKOsDGjKMKDeliLk7Vb7MSUIaczwMI9GJTWPySZU6
JEVHlXQp6JKBWfAAFk2n6qiJ8P1H6y3vVQgLYbdBSt89lZgYYVm/nHtjECsGiui0
cS75Gsgdy4uvsI1jrVlBsNn3V9qTpCGdqUF/x1y244rznb8pcO5PA9VZDkYsW3gq
AVWeAB/bEu0gtDEPpb21QPpbBrNhRyeswpuW718mwhygrySHjB5B/af9Awgm7seA
iWEUKa2P+wxPd6QeQIsCJkEhmMth0hlmpDIRzyZzK7vRslpl5mnaHMDK6QG2f8Zd
2acKw8D3iksKyFlhDuIObPdl3N6TP5YGN2NKUJThOd9KvdtUiT+ATMvfaSwzVatj
GhIsfeNs05WZuBf1sWxT
=ST3G
-----END PGP SIGNATURE-----

--=-PedganKQ7gIk9JpW4WS9--



--===============5186497624973064391==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5186497624973064391==--



From xen-devel-bounces@lists.xen.org Thu Feb 06 08:50:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 08:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBKeh-0003Sb-HO; Thu, 06 Feb 2014 08:49:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@redhat.com>) id 1WBClZ-000143-7G
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 00:24:29 +0000
Received: from [85.158.137.68:2497] by server-15.bemta-3.messagelabs.com id
	2D/6A-19263-C36D2F25; Thu, 06 Feb 2014 00:24:28 +0000
X-Env-Sender: davem@redhat.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391646266!13665259!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2152 invoked from network); 6 Feb 2014 00:24:27 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-31.messagelabs.com with SMTP;
	6 Feb 2014 00:24:27 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s160ONL2004403
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 5 Feb 2014 19:24:23 -0500
Received: from localhost (ovpn-113-99.phx2.redhat.com [10.3.113.99])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s160OMte013161; Wed, 5 Feb 2014 19:24:22 -0500
Date: Wed, 05 Feb 2014 16:24:22 -0800 (PST)
Message-Id: <20140205.162422.857987635200528036.davem@redhat.com>
To: zoltan.kiss@citrix.com
From: David Miller <davem@redhat.com>
In-Reply-To: <1391543677-24039-1-git-send-email-zoltan.kiss@citrix.com>
References: <1391543677-24039-1-git-send-email-zoltan.kiss@citrix.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
X-Mailman-Approved-At: Thu, 06 Feb 2014 08:49:54 +0000
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net v3] xen-netback: Fix Rx stall due to
	race condition
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 4 Feb 2014 19:54:37 +0000

> The recent patch to fix receive side flow control
> (11b57f90257c1d6a91cee720151b69e0c2020cf6: xen-netback: stop vif thread
> spinning if frontend is unresponsive) solved the spinning thread problem,
> but caused another one. The receive side can stall, if:
> - [THREAD] xenvif_rx_action sets rx_queue_stopped to true
> - [INTERRUPT] interrupt happens, and sets rx_event to true
> - [THREAD] then xenvif_kthread sets rx_event to false
> - [THREAD] rx_work_todo doesn't return true anymore
> 
> Also, if an interrupt is sent but there is still no room in the ring, it takes
> quite a long time until xenvif_rx_action realizes it. This patch ditches those
> two variables and reworks rx_work_todo. If the thread finds it can't fit more
> skb's into the ring, it saves the last slot estimate into rx_last_skb_slots;
> otherwise it's kept as 0. Then rx_work_todo will check if:
> - there is something to send to the ring (like before)
> - there is space for the topmost packet in the queue
> 
> I think that's a more natural and optimal thing to test than two bools which
> are set somewhere else.
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> Acked-by: Wei Liu <wei.liu2@citrix.com>

Applied, thanks.
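[Archive note: the reworked rx_work_todo check described in the commit message above can be modeled in isolation. This is a sketch under stated assumptions: vif_model, its fields, and this rx_work_todo() are made-up illustrations of the described logic, not the actual xen-netback code.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the described check: work is pending only when the rx
 * queue is non-empty AND the ring has room for the head packet, whose
 * slot estimate was saved in rx_last_skb_slots (0 means "no estimate
 * saved", so any free space qualifies). */
struct vif_model {
    int queued_packets;     /* packets waiting in the rx queue */
    int free_ring_slots;    /* free slots in the shared ring */
    int rx_last_skb_slots;  /* saved slot estimate for the head packet */
};

static bool rx_work_todo(const struct vif_model *v)
{
    if (v->queued_packets == 0)
        return false;
    return v->free_ring_slots >= v->rx_last_skb_slots;
}
```

The point of testing ring state directly, rather than the two flags the old code used, is that no interrupt/thread interleaving can leave the check permanently false while work remains.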

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 08:59:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 08:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBKnb-0003u8-8N; Thu, 06 Feb 2014 08:59:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1WBKnZ-0003u3-QF
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 08:59:06 +0000
Received: from [85.158.137.68:56738] by server-6.bemta-3.messagelabs.com id
	FE/11-09180-9DE43F25; Thu, 06 Feb 2014 08:59:05 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391677143!13755499!1
X-Originating-IP: [209.85.216.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11114 invoked from network); 6 Feb 2014 08:59:04 -0000
Received: from mail-qc0-f179.google.com (HELO mail-qc0-f179.google.com)
	(209.85.216.179)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 08:59:04 -0000
Received: by mail-qc0-f179.google.com with SMTP id e16so2608390qcx.24
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 00:59:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=zn2YUqFfIRBwVp4oUz9gcqlzq9ivY1WUJJIj4ur9FYs=;
	b=eK6KjPYYsAhfUqLDZl+MNlJFRQF8c3rKuRYYio8ozN2OR2tmdKcL2rPuCa1X1DFz92
	aK6dS1pUVpe7xwN+EndWoh+TUQXQGvX9plOF14XDjwC77SU6HR5jFDZuU912R6/CwXyY
	xhslUqm4OLPEbOO5g/FTe3b6PW+NHzxzDvBmWjXjTi6LHOHWXpCJAYjtnCundSJo7iz2
	jUi91L1xhhEVnY8eNpSU4x9+thdcRys/6+H8ZHA+emrRIq4mXUBSAS7BeMHM8buNoeus
	6TBM+lIILbUpq/b3pP2dnydNf9cmrCTNHJzvjXTtXi0eg3V6G1sVlp734BdKoHK+lTNr
	srhQ==
X-Gm-Message-State: ALoCoQms1CklmiVjK1eHYR2u5Gjp7hzSCU8yRgmotpyENrIwKFe1OH/hAkxE/KjayggVE+kMpr0A
X-Received: by 10.140.83.212 with SMTP id j78mr9928438qgd.42.1391677142941;
	Thu, 06 Feb 2014 00:59:02 -0800 (PST)
Received: from debian-vm.localdomain (cpe-72-130-147-24.hawaii.res.rr.com.
	[72.130.147.24])
	by mx.google.com with ESMTPSA id j97sm276254qge.22.2014.02.06.00.59.00
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 06 Feb 2014 00:59:02 -0800 (PST)
From: Justin Weaver <jtweaver@hawaii.edu>
To: xen-devel@lists.xen.org
Date: Wed,  5 Feb 2014 22:58:38 -1000
Message-Id: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
	esb@ics.hawaii.edu, henric@hawaii.edu
Subject: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch attempts to address the issue of the Xen Credit 2
Scheduler only creating one vCPU run queue on multiple physical
processor systems. It should be creating one run queue per
physical processor.

CPU 0 does not get a starting callback, so it is hard coded to run
queue 0. At the time this happens, socket information is not
available for CPU 0.

Socket information is available for each individual CPU when each
gets the STARTING callback (I believe socket information is also
available for CPU 0 by that time). This patch adds the following
algorithm...

IF cpu is on the same socket as CPU 0, add it to run queue 0
ELSE, IF cpu is on socket 0, add it to a run queue based on the
         socket CPU 0 is actually on
      ELSE add it to a run queue based on the socket it is on
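[Archive note: the three-way rule above can be sketched as standalone C. This is a model only: the socket_of[] topology and pick_runqueue() are made-up illustrations, not Xen's init_pcpu(); in Xen the socket comes from cpu_to_socket().]

```c
/* Example topology where CPU 0 happens to live on socket 1. */
static const int socket_of[] = { 1, 1, 0, 0, 2, 2 };

static int cpu_to_socket(int cpu) { return socket_of[cpu]; }

static int pick_runqueue(int cpu)
{
    if (cpu == 0)
        return 0;                 /* CPU 0 is hard-coded to runqueue 0 */
    int s  = cpu_to_socket(cpu);
    int s0 = cpu_to_socket(0);
    if (s == s0)
        return 0;                 /* same socket as CPU 0: share its queue */
    if (s == 0)
        return s0;                /* socket 0: take the queue CPU 0 vacated */
    return s;                     /* otherwise: one runqueue per socket */
}
```

The swap between runqueue 0 and runqueue cpu0_socket keeps the mapping a bijection, so no two sockets ever share a runqueue by accident.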
---
 xen/common/sched_credit2.c |   37 +++++++++++++++++++++++++++----------
 1 file changed, 27 insertions(+), 10 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..c0ecb50 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,8 +85,7 @@
  * to a small value, and a fixed credit is added to everyone.
  *
  * The plan is for all cores that share an L2 will share the same
- * runqueue.  At the moment, there is one global runqueue for all
- * cores.
+ * runqueue.
  */
 
 /*
@@ -1945,6 +1944,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
 static void init_pcpu(const struct scheduler *ops, int cpu)
 {
     int rqi;
+    int cpu0_socket;
+    int cpu_socket;
     unsigned long flags;
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_runqueue_data *rqd;
@@ -1962,12 +1963,28 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
     /* Figure out which runqueue to put it in */
     rqi = 0;
 
-    /* Figure out which runqueue to put it in */
     /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
     if ( cpu == 0 )
         rqi = 0;
     else
-        rqi = cpu_to_socket(cpu);
+    {
+        cpu_socket = cpu_to_socket(cpu);
+        cpu0_socket = cpu_to_socket(0);
+
+        /* If cpu is on the same socket as CPU 0, put it with CPU 0 on run queue 0 */
+        if ( cpu_socket == cpu0_socket )
+            rqi = 0;
+        else
+            /* If cpu is on socket 0, assign it to a run queue based on the
+             * socket CPU 0 is actually on */
+            if ( cpu_socket == 0 )
+                rqi = cpu0_socket;
+
+            /* If cpu is NOT on socket 0, just assign it to a run queue based on
+             * its own socket */
+            else
+                rqi = cpu_socket;
+    }
 
     if ( rqi < 0 )
     {
@@ -2010,13 +2027,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
+    /* This function is only for calling init_pcpu on CPU 0
+     * because it does not get a STARTING callback */
+
+    if ( cpu == 0 )
         init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
 
     return (void *)1;
 }
@@ -2072,6 +2087,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 static int
 csched_cpu_starting(int cpu)
 {
+    /* This function is for calling init_pcpu on every CPU except CPU 0 */
+
     struct scheduler *ops;
 
     /* Hope this is safe from cpupools switching things around. :-) */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:03:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:03:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBKrP-00045c-6n; Thu, 06 Feb 2014 09:03:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBKrM-00045R-J3
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 09:03:00 +0000
Received: from [85.158.139.211:31692] by server-15.bemta-5.messagelabs.com id
	A5/B6-24395-3CF43F25; Thu, 06 Feb 2014 09:02:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391677379!2008313!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27795 invoked from network); 6 Feb 2014 09:02:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 09:02:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 09:02:58 +0000
Message-Id: <52F35DD10200007800119A78@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 09:02:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20140122043128.GA9931@konrad-lan.dumpdata.com>
	<52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
	<20140124215652.GA18710@phenom.dumpdata.com>
	<20140205200708.GA9278@phenom.dumpdata.com>
In-Reply-To: <20140205200708.GA9278@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4 Was:Re:
 [PATCH] x86/msi: Validate the guest-identified PCI devices in
 pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 21:07, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> Anyhow, what I discovered was that the patch attached does allow me to
> boot with Xen. It is not pretty.

There were two patches attached - an ugly kernel one, and a
bogus hypervisor one. The hypervisor one is bogus because it
attempts to paper over Dom0 not fulfilling its role (here: failure
to propagate bus topology changes). That said ...

> But I was thinking to fix in the hypervisor and realized there are three
> ways of fixing it:
> 
>  1). Do the hypercall to delete/add devices and let initial domain figure
>      this out. (which the Linux attached patch does).
> 
>  2). Be more aware of the bus2bridge topology when removing a PCI bridge or
>      device. I had one bug where we ended up with this bus2bridge structure:
> 
>       6 -> 06:00.0
>       7 -> 06:00.0
>       8 -> 07:01.0
> 
> Which meant that for devices on bus 8, 7 and 6 we would never find the 
> upstream bridge. The reason is that 6 -> 06 points to itself so
> find_upstream_bridge ends up looping 255 times around and returns -1.

... I agree that removal of bridges could likely do with some
improvements: In particular, all devices behind a bridge should
be removed at the same time.
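
For illustration, the looping failure Konrad describes can be modelled in a few lines (a simplified sketch with hypothetical names, not the actual Xen source):

```c
#include <assert.h>

/* Toy model of the reported state: bus2bridge[] maps each bus number to
 * the (bus, devfn) of the bridge responsible for it.  Walking the chain
 * upstream from bus 8 reaches bus 6, whose entry points back at bus 6
 * itself (6 -> 06:00.0), so the walk cycles until the 255-iteration
 * guard trips and the lookup fails with -1. */
struct bus2bridge_entry {
    unsigned char map;          /* nonzero: a bridge is recorded for this bus */
    unsigned char bus, devfn;   /* location of that bridge */
};

static int find_upstream_bridge(const struct bus2bridge_entry *b2b,
                                unsigned char *bus, unsigned char *devfn)
{
    int cnt = 0;

    if ( !b2b[*bus].map )
        return 0;                   /* directly on the host bus */

    while ( b2b[*bus].map )
    {
        *devfn = b2b[*bus].devfn;
        *bus = b2b[*bus].bus;
        if ( ++cnt > 255 )
            return -1;              /* cycle: never reaches the root */
    }

    return 1;
}
```

With the three entries from the report (6 -> 06:00.0, 7 -> 06:00.0, 8 -> 07:01.0), a lookup starting from bus 6, 7 or 8 always ends in the -1 path.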

> I am not entirely sure I understand why we do that. In 'free_pdev' we do
> this:
> 
> 	for ( ; sec_bus <= sub_bus; sec_bus++ )
> 		pseg->bus2bridge[sec_bus] = pseg->bus2bridge[pdev->bus];
> 
> and then:
>  list_del(&pdev->alldevs_list);
>  xfree(pdev->msix);
>  xfree(pdev);
> 
> so if the device that is being deleted is the bridge - we point the secondary
> and subordinate to the deleted device.

No - we point it to the upstream bridge of the deleted one. Which
ought to properly reflect reality (in that this is what now becomes
responsible for all the buses previously covered by the bridge being
removed).
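
To make that concrete, here is a toy version (hypothetical names, not the real free_pdev()) of the quoted loop, showing that bus2bridge[pdev->bus] is the entry for the bus the bridge itself sits on, i.e. its upstream bridge:

```c
#include <assert.h>

/* When a bridge located on bus pdev_bus (secondary sec_bus, subordinate
 * sub_bus) is removed, each bus it covered is re-pointed at whatever
 * bridge is recorded for pdev_bus -- the upstream bridge, which now
 * becomes responsible for those buses. */
struct b2b { unsigned char bus, devfn; };

static void remove_bridge(struct b2b *bus2bridge, unsigned int pdev_bus,
                          unsigned int sec_bus, unsigned int sub_bus)
{
    /* same shape as the quoted loop; unsigned int avoids wrap at 255 */
    for ( ; sec_bus <= sub_bus; sec_bus++ )
        bus2bridge[sec_bus] = bus2bridge[pdev_bus];
}
```

So deleting bridge 06:00.0 (secondary 7, subordinate 8) re-points buses 7 and 8 at whatever bridge covered bus 6 beforehand.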

> But if the deleted device is the
> upstream bridge we end up leaving a stale bus2bridge context.

I don't think we do, but I also don't have too much faith in the
correctness of that code, so by way of an example I may be
convinced that there is a problem here.

> That is OK normally as 'alloc_pdev' would over-write it (if the secondary
> and subordinate did not change). But in 'assign-busses' case they change so
> we are left with a 'stale' one.
> 
> This means when the same device is added (but with a new bus value) we
> end up fixing up the secondary to subordinate busses to point to us (06).
> But '06' which used to be a secondary bus, still retains the old value.

Once again - we shouldn't talk about overwriting of previously
wrong values. The values should be kept correct, by means of
the Dom0 kernel propagating all updates it does.

> 3). Trap on PCI_SECONDARY_BUS and PCI_SUBORDINATE_BUS writes and
>     fixup the structures.
> 
>     I hadn't attempted that but that could also be done. That way Xen
>     is aware of those changes and can update its PCI structures.

That would be horrible - for the MMCONFIG case we'd need to
mark all respective bridges' config spaces read-only (and emulate
writes). We avoided that previously, and we should avoid that
here. It still all boils down to Dom0 needing to propagate correct/
complete information.

And please be clear about one point: The bus scan the hypervisor
does is unavoidable in order to set up the IOMMU for Dom0 such
that it can access certain devices (namely ones needed for console
access) _before_ it actually does its own bus scan (including the
reporting of the devices to the hypervisor). Hence Dom0 has to
take into account that the hypervisor already may have knowledge
about device -> bus assignments.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:07:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBKvq-0004FJ-10; Thu, 06 Feb 2014 09:07:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBKvo-0004FD-IB
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 09:07:36 +0000
Received: from [85.158.139.211:54226] by server-8.bemta-5.messagelabs.com id
	EA/42-05298-7D053F25; Thu, 06 Feb 2014 09:07:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391677654!2038548!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23006 invoked from network); 6 Feb 2014 09:07:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 09:07:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 09:07:34 +0000
Message-Id: <52F35EE60200007800119A84@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 09:07:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <52E7A17D020000780011784E@nat28.tlf.novell.com>
	<52F2A1E4.9030700@amd.com>
In-Reply-To: <52F2A1E4.9030700@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2 V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 21:41, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
wrote:
> On 1/28/2014 5:24 AM, Jan Beulich wrote:
>>>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> 
> wrote:
>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t 
> msr, uint64_t *val)
>>>   
>>>       *val = 0;
>>>   
>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
>> As one of the other reviewers already said - 0xC0000000 would
>> be better recognizable here.
>>
>> As to the 3 -> 0x13 change - I don't think this is conceptually
>> correct. While at present we emulate only 2 banks, this had
>> been different in the past and may become different again.
>> Hence introducing a dis-contiguity after bank 3 is undesirable.
> 
> IMHO, including the '0x13' is necessary. The reason is that 0x413, 
> 0xc0000408 and 0xc0000409
> together form the set of MC4 thresholding registers. Not including 0x13 
> in the mask would mean
> that accesses to 0x413 alone would not be handled. (which would be 
> confusing if someone new
> were to look into the mce codebase)

No - bit 4 is part of what forms the bank number. Hence it must
be masked out in the switch() expression.
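
The register layout behind this point can be checked numerically (a sketch; per the architectural layout MSR_IA32_MC0_CTL is 0x400 with four registers per bank):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_IA32_MC0_CTL 0x400

/* Architecturally, bank i's registers live at 0x400 + 4*i + reg, with
 * reg = 0..3 selecting CTL/STATUS/ADDR/MISC.  The low two bits pick the
 * register, and *all* higher offset bits -- bit 4 included -- form the
 * bank number.  Masking with (MSR_IA32_MC0_CTL | 3) folds every bank
 * onto bank 0 for the switch(); keeping bit 4 in the mask (the 0x13
 * variant) would instead make MSRs of banks whose number has bit 2 set
 * (e.g. bank 4's 0x410..0x413) miss the generic case labels. */
static unsigned int mc_bank(uint32_t msr)
{
    return (msr - MSR_IA32_MC0_CTL) >> 2;   /* bank number */
}

static unsigned int mc_reg(uint32_t msr)
{
    return msr & 3;                         /* CTL/STATUS/ADDR/MISC */
}
```

E.g. 0x413 decodes as bank 4, register 3 (MC4_MISC); the original mask folds it to 0x403, the MC0_MISC case label, while the 0x13 mask leaves it at 0x413, matching nothing.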

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:13:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBL1Z-0004dN-22; Thu, 06 Feb 2014 09:13:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WBL1X-0004dI-75
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 09:13:31 +0000
Received: from [85.158.139.211:36671] by server-12.bemta-5.messagelabs.com id
	FC/7D-15415-A3253F25; Thu, 06 Feb 2014 09:13:30 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391678009!2015918!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2465 invoked from network); 6 Feb 2014 09:13:29 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 09:13:29 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=s3OZfqk/k+WexwBcbelHggO0VVOcthg0DwH7ZnTxe7mb4y5iBrB60VoQ
	/gGbtC4Gz7wT+JmRnXIXCkXh7yAFfNVnSDaFa5E40+3QY0h5r+KB2a2+S
	HTiYgc/6UpJRMkM90QcQu173+0gLOqgKRUEtzpxlmXcXDNcfAOB9Tqf9Z
	MBdS1nNLEmC2bCxcDS9RQP0XfssDSpQBIxE9DeyxtaK3DGpsuL+7IXSkO
	Fq4Gi27Sh5zEDi/Ue8obhpp8DGIo8;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1391678009; x=1423214009;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=Ozd402+3d4smUw+jrlGgZ/FDfktOoMxB9udMGlQPGUg=;
	b=Qqv9NNsgWtPqF/Cc2GW7bIp3O7cNK7NNcpkdBJjUnwQl8jlZvg90NBo4
	F49QAvVrgc0BA1FWcDTKtox68L7lfPkmcE+OPB64nEq+rzcexid6z8Gba
	YFM/RL2zirhMK54808dNFp+un7C1uUR9ph+ddUuquXjPASVNgftsEmdeu
	fQk88yNhl5kki4DdQqPmZJ9/vaiMi8FSyo2UZOI0Cw/oWuLuOYcRNjOAV
	b8nDWqrbS/DBrrT0Gv8PrwrZvMs+V;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.95,792,1384297200"; d="scan'208";a="184797048"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate10u.abg.fsc.net with ESMTP; 06 Feb 2014 10:13:29 +0100
X-IronPort-AV: E=Sophos;i="4.95,792,1384297200"; d="scan'208";a="30963817"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.36]) ([10.172.102.36])
	by abgdate50u.abg.fsc.net with ESMTP; 06 Feb 2014 10:13:29 +0100
Message-ID: <52F35238.90806@ts.fujitsu.com>
Date: Thu, 06 Feb 2014 10:13:28 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Justin Weaver <jtweaver@hawaii.edu>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
In-Reply-To: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
Cc: Marcus.Granado@eu.citrix.com, george.dunlap@eu.citrix.com,
	dario.faggioli@citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06.02.2014 09:58, Justin Weaver wrote:
> This patch attempts to address the issue of the Xen Credit 2
> Scheduler only creating one vCPU run queue on multiple physical
> processor systems. It should be creating one run queue per
> physical processor.
>
> CPU 0 does not get a starting callback, so it is hard coded to run
> queue 0. At the time this happens, socket information is not
> available for CPU 0.
>
> Socket information is available for each individual CPU when each
> gets the STARTING callback (I believe socket information is also
> available for CPU 0 by that time). This patch adds the following
> algorithm...
>
> IF cpu is on the same socket as CPU 0, add it to run queue 0

You should check whether cpu and CPU0 are in the same cpupool.

BTW: CPU0 is allowed to be moved to another cpupool, too.


Juergen

> ELSE, IF cpu is on socket 0, add it to a run queue based on the
>           socket CPU 0 is actually on
>        ELSE add it to a run queue based on the socket it is on
> ---
>   xen/common/sched_credit2.c |   37 +++++++++++++++++++++++++++----------
>   1 file changed, 27 insertions(+), 10 deletions(-)
>
> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 4e68375..c0ecb50 100644
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -85,8 +85,7 @@
>    * to a small value, and a fixed credit is added to everyone.
>    *
>    * The plan is for all cores that share an L2 will share the same
> - * runqueue.  At the moment, there is one global runqueue for all
> - * cores.
> + * runqueue.
>    */
>
>   /*
> @@ -1945,6 +1944,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
>   static void init_pcpu(const struct scheduler *ops, int cpu)
>   {
>       int rqi;
> +    int cpu0_socket;
> +    int cpu_socket;
>       unsigned long flags;
>       struct csched_private *prv = CSCHED_PRIV(ops);
>       struct csched_runqueue_data *rqd;
> @@ -1962,12 +1963,28 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>       /* Figure out which runqueue to put it in */
>       rqi = 0;
>
> -    /* Figure out which runqueue to put it in */
>       /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
>       if ( cpu == 0 )
>           rqi = 0;
>       else
> -        rqi = cpu_to_socket(cpu);
> +    {
> +        cpu_socket = cpu_to_socket(cpu);
> +        cpu0_socket = cpu_to_socket(0);
> +
> +        /* If cpu is on the same socket as CPU 0, put it with CPU 0 on run queue 0 */
> +        if ( cpu_socket == cpu0_socket )
> +            rqi = 0;
> +        else
> +            /* If cpu is on socket 0, assign it to a run queue based on the
> +             * socket CPU 0 is actually on */
> +            if ( cpu_socket == 0 )
> +                rqi = cpu0_socket;
> +
> +            /* If cpu is NOT on socket 0, just assign it to a run queue based on
> +             * its own socket */
> +            else
> +                rqi = cpu_socket;
> +    }
>
>       if ( rqi < 0 )
>       {
> @@ -2010,13 +2027,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>   static void *
>   csched_alloc_pdata(const struct scheduler *ops, int cpu)
>   {
> -    /* Check to see if the cpu is online yet */
> -    /* Note: cpu 0 doesn't get a STARTING callback */
> -    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
> +    /* This function is only for calling init_pcpu on CPU 0
> +     * because it does not get a STARTING callback */
> +
> +    if ( cpu == 0 )
>           init_pcpu(ops, cpu);
> -    else
> -        printk("%s: cpu %d not online yet, deferring initializatgion\n",
> -               __func__, cpu);
>
>       return (void *)1;
>   }
> @@ -2072,6 +2087,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
>   static int
>   csched_cpu_starting(int cpu)
>   {
> +    // This function is for calling init_pcpu on every CPU, except for CPU 0 */
> +
>       struct scheduler *ops;
>
>       /* Hope this is safe from cpupools switching things around. :-) */
>


-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>   {
>       int rqi;
> +    int cpu0_socket;
> +    int cpu_socket;
>       unsigned long flags;
>       struct csched_private *prv = CSCHED_PRIV(ops);
>       struct csched_runqueue_data *rqd;
> @@ -1962,12 +1963,28 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>       /* Figure out which runqueue to put it in */
>       rqi = 0;
>
> -    /* Figure out which runqueue to put it in */
>       /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
>       if ( cpu == 0 )
>           rqi = 0;
>       else
> -        rqi = cpu_to_socket(cpu);
> +    {
> +        cpu_socket = cpu_to_socket(cpu);
> +        cpu0_socket = cpu_to_socket(0);
> +
> +        /* If cpu is on the same socket as CPU 0, put it with CPU 0 on run queue 0 */
> +        if ( cpu_socket == cpu0_socket )
> +            rqi = 0;
> +        else
> +            /* If cpu is on socket 0, assign it to a run queue based on the
> +             * socket CPU 0 is actually on */
> +            if ( cpu_socket == 0 )
> +                rqi = cpu0_socket;
> +
> +            /* If cpu is NOT on socket 0, just assign it to a run queue based on
> +             * its own socket */
> +            else
> +                rqi = cpu_socket;
> +    }
>
>       if ( rqi < 0 )
>       {
> @@ -2010,13 +2027,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>   static void *
>   csched_alloc_pdata(const struct scheduler *ops, int cpu)
>   {
> -    /* Check to see if the cpu is online yet */
> -    /* Note: cpu 0 doesn't get a STARTING callback */
> -    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
> +    /* This function is only for calling init_pcpu on CPU 0
> +     * because it does not get a STARTING callback */
> +
> +    if ( cpu == 0 )
>           init_pcpu(ops, cpu);
> -    else
> -        printk("%s: cpu %d not online yet, deferring initializatgion\n",
> -               __func__, cpu);
>
>       return (void *)1;
>   }
> @@ -2072,6 +2087,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
>   static int
>   csched_cpu_starting(int cpu)
>   {
> +    /* This function is for calling init_pcpu on every CPU, except for CPU 0 */
> +
>       struct scheduler *ops;
>
>       /* Hope this is safe from cpupools switching things around. :-) */
>


-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:46:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:46:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLWX-0005UE-Js; Thu, 06 Feb 2014 09:45:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBLWW-0005U9-8b
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 09:45:32 +0000
Received: from [85.158.143.35:34154] by server-3.bemta-4.messagelabs.com id
	E3/8E-11539-BB953F25; Thu, 06 Feb 2014 09:45:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391679929!3558650!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16285 invoked from network); 6 Feb 2014 09:45:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 09:45:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,792,1384300800"; d="scan'208";a="98529263"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 09:45:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	04:45:28 -0500
Message-ID: <1391679928.23098.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 09:45:28 +0000
In-Reply-To: <osstest-24729-mainreport@xen.org>
References: <osstest-24729-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24729: tolerable trouble:
 broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 12:15 +0000, xen.org wrote:
> flight 24729 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24729/
> 
> Failures :-/ but no regressions.
> 
> Tests which are failing intermittently (not blocking):
>  test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)     broken pass in 24728
>  test-amd64-amd64-xl-sedf      3 host-install(3)  broken in 24728 pass in 24729
>  test-amd64-i386-freebsd10-amd64 3 host-install(3) broken in 24728 pass in 24729

All of these were on woodlouse.

So were the host-install failures in 24728, 24727... all the way back to
24715 on Sunday, which had no such regressions (only looking at
xen-unstable flights). The first bad one looks like 24718.

Failure is
2014-02-03 03:36:54 Z FAILURE: fetch woodlouse_preseed: wait timed out: (none).

The woodlouse serial log stops having anything interesting in it around Feb  2 13:30:18.

Is woodlouse the one which periodically forgets its BIOS settings?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLeA-0005u6-Tq; Thu, 06 Feb 2014 09:53:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBLe9-0005tw-BM
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 09:53:25 +0000
Received: from [193.109.254.147:61051] by server-5.bemta-14.messagelabs.com id
	C6/04-16688-49B53F25; Thu, 06 Feb 2014 09:53:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391680401!2387052!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17511 invoked from network); 6 Feb 2014 09:53:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 09:53:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 09:53:21 +0000
Message-Id: <52F3699E0200007800119AE3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 09:53:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 17:03, Ian Campbell <ian.campbell@citrix.com> wrote:
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.end_pfn = start_pfn + nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}

I'm confused - both in the overview mail and in domctl.h below
you state the range to now be inclusive, yet neither here nor
in the hypervisor changes this seems to actually be the case
(unless the earlier "rename ..." patches now did more than just
renaming - I didn't look at them).

> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -454,7 +454,6 @@ int xc_domain_create(xc_interface *xch,
>                       uint32_t flags,
>                       uint32_t *pdomid);
>  
> -
>  /* Functions to produce a dump of a given domain
>   *  xc_domain_dumpcore - produces a dump to a specified file
>   *  xc_domain_dumpcore_via_callback - produces a dump, using a specified

Stray leftover change?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLeB-0005uK-A2; Thu, 06 Feb 2014 09:53:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBLe9-0005tx-N4
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 09:53:26 +0000
Received: from [193.109.254.147:61139] by server-15.bemta-14.messagelabs.com
	id 36/D6-10839-59B53F25; Thu, 06 Feb 2014 09:53:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391680398!2387029!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17097 invoked from network); 6 Feb 2014 09:53:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 09:53:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,792,1384300800"; d="scan'208";a="98530632"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 09:53:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	04:53:16 -0500
Message-ID: <1391680396.23098.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<stefano.stabellini@citrix.com>
Date: Thu, 6 Feb 2014 09:53:16 +0000
In-Reply-To: <osstest-24734-mainreport@xen.org>
References: <osstest-24734-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gV2VkLCAyMDE0LTAyLTA1IGF0IDE4OjU1ICswMDAwLCB4ZW4ub3JnIHdyb3RlOgo+IGZsaWdo
dCAyNDczNCBsaW51eC1hcm0teGVuIHJlYWwgW3JlYWxdCj4gaHR0cDovL3d3dy5jaGlhcmsuZ3Jl
ZW5lbmQub3JnLnVrL354ZW5zcmN0cy9sb2dzLzI0NzM0Lwo+IAo+IEZhaWx1cmVzIDotLyBidXQg
bm8gcmVncmVzc2lvbnMuCj4gCj4gVGVzdHMgd2hpY2ggZGlkIG5vdCBzdWNjZWVkLCBidXQgYXJl
IG5vdCBibG9ja2luZzoKPiAgdGVzdC1hcm1oZi1hcm1oZi14bCAgICAgICAgICAgOSBndWVzdC1z
dGFydCAgICAgICAgICAgICAgICAgIGZhaWwgICBuZXZlciBwYXNzCgpTdGVmYW5vLCBwbGVhc2Ug
Y2FuIHlvdSBjaGVycnktcGljazoKCiAgICAgICAgY29tbWl0IGUxN2IyZjExNGNiYTU0MjBmYjI4
ZmE0YmZlYWQ1N2Q0MDZhMTY1MzMKICAgICAgICBBdXRob3I6IElhbiBDYW1wYmVsbCA8aWFuLmNh
bXBiZWxsQGNpdHJpeC5jb20+CiAgICAgICAgRGF0ZTogICBNb24gSmFuIDIwIDExOjMwOjQxIDIw
MTQgKzAwMDAKICAgICAgICAKICAgICAgICAgICAgeGVuOiBzd2lvdGxiOiBoYW5kbGUgc2l6ZW9m
KGRtYV9hZGRyX3QpICE9IHNpemVvZihwaHlzX2FkZHJfdCkKICAgICAgICAgICAgCiAgICAgICAg
ICAgIFRoZSB1c2Ugb2YgcGh5c190b19tYWNoaW5lIGFuZCBtYWNoaW5lX3RvX3BoeXMgaW4gdGhl
IHBoeXM8PT5idXMgY29udmVyc2lvbgogICAgICAgICAgICBjYXVzZXMgdXMgdG8gbG9zZSB0aGUg
dG9wIGJpdHMgb2YgdGhlIERNQSBhZGRyZXNzIGlmIHRoZSBzaXplIG9mIGEgRE1BIGFkZHIKICAg
ICAgICAgICAgCiAgICAgICAgICAgIFRoaXMgY2FuIGhhcHBlbiBpbiBwcmFjdGljZSBvbiBBUk0g
d2hlcmUgZm9yZWlnbiBwYWdlcyBjYW4gYmUgYWJvdmUgNEdCIGV2ZQogICAgICAgICAgICB0aG91
Z2ggdGhlIGxvY2FsIGtlcm5lbCBkb2VzIG5vdCBoYXZlIExQQUUgcGFnZSB0YWJsZXMgZW5hYmxl
ZCAod2hpY2ggaXMKICAgICAgICAgICAgdG90YWxseSByZWFzb25hYmxlIGlmIHRoZSBndWVzdCBk
b2VzIG5vdCBpdHNlbGYgaGF2ZSA+NEdCIG9mIFJBTSkuIEluIHRoaXMKICAgICAgICAgICAgY2Fz
ZSB0aGUga2VybmVsIHN0aWxsIG1hcHMgdGhlIGZvcmVpZ24gcGFnZXMgYXQgYSBwaHlzIGFkZHIg
YmVsb3cgNEcgKGFzIGl0CiAgICAgICAgICAgIG11c3QpIGJ1dCB0aGUgcmVzdWx0aW5nIERNQSBh
ZGRyZXNzIChyZXR1cm5lZCBieSB0aGUgZ3JhbnQgbWFwIG9wZXJhdGlvbikgaQogICAgICAgICAg
ICBtdWNoIGhpZ2hlci4KICAgICAgICAgICAgCiAgICAgICAgICAgIFRoaXMgaXMgYW5hbG9nb3Vz
IHRvIGEgaGFyZHdhcmUgZGV2aWNlIHdoaWNoIGhhcyBpdHMgdmlldyBvZiBSQU0gbWFwcGVkIHVw
CiAgICAgICAgICAgIGhpZ2ggZm9yIHNvbWUgcmVhc29uLgogICAgICAgICAgICAKICAgICAgICAg
ICAgVGhpcyBwYXRjaCBtYWtlcyBJL08gdG8gZm9yZWlnbiBwYWdlcyAoc3BlY2lmaWNhbGx5IGJs
a2lmKSB3b3JrIG9uIDMyLWJpdCBBCiAgICAgICAgICAgIHN5c3RlbXMgd2l0aCBtb3JlIHRoYW4g
NEdCIG9mIFJBTS4KICAgICAgICAgICAgCiAgICAgICAgICAgIFNpZ25lZC1vZmYtYnk6IElhbiBD
YW1wYmVsbCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+CiAgICAgICAgICAgIFNpZ25lZC1vZmYt
Ynk6IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3RlZmFuby5zdGFiZWxsaW5pQGV1LmNpdHJpeC5jb20+
Cgpmcm9tIG1haW5saW5lIGludG8gdGhpcyB0cmVlLgoKVGhhbmtzLApJYW4uCgo+IAo+IHZlcnNp
b24gdGFyZ2V0ZWQgZm9yIHRlc3Rpbmc6Cj4gIGxpbnV4ICAgICAgICAgICAgICAgIDUxOGU2MjRk
ZGZhZWY1NDU0MDhjMTljMzBmZmYzMWJjNjRkNmIzNDYKPiBiYXNlbGluZSB2ZXJzaW9uOgo+ICBs
aW51eCAgICAgICAgICAgICAgICBkMjY0YmRlMDg5Y2VlYTIwNjQwZDZkNDQ3MmEwZGNhZGU5ZDJl
MTk5Cj4gCj4gLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tCj4gUGVvcGxlIHdobyB0b3VjaGVkIHJldmlzaW9ucyB1bmRlciB0ZXN0Ogo+
ICAgIkVyaWMgVy4gQmllZGVybWFuIiA8ZWJpZWRlcm1AeG1pc3Npb24uY29tPgo+ICAgQWFybyBL
b3NraW5lbiA8YWFyby5rb3NraW5lbkBpa2kuZmk+Cj4gICBBYXJvbiBCcm93biA8YWFyb24uZi5i
cm93bkBpbnRlbC5jb20+Cj4gICBBYmhpbGFzaCBLZXNhdmFuIDxhLmtlc2F2YW5Ac2Ftc3VuZy5j
b20+Cj4gICBBbGFuIENveCA8YWxhbkBsaW51eC5pbnRlbC5jb20+Cj4gICBBbGV4IERldWNoZXIg
PGFsZXhhbmRlci5kZXVjaGVyQGFtZC5jb20+Cj4gICBBbGV4YW5kZXIgTWV6aW4gPG1lemluLmFs
ZXhhbmRlckBnbWFpbC5jb20+Cj4gICBBbGV4YW5kZXIgdmFuIEhldWtlbHVtIDxoZXVrZWx1bUBm
YXN0bWFpbC5mbT4KPiAgIEFuYXRvbGlqIEd1c3RzY2hpbiA8YWd1c3RAZGVueC5kZT4KPiAgIEFu
ZHJlIFByenl3YXJhIDxhbmRyZS5wcnp5d2FyYUBsaW5hcm8ub3JnPgo+ICAgQW5kcmVhcyBSZWlz
IDxhbmRyZWFzLnJlaXNAZ21haWwuY29tPgo+ICAgQW5kcmVhcyBSb2huZXIgPGFuZHJlYXMucm9o
bmVyQGdteC5uZXQ+Cj4gICBBbmRyZXcgQnJlc3RpY2tlciA8YWJyZXN0aWNAY2hyb21pdW0ub3Jn
Pgo+ICAgQW5kcmV3IEpvbmVzIDxkcmpvbmVzQHJlZGhhdC5jb20+Cj4gICBBbmRyZXcgTW9ydG9u
IDxha3BtQGxpbnV4LWZvdW5kYXRpb24ub3JnPgo+ICAgQW5keSBMdXRvbWlyc2tpIDxsdXRvQGFt
YWNhcGl0YWwubmV0Pgo+ICAgQW50b24gQmxhbmNoYXJkIDxhbnRvbkBzYW1iYS5vcmc+Cj4gICBB
bnRvbiBWb3JvbnRzb3YgPGFudG9uQGVub21zZy5vcmc+Cj4gICBBbnRvbmlvIFF1YXJ0dWxsaSA8
YW50b25pb0BtZXNoY29kaW5nLmNvbT4KPiAgIEFyZCBCaWVzaGV1dmVsIDxhcmQuYmllc2hldXZl
bEBsaW5hcm8ub3JnPgo+ICAgQXJpZWwgRWxpb3IgPGFyaWVsZUBicm9hZGNvbS5jb20+Cj4gICBB
cnJvbiBXYW5nIDxhcnJvbi53YW5nQGludGVsLmNvbT4KPiAgIEF1c3RpbiBCb3lsZSA8Ym95bGUu
YXVzdGluQGdtYWlsLmNvbT4KPiAgIEJlbiBEb29rcyA8YmVuLmRvb2tzQGNvZGV0aGluay5jby51
az4KPiAgIEJlbiBNeWVycyA8YnBtQHNnaS5jb20+Cj4gICBCZW4gU2tlZ2dzIDxic2tlZ2dzQHJl
ZGhhdC5jb20+Cj4gICBCZW4gV2lkYXdza3kgPGJlbkBid2lkYXdzay5uZXQ+Cj4gICBCZW5qYW1p
biBIZXJyZW5zY2htaWR0IDxiZW5oQGtlcm5lbC5jcmFzaGluZy5vcmc+Cj4gICBCZXR0eSBEYWxs
IDxiZXR0eS5kYWxsQGhwLmNvbT4KPiAgIEJqb3JuIEhlbGdhYXMgPGJoZWxnYWFzQGdvb2dsZS5j
b20+Cj4gICBCasO4cm4gTW9yayA8Ympvcm5AbW9yay5ubz4KPiAgIEJvYiBQZXRlcnNvbiA8cnBl
dGVyc29AcmVkaGF0LmNvbT4KPiAgIEJvcmlzbGF2IFBldGtvdiA8YnBAc3VzZS5kZT4KPiAgIEJy
aWFuIFcgSGFydCA8aGFydGJAbGludXgudm5ldC5pYm0uY29tPgo+ICAgQnJ1Y2UgQWxsYW4gPGJy
dWNlLncuYWxsYW5AaW50ZWwuY29tPgo+ICAgQnJ5YW4gV3UgPGNvb2xvbmV5QGdtYWlsLmNvbT4K
PiAgIENhdGFsaW4gTWFyaW5hcyA8Y2F0YWxpbi5tYXJpbmFzQGFybS5jb20+Cj4gICBDaHJpc3Rp
YW4gRW5nZWxtYXllciA8Y2VuZ2VsbWFAZ214LmF0Pgo+ICAgQ2hyaXN0aWFuIEvDtm5pZyA8Y2hy
aXN0aWFuLmtvZW5pZ0BhbWQuY29tPgo+ICAgQ2hyaXN0b3BoIFBhYXNjaCA8Y2hyaXN0b3BoLnBh
YXNjaEB1Y2xvdXZhaW4uYmU+Cj4gICBDb25nIFdhbmcgPHhpeW91Lndhbmdjb25nQGdtYWlsLmNv
bT4KPiAgIEN1cnQgQnJ1bmUgPGN1cnRAY3VtdWx1c25ldHdvcmtzLmNvbT4KPiAgIEPDqWRyaWMg
TGUgR29hdGVyIDxjbGdAZnIuaWJtLmNvbT4KPiAgIERhbiBDYXJwZW50ZXIgPGRhbi5jYXJwZW50
ZXJAb3JhY2xlLmNvbT4KPiAgIERhbiBXaWxsaWFtcyA8ZGNid0ByZWRoYXQuY29tPgo+ICAgRGFu
aWVsIEJvcmttYW5uIDxkYm9ya21hbkByZWRoYXQuY29tPgo+ICAgRGFuaWVsIExlemNhbm8gPGRh
bmllbC5sZXpjYW5vQGxpbmFyby5vcmc+Cj4gICBEYW5pZWwgVmV0dGVyIDxkYW5pZWwudmV0dGVy
QGZmd2xsLmNoPgo+ICAgRGF2ZSBBaXJsaWUgPGFpcmxpZWRAcmVkaGF0LmNvbT4KPiAgIERhdmUg
RXJ0bWFuIDxkYXZpZHgubS5lcnRtYW5AaW50ZWwuY29tPgo+ICAgRGF2ZSBLbGVpa2FtcCA8ZGF2
ZS5rbGVpa2FtcEBvcmFjbGUuY29tPgo+ICAgRGF2aWQgRXJ0bWFuIDxkYXZpZHgubS5lcnRtYW5A
aW50ZWwuY29tPgo+ICAgRGF2aWQgR2lic29uIDxkYXZpZEBnaWJzb24uZHJvcGJlYXIuaWQuYXU+
Cj4gICBEYXZpZCBTLiBNaWxsZXIgPGRhdmVtQGRhdmVtbG9mdC5uZXQ+Cj4gICBEaW1pdHJpcyBN
aWNoYWlsaWRpcyA8ZG1AY2hlbHNpby5jb20+Cj4gICBEaW5nIFRpYW5ob25nIDxkaW5ndGlhbmhv
bmdAaHVhd2VpLmNvbT4KPiAgIERpcmsgQnJhbmRld2llIDxkaXJrLmouYnJhbmRld2llQGludGVs
LmNvbT4KPiAgIERtaXRyeSBLcmF2a292IDxkbWl0cnlAYnJvYWRjb20uY29tPgo+ICAgRG1pdHJ5
IFRvcm9raG92IDxkbWl0cnkudG9yb2tob3ZAZ21haWwuY29tPgo+ICAgRG9uIFNraWRtb3JlIDxk
b25hbGQuYy5za2lkbW9yZUBpbnRlbC5jb20+Cj4gICBFbW1hbnVlbCBHcnVtYmFjaCA8ZW1tYW51
ZWwuZ3J1bWJhY2hAaW50ZWwuY29tPgo+ICAgRXJpYyBEdW1hemV0IDxlZHVtYXpldEBnb29nbGUu
Y29tPgo+ICAgRXJpYyBXaGl0bmV5IDxlbndsaW51eEBnbWFpbC5jb20+Cj4gICBFcmlrIEh1Z25l
IDxlcmlrLmh1Z25lQGVyaWNzc29uLmNvbT4KPiAgIEZhYmlvIEVzdGV2YW0gPGZhYmlvLmVzdGV2
YW1AZnJlZXNjYWxlLmNvbT4KPiAgIEZhbiBEdSA8ZmFuLmR1QHdpbmRyaXZlci5jb20+Cj4gICBG
ZWxpeCBGaWV0a2F1IDxuYmRAb3BlbndydC5vcmc+Cj4gICBGbGF2aW8gTGVpdG5lciA8ZmJsQHJl
ZGhhdC5jb20+Cj4gICBGbG9yaWFuIEZhaW5lbGxpIDxmLmZhaW5lbGxpQGdtYWlsLmNvbT4KPiAg
IEZsb3JpYW4gV2VzdHBoYWwgPGZ3QHN0cmxlbi5kZT4KPiAgIEZyYW5rIExpIDxGcmFuay5MaUBm
cmVlc2NhbGUuY29tPgo+ICAgR2FvIGZlbmcgPGdhb2ZlbmdAY24uZnVqaXRzdS5jb20+Cj4gICBH
YXZpbiBTaGFuIDxzaGFuZ3dAbGludXgudm5ldC5pYm0uY29tPgo+ICAgR2VlcnQgVXl0dGVyaG9l
dmVuIDxnZWVydCtyZW5lc2FzQGxpbnV4LW02OGsub3JnPgo+ICAgR2VyaGFyZCBTaXR0aWcgPGdz
aUBkZW54LmRlPgo+ICAgR2Vycml0IFJlbmtlciA8Z2Vycml0QGVyZy5hYmRuLmFjLnVrPgo+ICAg
R3JhbnQgTGlrZWx5IDxncmFudC5saWtlbHlAbGluYXJvLm9yZz4KPiAgIEdyZWcgS3JvYWgtSGFy
dG1hbiA8Z3JlZ2toQGxpbnV4Zm91bmRhdGlvbi5vcmc+Cj4gICBHcmVnb3IgQmVjayA8Z2JlY2tA
c2VybmV0LmRlPgo+ICAgR3VlbnRlciBSb2VjayA8bGludXhAcm9lY2stdXMubmV0Pgo+ICAgR3Vz
dGF2byBQYWRvdmFuIDxndXN0YXZvLnBhZG92YW5AY29sbGFib3JhLmNvLnVrPgo+ICAgSC4gTmlr
b2xhdXMgU2NoYWxsZXIgPGhuc0Bnb2xkZWxpY28uY29tPgo+ICAgSC4gUGV0ZXIgQW52aW4gPGhw
YUBsaW51eC5pbnRlbC5jb20+Cj4gICBILiBQZXRlciBBbnZpbiA8aHBhQHp5dG9yLmNvbT4KPiAg
IEhhaXlhbmcgWmhhbmcgPGhhaXlhbmd6QG1pY3Jvc29mdC5jb20+Cj4gICBIYW5nYmluIExpdSA8
bGl1aGFuZ2JpbkBnbWFpbC5jb20+Cj4gICBIYW5uZXMgRnJlZGVyaWMgU293YSA8aGFubmVzQHN0
cmVzc2luZHVrdGlvbi5vcmc+Cj4gICBIYXJpcHJhc2FkIFNoZW5haSA8aGFyaXByYXNhZEBjaGVs
c2lvLmNvbT4KPiAgIEhlaWtvIENhcnN0ZW5zIDxoZWlrby5jYXJzdGVuc0BkZS5pYm0uY29tPgo+
ICAgSGVpa28gU3R1ZWJuZXIgPGhlaWtvQHNudGVjaC5kZT4KPiAgIEhlbGdlIERlbGxlciA8ZGVs
bGVyQGdteC5kZT4KPiAgIEhlbG11dCBTY2hhYSA8aGVsbXV0LnNjaGFhQGdvb2dsZW1haWwuY29t
Pgo+ICAgSGVyYmVydCBYdSA8aGVyYmVydEBnb25kb3IuYXBhbmEub3JnLmF1Pgo+ICAgSHVhY2Fp
IENoZW4gPGNoZW5oY0BsZW1vdGUuY29tPgo+ICAgSHVnaCBEaWNraW5zIDxodWdoZEBnb29nbGUu
Y29tPgo+ICAgSWxpYSBNaXJraW4gPGltaXJraW5AYWx1bS5taXQuZWR1Pgo+ICAgSW5nbyBNb2xu
YXIgPG1pbmdvQGtlcm5lbC5vcmc+Cj4gICBJdmFuIFZlY2VyYSA8aXZlY2VyYUByZWRoYXQuY29t
Pgo+ICAgSmFtYWwgSGFkaSBTYWxpbSA8amhzQG1vamF0YXR1LmNvbT4KPiAgIEphbWVzIEhvZ2Fu
IDxqYW1lcy5ob2dhbkBpbWd0ZWMuY29tPgo+ICAgSmFuIEthcmEgPGphY2tAc3VzZS5jej4KPiAg
IEphbiBLaXN6a2EgPGphbi5raXN6a2FAc2llbWVucy5jb20+Cj4gICBKYW5pIE5pa3VsYSA8amFu
aS5uaWt1bGFAaW50ZWwuY29tPgo+ICAgSmFzb24gQmFyb24gPGpiYXJvbkBha2FtYWkuY29tPgo+
ICAgSmFzb24gV2FuZyA8amFzb3dhbmdAcmVkaGF0LmNvbT4KPiAgIEphdmllciBMb3BleiA8amxv
cGV4QGNvenliaXQuY29tPgo+ICAgSmF5IFZvc2J1cmdoIDxmdWJhckB1cy5pYm0uY29tPgo+ICAg
SmVhbiBEZWx2YXJlIDxraGFsaUBsaW51eC1mci5vcmc+Cj4gICBKZWZmIEtpcnNoZXIgPGplZmZy
ZXkudC5raXJzaGVyQGludGVsLmNvbT4KPiAgIEplZmYgTGF5dG9uIDxqbGF5dG9uQHJlZGhhdC5j
b20+Cj4gICBKZW5zIEF4Ym9lIDxheGJvZUBrZXJuZWwuZGs+Cj4gICBKZXNwZXIgRGFuZ2FhcmQg
QnJvdWVyIDxicm91ZXJAcmVkaGF0LmNvbT4KPiAgIEplc3NlIEJhcm5lcyA8amJhcm5lc0B2aXJ0
dW91c2dlZWsub3JnPgo+ICAgSmlhbmcgTGl1IDxqaWFuZy5saXVAbGludXguaW50ZWwuY29tPgo+
ICAgSmllIExpdSA8amVmZi5saXVAb3JhY2xlLmNvbT4KPiAgIEppcmkgUGlya28gPGppcmlAcmVz
bnVsbGkudXM+Cj4gICBKaXRlbmRyYSBLYWxzYXJpYSA8aml0ZW5kcmEua2Fsc2FyaWFAcWxvZ2lj
LmNvbT4KPiAgIEpvaGFuIEhlZGJlcmcgPGpvaGFuLmhlZGJlcmdAaW50ZWwuY29tPgo+ICAgSm9o
YW5uZXMgQmVyZyA8am9oYW5uZXMuYmVyZ0BpbnRlbC5jb20+Cj4gICBKb2huIENyaXNwaW4gPGJs
b2dpY0BvcGVud3J0Lm9yZz4KPiAgIEpvaG4gRGF2aWQgQW5nbGluIDxkYXZlLmFuZ2xpbkBiZWxs
Lm5ldD4KPiAgIEpvaG4gRmFzdGFiZW5kIDxqb2huLnIuZmFzdGFiZW5kQGludGVsLmNvbS5jb20+
Cj4gICBKb2huIEZhc3RhYmVuZCA8am9obi5yLmZhc3RhYmVuZEBpbnRlbC5jb20+Cj4gICBKb2hu
IFN0dWx0eiA8am9obi5zdHVsdHpAbGluYXJvLm9yZz4KPiAgIEpvaG4gVy4gTGludmlsbGUgPGxp

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLeA-0005u6-Tq; Thu, 06 Feb 2014 09:53:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBLe9-0005tw-BM
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 09:53:25 +0000
Received: from [193.109.254.147:61051] by server-5.bemta-14.messagelabs.com id
	C6/04-16688-49B53F25; Thu, 06 Feb 2014 09:53:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391680401!2387052!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17511 invoked from network); 6 Feb 2014 09:53:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 09:53:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 09:53:21 +0000
Message-Id: <52F3699E0200007800119AE3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 09:53:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 17:03, Ian Campbell <ian.campbell@citrix.com> wrote:
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.end_pfn = start_pfn + nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}

I'm confused - both in the overview mail and in domctl.h below
you state that the range is now inclusive, yet neither here nor
in the hypervisor changes does this actually seem to be the case
(unless the earlier "rename ..." patches did more than just
renaming - I didn't look at them).

> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -454,7 +454,6 @@ int xc_domain_create(xc_interface *xch,
>                       uint32_t flags,
>                       uint32_t *pdomid);
>  
> -
>  /* Functions to produce a dump of a given domain
>   *  xc_domain_dumpcore - produces a dump to a specified file
>   *  xc_domain_dumpcore_via_callback - produces a dump, using a specified

Stray leftover change?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLeB-0005uK-A2; Thu, 06 Feb 2014 09:53:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBLe9-0005tx-N4
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 09:53:26 +0000
Received: from [193.109.254.147:61139] by server-15.bemta-14.messagelabs.com
	id 36/D6-10839-59B53F25; Thu, 06 Feb 2014 09:53:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391680398!2387029!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17097 invoked from network); 6 Feb 2014 09:53:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 09:53:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,792,1384300800"; d="scan'208";a="98530632"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 09:53:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	04:53:16 -0500
Message-ID: <1391680396.23098.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<stefano.stabellini@citrix.com>
Date: Thu, 6 Feb 2014 09:53:16 +0000
In-Reply-To: <osstest-24734-mainreport@xen.org>
References: <osstest-24734-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 18:55 +0000, xen.org wrote:
> flight 24734 linux-arm-xen real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24734/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed, but are not blocking:
>  test-armhf-armhf-xl           9 guest-start                  fail   never pass

Stefano, please can you cherry-pick:

        commit e17b2f114cba5420fb28fa4bfead57d406a16533
        Author: Ian Campbell <ian.campbell@citrix.com>
        Date:   Mon Jan 20 11:30:41 2014 +0000

            xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t)

            The use of phys_to_machine and machine_to_phys in the phys<=>bus conversion
            causes us to lose the top bits of the DMA address if the size of a DMA addr

            This can happen in practice on ARM where foreign pages can be above 4GB eve
            though the local kernel does not have LPAE page tables enabled (which is
            totally reasonable if the guest does not itself have >4GB of RAM). In this
            case the kernel still maps the foreign pages at a phys addr below 4G (as it
            must) but the resulting DMA address (returned by the grant map operation) i
            much higher.

            This is analogous to a hardware device which has its view of RAM mapped up
            high for some reason.

            This patch makes I/O to foreign pages (specifically blkif) work on 32-bit A
            systems with more than 4GB of RAM.

            Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
            Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

from mainline into this tree.

Thanks,
Ian.

> 
> version targeted for testing:
>  linux                518e624d
ZGZhZWY1NDU0MDhjMTljMzBmZmYzMWJjNjRkNmIzNDYKPiBiYXNlbGluZSB2ZXJzaW9uOgo+ICBs
aW51eCAgICAgICAgICAgICAgICBkMjY0YmRlMDg5Y2VlYTIwNjQwZDZkNDQ3MmEwZGNhZGU5ZDJl
MTk5Cj4gCj4gLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tCj4gUGVvcGxlIHdobyB0b3VjaGVkIHJldmlzaW9ucyB1bmRlciB0ZXN0Ogo+
ICAgIkVyaWMgVy4gQmllZGVybWFuIiA8ZWJpZWRlcm1AeG1pc3Npb24uY29tPgo+ICAgQWFybyBL
b3NraW5lbiA8YWFyby5rb3NraW5lbkBpa2kuZmk+Cj4gICBBYXJvbiBCcm93biA8YWFyb24uZi5i
cm93bkBpbnRlbC5jb20+Cj4gICBBYmhpbGFzaCBLZXNhdmFuIDxhLmtlc2F2YW5Ac2Ftc3VuZy5j
b20+Cj4gICBBbGFuIENveCA8YWxhbkBsaW51eC5pbnRlbC5jb20+Cj4gICBBbGV4IERldWNoZXIg
PGFsZXhhbmRlci5kZXVjaGVyQGFtZC5jb20+Cj4gICBBbGV4YW5kZXIgTWV6aW4gPG1lemluLmFs
ZXhhbmRlckBnbWFpbC5jb20+Cj4gICBBbGV4YW5kZXIgdmFuIEhldWtlbHVtIDxoZXVrZWx1bUBm
YXN0bWFpbC5mbT4KPiAgIEFuYXRvbGlqIEd1c3RzY2hpbiA8YWd1c3RAZGVueC5kZT4KPiAgIEFu
ZHJlIFByenl3YXJhIDxhbmRyZS5wcnp5d2FyYUBsaW5hcm8ub3JnPgo+ICAgQW5kcmVhcyBSZWlz
IDxhbmRyZWFzLnJlaXNAZ21haWwuY29tPgo+ICAgQW5kcmVhcyBSb2huZXIgPGFuZHJlYXMucm9o
bmVyQGdteC5uZXQ+Cj4gICBBbmRyZXcgQnJlc3RpY2tlciA8YWJyZXN0aWNAY2hyb21pdW0ub3Jn
Pgo+ICAgQW5kcmV3IEpvbmVzIDxkcmpvbmVzQHJlZGhhdC5jb20+Cj4gICBBbmRyZXcgTW9ydG9u
IDxha3BtQGxpbnV4LWZvdW5kYXRpb24ub3JnPgo+ICAgQW5keSBMdXRvbWlyc2tpIDxsdXRvQGFt
YWNhcGl0YWwubmV0Pgo+ICAgQW50b24gQmxhbmNoYXJkIDxhbnRvbkBzYW1iYS5vcmc+Cj4gICBB
bnRvbiBWb3JvbnRzb3YgPGFudG9uQGVub21zZy5vcmc+Cj4gICBBbnRvbmlvIFF1YXJ0dWxsaSA8
YW50b25pb0BtZXNoY29kaW5nLmNvbT4KPiAgIEFyZCBCaWVzaGV1dmVsIDxhcmQuYmllc2hldXZl
bEBsaW5hcm8ub3JnPgo+ICAgQXJpZWwgRWxpb3IgPGFyaWVsZUBicm9hZGNvbS5jb20+Cj4gICBB
cnJvbiBXYW5nIDxhcnJvbi53YW5nQGludGVsLmNvbT4KPiAgIEF1c3RpbiBCb3lsZSA8Ym95bGUu
YXVzdGluQGdtYWlsLmNvbT4KPiAgIEJlbiBEb29rcyA8YmVuLmRvb2tzQGNvZGV0aGluay5jby51
az4KPiAgIEJlbiBNeWVycyA8YnBtQHNnaS5jb20+Cj4gICBCZW4gU2tlZ2dzIDxic2tlZ2dzQHJl
ZGhhdC5jb20+Cj4gICBCZW4gV2lkYXdza3kgPGJlbkBid2lkYXdzay5uZXQ+Cj4gICBCZW5qYW1p
biBIZXJyZW5zY2htaWR0IDxiZW5oQGtlcm5lbC5jcmFzaGluZy5vcmc+Cj4gICBCZXR0eSBEYWxs
IDxiZXR0eS5kYWxsQGhwLmNvbT4KPiAgIEJqb3JuIEhlbGdhYXMgPGJoZWxnYWFzQGdvb2dsZS5j
b20+Cj4gICBCasO4cm4gTW9yayA8Ympvcm5AbW9yay5ubz4KPiAgIEJvYiBQZXRlcnNvbiA8cnBl
dGVyc29AcmVkaGF0LmNvbT4KPiAgIEJvcmlzbGF2IFBldGtvdiA8YnBAc3VzZS5kZT4KPiAgIEJy
aWFuIFcgSGFydCA8aGFydGJAbGludXgudm5ldC5pYm0uY29tPgo+ICAgQnJ1Y2UgQWxsYW4gPGJy
dWNlLncuYWxsYW5AaW50ZWwuY29tPgo+ICAgQnJ5YW4gV3UgPGNvb2xvbmV5QGdtYWlsLmNvbT4K
PiAgIENhdGFsaW4gTWFyaW5hcyA8Y2F0YWxpbi5tYXJpbmFzQGFybS5jb20+Cj4gICBDaHJpc3Rp
YW4gRW5nZWxtYXllciA8Y2VuZ2VsbWFAZ214LmF0Pgo+ICAgQ2hyaXN0aWFuIEvDtm5pZyA8Y2hy
aXN0aWFuLmtvZW5pZ0BhbWQuY29tPgo+ICAgQ2hyaXN0b3BoIFBhYXNjaCA8Y2hyaXN0b3BoLnBh
YXNjaEB1Y2xvdXZhaW4uYmU+Cj4gICBDb25nIFdhbmcgPHhpeW91Lndhbmdjb25nQGdtYWlsLmNv
bT4KPiAgIEN1cnQgQnJ1bmUgPGN1cnRAY3VtdWx1c25ldHdvcmtzLmNvbT4KPiAgIEPDqWRyaWMg
TGUgR29hdGVyIDxjbGdAZnIuaWJtLmNvbT4KPiAgIERhbiBDYXJwZW50ZXIgPGRhbi5jYXJwZW50
ZXJAb3JhY2xlLmNvbT4KPiAgIERhbiBXaWxsaWFtcyA8ZGNid0ByZWRoYXQuY29tPgo+ICAgRGFu
aWVsIEJvcmttYW5uIDxkYm9ya21hbkByZWRoYXQuY29tPgo+ICAgRGFuaWVsIExlemNhbm8gPGRh
bmllbC5sZXpjYW5vQGxpbmFyby5vcmc+Cj4gICBEYW5pZWwgVmV0dGVyIDxkYW5pZWwudmV0dGVy
QGZmd2xsLmNoPgo+ICAgRGF2ZSBBaXJsaWUgPGFpcmxpZWRAcmVkaGF0LmNvbT4KPiAgIERhdmUg
RXJ0bWFuIDxkYXZpZHgubS5lcnRtYW5AaW50ZWwuY29tPgo+ICAgRGF2ZSBLbGVpa2FtcCA8ZGF2
ZS5rbGVpa2FtcEBvcmFjbGUuY29tPgo+ICAgRGF2aWQgRXJ0bWFuIDxkYXZpZHgubS5lcnRtYW5A
aW50ZWwuY29tPgo+ICAgRGF2aWQgR2lic29uIDxkYXZpZEBnaWJzb24uZHJvcGJlYXIuaWQuYXU+
Cj4gICBEYXZpZCBTLiBNaWxsZXIgPGRhdmVtQGRhdmVtbG9mdC5uZXQ+Cj4gICBEaW1pdHJpcyBN
aWNoYWlsaWRpcyA8ZG1AY2hlbHNpby5jb20+Cj4gICBEaW5nIFRpYW5ob25nIDxkaW5ndGlhbmhv
bmdAaHVhd2VpLmNvbT4KPiAgIERpcmsgQnJhbmRld2llIDxkaXJrLmouYnJhbmRld2llQGludGVs
LmNvbT4KPiAgIERtaXRyeSBLcmF2a292IDxkbWl0cnlAYnJvYWRjb20uY29tPgo+ICAgRG1pdHJ5
IFRvcm9raG92IDxkbWl0cnkudG9yb2tob3ZAZ21haWwuY29tPgo+ICAgRG9uIFNraWRtb3JlIDxk
b25hbGQuYy5za2lkbW9yZUBpbnRlbC5jb20+Cj4gICBFbW1hbnVlbCBHcnVtYmFjaCA8ZW1tYW51
ZWwuZ3J1bWJhY2hAaW50ZWwuY29tPgo+ICAgRXJpYyBEdW1hemV0IDxlZHVtYXpldEBnb29nbGUu
Y29tPgo+ICAgRXJpYyBXaGl0bmV5IDxlbndsaW51eEBnbWFpbC5jb20+Cj4gICBFcmlrIEh1Z25l
IDxlcmlrLmh1Z25lQGVyaWNzc29uLmNvbT4KPiAgIEZhYmlvIEVzdGV2YW0gPGZhYmlvLmVzdGV2
YW1AZnJlZXNjYWxlLmNvbT4KPiAgIEZhbiBEdSA8ZmFuLmR1QHdpbmRyaXZlci5jb20+Cj4gICBG
ZWxpeCBGaWV0a2F1IDxuYmRAb3BlbndydC5vcmc+Cj4gICBGbGF2aW8gTGVpdG5lciA8ZmJsQHJl
ZGhhdC5jb20+Cj4gICBGbG9yaWFuIEZhaW5lbGxpIDxmLmZhaW5lbGxpQGdtYWlsLmNvbT4KPiAg
IEZsb3JpYW4gV2VzdHBoYWwgPGZ3QHN0cmxlbi5kZT4KPiAgIEZyYW5rIExpIDxGcmFuay5MaUBm
cmVlc2NhbGUuY29tPgo+ICAgR2FvIGZlbmcgPGdhb2ZlbmdAY24uZnVqaXRzdS5jb20+Cj4gICBH
YXZpbiBTaGFuIDxzaGFuZ3dAbGludXgudm5ldC5pYm0uY29tPgo+ICAgR2VlcnQgVXl0dGVyaG9l
dmVuIDxnZWVydCtyZW5lc2FzQGxpbnV4LW02OGsub3JnPgo+ICAgR2VyaGFyZCBTaXR0aWcgPGdz
aUBkZW54LmRlPgo+ICAgR2Vycml0IFJlbmtlciA8Z2Vycml0QGVyZy5hYmRuLmFjLnVrPgo+ICAg
R3JhbnQgTGlrZWx5IDxncmFudC5saWtlbHlAbGluYXJvLm9yZz4KPiAgIEdyZWcgS3JvYWgtSGFy
dG1hbiA8Z3JlZ2toQGxpbnV4Zm91bmRhdGlvbi5vcmc+Cj4gICBHcmVnb3IgQmVjayA8Z2JlY2tA
c2VybmV0LmRlPgo+ICAgR3VlbnRlciBSb2VjayA8bGludXhAcm9lY2stdXMubmV0Pgo+ICAgR3Vz
dGF2byBQYWRvdmFuIDxndXN0YXZvLnBhZG92YW5AY29sbGFib3JhLmNvLnVrPgo+ICAgSC4gTmlr
b2xhdXMgU2NoYWxsZXIgPGhuc0Bnb2xkZWxpY28uY29tPgo+ICAgSC4gUGV0ZXIgQW52aW4gPGhw
YUBsaW51eC5pbnRlbC5jb20+Cj4gICBILiBQZXRlciBBbnZpbiA8aHBhQHp5dG9yLmNvbT4KPiAg
IEhhaXlhbmcgWmhhbmcgPGhhaXlhbmd6QG1pY3Jvc29mdC5jb20+Cj4gICBIYW5nYmluIExpdSA8
bGl1aGFuZ2JpbkBnbWFpbC5jb20+Cj4gICBIYW5uZXMgRnJlZGVyaWMgU293YSA8aGFubmVzQHN0
cmVzc2luZHVrdGlvbi5vcmc+Cj4gICBIYXJpcHJhc2FkIFNoZW5haSA8aGFyaXByYXNhZEBjaGVs
c2lvLmNvbT4KPiAgIEhlaWtvIENhcnN0ZW5zIDxoZWlrby5jYXJzdGVuc0BkZS5pYm0uY29tPgo+
ICAgSGVpa28gU3R1ZWJuZXIgPGhlaWtvQHNudGVjaC5kZT4KPiAgIEhlbGdlIERlbGxlciA8ZGVs
bGVyQGdteC5kZT4KPiAgIEhlbG11dCBTY2hhYSA8aGVsbXV0LnNjaGFhQGdvb2dsZW1haWwuY29t
Pgo+ICAgSGVyYmVydCBYdSA8aGVyYmVydEBnb25kb3IuYXBhbmEub3JnLmF1Pgo+ICAgSHVhY2Fp
IENoZW4gPGNoZW5oY0BsZW1vdGUuY29tPgo+ICAgSHVnaCBEaWNraW5zIDxodWdoZEBnb29nbGUu
Y29tPgo+ICAgSWxpYSBNaXJraW4gPGltaXJraW5AYWx1bS5taXQuZWR1Pgo+ICAgSW5nbyBNb2xu
YXIgPG1pbmdvQGtlcm5lbC5vcmc+Cj4gICBJdmFuIFZlY2VyYSA8aXZlY2VyYUByZWRoYXQuY29t
Pgo+ICAgSmFtYWwgSGFkaSBTYWxpbSA8amhzQG1vamF0YXR1LmNvbT4KPiAgIEphbWVzIEhvZ2Fu
IDxqYW1lcy5ob2dhbkBpbWd0ZWMuY29tPgo+ICAgSmFuIEthcmEgPGphY2tAc3VzZS5jej4KPiAg
IEphbiBLaXN6a2EgPGphbi5raXN6a2FAc2llbWVucy5jb20+Cj4gICBKYW5pIE5pa3VsYSA8amFu
aS5uaWt1bGFAaW50ZWwuY29tPgo+ICAgSmFzb24gQmFyb24gPGpiYXJvbkBha2FtYWkuY29tPgo+
ICAgSmFzb24gV2FuZyA8amFzb3dhbmdAcmVkaGF0LmNvbT4KPiAgIEphdmllciBMb3BleiA8amxv
cGV4QGNvenliaXQuY29tPgo+ICAgSmF5IFZvc2J1cmdoIDxmdWJhckB1cy5pYm0uY29tPgo+ICAg
SmVhbiBEZWx2YXJlIDxraGFsaUBsaW51eC1mci5vcmc+Cj4gICBKZWZmIEtpcnNoZXIgPGplZmZy
ZXkudC5raXJzaGVyQGludGVsLmNvbT4KPiAgIEplZmYgTGF5dG9uIDxqbGF5dG9uQHJlZGhhdC5j
b20+Cj4gICBKZW5zIEF4Ym9lIDxheGJvZUBrZXJuZWwuZGs+Cj4gICBKZXNwZXIgRGFuZ2FhcmQg
QnJvdWVyIDxicm91ZXJAcmVkaGF0LmNvbT4KPiAgIEplc3NlIEJhcm5lcyA8amJhcm5lc0B2aXJ0
dW91c2dlZWsub3JnPgo+ICAgSmlhbmcgTGl1IDxqaWFuZy5saXVAbGludXguaW50ZWwuY29tPgo+
ICAgSmllIExpdSA8amVmZi5saXVAb3JhY2xlLmNvbT4KPiAgIEppcmkgUGlya28gPGppcmlAcmVz
bnVsbGkudXM+Cj4gICBKaXRlbmRyYSBLYWxzYXJpYSA8aml0ZW5kcmEua2Fsc2FyaWFAcWxvZ2lj
LmNvbT4KPiAgIEpvaGFuIEhlZGJlcmcgPGpvaGFuLmhlZGJlcmdAaW50ZWwuY29tPgo+ICAgSm9o
YW5uZXMgQmVyZyA8am9oYW5uZXMuYmVyZ0BpbnRlbC5jb20+Cj4gICBKb2huIENyaXNwaW4gPGJs
b2dpY0BvcGVud3J0Lm9yZz4KPiAgIEpvaG4gRGF2aWQgQW5nbGluIDxkYXZlLmFuZ2xpbkBiZWxs
Lm5ldD4KPiAgIEpvaG4gRmFzdGFiZW5kIDxqb2huLnIuZmFzdGFiZW5kQGludGVsLmNvbS5jb20+
Cj4gICBKb2huIEZhc3RhYmVuZCA8am9obi5yLmZhc3RhYmVuZEBpbnRlbC5jb20+Cj4gICBKb2hu
IFN0dWx0eiA8am9obi5zdHVsdHpAbGluYXJvLm9yZz4KPiAgIEpvaG4gVy4gTGludmlsbGUgPGxp
bnZpbGxlQHR1eGRyaXZlci5jb20+Cj4gICBKb24gTWFsb3kgPGpvbi5tYWxveUBlcmljc3Nvbi5j
b20+Cj4gICBKb25naHdhIExlZSA8am9uZ2h3YTMubGVlQHNhbXN1bmcuY29tPgo+ICAgSm9zaCBC
b3llciA8andib3llckBmZWRvcmFwcm9qZWN0Lm9yZz4KPiAgIEp1bGlhbiBBbmFzdGFzb3YgPGph
QHNzaS5iZz4KPiAgIEtlbGx5IERvcmFuIDxrZWwucC5kb3JhbkBnbWFpbC5jb20+Cj4gICBLaXJp
bGwgVGtoYWkgPHRraGFpQHlhbmRleC5ydT4KPiAgIEtyenlzenRvZiBIYcWCYXNhIDxraGFsYXNh
QHBpYXAucGw+Cj4gICBLcnp5c3p0b2YgS296bG93c2tpIDxrLmtvemxvd3NraUBzYW1zdW5nLmNv
bT4KPiAgIEt1bWFyIFNhbmdodmkgPGt1bWFyYXNAY2hlbHNpby5jb20+Cj4gICBMYW4gVGlhbnl1
IDx0aWFueXUubGFuQGludGVsLmNvbT4KPiAgIExhcnJ5IEZpbmdlciA8TGFycnkuRmluZ2VyQGx3
ZmluZ2VyLm5ldD4KPiAgIExhdXJhIEFiYm90dCA8bGF1cmFhQGNvZGVhdXJvcmEub3JnPgo+ICAg
TGF1cmVudCBQaW5jaGFydCA8bGF1cmVudC5waW5jaGFydCtyZW5lc2FzQGlkZWFzb25ib2FyZC5j
b20+Cj4gICBMZWlnaCBCcm93biA8bGVpZ2hAc29saW5uby5jby51az4KPiAgIExpIFJvbmdRaW5n
IDxyb3kucWluZy5saUBnbWFpbC5jb20+Cj4gICBMaW51cyBUb3J2YWxkcyA8dG9ydmFsZHNAbGlu
dXgtZm91bmRhdGlvbi5vcmc+Cj4gICBMaXUsIENodWFuc2hlbmcgPGNodWFuc2hlbmcubGl1QGlu
dGVsLmNvbT4KPiAgIE1hbmlzaCBDaG9wcmEgPG1hbmlzaC5jaG9wcmFAcWxvZ2ljLmNvbT4KPiAg
IE1hcmNlbCBIb2x0bWFubiA8bWFyY2VsQGhvbHRtYW5uLm9yZz4KPiAgIE1hcmNlbG8gVG9zYXR0
aSA8bXRvc2F0dGlAcmVkaGF0LmNvbT4KPiAgIE1hcmNvIFBpYXp6YSA8bXBpYXp6YUBnbWFpbC5j
b20+Cj4gICBNYXJlayBMaW5kbmVyIDxtYXJla2xpbmRuZXJAbmVvbWFpbGJveC5jaD4KPiAgIE1h
cmVrIE9sxaHDoWsgPG1hcmVrLm9sc2FrQGFtZC5jb20+Cj4gICBNYXJrIFJ1dGxhbmQgPG1hcmsu
cnV0bGFuZEBhcm0uY29tPgo+ICAgTWFydGluIFNjaHdpZGVmc2t5IDxzY2h3aWRlZnNreUBkZS5p
Ym0uY29tPgo+ICAgTWF0aHkgVmFuaG9lZiA8dmFuaG9lZm1AZ21haWwuY29tPgo+ICAgTWF0dGVv
IEZhY2NoaW5ldHRpIDxtYXR0ZW8uZmFjY2hpbmV0dGlAc2lyaXVzLWVzLml0Pgo+ICAgTWVsIEdv
cm1hbiA8bWdvcm1hbkBzdXNlLmRlPgo+ICAgTWljaGFlbCBDaGFuIDxtY2hhbkBicm9hZGNvbS5j
b20+Cj4gICBNaWNoYWVsIE5ldWxpbmcgPG1pa2V5QG5ldWxpbmcub3JnPgo+ICAgTWljaGFlbCBT
LiBUc2lya2luIDxtc3RAcmVkaGF0LmNvbT4KPiAgIE1pY2hhbCBIb2NrbyA8bWhvY2tvQHN1c2Uu
Y3o+Cj4gICBNaWNoYWwgS2FsZGVyb24gPG1pY2hhbHNAYnJvYWRjb20uY29tPgo+ICAgTWljaGFs
IFNjaG1pZHQgPG1zY2htaWR0QHJlZGhhdC5jb20+Cj4gICBNaWNoYWwgU2ltZWsgPG1pY2hhbC5z
aW1la0B4aWxpbnguY29tPgo+ICAgTWlrYSBXZXN0ZXJiZXJnIDxtaWthLndlc3RlcmJlcmdAbGlu
dXguaW50ZWwuY29tPgo+ICAgTWlrZSBUdXJxdWV0dGUgPG10dXJxdWV0dGVAbGluYXJvLm9yZz4K
PiAgIE1pa3VsYXMgUGF0b2NrYSA8bXBhdG9ja2FAcmVkaGF0LmNvbT4KPiAgIE1pbG8gS2ltIDxt
aWxvLmtpbUB0aS5jb20+Cj4gICBNaW5nIExlaSA8bWluZy5sZWlAY2Fub25pY2FsLmNvbT4KPiAg
IE1pbmcgTGVpIDx0b20ubGVpbWluZ0BnbWFpbC5jb20+Cj4gICBNdWd1bnRoYW4gViBOIDxtdWd1
bnRoYW52bm1AdGkuY29tPgo+ICAgTmFveWEgSG9yaWd1Y2hpIDxuLWhvcmlndWNoaUBhaC5qcC5u
ZWMuY29tPgo+ICAgTmVhbCBDYXJkd2VsbCA8bmNhcmR3ZWxsQGdvb2dsZS5jb20+Cj4gICBOZWls
IEhvcm1hbiA8bmhvcm1hbkB0dXhkcml2ZXIuY29tPgo+ICAgTmVpbEJyb3duIDxuZWlsYkBzdXNl
LmRlPgo+ICAgTmljb2xhcyBTY2hpY2hhbiA8bnNjaGljaGFuQGZyZWVib3guZnI+Cj4gICBOaXRo
aW4gTmF5YWsgU3VqaXIgPG5zdWppckBicm9hZGNvbS5jb20+Cj4gICBOb2J1aGlybyBJd2FtYXRz
dSA8bm9idWhpcm8uaXdhbWF0c3UueWpAcmVuZXNhcy5jb20+Cj4gICBPY3RhdmlhbiBQdXJkaWxh
IDxvY3Rhdmlhbi5wdXJkaWxhQGludGVsLmNvbT4KPiAgIE9sZWcgTmVzdGVyb3YgPG9sZWdAcmVk
aGF0LmNvbT4KPiAgIE9sb2YgSm9oYW5zc29uIDxvbG9mQGxpeG9tLm5ldD4KPiAgIE9yZW4gR2l2
b24gPG9yZW4uZ2l2b25AaW50ZWwuY29tPgo+ICAgUGFibG8gTmVpcmEgQXl1c28gPHBhYmxvQG5l
dGZpbHRlci5vcmc+Cj4gICBQYW9sbyBCb256aW5pIDxwYm9uemluaUByZWRoYXQuY29tPgo+ICAg
UGF1bCBEdXJyYW50IDxwYXVsLmR1cnJhbnRAY2l0cml4LmNvbT4KPiAgIFBhdWwgRS4gTWNLZW5u
ZXkgPHBhdWxtY2tAbGludXgudm5ldC5pYm0uY29tPgo+ICAgUGF1bG8gWmFub25pIDxwYXVsby5y
Lnphbm9uaUBpbnRlbC5jb20+Cj4gICBQZWtrYSBFbmJlcmcgPHBlbmJlcmdAa2VybmVsLm9yZz4K
PiAgIFBldGVyIEtvcnNnYWFyZCA8cGV0ZXJAa29yc2dhYXJkLmNvbT4KPiAgIFBldGVyIFppamxz
dHJhIDxwZXRlcnpAaW5mcmFkZWFkLm9yZz4KPiAgIFBoaWwgU2NobWl0dCA8cGhpbGxpcC5qLnNj
aG1pdHRAaW50ZWwuY29tPgo+ICAgUWFpcyBZb3VzZWYgPHFhaXMueW91c2VmQGltZ3RlYy5jb20+
Cj4gICBRaW5nc2h1YWkgVGlhbiA8cWluZ3NodWFpLnRpYW5AaW50ZWwuY29tPgo+ICAgUmFmYWVs
IEouIFd5c29ja2kgPHJhZmFlbC5qLnd5c29ja2lAaW50ZWwuY29tPgo+ICAgUmFmYcWCIE1pxYJl
Y2tpIDx6YWplYzVAZ21haWwuY29tPgo+ICAgUmFqZXNoIEIgUHJhdGhpcGF0aSA8cnByYXRoaXBA
bGludXgudm5ldC5pYm0uY29tPgo+ICAgUmljaGFyZCBDb2NocmFuIDxyaWNoYXJkY29jaHJhbkBn
bWFpbC5jb20+Cj4gICBSaWNoYXJkIFdlaW5iZXJnZXIgPHJpY2hhcmRAbm9kLmF0Pgo+ICAgUmlr
IHZhbiBSaWVsIDxyaWVsQHJlZGhhdC5jb20+Cj4gICBSb2IgSGVycmluZyA8cm9iLmhlcnJpbmdA
Y2FseGVkYS5jb20+Cj4gICBSb2IgSGVycmluZyA8cm9iaEBrZXJuZWwub3JnPgo+ICAgUm9iZXJ0
IFJpY2h0ZXIgPHJyaWNAa2VybmVsLm9yZz4KPiAgIFJ1c3NlbGwgS2luZyA8cm1rK2tlcm5lbEBh
cm0ubGludXgub3JnLnVrPgo+ICAgUnl1c3VrZSBLb25pc2hpIDxrb25pc2hpLnJ5dXN1a2VAbGFi
Lm50dC5jby5qcD4KPiAgIFNhY2hpbiBLYW1hdCA8c2FjaGluLmthbWF0QGxpbmFyby5vcmc+Cj4g
ICBTYWNoaW4gUHJhYmh1IDxzcHJhYmh1QHJlZGhhdC5jb20+Cj4gICBTYWx2YSBQZWlyw7MgPHNw
ZWlyb0BhaTIudXB2LmVzPgo+ICAgU2FtdWVsIE9ydGl6IDxzYW1lb0BsaW51eC5pbnRlbC5jb20+
Cj4gICBTYW50b3NoIFNoaWxpbWthciA8c2FudG9zaC5zaGlsaW1rYXJAdGkuY29tPgo+ICAgU2Fz
aGEgTGV2aW4gPHNhc2hhLmxldmluQG9yYWNsZS5jb20+Cj4gICBTYXRoeWEgUGVybGEgPHNhdGh5
YS5wZXJsYUBlbXVsZXguY29tPgo+ICAgU2NvdHQgRmVsZG1hbiA8c2ZlbGRtYUBjdW11bHVzbmV0
d29ya3MuY29tPgo+ICAgU2ViYXN0aWFuIE90dCA8c2Vib3R0QGxpbnV4LnZuZXQuaWJtLmNvbT4K
PiAgIFNlcmdlIEUuIEhhbGx5biA8c2VyZ2UuaGFsbHluQHVidW50dS5jb20+Cj4gICBTZXJnZSBI
YWxseW4gPHNlcmdlLmhhbGx5bkBjYW5vbmljYWwuY29tPgo+ICAgU2VyZ2VpIFNodHlseW92IDxz
ZXJnZWkuc2h0eWx5b3ZAY29nZW50ZW1iZWRkZWQuY29tPgo+ICAgU2V1bmctV29vIEtpbSA8c3cw
MzEyLmtpbUBzYW1zdW5nLmNvbT4KPiAgIFNoYWhlZCBTaGFpa2ggPHNoYWhlZC5zaGFpa2hAcWxv
Z2ljLmNvbT4KPiAgIFNoaXJpc2ggUGFyZ2FvbmthciA8c3Bhcmdhb25rYXJAc3VzZS5jb20+Cj4g
ICBTaHVhaCBLaGFuIDxzaHVhaC5raEBzYW1zdW5nLmNvbT4KPiAgIFNpbW9uIEd1aW5vdCA8c2d1
aW5vdEBsYWNpZS5jb20+Cj4gICBTaW1vbiBIb3JtYW4gPGhvcm1zK3JlbmVzYXNAdmVyZ2UubmV0
LmF1Pgo+ICAgU2ltb24gSG9ybWFuIDxob3Jtc0B2ZXJnZS5uZXQuYXU+Cj4gICBTaW1vbiBXdW5k
ZXJsaWNoIDxzd0BzaW1vbnd1bmRlcmxpY2guZGU+Cj4gICBTb3JlbiBCcmlua21hbm4gPHNvcmVu
LmJyaW5rbWFubkB4aWxpbnguY29tPgo+ICAgU3RlZmFubyBTdGFiZWxsaW5pIDxzdGVmYW5vLnN0
YWJlbGxpbmlAZXUuY2l0cml4LmNvbT4KPiAgIFN0ZXBoZW4gQm95ZCA8c2JveWRAY29kZWF1cm9y
YS5vcmc+Cj4gICBTdGVwaGVuIFdhcnJlbiA8c3dhcnJlbkBudmlkaWEuY29tPgo+ICAgU3RldmUg
Q2FwcGVyIDxzdGV2ZS5jYXBwZXJAbGluYXJvLm9yZz4KPiAgIFN0ZXZlIEZyZW5jaCA8c21mcmVu
Y2hAZ21haWwuY29tPgo+ICAgU3RldmVuIFJvc3RlZHQgPHJvc3RlZHRAZ29vZG1pcy5vcmc+Cj4g
ICBTdGV2ZW4gV2hpdGVob3VzZSA8c3doaXRlaG9AcmVkaGF0LmNvbT4KPiAgIFN1ZGVlcCBIb2xs
YSA8c3VkZWVwLmhvbGxhQGFybS5jb20+Cj4gICBTdWppdGggTWFub2hhcmFuIDxjX21hbm9oYUBx
Y2EucXVhbGNvbW0uY29tPgo+ICAgU3VyZXNoIFJlZGR5IDxzdXJlc2gucmVkZHlAZW11bGV4LmNv
bT4KPiAgIFRhcmFzIEtvbmRyYXRpdWsgPHRhcmFzLmtvbmRyYXRpdWtAbGluYXJvLm9yZz4KPiAg
IFRlanVuIEhlbyA8dGpAa2VybmVsLm9yZz4KPiAgIFRldHN1byBIYW5kYSA8cGVuZ3Vpbi1rZXJu
ZWxASS1sb3ZlLlNBS1VSQS5uZS5qcD4KPiAgIFRoYWRldSBMaW1hIGRlIFNvdXphIENhc2NhcmRv
IDxjYXNjYXJkb0BsaW51eC52bmV0LmlibS5jb20+Cj4gICBUaG9tYXMgR2xlaXhuZXIgPHRnbHhA
bGludXRyb25peC5kZT4KPiAgIFRpbW8gVGVyw6RzIDx0aW1vLnRlcmFzQGlraS5maT4KPiAgIFRv
bWFzeiBGaWdhIDx0LmZpZ2FAc2Ftc3VuZy5jb20+Cj4gICBUb255IExpbmRncmVuIDx0b255QGF0
b21pZGUuY29tPgo+ICAgVG9zaGkgS2FuaSA8dG9zaGkua2FuaUBocC5jb20+Cj4gICBVamphbCBS
b3kgPHJveXVqamFsQGdtYWlsLmNvbT4KPiAgIFZhc3VuZGhhcmEgVm9sYW0gPHZhc3VuZGhhcmEu
dm9sYW1AZW11bGV4LmNvbT4KPiAgIFZpbGxlIFN5cmrDpGzDpCA8dmlsbGUuc3lyamFsYUBsaW51
eC5pbnRlbC5jb20+Cj4gICBWaW5jZSBCcmlkZ2VycyA8dmJyaWRnZXJzMjAxM0BnbWFpbC5jb20+
Cj4gICBWaXJlc2ggS3VtYXIgPHZpcmVzaC5rdW1hckBsaW5hcm8ub3JnPgo+ICAgVml2ZWsgR295
YWwgPHZnb3lhbEByZWRoYXQuY29tPgo+ICAgVmxhZCBZYXNldmljaCA8dnlhc2V2aWNoQGdtYWls
LmNvbT4KPiAgIFZsYWRpbWlyIERhdnlkb3YgPHZkYXZ5ZG92QHBhcmFsbGVscy5jb20+Cj4gICBW
bGFzdGltaWwgQmFia2EgPHZiYWJrYUBzdXNlLmN6Pgo+ICAgV2FuZyBXZWlkb25nIDx3YW5nd2Vp
ZG9uZzFAaHVhd2VpLmNvbT4KPiAgIFdlaSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+Cj4gICBX
ZWkgWW9uZ2p1biA8eW9uZ2p1bl93ZWlAdHJlbmRtaWNyby5jb20uY24+Cj4gICBXZWktQ2h1biBD
aGFvIDx3ZWljaHVuY0BwbHVtZ3JpZC5jb20+Cj4gICBXZW5saWFuZyBGYW4gPGZhbndsZXhjYUBn
bWFpbC5jb20+Cj4gICBXaWxsIERlYWNvbiA8d2lsbC5kZWFjb25AYXJtLmNvbT4KPiAgIFdvbGZy
YW0gU2FuZyA8d3NhQHRoZS1kcmVhbXMuZGU+Cj4gICBZYW5pdiBSb3NuZXIgPHlhbml2ckBicm9h
ZGNvbS5jb20+Cj4gICBZYXN1c2hpIEFzYW5vIDx5YXN1c2hpLmFzYW5vQGpwLmZ1aml0c3UuY29t
Pgo+ICAgWWlqaW5nIFdhbmcgPHdhbmd5aWppbmdAaHVhd2VpLmNvbT4KPiAgIFlpbmcgWHVlIDx5
aW5nLnh1ZUB3aW5kcml2ZXIuY29tPgo+ICAgWXV2YWwgTWludHogPHl1dmFsbWluQGJyb2FkY29t
LmNvbT4KPiAtLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0KPiAKPiBqb2JzOgo+ICBidWlsZC1hcm1oZiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgcGFzcyAgICAKPiAgYnVpbGQtYXJtaGYtcHZv
cHMgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhc3MgICAgCj4g
IHRlc3QtYXJtaGYtYXJtaGYteGwgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBmYWlsICAgIAo+IAo+IAo+IC0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLQo+IHNnLXJlcG9ydC1mbGlnaHQgb24gd29raW5nLmNh
bS54Y2ktdGVzdC5jb20KPiBsb2dzOiAvaG9tZS94Y19vc3N0ZXN0L2xvZ3MKPiBpbWFnZXM6IC9o
b21lL3hjX29zc3Rlc3QvaW1hZ2VzCj4gCj4gTG9ncywgY29uZmlnIGZpbGVzLCBldGMuIGFyZSBh
dmFpbGFibGUgYXQKPiAgICAgaHR0cDovL3d3dy5jaGlhcmsuZ3JlZW5lbmQub3JnLnVrL354ZW5z
cmN0cy9sb2dzCj4gCj4gVGVzdCBoYXJuZXNzIGNvZGUgY2FuIGJlIGZvdW5kIGF0Cj4gICAgIGh0
dHA6Ly94ZW5iaXRzLnhlbnNvdXJjZS5jb20vZ2l0d2ViP3A9b3NzdGVzdC5naXQ7YT1zdW1tYXJ5
Cj4gCj4gCj4gUHVzaGluZyByZXZpc2lvbiA6Cj4gCj4gKyBicmFuY2g9bGludXgtYXJtLXhlbgo+
ICsgcmV2aXNpb249NTE4ZTYyNGRkZmFlZjU0NTQwOGMxOWMzMGZmZjMxYmM2NGQ2YjM0Ngo+ICsg
LiBjcmktbG9jay1yZXBvcwo+ICsrIC4gY3JpLWNvbW1vbgo+ICsrKyAuIGNyaS1nZXRjb25maWcK
PiArKysgdW1hc2sgMDAyCj4gKysrIGdldGNvbmZpZyBSZXBvcwo+ICsrKyBwZXJsIC1lICcKPiAg
ICAgICAgICAgICAgICAgdXNlIE9zc3Rlc3Q7Cj4gICAgICAgICAgICAgICAgIHJlYWRnbG9iYWxj
b25maWcoKTsKPiAgICAgICAgICAgICAgICAgcHJpbnQgJGN7IlJlcG9zIn0gb3IgZGllICQhOwo+
ICAgICAgICAgJwo+ICsrIHJlcG9zPS9leHBvcnQvaG9tZS9vc3N0ZXN0L3JlcG9zCj4gKysgcmVw
b3NfbG9jaz0vZXhwb3J0L2hvbWUvb3NzdGVzdC9yZXBvcy9sb2NrCj4gKysgJ1snIHggJyE9JyB4
L2V4cG9ydC9ob21lL29zc3Rlc3QvcmVwb3MvbG9jayAnXScKPiArKyBPU1NURVNUX1JFUE9TX0xP
Q0tfTE9DS0VEPS9leHBvcnQvaG9tZS9vc3N0ZXN0L3JlcG9zL2xvY2sKPiArKyBleGVjIHdpdGgt
bG9jay1leCAtdyAvZXhwb3J0L2hvbWUvb3NzdGVzdC9yZXBvcy9sb2NrIC4vYXAtcHVzaCBsaW51
eC1hcm0teGVuIDUxOGU2MjRkZGZhZWY1NDU0MDhjMTljMzBmZmYzMWJjNjRkNmIzNDYKPiArIGJy
YW5jaD1saW51eC1hcm0teGVuCj4gKyByZXZpc2lvbj01MThlNjI0ZGRmYWVmNTQ1NDA4YzE5YzMw
ZmZmMzFiYzY0ZDZiMzQ2Cj4gKyAuIGNyaS1sb2NrLXJlcG9zCj4gKysgLiBjcmktY29tbW9uCj4g
KysrIC4gY3JpLWdldGNvbmZpZwo+ICsrKyB1bWFzayAwMDIKPiArKysgZ2V0Y29uZmlnIFJlcG9z
Cj4gKysrIHBlcmwgLWUgJwo+ICAgICAgICAgICAgICAgICB1c2UgT3NzdGVzdDsKPiAgICAgICAg
ICAgICAgICAgcmVhZGdsb2JhbGNvbmZpZygpOwo+ICAgICAgICAgICAgICAgICBwcmludCAkY3si
UmVwb3MifSBvciBkaWUgJCE7Cj4gICAgICAgICAnCj4gKysgcmVwb3M9L2V4cG9ydC9ob21lL29z
c3Rlc3QvcmVwb3MKPiArKyByZXBvc19sb2NrPS9leHBvcnQvaG9tZS9vc3N0ZXN0L3JlcG9zL2xv
Y2sKPiArKyAnWycgeC9leHBvcnQvaG9tZS9vc3N0ZXN0L3JlcG9zL2xvY2sgJyE9JyB4L2V4cG9y
dC9ob21lL29zc3Rlc3QvcmVwb3MvbG9jayAnXScKPiArIC4gY3JpLWNvbW1vbgo+ICsrIC4gY3Jp
LWdldGNvbmZpZwo+ICsrIHVtYXNrIDAwMgo+ICsgc2VsZWN0X3hlbmJyYW5jaAo+ICsgY2FzZSAi
JGJyYW5jaCIgaW4KPiArIHRyZWU9bGludXgKPiArIHhlbmJyYW5jaD14ZW4tdW5zdGFibGUKPiAr
ICdbJyB4bGludXggPSB4bGludXggJ10nCj4gKyBsaW51eGJyYW5jaD1saW51eC1hcm0teGVuCj4g
KyA6IHRlc3RlZC8yLjYuMzkueAo+ICsgLiBhcC1jb21tb24KPiArKyA6IG9zc3Rlc3RAeGVuYml0
cy54ZW5zb3VyY2UuY29tCj4gKysgOiBnaXQ6Ly94ZW5iaXRzLnhlbi5vcmcveGVuLmdpdAo+ICsr
IDogb3NzdGVzdEB4ZW5iaXRzLnhlbnNvdXJjZS5jb206L2hvbWUveGVuL2dpdC94ZW4uZ2l0Cj4g
KysgOiBnaXQ6Ly94ZW5iaXRzLnhlbi5vcmcvc3RhZ2luZy9xZW11LXhlbi11bnN0YWJsZS5naXQK
PiArKyA6IGdpdDovL2dpdC5rZXJuZWwub3JnCj4gKysgOiBnaXQ6Ly9naXQua2VybmVsLm9yZy9w
dWIvc2NtL2xpbnV4L2tlcm5lbC9naXQKPiArKyA6IGdpdAo+ICsrIDogZ2l0Oi8veGVuYml0cy54
ZW4ub3JnL29zc3Rlc3QvbGludXgtZmlybXdhcmUuZ2l0Cj4gKysgOiBvc3N0ZXN0QHhlbmJpdHMu
eGVuc291cmNlLmNvbTovaG9tZS9vc3N0ZXN0L2V4dC9saW51eC1maXJtd2FyZS5naXQKPiArKyA6
IGdpdDovL2dpdC5rZXJuZWwub3JnL3B1Yi9zY20vbGludXgva2VybmVsL2dpdC9maXJtd2FyZS9s
aW51eC1maXJtd2FyZS5naXQKPiArKyA6IG9zc3Rlc3RAeGVuYml0cy54ZW5zb3VyY2UuY29tOi9o
b21lL3hlbi9naXQvbGludXgtcHZvcHMuZ2l0Cj4gKysgOiBnaXQ6Ly94ZW5iaXRzLnhlbi5vcmcv
bGludXgtcHZvcHMuZ2l0Cj4gKysgOiB0ZXN0ZWQvbGludXgtMy40Cj4gKysgOiB0ZXN0ZWQvbGlu
dXgtYXJtLXhlbgo+ICsrICdbJyB4Z2l0Oi8vZ2l0Lmtlcm5lbC5vcmcvcHViL3NjbS9saW51eC9r
ZXJuZWwvZ2l0L3NzdGFiZWxsaW5pL3hlbi5naXQgPSB4ICddJwo+ICsrICdbJyB4Z2l0Oi8vZ2l0
Lmtlcm5lbC5vcmcvcHViL3NjbS9saW51eC9rZXJuZWwvZ2l0L3NzdGFiZWxsaW5pL3hlbi5naXQg
PSB4ICddJwo+ICsrIDogZ2l0Oi8vZ2l0Lmtlcm5lbC5vcmcvcHViL3NjbS9saW51eC9rZXJuZWwv
Z2l0L2tvbnJhZC94ZW4uZ2l0Cj4gKysgOiB0ZXN0ZWQvMi42LjM5LngKPiArKyA6IGRhaWx5LWNy
b24ubGludXgtYXJtLXhlbgo+ICsrIDogZGFpbHktY3Jvbi5saW51eC1hcm0teGVuCj4gKysgOiBo
dHRwOi8vaGcudWsueGVuc291cmNlLmNvbS9jYXJib24vdHJ1bmsvbGludXgtMi42LjI3Cj4gKysg
OiBnaXQ6Ly94ZW5iaXRzLnhlbi5vcmcvc3RhZ2luZy9xZW11LXVwc3RyZWFtLXVuc3RhYmxlLmdp
dAo+ICsrIDogZGFpbHktY3Jvbi5saW51eC1hcm0teGVuCj4gKyBUUkVFX0xJTlVYPW9zc3Rlc3RA
eGVuYml0cy54ZW5zb3VyY2UuY29tOi9ob21lL3hlbi9naXQvbGludXgtcHZvcHMuZ2l0Cj4gKyBU
UkVFX1FFTVVfVVBTVFJFQU09b3NzdGVzdEB4ZW5iaXRzLnhlbnNvdXJjZS5jb206L2hvbWUveGVu
L2dpdC9xZW11LXVwc3RyZWFtLXVuc3RhYmxlLmdpdAo+ICsgVFJFRV9YRU49b3NzdGVzdEB4ZW5i
aXRzLnhlbnNvdXJjZS5jb206L2hvbWUveGVuL2dpdC94ZW4uZ2l0Cj4gKyBpbmZvX2xpbnV4X3Ry
ZWUgbGludXgtYXJtLXhlbgo+ICsgY2FzZSAkMSBpbgo+ICsgOiBnaXQ6Ly9naXQua2VybmVsLm9y
Zy9wdWIvc2NtL2xpbnV4L2tlcm5lbC9naXQvc3N0YWJlbGxpbmkveGVuLmdpdAo+ICsgOiBnaXQ6
Ly9naXQua2VybmVsLm9yZy9wdWIvc2NtL2xpbnV4L2tlcm5lbC9naXQvc3N0YWJlbGxpbmkveGVu
LmdpdAo+ICsgOiBsaW51eC1hcm0teGVuCj4gKyA6IGxpbnV4LWFybS14ZW4KPiArIDogbGludXgt
YXJtLXhlbgo+ICsgOiBnaXQKPiArIDogZ2l0Cj4gKyA6IGdpdDovL3hlbmJpdHMueGVuLm9yZy9s
aW51eC1wdm9wcy5naXQKPiArIDogb3NzdGVzdEB4ZW5iaXRzLnhlbnNvdXJjZS5jb206L2hvbWUv
eGVuL2dpdC9saW51eC1wdm9wcy5naXQKPiArIDogdGVzdGVkL2xpbnV4LWFybS14ZW4KPiArIDog
dGVzdGVkL2xpbnV4LWFybS14ZW4KPiArIHJldHVybiAwCj4gKyBjZCAvZXhwb3J0L2hvbWUvb3Nz
dGVzdC9yZXBvcy9saW51eAo+ICsgZ2l0IHB1c2ggb3NzdGVzdEB4ZW5iaXRzLnhlbnNvdXJjZS5j
b206L2hvbWUveGVuL2dpdC9saW51eC1wdm9wcy5naXQgNTE4ZTYyNGRkZmFlZjU0NTQwOGMxOWMz
MGZmZjMxYmM2NGQ2YjM0Njp0ZXN0ZWQvbGludXgtYXJtLXhlbgo+IENvdW50aW5nIG9iamVjdHM6
IDEgICAKPiBDb3VudGluZyBvYmplY3RzOiAxMCAgIAo+IENvdW50aW5nIG9iamVjdHM6IDYyICAg
Cj4gQ291bnRpbmcgb2JqZWN0czogNjkgICAKPiBDb3VudGluZyBvYmplY3RzOiA5NSAgIAo+IENv
dW50aW5nIG9iamVjdHM6IDEwMSAgIAo+IENvdW50aW5nIG9iamVjdHM6IDExOSAgIAo+IENvdW50
aW5nIG9iamVjdHM6IDEyMiAgIAo+IENvdW50aW5nIG9iamVjdHM6IDEyMyAgIAo+IENvdW50aW5n
IG9iamVjdHM6IDE1OCAgIAo+IENvdW50aW5nIG9iamVjdHM6IDE2MiAgIAo+IENvdW50aW5nIG9i
amVjdHM6IDE2MyAgIAo+IENvdW50aW5nIG9iamVjdHM6IDE2NCAgIAo+IENvdW50aW5nIG9iamVj
dHM6IDE5NyAgIAo+IENvdW50aW5nIG9iamVjdHM6IDIxMCAgIAo+IENvdW50aW5nIG9iamVjdHM6
IDIxMSAgIAo+IENvdW50aW5nIG9iamVjdHM6IDIyMSAgIAo+IENvdW50aW5nIG9iamVjdHM6IDIz
MCAgIAo+IENvdW50aW5nIG9iamVjdHM6IDIzNiAgIAo+IENvdW50aW5nIG9iamVjdHM6IDI1OCAg
IAo+IENvdW50aW5nIG9iamVjdHM6IDI4OCAgIAo+IENvdW50aW5nIG9iamVjdHM6IDI5MSAgIAo+
IENvdW50aW5nIG9iamVjdHM6IDI5MyAgIAo+IENvdW50aW5nIG9iamVjdHM6IDI5NSAgIAo+IENv
dW50aW5nIG9iamVjdHM6IDM3MSAgIAo+IENvdW50aW5nIG9iamVjdHM6IDI0MzkgICAKPiBDb3Vu
dGluZyBvYmplY3RzOiAzNjQ0LCBkb25lLgo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAgMCUgKDEv
MTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDElICgxMy8xMjgxKSAgIAo+IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAgMiUgKDI2LzEyODEpICAgCj4gQ29tcHJlc3Npbmcgb2JqZWN0czog
ICAzJSAoMzkvMTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDQlICg1Mi8xMjgxKSAg
IAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAgNSUgKDY1LzEyODEpICAgCj4gQ29tcHJlc3Npbmcg
b2JqZWN0czogICA2JSAoNzcvMTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgIDclICg5
MC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAgOCUgKDEwMy8xMjgxKSAgIAo+IENv
bXByZXNzaW5nIG9iamVjdHM6ICAgOSUgKDExNi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVj
dHM6ICAxMCUgKDEyOS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMCUgKDEzOS8x
MjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAxMSUgKDE0MS8xMjgxKSAgIAo+IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAxMiUgKDE1NC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6
ICAxMyUgKDE2Ny8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNCUgKDE4MC8xMjgx
KSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAxNSUgKDE5My8xMjgxKSAgIAo+IENvbXByZXNz
aW5nIG9iamVjdHM6ICAxNiUgKDIwNS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAx
NyUgKDIxOC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAxOCUgKDIzMS8xMjgxKSAg
IAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAxOSUgKDI0NC8xMjgxKSAgIAo+IENvbXByZXNzaW5n
IG9iamVjdHM6ICAyMCUgKDI1Ny8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAyMSUg
KDI3MC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAyMiUgKDI4Mi8xMjgxKSAgIAo+
IENvbXByZXNzaW5nIG9iamVjdHM6ICAyMyUgKDI5NS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9i
amVjdHM6ICAyNCUgKDMwOC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNSUgKDMy
MS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAyNiUgKDMzNC8xMjgxKSAgIAo+IENv
bXByZXNzaW5nIG9iamVjdHM6ICAyNyUgKDM0Ni8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVj
dHM6ICAyOCUgKDM1OS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAyOSUgKDM3Mi8x
MjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMCUgKDM4NS8xMjgxKSAgIAo+IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICAzMSUgKDM5OC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6
ICAzMiUgKDQxMC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAzMyUgKDQyMy8xMjgx
KSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAzNCUgKDQzNi8xMjgxKSAgIAo+IENvbXByZXNz
aW5nIG9iamVjdHM6ICAzNSUgKDQ0OS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAz
NiUgKDQ2Mi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAzNyUgKDQ3NC8xMjgxKSAg
IAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICAzOCUgKDQ4Ny8xMjgxKSAgIAo+IENvbXByZXNzaW5n
IG9iamVjdHM6ICAzOSUgKDUwMC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MCUg
KDUxMy8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MCUgKDUyNS8xMjgxKSAgIAo+
IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MSUgKDUyNi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9i
amVjdHM6ICA0MiUgKDUzOS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA0MyUgKDU1
MS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NCUgKDU2NC8xMjgxKSAgIAo+IENv
bXByZXNzaW5nIG9iamVjdHM6ICA0NSUgKDU3Ny8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVj
dHM6ICA0NiUgKDU5MC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA0NyUgKDYwMy8x
MjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA0OCUgKDYxNS8xMjgxKSAgIAo+IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA0OSUgKDYyOC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6
ICA1MCUgKDY0MS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MSUgKDY1NC8xMjgx
KSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1MiUgKDY2Ny8xMjgxKSAgIAo+IENvbXByZXNz
aW5nIG9iamVjdHM6ICA1MyUgKDY3OS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1
NCUgKDY5Mi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1NSUgKDcwNS8xMjgxKSAg
IAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1NiUgKDcxOC8xMjgxKSAgIAo+IENvbXByZXNzaW5n
IG9iamVjdHM6ICA1NyUgKDczMS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1OCUg
KDc0My8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA1OSUgKDc1Ni8xMjgxKSAgIAo+
IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MCUgKDc2OS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9i
amVjdHM6ICA2MSUgKDc4Mi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MiUgKDc5
NS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA2MyUgKDgwOC8xMjgxKSAgIAo+IENv
bXByZXNzaW5nIG9iamVjdHM6ICA2NCUgKDgyMC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVj
dHM6ICA2NSUgKDgzMy8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NiUgKDg0Ni8x
MjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA2NyUgKDg1OS8xMjgxKSAgIAo+IENvbXBy
ZXNzaW5nIG9iamVjdHM6ICA2OCUgKDg3Mi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6
ICA2OSUgKDg4NC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3MCUgKDg5Ny8xMjgx
KSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3MSUgKDkxMC8xMjgxKSAgIAo+IENvbXByZXNz
aW5nIG9iamVjdHM6ICA3MiUgKDkyMy8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3
MyUgKDkzNi8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3NCUgKDk0OC8xMjgxKSAg
IAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3NSUgKDk2MS8xMjgxKSAgIAo+IENvbXByZXNzaW5n
IG9iamVjdHM6ICA3NiUgKDk3NC8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3NyUg
KDk4Ny8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA3OCUgKDEwMDAvMTI4MSkgICAK
PiBDb21wcmVzc2luZyBvYmplY3RzOiAgNzklICgxMDEyLzEyODEpICAgCj4gQ29tcHJlc3Npbmcg
b2JqZWN0czogIDgwJSAoMTAyNS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA4MSUg
KDEwMzgvMTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgODIlICgxMDUxLzEyODEpICAg
Cj4gQ29tcHJlc3Npbmcgb2JqZWN0czogIDgzJSAoMTA2NC8xMjgxKSAgIAo+IENvbXByZXNzaW5n
IG9iamVjdHM6ICA4NCUgKDEwNzcvMTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgODUl
ICgxMDg5LzEyODEpICAgCj4gQ29tcHJlc3Npbmcgb2JqZWN0czogIDg2JSAoMTEwMi8xMjgxKSAg
IAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA4NyUgKDExMTUvMTI4MSkgICAKPiBDb21wcmVzc2lu
ZyBvYmplY3RzOiAgODglICgxMTI4LzEyODEpICAgCj4gQ29tcHJlc3Npbmcgb2JqZWN0czogIDg5
JSAoMTE0MS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA5MCUgKDExNTMvMTI4MSkg
ICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgOTElICgxMTY2LzEyODEpICAgCj4gQ29tcHJlc3Np
bmcgb2JqZWN0czogIDkyJSAoMTE3OS8xMjgxKSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA5
MyUgKDExOTIvMTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAgOTQlICgxMjA1LzEyODEp
ICAgCj4gQ29tcHJlc3Npbmcgb2JqZWN0czogIDk1JSAoMTIxNy8xMjgxKSAgIAo+IENvbXByZXNz
aW5nIG9iamVjdHM6ICA5NiUgKDEyMzAvMTI4MSkgICAKPiBDb21wcmVzc2luZyBvYmplY3RzOiAg
OTclICgxMjQzLzEyODEpICAgCj4gQ29tcHJlc3Npbmcgb2JqZWN0czogIDk4JSAoMTI1Ni8xMjgx
KSAgIAo+IENvbXByZXNzaW5nIG9iamVjdHM6ICA5OSUgKDEyNjkvMTI4MSkgICAKPiBDb21wcmVz
c2luZyBvYmplY3RzOiAxMDAlICgxMjgxLzEyODEpICAgCj4gQ29tcHJlc3Npbmcgb2JqZWN0czog
MTAwJSAoMTI4MS8xMjgxKSwgZG9uZS4KPiBXcml0aW5nIG9iamVjdHM6ICAgMCUgKDEvMjMwOSkg
ICAKPiBXcml0aW5nIG9iamVjdHM6ICAgMSUgKDI0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3Rz
OiAgIDIlICg0Ny8yMzA5KSAgIAo+IFdyaXRpbmcgb2JqZWN0czogICAzJSAoNzAvMjMwOSkgICAK
PiBXcml0aW5nIG9iamVjdHM6ICAgNCUgKDkzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
IDUlICgxMTYvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAgNiUgKDEzOS8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogICA3JSAoMTYyLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
IDglICgxODUvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAgOSUgKDIwOC8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDEwJSAoMjMxLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MTElICgyNTQvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAxMiUgKDI3OC8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDEzJSAoMzAxLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MTQlICgzMjQvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAxNSUgKDM0Ny8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDE2JSAoMzcwLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MTclICgzOTMvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAxOCUgKDQxNi8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDE5JSAoNDM5LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MjAlICg0NjIvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAyMSUgKDQ4NS8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDIyJSAoNTA4LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MjMlICg1MzIvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAyNCUgKDU1NS8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDI1JSAoNTc4LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MjYlICg2MDEvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAyNyUgKDYyNC8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDI4JSAoNjQ3LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MjklICg2NzAvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAzMCUgKDY5My8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDMxJSAoNzE2LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MzIlICg3MzkvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAzMyUgKDc2Mi8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDM0JSAoNzg2LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MzUlICg4MDkvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAzNiUgKDgzMi8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDM3JSAoODU1LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
MzglICg4NzgvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICAzOSUgKDkwMS8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDQwJSAoOTI0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
NDElICg5NDkvMjMwOSkgICAKPiBXcml0aW5nIG9iamVjdHM6ICA0MiUgKDk3MC8yMzA5KSAgIAo+
IFdyaXRpbmcgb2JqZWN0czogIDQzJSAoOTkzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
NDQlICgxMDE5LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNDUlICgxMDQwLzIzMDkpICAg
Cj4gV3JpdGluZyBvYmplY3RzOiAgNDYlICgxMDYzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3Rz
OiAgNDclICgxMDg2LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNDglICgxMTA5LzIzMDkp
ICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNDklICgxMTMyLzIzMDkpICAgCj4gV3JpdGluZyBvYmpl
Y3RzOiAgNTAlICgxMTU1LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNTElICgxMTc4LzIz
MDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNTIlICgxMjAxLzIzMDkpICAgCj4gV3JpdGluZyBv
YmplY3RzOiAgNTMlICgxMjI0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNTQlICgxMjQ3
LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNTUlICgxMjcwLzIzMDkpICAgCj4gV3JpdGlu
ZyBvYmplY3RzOiAgNTYlICgxMjk0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNTclICgx
MzE3LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNTglICgxMzQwLzIzMDkpICAgCj4gV3Jp
dGluZyBvYmplY3RzOiAgNTklICgxMzYzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNjAl
ICgxMzg2LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNjElICgxNDA5LzIzMDkpICAgCj4g
V3JpdGluZyBvYmplY3RzOiAgNjIlICgxNDMyLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
NjMlICgxNDU2LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNjQlICgxNDc4LzIzMDkpICAg
Cj4gV3JpdGluZyBvYmplY3RzOiAgNjUlICgxNTAxLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3Rz
OiAgNjYlICgxNTI0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNjclICgxNTQ4LzIzMDkp
ICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNjglICgxNTcxLzIzMDkpICAgCj4gV3JpdGluZyBvYmpl
Y3RzOiAgNjklICgxNTk0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzAlICgxNjE3LzIz
MDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzElICgxNjQwLzIzMDkpICAgCj4gV3JpdGluZyBv
YmplY3RzOiAgNzIlICgxNjYzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzMlICgxNjg2
LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzQlICgxNzA5LzIzMDkpICAgCj4gV3JpdGlu
ZyBvYmplY3RzOiAgNzUlICgxNzMyLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzYlICgx
NzU1LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzclICgxNzc4LzIzMDkpICAgCj4gV3Jp
dGluZyBvYmplY3RzOiAgNzglICgxODAyLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgNzkl
ICgxODI1LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgODAlICgxODQ4LzIzMDkpICAgCj4g
V3JpdGluZyBvYmplY3RzOiAgODElICgxODcxLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAg
ODIlICgxODk0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgODMlICgxOTE3LzIzMDkpICAg
Cj4gV3JpdGluZyBvYmplY3RzOiAgODQlICgxOTQwLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3Rz
OiAgODUlICgxOTYzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgODYlICgxOTg2LzIzMDkp
ICAgCj4gV3JpdGluZyBvYmplY3RzOiAgODclICgyMDA5LzIzMDkpICAgCj4gV3JpdGluZyBvYmpl
Y3RzOiAgODglICgyMDMyLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgODklICgyMDU2LzIz
MDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTAlICgyMDc5LzIzMDkpICAgCj4gV3JpdGluZyBv
YmplY3RzOiAgOTElICgyMTAyLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTIlICgyMTI1
LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTMlICgyMTQ4LzIzMDkpICAgCj4gV3JpdGlu
ZyBvYmplY3RzOiAgOTQlICgyMTcxLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTUlICgy
MTk0LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTYlICgyMjE3LzIzMDkpICAgCj4gV3Jp
dGluZyBvYmplY3RzOiAgOTclICgyMjQwLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTgl
ICgyMjYzLzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAgOTklICgyMjg2LzIzMDkpICAgCj4g
V3JpdGluZyBvYmplY3RzOiAxMDAlICgyMzA5LzIzMDkpICAgCj4gV3JpdGluZyBvYmplY3RzOiAx
MDAlICgyMzA5LzIzMDkpLCA1MjAuOTEgS2lCLCBkb25lLgo+IFRvdGFsIDIzMDkgKGRlbHRhIDE4
NzIpLCByZXVzZWQgMTM0OCAoZGVsdGEgMTAyNykKPiBUbyBvc3N0ZXN0QHhlbmJpdHMueGVuc291
cmNlLmNvbTovaG9tZS94ZW4vZ2l0L2xpbnV4LXB2b3BzLmdpdAo+ICAgIGQyNjRiZGUuLjUxOGU2
MjQgIDUxOGU2MjRkZGZhZWY1NDU0MDhjMTljMzBmZmYzMWJjNjRkNmIzNDYgLT4gdGVzdGVkL2xp
bnV4LWFybS14ZW4KPiArIGV4aXQgMAo+IAo+IF9fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fCj4gWGVuLWRldmVsIG1haWxpbmcgbGlzdAo+IFhlbi1kZXZlbEBs
aXN0cy54ZW4ub3JnCj4gaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCgoKCl9fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5n
IGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRl
dmVsCg==

From xen-devel-bounces@lists.xen.org Thu Feb 06 09:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 09:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLj7-0006OK-V9; Thu, 06 Feb 2014 09:58:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WBLj7-0006MJ-5I
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 09:58:33 +0000
Received: from [193.109.254.147:11633] by server-4.bemta-14.messagelabs.com id
	B0/3F-32066-8CC53F25; Thu, 06 Feb 2014 09:58:32 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391680709!2419479!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24420 invoked from network); 6 Feb 2014 09:58:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 09:58:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,792,1384300800"; d="scan'208";a="100413046"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 09:58:28 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	04:58:28 -0500
Message-ID: <52F35CC1.60401@citrix.com>
Date: Thu, 6 Feb 2014 09:58:25 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Michael Chan
	<mchan@broadcom.com>
References: <52EAA31B.1090606@schaman.hu>	
	<1391114048.4804.2.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>	
	<52EBA51E.808@citrix.com>
	<1391543271.4804.44.camel@LTIRV-MCHAN1.corp.ad.broadcom.com>
	<52F29DDC.7010908@citrix.com> <52F2A282.5040502@citrix.com>
In-Reply-To: <52F2A282.5040502@citrix.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Alex Duyck <alexander.h.duyck@intel.com>, linux-kernel@vger.kernel.org,
	e1000-devel@lists.sourceforge.net,
	Don Skidmore <donald.c.skidmore@intel.com>,
	Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>,
	Bruce Allan <bruce.w.allan@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Greg Rose <gregory.v.rose@intel.com>,
	John Ronciak <john.ronciak@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Carolyn Wyborny <carolyn.wyborny@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>,
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 20:43, Andrew Cooper wrote:
> On 05/02/2014 20:23, Zoltan Kiss wrote:
>> On 04/02/14 19:47, Michael Chan wrote:
>>> On Fri, 2014-01-31 at 14:29 +0100, Zoltan Kiss wrote:
>>>> [ 5417.275472] WARNING: at net/sched/sch_generic.c:255
>>>> dev_watchdog+0x156/0x1f0()
>>>> [ 5417.275474] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 2 timed out
>>>
>>> The dump shows an internal IRQ pending on MSI-X vector 2, which matches
>>> the queue number that is timing out.  I don't know what happened to
>>> the MSI-X and why the driver is not seeing it.  Do you see an IRQ error
>>> message from the kernel a few seconds before the tx timeout message?
>>
>> I haven't seen any IRQ-related error message. Note, this is on Xen
>> 4.3.1. Now I have new results with a reworked version of the patch,
>> unfortunately it still has this issue. Here is a bnx2 dump, lspci
>> output and some Xen debug output (MSI and interrupt bindings, I have
>> more if needed).
>
> You need debug-keys 'Q' as well to map between the PCI devices and Xen IRQs
>
> ~Andrew
>

I could get it after a reboot:

(XEN) [2014-02-06 09:44:34] 0000:02:00.0 - dom 0   - MSIs < 64 65 66 67 68 69 >

So the relevant MSI information:

(XEN) [2014-02-05 20:15:20]  MSI-X   64 vec=d7  fixed  edge   assert phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   65 vec=ba  fixed  edge   assert phys    cpu dest=00000000 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   66 vec=92  fixed  edge   assert phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   67 vec=3a  fixed  edge   assert phys    cpu dest=00000021 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   68 vec=b8  fixed  edge   assert phys    cpu dest=00000022 mask=1/0/0
(XEN) [2014-02-05 20:15:20]  MSI-X   69 vec=2a  fixed  edge   assert phys    cpu dest=00000020 mask=1/1/1
...
(XEN) [2014-02-05 20:15:22]    IRQ:  64 affinity:00000004 vec:d7 type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:304(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  65 affinity:00000100 vec:ba type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:303(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  66 affinity:00000004 vec:92 type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:302(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  67 affinity:00000002 vec:3a type=PCI-MSI/-X      status=00000010 in-flight=0 domain-list=0:301(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  68 affinity:00000004 vec:b8 type=PCI-MSI/-X      status=00000030 in-flight=0 domain-list=0:300(---),
(XEN) [2014-02-05 20:15:22]    IRQ:  69 affinity:00000001 vec:2a type=PCI-MSI/-X      status=00000002 mapped, unbound
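The two listings above (MSI-X state and Xen IRQ bindings) can be cross-checked mechanically: the suspect is entry 69, which is fully masked (mask=1/1/1) and whose Xen IRQ is "mapped, unbound". A rough sketch of that check, assuming one entry per line; the helper name and the reading of the mask field as three independent mask bits are my assumptions, not Xen's:

```python
import re

# Matches the 'MSI-X  <n> ... mask=a/b/c' lines from the MSI listing
MSI_RE = re.compile(r"MSI-X\s+(\d+)\s+vec=[0-9a-f]+.*?mask=(\d)/(\d)/(\d)")
# Matches the 'IRQ: <n> affinity:...' lines from the debug-key 'i' listing
IRQ_RE = re.compile(r"IRQ:\s+(\d+)\s")

def suspicious_irqs(log):
    """Return entry numbers that are fully masked or still unbound."""
    bad = set()
    for line in log.splitlines():
        m = MSI_RE.search(line)
        if m and m.group(2) == m.group(3) == m.group(4) == "1":
            bad.add(int(m.group(1)))   # all three mask fields set
        i = IRQ_RE.search(line)
        if i and "unbound" in line:
            bad.add(int(i.group(1)))   # mapped but never bound
    return sorted(bad)
```

Run over the listings above, this flags only entry 69.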


Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:01:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLmH-0006hl-K6; Thu, 06 Feb 2014 10:01:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBLmG-0006he-BN
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:01:48 +0000
Received: from [85.158.139.211:7108] by server-5.bemta-5.messagelabs.com id
	21/17-32749-B8D53F25; Thu, 06 Feb 2014 10:01:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391680906!2033188!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12610 invoked from network); 6 Feb 2014 10:01:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 10:01:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 10:01:46 +0000
Message-Id: <52F36B920200007800119AF4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 10:01:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ijc@hellion.org.uk>,
 "Gerd Hoffmann" <kraxel@redhat.com>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
In-Reply-To: <1391675812.22033.2.camel@dagon.hellion.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keith Moyer <Keith.Moyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Hoyer <David.Hoyer@netapp.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.02.14 at 09:36, Ian Campbell <ijc@hellion.org.uk> wrote:
> (adding xen-devel too) 
> On Thu, 2014-02-06 at 09:31 +0100, Gerd Hoffmann wrote:
>> > commit e144bb7af49ca8756b7222a75811f3b85b0bc1f5
>> > Author: Gerd Hoffmann <kraxel@redhat.com>
>> > Date:   Mon Jun 3 16:30:18 2013 +0200
>> > 
>> >     usb: add xhci support
>> >    
>> >     $subject says all.  Support for usb3 streams is not implemented yet,
>> >     otherwise it is fully functional.  Tested all usb devices supported
>> >     by qemu (keyboard, storage, usb hubs), except for usb attached scsi
>> >     in usb3 mode (which needs streams).
>> >    
>> >     Tested on qemu only, tagged with QEMU_HARDWARE because of that.
>> >     Testing with physical hardware to be done.
>> 
>> That commit made the seabios image (default qemu config, gcc 4.7+) jump
>> from 128k to 256k because the code didn't fit into 128k any more.
>> 
>> Most likely the failure isn't related to xhci at all, but to the size
>> change.
>> 
>> You can try to turn off some features (hardware support) you don't need
>> to make the bios image smaller.  1.7.4 also has a config option to
>> explicitly set the image size you want.
>> 
>> IIRC xen combines seabios and hvmloader into a single 256k image
>> somehow, so it might make sense to set the seabios image size to
>> something between 128k and 256k.  But better ask the xen people for
>> details here.

A patch allowing the size to be other than a power of 2 was rejected.
And I vaguely seem to recall that you actually participated in that?

> I think this was fixed in Xen with
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5f2875739beef3a75c7a7e8579b6cbcb464e61b3
> 
> I suppose this should be backported to the 4.3.x stable branch.

It was backported already, and is part of 4.3.1.

Jan
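For reference, the 1.7.4 config option Gerd mentions is SeaBIOS's ROM size knob; a sketch of the relevant .config fragment (option name taken from SeaBIOS's Kconfig, value in KiB; treat the exact spelling as an assumption):

```
# SeaBIOS .config: pin the image to 256 KiB instead of auto-sizing
CONFIG_ROM_SIZE=256
```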


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:03:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLnr-0006pI-96; Thu, 06 Feb 2014 10:03:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBLnp-0006p8-UX
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 10:03:26 +0000
Received: from [85.158.139.211:48070] by server-4.bemta-5.messagelabs.com id
	4B/9C-08092-DED53F25; Thu, 06 Feb 2014 10:03:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391681002!2040609!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11503 invoked from network); 6 Feb 2014 10:03:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:03:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100414271"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 10:03:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	05:03:21 -0500
Message-ID: <1391681000.23098.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Miguel Clara <miguelmclara@gmail.com>
Date: Thu, 6 Feb 2014 10:03:20 +0000
In-Reply-To: <CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 03:37 +0000, Miguel Clara wrote:
> Sorry, forgot to add the error:
> 
> # xl create test.cfg
> Parsing config from test.cfg
> libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader
> failed - consult logfile /var/log/xen/bootloader.34.log
> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
> bootloader [10762] exited with error status 1
> libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
> (re-)build domain: -3
> 
> # cat /var/log/xen/bootloader.34.log
> Traceback (most recent call last):
>   File "/usr/local/lib/xen/bin/pygrub", line 844, in <module>
>     part_offs = get_partition_offsets(file)
>   File "/usr/local/lib/xen/bin/pygrub", line 105, in get_partition_offsets
>     image_type = identify_disk_image(file)
>   File "/usr/local/lib/xen/bin/pygrub", line 47, in identify_disk_image
>     fd = os.open(file, os.O_RDONLY)
> OSError: [Errno 2] No such file or directory: 'drbd-remus-test'

You haven't shared your cfg file but this sounds to me like something is
missing the necessary absolute path.

> 
> On Wed, Feb 5, 2014 at 3:20 AM, Mike C. <miguelmclara@gmail.com> wrote:
> > Fixed this, but it seems using drbd: in the disk config doesn't work with
> > pygrub....
> >
> > does this make sense?
> >
> > I found an old bug report, but this is Debian squeeze Xen 4.3
> >
> > It seems to work fine booting into the installer, but if I use pygrub it
> > doesn't find the drbd device.
> >
> >
> >
> >
> > On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan
> > <rshriram@cs.ubc.ca> wrote:
> >>
> >> On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com>
> >> wrote:
> >>>
> >>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
> >>> does this come with xen or the drbd package?
> >>>
> >>> Xen 4.3.1 was compiled from source, but drbd is installed from apt-get
> >>> on Debian (v 8.3)
> >>>
> >>
> >> It comes with the drbd package AFAIK
> >>
> >
> > --
> > Sent from my Android device with K-9 Mail. Please excuse my brevity.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:15:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLz5-0007RD-Q5; Thu, 06 Feb 2014 10:15:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WBLz2-0007R3-Hh
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:15:00 +0000
Received: from [85.158.139.211:45886] by server-15.bemta-5.messagelabs.com id
	DA/B9-24395-3A063F25; Thu, 06 Feb 2014 10:14:59 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391681697!2032507!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8546 invoked from network); 6 Feb 2014 10:14:57 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 10:14:57 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WBLyy-0009hr-LI; Thu, 06 Feb 2014 10:14:56 +0000
Date: Thu, 6 Feb 2014 11:14:56 +0100
From: Tim Deegan <tim@xen.org>
To: Tamas K Lengyel <tamas.lengyel@zentific.com>
Message-ID: <20140206101456.GA35797@deinos.phlegethon.org>
References: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: keir@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mem_event: Return previous value of
 CR0/CR3/CR4 on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 22:34 +0100 on 30 Jan (1391117656), Tamas K Lengyel wrote:
> This patch extends the information returned for CR0/CR3/CR4 register write events
> with the previous value of the register. The old value was already passed to the trap
> processing function, just never placed into the returned request. By returning
> this value, applications subscribing to the CR events obtain additional context about
> the event.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Looks OK to me. (But for after 4.4.)

Acked-by: Tim Deegan <tim@xen.org>


> ---
>  xen/arch/x86/hvm/hvm.c         |    4 ++++
>  xen/include/public/mem_event.h |    6 +++---
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..d46abf2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4682,6 +4682,10 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
>          req.gla = gla;
>          req.gla_valid = 1;
>      }
> +    else
> +    {
> +        req.gla = old;
> +    }
>      
>      mem_event_put_request(d, &d->mem_event->access, &req);
>      
> diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
> index c9ed546..3831b41 100644
> --- a/xen/include/public/mem_event.h
> +++ b/xen/include/public/mem_event.h
> @@ -40,9 +40,9 @@
>  /* Reasons for the memory event request */
>  #define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
>  #define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
> -#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is CR0 value */
> -#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is CR3 value */
> -#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
> +#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
> +#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
> +#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
>  #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
>  #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
>  #define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:15:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLzM-0007Tf-Qx; Thu, 06 Feb 2014 10:15:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBLzL-0007TC-N4
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:15:20 +0000
Received: from [85.158.139.211:36979] by server-7.bemta-5.messagelabs.com id
	67/EC-14867-7B063F25; Thu, 06 Feb 2014 10:15:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391681716!2054514!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7412 invoked from network); 6 Feb 2014 10:15:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:15:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100415732"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 10:15:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	05:15:15 -0500
Message-ID: <1391681714.23098.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Thu, 6 Feb 2014 10:15:14 +0000
In-Reply-To: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 12:58 +0530, Pranavkumar Sawargaonkar wrote:
> This patch adds VFP save/restore support for arm64 across context switches.
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>

This should go in for 4.4 -- not context switching floating point
registers is obviously a big problem. (bit embarrassed that I forgot
about this...)

> ---
>  xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/arm64/vfp.h |    4 ++++
>  2 files changed, 53 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index 74e6a50..8c1479a 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -1,13 +1,62 @@
>  #include <xen/sched.h>
>  #include <asm/processor.h>
> +#include <asm/cpufeature.h>
>  #include <asm/vfp.h>
>  
>  void vfp_save_state(struct vcpu *v)
>  {
>      /* TODO: implement it */

You can probably remove this comment, and the one in restore, unless
there is something still left to do?

Actually, I'll do it on commit, so:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

> +    if ( !cpu_has_fp )
> +        return;
> +
> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> +
> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>  }
>  
>  void vfp_restore_state(struct vcpu *v)
>  {
>      /* TODO: implement it */
> +    if ( !cpu_has_fp )
> +        return;
> +
> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> +
> +    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
> +    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
> +    WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
>  }
> diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
> index 3733d2c..373f156 100644
> --- a/xen/include/asm-arm/arm64/vfp.h
> +++ b/xen/include/asm-arm/arm64/vfp.h
> @@ -3,6 +3,10 @@
>  
>  struct vfp_state
>  {
> +    uint64_t fpregs[64];
> +    uint32_t fpcr;
> +    uint32_t fpexc32_el2;
> +    uint32_t fpsr;
>  };
>  
>  #endif /* _ARM_ARM64_VFP_H */



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:15:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBLzw-0007aR-A0; Thu, 06 Feb 2014 10:15:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBLzu-0007a0-VW
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:15:55 +0000
Received: from [85.158.139.211:61574] by server-11.bemta-5.messagelabs.com id
	B4/B0-23886-AD063F25; Thu, 06 Feb 2014 10:15:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391681753!2054791!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13087 invoked from network); 6 Feb 2014 10:15:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 10:15:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 10:15:52 +0000
Message-Id: <52F36EE70200007800119B13@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 10:15:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1391633972-4433-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52F2A512.30304@citrix.com>
In-Reply-To: <52F2A512.30304@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/AMD: Apply workaround for AMD F16h
 Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 21:54, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 05/02/2014 20:59, Aravind Gopalakrishnan wrote:
>> +        if (smp_processor_id() == 0) {
>> +            pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
>> +            if (pci_val & 0x1f) {
>> +                pci_val &= ~(0x1f);
> 
> 0x1f is by default an int, so you should use a 'u' suffix to make it
> u32 for use as a mask.

This seems pretty pointless a request - conversion from signed to
unsigned types is well defined, even more so when the converted
value is representable by both.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:16:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:16:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBM0X-0007ih-Ob; Thu, 06 Feb 2014 10:16:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1WBM0W-0007iA-8y
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:16:32 +0000
Received: from [85.158.137.68:60524] by server-8.bemta-3.messagelabs.com id
	70/0F-16039-FF063F25; Thu, 06 Feb 2014 10:16:31 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391681790!8334!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7759 invoked from network); 6 Feb 2014 10:16:30 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-6.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 10:16:30 -0000
Received: from [185.25.64.249] (helo=[10.80.2.80])
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1WBM0O-0005qj-Cf; Thu, 06 Feb 2014 10:16:24 +0000
Message-ID: <1391681783.23098.36.camel@kazak.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 06 Feb 2014 10:16:23 +0000
In-Reply-To: <52F36B920200007800119AF4@nat28.tlf.novell.com>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<52F36B920200007800119AF4@nat28.tlf.novell.com>
X-Mailer: Evolution 3.4.4-3 
Mime-Version: 1.0
Cc: "seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	David Hoyer <David.Hoyer@netapp.com>, Keith Moyer <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 10:01 +0000, Jan Beulich wrote:
> >>> On 06.02.14 at 09:36, Ian Campbell <ijc@hellion.org.uk> wrote:
> > (adding xen-devel too) 
> > On Thu, 2014-02-06 at 09:31 +0100, Gerd Hoffmann wrote:
> >> > commit e144bb7af49ca8756b7222a75811f3b85b0bc1f5
> >> > Author: Gerd Hoffmann <kraxel@redhat.com>
> >> > Date:   Mon Jun 3 16:30:18 2013 +0200
> >> > 
> >> >     usb: add xhci support
> >> >    
> >> >     $subject says all.  Support for usb3 streams is not implemented yet,
> >> >     otherwise it is fully functional.  Tested all usb devices supported
> >> >     by qemu (keyboard, storage, usb hubs), except for usb attached scsi
> >> >     in usb3 mode (which needs streams).
> >> >    
> >> >     Tested on qemu only, tagged with QEMU_HARDWARE because of that.
> >> >     Testing with physical hardware to be done.
> >> 
> >> That commit made seabios size (default qemu config, gcc 4.7+) jump from
> >> 128k to 256k in size because the code didn't fit into 128k any more.
> >> 
> >> Most likely the failure isn't related to xhci at all, but to the size
> >> change.
> >> 
> >> You can try to turn off some features (hardware support) you don't need
> >> to make the bios image smaller.  1.7.4 also has a config option to
> >> explicitly set the image size you want.
> >> 
> >> IIRC xen combines seabios and hvmloader into a single 256k image
> >> somehow, so it might make sense to set the seabios image size to
> >> something between 128k and 256k.  But better ask the xen people for
> >> details here.
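For context, the tuning Gerd describes is a SeaBIOS Kconfig matter. The fragment below is a hedged sketch from memory (CONFIG_USB_XHCI and CONFIG_ROM_SIZE are believed to be the relevant symbols in SeaBIOS 1.7.4, but verify against the tree before relying on them):

```text
# Disable hardware support an HVM guest does not need, for example:
CONFIG_USB_XHCI=n
# SeaBIOS 1.7.4+: pin the ROM image size in KiB (0 = smallest that fits)
CONFIG_ROM_SIZE=128
```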
> 
> A patch allowing the size to be other than a power of 2 was rejected.
> And I vaguely seem to recall that you actually participated in that?
> 
> > I think this was fixed in Xen with
> > http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5f2875739beef3a75c7a7e85 
> > 79b6cbcb464e61b3
> > 
> > I suppose this should be backported to the 4.3.x stable branch.
> 
> It was backported already, and is part of 4.3.1.

Oops, gitweb didn't find it.

Debian only has 4.3.0 so that makes sense...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBM8F-0008TN-5j; Thu, 06 Feb 2014 10:24:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBM8D-0008TH-J0
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:24:29 +0000
Received: from [85.158.143.35:31358] by server-2.bemta-4.messagelabs.com id
	84/F6-10891-CD263F25; Thu, 06 Feb 2014 10:24:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391682268!3567013!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23401 invoked from network); 6 Feb 2014 10:24:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 10:24:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 10:24:27 +0000
Message-Id: <52F370EC0200007800119B43@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 10:24:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <Aravind.Gopalakrishnan@amd.com>
References: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: andrew.cooper3@citrix.com, keir@xen.org, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] x86/AMD: Apply workaround for AMD F16h
 Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 22:43, Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com> wrote:
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -477,6 +477,36 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>  		       " all your (PV) guest kernels. ***\n");
>  
>  	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
> +		/*
> +		 * Apply workaround for erratum 792
> +		 * Description:
> +		 * Processor does not ensure DRAM scrub read/write sequence
> +		 * is atomic wrt accesses to CC6 save state area. Therefore
> +		 * if a concurrent scrub read/write access is to same address
> +		 * the entry may appear as if it is not written. This quirk
> +		 * applies to Fam16h models 00h-0Fh
> +		 *
> +		 * See "Revision Guide" for AMD F16h models 00h-0fh,
> +		 * document 51810 rev. 3.04, Nov 2013
> +		 *
> +		 * Equivalent Linux patch link:
> +		 * http://marc.info/?l=linux-kernel&m=139066012217149&w=2 
> +		 */
> +		if (smp_processor_id() == 0) {
> +			u32 pci_val;
> +			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
> +			if (pci_val & 0x1f) {
> +				pci_val &= ~0x1fu;
> +				pci_conf_write32(0, 0, 0x18, 0x3, 0x58, pci_val);
> +			}
> +
> +			pci_val = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
> +			if (pci_val & 0x1) {
> +				pci_val &= ~0x1u;
> +				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, pci_val);
> +			}
> +		}
> +
>  		rdmsrl(MSR_AMD64_LS_CFG, value);
>  		if (!(value & (1 << 15))) {
>  			static bool_t warned;

The patch context even shows what is missing: A diagnostic
message making it possible to know that the workaround was
applied. Of course you don't need two separate messages for
the two parts of the workaround, but indicating in the message
which of them was applied would seem desirable.

Furthermore, I don't see why you would need a new local
variable here at all - there are two suitable variables available
throughout the entire function (l and h).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:28:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMBu-0000Bv-U6; Thu, 06 Feb 2014 10:28:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBMBs-0000Bo-P4
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:28:16 +0000
Received: from [193.109.254.147:20639] by server-6.bemta-14.messagelabs.com id
	0D/7B-03396-0C363F25; Thu, 06 Feb 2014 10:28:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391682494!2423688!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11977 invoked from network); 6 Feb 2014 10:28:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100417610"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 10:28:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	05:28:02 -0500
Message-ID: <1391682481.23098.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 6 Feb 2014 10:28:01 +0000
In-Reply-To: <52F3699E0200007800119AE3@nat28.tlf.novell.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
	<52F3699E0200007800119AE3@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:28:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMBu-0000Bv-U6; Thu, 06 Feb 2014 10:28:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBMBs-0000Bo-P4
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:28:16 +0000
Received: from [193.109.254.147:20639] by server-6.bemta-14.messagelabs.com id
	0D/7B-03396-0C363F25; Thu, 06 Feb 2014 10:28:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391682494!2423688!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11977 invoked from network); 6 Feb 2014 10:28:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100417610"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 10:28:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	05:28:02 -0500
Message-ID: <1391682481.23098.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 6 Feb 2014 10:28:01 +0000
In-Reply-To: <52F3699E0200007800119AE3@nat28.tlf.novell.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
	<52F3699E0200007800119AE3@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 09:53 +0000, Jan Beulich wrote:
> >>> On 05.02.14 at 17:03, Ian Campbell <ian.campbell@citrix.com> wrote:
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
> >      return 0;
> >  }
> >  
> > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> > +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> > +{
> > +    DECLARE_DOMCTL;
> > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > +    domctl.domain = (domid_t)domid;
> > +    domctl.u.cacheflush.start_pfn = start_pfn;
> > +    domctl.u.cacheflush.end_pfn = start_pfn + nr_pfns;
> > +    return do_domctl(xch, &domctl);
> > +}
> 
> I'm confused - both in the overview mail and in domctl.h below
> you state the range to now be inclusive, yet neither here nor
> in the hypervisor changes this seems to actually be the case
> (unless the earlier "rename ..." patches now did more than just
> renaming - I didn't look at them).

Yes, I think I got myself confused.

I've actually now concluded that the start + nr interface should be
pushed down into the domctl layer too -- that seems to be the common
idiom and is less prone to confusion...

> 
> > --- a/tools/libxc/xenctrl.h
> > +++ b/tools/libxc/xenctrl.h
> > @@ -454,7 +454,6 @@ int xc_domain_create(xc_interface *xch,
> >                       uint32_t flags,
> >                       uint32_t *pdomid);
> >  
> > -
> >  /* Functions to produce a dump of a given domain
> >   *  xc_domain_dumpcore - produces a dump to a specified file
> >   *  xc_domain_dumpcore_via_callback - produces a dump, using a specified
> 
> Stray leftover change?

Yes, will remove.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:42:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMP9-0000pY-Hq; Thu, 06 Feb 2014 10:41:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBMP8-0000pT-Jt
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:41:58 +0000
Received: from [85.158.137.68:58541] by server-13.bemta-3.messagelabs.com id
	C6/11-26923-5F663F25; Thu, 06 Feb 2014 10:41:57 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391683315!16742!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19490 invoked from network); 6 Feb 2014 10:41:57 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:41:57 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so2415668qac.22
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 02:41:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=guQOZP57z5/v3yTE+HOPY17z8us/Euulz2tq5m8N9gI=;
	b=KmGJxHeYVTMSyAL8rtFzr+3K+l6FRcOVIdGPc6XDl5vnO2MlmuHL7cE6GJcbtd/F3u
	VTqFBXTm0p/778Yl68Ds0sEGWnrhHQFlEoEebDgaSdvr2Y8kkWD1yc5SZtYubpkT4wpM
	7PsF+sR2tdP4GI+Mk3izu3nMSjWvhCJ/aIHNfa2k6Q7bE2QciCCSDqC27Ue/8+ef2Y6Y
	g9Hfk9mGmtazgGW/KB4O72wygXZ4gzBZQNR+8D2FhauReHQ6IkascenfGxsqDlkVd8Hb
	7aT5gkKSPdU+BozmRPad58Tp5DWkVJVHPpGAGmolAKg6YFCJiGWCKhzJJiW5NZtIlpV4
	suGQ==
X-Gm-Message-State: ALoCoQmBs7WRdwMiUrjLq101obFaMZ05/ygfYxeZT9OYXeJ8qm3ZHISQXSlcaJY3CTdFLOOF83oe
MIME-Version: 1.0
X-Received: by 10.140.23.6 with SMTP id 6mr10436726qgo.17.1391683315659; Thu,
	06 Feb 2014 02:41:55 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Thu, 6 Feb 2014 02:41:55 -0800 (PST)
In-Reply-To: <1391681714.23098.35.camel@kazak.uk.xensource.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<1391681714.23098.35.camel@kazak.uk.xensource.com>
Date: Thu, 6 Feb 2014 16:11:55 +0530
Message-ID: <CAAHg+HhpU2jnnofqJ3izVfLqQMAZMwPn3dxjKhi+RCcKLEE=jQ@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 6 February 2014 15:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 12:58 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds VFP save/restore support for arm64 across context switches.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>
> This should go in for 4.4 -- not context switching floating point
> registers is obviously a big problem. (bit embarrassed that I forgot
> about this...)
>
>> ---
>>  xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/arm64/vfp.h |    4 ++++
>>  2 files changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index 74e6a50..8c1479a 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -1,13 +1,62 @@
>>  #include <xen/sched.h>
>>  #include <asm/processor.h>
>> +#include <asm/cpufeature.h>
>>  #include <asm/vfp.h>
>>
>>  void vfp_save_state(struct vcpu *v)
>>  {
>>      /* TODO: implement it */
>
> You can probably remove this comment, and the one in restore, unless
> there is something still left to do?
Oops, sorry let me remove and repost it.

>
> Actually, I'll do it on commit, so:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
>> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>>  }
>>
>>  void vfp_restore_state(struct vcpu *v)
>>  {
>>      /* TODO: implement it */
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
>>  }
>> diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
>> index 3733d2c..373f156 100644
>> --- a/xen/include/asm-arm/arm64/vfp.h
>> +++ b/xen/include/asm-arm/arm64/vfp.h
>> @@ -3,6 +3,10 @@
>>
>>  struct vfp_state
>>  {
>> +    uint64_t fpregs[64];
>> +    uint32_t fpcr;
>> +    uint32_t fpexc32_el2;
>> +    uint32_t fpsr;
>>  };
>>
>>  #endif /* _ARM_ARM64_VFP_H */
>
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:50:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMXJ-0001Cx-2H; Thu, 06 Feb 2014 10:50:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBMXH-0001Cs-9S
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:50:23 +0000
Received: from [85.158.143.35:34363] by server-2.bemta-4.messagelabs.com id
	02/E1-10891-EE863F25; Thu, 06 Feb 2014 10:50:22 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391683820!3561092!1
X-Originating-IP: [209.85.216.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31113 invoked from network); 6 Feb 2014 10:50:21 -0000
Received: from mail-qa0-f54.google.com (HELO mail-qa0-f54.google.com)
	(209.85.216.54)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:50:21 -0000
Received: by mail-qa0-f54.google.com with SMTP id i13so2473064qae.13
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 02:50:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=eoX7TS9F6DPuO9eUJ2Sk0JgkBM1QqqFD0cGLUI1n9l4=;
	b=VcNymlUfLNPjkc3eXZYLnkvAVKF/grUHyYu4EelFm2qjtZE5DCbNMqtsCqKmCo6Utu
	WGCPJVcot0z7NOwB/2CaqNy3F49VxK6rLZ8LHUZxuwLjYZY98e9pBVGgs+0JYBs5pikS
	jelAOdNhUkCzO3JjUg10LKwwb79Zc4jvRDdPtq4gcDtLwbUlfZ5Up81Zu71qedJQaTPp
	yK+bahvsajM35ZA3KGSuBRxVQ/3ICBn55lWi4UJnL7vxQCU2yCmzqoZ98xCygWlhXUvQ
	jdKej46cR79tz5Q/wdO+5/qr8+Lavu3z7dRi+DU1F8GElr5kXq4bTJWndPik45dS8AL9
	R5Gg==
X-Gm-Message-State: ALoCoQnef7no2xCBGRogU5tD26fs4pBoT/aCLnnGVLsoWWCbLcslO+wnNitSE3nGETir+ZzIZ3oc
MIME-Version: 1.0
X-Received: by 10.224.72.72 with SMTP id l8mr11059847qaj.51.1391683820433;
	Thu, 06 Feb 2014 02:50:20 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Thu, 6 Feb 2014 02:50:20 -0800 (PST)
In-Reply-To: <1391681714.23098.35.camel@kazak.uk.xensource.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<1391681714.23098.35.camel@kazak.uk.xensource.com>
Date: Thu, 6 Feb 2014 16:20:20 +0530
Message-ID: <CAAHg+HiN4hftyW5m62ogD8W_GAMRT9J=QAtZno8iz_4oGMK2zg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 6 February 2014 15:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 12:58 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds VFP save/restore support for arm64 across context switches.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>
> This should go in for 4.4 -- not context switching floating point
> registers is obviously a big problem. (bit embarrassed that I forgot
> about this...)
>
>> ---
>>  xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/arm64/vfp.h |    4 ++++
>>  2 files changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index 74e6a50..8c1479a 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -1,13 +1,62 @@
>>  #include <xen/sched.h>
>>  #include <asm/processor.h>
>> +#include <asm/cpufeature.h>
>>  #include <asm/vfp.h>
>>
>>  void vfp_save_state(struct vcpu *v)
>>  {
>>      /* TODO: implement it */
>
> You can probably remove this comment, and the one in restore, unless
> there is something still left to do?
>
> Actually, I'll do it on commit, so:
Missed out above line. Please remove it during commit.

> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Thanks,
Pranav
>
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
>> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>>  }
>>
>>  void vfp_restore_state(struct vcpu *v)
>>  {
>>      /* TODO: implement it */
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
>>  }
>> diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
>> index 3733d2c..373f156 100644
>> --- a/xen/include/asm-arm/arm64/vfp.h
>> +++ b/xen/include/asm-arm/arm64/vfp.h
>> @@ -3,6 +3,10 @@
>>
>>  struct vfp_state
>>  {
>> +    uint64_t fpregs[64];
>> +    uint32_t fpcr;
>> +    uint32_t fpexc32_el2;
>> +    uint32_t fpsr;
>>  };
>>
>>  #endif /* _ARM_ARM64_VFP_H */
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:50:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMXJ-0001Cx-2H; Thu, 06 Feb 2014 10:50:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBMXH-0001Cs-9S
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:50:23 +0000
Received: from [85.158.143.35:34363] by server-2.bemta-4.messagelabs.com id
	02/E1-10891-EE863F25; Thu, 06 Feb 2014 10:50:22 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391683820!3561092!1
X-Originating-IP: [209.85.216.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31113 invoked from network); 6 Feb 2014 10:50:21 -0000
Received: from mail-qa0-f54.google.com (HELO mail-qa0-f54.google.com)
	(209.85.216.54)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:50:21 -0000
Received: by mail-qa0-f54.google.com with SMTP id i13so2473064qae.13
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 02:50:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=eoX7TS9F6DPuO9eUJ2Sk0JgkBM1QqqFD0cGLUI1n9l4=;
	b=VcNymlUfLNPjkc3eXZYLnkvAVKF/grUHyYu4EelFm2qjtZE5DCbNMqtsCqKmCo6Utu
	WGCPJVcot0z7NOwB/2CaqNy3F49VxK6rLZ8LHUZxuwLjYZY98e9pBVGgs+0JYBs5pikS
	jelAOdNhUkCzO3JjUg10LKwwb79Zc4jvRDdPtq4gcDtLwbUlfZ5Up81Zu71qedJQaTPp
	yK+bahvsajM35ZA3KGSuBRxVQ/3ICBn55lWi4UJnL7vxQCU2yCmzqoZ98xCygWlhXUvQ
	jdKej46cR79tz5Q/wdO+5/qr8+Lavu3z7dRi+DU1F8GElr5kXq4bTJWndPik45dS8AL9
	R5Gg==
X-Gm-Message-State: ALoCoQnef7no2xCBGRogU5tD26fs4pBoT/aCLnnGVLsoWWCbLcslO+wnNitSE3nGETir+ZzIZ3oc
MIME-Version: 1.0
X-Received: by 10.224.72.72 with SMTP id l8mr11059847qaj.51.1391683820433;
	Thu, 06 Feb 2014 02:50:20 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Thu, 6 Feb 2014 02:50:20 -0800 (PST)
In-Reply-To: <1391681714.23098.35.camel@kazak.uk.xensource.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<1391681714.23098.35.camel@kazak.uk.xensource.com>
Date: Thu, 6 Feb 2014 16:20:20 +0530
Message-ID: <CAAHg+HiN4hftyW5m62ogD8W_GAMRT9J=QAtZno8iz_4oGMK2zg@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 6 February 2014 15:45, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 12:58 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds VFP save/restore support for arm64 across context switch.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>
> This should go in for 4.4 -- not context switching floating point
> registers is obviously a big problem. (bit embarrassed that I forgot
> about this...)
>
>> ---
>>  xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>>  xen/include/asm-arm/arm64/vfp.h |    4 ++++
>>  2 files changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index 74e6a50..8c1479a 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -1,13 +1,62 @@
>>  #include <xen/sched.h>
>>  #include <asm/processor.h>
>> +#include <asm/cpufeature.h>
>>  #include <asm/vfp.h>
>>
>>  void vfp_save_state(struct vcpu *v)
>>  {
>>      /* TODO: implement it */
>
> You can probably remove this comment, and the one in restore, unless
> there is something still left to do?
>
> Actually, I'll do it on commit, so:
I missed that line above; please do remove it during commit.

> Acked-by: Ian Campbell <ian.campbell@citrix.com>
Thanks,
Pranav
>
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
>> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>>  }
>>
>>  void vfp_restore_state(struct vcpu *v)
>>  {
>>      /* TODO: implement it */
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
>>  }
>> diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
>> index 3733d2c..373f156 100644
>> --- a/xen/include/asm-arm/arm64/vfp.h
>> +++ b/xen/include/asm-arm/arm64/vfp.h
>> @@ -3,6 +3,10 @@
>>
>>  struct vfp_state
>>  {
>> +    uint64_t fpregs[64];
>> +    uint32_t fpcr;
>> +    uint32_t fpexc32_el2;
>> +    uint32_t fpsr;
>>  };
>>
>>  #endif /* _ARM_ARM64_VFP_H */
>
>
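The register layout the patch relies on can be sanity-checked in plain C. The struct below mirrors the one added to vfp.h; `pair_offset` is a hypothetical helper (not part of the patch) that reproduces the byte offsets used by the `stp`/`ldp` pairs in the inline assembly, where each `stp qN, qN+1, [%0, #16 * N]` stores a 32-byte pair of 128-bit Q registers:

```c
#include <assert.h>
#include <stdint.h>

/* Layout sketch matching the patch: 32 128-bit Q registers stored as
 * 64 uint64_t words (16 bytes per register). */
struct vfp_state {
    uint64_t fpregs[64];   /* q0..q31 */
    uint32_t fpcr;
    uint32_t fpexc32_el2;
    uint32_t fpsr;
};

/* Byte offset used by the inline asm for register pair (qN, qN+1),
 * i.e. the "#16 * N" immediate in "stp qN, qN+1, [%0, #16 * N]". */
static unsigned long pair_offset(unsigned int n)
{
    return 16ul * n;
}
```

The last pair (`q30`, `q31`) lands at byte offset 480 and ends at 512, exactly the size of `fpregs`, so the save area is fully used with no overrun.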

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:57:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:57:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMe4-0001Lz-2S; Thu, 06 Feb 2014 10:57:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBMe2-0001Lu-IY
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:57:22 +0000
Received: from [193.109.254.147:10325] by server-12.bemta-14.messagelabs.com
	id AA/64-17220-19A63F25; Thu, 06 Feb 2014 10:57:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391684240!2444456!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3517 invoked from network); 6 Feb 2014 10:57:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 10:57:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 10:57:20 +0000
Message-Id: <52F3789E0200007800119B83@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 10:57:18 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1391632096-6209-2-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1391632096-6209-2-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3 V3] hvm,
 svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 21:28, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>          *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold register */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: We report that the threshold register is unavailable
>           * for OS use (locked by the BIOS).
> @@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>          vpmu_do_wrmsr(msr, msr_content);
>          break;
>  
> -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> +    case MSR_F10_MC4_MISC1: /* Threshold register */
> +    case MSR_F10_MC4_MISC2:
> +    case MSR_F10_MC4_MISC3:
>          /*
>           * MCA/MCE: Threshold register is reported to be locked, so we ignore
>           * all write accesses. This behaviour matches real HW, so guests should

While both of these changes are fine with the code as it currently
is, they both rely on behavior that is sub-optimal and would hence
need to be changed: Neither should MSR reads blindly read the
hardware MSR as a fallback, nor should MSR writes be blindly
ignored. Yet with removing 0xC000040A from the explicit list of
registers handled, a guest OS accessing this MSR (knowing it
exists on Fam10) would start to see unexpected #GP faults as
soon as the respective default cases would get corrected. And I
assume you agree that correcting these default cases should not
require the author to be particularly knowledgeable about AMD
CPU family differences.
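Jan's concern can be illustrated with a self-contained sketch (the MSR numbers and the `msr_read_intercept` helper are illustrative stand-ins, not Xen's actual code): once the default case refuses unknown MSRs instead of falling through to hardware, any MSR dropped from the explicit list turns into guest-visible breakage.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative MSR numbers only; consult the AMD BKDG and Xen's
 * msr-index.h for the real Fam10h thresholding MSR addresses. */
#define MSR_F10_MC4_MISC1 0xC0000408u
#define MSR_F10_MC4_MISC2 0xC0000409u
#define MSR_F10_MC4_MISC3 0xC000040Au

/*
 * Sketch of a read intercept with a corrected default case: explicitly
 * listed threshold registers read as 0 ("locked by the BIOS"); anything
 * else is refused, which in the hypervisor would become a #GP injected
 * into the guest rather than a silent read of the host MSR.
 */
static bool msr_read_intercept(uint32_t msr, uint64_t *content)
{
    switch ( msr )
    {
    case MSR_F10_MC4_MISC1: /* Threshold registers */
    case MSR_F10_MC4_MISC2:
    case MSR_F10_MC4_MISC3:
        *content = 0;
        return true;
    default:
        return false; /* guest would see #GP */
    }
}
```

With this structure, dropping one of the explicit cases silently converts a benign read-as-zero into a fault for guests that know the MSR exists on their CPU family, which is why the explicit list must stay complete even while the current fall-through behaviour is sub-optimal.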

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:59:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMg4-0001ef-L8; Thu, 06 Feb 2014 10:59:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WBMg3-0001eO-Bp
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 10:59:27 +0000
Received: from [85.158.137.68:45774] by server-6.bemta-3.messagelabs.com id
	79/85-09180-D0B63F25; Thu, 06 Feb 2014 10:59:25 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391684362!22677!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25613 invoked from network); 6 Feb 2014 10:59:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:59:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100422470"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 10:59:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 6 Feb 2014 05:59:21 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WBMak-0006rc-Rw;
	Thu, 06 Feb 2014 10:53:58 +0000
Message-ID: <52F369C6.5020108@eu.citrix.com>
Date: Thu, 6 Feb 2014 10:53:58 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Pranavkumar Sawargaonkar
	<pranavkumar@linaro.org>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<1391681714.23098.35.camel@kazak.uk.xensource.com>
In-Reply-To: <1391681714.23098.35.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/06/2014 10:15 AM, Ian Campbell wrote:
> On Thu, 2014-02-06 at 12:58 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch adds VFP save/restore support for arm64 across context switch.
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> This should go in for 4.4 -- not context switching floating point
> registers is obviously a big problem. (bit embarrassed that I forgot
> about this...)

Yes, absolutely.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
>> ---
>>   xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>>   xen/include/asm-arm/arm64/vfp.h |    4 ++++
>>   2 files changed, 53 insertions(+)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index 74e6a50..8c1479a 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -1,13 +1,62 @@
>>   #include <xen/sched.h>
>>   #include <asm/processor.h>
>> +#include <asm/cpufeature.h>
>>   #include <asm/vfp.h>
>>   
>>   void vfp_save_state(struct vcpu *v)
>>   {
>>       /* TODO: implement it */
> You can probably remove this comment, and the one in restore, unless
> there is something still left to do?
>
> Actually, I'll do it on commit, so:
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
>> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>>   }
>>   
>>   void vfp_restore_state(struct vcpu *v)
>>   {
>>       /* TODO: implement it */
>> +    if ( !cpu_has_fp )
>> +        return;
>> +
>> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
>> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
>> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
>> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
>> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
>> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
>> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
>> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
>> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
>> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
>> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
>> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
>> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
>> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
>> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
>> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +
>> +    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
>> +    WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
>>   }
>> diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
>> index 3733d2c..373f156 100644
>> --- a/xen/include/asm-arm/arm64/vfp.h
>> +++ b/xen/include/asm-arm/arm64/vfp.h
>> @@ -3,6 +3,10 @@
>>   
>>   struct vfp_state
>>   {
>> +    uint64_t fpregs[64];
>> +    uint32_t fpcr;
>> +    uint32_t fpexc32_el2;
>> +    uint32_t fpsr;
>>   };
>>   
>>   #endif /* _ARM_ARM64_VFP_H */
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 10:59:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 10:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMgR-0001i2-QN; Thu, 06 Feb 2014 10:59:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WBMgP-0001hj-ND
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 10:59:49 +0000
Received: from [193.109.254.147:10937] by server-8.bemta-14.messagelabs.com id
	7D/CA-18529-42B63F25; Thu, 06 Feb 2014 10:59:48 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391684387!2434463!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11620 invoked from network); 6 Feb 2014 10:59:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 10:59:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100422527"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 10:59:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 6 Feb 2014 05:59:46 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WBMZT-0006py-QE;
	Thu, 06 Feb 2014 10:52:39 +0000
Message-ID: <52F36977.6030106@eu.citrix.com>
Date: Thu, 6 Feb 2014 10:52:39 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>	<52F24642.5000300@eu.citrix.com>
	<21234.21167.684304.970488@mariner.uk.xensource.com>
In-Reply-To: <21234.21167.684304.970488@mariner.uk.xensource.com>
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 03:03 PM, Ian Jackson wrote:
> George Dunlap writes ("Re: [PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
>> On 02/03/2014 04:14 PM, Ian Jackson wrote:
>>> This is the latest version of my libxl event fixes apropos of Jim's
>>> libvirt testing.
>> Did you have any opinions on the suitability of this for 4.4?
> Sorry, I should have made that clear in the body text rather than just
> the subject line.
>
> I think this needs a freeze exception on the following grounds:
>
>   * There is little change visible to non-eventy/thready callers and
>     the risk of new races there is limited; basic functional testing
>     ought to catch those errors.
>
>   * The most prominent eventy/thready caller we are currently aware of
>     is libvirt.  Without these changes it is nearly impossible to have
>     a reliable libvirt.

Thanks.

I think libvirt support for libxl is really important functionality 
from a strategic perspective: solid support should make it much easier 
to integrate with other projects such as OpenStack and CloudStack, as 
well as (in theory) other tools built on top of libvirt.

So I'm inclined to consider this a blocker*; I think we should accept it 
and delay the release until we feel comfortable that it has been 
sufficiently tested.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

* "blocker" is never an absolute specification; there is almost always a 
point where we would say, "we're just going to have to release without 
this".  Specifying a feature or bug as a blocker just means, "At this 
point, we are still willing to slip the release if necessary to include 
this feature / bug fix."


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 11:00:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 11:00:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMgz-0001my-Vt; Thu, 06 Feb 2014 11:00:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBMgh-0001lw-V8
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 11:00:22 +0000
Received: from [193.109.254.147:57386] by server-6.bemta-14.messagelabs.com id
	22/66-03396-73B63F25; Thu, 06 Feb 2014 11:00:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391684405!2445463!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32437 invoked from network); 6 Feb 2014 11:00:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 11:00:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 11:00:04 +0000
Message-Id: <52F379440200007800119B9E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 11:00:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <chegger@amazon.de>,
	"Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>,
	<suravee.suthikulpanit@amd.com>,<jinsong.liu@intel.com>,
	<xen-devel@lists.xen.org>, <boris.ostrovsky@oracle.com>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1391632096-6209-3-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1391632096-6209-3-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: Re: [Xen-devel] [PATCH 2/3 V3] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 05.02.14 at 21:28, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> --- a/xen/arch/x86/cpu/mcheck/vmce.h
> +++ b/xen/arch/x86/cpu/mcheck/vmce.h
> @@ -8,6 +8,9 @@ int vmce_init(struct cpuinfo_x86 *c);
>  #define dom0_vmce_enabled() (dom0 && dom0->max_vcpus && dom0->vcpu[0] \
>          && guest_enabled_event(dom0->vcpu[0], VIRQ_MCA))
>  
> +#define AMD_MC4_MISC1_INCLUDE_MASK          0x13

As said before, this ought to remain 3, and has nothing to do
with AMD. No need to have a variable for this at all.

> +#define AMD_MC4_EXTENDED_MISC_INCLUDE_MASK  0xc0000000

For this one I now think that the better mask would be
-MSR_IA32_MC0_CTL (i.e. 0xfffffc00), which would also
eliminate the need for a separate manifest constant.
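For concreteness, a minimal standalone sketch of the suggested derivation
(not Xen code; the MSR_IA32_MC0_CTL value of 0x400 is assumed from Xen's
msr-index.h, and the helper name is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Value assumed from Xen's msr-index.h: the MC bank MSRs start at 0x400. */
#define MSR_IA32_MC0_CTL 0x400

/* Two's-complement negation of the bank base yields the mask with all
 * bits above the per-bank MSR index range set: -0x400 == 0xfffffc00. */
static uint32_t mc_misc_mask(void)
{
    return (uint32_t)-MSR_IA32_MC0_CTL;
}
```

This shows why no separate manifest constant is needed: the mask falls out
of the existing definition.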

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 11:16:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 11:16:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMvt-0002SS-4d; Thu, 06 Feb 2014 11:15:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>)
	id 1WBMvr-0002SK-M7; Thu, 06 Feb 2014 11:15:47 +0000
Received: from [85.158.143.35:62337] by server-3.bemta-4.messagelabs.com id
	9D/2B-11539-2EE63F25; Thu, 06 Feb 2014 11:15:46 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391685341!3600406!1
X-Originating-IP: [74.125.82.45]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8459 invoked from network); 6 Feb 2014 11:15:42 -0000
Received: from mail-wg0-f45.google.com (HELO mail-wg0-f45.google.com)
	(74.125.82.45)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 11:15:42 -0000
Received: by mail-wg0-f45.google.com with SMTP id n12so1148411wgh.0
	for <multiple recipients>; Thu, 06 Feb 2014 03:15:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=tWfx+hLlkvxWNgFPg9BXDw3USJ/gmN2xeV97+hLEYc8=;
	b=ZPdXleKA3TLfRZoqyL4wZgL/QYOCnA3f1ZiGatPpxDwT6/lJiAQPpkpOMQxFjT3FLH
	woO5+21KB/3It78R+5tCaAJhBP2NTcfwurDvDZbBvAC0t7yv7sgtA6SdytjnsvVFSu/k
	G1gl+Ir9nxfzm6LZtLIwT6NghoKQzu8iVbtpXmQYgUL0W/YjTbc8u+KAJYZfH1MgC9sj
	idDUzYXmkzeKYPdvteL7wDthnPjTMjLpc9/UGzbnJsLRYUfnkkpMJat2KxWHWcwXS5zg
	ONSXrkEbiJqdx+fR6QTaW0NF8Hk0U9Pnk0JnhGaQ5k8Zj7Gxga2bvmGNjD7RKSRxhZ4A
	K43w==
MIME-Version: 1.0
X-Received: by 10.180.185.197 with SMTP id fe5mr20783079wic.56.1391685341063; 
	Thu, 06 Feb 2014 03:15:41 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 6 Feb 2014 03:15:40 -0800 (PST)
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
Date: Thu, 6 Feb 2014 11:15:40 +0000
X-Google-Sender-Auth: zbmLP1_IZa_gNqVy2VQhCv5jGkQ
Message-ID: <CAFLBxZYqfm2ZJB-dR9gMa1+j3J0urvZv_u0pDicPFgoRCUQu-A@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	mirageos-devel@lists.xenproject.org,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Lars Kurth <lars.kurth@xen.org>, Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Paul Durrant <paul.durrant@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 5, 2014 at 2:09 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> George:
>
>       * Introducing PowerClamp-like driver for Xen
>
>         I don't think this has been done?

Not that I'm aware of.

> George:
>
>       * Allowing guests to boot with a passed-through GPU as the primary
>         display
>
>         This seems like a bit of a rathole for a GSoC student to me...

Possibly.  I thought it might be a more attractive project, but
maybe that's a bad thing...

>       * Advanced Scheduling Parameters
>
>         Still to do?

Yes.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 11:17:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 11:17:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMx6-0002X0-Pm; Thu, 06 Feb 2014 11:17:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBMx5-0002Wu-Ui
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 11:17:04 +0000
Received: from [85.158.139.211:42197] by server-5.bemta-5.messagelabs.com id
	20/79-32749-F2F63F25; Thu, 06 Feb 2014 11:17:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391685422!2074902!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3093 invoked from network); 6 Feb 2014 11:17:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 11:17:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 11:17:01 +0000
Message-Id: <52F37D3C0200007800119BDF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 11:17:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Justin Weaver" <jtweaver@hawaii.edu>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
In-Reply-To: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Marcus.Granado@eu.citrix.com, george.dunlap@eu.citrix.com,
	dario.faggioli@citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.02.14 at 09:58, Justin Weaver <jtweaver@hawaii.edu> wrote:
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -85,8 +85,7 @@
>   * to a small value, and a fixed credit is added to everyone.
>   *
>   * The plan is for all cores that share an L2 will share the same
> - * runqueue.  At the moment, there is one global runqueue for all
> - * cores.
> + * runqueue. 

If this is the intention, then ...

> @@ -1962,12 +1963,28 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>      /* Figure out which runqueue to put it in */
>      rqi = 0;
>  
> -    /* Figure out which runqueue to put it in */
>      /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
>      if ( cpu == 0 )
>          rqi = 0;
>      else
> -        rqi = cpu_to_socket(cpu);
> +    {
> +        cpu_socket = cpu_to_socket(cpu);
> +        cpu0_socket = cpu_to_socket(0);
> +
> +        /* If cpu is on the same socket as CPU 0, put it with CPU 0 on run queue 0 */
> +        if ( cpu_socket == cpu0_socket )
> +            rqi = 0;
> +        else            
> +            /* If cpu is on socket 0, assign it to a run queue based on the
> +             * socket CPU 0 is actually on */
> +            if ( cpu_socket == 0 )
> +                rqi = cpu0_socket;
> +
> +            /* If cpu is NOT on socket 0, just assign it to a run queue based on
> +             * its own socket */
> +            else
> +                rqi = cpu_socket;
> +    }

... this is too simplistic: Whether the L2 is shared by all cores on a
socket should be determined, not assumed.

Apart from that keeping the CPU0 special case at the top is pointless
with the cpu0_socket special casing.

As to coding style: please fix your comments and get the indentation
of the if/else sequence above right (i.e. either use "else if" with no
added indentation, or enclose the inner if/else in curly braces; I'd
personally prefer the former).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 11:17:06 2014
From: "Jan Beulich" <JBeulich@suse.com>
To: "Justin Weaver" <jtweaver@hawaii.edu>
Cc: Marcus.Granado@eu.citrix.com, george.dunlap@eu.citrix.com,
	dario.faggioli@citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Date: Thu, 06 Feb 2014 11:17:00 +0000
Message-Id: <52F37D3C0200007800119BDF@nat28.tlf.novell.com>
In-Reply-To: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2

>>> On 06.02.14 at 09:58, Justin Weaver <jtweaver@hawaii.edu> wrote:
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -85,8 +85,7 @@
>   * to a small value, and a fixed credit is added to everyone.
>   *
>   * The plan is for all cores that share an L2 will share the same
> - * runqueue.  At the moment, there is one global runqueue for all
> - * cores.
> + * runqueue. 

If this is the intention, then ...

> @@ -1962,12 +1963,28 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>      /* Figure out which runqueue to put it in */
>      rqi = 0;
>  
> -    /* Figure out which runqueue to put it in */
>      /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
>      if ( cpu == 0 )
>          rqi = 0;
>      else
> -        rqi = cpu_to_socket(cpu);
> +    {
> +        cpu_socket = cpu_to_socket(cpu);
> +        cpu0_socket = cpu_to_socket(0);
> +
> +        /* If cpu is on the same socket as CPU 0, put it with CPU 0 on run queue 0 */
> +        if ( cpu_socket == cpu0_socket )
> +            rqi = 0;
> +        else            
> +            /* If cpu is on socket 0, assign it to a run queue based on the
> +             * socket CPU 0 is actually on */
> +            if ( cpu_socket == 0 )
> +                rqi = cpu0_socket;
> +
> +            /* If cpu is NOT on socket 0, just assign it to a run queue based on
> +             * its own socket */
> +            else
> +                rqi = cpu_socket;
> +    }

... this is too simplistic: Whether the L2 is shared by all cores on a
socket should be determined, not assumed.

Apart from that keeping the CPU0 special case at the top is pointless
with the cpu0_socket special casing.

As to coding style: Please fix your comments and get the indentation
of the if/else sequence above right (i.e. either use "else if" with no
added indentation, or enclose the inner if/else in curly braces; I'd
personally prefer the former).
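[Editor's note: a minimal sketch of the restructuring being suggested, using a hypothetical helper rather than the actual `init_pcpu()` code. It shows the flat "else if" chain at one indentation level, and also why the `cpu == 0` special case at the top becomes redundant: CPU 0's socket is trivially equal to `cpu0_socket`, so the first branch already yields runqueue 0.]

```c
#include <assert.h>

/* Sketch (not the committed Xen code) of the runqueue selection with a
 * flat "else if" chain.  cpu_socket is the socket of the CPU being set
 * up; cpu0_socket is the socket CPU 0 actually sits on. */
static int pick_runqueue(int cpu_socket, int cpu0_socket)
{
    int rqi;

    /* CPUs sharing CPU 0's socket join CPU 0 on runqueue 0. */
    if ( cpu_socket == cpu0_socket )
        rqi = 0;
    /* CPUs on socket 0 take the runqueue named after CPU 0's socket. */
    else if ( cpu_socket == 0 )
        rqi = cpu0_socket;
    /* Everyone else uses the runqueue matching their own socket. */
    else
        rqi = cpu_socket;

    return rqi;
}
```

Calling this with CPU 0's own socket (e.g. `pick_runqueue(1, 1)`) returns 0 via the first branch, which is why no separate `cpu == 0` check is needed.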

Jan



From xen-devel-bounces@lists.xen.org Thu Feb 06 11:19:20 2014
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 11:18:59 +0000
Message-ID: <alpine.DEB.2.02.1402061118290.4373@kaball.uk.xensource.com>
In-Reply-To: <1391680396.23098.26.camel@kazak.uk.xensource.com>
References: <osstest-24734-mainreport@xen.org>
	<1391680396.23098.26.camel@kazak.uk.xensource.com>
Subject: Re: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED

On Thu, 6 Feb 2014, Ian Campbell wrote:
> On Wed, 2014-02-05 at 18:55 +0000, xen.org wrote:
> > flight 24734 linux-arm-xen real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24734/
> >
> > Failures :-/ but no regressions.
> >
> > Tests which did not succeed, but are not blocking:
> >  test-armhf-armhf-xl           9 guest-start                  fail   never pass
>
> Stefano, please can you cherry-pick:
>
>         commit e17b2f114cba5420fb28fa4bfead57d406a16533
>         Author: Ian Campbell <ian.campbell@citrix.com>
>         Date:   Mon Jan 20 11:30:41 2014 +0000
>
>             xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t)
>
>             The use of phys_to_machine and machine_to_phys in the phys<=>bus conversion
>             causes us to lose the top bits of the DMA address if the size of a DMA addr
>
>             This can happen in practice on ARM where foreign pages can be above 4GB even
>             though the local kernel does not have LPAE page tables enabled (which is
>             totally reasonable if the guest does not itself have >4GB of RAM). In this
>             case the kernel still maps the foreign pages at a phys addr below 4G (as it
>             must) but the resulting DMA address (returned by the grant map operation) is
>             much higher.
>
>             This is analogous to a hardware device which has its view of RAM mapped up
>             high for some reason.
>
>             This patch makes I/O to foreign pages (specifically blkif) work on 32-bit ARM
>             systems with more than 4GB of RAM.
>
>             Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>             Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> from mainline into this tree.
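[Editor's note: the truncation the commit message describes can be illustrated with a small standalone sketch. The typedefs below are illustrative stand-ins, not the kernel's definitions: on a non-LPAE 32-bit kernel a physical address is 32 bits wide, while a grant-mapped foreign page can have a bus (DMA) address above 4GB, so round-tripping through the narrower type silently drops the top bits.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative widths: bus addresses may exceed 4GB even when the
 * kernel's physical addresses are only 32 bits (no LPAE). */
typedef uint64_t dma_addr_t;
typedef uint32_t phys_addr_t;

/* Converting a wide DMA address through the narrow physical type and
 * back loses the top 32 bits -- the bug class the commit fixes. */
static dma_addr_t roundtrip(dma_addr_t dma)
{
    phys_addr_t phys = (phys_addr_t)dma;  /* top bits dropped here */
    return (dma_addr_t)phys;
}
```

For example, a bus address of 0x123456780 (above 4GB) comes back as 0x23456780, while any address below 4GB survives unchanged.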

I merged xentip/stable/for-linus-3.14 into linux-arm-xen.


> Thanks,
> Ian.
>
> >
> > version targeted for testing:
> >  linux                518e624ddfaef545408c19c30fff31bc64d6b346
> > baseline version:
> >  linux                d264bde089ceea20640d6d4472a0dcade9d2e199
> >
> > ------------------------------------------------------------
> > People who touched revisions under test:
> >   "Eric W. Biederman" <ebiederm@xmission.com>
> >   Aaro Koskinen <aaro.koskinen@iki.fi>
> >   Aaron Brown <aaron.f.brown@intel.com>
> >   Abhilash Kesavan <a.kesavan@samsung.com>
> >   Alan Cox <alan@linux.intel.com>
> >   Alex Deucher <alexander.deucher@amd.com>
> >   Alexander Mezin <mezin.alexander@gmail.com>
> >   Alexander van Heukelum <heukelum@fastmail.fm>
> >   Anatolij Gustschin <agust@denx.de>
> >   Andre Przywara <andre.przywara@linaro.org>
> >   Andreas Reis <andreas.reis@gmail.com>
> >   Andreas Rohner <andreas.rohner@gmx.net>
> >   Andrew Bresticker <abrestic@chromium.org>
> >   Andrew Jones <drjones@redhat.com>
> >   Andrew Morton <akpm@linux-foundation.org>
> >   Andy Lutomirski <luto@amacapital.net>
> >   Anton Blanchard <anton@samba.org>
> >   Anton Vorontsov <anton@enomsg.org>
> >   Antonio Quartulli <antonio@meshcoding.com>
> >   Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >   Ariel Elior <ariele@broadcom.com>
> >   Arron Wang <arron.wang@intel.com>
> >   Austin Boyle <boyle.austin@gmail.com>
> >   Ben Dooks <ben.dooks@codethink.co.uk>
> >   Ben Myers <bpm@sgi.com>
> >   Ben Skeggs <bskeggs@redhat.com>
> >   Ben Widawsky <ben@bwidawsk.net>
> >   Benjamin Herrenschmidt <benh@kernel.crashing.org>
> >   Betty Dall <betty.dall@hp.com>
> >   Bjorn Helgaas <bhelgaas@google.com>
> >   Bjørn Mork <bjorn@mork.no>
> >   Bob Peterson <rpeterso@redhat.com>
> >   Borislav Petkov <bp@suse.de>
> >   Brian W Hart <hartb@linux.vnet.ibm.com>
> >   Bruce Allan <bruce.w.allan@intel.com>
> >   Bryan Wu <cooloney@gmail.com>
> >   Catalin Marinas <catalin.marinas@arm.com>
> >   Christian Engelmayer <cengelma@gmx.at>
> >   Christian König <christian.koenig@amd.com>
> >   Christoph Paasch <christoph.paasch@uclouvain.be>
> >   Cong Wang <xiyou.wangcong@gmail.com>
> >   Curt Brune <curt@cumulusnetworks.com>
> >   Cédric Le Goater <clg@fr.ibm.com>
> >   Dan Carpenter <dan.carpenter@oracle.com>
> >   Dan Williams <dcbw@redhat.com>
> >   Daniel Borkmann <dborkman@redhat.com>
> >   Daniel Lezcano <daniel.lezcano@linaro.org>
> >   Daniel Vetter <daniel.vetter@ffwll.ch>
> >   Dave Airlie <airlied@redhat.com>
> >   Dave Ertman <davidx.m.ertman@intel.com>
> >   Dave Kleikamp <dave.kleikamp@oracle.com>
> >   David Ertman <davidx.m.ertman@intel.com>
> >   David Gibson <david@gibson.dropbear.id.au>
> >   David S. Miller <davem@davemloft.net>
> >   Dimitris Michailidis <dm@chelsio.com>
> >   Ding Tianhong <dingtianhong@huawei.com>
> >   Dirk Brandewie <dirk.j.brandewie@intel.com>
> >   Dmitry Kravkov <dmitry@broadcom.com>
> >   Dmitry Torokhov <dmitry.torokhov@gmail.com>
> >   Don Skidmore <donald.c.skidmore@intel.com>
> >   Emmanuel Grumbach <emmanuel.grumbach@intel.com>
> >   Eric Dumazet <edumazet@google.com>
> >   Eric Whitney <enwlinux@gmail.com>
> >   Erik Hugne <erik.hugne@ericsson.com>
> >   Fabio Estevam <fabio.estevam@freescale.com>
> >   Fan Du <fan.du@windriver.com>
> >   Felix Fietkau <nbd@openwrt.org>
> >   Flavio Leitner <fbl@redhat.com>
> >   Florian Fainelli <f.fainelli@gmail.com>
> >   Florian Westphal <fw@strlen.de>
> >   Frank Li <Frank.Li@freescale.com>
> >   Gao feng <gaofeng@cn.fujitsu.com>
> >   Gavin Shan <shangw@linux.vnet.ibm.com>
> >   Geert Uytterhoeven <geert+renesas@linux-m68k.org>
> >   Gerhard Sittig <gsi@denx.de>
> >   Gerrit Renker <gerrit@erg.abdn.ac.uk>
> >   Grant Likely <grant.likely@linaro.org>
> >   Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> >   Gregor Beck <gbeck@sernet.de>
> >   Guenter Roeck <linux@roeck-us.net>
> >   Gustavo Padovan <gustavo.padovan@collabora.co.uk>
> >   H. Nikolaus Schaller <hns@goldelico.com>
> >   H. Peter Anvin <hpa@linux.intel.com>
> >   H. Peter Anvin <hpa@zytor.com>
> >   Haiyang Zhang <haiyangz@microsoft.com>
> >   Hangbin Liu <liuhangbin@gmail.com>
> >   Hannes Frederic Sowa <hannes@stressinduktion.org>
> >   Hariprasad Shenai <hariprasad@chelsio.com>
> >   Heiko Carstens <heiko.carstens@de.ibm.com>
> >   Heiko Stuebner <heiko@sntech.de>
> >   Helge Deller <deller@gmx.de>
> >   Helmut Schaa <helmut.schaa@googlemail.com>
> >   Herbert Xu <herbert@gondor.apana.org.au>
> >   Huacai Chen <chenhc@lemote.com>
> >   Hugh Dickins <hughd@google.com>
> >   Ilia Mirkin <imirkin@alum.mit.edu>
> >   Ingo Molnar <mingo@kernel.org>
> >   Ivan Vecera <ivecera@redhat.com>
> >   Jamal Hadi Salim <jhs@mojatatu.com>
> >   James Hogan <james.hogan@imgtec.com>
> >   Jan Kara <jack@suse.cz>
> >   Jan Kiszka <jan.kiszka@siemens.com>
> >   Jani Nikula <jani.nikula@intel.com>
> >   Jason Baron <jbaron@akamai.com>
> >   Jason Wang <jasowang@redhat.com>
> >   Javier Lopez <jlopex@cozybit.com>
> >   Jay Vosburgh <fubar@us.ibm.com>
> >   Jean Delvare <khali@linux-fr.org>
> >   Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> >   Jeff Layton <jlayton@redhat.com>
> >   Jens Axboe <axboe@kernel.dk>
> >   Jesper Dangaard Brouer <brouer@redhat.com>
> >   Jesse Barnes <jbarnes@virtuousgeek.org>
> >   Jiang Liu <jiang.liu@linux.intel.com>
> >   Jie Liu <jeff.liu@oracle.com>
> >   Jiri Pirko <jiri@resnulli.us>
> >   Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
> >   Johan Hedberg <johan.hedberg@intel.com>
> >   Johannes Berg <johannes.berg@intel.com>
> >   John Crispin <blogic@openwrt.org>
> >   John David Anglin <dave.anglin@bell.net>
> >   John Fastabend <john.r.fastabend@intel.com.com>
> >   John Fastabend <john.r.fastabend@intel.com>
> >   John Stultz <john.stultz@linaro.org>
> >   John W. Linville <linville@tuxdriver.com>
> >   Jon Maloy <jon.maloy@ericsson.com>
> >   Jonghwa Lee <jonghwa3.lee@samsung.com>
> >   Josh Boyer <jwboyer@fedoraproject.org>
> >   Julian Anastasov <ja@ssi.bg>
> >   Kelly Doran <kel.p.doran@gmail.com>
> >   Kirill Tkhai <tkhai@yandex.ru>
> >   Krzysztof Hałasa <khalasa@piap.pl>
> >   Krzysztof Kozlowski <k.kozlowski@samsung.com>
> >   Kumar Sanghvi <kumaras@chelsio.com>
> >   Lan Tianyu <tianyu.lan@intel.com>
> >   Larry Finger <Larry.Finger@lwfinger.net>
> >   Laura Abbott <lauraa@codeaurora.org>
> >   Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> >   Leigh Brown <leigh@solinno.co.uk>
> >   Li RongQing <roy.qing.li@gmail.com>
> >   Linus Torvalds <torvalds@linux-foundation.org>
> >   Liu, Chuansheng <chuansheng.liu@intel.com>
> >   Manish Chopra <manish.chopra@qlogic.com>
> >   Marcel Holtmann <marcel@holtmann.org>
> >   Marcelo Tosatti <mtosatti@redhat.com>
> >   Marco Piazza <mpiazza@gmail.com>
> >   Marek Lindner <mareklindner@neomailbox.ch>
> >   Marek Olšák <marek.olsak@amd.com>
> >   Mark Rutland <mark.rutland@arm.com>
> >   Martin Schwidefsky <schwidefsky@de.ibm.com>
> >   Mathy Vanhoef <vanhoefm@gmail.com>
> >   Matteo Facchinetti <matteo.facchinetti@sirius-es.it>
> >   Mel Gorman <mgorman@suse.de>
> >   Michael Chan <mchan@broadcom.com>
> >   Michael Neuling <mikey@neuling.org>
> >   Michael S. Tsirkin <mst@redhat.com>
> >   Michal Hocko <mhocko@suse.cz>
> >   Michal Kalderon <michals@broadcom.com>
> >   Michal Schmidt <mschmidt@redhat.com>
> >   Michal Simek <michal.simek@xilinx.com>
> >   Mika Westerberg <mika.westerberg@linux.intel.com>
> >   Mike Turquette <mturquette@linaro.org>
> >   Mikulas Patocka <mpatocka@redhat.com>
> >   Milo Kim <milo.kim@ti.com>
> >   Ming Lei <ming.lei@canonical.com>
> >   Ming Lei <tom.leiming@gmail.com>
> >   Mugunthan V N <mugunthanvnm@ti.com>
> >   Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> >   Neal Cardwell <ncardwell@google.com>
> >   Neil Horman <nhorman@tuxdriver.com>
> >   NeilBrown <neilb@suse.de>
> >   Nicolas Schichan <nschichan@freebox.fr>
> >   Nithin Nayak Sujir <nsujir@broadcom.com>
> >   Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
> >   Octavian Purdila <octavian.purdila@intel.com>
> >   Oleg Nesterov <oleg@redhat.com>
> >   Olof Johansson <olof@lixom.net>
> >   Oren Givon <oren.givon@intel.com>
> >   Pablo Neira Ayuso <pablo@netfilter.org>
> >   Paolo Bonzini <pbonzini@redhat.com>
> >   Paul Durrant <paul.durrant@citrix.com>
> >   Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >   Paulo Zanoni <paulo.r.zanoni@intel.com>
> >   Pekka Enberg <penberg@kernel.org>
> >   Peter Korsgaard <peter@korsgaard.com>
> >   Peter Zijlstra <peterz@infradead.org>
> >   Phil Schmitt <phillip.j.schmitt@intel.com>
> >   Qais Yousef <qais.yousef@imgtec.com>
> >   Qingshuai Tian <qingshuai.tian@intel.com>
> >   Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >   Rafał Miłecki <zajec5@gmail.com>
> >   Rajesh B Prathipati <rprathip@linux.vnet.ibm.com>
> >   Richard Cochran <richardcochran@gmail.com>
> >   Richard Weinberger <richard@nod.at>
> >   Rik van Riel <riel@redhat.com>
> >   Rob Herring <rob.herring@calxeda.com>
> >   Rob Herring <robh@kernel.org>
> >   Robert Richter <rric@kernel.org>
> >   Russell King <rmk+kernel@arm.linux.org.uk>
> >   Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
> >   Sachin Kamat <sachin.kamat@linaro.org>
> >   Sachin Prabhu <sprabhu@redhat.com>
> >   Salva Peiró <speiro@ai2.upv.es>
> >   Samuel Ortiz <sameo@linux.intel.com>
> >   Santosh Shilimkar <santosh.shilimkar@ti.com>
> >   Sasha Levin <sasha.levin@oracle.com>
> >   Sathya Perla <sathya.perla@emulex.com>
> >   Scott Feldman <sfeldma@cumulusnetworks.com>
> >   Sebastian Ott <sebott@linux.vnet.ibm.com>
> >   Serge E. Hallyn <serge.hallyn@ubuntu.com>
> >   Serge Hallyn <serge.hallyn@canonical.com>
> >   Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
> >   Seung-Woo Kim <sw0312.kim@samsung.com>
> >   Shahed Shaikh <shahed.shaikh@qlogic.com>
> >   Shirish Pargaonkar <spargaonkar@suse.com>
> >   Shuah Khan <shuah.kh@samsung.com>
> >   Simon Guinot <sguinot@lacie.com>
> >   Simon Horman <horms+renesas@verge.net.au>
> >   Simon Horman <horms@verge.net.au>
> >   Simon Wunderlich <sw@simonwunderlich.de>
> >   Soren Brinkmann <soren.brinkmann@xilinx.com>
> >   Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >   Stephen Boyd <sboyd@codeaurora.org>
> >   Stephen Warren <swarren@nvidia.com>
> >   Steve Capper <steve.capper@linaro.org>
> >   Steve French <smfrench@gmail.com>
> >   Steven Rostedt <rostedt@goodmis.org>
> >   Steven Whitehouse <swhiteho@redhat.com>
> >   Sudeep Holla <sudeep.holla@arm.com>
> >   Sujith Manoharan <c_manoha@qca.qualcomm.com>
> >   Suresh Reddy <suresh.reddy@emulex.com>
> >   Taras Kondratiuk <taras.kondratiuk@linaro.org>
> >   Tejun Heo <tj@kernel.org>
> >   Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> >   Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
> >   Thomas Gleixner <tglx@linutronix.de>
> >   Timo Teräs <timo.teras@iki.fi>
> >   Tomasz Figa <t.figa@samsung.com>
> >   Tony Lindgren <tony@atomide.com>
> >   Toshi Kani <toshi.kani@hp.com>
> >   Ujjal Roy <royujjal@gmail.com>
> >   Vasundhara Volam <vasundhara.volam@emulex.com>
> >   Ville Syrjälä <ville.syrjala@linux.intel.com>
> >   Vince Bridgers <vbridgers2013@gmail.com>
> >   Viresh Kumar <viresh.kumar@linaro.org>
> >   Vivek Goyal <vgoyal@redhat.com>
> >   Vlad Yasevich <vyasevich@gmail.com>
> >   Vladimir Davydov <vdavydov@parallels.com>
> >   Vlastimil Babka <vbabka@suse.cz>
> >   Wang Weidong <wangweidong1@huawei.com>
> >   Wei Liu <wei.liu2@citrix.com>
> >   Wei Yongjun <yongjun_wei@trendmicro.com.cn>
> >   Wei-Chun Chao <weichunc@plumgrid.com>
> >   Wenliang Fan <fanwlexca@gmail.com>
> >   Will Deacon <will.deacon@arm.com>
> >   Wolfram Sang <wsa@the-dreams.de>
> >   Yaniv Rosner <yanivr@broadcom.com>
> >   Yasushi Asano <yasushi.asano@jp.fujitsu.com>
> >   Yijing Wang <wangyijing@huawei.com>
> >   Ying Xue <ying.xue@windriver.com>
> >   Yuval Mintz <yuvalmin@broadcom.com>
> > ------------------------------------------------------------
> >
> > jobs:
> >  build-armhf                                                  pass
> >  build-armhf-pvops                                            pass
> >  test-armhf-armhf-xl                                          fail
> >
> >
> > ------------------------------------------------------------
> > sg-report-flight on woking.cam.xci-test.com
> > logs: /home/xc_osstest/logs
> > images: /home/xc_osstest/images
> >
> > Logs, config files, etc. are available at
> >     http://www.chiark.greenend.org.uk/~xensrcts/logs
> >
> > Test harness code can be found at
> >     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
> >
> >
> > Pushing revision :
> >=20
> > + branch=linux-arm-xen
> > + revision=518e624ddfaef545408c19c30fff31bc64d6b346
> > + . cri-lock-repos
> > ++ . cri-common
> > +++ . cri-getconfig
> > +++ umask 002
> > +++ getconfig Repos
> > +++ perl -e '
> >                 use Osstest;
> >                 readglobalconfig();
> >                 print $c{"Repos"} or die $!;
> >         '
> > ++ repos=/export/home/osstest/repos
> > ++ repos_lock=/export/home/osstest/repos/lock
> > ++ '[' x '!=' x/export/home/osstest/repos/lock ']'
> > ++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
> > ++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-arm-xen 518e624ddfaef545408c19c30fff31bc64d6b346
> > + branch=linux-arm-xen
> > + revision=518e624ddfaef545408c19c30fff31bc64d6b346
> > + . cri-lock-repos
> > ++ . cri-common
> > +++ . cri-getconfig
> > +++ umask 002
> > +++ getconfig Repos
> > +++ perl -e '
> >                 use Osstest;
> >                 readglobalconfig();
> >                 print $c{"Repos"} or die $!;
> >         '
> > ++ repos=/export/home/osstest/repos
> > ++ repos_lock=/export/home/osstest/repos/lock
> > ++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
> > + . cri-common
> > ++ . cri-getconfig
> > ++ umask 002
> > + select_xenbranch
> > + case "$branch" in
> > + tree=linux
> > + xenbranch=xen-unstable
> > + '[' xlinux = xlinux ']'
> > + linuxbranch=linux-arm-xen
> > + : tested/2.6.39.x
> > + . ap-common
> > ++ : osstest@xenbits.xensource.com
> > ++ : git://xenbits.xen.org/xen.git
> > ++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
> > ++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
> > ++ : git://git.kernel.org
> > ++ : git://git.kernel.org/pub/scm/linux/kernel/git
> > ++ : git
> > ++ : git://xenbits.xen.org/osstest/linux-firmware.git
> > ++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
> > ++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
> > ++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> > ++ : git://xenbits.xen.org/linux-pvops.git
> > ++ : tested/linux-3.4
> > ++ : tested/linux-arm-xen
> > ++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
> > ++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
> > ++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> > ++ : tested/2.6.39.x
> > ++ : daily-cron.linux-arm-xen
> > ++ : daily-cron.linux-arm-xen
> > ++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
> > ++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
> > ++ : daily-cron.linux-arm-xen
> > + TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> > + TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
> > + TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
> > + info_linux_tree linux-arm-xen
> > + case $1 in
> > + : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> > + : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> > + : linux-arm-xen
> > + : linux-arm-xen
> > + : linux-arm-xen
> > + : git
> > + : git
> > + : git://xenbits.xen.org/linux-pvops.git
> > + : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> > + : tested/linux-arm-xen
> > + : tested/linux-arm-xen
> > + return 0
> > + cd /export/home/osstest/repos/linux
> > + git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 518e624ddfaef545408c19c30fff31bc64d6b346:tested/linux-arm-xen
> > Counting objects: 1  =20
> > Counting objects: 10  =20
> > Counting objects: 62  =20
> > Counting objects: 69  =20
> > Counting objects: 95  =20
> > Counting objects: 101  =20
> > Counting objects: 119  =20
> > Counting objects: 122  =20
> > Counting objects: 123  =20
> > Counting objects: 158  =20
> > Counting objects: 162  =20
> > Counting objects: 163  =20
> > Counting objects: 164  =20
> > Counting objects: 197  =20
> > Counting objects: 210  =20
> > Counting objects: 211  =20
> > Counting objects: 221  =20
> > Counting objects: 230  =20
> > Counting objects: 236  =20
> > Counting objects: 258  =20
> > Counting objects: 288  =20
> > Counting objects: 291  =20
> > Counting objects: 293  =20
> > Counting objects: 295  =20
> > Counting objects: 371  =20
> > Counting objects: 2439  =20
> > Counting objects: 3644, done.
> > Compressing objects:   0% (1/1281)  =20
> > Compressing objects:   1% (13/1281)  =20
> > Compressing objects:   2% (26/1281)  =20
> > Compressing objects:   3% (39/1281)  =20
> > Compressing objects:   4% (52/1281)  =20
> > Compressing objects:   5% (65/1281)  =20
> > Compressing objects:   6% (77/1281)  =20
> > Compressing objects:   7% (90/1281)  =20
> > Compressing objects:   8% (103/1281)  =20
> > Compressing objects:   9% (116/1281)  =20
> > Compressing objects:  10% (129/1281)  =20
> > Compressing objects:  10% (139/1281)  =20
> > Compressing objects:  11% (141/1281)  =20
> > Compressing objects:  12% (154/1281)  =20
> > Compressing objects:  13% (167/1281)  =20
> > Compressing objects:  14% (180/1281)  =20
> > Compressing objects:  15% (193/1281)  =20
> > Compressing objects:  16% (205/1281)  =20
> > Compressing objects:  17% (218/1281)  =20
> > Compressing objects:  18% (231/1281)  =20
> > Compressing objects:  19% (244/1281)  =20
> > Compressing objects:  20% (257/1281)  =20
> > Compressing objects:  21% (270/1281)  =20
> > Compressing objects:  22% (282/1281)  =20
> > Compressing objects:  23% (295/1281)  =20
> > Compressing objects:  24% (308/1281)  =20
> > Compressing objects:  25% (321/1281)  =20
> > Compressing objects:  26% (334/1281)  =20
> > Compressing objects:  27% (346/1281)  =20
> > Compressing objects:  28% (359/1281)  =20
> > Compressing objects:  29% (372/1281)  =20
> > Compressing objects:  30% (385/1281)  =20
> > Compressing objects:  31% (398/1281)  =20
> > Compressing objects:  32% (410/1281)  =20
> > Compressing objects:  33% (423/1281)  =20
> > Compressing objects:  34% (436/1281)  =20
> > Compressing objects:  35% (449/1281)  =20
> > Compressing objects:  36% (462/1281)  =20
> > Compressing objects:  37% (474/1281)  =20
> > Compressing objects:  38% (487/1281)  =20
> > Compressing objects:  39% (500/1281)  =20
> > Compressing objects:  40% (513/1281)  =20
> > Compressing objects:  40% (525/1281)  =20
> > Compressing objects:  41% (526/1281)  =20
> > Compressing objects:  42% (539/1281)  =20
> > Compressing objects:  43% (551/1281)  =20
> > Compressing objects:  44% (564/1281)  =20
> > Compressing objects:  45% (577/1281)  =20
> > Compressing objects:  46% (590/1281)  =20
> > Compressing objects:  47% (603/1281)  =20
> > Compressing objects:  48% (615/1281)  =20
> > Compressing objects:  49% (628/1281)  =20
> > Compressing objects:  50% (641/1281)  =20
> > Compressing objects:  51% (654/1281)  =20
> > Compressing objects:  52% (667/1281)  =20
> > Compressing objects:  53% (679/1281)  =20
> > Compressing objects:  54% (692/1281)  =20
> > Compressing objects:  55% (705/1281)  =20
> > Compressing objects:  56% (718/1281)  =20
> > Compressing objects:  57% (731/1281)  =20
> > Compressing objects:  58% (743/1281)  =20
> > Compressing objects:  59% (756/1281)  =20
> > Compressing objects:  60% (769/1281)  =20
> > Compressing objects:  61% (782/1281)  =20
> > Compressing objects:  62% (795/1281)  =20
> > Compressing objects:  63% (808/1281)  =20
> > Compressing objects:  64% (820/1281)  =20
> > Compressing objects:  65% (833/1281)  =20
> > Compressing objects:  66% (846/1281)  =20
> > Compressing objects:  67% (859/1281)  =20
> > Compressing objects:  68% (872/1281)  =20
> > Compressing objects:  69% (884/1281)  =20
> > Compressing objects:  70% (897/1281)  =20
> > Compressing objects:  71% (910/1281)  =20
> > Compressing objects:  72% (923/1281)  =20
> > Compressing objects:  73% (936/1281)  =20
> > Compressing objects:  74% (948/1281)  =20
> > Compressing objects:  75% (961/1281)  =20
> > Compressing objects:  76% (974/1281)  =20
> > Compressing objects:  77% (987/1281)  =20
> > Compressing objects:  78% (1000/1281)  =20
> > Compressing objects:  79% (1012/1281)  =20
> > Compressing objects:  80% (1025/1281)  =20
> > Compressing objects:  81% (1038/1281)  =20
> > Compressing objects:  82% (1051/1281)  =20
> > Compressing objects:  83% (1064/1281)  =20
> > Compressing objects:  84% (1077/1281)  =20
> > Compressing objects:  85% (1089/1281)  =20
> > Compressing objects:  86% (1102/1281)  =20
> > Compressing objects:  87% (1115/1281)  =20
> > Compressing objects:  88% (1128/1281)  =20
> > Compressing objects:  89% (1141/1281)  =20
> > Compressing objects:  90% (1153/1281)  =20
> > Compressing objects:  91% (1166/1281)  =20
> > Compressing objects:  92% (1179/1281)  =20
> > Compressing objects:  93% (1192/1281)  =20
> > Compressing objects:  94% (1205/1281)  =20
> > Compressing objects:  95% (1217/1281)  =20
> > Compressing objects:  96% (1230/1281)  =20
> > Compressing objects:  97% (1243/1281)  =20
> > Compressing objects:  98% (1256/1281)  =20
> > Compressing objects:  99% (1269/1281)  =20
> > Compressing objects: 100% (1281/1281)  =20
> > Compressing objects: 100% (1281/1281), done.
> > Writing objects: 100% (2309/2309), 520.91 KiB, done.
> > Total 2309 (delta 1872), reused 1348 (delta 1027)
> > To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> >    d264bde..518e624  518e624ddfaef545408c19c30fff31bc64d6b346 -> tested/linux-arm-xen
> > + exit 0
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>
--1342847746-845825030-1391685539=:4373
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-845825030-1391685539=:4373--


From xen-devel-bounces@lists.xen.org Thu Feb 06 11:19:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 11:19:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBMzG-0002sx-Be; Thu, 06 Feb 2014 11:19:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBMzE-0002sn-If
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 11:19:17 +0000
Received: from [193.109.254.147:18381] by server-3.bemta-14.messagelabs.com id
	BF/DB-00432-3BF63F25; Thu, 06 Feb 2014 11:19:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391685552!2444846!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15510 invoked from network); 6 Feb 2014 11:19:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 11:19:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100426244"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 11:19:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 6 Feb 2014 06:19:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBMz9-0007IA-Ce;
	Thu, 06 Feb 2014 11:19:11 +0000
Date: Thu, 6 Feb 2014 11:18:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391680396.23098.26.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402061118290.4373@kaball.uk.xensource.com>
References: <osstest-24734-mainreport@xen.org>
	<1391680396.23098.26.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="1342847746-845825030-1391685539=:4373"
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-845825030-1391685539=:4373
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Thu, 6 Feb 2014, Ian Campbell wrote:
> On Wed, 2014-02-05 at 18:55 +0000, xen.org wrote:
> > flight 24734 linux-arm-xen real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24734/
> >
> > Failures :-/ but no regressions.
> >
> > Tests which did not succeed, but are not blocking:
> >  test-armhf-armhf-xl           9 guest-start                  fail   never pass
>
> Stefano, please can you cherry-pick:
>
>         commit e17b2f114cba5420fb28fa4bfead57d406a16533
>         Author: Ian Campbell <ian.campbell@citrix.com>
>         Date:   Mon Jan 20 11:30:41 2014 +0000
>
>             xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t)
>
>             The use of phys_to_machine and machine_to_phys in the
>             phys<=>bus conversion causes us to lose the top bits of the
>             DMA address if the size of a DMA address differs from the
>             size of a physical address.
>
>             This can happen in practice on ARM where foreign pages can
>             be above 4GB even though the local kernel does not have LPAE
>             page tables enabled (which is totally reasonable if the
>             guest does not itself have >4GB of RAM). In this case the
>             kernel still maps the foreign pages at a phys addr below 4G
>             (as it must) but the resulting DMA address (returned by the
>             grant map operation) is much higher.
>
>             This is analogous to a hardware device which has its view of
>             RAM mapped up high for some reason.
>
>             This patch makes I/O to foreign pages (specifically blkif)
>             work on 32-bit ARM systems with more than 4GB of RAM.
>
>             Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>             Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> from mainline into this tree.

I merged xentip/stable/for-linus-3.14 into linux-arm-xen.


> Thanks,
> Ian.
>
> >
> > version targeted for testing:
> >  linux                518e624ddfaef545408c19c30fff31bc64d6b346
> > baseline version:
> >  linux                d264bde089ceea20640d6d4472a0dcade9d2e199
> >
> > ------------------------------------------------------------
> > People who touched revisions under test:
> >   "Eric W. Biederman" <ebiederm@xmission.com>
> >   Aaro Koskinen <aaro.koskinen@iki.fi>
> >   Aaron Brown <aaron.f.brown@intel.com>
> >   Abhilash Kesavan <a.kesavan@samsung.com>
> >   Alan Cox <alan@linux.intel.com>
> >   Alex Deucher <alexander.deucher@amd.com>
> >   Alexander Mezin <mezin.alexander@gmail.com>
> >   Alexander van Heukelum <heukelum@fastmail.fm>
> >   Anatolij Gustschin <agust@denx.de>
> >   Andre Przywara <andre.przywara@linaro.org>
> >   Andreas Reis <andreas.reis@gmail.com>
> >   Andreas Rohner <andreas.rohner@gmx.net>
> >   Andrew Bresticker <abrestic@chromium.org>
> >   Andrew Jones <drjones@redhat.com>
> >   Andrew Morton <akpm@linux-foundation.org>
> >   Andy Lutomirski <luto@amacapital.net>
> >   Anton Blanchard <anton@samba.org>
> >   Anton Vorontsov <anton@enomsg.org>
> >   Antonio Quartulli <antonio@meshcoding.com>
> >   Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >   Ariel Elior <ariele@broadcom.com>
> >   Arron Wang <arron.wang@intel.com>
> >   Austin Boyle <boyle.austin@gmail.com>
> >   Ben Dooks <ben.dooks@codethink.co.uk>
> >   Ben Myers <bpm@sgi.com>
> >   Ben Skeggs <bskeggs@redhat.com>
> >   Ben Widawsky <ben@bwidawsk.net>
> >   Benjamin Herrenschmidt <benh@kernel.crashing.org>
> >   Betty Dall <betty.dall@hp.com>
> >   Bjorn Helgaas <bhelgaas@google.com>
> >   Bjørn Mork <bjorn@mork.no>
> >   Bob Peterson <rpeterso@redhat.com>
> >   Borislav Petkov <bp@suse.de>
> >   Brian W Hart <hartb@linux.vnet.ibm.com>
> >   Bruce Allan <bruce.w.allan@intel.com>
> >   Bryan Wu <cooloney@gmail.com>
> >   Catalin Marinas <catalin.marinas@arm.com>
> >   Christian Engelmayer <cengelma@gmx.at>
> >   Christian König <christian.koenig@amd.com>
> >   Christoph Paasch <christoph.paasch@uclouvain.be>
> >   Cong Wang <xiyou.wangcong@gmail.com>
> >   Curt Brune <curt@cumulusnetworks.com>
> >   Cédric Le Goater <clg@fr.ibm.com>
> >   Dan Carpenter <dan.carpenter@oracle.com>
> >   Dan Williams <dcbw@redhat.com>
> >   Daniel Borkmann <dborkman@redhat.com>
> >   Daniel Lezcano <daniel.lezcano@linaro.org>
> >   Daniel Vetter <daniel.vetter@ffwll.ch>
> >   Dave Airlie <airlied@redhat.com>
> >   Dave Ertman <davidx.m.ertman@intel.com>
> >   Dave Kleikamp <dave.kleikamp@oracle.com>
> >   David Ertman <davidx.m.ertman@intel.com>
> >   David Gibson <david@gibson.dropbear.id.au>
> >   David S. Miller <davem@davemloft.net>
> >   Dimitris Michailidis <dm@chelsio.com>
> >   Ding Tianhong <dingtianhong@huawei.com>
> >   Dirk Brandewie <dirk.j.brandewie@intel.com>
> >   Dmitry Kravkov <dmitry@broadcom.com>
> >   Dmitry Torokhov <dmitry.torokhov@gmail.com>
> >   Don Skidmore <donald.c.skidmore@intel.com>
> >   Emmanuel Grumbach <emmanuel.grumbach@intel.com>
> >   Eric Dumazet <edumazet@google.com>
> >   Eric Whitney <enwlinux@gmail.com>
> >   Erik Hugne <erik.hugne@ericsson.com>
> >   Fabio Estevam <fabio.estevam@freescale.com>
> >   Fan Du <fan.du@windriver.com>
> >   Felix Fietkau <nbd@openwrt.org>
> >   Flavio Leitner <fbl@redhat.com>
> >   Florian Fainelli <f.fainelli@gmail.com>
> >   Florian Westphal <fw@strlen.de>
> >   Frank Li <Frank.Li@freescale.com>
> >   Gao feng <gaofeng@cn.fujitsu.com>
> >   Gavin Shan <shangw@linux.vnet.ibm.com>
> >   Geert Uytterhoeven <geert+renesas@linux-m68k.org>
> >   Gerhard Sittig <gsi@denx.de>
> >   Gerrit Renker <gerrit@erg.abdn.ac.uk>
> >   Grant Likely <grant.likely@linaro.org>
> >   Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> >   Gregor Beck <gbeck@sernet.de>
> >   Guenter Roeck <linux@roeck-us.net>
> >   Gustavo Padovan <gustavo.padovan@collabora.co.uk>
> >   H. Nikolaus Schaller <hns@goldelico.com>
> >   H. Peter Anvin <hpa@linux.intel.com>
> >   H. Peter Anvin <hpa@zytor.com>
> >   Haiyang Zhang <haiyangz@microsoft.com>
> >   Hangbin Liu <liuhangbin@gmail.com>
> >   Hannes Frederic Sowa <hannes@stressinduktion.org>
> >   Hariprasad Shenai <hariprasad@chelsio.com>
> >   Heiko Carstens <heiko.carstens@de.ibm.com>
> >   Heiko Stuebner <heiko@sntech.de>
> >   Helge Deller <deller@gmx.de>
> >   Helmut Schaa <helmut.schaa@googlemail.com>
> >   Herbert Xu <herbert@gondor.apana.org.au>
> >   Huacai Chen <chenhc@lemote.com>
> >   Hugh Dickins <hughd@google.com>
> >   Ilia Mirkin <imirkin@alum.mit.edu>
> >   Ingo Molnar <mingo@kernel.org>
> >   Ivan Vecera <ivecera@redhat.com>
> >   Jamal Hadi Salim <jhs@mojatatu.com>
> >   James Hogan <james.hogan@imgtec.com>
> >   Jan Kara <jack@suse.cz>
> >   Jan Kiszka <jan.kiszka@siemens.com>
> >   Jani Nikula <jani.nikula@intel.com>
> >   Jason Baron <jbaron@akamai.com>
> >   Jason Wang <jasowang@redhat.com>
> >   Javier Lopez <jlopex@cozybit.com>
> >   Jay Vosburgh <fubar@us.ibm.com>
> >   Jean Delvare <khali@linux-fr.org>
> >   Jeff Kirsher <jeffrey.t.kirsher@intel.com>
> >   Jeff Layton <jlayton@redhat.com>
> >   Jens Axboe <axboe@kernel.dk>
> >   Jesper Dangaard Brouer <brouer@redhat.com>
> >   Jesse Barnes <jbarnes@virtuousgeek.org>
> >   Jiang Liu <jiang.liu@linux.intel.com>
> >   Jie Liu <jeff.liu@oracle.com>
> >   Jiri Pirko <jiri@resnulli.us>
> >   Jitendra Kalsaria <jitendra.kalsaria@qlogic.com>
> >   Johan Hedberg <johan.hedberg@intel.com>
> >   Johannes Berg <johannes.berg@intel.com>
> >   John Crispin <blogic@openwrt.org>
> >   John David Anglin <dave.anglin@bell.net>
> >   John Fastabend <john.r.fastabend@intel.com.com>
> >   John Fastabend <john.r.fastabend@intel.com>
> >   John Stultz <john.stultz@linaro.org>
> >   John W. Linville <linville@tuxdriver.com>
> >   Jon Maloy <jon.maloy@ericsson.com>
> >   Jonghwa Lee <jonghwa3.lee@samsung.com>
> >   Josh Boyer <jwboyer@fedoraproject.org>
> >   Julian Anastasov <ja@ssi.bg>
> >   Kelly Doran <kel.p.doran@gmail.com>
> >   Kirill Tkhai <tkhai@yandex.ru>
> >   Krzysztof Hałasa <khalasa@piap.pl>
> >   Krzysztof Kozlowski <k.kozlowski@samsung.com>
> >   Kumar Sanghvi <kumaras@chelsio.com>
> >   Lan Tianyu <tianyu.lan@intel.com>
> >   Larry Finger <Larry.Finger@lwfinger.net>
> >   Laura Abbott <lauraa@codeaurora.org>
> >   Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
> >   Leigh Brown <leigh@solinno.co.uk>
> >   Li RongQing <roy.qing.li@gmail.com>
> >   Linus Torvalds <torvalds@linux-foundation.org>
> >   Liu, Chuansheng <chuansheng.liu@intel.com>
> >   Manish Chopra <manish.chopra@qlogic.com>
> >   Marcel Holtmann <marcel@holtmann.org>
> >   Marcelo Tosatti <mtosatti@redhat.com>
> >   Marco Piazza <mpiazza@gmail.com>
> >   Marek Lindner <mareklindner@neomailbox.ch>
> >   Marek Olšák <marek.olsak@amd.com>
> >   Mark Rutland <mark.rutland@arm.com>
> >   Martin Schwidefsky <schwidefsky@de.ibm.com>
> >   Mathy Vanhoef <vanhoefm@gmail.com>
> >   Matteo Facchinetti <matteo.facchinetti@sirius-es.it>
> >   Mel Gorman <mgorman@suse.de>
> >   Michael Chan <mchan@broadcom.com>
> >   Michael Neuling <mikey@neuling.org>
> >   Michael S. Tsirkin <mst@redhat.com>
> >   Michal Hocko <mhocko@suse.cz>
> >   Michal Kalderon <michals@broadcom.com>
> >   Michal Schmidt <mschmidt@redhat.com>
> >   Michal Simek <michal.simek@xilinx.com>
> >   Mika Westerberg <mika.westerberg@linux.intel.com>
> >   Mike Turquette <mturquette@linaro.org>
> >   Mikulas Patocka <mpatocka@redhat.com>
> >   Milo Kim <milo.kim@ti.com>
> >   Ming Lei <ming.lei@canonical.com>
> >   Ming Lei <tom.leiming@gmail.com>
> >   Mugunthan V N <mugunthanvnm@ti.com>
> >   Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> >   Neal Cardwell <ncardwell@google.com>
> >   Neil Horman <nhorman@tuxdriver.com>
> >   NeilBrown <neilb@suse.de>
> >   Nicolas Schichan <nschichan@freebox.fr>
> >   Nithin Nayak Sujir <nsujir@broadcom.com>
> >   Nobuhiro Iwamatsu <nobuhiro.iwamatsu.yj@renesas.com>
> >   Octavian Purdila <octavian.purdila@intel.com>
> >   Oleg Nesterov <oleg@redhat.com>
> >   Olof Johansson <olof@lixom.net>
> >   Oren Givon <oren.givon@intel.com>
> >   Pablo Neira Ayuso <pablo@netfilter.org>
> >   Paolo Bonzini <pbonzini@redhat.com>
> >   Paul Durrant <paul.durrant@citrix.com>
> >   Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >   Paulo Zanoni <paulo.r.zanoni@intel.com>
> >   Pekka Enberg <penberg@kernel.org>
> >   Peter Korsgaard <peter@korsgaard.com>
> >   Peter Zijlstra <peterz@infradead.org>
> >   Phil Schmitt <phillip.j.schmitt@intel.com>
> >   Qais Yousef <qais.yousef@imgtec.com>
> >   Qingshuai Tian <qingshuai.tian@intel.com>
> >   Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> >   Rafał Miłecki <zajec5@gmail.com>
> >   Rajesh B Prathipati <rprathip@linux.vnet.ibm.com>
> >   Richard Cochran <richardcochran@gmail.com>
> >   Richard Weinberger <richard@nod.at>
> >   Rik van Riel <riel@redhat.com>
> >   Rob Herring <rob.herring@calxeda.com>
> >   Rob Herring <robh@kernel.org>
> >   Robert Richter <rric@kernel.org>
> >   Russell King <rmk+kernel@arm.linux.org.uk>
> >   Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
> >   Sachin Kamat <sachin.kamat@linaro.org>
> >   Sachin Prabhu <sprabhu@redhat.com>
> >   Salva Peiró <speiro@ai2.upv.es>
> >   Samuel Ortiz <sameo@linux.intel.com>
> >   Santosh Shilimkar <santosh.shilimkar@ti.com>
> >   Sasha Levin <sasha.levin@oracle.com>
> >   Sathya Perla <sathya.perla@emulex.com>
> >   Scott Feldman <sfeldma@cumulusnetworks.com>
> >   Sebastian Ott <sebott@linux.vnet.ibm.com>
> >   Serge E. Hallyn <serge.hallyn@ubuntu.com>
> >   Serge Hallyn <serge.hallyn@canonical.com>
> >   Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
> >   Seung-Woo Kim <sw0312.kim@samsung.com>
> >   Shahed Shaikh <shahed.shaikh@qlogic.com>
> >   Shirish Pargaonkar <spargaonkar@suse.com>
> >   Shuah Khan <shuah.kh@samsung.com>
> >   Simon Guinot <sguinot@lacie.com>
> >   Simon Horman <horms+renesas@verge.net.au>
> >   Simon Horman <horms@verge.net.au>
> >   Simon Wunderlich <sw@simonwunderlich.de>
> >   Soren Brinkmann <soren.brinkmann@xilinx.com>
> >   Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >   Stephen Boyd <sboyd@codeaurora.org>
> >   Stephen Warren <swarren@nvidia.com>
> >   Steve Capper <steve.capper@linaro.org>
> >   Steve French <smfrench@gmail.com>
> >   Steven Rostedt <rostedt@goodmis.org>
> >   Steven Whitehouse <swhiteho@redhat.com>
> >   Sudeep Holla <sudeep.holla@arm.com>
> >   Sujith Manoharan <c_manoha@qca.qualcomm.com>
> >   Suresh Reddy <suresh.reddy@emulex.com>
> >   Taras Kondratiuk <taras.kondratiuk@linaro.org>
> >   Tejun Heo <tj@kernel.org>
> >   Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> >   Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
> >   Thomas Gleixner <tglx@linutronix.de>
> >   Timo Teräs <timo.teras@iki.fi>
> >   Tomasz Figa <t.figa@samsung.com>
> >   Tony Lindgren <tony@atomide.com>
> >   Toshi Kani <toshi.kani@hp.com>
> >   Ujjal Roy <royujjal@gmail.com>
> >   Vasundhara Volam <vasundhara.volam@emulex.com>
> >   Ville Syrjälä <ville.syrjala@linux.intel.com>
> >   Vince Bridgers <vbridgers2013@gmail.com>
> >   Viresh Kumar <viresh.kumar@linaro.org>
> >   Vivek Goyal <vgoyal@redhat.com>
> >   Vlad Yasevich <vyasevich@gmail.com>
> >   Vladimir Davydov <vdavydov@parallels.com>
> >   Vlastimil Babka <vbabka@suse.cz>
> >   Wang Weidong <wangweidong1@huawei.com>
> >   Wei Liu <wei.liu2@citrix.com>
> >   Wei Yongjun <yongjun_wei@trendmicro.com.cn>
> >   Wei-Chun Chao <weichunc@plumgrid.com>
> >   Wenliang Fan <fanwlexca@gmail.com>
> >   Will Deacon <will.deacon@arm.com>
> >   Wolfram Sang <wsa@the-dreams.de>
> >   Yaniv Rosner <yanivr@broadcom.com>
> >   Yasushi Asano <yasushi.asano@jp.fujitsu.com>
> >   Yijing Wang <wangyijing@huawei.com>
> >   Ying Xue <ying.xue@windriver.com>
> >   Yuval Mintz <yuvalmin@broadcom.com>
> > ------------------------------------------------------------
> >
> > jobs:
> >  build-armhf                                                  pass
> >  build-armhf-pvops                                            pass
> >  test-armhf-armhf-xl                                          fail
> >
> >
> > ------------------------------------------------------------
> > sg-report-flight on woking.cam.xci-test.com
> > logs: /home/xc_osstest/logs
> > images: /home/xc_osstest/images
> >
> > Logs, config files, etc. are available at
> >     http://www.chiark.greenend.org.uk/~xensrcts/logs
> >
> > Test harness code can be found at
> >     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
> >
> >
> > Pushing revision :
> >
> > + branch=linux-arm-xen
> > + revision=518e624ddfaef545408c19c30fff31bc64d6b346
> > + . cri-lock-repos
> > ++ . cri-common
> > +++ . cri-getconfig
> > +++ umask 002
> > +++ getconfig Repos
> > +++ perl -e '
> >                 use Osstest;
> >                 readglobalconfig();
> >                 print $c{"Repos"} or die $!;
> >         '
> > ++ repos=/export/home/osstest/repos
> > ++ repos_lock=/export/home/osstest/repos/lock
> > ++ '[' x '!=' x/export/home/osstest/repos/lock ']'
> > ++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
> > ++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-arm-xen 518e624ddfaef545408c19c30fff31bc64d6b346
> > + branch=linux-arm-xen
> > + revision=518e624ddfaef545408c19c30fff31bc64d6b346
> > + . cri-lock-repos
> > ++ . cri-common
> > +++ . cri-getconfig
> > +++ umask 002
> > +++ getconfig Repos
> > +++ perl -e '
> >                 use Osstest;
> >                 readglobalconfig();
> >                 print $c{"Repos"} or die $!;
> >         '
> > ++ repos=/export/home/osstest/repos
> > ++ repos_lock=/export/home/osstest/repos/lock
> > ++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
> > + . cri-common
> > ++ . cri-getconfig
> > ++ umask 002
> > + select_xenbranch
> > + case "$branch" in
> > + tree=linux
> > + xenbranch=xen-unstable
> > + '[' xlinux = xlinux ']'
> > + linuxbranch=linux-arm-xen
> > + : tested/2.6.39.x
> > + . ap-common
> > ++ : osstest@xenbits.xensource.com
> > ++ : git://xenbits.xen.org/xen.git
> > ++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
> > ++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
> > ++ : git://git.kernel.org
> > ++ : git://git.kernel.org/pub/scm/linux/kernel/git
> > ++ : git
> > ++ : git://xenbits.xen.org/osstest/linux-firmware.git
> > ++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
> > ++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
> > ++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> > ++ : git://xenbits.xen.org/linux-pvops.git
> > ++ : tested/linux-3.4
> > ++ : tested/linux-arm-xen
> > ++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
> > ++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
> > ++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> > ++ : tested/2.6.39.x
> > ++ : daily-cron.linux-arm-xen
> > ++ : daily-cron.linux-arm-xen
> > ++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
> > ++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
> > ++ : daily-cron.linux-arm-xen
> > + TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> > + TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
> > + TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
> > + info_linux_tree linux-arm-xen
> > + case $1 in
> > + : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> > + : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> > + : linux-arm-xen
> > + : linux-arm-xen
> > + : linux-arm-xen
> > + : git
> > + : git
> > + : git://xenbits.xen.org/linux-pvops.git
> > + : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> > + : tested/linux-arm-xen
> > + : tested/linux-arm-xen
> > + return 0
> > + cd /export/home/osstest/repos/linux
> > + git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 518e624ddfaef545408c19c30fff31bc64d6b346:tested/linux-arm-xen
> > Counting objects: 3644, done.
> > Compressing objects: 100% (1281/1281), done.
> > Writing objects:   0% (1/2309)
> > Writing objects:  70% (1617/2309)  =20
> > Writing objects:  71% (1640/2309)  =20
> > Writing objects:  72% (1663/2309)  =20
> > Writing objects:  73% (1686/2309)  =20
> > Writing objects:  74% (1709/2309)  =20
> > Writing objects:  75% (1732/2309)  =20
> > Writing objects:  76% (1755/2309)  =20
> > Writing objects:  77% (1778/2309)  =20
> > Writing objects:  78% (1802/2309)  =20
> > Writing objects:  79% (1825/2309)  =20
> > Writing objects:  80% (1848/2309)  =20
> > Writing objects:  81% (1871/2309)  =20
> > Writing objects:  82% (1894/2309)  =20
> > Writing objects:  83% (1917/2309)  =20
> > Writing objects:  84% (1940/2309)  =20
> > Writing objects:  85% (1963/2309)  =20
> > Writing objects:  86% (1986/2309)  =20
> > Writing objects:  87% (2009/2309)  =20
> > Writing objects:  88% (2032/2309)  =20
> > Writing objects:  89% (2056/2309)  =20
> > Writing objects:  90% (2079/2309)  =20
> > Writing objects:  91% (2102/2309)  =20
> > Writing objects:  92% (2125/2309)  =20
> > Writing objects:  93% (2148/2309)  =20
> > Writing objects:  94% (2171/2309)  =20
> > Writing objects:  95% (2194/2309)  =20
> > Writing objects:  96% (2217/2309)  =20
> > Writing objects:  97% (2240/2309)  =20
> > Writing objects:  98% (2263/2309)  =20
> > Writing objects:  99% (2286/2309)  =20
> > Writing objects: 100% (2309/2309)  =20
> > Writing objects: 100% (2309/2309), 520.91 KiB, done.
> > Total 2309 (delta 1872), reused 1348 (delta 1027)
> > To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
> >    d264bde..518e624  518e624ddfaef545408c19c30fff31bc64d6b346 -> tested/linux-arm-xen
> > + exit 0
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Thu Feb 06 11:32:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 11:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBNBs-0003O3-B0; Thu, 06 Feb 2014 11:32:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <kraxel@redhat.com>) id 1WBNBq-0003Ny-QQ
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 11:32:18 +0000
Received: from [85.158.143.35:45072] by server-2.bemta-4.messagelabs.com id
	30/BB-10891-2C273F25; Thu, 06 Feb 2014 11:32:18 +0000
X-Env-Sender: kraxel@redhat.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391686336!3609468!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10477 invoked from network); 6 Feb 2014 11:32:17 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	6 Feb 2014 11:32:17 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s16BVBVA004363
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Feb 2014 06:31:11 -0500
Received: from [10.36.7.101] (vpn1-7-101.ams2.redhat.com [10.36.7.101])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s16BV8S9002604; Thu, 6 Feb 2014 06:31:08 -0500
Message-ID: <1391686266.17309.45.camel@nilsson.home.kraxel.org>
From: Gerd Hoffmann <kraxel@redhat.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 06 Feb 2014 12:31:06 +0100
In-Reply-To: <52F36B920200007800119AF4@nat28.tlf.novell.com>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<52F36B920200007800119AF4@nat28.tlf.novell.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: "seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, David Hoyer <David.Hoyer@netapp.com>,
	Ian Campbell <ijc@hellion.org.uk>, Keith Moyer <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

  Hi,

> >> IIRC xen combines seabios and hvmloader into a single 256k image
> >> somehow, so it might make sense to set the seabios image size to
> >> something between 128k and 256k.  But better ask the xen people for
> >> details here.
> 
> A patch allowing the size to be other than a power of 2 was rejected.
> And I vaguely seem to recall that you actually participated in that?

seabios by default still picks the smallest possible power-of-two size.
If you need something else you can simply set the new CONFIG_ROM_SIZE
option to whatever you want, including non-power-of-two sizes.
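For reference, on a tree with the Kconfig-based build this is a single line in the SeaBIOS .config; the 192 KiB value below is purely illustrative, not a recommendation:

```
# SeaBIOS .config fragment: fix the ROM size (in KiB) instead of auto-sizing.
# 192 is an example value only; pick whatever your firmware layout needs.
CONFIG_ROM_SIZE=192
```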

cheers,
  Gerd



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 11:46:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 11:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBNPh-0003v9-KD; Thu, 06 Feb 2014 11:46:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBNPg-0003v4-29
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 11:46:36 +0000
Received: from [193.109.254.147:49889] by server-12.bemta-14.messagelabs.com
	id D5/F4-17220-B1673F25; Thu, 06 Feb 2014 11:46:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391687193!2454147!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21571 invoked from network); 6 Feb 2014 11:46:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 11:46:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100430495"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 11:46:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	06:46:32 -0500
Message-ID: <1391687191.23098.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 11:46:31 +0000
In-Reply-To: <1391611418-30816-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391611418-30816-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH] tools: Bump library SONAMEs for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:43 +0000, Ian Jackson wrote:
> There have been ABI/API changes in libxc.  Bump its MAJOR (which
> affets libxenguest et al too.)

"affects" (can be fixed on commit, no need to resend)

> There have been ABI changes in libxl.  Bump its MAJOR.
> (The API changes have been dealt with as we go along - there is
> already a LIBXL_API_VERSION 0x040400.)
> 
> None of the other libraries have changed their interfaces.  I have
> verified this by building the tools and searching the dist/install
> tree for files matching *.so.*.  For each library that showed up, I
> did this:
>   git-diff RELEASE-4.3.0..staging -- `find tools/FOO/ -name \*.h`
> where FOO is the corresponding source directory.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

Should we apply immediately? Only reason not to would be to wait for the
conclusion of the libxlu_disk thing discussion with George and Olaf.
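The per-library check quoted in the commit message above could be scripted roughly as the following dry-run sketch; the tag name and tools/ layout are taken from the quoted text, while the two directory names in the loop are examples only:

```shell
#!/bin/sh
# Dry-run sketch of the per-library interface check quoted above.
# For each library source directory, print the git-diff command that
# compares its public headers against the RELEASE-4.3.0 tag.
check_lib() {
    # $1 = source directory under tools/ for a library seen in dist/install
    printf 'git diff RELEASE-4.3.0..staging -- $(find tools/%s/ -name "*.h")\n' "$1"
}

for dir in libxc libxl; do      # example directories only
    check_lib "$dir"
done
```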



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:02:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:02:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBNep-0004oq-DF; Thu, 06 Feb 2014 12:02:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBNeo-0004ok-SB
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 12:02:15 +0000
Received: from [85.158.137.68:54087] by server-10.bemta-3.messagelabs.com id
	6F/94-07302-6C973F25; Thu, 06 Feb 2014 12:02:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391688131!44477!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16279 invoked from network); 6 Feb 2014 12:02:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:02:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98552992"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:02:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:02:11 -0500
Message-ID: <1391688129.23098.103.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:02:09 +0000
In-Reply-To: <52F24FF3.5050107@eu.citrix.com>
References: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
	<52F24FF3.5050107@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:51 +0000, George Dunlap wrote:
> On 02/05/2014 02:16 PM, Julien Grall wrote:
> > The function domain_page_map_to_mfn can be used to translate a virtual
> > address mapped by both map_domain_page and map_domain_page_global.
> > The latter uses vmap to map the mfn, so domain_page_map_to_mfn
> > will always fail for such addresses because they are not in the DOMHEAP range.
> >
> > Check if the address is in vmap range and use __pa to translate it.
> >
> > This patch fixes guest shutdown when the event fifo is used.
> >
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> I assume this brings the arm paths into line with the x86 functionality?

Yes, functionality which is now expected by common code as well (FIFO
evtchn stuff).

> 
>   -George
> 
> 
> >
> > ---
> >      This is a bug fix for Xen 4.4. Without this patch, it's impossible to
> > use Linux 3.14 (and higher) as a guest with the event fifo driver.
> > ---
> >   xen/arch/arm/mm.c |   10 +++++++---
> >   1 file changed, 7 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > index 127cce0..bdca68a 100644
> > --- a/xen/arch/arm/mm.c
> > +++ b/xen/arch/arm/mm.c
> > @@ -325,11 +325,15 @@ void unmap_domain_page(const void *va)
> >       local_irq_restore(flags);
> >   }
> >   
> > -unsigned long domain_page_map_to_mfn(const void *va)
> > +unsigned long domain_page_map_to_mfn(const void *ptr)
> >   {
> > +    unsigned long va = (unsigned long)ptr;
> >       lpae_t *map = this_cpu(xen_dommap);
> > -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> > -    unsigned long offset = ((unsigned long)va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> > +    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> > +    unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> > +
> > +    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> > +        return virt_to_mfn(va);
> >   
> >       ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> >       ASSERT(map[slot].pt.avail != 0);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:03:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBNgS-0004un-V8; Thu, 06 Feb 2014 12:03:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WBNgR-0004uf-Et
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 12:03:55 +0000
Received: from [85.158.143.35:62672] by server-3.bemta-4.messagelabs.com id
	49/E3-11539-A2A73F25; Thu, 06 Feb 2014 12:03:54 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391688232!3614602!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4951 invoked from network); 6 Feb 2014 12:03:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:03:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100433635"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 12:03:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 6 Feb 2014 07:03:32 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WBNg3-0007uT-QM;
	Thu, 06 Feb 2014 12:03:31 +0000
Message-ID: <52F37A13.7020909@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:03:31 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391609794-505-1-git-send-email-julien.grall@linaro.org>	
	<52F24FF3.5050107@eu.citrix.com>
	<1391688129.23098.103.camel@kazak.uk.xensource.com>
In-Reply-To: <1391688129.23098.103.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: patches@linaro.org, Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/06/2014 12:02 PM, Ian Campbell wrote:
> On Wed, 2014-02-05 at 14:51 +0000, George Dunlap wrote:
>> On 02/05/2014 02:16 PM, Julien Grall wrote:
>>> The function domain_page_map_to_mfn can be used to translate a virtual
>>> address mapped by both map_domain_page and map_domain_page_global.
>>> The latter uses vmap to map the mfn, therefore domain_page_map_to_mfn
>>> will always fail because the address is not in the DOMHEAP range.
>>>
>>> Check if the address is in the vmap range and use __pa to translate it.
>>>
>>> This patch fixes guest shutdown when the event FIFO is used.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> I assume this brings the arm paths into line with the x86 functionality?
> Yes, functionality which is now expected by common code as well (FIFO
> evtchn stuff).

In that case:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:27:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:27:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBO2l-0005dd-LG; Thu, 06 Feb 2014 12:26:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBO2k-0005dW-R1
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 12:26:59 +0000
Received: from [193.109.254.147:8426] by server-5.bemta-14.messagelabs.com id
	92/83-16688-29F73F25; Thu, 06 Feb 2014 12:26:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391689616!2469604!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18150 invoked from network); 6 Feb 2014 12:26:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:26:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100439295"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 12:26:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:26:55 -0500
Message-ID: <1391689614.23098.105.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 12:26:54 +0000
In-Reply-To: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
References: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:16 +0000, Julien Grall wrote:
> The function domain_page_map_to_mfn can be used to translate a virtual
> address mapped by both map_domain_page and map_domain_page_global.
> The latter uses vmap to map the mfn, therefore domain_page_map_to_mfn
> will always fail because the address is not in the DOMHEAP range.
> 
> Check if the address is in the vmap range and use __pa to translate it.
> 
> This patch fixes guest shutdown when the event FIFO is used.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: George Dunlap <george.dunlap@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> 
> ---
>     This is a bug fix for Xen 4.4. Without this patch, it's impossible to
> use Linux 3.14 (and higher) as a guest with the event FIFO driver.
> ---
>  xen/arch/arm/mm.c |   10 +++++++---
>  1 file changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 127cce0..bdca68a 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -325,11 +325,15 @@ void unmap_domain_page(const void *va)
>      local_irq_restore(flags);
>  }
>  
> -unsigned long domain_page_map_to_mfn(const void *va)
> +unsigned long domain_page_map_to_mfn(const void *ptr)
>  {
> +    unsigned long va = (unsigned long)ptr;
>      lpae_t *map = this_cpu(xen_dommap);
> -    int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> -    unsigned long offset = ((unsigned long)va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +    int slot = (va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
> +    unsigned long offset = (va>>THIRD_SHIFT) & LPAE_ENTRY_MASK;
> +
> +    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
> +        return virt_to_mfn(va);
>  
>      ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
>      ASSERT(map[slot].pt.avail != 0);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:35:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOBG-000689-R0; Thu, 06 Feb 2014 12:35:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBOBF-000682-GF
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 12:35:45 +0000
Received: from [85.158.139.211:32526] by server-11.bemta-5.messagelabs.com id
	63/E5-23886-0A183F25; Thu, 06 Feb 2014 12:35:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391690142!2109311!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9807 invoked from network); 6 Feb 2014 12:35:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:35:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98561140"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:35:42 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 07:35:41 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBOBB-0003uF-D7;
	Thu, 06 Feb 2014 12:35:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBOBB-0001dw-2x;
	Thu, 06 Feb 2014 12:35:41 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21235.33180.577781.42813@mariner.uk.xensource.com>
Date: Thu, 6 Feb 2014 12:35:40 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <52F36977.6030106@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F24642.5000300@eu.citrix.com>
	<21234.21167.684304.970488@mariner.uk.xensource.com>
	<52F36977.6030106@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Jim Fehlig <jfehlig@suse.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: [PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
> I think libvirt support for libxl is really important functionality
> from a strategic perspective: solid support should make it much easier
> to integrate with other projects such as OpenStack and CloudStack, as
> well as (in theory) other tools built on top of libvirt.

Right.

> So I'm inclined to consider this a blocker*; I think we should accept it 
> and delay the release until we feel comfortable that it has been 
> sufficiently tested.

Thanks.

> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

I think the right answer then would be to commit it as soon as it has
been acked.  There are four patches at the end that could do with an
ack from Ian C.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:36:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:36:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOC3-0006CG-8w; Thu, 06 Feb 2014 12:36:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBOC1-0006C7-S3
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 12:36:34 +0000
Received: from [85.158.139.211:49681] by server-9.bemta-5.messagelabs.com id
	01/54-11237-1D183F25; Thu, 06 Feb 2014 12:36:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391690191!2103012!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19263 invoked from network); 6 Feb 2014 12:36:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:36:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98561275"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:36:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 07:36:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBOBy-0003uZ-6w;
	Thu, 06 Feb 2014 12:36:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBOBx-0001e6-VG;
	Thu, 06 Feb 2014 12:36:30 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21235.33229.690917.184727@mariner.uk.xensource.com>
Date: Thu, 6 Feb 2014 12:36:29 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391687191.23098.97.camel@kazak.uk.xensource.com>
References: <1391611418-30816-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391687191.23098.97.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH] tools: Bump library SONAMEs for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] tools: Bump library SONAMEs for 4.4"):
> On Wed, 2014-02-05 at 14:43 +0000, Ian Jackson wrote:
> > There have been ABI/API changes in libxc.  Bump its MAJOR (which
> > affets libxenguest et al too.)
> 
> "affects" (can be fixed on commit, no need to resend)

Oops.

> > There have been ABI changes in libxl.  Bump its MAJOR.
> > (The API changes have been dealt with as we go along - there is
> > already a LIBXL_API_VERSION 0x040400.)
> > 
> > None of the other libraries have changed their interfaces.  I have
> > verified this by building the tools and searching the dist/install
> > tree for files matching *.so.*.  For each library that showed up, I
> > did this:
> >   git-diff RELEASE-4.3.0..staging -- `find tools/FOO/ -name \*.h`
> > where FOO is the corresponding source directory.
> > 
> > Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
> 
> Should we apply immediately? Only reason not to would be to wait for the
> conclusion of the libxlu_disk thing discussion with George and Olaf.

I don't think that's a reason to delay.  I think Olaf's libxlu_disk
thing can go in even after this change.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:37:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOCu-0006Jl-NL; Thu, 06 Feb 2014 12:37:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOCt-0006JU-7Q
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:37:27 +0000
Received: from [85.158.137.68:9426] by server-14.bemta-3.messagelabs.com id
	6B/FE-08196-60283F25; Thu, 06 Feb 2014 12:37:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391690244!55089!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8303 invoked from network); 6 Feb 2014 12:37:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:37:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100441465"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 12:37:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:37:23 -0500
Message-ID: <1391690242.23098.107.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:37:22 +0000
In-Reply-To: <52F250E7.1090008@eu.citrix.com>
References: <1391535813.6497.61.camel@kazak.uk.xensource.com>
	<1391536870-22809-1-git-send-email-andrew.cooper3@citrix.com>
	<52F250E7.1090008@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] tools/libxc: Prevent erroneous success
 from xc_domain_restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:55 +0000, George Dunlap wrote:
> On 02/04/2014 06:01 PM, Andrew Cooper wrote:
> > The variable 'rc' is set to 1 at the top of xc_domain_restore, and for the
> > most part is left alone until success, at which point it is set to 0.
> >
> > There is a separate 'frc' which for the most part is used to check function
> > calls, keeping errors separate from 'rc'.
> >
> > For a toolstack which sets callbacks->toolstack_restore(), and the function
> > returns 0, any subsequent error will end up with code flow going to "out;",
> > resulting in the migration being declared a success.
> >
> > For consistency, update the callsites of xc_dom_gnttab{,_hvm}_seed() to use
> > 'frc', even though their use of 'rc' is currently safe.
> >
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > CC: Ian Campbell <Ian.Campbell@citrix.com>
> > CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> > CC: George Dunlap <george.dunlap@eu.citrix.com>
> >
> > ---
> >
> > Changes in v2:
> >   * Don't drop rc = -1 from toolstack_restore().
> >
> > Regarding 4.4: If the two "for consistency" changes to
> > xc_dom_gnttab{,_hvm}_seed() are considered too risky, they can be dropped
> > without affecting the bugfix nature of the patch, but I would argue that
> > leaving some examples of "rc = function_call()" leaves a bad precedent which
> > is likely to lead to similar bugs in the future.
> 
> Yes, these are all pretty clear bug fixes.
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:39:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOEO-0006j0-PC; Thu, 06 Feb 2014 12:39:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEM-0006iH-GZ; Thu, 06 Feb 2014 12:38:58 +0000
Received: from [85.158.143.35:20726] by server-3.bemta-4.messagelabs.com id
	9C/FB-11539-16283F25; Thu, 06 Feb 2014 12:38:57 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391690336!3626393!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10992 invoked from network); 6 Feb 2014 12:38:56 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-14.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 12:38:56 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEG-0008Oc-1y; Thu, 06 Feb 2014 12:38:52 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEF-0000Zm-Nk; Thu, 06 Feb 2014 12:38:52 +0000
Date: Thu, 06 Feb 2014 12:38:51 +0000
Message-Id: <E1WBOEF-0000Zm-Nk@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 85 - Off-by-one error in
 FLASK_AVC_CACHESTAT hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                     Xen Security Advisory XSA-85
                              version 2

          Off-by-one error in FLASK_AVC_CACHESTAT hypercall

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

The FLASK_AVC_CACHESTAT hypercall, which provides access to per-cpu
statistics on the Flask security policy, incorrectly validates the
CPU for which statistics are being requested.

IMPACT
======

An attacker can cause the hypervisor to read past the end of an
array. This may result in either a host crash, leading to a denial of
service, or access to a small and static region of hypervisor memory,
leading to an information leak.

VULNERABLE SYSTEMS
==================

Xen versions 4.2 and later are vulnerable to this issue when built with
XSM/Flask support. XSM support is disabled by default and is enabled
by building with XSM_ENABLE=y.

Only systems with the maximum supported number of physical CPUs are
vulnerable. Systems with a greater number of physical CPUs will only
make use of the maximum supported number and are therefore vulnerable.

By default the following maximums apply:
 * x86_32: 128 (only until Xen 4.2.x)
 * x86_64: 256
These defaults can be overridden at build time via max_phys_cpus=N.

The vulnerable hypercall is exposed to all domains.

MITIGATION
==========

Rebuilding Xen with more supported physical CPUs can avoid the
vulnerability; provided that the supported number is strictly greater
than the actual number of CPUs on any host on which the hypervisor is
to run.

If XSM is compiled in, but not actually in use, compiling it out (with
XSM_ENABLE=n) will avoid the vulnerability.

CREDITS
=======

This issue was discovered by Matthew Daley.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa85.patch        xen-unstable, Xen 4.3.x, Xen 4.2.x

$ sha256sum xsa85*.patch
20571024e6815eeb40d2f92a3d70ae699047cffafb5431ec74b652e0843a5315  xsa85.patch
$

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS84H+AAoJEIP+FMlX6CvZXy8H/An+HT3e3Av9G3PWIv+i10O3
FE7fhT53tBCbDlcqDghoO9PE6YctWV8glJHdg5TfpzXkjbVL2Go/poUhwvVqxePj
ja5x5saXHvXoKwglc7sZmryil5bhecTKspNL5AfTlvP4dyNZMnOAvlbnyCtKUS45
bH0TSonTL50yRH1tCEaIKYDnOisIk3E5yduIpkRnqwamKw+DbHMGlmq5sPZq4rLH
EYa/yhqh4bDStGAlRuBHG8ms+F7SgxH8dTjXhCbTe5BeAxYg1cP5yGX61y14xJJt
KAObUS4E1KOcP1jRWIQ1HhHQxwWwEDdRk+ZQspGuIt34hY1SfMcbpFu7LutcI4Y=
=SiDW
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa85.patch"
Content-Disposition: attachment; filename="xsa85.patch"
Content-Transfer-Encoding: base64

RnJvbSA1OTNiYzhjNjNkNTgyZWMwZmMyYjNhMzUzMzYxMDZjZjljM2E4YjM0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBNYXR0aGV3IERhbGV5
IDxtYXR0ZEBidWdmdXp6LmNvbT4KRGF0ZTogU3VuLCAxMiBKYW4gMjAxNCAx
NDoyOTozMiArMTMwMApTdWJqZWN0OiBbUEFUQ0hdIHhzbS9mbGFzazogY29y
cmVjdCBvZmYtYnktb25lIGluCiBmbGFza19zZWN1cml0eV9hdmNfY2FjaGVz
dGF0cyBjcHUgaWQgY2hlY2sKClRoaXMgaXMgWFNBLTg1CgpTaWduZWQtb2Zm
LWJ5OiBNYXR0aGV3IERhbGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KUmV2aWV3
ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3
ZWQtYnk6IElhbiBDYW1wYmVsbCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+
Ci0tLQogeGVuL3hzbS9mbGFzay9mbGFza19vcC5jIHwgMiArLQogMSBmaWxl
IGNoYW5nZWQsIDEgaW5zZXJ0aW9uKCspLCAxIGRlbGV0aW9uKC0pCgpkaWZm
IC0tZ2l0IGEveGVuL3hzbS9mbGFzay9mbGFza19vcC5jIGIveGVuL3hzbS9m
bGFzay9mbGFza19vcC5jCmluZGV4IDQ0MjZhYjkuLjIyODc4ZjUgMTAwNjQ0
Ci0tLSBhL3hlbi94c20vZmxhc2svZmxhc2tfb3AuYworKysgYi94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKQEAgLTQ1Nyw3ICs0NTcsNyBAQCBzdGF0aWMg
aW50IGZsYXNrX3NlY3VyaXR5X2F2Y19jYWNoZXN0YXRzKHN0cnVjdCB4ZW5f
Zmxhc2tfY2FjaGVfc3RhdHMgKmFyZykKIHsKICAgICBzdHJ1Y3QgYXZjX2Nh
Y2hlX3N0YXRzICpzdDsKIAotICAgIGlmICggYXJnLT5jcHUgPiBucl9jcHVf
aWRzICkKKyAgICBpZiAoIGFyZy0+Y3B1ID49IG5yX2NwdV9pZHMgKQogICAg
ICAgICByZXR1cm4gLUVOT0VOVDsKICAgICBpZiAoICFjcHVfb25saW5lKGFy
Zy0+Y3B1KSApCiAgICAgICAgIHJldHVybiAtRU5PRU5UOwotLSAKMS44LjUu
MgoK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Feb 06 12:39:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOEo-0006rA-73; Thu, 06 Feb 2014 12:39:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEl-0006q9-VA; Thu, 06 Feb 2014 12:39:24 +0000
Received: from [85.158.143.35:32043] by server-3.bemta-4.messagelabs.com id
	C6/0D-11539-B7283F25; Thu, 06 Feb 2014 12:39:23 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391690361!3632109!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5432 invoked from network); 6 Feb 2014 12:39:22 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-15.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 12:39:22 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEg-0008Pj-3h; Thu, 06 Feb 2014 12:39:18 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEf-0000s1-Tq; Thu, 06 Feb 2014 12:39:18 +0000
Date: Thu, 06 Feb 2014 12:39:17 +0000
Message-Id: <E1WBOEf-0000s1-Tq@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 86 - libvchan failure handling
 malicious ring indexes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                     Xen Security Advisory XSA-86
                              version 2

           libvchan failure handling malicious ring indexes

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

libvchan (a library for inter-domain communication) does not correctly
handle unusual or malicious contents in the xenstore ring.  A
malicious guest can exploit this to cause a libvchan-using facility to
read or write past the end of the ring.

IMPACT
======

libvchan-using facilities are vulnerable to denial of service and
perhaps privilege escalation.

There are no such services provided in the upstream Xen Project
codebase.

VULNERABLE SYSTEMS
==================

All versions of libvchan are vulnerable.  Only installations which use
libvchan for communication involving untrusted domains are vulnerable.

libvirt, xapi, xend, libxl and xl do not use libvchan.  If your
installation contains other Xen-related software components it is
possible that they use libvchan and might be vulnerable.

Xen versions 4.1 and earlier do not contain libvchan.

MITIGATION
==========

Disabling libvchan-based facilities could be used to mitigate the
vulnerability.

CREDITS
=======

This issue was discovered by Marek Marczykowski-Górecki of Invisible
Things Lab.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

After the patch is applied to the Xen tree and built, any software
which is statically linked against libvchan will need to be relinked
against the new libvchan.a for the fix to take effect.

xsa86.patch        Xen 4.2.x, 4.3.x, 4.4-RC series, and xen-unstable

$ sha256sum xsa86*.patch
cd2df017e42717dd2a1b6f2fdd3ad30a38d3c0fbdd9d08b5f56ee0a01cd87b51  xsa86.patch
$
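The checksum above can be verified mechanically by feeding sha256sum a "<hash>  <filename>" line via its --check mode. A sketch of that workflow on a stand-in temporary file (the advisory's real hash only matches the genuine xsa86.patch attachment):

```shell
# Illustrative sha256sum --check round trip on a temporary file.
f=$(mktemp)
printf 'example' > "$f"
sha256sum "$f" > "$f.sum"      # record "<hash>  <filename>"
sha256sum --check "$f.sum"     # verifies; exits non-zero on mismatch
rm -f "$f" "$f.sum"
```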
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS84JeAAoJEIP+FMlX6CvZsvYH/3HbxPvs42Al1gncMsc4uh+R
V+j48ENTQzSNhVTtXQq9bUgNk5Dp/kok7RpZbxCWIBl79UUP/fpPUT/FjD5egMOX
NU8FslhmalOkkpmyeX0Kt1SvhQt6FvaozTTOdR47wHerfd+mKkYchFRrkCBvllBU
/UIVItU6fA5xyXSsFy8quT66g2a88OTlv30YTsg3jhDo48FxO7A54ay4xVAIyOFK
4Wl+hpEgTSE47VRSIGriAvjOMSSQjiMFPjR/DSbUMj8FaVhwVSitIEG9cRhn+3HE
I6HqPFzy2jP+Lzj/WFkkZrt/k12GL4cZafg7th3/YcmABfR23QMN5SwfYDLKqqw=
=XbpF
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa86.patch"
Content-Disposition: attachment; filename="xsa86.patch"
Content-Transfer-Encoding: base64

RnJvbSBiNGM0NTI2NDZlZmQzN2I0Y2QwOTk2MjU2ZGQwYWI3YmY2Y2NiN2Y2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/TWFy
ZWs9MjBNYXJjenlrb3dza2ktRz1DMz1CM3JlY2tpPz0KIDxtYXJtYXJla0Bp
bnZpc2libGV0aGluZ3NsYWIuY29tPgpEYXRlOiBNb24sIDIwIEphbiAyMDE0
IDE1OjUxOjU2ICswMDAwClN1YmplY3Q6IFtQQVRDSF0gbGlidmNoYW46IEZp
eCBoYW5kbGluZyBvZiBpbnZhbGlkIHJpbmcgYnVmZmVyIGluZGljZXMKTUlN
RS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFy
c2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKClRo
ZSByZW1vdGUgKGhvc3RpbGUpIHByb2Nlc3MgY2FuIHNldCByaW5nIGJ1ZmZl
ciBpbmRpY2VzIHRvIGFueSB2YWx1ZQphdCBhbnkgdGltZS4gSWYgdGhhdCBo
YXBwZW5zLCBpdCBpcyBwb3NzaWJsZSB0byBnZXQgImJ1ZmZlciBzcGFjZSIK
KGVpdGhlciBmb3Igd3JpdGluZyBkYXRhLCBvciByZWFkeSBmb3IgcmVhZGlu
ZykgbmVnYXRpdmUgb3IgZ3JlYXRlcgp0aGFuIGJ1ZmZlciBzaXplLiAgVGhp
cyB3aWxsIGVuZCB1cCB3aXRoIGJ1ZmZlciBvdmVyZmxvdyBpbiB0aGUgc2Vj
b25kCm1lbWNweSBpbnNpZGUgb2YgZG9fc2VuZC9kb19yZWN2LgoKRml4IHRo
aXMgYnkgaW50cm9kdWNpbmcgbmV3IGF2YWlsYWJsZSBieXRlcyBhY2Nlc3Nv
ciBmdW5jdGlvbnMKcmF3X2dldF9kYXRhX3JlYWR5IGFuZCByYXdfZ2V0X2J1
ZmZlcl9zcGFjZSB3aGljaCBhcmUgcm9idXN0IGFnYWluc3QKbWFkIHJpbmcg
c3RhdGVzLCBhbmQgb25seSByZXR1cm4gc2FuaXRpc2VkIHZhbHVlcy4KClBy
b29mIHNrZXRjaCBvZiBjb3JyZWN0bmVzczoKCk5vdyB7cmQsd3J9X3tjb25z
LHByb2R9IGFyZSBvbmx5IGV2ZXIgdXNlZCBpbiB0aGUgcmF3IGF2YWlsYWJs
ZSBieXRlcwpmdW5jdGlvbnMsIGFuZCBpbiBkb19zZW5kIGFuZCBkb19yZWN2
LgoKVGhlIHJhdyBhdmFpbGFibGUgYnl0ZXMgZnVuY3Rpb25zIGRvIHVuc2ln
bmVkIGFyaXRobWV0aWMgb24gdGhlCnJldHVybmVkIHZhbHVlcy4gIElmIHRo
ZSByZXN1bHQgaXMgIm5lZ2F0aXZlIiBvciB0b28gYmlnIGl0IHdpbGwgYmUK
PnJpbmdfc2l6ZSAoc2luY2Ugd2UgdXNlZCB1bnNpZ25lZCBhcml0aG1ldGlj
KS4gIE90aGVyd2lzZSB0aGUgcmVzdWx0CmlzIGEgcG9zaXRpdmUgaW4tcmFu
Z2UgdmFsdWUgcmVwcmVzZW50aW5nIGEgcmVhc29uYWJsZSByaW5nIHN0YXRl
LCBpbgp3aGljaCBjYXNlIHdlIGNhbiBzYWZlbHkgY29udmVydCBpdCB0byBp
bnQgKGFzIHRoZSByZXN0IG9mIHRoZSBjb2RlCmV4cGVjdHMpLgoKZG9fc2Vu
ZCBhbmQgZG9fcmVjdiBpbW1lZGlhdGVseSBtYXNrIHRoZSByaW5nIGluZGV4
IHZhbHVlIHdpdGggdGhlCnJpbmcgc2l6ZS4gIFRoZSByZXN1bHQgaXMgYWx3
YXlzIGdvaW5nIHRvIGJlIHBsYXVzaWJsZS4gIElmIHRoZSByaW5nCnN0YXRl
IGhhcyBiZWNvbWUgbWFkLCB0aGUgd29yc3QgY2FzZSBpcyB0aGF0IG91ciBi
ZWhhdmlvdXIgaXMKaW5jb25zaXN0ZW50IHdpdGggdGhlIHBlZXIncyByaW5n
IHBvaW50ZXIuICBJLmUuIHdlIHJlYWQgb3Igd3JpdGUgdG8KYXJndWFibHkt
aW5jb3JyZWN0IHBhcnRzIG9mIHRoZSByaW5nIC0gYnV0IGFsd2F5cyBwYXJ0
cyBvZiB0aGUgcmluZy4KQW5kIG9mIGNvdXJzZSBpZiBhIHBlZXIgbWlzb3Bl
cmF0ZXMgdGhlIHJpbmcgdGhleSBjYW4gYWNoaWV2ZSB0aGlzCmVmZmVjdCBh
bnl3YXkuCgpTbyB0aGUgc2VjdXJpdHkgcHJvYmxlbSBpcyBmaXhlZC4KClRo
aXMgaXMgWFNBLTg2LgoKKFRoZSBwYXRjaCBpcyBlc3NlbnRpYWxseSBJYW4g
SmFja3NvbidzIHdvcmssIGFsdGhvdWdoIHBhcnRzIG9mIHRoZQpjb21taXQg
bWVzc2FnZSBhcmUgYnkgTWFyZWsuKQoKU2lnbmVkLW9mZi1ieTogTWFyZWsg
TWFyY3p5a293c2tpLUfDs3JlY2tpIDxtYXJtYXJla0BpbnZpc2libGV0aGlu
Z3NsYWIuY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmph
Y2tzb25AZXUuY2l0cml4LmNvbT4KQ2M6IE1hcmVrIE1hcmN6eWtvd3NraS1H
w7NyZWNraSA8bWFybWFyZWtAaW52aXNpYmxldGhpbmdzbGFiLmNvbT4KQ2M6
IEpvYW5uYSBSdXRrb3dza2EgPGpvYW5uYUBpbnZpc2libGV0aGluZ3NsYWIu
Y29tPgotLS0KIHRvb2xzL2xpYnZjaGFuL2lvLmMgfCAgIDQ3ICsrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrLS0tLS0tCiAxIGZp
bGUgY2hhbmdlZCwgNDEgaW5zZXJ0aW9ucygrKSwgNiBkZWxldGlvbnMoLSkK
CmRpZmYgLS1naXQgYS90b29scy9saWJ2Y2hhbi9pby5jIGIvdG9vbHMvbGli
dmNoYW4vaW8uYwppbmRleCAyMzgzMzY0Li44MDRjNjNjIDEwMDY0NAotLS0g
YS90b29scy9saWJ2Y2hhbi9pby5jCisrKyBiL3Rvb2xzL2xpYnZjaGFuL2lv
LmMKQEAgLTExMSwxMiArMTExLDI2IEBAIHN0YXRpYyBpbmxpbmUgaW50IHNl
bmRfbm90aWZ5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgdWludDhfdCBi
aXQpCiAJCXJldHVybiAwOwogfQogCisvKgorICogR2V0IHRoZSBhbW91bnQg
b2YgYnVmZmVyIHNwYWNlIGF2YWlsYWJsZSwgYW5kIGRvIG5vdGhpbmcgYWJv
dXQKKyAqIG5vdGlmaWNhdGlvbnMuCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50
IHJhd19nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
Cit7CisJdWludDMyX3QgcmVhZHkgPSByZF9wcm9kKGN0cmwpIC0gcmRfY29u
cyhjdHJsKTsKKwlpZiAocmVhZHkgPj0gcmRfcmluZ19zaXplKGN0cmwpKQor
CQkvKiBXZSBoYXZlIG5vIHdheSB0byByZXR1cm4gZXJyb3JzLiAgTG9ja2lu
ZyB1cCB0aGUgcmluZyBpcworCQkgKiBiZXR0ZXIgdGhhbiB0aGUgYWx0ZXJu
YXRpdmVzLiAqLworCQlyZXR1cm4gMDsKKwlyZXR1cm4gcmVhZHk7Cit9CisK
IC8qKgogICogR2V0IHRoZSBhbW91bnQgb2YgYnVmZmVyIHNwYWNlIGF2YWls
YWJsZSBhbmQgZW5hYmxlIG5vdGlmaWNhdGlvbnMgaWYgbmVlZGVkLgogICov
CiBzdGF0aWMgaW5saW5lIGludCBmYXN0X2dldF9kYXRhX3JlYWR5KHN0cnVj
dCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJlcXVlc3QpCiB7Ci0JaW50
IHJlYWR5ID0gcmRfcHJvZChjdHJsKSAtIHJkX2NvbnMoY3RybCk7CisJaW50
IHJlYWR5ID0gcmF3X2dldF9kYXRhX3JlYWR5KGN0cmwpOwogCWlmIChyZWFk
eSA+PSByZXF1ZXN0KQogCQlyZXR1cm4gcmVhZHk7CiAJLyogV2UgcGxhbiB0
byBjb25zdW1lIGFsbCBkYXRhOyBwbGVhc2UgdGVsbCB1cyBpZiB5b3Ugc2Vu
ZCBtb3JlICovCkBAIC0xMjYsNyArMTQwLDcgQEAgc3RhdGljIGlubGluZSBp
bnQgZmFzdF9nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0
cmwsIHNpemVfdCByZXF1ZXN0KQogCSAqIHdpbGwgbm90IGdldCBub3RpZmll
ZCBldmVuIHRob3VnaCB0aGUgYWN0dWFsIGFtb3VudCBvZiBkYXRhIHJlYWR5
IGlzCiAJICogYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHJkX3Byb2QgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiByZF9wcm9kKGN0cmwpIC0g
cmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRhX3JlYWR5KGN0
cmwpOwogfQogCiBpbnQgbGlieGVudmNoYW5fZGF0YV9yZWFkeShzdHJ1Y3Qg
bGlieGVudmNoYW4gKmN0cmwpCkBAIC0xMzUsNyArMTQ5LDIxIEBAIGludCBs
aWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCkKIAkgKiB3aGVuIGl0IGNoYW5nZXMKIAkgKi8KIAlyZXF1ZXN0X25vdGlm
eShjdHJsLCBWQ0hBTl9OT1RJRllfV1JJVEUpOwotCXJldHVybiByZF9wcm9k
KGN0cmwpIC0gcmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRh
X3JlYWR5KGN0cmwpOworfQorCisvKioKKyAqIEdldCB0aGUgYW1vdW50IG9m
IGJ1ZmZlciBzcGFjZSBhdmFpbGFibGUsIGFuZCBkbyBub3RoaW5nCisgKiBh
Ym91dCBub3RpZmljYXRpb25zCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50IHJh
d19nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCkK
K3sKKwl1aW50MzJfdCByZWFkeSA9IHdyX3Jpbmdfc2l6ZShjdHJsKSAtICh3
cl9wcm9kKGN0cmwpIC0gd3JfY29ucyhjdHJsKSk7CisJaWYgKHJlYWR5ID4g
d3JfcmluZ19zaXplKGN0cmwpKQorCQkvKiBXZSBoYXZlIG5vIHdheSB0byBy
ZXR1cm4gZXJyb3JzLiAgTG9ja2luZyB1cCB0aGUgcmluZyBpcworCQkgKiBi
ZXR0ZXIgdGhhbiB0aGUgYWx0ZXJuYXRpdmVzLiAqLworCQlyZXR1cm4gMDsK
KwlyZXR1cm4gcmVhZHk7CiB9CiAKIC8qKgpAQCAtMTQzLDcgKzE3MSw3IEBA
IGludCBsaWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hh
biAqY3RybCkKICAqLwogc3RhdGljIGlubGluZSBpbnQgZmFzdF9nZXRfYnVm
ZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJl
cXVlc3QpCiB7Ci0JaW50IHJlYWR5ID0gd3JfcmluZ19zaXplKGN0cmwpIC0g
KHdyX3Byb2QoY3RybCkgLSB3cl9jb25zKGN0cmwpKTsKKwlpbnQgcmVhZHkg
PSByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIAlpZiAocmVhZHkgPj0g
cmVxdWVzdCkKIAkJcmV0dXJuIHJlYWR5OwogCS8qIFdlIHBsYW4gdG8gZmls
bCB0aGUgYnVmZmVyOyBwbGVhc2UgdGVsbCB1cyB3aGVuIHlvdSd2ZSByZWFk
IGl0ICovCkBAIC0xNTMsNyArMTgxLDcgQEAgc3RhdGljIGlubGluZSBpbnQg
ZmFzdF9nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCwgc2l6ZV90IHJlcXVlc3QKIAkgKiB3aWxsIG5vdCBnZXQgbm90aWZpZWQg
ZXZlbiB0aG91Z2ggdGhlIGFjdHVhbCBhbW91bnQgb2YgYnVmZmVyIHNwYWNl
CiAJICogaXMgYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHdyX2NvbnMgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiB3cl9yaW5nX3NpemUoY3Ry
bCkgLSAod3JfcHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVy
biByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhl
bnZjaGFuX2J1ZmZlcl9zcGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
CkBAIC0xNjIsNyArMTkwLDcgQEAgaW50IGxpYnhlbnZjaGFuX2J1ZmZlcl9z
cGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwpCiAJICogd2hlbiBpdCBj
aGFuZ2VzCiAJICovCiAJcmVxdWVzdF9ub3RpZnkoY3RybCwgVkNIQU5fTk9U
SUZZX1JFQUQpOwotCXJldHVybiB3cl9yaW5nX3NpemUoY3RybCkgLSAod3Jf
cHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVybiByYXdfZ2V0
X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhlbnZjaGFuX3dh
aXQoc3RydWN0IGxpYnhlbnZjaGFuICpjdHJsKQpAQCAtMTc2LDYgKzIwNCw4
IEBAIGludCBsaWJ4ZW52Y2hhbl93YWl0KHN0cnVjdCBsaWJ4ZW52Y2hhbiAq
Y3RybCkKIAogLyoqCiAgKiByZXR1cm5zIC0xIG9uIGVycm9yLCBvciBzaXpl
IG9uIHN1Y2Nlc3MKKyAqCisgKiBjYWxsZXIgbXVzdCBoYXZlIGNoZWNrZWQg
dGhhdCBlbm91Z2ggc3BhY2UgaXMgYXZhaWxhYmxlCiAgKi8KIHN0YXRpYyBp
bnQgZG9fc2VuZChzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIGNvbnN0IHZv
aWQgKmRhdGEsIHNpemVfdCBzaXplKQogewpAQCAtMjQ4LDYgKzI3OCwxMSBA
QCBpbnQgbGlieGVudmNoYW5fd3JpdGUoc3RydWN0IGxpYnhlbnZjaGFuICpj
dHJsLCBjb25zdCB2b2lkICpkYXRhLCBzaXplX3Qgc2l6ZSkKIAl9CiB9CiAK
Ky8qKgorICogcmV0dXJucyAtMSBvbiBlcnJvciwgb3Igc2l6ZSBvbiBzdWNj
ZXNzCisgKgorICogY2FsbGVyIG11c3QgaGF2ZSBjaGVja2VkIHRoYXQgZW5v
dWdoIGRhdGEgaXMgYXZhaWxhYmxlCisgKi8KIHN0YXRpYyBpbnQgZG9fcmVj
dihzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIHZvaWQgKmRhdGEsIHNpemVf
dCBzaXplKQogewogCWludCByZWFsX2lkeCA9IHJkX2NvbnMoY3RybCkgJiAo
cmRfcmluZ19zaXplKGN0cmwpIC0gMSk7Ci0tIAoxLjcuMTAuNAoK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Feb 06 12:39:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOEo-0006rA-73; Thu, 06 Feb 2014 12:39:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEl-0006q9-VA; Thu, 06 Feb 2014 12:39:24 +0000
Received: from [85.158.143.35:32043] by server-3.bemta-4.messagelabs.com id
	C6/0D-11539-B7283F25; Thu, 06 Feb 2014 12:39:23 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391690361!3632109!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5432 invoked from network); 6 Feb 2014 12:39:22 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-15.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 12:39:22 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEg-0008Pj-3h; Thu, 06 Feb 2014 12:39:18 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBOEf-0000s1-Tq; Thu, 06 Feb 2014 12:39:18 +0000
Date: Thu, 06 Feb 2014 12:39:17 +0000
Message-Id: <E1WBOEf-0000s1-Tq@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 86 - libvchan failure handling
 malicious ring indexes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                     Xen Security Advisory XSA-86
                              version 2

           libvchan failure handling malicious ring indexes

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

libvchan (a library for inter-domain communication) does not correctly
handle unusual or malicious contents in the xenstore ring.  A
malicious guest can exploit this to cause a libvchan-using facility to
read or write past the end of the ring.

IMPACT
======

libvchan-using facilities are vulnerable to denial of service and
perhaps privilege escalation.

There are no such services provided in the upstream Xen Project
codebase.

VULNERABLE SYSTEMS
==================

All versions of libvchan are vulnerable.  Only installations which use
libvchan for communication involving untrusted domains are vulnerable.

libvirt, xapi, xend, libxl and xl do not use libvchan.  If your
installation contains other Xen-related software components it is
possible that they use libvchan and might be vulnerable.

Xen versions 4.1 and earlier do not contain libvchan.

MITIGATION
==========

Disabling libvchan-based facilities could be used to mitigate the
vulnerability.

CREDITS
=======

This issue was discovered by Marek Marczykowski-Górecki of Invisible
Things Lab.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

After the patch is applied to the Xen tree and built, any software
which is statically linked against libvchan will need to be relinked
against the new libvchan.a for the fix to take effect.

xsa86.patch        Xen 4.2.x, 4.3.x, 4.4-RC series, and xen-unstable

$ sha256sum xsa86*.patch
cd2df017e42717dd2a1b6f2fdd3ad30a38d3c0fbdd9d08b5f56ee0a01cd87b51  xsa86.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS84JeAAoJEIP+FMlX6CvZsvYH/3HbxPvs42Al1gncMsc4uh+R
V+j48ENTQzSNhVTtXQq9bUgNk5Dp/kok7RpZbxCWIBl79UUP/fpPUT/FjD5egMOX
NU8FslhmalOkkpmyeX0Kt1SvhQt6FvaozTTOdR47wHerfd+mKkYchFRrkCBvllBU
/UIVItU6fA5xyXSsFy8quT66g2a88OTlv30YTsg3jhDo48FxO7A54ay4xVAIyOFK
4Wl+hpEgTSE47VRSIGriAvjOMSSQjiMFPjR/DSbUMj8FaVhwVSitIEG9cRhn+3HE
I6HqPFzy2jP+Lzj/WFkkZrt/k12GL4cZafg7th3/YcmABfR23QMN5SwfYDLKqqw=
=XbpF
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa86.patch"
Content-Disposition: attachment; filename="xsa86.patch"
Content-Transfer-Encoding: base64

RnJvbSBiNGM0NTI2NDZlZmQzN2I0Y2QwOTk2MjU2ZGQwYWI3YmY2Y2NiN2Y2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/TWFy
ZWs9MjBNYXJjenlrb3dza2ktRz1DMz1CM3JlY2tpPz0KIDxtYXJtYXJla0Bp
bnZpc2libGV0aGluZ3NsYWIuY29tPgpEYXRlOiBNb24sIDIwIEphbiAyMDE0
IDE1OjUxOjU2ICswMDAwClN1YmplY3Q6IFtQQVRDSF0gbGlidmNoYW46IEZp
eCBoYW5kbGluZyBvZiBpbnZhbGlkIHJpbmcgYnVmZmVyIGluZGljZXMKTUlN
RS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFy
c2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKClRo
ZSByZW1vdGUgKGhvc3RpbGUpIHByb2Nlc3MgY2FuIHNldCByaW5nIGJ1ZmZl
ciBpbmRpY2VzIHRvIGFueSB2YWx1ZQphdCBhbnkgdGltZS4gSWYgdGhhdCBo
YXBwZW5zLCBpdCBpcyBwb3NzaWJsZSB0byBnZXQgImJ1ZmZlciBzcGFjZSIK
KGVpdGhlciBmb3Igd3JpdGluZyBkYXRhLCBvciByZWFkeSBmb3IgcmVhZGlu
ZykgbmVnYXRpdmUgb3IgZ3JlYXRlcgp0aGFuIGJ1ZmZlciBzaXplLiAgVGhp
cyB3aWxsIGVuZCB1cCB3aXRoIGJ1ZmZlciBvdmVyZmxvdyBpbiB0aGUgc2Vj
b25kCm1lbWNweSBpbnNpZGUgb2YgZG9fc2VuZC9kb19yZWN2LgoKRml4IHRo
aXMgYnkgaW50cm9kdWNpbmcgbmV3IGF2YWlsYWJsZSBieXRlcyBhY2Nlc3Nv
ciBmdW5jdGlvbnMKcmF3X2dldF9kYXRhX3JlYWR5IGFuZCByYXdfZ2V0X2J1
ZmZlcl9zcGFjZSB3aGljaCBhcmUgcm9idXN0IGFnYWluc3QKbWFkIHJpbmcg
c3RhdGVzLCBhbmQgb25seSByZXR1cm4gc2FuaXRpc2VkIHZhbHVlcy4KClBy
b29mIHNrZXRjaCBvZiBjb3JyZWN0bmVzczoKCk5vdyB7cmQsd3J9X3tjb25z
LHByb2R9IGFyZSBvbmx5IGV2ZXIgdXNlZCBpbiB0aGUgcmF3IGF2YWlsYWJs
ZSBieXRlcwpmdW5jdGlvbnMsIGFuZCBpbiBkb19zZW5kIGFuZCBkb19yZWN2
LgoKVGhlIHJhdyBhdmFpbGFibGUgYnl0ZXMgZnVuY3Rpb25zIGRvIHVuc2ln
bmVkIGFyaXRobWV0aWMgb24gdGhlCnJldHVybmVkIHZhbHVlcy4gIElmIHRo
ZSByZXN1bHQgaXMgIm5lZ2F0aXZlIiBvciB0b28gYmlnIGl0IHdpbGwgYmUK
PnJpbmdfc2l6ZSAoc2luY2Ugd2UgdXNlZCB1bnNpZ25lZCBhcml0aG1ldGlj
KS4gIE90aGVyd2lzZSB0aGUgcmVzdWx0CmlzIGEgcG9zaXRpdmUgaW4tcmFu
Z2UgdmFsdWUgcmVwcmVzZW50aW5nIGEgcmVhc29uYWJsZSByaW5nIHN0YXRl
LCBpbgp3aGljaCBjYXNlIHdlIGNhbiBzYWZlbHkgY29udmVydCBpdCB0byBp
bnQgKGFzIHRoZSByZXN0IG9mIHRoZSBjb2RlCmV4cGVjdHMpLgoKZG9fc2Vu
ZCBhbmQgZG9fcmVjdiBpbW1lZGlhdGVseSBtYXNrIHRoZSByaW5nIGluZGV4
IHZhbHVlIHdpdGggdGhlCnJpbmcgc2l6ZS4gIFRoZSByZXN1bHQgaXMgYWx3
YXlzIGdvaW5nIHRvIGJlIHBsYXVzaWJsZS4gIElmIHRoZSByaW5nCnN0YXRl
IGhhcyBiZWNvbWUgbWFkLCB0aGUgd29yc3QgY2FzZSBpcyB0aGF0IG91ciBi
ZWhhdmlvdXIgaXMKaW5jb25zaXN0ZW50IHdpdGggdGhlIHBlZXIncyByaW5n
IHBvaW50ZXIuICBJLmUuIHdlIHJlYWQgb3Igd3JpdGUgdG8KYXJndWFibHkt
aW5jb3JyZWN0IHBhcnRzIG9mIHRoZSByaW5nIC0gYnV0IGFsd2F5cyBwYXJ0
cyBvZiB0aGUgcmluZy4KQW5kIG9mIGNvdXJzZSBpZiBhIHBlZXIgbWlzb3Bl
cmF0ZXMgdGhlIHJpbmcgdGhleSBjYW4gYWNoaWV2ZSB0aGlzCmVmZmVjdCBh
bnl3YXkuCgpTbyB0aGUgc2VjdXJpdHkgcHJvYmxlbSBpcyBmaXhlZC4KClRo
aXMgaXMgWFNBLTg2LgoKKFRoZSBwYXRjaCBpcyBlc3NlbnRpYWxseSBJYW4g
SmFja3NvbidzIHdvcmssIGFsdGhvdWdoIHBhcnRzIG9mIHRoZQpjb21taXQg
bWVzc2FnZSBhcmUgYnkgTWFyZWsuKQoKU2lnbmVkLW9mZi1ieTogTWFyZWsg
TWFyY3p5a293c2tpLUfDs3JlY2tpIDxtYXJtYXJla0BpbnZpc2libGV0aGlu
Z3NsYWIuY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmph
Y2tzb25AZXUuY2l0cml4LmNvbT4KQ2M6IE1hcmVrIE1hcmN6eWtvd3NraS1H
w7NyZWNraSA8bWFybWFyZWtAaW52aXNpYmxldGhpbmdzbGFiLmNvbT4KQ2M6
IEpvYW5uYSBSdXRrb3dza2EgPGpvYW5uYUBpbnZpc2libGV0aGluZ3NsYWIu
Y29tPgotLS0KIHRvb2xzL2xpYnZjaGFuL2lvLmMgfCAgIDQ3ICsrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrLS0tLS0tCiAxIGZp
bGUgY2hhbmdlZCwgNDEgaW5zZXJ0aW9ucygrKSwgNiBkZWxldGlvbnMoLSkK
CmRpZmYgLS1naXQgYS90b29scy9saWJ2Y2hhbi9pby5jIGIvdG9vbHMvbGli
dmNoYW4vaW8uYwppbmRleCAyMzgzMzY0Li44MDRjNjNjIDEwMDY0NAotLS0g
YS90b29scy9saWJ2Y2hhbi9pby5jCisrKyBiL3Rvb2xzL2xpYnZjaGFuL2lv
LmMKQEAgLTExMSwxMiArMTExLDI2IEBAIHN0YXRpYyBpbmxpbmUgaW50IHNl
bmRfbm90aWZ5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgdWludDhfdCBi
aXQpCiAJCXJldHVybiAwOwogfQogCisvKgorICogR2V0IHRoZSBhbW91bnQg
b2YgYnVmZmVyIHNwYWNlIGF2YWlsYWJsZSwgYW5kIGRvIG5vdGhpbmcgYWJv
dXQKKyAqIG5vdGlmaWNhdGlvbnMuCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50
IHJhd19nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
Cit7CisJdWludDMyX3QgcmVhZHkgPSByZF9wcm9kKGN0cmwpIC0gcmRfY29u
cyhjdHJsKTsKKwlpZiAocmVhZHkgPj0gcmRfcmluZ19zaXplKGN0cmwpKQor
CQkvKiBXZSBoYXZlIG5vIHdheSB0byByZXR1cm4gZXJyb3JzLiAgTG9ja2lu
ZyB1cCB0aGUgcmluZyBpcworCQkgKiBiZXR0ZXIgdGhhbiB0aGUgYWx0ZXJu
YXRpdmVzLiAqLworCQlyZXR1cm4gMDsKKwlyZXR1cm4gcmVhZHk7Cit9CisK
IC8qKgogICogR2V0IHRoZSBhbW91bnQgb2YgYnVmZmVyIHNwYWNlIGF2YWls
YWJsZSBhbmQgZW5hYmxlIG5vdGlmaWNhdGlvbnMgaWYgbmVlZGVkLgogICov
CiBzdGF0aWMgaW5saW5lIGludCBmYXN0X2dldF9kYXRhX3JlYWR5KHN0cnVj
dCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJlcXVlc3QpCiB7Ci0JaW50
IHJlYWR5ID0gcmRfcHJvZChjdHJsKSAtIHJkX2NvbnMoY3RybCk7CisJaW50
IHJlYWR5ID0gcmF3X2dldF9kYXRhX3JlYWR5KGN0cmwpOwogCWlmIChyZWFk
eSA+PSByZXF1ZXN0KQogCQlyZXR1cm4gcmVhZHk7CiAJLyogV2UgcGxhbiB0
byBjb25zdW1lIGFsbCBkYXRhOyBwbGVhc2UgdGVsbCB1cyBpZiB5b3Ugc2Vu
ZCBtb3JlICovCkBAIC0xMjYsNyArMTQwLDcgQEAgc3RhdGljIGlubGluZSBp
bnQgZmFzdF9nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0
cmwsIHNpemVfdCByZXF1ZXN0KQogCSAqIHdpbGwgbm90IGdldCBub3RpZmll
ZCBldmVuIHRob3VnaCB0aGUgYWN0dWFsIGFtb3VudCBvZiBkYXRhIHJlYWR5
IGlzCiAJICogYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHJkX3Byb2QgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiByZF9wcm9kKGN0cmwpIC0g
cmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRhX3JlYWR5KGN0
cmwpOwogfQogCiBpbnQgbGlieGVudmNoYW5fZGF0YV9yZWFkeShzdHJ1Y3Qg
bGlieGVudmNoYW4gKmN0cmwpCkBAIC0xMzUsNyArMTQ5LDIxIEBAIGludCBs
aWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCkKIAkgKiB3aGVuIGl0IGNoYW5nZXMKIAkgKi8KIAlyZXF1ZXN0X25vdGlm
eShjdHJsLCBWQ0hBTl9OT1RJRllfV1JJVEUpOwotCXJldHVybiByZF9wcm9k
KGN0cmwpIC0gcmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRh
X3JlYWR5KGN0cmwpOworfQorCisvKioKKyAqIEdldCB0aGUgYW1vdW50IG9m
IGJ1ZmZlciBzcGFjZSBhdmFpbGFibGUsIGFuZCBkbyBub3RoaW5nCisgKiBh
Ym91dCBub3RpZmljYXRpb25zCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50IHJh
d19nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCkK
K3sKKwl1aW50MzJfdCByZWFkeSA9IHdyX3Jpbmdfc2l6ZShjdHJsKSAtICh3
cl9wcm9kKGN0cmwpIC0gd3JfY29ucyhjdHJsKSk7CisJaWYgKHJlYWR5ID4g
d3JfcmluZ19zaXplKGN0cmwpKQorCQkvKiBXZSBoYXZlIG5vIHdheSB0byBy
ZXR1cm4gZXJyb3JzLiAgTG9ja2luZyB1cCB0aGUgcmluZyBpcworCQkgKiBi
ZXR0ZXIgdGhhbiB0aGUgYWx0ZXJuYXRpdmVzLiAqLworCQlyZXR1cm4gMDsK
KwlyZXR1cm4gcmVhZHk7CiB9CiAKIC8qKgpAQCAtMTQzLDcgKzE3MSw3IEBA
IGludCBsaWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hh
biAqY3RybCkKICAqLwogc3RhdGljIGlubGluZSBpbnQgZmFzdF9nZXRfYnVm
ZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJl
cXVlc3QpCiB7Ci0JaW50IHJlYWR5ID0gd3JfcmluZ19zaXplKGN0cmwpIC0g
KHdyX3Byb2QoY3RybCkgLSB3cl9jb25zKGN0cmwpKTsKKwlpbnQgcmVhZHkg
PSByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIAlpZiAocmVhZHkgPj0g
cmVxdWVzdCkKIAkJcmV0dXJuIHJlYWR5OwogCS8qIFdlIHBsYW4gdG8gZmls
bCB0aGUgYnVmZmVyOyBwbGVhc2UgdGVsbCB1cyB3aGVuIHlvdSd2ZSByZWFk
IGl0ICovCkBAIC0xNTMsNyArMTgxLDcgQEAgc3RhdGljIGlubGluZSBpbnQg
ZmFzdF9nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCwgc2l6ZV90IHJlcXVlc3QKIAkgKiB3aWxsIG5vdCBnZXQgbm90aWZpZWQg
ZXZlbiB0aG91Z2ggdGhlIGFjdHVhbCBhbW91bnQgb2YgYnVmZmVyIHNwYWNl
CiAJICogaXMgYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHdyX2NvbnMgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiB3cl9yaW5nX3NpemUoY3Ry
bCkgLSAod3JfcHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVy
biByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhl
bnZjaGFuX2J1ZmZlcl9zcGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
CkBAIC0xNjIsNyArMTkwLDcgQEAgaW50IGxpYnhlbnZjaGFuX2J1ZmZlcl9z
cGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwpCiAJICogd2hlbiBpdCBj
aGFuZ2VzCiAJICovCiAJcmVxdWVzdF9ub3RpZnkoY3RybCwgVkNIQU5fTk9U
SUZZX1JFQUQpOwotCXJldHVybiB3cl9yaW5nX3NpemUoY3RybCkgLSAod3Jf
cHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVybiByYXdfZ2V0
X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhlbnZjaGFuX3dh
aXQoc3RydWN0IGxpYnhlbnZjaGFuICpjdHJsKQpAQCAtMTc2LDYgKzIwNCw4
IEBAIGludCBsaWJ4ZW52Y2hhbl93YWl0KHN0cnVjdCBsaWJ4ZW52Y2hhbiAq
Y3RybCkKIAogLyoqCiAgKiByZXR1cm5zIC0xIG9uIGVycm9yLCBvciBzaXpl
IG9uIHN1Y2Nlc3MKKyAqCisgKiBjYWxsZXIgbXVzdCBoYXZlIGNoZWNrZWQg
dGhhdCBlbm91Z2ggc3BhY2UgaXMgYXZhaWxhYmxlCiAgKi8KIHN0YXRpYyBp
bnQgZG9fc2VuZChzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIGNvbnN0IHZv
aWQgKmRhdGEsIHNpemVfdCBzaXplKQogewpAQCAtMjQ4LDYgKzI3OCwxMSBA
QCBpbnQgbGlieGVudmNoYW5fd3JpdGUoc3RydWN0IGxpYnhlbnZjaGFuICpj
dHJsLCBjb25zdCB2b2lkICpkYXRhLCBzaXplX3Qgc2l6ZSkKIAl9CiB9CiAK
Ky8qKgorICogcmV0dXJucyAtMSBvbiBlcnJvciwgb3Igc2l6ZSBvbiBzdWNj
ZXNzCisgKgorICogY2FsbGVyIG11c3QgaGF2ZSBjaGVja2VkIHRoYXQgZW5v
dWdoIGRhdGEgaXMgYXZhaWxhYmxlCisgKi8KIHN0YXRpYyBpbnQgZG9fcmVj
dihzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIHZvaWQgKmRhdGEsIHNpemVf
dCBzaXplKQogewogCWludCByZWFsX2lkeCA9IHJkX2NvbnMoY3RybCkgJiAo
cmRfcmluZ19zaXplKGN0cmwpIC0gMSk7Ci0tIAoxLjcuMTAuNAoK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Feb 06 12:39:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:39:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOF7-000704-Jc; Thu, 06 Feb 2014 12:39:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOF6-0006zH-56
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:39:44 +0000
Received: from [85.158.143.35:59588] by server-2.bemta-4.messagelabs.com id
	A3/63-10891-F8283F25; Thu, 06 Feb 2014 12:39:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391690381!3613364!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4810 invoked from network); 6 Feb 2014 12:39:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:39:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100441788"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 12:39:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:39:40 -0500
Message-ID: <1391690379.23098.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 12:39:39 +0000
In-Reply-To: <1391596564.6497.88.camel@kazak.uk.xensource.com>
References: <1391591544-16072-1-git-send-email-ian.campbell@citrix.com>
	<52F21348.1070309@linaro.org>
	<1391596564.6497.88.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org, tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: remove inaccurate statement
 about multiboot module path
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 10:36 +0000, Ian Campbell wrote:
> On Wed, 2014-02-05 at 10:32 +0000, Julien Grall wrote:
> > 
> > On 05/02/14 09:12, Ian Campbell wrote:
> > > It is the compatible string which matters, not the absolute path, and in any
> > > case /chosen/module@N is more often used than /chosen/modules/module@N.
> > 
> > I'm using /chosen/modules/module@N on my script :)
> 
> I knew I'd regret making that statement ;-)
> 
> > 
> > Is it for Xen 4.4?
> 
> I don't see any reason to hold off on docs improvements.
> 
> > 
> > >
> > > Reported-by: Fu Wei <fu.wei@linaro.org>
> > > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > > Acked-by: Julien Grall <julien.grall@linaro.org>
> 

Applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:40:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOFU-00078H-39; Thu, 06 Feb 2014 12:40:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOFS-00077m-Au
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:40:06 +0000
Received: from [85.158.143.35:64380] by server-2.bemta-4.messagelabs.com id
	18/24-10891-5A283F25; Thu, 06 Feb 2014 12:40:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391690403!3627675!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20359 invoked from network); 6 Feb 2014 12:40:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:40:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98561886"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:40:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:40:02 -0500
Message-ID: <1391690401.23098.110.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:40:01 +0000
In-Reply-To: <52F369C6.5020108@eu.citrix.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<1391681714.23098.35.camel@kazak.uk.xensource.com>
	<52F369C6.5020108@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 10:53 +0000, George Dunlap wrote:
> On 02/06/2014 10:15 AM, Ian Campbell wrote:
> > On Thu, 2014-02-06 at 12:58 +0530, Pranavkumar Sawargaonkar wrote:
> >> This patch adds VFP save/restore support for arm64 across context switches.
> >>
> >> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> > This should go in for 4.4 -- not context switching floating point
> > registers is obviously a big problem. (bit embarrassed that I forgot
> > about this...)
> 
> Yes, absolutely.
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied, thanks all.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:40:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:40:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOG9-0007Pl-U6; Thu, 06 Feb 2014 12:40:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOG7-0007Oo-Ub
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:40:48 +0000
Received: from [193.109.254.147:64175] by server-9.bemta-14.messagelabs.com id
	21/23-24895-FC283F25; Thu, 06 Feb 2014 12:40:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391690445!2468997!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3968 invoked from network); 6 Feb 2014 12:40:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:40:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100441923"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 12:40:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:40:44 -0500
Message-ID: <1391690442.23098.111.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:40:42 +0000
In-Reply-To: <52F25151.207@eu.citrix.com>
References: <1391493932-31268-1-git-send-email-pranavkumar@linaro.org>
	<1391507006.10515.68.camel@kazak.uk.xensource.com>
	<52F25151.207@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: platforms: Remove determining
 reset specific values from dts for XGENE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 14:57 +0000, George Dunlap wrote:
> On 02/04/2014 09:43 AM, Ian Campbell wrote:
> > On Tue, 2014-02-04 at 11:35 +0530, Pranavkumar Sawargaonkar wrote:
> >> This patch removes reading of the reset-specific values (address, size and
> >> mask) from the dts and uses values defined in the code instead.
> >> This is because the xgene reset driver (submitted to Linux) is currently
> >> going through a change (which is not yet accepted), and the new driver has
> >> a new type of dts binding for reset.
> >> Hence, until the Linux driver comes to some conclusion, we will use
> >> hardcoded values instead of reading from the dts so that the Xen code will
> >> not break due to the Linux transition.
> >>
> >> Ref:
> >> http://lists.xen.org/archives/html/xen-devel/2014-01/msg02256.html
> >> http://www.gossamer-threads.com/lists/linux/kernel/1845585
> >>
> >> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> > George -- I'd like to take this into 4.4 to avoid shipping a Xen which
> > relies on an unagreed DTS binding (which is an ABI of sorts).
> 
> Is this the reboot binding one with 3 options, #2 of which was to hard-code 
> the values rather than using DT?
> 
> Assuming so:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Applied, thanks all.

I reflowed the commit message to make it fit in 80 columns.
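For context, the change described in the commit message amounts to something
like the following sketch. This is a hypothetical simplification with invented
register values and names (not the actual xgene-storm platform code in Xen):

```c
#include <assert.h>
#include <stdint.h>

/* Reset parameters formerly parsed from the device tree.  While the
 * Linux DT binding for the xgene reset driver is still in flux, they
 * are compiled in instead.  All values here are invented for
 * illustration only. */
#define XGENE_RESET_ADDR  UINT64_C(0x17000014)
#define XGENE_RESET_SIZE  UINT64_C(0x100)
#define XGENE_RESET_MASK  UINT32_C(0x1)

struct reset_info {
    uint64_t addr;
    uint64_t size;
    uint32_t mask;
};

/* Hypothetical helper: previously this would have queried the dts for
 * the reset node's address/size/mask; now it returns the compiled-in
 * values, so a change in the unfinished Linux binding cannot break Xen. */
static struct reset_info xgene_reset_info(void)
{
    struct reset_info ri = {
        .addr = XGENE_RESET_ADDR,
        .size = XGENE_RESET_SIZE,
        .mask = XGENE_RESET_MASK,
    };
    return ri;
}
```

The trade-off is deliberate: hardcoded values can be corrected in a Xen point
release, whereas shipping support for an unagreed DT binding creates an ABI
that is hard to retract later.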


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:40:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOGE-0007Rz-Bp; Thu, 06 Feb 2014 12:40:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOGD-0007RN-Lw
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:40:53 +0000
Received: from [193.109.254.147:14271] by server-11.bemta-14.messagelabs.com
	id B7/51-24604-5D283F25; Thu, 06 Feb 2014 12:40:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391690451!2474316!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19540 invoked from network); 6 Feb 2014 12:40:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:40:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98562034"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:40:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:40:50 -0500
Message-ID: <1391690448.23098.112.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:40:48 +0000
In-Reply-To: <52F215E1.1040401@eu.citrix.com>
References: <1391186147-15191-1-git-send-email-anthony.perard@citrix.com>
	<1391525210.6497.9.camel@kazak.uk.xensource.com>
	<52F215E1.1040401@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>, Yun Wang <bimingery@gmail.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] libxl: Fix vcpu-set for PV guest.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-05 at 10:43 +0000, George Dunlap wrote:
> On 02/04/2014 02:46 PM, Ian Campbell wrote:
> > On Fri, 2014-01-31 at 16:35 +0000, Anthony PERARD wrote:
> >> vcpu-set will try to use the HVM path (through QEMU) instead of the PV
> >> path (through xenstore) for a PV guest, if there is a QEMU running for
> >> this domain. This patch checks which kind of guest is running before
> >> doing any call.
> >>
> >> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> >
> >> ---
> >>
> >> Yun, does this patch fix the issue with your PV guest?
> > Yun, any feedback on this patch?
> >
> > George -- I think vcpu-set not working for PV guests is a bug worth
> > fixing in 4.4 so I intend to apply.
> 
> Yes, please do.

Done. Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOJP-0008I9-4d; Thu, 06 Feb 2014 12:44:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBOJN-0008HI-Mz
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:44:09 +0000
Received: from [85.158.139.211:14040] by server-10.bemta-5.messagelabs.com id
	58/EB-08578-79383F25; Thu, 06 Feb 2014 12:44:07 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391690646!2106628!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26706 invoked from network); 6 Feb 2014 12:44:06 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:44:06 -0000
Received: by mail-we0-f171.google.com with SMTP id u56so1266863wes.30
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 04:44:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=jjtquIP5uqmw9OvdfAOCIyvqEQa0DYYnKSlXdQO9UJ0=;
	b=VhNg+UKhO/0rItnzWm+hbbGx8k6i7fTZs10cvdILqoLCSqXkq1Dcrm94W0BOczSTmo
	mjc3d0O+nvYiHe9jUMd3fGxk8SNMCAofeYxM+JV/P9SKo27/r4M5+j5IHoZAxmt3rw+w
	AaR0lqfVg03sp7ZrGmI5i5tSGWlBDt4Wqs5v5GquiQBg1Z97Qni/9NC3U6vbw9kdio+V
	c4rRJHYPALhDNoEGMN+9X8Z6237l1aSOdKbXSX660NVX+xd95XwJRmCmtsHUzFqlkKte
	+oezsyJLSXIJ0diq1t8tcZkNMPeKiaEenBPBzdyCAI/ufYoJDMLfdJCymdu6DyNfuh/T
	HJbQ==
X-Gm-Message-State: ALoCoQkWi8x5F0AjwWZJbswfAFGLgLSz4+48+ixFURGQr7aZPajPqyM+gzq24+bV3pWp03nJnMUz
X-Received: by 10.194.24.65 with SMTP id s1mr5797676wjf.38.1391690646279;
	Thu, 06 Feb 2014 04:44:06 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id f3sm5435199wiv.2.2014.02.06.04.44.04
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 04:44:05 -0800 (PST)
Message-ID: <52F38394.50405@linaro.org>
Date: Thu, 06 Feb 2014 12:44:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>, xen-devel@lists.xen.org
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
In-Reply-To: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
Cc: stefano.stabellini@citrix.com, patches@apm.com, ian.campbell@citrix.com,
	Anup Patel <anup.patel@linaro.org>, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
 support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On 06/02/14 07:28, Pranavkumar Sawargaonkar wrote:
> This patch adds VFP save/restore support for arm64 across context switches.
>
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>   xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>   xen/include/asm-arm/arm64/vfp.h |    4 ++++
>   2 files changed, 53 insertions(+)
>
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index 74e6a50..8c1479a 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -1,13 +1,62 @@
>   #include <xen/sched.h>
>   #include <asm/processor.h>
> +#include <asm/cpufeature.h>
>   #include <asm/vfp.h>
>
>   void vfp_save_state(struct vcpu *v)
>   {
>       /* TODO: implement it */
> +    if ( !cpu_has_fp )
> +        return;
> +
> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");

I remember we had a discussion about the memory constraints when I
implemented the VFP context switch for arm32
(http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).

I think you should use "=Q" here as well, to avoid clobbering the whole
memory.

At the same time, is the (char *) cast necessary?

> +
> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>   }
>
>   void vfp_restore_state(struct vcpu *v)
>   {
>       /* TODO: implement it */
> +    if ( !cpu_has_fp )
> +        return;
> +
> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");

Same here.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:44:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOJP-0008I9-4d; Thu, 06 Feb 2014 12:44:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBOJN-0008HI-Mz
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:44:09 +0000
Received: from [85.158.139.211:14040] by server-10.bemta-5.messagelabs.com id
	58/EB-08578-79383F25; Thu, 06 Feb 2014 12:44:07 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391690646!2106628!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26706 invoked from network); 6 Feb 2014 12:44:06 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:44:06 -0000
Received: by mail-we0-f171.google.com with SMTP id u56so1266863wes.30
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 04:44:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=jjtquIP5uqmw9OvdfAOCIyvqEQa0DYYnKSlXdQO9UJ0=;
	b=VhNg+UKhO/0rItnzWm+hbbGx8k6i7fTZs10cvdILqoLCSqXkq1Dcrm94W0BOczSTmo
	mjc3d0O+nvYiHe9jUMd3fGxk8SNMCAofeYxM+JV/P9SKo27/r4M5+j5IHoZAxmt3rw+w
	AaR0lqfVg03sp7ZrGmI5i5tSGWlBDt4Wqs5v5GquiQBg1Z97Qni/9NC3U6vbw9kdio+V
	c4rRJHYPALhDNoEGMN+9X8Z6237l1aSOdKbXSX660NVX+xd95XwJRmCmtsHUzFqlkKte
	+oezsyJLSXIJ0diq1t8tcZkNMPeKiaEenBPBzdyCAI/ufYoJDMLfdJCymdu6DyNfuh/T
	HJbQ==
X-Gm-Message-State: ALoCoQkWi8x5F0AjwWZJbswfAFGLgLSz4+48+ixFURGQr7aZPajPqyM+gzq24+bV3pWp03nJnMUz
X-Received: by 10.194.24.65 with SMTP id s1mr5797676wjf.38.1391690646279;
	Thu, 06 Feb 2014 04:44:06 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id f3sm5435199wiv.2.2014.02.06.04.44.04
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 04:44:05 -0800 (PST)
Message-ID: <52F38394.50405@linaro.org>
Date: Thu, 06 Feb 2014 12:44:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>, xen-devel@lists.xen.org
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
In-Reply-To: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
Cc: stefano.stabellini@citrix.com, patches@apm.com, ian.campbell@citrix.com,
	Anup Patel <anup.patel@linaro.org>, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
 support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

On 06/02/14 07:28, Pranavkumar Sawargaonkar wrote:
> This patch adds VFP save/restore support for arm64 across context switches.
>
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>   xen/arch/arm/arm64/vfp.c        |   49 +++++++++++++++++++++++++++++++++++++++
>   xen/include/asm-arm/arm64/vfp.h |    4 ++++
>   2 files changed, 53 insertions(+)
>
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index 74e6a50..8c1479a 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -1,13 +1,62 @@
>   #include <xen/sched.h>
>   #include <asm/processor.h>
> +#include <asm/cpufeature.h>
>   #include <asm/vfp.h>
>
>   void vfp_save_state(struct vcpu *v)
>   {
>       /* TODO: implement it */
> +    if ( !cpu_has_fp )
> +        return;
> +
> +    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
> +                 "stp q2, q3, [%0, #16 * 2]\n\t"
> +                 "stp q4, q5, [%0, #16 * 4]\n\t"
> +                 "stp q6, q7, [%0, #16 * 6]\n\t"
> +                 "stp q8, q9, [%0, #16 * 8]\n\t"
> +                 "stp q10, q11, [%0, #16 * 10]\n\t"
> +                 "stp q12, q13, [%0, #16 * 12]\n\t"
> +                 "stp q14, q15, [%0, #16 * 14]\n\t"
> +                 "stp q16, q17, [%0, #16 * 16]\n\t"
> +                 "stp q18, q19, [%0, #16 * 18]\n\t"
> +                 "stp q20, q21, [%0, #16 * 20]\n\t"
> +                 "stp q22, q23, [%0, #16 * 22]\n\t"
> +                 "stp q24, q25, [%0, #16 * 24]\n\t"
> +                 "stp q26, q27, [%0, #16 * 26]\n\t"
> +                 "stp q28, q29, [%0, #16 * 28]\n\t"
> +                 "stp q30, q31, [%0, #16 * 30]\n\t"
> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");

I remember we had a discussion about the memory constraints when I 
implemented the VFP context switch for arm32 
(http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).

I think you should also use "=Q" here, to avoid clobbering the whole of memory.

At the same time, is the (char *) cast necessary?

> +
> +    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
> +    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
> +    v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
>   }
>
>   void vfp_restore_state(struct vcpu *v)
>   {
>       /* TODO: implement it */
> +    if ( !cpu_has_fp )
> +        return;
> +
> +    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
> +                 "ldp q2, q3, [%0, #16 * 2]\n\t"
> +                 "ldp q4, q5, [%0, #16 * 4]\n\t"
> +                 "ldp q6, q7, [%0, #16 * 6]\n\t"
> +                 "ldp q8, q9, [%0, #16 * 8]\n\t"
> +                 "ldp q10, q11, [%0, #16 * 10]\n\t"
> +                 "ldp q12, q13, [%0, #16 * 12]\n\t"
> +                 "ldp q14, q15, [%0, #16 * 14]\n\t"
> +                 "ldp q16, q17, [%0, #16 * 16]\n\t"
> +                 "ldp q18, q19, [%0, #16 * 18]\n\t"
> +                 "ldp q20, q21, [%0, #16 * 20]\n\t"
> +                 "ldp q22, q23, [%0, #16 * 22]\n\t"
> +                 "ldp q24, q25, [%0, #16 * 24]\n\t"
> +                 "ldp q26, q27, [%0, #16 * 26]\n\t"
> +                 "ldp q28, q29, [%0, #16 * 28]\n\t"
> +                 "ldp q30, q31, [%0, #16 * 30]\n\t"
> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");

Same here.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:56:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:56:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOVJ-0000tg-LR; Thu, 06 Feb 2014 12:56:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WBOVH-0000ta-MR
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 12:56:28 +0000
Received: from [193.109.254.147:31878] by server-3.bemta-14.messagelabs.com id
	D2/DA-00432-A7683F25; Thu, 06 Feb 2014 12:56:26 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391691384!2486677!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27848 invoked from network); 6 Feb 2014 12:56:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:56:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98565143"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:56:23 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 07:56:23 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:56:16 +0000
Message-ID: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v8] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override either they follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
  m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour
- it cuts out common parts from m2p_*_override functions to
  *_foreign_p2m_mapping functions

It also removes a stray space from page.h and changes ret to 0 in the
XENFEAT_auto_translated_physmap case, as that is the only possible return
value there.
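As a minimal sketch of the wrapper pattern described above (hypothetical types and names, not the actual kernel code): one internal worker takes the new m2p_override flag, and two thin public entry points select the behaviour:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real kernel types. */
struct page;
struct map_op { int status; };

static bool last_override; /* records the flag, for illustration only */

/* Internal worker: behaviour selected by the new m2p_override parameter. */
static int __map_refs(struct map_op *map_ops, struct map_op *kmap_ops,
                      struct page **pages, unsigned int count,
                      bool m2p_override)
{
    (void)map_ops; (void)pages; (void)count;
    /* kmap_ops only makes sense on the userspace (override) path */
    assert(!(kmap_ops && !m2p_override));
    last_override = m2p_override;
    return 0; /* the hypercall and p2m updates would go here */
}

/* Kernel-only callers (blkback, future netback): no m2p_override. */
int map_refs(struct map_op *map_ops, struct page **pages, unsigned int count)
{
    return __map_refs(map_ops, NULL, pages, count, false);
}

/* gntdev: keeps the old behaviour, including the kmap_ops handling. */
int map_refs_userspace(struct map_op *map_ops, struct map_op *kmap_ops,
                       struct page **pages, unsigned int count)
{
    return __map_refs(map_ops, kmap_ops, pages, count, true);
}
```

The real series applies this shape to gnttab_[un]map_refs, so kernel-only users skip the m2p_override lock entirely.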

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain the old behaviour where it is needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail why ret = 0 now at some places

v6:
- don't pass pfn to m2p* functions, just get it locally

v7:
- the previous version broke the build on ARM, as those p2m changes are not
  needed there. I've put them into arch-specific functions, which are stubs on ARM

v8:
- give credit to Anthony Liguori who submitted a very similar patch originally:
http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
- create ARM stub for get_phys_to_machine
- move definition of mfn in __gnttab_unmap_refs to the right place

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Original-by: Anthony Liguori <aliguori@amazon.com>
---
 arch/arm/include/asm/xen/page.h     |   20 +++++++++-
 arch/x86/include/asm/xen/page.h     |    7 +++-
 arch/x86/xen/p2m.c                  |   49 ++++++++++++++++-------
 drivers/block/xen-blkback/blkback.c |   15 +++----
 drivers/xen/gntdev.c                |   13 +++---
 drivers/xen/grant-table.c           |   74 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 7 files changed, 140 insertions(+), 46 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index e0965ab..d26c3d7 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -97,13 +97,31 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
 	return NULL;
 }
 
+static inline unsigned long get_phys_to_machine(unsigned long pfn)
+{
+	return 0;
+}
+
+static inline int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	return 0;
+}
+
 static inline int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
 {
 	return 0;
 }
 
-static inline int m2p_remove_override(struct page *page, bool clear_pte)
+static inline int restore_foreign_p2m_mapping(struct page *page,
+					      unsigned long mfn)
+{
+	return 0;
+}
+
+static inline int m2p_remove_override(struct page *page,
+				      struct gnttab_map_grant_ref *kmap_op,
+				      unsigned long mfn)
 {
 	return 0;
 }
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 3e276eb..0340954 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,13 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
+extern int set_foreign_p2m_mapping(struct page *page, unsigned long mfn);
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
+extern int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +124,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..3e7cfa9 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -881,6 +881,22 @@ static unsigned long mfn_hash(unsigned long mfn)
 	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
 }
 
+int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	WARN_ON(PagePrivate(page));
+	SetPagePrivate(page);
+	set_page_private(page, mfn);
+
+	page->index = pfn_to_mfn(pfn);
+	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
@@ -899,13 +915,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -943,20 +952,33 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
+
+int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
+		return -EINVAL;
+
+	set_page_private(page, INVALID_P2M_ENTRY);
+	WARN_ON(!PagePrivate(page));
+	ClearPagePrivate(page);
+	set_phys_to_machine(pfn, page->index);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restore_foreign_p2m_mapping);
+
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -970,10 +992,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 4b97b86..da18046 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 073b4a1..34a2704 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index b84e3ab..c719bc8 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -928,15 +928,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -955,10 +957,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -975,8 +979,14 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+
+		ret = set_foreign_p2m_mapping(pages[i], mfn);
+		if (ret)
+			goto out;
+
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
 		if (ret)
 			goto out;
 	}
@@ -987,15 +997,31 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -1006,17 +1032,27 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
+		ret = restore_foreign_p2m_mapping(pages[i], mfn);
+		if (ret)
+			goto out;
+
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -1027,8 +1063,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index a5af2a2..7ad033d 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:56:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:56:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOVJ-0000tg-LR; Thu, 06 Feb 2014 12:56:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WBOVH-0000ta-MR
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 12:56:28 +0000
Received: from [193.109.254.147:31878] by server-3.bemta-14.messagelabs.com id
	D2/DA-00432-A7683F25; Thu, 06 Feb 2014 12:56:26 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391691384!2486677!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27848 invoked from network); 6 Feb 2014 12:56:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:56:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98565143"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:56:23 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 07:56:23 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Feb 2014 12:56:16 +0000
Message-ID: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v8] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
for blkback and future netback patches it just cause a lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- based on m2p_override either they follow the original behaviour, or just set
  the private flag and call set_phys_to_machine
- gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
  m2p_override false
- a new function gnttab_[un]map_refs_userspace provides the old behaviour
- it cuts out common parts from m2p_*_override functions to
  *_foreign_p2m_mapping functions

It also removes a stray space from page.h and change ret to 0 if
XENFEAT_auto_translated_physmap, as that is the only possible return value
there.

v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail why ret = 0 now at some places

v6:
- don't pass pfn to m2p* functions, just get it locally

v7:
- the previous version broke the build on ARM, as those p2m changes are not
  needed there. I've put them into arch-specific functions, which are stubs
  on ARM

v8:
- give credit to Anthony Liguori, who originally submitted a very similar patch:
http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
- create ARM stub for get_phys_to_machine
- move definition of mfn in __gnttab_unmap_refs to the right place

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Original-by: Anthony Liguori <aliguori@amazon.com>
---
 arch/arm/include/asm/xen/page.h     |   20 +++++++++-
 arch/x86/include/asm/xen/page.h     |    7 +++-
 arch/x86/xen/p2m.c                  |   49 ++++++++++++++++-------
 drivers/block/xen-blkback/blkback.c |   15 +++----
 drivers/xen/gntdev.c                |   13 +++---
 drivers/xen/grant-table.c           |   74 +++++++++++++++++++++++++++++------
 include/xen/grant_table.h           |    8 +++-
 7 files changed, 140 insertions(+), 46 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index e0965ab..d26c3d7 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -97,13 +97,31 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
 	return NULL;
 }
 
+static inline unsigned long get_phys_to_machine(unsigned long pfn)
+{
+	return 0;
+}
+
+static inline int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	return 0;
+}
+
 static inline int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
 {
 	return 0;
 }
 
-static inline int m2p_remove_override(struct page *page, bool clear_pte)
+static inline int restore_foreign_p2m_mapping(struct page *page,
+					      unsigned long mfn)
+{
+	return 0;
+}
+
+static inline int m2p_remove_override(struct page *page,
+				      struct gnttab_map_grant_ref *kmap_op,
+				      unsigned long mfn)
 {
 	return 0;
 }
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 3e276eb..0340954 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,13 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
+extern int set_foreign_p2m_mapping(struct page *page, unsigned long mfn);
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
+extern int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +124,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..3e7cfa9 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -881,6 +881,22 @@ static unsigned long mfn_hash(unsigned long mfn)
 	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
 }
 
+int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	WARN_ON(PagePrivate(page));
+	SetPagePrivate(page);
+	set_page_private(page, mfn);
+
+	page->index = pfn_to_mfn(pfn);
+	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
+		return -ENOMEM;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
@@ -899,13 +915,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -943,20 +952,33 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
+
+int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn)
+{
+	unsigned long pfn = page_to_pfn(page);
+
+	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
+		return -EINVAL;
+
+	set_page_private(page, INVALID_P2M_ENTRY);
+	WARN_ON(!PagePrivate(page));
+	ClearPagePrivate(page);
+	set_phys_to_machine(pfn, page->index);
+	return 0;
+}
+EXPORT_SYMBOL_GPL(restore_foreign_p2m_mapping);
+
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -970,10 +992,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 4b97b86..da18046 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 073b4a1..34a2704 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index b84e3ab..c719bc8 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -928,15 +928,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+		    struct page **pages, unsigned int count,
+		    bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 	pte_t *pte;
 	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -955,10 +957,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
 					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
@@ -975,8 +979,14 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		} else {
 			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
 		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+
+		ret = set_foreign_p2m_mapping(pages[i], mfn);
+		if (ret)
+			goto out;
+
+		if (m2p_override)
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
 		if (ret)
 			goto out;
 	}
@@ -987,15 +997,31 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 
 	return ret;
 }
+
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+		      struct page **pages, unsigned int count,
+		      bool m2p_override)
 {
 	int i, ret;
 	bool lazy = false;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
@@ -1006,17 +1032,27 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
 					INVALID_P2M_ENTRY);
 		}
-		return ret;
+		return 0;
 	}
 
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
 		arch_enter_lazy_mmu_mode();
 		lazy = true;
 	}
 
 	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
+		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
+		ret = restore_foreign_p2m_mapping(pages[i], mfn);
+		if (ret)
+			goto out;
+
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						   &kmap_ops[i] : NULL,
+						  mfn);
 		if (ret)
 			goto out;
 	}
@@ -1027,8 +1063,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 
 	return ret;
 }
+
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
+}
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index a5af2a2..7ad033d 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 12:57:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 12:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOWL-0000yc-5F; Thu, 06 Feb 2014 12:57:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOWK-0000yU-Bv
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:57:32 +0000
Received: from [85.158.143.35:64960] by server-1.bemta-4.messagelabs.com id
	B6/4B-31661-BB683F25; Thu, 06 Feb 2014 12:57:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391691430!3632513!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15183 invoked from network); 6 Feb 2014 12:57:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 12:57:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98565236"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 12:57:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	07:57:08 -0500
Message-ID: <1391691427.23098.118.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 12:57:07 +0000
In-Reply-To: <52F38394.50405@linaro.org>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
 support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
> > +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> 
> I remember we had a discussion about the memory constraints when I
> implemented the VFP context switch for arm32
> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
> 
> I think you should use "=Q" here as well, to avoid clobbering all of memory.

Yes, I forgot to say: I think getting something in now is the priority,
which is why I committed it, but this should be tightened up, probably
for 4.5 unless the difference is benchmarkable.

> At the same time, is the (char *) cast necessary?

I missed that; yes, I suspect it is unnecessary.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:06:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOeD-0001ZB-CU; Thu, 06 Feb 2014 13:05:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOeC-0001Z6-1q
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 13:05:40 +0000
Received: from [85.158.137.68:12080] by server-3.bemta-3.messagelabs.com id
	56/89-14520-3A883F25; Thu, 06 Feb 2014 13:05:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391691936!64976!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25414 invoked from network); 6 Feb 2014 13:05:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:05:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98568174"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 13:05:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:05:36 -0500
Message-ID: <1391691935.23098.121.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Thu, 6 Feb 2014 13:05:35 +0000
In-Reply-To: <alpine.DEB.2.02.1402061118290.4373@kaball.uk.xensource.com>
References: <osstest-24734-mainreport@xen.org>
	<1391680396.23098.26.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402061118290.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-arm-xen test] 24734: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 11:18 +0000, Stefano Stabellini wrote:
> On Thu, 6 Feb 2014, Ian Campbell wrote:
> > On Wed, 2014-02-05 at 18:55 +0000, xen.org wrote:
> > > flight 24734 linux-arm-xen real [real]
> > > http://www.chiark.greenend.org.uk/~xensrcts/logs/24734/
> > > 
> > > Failures :-/ but no regressions.
> > > 
> > > Tests which did not succeed, but are not blocking:
> > >  test-armhf-armhf-xl           9 guest-start                  fail   never pass
> > 
> > Stefano, please can you cherry-pick:
> > 
> >         commit e17b2f114cba5420fb28fa4bfead57d406a16533
> >         Author: Ian Campbell <ian.campbell@citrix.com>
> >         Date:   Mon Jan 20 11:30:41 2014 +0000
> >         
> >             xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t)
> >             
> > [...]
> > 
> > from mainline into this tree.
> 
> I merged xentip/stable/for-linus-3.14 into linux-arm-xen.

Will this pull in the various evtchn breakages which Julien has
reported? That wouldn't help much with the aim of getting a sensible
pass on xen.git master prior to Xen 4.4...

I think we now have the fix for one of the two issues in the Xen tree;
the other was a Linux-side issue IIRC, was that included here?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:06:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:06:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOfN-0001dY-SS; Thu, 06 Feb 2014 13:06:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOfM-0001dQ-Tj
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 13:06:53 +0000
Received: from [85.158.137.68:28872] by server-16.bemta-3.messagelabs.com id
	36/7F-29917-CE883F25; Thu, 06 Feb 2014 13:06:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391692010!65115!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12607 invoked from network); 6 Feb 2014 13:06:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:06:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98568515"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 13:06:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:06:49 -0500
Message-ID: <1391692008.23098.122.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 13:06:48 +0000
In-Reply-To: <21235.33229.690917.184727@mariner.uk.xensource.com>
References: <1391611418-30816-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391687191.23098.97.camel@kazak.uk.xensource.com>
	<21235.33229.690917.184727@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH] tools: Bump library SONAMEs for 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 12:36 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] tools: Bump library SONAMEs for 4.4"):
> > On Wed, 2014-02-05 at 14:43 +0000, Ian Jackson wrote:
> > > There have been ABI/API changes in libxc.  Bump its MAJOR (which
> > > affets libxenguest et al too.)
> > 
> > "affects" (can be fixed on commit, no need to resend)
> 
> Oops.
> 
> > > There have been ABI changes in libxl.  Bump its MAJOR.
> > > (The API changes have been dealt with as we go along - there is
> > > already a LIBXL_API_VERSION 0x040400.)
> > > 
> > > None of the other libraries have changed their interfaces.  I have
> > > verified this by building the tools and searching the dist/install
> > > tree for files matching *.so.*.  For each library that showed up, I
> > > did this:
> > >   git-diff RELEASE-4.3.0..staging -- `find tools/FOO/ -name \*.h`
> > > where FOO is the corresponding source directory.
> > > 
> > > Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> > 
> > Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
> > 
> > Should we apply immediately? Only reason not to would be to wait for the
> > conclusion of the libxlu_disk thing discussion with George and Olaf.
> 
> I don't think that's a reason to delay.  I think Olaf's libxlu_disk
> thing can go in even after this change.

Applied. I don't think this sort of procedural change needs a release
ack.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:07:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOg4-0001iq-9Z; Thu, 06 Feb 2014 13:07:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOg2-0001iZ-UV
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 13:07:35 +0000
Received: from [85.158.139.211:52091] by server-15.bemta-5.messagelabs.com id
	AF/FF-24395-61983F25; Thu, 06 Feb 2014 13:07:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391692052!2114020!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25529 invoked from network); 6 Feb 2014 13:07:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:07:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100448279"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 13:07:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:07:31 -0500
Message-ID: <1391692049.23098.123.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 6 Feb 2014 13:07:29 +0000
In-Reply-To: <52F37A13.7020909@eu.citrix.com>
References: <1391609794-505-1-git-send-email-julien.grall@linaro.org>
	<52F24FF3.5050107@eu.citrix.com>
	<1391688129.23098.103.camel@kazak.uk.xensource.com>
	<52F37A13.7020909@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly implement
	domain_page_map_to_mfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 12:03 +0000, George Dunlap wrote:
> On 02/06/2014 12:02 PM, Ian Campbell wrote:
> > On Wed, 2014-02-05 at 14:51 +0000, George Dunlap wrote:
> >> On 02/05/2014 02:16 PM, Julien Grall wrote:
> >>> The function domain_page_map_to_mfn can be used to translate a virtual
> >>> address mapped by both map_domain_page and map_domain_page_global.
> >>> The former is using vmap to map the mfn, therefore domain_page_map_to_mfn
> >>> will always fail because the address is not in DOMHEAP range.
> >>>
> >>> Check if the address is in vmap range and use __pa to translate it.
> >>>
> >>> This patch fix guest shutdown when the event fifo is used.
> >>>
> >>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> >> I assume this brings the arm paths into line with the x86 functionality?
> > Yes, functionality which is now expected by common code as well (FIFO
> > evthcn stuff).
> 
> In that case:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Thanks, applied.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:07:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOg9-0001kd-SO; Thu, 06 Feb 2014 13:07:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOg8-0001k9-0V
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 13:07:40 +0000
Received: from [193.109.254.147:46594] by server-3.bemta-14.messagelabs.com id
	79/0D-00432-B1983F25; Thu, 06 Feb 2014 13:07:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391692057!2486617!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31583 invoked from network); 6 Feb 2014 13:07:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:07:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98568706"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 13:07:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:07:36 -0500
Message-ID: <1391692055.23098.125.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Date: Thu, 6 Feb 2014 13:07:35 +0000
In-Reply-To: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
References: <1391448828-28414-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v1] xen/arm: Fix deadlock in
 gic_set_guest_irq()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 19:33 +0200, Oleksandr Tyshchenko wrote:
> The possible deadlock scenario is explained below:
> 
> non interrupt context:    interrupt contex        interrupt context
>                           (CPU0):                 (CPU1):
> vgic_distr_mmio_write()   do_trap_irq()           do_softirq()
>   |                         |                       |
>   vgic_disable_irqs()       ...                     ...
>     |                         |                       |
>     gic_remove_from_queues()  vgic_vcpu_inject_irq()  vgic_vcpu_inject_irq()
>     |  ...                      |                       |
>     |  spin_lock(...)           gic_set_guest_irq()     gic_set_guest_irq()
>     |  ...                        ...                     ...
>     |  ... <----------------.---- spin_lock_irqsave(...)  ...
>     |  ... <----------------.-.---------------------------spin_lock_irqsave(...)
>     |  ...                  . .       Oops! The lock has already taken.
>     |  spin_unlock(...)     . .
>     |  ...                  . .
>     gic_irq_disable()       . .
>        ...                  . .
>        spin_lock(...)       . .
>        ...                  . .
>        ... <----------------. .
>        ... <------------------.
>        ...
>        spin_unlock(...)
> 
> Since the gic_remove_from_queues() and gic_irq_disable() called from
> non interrupt context and they acquire the same lock as gic_set_guest_irq()
> which called from interrupt context we must disable interrupts in these
> functions to avoid possible deadlocks.
> 
> Change-Id: Ia354d87bb44418956e30cd7e49cc76616c359cc9

What is this?

> Signed-off-by: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>

Applied with acks from Stefano, Julien and myself.

Was there anything else arising from "xen/arm: maintenance_interrupt SMP
fix" which is a target for 4.4? I don't think so, the IPI priority
raising is targeted at 4.5.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:08:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOgf-0001sH-Cv; Thu, 06 Feb 2014 13:08:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBOge-0001s3-Ig
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 13:08:12 +0000
Received: from [193.109.254.147:52020] by server-3.bemta-14.messagelabs.com id
	78/ED-00432-B3983F25; Thu, 06 Feb 2014 13:08:11 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391692091!2477837!1
X-Originating-IP: [74.125.82.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17927 invoked from network); 6 Feb 2014 13:08:11 -0000
Received: from mail-we0-f175.google.com (HELO mail-we0-f175.google.com)
	(74.125.82.175)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:08:11 -0000
Received: by mail-we0-f175.google.com with SMTP id q59so1261003wes.20
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 05:08:11 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=jZZ0RQvbYOCRumGdtUT8R2jT4j9GHLyaHnCXYCxZQyQ=;
	b=C0Twr8a5M4tXdtOP4hkkV9tueM9ayfVnj1CxkiuyRo4HjaxPFjbFM4Nz5rOvW76mhA
	kl6iJwFJiswEHOKSNlh1qqppa1O/g2O0fjLvOyP/jtuSESpHX4a5xw3AWzE4+8Od6lxG
	X+pftPtzMnUH2hE1cEv00Y2U2LDGo++NxpBRUGJSlrthpJ6NlqyR8remoqgrRW5NItfC
	zEjiAHXRtapD0e0ajg6qD1KlNQj/qLpg07x8cHpqUT2+lskKKAFpABwdPx/z7JpzTrD6
	zWzPWarq0P2aY8SP2n/pN/kcQWSuQpqRIcL6X0pbZS60cKsYYVBGiIYAffWu806xo6LU
	3Eyg==
X-Gm-Message-State: ALoCoQne/utujNpCdvcvkHOOTjxgD2u+X6f85mYbfY0N726jyOMWNnprMNWjmpOueXflZiV6fSn/
X-Received: by 10.194.219.232 with SMTP id pr8mr5814381wjc.6.1391692090672;
	Thu, 06 Feb 2014 05:08:10 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm2267203wjc.5.2014.02.06.05.08.09
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 05:08:09 -0800 (PST)
Message-ID: <52F38938.2090500@linaro.org>
Date: Thu, 06 Feb 2014 13:08:08 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>	
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
In-Reply-To: <1391691427.23098.118.camel@kazak.uk.xensource.com>
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
 support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 06/02/14 12:57, Ian Campbell wrote:
> On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
>>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>>
>> I remember we had a discussion about the memory constraints when I
>> implemented the VFP context switch for arm32
>> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
>>
>> I think you should use "=Q" here as well, to avoid clobbering the whole memory.
>
> Yes, I forgot to say: I think getting something in now is the priority,
> which is why I committed it, but this should be tightened up, probably
> for 4.5 unless the difference is benchmarkable.

The fix is very simple (a two-line change). I would prefer to delay 
this patch for a couple of days and have a correct implementation from 
the beginning, so we won't forget to change the code for Xen 4.5.

Moreover, Pranav usually answers quickly :).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:09:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOhj-0002EN-1N; Thu, 06 Feb 2014 13:09:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mprivozn@redhat.com>) id 1WBOT3-0000qy-3Z
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:54:09 +0000
Received: from [85.158.143.35:39128] by server-1.bemta-4.messagelabs.com id
	E5/55-31661-0F583F25; Thu, 06 Feb 2014 12:54:08 +0000
X-Env-Sender: mprivozn@redhat.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391691210!3635871!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6627 invoked from network); 6 Feb 2014 12:53:31 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-21.messagelabs.com with SMTP;
	6 Feb 2014 12:53:31 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s16CrTTj002819
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Feb 2014 07:53:29 -0500
Received: from [10.3.235.16] (vpn-235-16.phx2.redhat.com [10.3.235.16])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s16CrGD1003793; Thu, 6 Feb 2014 07:53:27 -0500
Message-ID: <52F385BB.4090608@redhat.com>
Date: Thu, 06 Feb 2014 13:53:15 +0100
From: Michal Privoznik <mprivozn@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jim Fehlig <jfehlig@suse.com>, libvir-list@redhat.com
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
	<1391621986-7341-4-git-send-email-jfehlig@suse.com>
In-Reply-To: <1391621986-7341-4-git-send-email-jfehlig@suse.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
X-Mailman-Approved-At: Thu, 06 Feb 2014 13:09:16 +0000
Cc: xen-devel@lists.xen.org, bjzhang@suse.com
Subject: Re: [Xen-devel] [libvirt] [PATCH 3/4] libxl: handle domain shutdown
 events in a thread
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05.02.2014 18:39, Jim Fehlig wrote:
> Handling the domain shutdown event within the event handler seems
> a bit unfair to libxl's event machinery.  Domain "shutdown" could
> take considerable time.  E.g. if the shutdown reason is reboot,
> the domain must be reaped and then started again.
>
> Spawn a shutdown handler thread to do this work, allowing libxl's
> event machinery to go about its business.
>
> Signed-off-by: Jim Fehlig <jfehlig@suse.com>
> ---
>   src/libxl/libxl_driver.c | 132 ++++++++++++++++++++++++++++++++---------------
>   1 file changed, 89 insertions(+), 43 deletions(-)
>
> diff --git a/src/libxl/libxl_driver.c b/src/libxl/libxl_driver.c
> index d639011..a1c6c0f 100644
> --- a/src/libxl/libxl_driver.c
> +++ b/src/libxl/libxl_driver.c
> @@ -352,61 +352,107 @@ libxlVmReap(libxlDriverPrivatePtr driver,
>   # define VIR_LIBXL_EVENT_CONST const
>   #endif
>
> +struct libxlShutdownThreadInfo
> +{
> +    virDomainObjPtr vm;
> +    libxl_event *event;
> +};
> +
> +
>   static void
> -libxlEventHandler(void *data, VIR_LIBXL_EVENT_CONST libxl_event *event)
> +libxlDomainShutdownThread(void *opaque)
>   {
>       libxlDriverPrivatePtr driver = libxl_driver;
> -    libxlDomainObjPrivatePtr priv = ((virDomainObjPtr)data)->privateData;
> -    virDomainObjPtr vm = NULL;
> +    struct libxlShutdownThreadInfo *shutdown_info = opaque;
> +    virDomainObjPtr vm = shutdown_info->vm;
> +    libxlDomainObjPrivatePtr priv = vm->privateData;
> +    libxl_event *ev = shutdown_info->event;
> +    libxl_ctx *ctx = priv->ctx;
>       virObjectEventPtr dom_event = NULL;
> -    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
> -
> -    if (event->type == LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
> -        virDomainShutoffReason reason;
> -
> -        /*
> -         * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
> -         * after calling libxl_domain_suspend() are handled by it's callers.
> -         */
> -        if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
> -            goto cleanup;
> +    libxl_shutdown_reason xl_reason = ev->u.domain_shutdown.shutdown_reason;
> +    virDomainShutoffReason reason;
>
> -        vm = virDomainObjListFindByID(driver->domains, event->domid);
> -        if (!vm)
> -            goto cleanup;
> +    virObjectLock(vm);
>
> -        switch (xl_reason) {
> -            case LIBXL_SHUTDOWN_REASON_POWEROFF:
> -            case LIBXL_SHUTDOWN_REASON_CRASH:
> -                if (xl_reason == LIBXL_SHUTDOWN_REASON_CRASH) {
> -                    dom_event = virDomainEventLifecycleNewFromObj(vm,
> -                                              VIR_DOMAIN_EVENT_STOPPED,
> -                                              VIR_DOMAIN_EVENT_STOPPED_CRASHED);
> -                    reason = VIR_DOMAIN_SHUTOFF_CRASHED;
> -                } else {
> -                    reason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
> -                }
> -                libxlVmReap(driver, vm, reason);
> -                if (!vm->persistent) {
> -                    virDomainObjListRemove(driver->domains, vm);
> -                    vm = NULL;
> -                }
> -                break;
> -            case LIBXL_SHUTDOWN_REASON_REBOOT:
> -                libxlVmReap(driver, vm, VIR_DOMAIN_SHUTOFF_SHUTDOWN);
> -                libxlVmStart(driver, vm, 0, -1);
> -                break;
> -            default:
> -                VIR_INFO("Unhandled shutdown_reason %d", xl_reason);
> -                break;
> -        }
> +    switch (xl_reason) {
> +        case LIBXL_SHUTDOWN_REASON_POWEROFF:
> +        case LIBXL_SHUTDOWN_REASON_CRASH:
> +            if (xl_reason == LIBXL_SHUTDOWN_REASON_CRASH) {
> +                dom_event = virDomainEventLifecycleNewFromObj(vm,
> +                                           VIR_DOMAIN_EVENT_STOPPED,
> +                                           VIR_DOMAIN_EVENT_STOPPED_CRASHED);
> +                reason = VIR_DOMAIN_SHUTOFF_CRASHED;
> +            } else {
> +                reason = VIR_DOMAIN_SHUTOFF_SHUTDOWN;
> +            }
> +            libxlVmReap(driver, vm, reason);
> +            if (!vm->persistent) {
> +                virDomainObjListRemove(driver->domains, vm);
> +                vm = NULL;
> +            }
> +            break;
> +        case LIBXL_SHUTDOWN_REASON_REBOOT:
> +            libxlVmReap(driver, vm, VIR_DOMAIN_SHUTOFF_SHUTDOWN);
> +            libxlVmStart(driver, vm, 0, -1);
> +            break;
> +        default:
> +            VIR_INFO("Unhandled shutdown_reason %d", xl_reason);
> +            break;
>       }
>
> -cleanup:
>       if (vm)
>           virObjectUnlock(vm);
>       if (dom_event)
>           libxlDomainEventQueue(driver, dom_event);
> +    libxl_event_free(ctx, ev);
> +    VIR_FREE(shutdown_info);
> +}
> +
> +static void
> +libxlEventHandler(void *data, VIR_LIBXL_EVENT_CONST libxl_event *event)
> +{
> +    virDomainObjPtr vm = data;
> +    libxlDomainObjPrivatePtr priv = vm->privateData;
> +    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
> +    struct libxlShutdownThreadInfo *shutdown_info;
> +    virThread thread;
> +
> +    if (event->type != LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
> +        VIR_INFO("Unhandled event type %d", event->type);
> +        goto cleanup;
> +    }
> +
> +    /*
> +     * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
> +     * after calling libxl_domain_suspend() are handled by it's callers.
> +     */
> +    if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
> +        goto cleanup;
> +
> +    /*
> +     * Start a thread to handle shutdown.  We don't want to be tying up
> +     * libxl's event machinery by doing a potentially lengthy shutdown.
> +     */
> +    if (VIR_ALLOC(shutdown_info) < 0)
> +        goto cleanup;
> +
> +    shutdown_info->vm = data;
> +    shutdown_info->event = (libxl_event *)event;
> +    if (virThreadCreate(&thread, true, libxlDomainShutdownThread,
> +                        shutdown_info) < 0) {

The 2nd argument 'true' indicates whether @thread is joinable. Since 
you are not joining it anywhere, I guess it should rather be 'false'. 
Moreover, that will avoid some memory leaks, as pthread then free()-s 
the memory needed for the thread itself (otherwise the memory would be 
free()-d in pthread_join - which again is not called anywhere).
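virThreadCreate is a wrapper over pthreads, so the point can be sketched directly with the underlying API (function names here are illustrative, not libvirt's):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

static atomic_int done;

static void *worker(void *arg)
{
    (void)arg;
    atomic_store(&done, 1);         /* signal completion */
    return NULL;
}

/* Create a detached thread: its bookkeeping is reclaimed automatically
 * when it exits, so pthread_join() must NOT (and cannot) be called.
 * A joinable thread that is never joined leaks that bookkeeping. */
static int start_detached(void *(*fn)(void *), void *arg)
{
    pthread_t tid;
    pthread_attr_t attr;
    int ret;

    if (pthread_attr_init(&attr) != 0)
        return -1;
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    ret = pthread_create(&tid, &attr, fn, arg);
    pthread_attr_destroy(&attr);
    return ret == 0 ? 0 : -1;
}
```

In the patch this corresponds to passing 'false' for the joinable argument of virThreadCreate, since the shutdown thread is fire-and-forget.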

> +        /*
> +         * Not much we can do on error here except log it.
> +         */
> +        VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
> +        goto cleanup;
> +    }
> +
> +    /*
> +     * libxl_event freed in shutdown thread
> +     */
> +    return;
> +
> +cleanup:

Since this is in fact an error path, I suggest renaming the label to 
'error'.

>       /* Cast away any const */
>       libxl_event_free(priv->ctx, (libxl_event *)event);
>   }
>

ACK if you fix these two issues.

Michal

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:09:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOhi-0002E6-Gh; Thu, 06 Feb 2014 13:09:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mprivozn@redhat.com>) id 1WBOSS-0000pV-Hg
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:53:33 +0000
Received: from [193.109.254.147:36215] by server-10.bemta-14.messagelabs.com
	id E0/7C-10711-BC583F25; Thu, 06 Feb 2014 12:53:31 +0000
X-Env-Sender: mprivozn@redhat.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391691210!2485623!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24076 invoked from network); 6 Feb 2014 12:53:31 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-27.messagelabs.com with SMTP;
	6 Feb 2014 12:53:31 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s16CrTWQ004177
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Feb 2014 07:53:30 -0500
Received: from [10.3.235.16] (vpn-235-16.phx2.redhat.com [10.3.235.16])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s16CrIuA007603; Thu, 6 Feb 2014 07:53:27 -0500
Message-ID: <52F385BE.5020509@redhat.com>
Date: Thu, 06 Feb 2014 13:53:18 +0100
From: Michal Privoznik <mprivozn@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jim Fehlig <jfehlig@suse.com>, libvir-list@redhat.com
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
In-Reply-To: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
X-Mailman-Approved-At: Thu, 06 Feb 2014 13:09:16 +0000
Cc: xen-devel@lists.xen.org, bjzhang@suse.com
Subject: Re: [Xen-devel] [libvirt] [PATCH 0/4] libxl: fixes related to
	concurrency improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05.02.2014 18:39, Jim Fehlig wrote:
> While reviving old patches to add job support to the libxl driver,
> testing revealed some problems that were difficult to encounter
> in the current, more serialized processing approach used in the
> driver.
>
> The first patch is a bug fix, plugging leaks of libxlDomainObjPrivate
> objects.  The second patch removes the list of libxl timer registrations
> maintained in the driver - a hack I was never fond of.  The third patch
> moves domain shutdown handling to a thread, instead of doing all the
> shutdown work in the event handler.  The fourth patch fixes an issue wrt
> child process handling discussed in this thread
>
> http://lists.xen.org/archives/html/xen-devel/2014-01/msg01553.html
>
> Ian Jackson's latest patches on the libxl side are here
>
> http://lists.xen.org/archives/html/xen-devel/2014-02/msg00124.html
>
>
> Jim Fehlig (4):
>    libxl: fix leaking libxlDomainObjPrivate
>    libxl: remove list of timer registrations from libxlDomainObjPrivate
>    libxl: handle domain shutdown events in a thread
>    libxl: improve subprocess handling
>
>   src/libxl/libxl_conf.h   |   5 +-
>   src/libxl/libxl_domain.c | 102 ++++++++---------------------------
>   src/libxl/libxl_domain.h |   8 +--
>   src/libxl/libxl_driver.c | 135 +++++++++++++++++++++++++++++++----------------
>   4 files changed, 115 insertions(+), 135 deletions(-)
>

ACK series, but see my comments on 3/4 where I'm asking for a pair of 
fixes prior to pushing.

Michal

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +    libxlDomainObjPrivatePtr priv = vm->privateData;
> +    libxl_shutdown_reason xl_reason = event->u.domain_shutdown.shutdown_reason;
> +    struct libxlShutdownThreadInfo *shutdown_info;
> +    virThread thread;
> +
> +    if (event->type != LIBXL_EVENT_TYPE_DOMAIN_SHUTDOWN) {
> +        VIR_INFO("Unhandled event type %d", event->type);
> +        goto cleanup;
> +    }
> +
> +    /*
> +     * Similar to the xl implementation, ignore SUSPEND.  Any actions needed
> +     * after calling libxl_domain_suspend() are handled by its callers.
> +     */
> +    if (xl_reason == LIBXL_SHUTDOWN_REASON_SUSPEND)
> +        goto cleanup;
> +
> +    /*
> +     * Start a thread to handle shutdown.  We don't want to be tying up
> +     * libxl's event machinery by doing a potentially lengthy shutdown.
> +     */
> +    if (VIR_ALLOC(shutdown_info) < 0)
> +        goto cleanup;
> +
> +    shutdown_info->vm = data;
> +    shutdown_info->event = (libxl_event *)event;
> +    if (virThreadCreate(&thread, true, libxlDomainShutdownThread,
> +                        shutdown_info) < 0) {

The 2nd argument 'true' says whether @thread is joinable. Since you are 
not joining it anywhere, I guess it should rather be 'false'. Moreover, 
that will avoid a memory leak: for a detached thread, pthread free()-s 
the memory needed for the thread itself automatically (otherwise that 
memory would only be free()-d in pthread_join, which again is not 
called anywhere).

> +        /*
> +         * Not much we can do on error here except log it.
> +         */
> +        VIR_ERROR(_("Failed to create thread to handle domain shutdown"));
> +        goto cleanup;
> +    }
> +
> +    /*
> +     * libxl_event freed in shutdown thread
> +     */
> +    return;
> +
> +cleanup:

Since this is in fact an error path, I suggest renaming the label to 
'error'.

>       /* Cast away any const */
>       libxl_event_free(priv->ctx, (libxl_event *)event);
>   }
>

ACK if you fix these two issues.

Michal

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:09:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOhh-0002Dp-SD; Thu, 06 Feb 2014 13:09:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WBO4E-0005ih-IZ
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:28:31 +0000
Received: from [85.158.137.68:35057] by server-1.bemta-3.messagelabs.com id
	0E/1D-17293-DEF73F25; Thu, 06 Feb 2014 12:28:29 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391689706!51826!1
X-Originating-IP: [98.139.213.147]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30381 invoked from network); 6 Feb 2014 12:28:28 -0000
Received: from nm10-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm10-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.147)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 12:28:28 -0000
Received: from [66.196.81.170] by nm10.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 12:28:26 -0000
Received: from [98.139.212.203] by tm16.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 12:28:26 -0000
Received: from [127.0.0.1] by omp1012.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 12:28:26 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 574334.35455.bm@omp1012.mail.bf1.yahoo.com
Received: (qmail 79349 invoked by uid 60001); 6 Feb 2014 12:28:26 -0000
Received: from [192.227.225.3] by web161806.mail.bf1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 04:28:26 PST
X-Mailer: YahooMailWebService/0.8.173.622
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
Message-ID: <1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
Date: Thu, 6 Feb 2014 04:28:26 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140205123908.GA1198@aepfle.de>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1665047788-1877962208-1391689706=:75705"
X-Mailman-Approved-At: Thu, 06 Feb 2014 13:09:16 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1665047788-1877962208-1391689706=:75705
Content-Type: multipart/alternative; boundary="1665047788-2062883207-1391689706=:75705"

--1665047788-2062883207-1391689706=:75705
Content-Type: text/plain; charset=iso-8859-1

I checked the XCFLAGS_DEBUG definition at line 27 of xenguest.h; it means:
#define XCFLAGS_DEBUG     (1 << 1)
I ran two migrations, one with lvl = XTL_DEBUG; and one with lvl = XTL_DETAIL;
the results are attached. But I still have no log of dirty pages (dirty
memory) and no downtime figure :-( ...

Adel Amani
M.Sc. Candidate@Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir


On Wednesday, February 5, 2014 4:09 PM, Olaf Hering <olaf@aepfle.de> wrote:
On Tue, Feb 04, Adel Amani wrote:

>     si.flags = atoi(argv[5]);

> lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL;
> lflags = XTL_STDIOSTREAM_SHOW_PID | XTL_STDIOSTREAM_HIDE_PROGRESS;
> l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags);
> si.xch = xc_interface_open(l, 0, 0);

Please check what XCFLAGS_DEBUG actually means, and whether that condition
can ever be true without also modifying the xend-related code.

I guess that in your exploration of how migration actually works internally,
it would be easier for you to just write 'lvl = XTL_DEBUG;' and be done
with it.

Other than that, the changes you made appear to be correct.


Olaf
--1665047788-2062883207-1391689706=:75705--
--1665047788-1877962208-1391689706=:75705
Content-Type: application/octet-stream; name="xend (XTL_DETAIL).log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="xend (XTL_DETAIL).log"

WzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKFhlbmREb21haW5J
bmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25hbWUn
LCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5kX3N0
YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUnXSwg
Wyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0nLCBb
J2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBbJ3Nl
cmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBbJ2Jv
b3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywgW11d
LCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScsICc6
MCddLCBbJ2ZkYScsICcnXSwgWydmZGInLCAnJ10sIFsnZ3Vlc3Rfb3NfdHlw
ZScsICdkZWZhdWx0J10sIFsnaGFwJywgMV0sIFsnaHBldCcsIDBdLCBbJ2lz
YScsIDBdLCBbJ2tleW1hcCcsICcnXSwgWydsb2NhbHRpbWUnLCAwXSwgWydu
b2dyYXBoaWMnLCAwXSwgWydvcGVuZ2wnLCAxXSwgWydvb3MnLCAxXSwgWydw
YWUnLCAxXSwgWydwY2knLCBbXV0sIFsncGNpX21zaXRyYW5zbGF0ZScsIDFd
LCBbJ3BjaV9wb3dlcl9tZ210JywgMF0sIFsncnRjX3RpbWVvZmZzZXQnLCAw
XSwgWydzZGwnLCAwXSwgWydzb3VuZGh3JywgJyddLCBbJ3N0ZHZnYScsIDBd
LCBbJ3RpbWVyX21vZGUnLCAxXSwgWyd1c2InLCAxXSwgWyd1c2JkZXZpY2Un
LCBbJ2hvc3Q6MTI1ZjpjOTZhJ11dLCBbJ3ZjcHVzJywgMV0sIFsndm5jJywg
MV0sIFsndm5jdW51c2VkJywgMV0sIFsndmlyaWRpYW4nLCAwXSwgWyd2cHRf
YWxpZ24nLCAxXSwgWyd4YXV0aG9yaXR5JywgJy9yb290Ly5YYXV0aG9yaXR5
J10sIFsneGVuX3BsYXRmb3JtX3BjaScsIDFdLCBbJ21lbW9yeV9zaGFyaW5n
JywgMF0sIFsndm5jcGFzc3dkJywgJ1hYWFhYWFhYJ10sIFsndHNjX21vZGUn
LCAwXSwgWydub21pZ3JhdGUnLCAwXV1dLCBbJ3MzX2ludGVncml0eScsIDFd
LCBbJ2RldmljZScsIFsndmJkJywgWyd1bmFtZScsICdmaWxlOi92YXIvbGli
L2xpYnZpcnQvaW1hZ2VzL3VidW50dTExLmltZyddLCBbJ2RldicsICdoZGEn
XSwgWydtb2RlJywgJ3cnXV1dLCBbJ2RldmljZScsIFsndmJkJywgWyd1bmFt
ZScsICdwaHk6L2Rldi9jZHJvbSddLCBbJ2RldicsICdoZGM6Y2Ryb20nXSwg
Wydtb2RlJywgJ3InXV1dLCBbJ2RldmljZScsIFsndmlmJywgWydicmlkZ2Un
LCAneGVuYnIwJ10sIFsndHlwZScsICdpb2VtdSddXV1dKQpbMjAxNC0wMi0w
NiAxNTowNDoxNiAxNTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQ5OCkg
WGVuZERvbWFpbkluZm8uY29uc3RydWN0RG9tYWluClsyMDE0LTAyLTA2IDE1
OjA0OjE2IDE1NzBdIERFQlVHIChiYWxsb29uOjE4NykgQmFsbG9vbjogMjc0
ODMwNCBLaUIgZnJlZTsgbmVlZCAxNjM4NDsgZG9uZS4KWzIwMTQtMDItMDYg
MTU6MDQ6MTYgMTU3MF0gREVCVUcgKFhlbmREb21haW46NDc2KSBBZGRpbmcg
RG9tYWluOiAzClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzoyODM2KSBYZW5kRG9tYWluSW5mby5pbml0RG9tYWlu
OiAzIDI1NgpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1h
Z2U6MzM5KSBObyBWTkMgcGFzc3dkIGNvbmZpZ3VyZWQgZm9yIHZmYiBhY2Nl
c3MKWzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5
MSkgYXJnczogYm9vdCwgdmFsOiBjClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1
NzBdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IGZkYSwgdmFsOiBOb25lClsy
MDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo4OTEpIGFy
Z3M6IGZkYiwgdmFsOiBOb25lClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBd
IERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNvdW5kaHcsIHZhbDogTm9uZQpb
MjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1hZ2U6ODkxKSBh
cmdzOiBsb2NhbHRpbWUsIHZhbDogMApbMjAxNC0wMi0wNiAxNTowNDoxNiAx
NTcwXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBzZXJpYWwsIHZhbDogWydw
dHknXQpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBzdGQtdmdhLCB2YWw6IDAKWzIwMTQtMDItMDYgMTU6MDQ6
MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogaXNhLCB2YWw6IDAK
WzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5MSkg
YXJnczogYWNwaSwgdmFsOiAxClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBd
IERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHVzYiwgdmFsOiAxClsyMDE0LTAy
LTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHVz
YmRldmljZSwgdmFsOiBbJ2hvc3Q6MTI1ZjpjOTZhJ10KWzIwMTQtMDItMDYg
MTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZ2Z4X3Bh
c3N0aHJ1LCB2YWw6IE5vbmUKWzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0g
SU5GTyAoaW1hZ2U6ODIyKSBOZWVkIHRvIGNyZWF0ZSBwbGF0Zm9ybSBkZXZp
Y2UuW2RvbWlkOjNdClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVH
IChYZW5kRG9tYWluSW5mbzoyODYzKSBfaW5pdERvbWFpbjpzaGFkb3dfbWVt
b3J5PTB4MCwgbWVtb3J5X3N0YXRpY19tYXg9MHg0MDAwMDAwMCwgbWVtb3J5
X3N0YXRpY19taW49MHgwLgpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBJ
TkZPIChpbWFnZToxODIpIGJ1aWxkRG9tYWluIG9zPWh2bSBkb209MyB2Y3B1
cz0xClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo5
NDkpIGRvbWlkICAgICAgICAgID0gMwpbMjAxNC0wMi0wNiAxNTowNDoxNiAx
NTcwXSBERUJVRyAoaW1hZ2U6OTUwKSBpbWFnZSAgICAgICAgICA9IC91c3Iv
bGliL3hlbi9ib290L2h2bWxvYWRlcgpbMjAxNC0wMi0wNiAxNTowNDoxNiAx
NTcwXSBERUJVRyAoaW1hZ2U6OTUxKSBzdG9yZV9ldnRjaG4gICA9IDIKWzIw
MTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjk1MikgbWVt
c2l6ZSAgICAgICAgPSAxMDI0ClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBd
IERFQlVHIChpbWFnZTo5NTMpIHRhcmdldCAgICAgICAgID0gMTAyNApbMjAx
NC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1hZ2U6OTU0KSB2Y3B1
cyAgICAgICAgICA9IDEKWzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVC
VUcgKGltYWdlOjk1NSkgdmNwdV9hdmFpbCAgICAgPSAxClsyMDE0LTAyLTA2
IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo5NTYpIGFjcGkgICAgICAg
ICAgID0gMQpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1h
Z2U6OTU3KSBhcGljICAgICAgICAgICA9IDEKWzIwMTQtMDItMDYgMTU6MDQ6
MTYgMTU3MF0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2
aWNlOiB2ZmIgOiB7J3ZuY3VudXNlZCc6IDEsICdvdGhlcl9jb25maWcnOiB7
J3ZuY3VudXNlZCc6IDEsICd2bmMnOiAnMSd9LCAndm5jJzogJzEnLCAndXVp
ZCc6ICcyYjJiN2EyNy04OGE3LTQ0ZTgtZThlMS0yOTE3NzZkMjg1MzcnfQpb
MjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxl
cjo5NSkgRGV2Q29udHJvbGxlcjogd3JpdGluZyB7J3N0YXRlJzogJzEnLCAn
YmFja2VuZC1pZCc6ICcwJywgJ2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8w
L2JhY2tlbmQvdmZiLzMvMCd9IHRvIC9sb2NhbC9kb21haW4vMy9kZXZpY2Uv
dmZiLzAuClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChEZXZD
b250cm9sbGVyOjk3KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsndm5jdW51
c2VkJzogJzEnLCAnZG9tYWluJzogJ3VidW50dTExJywgJ2Zyb250ZW5kJzog
Jy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmZiLzAnLCAndXVpZCc6ICcyYjJi
N2EyNy04OGE3LTQ0ZTgtZThlMS0yOTE3NzZkMjg1MzcnLCAnZnJvbnRlbmQt
aWQnOiAnMycsICdzdGF0ZSc6ICcxJywgJ29ubGluZSc6ICcxJywgJ3ZuYyc6
ICcxJ30gdG8gL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmZiLzMvMC4KWzIw
MTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gSU5GTyAoWGVuZERvbWFpbkluZm86
MjM1NykgY3JlYXRlRGV2aWNlOiB2YmQgOiB7J3V1aWQnOiAnZTBjMDRmYjct
ZDVhNS0wNDI3LTgxN2QtZWZmZThkNjk2YjNiJywgJ2Jvb3RhYmxlJzogMSwg
J2RyaXZlcic6ICdwYXJhdmlydHVhbGlzZWQnLCAnZGV2JzogJ2hkYScsICd1
bmFtZSc6ICdmaWxlOi92YXIvbGliL2xpYnZpcnQvaW1hZ2VzL3VidW50dTEx
LmltZycsICdtb2RlJzogJ3cnfQpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcw
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmljZSc6ICc3
NjgnLCAnZGV2aWNlLXR5cGUnOiAnZGlzaycsICdzdGF0ZSc6ICcxJywgJ2Jh
Y2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmJkLzMvNzY4J30g
dG8gL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQvNzY4LgpbMjAxNC0wMi0w
NiAxNTowNDoxNiAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2
Q29udHJvbGxlcjogd3JpdGluZyB7J2RvbWFpbic6ICd1YnVudHUxMScsICdm
cm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZiZC83NjgnLCAn
dXVpZCc6ICdlMGMwNGZiNy1kNWE1LTA0MjctODE3ZC1lZmZlOGQ2OTZiM2In
LCAnYm9vdGFibGUnOiAnMScsICdkZXYnOiAnaGRhJywgJ3N0YXRlJzogJzEn
LCAncGFyYW1zJzogJy92YXIvbGliL2xpYnZpcnQvaW1hZ2VzL3VidW50dTEx
LmltZycsICdtb2RlJzogJ3cnLCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQt
aWQnOiAnMycsICd0eXBlJzogJ2ZpbGUnfSB0byAvbG9jYWwvZG9tYWluLzAv
YmFja2VuZC92YmQvMy83NjguClsyMDE0LTAyLTA2IDE1OjA0OjE3IDE1NzBd
IElORk8gKFhlbmREb21haW5JbmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmJk
IDogeyd1dWlkJzogJzZmMTllYWNlLTM1YTYtNTc4NS00MGQ4LTYyYWJmOTVl
OWE3NycsICdib290YWJsZSc6IDAsICdkcml2ZXInOiAncGFyYXZpcnR1YWxp
c2VkJywgJ2Rldic6ICdoZGM6Y2Ryb20nLCAndW5hbWUnOiAncGh5Oi9kZXYv
Y2Ryb20nLCAnbW9kZSc6ICdyJ30KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3
MF0gREVCVUcgKERldkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdy
aXRpbmcgeydiYWNrZW5kLWlkJzogJzAnLCAndmlydHVhbC1kZXZpY2UnOiAn
NTYzMicsICdkZXZpY2UtdHlwZSc6ICdjZHJvbScsICdzdGF0ZSc6ICcxJywg
J2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmJkLzMvNTYz
Mid9IHRvIC9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzU2MzIuClsyMDE0
LTAyLTA2IDE1OjA0OjE3IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjk3
KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTEx
JywgJ2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzU2
MzInLCAndXVpZCc6ICc2ZjE5ZWFjZS0zNWE2LTU3ODUtNDBkOC02MmFiZjk1
ZTlhNzcnLCAnYm9vdGFibGUnOiAnMCcsICdkZXYnOiAnaGRjJywgJ3N0YXRl
JzogJzEnLCAncGFyYW1zJzogJy9kZXYvY2Ryb20nLCAnbW9kZSc6ICdyJywg
J29ubGluZSc6ICcxJywgJ2Zyb250ZW5kLWlkJzogJzMnLCAndHlwZSc6ICdw
aHknfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy81NjMyLgpb
MjAxNC0wMi0wNiAxNTowNDoxNyAxNTcwXSBJTkZPIChYZW5kRG9tYWluSW5m
bzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZpZiA6IHsnYnJpZGdlJzogJ3hlbmJy
MCcsICdtYWMnOiAnMDA6MTY6M2U6MDM6NjM6YTMnLCAndHlwZSc6ICdpb2Vt
dScsICd1dWlkJzogJzIxNzBiMDQwLTNiZTYtNzIyMS1iYjUyLTZkOTU3YWIw
ZDczMyd9ClsyMDE0LTAyLTA2IDE1OjA0OjE3IDE1NzBdIERFQlVHIChEZXZD
b250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUn
OiAnMScsICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwv
ZG9tYWluLzAvYmFja2VuZC92aWYvMy8wJ30gdG8gL2xvY2FsL2RvbWFpbi8z
L2RldmljZS92aWYvMC4KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3MF0gREVC
VUcgKERldkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcg
eydicmlkZ2UnOiAneGVuYnIwJywgJ2RvbWFpbic6ICd1YnVudHUxMScsICdo
YW5kbGUnOiAnMCcsICd1dWlkJzogJzIxNzBiMDQwLTNiZTYtNzIyMS1iYjUy
LTZkOTU3YWIwZDczMycsICdzY3JpcHQnOiAnL2V0Yy94ZW4vc2NyaXB0cy92
aWYtYnJpZGdlJywgJ21hYyc6ICcwMDoxNjozZTowMzo2MzphMycsICdmcm9u
dGVuZC1pZCc6ICczJywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAn
ZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92aWYvMCcsICd0
eXBlJzogJ2lvZW11J30gdG8gL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlm
LzMvMC4KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3MF0gSU5GTyAoaW1hZ2U6
NDE4KSBzcGF3bmluZyBkZXZpY2UgbW9kZWxzOiAvdXNyL2xpYi94ZW4vYmlu
L3FlbXUtZG0gWycvdXNyL2xpYi94ZW4vYmluL3FlbXUtZG0nLCAnLWQnLCAn
MycsICctZG9tYWluLW5hbWUnLCAndWJ1bnR1MTEnLCAnLXZpZGVvcmFtJywg
JzQnLCAnLXZuYycsICcxMjcuMC4wLjE6MCcsICctdm5jdW51c2VkJywgJy12
Y3B1cycsICcxJywgJy12Y3B1X2F2YWlsJywgJzB4MScsICctYm9vdCcsICdj
JywgJy1zZXJpYWwnLCAncHR5JywgJy1hY3BpJywgJy11c2InLCAnLXVzYmRl
dmljZScsICJbJ2hvc3Q6MTI1ZjpjOTZhJ10iLCAnLW5ldCcsICduaWMsdmxh
bj0xLG1hY2FkZHI9MDA6MTY6M2U6MDM6NjM6YTMsbW9kZWw9cnRsODEzOScs
ICctbmV0JywgJ3RhcCx2bGFuPTEsaWZuYW1lPXRhcDMuMCxicmlkZ2U9eGVu
YnIwJywgJy1NJywgJ3hlbmZ2J10KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3
MF0gSU5GTyAoaW1hZ2U6NDY3KSBkZXZpY2UgbW9kZWwgcGlkOiAzNjEyClsy
MDE0LTAyLTA2IDE1OjA0OjE3IDE1NzBdIElORk8gKGltYWdlOjU5MCkgd2Fp
dGluZyBmb3Igc2VudGluZWxfZmlmbwpbMjAxNC0wMi0wNiAxNTowNDoxNyAx
NTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MzQyMCkgU3RvcmluZyBWTSBk
ZXRhaWxzOiB7J29uX3hlbmRfc3RvcCc6ICdpZ25vcmUnLCAncG9vbF9uYW1l
JzogJ1Bvb2wtMCcsICdzaGFkb3dfbWVtb3J5JzogJzknLCAndXVpZCc6ICcx
OTZiMWJjYi0wYTIxLTA4OTEtNDFiZS01MDgwMTMyMjk5NTAnLCAnb25fcmVi
b290JzogJ3Jlc3RhcnQnLCAnc3RhcnRfdGltZSc6ICcxMzkxNjg2NDU3Ljcn
LCAnb25fcG93ZXJvZmYnOiAnZGVzdHJveScsICdib290bG9hZGVyX2FyZ3Mn
OiAnJywgJ29uX3hlbmRfc3RhcnQnOiAnaWdub3JlJywgJ29uX2NyYXNoJzog
J3Jlc3RhcnQnLCAneGVuZC9yZXN0YXJ0X2NvdW50JzogJzAnLCAndmNwdXMn
OiAnMScsICd2Y3B1X2F2YWlsJzogJzEnLCAnYm9vdGxvYWRlcic6ICcnLCAn
aW1hZ2UnOiAiKGh2bSAoa2VybmVsICcnKSAoc3VwZXJwYWdlcyAwKSAodmlk
ZW9yYW0gNCkgKGhwZXQgMCkgKHN0ZHZnYSAwKSAobG9hZGVyIC91c3IvbGli
L3hlbi9ib290L2h2bWxvYWRlcikgKHhlbl9wbGF0Zm9ybV9wY2kgMSkgKG9w
ZW5nbCAxKSAocnRjX3RpbWVvZmZzZXQgMCkgKHBjaSAoKSkgKGhhcCAxKSAo
bG9jYWx0aW1lIDApICh0aW1lcl9tb2RlIDEpIChwY2lfbXNpdHJhbnNsYXRl
IDEpIChvb3MgMSkgKGFwaWMgMSkgKHNkbCAwKSAodXNiZGV2aWNlIChob3N0
OjEyNWY6Yzk2YSkpIChkaXNwbGF5IDowKSAodnB0X2FsaWduIDEpIChzZXJp
YWwgcHR5KSAodm5jdW51c2VkIDEpIChib290IGMpIChwYWUgMSkgKHZpcmlk
aWFuIDApIChhY3BpIDEpICh2bmMgMSkgKG5vZ3JhcGhpYyAwKSAobm9taWdy
YXRlIDApICh1c2IgMSkgKHRzY19tb2RlIDApIChndWVzdF9vc190eXBlIGRl
ZmF1bHQpIChkZXZpY2VfbW9kZWwgL3Vzci9saWIveGVuL2Jpbi9xZW11LWRt
KSAocGNpX3Bvd2VyX21nbXQgMCkgKHhhdXRob3JpdHkgL3Jvb3QvLlhhdXRo
b3JpdHkpIChpc2EgMCkgKG5vdGVzIChTVVNQRU5EX0NBTkNFTCAxKSkpIiwg
J25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxNTowNDoxNyAxNTcw
XSBERUJVRyAoWGVuZERvbWFpbkluZm86MTc5NCkgU3RvcmluZyBkb21haW4g
ZGV0YWlsczogeydjb25zb2xlL3BvcnQnOiAnMycsICdkZXNjcmlwdGlvbic6
ICcnLCAnY29uc29sZS9saW1pdCc6ICcxMDQ4NTc2JywgJ3N0b3JlL3BvcnQn
OiAnMicsICd2bSc6ICcvdm0vMTk2YjFiY2ItMGEyMS0wODkxLTQxYmUtNTA4
MDEzMjI5OTUwJywgJ2RvbWlkJzogJzMnLCAnaW1hZ2Uvc3VzcGVuZC1jYW5j
ZWwnOiAnMScsICdjcHUvMC9hdmFpbGFiaWxpdHknOiAnb25saW5lJywgJ21l
bW9yeS90YXJnZXQnOiAnMTA0ODU3NicsICdjb250cm9sL3BsYXRmb3JtLWZl
YXR1cmUtbXVsdGlwcm9jZXNzb3Itc3VzcGVuZCc6ICcxJywgJ3N0b3JlL3Jp
bmctcmVmJzogJzEwNDQ0NzYnLCAnY29uc29sZS90eXBlJzogJ2lvZW11Jywg
J25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxNTowNDoxNyAxNTcw
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J3N0YXRlJzogJzEnLCAnYmFja2VuZC1pZCc6ICcwJywgJ2JhY2tl
bmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvY29uc29sZS8zLzAnfSB0
byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL2NvbnNvbGUvMC4KWzIwMTQtMDIt
MDYgMTU6MDQ6MTcgMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6OTcpIERl
dkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1bnR1MTEnLCAn
ZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS9jb25zb2xlLzAn
LCAndXVpZCc6ICc4NjA4YWFmMS1jYzEzLThkNzItMWU1NS01MGYyMDE3MmMw
ZWUnLCAnZnJvbnRlbmQtaWQnOiAnMycsICdzdGF0ZSc6ICcxJywgJ2xvY2F0
aW9uJzogJzMnLCAnb25saW5lJzogJzEnLCAncHJvdG9jb2wnOiAndnQxMDAn
fSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xlLzMvMC4KWzIw
MTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKFhlbmREb21haW5JbmZv
OjE4ODEpIFhlbmREb21haW5JbmZvLmhhbmRsZVNodXRkb3duV2F0Y2gKWzIw
MTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6
MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHRhcDIuClsyMDE0LTAyLTA2IDE1
OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGlu
ZyBmb3IgZGV2aWNlcyB2aWYuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBd
IERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgMC4KWzIw
MTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6
NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFpbi8wL2Jh
Y2tlbmQvdmlmLzMvMC9ob3RwbHVnLXN0YXR1cy4KWzIwMTQtMDItMDYgMTU6
MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6NjQyKSBob3RwbHVn
U3RhdHVzQ2FsbGJhY2sgMS4KWzIwMTQtMDItMDYgMTU6MDQ6MTggMTU3MF0g
REVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2Vz
IHZrYmQuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZD
b250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBpb3BvcnRzLgpb
MjAxNC0wMi0wNiAxNTowNDoxOCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxl
cjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdGFwLgpbMjAxNC0wMi0wNiAx
NTowNDoxOCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRp
bmcgZm9yIGRldmljZXMgdmlmMi4KWzIwMTQtMDItMDYgMTU6MDQ6MTggMTU3
MF0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZp
Y2VzIGNvbnNvbGUuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVH
IChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgMC4KWzIwMTQtMDIt
MDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBX
YWl0aW5nIGZvciBkZXZpY2VzIHZzY3NpLgpbMjAxNC0wMi0wNiAxNTowNDox
OCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9y
IGRldmljZXMgdmJkLgpbMjAxNC0wMi0wNiAxNTowNDoxOCAxNTcwXSBERUJV
RyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9yIDc2OC4KWzIwMTQt
MDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6NjI4
KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFpbi8wL2JhY2tl
bmQvdmJkLzMvNzY4L2hvdHBsdWctc3RhdHVzLgpbMjAxNC0wMi0wNiAxNTow
NDoxOCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjo2NDIpIGhvdHBsdWdT
dGF0dXNDYWxsYmFjayAxLgpbMjAxNC0wMi0wNiAxNTowNDoxOCAxNTcwXSBE
RUJVRyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9yIDU2MzIuClsy
MDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVy
OjYyOCkgaG90cGx1Z1N0YXR1c0NhbGxiYWNrIC9sb2NhbC9kb21haW4vMC9i
YWNrZW5kL3ZiZC8zLzU2MzIvaG90cGx1Zy1zdGF0dXMuClsyMDE0LTAyLTA2
IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjY0MikgaG90
cGx1Z1N0YXR1c0NhbGxiYWNrIDEuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1
NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2
aWNlcyBpcnEuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChE
ZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2ZmIuClsy
MDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVy
OjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBwY2kuClsyMDE0LTAyLTA2IDE1
OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGlu
ZyBmb3IgZGV2aWNlcyB2dXNiLgpbMjAxNC0wMi0wNiAxNTowNDoxOCAxNTcw
XSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9yIGRldmlj
ZXMgdnRwbS4KWzIwMTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gSU5GTyAoWGVu
ZERvbWFpbjoxMjI1KSBEb21haW4gdWJ1bnR1MTEgKDMpIHVucGF1c2VkLgpb
MjAxNC0wMi0wNiAxNTowNTozMyAxNTcwXSBERUJVRyAoWGVuZENoZWNrcG9p
bnQ6MTI0KSBbeGNfc2F2ZV06IC91c3IvbGliL3hlbi9iaW4veGNfc2F2ZSAy
NiAzIDAgMCA1ClsyMDE0LTAyLTA2IDE1OjA1OjMzIDE1NzBdIElORk8gKFhl
bmRDaGVja3BvaW50OjQyMykgeGNfc2F2ZTogZmFpbGVkIHRvIGdldCB0aGUg
c3VzcGVuZCBldnRjaG4gcG9ydApbMjAxNC0wMi0wNiAxNTowNTozMyAxNTcw
XSBJTkZPIChYZW5kQ2hlY2twb2ludDo0MjMpIApbMjAxNC0wMi0wNiAxNTow
NzoxOSAxNTcwXSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6Mzk0KSBzdXNwZW5k
ClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIERFQlVHIChYZW5kQ2hlY2tw
b2ludDoxMjcpIEluIHNhdmVJbnB1dEhhbmRsZXIgc3VzcGVuZApbMjAxNC0w
Mi0wNiAxNTowNzoxOSAxNTcwXSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6MTI5
KSBTdXNwZW5kaW5nIDMgLi4uClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBd
IERFQlVHIChYZW5kRG9tYWluSW5mbzo1MjQpIFhlbmREb21haW5JbmZvLnNo
dXRkb3duKHN1c3BlbmQpClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIERF
QlVHIChYZW5kRG9tYWluSW5mbzoxODgxKSBYZW5kRG9tYWluSW5mby5oYW5k
bGVTaHV0ZG93bldhdGNoClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIElO
Rk8gKFhlbmREb21haW5JbmZvOjU0MSkgSFZNIHNhdmU6cmVtb3RlIHNodXRk
b3duIGRvbSAzIQpbMjAxNC0wMi0wNiAxNTowNzoxOSAxNTcwXSBJTkZPIChY
ZW5kQ2hlY2twb2ludDoxMzUpIERvbWFpbiAzIHN1c3BlbmRlZC4KWzIwMTQt
MDItMDYgMTU6MDc6MTkgMTU3MF0gSU5GTyAoWGVuZERvbWFpbkluZm86MjA3
OCkgRG9tYWluIGhhcyBzaHV0ZG93bjogbmFtZT1taWdyYXRpbmctdWJ1bnR1
MTEgaWQ9MyByZWFzb249c3VzcGVuZC4KWzIwMTQtMDItMDYgMTU6MDc6MTkg
MTU3MF0gSU5GTyAoaW1hZ2U6NTM4KSBzaWduYWxEZXZpY2VNb2RlbDpyZXN0
b3JlIGRtIHN0YXRlIHRvIHJ1bm5pbmcKWzIwMTQtMDItMDYgMTU6MDc6MTkg
MTU3MF0gREVCVUcgKFhlbmRDaGVja3BvaW50OjE0NCkgV3JpdHRlbiBkb25l
ClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzozMDcxKSBYZW5kRG9tYWluSW5mby5kZXN0cm95OiBkb21pZD0zClsy
MDE0LTAyLTA2IDE1OjA3OjIwIDE1NzBdIERFQlVHIChYZW5kRG9tYWluSW5m
bzoyNDAxKSBEZXN0cm95aW5nIGRldmljZSBtb2RlbApbMjAxNC0wMi0wNiAx
NTowNzoyMCAxNTcwXSBJTkZPIChpbWFnZTo2MTUpIG1pZ3JhdGluZy11YnVu
dHUxMSBkZXZpY2UgbW9kZWwgdGVybWluYXRlZApbMjAxNC0wMi0wNiAxNTow
NzoyMCAxNTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQwOCkgUmVsZWFz
aW5nIGRldmljZXMKWzIwMTQtMDItMDYgMTU6MDc6MjAgMTU3MF0gREVCVUcg
KFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZpZi8wClsyMDE0LTAy
LTA2IDE1OjA3OjIwIDE1NzBdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxMjc2
KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZpY2VDbGFzcyA9
IHZpZiwgZGV2aWNlID0gdmlmLzAKWzIwMTQtMDItMDYgMTU6MDc6MjAgMTU3
MF0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIGNvbnNv
bGUvMApbMjAxNC0wMi0wNiAxNTowNzoyMCAxNTcwXSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTog
ZGV2aWNlQ2xhc3MgPSBjb25zb2xlLCBkZXZpY2UgPSBjb25zb2xlLzAKWzIw
MTQtMDItMDYgMTU6MDc6MjAgMTU3MF0gREVCVUcgKFhlbmREb21haW5JbmZv
OjI0MTQpIFJlbW92aW5nIHZiZC83NjgKWzIwMTQtMDItMDYgMTU6MDc6MjAg
MTU3MF0gREVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhlbmREb21haW5J
bmZvLmRlc3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gdmJkLCBkZXZpY2Ug
PSB2YmQvNzY4ClsyMDE0LTAyLTA2IDE1OjA3OjIwIDE1NzBdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2YmQvNTYzMgpbMjAxNC0w
Mi0wNiAxNTowNzoyMCAxNTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MTI3
NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2aWNlQ2xhc3Mg
PSB2YmQsIGRldmljZSA9IHZiZC81NjMyClsyMDE0LTAyLTA2IDE1OjA3OjIw
IDE1NzBdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2
ZmIvMApbMjAxNC0wMi0wNiAxNTowNzoyMCAxNTcwXSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTog
ZGV2aWNlQ2xhc3MgPSB2ZmIsIGRldmljZSA9IHZmYi8wCg==

--1665047788-1877962208-1391689706=:75705
Content-Type: application/octet-stream; name="xend(XTL_DEBUG).log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="xend(XTL_DEBUG).log"

WzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKFhlbmREb21haW5J
bmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25hbWUn
LCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5kX3N0
YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUnXSwg
Wyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0nLCBb
J2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBbJ3Nl
cmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBbJ2Jv
b3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywgW11d
LCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScsICc6
MC4wJ10sIFsnZmRhJywgJyddLCBbJ2ZkYicsICcnXSwgWydndWVzdF9vc190
eXBlJywgJ2RlZmF1bHQnXSwgWydoYXAnLCAxXSwgWydocGV0JywgMF0sIFsn
aXNhJywgMF0sIFsna2V5bWFwJywgJyddLCBbJ2xvY2FsdGltZScsIDBdLCBb
J25vZ3JhcGhpYycsIDBdLCBbJ29wZW5nbCcsIDFdLCBbJ29vcycsIDFdLCBb
J3BhZScsIDFdLCBbJ3BjaScsIFtdXSwgWydwY2lfbXNpdHJhbnNsYXRlJywg
MV0sIFsncGNpX3Bvd2VyX21nbXQnLCAwXSwgWydydGNfdGltZW9mZnNldCcs
IDBdLCBbJ3NkbCcsIDBdLCBbJ3NvdW5kaHcnLCAnJ10sIFsnc3RkdmdhJywg
MF0sIFsndGltZXJfbW9kZScsIDFdLCBbJ3VzYicsIDFdLCBbJ3VzYmRldmlj
ZScsIFsnaG9zdDoxMjVmOmM5NmEnXV0sIFsndmNwdXMnLCAxXSwgWyd2bmMn
LCAxXSwgWyd2bmN1bnVzZWQnLCAxXSwgWyd2aXJpZGlhbicsIDBdLCBbJ3Zw
dF9hbGlnbicsIDFdLCBbJ3hhdXRob3JpdHknLCAnL3Jvb3QvLlhhdXRob3Jp
dHknXSwgWyd4ZW5fcGxhdGZvcm1fcGNpJywgMV0sIFsnbWVtb3J5X3NoYXJp
bmcnLCAwXSwgWyd2bmNwYXNzd2QnLCAnWFhYWFhYWFgnXSwgWyd0c2NfbW9k
ZScsIDBdLCBbJ25vbWlncmF0ZScsIDBdXV0sIFsnczNfaW50ZWdyaXR5Jywg
MV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3VuYW1lJywgJ2ZpbGU6L3Zhci9s
aWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1MTEuaW1nJ10sIFsnZGV2JywgJ2hk
YSddLCBbJ21vZGUnLCAndyddXV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3Vu
YW1lJywgJ3BoeTovZGV2L2Nkcm9tJ10sIFsnZGV2JywgJ2hkYzpjZHJvbSdd
LCBbJ21vZGUnLCAnciddXV0sIFsnZGV2aWNlJywgWyd2aWYnLCBbJ2JyaWRn
ZScsICd4ZW5icjAnXSwgWyd0eXBlJywgJ2lvZW11J11dXV0pClsyMDE0LTAy
LTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDk4
KSBYZW5kRG9tYWluSW5mby5jb25zdHJ1Y3REb21haW4KWzIwMTQtMDItMDYg
MTM6Mjc6NTggMTUyMl0gREVCVUcgKGJhbGxvb246MTg3KSBCYWxsb29uOiAy
NzQ4MzA0IEtpQiBmcmVlOyBuZWVkIDE2Mzg0OyBkb25lLgpbMjAxNC0wMi0w
NiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoWGVuZERvbWFpbjo0NzYpIEFkZGlu
ZyBEb21haW46IDMKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcg
KFhlbmREb21haW5JbmZvOjI4MzYpIFhlbmREb21haW5JbmZvLmluaXREb21h
aW46IDMgMjU2ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChp
bWFnZTozMzkpIE5vIFZOQyBwYXNzd2QgY29uZmlndXJlZCBmb3IgdmZiIGFj
Y2VzcwpbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBib290LCB2YWw6IGMKWzIwMTQtMDItMDYgMTM6Mjc6NTgg
MTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZmRhLCB2YWw6IE5vbmUK
WzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdlOjg5MSkg
YXJnczogZmRiLCB2YWw6IE5vbmUKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogc291bmRodywgdmFsOiBOb25l
ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChpbWFnZTo4OTEp
IGFyZ3M6IGxvY2FsdGltZSwgdmFsOiAwClsyMDE0LTAyLTA2IDEzOjI3OjU4
IDE1MjJdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNlcmlhbCwgdmFsOiBb
J3B0eSddClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChpbWFn
ZTo4OTEpIGFyZ3M6IHN0ZC12Z2EsIHZhbDogMApbMjAxNC0wMi0wNiAxMzoy
Nzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBpc2EsIHZhbDog
MApbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkx
KSBhcmdzOiBhY3BpLCB2YWw6IDEKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogdXNiLCB2YWw6IDEKWzIwMTQt
MDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczog
dXNiZGV2aWNlLCB2YWw6IFsnaG9zdDoxMjVmOmM5NmEnXQpbMjAxNC0wMi0w
NiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBnZnhf
cGFzc3RocnUsIHZhbDogTm9uZQpbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIy
XSBJTkZPIChpbWFnZTo4MjIpIE5lZWQgdG8gY3JlYXRlIHBsYXRmb3JtIGRl
dmljZS5bZG9taWQ6M10KWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVC
VUcgKFhlbmREb21haW5JbmZvOjI4NjMpIF9pbml0RG9tYWluOnNoYWRvd19t
ZW1vcnk9MHgwLCBtZW1vcnlfc3RhdGljX21heD0weDQwMDAwMDAwLCBtZW1v
cnlfc3RhdGljX21pbj0weDAuClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJd
IElORk8gKGltYWdlOjE4MikgYnVpbGREb21haW4gb3M9aHZtIGRvbT0zIHZj
cHVzPTEKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdl
Ojk0OSkgZG9taWQgICAgICAgICAgPSAzClsyMDE0LTAyLTA2IDEzOjI3OjU4
IDE1MjJdIERFQlVHIChpbWFnZTo5NTApIGltYWdlICAgICAgICAgID0gL3Vz
ci9saWIveGVuL2Jvb3QvaHZtbG9hZGVyClsyMDE0LTAyLTA2IDEzOjI3OjU4
IDE1MjJdIERFQlVHIChpbWFnZTo5NTEpIHN0b3JlX2V2dGNobiAgID0gMgpb
MjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6OTUyKSBt
ZW1zaXplICAgICAgICA9IDEwMjQKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gREVCVUcgKGltYWdlOjk1MykgdGFyZ2V0ICAgICAgICAgPSAxMDI0Clsy
MDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChpbWFnZTo5NTQpIHZj
cHVzICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBE
RUJVRyAoaW1hZ2U6OTU1KSB2Y3B1X2F2YWlsICAgICA9IDEKWzIwMTQtMDIt
MDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdlOjk1NikgYWNwaSAgICAg
ICAgICAgPSAxClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChp
bWFnZTo5NTcpIGFwaWMgICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAxMzoy
Nzo1OCAxNTIyXSBJTkZPIChYZW5kRG9tYWluSW5mbzoyMzU3KSBjcmVhdGVE
ZXZpY2U6IHZmYiA6IHsndm5jdW51c2VkJzogMSwgJ290aGVyX2NvbmZpZyc6
IHsndm5jdW51c2VkJzogMSwgJ3ZuYyc6ICcxJ30sICd2bmMnOiAnMScsICd1
dWlkJzogJ2RhOGEzOWVjLTU3ZWItMGVkZS1iYzBiLWMxZDY3ZWFhMDkyNid9
ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChEZXZDb250cm9s
bGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUnOiAnMScs
ICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWlu
LzAvYmFja2VuZC92ZmIvMy8wJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2Rldmlj
ZS92ZmIvMC4KWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKERl
dkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeyd2bmN1
bnVzZWQnOiAnMScsICdkb21haW4nOiAndWJ1bnR1MTEnLCAnZnJvbnRlbmQn
OiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92ZmIvMCcsICd1dWlkJzogJ2Rh
OGEzOWVjLTU3ZWItMGVkZS1iYzBiLWMxZDY3ZWFhMDkyNicsICdmcm9udGVu
ZC1pZCc6ICczJywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAndm5j
JzogJzEnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92ZmIvMy8wLgpb
MjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBJTkZPIChYZW5kRG9tYWluSW5m
bzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZiZCA6IHsndXVpZCc6ICc0ZDQzNTA4
Ni01ZmU0LWZiNmQtYjViOC1mMDZiODE2NzgzYTEnLCAnYm9vdGFibGUnOiAx
LCAnZHJpdmVyJzogJ3BhcmF2aXJ0dWFsaXNlZCcsICdkZXYnOiAnaGRhJywg
J3VuYW1lJzogJ2ZpbGU6L3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1
MTEuaW1nJywgJ21vZGUnOiAndyd9ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1
MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3
cml0aW5nIHsnYmFja2VuZC1pZCc6ICcwJywgJ3ZpcnR1YWwtZGV2aWNlJzog
Jzc2OCcsICdkZXZpY2UtdHlwZSc6ICdkaXNrJywgJ3N0YXRlJzogJzEnLCAn
YmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy83Njgn
fSB0byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZiZC83NjguClsyMDE0LTAy
LTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk3KSBE
ZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTExJywg
J2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzc2OCcs
ICd1dWlkJzogJzRkNDM1MDg2LTVmZTQtZmI2ZC1iNWI4LWYwNmI4MTY3ODNh
MScsICdib290YWJsZSc6ICcxJywgJ2Rldic6ICdoZGEnLCAnc3RhdGUnOiAn
MScsICdwYXJhbXMnOiAnL3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1
MTEuaW1nJywgJ21vZGUnOiAndycsICdvbmxpbmUnOiAnMScsICdmcm9udGVu
ZC1pZCc6ICczJywgJ3R5cGUnOiAnZmlsZSd9IHRvIC9sb2NhbC9kb21haW4v
MC9iYWNrZW5kL3ZiZC8zLzc2OC4KWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2aWNlOiB2
YmQgOiB7J3V1aWQnOiAnNmJkMzI0NWEtMzg1ZS1jYWIxLTViY2EtYTU2MjFk
ODQ2NmZmJywgJ2Jvb3RhYmxlJzogMCwgJ2RyaXZlcic6ICdwYXJhdmlydHVh
bGlzZWQnLCAnZGV2JzogJ2hkYzpjZHJvbScsICd1bmFtZSc6ICdwaHk6L2Rl
di9jZHJvbScsICdtb2RlJzogJ3InfQpbMjAxNC0wMi0wNiAxMzoyNzo1OCAx
NTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxlcjog
d3JpdGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmljZSc6
ICc1NjMyJywgJ2RldmljZS10eXBlJzogJ2Nkcm9tJywgJ3N0YXRlJzogJzEn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy81
NjMyJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQvNTYzMi4KWzIw
MTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6
OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1bnR1
MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQv
NTYzMicsICd1dWlkJzogJzZiZDMyNDVhLTM4NWUtY2FiMS01YmNhLWE1NjIx
ZDg0NjZmZicsICdib290YWJsZSc6ICcwJywgJ2Rldic6ICdoZGMnLCAnc3Rh
dGUnOiAnMScsICdwYXJhbXMnOiAnL2Rldi9jZHJvbScsICdtb2RlJzogJ3In
LCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQtaWQnOiAnMycsICd0eXBlJzog
J3BoeSd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC8zLzU2MzIu
ClsyMDE0LTAyLTA2IDEzOjI3OjU5IDE1MjJdIElORk8gKFhlbmREb21haW5J
bmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmlmIDogeydicmlkZ2UnOiAneGVu
YnIwJywgJ21hYyc6ICcwMDoxNjozZTowMzozZTpjMicsICd0eXBlJzogJ2lv
ZW11JywgJ3V1aWQnOiAnN2U0ZDY5MmUtZjg0Ni05NjRmLWEzNGUtYmZjY2Jl
OWQ0YmNkJ30KWzIwMTQtMDItMDYgMTM6Mjc6NTkgMTUyMl0gREVCVUcgKERl
dkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydzdGF0
ZSc6ICcxJywgJ2JhY2tlbmQtaWQnOiAnMCcsICdiYWNrZW5kJzogJy9sb2Nh
bC9kb21haW4vMC9iYWNrZW5kL3ZpZi8zLzAnfSB0byAvbG9jYWwvZG9tYWlu
LzMvZGV2aWNlL3ZpZi8wLgpbMjAxNC0wMi0wNiAxMzoyNzo1OSAxNTIyXSBE
RUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2Q29udHJvbGxlcjogd3JpdGlu
ZyB7J2JyaWRnZSc6ICd4ZW5icjAnLCAnZG9tYWluJzogJ3VidW50dTExJywg
J2hhbmRsZSc6ICcwJywgJ3V1aWQnOiAnN2U0ZDY5MmUtZjg0Ni05NjRmLWEz
NGUtYmZjY2JlOWQ0YmNkJywgJ3NjcmlwdCc6ICcvZXRjL3hlbi9zY3JpcHRz
L3ZpZi1icmlkZ2UnLCAnbWFjJzogJzAwOjE2OjNlOjAzOjNlOmMyJywgJ2Zy
b250ZW5kLWlkJzogJzMnLCAnc3RhdGUnOiAnMScsICdvbmxpbmUnOiAnMScs
ICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZpZi8wJywg
J3R5cGUnOiAnaW9lbXUnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92
aWYvMy8wLgpbMjAxNC0wMi0wNiAxMzoyNzo1OSAxNTIyXSBJTkZPIChpbWFn
ZTo0MTgpIHNwYXduaW5nIGRldmljZSBtb2RlbHM6IC91c3IvbGliL3hlbi9i
aW4vcWVtdS1kbSBbJy91c3IvbGliL3hlbi9iaW4vcWVtdS1kbScsICctZCcs
ICczJywgJy1kb21haW4tbmFtZScsICd1YnVudHUxMScsICctdmlkZW9yYW0n
LCAnNCcsICctdm5jJywgJzEyNy4wLjAuMTowJywgJy12bmN1bnVzZWQnLCAn
LXZjcHVzJywgJzEnLCAnLXZjcHVfYXZhaWwnLCAnMHgxJywgJy1ib290Jywg
J2MnLCAnLXNlcmlhbCcsICdwdHknLCAnLWFjcGknLCAnLXVzYicsICctdXNi
ZGV2aWNlJywgIlsnaG9zdDoxMjVmOmM5NmEnXSIsICctbmV0JywgJ25pYyx2
bGFuPTEsbWFjYWRkcj0wMDoxNjozZTowMzozZTpjMixtb2RlbD1ydGw4MTM5
JywgJy1uZXQnLCAndGFwLHZsYW49MSxpZm5hbWU9dGFwMy4wLGJyaWRnZT14
ZW5icjAnLCAnLU0nLCAneGVuZnYnXQpbMjAxNC0wMi0wNiAxMzoyNzo1OSAx
NTIyXSBJTkZPIChpbWFnZTo0NjcpIGRldmljZSBtb2RlbCBwaWQ6IDExMTI0
ClsyMDE0LTAyLTA2IDEzOjI3OjU5IDE1MjJdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzozNDIwKSBTdG9yaW5nIFZNIGRldGFpbHM6IHsnb25feGVuZF9zdG9w
JzogJ2lnbm9yZScsICdwb29sX25hbWUnOiAnUG9vbC0wJywgJ3NoYWRvd19t
ZW1vcnknOiAnOScsICd1dWlkJzogJ2IxYzVmMWM4LTkwOTctZjNiNy05NjY4
LTg0NzRlZGNjMzM4ZScsICdvbl9yZWJvb3QnOiAncmVzdGFydCcsICdzdGFy
dF90aW1lJzogJzEzOTE2ODA2NzkuNDInLCAnb25fcG93ZXJvZmYnOiAnZGVz
dHJveScsICdib290bG9hZGVyX2FyZ3MnOiAnJywgJ29uX3hlbmRfc3RhcnQn
OiAnaWdub3JlJywgJ29uX2NyYXNoJzogJ3Jlc3RhcnQnLCAneGVuZC9yZXN0
YXJ0X2NvdW50JzogJzAnLCAndmNwdXMnOiAnMScsICd2Y3B1X2F2YWlsJzog
JzEnLCAnYm9vdGxvYWRlcic6ICcnLCAnaW1hZ2UnOiAiKGh2bSAoa2VybmVs
ICcnKSAoc3VwZXJwYWdlcyAwKSAodmlkZW9yYW0gNCkgKGhwZXQgMCkgKHN0
ZHZnYSAwKSAobG9hZGVyIC91c3IvbGliL3hlbi9ib290L2h2bWxvYWRlcikg
KHhlbl9wbGF0Zm9ybV9wY2kgMSkgKG9wZW5nbCAxKSAocnRjX3RpbWVvZmZz
ZXQgMCkgKHBjaSAoKSkgKGhhcCAxKSAobG9jYWx0aW1lIDApICh0aW1lcl9t
b2RlIDEpIChwY2lfbXNpdHJhbnNsYXRlIDEpIChvb3MgMSkgKGFwaWMgMSkg
KHNkbCAwKSAodXNiZGV2aWNlIChob3N0OjEyNWY6Yzk2YSkpIChkaXNwbGF5
IDowLjApICh2cHRfYWxpZ24gMSkgKHNlcmlhbCBwdHkpICh2bmN1bnVzZWQg
MSkgKGJvb3QgYykgKHBhZSAxKSAodmlyaWRpYW4gMCkgKGFjcGkgMSkgKHZu
YyAxKSAobm9ncmFwaGljIDApIChub21pZ3JhdGUgMCkgKHVzYiAxKSAodHNj
X21vZGUgMCkgKGd1ZXN0X29zX3R5cGUgZGVmYXVsdCkgKGRldmljZV9tb2Rl
bCAvdXNyL2xpYi94ZW4vYmluL3FlbXUtZG0pIChwY2lfcG93ZXJfbWdtdCAw
KSAoeGF1dGhvcml0eSAvcm9vdC8uWGF1dGhvcml0eSkgKGlzYSAwKSAobm90
ZXMgKFNVU1BFTkRfQ0FOQ0VMIDEpKSkiLCAnbmFtZSc6ICd1YnVudHUxMSd9
ClsyMDE0LTAyLTA2IDEzOjI3OjU5IDE1MjJdIElORk8gKGltYWdlOjU5MCkg
d2FpdGluZyBmb3Igc2VudGluZWxfZmlmbwpbMjAxNC0wMi0wNiAxMzoyNzo1
OSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MTc5NCkgU3RvcmluZyBk
b21haW4gZGV0YWlsczogeydjb25zb2xlL3BvcnQnOiAnMycsICdkZXNjcmlw
dGlvbic6ICcnLCAnY29uc29sZS9saW1pdCc6ICcxMDQ4NTc2JywgJ3N0b3Jl
L3BvcnQnOiAnMicsICd2bSc6ICcvdm0vYjFjNWYxYzgtOTA5Ny1mM2I3LTk2
NjgtODQ3NGVkY2MzMzhlJywgJ2RvbWlkJzogJzMnLCAnaW1hZ2Uvc3VzcGVu
ZC1jYW5jZWwnOiAnMScsICdjcHUvMC9hdmFpbGFiaWxpdHknOiAnb25saW5l
JywgJ21lbW9yeS90YXJnZXQnOiAnMTA0ODU3NicsICdjb250cm9sL3BsYXRm
b3JtLWZlYXR1cmUtbXVsdGlwcm9jZXNzb3Itc3VzcGVuZCc6ICcxJywgJ3N0
b3JlL3JpbmctcmVmJzogJzEwNDQ0NzYnLCAnY29uc29sZS90eXBlJzogJ2lv
ZW11JywgJ25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxMzoyNzo1
OSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxl
cjogd3JpdGluZyB7J3N0YXRlJzogJzEnLCAnYmFja2VuZC1pZCc6ICcwJywg
J2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvY29uc29sZS8z
LzAnfSB0byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL2NvbnNvbGUvMC4KWzIw
MTQtMDItMDYgMTM6Mjc6NTkgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6
OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1bnR1
MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS9jb25z
b2xlLzAnLCAndXVpZCc6ICdhZmEwZWE5Zi0xNmZjLWJjMmItYWUxNy05Yjcy
YTRmN2MyNDYnLCAnZnJvbnRlbmQtaWQnOiAnMycsICdzdGF0ZSc6ICcxJywg
J2xvY2F0aW9uJzogJzMnLCAnb25saW5lJzogJzEnLCAncHJvdG9jb2wnOiAn
dnQxMDAnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xlLzMv
MC4KWzIwMTQtMDItMDYgMTM6Mjc6NTkgMTUyMl0gREVCVUcgKERldkNvbnRy
b2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHRhcDIuClsyMDE0LTAy
LTA2IDEzOjI3OjU5IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkg
V2FpdGluZyBmb3IgZGV2aWNlcyB2aWYuClsyMDE0LTAyLTA2IDEzOjI4OjAw
IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3Ig
MC4KWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21h
aW5JbmZvOjE4ODEpIFhlbmREb21haW5JbmZvLmhhbmRsZVNodXRkb3duV2F0
Y2gKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKERldkNvbnRy
b2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFp
bi8wL2JhY2tlbmQvdmlmLzMvMC9ob3RwbHVnLXN0YXR1cy4KWzIwMTQtMDIt
MDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6NjQyKSBo
b3RwbHVnU3RhdHVzQ2FsbGJhY2sgMi4KWzIwMTQtMDItMDYgMTM6Mjg6MDAg
MTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjMwNzEpIFhlbmREb21haW5J
bmZvLmRlc3Ryb3k6IGRvbWlkPTMKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUy
Ml0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MDEpIERlc3Ryb3lpbmcgZGV2
aWNlIG1vZGVsClsyMDE0LTAyLTA2IDEzOjI4OjAwIDE1MjJdIElORk8gKGlt
YWdlOjYxNSkgdWJ1bnR1MTEgZGV2aWNlIG1vZGVsIHRlcm1pbmF0ZWQKWzIw
MTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZv
OjI0MDgpIFJlbGVhc2luZyBkZXZpY2VzClsyMDE0LTAyLTA2IDEzOjI4OjAw
IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2
aWYvMApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTog
ZGV2aWNlQ2xhc3MgPSB2aWYsIGRldmljZSA9IHZpZi8wClsyMDE0LTAyLTA2
IDEzOjI4OjAwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBS
ZW1vdmluZyBjb25zb2xlLzAKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0g
REVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRl
c3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gY29uc29sZSwgZGV2aWNlID0g
Y29uc29sZS8wClsyMDE0LTAyLTA2IDEzOjI4OjAwIDE1MjJdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2YmQvNzY4ClsyMDE0LTAy
LTA2IDEzOjI4OjAwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxMjc2
KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZpY2VDbGFzcyA9
IHZiZCwgZGV2aWNlID0gdmJkLzc2OApbMjAxNC0wMi0wNiAxMzoyODowMCAx
NTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmJk
LzU2MzIKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6
IGRldmljZUNsYXNzID0gdmJkLCBkZXZpY2UgPSB2YmQvNTYzMgpbMjAxNC0w
Mi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQx
NCkgUmVtb3ZpbmcgdmZiLzAKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0g
REVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRl
c3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gdmZiLCBkZXZpY2UgPSB2ZmIv
MApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERvbWFp
bkluZm86MjQwNikgTm8gZGV2aWNlIG1vZGVsClsyMDE0LTAyLTA2IDEzOjI4
OjAwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDA4KSBSZWxlYXNp
bmcgZGV2aWNlcwpbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAo
WGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmlmLzAKWzIwMTQtMDIt
MDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYp
IFhlbmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0g
dmlmLCBkZXZpY2UgPSB2aWYvMApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIy
XSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmJkLzc2
OApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERvbWFp
bkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2
aWNlQ2xhc3MgPSB2YmQsIGRldmljZSA9IHZiZC83NjgKWzIwMTQtMDItMDYg
MTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJl
bW92aW5nIHZiZC81NjMyClsyMDE0LTAyLTA2IDEzOjI4OjAwIDE1MjJdIERF
QlVHIChYZW5kRG9tYWluSW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0
cm95RGV2aWNlOiBkZXZpY2VDbGFzcyA9IHZiZCwgZGV2aWNlID0gdmJkLzU2
MzIKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKFhlbmREb21h
aW5JbmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25h
bWUnLCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5k
X3N0YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUn
XSwgWyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0n
LCBbJ2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBb
J3NlcmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBb
J2Jvb3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywg
W11dLCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScs
ICc6MC4wJ10sIFsnZmRhJywgJyddLCBbJ2ZkYicsICcnXSwgWydndWVzdF9v
c190eXBlJywgJ2RlZmF1bHQnXSwgWydoYXAnLCAxXSwgWydocGV0JywgMF0s
IFsnaXNhJywgMF0sIFsna2V5bWFwJywgJyddLCBbJ2xvY2FsdGltZScsIDBd
LCBbJ25vZ3JhcGhpYycsIDBdLCBbJ29wZW5nbCcsIDFdLCBbJ29vcycsIDFd
LCBbJ3BhZScsIDFdLCBbJ3BjaScsIFtdXSwgWydwY2lfbXNpdHJhbnNsYXRl
JywgMV0sIFsncGNpX3Bvd2VyX21nbXQnLCAwXSwgWydydGNfdGltZW9mZnNl
dCcsIDBdLCBbJ3NkbCcsIDBdLCBbJ3NvdW5kaHcnLCAnJ10sIFsnc3Rkdmdh
JywgMF0sIFsndGltZXJfbW9kZScsIDFdLCBbJ3VzYicsIDFdLCBbJ3VzYmRl
dmljZScsIFsnaG9zdDoxMjVmOmM5NmEnXV0sIFsndmNwdXMnLCAxXSwgWyd2
bmMnLCAxXSwgWyd2bmN1bnVzZWQnLCAxXSwgWyd2aXJpZGlhbicsIDBdLCBb
J3ZwdF9hbGlnbicsIDFdLCBbJ3hhdXRob3JpdHknLCAnL3Jvb3QvLlhhdXRo
b3JpdHknXSwgWyd4ZW5fcGxhdGZvcm1fcGNpJywgMV0sIFsnbWVtb3J5X3No
YXJpbmcnLCAwXSwgWyd2bmNwYXNzd2QnLCAnWFhYWFhYWFgnXSwgWyd0c2Nf
bW9kZScsIDBdLCBbJ25vbWlncmF0ZScsIDBdXV0sIFsnczNfaW50ZWdyaXR5
JywgMV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3VuYW1lJywgJ2ZpbGU6L3Zh
ci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1MTEuaW1nJ10sIFsnZGV2Jywg
J2hkYSddLCBbJ21vZGUnLCAndyddXV0sIFsnZGV2aWNlJywgWyd2YmQnLCBb
J3VuYW1lJywgJ3BoeTovZGV2L2Nkcm9tJ10sIFsnZGV2JywgJ2hkYzpjZHJv
bSddLCBbJ21vZGUnLCAnciddXV0sIFsnZGV2aWNlJywgWyd2aWYnLCBbJ2Jy
aWRnZScsICd4ZW5icjAnXSwgWyd0eXBlJywgJ2lvZW11J11dXV0pClsyMDE0
LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoy
NDk4KSBYZW5kRG9tYWluSW5mby5jb25zdHJ1Y3REb21haW4KWzIwMTQtMDIt
MDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGJhbGxvb246MTg3KSBCYWxsb29u
OiAyNzQ4MzA0IEtpQiBmcmVlOyBuZWVkIDE2Mzg0OyBkb25lLgpbMjAxNC0w
Mi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoWGVuZERvbWFpbjo0NzYpIEFk
ZGluZyBEb21haW46IDQKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVC
VUcgKFhlbmREb21haW5JbmZvOjI4MzYpIFhlbmREb21haW5JbmZvLmluaXRE
b21haW46IDQgMjU2ClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVH
IChpbWFnZTozMzkpIE5vIFZOQyBwYXNzd2QgY29uZmlndXJlZCBmb3IgdmZi
IGFjY2VzcwpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1h
Z2U6ODkxKSBhcmdzOiBib290LCB2YWw6IGMKWzIwMTQtMDItMDYgMTM6Mjk6
MzAgMTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZmRhLCB2YWw6IE5v
bmUKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGltYWdlOjg5
MSkgYXJnczogZmRiLCB2YWw6IE5vbmUKWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogc291bmRodywgdmFsOiBO
b25lClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChpbWFnZTo4
OTEpIGFyZ3M6IGxvY2FsdGltZSwgdmFsOiAwClsyMDE0LTAyLTA2IDEzOjI5
OjMwIDE1MjJdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNlcmlhbCwgdmFs
OiBbJ3B0eSddClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChp
bWFnZTo4OTEpIGFyZ3M6IHN0ZC12Z2EsIHZhbDogMApbMjAxNC0wMi0wNiAx
MzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBpc2EsIHZh
bDogMApbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBhY3BpLCB2YWw6IDEKWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogdXNiLCB2YWw6IDEKWzIw
MTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJn
czogdXNiZGV2aWNlLCB2YWw6IFsnaG9zdDoxMjVmOmM5NmEnXQpbMjAxNC0w
Mi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBn
ZnhfcGFzc3RocnUsIHZhbDogTm9uZQpbMjAxNC0wMi0wNiAxMzoyOTozMCAx
NTIyXSBJTkZPIChpbWFnZTo4MjIpIE5lZWQgdG8gY3JlYXRlIHBsYXRmb3Jt
IGRldmljZS5bZG9taWQ6NF0KWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0g
REVCVUcgKFhlbmREb21haW5JbmZvOjI4NjMpIF9pbml0RG9tYWluOnNoYWRv
d19tZW1vcnk9MHgwLCBtZW1vcnlfc3RhdGljX21heD0weDQwMDAwMDAwLCBt
ZW1vcnlfc3RhdGljX21pbj0weDAuClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1
MjJdIElORk8gKGltYWdlOjE4MikgYnVpbGREb21haW4gb3M9aHZtIGRvbT00
IHZjcHVzPTEKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGlt
YWdlOjk0OSkgZG9taWQgICAgICAgICAgPSA0ClsyMDE0LTAyLTA2IDEzOjI5
OjMwIDE1MjJdIERFQlVHIChpbWFnZTo5NTApIGltYWdlICAgICAgICAgID0g
L3Vzci9saWIveGVuL2Jvb3QvaHZtbG9hZGVyClsyMDE0LTAyLTA2IDEzOjI5
OjMwIDE1MjJdIERFQlVHIChpbWFnZTo5NTEpIHN0b3JlX2V2dGNobiAgID0g
MgpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6OTUy
KSBtZW1zaXplICAgICAgICA9IDEwMjQKWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gREVCVUcgKGltYWdlOjk1MykgdGFyZ2V0ICAgICAgICAgPSAxMDI0
ClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChpbWFnZTo5NTQp
IHZjcHVzICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIy
XSBERUJVRyAoaW1hZ2U6OTU1KSB2Y3B1X2F2YWlsICAgICA9IDEKWzIwMTQt
MDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGltYWdlOjk1NikgYWNwaSAg
ICAgICAgICAgPSAxClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVH
IChpbWFnZTo5NTcpIGFwaWMgICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAx
MzoyOTozMCAxNTIyXSBJTkZPIChYZW5kRG9tYWluSW5mbzoyMzU3KSBjcmVh
dGVEZXZpY2U6IHZmYiA6IHsndm5jdW51c2VkJzogMSwgJ290aGVyX2NvbmZp
Zyc6IHsndm5jdW51c2VkJzogMSwgJ3ZuYyc6ICcxJ30sICd2bmMnOiAnMScs
ICd1dWlkJzogJzhlYTk2NzIyLTE0YmUtODRjZC05YmFhLWMwYjRlMWMzYmQz
Yyd9ClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChEZXZDb250
cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUnOiAn
MScsICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92ZmIvNC8wJ30gdG8gL2xvY2FsL2RvbWFpbi80L2Rl
dmljZS92ZmIvMC4KWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeyd2
bmN1bnVzZWQnOiAnMScsICdkb21haW4nOiAndWJ1bnR1MTEnLCAnZnJvbnRl
bmQnOiAnL2xvY2FsL2RvbWFpbi80L2RldmljZS92ZmIvMCcsICd1dWlkJzog
JzhlYTk2NzIyLTE0YmUtODRjZC05YmFhLWMwYjRlMWMzYmQzYycsICdmcm9u
dGVuZC1pZCc6ICc0JywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAn
dm5jJzogJzEnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92ZmIvNC8w
LgpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBJTkZPIChYZW5kRG9tYWlu
SW5mbzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZiZCA6IHsndXVpZCc6ICdiODUw
NDg2ZS05YjI2LWI4MmUtMjMxNC02OGNiOGIzOGNkNmMnLCAnYm9vdGFibGUn
OiAxLCAnZHJpdmVyJzogJ3BhcmF2aXJ0dWFsaXNlZCcsICdkZXYnOiAnaGRh
JywgJ3VuYW1lJzogJ2ZpbGU6L3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndyd9ClsyMDE0LTAyLTA2IDEzOjI5OjMw
IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVy
OiB3cml0aW5nIHsnYmFja2VuZC1pZCc6ICcwJywgJ3ZpcnR1YWwtZGV2aWNl
JzogJzc2OCcsICdkZXZpY2UtdHlwZSc6ICdkaXNrJywgJ3N0YXRlJzogJzEn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvNC83
NjgnfSB0byAvbG9jYWwvZG9tYWluLzQvZGV2aWNlL3ZiZC83NjguClsyMDE0
LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk3
KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTEx
JywgJ2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vNC9kZXZpY2UvdmJkLzc2
OCcsICd1dWlkJzogJ2I4NTA0ODZlLTliMjYtYjgyZS0yMzE0LTY4Y2I4YjM4
Y2Q2YycsICdib290YWJsZSc6ICcxJywgJ2Rldic6ICdoZGEnLCAnc3RhdGUn
OiAnMScsICdwYXJhbXMnOiAnL3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndycsICdvbmxpbmUnOiAnMScsICdmcm9u
dGVuZC1pZCc6ICc0JywgJ3R5cGUnOiAnZmlsZSd9IHRvIC9sb2NhbC9kb21h
aW4vMC9iYWNrZW5kL3ZiZC80Lzc2OC4KWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2aWNl
OiB2YmQgOiB7J3V1aWQnOiAnYTMzMTI3OGQtMmY1Zi01ZDczLTRlOWUtMGM3
ODcxZDYwNjcwJywgJ2Jvb3RhYmxlJzogMCwgJ2RyaXZlcic6ICdwYXJhdmly
dHVhbGlzZWQnLCAnZGV2JzogJ2hkYzpjZHJvbScsICd1bmFtZSc6ICdwaHk6
L2Rldi9jZHJvbScsICdtb2RlJzogJ3InfQpbMjAxNC0wMi0wNiAxMzoyOToz
MCAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxl
cjogd3JpdGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmlj
ZSc6ICc1NjMyJywgJ2RldmljZS10eXBlJzogJ2Nkcm9tJywgJ3N0YXRlJzog
JzEnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQv
NC81NjMyJ30gdG8gL2xvY2FsL2RvbWFpbi80L2RldmljZS92YmQvNTYzMi4K
WzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKERldkNvbnRyb2xs
ZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1
bnR1MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi80L2RldmljZS92
YmQvNTYzMicsICd1dWlkJzogJ2EzMzEyNzhkLTJmNWYtNWQ3My00ZTllLTBj
Nzg3MWQ2MDY3MCcsICdib290YWJsZSc6ICcwJywgJ2Rldic6ICdoZGMnLCAn
c3RhdGUnOiAnMScsICdwYXJhbXMnOiAnL2Rldi9jZHJvbScsICdtb2RlJzog
J3InLCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQtaWQnOiAnNCcsICd0eXBl
JzogJ3BoeSd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC80LzU2
MzIuClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIElORk8gKFhlbmREb21h
aW5JbmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmlmIDogeydicmlkZ2UnOiAn
eGVuYnIwJywgJ21hYyc6ICcwMDoxNjozZTo2ODo1Njo3OScsICd0eXBlJzog
J2lvZW11JywgJ3V1aWQnOiAnNmU2MWQ2MjQtN2VkYS03ZGQ3LTE5YzYtZGQ1
OGU5YTkzNzM4J30KWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydz
dGF0ZSc6ICcxJywgJ2JhY2tlbmQtaWQnOiAnMCcsICdiYWNrZW5kJzogJy9s
b2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi80LzAnfSB0byAvbG9jYWwvZG9t
YWluLzQvZGV2aWNlL3ZpZi8wLgpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIy
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J2JyaWRnZSc6ICd4ZW5icjAnLCAnZG9tYWluJzogJ3VidW50dTEx
JywgJ2hhbmRsZSc6ICcwJywgJ3V1aWQnOiAnNmU2MWQ2MjQtN2VkYS03ZGQ3
LTE5YzYtZGQ1OGU5YTkzNzM4JywgJ3NjcmlwdCc6ICcvZXRjL3hlbi9zY3Jp
cHRzL3ZpZi1icmlkZ2UnLCAnbWFjJzogJzAwOjE2OjNlOjY4OjU2Ojc5Jywg
J2Zyb250ZW5kLWlkJzogJzQnLCAnc3RhdGUnOiAnMScsICdvbmxpbmUnOiAn
MScsICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzQvZGV2aWNlL3ZpZi8w
JywgJ3R5cGUnOiAnaW9lbXUnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2Vu
ZC92aWYvNC8wLgpbMjAxNC0wMi0wNiAxMzoyOTozMSAxNTIyXSBJTkZPIChp
bWFnZTo0MTgpIHNwYXduaW5nIGRldmljZSBtb2RlbHM6IC91c3IvbGliL3hl
bi9iaW4vcWVtdS1kbSBbJy91c3IvbGliL3hlbi9iaW4vcWVtdS1kbScsICct
ZCcsICc0JywgJy1kb21haW4tbmFtZScsICd1YnVudHUxMScsICctdmlkZW9y
YW0nLCAnNCcsICctdm5jJywgJzEyNy4wLjAuMTowJywgJy12bmN1bnVzZWQn
LCAnLXZjcHVzJywgJzEnLCAnLXZjcHVfYXZhaWwnLCAnMHgxJywgJy1ib290
JywgJ2MnLCAnLXNlcmlhbCcsICdwdHknLCAnLWFjcGknLCAnLXVzYicsICct
dXNiZGV2aWNlJywgIlsnaG9zdDoxMjVmOmM5NmEnXSIsICctbmV0JywgJ25p
Yyx2bGFuPTEsbWFjYWRkcj0wMDoxNjozZTo2ODo1Njo3OSxtb2RlbD1ydGw4
MTM5JywgJy1uZXQnLCAndGFwLHZsYW49MSxpZm5hbWU9dGFwNC4wLGJyaWRn
ZT14ZW5icjAnLCAnLU0nLCAneGVuZnYnXQpbMjAxNC0wMi0wNiAxMzoyOToz
MSAxNTIyXSBJTkZPIChpbWFnZTo0NjcpIGRldmljZSBtb2RlbCBwaWQ6IDEy
MjU0ClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIElORk8gKGltYWdlOjU5
MCkgd2FpdGluZyBmb3Igc2VudGluZWxfZmlmbwpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MzQyMCkgU3Rvcmlu
ZyBWTSBkZXRhaWxzOiB7J29uX3hlbmRfc3RvcCc6ICdpZ25vcmUnLCAncG9v
bF9uYW1lJzogJ1Bvb2wtMCcsICdzaGFkb3dfbWVtb3J5JzogJzknLCAndXVp
ZCc6ICc1ZGE0NzY4Ni00NGE4LTNhZDQtZDIzOC0xNDJjOTYxZmQ3MmMnLCAn
b25fcmVib290JzogJ3Jlc3RhcnQnLCAnc3RhcnRfdGltZSc6ICcxMzkxNjgw
NzcxLjA0JywgJ29uX3Bvd2Vyb2ZmJzogJ2Rlc3Ryb3knLCAnYm9vdGxvYWRl
cl9hcmdzJzogJycsICdvbl94ZW5kX3N0YXJ0JzogJ2lnbm9yZScsICdvbl9j
cmFzaCc6ICdyZXN0YXJ0JywgJ3hlbmQvcmVzdGFydF9jb3VudCc6ICcwJywg
J3ZjcHVzJzogJzEnLCAndmNwdV9hdmFpbCc6ICcxJywgJ2Jvb3Rsb2FkZXIn
OiAnJywgJ2ltYWdlJzogIihodm0gKGtlcm5lbCAnJykgKHN1cGVycGFnZXMg
MCkgKHZpZGVvcmFtIDQpIChocGV0IDApIChzdGR2Z2EgMCkgKGxvYWRlciAv
dXNyL2xpYi94ZW4vYm9vdC9odm1sb2FkZXIpICh4ZW5fcGxhdGZvcm1fcGNp
IDEpIChvcGVuZ2wgMSkgKHJ0Y190aW1lb2Zmc2V0IDApIChwY2kgKCkpICho
YXAgMSkgKGxvY2FsdGltZSAwKSAodGltZXJfbW9kZSAxKSAocGNpX21zaXRy
YW5zbGF0ZSAxKSAob29zIDEpIChhcGljIDEpIChzZGwgMCkgKHVzYmRldmlj
ZSAoaG9zdDoxMjVmOmM5NmEpKSAoZGlzcGxheSA6MC4wKSAodnB0X2FsaWdu
IDEpIChzZXJpYWwgcHR5KSAodm5jdW51c2VkIDEpIChib290IGMpIChwYWUg
MSkgKHZpcmlkaWFuIDApIChhY3BpIDEpICh2bmMgMSkgKG5vZ3JhcGhpYyAw
KSAobm9taWdyYXRlIDApICh1c2IgMSkgKHRzY19tb2RlIDApIChndWVzdF9v
c190eXBlIGRlZmF1bHQpIChkZXZpY2VfbW9kZWwgL3Vzci9saWIveGVuL2Jp
bi9xZW11LWRtKSAocGNpX3Bvd2VyX21nbXQgMCkgKHhhdXRob3JpdHkgL3Jv
b3QvLlhhdXRob3JpdHkpIChpc2EgMCkgKG5vdGVzIChTVVNQRU5EX0NBTkNF
TCAxKSkpIiwgJ25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MTc5NCkgU3Rvcmlu
ZyBkb21haW4gZGV0YWlsczogeydjb25zb2xlL3BvcnQnOiAnMycsICdkZXNj
cmlwdGlvbic6ICcnLCAnY29uc29sZS9saW1pdCc6ICcxMDQ4NTc2JywgJ3N0
b3JlL3BvcnQnOiAnMicsICd2bSc6ICcvdm0vNWRhNDc2ODYtNDRhOC0zYWQ0
LWQyMzgtMTQyYzk2MWZkNzJjJywgJ2RvbWlkJzogJzQnLCAnaW1hZ2Uvc3Vz
cGVuZC1jYW5jZWwnOiAnMScsICdjcHUvMC9hdmFpbGFiaWxpdHknOiAnb25s
aW5lJywgJ21lbW9yeS90YXJnZXQnOiAnMTA0ODU3NicsICdjb250cm9sL3Bs
YXRmb3JtLWZlYXR1cmUtbXVsdGlwcm9jZXNzb3Itc3VzcGVuZCc6ICcxJywg
J3N0b3JlL3JpbmctcmVmJzogJzEwNDQ0NzYnLCAnY29uc29sZS90eXBlJzog
J2lvZW11JywgJ25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJv
bGxlcjogd3JpdGluZyB7J3N0YXRlJzogJzEnLCAnYmFja2VuZC1pZCc6ICcw
JywgJ2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvY29uc29s
ZS80LzAnfSB0byAvbG9jYWwvZG9tYWluLzQvZGV2aWNlL2NvbnNvbGUvMC4K
WzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xs
ZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1
bnR1MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi80L2RldmljZS9j
b25zb2xlLzAnLCAndXVpZCc6ICdiMzk0MGVlMi0wOGRlLTE3MWEtYWM4NS1h
MTU0NjUwNzQ3MWYnLCAnZnJvbnRlbmQtaWQnOiAnNCcsICdzdGF0ZSc6ICcx
JywgJ2xvY2F0aW9uJzogJzMnLCAnb25saW5lJzogJzEnLCAncHJvdG9jb2wn
OiAndnQxMDAnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xl
LzQvMC4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjE4ODEpIFhlbmREb21haW5JbmZvLmhhbmRsZVNodXRkb3du
V2F0Y2gKWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNv
bnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHRhcDIuClsyMDE0
LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEz
OSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2aWYuClsyMDE0LTAyLTA2IDEzOjI5
OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBm
b3IgMC4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNv
bnRyb2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2Rv
bWFpbi8wL2JhY2tlbmQvdmlmLzQvMC9ob3RwbHVnLXN0YXR1cy4KWzIwMTQt
MDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6NjQy
KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgMS4KWzIwMTQtMDItMDYgMTM6Mjk6
MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZv
ciBkZXZpY2VzIHZrYmQuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERF
QlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBp
b3BvcnRzLgpbMjAxNC0wMi0wNiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2
Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdGFwLgpbMjAx
NC0wMi0wNiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjox
MzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdmlmMi4KWzIwMTQtMDItMDYgMTM6
Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5n
IGZvciBkZXZpY2VzIGNvbnNvbGUuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1
MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgMC4K
WzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xs
ZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHZzY3NpLgpbMjAxNC0wMi0w
NiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdh
aXRpbmcgZm9yIGRldmljZXMgdmJkLgpbMjAxNC0wMi0wNiAxMzoyOTozMSAx
NTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9yIDc2
OC4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRy
b2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFp
bi8wL2JhY2tlbmQvdmJkLzQvNzY4L2hvdHBsdWctc3RhdHVzLgpbMjAxNC0w
Mi0wNiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo2NDIp
IGhvdHBsdWdTdGF0dXNDYWxsYmFjayAxLgpbMjAxNC0wMi0wNiAxMzoyOToz
MSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9y
IDU2MzIuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZD
b250cm9sbGVyOjYyOCkgaG90cGx1Z1N0YXR1c0NhbGxiYWNrIC9sb2NhbC9k
b21haW4vMC9iYWNrZW5kL3ZiZC80LzU2MzIvaG90cGx1Zy1zdGF0dXMuClsy
MDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVy
OjY0MikgaG90cGx1Z1N0YXR1c0NhbGxiYWNrIDEuClsyMDE0LTAyLTA2IDEz
OjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGlu
ZyBmb3IgZGV2aWNlcyBpcnEuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJd
IERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNl
cyB2ZmIuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZD
b250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBwY2kuClsyMDE0
LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEz
OSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2dXNiLgpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcg
Zm9yIGRldmljZXMgdnRwbS4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0g
SU5GTyAoWGVuZERvbWFpbjoxMjI1KSBEb21haW4gdWJ1bnR1MTEgKDQpIHVu
cGF1c2VkLgpbMjAxNC0wMi0wNiAxMzozMDo1MiAxNTIyXSBERUJVRyAoWGVu
ZENoZWNrcG9pbnQ6MTI0KSBbeGNfc2F2ZV06IC91c3IvbGliL3hlbi9iaW4v
eGNfc2F2ZSAyOCA0IDAgMCA1ClsyMDE0LTAyLTA2IDEzOjMwOjUyIDE1MjJd
IElORk8gKFhlbmRDaGVja3BvaW50OjQyMykgeGNfc2F2ZTogZmFpbGVkIHRv
IGdldCB0aGUgc3VzcGVuZCBldnRjaG4gcG9ydApbMjAxNC0wMi0wNiAxMzoz
MDo1MiAxNTIyXSBJTkZPIChYZW5kQ2hlY2twb2ludDo0MjMpIApbMjAxNC0w
Mi0wNiAxMzozMjozOCAxNTIyXSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6Mzk0
KSBzdXNwZW5kClsyMDE0LTAyLTA2IDEzOjMyOjM4IDE1MjJdIERFQlVHIChY
ZW5kQ2hlY2twb2ludDoxMjcpIEluIHNhdmVJbnB1dEhhbmRsZXIgc3VzcGVu
ZApbMjAxNC0wMi0wNiAxMzozMjozOCAxNTIyXSBERUJVRyAoWGVuZENoZWNr
cG9pbnQ6MTI5KSBTdXNwZW5kaW5nIDQgLi4uClsyMDE0LTAyLTA2IDEzOjMy
OjM4IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzo1MjQpIFhlbmREb21h
aW5JbmZvLnNodXRkb3duKHN1c3BlbmQpClsyMDE0LTAyLTA2IDEzOjMyOjM4
IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxODgxKSBYZW5kRG9tYWlu
SW5mby5oYW5kbGVTaHV0ZG93bldhdGNoClsyMDE0LTAyLTA2IDEzOjMyOjM4
IDE1MjJdIElORk8gKFhlbmREb21haW5JbmZvOjU0MSkgSFZNIHNhdmU6cmVt
b3RlIHNodXRkb3duIGRvbSA0IQpbMjAxNC0wMi0wNiAxMzozMjozOCAxNTIy
XSBJTkZPIChYZW5kQ2hlY2twb2ludDoxMzUpIERvbWFpbiA0IHN1c3BlbmRl
ZC4KWzIwMTQtMDItMDYgMTM6MzI6MzggMTUyMl0gSU5GTyAoWGVuZERvbWFp
bkluZm86MjA3OCkgRG9tYWluIGhhcyBzaHV0ZG93bjogbmFtZT1taWdyYXRp
bmctdWJ1bnR1MTEgaWQ9NCByZWFzb249c3VzcGVuZC4KWzIwMTQtMDItMDYg
MTM6MzI6MzggMTUyMl0gSU5GTyAoaW1hZ2U6NTM4KSBzaWduYWxEZXZpY2VN
b2RlbDpyZXN0b3JlIGRtIHN0YXRlIHRvIHJ1bm5pbmcKWzIwMTQtMDItMDYg
MTM6MzI6MzggMTUyMl0gREVCVUcgKFhlbmRDaGVja3BvaW50OjE0NCkgV3Jp
dHRlbiBkb25lClsyMDE0LTAyLTA2IDEzOjMyOjM4IDE1MjJdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzozMDcxKSBYZW5kRG9tYWluSW5mby5kZXN0cm95OiBk
b21pZD00ClsyMDE0LTAyLTA2IDEzOjMyOjM5IDE1MjJdIERFQlVHIChYZW5k
RG9tYWluSW5mbzoyNDAxKSBEZXN0cm95aW5nIGRldmljZSBtb2RlbApbMjAx
NC0wMi0wNiAxMzozMjozOSAxNTIyXSBJTkZPIChpbWFnZTo2MTUpIG1pZ3Jh
dGluZy11YnVudHUxMSBkZXZpY2UgbW9kZWwgdGVybWluYXRlZApbMjAxNC0w
Mi0wNiAxMzozMjozOSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQw
OCkgUmVsZWFzaW5nIGRldmljZXMKWzIwMTQtMDItMDYgMTM6MzI6MzkgMTUy
Ml0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZpZi8w
ClsyMDE0LTAyLTA2IDEzOjMyOjM5IDE1MjJdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZp
Y2VDbGFzcyA9IHZpZiwgZGV2aWNlID0gdmlmLzAKWzIwMTQtMDItMDYgMTM6
MzI6MzkgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92
aW5nIGNvbnNvbGUvMApbMjAxNC0wMi0wNiAxMzozMjozOSAxNTIyXSBERUJV
RyAoWGVuZERvbWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJv
eURldmljZTogZGV2aWNlQ2xhc3MgPSBjb25zb2xlLCBkZXZpY2UgPSBjb25z
b2xlLzAKWzIwMTQtMDItMDYgMTM6MzI6MzkgMTUyMl0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZiZC83NjgKWzIwMTQtMDItMDYg
MTM6MzI6MzkgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhl
bmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gdmJk
LCBkZXZpY2UgPSB2YmQvNzY4ClsyMDE0LTAyLTA2IDEzOjMyOjM5IDE1MjJd
IERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2YmQvNTYz
MgpbMjAxNC0wMi0wNiAxMzozMjozOSAxNTIyXSBERUJVRyAoWGVuZERvbWFp
bkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2
aWNlQ2xhc3MgPSB2YmQsIGRldmljZSA9IHZiZC81NjMyClsyMDE0LTAyLTA2
IDEzOjMyOjM5IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBS
ZW1vdmluZyB2ZmIvMApbMjAxNC0wMi0wNiAxMzozMjozOSAxNTIyXSBERUJV
RyAoWGVuZERvbWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJv
eURldmljZTogZGV2aWNlQ2xhc3MgPSB2ZmIsIGRldmljZSA9IHZmYi8wCg==


--1665047788-1877962208-1391689706=:75705
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1665047788-1877962208-1391689706=:75705--


From xen-devel-bounces@lists.xen.org Thu Feb 06 13:09:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOhh-0002Dp-SD; Thu, 06 Feb 2014 13:09:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WBO4E-0005ih-IZ
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 12:28:31 +0000
Received: from [85.158.137.68:35057] by server-1.bemta-3.messagelabs.com id
	0E/1D-17293-DEF73F25; Thu, 06 Feb 2014 12:28:29 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391689706!51826!1
X-Originating-IP: [98.139.213.147]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30381 invoked from network); 6 Feb 2014 12:28:28 -0000
Received: from nm10-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm10-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.147)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 12:28:28 -0000
Received: from [66.196.81.170] by nm10.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 12:28:26 -0000
Received: from [98.139.212.203] by tm16.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 12:28:26 -0000
Received: from [127.0.0.1] by omp1012.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 12:28:26 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 574334.35455.bm@omp1012.mail.bf1.yahoo.com
Received: (qmail 79349 invoked by uid 60001); 6 Feb 2014 12:28:26 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391689706; bh=O3stWYs6Va5xeebi9roOMxuVC228brrFSPMJ0VQQDGg=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=2O5oZ/e/IVQrDcs/68CpUf7sMauAYLlwBdFV7BEE9B0XpIowFTBkw1fIHupA4/+IJRRnRoO35te+9wSHtFQGVosgnwwHd/9WvCmUvvUJgJSEVqcISp//5kl2Qb05J2LfgmBiZUer0CB0B6rf0mvMT8WQ+wlKlkcT5ShRWvoSX8c=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=RU4gcpnhTicRnrf5XSjZ7Q7jgVE54GVWQuvvGGwA75YHYQTOLHB5e8RgpfpZvssRSSjCtBJn+8xCIlGHaHWg6b5HRSSESoe41h4Wn4OuWPZidTM5lrGkibyRpwjEYG7r9aNeDSBOnpcTv/3YnPo1prtf8oXmPInpVuhQsIiQkHY=;
X-YMail-OSG: jenx46EVM1lawmsHbHiIK_reVsBvmC1iMZcdqHSCBO3ZO.V
	hKSve32JqEZ4av0f0oY53ZbRw2H7KFyDtKc29.ggDVMe9WvI8V18xJ8pjjKY
	CUlIepiYKAQGzac6QCTLzRC1r7lano9bHvtbM2mfdR9CrcKirTcIT5QGYPmd
	4UmFXkx6k54q6KA1LItfwaozjJmjiF4rNiwDX42JlUxrCgo3fOfXHaD6BfXw
	8yKWsT3wNqEXU.MR6o1zBOutxWXlOUuhj80djDMs_3s8EGJ9TuU7uo.D86jO
	154gwiklKP_Ppzg0IXyH0.kwv.uvNG5JlKtn41ot8K7e1SjK..vMlII0C46H
	2r.NBbdMBU3_1DMMxlczKH4Sba5Rnxe.DB4y_Igmqg8xMiOaHFFZhKu3NVHo
	UP05H3m0bSdxjrrC5rcLMp341T7n6_HNgW3zqLv0hz7h6muT0U3F7hPErYeE
	O91gjhYT8.aFn1.WIwjFnOP1IQooqUG8tgXicsVet6cjk5F2KWze734zy9Uv
	tsjikyBvZVmRkL3n4WcwEtphJPVthEVU-
Received: from [192.227.225.3] by web161806.mail.bf1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 04:28:26 PST
X-Rocket-MIMEInfo: 002.001,
	SSBjaGVja8KgWENGTEFHU19ERUJVRyB0aGF0IGRlZmluaXRpb24gYXQgbGluZSAyNyBvZiBmaWxlIHhlbmd1ZXN0LmggdGhhdCBtZWFuJ3PCoAojZGVmaW5lIFhDRkxBR1NfREVCVUcgICAgICgxIDw8IDEpCkkgZG9uZSAyIG1pZ3JhdGlvbiB0byBhbW91bnTCoGx2bCA9IFhUTF9ERUJVRzsgYW5kwqBsdmwgPSBYVExfREVUQUlMOyB0aGF0IHJlc3VsdCBhdHRhY2hlZCBCdXQgdW50aWwgaSBoYXZlIG5vdCBsb2cgb2YgZGlydHkgcGFnZShkaXJ0eSBtZW1vcnkpYW5kIGRvd250aW1lIDotKCAuLi4KCsKgCkFkZWwBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.173.622
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
Message-ID: <1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
Date: Thu, 6 Feb 2014 04:28:26 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140205123908.GA1198@aepfle.de>
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1665047788-1877962208-1391689706=:75705"
X-Mailman-Approved-At: Thu, 06 Feb 2014 13:09:16 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1665047788-1877962208-1391689706=:75705
Content-Type: multipart/alternative; boundary="1665047788-2062883207-1391689706=:75705"

--1665047788-2062883207-1391689706=:75705
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit

I checked the XCFLAGS_DEBUG definition at line 27 of xenguest.h; it means

#define XCFLAGS_DEBUG     (1 << 1)

I did two migrations, one with lvl = XTL_DEBUG; and one with lvl = XTL_DETAIL;
the results are attached. But I still get no log of dirty pages (dirty memory)
or of the downtime :-( ...

Adel Amani
M.Sc. Candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir


On Wednesday, February 5, 2014 4:09 PM, Olaf Hering <olaf@aepfle.de> wrote:

On Tue, Feb 04, Adel Amani wrote:

>     si.flags = atoi(argv[5]);
> lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL;
> lflags = XTL_STDIOSTREAM_SHOW_PID | XTL_STDIOSTREAM_HIDE_PROGRESS;
> l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags);
> si.xch = xc_interface_open(l, 0, 0);

Please check what XCFLAGS_DEBUG actually means, and whether that condition
can ever be true without also modifying the xend-related code.

I guess that in your exploration of how migration actually works internally,
it would be easier for you to just write 'lvl = XTL_DEBUG;' and be done
with it.

Other than that, the changes you made appear to be correct.

Olaf
--1665047788-2062883207-1391689706=:75705--
--1665047788-1877962208-1391689706=:75705
Content-Type: application/octet-stream; name="xend (XTL_DETAIL).log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="xend (XTL_DETAIL).log"

WzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKFhlbmREb21haW5J
bmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25hbWUn
LCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5kX3N0
YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUnXSwg
Wyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0nLCBb
J2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBbJ3Nl
cmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBbJ2Jv
b3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywgW11d
LCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScsICc6
MCddLCBbJ2ZkYScsICcnXSwgWydmZGInLCAnJ10sIFsnZ3Vlc3Rfb3NfdHlw
ZScsICdkZWZhdWx0J10sIFsnaGFwJywgMV0sIFsnaHBldCcsIDBdLCBbJ2lz
YScsIDBdLCBbJ2tleW1hcCcsICcnXSwgWydsb2NhbHRpbWUnLCAwXSwgWydu
b2dyYXBoaWMnLCAwXSwgWydvcGVuZ2wnLCAxXSwgWydvb3MnLCAxXSwgWydw
YWUnLCAxXSwgWydwY2knLCBbXV0sIFsncGNpX21zaXRyYW5zbGF0ZScsIDFd
LCBbJ3BjaV9wb3dlcl9tZ210JywgMF0sIFsncnRjX3RpbWVvZmZzZXQnLCAw
XSwgWydzZGwnLCAwXSwgWydzb3VuZGh3JywgJyddLCBbJ3N0ZHZnYScsIDBd
LCBbJ3RpbWVyX21vZGUnLCAxXSwgWyd1c2InLCAxXSwgWyd1c2JkZXZpY2Un
LCBbJ2hvc3Q6MTI1ZjpjOTZhJ11dLCBbJ3ZjcHVzJywgMV0sIFsndm5jJywg
MV0sIFsndm5jdW51c2VkJywgMV0sIFsndmlyaWRpYW4nLCAwXSwgWyd2cHRf
YWxpZ24nLCAxXSwgWyd4YXV0aG9yaXR5JywgJy9yb290Ly5YYXV0aG9yaXR5
J10sIFsneGVuX3BsYXRmb3JtX3BjaScsIDFdLCBbJ21lbW9yeV9zaGFyaW5n
JywgMF0sIFsndm5jcGFzc3dkJywgJ1hYWFhYWFhYJ10sIFsndHNjX21vZGUn
LCAwXSwgWydub21pZ3JhdGUnLCAwXV1dLCBbJ3MzX2ludGVncml0eScsIDFd
LCBbJ2RldmljZScsIFsndmJkJywgWyd1bmFtZScsICdmaWxlOi92YXIvbGli
L2xpYnZpcnQvaW1hZ2VzL3VidW50dTExLmltZyddLCBbJ2RldicsICdoZGEn
XSwgWydtb2RlJywgJ3cnXV1dLCBbJ2RldmljZScsIFsndmJkJywgWyd1bmFt
ZScsICdwaHk6L2Rldi9jZHJvbSddLCBbJ2RldicsICdoZGM6Y2Ryb20nXSwg
Wydtb2RlJywgJ3InXV1dLCBbJ2RldmljZScsIFsndmlmJywgWydicmlkZ2Un
LCAneGVuYnIwJ10sIFsndHlwZScsICdpb2VtdSddXV1dKQpbMjAxNC0wMi0w
NiAxNTowNDoxNiAxNTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQ5OCkg
WGVuZERvbWFpbkluZm8uY29uc3RydWN0RG9tYWluClsyMDE0LTAyLTA2IDE1
OjA0OjE2IDE1NzBdIERFQlVHIChiYWxsb29uOjE4NykgQmFsbG9vbjogMjc0
ODMwNCBLaUIgZnJlZTsgbmVlZCAxNjM4NDsgZG9uZS4KWzIwMTQtMDItMDYg
MTU6MDQ6MTYgMTU3MF0gREVCVUcgKFhlbmREb21haW46NDc2KSBBZGRpbmcg
RG9tYWluOiAzClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzoyODM2KSBYZW5kRG9tYWluSW5mby5pbml0RG9tYWlu
OiAzIDI1NgpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1h
Z2U6MzM5KSBObyBWTkMgcGFzc3dkIGNvbmZpZ3VyZWQgZm9yIHZmYiBhY2Nl
c3MKWzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5
MSkgYXJnczogYm9vdCwgdmFsOiBjClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1
NzBdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IGZkYSwgdmFsOiBOb25lClsy
MDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo4OTEpIGFy
Z3M6IGZkYiwgdmFsOiBOb25lClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBd
IERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNvdW5kaHcsIHZhbDogTm9uZQpb
MjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1hZ2U6ODkxKSBh
cmdzOiBsb2NhbHRpbWUsIHZhbDogMApbMjAxNC0wMi0wNiAxNTowNDoxNiAx
NTcwXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBzZXJpYWwsIHZhbDogWydw
dHknXQpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBzdGQtdmdhLCB2YWw6IDAKWzIwMTQtMDItMDYgMTU6MDQ6
MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogaXNhLCB2YWw6IDAK
WzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5MSkg
YXJnczogYWNwaSwgdmFsOiAxClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBd
IERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHVzYiwgdmFsOiAxClsyMDE0LTAy
LTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHVz
YmRldmljZSwgdmFsOiBbJ2hvc3Q6MTI1ZjpjOTZhJ10KWzIwMTQtMDItMDYg
MTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZ2Z4X3Bh
c3N0aHJ1LCB2YWw6IE5vbmUKWzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0g
SU5GTyAoaW1hZ2U6ODIyKSBOZWVkIHRvIGNyZWF0ZSBwbGF0Zm9ybSBkZXZp
Y2UuW2RvbWlkOjNdClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVH
IChYZW5kRG9tYWluSW5mbzoyODYzKSBfaW5pdERvbWFpbjpzaGFkb3dfbWVt
b3J5PTB4MCwgbWVtb3J5X3N0YXRpY19tYXg9MHg0MDAwMDAwMCwgbWVtb3J5
X3N0YXRpY19taW49MHgwLgpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBJ
TkZPIChpbWFnZToxODIpIGJ1aWxkRG9tYWluIG9zPWh2bSBkb209MyB2Y3B1
cz0xClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo5
NDkpIGRvbWlkICAgICAgICAgID0gMwpbMjAxNC0wMi0wNiAxNTowNDoxNiAx
NTcwXSBERUJVRyAoaW1hZ2U6OTUwKSBpbWFnZSAgICAgICAgICA9IC91c3Iv
bGliL3hlbi9ib290L2h2bWxvYWRlcgpbMjAxNC0wMi0wNiAxNTowNDoxNiAx
NTcwXSBERUJVRyAoaW1hZ2U6OTUxKSBzdG9yZV9ldnRjaG4gICA9IDIKWzIw
MTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVCVUcgKGltYWdlOjk1MikgbWVt
c2l6ZSAgICAgICAgPSAxMDI0ClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBd
IERFQlVHIChpbWFnZTo5NTMpIHRhcmdldCAgICAgICAgID0gMTAyNApbMjAx
NC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1hZ2U6OTU0KSB2Y3B1
cyAgICAgICAgICA9IDEKWzIwMTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gREVC
VUcgKGltYWdlOjk1NSkgdmNwdV9hdmFpbCAgICAgPSAxClsyMDE0LTAyLTA2
IDE1OjA0OjE2IDE1NzBdIERFQlVHIChpbWFnZTo5NTYpIGFjcGkgICAgICAg
ICAgID0gMQpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoaW1h
Z2U6OTU3KSBhcGljICAgICAgICAgICA9IDEKWzIwMTQtMDItMDYgMTU6MDQ6
MTYgMTU3MF0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2
aWNlOiB2ZmIgOiB7J3ZuY3VudXNlZCc6IDEsICdvdGhlcl9jb25maWcnOiB7
J3ZuY3VudXNlZCc6IDEsICd2bmMnOiAnMSd9LCAndm5jJzogJzEnLCAndXVp
ZCc6ICcyYjJiN2EyNy04OGE3LTQ0ZTgtZThlMS0yOTE3NzZkMjg1MzcnfQpb
MjAxNC0wMi0wNiAxNTowNDoxNiAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxl
cjo5NSkgRGV2Q29udHJvbGxlcjogd3JpdGluZyB7J3N0YXRlJzogJzEnLCAn
YmFja2VuZC1pZCc6ICcwJywgJ2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8w
L2JhY2tlbmQvdmZiLzMvMCd9IHRvIC9sb2NhbC9kb21haW4vMy9kZXZpY2Uv
dmZiLzAuClsyMDE0LTAyLTA2IDE1OjA0OjE2IDE1NzBdIERFQlVHIChEZXZD
b250cm9sbGVyOjk3KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsndm5jdW51
c2VkJzogJzEnLCAnZG9tYWluJzogJ3VidW50dTExJywgJ2Zyb250ZW5kJzog
Jy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmZiLzAnLCAndXVpZCc6ICcyYjJi
N2EyNy04OGE3LTQ0ZTgtZThlMS0yOTE3NzZkMjg1MzcnLCAnZnJvbnRlbmQt
aWQnOiAnMycsICdzdGF0ZSc6ICcxJywgJ29ubGluZSc6ICcxJywgJ3ZuYyc6
ICcxJ30gdG8gL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmZiLzMvMC4KWzIw
MTQtMDItMDYgMTU6MDQ6MTYgMTU3MF0gSU5GTyAoWGVuZERvbWFpbkluZm86
MjM1NykgY3JlYXRlRGV2aWNlOiB2YmQgOiB7J3V1aWQnOiAnZTBjMDRmYjct
ZDVhNS0wNDI3LTgxN2QtZWZmZThkNjk2YjNiJywgJ2Jvb3RhYmxlJzogMSwg
J2RyaXZlcic6ICdwYXJhdmlydHVhbGlzZWQnLCAnZGV2JzogJ2hkYScsICd1
bmFtZSc6ICdmaWxlOi92YXIvbGliL2xpYnZpcnQvaW1hZ2VzL3VidW50dTEx
LmltZycsICdtb2RlJzogJ3cnfQpbMjAxNC0wMi0wNiAxNTowNDoxNiAxNTcw
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmljZSc6ICc3
NjgnLCAnZGV2aWNlLXR5cGUnOiAnZGlzaycsICdzdGF0ZSc6ICcxJywgJ2Jh
Y2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmJkLzMvNzY4J30g
dG8gL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQvNzY4LgpbMjAxNC0wMi0w
NiAxNTowNDoxNiAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2
Q29udHJvbGxlcjogd3JpdGluZyB7J2RvbWFpbic6ICd1YnVudHUxMScsICdm
cm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZiZC83NjgnLCAn
dXVpZCc6ICdlMGMwNGZiNy1kNWE1LTA0MjctODE3ZC1lZmZlOGQ2OTZiM2In
LCAnYm9vdGFibGUnOiAnMScsICdkZXYnOiAnaGRhJywgJ3N0YXRlJzogJzEn
LCAncGFyYW1zJzogJy92YXIvbGliL2xpYnZpcnQvaW1hZ2VzL3VidW50dTEx
LmltZycsICdtb2RlJzogJ3cnLCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQt
aWQnOiAnMycsICd0eXBlJzogJ2ZpbGUnfSB0byAvbG9jYWwvZG9tYWluLzAv
YmFja2VuZC92YmQvMy83NjguClsyMDE0LTAyLTA2IDE1OjA0OjE3IDE1NzBd
IElORk8gKFhlbmREb21haW5JbmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmJk
IDogeyd1dWlkJzogJzZmMTllYWNlLTM1YTYtNTc4NS00MGQ4LTYyYWJmOTVl
OWE3NycsICdib290YWJsZSc6IDAsICdkcml2ZXInOiAncGFyYXZpcnR1YWxp
c2VkJywgJ2Rldic6ICdoZGM6Y2Ryb20nLCAndW5hbWUnOiAncGh5Oi9kZXYv
Y2Ryb20nLCAnbW9kZSc6ICdyJ30KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3
MF0gREVCVUcgKERldkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdy
aXRpbmcgeydiYWNrZW5kLWlkJzogJzAnLCAndmlydHVhbC1kZXZpY2UnOiAn
NTYzMicsICdkZXZpY2UtdHlwZSc6ICdjZHJvbScsICdzdGF0ZSc6ICcxJywg
J2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmJkLzMvNTYz
Mid9IHRvIC9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzU2MzIuClsyMDE0
LTAyLTA2IDE1OjA0OjE3IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjk3
KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTEx
JywgJ2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzU2
MzInLCAndXVpZCc6ICc2ZjE5ZWFjZS0zNWE2LTU3ODUtNDBkOC02MmFiZjk1
ZTlhNzcnLCAnYm9vdGFibGUnOiAnMCcsICdkZXYnOiAnaGRjJywgJ3N0YXRl
JzogJzEnLCAncGFyYW1zJzogJy9kZXYvY2Ryb20nLCAnbW9kZSc6ICdyJywg
J29ubGluZSc6ICcxJywgJ2Zyb250ZW5kLWlkJzogJzMnLCAndHlwZSc6ICdw
aHknfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy81NjMyLgpb
MjAxNC0wMi0wNiAxNTowNDoxNyAxNTcwXSBJTkZPIChYZW5kRG9tYWluSW5m
bzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZpZiA6IHsnYnJpZGdlJzogJ3hlbmJy
MCcsICdtYWMnOiAnMDA6MTY6M2U6MDM6NjM6YTMnLCAndHlwZSc6ICdpb2Vt
dScsICd1dWlkJzogJzIxNzBiMDQwLTNiZTYtNzIyMS1iYjUyLTZkOTU3YWIw
ZDczMyd9ClsyMDE0LTAyLTA2IDE1OjA0OjE3IDE1NzBdIERFQlVHIChEZXZD
b250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUn
OiAnMScsICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwv
ZG9tYWluLzAvYmFja2VuZC92aWYvMy8wJ30gdG8gL2xvY2FsL2RvbWFpbi8z
L2RldmljZS92aWYvMC4KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3MF0gREVC
VUcgKERldkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcg
eydicmlkZ2UnOiAneGVuYnIwJywgJ2RvbWFpbic6ICd1YnVudHUxMScsICdo
YW5kbGUnOiAnMCcsICd1dWlkJzogJzIxNzBiMDQwLTNiZTYtNzIyMS1iYjUy
LTZkOTU3YWIwZDczMycsICdzY3JpcHQnOiAnL2V0Yy94ZW4vc2NyaXB0cy92
aWYtYnJpZGdlJywgJ21hYyc6ICcwMDoxNjozZTowMzo2MzphMycsICdmcm9u
dGVuZC1pZCc6ICczJywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAn
ZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92aWYvMCcsICd0
eXBlJzogJ2lvZW11J30gdG8gL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvdmlm
LzMvMC4KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3MF0gSU5GTyAoaW1hZ2U6
NDE4KSBzcGF3bmluZyBkZXZpY2UgbW9kZWxzOiAvdXNyL2xpYi94ZW4vYmlu
L3FlbXUtZG0gWycvdXNyL2xpYi94ZW4vYmluL3FlbXUtZG0nLCAnLWQnLCAn
MycsICctZG9tYWluLW5hbWUnLCAndWJ1bnR1MTEnLCAnLXZpZGVvcmFtJywg
JzQnLCAnLXZuYycsICcxMjcuMC4wLjE6MCcsICctdm5jdW51c2VkJywgJy12
Y3B1cycsICcxJywgJy12Y3B1X2F2YWlsJywgJzB4MScsICctYm9vdCcsICdj
JywgJy1zZXJpYWwnLCAncHR5JywgJy1hY3BpJywgJy11c2InLCAnLXVzYmRl
dmljZScsICJbJ2hvc3Q6MTI1ZjpjOTZhJ10iLCAnLW5ldCcsICduaWMsdmxh
bj0xLG1hY2FkZHI9MDA6MTY6M2U6MDM6NjM6YTMsbW9kZWw9cnRsODEzOScs
ICctbmV0JywgJ3RhcCx2bGFuPTEsaWZuYW1lPXRhcDMuMCxicmlkZ2U9eGVu
YnIwJywgJy1NJywgJ3hlbmZ2J10KWzIwMTQtMDItMDYgMTU6MDQ6MTcgMTU3
MF0gSU5GTyAoaW1hZ2U6NDY3KSBkZXZpY2UgbW9kZWwgcGlkOiAzNjEyClsy
MDE0LTAyLTA2IDE1OjA0OjE3IDE1NzBdIElORk8gKGltYWdlOjU5MCkgd2Fp
dGluZyBmb3Igc2VudGluZWxfZmlmbwpbMjAxNC0wMi0wNiAxNTowNDoxNyAx
NTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MzQyMCkgU3RvcmluZyBWTSBk
ZXRhaWxzOiB7J29uX3hlbmRfc3RvcCc6ICdpZ25vcmUnLCAncG9vbF9uYW1l
JzogJ1Bvb2wtMCcsICdzaGFkb3dfbWVtb3J5JzogJzknLCAndXVpZCc6ICcx
OTZiMWJjYi0wYTIxLTA4OTEtNDFiZS01MDgwMTMyMjk5NTAnLCAnb25fcmVi
b290JzogJ3Jlc3RhcnQnLCAnc3RhcnRfdGltZSc6ICcxMzkxNjg2NDU3Ljcn
LCAnb25fcG93ZXJvZmYnOiAnZGVzdHJveScsICdib290bG9hZGVyX2FyZ3Mn
OiAnJywgJ29uX3hlbmRfc3RhcnQnOiAnaWdub3JlJywgJ29uX2NyYXNoJzog
J3Jlc3RhcnQnLCAneGVuZC9yZXN0YXJ0X2NvdW50JzogJzAnLCAndmNwdXMn
OiAnMScsICd2Y3B1X2F2YWlsJzogJzEnLCAnYm9vdGxvYWRlcic6ICcnLCAn
aW1hZ2UnOiAiKGh2bSAoa2VybmVsICcnKSAoc3VwZXJwYWdlcyAwKSAodmlk
ZW9yYW0gNCkgKGhwZXQgMCkgKHN0ZHZnYSAwKSAobG9hZGVyIC91c3IvbGli
L3hlbi9ib290L2h2bWxvYWRlcikgKHhlbl9wbGF0Zm9ybV9wY2kgMSkgKG9w
ZW5nbCAxKSAocnRjX3RpbWVvZmZzZXQgMCkgKHBjaSAoKSkgKGhhcCAxKSAo
bG9jYWx0aW1lIDApICh0aW1lcl9tb2RlIDEpIChwY2lfbXNpdHJhbnNsYXRl
IDEpIChvb3MgMSkgKGFwaWMgMSkgKHNkbCAwKSAodXNiZGV2aWNlIChob3N0
OjEyNWY6Yzk2YSkpIChkaXNwbGF5IDowKSAodnB0X2FsaWduIDEpIChzZXJp
YWwgcHR5KSAodm5jdW51c2VkIDEpIChib290IGMpIChwYWUgMSkgKHZpcmlk
aWFuIDApIChhY3BpIDEpICh2bmMgMSkgKG5vZ3JhcGhpYyAwKSAobm9taWdy
YXRlIDApICh1c2IgMSkgKHRzY19tb2RlIDApIChndWVzdF9vc190eXBlIGRl
ZmF1bHQpIChkZXZpY2VfbW9kZWwgL3Vzci9saWIveGVuL2Jpbi9xZW11LWRt
KSAocGNpX3Bvd2VyX21nbXQgMCkgKHhhdXRob3JpdHkgL3Jvb3QvLlhhdXRo
b3JpdHkpIChpc2EgMCkgKG5vdGVzIChTVVNQRU5EX0NBTkNFTCAxKSkpIiwg
J25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxNTowNDoxNyAxNTcw
XSBERUJVRyAoWGVuZERvbWFpbkluZm86MTc5NCkgU3RvcmluZyBkb21haW4g
ZGV0YWlsczogeydjb25zb2xlL3BvcnQnOiAnMycsICdkZXNjcmlwdGlvbic6
ICcnLCAnY29uc29sZS9saW1pdCc6ICcxMDQ4NTc2JywgJ3N0b3JlL3BvcnQn
OiAnMicsICd2bSc6ICcvdm0vMTk2YjFiY2ItMGEyMS0wODkxLTQxYmUtNTA4
MDEzMjI5OTUwJywgJ2RvbWlkJzogJzMnLCAnaW1hZ2Uvc3VzcGVuZC1jYW5j
ZWwnOiAnMScsICdjcHUvMC9hdmFpbGFiaWxpdHknOiAnb25saW5lJywgJ21l
bW9yeS90YXJnZXQnOiAnMTA0ODU3NicsICdjb250cm9sL3BsYXRmb3JtLWZl
YXR1cmUtbXVsdGlwcm9jZXNzb3Itc3VzcGVuZCc6ICcxJywgJ3N0b3JlL3Jp
bmctcmVmJzogJzEwNDQ0NzYnLCAnY29uc29sZS90eXBlJzogJ2lvZW11Jywg
J25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxNTowNDoxNyAxNTcw
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J3N0YXRlJzogJzEnLCAnYmFja2VuZC1pZCc6ICcwJywgJ2JhY2tl
bmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvY29uc29sZS8zLzAnfSB0
byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL2NvbnNvbGUvMC4KWzIwMTQtMDIt
MDYgMTU6MDQ6MTcgMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6OTcpIERl
dkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1bnR1MTEnLCAn
ZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS9jb25zb2xlLzAn
LCAndXVpZCc6ICc4NjA4YWFmMS1jYzEzLThkNzItMWU1NS01MGYyMDE3MmMw
ZWUnLCAnZnJvbnRlbmQtaWQnOiAnMycsICdzdGF0ZSc6ICcxJywgJ2xvY2F0
aW9uJzogJzMnLCAnb25saW5lJzogJzEnLCAncHJvdG9jb2wnOiAndnQxMDAn
fSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xlLzMvMC4KWzIw
MTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKFhlbmREb21haW5JbmZv
OjE4ODEpIFhlbmREb21haW5JbmZvLmhhbmRsZVNodXRkb3duV2F0Y2gKWzIw
MTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6
MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHRhcDIuClsyMDE0LTAyLTA2IDE1
OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGlu
ZyBmb3IgZGV2aWNlcyB2aWYuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBd
IERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgMC4KWzIw
MTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6
NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFpbi8wL2Jh
Y2tlbmQvdmlmLzMvMC9ob3RwbHVnLXN0YXR1cy4KWzIwMTQtMDItMDYgMTU6
MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6NjQyKSBob3RwbHVn
U3RhdHVzQ2FsbGJhY2sgMS4KWzIwMTQtMDItMDYgMTU6MDQ6MTggMTU3MF0g
REVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2Vz
IHZrYmQuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZD
b250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBpb3BvcnRzLgpb
MjAxNC0wMi0wNiAxNTowNDoxOCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxl
cjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdGFwLgpbMjAxNC0wMi0wNiAx
NTowNDoxOCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRp
bmcgZm9yIGRldmljZXMgdmlmMi4KWzIwMTQtMDItMDYgMTU6MDQ6MTggMTU3
MF0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZp
Y2VzIGNvbnNvbGUuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVH
IChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgMC4KWzIwMTQtMDIt
MDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBX
YWl0aW5nIGZvciBkZXZpY2VzIHZzY3NpLgpbMjAxNC0wMi0wNiAxNTowNDox
OCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9y
IGRldmljZXMgdmJkLgpbMjAxNC0wMi0wNiAxNTowNDoxOCAxNTcwXSBERUJV
RyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9yIDc2OC4KWzIwMTQt
MDItMDYgMTU6MDQ6MTggMTU3MF0gREVCVUcgKERldkNvbnRyb2xsZXI6NjI4
KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFpbi8wL2JhY2tl
bmQvdmJkLzMvNzY4L2hvdHBsdWctc3RhdHVzLgpbMjAxNC0wMi0wNiAxNTow
NDoxOCAxNTcwXSBERUJVRyAoRGV2Q29udHJvbGxlcjo2NDIpIGhvdHBsdWdT
dGF0dXNDYWxsYmFjayAxLgpbMjAxNC0wMi0wNiAxNTowNDoxOCAxNTcwXSBE
RUJVRyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9yIDU2MzIuClsy
MDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVy
OjYyOCkgaG90cGx1Z1N0YXR1c0NhbGxiYWNrIC9sb2NhbC9kb21haW4vMC9i
YWNrZW5kL3ZiZC8zLzU2MzIvaG90cGx1Zy1zdGF0dXMuClsyMDE0LTAyLTA2
IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjY0MikgaG90
cGx1Z1N0YXR1c0NhbGxiYWNrIDEuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1
NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2
aWNlcyBpcnEuClsyMDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChE
ZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2ZmIuClsy
MDE0LTAyLTA2IDE1OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVy
OjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBwY2kuClsyMDE0LTAyLTA2IDE1
OjA0OjE4IDE1NzBdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGlu
ZyBmb3IgZGV2aWNlcyB2dXNiLgpbMjAxNC0wMi0wNiAxNTowNDoxOCAxNTcw
XSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9yIGRldmlj
ZXMgdnRwbS4KWzIwMTQtMDItMDYgMTU6MDQ6MTggMTU3MF0gSU5GTyAoWGVu
ZERvbWFpbjoxMjI1KSBEb21haW4gdWJ1bnR1MTEgKDMpIHVucGF1c2VkLgpb
MjAxNC0wMi0wNiAxNTowNTozMyAxNTcwXSBERUJVRyAoWGVuZENoZWNrcG9p
bnQ6MTI0KSBbeGNfc2F2ZV06IC91c3IvbGliL3hlbi9iaW4veGNfc2F2ZSAy
NiAzIDAgMCA1ClsyMDE0LTAyLTA2IDE1OjA1OjMzIDE1NzBdIElORk8gKFhl
bmRDaGVja3BvaW50OjQyMykgeGNfc2F2ZTogZmFpbGVkIHRvIGdldCB0aGUg
c3VzcGVuZCBldnRjaG4gcG9ydApbMjAxNC0wMi0wNiAxNTowNTozMyAxNTcw
XSBJTkZPIChYZW5kQ2hlY2twb2ludDo0MjMpIApbMjAxNC0wMi0wNiAxNTow
NzoxOSAxNTcwXSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6Mzk0KSBzdXNwZW5k
ClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIERFQlVHIChYZW5kQ2hlY2tw
b2ludDoxMjcpIEluIHNhdmVJbnB1dEhhbmRsZXIgc3VzcGVuZApbMjAxNC0w
Mi0wNiAxNTowNzoxOSAxNTcwXSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6MTI5
KSBTdXNwZW5kaW5nIDMgLi4uClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBd
IERFQlVHIChYZW5kRG9tYWluSW5mbzo1MjQpIFhlbmREb21haW5JbmZvLnNo
dXRkb3duKHN1c3BlbmQpClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIERF
QlVHIChYZW5kRG9tYWluSW5mbzoxODgxKSBYZW5kRG9tYWluSW5mby5oYW5k
bGVTaHV0ZG93bldhdGNoClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIElO
Rk8gKFhlbmREb21haW5JbmZvOjU0MSkgSFZNIHNhdmU6cmVtb3RlIHNodXRk
b3duIGRvbSAzIQpbMjAxNC0wMi0wNiAxNTowNzoxOSAxNTcwXSBJTkZPIChY
ZW5kQ2hlY2twb2ludDoxMzUpIERvbWFpbiAzIHN1c3BlbmRlZC4KWzIwMTQt
MDItMDYgMTU6MDc6MTkgMTU3MF0gSU5GTyAoWGVuZERvbWFpbkluZm86MjA3
OCkgRG9tYWluIGhhcyBzaHV0ZG93bjogbmFtZT1taWdyYXRpbmctdWJ1bnR1
MTEgaWQ9MyByZWFzb249c3VzcGVuZC4KWzIwMTQtMDItMDYgMTU6MDc6MTkg
MTU3MF0gSU5GTyAoaW1hZ2U6NTM4KSBzaWduYWxEZXZpY2VNb2RlbDpyZXN0
b3JlIGRtIHN0YXRlIHRvIHJ1bm5pbmcKWzIwMTQtMDItMDYgMTU6MDc6MTkg
MTU3MF0gREVCVUcgKFhlbmRDaGVja3BvaW50OjE0NCkgV3JpdHRlbiBkb25l
ClsyMDE0LTAyLTA2IDE1OjA3OjE5IDE1NzBdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzozMDcxKSBYZW5kRG9tYWluSW5mby5kZXN0cm95OiBkb21pZD0zClsy
MDE0LTAyLTA2IDE1OjA3OjIwIDE1NzBdIERFQlVHIChYZW5kRG9tYWluSW5m
bzoyNDAxKSBEZXN0cm95aW5nIGRldmljZSBtb2RlbApbMjAxNC0wMi0wNiAx
NTowNzoyMCAxNTcwXSBJTkZPIChpbWFnZTo2MTUpIG1pZ3JhdGluZy11YnVu
dHUxMSBkZXZpY2UgbW9kZWwgdGVybWluYXRlZApbMjAxNC0wMi0wNiAxNTow
NzoyMCAxNTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQwOCkgUmVsZWFz
aW5nIGRldmljZXMKWzIwMTQtMDItMDYgMTU6MDc6MjAgMTU3MF0gREVCVUcg
KFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZpZi8wClsyMDE0LTAy
LTA2IDE1OjA3OjIwIDE1NzBdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxMjc2
KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZpY2VDbGFzcyA9
IHZpZiwgZGV2aWNlID0gdmlmLzAKWzIwMTQtMDItMDYgMTU6MDc6MjAgMTU3
MF0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIGNvbnNv
bGUvMApbMjAxNC0wMi0wNiAxNTowNzoyMCAxNTcwXSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTog
ZGV2aWNlQ2xhc3MgPSBjb25zb2xlLCBkZXZpY2UgPSBjb25zb2xlLzAKWzIw
MTQtMDItMDYgMTU6MDc6MjAgMTU3MF0gREVCVUcgKFhlbmREb21haW5JbmZv
OjI0MTQpIFJlbW92aW5nIHZiZC83NjgKWzIwMTQtMDItMDYgMTU6MDc6MjAg
MTU3MF0gREVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhlbmREb21haW5J
bmZvLmRlc3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gdmJkLCBkZXZpY2Ug
PSB2YmQvNzY4ClsyMDE0LTAyLTA2IDE1OjA3OjIwIDE1NzBdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2YmQvNTYzMgpbMjAxNC0w
Mi0wNiAxNTowNzoyMCAxNTcwXSBERUJVRyAoWGVuZERvbWFpbkluZm86MTI3
NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2aWNlQ2xhc3Mg
PSB2YmQsIGRldmljZSA9IHZiZC81NjMyClsyMDE0LTAyLTA2IDE1OjA3OjIw
IDE1NzBdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2
ZmIvMApbMjAxNC0wMi0wNiAxNTowNzoyMCAxNTcwXSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTog
ZGV2aWNlQ2xhc3MgPSB2ZmIsIGRldmljZSA9IHZmYi8wCg==

--1665047788-1877962208-1391689706=:75705
Content-Type: application/octet-stream; name="xend(XTL_DEBUG).log"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="xend(XTL_DEBUG).log"

WzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKFhlbmREb21haW5J
bmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25hbWUn
LCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5kX3N0
YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUnXSwg
Wyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0nLCBb
J2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBbJ3Nl
cmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBbJ2Jv
b3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywgW11d
LCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScsICc6
MC4wJ10sIFsnZmRhJywgJyddLCBbJ2ZkYicsICcnXSwgWydndWVzdF9vc190
eXBlJywgJ2RlZmF1bHQnXSwgWydoYXAnLCAxXSwgWydocGV0JywgMF0sIFsn
aXNhJywgMF0sIFsna2V5bWFwJywgJyddLCBbJ2xvY2FsdGltZScsIDBdLCBb
J25vZ3JhcGhpYycsIDBdLCBbJ29wZW5nbCcsIDFdLCBbJ29vcycsIDFdLCBb
J3BhZScsIDFdLCBbJ3BjaScsIFtdXSwgWydwY2lfbXNpdHJhbnNsYXRlJywg
MV0sIFsncGNpX3Bvd2VyX21nbXQnLCAwXSwgWydydGNfdGltZW9mZnNldCcs
IDBdLCBbJ3NkbCcsIDBdLCBbJ3NvdW5kaHcnLCAnJ10sIFsnc3RkdmdhJywg
MF0sIFsndGltZXJfbW9kZScsIDFdLCBbJ3VzYicsIDFdLCBbJ3VzYmRldmlj
ZScsIFsnaG9zdDoxMjVmOmM5NmEnXV0sIFsndmNwdXMnLCAxXSwgWyd2bmMn
LCAxXSwgWyd2bmN1bnVzZWQnLCAxXSwgWyd2aXJpZGlhbicsIDBdLCBbJ3Zw
dF9hbGlnbicsIDFdLCBbJ3hhdXRob3JpdHknLCAnL3Jvb3QvLlhhdXRob3Jp
dHknXSwgWyd4ZW5fcGxhdGZvcm1fcGNpJywgMV0sIFsnbWVtb3J5X3NoYXJp
bmcnLCAwXSwgWyd2bmNwYXNzd2QnLCAnWFhYWFhYWFgnXSwgWyd0c2NfbW9k
ZScsIDBdLCBbJ25vbWlncmF0ZScsIDBdXV0sIFsnczNfaW50ZWdyaXR5Jywg
MV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3VuYW1lJywgJ2ZpbGU6L3Zhci9s
aWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1MTEuaW1nJ10sIFsnZGV2JywgJ2hk
YSddLCBbJ21vZGUnLCAndyddXV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3Vu
YW1lJywgJ3BoeTovZGV2L2Nkcm9tJ10sIFsnZGV2JywgJ2hkYzpjZHJvbSdd
LCBbJ21vZGUnLCAnciddXV0sIFsnZGV2aWNlJywgWyd2aWYnLCBbJ2JyaWRn
ZScsICd4ZW5icjAnXSwgWyd0eXBlJywgJ2lvZW11J11dXV0pClsyMDE0LTAy
LTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDk4
KSBYZW5kRG9tYWluSW5mby5jb25zdHJ1Y3REb21haW4KWzIwMTQtMDItMDYg
MTM6Mjc6NTggMTUyMl0gREVCVUcgKGJhbGxvb246MTg3KSBCYWxsb29uOiAy
NzQ4MzA0IEtpQiBmcmVlOyBuZWVkIDE2Mzg0OyBkb25lLgpbMjAxNC0wMi0w
NiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoWGVuZERvbWFpbjo0NzYpIEFkZGlu
ZyBEb21haW46IDMKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcg
KFhlbmREb21haW5JbmZvOjI4MzYpIFhlbmREb21haW5JbmZvLmluaXREb21h
aW46IDMgMjU2ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChp
bWFnZTozMzkpIE5vIFZOQyBwYXNzd2QgY29uZmlndXJlZCBmb3IgdmZiIGFj
Y2VzcwpbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBib290LCB2YWw6IGMKWzIwMTQtMDItMDYgMTM6Mjc6NTgg
MTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZmRhLCB2YWw6IE5vbmUK
WzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdlOjg5MSkg
YXJnczogZmRiLCB2YWw6IE5vbmUKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogc291bmRodywgdmFsOiBOb25l
ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChpbWFnZTo4OTEp
IGFyZ3M6IGxvY2FsdGltZSwgdmFsOiAwClsyMDE0LTAyLTA2IDEzOjI3OjU4
IDE1MjJdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNlcmlhbCwgdmFsOiBb
J3B0eSddClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChpbWFn
ZTo4OTEpIGFyZ3M6IHN0ZC12Z2EsIHZhbDogMApbMjAxNC0wMi0wNiAxMzoy
Nzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBpc2EsIHZhbDog
MApbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkx
KSBhcmdzOiBhY3BpLCB2YWw6IDEKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogdXNiLCB2YWw6IDEKWzIwMTQt
MDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczog
dXNiZGV2aWNlLCB2YWw6IFsnaG9zdDoxMjVmOmM5NmEnXQpbMjAxNC0wMi0w
NiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBnZnhf
cGFzc3RocnUsIHZhbDogTm9uZQpbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIy
XSBJTkZPIChpbWFnZTo4MjIpIE5lZWQgdG8gY3JlYXRlIHBsYXRmb3JtIGRl
dmljZS5bZG9taWQ6M10KWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVC
VUcgKFhlbmREb21haW5JbmZvOjI4NjMpIF9pbml0RG9tYWluOnNoYWRvd19t
ZW1vcnk9MHgwLCBtZW1vcnlfc3RhdGljX21heD0weDQwMDAwMDAwLCBtZW1v
cnlfc3RhdGljX21pbj0weDAuClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJd
IElORk8gKGltYWdlOjE4MikgYnVpbGREb21haW4gb3M9aHZtIGRvbT0zIHZj
cHVzPTEKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdl
Ojk0OSkgZG9taWQgICAgICAgICAgPSAzClsyMDE0LTAyLTA2IDEzOjI3OjU4
IDE1MjJdIERFQlVHIChpbWFnZTo5NTApIGltYWdlICAgICAgICAgID0gL3Vz
ci9saWIveGVuL2Jvb3QvaHZtbG9hZGVyClsyMDE0LTAyLTA2IDEzOjI3OjU4
IDE1MjJdIERFQlVHIChpbWFnZTo5NTEpIHN0b3JlX2V2dGNobiAgID0gMgpb
MjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBERUJVRyAoaW1hZ2U6OTUyKSBt
ZW1zaXplICAgICAgICA9IDEwMjQKWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gREVCVUcgKGltYWdlOjk1MykgdGFyZ2V0ICAgICAgICAgPSAxMDI0Clsy
MDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChpbWFnZTo5NTQpIHZj
cHVzICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBE
RUJVRyAoaW1hZ2U6OTU1KSB2Y3B1X2F2YWlsICAgICA9IDEKWzIwMTQtMDIt
MDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKGltYWdlOjk1NikgYWNwaSAgICAg
ICAgICAgPSAxClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChp
bWFnZTo5NTcpIGFwaWMgICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAxMzoy
Nzo1OCAxNTIyXSBJTkZPIChYZW5kRG9tYWluSW5mbzoyMzU3KSBjcmVhdGVE
ZXZpY2U6IHZmYiA6IHsndm5jdW51c2VkJzogMSwgJ290aGVyX2NvbmZpZyc6
IHsndm5jdW51c2VkJzogMSwgJ3ZuYyc6ICcxJ30sICd2bmMnOiAnMScsICd1
dWlkJzogJ2RhOGEzOWVjLTU3ZWItMGVkZS1iYzBiLWMxZDY3ZWFhMDkyNid9
ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChEZXZDb250cm9s
bGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUnOiAnMScs
ICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWlu
LzAvYmFja2VuZC92ZmIvMy8wJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2Rldmlj
ZS92ZmIvMC4KWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKERl
dkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeyd2bmN1
bnVzZWQnOiAnMScsICdkb21haW4nOiAndWJ1bnR1MTEnLCAnZnJvbnRlbmQn
OiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92ZmIvMCcsICd1dWlkJzogJ2Rh
OGEzOWVjLTU3ZWItMGVkZS1iYzBiLWMxZDY3ZWFhMDkyNicsICdmcm9udGVu
ZC1pZCc6ICczJywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAndm5j
JzogJzEnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92ZmIvMy8wLgpb
MjAxNC0wMi0wNiAxMzoyNzo1OCAxNTIyXSBJTkZPIChYZW5kRG9tYWluSW5m
bzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZiZCA6IHsndXVpZCc6ICc0ZDQzNTA4
Ni01ZmU0LWZiNmQtYjViOC1mMDZiODE2NzgzYTEnLCAnYm9vdGFibGUnOiAx
LCAnZHJpdmVyJzogJ3BhcmF2aXJ0dWFsaXNlZCcsICdkZXYnOiAnaGRhJywg
J3VuYW1lJzogJ2ZpbGU6L3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1
MTEuaW1nJywgJ21vZGUnOiAndyd9ClsyMDE0LTAyLTA2IDEzOjI3OjU4IDE1
MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3
cml0aW5nIHsnYmFja2VuZC1pZCc6ICcwJywgJ3ZpcnR1YWwtZGV2aWNlJzog
Jzc2OCcsICdkZXZpY2UtdHlwZSc6ICdkaXNrJywgJ3N0YXRlJzogJzEnLCAn
YmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy83Njgn
fSB0byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZiZC83NjguClsyMDE0LTAy
LTA2IDEzOjI3OjU4IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk3KSBE
ZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTExJywg
J2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vMy9kZXZpY2UvdmJkLzc2OCcs
ICd1dWlkJzogJzRkNDM1MDg2LTVmZTQtZmI2ZC1iNWI4LWYwNmI4MTY3ODNh
MScsICdib290YWJsZSc6ICcxJywgJ2Rldic6ICdoZGEnLCAnc3RhdGUnOiAn
MScsICdwYXJhbXMnOiAnL3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1
MTEuaW1nJywgJ21vZGUnOiAndycsICdvbmxpbmUnOiAnMScsICdmcm9udGVu
ZC1pZCc6ICczJywgJ3R5cGUnOiAnZmlsZSd9IHRvIC9sb2NhbC9kb21haW4v
MC9iYWNrZW5kL3ZiZC8zLzc2OC4KWzIwMTQtMDItMDYgMTM6Mjc6NTggMTUy
Ml0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2aWNlOiB2
YmQgOiB7J3V1aWQnOiAnNmJkMzI0NWEtMzg1ZS1jYWIxLTViY2EtYTU2MjFk
ODQ2NmZmJywgJ2Jvb3RhYmxlJzogMCwgJ2RyaXZlcic6ICdwYXJhdmlydHVh
bGlzZWQnLCAnZGV2JzogJ2hkYzpjZHJvbScsICd1bmFtZSc6ICdwaHk6L2Rl
di9jZHJvbScsICdtb2RlJzogJ3InfQpbMjAxNC0wMi0wNiAxMzoyNzo1OCAx
NTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxlcjog
d3JpdGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmljZSc6
ICc1NjMyJywgJ2RldmljZS10eXBlJzogJ2Nkcm9tJywgJ3N0YXRlJzogJzEn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvMy81
NjMyJ30gdG8gL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQvNTYzMi4KWzIw
MTQtMDItMDYgMTM6Mjc6NTggMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6
OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1bnR1
MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS92YmQv
NTYzMicsICd1dWlkJzogJzZiZDMyNDVhLTM4NWUtY2FiMS01YmNhLWE1NjIx
ZDg0NjZmZicsICdib290YWJsZSc6ICcwJywgJ2Rldic6ICdoZGMnLCAnc3Rh
dGUnOiAnMScsICdwYXJhbXMnOiAnL2Rldi9jZHJvbScsICdtb2RlJzogJ3In
LCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQtaWQnOiAnMycsICd0eXBlJzog
J3BoeSd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC8zLzU2MzIu
ClsyMDE0LTAyLTA2IDEzOjI3OjU5IDE1MjJdIElORk8gKFhlbmREb21haW5J
bmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmlmIDogeydicmlkZ2UnOiAneGVu
YnIwJywgJ21hYyc6ICcwMDoxNjozZTowMzozZTpjMicsICd0eXBlJzogJ2lv
ZW11JywgJ3V1aWQnOiAnN2U0ZDY5MmUtZjg0Ni05NjRmLWEzNGUtYmZjY2Jl
OWQ0YmNkJ30KWzIwMTQtMDItMDYgMTM6Mjc6NTkgMTUyMl0gREVCVUcgKERl
dkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydzdGF0
ZSc6ICcxJywgJ2JhY2tlbmQtaWQnOiAnMCcsICdiYWNrZW5kJzogJy9sb2Nh
bC9kb21haW4vMC9iYWNrZW5kL3ZpZi8zLzAnfSB0byAvbG9jYWwvZG9tYWlu
LzMvZGV2aWNlL3ZpZi8wLgpbMjAxNC0wMi0wNiAxMzoyNzo1OSAxNTIyXSBE
RUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2Q29udHJvbGxlcjogd3JpdGlu
ZyB7J2JyaWRnZSc6ICd4ZW5icjAnLCAnZG9tYWluJzogJ3VidW50dTExJywg
J2hhbmRsZSc6ICcwJywgJ3V1aWQnOiAnN2U0ZDY5MmUtZjg0Ni05NjRmLWEz
NGUtYmZjY2JlOWQ0YmNkJywgJ3NjcmlwdCc6ICcvZXRjL3hlbi9zY3JpcHRz
L3ZpZi1icmlkZ2UnLCAnbWFjJzogJzAwOjE2OjNlOjAzOjNlOmMyJywgJ2Zy
b250ZW5kLWlkJzogJzMnLCAnc3RhdGUnOiAnMScsICdvbmxpbmUnOiAnMScs
ICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzMvZGV2aWNlL3ZpZi8wJywg
J3R5cGUnOiAnaW9lbXUnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92
aWYvMy8wLgpbMjAxNC0wMi0wNiAxMzoyNzo1OSAxNTIyXSBJTkZPIChpbWFn
ZTo0MTgpIHNwYXduaW5nIGRldmljZSBtb2RlbHM6IC91c3IvbGliL3hlbi9i
aW4vcWVtdS1kbSBbJy91c3IvbGliL3hlbi9iaW4vcWVtdS1kbScsICctZCcs
ICczJywgJy1kb21haW4tbmFtZScsICd1YnVudHUxMScsICctdmlkZW9yYW0n
LCAnNCcsICctdm5jJywgJzEyNy4wLjAuMTowJywgJy12bmN1bnVzZWQnLCAn
LXZjcHVzJywgJzEnLCAnLXZjcHVfYXZhaWwnLCAnMHgxJywgJy1ib290Jywg
J2MnLCAnLXNlcmlhbCcsICdwdHknLCAnLWFjcGknLCAnLXVzYicsICctdXNi
ZGV2aWNlJywgIlsnaG9zdDoxMjVmOmM5NmEnXSIsICctbmV0JywgJ25pYyx2
bGFuPTEsbWFjYWRkcj0wMDoxNjozZTowMzozZTpjMixtb2RlbD1ydGw4MTM5
JywgJy1uZXQnLCAndGFwLHZsYW49MSxpZm5hbWU9dGFwMy4wLGJyaWRnZT14
ZW5icjAnLCAnLU0nLCAneGVuZnYnXQpbMjAxNC0wMi0wNiAxMzoyNzo1OSAx
NTIyXSBJTkZPIChpbWFnZTo0NjcpIGRldmljZSBtb2RlbCBwaWQ6IDExMTI0
ClsyMDE0LTAyLTA2IDEzOjI3OjU5IDE1MjJdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzozNDIwKSBTdG9yaW5nIFZNIGRldGFpbHM6IHsnb25feGVuZF9zdG9w
JzogJ2lnbm9yZScsICdwb29sX25hbWUnOiAnUG9vbC0wJywgJ3NoYWRvd19t
ZW1vcnknOiAnOScsICd1dWlkJzogJ2IxYzVmMWM4LTkwOTctZjNiNy05NjY4
LTg0NzRlZGNjMzM4ZScsICdvbl9yZWJvb3QnOiAncmVzdGFydCcsICdzdGFy
dF90aW1lJzogJzEzOTE2ODA2NzkuNDInLCAnb25fcG93ZXJvZmYnOiAnZGVz
dHJveScsICdib290bG9hZGVyX2FyZ3MnOiAnJywgJ29uX3hlbmRfc3RhcnQn
OiAnaWdub3JlJywgJ29uX2NyYXNoJzogJ3Jlc3RhcnQnLCAneGVuZC9yZXN0
YXJ0X2NvdW50JzogJzAnLCAndmNwdXMnOiAnMScsICd2Y3B1X2F2YWlsJzog
JzEnLCAnYm9vdGxvYWRlcic6ICcnLCAnaW1hZ2UnOiAiKGh2bSAoa2VybmVs
ICcnKSAoc3VwZXJwYWdlcyAwKSAodmlkZW9yYW0gNCkgKGhwZXQgMCkgKHN0
ZHZnYSAwKSAobG9hZGVyIC91c3IvbGliL3hlbi9ib290L2h2bWxvYWRlcikg
KHhlbl9wbGF0Zm9ybV9wY2kgMSkgKG9wZW5nbCAxKSAocnRjX3RpbWVvZmZz
ZXQgMCkgKHBjaSAoKSkgKGhhcCAxKSAobG9jYWx0aW1lIDApICh0aW1lcl9t
b2RlIDEpIChwY2lfbXNpdHJhbnNsYXRlIDEpIChvb3MgMSkgKGFwaWMgMSkg
KHNkbCAwKSAodXNiZGV2aWNlIChob3N0OjEyNWY6Yzk2YSkpIChkaXNwbGF5
IDowLjApICh2cHRfYWxpZ24gMSkgKHNlcmlhbCBwdHkpICh2bmN1bnVzZWQg
MSkgKGJvb3QgYykgKHBhZSAxKSAodmlyaWRpYW4gMCkgKGFjcGkgMSkgKHZu
YyAxKSAobm9ncmFwaGljIDApIChub21pZ3JhdGUgMCkgKHVzYiAxKSAodHNj
X21vZGUgMCkgKGd1ZXN0X29zX3R5cGUgZGVmYXVsdCkgKGRldmljZV9tb2Rl
bCAvdXNyL2xpYi94ZW4vYmluL3FlbXUtZG0pIChwY2lfcG93ZXJfbWdtdCAw
KSAoeGF1dGhvcml0eSAvcm9vdC8uWGF1dGhvcml0eSkgKGlzYSAwKSAobm90
ZXMgKFNVU1BFTkRfQ0FOQ0VMIDEpKSkiLCAnbmFtZSc6ICd1YnVudHUxMSd9
ClsyMDE0LTAyLTA2IDEzOjI3OjU5IDE1MjJdIElORk8gKGltYWdlOjU5MCkg
d2FpdGluZyBmb3Igc2VudGluZWxfZmlmbwpbMjAxNC0wMi0wNiAxMzoyNzo1
OSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MTc5NCkgU3RvcmluZyBk
b21haW4gZGV0YWlsczogeydjb25zb2xlL3BvcnQnOiAnMycsICdkZXNjcmlw
dGlvbic6ICcnLCAnY29uc29sZS9saW1pdCc6ICcxMDQ4NTc2JywgJ3N0b3Jl
L3BvcnQnOiAnMicsICd2bSc6ICcvdm0vYjFjNWYxYzgtOTA5Ny1mM2I3LTk2
NjgtODQ3NGVkY2MzMzhlJywgJ2RvbWlkJzogJzMnLCAnaW1hZ2Uvc3VzcGVu
ZC1jYW5jZWwnOiAnMScsICdjcHUvMC9hdmFpbGFiaWxpdHknOiAnb25saW5l
JywgJ21lbW9yeS90YXJnZXQnOiAnMTA0ODU3NicsICdjb250cm9sL3BsYXRm
b3JtLWZlYXR1cmUtbXVsdGlwcm9jZXNzb3Itc3VzcGVuZCc6ICcxJywgJ3N0
b3JlL3JpbmctcmVmJzogJzEwNDQ0NzYnLCAnY29uc29sZS90eXBlJzogJ2lv
ZW11JywgJ25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxMzoyNzo1
OSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxl
cjogd3JpdGluZyB7J3N0YXRlJzogJzEnLCAnYmFja2VuZC1pZCc6ICcwJywg
J2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvY29uc29sZS8z
LzAnfSB0byAvbG9jYWwvZG9tYWluLzMvZGV2aWNlL2NvbnNvbGUvMC4KWzIw
MTQtMDItMDYgMTM6Mjc6NTkgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6
OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1bnR1
MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi8zL2RldmljZS9jb25z
b2xlLzAnLCAndXVpZCc6ICdhZmEwZWE5Zi0xNmZjLWJjMmItYWUxNy05Yjcy
YTRmN2MyNDYnLCAnZnJvbnRlbmQtaWQnOiAnMycsICdzdGF0ZSc6ICcxJywg
J2xvY2F0aW9uJzogJzMnLCAnb25saW5lJzogJzEnLCAncHJvdG9jb2wnOiAn
dnQxMDAnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xlLzMv
MC4KWzIwMTQtMDItMDYgMTM6Mjc6NTkgMTUyMl0gREVCVUcgKERldkNvbnRy
b2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHRhcDIuClsyMDE0LTAy
LTA2IDEzOjI3OjU5IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkg
V2FpdGluZyBmb3IgZGV2aWNlcyB2aWYuClsyMDE0LTAyLTA2IDEzOjI4OjAw
IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3Ig
MC4KWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21h
aW5JbmZvOjE4ODEpIFhlbmREb21haW5JbmZvLmhhbmRsZVNodXRkb3duV2F0
Y2gKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKERldkNvbnRy
b2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFp
bi8wL2JhY2tlbmQvdmlmLzMvMC9ob3RwbHVnLXN0YXR1cy4KWzIwMTQtMDIt
MDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6NjQyKSBo
b3RwbHVnU3RhdHVzQ2FsbGJhY2sgMi4KWzIwMTQtMDItMDYgMTM6Mjg6MDAg
MTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjMwNzEpIFhlbmREb21haW5J
bmZvLmRlc3Ryb3k6IGRvbWlkPTMKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUy
Ml0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MDEpIERlc3Ryb3lpbmcgZGV2
aWNlIG1vZGVsClsyMDE0LTAyLTA2IDEzOjI4OjAwIDE1MjJdIElORk8gKGlt
YWdlOjYxNSkgdWJ1bnR1MTEgZGV2aWNlIG1vZGVsIHRlcm1pbmF0ZWQKWzIw
MTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZv
OjI0MDgpIFJlbGVhc2luZyBkZXZpY2VzClsyMDE0LTAyLTA2IDEzOjI4OjAw
IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2
aWYvMApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERv
bWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTog
ZGV2aWNlQ2xhc3MgPSB2aWYsIGRldmljZSA9IHZpZi8wClsyMDE0LTAyLTA2
IDEzOjI4OjAwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBS
ZW1vdmluZyBjb25zb2xlLzAKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0g
REVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRl
c3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gY29uc29sZSwgZGV2aWNlID0g
Y29uc29sZS8wClsyMDE0LTAyLTA2IDEzOjI4OjAwIDE1MjJdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2YmQvNzY4ClsyMDE0LTAy
LTA2IDEzOjI4OjAwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxMjc2
KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZpY2VDbGFzcyA9
IHZiZCwgZGV2aWNlID0gdmJkLzc2OApbMjAxNC0wMi0wNiAxMzoyODowMCAx
NTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmJk
LzU2MzIKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6
IGRldmljZUNsYXNzID0gdmJkLCBkZXZpY2UgPSB2YmQvNTYzMgpbMjAxNC0w
Mi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQx
NCkgUmVtb3ZpbmcgdmZiLzAKWzIwMTQtMDItMDYgMTM6Mjg6MDAgMTUyMl0g
REVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhlbmREb21haW5JbmZvLmRl
c3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gdmZiLCBkZXZpY2UgPSB2ZmIv
MApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERvbWFp
bkluZm86MjQwNikgTm8gZGV2aWNlIG1vZGVsClsyMDE0LTAyLTA2IDEzOjI4
OjAwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDA4KSBSZWxlYXNp
bmcgZGV2aWNlcwpbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAo
WGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmlmLzAKWzIwMTQtMDIt
MDYgMTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYp
IFhlbmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0g
dmlmLCBkZXZpY2UgPSB2aWYvMApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIy
XSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQxNCkgUmVtb3ZpbmcgdmJkLzc2
OApbMjAxNC0wMi0wNiAxMzoyODowMCAxNTIyXSBERUJVRyAoWGVuZERvbWFp
bkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2
aWNlQ2xhc3MgPSB2YmQsIGRldmljZSA9IHZiZC83NjgKWzIwMTQtMDItMDYg
MTM6Mjg6MDAgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJl
bW92aW5nIHZiZC81NjMyClsyMDE0LTAyLTA2IDEzOjI4OjAwIDE1MjJdIERF
QlVHIChYZW5kRG9tYWluSW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0
cm95RGV2aWNlOiBkZXZpY2VDbGFzcyA9IHZiZCwgZGV2aWNlID0gdmJkLzU2
MzIKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKFhlbmREb21h
aW5JbmZvOjEwMykgWGVuZERvbWFpbkluZm8uY3JlYXRlKFsndm0nLCBbJ25h
bWUnLCAndWJ1bnR1MTEnXSwgWydtZW1vcnknLCAxMDI0XSwgWydvbl94ZW5k
X3N0YXJ0JywgJ2lnbm9yZSddLCBbJ29uX3hlbmRfc3RvcCcsICdpZ25vcmUn
XSwgWyd2Y3B1cycsIDFdLCBbJ29vcycsIDFdLCBbJ2ltYWdlJywgWydodm0n
LCBbJ2tlcm5lbCcsICdodm1sb2FkZXInXSwgWyd2aWRlb3JhbScsIDRdLCBb
J3NlcmlhbCcsICdwdHknXSwgWydhY3BpJywgMV0sIFsnYXBpYycsIDFdLCBb
J2Jvb3QnLCAnYyddLCBbJ2NwdWlkJywgW11dLCBbJ2NwdWlkX2NoZWNrJywg
W11dLCBbJ2RldmljZV9tb2RlbCcsICdxZW11LWRtJ10sIFsnZGlzcGxheScs
ICc6MC4wJ10sIFsnZmRhJywgJyddLCBbJ2ZkYicsICcnXSwgWydndWVzdF9v
c190eXBlJywgJ2RlZmF1bHQnXSwgWydoYXAnLCAxXSwgWydocGV0JywgMF0s
IFsnaXNhJywgMF0sIFsna2V5bWFwJywgJyddLCBbJ2xvY2FsdGltZScsIDBd
LCBbJ25vZ3JhcGhpYycsIDBdLCBbJ29wZW5nbCcsIDFdLCBbJ29vcycsIDFd
LCBbJ3BhZScsIDFdLCBbJ3BjaScsIFtdXSwgWydwY2lfbXNpdHJhbnNsYXRl
JywgMV0sIFsncGNpX3Bvd2VyX21nbXQnLCAwXSwgWydydGNfdGltZW9mZnNl
dCcsIDBdLCBbJ3NkbCcsIDBdLCBbJ3NvdW5kaHcnLCAnJ10sIFsnc3Rkdmdh
JywgMF0sIFsndGltZXJfbW9kZScsIDFdLCBbJ3VzYicsIDFdLCBbJ3VzYmRl
dmljZScsIFsnaG9zdDoxMjVmOmM5NmEnXV0sIFsndmNwdXMnLCAxXSwgWyd2
bmMnLCAxXSwgWyd2bmN1bnVzZWQnLCAxXSwgWyd2aXJpZGlhbicsIDBdLCBb
J3ZwdF9hbGlnbicsIDFdLCBbJ3hhdXRob3JpdHknLCAnL3Jvb3QvLlhhdXRo
b3JpdHknXSwgWyd4ZW5fcGxhdGZvcm1fcGNpJywgMV0sIFsnbWVtb3J5X3No
YXJpbmcnLCAwXSwgWyd2bmNwYXNzd2QnLCAnWFhYWFhYWFgnXSwgWyd0c2Nf
bW9kZScsIDBdLCBbJ25vbWlncmF0ZScsIDBdXV0sIFsnczNfaW50ZWdyaXR5
JywgMV0sIFsnZGV2aWNlJywgWyd2YmQnLCBbJ3VuYW1lJywgJ2ZpbGU6L3Zh
ci9saWIvbGlidmlydC9pbWFnZXMvdWJ1bnR1MTEuaW1nJ10sIFsnZGV2Jywg
J2hkYSddLCBbJ21vZGUnLCAndyddXV0sIFsnZGV2aWNlJywgWyd2YmQnLCBb
J3VuYW1lJywgJ3BoeTovZGV2L2Nkcm9tJ10sIFsnZGV2JywgJ2hkYzpjZHJv
bSddLCBbJ21vZGUnLCAnciddXV0sIFsnZGV2aWNlJywgWyd2aWYnLCBbJ2Jy
aWRnZScsICd4ZW5icjAnXSwgWyd0eXBlJywgJ2lvZW11J11dXV0pClsyMDE0
LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoy
NDk4KSBYZW5kRG9tYWluSW5mby5jb25zdHJ1Y3REb21haW4KWzIwMTQtMDIt
MDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGJhbGxvb246MTg3KSBCYWxsb29u
OiAyNzQ4MzA0IEtpQiBmcmVlOyBuZWVkIDE2Mzg0OyBkb25lLgpbMjAxNC0w
Mi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoWGVuZERvbWFpbjo0NzYpIEFk
ZGluZyBEb21haW46IDQKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVC
VUcgKFhlbmREb21haW5JbmZvOjI4MzYpIFhlbmREb21haW5JbmZvLmluaXRE
b21haW46IDQgMjU2ClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVH
IChpbWFnZTozMzkpIE5vIFZOQyBwYXNzd2QgY29uZmlndXJlZCBmb3IgdmZi
IGFjY2VzcwpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1h
Z2U6ODkxKSBhcmdzOiBib290LCB2YWw6IGMKWzIwMTQtMDItMDYgMTM6Mjk6
MzAgMTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogZmRhLCB2YWw6IE5v
bmUKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGltYWdlOjg5
MSkgYXJnczogZmRiLCB2YWw6IE5vbmUKWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogc291bmRodywgdmFsOiBO
b25lClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChpbWFnZTo4
OTEpIGFyZ3M6IGxvY2FsdGltZSwgdmFsOiAwClsyMDE0LTAyLTA2IDEzOjI5
OjMwIDE1MjJdIERFQlVHIChpbWFnZTo4OTEpIGFyZ3M6IHNlcmlhbCwgdmFs
OiBbJ3B0eSddClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChp
bWFnZTo4OTEpIGFyZ3M6IHN0ZC12Z2EsIHZhbDogMApbMjAxNC0wMi0wNiAx
MzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBpc2EsIHZh
bDogMApbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6
ODkxKSBhcmdzOiBhY3BpLCB2YWw6IDEKWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJnczogdXNiLCB2YWw6IDEKWzIw
MTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGltYWdlOjg5MSkgYXJn
czogdXNiZGV2aWNlLCB2YWw6IFsnaG9zdDoxMjVmOmM5NmEnXQpbMjAxNC0w
Mi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6ODkxKSBhcmdzOiBn
ZnhfcGFzc3RocnUsIHZhbDogTm9uZQpbMjAxNC0wMi0wNiAxMzoyOTozMCAx
NTIyXSBJTkZPIChpbWFnZTo4MjIpIE5lZWQgdG8gY3JlYXRlIHBsYXRmb3Jt
IGRldmljZS5bZG9taWQ6NF0KWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0g
REVCVUcgKFhlbmREb21haW5JbmZvOjI4NjMpIF9pbml0RG9tYWluOnNoYWRv
d19tZW1vcnk9MHgwLCBtZW1vcnlfc3RhdGljX21heD0weDQwMDAwMDAwLCBt
ZW1vcnlfc3RhdGljX21pbj0weDAuClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1
MjJdIElORk8gKGltYWdlOjE4MikgYnVpbGREb21haW4gb3M9aHZtIGRvbT00
IHZjcHVzPTEKWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGlt
YWdlOjk0OSkgZG9taWQgICAgICAgICAgPSA0ClsyMDE0LTAyLTA2IDEzOjI5
OjMwIDE1MjJdIERFQlVHIChpbWFnZTo5NTApIGltYWdlICAgICAgICAgID0g
L3Vzci9saWIveGVuL2Jvb3QvaHZtbG9hZGVyClsyMDE0LTAyLTA2IDEzOjI5
OjMwIDE1MjJdIERFQlVHIChpbWFnZTo5NTEpIHN0b3JlX2V2dGNobiAgID0g
MgpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBERUJVRyAoaW1hZ2U6OTUy
KSBtZW1zaXplICAgICAgICA9IDEwMjQKWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gREVCVUcgKGltYWdlOjk1MykgdGFyZ2V0ICAgICAgICAgPSAxMDI0
ClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChpbWFnZTo5NTQp
IHZjcHVzICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIy
XSBERUJVRyAoaW1hZ2U6OTU1KSB2Y3B1X2F2YWlsICAgICA9IDEKWzIwMTQt
MDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKGltYWdlOjk1NikgYWNwaSAg
ICAgICAgICAgPSAxClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVH
IChpbWFnZTo5NTcpIGFwaWMgICAgICAgICAgID0gMQpbMjAxNC0wMi0wNiAx
MzoyOTozMCAxNTIyXSBJTkZPIChYZW5kRG9tYWluSW5mbzoyMzU3KSBjcmVh
dGVEZXZpY2U6IHZmYiA6IHsndm5jdW51c2VkJzogMSwgJ290aGVyX2NvbmZp
Zyc6IHsndm5jdW51c2VkJzogMSwgJ3ZuYyc6ICcxJ30sICd2bmMnOiAnMScs
ICd1dWlkJzogJzhlYTk2NzIyLTE0YmUtODRjZC05YmFhLWMwYjRlMWMzYmQz
Yyd9ClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChEZXZDb250
cm9sbGVyOjk1KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnc3RhdGUnOiAn
MScsICdiYWNrZW5kLWlkJzogJzAnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9t
YWluLzAvYmFja2VuZC92ZmIvNC8wJ30gdG8gL2xvY2FsL2RvbWFpbi80L2Rl
dmljZS92ZmIvMC4KWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeyd2
bmN1bnVzZWQnOiAnMScsICdkb21haW4nOiAndWJ1bnR1MTEnLCAnZnJvbnRl
bmQnOiAnL2xvY2FsL2RvbWFpbi80L2RldmljZS92ZmIvMCcsICd1dWlkJzog
JzhlYTk2NzIyLTE0YmUtODRjZC05YmFhLWMwYjRlMWMzYmQzYycsICdmcm9u
dGVuZC1pZCc6ICc0JywgJ3N0YXRlJzogJzEnLCAnb25saW5lJzogJzEnLCAn
dm5jJzogJzEnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92ZmIvNC8w
LgpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIyXSBJTkZPIChYZW5kRG9tYWlu
SW5mbzoyMzU3KSBjcmVhdGVEZXZpY2U6IHZiZCA6IHsndXVpZCc6ICdiODUw
NDg2ZS05YjI2LWI4MmUtMjMxNC02OGNiOGIzOGNkNmMnLCAnYm9vdGFibGUn
OiAxLCAnZHJpdmVyJzogJ3BhcmF2aXJ0dWFsaXNlZCcsICdkZXYnOiAnaGRh
JywgJ3VuYW1lJzogJ2ZpbGU6L3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndyd9ClsyMDE0LTAyLTA2IDEzOjI5OjMw
IDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk1KSBEZXZDb250cm9sbGVy
OiB3cml0aW5nIHsnYmFja2VuZC1pZCc6ICcwJywgJ3ZpcnR1YWwtZGV2aWNl
JzogJzc2OCcsICdkZXZpY2UtdHlwZSc6ICdkaXNrJywgJ3N0YXRlJzogJzEn
LCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQvNC83
NjgnfSB0byAvbG9jYWwvZG9tYWluLzQvZGV2aWNlL3ZiZC83NjguClsyMDE0
LTAyLTA2IDEzOjI5OjMwIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjk3
KSBEZXZDb250cm9sbGVyOiB3cml0aW5nIHsnZG9tYWluJzogJ3VidW50dTEx
JywgJ2Zyb250ZW5kJzogJy9sb2NhbC9kb21haW4vNC9kZXZpY2UvdmJkLzc2
OCcsICd1dWlkJzogJ2I4NTA0ODZlLTliMjYtYjgyZS0yMzE0LTY4Y2I4YjM4
Y2Q2YycsICdib290YWJsZSc6ICcxJywgJ2Rldic6ICdoZGEnLCAnc3RhdGUn
OiAnMScsICdwYXJhbXMnOiAnL3Zhci9saWIvbGlidmlydC9pbWFnZXMvdWJ1
bnR1MTEuaW1nJywgJ21vZGUnOiAndycsICdvbmxpbmUnOiAnMScsICdmcm9u
dGVuZC1pZCc6ICc0JywgJ3R5cGUnOiAnZmlsZSd9IHRvIC9sb2NhbC9kb21h
aW4vMC9iYWNrZW5kL3ZiZC80Lzc2OC4KWzIwMTQtMDItMDYgMTM6Mjk6MzAg
MTUyMl0gSU5GTyAoWGVuZERvbWFpbkluZm86MjM1NykgY3JlYXRlRGV2aWNl
OiB2YmQgOiB7J3V1aWQnOiAnYTMzMTI3OGQtMmY1Zi01ZDczLTRlOWUtMGM3
ODcxZDYwNjcwJywgJ2Jvb3RhYmxlJzogMCwgJ2RyaXZlcic6ICdwYXJhdmly
dHVhbGlzZWQnLCAnZGV2JzogJ2hkYzpjZHJvbScsICd1bmFtZSc6ICdwaHk6
L2Rldi9jZHJvbScsICdtb2RlJzogJ3InfQpbMjAxNC0wMi0wNiAxMzoyOToz
MCAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJvbGxl
cjogd3JpdGluZyB7J2JhY2tlbmQtaWQnOiAnMCcsICd2aXJ0dWFsLWRldmlj
ZSc6ICc1NjMyJywgJ2RldmljZS10eXBlJzogJ2Nkcm9tJywgJ3N0YXRlJzog
JzEnLCAnYmFja2VuZCc6ICcvbG9jYWwvZG9tYWluLzAvYmFja2VuZC92YmQv
NC81NjMyJ30gdG8gL2xvY2FsL2RvbWFpbi80L2RldmljZS92YmQvNTYzMi4K
WzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcgKERldkNvbnRyb2xs
ZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1
bnR1MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi80L2RldmljZS92
YmQvNTYzMicsICd1dWlkJzogJ2EzMzEyNzhkLTJmNWYtNWQ3My00ZTllLTBj
Nzg3MWQ2MDY3MCcsICdib290YWJsZSc6ICcwJywgJ2Rldic6ICdoZGMnLCAn
c3RhdGUnOiAnMScsICdwYXJhbXMnOiAnL2Rldi9jZHJvbScsICdtb2RlJzog
J3InLCAnb25saW5lJzogJzEnLCAnZnJvbnRlbmQtaWQnOiAnNCcsICd0eXBl
JzogJ3BoeSd9IHRvIC9sb2NhbC9kb21haW4vMC9iYWNrZW5kL3ZiZC80LzU2
MzIuClsyMDE0LTAyLTA2IDEzOjI5OjMwIDE1MjJdIElORk8gKFhlbmREb21h
aW5JbmZvOjIzNTcpIGNyZWF0ZURldmljZTogdmlmIDogeydicmlkZ2UnOiAn
eGVuYnIwJywgJ21hYyc6ICcwMDoxNjozZTo2ODo1Njo3OScsICd0eXBlJzog
J2lvZW11JywgJ3V1aWQnOiAnNmU2MWQ2MjQtN2VkYS03ZGQ3LTE5YzYtZGQ1
OGU5YTkzNzM4J30KWzIwMTQtMDItMDYgMTM6Mjk6MzAgMTUyMl0gREVCVUcg
KERldkNvbnRyb2xsZXI6OTUpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydz
dGF0ZSc6ICcxJywgJ2JhY2tlbmQtaWQnOiAnMCcsICdiYWNrZW5kJzogJy9s
b2NhbC9kb21haW4vMC9iYWNrZW5kL3ZpZi80LzAnfSB0byAvbG9jYWwvZG9t
YWluLzQvZGV2aWNlL3ZpZi8wLgpbMjAxNC0wMi0wNiAxMzoyOTozMCAxNTIy
XSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NykgRGV2Q29udHJvbGxlcjogd3Jp
dGluZyB7J2JyaWRnZSc6ICd4ZW5icjAnLCAnZG9tYWluJzogJ3VidW50dTEx
JywgJ2hhbmRsZSc6ICcwJywgJ3V1aWQnOiAnNmU2MWQ2MjQtN2VkYS03ZGQ3
LTE5YzYtZGQ1OGU5YTkzNzM4JywgJ3NjcmlwdCc6ICcvZXRjL3hlbi9zY3Jp
cHRzL3ZpZi1icmlkZ2UnLCAnbWFjJzogJzAwOjE2OjNlOjY4OjU2Ojc5Jywg
J2Zyb250ZW5kLWlkJzogJzQnLCAnc3RhdGUnOiAnMScsICdvbmxpbmUnOiAn
MScsICdmcm9udGVuZCc6ICcvbG9jYWwvZG9tYWluLzQvZGV2aWNlL3ZpZi8w
JywgJ3R5cGUnOiAnaW9lbXUnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2Vu
ZC92aWYvNC8wLgpbMjAxNC0wMi0wNiAxMzoyOTozMSAxNTIyXSBJTkZPIChp
bWFnZTo0MTgpIHNwYXduaW5nIGRldmljZSBtb2RlbHM6IC91c3IvbGliL3hl
bi9iaW4vcWVtdS1kbSBbJy91c3IvbGliL3hlbi9iaW4vcWVtdS1kbScsICct
ZCcsICc0JywgJy1kb21haW4tbmFtZScsICd1YnVudHUxMScsICctdmlkZW9y
YW0nLCAnNCcsICctdm5jJywgJzEyNy4wLjAuMTowJywgJy12bmN1bnVzZWQn
LCAnLXZjcHVzJywgJzEnLCAnLXZjcHVfYXZhaWwnLCAnMHgxJywgJy1ib290
JywgJ2MnLCAnLXNlcmlhbCcsICdwdHknLCAnLWFjcGknLCAnLXVzYicsICct
dXNiZGV2aWNlJywgIlsnaG9zdDoxMjVmOmM5NmEnXSIsICctbmV0JywgJ25p
Yyx2bGFuPTEsbWFjYWRkcj0wMDoxNjozZTo2ODo1Njo3OSxtb2RlbD1ydGw4
MTM5JywgJy1uZXQnLCAndGFwLHZsYW49MSxpZm5hbWU9dGFwNC4wLGJyaWRn
ZT14ZW5icjAnLCAnLU0nLCAneGVuZnYnXQpbMjAxNC0wMi0wNiAxMzoyOToz
MSAxNTIyXSBJTkZPIChpbWFnZTo0NjcpIGRldmljZSBtb2RlbCBwaWQ6IDEy
MjU0ClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIElORk8gKGltYWdlOjU5
MCkgd2FpdGluZyBmb3Igc2VudGluZWxfZmlmbwpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MzQyMCkgU3Rvcmlu
ZyBWTSBkZXRhaWxzOiB7J29uX3hlbmRfc3RvcCc6ICdpZ25vcmUnLCAncG9v
bF9uYW1lJzogJ1Bvb2wtMCcsICdzaGFkb3dfbWVtb3J5JzogJzknLCAndXVp
ZCc6ICc1ZGE0NzY4Ni00NGE4LTNhZDQtZDIzOC0xNDJjOTYxZmQ3MmMnLCAn
b25fcmVib290JzogJ3Jlc3RhcnQnLCAnc3RhcnRfdGltZSc6ICcxMzkxNjgw
NzcxLjA0JywgJ29uX3Bvd2Vyb2ZmJzogJ2Rlc3Ryb3knLCAnYm9vdGxvYWRl
cl9hcmdzJzogJycsICdvbl94ZW5kX3N0YXJ0JzogJ2lnbm9yZScsICdvbl9j
cmFzaCc6ICdyZXN0YXJ0JywgJ3hlbmQvcmVzdGFydF9jb3VudCc6ICcwJywg
J3ZjcHVzJzogJzEnLCAndmNwdV9hdmFpbCc6ICcxJywgJ2Jvb3Rsb2FkZXIn
OiAnJywgJ2ltYWdlJzogIihodm0gKGtlcm5lbCAnJykgKHN1cGVycGFnZXMg
MCkgKHZpZGVvcmFtIDQpIChocGV0IDApIChzdGR2Z2EgMCkgKGxvYWRlciAv
dXNyL2xpYi94ZW4vYm9vdC9odm1sb2FkZXIpICh4ZW5fcGxhdGZvcm1fcGNp
IDEpIChvcGVuZ2wgMSkgKHJ0Y190aW1lb2Zmc2V0IDApIChwY2kgKCkpICho
YXAgMSkgKGxvY2FsdGltZSAwKSAodGltZXJfbW9kZSAxKSAocGNpX21zaXRy
YW5zbGF0ZSAxKSAob29zIDEpIChhcGljIDEpIChzZGwgMCkgKHVzYmRldmlj
ZSAoaG9zdDoxMjVmOmM5NmEpKSAoZGlzcGxheSA6MC4wKSAodnB0X2FsaWdu
IDEpIChzZXJpYWwgcHR5KSAodm5jdW51c2VkIDEpIChib290IGMpIChwYWUg
MSkgKHZpcmlkaWFuIDApIChhY3BpIDEpICh2bmMgMSkgKG5vZ3JhcGhpYyAw
KSAobm9taWdyYXRlIDApICh1c2IgMSkgKHRzY19tb2RlIDApIChndWVzdF9v
c190eXBlIGRlZmF1bHQpIChkZXZpY2VfbW9kZWwgL3Vzci9saWIveGVuL2Jp
bi9xZW11LWRtKSAocGNpX3Bvd2VyX21nbXQgMCkgKHhhdXRob3JpdHkgL3Jv
b3QvLlhhdXRob3JpdHkpIChpc2EgMCkgKG5vdGVzIChTVVNQRU5EX0NBTkNF
TCAxKSkpIiwgJ25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MTc5NCkgU3Rvcmlu
ZyBkb21haW4gZGV0YWlsczogeydjb25zb2xlL3BvcnQnOiAnMycsICdkZXNj
cmlwdGlvbic6ICcnLCAnY29uc29sZS9saW1pdCc6ICcxMDQ4NTc2JywgJ3N0
b3JlL3BvcnQnOiAnMicsICd2bSc6ICcvdm0vNWRhNDc2ODYtNDRhOC0zYWQ0
LWQyMzgtMTQyYzk2MWZkNzJjJywgJ2RvbWlkJzogJzQnLCAnaW1hZ2Uvc3Vz
cGVuZC1jYW5jZWwnOiAnMScsICdjcHUvMC9hdmFpbGFiaWxpdHknOiAnb25s
aW5lJywgJ21lbW9yeS90YXJnZXQnOiAnMTA0ODU3NicsICdjb250cm9sL3Bs
YXRmb3JtLWZlYXR1cmUtbXVsdGlwcm9jZXNzb3Itc3VzcGVuZCc6ICcxJywg
J3N0b3JlL3JpbmctcmVmJzogJzEwNDQ0NzYnLCAnY29uc29sZS90eXBlJzog
J2lvZW11JywgJ25hbWUnOiAndWJ1bnR1MTEnfQpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo5NSkgRGV2Q29udHJv
bGxlcjogd3JpdGluZyB7J3N0YXRlJzogJzEnLCAnYmFja2VuZC1pZCc6ICcw
JywgJ2JhY2tlbmQnOiAnL2xvY2FsL2RvbWFpbi8wL2JhY2tlbmQvY29uc29s
ZS80LzAnfSB0byAvbG9jYWwvZG9tYWluLzQvZGV2aWNlL2NvbnNvbGUvMC4K
WzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xs
ZXI6OTcpIERldkNvbnRyb2xsZXI6IHdyaXRpbmcgeydkb21haW4nOiAndWJ1
bnR1MTEnLCAnZnJvbnRlbmQnOiAnL2xvY2FsL2RvbWFpbi80L2RldmljZS9j
b25zb2xlLzAnLCAndXVpZCc6ICdiMzk0MGVlMi0wOGRlLTE3MWEtYWM4NS1h
MTU0NjUwNzQ3MWYnLCAnZnJvbnRlbmQtaWQnOiAnNCcsICdzdGF0ZSc6ICcx
JywgJ2xvY2F0aW9uJzogJzMnLCAnb25saW5lJzogJzEnLCAncHJvdG9jb2wn
OiAndnQxMDAnfSB0byAvbG9jYWwvZG9tYWluLzAvYmFja2VuZC9jb25zb2xl
LzQvMC4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjE4ODEpIFhlbmREb21haW5JbmZvLmhhbmRsZVNodXRkb3du
V2F0Y2gKWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNv
bnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHRhcDIuClsyMDE0
LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEz
OSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2aWYuClsyMDE0LTAyLTA2IDEzOjI5
OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBm
b3IgMC4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNv
bnRyb2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2Rv
bWFpbi8wL2JhY2tlbmQvdmlmLzQvMC9ob3RwbHVnLXN0YXR1cy4KWzIwMTQt
MDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6NjQy
KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgMS4KWzIwMTQtMDItMDYgMTM6Mjk6
MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5nIGZv
ciBkZXZpY2VzIHZrYmQuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERF
QlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBp
b3BvcnRzLgpbMjAxNC0wMi0wNiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2
Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdGFwLgpbMjAx
NC0wMi0wNiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjox
MzkpIFdhaXRpbmcgZm9yIGRldmljZXMgdmlmMi4KWzIwMTQtMDItMDYgMTM6
Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xsZXI6MTM5KSBXYWl0aW5n
IGZvciBkZXZpY2VzIGNvbnNvbGUuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1
MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjE0NCkgV2FpdGluZyBmb3IgMC4K
WzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRyb2xs
ZXI6MTM5KSBXYWl0aW5nIGZvciBkZXZpY2VzIHZzY3NpLgpbMjAxNC0wMi0w
NiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdh
aXRpbmcgZm9yIGRldmljZXMgdmJkLgpbMjAxNC0wMi0wNiAxMzoyOTozMSAx
NTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9yIDc2
OC4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0gREVCVUcgKERldkNvbnRy
b2xsZXI6NjI4KSBob3RwbHVnU3RhdHVzQ2FsbGJhY2sgL2xvY2FsL2RvbWFp
bi8wL2JhY2tlbmQvdmJkLzQvNzY4L2hvdHBsdWctc3RhdHVzLgpbMjAxNC0w
Mi0wNiAxMzoyOTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjo2NDIp
IGhvdHBsdWdTdGF0dXNDYWxsYmFjayAxLgpbMjAxNC0wMi0wNiAxMzoyOToz
MSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxNDQpIFdhaXRpbmcgZm9y
IDU2MzIuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZD
b250cm9sbGVyOjYyOCkgaG90cGx1Z1N0YXR1c0NhbGxiYWNrIC9sb2NhbC9k
b21haW4vMC9iYWNrZW5kL3ZiZC80LzU2MzIvaG90cGx1Zy1zdGF0dXMuClsy
MDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVy
OjY0MikgaG90cGx1Z1N0YXR1c0NhbGxiYWNrIDEuClsyMDE0LTAyLTA2IDEz
OjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGlu
ZyBmb3IgZGV2aWNlcyBpcnEuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJd
IERFQlVHIChEZXZDb250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNl
cyB2ZmIuClsyMDE0LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZD
b250cm9sbGVyOjEzOSkgV2FpdGluZyBmb3IgZGV2aWNlcyBwY2kuClsyMDE0
LTAyLTA2IDEzOjI5OjMxIDE1MjJdIERFQlVHIChEZXZDb250cm9sbGVyOjEz
OSkgV2FpdGluZyBmb3IgZGV2aWNlcyB2dXNiLgpbMjAxNC0wMi0wNiAxMzoy
OTozMSAxNTIyXSBERUJVRyAoRGV2Q29udHJvbGxlcjoxMzkpIFdhaXRpbmcg
Zm9yIGRldmljZXMgdnRwbS4KWzIwMTQtMDItMDYgMTM6Mjk6MzEgMTUyMl0g
SU5GTyAoWGVuZERvbWFpbjoxMjI1KSBEb21haW4gdWJ1bnR1MTEgKDQpIHVu
cGF1c2VkLgpbMjAxNC0wMi0wNiAxMzozMDo1MiAxNTIyXSBERUJVRyAoWGVu
ZENoZWNrcG9pbnQ6MTI0KSBbeGNfc2F2ZV06IC91c3IvbGliL3hlbi9iaW4v
eGNfc2F2ZSAyOCA0IDAgMCA1ClsyMDE0LTAyLTA2IDEzOjMwOjUyIDE1MjJd
IElORk8gKFhlbmRDaGVja3BvaW50OjQyMykgeGNfc2F2ZTogZmFpbGVkIHRv
IGdldCB0aGUgc3VzcGVuZCBldnRjaG4gcG9ydApbMjAxNC0wMi0wNiAxMzoz
MDo1MiAxNTIyXSBJTkZPIChYZW5kQ2hlY2twb2ludDo0MjMpIApbMjAxNC0w
Mi0wNiAxMzozMjozOCAxNTIyXSBERUJVRyAoWGVuZENoZWNrcG9pbnQ6Mzk0
KSBzdXNwZW5kClsyMDE0LTAyLTA2IDEzOjMyOjM4IDE1MjJdIERFQlVHIChY
ZW5kQ2hlY2twb2ludDoxMjcpIEluIHNhdmVJbnB1dEhhbmRsZXIgc3VzcGVu
ZApbMjAxNC0wMi0wNiAxMzozMjozOCAxNTIyXSBERUJVRyAoWGVuZENoZWNr
cG9pbnQ6MTI5KSBTdXNwZW5kaW5nIDQgLi4uClsyMDE0LTAyLTA2IDEzOjMy
OjM4IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzo1MjQpIFhlbmREb21h
aW5JbmZvLnNodXRkb3duKHN1c3BlbmQpClsyMDE0LTAyLTA2IDEzOjMyOjM4
IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoxODgxKSBYZW5kRG9tYWlu
SW5mby5oYW5kbGVTaHV0ZG93bldhdGNoClsyMDE0LTAyLTA2IDEzOjMyOjM4
IDE1MjJdIElORk8gKFhlbmREb21haW5JbmZvOjU0MSkgSFZNIHNhdmU6cmVt
b3RlIHNodXRkb3duIGRvbSA0IQpbMjAxNC0wMi0wNiAxMzozMjozOCAxNTIy
XSBJTkZPIChYZW5kQ2hlY2twb2ludDoxMzUpIERvbWFpbiA0IHN1c3BlbmRl
ZC4KWzIwMTQtMDItMDYgMTM6MzI6MzggMTUyMl0gSU5GTyAoWGVuZERvbWFp
bkluZm86MjA3OCkgRG9tYWluIGhhcyBzaHV0ZG93bjogbmFtZT1taWdyYXRp
bmctdWJ1bnR1MTEgaWQ9NCByZWFzb249c3VzcGVuZC4KWzIwMTQtMDItMDYg
MTM6MzI6MzggMTUyMl0gSU5GTyAoaW1hZ2U6NTM4KSBzaWduYWxEZXZpY2VN
b2RlbDpyZXN0b3JlIGRtIHN0YXRlIHRvIHJ1bm5pbmcKWzIwMTQtMDItMDYg
MTM6MzI6MzggMTUyMl0gREVCVUcgKFhlbmRDaGVja3BvaW50OjE0NCkgV3Jp
dHRlbiBkb25lClsyMDE0LTAyLTA2IDEzOjMyOjM4IDE1MjJdIERFQlVHIChY
ZW5kRG9tYWluSW5mbzozMDcxKSBYZW5kRG9tYWluSW5mby5kZXN0cm95OiBk
b21pZD00ClsyMDE0LTAyLTA2IDEzOjMyOjM5IDE1MjJdIERFQlVHIChYZW5k
RG9tYWluSW5mbzoyNDAxKSBEZXN0cm95aW5nIGRldmljZSBtb2RlbApbMjAx
NC0wMi0wNiAxMzozMjozOSAxNTIyXSBJTkZPIChpbWFnZTo2MTUpIG1pZ3Jh
dGluZy11YnVudHUxMSBkZXZpY2UgbW9kZWwgdGVybWluYXRlZApbMjAxNC0w
Mi0wNiAxMzozMjozOSAxNTIyXSBERUJVRyAoWGVuZERvbWFpbkluZm86MjQw
OCkgUmVsZWFzaW5nIGRldmljZXMKWzIwMTQtMDItMDYgMTM6MzI6MzkgMTUy
Ml0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZpZi8w
ClsyMDE0LTAyLTA2IDEzOjMyOjM5IDE1MjJdIERFQlVHIChYZW5kRG9tYWlu
SW5mbzoxMjc2KSBYZW5kRG9tYWluSW5mby5kZXN0cm95RGV2aWNlOiBkZXZp
Y2VDbGFzcyA9IHZpZiwgZGV2aWNlID0gdmlmLzAKWzIwMTQtMDItMDYgMTM6
MzI6MzkgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjI0MTQpIFJlbW92
aW5nIGNvbnNvbGUvMApbMjAxNC0wMi0wNiAxMzozMjozOSAxNTIyXSBERUJV
RyAoWGVuZERvbWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJv
eURldmljZTogZGV2aWNlQ2xhc3MgPSBjb25zb2xlLCBkZXZpY2UgPSBjb25z
b2xlLzAKWzIwMTQtMDItMDYgMTM6MzI6MzkgMTUyMl0gREVCVUcgKFhlbmRE
b21haW5JbmZvOjI0MTQpIFJlbW92aW5nIHZiZC83NjgKWzIwMTQtMDItMDYg
MTM6MzI6MzkgMTUyMl0gREVCVUcgKFhlbmREb21haW5JbmZvOjEyNzYpIFhl
bmREb21haW5JbmZvLmRlc3Ryb3lEZXZpY2U6IGRldmljZUNsYXNzID0gdmJk
LCBkZXZpY2UgPSB2YmQvNzY4ClsyMDE0LTAyLTA2IDEzOjMyOjM5IDE1MjJd
IERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBSZW1vdmluZyB2YmQvNTYz
MgpbMjAxNC0wMi0wNiAxMzozMjozOSAxNTIyXSBERUJVRyAoWGVuZERvbWFp
bkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJveURldmljZTogZGV2
aWNlQ2xhc3MgPSB2YmQsIGRldmljZSA9IHZiZC81NjMyClsyMDE0LTAyLTA2
IDEzOjMyOjM5IDE1MjJdIERFQlVHIChYZW5kRG9tYWluSW5mbzoyNDE0KSBS
ZW1vdmluZyB2ZmIvMApbMjAxNC0wMi0wNiAxMzozMjozOSAxNTIyXSBERUJV
RyAoWGVuZERvbWFpbkluZm86MTI3NikgWGVuZERvbWFpbkluZm8uZGVzdHJv
eURldmljZTogZGV2aWNlQ2xhc3MgPSB2ZmIsIGRldmljZSA9IHZmYi8wCg==


--1665047788-1877962208-1391689706=:75705
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1665047788-1877962208-1391689706=:75705--


From xen-devel-bounces@lists.xen.org Thu Feb 06 13:11:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBOkB-0002qI-7L; Thu, 06 Feb 2014 13:11:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBOk9-0002q4-Rc
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 13:11:50 +0000
Received: from [193.109.254.147:51803] by server-13.bemta-14.messagelabs.com
	id 0C/7A-01226-51A83F25; Thu, 06 Feb 2014 13:11:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391692306!2479629!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24849 invoked from network); 6 Feb 2014 13:11:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:11:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100449614"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 13:11:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:11:45 -0500
Message-ID: <1391692303.25128.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 13:11:43 +0000
In-Reply-To: <52F38938.2090500@linaro.org>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
	<52F38938.2090500@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
 support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 13:08 +0000, Julien Grall wrote:
> 
> On 06/02/14 12:57, Ian Campbell wrote:
> > On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
> >>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> >>
> >> I remember we had a discussion about the memory constraints when I
> >> implemented the VFP context switch for arm32
> >> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
> >>
> >> I think you should use "=Q" here as well, to avoid clobbering the whole memory.
> >
> > Yes, I forgot to say: I think getting something in now is the priority,
> > which is why I committed it, but this should be tightened up, probably
> > for 4.5 unless the difference is benchmarkable.
> 
> The fix is very simple (a matter of a 2-line change). I would prefer to
> delay this patch for a couple of days and have a correct
> implementation from the beginning, so we will not forget to change the
> code for Xen 4.5.

And I would rather close this rather large hole right now and not wait
for two more days when we are looking at doing what might be the final
rc soon.

I had already applied before you said anything, so the point is moot.

Anyway, if someone wants to submit for 4.4 with a case for a release
exception then I'm sure George will consider it.

Otherwise this thread is now in my QUEUE-4.5 folder so I'll get a
reminder shortly after the release when I go through that.

> Moreover, Pranav usually answers quickly :).

If he's awake/at work, it's out of office hours for him now.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:45:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPGJ-0004LF-MU; Thu, 06 Feb 2014 13:45:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WBPGI-0004LA-OS
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 13:45:02 +0000
Received: from [85.158.137.68:56762] by server-8.bemta-3.messagelabs.com id
	10/8F-16039-DD193F25; Thu, 06 Feb 2014 13:45:01 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391694299!74252!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14977 invoked from network); 6 Feb 2014 13:45:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:45:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; 
	d="asc'?scan'208";a="100457171"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 13:44:58 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:44:58 -0500
Message-ID: <1391694283.9917.8.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 6 Feb 2014 14:44:43 +0100
In-Reply-To: <52F35238.90806@ts.fujitsu.com>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
	<52F35238.90806@ts.fujitsu.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7483079689061036635=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7483079689061036635==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-0uFMVA0Ym1wNlcV1oe0y"

--=-0uFMVA0Ym1wNlcV1oe0y
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-06 at 10:13 +0100, Juergen Gross wrote:
> On 06.02.2014 09:58, Justin Weaver wrote:
> > This patch attempts to address the issue of the Xen Credit 2
> > Scheduler only creating one vCPU run queue on multiple physical
> > processor systems. It should be creating one run queue per
> > physical processor.
> >
> > CPU 0 does not get a starting callback, so it is hard coded to run
> > queue 0. At the time this happens, socket information is not
> > available for CPU 0.
> >
> > Socket information is available for each individual CPU when each
> > gets the STARTING callback (I believe socket information is also
> > available for CPU 0 by that time). This patch adds the following
> > algorithm...
> >
> > IF cpu is on the same socket as CPU 0, add it to run queue 0
>=20
> You should check whether cpu and CPU0 are in the same cpupool.
>=20
> BTW: CPU0 is allowed to be moved to another cpupool, too.
>=20
Good points. However, the code, as it is now, does not look to care much
about cpupools while constructing this 'one runqueue per socket' thing,
does it? I mean, what happens, right now, if, after credit2 builds up
the runqueues --say one per socket-- two pCPUs from the same socket end
up in different cpupools? It looks to me that the two things are treated
as orthogonal while, as you say, they may not be... I guess I'll try
that ASAP and let you know...

My point is that Justin is trying to fix a bug in credit2, which claims
to construct one runqueue per socket but ends up with only one runqueue
in total. If there is another bug, or buggy behaviour, in how this
interacts with cpupools, we should fix that too, but it is pre-existing
and needs addressing in a dedicated patch (series), doesn't it?
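For concreteness, the assignment rule quoted above, extended with Juergen's cpupool caveat, could be sketched like this (hypothetical helper names, not the actual credit2 code):

```python
# Sketch: map a starting CPU to a per-(cpupool, socket) runqueue.
# socket_of and pool_of are assumed lookup tables, not Xen APIs.
def assign_runqueue(cpu, socket_of, pool_of, runqueues):
    """Return the runqueue id for `cpu`, allocating one per
    (cpupool, socket) pair so CPUs in different pools never share."""
    key = (pool_of[cpu], socket_of[cpu])
    if key not in runqueues:
        runqueues[key] = len(runqueues)  # allocate a fresh runqueue id
    return runqueues[key]
```

With this keying, two pCPUs on the same socket but in different cpupools land on different runqueues, which is exactly the case the plain per-socket rule would get wrong.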

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-0uFMVA0Ym1wNlcV1oe0y
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLzkcsACgkQk4XaBE3IOsQNjgCdHzgTeXM/9cjFB8xfU7raBTtC
rloAoJUiQBslKJDU/WCdn/w4wu+3LF38
=wr9Y
-----END PGP SIGNATURE-----

--=-0uFMVA0Ym1wNlcV1oe0y--


--===============7483079689061036635==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7483079689061036635==--


From xen-devel-bounces@lists.xen.org Thu Feb 06 13:53:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPOn-0004l7-VI; Thu, 06 Feb 2014 13:53:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPOm-0004l2-Ad
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 13:53:48 +0000
Received: from [85.158.139.211:36912] by server-17.bemta-5.messagelabs.com id
	B6/44-31975-BE393F25; Thu, 06 Feb 2014 13:53:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391694825!2110127!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15111 invoked from network); 6 Feb 2014 13:53:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:53:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100459665"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 13:53:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:53:22 -0500
Message-ID: <1391694801.25128.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 13:53:21 +0000
In-Reply-To: <1391444091-22796-12-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-12-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 11/18] libxl: fork: Break out
	sigchld_sethandler_raw
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> We are going to want to introduce another call site in the final
> substantive patch.
> 
> Pure code motion; no functional change.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> 
> ---
> v3: Remove now-unused variables from sigchld_installhandler_core
> ---
>  tools/libxl/libxl_fork.c |   23 ++++++++++++++---------
>  1 file changed, 14 insertions(+), 9 deletions(-)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index ce8e8eb..084d86a 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -182,6 +182,19 @@ static void sigchld_handler(int signo)
>      errno = esave;
>  }
>  
> +static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old)
> +{
> +    struct sigaction ours;
> +    int r;
> +
> +    memset(&ours,0,sizeof(ours));
> +    ours.sa_handler = handler;
> +    sigemptyset(&ours.sa_mask);
> +    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> +    r = sigaction(SIGCHLD, &ours, old);
> +    assert(!r);
> +}
> +
>  static void sigchld_removehandler_core(void)
>  {
>      struct sigaction was;
> @@ -196,18 +209,10 @@ static void sigchld_removehandler_core(void)
>  
>  static void sigchld_installhandler_core(libxl__gc *gc)
>  {
> -    struct sigaction ours;
> -    int r;
> -
>      assert(!sigchld_owner);
>      sigchld_owner = CTX;
>  
> -    memset(&ours,0,sizeof(ours));
> -    ours.sa_handler = sigchld_handler;
> -    sigemptyset(&ours.sa_mask);
> -    ours.sa_flags = SA_NOCLDSTOP | SA_RESTART;
> -    r = sigaction(SIGCHLD, &ours, &sigchld_saved_action);
> -    assert(!r);
> +    sigchld_sethandler_raw(sigchld_handler, &sigchld_saved_action);
>  
>      assert(((void)"application must negotiate with libxl about SIGCHLD",
>              !(sigchld_saved_action.sa_flags & SA_SIGINFO) &&



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPP3-0004mK-CT; Thu, 06 Feb 2014 13:54:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WBPP1-0004m0-Aa
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 13:54:03 +0000
Received: from [85.158.143.35:51169] by server-2.bemta-4.messagelabs.com id
	3F/CC-10891-AF393F25; Thu, 06 Feb 2014 13:54:02 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391694841!3652400!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31781 invoked from network); 6 Feb 2014 13:54:02 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 13:54:02 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=IGEs0qsDHhUwKyq3uWHDGt+y1lsKOHAMHqnYJJexwidgZDI0zcTqIzOv
	fYWjy6KXFCHQqS5asBpO8hcw8GnwctOKz43fqKGv54p/UKONIRiJ3pVxe
	UgQFV4mXHiAV7mk94O1aNiq2vpE70rJKJ7jQKXQpZcxM1r99gA+R1OXM5
	m+Wl/t52N6v2FlpvNXED94kZotKhmTIs0RvLGALnL7DC0Y5o9zNYHCYgR
	3TyNQcWMj2vcTRxyEYlpzpQZkxaeh;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1391694841; x=1423230841;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=pPP52LNa+c3wFo6JUs6CYP8R0Eq03TccB5tfsKXdX2I=;
	b=MtP5zfcvzkNFC1mIWsYBX12wWDHRux6P+I6M96aYCidEIXRx4ZbTIjiE
	6X18IoHLdFUAsz7cWzeP3To4l7oAX8zO+gNRRaLY5Va7ULpy5oNiivAmD
	4At4l2VFmweeF75A2r+r48blW0qpg20nPW+tZGrwu1YRIUUyqI7fJnHEC
	TX+Mwosnx8Wruc7jno68JilTORyzB05q0GddfNL28277yiNjEBZRWrXCx
	ko/wE1s39p0xp6DdSznZ+4GH9ax8r;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.95,793,1384297200"; d="scan'208";a="158376628"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 06 Feb 2014 14:54:00 +0100
X-IronPort-AV: E=Sophos;i="4.95,793,1384297200"; d="scan'208";a="30996638"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.36]) ([10.172.102.36])
	by abgdate50u.abg.fsc.net with ESMTP; 06 Feb 2014 14:54:00 +0100
Message-ID: <52F393F8.8010206@ts.fujitsu.com>
Date: Thu, 06 Feb 2014 14:54:00 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
	<52F35238.90806@ts.fujitsu.com> <1391694283.9917.8.camel@Solace>
In-Reply-To: <1391694283.9917.8.camel@Solace>
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06.02.2014 14:44, Dario Faggioli wrote:
> On gio, 2014-02-06 at 10:13 +0100, Juergen Gross wrote:
>> On 06.02.2014 09:58, Justin Weaver wrote:
>>> This patch attempts to address the issue of the Xen Credit 2
>>> Scheduler only creating one vCPU run queue on multiple physical
>>> processor systems. It should be creating one run queue per
>>> physical processor.
>>>
>>> CPU 0 does not get a starting callback, so it is hard coded to run
>>> queue 0. At the time this happens, socket information is not
>>> available for CPU 0.
>>>
>>> Socket information is available for each individual CPU when each
>>> gets the STARTING callback (I believe socket information is also
>>> available for CPU 0 by that time). This patch adds the following
>>> algorithm...
>>>
>>> IF cpu is on the same socket as CPU 0, add it to run queue 0
>>
>> You should check whether cpu and CPU0 are in the same cpupool.
>>
>> BTW: CPU0 is allowed to be moved to another cpupool, too.
>>
> Good points. However, the code, as it is now, does not look to care much
> about cpupools while constructing this 'one runqueue per socket' thing,
> does it? I mean, what happens, right now, if, either after or while credit2
> builds up the runqueues --say one per socket-- two pCPUs from the same
> socket are in different cpupools? It looks to me that things are
> considered orthogonal while, as you say, they may not be... I guess I'll
> try that ASAP and let you know...
>
> My point being that, Justin is trying to fix a bug in credit2, which
> says it constructs one runqueue per socket, while it ends up with only
> one runqueue in total. If there is another bug, or buggy behavior, wrt how
> this interacts with cpupools, although we should fix that too, that's
> pre-existing and needs addressing in a dedicated patch (series), isn't
> it?

Now it will construct one runqueue per cpupool. There is one sched_private
structure per cpupool!

I'm not sure what will happen with the change proposed by Justin in case of
multiple credit2 cpupools...


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 13:58:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 13:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPSx-00051q-C9; Thu, 06 Feb 2014 13:58:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WBPSw-00051j-8P
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 13:58:06 +0000
Received: from [193.109.254.147:8358] by server-9.bemta-14.messagelabs.com id
	44/B5-24895-DE493F25; Thu, 06 Feb 2014 13:58:05 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391695075!2474230!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTM2MDIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3961 invoked from network); 6 Feb 2014 13:57:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 13:57:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; 
	d="asc'?scan'208";a="100460539"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 13:57:55 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	08:57:54 -0500
Message-ID: <1391695072.9917.13.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 6 Feb 2014 14:57:52 +0100
In-Reply-To: <52F393F8.8010206@ts.fujitsu.com>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
	<52F35238.90806@ts.fujitsu.com> <1391694283.9917.8.camel@Solace>
	<52F393F8.8010206@ts.fujitsu.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1548046851646850746=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1548046851646850746==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-lY1faKfrYui6LlUrPXzb"

--=-lY1faKfrYui6LlUrPXzb
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-06 at 14:54 +0100, Juergen Gross wrote:
> On 06.02.2014 14:44, Dario Faggioli wrote:
> > My point being that, Justin is trying to fix a bug in credit2, which
> > says it constructs one runqueue per socket, while it ends up with only
> > one runqueue in total. If there is another bug, or buggy behavior, wrt how
> > this interacts with cpupools, although we should fix that too, that's
> > pre-existing and needs addressing in a dedicated patch (series), isn't
> > it?
>
> Now it will construct one runqueue per cpupool. There is one sched_private
> structure per cpupool!
>
> I'm not sure what will happen with the change proposed by Justin in case of
> multiple credit2 cpupools...
>
I see... Mmm, let me try to think a bit more about this then.

Thanks,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-lY1faKfrYui6LlUrPXzb
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLzlOAACgkQk4XaBE3IOsQr9gCfYQqHSMmp4TsktayBRreiZutw
ipUAn388PRQWCrC3JuUneN6LpGc4wyFD
=IwVA
-----END PGP SIGNATURE-----

--=-lY1faKfrYui6LlUrPXzb--


--===============1548046851646850746==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1548046851646850746==--


From xen-devel-bounces@lists.xen.org Thu Feb 06 14:00:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:00:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPV5-0005FA-Tn; Thu, 06 Feb 2014 14:00:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPUs-0005DD-LK
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:00:15 +0000
Received: from [85.158.139.211:25066] by server-11.bemta-5.messagelabs.com id
	CA/E7-23886-56593F25; Thu, 06 Feb 2014 14:00:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391695203!2130349!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16491 invoked from network); 6 Feb 2014 14:00:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:00:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100460963"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:00:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:00:02 -0500
Message-ID: <1391695201.25128.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:00:01 +0000
In-Reply-To: <1391444091-22796-16-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-16-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 15/18] libxl: events: Makefile builds
	internal unit tests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> We provide a new LIBXL_TESTS facility in the Makefile.
> Also provide some helpful common routines for unit tests to use.
> 
> We don't want to put the weird test case entrypoints and the weird
> test case code in the main libxl.so library.  Symbol hiding prevents
> us from simply directly linking the libxl_test_FOO.o in later.  So
> instead we provide a special library libxenlight_test.so which is used
> only locally.
> 
> There are not yet any test cases defined; that will come in the next
> patch.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:02:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:02:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPWz-0005ar-Jh; Thu, 06 Feb 2014 14:02:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPWk-0005aQ-0U
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:02:14 +0000
Received: from [85.158.143.35:8641] by server-2.bemta-4.messagelabs.com id
	27/4C-10891-9D593F25; Thu, 06 Feb 2014 14:02:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391695319!3662021!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9105 invoked from network); 6 Feb 2014 14:02:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:02:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100461615"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:01:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:01:58 -0500
Message-ID: <1391695317.25128.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:01:57 +0000
In-Reply-To: <1391444091-22796-17-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-17-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 16/18] libxl: events: timedereg internal
	unit test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> Test timeout deregistration idempotency.  In the current tree this
> test fails because ev->func is not cleared, meaning that a timeout
> can be removed from the list more than once, corrupting the list.
> 
> It is necessary to use multiple timeouts to demonstrate this bug,
> because removing the very same entry twice from a list in quick
> succession, without modifying the list in other ways in between,
> doesn't actually corrupt the list.  (Since removing an entry from a
> doubly-linked list just copies next and back from the disappearing
> entry into its neighbours.)
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Cc: Jim Fehlig <jfehlig@suse.com>

I've only glanced at the test, but I don't think it needs thorough
review so:

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:02:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:02:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPXW-0005eQ-7s; Thu, 06 Feb 2014 14:02:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPXU-0005eE-Sk
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:02:48 +0000
Received: from [193.109.254.147:29080] by server-8.bemta-14.messagelabs.com id
	D4/30-18529-80693F25; Thu, 06 Feb 2014 14:02:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1391695366!2495335!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2691 invoked from network); 6 Feb 2014 14:02:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:02:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98583346"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 14:02:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:02:45 -0500
Message-ID: <1391695364.25128.20.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:02:44 +0000
In-Reply-To: <1391444091-22796-18-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-18-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com,
	Ian Jackson <ijackson@chiark.greenend.org.uk>
Subject: Re: [Xen-devel] [PATCH 17/18] libxl: timeouts: Break out time_occurs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> From: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> Bring together the two places where etime->func() is called into a new
> function time_occurs.  For one call site this is pure code motion.
> For the other the only semantic change is the introduction of a new
> debugging message.
> 
> Signed-off-by: Ian Jackson <ijackson@chiark.greenend.org.uk>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:05:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPZc-0005pI-RV; Thu, 06 Feb 2014 14:05:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPZb-0005p4-6i
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:04:59 +0000
Received: from [85.158.143.35:24772] by server-1.bemta-4.messagelabs.com id
	15/89-31661-A8693F25; Thu, 06 Feb 2014 14:04:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1391695496!3663362!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15908 invoked from network); 6 Feb 2014 14:04:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:04:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100462552"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:04:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:04:56 -0500
Message-ID: <1391695494.25128.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:04:54 +0000
In-Reply-To: <1391444091-22796-19-git-send-email-ian.jackson@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-19-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com,
	Ian Jackson <ijackson@chiark.greenend.org.uk>
Subject: Re: [Xen-devel] [PATCH 18/18] libxl: timeouts: Record
 deregistration when one occurs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> From: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> When a timeout has occurred, it is deregistered.  However, we failed
> to record this fact by updating etime->func.  As a result,
> libxl__ev_time_isregistered would say `true' for a timeout which has
> already happened.
> 
> The results are that we might try to have the timeout occur again
> (causing problems for the call site), and/or corrupt the timeout list.
> 
> This fixes the timedereg event system unit test.
> 
> Signed-off-by: Ian Jackson <ijackson@chiark.greenend.org.uk>
> Cc: Jim Fehlig <jfehlig@suse.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

Although if you were minded to note explicitly in the changelog that it
is necessary to nuke ->func before running the callback, so that it
doesn't try to fire it again in parallel while it is running, then that
would be good.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:08:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:08:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPcJ-00061e-Sr; Thu, 06 Feb 2014 14:07:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPcI-00061Y-HH
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:07:46 +0000
Received: from [193.109.254.147:16334] by server-13.bemta-14.messagelabs.com
	id 05/3D-01226-13793F25; Thu, 06 Feb 2014 14:07:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391695659!2508723!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17285 invoked from network); 6 Feb 2014 14:07:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:07:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100463364"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:07:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:07:39 -0500
Message-ID: <1391695658.25128.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:07:38 +0000
In-Reply-To: <21235.33180.577781.42813@mariner.uk.xensource.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F24642.5000300@eu.citrix.com>
	<21234.21167.684304.970488@mariner.uk.xensource.com>
	<52F36977.6030106@eu.citrix.com>
	<21235.33180.577781.42813@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
	libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 12:35 +0000, Ian Jackson wrote:
> George Dunlap writes ("Re: [PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
> > I think libvirt support for libxl is really important functionality
> > from a strategic perspective: solid support should make it much easier
> > to integrate with other projects such as OpenStack and CloudStack, as
> > well as (in theory) other tools built on top of libvirt.
> 
> Right.
> 
> > So I'm inclined to consider this a blocker*; I think we should accept it 
> > and delay the release until we feel comfortable that it has been 
> > sufficiently tested.
> 
> Thanks.
> 
> > Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> 
> I think the right answer then would be to commit it as soon as it has
> been acked.  There are four patches at the end that could do with an
> ack from Ian C.

I think I've now acked everything which was lacking one. Apart from a
nitpick on the commit message of the final one (which you may choose to
ignore) I think this is good to go.

I'm assuming you will shovel this one in yourself.

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:19:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPnD-0006YP-NN; Thu, 06 Feb 2014 14:19:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBPn8-0006Xt-U5; Thu, 06 Feb 2014 14:18:59 +0000
Received: from [85.158.139.211:42461] by server-17.bemta-5.messagelabs.com id
	EA/05-31975-1D993F25; Thu, 06 Feb 2014 14:18:57 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391696336!2137646!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4639 invoked from network); 6 Feb 2014 14:18:57 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-9.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 14:18:57 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBPmz-0000zk-1z; Thu, 06 Feb 2014 14:18:49 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBPmy-0004Ab-3K; Thu, 06 Feb 2014 14:18:48 +0000
Date: Thu, 06 Feb 2014 14:18:48 +0000
Message-Id: <E1WBPmy-0004Ab-3K@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 84 - integer overflow in several
 XSM/Flask hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                     Xen Security Advisory XSA-84
                              version 2

           integer overflow in several XSM/Flask hypercalls

UPDATES IN VERSION 2
====================

Public release.

The patch for 4.1 was extended to cover a few further similar issues.

ISSUE DESCRIPTION
=================

The FLASK_{GET,SET}BOOL, FLASK_USER and FLASK_CONTEXT_TO_SID
suboperations of the flask hypercall are vulnerable to an integer
overflow on the input size. The hypercalls attempt to allocate a
buffer which is 1 byte larger than this size; the addition can
overflow, leading to the allocation, and subsequent access, of a
zero-byte buffer.
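
The arithmetic failure mode can be sketched in a few lines of C. The helper
names below are hypothetical illustrations, not Xen's actual code; the fix
pattern (bounding the guest-supplied size before doing arithmetic on it)
mirrors what the attached patches do with PAGE_SIZE and bool_maxstr.

```c
#include <stdint.h>

/* A guest-supplied 32-bit size incremented before allocation: when
 * guest_size == UINT32_MAX the result wraps to 0, so the subsequent
 * allocation is zero bytes long. */
static uint32_t alloc_len_unchecked(uint32_t guest_size)
{
    return guest_size + 1;             /* wraps to 0 at UINT32_MAX */
}

/* The fix pattern: range-check the size before the arithmetic. */
static int alloc_len_checked(uint32_t guest_size, uint32_t max_size,
                             uint32_t *out_len)
{
    if (guest_size > max_size)
        return -1;                     /* reject oversized requests */
    *out_len = guest_size + 1;         /* cannot wrap: size <= max_size */
    return 0;
}
```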

Xen 3.3 through 4.1, while not affected by the above overflow, have a
different overflow issue on FLASK_{GET,SET}BOOL and expose unreasonably
large memory allocations to arbitrary guests.

Xen 3.2 (and presumably earlier) exhibit both problems, with the
overflow issue being present for more than just the suboperations
listed above.

The FLASK_GETBOOL op is available to all domains.

The FLASK_SETBOOL op is only available to domains which are granted
access via the Flask policy.  However, the permissions check is
performed only after running the vulnerable code, so the vulnerability
via this subop is exposed to all domains.
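
The ordering problem can be sketched as follows. These are hypothetical
stand-ins (check_permission() stands in for domain_has_security(); the subop
bodies are simplified), not the actual Xen code:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for the Flask policy check. */
static int caller_is_privileged;
static int check_permission(void)
{
    return caller_is_privileged ? 0 : -1 /* -EPERM */;
}

/* Vulnerable ordering: the guest-controlled allocation (and the
 * overflow it enables) runs before the permission check, so every
 * domain can reach the bad code. */
static int subop_vulnerable(uint32_t size)
{
    char *buf = malloc((uint32_t)(size + 1));  /* may wrap to 0 */
    int rc = check_permission();               /* checked too late */
    free(buf);
    return rc;
}

/* Fixed ordering: unprivileged callers are rejected before any
 * size-dependent work happens, and the size is bounded as well. */
static int subop_fixed(uint32_t size, uint32_t max_size)
{
    int rc = check_permission();
    if (rc)
        return rc;
    if (size > max_size)
        return -2 /* -EINVAL */;
    char *buf = malloc((size_t)size + 1);
    if (!buf)
        return -3 /* -ENOMEM */;
    free(buf);
    return 0;
}
```

Pulling the permission check ahead of the allocation is the same reordering
the 4.1 patch applies to FLASK_LOAD.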

The FLASK_USER and FLASK_CONTEXT_TO_SID ops are only available to
domains which are granted access via the Flask policy.

IMPACT
======

Attempting to access the result of a zero-byte allocation results in
a processor fault, leading to a denial of service.

VULNERABLE SYSTEMS
==================

All Xen versions back to at least 3.2 are vulnerable to this issue when
built with XSM/Flask support. XSM support is disabled by default and is
enabled by building with XSM_ENABLE=y.

We have not checked earlier versions of Xen, but it is likely that
they are vulnerable to this or related vulnerabilities.

All Xen versions built with XSM_ENABLE=y are vulnerable.

MITIGATION
==========

There is no useful mitigation available in installations where XSM
support is actually in use.

In other systems, compiling it out (with XSM_ENABLE=n) will avoid the
vulnerability.

CREDITS
=======

This issue was discovered by Matthew Daley.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa84-unstable-4.3.patch        xen-unstable,Xen 4.3.x
xsa84-4.2.patch                 Xen 4.2.x
xsa84-4.1.patch                 Xen 4.1.x


$ sha256sum xsa84*.patch
e33dd94499959363ad01bebefda9733683c49fd42a9641cf2d7edcd87f853d55  xsa84-4.1.patch
433f3c8a202482c51a48dc0e9e47ac8751d1c0d0759b7bcd22804e1856279a89  xsa84-4.2.patch
64ae433eb606c5446184c08e6fceb9f660ed9a9c28ec112c8cc529251b3b49fb  xsa84-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS85mEAAoJEIP+FMlX6CvZpLkH/1+K6cyCORgAmm1z4zzq4lwg
2XNHen88xZ/NAzZN/ETiGrvtafpGe2yBUAQlJWrYoKGNimBKVh4wlVUmymm/GLRp
Fcg+eck6q5BGF1L4ojMrWkZy1XqEOHrdzBk7nYxsJ/LN6lKKupvtPG67x65qBMkP
z/jEq5vP37J9mWtaZjBCn9wpfGrrUnoOi+MKw/5Wmr44eDm/V5+tJmZiAqxxvB9H
fFs2CI7alIvX4j848dG17juYGemlnVqOMHS65+IchDShAcde9ho6EoQMpDISFK+Q
HSCY5HfSPn4XmpqWHKlONL3sQAMj6WqZvok3WxlU0lIq9PPVrvdQDrbP4GdJKz4=
=dK4H
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa84-4.1.patch"
Content-Disposition: attachment; filename="xsa84-4.1.patch"
Content-Transfer-Encoding: base64

UmVmZXJlbmNlczogYm5jIzg2MDE2MyBYU0EtODQKCmZsYXNrOiByZXN0cmlj
dCBhbGxvY2F0aW9ucyBkb25lIGJ5IGh5cGVyY2FsbCBpbnRlcmZhY2UKCk90
aGVyIHRoYW4gaW4gNC4yIGFuZCBuZXdlciwgd2UncmUgbm90IGhhdmluZyBh
biBvdmVyZmxvdyBpc3N1ZSBoZXJlLApidXQgdW5jb250cm9sbGVkIGV4cG9z
dXJlIG9mIHRoZSBvcGVyYXRpb25zIG9wZW5zIHRoZSBob3N0IHRvIGJlIGRy
aXZlbgpvdXQgb2YgbWVtb3J5IGJ5IGFuIGFyYml0cmFyeSBndWVzdC4gU2lu
Y2UgYWxsIG9wZXJhdGlvbnMgb3RoZXIgdGhhbgpGTEFTS19MT0FEIHNpbXBs
eSBkZWFsIHdpdGggQVNDSUkgc3RyaW5ncywgbGltaXRpbmcgdGhlIGFsbG9j
YXRpb25zCihhbmQgaW5jb21pbmcgYnVmZmVyIHNpemVzKSB0byBhIHBhZ2Ug
d29ydGggb2YgbWVtb3J5IHNlZW1zIGxpa2UgdGhlCmJlc3QgdGhpbmcgd2Ug
Y2FuIGRvLgoKQ29uc2VxdWVudGx5LCBpbiBvcmRlciB0byBub3QgZXhwb3Nl
IHRoZSBsYXJnZXIgYWxsb2NhdGlvbiB0byBhcmJpdHJhcnkKZ3Vlc3RzLCB0
aGUgcGVybWlzc2lvbiBjaGVjayBmb3IgRkxBU0tfTE9BRCBuZWVkcyB0byBi
ZSBwdWxsZWQgYWhlYWQgb2YKdGhlIGFsbG9jYXRpb24gKGFuZCBpdCdzIHBl
cmhhcHMgd29ydGggbm90aW5nIHRoYXQgLSBhZmFpY3QgLSBpdCB3YXMKcG9p
bnRsZXNzbHkgZG9uZSB3aXRoIHRoZSBzZWxfc2VtIHNwaW4gbG9jayBoZWxk
KS4KCk5vdGUgdGhhdCB0aGlzIGJyZWFrcyBGTEFTS19BVkNfQ0FDSEVTVEFU
UyBvbiBzeXN0ZW1zIHdpdGggc3VmZmljaWVudGx5Cm1hbnkgQ1BVcyAoYXMg
cmVxdWlyaW5nIGEgYnVmZmVyIGJpZ2dlciB0aGFuIFBBR0VfU0laRSB0aGVy
ZSkuIE5vCmF0dGVtcHQgaXMgbWFkZSB0byBhZGRyZXNzIHRoaXMgaGVyZSwg
YXMgaXQgd291bGQgbmVlZGxlc3NseSBjb21wbGljYXRlCnRoaXMgZml4IHdp
dGggcmF0aGVyIGxpdHRsZSBnYWluLgoKVGhpcyBpcyBYU0EtODQuCgpSZXBv
cnRlZC1ieTogTWF0dGhldyBEYWxleSA8bWF0dGRAYnVnZnV6ei5jb20+ClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
ClRoZSBpbmRleCBvZiBib29sZWFuIHZhcmlhYmxlcyBpbiBGTEFTS197R0VU
LFNFVH1CT09MIHdhcyBub3QgYWx3YXlzCmNoZWNrZWQgYWdhaW5zdCB0aGUg
Ym91bmRzIG9mIHRoZSBhcnJheS4KClJlcG9ydGVkLWJ5OiBKb2huIE1jRGVy
bW90dCA8am9obi5tY2Rlcm1vdHRAbnJsLm5hdnkubWlsPgpTaWduZWQtb2Zm
LWJ5OiBEYW5pZWwgRGUgR3JhYWYgPGRnZGVncmFAdHljaG8ubnNhLmdvdj4K
Ci0tLSBhL3hlbi94c20vZmxhc2svZmxhc2tfb3AuYworKysgYi94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKQEAgLTU3Myw3ICs1NzMsNyBAQCBzdGF0aWMg
aW50IGZsYXNrX3NlY3VyaXR5X3NldGF2Y190aHJlc2hvCiBzdGF0aWMgaW50
IGZsYXNrX3NlY3VyaXR5X3NldF9ib29sKGNoYXIgKmJ1ZiwgdWludDMyX3Qg
Y291bnQpCiB7CiAgICAgaW50IGxlbmd0aCA9IC1FRkFVTFQ7Ci0gICAgaW50
IGksIG5ld192YWx1ZTsKKyAgICB1bnNpZ25lZCBpbnQgaSwgbmV3X3ZhbHVl
OwogCiAgICAgc3Bpbl9sb2NrKCZzZWxfc2VtKTsKIApAQCAtNTg1LDYgKzU4
NSw5IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfc2V0X2Jvb2woY2hh
ciAKICAgICBpZiAoIHNzY2FuZihidWYsICIlZCAlZCIsICZpLCAmbmV3X3Zh
bHVlKSAhPSAyICkKICAgICAgICAgZ290byBvdXQ7CiAKKyAgICBpZiAoIGkg
Pj0gYm9vbF9udW0gKQorICAgICAgICBnb3RvIG91dDsKKwogICAgIGlmICgg
bmV3X3ZhbHVlICkKICAgICB7CiAgICAgICAgIG5ld192YWx1ZSA9IDE7CkBA
IC03MzQsMTAgKzczNyw2IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlf
bG9hZChjaGFyICpidWYKIAogICAgIHNwaW5fbG9jaygmc2VsX3NlbSk7CiAK
LSAgICBsZW5ndGggPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRv
bWFpbiwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKLSAgICBpZiAoIGxlbmd0
aCApCi0gICAgICAgIGdvdG8gb3V0OwotCiAgICAgbGVuZ3RoID0gc2VjdXJp
dHlfbG9hZF9wb2xpY3koYnVmLCBjb3VudCk7CiAgICAgaWYgKCBsZW5ndGgg
KQogICAgICAgICBnb3RvIG91dDsKQEAgLTg1Myw3ICs4NTIsMTUgQEAgbG9u
ZyBkb19mbGFza19vcChYRU5fR1VFU1RfSEFORExFKHhzbV9vcAogICAgIGlm
ICggb3AtPmNtZCA+IEZMQVNLX0xBU1QpCiAgICAgICAgIHJldHVybiAtRUlO
VkFMOwogCi0gICAgaWYgKCBvcC0+c2l6ZSA+IE1BWF9QT0xJQ1lfU0laRSAp
CisgICAgaWYgKCBvcC0+Y21kID09IEZMQVNLX0xPQUQgKQorICAgIHsKKyAg
ICAgICAgcmMgPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRvbWFp
biwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKKyAgICAgICAgaWYgKCByYyAp
CisgICAgICAgICAgICByZXR1cm4gcmM7CisgICAgICAgIGlmICggb3AtPnNp
emUgPiBNQVhfUE9MSUNZX1NJWkUgKQorICAgICAgICAgICAgcmV0dXJuIC1F
SU5WQUw7CisgICAgfQorICAgIGVsc2UgaWYgKCBvcC0+c2l6ZSA+PSBQQUdF
X1NJWkUgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKIAogICAgIGlmICgg
KG9wLT5idWYgPT0gTlVMTCAmJiBvcC0+c2l6ZSAhPSAwKSB8fCAKLS0tIGEv
eGVuL3hzbS9mbGFzay9zcy9zZXJ2aWNlcy5jCisrKyBiL3hlbi94c20vZmxh
c2svc3Mvc2VydmljZXMuYwpAQCAtMTk5MSw3ICsxOTkxLDcgQEAgaW50IHNl
Y3VyaXR5X2dldF9ib29sX3ZhbHVlKGludCBib29sKQogICAgIFBPTElDWV9S
RExPQ0s7CiAKICAgICBsZW4gPSBwb2xpY3lkYi5wX2Jvb2xzLm5wcmltOwot
ICAgIGlmICggYm9vbCA+PSBsZW4gKQorICAgIGlmICggYm9vbCA+PSBsZW4g
fHwgYm9vbCA8IDAgKQogICAgIHsKICAgICAgICAgcmMgPSAtRUZBVUxUOwog
ICAgICAgICBnb3RvIG91dDsK

--=separator
Content-Type: application/octet-stream; name="xsa84-4.2.patch"
Content-Disposition: attachment; filename="xsa84-4.2.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qgc2l6ZSkK
K3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VFU1RfSEFO
RExFKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXplX3QgbWF4X3NpemUp
CiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRlcyhzaXplICsgMSk7
CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXplID4gbWF4X3NpemUg
KQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAgIHRtcCA9IHhtYWxs
b2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlmICggIXRtcCApCiAg
ICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3ICsxMDYsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3RydWN0IHhlCiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVzZXIsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+dS51c2Vy
LCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3ICsyMTcsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQoc3RydWN0CiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZidWYsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+Y29udGV4
dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3ICszMTAsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVfYm9vbChzCiAgICAg
aWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAgICByZXR1cm4gMDsK
IAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPm5hbWUsICZu
YW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmlu
ZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJvb2xfbWF4c3RyKTsK
ICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2OwogCkBAIC0zMzQs
NyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1cml0eV9zZXRfYm9v
bChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAgICBpbnQgKnZhbHVl
czsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9ib29scygmbnVtLCBO
VUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAgICAgICAgIGlmICgg
cnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsKIApAQCAtNDQwLDcg
KzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfbWFrZV9ib29s
cyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRpbmdfdmFsdWVzKTsK
ICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9vbHMoJm51bSwgTlVM
TCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZu
dW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7CiAgICAgaWYgKCBy
ZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0tLSBhL3hlbi94c20v
Zmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBiL3hlbi94c20vZmxh
c2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3ICsxMyw5IEBACiAj
aWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2RlZmluZSBfRkxBU0tf
Q09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzKTsKKyNpbmNsdWRl
IDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzLCBzaXplX3QgKm1h
eHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMoaW50IGxlbiwgaW50
ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2svc3Mvc2VydmljZXMu
YworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2VzLmMKQEAgLTE5MDAs
NyArMTkwMCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jvb2woY29uc3QgY2hh
ciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWludCBzZWN1cml0eV9n
ZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVl
cykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioq
bmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhzdHIpCiB7CiAgICAg
aW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTkwOCw2ICsxOTA4LDggQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogICAg
IGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBOVUxMOwogICAgICp2
YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkKKyAgICAgICAgKm1h
eHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIucF9ib29scy5ucHJp
bTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE5MjksMTYgKzE5MzEsMTcgQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogCiAg
ICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQogICAgIHsKLSAgICAg
ICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXplX3QgbmFtZV9sZW4g
PSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKTsKKwog
ICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5ib29sX3ZhbF90b19z
dHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5hbWVzICkgewotICAg
ICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXSA9
IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVfbGVuKTsKKyAgICAg
ICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJheShjaGFyLCBuYW1l
X2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpuYW1lcylbaV0gKQog
ICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAgICAgICAgc3RybGNw
eSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ld
LCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXVtuYW1lX2xl
biAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHkoKCpuYW1lcylbaV0s
IHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwgbmFtZV9sZW4gKyAx
KTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0ciAmJiBuYW1lX2xl
biA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0ciA9IG5hbWVfbGVu
OwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0yMDU2LDcgKzIwNTks
NyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZlX2Jvb2xzKHN0cnVj
CiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9vbGRhdHVtOwogICAg
IHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJjID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFsdWVzKTsKKyAgICBy
YyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAmYm5hbWVzLCAmYnZh
bHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAgICAgIGdvdG8gb3V0
OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBpKysgKQo=

--=separator
Content-Type: application/octet-stream; name="xsa84-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa84-unstable-4.3.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRV9QQVJBTShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qg
c2l6ZSkKK3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VF
U1RfSEFORExFX1BBUkFNKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXpl
X3QgbWF4X3NpemUpCiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRl
cyhzaXplICsgMSk7CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXpl
ID4gbWF4X3NpemUgKQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAg
IHRtcCA9IHhtYWxsb2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlm
ICggIXRtcCApCiAgICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3
ICsxMDYsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3Ry
dWN0IHhlCiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVz
ZXIsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+dS51c2VyLCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3
ICsyMTcsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQo
c3RydWN0CiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZi
dWYsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+Y29udGV4dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3
ICszMTAsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVf
Ym9vbChzCiAgICAgaWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAg
ICByZXR1cm4gMDsKIAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhh
cmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tf
Y29weWluX3N0cmluZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJv
b2xfbWF4c3RyKTsKICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2
OwogCkBAIC0zMzQsNyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1
cml0eV9zZXRfYm9vbChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAg
ICBpbnQgKnZhbHVlczsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9i
b29scygmbnVtLCBOVUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1
cml0eV9nZXRfYm9vbHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAg
ICAgICAgIGlmICggcnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsK
IApAQCAtNDQwLDcgKzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJp
dHlfbWFrZV9ib29scyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRp
bmdfdmFsdWVzKTsKICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZudW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7
CiAgICAgaWYgKCByZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0t
LSBhL3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBi
L3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3
ICsxMyw5IEBACiAjaWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2Rl
ZmluZSBfRkxBU0tfQ09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
KTsKKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
LCBzaXplX3QgKm1heHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMo
aW50IGxlbiwgaW50ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2sv
c3Mvc2VydmljZXMuYworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2Vz
LmMKQEAgLTE4NTAsNyArMTg1MCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jv
b2woY29uc3QgY2hhciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWlu
dCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMs
IGludCAqKnZhbHVlcykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICps
ZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhz
dHIpCiB7CiAgICAgaW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTg1OCw2
ICsxODU4LDggQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogICAgIGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBO
VUxMOwogICAgICp2YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkK
KyAgICAgICAgKm1heHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIu
cF9ib29scy5ucHJpbTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE4NzksMTYg
KzE4ODEsMTcgQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogCiAgICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQog
ICAgIHsKLSAgICAgICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXpl
X3QgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19u
YW1lW2ldKTsKKwogICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5i
b29sX3ZhbF90b19zdHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5h
bWVzICkgewotICAgICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5
ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAo
Km5hbWVzKVtpXSA9IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVf
bGVuKTsKKyAgICAgICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJh
eShjaGFyLCBuYW1lX2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpu
YW1lcylbaV0gKQogICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAg
ICAgICAgc3RybGNweSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldLCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVz
KVtpXVtuYW1lX2xlbiAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHko
KCpuYW1lcylbaV0sIHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwg
bmFtZV9sZW4gKyAxKTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0
ciAmJiBuYW1lX2xlbiA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0
ciA9IG5hbWVfbGVuOwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0y
MDA2LDcgKzIwMDksNyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZl
X2Jvb2xzKHN0cnVjCiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9v
bGRhdHVtOwogICAgIHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJj
ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFs
dWVzKTsKKyAgICByYyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAm
Ym5hbWVzLCAmYnZhbHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAg
ICAgIGdvdG8gb3V0OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBp
KysgKQo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Feb 06 14:19:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPnD-0006YP-NN; Thu, 06 Feb 2014 14:19:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBPn8-0006Xt-U5; Thu, 06 Feb 2014 14:18:59 +0000
Received: from [85.158.139.211:42461] by server-17.bemta-5.messagelabs.com id
	EA/05-31975-1D993F25; Thu, 06 Feb 2014 14:18:57 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391696336!2137646!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4639 invoked from network); 6 Feb 2014 14:18:57 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-9.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	6 Feb 2014 14:18:57 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBPmz-0000zk-1z; Thu, 06 Feb 2014 14:18:49 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WBPmy-0004Ab-3K; Thu, 06 Feb 2014 14:18:48 +0000
Date: Thu, 06 Feb 2014 14:18:48 +0000
Message-Id: <E1WBPmy-0004Ab-3K@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 84 - integer overflow in several
 XSM/Flask hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                     Xen Security Advisory XSA-84
                              version 2

           integer overflow in several XSM/Flask hypercalls

UPDATES IN VERSION 2
====================

Public release.

The patch for 4.1 was extended to cover a few further similar issues.

ISSUE DESCRIPTION
=================

The FLASK_{GET,SET}BOOL, FLASK_USER and FLASK_CONTEXT_TO_SID
suboperations of the flask hypercall are vulnerable to an integer
overflow on the input size. The hypercalls attempt to allocate a
buffer which is 1 byte larger than this size; the addition can
overflow, leading to the allocation, and subsequent access, of a
zero-byte buffer.

Xen 3.3 through 4.1, while not affected by the above overflow, have a
different overflow issue on FLASK_{GET,SET}BOOL and expose unreasonably
large memory allocations to arbitrary guests.

Xen 3.2 (and presumably earlier) exhibit both problems, with the
overflow issue being present for more than just the suboperations
listed above.

The FLASK_GETBOOL op is available to all domains.

The FLASK_SETBOOL op is only available to domains which are granted
access via the Flask policy.  However, the permissions check is
performed only after running the vulnerable code, so the vulnerability
via this subop is exposed to all domains.

The FLASK_USER and FLASK_CONTEXT_TO_SID ops are only available to
domains which are granted access via the Flask policy.

IMPACT
======

Attempting to access the result of a zero byte allocation results in
a processor fault leading to a denial of service.

VULNERABLE SYSTEMS
==================

All Xen versions back to at least 3.2 are vulnerable to this issue when
built with XSM/Flask support. XSM support is disabled by default and is
enabled by building with XSM_ENABLE=y.

We have not checked earlier versions of Xen, but it is likely that
they are vulnerable to this or related vulnerabilities.

All Xen versions built with XSM_ENABLE=y are vulnerable.

MITIGATION
==========

There is no useful mitigation available in installations where XSM
support is actually in use.

In other systems, compiling it out (with XSM_ENABLE=n) will avoid the
vulnerability.

CREDITS
=======

This issue was discovered by Matthew Daley.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa84-unstable-4.3.patch        xen-unstable,Xen 4.3.x
xsa84-4.2.patch                 Xen 4.2.x
xsa84-4.1.patch                 Xen 4.1.x


$ sha256sum xsa84*.patch
e33dd94499959363ad01bebefda9733683c49fd42a9641cf2d7edcd87f853d55  xsa84-4.1.patch
433f3c8a202482c51a48dc0e9e47ac8751d1c0d0759b7bcd22804e1856279a89  xsa84-4.2.patch
64ae433eb606c5446184c08e6fceb9f660ed9a9c28ec112c8cc529251b3b49fb  xsa84-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS85mEAAoJEIP+FMlX6CvZpLkH/1+K6cyCORgAmm1z4zzq4lwg
2XNHen88xZ/NAzZN/ETiGrvtafpGe2yBUAQlJWrYoKGNimBKVh4wlVUmymm/GLRp
Fcg+eck6q5BGF1L4ojMrWkZy1XqEOHrdzBk7nYxsJ/LN6lKKupvtPG67x65qBMkP
z/jEq5vP37J9mWtaZjBCn9wpfGrrUnoOi+MKw/5Wmr44eDm/V5+tJmZiAqxxvB9H
fFs2CI7alIvX4j848dG17juYGemlnVqOMHS65+IchDShAcde9ho6EoQMpDISFK+Q
HSCY5HfSPn4XmpqWHKlONL3sQAMj6WqZvok3WxlU0lIq9PPVrvdQDrbP4GdJKz4=
=dK4H
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa84-4.1.patch"
Content-Disposition: attachment; filename="xsa84-4.1.patch"
Content-Transfer-Encoding: base64

UmVmZXJlbmNlczogYm5jIzg2MDE2MyBYU0EtODQKCmZsYXNrOiByZXN0cmlj
dCBhbGxvY2F0aW9ucyBkb25lIGJ5IGh5cGVyY2FsbCBpbnRlcmZhY2UKCk90
aGVyIHRoYW4gaW4gNC4yIGFuZCBuZXdlciwgd2UncmUgbm90IGhhdmluZyBh
biBvdmVyZmxvdyBpc3N1ZSBoZXJlLApidXQgdW5jb250cm9sbGVkIGV4cG9z
dXJlIG9mIHRoZSBvcGVyYXRpb25zIG9wZW5zIHRoZSBob3N0IHRvIGJlIGRy
aXZlbgpvdXQgb2YgbWVtb3J5IGJ5IGFuIGFyYml0cmFyeSBndWVzdC4gU2lu
Y2UgYWxsIG9wZXJhdGlvbnMgb3RoZXIgdGhhbgpGTEFTS19MT0FEIHNpbXBs
eSBkZWFsIHdpdGggQVNDSUkgc3RyaW5ncywgbGltaXRpbmcgdGhlIGFsbG9j
YXRpb25zCihhbmQgaW5jb21pbmcgYnVmZmVyIHNpemVzKSB0byBhIHBhZ2Ug
d29ydGggb2YgbWVtb3J5IHNlZW1zIGxpa2UgdGhlCmJlc3QgdGhpbmcgd2Ug
Y2FuIGRvLgoKQ29uc2VxdWVudGx5LCBpbiBvcmRlciB0byBub3QgZXhwb3Nl
IHRoZSBsYXJnZXIgYWxsb2NhdGlvbiB0byBhcmJpdHJhcnkKZ3Vlc3RzLCB0
aGUgcGVybWlzc2lvbiBjaGVjayBmb3IgRkxBU0tfTE9BRCBuZWVkcyB0byBi
ZSBwdWxsZWQgYWhlYWQgb2YKdGhlIGFsbG9jYXRpb24gKGFuZCBpdCdzIHBl
cmhhcHMgd29ydGggbm90aW5nIHRoYXQgLSBhZmFpY3QgLSBpdCB3YXMKcG9p
bnRsZXNzbHkgZG9uZSB3aXRoIHRoZSBzZWxfc2VtIHNwaW4gbG9jayBoZWxk
KS4KCk5vdGUgdGhhdCB0aGlzIGJyZWFrcyBGTEFTS19BVkNfQ0FDSEVTVEFU
UyBvbiBzeXN0ZW1zIHdpdGggc3VmZmljaWVudGx5Cm1hbnkgQ1BVcyAoYXMg
cmVxdWlyaW5nIGEgYnVmZmVyIGJpZ2dlciB0aGFuIFBBR0VfU0laRSB0aGVy
ZSkuIE5vCmF0dGVtcHQgaXMgbWFkZSB0byBhZGRyZXNzIHRoaXMgaGVyZSwg
YXMgaXQgd291bGQgbmVlZGxlc3NseSBjb21wbGljYXRlCnRoaXMgZml4IHdp
dGggcmF0aGVyIGxpdHRsZSBnYWluLgoKVGhpcyBpcyBYU0EtODQuCgpSZXBv
cnRlZC1ieTogTWF0dGhldyBEYWxleSA8bWF0dGRAYnVnZnV6ei5jb20+ClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
ClRoZSBpbmRleCBvZiBib29sZWFuIHZhcmlhYmxlcyBpbiBGTEFTS197R0VU
LFNFVH1CT09MIHdhcyBub3QgYWx3YXlzCmNoZWNrZWQgYWdhaW5zdCB0aGUg
Ym91bmRzIG9mIHRoZSBhcnJheS4KClJlcG9ydGVkLWJ5OiBKb2huIE1jRGVy
bW90dCA8am9obi5tY2Rlcm1vdHRAbnJsLm5hdnkubWlsPgpTaWduZWQtb2Zm
LWJ5OiBEYW5pZWwgRGUgR3JhYWYgPGRnZGVncmFAdHljaG8ubnNhLmdvdj4K
Ci0tLSBhL3hlbi94c20vZmxhc2svZmxhc2tfb3AuYworKysgYi94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKQEAgLTU3Myw3ICs1NzMsNyBAQCBzdGF0aWMg
aW50IGZsYXNrX3NlY3VyaXR5X3NldGF2Y190aHJlc2hvCiBzdGF0aWMgaW50
IGZsYXNrX3NlY3VyaXR5X3NldF9ib29sKGNoYXIgKmJ1ZiwgdWludDMyX3Qg
Y291bnQpCiB7CiAgICAgaW50IGxlbmd0aCA9IC1FRkFVTFQ7Ci0gICAgaW50
IGksIG5ld192YWx1ZTsKKyAgICB1bnNpZ25lZCBpbnQgaSwgbmV3X3ZhbHVl
OwogCiAgICAgc3Bpbl9sb2NrKCZzZWxfc2VtKTsKIApAQCAtNTg1LDYgKzU4
NSw5IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfc2V0X2Jvb2woY2hh
ciAKICAgICBpZiAoIHNzY2FuZihidWYsICIlZCAlZCIsICZpLCAmbmV3X3Zh
bHVlKSAhPSAyICkKICAgICAgICAgZ290byBvdXQ7CiAKKyAgICBpZiAoIGkg
Pj0gYm9vbF9udW0gKQorICAgICAgICBnb3RvIG91dDsKKwogICAgIGlmICgg
bmV3X3ZhbHVlICkKICAgICB7CiAgICAgICAgIG5ld192YWx1ZSA9IDE7CkBA
IC03MzQsMTAgKzczNyw2IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlf
bG9hZChjaGFyICpidWYKIAogICAgIHNwaW5fbG9jaygmc2VsX3NlbSk7CiAK
LSAgICBsZW5ndGggPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRv
bWFpbiwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKLSAgICBpZiAoIGxlbmd0
aCApCi0gICAgICAgIGdvdG8gb3V0OwotCiAgICAgbGVuZ3RoID0gc2VjdXJp
dHlfbG9hZF9wb2xpY3koYnVmLCBjb3VudCk7CiAgICAgaWYgKCBsZW5ndGgg
KQogICAgICAgICBnb3RvIG91dDsKQEAgLTg1Myw3ICs4NTIsMTUgQEAgbG9u
ZyBkb19mbGFza19vcChYRU5fR1VFU1RfSEFORExFKHhzbV9vcAogICAgIGlm
ICggb3AtPmNtZCA+IEZMQVNLX0xBU1QpCiAgICAgICAgIHJldHVybiAtRUlO
VkFMOwogCi0gICAgaWYgKCBvcC0+c2l6ZSA+IE1BWF9QT0xJQ1lfU0laRSAp
CisgICAgaWYgKCBvcC0+Y21kID09IEZMQVNLX0xPQUQgKQorICAgIHsKKyAg
ICAgICAgcmMgPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRvbWFp
biwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKKyAgICAgICAgaWYgKCByYyAp
CisgICAgICAgICAgICByZXR1cm4gcmM7CisgICAgICAgIGlmICggb3AtPnNp
emUgPiBNQVhfUE9MSUNZX1NJWkUgKQorICAgICAgICAgICAgcmV0dXJuIC1F
SU5WQUw7CisgICAgfQorICAgIGVsc2UgaWYgKCBvcC0+c2l6ZSA+PSBQQUdF
X1NJWkUgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKIAogICAgIGlmICgg
KG9wLT5idWYgPT0gTlVMTCAmJiBvcC0+c2l6ZSAhPSAwKSB8fCAKLS0tIGEv
eGVuL3hzbS9mbGFzay9zcy9zZXJ2aWNlcy5jCisrKyBiL3hlbi94c20vZmxh
c2svc3Mvc2VydmljZXMuYwpAQCAtMTk5MSw3ICsxOTkxLDcgQEAgaW50IHNl
Y3VyaXR5X2dldF9ib29sX3ZhbHVlKGludCBib29sKQogICAgIFBPTElDWV9S
RExPQ0s7CiAKICAgICBsZW4gPSBwb2xpY3lkYi5wX2Jvb2xzLm5wcmltOwot
ICAgIGlmICggYm9vbCA+PSBsZW4gKQorICAgIGlmICggYm9vbCA+PSBsZW4g
fHwgYm9vbCA8IDAgKQogICAgIHsKICAgICAgICAgcmMgPSAtRUZBVUxUOwog
ICAgICAgICBnb3RvIG91dDsK

--=separator
Content-Type: application/octet-stream; name="xsa84-4.2.patch"
Content-Disposition: attachment; filename="xsa84-4.2.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qgc2l6ZSkK
K3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VFU1RfSEFO
RExFKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXplX3QgbWF4X3NpemUp
CiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRlcyhzaXplICsgMSk7
CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXplID4gbWF4X3NpemUg
KQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAgIHRtcCA9IHhtYWxs
b2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlmICggIXRtcCApCiAg
ICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3ICsxMDYsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3RydWN0IHhlCiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVzZXIsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+dS51c2Vy
LCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3ICsyMTcsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQoc3RydWN0CiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZidWYsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+Y29udGV4
dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3ICszMTAsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVfYm9vbChzCiAgICAg
aWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAgICByZXR1cm4gMDsK
IAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPm5hbWUsICZu
YW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmlu
ZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJvb2xfbWF4c3RyKTsK
ICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2OwogCkBAIC0zMzQs
NyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1cml0eV9zZXRfYm9v
bChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAgICBpbnQgKnZhbHVl
czsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9ib29scygmbnVtLCBO
VUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAgICAgICAgIGlmICgg
cnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsKIApAQCAtNDQwLDcg
KzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfbWFrZV9ib29s
cyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRpbmdfdmFsdWVzKTsK
ICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9vbHMoJm51bSwgTlVM
TCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZu
dW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7CiAgICAgaWYgKCBy
ZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0tLSBhL3hlbi94c20v
Zmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBiL3hlbi94c20vZmxh
c2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3ICsxMyw5IEBACiAj
aWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2RlZmluZSBfRkxBU0tf
Q09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzKTsKKyNpbmNsdWRl
IDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzLCBzaXplX3QgKm1h
eHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMoaW50IGxlbiwgaW50
ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2svc3Mvc2VydmljZXMu
YworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2VzLmMKQEAgLTE5MDAs
NyArMTkwMCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jvb2woY29uc3QgY2hh
ciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWludCBzZWN1cml0eV9n
ZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVl
cykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioq
bmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhzdHIpCiB7CiAgICAg
aW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTkwOCw2ICsxOTA4LDggQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogICAg
IGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBOVUxMOwogICAgICp2
YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkKKyAgICAgICAgKm1h
eHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIucF9ib29scy5ucHJp
bTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE5MjksMTYgKzE5MzEsMTcgQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogCiAg
ICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQogICAgIHsKLSAgICAg
ICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXplX3QgbmFtZV9sZW4g
PSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKTsKKwog
ICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5ib29sX3ZhbF90b19z
dHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5hbWVzICkgewotICAg
ICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXSA9
IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVfbGVuKTsKKyAgICAg
ICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJheShjaGFyLCBuYW1l
X2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpuYW1lcylbaV0gKQog
ICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAgICAgICAgc3RybGNw
eSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ld
LCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXVtuYW1lX2xl
biAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHkoKCpuYW1lcylbaV0s
IHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwgbmFtZV9sZW4gKyAx
KTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0ciAmJiBuYW1lX2xl
biA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0ciA9IG5hbWVfbGVu
OwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0yMDU2LDcgKzIwNTks
NyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZlX2Jvb2xzKHN0cnVj
CiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9vbGRhdHVtOwogICAg
IHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJjID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFsdWVzKTsKKyAgICBy
YyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAmYm5hbWVzLCAmYnZh
bHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAgICAgIGdvdG8gb3V0
OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBpKysgKQo=

--=separator
Content-Type: application/octet-stream; name="xsa84-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa84-unstable-4.3.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRV9QQVJBTShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qg
c2l6ZSkKK3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VF
U1RfSEFORExFX1BBUkFNKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXpl
X3QgbWF4X3NpemUpCiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRl
cyhzaXplICsgMSk7CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXpl
ID4gbWF4X3NpemUgKQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAg
IHRtcCA9IHhtYWxsb2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlm
ICggIXRtcCApCiAgICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3
ICsxMDYsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3Ry
dWN0IHhlCiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVz
ZXIsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+dS51c2VyLCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3
ICsyMTcsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQo
c3RydWN0CiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZi
dWYsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+Y29udGV4dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3
ICszMTAsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVf
Ym9vbChzCiAgICAgaWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAg
ICByZXR1cm4gMDsKIAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhh
cmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tf
Y29weWluX3N0cmluZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJv
b2xfbWF4c3RyKTsKICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2
OwogCkBAIC0zMzQsNyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1
cml0eV9zZXRfYm9vbChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAg
ICBpbnQgKnZhbHVlczsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9i
b29scygmbnVtLCBOVUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1
cml0eV9nZXRfYm9vbHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAg
ICAgICAgIGlmICggcnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsK
IApAQCAtNDQwLDcgKzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJp
dHlfbWFrZV9ib29scyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRp
bmdfdmFsdWVzKTsKICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZudW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7
CiAgICAgaWYgKCByZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0t
LSBhL3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBi
L3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3
ICsxMyw5IEBACiAjaWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2Rl
ZmluZSBfRkxBU0tfQ09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
KTsKKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
LCBzaXplX3QgKm1heHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMo
aW50IGxlbiwgaW50ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2sv
c3Mvc2VydmljZXMuYworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2Vz
LmMKQEAgLTE4NTAsNyArMTg1MCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jv
b2woY29uc3QgY2hhciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWlu
dCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMs
IGludCAqKnZhbHVlcykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICps
ZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhz
dHIpCiB7CiAgICAgaW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTg1OCw2
ICsxODU4LDggQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogICAgIGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBO
VUxMOwogICAgICp2YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkK
KyAgICAgICAgKm1heHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIu
cF9ib29scy5ucHJpbTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE4NzksMTYg
KzE4ODEsMTcgQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogCiAgICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQog
ICAgIHsKLSAgICAgICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXpl
X3QgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19u
YW1lW2ldKTsKKwogICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5i
b29sX3ZhbF90b19zdHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5h
bWVzICkgewotICAgICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5
ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAo
Km5hbWVzKVtpXSA9IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVf
bGVuKTsKKyAgICAgICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJh
eShjaGFyLCBuYW1lX2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpu
YW1lcylbaV0gKQogICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAg
ICAgICAgc3RybGNweSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldLCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVz
KVtpXVtuYW1lX2xlbiAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHko
KCpuYW1lcylbaV0sIHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwg
bmFtZV9sZW4gKyAxKTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0
ciAmJiBuYW1lX2xlbiA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0
ciA9IG5hbWVfbGVuOwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0y
MDA2LDcgKzIwMDksNyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZl
X2Jvb2xzKHN0cnVjCiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9v
bGRhdHVtOwogICAgIHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJj
ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFs
dWVzKTsKKyAgICByYyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAm
Ym5hbWVzLCAmYnZhbHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAg
ICAgIGdvdG8gb3V0OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBp
KysgKQo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Thu Feb 06 14:20:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPoO-0006mJ-ST; Thu, 06 Feb 2014 14:20:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WBPoN-0006l7-Gi
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 14:20:15 +0000
Received: from [193.109.254.147:65191] by server-11.bemta-14.messagelabs.com
	id F4/A3-24604-E1A93F25; Thu, 06 Feb 2014 14:20:14 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391696412!2511543!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15713 invoked from network); 6 Feb 2014 14:20:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:20:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; 
	d="asc'?scan'208";a="100466899"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:20:11 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:20:10 -0500
Message-ID: <1391696408.9917.28.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 6 Feb 2014 15:20:08 +0100
In-Reply-To: <52F37D3C0200007800119BDF@nat28.tlf.novell.com>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
	<52F37D3C0200007800119BDF@nat28.tlf.novell.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3028610849892071660=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3028610849892071660==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-rHbotphQ4Q1F14fOhPix"

--=-rHbotphQ4Q1F14fOhPix
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-06 at 11:17 +0000, Jan Beulich wrote:
> >>> On 06.02.14 at 09:58, Justin Weaver <jtweaver@hawaii.edu> wrote:
> > --- a/xen/common/sched_credit2.c
> > +++ b/xen/common/sched_credit2.c
> > @@ -85,8 +85,7 @@
> >   * to a small value, and a fixed credit is added to everyone.
> >   *
> >   * The plan is for all cores that share an L2 will share the same
> > - * runqueue.  At the moment, there is one global runqueue for all
> > - * cores.
> > + * runqueue.
>
> If this is the intention, then ...
>
> [...]
>
> ... this is too simplistic: Whether the L2 is shared by all cores on a
> socket should be determined, not assumed.
>
True. However, what we do right now is try to build one runqueue per
socket, by means of cpu_to_socket(), and fail badly, ending up with
only one *system wide* runqueue. Personally, I think that fixing this,
i.e., keeping cpu_to_socket() but using it correctly, and actually
building '1 runqueue per socket', would be a reasonable step in line
with the original author's intentions. Of course, in this case, I
concur that the comment above needs changing too.

Thoughts?

> Apart from that keeping the CPU0 special case at the top is pointless
> with the cpu0_socket special casing.
>
Indeed. If going this route, Justin, I think you can reorganize the
whole `if (cpu == 0)' (not only the else), and get to a more correct
and readable solution.

> As to coding style: Please fix your comments and get the indentation
> of the if/else sequence above right (i.e. either use "else if" with no
> added indentation, or enclose the inner if/else in figure braces (I'd
> personally prefer the former).
>
Yep, agreed. To be fair, about comments, sched_credit2.c has quite a
mixture of commenting styles in it, and it's really a hard call to
decide which one should be used. Anyway, Justin, if you reorganize the
whole `if () else' block, you are probably better off with a big comment
describing the whole thing before the block itself, for which you can
use the following style:

/*
 * Long comment...
 */

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-rHbotphQ4Q1F14fOhPix
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLzmhgACgkQk4XaBE3IOsTIdQCff4xlRFfw3gOpdTvyOD/RBVPW
cUsAn2S9DSxHkaiAvrfDth4B10yX9Gfe
=kNvQ
-----END PGP SIGNATURE-----

--=-rHbotphQ4Q1F14fOhPix--


--===============3028610849892071660==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3028610849892071660==--


List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3028610849892071660=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3028610849892071660==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-rHbotphQ4Q1F14fOhPix"

--=-rHbotphQ4Q1F14fOhPix
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-06 at 11:17 +0000, Jan Beulich wrote:
> >>> On 06.02.14 at 09:58, Justin Weaver <jtweaver@hawaii.edu> wrote:
> > --- a/xen/common/sched_credit2.c
> > +++ b/xen/common/sched_credit2.c
> > @@ -85,8 +85,7 @@
> >   * to a small value, and a fixed credit is added to everyone.
> >   *
> >   * The plan is for all cores that share an L2 will share the same
> > - * runqueue.  At the moment, there is one global runqueue for all
> > - * cores.
> > + * runqueue.
>
> If this is the intention, then ...
>
> [...]
>
> ... this is too simplistic: Whether the L2 is shared by all cores on a
> socket should be determined, not assumed.
>
True. However, what we do right now is try to build one runqueue per
socket, by means of cpu_to_socket(), and fail badly, ending up with
only one *system-wide* runqueue. Personally, I think that fixing this,
i.e., keeping cpu_to_socket() but using it correctly, and actually
building one runqueue per socket, would be a reasonable step towards
the original author's intentions. Of course, in that case, I concur
that the comment above needs changing too.

Thoughts?
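Purely as an illustration of the per-socket grouping discussed above
(all names and the socket layout below are hypothetical stand-ins, not
the actual credit2 code), the idea could be sketched like this:

```c
#include <assert.h>

/* Hypothetical sketch -- not the real Xen scheduler.  CPUs that map
 * to the same socket id share one runqueue index, instead of every
 * CPU sharing a single global runqueue.  cpu_to_socket() here is a
 * stand-in for the real topology query. */

#define NR_CPUS    8
#define NR_SOCKETS 2

/* Stand-in mapping: CPUs 0-3 on socket 0, CPUs 4-7 on socket 1. */
static int cpu_to_socket(int cpu)
{
    return cpu / (NR_CPUS / NR_SOCKETS);
}

/* -1 means "no runqueue created for this socket yet". */
static int socket_to_rqi[NR_SOCKETS] = { -1, -1 };
static int nr_runqueues = 0;

/* Create (or reuse) the runqueue for a CPU's socket, so that a new
 * runqueue is only allocated the first time a socket is seen. */
static int pick_runqueue(int cpu)
{
    int socket = cpu_to_socket(cpu);

    if (socket_to_rqi[socket] < 0)
        socket_to_rqi[socket] = nr_runqueues++;
    return socket_to_rqi[socket];
}
```

With this scheme CPUs 0 and 3 land on one runqueue, CPUs 4-7 on
another, and exactly two runqueues exist on this two-socket layout.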

> Apart from that keeping the CPU0 special case at the top is pointless
> with the cpu0_socket special casing.
>
Indeed. If going this route, Justin, I think you can reorganize the
whole `if (cpu == 0)' block (not only the else), and get to a more
correct and readable solution.

> As to coding style: Please fix your comments and get the indentation
> of the if/else sequence above right, i.e. either use "else if" with no
> added indentation, or enclose the inner if/else in curly braces (I'd
> personally prefer the former).
>
Yep, agreed. To be fair, about comments, sched_credit2.c has quite a
mixture of commenting styles in it, and it's really a hard call to
decide which one should be used. Anyway, Justin, if you reorganize the
whole `if () else' block, you are probably better off with a big comment
describing the whole thing, before the block itself, for which you can
use the following style:

/*
 * Long comment...
 */

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-rHbotphQ4Q1F14fOhPix
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlLzmhgACgkQk4XaBE3IOsTIdQCff4xlRFfw3gOpdTvyOD/RBVPW
cUsAn2S9DSxHkaiAvrfDth4B10yX9Gfe
=kNvQ
-----END PGP SIGNATURE-----

--=-rHbotphQ4Q1F14fOhPix--


--===============3028610849892071660==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3028610849892071660==--


From xen-devel-bounces@lists.xen.org Thu Feb 06 14:25:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPsx-0007gi-2C; Thu, 06 Feb 2014 14:24:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBPsv-0007gM-1N
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:24:57 +0000
Received: from [85.158.137.68:50241] by server-14.bemta-3.messagelabs.com id
	CD/9E-08196-83B93F25; Thu, 06 Feb 2014 14:24:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391696693!88433!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6712 invoked from network); 6 Feb 2014 14:24:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:24:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100468452"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:24:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 09:24:53 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBPsq-0004R4-Pn;
	Thu, 06 Feb 2014 14:24:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBPsq-00028m-JJ;
	Thu, 06 Feb 2014 14:24:52 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21235.39732.329017.933315@mariner.uk.xensource.com>
Date: Thu, 6 Feb 2014 14:24:52 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391695494.25128.22.camel@kazak.uk.xensource.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-19-git-send-email-ian.jackson@eu.citrix.com>
	<1391695494.25128.22.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 18/18] libxl: timeouts: Record
 deregistration when one occurs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 18/18] libxl: timeouts: Record deregistration when one occurs"):
> On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> > When a timeout has occurred, it is deregistered.  However, we failed
> > to record this fact by updating etime->func.  As a result,
> > libxl__ev_time_isregistered would say `true' for a timeout which has
> > already happened.
> 
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
> 
> Although if you were minded to explicitly note in the changelog that it
> is necessary to nuke ->func before running the callback so that it
> doesn't try and fire it again in parallel while it is running then that
> would be good.

That's not the reason.  This is the reason, which I have just put in
the commit message:

    It is necessary to clear etime->func before the callback, because
    the callback might want to reinstate the timeout, or might free
    the etime (or its containing struct) entirely.
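The clear-before-callback pattern being described could be sketched
roughly as follows (hypothetical names throughout -- this is not
libxl's actual code, only an illustration of the ordering):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: the registration marker (func) is cleared
 * *before* the callback runs, so the callback may safely re-register
 * the timeout or free the structure entirely. */
struct ev_time {
    void (*func)(struct ev_time *ev);  /* non-NULL <=> registered */
};

static int seen_registered = -1;

static int ev_time_isregistered(const struct ev_time *ev)
{
    return ev->func != NULL;
}

static void on_timeout(struct ev_time *ev)
{
    /* By the time the callback runs, the timeout already looks
     * deregistered, so reinstating it (or freeing ev) is safe. */
    seen_registered = ev_time_isregistered(ev);
}

static void fire_timeout(struct ev_time *ev)
{
    void (*func)(struct ev_time *) = ev->func;

    ev->func = NULL;  /* deregister first... */
    func(ev);         /* ...then invoke the callback */
}

/* Fire a registered timeout once; returns its registration state
 * afterwards (0, since nothing re-registered it). */
static int demo(void)
{
    struct ev_time ev = { .func = on_timeout };

    fire_timeout(&ev);
    return ev_time_isregistered(&ev);
}
```

The key point is the local copy of the function pointer: once
`ev->func` is NULLed, the event loop can no longer observe the timeout
as registered, even while the callback is still running.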

I have also fixed my S-o-b which incorrectly used my personal email
address.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:26:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPu4-0007sL-Ju; Thu, 06 Feb 2014 14:26:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBPu3-0007s1-MN
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 14:26:07 +0000
Received: from [85.158.137.68:8254] by server-2.bemta-3.messagelabs.com id
	59/EE-06531-E7B93F25; Thu, 06 Feb 2014 14:26:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391696765!88762!1
X-Originating-IP: [74.125.82.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14383 invoked from network); 6 Feb 2014 14:26:06 -0000
Received: from mail-wg0-f43.google.com (HELO mail-wg0-f43.google.com)
	(74.125.82.43)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:26:06 -0000
Received: by mail-wg0-f43.google.com with SMTP id y10so1338375wgg.22
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 06:26:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=k4uVGcEULD3A91kX6dH348CQM2GJD+Py4s6q0zPiSBU=;
	b=OnYWopbB9pda7NvkXD/547aA8ePQgakOGT1KdVyHiKrsRxYoNmgn9DIIXQTjQX1p/O
	eoiORN0DtIPYOhV6kvJmWNIq1khTbrXeUuG9Seo4+ygTWKxo79c8KUdCqHDLxVl5BXoO
	iaWhKaukkZAZzNbleneVkkzgny9RVbr0Zq0/KM48NBnz79S7q3bAbdEb91SZeFyo9cW7
	zStcshb1rVrwKdbFo5HeQocsLe63OrjgixI2t6nCl8eVEehAqZIq47mr35CihmYLpHOX
	nKxmhMhO2scquFXANIGhkiEbAAzxH6c1jX7qrqZ+i1js1kM2V56YA7yg6cXKan6rqwJn
	HN4g==
X-Gm-Message-State: ALoCoQmdgJr/95cL+vJZgEhKmboE72N86w3LgW2oCUHE/F//lw1R0oKPu+hxcc6QF+XRyZSNyns3
X-Received: by 10.180.105.65 with SMTP id gk1mr21843034wib.12.1391696765643;
	Thu, 06 Feb 2014 06:26:05 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id t6sm55927701wix.4.2014.02.06.06.26.04
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 06:26:04 -0800 (PST)
Message-ID: <52F39B7B.3090208@linaro.org>
Date: Thu, 06 Feb 2014 14:26:03 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
Cc: keir@xen.org, tim@xen.org, ian.jackson@eu.citrix.com, jbeulich@suse.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 05/02/14 16:03, Ian Campbell wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
>
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has occurred
> gets committed to real RAM. To achieve this add a new cacheflush_page function,
> which is a stub on x86.
>
> Secondly we need to flush anything which the domain builder touches, which we
> do via a new domctl.

As I understand it, there is no hypercall continuation, so if a domain 
gives a big range, Xen will get stuck for a long time (no softirqs will 
be handled on the current processor ...). Shall we at least use 
hypercall_create_continuation?
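A minimal, purely illustrative sketch of the chunk-and-continue
pattern Julien is asking about (all names are hypothetical stand-ins;
the real Xen primitives, hypercall_preempt_check() and
hypercall_create_continuation(), are not reproduced here):

```c
#include <assert.h>

/* Hypothetical sketch, not actual Xen code.  A long flush over a big
 * page range is chunked; every CHUNK pages we consult a preemption
 * predicate, and if work remains we return the pfn to resume from so
 * the caller can re-issue (continue) the operation instead of
 * monopolizing the CPU and starving softirqs. */

#define CHUNK 16  /* pages flushed per preemption check (arbitrary) */

static long pages_flushed = 0;

static void flush_one_page(unsigned long pfn)
{
    (void)pfn;
    pages_flushed++;  /* stand-in for a real clean + invalidate */
}

/* Returns 0 when the whole range is done, or the pfn to resume from
 * (the "continuation") when preempted mid-range. */
static unsigned long flush_range(unsigned long start, unsigned long end,
                                 int (*preempt_check)(void))
{
    unsigned long pfn;

    for (pfn = start; pfn < end; pfn++) {
        flush_one_page(pfn);
        if (((pfn - start) % CHUNK) == CHUNK - 1 && pfn + 1 < end &&
            preempt_check())
            return pfn + 1;  /* caller re-enters with this start */
    }
    return 0;
}

/* Two extreme preemption policies, for demonstration. */
static int never_preempt(void)  { return 0; }
static int always_preempt(void) { return 1; }
```

Under no contention the whole range completes in one call; under
constant contention only one chunk is flushed before control is
returned to the caller with a resume point.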

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:27:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPvg-0008AY-3e; Thu, 06 Feb 2014 14:27:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPvd-00089z-P4
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:27:45 +0000
Received: from [85.158.137.68:45416] by server-2.bemta-3.messagelabs.com id
	CB/C1-06531-1EB93F25; Thu, 06 Feb 2014 14:27:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391696862!90253!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30596 invoked from network); 6 Feb 2014 14:27:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:27:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98591140"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 14:27:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:27:41 -0500
Message-ID: <1391696860.25128.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:27:40 +0000
In-Reply-To: <21235.39732.329017.933315@mariner.uk.xensource.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<1391444091-22796-19-git-send-email-ian.jackson@eu.citrix.com>
	<1391695494.25128.22.camel@kazak.uk.xensource.com>
	<21235.39732.329017.933315@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 18/18] libxl: timeouts: Record
 deregistration when one occurs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 14:24 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH 18/18] libxl: timeouts: Record deregistration when one occurs"):
> > On Mon, 2014-02-03 at 16:14 +0000, Ian Jackson wrote:
> > > When a timeout has occurred, it is deregistered.  However, we failed
> > > to record this fact by updating etime->func.  As a result,
> > > libxl__ev_time_isregistered would say `true' for a timeout which has
> > > already happened.
> > 
> > Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
> > 
> > Although if you were minded to explicitly note in the changelog that it
> > is necessary to nuke ->func before running the callback so that it
> > doesn't try and fire it again in parallel while it is running then that
> > would be good.
> 
> That's not the reason.  This is the reason, which I have just put in
> the commit message:
> 
>     It is necessary to clear etime->func before the callback, because
>     the callback might want to reinstate the timeout, or might free
>     the etime (or its containing struct) entirely.

Ah, right!

That's fine -- my ack stands.

> I have also fixed my S-o-b which incorrectly used my personal email
> address.
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > 
> > Although if you were minded to explicitly note in the changelog that it
> > is necessary to nuke ->func before running the callback so that it
> > doesn't try and fire it again in parallel while it is running then that
> > would be good.
> 
> That's not the reason.  This is the reason, which I have just put in
> the commit message:
> 
>     It is necessary to clear etime->func before the callback, because
>     the callback might want to reinstate the timeout, or might free
>     the etime (or its containing struct) entirely.
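
The clear-before-callback pattern described above can be sketched as follows. This is a minimal illustration with made-up names (struct ev_time, ev_time_fired), not the actual libxl code:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative event-loop timeout whose ->func pointer doubles as the
 * "is registered" flag; names are hypothetical, not the libxl types. */
struct ev_time;
typedef void ev_time_cb(struct ev_time *ev);

struct ev_time {
    ev_time_cb *func;            /* non-NULL iff the timeout is registered */
};

static int ev_time_isregistered(const struct ev_time *ev)
{
    return ev->func != NULL;
}

/* Invoked by the event loop when the timeout occurs. */
static void ev_time_fired(struct ev_time *ev)
{
    ev_time_cb *func = ev->func;
    /* Record the deregistration BEFORE running the callback: the
     * callback may reinstate the timeout, or free ev (or its
     * containing struct), so ev must not be touched afterwards. */
    ev->func = NULL;
    func(ev);
}

/* Example callback that reinstates the timeout. */
static void reinstate_cb(struct ev_time *ev)
{
    assert(!ev_time_isregistered(ev));   /* deregistration already visible */
    ev->func = reinstate_cb;             /* register again */
}
```

If ->func were instead cleared after the callback returned, a callback which re-registered the timeout would have its fresh registration immediately wiped out.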

Ah, right!

That's fine -- my ack stands.

> I have also fixed my S-o-b which incorrectly used my personal email
> address.
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:29:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:29:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBPx0-00004x-T0; Thu, 06 Feb 2014 14:29:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBPwz-0008WE-Jf
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 14:29:09 +0000
Received: from [85.158.139.211:58709] by server-17.bemta-5.messagelabs.com id
	D7/0E-31975-43C93F25; Thu, 06 Feb 2014 14:29:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391696946!2120324!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 610 invoked from network); 6 Feb 2014 14:29:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:29:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100469825"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:29:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	09:29:02 -0500
Message-ID: <1391696940.25128.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 14:29:00 +0000
In-Reply-To: <52F39B7B.3090208@linaro.org>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
	<52F39B7B.3090208@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 14:26 +0000, Julien Grall wrote:
> Hi Ian,
> 
> On 05/02/14 16:03, Ian Campbell wrote:
> > Guests are initially started with caches disabled and so we need to make sure
> > they see consistent data in RAM (requiring a cache clean) but also that they
> > do not have old stale data suddenly appear in the caches when they enable
> > their caches (requiring the invalidate).
> >
> > This can be split into two halves. First we must flush each page as it is
> > allocated to the guest. It is not sufficient to do the flush at scrub time
> > since this will miss pages which are ballooned out by the guest (where the
> > guest must scrub if it cares about not leaking the page content). We need to
> > clean as well as invalidate to make sure that any scrubbing which has occurred
> > gets committed to real RAM. To achieve this add a new cacheflush_page function,
> > which is a stub on x86.
> >
> > Secondly we need to flush anything which the domain builder touches, which we
> > do via a new domctl.
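
The clean-plus-invalidate step in the quoted commit message can be sketched roughly as below. The per-line operation and constants are stand-ins (on a real ARM implementation this would be a cache maintenance instruction per line followed by a barrier); here the line op just counts calls so the loop structure is checkable on any host:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE  4096u
#define CACHELINE    64u   /* illustrative line size */

/* Stand-in for a per-line clean-and-invalidate; a real implementation
 * would issue the architecture's cache maintenance instruction here. */
static unsigned lines_flushed;
static void clean_and_invalidate_line(uintptr_t va)
{
    (void)va;
    lines_flushed++;
}

/* Hypothetical cacheflush_page-style helper: clean AND invalidate every
 * line of one page, so that scrubbed data reaches real RAM and no stale
 * line can reappear once the guest enables its caches. */
static void cacheflush_page(uintptr_t page_va)
{
    for (uintptr_t va = page_va; va < page_va + PAGE_SIZE; va += CACHELINE)
        clean_and_invalidate_line(va);
    /* a barrier would follow here on real hardware */
}
```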
> 
> As I understand it, there is no hypercall continuation, so if a domain 
> gives a big range Xen will get stuck for a long time (no softirqs will 
> be handled on the current processor ...). Shall we at least use 
> hypercall_create_continuation?

The hypercall deliberately limits the allowable range to avoid this.

This was discussed with Jan in a previous iteration; the other places
which end up here have a similar property. Perhaps as a future cleanup
they can all be made preemptable.
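
The "deliberately limits the allowable range" guard could be sketched like this; the cap and the names are invented for illustration and are not the values in the actual domctl:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Illustrative cap: refusing oversized requests bounds the time spent
 * in the (non-preemptible) flush, so no continuation is needed. */
#define MAX_CACHEFLUSH_PAGES 256u   /* 1MiB of 4K pages; hypothetical */

/* Validate a gfn range before doing any flushing work.
 * Returns 0 if acceptable, -EINVAL otherwise. */
static int cacheflush_range_ok(uint64_t start_gfn, uint64_t nr_gfns)
{
    if (nr_gfns == 0)
        return -EINVAL;
    if (start_gfn + nr_gfns < start_gfn)      /* wrap-around */
        return -EINVAL;
    if (nr_gfns > MAX_CACHEFLUSH_PAGES)       /* range too big */
        return -EINVAL;
    return 0;
}
```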

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:33:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQ0s-0000nc-Ec; Thu, 06 Feb 2014 14:33:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBQ0q-0000nE-R0
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:33:08 +0000
Received: from [193.109.254.147:46650] by server-16.bemta-14.messagelabs.com
	id 23/C6-21945-42D93F25; Thu, 06 Feb 2014 14:33:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391697183!2513034!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18222 invoked from network); 6 Feb 2014 14:33:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:33:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100471576"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:33:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 09:33:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBQ0k-0004TM-R4;
	Thu, 06 Feb 2014 14:33:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBQ0k-0002AY-K3;
	Thu, 06 Feb 2014 14:33:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21235.40222.299941.488083@mariner.uk.xensource.com>
Date: Thu, 6 Feb 2014 14:33:02 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391695658.25128.23.camel@kazak.uk.xensource.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F24642.5000300@eu.citrix.com>
	<21234.21167.684304.970488@mariner.uk.xensource.com>
	<52F36977.6030106@eu.citrix.com>
	<21235.33180.577781.42813@mariner.uk.xensource.com>
	<1391695658.25128.23.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
	libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH 00/18 v3] libxl: fork and event fixes for libvirt and 4.4"):
> I'm assuming you will shovel this one in yourself.

Now done, as modified, thanks.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:35:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQ3I-0001CA-LG; Thu, 06 Feb 2014 14:35:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBQ3H-0001BV-JG
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 14:35:39 +0000
Received: from [85.158.139.211:44636] by server-10.bemta-5.messagelabs.com id
	43/9E-08578-ABD93F25; Thu, 06 Feb 2014 14:35:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391697336!2142975!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 583 invoked from network); 6 Feb 2014 14:35:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:35:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100472771"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 14:35:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 09:35:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBQ3C-0004UU-VA;
	Thu, 06 Feb 2014 14:35:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBQ3C-0000hG-LA;
	Thu, 06 Feb 2014 14:35:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24744-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 14:35:34 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-arm-xen test] 24744: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24744 linux-arm-xen real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24744/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           4 xen-install               fail REGR. vs. 24734

version targeted for testing:
 linux                95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04
baseline version:
 linux                518e624ddfaef545408c19c30fff31bc64d6b346

------------------------------------------------------------
People who touched revisions under test:
  Ben Hutchings <ben@decadent.org.uk>
  Bjorn Helgaas <bhelgaas@google.com> [for PCI parts]
  Bob Liu <bob.liu@oracle.com>
  Dave Jones <davej@fedoraproject.org>
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <JBeulich@suse.com>
  Jie Liu <jeff.liu@oracle.com>
  Julien Grall <julien.grall@linaro.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Malcolm Crossley <malcolm.crossley@citrix.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tomi Valkeinen <tomi.valkeinen@ti.com>
  Wei Liu <liuw@liuw.name>
  Wei Liu <wei.liu2@citrix.com>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Yijing Wang <wangyijing@huawei.com>
  Zoltan Kiss <zoltan.kiss@citrix.com>
------------------------------------------------------------

jobs:
 build-armhf                                                  pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1349 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 14:49:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 14:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQG8-0002BQ-Vk; Thu, 06 Feb 2014 14:48:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBQG8-0002BL-3o
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 14:48:56 +0000
Received: from [85.158.139.211:14052] by server-16.bemta-5.messagelabs.com id
	8A/A3-05060-7D0A3F25; Thu, 06 Feb 2014 14:48:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391698134!2122792!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22381 invoked from network); 6 Feb 2014 14:48:55 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:48:55 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so1342727wes.25
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 06:48:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=q6GysalD4vl2FSGAvGuR7D9Jb+xqvBmGrYo0aJZAWhI=;
	b=G6wiykZ0I0KZMQAnAP0F0YIRCQHAwVxe+pt7+6GRLZ77/39OY5xvUkU+0gVU1gxEVO
	/4rsvXr6m8C/1t73h6nMdbP4s6JDUmBQta1DuwTVuq5j1TvNmsSsQgvFoSFkNIcj/dmT
	NhPZIjfhti6Y9kTFfW3LGDk+qxregi/pKEaNU2t3d19z5x43rt/3B0LnRfu73qZHg8ed
	PT/BYb1axXVNkCELdlJAXmC1XNMrzN+EW90CxUdGxRpb/ahQykjOPt6/Ej5aIVUG9kUo
	SWtm+xJXRdzaPw226WH74FGPO6hcLTva9KTdnaVqnhkYPObFFFoPnLYEIvJmh6vbTpKW
	Bc0g==
X-Gm-Message-State: ALoCoQnaolqjYAeIR4+c0pucNB7ORsr+2oSzOandfxESpDbdH9YbYHEYxQK8opbaj3GNRrf8s1TA
X-Received: by 10.180.12.14 with SMTP id u14mr7710734wib.0.1391698134728;
	Thu, 06 Feb 2014 06:48:54 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ev4sm6338953wib.1.2014.02.06.06.48.52
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 06:48:54 -0800 (PST)
Message-ID: <52F3A0D4.5070406@linaro.org>
Date: Thu, 06 Feb 2014 14:48:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
Cc: keir@xen.org, tim@xen.org, ian.jackson@eu.citrix.com, jbeulich@suse.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 05/02/14 16:03, Ian Campbell wrote:
> +void sync_page_to_ram(unsigned long mfn)
> +{
> +    void *v = map_domain_page(mfn);
> +
> +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> +

flush_xen_dcache_va_range uses DCCMVAC (on 32-bit ARM), which only 
cleans the cache.

Following your commit message, we might want to use DCCIMVAC.

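For illustration, the distinction can be sketched with a toy, non-Xen
cache-line model (all names below are invented for this sketch): a
clean-only operation (DCCMVAC-like) writes dirty data back to RAM but
leaves the line valid, so a later read can still hit stale cached data
if RAM is modified behind the cache's back; a clean-and-invalidate
(DCCIMVAC-like) forces the next access to refetch from RAM.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy single-line cache model, invented for illustration only;
 * none of these names exist in Xen. */
struct cache_line {
    bool valid;   /* line holds a copy of RAM */
    bool dirty;   /* cached copy is newer than RAM */
    int data;     /* the cached value */
};

static int ram;   /* the backing memory location */

/* DCCMVAC-like: write dirty data back to RAM, but keep the line
 * valid, so later reads still hit the cached copy. */
static void clean(struct cache_line *l)
{
    if (l->valid && l->dirty) {
        ram = l->data;
        l->dirty = false;
    }
}

/* DCCIMVAC-like: write back, then invalidate, so the next read
 * must refetch from RAM. */
static void clean_invalidate(struct cache_line *l)
{
    clean(l);
    l->valid = false;
}

/* A CPU read: hit the cache if the line is valid, else fill from RAM. */
static int read_through(struct cache_line *l)
{
    if (!l->valid) {
        l->data = ram;
        l->valid = true;
        l->dirty = false;
    }
    return l->data;
}
```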
-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:04:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQVD-00032h-Ap; Thu, 06 Feb 2014 15:04:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBQVB-00032c-4P
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:04:29 +0000
Received: from [85.158.137.68:49362] by server-5.bemta-3.messagelabs.com id
	20/D3-04712-C74A3F25; Thu, 06 Feb 2014 15:04:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391699065!103305!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17669 invoked from network); 6 Feb 2014 15:04:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 15:04:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100482383"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 15:04:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	10:04:19 -0500
Message-ID: <1391699058.25128.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 15:04:18 +0000
In-Reply-To: <52F3A0D4.5070406@linaro.org>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
	<52F3A0D4.5070406@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 14:48 +0000, Julien Grall wrote:
> 
> On 05/02/14 16:03, Ian Campbell wrote:
> > +void sync_page_to_ram(unsigned long mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> 
> flush_xen_dcache_va_range uses DCCMVAC (for ARM32 bits), which only 
> clean the cache.
> 
> Following your commit message, we might want to use DCCIMVAC.

Yes, I think you are right; I thought this function invalidated as well.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:27:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQqf-0003w7-4R; Thu, 06 Feb 2014 15:26:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WBQqd-0003w2-RZ
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:26:39 +0000
Received: from [193.109.254.147:55395] by server-9.bemta-14.messagelabs.com id
	99/E0-24895-FA9A3F25; Thu, 06 Feb 2014 15:26:39 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391700398!2523699!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32262 invoked from network); 6 Feb 2014 15:26:38 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 6 Feb 2014 15:26:38 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:50450 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WBQq4-0005w0-Et
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 16:26:04 +0100
Date: Thu, 6 Feb 2014 16:26:35 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <383510698.20140206162635@eikelenboom.it>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
Subject: [Xen-devel] Xen build outputs xen-4.4-rc2 although rc3 has been
	tagged ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Anyone else getting a xen-4.4-rc2.gz, a 4.4-rc2 banner, etc., although rc3 has been tagged?

I just recloned and rebuilt the staging tree to be sure, but it still puts out rc2.

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:32:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQwX-0004Md-DR; Thu, 06 Feb 2014 15:32:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBQwW-0004MY-9m
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:32:44 +0000
Received: from [85.158.143.35:36969] by server-2.bemta-4.messagelabs.com id
	D6/EA-10891-B1BA3F25; Thu, 06 Feb 2014 15:32:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391700762!3680714!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10121 invoked from network); 6 Feb 2014 15:32:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 15:32:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 15:32:42 +0000
Message-Id: <52F3B9290200007800119E40@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 15:32:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Sander Eikelenboom" <linux@eikelenboom.it>
References: <383510698.20140206162635@eikelenboom.it>
In-Reply-To: <383510698.20140206162635@eikelenboom.it>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen build outputs xen-4.4-rc2 although rc3 has been
 tagged ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.02.14 at 16:26, Sander Eikelenboom <linux@eikelenboom.it> wrote:
> Anyone else getting a xen-4.4-rc2.gz, 4.4-rc2 banner etc .. although rc3 has 
> been tagged ?
> 
> I just recloned and rebuild the staging tree to be sure .. but still puts 
> out rc2  ..

Ian said that this would unfortunately be the case in his RC3
announcement [1].

Jan

[1] http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg02721.html


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:35:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBQzM-0004Tp-1M; Thu, 06 Feb 2014 15:35:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WBQzK-0004Tf-2V
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:35:38 +0000
Received: from [193.109.254.147:29044] by server-7.bemta-14.messagelabs.com id
	05/C3-23424-9CBA3F25; Thu, 06 Feb 2014 15:35:37 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391700936!2526237!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28863 invoked from network); 6 Feb 2014 15:35:36 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 6 Feb 2014 15:35:36 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:50504 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WBQyk-0006eq-As; Thu, 06 Feb 2014 16:35:02 +0100
Date: Thu, 6 Feb 2014 16:35:33 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <601222704.20140206163533@eikelenboom.it>
To: "Jan Beulich" <JBeulich@suse.com>
In-Reply-To: <52F3B9290200007800119E40@nat28.tlf.novell.com>
References: <383510698.20140206162635@eikelenboom.it>
	<52F3B9290200007800119E40@nat28.tlf.novell.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen build outputs xen-4.4-rc2 although rc3 has been
	tagged ?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, February 6, 2014, 4:32:41 PM, you wrote:

>>>> On 06.02.14 at 16:26, Sander Eikelenboom <linux@eikelenboom.it> wrote:
>> Anyone else getting a xen-4.4-rc2.gz, 4.4-rc2 banner etc .. although rc3 has 
>> been tagged ?
>> 
>> I just recloned and rebuild the staging tree to be sure .. but still puts 
>> out rc2  ..

> Ian said that this would unfortunately be the case in his RC3
> announcement [1].

> Jan

> [1] http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg02721.html

Ah, I missed that one; I was just out collecting my Xen t-shirt @FOSDEM, I guess :-)

Thanks for clearing that up Jan !

--
Sander



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, February 6, 2014, 4:32:41 PM, you wrote:

>>>> On 06.02.14 at 16:26, Sander Eikelenboom <linux@eikelenboom.it> wrote:
>> Anyone else getting a xen-4.4-rc2.gz, 4.4-rc2 banner etc .. although rc3 has 
>> been tagged ?
>> 
>> I just recloned and rebuild the staging tree to be sure .. but still puts 
>> out rc2  ..

> Ian said that this would unfortunately be the case in his RC3
> announcement [1].

> Jan

> [1] http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg02721.html

Ah, I missed that one; I was just out collecting my Xen t-shirt @FOSDEM, I guess :-)

Thanks for clearing that up, Jan!

--
Sander



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:41:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:41:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBR4w-0004oU-0t; Thu, 06 Feb 2014 15:41:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBR4u-0004oL-MM
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:41:24 +0000
Received: from [193.109.254.147:34789] by server-12.bemta-14.messagelabs.com
	id 58/60-17220-42DA3F25; Thu, 06 Feb 2014 15:41:24 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391701283!2503604!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29473 invoked from network); 6 Feb 2014 15:41:23 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 15:41:23 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm4so1738222wib.2
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 07:41:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=0BaLu6gwrW0jvvAxTmH8SS7RNNb3uy8hTr2a5cB1CMM=;
	b=Bg5rMmP4YIdMAMnwyotKS3z0LY6pff6JtYN7gExXgzSTVxIhXRWBZOg6+uBXvDX7e6
	jEaagJT7zVI9Dc7jdqX4Z2WVqOmd4433SC1hzJGIwoNi4hb0OqXRTJoSTAScUYVxr9TS
	M7SjBpgpWwxAAacQUnIPONvywHmdL4dWTwEMMjRo6lctNIuXlAxw3FftwNFFn/xALKv7
	4Hnp0DGfK//0DyFfQd+dDpTRa0K21fcaLJPm7qG9v622fDTXjvpt1wLT2Noo1z9uGoZB
	LjOKzkx08kB5HE2kN8RRg2NZPlYGJw3ut43ZJohhggL9IKhW33RqtFR3TfQYRix5e7b2
	3+6w==
X-Gm-Message-State: ALoCoQk5fT0vQ1nGfhmwSekdKvXbfEE2BbFW8Db6xbluzqa11WmU0wYRsbIhyxgeJxQTOCi0GDU0
X-Received: by 10.180.37.178 with SMTP id z18mr21924467wij.46.1391701283119;
	Thu, 06 Feb 2014 07:41:23 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id bj3sm3251337wjb.14.2014.02.06.07.41.21
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 07:41:22 -0800 (PST)
Message-ID: <52F3AD21.5030607@linaro.org>
Date: Thu, 06 Feb 2014 15:41:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>	
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>	
	<52F3A0D4.5070406@linaro.org>
	<1391699058.25128.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1391699058.25128.50.camel@kazak.uk.xensource.com>
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 06/02/14 15:04, Ian Campbell wrote:
> On Thu, 2014-02-06 at 14:48 +0000, Julien Grall wrote:
>>
>> On 05/02/14 16:03, Ian Campbell wrote:
>>      > +void sync_page_to_ram(unsigned long mfn)
>>> +{
>>> +    void *v = map_domain_page(mfn);
>>> +
>>> +    flush_xen_dcache_va_range(v, PAGE_SIZE);
>>> +
>>
>> flush_xen_dcache_va_range uses DCCMVAC (for ARM32 bits), which only
>> clean the cache.
>>
>> Following your commit message, we might want to use DCCIMVAC.
>
> Yes, I think you are right, I thought this function invalidated as well.

I was wondering if we could change the behaviour of
flush_xen_dcache_va_range. Invalidating the cache should not harm the
other call sites.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:45:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBR8m-00056H-RZ; Thu, 06 Feb 2014 15:45:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBR8k-00056A-KJ
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:45:22 +0000
Received: from [85.158.137.68:3249] by server-13.bemta-3.messagelabs.com id
	5F/51-26923-11EA3F25; Thu, 06 Feb 2014 15:45:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391701519!114189!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3018 invoked from network); 6 Feb 2014 15:45:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 15:45:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100497105"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 15:45:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	10:45:18 -0500
Message-ID: <1391701517.25128.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 6 Feb 2014 15:45:17 +0000
In-Reply-To: <52F3AD21.5030607@linaro.org>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
	<1391616235-22703-3-git-send-email-ian.campbell@citrix.com>
	<52F3A0D4.5070406@linaro.org>
	<1391699058.25128.50.camel@kazak.uk.xensource.com>
	<52F3AD21.5030607@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 15:41 +0000, Julien Grall wrote:
> 
> On 06/02/14 15:04, Ian Campbell wrote:
> > On Thu, 2014-02-06 at 14:48 +0000, Julien Grall wrote:
> >>
> >> On 05/02/14 16:03, Ian Campbell wrote:
> >>      > +void sync_page_to_ram(unsigned long mfn)
> >>> +{
> >>> +    void *v = map_domain_page(mfn);
> >>> +
> >>> +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> >>> +
> >>
> >> flush_xen_dcache_va_range uses DCCMVAC (for ARM32 bits), which only
> >> clean the cache.
> >>
> >> Following your commit message, we might want to use DCCIMVAC.
> >
> > Yes, I think you are right, I thought this function invalidated as well.
> 
> I was wondering if we can change the behaviour of 
> flush_xen_dcache_va_range. Invalidate the cache should not harm the 
> other call-site.

Perhaps, but not for 4.4.

I'm going to introduce clean_and_invalidate_xen_dcache and friends. 

Post 4.4 I'm also going to rename flush_xen_dcache_* to
clean_xen_dcache_* so I don't make this mistake again.

At that point we can consider where, if anywhere, moving from Clean to
Clean+Invalidate makes sense.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 15:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 15:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBRJz-0005a6-8o; Thu, 06 Feb 2014 15:56:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1WBRJy-0005Zw-5h
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 15:56:58 +0000
Received: from [85.158.143.35:27859] by server-2.bemta-4.messagelabs.com id
	59/B4-10891-9C0B3F25; Thu, 06 Feb 2014 15:56:57 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391702216!3687451!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25412 invoked from network); 6 Feb 2014 15:56:56 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 15:56:56 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so1405629wes.23
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 07:56:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:content-type:content-transfer-encoding;
	bh=HWjD4RQC7wnfVkh+0Gg/UUNjienDlIyCAxRg434/PSo=;
	b=tBzFc7uteIIhhKyzpxW7/Q0YsLaaXNiFcE+o+aMV/xW2cZxcKWa+C/JNRtTo00K2Ru
	J27hdNf5QSb34C/twi6twwzdV52vZ+i3MZG5TRIyO6JwEGLcmptQMh6/KwnUJWxvn4YE
	lFz0IeQ54NPhrxrHjgq8uFdBYAHSFF9Peankf2CecdJlHOpeJkx+q6ZmlhyHfch+aQmC
	fAED8GG3JwVlAGxQJ6kU4efvHMbtISqpm4oaFsQLncW+CF5aP8p0GZFiukHRMPAgaJQx
	aluk7t5E0sKreE9Tp6Ey6ZuAfKHqXaQtb2BQMNbFI/eIVQi0cycCtcbHCU4p6k9wDzwq
	wSiw==
X-Received: by 10.181.12.9 with SMTP id em9mr211280wid.37.1391702216469;
	Thu, 06 Feb 2014 07:56:56 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id ci4sm3366253wjc.21.2014.02.06.07.56.55
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 07:56:55 -0800 (PST)
Message-ID: <52F3B0C6.9050904@xen.org>
Date: Thu, 06 Feb 2014 15:56:54 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	wg-test-framework@lists.xenproject.org
Subject: [Xen-devel] Looking for a volunteer to represent the Xen Project
 developer community at Test Working group meetings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

At the last WG meeting I had an action to call a vote on whether we
should open up the Test Framework WG to a community member. We had the
vote and it was carried.

== What type of person are we looking for ==
1) Can represent the Xen Project Developer community: this means you
need to be an active member of the community (e.g. a maintainer or
committer).
2) Has some understanding of how testing in the Xen Community works 
today and how it should work.
3) Ideally you also have written tests for Xen before.

== Responsibilities and Rights ==
1) You are presenting the views of the community, not those of your
employer or your own.
2) This means that you will need to consult with the developer community 
and understand the key issues on testing.
3) You commit to attending WG meetings and monitor and participate on 
wg-test-framework@lists.xenproject.org
4) You get a full vote on the WG.

== Nomination / Election ==
1) If only one community member volunteers, the nomination is confirmed 
unless a maintainer/committer objects in private or public.
2) If several volunteers step up, we will have to have a formal vote.
3) Time-line: volunteers should self-nominate by Feb 11th. If we only
have one volunteer and there are no objections by the 13th, you will be
confirmed. If you are a maintainer or committer, feel free to express
your support publicly. Otherwise I will have to set up a proper vote,
which will take longer to arrange.

Please self-nominate by replying to this mail, stating clearly that you
wish to volunteer.

Note that the next test WG meeting is Feb 13.

Best Regards
Lars


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 16:03:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 16:03:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBRQb-0006Py-8H; Thu, 06 Feb 2014 16:03:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WBRQZ-0006Ps-Id
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 16:03:47 +0000
Received: from [85.158.139.211:8270] by server-13.bemta-5.messagelabs.com id
	55/A4-18801-262B3F25; Thu, 06 Feb 2014 16:03:46 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391702625!2146299!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18859 invoked from network); 6 Feb 2014 16:03:45 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 16:03:45 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so1764430wib.6
	for <xen-devel@lists.xensource.com>;
	Thu, 06 Feb 2014 08:03:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=PtU3WBvo3kiCSHlcfjSXTGLwqA4tnRFzIWH1oSQihFI=;
	b=Kg1xCJCUNC4C3y6Z/wHShMeNzt/RU2hI1ubG8/4EKu0Rc7DUlcyw25Tg4a3EdFrtjo
	7DuIgYKzULemj6lPyi/26NwXn8i0uJVnSGu8KmQSel/50MLX/RDw/apPlIX/gyXO9PYn
	m/x8fm7a9l4abOt2U8ukA0i/MZJ2M41Jz4PHeq0V0fxFQGXBSvF8WC0rIm1npL2z75wY
	ud0Vk5tTT898J9jVmtV6+u958WcLrpDB76XCXrCZGlLZKStpnCj6IDphU5POLppHcNxN
	qL0EFA+MQseVQowz5Fa3EoNkyEHpE2mGrwcTunRSzZhtUPjMSusgnYKJqMY5nTzilYgG
	xJ6Q==
X-Received: by 10.180.219.44 with SMTP id pl12mr106522wic.12.1391702625420;
	Thu, 06 Feb 2014 08:03:45 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Thu, 6 Feb 2014 08:03:25 -0800 (PST)
In-Reply-To: <1391681000.23098.29.camel@kazak.uk.xensource.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Thu, 6 Feb 2014 16:03:25 +0000
Message-ID: <CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

http://www.drbd.org/users-guide/s-xen-configure-domu.html suggests we
need disk = [ 'drbd:resource-name,xvda,w' ], not a full path...

this is my config:

# Comment this out if uncommenting the next section (installing)
bootloader="pygrub"

#kernel = "/var/lib/xen/images/ubuntu-netboot/vmlinuz"
#ramdisk = "/var/lib/xen/images/ubuntu-netboot/initrd.gz"

extra = 'console=hvc0 xencons=tty'

### CPU & RAM ###
vcpus = 2
memory = 1024

### Disk device(s) ###
root = '/dev/xvda1 rw'
disk = [
        'drbd:drbd-remus-test,xvda,w'
]

### Hostname ###
name = 'remus-test'


### Networking ###
vif = [ 'bridge=xenbr0' ]


#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'


If I use kernel/ramdisk files instead of pygrub it works, with the same
disk syntax!

Thanks

On Thu, Feb 6, 2014 at 10:03 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 03:37 +0000, Miguel Clara wrote:
>> Sorry forgot to add the error:
>>
>> # xl create test.cfg
>> Parsing config from test.cfg
>> libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader
>> failed - consult logfile /var/log/xen/bootloader.34.log
>> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
>> bootloader [10762] exited with error status 1
>> libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
>> (re-)build domain: -3
>>
>> # cat /var/log/xen/bootloader.34.log
>> Traceback (most recent call last):
>>   File "/usr/local/lib/xen/bin/pygrub", line 844, in <module>
>>     part_offs = get_partition_offsets(file)
>>   File "/usr/local/lib/xen/bin/pygrub", line 105, in get_partition_offsets
>>     image_type = identify_disk_image(file)
>>   File "/usr/local/lib/xen/bin/pygrub", line 47, in identify_disk_image
>>     fd = os.open(file, os.O_RDONLY)
>> OSError: [Errno 2] No such file or directory: 'drbd-remus-test'
>
> You haven't shared your cfg file but this sounds to me like something is
> missing the necessary absolute path.
>
>>
>> On Wed, Feb 5, 2014 at 3:20 AM, Mike C. <miguelmclara@gmail.com> wrote:
>> > Fixed this, but it seems using drbd: in the disk config doesn't work with
>> > pygrub....
>> >
>> > Does this make sense?
>> >
>> > I found an old bug report, but this is Debian squeeze with Xen 4.3
>> >
>> > It seems to work fine booting into the installer, but if I use pygrub it
>> > doesn't find the drbd device.
>> >
>> >
>> >
>> >
>> > On February 4, 2014 10:02:52 PM GMT, Shriram Rajagopalan
>> > <rshriram@cs.ubc.ca> wrote:
>> >>
>> >> On Tue, Feb 4, 2014 at 2:00 PM, Miguel Clara <miguelmclara@gmail.com>
>> >> wrote:
>> >>>
>> >>> I noticed /etc/xen/scripts doesn't include the 'block-drbd' script,
>> >>> does this come with xen or the drbd package?
>> >>>
>> >>> Xen 4.3.1 was compiled from source, but drbd is installed from apt-get
>> >>> on Debian (v 8.3)
>> >>>
>> >>
>> >> It comes with the drbd package AFAIK
>> >>
>> >
>> > --
>> > Sent from my Android device with K-9 Mail. Please excuse my brevity.
>
>
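For context, the OSError in the pygrub traceback above comes down to pygrub treating the disk target as a filesystem path: identify_disk_image() hands the string straight to os.open(), and a bare DRBD resource name is not a file in dom0. A minimal sketch of the failing call (try_open is a stand-in written for illustration; the resource name is the one from the config above):

```python
import errno
import os

def try_open(target):
    """Mimic pygrub's identify_disk_image(), which opens the disk target directly."""
    try:
        fd = os.open(target, os.O_RDONLY)
        os.close(fd)
        return None  # target is a real, readable path
    except OSError as e:
        return e.errno  # a bare resource name yields ENOENT ([Errno 2])

# The same failure mode as in /var/log/xen/bootloader.34.log:
print(try_open("drbd-remus-test") == errno.ENOENT)
```

One possible workaround (untested here) is to hand pygrub a path it can actually open, e.g. the resolved /dev/drbdN device node for the resource, since the drbd: prefix appears to be translated only by the block-drbd hotplug script, which the bootloader does not go through.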

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 16:20:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 16:20:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBRgs-0007B4-E1; Thu, 06 Feb 2014 16:20:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBRgr-0007Az-N3
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 16:20:37 +0000
Received: from [85.158.137.68:26433] by server-17.bemta-3.messagelabs.com id
	B0/8F-22569-456B3F25; Thu, 06 Feb 2014 16:20:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391703634!100982!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32212 invoked from network); 6 Feb 2014 16:20:35 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 16:20:35 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s16GKKPU017546
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Feb 2014 16:20:21 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s16GKGR6021241
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 6 Feb 2014 16:20:17 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s16GKFFE017577; Thu, 6 Feb 2014 16:20:15 GMT
Received: from konrad-lan.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 06 Feb 2014 08:20:15 -0800
Date: Thu, 6 Feb 2014 11:20:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Matt Wilson <msw@linux.com>
Message-ID: <20140206162004.GA22864@konrad-lan.dumpdata.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<20140204151501.GA1781@andromeda.dapyr.net>
	<20140206045729.GA15901@u109add4315675089e695.ant.amazon.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140206045729.GA15901@u109add4315675089e695.ant.amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: linux-kernel@vger.kernel.org, mrushton@amazon.com, msw@amazon.com,
	xen-devel@lists.xenproject.org, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 05, 2014 at 08:57:30PM -0800, Matt Wilson wrote:
> On Tue, Feb 04, 2014 at 11:15:01AM -0400, Konrad Rzeszutek Wilk wrote:
> > On Tue, Feb 04, 2014 at 11:26:11AM +0100, Roger Pau Monne wrote:
> > > This series contains blkback bug fixes for memory leaks (patches 1 and 
> > > 2) and a race (patch 3). Patch 4 removes blkif_request_segment_aligned 
> > > since its memory layout is exactly the same as blkif_request_segment 
> > > and should introduce no functional change.
> > > 
> > > All patches should be backported to stable branches; although the last 
> > > one is not a functional change, it would still be nice to have for 
> > > code correctness.
> > 
> > Matt and Matt, could you guys kindly take a look as well? Thank you!
> 
> Matt R. did some testing today and set up additional tests to run
> overnight. He'll follow up after the overnight tests complete.

Thank you!
> 
> --msw
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
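On patch 4 in the quoted series: dropping blkif_request_segment_aligned is safe precisely because two structs whose fields all share the same types and offsets have an identical memory layout. A quick illustration of that equivalence with ctypes (the field definitions are a reconstruction for illustration, not copied from the Xen headers):

```python
import ctypes

# Two hypothetical mirrors of the ring-segment struct: if every field has
# the same type and offset, the structs are interchangeable on the ring.
class Segment(ctypes.Structure):
    _fields_ = [("gref", ctypes.c_uint32),       # grant reference
                ("first_sect", ctypes.c_uint8),  # first sector within the frame
                ("last_sect", ctypes.c_uint8)]   # last sector within the frame

class SegmentAligned(ctypes.Structure):
    _fields_ = [("gref", ctypes.c_uint32),
                ("first_sect", ctypes.c_uint8),
                ("last_sect", ctypes.c_uint8)]

def same_layout(a, b):
    """True when both structures have identical size and per-field offsets."""
    if ctypes.sizeof(a) != ctypes.sizeof(b):
        return False
    return all(getattr(a, name).offset == getattr(b, name).offset
               for name, _ in a._fields_)

print(same_layout(Segment, SegmentAligned))
```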

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 16:44:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 16:44:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBS3u-00082P-RR; Thu, 06 Feb 2014 16:44:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>)
	id 1WBS3t-00081s-PF; Thu, 06 Feb 2014 16:44:26 +0000
Received: from [85.158.137.68:40538] by server-15.bemta-3.messagelabs.com id
	10/B6-19263-7EBB3F25; Thu, 06 Feb 2014 16:44:23 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391705060!129350!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9939 invoked from network); 6 Feb 2014 16:44:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 16:44:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100520273"
Received: from sjcpex01cl03.citrite.net ([10.216.14.145])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Feb 2014 16:44:19 +0000
Received: from SJCPEX01CL01.citrite.net ([169.254.1.22]) by
	SJCPEX01CL03.citrite.net ([10.216.14.145]) with mapi id 14.02.0342.004;
	Thu, 6 Feb 2014 08:44:18 -0800
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, "lars.kurth@xen.org"
	<lars.kurth@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Ben Guthro <Ben.Guthro@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Paul Durrant <Paul.Durrant@citrix.com>, "Ian
	Jackson" <Ian.Jackson@citrix.com>
Thread-Topic: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
Thread-Index: AQHPInvYT66dFluJmUOzBzbhNQgKPZqob1hA
Date: Thu, 6 Feb 2014 16:44:18 +0000
Message-ID: <722BF1F4E241B546BB147E89923C56DF15BB63D2@SJCPEX01CL01.citrite.net>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.2.168]
MIME-Version: 1.0
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> Santosh:
> 
>       * KDD (Windows Debugger Stub) enhancements
> 
[Santosh Jodh] I believe this is still a good GSOC project. However, in
light of my new direction, I am not sure if I will be able to mentor.
Don’t know if Paul or someone else on the Windows team would want to
sponsor this.

Regards,
Santosh
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 16:44:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 16:44:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBS3v-00082Y-8D; Thu, 06 Feb 2014 16:44:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WBS3t-000820-Sm
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 16:44:26 +0000
Received: from [193.109.254.147:11689] by server-7.bemta-14.messagelabs.com id
	BD/13-23424-9EBB3F25; Thu, 06 Feb 2014 16:44:25 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391705062!2545151!1
X-Originating-IP: [98.139.212.175]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	FORGED_YAHOO_RCVD, ML_RADAR_SPEW_LINKS_14, UPPERCASE_25_50,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30033 invoked from network); 6 Feb 2014 16:44:23 -0000
Received: from nm16.bullet.mail.bf1.yahoo.com (HELO
	nm16.bullet.mail.bf1.yahoo.com) (98.139.212.175)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 16:44:23 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391705062; bh=wToCkjmtBdqtGWwqng6eSIM8/Zu18G0qbGMBDu8JYqM=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=ipdSFQEuPyCG4BorqhrtyRFkHV6CEKL/MdbqELgLEpTaeUC5JCBn5zE36ptLZ2Fy7yqF/nbsDRGKFTjb7b0Rinb621g5wwn3GHLSStc42eVoiLBq3oF9eq16Q7fgJ6qD28VVWUoBuaqfTSYPPdGtL+nQjRniIEuVY72DK+A41+c=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=KzyyApPEQ/ECNUsI2AjWlpukiec+LuBBLJGRwOf8dJwcKE4/RhjZSZErRnH5pTLreveFwGTdxeZoxrIvF2Sj9SBtfduCqr66grjmt6j7sqmUR6eQM9HuDEcDicSi/3RDdQ8IBnzZQof2WeAmBqABJYk9tx8F9pACBgz8P3yqvy8=;
Received: from [98.139.215.142] by nm16.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 16:44:22 -0000
Received: from [98.139.211.202] by tm13.bullet.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 16:44:22 -0000
Received: from [127.0.0.1] by smtp211.mail.bf1.yahoo.com with NNFMP;
	06 Feb 2014 16:44:22 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391705062; bh=wToCkjmtBdqtGWwqng6eSIM8/Zu18G0qbGMBDu8JYqM=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version;
	b=4NjlforM+Y4yTt0CmBBJUmxBi6CaG0y+fSxPusH4RenZtrk9yw4nvYGzXVjV40UGDyEiZdBff9yJTjrChEsu6g0PZVyf8hpEiAbdz76Z/uGPAMYXe7OF+eHegNCXaJS40aSt05DhF8n1qnAJ7kWA0A7+y9Z2QnRVIpm6SEv8LWA=
X-Yahoo-Newman-Id: 105098.31187.bm@smtp211.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: 7lozgwsVM1nNE3dwWtpuI.m3dSSHqUT6DJ5MZT01CptQEXE
	SZn3IQmT0O3MzWG6.iDStz36E3Bu2EHAKjb_H8dRgLdGOyr6yJmneLhQ3p.e
	dT8BBHVCSrEBO4Ean1YCLMG1VZFnDmiWFKasUJM.V41rExD5BQdOKdQl1cnk
	8qv9uxaQXdGkUpw3bwSkGGM3CdbIAbaysUK1WD7VDIZ.sPzjdIwNqB0vUhLb
	yDQhl.ep65eCeV8xHq5..pOtmNGr3uqYjOC6TjqOxTNevI2px1E_m8XehrcF
	_z9ielOQkGbdG.djqKxaZaB9aDXuvqC8QJl8PU8U1FZTvLbBmTglDYJYjcQQ
	GpMGQIuK3zc320.oeaaCczA8PpmEVPkF_F0nThylwS.chNzkAMBC8zTekz2.
	uWUFvbf7FNBv8T0lzoVI3rnENsZaY.Cw9TAjZLMHviQCfkjeOqs3_EpQnY.4
	Ka7UolFarTCzkNOwpezT45l1QF1ZB2GPBHGknryoiTuqpoWhgS.DAN_0cMdW
	c17IJG2eQHGbSczObVWwqn2Y2eCItaczdymn3m91cEING5w--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp211.mail.bf1.yahoo.com with SMTP; 06 Feb 2014 08:44:22 -0800 PST
Message-ID: <1391705060.2400.8.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 06 Feb 2014 09:44:20 -0700
In-Reply-To: <52E277920200007800116A70@nat28.tlf.novell.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
	<52E277920200007800116A70@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="=-XCJRpyi4nADdM9oKwhZk"
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=-XCJRpyi4nADdM9oKwhZk
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On Fri, 2014-01-24 at 13:24 +0000, Jan Beulich wrote:
> >>> On 24.01.14 at 09:34, "Jan Beulich" <JBeulich@suse.com> wrote:
> >>>> On 24.01.14 at 01:47, Eric Houby <ehouby@yahoo.com> wrote:
> >> On Thu, 2014-01-23 at 11:16 +0000, Jan Beulich wrote:
> >>> If
> >>> the kernel responds to that mentioned firmware bug by forcing
> >>> interrupt remapping off, maybe we would have to do the same...
> >> 
> >> That would be better than Xen failing to boot.
> > 
> > But you realize that, following precedents elsewhere in the
> > IOMMU code, we would disable the IOMMU as a whole rather
> > than just interrupt remapping.
> > 
> > But yes, looking at the Linux side code, I guess we need to do
> > so. Would be nice if you could confirm that the system comes up
> > fine (and hopefully without IOMMU faults) with
> > "iommu=no-intremap,debug" as well as with "iommu=off".
> 
> Here's a patch attempting to do just that. Please give this a try
> without any IOMMU-related override options.
> 
> Jan
> 
> AMD IOMMU: fail if there is no southbridge IO-APIC
> 
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt is usually not working.
> 
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
> 
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>      const struct acpi_ivrs_header *ivrs_block;
>      unsigned long length;
>      unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>      int error = 0;
>  
>      BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>      /* Each IO-APIC must have been mentioned in the table. */
>      for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>      {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>              continue;
>  
>          if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>          }
>      }
>  
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>      return error;
>  }
>  
> 
> 
> 

Jan,

I applied the patch to RC3, and it seems to do the intended job of
disabling the IOMMU and allowing my GA-890FXA-UD5 to boot without a crash.

(XEN) No southbridge IO-APIC found in IVRS table
(XEN) AMD-Vi: Error initialization
(XEN) I/O virtualisation disabled


Additional boot log information is attached.

Thanks,

Eric 


--=-XCJRpyi4nADdM9oKwhZk
Content-Disposition: attachment; filename="xen-boot-AMD-patch.txt"
Content-Type: text/plain; name="xen-boot-AMD-patch.txt"; charset="UTF-8"
Content-Transfer-Encoding: 7bit

 Xen 4.4-rc2
(XEN) Xen version 4.4-rc2 (root@houby.net) (gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7)) debug=y Thu Feb  6 09:20:28 MST 2014
(XEN) Latest ChangeSet: Tue Jan 28 16:48:55 2014 +0100 git:a731bb2-dirty
(XEN) Bootloader: GRUB 2.00
(XEN) Command line: placeholder dom0_mem=2048M,max:2048M dom0_max_vcpus=2 dom0_vcpus_pin loglvl=all guest_loglvl=all iommu=debug,verbose com1=38400,8n1,pci console=com1,vga
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: none; EDID transfer time: 0 seconds
(XEN)  EDID info not retrieved because no DDC retrieval method detected
(XEN) Disc information:
(XEN)  Found 3 MBR signatures
(XEN)  Found 3 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000097000 (usable)
(XEN)  000000000009f800 - 00000000000a0000 (reserved)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000afcf0000 (usable)
(XEN)  00000000afcf0000 - 00000000afcf1000 (ACPI NVS)
(XEN)  00000000afcf1000 - 00000000afd00000 (ACPI data)
(XEN)  00000000afd00000 - 00000000afe00000 (reserved)
(XEN)  00000000e0000000 - 00000000f0000000 (reserved)
(XEN)  00000000fec00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 0000000250000000 (usable)
(XEN) ACPI: RSDP 000F6080, 0014 (r0 GBT   )
(XEN) ACPI: RSDT AFCF1000, 0040 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: FACP AFCF1080, 0074 (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: DSDT AFCF1100, 7BE1 (r1 GBT    GBTUACPI     1000 MSFT  3000000)
(XEN) ACPI: FACS AFCF0000, 0040
(XEN) ACPI: HPET AFCF8DC0, 0038 (r1 GBT    GBTUACPI 42302E31 GBTU       98)
(XEN) ACPI: MCFG AFCF8E00, 003C (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: MATS AFCF8E40, 0034 (r1 GBT                    0             0)
(XEN) ACPI: TAMG AFCF8EB0, 048A (r1 GBT    GBT   B0 5455312E BG 53450101)
(XEN) ACPI: APIC AFCF8D00, 00BC (r1 GBT    GBTUACPI 42302E31 GBTU  1010101)
(XEN) ACPI: IVRS AFCF93B0, 00D8 (r1  AMD     RD890S   202031 AMD         0)
(XEN) System RAM: 8188MB (8385052kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-0000000250000000
(XEN) Domain heap initialised
(XEN) found SMP MP-table at 000f4670
(XEN) DMI 2.4 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[afcf000c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
(XEN) Processor #0 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
(XEN) Processor #1 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
(XEN) Processor #3 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] enabled)
(XEN) Processor #4 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] enabled)
(XEN) Processor #5 0:10 APIC version 16
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0x00] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x02] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x03] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x04] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x05] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x06] dfl dfl lint[0x1])
(XEN) ACPI: LAPIC_NMI (acpi_id[0x07] dfl dfl lint[0x1])
(XEN) ACPI: IOAPIC (id[0x02] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 2, version 33, address 0xfec00000, GSI 0-23
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
(XEN) ACPI: HPET id: 0x10b9a201 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 8 CPUs (2 hotplug CPUs)
(XEN) IRQ limits: 24 GSI, 1144 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2812.953 MHz processor.
(XEN) Initing memory sharing.
(XEN) AMD Fam10h machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - ff
(XEN) PCI: MCFG area at e0000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-ff
(XEN) AMD-Vi: Found MSI capability block at 0x54
(XEN) AMD-Vi: ACPI Table:
(XEN) AMD-Vi:  Signature IVRS
(XEN) AMD-Vi:  Length 0xd8
(XEN) AMD-Vi:  Revision 0x1
(XEN) AMD-Vi:  CheckSum 0x6f
(XEN) AMD-Vi:  OEM_Id AMD
(XEN) AMD-Vi:  OEM_Table_Id RD890S
(XEN) AMD-Vi:  OEM_Revision 0x202031
(XEN) AMD-Vi:  Creator_Id AMD
(XEN) AMD-Vi:  Creator_Revision 0
(XEN) AMD-Vi: IVRS Block: type 0x10 flags 0x3e len 0xa8 id 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0 -> 0x2
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x10 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x100 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x100 -> 0x101
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x20 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x200 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x28 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x30 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x38 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x500 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x48 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x600 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x58 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x700 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x700 -> 0x701
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0x88 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x90 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x90 -> 0x92
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0x98 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x98 -> 0x9a
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa0 flags 0xd7
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa2 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa3 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa4 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0 id 0 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x43 id 0x800 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0x800 -> 0x8ff alias 0xa4
(XEN) AMD-Vi: IVHD Device Entry: type 0x2 id 0xa5 flags 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x3 id 0xb0 flags 0
(XEN) AMD-Vi:  Dev_Id Range: 0xb0 -> 0xb2
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0xd7
(XEN) AMD-Vi: IVHD Special: 0000:00:14.0 variety 0x2 handle 0
(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) No southbridge IO-APIC found in IVRS table
(XEN) AMD-Vi: Error initialization
(XEN) I/O virtualisation disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET

--=-XCJRpyi4nADdM9oKwhZk
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-XCJRpyi4nADdM9oKwhZk--
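[Editor's note: the core of the patch quoted above is a scan for a southbridge IO-APIC entry while parsing the IVRS table. The check can be sketched in isolation as follows. This is an illustrative model, not Xen code: struct sbdf_entry, the sample data, and found_sb_ioapic() stand in for the hypervisor's ioapic_sbdf[] state; only the PCI_BDF(0, 0x14, 0) comparison mirrors the patch itself.]

```c
/* Sketch of the southbridge IO-APIC check added by the patch above.
 * On AMD systems the southbridge IO-APIC always sits at PCI device
 * 0000:00:14.0, so the IVRS table must contain an IO-APIC special
 * entry for that device if per-device interrupt remapping is to work. */

#define PCI_BDF(bus, dev, fn)  (((bus) << 8) | ((dev) << 3) | (fn))

struct sbdf_entry {
    unsigned int seg;  /* PCI segment */
    unsigned int bdf;  /* bus/device/function, as encoded by PCI_BDF */
};

/* Return 1 if any recorded IO-APIC maps to the southbridge device. */
int found_sb_ioapic(const struct sbdf_entry *entries, unsigned int count)
{
    for (unsigned int i = 0; i < count; ++i)
        if (entries[i].seg == 0 && entries[i].bdf == PCI_BDF(0, 0x14, 0))
            return 1;
    return 0;
}
```

With firmware like the GA-890FXA-UD5's, whose IVRS names the IO-APIC at 0000:00:00.1 instead (see the "IVHD Special ... variety 0x1" line in the attached log), this check fails, and the patch then has parse_ivrs_table() return -ENXIO rather than letting Xen crash later for want of a working timer interrupt.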



From xen-devel-bounces@lists.xen.org Thu Feb 06 16:44:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 16:44:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBS3u-00082P-RR; Thu, 06 Feb 2014 16:44:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Santosh.Jodh@citrix.com>)
	id 1WBS3t-00081s-PF; Thu, 06 Feb 2014 16:44:26 +0000
Received: from [85.158.137.68:40538] by server-15.bemta-3.messagelabs.com id
	10/B6-19263-7EBB3F25; Thu, 06 Feb 2014 16:44:23 +0000
X-Env-Sender: Santosh.Jodh@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391705060!129350!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9939 invoked from network); 6 Feb 2014 16:44:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 16:44:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100520273"
Received: from sjcpex01cl03.citrite.net ([10.216.14.145])
	by FTLPIPO01.CITRIX.COM with ESMTP/TLS/AES128-SHA;
	06 Feb 2014 16:44:19 +0000
Received: from SJCPEX01CL01.citrite.net ([169.254.1.22]) by
	SJCPEX01CL03.citrite.net ([10.216.14.145]) with mapi id 14.02.0342.004;
	Thu, 6 Feb 2014 08:44:18 -0800
From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, "lars.kurth@xen.org"
	<lars.kurth@xen.org>, Roger Pau Monne <roger.pau@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Ben Guthro <Ben.Guthro@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Paul Durrant <Paul.Durrant@citrix.com>, "Ian
	Jackson" <Ian.Jackson@citrix.com>
Thread-Topic: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
Thread-Index: AQHPInvYT66dFluJmUOzBzbhNQgKPZqob1hA
Date: Thu, 6 Feb 2014 16:44:18 +0000
Message-ID: <722BF1F4E241B546BB147E89923C56DF15BB63D2@SJCPEX01CL01.citrite.net>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [172.16.2.168]
MIME-Version: 1.0
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> 
> Santosh:
> 
>       * KDD (Windows Debugger Stub) enhancements
> 
[Santosh Jodh] I believe this is still a good GSOC project. However, in light of my new direction, I am not sure if I will be able to mentor. Don't know if Paul or someone else on the Windows team would want to sponsor this.

Regards,
Santosh
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

(XEN) AMD-Vi: IVHD Device Entry: type 0x48 id 0 flags 0
(XEN) AMD-Vi: IVHD Special: 0000:00:00.1 variety 0x1 handle 0x2
(XEN) No southbridge IO-APIC found in IVRS table
(XEN) AMD-Vi: Error initialization
(XEN) I/O virtualisation disabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET

--=-XCJRpyi4nADdM9oKwhZk
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=-XCJRpyi4nADdM9oKwhZk--



From xen-devel-bounces@lists.xen.org Thu Feb 06 16:51:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 16:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBSB1-00009D-Fl; Thu, 06 Feb 2014 16:51:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBSAz-000098-Hj
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 16:51:45 +0000
Received: from [85.158.139.211:56246] by server-9.bemta-5.messagelabs.com id
	1A/00-11237-0ADB3F25; Thu, 06 Feb 2014 16:51:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391705503!2164462!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28918 invoked from network); 6 Feb 2014 16:51:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 16:51:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 06 Feb 2014 16:51:43 +0000
Message-Id: <52F3CBAE0200007800119ED3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 06 Feb 2014 16:51:42 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ehouby@yahoo.com>
References: <1390244796.2322.6.camel@astar.houby.net>
	<52DE4CA1020000780011547D@nat28.tlf.novell.com>
	<1390297542.20516.90.camel@kazak.uk.xensource.com>
	<000001cf1695$0c960450$25c20cf0$@yahoo.com>
	<52DF8FB90200007800115AEC@nat28.tlf.novell.com>
	<1390455621.2415.56.camel@astar.houby.net>
	<52E1083602000078001161D5@nat28.tlf.novell.com>
	<1390524450.2281.12.camel@astar.houby.net>
	<52E233A60200007800116776@nat28.tlf.novell.com>
	<52E277920200007800116A70@nat28.tlf.novell.com>
	<1391705060.2400.8.camel@astar.houby.net>
In-Reply-To: <1391705060.2400.8.camel@astar.houby.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [TestDay] Xen crash on boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.02.14 at 17:44, Eric Houby <ehouby@yahoo.com> wrote:
> I applied the patch to RC3 and it seems to do the intended task of
> disabling IOMMU and allowing my GA-890FXA-UD5 to boot without a crash.
> 
> (XEN) No southbridge IO-APIC found in IVRS table
> (XEN) AMD-Vi: Error initialization
> (XEN) I/O virtualisation disabled

Thanks for testing!

Jan
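
The behaviour Eric confirms above — an IVRS parse failure degrading to "I/O virtualisation disabled" rather than a crash — can be sketched roughly as follows. All structure and function names here are illustrative, not Xen's actual code (the real parsing lives under xen/drivers/passthrough/amd/ in the hypervisor tree):

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical, simplified model of an IVRS "special device" entry
 * (the IVHD Special lines in the boot log above). */
enum ivhd_variety { IVHD_IOAPIC = 1, IVHD_HPET = 2 };

struct ivhd_special {
    enum ivhd_variety variety;
    unsigned int handle;
    unsigned int bdf;    /* PCI bus/device/function of the unit */
};

/* Return 0 if the firmware's IVRS table described at least one
 * IO-APIC, -ENODEV otherwise.  A non-zero result here is what should
 * make the caller disable the IOMMU and continue booting, instead of
 * proceeding with interrupt remapping against a device the table
 * never described (the original crash). */
int check_ivrs_has_ioapic(const struct ivhd_special *specials, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (specials[i].variety == IVHD_IOAPIC)
            return 0;
    return -ENODEV;
}
```

In Eric's log, the table only yields an SB-IOAPIC-less set of entries, so a check of this shape fails, AMD-Vi initialization is abandoned, and the boot continues without I/O virtualisation.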


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 17:36:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 17:36:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBSsP-0001f9-EN; Thu, 06 Feb 2014 17:36:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WBSsN-0001f4-Pw
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 17:36:36 +0000
Received: from [85.158.139.211:23878] by server-14.bemta-5.messagelabs.com id
	2C/05-27598-328C3F25; Thu, 06 Feb 2014 17:36:35 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391708191!2173895!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31491 invoked from network); 6 Feb 2014 17:36:34 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 17:36:34 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 06 Feb 2014 10:36:28 -0700
Message-ID: <52F3C81A.5080008@suse.com>
Date: Thu, 06 Feb 2014 10:36:26 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: Michal Privoznik <mprivozn@redhat.com>
References: <1391621986-7341-1-git-send-email-jfehlig@suse.com>
	<52F385BE.5020509@redhat.com>
In-Reply-To: <52F385BE.5020509@redhat.com>
Cc: libvir-list@redhat.com, xen-devel@lists.xen.org, bjzhang@suse.com
Subject: Re: [Xen-devel] [libvirt] [PATCH 0/4] libxl: fixes related to
	concurrency improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Michal Privoznik wrote:
> On 05.02.2014 18:39, Jim Fehlig wrote:
>> While reviving old patches to add job support to the libxl driver,
>> testing revealed some problems that were difficult to encounter
>> in the current, more serialized processing approach used in the
>> driver.
>>
>> The first patch is a bug fix, plugging leaks of libxlDomainObjPrivate
>> objects.  The second patch removes the list of libxl timer registrations
>> maintained in the driver - a hack I was never fond of.  The third patch
>> moves domain shutdown handling to a thread, instead of doing all the
>> shutdown work in the event handler.  The fourth patch fixes an issue wrt
>> child process handling discussed in this thread
>>
>> http://lists.xen.org/archives/html/xen-devel/2014-01/msg01553.html
>>
>> Ian Jackson's latest patches on the libxl side are here
>>
>> http://lists.xen.org/archives/html/xen-devel/2014-02/msg00124.html
>>
>>
>> Jim Fehlig (4):
>>    libxl: fix leaking libxlDomainObjPrivate
>>    libxl: remove list of timer registrations from libxlDomainObjPrivate
>>    libxl: handle domain shutdown events in a thread
>>    libxl: improve subprocess handling
>>
>>   src/libxl/libxl_conf.h   |   5 +-
>>   src/libxl/libxl_domain.c | 102 ++++++++---------------------------
>>   src/libxl/libxl_domain.h |   8 +--
>>   src/libxl/libxl_driver.c | 135
>> +++++++++++++++++++++++++++++++----------------
>>   4 files changed, 115 insertions(+), 135 deletions(-)
>>
>
> ACK series but see my comment on 3/4 where I'm asking for a pair of
> fixes prior pushing.

Thanks for pointing those out, especially creating the joinable thread
that was never joined :).  Fixed.  I also added a note to the commit
message of 4/4 stating that the fixes on the libxl side will be included
in Xen 4.4.0:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg00463.html

Pushed series.  Thanks!

Regards,
Jim
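
The joinable-thread leak Jim mentions is a classic pthreads pitfall: a thread created joinable keeps its stack and bookkeeping until someone calls pthread_join() on it. A minimal sketch of the detached-thread pattern such a fix would use — all types and names here are hypothetical, not libvirt's actual code:

```c
#include <pthread.h>
#include <stdlib.h>

/* Illustrative shutdown-event payload; not libvirt's real types. */
struct shutdown_event {
    int domid;
};

static void *shutdown_worker(void *opaque)
{
    struct shutdown_event *ev = opaque;
    /* ... perform the (potentially slow) domain cleanup here,
     * outside the event handler ... */
    free(ev);
    return NULL;
}

/* Spawn the worker detached so its resources are reclaimed when it
 * exits; a joinable thread that nobody joins leaks them. */
int handle_shutdown_event(int domid)
{
    pthread_attr_t attr;
    pthread_t tid;
    int rc;
    struct shutdown_event *ev = malloc(sizeof(*ev));

    if (!ev)
        return -1;
    ev->domid = domid;

    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    rc = pthread_create(&tid, &attr, shutdown_worker, ev);
    pthread_attr_destroy(&attr);
    if (rc != 0)
        free(ev);
    return rc == 0 ? 0 : -1;
}
```

The alternative — keeping the thread joinable — only works if some other code path is guaranteed to join it, which is exactly what the review caught was missing.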

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 17:43:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 17:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBSyt-00029D-2Z; Thu, 06 Feb 2014 17:43:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>)
	id 1WBSyr-000291-NV; Thu, 06 Feb 2014 17:43:17 +0000
Received: from [85.158.143.35:35607] by server-1.bemta-4.messagelabs.com id
	07/9E-31661-5B9C3F25; Thu, 06 Feb 2014 17:43:17 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391708594!3711333!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28871 invoked from network); 6 Feb 2014 17:43:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 17:43:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98664209"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 17:43:14 +0000
Received: from AMSPEX01CL03.citrite.net (10.69.46.34) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 6 Feb 2014 12:43:13 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL03.citrite.net ([10.69.46.34]) with mapi id 14.02.0342.004;
	Thu, 6 Feb 2014 18:43:12 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Santosh Jodh <Santosh.Jodh@citrix.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, "lars.kurth@xen.org" <lars.kurth@xen.org>,
	"Roger Pau Monne" <roger.pau@citrix.com>, Dario Faggioli
	<dario.faggioli@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Ben Guthro <Ben.Guthro@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Ian Jackson <Ian.Jackson@citrix.com>
Thread-Topic: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
Thread-Index: AQHPInvYpj3b21EIS0q8f9DE1KOLi5qoX58AgAAhFQA=
Date: Thu, 6 Feb 2014 17:43:11 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02270E9@AMSPEX01CL01.citrite.net>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<722BF1F4E241B546BB147E89923C56DF15BB63D2@SJCPEX01CL01.citrite.net>
In-Reply-To: <722BF1F4E241B546BB147E89923C56DF15BB63D2@SJCPEX01CL01.citrite.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Santosh Jodh
> Sent: 06 February 2014 16:44
> To: Ian Campbell; lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli;
> Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Ian
> Jackson
> Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-
> devel@lists.xenproject.org
> Subject: RE: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
> 2014
> 
> >
> > Santosh:
> >
> >       * KDD (Windows Debugger Stub) enhancements
> >
> [Santosh Jodh] I believe this is still a good GSOC project. However, in light of
> my new direction, I am not sure if I will be able to mentor. Don’t know if Paul
> or someone else on the Windows team would want to sponsor this.
> 

Yes, it's still a worthy project. I'd be happy to sponsor.

  Paul

> Regards,
> Santosh
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 17:44:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 17:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBSzs-0002Dx-Sk; Thu, 06 Feb 2014 17:44:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBSzr-0002Dm-Ac
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 17:44:19 +0000
Received: from [85.158.137.68:54374] by server-13.bemta-3.messagelabs.com id
	0C/CE-26923-2F9C3F25; Thu, 06 Feb 2014 17:44:18 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391708654!141932!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23475 invoked from network); 6 Feb 2014 17:44:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 17:44:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100540317"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 17:43:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 12:43:42 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBSzG-0005Qc-Bu;
	Thu, 06 Feb 2014 17:43:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBSzF-00019y-OQ;
	Thu, 06 Feb 2014 17:43:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24746-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 17:43:41 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-arm-xen test] 24746: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24746 linux-arm-xen real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24746/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           9 guest-start                  fail   never pass

version targeted for testing:
 linux                95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04
baseline version:
 linux                518e624ddfaef545408c19c30fff31bc64d6b346

------------------------------------------------------------
People who touched revisions under test:
  Ben Hutchings <ben@decadent.org.uk>
  Bjorn Helgaas <bhelgaas@google.com> [for PCI parts]
  Bob Liu <bob.liu@oracle.com>
  Dave Jones <davej@fedoraproject.org>
  David Vrabel <david.vrabel@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <JBeulich@suse.com>
  Jie Liu <jeff.liu@oracle.com>
  Julien Grall <julien.grall@linaro.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Malcolm Crossley <malcolm.crossley@citrix.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Tomi Valkeinen <tomi.valkeinen@ti.com>
  Wei Liu <liuw@liuw.name>
  Wei Liu <wei.liu2@citrix.com>
  Wei Yongjun <yongjun_wei@trendmicro.com.cn>
  Yijing Wang <wangyijing@huawei.com>
  Zoltan Kiss <zoltan.kiss@citrix.com>
------------------------------------------------------------

jobs:
 build-armhf                                                  pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-arm-xen
+ revision=95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-arm-xen 95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04
+ branch=linux-arm-xen
+ revision=95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-arm-xen
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-arm-xen
++ : daily-cron.linux-arm-xen
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-arm-xen
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-arm-xen
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
+ : git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
+ : linux-arm-xen
+ : linux-arm-xen
+ : linux-arm-xen
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-arm-xen
+ : tested/linux-arm-xen
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04:tested/linux-arm-xen
Counting objects: 568, done.
Compressing objects: 100% (134/134), done.
Writing objects: 100% (480/480), 167.77 KiB, done.
Total 480 (delta 395), reused 427 (delta 345)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
Auto packing the repository for optimum performance.
   518e624..95bfbee  95bfbee422b9b1cfe8c2d2e27edf17ce1cc99e04 -> tested/linux-arm-xen
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 17:44:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 17:44:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBT05-0002GC-BN; Thu, 06 Feb 2014 17:44:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WBT04-0002Fu-K7; Thu, 06 Feb 2014 17:44:32 +0000
Received: from [85.158.137.68:4648] by server-6.bemta-3.messagelabs.com id
	CA/72-09180-FF9C3F25; Thu, 06 Feb 2014 17:44:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391708664!143488!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24656 invoked from network); 6 Feb 2014 17:44:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 17:44:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="100540386"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 17:44:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Thu, 6 Feb 2014
	12:44:08 -0500
Message-ID: <1391708646.2162.12.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Paul Durrant <Paul.Durrant@citrix.com>
Date: Thu, 6 Feb 2014 17:44:06 +0000
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD02270E9@AMSPEX01CL01.citrite.net>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<722BF1F4E241B546BB147E89923C56DF15BB63D2@SJCPEX01CL01.citrite.net>
	<9AAE0902D5BC7E449B7C8E4E778ABCD02270E9@AMSPEX01CL01.citrite.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>, Dario
	Faggioli <dario.faggioli@citrix.com>, Ben Guthro <Ben.Guthro@citrix.com>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 17:43 +0000, Paul Durrant wrote:
> > -----Original Message-----
> > From: Santosh Jodh
> > Sent: 06 February 2014 16:44
> > To: Ian Campbell; lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli;
> > Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Ian
> > Jackson
> > Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-
> > devel@lists.xenproject.org
> > Subject: RE: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
> > 2014
> >
> > >
> > > Santosh:
> > >
> > >       * KDD (Windows Debugger Stub) enhancements
> > >
> > [Santosh Jodh] I believe this is still a good GSOC project. However, in light of
> > my new direction, I am not sure if I will be able to mentor. Don’t know if Paul
> > or someone else on the Windows team would want to sponsor this.
> >
>
> Yes, it's still a worthy project. I'd be happy to sponsor.

Do you mean "mentor"?

Can you update the project on the wiki please.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 17:47:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 17:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBT36-0002WA-DR; Thu, 06 Feb 2014 17:47:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>)
	id 1WBT35-0002Vu-2m; Thu, 06 Feb 2014 17:47:39 +0000
Received: from [193.109.254.147:21372] by server-5.bemta-14.messagelabs.com id
	F7/A6-16688-ABAC3F25; Thu, 06 Feb 2014 17:47:38 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391708856!2567816!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7445 invoked from network); 6 Feb 2014 17:47:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 17:47:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,793,1384300800"; d="scan'208";a="98665806"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 17:47:35 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 6 Feb 2014 12:47:35 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Thu, 6 Feb 2014 18:47:33 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
Thread-Index: AQHPInvYpj3b21EIS0q8f9DE1KOLi5qoX58AgAAhFQD//++gAIAAEZ5w
Date: Thu, 6 Feb 2014 17:47:32 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD022712E@AMSPEX01CL01.citrite.net>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<722BF1F4E241B546BB147E89923C56DF15BB63D2@SJCPEX01CL01.citrite.net>
	<9AAE0902D5BC7E449B7C8E4E778ABCD02270E9@AMSPEX01CL01.citrite.net>
	<1391708646.2162.12.camel@kazak.uk.xensource.com>
In-Reply-To: <1391708646.2162.12.camel@kazak.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA2
Cc: "mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ben Guthro <Ben.Guthro@citrix.com>,
	"lars.kurth@xen.org" <lars.kurth@xen.org>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 06 February 2014 17:44
> To: Paul Durrant
> Cc: Santosh Jodh; lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli;
> Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Ian Jackson; xen-
> devel@lists.xen.org; xen-api@lists.xen.org; mirageos-
> devel@lists.xenproject.org
> Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
> 2014
>
> On Thu, 2014-02-06 at 17:43 +0000, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Santosh Jodh
> > > Sent: 06 February 2014 16:44
> > > To: Ian Campbell; lars.kurth@xen.org; Roger Pau Monne; Dario Faggioli;
> > > Konrad Rzeszutek Wilk; Ben Guthro; Andrew Cooper; Paul Durrant; Ian
> > > Jackson
> > > Cc: xen-devel@lists.xen.org; xen-api@lists.xen.org; mirageos-
> > > devel@lists.xenproject.org
> > > Subject: RE: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
> > > 2014
> > >
> > > >
> > > > Santosh:
> > > >
> > > >       * KDD (Windows Debugger Stub) enhancements
> > > >
> > > [Santosh Jodh] I believe this is still a good GSOC project. However, in light of
> > > my new direction, I am not sure if I will be able to mentor. Don’t know if Paul
> > > or someone else on the Windows team would want to sponsor this.
> > >
> >
> > Yes, it's still a worthy project. I'd be happy to sponsor.
>
> Do you mean "mentor"?

Yes, whatever the appropriate term is.

> Can you update the project on the wiki please.

Sure.

  Paul

> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 18:07:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 18:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBTM3-0003j2-5U; Thu, 06 Feb 2014 18:07:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <donbrearley@hibbing.edu>) id 1WBTM2-0003ix-70
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 18:07:14 +0000
Received: from [193.109.254.147:41603] by server-6.bemta-14.messagelabs.com id
	7B/FF-03396-15FC3F25; Thu, 06 Feb 2014 18:07:13 +0000
X-Env-Sender: donbrearley@hibbing.edu
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391710032!2574863!1
X-Originating-IP: [134.29.200.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20771 invoked from network); 6 Feb 2014 18:07:12 -0000
Received: from hibbing.edu (HELO hibbing.edu) (134.29.200.12)
	by server-10.tower-27.messagelabs.com with SMTP;
	6 Feb 2014 18:07:12 -0000
Received: from SMAIL-DOM-MTA by hibbing.edu
	with Novell_GroupWise; Thu, 06 Feb 2014 12:03:32 -0600
Message-Id: <52F37A05020000260003A5AC@hibbing.edu>
X-Mailer: Novell GroupWise Internet Agent 8.0.2 
Date: Thu, 06 Feb 2014 12:03:17 -0600
From: "Don Brearley" <donbrearley@hibbing.edu>
To: <xen-devel@lists.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] [test day] Windows SBS 2011 random hang and resource
 utilization on 4.4-rc2 and rc3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello, 


I know it's not officially "test day", but I've been toying with Xen 4.4-rc2 and -rc3 and I have several Linux
PV guests running successfully on 3.13.1.  They work fantastically.


However, I have an SBS 2011 64-bit HVM installation (no GPLPV drivers) and I routinely get a complete hang.  The VM
goes 'stateless' in 'xl list' (the domain state shows as ------).  I cannot access it via RDP or VNC, cannot
ping it, etc.  It is totally dead, yet the VM "lives".  Issuing 'xl shutdown -F domain' does not kill it; I have to use
"destroy" to remove the domain.  I didn't install GPLPV because I didn't see that the drivers were supported for SBS 2011.
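[Editorial aside, not part of the original report] The "------" state described above can be spotted programmatically by parsing `xl list` output; a minimal sketch, where the sample output and the domain name "sbs2011" are invented for illustration:

```python
# Flag domains whose `xl list` State column shows no flags at all ("------").
# The sample output below is invented; real output would come from `xl list`.
def stateless_domains(xl_list_output):
    rows = xl_list_output.strip().splitlines()[1:]  # skip the header row
    flagged = []
    for row in rows:
        parts = row.split()
        if len(parts) >= 5 and parts[4] == "------":
            flagged.append(parts[0])  # column 0 is the domain name
    return flagged

sample = (
    "Name         ID   Mem VCPUs  State   Time(s)\n"
    "Domain-0      0  2048     4  r-----   123.4\n"
    "sbs2011       1 32767     4  ------  9999.9\n"
)
print(stateless_domains(sample))  # ['sbs2011']
```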


No errors in the xen or qemu logs, nothing in the event log, nothing in dmesg/messages on dom0.


I can't seem to find anything that triggers it... it just happens, seemingly at random. 


"xl top" shows the domain using 200% CPU when this happens.  In one case I left the VM running like that
overnight; when I returned, it had risen to 300% utilization.


The domain sits on a physical disk (/dev/sdd) which is replicated via DRBD to another server. 


Any hints? Suggestions?




The domU config (typed in by hand, not copy/pasted):


builder="hvm"
memory="32767"
maxmem="32767"
shadow = "128"
vcpu = 4
viridian = 1


vif = [ 'model=e1000, bridge=xen-pribr' ]


disk = [ 'phy:/dev/sdd,xvda,w' ]


videoram=128




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 18:12:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 18:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBTQw-0004Bl-Br; Thu, 06 Feb 2014 18:12:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBTQu-0004Bg-91
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 18:12:16 +0000
Received: from [193.109.254.147:50501] by server-5.bemta-14.messagelabs.com id
	47/7B-16688-F70D3F25; Thu, 06 Feb 2014 18:12:15 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391710332!2563667!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8984 invoked from network); 6 Feb 2014 18:12:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 18:12:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98673987"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 18:12:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 6 Feb 2014 13:12:11 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBTQp-0004sk-4D;
	Thu, 06 Feb 2014 18:12:11 +0000
Message-ID: <52F3D07B.9070002@citrix.com>
Date: Thu, 6 Feb 2014 18:12:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Don Brearley <donbrearley@hibbing.edu>
References: <52F37A05020000260003A5AC@hibbing.edu>
In-Reply-To: <52F37A05020000260003A5AC@hibbing.edu>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [test day] Windows SBS 2011 random hang and
 resource utilization on 4.4-rc2 and rc3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/02/14 18:03, Don Brearley wrote:
> Hello, 
>
>
> I know it's not officially "test day", but I've been toying with Xen 4.4-rc2 and -rc3, and I have several Linux
> PV guests running successfully on kernel 3.13.1.  They work fantastically.
>
>
> However, I have an SBS 2011 64-bit HVM installation (no GPLPV drivers), and I routinely get a complete hang.  The VM
> goes 'stateless' in 'xl list' (the domain state shows as ------).  I cannot access it via RDP or VNC, cannot
> ping it, etc.  It is totally dead, yet the VM "lives".  Issuing 'xl shutdown -F domain' does not kill it; I have to use
> "destroy" to remove the domain.  I didn't install GPLPV because I didn't see that it was supported for SBS 2011.
>
>
> No errors in the xen or qemu logs, nothing in the event log, nothing in dmesg/messages on dom0.
>
>
> I can't seem to find anything that triggers it... it just happens, seemingly at random.
>
>
> "xl top" shows the domain using 200% CPU when this happens.  In one case I left the VM running like that
> overnight, returned, and it had climbed to 300% utilization.
>
>
> The domain sits on a physical disk (/dev/sdd) which is replicated via DRBD to another server. 
>
>
> Any hints? Suggestions?

Boot with "guest_loglvl=all" on the Xen command line.

Run `xl debug-keys q` followed by `xl dmesg` to see what Xen reports
about the guest.

Run `xen-hvmctx $domid` to see the guest's architectural state.

Both should be useful as a starting point.
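The suggested diagnostics can be collected with a small script. The sketch below is a dry run that only echoes each command so it can be reviewed before execution on the affected dom0; the domid (12) is a placeholder, not a value from the real host:

```shell
# Dry-run sketch of the suggested diagnostics: run() echoes each command
# instead of executing it.  Drop the 'echo' to run for real on dom0.
# The domid 12 is a placeholder; take the real one from `xl list`.
run() { echo "+ $*"; }

run xl list                 # find the hung domain and its domid
run xl debug-keys q         # ask Xen to dump domain/vcpu state
run xl dmesg                # read that dump from the hypervisor log
run xen-hvmctx 12           # architectural state of the guest
```

Removing the `echo` from `run()` executes the commands in order, which matches the order of the advice above.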

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 18:45:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 18:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBTwN-0005bi-PU; Thu, 06 Feb 2014 18:44:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBTwM-0005bS-BS
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 18:44:46 +0000
Received: from [85.158.143.35:43884] by server-1.bemta-4.messagelabs.com id
	8E/F7-31661-D18D3F25; Thu, 06 Feb 2014 18:44:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391712283!3729154!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29425 invoked from network); 6 Feb 2014 18:44:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 18:44:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98683521"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 18:44:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 13:44:25 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBTvw-0005kf-MY;
	Thu, 06 Feb 2014 18:44:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBTvv-0004Y0-GY;
	Thu, 06 Feb 2014 18:44:19 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 6 Feb 2014 18:44:17 +0000
Message-ID: <1391712257-17448-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: test programs: Fix Makefile race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We need to include the new TEST_PROG_OBJS and LIBXL_TEST_OBJS in the
appropriate dependencies.  Otherwise we risk trying to build the test
program before gentypes is run.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 66f3f3f..4af9033 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -130,7 +130,7 @@ all: $(CLIENTS) $(TEST_PROGS) \
 	$(AUTOSRCS) $(AUTOINCS)
 
 $(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS) \
-		$(LIBXL_TEST_OBJS): \
+		$(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): \
 	$(AUTOINCS) libxl.api-ok
 
 %.c %.h:: %.y
@@ -175,8 +175,9 @@ libxl_internal.h: _libxl_types_internal.h _paths.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 xl.h: _paths.h
 
-$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS): libxl.h
-$(LIBXL_OBJS): libxl_internal.h
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS) $(LIBXLU_OBJS) \
+	$(XL_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): libxl.h
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
 _libxl_type%.h _libxl_type%_json.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
 	$(PYTHON) gentypes.py libxl_type$*.idl __libxl_type$*.h __libxl_type$*_json.h __libxl_type$*.c
-- 
1.7.10.4
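For readers unfamiliar with this class of bug, here is a toy reproduction under stated assumptions: `gen.h` stands in for the gentypes-generated headers and `prog.o` for a TEST_PROG_OBJS member; none of these names come from the real tree.

```shell
# Toy model of the Makefile race (hypothetical names throughout).
# The explicit 'prog.o: gen.h' line plays the role of this patch:
# without it, 'make -j' may try to build prog.o before gen.h exists.
workdir=$(mktemp -d)
cd "$workdir"
{
  printf 'all: prog.o\n\n'
  printf 'prog.o: gen.h\n\n'                               # the fix
  printf 'gen.h:\n\tsleep 1; echo GENERATED > gen.h\n\n'   # slow generator
  printf 'prog.o: prog.c\n\tcat gen.h prog.c > prog.o\n'   # needs gen.h
} > Makefile
echo 'program source' > prog.c
make -j4 >/dev/null 2>&1 && grep -q GENERATED prog.o && echo BUILD-OK
# prints BUILD-OK
```

With the `prog.o: gen.h` line deleted, the same `make -j4` can intermittently fail because `gen.h` does not yet exist when `prog.o` is built, which is the class of failure the commit message describes.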


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 18:46:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 18:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBTxh-0005hi-AQ; Thu, 06 Feb 2014 18:46:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBTxf-0005hZ-Vo
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 18:46:08 +0000
Received: from [85.158.137.68:27177] by server-6.bemta-3.messagelabs.com id
	F1/09-09180-F68D3F25; Thu, 06 Feb 2014 18:46:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391712363!152094!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23445 invoked from network); 6 Feb 2014 18:46:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 18:46:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98684112"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 18:46:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 13:46:02 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBTxa-0005l9-2a;
	Thu, 06 Feb 2014 18:46:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBTxZ-0004Kp-QM;
	Thu, 06 Feb 2014 18:46:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24743-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 18:46:01 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24743: tolerable trouble:
	broken/fail/pass - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24743 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24743/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  3 host-install(3)         broken REGR. vs. 24739
 test-amd64-i386-qemut-rhel6hvm-amd  3 host-install(3)        broken like 24739
 test-amd64-i386-rhel6hvm-amd  3 host-install(3)              broken like 24729

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587
baseline version:
 xen                  ff1745d5882b7356ea423709919e46e55c31b615

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=fee61634e8f3fec7c137f0d16478c64c7f355587
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable fee61634e8f3fec7c137f0d16478c64c7f355587
+ branch=xen-unstable
+ revision=fee61634e8f3fec7c137f0d16478c64c7f355587
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git fee61634e8f3fec7c137f0d16478c64c7f355587:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   ff1745d..fee6163  fee61634e8f3fec7c137f0d16478c64c7f355587 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 broken  
 test-amd64-i386-qemut-rhel6hvm-amd                           broken  
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=fee61634e8f3fec7c137f0d16478c64c7f355587
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable fee61634e8f3fec7c137f0d16478c64c7f355587
+ branch=xen-unstable
+ revision=fee61634e8f3fec7c137f0d16478c64c7f355587
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git fee61634e8f3fec7c137f0d16478c64c7f355587:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   ff1745d..fee6163  fee61634e8f3fec7c137f0d16478c64c7f355587 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:01:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUCB-0006a5-72; Thu, 06 Feb 2014 19:01:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBUC9-0006a0-M7
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 19:01:05 +0000
Received: from [85.158.137.68:5087] by server-7.bemta-3.messagelabs.com id
	2F/98-13775-0FBD3F25; Thu, 06 Feb 2014 19:01:04 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391713264!152734!1
X-Originating-IP: [81.169.146.161]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1ODk3MjY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MSA9PiA1ODk3MjY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19009 invoked from network); 6 Feb 2014 19:01:04 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.161)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 19:01:04 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391713264; l=322;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=CvmgEGlXIHO1lsOgjqR6Tn5BMTA=;
	b=TKwR+k2iNKgI1IuAXeKqi6tXuh9zrSxmQY4kvrEOG1mf7jgGhFQZB4uPGZD2KRkVkR/
	FyiUgFBKmh2PAl6mdC6SDcz/8j9xas7Mj4xNXfS5uLgZdLIBUuFhBOdPelDJiPPZ76Oup
	bCEcSpnUc3L47eVL7KTSyVrWrNIL2gwak68=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id z03a54q16J138xf
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 6 Feb 2014 20:01:03 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5C11F50269; Thu,  6 Feb 2014 20:01:03 +0100 (CET)
Date: Thu, 6 Feb 2014 20:01:03 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Message-ID: <20140206190103.GA17137@aepfle.de>
References: <1391712257-17448-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391712257-17448-1-git-send-email-ian.jackson@eu.citrix.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: test programs: Fix Makefile race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, Ian Jackson wrote:

> We need to include the new TEST_PROG_OBJS and LIBXL_TEST_OBJS in the
> appropriate dependencies.  Otherwise we risk trying to build the test
> program before gentypes is run.

This helps with building, but linking still fails: TEST_PROGS lacks a
dependency on libxenlight.so.

Olaf
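[Editor's sketch, not the real tools/libxl Makefile: all variable and file names below are illustrative. It shows the two dependency fixes discussed in this thread: (1) the test objects depend on the gentypes step so generated headers exist before compilation, and (2) TEST_PROGS depend on libxenlight.so so linking cannot start before the library is built.]

```make
# Hypothetical fragment modeled on the thread; names are illustrative.
TEST_PROG_OBJS  = test_common.o test_timedereg.o
LIBXL_TEST_OBJS = libxl_test_timedereg.o
TEST_PROGS      = test_timedereg

# (1) Ian's fix: run gentypes before compiling any test object.
$(TEST_PROG_OBJS) $(LIBXL_TEST_OBJS): gentypes

# (2) Olaf's point: the test programs must also depend on the library.
$(TEST_PROGS): %: %.o $(LIBXL_TEST_OBJS) libxenlight.so
	$(CC) $(LDFLAGS) -o $@ $< $(LIBXL_TEST_OBJS) -lxenlight
```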

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:11:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUMW-00071Z-KP; Thu, 06 Feb 2014 19:11:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBUMU-00071R-By
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 19:11:46 +0000
Received: from [85.158.139.211:35337] by server-3.bemta-5.messagelabs.com id
	94/83-13671-17ED3F25; Thu, 06 Feb 2014 19:11:45 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391713905!2181490!1
X-Originating-IP: [213.199.154.204]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1704 invoked from network); 6 Feb 2014 19:11:45 -0000
Received: from am1ehsobe001.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.204)
	by server-2.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	6 Feb 2014 19:11:45 -0000
Received: from mail30-am1-R.bigfish.com (10.3.201.234) by
	AM1EHSOBE017.bigfish.com (10.3.207.139) with Microsoft SMTP Server id
	14.1.225.22; Thu, 6 Feb 2014 19:11:44 +0000
Received: from mail30-am1 (localhost [127.0.0.1])	by mail30-am1-R.bigfish.com
	(Postfix) with ESMTP id 8C31C3A0475;
	Thu,  6 Feb 2014 19:11:44 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail30-am1 (localhost.localdomain [127.0.0.1]) by mail30-am1
	(MessageSwitch) id 1391713902872401_14057;
	Thu,  6 Feb 2014 19:11:42 +0000 (UTC)
Received: from AM1EHSMHS014.bigfish.com (unknown [10.3.201.241])	by
	mail30-am1.bigfish.com (Postfix) with ESMTP id D0BA8200215;
	Thu,  6 Feb 2014 19:11:42 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by AM1EHSMHS014.bigfish.com
	(10.3.207.152) with Microsoft SMTP Server id 14.16.227.3;
	Thu, 6 Feb 2014 19:11:42 +0000
X-WSS-ID: 0N0L9BF-07-5GT-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2F407CAE654;	Thu,  6 Feb 2014 13:11:38 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 13:11:57 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 14:11:38 -0500
Message-ID: <52F3DE66.7010800@amd.com>
Date: Thu, 6 Feb 2014 13:11:34 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391636619-1703-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52F370EC0200007800119B43@nat28.tlf.novell.com>
In-Reply-To: <52F370EC0200007800119B43@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: andrew.cooper3@citrix.com, keir@xen.org, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/6/2014 4:24 AM, Jan Beulich wrote:
>>>> On 05.02.14 at 22:43, Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com> wrote:
>> +
>>   		rdmsrl(MSR_AMD64_LS_CFG, value);
>>   		if (!(value & (1 << 15))) {
>>   			static bool_t warned;
> The patch context even shows what is missing: A diagnostic
> message making it possible to know that the workaround was
> applied. Of course you don't need two separate messages for
> the two parts of the workaround, but indicating in the message
> which of them was applied would seem desirable.
>
> Furthermore, I don't see why you would need a new local
> variable here at all - there are two suitable variables available
> throughout the entire function (l and h).
>
>
Okay, corrected the patch as per your comments.
Sending it out as V3.

-Aravind


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUMm-00072j-1X; Thu, 06 Feb 2014 19:12:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBUMk-00072Y-Um
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 19:12:03 +0000
Received: from [85.158.139.211:49066] by server-10.bemta-5.messagelabs.com id
	4D/A8-08578-28ED3F25; Thu, 06 Feb 2014 19:12:02 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391713920!2184618!1
X-Originating-IP: [65.55.88.11]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14974 invoked from network); 6 Feb 2014 19:12:01 -0000
Received: from tx2ehsobe001.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.11)
	by server-13.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	6 Feb 2014 19:12:01 -0000
Received: from mail64-tx2-R.bigfish.com (10.9.14.247) by
	TX2EHSOBE011.bigfish.com (10.9.40.31) with Microsoft SMTP Server id
	14.1.225.22; Thu, 6 Feb 2014 19:11:59 +0000
Received: from mail64-tx2 (localhost [127.0.0.1])	by mail64-tx2-R.bigfish.com
	(Postfix) with ESMTP id A0076104043D;
	Thu,  6 Feb 2014 19:11:59 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(zz13e6Kzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h177df4h8275eh17326ah8275bh1de097h186068ha1495iz2dh839he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail64-tx2 (localhost.localdomain [127.0.0.1]) by mail64-tx2
	(MessageSwitch) id 1391713916834282_28702;
	Thu,  6 Feb 2014 19:11:56 +0000 (UTC)
Received: from TX2EHSMHS014.bigfish.com (unknown [10.9.14.244])	by
	mail64-tx2.bigfish.com (Postfix) with ESMTP id 9F9D1A201BC;
	Thu,  6 Feb 2014 19:11:56 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by TX2EHSMHS014.bigfish.com
	(10.9.99.114) with Microsoft SMTP Server id 14.16.227.3; Thu, 6 Feb 2014
	19:11:53 +0000
X-WSS-ID: 0N0L9BR-08-J94-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	20C95D16018;	Thu,  6 Feb 2014 13:11:51 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 13:12:11 -0600
Received: from autotest-xen-olivehill.amd.com (10.180.168.240) by
	satlexdag04.amd.com (10.181.40.9) with Microsoft SMTP Server id
	14.2.328.9; Thu, 6 Feb 2014 14:11:51 -0500
From: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
To: <jbeulich@suse.com>, <suravee.suthikulpanit@amd.com>,
	<xen-devel@lists.xen.org>, <keir@xen.org>, <andrew.cooper3@citrix.com>
Date: Thu, 6 Feb 2014 13:33:12 -0600
Message-ID: <1391715192-1766-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.8.1.2
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH V3] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The workaround for this erratum will be present only in BIOSes released
from January 2014 onwards, but initial production parts already shipped
in 2013. Since this leaves a coverage hole, we should carry the fix in
software in case the BIOS does not apply it or an older BIOS is in use.

Refer to the Revision Guide for AMD F16h models 00h-0fh, document 51810
Rev. 3.04, November 2013, for details on the erratum.

Tested the patch on a Fam16h server platform; it works fine.

Changes in V2: (per Andrew Cooper's comments)
	- Move pci_val into the same scope
	- Rework indentation to match Linux style
Changes in V3: (per Jan Beulich's comments)
	- Remove pci_val; use 'l' and 'h' instead
	- Print a warning message to the hypervisor log

Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/cpu/amd.c | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 3307141..d83906a 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -477,6 +477,42 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
 		       " all your (PV) guest kernels. ***\n");
 
 	if (c->x86 == 0x16 && c->x86_model <= 0xf) {
+		/*
+		 * Apply workaround for erratum 792
+		 * Description:
+		 * The processor does not ensure the DRAM scrub read/write
+		 * sequence is atomic wrt accesses to the CC6 save state
+		 * area. Therefore, if a concurrent scrub read/write access
+		 * is to the same address, the entry may appear as if it is
+		 * not written. This quirk applies to Fam16h models 00h-0Fh.
+		 *
+		 * See "Revision Guide" for AMD F16h models 00h-0fh,
+		 * document 51810 rev. 3.04, Nov 2013
+		 *
+		 * Equivalent Linux patch link:
+		 * http://marc.info/?l=linux-kernel&m=139066012217149&w=2
+		 */
+		if (smp_processor_id() == 0) {
+			l = pci_conf_read32(0, 0, 0x18, 0x3, 0x58);
+			h = pci_conf_read32(0, 0, 0x18, 0x3, 0x5c);
+			printk(KERN_WARNING
+			       "CPU%u: Applying workaround for erratum 792:%s %s %s\n",
+			       smp_processor_id(),
+			       (l & 0x1f) ? "clearing bits 0-4 of D18F3x58" : "",
+			       ((l & 0x1f) && (h & 0x1)) ? "and" : "",
+			       (h & 0x1) ? "clearing bit 0 of D18F3x5C" : "");
+
+			if (l & 0x1f) {
+				l &= ~0x1f;
+				pci_conf_write32(0, 0, 0x18, 0x3, 0x58, l);
+			}
+
+			if (h & 0x1) {
+				h &= ~0x1;
+				pci_conf_write32(0, 0, 0x18, 0x3, 0x5c, h);
+			}
+		}
+
 		rdmsrl(MSR_AMD64_LS_CFG, value);
 		if (!(value & (1 << 15))) {
 			static bool_t warned;
-- 
1.8.1.2



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:23:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:23:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUXb-0007di-Jc; Thu, 06 Feb 2014 19:23:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBUXa-0007dP-Iv
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 19:23:14 +0000
Received: from [85.158.137.68:35393] by server-8.bemta-3.messagelabs.com id
	28/48-16039-121E3F25; Thu, 06 Feb 2014 19:23:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391714590!159047!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28007 invoked from network); 6 Feb 2014 19:23:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 19:23:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98695667"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 19:23:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 14:23:10 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXR-0005x1-Nf;
	Thu, 06 Feb 2014 19:23:05 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXR-0006HZ-Gl;
	Thu, 06 Feb 2014 19:23:05 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 6 Feb 2014 19:23:00 +0000
Message-ID: <1391714580-24074-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 2/2] libxl: test programs: Fix make race re
	libxenlight.so
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The test programs were getting the real libxenlight.so on their link
line.  Filter it out.  Also change the soname of the test library to
match the real one's, so that libxlutil is satisfied with it.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 4af9033..dab2929 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -195,7 +195,7 @@ libxenlight.so.$(MAJOR).$(MINOR): $(LIBXL_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
 
 libxenlight_test.so: $(LIBXL_OBJS) $(LIBXL_TEST_OBJS)
-	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight_test.so $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LIBXL_LIBS) $(APPEND_LDFLAGS)
 
 libxenlight.a: $(LIBXL_OBJS)
 	$(AR) rcs libxenlight.a $^
@@ -216,7 +216,7 @@ xl: $(XL_OBJS) libxlutil.so libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(XL_OBJS) libxlutil.so $(LDLIBS_libxenlight) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
 
 test_%: test_%.o test_common.o libxlutil.so libxenlight_test.so
-	$(CC) $(LDFLAGS) -o $@ $^ $(LDLIBS_libxenlight) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
+	$(CC) $(LDFLAGS) -o $@ $^ $(filter-out %libxenlight.so, $(LDLIBS_libxenlight)) $(LDLIBS_libxenctrl) -lyajl $(APPEND_LDFLAGS)
 
 libxl-save-helper: $(SAVE_HELPER_OBJS) libxenlight.so
 	$(CC) $(LDFLAGS) -o $@ $(SAVE_HELPER_OBJS) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(APPEND_LDFLAGS)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:23:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:23:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUXb-0007dW-66; Thu, 06 Feb 2014 19:23:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBUXZ-0007dI-Jp
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 19:23:13 +0000
Received: from [85.158.137.68:35357] by server-16.bemta-3.messagelabs.com id
	3B/EE-29917-021E3F25; Thu, 06 Feb 2014 19:23:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391714590!159047!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27971 invoked from network); 6 Feb 2014 19:23:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 19:23:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98695657"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 19:23:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 14:23:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXQ-0005wx-Eu;
	Thu, 06 Feb 2014 19:23:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXQ-0006HU-7w;
	Thu, 06 Feb 2014 19:23:04 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 6 Feb 2014 19:22:59 +0000
Message-ID: <1391714580-24074-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: test programs: Fix Makefile race re
	headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We need to include the new TEST_PROG_OBJS and LIBXL_TEST_OBJS in the
appropriate dependencies.  Otherwise we risk trying to build the test
programs before gentypes has been run.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 66f3f3f..4af9033 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -130,7 +130,7 @@ all: $(CLIENTS) $(TEST_PROGS) \
 	$(AUTOSRCS) $(AUTOINCS)
 
 $(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS) \
-		$(LIBXL_TEST_OBJS): \
+		$(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): \
 	$(AUTOINCS) libxl.api-ok
 
 %.c %.h:: %.y
@@ -175,8 +175,9 @@ libxl_internal.h: _libxl_types_internal.h _paths.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 xl.h: _paths.h
 
-$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS): libxl.h
-$(LIBXL_OBJS): libxl_internal.h
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS) $(LIBXLU_OBJS) \
+	$(XL_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): libxl.h
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
 _libxl_type%.h _libxl_type%_json.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
 	$(PYTHON) gentypes.py libxl_type$*.idl __libxl_type$*.h __libxl_type$*_json.h __libxl_type$*.c
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:23:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:23:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUXT-0007d3-Pw; Thu, 06 Feb 2014 19:23:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBUXS-0007cv-AH
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 19:23:06 +0000
Received: from [85.158.137.68:60959] by server-2.bemta-3.messagelabs.com id
	0D/31-06531-911E3F25; Thu, 06 Feb 2014 19:23:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391714583!158089!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30325 invoked from network); 6 Feb 2014 19:23:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 19:23:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98695624"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 19:23:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 14:23:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXO-0005wu-0s;
	Thu, 06 Feb 2014 19:23:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXN-0006HJ-QF;
	Thu, 06 Feb 2014 19:23:01 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 6 Feb 2014 19:22:58 +0000
Message-ID: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2 0/2] libxl: test programs: fix Makefile races
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 1/2 libxl: test programs: Fix Makefile race re headers
 2/2 libxl: test programs: Fix make race re libxenlight.so

Patch 2 is new in this version.

I have pushed this here, too:
  http://xenbits.xen.org/gitweb/?p=people/iwj/xen.git;a=shortlog;h=refs/heads/wip.libxl-test-makefile-race


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-VirusChecked: Checked
Received: (qmail 27971 invoked from network); 6 Feb 2014 19:23:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 19:23:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,795,1384300800"; d="scan'208";a="98695657"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 19:23:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 14:23:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXQ-0005wx-Eu;
	Thu, 06 Feb 2014 19:23:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBUXQ-0006HU-7w;
	Thu, 06 Feb 2014 19:23:04 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Thu, 6 Feb 2014 19:22:59 +0000
Message-ID: <1391714580-24074-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH 1/2] libxl: test programs: Fix Makefile race re
	headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We need to include the new TEST_PROG_OBJS and LIBXL_TEST_OBJS in the
appropriate dependencies.  Otherwise we risk trying to build the test
program before gentypes is run.
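
The class of race the patch addresses can be sketched with a minimal, hypothetical Makefile fragment (the file and variable names below are invented for illustration and only mirror the real libxl Makefile loosely): every object built from a generated header must list that header as a prerequisite, or a parallel `make -j` run may schedule the compile before the generator has produced its output:

```make
# Hypothetical sketch of the ordering fix; not the actual libxl Makefile.
GEN_HEADERS := gen_types.h          # stands in for the _libxl_types*.h files
TEST_OBJS   := test_prog.o          # stands in for TEST_PROG_OBJS

all: $(TEST_OBJS)

# The fix: make the test objects depend on the generated headers, so
# "make -j" cannot compile them before the generator has run.
$(TEST_OBJS): $(GEN_HEADERS)

$(GEN_HEADERS): types.idl
	./gentypes.py $< > $@

%.o: %.c
	$(CC) -c -o $@ $<
```

Without the `$(TEST_OBJS): $(GEN_HEADERS)` line a serial build usually succeeds by accident, because some other target triggers the generator first; that is why this kind of race typically only surfaces under parallel make.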

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>
Cc: Ian Campbell <Ian.Campbell@citrix.com>
---
 tools/libxl/Makefile |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 66f3f3f..4af9033 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -130,7 +130,7 @@ all: $(CLIENTS) $(TEST_PROGS) \
 	$(AUTOSRCS) $(AUTOINCS)
 
 $(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS) \
-		$(LIBXL_TEST_OBJS): \
+		$(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): \
 	$(AUTOINCS) libxl.api-ok
 
 %.c %.h:: %.y
@@ -175,8 +175,9 @@ libxl_internal.h: _libxl_types_internal.h _paths.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 xl.h: _paths.h
 
-$(LIBXL_OBJS) $(LIBXLU_OBJS) $(XL_OBJS) $(SAVE_HELPER_OBJS): libxl.h
-$(LIBXL_OBJS): libxl_internal.h
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS) $(LIBXLU_OBJS) \
+	$(XL_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): libxl.h
+$(LIBXL_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
 _libxl_type%.h _libxl_type%_json.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
 	$(PYTHON) gentypes.py libxl_type$*.idl __libxl_type$*.h __libxl_type$*_json.h __libxl_type$*.c
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 19:24:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 19:24:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBUYK-0007m6-1b; Thu, 06 Feb 2014 19:24:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <donbrearley@hibbing.edu>) id 1WBUYI-0007lp-PJ
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 19:23:58 +0000
Received: from [193.109.254.147:61551] by server-10.bemta-14.messagelabs.com
	id 6B/90-10711-E41E3F25; Thu, 06 Feb 2014 19:23:58 +0000
X-Env-Sender: donbrearley@hibbing.edu
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391714636!2582427!1
X-Originating-IP: [134.29.200.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24734 invoked from network); 6 Feb 2014 19:23:57 -0000
Received: from hibbing.edu (HELO hibbing.edu) (134.29.200.12)
	by server-5.tower-27.messagelabs.com with SMTP;
	6 Feb 2014 19:23:57 -0000
Received: from SMAIL-DOM-MTA by hibbing.edu
	with Novell_GroupWise; Thu, 06 Feb 2014 13:20:17 -0600
Message-Id: <52F38C02020000260003A5C1@hibbing.edu>
X-Mailer: Novell GroupWise Internet Agent 8.0.2 
Date: Thu, 06 Feb 2014 13:20:02 -0600
From: "Don Brearley" <donbrearley@hibbing.edu>
To: <andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [test day] Windows SBS 2011 random hang and
 resource utilization on 4.4-rc2 and rc3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> Andrew Cooper  02/06/14 12:09 PM >>>
On 06/02/14 18:03, Don Brearley wrote:
> Hello, 
>
>
> I know it's not officially "test day" but I've been toying with Xen 4.4-rc2 and -rc3 and I have several successful linux
> PV guests running on 3.13.1.  Works fantastic.
>
>
> However, I have an SBS 2011 64bit HVM installation (no GPLPV drivers) and I routinely get a complete hang.  The VM 
> goes 'stateless' in 'xl list' (shows the domain state as ------).  Cannot access via RDP, cannot access via VNC, cannot
> ping, etc.   Just totally dead, yet the VM "lives".  Issuing 'xl shutdown -F domain' does not kill it, I have to use
> "destroy" to remove the domain.  I didn't install GPLPV because I didn't see they were supported for SBS 2011.
>
>
> No errors in the xen or qemu logs, nothing in the event log, nothing in dmesg/messages on dom0.
>
>
> I can't seem to find anything to trigger it... it just, happens, seemingly at random. 
>
>
> "xl top" shows the domain as using 200% CPU when this happens.... in one case I left the VM running like that
> overnight, and returned, and it had come up to 300% utilization.
>
>
> The domain sits on a physical disk (/dev/sdd) which is replicated via DRBD to another server. 
>
>
> Any hints? Suggestions?

Boot with "guest_loglvl=all" on the Xen command line.

`xl debug-keys q` and `xl dmesg` to see what is reported from Xen as far
as the guest is concerned.

`xen-hvmctx $domid` to see the architectural state of it.

Both would be useful as a starting point.
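
The suggested commands amount to a short collection sequence. A minimal sketch, assuming a POSIX shell on the dom0 (the script name and the dry-run fallback are invented here for illustration; `xl` and `xen-hvmctx` are the real tools named above and need root to do anything useful):

```shell
#!/bin/sh
# collect-guest-diag.sh [domid] -- sketch of gathering the suggested state.
# Falls back to printing the commands when the Xen tools are not installed
# (or when DRY_RUN=1), so the sequence can be checked on any machine.
domid="${1:-0}"                      # default domid 0 purely for demonstration
command -v xl >/dev/null 2>&1 || DRY_RUN=1

run() {
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Dump per-domain info from Xen itself ('q' debug key), then read the log back.
run xl debug-keys q
run xl dmesg

# Architectural (register/HVM context) state of the stuck guest.
run xen-hvmctx "$domid"
```

Running it with `DRY_RUN=1` just lists the three commands in order, which is handy for pasting into a bug report.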

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




Hi Andrew,


I do boot xen with "multiboot /boot/xen-4.gz dom0_mem=4096M loglvl=all guest_loglvl=all"


I will try those commands you suggested when I encounter the next hang, and will report it
back to you and the list.


Thanks much!
- Don


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 20:21:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 20:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBVRA-00021g-D4; Thu, 06 Feb 2014 20:20:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBVR8-00021a-C8
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 20:20:38 +0000
Received: from [85.158.139.211:34061] by server-16.bemta-5.messagelabs.com id
	A1/CC-05060-59EE3F25; Thu, 06 Feb 2014 20:20:37 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-206.messagelabs.com!1391718036!2191260!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19126 invoked from network); 6 Feb 2014 20:20:36 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 6 Feb 2014 20:20:36 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391718036; l=269;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=SGRDsdEF2AZiE1ExENtl+py9LOU=;
	b=IZ2UY6VXFEKKRHM8ZHQL3F6bz2wTBd6JqFJIOh1pSat3mXGZYXksiiHVuoVmakXx+lw
	Lr5iS1hOgVv5fkbyCy8AHT8wEIY1vVm3SbxibGojKvb4/3rhCMy3k2VJm7jgoiXXySs10
	y0kKRgOqldA+2Fzq0MtPReXPB21Nr+uSBRk=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id L0755cq16KKa97j
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 6 Feb 2014 21:20:36 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id E556450269; Thu,  6 Feb 2014 21:20:35 +0100 (CET)
Date: Thu, 6 Feb 2014 21:20:35 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Message-ID: <20140206202035.GA489@aepfle.de>
References: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/2] libxl: test programs: fix Makefile
	races
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, Ian Jackson wrote:

>  1/2 libxl: test programs: Fix Makefile race re headers
>  2/2 libxl: test programs: Fix make race re libxenlight.so
> 
> Patch 2 is new in this version.

Yes, these two patches fix the build failures for me.
Thanks! 

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 21:29:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 21:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBWVj-0004Nx-8V; Thu, 06 Feb 2014 21:29:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Keith.Moyer@netapp.com>) id 1WBWVh-0004Ns-Uu
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 21:29:26 +0000
Received: from [85.158.137.68:63951] by server-8.bemta-3.messagelabs.com id
	D3/C3-16039-5BEF3F25; Thu, 06 Feb 2014 21:29:25 +0000
X-Env-Sender: Keith.Moyer@netapp.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391722162!170559!1
X-Originating-IP: [216.240.18.38]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15382 invoked from network); 6 Feb 2014 21:29:24 -0000
Received: from mx1.netapp.com (HELO mx1.netapp.com) (216.240.18.38)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 21:29:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384329600"; d="scan'208";a="306065852"
Received: from vmwexceht05-prd.hq.netapp.com ([10.106.77.35])
	by mx1-out.netapp.com with ESMTP; 06 Feb 2014 13:29:21 -0800
Received: from SACEXCMBX03-PRD.hq.netapp.com ([169.254.5.58]) by
	vmwexceht05-prd.hq.netapp.com ([10.106.77.35]) with mapi id
	14.03.0123.003; Thu, 6 Feb 2014 13:20:36 -0800
From: "Moyer, Keith" <Keith.Moyer@netapp.com>
To: Ian Campbell <ijc@hellion.org.uk>, Gerd Hoffmann <kraxel@redhat.com>
Thread-Topic: [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
Thread-Index: Ac8jCwmrvbRUweyWSfmzekOxQTa5/gATes+AAAArqQAACXfZcA==
Date: Thu, 6 Feb 2014 21:20:35 +0000
Message-ID: <79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
In-Reply-To: <1391675812.22033.2.camel@dagon.hellion.org.uk>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.106.53.53]
MIME-Version: 1.0
Cc: "seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>, "Hoyer, David" <David.Hoyer@netapp.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thank you all for your quick, helpful replies!
This was about to become a headache for us, so the identification of a fix so quickly is fantastic.

On Thu, 2014-02-06 at 02:31 -0600, Gerd Hoffmann wrote:
> That commit made seabios size (default qemu config, gcc 4.7+) jump 
> from 128k to 256k in size because the code didn't fit into 128k any more.

Makes sense.  Thanks for the explanation.

On Thu, 2014-02-06 at 02:37 -0600, Ian Campbell wrote:
> I think this was fixed in Xen with
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5f2875739beef3a75c7a7e8579b6cbcb464e61b3

I have confirmed that applying that patch to our Xen sources fixes the problem.

I've filed a bug with Debian's Xen package on this matter, with the hope they'll backport that commit.
http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737905


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 21:40:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 21:40:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBWgS-0004pW-LS; Thu, 06 Feb 2014 21:40:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBWgR-0004pR-7J
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 21:40:31 +0000
Received: from [85.158.143.35:34941] by server-3.bemta-4.messagelabs.com id
	66/35-11539-E4104F25; Thu, 06 Feb 2014 21:40:30 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391722829!3750177!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14489 invoked from network); 6 Feb 2014 21:40:29 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 21:40:29 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391722829; l=708;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=BYKKO0lttzZQGRR3+LibYtqBGBU=;
	b=KvIJjC+/wYOYdMv2nXJjHcSOh63kNRgav4PT6huMWuZC1nL524Txhq+t08DxqM3efDT
	LjbFRcz8A3XZKRSgx3khcPF6x+3Tc2qpItLMNthygVgk5Olr2j+uqpcw0yMhesbamOkNE
	cDqzGDYVWY1MzpdpmB8W8q/EfFORdVqZy1Q=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id K04bffq16LeJ8BJ
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 6 Feb 2014 22:40:19 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 3AE8F50269; Thu,  6 Feb 2014 22:40:19 +0100 (CET)
Date: Thu, 6 Feb 2014 22:40:18 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140206214018.GA14658@aepfle.de>
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, Adel Amani wrote:

> I checked XCFLAGS_DEBUG, which is defined at line 27 of xenguest.h as
> #define XCFLAGS_DEBUG (1 << 1)
> I did 2 migrations with lvl = XTL_DEBUG; and lvl = XTL_DETAIL; the results are
> attached, but I still have no log of dirty pages (dirty memory) or downtime :-(

Did you already follow the code paths to see why nothing is printed?
Does a simple 'fprintf(stderr,"STDERR\n"); fprintf(stdout,"STDOUT\n");'
appear in xend.log?

The downtime is the time during domU suspend/resume. It's best measured
by looking at domU dmesg. Boot the domU with 'initcall_debug' to get an
understanding of what's going on during such a suspend/resume cycle.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:05:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBX4Q-0005dp-Hh; Thu, 06 Feb 2014 22:05:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mjt@tls.msk.ru>) id 1WBX4O-0005dk-QB
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 22:05:17 +0000
Received: from [85.158.139.211:21851] by server-7.bemta-5.messagelabs.com id
	55/8C-14867-C1704F25; Thu, 06 Feb 2014 22:05:16 +0000
X-Env-Sender: mjt@tls.msk.ru
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391724313!2208566!1
X-Originating-IP: [86.62.121.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4911 invoked from network); 6 Feb 2014 22:05:14 -0000
Received: from isrv.corpit.ru (HELO isrv.corpit.ru) (86.62.121.231)
	by server-16.tower-206.messagelabs.com with SMTP;
	6 Feb 2014 22:05:14 -0000
Received: from [192.168.88.2] (mjt.vpn.tls.msk.ru [192.168.177.99])
	by isrv.corpit.ru (Postfix) with ESMTP id 1B4F840C1B;
	Fri,  7 Feb 2014 02:05:12 +0400 (MSK)
Message-ID: <52F40718.4070904@msgid.tls.msk.ru>
Date: Fri, 07 Feb 2014 02:05:12 +0400
From: Michael Tokarev <mjt@tls.msk.ru>
Organization: Telecom Service, JSC
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: "Moyer, Keith" <Keith.Moyer@netapp.com>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
In-Reply-To: <79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
X-Enigmail-Version: 1.5.1
OpenPGP: id=804465C5
Cc: Jan Beulich <JBeulich@suse.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Gerd Hoffmann <kraxel@redhat.com>, "Hoyer, David" <David.Hoyer@netapp.com>,
	Ian Campbell <ijc@hellion.org.uk>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

07.02.2014 01:20, Moyer, Keith wrote:
> Thank you all for your quick, helpful replies!
> This was about to become a headache for us, so the identification of a fix so quickly is fantastic.
> 
> On Thu, 2014-02-06 at 02:31 -0600, Gerd Hoffmann wrote:
>> That commit made seabios size (default qemu config, gcc 4.7+) jump 
>> from 128k to 256k in size because the code didn't fit into 128k any more.
> 
> Makes sense.  Thanks for the explanation.
> 
> On Thu, 2014-02-06 at 02:37 -0600, Ian Campbell wrote:
>> I think this was fixed in Xen with
>> http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5f2875739beef3a75c7a7e8579b6cbcb464e61b3
> 
> I have confirmed that applying that patch to our Xen sources fixes the problem.
> 
> I've filed a bug with Debian's Xen package on this matter, with the hope they'll backport that commit.
> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737905

Meanwhile I just uploaded another release of seabios to the
Debian archive which brings the size of bios.bin back to 128Kb,
by disabling XHCI (usb3.0) and PVSCSI -- 1.7.4-3.

So things should be working fine on the debian side again now
(at least on sid), even without patching xen.

And it looks like we should re-think what we remove from
our (qemu) "stripped-down" bios - as it turns out, Xen
actually uses the bios from qemu unchanged...  But it is
really really fragile - on my build (gcc 4.7), with 2
options removed, 128Kb is 99.7% used ;)

Thanks,

/mjt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
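Michael's fix — dropping XHCI (USB 3.0) and PVSCSI from the Debian SeaBIOS build to get bios.bin back under 128 KiB — corresponds to turning off two SeaBIOS Kconfig options. A sketch of the relevant `.config` fragment; the exact symbol names are assumptions from SeaBIOS's Kconfig, not taken from the Debian packaging itself:

```
# SeaBIOS .config fragment (assumed symbol names)
# CONFIG_USB_XHCI is not set      -- USB 3.0 host controller support
# CONFIG_PVSCSI is not set        -- paravirtual SCSI driver
```

As Michael notes, even with those two options removed the 128 KiB image was 99.7% full on his gcc 4.7 build, so this only buys a little headroom.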

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:12:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXAx-00066Y-S4; Thu, 06 Feb 2014 22:12:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBXAw-00066T-9f
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 22:12:02 +0000
Received: from [85.158.137.68:13204] by server-7.bemta-3.messagelabs.com id
	67/EC-13775-1B804F25; Thu, 06 Feb 2014 22:12:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391724718!177881!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7135 invoked from network); 6 Feb 2014 22:12:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 22:12:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="98744012"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 22:11:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 17:11:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBXAr-0006pk-At;
	Thu, 06 Feb 2014 22:11:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBXAr-0002PE-9C;
	Thu, 06 Feb 2014 22:11:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24750-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 22:11:57 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24750: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3408589442542305310=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3408589442542305310==
Content-Type: text/plain

flight 24750 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24750/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24743
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24743
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24743
 build-amd64                   4 xen-build                 fail REGR. vs. 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  2efcb0193bf3916c8ce34882e845f5ceb1e511f7
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 643 lines long.)


--===============3408589442542305310==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3408589442542305310==--

 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  2efcb0193bf3916c8ce34882e845f5ceb1e511f7
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 643 lines long.)


--===============3408589442542305310==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3408589442542305310==--

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:16:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXF3-0006EF-KW; Thu, 06 Feb 2014 22:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBXF1-0006EA-PR
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 22:16:16 +0000
Received: from [85.158.143.35:54253] by server-2.bemta-4.messagelabs.com id
	15/43-10891-FA904F25; Thu, 06 Feb 2014 22:16:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391724972!3745583!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6074 invoked from network); 6 Feb 2014 22:16:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 22:16:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="98744961"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 22:16:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 17:16:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBXEx-0006rH-Dp;
	Thu, 06 Feb 2014 22:16:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBXEx-0003nr-D9;
	Thu, 06 Feb 2014 22:16:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24747-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 22:16:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24747: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6410967515276041676=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6410967515276041676==
Content-Type: text/plain

flight 24747 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24747/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep           fail REGR. vs. 24699

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
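
    The unsigned-arithmetic sanitisation described in the commit message
    can be sketched as below.  This is a minimal illustration only: the
    struct layout and the names (struct ring, prod, cons, size,
    raw_get_data_ready) are assumptions for the sketch, not the actual
    libvchan code.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    struct ring {
        uint32_t prod;  /* producer index, writable by the (hostile) peer */
        uint32_t cons;  /* consumer index, writable by the (hostile) peer */
        uint32_t size;  /* ring size in bytes, set locally and trusted */
    };

    /* Bytes ready for reading, robust against mad index values.  The
     * subtraction is unsigned, so a "negative" difference wraps to a
     * huge value and is caught by the single > size comparison. */
    static int raw_get_data_ready(const struct ring *r)
    {
        uint32_t ready = r->prod - r->cons;  /* unsigned: wraps, never UB */
        if (ready > r->size)                 /* mad ring state detected */
            ready = r->size;                 /* clamp to a safe value   */
        return (int)ready;                   /* now provably in [0, size] */
    }

    int main(void)
    {
        /* cons > prod: unsigned subtraction wraps to ~2^32, gets clamped */
        struct ring bad = { .prod = 5, .cons = 100, .size = 64 };
        printf("%d\n", raw_get_data_ready(&bad));  /* prints 64 */

        /* sane ring state passes through unchanged */
        struct ring ok = { .prod = 10, .cons = 4, .size = 64 };
        printf("%d\n", raw_get_data_ready(&ok));   /* prints 6 */
        return 0;
    }
    ```

    The point of the single unsigned comparison is that it covers both
    failure modes at once: an index pair that would yield a negative
    count and one that would exceed the buffer size both land above
    size after wraparound, so no separate signed check is needed.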

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
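
    The range-check-before-allocation pattern this fix describes can be
    sketched as below.  All names here (copy_bounded_string, the
    PAGE_SIZE limit constant) are illustrative assumptions for the
    sketch, not the actual FLASK hypercall code.

    ```c
    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096  /* arbitrary upper bound, as for the trusted-only ops */

    /* Copy a string whose length is supplied by the guest.  The size is
     * validated against a fixed limit BEFORE any allocation based on it,
     * so a huge or zero guest-chosen size is rejected outright. */
    static char *copy_bounded_string(const char *guest_buf, size_t guest_size,
                                     size_t limit)
    {
        char *s;

        if (guest_size == 0 || guest_size > limit)  /* range check first */
            return NULL;                            /* reject before malloc */

        s = malloc(guest_size + 1);                 /* cannot overflow: bounded */
        if (!s)
            return NULL;
        memcpy(s, guest_buf, guest_size);
        s[guest_size] = '\0';
        return s;
    }
    ```

    Validating the guest-supplied size against a known bound before the
    allocation is what closes the hole: once guest_size is known to be
    at most limit, the guest_size + 1 arithmetic and the allocation it
    feeds can no longer be driven out of range by the guest.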
(qemu changes not included)


--===============6410967515276041676==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6410967515276041676==--

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:16:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXF3-0006EF-KW; Thu, 06 Feb 2014 22:16:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBXF1-0006EA-PR
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 22:16:16 +0000
Received: from [85.158.143.35:54253] by server-2.bemta-4.messagelabs.com id
	15/43-10891-FA904F25; Thu, 06 Feb 2014 22:16:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391724972!3745583!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6074 invoked from network); 6 Feb 2014 22:16:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 22:16:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="98744961"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 22:16:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 17:16:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBXEx-0006rH-Dp;
	Thu, 06 Feb 2014 22:16:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBXEx-0003nr-D9;
	Thu, 06 Feb 2014 22:16:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24747-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 22:16:11 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24747: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6410967515276041676=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6410967515276041676==
Content-Type: text/plain

flight 24747 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24747/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              3 host-build-prep           fail REGR. vs. 24699

Tests which did not succeed, but are not blocking:
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-GÃ³recki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             broken  
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
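The unsigned-arithmetic sanitisation described in the XSA-86 proof sketch above can be illustrated with a minimal C sketch. The function name raw_get_data_ready comes from the commit message, but the signature, the ring_size parameter, and the clamp-to-zero policy are assumptions for illustration, not libvchan's actual code:

```c
#include <stdint.h>

/* Sketch only: prod and cons are ring indices a (possibly hostile) peer
 * can set to any value at any time.  Unsigned subtraction wraps, so a
 * "negative" amount of ready data shows up as a huge value > ring_size
 * and is rejected, and only a sane in-range value is ever returned. */
static uint32_t raw_get_data_ready(uint32_t prod, uint32_t cons,
                                   uint32_t ring_size)
{
    uint32_t ready = prod - cons;   /* wraps instead of going negative */

    if (ready > ring_size)
        return 0;                   /* mad ring state: sanitise to empty */
    return ready;                   /* safe to convert to int downstream */
}
```

The point of the proof sketch is exactly this property: whatever the peer writes into the shared indices, the value handed to the rest of the code is bounded by the ring size.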

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)
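The XSA-84 fix above follows a general pattern: range-check a guest-supplied size before allocating based on it. A hypothetical sketch of that pattern (copyin_string is an illustrative helper, not Xen's actual flask code; the PAGE_SIZE upper bound for the trusted interfaces comes from the commit message):

```c
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* Sketch: never allocate from a guest-controlled size before bounding it.
 * "maxlen" would be PAGE_SIZE for the policy-restricted interfaces, or
 * the longest boolean name for the FLASK_[GS]ETBOOL case. */
static char *copyin_string(const char *guest_buf, size_t size, size_t maxlen)
{
    char *str;

    if (size > maxlen)              /* guest controls size: reject first */
        return NULL;
    str = malloc(size + 1);         /* now safe: size is bounded */
    if (!str)
        return NULL;
    memcpy(str, guest_buf, size);   /* stand-in for copy_from_guest */
    str[size] = '\0';
    return str;
}
```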


--===============6410967515276041676==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6410967515276041676==--

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:19:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXHo-0006fr-AS; Thu, 06 Feb 2014 22:19:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WBXHn-0006fb-Ij
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 22:19:07 +0000
Received: from [85.158.139.211:36308] by server-7.bemta-5.messagelabs.com id
	C2/B4-14867-A5A04F25; Thu, 06 Feb 2014 22:19:06 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391725144!2227372!1
X-Originating-IP: [209.85.216.50]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18567 invoked from network); 6 Feb 2014 22:19:06 -0000
Received: from mail-qa0-f50.google.com (HELO mail-qa0-f50.google.com)
	(209.85.216.50)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 22:19:06 -0000
Received: by mail-qa0-f50.google.com with SMTP id cm18so3846222qab.23
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 14:19:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=jc+cwYyYWuDHJQ92CLkqr1loZGGQOnDfF7Yd86w90nc=;
	b=O7Edk0sjOUQjWx8JJ96C9o1lsGVm5yjaWplQgUEwCQyAL0qiy2Dg2NDVP6v0Nxniom
	kKP+n9UfZLaav6KrfmfnYTL5HfN2wjYaKnzRKgwxldpui4QeKJNamgk5Kha1cVovN5S9
	/PpLguu7LMLNVUJRjAQjtcQnsca7L8ST2/Rp71xU1YtBbgszD+0lnTmBXKcoYUc/u4re
	zAjce0VigMn6y5n1qEWnX/VoW7HXgLp9AnTQ5iP6VqblJAm09SwajHHRxtXsXQ1uv9Xb
	1Ii8WUcjl7ZecTVSXmLBgb0LJ6UHnSTfw7b98+9jZnJqTPe7+dYEvX6ldw+T9x125OEd
	Ks3w==
MIME-Version: 1.0
X-Received: by 10.224.157.7 with SMTP id z7mr16216847qaw.37.1391725064790;
	Thu, 06 Feb 2014 14:17:44 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Thu, 6 Feb 2014 14:17:44 -0800 (PST)
Date: Thu, 6 Feb 2014 17:17:44 -0500
Message-ID: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello

As far as I can see, there is no support for hugepages in Xen for PV kernels.
Is this something planned for the future, or is there really no need to
have this implemented?

I came across this paper: https://www.kernel.org/doc/ols/2011/ols2011-gadre.pdf.
I'm not sure whether it was ever presented to the Xen community; I can't find
any code related to it.

Thank you!
-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:54:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXpH-0008BQ-Br; Thu, 06 Feb 2014 22:53:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBXpG-0008BL-LA
	for xen-devel@lists.xenproject.org; Thu, 06 Feb 2014 22:53:42 +0000
Received: from [85.158.143.35:16243] by server-1.bemta-4.messagelabs.com id
	22/6E-31661-67214F25; Thu, 06 Feb 2014 22:53:42 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391727220!3764415!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16849 invoked from network); 6 Feb 2014 22:53:41 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 22:53:41 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391727220; l=1001;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=sKwDElAqwOmBzSJbR7bDT6Gb+u8=;
	b=nOvxDgoTq4Vvy18v33RgVVfctpBOpdbxSJMRQqkcg8Z1vceNceu0EUankejvOcteVcY
	+tmSMizz0ZI/Xw35XKwVhEM/XqytZsTs6Ci6GqcLXnwKyb9/REDZfNoK29Il+TV4cNNbC
	ppyq3CZzPN2DeqfROTIkGzjY0s70xhefwR8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id I0419eq16MrZAkz
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 6 Feb 2014 23:53:35 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5EBBC50269; Thu,  6 Feb 2014 23:53:34 +0100 (CET)
Date: Thu, 6 Feb 2014 23:53:34 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140206225334.GA21743@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Dec 13, Jan Beulich wrote:

> Changeset 762:a070228ac76e ("add hvc compatibility mode to xencons")
> added this call just for the HVC case, without giving any reason why
> HVC would be special in this regard. Use the call for all cases.

> +++ b/drivers/xen/console/console.c
> @@ -236,6 +234,8 @@ static int __init xen_console_init(void)
>  
>  	wbuf = alloc_bootmem(wbuf_size);
>  
> +	if (!is_initial_xendomain())
> +		add_preferred_console(kcons_info.name, xc_num, NULL);
>  	register_console(&kcons_info);

Why is dom0 special in this case anyway? At least with SLE12, when Xen
is booted with 'console=com1 com1=115200' and the kernel is booted
without any console= or xencons=, kcons_info.index is still -1 and as a
result xvc-1 is registered as name for xvc0. This confuses systemd
because kernel name and console name do not match, so login via serial
is not possible.

When add_preferred_console is called unconditionally, login via serial
works as expected.
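The name mismatch described above can be sketched in miniature. Assuming the console layer composes the device name as the base name plus the index (which is consistent with "xvc-1" being registered for xvc0), an index left at -1 produces a name systemd cannot match to the kernel's:

```c
#include <stdio.h>

/* Hypothetical illustration, not the kernel's actual code: an index
 * left at its unset value of -1 turns the base name "xvc" into the
 * bogus console name "xvc-1" instead of "xvc0". */
static void console_name(char *buf, size_t len, const char *name, int index)
{
    snprintf(buf, len, "%s%d", name, index);
}
```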


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:58:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:58:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXtd-0008M1-Dt; Thu, 06 Feb 2014 22:58:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBXtb-0008Lv-JD
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 22:58:11 +0000
Received: from [85.158.137.68:26973] by server-17.bemta-3.messagelabs.com id
	BF/D0-22569-28314F25; Thu, 06 Feb 2014 22:58:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391727488!182669!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23868 invoked from network); 6 Feb 2014 22:58:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 22:58:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="100625306"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 22:58:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 17:58:07 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBXtW-000741-M9;
	Thu, 06 Feb 2014 22:58:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBXtW-0005dG-CW;
	Thu, 06 Feb 2014 22:58:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24748-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 22:58:06 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24748: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24748 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24748/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  f0d0e5efe15a8ce53eaaeee64cf568358ec197ca
baseline version:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

From xen-devel-bounces@lists.xen.org Thu Feb 06 22:58:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 22:58:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBXtd-0008M1-Dt; Thu, 06 Feb 2014 22:58:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBXtb-0008Lv-JD
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 22:58:11 +0000
Received: from [85.158.137.68:26973] by server-17.bemta-3.messagelabs.com id
	BF/D0-22569-28314F25; Thu, 06 Feb 2014 22:58:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1391727488!182669!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23868 invoked from network); 6 Feb 2014 22:58:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 22:58:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="100625306"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 06 Feb 2014 22:58:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 17:58:07 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBXtW-000741-M9;
	Thu, 06 Feb 2014 22:58:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBXtW-0005dG-CW;
	Thu, 06 Feb 2014 22:58:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24748-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 22:58:06 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24748: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24748 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24748/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass

version targeted for testing:
 xen                  f0d0e5efe15a8ce53eaaeee64cf568358ec197ca
baseline version:
 xen                  fa1bde94493ee9fc66ce6f33ed434a9d7133c896

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=f0d0e5efe15a8ce53eaaeee64cf568358ec197ca
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing f0d0e5efe15a8ce53eaaeee64cf568358ec197ca
+ branch=xen-4.1-testing
+ revision=f0d0e5efe15a8ce53eaaeee64cf568358ec197ca
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.1-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.1-testing
+ xenversion=xen-4.1
+ xenversion=4.1
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git f0d0e5efe15a8ce53eaaeee64cf568358ec197ca:stable-4.1
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   fa1bde9..f0d0e5e  f0d0e5efe15a8ce53eaaeee64cf568358ec197ca -> stable-4.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 06 23:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 23:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBYSQ-0001NV-AG; Thu, 06 Feb 2014 23:34:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBYSO-0001NQ-Uf
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 23:34:09 +0000
Received: from [85.158.137.68:29011] by server-1.bemta-3.messagelabs.com id
	A7/FA-17293-0FB14F25; Thu, 06 Feb 2014 23:34:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391729645!186737!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26626 invoked from network); 6 Feb 2014 23:34:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 23:34:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="98760737"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 23:34:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 18:34:03 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBYSJ-0007Eg-I0;
	Thu, 06 Feb 2014 23:34:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBYSI-0003Cd-Ux;
	Thu, 06 Feb 2014 23:34:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24751-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 23:34:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24751: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7144043058678261112=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7144043058678261112==
Content-Type: text/plain

flight 24751 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24751/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24680
 build-amd64                   4 xen-build                 fail REGR. vs. 24680
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24680

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
baseline version:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501

------------------------------------------------------------
People who touched revisions under test:
  Andrea Arcangeli <aarcange@redhat.com>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Grover <agrover@redhat.com>
  Aristeu Rozanski <aris@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Boris BREZILLON <b.brezillon@overkiz.com>
  Borislav Petkov <bp@suse.de>
  Chris Mason <clm@fb.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Kravkov <dmitry@broadcom.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@linux.intel.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Jack Pham <jackp@codeaurora.org>
  James Bottomley <JBottomley@Parallels.com>
  Jan Prinsloo <janroot@gmail.com>
  Jean-Jacques Hiblot <jjhiblot@traphandler.com>
  Johan Hovold <jhovold@gmail.com>
  John W. Linville <linville@tuxdriver.com>
  Josef Bacik <jbacik@fb.com>
  Jun zhang <zhang.jun92@zte.com.cn>
  Kevin Hilman <khilman@linaro.org>
  Khalid Aziz <khalid.aziz@oracle.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Leilei Zhao <leilei.zhao@atmel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Roszko <mark.roszko@gmail.com>
  Mark Brown <broonie@linaro.org>
  Matthew Garrett <matthew.garrett@nebula.com>
  Michal Schmidt <mschmidt@redhat.com>
  Mikhail Zolotaryov <lebon@lebon.org.ua>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Mackerras <paulus@samba.org>
  PaX Team <pageexec@freemail.hu>
  Rahul Bedarkar <rahulbedarkar89@gmail.com>
  Richard Weinberger <richard@nod.at>
  Sarah Sharp <sarah.a.sharp@linux.intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Takashi Iwai <tiwai@suse.de>
  Thomas Pugliese <thomas.pugliese@gmail.com>
  Vijaya Mohan Guvva <vmohan@brocade.com>
  Wang Shilong <wangsl.fnst@cn.fujitsu.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  yury <urykhy@gmail.com>
  ZHAO Gang <gamerh2o@gmail.com>
  Éric Piel <eric.piel@tremplin-utc.net>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 861 lines long.)


--===============7144043058678261112==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7144043058678261112==--

From xen-devel-bounces@lists.xen.org Thu Feb 06 23:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 23:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBYSQ-0001NV-AG; Thu, 06 Feb 2014 23:34:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBYSO-0001NQ-Uf
	for xen-devel@lists.xensource.com; Thu, 06 Feb 2014 23:34:09 +0000
Received: from [85.158.137.68:29011] by server-1.bemta-3.messagelabs.com id
	A7/FA-17293-0FB14F25; Thu, 06 Feb 2014 23:34:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391729645!186737!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26626 invoked from network); 6 Feb 2014 23:34:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 23:34:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="98760737"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 06 Feb 2014 23:34:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 18:34:03 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBYSJ-0007Eg-I0;
	Thu, 06 Feb 2014 23:34:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBYSI-0003Cd-Ux;
	Thu, 06 Feb 2014 23:34:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24751-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 6 Feb 2014 23:34:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24751: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7144043058678261112=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7144043058678261112==
Content-Type: text/plain

flight 24751 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24751/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24680
 build-amd64                   4 xen-build                 fail REGR. vs. 24680
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24680

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 linux                e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
baseline version:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501

------------------------------------------------------------
People who touched revisions under test:
  Andrea Arcangeli <aarcange@redhat.com>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Grover <agrover@redhat.com>
  Aristeu Rozanski <aris@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Boris BREZILLON <b.brezillon@overkiz.com>
  Borislav Petkov <bp@suse.de>
  Chris Mason <clm@fb.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Kravkov <dmitry@broadcom.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@linux.intel.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Jack Pham <jackp@codeaurora.org>
  James Bottomley <JBottomley@Parallels.com>
  Jan Prinsloo <janroot@gmail.com>
  Jean-Jacques Hiblot <jjhiblot@traphandler.com>
  Johan Hovold <jhovold@gmail.com>
  John W. Linville <linville@tuxdriver.com>
  Josef Bacik <jbacik@fb.com>
  Jun zhang <zhang.jun92@zte.com.cn>
  Kevin Hilman <khilman@linaro.org>
  Khalid Aziz <khalid.aziz@oracle.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Leilei Zhao <leilei.zhao@atmel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Roszko <mark.roszko@gmail.com>
  Mark Brown <broonie@linaro.org>
  Matthew Garrett <matthew.garrett@nebula.com>
  Michal Schmidt <mschmidt@redhat.com>
  Mikhail Zolotaryov <lebon@lebon.org.ua>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Mackerras <paulus@samba.org>
  PaX Team <pageexec@freemail.hu>
  Rahul Bedarkar <rahulbedarkar89@gmail.com>
  Richard Weinberger <richard@nod.at>
  Sarah Sharp <sarah.a.sharp@linux.intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Takashi Iwai <tiwai@suse.de>
  Thomas Pugliese <thomas.pugliese@gmail.com>
  Vijaya Mohan Guvva <vmohan@brocade.com>
  Wang Shilong <wangsl.fnst@cn.fujitsu.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  yury <urykhy@gmail.com>
  ZHAO Gang <gamerh2o@gmail.com>
  Éric Piel <eric.piel@tremplin-utc.net>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 861 lines long.)


--===============7144043058678261112==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7144043058678261112==--

From xen-devel-bounces@lists.xen.org Thu Feb 06 23:59:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Feb 2014 23:59:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBYr0-0002QV-ET; Thu, 06 Feb 2014 23:59:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBYqz-0002Pc-Ct
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 23:59:33 +0000
Received: from [85.158.139.211:43955] by server-17.bemta-5.messagelabs.com id
	FA/6D-31975-4E124F25; Thu, 06 Feb 2014 23:59:32 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391731170!2232698!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16922 invoked from network); 6 Feb 2014 23:59:31 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 6 Feb 2014 23:59:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s16NxRCw009427
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 6 Feb 2014 23:59:28 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s16NxQAe026244
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 6 Feb 2014 23:59:27 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s16NxQVx023086; Thu, 6 Feb 2014 23:59:26 GMT
Received: from android-28e8a55005243565.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 06 Feb 2014 15:59:26 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
References: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 06 Feb 2014 18:59:23 -0500
To: Elena Ufimtseva <ufimtseva@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Message-ID: <d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: Re: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On February 6, 2014 5:17:44 PM EST, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
>Hello
>
>As I can see there is no support for hugepages in Xen for PV kernels.
>Is this something in the future plans or there is no really need to
>have this implemented?

It was implemented. The UEK2 kernel (Oracle's) and SLES (I think) have it implemented. But making it upstream would mean adding Xen-specific code to the generic code, which is a no-go.

If you use PVH you get it for free. With the patches that Mukesh posted to enable some cpuid flags I was able to boot a guest with 2MB and 1GB pages without any trouble.
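[Editorial note: the 2MB and 1GB page sizes mentioned above correspond to the standard x86 `pse` and `pdpe1gb` CPU feature flags. A minimal, illustrative sketch (assuming a Linux-style `/proc/cpuinfo` with a `flags` line; not a Xen API) of checking whether a guest sees them:]

```python
# Sketch: check whether a guest's CPU flags advertise 2MB (pse) and
# 1GB (pdpe1gb) page support, as exposed in /proc/cpuinfo on Linux.
# The flag names are the standard x86 ones; the parsing is illustrative.

def hugepage_flags(cpuinfo_text):
    """Return (has_2mb, has_1gb) based on a /proc/cpuinfo dump."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return "pse" in flags, "pdpe1gb" in flags

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(hugepage_flags(f.read()))
    except OSError:
        print("no /proc/cpuinfo on this system")
```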
>
>I came across this paper
>https://www.kernel.org/doc/ols/2011/ols2011-gadre.pdf,
>not sure if this was somehow presented to xen community, I can't find
>any code related to this.
>
>Thank you!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:12:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZ3U-0003O4-5m; Fri, 07 Feb 2014 00:12:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WBZ3T-0003Nz-FY
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 00:12:27 +0000
Received: from [85.158.139.211:11501] by server-14.bemta-5.messagelabs.com id
	F1/32-27598-AE424F25; Fri, 07 Feb 2014 00:12:26 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391731945!2234072!1
X-Originating-IP: [209.85.216.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28445 invoked from network); 7 Feb 2014 00:12:26 -0000
Received: from mail-qc0-f176.google.com (HELO mail-qc0-f176.google.com)
	(209.85.216.176)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 00:12:26 -0000
Received: by mail-qc0-f176.google.com with SMTP id e16so4586347qcx.21
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 16:12:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=OwkRsxbDiNlUMyagWXjUDSDIUkRrLaU60hlKRsWWxpU=;
	b=UUDBnW7lzJjUvrX3HysnqZ2I8EDfdEXoIT4Ks7t3R9nf66NuOLmJ7nm8sc9g7Q7nsf
	Xb/pn53BvWUHvrMssdo9a4/MK5Mt5cHd2gyleeoVqoFomc35kMg1tQSNaGFzw/pMa+gy
	ODdLMuBy1Izarwb9IwV/hSsUSwhSP6Vvb6uZpDV2X5vwPM7t3lifmkwGLdPBY1ZaxIIh
	CvcZjb372u0fe5lpxbC/BXHZH72zQNXOWmmN7UHWO57UDv+grTS3C4+sLIEQHnNtCynS
	L/XjGJ4KdNwsaucGMk3Nz594MnEC+eLH181XWQyhaE45QfBlRGyKk/UxEJAk1bfA/+2c
	A1fg==
MIME-Version: 1.0
X-Received: by 10.140.94.214 with SMTP id g80mr15945888qge.19.1391731944795;
	Thu, 06 Feb 2014 16:12:24 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Thu, 6 Feb 2014 16:12:24 -0800 (PST)
In-Reply-To: <d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
References: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
	<d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
Date: Thu, 6 Feb 2014 19:12:24 -0500
Message-ID: <CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 6, 2014 at 6:59 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On February 6, 2014 5:17:44 PM EST, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
>>Hello
>>
>>As I can see there is no support for hugepages in Xen for PV kernels.
>>Is this something in the future plans or there is no really need to
>>have this implemented?
>
> It was implemented. The UEK2 kernel (Oracle's) and SLES (I think) has this implemented. But making it upstream means adding some Xen specific code in the generic code which is a no-go.
>
> If you use PVH you get it for free. With the patches that Mukesh posted to enable some cpuid flags I was able to boot a guest with 2MB and 1GB pages without any trouble.

Ok, I see. Then I would think the kernel should not provide any way to
use it? Otherwise oopses and other unpleasant things happen when trying
to use huge pages. I am not sure; maybe if it clearly stated that there
is no hugetlb support for PV, there would be no need for this?

>>
>>I came across this paper
>>https://www.kernel.org/doc/ols/2011/ols2011-gadre.pdf,
>>not sure if this was somehow presented to xen community, I can't find
>>any code related to this.
>>
>>Thank you!
>
>



-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:15:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZ6h-0003VV-Rj; Fri, 07 Feb 2014 00:15:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBZ6g-0003VP-2U
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 00:15:46 +0000
Received: from [85.158.139.211:54610] by server-4.bemta-5.messagelabs.com id
	3E/CB-08092-1B524F25; Fri, 07 Feb 2014 00:15:45 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391732143!2234497!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7906 invoked from network); 7 Feb 2014 00:15:44 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 00:15:44 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by fldsmtpe04.verizon.com with ESMTP; 07 Feb 2014 00:15:42 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="665928646"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.245])
	by fldsmtpi01.verizon.com with ESMTP; 07 Feb 2014 00:15:42 +0000
Message-ID: <52F425AD.50209@terremark.com>
Date: Thu, 06 Feb 2014 19:15:41 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/14 07:01, Stefano Stabellini wrote:
> This gets me to:
> > >
> > >Parsing/home/don/xen/tools/ocaml/libs/xl/../../../../tools/libxl/libxl_types.idl
> > >   MLDEP
> > >make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> > >make[7]: Entering directory `/home/don/xen/tools/ocaml/libs/xl'
> > >   MLC      xenlight.cmo
> > >   MLA      xenlight.cma
> > >   CC       xenlight_stubs.o
> > >cc1: warnings being treated as errors
> > >xenlight_stubs.c: In function 'Defbool_val':
> > >xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
> > >xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
> > >xenlight_stubs.c: In function 'String_option_val':
> > >xenlight_stubs.c:379: error: expected expression before 'char'
> > >xenlight_stubs.c: In function 'aohow_val':
> > >xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
> > >make[7]: *** [xenlight_stubs.o] Error 1
> > >make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> > >make[6]: *** [subdir-install-xl] Error 2
> > >make[6]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> > >make[5]: *** [subdirs-install] Error 2
> > >make[5]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> > >make[4]: *** [subdir-install-libs] Error 2
> > >make[4]: Leaving directory `/home/don/xen/tools/ocaml'
> > >make[3]: *** [subdirs-install] Error 2
> > >make[3]: Leaving directory `/home/don/xen/tools/ocaml'
> > >make[2]: *** [subdir-install-ocaml] Error 2
> > >make[2]: Leaving directory `/home/don/xen/tools'
> > >make[1]: *** [subdirs-install] Error 2
> > >make[1]: Leaving directory `/home/don/xen/tools'
> > >make: *** [install-tools] Error 2
> > >
> > >
> > >Not sure how to work around this.
> > >      -Don Slutz

Any idea on what to do about the OCaml issue?
     -Don Slutz


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:15:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZ6h-0003VV-Rj; Fri, 07 Feb 2014 00:15:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBZ6g-0003VP-2U
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 00:15:46 +0000
Received: from [85.158.139.211:54610] by server-4.bemta-5.messagelabs.com id
	3E/CB-08092-1B524F25; Fri, 07 Feb 2014 00:15:45 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391732143!2234497!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7906 invoked from network); 7 Feb 2014 00:15:44 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 00:15:44 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by fldsmtpe04.verizon.com with ESMTP; 07 Feb 2014 00:15:42 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="665928646"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.245])
	by fldsmtpi01.verizon.com with ESMTP; 07 Feb 2014 00:15:42 +0000
Message-ID: <52F425AD.50209@terremark.com>
Date: Thu, 06 Feb 2014 19:15:41 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/03/14 07:01, Stefano Stabellini wrote:
> This gets me to:
> > >
> > >Parsing/home/don/xen/tools/ocaml/libs/xl/../../../../tools/libxl/libxl_types.idl
> > >   MLDEP
> > >make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> > >make[7]: Entering directory `/home/don/xen/tools/ocaml/libs/xl'
> > >   MLC      xenlight.cmo
> > >   MLA      xenlight.cma
> > >   CC       xenlight_stubs.o
> > >cc1: warnings being treated as errors
> > >xenlight_stubs.c: In function 'Defbool_val':
> > >xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
> > >xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
> > >xenlight_stubs.c: In function 'String_option_val':
> > >xenlight_stubs.c:379: error: expected expression before 'char'
> > >xenlight_stubs.c: In function 'aohow_val':
> > >xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
> > >make[7]: *** [xenlight_stubs.o] Error 1
> > >make[7]: Leaving directory `/home/don/xen/tools/ocaml/libs/xl'
> > >make[6]: *** [subdir-install-xl] Error 2
> > >make[6]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> > >make[5]: *** [subdirs-install] Error 2
> > >make[5]: Leaving directory `/home/don/xen/tools/ocaml/libs'
> > >make[4]: *** [subdir-install-libs] Error 2
> > >make[4]: Leaving directory `/home/don/xen/tools/ocaml'
> > >make[3]: *** [subdirs-install] Error 2
> > >make[3]: Leaving directory `/home/don/xen/tools/ocaml'
> > >make[2]: *** [subdir-install-ocaml] Error 2
> > >make[2]: Leaving directory `/home/don/xen/tools'
> > >make[1]: *** [subdirs-install] Error 2
> > >make[1]: Leaving directory `/home/don/xen/tools'
> > >make: *** [install-tools] Error 2
> > >
> > >
> > >Not sure how to work around this.
> > >      -Don Slutz

Any idea what to do about the ocaml issue?
     -Don Slutz


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:34:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZN7-0004Jj-Ek; Fri, 07 Feb 2014 00:32:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBZN4-0004Ja-R6
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 00:32:44 +0000
Received: from [85.158.139.211:58241] by server-13.bemta-5.messagelabs.com id
	48/93-18801-A9924F25; Fri, 07 Feb 2014 00:32:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391733144!2248004!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14108 invoked from network); 7 Feb 2014 00:32:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 00:32:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,796,1384300800"; d="scan'208";a="98770026"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 00:32:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 19:32:22 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBZMk-0007Z3-PA;
	Fri, 07 Feb 2014 00:32:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBZMk-0004JW-Ld;
	Fri, 07 Feb 2014 00:32:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24749-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 00:32:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24749: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0534345082465467996=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0534345082465467996==
Content-Type: text/plain

flight 24749 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24749/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             3 host-build-prep           fail REGR. vs. 24716

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  d7c6be61836b0a4d996f82d3e7c7e50150996701
baseline version:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d7c6be61836b0a4d996f82d3e7c7e50150996701
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:27:10 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
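The unsigned-arithmetic sanitisation described in the commit message above can be sketched as a minimal stand-alone model. This is not the actual libxenvchan code: the struct layout, field names, and the clamp-to-zero policy are illustrative assumptions based only on the description (the real raw_get_data_ready/raw_get_buffer_space operate on the shared ring structure).

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative ring state; the real struct libxenvchan layout differs. */
struct ring {
    uint32_t cons;       /* consumer index, writable by the (hostile) peer */
    uint32_t prod;       /* producer index, writable by the (hostile) peer */
    uint32_t ring_size;  /* fixed at setup time */
};

/* Bytes ready for reading, computed with unsigned arithmetic as the
 * commit message describes: a "negative" or oversized difference wraps
 * around and shows up as a value > ring_size, which we treat as nothing
 * ready rather than passing a mad value on to memcpy. */
static uint32_t raw_get_data_ready(const struct ring *r)
{
    uint32_t ready = r->prod - r->cons;  /* unsigned wraparound is defined */
    return ready > r->ring_size ? 0 : ready;
}

/* Buffer space free for writing, sanitised the same way: a mad ring
 * state conservatively reports no space. */
static uint32_t raw_get_buffer_space(const struct ring *r)
{
    uint32_t used = r->prod - r->cons;
    return used > r->ring_size ? 0 : r->ring_size - used;
}
```

With a sane ring the functions behave normally; with peer-corrupted indices the unsigned difference exceeds ring_size and both accessors return a safe in-range value, so no caller can be driven to a buffer overflow.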

commit 11b3280cc0ce64b375492416a23aa6a15f45a796
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:25:43 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit e9c5e56b4c17fd1ce28577df23cc53cc62c0d792
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:21:16 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
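The fix described in the XSA-84 commit above follows a common pattern: validate a guest-chosen size against a fixed upper bound before doing any allocation based on it. A minimal sketch of that pattern (the helper name and bound constant are hypothetical, not the actual flask hypercall code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_GUEST_STRLEN 4096  /* stand-in for a PAGE_SIZE-style bound */

/* Duplicate a guest-supplied buffer of guest-specified length as a
 * NUL-terminated string.  The length is range-checked before malloc, so
 * a hostile guest cannot drive a huge allocation, and len + 1 cannot
 * wrap because len is already bounded. */
static char *copy_guest_string(const char *guest_buf, size_t len)
{
    if (len > MAX_GUEST_STRLEN)   /* range check first: this is the fix */
        return NULL;
    char *s = malloc(len + 1);
    if (s) {
        memcpy(s, guest_buf, len);
        s[len] = '\0';
    }
    return s;
}
```

An in-range length is copied as usual; an out-of-range one is rejected before any allocation happens.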
(qemu changes not included)


--===============0534345082465467996==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0534345082465467996==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:36:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZPX-0004Kz-MP; Fri, 07 Feb 2014 00:35:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1WBZPW-0004Kr-1P
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 00:35:14 +0000
Received: from [85.158.137.68:37040] by server-16.bemta-3.messagelabs.com id
	FA/E6-29917-D1A24F25; Fri, 07 Feb 2014 00:34:37 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391733276!189111!1
X-Originating-IP: [140.77.166.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=surbl: 
	U291cmNlOiAgckROUyA=[ZW5zLWx5b24uZnI=]
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7359 invoked from network); 7 Feb 2014 00:34:36 -0000
Received: from toccata.ens-lyon.fr (HELO toccata.ens-lyon.org) (140.77.166.68)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 00:34:36 -0000
Received: from localhost (localhost [127.0.0.1])
	by toccata.ens-lyon.org (Postfix) with ESMTP id 9951C8407B;
	Fri,  7 Feb 2014 01:34:35 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at toccata.ens-lyon.org
Received: from toccata.ens-lyon.org ([127.0.0.1])
	by localhost (toccata.ens-lyon.org [127.0.0.1]) (amavisd-new,
	port 10024)
	with ESMTP id qwQ84hap6pbV; Fri,  7 Feb 2014 01:34:35 +0100 (CET)
Received: from type.ipv6 (youpi.perso.aquilenet.fr [80.67.176.89])
	(using TLSv1.2 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by toccata.ens-lyon.org (Postfix) with ESMTPSA id 4BEE78407A;
	Fri,  7 Feb 2014 01:34:35 +0100 (CET)
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1WBZOr-0008WC-Hz; Fri, 07 Feb 2014 01:34:33 +0100
Date: Fri, 7 Feb 2014 01:34:33 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140207003433.GK5465@type.youpi.perso.aquilenet.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, monaka@monami-ya.jp,
	xen-devel@lists.xen.org
References: <CAH4w_GwwVDucnUFA1pH2kpYfSg6dZErs-Lu3Y9--fcLBEELL+Q@mail.gmail.com>
	<1391423287.10515.32.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391423287.10515.32.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: monaka@monami-ya.jp, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG]Cannot build mini-os-x86_64-c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell, on Mon 03 Feb 2014 10:28:07 +0000, wrote:
> On Sat, 2014-02-01 at 17:55 +0900, Masaki Muranaka wrote:
> > I got the build error. Log is like this:
> > /home/azureuser/xen/stubdom/mini-os-x86_64-c/test.o: In function
> > `app_main':
> > /home/azureuser/xen/extras/mini-os/test.c:542: multiple definition of
> > `app_main'
> > /home/azureuser/xen/stubdom/mini-os-x86_64-c/main.o:/home/azureuser/xen/extras/mini-os/main.c:187: first defined here
> > make[1]: *** [/home/azureuser/xen/stubdom/mini-os-x86_64-c/mini-os]
> > Error 1
> > make[1]: Leaving directory `/home/azureuser/xen/extras/mini-os'
> > make: *** [c-stubdom] Error 2
> >
> > I think stubdom/c/minios.cfg should be like this.
>
> That sounds plausible. I'm not sure what would then include the tests at
> that point though.

IIRC, by building mini-os by itself (without any stubdom part, i.e. no
libc).
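
For context, the "multiple definition" failure quoted above is the generic linker behavior whenever two objects in one link each define the same global symbol; it can be reproduced outside the mini-os tree with a minimal sketch (file and symbol names here are made up for illustration):

```shell
# Two translation units, each defining app_main().
cat > first.c <<'EOF'
int app_main(void) { return 1; }   /* first definition */
EOF
cat > second.c <<'EOF'
int app_main(void) { return 2; }   /* conflicting second definition */
int main(void) { return app_main(); }
EOF

# Linking both objects fails, just like the stubdom build:
#   second.o: multiple definition of `app_main'
cc first.c second.c -o demo 2>link.err || echo "link failed as expected"
```

This is why pulling test.c into a link that already has main.c's app_main breaks the build.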

> If Samuel agrees with the patch

Yes.  I wonder how that didn't pose a problem so far.

> please could you formally submit according to
> http://wiki.xen.org/wiki/Submitting_Xen_Patches

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:47:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZbb-0004uX-3T; Fri, 07 Feb 2014 00:47:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBZbY-0004uS-It
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 00:47:40 +0000
Received: from [85.158.143.35:10144] by server-1.bemta-4.messagelabs.com id
	15/65-31661-B2D24F25; Fri, 07 Feb 2014 00:47:39 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1391734058!3770537!1
X-Originating-IP: [216.32.180.14]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10019 invoked from network); 7 Feb 2014 00:47:39 -0000
Received: from va3ehsobe004.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.14)
	by server-6.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Feb 2014 00:47:39 -0000
Received: from mail163-va3-R.bigfish.com (10.7.14.244) by
	VA3EHSOBE010.bigfish.com (10.7.40.12) with Microsoft SMTP Server id
	14.1.225.22; Fri, 7 Feb 2014 00:47:37 +0000
Received: from mail163-va3 (localhost [127.0.0.1])	by
	mail163-va3-R.bigfish.com (Postfix) with ESMTP id 99EC11C03CF;
	Fri,  7 Feb 2014 00:47:37 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(z579ehz98dI1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h944hd25hd2bhf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail163-va3 (localhost.localdomain [127.0.0.1]) by mail163-va3
	(MessageSwitch) id 1391734055483239_24918;
	Fri,  7 Feb 2014 00:47:35 +0000 (UTC)
Received: from VA3EHSMHS022.bigfish.com (unknown [10.7.14.240])	by
	mail163-va3.bigfish.com (Postfix) with ESMTP id 5528232004F;
	Fri,  7 Feb 2014 00:47:35 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS022.bigfish.com
	(10.7.99.32) with Microsoft SMTP Server id 14.16.227.3; Fri, 7 Feb 2014
	00:47:28 +0000
X-WSS-ID: 0N0LOV3-07-02I-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	25E04CAE647;	Thu,  6 Feb 2014 18:47:27 -0600 (CST)
Received: from SATLEXDAG03.amd.com (10.181.40.7) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 18:47:47 -0600
Received: from arav-dinar (10.180.168.240) by satlexdag03.amd.com
	(10.181.40.7) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 19:47:28 -0500
Date: Thu, 6 Feb 2014 18:47:43 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140207004743.GB8837@arav-dinar>
References: <1391632096-6209-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<1391632096-6209-2-git-send-email-aravind.gopalakrishnan@amd.com>
	<52F3789E0200007800119B83@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F3789E0200007800119B83@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3 V3] hvm,
 svm: Update AMD Thresholding MSR definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, 2014 at 10:57:18AM +0000, Jan Beulich wrote:
> >>> On 05.02.14 at 21:28, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -1461,8 +1461,9 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
> >          *msr_content = v->arch.hvm_svm.guest_sysenter_eip;
> >          break;
> >  
> > -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> > -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> > +    case MSR_F10_MC4_MISC1: /* Threshold register */
> > +    case MSR_F10_MC4_MISC2:
> > +    case MSR_F10_MC4_MISC3:
> >          /*
> >           * MCA/MCE: We report that the threshold register is unavailable
> >           * for OS use (locked by the BIOS).
> > @@ -1660,8 +1661,9 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
> >          vpmu_do_wrmsr(msr, msr_content);
> >          break;
> >  
> > -    case MSR_IA32_MCx_MISC(4): /* Threshold register */
> > -    case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
> > +    case MSR_F10_MC4_MISC1: /* Threshold register */
> > +    case MSR_F10_MC4_MISC2:
> > +    case MSR_F10_MC4_MISC3:
> >          /*
> >           * MCA/MCE: Threshold register is reported to be locked, so we ignore
> >           * all write accesses. This behaviour matches real HW, so guests should
> 
> While both of these changes are fine with the code as it currently
> is, they both rely on behavior that is sub-optimal and would hence
> need to be changed: Neither should MSR reads blindly read the
> hardware MSR as a fallback, nor should MSR writes be blindly
> ignored. Yet with removing 0xC000040A from the explicit list of
> registers handled, a guest OS accessing this MSR (knowing it
> exists on Fam10) would start to see unexpected #GP faults as
> soon as the respective default cases would get corrected. And I
> assume you agree that correcting these default cases should not
> require the author to be particularly knowledgeable about AMD
> CPU family differences.
> 

So, just to be safe, it is better if I don't touch this piece of
code here (you can basically ignore patch 1 of the series).

I'll condense the remaining changes into a single patch and
send it out separately.

Thanks
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:49:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZcu-00055c-5U; Fri, 07 Feb 2014 00:49:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBZcq-00055V-S9
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 00:49:01 +0000
Received: from [85.158.137.68:53288] by server-3.bemta-3.messagelabs.com id
	01/6B-14520-C7D24F25; Fri, 07 Feb 2014 00:49:00 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391734137!193721!1
X-Originating-IP: [216.32.180.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2511 invoked from network); 7 Feb 2014 00:48:59 -0000
Received: from co1ehsobe003.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.186)
	by server-15.tower-31.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Feb 2014 00:48:59 -0000
Received: from mail49-co1-R.bigfish.com (10.243.78.235) by
	CO1EHSOBE027.bigfish.com (10.243.66.90) with Microsoft SMTP Server id
	14.1.225.22; Fri, 7 Feb 2014 00:48:57 +0000
Received: from mail49-co1 (localhost [127.0.0.1])	by mail49-co1-R.bigfish.com
	(Postfix) with ESMTP id EDBC38C0243;
	Fri,  7 Feb 2014 00:48:56 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail49-co1 (localhost.localdomain [127.0.0.1]) by mail49-co1
	(MessageSwitch) id 1391734135347557_20001;
	Fri,  7 Feb 2014 00:48:55 +0000 (UTC)
Received: from CO1EHSMHS016.bigfish.com (unknown [10.243.78.227])	by
	mail49-co1.bigfish.com (Postfix) with ESMTP id 4D6D79C006D;
	Fri,  7 Feb 2014 00:48:55 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CO1EHSMHS016.bigfish.com
	(10.243.66.26) with Microsoft SMTP Server id 14.16.227.3;
	Fri, 7 Feb 2014 00:48:55 +0000
X-WSS-ID: 0N0LOXG-07-03U-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	20CDCCAE657;	Thu,  6 Feb 2014 18:48:51 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 18:49:11 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9;
	Thu, 6 Feb 2014 19:48:51 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>, <JBeulich@suse.com>
Date: Thu, 6 Feb 2014 18:32:56 -0600
Message-ID: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers. But due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we wrongly mask off the top two bits, which meant the register
accesses never made it to the vmce_amd_* functions.

This patch corrects the problem by modifying the mask so that the AMD
thresholding registers fall through to the 'default' case, which in
turn allows the vmce_amd_* functions to handle accesses to them.

Also, the extended block of AMD MC4 MISC registers does not always
exist. This patch reworks the vmce_amd_[wr|rd]msr functions to return
#GP to the guest if the register does not exist in hardware; if it
does, current behavior is retained.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   54 +++++++++++++++----------------------
 xen/arch/x86/cpu/mcheck/mce_amd.h |    3 +++
 xen/arch/x86/cpu/mcheck/vmce.c    |    4 +--
 3 files changed, 26 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..605f277 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -102,46 +102,34 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 	return mcheck_amd_famXX;
 }
 
+/* check for AMD MC4 extended MISC register presence */
+static inline int amd_thresholding_reg_present(uint32_t msr)
+{
+    uint64_t val;
+    rdmsr_safe(msr, val);
+    if ( val & (AMD_MC4_MISC_VAL_MASK | AMD_MC4_MISC_CNTP_MASK) )
+        return 1;
+
+    return 0;
+}
+
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
+    /* If not present, #GP fault, else do nothing as we don't emulate */
+    if ( !amd_thresholding_reg_present(msr) )
+        return -1;
 
-	return 1;
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
+    /* If not present, #GP fault, else assign '0' as we don't emulate */
+    if ( !amd_thresholding_reg_present(msr) )
+        return -1;
 
-	return 1;
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.h b/xen/arch/x86/cpu/mcheck/mce_amd.h
index 5d047e7..a6024fb 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.h
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.h
@@ -1,6 +1,9 @@
 #ifndef _MCHECK_AMD_H
 #define _MCHECK_AMD_H
 
+#define AMD_MC4_MISC_VAL_MASK           (1ULL << 63)
+#define AMD_MC4_MISC_CNTP_MASK          (1ULL << 62)
+
 enum mcheck_type amd_k8_mcheck_init(struct cpuinfo_x86 *c);
 enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c);
 
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..be9bb5e 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /*
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 00:49:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 00:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBZcu-00055c-5U; Fri, 07 Feb 2014 00:49:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBZcq-00055V-S9
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 00:49:01 +0000
Received: from [85.158.137.68:53288] by server-3.bemta-3.messagelabs.com id
	01/6B-14520-C7D24F25; Fri, 07 Feb 2014 00:49:00 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391734137!193721!1
X-Originating-IP: [216.32.180.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2511 invoked from network); 7 Feb 2014 00:48:59 -0000
Received: from co1ehsobe003.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.186)
	by server-15.tower-31.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Feb 2014 00:48:59 -0000
Received: from mail49-co1-R.bigfish.com (10.243.78.235) by
	CO1EHSOBE027.bigfish.com (10.243.66.90) with Microsoft SMTP Server id
	14.1.225.22; Fri, 7 Feb 2014 00:48:57 +0000
Received: from mail49-co1 (localhost [127.0.0.1])	by mail49-co1-R.bigfish.com
	(Postfix) with ESMTP id EDBC38C0243;
	Fri,  7 Feb 2014 00:48:56 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail49-co1 (localhost.localdomain [127.0.0.1]) by mail49-co1
	(MessageSwitch) id 1391734135347557_20001;
	Fri,  7 Feb 2014 00:48:55 +0000 (UTC)
Received: from CO1EHSMHS016.bigfish.com (unknown [10.243.78.227])	by
	mail49-co1.bigfish.com (Postfix) with ESMTP id 4D6D79C006D;
	Fri,  7 Feb 2014 00:48:55 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CO1EHSMHS016.bigfish.com
	(10.243.66.26) with Microsoft SMTP Server id 14.16.227.3;
	Fri, 7 Feb 2014 00:48:55 +0000
X-WSS-ID: 0N0LOXG-07-03U-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	20CDCCAE657;	Thu,  6 Feb 2014 18:48:51 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 6 Feb 2014 18:49:11 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9;
	Thu, 6 Feb 2014 19:48:51 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>, <JBeulich@suse.com>
Date: Thu, 6 Feb 2014 18:32:56 -0600
Message-ID: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers. But due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we were wrongly masking off the top two bits, which meant the register
accesses never made it to the vmce_amd_* functions.

This patch corrects the problem by modifying the mask so that the AMD
thresholding registers fall through to the 'default' case, which in
turn allows the vmce_amd_* functions to handle accesses to these
registers.

Also, the extended block of AMD MC4 MISC registers does not always
exist. This patch reworks the vmce_amd_[wr|rd]msr functions to return
#GP to the guest if the register does not exist in hardware; if it
does, current behaviour is retained.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   54 +++++++++++++++----------------------
 xen/arch/x86/cpu/mcheck/mce_amd.h |    3 +++
 xen/arch/x86/cpu/mcheck/vmce.c    |    4 +--
 3 files changed, 26 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..605f277 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -102,46 +102,34 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 	return mcheck_amd_famXX;
 }
 
+/* check for AMD MC4 extended MISC register presence */
+static inline int amd_thresholding_reg_present(uint32_t msr)
+{
+    uint64_t val;
+    rdmsr_safe(msr, val);
+    if ( val & (AMD_MC4_MISC_VAL_MASK | AMD_MC4_MISC_CNTP_MASK) )
+        return 1;
+
+    return 0;
+}
+
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
+    /* If not present, #GP fault, else do nothing as we don't emulate */
+    if ( !amd_thresholding_reg_present(msr) )
+        return -1;
 
-	return 1;
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
+    /* If not present, #GP fault, else assign '0' as we don't emulate */
+    if ( !amd_thresholding_reg_present(msr) )
+        return -1;
 
-	return 1;
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.h b/xen/arch/x86/cpu/mcheck/mce_amd.h
index 5d047e7..a6024fb 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.h
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.h
@@ -1,6 +1,9 @@
 #ifndef _MCHECK_AMD_H
 #define _MCHECK_AMD_H
 
+#define AMD_MC4_MISC_VAL_MASK           (1ULL << 63)
+#define AMD_MC4_MISC_CNTP_MASK          (1ULL << 62)
+
 enum mcheck_type amd_k8_mcheck_init(struct cpuinfo_x86 *c);
 enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c);
 
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..be9bb5e 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /*
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 01:14:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 01:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBa16-0001XJ-Hs; Fri, 07 Feb 2014 01:14:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WBa14-0001XE-AB
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 01:14:03 +0000
Received: from [85.158.137.68:60052] by server-9.bemta-3.messagelabs.com id
	04/2C-10184-95334F25; Fri, 07 Feb 2014 01:14:01 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391735639!192483!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30924 invoked from network); 7 Feb 2014 01:14:00 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 01:14:00 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s171Cr1m012953
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 01:12:54 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s171CojR016285
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 01:12:52 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s171Co1s016266; Fri, 7 Feb 2014 01:12:50 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 06 Feb 2014 17:12:49 -0800
Date: Thu, 6 Feb 2014 17:12:48 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140206171248.5143e7b3@mantra.us.oracle.com>
In-Reply-To: <1391337376.15093.24.camel@hastur.hellion.org.uk>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
	<1391336650.15093.23.camel@hastur.hellion.org.uk>
	<20140202102852.GA9984@aepfle.de>
	<1391337376.15093.24.camel@hastur.hellion.org.uk>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2 Feb 2014 10:36:16 +0000
Ian Campbell <ian.campbell@citrix.com> wrote:

> On Sun, 2014-02-02 at 11:28 +0100, Olaf Hering wrote:
> > On Sun, Feb 02, Ian Campbell wrote:
> > 
> > > I suppose there is no harm in this, but is there any chance that
> > > gdbsx would actually work on arm64 without significant actual
> > > work going into it?
> > 
> > I have no idea. But I just noticed it's built only due to this line
> > in our xen.spec:
> > 
> > make -C tools/debugger/gdbsx
> > 
> > Perhaps this should be guarded by a %ifarch x86_64.
> 
> I suspect that right now that this would be wise (maybe i386 too?) --
> hopefully Mukesh can advise.
> 
> Ian.

It should just work if you can quickly implement 
arch/x86/debug.c for arm. If not, you'd need to "arch it out".
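
For reference, the guard Olaf suggests would look roughly like this in
xen.spec (a sketch only; where exactly the make line sits in the spec
file is an assumption):

```spec
%ifarch x86_64
make -C tools/debugger/gdbsx
%endif
```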

thanks
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 01:16:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 01:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBa3i-0001dX-6G; Fri, 07 Feb 2014 01:16:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WBa3g-0001dS-NH
	for Xen-devel@lists.xensource.com; Fri, 07 Feb 2014 01:16:44 +0000
Received: from [85.158.143.35:20984] by server-2.bemta-4.messagelabs.com id
	B2/9B-10891-CF334F25; Fri, 07 Feb 2014 01:16:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391735801!3759740!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19043 invoked from network); 7 Feb 2014 01:16:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 01:16:43 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s171Fd9F016725
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 01:15:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s171FdQh020352
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 01:15:39 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s171FdWl020346; Fri, 7 Feb 2014 01:15:39 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 06 Feb 2014 17:15:38 -0800
Date: Thu, 6 Feb 2014 17:15:37 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140206171537.4ec98205@mantra.us.oracle.com>
In-Reply-To: <1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: roger.pau@citrix.com, Xen-devel@lists.xensource.com,
	ian.jackson@eu.citrix.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
 domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



Ping? Roger, I think you can ack from the xen-pvh side. IanC/J, I guess
one of you needs to ack from the tools side?

Again, this is for 4.5, not 4.4.

thanks
Mukesh


On Fri, 24 Jan 2014 17:13:30 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> Expose features for pvh domUs from tools.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  tools/libxc/xc_cpuid_x86.c |   26 ++++++++++++++++----------
>  tools/libxc/xc_domain.c    |    1 +
>  tools/libxc/xenctrl.h      |    2 +-
>  3 files changed, 18 insertions(+), 11 deletions(-)
> 
> diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
> index bbbf9b8..33f6829 100644
> --- a/tools/libxc/xc_cpuid_x86.c
> +++ b/tools/libxc/xc_cpuid_x86.c
> @@ -433,7 +433,7 @@ static void xc_cpuid_hvm_policy(
>  
>  static void xc_cpuid_pv_policy(
>      xc_interface *xch, domid_t domid,
> -    const unsigned int *input, unsigned int *regs)
> +    const unsigned int *input, unsigned int *regs, int is_pvh)
>  {
>      DECLARE_DOMCTL;
>      unsigned int guest_width;
> @@ -455,13 +455,16 @@ static void xc_cpuid_pv_policy(
>  
>      if ( (input[0] & 0x7fffffff) == 0x00000001 )
>      {
> -        clear_bit(X86_FEATURE_VME, regs[3]);
> -        clear_bit(X86_FEATURE_PSE, regs[3]);
> -        clear_bit(X86_FEATURE_PGE, regs[3]);
> -        clear_bit(X86_FEATURE_MCE, regs[3]);
> -        clear_bit(X86_FEATURE_MCA, regs[3]);
> +        if ( !is_pvh )
> +        {
> +            clear_bit(X86_FEATURE_VME, regs[3]);
> +            clear_bit(X86_FEATURE_PSE, regs[3]);
> +            clear_bit(X86_FEATURE_PGE, regs[3]);
> +            clear_bit(X86_FEATURE_MCE, regs[3]);
> +            clear_bit(X86_FEATURE_MCA, regs[3]);
> +            clear_bit(X86_FEATURE_PSE36, regs[3]);
> +        }
>          clear_bit(X86_FEATURE_MTRR, regs[3]);
> -        clear_bit(X86_FEATURE_PSE36, regs[3]);
>      }
>  
>      switch ( input[0] )
> @@ -524,8 +527,11 @@ static void xc_cpuid_pv_policy(
>          {
>              set_bit(X86_FEATURE_SYSCALL, regs[3]);
>          }
> -        clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
> -        clear_bit(X86_FEATURE_RDTSCP, regs[3]);
> +        if ( !is_pvh )
> +        {
> +            clear_bit(X86_FEATURE_PAGE1GB, regs[3]);
> +            clear_bit(X86_FEATURE_RDTSCP, regs[3]);
> +        }
>  
>          clear_bit(X86_FEATURE_SVM, regs[2]);
>          clear_bit(X86_FEATURE_OSVW, regs[2]);
> @@ -561,7 +567,7 @@ static int xc_cpuid_policy(
>      if ( info.hvm )
>          xc_cpuid_hvm_policy(xch, domid, input, regs);
>      else
> -        xc_cpuid_pv_policy(xch, domid, input, regs);
> +        xc_cpuid_pv_policy(xch, domid, input, regs, info.pvh);
>  
>      return 0;
>  }
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..f12999a 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -316,6 +316,7 @@ int xc_domain_getinfo(xc_interface *xch,
>          info->running  = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_running);
>          info->hvm      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_hvm_guest);
>          info->debugged = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_debugged);
> +        info->pvh      = !!(domctl.u.getdomaininfo.flags&XEN_DOMINF_pvh_guest);
>          info->shutdown_reason =
>              (domctl.u.getdomaininfo.flags>>XEN_DOMINF_shutdownshift) &
> 
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> index 13f816b..77d219a 100644
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -404,7 +404,7 @@ typedef struct xc_dominfo {
>      uint32_t      ssidref;
>      unsigned int  dying:1, crashed:1, shutdown:1,
>                    paused:1, blocked:1, running:1,
> -                  hvm:1, debugged:1;
> +                  hvm:1, debugged:1, pvh:1;
>      unsigned int  shutdown_reason; /* only meaningful if shutdown==1 */
>      unsigned long nr_pages; /* current number, not maximum */
>      unsigned long nr_outstanding_pages;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 01:29:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 01:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBaFs-0002dX-2r; Fri, 07 Feb 2014 01:29:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBaFq-0002dS-FW
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 01:29:18 +0000
Received: from [85.158.143.35:34723] by server-2.bemta-4.messagelabs.com id
	2A/D0-10891-DE634F25; Fri, 07 Feb 2014 01:29:17 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391736556!3777413!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32249 invoked from network); 7 Feb 2014 01:29:17 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 01:29:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,797,1384300800"; d="scan'208";a="100648216"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 01:29:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 6 Feb 2014 20:29:15 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBaFm-0007ql-TY;
	Fri, 07 Feb 2014 01:29:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBaFm-0006xq-Ln;
	Fri, 07 Feb 2014 01:29:14 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21236.14058.382620.676819@mariner.uk.xensource.com>
Date: Fri, 7 Feb 2014 01:29:14 +0000
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140206202035.GA489@aepfle.de>
References: <1391714580-24074-1-git-send-email-ian.jackson@eu.citrix.com>
	<20140206202035.GA489@aepfle.de>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/2] libxl: test programs: fix Makefile
	races
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [PATCH v2 0/2] libxl: test programs: fix Makefile races"):
> On Thu, Feb 06, Ian Jackson wrote:
> 
> >  1/2 libxl: test programs: Fix Makefile race re headers
> >  2/2 libxl: test programs: Fix make race re libxenlight.so
> > 
> > Patch 2 is new in this version.
> 
> Yes, these two patches fix the build failures for me.
> Thanks! 

Good, thanks for the report and the tests.  I have pushed those two
right away, on the grounds that they're clearly necessary bugfixes.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 02:29:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 02:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBbBD-0004y1-KR; Fri, 07 Feb 2014 02:28:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WBbBB-0004uO-O7
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 02:28:33 +0000
Received: from [85.158.137.68:44947] by server-11.bemta-3.messagelabs.com id
	12/ED-04255-0D444F25; Fri, 07 Feb 2014 02:28:32 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391740111!198151!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21221 invoked from network); 7 Feb 2014 02:28:32 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-31.messagelabs.com with SMTP;
	7 Feb 2014 02:28:32 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 06 Feb 2014 18:28:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,797,1384329600"; d="scan'208";a="451092824"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 06 Feb 2014 18:28:10 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.110.14) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 6 Feb 2014 18:28:09 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Fri, 7 Feb 2014 10:28:07 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, George Dunlap
	<george.dunlap@eu.citrix.com>
Thread-Topic: [PATCH] pvh: Fix regression caused by assumption that HVM
	paths MUST use io-backend device.
Thread-Index: AQHPIn+njVLnofJuHUihOs0NqzOmUZqmQkSAgALI8BA=
Date: Fri, 7 Feb 2014 02:28:07 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
References: <1391447001-19100-1-git-send-email-konrad.wilk@oracle.com>
	<1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
In-Reply-To: <20140205152649.GA5167@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk wrote on 2014-02-05:
> On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
>> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
>>> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
>>>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>>>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>>>>>> Wasn't it that Mukesh's patch simply was yours with the two
>>>>>> get_ioreq()s folded by using a local variable?
>>>>> Yes. As so
>>>> Thanks. Except that ...
>>>> 
>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>>>>>      struct vcpu *v = current;
>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>> -
>>>>> +    ioreq_t *p = get_ioreq(v);
>>>> ... you don't want to drop the blank line, and naming the new
>>>> variable "ioreq" would seem preferable.
>>>> 
>>>>>      /*
>>>>>       * a pending IO emualtion may still no finished. In this case,
>>>>>       * no virtual vmswith is allowed. Or else, the following IO
>>>>>       * emulation will handled in a wrong VCPU context.
>>>>>       */
>>>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>>>>> +    if ( p && p->state != STATE_IOREQ_NONE )
>>>> And, as said before, I'd think "!p ||" instead of "p &&" would be
>>>> the right thing here. Yang, Jun?
>>> I have two patches - one the simpler one that is pretty
>>> straightforward and the one you suggested. Either one fixes PVH
>>> guests. I also did bootup tests with HVM guests to make sure they worked.
>>> 
>>> Attached and inline.
>> 

Sorry for the late response. I just got back from the Chinese New Year holiday.

>> But they do different things -- one does "ioreq && ioreq->state..."
> 
> Correct.
>> and the other does "!ioreq || ioreq->state...".  The first one is
>> incorrect, AFAICT.
> 
> Both of them fix the hypervisor blowing up with any PVH guest.

Both fixes look right to me.
The only concern is what exactly we want to do here:
"ioreq && ioreq->state..." will only let a VCPU that supports the IO request emulation mechanism (which currently means an HVM VCPU) continue to the nested check.
And "!ioreq || ioreq->state..." will additionally catch a VCPU that doesn't support the IO request emulation mechanism at all (which currently means a PVH VCPU).

The purpose of my original patch was only to let an HVM VCPU with no pending IO request continue to the nested check, not to use this test to distinguish HVM from PVH. So I would prefer to let only HVM VCPUs reach this point: as Jan mentioned before, a non-HVM domain should never call nested-related functions at all unless it also supports nesting.

Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 04:18:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 04:18:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBctD-00005p-3q; Fri, 07 Feb 2014 04:18:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WBctB-00005k-PM
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 04:18:06 +0000
Received: from [85.158.137.68:7412] by server-4.bemta-3.messagelabs.com id
	BB/DC-11750-C7E54F25; Fri, 07 Feb 2014 04:18:04 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391746681!204129!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32756 invoked from network); 7 Feb 2014 04:18:03 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 04:18:03 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 06 Feb 2014 21:17:51 -0700
Message-ID: <52F45E6D.70201@suse.com>
Date: Thu, 06 Feb 2014 21:17:49 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1391444091-22796-1-git-send-email-ian.jackson@eu.citrix.com>	<52F24642.5000300@eu.citrix.com>
	<21234.21167.684304.970488@mariner.uk.xensource.com>
	<52F36977.6030106@eu.citrix.com>
In-Reply-To: <52F36977.6030106@eu.citrix.com>
Cc: xen-devel@lists.xensource.com, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/18 v3] libxl: fork and event fixes for
 libvirt and 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote:
> On 02/05/2014 03:03 PM, Ian Jackson wrote:
>> George Dunlap writes ("Re: [PATCH 00/18 v3] libxl: fork and event
>> fixes for libvirt and 4.4"):
>>> On 02/03/2014 04:14 PM, Ian Jackson wrote:
>>>> This is the latest version of my libxl event fixes apropos of Jim's
>>>> libvirt testing.
>>> Did you have any opinions on the suitability of this for 4.4?
>> Sorry, I should have made that clear in the body text rather than just
>> the subject line.
>>
>> I think this needs a freeze exception on the following grounds:
>>
>>   * There is little change visible to non-eventy/thready callers and
>>     the risk of new races there is limited; basic functional testing
>>     ought to catch those errors.
>>
>>   * The most prominent eventy/thready caller we are currently aware of
>>     is libvirt.  Without these changes it is nearly impossible to have
>>     a reliable libvirt.
>
> Thanks.
>
> I think libvirt support for libxl is a really important functionality
> from a strategic perspective: a solid support should make it much
> easier to integrate with other projects such as OpenStack and
> Cloudstack, as well as (in theory) other tools built on top of libvirt.

Thanks for considering this series for inclusion in 4.4!

> So I'm inclined to consider this a blocker*; I think we should accept
> it and delay the release until we feel comfortable that it has been
> sufficiently tested.

I've been running libvirt-based tests that continuously start/shutdown
persistent domains, create/shutdown transient domains, save/restore
domains, and dump info on all these domains, for the past 35 hours.  The
tests have been restarted a few times as I fine-tune some libvirt
patches, but they did run for 20 hours before the first restart.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 04:24:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 04:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBczV-0000Wx-1m; Fri, 07 Feb 2014 04:24:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WBczT-0000Ws-Mm
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 04:24:35 +0000
Received: from [85.158.139.211:60908] by server-2.bemta-5.messagelabs.com id
	52/DD-23037-20064F25; Fri, 07 Feb 2014 04:24:34 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391747072!2257298!1
X-Originating-IP: [209.85.160.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30325 invoked from network); 7 Feb 2014 04:24:33 -0000
Received: from mail-pb0-f49.google.com (HELO mail-pb0-f49.google.com)
	(209.85.160.49)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 04:24:33 -0000
Received: by mail-pb0-f49.google.com with SMTP id up15so2681074pbc.8
	for <xen-devel@lists.xenproject.org>;
	Thu, 06 Feb 2014 20:24:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=yIEtO3q3IJATm3O5SZ+BhtNmg3Vdy7YCTLRHSjUBRe8=;
	b=ySmMLEOzb9ajNNEQm0WSRlPRkrIVqVFMrJfZQYSNP533WM2clpKAyOrptoScnSA2GS
	FOMY3oOZkNoajLx0ZZopCfvtPp7jHgfL15yxfFb96PRKXdyUHOrT6rOEvIJRW7/3vV5x
	g3tB+BfkBJ+r/SMRhSyacGWnt84fAeKD8g7KiWNYua6ZC9I2SCtOlCtizju6+663jQ2h
	ZGvoOQqJL7Wkx15oYsiWz/Dl7FFLT6EfTJRh1BDcQsRwsCzk/8ig6CEqlpTYyG6pNuag
	sp6j2oDBoFD1mwRrTXFOfEeXItxU9CaETHcl45RlA10KTjskHpgvrFoAtqevX6hhIl7z
	MSxg==
X-Received: by 10.68.197.234 with SMTP id ix10mr16924882pbc.80.1391747071827; 
	Thu, 06 Feb 2014 20:24:31 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id tu3sm8885757pbc.40.2014.02.06.20.24.28
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 20:24:30 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Thu, 06 Feb 2014 20:24:27 -0800
Date: Thu, 6 Feb 2014 20:24:27 -0800
From: Matt Wilson <msw@linux.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140207042427.GA28057@u109add4315675089e695.ant.amazon.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<20140204151501.GA1781@andromeda.dapyr.net>
	<20140206045729.GA15901@u109add4315675089e695.ant.amazon.com>
	<20140206162004.GA22864@konrad-lan.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140206162004.GA22864@konrad-lan.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: linux-kernel@vger.kernel.org, mrushton@amazon.com, msw@amazon.com,
	xen-devel@lists.xenproject.org, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, 2014 at 11:20:04AM -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 05, 2014 at 08:57:30PM -0800, Matt Wilson wrote:
> > On Tue, Feb 04, 2014 at 11:15:01AM -0400, Konrad Rzeszutek Wilk wrote:
> > > On Tue, Feb 04, 2014 at 11:26:11AM +0100, Roger Pau Monne wrote:
> > > > This series contains blkback bug fixes for memory leaks (patches 1 and
> > > > 2) and a race (patch 3). Patch 4 removes blkif_request_segment_aligned
> > > > since its memory layout is exactly the same as blkif_request_segment
> > > > and should introduce no functional change.
> > > > 
> > > > All patches should be backported to stable branches; although the last
> > > > one is not a functional change, it would still be nice to have for
> > > > code correctness.
> > > 
> > > Matt and Matt, could you guys kindly take a look as well? Thank you!
> > 
> > Matt R. did some testing today and set up additional tests to run
> > overnight. He'll follow up after the overnight tests complete.
> 
> Thank you!

Just in case the various mailing list software ate Matt's messages, he
sent the following:

[PATCH v2 2/4] xen-blkback: fix memory leaks
Tested-by: Matt Rushton <mrushton@amazon.com>
Reviewed-by: Matt Rushton <mrushton@amazon.com>

[PATCH v2 3/4] xen-blkback: fix shutdown race
Tested-by: Matt Rushton <mrushton@amazon.com>
Reviewed-by: Matt Rushton <mrushton@amazon.com>

[PATCH v2 4/4] xen-blkif: drop struct blkif_request_segment_aligned
Tested-by: Matt Rushton <mrushton@amazon.com>

I've separately sent suggestions to Matt on how to set up his mailer to
format messages per list etiquette and how to avoid breaking message
threading.

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 04:54:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 04:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBdRh-0001dN-UY; Fri, 07 Feb 2014 04:53:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WBdRg-0001dI-RY
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 04:53:45 +0000
Received: from [85.158.139.211:32758] by server-4.bemta-5.messagelabs.com id
	F5/CF-08092-7D664F25; Fri, 07 Feb 2014 04:53:43 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391748821!2244243!1
X-Originating-IP: [209.85.220.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9864 invoked from network); 7 Feb 2014 04:53:42 -0000
Received: from mail-pa0-f52.google.com (HELO mail-pa0-f52.google.com)
	(209.85.220.52)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 04:53:42 -0000
Received: by mail-pa0-f52.google.com with SMTP id bj1so2674297pad.25
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 20:53:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=hPE0x6PqtcjnJY0Dqxlq04TEtNCGgkM+dVUdQCivhac=;
	b=sPTomJslN2yceRV5queL/iX1PwOJM/GakxkmdtlWPA07OdXERGC2y1f9TURKlxk+QB
	B4zrZOg4crxPTSGhXrYZ8LA6O49/oGO1azFg+Syji295r5Wn9kFDQgSn5w2AzVxd/974
	NmxSenZ0havaeV3ZWYxtItC3+qcFvZdBQUOn6bojUz1RjQkjdxlRewPOPekOmkOcM0gJ
	spXKdr3MsQGExEx3Cgc+XxckIDOKF689fv+Vi++TgT3JDk4D/jq+y7ZkPOyY20qxT0/S
	xgmGS/U56yVpoNQc1i1B354KZnJEeE0Ra3ozE303JPdVXgJF8QK1hd+PE3erh4nK4DCG
	1uFw==
X-Received: by 10.68.171.67 with SMTP id as3mr16838616pbc.105.1391748820708;
	Thu, 06 Feb 2014 20:53:40 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-169.amazon.com. [54.240.196.169])
	by mx.google.com with ESMTPSA id
	zc6sm23201208pab.18.2014.02.06.20.53.37 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Thu, 06 Feb 2014 20:53:39 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Thu, 06 Feb 2014 20:53:36 -0800
Date: Thu, 6 Feb 2014 20:53:36 -0800
From: Matt Wilson <msw@linux.com>
To: Paul Durrant <paul.durrant@citrix.com>
Message-ID: <20140207045334.GA29403@u109add4315675089e695.ant.amazon.com>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
 ioreq structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Jan 30, 2014 at 02:19:46PM +0000, Paul Durrant wrote:
> To simplify creation of the ioreq server abstraction in a
> subsequent patch, this patch centralizes all use of the shared
> ioreq structure and the buffered ioreq ring to the source module
> xen/arch/x86/hvm/hvm.c.
> Also, re-work hvm_send_assist_req() slightly to complete IO
> immediately in the case where there is no emulator (i.e. the shared
> IOREQ ring has not been set). This should handle the case currently
> covered by has_dm in hvmemul_do_io().
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

[...]

> diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-x86/hvm/support.h
> index 3529499..b6af3c5 100644
> --- a/xen/include/asm-x86/hvm/support.h
> +++ b/xen/include/asm-x86/hvm/support.h
> @@ -22,19 +22,10 @@
>  #define __ASM_X86_HVM_SUPPORT_H__
>  
>  #include <xen/types.h>
> -#include <public/hvm/ioreq.h>
>  #include <xen/sched.h>
>  #include <xen/hvm/save.h>
>  #include <asm/processor.h>
>  
> -static inline ioreq_t *get_ioreq(struct vcpu *v)
> -{
> -    struct domain *d = v->domain;
> -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> -}
> -
>  #define HVM_DELIVER_NO_ERROR_CODE  -1
>  
>  #ifndef NDEBUG

Seems like this breaks nested VMX:

vvmx.c: In function 'nvmx_switch_guest':
vvmx.c:1403: error: implicit declaration of function 'get_ioreq'
vvmx.c:1403: error: nested extern declaration of 'get_ioreq'
vvmx.c:1403: error: invalid type argument of '->' (have 'int')
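
A break like this is usually caught by listing the remaining callers
before deleting a helper. A self-contained sketch of that check (it
greps a scratch tree, not the real Xen sources):

```shell
# Sketch: before removing a helper such as get_ioreq(), search the
# whole tree for remaining call sites; a leftover caller is exactly
# what broke vvmx.c here. Scratch files stand in for the Xen sources.
dir=$(mktemp -d)
mkdir -p "$dir/xen/arch/x86/hvm/vmx"
cat > "$dir/xen/arch/x86/hvm/vmx/vvmx.c" <<'EOF'
/* stand-in file: just enough content for grep to find */
ioreq = get_ioreq(v);
EOF
grep -rn 'get_ioreq' "$dir/xen"   # every hit must be converted first
```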

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 05:04:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 05:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBdba-0002K9-GZ; Fri, 07 Feb 2014 05:03:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WBdbY-0002K4-2E
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 05:03:56 +0000
Received: from [85.158.143.35:39858] by server-3.bemta-4.messagelabs.com id
	93/B4-11539-B3964F25; Fri, 07 Feb 2014 05:03:55 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391749433!3807175!1
X-Originating-IP: [98.139.213.126]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4671 invoked from network); 7 Feb 2014 05:03:54 -0000
Received: from nm30-vm0.bullet.mail.bf1.yahoo.com (HELO
	nm30-vm0.bullet.mail.bf1.yahoo.com) (98.139.213.126)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 05:03:54 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391749433; bh=ZWJ3mf23zQKABza78LAPayDO5/GuSWkq+W0alC1xIbI=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=SExxJyyh9WTCdY2j22xUL7xf3RLMGlCZwQ7t5v/bzpMI6HbGiuTbhMQndnPlyLMNWKA+xXkgI/kA/9nlWsaFTms+YDX30ATCmfQO7etWF2j5GKRs02w2/Gq7uavknJw9Q8tkmg6YIz6CjgZElCkw8l5QU9tysRG7LsfuWjekkrg=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=Cn3GpkSndtCMKE3zASbLz3FkO3vSLWKfu85gQdxL4Vlao16CcivIcKW5VyRChgyHLq/rt2oELzXv0ERieS4XEPKI8knWrXE+DLxbw68vxq0Ez5Iv+OTFhMT8aCBHRQCcH1QwUx8cUTCIhpabpTWkcc5rYqzr/p+y/7ZMIrqRx/o=;
Received: from [98.139.215.142] by nm30.bullet.mail.bf1.yahoo.com with NNFMP;
	07 Feb 2014 05:03:53 -0000
Received: from [98.139.211.206] by tm13.bullet.mail.bf1.yahoo.com with NNFMP;
	07 Feb 2014 05:03:53 -0000
Received: from [127.0.0.1] by smtp215.mail.bf1.yahoo.com with NNFMP;
	07 Feb 2014 05:03:53 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391749433; bh=ZWJ3mf23zQKABza78LAPayDO5/GuSWkq+W0alC1xIbI=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=ug8yYof0wnl3oAYqezVAlhpdnSISdbTgMTjE8SI9Oe9fqRJE5LRWt9W4LWchJ1V3y4qevxQ+tQzZrYG9w8e+LOL2R49fUkMyckjefh4qHWAv0J3Z5MQKLMl7eJO6wuiaUw97Q3XbLHGHhnL0vCwvRE19QcmbMtD8e63o45s05/8=
X-Yahoo-Newman-Id: 92169.79747.bm@smtp215.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: PjIGvfYVM1kK2TVhGGYlzNKgMrUi7d2jzq_5GDyXorxdRPn
	yozshUkK8S4DdUlKBLZUlFoLXlHOru4yQOPiKrAZC_I3poSAmRp8jHI0quHo
	G_3S4o1AFja4jT1BU4ktBXNGggqaw6tMZQj.IVFEr0ANo3vDHJCeIqKmE9Rw
	YELM8_alQWgmapsH_vnFwVXMiCCrH4cdLgN5r_n20Ln8_CFC7kVmjKNhVO9b
	apZ7R9NnrQs06X3rYqkkXdiSHNlgoqYCY9v8dM5X7YAak0PczKU2ZUgbmbJL
	s7CtDJPcisE0smooWnItnUbj3.mCRk0UmdcjGiJU8nRn5Yf2YMwVWth2wUav
	8VZmkX61BBDbLKpV6BVo46O1Z9wTFnewaw86jZOPa_kP2Udy_DBsoewYIs3_
	4jn2_XMmsvXpV0MsEDizY3i5CJbcu9IHaRBhcDz4UBBJdm._TT1s0GPkXLAh
	ntPFSv.uXgg0D90BUFzbCtP37MHU_At5Z9r50bAbf.sRXBL2QEOy86o7Kltw
	OGfWYO1T1FdbQEi9ZDpY8F3EGML5TwM37KxJniZLHLGI3EffWjVlPZhOpfXy
	V0hf5sqwU0SxJ07Byn99XgkKLS87llbojxxJVKqcvXQgl796uQ7rO0KO4d2u
	Gy6IcPw70K3.JmbiCt4oi6M6iUZEDM6Q-
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[63.250.193.228])
	by smtp215.mail.bf1.yahoo.com with SMTP; 06 Feb 2014 21:03:53 -0800 PST
Message-ID: <1391749430.2943.13.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date: Thu, 06 Feb 2014 22:03:50 -0700
In-Reply-To: <CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz> <1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:


>         Is there a knob for qxl support?
>         
>         [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
>         qemu-system-i386: -vga qxl: invalid option

> 
> Here is a patch that adds qxl support in libxl, updated to Xen
> 4.4-rc3, if you want to add it:
> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
> 
> Or you can simply build from this branch, which is already set up for
> spice/qxl testing:
> https://github.com/Fantu/Xen/commits/rebase/m2r-testing
> 
> 
> 
> It is not upstream for now because there is something in Xen that
> makes it not work on Linux domUs with the qxl driver active, and it
> works with severe performance problems on Windows domUs.
> I spent several days without finding the exact problem to be solved :(
> If you want, you can try it out and see if anything changes using
> Fedora instead of Debian as dom0, different domU kernels, etc.
> Maybe you could even find some new information/errors useful for
> solving the problem.
> 

I am not very familiar with compiling or with GitHub, but I did learn
that by adding .patch to your first link it was possible to get the
changes in patch file format.  Applying the patch generated these
errors:

[root@xen xen]# patch -p1 < patch1.txt 
patching file docs/man/xl.cfg.pod.5
Hunk #2 FAILED at 1085.
1 out of 2 hunks FAILED -- saving rejects to file
docs/man/xl.cfg.pod.5.rej
patching file tools/libxl/libxl_create.c
Hunk #1 FAILED at 229.
Hunk #2 FAILED at 252.
2 out of 2 hunks FAILED -- saving rejects to file
tools/libxl/libxl_create.c.rej
patching file tools/libxl/libxl_dm.c
Hunk #1 succeeded at 217 with fuzz 2 (offset -3 lines).
Hunk #2 succeeded at 517 with fuzz 2 (offset -5 lines).
patching file tools/libxl/libxl_types.idl
Hunk #1 FAILED at 154.
1 out of 1 hunk FAILED -- saving rejects to file
tools/libxl/libxl_types.idl.rej
patching file tools/libxl/xl_cmdimpl.c
Hunk #1 FAILED at 1669.
1 out of 1 hunk FAILED -- saving rejects to file
tools/libxl/xl_cmdimpl.c.rej
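
Failed hunks like these can be spotted before any file is touched by
rehearsing the patch first. A self-contained sketch of that workflow
(scratch files, not the Xen tree):

```shell
# Sketch: 'patch --dry-run' reports which hunks would apply or fail
# without modifying anything, so a mismatched patch leaves no .rej
# files behind. Demo uses scratch files, not the Xen sources.
dir=$(mktemp -d)
cd "$dir"
printf 'line one\nline two\n' > file.txt
printf 'line one\nline 2\n'  > file.new
diff -u file.txt file.new > demo.patch || true  # diff exits 1 on change
rm file.new
patch -p0 --dry-run < demo.patch   # report only; file.txt untouched
grep -q 'line two' file.txt        # still the original contents
patch -p0 < demo.patch             # now apply for real
grep -q 'line 2' file.txt
```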

By inspecting the code, I could tell that there were some missing
patches (VGA interface type "none" was missing) compared to the RC3
source that I have.  Would you mind helping with the additional patches
that I need?  If so, I will try to help with the qxl issue.

Thanks,

Eric



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:


>         Is there a knob for qxl support?
>         
>         [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
>         qemu-system-i386: -vga qxl: invalid option

> 
> Here is a patch that adds qxl support to libxl, updated to Xen
> 4.4-rc3, if you want to apply it:
> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
> 
> Or you can simply build from this branch, which is already set up for
> spice/qxl testing:
> https://github.com/Fantu/Xen/commits/rebase/m2r-testing
> 
> 
> 
> It is not upstream for now because something in Xen stops it from
> working on Linux domUs with the qxl driver active, and leaves it
> working only with severe performance problems on Windows domUs.
> I spent several days without finding the exact problem to be solved :(
> If you want, you can try it out and see if anything changes using
> Fedora instead of Debian as dom0, different domU kernels, etc.
> Maybe you could even find some new information/errors useful for
> solving the problem.
> 
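[For anyone wanting to try the patch: it adds a qxl value for libxl's VGA interface type, so with it applied a domU config could select qxl/spice with a fragment roughly like the one below. The spice options are standard xl.cfg(5) settings; vga = "qxl" is valid only with the out-of-tree patch, and the host/port values are arbitrary examples.]

```
# HVM guest config fragment -- vga = "qxl" requires the patch above
vga = "qxl"
spice = 1
spicehost = "127.0.0.1"
spiceport = 6000
spicedisable_ticketing = 1
```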

I am not very familiar with compiling or with GitHub, but I did learn
that by adding .patch to your first link, it is possible to get the
changes in patch file format.  Applying the patch generated these
errors:

[root@xen xen]# patch -p1 < patch1.txt 
patching file docs/man/xl.cfg.pod.5
Hunk #2 FAILED at 1085.
1 out of 2 hunks FAILED -- saving rejects to file
docs/man/xl.cfg.pod.5.rej
patching file tools/libxl/libxl_create.c
Hunk #1 FAILED at 229.
Hunk #2 FAILED at 252.
2 out of 2 hunks FAILED -- saving rejects to file
tools/libxl/libxl_create.c.rej
patching file tools/libxl/libxl_dm.c
Hunk #1 succeeded at 217 with fuzz 2 (offset -3 lines).
Hunk #2 succeeded at 517 with fuzz 2 (offset -5 lines).
patching file tools/libxl/libxl_types.idl
Hunk #1 FAILED at 154.
1 out of 1 hunk FAILED -- saving rejects to file
tools/libxl/libxl_types.idl.rej
patching file tools/libxl/xl_cmdimpl.c
Hunk #1 FAILED at 1669.
1 out of 1 hunk FAILED -- saving rejects to file
tools/libxl/xl_cmdimpl.c.rej
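[A dry run can surface failures like these before any .rej files are written. Here is a minimal self-contained illustration; the file and patch are invented for the example, not Fantu's actual commit:]

```shell
# Build a throwaway file and a one-line patch for it.
mkdir -p /tmp/patch-demo
cd /tmp/patch-demo
printf 'line one\nline two\n' > hello.txt
printf 'line one\nline 2\n'  > hello.new
diff -u hello.txt hello.new > demo.patch || true  # diff exits 1 when files differ

# --dry-run checks whether every hunk applies without touching the tree.
patch --dry-run -p0 hello.txt < demo.patch

# Apply for real only once the dry run is clean.
patch -p0 hello.txt < demo.patch
```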

By inspecting the code, I could tell that there were some missing
patches (VGA interface type none was missing) compared to the RC3 source
that I have.  Would you mind helping with the additional patches that I
need?  If so, I will try to help with the qxl issue.

Thanks,

Eric



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 05:31:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 05:31:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBe29-0003PO-AI; Fri, 07 Feb 2014 05:31:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBe1y-0003PF-Ht
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 05:31:24 +0000
Received: from [85.158.137.68:8958] by server-8.bemta-3.messagelabs.com id
	C2/E5-16039-1AF64F25; Fri, 07 Feb 2014 05:31:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391751071!212562!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9547 invoked from network); 7 Feb 2014 05:31:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 05:31:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,798,1384300800"; d="scan'208";a="98808291"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 05:30:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 00:30:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBe1c-0000hH-JM;
	Fri, 07 Feb 2014 05:30:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBe1c-0005K1-51;
	Fri, 07 Feb 2014 05:30:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24755-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 05:30:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24755: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5492252515426066347=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5492252515426066347==
Content-Type: text/plain

flight 24755 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24755/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24699

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
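[The unsigned-arithmetic argument in the commit message can be sketched in isolation. The following is a hypothetical reconstruction, not the actual libvchan code: the struct, the function names, and the clamping policy are invented for illustration; only the prod/cons/size roles and the wraparound reasoning come from the message above.]

```c
#include <stdint.h>

/* Hypothetical ring descriptor: prod and cons live in shared memory and
 * are writable by the (potentially hostile) peer at any time; size is
 * trusted and fixed at setup. */
struct demo_ring {
    uint32_t cons;   /* consumer index, untrusted */
    uint32_t prod;   /* producer index, untrusted */
    uint32_t size;   /* ring size in bytes, trusted */
};

/* Bytes ready for reading.  The subtraction is unsigned, so a state that
 * would be "negative" shows up as a huge value; anything above the ring
 * size indicates a mad ring state and is clamped to a sane value rather
 * than handed to memcpy(). */
static uint32_t demo_get_data_ready(const struct demo_ring *r)
{
    uint32_t ready = r->prod - r->cons;   /* cannot be negative */
    return ready > r->size ? r->size : ready;
}

/* Free space for writing, sanitised the same way. */
static uint32_t demo_get_buffer_space(const struct demo_ring *r)
{
    uint32_t space = r->size - (r->prod - r->cons);
    return space > r->size ? 0 : space;
}
```

Under this scheme the callers only ever see values in [0, size], so, as the proof sketch argues, the worst a hostile peer can force is an access to an arguably-incorrect part of the ring, never outside it.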

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)


--===============5492252515426066347==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5492252515426066347==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 05:31:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 05:31:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBe29-0003PV-Rp; Fri, 07 Feb 2014 05:31:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBe1o-0003PB-Fr
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 05:31:24 +0000
Received: from [85.158.143.35:13476] by server-2.bemta-4.messagelabs.com id
	F9/58-10891-79F64F25; Fri, 07 Feb 2014 05:31:03 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391751062!3779046!1
X-Originating-IP: [209.85.216.174]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30517 invoked from network); 7 Feb 2014 05:31:03 -0000
Received: from mail-qc0-f174.google.com (HELO mail-qc0-f174.google.com)
	(209.85.216.174)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 05:31:03 -0000
Received: by mail-qc0-f174.google.com with SMTP id x13so5019343qcv.5
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 21:31:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=FKNZQEb6N1eEJoxMmYQrjGJoJ6yoblacFSIi/15kOo4=;
	b=kn1+BCoxvvuO5fp6AZ5XsHD4SQWkP+1pwiagOdh103O1prqVauaZMx8Sh+7mTUubHW
	nC3JcGPG0PhmlVor1a9YMCpwAYwAt5dZ6Rvdur2FhLOll/UPyViUZ+gu4GU//sNPlG1D
	ehaVuxCKp0bR/wxVzeA/bIMZ5xfz+TlYpnuC2mUuMx5snrBOt9dicZQ1DcAZU20A/uhJ
	2Fk1vMNweqSMcja8HGv8JXZynb3FJkoWP1B/mChB/IK4ewKfCxfXns0ESAaXsG92+4GH
	MzNZ8TM76r0ooDG95huM4Ih4gQe7lt7OXlVsi+aJHT0DMLB8p3QjQaA67mmWXMYT3kct
	8dsg==
X-Gm-Message-State: ALoCoQlJhe4PLtGweVi1sQLSMegyHRGL3ERuNdvUOJj9zSv4JaifsZfQ1yFJxw3OUelz6NMmb5Pg
MIME-Version: 1.0
X-Received: by 10.224.161.5 with SMTP id p5mr18650890qax.32.1391751062013;
	Thu, 06 Feb 2014 21:31:02 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Thu, 6 Feb 2014 21:31:01 -0800 (PST)
In-Reply-To: <1391692303.25128.3.camel@kazak.uk.xensource.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
	<52F38938.2090500@linaro.org>
	<1391692303.25128.3.camel@kazak.uk.xensource.com>
Date: Fri, 7 Feb 2014 11:01:01 +0530
Message-ID: <CAAHg+HhiUuKTsysfhg=H7LmM=i0bLb=+5gBJnZ3bZnJbcQZsMw@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	Julien Grall <julien.grall@linaro.org>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 6 February 2014 18:41, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 13:08 +0000, Julien Grall wrote:
>>
>> On 06/02/14 12:57, Ian Campbell wrote:
>> > On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
>> >>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> >>
>> >> I remember we had a discussion about the memory constraints when I
>> >> implemented the vfp context switch for arm32
>> >> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
>> >>
>> >> I think you should use "=Q" here too, to avoid clobbering the whole memory.
>> >
>> > Yes, I forgot to say: I think getting something in now is the priority,
>> > which is why I committed it, but this should be tightened up, probably
>> > for 4.5 unless the difference is benchmarkable.
>>
>> The fix is very simple (a 2-line change). I would prefer to
>> delay this patch for a couple of days and have a correct
>> implementation from the beginning, so we will not forget to change the
>> code for Xen 4.5.
>
> And I would rather close this rather large hole right now and not wait
> for two more days when we are looking at doing what might be the final
> rc soon.
>
> I had already applied before you said anything, so the point is moot.
>
> Anyway, if someone wants to submit for 4.4 with a case for a release
> exception then I'm sure George will consider it.
>
> Otherwise this thread is now in my QUEUE-4.5 folder so I'll get a
> reminder shortly after the release when I go through that.
>
>> Moreover Pranav usually answers quickly :).
>
> If he's awake/at work, it's out of office hours for him now.

:) Sorry, somehow I missed this yesterday.

If "=Q" is really critical, I can quickly send you a new patch against
your commit in the staging branch
(http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=712eb2e04da2cbcd9908f74ebd47c6df60d6d12f)
>
> Ian.
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 05:31:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 05:31:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBe29-0003PO-AI; Fri, 07 Feb 2014 05:31:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBe1y-0003PF-Ht
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 05:31:24 +0000
Received: from [85.158.137.68:8958] by server-8.bemta-3.messagelabs.com id
	C2/E5-16039-1AF64F25; Fri, 07 Feb 2014 05:31:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391751071!212562!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9547 invoked from network); 7 Feb 2014 05:31:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 05:31:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,798,1384300800"; d="scan'208";a="98808291"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 05:30:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 00:30:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBe1c-0000hH-JM;
	Fri, 07 Feb 2014 05:30:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBe1c-0005K1-51;
	Fri, 07 Feb 2014 05:30:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24755-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 05:30:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24755: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5492252515426066347=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5492252515426066347==
Content-Type: text/plain

flight 24755 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24755/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24699

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-GÃ³recki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)


--===============5492252515426066347==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5492252515426066347==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 05:31:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 05:31:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBe29-0003PV-Rp; Fri, 07 Feb 2014 05:31:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBe1o-0003PB-Fr
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 05:31:24 +0000
Received: from [85.158.143.35:13476] by server-2.bemta-4.messagelabs.com id
	F9/58-10891-79F64F25; Fri, 07 Feb 2014 05:31:03 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391751062!3779046!1
X-Originating-IP: [209.85.216.174]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30517 invoked from network); 7 Feb 2014 05:31:03 -0000
Received: from mail-qc0-f174.google.com (HELO mail-qc0-f174.google.com)
	(209.85.216.174)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 05:31:03 -0000
Received: by mail-qc0-f174.google.com with SMTP id x13so5019343qcv.5
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 21:31:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=FKNZQEb6N1eEJoxMmYQrjGJoJ6yoblacFSIi/15kOo4=;
	b=kn1+BCoxvvuO5fp6AZ5XsHD4SQWkP+1pwiagOdh103O1prqVauaZMx8Sh+7mTUubHW
	nC3JcGPG0PhmlVor1a9YMCpwAYwAt5dZ6Rvdur2FhLOll/UPyViUZ+gu4GU//sNPlG1D
	ehaVuxCKp0bR/wxVzeA/bIMZ5xfz+TlYpnuC2mUuMx5snrBOt9dicZQ1DcAZU20A/uhJ
	2Fk1vMNweqSMcja8HGv8JXZynb3FJkoWP1B/mChB/IK4ewKfCxfXns0ESAaXsG92+4GH
	MzNZ8TM76r0ooDG95huM4Ih4gQe7lt7OXlVsi+aJHT0DMLB8p3QjQaA67mmWXMYT3kct
	8dsg==
X-Gm-Message-State: ALoCoQlJhe4PLtGweVi1sQLSMegyHRGL3ERuNdvUOJj9zSv4JaifsZfQ1yFJxw3OUelz6NMmb5Pg
MIME-Version: 1.0
X-Received: by 10.224.161.5 with SMTP id p5mr18650890qax.32.1391751062013;
	Thu, 06 Feb 2014 21:31:02 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Thu, 6 Feb 2014 21:31:01 -0800 (PST)
In-Reply-To: <1391692303.25128.3.camel@kazak.uk.xensource.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
	<52F38938.2090500@linaro.org>
	<1391692303.25128.3.camel@kazak.uk.xensource.com>
Date: Fri, 7 Feb 2014 11:01:01 +0530
Message-ID: <CAAHg+HhiUuKTsysfhg=H7LmM=i0bLb=+5gBJnZ3bZnJbcQZsMw@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	Julien Grall <julien.grall@linaro.org>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 6 February 2014 18:41, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 13:08 +0000, Julien Grall wrote:
>>
>> On 06/02/14 12:57, Ian Campbell wrote:
>> > On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
>> >>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> >>
>> >> I remember we had a discussion about the memory constraints when I
>> >> implemented the vfp context switch for arm32
>> >> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
>> >>
>> >> I think you should use "=Q" also here to avoid clobbering the whole memory.
>> >
>> > Yes, I forgot to say: I think getting something in now is the priority,
>> > which is why I committed it, but this should be tightened up, probably
>> > for 4.5 unless the difference is benchmarkable.
>>
>> The fix is very simple (a two-line change). I would prefer to
>> delay this patch for a couple of days and have a correct
>> implementation from the beginning, so we will not forget to change the
>> code for Xen 4.5.
>
> And I would rather close this rather large hole right now and not wait
> for two more days when we are looking at doing what might be the final
> rc soon.
>
> I had already applied before you said anything, so the point is moot.
>
> Anyway, if someone wants to submit for 4.4 with a case for a release
> exception then I'm sure George will consider it.
>
> Otherwise this thread is now in my QUEUE-4.5 folder so I'll get a
> reminder shortly after the release when I go through that.
>
>> Moreover Pranav usually answers quickly :).
>
> If he's awake/at work, it's out of office hours for him now.

:) Sorry, somehow I missed this yesterday.

If "=Q" is really critical, I can quickly send you a new patch against
your commit in the staging branch
(http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=712eb2e04da2cbcd9908f74ebd47c6df60d6d12f).
>
> Ian.
>
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 06:55:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 06:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBfKZ-0006HL-Hb; Fri, 07 Feb 2014 06:54:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBfKY-0006HG-7O
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 06:54:30 +0000
Received: from [85.158.137.68:7512] by server-1.bemta-3.messagelabs.com id
	25/DC-17293-52384F25; Fri, 07 Feb 2014 06:54:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391756066!225603!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21923 invoked from network); 7 Feb 2014 06:54:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 06:54:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98881460"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 06:38:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 01:38:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBf52-00015Q-2d;
	Fri, 07 Feb 2014 06:38:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBf51-0007bT-NV;
	Fri, 07 Feb 2014 06:38:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24756-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 06:38:27 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24756: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6590490107083634004=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6590490107083634004==
Content-Type: text/plain

flight 24756 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24756/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 24743
 build-i386-xend               4 xen-build                 fail REGR. vs. 24743
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  2efcb0193bf3916c8ce34882e845f5ceb1e511f7
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 643 lines long.)


--===============6590490107083634004==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6590490107083634004==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 07:23:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBfmd-0007ZI-98; Fri, 07 Feb 2014 07:23:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WBfmc-0007ZD-4d
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 07:23:30 +0000
Received: from [85.158.139.211:5019] by server-12.bemta-5.messagelabs.com id
	81/9C-15415-1F984F25; Fri, 07 Feb 2014 07:23:29 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391757806!2288026!1
X-Originating-IP: [209.85.214.42]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5687 invoked from network); 7 Feb 2014 07:23:26 -0000
Received: from mail-bk0-f42.google.com (HELO mail-bk0-f42.google.com)
	(209.85.214.42)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 07:23:26 -0000
Received: by mail-bk0-f42.google.com with SMTP id 6so959371bkj.1
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 23:23:26 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=VKKrBHbEfxgm5ghdn6ybRNy+v8EyuZ2PLmkhVhSYdO0=;
	b=fpVwRGKbLAVg5O3rYtVSIh63B2qRtXSQ4m1Jaq1ozREb8AMktiAtQMgnbf/jrS62Et
	9f6+UhAxLXQhneSH00vRDlETOaQE2h5R+o+XaMhs5LNQmAcUJUkicvysmcj9VF1HVg6w
	dW3BqTDon8vy+AlXBKaJAxi+mESly7d3iC49c/u2SmAAL7k2DzebM0Y56clYOKeuM/kP
	4w4zFWkmagKuvLb2PgY/TuayzaVH5pFajOXr/X82OtRL9u00LzpG9IfqVdDJ4muCfwPH
	e6y53XbA6qFGC91KItkZFh1bxHLDImZzwgxa1eus2Mvdk2+WdqLEfGjLckdBP/LJ4DhD
	r2tw==
X-Gm-Message-State: ALoCoQm8WSdO6R6yF9+FUUKfRjjroPDmcyMIkhBNLntOQTRDNY2nEzNKAL4obeIRukEBg0zA+rZC
MIME-Version: 1.0
X-Received: by 10.204.76.7 with SMTP id a7mr7549453bkk.17.1391757806247; Thu,
	06 Feb 2014 23:23:26 -0800 (PST)
Received: by 10.204.103.194 with HTTP; Thu, 6 Feb 2014 23:23:26 -0800 (PST)
X-Originating-IP: [79.7.81.253]
In-Reply-To: <1391749430.2943.13.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
	<1391749430.2943.13.camel@astar.houby.net>
Date: Fri, 7 Feb 2014 08:23:26 +0100
Message-ID: <CABMPFzgxXo2PABX+=eVPSzHkOQavasNJ7y4iK3HPHYR+uWBg3g@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: ehouby@yahoo.com
Cc: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	xen@lists.fedoraproject.org
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8165804480545115590=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8165804480545115590==
Content-Type: multipart/alternative; boundary=047d7bb03beaa8103c04f1cbddbb

--047d7bb03beaa8103c04f1cbddbb
Content-Type: text/plain; charset=ISO-8859-1

2014-02-07 6:03 GMT+01:00 Eric Houby <ehouby@yahoo.com>:

> On Thu, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:
>
>
> >         Is there a knob for qxl support?
> >
> >         [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
> >         qemu-system-i386: -vga qxl: invalid option
>
> >
> > Here is a patch that adds qxl support in libxl, updated to Xen
> > 4.4-rc3, if you want to add it:
> >
> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
> >
> > Or you can simply compile from this branch, which is already set up
> > for spice/qxl testing:
> > https://github.com/Fantu/Xen/commits/rebase/m2r-testing
> >
> >
> >
> > It is not upstream for now because something in Xen makes it fail on
> > Linux domUs with the qxl driver active, and run with severe
> > performance problems on Windows domUs.
> > I spent several days without finding the exact problem to solve :(
> > If you want, you can try it out and see whether anything changes when
> > using Fedora instead of Debian as dom0, with different domU kernels, etc.
> > Maybe you could even find some new information/errors useful for
> > solving the problem.
> > solving the problem.
> >
>
> I am not very familiar with compiling or with github, but I did learn
> that by adding .patch to your first link, it was possible to get the
> changes in a patch file format.  Applying the patch generated these
> errors:
>
> [root@xen xen]# patch -p1 < patch1.txt
> patching file docs/man/xl.cfg.pod.5
> Hunk #2 FAILED at 1085.
> 1 out of 2 hunks FAILED -- saving rejects to file
> docs/man/xl.cfg.pod.5.rej
> patching file tools/libxl/libxl_create.c
> Hunk #1 FAILED at 229.
> Hunk #2 FAILED at 252.
> 2 out of 2 hunks FAILED -- saving rejects to file
> tools/libxl/libxl_create.c.rej
> patching file tools/libxl/libxl_dm.c
> Hunk #1 succeeded at 217 with fuzz 2 (offset -3 lines).
> Hunk #2 succeeded at 517 with fuzz 2 (offset -5 lines).
> patching file tools/libxl/libxl_types.idl
> Hunk #1 FAILED at 154.
> 1 out of 1 hunk FAILED -- saving rejects to file
> tools/libxl/libxl_types.idl.rej
> patching file tools/libxl/xl_cmdimpl.c
> Hunk #1 FAILED at 1669.
> 1 out of 1 hunk FAILED -- saving rejects to file
> tools/libxl/xl_cmdimpl.c.rej
>
> By inspecting the code, I could tell that there were some missing
> patches (VGA interface type none was missing) compared to the RC3 source
> that I have.  Would you mind helping with the additional patches that I
> need?  If so, I will try to help with the qxl issue.
>
> Thanks,
>
> Eric
>
>
>
Yes, it fails because it is applied after the "vga none" patch, which
modified neighboring lines.
Applying the vga none patch first and the qxl patch after it should make
both apply without conflict.
That should not cause problems, because both have already been reviewed by
the libxl/qemu maintainers and improved following their advice.
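The ordering point can be sketched with a toy example (hypothetical file
names and contents, not the real Xen patches): the qxl patch's context
lines only exist once the vga none patch has been applied, so applying
them in the other order produces exactly the kind of hunk failures
quoted above.

```shell
set -e
work=$(mktemp -d) && cd "$work"

# Three snapshots of one file: RC3 base, after "vga none", after qxl.
printf 'opt1\nvga_cirrus\nopt3\n'                    > base.c
printf 'opt1\nvga_cirrus\nvga_none\nopt3\n'          > with_none.c
printf 'opt1\nvga_cirrus\nvga_none\nvga_qxl\nopt3\n' > with_qxl.c

# diff exits 1 when the files differ, so mask that under set -e.
diff -u base.c with_none.c > vga-none.patch || true
diff -u with_none.c with_qxl.c > qxl.patch  || true

cp base.c cfg.c
# Wrong order: the qxl hunk's 'vga_none' context line is missing, so the
# hunk is rejected (-F0 disables fuzz; --dry-run leaves cfg.c untouched).
if patch -s -F0 --dry-run cfg.c < qxl.patch; then
    echo "qxl applied without vga none (unexpected)"
else
    echo "qxl alone: hunk FAILED"
fi

# Right order: vga none first, then qxl; both hunks apply cleanly.
patch -s cfg.c < vga-none.patch
patch -s cfg.c < qxl.patch
echo "both applied in order"
```

The same logic applies to the real tree: apply the vga none commit, then
the qxl commit, and both should go in without rejects.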

--047d7bb03beaa8103c04f1cbddbb--


--===============8165804480545115590==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8165804480545115590==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:41:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:41:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBg4D-0008KE-7E; Fri, 07 Feb 2014 07:41:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBg4A-0008K9-OM
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 07:41:39 +0000
Received: from [85.158.143.35:14764] by server-3.bemta-4.messagelabs.com id
	A2/21-11539-23E84F25; Fri, 07 Feb 2014 07:41:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391758895!3807868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23202 invoked from network); 7 Feb 2014 07:41:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 07:41:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98900261"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 07:41:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 02:41:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBg46-0001OX-F7;
	Fri, 07 Feb 2014 07:41:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBg45-0006ES-P4;
	Fri, 07 Feb 2014 07:41:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24758-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 07:41:33 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24758: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2622291310774068475=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2622291310774068475==
Content-Type: text/plain

flight 24758 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24758/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 24680

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24604

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 linux                e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
baseline version:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501

------------------------------------------------------------
People who touched revisions under test:
  Andrea Arcangeli <aarcange@redhat.com>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Grover <agrover@redhat.com>
  Aristeu Rozanski <aris@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Boris BREZILLON <b.brezillon@overkiz.com>
  Borislav Petkov <bp@suse.de>
  Chris Mason <clm@fb.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Kravkov <dmitry@broadcom.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@linux.intel.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Jack Pham <jackp@codeaurora.org>
  James Bottomley <JBottomley@Parallels.com>
  Jan Prinsloo <janroot@gmail.com>
  Jean-Jacques Hiblot <jjhiblot@traphandler.com>
  Johan Hovold <jhovold@gmail.com>
  John W. Linville <linville@tuxdriver.com>
  Josef Bacik <jbacik@fb.com>
  Jun zhang <zhang.jun92@zte.com.cn>
  Kevin Hilman <khilman@linaro.org>
  Khalid Aziz <khalid.aziz@oracle.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Leilei Zhao <leilei.zhao@atmel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Roszko <mark.roszko@gmail.com>
  Mark Brown <broonie@linaro.org>
  Matthew Garrett <matthew.garrett@nebula.com>
  Michal Schmidt <mschmidt@redhat.com>
  Mikhail Zolotaryov <lebon@lebon.org.ua>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Mackerras <paulus@samba.org>
  PaX Team <pageexec@freemail.hu>
  Rahul Bedarkar <rahulbedarkar89@gmail.com>
  Richard Weinberger <richard@nod.at>
  Sarah Sharp <sarah.a.sharp@linux.intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Takashi Iwai <tiwai@suse.de>
  Thomas Pugliese <thomas.pugliese@gmail.com>
  Vijaya Mohan Guvva <vmohan@brocade.com>
  Wang Shilong <wangsl.fnst@cn.fujitsu.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  yury <urykhy@gmail.com>
  ZHAO Gang <gamerh2o@gmail.com>
  Éric Piel <eric.piel@tremplin-utc.net>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 861 lines long.)


--===============2622291310774068475==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2622291310774068475==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 07:41:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:41:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBg4D-0008KE-7E; Fri, 07 Feb 2014 07:41:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBg4A-0008K9-OM
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 07:41:39 +0000
Received: from [85.158.143.35:14764] by server-3.bemta-4.messagelabs.com id
	A2/21-11539-23E84F25; Fri, 07 Feb 2014 07:41:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391758895!3807868!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23202 invoked from network); 7 Feb 2014 07:41:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 07:41:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98900261"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 07:41:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 02:41:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBg46-0001OX-F7;
	Fri, 07 Feb 2014 07:41:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBg45-0006ES-P4;
	Fri, 07 Feb 2014 07:41:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24758-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 07:41:33 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24758: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2622291310774068475=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2622291310774068475==
Content-Type: text/plain

flight 24758 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24758/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)       broken REGR. vs. 24680

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24604

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 linux                e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
baseline version:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501

------------------------------------------------------------
People who touched revisions under test:
  Andrea Arcangeli <aarcange@redhat.com>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Grover <agrover@redhat.com>
  Aristeu Rozanski <aris@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Boris BREZILLON <b.brezillon@overkiz.com>
  Borislav Petkov <bp@suse.de>
  Chris Mason <clm@fb.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Kravkov <dmitry@broadcom.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@linux.intel.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Jack Pham <jackp@codeaurora.org>
  James Bottomley <JBottomley@Parallels.com>
  Jan Prinsloo <janroot@gmail.com>
  Jean-Jacques Hiblot <jjhiblot@traphandler.com>
  Johan Hovold <jhovold@gmail.com>
  John W. Linville <linville@tuxdriver.com>
  Josef Bacik <jbacik@fb.com>
  Jun zhang <zhang.jun92@zte.com.cn>
  Kevin Hilman <khilman@linaro.org>
  Khalid Aziz <khalid.aziz@oracle.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Leilei Zhao <leilei.zhao@atmel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Roszko <mark.roszko@gmail.com>
  Mark Brown <broonie@linaro.org>
  Matthew Garrett <matthew.garrett@nebula.com>
  Michal Schmidt <mschmidt@redhat.com>
  Mikhail Zolotaryov <lebon@lebon.org.ua>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Mackerras <paulus@samba.org>
  PaX Team <pageexec@freemail.hu>
  Rahul Bedarkar <rahulbedarkar89@gmail.com>
  Richard Weinberger <richard@nod.at>
  Sarah Sharp <sarah.a.sharp@linux.intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Takashi Iwai <tiwai@suse.de>
  Thomas Pugliese <thomas.pugliese@gmail.com>
  Vijaya Mohan Guvva <vmohan@brocade.com>
  Wang Shilong <wangsl.fnst@cn.fujitsu.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  yury <urykhy@gmail.com>
  ZHAO Gang <gamerh2o@gmail.com>
  Éric Piel <eric.piel@tremplin-utc.net>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 861 lines long.)


--===============2622291310774068475==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2622291310774068475==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 07:54:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgGL-0000KV-HA; Fri, 07 Feb 2014 07:54:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBgGK-0000KQ-Ai
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 07:54:12 +0000
Received: from [85.158.139.211:33666] by server-1.bemta-5.messagelabs.com id
	35/1F-12859-32194F25; Fri, 07 Feb 2014 07:54:11 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391759650!2294449!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3327 invoked from network); 7 Feb 2014 07:54:10 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 07:54:10 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391759650; l=306;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=W3EIsRRnzcMaggxjvV7X/hDH620=;
	b=f8b6jDZ1gQn9bsw9cJcwHrWGk7gFruBY6Ygy04iq98waolBvMJR1jMPYYtB5EH9ak3S
	agfpv6SxA2xv+oJ4QD0C/UVus872I/ovs31ZvIlOEkHZCF5EOfEw15BzoPQxpEUiOzbHS
	UhI0IPxdfFS+WxgtQTLnCST2T00y4QU4/9s=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id v06c1cq177s7F4q
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Fri, 7 Feb 2014 08:54:07 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 03DBB50269; Fri,  7 Feb 2014 08:54:05 +0100 (CET)
Date: Fri, 7 Feb 2014 08:54:05 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140207075405.GA3206@aepfle.de>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
	<1391336650.15093.23.camel@hastur.hellion.org.uk>
	<20140202102852.GA9984@aepfle.de>
	<1391337376.15093.24.camel@hastur.hellion.org.uk>
	<20140206171248.5143e7b3@mantra.us.oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140206171248.5143e7b3@mantra.us.oracle.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian.Jackson@eu.citrix.com, Ian Campbell <ian.campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, Mukesh Rathor wrote:

> It should just work if you can quickly implement 
> arch/x86/debug.c for arm. If not, you'd need to "arch it out".

Some Makefile already has something like SUBDIR-$(ARCH); it was just the
hard-coded "make -C tools/gdbsx" in xen.spec which caused the error.
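
For illustration, the SUBDIR-$(ARCH) idiom mentioned above looks roughly
like the following sketch (variable and directory names here are
hypothetical, not taken from the actual Xen tools Makefiles): a
subdirectory is built only for the architectures that opt into it,
instead of being invoked unconditionally with "make -C tools/gdbsx".

```make
# Hedged sketch of a per-arch subdirectory list; names are illustrative.
SUBDIRS-y             :=
SUBDIRS-$(CONFIG_X86) += gdbsx       # only descend into gdbsx on x86

all: $(addprefix subdir-,$(SUBDIRS-y))

subdir-%:
	$(MAKE) -C $*
```

On an ARM build CONFIG_X86 is unset, so gdbsx never lands in SUBDIRS-y
and the recursive make is skipped entirely.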

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHj-0000Oe-0Y; Fri, 07 Feb 2014 07:55:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBc2o-0006q8-Ls
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:23:58 +0000
Received: from [193.109.254.147:42805] by server-7.bemta-14.messagelabs.com id
	08/0B-23424-EC154F25; Fri, 07 Feb 2014 03:23:58 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391743435!2626643!1
X-Originating-IP: [98.138.90.74]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18554 invoked from network); 7 Feb 2014 03:23:57 -0000
Received: from nm11.bullet.mail.ne1.yahoo.com (HELO
	nm11.bullet.mail.ne1.yahoo.com) (98.138.90.74)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:23:57 -0000
Received: from [98.138.226.180] by nm11.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:23:55 -0000
Received: from [98.138.87.8] by tm15.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:23:55 -0000
Received: from [127.0.0.1] by omp1008.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:23:55 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 10417.38949.bm@omp1008.mail.ne1.yahoo.com
Received: (qmail 68027 invoked by uid 60001); 7 Feb 2014 03:23:54 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391743434; bh=R5Q2+SLEm1iOB9Zi2ZBVeOQn4OPvxF8ObAkBFBGxZCA=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=qScIwAo9Ulo2xg6waeHMt/kZsU8F0nWoeGOC78lhbII+7qGPQWYtD5upM0Co9jUHqYJ5nJ03nR4B7aeuZF9bRnWB3eF/gnXqbmXnLnphlqSYFmfR6F67GnTACHmtqcgBRFBhN1XkSZw8jGllK7jBPuQdmF7vBWx5X7IoqQkLOvQ=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=JdrBzJQ2MPzpeEY3LM0QG3E76tf1SbNqFs7Ve7ddFOz5wD3XWQ0u0fQurneh7IK6JEphy4FXL5fKx+pMR/TSQzEhDsEDWafhNgkqOeUDTrLAnG/PsJu2VwhgJMf6HV5hD/Ykc0dJeW9sKi6RcnOL2rzETmpsQpUoOseQHoLFv9M=;
X-YMail-OSG: Da2pnMsVM1kPZ1PiCTe1tijRs8spR0Y1iIOqQUJJVNI0ChF
	T8byyw3J3XC.ULtFxhyDmDfMkm7G9QKB8gIGuuHrUNMfGH4ya3D81RD5dmgp
	vbXkZvanm79NI0YsrYHlGhkfdOSTbmRzjyMpHHgwdrImLSXy.JLhIDQiq87q
	f8n43dpcVFHSK9O0kon28xeObZsJThuoIZ6ZmXq4ObmNl5pMcrWwsVseWk13
	I8G1.F2WSANiAkoYNlcSmIzLLnYSV0Ffmlhqw6P3XSVUuAm.u_akjUw1_OZ1
	B8guDay1q2hCtgZqYYmm388ZcB8hBP7dJ6hfMX7XvNZkfl6FIV1jXKiAjZzJ
	IWjmfOPYrz8RyITeJz6.aEyBiw1.XK04WA5rHcxYHTy98nZPwpLYUByKQzjL
	mINSpDhMQtWeZipY6WDKgtJt5MfIJI2vwg5gYGrVQbRtMo4_nMCOmoZ3_7mz
	EVD4ozK2MNA94Qk8BPjVn4bQGdvgRir2MWgNOf_lnkQECtmMPKfSyXTy8P.c
	D5vz3jXmQmdyDk90QS71lvLWh9GZmwYKqQ2hX3Qqh8VuUrlapuezMR4HCOnK
	qygX.aTjaZt12NknketJ6Qg--
Received: from [54.240.196.185] by web122602.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:23:54 PST
X-Rocket-MIMEInfo: 002.001,
	T24gMDQvMDIvMTQgMTA6MjYsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBJbnRyb2R1Y2UgYSBuZXcgdmFyaWFibGUgdG8ga2VlcCB0cmFjayBvZiB0aGUgbnVtYmVyIG9mIGluLWZsaWdodAo.IHJlcXVlc3RzLiBXZSBuZWVkIHRvIG1ha2Ugc3VyZSB0aGF0IHdoZW4geGVuX2Jsa2lmX3B1dCBpcyBjYWxsZWQgdGhlCj4gcmVxdWVzdCBoYXMgYWxyZWFkeSBiZWVuIGZyZWVkIGFuZCB3ZSBjYW4gc2FmZWx5IGZyZWUgeGVuX2Jsa2lmLCB3aGljaAo.IHdhcyBub3QgdGhlIGNhc2UgYmVmb3JlLgoKVGVzdGVkLWIBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391743434.10989.YahooMailNeo@web122602.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:23:54 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: "roger.pau@citrix.com" <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: "Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4738143264985626204=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4738143264985626204==
Content-Type: multipart/alternative; boundary="971470827-1276053675-1391743434=:10989"

--971470827-1276053675-1391743434=:10989
Content-Type: text/plain; charset=us-ascii

On 04/02/14 10:26, Roger Pau Monne wrote:
> Introduce a new variable to keep track of the number of in-flight
> requests. We need to make sure that when xen_blkif_put is called the
> request has already been freed and we can safely free xen_blkif, which
> was not the case before.

Tested-by: Matt Rushton <mrushton@amazon.com>

Reviewed-by: Matt Rushton <mrushton@amazon.com>

-Matt

--971470827-1276053675-1391743434=:10989--


--===============4738143264985626204==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4738143264985626204==--


X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391743434.10989.YahooMailNeo@web122602.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:23:54 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: "roger.pau@citrix.com" <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: "Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4738143264985626204=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4738143264985626204==
Content-Type: multipart/alternative; boundary="971470827-1276053675-1391743434=:10989"

--971470827-1276053675-1391743434=:10989
Content-Type: text/plain; charset=us-ascii

On 04/02/14 10:26, Roger Pau Monne wrote:
> Introduce a new variable to keep track of the number of in-flight
> requests. We need to make sure that when xen_blkif_put is called the
> request has already been freed and we can safely free xen_blkif, which
> was not the case before.

Tested-by: Matt Rushton <mrushton@amazon.com>

Reviewed-by: Matt Rushton <mrushton@amazon.com>

-Matt

--971470827-1276053675-1391743434=:10989--


--===============4738143264985626204==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4738143264985626204==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHj-0000On-P1; Fri, 07 Feb 2014 07:55:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBcAT-0007A3-01
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:31:53 +0000
Received: from [85.158.139.211:15596] by server-5.bemta-5.messagelabs.com id
	B8/82-32749-8A354F25; Fri, 07 Feb 2014 03:31:52 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391743909!2251419!1
X-Originating-IP: [98.138.91.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20866 invoked from network); 7 Feb 2014 03:31:51 -0000
Received: from nm15-vm6.bullet.mail.ne1.yahoo.com (HELO
	nm15-vm6.bullet.mail.ne1.yahoo.com) (98.138.91.108)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:31:51 -0000
Received: from [98.138.100.102] by nm15.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:31:49 -0000
Received: from [98.138.89.244] by tm101.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:31:49 -0000
Received: from [127.0.0.1] by omp1058.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:31:49 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 585768.80985.bm@omp1058.mail.ne1.yahoo.com
Received: (qmail 39016 invoked by uid 60001); 7 Feb 2014 03:31:49 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391743909; bh=9Xz5oPTpYUa841UXJDbM+LsJYBFbu18TK35YfwCizJU=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=cKI8DMXSunb4e1A6Xqnsmj2IrwSvJl0oyf7k28J9bpsomT5v3b4ARs9Rc/dgqm7oCq/ME7g152jaBRZE627pZxTu+dwhuP8kp0Rubt9oPRktCnUsqh0icRPeVh4FNdZPDKTNLHjYD8lfqUwWPsXe95Kzp0WBMJQOIBpW0+1RVNU=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=XA7eqs1KQuy0zE/hjEzBufmhSXSid0RoHC2PMDlE6JN9dUqBdAz4KFDUXLqDvj0rohFvY3x3+TRcfbA9kLDuW3doYHl2Kt+huVPmukyTpCznA1+Eb8XUaFHlo0KZJDjAKLXBp7MSmjc+v1A1ng0FC9LmRFvDQJkf/MjywsqlInc=;
X-YMail-OSG: EgLs6pAVM1niGNanEX2ObxdE96uf7kmlt9KPtH2KG_MumQv
	ze4eWGkrOY.kDT.4joanO0VccpdQXLX9IU8RSta.hvwp6k_eBwJINqTAWfRW
	mhR04yFnBXQT_0lVIB7L6sVLuHMHsvsHC.FKXrzsbU8wc.qKHGel9mF.9GMd
	eLwob.1ivQC8wL0Yk8vV89eLGfYUXleMou1R0LCfwsfTO7TphUmmPJf7THeU
	2kDlE._5nGDoY39gIF1_kWmpI9_d0EiJfZ5kQt1BexoQMrpGBrJ0_EmROlHx
	eeJccweGvGOgMcgE32tKjOAM0fgswcI.1QV3uXIp.4DxheSBNhQDgXIxNVyr
	jFEZ26QmQ5VVkYSVkDezP4zHAj02afW1Su70QVb3N74gQOq1zlqtIwUJuqrv
	6Pb91P7ps2D2bir8t.tsYP24A6lrwGfO.rBdSaGuxwyU_H0NdU_RqIXR_Cfa
	v0LCh.qsDP0A3HJGpoFnTqn6WXvJH2vLb5Rv67BNyAc4Fsj7A6vazRxKgNOl
	In47vxgOT3Ab.p8euAxW0RDRV0yU3hrp95NVW7w8ppgaD4VFN2F78pmBokj9
	l43UmyAxIjXSd80EB
Received: from [54.240.196.185] by web122604.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:31:49 PST
X-Rocket-MIMEInfo: 002.001,
	T24gMDQvMDIvMTQgMTA6MjYsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBUaGlzIHdhcyB3cm9uZ2x5IGludHJvZHVjZWQgaW4gY29tbWl0IDQwMmIyN2Y5LCB0aGUgb25seSBkaWZmZXJlbmNlCj4gYmV0d2VlbiBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCBhbmQgYmxraWZfcmVxdWVzdF9zZWdtZW50IGlzCj4gdGhhdCB0aGUgZm9ybWVyIGhhcyBhIG5hbWVkIHBhZGRpbmcsIHdoaWxlIGJvdGggc2hhcmUgdGhlIHNhbWUKPiBtZW1vcnkgbGF5b3V0Lgo.Cj4gQWxzbyBjb3JyZWN0IGEgZmV3IG1pbm8BMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391743909.80736.YahooMailNeo@web122604.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:31:49 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: "roger.pau@citrix.com" <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: "aliguori@amazon.com" <aliguori@amazon.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0737320562364361368=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0737320562364361368==
Content-Type: multipart/alternative; boundary="-159098491-1827889108-1391743909=:80736"

---159098491-1827889108-1391743909=:80736
Content-Type: text/plain; charset=us-ascii

On 04/02/14 10:26, Roger Pau Monne wrote:
> This was wrongly introduced in commit 402b27f9, the only difference
> between blkif_request_segment_aligned and blkif_request_segment is
> that the former has a named padding, while both share the same
> memory layout.
>
> Also correct a few minor glitches in the description, including for it
> to no longer assume PAGE_SIZE == 4096.

Tested-by: Matt Rushton <mrushton@amazon.com>

-Matt

---159098491-1827889108-1391743909=:80736--


--===============0737320562364361368==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0737320562364361368==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHi-0000OO-6y; Fri, 07 Feb 2014 07:55:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBQ7q-0001gX-E7
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 14:40:23 +0000
Received: from [85.158.139.211:57123] by server-9.bemta-5.messagelabs.com id
	E7/21-11237-5DE93F25; Thu, 06 Feb 2014 14:40:21 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391697617!2139855!1
X-Originating-IP: [209.85.212.44]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_23, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12974 invoked from network); 6 Feb 2014 14:40:18 -0000
Received: from mail-vb0-f44.google.com (HELO mail-vb0-f44.google.com)
	(209.85.212.44)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:40:18 -0000
Received: by mail-vb0-f44.google.com with SMTP id f12so1502829vbg.3
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 06:40:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=1cprBEfbqz37PWMH64e2Xz4NYxTZOVBxXzWs1rLVif0=;
	b=zBj+LkniDH4RpM0q3H95ljBk1QFxz2m/cZiaG90MQS4TxuSUZmu+HSRZ0sUyXJdBWP
	7x2br3rU58msxQgHcrwu4LP3dZMICj+FiczniTnuRXRH4qTdm38hlf54Hl01NyVufsXm
	GYr2WnKDwsDivLYeofkZI9nY3rgai8dvkFMb6Q4WvWLLJ2/El3fOvKNYCo5wPh2hLNOT
	2GK2WHzooXzU7izWv/aIjmkLwQvqdJFjSAuVR1WqioyjxLXmIldL8foTYmNSl2eqAVFj
	sUTnE24DDC0rLPc7XbtQdFQ+9tJSALoosKvWHAWkB7pUiQhjQ1wywWf+dKMJWSOM+J2N
	qK2Q==
X-Received: by 10.58.181.71 with SMTP id du7mr1942572vec.25.1391697617247;
	Thu, 06 Feb 2014 06:40:17 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Thu, 6 Feb 2014 06:39:37 -0800 (PST)
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Thu, 6 Feb 2014 09:39:37 -0500
Message-ID: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Subject: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7621343443596533977=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7621343443596533977==
Content-Type: multipart/alternative; boundary=047d7b8738101c689b04f1bdda2e

--047d7b8738101c689b04f1bdda2e
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

I am attempting to do PCI passthrough of an Intel ET card (4x1G NIC) to an
HVM guest.  I had been trying to resolve this on the xen-users list, but was
advised to post the issue here. (Initial message -
http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)

The host machine is a Dell PowerEdge server with a Xeon E3-1220 and 4GB of
RAM.

The possible bug is the following:
root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
char device redirected to /dev/pts/5 (label serial0)
qemu: hardware error: xen: failed to populate ram at 40030000
....

I believe it may be similar to this thread:
http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results


Additional info that may be helpful is below.  Please let me know if you
need anything further.

Thanks in advance for any help!
Regards

###########################################################
root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
###########################################################
# Configuration file for Xen HVM

# HVM Name (as it appears in 'xl list')
name="ubuntu-hvm-0"
# HVM Build settings (+ hardware)
#kernel = "/usr/lib/xen-4.3/boot/hvmloader"
builder='hvm'
device_model='qemu-dm'
memory=1024
vcpus=2

# Virtual Interface
# Network bridge to USB NIC
vif=['bridge=xenbr0']

################### PCI PASSTHROUGH ###################
# PCI Permissive mode toggle
#pci_permissive=1

# All PCI Devices
#pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']

# First two ports on Intel 4x1G NIC
#pci=['03:00.0','03:00.1']

# Last two ports on Intel 4x1G NIC
#pci=['04:00.0', '04:00.1']

# All ports on Intel 4x1G NIC
pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']

# Broadcom 2x1G NIC
#pci=['05:00.0', '05:00.1']
################### PCI PASSTHROUGH ###################

# HVM Disks
# Hard disk only
# Boot from HDD first ('c')
boot="c"
disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']

# Hard disk with ISO
# Boot from ISO first ('d')
#boot="d"
#disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']

# ACPI Enable
acpi=1
# HVM Event Modes
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'

# Serial Console Configuration (Xen Console)
sdl=0
serial='pty'

# VNC Configuration
# Listens on all interfaces (0.0.0.0)
vnc=1
vnclisten="0.0.0.0"
vncpasswd=""

###########################################################
Copied from the xen-users list
###########################################################

It appears that QEMU cannot populate the guest RAM region needed for this
PCI device.


I rebooted the host and ran a script that assigns the PCI devices to
xen-pciback.  The output looks like:
root@fiat:~# ./dev_mgmt.sh
Loading Kernel Module 'xen-pciback'
Calling function pciback_dev for:
PCI DEVICE 0000:03:00.0
Unbinding 0000:03:00.0 from igb
Binding 0000:03:00.0 to pciback

PCI DEVICE 0000:03:00.1
Unbinding 0000:03:00.1 from igb
Binding 0000:03:00.1 to pciback

PCI DEVICE 0000:04:00.0
Unbinding 0000:04:00.0 from igb
Binding 0000:04:00.0 to pciback

PCI DEVICE 0000:04:00.1
Unbinding 0000:04:00.1 from igb
Binding 0000:04:00.1 to pciback

PCI DEVICE 0000:05:00.0
Unbinding 0000:05:00.0 from bnx2
Binding 0000:05:00.0 to pciback

PCI DEVICE 0000:05:00.1
Unbinding 0000:05:00.1 from bnx2
Binding 0000:05:00.1 to pciback

Listing PCI Devices Available to Xen
0000:03:00.0
0000:03:00.1
0000:04:00.0
0000:04:00.1
0000:05:00.0
0000:05:00.1
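
The unbind/bind sequence in the script output above is plain sysfs driver
rebinding.  A minimal sketch of what such a dev_mgmt.sh-style helper does
(the function name is hypothetical; the sysfs paths are the standard
xen-pciback interface, and DRY_RUN=1 only prints the writes so it can be
inspected without root):

```shell
#!/bin/sh
# Detach a PCI device (BDF) from its native driver and hand it to xen-pciback.
# DRY_RUN=1 (the default here) prints the sysfs writes instead of doing them.
DRY_RUN=${DRY_RUN:-1}

bind_to_pciback() {
    bdf="$1"     # e.g. 0000:03:00.0
    drv="$2"     # current driver, e.g. igb or bnx2
    if [ "$DRY_RUN" = "1" ]; then
        echo "echo $bdf > /sys/bus/pci/drivers/$drv/unbind"
        echo "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot"
        echo "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
    else
        echo "$bdf" > "/sys/bus/pci/drivers/$drv/unbind"
        echo "$bdf" > /sys/bus/pci/drivers/pciback/new_slot
        echo "$bdf" > /sys/bus/pci/drivers/pciback/bind
    fi
}

bind_to_pciback 0000:03:00.0 igb
```

With xl, `xl pci-assignable-add BDF` performs the same rebinding in one step.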

###########################################################
root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
Parsing config from /etc/xen/ubuntu-hvm-0.cfg
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a
non-default device_model
libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
how=(nil) callback=(nil) poller=0x210c3c0
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
vdev=hda, using backend phy
libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
domain, skipping bootloader
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x210c728: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
free_memkb=2980
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
with 1 nodes, 4 cpus and 2980 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->00000000001a69a4
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100608
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=phy
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
register slotnum=3
libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
inprogress: poller=0x210c3c0, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
epath=/local/domain/0/backend/vbd/2/768/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
/local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
epath=/local/domain/0/backend/vbd/2/768/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
/local/domain/0/backend/vbd/2/768/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x2112f48: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
/etc/xen/scripts/block add
libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
/usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
/usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register
slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
wpath=/local/domain/0/device-model/2/state token=3/1: event
epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
wpath=/local/domain/0/device-model/2/state token=3/1: event
epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x210c960: deregister unregistered
libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
/var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "change",
    "id": 3,
    "arguments": {
        "device": "vnc",
        "target": "password",
        "arg": ""
    }
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 4
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register
slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
/local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
/local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x210e8a8: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
/etc/xen/scripts/vif-bridge online
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
/etc/xen/scripts/vif-bridge add
libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
/var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-03_00.0",
        "hostaddr": "0000:03:00.0"
    }
}
'
libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset
by peer
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
Connection refused
libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
progress report: ignored
libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
complete, rc=0
libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
Daemon running with PID 3214
xc: debug: hypercall buffer: total allocations:793 total releases:793
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4

###########################################################
root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
char device redirected to /dev/pts/5 (label serial0)
qemu: hardware error: xen: failed to populate ram at 40030000
CPU #0:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #1:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000

###########################################################
/etc/default/grub
GRUB_DEFAULT="Xen 4.3-amd64"
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
# biosdevname=0
GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"



--===============7621343443596533977==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7621343443596533977==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHi-0000OV-Kf; Fri, 07 Feb 2014 07:55:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBbuJ-0006WC-5b
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:15:11 +0000
Received: from [85.158.139.211:4691] by server-14.bemta-5.messagelabs.com id
	AB/FE-27598-EBF44F25; Fri, 07 Feb 2014 03:15:10 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391742907!2247631!1
X-Originating-IP: [98.138.91.233]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32421 invoked from network); 7 Feb 2014 03:15:09 -0000
Received: from nm11-vm5.bullet.mail.ne1.yahoo.com (HELO
	nm11-vm5.bullet.mail.ne1.yahoo.com) (98.138.91.233)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:15:09 -0000
Received: from [98.138.101.130] by nm11.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:15:07 -0000
Received: from [98.138.101.170] by tm18.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:15:06 -0000
Received: from [127.0.0.1] by omp1081.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:15:06 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 673927.87836.bm@omp1081.mail.ne1.yahoo.com
Received: (qmail 44862 invoked by uid 60001); 7 Feb 2014 03:15:06 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391742906; bh=7PgwgNZ5/RSSUYrLaTzQy2cFkT1NA7xyfbLF4jGtC3A=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=QGzBMLbQcog43QCeMHjnyiFh3Hs0BkgZ72shpJHPiA8ssHHZITjOdTTx+XQxd3F8BYQyfSiVOKWZsgHu1fdMlzY/IfYkGwS+rrambAKotJ1W4DISbjKfurA7bc4RMkTIFwUvrDib8e315BQGp1asjxDf1jJ5LVqg1rwaLQfPoHU=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=E4TD8pvYaMvfvCLGhd7KVdqB5kwlCZSEIGDiEYNMGP4eKXkeevUoqT3K38K73EBQmr+oZt2sLWaXs3viLpC3C4wresyq4WHU6T29ykntsv24Y1ZqnfKqwY4BiZdmzW8nnZLH09NShrVhHx+a46CjNinMB+yaMAkwmyQZsDOZbTE=;
X-YMail-OSG: kLiG3HYVM1nWASDyyBToY6PRKFxX.SRPMya7U6TWf4ZByPM
	ZVk7XEB64SufyAoknXxQVsFnXPEPG1777ZFC_Fm3FXH.5dM4D6HGiO3kiCqF
	TSPrzYdHmeLi686vDl8svPzzn6DuhXHj7ESHKN.2f1BPSSIJbKtzoQj.9p7R
	hbDT7uTvI7eDPZnyR4aCnsdZpnyBkordmS4gRWJNp9.oVwv16462VSPjAm7F
	I.SuQo6OZwIpsSPylt7iACQNBjmtHs7C10BJ.LDLZIhXDC7hgNORGuDNXPfc
	6AIG1ogIaZQKqI_dzGsMlBVu682l6Jot1ZhcR.kebxjoX8NGTixsMOWgg9RL
	2y7FiRrRme7M7EVi1u3MabkU0CgLtveQFgAKN7lTh.NDdJ_n47JVtwOi_mNX
	EMpzpwFfCg3e3IWxuT87TLVaY7.KApayIn5NCssO3zM.n3oa8unjtHtBeaKv
	zIz1qM_lf92lr6B9MwtEjQy71rKxA6wdzwMwgzCVN5KGkJ6yhrl4HF7ezv0x
	ZwPOh66eqLm4nZ6VYDx7O4_pSBLZ1V7OyTQJOUarv4.tEIfKGNymibCwzLVp
	l0vwtghzEQRrLdkEp5p6akg--
Received: from [54.240.196.185] by web122601.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:15:06 PST
X-Rocket-MIMEInfo: 002.001,
	T24gMDQvMDIvMTQgMTA6MjYsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBJJ3ZlIGF0IGxlYXN0IGlkZW50aWZpZWQgdHdvIHBvc3NpYmxlIG1lbW9yeSBsZWFrcyBpbiBibGtiYWNrLCBib3RoCj4gcmVsYXRlZCB0byB0aGUgc2h1dGRvd24gcGF0aCBvZiBhIFZCRDoKPiAKPiAtIGJsa2JhY2sgZG9lc24ndCB3YWl0IGZvciBhbnkgcGVuZGluZyBwdXJnZSB3b3JrIHRvIGZpbmlzaCBiZWZvcmUKPsKgwqAgY2xlYW5pbmcgdGhlIGxpc3Qgb2YgZnJlZV9wYWdlcy4gVGhlIHB1cmdlIHdvcmsgd2lsbCBjYWxsCj4BMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391742906.53011.YahooMailNeo@web122601.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:15:06 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: Roger Pau Monne <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>
Subject: [Xen-devel] [PATCH v2 2/4] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6098453347174010562=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6098453347174010562==
Content-Type: multipart/alternative; boundary="-822996273-1973664204-1391742906=:53011"

---822996273-1973664204-1391742906=:53011
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

On 04/02/14 10:26, Roger Pau Monne wrote:
> I've at least identified two possible memory leaks in blkback, both
> related to the shutdown path of a VBD:
>
> - blkback doesn't wait for any pending purge work to finish before
>   cleaning the list of free_pages. The purge work will call
>   put_free_pages and thus we might end up with pages being added to
>   the free_pages list after we have emptied it. Fix this by making
>   sure there's no pending purge work before exiting
>   xen_blkif_schedule, and moving the free_page cleanup code to
>   xen_blkif_free.
> - blkback doesn't wait for pending requests to end before cleaning
>   persistent grants and the list of free_pages. Again this can add
>   pages to the free_pages list or persistent grants to the
>   persistent_gnts red-black tree. Fixed by moving the persistent
>   grants and free_pages cleanup code to xen_blkif_free.
>
> Also, add some checks in xen_blkif_free to make sure we are cleaning
> everything.

Tested-by: Matt Rushton <mrushton@amazon.com>

Reviewed-by: Matt Rushton <mrushton@amazon.com>
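
The first leak described above boils down to an ordering rule: flush any
deferred purge work before draining free_pages, or a late put_free_pages
call repopulates a list nobody will free again. A toy single-threaded
userspace sketch of that rule (illustrative names only, not the actual
blkback code; "pending work" is modeled as one queued callback):

```c
#include <assert.h>
#include <stdlib.h>

/* Toy model of the blkback shutdown leak (illustrative, not kernel
 * code). Pending purge work is a queued callback that will call
 * put_free_page(); if the owner drains free_pages before flushing the
 * queue, the late page is leaked. */

struct node { struct node *next; };

static struct node *free_pages;          /* list of spare pages      */
static void (*pending_work)(void *);     /* at most one queued item  */
static void *pending_arg;

static void put_free_page(void *p)
{
    struct node *n = p;
    n->next = free_pages;
    free_pages = n;
}

static void queue_purge_work(struct node *n)
{
    pending_work = put_free_page;        /* purge returns the page late */
    pending_arg = n;
}

static void flush_pending_work(void)     /* analogous to flushing work */
{
    if (pending_work) {
        pending_work(pending_arg);
        pending_work = NULL;
    }
}

static int drain_free_pages(void)        /* frees the list, counts pages */
{
    int n = 0;
    while (free_pages) {
        struct node *p = free_pages;
        free_pages = p->next;
        free(p);
        n++;
    }
    return n;
}

int shutdown_vbd(void)
{
    queue_purge_work(malloc(sizeof(struct node)));
    /* The fix in the patch follows this shape: make sure no purge work
     * is pending before tearing down, then do the free_pages cleanup
     * last (in xen_blkif_free). Draining first would miss this page. */
    flush_pending_work();
    return drain_free_pages();
}
```

With the flush before the drain, every page is accounted for; swapping
the two calls models the leak the patch closes.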
---822996273-1973664204-1391742906=:53011--


--===============6098453347174010562==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6098453347174010562==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHk-0000Ou-56; Fri, 07 Feb 2014 07:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBcGx-0007JT-9K
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:38:35 +0000
Received: from [85.158.143.35:15194] by server-1.bemta-4.messagelabs.com id
	DA/E3-31661-A3554F25; Fri, 07 Feb 2014 03:38:34 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391744311!3790863!1
X-Originating-IP: [98.138.91.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26793 invoked from network); 7 Feb 2014 03:38:33 -0000
Received: from nm28-vm0.bullet.mail.ne1.yahoo.com (HELO
	nm28-vm0.bullet.mail.ne1.yahoo.com) (98.138.91.22)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:38:33 -0000
Received: from [98.138.100.103] by nm28.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:38:31 -0000
Received: from [98.138.101.172] by tm102.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:38:31 -0000
Received: from [127.0.0.1] by omp1083.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:38:31 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 654791.73292.bm@omp1083.mail.ne1.yahoo.com
Received: (qmail 51729 invoked by uid 60001); 7 Feb 2014 03:38:31 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391744311; bh=jItZhzDmXR6votFz4+Gp9WxHnaA/ppSXj1Vh/bIkOsk=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=kGNuCPOv+TPQIJ93JhnMdjbdmaemuBsHN30b0u8jO/B5uhQX+GM+LwoiZlWy7BqxcD7kOBg+3P2lF8fS4QTnOxJssr/08E1vEaKN8dgRsXlVP3OESWEYSrk9u2wtazjTaJT7/+mUiIg+OovICIjDzdZ6bPg2rn7TQ73p/2sBm3E=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=1ad02NHWei297HGnzPfAGMrLUdzbO0gPCDhtHVEnOZwIVOeXYtRb3RtwnVscO9yiWM4D5UihE0HNMovm7NqxqYz1SjPfwE7OqzjTfeeZO4KFGZSOknzEFtg4ERV6pFkfKwFCOyuqFrsGLjKdyDyH0zJY49JMC6MPzsLtMUpVjq0=;
X-YMail-OSG: Ijke8pwVM1mkdHBMKZhtEzX3lrNAeqEhNx.W4HIs8ctQVfL
	Uo0yc4QRuVqRgA8hjySuZuYBNAz0r69E7zhR3bT8M0GVPN9YOIkFu69J7QsG
	xgIA4YQp7AD6qqwx3HwgI5rooHN8s9SJnr_vjRtevQ5qb3UMHz8ch2dIzppL
	jDMYjvoDpwUAJCgAi59tJ8BDu.gPnXLkDz.MN6f4UWXoezlOKVOPKbwFhwvk
	XBKx4qDaUgvaCTfI9azA_9o4eyV7DJ0SBIyRT2Z1UoD9AOYnYpqMz6WSKWXq
	MjHbHduf.0xPXkFI8HSpEU3uYXnR1CjQe1aYzP5bon4GKfIMLdRRiNVMbFHk
	XxcSn_OMg6MSNbOf3a2yk4uNS.fmjBL9kG4nlK_7iaQJSgfzfUGwk3knHdYs
	WlwnoVVpRHpuzNB.Zn5ieE3QbBVTz2qWgtWZwt7tf7fnGFwGu1I1Gec8NtET
	.pjKswVey69GejAchO7T._KyZxSbbXQxm_RSyDtUxchLJVvDxIdwJv45.N.N
	BF7ZGfjPPrp.yzCgwIdXS4dmRap8Ey1RJ.bAbiHJzKdug9TVn83vxtb19Kkf
	94i969anv7UU_ZRGaxw8S
Received: from [54.240.196.185] by web122606.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:38:31 PST
X-Rocket-MIMEInfo: 002.001,
	PiBUaGlzIHdhcyB3cm9uZ2x5IGludHJvZHVjZWQgaW4gY29tbWl0IDQwMmIyN2Y5LCB0aGUgb25seSBkaWZmZXJlbmNlCj4gYmV0d2VlbiBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCBhbmQgYmxraWZfcmVxdWVzdF9zZWdtZW50IGlzCj4gdGhhdCB0aGUgZm9ybWVyIGhhcyBhIG5hbWVkIHBhZGRpbmcsIHdoaWxlIGJvdGggc2hhcmUgdGhlIHNhbWUKPiBtZW1vcnkgbGF5b3V0Lgo.Cj4gQWxzbyBjb3JyZWN0IGEgZmV3IG1pbm9yIGdsaXRjaGVzIGluIHRoZSBkZXNjcmlwdGlvbiwgaW5jbHVkaW5nIGYBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391744311.40091.YahooMailNeo@web122606.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:38:31 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: "roger.pau@citrix.com" <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: "msw@amazon.com" <msw@amazon.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/4] xen-blkif: drop struct
	blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1938383283014125849=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1938383283014125849==
Content-Type: multipart/alternative; boundary="2094167247-1629941735-1391744311=:40091"

--2094167247-1629941735-1391744311=:40091
Content-Type: text/plain; charset=us-ascii

> This was wrongly introduced in commit 402b27f9, the only difference
> between blkif_request_segment_aligned and blkif_request_segment is
> that the former has a named padding, while both share the same
> memory layout.
>
> Also correct a few minor glitches in the description, including for it
> to no longer assume PAGE_SIZE == 4096.

Tested-by: Matt Rushton <mrushton@amazon.com>

*Corrected subject line from last email and resent. I tested the set and everything looks solid. I also reviewed patch 2 and 3.

--2094167247-1629941735-1391744311=:40091--


--===============1938383283014125849==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1938383283014125849==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHj-0000On-P1; Fri, 07 Feb 2014 07:55:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBcAT-0007A3-01
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:31:53 +0000
Received: from [85.158.139.211:15596] by server-5.bemta-5.messagelabs.com id
	B8/82-32749-8A354F25; Fri, 07 Feb 2014 03:31:52 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1391743909!2251419!1
X-Originating-IP: [98.138.91.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20866 invoked from network); 7 Feb 2014 03:31:51 -0000
Received: from nm15-vm6.bullet.mail.ne1.yahoo.com (HELO
	nm15-vm6.bullet.mail.ne1.yahoo.com) (98.138.91.108)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:31:51 -0000
Received: from [98.138.100.102] by nm15.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:31:49 -0000
Received: from [98.138.89.244] by tm101.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:31:49 -0000
Received: from [127.0.0.1] by omp1058.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:31:49 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 585768.80985.bm@omp1058.mail.ne1.yahoo.com
Received: (qmail 39016 invoked by uid 60001); 7 Feb 2014 03:31:49 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391743909; bh=9Xz5oPTpYUa841UXJDbM+LsJYBFbu18TK35YfwCizJU=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=cKI8DMXSunb4e1A6Xqnsmj2IrwSvJl0oyf7k28J9bpsomT5v3b4ARs9Rc/dgqm7oCq/ME7g152jaBRZE627pZxTu+dwhuP8kp0Rubt9oPRktCnUsqh0icRPeVh4FNdZPDKTNLHjYD8lfqUwWPsXe95Kzp0WBMJQOIBpW0+1RVNU=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=XA7eqs1KQuy0zE/hjEzBufmhSXSid0RoHC2PMDlE6JN9dUqBdAz4KFDUXLqDvj0rohFvY3x3+TRcfbA9kLDuW3doYHl2Kt+huVPmukyTpCznA1+Eb8XUaFHlo0KZJDjAKLXBp7MSmjc+v1A1ng0FC9LmRFvDQJkf/MjywsqlInc=;
X-YMail-OSG: EgLs6pAVM1niGNanEX2ObxdE96uf7kmlt9KPtH2KG_MumQv
	ze4eWGkrOY.kDT.4joanO0VccpdQXLX9IU8RSta.hvwp6k_eBwJINqTAWfRW
	mhR04yFnBXQT_0lVIB7L6sVLuHMHsvsHC.FKXrzsbU8wc.qKHGel9mF.9GMd
	eLwob.1ivQC8wL0Yk8vV89eLGfYUXleMou1R0LCfwsfTO7TphUmmPJf7THeU
	2kDlE._5nGDoY39gIF1_kWmpI9_d0EiJfZ5kQt1BexoQMrpGBrJ0_EmROlHx
	eeJccweGvGOgMcgE32tKjOAM0fgswcI.1QV3uXIp.4DxheSBNhQDgXIxNVyr
	jFEZ26QmQ5VVkYSVkDezP4zHAj02afW1Su70QVb3N74gQOq1zlqtIwUJuqrv
	6Pb91P7ps2D2bir8t.tsYP24A6lrwGfO.rBdSaGuxwyU_H0NdU_RqIXR_Cfa
	v0LCh.qsDP0A3HJGpoFnTqn6WXvJH2vLb5Rv67BNyAc4Fsj7A6vazRxKgNOl
	In47vxgOT3Ab.p8euAxW0RDRV0yU3hrp95NVW7w8ppgaD4VFN2F78pmBokj9
	l43UmyAxIjXSd80EB
Received: from [54.240.196.185] by web122604.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:31:49 PST
X-Rocket-MIMEInfo: 002.001,
	T24gMDQvMDIvMTQgMTA6MjYsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBUaGlzIHdhcyB3cm9uZ2x5IGludHJvZHVjZWQgaW4gY29tbWl0IDQwMmIyN2Y5LCB0aGUgb25seSBkaWZmZXJlbmNlCj4gYmV0d2VlbiBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCBhbmQgYmxraWZfcmVxdWVzdF9zZWdtZW50IGlzCj4gdGhhdCB0aGUgZm9ybWVyIGhhcyBhIG5hbWVkIHBhZGRpbmcsIHdoaWxlIGJvdGggc2hhcmUgdGhlIHNhbWUKPiBtZW1vcnkgbGF5b3V0Lgo.Cj4gQWxzbyBjb3JyZWN0IGEgZmV3IG1pbm8BMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391743909.80736.YahooMailNeo@web122604.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:31:49 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: "roger.pau@citrix.com" <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: "aliguori@amazon.com" <aliguori@amazon.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0737320562364361368=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0737320562364361368==
Content-Type: multipart/alternative; boundary="-159098491-1827889108-1391743909=:80736"

---159098491-1827889108-1391743909=:80736
Content-Type: text/plain; charset=us-ascii

On 04/02/14 10:26, Roger Pau Monne wrote:
> This was wrongly introduced in commit 402b27f9, the only difference
> between blkif_request_segment_aligned and blkif_request_segment is
> that the former has a named padding, while both share the same
> memory layout.
>
> Also correct a few minor glitches in the description, including for it
> to no longer assume PAGE_SIZE == 4096.

Tested-by: Matt Rushton <mrushton@amazon.com>

-Matt

---159098491-1827889108-1391743909=:80736--


--===============0737320562364361368==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0737320562364361368==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHi-0000OV-Kf; Fri, 07 Feb 2014 07:55:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBbuJ-0006WC-5b
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:15:11 +0000
Received: from [85.158.139.211:4691] by server-14.bemta-5.messagelabs.com id
	AB/FE-27598-EBF44F25; Fri, 07 Feb 2014 03:15:10 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391742907!2247631!1
X-Originating-IP: [98.138.91.233]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32421 invoked from network); 7 Feb 2014 03:15:09 -0000
Received: from nm11-vm5.bullet.mail.ne1.yahoo.com (HELO
	nm11-vm5.bullet.mail.ne1.yahoo.com) (98.138.91.233)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:15:09 -0000
Received: from [98.138.101.130] by nm11.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:15:07 -0000
Received: from [98.138.101.170] by tm18.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:15:06 -0000
Received: from [127.0.0.1] by omp1081.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:15:06 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 673927.87836.bm@omp1081.mail.ne1.yahoo.com
Received: (qmail 44862 invoked by uid 60001); 7 Feb 2014 03:15:06 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391742906; bh=7PgwgNZ5/RSSUYrLaTzQy2cFkT1NA7xyfbLF4jGtC3A=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=QGzBMLbQcog43QCeMHjnyiFh3Hs0BkgZ72shpJHPiA8ssHHZITjOdTTx+XQxd3F8BYQyfSiVOKWZsgHu1fdMlzY/IfYkGwS+rrambAKotJ1W4DISbjKfurA7bc4RMkTIFwUvrDib8e315BQGp1asjxDf1jJ5LVqg1rwaLQfPoHU=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=E4TD8pvYaMvfvCLGhd7KVdqB5kwlCZSEIGDiEYNMGP4eKXkeevUoqT3K38K73EBQmr+oZt2sLWaXs3viLpC3C4wresyq4WHU6T29ykntsv24Y1ZqnfKqwY4BiZdmzW8nnZLH09NShrVhHx+a46CjNinMB+yaMAkwmyQZsDOZbTE=;
X-YMail-OSG: kLiG3HYVM1nWASDyyBToY6PRKFxX.SRPMya7U6TWf4ZByPM
	ZVk7XEB64SufyAoknXxQVsFnXPEPG1777ZFC_Fm3FXH.5dM4D6HGiO3kiCqF
	TSPrzYdHmeLi686vDl8svPzzn6DuhXHj7ESHKN.2f1BPSSIJbKtzoQj.9p7R
	hbDT7uTvI7eDPZnyR4aCnsdZpnyBkordmS4gRWJNp9.oVwv16462VSPjAm7F
	I.SuQo6OZwIpsSPylt7iACQNBjmtHs7C10BJ.LDLZIhXDC7hgNORGuDNXPfc
	6AIG1ogIaZQKqI_dzGsMlBVu682l6Jot1ZhcR.kebxjoX8NGTixsMOWgg9RL
	2y7FiRrRme7M7EVi1u3MabkU0CgLtveQFgAKN7lTh.NDdJ_n47JVtwOi_mNX
	EMpzpwFfCg3e3IWxuT87TLVaY7.KApayIn5NCssO3zM.n3oa8unjtHtBeaKv
	zIz1qM_lf92lr6B9MwtEjQy71rKxA6wdzwMwgzCVN5KGkJ6yhrl4HF7ezv0x
	ZwPOh66eqLm4nZ6VYDx7O4_pSBLZ1V7OyTQJOUarv4.tEIfKGNymibCwzLVp
	l0vwtghzEQRrLdkEp5p6akg--
Received: from [54.240.196.185] by web122601.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:15:06 PST
X-Rocket-MIMEInfo: 002.001,
	T24gMDQvMDIvMTQgMTA6MjYsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBJJ3ZlIGF0IGxlYXN0IGlkZW50aWZpZWQgdHdvIHBvc3NpYmxlIG1lbW9yeSBsZWFrcyBpbiBibGtiYWNrLCBib3RoCj4gcmVsYXRlZCB0byB0aGUgc2h1dGRvd24gcGF0aCBvZiBhIFZCRDoKPiAKPiAtIGJsa2JhY2sgZG9lc24ndCB3YWl0IGZvciBhbnkgcGVuZGluZyBwdXJnZSB3b3JrIHRvIGZpbmlzaCBiZWZvcmUKPsKgwqAgY2xlYW5pbmcgdGhlIGxpc3Qgb2YgZnJlZV9wYWdlcy4gVGhlIHB1cmdlIHdvcmsgd2lsbCBjYWxsCj4BMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391742906.53011.YahooMailNeo@web122601.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:15:06 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm5lIOKAjuKAjg==?= <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: =?utf-8?B?SWFuIENhbXBiZWxsIOKAjg==?= <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	=?utf-8?B?RGF2aWQgVnJhYmVsIOKAjuKAjuKAjg==?= <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	=?utf-8?B?Qm9yaXMgT3N0cm92c2t5IOKAjg==?= <boris.ostrovsky@oracle.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>
Subject: [Xen-devel] [PATCH v2 2/4] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6098453347174010562=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6098453347174010562==
Content-Type: multipart/alternative; boundary="-822996273-1973664204-1391742906=:53011"

---822996273-1973664204-1391742906=:53011
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On 04/02/14 10:26, Roger Pau Monne wrote:=0A> I've at least identified two =
possible memory leaks in blkback, both=0A> related to the shutdown path of =
a VBD:=0A> =0A> - blkback doesn't wait for any pending purge work to finish=
 before=0A>=C2=A0=C2=A0 cleaning the list of free_pages. The purge work wil=
l call=0A>=C2=A0=C2=A0 put_free_pages and thus we might end up with pages b=
eing added to=0A>=C2=A0=C2=A0 the free_pages list after we have emptied it.=
 Fix this by making=0A>=C2=A0=C2=A0 sure there's no pending purge work befo=
re exiting=0A>=C2=A0=C2=A0 xen_blkif_schedule, and moving the free_page cle=
anup code to=0A>=C2=A0=C2=A0 xen_blkif_free.=0A> - blkback doesn't wait for=
 pending requests to end before cleaning=0A>=C2=A0=C2=A0 persistent grants =
and the list of free_pages. Again this can add=0A>=C2=A0=C2=A0 pages to the=
 free_pages list or persistent grants to the=0A>=C2=A0=C2=A0 persistent_gnt=
s red-black tree. Fixed by moving the persistent=0A>=C2=A0=C2=A0 grants and=
 free_pages cleanup code to xen_blkif_free.=0A> =0A> Also, add some checks =
in xen_blkif_free to make sure we are cleaning=0A> everything.=0A=0ATested-=
by: Matt Rushton <mrushton@amazon.com>=0A=0AReviewed-by: Matt Rushton <mrus=
hton@amazon.com>=0A
---822996273-1973664204-1391742906=:53011--


--===============6098453347174010562==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6098453347174010562==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHk-0000Ou-56; Fri, 07 Feb 2014 07:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mrushton7@yahoo.com>) id 1WBcGx-0007JT-9K
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 03:38:35 +0000
Received: from [85.158.143.35:15194] by server-1.bemta-4.messagelabs.com id
	DA/E3-31661-A3554F25; Fri, 07 Feb 2014 03:38:34 +0000
X-Env-Sender: mrushton7@yahoo.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391744311!3790863!1
X-Originating-IP: [98.138.91.22]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26793 invoked from network); 7 Feb 2014 03:38:33 -0000
Received: from nm28-vm0.bullet.mail.ne1.yahoo.com (HELO
	nm28-vm0.bullet.mail.ne1.yahoo.com) (98.138.91.22)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 03:38:33 -0000
Received: from [98.138.100.103] by nm28.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:38:31 -0000
Received: from [98.138.101.172] by tm102.bullet.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:38:31 -0000
Received: from [127.0.0.1] by omp1083.mail.ne1.yahoo.com with NNFMP;
	07 Feb 2014 03:38:31 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 654791.73292.bm@omp1083.mail.ne1.yahoo.com
Received: (qmail 51729 invoked by uid 60001); 7 Feb 2014 03:38:31 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391744311; bh=jItZhzDmXR6votFz4+Gp9WxHnaA/ppSXj1Vh/bIkOsk=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=kGNuCPOv+TPQIJ93JhnMdjbdmaemuBsHN30b0u8jO/B5uhQX+GM+LwoiZlWy7BqxcD7kOBg+3P2lF8fS4QTnOxJssr/08E1vEaKN8dgRsXlVP3OESWEYSrk9u2wtazjTaJT7/+mUiIg+OovICIjDzdZ6bPg2rn7TQ73p/2sBm3E=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:Message-ID:Date:From:Reply-To:Subject:To:Cc:MIME-Version:Content-Type;
	b=1ad02NHWei297HGnzPfAGMrLUdzbO0gPCDhtHVEnOZwIVOeXYtRb3RtwnVscO9yiWM4D5UihE0HNMovm7NqxqYz1SjPfwE7OqzjTfeeZO4KFGZSOknzEFtg4ERV6pFkfKwFCOyuqFrsGLjKdyDyH0zJY49JMC6MPzsLtMUpVjq0=;
X-YMail-OSG: Ijke8pwVM1mkdHBMKZhtEzX3lrNAeqEhNx.W4HIs8ctQVfL
	Uo0yc4QRuVqRgA8hjySuZuYBNAz0r69E7zhR3bT8M0GVPN9YOIkFu69J7QsG
	xgIA4YQp7AD6qqwx3HwgI5rooHN8s9SJnr_vjRtevQ5qb3UMHz8ch2dIzppL
	jDMYjvoDpwUAJCgAi59tJ8BDu.gPnXLkDz.MN6f4UWXoezlOKVOPKbwFhwvk
	XBKx4qDaUgvaCTfI9azA_9o4eyV7DJ0SBIyRT2Z1UoD9AOYnYpqMz6WSKWXq
	MjHbHduf.0xPXkFI8HSpEU3uYXnR1CjQe1aYzP5bon4GKfIMLdRRiNVMbFHk
	XxcSn_OMg6MSNbOf3a2yk4uNS.fmjBL9kG4nlK_7iaQJSgfzfUGwk3knHdYs
	WlwnoVVpRHpuzNB.Zn5ieE3QbBVTz2qWgtWZwt7tf7fnGFwGu1I1Gec8NtET
	.pjKswVey69GejAchO7T._KyZxSbbXQxm_RSyDtUxchLJVvDxIdwJv45.N.N
	BF7ZGfjPPrp.yzCgwIdXS4dmRap8Ey1RJ.bAbiHJzKdug9TVn83vxtb19Kkf
	94i969anv7UU_ZRGaxw8S
Received: from [54.240.196.185] by web122606.mail.ne1.yahoo.com via HTTP;
	Thu, 06 Feb 2014 19:38:31 PST
X-Rocket-MIMEInfo: 002.001,
	PiBUaGlzIHdhcyB3cm9uZ2x5IGludHJvZHVjZWQgaW4gY29tbWl0IDQwMmIyN2Y5LCB0aGUgb25seSBkaWZmZXJlbmNlCj4gYmV0d2VlbiBibGtpZl9yZXF1ZXN0X3NlZ21lbnRfYWxpZ25lZCBhbmQgYmxraWZfcmVxdWVzdF9zZWdtZW50IGlzCj4gdGhhdCB0aGUgZm9ybWVyIGhhcyBhIG5hbWVkIHBhZGRpbmcsIHdoaWxlIGJvdGggc2hhcmUgdGhlIHNhbWUKPiBtZW1vcnkgbGF5b3V0Lgo.Cj4gQWxzbyBjb3JyZWN0IGEgZmV3IG1pbm9yIGdsaXRjaGVzIGluIHRoZSBkZXNjcmlwdGlvbiwgaW5jbHVkaW5nIGYBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.175.632
Message-ID: <1391744311.40091.YahooMailNeo@web122606.mail.ne1.yahoo.com>
Date: Thu, 6 Feb 2014 19:38:31 -0800 (PST)
From: Matthew Rushton <mrushton7@yahoo.com>
To: "roger.pau@citrix.com" <roger.pau@citrix.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Cc: "msw@amazon.com" <msw@amazon.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/4] xen-blkif: drop struct
	blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Matthew Rushton <mrushton7@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1938383283014125849=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1938383283014125849==
Content-Type: multipart/alternative; boundary="2094167247-1629941735-1391744311=:40091"

--2094167247-1629941735-1391744311=:40091
Content-Type: text/plain; charset=us-ascii

> This was wrongly introduced in commit 402b27f9, the only difference
> between blkif_request_segment_aligned and blkif_request_segment is
> that the former has a named padding, while both share the same
> memory layout.
>
> Also correct a few minor glitches in the description, including for it
> to no longer assume PAGE_SIZE == 4096.

Tested-by: Matt Rushton <mrushton@amazon.com>

*Corrected the subject line from the last email and resent. I tested the set and everything looks solid. I also reviewed patches 2 and 3.

--2094167247-1629941735-1391744311=:40091--


--===============1938383283014125849==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 07:55:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 07:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgHi-0000OO-6y; Fri, 07 Feb 2014 07:55:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBQ7q-0001gX-E7
	for xen-devel@lists.xen.org; Thu, 06 Feb 2014 14:40:23 +0000
Received: from [85.158.139.211:57123] by server-9.bemta-5.messagelabs.com id
	E7/21-11237-5DE93F25; Thu, 06 Feb 2014 14:40:21 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391697617!2139855!1
X-Originating-IP: [209.85.212.44]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_23, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12974 invoked from network); 6 Feb 2014 14:40:18 -0000
Received: from mail-vb0-f44.google.com (HELO mail-vb0-f44.google.com)
	(209.85.212.44)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	6 Feb 2014 14:40:18 -0000
Received: by mail-vb0-f44.google.com with SMTP id f12so1502829vbg.3
	for <xen-devel@lists.xen.org>; Thu, 06 Feb 2014 06:40:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:from:date:message-id:subject:to:content-type;
	bh=1cprBEfbqz37PWMH64e2Xz4NYxTZOVBxXzWs1rLVif0=;
	b=zBj+LkniDH4RpM0q3H95ljBk1QFxz2m/cZiaG90MQS4TxuSUZmu+HSRZ0sUyXJdBWP
	7x2br3rU58msxQgHcrwu4LP3dZMICj+FiczniTnuRXRH4qTdm38hlf54Hl01NyVufsXm
	GYr2WnKDwsDivLYeofkZI9nY3rgai8dvkFMb6Q4WvWLLJ2/El3fOvKNYCo5wPh2hLNOT
	2GK2WHzooXzU7izWv/aIjmkLwQvqdJFjSAuVR1WqioyjxLXmIldL8foTYmNSl2eqAVFj
	sUTnE24DDC0rLPc7XbtQdFQ+9tJSALoosKvWHAWkB7pUiQhjQ1wywWf+dKMJWSOM+J2N
	qK2Q==
X-Received: by 10.58.181.71 with SMTP id du7mr1942572vec.25.1391697617247;
	Thu, 06 Feb 2014 06:40:17 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Thu, 6 Feb 2014 06:39:37 -0800 (PST)
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Thu, 6 Feb 2014 09:39:37 -0500
Message-ID: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Fri, 07 Feb 2014 07:55:37 +0000
Subject: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7621343443596533977=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7621343443596533977==
Content-Type: multipart/alternative; boundary=047d7b8738101c689b04f1bdda2e

--047d7b8738101c689b04f1bdda2e
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

I am attempting to pass an Intel ET card (4x1G NIC) through to an HVM
guest. I have been trying to resolve this issue on the xen-users list,
but was advised to post it here instead. (Initial message -
http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)

The host machine is a Dell PowerEdge server with a Xeon E31220 and 4 GB
of RAM.

The possible bug is the following:
root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
char device redirected to /dev/pts/5 (label serial0)
qemu: hardware error: xen: failed to populate ram at 40030000
....

I believe it may be similar to this thread
http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results


Additional info that may be helpful is below.

Please let me know if you need any additional information.

Thanks in advance for any help provided!
Regards

###########################################################
root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
###########################################################
# Configuration file for Xen HVM

# HVM Name (as appears in 'xl list')
name="ubuntu-hvm-0"
# HVM Build settings (+ hardware)
#kernel = "/usr/lib/xen-4.3/boot/hvmloader"
builder='hvm'
device_model='qemu-dm'
memory=1024
vcpus=2

# Virtual Interface
# Network bridge to USB NIC
vif=['bridge=xenbr0']

################### PCI PASSTHROUGH ###################
# PCI Permissive mode toggle
#pci_permissive=1

# All PCI Devices
#pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']

# First two ports on Intel 4x1G NIC
#pci=['03:00.0','03:00.1']

# Last two ports on Intel 4x1G NIC
#pci=['04:00.0', '04:00.1']

# All ports on Intel 4x1G NIC
pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']

# Broadcom 2x1G NIC
#pci=['05:00.0', '05:00.1']
################### PCI PASSTHROUGH ###################

# HVM Disks
# Hard disk only
# Boot from HDD first ('c')
boot="c"
disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']

# Hard disk with ISO
# Boot from ISO first ('d')
#boot="d"
#disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
#      'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']

# ACPI Enable
acpi=1
# HVM Event Modes
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'

# Serial Console Configuration (Xen Console)
sdl=0
serial='pty'

# VNC Configuration
# Only reachable from localhost
vnc=1
vnclisten="0.0.0.0"
vncpasswd=""

###########################################################
Copied from xen-users list
###########################################################

It appears that it cannot obtain the RAM mapping for this PCI device.


I rebooted the host and assigned the PCI devices to pciback. The output
looks like:
root@fiat:~# ./dev_mgmt.sh
Loading Kernel Module 'xen-pciback'
Calling function pciback_dev for:
PCI DEVICE 0000:03:00.0
Unbinding 0000:03:00.0 from igb
Binding 0000:03:00.0 to pciback

PCI DEVICE 0000:03:00.1
Unbinding 0000:03:00.1 from igb
Binding 0000:03:00.1 to pciback

PCI DEVICE 0000:04:00.0
Unbinding 0000:04:00.0 from igb
Binding 0000:04:00.0 to pciback

PCI DEVICE 0000:04:00.1
Unbinding 0000:04:00.1 from igb
Binding 0000:04:00.1 to pciback

PCI DEVICE 0000:05:00.0
Unbinding 0000:05:00.0 from bnx2
Binding 0000:05:00.0 to pciback

PCI DEVICE 0000:05:00.1
Unbinding 0000:05:00.1 from bnx2
Binding 0000:05:00.1 to pciback

Listing PCI Devices Available to Xen
0000:03:00.0
0000:03:00.1
0000:04:00.0
0000:04:00.1
0000:05:00.0
0000:05:00.1
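For reference, the per-device steps a helper like dev_mgmt.sh performs map onto the pciback sysfs interface roughly as follows (a sketch: the BDF and driver names come from the output above, and the actual script was not posted):

```sh
#!/bin/sh
# Hand one device from its native driver to xen-pciback (requires root).
BDF="0000:03:00.0"
OLD_DRV="igb"

modprobe xen-pciback

# Detach from the native driver, then register the slot with pciback
# and bind it.
echo "$BDF" > "/sys/bus/pci/drivers/$OLD_DRV/unbind"
echo "$BDF" > /sys/bus/pci/drivers/pciback/new_slot
echo "$BDF" > /sys/bus/pci/drivers/pciback/bind

# Confirm the device is now assignable to guests.
xl pci-assignable-list
```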

###########################################################
root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
Parsing config from /etc/xen/ubuntu-hvm-0.cfg
WARNING: ignoring device_model directive.
WARNING: Use "device_model_override" instead if you really want a
non-default device_model
libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
how=(nil) callback=(nil) poller=0x210c3c0
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
vdev=hda, using backend phy
libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
domain, skipping bootloader
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x210c728: deregister unregistered
libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
free_memkb=2980
libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
with 1 nodes, 4 cpus and 2980 KB free selected
xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
xc: info: VIRTUAL MEMORY ARRANGEMENT:
  Loader:        0000000000100000->00000000001a69a4
  Modules:       0000000000000000->0000000000000000
  TOTAL:         0000000000000000->000000003f800000
  ENTRY ADDRESS: 0000000000100608
xc: info: PHYSICAL MEMORY ALLOCATION:
  4KB PAGES: 0x0000000000000200
  2MB PAGES: 0x00000000000001fb
  1GB PAGES: 0x0000000000000000
xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=hda spec.backend=phy
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
register slotnum=3
libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
inprogress: poller=0x210c3c0, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
epath=/local/domain/0/backend/vbd/2/768/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
/local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
epath=/local/domain/0/backend/vbd/2/768/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
/local/domain/0/backend/vbd/2/768/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x2112f48: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
/etc/xen/scripts/block add
libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
/usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
/usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register
slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
wpath=/local/domain/0/device-model/2/state token=3/1: event
epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
wpath=/local/domain/0/device-model/2/state token=3/1: event
epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x210c960: deregister unregistered
libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
/var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-chardev",
    "id": 2
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "change",
    "id": 3,
    "arguments": {
        "device": "vnc",
        "target": "password",
        "arg": ""
    }
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "query-vnc",
    "id": 4
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register
slotnum=3
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
/local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
/local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x210e8a8: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
/etc/xen/scripts/vif-bridge online
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
/etc/xen/scripts/vif-bridge add
libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
/var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "qmp_capabilities",
    "id": 1
}
'
libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
    "execute": "device_add",
    "id": 2,
    "arguments": {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-03_00.0",
        "hostaddr": "0000:03:00.0"
    }
}
'
libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset
by peer
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
Connection refused
libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
progress report: ignored
libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
complete, rc=0
libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
Daemon running with PID 3214
xc: debug: hypercall buffer: total allocations:793 total releases:793
xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
xc: debug: hypercall buffer: cache current size:4
xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4

###########################################################
root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
char device redirected to /dev/pts/5 (label serial0)
qemu: hardware error: xen: failed to populate ram at 40030000
CPU #0:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #1:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000

###########################################################
/etc/default/grub
GRUB_DEFAULT="Xen 4.3-amd64"
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
# biosdevname=0
GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"



--===============7621343443596533977==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7621343443596533977==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 08:09:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 08:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgUR-00027w-Va; Fri, 07 Feb 2014 08:08:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WBgUQ-00024a-32
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 08:08:46 +0000
Received: from [85.158.139.211:46283] by server-12.bemta-5.messagelabs.com id
	1A/06-15415-D8494F25; Fri, 07 Feb 2014 08:08:45 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391760522!2270232!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6928 invoked from network); 7 Feb 2014 08:08:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 08:08:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98903905"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 08:08:42 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 03:08:41 -0500
Message-ID: <52F49489.6080706@citrix.com>
Date: Fri, 7 Feb 2014 09:08:41 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<20140204151501.GA1781@andromeda.dapyr.net>
	<20140206045729.GA15901@u109add4315675089e695.ant.amazon.com>
	<20140206162004.GA22864@konrad-lan.dumpdata.com>
	<20140207042427.GA28057@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140207042427.GA28057@u109add4315675089e695.ant.amazon.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: mrushton@amazon.com, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	linux-kernel@vger.kernel.org, msw@amazon.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/14 05:24, Matt Wilson wrote:
> Just in case the various mailing list software ate Matt's messages, he
> sent the following:
> 
> [PATCH v2 2/4] xen-blkback: fix memory leaks
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>
> 
> [PATCH v2 3/4] xen-blkback: fix shutdown race
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>
> 
> [PATCH v2 4/4] xen-blkif: drop struct blkif_request_segment_aligned
> Tested-by: Matt Rushton <mrushton@amazon.com>
> 
> I've separately sent suggestions to Matt on how to set up his mailer to
> format messages per list etiquette and how to avoid breaking message
> threading.

Thanks for the review and testing!

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 08:09:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 08:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgUR-00027w-Va; Fri, 07 Feb 2014 08:08:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WBgUQ-00024a-32
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 08:08:46 +0000
Received: from [85.158.139.211:46283] by server-12.bemta-5.messagelabs.com id
	1A/06-15415-D8494F25; Fri, 07 Feb 2014 08:08:45 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391760522!2270232!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6928 invoked from network); 7 Feb 2014 08:08:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 08:08:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98903905"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 08:08:42 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 03:08:41 -0500
Message-ID: <52F49489.6080706@citrix.com>
Date: Fri, 7 Feb 2014 09:08:41 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1391509575-3949-1-git-send-email-roger.pau@citrix.com>
	<20140204151501.GA1781@andromeda.dapyr.net>
	<20140206045729.GA15901@u109add4315675089e695.ant.amazon.com>
	<20140206162004.GA22864@konrad-lan.dumpdata.com>
	<20140207042427.GA28057@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140207042427.GA28057@u109add4315675089e695.ant.amazon.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: mrushton@amazon.com, Konrad Rzeszutek Wilk <konrad@darnok.org>,
	linux-kernel@vger.kernel.org, msw@amazon.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2 0/4] xen-blk: bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/14 05:24, Matt Wilson wrote:
> Just in case the various mailing list software ate Matt's messages, he
> sent the following:
> 
> [PATCH v2 2/4] xen-blkback: fix memory leaks
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>
> 
> [PATCH v2 3/4] xen-blkback: fix shutdown race
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>
> 
> [PATCH v2 4/4] xen-blkif: drop struct blkif_request_segment_aligned
> Tested-by: Matt Rushton <mrushton@amazon.com>
> 
> I've separately sent suggestions to Matt on how to set up his mailer to
> format messages per list etiquette and how to avoid breaking message
> threading.

Thanks for the review and testing!

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 08:19:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 08:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgeB-0002WI-4Q; Fri, 07 Feb 2014 08:18:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBgeA-0002WD-3Z
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 08:18:50 +0000
Received: from [85.158.143.35:41763] by server-1.bemta-4.messagelabs.com id
	7C/90-31661-9E694F25; Fri, 07 Feb 2014 08:18:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391761128!3828471!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23762 invoked from network); 7 Feb 2014 08:18:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 08:18:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 08:18:47 +0000
Message-Id: <52F4A4F7020000780011A102@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 08:18:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>,
 "Ian Campbell" <ian.campbell@citrix.com>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
In-Reply-To: <20140206225334.GA21743@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.02.14 at 23:53, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Dec 13, Jan Beulich wrote:
> 
>> Changeset 762:a070228ac76e ("add hvc compatibility mode to xencons")
>> added this call just for the HVC case, without giving any reason why
>> HVC would be special in this regard. Use the call for all cases.
> 
>> +++ b/drivers/xen/console/console.c
>> @@ -236,6 +234,8 @@ static int __init xen_console_init(void)
>>  
>>  	wbuf = alloc_bootmem(wbuf_size);
>>  
>> +	if (!is_initial_xendomain())
>> +		add_preferred_console(kcons_info.name, xc_num, NULL);
>>  	register_console(&kcons_info);
> 
> Why is dom0 special in this case anyway? At least with SLE12, when Xen
> is booted with 'console=com1 com1=115200' and the kernel is booted
> without any console= or xencons=, kcons_info.index is still -1 and as a
> result xvc-1 is registered as the name for xvc0. This confuses systemd
> because the kernel name and console name do not match, so login via
> serial is not possible.

I have to direct this question to Ian, who wrote the original patch
(sorry Ian, I know it was long ago), which the patch above only
generalizes.

> When add_preferred_console is called unconditionally the login on serial
> works as expected.

The question is what the intended behavior here is: I'd generally
expect the lack of console= on the command line for Dom0 to
behave just like for a native kernel, which I don't think would show
a login prompt anywhere other than the screen in that case. So maybe
instead of just dropping the is_initial_xendomain() we should make
console registration conditional upon a command line option having
requested its presence in the Dom0 case. (Looking at the command
line handling code I also wonder whether it isn't a mistake to set
console_use_vt even in the xencons=off case, and to not bail
upon the right side of the = not being recognized - see
196:52f308b17bae and 153:12c399692d44 for how this evolved.)

Jan


From xen-devel-bounces@lists.xen.org Fri Feb 07 08:32:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 08:32:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBgrb-0003EG-53; Fri, 07 Feb 2014 08:32:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBgra-0003EB-7X
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 08:32:42 +0000
Received: from [85.158.143.35:48864] by server-2.bemta-4.messagelabs.com id
	6A/DC-10891-92A94F25; Fri, 07 Feb 2014 08:32:41 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391761960!3837747!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28985 invoked from network); 7 Feb 2014 08:32:40 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 08:32:40 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391761960; l=575;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=NXAjKp11MIFCFwhqJ6swF7m8avw=;
	b=dLON6VIqP/xwgc3dA93KHv+gQOtgq+rDC0N6gp1ARp4wqfohQg0C6X5eDZeoCum+/+B
	6ZJSLzGxcwa3MVb6zEIChwEvQoJutlADmNjOpCYedhRDRQ3m74JRWafcqDMrkTyVlntDB
	NQokl9nl8Fky4GfbaHhfJiejeiCmgpUTey0=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id h0628aq178WZCg6
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Fri, 7 Feb 2014 09:32:35 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 83AE850269; Fri,  7 Feb 2014 09:32:34 +0100 (CET)
Date: Fri, 7 Feb 2014 09:32:34 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140207083234.GA17978@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
	<52F4A4F7020000780011A102@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F4A4F7020000780011A102@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, Jan Beulich wrote:

> The question is what the intended behavior here is: I'd generally

In my opinion dom0 is just a child of Xen, which should follow the rules
of the parent. If Xen is configured to have its console on serial then
the default for dom0 should be to follow just that. Apparently it's just
a matter of correctly using xvc0.

I'm not sure what would be gained by having Xen on serial and dom0
somewhere else, and requiring a console= cmdline option to point dom0
to serial as well. That's just doing things twice.


Olaf


From xen-devel-bounces@lists.xen.org Fri Feb 07 08:41:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 08:41:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBh0J-0003hA-Gq; Fri, 07 Feb 2014 08:41:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WBh0I-0003h5-6s
	for Xen-devel@lists.xensource.com; Fri, 07 Feb 2014 08:41:42 +0000
Received: from [193.109.254.147:38511] by server-2.bemta-14.messagelabs.com id
	31/58-01236-54C94F25; Fri, 07 Feb 2014 08:41:41 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391762499!2674959!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15941 invoked from network); 7 Feb 2014 08:41:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 08:41:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98909131"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 08:41:23 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 03:41:22 -0500
Message-ID: <52F49C32.8030605@citrix.com>
Date: Fri, 7 Feb 2014 09:41:22 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, <Xen-devel@lists.xensource.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
	domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 25/01/14 02:13, Mukesh Rathor wrote:
> Expose features for pvh domUs from tools.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>

The flags exposed for PVH by this patch look fine to me, but I've been
wondering if it would be easier to use xc_cpuid_hvm_policy for PVH and
just mask MTRR.

Roger.


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:04:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhM9-0004WC-Dd; Fri, 07 Feb 2014 09:04:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBhM8-0004W7-0z
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 09:04:16 +0000
Received: from [193.109.254.147:37276] by server-8.bemta-14.messagelabs.com id
	12/17-18529-F81A4F25; Fri, 07 Feb 2014 09:04:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391763851!2676175!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18387 invoked from network); 7 Feb 2014 09:04:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:04:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="100769912"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 09:04:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 04:04:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBhM1-0001ud-R0;
	Fri, 07 Feb 2014 09:04:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBhM1-00050y-LX;
	Fri, 07 Feb 2014 09:04:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24760-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 09:04:09 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24760: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8912450310694486392=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8912450310694486392==
Content-Type: text/plain

flight 24760 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24760/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24716

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  d7c6be61836b0a4d996f82d3e7c7e50150996701
baseline version:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=d7c6be61836b0a4d996f82d3e7c7e50150996701
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing d7c6be61836b0a4d996f82d3e7c7e50150996701
+ branch=xen-4.3-testing
+ revision=d7c6be61836b0a4d996f82d3e7c7e50150996701
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git d7c6be61836b0a4d996f82d3e7c7e50150996701:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   c450908..d7c6be6  d7c6be61836b0a4d996f82d3e7c7e50150996701 -> stable-4.3

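The trace above shows osstest's pattern of serializing repository work under an exclusive lock before running `./ap-push` (`exec with-lock-ex -w .../lock ./ap-push ...`). A minimal sketch of the same serialize-then-push idea using the standard `flock(1)` utility from util-linux (the lock path and message here are illustrative, not osstest's):

```shell
#!/bin/sh
# Sketch only: run a critical section under an exclusive file lock,
# in the spirit of osstest's with-lock-ex wrapper.
set -e

# Hypothetical lock file; osstest uses $Repos/lock instead.
repos_lock="$(mktemp)"

# flock takes an exclusive (-x) lock on the file, runs the command,
# and releases the lock automatically when the command exits.
flock -x "$repos_lock" sh -c 'echo pushing under lock'

rm -f "$repos_lock"
```

A concurrent second invocation against the same lock file would block until the first command finishes, which is exactly what prevents two `ap-push` runs from racing on the shared repository.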

--===============8912450310694486392==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8912450310694486392==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 09:04:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhM9-0004WC-Dd; Fri, 07 Feb 2014 09:04:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBhM8-0004W7-0z
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 09:04:16 +0000
Received: from [193.109.254.147:37276] by server-8.bemta-14.messagelabs.com id
	12/17-18529-F81A4F25; Fri, 07 Feb 2014 09:04:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391763851!2676175!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18387 invoked from network); 7 Feb 2014 09:04:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:04:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="100769912"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 09:04:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 04:04:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBhM1-0001ud-R0;
	Fri, 07 Feb 2014 09:04:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBhM1-00050y-LX;
	Fri, 07 Feb 2014 09:04:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24760-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 09:04:09 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24760: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8912450310694486392=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8912450310694486392==
Content-Type: text/plain

flight 24760 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24760/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24716

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  d7c6be61836b0a4d996f82d3e7c7e50150996701
baseline version:
 xen                  c450908dc9168c3f20787aab268fcc295feaed7d

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=d7c6be61836b0a4d996f82d3e7c7e50150996701
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing d7c6be61836b0a4d996f82d3e7c7e50150996701
+ branch=xen-4.3-testing
+ revision=d7c6be61836b0a4d996f82d3e7c7e50150996701
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git d7c6be61836b0a4d996f82d3e7c7e50150996701:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   c450908..d7c6be6  d7c6be61836b0a4d996f82d3e7c7e50150996701 -> stable-4.3


--===============8912450310694486392==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8912450310694486392==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 09:09:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhQy-0004wa-8v; Fri, 07 Feb 2014 09:09:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WBhQx-0004wV-6W
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 09:09:15 +0000
Received: from [193.109.254.147:49478] by server-8.bemta-14.messagelabs.com id
	C9/BD-18529-AB2A4F25; Fri, 07 Feb 2014 09:09:14 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391764142!2683897!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTg4MzkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13530 invoked from network); 7 Feb 2014 09:09:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:09:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; 
	d="asc'?scan'208";a="100770594"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 09:09:01 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	04:09:01 -0500
Message-ID: <1391764139.9917.54.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date: Fri, 7 Feb 2014 10:08:59 +0100
In-Reply-To: <CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz> <1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: ehouby@yahoo.com, xen@lists.fedoraproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1469052051561163291=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1469052051561163291==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-1N/T/nK3imeVvdQpDPJp"

--=-1N/T/nK3imeVvdQpDPJp
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:
> 2014-02-06 5:23 GMT+01:00 Eric Houby <ehouby@yahoo.com>:

>         Is there a knob for qxl support?
>
>         [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
>         qemu-system-i386: -vga qxl: invalid option
>
>
> Here is a patch that adds qxl support in libxl, updated to Xen
> 4.4-rc3, if you want to add it:
> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
>
> Or you can simply compile from this already ready for spice/qxl
> testing:
> https://github.com/Fantu/Xen/commits/rebase/m2r-testing
>
> It is not upstream for now because there is something in Xen that makes
> it not work on Linux domUs with the qxl driver active, and gives severe
> performance problems on Windows domUs.
>
Right.

> I spent several days without finding the exact problem to be solved :(
> If you want, you can try it out and see if anything changes using
> Fedora instead of Debian as dom0, different domU kernels, etc.
> Maybe you could even find some new information/errors useful for
> solving the problem.
>
Yep, that would help... let us know! :-)

Anyway, first of all, sorry for my spice/qxl ignorance.

What I just wanted to say is this: searching the wiki, all I found about
spice and QXL is this section in the QEMU Upstream page:
http://wiki.xen.org/wiki/QEMU_Upstream#SPICE_.2F_QXL
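For reference, with the out-of-tree qxl patch mentioned earlier in this thread applied, an HVM guest config enabling SPICE with QXL might look roughly like the fragment below. This is a sketch based on the documented xl.cfg SPICE options; vga="qxl" is only accepted with that patch, and the exact option spellings should be checked against the tree actually in use:

```
# Illustrative xl.cfg fragment, not a verified 4.4-rc3 configuration
builder = "hvm"
vga = "qxl"                  # requires the qxl patch referenced above
spice = 1
spicehost = "0.0.0.0"
spiceport = 6000
spicedisable_ticketing = 1   # or set spicepasswd instead
```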

Perhaps one of you could double-check whether the information there is
still fresh and accurate? It may also be worth adding references to the
patches mentioned above (with the proper disclaimer about the known
issues, of course).

Also, maybe for the next DocsDay, this would be a nice one to have too:
http://wiki.xen.org/wiki/Xen_Document_Days/TODO#Spice_Config_Example_for_upstream_QEMU

If you are keen to do any of the above that involves modifying the wiki,
send me a note, and I can provide the necessary permissions.

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-1N/T/nK3imeVvdQpDPJp
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL0oqsACgkQk4XaBE3IOsRAJQCgnVmqIFoXRVymGH3k7ZQYtu4+
TDUAn2cO4cU/bnBlfIH1p7x1Cv6UKPOh
=CWA1
-----END PGP SIGNATURE-----

--=-1N/T/nK3imeVvdQpDPJp--


--===============1469052051561163291==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1469052051561163291==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:13:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhUa-0005C6-GF; Fri, 07 Feb 2014 09:13:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBhUY-0005C0-Fy
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:12:58 +0000
Received: from [85.158.143.35:30737] by server-2.bemta-4.messagelabs.com id
	9B/E2-10891-993A4F25; Fri, 07 Feb 2014 09:12:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391764377!3850530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20603 invoked from network); 7 Feb 2014 09:12:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:12:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:12:56 +0000
Message-Id: <52F4B1A7020000780011A14B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:12:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
	<52F4A4F7020000780011A102@nat28.tlf.novell.com>
	<20140207083234.GA17978@aepfle.de>
In-Reply-To: <20140207083234.GA17978@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 09:32, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Feb 07, Jan Beulich wrote:
> 
>> The question is what the intended behavior here is: I'd generally
> 
> In my opinion dom0 is just a child of Xen, which should follow the rules
> of the parent. If Xen is configured to have its console on serial then
> the default of dom0 should be to follow just that. Apparently it's just
> a matter of correctly using xvc0.
> 
> I'm not sure what the gain would be to have Xen on serial and dom0
> somewhere else, and enforcing the need of a console= cmdline option to
> point dom0 also to serial. That's just doing things twice.

That's a fair point, but leaves aside the case of Xen _not_ using
the serial console. Dom0 has no way to know, and hence would
still push output there, not knowing that it ends up nowhere.

Also the "follow the rules of the parent" already doesn't apply for
the VGA console case, where Dom0 makes its own decision too
(and it's for that reason that Xen needs to stop sending data to
the VGA in order to not interfere). Hence I'm not sure that
argument really counts.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 09:21:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhcP-0005ff-JP; Fri, 07 Feb 2014 09:21:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBhcO-0005fa-1K
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:21:04 +0000
Received: from [85.158.143.35:50700] by server-1.bemta-4.messagelabs.com id
	70/69-31661-F75A4F25; Fri, 07 Feb 2014 09:21:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1391764862!3837985!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28697 invoked from network); 7 Feb 2014 09:21:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:21:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:21:01 +0000
Message-Id: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:21:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0734AC6C.1__="
Cc: suravee.suthikulpanit@amd.com
Subject: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
	IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0734AC6C.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... but interrupt remapping is requested (with per-device remapping
tables). Without it, the timer interrupt is usually not working.

Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
Roedel <joerg.roedel@amd.com>.

Reported-by: Eric Houby <ehouby@yahoo.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Eric Houby <ehouby@yahoo.com>

--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
     const struct acpi_ivrs_header *ivrs_block;
     unsigned long length;
     unsigned int apic;
+    bool_t sb_ioapic = !iommu_intremap;
     int error = 0;
 
     BUG_ON(!table);
@@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
     /* Each IO-APIC must have been mentioned in the table. */
     for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
     {
-        if ( !nr_ioapic_entries[apic] ||
-             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
+        if ( !nr_ioapic_entries[apic] )
+            continue;
+
+        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
+             /* SB IO-APIC is always on this device in AMD systems. */
+             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
+            sb_ioapic = 1;
+
+        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
             continue;
 
         if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
@@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
         }
     }
 
+    if ( !error && !sb_ioapic )
+    {
+        if ( amd_iommu_perdev_intremap )
+            error = -ENXIO;
+        printk("%sNo southbridge IO-APIC found in IVRS table\n",
+               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
+    }
+
     return error;
 }
 




--=__Part0734AC6C.1__=
Content-Type: text/plain; name="AMD-IOMMU-require-SB-IOAPIC.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="AMD-IOMMU-require-SB-IOAPIC.patch"

AMD IOMMU: fail if there is no southbridge IO-APIC

... but interrupt remapping is requested (with per-device remapping
tables). Without it, the timer interrupt is usually not working.

Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
Roedel <joerg.roedel@amd.com>.

Reported-by: Eric Houby <ehouby@yahoo.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Eric Houby <ehouby@yahoo.com>

--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
     const struct acpi_ivrs_header *ivrs_block;
     unsigned long length;
     unsigned int apic;
+    bool_t sb_ioapic = !iommu_intremap;
     int error = 0;
 
     BUG_ON(!table);
@@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
     /* Each IO-APIC must have been mentioned in the table. */
     for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
     {
-        if ( !nr_ioapic_entries[apic] ||
-             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
+        if ( !nr_ioapic_entries[apic] )
+            continue;
+
+        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
+             /* SB IO-APIC is always on this device in AMD systems. */
+             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
+            sb_ioapic = 1;
+
+        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
             continue;
 
         if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
@@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
         }
     }
 
+    if ( !error && !sb_ioapic )
+    {
+        if ( amd_iommu_perdev_intremap )
+            error = -ENXIO;
+        printk("%sNo southbridge IO-APIC found in IVRS table\n",
+               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
+    }
+
     return error;
 }
--=__Part0734AC6C.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0734AC6C.1__=--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:22:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhdg-0005kQ-3d; Fri, 07 Feb 2014 09:22:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WBhde-0005kJ-7j
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 09:22:22 +0000
Received: from [85.158.139.211:14971] by server-7.bemta-5.messagelabs.com id
	56/08-14867-DC5A4F25; Fri, 07 Feb 2014 09:22:21 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391764938!2300562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19156 invoked from network); 7 Feb 2014 09:22:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:22:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; 
	d="asc'?scan'208";a="100772649"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 09:22:18 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	04:22:17 -0500
Message-ID: <1391764936.9917.58.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Robbie VanVossen <robert.vanvossen@dornerworks.com>
Date: Fri, 7 Feb 2014 10:22:16 +0100
In-Reply-To: <52F2AD63.7030109@dornerworks.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace> <52F2AD63.7030109@dornerworks.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Pavlo Suikov <pavlo.suikov@globallogic.com>,
	Nate Studer <nate.studer@dornerworks.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3045009814124978284=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3045009814124978284==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-tHaWGdfHS6bu6n8Qd0WH"

--=-tHaWGdfHS6bu6n8Qd0WH
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mer, 2014-02-05 at 16:30 -0500, Robbie VanVossen wrote:
> On 1/20/2014 10:05 AM, Dario Faggioli wrote:
> > What about giving it a try yourself? I think standardizing on one (a
> > set of) specific tool could be a good thing.
> 
> Dario,
> 
Hey! :-)

> We thought we would try to get some similar readings for the Arinc653 scheduler.
> We followed your suggestions from this thread and have gotten some readings for
> the following configurations:
> 
That's cool, thanks for doing this and sharing the results.

> We used the following command to get results for a 30 millisecond (30,000 us)
> interval with 500 loops:
> 
> cyclictest -t1 -i 30000 -l 500 -q
> 
> Results:
> 
> +--------+--------+-----------+-------+-------+-------+
> | Config | Domain | Scheduler |      Latency (us)     |
> |        |        |           |   Min |   Max |   Avg |
> +--------+--------+-----------+-------+-------+-------+
> |      1 |      0 |  Arinc653 |    20 |   163 |    68 |
> |      2 |      0 |  Arinc653 |    21 |   173 |    68 |
> |      3 |      1 |  Arinc653 |    20 |   155 |    75 |
> +--------+--------+-----------+-------+-------+-------+
> 
> It looks like we get negligible latencies for each of these simplistic
> configurations.
> 
It does look that way. You're right that the configurations are
simplistic. Yet, as stated before, latency and jitter in scheduling/event
response have two major contributors: one is the scheduling algorithm
itself, the other is the interrupt/event delivery latency of the platform
(HW + HYP + OS). This means that, of course, you need to pick the right
scheduler and configure it properly, but there may be other sources of
latency and delay, and that is what sets the lowest possible limit,
unless you go chasing and fixing these 'platform issues'.

From your experiments (and from some other numbers I also have) it looks
like this lower bound is not terrible in Xen, which is good to know...
So thanks again for taking the time to run the benchmarks and share the
results! :-D

That being said, especially if we compare to bare metal, I think there is
some room for improvement (I mean, there will always be some overhead,
but still...). Do you, by any chance, have the figures for cyclictest on
Linux bare metal too (on the same hardware and kernel, if possible)?

Thanks a lot again!
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-tHaWGdfHS6bu6n8Qd0WH
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL0pcgACgkQk4XaBE3IOsSj/gCcCCOZYJX+4VtkKLeQXaaUY37g
qxcAoIFQBv+t+E1G8qr3BY/zesL8bZiJ
=5Hts
-----END PGP SIGNATURE-----

--=-tHaWGdfHS6bu6n8Qd0WH--


--===============3045009814124978284==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3045009814124978284==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:22:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhdg-0005kQ-3d; Fri, 07 Feb 2014 09:22:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WBhde-0005kJ-7j
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 09:22:22 +0000
Received: from [85.158.139.211:14971] by server-7.bemta-5.messagelabs.com id
	56/08-14867-DC5A4F25; Fri, 07 Feb 2014 09:22:21 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391764938!2300562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19156 invoked from network); 7 Feb 2014 09:22:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:22:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; 
	d="asc'?scan'208";a="100772649"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 09:22:18 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	04:22:17 -0500
Message-ID: <1391764936.9917.58.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Robbie VanVossen <robert.vanvossen@dornerworks.com>
Date: Fri, 7 Feb 2014 10:22:16 +0100
In-Reply-To: <52F2AD63.7030109@dornerworks.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace> <52F2AD63.7030109@dornerworks.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Pavlo Suikov <pavlo.suikov@globallogic.com>,
	Nate Studer <nate.studer@dornerworks.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3045009814124978284=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3045009814124978284==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-tHaWGdfHS6bu6n8Qd0WH"

--=-tHaWGdfHS6bu6n8Qd0WH
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mer, 2014-02-05 at 16:30 -0500, Robbie VanVossen wrote:
> On 1/20/2014 10:05 AM, Dario Faggioli wrote:
> > What about giving a try to it yourself? I think standardizing on one (a
> > set of) specific tool could be a good thing.
>
> Dario,
>
Hey! :-)

> We thought we would try to get some similar readings for the Arinc653 scheduler.
> We followed your suggestions from this thread and have gotten some readings for
> the following configurations:
>
That's cool, thanks for doing this and sharing the results.

> We used the following command to get results for a 30 millisecond (30,000 us)
> interval with 500 loops:
>
> cyclictest -t1 -i 30000 -l 500 -q
>=20
> Results:
>=20
> +--------+--------+-----------+-------+-------+-------+
> | Config | Domain | Scheduler |      Latency (us)     |
> |        |        |           |   Min |   Max |   Avg |
> +--------+--------+-----------+-------+-------+-------+
> |      1 |      0 |  Arinc653 |    20 |   163 |    68 |
> |      2 |      0 |  Arinc653 |    21 |   173 |    68 |
> |      3 |      1 |  Arinc653 |    20 |   155 |    75 |
> +--------+--------+-----------+-------+-------+-------+
>
> It looks like we get negligible latencies for each of these simplistic
> configurations.
>
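For anyone reproducing the table above: with -q, cyclictest prints a
single summary line per thread, and the Min/Avg/Max columns can be
pulled out of such a line with a few lines of C. A sketch; the
`field_after` helper and the sample line in the usage note are
hypothetical, though the line follows cyclictest's usual summary
format:

```c
#include <stdlib.h>
#include <string.h>

/* Return the integer following a "Label:" token (e.g. "Min:", "Avg:",
 * "Max:") in a cyclictest -q summary line, or -1 if the label is
 * absent.  strtol() skips the leading whitespace for us. */
static long field_after(const char *line, const char *label)
{
    const char *p = strstr(line, label);
    return p ? strtol(p + strlen(label), NULL, 10) : -1;
}
```

For a hypothetical line such as
`"T: 0 ( 1234) P: 0 I:30000 C: 500 Min: 20 Act: 65 Avg: 68 Max: 163"`,
`field_after(line, "Avg:")` returns 68, which makes tabulating several
configurations side by side straightforward.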
Indeed it does. You're right, the configurations are simplistic. Yet,
as stated before, latency and jitter in scheduling/event response have
two major contributors: one is the scheduling algorithm itself, the
other is the interrupt/event delivery latency of the platform (HW +
HYP + OS). This means that, of course, you need to pick the right
scheduler and configure it properly, but there may be other sources of
latency and delay, and those are what set the lowest possible limit,
unless you go chasing down and fixing these 'platform issues'.

From your experiments (and from some other numbers I also have) it
looks like this lower bound is not terrible in Xen, which is a good
thing to know... So thanks again for taking the time to run the
benchmarks and share the results! :-D

That being said, especially if we compare to baremetal, I think there
is some room for improvement (I mean, there will always be some
overhead, but still...). Do you, by any chance, have the figures for
cyclictest on baremetal Linux too (on the same hardware and kernel, if
possible)?

Thanks a lot again!
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-tHaWGdfHS6bu6n8Qd0WH
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL0pcgACgkQk4XaBE3IOsSj/gCcCCOZYJX+4VtkKLeQXaaUY37g
qxcAoIFQBv+t+E1G8qr3BY/zesL8bZiJ
=5Hts
-----END PGP SIGNATURE-----

--=-tHaWGdfHS6bu6n8Qd0WH--


--===============3045009814124978284==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3045009814124978284==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:24:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhg4-0005tV-M4; Fri, 07 Feb 2014 09:24:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WBhg3-0005tN-9T
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 09:24:51 +0000
Received: from [85.158.139.211:18629] by server-11.bemta-5.messagelabs.com id
	E6/36-23886-266A4F25; Fri, 07 Feb 2014 09:24:50 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391765088!2294303!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9720 invoked from network); 7 Feb 2014 09:24:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:24:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98915764"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 09:24:47 +0000
Received: from AMSPEX01CL02.citrite.net (10.69.46.33) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Fri, 7 Feb 2014 04:24:47 -0500
Received: from AMSPEX01CL01.citrite.net ([169.254.6.176]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Fri, 7 Feb 2014 10:24:45 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Matt Wilson <msw@linux.com>
Thread-Topic: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
	ioreq structures
Thread-Index: AQHPHcZZpkBP0yklNEimIf65ti+4fpqpNM4AgABbwnA=
Date: Fri, 7 Feb 2014 09:24:45 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0227993@AMSPEX01CL01.citrite.net>
References: <1391091590-5454-1-git-send-email-paul.durrant@citrix.com>
	<1391091590-5454-2-git-send-email-paul.durrant@citrix.com>
	<20140207045334.GA29403@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140207045334.GA29403@u109add4315675089e695.ant.amazon.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
 ioreq structures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Matt Wilson [mailto:mswilson@gmail.com] On Behalf Of Matt Wilson
> Sent: 07 February 2014 04:54
> To: Paul Durrant
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] [RFC PATCH 1/5] ioreq-server: centralize access to
> ioreq structures
> 
> On Thu, Jan 30, 2014 at 02:19:46PM +0000, Paul Durrant wrote:
> > To simplify creation of the ioreq server abstraction in a
> > subsequent patch, this patch centralizes all use of the shared
> > ioreq structure and the buffered ioreq ring to the source module
> > xen/arch/x86/hvm/hvm.c.
> > Also, re-work hvm_send_assist_req() slightly to complete IO
> > immediately in the case where there is no emulator (i.e. the shared
> > IOREQ ring has not been set). This should handle the case currently
> > covered by has_dm in hvmemul_do_io().
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> 
> [...]
> 
> > diff --git a/xen/include/asm-x86/hvm/support.h b/xen/include/asm-
> x86/hvm/support.h
> > index 3529499..b6af3c5 100644
> > --- a/xen/include/asm-x86/hvm/support.h
> > +++ b/xen/include/asm-x86/hvm/support.h
> > @@ -22,19 +22,10 @@
> >  #define __ASM_X86_HVM_SUPPORT_H__
> >
> >  #include <xen/types.h>
> > -#include <public/hvm/ioreq.h>
> >  #include <xen/sched.h>
> >  #include <xen/hvm/save.h>
> >  #include <asm/processor.h>
> >
> > -static inline ioreq_t *get_ioreq(struct vcpu *v)
> > -{
> > -    struct domain *d = v->domain;
> > -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> > -    ASSERT((v == current) || spin_is_locked(&d-
> >arch.hvm_domain.ioreq.lock));
> > -    return p ? &p->vcpu_ioreq[v->vcpu_id] : NULL;
> > -}
> > -
> >  #define HVM_DELIVER_NO_ERROR_CODE  -1
> >
> >  #ifndef NDEBUG
> 
> Seems like this breaks nested VMX:
> 
> vvmx.c: In function 'nvmx_switch_guest':
> vvmx.c:1403: error: implicit declaration of function 'get_ioreq'
> vvmx.c:1403: error: nested extern declaration of 'get_ioreq'
> vvmx.c:1403: error: invalid type argument of '->' (have 'int')
> 

Thanks Matt. That'll teach me to rebase just before I post. I tripped across this a couple of days ago myself and I've incorporated another couple of changes in v2 of this patch to fix it. I'll re-post the series once I've done some more testing with a secondary emulator that actually does something :-)

  Cheers,

  Paul

> --msw
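For context, the get_ioreq() helper removed by the hunk above (and
still referenced from vvmx.c, hence the build break) follows a simple
pattern: return the vCPU's slot in the shared IOREQ page, or NULL when
no emulator has supplied a page. A standalone sketch with simplified
stand-in types, not Xen's real ioreq_t/shared_iopage_t definitions:

```c
#include <stddef.h>

/* Simplified stand-ins for Xen's ioreq_t and shared_iopage_t. */
typedef struct { int state; } ioreq_t;
typedef struct { ioreq_t vcpu_ioreq[8]; } shared_iopage_t;

/* Return the vCPU's slot in the shared IOREQ page, or NULL if no
 * emulator has mapped one yet -- the "no emulator" case that the
 * reworked hvm_send_assist_req() now completes immediately. */
static ioreq_t *get_ioreq_slot(shared_iopage_t *page, unsigned int vcpu_id)
{
    return page ? &page->vcpu_ioreq[vcpu_id] : NULL;
}
```

Callers must therefore check for NULL before dereferencing, which is
exactly the kind of use the series centralizes into hvm.c.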

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 09:41:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:41:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBhvn-0006zB-79; Fri, 07 Feb 2014 09:41:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBhvm-0006z6-DS
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:41:06 +0000
Received: from [85.158.143.35:21334] by server-2.bemta-4.messagelabs.com id
	CD/B7-10891-13AA4F25; Fri, 07 Feb 2014 09:41:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391766064!3838087!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7635 invoked from network); 7 Feb 2014 09:41:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:41:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:41:04 +0000
Message-Id: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:41:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 0/4] flask: XSA-84 follow-ups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

1: fix memory leaks
2: fix error propagation from flask_security_set_bool()
3: check permissions first thing in flask_security_set_bool()
4: add compat mode guest support

Signed-off-by: Jan Beulich <jbeulich@suse.com>

Release-wise, I would think that 1-3 should certainly go in. While I'd
like 4 to be in for 4.4 too, I realize that's a little more intrusive than
one would want at this point.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 09:47:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBi1R-00079p-3f; Fri, 07 Feb 2014 09:46:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBi1P-00079i-Mi
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:46:55 +0000
Received: from [193.109.254.147:16867] by server-16.bemta-14.messagelabs.com
	id F1/68-21945-E8BA4F25; Fri, 07 Feb 2014 09:46:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391766414!2704521!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2744 invoked from network); 7 Feb 2014 09:46:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:46:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:46:53 +0000
Message-Id: <52F4B99C020000780011A1EE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:46:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartF9CA529C.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 1/4] flask: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartF9CA529C.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Plus, in the case of security_preserve_bools(), prevent double freeing
in the case of security_get_bools() failing.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -347,6 +347,7 @@ static int flask_security_set_bool(struc
 
         if ( arg->bool_id >= num )
         {
+            xfree(values);
             rv = -ENOENT;
             goto out;
         }
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -1902,6 +1902,7 @@ err:
     {
         for ( i = 0; i < *len; i++ )
             xfree((*names)[i]);
+        xfree(*names);
     }
     xfree(*values);
     goto out;
@@ -2011,7 +2012,7 @@ static int security_preserve_bools(struc
 
     rc = security_get_bools(&nbools, &bnames, &bvalues, NULL);
    if ( rc )
-        goto out;
+        return rc;
     for ( i = 0; i < nbools; i++ )
     {
         booldatum = hashtab_search(p->p_bools.table, bnames[i]);




--=__PartF9CA529C.1__=
Content-Type: text/plain; name="flask-leaks.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="flask-leaks.patch"

flask: fix memory leaks

Plus, in the case of security_preserve_bools(), prevent double freeing
in the case of security_get_bools() failing.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -347,6 +347,7 @@ static int flask_security_set_bool(struc
 
         if ( arg->bool_id >= num )
         {
+            xfree(values);
             rv = -ENOENT;
             goto out;
         }
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -1902,6 +1902,7 @@ err:
     {
         for ( i = 0; i < *len; i++ )
             xfree((*names)[i]);
+        xfree(*names);
     }
     xfree(*values);
     goto out;
@@ -2011,7 +2012,7 @@ static int security_preserve_bools(struc
 
     rc = security_get_bools(&nbools, &bnames, &bvalues, NULL);
     if ( rc )
-        goto out;
+        return rc;
     for ( i = 0; i < nbools; i++ )
     {
         booldatum = hashtab_search(p->p_bools.table, bnames[i]);
--=__PartF9CA529C.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartF9CA529C.1__=--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:47:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBi1x-0007Cb-IN; Fri, 07 Feb 2014 09:47:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBi1v-0007CJ-Fw
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:47:28 +0000
Received: from [193.109.254.147:29868] by server-6.bemta-14.messagelabs.com id
	51/3A-03396-EABA4F25; Fri, 07 Feb 2014 09:47:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391766446!2703264!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13549 invoked from network); 7 Feb 2014 09:47:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:47:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:47:25 +0000
Message-Id: <52F4B9BC020000780011A1F2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:47:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartD9EA72BC.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 2/4] flask: fix error propagation from
 flask_security_set_bool()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartD9EA72BC.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

The function should return an error when flask_security_make_bools() fails,
as well as when the input ID is out of range.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -364,9 +364,10 @@ static int flask_security_set_bool(struc
     else
     {
         if ( !bool_pending_values )
-            flask_security_make_bools();
-
-        if ( arg->bool_id >= bool_num )
+            rv = flask_security_make_bools();
+        if ( !rv && arg->bool_id >= bool_num )
+            rv = -ENOENT;
+        if ( rv )
             goto out;
 
         bool_pending_values[arg->bool_id] = !!(arg->new_value);




--=__PartD9EA72BC.1__=
Content-Type: text/plain; name="flask-set-bool-err.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="flask-set-bool-err.patch"

flask: fix error propagation from flask_security_set_bool()

The function should return an error when flask_security_make_bools() fails,
as well as when the input ID is out of range.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -364,9 +364,10 @@ static int flask_security_set_bool(struc
     else
     {
         if ( !bool_pending_values )
-            flask_security_make_bools();
-
-        if ( arg->bool_id >= bool_num )
+            rv = flask_security_make_bools();
+        if ( !rv && arg->bool_id >= bool_num )
+            rv = -ENOENT;
+        if ( rv )
             goto out;
 
         bool_pending_values[arg->bool_id] = !!(arg->new_value);
--=__PartD9EA72BC.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartD9EA72BC.1__=--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:48:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBi2T-0007I0-8U; Fri, 07 Feb 2014 09:48:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBi2R-0007Hc-TX
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:48:00 +0000
Received: from [193.109.254.147:37448] by server-10.bemta-14.messagelabs.com
	id 04/A1-10711-FCBA4F25; Fri, 07 Feb 2014 09:47:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391766478!2698098!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14577 invoked from network); 7 Feb 2014 09:47:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:47:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:47:58 +0000
Message-Id: <52F4B9DC020000780011A1F6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:47:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB98A12DC.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 3/4] flask: check permissions first thing in
 flask_security_set_bool()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB98A12DC.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Nothing else should be done if the caller isn't permitted to set
boolean values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -326,11 +326,11 @@ static int flask_security_set_bool(struc
 {
     int rv;
 
-    rv = flask_security_resolve_bool(arg);
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
     if ( rv )
         return rv;
 
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    rv = flask_security_resolve_bool(arg);
     if ( rv )
         return rv;
 




--=__PartB98A12DC.1__=
Content-Type: text/plain; name="flask-set-bool-perm-first.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="flask-set-bool-perm-first.patch"

flask: check permissions first thing in flask_security_set_bool()

Nothing else should be done if the caller isn't permitted to set
boolean values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -326,11 +326,11 @@ static int flask_security_set_bool(struc
 {
     int rv;
 
-    rv = flask_security_resolve_bool(arg);
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
     if ( rv )
         return rv;
 
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    rv = flask_security_resolve_bool(arg);
     if ( rv )
         return rv;
 
--=__PartB98A12DC.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB98A12DC.1__=--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:49:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBi3R-0007kN-0V; Fri, 07 Feb 2014 09:49:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBi3P-0007k4-Ar
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:48:59 +0000
Received: from [193.109.254.147:26077] by server-13.bemta-14.messagelabs.com
	id 93/D9-01226-A0CA4F25; Fri, 07 Feb 2014 09:48:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391766537!2705432!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22960 invoked from network); 7 Feb 2014 09:48:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:48:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:48:57 +0000
Message-Id: <52F4BA18020000780011A1FA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:48:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7A49D118.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, dgdegra@tycho.nsa.gov,
	Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 4/4] flask: add compat mode guest support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7A49D118.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... which has been missing since the introduction of the new interface
in the 4.2 development cycle.

In the course of this I also noticed that the compat header generation
failed to make use of move-if-changed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
         .quad compat_vcpu_op
         .quad compat_ni_hypercall       /* 25 */
         .quad compat_mmuext_op
-        .quad do_xsm_op
+        .quad compat_xsm_op
         .quad compat_nmi_op
         .quad compat_sched_op
         .quad compat_callback_op        /* 30 */
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
 headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
+headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h
 cppflags-$(CONFIG_X86)    += -m32
@@ -54,7 +55,7 @@ compat/%.h: compat/%.i Makefile $(BASEDI
 	$(PYTHON) $(BASEDIR)/tools/compat-build-header.py | uniq >>$@.new; \
 	$(if $(suffix-y),echo "$(suffix-y)" >>$@.new;) \
 	echo "#endif /* $$id */" >>$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 compat/%.i: compat/%.c Makefile
 	$(CPP) $(filter-out -M% .%.d -include %/include/xen/config.h,$(CFLAGS)) $(cppflags-y) -o $@ $<
@@ -63,15 +64,17 @@ compat/%.c: public/%.h xlat.lst Makefile
 	mkdir -p $(@D)
 	grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' $< | \
 	$(PYTHON) $(BASEDIR)/tools/compat-build-source.py >$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 compat/xlat.h: xlat.lst $(filter-out compat/xlat.h,$(headers-y)) $(BASEDIR)/tools/get-fields.sh Makefile
 	export PYTHON=$(PYTHON); \
 	grep -v '^[	 ]*#' xlat.lst | \
 	while read what name hdr; do \
-		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g') || exit $$?; \
+		hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \
+		echo '$(headers-y)' | grep -q "$$hdr" || continue; \
+		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr || exit $$?; \
 	done >$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
 
@@ -79,7 +82,7 @@ all: headers.chk
 
 headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) Makefile
 	for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; done >$@.new
-	mv $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 endif
 
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -99,3 +99,16 @@
 !	vcpu_set_singleshot_timer	vcpu.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
+?	flask_access			xsm/flask_op.h
+!	flask_boolean			xsm/flask_op.h
+?	flask_cache_stats		xsm/flask_op.h
+?	flask_hash_stats		xsm/flask_op.h
+!	flask_load			xsm/flask_op.h
+?	flask_ocontext			xsm/flask_op.h
+?	flask_peersid			xsm/flask_op.h
+?	flask_relabel			xsm/flask_op.h
+?	flask_setavc_threshold		xsm/flask_op.h
+?	flask_setenforce		xsm/flask_op.h
+!	flask_sid_context		xsm/flask_op.h
+?	flask_transition		xsm/flask_op.h
+!	flask_userlist			xsm/flask_op.h
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
     return -ENOSYS;
 }
 
+#ifdef CONFIG_COMPAT
+static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+#endif
+
 static XSM_INLINE char *xsm_show_irq_sid(int irq)
 {
     return NULL;
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -129,6 +129,9 @@ struct xsm_operations {
     int (*tmem_control)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#ifdef CONFIG_COMPAT
+    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#endif
 
     int (*hvm_param) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
@@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
     return xsm_ops->do_xsm_op(op);
 }
 
+#ifdef CONFIG_COMPAT
+static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_ops->do_compat_op(op);
+}
+#endif
+
 static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
 {
     return xsm_ops->hvm_param(d, op);
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, hvm_param_nested);
 
     set_to_dummy_if_null(ops, do_xsm_op);
+#ifdef CONFIG_COMPAT
+    set_to_dummy_if_null(ops, do_compat_op);
+#endif
 
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -7,7 +7,7 @@
  *  it under the terms of the GNU General Public License version 2,
  *  as published by the Free Software Foundation.
  */
-
+#ifndef COMPAT
 #include <xen/errno.h>
 #include <xen/event.h>
 #include <xsm/xsm.h>
@@ -20,6 +20,10 @@
 #include <objsec.h>
 #include <conditional.h>
 
+#define ret_t long
+#define _copy_to_guest copy_to_guest
+#define _copy_from_guest copy_from_guest
+
 #ifdef FLASK_DEVELOP
 int flask_enforcing = 0;
 integer_param("flask_enforcing", flask_enforcing);
@@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
     return 0;
 }
=20
+#endif /* COMPAT */
+
 static int flask_security_user(struct xen_flask_userlist *arg)
 {
     char *user;
@@ -119,7 +125,7 @@ static int flask_security_user(struct xe
 
     arg->size = nsids;
 
-    if ( copy_to_guest(arg->u.sids, sids, nsids) )
+    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
         rv = -EFAULT;
=20
     xfree(sids);
@@ -128,6 +134,8 @@ static int flask_security_user(struct xe
     return rv;
 }
=20
+#ifndef COMPAT
+
 static int flask_security_relabel(struct xen_flask_transition *arg)
 {
     int rv;
@@ -208,6 +216,8 @@ static int flask_security_setenforce(str
     return 0;
 }
=20
+#endif /* COMPAT */
+
 static int flask_security_context(struct xen_flask_sid_context *arg)
 {
     int rv;
@@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
 
     arg->size = len;
 
-    if ( !rv && copy_to_guest(arg->context, context, len) )
+    if ( !rv && _copy_to_guest(arg->context, context, len) )
         rv = -EFAULT;
=20
     xfree(context);
@@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
     return rv;
 }
=20
+#ifndef COMPAT
+
 int flask_disable(void)
 {
     static int flask_disabled = 0;
@@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
     return rv;
 }
=20
+#endif /* COMPAT */
+
 static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
 {
     char *name;
@@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
     return rv;
 }
=20
-static int flask_security_commit_bools(void)
-{
-    int rv;
-
-    spin_lock(&sel_sem);
-
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
-    if ( rv )
-        goto out;
-
-    if ( bool_pending_values )
-        rv = security_set_bools(bool_num, bool_pending_values);
-    
- out:
-    spin_unlock(&sel_sem);
-    return rv;
-}
-
 static int flask_security_get_bool(struct xen_flask_boolean *arg)
 {
     int rv;
@@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
             rv = -ERANGE;
         arg->size = nameout_len;
  
-        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
+        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
             rv = -EFAULT;
         xfree(nameout);
     }
@@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
     return rv;
 }
 
+#ifndef COMPAT
+
+static int flask_security_commit_bools(void)
+{
+    int rv;
+
+    spin_lock(&sel_sem);
+
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    if ( rv )
+        goto out;
+
+    if ( bool_pending_values )
+        rv = security_set_bools(bool_num, bool_pending_values);
+
+ out:
+    spin_unlock(&sel_sem);
+    return rv;
+}
+
 static int flask_security_make_bools(void)
 {
     int ret = 0;
@@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
 }
=20
 #endif
+#endif /* COMPAT */
 
 static int flask_security_load(struct xen_flask_load *load)
 {
@@ -501,7 +518,7 @@ static int flask_security_load(struct xe
     if ( !buf )
         return -ENOMEM;
=20
-    if ( copy_from_guest(buf, load->buffer, load->size) )
+    if ( _copy_from_guest(buf, load->buffer, load->size) )
     {
         ret = -EFAULT;
         goto out_free;
@@ -524,6 +541,8 @@ static int flask_security_load(struct xe
     return ret;
 }
 
+#ifndef COMPAT
+
 static int flask_ocontext_del(struct xen_flask_ocontext *arg)
 {
     int rv;
@@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
     return rc;
 }
=20
-long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
+#endif /* !COMPAT */
+
+ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
@@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
  out:
     return rv;
 }
+
+#ifndef COMPAT
+#undef _copy_to_guest
+#define _copy_to_guest copy_to_compat
+#undef _copy_from_guest
+#define _copy_from_guest copy_from_compat
+
+#include <compat/event_channel.h>
+#include <compat/xsm/flask_op.h>
+
+CHECK_flask_access;
+CHECK_flask_cache_stats;
+CHECK_flask_hash_stats;
+CHECK_flask_ocontext;
+CHECK_flask_peersid;
+CHECK_flask_relabel;
+CHECK_flask_setavc_threshold;
+CHECK_flask_setenforce;
+CHECK_flask_transition;
+
+#define COMPAT
+#define flask_copyin_string(ch, pb, sz, mx) ({ \
+	XEN_GUEST_HANDLE_PARAM(char) gh; \
+	guest_from_compat_handle(gh, ch); \
+	flask_copyin_string(gh, pb, sz, mx); \
+})
+
+#define xen_flask_load compat_flask_load
+#define flask_security_load compat_security_load
+
+#define xen_flask_userlist compat_flask_userlist
+#define flask_security_user compat_security_user
+
+#define xen_flask_sid_context compat_flask_sid_context
+#define flask_security_context compat_security_context
+#define flask_security_sid compat_security_sid
+
+#define xen_flask_boolean compat_flask_boolean
+#define flask_security_resolve_bool compat_security_resolve_bool
+#define flask_security_get_bool compat_security_get_bool
+#define flask_security_set_bool compat_security_set_bool
+
+#define xen_flask_op_t compat_flask_op_t
+#undef ret_t
+#define ret_t int
+#define do_flask_op compat_flask_op
+
+#include "flask_op.c"
+#endif
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1461,6 +1461,7 @@ static int flask_map_gmfn_foreign(struct
 #endif
=20
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
=20
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
@@ -1535,6 +1536,9 @@ static struct xsm_operations flask_ops =
     .hvm_param_nested = flask_hvm_param_nested,
 
     .do_xsm_op = do_flask_op,
+#ifdef CONFIG_COMPAT
+    .do_compat_op =3D compat_flask_op,
+#endif
=20
     .add_to_physmap =3D flask_add_to_physmap,
     .remove_from_physmap =3D flask_remove_from_physmap,
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
     return xsm_do_xsm_op(op);
 }
 
-
+#ifdef CONFIG_COMPAT
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_do_compat_op(op);
+}
+#endif



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Feb 07 09:49:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBi3R-0007kN-0V; Fri, 07 Feb 2014 09:49:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBi3P-0007k4-Ar
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 09:48:59 +0000
Received: from [193.109.254.147:26077] by server-13.bemta-14.messagelabs.com
	id 93/D9-01226-A0CA4F25; Fri, 07 Feb 2014 09:48:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391766537!2705432!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22960 invoked from network); 7 Feb 2014 09:48:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 09:48:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 09:48:57 +0000
Message-Id: <52F4BA18020000780011A1FA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 09:48:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7A49D118.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, dgdegra@tycho.nsa.gov,
	Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 4/4] flask: add compat mode guest support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


... which has been missing since the introduction of the new interface
in the 4.2 development cycle.

In the course of this I also noticed that the compat header generation
failed to make use of move-if-changed.
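For reference, the semantics that the $(call move-if-changed,$@.new,$@) sites in the Makefile hunks rely on can be sketched as a plain shell helper (the name and shape here are illustrative, not the actual Xen make macro): only install the freshly generated file when its content differs from the existing target, so unchanged generated headers keep their timestamps and don't trigger needless rebuilds.

```shell
# Sketch of move-if-changed semantics (illustrative helper, not Xen's macro).
move_if_changed() {
    if cmp -s "$1" "$2"; then
        rm -f "$1"       # contents identical: discard the new copy,
                         # leaving the target's timestamp untouched
    else
        mv -f "$1" "$2"  # contents differ (or target missing): install it
    fi
}

printf 'generated\n' > hdr.new
move_if_changed hdr.new hdr    # first run installs hdr
printf 'generated\n' > hdr.new
move_if_changed hdr.new hdr    # identical content: hdr is left alone
```

An unconditional `mv -f` would always bump the target's mtime and make everything depending on the header rebuild.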

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
         .quad compat_vcpu_op
         .quad compat_ni_hypercall       /* 25 */
         .quad compat_mmuext_op
-        .quad do_xsm_op
+        .quad compat_xsm_op
         .quad compat_nmi_op
         .quad compat_sched_op
         .quad compat_callback_op        /* 30 */
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
 headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
+headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h
 cppflags-$(CONFIG_X86)    += -m32
@@ -54,7 +55,7 @@ compat/%.h: compat/%.i Makefile $(BASEDI
 	$(PYTHON) $(BASEDIR)/tools/compat-build-header.py | uniq >>$@.new; \
 	$(if $(suffix-y),echo "$(suffix-y)" >>$@.new;) \
 	echo "#endif /* $$id */" >>$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 compat/%.i: compat/%.c Makefile
 	$(CPP) $(filter-out -M% .%.d -include %/include/xen/config.h,$(CFLAGS)) $(cppflags-y) -o $@ $<
@@ -63,15 +64,17 @@ compat/%.c: public/%.h xlat.lst Makefile
 	mkdir -p $(@D)
 	grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' $< | \
 	$(PYTHON) $(BASEDIR)/tools/compat-build-source.py >$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 compat/xlat.h: xlat.lst $(filter-out compat/xlat.h,$(headers-y)) $(BASEDIR)/tools/get-fields.sh Makefile
 	export PYTHON=$(PYTHON); \
 	grep -v '^[	 ]*#' xlat.lst | \
 	while read what name hdr; do \
-		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g') || exit $$?; \
+		hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \
+		echo '$(headers-y)' | grep -q "$$hdr" || continue; \
+		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr || exit $$?; \
 	done >$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
 
@@ -79,7 +82,7 @@ all: headers.chk
 
 headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) Makefile
 	for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; done >$@.new
-	mv $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 endif
 
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -99,3 +99,16 @@
 !	vcpu_set_singleshot_timer	vcpu.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
+?	flask_access			xsm/flask_op.h
+!	flask_boolean			xsm/flask_op.h
+?	flask_cache_stats		xsm/flask_op.h
+?	flask_hash_stats		xsm/flask_op.h
+!	flask_load			xsm/flask_op.h
+?	flask_ocontext			xsm/flask_op.h
+?	flask_peersid			xsm/flask_op.h
+?	flask_relabel			xsm/flask_op.h
+?	flask_setavc_threshold		xsm/flask_op.h
+?	flask_setenforce		xsm/flask_op.h
+!	flask_sid_context		xsm/flask_op.h
+?	flask_transition		xsm/flask_op.h
+!	flask_userlist			xsm/flask_op.h
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
     return -ENOSYS;
 }
 
+#ifdef CONFIG_COMPAT
+static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+#endif
+
 static XSM_INLINE char *xsm_show_irq_sid(int irq)
 {
     return NULL;
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -129,6 +129,9 @@ struct xsm_operations {
     int (*tmem_control)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#ifdef CONFIG_COMPAT
+    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#endif
 
     int (*hvm_param) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
@@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
     return xsm_ops->do_xsm_op(op);
 }
 
+#ifdef CONFIG_COMPAT
+static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_ops->do_compat_op(op);
+}
+#endif
+
 static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
 {
     return xsm_ops->hvm_param(d, op);
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, hvm_param_nested);
 
     set_to_dummy_if_null(ops, do_xsm_op);
+#ifdef CONFIG_COMPAT
+    set_to_dummy_if_null(ops, do_compat_op);
+#endif
 
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -7,7 +7,7 @@
  *  it under the terms of the GNU General Public License version 2,
  *  as published by the Free Software Foundation.
  */
-
+#ifndef COMPAT
 #include <xen/errno.h>
 #include <xen/event.h>
 #include <xsm/xsm.h>
@@ -20,6 +20,10 @@
 #include <objsec.h>
 #include <conditional.h>
 
+#define ret_t long
+#define _copy_to_guest copy_to_guest
+#define _copy_from_guest copy_from_guest
+
 #ifdef FLASK_DEVELOP
 int flask_enforcing = 0;
 integer_param("flask_enforcing", flask_enforcing);
@@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_user(struct xen_flask_userlist *arg)
 {
     char *user;
@@ -119,7 +125,7 @@ static int flask_security_user(struct xe
 
     arg->size = nsids;
 
-    if ( copy_to_guest(arg->u.sids, sids, nsids) )
+    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
         rv = -EFAULT;
 
     xfree(sids);
@@ -128,6 +134,8 @@ static int flask_security_user(struct xe
     return rv;
 }
 
+#ifndef COMPAT
+
 static int flask_security_relabel(struct xen_flask_transition *arg)
 {
     int rv;
@@ -208,6 +216,8 @@ static int flask_security_setenforce(str
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_context(struct xen_flask_sid_context *arg)
 {
     int rv;
@@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
 
     arg->size = len;
 
-    if ( !rv && copy_to_guest(arg->context, context, len) )
+    if ( !rv && _copy_to_guest(arg->context, context, len) )
         rv = -EFAULT;
 
     xfree(context);
@@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
     return rv;
 }
 
+#ifndef COMPAT
+
 int flask_disable(void)
 {
     static int flask_disabled = 0;
@@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
     return rv;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
 {
     char *name;
@@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
     return rv;
 }
 
-static int flask_security_commit_bools(void)
-{
-    int rv;
-
-    spin_lock(&sel_sem);
-
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
-    if ( rv )
-        goto out;
-
-    if ( bool_pending_values )
-        rv = security_set_bools(bool_num, bool_pending_values);
-    
- out:
-    spin_unlock(&sel_sem);
-    return rv;
-}
-
 static int flask_security_get_bool(struct xen_flask_boolean *arg)
 {
     int rv;
@@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
             rv = -ERANGE;
         arg->size = nameout_len;
  
-        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
+        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
             rv = -EFAULT;
         xfree(nameout);
     }
@@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
     return rv;
 }
 
+#ifndef COMPAT
+
+static int flask_security_commit_bools(void)
+{
+    int rv;
+
+    spin_lock(&sel_sem);
+
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    if ( rv )
+        goto out;
+
+    if ( bool_pending_values )
+        rv = security_set_bools(bool_num, bool_pending_values);
+
+ out:
+    spin_unlock(&sel_sem);
+    return rv;
+}
+
 static int flask_security_make_bools(void)
 {
     int ret = 0;
@@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
 }
 
 #endif
+#endif /* COMPAT */
 
 static int flask_security_load(struct xen_flask_load *load)
 {
@@ -501,7 +518,7 @@ static int flask_security_load(struct xe
     if ( !buf )
         return -ENOMEM;
 
-    if ( copy_from_guest(buf, load->buffer, load->size) )
+    if ( _copy_from_guest(buf, load->buffer, load->size) )
     {
         ret = -EFAULT;
         goto out_free;
@@ -524,6 +541,8 @@ static int flask_security_load(struct xe
     return ret;
 }
 
+#ifndef COMPAT
+
 static int flask_ocontext_del(struct xen_flask_ocontext *arg)
 {
     int rv;
@@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
     return rc;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
+#endif /* !COMPAT */
+
+ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
@@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
  out:
     return rv;
 }
+
+#ifndef COMPAT
+#undef _copy_to_guest
+#define _copy_to_guest copy_to_compat
+#undef _copy_from_guest
+#define _copy_from_guest copy_from_compat
+
+#include <compat/event_channel.h>
+#include <compat/xsm/flask_op.h>
+
+CHECK_flask_access;
+CHECK_flask_cache_stats;
+CHECK_flask_hash_stats;
+CHECK_flask_ocontext;
+CHECK_flask_peersid;
+CHECK_flask_relabel;
+CHECK_flask_setavc_threshold;
+CHECK_flask_setenforce;
+CHECK_flask_transition;
+
+#define COMPAT
+#define flask_copyin_string(ch, pb, sz, mx) ({ \
+	XEN_GUEST_HANDLE_PARAM(char) gh; \
+	guest_from_compat_handle(gh, ch); \
+	flask_copyin_string(gh, pb, sz, mx); \
+})
+
+#define xen_flask_load compat_flask_load
+#define flask_security_load compat_security_load
+
+#define xen_flask_userlist compat_flask_userlist
+#define flask_security_user compat_security_user
+
+#define xen_flask_sid_context compat_flask_sid_context
+#define flask_security_context compat_security_context
+#define flask_security_sid compat_security_sid
+
+#define xen_flask_boolean compat_flask_boolean
+#define flask_security_resolve_bool compat_security_resolve_bool
+#define flask_security_get_bool compat_security_get_bool
+#define flask_security_set_bool compat_security_set_bool
+
+#define xen_flask_op_t compat_flask_op_t
+#undef ret_t
+#define ret_t int
+#define do_flask_op compat_flask_op
+
+#include "flask_op.c"
+#endif
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1461,6 +1461,7 @@ static int flask_map_gmfn_foreign(struct
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
@@ -1535,6 +1536,9 @@ static struct xsm_operations flask_ops =
     .hvm_param_nested = flask_hvm_param_nested,
 
     .do_xsm_op = do_flask_op,
+#ifdef CONFIG_COMPAT
+    .do_compat_op = compat_flask_op,
+#endif
 
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
     return xsm_do_xsm_op(op);
 }
 
-
+#ifdef CONFIG_COMPAT
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_do_compat_op(op);
+}
+#endif



--=__Part7A49D118.1__=
Content-Type: text/plain; name="flask-op-compat.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="flask-op-compat.patch"

flask: add compat mode guest support

... which has been missing since the introduction of the new interface
in the 4.2 development cycle.

In the course of this I also noticed that the compat header generation
failed to make use of move-if-changed.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
         .quad compat_vcpu_op
         .quad compat_ni_hypercall       /* 25 */
         .quad compat_mmuext_op
-        .quad do_xsm_op
+        .quad compat_xsm_op
         .quad compat_nmi_op
         .quad compat_sched_op
         .quad compat_callback_op        /* 30 */
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
 headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
+headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h
 cppflags-$(CONFIG_X86)    += -m32
@@ -54,7 +55,7 @@ compat/%.h: compat/%.i Makefile $(BASEDI
 	$(PYTHON) $(BASEDIR)/tools/compat-build-header.py | uniq >>$@.new; \
 	$(if $(suffix-y),echo "$(suffix-y)" >>$@.new;) \
 	echo "#endif /* $$id */" >>$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 compat/%.i: compat/%.c Makefile
 	$(CPP) $(filter-out -M% .%.d -include %/include/xen/config.h,$(CFLAGS)) $(cppflags-y) -o $@ $<
@@ -63,15 +64,17 @@ compat/%.c: public/%.h xlat.lst Makefile
 	mkdir -p $(@D)
 	grep -v 'DEFINE_XEN_GUEST_HANDLE(long)' $< | \
 	$(PYTHON) $(BASEDIR)/tools/compat-build-source.py >$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 compat/xlat.h: xlat.lst $(filter-out compat/xlat.h,$(headers-y)) $(BASEDIR)/tools/get-fields.sh Makefile
 	export PYTHON=$(PYTHON); \
 	grep -v '^[	 ]*#' xlat.lst | \
 	while read what name hdr; do \
-		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g') || exit $$?; \
+		hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \
+		echo '$(headers-y)' | grep -q "$$hdr" || continue; \
+		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr || exit $$?; \
 	done >$@.new
-	mv -f $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
 
@@ -79,7 +82,7 @@ all: headers.chk
 
 headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) Makefile
 	for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; done >$@.new
-	mv $@.new $@
+	$(call move-if-changed,$@.new,$@)
 
 endif
 
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -99,3 +99,16 @@
 !	vcpu_set_singleshot_timer	vcpu.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
+?	flask_access			xsm/flask_op.h
+!	flask_boolean			xsm/flask_op.h
+?	flask_cache_stats		xsm/flask_op.h
+?	flask_hash_stats		xsm/flask_op.h
+!	flask_load			xsm/flask_op.h
+?	flask_ocontext			xsm/flask_op.h
+?	flask_peersid			xsm/flask_op.h
+?	flask_relabel			xsm/flask_op.h
+?	flask_setavc_threshold		xsm/flask_op.h
+?	flask_setenforce		xsm/flask_op.h
+!	flask_sid_context		xsm/flask_op.h
+?	flask_transition		xsm/flask_op.h
+!	flask_userlist			xsm/flask_op.h
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
     return -ENOSYS;
 }
 
+#ifdef CONFIG_COMPAT
+static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+#endif
+
 static XSM_INLINE char *xsm_show_irq_sid(int irq)
 {
     return NULL;
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -129,6 +129,9 @@ struct xsm_operations {
     int (*tmem_control)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#ifdef CONFIG_COMPAT
+    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#endif
 
     int (*hvm_param) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
@@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
     return xsm_ops->do_xsm_op(op);
 }
 
+#ifdef CONFIG_COMPAT
+static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_ops->do_compat_op(op);
+}
+#endif
+
 static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
 {
     return xsm_ops->hvm_param(d, op);
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, hvm_param_nested);
 
     set_to_dummy_if_null(ops, do_xsm_op);
+#ifdef CONFIG_COMPAT
+    set_to_dummy_if_null(ops, do_compat_op);
+#endif
 
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -7,7 +7,7 @@
  *  it under the terms of the GNU General Public License version 2,
  *  as published by the Free Software Foundation.
  */
-
+#ifndef COMPAT
 #include <xen/errno.h>
 #include <xen/event.h>
 #include <xsm/xsm.h>
@@ -20,6 +20,10 @@
 #include <objsec.h>
 #include <conditional.h>
 
+#define ret_t long
+#define _copy_to_guest copy_to_guest
+#define _copy_from_guest copy_from_guest
+
 #ifdef FLASK_DEVELOP
 int flask_enforcing = 0;
 integer_param("flask_enforcing", flask_enforcing);
@@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_user(struct xen_flask_userlist *arg)
 {
     char *user;
@@ -119,7 +125,7 @@ static int flask_security_user(struct xe
 
     arg->size = nsids;
 
-    if ( copy_to_guest(arg->u.sids, sids, nsids) )
+    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
         rv = -EFAULT;
 
     xfree(sids);
@@ -128,6 +134,8 @@ static int flask_security_user(struct xe
     return rv;
 }
 
+#ifndef COMPAT
+
 static int flask_security_relabel(struct xen_flask_transition *arg)
 {
     int rv;
@@ -208,6 +216,8 @@ static int flask_security_setenforce(str
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_context(struct xen_flask_sid_context *arg)
 {
     int rv;
@@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
 
     arg->size = len;
 
-    if ( !rv && copy_to_guest(arg->context, context, len) )
+    if ( !rv && _copy_to_guest(arg->context, context, len) )
         rv = -EFAULT;
 
     xfree(context);
@@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
     return rv;
 }
 
+#ifndef COMPAT
+
 int flask_disable(void)
 {
     static int flask_disabled = 0;
@@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
     return rv;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
 {
     char *name;
@@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
     return rv;
 }
 
-static int flask_security_commit_bools(void)
-{
-    int rv;
-
-    spin_lock(&sel_sem);
-
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
-    if ( rv )
-        goto out;
-
-    if ( bool_pending_values )
-        rv = security_set_bools(bool_num, bool_pending_values);
-    
- out:
-    spin_unlock(&sel_sem);
-    return rv;
-}
-
 static int flask_security_get_bool(struct xen_flask_boolean *arg)
 {
     int rv;
@@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
             rv = -ERANGE;
         arg->size = nameout_len;
 
-        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
+        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
             rv = -EFAULT;
         xfree(nameout);
     }
@@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
     return rv;
 }
 
+#ifndef COMPAT
+
+static int flask_security_commit_bools(void)
+{
+    int rv;
+
+    spin_lock(&sel_sem);
+
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    if ( rv )
+        goto out;
+
+    if ( bool_pending_values )
+        rv = security_set_bools(bool_num, bool_pending_values);
+
+ out:
+    spin_unlock(&sel_sem);
+    return rv;
+}
+
 static int flask_security_make_bools(void)
 {
     int ret = 0;
@@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
 }
 
 #endif
+#endif /* COMPAT */
 
 static int flask_security_load(struct xen_flask_load *load)
 {
@@ -501,7 +518,7 @@ static int flask_security_load(struct xe
     if ( !buf )
         return -ENOMEM;
 
-    if ( copy_from_guest(buf, load->buffer, load->size) )
+    if ( _copy_from_guest(buf, load->buffer, load->size) )
     {
         ret = -EFAULT;
         goto out_free;
@@ -524,6 +541,8 @@ static int flask_security_load(struct xe
     return ret;
 }
 
+#ifndef COMPAT
+
 static int flask_ocontext_del(struct xen_flask_ocontext *arg)
 {
     int rv;
@@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
     return rc;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
+#endif /* !COMPAT */
+
+ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
@@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
  out:
     return rv;
 }
+
+#ifndef COMPAT
+#undef _copy_to_guest
+#define _copy_to_guest copy_to_compat
+#undef _copy_from_guest
+#define _copy_from_guest copy_from_compat
+
+#include <compat/event_channel.h>
+#include <compat/xsm/flask_op.h>
+
+CHECK_flask_access;
+CHECK_flask_cache_stats;
+CHECK_flask_hash_stats;
+CHECK_flask_ocontext;
+CHECK_flask_peersid;
+CHECK_flask_relabel;
+CHECK_flask_setavc_threshold;
+CHECK_flask_setenforce;
+CHECK_flask_transition;
+
+#define COMPAT
+#define flask_copyin_string(ch, pb, sz, mx) ({ \
+	XEN_GUEST_HANDLE_PARAM(char) gh; \
+	guest_from_compat_handle(gh, ch); \
+	flask_copyin_string(gh, pb, sz, mx); \
+})
+
+#define xen_flask_load compat_flask_load
+#define flask_security_load compat_security_load
+
+#define xen_flask_userlist compat_flask_userlist
+#define flask_security_user compat_security_user
+
+#define xen_flask_sid_context compat_flask_sid_context
+#define flask_security_context compat_security_context
+#define flask_security_sid compat_security_sid
+
+#define xen_flask_boolean compat_flask_boolean
+#define flask_security_resolve_bool compat_security_resolve_bool
+#define flask_security_get_bool compat_security_get_bool
+#define flask_security_set_bool compat_security_set_bool
+
+#define xen_flask_op_t compat_flask_op_t
+#undef ret_t
+#define ret_t int
+#define do_flask_op compat_flask_op
+
+#include "flask_op.c"
+#endif
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1461,6 +1461,7 @@ static int flask_map_gmfn_foreign(struct
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
@@ -1535,6 +1536,9 @@ static struct xsm_operations flask_ops =
     .hvm_param_nested = flask_hvm_param_nested,
 
     .do_xsm_op = do_flask_op,
+#ifdef CONFIG_COMPAT
+    .do_compat_op = compat_flask_op,
+#endif
 
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
     return xsm_do_xsm_op(op);
 }
 
-
+#ifdef CONFIG_COMPAT
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_do_compat_op(op);
+}
+#endif
--=__Part7A49D118.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7A49D118.1__=--


From xen-devel-bounces@lists.xen.org Fri Feb 07 09:54:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 09:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBi8p-00083Z-EU; Fri, 07 Feb 2014 09:54:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBi8o-00083S-CA
	for Xen-devel@lists.xensource.com; Fri, 07 Feb 2014 09:54:34 +0000
Received: from [85.158.143.35:49192] by server-1.bemta-4.messagelabs.com id
	0E/56-31661-95DA4F25; Fri, 07 Feb 2014 09:54:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391766871!3861364!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4043 invoked from network); 7 Feb 2014 09:54:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 09:54:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98920451"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 09:54:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	04:54:30 -0500
Message-ID: <1391766869.2162.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 7 Feb 2014 09:54:29 +0000
In-Reply-To: <20140206171537.4ec98205@mantra.us.oracle.com>
References: <1390612410-27384-1-git-send-email-mukesh.rathor@oracle.com>
	<1390612410-27384-2-git-send-email-mukesh.rathor@oracle.com>
	<20140206171537.4ec98205@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Xen-devel@lists.xensource.com, ian.jackson@eu.citrix.com,
	roger.pau@citrix.com, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [V0 PATCH] pvh: expose feature flags from tools for
 domUs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 17:15 -0800, Mukesh Rathor wrote:
> 
> ping? I think Roger you can ack from xen-pvh side. IanC/J, I guess one
> of you need to ack from tool side?

The mechanical bits of the tools side look OK, I'll leave it to others
to ack the actual bits you are setting/clearing.

> Again this is for 4.5, and not 4.4.

Right, please ping/resend once 4.5 opens.

Ian.




From xen-devel-bounces@lists.xen.org Fri Feb 07 10:02:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:02:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiG9-0000An-MO; Fri, 07 Feb 2014 10:02:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBiG8-0000Ai-Dq
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:02:08 +0000
Received: from [85.158.143.35:33759] by server-3.bemta-4.messagelabs.com id
	84/04-11539-F1FA4F25; Fri, 07 Feb 2014 10:02:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391767326!3861183!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1713 invoked from network); 7 Feb 2014 10:02:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:02:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98921692"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 10:02:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	05:02:04 -0500
Message-ID: <1391767323.2162.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>
Date: Fri, 7 Feb 2014 10:02:03 +0000
In-Reply-To: <20140207003433.GK5465@type.youpi.perso.aquilenet.fr>
References: <CAH4w_GwwVDucnUFA1pH2kpYfSg6dZErs-Lu3Y9--fcLBEELL+Q@mail.gmail.com>
	<1391423287.10515.32.camel@kazak.uk.xensource.com>
	<20140207003433.GK5465@type.youpi.perso.aquilenet.fr>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: monaka@monami-ya.jp, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG]Cannot build mini-os-x86_64-c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 01:34 +0100, Samuel Thibault wrote:
> Ian Campbell, on Mon 03 Feb 2014 10:28:07 +0000, wrote:
> > On Sat, 2014-02-01 at 17:55 +0900, Masaki Muranaka wrote:
> > > I got the build error. Log is like this:
> > > /home/azureuser/xen/stubdom/mini-os-x86_64-c/test.o: In function
> > > `app_main':
> > >
> > > /home/azureuser/xen/extras/mini-os/test.c:542: multiple definition of
> > > `app_main'
> > > /home/azureuser/xen/stubdom/mini-os-x86_64-c/main.o:/home/azureuser/xen/extras/mini-os/main.c:187: first defined here
> > > make[1]: *** [/home/azureuser/xen/stubdom/mini-os-x86_64-c/mini-os]
> > > Error 1
> > > make[1]: Leaving directory `/home/azureuser/xen/extras/mini-os'
> > > make: *** [c-stubdom] Error 2
> > >
> > > I think stubdom/c/minios.cfg should be like this.
> >
> > That sounds plausible. I'm not sure what would then include the tests at
> > that point though.
>
> IIRC, by building mini-os by itself (without any stubdom part, i.e. no
> libc)

I thought that but I wasn't able to confirm by inspection of the
Makefile etc. It's not a big deal IMHO, if someone cares they will fix
it...

> > If Samuel agrees with the patch
>
> Yes. I wonder how that didn't pose a problem so far.

I don't think we build the c stubdom by default -- maybe for this
reason.

Masaki, if you submit a patch you could also consider adding this to the
default set of stubdoms to build (or not, up to you).

Ian.

>
> > please could you formally submit according to
> > http://wiki.xen.org/wiki/Submitting_Xen_Patches
>
> Samuel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:05:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:05:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiJ2-0000HN-BY; Fri, 07 Feb 2014 10:05:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBiJ0-0000HF-Ug
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 10:05:07 +0000
Received: from [85.158.139.211:38446] by server-14.bemta-5.messagelabs.com id
	19/14-27598-2DFA4F25; Fri, 07 Feb 2014 10:05:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391767504!2324343!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25095 invoked from network); 7 Feb 2014 10:05:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:05:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="100779671"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 10:05:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	05:05:02 -0500
Message-ID: <1391767501.2162.20.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Fri, 7 Feb 2014 10:05:01 +0000
In-Reply-To: <52F425AD.50209@terremark.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
> > > >cc1: warnings being treated as errors
> > > >xenlight_stubs.c: In function 'Defbool_val':
> > > >xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
> > > >xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
> > > >xenlight_stubs.c: In function 'String_option_val':
> > > >xenlight_stubs.c:379: error: expected expression before 'char'
> > > >xenlight_stubs.c: In function 'aohow_val':
> > > >xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
> 
> Any idea on what to do about ocaml issue?

My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
What version do you have?
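[Archive editor's note: for readers hitting the same build failure, here is a minimal stand-alone sketch of roughly what the `CAMLreturnT` macro does. It uses stand-in declarations instead of the real `<caml/memory.h>`, and the function name is illustrative, not from the xenlight stubs. `CAMLreturnT` exists in newer OCaml C interfaces to return a non-`value` type from a stub while still popping the local GC roots; a runtime old enough to lack it fails exactly as in the log above.]

```c
#include <assert.h>

typedef long value;               /* stand-in for OCaml's value type      */
static value *caml_local_roots;   /* stand-in for the runtime's root list */

static int example_stub(value unused)
{
    (void)unused;
    value *caml__frame = caml_local_roots; /* what CAMLparam0() records */
    int result = 42;

    /* CAMLreturnT(int, result) expands to roughly this: restore the
     * saved local-roots frame, then return the non-value result. */
    do {
        int caml__temp_result = (result);
        caml_local_roots = caml__frame;    /* pop the local roots */
        return caml__temp_result;
    } while (0);
}
```

With an OCaml too old to define the macro, the compiler sees a bare `CAMLreturnT(libxl_defbool, ...)` call, hence the "implicit declaration" warning followed by the "expected expression" errors.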

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:07:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiKw-0000Ns-Sw; Fri, 07 Feb 2014 10:07:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBiKv-0000Nl-R8
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 10:07:06 +0000
Received: from [85.158.143.35:23821] by server-3.bemta-4.messagelabs.com id
	79/DF-11539-940B4F25; Fri, 07 Feb 2014 10:07:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391767623!3869950!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24263 invoked from network); 7 Feb 2014 10:07:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:07:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="100780019"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 10:07:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	05:07:02 -0500
Message-ID: <1391767621.2162.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 10:07:01 +0000
In-Reply-To: <osstest-24746-mainreport@xen.org>
References: <osstest-24746-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-arm-xen test] 24746: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 17:43 +0000, xen.org wrote:
> flight 24746 linux-arm-xen real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24746/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed, but are not blocking:
>  test-armhf-armhf-xl           9 guest-start                  fail   never pass
        
        [....] Configuring network interfaces...Internet Systems
        Consortium DHCP Client 4.2.2
        Copyright 2004-2011 Internet Systems Consortium.
        All rights reserved.
        For info, please visit https://www.isc.org/software/dhcp/
        
        socket: Address family not supported by protocol - make sure
        CONFIG_PACKET (Packet socket) and CONFIG_FILTER
        (Socket Filtering) are enabled in your kernel
        configuration!
        Failed to bring up eth0.
        
And indeed CONFIG_PACKET and CONFIG_FILTER are not enabled. I'll send a
patch to make sure these are enabled (x86 gets them from defconfig, I
think), but what's weird is that the exact same kernel booted on the
host and appears to have done DHCP without complaint...
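[Archive editor's note: illustrative only — this flips the two options in a tiny stand-in `.config` with `sed`, so the snippet runs outside a kernel tree. In a real tree you would typically use `scripts/config --enable CONFIG_PACKET --enable CONFIG_FILTER` or menuconfig instead.]

```shell
# Create a stand-in kernel .config with the two options disabled.
cat > .config <<'EOF'
# CONFIG_PACKET is not set
# CONFIG_FILTER is not set
CONFIG_NET=y
EOF

# Enable each option by rewriting its "is not set" line.
for opt in CONFIG_PACKET CONFIG_FILTER; do
    sed -i "s/^# $opt is not set/$opt=y/" .config
done

# Show the now-enabled options.
grep -E '^CONFIG_(PACKET|FILTER)=y' .config
```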

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:07:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:07:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiLW-0000Rp-Aa; Fri, 07 Feb 2014 10:07:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBiLU-0000Rc-T0
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 10:07:41 +0000
Received: from [193.109.254.147:28823] by server-15.bemta-14.messagelabs.com
	id E8/58-10839-C60B4F25; Fri, 07 Feb 2014 10:07:40 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391767659!2703824!1
X-Originating-IP: [81.169.146.218]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31662 invoked from network); 7 Feb 2014 10:07:39 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.218)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 10:07:39 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391767658; l=1825;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=513AnHgeGmUh/Tx6AF2XnlMdvo0=;
	b=a+drPeyfkY3QYE+l8nabaUKkUZbc+b/Hh3k2OgdkZcE3dROG1UJ+egL+zduWRmxrdUP
	42KHV7u/w7HP1QZcCobI1fO+qjGVaoSUpXVaL33GZiNbLBhsll3bhjn8E1QAlIsZaJDTr
	34Joeg6uEYjVpo5ssDlDuXhjFcA2enfFXuw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id 6026ecq17A7YGsU
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Fri, 7 Feb 2014 11:07:34 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id AE0EE50269; Fri,  7 Feb 2014 11:07:33 +0100 (CET)
Date: Fri, 7 Feb 2014 11:07:33 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140207100733.GA1958@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
	<52F4A4F7020000780011A102@nat28.tlf.novell.com>
	<20140207083234.GA17978@aepfle.de>
	<52F4B1A7020000780011A14B@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F4B1A7020000780011A14B@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, Jan Beulich wrote:

> >>> On 07.02.14 at 09:32, Olaf Hering <olaf@aepfle.de> wrote:
> > On Fri, Feb 07, Jan Beulich wrote:
> > 
> >> They question is what the intended behavior here is: I'd generally
> > 
> > In my opinion dom0 is just a child of Xen, which should follow the rules
> > of the parent. If Xen is configured to have its console on serial then
> > the default of dom0 should be to follow just that. Appearently its just
> > a matter of correctly using xvc0.
> > 
> > I'm not sure what the gain would be to have Xen on serial and dom0
> > somewhere else, and enforcing the need of a console= cmdline option to
> > point dom0 also to serial. Thats just doing things twice.
> 
> That's a fair point, but leaves aside the case of Xen _not_ using
> the serial console. Dom0 has no way to know, and hence would
> still push output there, not knowing that it ends up no-where.

You mean no Xen console= option implies that dom0 writes nowhere? I
would expect dom0 to use the graphics card in that case to send its
output.

> Also the "follow the rules of the parent" already doesn't apply for
> the VGA console case, where Dom0 makes its own decision too
> (and it's for that reason that Xen needs to stop sending data to
> the VGA in order to not interfere). Hence I'm not sure that
> argument really counts.

The details of driving a particular piece of hardware don't really
matter. I think the important distinction is "goes to the wire" vs.
"goes to the monitor".

I think the bug is that register_console("xvc") is called without a
preceding add_preferred_console(), which with current kernels means a
second entry in /proc/consoles. This in turn lets systemd spawn a login
for it.

I suspect the rules have changed since 2.6.18. I will have a look
at this now.
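[Archive editor's note: a stand-alone model, NOT the real Linux implementations, of the call ordering described above. Both functions below are illustrative stand-ins; in the kernel, a console registered after a matching add_preferred_console() becomes the preferred console instead of appearing as an extra /proc/consoles entry.]

```c
#include <assert.h>
#include <string.h>

static char preferred[16];         /* name recorded by add_preferred_console */
static int extra_console_entries;  /* models surplus /proc/consoles lines    */

static void add_preferred_console(const char *name, int idx)
{
    (void)idx;
    strncpy(preferred, name, sizeof(preferred) - 1);
}

static void register_console(const char *name)
{
    /* Without a preceding matching add_preferred_console(), the new
     * console is appended alongside the existing one -- the situation
     * that made systemd spawn a second login. */
    if (strcmp(preferred, name) != 0)
        extra_console_entries++;
}
```

Calling `add_preferred_console("xvc", 0)` before `register_console("xvc")` leaves `extra_console_entries` at zero in this model; registering first would bump it.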

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:08:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiMi-0000b9-QW; Fri, 07 Feb 2014 10:08:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBiMh-0000aw-Dt
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:08:55 +0000
Received: from [85.158.139.211:15123] by server-3.bemta-5.messagelabs.com id
	E5/D6-13671-6B0B4F25; Fri, 07 Feb 2014 10:08:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391767732!2325898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23852 invoked from network); 7 Feb 2014 10:08:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:08:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98922929"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 10:08:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	05:08:51 -0500
Message-ID: <1391767730.2162.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 7 Feb 2014 10:08:50 +0000
In-Reply-To: <20140206171248.5143e7b3@mantra.us.oracle.com>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
	<1391336650.15093.23.camel@hastur.hellion.org.uk>
	<20140202102852.GA9984@aepfle.de>
	<1391337376.15093.24.camel@hastur.hellion.org.uk>
	<20140206171248.5143e7b3@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 17:12 -0800, Mukesh Rathor wrote:
> On Sun, 2 Feb 2014 10:36:16 +0000
> Ian Campbell <ian.campbell@citrix.com> wrote:
> 
> > On Sun, 2014-02-02 at 11:28 +0100, Olaf Hering wrote:
> > > On Sun, Feb 02, Ian Campbell wrote:
> > > 
> > > > I suppose there is no harm in this, but is there any chance that
> > > > gdbsx would actually work on arm64 without significant actual
> > > > work going into it?
> > > 
> > > I have no idea. But I just noticed its built only due to this line
> > > in our xen.spec:
> > > 
> > > make -C tools/debugger/gdbsx
> > > 
> > > Perhaps this should be guarded by a %ifarch x86_64.
> > 
> > I suspect that right now that this would be wise (maybe i386 too?) --
> > hopefully Mukesh can advise.
> > 
> > Ian.
> 
> It should just work if you can quickly implement 
> arch/x86/debug.c for arm.

Thanks.

Looks like we'd also need to make a bunch of arch/x86/domctl.c generic
too, or implement the appropriate arm equivalents.

Not a 4.4 thing, for sure; if someone wants gdbsx on arm then patches
for 4.5 would be welcome, as always.

>  If not, you'd need to "arch it out".
> 
> thanks
> mukesh
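[Archive editor's note: Olaf's suggested `%ifarch` guard, sketched as a hypothetical xen.spec fragment — the surrounding spec file is not shown in this thread.]

```
%ifarch x86_64
make -C tools/debugger/gdbsx
%endif
```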



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:08:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiMi-0000b9-QW; Fri, 07 Feb 2014 10:08:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBiMh-0000aw-Dt
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:08:55 +0000
Received: from [85.158.139.211:15123] by server-3.bemta-5.messagelabs.com id
	E5/D6-13671-6B0B4F25; Fri, 07 Feb 2014 10:08:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391767732!2325898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23852 invoked from network); 7 Feb 2014 10:08:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:08:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98922929"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 10:08:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	05:08:51 -0500
Message-ID: <1391767730.2162.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Fri, 7 Feb 2014 10:08:50 +0000
In-Reply-To: <20140206171248.5143e7b3@mantra.us.oracle.com>
References: <1391329191-22566-1-git-send-email-olaf@aepfle.de>
	<1391336650.15093.23.camel@hastur.hellion.org.uk>
	<20140202102852.GA9984@aepfle.de>
	<1391337376.15093.24.camel@hastur.hellion.org.uk>
	<20140206171248.5143e7b3@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] tools/gdbsx: define format strings for
	aarch64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 17:12 -0800, Mukesh Rathor wrote:
> On Sun, 2 Feb 2014 10:36:16 +0000
> Ian Campbell <ian.campbell@citrix.com> wrote:
> 
> > On Sun, 2014-02-02 at 11:28 +0100, Olaf Hering wrote:
> > > On Sun, Feb 02, Ian Campbell wrote:
> > > 
> > > > I suppose there is no harm in this, but is there any chance that
> > > > gdbsx would actually work on arm64 without significant actual
> > > > work going into it?
> > > 
> > > I have no idea. But I just noticed it's built only due to this line
> > > in our xen.spec:
> > > 
> > > make -C tools/debugger/gdbsx
> > > 
> > > Perhaps this should be guarded by a %ifarch x86_64.
> > 
> > I suspect that right now this would be wise (maybe i386 too?) --
> > hopefully Mukesh can advise.
> > 
> > Ian.
> 
> It should just work if you can quickly implement the arm
> equivalent of arch/x86/debug.c.

Thanks.

Looks like we'd also need to make a bunch of arch/x86/domctl.c generic
too, or implement the appropriate arm equivalents.

Not a 4.4 thing for sure; if someone wants gdbsx on arm then patches
for 4.5 would be welcome, as always.

>  If not, you'd need to "arch it out".
> 
> thanks
> mukesh
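
The %ifarch guard Olaf suggests upthread might look like this in xen.spec (a sketch only; whether i386 should be included, as Ian wonders, is left open):

```spec
# Only build gdbsx on architectures where it can actually work today
%ifarch x86_64
make -C tools/debugger/gdbsx
%endif
```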



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:10:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:10:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiOc-00014o-Cv; Fri, 07 Feb 2014 10:10:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBiOa-00014X-MO
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:10:52 +0000
Received: from [85.158.137.68:13928] by server-3.bemta-3.messagelabs.com id
	A4/6B-14520-B21B4F25; Fri, 07 Feb 2014 10:10:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391767849!279431!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24420 invoked from network); 7 Feb 2014 10:10:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:10:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="100780893"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 10:10:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 05:10:48 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBiOW-0002En-NE;
	Fri, 07 Feb 2014 10:10:48 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 10:10:48 +0000
Message-ID: <1391767848-9633-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1391767621.2162.21.camel@kazak.uk.xensource.com>
References: <1391767621.2162.21.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] ts-kernel-build: make sure
	CONFIG_PACKET is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

CONFIG_PACKET is required by the DHCP client and is not enabled in the
arm multi_v7_defconfig.

Also stash the config file in the build results for easy reference; it is
already in kerndist.tar.gz, but that is a 30+M download compared with a few
tens of K.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 ts-kernel-build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/ts-kernel-build b/ts-kernel-build
index 96f6b74..742d2b4 100755
--- a/ts-kernel-build
+++ b/ts-kernel-build
@@ -185,6 +185,7 @@ setopt CONFIG_SYSVIPC y
 
 setopt CONFIG_BLK_DEV_LOOP y
 
+setopt CONFIG_PACKET y
 END
 sub stash_config_edscript ($) {
     my ($settings) = @_;
@@ -347,3 +348,4 @@ if ($r{tree_linuxfirmware}) {
 built_stash($ho, $builddir, 'dist', 'kerndist');
 built_stash_file($ho, $builddir, 'vmlinux', 'linux/vmlinux');
 built_compress_stashed('vmlinux');
+built_stash_file($ho, $builddir, 'config', 'linux/.config');
-- 
1.8.5.2
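
As an aside, the requirement this patch encodes can be sketched as a quick standalone check (file names and the helper below are hypothetical, not part of osstest):

```shell
# A minimal sketch: CONFIG_PACKET provides AF_PACKET sockets, which the
# DHCP client needs, so a built kernel .config must have it set to y (or m).
check_packet() {
    grep -q '^CONFIG_PACKET=[ym]$' "$1"
}

# Simulate a config fragment like the one setopt edits:
cfg=$(mktemp)
printf 'CONFIG_BLK_DEV_LOOP=y\nCONFIG_PACKET=y\n' > "$cfg"

if check_packet "$cfg"; then
    echo "CONFIG_PACKET enabled"
else
    echo "CONFIG_PACKET missing"
fi
rm -f "$cfg"
```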


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiS6-0001I0-3H; Fri, 07 Feb 2014 10:14:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBiS4-0001Hu-LL
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:14:28 +0000
Received: from [85.158.143.35:49436] by server-2.bemta-4.messagelabs.com id
	C2/2D-10891-402B4F25; Fri, 07 Feb 2014 10:14:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391768067!3869796!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19437 invoked from network); 7 Feb 2014 10:14:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 10:14:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 10:14:26 +0000
Message-Id: <52F4C00F020000780011A26B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 10:14:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <Aravind.Gopalakrishnan@amd.com>
References: <1391715192-1766-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1391715192-1766-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: andrew.cooper3@citrix.com, keir@xen.org, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V3] x86/AMD: Apply workaround for AMD F16h
 Erratum 792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 06.02.14 at 20:33, Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com> wrote:
> The workaround for the erratum will only be in BIOSes spun from
> Jan 2014 onwards, but initial production parts shipped in 2013.
> Since there is a coverage hole, we should carry this fix in
> software in case the BIOS does not do the right thing or someone
> is using an old BIOS.
> 
> Refer to the Revision Guide for AMD F16h models 00h-0fh, document 51810
> Rev. 3.04, November 2013, for details on the erratum.
> 
> Tested the patch on a Fam16h server platform; it works fine.
> 
> Changes in V2: (per Andrew.C comments)
> 	- Move pci_val into same scope
> 	- rework indentation to match linux style
> Changes in V3: (per Jan comments)
> 	- remove pci_val, use 'l' and 'h' instead
> 	- print warning message to hypervisor log
> 
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Applied after some more editing. Please double check.

Jan
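
For readers of the archive, the "use 'l' and 'h' instead" note in V3 suggests a read-modify-write of a register split into low and high 32-bit halves. A purely illustrative sketch of that shape — the register, bit position, and values here are made up, not taken from the patch or the revision guide:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit the BIOS is expected to have set already. */
#define ERRATUM_FIX_BIT (1u << 15)

/*
 * Apply an erratum-style fix to a 64-bit register held as two 32-bit
 * halves 'l' and 'h'. Returns 1 if we had to apply it ourselves (so the
 * caller can print a warning to the hypervisor log), 0 if the BIOS
 * already did the right thing.
 */
static int apply_erratum_fix(uint32_t *l, uint32_t *h)
{
    (void)h;                   /* high half untouched in this sketch */
    if (*l & ERRATUM_FIX_BIT)
        return 0;              /* BIOS already applied the fix */
    *l |= ERRATUM_FIX_BIT;     /* apply it ourselves */
    return 1;
}
```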


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:20:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiXb-0001s4-OH; Fri, 07 Feb 2014 10:20:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1WBiXS-0001rk-JH
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:20:03 +0000
Received: from [85.158.137.68:43025] by server-17.bemta-3.messagelabs.com id
	AC/8D-22569-153B4F25; Fri, 07 Feb 2014 10:20:01 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391768400!279077!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4626 invoked from network); 7 Feb 2014 10:20:01 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-6.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Feb 2014 10:20:01 -0000
Received: from [185.25.64.249] (helo=[10.80.2.80])
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1WBiXH-0000cc-Bu; Fri, 07 Feb 2014 10:19:51 +0000
Message-ID: <1391768389.2162.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Michael Tokarev <mjt@tls.msk.ru>
Date: Fri, 07 Feb 2014 10:19:49 +0000
In-Reply-To: <52F40718.4070904@msgid.tls.msk.ru>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
	<52F40718.4070904@msgid.tls.msk.ru>
X-Mailer: Evolution 3.4.4-3 
Mime-Version: 1.0
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 02:05 +0400, Michael Tokarev wrote:
> 07.02.2014 01:20, Moyer, Keith wrote:
> > Thank you all for your quick, helpful replies!
> > This was about to become a headache for us, so the identification of a fix so quickly is fantastic.
> > 
> > On Thu, 2014-02-06 at 02:31 -0600, Gerd Hoffmann wrote:
> >> That commit made seabios size (default qemu config, gcc 4.7+) jump 
> >> from 128k to 256k in size because the code didn't fit into 128k any more.
> > 
> > Makes sense.  Thanks for the explanation.
> > 
> > On Thu, 2014-02-06 at 02:37 -0600, Ian Campbell wrote:
> >> I think this was fixed in Xen with
> >> http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5f2875739beef3a75c7a7e8579b6cbcb464e61b3
> > 
> > I have confirmed that applying that patch to our Xen sources fixes the problem.
> > 
> > I've filed a bug with Debian's Xen package on this matter, with the hope they'll backport that commit.
> > http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=737905
> 
> Meanwhile I just uploaded another release of seabios to
> Debian archive which brings size of bios.bin back to 128Kb,
> by disabling XHCI (usb3.0) and PVSCSI -- 1.7.4-3.
> 
> So things should be working fine on debian side again now
> (at least on sid), even without patching xen.
> 
> And it looks like we should re-think what we remove from
> our (qemu) "stripped-down" bios - as it turns out, Xen
> actually uses bios from qemu unchanged...  But it is
> really really fragile - on my build (gcc 4.7), with 2
> options removed, 128Kb is 99.7% used ;)

Do the seabios packages also produce a non-stripped down seabios for
newer qemu? If so then I think the Xen package should just pick that one
up instead.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
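
The Debian trim discussed in this thread (dropping USB 3.0 and PVSCSI support to get bios.bin back to 128K) amounts to a SeaBIOS build-config change of roughly this shape; the option names below are my assumption from SeaBIOS's Kconfig, not quoted from the thread:

```
# Illustrative SeaBIOS .config fragment (option names are assumptions):
# CONFIG_USB_XHCI is not set
# CONFIG_PVSCSI is not set
```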

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:21:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiYx-00020m-D0; Fri, 07 Feb 2014 10:21:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mjt@tls.msk.ru>) id 1WBiYv-00020X-Kk
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:21:33 +0000
Received: from [85.158.143.35:43962] by server-2.bemta-4.messagelabs.com id
	29/FB-10891-CA3B4F25; Fri, 07 Feb 2014 10:21:32 +0000
X-Env-Sender: mjt@tls.msk.ru
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391768492!3854040!1
X-Originating-IP: [86.62.121.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22646 invoked from network); 7 Feb 2014 10:21:32 -0000
Received: from isrv.corpit.ru (HELO isrv.corpit.ru) (86.62.121.231)
	by server-5.tower-21.messagelabs.com with SMTP;
	7 Feb 2014 10:21:32 -0000
Received: from [192.168.88.2] (mjt.vpn.tls.msk.ru [192.168.177.99])
	by isrv.corpit.ru (Postfix) with ESMTP id 2A9E740CC5;
	Fri,  7 Feb 2014 14:21:30 +0400 (MSK)
Message-ID: <52F4B3AA.5050403@msgid.tls.msk.ru>
Date: Fri, 07 Feb 2014 14:21:30 +0400
From: Michael Tokarev <mjt@tls.msk.ru>
Organization: Telecom Service, JSC
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ijc@hellion.org.uk>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
	<52F40718.4070904@msgid.tls.msk.ru>
	<1391768389.2162.26.camel@kazak.uk.xensource.com>
In-Reply-To: <1391768389.2162.26.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.5.1
OpenPGP: id=804465C5
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

07.02.2014 14:19, Ian Campbell wrote:
> On Fri, 2014-02-07 at 02:05 +0400, Michael Tokarev wrote:
[]
>> Meanwhile I just uploaded another release of seabios to the
>> Debian archive which brings the size of bios.bin back to 128 KiB,
>> by disabling XHCI (USB 3.0) and PVSCSI -- 1.7.4-3.
>>
>> So things should be working fine on the Debian side again now
>> (at least on sid), even without patching Xen.
>>
>> And it looks like we should re-think what we remove from
>> our (qemu) "stripped-down" bios - as it turns out, Xen
>> actually uses the bios from qemu unchanged...  But it is
>> really fragile - on my build (gcc 4.7), with those two
>> options removed, the 128 KiB image is 99.7% full ;)
> 
> Do the seabios packages also produce a non-stripped-down seabios for

Yes, of course.

> newer qemu? If so then I think the Xen package should just pick that one
> up instead.

Sure it should, and hopefully it will in the near future.  But I also
want to support pain-free upgrades, so it is better if old Xen works
with new SeaBIOS at least for some time.

Thanks,

/mjt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:24:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBic9-0002G7-2m; Fri, 07 Feb 2014 10:24:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBic6-0002Fx-Rm
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 10:24:51 +0000
Received: from [85.158.139.211:65219] by server-13.bemta-5.messagelabs.com id
	8D/B7-18801-274B4F25; Fri, 07 Feb 2014 10:24:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391768689!2344837!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16125 invoked from network); 7 Feb 2014 10:24:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 10:24:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 10:24:49 +0000
Message-Id: <52F4C280020000780011A294@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 10:24:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
	<52F4A4F7020000780011A102@nat28.tlf.novell.com>
	<20140207083234.GA17978@aepfle.de>
	<52F4B1A7020000780011A14B@nat28.tlf.novell.com>
	<20140207100733.GA1958@aepfle.de>
In-Reply-To: <20140207100733.GA1958@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 11:07, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Feb 07, Jan Beulich wrote:
>> >>> On 07.02.14 at 09:32, Olaf Hering <olaf@aepfle.de> wrote:
>> > In my opinion dom0 is just a child of Xen, which should follow the rules
>> > of the parent. If Xen is configured to have its console on serial then
>> > the default for dom0 should be to follow just that. Apparently it's just
>> > a matter of correctly using xvc0.
>> > 
>> > I'm not sure what the gain would be in having Xen on serial and dom0
>> > somewhere else, and requiring a console= command-line option to
>> > point dom0 at serial as well. That's just doing things twice.
>> 
>> That's a fair point, but it leaves aside the case of Xen _not_ using
>> the serial console. Dom0 has no way to know, and hence would
>> still push output there, not knowing that it ends up nowhere.
> 
> You mean that no Xen console= option implies that dom0 writes nowhere? I
> would think dom0 will use the graphics card in that case to send its
> output.

"No" to the question. Dom0, without any console= and xencons=,
always writes _both_ ways (unless - under EFI only - it detects that
there's RAM at address 0xA0000), no matter whether this actually
ends up being visible anywhere.

>> Also the "follow the rules of the parent" already doesn't apply for
>> the VGA console case, where Dom0 makes its own decision too
>> (and it's for that reason that Xen needs to stop sending data to
>> the VGA in order to not interfere). Hence I'm not sure that
>> argument really counts.
> 
> The details of driving a particular piece of hardware don't really matter.
> I think the important part is "goes to the wire" vs. "goes to the monitor".
> 
> I think the bug is that register_console("xvc") is called without a
> preceding add_preferred_console(), which with current kernels means a
> second entry in /proc/consoles. This in turn lets systemd spawn a login
> for that.

Yes, I agree that there likely needs to be another change here.
I'm just not certain yet which way this should be done.
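The fix Olaf is pointing at, registering the Xen console only after marking
it preferred, would look roughly like the following pseudocode sketch in
kernel-C style (function name xencons_init and the console struct name are
illustrative, not the actual linux-2.6.18 xencons code; the two console-layer
calls themselves are the real kernel API):

```c
/* Pseudocode sketch, not the actual linux-2.6.18 xencons code.
 * The point: calling add_preferred_console() before register_console()
 * marks "xvc0" as a preferred console, so it does not appear as an
 * extra entry in /proc/consoles that systemd would spawn a login on. */
static int __init xencons_init(void)
{
    /* name, index, options: declare xvc0 preferred up front */
    add_preferred_console("xvc", 0, NULL);

    /* now register the driver's struct console as usual */
    register_console(&xencons_console);
    return 0;
}
```

Whether this should be unconditional or depend on how Xen itself is
configured is exactly the open question in the thread above.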

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:38:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBipF-0002ri-NT; Fri, 07 Feb 2014 10:38:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBipE-0002rd-1j
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:38:24 +0000
Received: from [85.158.139.211:58197] by server-15.bemta-5.messagelabs.com id
	A9/CC-24395-F97B4F25; Fri, 07 Feb 2014 10:38:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391769502!2349976!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29252 invoked from network); 7 Feb 2014 10:38:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 10:38:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 10:38:22 +0000
Message-Id: <52F4C5AB020000780011A2A3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 10:38:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Elena Ufimtseva" <ufimtseva@gmail.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
	<d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
In-Reply-To: <d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 00:59, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On February 6, 2014 5:17:44 PM EST, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
>>As far as I can see, there is no support for hugepages in Xen for PV
>>kernels. Is this planned for the future, or is there really no need to
>>have it implemented?
> 
> It was implemented. The UEK2 kernel (Oracle's) and SLES (I think) have
> this implemented.

No, neither SLES nor openSUSE ever did (I guess it's tmem you're
mixing this up with) - the hypervisor-side implementation just
doesn't look ready for anything other than experimental use.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:39:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiq5-0003FL-6f; Fri, 07 Feb 2014 10:39:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBiq3-0003FA-6d
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:39:15 +0000
Received: from [85.158.137.68:12511] by server-14.bemta-3.messagelabs.com id
	CA/F3-08196-2D7B4F25; Fri, 07 Feb 2014 10:39:14 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391769551!284979!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29272 invoked from network); 7 Feb 2014 10:39:13 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:39:13 -0000
Received: by mail-pb0-f41.google.com with SMTP id up15so3085554pbc.14
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 02:39:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=Rovd9zq7GVlrERq5T+KaxTCrllmoJBMXCBsBzq/6poM=;
	b=Ll4lpC3QC4obg73zpGtzIXLQPwwMzwR2cfU/kKvGmJd4I49XE//Ljc3prW4/qVVJnX
	wHj8NNE1xTtsFloRhfL75NC/ajfLb8kBkeIAfmbdRhzjFyiVKYMzdr3chK92yedlmjyP
	YM+kxHcnNfNc1nYTLzt904rP5n+QZAoi7RF+5o/XrEPyYOVBdi5VtMMHOJlu8kpOvqHK
	wtxieWeKX552Ws8DXoAi3QM+E0dlLTvQjYGYrMflxJ5Epmh8UimgRKtbMjg/fs5NJwOL
	EyzOr4QE/H9e6fbFupXMgeY1UpgNYB4kGwPCP2yJbdEGglpU8E2V78fiOLkreRVarmlv
	OSXw==
X-Gm-Message-State: ALoCoQnxsjELrn5OhbLb8R0PM0LUpiAF9tODL5b9bRtZxwgroFmlPFwmp2P3T2VhIZZ9M8x42DLZ
X-Received: by 10.66.163.164 with SMTP id yj4mr6882350pab.91.1391769550688;
	Fri, 07 Feb 2014 02:39:10 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id ja8sm12321206pbd.3.2014.02.07.02.39.07
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 02:39:10 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 16:08:58 +0530
Message-Id: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory clobbering issues
	during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the memory-clobbering issue mentioned by Julien Grall
in my earlier patch.
Ref:
http://www.gossamer-threads.com/lists/xen/devel/316247

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/arm64/vfp.c |   70 ++++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index c09cf0c..62f56a3 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
-                 "stp q2, q3, [%0, #16 * 2]\n\t"
-                 "stp q4, q5, [%0, #16 * 4]\n\t"
-                 "stp q6, q7, [%0, #16 * 6]\n\t"
-                 "stp q8, q9, [%0, #16 * 8]\n\t"
-                 "stp q10, q11, [%0, #16 * 10]\n\t"
-                 "stp q12, q13, [%0, #16 * 12]\n\t"
-                 "stp q14, q15, [%0, #16 * 14]\n\t"
-                 "stp q16, q17, [%0, #16 * 16]\n\t"
-                 "stp q18, q19, [%0, #16 * 18]\n\t"
-                 "stp q20, q21, [%0, #16 * 20]\n\t"
-                 "stp q22, q23, [%0, #16 * 22]\n\t"
-                 "stp q24, q25, [%0, #16 * 24]\n\t"
-                 "stp q26, q27, [%0, #16 * 26]\n\t"
-                 "stp q28, q29, [%0, #16 * 28]\n\t"
-                 "stp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                 "stp q2, q3, [%1, #16 * 2]\n\t"
+                 "stp q4, q5, [%1, #16 * 4]\n\t"
+                 "stp q6, q7, [%1, #16 * 6]\n\t"
+                 "stp q8, q9, [%1, #16 * 8]\n\t"
+                 "stp q10, q11, [%1, #16 * 10]\n\t"
+                 "stp q12, q13, [%1, #16 * 12]\n\t"
+                 "stp q14, q15, [%1, #16 * 14]\n\t"
+                 "stp q16, q17, [%1, #16 * 16]\n\t"
+                 "stp q18, q19, [%1, #16 * 18]\n\t"
+                 "stp q20, q21, [%1, #16 * 20]\n\t"
+                 "stp q22, q23, [%1, #16 * 22]\n\t"
+                 "stp q24, q25, [%1, #16 * 24]\n\t"
+                 "stp q26, q27, [%1, #16 * 26]\n\t"
+                 "stp q28, q29, [%1, #16 * 28]\n\t"
+                 "stp q30, q31, [%1, #16 * 30]\n\t"
+                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
+                 : "memory");
 
     v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
@@ -36,23 +37,24 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
-                 "ldp q2, q3, [%0, #16 * 2]\n\t"
-                 "ldp q4, q5, [%0, #16 * 4]\n\t"
-                 "ldp q6, q7, [%0, #16 * 6]\n\t"
-                 "ldp q8, q9, [%0, #16 * 8]\n\t"
-                 "ldp q10, q11, [%0, #16 * 10]\n\t"
-                 "ldp q12, q13, [%0, #16 * 12]\n\t"
-                 "ldp q14, q15, [%0, #16 * 14]\n\t"
-                 "ldp q16, q17, [%0, #16 * 16]\n\t"
-                 "ldp q18, q19, [%0, #16 * 18]\n\t"
-                 "ldp q20, q21, [%0, #16 * 20]\n\t"
-                 "ldp q22, q23, [%0, #16 * 22]\n\t"
-                 "ldp q24, q25, [%0, #16 * 24]\n\t"
-                 "ldp q26, q27, [%0, #16 * 26]\n\t"
-                 "ldp q28, q29, [%0, #16 * 28]\n\t"
-                 "ldp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                 "ldp q2, q3, [%1, #16 * 2]\n\t"
+                 "ldp q4, q5, [%1, #16 * 4]\n\t"
+                 "ldp q6, q7, [%1, #16 * 6]\n\t"
+                 "ldp q8, q9, [%1, #16 * 8]\n\t"
+                 "ldp q10, q11, [%1, #16 * 10]\n\t"
+                 "ldp q12, q13, [%1, #16 * 12]\n\t"
+                 "ldp q14, q15, [%1, #16 * 14]\n\t"
+                 "ldp q16, q17, [%1, #16 * 16]\n\t"
+                 "ldp q18, q19, [%1, #16 * 18]\n\t"
+                 "ldp q20, q21, [%1, #16 * 20]\n\t"
+                 "ldp q22, q23, [%1, #16 * 22]\n\t"
+                 "ldp q24, q25, [%1, #16 * 24]\n\t"
+                 "ldp q26, q27, [%1, #16 * 26]\n\t"
+                 "ldp q28, q29, [%1, #16 * 28]\n\t"
+                 "ldp q30, q31, [%1, #16 * 30]\n\t"
+                 :: "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs)
+                 : "memory");
 
     WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:39:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiq5-0003FL-6f; Fri, 07 Feb 2014 10:39:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBiq3-0003FA-6d
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:39:15 +0000
Received: from [85.158.137.68:12511] by server-14.bemta-3.messagelabs.com id
	CA/F3-08196-2D7B4F25; Fri, 07 Feb 2014 10:39:14 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391769551!284979!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29272 invoked from network); 7 Feb 2014 10:39:13 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:39:13 -0000
Received: by mail-pb0-f41.google.com with SMTP id up15so3085554pbc.14
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 02:39:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=Rovd9zq7GVlrERq5T+KaxTCrllmoJBMXCBsBzq/6poM=;
	b=Ll4lpC3QC4obg73zpGtzIXLQPwwMzwR2cfU/kKvGmJd4I49XE//Ljc3prW4/qVVJnX
	wHj8NNE1xTtsFloRhfL75NC/ajfLb8kBkeIAfmbdRhzjFyiVKYMzdr3chK92yedlmjyP
	YM+kxHcnNfNc1nYTLzt904rP5n+QZAoi7RF+5o/XrEPyYOVBdi5VtMMHOJlu8kpOvqHK
	wtxieWeKX552Ws8DXoAi3QM+E0dlLTvQjYGYrMflxJ5Epmh8UimgRKtbMjg/fs5NJwOL
	EyzOr4QE/H9e6fbFupXMgeY1UpgNYB4kGwPCP2yJbdEGglpU8E2V78fiOLkreRVarmlv
	OSXw==
X-Gm-Message-State: ALoCoQnxsjELrn5OhbLb8R0PM0LUpiAF9tODL5b9bRtZxwgroFmlPFwmp2P3T2VhIZZ9M8x42DLZ
X-Received: by 10.66.163.164 with SMTP id yj4mr6882350pab.91.1391769550688;
	Fri, 07 Feb 2014 02:39:10 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id ja8sm12321206pbd.3.2014.02.07.02.39.07
	for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 02:39:10 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 16:08:58 +0530
Message-Id: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, patches@apm.com, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory clobbering issues
	during VFP save/restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the memory clobbering issue mentioned by Julien Grall
in my earlier patch -
Ref:
http://www.gossamer-threads.com/lists/xen/devel/316247

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/arm64/vfp.c |   70 ++++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index c09cf0c..62f56a3 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
-                 "stp q2, q3, [%0, #16 * 2]\n\t"
-                 "stp q4, q5, [%0, #16 * 4]\n\t"
-                 "stp q6, q7, [%0, #16 * 6]\n\t"
-                 "stp q8, q9, [%0, #16 * 8]\n\t"
-                 "stp q10, q11, [%0, #16 * 10]\n\t"
-                 "stp q12, q13, [%0, #16 * 12]\n\t"
-                 "stp q14, q15, [%0, #16 * 14]\n\t"
-                 "stp q16, q17, [%0, #16 * 16]\n\t"
-                 "stp q18, q19, [%0, #16 * 18]\n\t"
-                 "stp q20, q21, [%0, #16 * 20]\n\t"
-                 "stp q22, q23, [%0, #16 * 22]\n\t"
-                 "stp q24, q25, [%0, #16 * 24]\n\t"
-                 "stp q26, q27, [%0, #16 * 26]\n\t"
-                 "stp q28, q29, [%0, #16 * 28]\n\t"
-                 "stp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                 "stp q2, q3, [%1, #16 * 2]\n\t"
+                 "stp q4, q5, [%1, #16 * 4]\n\t"
+                 "stp q6, q7, [%1, #16 * 6]\n\t"
+                 "stp q8, q9, [%1, #16 * 8]\n\t"
+                 "stp q10, q11, [%1, #16 * 10]\n\t"
+                 "stp q12, q13, [%1, #16 * 12]\n\t"
+                 "stp q14, q15, [%1, #16 * 14]\n\t"
+                 "stp q16, q17, [%1, #16 * 16]\n\t"
+                 "stp q18, q19, [%1, #16 * 18]\n\t"
+                 "stp q20, q21, [%1, #16 * 20]\n\t"
+                 "stp q22, q23, [%1, #16 * 22]\n\t"
+                 "stp q24, q25, [%1, #16 * 24]\n\t"
+                 "stp q26, q27, [%1, #16 * 26]\n\t"
+                 "stp q28, q29, [%1, #16 * 28]\n\t"
+                 "stp q30, q31, [%1, #16 * 30]\n\t"
+                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
+                 : "memory");
 
     v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
@@ -36,23 +37,24 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
-                 "ldp q2, q3, [%0, #16 * 2]\n\t"
-                 "ldp q4, q5, [%0, #16 * 4]\n\t"
-                 "ldp q6, q7, [%0, #16 * 6]\n\t"
-                 "ldp q8, q9, [%0, #16 * 8]\n\t"
-                 "ldp q10, q11, [%0, #16 * 10]\n\t"
-                 "ldp q12, q13, [%0, #16 * 12]\n\t"
-                 "ldp q14, q15, [%0, #16 * 14]\n\t"
-                 "ldp q16, q17, [%0, #16 * 16]\n\t"
-                 "ldp q18, q19, [%0, #16 * 18]\n\t"
-                 "ldp q20, q21, [%0, #16 * 20]\n\t"
-                 "ldp q22, q23, [%0, #16 * 22]\n\t"
-                 "ldp q24, q25, [%0, #16 * 24]\n\t"
-                 "ldp q26, q27, [%0, #16 * 26]\n\t"
-                 "ldp q28, q29, [%0, #16 * 28]\n\t"
-                 "ldp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                 "ldp q2, q3, [%1, #16 * 2]\n\t"
+                 "ldp q4, q5, [%1, #16 * 4]\n\t"
+                 "ldp q6, q7, [%1, #16 * 6]\n\t"
+                 "ldp q8, q9, [%1, #16 * 8]\n\t"
+                 "ldp q10, q11, [%1, #16 * 10]\n\t"
+                 "ldp q12, q13, [%1, #16 * 12]\n\t"
+                 "ldp q14, q15, [%1, #16 * 14]\n\t"
+                 "ldp q16, q17, [%1, #16 * 16]\n\t"
+                 "ldp q18, q19, [%1, #16 * 18]\n\t"
+                 "ldp q20, q21, [%1, #16 * 20]\n\t"
+                 "ldp q22, q23, [%1, #16 * 22]\n\t"
+                 "ldp q24, q25, [%1, #16 * 24]\n\t"
+                 "ldp q26, q27, [%1, #16 * 26]\n\t"
+                 "ldp q28, q29, [%1, #16 * 28]\n\t"
+                 "ldp q30, q31, [%1, #16 * 30]\n\t"
+                 :: "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs)
+                 : "memory");
 
     WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:41:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBisT-0003PX-Po; Fri, 07 Feb 2014 10:41:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBisS-0003PR-NT
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:41:44 +0000
Received: from [85.158.139.211:19165] by server-14.bemta-5.messagelabs.com id
	BC/88-27598-768B4F25; Fri, 07 Feb 2014 10:41:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391769703!2338268!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23610 invoked from network); 7 Feb 2014 10:41:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 10:41:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 10:41:43 +0000
Message-Id: <52F4C674020000780011A2A6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 10:41:40 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Elena Ufimtseva" <ufimtseva@gmail.com>
References: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
	<d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
	<CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
In-Reply-To: <CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 01:12, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
> Ok, I see. Well, I would think then that the kernel then should not
> provide any possibility to use it? Otherwise there are oopses and some
> other unpleasant things happening when trying to use huge pages. I am
> not sure, maybe if its clearly states that there is no hugetlb support
> for pv, then there is no need for this?

One of the downsides of pv-ops: you can't hide functionality
known not to work under Xen at kernel configuration time, and
hence need to find ways to suppress its use at run time when it
was enabled for the sake of running on bare hardware.

Btw., if you're after the underlying code in the hypervisor,
look for the uses of "opt_allow_superpage".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:42:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiso-0003SH-7b; Fri, 07 Feb 2014 10:42:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBism-0003Rj-0t
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 10:42:04 +0000
Received: from [193.109.254.147:22992] by server-8.bemta-14.messagelabs.com id
	22/8C-18529-B78B4F25; Fri, 07 Feb 2014 10:42:03 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391769720!2696545!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31082 invoked from network); 7 Feb 2014 10:42:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:42:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208,217";a="98928595"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 10:41:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 05:41:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBisa-0001qN-Tq;
	Fri, 07 Feb 2014 10:41:52 +0000
Message-ID: <52F4B870.4020804@citrix.com>
Date: Fri, 7 Feb 2014 10:41:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
In-Reply-To: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8920811466116811251=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8920811466116811251==
Content-Type: multipart/alternative;
	boundary="------------010505030901030900070002"

--------------010505030901030900070002
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 07/02/14 09:21, Jan Beulich wrote:
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt is usually not working.
>
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
>
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Eric Houby <ehouby@yahoo.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>      const struct acpi_ivrs_header *ivrs_block;
>      unsigned long length;
>      unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>      int error = 0;
>  
>      BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>      /* Each IO-APIC must have been mentioned in the table. */
>      for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>      {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>              continue;
>  
>          if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>          }
>      }
>  
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>      return error;
>  }
>  
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------010505030901030900070002--


--===============8920811466116811251==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8920811466116811251==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 10:42:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiso-0003SH-7b; Fri, 07 Feb 2014 10:42:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBism-0003Rj-0t
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 10:42:04 +0000
Received: from [193.109.254.147:22992] by server-8.bemta-14.messagelabs.com id
	22/8C-18529-B78B4F25; Fri, 07 Feb 2014 10:42:03 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391769720!2696545!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31082 invoked from network); 7 Feb 2014 10:42:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:42:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208,217";a="98928595"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 10:41:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 05:41:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBisa-0001qN-Tq;
	Fri, 07 Feb 2014 10:41:52 +0000
Message-ID: <52F4B870.4020804@citrix.com>
Date: Fri, 7 Feb 2014 10:41:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
In-Reply-To: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8920811466116811251=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8920811466116811251==
Content-Type: multipart/alternative;
	boundary="------------010505030901030900070002"

--------------010505030901030900070002
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 07/02/14 09:21, Jan Beulich wrote:
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt is usually not working.
>
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
>
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Eric Houby <ehouby@yahoo.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>      const struct acpi_ivrs_header *ivrs_block;
>      unsigned long length;
>      unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>      int error = 0;
>  
>      BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>      /* Each IO-APIC must have been mentioned in the table. */
>      for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>      {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>              continue;
>  
>          if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>          }
>      }
>  
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>      return error;
>  }
>  
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Fri Feb 07 10:42:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBiso-0003SW-Kk; Fri, 07 Feb 2014 10:42:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <samuel.thibault@ens-lyon.org>) id 1WBisn-0003S0-Kg
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:42:05 +0000
Received: from [193.109.254.147:23188] by server-12.bemta-14.messagelabs.com
	id F7/EF-17220-C78B4F25; Fri, 07 Feb 2014 10:42:04 +0000
X-Env-Sender: samuel.thibault@ens-lyon.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391769724!2696577!1
X-Originating-IP: [192.134.164.83]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31648 invoked from network); 7 Feb 2014 10:42:04 -0000
Received: from mail2-relais-roc.national.inria.fr (HELO
	mail2-relais-roc.national.inria.fr) (192.134.164.83)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:42:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384297200"; d="scan'208";a="57295757"
Received: from unknown (HELO type.ipv6) ([193.50.110.152])
	by mail2-relais-roc.national.inria.fr with ESMTP/TLS/DHE-RSA-AES128-SHA;
	07 Feb 2014 11:42:03 +0100
Received: from samy by type.ipv6 with local (Exim 4.82)
	(envelope-from <samuel.thibault@ens-lyon.org>)
	id 1WBisl-0002lS-NN; Fri, 07 Feb 2014 11:42:03 +0100
Date: Fri, 7 Feb 2014 11:42:03 +0100
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140207104203.GA8198@type.bordeaux.inria.fr>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, monaka@monami-ya.jp,
	xen-devel@lists.xen.org
References: <CAH4w_GwwVDucnUFA1pH2kpYfSg6dZErs-Lu3Y9--fcLBEELL+Q@mail.gmail.com>
	<1391423287.10515.32.camel@kazak.uk.xensource.com>
	<20140207003433.GK5465@type.youpi.perso.aquilenet.fr>
	<1391767323.2162.18.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391767323.2162.18.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.21+34 (58baf7c9f32f) (2010-12-30)
Cc: monaka@monami-ya.jp, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG]Cannot build mini-os-x86_64-c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell wrote on Fri, 07 Feb 2014 10:02:03 +0000:
> > > I'm not sure what would then include the tests at
> > > that point though.
> >
> > IIRC, by building mini-os by itself (without any stubdom part, i.e. no
> > libc)
>
> I thought that but I wasn't able to confirm by inspection of the
> Makefile etc. It's not a big deal IMHO, if someone cares they will fix
> it...

One has to run "make CONFIG_TEST=y" to get it.

> > > If Samuel agrees with the patch
> >
> > Yes.  I wonder how that didn't pose a problem so far.
>
> I don't think we build the c stubdom by default -- maybe for this
> reason.

Ok.

Samuel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:43:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBitx-0003hv-EZ; Fri, 07 Feb 2014 10:43:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBitw-0003hi-IA
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:43:16 +0000
Received: from [85.158.137.68:32642] by server-6.bemta-3.messagelabs.com id
	CA/A4-09180-3C8B4F25; Fri, 07 Feb 2014 10:43:15 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391769791!264326!1
X-Originating-IP: [209.85.216.49]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10925 invoked from network); 7 Feb 2014 10:43:12 -0000
Received: from mail-qa0-f49.google.com (HELO mail-qa0-f49.google.com)
	(209.85.216.49)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:43:12 -0000
Received: by mail-qa0-f49.google.com with SMTP id w8so4814802qac.36
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 02:43:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=TKViy+csbiW2u7CXAWbvBxicr19Ac4Ar4KlVji9orGw=;
	b=Az84OfSCEsrJQ+ELQIUPs/4EBlAb/A+UhE3XeVwX1ZNok7sVs83+0ybFiazBnPGewY
	/rt1mZ0yeCfJDTb12fhmoFPXAQ55irP7FBxivD6RZZfvaYn4UZxsF0eBbv1r2GKqWoR8
	bgNA4XsPAPhDVRcidjrrQCmlGrDEvaH1lGgdxPPbktrzIPyPtDS9LkMGA7hpMDq5D58I
	UqE5fYYD8zKJGmp82+N7+Qflv6eaaDY840Th9wTh+k5h+j4DGvMpRCrcrsPKe50IrZlZ
	Pjzv5y7br1nPJ+HbIy7hX47i3zKbS4c+aMbQS8uwqtjis70FD01BvGYB0g7jnfqLAbDC
	I9cA==
X-Gm-Message-State: ALoCoQk/jtSggMWR1SkuLO2TuGkL9lYb15Wt3iy1k9kqSLgwWMtBoI/+WUebhJZCpyGw7Xm1V7jM
MIME-Version: 1.0
X-Received: by 10.140.23.52 with SMTP id 49mr5125016qgo.17.1391769790825; Fri,
	07 Feb 2014 02:43:10 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Fri, 7 Feb 2014 02:43:10 -0800 (PST)
In-Reply-To: <CAAHg+HhiUuKTsysfhg=H7LmM=i0bLb=+5gBJnZ3bZnJbcQZsMw@mail.gmail.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
	<52F38938.2090500@linaro.org>
	<1391692303.25128.3.camel@kazak.uk.xensource.com>
	<CAAHg+HhiUuKTsysfhg=H7LmM=i0bLb=+5gBJnZ3bZnJbcQZsMw@mail.gmail.com>
Date: Fri, 7 Feb 2014 16:13:10 +0530
Message-ID: <CAAHg+Hi0Crpwxf1rQf+9+st=GH3NHtg2sTJDN=9ygSU3exxeBA@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>, Patch Tracking <patches@linaro.org>,
	Julien Grall <julien.grall@linaro.org>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian/ Julien,

On 7 February 2014 11:01, Pranavkumar Sawargaonkar
<pranavkumar@linaro.org> wrote:
> Hi Ian,
>
> On 6 February 2014 18:41, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> On Thu, 2014-02-06 at 13:08 +0000, Julien Grall wrote:
>>>
>>> On 06/02/14 12:57, Ian Campbell wrote:
>>> > On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
>>> >>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>>> >>
>>> >> I remember we had a discussion about the memory constraints when I
>>> >> implemented the vfp context switch for arm32
>>> >> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
>>> >>
>>> >> I think you should use "=Q" here as well, to avoid clobbering the whole memory.
>>> >
>>> > Yes, I forgot to say: I think getting something in now is the priority,
>>> > which is why I committed it, but this should be tightened up, probably
>>> > for 4.5 unless the difference is benchmarkable.
>>>
>>> The fix is very simple (a matter of a 2-line change). I would prefer
>>> to delay this patch for a couple of days and have a correct
>>> implementation from the beginning, so we will not forget to change the
>>> code for Xen 4.5.
>>
>> And I would rather close this rather large hole right now and not wait
>> for two more days when we are looking at doing what might be the final
>> rc soon.
>>
>> I had already applied before you said anything, so the point is moot.
>>
>> Anyway, if someone wants to submit for 4.4 with a case for a release
>> exception then I'm sure George will consider it.
>>
>> Otherwise this thread is now in my QUEUE-4.5 folder so I'll get a
>> reminder shortly after the release when I go through that.
>>
>>> Moreover, Pranav usually answers quickly :).
>>
>> If he's awake/at work, it's out of office hours for him now.
>
> :) Sorry, somehow I missed this yesterday.
>
> If "=Q" is really critical, I can quickly send you a new patch against
> your commit in the staging branch
> (http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=712eb2e04da2cbcd9908f74ebd47c6df60d6d12f)

Posted a new patch with the desired fix -
"xen: arm: arm64: Fix memory cloberring issues during VFP save restore."

>>
>> Ian.
>>
> Thanks,
> Pranav
Thanks,
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 10:53:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 10:53:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBj3H-0004NO-RB; Fri, 07 Feb 2014 10:52:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WBj3F-0004NJ-ST
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 10:52:54 +0000
Received: from [85.158.139.211:29311] by server-17.bemta-5.messagelabs.com id
	1D/80-31975-50BB4F25; Fri, 07 Feb 2014 10:52:53 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391770371!2326899!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25153 invoked from network); 7 Feb 2014 10:52:52 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 10:52:52 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so1449567eek.24
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 02:52:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=pE49Fur28MGH+IAHYavkZQ8OowxVRuRc7gN4eNzjNKI=;
	b=aZOVUvlENLJixJPbAMzUNUOkNOr9DYz0Rh7if8EJs/KDdRrCzd8YMT4K+DNRt7e/3/
	Rkhm3xk46ZA3HZsFHmcxo64RN9wS8mTHqcNl5WP2kGZqkNXaeYGBofQEgOzm024WoaUS
	FV1Q/RAWfsfp3jP6Yx1gGUJkRzlwx7cKgymduBmSdlb2d2DY3jB9iZdLVWeNYIKk80BI
	+aDhsJW3XFm3M7WHrRcf/S/5XXg3EaITb5FGYdbrjLGRkMHA9hWzGIFo5KXgBr2NzWnY
	rbZ4UIpFIus8GoJrFu1634781UOu+H/kL+IHlj1tn6fgNQgj56IkR+ILi6cviZ4pUcS3
	Z4sA==
X-Gm-Message-State: ALoCoQllLa8Csr+OtcAvoi2QCTaIdC5nHMP4tcoen0DvRVatIidzfCkfFx37FTsHSAjYo1FEMXvI
X-Received: by 10.14.127.200 with SMTP id d48mr15608995eei.9.1391770371560;
	Fri, 07 Feb 2014 02:52:51 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id f45sm14932218eeg.5.2014.02.07.02.52.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 02:52:50 -0800 (PST)
Message-ID: <52F4BB0A.8000007@m2r.biz>
Date: Fri, 07 Feb 2014 11:52:58 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1391528492.2441.26.camel@astar.houby.net>	
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>	
	<15510565929.20140205090647@eikelenboom.it>	
	<1391645864.2751.9.camel@astar.houby.net>	
	<1391660630.2751.12.camel@astar.houby.net>	
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
	<1391764139.9917.54.camel@Solace>
In-Reply-To: <1391764139.9917.54.camel@Solace>
Cc: ehouby@yahoo.com, xen@lists.fedoraproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/2014 10:08, Dario Faggioli wrote:
> On gio, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:
>> 2014-02-06 5:23 GMT+01:00 Eric Houby <ehouby@yahoo.com>:
>>          Is there a knob for qxl support?
>>          
>>          [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
>>          qemu-system-i386: -vga qxl: invalid option
>>
>>
>> Here is a patch that adds qxl support in libxl, updated to Xen
>> 4.4-rc3, if you want to add it:
>> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
>>
>> Or you can simply compile from this branch, already set up for
>> spice/qxl testing:
>> https://github.com/Fantu/Xen/commits/rebase/m2r-testing
>>
>>
>>
>> It is not upstream for now because something in Xen makes it not work
>> on Linux domUs with the qxl driver active, and it works with severe
>> performance problems on Windows domUs.
>>
> Right.
>
>> I spent several days without finding the exact problem to be solved :(
>> If you want, you can try it out and see if anything changes using
>> Fedora instead of Debian as dom0, different domU kernels, etc.
>> Maybe you could even find some new information/errors useful for
>> solving the problem.
>>
> Yep, that would help... let us know! :-)
>
> Anyway, first of all, sorry for my spice/qxl ignorance.
>
> What I just wanted to say is this: searching the wiki, all I found about
> spice and QXL is this section in the QEMU Upstream page:
> http://wiki.xen.org/wiki/QEMU_Upstream#SPICE_.2F_QXL
>
> Perhaps someone of you can double check whether the information there is
> still fresh and accurate enough? Perhaps it's also worth adding
> references to the patches mentioned above (with the proper disclaimer
> about the known issues, of course)?
>
> Also, maybe for the next DocsDay, this would be a nice one to have too:
> http://wiki.xen.org/wiki/Xen_Document_Days/TODO#Spice_Config_Example_for_upstream_QEMU
>
> If keen on doing any of the above that involves modifying the wiki, send
> me a note, and I think I can provide the necessary permissions.

The wiki should be updated; the qxl patch for the libxl part is complete
and correct and can be used for tests:
https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
The actual problem is outside of libxl; it is probably in hvmloader,
qemu and/or the kernel.
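
[Archive editor's note: for anyone reproducing these tests, a minimal xl
domain-config fragment along the lines of the QEMU Upstream wiki page
might look like the following. The spice option names are those of
xl.cfg in that era; vga = "qxl" only exists with the libxl patch above
applied, and values such as the port are illustrative.]

```
# illustrative HVM guest config fragment for spice/qxl testing
builder = "hvm"
device_model_version = "qemu-xen"
vga = "qxl"                    # requires the qxl libxl patch above
spice = 1
spicehost = "0.0.0.0"
spiceport = 6000
spicedisable_ticketing = 1     # no password; testing only
```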

The latest mail I sent about the qxl problem:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg00758.html

In short, in the latest tests I got:
- on a Windows 7 domU with xen_platform_pci=0, the poor video refresh
performance with qxl seems to be "solved", but unfortunately running
without PV drivers (needed for xl save/restore/shutdown) is a problem.
- on a Saucy (Ubuntu 13.10) domU with the "xen/pvhvm: If xen_platform_pci=0 is
set don't blow up" patch, even with xen_platform_pci=0 I got X.org at
100% CPU and a black screen. So there is probably another problem on
Linux domUs, kernel-side and/or in Xorg's qxl driver.
- on a Fedora 19 domU, comparing KVM and Xen hosts, the only difference
I found is the following error in /var/log/messages:
> ioremap error for 0xfc001000-0xfc002000, requested 0x10, got 0x0

There is also this from old tests:
> And in the xen hypervisor logs (via xl dmesg) the only difference
> between stdvga and qxl (same domU) is that the qxl log has 3 more
> "pci dev bar" lines.
This may be useful for understanding whether further hvmloader
modifications are needed. I don't have enough knowledge about it to say
anything certain.

If someone wants to help me, please reply if further tests/data are
needed and I'll post them.

Thanks for any reply.

>
> Thanks and Regards,
> Dario
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id f45sm14932218eeg.5.2014.02.07.02.52.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 02:52:50 -0800 (PST)
Message-ID: <52F4BB0A.8000007@m2r.biz>
Date: Fri, 07 Feb 2014 11:52:58 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1391528492.2441.26.camel@astar.houby.net>	
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>	
	<15510565929.20140205090647@eikelenboom.it>	
	<1391645864.2751.9.camel@astar.houby.net>	
	<1391660630.2751.12.camel@astar.houby.net>	
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
	<1391764139.9917.54.camel@Solace>
In-Reply-To: <1391764139.9917.54.camel@Solace>
Cc: ehouby@yahoo.com, xen@lists.fedoraproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/2014 10:08, Dario Faggioli wrote:
> On gio, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:
>> 2014-02-06 5:23 GMT+01:00 Eric Houby <ehouby@yahoo.com>:
>>          Is there a knob for qxl support?
>>          
>>          [root@xen ~]# cat /var/log/xen/qemu-dm-f20.log
>>          qemu-system-i386: -vga qxl: invalid option
>>
>>
>> Here is a patch that adds qxl support in libxl, updated to xen
>> 4.4-rc3, if you want to add it:
>> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
>>
>> Or you can simply compile from this branch, which is already set up
>> for spice/qxl testing:
>> https://github.com/Fantu/Xen/commits/rebase/m2r-testing
>>
>>
>>
>> It is not upstream for now because there is something in Xen that
>> makes it not work on Linux domUs with the qxl driver active, and
>> work with severe performance problems on Windows domUs.
>>
> Right.
>
>> I spent several days without finding the exact problem to be solved :(
>> If you want, you can try it out and see if anything changes using
>> Fedora instead of Debian as dom0, different domU kernels, etc.
>> Maybe you could even find some new information/errors useful for
>> solving the problem.
>> solving the problem.
>>
> Yep, that would help... let us know! :-)
>
> Anyway, first of all, sorry for my spice/qxl ignorance.
>
> What I just wanted to say is this: searching the wiki, all I found about
> spice and QXL is this section in the QEMU Upstream page:
> http://wiki.xen.org/wiki/QEMU_Upstream#SPICE_.2F_QXL
>
> Perhaps someone of you can double check whether the information there is
> still fresh and accurate enough? Perhaps it's also worth adding
> references to the patches mentioned above (with the proper disclaimer
> about the known issues, of course)?
>
> Also, maybe for the next DocsDay, this would be a nice one to have too:
> http://wiki.xen.org/wiki/Xen_Document_Days/TODO#Spice_Config_Example_for_upstream_QEMU
>
> If keen on doing any of the above that involves modifying the wiki, send
> me a note, and I think I can provide the necessary permissions.

The wiki should be updated; the qxl patch for the libxl part is complete 
and correct and can be used for tests:
https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
The actual problem is outside of libxl, probably in hvmloader, qemu 
and/or the kernel.

The latest mail I sent about the qxl problem:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg00758.html
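For anyone wanting to reproduce these tests, a domain config fragment for spice/qxl might look like the following; vga="qxl" is what the patch linked above adds, while the spice* option names follow xl.cfg conventions, so treat the whole fragment as an unverified sketch rather than a known-good config:

```
# HVM guest with qxl video and a spice display (sketch, not verified)
builder = "hvm"
vga = "qxl"                  # provided by the libxl qxl patch linked above
videoram = 128
spice = 1
spicehost = "0.0.0.0"
spiceport = 6000
spicedisable_ticketing = 1   # or set spicepasswd instead
```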

In short, in the latest tests I got:
- on a Windows 7 domU with xen_platform_pci=0 the poor video refresh 
performance with qxl seems to have been "solved", but unfortunately 
running without PV drivers (needed for xl save/restore/shutdown) is a 
problem.
- on a Saucy (Ubuntu 13.10) domU with the "xen/pvhvm: If 
xen_platform_pci=0 is set don't blow up" patch, even with 
xen_platform_pci=0 I got X.org at 100% CPU and a black screen. So there 
is probably another problem in Linux domUs, kernel-side and/or in 
Xorg's qxl driver.
- on a Fedora 19 domU, comparing kvm and xen hosts, the only difference 
I found is the following error in /var/log/messages:
> ioremap error for 0xfc001000-0xfc002000, requested 0x10, got 0x0

There is also this from older tests:
> And about xen hypervisor logs (with xl dmesg) the only difference 
> between stdvga and qxl (same domU) is that qxl log has 3 "pci dev bar" 
> more.
This may be useful to understand whether further hvmloader 
modifications are needed; I don't have enough knowledge about it to say 
anything certain.

If someone wants to help me, please reply if further tests/data are 
needed and I'll post them.

Thanks for any reply.

>
> Thanks and Regards,
> Dario
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:05:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:05:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBjFK-0004u0-BW; Fri, 07 Feb 2014 11:05:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBjFJ-0004tv-1t
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:05:21 +0000
Received: from [85.158.143.35:16622] by server-1.bemta-4.messagelabs.com id
	B9/39-31661-0FDB4F25; Fri, 07 Feb 2014 11:05:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391771119!3895588!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12499 invoked from network); 7 Feb 2014 11:05:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 11:05:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 11:05:18 +0000
Message-Id: <52F4CBFD020000780011A2F1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 11:05:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 01:32, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> +/* check for AMD MC4 extended MISC register presence */
> +static inline int amd_thresholding_reg_present(uint32_t msr)
> +{
> +    uint64_t val;
> +    rdmsr_safe(msr, val);

You ought to check the result of this operation, even if at present
it clears "val" on error.

I also wonder what good it does to repeatedly trigger #GP here
if we already once learned that there's no such register. IOW,
please store the fact that the register is absent in a static
variable (and no, this shouldn't be a per-CPU one - if the register
is missing on any pCPU, we must not try to access it anywhere, as
vCPU-s could end up running once here and once there; in the end
we assume consistency across the CPUs in a system anyway).

> +    if ( val & (AMD_MC4_MISC_VAL_MASK | AMD_MC4_MISC_CNTP_MASK) )
> +        return 1;
> +
> +    return 0;
> +}
> +
>  /* amd specific MCA MSR */
>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		v->arch.vmce.bank[1].mci_misc = val; 
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* ignore write: we do not emulate link and l3 cache errors
> -		 * to the guest.
> -		 */
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	default:
> -		return 0;
> -	}
> +    /* If not present, #GP fault, else do nothing as we don't emulate */
> +    if ( !amd_thresholding_reg_present(msr) )
> +        return -1;

The one thing I'm concerned about with making this #GP in the guest is
migration: With it being _newer_ CPUs implementing fewer of these
MSRs, it would be impossible to migrate a guest from an older system
to a newer one - a direction that (as long as the newer system
provides all the hardware capabilities the older one has) is generally
assumed to work. Bottom line - we're probably better off always
dropping writes, and always returning zero for reads. Which will
eliminate the need for amd_thresholding_reg_present().

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:29:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBjco-0005zA-Oz; Fri, 07 Feb 2014 11:29:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WBjcn-0005yH-9Q
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 11:29:37 +0000
Received: from [193.109.254.147:8198] by server-1.bemta-14.messagelabs.com id
	87/AA-15438-0A3C4F25; Fri, 07 Feb 2014 11:29:36 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391772574!2744339!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1839 invoked from network); 7 Feb 2014 11:29:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 11:29:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="98936790"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 11:29:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 06:29:33 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WBjcj-0002WW-OV;
	Fri, 07 Feb 2014 11:29:33 +0000
Message-ID: <52F4C39C.4090005@eu.citrix.com>
Date: Fri, 7 Feb 2014 11:29:32 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
X-DLP: MIA2
Cc: dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 0/4] flask: XSA-84 follow-ups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 09:41 AM, Jan Beulich wrote:
> 1: fix memory leaks
> 2: fix error propagation from flask_security_set_bool()
> 3: check permissions first thing in flask_security_set_bool()
> 4: add compat mode guest support
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Release-wise, I would think that 1-3 should certainly go in. While I'd
> like 4 to be in for 4.4 too, I realize that's a little more intrusive than
> one would want at this point.

I agree on 1-3:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Re #4: This is something we can pull in for 4.4.1, right?

If nobody has noticed it missing, it's not exactly a burning feature.  
If it's possible to wait for 4.4.1 I'd rather do so.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBjjX-0006CL-AX; Fri, 07 Feb 2014 11:36:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WBjjV-0006CG-Q4
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:36:34 +0000
Received: from [85.158.143.35:65381] by server-2.bemta-4.messagelabs.com id
	F0/00-10891-145C4F25; Fri, 07 Feb 2014 11:36:33 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391772992!3901144!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32488 invoked from network); 7 Feb 2014 11:36:32 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 11:36:32 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so2168021wes.25
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 03:36:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=KygshuFzkrCZRnKhjmcmCvzC894m0u/UgnJ/H31WeEI=;
	b=lWvsoWc8jzL4sz6JoOoTCfSVSHOMb97iOjYyLwFrRQNG9yqElNREHnLVODQ7PedFdg
	kDeuEHq0zZoWAY2aZsYJ4aUPKTyqqcuvtVbXs2PaFv3ErBai8AQaeLWoIQkHf7wNnKda
	rvIWCHgsod6hxgf4e0rmFzdBrADZwlhmVXbbYHyhKGSM/s/Vu4tM6UI+i2QOS7ncRN0Y
	yZgEjIL9t98dzYDVv2FU1z0HUf1uywpLnmB+qYmYarcb4ZDuKjHBHDFGKF4cQt717YIv
	q0e9RrZQE+yWKWTkga5+e7oOGKgN51gQJ3p93cnEFm7X0VdDKdoyP5Ek4ekhCgtKFj4B
	y+Yg==
MIME-Version: 1.0
X-Received: by 10.180.185.197 with SMTP id fe5mr3451064wic.56.1391772992232;
	Fri, 07 Feb 2014 03:36:32 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 7 Feb 2014 03:36:32 -0800 (PST)
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
Date: Fri, 7 Feb 2014 11:36:32 +0000
X-Google-Sender-Auth: H83OdX7GuVtslNJY0KkBnWhWCFg
Message-ID: <CAFLBxZa+DsobEgc+FEYn8=DMwuWPkWr-mQ3BiyYqFMo11o-kcw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 0/4 v3] xen/arm: fix guest builder cache
 cohenrency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 5, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> George, this should go into 4.4 -- there is a security aspect.

Sorry I missed this one -- in spite of obviously being in the 'to'
field up there, I'm pretty sure this never made it to my Citrix inbox
(which puts it in the 'priority' queue more or less).

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> Changes in v3:
>                 s/cacheflush_page/sync_page_to_ram/
>
>                 xc interface takes a length instead of an end
>
>                 make the domctl range inclusive.
>
>                 make xc interface internal -- it isn't needed from libxl
>                 in the current design and it is easier to expose an
>                 interface in the future than to hide it.
>
> Changes in v2:
>         Flush on page alloc and do targeted flushes at domain build time
>         rather than a big flush after domain build. This adds a new call
> to common code, which is stubbed out on x86. This avoids needing
>         to worry about preemptability of the new domctl and also catches
>         cases related to ballooning where things might not be flushed
>         (e.g. a guest scrubs a page but doesn't clean the cache)
>
> This has done 12000 boot loops on arm32 and 10000 on arm64.
>
> Given the security aspect I would like to put this in 4.4.
>
> Original blurb:
>
> On ARM we need to take care of cache coherency for guests which we have
> just built because they start with their caches disabled.
>
> Our current strategy for dealing with this, which is to make guest
> memory default to cacheable regardless of the in guest configuration
> (the HCR.DC bit), is flawed because it doesn't handle guests which
> enable their MMU before enabling their caches, which at least FreeBSD
> does. (NB: Setting HCR.DC while the guest MMU is enabled is
> UNPREDICTABLE, hence we must disable it when the guest turns its MMU
> on).
>
> There is also a security aspect here since the current strategy means
> that a guest which enables its MMU before its caches can potentially see
> unscrubbed data in RAM (because the scrubbed bytes are still held in the
> cache).
>
> As well as the new stuff this series removes the HCR.DC support and
> performs two purely cosmetic renames.
>
> Ian.
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBjjX-0006CL-AX; Fri, 07 Feb 2014 11:36:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WBjjV-0006CG-Q4
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:36:34 +0000
Received: from [85.158.143.35:65381] by server-2.bemta-4.messagelabs.com id
	F0/00-10891-145C4F25; Fri, 07 Feb 2014 11:36:33 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391772992!3901144!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32488 invoked from network); 7 Feb 2014 11:36:32 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 11:36:32 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so2168021wes.25
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 03:36:32 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=KygshuFzkrCZRnKhjmcmCvzC894m0u/UgnJ/H31WeEI=;
	b=lWvsoWc8jzL4sz6JoOoTCfSVSHOMb97iOjYyLwFrRQNG9yqElNREHnLVODQ7PedFdg
	kDeuEHq0zZoWAY2aZsYJ4aUPKTyqqcuvtVbXs2PaFv3ErBai8AQaeLWoIQkHf7wNnKda
	rvIWCHgsod6hxgf4e0rmFzdBrADZwlhmVXbbYHyhKGSM/s/Vu4tM6UI+i2QOS7ncRN0Y
	yZgEjIL9t98dzYDVv2FU1z0HUf1uywpLnmB+qYmYarcb4ZDuKjHBHDFGKF4cQt717YIv
	q0e9RrZQE+yWKWTkga5+e7oOGKgN51gQJ3p93cnEFm7X0VdDKdoyP5Ek4ekhCgtKFj4B
	y+Yg==
MIME-Version: 1.0
X-Received: by 10.180.185.197 with SMTP id fe5mr3451064wic.56.1391772992232;
	Fri, 07 Feb 2014 03:36:32 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 7 Feb 2014 03:36:32 -0800 (PST)
In-Reply-To: <1391616214.23098.9.camel@kazak.uk.xensource.com>
References: <1391616214.23098.9.camel@kazak.uk.xensource.com>
Date: Fri, 7 Feb 2014 11:36:32 +0000
X-Google-Sender-Auth: H83OdX7GuVtslNJY0KkBnWhWCFg
Message-ID: <CAFLBxZa+DsobEgc+FEYn8=DMwuWPkWr-mQ3BiyYqFMo11o-kcw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 0/4 v3] xen/arm: fix guest builder cache
 cohenrency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 5, 2014 at 4:03 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> George, this should go into 4.4 -- there is a security aspect.

Sorry I missed this one -- in spite of obviously being in the 'to'
field up there, I'm pretty sure this never made it to my Citrix inbox
(which puts it in the 'priority' queue more or less).

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> Changes in v3:
>                 s/cacheflush_page/sync_page_to_ram/
>
>                 xc interface takes a length instead of an end
>
>                 make the domctl range inclusive.
>
>                 make xc interface internal -- it isn't needed from libxl
>                 in the current design and it is easier to expose an
>                 interface in the future than to hide it.
>
> Changes in v2:
>         Flush on page alloc and do targeted flushes at domain build time
>         rather than a big flush after domain build. This adds a new call
>         to common code, which is stubbed out on x86. This avoid needing
>         to worry about preemptability of the new domctl and also catches
>         cases related to ballooning where things might not be flushed
>         (e.g. a guest scrubs a page but doesn't clean the cache)
>
> This has done 12000 boot loops on arm32 and 10000 on arm64.
>
> Given the security aspect I would like to put this in 4.4.
>
> Original blurb:
>
> On ARM we need to take care of cache coherency for guests which we have
> just built because they start with their caches disabled.
>
> Our current strategy for dealing with this, which is to make guest
> memory default to cacheable regardless of the in guest configuration
> (the HCR.DC bit), is flawed because it doesn't handle guests which
> enable their MMU before enabling their caches, which at least FreeBSD
> does. (NB: Setting HCR.DC while the guest MMU is enabled is
> UNPREDICTABLE, hence we must disable it when the guest turns its MMU
> one).
>
> There is also a security aspect here since the current strategy means
> that a guest which enables its MMU before its caches can potentially see
> unscrubbed data in RAM (because the scrubbed bytes are still held in the
> cache).
>
> As well as the new stuff this series removes the HCR.DC support and
> performs two purely cosmetic renames.
>
> Ian.
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
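[Editorial sketch of the v2 design quoted above. The names here
(sync_page_to_ram_sketch, clean_new_page, PAGE_SIZE_SKETCH) are
illustrative, not Xen's real interfaces: the idea is to clean each page
to RAM at allocation time rather than performing one huge flush after
the domain is built, which sidesteps domctl preemption concerns and also
covers ballooning, where a guest-scrubbed page may still sit dirty in
the cache.]

```c
#include <stddef.h>

#define PAGE_SIZE_SKETCH 4096

/* On arm this would clean the page's cache lines to RAM (e.g. a
 * dc cvac loop); on x86 it is a no-op stub, matching the series'
 * "stubbed out on x86" note. Hypothetical name, not Xen's API. */
static void sync_page_to_ram_sketch(void *page, size_t bytes)
{
    (void)page;
    (void)bytes;
}

/* Called from the page allocator: guarantee a freshly allocated page's
 * contents have reached RAM before a cache-disabled guest can see it.
 * Returns the page unchanged (NULL allocations pass through). */
static void *clean_new_page(void *page)
{
    if ( page )
        sync_page_to_ram_sketch(page, PAGE_SIZE_SKETCH);
    return page;
}
```

Because the flush happens per page at allocation, no big post-build
flush (and no preemption logic for it) is needed.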

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:39:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBjmI-0006cW-GU; Fri, 07 Feb 2014 11:39:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBjmG-0006cO-6O
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:39:24 +0000
Received: from [85.158.137.68:12652] by server-1.bemta-3.messagelabs.com id
	75/FA-17293-BE5C4F25; Fri, 07 Feb 2014 11:39:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391773161!306893!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4764 invoked from network); 7 Feb 2014 11:39:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 11:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,799,1384300800"; d="scan'208";a="100795242"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 11:39:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	06:39:19 -0500
Message-ID: <1391773158.2162.81.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date: Fri, 7 Feb 2014 11:39:18 +0000
In-Reply-To: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
References: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org,
	George Dunlap <george.dunlap@eu.citrix.com>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 16:08 +0530, Pranavkumar Sawargaonkar wrote:
> This patch addresses memory cloberring issue mentioed by Julien Grall
> with my earlier patch -
> Ref:
> http://www.gossamer-threads.com/lists/xen/devel/316247
> 
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>  xen/arch/arm/arm64/vfp.c |   70 ++++++++++++++++++++++++----------------------
>  1 file changed, 36 insertions(+), 34 deletions(-)
> 
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index c09cf0c..62f56a3 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
>      if ( !cpu_has_fp )
>          return;
>  
> -    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
> -                 "stp q2, q3, [%0, #16 * 2]\n\t"
> -                 "stp q4, q5, [%0, #16 * 4]\n\t"
> -                 "stp q6, q7, [%0, #16 * 6]\n\t"
> -                 "stp q8, q9, [%0, #16 * 8]\n\t"
> -                 "stp q10, q11, [%0, #16 * 10]\n\t"
> -                 "stp q12, q13, [%0, #16 * 12]\n\t"
> -                 "stp q14, q15, [%0, #16 * 14]\n\t"
> -                 "stp q16, q17, [%0, #16 * 16]\n\t"
> -                 "stp q18, q19, [%0, #16 * 18]\n\t"
> -                 "stp q20, q21, [%0, #16 * 20]\n\t"
> -                 "stp q22, q23, [%0, #16 * 22]\n\t"
> -                 "stp q24, q25, [%0, #16 * 24]\n\t"
> -                 "stp q26, q27, [%0, #16 * 26]\n\t"
> -                 "stp q28, q29, [%0, #16 * 28]\n\t"
> -                 "stp q30, q31, [%0, #16 * 30]\n\t"
> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> +    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> +                 "stp q2, q3, [%1, #16 * 2]\n\t"
> +                 "stp q4, q5, [%1, #16 * 4]\n\t"
> +                 "stp q6, q7, [%1, #16 * 6]\n\t"
> +                 "stp q8, q9, [%1, #16 * 8]\n\t"
> +                 "stp q10, q11, [%1, #16 * 10]\n\t"
> +                 "stp q12, q13, [%1, #16 * 12]\n\t"
> +                 "stp q14, q15, [%1, #16 * 14]\n\t"
> +                 "stp q16, q17, [%1, #16 * 16]\n\t"
> +                 "stp q18, q19, [%1, #16 * 18]\n\t"
> +                 "stp q20, q21, [%1, #16 * 20]\n\t"
> +                 "stp q22, q23, [%1, #16 * 22]\n\t"
> +                 "stp q24, q25, [%1, #16 * 24]\n\t"
> +                 "stp q26, q27, [%1, #16 * 26]\n\t"
> +                 "stp q28, q29, [%1, #16 * 28]\n\t"
> +                 "stp q30, q31, [%1, #16 * 30]\n\t"
> +                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
> +                 : "memory");

The point of this change was to be able to drop the memory clobbers.

George, I'd like to take this in 4.4 if possible -- I wanted to get the
baseline functionality fixed for 4.4 ASAP since it was quite a big hole
which is why I committed without waiting for this respin.

The issue is that the patch which was committed yesterday clobbers all
of memory and not just the bits the inline asm touches.
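[Editorial sketch of the constraint idiom under discussion; this is not
the committed Xen patch, and the names (save_regs_sketch, NREGS) are
illustrative. A `"memory"` clobber tells GCC the asm may touch any
memory, forcing conservative spills around it. A memory *output
operand* of array type instead names exactly the bytes written, so the
clobber can be dropped; note that a constraint like `"=Q" (*p)` on a
pointer covers only the first element, while casting to a
pointer-to-array covers the whole buffer. The asm template is left
empty to keep the sketch portable; on arm64 it would hold the stp
instructions.]

```c
#include <stdint.h>

#define NREGS 64  /* 32 Q registers, two 64-bit halves each */

static void save_regs_sketch(uint64_t *fpregs)
{
    __asm__ volatile(""  /* stp q0, q1, [%1, #16 * 0] ... on arm64 */
                     : "=m" (*(uint64_t (*)[NREGS])fpregs) /* whole array */
                     : "r" (fpregs));
    /* No "memory" clobber: the output operand already tells the
     * compiler precisely which memory this asm writes. */
}
```

With the operand scoped to the array, unrelated values can stay cached
in registers across the asm statement.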

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:40:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBjng-0006ja-QV; Fri, 07 Feb 2014 11:40:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1WBjne-0006jK-Sf
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:40:51 +0000
Received: from [85.158.139.211:34649] by server-8.bemta-5.messagelabs.com id
	4F/88-05298-146C4F25; Fri, 07 Feb 2014 11:40:49 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391773249!2323196!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24394 invoked from network); 7 Feb 2014 11:40:49 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-9.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	7 Feb 2014 11:40:49 -0000
Received: from [185.25.64.249] (helo=[10.80.2.80])
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1WBjnM-0001de-Du; Fri, 07 Feb 2014 11:40:32 +0000
Message-ID: <1391773231.2162.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Michael Tokarev <mjt@tls.msk.ru>
Date: Fri, 07 Feb 2014 11:40:31 +0000
In-Reply-To: <52F4B3AA.5050403@msgid.tls.msk.ru>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
	<52F40718.4070904@msgid.tls.msk.ru>
	<1391768389.2162.26.camel@kazak.uk.xensource.com>
	<52F4B3AA.5050403@msgid.tls.msk.ru>
X-Mailer: Evolution 3.4.4-3 
Mime-Version: 1.0
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 14:21 +0400, Michael Tokarev wrote:
> 07.02.2014 14:19, Ian Campbell wrote:
> > On Fri, 2014-02-07 at 02:05 +0400, Michael Tokarev wrote:
> []
> >> Meanwhile I just uploaded another release of seabios to
> >> Debian archive which brings size of bios.bin back to 128Kb,
> >> by disabling XCHI (usb3.0) and PVSCSI -- 1.7.4-3.
> >>
> >> So things should be working fine on debian side again now
> >> (at least on sid), even without patching xen.
> >>
> >> And it looks like we should re-think what we remove from
> >> our (qemu) "stripped-down" bios - as it turns out, Xen
> >> actually uses bios from qemu unchanged...  But it is
> >> really really fragile - on my build (gcc 4.7), with 2
> >> options removed, 128Kb is 99.7% used ;)
> > 
> > Do the seabios packages also produce a non-stripped down seabios for
> 
> Yes, ofcourse.
> 
> > newer qemu? If so then I think the Xen package should just pick that one
> > up instead.
> 
> Sure it should and hopefully will in a near future.  But I also want
> to support pain-free upgrades, so it is better if old xen works with
> new seabios at least for some time.

OK.

Note that the SeaBIOS image is baked into Xen at build time, not picked
up at runtime like with (I think) qemu.


Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 11:54:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBk0e-0007Ib-7J; Fri, 07 Feb 2014 11:54:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBk0d-0007IW-6E
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:54:15 +0000
Received: from [85.158.137.68:6080] by server-4.bemta-3.messagelabs.com id
	5E/64-11750-669C4F25; Fri, 07 Feb 2014 11:54:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391774053!306309!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27796 invoked from network); 7 Feb 2014 11:54:13 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 11:54:13 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so2198113wgh.19
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 03:54:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
From xen-devel-bounces@lists.xen.org Fri Feb 07 11:54:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 11:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBk0e-0007Ib-7J; Fri, 07 Feb 2014 11:54:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBk0d-0007IW-6E
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 11:54:15 +0000
Received: from [85.158.137.68:6080] by server-4.bemta-3.messagelabs.com id
	5E/64-11750-669C4F25; Fri, 07 Feb 2014 11:54:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391774053!306309!1
X-Originating-IP: [74.125.82.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27796 invoked from network); 7 Feb 2014 11:54:13 -0000
Received: from mail-wg0-f52.google.com (HELO mail-wg0-f52.google.com)
	(74.125.82.52)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 11:54:13 -0000
Received: by mail-wg0-f52.google.com with SMTP id b13so2198113wgh.19
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 03:54:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Ia7VDSBRLpHGtrBSZL6UKrVq/5n8lMRnzgGHr0lsD1k=;
	b=aPbwWkmt3wcPR3IKAL+WwhNwMcJhEkyKfY9LtZ2QQW3vcU33MGfugl6ENggPBtwV/B
	SgUYRWIuAFXkNGoklB8zGIR/BIE+s2uozWr+sp3IEOQM15XVV/Xv55CcwrcX7kAke7tG
	QvL7dHgyrt6psY9zyRu+pKX6tMuJe7Le4yVBA2g7GeqMj9ukb1Bw3rvCNm8yCKLi8JgL
	uk1zqwvKrd4GwI2rBHQzyuFe3y/hARVte/MfSaKt78CdzZPYqDbfeTsvwLD+s0C1zMvF
	2NkiqS8NUhBenYR+016UQKPvKlAXIeYU4mH+rlRtb7nC1wsfslOfKo3QRsePQGf+mFP9
	BUpA==
X-Gm-Message-State: ALoCoQkBamMDMsxED9a/Q/DhFaeiCQNvGifuS1DVrlZkbbZX1P0Unt/ULR3FmTltnW+Qq6A2JFin
X-Received: by 10.180.12.14 with SMTP id u14mr3673432wib.0.1391774052993;
	Fri, 07 Feb 2014 03:54:12 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id u6sm7530940wif.6.2014.02.07.03.54.11
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 03:54:12 -0800 (PST)
Message-ID: <52F4C963.2030401@linaro.org>
Date: Fri, 07 Feb 2014 11:54:11 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>, xen-devel@lists.xen.org
References: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
In-Reply-To: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
Cc: stefano.stabellini@citrix.com, patches@apm.com, ian.campbell@citrix.com,
	Anup Patel <anup.patel@linaro.org>, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Thanks for sending the patch quickly.

On 07/02/14 10:38, Pranavkumar Sawargaonkar wrote:
> This patch addresses memory cloberring issue mentioed by Julien Grall

clobbering mentioned.

> with my earlier patch -
> Ref:
> http://www.gossamer-threads.com/lists/xen/devel/316247

Can you add the commit id?

>
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> ---
>   xen/arch/arm/arm64/vfp.c |   70 ++++++++++++++++++++++++----------------------
>   1 file changed, 36 insertions(+), 34 deletions(-)
>
> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> index c09cf0c..62f56a3 100644
> --- a/xen/arch/arm/arm64/vfp.c
> +++ b/xen/arch/arm/arm64/vfp.c
> @@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
>       if ( !cpu_has_fp )
>           return;
>
> -    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
> -                 "stp q2, q3, [%0, #16 * 2]\n\t"
> -                 "stp q4, q5, [%0, #16 * 4]\n\t"
> -                 "stp q6, q7, [%0, #16 * 6]\n\t"
> -                 "stp q8, q9, [%0, #16 * 8]\n\t"
> -                 "stp q10, q11, [%0, #16 * 10]\n\t"
> -                 "stp q12, q13, [%0, #16 * 12]\n\t"
> -                 "stp q14, q15, [%0, #16 * 14]\n\t"
> -                 "stp q16, q17, [%0, #16 * 16]\n\t"
> -                 "stp q18, q19, [%0, #16 * 18]\n\t"
> -                 "stp q20, q21, [%0, #16 * 20]\n\t"
> -                 "stp q22, q23, [%0, #16 * 22]\n\t"
> -                 "stp q24, q25, [%0, #16 * 24]\n\t"
> -                 "stp q26, q27, [%0, #16 * 26]\n\t"
> -                 "stp q28, q29, [%0, #16 * 28]\n\t"
> -                 "stp q30, q31, [%0, #16 * 30]\n\t"
> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> +    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> +                 "stp q2, q3, [%1, #16 * 2]\n\t"
> +                 "stp q4, q5, [%1, #16 * 4]\n\t"
> +                 "stp q6, q7, [%1, #16 * 6]\n\t"
> +                 "stp q8, q9, [%1, #16 * 8]\n\t"
> +                 "stp q10, q11, [%1, #16 * 10]\n\t"
> +                 "stp q12, q13, [%1, #16 * 12]\n\t"
> +                 "stp q14, q15, [%1, #16 * 14]\n\t"
> +                 "stp q16, q17, [%1, #16 * 16]\n\t"
> +                 "stp q18, q19, [%1, #16 * 18]\n\t"
> +                 "stp q20, q21, [%1, #16 * 20]\n\t"
> +                 "stp q22, q23, [%1, #16 * 22]\n\t"
> +                 "stp q24, q25, [%1, #16 * 24]\n\t"
> +                 "stp q26, q27, [%1, #16 * 26]\n\t"
> +                 "stp q28, q29, [%1, #16 * 28]\n\t"
> +                 "stp q30, q31, [%1, #16 * 30]\n\t"
> +                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
> +                 : "memory");

You no longer need to clobber the whole memory; the "memory" clobber can 
be removed.
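
For reference, a minimal portable sketch (illustrative only, not the Xen 
code; names are made up) of why a memory operand makes the blanket 
"memory" clobber unnecessary:

```c
#include <assert.h>

/* Toy stand-in for the VFP register block. */
struct fpregs { unsigned long long q[4]; };

/* An asm with a memory operand ("+m" here; the AArch64 patch uses
 * "=Q"/"Q", which is the same idea restricted to a single base
 * register) tells the compiler exactly which object the asm reads or
 * writes, so accesses to that object are ordered correctly without
 * treating all of memory as clobbered. The instruction template is
 * left empty so this compiles on any target; in vfp.c it would be the
 * stp/ldp sequence. */
static void save_sketch(struct fpregs *regs)
{
    __asm__ __volatile__("" : "+m" (*regs) : "r" (regs));
}
```

With only an "r" input and no memory operand, the compiler cannot see 
that the asm touches *regs at all, which is why the original code needed 
the "memory" clobber.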

>
>       v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>       v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
> @@ -36,23 +37,24 @@ void vfp_restore_state(struct vcpu *v)
>       if ( !cpu_has_fp )
>           return;
>
> -    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
> -                 "ldp q2, q3, [%0, #16 * 2]\n\t"
> -                 "ldp q4, q5, [%0, #16 * 4]\n\t"
> -                 "ldp q6, q7, [%0, #16 * 6]\n\t"
> -                 "ldp q8, q9, [%0, #16 * 8]\n\t"
> -                 "ldp q10, q11, [%0, #16 * 10]\n\t"
> -                 "ldp q12, q13, [%0, #16 * 12]\n\t"
> -                 "ldp q14, q15, [%0, #16 * 14]\n\t"
> -                 "ldp q16, q17, [%0, #16 * 16]\n\t"
> -                 "ldp q18, q19, [%0, #16 * 18]\n\t"
> -                 "ldp q20, q21, [%0, #16 * 20]\n\t"
> -                 "ldp q22, q23, [%0, #16 * 22]\n\t"
> -                 "ldp q24, q25, [%0, #16 * 24]\n\t"
> -                 "ldp q26, q27, [%0, #16 * 26]\n\t"
> -                 "ldp q28, q29, [%0, #16 * 28]\n\t"
> -                 "ldp q30, q31, [%0, #16 * 30]\n\t"
> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> +    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
> +                 "ldp q2, q3, [%1, #16 * 2]\n\t"
> +                 "ldp q4, q5, [%1, #16 * 4]\n\t"
> +                 "ldp q6, q7, [%1, #16 * 6]\n\t"
> +                 "ldp q8, q9, [%1, #16 * 8]\n\t"
> +                 "ldp q10, q11, [%1, #16 * 10]\n\t"
> +                 "ldp q12, q13, [%1, #16 * 12]\n\t"
> +                 "ldp q14, q15, [%1, #16 * 14]\n\t"
> +                 "ldp q16, q17, [%1, #16 * 16]\n\t"
> +                 "ldp q18, q19, [%1, #16 * 18]\n\t"
> +                 "ldp q20, q21, [%1, #16 * 20]\n\t"
> +                 "ldp q22, q23, [%1, #16 * 22]\n\t"
> +                 "ldp q24, q25, [%1, #16 * 24]\n\t"
> +                 "ldp q26, q27, [%1, #16 * 26]\n\t"
> +                 "ldp q28, q29, [%1, #16 * 28]\n\t"
> +                 "ldp q30, q31, [%1, #16 * 30]\n\t"
> +                 :: "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs)
> +                 : "memory");

Same here.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:12:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:12:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIM-0008I6-5l; Fri, 07 Feb 2014 12:12:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIK-0008I1-Qu
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:12:33 +0000
Received: from [85.158.143.35:9013] by server-1.bemta-4.messagelabs.com id
	03/BD-31661-0BDC4F25; Fri, 07 Feb 2014 12:12:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391775149!3913445!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17445 invoked from network); 7 Feb 2014 12:12:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:12:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="100802122"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	07:12:20 -0500
Message-ID: <1391775139.2162.88.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>
Date: Fri, 7 Feb 2014 12:12:19 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Tim
	Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Julien Grall <julien.grall@citrix.com>, Stefano
	Stabellini <stefano.stabellini@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0/5 v4] xen/arm: fix guest builder cache
 cohenrency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George gave a release ack to v3.

The last patch here is new and renames the cache flushing functions to
make it clearer what they do (they clean but do not invalidate, which
caught me out). In principle this could wait for 4.5, but given it is a
pure rename I think it might as well go in along with the rest, so that
is what I intend to do unless George objects.

Both 32 and 64 bit have survived ~10,000 boots with this version.

Changes in v4:
        make sure to actually invalidate the cache, not just clean it

        rename existing cache flush functions to avoid catching me out
        that way again.

        switch to using a start + length in the domctl interface

Changes in v3:
        s/cacheflush_page/sync_page_to_ram/

        xc interface takes a length instead of an end

        make the domctl range inclusive.

        make xc interface internal -- it isn't needed from libxl
        in the current design and it is easier to expose an
        interface in the future than to hide it.

Changes in v2:
        Flush on page alloc and do targeted flushes at domain build time
        rather than a big flush after domain build. This adds a new call
        to common code, which is stubbed out on x86. This avoids needing
        to worry about preemptability of the new domctl and also catches
        cases related to ballooning where things might not be flushed
        (e.g. a guest scrubs a page but doesn't clean the cache).

This has done 12000 boot loops on arm32 and 10000 on arm64.

Given the security aspect I would like to put this in 4.4.

Original blurb:

On ARM we need to take care of cache coherency for guests which we have
just built because they start with their caches disabled.

Our current strategy for dealing with this, which is to make guest
memory default to cacheable regardless of the in guest configuration
(the HCR.DC bit), is flawed because it doesn't handle guests which
enable their MMU before enabling their caches, which at least FreeBSD
does. (NB: Setting HCR.DC while the guest MMU is enabled is
UNPREDICTABLE, hence we must disable it when the guest turns its MMU
on.)

There is also a security aspect here since the current strategy means
that a guest which enables its MMU before its caches can potentially see
unscrubbed data in RAM (because the scrubbed bytes are still held in the
cache).
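
The failure mode above can be sketched with a toy model (illustrative 
only, not Xen code): RAM plus a single write-back cache line, showing 
why a page scrubbed through the cache still exposes stale RAM to a 
guest that boots with its caches disabled, until the line is cleaned 
(written back):

```c
#include <assert.h>
#include <string.h>

#define LINE 16

static unsigned char ram[LINE];
static struct { int valid, dirty; unsigned char data[LINE]; } cache;

/* Write-allocate: the new value lives only in the cache line. */
static void cached_write(int i, unsigned char v)
{
    if (!cache.valid) { memcpy(cache.data, ram, LINE); cache.valid = 1; }
    cache.data[i] = v;
    cache.dirty = 1;
}

/* Analogue of a "clean": write dirty data back to RAM. */
static void clean_line(void)
{
    if (cache.valid && cache.dirty) {
        memcpy(ram, cache.data, LINE);
        cache.dirty = 0;
    }
}

/* Analogue of "invalidate": drop the line entirely. */
static void invalidate_line(void) { cache.valid = cache.dirty = 0; }

/* A guest running with caches disabled reads RAM directly. */
static unsigned char uncached_read(int i) { return ram[i]; }
```

Scrubbing via cached_write() leaves the old bytes visible to 
uncached_read() until clean_line() runs, which is the hole the series 
closes; the v4 change is that the line is invalidated as well as 
cleaned.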

As well as the new stuff this series removes the HCR.DC support and
performs two purely cosmetic renames.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIo-0008Jo-PX; Fri, 07 Feb 2014 12:13:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIm-0008JT-Vb
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:01 +0000
Received: from [85.158.139.211:15387] by server-17.bemta-5.messagelabs.com id
	38/71-31975-CCDC4F25; Fri, 07 Feb 2014 12:13:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391775177!2373221!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24129 invoked from network); 7 Feb 2014 12:12:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:12:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945532"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIi-0002qV-Pg;
	Fri, 07 Feb 2014 12:12:56 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:52 +0000
Message-ID: <1391775176-30313-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 1/5] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function hasn't been only about creating for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIp-0008KB-5j; Fri, 07 Feb 2014 12:13:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIn-0008JX-DN
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:01 +0000
Received: from [85.158.139.211:64803] by server-9.bemta-5.messagelabs.com id
	CE/AB-11237-CCDC4F25; Fri, 07 Feb 2014 12:13:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391775177!2373221!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24244 invoked from network); 7 Feb 2014 12:12:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:12:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945533"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIi-0002qV-To;
	Fri, 07 Feb 2014 12:12:56 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:53 +0000
Message-ID: <1391775176-30313-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 2/5] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This field has uses other than during relinquish, so rename it for clarity.

This is a pure rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfns in a
+     * preemptible manner this is updated to track where to resume
+     * the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIo-0008Jo-PX; Fri, 07 Feb 2014 12:13:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIm-0008JT-Vb
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:01 +0000
Received: from [85.158.139.211:15387] by server-17.bemta-5.messagelabs.com id
	38/71-31975-CCDC4F25; Fri, 07 Feb 2014 12:13:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391775177!2373221!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24129 invoked from network); 7 Feb 2014 12:12:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:12:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945532"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIi-0002qV-Pg;
	Fri, 07 Feb 2014 12:12:56 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:52 +0000
Message-ID: <1391775176-30313-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 1/5] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function has not been solely about creating entries for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIq-0008LC-L4; Fri, 07 Feb 2014 12:13:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIo-0008Jl-DC
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:02 +0000
Received: from [193.109.254.147:30792] by server-1.bemta-14.messagelabs.com id
	C4/6A-15438-DCDC4F25; Fri, 07 Feb 2014 12:13:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391775179!2754574!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30170 invoked from network); 7 Feb 2014 12:13:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:13:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945535"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIj-0002qV-6c;
	Fri, 07 Feb 2014 12:12:57 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:56 +0000
Message-ID: <1391775176-30313-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 5/5] xen: arm: correct terminology for cache
	flush macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The term "flush" is slightly ambiguous. The correct ARM term for this
operation is clean, as opposed to clean+invalidate, for which we also now
have a function.

This is a pure rename, no functional change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
This could easily be left for 4.5.
---
 xen/arch/arm/guestcopy.c         |    2 +-
 xen/arch/arm/kernel.c            |    2 +-
 xen/arch/arm/mm.c                |   16 ++++++++--------
 xen/arch/arm/smpboot.c           |    2 +-
 xen/include/asm-arm/arm32/page.h |    2 +-
 xen/include/asm-arm/arm64/page.h |    2 +-
 xen/include/asm-arm/page.h       |   10 +++++-----
 7 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index bd0a355..af0af6b 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -24,7 +24,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         p += offset;
         memcpy(p, from, size);
         if ( flush_dcache )
-            flush_xen_dcache_va_range(p, size);
+            clean_xen_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
         len -= size;
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..1e3107d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -58,7 +58,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 
         set_fixmap(FIXMAP_MISC, p, attrindx);
         memcpy(dst, src + s, l);
-        flush_xen_dcache_va_range(dst, l);
+        clean_xen_dcache_va_range(dst, l);
 
         paddr += l;
         dst += l;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d2cfe64..4c5cff0 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -480,13 +480,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Clear the copy of the boot pagetables. Each secondary CPU
      * rebuilds these itself (see head.S) */
     memset(boot_pgtable, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_pgtable);
+    clean_xen_dcache(boot_pgtable);
 #ifdef CONFIG_ARM_64
     memset(boot_first, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_first);
+    clean_xen_dcache(boot_first);
 #endif
     memset(boot_second, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_second);
+    clean_xen_dcache(boot_second);
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < LPAE_ENTRIES; i++ )
@@ -524,7 +524,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 
     /* Make sure it is clear */
     memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-    flush_xen_dcache_va_range(this_cpu(xen_dommap),
+    clean_xen_dcache_va_range(this_cpu(xen_dommap),
                               DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 #endif
 }
@@ -535,7 +535,7 @@ int init_secondary_pagetables(int cpu)
     /* Set init_ttbr for this CPU coming up. All CPUs share a single set of
      * pagetables, but rewrite it each time for consistency with 32 bit. */
     init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
     return 0;
 }
 #else
@@ -570,15 +570,15 @@ int init_secondary_pagetables(int cpu)
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
 
-    flush_xen_dcache_va_range(first, PAGE_SIZE);
-    flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
+    clean_xen_dcache_va_range(first, PAGE_SIZE);
+    clean_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 
     per_cpu(xen_pgtable, cpu) = first;
     per_cpu(xen_dommap, cpu) = domheap;
 
     /* Set init_ttbr for this CPU coming up */
     init_ttbr = __pa(first);
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
 
     return 0;
 }
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index c53c765..a829957 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -378,7 +378,7 @@ int __cpu_up(unsigned int cpu)
 
     /* Open the gate for this CPU */
     smp_up_cpu = cpu_logical_map(cpu);
-    flush_xen_dcache(smp_up_cpu);
+    clean_xen_dcache(smp_up_cpu);
 
     rc = arch_cpu_up(cpu);
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cb6add4..b8221ca 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -20,7 +20,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+#define __clean_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index baf8903..3352821 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -15,7 +15,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+#define __clean_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 67d64c9..a577942 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -229,26 +229,26 @@ extern size_t cacheline_bytes;
 /* Function for flushing medium-sized areas.
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes. */
-static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
+static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
 {
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        asm volatile (__flush_xen_dcache_one(0) : : "r" (p));
+        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }
 
 /* Macro for flushing a single small item.  The predicate is always
  * compile-time constant so this will compile down to 3 instructions in
  * the common case. */
-#define flush_xen_dcache(x) do {                                        \
+#define clean_xen_dcache(x) do {                                        \
     typeof(x) *_p = &(x);                                               \
     if ( sizeof(x) > MIN_CACHELINE_BYTES || sizeof(x) > alignof(x) )    \
-        flush_xen_dcache_va_range(_p, sizeof(x));                       \
+        clean_xen_dcache_va_range(_p, sizeof(x));                       \
     else                                                                \
         asm volatile (                                                  \
             "dsb sy;"   /* Finish all earlier writes */                 \
-            __flush_xen_dcache_one(0)                                   \
+            __clean_xen_dcache_one(0)                                   \
             "dsb sy;"   /* Finish flush before continuing */            \
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIq-0008LC-L4; Fri, 07 Feb 2014 12:13:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIo-0008Jl-DC
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:02 +0000
Received: from [193.109.254.147:30792] by server-1.bemta-14.messagelabs.com id
	C4/6A-15438-DCDC4F25; Fri, 07 Feb 2014 12:13:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391775179!2754574!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30170 invoked from network); 7 Feb 2014 12:13:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:13:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945535"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIj-0002qV-6c;
	Fri, 07 Feb 2014 12:12:57 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:56 +0000
Message-ID: <1391775176-30313-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 5/5] xen: arm: correct terminology for cache
	flush macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The term "flush" is slightly ambiguous. The correct ARM term for this
operation is clean, as opposed to clean+invalidate, for which we also now
have a function.

This is a pure rename, no functional change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
This could easily be left for 4.5.
---
 xen/arch/arm/guestcopy.c         |    2 +-
 xen/arch/arm/kernel.c            |    2 +-
 xen/arch/arm/mm.c                |   16 ++++++++--------
 xen/arch/arm/smpboot.c           |    2 +-
 xen/include/asm-arm/arm32/page.h |    2 +-
 xen/include/asm-arm/arm64/page.h |    2 +-
 xen/include/asm-arm/page.h       |   10 +++++-----
 7 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index bd0a355..af0af6b 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -24,7 +24,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         p += offset;
         memcpy(p, from, size);
         if ( flush_dcache )
-            flush_xen_dcache_va_range(p, size);
+            clean_xen_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
         len -= size;
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..1e3107d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -58,7 +58,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 
         set_fixmap(FIXMAP_MISC, p, attrindx);
         memcpy(dst, src + s, l);
-        flush_xen_dcache_va_range(dst, l);
+        clean_xen_dcache_va_range(dst, l);
 
         paddr += l;
         dst += l;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index d2cfe64..4c5cff0 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -480,13 +480,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Clear the copy of the boot pagetables. Each secondary CPU
      * rebuilds these itself (see head.S) */
     memset(boot_pgtable, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_pgtable);
+    clean_xen_dcache(boot_pgtable);
 #ifdef CONFIG_ARM_64
     memset(boot_first, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_first);
+    clean_xen_dcache(boot_first);
 #endif
     memset(boot_second, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_second);
+    clean_xen_dcache(boot_second);
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < LPAE_ENTRIES; i++ )
@@ -524,7 +524,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 
     /* Make sure it is clear */
     memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-    flush_xen_dcache_va_range(this_cpu(xen_dommap),
+    clean_xen_dcache_va_range(this_cpu(xen_dommap),
                               DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 #endif
 }
@@ -535,7 +535,7 @@ int init_secondary_pagetables(int cpu)
     /* Set init_ttbr for this CPU coming up. All CPus share a single setof
      * pagetables, but rewrite it each time for consistency with 32 bit. */
     init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
     return 0;
 }
 #else
@@ -570,15 +570,15 @@ int init_secondary_pagetables(int cpu)
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
 
-    flush_xen_dcache_va_range(first, PAGE_SIZE);
-    flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
+    clean_xen_dcache_va_range(first, PAGE_SIZE);
+    clean_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 
     per_cpu(xen_pgtable, cpu) = first;
     per_cpu(xen_dommap, cpu) = domheap;
 
     /* Set init_ttbr for this CPU coming up */
     init_ttbr = __pa(first);
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
 
     return 0;
 }
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index c53c765..a829957 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -378,7 +378,7 @@ int __cpu_up(unsigned int cpu)
 
     /* Open the gate for this CPU */
     smp_up_cpu = cpu_logical_map(cpu);
-    flush_xen_dcache(smp_up_cpu);
+    clean_xen_dcache(smp_up_cpu);
 
     rc = arch_cpu_up(cpu);
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cb6add4..b8221ca 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -20,7 +20,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+#define __clean_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index baf8903..3352821 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -15,7 +15,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+#define __clean_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 67d64c9..a577942 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -229,26 +229,26 @@ extern size_t cacheline_bytes;
 /* Function for flushing medium-sized areas.
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes. */
-static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
+static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
 {
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        asm volatile (__flush_xen_dcache_one(0) : : "r" (p));
+        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }
 
 /* Macro for flushing a single small item.  The predicate is always
  * compile-time constant so this will compile down to 3 instructions in
  * the common case. */
-#define flush_xen_dcache(x) do {                                        \
+#define clean_xen_dcache(x) do {                                        \
     typeof(x) *_p = &(x);                                               \
     if ( sizeof(x) > MIN_CACHELINE_BYTES || sizeof(x) > alignof(x) )    \
-        flush_xen_dcache_va_range(_p, sizeof(x));                       \
+        clean_xen_dcache_va_range(_p, sizeof(x));                       \
     else                                                                \
         asm volatile (                                                  \
             "dsb sy;"   /* Finish all earlier writes */                 \
-            __flush_xen_dcache_one(0)                                   \
+            __clean_xen_dcache_one(0)                                   \
             "dsb sy;"   /* Finish flush before continuing */            \
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIr-0008Ln-2o; Fri, 07 Feb 2014 12:13:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIp-0008K4-DY
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:03 +0000
Received: from [193.109.254.147:8886] by server-1.bemta-14.messagelabs.com id
	9B/6A-15438-ECDC4F25; Fri, 07 Feb 2014 12:13:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391775179!2754574!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30345 invoked from network); 7 Feb 2014 12:13:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:13:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945536"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIj-0002qV-1d;
	Fri, 07 Feb 2014 12:12:57 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:54 +0000
Message-ID: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled and so we need to make sure
they see consistent data in RAM (requiring a cache clean) but also that they
do not have old stale data suddenly appear in the caches when they enable
their caches (requiring the invalidate).

This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time,
since that would miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the page content). We need to
clean as well as invalidate to make sure that any scrubbing which has occurred
gets committed to real RAM. To achieve this, add a new sync_page_to_ram
function, which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
--
v4: introduce a function to clean and invalidate as intended

    make the domctl take a length not an end.

v3:
    s/cacheflush_page/sync_page_to_ram/

    xc interface takes a length instead of an end

    make the domctl range inclusive.

    make xc interface internal -- it isn't needed from libxl in the current
    design and it is easier to expose an interface in the future than to hide
    it.

v2:
   Switch to cleaning at page allocation time + explicit flushing of the
   regions which the toolstack touches.

   Add XSM for new domctl.

   New domctl restricts the amount of space it is willing to flush, to avoid
   thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    2 ++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xc_private.h            |    3 +++
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |   12 ++++++++++++
 xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/arm32/page.h    |    4 ++++
 xen/include/asm-arm/arm64/page.h    |    4 ++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |    3 +++
 xen/xsm/flask/policy/access_vectors |    2 ++
 17 files changed, 112 insertions(+)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3d4d107 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
+     * enabled its caches. But lets be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
+
     return 0;
 }
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..b9d1015 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..f10ec01 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.nr_pfns = nr_pfns;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..33ed15b 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 92271c9..a610f0c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
+
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..ffb68da 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
+
+        if ( e < s )
+            return -EINVAL;
+
+        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2f48347..d2cfe64 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -338,6 +338,18 @@ unsigned long domain_page_map_to_mfn(const void *va)
 }
 #endif
 
+void sync_page_to_ram(unsigned long mfn)
+{
+    void *p, *v = map_domain_page(mfn);
+
+    dsb();           /* So the CPU issues all writes to the range */
+    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
+        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
+    dsb();           /* So we know the flushes happen before continuing */
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..86f13e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include <asm/gic.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
+#include <asm/page.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                        break;
+
+                    sync_page_to_ram(pte.p2m.base);
+                }
+                break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..c73c717 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        sync_page_to_ram(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..cb6add4 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
+
 /*
  * Flush all hypervisor mappings from the TLB and branch predictor.
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..baf8903 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
+
 /*
  * Flush all hypervisor mappings from the TLB
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..67d64c9 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
+/* Flush the dcache for an entire page. */
+void sync_page_to_ram(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..abe35fb 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void sync_page_to_ram(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t
 perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f22fe2e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush. */
+    xen_pfn_t start_pfn, nr_pfns;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..1345d7e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIr-0008Ln-2o; Fri, 07 Feb 2014 12:13:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIp-0008K4-DY
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:03 +0000
Received: from [193.109.254.147:8886] by server-1.bemta-14.messagelabs.com id
	9B/6A-15438-ECDC4F25; Fri, 07 Feb 2014 12:13:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391775179!2754574!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30345 invoked from network); 7 Feb 2014 12:13:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:13:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945536"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIj-0002qV-1d;
	Fri, 07 Feb 2014 12:12:57 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:54 +0000
Message-ID: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled and so we need to make sure
they see consistent data in RAM (requiring a cache clean) but also that they
do not have old stale data suddenly appear in the caches when they enable
their caches (requiring the invalidate).

This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time
since this will miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the pagecontent). We need to
clean as well as invalidate to make sure that any scrubbing which has occured
gets committed to real RAM. To achieve this add a new cacheflush_page function,
which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
--
v4: introduce a function to clean and invalidate as intended

    make the domctl take a length not an end.

v3:
    s/cacheflush_page/sync_page_to_ram/

    xc interface takes a length instead of an end

    make the domctl range inclusive.

    make xc interface internal -- it isn't needed from libxl in the current
    design and it is easier to expose an interface in the future than to hide
    it.

v2:
   Switch to cleaning at page allocation time + explicit flushing of the
   regions which the toolstack touches.

   Add XSM for new domctl.

   New domctl restricts the amount of space it is willing to flush, to avoid
   thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    2 ++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xc_private.h            |    3 +++
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |   12 ++++++++++++
 xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/arm32/page.h    |    4 ++++
 xen/include/asm-arm/arm64/page.h    |    4 ++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |    3 +++
 xen/xsm/flask/policy/access_vectors |    2 ++
 17 files changed, 112 insertions(+)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3d4d107 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
+     * enabled its caches. But let's be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
+
     return 0;
 }
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..b9d1015 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..f10ec01 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.nr_pfns = nr_pfns;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..33ed15b 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 92271c9..a610f0c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
+
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..ffb68da 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
+
+        if ( e < s )
+            return -EINVAL;
+
+        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 2f48347..d2cfe64 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -338,6 +338,18 @@ unsigned long domain_page_map_to_mfn(const void *va)
 }
 #endif
 
+void sync_page_to_ram(unsigned long mfn)
+{
+    void *p, *v = map_domain_page(mfn);
+
+    dsb();           /* So the CPU issues all writes to the range */
+    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
+        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
+    dsb();           /* So we know the flushes happen before continuing */
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..86f13e9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include <asm/gic.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
+#include <asm/page.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                        break;
+
+                    sync_page_to_ram(pte.p2m.base);
+                }
+                break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..c73c717 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        sync_page_to_ram(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..cb6add4 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
+
 /*
  * Flush all hypervisor mappings from the TLB and branch predictor.
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..baf8903 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
+
 /*
  * Flush all hypervisor mappings from the TLB
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..67d64c9 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
+/* Flush the dcache for an entire page. */
+void sync_page_to_ram(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..abe35fb 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void sync_page_to_ram(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t
 perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f22fe2e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush. */
+    xen_pfn_t start_pfn, nr_pfns;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..1345d7e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:13:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:13:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkIs-0008NO-Pk; Fri, 07 Feb 2014 12:13:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBkIq-0008L1-H4
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:13:04 +0000
Received: from [85.158.139.211:2245] by server-12.bemta-5.messagelabs.com id
	40/48-15415-FCDC4F25; Fri, 07 Feb 2014 12:13:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391775177!2373221!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24311 invoked from network); 7 Feb 2014 12:13:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:13:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="98945534"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 12:12:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 07:12:57 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WBkIj-0002qV-4I;
	Fri, 07 Feb 2014 12:12:57 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Fri, 7 Feb 2014 12:12:55 +0000
Message-ID: <1391775176-30313-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391775139.2162.88.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v4 4/5] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first or at
the same time. It turns out that FreeBSD does this.

This has now been fixed (yet) another way (third time is the charm!) so remove
this support. The original commit contained some fixes which are still
relevant even with the revert of the bulk of the patch:
 - Correction to HSR_SYSREG_CRN_MASK
 - Rename of HSR_SYSCTL macros to avoid naming clash
 - Definition of some additional cp reg specifications

Since these are still useful they are not reverted.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
v2: Move to end of series
    Do not revert useful bits
---
 xen/arch/arm/domain.c        |    7 --
 xen/arch/arm/traps.c         |  158 ------------------------------------------
 xen/include/asm-arm/domain.h |    2 -
 3 files changed, 167 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..124cccf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -475,7 +469,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8f2e82..a15b59e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
-- 
1.7.10.4



-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:30:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkZd-0001Mf-GG; Fri, 07 Feb 2014 12:30:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBkZc-0001Ma-Ms
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 12:30:24 +0000
Received: from [85.158.143.35:41156] by server-3.bemta-4.messagelabs.com id
	E6/B6-11539-FD1D4F25; Fri, 07 Feb 2014 12:30:23 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391776223!3901933!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22769 invoked from network); 7 Feb 2014 12:30:23 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 12:30:23 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391776223; l=345;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=ujSgMSBGkaaybScioFbBkH0z788=;
	b=CQRtJ7Khn/SNlPTSAFR+c/fUH+qOL5u55ygCLByh0s+o9/4TIxefQTcMlPiCWQEdET2
	nl+d+o22ZauD/NMqkhQ5PDFMklFg7zapvfKV2oboT0bgAUAqNMJXBPnQlxY7BWTqTGKeJ
	8UHrSoonsFLBwaZP5SmYolpK6ExkAforEO0=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id e036a5q17CUHIwc
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Fri, 7 Feb 2014 13:30:17 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5D75750269; Fri,  7 Feb 2014 13:30:17 +0100 (CET)
Date: Fri, 7 Feb 2014 13:30:17 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140207123017.GA31152@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
	<52F4A4F7020000780011A102@nat28.tlf.novell.com>
	<20140207083234.GA17978@aepfle.de>
	<52F4B1A7020000780011A14B@nat28.tlf.novell.com>
	<20140207100733.GA1958@aepfle.de>
	<52F4C280020000780011A294@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F4C280020000780011A294@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, Jan Beulich wrote:

> Yes, I agree that there likely needs to be another change here.
> I'm just not certain yet which way this should be done.

Jan,

before I start doing software archaeology:
What would you want it to look like?
What combinations of Xen console= and Linux xencons=/console= are useful anyway?

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:38:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:38:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkho-0001ZF-MY; Fri, 07 Feb 2014 12:38:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBkhn-0001ZA-18
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 12:38:51 +0000
Received: from [85.158.137.68:31648] by server-2.bemta-3.messagelabs.com id
	81/AC-06531-AD3D4F25; Fri, 07 Feb 2014 12:38:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391776729!320968!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9745 invoked from network); 7 Feb 2014 12:38:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 12:38:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 12:38:48 +0000
Message-Id: <52F4E1E9020000780011A37E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 12:38:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Olaf Hering" <olaf@aepfle.de>
References: <52AAE1C2020000780010CE39@nat28.tlf.novell.com>
	<20140206225334.GA21743@aepfle.de>
	<52F4A4F7020000780011A102@nat28.tlf.novell.com>
	<20140207083234.GA17978@aepfle.de>
	<52F4B1A7020000780011A14B@nat28.tlf.novell.com>
	<20140207100733.GA1958@aepfle.de>
	<52F4C280020000780011A294@nat28.tlf.novell.com>
	<20140207123017.GA31152@aepfle.de>
In-Reply-To: <20140207123017.GA31152@aepfle.de>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] linux-2.6.18/xencons: generalize use of
 add_preferred_console()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 13:30, Olaf Hering <olaf@aepfle.de> wrote:
> On Fri, Feb 07, Jan Beulich wrote:
> 
>> Yes, I agree that there likely needs to be another change here.
>> I'm just not certain yet which way this should be done.
> 
> before I start doing software archaeology:
> How would you want it to look like?
> What combinations of Xen console= and Linux xencons=/console= are useful 
> anyway?

I'd really first see if Ian remembers why that original change was
done for DomU only.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:44:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBknJ-00021I-L3; Fri, 07 Feb 2014 12:44:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <william@gandi.net>) id 1WBknH-00021D-SL
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 12:44:32 +0000
Received: from [85.158.139.211:17941] by server-5.bemta-5.messagelabs.com id
	1B/F7-32749-E25D4F25; Fri, 07 Feb 2014 12:44:30 +0000
X-Env-Sender: william@gandi.net
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391777069!2342059!1
X-Originating-IP: [217.70.183.210]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29086 invoked from network); 7 Feb 2014 12:44:29 -0000
Received: from mail4.gandi.net (HELO mail4.gandi.net) (217.70.183.210)
	by server-9.tower-206.messagelabs.com with SMTP;
	7 Feb 2014 12:44:29 -0000
Received: from localhost (mfiltercorp1-d.gandi.net [217.70.183.155])
	by mail4.gandi.net (Postfix) with ESMTP id 788F1120A9E;
	Fri,  7 Feb 2014 13:44:29 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mfiltercorp1-d.gandi.net
Received: from mail4.gandi.net ([217.70.183.210])
	by localhost (mfiltercorp1-d.gandi.net [217.70.183.155]) (amavisd-new,
	port 10024)
	with ESMTP id Mqy42E-s3Oaa; Fri,  7 Feb 2014 13:44:28 +0100 (CET)
Received: from gandi.net (hitchhiker.gandi.net [217.70.181.24])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by mail4.gandi.net (Postfix) with ESMTPSA id 8C0FA120A4B;
	Fri,  7 Feb 2014 13:44:28 +0100 (CET)
Date: Fri, 7 Feb 2014 13:46:07 +0100
From: William Dauchy <william@gandi.net>
To: konrad@kernel.org
Message-ID: <20140207124607.GE19084@gandi.net>
MIME-Version: 1.0
Reply-To: William Dauchy <william@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xenproject.org, william@gandi.net
Subject: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4149034007530491145=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============4149034007530491145==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="h56sxpGKRmy85csR"
Content-Disposition: inline


--h56sxpGKRmy85csR
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

Hello,

I am facing random memleaks with a xenU 3.10.x pv (x86_64);
sometimes it runs without issue, and sometimes after a reboot the
kernel starts to leak memory heavily until OOM.

config: hypervisor xen4.1, dom0 v3.4.x (x86_32)

I got no issue with a xenU v3.12.x pv; the only difference in dmesg was
about APIC.

I manually backported the following patch:
6efa20e xen: Support 64-bit PV guest receiving NMIs
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -427,8 +427,7 @@ static void __init xen_init_cpuid_mask(void)
 
        if (!xen_initial_domain())
                cpuid_leaf1_edx_mask &=
-                       ~((1 << X86_FEATURE_APIC) |  /* disable local APIC */
-                         (1 << X86_FEATURE_ACPI));  /* disable ACPI */
+                       ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */


and my issue is completely fixed. Should it be backported for stable?
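Decoded, the hunk narrows the mask applied to CPUID leaf 1 EDX for non-initial domains: before, both the local APIC and ACPI feature bits were hidden from the guest; after, only ACPI is. A minimal before/after sketch (feature-bit positions as in Linux's cpufeature definitions, reproduced here for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* CPUID leaf 1 EDX feature-bit positions (as in Linux's cpufeatures). */
#define X86_FEATURE_APIC 9
#define X86_FEATURE_ACPI 22

/* Before the backport: domU hides both the local APIC and ACPI bits. */
static uint32_t mask_old(uint32_t edx)
{
    return edx & ~((1u << X86_FEATURE_APIC) | (1u << X86_FEATURE_ACPI));
}

/* After the backport: only ACPI is hidden; the guest keeps its APIC bit. */
static uint32_t mask_new(uint32_t edx)
{
    return edx & ~(1u << X86_FEATURE_ACPI);
}
```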

Thanks,
-- 
William

--h56sxpGKRmy85csR
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlL01Y8ACgkQ1I6eqOUidQEyUQCgo+u4b/XXiz532Seg2Ot5tAbL
h+wAoJGbvAIj19g5/B/b1KaV8jPhGYdt
=CbFe
-----END PGP SIGNATURE-----

--h56sxpGKRmy85csR--


--===============4149034007530491145==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4149034007530491145==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 12:47:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkps-0002CO-1m; Fri, 07 Feb 2014 12:47:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBkpq-0002CF-2Y
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:47:10 +0000
Received: from [85.158.137.68:11225] by server-13.bemta-3.messagelabs.com id
	78/F7-26923-DC5D4F25; Fri, 07 Feb 2014 12:47:09 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391777227!329804!1
X-Originating-IP: [209.85.216.48]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21208 invoked from network); 7 Feb 2014 12:47:08 -0000
Received: from mail-qa0-f48.google.com (HELO mail-qa0-f48.google.com)
	(209.85.216.48)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:47:08 -0000
Received: by mail-qa0-f48.google.com with SMTP id f11so5039539qae.7
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 04:47:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=4AO+VWUkAatD9XUltGYIMAVrhSPwI1CgL2Sw/D29V48=;
	b=WE5WO4QIEhvjddURvQHFzBEV/rS2PJYDzfD/JH9YVMQdPpFOmc1cRdVQpeMZRlm4X7
	zISgfyljKOvwHhXXBayJYVJy+ICNeKrmST78NEAwIVLwoJnJMBJ2AOySkQIv6zGWQtRa
	8eSV6cefGNvKRPp1v1NP0dvhDZvtixVsTdt/JTsmBsXf9YtvQRDhE/yISrHgQRTsoIJp
	kYxVp5j+1ia4stsbQ91v38mlJzWLrCZofPX2AgVGrJNvHNxw4Y541MFqlgvdlOzNyVDJ
	IzCQD2/d6FEw+uhr4GvXHrucFephax/aDZeeRuQb6wsRclkBmljszKJCNrPq/QwT8mkw
	gA2g==
X-Gm-Message-State: ALoCoQl/ToTEVs+dvUhU+j9Q5pUQTGWhS5upILM8Ne3GIZaqvxHMMWU0okHLnUYZuN/hA177M5Ev
MIME-Version: 1.0
X-Received: by 10.224.7.10 with SMTP id b10mr21978562qab.50.1391777226966;
	Fri, 07 Feb 2014 04:47:06 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Fri, 7 Feb 2014 04:47:06 -0800 (PST)
In-Reply-To: <52F4C963.2030401@linaro.org>
References: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
	<52F4C963.2030401@linaro.org>
Date: Fri, 7 Feb 2014 18:17:06 +0530
Message-ID: <CAAHg+Hg20YJf6ARdXHsz1EObABj5zn0-rWrdr4pWO=XWmKe60g@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Julien Grall <julien.grall@linaro.org>
Cc: Ian Campbell <ian.campbell@citrix.com>, Anup Patel <anup.patel@linaro.org>,
	Patch Tracking <patches@linaro.org>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Julien,

On 7 February 2014 17:24, Julien Grall <julien.grall@linaro.org> wrote:
> Hello,
>
> Thanks for sending the patch quickly.
>
>
> On 07/02/14 10:38, Pranavkumar Sawargaonkar wrote:
>>
>> This patch addresses memory cloberring issue mentioed by Julien Grall
>
>
> clobbering mentioned.
>
>
>> with my earlier patch -
>> Ref:
>> http://www.gossamer-threads.com/lists/xen/devel/316247
>
>
> Can you add the commit id?
Sure.
>
>
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>> ---
>>   xen/arch/arm/arm64/vfp.c |   70
>> ++++++++++++++++++++++++----------------------
>>   1 file changed, 36 insertions(+), 34 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index c09cf0c..62f56a3 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
>>       if ( !cpu_has_fp )
>>           return;
>>
>> -    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> -                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> -                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> -                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> -                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> -                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> -                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> -                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> -                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> -                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> -                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> -                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> -                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> -                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> -                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> -                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%1, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%1, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%1, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%1, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%1, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%1, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%1, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%1, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%1, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%1, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%1, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%1, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%1, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%1, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%1, #16 * 30]\n\t"
>> +                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
>> +                 : "memory");
>
>
> You don't need anymore to clobber the whole memory. "memory" can be removed.
OK, I will remove it in V2.
>
>
>>
>>       v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>>       v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
>> @@ -36,23 +37,24 @@ void vfp_restore_state(struct vcpu *v)
>>       if ( !cpu_has_fp )
>>           return;
>>
>> -    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
>> -                 "ldp q2, q3, [%0, #16 * 2]\n\t"
>> -                 "ldp q4, q5, [%0, #16 * 4]\n\t"
>> -                 "ldp q6, q7, [%0, #16 * 6]\n\t"
>> -                 "ldp q8, q9, [%0, #16 * 8]\n\t"
>> -                 "ldp q10, q11, [%0, #16 * 10]\n\t"
>> -                 "ldp q12, q13, [%0, #16 * 12]\n\t"
>> -                 "ldp q14, q15, [%0, #16 * 14]\n\t"
>> -                 "ldp q16, q17, [%0, #16 * 16]\n\t"
>> -                 "ldp q18, q19, [%0, #16 * 18]\n\t"
>> -                 "ldp q20, q21, [%0, #16 * 20]\n\t"
>> -                 "ldp q22, q23, [%0, #16 * 22]\n\t"
>> -                 "ldp q24, q25, [%0, #16 * 24]\n\t"
>> -                 "ldp q26, q27, [%0, #16 * 26]\n\t"
>> -                 "ldp q28, q29, [%0, #16 * 28]\n\t"
>> -                 "ldp q30, q31, [%0, #16 * 30]\n\t"
>> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
>> +                 "ldp q2, q3, [%1, #16 * 2]\n\t"
>> +                 "ldp q4, q5, [%1, #16 * 4]\n\t"
>> +                 "ldp q6, q7, [%1, #16 * 6]\n\t"
>> +                 "ldp q8, q9, [%1, #16 * 8]\n\t"
>> +                 "ldp q10, q11, [%1, #16 * 10]\n\t"
>> +                 "ldp q12, q13, [%1, #16 * 12]\n\t"
>> +                 "ldp q14, q15, [%1, #16 * 14]\n\t"
>> +                 "ldp q16, q17, [%1, #16 * 16]\n\t"
>> +                 "ldp q18, q19, [%1, #16 * 18]\n\t"
>> +                 "ldp q20, q21, [%1, #16 * 20]\n\t"
>> +                 "ldp q22, q23, [%1, #16 * 22]\n\t"
>> +                 "ldp q24, q25, [%1, #16 * 24]\n\t"
>> +                 "ldp q26, q27, [%1, #16 * 26]\n\t"
>> +                 "ldp q28, q29, [%1, #16 * 28]\n\t"
>> +                 "ldp q30, q31, [%1, #16 * 30]\n\t"
>> +                 :: "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs)
>> +                 : "memory");
>
>
> Same here.
>
> Cheers,
>
> --
> Julien Grall

-
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:47:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkps-0002CO-1m; Fri, 07 Feb 2014 12:47:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBkpq-0002CF-2Y
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:47:10 +0000
Received: from [85.158.137.68:11225] by server-13.bemta-3.messagelabs.com id
	78/F7-26923-DC5D4F25; Fri, 07 Feb 2014 12:47:09 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391777227!329804!1
X-Originating-IP: [209.85.216.48]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21208 invoked from network); 7 Feb 2014 12:47:08 -0000
Received: from mail-qa0-f48.google.com (HELO mail-qa0-f48.google.com)
	(209.85.216.48)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:47:08 -0000
Received: by mail-qa0-f48.google.com with SMTP id f11so5039539qae.7
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 04:47:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=4AO+VWUkAatD9XUltGYIMAVrhSPwI1CgL2Sw/D29V48=;
	b=WE5WO4QIEhvjddURvQHFzBEV/rS2PJYDzfD/JH9YVMQdPpFOmc1cRdVQpeMZRlm4X7
	zISgfyljKOvwHhXXBayJYVJy+ICNeKrmST78NEAwIVLwoJnJMBJ2AOySkQIv6zGWQtRa
	8eSV6cefGNvKRPp1v1NP0dvhDZvtixVsTdt/JTsmBsXf9YtvQRDhE/yISrHgQRTsoIJp
	kYxVp5j+1ia4stsbQ91v38mlJzWLrCZofPX2AgVGrJNvHNxw4Y541MFqlgvdlOzNyVDJ
	IzCQD2/d6FEw+uhr4GvXHrucFephax/aDZeeRuQb6wsRclkBmljszKJCNrPq/QwT8mkw
	gA2g==
X-Gm-Message-State: ALoCoQl/ToTEVs+dvUhU+j9Q5pUQTGWhS5upILM8Ne3GIZaqvxHMMWU0okHLnUYZuN/hA177M5Ev
MIME-Version: 1.0
X-Received: by 10.224.7.10 with SMTP id b10mr21978562qab.50.1391777226966;
	Fri, 07 Feb 2014 04:47:06 -0800 (PST)
Received: by 10.140.108.33 with HTTP; Fri, 7 Feb 2014 04:47:06 -0800 (PST)
In-Reply-To: <52F4C963.2030401@linaro.org>
References: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
	<52F4C963.2030401@linaro.org>
Date: Fri, 7 Feb 2014 18:17:06 +0530
Message-ID: <CAAHg+Hg20YJf6ARdXHsz1EObABj5zn0-rWrdr4pWO=XWmKe60g@mail.gmail.com>
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: Julien Grall <julien.grall@linaro.org>
Cc: Ian Campbell <ian.campbell@citrix.com>, Anup Patel <anup.patel@linaro.org>,
	Patch Tracking <patches@linaro.org>, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Julieng,

On 7 February 2014 17:24, Julien Grall <julien.grall@linaro.org> wrote:
> Hello,
>
> Thanks for sending the patch quickly.
>
>
> On 07/02/14 10:38, Pranavkumar Sawargaonkar wrote:
>>
>> This patch addresses memory cloberring issue mentioed by Julien Grall
>
>
> clobbering mentioned.
>
>
>> with my earlier patch -
>> Ref:
>> http://www.gossamer-threads.com/lists/xen/devel/316247
>
>
> Can you add the commit id?
Sure.
>
>
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>> ---
>>   xen/arch/arm/arm64/vfp.c |   70
>> ++++++++++++++++++++++++----------------------
>>   1 file changed, 36 insertions(+), 34 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index c09cf0c..62f56a3 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
>>       if ( !cpu_has_fp )
>>           return;
>>
>> -    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> -                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> -                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> -                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> -                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> -                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> -                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> -                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> -                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> -                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> -                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> -                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> -                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> -                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> -                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> -                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%1, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%1, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%1, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%1, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%1, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%1, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%1, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%1, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%1, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%1, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%1, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%1, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%1, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%1, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%1, #16 * 30]\n\t"
>> +                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
>> +                 : "memory");
>
>
> You don't need anymore to clobber the whole memory. "memory" can be removed.
Ok I will remove it in V2.
>
>
>>
>>       v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
>>       v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
>> @@ -36,23 +37,24 @@ void vfp_restore_state(struct vcpu *v)
>>       if ( !cpu_has_fp )
>>           return;
>>
>> -    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
>> -                 "ldp q2, q3, [%0, #16 * 2]\n\t"
>> -                 "ldp q4, q5, [%0, #16 * 4]\n\t"
>> -                 "ldp q6, q7, [%0, #16 * 6]\n\t"
>> -                 "ldp q8, q9, [%0, #16 * 8]\n\t"
>> -                 "ldp q10, q11, [%0, #16 * 10]\n\t"
>> -                 "ldp q12, q13, [%0, #16 * 12]\n\t"
>> -                 "ldp q14, q15, [%0, #16 * 14]\n\t"
>> -                 "ldp q16, q17, [%0, #16 * 16]\n\t"
>> -                 "ldp q18, q19, [%0, #16 * 18]\n\t"
>> -                 "ldp q20, q21, [%0, #16 * 20]\n\t"
>> -                 "ldp q22, q23, [%0, #16 * 22]\n\t"
>> -                 "ldp q24, q25, [%0, #16 * 24]\n\t"
>> -                 "ldp q26, q27, [%0, #16 * 26]\n\t"
>> -                 "ldp q28, q29, [%0, #16 * 28]\n\t"
>> -                 "ldp q30, q31, [%0, #16 * 30]\n\t"
>> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
>> +                 "ldp q2, q3, [%1, #16 * 2]\n\t"
>> +                 "ldp q4, q5, [%1, #16 * 4]\n\t"
>> +                 "ldp q6, q7, [%1, #16 * 6]\n\t"
>> +                 "ldp q8, q9, [%1, #16 * 8]\n\t"
>> +                 "ldp q10, q11, [%1, #16 * 10]\n\t"
>> +                 "ldp q12, q13, [%1, #16 * 12]\n\t"
>> +                 "ldp q14, q15, [%1, #16 * 14]\n\t"
>> +                 "ldp q16, q17, [%1, #16 * 16]\n\t"
>> +                 "ldp q18, q19, [%1, #16 * 18]\n\t"
>> +                 "ldp q20, q21, [%1, #16 * 20]\n\t"
>> +                 "ldp q22, q23, [%1, #16 * 22]\n\t"
>> +                 "ldp q24, q25, [%1, #16 * 24]\n\t"
>> +                 "ldp q26, q27, [%1, #16 * 26]\n\t"
>> +                 "ldp q28, q29, [%1, #16 * 28]\n\t"
>> +                 "ldp q30, q31, [%1, #16 * 30]\n\t"
>> +                 :: "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs)
>> +                 : "memory");
>
>
> Same here.
>
> Cheers,
>
> --
> Julien Grall

-
Pranav

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:52:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkuj-0002nv-2I; Fri, 07 Feb 2014 12:52:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBkui-0002nq-CC
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 12:52:12 +0000
Received: from [85.158.139.211:30655] by server-7.bemta-5.messagelabs.com id
	16/61-14867-BF6D4F25; Fri, 07 Feb 2014 12:52:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391777525!2385504!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28743 invoked from network); 7 Feb 2014 12:52:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:52:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; 
	d="scan'208,217";a="100810017"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 12:51:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 07:51:44 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBkuG-0003o6-2g;
	Fri, 07 Feb 2014 12:51:44 +0000
Message-ID: <52F4D6E0.60305@citrix.com>
Date: Fri, 7 Feb 2014 12:51:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
	<52F4B99C020000780011A1EE@nat28.tlf.novell.com>
In-Reply-To: <52F4B99C020000780011A1EE@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 1/4] flask: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5842493271014130802=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5842493271014130802==
Content-Type: multipart/alternative;
	boundary="------------030201030602060303090808"

--------------030201030602060303090808
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 07/02/14 09:46, Jan Beulich wrote:
> Plus, in the case of security_preserve_bools(), prevent double freeing
> in the case of security_get_bools() failing.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/xsm/flask/flask_op.c
> +++ b/xen/xsm/flask/flask_op.c
> @@ -347,6 +347,7 @@ static int flask_security_set_bool(struc
>  
>          if ( arg->bool_id >= num )
>          {
> +            xfree(values);
>              rv = -ENOENT;
>              goto out;
>          }
> --- a/xen/xsm/flask/ss/services.c
> +++ b/xen/xsm/flask/ss/services.c
> @@ -1902,6 +1902,7 @@ err:
>      {
>          for ( i = 0; i < *len; i++ )
>              xfree((*names)[i]);
> +        xfree(*names);
>      }
>      xfree(*values);
>      goto out;
> @@ -2011,7 +2012,7 @@ static int security_preserve_bools(struc
>  
>      rc = security_get_bools(&nbools, &bnames, &bvalues, NULL);
>      if ( rc )
> -        goto out;
> +        return rc;
>      for ( i = 0; i < nbools; i++ )
>      {
>          booldatum = hashtab_search(p->p_bools.table, bnames[i]);
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------030201030602060303090808--


--===============5842493271014130802==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5842493271014130802==--


>  
>          if ( arg->bool_id >= num )
>          {
> +            xfree(values);
>              rv = -ENOENT;
>              goto out;
>          }
> --- a/xen/xsm/flask/ss/services.c
> +++ b/xen/xsm/flask/ss/services.c
> @@ -1902,6 +1902,7 @@ err:
>      {
>          for ( i = 0; i < *len; i++ )
>              xfree((*names)[i]);
> +        xfree(*names);
>      }
>      xfree(*values);
>      goto out;
> @@ -2011,7 +2012,7 @@ static int security_preserve_bools(struc
>  
>      rc = security_get_bools(&nbools, &bnames, &bvalues, NULL);
>      if ( rc )
> -        goto out;
> +        return rc;
>      for ( i = 0; i < nbools; i++ )
>      {
>          booldatum = hashtab_search(p->p_bools.table, bnames[i]);
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
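[Editor's note: the hunks above free `values` on the out-of-range path and return early instead of reaching a second free. A standalone C sketch of the same ownership rule — the names `get_bools`/`set_bool` are illustrative stand-ins, not Xen's real API, and `xfree` is modelled with `free`:]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Toy model of the pattern fixed above: get_bools() hands back a
 * heap-allocated array that every exit path of the caller must release
 * exactly once. Names are illustrative, not the FLASK API. */
static int get_bools(int *num, int **values)
{
    *num = 4;
    *values = calloc(*num, sizeof(**values));
    return *values ? 0 : -ENOMEM;
}

static int set_bool(unsigned int bool_id, int new_value)
{
    int num, *values, rv;

    rv = get_bools(&num, &values);
    if ( rv )
        return rv;            /* nothing allocated on this path */

    if ( bool_id >= (unsigned int)num )
    {
        free(values);         /* the leak plugged by the first hunk */
        return -ENOENT;
    }

    values[bool_id] = !!new_value;
    free(values);
    return 0;
}
```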


--------------030201030602060303090808--


--===============5842493271014130802==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 12:56:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkyU-00030r-QW; Fri, 07 Feb 2014 12:56:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBkyT-00030m-B7
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 12:56:05 +0000
Received: from [85.158.137.68:11813] by server-4.bemta-3.messagelabs.com id
	A1/BE-11750-4E7D4F25; Fri, 07 Feb 2014 12:56:04 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391777762!329450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4525 invoked from network); 7 Feb 2014 12:56:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:56:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; 
	d="scan'208,217";a="100810741"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 12:56:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 07:56:01 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBkyP-0003sr-C6;
	Fri, 07 Feb 2014 12:56:01 +0000
Message-ID: <52F4D7E1.9020002@citrix.com>
Date: Fri, 7 Feb 2014 12:56:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
	<52F4B9BC020000780011A1F2@nat28.tlf.novell.com>
In-Reply-To: <52F4B9BC020000780011A1F2@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 2/4] flask: fix error propagation from
	flask_security_set_bool()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3794812668176306307=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3794812668176306307==
Content-Type: multipart/alternative;
	boundary="------------080701000106060701090301"

--------------080701000106060701090301
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 07/02/14 09:47, Jan Beulich wrote:
> The function should return an error when flask_security_make_bools() as

when flask_security_make_bools() fails ?

> well as when the input ID is out of range.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/xsm/flask/flask_op.c
> +++ b/xen/xsm/flask/flask_op.c
> @@ -364,9 +364,10 @@ static int flask_security_set_bool(struc
>      else
>      {
>          if ( !bool_pending_values )
> -            flask_security_make_bools();
> -
> -        if ( arg->bool_id >= bool_num )
> +            rv = flask_security_make_bools();
> +        if ( !rv && arg->bool_id >= bool_num )

Surely you want "rv || arg->" if you want to catch both
flask_security_make_bools() failing as well as the input ID being out of
range?

~Andrew

> +            rv = -ENOENT;
> +        if ( rv )
>              goto out;
>  
>          bool_pending_values[arg->bool_id] = !!(arg->new_value);
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
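[Editor's note: the reworked hunk folds both failure sources into `rv` before the single `goto out`. A minimal standalone model of that control flow (names and values are stand-ins, not the real FLASK code), so either failure path can be exercised:]

```c
#include <assert.h>
#include <errno.h>

/* Model of the control flow in the hunk quoted above.
 * make_bools_rv stands in for flask_security_make_bools()'s return
 * value; bool_num stands in for the real bounds limit. */
static int set_bool_flow(int make_bools_rv, unsigned int bool_id,
                         unsigned int bool_num)
{
    int rv = 0;
    int have_pending = 0;        /* models !bool_pending_values */

    if ( !have_pending )
        rv = make_bools_rv;      /* rv = flask_security_make_bools(); */
    if ( !rv && bool_id >= bool_num )
        rv = -ENOENT;
    if ( rv )
        return rv;               /* "goto out" in the original */

    return 0;                    /* would update bool_pending_values[] */
}
```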


--------------080701000106060701090301--


--===============3794812668176306307==--



From xen-devel-bounces@lists.xen.org Fri Feb 07 12:57:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkzu-00037o-Bg; Fri, 07 Feb 2014 12:57:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBkzs-00037d-Gx
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:57:32 +0000
Received: from [85.158.139.211:49063] by server-9.bemta-5.messagelabs.com id
	A7/01-11237-B38D4F25; Fri, 07 Feb 2014 12:57:31 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391777849!2380724!1
X-Originating-IP: [209.85.192.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23860 invoked from network); 7 Feb 2014 12:57:30 -0000
Received: from mail-pd0-f178.google.com (HELO mail-pd0-f178.google.com)
	(209.85.192.178)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:57:30 -0000
Received: by mail-pd0-f178.google.com with SMTP id y13so3115402pdi.23
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 04:57:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=9vr6EIPBmVL6/vnzKingBCBUTOnuiw+359Q62O9mhyw=;
	b=hln1l3L6yUwLgHE6igBCOr8350KcB1yM5zOjFKWrE9EElcGMltc3/6ga+TYpZrWhQ4
	BfsDrJpc7z+S/O9u4WEOT046Y/5R+EcWben1u5DLCzagA//dJm0Ko4+cpEu8Sqdc1Mwq
	In1rFi2mJ/FgqcpE25eK6aqy8Q0gHR7A/+U0SVZ1w7TyKBGRf5Orj0uujE3y/PPHDIBO
	NaVCON2VIf/vmfp0mRLELrBL5NzJgcS8hCGJaPRAqvqDSwztCUTYX4wL4XnnzOpWTsnz
	IVrU9XPq1HUE8UgDpUgTHnAU29WXuGbKOUuyYZZ2lkfXK3zj65Na6op+1ejk1ba0b6/x
	rcAQ==
X-Gm-Message-State: ALoCoQlLR3PLLpCTUtRLccjSqZHMXmmYmyCMviNqqfTj4pbla0qC39EPQ/FxOtxL/uj/1IoRv7nr
X-Received: by 10.66.144.227 with SMTP id sp3mr7705682pab.100.1391777848643;
	Fri, 07 Feb 2014 04:57:28 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id
	eo11sm33034333pac.0.2014.02.07.04.57.24 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 04:57:28 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 18:27:16 +0530
Message-Id: <1391777836-12260-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, julien.grall@linaro.org, patches@apm.com,
	stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V2] xen: arm: arm64: Fix memory clobbering
	issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the memory clobbering issue mentioned by Julien Grall
with my earlier patch -
Commit Id: 712eb2e04da2cbcd9908f74ebd47c6df60d6d12f

Discussion related to this fix -
http://www.gossamer-threads.com/lists/xen/devel/316247

V2: Incorporating comments received on V1.
V1: Initial Patch

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/arm64/vfp.c |   68 +++++++++++++++++++++++-----------------------
 1 file changed, 34 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index c09cf0c..3cd2b1b 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -8,23 +8,23 @@ void vfp_save_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
-                 "stp q2, q3, [%0, #16 * 2]\n\t"
-                 "stp q4, q5, [%0, #16 * 4]\n\t"
-                 "stp q6, q7, [%0, #16 * 6]\n\t"
-                 "stp q8, q9, [%0, #16 * 8]\n\t"
-                 "stp q10, q11, [%0, #16 * 10]\n\t"
-                 "stp q12, q13, [%0, #16 * 12]\n\t"
-                 "stp q14, q15, [%0, #16 * 14]\n\t"
-                 "stp q16, q17, [%0, #16 * 16]\n\t"
-                 "stp q18, q19, [%0, #16 * 18]\n\t"
-                 "stp q20, q21, [%0, #16 * 20]\n\t"
-                 "stp q22, q23, [%0, #16 * 22]\n\t"
-                 "stp q24, q25, [%0, #16 * 24]\n\t"
-                 "stp q26, q27, [%0, #16 * 26]\n\t"
-                 "stp q28, q29, [%0, #16 * 28]\n\t"
-                 "stp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                 "stp q2, q3, [%1, #16 * 2]\n\t"
+                 "stp q4, q5, [%1, #16 * 4]\n\t"
+                 "stp q6, q7, [%1, #16 * 6]\n\t"
+                 "stp q8, q9, [%1, #16 * 8]\n\t"
+                 "stp q10, q11, [%1, #16 * 10]\n\t"
+                 "stp q12, q13, [%1, #16 * 12]\n\t"
+                 "stp q14, q15, [%1, #16 * 14]\n\t"
+                 "stp q16, q17, [%1, #16 * 16]\n\t"
+                 "stp q18, q19, [%1, #16 * 18]\n\t"
+                 "stp q20, q21, [%1, #16 * 20]\n\t"
+                 "stp q22, q23, [%1, #16 * 22]\n\t"
+                 "stp q24, q25, [%1, #16 * 24]\n\t"
+                 "stp q26, q27, [%1, #16 * 26]\n\t"
+                 "stp q28, q29, [%1, #16 * 28]\n\t"
+                 "stp q30, q31, [%1, #16 * 30]\n\t"
+                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
 
     v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
@@ -36,23 +36,23 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
-                 "ldp q2, q3, [%0, #16 * 2]\n\t"
-                 "ldp q4, q5, [%0, #16 * 4]\n\t"
-                 "ldp q6, q7, [%0, #16 * 6]\n\t"
-                 "ldp q8, q9, [%0, #16 * 8]\n\t"
-                 "ldp q10, q11, [%0, #16 * 10]\n\t"
-                 "ldp q12, q13, [%0, #16 * 12]\n\t"
-                 "ldp q14, q15, [%0, #16 * 14]\n\t"
-                 "ldp q16, q17, [%0, #16 * 16]\n\t"
-                 "ldp q18, q19, [%0, #16 * 18]\n\t"
-                 "ldp q20, q21, [%0, #16 * 20]\n\t"
-                 "ldp q22, q23, [%0, #16 * 22]\n\t"
-                 "ldp q24, q25, [%0, #16 * 24]\n\t"
-                 "ldp q26, q27, [%0, #16 * 26]\n\t"
-                 "ldp q28, q29, [%0, #16 * 28]\n\t"
-                 "ldp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                 "ldp q2, q3, [%1, #16 * 2]\n\t"
+                 "ldp q4, q5, [%1, #16 * 4]\n\t"
+                 "ldp q6, q7, [%1, #16 * 6]\n\t"
+                 "ldp q8, q9, [%1, #16 * 8]\n\t"
+                 "ldp q10, q11, [%1, #16 * 10]\n\t"
+                 "ldp q12, q13, [%1, #16 * 12]\n\t"
+                 "ldp q14, q15, [%1, #16 * 14]\n\t"
+                 "ldp q16, q17, [%1, #16 * 16]\n\t"
+                 "ldp q18, q19, [%1, #16 * 18]\n\t"
+                 "ldp q20, q21, [%1, #16 * 20]\n\t"
+                 "ldp q22, q23, [%1, #16 * 22]\n\t"
+                 "ldp q24, q25, [%1, #16 * 24]\n\t"
+                 "ldp q26, q27, [%1, #16 * 26]\n\t"
+                 "ldp q28, q29, [%1, #16 * 28]\n\t"
+                 "ldp q30, q31, [%1, #16 * 30]\n\t"
+                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
 
     WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Fri Feb 07 12:57:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBkzu-00037o-Bg; Fri, 07 Feb 2014 12:57:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pranavkumar@linaro.org>) id 1WBkzs-00037d-Gx
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:57:32 +0000
Received: from [85.158.139.211:49063] by server-9.bemta-5.messagelabs.com id
	A7/01-11237-B38D4F25; Fri, 07 Feb 2014 12:57:31 +0000
X-Env-Sender: pranavkumar@linaro.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391777849!2380724!1
X-Originating-IP: [209.85.192.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23860 invoked from network); 7 Feb 2014 12:57:30 -0000
Received: from mail-pd0-f178.google.com (HELO mail-pd0-f178.google.com)
	(209.85.192.178)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:57:30 -0000
Received: by mail-pd0-f178.google.com with SMTP id y13so3115402pdi.23
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 04:57:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=9vr6EIPBmVL6/vnzKingBCBUTOnuiw+359Q62O9mhyw=;
	b=hln1l3L6yUwLgHE6igBCOr8350KcB1yM5zOjFKWrE9EElcGMltc3/6ga+TYpZrWhQ4
	BfsDrJpc7z+S/O9u4WEOT046Y/5R+EcWben1u5DLCzagA//dJm0Ko4+cpEu8Sqdc1Mwq
	In1rFi2mJ/FgqcpE25eK6aqy8Q0gHR7A/+U0SVZ1w7TyKBGRf5Orj0uujE3y/PPHDIBO
	NaVCON2VIf/vmfp0mRLELrBL5NzJgcS8hCGJaPRAqvqDSwztCUTYX4wL4XnnzOpWTsnz
	IVrU9XPq1HUE8UgDpUgTHnAU29WXuGbKOUuyYZZ2lkfXK3zj65Na6op+1ejk1ba0b6/x
	rcAQ==
X-Gm-Message-State: ALoCoQlLR3PLLpCTUtRLccjSqZHMXmmYmyCMviNqqfTj4pbla0qC39EPQ/FxOtxL/uj/1IoRv7nr
X-Received: by 10.66.144.227 with SMTP id sp3mr7705682pab.100.1391777848643;
	Fri, 07 Feb 2014 04:57:28 -0800 (PST)
Received: from pnqlab006.amcc.com ([182.73.239.130])
	by mx.google.com with ESMTPSA id
	eo11sm33034333pac.0.2014.02.07.04.57.24 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 04:57:28 -0800 (PST)
From: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 18:27:16 +0530
Message-Id: <1391777836-12260-1-git-send-email-pranavkumar@linaro.org>
X-Mailer: git-send-email 1.7.9.5
Cc: ian.campbell@citrix.com, Anup Patel <anup.patel@linaro.org>,
	patches@linaro.org, julien.grall@linaro.org, patches@apm.com,
	stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: [Xen-devel] [PATCH V2] xen: arm: arm64: Fix memory cloberring
	issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the memory clobbering issue mentioned by Julien Grall
in my earlier patch -
Commit Id: 712eb2e04da2cbcd9908f74ebd47c6df60d6d12f

Discussion related to this fix -
http://www.gossamer-threads.com/lists/xen/devel/316247

V2: Incorporating comments received on V1.
V1: Initial Patch

Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Signed-off-by: Anup Patel <anup.patel@linaro.org>
---
 xen/arch/arm/arm64/vfp.c |   68 +++++++++++++++++++++++-----------------------
 1 file changed, 34 insertions(+), 34 deletions(-)

diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index c09cf0c..3cd2b1b 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -8,23 +8,23 @@ void vfp_save_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
-                 "stp q2, q3, [%0, #16 * 2]\n\t"
-                 "stp q4, q5, [%0, #16 * 4]\n\t"
-                 "stp q6, q7, [%0, #16 * 6]\n\t"
-                 "stp q8, q9, [%0, #16 * 8]\n\t"
-                 "stp q10, q11, [%0, #16 * 10]\n\t"
-                 "stp q12, q13, [%0, #16 * 12]\n\t"
-                 "stp q14, q15, [%0, #16 * 14]\n\t"
-                 "stp q16, q17, [%0, #16 * 16]\n\t"
-                 "stp q18, q19, [%0, #16 * 18]\n\t"
-                 "stp q20, q21, [%0, #16 * 20]\n\t"
-                 "stp q22, q23, [%0, #16 * 22]\n\t"
-                 "stp q24, q25, [%0, #16 * 24]\n\t"
-                 "stp q26, q27, [%0, #16 * 26]\n\t"
-                 "stp q28, q29, [%0, #16 * 28]\n\t"
-                 "stp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
+                 "stp q2, q3, [%1, #16 * 2]\n\t"
+                 "stp q4, q5, [%1, #16 * 4]\n\t"
+                 "stp q6, q7, [%1, #16 * 6]\n\t"
+                 "stp q8, q9, [%1, #16 * 8]\n\t"
+                 "stp q10, q11, [%1, #16 * 10]\n\t"
+                 "stp q12, q13, [%1, #16 * 12]\n\t"
+                 "stp q14, q15, [%1, #16 * 14]\n\t"
+                 "stp q16, q17, [%1, #16 * 16]\n\t"
+                 "stp q18, q19, [%1, #16 * 18]\n\t"
+                 "stp q20, q21, [%1, #16 * 20]\n\t"
+                 "stp q22, q23, [%1, #16 * 22]\n\t"
+                 "stp q24, q25, [%1, #16 * 24]\n\t"
+                 "stp q26, q27, [%1, #16 * 26]\n\t"
+                 "stp q28, q29, [%1, #16 * 28]\n\t"
+                 "stp q30, q31, [%1, #16 * 30]\n\t"
+                 : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
 
     v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
     v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
@@ -36,23 +36,23 @@ void vfp_restore_state(struct vcpu *v)
     if ( !cpu_has_fp )
         return;
 
-    asm volatile("ldp q0, q1, [%0, #16 * 0]\n\t"
-                 "ldp q2, q3, [%0, #16 * 2]\n\t"
-                 "ldp q4, q5, [%0, #16 * 4]\n\t"
-                 "ldp q6, q7, [%0, #16 * 6]\n\t"
-                 "ldp q8, q9, [%0, #16 * 8]\n\t"
-                 "ldp q10, q11, [%0, #16 * 10]\n\t"
-                 "ldp q12, q13, [%0, #16 * 12]\n\t"
-                 "ldp q14, q15, [%0, #16 * 14]\n\t"
-                 "ldp q16, q17, [%0, #16 * 16]\n\t"
-                 "ldp q18, q19, [%0, #16 * 18]\n\t"
-                 "ldp q20, q21, [%0, #16 * 20]\n\t"
-                 "ldp q22, q23, [%0, #16 * 22]\n\t"
-                 "ldp q24, q25, [%0, #16 * 24]\n\t"
-                 "ldp q26, q27, [%0, #16 * 26]\n\t"
-                 "ldp q28, q29, [%0, #16 * 28]\n\t"
-                 "ldp q30, q31, [%0, #16 * 30]\n\t"
-                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
+    asm volatile("ldp q0, q1, [%1, #16 * 0]\n\t"
+                 "ldp q2, q3, [%1, #16 * 2]\n\t"
+                 "ldp q4, q5, [%1, #16 * 4]\n\t"
+                 "ldp q6, q7, [%1, #16 * 6]\n\t"
+                 "ldp q8, q9, [%1, #16 * 8]\n\t"
+                 "ldp q10, q11, [%1, #16 * 10]\n\t"
+                 "ldp q12, q13, [%1, #16 * 12]\n\t"
+                 "ldp q14, q15, [%1, #16 * 14]\n\t"
+                 "ldp q16, q17, [%1, #16 * 16]\n\t"
+                 "ldp q18, q19, [%1, #16 * 18]\n\t"
+                 "ldp q20, q21, [%1, #16 * 20]\n\t"
+                 "ldp q22, q23, [%1, #16 * 22]\n\t"
+                 "ldp q24, q25, [%1, #16 * 24]\n\t"
+                 "ldp q26, q27, [%1, #16 * 26]\n\t"
+                 "ldp q28, q29, [%1, #16 * 28]\n\t"
+                 "ldp q30, q31, [%1, #16 * 30]\n\t"
+                 : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
 
     WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
     WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 12:57:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:57:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBl04-00039L-PJ; Fri, 07 Feb 2014 12:57:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBl03-000392-P1
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 12:57:43 +0000
Received: from [85.158.139.211:54162] by server-13.bemta-5.messagelabs.com id
	FA/F9-18801-648D4F25; Fri, 07 Feb 2014 12:57:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391777860!2380780!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25781 invoked from network); 7 Feb 2014 12:57:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 12:57:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; 
	d="scan'208,217";a="100811241"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 12:57:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 07:57:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBkzz-0003v9-OJ;
	Fri, 07 Feb 2014 12:57:39 +0000
Message-ID: <52F4D843.5070407@citrix.com>
Date: Fri, 7 Feb 2014 12:57:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
	<52F4B9DC020000780011A1F6@nat28.tlf.novell.com>
In-Reply-To: <52F4B9DC020000780011A1F6@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 3/4] flask: check permissions first thing in
 flask_security_set_bool()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6445427778366527671=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6445427778366527671==
Content-Type: multipart/alternative;
	boundary="------------090608040508030803070205"

--------------090608040508030803070205
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 07/02/14 09:47, Jan Beulich wrote:
> Nothing else should be done if the caller isn't permitted to set
> boolean values.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/xsm/flask/flask_op.c
> +++ b/xen/xsm/flask/flask_op.c
> @@ -326,11 +326,11 @@ static int flask_security_set_bool(struc
>  {
>      int rv;
>  
> -    rv = flask_security_resolve_bool(arg);
> +    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
>      if ( rv )
>          return rv;
>  
> -    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
> +    rv = flask_security_resolve_bool(arg);
>      if ( rv )
>          return rv;
>  
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------090608040508030803070205--


--===============6445427778366527671==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6445427778366527671==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 12:58:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 12:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBl0K-0003Cy-7R; Fri, 07 Feb 2014 12:58:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBl0I-0003Ce-KT
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 12:57:58 +0000
Received: from [193.109.254.147:9611] by server-15.bemta-14.messagelabs.com id
	60/A8-10839-558D4F25; Fri, 07 Feb 2014 12:57:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1391777877!2758623!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19549 invoked from network); 7 Feb 2014 12:57:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 12:57:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 12:57:56 +0000
Message-Id: <52F4E663020000780011A3B0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 12:57:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 13:12, Ian Campbell <ian.campbell@citrix.com> wrote:
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  {
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> +            return -EINVAL;

get_order_from_pages() takes an unsigned long, while xen_pfn_t
is - iirc - 64-bits even on arm32. So you're not checking the full
passed in value, yet use the full one in the calculation of "e" (which
is what gets passed down).

Also, did you consider the nr_pfns == 0 case? At present, due to
the way get_order_from_pages() works, this will produce -EINVAL.
I'm not sure that's intended.
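Jan's truncation point can be demonstrated in isolation. In the sketch below, uint32_t stands in for arm32's 32-bit unsigned long while xen_pfn_t stays 64-bit, so a huge nr_pfns silently collapses to a small value at the narrowing conversion (the names are illustrative, not Xen's real helpers):

```c
#include <stdint.h>

/* xen_pfn_t is 64 bits even on arm32, but get_order_from_pages()
 * takes unsigned long, which is 32 bits there. */
typedef uint64_t xen_pfn_t;

/* Models the implicit narrowing at the call site: only the low 32
 * bits of nr_pfns ever reach the order check. */
static uint32_t as_arm32_ulong(xen_pfn_t nr_pfns)
{
    return (uint32_t)nr_pfns;
}
```

So nr_pfns == 1ULL << 32 narrows to 0 and sails past the MAX_ORDER check, while the full 64-bit value is still used to compute "e" and gets passed down, which is the inconsistency being flagged.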

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>  
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};

The name here (and of the libxc interface) is now certainly
counterintuitive. But it's a domctl (and an internal interface),
which we can change post-4.4 (I'd envision it to actually take
a flags parameter indicating the kind of flush that's wanted).

> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_set_max_evtchn:
>          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
>  
> +    case XEN_DOMCTL_cacheflush:
> +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);

Hard tab.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 13:04:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 13:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBl6V-00041s-D2; Fri, 07 Feb 2014 13:04:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBl6U-00041n-99
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 13:04:22 +0000
Received: from [85.158.137.68:57434] by server-15.bemta-3.messagelabs.com id
	50/1E-19263-5D9D4F25; Fri, 07 Feb 2014 13:04:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391778260!320836!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13232 invoked from network); 7 Feb 2014 13:04:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 13:04:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 13:04:20 +0000
Message-Id: <52F4E7E3020000780011A3D7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 13:04:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
	<52F4B9BC020000780011A1F2@nat28.tlf.novell.com>
	<52F4D7E1.9020002@citrix.com>
In-Reply-To: <52F4D7E1.9020002@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 2/4] flask: fix error propagation from
 flask_security_set_bool()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 13:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 07/02/14 09:47, Jan Beulich wrote:
>> The function should return an error when flask_security_make_bools() as
> 
> when flask_security_make_bools() fails ?

Oops, yes of course. Corrected.

>> --- a/xen/xsm/flask/flask_op.c
>> +++ b/xen/xsm/flask/flask_op.c
>> @@ -364,9 +364,10 @@ static int flask_security_set_bool(struc
>>      else
>>      {
>>          if ( !bool_pending_values )
>> -            flask_security_make_bools();
>> -
>> -        if ( arg->bool_id >= bool_num )
>> +            rv = flask_security_make_bools();
>> +        if ( !rv && arg->bool_id >= bool_num )
> 
> Surely you want "rv || arg->" if you want to catch both
> flask_security_make_bools() failing as well as the input ID being out of
> range?

Yes, which is what the code does - it just takes care not to clobber
"rv" if that was already set non-zero by the function call. See the
context below.

Jan

>> +            rv = -ENOENT;
>> +        if ( rv )
>>              goto out;
>>  
>>          bool_pending_values[arg->bool_id] = !!(arg->new_value);
>>
>>
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 13:07:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 13:07:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBl9Z-00049W-35; Fri, 07 Feb 2014 13:07:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBl9X-00049R-It
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 13:07:31 +0000
Received: from [85.158.143.35:27487] by server-2.bemta-4.messagelabs.com id
	29/3C-10891-29AD4F25; Fri, 07 Feb 2014 13:07:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1391778448!3931864!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 860 invoked from network); 7 Feb 2014 13:07:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 13:07:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="100813759"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 13:07:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 08:07:27 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBl9T-00044b-Le;
	Fri, 07 Feb 2014 13:07:27 +0000
Message-ID: <52F4DA8F.1040407@citrix.com>
Date: Fri, 7 Feb 2014 13:07:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
	<52F4B9BC020000780011A1F2@nat28.tlf.novell.com>
	<52F4D7E1.9020002@citrix.com>
	<52F4E7E3020000780011A3D7@nat28.tlf.novell.com>
In-Reply-To: <52F4E7E3020000780011A3D7@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 2/4] flask: fix error propagation from
	flask_security_set_bool()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/14 13:04, Jan Beulich wrote:
>>>> On 07.02.14 at 13:56, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 07/02/14 09:47, Jan Beulich wrote:
>>> The function should return an error when flask_security_make_bools() as
>> when flask_security_make_bools() fails ?
> Oops, yes of course. Corrected.
>
>>> --- a/xen/xsm/flask/flask_op.c
>>> +++ b/xen/xsm/flask/flask_op.c
>>> @@ -364,9 +364,10 @@ static int flask_security_set_bool(struc
>>>      else
>>>      {
>>>          if ( !bool_pending_values )
>>> -            flask_security_make_bools();
>>> -
>>> -        if ( arg->bool_id >= bool_num )
>>> +            rv = flask_security_make_bools();
>>> +        if ( !rv && arg->bool_id >= bool_num )
>> Surely you want "rv || arg->" if you want to catch both
>> flask_security_make_bools() failing as well as the input ID being out of
>> range?
> Yes, which is what the code does - it just cares to not clobber "rv"
> if that got already set non-zero from the function call. See the
> context below.
>
> Jan

Ah yes.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
>>> +            rv = -ENOENT;
>>> +        if ( rv )
>>>              goto out;
>>>  
>>>          bool_pending_values[arg->bool_id] = !!(arg->new_value);
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org 
>>> http://lists.xen.org/xen-devel 
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 13:10:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 13:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBlCY-0004dh-Nd; Fri, 07 Feb 2014 13:10:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBlCW-0004db-Sp
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 13:10:37 +0000
Received: from [85.158.139.211:26417] by server-13.bemta-5.messagelabs.com id
	0C/C3-18801-C4BD4F25; Fri, 07 Feb 2014 13:10:36 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391778635!2395573!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22648 invoked from network); 7 Feb 2014 13:10:35 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 13:10:35 -0000
Received: by mail-we0-f181.google.com with SMTP id w61so2209884wes.26
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 05:10:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=mQIZdsM5QaYWXt03Z1hcI5tMNKMEy0l661+LQRGuepg=;
	b=kdBPExztkbkpVcY1TzD52yYFp7f6HByAjmni2GiupOG7BBbSIRSNojpmlxDbaEfrZJ
	P19ILES0VwCF5vFsVzH7whuM2ICFk6Vd2rH6EmGOBkIZCqJrPNZuwEV2Rs/XViMw+0yH
	y/v9solMicm4PSScxgR8GXcDmWU3q6PjniZLJyJQjrHdasNofvWCnKLFLBeiJnKkwd1X
	QODXZe/WUz2EAnjNvnmMJAojbvXgnBur3zwuY149+rIO1VIm4OFq12x4c0xElj7mp/lz
	SYXbL1qMVSFax23CGrXighiql1u6wtb1F6t9V56KN8GaenztErBxnShAltz9qupOLgJz
	CzlQ==
X-Gm-Message-State: ALoCoQnx8wiR3lSC+b7IIkeI6o2akzB0SGWSRKl0SamNQhsNY8urJ3Cmjx0ToLgIVWvdHDNdloR9
X-Received: by 10.194.189.4 with SMTP id ge4mr361753wjc.92.1391778635144;
	Fri, 07 Feb 2014 05:10:35 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	k10sm10569428wjf.11.2014.02.07.05.10.33 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 05:10:34 -0800 (PST)
Message-ID: <52F4DB49.3070502@linaro.org>
Date: Fri, 07 Feb 2014 13:10:33 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-5-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391775176-30313-5-git-send-email-ian.campbell@citrix.com>
Cc: tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v4 5/5] xen: arm: correct terminology for
 cache flush macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 12:12, Ian Campbell wrote:
> The term "flush" is slightly ambiguous. The correct ARM term for this
> operation is clean, as opposed to clean+invalidate, for which we also now
> have a function.
>
> This is a pure rename, no functional change.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
> This could easily be left for 4.5.

It would be nice to have a common nomenclature for the functions (a bit
like your TLB patch series).

But if the patch doesn't go into Xen 4.4, it won't change anything for
backporting. The function name is not reused.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 13:25:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 13:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBlQM-0005BR-JF; Fri, 07 Feb 2014 13:24:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBlQK-0005BM-UN
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 13:24:53 +0000
Received: from [193.109.254.147:3069] by server-15.bemta-14.messagelabs.com id
	A6/52-10839-4AED4F25; Fri, 07 Feb 2014 13:24:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391779490!2769958!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29347 invoked from network); 7 Feb 2014 13:24:50 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 13:24:50 -0000
Received: by mail-wg0-f54.google.com with SMTP id x13so2223570wgg.33
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 05:24:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=nBpKhv4cXWXwbyORG/PPcpHl/EYmurtvUDTDEVjM8jE=;
	b=gH/hQ+4ehNTGeOJ0LlT/KnsbRIPbuMvM4uuzW79zW2hxi3T9LzkrYV4tTZrbsHE/jS
	qyMZjN26VQuMG3bj0hPtOk1eFjW8+JjEhwQDCCHTkr5GOY+pELlfL86yb6lGoV09l9Xq
	tIifP7fVcQwLaM58HOL8Jk9RzyRkrjYoAXZaiX8yxjU3JfvDm1NERN5xiKUxicm5xrqs
	1g9YsUcI9IE3Y8GKfXwoCv2GJzfQ1UK3c37N1LGvhPRtPwfCsdzFSVzvIn/4JVa2y9Tc
	l9S0fFV831jjfYAfXNwh+Le2fsUVibgbQkhnsRUtO+6PD3IWXCySBp85Iei1BBPPWHCG
	6mFA==
X-Gm-Message-State: ALoCoQm32KfPtckatDag20BIUnoU0GrRYO2vnkZpwdljTqFYIz/YPfEQThyjvBZoyMoPsY9Se9Hu
X-Received: by 10.180.164.73 with SMTP id yo9mr3743978wib.29.1391779490404;
	Fri, 07 Feb 2014 05:24:50 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm10713699wjc.5.2014.02.07.05.24.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 05:24:49 -0800 (PST)
Message-ID: <52F4DEA0.3030207@linaro.org>
Date: Fri, 07 Feb 2014 13:24:48 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
Cc: keir@xen.org, tim@xen.org, ian.jackson@eu.citrix.com, jbeulich@suse.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 12:12, Ian Campbell wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
>
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has occurred
> gets committed to real RAM. To achieve this add a new cacheflush_page function,
> which is a stub on x86.
>
> Secondly we need to flush anything which the domain builder touches, which we
> do via a new domctl.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org> [for the ARM part]

> Cc: jbeulich@suse.com
> Cc: keir@xen.org
> Cc: ian.jackson@eu.citrix.com
> --
> v4: introduce a function to clean and invalidate as intended
>
>      make the domctl take a length not an end.
>
> v3:
>      s/cacheflush_page/sync_page_to_ram/
>
>      xc interface takes a length instead of an end
>
>      make the domctl range inclusive.
>
>      make xc interface internal -- it isn't needed from libxl in the current
>      design and it is easier to expose an interface in the future than to hide
>      it.
>
> v2:
>     Switch to cleaning at page allocation time + explicit flushing of the
>     regions which the toolstack touches.
>
>     Add XSM for new domctl.
>
>     New domctl restricts the amount of space it is willing to flush, to avoid
>     thinking about preemption.
> ---
>   tools/libxc/xc_dom_boot.c           |    4 ++++
>   tools/libxc/xc_dom_core.c           |    2 ++
>   tools/libxc/xc_domain.c             |   10 ++++++++++
>   tools/libxc/xc_private.c            |    2 ++
>   tools/libxc/xc_private.h            |    3 +++
>   xen/arch/arm/domctl.c               |   14 ++++++++++++++
>   xen/arch/arm/mm.c                   |   12 ++++++++++++
>   xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
>   xen/common/page_alloc.c             |    5 +++++
>   xen/include/asm-arm/arm32/page.h    |    4 ++++
>   xen/include/asm-arm/arm64/page.h    |    4 ++++
>   xen/include/asm-arm/p2m.h           |    3 +++
>   xen/include/asm-arm/page.h          |    3 +++
>   xen/include/asm-x86/page.h          |    3 +++
>   xen/include/public/domctl.h         |   13 +++++++++++++
>   xen/xsm/flask/hooks.c               |    3 +++
>   xen/xsm/flask/policy/access_vectors |    2 ++
>   17 files changed, 112 insertions(+)
>
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index 5a9cfc6..3d4d107 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>           return -1;
>       }
>
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But lets be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
> +
>       return 0;
>   }
>
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..b9d1015 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>           prev->next = phys->next;
>       else
>           dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
>   }
>
>   void xc_dom_unmap_all(struct xc_dom_image *dom)
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..f10ec01 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>       return 0;
>   }
>
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.nr_pfns = nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}
>
>   int xc_domain_pause(xc_interface *xch,
>                       uint32_t domid)
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index 838fd21..33ed15b 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
>           return -1;
>       memcpy(vaddr, src_page, PAGE_SIZE);
>       munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>       return 0;
>   }
>
> @@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
>           return -1;
>       memset(vaddr, 0, PAGE_SIZE);
>       munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>       return 0;
>   }
>
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 92271c9..a610f0c 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
>   /* Optionally flush file to disk and discard page cache */
>   void discard_file_cache(xc_interface *xch, int fd, int flush);
>
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> +
>   #define MAX_MMU_UPDATES 1024
>   struct xc_mmu {
>       mmu_update_t updates[MAX_MMU_UPDATES];
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..ffb68da 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>   {
>       switch ( domctl->cmd )
>       {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> +            return -EINVAL;
> +
> +        return p2m_cache_flush(d, s, e);
> +    }
> +
>       default:
>           return subarch_do_domctl(domctl, d, u_domctl);
>       }
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 2f48347..d2cfe64 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -338,6 +338,18 @@ unsigned long domain_page_map_to_mfn(const void *va)
>   }
>   #endif
>
> +void sync_page_to_ram(unsigned long mfn)
> +{
> +    void *p, *v = map_domain_page(mfn);
> +
> +    dsb();           /* So the CPU issues all writes to the range */
> +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
> +    dsb();           /* So we know the flushes happen before continuing */
> +
> +    unmap_domain_page(v);
> +}
> +
>   void __init arch_init_memory(void)
>   {
>       /*
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a61edeb..86f13e9 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -8,6 +8,7 @@
>   #include <asm/gic.h>
>   #include <asm/event.h>
>   #include <asm/hardirq.h>
> +#include <asm/page.h>
>
>   /* First level P2M is 2 consecutive pages */
>   #define P2M_FIRST_ORDER 1
> @@ -228,6 +229,7 @@ enum p2m_operation {
>       ALLOCATE,
>       REMOVE,
>       RELINQUISH,
> +    CACHEFLUSH,
>   };
>
>   static int apply_p2m_changes(struct domain *d,
> @@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
>                       count++;
>                   }
>                   break;
> +
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                        break;
> +
> +                    sync_page_to_ram(pte.p2m.base);
> +                }
> +                break;
>           }
>
>           /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> @@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
>                                 MATTR_MEM, p2m_invalid);
>   }
>
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> +    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +
> +    return apply_p2m_changes(d, CACHEFLUSH,
> +                             pfn_to_paddr(start_mfn),
> +                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(INVALID_MFN),
> +                             MATTR_MEM, p2m_invalid);
> +}
> +
>   unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>   {
>       paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..c73c717 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
>           /* Initialise fields which have other uses for free pages. */
>           pg[i].u.inuse.type_info = 0;
>           page_set_owner(&pg[i], NULL);
> +
> +        /* Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        sync_page_to_ram(page_to_mfn(&pg[i]));
>       }
>
>       spin_unlock(&heap_lock);
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index cf12a89..cb6add4 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>   /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>   #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
>
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
> +
>   /*
>    * Flush all hypervisor mappings from the TLB and branch predictor.
>    * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 9551f90..baf8903 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>   /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>   #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
>
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
> +
>   /*
>    * Flush all hypervisor mappings from the TLB
>    * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index e9c884a..3b39c45 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
>   /* Look up the MFN corresponding to a domain's PFN. */
>   paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
>
> +/* Clean & invalidate caches corresponding to a region of guest address space */
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +
>   /* Setup p2m RAM mapping for domain d from start-end. */
>   int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
>   /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 670d4e7..67d64c9 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
>               : : "r" (_p), "m" (*_p));                                   \
>   } while (0)
>
> +/* Flush the dcache for an entire page. */
> +void sync_page_to_ram(unsigned long mfn);
> +
>   /* Print a walk of an arbitrary page table */
>   void dump_pt_walk(lpae_t *table, paddr_t addr);
>
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 7a46af5..abe35fb 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>       return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>   }
>
> +/* No cache maintenance required on x86 architecture. */
> +static inline void sync_page_to_ram(unsigned long mfn) {}
> +
>   /* return true if permission increased */
>   static inline bool_t
>   perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..f22fe2e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>   typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>   DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>   struct xen_domctl {
>       uint32_t cmd;
>   #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +965,7 @@ struct xen_domctl {
>   #define XEN_DOMCTL_setnodeaffinity               68
>   #define XEN_DOMCTL_getnodeaffinity               69
>   #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>   #define XEN_DOMCTL_gdbsx_guestmemio            1000
>   #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>   #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1024,7 @@ struct xen_domctl {
>           struct xen_domctl_set_max_evtchn    set_max_evtchn;
>           struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>           struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>           struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>           struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>           uint8_t                             pad[128];
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 50a35fc..1345d7e 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>       case XEN_DOMCTL_set_max_evtchn:
>           return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
>
> +    case XEN_DOMCTL_cacheflush:
> +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> +
>       default:
>           printk("flask_domctl: Unknown op %d\n", cmd);
>           return -EPERM;
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1fbe241..a0ed13d 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -196,6 +196,8 @@ class domain2
>       setclaim
>   # XEN_DOMCTL_set_max_evtchn
>       set_max_evtchn
> +# XEN_DOMCTL_cacheflush
> +    cacheflush
>   }
>
>   # Similar to class domain, but primarily contains domctls related to HVM domains
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 13:25:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 13:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBlQM-0005BR-JF; Fri, 07 Feb 2014 13:24:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBlQK-0005BM-UN
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 13:24:53 +0000
Received: from [193.109.254.147:3069] by server-15.bemta-14.messagelabs.com id
	A6/52-10839-4AED4F25; Fri, 07 Feb 2014 13:24:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391779490!2769958!1
X-Originating-IP: [74.125.82.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29347 invoked from network); 7 Feb 2014 13:24:50 -0000
Received: from mail-wg0-f54.google.com (HELO mail-wg0-f54.google.com)
	(74.125.82.54)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 13:24:50 -0000
Received: by mail-wg0-f54.google.com with SMTP id x13so2223570wgg.33
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 05:24:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=nBpKhv4cXWXwbyORG/PPcpHl/EYmurtvUDTDEVjM8jE=;
	b=gH/hQ+4ehNTGeOJ0LlT/KnsbRIPbuMvM4uuzW79zW2hxi3T9LzkrYV4tTZrbsHE/jS
	qyMZjN26VQuMG3bj0hPtOk1eFjW8+JjEhwQDCCHTkr5GOY+pELlfL86yb6lGoV09l9Xq
	tIifP7fVcQwLaM58HOL8Jk9RzyRkrjYoAXZaiX8yxjU3JfvDm1NERN5xiKUxicm5xrqs
	1g9YsUcI9IE3Y8GKfXwoCv2GJzfQ1UK3c37N1LGvhPRtPwfCsdzFSVzvIn/4JVa2y9Tc
	l9S0fFV831jjfYAfXNwh+Le2fsUVibgbQkhnsRUtO+6PD3IWXCySBp85Iei1BBPPWHCG
	6mFA==
X-Gm-Message-State: ALoCoQm32KfPtckatDag20BIUnoU0GrRYO2vnkZpwdljTqFYIz/YPfEQThyjvBZoyMoPsY9Se9Hu
X-Received: by 10.180.164.73 with SMTP id yo9mr3743978wib.29.1391779490404;
	Fri, 07 Feb 2014 05:24:50 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm10713699wjc.5.2014.02.07.05.24.48
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 05:24:49 -0800 (PST)
Message-ID: <52F4DEA0.3030207@linaro.org>
Date: Fri, 07 Feb 2014 13:24:48 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
Cc: keir@xen.org, tim@xen.org, ian.jackson@eu.citrix.com, jbeulich@suse.com,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 12:12, Ian Campbell wrote:
> Guests are initially started with caches disabled, so we need to make sure
> they see consistent data in RAM (requiring a cache clean), but also that
> stale data does not suddenly reappear in the caches when they enable their
> caches (requiring the invalidate).
>
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has occurred
> gets committed to real RAM. To achieve this add a new cacheflush_page function,
> which is a stub on x86.
>
> Secondly we need to flush anything which the domain builder touches, which we
> do via a new domctl.
>
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org> [for the ARM part]

> Cc: jbeulich@suse.com
> Cc: keir@xen.org
> Cc: ian.jackson@eu.citrix.com
> ---
> v4: introduce a function to clean and invalidate as intended
>
>      make the domctl take a length not an end.
>
> v3:
>      s/cacheflush_page/sync_page_to_ram/
>
>      xc interface takes a length instead of an end
>
>      make the domctl range inclusive.
>
>      make xc interface internal -- it isn't needed from libxl in the current
>      design and it is easier to expose an interface in the future than to hide
>      it.
>
> v2:
>     Switch to cleaning at page allocation time + explicit flushing of the
>     regions which the toolstack touches.
>
>     Add XSM for new domctl.
>
>     New domctl restricts the amount of space it is willing to flush, to avoid
>     thinking about preemption.
> ---
>   tools/libxc/xc_dom_boot.c           |    4 ++++
>   tools/libxc/xc_dom_core.c           |    2 ++
>   tools/libxc/xc_domain.c             |   10 ++++++++++
>   tools/libxc/xc_private.c            |    2 ++
>   tools/libxc/xc_private.h            |    3 +++
>   xen/arch/arm/domctl.c               |   14 ++++++++++++++
>   xen/arch/arm/mm.c                   |   12 ++++++++++++
>   xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
>   xen/common/page_alloc.c             |    5 +++++
>   xen/include/asm-arm/arm32/page.h    |    4 ++++
>   xen/include/asm-arm/arm64/page.h    |    4 ++++
>   xen/include/asm-arm/p2m.h           |    3 +++
>   xen/include/asm-arm/page.h          |    3 +++
>   xen/include/asm-x86/page.h          |    3 +++
>   xen/include/public/domctl.h         |   13 +++++++++++++
>   xen/xsm/flask/hooks.c               |    3 +++
>   xen/xsm/flask/policy/access_vectors |    2 ++
>   17 files changed, 112 insertions(+)
>
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index 5a9cfc6..3d4d107 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>           return -1;
>       }
>
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But let's be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
> +
>       return 0;
>   }
>
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..b9d1015 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>           prev->next = phys->next;
>       else
>           dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
>   }
>
>   void xc_dom_unmap_all(struct xc_dom_image *dom)
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..f10ec01 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>       return 0;
>   }
>
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.nr_pfns = nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}
>
>   int xc_domain_pause(xc_interface *xch,
>                       uint32_t domid)
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index 838fd21..33ed15b 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
>           return -1;
>       memcpy(vaddr, src_page, PAGE_SIZE);
>       munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>       return 0;
>   }
>
> @@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
>           return -1;
>       memset(vaddr, 0, PAGE_SIZE);
>       munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>       return 0;
>   }
>
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 92271c9..a610f0c 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
>   /* Optionally flush file to disk and discard page cache */
>   void discard_file_cache(xc_interface *xch, int fd, int flush);
>
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> +
>   #define MAX_MMU_UPDATES 1024
>   struct xc_mmu {
>       mmu_update_t updates[MAX_MMU_UPDATES];
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..ffb68da 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>   {
>       switch ( domctl->cmd )
>       {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> +            return -EINVAL;
> +
> +        return p2m_cache_flush(d, s, e);
> +    }
> +
>       default:
>           return subarch_do_domctl(domctl, d, u_domctl);
>       }
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 2f48347..d2cfe64 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -338,6 +338,18 @@ unsigned long domain_page_map_to_mfn(const void *va)
>   }
>   #endif
>
> +void sync_page_to_ram(unsigned long mfn)
> +{
> +    void *p, *v = map_domain_page(mfn);
> +
> +    dsb();           /* So the CPU issues all writes to the range */
> +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
> +    dsb();           /* So we know the flushes happen before continuing */
> +
> +    unmap_domain_page(v);
> +}
> +
>   void __init arch_init_memory(void)
>   {
>       /*
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a61edeb..86f13e9 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -8,6 +8,7 @@
>   #include <asm/gic.h>
>   #include <asm/event.h>
>   #include <asm/hardirq.h>
> +#include <asm/page.h>
>
>   /* First level P2M is 2 consecutive pages */
>   #define P2M_FIRST_ORDER 1
> @@ -228,6 +229,7 @@ enum p2m_operation {
>       ALLOCATE,
>       REMOVE,
>       RELINQUISH,
> +    CACHEFLUSH,
>   };
>
>   static int apply_p2m_changes(struct domain *d,
> @@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
>                       count++;
>                   }
>                   break;
> +
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                        break;
> +
> +                    sync_page_to_ram(pte.p2m.base);
> +                }
> +                break;
>           }
>
>           /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> @@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
>                                 MATTR_MEM, p2m_invalid);
>   }
>
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> +    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +
> +    return apply_p2m_changes(d, CACHEFLUSH,
> +                             pfn_to_paddr(start_mfn),
> +                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(INVALID_MFN),
> +                             MATTR_MEM, p2m_invalid);
> +}
> +
>   unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>   {
>       paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..c73c717 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
>           /* Initialise fields which have other uses for free pages. */
>           pg[i].u.inuse.type_info = 0;
>           page_set_owner(&pg[i], NULL);
> +
> +        /* Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        sync_page_to_ram(page_to_mfn(&pg[i]));
>       }
>
>       spin_unlock(&heap_lock);
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index cf12a89..cb6add4 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>   /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>   #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
>
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
> +
>   /*
>    * Flush all hypervisor mappings from the TLB and branch predictor.
>    * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 9551f90..baf8903 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>   /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>   #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
>
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
> +
>   /*
>    * Flush all hypervisor mappings from the TLB
>    * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index e9c884a..3b39c45 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
>   /* Look up the MFN corresponding to a domain's PFN. */
>   paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
>
> +/* Clean & invalidate caches corresponding to a region of guest address space */
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +
>   /* Setup p2m RAM mapping for domain d from start-end. */
>   int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
>   /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 670d4e7..67d64c9 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
>               : : "r" (_p), "m" (*_p));                                   \
>   } while (0)
>
> +/* Flush the dcache for an entire page. */
> +void sync_page_to_ram(unsigned long mfn);
> +
>   /* Print a walk of an arbitrary page table */
>   void dump_pt_walk(lpae_t *table, paddr_t addr);
>
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 7a46af5..abe35fb 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>       return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>   }
>
> +/* No cache maintenance required on x86 architecture. */
> +static inline void sync_page_to_ram(unsigned long mfn) {}
> +
>   /* return true if permission increased */
>   static inline bool_t
>   perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..f22fe2e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>   typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>   DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>   struct xen_domctl {
>       uint32_t cmd;
>   #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +965,7 @@ struct xen_domctl {
>   #define XEN_DOMCTL_setnodeaffinity               68
>   #define XEN_DOMCTL_getnodeaffinity               69
>   #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>   #define XEN_DOMCTL_gdbsx_guestmemio            1000
>   #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>   #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1024,7 @@ struct xen_domctl {
>           struct xen_domctl_set_max_evtchn    set_max_evtchn;
>           struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>           struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>           struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>           struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>           uint8_t                             pad[128];
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 50a35fc..1345d7e 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>       case XEN_DOMCTL_set_max_evtchn:
>           return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
>
> +    case XEN_DOMCTL_cacheflush:
> +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> +
>       default:
>           printk("flask_domctl: Unknown op %d\n", cmd);
>           return -EPERM;
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1fbe241..a0ed13d 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -196,6 +196,8 @@ class domain2
>       setclaim
>   # XEN_DOMCTL_set_max_evtchn
>       set_max_evtchn
> +# XEN_DOMCTL_cacheflush
> +    cacheflush
>   }
>
>   # Similar to class domain, but primarily contains domctls related to HVM domains
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 13:26:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 13:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBlRS-0005GA-EN; Fri, 07 Feb 2014 13:26:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBlRQ-0005G0-AS
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 13:26:00 +0000
Received: from [85.158.137.68:51317] by server-6.bemta-3.messagelabs.com id
	D8/F9-09180-7EED4F25; Fri, 07 Feb 2014 13:25:59 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391779558!335355!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9638 invoked from network); 7 Feb 2014 13:25:58 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 13:25:58 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so826408wib.12
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 05:25:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=2MVc7Ss8s6Vdzgb72k4COLz5v93vOkV5vZ6oLSUI9cI=;
	b=mJMcyUV/oupwPpCdsnNfgctewZFENkxPxIwZ3p9ZiYDMwEOcNl1q4XNVvP02JOyvEa
	iWXMKuWzHybnJNzF4rQ+VPsvzG75plyzK5uqdZNQgy0q2rE095g6W69v7rWAGxL+pyxT
	lM4vAiPbjsJvwQVrNE2YifCMuJQkBK51m/A4HmcJsYqZnvLRVktWan7Ir9eRSIbYH/4O
	AKfcXdDTclrEoVzClcnSORRNXi4L1gvUYW0XdsDr/yDSh0xvq1zYKUdcMi31nb0MUkZ2
	SNOfrgEv+tNx8lQyS7eJsDD/NSBIAcSyCuz7h2P00Vc0sCtpDYQXw2k1TYPsXdO18bux
	/yQw==
X-Gm-Message-State: ALoCoQkQ0T4wsXKutW+Dsbs9EHSIxhOsYEIrdwYuoAYy+mL4fmKRzc5kpd36ASG3FugE8jA11vki
X-Received: by 10.194.250.34 with SMTP id yz2mr10591816wjc.18.1391779558258;
	Fri, 07 Feb 2014 05:25:58 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id hy8sm10709572wjb.2.2014.02.07.05.25.56
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 05:25:57 -0800 (PST)
Message-ID: <52F4DEE4.3080106@linaro.org>
Date: Fri, 07 Feb 2014 13:25:56 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>, xen-devel@lists.xen.org
References: <1391777836-12260-1-git-send-email-pranavkumar@linaro.org>
In-Reply-To: <1391777836-12260-1-git-send-email-pranavkumar@linaro.org>
Cc: patches@apm.com, stefano.stabellini@citrix.com, ian.campbell@citrix.com,
	Anup Patel <anup.patel@linaro.org>, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH V2] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Pranavkumar,

On 07/02/14 12:57, Pranavkumar Sawargaonkar wrote:
> This patch addresses the memory clobbering issue mentioned by Julien Grall
> with my earlier patch -
> Commit Id: 712eb2e04da2cbcd9908f74ebd47c6df60d6d12f
>
> Discussion related to this fix -
> http://www.gossamer-threads.com/lists/xen/devel/316247
>
> V2: Incorporating comments received on V1.
> V1: Initial Patch
>
> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> Signed-off-by: Anup Patel <anup.patel@linaro.org>
Acked-by: Julien Grall <julien.grall@linaro.org>

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 14:29:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 14:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmQX-0007xD-Cm; Fri, 07 Feb 2014 14:29:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WBmQV-0007uX-Aj
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 14:29:08 +0000
Received: from [193.109.254.147:49195] by server-6.bemta-14.messagelabs.com id
	AA/66-03396-2BDE4F25; Fri, 07 Feb 2014 14:29:06 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391783344!2784264!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32367 invoked from network); 7 Feb 2014 14:29:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 14:29:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,800,1384300800"; d="scan'208";a="100833425"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 14:29:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 09:29:03 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WBmQR-0005CD-BS;
	Fri, 07 Feb 2014 14:29:03 +0000
Message-ID: <52F4EDAD.7020500@eu.citrix.com>
Date: Fri, 7 Feb 2014 14:29:01 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Pranavkumar Sawargaonkar
	<pranavkumar@linaro.org>
References: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
	<1391773158.2162.81.camel@kazak.uk.xensource.com>
In-Reply-To: <1391773158.2162.81.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: patches@apm.com, patches@linaro.org, stefano.stabellini@citrix.com,
	Anup Patel <anup.patel@linaro.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 11:39 AM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 16:08 +0530, Pranavkumar Sawargaonkar wrote:
>> This patch addresses the memory clobbering issue mentioned by Julien Grall
>> with my earlier patch -
>> Ref:
>> http://www.gossamer-threads.com/lists/xen/devel/316247
>>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
>> Signed-off-by: Anup Patel <anup.patel@linaro.org>
>> ---
>>   xen/arch/arm/arm64/vfp.c |   70 ++++++++++++++++++++++++----------------------
>>   1 file changed, 36 insertions(+), 34 deletions(-)
>>
>> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
>> index c09cf0c..62f56a3 100644
>> --- a/xen/arch/arm/arm64/vfp.c
>> +++ b/xen/arch/arm/arm64/vfp.c
>> @@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
>>       if ( !cpu_has_fp )
>>           return;
>>   
>> -    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
>> -                 "stp q2, q3, [%0, #16 * 2]\n\t"
>> -                 "stp q4, q5, [%0, #16 * 4]\n\t"
>> -                 "stp q6, q7, [%0, #16 * 6]\n\t"
>> -                 "stp q8, q9, [%0, #16 * 8]\n\t"
>> -                 "stp q10, q11, [%0, #16 * 10]\n\t"
>> -                 "stp q12, q13, [%0, #16 * 12]\n\t"
>> -                 "stp q14, q15, [%0, #16 * 14]\n\t"
>> -                 "stp q16, q17, [%0, #16 * 16]\n\t"
>> -                 "stp q18, q19, [%0, #16 * 18]\n\t"
>> -                 "stp q20, q21, [%0, #16 * 20]\n\t"
>> -                 "stp q22, q23, [%0, #16 * 22]\n\t"
>> -                 "stp q24, q25, [%0, #16 * 24]\n\t"
>> -                 "stp q26, q27, [%0, #16 * 26]\n\t"
>> -                 "stp q28, q29, [%0, #16 * 28]\n\t"
>> -                 "stp q30, q31, [%0, #16 * 30]\n\t"
>> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> +    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
>> +                 "stp q2, q3, [%1, #16 * 2]\n\t"
>> +                 "stp q4, q5, [%1, #16 * 4]\n\t"
>> +                 "stp q6, q7, [%1, #16 * 6]\n\t"
>> +                 "stp q8, q9, [%1, #16 * 8]\n\t"
>> +                 "stp q10, q11, [%1, #16 * 10]\n\t"
>> +                 "stp q12, q13, [%1, #16 * 12]\n\t"
>> +                 "stp q14, q15, [%1, #16 * 14]\n\t"
>> +                 "stp q16, q17, [%1, #16 * 16]\n\t"
>> +                 "stp q18, q19, [%1, #16 * 18]\n\t"
>> +                 "stp q20, q21, [%1, #16 * 20]\n\t"
>> +                 "stp q22, q23, [%1, #16 * 22]\n\t"
>> +                 "stp q24, q25, [%1, #16 * 24]\n\t"
>> +                 "stp q26, q27, [%1, #16 * 26]\n\t"
>> +                 "stp q28, q29, [%1, #16 * 28]\n\t"
>> +                 "stp q30, q31, [%1, #16 * 30]\n\t"
>> +                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
>> +                 : "memory");
> The point of this change was to be able to drop the memory clobbers.
>
> George, I'd like to take this in 4.4 if possible -- I wanted to get the
> baseline functionality fixed for 4.4 ASAP since it was quite a big hole
> which is why I committed without waiting for this respin.
>
> The issue is that the patch which was committed yesterday clobbers all
> of memory and not just the bits the inline asm touches.

Obviously there's not much point in releasing a version with a fix that 
doesn't work. :-)

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 14:34:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 14:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmVW-0008DI-8g; Fri, 07 Feb 2014 14:34:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBmVU-0008DC-CA
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 14:34:16 +0000
Received: from [85.158.137.68:38435] by server-5.bemta-3.messagelabs.com id
	5A/71-04712-7EEE4F25; Fri, 07 Feb 2014 14:34:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391783653!356302!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29901 invoked from network); 7 Feb 2014 14:34:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 14:34:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100834880"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 14:34:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	09:34:12 -0500
Message-ID: <1391783651.2162.103.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 7 Feb 2014 14:34:11 +0000
In-Reply-To: <52F4E663020000780011A3B0@nat28.tlf.novell.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<52F4E663020000780011A3B0@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 12:57 +0000, Jan Beulich wrote:
> >>> On 07.02.14 at 13:12, Ian Campbell <ian.campbell@citrix.com> wrote:
> > --- a/xen/arch/arm/domctl.c
> > +++ b/xen/arch/arm/domctl.c
> > @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
> >  {
> >      switch ( domctl->cmd )
> >      {
> > +    case XEN_DOMCTL_cacheflush:
> > +    {
> > +        unsigned long s = domctl->u.cacheflush.start_pfn;
> > +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> > +
> > +        if ( e < s )
> > +            return -EINVAL;
> > +
> > +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> > +            return -EINVAL;
> 
> get_order_from_pages() takes an unsigned long, while xen_pfn_t
> is - iirc - 64-bits even on arm32. So you're not checking the full
> passed in value, yet use the full one in the calculation of "e" (which
> is what gets passed down).

Yes, you are right, I should have made nr_pfns a smaller type.

> Also, did you consider the nr_pfns == 0 case? At present, due to
> the way get_order_from_pages() works, this will produce -EINVAL.
> I'm not sure that's intended.

I think nr == 0 => EINVAL is probably ok.

But actually this made me realise that using get_order_from_pages is a
bit silly here. It seems more logical to compare nr_pfns with
1<<(MAX_ORDER-PAGE_ORDER) now that we have a length in hand.
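
[Editorial note: the validation Ian sketches above could look like the following. The constant values are illustrative only, not Xen's actual definitions, and the function name is hypothetical; the point is that a direct comparison against the page-count limit sidesteps both the truncation issue Jan raised and the nr_pfns == 0 quirk of get_order_from_pages().]

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative values; Xen defines these per architecture. */
#define MAX_ORDER   18
#define PAGE_ORDER  0

/*
 * Reject wrapped ranges, empty ranges, and ranges larger than
 * 1 << (MAX_ORDER - PAGE_ORDER) pages. Using uint64_t throughout
 * checks the full xen_pfn_t value, even on a 32-bit hypervisor.
 */
static bool cacheflush_range_ok(uint64_t start_pfn, uint64_t nr_pfns)
{
    uint64_t end = start_pfn + nr_pfns;

    if ( end < start_pfn )   /* arithmetic wrap-around */
        return false;
    if ( nr_pfns == 0 )      /* empty range: treated as -EINVAL */
        return false;
    if ( nr_pfns > (1ULL << (MAX_ORDER - PAGE_ORDER)) )
        return false;
    return true;
}
```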

> 
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
> >  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
> >  
> > +/*
> > + * ARM: Clean and invalidate caches associated with given region of
> > + * guest memory.
> > + */
> > +struct xen_domctl_cacheflush {
> > +    /* IN: page range to flush. */
> > +    xen_pfn_t start_pfn, nr_pfns;
> > +};
> 
> The name here (and of the libxc interface) is now certainly
> counterintuitive. But it's a domctl (and an internal interface),
> which we can change post-4.4 (I'd envision it to actually take
> a flags parameter indicating the kind of flush that's wanted).

Sounds OK to me, thanks.

> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
> >      case XEN_DOMCTL_set_max_evtchn:
> >          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
> >  
> > +    case XEN_DOMCTL_cacheflush:
> > +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> 
> Hard tab.

Well spotted.

This file is missing the emacs magic block. I'll fix this and add one.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 14:34:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 14:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmVW-0008DI-8g; Fri, 07 Feb 2014 14:34:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBmVU-0008DC-CA
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 14:34:16 +0000
Received: from [85.158.137.68:38435] by server-5.bemta-3.messagelabs.com id
	5A/71-04712-7EEE4F25; Fri, 07 Feb 2014 14:34:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391783653!356302!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29901 invoked from network); 7 Feb 2014 14:34:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 14:34:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100834880"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 14:34:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	09:34:12 -0500
Message-ID: <1391783651.2162.103.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 7 Feb 2014 14:34:11 +0000
In-Reply-To: <52F4E663020000780011A3B0@nat28.tlf.novell.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<52F4E663020000780011A3B0@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 12:57 +0000, Jan Beulich wrote:
> >>> On 07.02.14 at 13:12, Ian Campbell <ian.campbell@citrix.com> wrote:
> > --- a/xen/arch/arm/domctl.c
> > +++ b/xen/arch/arm/domctl.c
> > @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
> >  {
> >      switch ( domctl->cmd )
> >      {
> > +    case XEN_DOMCTL_cacheflush:
> > +    {
> > +        unsigned long s = domctl->u.cacheflush.start_pfn;
> > +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> > +
> > +        if ( e < s )
> > +            return -EINVAL;
> > +
> > +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> > +            return -EINVAL;
> 
> get_order_from_pages() takes an unsigned long, while xen_pfn_t
> is - iirc - 64-bits even on arm32. So you're not checking the full
> passed in value, yet use the full one in the calculation of "e" (which
> is what gets passed down).

Yes, you are right: I should have made nr_pfns a smaller type.

> Also, did you consider the nr_pfns == 0 case? At present, due to
> the way get_order_from_pages() works, this will produce -EINVAL.
> I'm not sure that's intended.

I think nr_pfns == 0 => -EINVAL is probably OK.

But actually this made me realise that using get_order_from_pages() is a
bit silly here. It seems more logical to compare nr_pfns with
1<<(MAX_ORDER-PAGE_ORDER) now that we have a length in hand.
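Spelled out as a standalone sketch, the revised check could look like the following. The value of MAX_ORDER, the function name, and folding PAGE_ORDER into the constant are all illustrative assumptions here, not Xen's actual definitions:

```c
#include <errno.h>
#include <stdint.h>

/* Placeholder value; the real MAX_ORDER comes from Xen's headers. */
#define MAX_ORDER 20

/* Hypothetical helper: validate a cacheflush request by comparing the
 * page count directly, instead of going through get_order_from_pages()
 * (which rounds up and only sees the truncated unsigned long). */
static int cacheflush_check(uint64_t start_pfn, uint64_t nr_pfns)
{
    uint64_t end = start_pfn + nr_pfns;

    if ( nr_pfns == 0 )                     /* empty range => -EINVAL */
        return -EINVAL;

    if ( end < start_pfn )                  /* range wraps around */
        return -EINVAL;

    if ( nr_pfns > (UINT64_C(1) << MAX_ORDER) ) /* cap work per call */
        return -EINVAL;

    return 0;
}
```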

> 
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
> >  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
> >  
> > +/*
> > + * ARM: Clean and invalidate caches associated with given region of
> > + * guest memory.
> > + */
> > +struct xen_domctl_cacheflush {
> > +    /* IN: page range to flush. */
> > +    xen_pfn_t start_pfn, nr_pfns;
> > +};
> 
> The name here (and of the libxc interface) is now certainly
> counterintuitive. But it's a domctl (and an internal interface),
> which we can change post-4.4 (I'd envision it to actually take
> a flags parameter indicating the kind of flush that's wanted).

Sounds OK to me, thanks.

> > --- a/xen/xsm/flask/hooks.c
> > +++ b/xen/xsm/flask/hooks.c
> > @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
> >      case XEN_DOMCTL_set_max_evtchn:
> >          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
> >  
> > +    case XEN_DOMCTL_cacheflush:
> > +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> 
> Hard tab.

Well spotted.

This file is missing the emacs magic block. I'll fix the hard tab and add one.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 14:35:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 14:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmWS-0008Hm-P1; Fri, 07 Feb 2014 14:35:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBmWS-0008He-9S
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 14:35:16 +0000
Received: from [193.109.254.147:12896] by server-4.bemta-14.messagelabs.com id
	50/47-32066-32FE4F25; Fri, 07 Feb 2014 14:35:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391783709!2761956!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9629 invoked from network); 7 Feb 2014 14:35:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 14:35:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100835136"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 14:35:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	09:35:08 -0500
Message-ID: <1391783707.2162.104.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 7 Feb 2014 14:35:07 +0000
In-Reply-To: <52F4EDAD.7020500@eu.citrix.com>
References: <1391769538-9091-1-git-send-email-pranavkumar@linaro.org>
	<1391773158.2162.81.camel@kazak.uk.xensource.com>
	<52F4EDAD.7020500@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Fix memory clobbering
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 14:29 +0000, George Dunlap wrote:
> On 02/07/2014 11:39 AM, Ian Campbell wrote:
> > On Fri, 2014-02-07 at 16:08 +0530, Pranavkumar Sawargaonkar wrote:
> >> This patch addresses the memory clobbering issue mentioned by Julien Grall
> >> with my earlier patch -
> >> Ref:
> >> http://www.gossamer-threads.com/lists/xen/devel/316247
> >>
> >> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> >> Signed-off-by: Anup Patel <anup.patel@linaro.org>
> >> ---
> >>   xen/arch/arm/arm64/vfp.c |   70 ++++++++++++++++++++++++----------------------
> >>   1 file changed, 36 insertions(+), 34 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
> >> index c09cf0c..62f56a3 100644
> >> --- a/xen/arch/arm/arm64/vfp.c
> >> +++ b/xen/arch/arm/arm64/vfp.c
> >> @@ -8,23 +8,24 @@ void vfp_save_state(struct vcpu *v)
> >>       if ( !cpu_has_fp )
> >>           return;
> >>   
> >> -    asm volatile("stp q0, q1, [%0, #16 * 0]\n\t"
> >> -                 "stp q2, q3, [%0, #16 * 2]\n\t"
> >> -                 "stp q4, q5, [%0, #16 * 4]\n\t"
> >> -                 "stp q6, q7, [%0, #16 * 6]\n\t"
> >> -                 "stp q8, q9, [%0, #16 * 8]\n\t"
> >> -                 "stp q10, q11, [%0, #16 * 10]\n\t"
> >> -                 "stp q12, q13, [%0, #16 * 12]\n\t"
> >> -                 "stp q14, q15, [%0, #16 * 14]\n\t"
> >> -                 "stp q16, q17, [%0, #16 * 16]\n\t"
> >> -                 "stp q18, q19, [%0, #16 * 18]\n\t"
> >> -                 "stp q20, q21, [%0, #16 * 20]\n\t"
> >> -                 "stp q22, q23, [%0, #16 * 22]\n\t"
> >> -                 "stp q24, q25, [%0, #16 * 24]\n\t"
> >> -                 "stp q26, q27, [%0, #16 * 26]\n\t"
> >> -                 "stp q28, q29, [%0, #16 * 28]\n\t"
> >> -                 "stp q30, q31, [%0, #16 * 30]\n\t"
> >> -                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> >> +    asm volatile("stp q0, q1, [%1, #16 * 0]\n\t"
> >> +                 "stp q2, q3, [%1, #16 * 2]\n\t"
> >> +                 "stp q4, q5, [%1, #16 * 4]\n\t"
> >> +                 "stp q6, q7, [%1, #16 * 6]\n\t"
> >> +                 "stp q8, q9, [%1, #16 * 8]\n\t"
> >> +                 "stp q10, q11, [%1, #16 * 10]\n\t"
> >> +                 "stp q12, q13, [%1, #16 * 12]\n\t"
> >> +                 "stp q14, q15, [%1, #16 * 14]\n\t"
> >> +                 "stp q16, q17, [%1, #16 * 16]\n\t"
> >> +                 "stp q18, q19, [%1, #16 * 18]\n\t"
> >> +                 "stp q20, q21, [%1, #16 * 20]\n\t"
> >> +                 "stp q22, q23, [%1, #16 * 22]\n\t"
> >> +                 "stp q24, q25, [%1, #16 * 24]\n\t"
> >> +                 "stp q26, q27, [%1, #16 * 26]\n\t"
> >> +                 "stp q28, q29, [%1, #16 * 28]\n\t"
> >> +                 "stp q30, q31, [%1, #16 * 30]\n\t"
> >> +                 :"=Q" (*v->arch.vfp.fpregs): "r" (v->arch.vfp.fpregs)
> >> +                 : "memory");
> > The point of this change was to be able to drop the memory clobbers.
> >
> > George, I'd like to take this in 4.4 if possible -- I wanted to get the
> > baseline functionality fixed for 4.4 ASAP since it was quite a big hole
> > which is why I committed without waiting for this respin.
> >
> > The issue is that the patch which was committed yesterday clobbers all
> > of memory and not just the bits the inline asm touches.
> 
> Obviously there's not much point in releasing a version with a fix that 
> doesn't work. :-)

It does work, just the clobber is too aggressive.
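The distinction under discussion can be sketched in portable form. The asm bodies below are empty stand-ins for the block of stp stores, and "+m" is used instead of the patch's "=Q" so the empty body doesn't let the compiler discard the preceding store as dead:

```c
#include <string.h>

/* A "memory" clobber tells the compiler *all* of memory may have
 * changed, forcing it to spill and reload everything live around the
 * asm. A specific memory operand ("+m" here, "=Q" on arm64 in the
 * patch) only pessimises the named object. */
struct fpregs { unsigned char b[512]; };

static void save_with_clobber(struct fpregs *r)
{
    memset(r->b, 0xab, sizeof(r->b));        /* stands in for stp */
    __asm__ volatile("" : : "r" (r) : "memory");  /* everything spilled */
}

static void save_with_output(struct fpregs *r)
{
    memset(r->b, 0xab, sizeof(r->b));        /* stands in for stp */
    __asm__ volatile("" : "+m" (*r));        /* only *r pessimised */
}
```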

> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 14:54:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 14:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmox-0000p8-L9; Fri, 07 Feb 2014 14:54:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBmov-0000om-CS
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 14:54:21 +0000
Received: from [85.158.137.68:17190] by server-17.bemta-3.messagelabs.com id
	DB/DA-22569-C93F4F25; Fri, 07 Feb 2014 14:54:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391784857!356789!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15927 invoked from network); 7 Feb 2014 14:54:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 14:54:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="98986917"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 14:54:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 09:54:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBmjz-0005Uz-LV;
	Fri, 07 Feb 2014 14:49:15 +0000
Date: Fri, 7 Feb 2014 14:49:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1402071439490.4373@kaball.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Ian Campbell wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
> 
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has occurred
> gets committed to real RAM. To achieve this add a new cacheflush_page function,
> which is a stub on x86.
> 
> Secondly we need to flush anything which the domain builder touches, which we
> do via a new domctl.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: jbeulich@suse.com
> Cc: keir@xen.org
> Cc: ian.jackson@eu.citrix.com
> --
> v4: introduce a function to clean and invalidate as intended
> 
>     make the domctl take a length not an end.
> 
> v3:
>     s/cacheflush_page/sync_page_to_ram/
> 
>     xc interface takes a length instead of an end
> 
>     make the domctl range inclusive.
> 
>     make xc interface internal -- it isn't needed from libxl in the current
>     design and it is easier to expose an interface in the future than to hide
>     it.
> 
> v2:
>    Switch to cleaning at page allocation time + explicit flushing of the
>    regions which the toolstack touches.
> 
>    Add XSM for new domctl.
> 
>    New domctl restricts the amount of space it is willing to flush, to avoid
>    thinking about preemption.
> ---
>  tools/libxc/xc_dom_boot.c           |    4 ++++
>  tools/libxc/xc_dom_core.c           |    2 ++
>  tools/libxc/xc_domain.c             |   10 ++++++++++
>  tools/libxc/xc_private.c            |    2 ++
>  tools/libxc/xc_private.h            |    3 +++
>  xen/arch/arm/domctl.c               |   14 ++++++++++++++
>  xen/arch/arm/mm.c                   |   12 ++++++++++++
>  xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
>  xen/common/page_alloc.c             |    5 +++++
>  xen/include/asm-arm/arm32/page.h    |    4 ++++
>  xen/include/asm-arm/arm64/page.h    |    4 ++++
>  xen/include/asm-arm/p2m.h           |    3 +++
>  xen/include/asm-arm/page.h          |    3 +++
>  xen/include/asm-x86/page.h          |    3 +++
>  xen/include/public/domctl.h         |   13 +++++++++++++
>  xen/xsm/flask/hooks.c               |    3 +++
>  xen/xsm/flask/policy/access_vectors |    2 ++
>  17 files changed, 112 insertions(+)
> 
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index 5a9cfc6..3d4d107 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>          return -1;
>      }
>  
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But lets be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
> +
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..b9d1015 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>          prev->next = phys->next;
>      else
>          dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);

I am not sure I understand the semantics of xc_dom_unmap_one: from the
name of the function and its parameters I would think that it is
supposed to unmap just one page, in which case we should flush just one
page. However, the implementation calls

munmap(phys->ptr, phys->count << page_shift)

so it can actually unmap more than a single page. I therefore think you
did the right thing by calling xc_domain_cacheflush with phys->first
and phys->count.
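That behaviour can be sketched in isolation. The struct fields mirror the mapping-tracking struct in xc_dom_core.c; the flush stub and its instrumentation are invented here for illustration only:

```c
#include <stdint.h>

/* One xc_dom "mapping" can span several pages: munmap() is passed
 * phys->count << page_shift bytes, so the flush must cover the whole
 * range [first, first + count), not a single pfn. */
struct phys_map {
    uint64_t first;   /* first pfn of the mapping */
    uint64_t count;   /* number of pages mapped   */
};

static uint64_t flushed_start, flushed_nr;

static void cacheflush_stub(uint64_t start_pfn, uint64_t nr_pfns)
{
    flushed_start = start_pfn;   /* real code: XEN_DOMCTL_cacheflush */
    flushed_nr = nr_pfns;
}

static void unmap_one(const struct phys_map *phys)
{
    /* munmap(phys->ptr, phys->count << page_shift) would happen here */
    cacheflush_stub(phys->first, phys->count);
}
```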


>  }
>  
>  void xc_dom_unmap_all(struct xc_dom_image *dom)
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..f10ec01 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.nr_pfns = nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}
>  
>  int xc_domain_pause(xc_interface *xch,
>                      uint32_t domid)
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index 838fd21..33ed15b 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
>          return -1;
>      memcpy(vaddr, src_page, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> @@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
>          return -1;
>      memset(vaddr, 0, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 92271c9..a610f0c 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
>  /* Optionally flush file to disk and discard page cache */
>  void discard_file_cache(xc_interface *xch, int fd, int flush);
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> +
>  #define MAX_MMU_UPDATES 1024
>  struct xc_mmu {
>      mmu_update_t updates[MAX_MMU_UPDATES];
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..ffb68da 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  {
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> +            return -EINVAL;
> +
> +        return p2m_cache_flush(d, s, e);
> +    }
> +
>      default:
>          return subarch_do_domctl(domctl, d, u_domctl);
>      }
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 2f48347..d2cfe64 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -338,6 +338,18 @@ unsigned long domain_page_map_to_mfn(const void *va)
>  }
>  #endif
>  
> +void sync_page_to_ram(unsigned long mfn)
> +{
> +    void *p, *v = map_domain_page(mfn);
> +
> +    dsb();           /* So the CPU issues all writes to the range */
> +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )

What about the last few bytes on a page?
Can we assume that PAGE_SIZE is a multiple of cacheline_bytes?
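As a sketch of why the loop covers the page exactly: cache line sizes are powers of two no larger than the page size, so PAGE_SIZE divides evenly into lines. The line size parameter and the helper below are assumptions for illustration; real code reads the line size from CTR and issues a cache-maintenance instruction per line:

```c
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Mimics the stride loop in sync_page_to_ram(): walk a page in
 * cacheline-sized steps. The loop covers the whole page with no
 * leftover bytes whenever PAGE_SIZE is a multiple of the line size,
 * which holds for any power-of-two line up to the page size. */
static unsigned int lines_touched(size_t cacheline_bytes)
{
    unsigned int n = 0;

    for ( size_t off = 0; off < PAGE_SIZE; off += cacheline_bytes )
        n++;   /* real code: clean+invalidate the line at this offset */

    return n;
}
```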


> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
> +    dsb();           /* So we know the flushes happen before continuing */
> +
> +    unmap_domain_page(v);
> +}
> +
>  void __init arch_init_memory(void)
>  {
>      /*
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a61edeb..86f13e9 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -8,6 +8,7 @@
>  #include <asm/gic.h>
>  #include <asm/event.h>
>  #include <asm/hardirq.h>
> +#include <asm/page.h>
>  
>  /* First level P2M is 2 consecutive pages */
>  #define P2M_FIRST_ORDER 1
> @@ -228,6 +229,7 @@ enum p2m_operation {
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
> +    CACHEFLUSH,
>  };
>  
>  static int apply_p2m_changes(struct domain *d,
> @@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
>                      count++;
>                  }
>                  break;
> +
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                        break;
> +
> +                    sync_page_to_ram(pte.p2m.base);
> +                }
> +                break;
>          }
>  
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> @@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
>                                MATTR_MEM, p2m_invalid);
>  }
>  
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> +    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +
> +    return apply_p2m_changes(d, CACHEFLUSH,
> +                             pfn_to_paddr(start_mfn),
> +                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(INVALID_MFN),
> +                             MATTR_MEM, p2m_invalid);
> +}
> +
>  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>  {
>      paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..c73c717 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
>          /* Initialise fields which have other uses for free pages. */
>          pg[i].u.inuse.type_info = 0;
>          page_set_owner(&pg[i], NULL);
> +
> +        /* Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        sync_page_to_ram(page_to_mfn(&pg[i]));
>      }
>  
>      spin_unlock(&heap_lock);
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index cf12a89..cb6add4 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
> +
>  /*
>   * Flush all hypervisor mappings from the TLB and branch predictor.
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 9551f90..baf8903 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
> +
>  /*
>   * Flush all hypervisor mappings from the TLB
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index e9c884a..3b39c45 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
>  /* Look up the MFN corresponding to a domain's PFN. */
>  paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
>  
> +/* Clean & invalidate caches corresponding to a region of guest address space */
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +
>  /* Setup p2m RAM mapping for domain d from start-end. */
>  int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
>  /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 670d4e7..67d64c9 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
>              : : "r" (_p), "m" (*_p));                                   \
>  } while (0)
>  
> +/* Flush the dcache for an entire page. */
> +void sync_page_to_ram(unsigned long mfn);
> +
>  /* Print a walk of an arbitrary page table */
>  void dump_pt_walk(lpae_t *table, paddr_t addr);
>  
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 7a46af5..abe35fb 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>      return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>  }
>  
> +/* No cache maintenance required on x86 architecture. */
> +static inline void sync_page_to_ram(unsigned long mfn) {}
> +
>  /* return true if permission increased */
>  static inline bool_t
>  perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..f22fe2e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>  
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +965,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_setnodeaffinity               68
>  #define XEN_DOMCTL_getnodeaffinity               69
>  #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1024,7 @@ struct xen_domctl {
>          struct xen_domctl_set_max_evtchn    set_max_evtchn;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>          struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 50a35fc..1345d7e 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_set_max_evtchn:
>          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
>  
> +    case XEN_DOMCTL_cacheflush:
> +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> +
>      default:
>          printk("flask_domctl: Unknown op %d\n", cmd);
>          return -EPERM;
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1fbe241..a0ed13d 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -196,6 +196,8 @@ class domain2
>      setclaim
>  # XEN_DOMCTL_set_max_evtchn
>      set_max_evtchn
> +# XEN_DOMCTL_cacheflush
> +    cacheflush
>  }
>  
>  # Similar to class domain, but primarily contains domctls related to HVM domains
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 14:54:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 14:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmox-0000p8-L9; Fri, 07 Feb 2014 14:54:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBmov-0000om-CS
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 14:54:21 +0000
Received: from [85.158.137.68:17190] by server-17.bemta-3.messagelabs.com id
	DB/DA-22569-C93F4F25; Fri, 07 Feb 2014 14:54:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391784857!356789!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15927 invoked from network); 7 Feb 2014 14:54:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 14:54:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="98986917"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 14:54:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 09:54:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBmjz-0005Uz-LV;
	Fri, 07 Feb 2014 14:49:15 +0000
Date: Fri, 7 Feb 2014 14:49:02 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
Message-ID: <alpine.DEB.2.02.1402071439490.4373@kaball.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Ian Campbell wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
> 
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has occurred
> gets committed to real RAM. To achieve this add a new cacheflush_page function,
> which is a stub on x86.
> 
> Secondly we need to flush anything which the domain builder touches, which we
> do via a new domctl.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: jbeulich@suse.com
> Cc: keir@xen.org
> Cc: ian.jackson@eu.citrix.com
> --
> v4: introduce a function to clean and invalidate as intended
> 
>     make the domctl take a length not an end.
> 
> v3:
>     s/cacheflush_page/sync_page_to_ram/
> 
>     xc interface takes a length instead of an end
> 
>     make the domctl range inclusive.
> 
>     make xc interface internal -- it isn't needed from libxl in the current
>     design and it is easier to expose an interface in the future than to hide
>     it.
> 
> v2:
>    Switch to cleaning at page allocation time + explicit flushing of the
>    regions which the toolstack touches.
> 
>    Add XSM for new domctl.
> 
>    New domctl restricts the amount of space it is willing to flush, to avoid
>    thinking about preemption.
> ---
>  tools/libxc/xc_dom_boot.c           |    4 ++++
>  tools/libxc/xc_dom_core.c           |    2 ++
>  tools/libxc/xc_domain.c             |   10 ++++++++++
>  tools/libxc/xc_private.c            |    2 ++
>  tools/libxc/xc_private.h            |    3 +++
>  xen/arch/arm/domctl.c               |   14 ++++++++++++++
>  xen/arch/arm/mm.c                   |   12 ++++++++++++
>  xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
>  xen/common/page_alloc.c             |    5 +++++
>  xen/include/asm-arm/arm32/page.h    |    4 ++++
>  xen/include/asm-arm/arm64/page.h    |    4 ++++
>  xen/include/asm-arm/p2m.h           |    3 +++
>  xen/include/asm-arm/page.h          |    3 +++
>  xen/include/asm-x86/page.h          |    3 +++
>  xen/include/public/domctl.h         |   13 +++++++++++++
>  xen/xsm/flask/hooks.c               |    3 +++
>  xen/xsm/flask/policy/access_vectors |    2 ++
>  17 files changed, 112 insertions(+)
> 
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index 5a9cfc6..3d4d107 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>          return -1;
>      }
>  
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But let's be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
> +
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..b9d1015 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>          prev->next = phys->next;
>      else
>          dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);

I am not sure I understand the semantics of xc_dom_unmap_one: from the
name of the function and the parameters I would think that it is
supposed to unmap just one page. In that case we should flush just one
page. However the implementation calls

munmap(phys->ptr, phys->count << page_shift)

so it actually can unmap more than a single page, so I think that you
did the right thing by calling xc_domain_cacheflush with phys->first and
phys->count.


>  }
>  
>  void xc_dom_unmap_all(struct xc_dom_image *dom)
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index c2fdd74..f10ec01 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.nr_pfns = nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}
>  
>  int xc_domain_pause(xc_interface *xch,
>                      uint32_t domid)
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index 838fd21..33ed15b 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
>          return -1;
>      memcpy(vaddr, src_page, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> @@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
>          return -1;
>      memset(vaddr, 0, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 92271c9..a610f0c 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
>  /* Optionally flush file to disk and discard page cache */
>  void discard_file_cache(xc_interface *xch, int fd, int flush);
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> +
>  #define MAX_MMU_UPDATES 1024
>  struct xc_mmu {
>      mmu_update_t updates[MAX_MMU_UPDATES];
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..ffb68da 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  {
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        if ( get_order_from_pages(domctl->u.cacheflush.nr_pfns) > MAX_ORDER )
> +            return -EINVAL;
> +
> +        return p2m_cache_flush(d, s, e);
> +    }
> +
>      default:
>          return subarch_do_domctl(domctl, d, u_domctl);
>      }
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 2f48347..d2cfe64 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -338,6 +338,18 @@ unsigned long domain_page_map_to_mfn(const void *va)
>  }
>  #endif
>  
> +void sync_page_to_ram(unsigned long mfn)
> +{
> +    void *p, *v = map_domain_page(mfn);
> +
> +    dsb();           /* So the CPU issues all writes to the range */
> +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )

What about the last few bytes on a page?
Can we assume that PAGE_SIZE is a multiple of cacheline_bytes?


> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
> +    dsb();           /* So we know the flushes happen before continuing */
> +
> +    unmap_domain_page(v);
> +}
> +
>  void __init arch_init_memory(void)
>  {
>      /*
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a61edeb..86f13e9 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -8,6 +8,7 @@
>  #include <asm/gic.h>
>  #include <asm/event.h>
>  #include <asm/hardirq.h>
> +#include <asm/page.h>
>  
>  /* First level P2M is 2 consecutive pages */
>  #define P2M_FIRST_ORDER 1
> @@ -228,6 +229,7 @@ enum p2m_operation {
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
> +    CACHEFLUSH,
>  };
>  
>  static int apply_p2m_changes(struct domain *d,
> @@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
>                      count++;
>                  }
>                  break;
> +
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                        break;
> +
> +                    sync_page_to_ram(pte.p2m.base);
> +                }
> +                break;
>          }
>  
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> @@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
>                                MATTR_MEM, p2m_invalid);
>  }
>  
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> +    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +
> +    return apply_p2m_changes(d, CACHEFLUSH,
> +                             pfn_to_paddr(start_mfn),
> +                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(INVALID_MFN),
> +                             MATTR_MEM, p2m_invalid);
> +}
> +
>  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>  {
>      paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..c73c717 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
>          /* Initialise fields which have other uses for free pages. */
>          pg[i].u.inuse.type_info = 0;
>          page_set_owner(&pg[i], NULL);
> +
> +        /* Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        sync_page_to_ram(page_to_mfn(&pg[i]));
>      }
>  
>      spin_unlock(&heap_lock);
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index cf12a89..cb6add4 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
> +
>  /*
>   * Flush all hypervisor mappings from the TLB and branch predictor.
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 9551f90..baf8903 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
> +
>  /*
>   * Flush all hypervisor mappings from the TLB
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index e9c884a..3b39c45 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
>  /* Look up the MFN corresponding to a domain's PFN. */
>  paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
>  
> +/* Clean & invalidate caches corresponding to a region of guest address space */
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +
>  /* Setup p2m RAM mapping for domain d from start-end. */
>  int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
>  /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 670d4e7..67d64c9 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
>              : : "r" (_p), "m" (*_p));                                   \
>  } while (0)
>  
> +/* Flush the dcache for an entire page. */
> +void sync_page_to_ram(unsigned long mfn);
> +
>  /* Print a walk of an arbitrary page table */
>  void dump_pt_walk(lpae_t *table, paddr_t addr);
>  
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 7a46af5..abe35fb 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>      return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>  }
>  
> +/* No cache maintenance required on x86 architecture. */
> +static inline void sync_page_to_ram(unsigned long mfn) {}
> +
>  /* return true if permission increased */
>  static inline bool_t
>  perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..f22fe2e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>  
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +965,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_setnodeaffinity               68
>  #define XEN_DOMCTL_getnodeaffinity               69
>  #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1024,7 @@ struct xen_domctl {
>          struct xen_domctl_set_max_evtchn    set_max_evtchn;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>          struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 50a35fc..1345d7e 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_set_max_evtchn:
>          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
>  
> +    case XEN_DOMCTL_cacheflush:
> +	    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> +
>      default:
>          printk("flask_domctl: Unknown op %d\n", cmd);
>          return -EPERM;
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1fbe241..a0ed13d 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -196,6 +196,8 @@ class domain2
>      setclaim
>  # XEN_DOMCTL_set_max_evtchn
>      set_max_evtchn
> +# XEN_DOMCTL_cacheflush
> +    cacheflush
>  }
>  
>  # Similar to class domain, but primarily contains domctls related to HVM domains
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:01:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:01:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmvl-0001gC-8Y; Fri, 07 Feb 2014 15:01:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBmvj-0001g6-PC
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:01:24 +0000
Received: from [85.158.137.68:43369] by server-15.bemta-3.messagelabs.com id
	32/89-19263-245F4F25; Fri, 07 Feb 2014 15:01:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391785280!358659!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29271 invoked from network); 7 Feb 2014 15:01:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:01:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100843127"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 15:01:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	10:01:19 -0500
Message-ID: <1391785278.2162.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 7 Feb 2014 15:01:18 +0000
In-Reply-To: <alpine.DEB.2.02.1402071439490.4373@kaball.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1402071439490.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, ian.jackson@eu.citrix.com, julien.grall@linaro.org,
	tim@xen.org, xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 14:49 +0000, Stefano Stabellini wrote:
> > diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> > index 77a4e64..b9d1015 100644
> > --- a/tools/libxc/xc_dom_core.c
> > +++ b/tools/libxc/xc_dom_core.c
> > @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
> >          prev->next = phys->next;
> >      else
> >          dom->phys_pages = phys->next;
> > +
> > +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
> 
> I am not sure I understand the semantics of xc_dom_unmap_one: from the
> name of the function and the parameters I would think that it is
> supposed to unmap just one page. In that case we should flush just one
> page. However the implementation calls
> 
> munmap(phys->ptr, phys->count << page_shift)
> 
> so it actually can unmap more than a single page, so I think that you
> did the right thing by calling xc_domain_cacheflush with phys->first and
> phys->count.

"one" in the name means "one struct xc_dom_phys", which is a range
starting at the given pfn, which is why using phys->{first,count} is
correct.

> > +void sync_page_to_ram(unsigned long mfn)
> > +{
> > +    void *p, *v = map_domain_page(mfn);
> > +
> > +    dsb();           /* So the CPU issues all writes to the range */
> > +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> 
> What about the last few bytes on a page?
> Can we assume that PAGE_SIZE is a multiple of cacheline_bytes?

I sure hope so! A non-power-of-two cache line size would be pretty
crazy!

The cacheline is always a 2^N (the register contains log2(cacheline
size)) and so is PAGE_SIZE.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> 
> I am not sure I understand the semantics of xc_dom_unmap_one: from the
> name of the function and the parameters I would think that it is
> supposed to unmap just one page. In that case we should flush just one
> page. However the implementation calls
> 
> munmap(phys->ptr, phys->count << page_shift)
> 
> so it can actually unmap more than a single page; I think you did the
> right thing by calling xc_domain_cacheflush with phys->first and
> phys->count.

"one" in the name means "one struct xc_dom_phys", which is a range
starting at the given pfn, which is why using phys->{first,count} is
correct.

> > +void sync_page_to_ram(unsigned long mfn)
> > +{
> > +    void *p, *v = map_domain_page(mfn);
> > +
> > +    dsb();           /* So the CPU issues all writes to the range */
> > +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> 
> What about the last few bytes on a page?
> Can we assume that PAGE_SIZE is a multiple of cacheline_bytes?

I sure hope so! A non-power-of-two cache line size would be pretty
crazy!

The cacheline size is always a power of two (the register contains
log2(cacheline size)), and so is PAGE_SIZE.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:02:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBmww-0001nL-Om; Fri, 07 Feb 2014 15:02:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBmwu-0001n3-O9
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:02:36 +0000
Received: from [85.158.139.211:63139] by server-8.bemta-5.messagelabs.com id
	C0/35-05298-C85F4F25; Fri, 07 Feb 2014 15:02:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391785353!2400654!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4902 invoked from network); 7 Feb 2014 15:02:35 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:02:35 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17F2Rru010411
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:02:28 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17F2QLP027198
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 15:02:26 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17F2Q1N027190; Fri, 7 Feb 2014 15:02:26 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:02:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id D30DD1C0972; Fri,  7 Feb 2014 10:02:24 -0500 (EST)
Date: Fri, 7 Feb 2014 10:02:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: William Dauchy <william@gandi.net>
Message-ID: <20140207150224.GA3605@phenom.dumpdata.com>
References: <20140207124607.GE19084@gandi.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140207124607.GE19084@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: konrad@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 01:46:07PM +0100, William Dauchy wrote:
> Hello,
> 
> I am seeing random memory leaks with a xenU 3.10.x PV guest (x86_64):
> sometimes there is no issue, but sometimes after a reboot the
> kernel starts leaking memory heavily until it hits OOM.
> 
> config: hypervisor xen4.1, dom0 v3.4.x (x86_32)
> 
> I see no issues with a xenU v3.12.x PV guest; the only difference in
> dmesg was about the APIC.
> 
> I manually backported the following patch:
> 6efa20e xen: Support 64-bit PV guest receiving NMIs
> --- a/arch/x86/xen/enlighten.c
> +++ b/arch/x86/xen/enlighten.c
> @@ -427,8 +427,7 @@ static void __init xen_init_cpuid_mask(void)
>  
>         if (!xen_initial_domain())
>                 cpuid_leaf1_edx_mask &=
> -                       ~((1 << X86_FEATURE_APIC) |  /* disable local APIC */
> -                         (1 << X86_FEATURE_ACPI));  /* disable ACPI */
> +                       ~((1 << X86_FEATURE_ACPI));  /* disable ACPI */
> 
> 
> and my issue is completely fixed. Should it be backported to stable?

That does not make sense. What are the leaks? What are the messages
that you see about APIC?

> 
> Thanks,
> -- 
> William



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:11:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBn5A-0002Sg-VI; Fri, 07 Feb 2014 15:11:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WBn59-0002SY-Uj
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:11:08 +0000
Received: from [193.109.254.147:9850] by server-1.bemta-14.messagelabs.com id
	EF/9F-15438-B87F4F25; Fri, 07 Feb 2014 15:11:07 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391785865!2771745!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26144 invoked from network); 7 Feb 2014 15:11:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:11:06 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FB2sa004857
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:11:03 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FAxoE023353
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 15:11:00 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FAx8W002881; Fri, 7 Feb 2014 15:10:59 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:10:58 -0800
Message-ID: <52F4F7D1.9020906@oracle.com>
Date: Fri, 07 Feb 2014 10:12:17 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
In-Reply-To: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9123913576358413881=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============9123913576358413881==
Content-Type: multipart/alternative;
 boundary="------------020301070406040603050505"

This is a multi-part message in MIME format.
--------------020301070406040603050505
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 02/07/2014 04:21 AM, Jan Beulich wrote:
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt is usually not working.
>
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
>
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Eric Houby <ehouby@yahoo.com>
>
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>       const struct acpi_ivrs_header *ivrs_block;
>       unsigned long length;
>       unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>       int error = 0;
>   
>       BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>       /* Each IO-APIC must have been mentioned in the table. */
>       for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>       {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>               continue;
>   
>           if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )

I don't know whether 0:14:0 is set in stone; I don't remember seeing
anywhere that this is architectural.

In the (unlikely) event that it is moved somewhere else, will the user
be able to override its location? Do you think that sb_ioapic may need
to be set to true if the appropriate bit is set in ioapic_cmdline?

-boris


> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>           }
>       }
>   
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>       return error;
>   }
>   
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------020301070406040603050505--


--===============9123913576358413881==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9123913576358413881==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 15:11:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBn5A-0002Sg-VI; Fri, 07 Feb 2014 15:11:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WBn59-0002SY-Uj
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:11:08 +0000
Received: from [193.109.254.147:9850] by server-1.bemta-14.messagelabs.com id
	EF/9F-15438-B87F4F25; Fri, 07 Feb 2014 15:11:07 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391785865!2771745!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26144 invoked from network); 7 Feb 2014 15:11:06 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:11:06 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FB2sa004857
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:11:03 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FAxoE023353
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 15:11:00 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FAx8W002881; Fri, 7 Feb 2014 15:10:59 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:10:58 -0800
Message-ID: <52F4F7D1.9020906@oracle.com>
Date: Fri, 07 Feb 2014 10:12:17 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
In-Reply-To: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9123913576358413881=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============9123913576358413881==
Content-Type: multipart/alternative;
 boundary="------------020301070406040603050505"

This is a multi-part message in MIME format.
--------------020301070406040603050505
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 02/07/2014 04:21 AM, Jan Beulich wrote:
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt is usually not working.
>
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
>
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Eric Houby <ehouby@yahoo.com>
>
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>       const struct acpi_ivrs_header *ivrs_block;
>       unsigned long length;
>       unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>       int error = 0;
>   
>       BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>       /* Each IO-APIC must have been mentioned in the table. */
>       for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>       {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>               continue;
>   
>           if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )

I don't know whether 0:14:0 is set in stone, I don't remember seeing 
anywhere that this is architectural.

In the (unlikely) event that it is moved somewhere else will the user be 
able to overwrite where it is? Do you
think that sb_ioapic may need to be set to true if appropriate bit is 
set in ioapic_cmdline?

-boris


> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>           }
>       }
>   
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>       return error;
>   }
>   
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------020301070406040603050505
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 02/07/2014 04:21 AM, Jan Beulich
      wrote:<br>
    </div>
    <blockquote cite="mid:52F4B38C020000780011A15D@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">... but interrupt remapping is requested (with per-device remapping
tables). Without it, the timer interrupt is usually not working.

Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
Roedel <a class="moz-txt-link-rfc2396E" href="mailto:joerg.roedel@amd.com">&lt;joerg.roedel@amd.com&gt;</a>.

Reported-by: Eric Houby <a class="moz-txt-link-rfc2396E" href="mailto:ehouby@yahoo.com">&lt;ehouby@yahoo.com&gt;</a>
Signed-off-by: Jan Beulich <a class="moz-txt-link-rfc2396E" href="mailto:jbeulich@suse.com">&lt;jbeulich@suse.com&gt;</a>
Tested-by: Eric Houby <a class="moz-txt-link-rfc2396E" href="mailto:ehouby@yahoo.com">&lt;ehouby@yahoo.com&gt;</a>

--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
     const struct acpi_ivrs_header *ivrs_block;
     unsigned long length;
     unsigned int apic;
+    bool_t sb_ioapic = !iommu_intremap;
     int error = 0;
 
     BUG_ON(!table);
@@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
     /* Each IO-APIC must have been mentioned in the table. */
     for ( apic = 0; !error &amp;&amp; iommu_intremap &amp;&amp; apic &lt; nr_ioapics; ++apic )
     {
-        if ( !nr_ioapic_entries[apic] ||
-             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
+        if ( !nr_ioapic_entries[apic] )
+            continue;
+
+        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &amp;&amp;
+             /* SB IO-APIC is always on this device in AMD systems. */
+             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
+            sb_ioapic = 1;
+
+        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
             continue;
 
         if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )</pre>
    </blockquote>
    <br>
    I don't know whether 0:14:0 is set in stone; I don't remember seeing
    anywhere that this is architectural. <br>
    <br>
    In the (unlikely) event that it is moved somewhere else, will the
    user be able to override its location? Do you<br>
    think that sb_ioapic may need to be set to true if the appropriate
    bit is set in ioapic_cmdline?<br>
    <br>
    -boris<br>
    <br>
    <br>
    <blockquote cite="mid:52F4B38C020000780011A15D@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">
@@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
         }
     }
 
+    if ( !error &amp;&amp; !sb_ioapic )
+    {
+        if ( amd_iommu_perdev_intremap )
+            error = -ENXIO;
+        printk("%sNo southbridge IO-APIC found in IVRS table\n",
+               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
+    }
+
     return error;
 }
 



</pre>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>


From xen-devel-bounces@lists.xen.org Fri Feb 07 15:15:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBn9C-0002cV-OK; Fri, 07 Feb 2014 15:15:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBn9A-0002cQ-H4
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:15:16 +0000
Received: from [85.158.143.35:21717] by server-1.bemta-4.messagelabs.com id
	61/FA-31661-388F4F25; Fri, 07 Feb 2014 15:15:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391786113!3967705!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12801 invoked from network); 7 Feb 2014 15:15:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:15:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100849199"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 15:15:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 10:15:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBn42-00062X-6A;
	Fri, 07 Feb 2014 15:09:58 +0000
Date: Fri, 7 Feb 2014 15:09:45 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1391785278.2162.108.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402071508590.4373@kaball.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<alpine.DEB.2.02.1402071439490.4373@kaball.uk.xensource.com>
	<1391785278.2162.108.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian.jackson@eu.citrix.com, julien.grall@linaro.org, tim@xen.org,
	xen-devel@lists.xen.org, jbeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Ian Campbell wrote:
> > > +void sync_page_to_ram(unsigned long mfn)
> > > +{
> > > +    void *p, *v = map_domain_page(mfn);
> > > +
> > > +    dsb();           /* So the CPU issues all writes to the range */
> > > +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> > 
> > What about the last few bytes on a page?
> > Can we assume that PAGE_SIZE is a multiple of cacheline_bytes?
> 
> I sure hope so! A non-power-of-two cache line size would be pretty
> crazy!
> 
> The cache line size is always 2^N (the register contains
> log2(cacheline size)), and so is PAGE_SIZE.

Thought so.

For the Xen ARM side:

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:23:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnHP-0003MK-KC; Fri, 07 Feb 2014 15:23:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBnHO-0003ME-Bn
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:23:46 +0000
Received: from [85.158.143.35:20432] by server-2.bemta-4.messagelabs.com id
	4A/A8-10891-18AF4F25; Fri, 07 Feb 2014 15:23:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391786624!3951256!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15101 invoked from network); 7 Feb 2014 15:23:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 15:23:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 15:23:44 +0000
Message-Id: <52F5088F020000780011A484@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 15:23:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
In-Reply-To: <52F4F7D1.9020906@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 16:12, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/07/2014 04:21 AM, Jan Beulich wrote:
>> ... but interrupt remapping is requested (with per-device remapping
>> tables). Without it, the timer interrupt is usually not working.
>>
>> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
>> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
>> Roedel <joerg.roedel@amd.com>.
>>
>> Reported-by: Eric Houby <ehouby@yahoo.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Tested-by: Eric Houby <ehouby@yahoo.com>
>>
>> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
>> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
>> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>>      const struct acpi_ivrs_header *ivrs_block;
>>      unsigned long length;
>>      unsigned int apic;
>> +    bool_t sb_ioapic = !iommu_intremap;
>>      int error = 0;
>>
>>      BUG_ON(!table);
>> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>>      /* Each IO-APIC must have been mentioned in the table. */
>>      for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>>      {
>> -        if ( !nr_ioapic_entries[apic] ||
>> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>> +        if ( !nr_ioapic_entries[apic] )
>> +            continue;
>> +
>> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
>> +             /* SB IO-APIC is always on this device in AMD systems. */
>> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
>> +            sb_ioapic = 1;
>> +
>> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>              continue;
>>
>>          if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
>
> I don't know whether 0:14:0 is set in stone, I don't remember seeing
> anywhere that this is architectural.
>
> In the (unlikely) event that it is moved somewhere else will the user be
> able to overwrite where it is? Do you
> think that sb_ioapic may need to be set to true if appropriate bit is
> set in ioapic_cmdline?

These are questions you'd need to ask Jörg, the author of the
original Linux side patch. I took it as a precondition here that he
knew what he was doing.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:25:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnJT-0003Y2-RA; Fri, 07 Feb 2014 15:25:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnJS-0003Xc-Fc
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:25:54 +0000
Received: from [85.158.139.211:24794] by server-8.bemta-5.messagelabs.com id
	C5/CC-05298-10BF4F25; Fri, 07 Feb 2014 15:25:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391786751!2434142!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21621 invoked from network); 7 Feb 2014 15:25:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:25:52 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FPnFa023000
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:25:50 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FPmRF003258
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 15:25:49 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FPmKP010205; Fri, 7 Feb 2014 15:25:48 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:25:48 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 376F81C0972; Fri,  7 Feb 2014 10:25:47 -0500 (EST)
Date: Fri, 7 Feb 2014 10:25:47 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207152547.GB3605@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> Hi all,
> 
> I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC) to an
> HVM.  I have been attempting to resolve this issue on the xen-users list,
> but it was advised to post this issue to this list. (Initial Message -
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> 
> The machine I am using as host is a Dell Poweredge server with a Xeon
> E31220 with 4GB of ram.
> 
> The possible bug is the following:
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40030000
> ....
> 
> I believe it may be similar to this thread
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> 
> 
> Additional info that may be helpful is below.

Did you try the patch?
> 
> Please let me know if you need any additional information.
> 
> Thanks in advance for any help provided!
> Regards
> 
> ###########################################################
> root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> ###########################################################
> # Configuration file for Xen HVM
> 
> # HVM Name (as appears in 'xl list')
> name="ubuntu-hvm-0"
> # HVM Build settings (+ hardware)
> #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> builder='hvm'
> device_model='qemu-dm'
> memory=1024
> vcpus=2
> 
> # Virtual Interface
> # Network bridge to USB NIC
> vif=['bridge=xenbr0']
> 
> ################### PCI PASSTHROUGH ###################
> # PCI Permissive mode toggle
> #pci_permissive=1
> 
> # All PCI Devices
> #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> 
> # First two ports on Intel 4x1G NIC
> #pci=['03:00.0','03:00.1']
> 
> # Last two ports on Intel 4x1G NIC
> #pci=['04:00.0', '04:00.1']
> 
> # All ports on Intel 4x1G NIC
> pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> 
> # Broadcom 2x1G NIC
> #pci=['05:00.0', '05:00.1']
> ################### PCI PASSTHROUGH ###################
> 
> # HVM Disks
> # Hard disk only
> # Boot from HDD first ('c')
> boot="c"
> disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> 
> # Hard disk with ISO
> # Boot from ISO first ('d')
> #boot="d"
> #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> 
> # ACPI Enable
> acpi=1
> # HVM Event Modes
> on_poweroff='destroy'
> on_reboot='restart'
> on_crash='restart'
> 
> # Serial Console Configuration (Xen Console)
> sdl=0
> serial='pty'
> 
> # VNC Configuration
> # Only reachable from localhost
> vnc=1
> vnclisten="0.0.0.0"
> vncpasswd=""
> 
> ###########################################################
> Copied from xen-users list
> ###########################################################
> 
> It appears that it cannot obtain the RAM mapping for this PCI device.
> 
> 
> I rebooted the host and assigned the PCI devices to pciback. The output
> looks like:
> root@fiat:~# ./dev_mgmt.sh
> Loading Kernel Module 'xen-pciback'
> Calling function pciback_dev for:
> PCI DEVICE 0000:03:00.0
> Unbinding 0000:03:00.0 from igb
> Binding 0000:03:00.0 to pciback
> 
> PCI DEVICE 0000:03:00.1
> Unbinding 0000:03:00.1 from igb
> Binding 0000:03:00.1 to pciback
> 
> PCI DEVICE 0000:04:00.0
> Unbinding 0000:04:00.0 from igb
> Binding 0000:04:00.0 to pciback
> 
> PCI DEVICE 0000:04:00.1
> Unbinding 0000:04:00.1 from igb
> Binding 0000:04:00.1 to pciback
> 
> PCI DEVICE 0000:05:00.0
> Unbinding 0000:05:00.0 from bnx2
> Binding 0000:05:00.0 to pciback
> 
> PCI DEVICE 0000:05:00.1
> Unbinding 0000:05:00.1 from bnx2
> Binding 0000:05:00.1 to pciback
> 
> Listing PCI Devices Available to Xen
> 0000:03:00.0
> 0000:03:00.1
> 0000:04:00.0
> 0000:04:00.1
> 0000:05:00.0
> 0000:05:00.1
> 
> ###########################################################
> root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a
> non-default device_model
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> how=(nil) callback=(nil) poller=0x210c3c0
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> vdev=hda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> domain, skipping bootloader
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x210c728: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> free_memkb=2980
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
> with 1 nodes, 4 cpus and 2980 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->00000000001a69a4
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100608
> xc: info: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> inprogress: poller=0x210c3c0, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x2112f48: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> /etc/xen/scripts/block add
> libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
> /usr/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> /usr/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register
> slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> wpath=/local/domain/0/device-model/2/state token=3/1: event
> epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> wpath=/local/domain/0/device-model/2/state token=3/1: event
> epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x210c960: deregister unregistered
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "change",
>     "id": 3,
>     "arguments": {
>         "device": "vnc",
>         "target": "password",
>         "arg": ""
>     }
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 4
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register
> slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x210e8a8: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-03_00.0",
>         "hostaddr": "0000:03:00.0"
>     }
> }
> '
> libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset
> by peer
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> progress report: ignored
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> complete, rc=0
> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> Daemon running with PID 3214
> xc: debug: hypercall buffer: total allocations:793 total releases:793
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> 
> ###########################################################
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40030000
> CPU #0:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
> CPU #1:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
> 
> ###########################################################
> /etc/default/grub
> GRUB_DEFAULT="Xen 4.3-amd64"
> GRUB_HIDDEN_TIMEOUT=0
> GRUB_HIDDEN_TIMEOUT_QUIET=true
> GRUB_TIMEOUT=10
> GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> GRUB_CMDLINE_LINUX=""
> # biosdevname=0
> GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Feb 07 15:25:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnJT-0003Y2-RA; Fri, 07 Feb 2014 15:25:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnJS-0003Xc-Fc
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:25:54 +0000
Received: from [85.158.139.211:24794] by server-8.bemta-5.messagelabs.com id
	C5/CC-05298-10BF4F25; Fri, 07 Feb 2014 15:25:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391786751!2434142!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21621 invoked from network); 7 Feb 2014 15:25:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:25:52 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FPnFa023000
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:25:50 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FPmRF003258
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 15:25:49 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FPmKP010205; Fri, 7 Feb 2014 15:25:48 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:25:48 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 376F81C0972; Fri,  7 Feb 2014 10:25:47 -0500 (EST)
Date: Fri, 7 Feb 2014 10:25:47 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207152547.GB3605@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> Hi all,
> 
> I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC) to an
> HVM guest.  I have been attempting to resolve this issue on the xen-users list,
> but it was advised to post this issue to this list. (Initial Message -
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> 
> The machine I am using as the host is a Dell PowerEdge server with a Xeon
> E31220 and 4 GB of RAM.
> 
> The possible bug is the following:
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40030000
> ....
> 
> I believe it may be similar to this thread
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> 
> 
> Additional information that may be helpful is below.

Did you try the patch?
> 
> Please let me know if you need any additional information.
> 
> Thanks in advance for any help provided!
> Regards
> 
> ###########################################################
> root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> ###########################################################
> # Configuration file for Xen HVM
> 
> # HVM Name (as appears in 'xl list')
> name="ubuntu-hvm-0"
> # HVM Build settings (+ hardware)
> #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> builder='hvm'
> device_model='qemu-dm'
> memory=1024
> vcpus=2
> 
> # Virtual Interface
> # Network bridge to USB NIC
> vif=['bridge=xenbr0']
> 
> ################### PCI PASSTHROUGH ###################
> # PCI Permissive mode toggle
> #pci_permissive=1
> 
> # All PCI Devices
> #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> 
> # First two ports on Intel 4x1G NIC
> #pci=['03:00.0','03:00.1']
> 
> # Last two ports on Intel 4x1G NIC
> #pci=['04:00.0', '04:00.1']
> 
> # All ports on Intel 4x1G NIC
> pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> 
> # Broadcom 2x1G NIC
> #pci=['05:00.0', '05:00.1']
> ################### PCI PASSTHROUGH ###################
> 
> # HVM Disks
> # Hard disk only
> # Boot from HDD first ('c')
> boot="c"
> disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> 
> # Hard disk with ISO
> # Boot from ISO first ('d')
> #boot="d"
> #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> 
> # ACPI Enable
> acpi=1
> # HVM Event Modes
> on_poweroff='destroy'
> on_reboot='restart'
> on_crash='restart'
> 
> # Serial Console Configuration (Xen Console)
> sdl=0
> serial='pty'
> 
> # VNC Configuration
> # Only reachable from localhost
> vnc=1
> vnclisten="0.0.0.0"
> vncpasswd=""
> 
> ###########################################################
> Copied from the xen-users list
> ###########################################################
> 
> It appears that it cannot obtain the RAM mapping for this PCI device.
> 
> 
> I rebooted the host and ran a script that assigns the PCI devices to
> pciback. The output looks like:
> root@fiat:~# ./dev_mgmt.sh
> Loading Kernel Module 'xen-pciback'
> Calling function pciback_dev for:
> PCI DEVICE 0000:03:00.0
> Unbinding 0000:03:00.0 from igb
> Binding 0000:03:00.0 to pciback
> 
> PCI DEVICE 0000:03:00.1
> Unbinding 0000:03:00.1 from igb
> Binding 0000:03:00.1 to pciback
> 
> PCI DEVICE 0000:04:00.0
> Unbinding 0000:04:00.0 from igb
> Binding 0000:04:00.0 to pciback
> 
> PCI DEVICE 0000:04:00.1
> Unbinding 0000:04:00.1 from igb
> Binding 0000:04:00.1 to pciback
> 
> PCI DEVICE 0000:05:00.0
> Unbinding 0000:05:00.0 from bnx2
> Binding 0000:05:00.0 to pciback
> 
> PCI DEVICE 0000:05:00.1
> Unbinding 0000:05:00.1 from bnx2
> Binding 0000:05:00.1 to pciback
> 
> Listing PCI Devices Available to Xen
> 0000:03:00.0
> 0000:03:00.1
> 0000:04:00.0
> 0000:04:00.1
> 0000:05:00.0
> 0000:05:00.1
> 
> ###########################################################
> root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a
> non-default device_model
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> how=(nil) callback=(nil) poller=0x210c3c0
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> vdev=hda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> domain, skipping bootloader
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x210c728: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> free_memkb=2980
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
> with 1 nodes, 4 cpus and 2980 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->00000000001a69a4
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100608
> xc: info: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> inprogress: poller=0x210c3c0, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x2112f48: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> /etc/xen/scripts/block add
> libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
> /usr/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> /usr/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register
> slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> wpath=/local/domain/0/device-model/2/state token=3/1: event
> epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> wpath=/local/domain/0/device-model/2/state token=3/1: event
> epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x210c960: deregister unregistered
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "change",
>     "id": 3,
>     "arguments": {
>         "device": "vnc",
>         "target": "password",
>         "arg": ""
>     }
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 4
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register
> slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x210e8a8: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-03_00.0",
>         "hostaddr": "0000:03:00.0"
>     }
> }
> '
> libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset
> by peer
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> progress report: ignored
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> complete, rc=0
> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> Daemon running with PID 3214
> xc: debug: hypercall buffer: total allocations:793 total releases:793
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> 
> ###########################################################
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40030000
> CPU #0:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
> CPU #1:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
> 
> ###########################################################
> /etc/default/grub
> GRUB_DEFAULT="Xen 4.3-amd64"
> GRUB_HIDDEN_TIMEOUT=0
> GRUB_HIDDEN_TIMEOUT_QUIET=true
> GRUB_TIMEOUT=10
> GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> GRUB_CMDLINE_LINUX=""
> # biosdevname=0
> GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:28:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnLz-0003wC-9s; Fri, 07 Feb 2014 15:28:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WBnLx-0003u8-IA
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:28:29 +0000
Received: from [85.158.143.35:20208] by server-1.bemta-4.messagelabs.com id
	E5/40-31661-C9BF4F25; Fri, 07 Feb 2014 15:28:28 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391786908!3971695!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6399 invoked from network); 7 Feb 2014 15:28:28 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:28:28 -0000
Received: by mail-wi0-f179.google.com with SMTP id hn9so946891wib.12
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 07:28:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=wgKLbxUljPoZ3pdanDg7KZm7JuD9YAbQ161X5bwvPNI=;
	b=T01+OgtAkEnFPEy+pas0bXN6yFVXNMEAJj7W9UgfRl7uRddK3E3GmHhmH2U4z8kRsE
	5M033blvds581rzyjXN12IKw7Efr/75j62i7B41ryypOT+YJzmpvze6+f4EpTGh1S2Y7
	X2RLcuZAmLpA4TJ13jMdgO7TlDS8pTLsz8HnA9jB1/kfViTlAxIUbmecs4GHnS14Gx7B
	hZumrct7/xzFQqCWwN+NG+9lYMGeVYyPJSnjmZDbHAqyI9zzc8DBJEHvM6A3CAmmDpO/
	6KHR96HW4kHY5GwVDJS/GPzPHpo7wbohQdoFEOaE3JzZ4fHtmmuIJ72c4wFF4bo5Qb/H
	YAIQ==
MIME-Version: 1.0
X-Received: by 10.180.77.129 with SMTP id s1mr312605wiw.56.1391786908023; Fri,
	07 Feb 2014 07:28:28 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 7 Feb 2014 07:28:27 -0800 (PST)
In-Reply-To: <1391692303.25128.3.camel@kazak.uk.xensource.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
	<52F38938.2090500@linaro.org>
	<1391692303.25128.3.camel@kazak.uk.xensource.com>
Date: Fri, 7 Feb 2014 15:28:27 +0000
X-Google-Sender-Auth: txwnQXwMfTyoz7jNrzrQGNo6eS4
Message-ID: <CAFLBxZZcBZT921A8zSXaGB82eop4baQOPgW=Ed7wBfdGRB3w3Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Anup Patel <anup.patel@linaro.org>,
	"patches@linaro.org" <patches@linaro.org>,
	Julien Grall <julien.grall@linaro.org>, patches@apm.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
	support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 6, 2014 at 1:11 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 13:08 +0000, Julien Grall wrote:
>>
>> On 06/02/14 12:57, Ian Campbell wrote:
>> > On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
>> >>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
>> >>
>> >> I remember we had a discussion about the memory constraints when I
>> >> implemented the VFP context switch for arm32
>> >> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
>> >>
>> >> I think you should use "=Q" here as well, to avoid clobbering the whole memory.
>> >
>> > Yes, I forgot to say: I think getting something in now is the priority,
>> > which is why I committed it, but this should be tightened up, probably
>> > for 4.5 unless the difference is benchmarkable.
>>
>> The fix is very simple (a matter of a two-line change). I would prefer to
>> delay this patch for a couple of days and have a correct
>> implementation from the beginning, so we will not forget to change the
>> code for Xen 4.5.
>
> And I would rather close this rather large hole right now and not wait
> for two more days when we are looking at doing what might be the final
> rc soon.
>
> I had already applied before you said anything, so the point is moot.

But just for future reference: be on your guard against the "hurry up
and get it in" mentality; it's likely to be counterproductive.  I'd rather
have *more* care taken with the patches at this point than normal.
If that means making a case to delay the release, that's what it will
take.  This would obviously have been a complete blocker -- there's no
way we could in good conscience release without FPU support for guests. :-)

 -George
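The constraint distinction Julien raises can be sketched in plain GCC inline assembly. This is a hypothetical illustration, not the actual Xen code: the asm bodies are deliberately empty so that only the compiler-visible constraints differ, and the portable "m" constraint stands in for the ARM-specific "Q" so the sketch builds on any GCC target:

```c
#include <stdint.h>

struct vfp_state { uint64_t fpregs[8]; };

/* Broad form: the "memory" clobber tells the compiler the asm may read
 * or write arbitrary memory, so every value cached in registers must be
 * spilled before the asm and reloaded after it. */
static void save_broad(struct vfp_state *s)
{
    __asm__ volatile("" :: "r"(s->fpregs) : "memory");
}

/* Tight form: a memory output operand names exactly the object the asm
 * writes. "m" is the portable spelling; on ARM, "Q" is the tighter
 * variant (a single base-register address) suggested in the thread.
 * Unrelated data may stay live in registers across the asm. */
static void save_tight(struct vfp_state *s)
{
    __asm__ volatile("" : "=m"(*s));
}
```

With the broad form the compiler must assume the whole address space changed; with the tight form only *s is treated as modified, which is the saving being deferred to 4.5 in this thread.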

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:33:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:33:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnQT-0004TH-4z; Fri, 07 Feb 2014 15:33:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBnQR-0004T2-Hl
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:33:07 +0000
Received: from [193.109.254.147:38415] by server-4.bemta-14.messagelabs.com id
	78/C3-32066-2BCF4F25; Fri, 07 Feb 2014 15:33:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391787184!2811818!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4525 invoked from network); 7 Feb 2014 15:33:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:33:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100855598"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 15:33:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	10:33:03 -0500
Message-ID: <1391787182.2162.111.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Fri, 7 Feb 2014 15:33:02 +0000
In-Reply-To: <CAFLBxZZcBZT921A8zSXaGB82eop4baQOPgW=Ed7wBfdGRB3w3Q@mail.gmail.com>
References: <1391671722-16127-1-git-send-email-pranavkumar@linaro.org>
	<52F38394.50405@linaro.org>
	<1391691427.23098.118.camel@kazak.uk.xensource.com>
	<52F38938.2090500@linaro.org>
	<1391692303.25128.3.camel@kazak.uk.xensource.com>
	<CAFLBxZZcBZT921A8zSXaGB82eop4baQOPgW=Ed7wBfdGRB3w3Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anup Patel <anup.patel@linaro.org>,
	"patches@linaro.org" <patches@linaro.org>,
	Julien Grall <julien.grall@linaro.org>, patches@apm.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>, Pranavkumar
	Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen: arm: arm64: Adding VFP save/restore
 support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 15:28 +0000, George Dunlap wrote:
> On Thu, Feb 6, 2014 at 1:11 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2014-02-06 at 13:08 +0000, Julien Grall wrote:
> >>
> >> On 06/02/14 12:57, Ian Campbell wrote:
> >> > On Thu, 2014-02-06 at 12:44 +0000, Julien Grall wrote:
> >> >>> +                 :: "r" ((char *)(&v->arch.vfp.fpregs)): "memory");
> >> >>
> >> >> I remember we had a discussion about the memory constraints when I
> >> >> implemented the VFP context switch for arm32
> >> >> (http://lists.xen.org/archives/html/xen-devel/2013-06/msg00110.html).
> >> >>
> >> >> I think you should use "=Q" here as well, to avoid clobbering the whole memory.
> >> >
> >> > Yes, I forgot to say: I think getting something in now is the priority,
> >> > which is why I committed it, but this should be tightened up, probably
> >> > for 4.5 unless the difference is benchmarkable.
> >>
> >> The fix is very simple (a matter of a two-line change). I would prefer to
> >> delay this patch for a couple of days and have a correct
> >> implementation from the beginning, so we will not forget to change the
> >> code for Xen 4.5.
> >
> > And I would rather close this rather large hole right now and not wait
> > for two more days when we are looking at doing what might be the final
> > rc soon.
> >
> > I had already applied before you said anything, so the point is moot.
> 
> But just for future reference: be on your guard against the "hurry up
> and get it in" mentality; it's likely to be counterproductive.  I'd rather
> have *more* care taken with the patches at this point than normal.

This was not a case of a careless rush to get something in. It was my
opinion that the minor shortcoming wasn't worth waiting for compared
with the benefit of getting that patch in sooner rather than later.

Note that the patch was not in any way wrong.

> If that means making a case to delay the release, that's what it will
> take.  This would obviously have been a complete blocker -- there's no
> way we could in good conscience release without FPU support for guests. :-)
> 
>  -George



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:34:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnRf-0004ZR-LE; Fri, 07 Feb 2014 15:34:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnRe-0004ZJ-JE
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:34:22 +0000
Received: from [85.158.139.211:18437] by server-6.bemta-5.messagelabs.com id
	9E/28-14342-DFCF4F25; Fri, 07 Feb 2014 15:34:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391787259!2421772!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29277 invoked from network); 7 Feb 2014 15:34:20 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:34:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FYIhQ001471
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:34:18 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FYHSc025379
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 15:34:17 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FYGff003363; Fri, 7 Feb 2014 15:34:16 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:34:16 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C688C1C0972; Fri,  7 Feb 2014 10:34:15 -0500 (EST)
Date: Fri, 7 Feb 2014 10:34:15 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140207153415.GC3605@phenom.dumpdata.com>
References: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
	<d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
	<CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, 2014 at 07:12:24PM -0500, Elena Ufimtseva wrote:
> On Thu, Feb 6, 2014 at 6:59 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On February 6, 2014 5:17:44 PM EST, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
> >>Hello
> >>
>>As far as I can see, there is no support for hugepages in Xen for PV kernels.
>>Is this something in the future plans, or is there really no need to
>>have this implemented?
> >
> > It was implemented. The UEK2 kernel (Oracle's) and SLES (I think) have this implemented. But making it upstream means adding some Xen-specific code to the generic code, which is a no-go.
> >
> > If you use PVH you get it for free. With the patches that Mukesh posted to enable some cpuid flags I was able to boot a guest with 2MB and 1GB pages without any trouble.
> 
> Ok, I see. Well, I would think then that the kernel should not
> provide any possibility to use it. Otherwise there are oopses and some
> other unpleasant things happening when trying to use huge pages. I am
> not sure; maybe if it clearly states that there is no hugetlb support
> for PV, then there is no need for this?

I was under the impression that the hugetlb option only gets enabled
if the 'cpu_has_pse' bit is set. And for PV that is cleared so it won't
be set.
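For reference, the cpu_has_pse check mentioned above boils down to CPUID leaf 1, EDX bit 3 (PSE, page size extension). A minimal userspace sketch, hypothetical and using GCC's cpuid.h helper rather than the kernel's real feature machinery:

```c
#include <cpuid.h>
#include <stdbool.h>

/* Does an EDX value from CPUID leaf 1 advertise PSE (bit 3)? */
static bool edx_has_pse(unsigned int edx)
{
    return (edx >> 3) & 1u;
}

/* Query the running CPU. A Xen PV guest is expected to see PSE
 * cleared, which is what keeps the hugetlb path from being enabled. */
static bool cpu_has_pse(void)
{
    unsigned int eax, ebx, ecx, edx;

    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
        return false;
    return edx_has_pse(edx);
}
```

On bare metal this typically reports PSE as present; in a PV guest it should not, matching the behaviour described here.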

> 
> >>
> >>I came across this paper
> >>https://www.kernel.org/doc/ols/2011/ols2011-gadre.pdf,
> >>not sure if this was somehow presented to xen community, I can't find
> >>any code related to this.
> >>
> >>Thank you!
> >
> >
> 
> 
> 
> -- 
> Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:34:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnRf-0004ZR-LE; Fri, 07 Feb 2014 15:34:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnRe-0004ZJ-JE
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:34:22 +0000
Received: from [85.158.139.211:18437] by server-6.bemta-5.messagelabs.com id
	9E/28-14342-DFCF4F25; Fri, 07 Feb 2014 15:34:21 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1391787259!2421772!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29277 invoked from network); 7 Feb 2014 15:34:20 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:34:20 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FYIhQ001471
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:34:18 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FYHSc025379
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 15:34:17 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FYGff003363; Fri, 7 Feb 2014 15:34:16 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:34:16 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C688C1C0972; Fri,  7 Feb 2014 10:34:15 -0500 (EST)
Date: Fri, 7 Feb 2014 10:34:15 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Message-ID: <20140207153415.GC3605@phenom.dumpdata.com>
References: <CAEr7rXhSLu8MhKjoYye5C+3Q90ikWLn184eUyTDDoyXhHmiJ2g@mail.gmail.com>
	<d58a88c5-9666-40c8-b364-ff425dfd2a32@email.android.com>
	<CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAEr7rXhqCdUNo3rGORPMkFudY_0TW4Vmbogq4DpmRBv+z3_s_g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] xen pv hugepages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 06, 2014 at 07:12:24PM -0500, Elena Ufimtseva wrote:
> On Thu, Feb 6, 2014 at 6:59 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On February 6, 2014 5:17:44 PM EST, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
> >>Hello
> >>
> >>As I can see, there is no support for hugepages in Xen for PV kernels.
> >>Is this something planned for the future, or is there no real need to
> >>have this implemented?
> >
> > It was implemented. The UEK2 kernel (Oracle's) and SLES (I think) have this implemented. But making it upstream means adding some Xen-specific code to the generic code, which is a no-go.
> >
> > If you use PVH you get it for free. With the patches that Mukesh posted to enable some cpuid flags I was able to boot a guest with 2MB and 1GB pages without any trouble.
> 
> Ok, I see. Well, I would think then that the kernel should not
> provide any possibility to use it? Otherwise there are oopses and some
> other unpleasant things happening when trying to use huge pages. I am
> not sure; maybe if it clearly states that there is no hugetlb support
> for PV, then there is no need for this?

I was under the impression that the hugetlb option only gets enabled
if the 'cpu_has_pse' bit is set. And for PV that bit is cleared, so it
won't be set.

> 
> >>
> >>I came across this paper
> >>https://www.kernel.org/doc/ols/2011/ols2011-gadre.pdf,
> >>not sure if this was somehow presented to xen community, I can't find
> >>any code related to this.
> >>
> >>Thank you!
> >
> >
> 
> 
> 
> -- 
> Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:36:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnU3-0004mX-9D; Fri, 07 Feb 2014 15:36:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <william@gandi.net>) id 1WBnU1-0004mN-HO
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:36:49 +0000
Received: from [85.158.139.211:12543] by server-9.bemta-5.messagelabs.com id
	10/7B-11237-09DF4F25; Fri, 07 Feb 2014 15:36:48 +0000
X-Env-Sender: william@gandi.net
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391787408!2437473!1
X-Originating-IP: [217.70.183.210]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10580 invoked from network); 7 Feb 2014 15:36:48 -0000
Received: from mail4.gandi.net (HELO mail4.gandi.net) (217.70.183.210)
	by server-11.tower-206.messagelabs.com with SMTP;
	7 Feb 2014 15:36:48 -0000
Received: from localhost (mfiltercorp1-d.gandi.net [217.70.183.155])
	by mail4.gandi.net (Postfix) with ESMTP id 9A45B120AA2;
	Fri,  7 Feb 2014 16:36:47 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mfiltercorp1-d.gandi.net
Received: from mail4.gandi.net ([217.70.183.210])
	by localhost (mfiltercorp1-d.gandi.net [217.70.183.155]) (amavisd-new,
	port 10024)
	with ESMTP id 5XJ1otlH6ah8; Fri,  7 Feb 2014 16:36:46 +0100 (CET)
Received: from gandi.net (hitchhiker.gandi.net [217.70.181.24])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by mail4.gandi.net (Postfix) with ESMTPSA id E0463120A73;
	Fri,  7 Feb 2014 16:36:46 +0100 (CET)
Date: Fri, 7 Feb 2014 16:38:26 +0100
From: William Dauchy <william@gandi.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140207153826.GF19084@gandi.net>
References: <20140207124607.GE19084@gandi.net>
	<20140207150224.GA3605@phenom.dumpdata.com>
MIME-Version: 1.0
In-Reply-To: <20140207150224.GA3605@phenom.dumpdata.com>
Reply-To: William Dauchy <william@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: konrad@kernel.org, William Dauchy <william@gandi.net>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7200421218307811908=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============7200421218307811908==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="kR3zbvD4cgoYnS/6"
Content-Disposition: inline


--kR3zbvD4cgoYnS/6
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Feb07 10:02, Konrad Rzeszutek Wilk wrote:
> That does not make sense. What are the leaks?

I agree, I still do not understand precisely why it fixes my issue.
When I say leaks, I mean global memory consumption keeps growing (the
`used` field in `free` output keeps increasing)

> What are the messages
> that you see about APIC?

No local APIC present
APIC: disable apic facility
APIC: switched to apic NOOP


There is no mention of it once the patch is applied.

--=20
William

--kR3zbvD4cgoYnS/6
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlL0/fIACgkQ1I6eqOUidQEReACfTXtVMl/RahjcgVFMBULp+wyX
YeQAni/1I/T8BByD3NtcMqA6YTzANl8D
=s/f3
-----END PGP SIGNATURE-----

--kR3zbvD4cgoYnS/6--


--===============7200421218307811908==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7200421218307811908==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 15:36:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnU6-0004nV-Ny; Fri, 07 Feb 2014 15:36:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WBnU5-0004ms-6m
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:36:53 +0000
Received: from [193.109.254.147:2537] by server-3.bemta-14.messagelabs.com id
	E6/39-00432-49DF4F25; Fri, 07 Feb 2014 15:36:52 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391787410!2809005!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9938 invoked from network); 7 Feb 2014 15:36:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:36:51 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FaiF8019830
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:36:47 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17Fagst001753
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 15:36:43 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17Fag6h025631; Fri, 7 Feb 2014 15:36:42 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:36:42 -0800
Message-ID: <52F4FDD9.5000608@oracle.com>
Date: Fri, 07 Feb 2014 10:38:01 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
	<52F5088F020000780011A484@nat28.tlf.novell.com>
In-Reply-To: <52F5088F020000780011A484@nat28.tlf.novell.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMDcvMjAxNCAxMDoyMyBBTSwgSmFuIEJldWxpY2ggd3JvdGU6Cj4+Pj4gT24gMDcuMDIu
MTQgYXQgMTY6MTIsIEJvcmlzIE9zdHJvdnNreSA8Ym9yaXMub3N0cm92c2t5QG9yYWNsZS5jb20+
IHdyb3RlOgo+PiBPbiAwMi8wNy8yMDE0IDA0OjIxIEFNLCBKYW4gQmV1bGljaCB3cm90ZToKPj4+
IC4uLiBidXQgaW50ZXJydXB0IHJlbWFwcGluZyBpcyByZXF1ZXN0ZWQgKHdpdGggcGVyLWRldmlj
ZSByZW1hcHBpbmcKPj4+IHRhYmxlcykuIFdpdGhvdXQgaXQsIHRoZSB0aW1lciBpbnRlcnJ1cHQg
aXMgdXN1YWxseSBub3Qgd29ya2luZy4KPj4+Cj4+PiBJbnNwaXJlZCBieSBMaW51eCdlcyAiaW9t
bXUvYW1kOiBXb3JrIGFyb3VuZCB3cm9uZyBJT0FQSUMgZGV2aWNlLWlkIGluCj4+PiBJVlJTIHRh
YmxlIiAoY29tbWl0IGMyZmY1Y2Y1Mjk0YmNiZDdmYTUwZjdkODYwZTkwYTY2ZGI3ZTUwNTkpIGJ5
IEpvZXJnCj4+PiBSb2VkZWwgPGpvZXJnLnJvZWRlbEBhbWQuY29tPi4KPj4+Cj4+PiBSZXBvcnRl
ZC1ieTogRXJpYyBIb3VieSA8ZWhvdWJ5QHlhaG9vLmNvbT4KPj4+IFNpZ25lZC1vZmYtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KPj4+IFRlc3RlZC1ieTogRXJpYyBIb3VieSA8
ZWhvdWJ5QHlhaG9vLmNvbT4KPj4+Cj4+PiAtLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9h
bWQvaW9tbXVfYWNwaS5jCj4+PiArKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9t
bXVfYWNwaS5jCj4+PiBAQCAtOTg0LDYgKzk4NCw3IEBAIHN0YXRpYyBpbnQgX19pbml0IHBhcnNl
X2l2cnNfdGFibGUoc3RydWMKPj4+ICAgICAgICBjb25zdCBzdHJ1Y3QgYWNwaV9pdnJzX2hlYWRl
ciAqaXZyc19ibG9jazsKPj4+ICAgICAgICB1bnNpZ25lZCBsb25nIGxlbmd0aDsKPj4+ICAgICAg
ICB1bnNpZ25lZCBpbnQgYXBpYzsKPj4+ICsgICAgYm9vbF90IHNiX2lvYXBpYyA9ICFpb21tdV9p
bnRyZW1hcDsKPj4+ICAgICAgICBpbnQgZXJyb3IgPSAwOwo+Pj4gICAgCj4+PiAgICAgICAgQlVH
X09OKCF0YWJsZSk7Cj4+PiBAQCAtMTAxNyw4ICsxMDE4LDE1IEBAIHN0YXRpYyBpbnQgX19pbml0
IHBhcnNlX2l2cnNfdGFibGUoc3RydWMKPj4+ICAgICAgICAvKiBFYWNoIElPLUFQSUMgbXVzdCBo
YXZlIGJlZW4gbWVudGlvbmVkIGluIHRoZSB0YWJsZS4gKi8KPj4+ICAgICAgICBmb3IgKCBhcGlj
ID0gMDsgIWVycm9yICYmIGlvbW11X2ludHJlbWFwICYmIGFwaWMgPCBucl9pb2FwaWNzOyArK2Fw
aWMgKQo+Pj4gICAgICAgIHsKPj4+IC0gICAgICAgIGlmICggIW5yX2lvYXBpY19lbnRyaWVzW2Fw
aWNdIHx8Cj4+PiAtICAgICAgICAgICAgIGlvYXBpY19zYmRmW0lPX0FQSUNfSUQoYXBpYyldLnBp
bl8yX2lkeCApCj4+PiArICAgICAgICBpZiAoICFucl9pb2FwaWNfZW50cmllc1thcGljXSApCj4+
PiArICAgICAgICAgICAgY29udGludWU7Cj4+PiArCj4+PiArICAgICAgICBpZiAoICFpb2FwaWNf
c2JkZltJT19BUElDX0lEKGFwaWMpXS5zZWcgJiYKPj4+ICsgICAgICAgICAgICAgLyogU0IgSU8t
QVBJQyBpcyBhbHdheXMgb24gdGhpcyBkZXZpY2UgaW4gQU1EIHN5c3RlbXMuICovCj4+PiArICAg
ICAgICAgICAgIGlvYXBpY19zYmRmW0lPX0FQSUNfSUQoYXBpYyldLmJkZiA9PSBQQ0lfQkRGKDAs
IDB4MTQsIDApICkKPj4+ICsgICAgICAgICAgICBzYl9pb2FwaWMgPSAxOwo+Pj4gKwo+Pj4gKyAg
ICAgICAgaWYgKCBpb2FwaWNfc2JkZltJT19BUElDX0lEKGFwaWMpXS5waW5fMl9pZHggKQo+Pj4g
ICAgICAgICAgICAgICAgY29udGludWU7Cj4+PiAgICAKPj4+ICAgICAgICAgICAgaWYgKCAhdGVz
dF9iaXQoSU9fQVBJQ19JRChhcGljKSwgaW9hcGljX2NtZGxpbmUpICkKPj4gSSBkb24ndCBrbm93
IHdoZXRoZXIgMDoxNDowIGlzIHNldCBpbiBzdG9uZSwgSSBkb24ndCByZW1lbWJlciBzZWVpbmcK
Pj4gYW55d2hlcmUgdGhhdCB0aGlzIGlzIGFyY2hpdGVjdHVyYWwuCj4+Cj4+IEluIHRoZSAodW5s
aWtlbHkpIGV2ZW50IHRoYXQgaXQgaXMgbW92ZWQgc29tZXdoZXJlIGVsc2Ugd2lsbCB0aGUgdXNl
ciBiZQo+PiBhYmxlIHRvIG92ZXJ3cml0ZSB3aGVyZSBpdCBpcz8gRG8geW91Cj4+IHRoaW5rIHRo
YXQgc2JfaW9hcGljIG1heSBuZWVkIHRvIGJlIHNldCB0byB0cnVlIGlmIGFwcHJvcHJpYXRlIGJp
dCBpcwo+PiBzZXQgaW4gaW9hcGljX2NtZGxpbmU/Cj4gVGhlc2UgYXJlIHF1ZXN0aW9uIHlvdSdk
IG5lZWQgdG8gYXNrIHRvIErDtnJnLCB0aGUgYXV0aG9yIG9mIHRoZQo+IG9yaWdpbmFsIExpbnV4
IHNpZGUgcGF0Y2guIEkgdG9vayBhcyBhIHByZWNvbmRpdGlvbiBoZXJlIHRoYXQgaGUKPiBrbmV3
IHdoYXQgaGUgd2FzIGRvaW5nLgoKWGVuIGFscmVhZHkgaGFzIGEgd2F5IHRvIG92ZXJyaWRlIElW
UlMnIHZpZXcgb2YgSU9BUElDcyB3aXRoIAppb2FwaWNfY21kbGluZSwgc29tZXRoaW5nIHRoYXQg
TGludXggZG9lc24ndC4gUHJlc3VtYWJseSBpZiB0aGUgdXNlciAKc2V0cyBpdnJzX2lvYXBpY1td
IG9wdGlvbiBvbiBib290IGxpbmUgdGhlbiBoZSBrbm93cyB3aGF0IGhlIGlzIGRvaW5nIAooYXQg
bGVhc3Qgb25lIHdvdWxkIGhvcGUgc28pLgoKTXkgY29uY2VybiBpcyB0aGF0IHRoaXMgcGF0Y2gg
d291bGQgcHJldmVudCB0aGUgdXNlciBmcm9tIHNwZWNpZnlpbmcgCndoZXJlIHRoZSBJT0FQSUMg
aXMuIFdpbGwgdGhpcyBib290IG9wdGlvbiBiZSB1c2VmdWwgYXQgYWxsIG5vdz8gV2hlbiB3ZSAK
c3BlY2lmeSBhbnl0aGluZyBidXQgMDoxNDowIGl0IHdpbGwgYmUgcHJldHR5IG11Y2ggaWdub3Jl
ZCwgd29uJ3QgaXQ/CgotYm9yaXMKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5v
cmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:41:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnYj-0005gu-76; Fri, 07 Feb 2014 15:41:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnYh-0005gk-J5
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:41:39 +0000
Received: from [85.158.139.211:13140] by server-8.bemta-5.messagelabs.com id
	8D/B6-05298-2BEF4F25; Fri, 07 Feb 2014 15:41:38 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391787696!2415577!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2860 invoked from network); 7 Feb 2014 15:41:37 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 15:41:37 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17FfV2j025687
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:41:31 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FfUEK007386
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 15:41:30 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17FfU8P013391; Fri, 7 Feb 2014 15:41:30 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:41:29 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 88F041C0972; Fri,  7 Feb 2014 10:41:28 -0500 (EST)
Date: Fri, 7 Feb 2014 10:41:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Message-ID: <20140207154128.GE3605@phenom.dumpdata.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Jan Beulich <JBeulich@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 02:28:07AM +0000, Zhang, Yang Z wrote:
> Konrad Rzeszutek Wilk wrote on 2014-02-05:
> > On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
> >> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
> >>> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
> >>>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com> wrote:
> >>>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
> >>>>>> Wasn't it that Mukesh's patch simply was yours with the two
> >>>>>> get_ioreq()s folded by using a local variable?
> >>>>> Yes. As so
> >>>> Thanks. Except that ...
> >>>> 
> >>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> >>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> >>>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
> >>>>>      struct vcpu *v = current;
> >>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> >>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
> >>>>> -
> >>>>> +    ioreq_t *p = get_ioreq(v);
> >>>> ... you don't want to drop the blank line, and naming the new
> >>>> variable "ioreq" would seem preferable.
> >>>> 
> >>>>>      /*
> >>>>>       * a pending IO emualtion may still no finished. In this case,
> >>>>>       * no virtual vmswith is allowed. Or else, the following IO
> >>>>>       * emulation will handled in a wrong VCPU context.
> >>>>>       */
> >>>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> >>>>> +    if ( p && p->state != STATE_IOREQ_NONE )
> >>>> And, as said before, I'd think "!p ||" instead of "p &&" would be
> >>>> the right thing here. Yang, Jun?
> >>> I have two patches - the simpler one, which is pretty
> >>> straightforward, and the one you suggested. Either one fixes PVH
> >>> guests. I also did bootup tests with HVM guests to make sure they worked.
> >>> 
> >>> Attached and inline.
> >> 
> 
> Sorry for the late response. I just got back from the Chinese New Year holiday.
> 
> >> But they do different things -- one does "ioreq && ioreq->state..."
> > 
> > Correct.
> >> and the other does "!ioreq || ioreq->state...".  The first one is
> >> incorrect, AFAICT.
> > 
> > Both of them fix the hypervisor blowing up with any PVH guest.
> 
> Both fixes look right to me.
> The only question is what we want to do here:
> "ioreq && ioreq->state..." only allows a VCPU that supports the IO request emulation mechanism (which currently means an HVM VCPU) to continue with the nested check.
> And "!ioreq || ioreq->state..." additionally bails out for a VCPU that doesn't support the IO request emulation mechanism (which currently means a PVH VCPU).
> 
> My original patch was only meant to let an HVM VCPU without a pending IO request continue with the nested check, not to distinguish HVM from PVH. So I prefer to allow only an HVM VCPU to get here, as Jan mentioned before that a non-HVM domain should never call nested-related functions at all unless it also supports nesting.

So it sounds like you prefer the #2 patch.

Can I stick Acked-by on it?


>From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
From: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Mon, 3 Feb 2014 11:45:52 -0500
Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
 use io-backend device.

Commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
("Nested VMX: prohibit virtual vmentry/vmexit during IO emulation")
assumes that the HVM paths are only taken by HVM guests. With PVH
enabled that is no longer the case - which means the IO-backend
device (QEMU) need not be present.

As such, that patch can crash the hypervisor:

Xen call trace:
    [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
    [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0

Pagetable walk from 000000000000001e:
  L4[0x000] = 0000000000000000 ffffffffffffffff

****************************************
Panic on CPU 7:
FATAL PAGE FAULT
[error_code=0000]
Faulting linear address: 000000000000001e
****************************************

as we do not have an IO-based backend. Should a PVH guest itself
run an HVM guest inside it, further work is needed to support that
configuration; for now the check bails us out.

We also fix spelling mistakes and the sentence structure.

CC: Yang Zhang <yang.z.zhang@Intel.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
 1 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index d2ba435..71522cf 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
     struct vcpu *v = current;
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    ioreq_t *ioreq = get_ioreq(v);
 
     /*
-     * a pending IO emualtion may still no finished. In this case,
+     * A pending IO emulation may still be not finished. In this case,
      * no virtual vmswith is allowed. Or else, the following IO
-     * emulation will handled in a wrong VCPU context.
+     * emulation will be handled in a wrong VCPU context. If there are
+     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
+     * running inside - we don't want to continue as this setup is not
+     * implemented nor supported as of right now.
      */
-    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
+    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
-- 
1.7.7.6


> 
> Best regards,
> Yang
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:48:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnf1-00064U-Ej; Fri, 07 Feb 2014 15:48:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnez-00064L-Ro
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:48:10 +0000
Received: from [85.158.143.35:13608] by server-3.bemta-4.messagelabs.com id
	77/E6-11539-93005F25; Fri, 07 Feb 2014 15:48:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391788087!3979965!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17560 invoked from network); 7 Feb 2014 15:48:08 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 15:48:08 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17Fm3cD001545
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 15:48:04 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17Fm2xv001085
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 15:48:02 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17Fm2cZ001071; Fri, 7 Feb 2014 15:48:02 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 07:48:01 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CF5D51C0972; Fri,  7 Feb 2014 10:48:00 -0500 (EST)
Date: Fri, 7 Feb 2014 10:48:00 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Bob Liu <lliubbo@gmail.com>
Message-ID: <20140207154800.GA4855@phenom.dumpdata.com>
References: <1383872637-15486-1-git-send-email-bob.liu@oracle.com>
	<1383872637-15486-12-git-send-email-bob.liu@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1383872637-15486-12-git-send-email-bob.liu@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, keir@xen.org, ian.campbell@citrix.com,
	JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 11/11] tmem: cleanup: drop useless
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Nov 08, 2013 at 09:03:57AM +0800, Bob Liu wrote:
> Functions tmem_release_avail_pages_to_host() and tmem_scrub_page() are each
> only used once, so there is no need to keep them separate.

All of the patches look good to me. Let me put them in my tree
and do a sanity check tonight and then send a git pull to Jan
on Monday.

Thank you for making the code much easier to read!
> 
> Signed-off-by: Bob Liu <bob.liu@oracle.com>
> ---
>  xen/common/tmem.c          |   19 +++++++++++++++++--
>  xen/common/tmem_xen.c      |   24 ------------------------
>  xen/include/xen/tmem_xen.h |    3 ---
>  3 files changed, 17 insertions(+), 29 deletions(-)
> 
> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
> index f009fd8..3d15ead 100644
> --- a/xen/common/tmem.c
> +++ b/xen/common/tmem.c
> @@ -1418,7 +1418,19 @@ static unsigned long tmem_relinquish_npages(unsigned long n)
>              break;
>      }
>      if ( avail_pages )
> -        tmem_release_avail_pages_to_host();
> +    {
> +        spin_lock(&tmem_page_list_lock);
> +        while ( !page_list_empty(&tmem_page_list) )
> +        {
> +            struct page_info *pg = page_list_remove_head(&tmem_page_list);
> +            scrub_one_page(pg);
> +            tmem_page_list_pages--;
> +            free_domheap_page(pg);
> +        }
> +        ASSERT(tmem_page_list_pages == 0);
> +        INIT_PAGE_LIST_HEAD(&tmem_page_list);
> +        spin_unlock(&tmem_page_list_lock);
> +    }
>      return avail_pages;
>  }
>  
> @@ -2911,9 +2923,12 @@ EXPORT void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
>      }
>      if ( evicts_per_relinq > max_evicts_per_relinq )
>          max_evicts_per_relinq = evicts_per_relinq;
> -    tmem_scrub_page(pfp, memflags);
>      if ( pfp != NULL )
> +    {
> +        if ( !(memflags & MEMF_tmem) )
> +            scrub_one_page(pfp);
>          relinq_pgs++;
> +    }
>  
>      if ( tmem_called_from_tmem(memflags) )
>      {
> diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
> index 0f5955d..d6e2e0d 100644
> --- a/xen/common/tmem_xen.c
> +++ b/xen/common/tmem_xen.c
> @@ -289,30 +289,6 @@ EXPORT DEFINE_SPINLOCK(tmem_page_list_lock);
>  EXPORT PAGE_LIST_HEAD(tmem_page_list);
>  EXPORT unsigned long tmem_page_list_pages = 0;
>  
> -/* free anything on tmem_page_list to Xen's scrub list */
> -EXPORT void tmem_release_avail_pages_to_host(void)
> -{
> -    spin_lock(&tmem_page_list_lock);
> -    while ( !page_list_empty(&tmem_page_list) )
> -    {
> -        struct page_info *pg = page_list_remove_head(&tmem_page_list);
> -        scrub_one_page(pg);
> -        tmem_page_list_pages--;
> -        free_domheap_page(pg);
> -    }
> -    ASSERT(tmem_page_list_pages == 0);
> -    INIT_PAGE_LIST_HEAD(&tmem_page_list);
> -    spin_unlock(&tmem_page_list_lock);
> -}
> -
> -EXPORT void tmem_scrub_page(struct page_info *pi, unsigned int memflags)
> -{
> -    if ( pi == NULL )
> -        return;
> -    if ( !(memflags & MEMF_tmem) )
> -        scrub_one_page(pi);
> -}
> -
>  static noinline void *tmem_mempool_page_get(unsigned long size)
>  {
>      struct page_info *pi;
> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
> index f9639a5..034fd5c 100644
> --- a/xen/include/xen/tmem_xen.h
> +++ b/xen/include/xen/tmem_xen.h
> @@ -42,9 +42,6 @@ extern void tmem_copy_page(char *to, char*from);
>  extern int tmem_init(void);
>  #define tmem_hash hash_long
>  
> -extern void tmem_release_avail_pages_to_host(void);
> -extern void tmem_scrub_page(struct page_info *pi, unsigned int memflags);
> -
>  extern bool_t opt_tmem_compress;
>  static inline bool_t tmem_compression_enabled(void)
>  {
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:50:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:50:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBngr-0006Q0-0a; Fri, 07 Feb 2014 15:50:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBngp-0006Pq-Ku
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 15:50:03 +0000
Received: from [85.158.137.68:52020] by server-12.bemta-3.messagelabs.com id
	41/78-01674-AA005F25; Fri, 07 Feb 2014 15:50:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391788201!370550!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7301 invoked from network); 7 Feb 2014 15:50:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 15:50:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 15:50:01 +0000
Message-Id: <52F50EB9020000780011A4D8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 15:50:01 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
	<52F5088F020000780011A484@nat28.tlf.novell.com>
	<52F4FDD9.5000608@oracle.com>
In-Reply-To: <52F4FDD9.5000608@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 16:38, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/07/2014 10:23 AM, Jan Beulich wrote:
>>>>> On 07.02.14 at 16:12, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> On 02/07/2014 04:21 AM, Jan Beulich wrote:
>>>> ... but interrupt remapping is requested (with per-device remapping
>>>> tables). Without it, the timer interrupt is usually not working.
>>>>
>>>> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
>>>> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
>>>> Roedel <joerg.roedel@amd.com>.
>>>>
>>>> Reported-by: Eric Houby <ehouby@yahoo.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> Tested-by: Eric Houby <ehouby@yahoo.com>
>>>>
>>>> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
>>>> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
>>>> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>>>>        const struct acpi_ivrs_header *ivrs_block;
>>>>        unsigned long length;
>>>>        unsigned int apic;
>>>> +    bool_t sb_ioapic = !iommu_intremap;
>>>>        int error = 0;
>>>>
>>>>        BUG_ON(!table);
>>>> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>>>>        /* Each IO-APIC must have been mentioned in the table. */
>>>>        for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>>>>        {
>>>> -        if ( !nr_ioapic_entries[apic] ||
>>>> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>> +        if ( !nr_ioapic_entries[apic] )
>>>> +            continue;
>>>> +
>>>> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
>>>> +             /* SB IO-APIC is always on this device in AMD systems. */
>>>> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
>>>> +            sb_ioapic = 1;
>>>> +
>>>> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>                continue;
>>>>
>>>>            if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
>>> I don't know whether 0:14:0 is set in stone, I don't remember seeing
>>> anywhere that this is architectural.
>>>
>>> In the (unlikely) event that it is moved somewhere else will the user be
>>> able to overwrite where it is? Do you
>>> think that sb_ioapic may need to be set to true if appropriate bit is
>>> set in ioapic_cmdline?
>> These are question you'd need to ask to Jörg, the author of the
>> original Linux side patch. I took as a precondition here that he
>> knew what he was doing.
>
> Xen already has a way to override IVRS' view of IOAPICs with
> ioapic_cmdline, something that Linux doesn't. Presumably if the user
> sets ivrs_ioapic[] option on boot line then he knows what he is doing
> (at least one would hope so).

I think the logic we have is sufficiently similar to Linux'es.

> My concern is that this patch would prevent the user from specifying
> where the IOAPIC is. Will this boot option be useful at all now? When we
> specify anything but 0:14:0 it will be pretty much ignored, won't it?

But the purpose here isn't to override how the hardware is
structured, but to overcome firmware vendors not getting their
ACPI tables correct.

Furthermore, what is being specified here can very well be
different from 00:14.0 - consider the northbridge IO-APIC and
eventual further ones.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnmF-0006mG-4o; Fri, 07 Feb 2014 15:55:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBnmE-0006mB-EJ
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 15:55:38 +0000
Received: from [193.109.254.147:45608] by server-16.bemta-14.messagelabs.com
	id AA/37-21945-9F105F25; Fri, 07 Feb 2014 15:55:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391788535!2813760!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22228 invoked from network); 7 Feb 2014 15:55:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:55:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100862307"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 15:55:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 10:55:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBnmA-00089F-9p;
	Fri, 07 Feb 2014 15:55:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBnm9-0007Ea-I4;
	Fri, 07 Feb 2014 15:55:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24775-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 15:55:33 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24775: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1542072942678763374=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1542072942678763374==
Content-Type: text/plain

flight 24775 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24775/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build        fail in 24755 REGR. vs. 24699

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 24755
 test-amd64-amd64-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail pass in 24755

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24755 never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)


--===============1542072942678763374==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1542072942678763374==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnmF-0006mG-4o; Fri, 07 Feb 2014 15:55:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBnmE-0006mB-EJ
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 15:55:38 +0000
Received: from [193.109.254.147:45608] by server-16.bemta-14.messagelabs.com
	id AA/37-21945-9F105F25; Fri, 07 Feb 2014 15:55:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391788535!2813760!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22228 invoked from network); 7 Feb 2014 15:55:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:55:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100862307"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 15:55:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 10:55:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBnmA-00089F-9p;
	Fri, 07 Feb 2014 15:55:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBnm9-0007Ea-I4;
	Fri, 07 Feb 2014 15:55:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24775-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 15:55:33 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24775: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1542072942678763374=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1542072942678763374==
Content-Type: text/plain

flight 24775 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24775/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build        fail in 24755 REGR. vs. 24699

Tests which are failing intermittently (not blocking):
 test-amd64-i386-rhel6hvm-intel  7 redhat-install            fail pass in 24755
 test-amd64-amd64-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail pass in 24755

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24755 never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
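[Editorial illustration] The unsigned-arithmetic sanitisation described in the proof sketch above can be shown in a minimal standalone fragment. The struct layout and field names here are assumptions for illustration, not the actual libvchan code:

```c
#include <stdint.h>

/* Illustrative ring state.  The (possibly hostile) peer can write the
 * indices at any time, so they must be treated as untrusted. */
struct ring {
    uint32_t prod;  /* producer index, set by the peer */
    uint32_t cons;  /* consumer index */
    uint32_t size;  /* ring size in bytes */
};

/* Bytes ready for reading, sanitised.  The subtraction is unsigned, so
 * a "negative" difference wraps to a huge value; anything > size is a
 * mad ring state and is reported as 0 rather than fed to memcpy. */
static int raw_get_data_ready(const struct ring *r)
{
    uint32_t ready = r->prod - r->cons;
    if (ready > r->size)
        return 0;
    return (int)ready;  /* now a safe, in-range value */
}
```

A sane ring yields the expected byte count; a ring whose indices have been scribbled on by the peer yields 0, so do_send/do_recv never see an out-of-range value.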

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100
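[Editorial illustration] The class of bug fixed here fits in a few lines; the function name below is hypothetical, not the actual FLASK code:

```c
/* Hypothetical sketch of a per-CPU bounds check.  Valid CPU ids are
 * 0 .. nr_cpus-1, so rejection must use >=.  The off-by-one variant
 * 'if (id > nr_cpus)' wrongly accepts id == nr_cpus, indexing one
 * element past the end of a per-CPU stats array. */
static int cachestats_valid_cpu(unsigned int id, unsigned int nr_cpus)
{
    if (id >= nr_cpus)   /* corrected check */
        return 0;        /* out of range */
    return 1;            /* safe to index stats[id] */
}
```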

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)
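[Editorial illustration] The range-check-before-allocate pattern this commit describes looks roughly like the following. This is a hedged sketch; the 4096-byte limit mirrors the PAGE_SIZE bound mentioned in the commit message, and the function name is invented:

```c
#include <stdlib.h>
#include <string.h>

#define STR_LIMIT 4096  /* PAGE_SIZE-style upper bound from the commit */

/* Copy a guest-supplied string whose length the guest also supplies.
 * The claimed length is range checked before any allocation, so a
 * hostile guest cannot trigger a zero-sized or arbitrarily large
 * allocation based on a value it controls. */
static char *copy_guest_string(const char *guest_buf, size_t claimed_len)
{
    char *s;

    if (claimed_len == 0 || claimed_len > STR_LIMIT)
        return NULL;                /* reject before allocating */
    s = malloc(claimed_len + 1);
    if (s == NULL)
        return NULL;
    memcpy(s, guest_buf, claimed_len);
    s[claimed_len] = '\0';
    return s;
}
```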


--===============1542072942678763374==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1542072942678763374==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 15:59:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnqJ-0007H8-Nq; Fri, 07 Feb 2014 15:59:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBnkm-0006hm-WA
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:54:09 +0000
Received: from [85.158.137.68:36588] by server-10.bemta-3.messagelabs.com id
	16/9F-07302-0A105F25; Fri, 07 Feb 2014 15:54:08 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391788442!369554!1
X-Originating-IP: [209.85.220.182]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32078 invoked from network); 7 Feb 2014 15:54:03 -0000
Received: from mail-vc0-f182.google.com (HELO mail-vc0-f182.google.com)
	(209.85.220.182)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:54:03 -0000
Received: by mail-vc0-f182.google.com with SMTP id id10so2828600vcb.27
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 07:54:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=1eeLPD/rLDw/n9DilLKW7gp5QuLfdyc0QJnsP9ClmOI=;
	b=VCIeBgQi1q+SofKgzlc5rdzhYHzITc8axKnu87G1CNcNQVo1Lg6SEUMh4QJd+qtPYc
	st+BpP32V36AzAr+mHcWWm3IghvFl7CqpYiJk5wngZ8Fdo/gwEaaeJOylw0WYzs2b6qk
	qYcPFW6c8fxkIEwroTAKwPgyaiJvFquOJ9atY2GZO0IIZkeyv6zXYODw0UekiA2zrqaD
	ObvkNK9rQ+lA620JR0Q1NASWrWxlyzoaKfCIbOWHQTfX/Z0xxreNJqJA5IMfrZG4iOSS
	8F7Dy7s/ksQ7kBbBMReNTSOsC7wGtgNCqX1ll4KDAvz+j9cHbWkyCKQG94Vejd9jELda
	puEQ==
X-Received: by 10.52.30.167 with SMTP id t7mr5376490vdh.36.1391788442372; Fri,
	07 Feb 2014 07:54:02 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 07:53:22 -0800 (PST)
In-Reply-To: <20140207152547.GB3605@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 10:53:22 -0500
Message-ID: <CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 15:59:49 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8994647789825843070=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8994647789825843070==
Content-Type: multipart/alternative; boundary=bcaec51d2c42b5d0c304f1d2ff27

--bcaec51d2c42b5d0c304f1d2ff27
Content-Type: text/plain; charset=ISO-8859-1

I did not.  I do not have the toolchain installed.  I may have time later
today to try the patch.  Are there any specific instructions on how to
patch the src, compile and install?

Regards


On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > Hi all,
> >
> > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC)
> to an
> > HVM.  I have been attempting to resolve this issue on the xen-users list,
> > but it was advised to post this issue to this list. (Initial Message -
> >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> >
> > The machine I am using as host is a Dell Poweredge server with a Xeon
> > E31220 with 4GB of ram.
> >
> > The possible bug is the following:
> > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > char device redirected to /dev/pts/5 (label serial0)
> > qemu: hardware error: xen: failed to populate ram at 40030000
> > ....
> >
> > I believe it may be similar to this thread
> >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> >
> >
> > Additional info that may be helpful is below.
>
> Did you try the patch?
> >
> > Please let me know if you need any additional information.
> >
> > Thanks in advance for any help provided!
> > Regards
> >
> > ###########################################################
> > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > ###########################################################
> > # Configuration file for Xen HVM
> >
> > # HVM Name (as appears in 'xl list')
> > name="ubuntu-hvm-0"
> > # HVM Build settings (+ hardware)
> > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > builder='hvm'
> > device_model='qemu-dm'
> > memory=1024
> > vcpus=2
> >
> > # Virtual Interface
> > # Network bridge to USB NIC
> > vif=['bridge=xenbr0']
> >
> > ################### PCI PASSTHROUGH ###################
> > # PCI Permissive mode toggle
> > #pci_permissive=1
> >
> > # All PCI Devices
> > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> >
> > # First two ports on Intel 4x1G NIC
> > #pci=['03:00.0','03:00.1']
> >
> > # Last two ports on Intel 4x1G NIC
> > #pci=['04:00.0', '04:00.1']
> >
> > # All ports on Intel 4x1G NIC
> > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> >
> > # Broadcom 2x1G NIC
> > #pci=['05:00.0', '05:00.1']
> > ################### PCI PASSTHROUGH ###################
> >
> > # HVM Disks
> > # Hard disk only
> > # Boot from HDD first ('c')
> > boot="c"
> > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> >
> > # Hard disk with ISO
> > # Boot from ISO first ('d')
> > #boot="d"
> > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> >
> > # ACPI Enable
> > acpi=1
> > # HVM Event Modes
> > on_poweroff='destroy'
> > on_reboot='restart'
> > on_crash='restart'
> >
> > # Serial Console Configuration (Xen Console)
> > sdl=0
> > serial='pty'
> >
> > # VNC Configuration
> > # Only reachable from localhost
> > vnc=1
> > vnclisten="0.0.0.0"
> > vncpasswd=""
> >
> > ###########################################################
> > Copied from xen-users list
> > ###########################################################
> >
> > It appears that it cannot obtain the RAM mapping for this PCI device.
> >
> >
> > I rebooted the host and assigned the PCI devices to pciback. The output
> > looks like:
> > root@fiat:~# ./dev_mgmt.sh
> > Loading Kernel Module 'xen-pciback'
> > Calling function pciback_dev for:
> > PCI DEVICE 0000:03:00.0
> > Unbinding 0000:03:00.0 from igb
> > Binding 0000:03:00.0 to pciback
> >
> > PCI DEVICE 0000:03:00.1
> > Unbinding 0000:03:00.1 from igb
> > Binding 0000:03:00.1 to pciback
> >
> > PCI DEVICE 0000:04:00.0
> > Unbinding 0000:04:00.0 from igb
> > Binding 0000:04:00.0 to pciback
> >
> > PCI DEVICE 0000:04:00.1
> > Unbinding 0000:04:00.1 from igb
> > Binding 0000:04:00.1 to pciback
> >
> > PCI DEVICE 0000:05:00.0
> > Unbinding 0000:05:00.0 from bnx2
> > Binding 0000:05:00.0 to pciback
> >
> > PCI DEVICE 0000:05:00.1
> > Unbinding 0000:05:00.1 from bnx2
> > Binding 0000:05:00.1 to pciback
> >
> > Listing PCI Devices Available to Xen
> > 0000:03:00.0
> > 0000:03:00.1
> > 0000:04:00.0
> > 0000:04:00.1
> > 0000:05:00.0
> > 0000:05:00.1
> >
> > ###########################################################
> > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > WARNING: ignoring device_model directive.
> > WARNING: Use "device_model_override" instead if you really want a
> > non-default device_model
> > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > how=(nil) callback=(nil) poller=0x210c3c0
> > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > vdev=hda spec.backend=unknown
> > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > vdev=hda, using backend phy
> > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> bootloader
> > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > domain, skipping bootloader
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x210c728: deregister unregistered
> > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > free_memkb=2980
> > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> candidate
> > with 1 nodes, 4 cpus and 2980 KB free selected
> > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> >   Loader:        0000000000100000->00000000001a69a4
> >   Modules:       0000000000000000->0000000000000000
> >   TOTAL:         0000000000000000->000000003f800000
> >   ENTRY ADDRESS: 0000000000100608
> > xc: info: PHYSICAL MEMORY ALLOCATION:
> >   4KB PAGES: 0x0000000000000200
> >   2MB PAGES: 0x00000000000001fb
> >   1GB PAGES: 0x0000000000000000
> > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > vdev=hda spec.backend=phy
> > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > register slotnum=3
> > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > inprogress: poller=0x210c3c0, flags=i
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > epath=/local/domain/0/backend/vbd/2/768/state
> > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> state 1
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > epath=/local/domain/0/backend/vbd/2/768/state
> > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > deregister slotnum=3
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x2112f48: deregister unregistered
> > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > /etc/xen/scripts/block add
> > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> device-model
> > /usr/bin/qemu-system-i386 with arguments:
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > /usr/bin/qemu-system-i386
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > chardev=libxl-cmd,mode=control
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> register
> > slotnum=3
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > epath=/local/domain/0/device-model/2/state
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > epath=/local/domain/0/device-model/2/state
> > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > deregister slotnum=3
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x210c960: deregister unregistered
> > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > /var/run/xen/qmp-libxl-2
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "qmp_capabilities",
> >     "id": 1
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "query-chardev",
> >     "id": 2
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "change",
> >     "id": 3,
> >     "arguments": {
> >         "device": "vnc",
> >         "target": "password",
> >         "arg": ""
> >     }
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "query-vnc",
> >     "id": 4
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> register
> > slotnum=3
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > epath=/local/domain/0/backend/vif/2/0/state
> > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state
> 1
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > epath=/local/domain/0/backend/vif/2/0/state
> > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > deregister slotnum=3
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x210e8a8: deregister unregistered
> > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > /etc/xen/scripts/vif-bridge online
> > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > /etc/xen/scripts/vif-bridge add
> > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > /var/run/xen/qmp-libxl-2
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "qmp_capabilities",
> >     "id": 1
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "device_add",
> >     "id": 2,
> >     "arguments": {
> >         "driver": "xen-pci-passthrough",
> >         "id": "pci-pt-03_00.0",
> >         "hostaddr": "0000:03:00.0"
> >     }
> > }
> > '
> > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection
> reset
> > by peer
> > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> backend
> > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> > progress report: ignored
> > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > complete, rc=0
> > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> destroy
> > Daemon running with PID 3214
> > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > xc: debug: hypercall buffer: cache current size:4
> > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> >
> > ###########################################################
> > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > char device redirected to /dev/pts/5 (label serial0)
> > qemu: hardware error: xen: failed to populate ram at 40030000
> > CPU #0:
> > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > ES =0000 00000000 0000ffff 00009300
> > CS =f000 ffff0000 0000ffff 00009b00
> > SS =0000 00000000 0000ffff 00009300
> > DS =0000 00000000 0000ffff 00009300
> > FS =0000 00000000 0000ffff 00009300
> > GS =0000 00000000 0000ffff 00009300
> > LDT=0000 00000000 0000ffff 00008200
> > TR =0000 00000000 0000ffff 00008b00
> > GDT=     00000000 0000ffff
> > IDT=     00000000 0000ffff
> > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > DR6=ffff0ff0 DR7=00000400
> > EFER=0000000000000000
> > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > XMM00=00000000000000000000000000000000
> > XMM01=00000000000000000000000000000000
> > XMM02=00000000000000000000000000000000
> > XMM03=00000000000000000000000000000000
> > XMM04=00000000000000000000000000000000
> > XMM05=00000000000000000000000000000000
> > XMM06=00000000000000000000000000000000
> > XMM07=00000000000000000000000000000000
> > CPU #1:
> > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > ES =0000 00000000 0000ffff 00009300
> > CS =f000 ffff0000 0000ffff 00009b00
> > SS =0000 00000000 0000ffff 00009300
> > DS =0000 00000000 0000ffff 00009300
> > FS =0000 00000000 0000ffff 00009300
> > GS =0000 00000000 0000ffff 00009300
> > LDT=0000 00000000 0000ffff 00008200
> > TR =0000 00000000 0000ffff 00008b00
> > GDT=     00000000 0000ffff
> > IDT=     00000000 0000ffff
> > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > DR6=ffff0ff0 DR7=00000400
> > EFER=0000000000000000
> > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > XMM00=00000000000000000000000000000000
> > XMM01=00000000000000000000000000000000
> > XMM02=00000000000000000000000000000000
> > XMM03=00000000000000000000000000000000
> > XMM04=00000000000000000000000000000000
> > XMM05=00000000000000000000000000000000
> > XMM06=00000000000000000000000000000000
> > XMM07=00000000000000000000000000000000
> >
> > ###########################################################
> > /etc/default/grub
> > GRUB_DEFAULT="Xen 4.3-amd64"
> > GRUB_HIDDEN_TIMEOUT=0
> > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > GRUB_TIMEOUT=10
> > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > GRUB_CMDLINE_LINUX=""
> > # biosdevname=0
> > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>

--bcaec51d2c42b5d0c304f1d2ff27--


--===============8994647789825843070==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8994647789825843070==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 15:59:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 15:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnqJ-0007H8-Nq; Fri, 07 Feb 2014 15:59:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBnkm-0006hm-WA
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 15:54:09 +0000
Received: from [85.158.137.68:36588] by server-10.bemta-3.messagelabs.com id
	16/9F-07302-0A105F25; Fri, 07 Feb 2014 15:54:08 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391788442!369554!1
X-Originating-IP: [209.85.220.182]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32078 invoked from network); 7 Feb 2014 15:54:03 -0000
Received: from mail-vc0-f182.google.com (HELO mail-vc0-f182.google.com)
	(209.85.220.182)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 15:54:03 -0000
Received: by mail-vc0-f182.google.com with SMTP id id10so2828600vcb.27
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 07:54:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=1eeLPD/rLDw/n9DilLKW7gp5QuLfdyc0QJnsP9ClmOI=;
	b=VCIeBgQi1q+SofKgzlc5rdzhYHzITc8axKnu87G1CNcNQVo1Lg6SEUMh4QJd+qtPYc
	st+BpP32V36AzAr+mHcWWm3IghvFl7CqpYiJk5wngZ8Fdo/gwEaaeJOylw0WYzs2b6qk
	qYcPFW6c8fxkIEwroTAKwPgyaiJvFquOJ9atY2GZO0IIZkeyv6zXYODw0UekiA2zrqaD
	ObvkNK9rQ+lA620JR0Q1NASWrWxlyzoaKfCIbOWHQTfX/Z0xxreNJqJA5IMfrZG4iOSS
	8F7Dy7s/ksQ7kBbBMReNTSOsC7wGtgNCqX1ll4KDAvz+j9cHbWkyCKQG94Vejd9jELda
	puEQ==
X-Received: by 10.52.30.167 with SMTP id t7mr5376490vdh.36.1391788442372; Fri,
	07 Feb 2014 07:54:02 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 07:53:22 -0800 (PST)
In-Reply-To: <20140207152547.GB3605@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 10:53:22 -0500
Message-ID: <CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 15:59:49 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8994647789825843070=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8994647789825843070==
Content-Type: multipart/alternative; boundary=bcaec51d2c42b5d0c304f1d2ff27

--bcaec51d2c42b5d0c304f1d2ff27
Content-Type: text/plain; charset=ISO-8859-1

I did not.  I do not have the toolchain installed.  I may have time later
today to try the patch.  Are there any specific instructions on how to
patch the source, compile, and install?

Regards
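
For reference, the usual "apply a patch and rebuild" cycle against the Xen
4.3 source tree looks roughly like the sketch below. The patch filename and
the -p1 strip level are assumptions (adjust to however the patch was
generated), and the exact branch/tag should match the installed hypervisor:

```shell
# Sketch only: patch, build, and install Xen 4.3 from source.
# The patch file name below is hypothetical -- substitute the one
# posted to the list.

# Fetch the stable-4.3 source matching the running hypervisor
git clone -b stable-4.3 git://xenbits.xen.org/xen.git
cd xen

# Dry-run first to confirm the patch applies cleanly, then apply it
patch -p1 --dry-run < ~/qemu-populate-ram.patch
patch -p1 < ~/qemu-populate-ram.patch

# Build and install the hypervisor and toolstack
./configure
make xen tools
sudo make install-xen install-tools

# Update the bootloader entries and reboot into the new hypervisor
sudo update-grub
```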


On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > Hi all,
> >
> > I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC)
> to an
> > HVM.  I have been attempting to resolve this issue on the xen-users list,
> > but it was advised to post this issue to this list. (Initial Message -
> >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> >
> > The machine I am using as host is a Dell PowerEdge server with a Xeon
> > E3-1220 and 4GB of RAM.
> >
> > The possible bug is the following:
> > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > char device redirected to /dev/pts/5 (label serial0)
> > qemu: hardware error: xen: failed to populate ram at 40030000
> > ....
> >
> > I believe it may be similar to this thread
> >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> >
> >
> > Additional info that may be helpful is below.
>
> Did you try the patch?
> >
> > Please let me know if you need any additional information.
> >
> > Thanks in advance for any help provided!
> > Regards
> >
> > ###########################################################
> > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > ###########################################################
> > # Configuration file for Xen HVM
> >
> > # HVM Name (as appears in 'xl list')
> > name="ubuntu-hvm-0"
> > # HVM Build settings (+ hardware)
> > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > builder='hvm'
> > device_model='qemu-dm'
> > memory=1024
> > vcpus=2
> >
> > # Virtual Interface
> > # Network bridge to USB NIC
> > vif=['bridge=xenbr0']
> >
> > ################### PCI PASSTHROUGH ###################
> > # PCI Permissive mode toggle
> > #pci_permissive=1
> >
> > # All PCI Devices
> > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> >
> > # First two ports on Intel 4x1G NIC
> > #pci=['03:00.0','03:00.1']
> >
> > # Last two ports on Intel 4x1G NIC
> > #pci=['04:00.0', '04:00.1']
> >
> > # All ports on Intel 4x1G NIC
> > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> >
> > # Broadcom 2x1G NIC
> > #pci=['05:00.0', '05:00.1']
> > ################### PCI PASSTHROUGH ###################
> >
> > # HVM Disks
> > # Hard disk only
> > # Boot from HDD first ('c')
> > boot="c"
> > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> >
> > # Hard disk with ISO
> > # Boot from ISO first ('d')
> > #boot="d"
> > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> >
> > # ACPI Enable
> > acpi=1
> > # HVM Event Modes
> > on_poweroff='destroy'
> > on_reboot='restart'
> > on_crash='restart'
> >
> > # Serial Console Configuration (Xen Console)
> > sdl=0
> > serial='pty'
> >
> > # VNC Configuration
> > # Only reachable from localhost
> > vnc=1
> > vnclisten="0.0.0.0"
> > vncpasswd=""
> >
> > ###########################################################
> > Copied for xen-users list
> > ###########################################################
> >
> > It appears that it cannot obtain the RAM mapping for this PCI device.
> >
> >
> > I rebooted the host.  I then ran a script that assigns the PCI devices to pciback. The output
> > looks like:
> > root@fiat:~# ./dev_mgmt.sh
> > Loading Kernel Module 'xen-pciback'
> > Calling function pciback_dev for:
> > PCI DEVICE 0000:03:00.0
> > Unbinding 0000:03:00.0 from igb
> > Binding 0000:03:00.0 to pciback
> >
> > PCI DEVICE 0000:03:00.1
> > Unbinding 0000:03:00.1 from igb
> > Binding 0000:03:00.1 to pciback
> >
> > PCI DEVICE 0000:04:00.0
> > Unbinding 0000:04:00.0 from igb
> > Binding 0000:04:00.0 to pciback
> >
> > PCI DEVICE 0000:04:00.1
> > Unbinding 0000:04:00.1 from igb
> > Binding 0000:04:00.1 to pciback
> >
> > PCI DEVICE 0000:05:00.0
> > Unbinding 0000:05:00.0 from bnx2
> > Binding 0000:05:00.0 to pciback
> >
> > PCI DEVICE 0000:05:00.1
> > Unbinding 0000:05:00.1 from bnx2
> > Binding 0000:05:00.1 to pciback
> >
> > Listing PCI Devices Available to Xen
> > 0000:03:00.0
> > 0000:03:00.1
> > 0000:04:00.0
> > 0000:04:00.1
> > 0000:05:00.0
> > 0000:05:00.1
> >
> > ###########################################################
> > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > WARNING: ignoring device_model directive.
> > WARNING: Use "device_model_override" instead if you really want a
> > non-default device_model
> > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > how=(nil) callback=(nil) poller=0x210c3c0
> > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > vdev=hda spec.backend=unknown
> > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > vdev=hda, using backend phy
> > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> bootloader
> > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > domain, skipping bootloader
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x210c728: deregister unregistered
> > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > free_memkb=2980
> > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> candidate
> > with 1 nodes, 4 cpus and 2980 KB free selected
> > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> >   Loader:        0000000000100000->00000000001a69a4
> >   Modules:       0000000000000000->0000000000000000
> >   TOTAL:         0000000000000000->000000003f800000
> >   ENTRY ADDRESS: 0000000000100608
> > xc: info: PHYSICAL MEMORY ALLOCATION:
> >   4KB PAGES: 0x0000000000000200
> >   2MB PAGES: 0x00000000000001fb
> >   1GB PAGES: 0x0000000000000000
> > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > vdev=hda spec.backend=phy
> > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > register slotnum=3
> > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > inprogress: poller=0x210c3c0, flags=i
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > epath=/local/domain/0/backend/vbd/2/768/state
> > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > epath=/local/domain/0/backend/vbd/2/768/state
> > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > deregister slotnum=3
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x2112f48: deregister unregistered
> > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > /etc/xen/scripts/block add
> > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
> > /usr/bin/qemu-system-i386 with arguments:
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > /usr/bin/qemu-system-i386
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > chardev=libxl-cmd,mode=control
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register
> > slotnum=3
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > epath=/local/domain/0/device-model/2/state
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > epath=/local/domain/0/device-model/2/state
> > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > deregister slotnum=3
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x210c960: deregister unregistered
> > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > /var/run/xen/qmp-libxl-2
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "qmp_capabilities",
> >     "id": 1
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "query-chardev",
> >     "id": 2
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "change",
> >     "id": 3,
> >     "arguments": {
> >         "device": "vnc",
> >         "target": "password",
> >         "arg": ""
> >     }
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "query-vnc",
> >     "id": 4
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register
> > slotnum=3
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > epath=/local/domain/0/backend/vif/2/0/state
> > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > epath=/local/domain/0/backend/vif/2/0/state
> > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > deregister slotnum=3
> > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > w=0x210e8a8: deregister unregistered
> > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > /etc/xen/scripts/vif-bridge online
> > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > /etc/xen/scripts/vif-bridge add
> > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > /var/run/xen/qmp-libxl-2
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "qmp_capabilities",
> >     "id": 1
> > }
> > '
> > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >     "execute": "device_add",
> >     "id": 2,
> >     "arguments": {
> >         "driver": "xen-pci-passthrough",
> >         "id": "pci-pt-03_00.0",
> >         "hostaddr": "0000:03:00.0"
> >     }
> > }
> > '
> > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset
> > by peer
> > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > Connection refused
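> > [Editorial note: the QMP exchange that dies above can be replayed by hand against the device model's control socket to see exactly where the connection drops. A minimal client sketch follows — the socket path comes from the log; the newline-delimited framing is an assumption that works in practice with QEMU's QMP responses:]

```python
import json
import socket

def qmp_command(sock_path, execute, arguments=None, cmd_id=1):
    """Send one QMP command over a UNIX socket, after capability negotiation."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    f = s.makefile("rw")
    json.loads(f.readline())          # server greeting: {"QMP": {...}}
    f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
    f.flush()
    json.loads(f.readline())          # capabilities ack: {"return": {}}
    cmd = {"execute": execute, "id": cmd_id}
    if arguments:
        cmd["arguments"] = arguments
    f.write(json.dumps(cmd) + "\n")
    f.flush()
    reply = json.loads(f.readline())  # command result or error
    s.close()
    return reply

# e.g. qmp_command("/var/run/xen/qmp-libxl-2", "query-vnc")
```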
> > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> > progress report: ignored
> > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > complete, rc=0
> > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> > Daemon running with PID 3214
> > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > xc: debug: hypercall buffer: cache current size:4
> > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> >
> > ###########################################################
> > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > char device redirected to /dev/pts/5 (label serial0)
> > qemu: hardware error: xen: failed to populate ram at 40030000
> > CPU #0:
> > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > ES =0000 00000000 0000ffff 00009300
> > CS =f000 ffff0000 0000ffff 00009b00
> > SS =0000 00000000 0000ffff 00009300
> > DS =0000 00000000 0000ffff 00009300
> > FS =0000 00000000 0000ffff 00009300
> > GS =0000 00000000 0000ffff 00009300
> > LDT=0000 00000000 0000ffff 00008200
> > TR =0000 00000000 0000ffff 00008b00
> > GDT=     00000000 0000ffff
> > IDT=     00000000 0000ffff
> > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > DR6=ffff0ff0 DR7=00000400
> > EFER=0000000000000000
> > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > XMM00=00000000000000000000000000000000
> > XMM01=00000000000000000000000000000000
> > XMM02=00000000000000000000000000000000
> > XMM03=00000000000000000000000000000000
> > XMM04=00000000000000000000000000000000
> > XMM05=00000000000000000000000000000000
> > XMM06=00000000000000000000000000000000
> > XMM07=00000000000000000000000000000000
> > CPU #1:
> > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > ES =0000 00000000 0000ffff 00009300
> > CS =f000 ffff0000 0000ffff 00009b00
> > SS =0000 00000000 0000ffff 00009300
> > DS =0000 00000000 0000ffff 00009300
> > FS =0000 00000000 0000ffff 00009300
> > GS =0000 00000000 0000ffff 00009300
> > LDT=0000 00000000 0000ffff 00008200
> > TR =0000 00000000 0000ffff 00008b00
> > GDT=     00000000 0000ffff
> > IDT=     00000000 0000ffff
> > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > DR6=ffff0ff0 DR7=00000400
> > EFER=0000000000000000
> > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > XMM00=00000000000000000000000000000000
> > XMM01=00000000000000000000000000000000
> > XMM02=00000000000000000000000000000000
> > XMM03=00000000000000000000000000000000
> > XMM04=00000000000000000000000000000000
> > XMM05=00000000000000000000000000000000
> > XMM06=00000000000000000000000000000000
> > XMM07=00000000000000000000000000000000
> >
> > ###########################################################
> > /etc/default/grub
> > GRUB_DEFAULT="Xen 4.3-amd64"
> > GRUB_HIDDEN_TIMEOUT=0
> > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > GRUB_TIMEOUT=10
> > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > GRUB_CMDLINE_LINUX=""
> > # biosdevname=0
> > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>

I did not.  I do not have the toolchain installed.  I may have time later today to try the patch.  Are there any specific instructions on how to patch the src, compile and install?

Regards

On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > Hi all,
> >
> > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC) to a
> > HVM.  I have been attempting to resolve this issue on the xen-users list,
> > but it was advised to post this issue to this list. (Initial Message -
> > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> >
> > The machine I am using as host is a Dell Poweredge server with a Xeon
> > E31220 with 4GB of ram.
> >
> > The possible bug is the following:
> > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > char device redirected to /dev/pts/5 (label serial0)
> > qemu: hardware error: xen: failed to populate ram at 40030000
> > ....
> >
> > I believe it may be similar to this thread
> > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> >
> >
> > Additional info that may be helpful is below.
>
> Did you try the patch?
> >
> > Please let me know if you need any additional information.
> >
> > Thanks in advance for any help provided!
> > Regards
> >
> > ###########################################################
> > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > ###########################################################
> > # Configuration file for Xen HVM
> >
> > # HVM Name (as appears in 'xl list')
> > name="ubuntu-hvm-0"
> > # HVM Build settings (+ hardware)
> > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > builder='hvm'
> > device_model='qemu-dm'
> > memory=1024
> > vcpus=2
> >
> > # Virtual Interface
> > # Network bridge to USB NIC
> > vif=['bridge=xenbr0']
> >
> > ################### PCI PASSTHROUGH ###################
> > # PCI Permissive mode toggle
> > #pci_permissive=1
> >
> > # All PCI Devices
> > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> >
> > # First two ports on Intel 4x1G NIC
> > #pci=['03:00.0','03:00.1']
> >
> > # Last two ports on Intel 4x1G NIC
> > #pci=['04:00.0', '04:00.1']
> >
> > # All ports on Intel 4x1G NIC
> > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> >
> > # Broadcom 2x1G NIC
> > #pci=['05:00.0', '05:00.1']
> > ################### PCI PASSTHROUGH ###################
> >
> > # HVM Disks
> > # Hard disk only
> > # Boot from HDD first ('c')
> > boot="c"
> > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> >
> > # Hard disk with ISO
> > # Boot from ISO first ('d')
> > #boot="d"
> > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> >
> > # ACPI Enable
> > acpi=1
> > # HVM Event Modes
> > on_poweroff='destroy'
> > on_reboot='restart'
> > on_crash='restart'
> >
> > # Serial Console Configuration (Xen Console)
> > sdl=0
> > serial='pty'
> >
> > # VNC Configuration
> > # Only reachable from localhost
> > vnc=1
> > vnclisten="0.0.0.0"
> > vncpasswd=""
> >
> > ###########################################################
> > Copied for xen-users list
> > ###########################################################
> >
> > It appears that it cannot obtain the RAM mapping for this PCI device.
> >
> >
> > I rebooted the Host.  I assigned the pci devices to pciback. The output
> > looks like:
> > root@fiat:~# ./dev_mgmt.sh
> > Loading Kernel Module 'xen-pciback'
> > Calling function pciback_dev for:
> > PCI DEVICE 0000:03:00.0
> > Unbinding 0000:03:00.0 from igb
> > Binding 0000:03:00.0 to pciback
> >
> > PCI DEVICE 0000:03:00.1
> > Unbinding 0000:03:00.1 from igb
> > Binding 0000:03:00.1 to pciback
> >
&gt; PCI DEVICE 0000:04:00.0<br>
&gt; Unbinding 0000:04:00.0 from igb<br>
&gt; Binding 0000:04:00.0 to pciback<br>
&gt;<br>
&gt; PCI DEVICE 0000:04:00.1<br>
&gt; Unbinding 0000:04:00.1 from igb<br>
&gt; Binding 0000:04:00.1 to pciback<br>
&gt;<br>
&gt; PCI DEVICE 0000:05:00.0<br>
&gt; Unbinding 0000:05:00.0 from bnx2<br>
&gt; Binding 0000:05:00.0 to pciback<br>
&gt;<br>
&gt; PCI DEVICE 0000:05:00.1<br>
&gt; Unbinding 0000:05:00.1 from bnx2<br>
&gt; Binding 0000:05:00.1 to pciback<br>
&gt;<br>
&gt; Listing PCI Devices Available to Xen<br>
&gt; 0000:03:00.0<br>
&gt; 0000:03:00.1<br>
&gt; 0000:04:00.0<br>
&gt; 0000:04:00.1<br>
&gt; 0000:05:00.0<br>
&gt; 0000:05:00.1<br>
&gt;<br>
&gt; ###########################################################<br>
&gt; root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; WARNING: ignoring device_model directive.<br>
&gt; WARNING: Use &quot;device_model_override&quot; instead if you really w=
ant a<br>
&gt; non-default device_model<br>
&gt; libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: crea=
te:<br>
&gt; how=3D(nil) callback=3D(nil) poller=3D0x210c3c0<br>
&gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk<=
br>
&gt; vdev=3Dhda spec.backend=3Dunknown<br>
&gt; libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk<=
br>
&gt; vdev=3Dhda, using backend phy<br>
&gt; libxl: debug: libxl_create.c:675:initiate_domain_create: running bootl=
oader<br>
&gt; libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV<b=
r>
&gt; domain, skipping bootloader<br>
&gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x210c728: deregister unregistered<br>
&gt; libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUM=
A<br>
&gt; placement candidate found: nr_nodes=3D1, nr_cpus=3D4, nr_vcpus=3D3,<br=
>
&gt; free_memkb=3D2980<br>
&gt; libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candi=
date<br>
&gt; with 1 nodes, 4 cpus and 2980 KB free selected<br>
&gt; xc: detail: elf_parse_binary: phdr: paddr=3D0x100000 memsz=3D0xa69a4<b=
r>
&gt; xc: detail: elf_parse_binary: memory: 0x100000 -&gt; 0x1a69a4<br>
&gt; xc: info: VIRTUAL MEMORY ARRANGEMENT:<br>
&gt; =A0 Loader: =A0 =A0 =A0 =A00000000000100000-&gt;00000000001a69a4<br>
&gt; =A0 Modules: =A0 =A0 =A0 0000000000000000-&gt;0000000000000000<br>
&gt; =A0 TOTAL: =A0 =A0 =A0 =A0 0000000000000000-&gt;000000003f800000<br>
&gt; =A0 ENTRY ADDRESS: 0000000000100608<br>
&gt; xc: info: PHYSICAL MEMORY ALLOCATION:<br>
&gt; =A0 4KB PAGES: 0x0000000000000200<br>
&gt; =A0 2MB PAGES: 0x00000000000001fb<br>
&gt; =A0 1GB PAGES: 0x0000000000000000<br>
&gt; xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -&gt; 0x7f022c81=
682d<br>
&gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk<=
br>
&gt; vdev=3Dhda spec.backend=3Dphy<br>
&gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; w=3D0x2112f48 wpath=3D/local/domain/0/backend/vbd/2/768/state token=3D=
3/0:<br>
&gt; register slotnum=3D3<br>
&gt; libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:<br>
&gt; inprogress: poller=3D0x210c3c0, flags=3Di<br>
&gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=3D0x2112f48<=
br>
&gt; wpath=3D/local/domain/0/backend/vbd/2/768/state token=3D3/0: event<br>
&gt; epath=3D/local/domain/0/backend/vbd/2/768/state<br>
&gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend<br>
&gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting s=
tate 1<br>
&gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=3D0x2112f48<=
br>
&gt; wpath=3D/local/domain/0/backend/vbd/2/768/state token=3D3/0: event<br>
&gt; epath=3D/local/domain/0/backend/vbd/2/768/state<br>
&gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend<br>
&gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 ok<br>
&gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x2112f48 wpath=3D/local/domain/0/backend/vbd/2/768/state token=3D=
3/0:<br>
&gt; deregister slotnum=3D3<br>
&gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x2112f48: deregister unregistered<br>
&gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug scrip=
t:<br>
&gt; /etc/xen/scripts/block add<br>
&gt; libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-m=
odel<br>
&gt; /usr/bin/qemu-system-i386 with arguments:<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; /usr/bin/qemu-system-i386<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -xen-domid<br=
>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 2<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -chardev<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; socket,id=3Dlibxl-cmd,path=3D/var/run/xen/qmp-libxl-2,server,nowait<br=
>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -mon<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; chardev=3Dlibxl-cmd,mode=3Dcontrol<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -name<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 ubuntu-hvm-0<=
br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -vnc<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 <a href=3D"ht=
tp://0.0.0.0:0" target=3D"_blank">0.0.0.0:0</a>,to=3D99<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -global<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 isa-fdc.drive=
A=3D<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -serial<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 pty<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -vga<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 cirrus<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -global<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 vga.vram_size=
_mb=3D8<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -boot<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 order=3Dc<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -smp<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 2,maxcpus=3D2=
<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -device<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; rtl8139,id=3Dnic0,netdev=3Dnet0,mac=3D00:16:3e:23:44:2c<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -netdev<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; type=3Dtap,id=3Dnet0,ifname=3Dvif2.0-emu,script=3Dno,downscript=3Dno<b=
r>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -M<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 xenfv<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -m<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 1016<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: =A0 -drive<br>
&gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=3Ddisk,for=
mat=3Draw,cache=3Dwriteback<br>
&gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; w=3D0x210c960 wpath=3D/local/domain/0/device-model/2/state token=3D3/1=
: register<br>
&gt; slotnum=3D3<br>
&gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=3D0x210c960<=
br>
&gt; wpath=3D/local/domain/0/device-model/2/state token=3D3/1: event<br>
&gt; epath=3D/local/domain/0/device-model/2/state<br>
&gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=3D0x210c960<=
br>
&gt; wpath=3D/local/domain/0/device-model/2/state token=3D3/1: event<br>
&gt; epath=3D/local/domain/0/device-model/2/state<br>
&gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x210c960 wpath=3D/local/domain/0/device-model/2/state token=3D3/1=
:<br>
&gt; deregister slotnum=3D3<br>
&gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x210c960: deregister unregistered<br>
&gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to<br>
&gt; /var/run/xen/qmp-libxl-2<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp<b=
r>
&gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39=
;{<br>
&gt; =A0 =A0 &quot;execute&quot;: &quot;qmp_capabilities&quot;,<br>
&gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; }<br>
&gt; &#39;<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: retur=
n<br>
&gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39=
;{<br>
&gt; =A0 =A0 &quot;execute&quot;: &quot;query-chardev&quot;,<br>
&gt; =A0 =A0 &quot;id&quot;: 2<br>
&gt; }<br>
&gt; &#39;<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: retur=
n<br>
&gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39=
;{<br>
&gt; =A0 =A0 &quot;execute&quot;: &quot;change&quot;,<br>
&gt; =A0 =A0 &quot;id&quot;: 3,<br>
&gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; =A0 =A0 =A0 =A0 &quot;device&quot;: &quot;vnc&quot;,<br>
&gt; =A0 =A0 =A0 =A0 &quot;target&quot;: &quot;password&quot;,<br>
&gt; =A0 =A0 =A0 =A0 &quot;arg&quot;: &quot;&quot;<br>
&gt; =A0 =A0 }<br>
&gt; }<br>
&gt; &#39;<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: retur=
n<br>
&gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39=
;{<br>
&gt; =A0 =A0 &quot;execute&quot;: &quot;query-vnc&quot;,<br>
&gt; =A0 =A0 &quot;id&quot;: 4<br>
&gt; }<br>
&gt; &#39;<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: retur=
n<br>
&gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; w=3D0x210e8a8 wpath=3D/local/domain/0/backend/vif/2/0/state token=3D3/=
2: register<br>
&gt; slotnum=3D3<br>
&gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=3D0x210e8a8<=
br>
&gt; wpath=3D/local/domain/0/backend/vif/2/0/state token=3D3/2: event<br>
&gt; epath=3D/local/domain/0/backend/vif/2/0/state<br>
&gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend<br>
&gt; /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting sta=
te 1<br>
&gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=3D0x210e8a8<=
br>
&gt; wpath=3D/local/domain/0/backend/vif/2/0/state token=3D3/2: event<br>
&gt; epath=3D/local/domain/0/backend/vif/2/0/state<br>
&gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend<br>
&gt; /local/domain/0/backend/vif/2/0/state wanted state 2 ok<br>
&gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x210e8a8 wpath=3D/local/domain/0/backend/vif/2/0/state token=3D3/=
2:<br>
&gt; deregister slotnum=3D3<br>
&gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br=
>
&gt; w=3D0x210e8a8: deregister unregistered<br>
&gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug scrip=
t:<br>
&gt; /etc/xen/scripts/vif-bridge online<br>
&gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug scrip=
t:<br>
&gt; /etc/xen/scripts/vif-bridge add<br>
&gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to<br>
&gt; /var/run/xen/qmp-libxl-2<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp<b=
r>
&gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39=
;{<br>
&gt; =A0 =A0 &quot;execute&quot;: &quot;qmp_capabilities&quot;,<br>
&gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; }<br>
&gt; &#39;<br>
&gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: retur=
n<br>
&gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39=
;{<br>
&gt; =A0 =A0 &quot;execute&quot;: &quot;device_add&quot;,<br>
&gt; =A0 =A0 &quot;id&quot;: 2,<br>
&gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; =A0 =A0 =A0 =A0 &quot;driver&quot;: &quot;xen-pci-passthrough&quot;,<b=
r>
&gt; =A0 =A0 =A0 =A0 &quot;id&quot;: &quot;pci-pt-03_00.0&quot;,<br>
&gt; =A0 =A0 =A0 =A0 &quot;hostaddr&quot;: &quot;0000:03:00.0&quot;<br>
&gt; =A0 =A0 }<br>
&gt; }<br>
&gt; &#39;<br>
&gt; libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection =
reset<br>
&gt; by peer<br>
&gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:=
<br>
&gt; Connection refused<br>
&gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:=
<br>
&gt; Connection refused<br>
&gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:=
<br>
&gt; Connection refused<br>
&gt; libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci b=
ackend<br>
&gt; libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c3=
60:<br>
&gt; progress report: ignored<br>
&gt; libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:<br>
&gt; complete, rc=3D0<br>
&gt; libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: des=
troy<br>
&gt; Daemon running with PID 3214<br>
&gt; xc: debug: hypercall buffer: total allocations:793 total releases:793<=
br>
&gt; xc: debug: hypercall buffer: current allocations:0 maximum allocations=
:4<br>
&gt; xc: debug: hypercall buffer: cache current size:4<br>
&gt; xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4<br>
&gt;<br>
&gt; ###########################################################<br>
&gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log<br>
&gt; char device redirected to /dev/pts/5 (label serial0)<br>
&gt; qemu: hardware error: xen: failed to populate ram at 40030000<br>
&gt; CPU #0:<br>
&gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D00000633<br>
&gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D00000000<br>
&gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0=
 HLT=3D1<br>
&gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D00000000<br>
&gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D00000000<br>
&gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; EFER=3D0000000000000000<br>
&gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000000000 0000<br>
&gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000000000 0000<br>
&gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000000000 0000<br>
&gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000000000 0000<br>
&gt; XMM00=3D00000000000000000000000000000000<br>
&gt; XMM01=3D00000000000000000000000000000000<br>
&gt; XMM02=3D00000000000000000000000000000000<br>
&gt; XMM03=3D00000000000000000000000000000000<br>
&gt; XMM04=3D00000000000000000000000000000000<br>
&gt; XMM05=3D00000000000000000000000000000000<br>
&gt; XMM06=3D00000000000000000000000000000000<br>
&gt; XMM07=3D00000000000000000000000000000000<br>
&gt; CPU #1:<br>
&gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D00000633<br>
&gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D00000000<br>
&gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0=
 HLT=3D1<br>
&gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D00000000<br>
&gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D00000000<br>
&gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; EFER=3D0000000000000000<br>
&gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000000000 0000<br>
&gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000000000 0000<br>
&gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000000000 0000<br>
&gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000000000 0000<br>
&gt; XMM00=3D00000000000000000000000000000000<br>
&gt; XMM01=3D00000000000000000000000000000000<br>
&gt; XMM02=3D00000000000000000000000000000000<br>
&gt; XMM03=3D00000000000000000000000000000000<br>
&gt; XMM04=3D00000000000000000000000000000000<br>
&gt; XMM05=3D00000000000000000000000000000000<br>
&gt; XMM06=3D00000000000000000000000000000000<br>
&gt; XMM07=3D00000000000000000000000000000000<br>
&gt;<br>
&gt; ###########################################################<br>
&gt; /etc/default/grub<br>
&gt; GRUB_DEFAULT=3D&quot;Xen 4.3-amd64&quot;<br>
&gt; GRUB_HIDDEN_TIMEOUT=3D0<br>
&gt; GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue<br>
&gt; GRUB_TIMEOUT=3D10<br>
&gt; GRUB_DISTRIBUTOR=3D`lsb_release -i -s 2&gt; /dev/null || echo Debian`<=
br>
&gt; GRUB_CMDLINE_LINUX_DEFAULT=3D&quot;quiet splash&quot;<br>
&gt; GRUB_CMDLINE_LINUX=3D&quot;&quot;<br>
&gt; # biosdevname=3D0<br>
&gt; GRUB_CMDLINE_XEN=3D&quot;dom0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>
<br>
</div></div>&gt; _______________________________________________<br>
&gt; Xen-devel mailing list<br>
&gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>=
<br>
&gt; <a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://li=
sts.xen.org/xen-devel</a><br>
<br>
</blockquote></div><br></div>
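[Archive note: the QMP exchange in the log above (capability negotiation, then a device_add of a xen-pci-passthrough device) can be reproduced by hand for debugging. A minimal Python sketch of how those two commands are built; the helper names here are illustrative, not part of libxl.]

```python
import json

def qmp_capabilities_cmd(cmd_id=1):
    # First command on any QMP connection: leave capability-negotiation mode.
    return {"execute": "qmp_capabilities", "id": cmd_id}

def qmp_device_add_passthrough(hostaddr, cmd_id=2):
    # Build the device_add command libxl emits in the log above. The device
    # id is derived from the host PCI address, e.g. "0000:03:00.0"
    # becomes "pci-pt-03_00.0".
    dev_id = "pci-pt-" + hostaddr.split(":", 1)[1].replace(":", "_")
    return {
        "execute": "device_add",
        "id": cmd_id,
        "arguments": {
            "driver": "xen-pci-passthrough",
            "id": dev_id,
            "hostaddr": hostaddr,
        },
    }

# Each message is sent to QEMU's QMP socket as a single JSON object.
wire = json.dumps(qmp_device_add_passthrough("0000:03:00.0"))
```

[In the failure above, QEMU died ("failed to populate ram") before the device_add round-trip completed, hence the connection reset/refused errors.]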

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Feb 07 16:01:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBns7-0007nc-Lf; Fri, 07 Feb 2014 16:01:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WBns6-0007nT-9u
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 16:01:42 +0000
Received: from [85.158.137.68:39702] by server-16.bemta-3.messagelabs.com id
	AB/44-29917-56305F25; Fri, 07 Feb 2014 16:01:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391788898!364813!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4493 invoked from network); 7 Feb 2014 16:01:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 16:01:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="100864264"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 16:01:38 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	11:01:37 -0500
Message-ID: <1391788896.2162.113.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 16:01:36 +0000
In-Reply-To: <1391767848-9633-1-git-send-email-ian.campbell@citrix.com>
References: <1391767621.2162.21.camel@kazak.uk.xensource.com>
	<1391767848-9633-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] ts-kernel-build: make sure
 CONFIG_PACKET is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 10:10 +0000, Ian Campbell wrote:
> It is required by the dhcp client and is not present in the arm
> multi_v7_defconfig.
> 
> Also stash the config file in the build results for easy reference, it is
> already in kerndist.tar.gz but that's a 30+M download compared with a few tens
> of K.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>

Ian acked this on IRC and I have now pushed it.

Ian.
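
[Archive note: the patch discussed here forces CONFIG_PACKET=y on top of the arm multi_v7_defconfig. A common way to do that outside osstest is to rewrite the .config and re-run `make olddefconfig`; a small illustrative Python sketch of the rewrite step, not osstest's actual code:]

```python
def enable_config_option(config_text, option, value="y"):
    # Drop any existing setting of `option` (either "OPTION=..." or the
    # "# OPTION is not set" comment form), then append the desired value.
    # Running `make olddefconfig` afterwards keeps it if the dependencies
    # are satisfiable.
    kept = [line for line in config_text.splitlines()
            if not (line.startswith(option + "=")
                    or line == "# " + option + " is not set")]
    kept.append(option + "=" + value)
    return "\n".join(kept) + "\n"
```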



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:01:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:01:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnsK-0007pQ-3O; Fri, 07 Feb 2014 16:01:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WBnsJ-0007pB-0R
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:01:55 +0000
Received: from [193.109.254.147:48726] by server-13.bemta-14.messagelabs.com
	id 36/CB-01226-27305F25; Fri, 07 Feb 2014 16:01:54 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1391788910!2784373!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18097 invoked from network); 7 Feb 2014 16:01:51 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 16:01:51 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17G1lqV004795
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 16:01:48 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17G1ktb014311
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 16:01:47 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s17G1kLa002636; Fri, 7 Feb 2014 16:01:46 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 08:01:46 -0800
Message-ID: <52F503B8.2040304@oracle.com>
Date: Fri, 07 Feb 2014 11:03:04 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
	<52F5088F020000780011A484@nat28.tlf.novell.com>
	<52F4FDD9.5000608@oracle.com>
	<52F50EB9020000780011A4D8@nat28.tlf.novell.com>
In-Reply-To: <52F50EB9020000780011A4D8@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 10:50 AM, Jan Beulich wrote:
>>>> On 07.02.14 at 16:38, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 02/07/2014 10:23 AM, Jan Beulich wrote:
>>>>>> On 07.02.14 at 16:12, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>> On 02/07/2014 04:21 AM, Jan Beulich wrote:
>>>>> ... but interrupt remapping is requested (with per-device remapping
>>>>> tables). Without it, the timer interrupt is usually not working.
>>>>>
>>>>> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
>>>>> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
>>>>> Roedel <joerg.roedel@amd.com>.
>>>>>
>>>>> Reported-by: Eric Houby <ehouby@yahoo.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>> Tested-by: Eric Houby <ehouby@yahoo.com>
>>>>>
>>>>> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
>>>>> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
>>>>> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>>>>>         const struct acpi_ivrs_header *ivrs_block;
>>>>>         unsigned long length;
>>>>>         unsigned int apic;
>>>>> +    bool_t sb_ioapic = !iommu_intremap;
>>>>>         int error = 0;
>>>>>     
>>>>>         BUG_ON(!table);
>>>>> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>>>>>         /* Each IO-APIC must have been mentioned in the table. */
>>>>>         for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>>>>>         {
>>>>> -        if ( !nr_ioapic_entries[apic] ||
>>>>> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>> +        if ( !nr_ioapic_entries[apic] )
>>>>> +            continue;
>>>>> +
>>>>> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
>>>>> +             /* SB IO-APIC is always on this device in AMD systems. */
>>>>> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
>>>>> +            sb_ioapic = 1;
>>>>> +
>>>>> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>>                 continue;
>>>>>     
>>>>>             if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
>>>> I don't know whether 0:14:0 is set in stone, I don't remember seeing
>>>> anywhere that this is architectural.
>>>>
>>>> In the (unlikely) event that it is moved somewhere else will the user be
>>>> able to overwrite where it is? Do you
>>>> think that sb_ioapic may need to be set to true if appropriate bit is
>>>> set in ioapic_cmdline?
>>> These are question you'd need to ask to Jörg, the author of the
>>> original Linux side patch. I took as a precondition here that he
>>> knew what he was doing.
>> Xen already has a way to override IVRS' view of IOAPICs with
>> ioapic_cmdline, something that Linux doesn't. Presumably if the user
>> sets ivrs_ioapic[] option on boot line then he knows what he is doing
>> (at least one would hope so).
> I think the logic we have is sufficiently similar to Linux'es.
>
>> My concern is that this patch would prevent the user from specifying
>> where the IOAPIC is. Will this boot option be useful at all now? When we
>> specify anything but 0:14:0 it will be pretty much ignored, won't it?
> But the purpose here isn't to override how the hardware is
> structured, but to overcome firmware vendors not getting their
> ACPI tables correct.
>
> Furthermore, what is being specified here can very well be
> different from 00:14.0 - consider the northbridge IO-APIC and
> eventual further ones.

This is exactly what I am asking: Suppose we have IOAPIC in the NB (I 
think it's something like 0:02.0) *and* IVRS is broken. Currently we can 
say 'ivrs_ioapic[0]=0:02.0' and we are good to go (right?). Will we 
still be able to do this?

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
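[Archive note: the southbridge IO-APIC test debated in this thread hinges on Xen's PCI_BDF encoding and the fixed 00:14.0 location on the AMD systems the workaround targets. A small Python model of that check, for illustration only; the function names are not Xen's:]

```python
def pci_bdf(bus, dev, fn):
    # Mirror Xen's PCI_BDF macro: 8 bits of bus, 5 bits of device (slot),
    # 3 bits of function, packed as bbbbbbbb dddddfff.
    return ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7)

def is_sb_ioapic(seg, bdf):
    # The patch flags an IO-APIC as the southbridge one when it sits on
    # segment 0 at device 00:14.0, the fixed location assumed for AMD
    # southbridges in this workaround.
    return seg == 0 and bdf == pci_bdf(0, 0x14, 0)
```

Boris's northbridge example (roughly 0:02.0) would not match this test, which is exactly the concern raised: the ivrs_ioapic[] override describes other IO-APICs, not the southbridge location itself.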

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:04:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnv9-00084q-Qj; Fri, 07 Feb 2014 16:04:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBnv8-00084i-85
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:04:50 +0000
Received: from [85.158.137.68:58829] by server-10.bemta-3.messagelabs.com id
	AB/02-07302-12405F25; Fri, 07 Feb 2014 16:04:49 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1391789087!365677!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1247 invoked from network); 7 Feb 2014 16:04:48 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 16:04:48 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17G4hki008760
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 16:04:43 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17G4gd9010689
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 16:04:42 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17G4g7r017164; Fri, 7 Feb 2014 16:04:42 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 08:04:42 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 4F4C71C0972; Fri,  7 Feb 2014 11:04:41 -0500 (EST)
Date: Fri, 7 Feb 2014 11:04:41 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: William Dauchy <william@gandi.net>
Message-ID: <20140207160441.GA5060@phenom.dumpdata.com>
References: <20140207124607.GE19084@gandi.net>
	<20140207150224.GA3605@phenom.dumpdata.com>
	<20140207153826.GF19084@gandi.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140207153826.GF19084@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: konrad@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 04:38:26PM +0100, William Dauchy wrote:
> On Feb07 10:02, Konrad Rzeszutek Wilk wrote:
> > That does not make sense. What are the leaks?
> 
> I agree, I still do not understand precisely why it fixes my issue.
> When I say leaks, I mean global memory usage keeps growing (the `used`
> field in `free` output increases)
> 
> > What are the messages
> > that you see about APIC?
> 
> No local APIC present
> APIC: disable apic facility
> APIC: switched to apic NOOP
> 
> 
> no mention of it when the patch is applied.

OK, and that fixes the leak?

That sounds like some other subsystem is going crazy because
of the APIC being disabled while ACPI is enabled. And your patch
turns the APIC back on.

Hmm..
> 
> -- 
> William



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:06:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnwe-0008Cb-CO; Fri, 07 Feb 2014 16:06:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBnwd-0008CT-8H
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 16:06:23 +0000
Received: from [85.158.143.35:50508] by server-3.bemta-4.messagelabs.com id
	04/76-11539-E7405F25; Fri, 07 Feb 2014 16:06:22 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1391789180!3984960!1
X-Originating-IP: [216.32.180.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14993 invoked from network); 7 Feb 2014 16:06:21 -0000
Received: from co1ehsobe003.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.186)
	by server-4.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Feb 2014 16:06:21 -0000
Received: from mail88-co1-R.bigfish.com (10.243.78.250) by
	CO1EHSOBE011.bigfish.com (10.243.66.74) with Microsoft SMTP Server id
	14.1.225.22; Fri, 7 Feb 2014 16:06:19 +0000
Received: from mail88-co1 (localhost [127.0.0.1])	by mail88-co1-R.bigfish.com
	(Postfix) with ESMTP id 7CD97780379;
	Fri,  7 Feb 2014 16:06:19 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -3
X-BigFish: VPS-3(zz98dI1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h944hd25hd2bhf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail88-co1 (localhost.localdomain [127.0.0.1]) by mail88-co1
	(MessageSwitch) id 1391789177281767_25830;
	Fri,  7 Feb 2014 16:06:17 +0000 (UTC)
Received: from CO1EHSMHS022.bigfish.com (unknown [10.243.78.253])	by
	mail88-co1.bigfish.com (Postfix) with ESMTP id 3FB9C48004F;
	Fri,  7 Feb 2014 16:06:17 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO1EHSMHS022.bigfish.com
	(10.243.66.32) with Microsoft SMTP Server id 14.16.227.3;
	Fri, 7 Feb 2014 16:06:13 +0000
X-WSS-ID: 0N0MVE8-08-9XQ-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2EE8ED22017;	Fri,  7 Feb 2014 10:06:07 -0600 (CST)
Received: from SATLEXDAG06.amd.com (10.181.40.13) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Fri, 7 Feb 2014 10:06:29 -0600
Received: from arav-dinar (10.180.168.240) by satlexdag06.amd.com
	(10.181.40.13) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Fri, 7 Feb 2014 11:06:11 -0500
Date: Fri, 7 Feb 2014 10:06:30 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140207160630.GC8837@arav-dinar>
References: <1391715192-1766-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52F4C00F020000780011A26B@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F4C00F020000780011A26B@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: andrew.cooper3@citrix.com, keir@xen.org, suravee.suthikulpanit@amd.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V3] x86/AMD: Apply workaround for AMD F16h
	Erratum792
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 10:14:23AM +0000, Jan Beulich wrote:
> >>> On 06.02.14 at 20:33, Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com> wrote:
> > The workaround for the Erratum will only be in BIOSes spun after
> > Jan 2014, but initial production parts shipped in 2013
> > itself. Since there is a coverage hole, we should carry this fix
> > in software in case the BIOS does not do the right thing or someone
> > is using an old BIOS.
> > 
> > Refer to Revision Guide for AMD F16h models 00h-0fh, document 51810
> > Rev. 3.04, November 2013, for details on the Erratum.
> > 
> > Tested the patch on Fam16h server platform and it works fine.
> > 
> > Changes in V2: (per Andrew.C comments)
> > 	- Move pci_val into same scope
> > 	- rework indentation to match linux style
> > Changes in V3: (per Jan comments)
> > 	- remove pci_val, use 'l' and 'h' instead
> > 	- print warning message to hypervisor log
> > 
> > Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> > Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> > Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Applied after some more editing. Please double check.
> 

Tested the staging branch on F16h to make sure; it works fine.

Thanks,
-Aravind.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:08:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:08:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBnyB-0008LI-Va; Fri, 07 Feb 2014 16:07:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <william@gandi.net>) id 1WBnyA-0008L8-UZ
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:07:59 +0000
Received: from [85.158.139.211:31404] by server-10.bemta-5.messagelabs.com id
	4C/88-08578-ED405F25; Fri, 07 Feb 2014 16:07:58 +0000
X-Env-Sender: william@gandi.net
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391789277!2421961!1
X-Originating-IP: [217.70.183.210]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 826 invoked from network); 7 Feb 2014 16:07:57 -0000
Received: from mail4.gandi.net (HELO mail4.gandi.net) (217.70.183.210)
	by server-15.tower-206.messagelabs.com with SMTP;
	7 Feb 2014 16:07:57 -0000
Received: from localhost (mfiltercorp1-d.gandi.net [217.70.183.155])
	by mail4.gandi.net (Postfix) with ESMTP id 48786120A5E;
	Fri,  7 Feb 2014 17:07:57 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mfiltercorp1-d.gandi.net
Received: from mail4.gandi.net ([217.70.183.210])
	by localhost (mfiltercorp1-d.gandi.net [217.70.183.155]) (amavisd-new,
	port 10024)
	with ESMTP id pEpB6qBUwNTw; Fri,  7 Feb 2014 17:07:56 +0100 (CET)
Received: from gandi.net (hitchhiker.gandi.net [217.70.181.24])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by mail4.gandi.net (Postfix) with ESMTPSA id 8CC6F120A27;
	Fri,  7 Feb 2014 17:07:56 +0100 (CET)
Date: Fri, 7 Feb 2014 17:09:35 +0100
From: William Dauchy <william@gandi.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140207160935.GG19084@gandi.net>
References: <20140207124607.GE19084@gandi.net>
	<20140207150224.GA3605@phenom.dumpdata.com>
	<20140207153826.GF19084@gandi.net>
	<20140207160441.GA5060@phenom.dumpdata.com>
MIME-Version: 1.0
In-Reply-To: <20140207160441.GA5060@phenom.dumpdata.com>
Reply_to: William Dauchy <william@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: konrad@kernel.org, William Dauchy <william@gandi.net>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8861793270269911939=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8861793270269911939==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="kadn00tgSopKmJ1H"
Content-Disposition: inline


--kadn00tgSopKmJ1H
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Feb07 11:04, Konrad Rzeszutek Wilk wrote:
> OK, and that fixes the leak?

yup

> that sounds like some other subsystem is going crazy because
> of APIC being disabled and ACPI. And your patch enables
> APIC back in.
>=20
> Hmm..
--=20
William

--kadn00tgSopKmJ1H
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlL1BT8ACgkQ1I6eqOUidQHcQwCgs9lhnH7Z2GM6f/T7LGx2xlTq
KPsAoIvOibwflKip4NdvmlB2S4TOE0fd
=EdmF
-----END PGP SIGNATURE-----

--kadn00tgSopKmJ1H--


--===============8861793270269911939==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8861793270269911939==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 16:12:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBo2i-0000Gr-0N; Fri, 07 Feb 2014 16:12:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBo2g-0000Gm-Uw
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:12:39 +0000
Received: from [85.158.143.35:40101] by server-3.bemta-4.messagelabs.com id
	48/40-11539-6F505F25; Fri, 07 Feb 2014 16:12:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391789557!3983724!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13983 invoked from network); 7 Feb 2014 16:12:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 16:12:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 16:12:37 +0000
Message-Id: <52F51404020000780011A517@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 16:12:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
	<52F5088F020000780011A484@nat28.tlf.novell.com>
	<52F4FDD9.5000608@oracle.com>
	<52F50EB9020000780011A4D8@nat28.tlf.novell.com>
	<52F503B8.2040304@oracle.com>
In-Reply-To: <52F503B8.2040304@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


From xen-devel-bounces@lists.xen.org Fri Feb 07 16:12:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBo2i-0000Gr-0N; Fri, 07 Feb 2014 16:12:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBo2g-0000Gm-Uw
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:12:39 +0000
Received: from [85.158.143.35:40101] by server-3.bemta-4.messagelabs.com id
	48/40-11539-6F505F25; Fri, 07 Feb 2014 16:12:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391789557!3983724!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13983 invoked from network); 7 Feb 2014 16:12:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 16:12:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 16:12:37 +0000
Message-Id: <52F51404020000780011A517@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 16:12:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
	<52F5088F020000780011A484@nat28.tlf.novell.com>
	<52F4FDD9.5000608@oracle.com>
	<52F50EB9020000780011A4D8@nat28.tlf.novell.com>
	<52F503B8.2040304@oracle.com>
In-Reply-To: <52F503B8.2040304@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, suravee.suthikulpanit@amd.com
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 17:03, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/07/2014 10:50 AM, Jan Beulich wrote:
>>>>> On 07.02.14 at 16:38, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> On 02/07/2014 10:23 AM, Jan Beulich wrote:
>>>>>>> On 07.02.14 at 16:12, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>>> On 02/07/2014 04:21 AM, Jan Beulich wrote:
>>>>>> ... but interrupt remapping is requested (with per-device remapping
>>>>>> tables). Without it, the timer interrupt is usually not working.
>>>>>>
>>>>>> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
>>>>>> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
>>>>>> Roedel <joerg.roedel@amd.com>.
>>>>>>
>>>>>> Reported-by: Eric Houby <ehouby@yahoo.com>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>> Tested-by: Eric Houby <ehouby@yahoo.com>
>>>>>>
>>>>>> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
>>>>>> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
>>>>>> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>>>>>>         const struct acpi_ivrs_header *ivrs_block;
>>>>>>         unsigned long length;
>>>>>>         unsigned int apic;
>>>>>> +    bool_t sb_ioapic = !iommu_intremap;
>>>>>>         int error = 0;
>>>>>>     
>>>>>>         BUG_ON(!table);
>>>>>> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>>>>>>         /* Each IO-APIC must have been mentioned in the table. */
>>>>>>         for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>>>>>>         {
>>>>>> -        if ( !nr_ioapic_entries[apic] ||
>>>>>> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>>> +        if ( !nr_ioapic_entries[apic] )
>>>>>> +            continue;
>>>>>> +
>>>>>> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
>>>>>> +             /* SB IO-APIC is always on this device in AMD systems. */
>>>>>> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
>>>>>> +            sb_ioapic = 1;
>>>>>> +
>>>>>> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>>>                 continue;
>>>>>>     
>>>>>>             if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
>>>>> I don't know whether 0:14:0 is set in stone, I don't remember seeing
>>>>> anywhere that this is architectural.
>>>>>
>>>>> In the (unlikely) event that it is moved somewhere else will the user be
>>>>> able to overwrite where it is? Do you
>>>>> think that sb_ioapic may need to be set to true if appropriate bit is
>>>>> set in ioapic_cmdline?
>>>> These are question you'd need to ask to Jörg, the author of the
>>>> original Linux side patch. I took as a precondition here that he
>>>> knew what he was doing.
>>> Xen already has a way to override IVRS' view of IOAPICs with
>>> ioapic_cmdline, something that Linux doesn't. Presumably if the user
>>> sets ivrs_ioapic[] option on boot line then he knows what he is doing
>>> (at least one would hope so).
>> I think the logic we have is sufficiently similar to Linux'es.
>>
>>> My concern is that this patch would prevent the user from specifying
>>> where the IOAPIC is. Will this boot option be useful at all now? When we
>>> specify anything but 0:14:0 it will be pretty much ignored, won't it?
>> But the purpose here isn't to override how the hardware is
>> structured, but to overcome firmware vendors not getting their
>> ACPI tables correct.
>>
>> Furthermore, what is being specified here can very well be
>> different from 00:14.0 - consider the northbridge IO-APIC and
>> eventual further ones.
> 
> This is exactly what I am asking: Suppose we have IOAPIC in the NB (I 
> think it's something like 0:02.0) *and* IVRS is broken. Currently we can 
> say 'ivrs_ioapic[0]=0:02.0' and we are good to go (right?).

No - there _has_ to be an IO-APIC in the SB.

> Will we still be able to do this?

It shouldn't have worked before either, unless I'm not really
understanding the purpose of the check in Linux.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:16:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBo6L-0000c9-TC; Fri, 07 Feb 2014 16:16:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WBo6K-0000c3-8f
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:16:24 +0000
Received: from [85.158.139.211:62338] by server-3.bemta-5.messagelabs.com id
	E2/7F-13671-7D605F25; Fri, 07 Feb 2014 16:16:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391789782!2446890!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11092 invoked from network); 7 Feb 2014 16:16:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 7 Feb 2014 16:16:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 07 Feb 2014 16:16:22 +0000
Message-Id: <52F514E5020000780011A529@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 07 Feb 2014 16:16:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Bob Liu" <lliubbo@gmail.com>,
	"Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <1383872637-15486-1-git-send-email-bob.liu@oracle.com>
	<1383872637-15486-12-git-send-email-bob.liu@oracle.com>
	<20140207154800.GA4855@phenom.dumpdata.com>
In-Reply-To: <20140207154800.GA4855@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, keir@xen.org, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 11/11] tmem: cleanup: drop useless
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 16:48, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Nov 08, 2013 at 09:03:57AM +0800, Bob Liu wrote:
>> Function tmem_release_avail_pages_to_host() and tmem_scrub_page() only used
>> once, no need to separate them out.
> 
> All of the patches look good to me. Let me put them in my tree
> and do a sanity check tonight and then send a git pull to Jan
> on Monday.

I don't think we should be pulling in cleanup like this anymore,
until we branch.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:19:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBo9B-0000sQ-Ok; Fri, 07 Feb 2014 16:19:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBo9A-0000sJ-L3
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:19:20 +0000
Received: from [85.158.137.68:10273] by server-12.bemta-3.messagelabs.com id
	F7/19-01674-78705F25; Fri, 07 Feb 2014 16:19:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1391789957!377185!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11997 invoked from network); 7 Feb 2014 16:19:19 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 16:19:19 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17GJCLj026964
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 16:19:13 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17GJBC9000422
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 16:19:11 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17GJA9X024606; Fri, 7 Feb 2014 16:19:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 08:19:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id BEAD91C0972; Fri,  7 Feb 2014 11:19:09 -0500 (EST)
Date: Fri, 7 Feb 2014 11:19:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: William Dauchy <william@gandi.net>
Message-ID: <20140207161909.GA5330@phenom.dumpdata.com>
References: <20140207124607.GE19084@gandi.net>
	<20140207150224.GA3605@phenom.dumpdata.com>
	<20140207153826.GF19084@gandi.net>
	<20140207160441.GA5060@phenom.dumpdata.com>
	<20140207160935.GG19084@gandi.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140207160935.GG19084@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: konrad@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 05:09:35PM +0100, William Dauchy wrote:
> On Feb07 11:04, Konrad Rzeszutek Wilk wrote:
> > OK, and that fixes the leak?
> 
> yup
> 
> > that sounds like some other subsystem is going crazy because
> > of APIC being disabled and ACPI. And your patch enables
> > APIC back in.
> > 
> > Hmm..

Can you send me your guest config? That is your .config
and also your guest configuration file?

> -- 
> William



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Feb 07 16:22:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoBx-0001Ih-Fe; Fri, 07 Feb 2014 16:22:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBoBw-0001IN-6o
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:22:12 +0000
Received: from [85.158.139.211:35529] by server-7.bemta-5.messagelabs.com id
	E9/35-14867-33805F25; Fri, 07 Feb 2014 16:22:11 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391790129!2419197!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15279 invoked from network); 7 Feb 2014 16:22:10 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 16:22:10 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Feb 2014 16:22:08 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="648470183"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.6.30])
	by fldsmtpi03.verizon.com with ESMTP; 07 Feb 2014 16:22:07 +0000
Message-ID: <52F5082E.6010207@terremark.com>
Date: Fri, 07 Feb 2014 11:22:06 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
From xen-devel-bounces@lists.xen.org Fri Feb 07 16:22:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoBx-0001Ih-Fe; Fri, 07 Feb 2014 16:22:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBoBw-0001IN-6o
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:22:12 +0000
Received: from [85.158.139.211:35529] by server-7.bemta-5.messagelabs.com id
	E9/35-14867-33805F25; Fri, 07 Feb 2014 16:22:11 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391790129!2419197!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15279 invoked from network); 7 Feb 2014 16:22:10 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 16:22:10 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Feb 2014 16:22:08 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="648470183"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.6.30])
	by fldsmtpi03.verizon.com with ESMTP; 07 Feb 2014 16:22:07 +0000
Message-ID: <52F5082E.6010207@terremark.com>
Date: Fri, 07 Feb 2014 11:22:06 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
In-Reply-To: <1391767501.2162.20.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/14 05:05, Ian Campbell wrote:
> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>> cc1: warnings being treated as errors
>>>>> xenlight_stubs.c: In function 'Defbool_val':
>>>>> xenlight_stubs.c:344: warning: implicit declaration of function 'CAMLreturnT'
>>>>> xenlight_stubs.c:344: error: expected expression before 'libxl_defbool'
>>>>> xenlight_stubs.c: In function 'String_option_val':
>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
>>>>> xenlight_stubs.c: In function 'aohow_val':
>>>>> xenlight_stubs.c:440: error: expected expression before 'libxl_asyncop_how'
>> Any idea on what to do about ocaml issue?
> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
> What version do you have?
>
> Ian.
>
dcs-xen-53:~>ocaml -version
The Objective Caml toplevel, version 3.09.3

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:27:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoGe-0001UL-Bb; Fri, 07 Feb 2014 16:27:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <william@gandi.net>) id 1WBoGb-0001UB-Bl
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:27:03 +0000
Received: from [85.158.143.35:25810] by server-1.bemta-4.messagelabs.com id
	B4/FC-31661-45905F25; Fri, 07 Feb 2014 16:27:00 +0000
X-Env-Sender: william@gandi.net
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391790417!3988799!1
X-Originating-IP: [217.70.183.210]
X-SpamReason: No, hits=2.4 required=7.0 tests=UNIQUE_WORDS, UPPERCASE_75_100
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14654 invoked from network); 7 Feb 2014 16:26:58 -0000
Received: from mail4.gandi.net (HELO mail4.gandi.net) (217.70.183.210)
	by server-2.tower-21.messagelabs.com with SMTP;
	7 Feb 2014 16:26:58 -0000
Received: from localhost (mfiltercorp1-d.gandi.net [217.70.183.155])
	by mail4.gandi.net (Postfix) with ESMTP id 7CC1A120A39;
	Fri,  7 Feb 2014 17:26:57 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mfiltercorp1-d.gandi.net
Received: from mail4.gandi.net ([217.70.183.210])
	by localhost (mfiltercorp1-d.gandi.net [217.70.183.155]) (amavisd-new,
	port 10024)
	with ESMTP id QSBd8UYly1+7; Fri,  7 Feb 2014 17:26:55 +0100 (CET)
Received: from gandi.net (hitchhiker.gandi.net [217.70.181.24])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by mail4.gandi.net (Postfix) with ESMTPSA id C52A2120971;
	Fri,  7 Feb 2014 17:26:55 +0100 (CET)
Date: Fri, 7 Feb 2014 17:28:35 +0100
From: William Dauchy <william@gandi.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140207162835.GH19084@gandi.net>
References: <20140207124607.GE19084@gandi.net>
	<20140207150224.GA3605@phenom.dumpdata.com>
	<20140207153826.GF19084@gandi.net>
	<20140207160441.GA5060@phenom.dumpdata.com>
	<20140207160935.GG19084@gandi.net>
	<20140207161909.GA5330@phenom.dumpdata.com>
MIME-Version: 1.0
In-Reply-To: <20140207161909.GA5330@phenom.dumpdata.com>
Reply-To: William Dauchy <william@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: konrad@kernel.org, William Dauchy <william@gandi.net>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1704717724066019537=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============1704717724066019537==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="RwxaKO075aXzzOz0"
Content-Disposition: inline


--RwxaKO075aXzzOz0
Content-Type: multipart/mixed; boundary="mYYhpFXgKVw71fwr"
Content-Disposition: inline


--mYYhpFXgKVw71fwr
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Feb07 11:19, Konrad Rzeszutek Wilk wrote:
> Can you send me your guest config? That is your .config

attached

> and also your guest configuration file?

hard to get since I'm creating it with libxs, but nothing in particular.
--=20
William

--mYYhpFXgKVw71fwr
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="linux-3.10-xenU-x86_64-hosting"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.10.29 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION=""
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
# CONFIG_KERNEL_GZIP is not set
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
CONFIG_KERNEL_XZ=y
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=y
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=32
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_UIDGID_STRICT_TYPE_CHECKS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_MM_OWNER=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
# CONFIG_BLK_DEV_INITRD is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HOTPLUG=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_EXPERT=y
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_SLOB is not set
# CONFIG_PROFILING is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_THROTTLING=y

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
CONFIG_LDM_DEBUG=y
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_KARMA_PARTITION is not set
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_DEADLINE=y
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="deadline"
CONFIG_PADATA=y
CONFIG_ASN1=m
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_XEN=y
# CONFIG_XEN_PRIVILEGED_GUEST is not set
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_KVM_GUEST is not set
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
# CONFIG_CPU_SUP_AMD is not set
# CONFIG_CPU_SUP_CENTAUR is not set
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=32
# CONFIG_SCHED_SMT is not set
CONFIG_SCHED_MC=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set
# CONFIG_X86_MCE is not set
# CONFIG_I8K is not set
# CONFIG_MICROCODE is not set
CONFIG_X86_MSR=m
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
# CONFIG_NUMA is not set
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_CLEANCACHE is not set
# CONFIG_FRONTSWAP is not set
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
CONFIG_COMPAT_VDSO=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y

#
# Power management and ACPI options
#
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
# CONFIG_HIBERNATION is not set
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
# CONFIG_CPU_IDLE is not set
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
# CONFIG_PCI is not set
CONFIG_PCI_LABEL=y
# CONFIG_ISA_DMA_API is not set
# CONFIG_PCCARD is not set

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
CONFIG_IA32_AOUT=m
CONFIG_X86_X32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=m
CONFIG_XFRM_USER=m
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
# CONFIG_IP_PNP is not set
CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IP_TUNNEL=m
CONFIG_NET_IPGRE=m
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
CONFIG_INET_LRO=y
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
CONFIG_TCP_CONG_CUBIC=m
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_DEFAULT_BIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="bic"
# CONFIG_TCP_MD5SIG is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
CONFIG_INET6_XFRM_MODE_TRANSPORT=m
CONFIG_INET6_XFRM_MODE_TUNNEL=m
CONFIG_INET6_XFRM_MODE_BEET=m
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_GRE=m
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=y

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_NETLINK_ACCT=m
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=m
CONFIG_NF_CT_PROTO_GRE=m
CONFIG_NF_CT_PROTO_SCTP=m
CONFIG_NF_CT_PROTO_UDPLITE=m
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_QUEUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
CONFIG_NF_NAT_PROTO_DCCP=m
CONFIG_NF_NAT_PROTO_UDPLITE=m
CONFIG_NF_NAT_PROTO_SCTP=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NETFILTER_TPROXY=m
CONFIG_NETFILTER_XTABLES=m

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_NFACCT=m
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_TIME=m
CONFIG_NETFILTER_XT_MATCH_U32=m
CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
# CONFIG_IP_VS_PROTO_SCTP is not set

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_CONNTRACK_IPV4=m
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PROTO_GRE=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_CONNTRACK_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_NF_NAT_IPV6=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m

#
# DECnet: Netfilter Configuration
#
CONFIG_DECNET_NF_GRABULATOR=m
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_ULOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m

#
# DCCP CCIDs Configuration
#
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
CONFIG_IP_DCCP_CCID3=y
# CONFIG_IP_DCCP_CCID3_DEBUG is not set
CONFIG_IP_DCCP_TFRC_LIB=y

#
# DCCP Kernel Hacking
#
# CONFIG_IP_DCCP_DEBUG is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_MSG is not set
# CONFIG_SCTP_DBG_OBJCNT is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_RDS=m
CONFIG_RDS_TCP=m
# CONFIG_RDS_DEBUG is not set
CONFIG_TIPC=m
CONFIG_TIPC_PORTS=8191
CONFIG_ATM=y
CONFIG_ATM_CLIP=y
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
CONFIG_ATM_MPOA=m
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
# CONFIG_L2TP_DEBUGFS is not set
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_HAVE_NET_DSA=y
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
CONFIG_DECNET=m
CONFIG_DECNET_ROUTER=y
CONFIG_LLC=m
CONFIG_LLC2=m
CONFIG_IPX=m
CONFIG_IPX_INTERN=y
CONFIG_ATALK=m
CONFIG_DEV_APPLETALK=m
CONFIG_IPDDP=m
CONFIG_IPDDP_ENCAP=y
CONFIG_X25=m
CONFIG_LAPB=m
CONFIG_PHONET=m
CONFIG_IEEE802154=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
CONFIG_NET_EMATCH_CANID=m
CONFIG_NET_EMATCH_IPSET=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_IPT=m
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
CONFIG_NET_CLS_IND=y
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_BLA=y
CONFIG_BATMAN_ADV_DAT=y
# CONFIG_BATMAN_ADV_NC is not set
# CONFIG_BATMAN_ADV_DEBUG is not set
CONFIG_OPENVSWITCH=m
CONFIG_VSOCKETS=m
CONFIG_NETLINK_MMAP=y
CONFIG_NETLINK_DIAG=m
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_NETPRIO_CGROUP=m
CONFIG_BQL=y
CONFIG_BPF_JIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_SLCAN is not set
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_SJA1000 is not set
# CONFIG_CAN_C_CAN is not set
# CONFIG_CAN_CC770 is not set
# CONFIG_CAN_SOFTING is not set
# CONFIG_CAN_DEBUG_DEVICES is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
CONFIG_AF_RXRPC=y
# CONFIG_AF_RXRPC_DEBUG is not set
# CONFIG_RXKAD is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=m
# CONFIG_RFKILL_INPUT is not set
CONFIG_NET_9P=m
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
# CONFIG_CEPH_LIB_USE_DNS_RESOLVER is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
# CONFIG_DMA_SHARED_BUFFER is not set

#
# Bus devices
#
CONFIG_CONNECTOR=m
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
# CONFIG_DRBD_FAULT_INJECTION is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=8192
CONFIG_BLK_DEV_XIP=y
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=y
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_RBD=m

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_93CX6 is not set

#
# Texas Instruments shared transport line discipline
#

#
# Altera FPGA firmware download module
#
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
# CONFIG_SCSI_DMA is not set
# CONFIG_SCSI_NETLINK is not set
# CONFIG_ATA is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BCACHE=m
# CONFIG_BCACHE_DEBUG is not set
CONFIG_BCACHE_EDEBUG=y
# CONFIG_BCACHE_CLOSURES_DEBUG is not set
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_DEBUG_BLOCK_STACK_TRACING=y
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_MQ=m
CONFIG_DM_CACHE_CLEANER=m
CONFIG_DM_MIRROR=m
CONFIG_DM_RAID=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
CONFIG_DM_DELAY=m
CONFIG_DM_UEVENT=y
# CONFIG_DM_FLAKEY is not set
CONFIG_DM_VERITY=m
# CONFIG_MACINTOSH_DRIVERS is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
CONFIG_BONDING=m
CONFIG_DUMMY=m
CONFIG_EQUALIZER=m
CONFIG_MII=m
CONFIG_IFB=m
CONFIG_NET_TEAM=m
CONFIG_NET_TEAM_MODE_BROADCAST=m
CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
CONFIG_NET_TEAM_MODE_RANDOM=m
CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
CONFIG_NET_TEAM_MODE_LOADBALANCE=m
CONFIG_MACVLAN=m
CONFIG_MACVTAP=m
CONFIG_VXLAN=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
CONFIG_VETH=m
# CONFIG_ATM_DRIVERS is not set

#
# CAIF transport drivers
#

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
# CONFIG_ETHERNET is not set
# CONFIG_PHYLIB is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
CONFIG_PPP_FILTER=y
CONFIG_PPP_MPPE=m
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOATM=m
CONFIG_PPPOE=m
CONFIG_PPTP=m
CONFIG_PPPOL2TP=m
CONFIG_PPP_ASYNC=m
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
CONFIG_SLIP_MODE_SLIP6=y
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
CONFIG_WAN=y
CONFIG_HDLC=m
CONFIG_HDLC_RAW=m
CONFIG_HDLC_RAW_ETH=m
CONFIG_HDLC_CISCO=m
CONFIG_HDLC_FR=m
CONFIG_HDLC_PPP=m
CONFIG_HDLC_X25=m
CONFIG_DLCI=m
CONFIG_DLCI_MAX=8
CONFIG_LAPBETHER=m
CONFIG_X25_ASY=m
CONFIG_SBNI=m
CONFIG_SBNI_MULTILINE=y
CONFIG_IEEE802154_DRIVERS=m
CONFIG_IEEE802154_FAKEHARD=m
# CONFIG_IEEE802154_FAKELB is not set
CONFIG_XEN_NETDEV_FRONTEND=y
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
# CONFIG_INPUT_FF_MEMLESS is not set
# CONFIG_INPUT_POLLDEV is not set
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
# CONFIG_INPUT_MOUSEDEV is not set
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=m
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
# CONFIG_VT_HW_CONSOLE_BINDING is not set
CONFIG_UNIX98_PTYS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=16
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_N_HDLC=m
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y
# CONFIG_STALDRV is not set

#
# Serial drivers
#
# CONFIG_SERIAL_8250 is not set
CONFIG_FIX_EARLYCON_MEM=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_IPMI_HANDLER is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_NVRAM is not set
# CONFIG_R3964 is not set
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=m
CONFIG_MAX_RAW_DEVS=256
# CONFIG_HANGCHECK_TIMER is not set
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
# CONFIG_I2C is not set
# CONFIG_SPI is not set

#
# Qualcomm MSM SSBI bus support
#
# CONFIG_SSBI is not set
# CONFIG_HSI is not set

#
# PPS support
#
# CONFIG_PPS is not set

#
# PPS generators support
#

#
# PTP clock support
#
# CONFIG_PTP_1588_CLOCK is not set

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIO_DEVRES=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_SUPPLY is not set
# CONFIG_POWER_AVS is not set
# CONFIG_HWMON is not set
# CONFIG_THERMAL is not set
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SC520_WDT is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_60XX_WDT is not set
# CONFIG_SBC8360_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
# CONFIG_W83697UG_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_XEN_WDT=m
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
# CONFIG_DRM is not set
# CONFIG_VGASTATE is not set
# CONFIG_VIDEO_OUTPUT_CONTROL is not set
# CONFIG_FB is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
# CONFIG_SOUND is not set

#
# HID support
#
# CONFIG_HID is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB_ARCH_HAS_XHCI is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#

#
# HID Sensor RTC drivers
#
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
CONFIG_UIO=m
CONFIG_UIO_PDRV=m
CONFIG_UIO_PDRV_GENIRQ=m
CONFIG_UIO_DMEM_GENIRQ=m
CONFIG_VIRT_DRIVERS=y

#
# Virtio drivers
#
# CONFIG_VIRTIO_MMIO is not set

#
# Microsoft Hyper-V guest support
#

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=m
CONFIG_XEN_GRANT_DEV_ALLOC=m
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_HAVE_PVMMU=y
# CONFIG_STAGING is not set
# CONFIG_X86_PLATFORM_DEVICES is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_SUPPORT=y

#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# Firmware Drivers
#
# CONFIG_EDD is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
# CONFIG_DMIID is not set
# CONFIG_DMI_SYSFS is not set
# CONFIG_ISCSI_IBFT_FIND is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT23=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
# CONFIG_OCFS2_DEBUG_FS is not set
CONFIG_BTRFS_FS=m
# CONFIG_BTRFS_FS_POSIX_ACL is not set
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
CONFIG_NILFS2_FS=m
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=m
CONFIG_QFMT_V1=m
CONFIG_QFMT_V2=m
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=m
CONFIG_FUSE_FS=m
CONFIG_CUSE=m

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
CONFIG_UDF_NLS=y

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=m
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
# CONFIG_TMPFS_POSIX_ACL is not set
CONFIG_TMPFS_XATTR=y
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
CONFIG_ADFS_FS=m
# CONFIG_ADFS_FS_RW is not set
CONFIG_AFFS_FS=m
CONFIG_ECRYPT_FS=m
# CONFIG_ECRYPT_FS_MESSAGING is not set
CONFIG_HFS_FS=m
CONFIG_HFSPLUS_FS=m
CONFIG_BEFS_FS=m
# CONFIG_BEFS_DEBUG is not set
CONFIG_BFS_FS=m
CONFIG_EFS_FS=m
CONFIG_LOGFS=m
CONFIG_CRAMFS=y
CONFIG_SQUASHFS=m
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZO is not set
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
CONFIG_VXFS_FS=m
CONFIG_MINIX_FS=m
CONFIG_OMFS_FS=m
CONFIG_HPFS_FS=m
CONFIG_QNX4FS_FS=m
# CONFIG_QNX6FS_FS is not set
CONFIG_ROMFS_FS=m
CONFIG_ROMFS_BACKED_BY_BLOCK=y
CONFIG_ROMFS_ON_BLOCK=y
# CONFIG_PSTORE is not set
CONFIG_SYSV_FS=m
CONFIG_UFS_FS=m
# CONFIG_UFS_FS_WRITE is not set
# CONFIG_UFS_DEBUG is not set
# CONFIG_F2FS_FS is not set
CONFIG_AUFS_FS=m
CONFIG_AUFS_BRANCH_MAX_127=y
# CONFIG_AUFS_BRANCH_MAX_511 is not set
# CONFIG_AUFS_BRANCH_MAX_1023 is not set
# CONFIG_AUFS_BRANCH_MAX_32767 is not set
CONFIG_AUFS_SBILIST=y
# CONFIG_AUFS_HNOTIFY is not set
# CONFIG_AUFS_EXPORT is not set
# CONFIG_AUFS_RDU is not set
# CONFIG_AUFS_SP_IATTR is not set
# CONFIG_AUFS_SHWH is not set
# CONFIG_AUFS_BR_RAMFS is not set
# CONFIG_AUFS_BR_FUSE is not set
# CONFIG_AUFS_BR_HFSPLUS is not set
CONFIG_AUFS_BDEV_LOOP=y
# CONFIG_AUFS_DEBUG is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V2=m
CONFIG_NFS_V3=m
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
# CONFIG_NFSD_FAULT_INJECTION is not set
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=m
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=m
CONFIG_SUNRPC_GSS=m
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DEBUG is not set
CONFIG_CEPH_FS=m
CONFIG_CIFS=m
# CONFIG_CIFS_STATS is not set
# CONFIG_CIFS_WEAK_PW_HASH is not set
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
# CONFIG_CIFS_ACL is not set
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DFS_UPCALL is not set
# CONFIG_CIFS_SMB2 is not set
CONFIG_NCP_FS=m
CONFIG_NCPFS_PACKET_SIGNING=y
CONFIG_NCPFS_IOCTL_LOCKING=y
CONFIG_NCPFS_STRONG=y
CONFIG_NCPFS_NFS_NS=y
CONFIG_NCPFS_OS2_NS=y
# CONFIG_NCPFS_SMALLDOS is not set
CONFIG_NCPFS_NLS=y
CONFIG_NCPFS_EXTRAS=y
CONFIG_CODA_FS=m
CONFIG_AFS_FS=m
# CONFIG_AFS_DEBUG is not set
# CONFIG_9P_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="iso8859-15"
CONFIG_NLS_CODEPAGE_437=m
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=m
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
# CONFIG_DLM_DEBUG is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
# CONFIG_PRINTK_TIME is not set
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
CONFIG_ENABLE_WARN_DEPRECATED=y
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=1024
CONFIG_MAGIC_SYSRQ=y
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
CONFIG_UNUSED_SYMBOLS=y
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_SHIRQ is not set
# CONFIG_LOCKUP_DETECTOR is not set
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHED_DEBUG=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_MEMORY_INIT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
# CONFIG_FRAME_POINTER is not set
# CONFIG_BOOT_PRINTK_DELAY is not set

#
# RCU Debugging
#
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
# CONFIG_FAULT_INJECTION is not set
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACING_SUPPORT=y
# CONFIG_FTRACE is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
CONFIG_STRICT_DEVMEM=y
# CONFIG_X86_VERBOSE_BOOTUP is not set
CONFIG_EARLY_PRINTK=y
CONFIG_DEBUG_STACKOVERFLOW=y
# CONFIG_X86_PTDUMP is not set
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
# CONFIG_DEBUG_SET_MODULE_RONX is not set
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
# CONFIG_OPTIMIZE_INLINING is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set

#
# Security options
#
CONFIG_KEYS=y
CONFIG_ENCRYPTED_KEYS=m
CONFIG_KEYS_DEBUG_PROC_KEYS=y
CONFIG_SECURITY_DMESG_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
CONFIG_LSM_MMAP_MIN_ADDR=65536
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX=y
CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX_VALUE=19
CONFIG_SECURITY_SMACK=y
CONFIG_SECURITY_TOMOYO=y
CONFIG_SECURITY_TOMOYO_MAX_ACCEPT_ENTRY=2048
CONFIG_SECURITY_TOMOYO_MAX_AUDIT_LOG=1024
# CONFIG_SECURITY_TOMOYO_OMIT_USERSPACE_LOADER is not set
CONFIG_SECURITY_TOMOYO_POLICY_LOADER="/sbin/tomoyo-init"
CONFIG_SECURITY_TOMOYO_ACTIVATION_TRIGGER="/sbin/init"
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=0
CONFIG_SECURITY_YAMA=y
CONFIG_SECURITY_YAMA_STACKED=y
# CONFIG_IMA is not set
# CONFIG_EVM is not set
# CONFIG_DEFAULT_SECURITY_SELINUX is not set
# CONFIG_DEFAULT_SECURITY_SMACK is not set
# CONFIG_DEFAULT_SECURITY_TOMOYO is not set
# CONFIG_DEFAULT_SECURITY_APPARMOR is not set
# CONFIG_DEFAULT_SECURITY_YAMA is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_DEFAULT_SECURITY=""
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=m
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=y
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_ABLK_HELPER_X86=y
CONFIG_CRYPTO_GLUE_HELPER_X86=y

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=y
CONFIG_CRYPTO_GCM=m
CONFIG_CRYPTO_SEQIV=y

#
# Block modes
#
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=m
CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=y

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_GHASH=m
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD128=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=m
CONFIG_CRYPTO_SHA256_SSSE3=m
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=m
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_X86_64=m
CONFIG_CRYPTO_AES_NI_INTEL=m
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SALSA20_X86_64=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=y
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=y
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=y
CONFIG_CRYPTO_TWOFISH_X86_64=y
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=y
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_USER_API=m
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
# CONFIG_CRYPTO_HW is not set
CONFIG_ASYMMETRIC_KEY_TYPE=m
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_PUBLIC_KEY_ALGO_RSA=m
CONFIG_X509_CERTIFICATE_PARSER=m
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
# CONFIG_BINARY_PRINTF is not set

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=m
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=m
CONFIG_LZO_DECOMPRESS=m
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
# CONFIG_XZ_DEC_POWERPC is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_BTREE=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_LRU_CACHE=m
CONFIG_AVERAGE=y
CONFIG_CLZ_TAB=y
CONFIG_CORDIC=m
# CONFIG_DDR is not set
CONFIG_MPILIB=m
CONFIG_OID_REGISTRY=m

--mYYhpFXgKVw71fwr--

--RwxaKO075aXzzOz0
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlL1CbMACgkQ1I6eqOUidQGoPACeO9ypvsfmRcmuevUbdS3Bo5eJ
JGAAnAjHIrvpVW4JgOzf768noZiYvxBn
=OHp3
-----END PGP SIGNATURE-----

--RwxaKO075aXzzOz0--


--===============1704717724066019537==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1704717724066019537==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 16:27:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoGe-0001UL-Bb; Fri, 07 Feb 2014 16:27:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <william@gandi.net>) id 1WBoGb-0001UB-Bl
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:27:03 +0000
Received: from [85.158.143.35:25810] by server-1.bemta-4.messagelabs.com id
	B4/FC-31661-45905F25; Fri, 07 Feb 2014 16:27:00 +0000
X-Env-Sender: william@gandi.net
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391790417!3988799!1
X-Originating-IP: [217.70.183.210]
X-SpamReason: No, hits=2.4 required=7.0 tests=UNIQUE_WORDS, UPPERCASE_75_100
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14654 invoked from network); 7 Feb 2014 16:26:58 -0000
Received: from mail4.gandi.net (HELO mail4.gandi.net) (217.70.183.210)
	by server-2.tower-21.messagelabs.com with SMTP;
	7 Feb 2014 16:26:58 -0000
Received: from localhost (mfiltercorp1-d.gandi.net [217.70.183.155])
	by mail4.gandi.net (Postfix) with ESMTP id 7CC1A120A39;
	Fri,  7 Feb 2014 17:26:57 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mfiltercorp1-d.gandi.net
Received: from mail4.gandi.net ([217.70.183.210])
	by localhost (mfiltercorp1-d.gandi.net [217.70.183.155]) (amavisd-new,
	port 10024)
	with ESMTP id QSBd8UYly1+7; Fri,  7 Feb 2014 17:26:55 +0100 (CET)
Received: from gandi.net (hitchhiker.gandi.net [217.70.181.24])
	(using TLSv1 with cipher DHE-RSA-AES128-SHA (128/128 bits))
	(No client certificate requested)
	by mail4.gandi.net (Postfix) with ESMTPSA id C52A2120971;
	Fri,  7 Feb 2014 17:26:55 +0100 (CET)
Date: Fri, 7 Feb 2014 17:28:35 +0100
From: William Dauchy <william@gandi.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140207162835.GH19084@gandi.net>
References: <20140207124607.GE19084@gandi.net>
	<20140207150224.GA3605@phenom.dumpdata.com>
	<20140207153826.GF19084@gandi.net>
	<20140207160441.GA5060@phenom.dumpdata.com>
	<20140207160935.GG19084@gandi.net>
	<20140207161909.GA5330@phenom.dumpdata.com>
MIME-Version: 1.0
In-Reply-To: <20140207161909.GA5330@phenom.dumpdata.com>
Reply-To: William Dauchy <william@gandi.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: konrad@kernel.org, William Dauchy <william@gandi.net>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 3.10 xenU memleaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1704717724066019537=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============1704717724066019537==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="RwxaKO075aXzzOz0"
Content-Disposition: inline


--RwxaKO075aXzzOz0
Content-Type: multipart/mixed; boundary="mYYhpFXgKVw71fwr"
Content-Disposition: inline


--mYYhpFXgKVw71fwr
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Feb07 11:19, Konrad Rzeszutek Wilk wrote:
> Can you send me your guest config? That is your .config

attached

> and also your guest configuration file?

hard to get since I'm creating it with libxs, but nothing in particular.
-- 
William

--mYYhpFXgKVw71fwr
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="linux-3.10-xenU-x86_64-hosting"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.10.29 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_CPU_PROBE_RELEASE=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
CONFIG_LOCALVERSION=""
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
# CONFIG_KERNEL_GZIP is not set
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
CONFIG_KERNEL_XZ=y
# CONFIG_KERNEL_LZO is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y
# CONFIG_AUDIT_LOGINUID_IMMUTABLE is not set
CONFIG_HAVE_GENERIC_HARDIRQS=y

#
# IRQ subsystem
#
CONFIG_GENERIC_HARDIRQS=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=y
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=32
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=15
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_SWAP_ENABLED=y
CONFIG_MEMCG_KMEM=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_UIDGID_STRICT_TYPE_CHECKS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_MM_OWNER=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
# CONFIG_BLK_DEV_INITRD is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HOTPLUG=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_EXPERT=y
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
# CONFIG_PCSPKR_PLATFORM is not set
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
# CONFIG_SLOB is not set
# CONFIG_PROFILING is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
# CONFIG_KPROBES is not set
# CONFIG_JUMP_LABEL is not set
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_USE_GENERIC_SMP_HELPERS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
CONFIG_MODVERSIONS=y
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_THROTTLING=y

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_OSF_PARTITION is not set
# CONFIG_AMIGA_PARTITION is not set
# CONFIG_ATARI_PARTITION is not set
# CONFIG_MAC_PARTITION is not set
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
CONFIG_LDM_DEBUG=y
# CONFIG_SGI_PARTITION is not set
# CONFIG_ULTRIX_PARTITION is not set
# CONFIG_SUN_PARTITION is not set
# CONFIG_KARMA_PARTITION is not set
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_DEADLINE=y
# CONFIG_DEFAULT_CFQ is not set
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="deadline"
CONFIG_PADATA=y
CONFIG_ASN1=m
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_XEN=y
# CONFIG_XEN_PRIVILEGED_GUEST is not set
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_KVM_GUEST is not set
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
# CONFIG_CPU_SUP_AMD is not set
# CONFIG_CPU_SUP_CENTAUR is not set
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=32
# CONFIG_SCHED_SMT is not set
CONFIG_SCHED_MC=y
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
# CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS is not set
# CONFIG_X86_MCE is not set
# CONFIG_I8K is not set
# CONFIG_MICROCODE is not set
CONFIG_X86_MSR=m
CONFIG_X86_CPUID=m
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
# CONFIG_NUMA is not set
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
# CONFIG_CLEANCACHE is not set
# CONFIG_FRONTSWAP is not set
# CONFIG_X86_CHECK_BIOS_CORRUPTION is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=0
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
# CONFIG_SECCOMP is not set
# CONFIG_CC_STACKPROTECTOR is not set
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
# CONFIG_CRASH_DUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
CONFIG_COMPAT_VDSO=y
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y

#
# Power management and ACPI options
#
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
# CONFIG_HIBERNATION is not set
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
# CONFIG_PM_DEBUG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
# CONFIG_CPU_IDLE is not set
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
# CONFIG_PCI is not set
CONFIG_PCI_LABEL=y
# CONFIG_ISA_DMA_API is not set
# CONFIG_PCCARD is not set

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
CONFIG_IA32_AOUT=m
CONFIG_X86_X32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_HAVE_TEXT_POKE_SMP=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_DIAG=m
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=m
CONFIG_XFRM_USER=m
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
# CONFIG_IP_PNP is not set
CONFIG_NET_IPIP=m
CONFIG_NET_IPGRE_DEMUX=m
CONFIG_NET_IP_TUNNEL=m
CONFIG_NET_IPGRE=m
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
# CONFIG_ARPD is not set
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=m
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=m
CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
CONFIG_INET_LRO=y
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
CONFIG_TCP_CONG_CUBIC=m
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_DEFAULT_BIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="bic"
# CONFIG_TCP_MD5SIG is not set
CONFIG_IPV6=y
CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=m
CONFIG_INET6_XFRM_MODE_TRANSPORT=m
CONFIG_INET6_XFRM_MODE_TUNNEL=m
CONFIG_INET6_XFRM_MODE_BEET=m
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_GRE=m
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=y

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_NETLINK_ACCT=m
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=m
CONFIG_NF_CT_PROTO_GRE=m
CONFIG_NF_CT_PROTO_SCTP=m
CONFIG_NF_CT_PROTO_UDPLITE=m
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
CONFIG_NF_CT_NETLINK_HELPER=m
CONFIG_NETFILTER_NETLINK_QUEUE_CT=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
CONFIG_NF_NAT_PROTO_DCCP=m
CONFIG_NF_NAT_PROTO_UDPLITE=m
CONFIG_NF_NAT_PROTO_SCTP=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NETFILTER_TPROXY=m
CONFIG_NETFILTER_XTABLES=m

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_NFACCT=m
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_TIME=m
CONFIG_NETFILTER_XT_MATCH_U32=m
CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
# CONFIG_IP_VS_PROTO_SCTP is not set

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_CONNTRACK_IPV4=m
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_ULOG=m
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PROTO_GRE=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=m
CONFIG_NF_CONNTRACK_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_NF_NAT_IPV6=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m

#
# DECnet: Netfilter Configuration
#
CONFIG_DECNET_NF_GRABULATOR=m
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_ULOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m

#
# DCCP CCIDs Configuration
#
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
CONFIG_IP_DCCP_CCID3=y
# CONFIG_IP_DCCP_CCID3_DEBUG is not set
CONFIG_IP_DCCP_TFRC_LIB=y

#
# DCCP Kernel Hacking
#
# CONFIG_IP_DCCP_DEBUG is not set
CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_MSG is not set
# CONFIG_SCTP_DBG_OBJCNT is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1 is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_RDS=m
CONFIG_RDS_TCP=m
# CONFIG_RDS_DEBUG is not set
CONFIG_TIPC=m
CONFIG_TIPC_PORTS=8191
CONFIG_ATM=y
CONFIG_ATM_CLIP=y
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
CONFIG_ATM_MPOA=m
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
# CONFIG_L2TP_DEBUGFS is not set
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=m
CONFIG_GARP=m
CONFIG_MRP=m
CONFIG_BRIDGE=m
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
CONFIG_HAVE_NET_DSA=y
CONFIG_VLAN_8021Q=m
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
CONFIG_DECNET=m
CONFIG_DECNET_ROUTER=y
CONFIG_LLC=m
CONFIG_LLC2=m
CONFIG_IPX=m
CONFIG_IPX_INTERN=y
CONFIG_ATALK=m
CONFIG_DEV_APPLETALK=m
CONFIG_IPDDP=m
CONFIG_IPDDP_ENCAP=y
CONFIG_X25=m
CONFIG_LAPB=m
CONFIG_PHONET=m
CONFIG_IEEE802154=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=m
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=m
CONFIG_NET_SCH_INGRESS=m
CONFIG_NET_SCH_PLUG=m

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
CONFIG_NET_EMATCH_CANID=m
CONFIG_NET_EMATCH_IPSET=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_IPT=m
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
CONFIG_NET_CLS_IND=y
CONFIG_NET_SCH_FIFO=y
# CONFIG_DCB is not set
CONFIG_DNS_RESOLVER=y
CONFIG_BATMAN_ADV=m
CONFIG_BATMAN_ADV_BLA=y
CONFIG_BATMAN_ADV_DAT=y
# CONFIG_BATMAN_ADV_NC is not set
# CONFIG_BATMAN_ADV_DEBUG is not set
CONFIG_OPENVSWITCH=m
CONFIG_VSOCKETS=m
CONFIG_NETLINK_MMAP=y
CONFIG_NETLINK_DIAG=m
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_NETPRIO_CGROUP=m
CONFIG_BQL=y
CONFIG_BPF_JIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_SLCAN is not set
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_SJA1000 is not set
# CONFIG_CAN_C_CAN is not set
# CONFIG_CAN_CC770 is not set
# CONFIG_CAN_SOFTING is not set
# CONFIG_CAN_DEBUG_DEVICES is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
CONFIG_AF_RXRPC=y
# CONFIG_AF_RXRPC_DEBUG is not set
# CONFIG_RXKAD is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=m
# CONFIG_RFKILL_INPUT is not set
CONFIG_NET_9P=m
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
# CONFIG_CEPH_LIB_USE_DNS_RESOLVER is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
# CONFIG_DMA_SHARED_BUFFER is not set

#
# Bus devices
#
CONFIG_CONNECTOR=m
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
CONFIG_BLK_DEV_CRYPTOLOOP=m
CONFIG_BLK_DEV_DRBD=m
# CONFIG_DRBD_FAULT_INJECTION is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=8192
CONFIG_BLK_DEV_XIP=y
# CONFIG_CDROM_PKTCDVD is not set
CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=y
# CONFIG_BLK_DEV_HD is not set
CONFIG_BLK_DEV_RBD=m

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_93CX6 is not set

#
# Texas Instruments shared transport line discipline
#

#
# Altera FPGA firmware download module
#
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
# CONFIG_SCSI_DMA is not set
# CONFIG_SCSI_NETLINK is not set
# CONFIG_ATA is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
CONFIG_BCACHE=m
# CONFIG_BCACHE_DEBUG is not set
CONFIG_BCACHE_EDEBUG=y
# CONFIG_BCACHE_CLOSURES_DEBUG is not set
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_DEBUG_BLOCK_STACK_TRACING=y
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_MQ=m
CONFIG_DM_CACHE_CLEANER=m
CONFIG_DM_MIRROR=m
CONFIG_DM_RAID=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
CONFIG_DM_DELAY=m
CONFIG_DM_UEVENT=y
# CONFIG_DM_FLAKEY is not set
CONFIG_DM_VERITY=m
# CONFIG_MACINTOSH_DRIVERS is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
CONFIG_BONDING=m
CONFIG_DUMMY=m
CONFIG_EQUALIZER=m
CONFIG_MII=m
CONFIG_IFB=m
CONFIG_NET_TEAM=m
CONFIG_NET_TEAM_MODE_BROADCAST=m
CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
CONFIG_NET_TEAM_MODE_RANDOM=m
CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
CONFIG_NET_TEAM_MODE_LOADBALANCE=m
CONFIG_MACVLAN=m
CONFIG_MACVTAP=m
CONFIG_VXLAN=m
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=m
CONFIG_VETH=m
# CONFIG_ATM_DRIVERS is not set

#
# CAIF transport drivers
#

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
# CONFIG_ETHERNET is not set
# CONFIG_PHYLIB is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
CONFIG_PPP_FILTER=y
CONFIG_PPP_MPPE=m
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOATM=m
CONFIG_PPPOE=m
CONFIG_PPTP=m
CONFIG_PPPOL2TP=m
CONFIG_PPP_ASYNC=m
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
CONFIG_SLIP_MODE_SLIP6=y
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
CONFIG_WAN=y
CONFIG_HDLC=m
CONFIG_HDLC_RAW=m
CONFIG_HDLC_RAW_ETH=m
CONFIG_HDLC_CISCO=m
CONFIG_HDLC_FR=m
CONFIG_HDLC_PPP=m
CONFIG_HDLC_X25=m
CONFIG_DLCI=m
CONFIG_DLCI_MAX=8
CONFIG_LAPBETHER=m
CONFIG_X25_ASY=m
CONFIG_SBNI=m
CONFIG_SBNI_MULTILINE=y
CONFIG_IEEE802154_DRIVERS=m
CONFIG_IEEE802154_FAKEHARD=m
# CONFIG_IEEE802154_FAKELB is not set
CONFIG_XEN_NETDEV_FRONTEND=y
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
# CONFIG_INPUT_FF_MEMLESS is not set
# CONFIG_INPUT_POLLDEV is not set
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
# CONFIG_INPUT_MOUSEDEV is not set
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=m
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TABLET is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_LIBPS2 is not set
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
# CONFIG_VT_HW_CONSOLE_BINDING is not set
CONFIG_UNIX98_PTYS=y
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=16
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_N_HDLC=m
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y
# CONFIG_STALDRV is not set

#
# Serial drivers
#
# CONFIG_SERIAL_8250 is not set
CONFIG_FIX_EARLYCON_MEM=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_IPMI_HANDLER is not set
# CONFIG_HW_RANDOM is not set
# CONFIG_NVRAM is not set
# CONFIG_R3964 is not set
# CONFIG_MWAVE is not set
CONFIG_RAW_DRIVER=m
CONFIG_MAX_RAW_DEVS=256
# CONFIG_HANGCHECK_TIMER is not set
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
# CONFIG_I2C is not set
# CONFIG_SPI is not set

#
# Qualcomm MSM SSBI bus support
#
# CONFIG_SSBI is not set
# CONFIG_HSI is not set

#
# PPS support
#
# CONFIG_PPS is not set

#
# PPS generators support
#

#
# PTP clock support
#
# CONFIG_PTP_1588_CLOCK is not set

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
CONFIG_GPIO_DEVRES=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
# CONFIG_POWER_SUPPLY is not set
# CONFIG_POWER_AVS is not set
# CONFIG_HWMON is not set
# CONFIG_THERMAL is not set
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SC520_WDT is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_60XX_WDT is not set
# CONFIG_SBC8360_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83697HF_WDT is not set
# CONFIG_W83697UG_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_XEN_WDT=m
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
# CONFIG_DRM is not set
# CONFIG_VGASTATE is not set
# CONFIG_VIDEO_OUTPUT_CONTROL is not set
# CONFIG_FB is not set
# CONFIG_EXYNOS_VIDEO is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
# CONFIG_SOUND is not set

#
# HID support
#
# CONFIG_HID is not set
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB_ARCH_HAS_XHCI is not set
# CONFIG_USB_SUPPORT is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
# CONFIG_NEW_LEDS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#

#
# HID Sensor RTC drivers
#
# CONFIG_DMADEVICES is not set
# CONFIG_AUXDISPLAY is not set
CONFIG_UIO=m
CONFIG_UIO_PDRV=m
CONFIG_UIO_PDRV_GENIRQ=m
CONFIG_UIO_DMEM_GENIRQ=m
CONFIG_VIRT_DRIVERS=y

#
# Virtio drivers
#
# CONFIG_VIRTIO_MMIO is not set

#
# Microsoft Hyper-V guest support
#

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=m
CONFIG_XEN_GRANT_DEV_ALLOC=m
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_HAVE_PVMMU=y
# CONFIG_STAGING is not set
# CONFIG_X86_PLATFORM_DEVICES is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_SUPPORT=y

#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# Firmware Drivers
#
# CONFIG_EDD is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
# CONFIG_DMIID is not set
# CONFIG_DMI_SYSFS is not set
# CONFIG_ISCSI_IBFT_FIND is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT23=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_XFS_RT=y
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
CONFIG_OCFS2_FS=m
CONFIG_OCFS2_FS_O2CB=m
CONFIG_OCFS2_FS_USERSPACE_CLUSTER=m
CONFIG_OCFS2_FS_STATS=y
CONFIG_OCFS2_DEBUG_MASKLOG=y
# CONFIG_OCFS2_DEBUG_FS is not set
CONFIG_BTRFS_FS=m
# CONFIG_BTRFS_FS_POSIX_ACL is not set
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
CONFIG_NILFS2_FS=m
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=m
CONFIG_QFMT_V1=m
CONFIG_QFMT_V2=m
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=m
CONFIG_FUSE_FS=m
CONFIG_CUSE=m

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
CONFIG_UDF_NLS=y

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=m
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
# CONFIG_TMPFS_POSIX_ACL is not set
CONFIG_TMPFS_XATTR=y
# CONFIG_HUGETLBFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
CONFIG_ADFS_FS=m
# CONFIG_ADFS_FS_RW is not set
CONFIG_AFFS_FS=m
CONFIG_ECRYPT_FS=m
# CONFIG_ECRYPT_FS_MESSAGING is not set
CONFIG_HFS_FS=m
CONFIG_HFSPLUS_FS=m
CONFIG_BEFS_FS=m
# CONFIG_BEFS_DEBUG is not set
CONFIG_BFS_FS=m
CONFIG_EFS_FS=m
CONFIG_LOGFS=m
CONFIG_CRAMFS=y
CONFIG_SQUASHFS=m
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZO is not set
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
CONFIG_VXFS_FS=m
CONFIG_MINIX_FS=m
CONFIG_OMFS_FS=m
CONFIG_HPFS_FS=m
CONFIG_QNX4FS_FS=m
# CONFIG_QNX6FS_FS is not set
CONFIG_ROMFS_FS=m
CONFIG_ROMFS_BACKED_BY_BLOCK=y
CONFIG_ROMFS_ON_BLOCK=y
# CONFIG_PSTORE is not set
CONFIG_SYSV_FS=m
CONFIG_UFS_FS=m
# CONFIG_UFS_FS_WRITE is not set
# CONFIG_UFS_DEBUG is not set
# CONFIG_F2FS_FS is not set
CONFIG_AUFS_FS=m
CONFIG_AUFS_BRANCH_MAX_127=y
# CONFIG_AUFS_BRANCH_MAX_511 is not set
# CONFIG_AUFS_BRANCH_MAX_1023 is not set
# CONFIG_AUFS_BRANCH_MAX_32767 is not set
CONFIG_AUFS_SBILIST=y
# CONFIG_AUFS_HNOTIFY is not set
# CONFIG_AUFS_EXPORT is not set
# CONFIG_AUFS_RDU is not set
# CONFIG_AUFS_SP_IATTR is not set
# CONFIG_AUFS_SHWH is not set
# CONFIG_AUFS_BR_RAMFS is not set
# CONFIG_AUFS_BR_FUSE is not set
# CONFIG_AUFS_BR_HFSPLUS is not set
CONFIG_AUFS_BDEV_LOOP=y
# CONFIG_AUFS_DEBUG is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V2=m
CONFIG_NFS_V3=m
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
# CONFIG_NFSD_FAULT_INJECTION is not set
CONFIG_LOCKD=m
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=m
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=m
CONFIG_SUNRPC_GSS=m
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DEBUG is not set
CONFIG_CEPH_FS=m
CONFIG_CIFS=m
# CONFIG_CIFS_STATS is not set
# CONFIG_CIFS_WEAK_PW_HASH is not set
# CONFIG_CIFS_UPCALL is not set
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
# CONFIG_CIFS_ACL is not set
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DFS_UPCALL is not set
# CONFIG_CIFS_SMB2 is not set
CONFIG_NCP_FS=m
CONFIG_NCPFS_PACKET_SIGNING=y
CONFIG_NCPFS_IOCTL_LOCKING=y
CONFIG_NCPFS_STRONG=y
CONFIG_NCPFS_NFS_NS=y
CONFIG_NCPFS_OS2_NS=y
# CONFIG_NCPFS_SMALLDOS is not set
CONFIG_NCPFS_NLS=y
CONFIG_NCPFS_EXTRAS=y
CONFIG_CODA_FS=m
CONFIG_AFS_FS=m
# CONFIG_AFS_DEBUG is not set
# CONFIG_9P_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="iso8859-15"
CONFIG_NLS_CODEPAGE_437=m
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=m
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
# CONFIG_DLM_DEBUG is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
# CONFIG_PRINTK_TIME is not set
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
CONFIG_ENABLE_WARN_DEPRECATED=y
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=1024
CONFIG_MAGIC_SYSRQ=y
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
CONFIG_UNUSED_SYMBOLS=y
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_SHIRQ is not set
# CONFIG_LOCKUP_DETECTOR is not set
# CONFIG_PANIC_ON_OOPS is not set
CONFIG_PANIC_ON_OOPS_VALUE=0
# CONFIG_DETECT_HUNG_TASK is not set
CONFIG_SCHED_DEBUG=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_TIMER_STATS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_ATOMIC_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_INFO is not set
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VIRTUAL is not set
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_MEMORY_INIT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
# CONFIG_FRAME_POINTER is not set
# CONFIG_BOOT_PRINTK_DELAY is not set

#
# RCU Debugging
#
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# CONFIG_DEBUG_PER_CPU_MAPS is not set
# CONFIG_LKDTM is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
# CONFIG_FAULT_INJECTION is not set
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
# CONFIG_DEBUG_PAGEALLOC is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACING_SUPPORT=y
# CONFIG_FTRACE is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_DYNAMIC_DEBUG is not set
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
CONFIG_STRICT_DEVMEM=y
# CONFIG_X86_VERBOSE_BOOTUP is not set
CONFIG_EARLY_PRINTK=y
CONFIG_DEBUG_STACKOVERFLOW=y
# CONFIG_X86_PTDUMP is not set
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
# CONFIG_DEBUG_SET_MODULE_RONX is not set
# CONFIG_DEBUG_NX_TEST is not set
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
# CONFIG_OPTIMIZE_INLINING is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set

#
# Security options
#
CONFIG_KEYS=y
CONFIG_ENCRYPTED_KEYS=m
CONFIG_KEYS_DEBUG_PROC_KEYS=y
CONFIG_SECURITY_DMESG_RESTRICT=y
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
CONFIG_LSM_MMAP_MIN_ADDR=65536
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=0
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX=y
CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX_VALUE=19
CONFIG_SECURITY_SMACK=y
CONFIG_SECURITY_TOMOYO=y
CONFIG_SECURITY_TOMOYO_MAX_ACCEPT_ENTRY=2048
CONFIG_SECURITY_TOMOYO_MAX_AUDIT_LOG=1024
# CONFIG_SECURITY_TOMOYO_OMIT_USERSPACE_LOADER is not set
CONFIG_SECURITY_TOMOYO_POLICY_LOADER="/sbin/tomoyo-init"
CONFIG_SECURITY_TOMOYO_ACTIVATION_TRIGGER="/sbin/init"
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_BOOTPARAM_VALUE=0
CONFIG_SECURITY_YAMA=y
CONFIG_SECURITY_YAMA_STACKED=y
# CONFIG_IMA is not set
# CONFIG_EVM is not set
# CONFIG_DEFAULT_SECURITY_SELINUX is not set
# CONFIG_DEFAULT_SECURITY_SMACK is not set
# CONFIG_DEFAULT_SECURITY_TOMOYO is not set
# CONFIG_DEFAULT_SECURITY_APPARMOR is not set
# CONFIG_DEFAULT_SECURITY_YAMA is not set
CONFIG_DEFAULT_SECURITY_DAC=y
CONFIG_DEFAULT_SECURITY=""
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=m
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=m
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_WORKQUEUE=y
CONFIG_CRYPTO_CRYPTD=y
CONFIG_CRYPTO_AUTHENC=y
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_ABLK_HELPER_X86=y
CONFIG_CRYPTO_GLUE_HELPER_X86=y

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=y
CONFIG_CRYPTO_GCM=m
CONFIG_CRYPTO_SEQIV=y

#
# Block modes
#
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=m
CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_LRW=y
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=y

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_GHASH=m
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD128=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=m
CONFIG_CRYPTO_SHA256_SSSE3=m
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=m
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
CONFIG_CRYPTO_AES_X86_64=m
CONFIG_CRYPTO_AES_NI_INTEL=m
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SALSA20=m
CONFIG_CRYPTO_SALSA20_X86_64=m
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=y
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=y
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=y
CONFIG_CRYPTO_TWOFISH_X86_64=y
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=y
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=m
CONFIG_CRYPTO_ZLIB=m
CONFIG_CRYPTO_LZO=m

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_USER_API=m
CONFIG_CRYPTO_USER_API_HASH=m
CONFIG_CRYPTO_USER_API_SKCIPHER=m
# CONFIG_CRYPTO_HW is not set
CONFIG_ASYMMETRIC_KEY_TYPE=m
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=m
CONFIG_PUBLIC_KEY_ALGO_RSA=m
CONFIG_X509_CERTIFICATE_PARSER=m
CONFIG_HAVE_KVM=y
# CONFIG_VIRTUALIZATION is not set
# CONFIG_BINARY_PRINTF is not set

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=m
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
CONFIG_CRC7=m
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=m
CONFIG_LZO_COMPRESS=m
CONFIG_LZO_DECOMPRESS=m
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
# CONFIG_XZ_DEC_POWERPC is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_BTREE=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_LRU_CACHE=m
CONFIG_AVERAGE=y
CONFIG_CLZ_TAB=y
CONFIG_CORDIC=m
# CONFIG_DDR is not set
CONFIG_MPILIB=m
CONFIG_OID_REGISTRY=m

--mYYhpFXgKVw71fwr--

--RwxaKO075aXzzOz0
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEARECAAYFAlL1CbMACgkQ1I6eqOUidQGoPACeO9ypvsfmRcmuevUbdS3Bo5eJ
JGAAnAjHIrvpVW4JgOzf768noZiYvxBn
=OHp3
-----END PGP SIGNATURE-----

--RwxaKO075aXzzOz0--


--===============1704717724066019537==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1704717724066019537==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 16:29:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoJF-0001ma-Al; Fri, 07 Feb 2014 16:29:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBoJD-0001lC-HZ
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:29:43 +0000
Received: from [193.109.254.147:39367] by server-1.bemta-14.messagelabs.com id
	D0/19-15438-6F905F25; Fri, 07 Feb 2014 16:29:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391790580!2817037!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26631 invoked from network); 7 Feb 2014 16:29:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 16:29:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="99019866"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 16:29:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 11:29:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBoJ9-0007P7-GX;
	Fri, 07 Feb 2014 16:29:39 +0000
Message-ID: <52F509F3.1000806@citrix.com>
Date: Fri, 7 Feb 2014 16:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com>
In-Reply-To: <52F5082E.6010207@terremark.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Rob Hoes <Rob.Hoes@citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/14 16:22, Don Slutz wrote:
> On 02/07/14 05:05, Ian Campbell wrote:
>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>>> cc1: warnings being treated as errors
>>>>>> xenlight_stubs.c: In function 'Defbool_val':
>>>>>> xenlight_stubs.c:344: warning: implicit declaration of function
>>>>>> 'CAMLreturnT'
>>>>>> xenlight_stubs.c:344: error: expected expression before
>>>>>> 'libxl_defbool'
>>>>>> xenlight_stubs.c: In function 'String_option_val':
>>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
>>>>>> xenlight_stubs.c: In function 'aohow_val':
>>>>>> xenlight_stubs.c:440: error: expected expression before
>>>>>> 'libxl_asyncop_how'
>>> Any idea on what to do about ocaml issue?
>> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
>> What version do you have?
>>
>> Ian.
>>
> dcs-xen-53:~>ocaml -version
> The Objective Caml toplevel, version 3.09.3
>
>    -Don Slutz
>

Which, according to Google, was introduced in OCaml 3.09.4.

I think the ./configure script needs a minimum-version check.

Applying the top macro from
http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
as a compatibility hack might also be a good idea.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:29:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoJF-0001ma-Al; Fri, 07 Feb 2014 16:29:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WBoJD-0001lC-HZ
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 16:29:43 +0000
Received: from [193.109.254.147:39367] by server-1.bemta-14.messagelabs.com id
	D0/19-15438-6F905F25; Fri, 07 Feb 2014 16:29:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1391790580!2817037!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26631 invoked from network); 7 Feb 2014 16:29:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 16:29:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="99019866"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 16:29:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 11:29:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WBoJ9-0007P7-GX;
	Fri, 07 Feb 2014 16:29:39 +0000
Message-ID: <52F509F3.1000806@citrix.com>
Date: Fri, 7 Feb 2014 16:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com>
In-Reply-To: <52F5082E.6010207@terremark.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Rob Hoes <Rob.Hoes@citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07/02/14 16:22, Don Slutz wrote:
> On 02/07/14 05:05, Ian Campbell wrote:
>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>>> cc1: warnings being treated as errors
>>>>>> xenlight_stubs.c: In function 'Defbool_val':
>>>>>> xenlight_stubs.c:344: warning: implicit declaration of function
>>>>>> 'CAMLreturnT'
>>>>>> xenlight_stubs.c:344: error: expected expression before
>>>>>> 'libxl_defbool'
>>>>>> xenlight_stubs.c: In function 'String_option_val':
>>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
>>>>>> xenlight_stubs.c: In function 'aohow_val':
>>>>>> xenlight_stubs.c:440: error: expected expression before
>>>>>> 'libxl_asyncop_how'
>>> Any idea on what to do about ocaml issue?
>> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
>> What version do you have?
>>
>> Ian.
>>
> dcs-xen-53:~>ocaml -version
> The Objective Caml toplevel, version 3.09.3
>
>    -Don Slutz
>

Which, according to Google, was introduced in 3.09.4.

I think the ./configure script needs a minimum version check.

Applying the top macro from
http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
as a compatibility hack might also be a good idea.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 16:56:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 16:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBoiI-0002yV-H6; Fri, 07 Feb 2014 16:55:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBoiH-0002y8-8C
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 16:55:37 +0000
Received: from [85.158.139.211:19817] by server-2.bemta-5.messagelabs.com id
	86/27-23037-80015F25; Fri, 07 Feb 2014 16:55:36 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1391792134!2451799!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30405 invoked from network); 7 Feb 2014 16:55:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 16:55:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,801,1384300800"; d="scan'208";a="99027909"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 16:55:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 11:55:33 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBoiC-00049H-Sf;
	Fri, 07 Feb 2014 16:55:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WBoiC-00017p-IF;
	Fri, 07 Feb 2014 16:55:32 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21237.4100.356986.496721@mariner.uk.xensource.com>
Date: Fri, 7 Feb 2014 16:55:32 +0000
To: <lars.kurth@xen.org>
Newsgroups: chiark.mail.xen.devel
In-Reply-To: <52F3B0C6.9050904@xen.org>
References: <52F3B0C6.9050904@xen.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: wg-test-framework@lists.xenproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Looking for a volunteer to represent the Xen
 Project developer community at Test Working group meetings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Lars Kurth writes ("[Xen-devel] Looking for a volunteer to represent the Xen Project developer community at Test Working group meetings"):
> at the last WG group meeting I had an action to call a vote on whether 
> we should open up the Test Framework WG to a community member. We had 
> the vote and it was carried.

Thanks.

I'd like to volunteer for this role.

> == What type of person are we looking for ==
> 1) Can represent the Xen Project Developer community : this means you 
> need to be an active member of the community (e.g. a maintainer or 
> committer)

I'm a tools maintainer, and committer.

> 2) Has some understanding of how testing in the Xen Community works 
> today and how it should work.
> 3) Ideally you also have written tests for Xen before.

I mostly wrote, and run, the existing test system.

> == Responsibilities and Rights ==
> 1) You are presenting the views of the community, not those of your 
> employee or your own.

Right.

I'm open to questions on how I would approach this task.

> 2) This means that you will need to consult with the developer community 
> and understand the key issues on testing.

Indeed.

> 3) You commit to attending WG meetings and monitor and participate on 
> wg-test-framework@lists.xenproject.org

I can do that.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:15:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBp0l-0003v7-Uf; Fri, 07 Feb 2014 17:14:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Suravee.Suthikulpanit@amd.com>) id 1WBp0k-0003v2-I3
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:14:42 +0000
Received: from [85.158.143.35:7902] by server-1.bemta-4.messagelabs.com id
	8F/D0-31661-18415F25; Fri, 07 Feb 2014 17:14:41 +0000
X-Env-Sender: Suravee.Suthikulpanit@amd.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391793279!3985073!1
X-Originating-IP: [207.46.163.27]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16219 invoked from network); 7 Feb 2014 17:14:40 -0000
Received: from co9ehsobe004.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.27)
	by server-3.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Feb 2014 17:14:40 -0000
Received: from mail10-co9-R.bigfish.com (10.236.132.247) by
	CO9EHSOBE020.bigfish.com (10.236.130.83) with Microsoft SMTP Server id
	14.1.225.22; Fri, 7 Feb 2014 17:14:38 +0000
Received: from mail10-co9 (localhost [127.0.0.1])	by mail10-co9-R.bigfish.com
	(Postfix) with ESMTP id C1F40E066B;
	Fri,  7 Feb 2014 17:14:38 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(z579ehzbb2dI98dI9371Ic89bh1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h93fhd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24ach24d7h2516h2545h1155h)
Received: from mail10-co9 (localhost.localdomain [127.0.0.1]) by mail10-co9
	(MessageSwitch) id 1391793277728778_24912;
	Fri,  7 Feb 2014 17:14:37 +0000 (UTC)
Received: from CO9EHSMHS031.bigfish.com (unknown [10.236.132.238])	by
	mail10-co9.bigfish.com (Postfix) with ESMTP id A248C20054;
	Fri,  7 Feb 2014 17:14:37 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO9EHSMHS031.bigfish.com
	(10.236.130.41) with Microsoft SMTP Server id 14.16.227.3;
	Fri, 7 Feb 2014 17:14:36 +0000
X-WSS-ID: 0N0MYK9-08-EQK-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2C569D16050;	Fri,  7 Feb 2014 11:14:33 -0600 (CST)
Received: from SATLEXDAG06.amd.com (10.181.40.13) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Fri, 7 Feb 2014 11:14:55 -0600
Received: from [10.224.13.88] (10.180.168.240) by satlexdag06.amd.com
	(10.181.40.13) with Microsoft SMTP Server id 14.2.328.9; Fri, 7 Feb 2014
	12:14:35 -0500
Message-ID: <52F5147A.2060504@amd.com>
Date: Fri, 7 Feb 2014 11:14:34 -0600
From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
	<52F4F7D1.9020906@oracle.com>
	<52F5088F020000780011A484@nat28.tlf.novell.com>
	<52F4FDD9.5000608@oracle.com>
	<52F50EB9020000780011A4D8@nat28.tlf.novell.com>
	<52F503B8.2040304@oracle.com>
	<52F51404020000780011A517@nat28.tlf.novell.com>
In-Reply-To: <52F51404020000780011A517@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
 IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/7/2014 10:12 AM, Jan Beulich wrote:
>>>> On 07.02.14 at 17:03, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 02/07/2014 10:50 AM, Jan Beulich wrote:
>>>>>> On 07.02.14 at 16:38, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>> On 02/07/2014 10:23 AM, Jan Beulich wrote:
>>>>>>>> On 07.02.14 at 16:12, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>>>> On 02/07/2014 04:21 AM, Jan Beulich wrote:
>>>>>>> ... but interrupt remapping is requested (with per-device remapping
>>>>>>> tables). Without it, the timer interrupt is usually not working.
>>>>>>>
>>>>>>> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
>>>>>>> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
>>>>>>> Roedel <joerg.roedel@amd.com>.
>>>>>>>
>>>>>>> Reported-by: Eric Houby <ehouby@yahoo.com>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>> Tested-by: Eric Houby <ehouby@yahoo.com>
>>>>>>>
>>>>>>> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
>>>>>>> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
>>>>>>> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>>>>>>>          const struct acpi_ivrs_header *ivrs_block;
>>>>>>>          unsigned long length;
>>>>>>>          unsigned int apic;
>>>>>>> +    bool_t sb_ioapic = !iommu_intremap;
>>>>>>>          int error = 0;
>>>>>>>
>>>>>>>          BUG_ON(!table);
>>>>>>> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>>>>>>>          /* Each IO-APIC must have been mentioned in the table. */
>>>>>>>          for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>>>>>>>          {
>>>>>>> -        if ( !nr_ioapic_entries[apic] ||
>>>>>>> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>>>> +        if ( !nr_ioapic_entries[apic] )
>>>>>>> +            continue;
>>>>>>> +
>>>>>>> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
>>>>>>> +             /* SB IO-APIC is always on this device in AMD systems. */
>>>>>>> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
>>>>>>> +            sb_ioapic = 1;
>>>>>>> +
>>>>>>> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>>>>>>>                  continue;
>>>>>>>
>>>>>>>              if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
>>>>>> I don't know whether 0:14:0 is set in stone, I don't remember seeing
>>>>>> anywhere that this is architectural.
>>>>>>
[Suravee] From what I have seen on all of our system and checking with 
design engineer.  IOAPIC in the SB on AMD system always have "0:14.0" 
and should always be enabled.

>>>>>> In the (unlikely) event that it is moved somewhere else will the user be
>>>>>> able to overwrite where it is? Do you
>>>>>> think that sb_ioapic may need to be set to true if appropriate bit is
>>>>>> set in ioapic_cmdline?
>>>>> These are question you'd need to ask to Jörg, the author of the
>>>>> original Linux side patch. I took as a precondition here that he
>>>>> knew what he was doing.
>>>> Xen already has a way to override IVRS' view of IOAPICs with
>>>> ioapic_cmdline, something that Linux doesn't. Presumably if the user
>>>> sets ivrs_ioapic[] option on boot line then he knows what he is doing
>>>> (at least one would hope so).
>>> I think the logic we have is sufficiently similar to Linux'es.
>>>
>>>> My concern is that this patch would prevent the user from specifying
>>>> where the IOAPIC is. Will this boot option be useful at all now? When we
>>>> specify anything but 0:14:0 it will be pretty much ignored, won't it?
>>> But the purpose here isn't to override how the hardware is
>>> structured, but to overcome firmware vendors not getting their
>>> ACPI tables correct.

[Suravee] The ivrs_ioapic[] would work for the case where users want to 
fix the IVHD when it incorrectly lists the handle ID for IOAPIC, or 
missing it. But the IOAPIC needs to also be cross checked with the 
IOAPIC entry listed in the APIC table.

In this case, the IOAPIC is also missing in the APIC table, and I don't 
think ivrs_ioapic[] would be handling this case anyways.

Suravee



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSd-0005Ds-H6; Fri, 07 Feb 2014 17:43:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSb-0005D5-6H
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:29 +0000
Received: from [85.158.139.211:13875] by server-4.bemta-5.messagelabs.com id
	6D/59-08092-04B15F25; Fri, 07 Feb 2014 17:43:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391795007!2408396!1
X-Originating-IP: [209.85.215.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2395 invoked from network); 7 Feb 2014 17:43:27 -0000
Received: from mail-ea0-f175.google.com (HELO mail-ea0-f175.google.com)
	(209.85.215.175)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:27 -0000
Received: by mail-ea0-f175.google.com with SMTP id z10so1711939ead.6
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:27 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=iiWciA/AuIQKwLpkCbz7yd8cXIHTjwg6w51XGWlmc5k=;
	b=igxapd17k10tEoiOv38+Jx6ZO9NwV7BgqsvZE/jIrGcCCGFZQALehwhWV06jBbYAvW
	ITUut0K87ScNw8U2i/0flTmGal7l97OPlEN7wT+K64YBIRVXxLcM8akHsnEF0ibmTXOK
	Sj5qY8sC0vNL829cP5IUF4iC5kwH5iSQvoKbro93LkaIPvl5arqxRNpNFD3eMbt7ZnCZ
	zHHZdWCTVXLhk/bHoiolJOg30iznWAfTuYoC8kbgBdNWonvsls+v4IBt1BzKb0epyvmt
	lwrFDxtjuq7DDGDlpcupwXmUdbMC5GPJ9XjxpHV9mKtmV5rL8MTwnnVpcFoccvkKqJml
	Fq/Q==
X-Gm-Message-State: ALoCoQngJ3cAQAfTfIRgSqkzuisqAJ6u0SDISAjewirCWyvQags93b/uo1Jez+EOeVEdx57952MD
X-Received: by 10.15.33.193 with SMTP id c41mr5611887eev.79.1391795007417;
	Fri, 07 Feb 2014 09:43:27 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.25
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:26 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:02 +0000
Message-Id: <1391794991-5919-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 03/12] xen/passthrough: vtd: Don't export
	iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_set_pgd is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 xen/include/xen/iommu.h             |    1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a8d33fc..d5ce5b7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
 /*
  * set VT-d page table directory to EPT table if allowed
  */
-void iommu_set_pgd(struct domain *d)
+static void iommu_set_pgd(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
     mfn_t pgd_mfn;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 8bb0a1d..fcbc432 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_set_pgd(struct domain *d);
 void iommu_domain_teardown(struct domain *d);
 
 void pt_pci_init(void);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSg-0005FI-RL; Fri, 07 Feb 2014 17:43:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSf-0005El-T5
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:34 +0000
Received: from [193.109.254.147:23583] by server-16.bemta-14.messagelabs.com
	id 85/F5-21945-54B15F25; Fri, 07 Feb 2014 17:43:33 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1391795012!2831804!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31198 invoked from network); 7 Feb 2014 17:43:32 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:32 -0000
Received: by mail-ea0-f174.google.com with SMTP id b10so1724113eae.33
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:32 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=8uuD56yJ6jVMO1mJIExIBghOj9y/PWZbg3OQmNKoTWY=;
	b=mZrqjjiak9+YsssKlD/HQHatQoh5moH4Nhdys9a78oR13jLqm7vLqjmV+5M86VplCW
	JjYo/OJqQ8VrwQcLTdWdG3h84feMgERbDe4UmQrwgtEymQTNtR5rkyBPeEJqRBFew3E3
	v0fTx64B9lvnm3KaCQRhmERrmBXYyCGGUVQThkfsYEjvNtgk53LDnM3x2pTLBPTL1ygI
	xc1DBxFpmAFmp489tI4XwCD67d5wtcFqiBzitlZVytN+wL8ovHuT2QyP7p79XCsEj+YV
	0102SYy2imi7OBUt3Bl5/KTYcjKelqrWrJGOORz22qzkle4jQiFRfBWZNoMXwNOMpRG9
	joFw==
X-Gm-Message-State: ALoCoQmnRxrBYVruz6Kn/+oLPATxKpWwHOwlTdPFDZhiu7DjsJIJV1gkic7kmgYIX9B31zxPtBPi
X-Received: by 10.14.204.9 with SMTP id g9mr5628656eeo.82.1391795012158;
	Fri, 07 Feb 2014 09:43:32 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.30
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:31 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:05 +0000
Message-Id: <1391794991-5919-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
	dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

DOM0 on ARM will have the same requirements as DOM0 PVH when the IOMMU is
enabled. Both PVH and ARM guests have paging mode translate enabled, so Xen
can use that to know whether it needs to check the requirements.

Rename the function and remove the "pvh" wording from the panic messages.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
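
The gating described above can be sketched as a standalone model. All the
names below are illustrative stand-ins, not Xen's actual code: the real
check_dom0_reqs() panics and sets the global iommu_dom0_strict instead of
returning a status.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for Xen's struct domain. */
struct domain { bool paging_mode_translate; };

/*
 * Miniature model of the reworked check_dom0_reqs(): the requirements
 * only apply when dom0 uses translated paging (PVH on x86, always on
 * ARM).  Returns 1 when the caller should enable dom0-strict mode,
 * 0 when there is nothing to enforce, and a negative value where the
 * real code would panic().
 */
static int check_dom0_reqs_model(const struct domain *d,
                                 bool iommu_enabled, bool iommu_passthrough)
{
    if ( !d->paging_mode_translate )
        return 0;                  /* classic PV dom0: nothing to enforce */
    if ( !iommu_enabled )
        return -1;                 /* iommu must be enabled */
    if ( iommu_passthrough )
        return -2;                 /* dom0-passthrough must not be enabled */
    return 1;                      /* ok: force iommu_dom0_strict */
}
```

Note the early return for non-translated dom0 replaces the is_pvh_domain()
test at the call site in iommu_dom0_init().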
 xen/drivers/passthrough/iommu.c |   14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 19b0e23..26a5d91 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
     return hd->platform_ops->init(d);
 }
 
-static __init void check_dom0_pvh_reqs(struct domain *d)
+static __init void check_dom0_reqs(struct domain *d)
 {
+    if ( !paging_mode_translate(d) )
+        return;
+
     if ( !iommu_enabled )
-        panic("Presently, iommu must be enabled for pvh dom0\n");
+        panic("Presently, iommu must be enabled to use dom0 with translate "
+              "paging mode\n");
 
     if ( iommu_passthrough )
-        panic("For pvh dom0, dom0-passthrough must not be enabled\n");
+        panic("Dom0 uses translate paging mode, dom0-passthrough must not be "
+              "enabled\n");
 
     iommu_dom0_strict = 1;
 }
@@ -145,8 +150,7 @@ void __init iommu_dom0_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    if ( is_pvh_domain(d) )
-        check_dom0_pvh_reqs(d);
+    check_dom0_reqs(d);
 
     if ( !iommu_enabled )
         return;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSa-0005D6-S8; Fri, 07 Feb 2014 17:43:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSZ-0005Cp-D0
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:27 +0000
Received: from [85.158.143.35:60829] by server-2.bemta-4.messagelabs.com id
	12/2D-10891-E3B15F25; Fri, 07 Feb 2014 17:43:26 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391795005!4001307!1
X-Originating-IP: [209.85.215.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7773 invoked from network); 7 Feb 2014 17:43:26 -0000
Received: from mail-ea0-f172.google.com (HELO mail-ea0-f172.google.com)
	(209.85.215.172)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:26 -0000
Received: by mail-ea0-f172.google.com with SMTP id l9so1447687eaj.31
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=A+FPIBV0p1bt9VIYlpV6PZCZjS4kiCj/p53t8RAeJ6Q=;
	b=D+cBJIXkZ0u4dPSmODqzF3t9kXvRnAVZ1UyS7is2RCAgzp9AGPmhupWQ5WIyjx7f1E
	HGxD0QEbhDFNq8UozGtTNEy3QYWorqqwOlHjX06Ieb8hFRaDu9a/Qmdp4fABmAfN3csg
	L+GlIEZwdTVu8r5WBfmfkSVNCNX8UsQ8VjE35FUA4RPNWP7yKO58IxhYH+UcKyKv6Si6
	C7ZveWJlGAH/u5awFq2JmYHamblHJJK+TNzr2RJZsCy5NDyh3GEviL17p29LAANX+T+L
	Y3HbgrMDE5JCZnbkbLNEN+ey8YKpJi79khBAXkTq0bMUkxQEelj3fdhbqiCx9LY4gTzm
	EkFQ==
X-Gm-Message-State: ALoCoQn9uSEzwy2vVMUkwRw4FIWv2IeSV+qiMvZrphaB5fxGJOaUxfDKFvcDpJrIJsRavJ+oP/7c
X-Received: by 10.15.21.2 with SMTP id c2mr5740660eeu.77.1391795005757;
	Fri, 07 Feb 2014 09:43:25 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.23
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:25 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:01 +0000
Message-Id: <1391794991-5919-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 02/12] xen/passthrough: vtd: Don't export
	iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_domain_teardown is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..a8d33fc 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
     return ret;
 }
 
-void iommu_domain_teardown(struct domain *d)
+static void iommu_domain_teardown(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSZ-0005Cq-Bt; Fri, 07 Feb 2014 17:43:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSX-0005Cd-NS
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:25 +0000
Received: from [193.109.254.147:23120] by server-6.bemta-14.messagelabs.com id
	AB/CD-03396-D3B15F25; Fri, 07 Feb 2014 17:43:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391795004!2830320!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21353 invoked from network); 7 Feb 2014 17:43:24 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:24 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so1688209eek.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=jqxoPLhffLdddCuXl7gJ26I+N3MoJPCQWDLv9+DRoO4=;
	b=lp3FETCBZdI/78dRW2EOptiBvdwWOGtcyi/vPul+XgyaJc+BT578eYTW6spodhF29k
	N9sKm2Vcp29/milV0CJbFWRYA5DDo5zJd1CGj4rtT0BeXWmbjw3BbhDUkmXc/o3Vjrkk
	soQnJ7//GS4oGIV8OVxamStRQ1PZdVPXBkii8IlclgQgAFKDfXcZF+jktOJE0xLR7wAT
	c1PZuAU+TjRKMGTnaxSVlgr5FnvJDAh3MPDT7ebStwidSeefGLl3rZvtjh36Re+ZYtA0
	wjOTVH5wnZ2EYa9yvDl+7ziRrTB+/Tu4Wu/9IaxOzbpWWy5mSUTuJxVPmRQmvyPazVaK
	by8A==
X-Gm-Message-State: ALoCoQnXWIjLiiJqCYd0qpQYnn2R4nyFlDUFk49ok7GhIjXDwVDGbBZvF0tWbu3h+fQt1CQ6Ik8F
X-Received: by 10.14.203.197 with SMTP id f45mr4543131eeo.90.1391795003688;
	Fri, 07 Feb 2014 09:43:23 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.22
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:23 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:00 +0000
Message-Id: <1391794991-5919-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com
Subject: [Xen-devel] [RFC for-4.5 01/12] xen/common: grant-table: only call
	IOMMU if paging mode translate is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>From Xen's point of view, ARM guests are PV guests with paging auto translate
enabled.

When IOMMU support is added for ARM, mapping a grant ref will always crash
Xen due to the BUG_ON in __gnttab_map_grant_ref.

On x86:
    - PV guests always have paging mode translate disabled
    - PVH and HVM guests always have paging mode translate enabled

This means we can safely replace the check that the domain is a PV guest
with a check that the guest has paging mode translate disabled.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
---
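
The condition being changed can be modelled in isolation; the names below
are illustrative stand-ins for Xen's predicates, not the actual hypervisor
code.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for Xen's struct domain. */
struct domain { bool paging_mode_translate; bool need_iommu; };

/*
 * Model of the reworked condition in __gnttab_map_grant_ref(): the old
 * is_pv_domain(ld) test becomes !paging_mode_translate(ld), which makes
 * the BUG_ON(paging_mode_translate(ld)) unreachable, so it is dropped.
 * For a non-translated domain gmfns and mfns are the same thing, so the
 * IOMMU entry is 1-to-1 and only needs creating when the frame is not
 * yet pinned by any other mapping (wrc + rdc == 0, per mapcount()).
 */
static bool needs_iommu_map(const struct domain *ld,
                            unsigned int wrc, unsigned int rdc)
{
    return !ld->paging_mode_translate && ld->need_iommu && (wrc + rdc) == 0;
}
```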
 xen/common/grant_table.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 107b000..778bdb7 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -721,12 +721,10 @@ __gnttab_map_grant_ref(
 
     double_gt_lock(lgt, rgt);
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        /* Shouldn't happen, because you can't use iommu in a HVM domain. */
-        BUG_ON(paging_mode_translate(ld));
         /* We're not translated, so we know that gmfns and mfns are
            the same things, so the IOMMU entry is always 1-to-1. */
         mapcount(lgt, rd, frame, &wrc, &rdc);
@@ -931,11 +929,10 @@ __gnttab_unmap_common(
             act->pin -= GNTPIN_hstw_inc;
     }
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        BUG_ON(paging_mode_translate(ld));
         mapcount(lgt, rd, op->frame, &wrc, &rdc);
         if ( (wrc + rdc) == 0 )
             err = iommu_unmap_page(ld, op->frame);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSg-0005F2-D3; Fri, 07 Feb 2014 17:43:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSe-0005EW-TV
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:33 +0000
Received: from [85.158.139.211:13975] by server-17.bemta-5.messagelabs.com id
	1D/1F-31975-44B15F25; Fri, 07 Feb 2014 17:43:32 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391795010!2464125!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5921 invoked from network); 7 Feb 2014 17:43:31 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:31 -0000
Received: by mail-ee0-f54.google.com with SMTP id e53so1704969eek.13
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=9CVzbBqB2BP51z23WZ6Ws++UFOn7l43WtO1ayNF1I/c=;
	b=g0kJeYsue3EjM68o0uS/9BqGrnDjYPANFqIsncXp8ZgDc0LklqXoZFPHxQynQgWNnP
	3Ee3OFTKXklIsUKj3V/Ag/k0SM2RdWQQOhhhsZ3Hgi02FAl31or02OcL29KEr9oBUpJS
	CcJWzyE/eDl9PMm6UCWsRutbfdg1coQl9jfKm4ANqDzTM5jayV67zrTWesrtTKLE/kKy
	7dIP/2FRkReLWeJz0fZuvRNc65zDDtt2EQN1WGDMQn78WOREwztvJb49/5LCtbUO8Jqc
	/aKZ/SDta23LDPS+ecgRVaSEOx+pfOKyBpYPHDkrYfeuLZLAV8lZyTM3SPCLwnYmqlSC
	FaIQ==
X-Gm-Message-State: ALoCoQmj7NEKEMwhaKyy4pf/odoyz/mD3APWGpjFr+RO4CNN/Perpnqn+YLS4NTn7tG/MMAeLZeK
X-Received: by 10.14.202.136 with SMTP id d8mr18200499eeo.46.1391795010884;
	Fri, 07 Feb 2014 09:43:30 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.28
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:29 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:04 +0000
Message-Id: <1391794991-5919-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
	dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Code adapted from linux drivers/of/base.c (commit ef42c58).

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
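
The phandle-list walk this patch adds can be sketched as a self-contained
model: a flat cell array of (phandle, args...) entries, where each
provider's argument count comes from its #*-cells property. All names
below are illustrative, not the actual Xen or Linux code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PHANDLE_ARGS 16

struct phandle_args {
    uint32_t phandle;              /* provider found at the index */
    int args_count;
    uint32_t args[MAX_PHANDLE_ARGS];
};

/*
 * Miniature model of __dt_parse_phandle_with_args(): walk the cell
 * list, skipping cells_of(phandle) argument cells per entry, until the
 * requested index is reached.  A phandle of 0 is an empty entry.
 * Returns 0 on success, -1 on malformed data (-EINVAL in the patch),
 * -2 when the entry is missing or empty (-ENOENT), or the number of
 * entries when called with index == -1 (count mode).
 */
static int parse_phandle_list(const uint32_t *list, size_t len,
                              int (*cells_of)(uint32_t phandle),
                              int index, struct phandle_args *out)
{
    size_t pos = 0;
    int cur = 0;

    while ( pos < len )
    {
        uint32_t phandle = list[pos++];
        int count = phandle ? cells_of(phandle) : 0;

        if ( count < 0 || count > MAX_PHANDLE_ARGS ||
             pos + (size_t)count > len )
            return -1;             /* arguments longer than property */

        if ( cur == index )
        {
            if ( !phandle )
                return -2;         /* empty entry at requested index */
            out->phandle = phandle;
            out->args_count = count;
            for ( int i = 0; i < count; i++ )
                out->args[i] = list[pos + i];
            return 0;
        }
        pos += count;
        cur++;
    }
    return index < 0 ? cur : -2;
}

/* Example #list-cells lookup matching the DTS snippet in the new header
 * comment: phandle 1 -> #list-cells = <2>, phandle 2 -> #list-cells = <1>. */
static int example_cells(uint32_t phandle)
{
    return phandle == 1 ? 2 : phandle == 2 ? 1 : -1;
}
```

With the header comment's example (list = <&phandle1 1 2 &phandle2 3>),
index 1 resolves to the second provider with the single argument 3.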
 xen/common/device_tree.c      |  151 ++++++++++++++++++++++++++++++++++++++++-
 xen/include/xen/device_tree.h |   54 +++++++++++++++
 2 files changed, 203 insertions(+), 2 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index ccdb7ff..37a025a 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1090,9 +1090,9 @@ int dt_device_get_address(const struct dt_device_node *dev, int index,
  *
  * Returns a node pointer.
  */
-static const struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
+static struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
 {
-    const struct dt_device_node *np;
+    struct dt_device_node *np;
 
     dt_for_each_device_node(dt_host, np)
         if ( np->phandle == handle )
@@ -1477,6 +1477,153 @@ bool_t dt_device_is_available(const struct dt_device_node *device)
     return 0;
 }
 
+static int __dt_parse_phandle_with_args(const struct dt_device_node *np,
+                                        const char *list_name,
+                                        const char *cells_name,
+                                        int cell_count, int index,
+                                        struct dt_phandle_args *out_args)
+{
+    const __be32 *list, *list_end;
+    int rc = 0, cur_index = 0;
+    u32 size, count = 0;
+    struct dt_device_node *node = NULL;
+    dt_phandle phandle;
+
+    /* Retrieve the phandle list property */
+    list = dt_get_property(np, list_name, &size);
+    if ( !list )
+        return -ENOENT;
+    list_end = list + size / sizeof(*list);
+
+    /* Loop over the phandles until all the requested entry is found */
+    while ( list < list_end )
+    {
+        rc = -EINVAL;
+        count = 0;
+
+        /*
+         * If phandle is 0, then it is an empty entry with no
+         * arguments.  Skip forward to the next entry.
+         * */
+        phandle = be32_to_cpup(list++);
+        if ( phandle )
+        {
+            /*
+             * Find the provider node and parse the #*-cells
+             * property to determine the argument length.
+             *
+             * This is not needed if the cell count is hard-coded
+             * (i.e. cells_name not set, but cell_count is set),
+             * except when we're going to return the found node
+             * below.
+             */
+            if ( cells_name || cur_index == index )
+            {
+                node = dt_find_node_by_phandle(phandle);
+                if ( !node )
+                {
+                    dt_printk(XENLOG_ERR "%s: could not find phandle\n",
+                              np->full_name);
+                    goto err;
+                }
+            }
+
+            if ( cells_name )
+            {
+                if ( !dt_property_read_u32(node, cells_name, &count) )
+                {
+                    dt_printk("%s: could not get %s for %s\n",
+                              np->full_name, cells_name, node->full_name);
+                    goto err;
+                }
+            }
+            else
+                count = cell_count;
+
+            /*
+             * Make sure that the arguments actually fit in the
+             * remaining property data length
+             */
+            if ( list + count > list_end )
+            {
+                dt_printk(XENLOG_ERR "%s: arguments longer than property\n",
+                          np->full_name);
+                goto err;
+            }
+        }
+
+        /*
+         * All of the error cases above bail out of the loop, so at
+         * this point, the parsing is successful. If the requested
+         * index matches, then fill the out_args structure and return,
+         * or return -ENOENT for an empty entry.
+         */
+        rc = -ENOENT;
+        if ( cur_index == index )
+        {
+            if (!phandle)
+                goto err;
+
+            if ( out_args )
+            {
+                int i;
+
+                WARN_ON(count > MAX_PHANDLE_ARGS);
+                if (count > MAX_PHANDLE_ARGS)
+                    count = MAX_PHANDLE_ARGS;
+                out_args->np = node;
+                out_args->args_count = count;
+                for ( i = 0; i < count; i++ )
+                    out_args->args[i] = be32_to_cpup(list++);
+            }
+
+            /* Found it! return success */
+            return 0;
+        }
+
+        node = NULL;
+        list += count;
+        cur_index++;
+    }
+
+    /*
+     * Returning result will be one of:
+     * -ENOENT : index is for empty phandle
+     * -EINVAL : parsing error on data
+     * [1..n]  : Number of phandle (count mode; when index = -1)
+     */
+    rc = index < 0 ? cur_index : -ENOENT;
+err:
+    return rc;
+}
+
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+				                        const char *phandle_name, int index)
+{
+	struct dt_phandle_args args;
+
+	if (index < 0)
+		return NULL;
+
+	if (__dt_parse_phandle_with_args(np, phandle_name, NULL, 0,
+					 index, &args))
+		return NULL;
+
+	return args.np;
+}
+
+
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args)
+{
+    if ( index < 0 )
+        return -EINVAL;
+    return __dt_parse_phandle_with_args(np, list_name, cells_name, 0,
+                                        index, out_args);
+}
+
 /**
  * unflatten_dt_node - Alloc and populate a device_node from the flat tree
  * @fdt: The parent device tree blob
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 7c075d9..d429e60 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -112,6 +112,13 @@ struct dt_device_node {
 
 };
 
+#define MAX_PHANDLE_ARGS 16
+struct dt_phandle_args {
+    struct dt_device_node *np;
+    int args_count;
+    uint32_t args[MAX_PHANDLE_ARGS];
+};
+
 /**
  * IRQ line type.
  *
@@ -621,6 +628,53 @@ void dt_set_range(__be32 **cellp, const struct dt_device_node *np,
 void dt_get_range(const __be32 **cellp, const struct dt_device_node *np,
                   u64 *address, u64 *size);
 
+/**
+ * dt_parse_phandle - Resolve a phandle property to a device_node pointer
+ * @np: Pointer to device node holding phandle property
+ * @phandle_name: Name of property holding a phandle value
+ * @index: For properties holding a table of phandles, this is the index into
+ *         the table
+ *
+ * Returns the device_node pointer, or NULL on error.
+ */
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name,
+                                        int index);
+
+/**
+ * dt_parse_phandle_with_args() - Find a node pointed to by a phandle in a list
+ * @np:	pointer to a device tree node containing a list
+ * @list_name: property name that contains a list
+ * @cells_name: property name that specifies phandles' arguments count
+ * @index: index of a phandle to parse out
+ * @out_args: optional pointer to output arguments structure (will be filled)
+ *
+ * This function is useful for parsing lists of phandles and their arguments.
+ * Returns 0 on success and fills out_args; on error, returns an appropriate
+ * errno value.
+ *
+ * Example:
+ *
+ * phandle1: node1 {
+ * 	#list-cells = <2>;
+ * }
+ *
+ * phandle2: node2 {
+ * 	#list-cells = <1>;
+ * }
+ *
+ * node3 {
+ * 	list = <&phandle1 1 2 &phandle2 3>;
+ * }
+ *
+ * To get a device_node of the `node2' node you may call this:
+ * dt_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);
+ */
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args);
+
 #endif /* __XEN_DEVICE_TREE_H */
 
 /*
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:08 +0000
Message-Id: <1391794991-5919-10-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: Joseph Cihula <joseph.cihula@intel.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, patches@linaro.org,
	Shane Wang <shane.wang@intel.com>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Gang Wei <gang.wei@intel.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 09/12] xen/passthrough: iommu: Introduce
	arch specific code

Currently the structure hvm_iommu (xen/include/xen/hvm/iommu.h) contains
x86-specific fields.

This patch creates:
    - an arch_hvm_iommu structure, which will contain the
    architecture-dependent fields
    - arch_iommu_domain_{init,destroy} functions to execute arch-specific
    code during domain creation/destruction

Also move iommu_use_hap_pt and domain_hvm_iommu into asm-x86/iommu.h.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Joseph Cihula <joseph.cihula@intel.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: Shane Wang <shane.wang@intel.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +--
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +++++++++----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +++++++++---------
 xen/drivers/passthrough/iommu.c             |   36 +++---------
 xen/drivers/passthrough/iommu_x86.c         |   41 ++++++++++++++
 xen/drivers/passthrough/vtd/iommu.c         |   80 +++++++++++++--------------
 xen/include/asm-x86/hvm/iommu.h             |   29 ++++++++++
 xen/include/asm-x86/iommu.h                 |    4 ++
 xen/include/xen/hvm/iommu.h                 |   26 +--------
 xen/include/xen/iommu.h                     |    8 +--
 14 files changed, 191 insertions(+), 163 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26635ff..e55d9d5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -745,7 +745,7 @@ long arch_do_domctl(
                    "ioport_map:add: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
 
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if (g2m_ioport->mport == fmp )
                 {
                     g2m_ioport->gport = fgp;
@@ -764,7 +764,7 @@ long arch_do_domctl(
                 g2m_ioport->gport = fgp;
                 g2m_ioport->mport = fmp;
                 g2m_ioport->np = np;
-                list_add_tail(&g2m_ioport->list, &hd->g2m_ioport_list);
+                list_add_tail(&g2m_ioport->list, &hd->arch.g2m_ioport_list);
             }
             if ( !ret )
                 ret = ioports_permit_access(d, fmp, fmp + np - 1);
@@ -779,7 +779,7 @@ long arch_do_domctl(
             printk(XENLOG_G_INFO
                    "ioport_map:remove: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if ( g2m_ioport->mport == fmp )
                 {
                     list_del(&g2m_ioport->list);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf6309d..ddb03f8 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -451,7 +451,7 @@ int dpci_ioport_intercept(ioreq_t *p)
     unsigned int s = 0, e = 0;
     int rc;
 
-    list_for_each_entry( g2m_ioport, &hd->g2m_ioport_list, list )
+    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
     {
         s = g2m_ioport->gport;
         e = s + g2m_ioport->np;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index ccde4a0..c40fe12 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,7 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         if ( !is_idle_domain(d) )
         {
             struct hvm_iommu *hd = domain_hvm_iommu(d);
-            update_iommu_mac(&ctx, hd->pgd_maddr, agaw_to_level(hd->agaw));
+            update_iommu_mac(&ctx, hd->arch.pgd_maddr,
+                             agaw_to_level(hd->arch.agaw));
         }
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index d27bd3c..f39bd9d 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -355,7 +355,7 @@ static void _amd_iommu_flush_pages(struct domain *d,
     unsigned long flags;
     struct amd_iommu *iommu;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
-    unsigned int dom_id = hd->domain_id;
+    unsigned int dom_id = hd->arch.domain_id;
 
     /* send INVALIDATE_IOMMU_PAGES command */
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 477de20..bd31bb5 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -60,12 +60,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return domain_hvm_iommu(d)->g_iommu;
+    return domain_hvm_iommu(d)->arch.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return domain_hvm_iommu(v->domain)->g_iommu;
+    return domain_hvm_iommu(v->domain)->arch.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -886,7 +886,7 @@ int guest_iommu_init(struct domain* d)
 
     guest_iommu_reg_init(iommu);
     iommu->domain = d;
-    hd->g_iommu = iommu;
+    hd->arch.g_iommu = iommu;
 
     tasklet_init(&iommu->cmd_buffer_tasklet,
                  guest_iommu_process_command, (unsigned long)d);
@@ -907,7 +907,7 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    domain_hvm_iommu(d)->g_iommu = NULL;
+    domain_hvm_iommu(d)->arch.g_iommu = NULL;
 }
 
 static int guest_iommu_mmio_range(struct vcpu *v, unsigned long addr)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 1294561..be34e90 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -344,7 +344,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     struct hvm_iommu *hd = domain_hvm_iommu(d);
     bool_t ok = 0;
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     next_level = merge_level - 1;
 
@@ -398,7 +398,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     unsigned long first_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(pt_mfn);
     pde = table + pfn_to_pde_idx(gfn, merge_level);
@@ -448,8 +448,8 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     struct page_info *table;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    table = hd->root_table;
-    level = hd->paging_mode;
+    table = hd->arch.root_table;
+    level = hd->arch.paging_mode;
 
     BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 || 
             level > IOMMU_PAGING_MODE_LEVEL_6 );
@@ -557,11 +557,11 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    level = hd->paging_mode;
-    old_root = hd->root_table;
+    level = hd->arch.paging_mode;
+    old_root = hd->arch.root_table;
     offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
 
-    ASSERT(spin_is_locked(&hd->mapping_lock) && is_hvm_domain(d));
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
 
     while ( offset >= PTE_PER_TABLE_SIZE )
     {
@@ -587,8 +587,8 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
     if ( new_root != NULL )
     {
-        hd->paging_mode = level;
-        hd->root_table = new_root;
+        hd->arch.paging_mode = level;
+        hd->arch.root_table = new_root;
 
         if ( !spin_is_locked(&pcidevs_lock) )
             AMD_IOMMU_DEBUG("%s Try to access pdev_list "
@@ -613,9 +613,9 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
                 /* valid = 0 only works for dom0 passthrough mode */
                 amd_iommu_set_root_page_table((u32 *)device_entry,
-                                              page_to_maddr(hd->root_table),
-                                              hd->domain_id,
-                                              hd->paging_mode, 1);
+                                              page_to_maddr(hd->arch.root_table),
+                                              hd->arch.domain_id,
+                                              hd->arch.paging_mode, 1);
 
                 amd_iommu_flush_device(iommu, req_id);
                 bdf += pdev->phantom_stride;
@@ -638,14 +638,14 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     unsigned long pt_mfn[7];
     unsigned int merge_level;
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -653,7 +653,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -662,7 +662,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -684,7 +684,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         amd_iommu_flush_pages(d, gfn, 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
-          merge_level <= hd->paging_mode; merge_level++ )
+          merge_level <= hd->arch.paging_mode; merge_level++ )
     {
         if ( pt_mfn[merge_level] == 0 )
             break;
@@ -697,7 +697,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
                                flags, merge_level) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
                             "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
             domain_crash(d);
@@ -706,7 +706,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     }
 
 out:
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -715,14 +715,14 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     unsigned long pt_mfn[7];
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -730,7 +730,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -739,7 +739,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -747,7 +747,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     /* mark PTE as 'page not present' */
     clear_iommu_pte_present(pt_mfn[1], gfn);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 
     amd_iommu_flush_pages(d, gfn, 0);
 
@@ -792,13 +792,13 @@ void amd_iommu_share_p2m(struct domain *d)
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
     p2m_table = mfn_to_page(mfn_x(pgd_mfn));
 
-    if ( hd->root_table != p2m_table )
+    if ( hd->arch.root_table != p2m_table )
     {
-        free_amd_iommu_pgtable(hd->root_table);
-        hd->root_table = p2m_table;
+        free_amd_iommu_pgtable(hd->arch.root_table);
+        hd->arch.root_table = p2m_table;
 
         /* When sharing p2m with iommu, paging mode = 4 */
-        hd->paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
+        hd->arch.paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
         AMD_IOMMU_DEBUG("Share p2m table with iommu: p2m table = %#lx\n",
                         mfn_x(pgd_mfn));
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c26aabc..0c3cd3e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -120,7 +120,8 @@ static void amd_iommu_setup_domain_device(
 
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
 
-    BUG_ON( !hd->root_table || !hd->paging_mode || !iommu->dev_table.buffer );
+    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+            !iommu->dev_table.buffer );
 
     if ( iommu_passthrough && (domain->domain_id == 0) )
         valid = 0;
@@ -138,8 +139,8 @@ static void amd_iommu_setup_domain_device(
     {
         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            (u32 *)dte, page_to_maddr(hd->root_table), hd->domain_id,
-            hd->paging_mode, valid);
+            (u32 *)dte, page_to_maddr(hd->arch.root_table), hd->arch.domain_id,
+            hd->arch.paging_mode, valid);
 
         if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
@@ -151,8 +152,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->root_table),
-                        hd->domain_id, hd->paging_mode);
+                        page_to_maddr(hd->arch.root_table),
+                        hd->arch.domain_id, hd->arch.paging_mode);
     }
 
     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -225,17 +226,17 @@ int __init amd_iov_detect(void)
 static int allocate_domain_resources(struct hvm_iommu *hd)
 {
     /* allocate root table */
-    spin_lock(&hd->mapping_lock);
-    if ( !hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( !hd->arch.root_table )
     {
-        hd->root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->root_table )
+        hd->arch.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.root_table )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             return -ENOMEM;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -262,18 +263,18 @@ static int amd_iommu_domain_init(struct domain *d)
     /* allocate page directroy */
     if ( allocate_domain_resources(hd) != 0 )
     {
-        if ( hd->root_table )
-            free_domheap_page(hd->root_table);
+        if ( hd->arch.root_table )
+            free_domheap_page(hd->arch.root_table);
         return -ENOMEM;
     }
 
     /* For pv and dom0, stick with get_paging_mode(max_page)
      * For HVM dom0, use 2 level page table at first */
-    hd->paging_mode = is_hvm_domain(d) ?
+    hd->arch.paging_mode = is_hvm_domain(d) ?
                       IOMMU_PAGING_MODE_LEVEL_2 :
                       get_paging_mode(max_page);
 
-    hd->domain_id = d->domain_id;
+    hd->arch.domain_id = d->domain_id;
 
     guest_iommu_init(d);
 
@@ -333,8 +334,8 @@ void amd_iommu_disable_domain_device(struct domain *domain,
 
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
-                        req_id,  domain_hvm_iommu(domain)->domain_id,
-                        domain_hvm_iommu(domain)->paging_mode);
+                        req_id,  domain_hvm_iommu(domain)->arch.domain_id,
+                        domain_hvm_iommu(domain)->arch.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -374,7 +375,7 @@ static int reassign_device(struct domain *source, struct domain *target,
 
     /* IO page tables might be destroyed after pci-detach the last device
      * In this case, we have to re-allocate root table for next pci-attach.*/
-    if ( t->root_table == NULL )
+    if ( t->arch.root_table == NULL )
         allocate_domain_resources(t);
 
     amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
@@ -456,13 +457,13 @@ static void deallocate_iommu_page_tables(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    if ( hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( hd->arch.root_table )
     {
-        deallocate_next_page_table(hd->root_table, hd->paging_mode);
-        hd->root_table = NULL;
+        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
+        hd->arch.root_table = NULL;
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 
@@ -593,11 +594,11 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
 
-    if ( !hd->root_table ) 
+    if ( !hd->arch.root_table ) 
         return;
 
-    printk("p2m table has %d levels\n", hd->paging_mode);
-    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
 }
 
 const struct iommu_ops amd_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index d733878..2346da9 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -117,10 +117,11 @@ static void __init parse_iommu_param(char *s)
 int iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
+    int ret = 0;
 
-    spin_lock_init(&hd->mapping_lock);
-    INIT_LIST_HEAD(&hd->g2m_ioport_list);
-    INIT_LIST_HEAD(&hd->mapped_rmrrs);
+    ret = arch_iommu_domain_init(d);
+    if ( ret )
+        return ret;
 
     if ( !iommu_enabled )
         return 0;
@@ -190,10 +191,7 @@ void iommu_teardown(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    struct hvm_iommu *hd  = domain_hvm_iommu(d);
-    struct list_head *ioport_list, *rmrr_list, *tmp;
-    struct g2m_ioport *ioport;
-    struct mapped_rmrr *mrmrr;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return;
@@ -201,20 +199,8 @@ void iommu_domain_destroy(struct domain *d)
     if ( need_iommu(d) )
         iommu_teardown(d);
 
-    list_for_each_safe ( ioport_list, tmp, &hd->g2m_ioport_list )
-    {
-        ioport = list_entry(ioport_list, struct g2m_ioport, list);
-        list_del(&ioport->list);
-        xfree(ioport);
-    }
-
-    list_for_each_safe ( rmrr_list, tmp, &hd->mapped_rmrrs )
-    {
-        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
-        list_del(&mrmrr->list);
-        xfree(mrmrr);
-    }
-}
+    arch_iommu_domain_destroy(d);
+}
 
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
@@ -328,14 +314,6 @@ void iommu_suspend()
         ops->suspend();
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-
-    if ( iommu_enabled && is_hvm_domain(d) )
-        ops->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/drivers/passthrough/iommu_x86.c b/xen/drivers/passthrough/iommu_x86.c
index bd3c23b..c137cef 100644
--- a/xen/drivers/passthrough/iommu_x86.c
+++ b/xen/drivers/passthrough/iommu_x86.c
@@ -55,6 +55,47 @@ int __init iommu_setup_hpet_msi(struct msi_desc *msi)
     return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
 }
 
+void iommu_share_p2m_table(struct domain* d)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+
+    if ( iommu_enabled && is_hvm_domain(d) )
+        ops->share_p2m(d);
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    spin_lock_init(&hd->arch.mapping_lock);
+    INIT_LIST_HEAD(&hd->arch.g2m_ioport_list);
+    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
+
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct list_head *ioport_list, *rmrr_list, *tmp;
+    struct g2m_ioport *ioport;
+    struct mapped_rmrr *mrmrr;
+
+    list_for_each_safe ( ioport_list, tmp, &hd->arch.g2m_ioport_list )
+    {
+        ioport = list_entry(ioport_list, struct g2m_ioport, list);
+        list_del(&ioport->list);
+        xfree(ioport);
+    }
+
+    list_for_each_safe ( rmrr_list, tmp, &hd->arch.mapped_rmrrs )
+    {
+        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
+        list_del(&mrmrr->list);
+        xfree(mrmrr);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index faa794b..a7a5253 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -249,16 +249,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
-    int addr_width = agaw_to_width(hd->agaw);
+    int addr_width = agaw_to_width(hd->arch.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->agaw);
+    int level = agaw_to_level(hd->arch.agaw);
     int offset;
     u64 pte_maddr = 0, maddr;
     u64 *vaddr = NULL;
 
     addr &= (((u64)1) << addr_width) - 1;
-    ASSERT(spin_is_locked(&hd->mapping_lock));
-    if ( hd->pgd_maddr == 0 )
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+    if ( hd->arch.pgd_maddr == 0 )
     {
         /*
          * just get any passthrough device in the domainr - assume user
@@ -266,11 +266,11 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
          */
         pdev = pci_get_pdev_by_domain(domain, -1, -1, -1);
         drhd = acpi_find_matched_drhd_unit(pdev);
-        if ( !alloc || ((hd->pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
+        if ( !alloc || ((hd->arch.pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
             goto out;
     }
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -580,7 +580,7 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     {
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -622,12 +622,12 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     u64 pg_maddr;
     struct mapped_rmrr *mrmrr;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
     pg_maddr = addr_to_dma_page_maddr(domain, addr, 0);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return;
     }
 
@@ -636,13 +636,13 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
 
     if ( !dma_pte_present(*pte) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return;
     }
 
     dma_clear_pte(*pte);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -653,8 +653,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     /* if the cleared address is between mapped RMRR region,
      * remove the mapped RMRR
      */
-    spin_lock(&hd->mapping_lock);
-    list_for_each_entry ( mrmrr, &hd->mapped_rmrrs, list )
+    spin_lock(&hd->arch.mapping_lock);
+    list_for_each_entry ( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( addr >= mrmrr->base && addr <= mrmrr->end )
         {
@@ -663,7 +663,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
             break;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1248,7 +1248,7 @@ static int intel_iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    hd->agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    hd->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
 }
@@ -1345,16 +1345,16 @@ int domain_context_mapping_one(
     }
     else
     {
-        spin_lock(&hd->mapping_lock);
+        spin_lock(&hd->arch.mapping_lock);
 
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->pgd_maddr == 0 )
+        if ( hd->arch.pgd_maddr == 0 )
         {
             addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->pgd_maddr == 0 )
+            if ( hd->arch.pgd_maddr == 0 )
             {
             nomem:
-                spin_unlock(&hd->mapping_lock);
+                spin_unlock(&hd->arch.mapping_lock);
                 spin_unlock(&iommu->lock);
                 unmap_vtd_domain_page(context_entries);
                 return -ENOMEM;
@@ -1362,7 +1362,7 @@ int domain_context_mapping_one(
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->pgd_maddr;
+        pgd_maddr = hd->arch.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1380,7 +1380,7 @@ int domain_context_mapping_one(
         else
             context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
 
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
     }
 
     if ( context_set_domain_id(context, domain, iommu) )
@@ -1406,7 +1406,7 @@ int domain_context_mapping_one(
         iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
     }
 
-    set_bit(iommu->index, &hd->iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
 
@@ -1652,7 +1652,7 @@ static int domain_context_unmap(
         struct hvm_iommu *hd = domain_hvm_iommu(domain);
         int iommu_domid;
 
-        clear_bit(iommu->index, &hd->iommu_bitmap);
+        clear_bit(iommu->index, &hd->arch.iommu_bitmap);
 
         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1711,10 +1711,10 @@ static void iommu_domain_teardown(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    iommu_free_pagetable(hd->pgd_maddr, agaw_to_level(hd->agaw));
-    hd->pgd_maddr = 0;
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
+    hd->arch.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int intel_iommu_map_page(
@@ -1733,12 +1733,12 @@ static int intel_iommu_map_page(
     if ( iommu_passthrough && (d->domain_id == 0) )
         return 0;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -1755,14 +1755,14 @@ static int intel_iommu_map_page(
 
     if ( old.val == new.val )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -1796,7 +1796,7 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
         for_each_drhd_unit ( drhd )
         {
             iommu = drhd->iommu;
-            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+            if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
                 continue;
 
             flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -1837,7 +1837,7 @@ static void iommu_set_pgd(struct domain *d)
         return;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    hd->pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    hd->arch.pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
 static int rmrr_identity_mapping(struct domain *d,
@@ -1852,10 +1852,10 @@ static int rmrr_identity_mapping(struct domain *d,
     ASSERT(rmrr->base_address < rmrr->end_address);
 
     /*
-     * No need to acquire hd->mapping_lock, as the only theoretical race is
+     * No need to acquire hd->arch.mapping_lock, as the only theoretical race is
      * with the insertion below (impossible due to holding pcidevs_lock).
      */
-    list_for_each_entry( mrmrr, &hd->mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1880,9 +1880,9 @@ static int rmrr_identity_mapping(struct domain *d,
         return -ENOMEM;
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
-    spin_lock(&hd->mapping_lock);
-    list_add_tail(&mrmrr->list, &hd->mapped_rmrrs);
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    spin_unlock(&hd->arch.mapping_lock);
 
     return 0;
 }
@@ -2427,8 +2427,8 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;
 
     hd = domain_hvm_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
-    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
+    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
 }
 
 const struct iommu_ops intel_iommu_ops = {
diff --git a/xen/include/asm-x86/hvm/iommu.h b/xen/include/asm-x86/hvm/iommu.h
index d488edf..a3f83d0 100644
--- a/xen/include/asm-x86/hvm/iommu.h
+++ b/xen/include/asm-x86/hvm/iommu.h
@@ -39,4 +39,33 @@ static inline int iommu_hardware_setup(void)
     return 0;
 }
 
+struct g2m_ioport {
+    struct list_head list;
+    unsigned int gport;
+    unsigned int mport;
+    unsigned int np;
+};
+
+struct mapped_rmrr {
+    struct list_head list;
+    u64 base;
+    u64 end;
+};
+
+struct arch_hvm_iommu
+{
+    u64 pgd_maddr;                 /* io page directory machine address */
+    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
+    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
+    /* amd iommu support */
+    int domain_id;
+    int paging_mode;
+    struct page_info *root_table;
+    struct guest_iommu *g_iommu;
+
+    struct list_head g2m_ioport_list;   /* guest to machine ioport mapping */
+    struct list_head mapped_rmrrs;
+    spinlock_t mapping_lock;            /* io page table lock */
+};
+
 #endif /* __ASM_X86_HVM_IOMMU_H__ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 34c1896..021cd80 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -19,6 +19,10 @@
 
 #include <asm/msi.h>
 
+/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
+#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
+#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
+
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
 int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
 void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 2abb4e3..f8f8a93 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -23,32 +23,8 @@
 #include <xen/iommu.h>
 #include <asm/hvm/iommu.h>
 
-struct g2m_ioport {
-    struct list_head list;
-    unsigned int gport;
-    unsigned int mport;
-    unsigned int np;
-};
-
-struct mapped_rmrr {
-    struct list_head list;
-    u64 base;
-    u64 end;
-};
-
 struct hvm_iommu {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;       /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int domain_id;
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    struct arch_hvm_iommu arch;
 
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 60df9d6..9a69b76 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -35,11 +35,6 @@ extern bool_t iommu_hap_pt_share;
 extern bool_t iommu_debug;
 extern bool_t amd_iommu_perdev_intremap;
 
-/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
-#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
-
-#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
@@ -55,6 +50,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+void arch_iommu_domain_destroy(struct domain *d);
+int arch_iommu_domain_init(struct domain *d);
+
 /* Function used internally, use iommu_domain_destroy */
 void iommu_teardown(struct domain *d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSd-0005EE-UI; Fri, 07 Feb 2014 17:43:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSc-0005DW-RP
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:31 +0000
Received: from [85.158.137.68:6080] by server-1.bemta-3.messagelabs.com id
	0C/5D-17293-14B15F25; Fri, 07 Feb 2014 17:43:29 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391795009!394843!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23698 invoked from network); 7 Feb 2014 17:43:29 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:29 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so1688257eek.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:29 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=XWqALXRj576Gk9ZaYAsA1jX5pC5564URrjKEcU85iMQ=;
	b=jiQL5Ct2WhMj5jAE65B7HDoyhFR4LRb3mrdf3Shs0W6FzdnLubTsLk8b28E191MWC3
	Onu7YRWzz6Rd1Fe4etQeZomNpRTY8o2ffyRLUC/wCvQodATnNBbSfB54ClD4qP0A0cTM
	LJdmJ1QpAfS4PIwYlEbQqCuSB4/viw58q3l9Yaiw3szMv7nNgHAIR4U6xaCH7MQaJGkA
	6GQHqAFzTvksYkMWt3qehb4wq+gmO3SxBuj0MRwV8GDuWHB2PfakSjHFVBkO6Syi8wqQ
	h72V9ouAqgQQlcjmJiGLNM6mFncFMu3YB+s2nHru3tE8IdzmgXGFqny815E29xdrTJHV
	pDXA==
X-Gm-Message-State: ALoCoQlhlmkeJAgXFwch/IsGWXwpPExru1D33v6DOtYzIkcveA9wTJk0Osgx/aQJSD6vx2A89++0
X-Received: by 10.14.203.197 with SMTP id f45mr4543645eeo.90.1391795008955;
	Fri, 07 Feb 2014 09:43:28 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.27
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:28 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:03 +0000
Message-Id: <1391794991-5919-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [RFC for-4.5 04/12] xen/dts: Add dt_property_read_bool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function checks whether a property exists in a given node.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/common/device_tree.c      |    6 ++----
 xen/include/xen/device_tree.h |   21 +++++++++++++++++++++
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index c66d1d5..ccdb7ff 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -512,10 +512,8 @@ static void __init *unflatten_dt_alloc(unsigned long *mem, unsigned long size,
 }
 
 /* Find a property with a given name for a given node and return it. */
-static const struct dt_property *
-dt_find_property(const struct dt_device_node *np,
-                 const char *name,
-                 u32 *lenp)
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp)
 {
     const struct dt_property *pp;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 9a8c3de..7c075d9 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #include <xen/init.h>
 #include <xen/string.h>
 #include <xen/types.h>
+#include <xen/stdbool.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -347,6 +348,10 @@ struct dt_device_node *dt_find_compatible_node(struct dt_device_node *from,
 const void *dt_get_property(const struct dt_device_node *np,
                             const char *name, u32 *lenp);
 
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp);
+
+
 /**
  * dt_property_read_u32 - Helper to read a u32 property.
  * @np: node to get the value
@@ -369,6 +374,22 @@ bool_t dt_property_read_u64(const struct dt_device_node *np,
                             const char *name, u64 *out_value);
 
 /**
+ * dt_property_read_bool - Check if a property exists
+ * @np: node to get the value
+ * @name: name of the property
+ *
+ * Search for a property in a device node.
+ * Return true if the property exists, false otherwise.
+ */
+static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
+                                           const char *name)
+{
+    const struct dt_property *prop = dt_find_property(np, name, NULL);
+
+    return prop ? true : false;
+}
+
+/**
  * dt_property_read_string - Find and read a string from a property
  * @np:         Device node from which the property value is to be read
  * @propname:   Name of the property to be searched
-- 
1.7.10.4


From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSn-0005Iq-1Z; Fri, 07 Feb 2014 17:43:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSl-0005H9-26
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:39 +0000
Received: from [85.158.137.68:6636] by server-17.bemta-3.messagelabs.com id
	36/7A-22569-A4B15F25; Fri, 07 Feb 2014 17:43:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391795015!393528!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7063 invoked from network); 7 Feb 2014 17:43:35 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:35 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so1690144eei.12
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=aU6mvXVuT0Ab5g154PdFlfENa85BzWy1cX6zXgiDPDI=;
	b=POOOBOOyudb4CV+zM+FGBwCwpdDEO3T+RCjnW4csed1zkBM9ucVvg2j9Lup2PmdYKh
	pzYZ2VoiE5ggDkDp9FekvuANrko+NNw/9CrjmEwDSrvVn6QQe3gu+bv5GC8zRXYYhMES
	LtR/uax169z1JZulW6FYARSCZgMu64fb7LQjRW8CFMoYMCIjsdcpr6zCaYF0o2N2Q6eL
	iQ3P3H9U35K49V348SMnXlGdvtzVhNZm0kEN5e0pkKtqXGRX+na3Enmq8RXAnrcwhQU3
	HrdWHgyocu/tsz7kELaZq4kioPuMQQJkiVPJXSpkBWDSemNKl65UZMmXwbmhsOVY4J3a
	d0Bw==
X-Gm-Message-State: ALoCoQlnp5FRxXSXaqD8wTWVwlc/1xtaPEDy8oMpmWLWvAQUZ4X+e2waJ8siBIul394MO2DRx6YC
X-Received: by 10.14.221.4 with SMTP id q4mr17736010eep.47.1391795015488;
	Fri, 07 Feb 2014 09:43:35 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.33
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:34 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:07 +0000
Message-Id: <1391794991-5919-9-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
	generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
functions specific to x86 and PCI.

Split the framework into three distinct files:
    - iommu.c: contains generic functions shared between x86 and ARM
               (once ARM support is added)
    - iommu_pci.c: contains specific functions for PCI passthrough
    - iommu_x86.c: contains specific functions for x86

iommu_pci.c will only be compiled when the architecture supports PCI
(i.e. HAS_PCI is defined).

This patch mostly moves existing code into the new files.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/Makefile    |    6 +-
 xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
 xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu_x86.c |   65 +++++
 xen/drivers/passthrough/vtd/iommu.c |   42 ++--
 xen/include/asm-x86/iommu.h         |   46 ++++
 xen/include/xen/hvm/iommu.h         |    1 +
 xen/include/xen/iommu.h             |   42 ++--
 8 files changed, 625 insertions(+), 518 deletions(-)
 create mode 100644 xen/drivers/passthrough/iommu_pci.c
 create mode 100644 xen/drivers/passthrough/iommu_x86.c
 create mode 100644 xen/include/asm-x86/iommu.h

diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 7c40fa5..51e0a0d 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -3,5 +3,7 @@ subdir-$(x86) += amd
 subdir-$(x86_64) += x86
 
 obj-y += iommu.o
-obj-y += io.o
-obj-y += pci.o
+obj-$(x86) += iommu_x86.o
+obj-$(HAS_PCI) += iommu_pci.o
+obj-$(x86) += io.o
+obj-$(HAS_PCI) += pci.o
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 0a26956..d733878 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -24,7 +24,6 @@
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
-static int iommu_populate_page_table(struct domain *d);
 static void iommu_dump_p2m_table(unsigned char key);
 
 /*
@@ -180,86 +179,7 @@ void __init iommu_dom0_init(struct domain *d)
     return hd->platform_ops->dom0_init(d);
 }
 
-int iommu_add_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    int rc;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
-    if ( rc || !pdev->phantom_stride )
-        return rc;
-
-    for ( devfn = pdev->devfn ; ; )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            return 0;
-        rc = hd->platform_ops->add_device(devfn, pdev);
-        if ( rc )
-            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
-                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-    }
-}
-
-int iommu_enable_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops ||
-         !hd->platform_ops->enable_device )
-        return 0;
-
-    return hd->platform_ops->enable_device(pdev);
-}
-
-int iommu_remove_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
-    {
-        int rc;
-
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->remove_device(devfn, pdev);
-        if ( !rc )
-            continue;
-
-        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
-               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-        return rc;
-    }
-
-    return hd->platform_ops->remove_device(pdev->devfn, pdev);
-}
-
-static void iommu_teardown(struct domain *d)
+void iommu_teardown(struct domain *d)
 {
     const struct hvm_iommu *hd = domain_hvm_iommu(d);
 
@@ -268,151 +188,6 @@ static void iommu_teardown(struct domain *d)
     tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
-/*
- * If the device isn't owned by dom0, it means it already
- * has been assigned to other domain, or it doesn't exist.
- */
-static int device_assigned(u16 seg, u8 bus, u8 devfn)
-{
-    struct pci_dev *pdev;
-
-    spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    spin_unlock(&pcidevs_lock);
-
-    return pdev ? 0 : -EBUSY;
-}
-
-static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int rc = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( unlikely(!need_iommu(d) &&
-            (d->arch.hvm_domain.mem_sharing_enabled ||
-             d->mem_event->paging.ring_page)) )
-        return -EXDEV;
-
-    if ( !spin_trylock(&pcidevs_lock) )
-        return -ERESTART;
-
-    if ( need_iommu(d) <= 0 )
-    {
-        if ( !iommu_use_hap_pt(d) )
-        {
-            rc = iommu_populate_page_table(d);
-            if ( rc )
-            {
-                spin_unlock(&pcidevs_lock);
-                return rc;
-            }
-        }
-        d->need_iommu = 1;
-    }
-
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    if ( !pdev )
-    {
-        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
-        goto done;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
-        goto done;
-
-    for ( ; pdev->phantom_stride; rc = 0 )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->assign_device(d, devfn, pdev);
-        if ( rc )
-            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
-                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   rc);
-    }
-
- done:
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-    spin_unlock(&pcidevs_lock);
-
-    return rc;
-}
-
-static int iommu_populate_page_table(struct domain *d)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct page_info *page;
-    int rc = 0, n = 0;
-
-    d->need_iommu = -1;
-
-    this_cpu(iommu_dont_flush_iotlb) = 1;
-    spin_lock(&d->page_alloc_lock);
-
-    if ( unlikely(d->is_dying) )
-        rc = -ESRCH;
-
-    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
-    {
-        if ( is_hvm_domain(d) ||
-            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
-        {
-            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
-            rc = hd->platform_ops->map_page(
-                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
-                IOMMUF_readable|IOMMUF_writable);
-            if ( rc )
-            {
-                page_list_add(page, &d->page_list);
-                break;
-            }
-        }
-        page_list_add_tail(page, &d->arch.relmem_list);
-        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
-             hypercall_preempt_check() )
-            rc = -ERESTART;
-    }
-
-    if ( !rc )
-    {
-        /*
-         * The expectation here is that generally there are many normal pages
-         * on relmem_list (the ones we put there) and only few being in an
-         * offline/broken state. The latter ones are always at the head of the
-         * list. Hence we first move the whole list, and then move back the
-         * first few entries.
-         */
-        page_list_move(&d->page_list, &d->arch.relmem_list);
-        while ( (page = page_list_first(&d->page_list)) != NULL &&
-                (page->count_info & (PGC_state|PGC_broken)) )
-        {
-            page_list_del(page, &d->page_list);
-            page_list_add_tail(page, &d->arch.relmem_list);
-        }
-    }
-
-    spin_unlock(&d->page_alloc_lock);
-    this_cpu(iommu_dont_flush_iotlb) = 0;
-
-    if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
-        iommu_teardown(d);
-
-    return rc;
-}
-
-
 void iommu_domain_destroy(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
@@ -499,53 +274,6 @@ void iommu_iotlb_flush_all(struct domain *d)
     hd->platform_ops->iotlb_flush_all(d);
 }
 
-/* caller should hold the pcidevs_lock */
-int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev = NULL;
-    int ret = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
-    if ( !pdev )
-        return -ENODEV;
-
-    while ( pdev->phantom_stride )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-        if ( !ret )
-            continue;
-
-        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
-               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
-        return ret;
-    }
-
-    devfn = pdev->devfn;
-    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-    if ( ret )
-    {
-        dprintk(XENLOG_G_ERR,
-                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
-                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-        return ret;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-
-    return ret;
-}
-
 int __init iommu_setup(void)
 {
     int rc = -ENODEV;
@@ -586,86 +314,6 @@ int __init iommu_setup(void)
     return rc;
 }
 
-static int iommu_get_device_group(
-    struct domain *d, u16 seg, u8 bus, u8 devfn,
-    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int group_id, sdev_id;
-    u32 bdf;
-    int i = 0;
-    const struct iommu_ops *ops = hd->platform_ops;
-
-    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
-        return 0;
-
-    group_id = ops->get_device_group_id(seg, bus, devfn);
-
-    spin_lock(&pcidevs_lock);
-    for_each_pdev( d, pdev )
-    {
-        if ( (pdev->seg != seg) ||
-             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
-            continue;
-
-        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
-            continue;
-
-        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
-        if ( (sdev_id == group_id) && (i < max_sdevs) )
-        {
-            bdf = 0;
-            bdf |= (pdev->bus & 0xff) << 16;
-            bdf |= (pdev->devfn & 0xff) << 8;
-
-            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
-            {
-                spin_unlock(&pcidevs_lock);
-                return -1;
-            }
-            i++;
-        }
-    }
-    spin_unlock(&pcidevs_lock);
-
-    return i;
-}
-
-void iommu_update_ire_from_apic(
-    unsigned int apic, unsigned int reg, unsigned int value)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    ops->update_ire_from_apic(apic, reg, value);
-}
-
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
-}
-
-void iommu_read_msi_from_ire(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    if ( iommu_intremap )
-        ops->read_msi_from_ire(msi_desc, msg);
-}
-
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->read_apic_from_ire(apic, reg);
-}
-
-int __init iommu_setup_hpet_msi(struct msi_desc *msi)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
-}
-
 void iommu_resume()
 {
     const struct iommu_ops *ops = iommu_get_ops();
@@ -696,125 +344,6 @@ void iommu_crash_shutdown(void)
     iommu_enabled = iommu_intremap = 0;
 }
 
-int iommu_do_domctl(
-    struct xen_domctl *domctl, struct domain *d,
-    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
-{
-    u16 seg;
-    u8 bus, devfn;
-    int ret = 0;
-
-    if ( !iommu_enabled )
-        return -ENOSYS;
-
-    switch ( domctl->cmd )
-    {
-    case XEN_DOMCTL_get_device_group:
-    {
-        u32 max_sdevs;
-        XEN_GUEST_HANDLE_64(uint32) sdevs;
-
-        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.get_device_group.machine_sbdf >> 16;
-        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
-        max_sdevs = domctl->u.get_device_group.max_sdevs;
-        sdevs = domctl->u.get_device_group.sdev_array;
-
-        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
-        if ( ret < 0 )
-        {
-            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
-            ret = -EFAULT;
-            domctl->u.get_device_group.num_sdevs = 0;
-        }
-        else
-        {
-            domctl->u.get_device_group.num_sdevs = ret;
-            ret = 0;
-        }
-        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
-            ret = -EFAULT;
-    }
-    break;
-
-    case XEN_DOMCTL_test_assign_device:
-        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        if ( device_assigned(seg, bus, devfn) )
-        {
-            printk(XENLOG_G_INFO
-                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-            ret = -EINVAL;
-        }
-        break;
-
-    case XEN_DOMCTL_assign_device:
-        if ( unlikely(d->is_dying) )
-        {
-            ret = -EINVAL;
-            break;
-        }
-
-        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        ret = device_assigned(seg, bus, devfn) ?:
-              assign_device(d, seg, bus, devfn);
-        if ( ret == -ERESTART )
-            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
-                                                "h", u_domctl);
-        else if ( ret )
-            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
-                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    case XEN_DOMCTL_deassign_device:
-        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        spin_lock(&pcidevs_lock);
-        ret = deassign_device(d, seg, bus, devfn);
-        spin_unlock(&pcidevs_lock);
-        if ( ret )
-            printk(XENLOG_G_ERR
-                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    default:
-        ret = -ENOSYS;
-        break;
-    }
-
-    return ret;
-}
-
 static void iommu_dump_p2m_table(unsigned char key)
 {
     struct domain *d;
diff --git a/xen/drivers/passthrough/iommu_pci.c b/xen/drivers/passthrough/iommu_pci.c
new file mode 100644
index 0000000..5b9d937
--- /dev/null
+++ b/xen/drivers/passthrough/iommu_pci.c
@@ -0,0 +1,468 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xsm/xsm.h>
+
+static int iommu_populate_page_table(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct page_info *page;
+    int rc = 0, n = 0;
+
+    d->need_iommu = -1;
+
+    this_cpu(iommu_dont_flush_iotlb) = 1;
+    spin_lock(&d->page_alloc_lock);
+
+    if ( unlikely(d->is_dying) )
+        rc = -ESRCH;
+
+    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
+    {
+        if ( is_hvm_domain(d) ||
+            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
+        {
+            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
+            rc = hd->platform_ops->map_page(
+                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
+                IOMMUF_readable|IOMMUF_writable);
+            if ( rc )
+            {
+                page_list_add(page, &d->page_list);
+                break;
+            }
+        }
+        page_list_add_tail(page, &d->arch.relmem_list);
+        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
+             hypercall_preempt_check() )
+            rc = -ERESTART;
+    }
+
+    if ( !rc )
+    {
+        /*
+         * The expectation here is that generally there are many normal pages
+         * on relmem_list (the ones we put there) and only a few in an
+         * offline/broken state. The latter are always at the head of the
+         * list. Hence we first move the whole list, and then move back the
+         * first few entries.
+         */
+        page_list_move(&d->page_list, &d->arch.relmem_list);
+        while ( (page = page_list_first(&d->page_list)) != NULL &&
+                (page->count_info & (PGC_state|PGC_broken)) )
+        {
+            page_list_del(page, &d->page_list);
+            page_list_add_tail(page, &d->arch.relmem_list);
+        }
+    }
+
+    spin_unlock(&d->page_alloc_lock);
+    this_cpu(iommu_dont_flush_iotlb) = 0;
+
+    if ( !rc )
+        iommu_iotlb_flush_all(d);
+    else if ( rc != -ERESTART )
+        iommu_teardown(d);
+
+    return rc;
+}
+
+int iommu_add_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    int rc;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
+    if ( rc || !pdev->phantom_stride )
+        return rc;
+
+    for ( devfn = pdev->devfn ; ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            return 0;
+        rc = hd->platform_ops->add_device(devfn, pdev);
+        if ( rc )
+            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+    }
+}
+
+int iommu_enable_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops ||
+         !hd->platform_ops->enable_device )
+        return 0;
+
+    return hd->platform_ops->enable_device(pdev);
+}
+
+int iommu_remove_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
+    {
+        int rc;
+
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->remove_device(devfn, pdev);
+        if ( !rc )
+            continue;
+
+        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
+               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        return rc;
+    }
+
+    return hd->platform_ops->remove_device(pdev->devfn, pdev);
+}
+
+/*
+ * If the device isn't owned by dom0, it has already been
+ * assigned to another domain, or it doesn't exist.
+ */
+static int device_assigned(u16 seg, u8 bus, u8 devfn)
+{
+    struct pci_dev *pdev = NULL;
+
+    spin_lock(&pcidevs_lock);
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    spin_unlock(&pcidevs_lock);
+
+    return pdev ? 0 : -EBUSY;
+}
+
+static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int rc = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    /* Prevent device assignment if mem paging or mem sharing has been
+     * enabled for this domain. */
+    if ( unlikely(!need_iommu(d) &&
+            (mem_sharing_enabled(d) ||
+             d->mem_event->paging.ring_page)) )
+        return -EXDEV;
+
+    if ( !spin_trylock(&pcidevs_lock) )
+        return -ERESTART;
+
+    if ( need_iommu(d) <= 0 )
+    {
+        if ( !iommu_use_hap_pt(d) )
+        {
+            rc = iommu_populate_page_table(d);
+            if ( rc )
+            {
+                spin_unlock(&pcidevs_lock);
+                return rc;
+            }
+        }
+        d->need_iommu = 1;
+    }
+
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    if ( !pdev )
+    {
+        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
+        goto done;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
+        goto done;
+
+    for ( ; pdev->phantom_stride; rc = 0 )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->assign_device(d, devfn, pdev);
+        if ( rc )
+            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
+                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   rc);
+    }
+
+ done:
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+    spin_unlock(&pcidevs_lock);
+
+    return rc;
+}
+
+/* caller should hold the pcidevs_lock */
+int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev = NULL;
+    int ret = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
+    if ( !pdev )
+        return -ENODEV;
+
+    while ( pdev->phantom_stride )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+        if ( !ret )
+            continue;
+
+        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
+               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        return ret;
+    }
+
+    devfn = pdev->devfn;
+    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+    if ( ret )
+    {
+        dprintk(XENLOG_G_ERR,
+                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
+                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return ret;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+
+    return ret;
+}
+
+static int iommu_get_device_group(
+    struct domain *d, u16 seg, u8 bus, u8 devfn,
+    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int group_id, sdev_id;
+    u32 bdf;
+    int i = 0;
+    const struct iommu_ops *ops = hd->platform_ops;
+
+    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
+        return 0;
+
+    group_id = ops->get_device_group_id(seg, bus, devfn);
+
+    spin_lock(&pcidevs_lock);
+    for_each_pdev( d, pdev )
+    {
+        if ( (pdev->seg != seg) ||
+             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
+            continue;
+
+        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
+            continue;
+
+        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
+        if ( (sdev_id == group_id) && (i < max_sdevs) )
+        {
+            bdf = 0;
+            bdf |= (pdev->bus & 0xff) << 16;
+            bdf |= (pdev->devfn & 0xff) << 8;
+
+            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
+            {
+                spin_unlock(&pcidevs_lock);
+                return -1;
+            }
+            i++;
+        }
+    }
+
+    spin_unlock(&pcidevs_lock);
+
+    return i;
+}
+
+int iommu_do_domctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    u16 seg;
+    u8 bus, devfn;
+    int ret = 0;
+
+    if ( !iommu_enabled )
+        return -ENOSYS;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_get_device_group:
+    {
+        u32 max_sdevs;
+        XEN_GUEST_HANDLE_64(uint32) sdevs;
+
+        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.get_device_group.machine_sbdf >> 16;
+        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
+        max_sdevs = domctl->u.get_device_group.max_sdevs;
+        sdevs = domctl->u.get_device_group.sdev_array;
+
+        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
+        if ( ret < 0 )
+        {
+            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
+            ret = -EFAULT;
+            domctl->u.get_device_group.num_sdevs = 0;
+        }
+        else
+        {
+            domctl->u.get_device_group.num_sdevs = ret;
+            ret = 0;
+        }
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
+            ret = -EFAULT;
+    }
+    break;
+
+    case XEN_DOMCTL_test_assign_device:
+        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        if ( device_assigned(seg, bus, devfn) )
+        {
+            printk(XENLOG_G_INFO
+                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            ret = -EINVAL;
+        }
+        break;
+
+    case XEN_DOMCTL_assign_device:
+        if ( unlikely(d->is_dying) )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
+        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        ret = device_assigned(seg, bus, devfn) ?:
+              assign_device(d, seg, bus, devfn);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                                "h", u_domctl);
+        else if ( ret )
+            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
+                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    case XEN_DOMCTL_deassign_device:
+        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        spin_lock(&pcidevs_lock);
+        ret = deassign_device(d, seg, bus, devfn);
+        spin_unlock(&pcidevs_lock);
+        if ( ret )
+            printk(XENLOG_G_ERR
+                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/passthrough/iommu_x86.c b/xen/drivers/passthrough/iommu_x86.c
new file mode 100644
index 0000000..bd3c23b
--- /dev/null
+++ b/xen/drivers/passthrough/iommu_x86.c
@@ -0,0 +1,65 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xsm/xsm.h>
+
+void iommu_update_ire_from_apic(
+    unsigned int apic, unsigned int reg, unsigned int value)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    ops->update_ire_from_apic(apic, reg, value);
+}
+
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
+}
+
+void iommu_read_msi_from_ire(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    if ( iommu_intremap )
+        ops->read_msi_from_ire(msi_desc, msg);
+}
+
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->read_apic_from_ire(apic, reg);
+}
+
+int __init iommu_setup_hpet_msi(struct msi_desc *msi)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d5ce5b7..faa794b 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1784,31 +1784,31 @@ static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
                      int order, int present)
-{
-    struct acpi_drhd_unit *drhd;
-    struct iommu *iommu = NULL;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    int flush_dev_iotlb;
-    int iommu_domid;
+    {
+        struct acpi_drhd_unit *drhd;
+        struct iommu *iommu = NULL;
+        struct hvm_iommu *hd = domain_hvm_iommu(d);
+        int flush_dev_iotlb;
+        int iommu_domid;
 
-    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
+        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
-    for_each_drhd_unit ( drhd )
-    {
-        iommu = drhd->iommu;
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
-            continue;
+        for_each_drhd_unit ( drhd )
+        {
+            iommu = drhd->iommu;
+            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+                continue;
 
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_domid= domain_iommu_domid(d, iommu);
-        if ( iommu_domid == -1 )
-            continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                   (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
-            iommu_flush_write_buffer(iommu);
+            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+            iommu_domid= domain_iommu_domid(d, iommu);
+            if ( iommu_domid == -1 )
+                continue;
+            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       order, !present, flush_dev_iotlb) )
+                iommu_flush_write_buffer(iommu);
+        }
     }
-}
 
 static int vtd_ept_page_compatible(struct iommu *iommu)
 {
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
new file mode 100644
index 0000000..34c1896
--- /dev/null
+++ b/xen/include/asm-x86/iommu.h
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_X86_IOMMU_H__
+#define __ARCH_X86_IOMMU_H__
+
+#define MAX_IOMMUS 32
+
+#include <asm/msi.h>
+
+void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
+int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
+void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
+int iommu_setup_hpet_msi(struct msi_desc *);
+
+void iommu_share_p2m_table(struct domain *d);
+
+/* While VT-d specific, this must get declared in a generic header. */
+int adjust_vtd_irq_affinities(void);
+void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_supports_eim(void);
+int iommu_enable_x2apic_IR(void);
+void iommu_disable_x2apic_IR(void);
+void iommu_set_dom0_mapping(struct domain *d);
+
+#endif /* !__ARCH_X86_IOMMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 26539e0..2abb4e3 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -21,6 +21,7 @@
 #define __XEN_HVM_IOMMU_H__
 
 #include <xen/iommu.h>
+#include <asm/hvm/iommu.h>
 
 struct g2m_ioport {
     struct list_head list;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index fcbc432..60df9d6 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -25,6 +25,7 @@
 #include <xen/pci.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
+#include <asm/iommu.h>
 
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
@@ -39,17 +40,12 @@ extern bool_t amd_iommu_perdev_intremap;
 
 #define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
 
-#define MAX_IOMMUS 32
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
 #define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
 
 int iommu_setup(void);
-int iommu_supports_eim(void);
-int iommu_enable_x2apic_IR(void);
-void iommu_disable_x2apic_IR(void);
 
 int iommu_add_device(struct pci_dev *pdev);
 int iommu_enable_device(struct pci_dev *pdev);
@@ -59,6 +55,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+/* Internal function; callers should use iommu_domain_destroy(). */
+void iommu_teardown(struct domain *d);
+
 /* iommu_map_page() takes flags to direct the mapping operation. */
 #define _IOMMUF_readable 0
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
@@ -67,9 +66,8 @@ int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_domain_teardown(struct domain *d);
 
+#ifdef HAS_PCI
 void pt_pci_init(void);
 
 struct pirq;
@@ -84,62 +82,60 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 bool_t pt_irq_need_timer(uint32_t flags);
 
 #define PT_IRQ_TIME_OUT MILLISECS(8)
+#endif /* HAS_PCI */
 
+#ifdef CONFIG_X86
 struct msi_desc;
 struct msi_msg;
+#endif /* CONFIG_X86 */
+
 struct page_info;
 
 struct iommu_ops {
     int (*init)(struct domain *d);
     void (*dom0_init)(struct domain *d);
+#ifdef HAS_PCI
     int (*add_device)(u8 devfn, struct pci_dev *);
     int (*enable_device)(struct pci_dev *pdev);
     int (*remove_device)(u8 devfn, struct pci_dev *);
     int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
+    int (*reassign_device)(struct domain *s, struct domain *t,
+			   u8 devfn, struct pci_dev *);
+    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#endif /* HAS_PCI */
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
-    int (*reassign_device)(struct domain *s, struct domain *t,
-			   u8 devfn, struct pci_dev *);
-    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
     int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
     void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
+    void (*share_p2m)(struct domain *d);
+#endif /* CONFIG_X86 */
     void (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
-int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
-void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
-int iommu_setup_hpet_msi(struct msi_desc *);
-
 void iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 
-void iommu_set_dom0_mapping(struct domain *d);
-void iommu_share_p2m_table(struct domain *d);
-
+#ifdef HAS_PCI
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
+#endif
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
 
-/* While VT-d specific, this must get declared in a generic header. */
-int adjust_vtd_irq_affinities(void);
-
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
 * avoid unnecessary iotlb_flush in the low level IOMMU code.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:06 +0000
Message-Id: <1391794991-5919-8-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 07/12] xen/passthrough: iommu: Don't need
	to map dom0 page when the PT is shared

Currently, iommu_dom0_init iterates over the dom0 page list and calls the
map_page callback on each page.

In both the AMD and VT-d drivers, that callback returns immediately when the
page table is shared with the processor, so Xen can safely skip the walk over
the page list.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 26a5d91..0a26956 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -157,7 +157,7 @@ void __init iommu_dom0_init(struct domain *d)
 
     register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
-    if ( need_iommu(d) )
+    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
     {
         struct page_info *page;
         unsigned int i = 0;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:42:59 +0000
Message-Id: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [RFC for-4.5 00/12] IOMMU support for ARM

Hello,

This patch series adds support for IOMMU on ARM. It also adds an ARM SMMU
driver, which is used for instance on Midway.

The IOMMU architecture for ARM relies on the page table being shared between
the processor and each IOMMU.

The patch series is divided as follows:
    - #1: fix grant-table with IOMMU. Will be necessary for ARM later
    - #2-#3: make some VT-d functions static
    - #4-#5: add new device tree functions
    - #6-#9: prepare the IOMMU code to add support for ARM
    - #10-#11: add the IOMMU architecture for ARM
    - #12: add the SMMU driver

For now the 1:1 workaround is not removed, because a single platform can have
some DMA-capable devices behind an IOMMU and others not. This is a problem
for the swiotlb, which needs to know whether a device is protected when a
foreign mapping is mapped in dom0.

When I talked with Stefano, two solutions came up:
    - have a property in each "protected" device node
    - list the protected devices in the hypervisor node

I have not yet decided which solution I will use.

Any comments or questions are welcome.

Sincerely yours,

Julien Grall (12):
  xen/common: grant-table: only call IOMMU if paging mode translate is
    disabled
  xen/passthrough: vtd: Don't export iommu_domain_teardown
  xen/passthrough: vtd: Don't export iommu_set_pgd
  xen/dts: Add dt_property_read_bool
  xen/dts: Add dt_parse_phandle_with_args and dt_parse_phandle
  xen/passthrough: rework dom0_pvh_reqs to use it also on ARM
  xen/passthrough: iommu: Don't need to map dom0 page when the PT is
    shared
  xen/passthrough: iommu: Split generic IOMMU code
  xen/passthrough: iommu: Introduce arch specific code
  xen/passthrough: Introduce IOMMU ARM architure
  MAINTAINERS: Add drivers/passthrough/arm
  drivers/passthrough: arm: Add support for SMMU drivers

 MAINTAINERS                                 |    1 +
 xen/arch/arm/Rules.mk                       |    1 +
 xen/arch/arm/domain.c                       |    7 +
 xen/arch/arm/domain_build.c                 |    2 +
 xen/arch/arm/p2m.c                          |    4 +
 xen/arch/arm/setup.c                        |    2 +
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/common/device_tree.c                    |  157 ++-
 xen/common/grant_table.c                    |    7 +-
 xen/drivers/passthrough/Makefile            |    7 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +-
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +-
 xen/drivers/passthrough/arm/Makefile        |    2 +
 xen/drivers/passthrough/arm/iommu.c         |   65 +
 xen/drivers/passthrough/arm/smmu.c          | 1701 +++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu.c             |  525 +--------
 xen/drivers/passthrough/iommu_pci.c         |  468 ++++++++
 xen/drivers/passthrough/iommu_x86.c         |  106 ++
 xen/drivers/passthrough/vtd/iommu.c         |  124 +-
 xen/include/asm-arm/device.h                |    3 +-
 xen/include/asm-arm/domain.h                |    2 +
 xen/include/asm-arm/hvm/iommu.h             |   10 +
 xen/include/asm-arm/iommu.h                 |   36 +
 xen/include/asm-x86/hvm/iommu.h             |   29 +
 xen/include/asm-x86/iommu.h                 |   50 +
 xen/include/xen/device_tree.h               |   75 ++
 xen/include/xen/hvm/iommu.h                 |   27 +-
 xen/include/xen/iommu.h                     |   51 +-
 32 files changed, 2891 insertions(+), 701 deletions(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/drivers/passthrough/arm/smmu.c
 create mode 100644 xen/drivers/passthrough/iommu_pci.c
 create mode 100644 xen/drivers/passthrough/iommu_x86.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h
 create mode 100644 xen/include/asm-x86/iommu.h

-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:05 +0000
Message-Id: <1391794991-5919-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
	dom0_pvh_reqs to use it also on ARM

Dom0 on ARM will have the same requirements as a PVH dom0 when the IOMMU is
enabled. Both PVH and ARM guests have translated paging mode enabled, so Xen
can use that to decide whether it needs to check the requirements.

Rename the function and drop the word "pvh" from the panic messages.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/iommu.c |   14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 19b0e23..26a5d91 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
     return hd->platform_ops->init(d);
 }
 
-static __init void check_dom0_pvh_reqs(struct domain *d)
+static __init void check_dom0_reqs(struct domain *d)
 {
+    if ( !paging_mode_translate(d) )
+        return;
+
     if ( !iommu_enabled )
-        panic("Presently, iommu must be enabled for pvh dom0\n");
+        panic("Presently, iommu must be enabled to use dom0 with translated "
+              "paging mode\n");
 
     if ( iommu_passthrough )
-        panic("For pvh dom0, dom0-passthrough must not be enabled\n");
+        panic("Dom0 uses translated paging mode, so dom0-passthrough must "
+              "not be enabled\n");
 
     iommu_dom0_strict = 1;
 }
@@ -145,8 +150,7 @@ void __init iommu_dom0_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    if ( is_pvh_domain(d) )
-        check_dom0_pvh_reqs(d);
+    check_dom0_reqs(d);
 
     if ( !iommu_enabled )
         return;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:04 +0000
Message-Id: <1391794991-5919-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
	dt_parse_phandle_with_args and dt_parse_phandle

Code adapted from Linux's drivers/of/base.c (commit ef42c58).

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/common/device_tree.c      |  151 ++++++++++++++++++++++++++++++++++++++++-
 xen/include/xen/device_tree.h |   54 +++++++++++++++
 2 files changed, 203 insertions(+), 2 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index ccdb7ff..37a025a 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1090,9 +1090,9 @@ int dt_device_get_address(const struct dt_device_node *dev, int index,
  *
  * Returns a node pointer.
  */
-static const struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
+static struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
 {
-    const struct dt_device_node *np;
+    struct dt_device_node *np;
 
     dt_for_each_device_node(dt_host, np)
         if ( np->phandle == handle )
@@ -1477,6 +1477,153 @@ bool_t dt_device_is_available(const struct dt_device_node *device)
     return 0;
 }
 
+static int __dt_parse_phandle_with_args(const struct dt_device_node *np,
+                                        const char *list_name,
+                                        const char *cells_name,
+                                        int cell_count, int index,
+                                        struct dt_phandle_args *out_args)
+{
+    const __be32 *list, *list_end;
+    int rc = 0, cur_index = 0;
+    u32 size, count = 0;
+    struct dt_device_node *node = NULL;
+    dt_phandle phandle;
+
+    /* Retrieve the phandle list property */
+    list = dt_get_property(np, list_name, &size);
+    if ( !list )
+        return -ENOENT;
+    list_end = list + size / sizeof(*list);
+
+    /* Loop over the phandles until the requested entry is found */
+    while ( list < list_end )
+    {
+        rc = -EINVAL;
+        count = 0;
+
+        /*
+         * If phandle is 0, then it is an empty entry with no
+         * arguments.  Skip forward to the next entry.
+         */
+        phandle = be32_to_cpup(list++);
+        if ( phandle )
+        {
+            /*
+             * Find the provider node and parse the #*-cells
+             * property to determine the argument length.
+             *
+             * This is not needed if the cell count is hard-coded
+             * (i.e. cells_name not set, but cell_count is set),
+             * except when we're going to return the found node
+             * below.
+             */
+            if ( cells_name || cur_index == index )
+            {
+                node = dt_find_node_by_phandle(phandle);
+                if ( !node )
+                {
+                    dt_printk(XENLOG_ERR "%s: could not find phandle\n",
+                              np->full_name);
+                    goto err;
+                }
+            }
+
+            if ( cells_name )
+            {
+                if ( !dt_property_read_u32(node, cells_name, &count) )
+                {
+                    dt_printk("%s: could not get %s for %s\n",
+                              np->full_name, cells_name, node->full_name);
+                    goto err;
+                }
+            }
+            else
+                count = cell_count;
+
+            /*
+             * Make sure that the arguments actually fit in the
+             * remaining property data length
+             */
+            if ( list + count > list_end )
+            {
+                dt_printk(XENLOG_ERR "%s: arguments longer than property\n",
+                          np->full_name);
+                goto err;
+            }
+        }
+
+        /*
+         * All of the error cases above bail out of the loop, so at
+         * this point, the parsing is successful. If the requested
+         * index matches, then fill the out_args structure and return,
+         * or return -ENOENT for an empty entry.
+         */
+        rc = -ENOENT;
+        if ( cur_index == index )
+        {
+            if ( !phandle )
+                goto err;
+
+            if ( out_args )
+            {
+                int i;
+
+                WARN_ON(count > MAX_PHANDLE_ARGS);
+                if ( count > MAX_PHANDLE_ARGS )
+                    count = MAX_PHANDLE_ARGS;
+                out_args->np = node;
+                out_args->args_count = count;
+                for ( i = 0; i < count; i++ )
+                    out_args->args[i] = be32_to_cpup(list++);
+            }
+
+            /* Found it! return success */
+            return 0;
+        }
+
+        node = NULL;
+        list += count;
+        cur_index++;
+    }
+
+    /*
+     * Returning result will be one of:
+     * -ENOENT : index is for empty phandle
+     * -EINVAL : parsing error on data
+     * [1..n]  : Number of phandle (count mode; when index = -1)
+     */
+    rc = index < 0 ? cur_index : -ENOENT;
+err:
+    return rc;
+}
+
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name, int index)
+{
+    struct dt_phandle_args args;
+
+    if ( index < 0 )
+        return NULL;
+
+    if ( __dt_parse_phandle_with_args(np, phandle_name, NULL, 0,
+                                      index, &args) )
+        return NULL;
+
+    return args.np;
+}
+
+
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args)
+{
+    if ( index < 0 )
+        return -EINVAL;
+    return __dt_parse_phandle_with_args(np, list_name, cells_name, 0,
+                                        index, out_args);
+}
+
 /**
  * unflatten_dt_node - Alloc and populate a device_node from the flat tree
  * @fdt: The parent device tree blob
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 7c075d9..d429e60 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -112,6 +112,13 @@ struct dt_device_node {
 
 };
 
+#define MAX_PHANDLE_ARGS 16
+struct dt_phandle_args {
+    struct dt_device_node *np;
+    int args_count;
+    uint32_t args[MAX_PHANDLE_ARGS];
+};
+
 /**
  * IRQ line type.
  *
@@ -621,6 +628,53 @@ void dt_set_range(__be32 **cellp, const struct dt_device_node *np,
 void dt_get_range(const __be32 **cellp, const struct dt_device_node *np,
                   u64 *address, u64 *size);
 
+/**
+ * dt_parse_phandle - Resolve a phandle property to a device_node pointer
+ * @np: Pointer to device node holding phandle property
+ * @phandle_name: Name of property holding a phandle value
+ * @index: For properties holding a table of phandles, this is the index into
+ *         the table
+ *
+ * Returns the device_node pointer.
+ */
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name,
+                                        int index);
+
+/**
+ * dt_parse_phandle_with_args() - Find a node pointed by phandle in a list
+ * @np:	pointer to a device tree node containing a list
+ * @list_name: property name that contains a list
+ * @cells_name: property name that specifies phandles' arguments count
+ * @index: index of a phandle to parse out
+ * @out_args: optional pointer to output arguments structure (will be filled)
+ *
+ * This function is useful to parse lists of phandles and their arguments.
+ * Returns 0 on success and fills out_args, on error returns appropriate
+ * errno value.
+ *
+ * Example:
+ *
+ * phandle1: node1 {
+ * 	#list-cells = <2>;
+ * }
+ *
+ * phandle2: node2 {
+ * 	#list-cells = <1>;
+ * }
+ *
+ * node3 {
+ * 	list = <&phandle1 1 2 &phandle2 3>;
+ * }
+ *
+ * To get a device_node of the `node2' node you may call this:
+ * dt_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);
+ */
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args);
+
 #endif /* __XEN_DEVICE_TREE_H */
 
 /*
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:08 +0000
Message-Id: <1391794991-5919-10-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: Joseph Cihula <joseph.cihula@intel.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, patches@linaro.org,
	Shane Wang <shane.wang@intel.com>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Gang Wei <gang.wei@intel.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 09/12] xen/passthrough: iommu: Introduce
	arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently the hvm_iommu structure (xen/include/xen/hvm/iommu.h) contains
x86-specific fields.

This patch creates:
    - an arch_hvm_iommu structure, which contains the
    architecture-dependent fields
    - arch_iommu_domain_{init,destroy} functions, which run
    arch-specific code during domain creation/destruction

Also move iommu_use_hap_pt and domain_hvm_iommu to asm-x86/iommu.h.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Joseph Cihula <joseph.cihula@intel.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: Shane Wang <shane.wang@intel.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +--
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +++++++++----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +++++++++---------
 xen/drivers/passthrough/iommu.c             |   36 +++---------
 xen/drivers/passthrough/iommu_x86.c         |   41 ++++++++++++++
 xen/drivers/passthrough/vtd/iommu.c         |   80 +++++++++++++--------------
 xen/include/asm-x86/hvm/iommu.h             |   29 ++++++++++
 xen/include/asm-x86/iommu.h                 |    4 ++
 xen/include/xen/hvm/iommu.h                 |   26 +--------
 xen/include/xen/iommu.h                     |    8 +--
 14 files changed, 191 insertions(+), 163 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26635ff..e55d9d5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -745,7 +745,7 @@ long arch_do_domctl(
                    "ioport_map:add: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
 
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if (g2m_ioport->mport == fmp )
                 {
                     g2m_ioport->gport = fgp;
@@ -764,7 +764,7 @@ long arch_do_domctl(
                 g2m_ioport->gport = fgp;
                 g2m_ioport->mport = fmp;
                 g2m_ioport->np = np;
-                list_add_tail(&g2m_ioport->list, &hd->g2m_ioport_list);
+                list_add_tail(&g2m_ioport->list, &hd->arch.g2m_ioport_list);
             }
             if ( !ret )
                 ret = ioports_permit_access(d, fmp, fmp + np - 1);
@@ -779,7 +779,7 @@ long arch_do_domctl(
             printk(XENLOG_G_INFO
                    "ioport_map:remove: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if ( g2m_ioport->mport == fmp )
                 {
                     list_del(&g2m_ioport->list);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf6309d..ddb03f8 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -451,7 +451,7 @@ int dpci_ioport_intercept(ioreq_t *p)
     unsigned int s = 0, e = 0;
     int rc;
 
-    list_for_each_entry( g2m_ioport, &hd->g2m_ioport_list, list )
+    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
     {
         s = g2m_ioport->gport;
         e = s + g2m_ioport->np;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index ccde4a0..c40fe12 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,7 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         if ( !is_idle_domain(d) )
         {
             struct hvm_iommu *hd = domain_hvm_iommu(d);
-            update_iommu_mac(&ctx, hd->pgd_maddr, agaw_to_level(hd->agaw));
+            update_iommu_mac(&ctx, hd->arch.pgd_maddr,
+                             agaw_to_level(hd->arch.agaw));
         }
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index d27bd3c..f39bd9d 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -355,7 +355,7 @@ static void _amd_iommu_flush_pages(struct domain *d,
     unsigned long flags;
     struct amd_iommu *iommu;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
-    unsigned int dom_id = hd->domain_id;
+    unsigned int dom_id = hd->arch.domain_id;
 
     /* send INVALIDATE_IOMMU_PAGES command */
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 477de20..bd31bb5 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -60,12 +60,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return domain_hvm_iommu(d)->g_iommu;
+    return domain_hvm_iommu(d)->arch.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return domain_hvm_iommu(v->domain)->g_iommu;
+    return domain_hvm_iommu(v->domain)->arch.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -886,7 +886,7 @@ int guest_iommu_init(struct domain* d)
 
     guest_iommu_reg_init(iommu);
     iommu->domain = d;
-    hd->g_iommu = iommu;
+    hd->arch.g_iommu = iommu;
 
     tasklet_init(&iommu->cmd_buffer_tasklet,
                  guest_iommu_process_command, (unsigned long)d);
@@ -907,7 +907,7 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    domain_hvm_iommu(d)->g_iommu = NULL;
+    domain_hvm_iommu(d)->arch.g_iommu = NULL;
 }
 
 static int guest_iommu_mmio_range(struct vcpu *v, unsigned long addr)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 1294561..be34e90 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -344,7 +344,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     struct hvm_iommu *hd = domain_hvm_iommu(d);
     bool_t ok = 0;
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     next_level = merge_level - 1;
 
@@ -398,7 +398,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     unsigned long first_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(pt_mfn);
     pde = table + pfn_to_pde_idx(gfn, merge_level);
@@ -448,8 +448,8 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     struct page_info *table;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    table = hd->root_table;
-    level = hd->paging_mode;
+    table = hd->arch.root_table;
+    level = hd->arch.paging_mode;
 
     BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 || 
             level > IOMMU_PAGING_MODE_LEVEL_6 );
@@ -557,11 +557,11 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    level = hd->paging_mode;
-    old_root = hd->root_table;
+    level = hd->arch.paging_mode;
+    old_root = hd->arch.root_table;
     offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
 
-    ASSERT(spin_is_locked(&hd->mapping_lock) && is_hvm_domain(d));
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
 
     while ( offset >= PTE_PER_TABLE_SIZE )
     {
@@ -587,8 +587,8 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
     if ( new_root != NULL )
     {
-        hd->paging_mode = level;
-        hd->root_table = new_root;
+        hd->arch.paging_mode = level;
+        hd->arch.root_table = new_root;
 
         if ( !spin_is_locked(&pcidevs_lock) )
             AMD_IOMMU_DEBUG("%s Try to access pdev_list "
@@ -613,9 +613,9 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
                 /* valid = 0 only works for dom0 passthrough mode */
                 amd_iommu_set_root_page_table((u32 *)device_entry,
-                                              page_to_maddr(hd->root_table),
-                                              hd->domain_id,
-                                              hd->paging_mode, 1);
+                                              page_to_maddr(hd->arch.root_table),
+                                              hd->arch.domain_id,
+                                              hd->arch.paging_mode, 1);
 
                 amd_iommu_flush_device(iommu, req_id);
                 bdf += pdev->phantom_stride;
@@ -638,14 +638,14 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     unsigned long pt_mfn[7];
     unsigned int merge_level;
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -653,7 +653,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -662,7 +662,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -684,7 +684,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         amd_iommu_flush_pages(d, gfn, 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
-          merge_level <= hd->paging_mode; merge_level++ )
+          merge_level <= hd->arch.paging_mode; merge_level++ )
     {
         if ( pt_mfn[merge_level] == 0 )
             break;
@@ -697,7 +697,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
                                flags, merge_level) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
                             "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
             domain_crash(d);
@@ -706,7 +706,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     }
 
 out:
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -715,14 +715,14 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     unsigned long pt_mfn[7];
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -730,7 +730,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -739,7 +739,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -747,7 +747,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     /* mark PTE as 'page not present' */
     clear_iommu_pte_present(pt_mfn[1], gfn);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 
     amd_iommu_flush_pages(d, gfn, 0);
 
@@ -792,13 +792,13 @@ void amd_iommu_share_p2m(struct domain *d)
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
     p2m_table = mfn_to_page(mfn_x(pgd_mfn));
 
-    if ( hd->root_table != p2m_table )
+    if ( hd->arch.root_table != p2m_table )
     {
-        free_amd_iommu_pgtable(hd->root_table);
-        hd->root_table = p2m_table;
+        free_amd_iommu_pgtable(hd->arch.root_table);
+        hd->arch.root_table = p2m_table;
 
         /* When sharing p2m with iommu, paging mode = 4 */
-        hd->paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
+        hd->arch.paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
         AMD_IOMMU_DEBUG("Share p2m table with iommu: p2m table = %#lx\n",
                         mfn_x(pgd_mfn));
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c26aabc..0c3cd3e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -120,7 +120,8 @@ static void amd_iommu_setup_domain_device(
 
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
 
-    BUG_ON( !hd->root_table || !hd->paging_mode || !iommu->dev_table.buffer );
+    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+            !iommu->dev_table.buffer );
 
     if ( iommu_passthrough && (domain->domain_id == 0) )
         valid = 0;
@@ -138,8 +139,8 @@ static void amd_iommu_setup_domain_device(
     {
         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            (u32 *)dte, page_to_maddr(hd->root_table), hd->domain_id,
-            hd->paging_mode, valid);
+            (u32 *)dte, page_to_maddr(hd->arch.root_table), hd->arch.domain_id,
+            hd->arch.paging_mode, valid);
 
         if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
@@ -151,8 +152,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->root_table),
-                        hd->domain_id, hd->paging_mode);
+                        page_to_maddr(hd->arch.root_table),
+                        hd->arch.domain_id, hd->arch.paging_mode);
     }
 
     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -225,17 +226,17 @@ int __init amd_iov_detect(void)
 static int allocate_domain_resources(struct hvm_iommu *hd)
 {
     /* allocate root table */
-    spin_lock(&hd->mapping_lock);
-    if ( !hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( !hd->arch.root_table )
     {
-        hd->root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->root_table )
+        hd->arch.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.root_table )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             return -ENOMEM;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -262,18 +263,18 @@ static int amd_iommu_domain_init(struct domain *d)
     /* allocate page directroy */
     if ( allocate_domain_resources(hd) != 0 )
     {
-        if ( hd->root_table )
-            free_domheap_page(hd->root_table);
+        if ( hd->arch.root_table )
+            free_domheap_page(hd->arch.root_table);
         return -ENOMEM;
     }
 
     /* For pv and dom0, stick with get_paging_mode(max_page)
      * For HVM dom0, use 2 level page table at first */
-    hd->paging_mode = is_hvm_domain(d) ?
+    hd->arch.paging_mode = is_hvm_domain(d) ?
                       IOMMU_PAGING_MODE_LEVEL_2 :
                       get_paging_mode(max_page);
 
-    hd->domain_id = d->domain_id;
+    hd->arch.domain_id = d->domain_id;
 
     guest_iommu_init(d);
 
@@ -333,8 +334,8 @@ void amd_iommu_disable_domain_device(struct domain *domain,
 
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
-                        req_id,  domain_hvm_iommu(domain)->domain_id,
-                        domain_hvm_iommu(domain)->paging_mode);
+                        req_id,  domain_hvm_iommu(domain)->arch.domain_id,
+                        domain_hvm_iommu(domain)->arch.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -374,7 +375,7 @@ static int reassign_device(struct domain *source, struct domain *target,
 
     /* IO page tables might be destroyed after pci-detach the last device
      * In this case, we have to re-allocate root table for next pci-attach.*/
-    if ( t->root_table == NULL )
+    if ( t->arch.root_table == NULL )
         allocate_domain_resources(t);
 
     amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
@@ -456,13 +457,13 @@ static void deallocate_iommu_page_tables(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    if ( hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( hd->arch.root_table )
     {
-        deallocate_next_page_table(hd->root_table, hd->paging_mode);
-        hd->root_table = NULL;
+        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
+        hd->arch.root_table = NULL;
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 
@@ -593,11 +594,11 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
 
-    if ( !hd->root_table ) 
+    if ( !hd->arch.root_table ) 
         return;
 
-    printk("p2m table has %d levels\n", hd->paging_mode);
-    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
 }
 
 const struct iommu_ops amd_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index d733878..2346da9 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -117,10 +117,11 @@ static void __init parse_iommu_param(char *s)
 int iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
+    int ret = 0;
 
-    spin_lock_init(&hd->mapping_lock);
-    INIT_LIST_HEAD(&hd->g2m_ioport_list);
-    INIT_LIST_HEAD(&hd->mapped_rmrrs);
+    ret = arch_iommu_domain_init(d);
+    if ( ret )
+        return ret;
 
     if ( !iommu_enabled )
         return 0;
@@ -190,10 +191,7 @@ void iommu_teardown(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    struct hvm_iommu *hd  = domain_hvm_iommu(d);
-    struct list_head *ioport_list, *rmrr_list, *tmp;
-    struct g2m_ioport *ioport;
-    struct mapped_rmrr *mrmrr;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return;
@@ -201,20 +199,8 @@ void iommu_domain_destroy(struct domain *d)
     if ( need_iommu(d) )
         iommu_teardown(d);
 
-    list_for_each_safe ( ioport_list, tmp, &hd->g2m_ioport_list )
-    {
-        ioport = list_entry(ioport_list, struct g2m_ioport, list);
-        list_del(&ioport->list);
-        xfree(ioport);
-    }
-
-    list_for_each_safe ( rmrr_list, tmp, &hd->mapped_rmrrs )
-    {
-        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
-        list_del(&mrmrr->list);
-        xfree(mrmrr);
-    }
-}
+    arch_iommu_domain_destroy(d);
+}
 
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
@@ -328,14 +314,6 @@ void iommu_suspend()
         ops->suspend();
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-
-    if ( iommu_enabled && is_hvm_domain(d) )
-        ops->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/drivers/passthrough/iommu_x86.c b/xen/drivers/passthrough/iommu_x86.c
index bd3c23b..c137cef 100644
--- a/xen/drivers/passthrough/iommu_x86.c
+++ b/xen/drivers/passthrough/iommu_x86.c
@@ -55,6 +55,47 @@ int __init iommu_setup_hpet_msi(struct msi_desc *msi)
     return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
 }
 
+void iommu_share_p2m_table(struct domain* d)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+
+    if ( iommu_enabled && is_hvm_domain(d) )
+        ops->share_p2m(d);
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    spin_lock_init(&hd->arch.mapping_lock);
+    INIT_LIST_HEAD(&hd->arch.g2m_ioport_list);
+    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
+
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct list_head *ioport_list, *rmrr_list, *tmp;
+    struct g2m_ioport *ioport;
+    struct mapped_rmrr *mrmrr;
+
+    list_for_each_safe ( ioport_list, tmp, &hd->arch.g2m_ioport_list )
+    {
+        ioport = list_entry(ioport_list, struct g2m_ioport, list);
+        list_del(&ioport->list);
+        xfree(ioport);
+    }
+
+    list_for_each_safe ( rmrr_list, tmp, &hd->arch.mapped_rmrrs )
+    {
+        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
+        list_del(&mrmrr->list);
+        xfree(mrmrr);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index faa794b..a7a5253 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -249,16 +249,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
-    int addr_width = agaw_to_width(hd->agaw);
+    int addr_width = agaw_to_width(hd->arch.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->agaw);
+    int level = agaw_to_level(hd->arch.agaw);
     int offset;
     u64 pte_maddr = 0, maddr;
     u64 *vaddr = NULL;
 
     addr &= (((u64)1) << addr_width) - 1;
-    ASSERT(spin_is_locked(&hd->mapping_lock));
-    if ( hd->pgd_maddr == 0 )
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+    if ( hd->arch.pgd_maddr == 0 )
     {
         /*
          * just get any passthrough device in the domainr - assume user
@@ -266,11 +266,11 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
          */
         pdev = pci_get_pdev_by_domain(domain, -1, -1, -1);
         drhd = acpi_find_matched_drhd_unit(pdev);
-        if ( !alloc || ((hd->pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
+        if ( !alloc || ((hd->arch.pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
             goto out;
     }
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -580,7 +580,7 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     {
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -622,12 +622,12 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     u64 pg_maddr;
     struct mapped_rmrr *mrmrr;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
     pg_maddr = addr_to_dma_page_maddr(domain, addr, 0);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return;
     }
 
@@ -636,13 +636,13 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
 
     if ( !dma_pte_present(*pte) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return;
     }
 
     dma_clear_pte(*pte);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -653,8 +653,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     /* if the cleared address is between mapped RMRR region,
      * remove the mapped RMRR
      */
-    spin_lock(&hd->mapping_lock);
-    list_for_each_entry ( mrmrr, &hd->mapped_rmrrs, list )
+    spin_lock(&hd->arch.mapping_lock);
+    list_for_each_entry ( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( addr >= mrmrr->base && addr <= mrmrr->end )
         {
@@ -663,7 +663,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
             break;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1248,7 +1248,7 @@ static int intel_iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    hd->agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    hd->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
 }
@@ -1345,16 +1345,16 @@ int domain_context_mapping_one(
     }
     else
     {
-        spin_lock(&hd->mapping_lock);
+        spin_lock(&hd->arch.mapping_lock);
 
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->pgd_maddr == 0 )
+        if ( hd->arch.pgd_maddr == 0 )
         {
             addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->pgd_maddr == 0 )
+            if ( hd->arch.pgd_maddr == 0 )
             {
             nomem:
-                spin_unlock(&hd->mapping_lock);
+                spin_unlock(&hd->arch.mapping_lock);
                 spin_unlock(&iommu->lock);
                 unmap_vtd_domain_page(context_entries);
                 return -ENOMEM;
@@ -1362,7 +1362,7 @@ int domain_context_mapping_one(
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->pgd_maddr;
+        pgd_maddr = hd->arch.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1380,7 +1380,7 @@ int domain_context_mapping_one(
         else
             context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
 
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
     }
 
     if ( context_set_domain_id(context, domain, iommu) )
@@ -1406,7 +1406,7 @@ int domain_context_mapping_one(
         iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
     }
 
-    set_bit(iommu->index, &hd->iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
 
@@ -1652,7 +1652,7 @@ static int domain_context_unmap(
         struct hvm_iommu *hd = domain_hvm_iommu(domain);
         int iommu_domid;
 
-        clear_bit(iommu->index, &hd->iommu_bitmap);
+        clear_bit(iommu->index, &hd->arch.iommu_bitmap);
 
         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1711,10 +1711,10 @@ static void iommu_domain_teardown(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    iommu_free_pagetable(hd->pgd_maddr, agaw_to_level(hd->agaw));
-    hd->pgd_maddr = 0;
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
+    hd->arch.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int intel_iommu_map_page(
@@ -1733,12 +1733,12 @@ static int intel_iommu_map_page(
     if ( iommu_passthrough && (d->domain_id == 0) )
         return 0;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -1755,14 +1755,14 @@ static int intel_iommu_map_page(
 
     if ( old.val == new.val )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -1796,7 +1796,7 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
         for_each_drhd_unit ( drhd )
         {
             iommu = drhd->iommu;
-            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+            if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
                 continue;
 
             flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -1837,7 +1837,7 @@ static void iommu_set_pgd(struct domain *d)
         return;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    hd->pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    hd->arch.pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
 static int rmrr_identity_mapping(struct domain *d,
@@ -1852,10 +1852,10 @@ static int rmrr_identity_mapping(struct domain *d,
     ASSERT(rmrr->base_address < rmrr->end_address);
 
     /*
-     * No need to acquire hd->mapping_lock, as the only theoretical race is
+     * No need to acquire hd->arch.mapping_lock, as the only theoretical race is
      * with the insertion below (impossible due to holding pcidevs_lock).
      */
-    list_for_each_entry( mrmrr, &hd->mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1880,9 +1880,9 @@ static int rmrr_identity_mapping(struct domain *d,
         return -ENOMEM;
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
-    spin_lock(&hd->mapping_lock);
-    list_add_tail(&mrmrr->list, &hd->mapped_rmrrs);
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    spin_unlock(&hd->arch.mapping_lock);
 
     return 0;
 }
@@ -2427,8 +2427,8 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;
 
     hd = domain_hvm_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
-    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
+    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
 }
 
 const struct iommu_ops intel_iommu_ops = {
diff --git a/xen/include/asm-x86/hvm/iommu.h b/xen/include/asm-x86/hvm/iommu.h
index d488edf..a3f83d0 100644
--- a/xen/include/asm-x86/hvm/iommu.h
+++ b/xen/include/asm-x86/hvm/iommu.h
@@ -39,4 +39,33 @@ static inline int iommu_hardware_setup(void)
     return 0;
 }
 
+struct g2m_ioport {
+    struct list_head list;
+    unsigned int gport;
+    unsigned int mport;
+    unsigned int np;
+};
+
+struct mapped_rmrr {
+    struct list_head list;
+    u64 base;
+    u64 end;
+};
+
+struct arch_hvm_iommu
+{
+    u64 pgd_maddr;                 /* io page directory machine address */
+    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
+    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
+    /* amd iommu support */
+    int domain_id;
+    int paging_mode;
+    struct page_info *root_table;
+    struct guest_iommu *g_iommu;
+
+    struct list_head g2m_ioport_list;   /* guest to machine ioport mapping */
+    struct list_head mapped_rmrrs;
+    spinlock_t mapping_lock;            /* io page table lock */
+};
+
 #endif /* __ASM_X86_HVM_IOMMU_H__ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 34c1896..021cd80 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -19,6 +19,10 @@
 
 #include <asm/msi.h>
 
+/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
+#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
+#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
+
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
 int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
 void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 2abb4e3..f8f8a93 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -23,32 +23,8 @@
 #include <xen/iommu.h>
 #include <asm/hvm/iommu.h>
 
-struct g2m_ioport {
-    struct list_head list;
-    unsigned int gport;
-    unsigned int mport;
-    unsigned int np;
-};
-
-struct mapped_rmrr {
-    struct list_head list;
-    u64 base;
-    u64 end;
-};
-
 struct hvm_iommu {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;       /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int domain_id;
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    struct arch_hvm_iommu arch;
 
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 60df9d6..9a69b76 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -35,11 +35,6 @@ extern bool_t iommu_hap_pt_share;
 extern bool_t iommu_debug;
 extern bool_t amd_iommu_perdev_intremap;
 
-/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
-#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
-
-#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
@@ -55,6 +50,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+void arch_iommu_domain_destroy(struct domain *d);
+int arch_iommu_domain_init(struct domain *d);
+
 /* Function used internally, use iommu_domain_destroy */
 void iommu_teardown(struct domain *d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSa-0005D6-S8; Fri, 07 Feb 2014 17:43:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSZ-0005Cp-D0
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:27 +0000
Received: from [85.158.143.35:60829] by server-2.bemta-4.messagelabs.com id
	12/2D-10891-E3B15F25; Fri, 07 Feb 2014 17:43:26 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391795005!4001307!1
X-Originating-IP: [209.85.215.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7773 invoked from network); 7 Feb 2014 17:43:26 -0000
Received: from mail-ea0-f172.google.com (HELO mail-ea0-f172.google.com)
	(209.85.215.172)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:26 -0000
Received: by mail-ea0-f172.google.com with SMTP id l9so1447687eaj.31
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=A+FPIBV0p1bt9VIYlpV6PZCZjS4kiCj/p53t8RAeJ6Q=;
	b=D+cBJIXkZ0u4dPSmODqzF3t9kXvRnAVZ1UyS7is2RCAgzp9AGPmhupWQ5WIyjx7f1E
	HGxD0QEbhDFNq8UozGtTNEy3QYWorqqwOlHjX06Ieb8hFRaDu9a/Qmdp4fABmAfN3csg
	L+GlIEZwdTVu8r5WBfmfkSVNCNX8UsQ8VjE35FUA4RPNWP7yKO58IxhYH+UcKyKv6Si6
	C7ZveWJlGAH/u5awFq2JmYHamblHJJK+TNzr2RJZsCy5NDyh3GEviL17p29LAANX+T+L
	Y3HbgrMDE5JCZnbkbLNEN+ey8YKpJi79khBAXkTq0bMUkxQEelj3fdhbqiCx9LY4gTzm
	EkFQ==
X-Gm-Message-State: ALoCoQn9uSEzwy2vVMUkwRw4FIWv2IeSV+qiMvZrphaB5fxGJOaUxfDKFvcDpJrIJsRavJ+oP/7c
X-Received: by 10.15.21.2 with SMTP id c2mr5740660eeu.77.1391795005757;
	Fri, 07 Feb 2014 09:43:25 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.23
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:25 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:01 +0000
Message-Id: <1391794991-5919-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 02/12] xen/passthrough: vtd: Don't export
	iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_domain_teardown is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..a8d33fc 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
     return ret;
 }
 
-void iommu_domain_teardown(struct domain *d)
+static void iommu_domain_teardown(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSZ-0005Cq-Bt; Fri, 07 Feb 2014 17:43:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSX-0005Cd-NS
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:25 +0000
Received: from [193.109.254.147:23120] by server-6.bemta-14.messagelabs.com id
	AB/CD-03396-D3B15F25; Fri, 07 Feb 2014 17:43:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1391795004!2830320!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21353 invoked from network); 7 Feb 2014 17:43:24 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:24 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so1688209eek.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=jqxoPLhffLdddCuXl7gJ26I+N3MoJPCQWDLv9+DRoO4=;
	b=lp3FETCBZdI/78dRW2EOptiBvdwWOGtcyi/vPul+XgyaJc+BT578eYTW6spodhF29k
	N9sKm2Vcp29/milV0CJbFWRYA5DDo5zJd1CGj4rtT0BeXWmbjw3BbhDUkmXc/o3Vjrkk
	soQnJ7//GS4oGIV8OVxamStRQ1PZdVPXBkii8IlclgQgAFKDfXcZF+jktOJE0xLR7wAT
	c1PZuAU+TjRKMGTnaxSVlgr5FnvJDAh3MPDT7ebStwidSeefGLl3rZvtjh36Re+ZYtA0
	wjOTVH5wnZ2EYa9yvDl+7ziRrTB+/Tu4Wu/9IaxOzbpWWy5mSUTuJxVPmRQmvyPazVaK
	by8A==
X-Gm-Message-State: ALoCoQnXWIjLiiJqCYd0qpQYnn2R4nyFlDUFk49ok7GhIjXDwVDGbBZvF0tWbu3h+fQt1CQ6Ik8F
X-Received: by 10.14.203.197 with SMTP id f45mr4543131eeo.90.1391795003688;
	Fri, 07 Feb 2014 09:43:23 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.22
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:23 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:00 +0000
Message-Id: <1391794991-5919-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com
Subject: [Xen-devel] [RFC for-4.5 01/12] xen/common: grant-table: only call
	IOMMU if paging mode translate is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From Xen's point of view, ARM guests are PV guests with paging auto-translate
enabled.

When IOMMU support is added for ARM, mapping a grant ref will always crash
Xen due to the BUG_ON in __gnttab_map_grant_ref.

On x86:
    - PV guests always have paging mode translate disabled
    - PVH and HVM guests always have paging mode translate enabled

This means we can safely replace the check that the domain is a PV guest
with a check that the guest has paging mode translate disabled.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
---
 xen/common/grant_table.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 107b000..778bdb7 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -721,12 +721,10 @@ __gnttab_map_grant_ref(
 
     double_gt_lock(lgt, rgt);
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        /* Shouldn't happen, because you can't use iommu in a HVM domain. */
-        BUG_ON(paging_mode_translate(ld));
         /* We're not translated, so we know that gmfns and mfns are
            the same things, so the IOMMU entry is always 1-to-1. */
         mapcount(lgt, rd, frame, &wrc, &rdc);
@@ -931,11 +929,10 @@ __gnttab_unmap_common(
             act->pin -= GNTPIN_hstw_inc;
     }
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        BUG_ON(paging_mode_translate(ld));
         mapcount(lgt, rd, op->frame, &wrc, &rdc);
         if ( (wrc + rdc) == 0 )
             err = iommu_unmap_page(ld, op->frame);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSX-0005Ce-Ju; Fri, 07 Feb 2014 17:43:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSW-0005CY-DM
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:24 +0000
Received: from [85.158.143.35:60675] by server-3.bemta-4.messagelabs.com id
	7F/0C-11539-B3B15F25; Fri, 07 Feb 2014 17:43:23 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391795002!4009991!1
X-Originating-IP: [209.85.215.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30260 invoked from network); 7 Feb 2014 17:43:22 -0000
Received: from mail-ea0-f175.google.com (HELO mail-ea0-f175.google.com)
	(209.85.215.175)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:22 -0000
Received: by mail-ea0-f175.google.com with SMTP id z10so1719180ead.20
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=VMy9p6LSTPRVXMFBf95lblqMRmlDK80BpPUXTNowokk=;
	b=jTabu+tysmUIvbfZxwzOZd0YX0mQUZN8hycnZl881Vv0vhS7JtvOewsIM5R9vzKEGd
	3vpnwmGubpqrgkItKRLNh34Nbl/pWmrgzkdCD+BZpXBxoY+9+Snnam5VPUQFgpc2fDOd
	etFeTkTmPVOJCPny5MlOXfuPtc8kKvhyKcgS/ELNMTaccF/dTulEqRHsQfRHhJjItWzI
	JfsOvLSdv/TcoixBsMfY51Qv3LPAKlYKHFXWuXOnzGyLd/N0qUrKrv/1JM50S0poyEPp
	NT+QKEhne4ARsMMxL2O1pZU8WeODqpF9l+WluKKXri9Ppap0MA6JUnhDPgi5pCIgd7pb
	onOw==
X-Gm-Message-State: ALoCoQm/pQ8RA5gYs5pJOhAZeLTaB0gqsQ3TVWA9KZSXFQWivQjKRSboiuTDHXGIY0BfcEqWsH4q
X-Received: by 10.15.81.196 with SMTP id x44mr17733383eey.31.1391795002371;
	Fri, 07 Feb 2014 09:43:22 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.21
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:21 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:42:59 +0000
Message-Id: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [RFC for-4.5 00/12] IOMMU support for ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This patch series adds support for IOMMU on ARM. It also adds an ARM SMMU
driver, which is used for instance on Midway.

The IOMMU architecture for ARM relies on the page table being shared between
the processor and each IOMMU.

The patch series is divided as follows:
    - #1: fixing grant-table with IOMMU. Will be necessary for ARM later
    - #2-#3: Make static some vtd functions
    - #4-#5: Adding new device tree functions
    - #6-#9: Prepare IOMMU code to add support for ARM
    - #10-#11: Add IOMMU architecture for ARM
    - #12: Add SMMU drivers

For now the 1:1 workaround is not removed, because the same platform can have
some DMA-capable devices under an IOMMU and some not. This is a problem for
the swiotlb, which needs to know whether a device is protected when a foreign
mapping is mapped in dom0.

When I talked with Stefano, two solutions came up:
    - Having a property in each "protected" device
    - Listing the protected devices in the hypervisor node

I didn't yet decide which solution I will use.

Any comments or questions are welcome.

Sincerely yours,

Julien Grall (12):
  xen/common: grant-table: only call IOMMU if paging mode translate is
    disabled
  xen/passthrough: vtd: Don't export iommu_domain_teardown
  xen/passthrough: vtd: Don't export iommu_set_pgd
  xen/dts: Add dt_property_read_bool
  xen/dts: Add dt_parse_phandle_with_args and dt_parse_phandle
  xen/passthrough: rework dom0_pvh_reqs to use it also on ARM
  xen/passthrough: iommu: Don't need to map dom0 page when the PT is
    shared
  xen/passthrough: iommu: Split generic IOMMU code
  xen/passthrough: iommu: Introduce arch specific code
  xen/passthrough: Introduce IOMMU ARM architure
  MAINTAINERS: Add drivers/passthrough/arm
  drivers/passthrough: arm: Add support for SMMU drivers

 MAINTAINERS                                 |    1 +
 xen/arch/arm/Rules.mk                       |    1 +
 xen/arch/arm/domain.c                       |    7 +
 xen/arch/arm/domain_build.c                 |    2 +
 xen/arch/arm/p2m.c                          |    4 +
 xen/arch/arm/setup.c                        |    2 +
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/common/device_tree.c                    |  157 ++-
 xen/common/grant_table.c                    |    7 +-
 xen/drivers/passthrough/Makefile            |    7 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +-
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +-
 xen/drivers/passthrough/arm/Makefile        |    2 +
 xen/drivers/passthrough/arm/iommu.c         |   65 +
 xen/drivers/passthrough/arm/smmu.c          | 1701 +++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu.c             |  525 +--------
 xen/drivers/passthrough/iommu_pci.c         |  468 ++++++++
 xen/drivers/passthrough/iommu_x86.c         |  106 ++
 xen/drivers/passthrough/vtd/iommu.c         |  124 +-
 xen/include/asm-arm/device.h                |    3 +-
 xen/include/asm-arm/domain.h                |    2 +
 xen/include/asm-arm/hvm/iommu.h             |   10 +
 xen/include/asm-arm/iommu.h                 |   36 +
 xen/include/asm-x86/hvm/iommu.h             |   29 +
 xen/include/asm-x86/iommu.h                 |   50 +
 xen/include/xen/device_tree.h               |   75 ++
 xen/include/xen/hvm/iommu.h                 |   27 +-
 xen/include/xen/iommu.h                     |   51 +-
 32 files changed, 2891 insertions(+), 701 deletions(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/drivers/passthrough/arm/smmu.c
 create mode 100644 xen/drivers/passthrough/iommu_pci.c
 create mode 100644 xen/drivers/passthrough/iommu_x86.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h
 create mode 100644 xen/include/asm-x86/iommu.h

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSd-0005EE-UI; Fri, 07 Feb 2014 17:43:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSc-0005DW-RP
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:31 +0000
Received: from [85.158.137.68:6080] by server-1.bemta-3.messagelabs.com id
	0C/5D-17293-14B15F25; Fri, 07 Feb 2014 17:43:29 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391795009!394843!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23698 invoked from network); 7 Feb 2014 17:43:29 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:29 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so1688257eek.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:29 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=XWqALXRj576Gk9ZaYAsA1jX5pC5564URrjKEcU85iMQ=;
	b=jiQL5Ct2WhMj5jAE65B7HDoyhFR4LRb3mrdf3Shs0W6FzdnLubTsLk8b28E191MWC3
	Onu7YRWzz6Rd1Fe4etQeZomNpRTY8o2ffyRLUC/wCvQodATnNBbSfB54ClD4qP0A0cTM
	LJdmJ1QpAfS4PIwYlEbQqCuSB4/viw58q3l9Yaiw3szMv7nNgHAIR4U6xaCH7MQaJGkA
	6GQHqAFzTvksYkMWt3qehb4wq+gmO3SxBuj0MRwV8GDuWHB2PfakSjHFVBkO6Syi8wqQ
	h72V9ouAqgQQlcjmJiGLNM6mFncFMu3YB+s2nHru3tE8IdzmgXGFqny815E29xdrTJHV
	pDXA==
X-Gm-Message-State: ALoCoQlhlmkeJAgXFwch/IsGWXwpPExru1D33v6DOtYzIkcveA9wTJk0Osgx/aQJSD6vx2A89++0
X-Received: by 10.14.203.197 with SMTP id f45mr4543645eeo.90.1391795008955;
	Fri, 07 Feb 2014 09:43:28 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.27
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:28 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:03 +0000
Message-Id: <1391794991-5919-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, patches@linaro.org
Subject: [Xen-devel] [RFC for-4.5 04/12] xen/dts: Add dt_property_read_bool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The function checks whether a given property exists in a specific node.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/common/device_tree.c      |    6 ++----
 xen/include/xen/device_tree.h |   21 +++++++++++++++++++++
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index c66d1d5..ccdb7ff 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -512,10 +512,8 @@ static void __init *unflatten_dt_alloc(unsigned long *mem, unsigned long size,
 }
 
 /* Find a property with a given name for a given node and return it. */
-static const struct dt_property *
-dt_find_property(const struct dt_device_node *np,
-                 const char *name,
-                 u32 *lenp)
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp)
 {
     const struct dt_property *pp;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 9a8c3de..7c075d9 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #include <xen/init.h>
 #include <xen/string.h>
 #include <xen/types.h>
+#include <xen/stdbool.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -347,6 +348,10 @@ struct dt_device_node *dt_find_compatible_node(struct dt_device_node *from,
 const void *dt_get_property(const struct dt_device_node *np,
                             const char *name, u32 *lenp);
 
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp);
+
+
 /**
  * dt_property_read_u32 - Helper to read a u32 property.
  * @np: node to get the value
@@ -369,6 +374,22 @@ bool_t dt_property_read_u64(const struct dt_device_node *np,
                             const char *name, u64 *out_value);
 
 /**
+ * dt_property_read_bool - Check if a property exists
+ * @np: node to get the value
+ * @name: name of the property
+ *
+ * Search for a property in a device node.
+ * Return true if the property exists false otherwise.
+ */
+static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
+                                           const char *name)
+{
+    const struct dt_property *prop = dt_find_property(np, name, NULL);
+
+    return prop ? true : false;
+}
+
+/**
  * dt_property_read_string - Find and read a string from a property
  * @np:         Device node from which the property value is to be read
  * @propname:   Name of the property to be searched
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSn-0005Iq-1Z; Fri, 07 Feb 2014 17:43:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSl-0005H9-26
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:39 +0000
Received: from [85.158.137.68:6636] by server-17.bemta-3.messagelabs.com id
	36/7A-22569-A4B15F25; Fri, 07 Feb 2014 17:43:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391795015!393528!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7063 invoked from network); 7 Feb 2014 17:43:35 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:35 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so1690144eei.12
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=aU6mvXVuT0Ab5g154PdFlfENa85BzWy1cX6zXgiDPDI=;
	b=POOOBOOyudb4CV+zM+FGBwCwpdDEO3T+RCjnW4csed1zkBM9ucVvg2j9Lup2PmdYKh
	pzYZ2VoiE5ggDkDp9FekvuANrko+NNw/9CrjmEwDSrvVn6QQe3gu+bv5GC8zRXYYhMES
	LtR/uax169z1JZulW6FYARSCZgMu64fb7LQjRW8CFMoYMCIjsdcpr6zCaYF0o2N2Q6eL
	iQ3P3H9U35K49V348SMnXlGdvtzVhNZm0kEN5e0pkKtqXGRX+na3Enmq8RXAnrcwhQU3
	HrdWHgyocu/tsz7kELaZq4kioPuMQQJkiVPJXSpkBWDSemNKl65UZMmXwbmhsOVY4J3a
	d0Bw==
X-Gm-Message-State: ALoCoQlnp5FRxXSXaqD8wTWVwlc/1xtaPEDy8oMpmWLWvAQUZ4X+e2waJ8siBIul394MO2DRx6YC
X-Received: by 10.14.221.4 with SMTP id q4mr17736010eep.47.1391795015488;
	Fri, 07 Feb 2014 09:43:35 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.33
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:34 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:07 +0000
Message-Id: <1391794991-5919-9-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
	generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
functions specific to x86 and PCI.

Split the framework into 3 distinct files:
    - iommu.c: contains generic functions shared between x86 and ARM
               (once ARM is supported)
    - iommu_pci.c: contains specific functions for PCI passthrough
    - iommu_x86.c: contains specific functions for x86

iommu_pci.c will only be compiled when PCI is supported by the architecture
(i.e. when HAS_PCI is defined).

This patch is mostly code movement into new files.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/Makefile    |    6 +-
 xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
 xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu_x86.c |   65 +++++
 xen/drivers/passthrough/vtd/iommu.c |   42 ++--
 xen/include/asm-x86/iommu.h         |   46 ++++
 xen/include/xen/hvm/iommu.h         |    1 +
 xen/include/xen/iommu.h             |   42 ++--
 8 files changed, 625 insertions(+), 518 deletions(-)
 create mode 100644 xen/drivers/passthrough/iommu_pci.c
 create mode 100644 xen/drivers/passthrough/iommu_x86.c
 create mode 100644 xen/include/asm-x86/iommu.h

diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 7c40fa5..51e0a0d 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -3,5 +3,7 @@ subdir-$(x86) += amd
 subdir-$(x86_64) += x86
 
 obj-y += iommu.o
-obj-y += io.o
-obj-y += pci.o
+obj-$(x86) += iommu_x86.o
+obj-$(HAS_PCI) += iommu_pci.o
+obj-$(x86) += io.o
+obj-$(HAS_PCI) += pci.o
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 0a26956..d733878 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -24,7 +24,6 @@
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
-static int iommu_populate_page_table(struct domain *d);
 static void iommu_dump_p2m_table(unsigned char key);
 
 /*
@@ -180,86 +179,7 @@ void __init iommu_dom0_init(struct domain *d)
     return hd->platform_ops->dom0_init(d);
 }
 
-int iommu_add_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    int rc;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
-    if ( rc || !pdev->phantom_stride )
-        return rc;
-
-    for ( devfn = pdev->devfn ; ; )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            return 0;
-        rc = hd->platform_ops->add_device(devfn, pdev);
-        if ( rc )
-            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
-                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-    }
-}
-
-int iommu_enable_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops ||
-         !hd->platform_ops->enable_device )
-        return 0;
-
-    return hd->platform_ops->enable_device(pdev);
-}
-
-int iommu_remove_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
-    {
-        int rc;
-
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->remove_device(devfn, pdev);
-        if ( !rc )
-            continue;
-
-        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
-               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-        return rc;
-    }
-
-    return hd->platform_ops->remove_device(pdev->devfn, pdev);
-}
-
-static void iommu_teardown(struct domain *d)
+void iommu_teardown(struct domain *d)
 {
     const struct hvm_iommu *hd = domain_hvm_iommu(d);
 
@@ -268,151 +188,6 @@ static void iommu_teardown(struct domain *d)
     tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
-/*
- * If the device isn't owned by dom0, it means it already
- * has been assigned to other domain, or it doesn't exist.
- */
-static int device_assigned(u16 seg, u8 bus, u8 devfn)
-{
-    struct pci_dev *pdev;
-
-    spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    spin_unlock(&pcidevs_lock);
-
-    return pdev ? 0 : -EBUSY;
-}
-
-static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int rc = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( unlikely(!need_iommu(d) &&
-            (d->arch.hvm_domain.mem_sharing_enabled ||
-             d->mem_event->paging.ring_page)) )
-        return -EXDEV;
-
-    if ( !spin_trylock(&pcidevs_lock) )
-        return -ERESTART;
-
-    if ( need_iommu(d) <= 0 )
-    {
-        if ( !iommu_use_hap_pt(d) )
-        {
-            rc = iommu_populate_page_table(d);
-            if ( rc )
-            {
-                spin_unlock(&pcidevs_lock);
-                return rc;
-            }
-        }
-        d->need_iommu = 1;
-    }
-
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    if ( !pdev )
-    {
-        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
-        goto done;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
-        goto done;
-
-    for ( ; pdev->phantom_stride; rc = 0 )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->assign_device(d, devfn, pdev);
-        if ( rc )
-            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
-                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   rc);
-    }
-
- done:
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-    spin_unlock(&pcidevs_lock);
-
-    return rc;
-}
-
-static int iommu_populate_page_table(struct domain *d)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct page_info *page;
-    int rc = 0, n = 0;
-
-    d->need_iommu = -1;
-
-    this_cpu(iommu_dont_flush_iotlb) = 1;
-    spin_lock(&d->page_alloc_lock);
-
-    if ( unlikely(d->is_dying) )
-        rc = -ESRCH;
-
-    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
-    {
-        if ( is_hvm_domain(d) ||
-            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
-        {
-            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
-            rc = hd->platform_ops->map_page(
-                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
-                IOMMUF_readable|IOMMUF_writable);
-            if ( rc )
-            {
-                page_list_add(page, &d->page_list);
-                break;
-            }
-        }
-        page_list_add_tail(page, &d->arch.relmem_list);
-        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
-             hypercall_preempt_check() )
-            rc = -ERESTART;
-    }
-
-    if ( !rc )
-    {
-        /*
-         * The expectation here is that generally there are many normal pages
-         * on relmem_list (the ones we put there) and only few being in an
-         * offline/broken state. The latter ones are always at the head of the
-         * list. Hence we first move the whole list, and then move back the
-         * first few entries.
-         */
-        page_list_move(&d->page_list, &d->arch.relmem_list);
-        while ( (page = page_list_first(&d->page_list)) != NULL &&
-                (page->count_info & (PGC_state|PGC_broken)) )
-        {
-            page_list_del(page, &d->page_list);
-            page_list_add_tail(page, &d->arch.relmem_list);
-        }
-    }
-
-    spin_unlock(&d->page_alloc_lock);
-    this_cpu(iommu_dont_flush_iotlb) = 0;
-
-    if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
-        iommu_teardown(d);
-
-    return rc;
-}
-
-
 void iommu_domain_destroy(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
@@ -499,53 +274,6 @@ void iommu_iotlb_flush_all(struct domain *d)
     hd->platform_ops->iotlb_flush_all(d);
 }
 
-/* caller should hold the pcidevs_lock */
-int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev = NULL;
-    int ret = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
-    if ( !pdev )
-        return -ENODEV;
-
-    while ( pdev->phantom_stride )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-        if ( !ret )
-            continue;
-
-        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
-               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
-        return ret;
-    }
-
-    devfn = pdev->devfn;
-    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-    if ( ret )
-    {
-        dprintk(XENLOG_G_ERR,
-                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
-                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-        return ret;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-
-    return ret;
-}
-
 int __init iommu_setup(void)
 {
     int rc = -ENODEV;
@@ -586,86 +314,6 @@ int __init iommu_setup(void)
     return rc;
 }
 
-static int iommu_get_device_group(
-    struct domain *d, u16 seg, u8 bus, u8 devfn,
-    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int group_id, sdev_id;
-    u32 bdf;
-    int i = 0;
-    const struct iommu_ops *ops = hd->platform_ops;
-
-    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
-        return 0;
-
-    group_id = ops->get_device_group_id(seg, bus, devfn);
-
-    spin_lock(&pcidevs_lock);
-    for_each_pdev( d, pdev )
-    {
-        if ( (pdev->seg != seg) ||
-             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
-            continue;
-
-        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
-            continue;
-
-        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
-        if ( (sdev_id == group_id) && (i < max_sdevs) )
-        {
-            bdf = 0;
-            bdf |= (pdev->bus & 0xff) << 16;
-            bdf |= (pdev->devfn & 0xff) << 8;
-
-            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
-            {
-                spin_unlock(&pcidevs_lock);
-                return -1;
-            }
-            i++;
-        }
-    }
-    spin_unlock(&pcidevs_lock);
-
-    return i;
-}
-
-void iommu_update_ire_from_apic(
-    unsigned int apic, unsigned int reg, unsigned int value)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    ops->update_ire_from_apic(apic, reg, value);
-}
-
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
-}
-
-void iommu_read_msi_from_ire(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    if ( iommu_intremap )
-        ops->read_msi_from_ire(msi_desc, msg);
-}
-
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->read_apic_from_ire(apic, reg);
-}
-
-int __init iommu_setup_hpet_msi(struct msi_desc *msi)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
-}
-
 void iommu_resume()
 {
     const struct iommu_ops *ops = iommu_get_ops();
@@ -696,125 +344,6 @@ void iommu_crash_shutdown(void)
     iommu_enabled = iommu_intremap = 0;
 }
 
-int iommu_do_domctl(
-    struct xen_domctl *domctl, struct domain *d,
-    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
-{
-    u16 seg;
-    u8 bus, devfn;
-    int ret = 0;
-
-    if ( !iommu_enabled )
-        return -ENOSYS;
-
-    switch ( domctl->cmd )
-    {
-    case XEN_DOMCTL_get_device_group:
-    {
-        u32 max_sdevs;
-        XEN_GUEST_HANDLE_64(uint32) sdevs;
-
-        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.get_device_group.machine_sbdf >> 16;
-        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
-        max_sdevs = domctl->u.get_device_group.max_sdevs;
-        sdevs = domctl->u.get_device_group.sdev_array;
-
-        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
-        if ( ret < 0 )
-        {
-            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
-            ret = -EFAULT;
-            domctl->u.get_device_group.num_sdevs = 0;
-        }
-        else
-        {
-            domctl->u.get_device_group.num_sdevs = ret;
-            ret = 0;
-        }
-        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
-            ret = -EFAULT;
-    }
-    break;
-
-    case XEN_DOMCTL_test_assign_device:
-        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        if ( device_assigned(seg, bus, devfn) )
-        {
-            printk(XENLOG_G_INFO
-                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-            ret = -EINVAL;
-        }
-        break;
-
-    case XEN_DOMCTL_assign_device:
-        if ( unlikely(d->is_dying) )
-        {
-            ret = -EINVAL;
-            break;
-        }
-
-        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        ret = device_assigned(seg, bus, devfn) ?:
-              assign_device(d, seg, bus, devfn);
-        if ( ret == -ERESTART )
-            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
-                                                "h", u_domctl);
-        else if ( ret )
-            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
-                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    case XEN_DOMCTL_deassign_device:
-        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        spin_lock(&pcidevs_lock);
-        ret = deassign_device(d, seg, bus, devfn);
-        spin_unlock(&pcidevs_lock);
-        if ( ret )
-            printk(XENLOG_G_ERR
-                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    default:
-        ret = -ENOSYS;
-        break;
-    }
-
-    return ret;
-}
-
 static void iommu_dump_p2m_table(unsigned char key)
 {
     struct domain *d;
diff --git a/xen/drivers/passthrough/iommu_pci.c b/xen/drivers/passthrough/iommu_pci.c
new file mode 100644
index 0000000..5b9d937
--- /dev/null
+++ b/xen/drivers/passthrough/iommu_pci.c
@@ -0,0 +1,468 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xsm/xsm.h>
+
+static int iommu_populate_page_table(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct page_info *page;
+    int rc = 0, n = 0;
+
+    d->need_iommu = -1;
+
+    this_cpu(iommu_dont_flush_iotlb) = 1;
+    spin_lock(&d->page_alloc_lock);
+
+    if ( unlikely(d->is_dying) )
+        rc = -ESRCH;
+
+    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
+    {
+        if ( is_hvm_domain(d) ||
+            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
+        {
+            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
+            rc = hd->platform_ops->map_page(
+                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
+                IOMMUF_readable|IOMMUF_writable);
+            if ( rc )
+            {
+                page_list_add(page, &d->page_list);
+                break;
+            }
+        }
+        page_list_add_tail(page, &d->arch.relmem_list);
+        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
+             hypercall_preempt_check() )
+            rc = -ERESTART;
+    }
+
+    if ( !rc )
+    {
+        /*
+         * The expectation here is that generally there are many normal pages
+         * on relmem_list (the ones we put there) and only few being in an
+         * offline/broken state. The latter ones are always at the head of the
+         * list. Hence we first move the whole list, and then move back the
+         * first few entries.
+         */
+        page_list_move(&d->page_list, &d->arch.relmem_list);
+        while ( (page = page_list_first(&d->page_list)) != NULL &&
+                (page->count_info & (PGC_state|PGC_broken)) )
+        {
+            page_list_del(page, &d->page_list);
+            page_list_add_tail(page, &d->arch.relmem_list);
+        }
+    }
+
+    spin_unlock(&d->page_alloc_lock);
+    this_cpu(iommu_dont_flush_iotlb) = 0;
+
+    if ( !rc )
+        iommu_iotlb_flush_all(d);
+    else if ( rc != -ERESTART )
+        iommu_teardown(d);
+
+    return rc;
+}
+
+int iommu_add_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    int rc;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
+    if ( rc || !pdev->phantom_stride )
+        return rc;
+
+    for ( devfn = pdev->devfn ; ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            return 0;
+        rc = hd->platform_ops->add_device(devfn, pdev);
+        if ( rc )
+            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+    }
+}
+
+int iommu_enable_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops ||
+         !hd->platform_ops->enable_device )
+        return 0;
+
+    return hd->platform_ops->enable_device(pdev);
+}
+
+int iommu_remove_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
+    {
+        int rc;
+
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->remove_device(devfn, pdev);
+        if ( !rc )
+            continue;
+
+        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
+               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        return rc;
+    }
+
+    return hd->platform_ops->remove_device(pdev->devfn, pdev);
+}
+
+/*
+ * If the device isn't owned by dom0, it means it already
+ * has been assigned to other domain, or it doesn't exist.
+ */
+static int device_assigned(u16 seg, u8 bus, u8 devfn)
+{
+    struct pci_dev *pdev = NULL;
+
+    spin_lock(&pcidevs_lock);
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    spin_unlock(&pcidevs_lock);
+
+    return pdev ? 0 : -EBUSY;
+}
+
+static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int rc = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    /* Prevent device assign if mem paging or mem sharing have been 
+     * enabled for this domain */
+    if ( unlikely(!need_iommu(d) &&
+            (mem_sharing_enabled(d) ||
+             d->mem_event->paging.ring_page)) )
+        return -EXDEV;
+
+    if ( !spin_trylock(&pcidevs_lock) )
+        return -ERESTART;
+
+    if ( need_iommu(d) <= 0 )
+    {
+        if ( !iommu_use_hap_pt(d) )
+        {
+            rc = iommu_populate_page_table(d);
+            if ( rc )
+            {
+                spin_unlock(&pcidevs_lock);
+                return rc;
+            }
+        }
+        d->need_iommu = 1;
+    }
+
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    if ( !pdev )
+    {
+        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
+        goto done;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
+        goto done;
+
+    for ( ; pdev->phantom_stride; rc = 0 )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->assign_device(d, devfn, pdev);
+        if ( rc )
+            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
+                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   rc);
+    }
+
+ done:
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+    spin_unlock(&pcidevs_lock);
+
+    return rc;
+}
+
+/* caller should hold the pcidevs_lock */
+int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev = NULL;
+    int ret = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
+    if ( !pdev )
+        return -ENODEV;
+
+    while ( pdev->phantom_stride )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+        if ( !ret )
+            continue;
+
+        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
+               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        return ret;
+    }
+
+    devfn = pdev->devfn;
+    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+    if ( ret )
+    {
+        dprintk(XENLOG_G_ERR,
+                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
+                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return ret;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+
+    return ret;
+}
+
+static int iommu_get_device_group(
+    struct domain *d, u16 seg, u8 bus, u8 devfn,
+    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int group_id, sdev_id;
+    u32 bdf;
+    int i = 0;
+    const struct iommu_ops *ops = hd->platform_ops;
+
+    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
+        return 0;
+
+    group_id = ops->get_device_group_id(seg, bus, devfn);
+
+    spin_lock(&pcidevs_lock);
+    for_each_pdev( d, pdev )
+    {
+        if ( (pdev->seg != seg) ||
+             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
+            continue;
+
+        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
+            continue;
+
+        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
+        if ( (sdev_id == group_id) && (i < max_sdevs) )
+        {
+            bdf = 0;
+            bdf |= (pdev->bus & 0xff) << 16;
+            bdf |= (pdev->devfn & 0xff) << 8;
+
+            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
+            {
+                spin_unlock(&pcidevs_lock);
+                return -1;
+            }
+            i++;
+        }
+    }
+
+    spin_unlock(&pcidevs_lock);
+
+    return i;
+}
+
+int iommu_do_domctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    u16 seg;
+    u8 bus, devfn;
+    int ret = 0;
+
+    if ( !iommu_enabled )
+        return -ENOSYS;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_get_device_group:
+    {
+        u32 max_sdevs;
+        XEN_GUEST_HANDLE_64(uint32) sdevs;
+
+        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.get_device_group.machine_sbdf >> 16;
+        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
+        max_sdevs = domctl->u.get_device_group.max_sdevs;
+        sdevs = domctl->u.get_device_group.sdev_array;
+
+        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
+        if ( ret < 0 )
+        {
+            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
+            ret = -EFAULT;
+            domctl->u.get_device_group.num_sdevs = 0;
+        }
+        else
+        {
+            domctl->u.get_device_group.num_sdevs = ret;
+            ret = 0;
+        }
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
+            ret = -EFAULT;
+    }
+    break;
+
+    case XEN_DOMCTL_test_assign_device:
+        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        if ( device_assigned(seg, bus, devfn) )
+        {
+            printk(XENLOG_G_INFO
+                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            ret = -EINVAL;
+        }
+        break;
+
+    case XEN_DOMCTL_assign_device:
+        if ( unlikely(d->is_dying) )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
+        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        ret = device_assigned(seg, bus, devfn) ?:
+              assign_device(d, seg, bus, devfn);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                                "h", u_domctl);
+        else if ( ret )
+            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
+                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    case XEN_DOMCTL_deassign_device:
+        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        spin_lock(&pcidevs_lock);
+        ret = deassign_device(d, seg, bus, devfn);
+        spin_unlock(&pcidevs_lock);
+        if ( ret )
+            printk(XENLOG_G_ERR
+                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/passthrough/iommu_x86.c b/xen/drivers/passthrough/iommu_x86.c
new file mode 100644
index 0000000..bd3c23b
--- /dev/null
+++ b/xen/drivers/passthrough/iommu_x86.c
@@ -0,0 +1,65 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xsm/xsm.h>
+
+void iommu_update_ire_from_apic(
+    unsigned int apic, unsigned int reg, unsigned int value)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    ops->update_ire_from_apic(apic, reg, value);
+}
+
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
+}
+
+void iommu_read_msi_from_ire(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    if ( iommu_intremap )
+        ops->read_msi_from_ire(msi_desc, msg);
+}
+
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->read_apic_from_ire(apic, reg);
+}
+
+int __init iommu_setup_hpet_msi(struct msi_desc *msi)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d5ce5b7..faa794b 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1784,31 +1784,31 @@ static int intel_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
                      int order, int present)
-{
-    struct acpi_drhd_unit *drhd;
-    struct iommu *iommu = NULL;
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    int flush_dev_iotlb;
-    int iommu_domid;
+    {
+        struct acpi_drhd_unit *drhd;
+        struct iommu *iommu = NULL;
+        struct hvm_iommu *hd = domain_hvm_iommu(d);
+        int flush_dev_iotlb;
+        int iommu_domid;
 
-    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
+        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
-    for_each_drhd_unit ( drhd )
-    {
-        iommu = drhd->iommu;
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
-            continue;
+        for_each_drhd_unit ( drhd )
+        {
+            iommu = drhd->iommu;
+            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+                continue;
 
-        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
-        iommu_domid= domain_iommu_domid(d, iommu);
-        if ( iommu_domid == -1 )
-            continue;
-        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
-                                   (paddr_t)gfn << PAGE_SHIFT_4K,
-                                   order, !present, flush_dev_iotlb) )
-            iommu_flush_write_buffer(iommu);
+            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
+            iommu_domid= domain_iommu_domid(d, iommu);
+            if ( iommu_domid == -1 )
+                continue;
+            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
+                                       (paddr_t)gfn << PAGE_SHIFT_4K,
+                                       order, !present, flush_dev_iotlb) )
+                iommu_flush_write_buffer(iommu);
+        }
     }
-}
 
 static int vtd_ept_page_compatible(struct iommu *iommu)
 {
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
new file mode 100644
index 0000000..34c1896
--- /dev/null
+++ b/xen/include/asm-x86/iommu.h
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_X86_IOMMU_H__
+#define __ARCH_X86_IOMMU_H__
+
+#define MAX_IOMMUS 32
+
+#include <asm/msi.h>
+
+void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
+int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
+void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
+int iommu_setup_hpet_msi(struct msi_desc *);
+
+void iommu_share_p2m_table(struct domain *d);
+
+/* While VT-d specific, this must get declared in a generic header. */
+int adjust_vtd_irq_affinities(void);
+void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_supports_eim(void);
+int iommu_enable_x2apic_IR(void);
+void iommu_disable_x2apic_IR(void);
+void iommu_set_dom0_mapping(struct domain *d);
+
+#endif /* !__ARCH_X86_IOMMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 26539e0..2abb4e3 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -21,6 +21,7 @@
 #define __XEN_HVM_IOMMU_H__
 
 #include <xen/iommu.h>
+#include <asm/hvm/iommu.h>
 
 struct g2m_ioport {
     struct list_head list;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index fcbc432..60df9d6 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -25,6 +25,7 @@
 #include <xen/pci.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
+#include <asm/iommu.h>
 
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
@@ -39,17 +40,12 @@ extern bool_t amd_iommu_perdev_intremap;
 
 #define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
 
-#define MAX_IOMMUS 32
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
 #define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
 
 int iommu_setup(void);
-int iommu_supports_eim(void);
-int iommu_enable_x2apic_IR(void);
-void iommu_disable_x2apic_IR(void);
 
 int iommu_add_device(struct pci_dev *pdev);
 int iommu_enable_device(struct pci_dev *pdev);
@@ -59,6 +55,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+/* Function used internally, use iommu_domain_destroy */
+void iommu_teardown(struct domain *d);
+
 /* iommu_map_page() takes flags to direct the mapping operation. */
 #define _IOMMUF_readable 0
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
@@ -67,9 +66,8 @@ int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_domain_teardown(struct domain *d);
 
+#ifdef HAS_PCI
 void pt_pci_init(void);
 
 struct pirq;
@@ -84,62 +82,60 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 bool_t pt_irq_need_timer(uint32_t flags);
 
 #define PT_IRQ_TIME_OUT MILLISECS(8)
+#endif /* HAS_PCI */
 
+#ifdef CONFIG_X86
 struct msi_desc;
 struct msi_msg;
+#endif /* CONFIG_X86 */
+
 struct page_info;
 
 struct iommu_ops {
     int (*init)(struct domain *d);
     void (*dom0_init)(struct domain *d);
+#ifdef HAS_PCI
     int (*add_device)(u8 devfn, struct pci_dev *);
     int (*enable_device)(struct pci_dev *pdev);
     int (*remove_device)(u8 devfn, struct pci_dev *);
     int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
+    int (*reassign_device)(struct domain *s, struct domain *t,
+			   u8 devfn, struct pci_dev *);
+    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#endif /* HAS_PCI */
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
-    int (*reassign_device)(struct domain *s, struct domain *t,
-			   u8 devfn, struct pci_dev *);
-    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
     int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
     void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
+    void (*share_p2m)(struct domain *d);
+#endif /* CONFIG_X86 */
     void (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
-int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
-void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
-int iommu_setup_hpet_msi(struct msi_desc *);
-
 void iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 
-void iommu_set_dom0_mapping(struct domain *d);
-void iommu_share_p2m_table(struct domain *d);
-
+#if HAS_PCI
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
+#endif
 
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
 
-/* While VT-d specific, this must get declared in a generic header. */
-int adjust_vtd_irq_affinities(void);
-
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
  * avoid unecessary iotlb_flush in the low level IOMMU code.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSi-0005Fn-9Z; Fri, 07 Feb 2014 17:43:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSg-0005F7-TT
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:35 +0000
Received: from [85.158.143.35:39583] by server-1.bemta-4.messagelabs.com id
	CB/4F-31661-64B15F25; Fri, 07 Feb 2014 17:43:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1391795013!4004021!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26473 invoked from network); 7 Feb 2014 17:43:33 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:33 -0000
Received: by mail-ee0-f44.google.com with SMTP id c13so1688705eek.17
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:33 -0800 (PST)
X-Received: by 10.14.29.6 with SMTP id h6mr5738816eea.84.1391795013454;
	Fri, 07 Feb 2014 09:43:33 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.32
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:32 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:06 +0000
Message-Id: <1391794991-5919-8-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 07/12] xen/passthrough: iommu: Don't need
	to map dom0 page when the PT is shared
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently iommu_dom0_init walks the page list and calls the map_page
callback on each page.

In both the AMD and VT-d drivers, the callback returns immediately if the
page table is shared with the processor, so Xen can safely skip walking
the page list in that case.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 26a5d91..0a26956 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -157,7 +157,7 @@ void __init iommu_dom0_init(struct domain *d)
 
     register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
-    if ( need_iommu(d) )
+    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
     {
         struct page_info *page;
         unsigned int i = 0;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSd-0005Ds-H6; Fri, 07 Feb 2014 17:43:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSb-0005D5-6H
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:29 +0000
Received: from [85.158.139.211:13875] by server-4.bemta-5.messagelabs.com id
	6D/59-08092-04B15F25; Fri, 07 Feb 2014 17:43:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391795007!2408396!1
X-Originating-IP: [209.85.215.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2395 invoked from network); 7 Feb 2014 17:43:27 -0000
Received: from mail-ea0-f175.google.com (HELO mail-ea0-f175.google.com)
	(209.85.215.175)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:27 -0000
Received: by mail-ea0-f175.google.com with SMTP id z10so1711939ead.6
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:27 -0800 (PST)
X-Received: by 10.15.33.193 with SMTP id c41mr5611887eev.79.1391795007417;
	Fri, 07 Feb 2014 09:43:27 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.25
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:26 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:02 +0000
Message-Id: <1391794991-5919-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 03/12] xen/passthrough: vtd: Don't export
	iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_set_pgd() is only used internally in
xen/drivers/passthrough/vtd/iommu.c, so it can be made static.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 xen/include/xen/iommu.h             |    1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a8d33fc..d5ce5b7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
 /*
  * set VT-d page table directory to EPT table if allowed
  */
-void iommu_set_pgd(struct domain *d)
+static void iommu_set_pgd(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
     mfn_t pgd_mfn;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 8bb0a1d..fcbc432 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_set_pgd(struct domain *d);
 void iommu_domain_teardown(struct domain *d);
 
 void pt_pci_init(void);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSo-0005LL-Ha; Fri, 07 Feb 2014 17:43:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSm-0005IM-Pl
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:41 +0000
Received: from [85.158.137.68:64227] by server-8.bemta-3.messagelabs.com id
	AC/94-16039-C4B15F25; Fri, 07 Feb 2014 17:43:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391795018!373550!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23609 invoked from network); 7 Feb 2014 17:43:38 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:38 -0000
Received: by mail-ee0-f43.google.com with SMTP id c41so1697652eek.30
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:38 -0800 (PST)
X-Received: by 10.14.29.6 with SMTP id h6mr5739299eea.84.1391795018524;
	Fri, 07 Feb 2014 09:43:38 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.37
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:37 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:09 +0000
Message-Id: <1391794991-5919-11-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 10/12] xen/passthrough: Introduce IOMMU
	ARM architecture
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds the architecture support needed to use an IOMMU on ARM;
it does not include any IOMMU driver.

The code walks the device tree and initializes every IOMMU it finds. A
platform may have multiple IOMMUs, but they must all be handled by the
same driver: for now there is no support for using multiple IOMMU drivers
at runtime.

Each new IOMMU driver should contain:

static const char * const myiommu_dt_compat[] __initconst =
{
    /* List of devices compatible with this driver, matched against
     * the "compatible" property in the device tree.
     */
    NULL,
};

DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
        .compatible = myiommu_dt_compat,
        .init = myiommu_init,
DT_DEVICE_END

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/Rules.mk                |    1 +
 xen/arch/arm/domain.c                |    7 ++++
 xen/arch/arm/domain_build.c          |    2 ++
 xen/arch/arm/p2m.c                   |    4 +++
 xen/arch/arm/setup.c                 |    2 ++
 xen/drivers/passthrough/Makefile     |    1 +
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/iommu.c  |   65 ++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/device.h         |    3 +-
 xen/include/asm-arm/domain.h         |    2 ++
 xen/include/asm-arm/hvm/iommu.h      |   10 ++++++
 xen/include/asm-arm/iommu.h          |   36 +++++++++++++++++++
 12 files changed, 133 insertions(+), 1 deletion(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 57f2eb1..1703551 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -9,6 +9,7 @@
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
+HAS_PASSTHROUGH := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..b3c0dda 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -557,6 +557,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     if ( (d->domain_id == 0) && (rc = domain_vuart_init(d)) )
         goto fail;
 
+    if ( (rc = iommu_domain_init(d)) != 0 )
+        goto fail;
+
     return 0;
 
 fail:
@@ -568,6 +571,10 @@ fail:
 
 void arch_domain_destroy(struct domain *d)
 {
+    /* IOMMU page table is shared with P2M, always call
+     * iommu_domain_destroy() before p2m_teardown().
+     */
+    iommu_domain_destroy(d);
     p2m_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 6d9a801..1f845d7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1091,6 +1091,8 @@ int construct_dom0(struct domain *d)
         }
     }
 
+    iommu_dom0_init(d);
+
     return 0;
 }
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..57304d0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -401,12 +401,16 @@ static int create_p2m_entries(struct domain *d,
 
     if ( flush )
     {
+        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
+        unsigned long egfn = paddr_to_pfn(end_gpaddr);
+
         /* At the beginning of the function, Xen is updating VTTBR
          * with the domain where the mappings are created. In this
          * case it's only necessary to flush TLBs on every CPUs with
          * the current VMID (our domain).
          */
         flush_tlb();
+        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
     }
 
     if ( op == ALLOCATE || op == INSERT )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f6d713..5a687d1 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -725,6 +725,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     local_irq_enable();
     local_abort_enable();
 
+    iommu_setup(); /* setup iommu if available */
+
     smp_prepare_cpus(cpus);
 
     initialize_keytable();
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 51e0a0d..aded1ea 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -1,6 +1,7 @@
 subdir-$(x86) += vtd
 subdir-$(x86) += amd
 subdir-$(x86_64) += x86
+subdir-$(arm) += arm
 
 obj-y += iommu.o
 obj-$(x86) += iommu_x86.o
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
new file mode 100644
index 0000000..0484b79
--- /dev/null
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -0,0 +1 @@
+obj-y += iommu.o
diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
new file mode 100644
index 0000000..7cf36cd
--- /dev/null
+++ b/xen/drivers/passthrough/arm/iommu.c
@@ -0,0 +1,65 @@
+/*
+ * xen/drivers/passthrough/arm/iommu.c
+ *
+ * Generic IOMMU framework via the device tree
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2013 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/iommu.h>
+#include <xen/device_tree.h>
+#include <asm/device.h>
+
+static const struct iommu_ops *iommu_ops;
+
+const struct iommu_ops *iommu_get_ops(void)
+{
+    return iommu_ops;
+}
+
+void __init iommu_set_ops(const struct iommu_ops *ops)
+{
+    BUG_ON(ops == NULL);
+
+    if ( iommu_ops && iommu_ops != ops )
+        printk("WARNING: IOMMU ops already set to a different value\n");
+
+    iommu_ops = ops;
+}
+
+int __init iommu_hardware_setup(void)
+{
+    struct dt_device_node *np;
+    int rc;
+    unsigned int num_iommus = 0;
+
+    dt_for_each_device_node(dt_host, np)
+    {
+        rc = device_init(np, DEVICE_IOMMU, NULL);
+        if ( !rc )
+            num_iommus++;
+    }
+
+    return ( num_iommus > 0 ) ? 0 : -ENODEV;
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+}
diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
index 9e47ca6..ed04344 100644
--- a/xen/include/asm-arm/device.h
+++ b/xen/include/asm-arm/device.h
@@ -6,7 +6,8 @@
 
 enum device_type
 {
-    DEVICE_SERIAL
+    DEVICE_SERIAL,
+    DEVICE_IOMMU,
 };
 
 struct device_desc {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..15d814a 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -9,6 +9,7 @@
 #include <asm/vfp.h>
 #include <public/hvm/params.h>
 #include <xen/serial.h>
+#include <xen/hvm/iommu.h>
 
 /* Represents state corresponding to a block of 32 interrupts */
 struct vgic_irq_rank {
@@ -72,6 +73,7 @@ struct pending_irq
 struct hvm_domain
 {
     uint64_t              params[HVM_NR_PARAMS];
+    struct hvm_iommu      hvm_iommu;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
new file mode 100644
index 0000000..461c8cf
--- /dev/null
+++ b/xen/include/asm-arm/hvm/iommu.h
@@ -0,0 +1,10 @@
+#ifndef __ASM_ARM_HVM_IOMMU_H_
+#define __ASM_ARM_HVM_IOMMU_H_
+
+struct arch_hvm_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
+#endif /* __ASM_ARM_HVM_IOMMU_H_ */
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
new file mode 100644
index 0000000..81eec83
--- /dev/null
+++ b/xen/include/asm-arm/iommu.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_ARM_IOMMU_H__
+#define __ARCH_ARM_IOMMU_H__
+
+/* Always share P2M Table between the CPU and the IOMMU */
+#define iommu_use_hap_pt(d) (1)
+#define domain_hvm_iommu(d) (&d->arch.hvm_domain.hvm_iommu)
+
+const struct iommu_ops *iommu_get_ops(void);
+void __init iommu_set_ops(const struct iommu_ops *ops);
+
+int __init iommu_hardware_setup(void);
+
+#endif /* __ARCH_ARM_IOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSo-0005LL-Ha; Fri, 07 Feb 2014 17:43:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSm-0005IM-Pl
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:41 +0000
Received: from [85.158.137.68:64227] by server-8.bemta-3.messagelabs.com id
	AC/94-16039-C4B15F25; Fri, 07 Feb 2014 17:43:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391795018!373550!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23609 invoked from network); 7 Feb 2014 17:43:38 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:38 -0000
Received: by mail-ee0-f43.google.com with SMTP id c41so1697652eek.30
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=Ylf5wCQqy+iagUcUn2Lzjj7+gUCArC0HlLCDwzCYQis=;
	b=S2hlKfztkwgpGF44FN4q7PNopoMDs9QM6NgT9owhwt4gkALSFrwXM308WM0M/W66Ra
	Y09Upzf9zKBXwYsubfzZQ6MUGdFACdnOGwsIKALGCi4dd6u/g/AWwfAFkkeAhuqPjvQ2
	YquP5ikJwnu+YyuKiv4wd5s1Gg5bTF7Wtx0S2vDlTiY+IqKVyvZ/c4DWohuV7/9CoBd3
	ZW5MVrd+v+1kJ4h3TdWBmRYckB4TPNaFNBs3r9ZDKibGdKb3gs7JE0rrbnffMgLomcKq
	2z+mFwj8ITkEpwnzpljXACPMaLBqUr4Y6psIpw/uFHtIeHH1e4dyhEjDskZHla1ShAzS
	dfWA==
X-Gm-Message-State: ALoCoQnaVEVr1/HtRkDBEUWBbGn0VpVmI0G34HVUKAF5nHUddd0DqzIKbpVKuK86duZ7FtnNF9G2
X-Received: by 10.14.29.6 with SMTP id h6mr5739299eea.84.1391795018524;
	Fri, 07 Feb 2014 09:43:38 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.37
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:37 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:09 +0000
Message-Id: <1391794991-5919-11-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 10/12] xen/passthrough: Introduce IOMMU
	ARM architecture
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the architecture code to use an IOMMU on ARM. No
IOMMU driver is included in this patch.

The code walks the device tree and initializes every IOMMU it finds.
It's possible to have multiple IOMMUs on the same platform, but they
must all be handled by the same driver. For now, there is no support
for using multiple IOMMU drivers at runtime.

Each new IOMMU driver should contain:

static const char * const myiommu_dt_compat[] __initconst =
{
    /* List of devices compatible with the driver. Matched against
     * the "compatible" property in the device tree.
     */
    NULL,
};

DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
        .compatible = myiommu_dt_compat,
        .init = myiommu_init,
DT_DEVICE_END

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/Rules.mk                |    1 +
 xen/arch/arm/domain.c                |    7 ++++
 xen/arch/arm/domain_build.c          |    2 ++
 xen/arch/arm/p2m.c                   |    4 +++
 xen/arch/arm/setup.c                 |    2 ++
 xen/drivers/passthrough/Makefile     |    1 +
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/iommu.c  |   65 ++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/device.h         |    3 +-
 xen/include/asm-arm/domain.h         |    2 ++
 xen/include/asm-arm/hvm/iommu.h      |   10 ++++++
 xen/include/asm-arm/iommu.h          |   36 +++++++++++++++++++
 12 files changed, 133 insertions(+), 1 deletion(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 57f2eb1..1703551 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -9,6 +9,7 @@
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
+HAS_PASSTHROUGH := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..b3c0dda 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -557,6 +557,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     if ( (d->domain_id == 0) && (rc = domain_vuart_init(d)) )
         goto fail;
 
+    if ( (rc = iommu_domain_init(d)) != 0 )
+        goto fail;
+
     return 0;
 
 fail:
@@ -568,6 +571,10 @@ fail:
 
 void arch_domain_destroy(struct domain *d)
 {
+    /* IOMMU page table is shared with P2M, always call
+     * iommu_domain_destroy() before p2m_teardown().
+     */
+    iommu_domain_destroy(d);
     p2m_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 6d9a801..1f845d7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -1091,6 +1091,8 @@ int construct_dom0(struct domain *d)
         }
     }
 
+    iommu_dom0_init(d);
+
     return 0;
 }
 
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..57304d0 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -401,12 +401,16 @@ static int create_p2m_entries(struct domain *d,
 
     if ( flush )
     {
+        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
+        unsigned long egfn = paddr_to_pfn(end_gpaddr);
+
         /* At the beginning of the function, Xen is updating VTTBR
          * with the domain where the mappings are created. In this
          * case it's only necessary to flush TLBs on every CPUs with
          * the current VMID (our domain).
          */
         flush_tlb();
+        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
     }
 
     if ( op == ALLOCATE || op == INSERT )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f6d713..5a687d1 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -725,6 +725,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     local_irq_enable();
     local_abort_enable();
 
+    iommu_setup(); /* setup iommu if available */
+
     smp_prepare_cpus(cpus);
 
     initialize_keytable();
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 51e0a0d..aded1ea 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -1,6 +1,7 @@
 subdir-$(x86) += vtd
 subdir-$(x86) += amd
 subdir-$(x86_64) += x86
+subdir-$(arm) += arm
 
 obj-y += iommu.o
 obj-$(x86) += iommu_x86.o
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
new file mode 100644
index 0000000..0484b79
--- /dev/null
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -0,0 +1 @@
+obj-y += iommu.o
diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
new file mode 100644
index 0000000..7cf36cd
--- /dev/null
+++ b/xen/drivers/passthrough/arm/iommu.c
@@ -0,0 +1,65 @@
+/*
+ * xen/drivers/passthrough/arm/iommu.c
+ *
+ * Generic IOMMU framework via the device tree
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2013 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/iommu.h>
+#include <xen/device_tree.h>
+#include <asm/device.h>
+
+static const struct iommu_ops *iommu_ops;
+
+const struct iommu_ops *iommu_get_ops(void)
+{
+    return iommu_ops;
+}
+
+void __init iommu_set_ops(const struct iommu_ops *ops)
+{
+    BUG_ON(ops == NULL);
+
+    if ( iommu_ops && iommu_ops != ops )
+        printk("WARNING: IOMMU ops already set to a different value\n");
+
+    iommu_ops = ops;
+}
+
+int __init iommu_hardware_setup(void)
+{
+    struct dt_device_node *np;
+    int rc;
+    unsigned int num_iommus = 0;
+
+    dt_for_each_device_node(dt_host, np)
+    {
+        rc = device_init(np, DEVICE_IOMMU, NULL);
+        if ( !rc )
+            num_iommus++;
+    }
+
+    return ( num_iommus > 0 ) ? 0 : -ENODEV;
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+}
diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
index 9e47ca6..ed04344 100644
--- a/xen/include/asm-arm/device.h
+++ b/xen/include/asm-arm/device.h
@@ -6,7 +6,8 @@
 
 enum device_type
 {
-    DEVICE_SERIAL
+    DEVICE_SERIAL,
+    DEVICE_IOMMU,
 };
 
 struct device_desc {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..15d814a 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -9,6 +9,7 @@
 #include <asm/vfp.h>
 #include <public/hvm/params.h>
 #include <xen/serial.h>
+#include <xen/hvm/iommu.h>
 
 /* Represents state corresponding to a block of 32 interrupts */
 struct vgic_irq_rank {
@@ -72,6 +73,7 @@ struct pending_irq
 struct hvm_domain
 {
     uint64_t              params[HVM_NR_PARAMS];
+    struct hvm_iommu      hvm_iommu;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
new file mode 100644
index 0000000..461c8cf
--- /dev/null
+++ b/xen/include/asm-arm/hvm/iommu.h
@@ -0,0 +1,10 @@
+#ifndef __ASM_ARM_HVM_IOMMU_H_
+#define __ASM_ARM_HVM_IOMMU_H_
+
+struct arch_hvm_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
+#endif /* __ASM_ARM_HVM_IOMMU_H_ */
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
new file mode 100644
index 0000000..81eec83
--- /dev/null
+++ b/xen/include/asm-arm/iommu.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_ARM_IOMMU_H__
+#define __ARCH_ARM_IOMMU_H__
+
+/* Always share P2M Table between the CPU and the IOMMU */
+#define iommu_use_hap_pt(d) (1)
+#define domain_hvm_iommu(d) (&d->arch.hvm_domain.hvm_iommu)
+
+const struct iommu_ops *iommu_get_ops(void);
+void __init iommu_set_ops(const struct iommu_ops *ops);
+
+int __init iommu_hardware_setup(void);
+
+#endif /* __ARCH_ARM_IOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSq-0005PA-Ud; Fri, 07 Feb 2014 17:43:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSp-0005LK-4e
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:43 +0000
Received: from [85.158.137.68:6792] by server-8.bemta-3.messagelabs.com id
	8E/94-16039-C4B15F25; Fri, 07 Feb 2014 17:43:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391795019!393509!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21653 invoked from network); 7 Feb 2014 17:43:40 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:40 -0000
Received: by mail-ee0-f45.google.com with SMTP id b15so1689075eek.4
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=ESnAODfE+eNrZbGAMcw/YQTXqO7df8oiLVjJ5qMc8ww=;
	b=SgDsTFixNPP6XATFRjYiaM5AsxBhRLZgWt5q7R9wUplAYsqwQmYkXA+QP+0Lf0I7TM
	bRT8Fak4GoJEgR5ZiGBxYc8vRdTBahgLrME24YT5E823ebA3ZUaW/eNsKbxhik71+j/A
	AA+zduVFKMs6Aw0DTffwQpQVYMONjBNitv0Qz49h6ETjHzkbSuumWeYPWXxHzhd0K39f
	DVSs/QGEBIjYKCJ32ZGQgVy4jlilAco5/eu3RWVy3z6gT2t4FQQHvaovbBsWFtLz75Fz
	gub/6QLniWWXh5ZlrfrBAMt29yAzafXfILE018vTnMKZAcQrXA5US6pcOTcF3+eTg3wF
	+/7Q==
X-Gm-Message-State: ALoCoQk9Z89cVIle4V1bs3ezx67pzBWwcZDtrgSrI7Fw/Nu6GKjj8yA3o7yRvuDdi09cLfdR0Cxr
X-Received: by 10.14.184.66 with SMTP id r42mr2430999eem.109.1391795019747;
	Fri, 07 Feb 2014 09:43:39 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.38
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:39 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:10 +0000
Message-Id: <1391794991-5919-12-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com
Subject: [Xen-devel] [RFC for-4.5 11/12] MAINTAINERS: Add
	drivers/passthrough/arm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the ARM IOMMU directory to the "ARM ARCHITECTURE" section.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
---
 MAINTAINERS |    1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7757cdd..ad6c8a9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -130,6 +130,7 @@ S:	Supported
 L:	xen-devel@lists.xen.org
 F:	xen/arch/arm/
 F:	xen/include/asm-arm/
+F:	xen/drivers/passthrough/arm
 
 CPU POOLS
 M:	Juergen Gross <juergen.gross@ts.fujitsu.com>
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSt-0005SF-GF; Fri, 07 Feb 2014 17:43:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSr-0005Pa-KO
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:46 +0000
Received: from [85.158.143.35:63818] by server-1.bemta-4.messagelabs.com id
	20/7F-31661-05B15F25; Fri, 07 Feb 2014 17:43:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391795021!4001486!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29510 invoked from network); 7 Feb 2014 17:43:42 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:42 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so1676117eek.19
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=oXQDGp3n7kkfVtjDLBJhB9KXjITwlexRcF1fgsIhT0A=;
	b=M48+AFwMLv3Yh9AAQi7RcCF0/8jQvAjJ6iEGqhFMEV+NrNsXaeO8OXhOcxj9/PFiHR
	s0Dw0AmRayVtd7cGTsVjkNRkgv3anNg+xswXB2Rp/EkQM0zEgn7G+YprIJC8qmisv+9g
	SXWNltl5pkp1CAirOh+Dn5GC5KQFi+vR+Rke5OwvpdFU3lFeTwJtM/AXAopSKF2X0lpV
	4ahePTMcN+xmQHTQl7GtV/59rNEmg2ngAeY77CxhUzMelCXwx0At9eArzoGV20XhcN9s
	YveI43c39ucvbi1j/5yt1IrgyBbpZhur+Te76TkAbU4MySo0ZVPTvZOhel5EC8hsaVvV
	rxMA==
X-Gm-Message-State: ALoCoQn3cELT2zBb1n0E4P7N4CoomTgwvRw+XK+BsS86wMtDWq3FXMB4b69xeKFevxrKciNWfXzn
X-Received: by 10.14.176.133 with SMTP id b5mr425500eem.105.1391795021643;
	Fri, 07 Feb 2014 09:43:41 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:40 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:11 +0000
Message-Id: <1391794991-5919-13-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
	support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the ARM architected SMMU driver. It is based on
the Linux driver (drivers/iommu/arm-smmu.c), commit 89a23cd.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/smmu.c   | 1701 ++++++++++++++++++++++++++++++++++
 2 files changed, 1702 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu.c

diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index 0484b79..f4cd26e 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1 +1,2 @@
 obj-y += iommu.o
+obj-y += smmu.o
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
new file mode 100644
index 0000000..9bf2aa3
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -0,0 +1,1701 @@
+/*
+ * IOMMU API for ARM architected SMMU implementations.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Based on Linux drivers/iommu/arm-smmu.c (commit 89a23cd)
+ * Copyright (C) 2013 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * Xen modification:
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (C) 2014 Linaro Limited.
+ *
+ * This driver currently supports:
+ *  - SMMUv1 and v2 implementations (v2 untested)
+ *  - Stream-matching and stream-indexing
+ *  - v7/v8 long-descriptor format
+ *  - Non-secure access to the SMMU
+ *  - 4k pages, p2m shared with the processor
+ *  - Up to 40-bit addressing
+ *  - Context fault reporting
+ */
+
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+
+#define SZ_4K                               (1 << 12)
+#define SZ_64K                              (1 << 16)
+
+/* Driver options */
+#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
+
+/* Maximum number of stream IDs assigned to a single device */
+#define MAX_MASTER_STREAMIDS    MAX_PHANDLE_ARGS
+
+/* Maximum stream ID */
+#define SMMU_MAX_STREAMIDS      (SZ_64K - 1)
+
+/* Maximum number of context banks per SMMU */
+#define SMMU_MAX_CBS        128
+
+/* Maximum number of mapping groups per SMMU */
+#define SMMU_MAX_SMRS       128
+
+/* SMMU global address space */
+#define SMMU_GR0(smmu)      ((smmu)->base)
+#define SMMU_GR1(smmu)      ((smmu)->base + (smmu)->pagesize)
+
+/*
+ * SMMU global address space with conditional offset to access secure aliases of
+ * non-secure registers (e.g. nsCR0: 0x400, nsGFSR: 0x448, nsGFSYNR0: 0x450)
+ */
+#define SMMU_GR0_NS(smmu)                                   \
+    ((smmu)->base +                                         \
+     (((smmu)->options & SMMU_OPT_SECURE_CONFIG_ACCESS)  \
+        ? 0x400 : 0))
+
+/* Page table bits */
+#define SMMU_PTE_PAGE           (((pteval_t)3) << 0)
+#define SMMU_PTE_CONT           (((pteval_t)1) << 52)
+#define SMMU_PTE_AF             (((pteval_t)1) << 10)
+#define SMMU_PTE_SH_NS          (((pteval_t)0) << 8)
+#define SMMU_PTE_SH_OS          (((pteval_t)2) << 8)
+#define SMMU_PTE_SH_IS          (((pteval_t)3) << 8)
+
+#if PAGE_SIZE == SZ_4K
+#define SMMU_PTE_CONT_ENTRIES   16
+#elif PAGE_SIZE == SZ_64K
+#define SMMU_PTE_CONT_ENTRIES   32
+#else
+#define SMMU_PTE_CONT_ENTRIES   1
+#endif
+
+#define SMMU_PTE_CONT_SIZE      (PAGE_SIZE * SMMU_PTE_CONT_ENTRIES)
+#define SMMU_PTE_CONT_MASK      (~(SMMU_PTE_CONT_SIZE - 1))
+#define SMMU_PTE_HWTABLE_SIZE   (PTRS_PER_PTE * sizeof(pte_t))
+
+/* Stage-1 PTE */
+#define SMMU_PTE_AP_UNPRIV      (((pteval_t)1) << 6)
+#define SMMU_PTE_AP_RDONLY      (((pteval_t)2) << 6)
+#define SMMU_PTE_ATTRINDX_SHIFT 2
+#define SMMU_PTE_nG             (((pteval_t)1) << 11)
+
+/* Stage-2 PTE */
+#define SMMU_PTE_HAP_FAULT      (((pteval_t)0) << 6)
+#define SMMU_PTE_HAP_READ       (((pteval_t)1) << 6)
+#define SMMU_PTE_HAP_WRITE      (((pteval_t)2) << 6)
+#define SMMU_PTE_MEMATTR_OIWB   (((pteval_t)0xf) << 2)
+#define SMMU_PTE_MEMATTR_NC     (((pteval_t)0x5) << 2)
+#define SMMU_PTE_MEMATTR_DEV    (((pteval_t)0x1) << 2)
+
+/* Configuration registers */
+#define SMMU_GR0_sCR0           0x0
+#define SMMU_sCR0_CLIENTPD      (1 << 0)
+#define SMMU_sCR0_GFRE          (1 << 1)
+#define SMMU_sCR0_GFIE          (1 << 2)
+#define SMMU_sCR0_GCFGFRE       (1 << 4)
+#define SMMU_sCR0_GCFGFIE       (1 << 5)
+#define SMMU_sCR0_USFCFG        (1 << 10)
+#define SMMU_sCR0_VMIDPNE       (1 << 11)
+#define SMMU_sCR0_PTM           (1 << 12)
+#define SMMU_sCR0_FB            (1 << 13)
+#define SMMU_sCR0_BSU_SHIFT     14
+#define SMMU_sCR0_BSU_MASK      0x3
+
+/* Identification registers */
+#define SMMU_GR0_ID0            0x20
+#define SMMU_GR0_ID1            0x24
+#define SMMU_GR0_ID2            0x28
+#define SMMU_GR0_ID3            0x2c
+#define SMMU_GR0_ID4            0x30
+#define SMMU_GR0_ID5            0x34
+#define SMMU_GR0_ID6            0x38
+#define SMMU_GR0_ID7            0x3c
+#define SMMU_GR0_sGFSR          0x48
+#define SMMU_GR0_sGFSYNR0       0x50
+#define SMMU_GR0_sGFSYNR1       0x54
+#define SMMU_GR0_sGFSYNR2       0x58
+#define SMMU_GR0_PIDR0          0xfe0
+#define SMMU_GR0_PIDR1          0xfe4
+#define SMMU_GR0_PIDR2          0xfe8
+
+#define SMMU_ID0_S1TS           (1 << 30)
+#define SMMU_ID0_S2TS           (1 << 29)
+#define SMMU_ID0_NTS            (1 << 28)
+#define SMMU_ID0_SMS            (1 << 27)
+#define SMMU_ID0_PTFS_SHIFT     24
+#define SMMU_ID0_PTFS_MASK      0x2
+#define SMMU_ID0_PTFS_V8_ONLY   0x2
+#define SMMU_ID0_CTTW           (1 << 14)
+#define SMMU_ID0_NUMIRPT_SHIFT  16
+#define SMMU_ID0_NUMIRPT_MASK   0xff
+#define SMMU_ID0_NUMSMRG_SHIFT  0
+#define SMMU_ID0_NUMSMRG_MASK   0xff
+
+#define SMMU_ID1_PAGESIZE            (1 << 31)
+#define SMMU_ID1_NUMPAGENDXB_SHIFT   28
+#define SMMU_ID1_NUMPAGENDXB_MASK    7
+#define SMMU_ID1_NUMS2CB_SHIFT       16
+#define SMMU_ID1_NUMS2CB_MASK        0xff
+#define SMMU_ID1_NUMCB_SHIFT         0
+#define SMMU_ID1_NUMCB_MASK          0xff
+
+#define SMMU_ID2_OAS_SHIFT           4
+#define SMMU_ID2_OAS_MASK            0xf
+#define SMMU_ID2_IAS_SHIFT           0
+#define SMMU_ID2_IAS_MASK            0xf
+#define SMMU_ID2_UBS_SHIFT           8
+#define SMMU_ID2_UBS_MASK            0xf
+#define SMMU_ID2_PTFS_4K             (1 << 12)
+#define SMMU_ID2_PTFS_16K            (1 << 13)
+#define SMMU_ID2_PTFS_64K            (1 << 14)
+
+#define SMMU_PIDR2_ARCH_SHIFT        4
+#define SMMU_PIDR2_ARCH_MASK         0xf
+
+/* Global TLB invalidation */
+#define SMMU_GR0_STLBIALL           0x60
+#define SMMU_GR0_TLBIVMID           0x64
+#define SMMU_GR0_TLBIALLNSNH        0x68
+#define SMMU_GR0_TLBIALLH           0x6c
+#define SMMU_GR0_sTLBGSYNC          0x70
+#define SMMU_GR0_sTLBGSTATUS        0x74
+#define SMMU_sTLBGSTATUS_GSACTIVE   (1 << 0)
+#define SMMU_TLB_LOOP_TIMEOUT       1000000 /* 1s! */
+
+/* Stream mapping registers */
+#define SMMU_GR0_SMR(n)             (0x800 + ((n) << 2))
+#define SMMU_SMR_VALID              (1 << 31)
+#define SMMU_SMR_MASK_SHIFT         16
+#define SMMU_SMR_MASK_MASK          0x7fff
+#define SMMU_SMR_ID_SHIFT           0
+#define SMMU_SMR_ID_MASK            0x7fff
+
+#define SMMU_GR0_S2CR(n)        (0xc00 + ((n) << 2))
+#define SMMU_S2CR_CBNDX_SHIFT   0
+#define SMMU_S2CR_CBNDX_MASK    0xff
+#define SMMU_S2CR_TYPE_SHIFT    16
+#define SMMU_S2CR_TYPE_MASK     0x3
+#define SMMU_S2CR_TYPE_TRANS    (0 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_BYPASS   (1 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_FAULT    (2 << SMMU_S2CR_TYPE_SHIFT)
+
+/* Context bank attribute registers */
+#define SMMU_GR1_CBAR(n)                    (0x0 + ((n) << 2))
+#define SMMU_CBAR_VMID_SHIFT                0
+#define SMMU_CBAR_VMID_MASK                 0xff
+#define SMMU_CBAR_S1_MEMATTR_SHIFT          12
+#define SMMU_CBAR_S1_MEMATTR_MASK           0xf
+#define SMMU_CBAR_S1_MEMATTR_WB             0xf
+#define SMMU_CBAR_TYPE_SHIFT                16
+#define SMMU_CBAR_TYPE_MASK                 0x3
+#define SMMU_CBAR_TYPE_S2_TRANS             (0 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_BYPASS   (1 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_FAULT    (2 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_TRANS    (3 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_IRPTNDX_SHIFT             24
+#define SMMU_CBAR_IRPTNDX_MASK              0xff
+
+#define SMMU_GR1_CBA2R(n)                   (0x800 + ((n) << 2))
+#define SMMU_CBA2R_RW64_32BIT               (0 << 0)
+#define SMMU_CBA2R_RW64_64BIT               (1 << 0)
+
+/* Translation context bank */
+#define SMMU_CB_BASE(smmu)                  ((smmu)->base + ((smmu)->size >> 1))
+#define SMMU_CB(smmu, n)                    ((n) * (smmu)->pagesize)
+
+#define SMMU_CB_SCTLR                       0x0
+#define SMMU_CB_RESUME                      0x8
+#define SMMU_CB_TCR2                        0x10
+#define SMMU_CB_TTBR0_LO                    0x20
+#define SMMU_CB_TTBR0_HI                    0x24
+#define SMMU_CB_TCR                         0x30
+#define SMMU_CB_S1_MAIR0                    0x38
+#define SMMU_CB_FSR                         0x58
+#define SMMU_CB_FAR_LO                      0x60
+#define SMMU_CB_FAR_HI                      0x64
+#define SMMU_CB_FSYNR0                      0x68
+#define SMMU_CB_S1_TLBIASID                 0x610
+
+#define SMMU_SCTLR_S1_ASIDPNE               (1 << 12)
+#define SMMU_SCTLR_CFCFG                    (1 << 7)
+#define SMMU_SCTLR_CFIE                     (1 << 6)
+#define SMMU_SCTLR_CFRE                     (1 << 5)
+#define SMMU_SCTLR_E                        (1 << 4)
+#define SMMU_SCTLR_AFE                      (1 << 2)
+#define SMMU_SCTLR_TRE                      (1 << 1)
+#define SMMU_SCTLR_M                        (1 << 0)
+#define SMMU_SCTLR_EAE_SBOP                 (SMMU_SCTLR_AFE | SMMU_SCTLR_TRE)
+
+#define SMMU_RESUME_RETRY                   (0 << 0)
+#define SMMU_RESUME_TERMINATE               (1 << 0)
+
+#define SMMU_TCR_EAE                        (1 << 31)
+
+#define SMMU_TCR_PASIZE_SHIFT               16
+#define SMMU_TCR_PASIZE_MASK                0x7
+
+#define SMMU_TCR_TG0_4K                     (0 << 14)
+#define SMMU_TCR_TG0_64K                    (1 << 14)
+
+#define SMMU_TCR_SH0_SHIFT                  12
+#define SMMU_TCR_SH0_MASK                   0x3
+#define SMMU_TCR_SH_NS                      0
+#define SMMU_TCR_SH_OS                      2
+#define SMMU_TCR_SH_IS                      3
+
+#define SMMU_TCR_ORGN0_SHIFT                10
+#define SMMU_TCR_IRGN0_SHIFT                8
+#define SMMU_TCR_RGN_MASK                   0x3
+#define SMMU_TCR_RGN_NC                     0
+#define SMMU_TCR_RGN_WBWA                   1
+#define SMMU_TCR_RGN_WT                     2
+#define SMMU_TCR_RGN_WB                     3
+
+#define SMMU_TCR_SL0_SHIFT                  6
+#define SMMU_TCR_SL0_MASK                   0x3
+#define SMMU_TCR_SL0_LVL_2                  0
+#define SMMU_TCR_SL0_LVL_1                  1
+
+#define SMMU_TCR_T1SZ_SHIFT                 16
+#define SMMU_TCR_T0SZ_SHIFT                 0
+#define SMMU_TCR_SZ_MASK                    0xf
+
+#define SMMU_TCR2_SEP_SHIFT                 15
+#define SMMU_TCR2_SEP_MASK                  0x7
+
+#define SMMU_TCR2_PASIZE_SHIFT              0
+#define SMMU_TCR2_PASIZE_MASK               0x7
+
+/* Common definitions for PASize and SEP fields */
+#define SMMU_TCR2_ADDR_32                   0
+#define SMMU_TCR2_ADDR_36                   1
+#define SMMU_TCR2_ADDR_40                   2
+#define SMMU_TCR2_ADDR_42                   3
+#define SMMU_TCR2_ADDR_44                   4
+#define SMMU_TCR2_ADDR_48                   5
+
+#define SMMU_TTBRn_HI_ASID_SHIFT            16
+
+#define SMMU_MAIR_ATTR_SHIFT(n)             ((n) << 3)
+#define SMMU_MAIR_ATTR_MASK                 0xff
+#define SMMU_MAIR_ATTR_DEVICE               0x04
+#define SMMU_MAIR_ATTR_NC                   0x44
+#define SMMU_MAIR_ATTR_WBRWA                0xff
+#define SMMU_MAIR_ATTR_IDX_NC               0
+#define SMMU_MAIR_ATTR_IDX_CACHE            1
+#define SMMU_MAIR_ATTR_IDX_DEV              2
+
+#define SMMU_FSR_MULTI                      (1 << 31)
+#define SMMU_FSR_SS                         (1 << 30)
+#define SMMU_FSR_UUT                        (1 << 8)
+#define SMMU_FSR_ASF                        (1 << 7)
+#define SMMU_FSR_TLBLKF                     (1 << 6)
+#define SMMU_FSR_TLBMCF                     (1 << 5)
+#define SMMU_FSR_EF                         (1 << 4)
+#define SMMU_FSR_PF                         (1 << 3)
+#define SMMU_FSR_AFF                        (1 << 2)
+#define SMMU_FSR_TF                         (1 << 1)
+
+#define SMMU_FSR_IGN                        (SMMU_FSR_AFF | SMMU_FSR_ASF |    \
+                                             SMMU_FSR_TLBMCF | SMMU_FSR_TLBLKF)
+#define SMMU_FSR_FAULT                      (SMMU_FSR_MULTI | SMMU_FSR_SS |   \
+                                             SMMU_FSR_UUT | SMMU_FSR_EF |     \
+                                             SMMU_FSR_PF | SMMU_FSR_TF |      \
+                                             SMMU_FSR_IGN)
+
+#define SMMU_FSYNR0_WNR                     (1 << 4)
+
+#define smmu_print(dev, lvl, fmt, ...)                                        \
+    printk(lvl "smmu: %s: " fmt, dt_node_full_name(dev->node), ## __VA_ARGS__)
+
+#define smmu_err(dev, fmt, ...) smmu_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+
+#define smmu_dbg(dev, fmt, ...)                                             \
+    smmu_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
+
+#define smmu_info(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
+
+#define smmu_warn(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
+
+struct arm_smmu_device {
+    const struct dt_device_node *node;
+
+    void __iomem                *base;
+    unsigned long               size;
+    unsigned long               pagesize;
+
+#define SMMU_FEAT_COHERENT_WALK (1 << 0)
+#define SMMU_FEAT_STREAM_MATCH  (1 << 1)
+#define SMMU_FEAT_TRANS_S1      (1 << 2)
+#define SMMU_FEAT_TRANS_S2      (1 << 3)
+#define SMMU_FEAT_TRANS_NESTED  (1 << 4)
+    u32                         features;
+    u32                         options;
+    int                         version;
+
+    u32                         num_context_banks;
+    u32                         num_s2_context_banks;
+    DECLARE_BITMAP(context_map, SMMU_MAX_CBS);
+    atomic_t                    irptndx;
+
+    u32                         num_mapping_groups;
+    DECLARE_BITMAP(smr_map, SMMU_MAX_SMRS);
+
+    unsigned long               input_size;
+    unsigned long               s1_output_size;
+    unsigned long               s2_output_size;
+
+    u32                         num_global_irqs;
+    u32                         num_context_irqs;
+    struct dt_irq               *irqs;
+
+    u32                         smr_mask_mask;
+    u32                         smr_id_mask;
+
+    unsigned long               *sids;
+
+    struct list_head            list;
+    struct rb_root              masters;
+};
+
+struct arm_smmu_smr {
+    u8                          idx;
+    u16                         mask;
+    u16                         id;
+};
+
+#define INVALID_IRPTNDX         0xff
+
+#define SMMU_CB_ASID(cfg)       ((cfg)->cbndx)
+#define SMMU_CB_VMID(cfg)       ((cfg)->cbndx + 1)
+
+struct arm_smmu_domain_cfg {
+    struct arm_smmu_device  *smmu;
+    u8                      cbndx;
+    u8                      irptndx;
+    u32                     cbar;
+    /* Domain associated with this context bank */
+    struct domain           *domain;
+    /* List of masters which use this context bank */
+    struct list_head        masters;
+
+    /* Used to link context banks belonging to the same domain */
+    struct list_head        list;
+};
+
+struct arm_smmu_master {
+    const struct dt_device_node *dt_node;
+
+    /*
+     * The following is specific to the master's position in the
+     * SMMU chain.
+     */
+    struct rb_node              node;
+    u32                         num_streamids;
+    u16                         streamids[MAX_MASTER_STREAMIDS];
+    int                         num_s2crs;
+
+    struct arm_smmu_smr         *smrs;
+    struct arm_smmu_domain_cfg  *cfg;
+
+    /* Used to link masters sharing the same domain context */
+    struct list_head            list;
+};
+
+static LIST_HEAD(arm_smmu_devices);
+
+struct arm_smmu_domain {
+    spinlock_t lock;
+    struct list_head contexts;
+};
+
+struct arm_smmu_option_prop {
+    u32         opt;
+    const char  *prop;
+};
+
+static const struct arm_smmu_option_prop arm_smmu_options[] __initconst =
+{
+    { SMMU_OPT_SECURE_CONFIG_ACCESS, "calxeda,smmu-secure-config-access" },
+    { 0, NULL},
+};
+
+static void __init check_driver_options(struct arm_smmu_device *smmu)
+{
+    int i = 0;
+
+    do {
+        if ( dt_property_read_bool(smmu->node, arm_smmu_options[i].prop) )
+        {
+            smmu->options |= arm_smmu_options[i].opt;
+            smmu_dbg(smmu, "option %s\n", arm_smmu_options[i].prop);
+        }
+    } while ( arm_smmu_options[++i].opt );
+}
+
+static void arm_smmu_context_fault(int irq, void *data,
+                                   struct cpu_user_regs *regs)
+{
+    u32 fsr, far, fsynr;
+    unsigned long iova;
+    struct arm_smmu_domain_cfg *cfg = data;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    fsr = readl_relaxed(cb_base + SMMU_CB_FSR);
+
+    if ( !(fsr & SMMU_FSR_FAULT) )
+        return;
+
+    if ( fsr & SMMU_FSR_IGN )
+        smmu_err(smmu, "Unexpected context fault (fsr 0x%x)\n", fsr);
+
+    fsynr = readl_relaxed(cb_base + SMMU_CB_FSYNR0);
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_LO);
+    iova = far;
+#ifdef CONFIG_ARM_64
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_HI);
+    iova |= ((unsigned long)far << 32);
+#endif
+
+    smmu_err(smmu,
+             "Unhandled context fault: iova=0x%08lx, fsynr=0x%x, cb=%d\n",
+             iova, fsynr, cfg->cbndx);
+
+    /* Clear the faulting FSR */
+    writel(fsr, cb_base + SMMU_CB_FSR);
+
+    /* Terminate any stalled transactions */
+    if ( fsr & SMMU_FSR_SS )
+        writel_relaxed(SMMU_RESUME_TERMINATE, cb_base + SMMU_CB_RESUME);
+}
+
+static void arm_smmu_global_fault(int irq, void *data,
+                                  struct cpu_user_regs *regs)
+{
+    u32 gfsr, gfsynr0, gfsynr1, gfsynr2;
+    struct arm_smmu_device *smmu = data;
+    void __iomem *gr0_base = SMMU_GR0_NS(smmu);
+
+    gfsr = readl_relaxed(gr0_base + SMMU_GR0_sGFSR);
+    gfsynr0 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR0);
+    gfsynr1 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR1);
+    gfsynr2 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR2);
+
+    if ( !gfsr )
+        return;
+
+    smmu_err(smmu, "Unexpected global fault, this could be serious\n");
+    smmu_err(smmu,
+             "\tGFSR 0x%08x, GFSYNR0 0x%08x, GFSYNR1 0x%08x, GFSYNR2 0x%08x\n",
+             gfsr, gfsynr0, gfsynr1, gfsynr2);
+    writel(gfsr, gr0_base + SMMU_GR0_sGFSR);
+}
+
+static struct arm_smmu_master *
+find_smmu_master(struct arm_smmu_device *smmu,
+                 const struct dt_device_node *dev_node)
+{
+    struct rb_node *node = smmu->masters.rb_node;
+
+    while ( node )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+
+        if ( dev_node < master->dt_node )
+            node = node->rb_left;
+        else if ( dev_node > master->dt_node )
+            node = node->rb_right;
+        else
+            return master;
+    }
+
+    return NULL;
+}
+
+static __init int insert_smmu_master(struct arm_smmu_device *smmu,
+                                     struct arm_smmu_master *master)
+{
+    struct rb_node **new, *parent;
+
+    new = &smmu->masters.rb_node;
+    parent = NULL;
+    while ( *new )
+    {
+        struct arm_smmu_master *this;
+
+        this = container_of(*new, struct arm_smmu_master, node);
+
+        parent = *new;
+        if ( master->dt_node < this->dt_node )
+            new = &((*new)->rb_left);
+        else if ( master->dt_node > this->dt_node )
+            new = &((*new)->rb_right);
+        else
+            return -EEXIST;
+    }
+
+    rb_link_node(&master->node, parent, new);
+    rb_insert_color(&master->node, &smmu->masters);
+    return 0;
+}
+
+static __init int register_smmu_master(struct arm_smmu_device *smmu,
+                                       struct dt_phandle_args *masterspec)
+{
+    int i, sid;
+    struct arm_smmu_master *master;
+    int rc = 0;
+
+    smmu_dbg(smmu, "Try to add master %s\n", masterspec->np->name);
+
+    master = find_smmu_master(smmu, masterspec->np);
+    if ( master )
+    {
+        smmu_err(smmu,
+                 "rejecting multiple registrations for master device %s\n",
+                 masterspec->np->name);
+        return -EBUSY;
+    }
+
+    if ( masterspec->args_count > MAX_MASTER_STREAMIDS )
+    {
+        smmu_err(smmu,
+            "reached maximum number (%d) of stream IDs for master device %s\n",
+            MAX_MASTER_STREAMIDS, masterspec->np->name);
+        return -ENOSPC;
+    }
+
+    master = xzalloc(struct arm_smmu_master);
+    if ( !master )
+        return -ENOMEM;
+
+    INIT_LIST_HEAD(&master->list);
+    master->dt_node = masterspec->np;
+    master->num_streamids = masterspec->args_count;
+
+    for ( i = 0; i < master->num_streamids; ++i )
+    {
+        sid = masterspec->args[i];
+        if ( test_and_set_bit(sid, smmu->sids) )
+        {
+            smmu_err(smmu, "duplicate stream ID (%d)\n", sid);
+            xfree(master);
+            return -EEXIST;
+        }
+        master->streamids[i] = masterspec->args[i];
+    }
+
+    rc = insert_smmu_master(smmu, master);
+    /* Insertion should never fail */
+    ASSERT(rc == 0);
+
+    return 0;
+}
+
+static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
+{
+    int idx;
+
+    do
+    {
+        idx = find_next_zero_bit(map, end, start);
+        if ( idx == end )
+            return -ENOSPC;
+    } while ( test_and_set_bit(idx, map) );
+
+    return idx;
+}
+
+static void __arm_smmu_free_bitmap(unsigned long *map, int idx)
+{
+    clear_bit(idx, map);
+}
+
+static void arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
+{
+    int count = 0;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    writel_relaxed(0, gr0_base + SMMU_GR0_sTLBGSYNC);
+    while ( readl_relaxed(gr0_base + SMMU_GR0_sTLBGSTATUS) &
+            SMMU_sTLBGSTATUS_GSACTIVE )
+    {
+        cpu_relax();
+        if ( ++count == SMMU_TLB_LOOP_TIMEOUT )
+        {
+            smmu_err(smmu, "TLB sync timed out -- SMMU may be deadlocked\n");
+            return;
+        }
+        udelay(1);
+    }
+}
+
+static void arm_smmu_tlb_inv_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *base = SMMU_GR0(smmu);
+
+    writel_relaxed(SMMU_CB_VMID(cfg),
+                   base + SMMU_GR0_TLBIVMID);
+
+    arm_smmu_tlb_sync(smmu);
+}
+
+static void arm_smmu_iotlb_flush_all(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry(cfg, &smmu_domain->contexts, list)
+        arm_smmu_tlb_inv_context(cfg);
+    spin_unlock(&smmu_domain->lock);
+}
+
+static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                 unsigned int page_count)
+{
+    /* ARM SMMUv1 can't invalidate TLB entries by address range, only by VMID */
+    arm_smmu_iotlb_flush_all(d);
+}
+
+static int determine_smr_mask(struct arm_smmu_device *smmu,
+                              struct arm_smmu_master *master,
+                              struct arm_smmu_smr *smr, int start, int order)
+{
+    u16 i, zero_bits_mask, one_bits_mask, const_mask;
+    int nr;
+
+    nr = 1 << order;
+
+    if ( nr == 1 )
+    {
+        /* no mask, use streamid to match and be done with it */
+        smr->mask = 0;
+        smr->id = master->streamids[start];
+        return 0;
+    }
+
+    zero_bits_mask = 0;
+    one_bits_mask = 0xffff;
+    for ( i = start; i < start + nr; i++ )
+    {
+        zero_bits_mask |= master->streamids[i]; /* const 0 bits */
+        one_bits_mask &= master->streamids[i];  /* const 1 bits */
+    }
+    zero_bits_mask = ~zero_bits_mask;
+
+    /* bits having constant values (either 0 or 1) */
+    const_mask = zero_bits_mask | one_bits_mask;
+
+    i = hweight16(~const_mask);
+    if ( (1 << i) == nr )
+    {
+        smr->mask = ~const_mask;
+        smr->id = one_bits_mask;
+    }
+    else
+        /* no usable mask for this set of streamids */
+        return 1;
+
+    if ( ((smr->mask & smmu->smr_mask_mask) != smr->mask) ||
+         ((smr->id & smmu->smr_id_mask) != smr->id) )
+        /* insufficient number of mask/id bits */
+        return 1;
+
+    return 0;
+}
+
+static int determine_smr_mapping(struct arm_smmu_device *smmu,
+                                 struct arm_smmu_master *master,
+                                 struct arm_smmu_smr *smrs, int max_smrs)
+{
+    int nr_sid, nr, i, bit, start;
+
+    /*
+     * This function is called only once -- when a master is added
+     * to a domain. If master->num_s2crs != 0 then this master
+     * was already added to a domain.
+     */
+    BUG_ON(master->num_s2crs);
+
+    start = nr = 0;
+    nr_sid = master->num_streamids;
+    do
+    {
+        /*
+         * largest power-of-2 number of streamids for which to
+         * determine a usable mask/id pair for stream matching
+         */
+        bit = fls(nr_sid);
+        if ( !bit )
+            return 0;
+
+        /*
+         * iterate over power-of-2 numbers to determine
+         * largest possible mask/id pair for stream matching
+         * of next 2**i streamids
+         */
+        for ( i = bit - 1; i >= 0; i-- )
+        {
+            if ( !determine_smr_mask(smmu, master,
+                                     &smrs[master->num_s2crs],
+                                     start, i) )
+                break;
+        }
+
+        if ( i < 0 )
+            goto out;
+
+        nr = 1 << i;
+        nr_sid -= nr;
+        start += nr;
+        master->num_s2crs++;
+    } while ( master->num_s2crs <= max_smrs );
+
+out:
+    if ( nr_sid )
+    {
+        /* not enough mapping groups available */
+        master->num_s2crs = 0;
+        return -ENOSPC;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
+                                          struct arm_smmu_master *master)
+{
+    int i, max_smrs, ret;
+    struct arm_smmu_smr *smrs;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    if ( !(smmu->features & SMMU_FEAT_STREAM_MATCH) )
+        return 0;
+
+    if ( master->smrs )
+        return -EEXIST;
+
+    max_smrs = min(smmu->num_mapping_groups, master->num_streamids);
+    smrs = xmalloc_array(struct arm_smmu_smr, max_smrs);
+    if ( !smrs )
+    {
+        smmu_err(smmu, "failed to allocate %d SMRs for master %s\n",
+                 max_smrs, dt_node_name(master->dt_node));
+        return -ENOMEM;
+    }
+
+    ret = determine_smr_mapping(smmu, master, smrs, max_smrs);
+    if ( ret )
+        goto err_free_smrs;
+
+    /* Allocate the SMRs on the root SMMU */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
+                                          smmu->num_mapping_groups);
+        if ( idx < 0 )
+        {
+            smmu_err(smmu, "failed to allocate free SMR\n");
+            goto err_free_bitmap;
+        }
+        smrs[i].idx = idx;
+    }
+
+    /* It worked! Now, poke the actual hardware */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 reg = SMMU_SMR_VALID | smrs[i].id << SMMU_SMR_ID_SHIFT |
+            smrs[i].mask << SMMU_SMR_MASK_SHIFT;
+        smmu_dbg(smmu, "SMR%d: 0x%x\n", smrs[i].idx, reg);
+        writel_relaxed(reg, gr0_base + SMMU_GR0_SMR(smrs[i].idx));
+    }
+
+    master->smrs = smrs;
+    return 0;
+
+err_free_bitmap:
+    while ( --i >= 0 )
+        __arm_smmu_free_bitmap(smmu->smr_map, smrs[i].idx);
+    master->num_s2crs = 0;
+err_free_smrs:
+    xfree(smrs);
+    return -ENOSPC;
+}
+
+/* Forward declaration */
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg);
+
+static int arm_smmu_domain_add_master(struct domain *d,
+                                      struct arm_smmu_domain_cfg *cfg,
+                                      struct arm_smmu_master *master)
+{
+    int i, ret;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    if ( master->cfg )
+        return -EBUSY;
+
+    ret = arm_smmu_master_configure_smrs(smmu, master);
+    if ( ret )
+        return ret;
+
+    /* Now we're at the root, time to point at our context bank */
+    if ( !master->num_s2crs )
+        master->num_s2crs = master->num_streamids;
+
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 idx, s2cr;
+
+        idx = smrs ? smrs[i].idx : master->streamids[i];
+        s2cr = (SMMU_S2CR_TYPE_TRANS << SMMU_S2CR_TYPE_SHIFT) |
+            (cfg->cbndx << SMMU_S2CR_CBNDX_SHIFT);
+        smmu_dbg(smmu, "S2CR%d: 0x%x\n", idx, s2cr);
+        writel_relaxed(s2cr, gr0_base + SMMU_GR0_S2CR(idx));
+    }
+
+    master->cfg = cfg;
+    list_add(&master->list, &cfg->masters);
+
+    return 0;
+}
+
+static void arm_smmu_domain_remove_master(struct arm_smmu_master *master)
+{
+    int i;
+    struct arm_smmu_domain_cfg *cfg = master->cfg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    /*
+     * We *must* clear the S2CR first, because freeing the SMR means
+     * that it can be reallocated immediately
+     */
+    for ( i = 0; i < master->num_streamids; ++i )
+    {
+        u16 sid = master->streamids[i];
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT,
+                       gr0_base + SMMU_GR0_S2CR(sid));
+    }
+
+    /* Invalidate the SMRs (if any) before freeing back to the allocator */
+    for ( i = 0; smrs && i < master->num_s2crs; ++i ) {
+        u8 idx = smrs[i].idx;
+        writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(idx));
+        __arm_smmu_free_bitmap(smmu->smr_map, idx);
+    }
+
+    master->smrs = NULL;
+    xfree(smrs);
+
+    master->cfg = NULL;
+    list_del(&master->list);
+    INIT_LIST_HEAD(&master->list);
+}
+
+static void arm_smmu_init_context_bank(struct arm_smmu_domain_cfg *cfg)
+{
+    u32 reg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base, *gr0_base, *gr1_base;
+    paddr_t p2maddr;
+
+    ASSERT(cfg->domain != NULL);
+    p2maddr = page_to_maddr(cfg->domain->arch.p2m.first_level);
+
+    gr0_base = SMMU_GR0(smmu);
+    gr1_base = SMMU_GR1(smmu);
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+
+    /* CBAR */
+    reg = cfg->cbar;
+    if ( smmu->version == 1 )
+        reg |= cfg->irptndx << SMMU_CBAR_IRPTNDX_SHIFT;
+
+    reg |= SMMU_CB_VMID(cfg) << SMMU_CBAR_VMID_SHIFT;
+    writel_relaxed(reg, gr1_base + SMMU_GR1_CBAR(cfg->cbndx));
+
+    if ( smmu->version > 1 )
+    {
+        /* CBA2R */
+#ifdef CONFIG_ARM_64
+        reg = SMMU_CBA2R_RW64_64BIT;
+#else
+        reg = SMMU_CBA2R_RW64_32BIT;
+#endif
+        writel_relaxed(reg, gr1_base + SMMU_GR1_CBA2R(cfg->cbndx));
+    }
+
+    /* TTBR0 */
+    reg = (p2maddr & ((1ULL << 32) - 1));
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_LO);
+    reg = (p2maddr >> 32);
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_HI);
+
+    /*
+     * TCR
+     * We use long descriptor, with inner-shareable WBWA tables in TTBR0.
+     */
+    if ( smmu->version > 1 )
+    {
+        /* 4K Page Table */
+        if ( PAGE_SIZE == SZ_4K )
+            reg = SMMU_TCR_TG0_4K;
+        else
+            reg = SMMU_TCR_TG0_64K;
+
+        switch ( smmu->s2_output_size ) {
+        case 32:
+            reg |= (SMMU_TCR2_ADDR_32 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 36:
+            reg |= (SMMU_TCR2_ADDR_36 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 40:
+            reg |= (SMMU_TCR2_ADDR_40 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 42:
+            reg |= (SMMU_TCR2_ADDR_42 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 44:
+            reg |= (SMMU_TCR2_ADDR_44 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 48:
+            reg |= (SMMU_TCR2_ADDR_48 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        }
+    }
+    else
+        reg = 0;
+
+    /* The attributes used for page table walks should match those set in VTCR_EL2 */
+    reg |= SMMU_TCR_EAE |
+        (SMMU_TCR_SH_NS << SMMU_TCR_SH0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_ORGN0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_IRGN0_SHIFT) |
+        (SMMU_TCR_SL0_LVL_1 << SMMU_TCR_SL0_SHIFT);
+    writel_relaxed(reg, cb_base + SMMU_CB_TCR);
+
+    /* SCTLR */
+    reg = SMMU_SCTLR_CFCFG |
+        SMMU_SCTLR_CFIE |
+        SMMU_SCTLR_CFRE |
+        SMMU_SCTLR_M |
+        SMMU_SCTLR_EAE_SBOP;
+
+    writel_relaxed(reg, cb_base + SMMU_CB_SCTLR);
+}
+
+static struct arm_smmu_domain_cfg *
+arm_smmu_alloc_domain_context(struct domain *d,
+                              struct arm_smmu_device *smmu)
+{
+    const struct dt_irq *irq;
+    int ret, start;
+    struct arm_smmu_domain_cfg *cfg;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+
+    cfg = xzalloc(struct arm_smmu_domain_cfg);
+    if ( !cfg )
+        return NULL;
+
+
+    cfg->cbar = SMMU_CBAR_TYPE_S2_TRANS;
+    start = 0;
+
+    ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
+                                  smmu->num_context_banks);
+    if ( ret < 0 )
+        goto out_free_mem;
+
+    cfg->cbndx = ret;
+    if ( smmu->version == 1 )
+    {
+        cfg->irptndx = atomic_inc_return(&smmu->irptndx);
+        cfg->irptndx %= smmu->num_context_irqs;
+    }
+    else
+        cfg->irptndx = cfg->cbndx;
+
+    irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+    ret = request_dt_irq(irq, arm_smmu_context_fault,
+                         "arm-smmu-context-fault", cfg);
+    if ( ret )
+    {
+        smmu_err(smmu, "failed to request context IRQ %d (%u)\n",
+                 cfg->irptndx, irq->irq);
+        cfg->irptndx = INVALID_IRPTNDX;
+        goto out_free_context;
+    }
+
+    cfg->domain = d;
+    cfg->smmu = smmu;
+
+    arm_smmu_init_context_bank(cfg);
+    list_add(&cfg->list, &smmu_domain->contexts);
+    INIT_LIST_HEAD(&cfg->masters);
+
+    return cfg;
+
+out_free_context:
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+out_free_mem:
+    xfree(cfg);
+
+    return NULL;
+}
+
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct domain *d = cfg->domain;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+    const struct dt_irq *irq;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+    BUG_ON(!list_empty(&cfg->masters));
+
+    /* Disable the context bank and nuke the TLB before freeing it */
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+    arm_smmu_tlb_inv_context(cfg);
+
+    if ( cfg->irptndx != INVALID_IRPTNDX )
+    {
+        irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+        release_dt_irq(irq, cfg);
+    }
+
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+}
+
+static struct arm_smmu_device *
+arm_smmu_find_smmu_by_dev(const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_master *master = NULL;
+
+    list_for_each_entry( smmu, &arm_smmu_devices, list )
+    {
+        master = find_smmu_master(smmu, dev);
+        if ( master )
+            break;
+    }
+
+    if ( !master )
+        return NULL;
+
+    return smmu;
+}
+
+static int arm_smmu_attach_dev(struct domain *d,
+                               const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu = arm_smmu_find_smmu_by_dev(dev);
+    struct arm_smmu_master *master;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg = NULL;
+    struct arm_smmu_domain_cfg *curr;
+    int ret;
+
+    printk(XENLOG_DEBUG "arm-smmu: attach %s to domain %d\n",
+           dt_node_full_name(dev), d->domain_id);
+
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: cannot attach to SMMU; is the device on the same bus?\n",
+               dt_node_full_name(dev));
+        return -ENODEV;
+    }
+
+    master = find_smmu_master(smmu, dev);
+    BUG_ON(master == NULL);
+
+    /* Check if the device is already assigned to someone */
+    if ( master->cfg )
+        return -EBUSY;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry( curr, &smmu_domain->contexts, list )
+    {
+        if ( curr->smmu == smmu )
+        {
+            cfg = curr;
+            break;
+        }
+    }
+
+    if ( !cfg )
+    {
+        cfg = arm_smmu_alloc_domain_context(d, smmu);
+        if ( !cfg )
+        {
+            smmu_err(smmu, "unable to allocate context for domain %u\n",
+                     d->domain_id);
+            spin_unlock(&smmu_domain->lock);
+            return -ENOMEM;
+        }
+    }
+    spin_unlock(&smmu_domain->lock);
+
+    ret = arm_smmu_domain_add_master(d, cfg, master);
+    if ( ret )
+    {
+        spin_lock(&smmu_domain->lock);
+        if ( list_empty(&cfg->masters) )
+            arm_smmu_destroy_domain_context(cfg);
+        spin_unlock(&smmu_domain->lock);
+    }
+
+    return ret;
+}
+
+static __init int arm_smmu_id_size_to_bits(int size)
+{
+    switch ( size )
+    {
+    case 0:
+        return 32;
+    case 1:
+        return 36;
+    case 2:
+        return 40;
+    case 3:
+        return 42;
+    case 4:
+        return 44;
+    case 5:
+    default:
+        return 48;
+    }
+}
+
+static __init int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
+{
+    unsigned long size;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    u32 id;
+
+    smmu_info(smmu, "probing hardware configuration...\n");
+
+    /*
+     * Primecell ID
+     */
+    id = readl_relaxed(gr0_base + SMMU_GR0_PIDR2);
+    smmu->version = ((id >> SMMU_PIDR2_ARCH_SHIFT) & SMMU_PIDR2_ARCH_MASK) + 1;
+    smmu_info(smmu, "SMMUv%d with:\n", smmu->version);
+
+    /* ID0 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID0);
+#ifndef CONFIG_ARM_64
+    if ( ((id >> SMMU_ID0_PTFS_SHIFT) & SMMU_ID0_PTFS_MASK) ==
+            SMMU_ID0_PTFS_V8_ONLY )
+    {
+        smmu_err(smmu, "\tno v7 descriptor support!\n");
+        return -ENODEV;
+    }
+#endif
+    if ( id & SMMU_ID0_S1TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S1;
+        smmu_info(smmu, "\tstage 1 translation\n");
+    }
+
+    if ( id & SMMU_ID0_S2TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S2;
+        smmu_info(smmu, "\tstage 2 translation\n");
+    }
+
+    if ( id & SMMU_ID0_NTS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_NESTED;
+        smmu_info(smmu, "\tnested translation\n");
+    }
+
+    if ( !(smmu->features &
+           (SMMU_FEAT_TRANS_S1 | SMMU_FEAT_TRANS_S2 |
+            SMMU_FEAT_TRANS_NESTED)) )
+    {
+        smmu_err(smmu, "\tno translation support!\n");
+        return -ENODEV;
+    }
+
+    /* We need at least support for Stage 2 */
+    if ( !(smmu->features & SMMU_FEAT_TRANS_S2) )
+    {
+        smmu_err(smmu, "\tno stage 2 translation!\n");
+        return -ENODEV;
+    }
+
+    if ( id & SMMU_ID0_CTTW )
+    {
+        smmu->features |= SMMU_FEAT_COHERENT_WALK;
+        smmu_info(smmu, "\tcoherent table walk\n");
+    }
+
+    if ( id & SMMU_ID0_SMS )
+    {
+        u32 smr, sid, mask;
+
+        smmu->features |= SMMU_FEAT_STREAM_MATCH;
+        smmu->num_mapping_groups = (id >> SMMU_ID0_NUMSMRG_SHIFT) &
+            SMMU_ID0_NUMSMRG_MASK;
+        if ( smmu->num_mapping_groups == 0 )
+        {
+            smmu_err(smmu,
+                     "stream-matching supported, but no SMRs present!\n");
+            return -ENODEV;
+        }
+
+        smr = SMMU_SMR_MASK_MASK << SMMU_SMR_MASK_SHIFT;
+        smr |= (SMMU_SMR_ID_MASK << SMMU_SMR_ID_SHIFT);
+        writel_relaxed(smr, gr0_base + SMMU_GR0_SMR(0));
+        smr = readl_relaxed(gr0_base + SMMU_GR0_SMR(0));
+
+        mask = (smr >> SMMU_SMR_MASK_SHIFT) & SMMU_SMR_MASK_MASK;
+        sid = (smr >> SMMU_SMR_ID_SHIFT) & SMMU_SMR_ID_MASK;
+        if ( (mask & sid) != sid )
+        {
+            smmu_err(smmu,
+                     "SMR mask bits (0x%x) insufficient for ID field (0x%x)\n",
+                     mask, sid);
+            return -ENODEV;
+        }
+        smmu->smr_mask_mask = mask;
+        smmu->smr_id_mask = sid;
+
+        smmu_info(smmu,
+                  "\tstream matching with %u register groups, mask 0x%x\n",
+                  smmu->num_mapping_groups, mask);
+    }
+
+    /* ID1 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID1);
+    smmu->pagesize = (id & SMMU_ID1_PAGESIZE) ? SZ_64K : SZ_4K;
+
+    /* Check for size mismatch of SMMU address space from mapped region */
+    size = 1 << (((id >> SMMU_ID1_NUMPAGENDXB_SHIFT) &
+                  SMMU_ID1_NUMPAGENDXB_MASK) + 1);
+    size *= (smmu->pagesize << 1);
+    if ( smmu->size != size )
+        smmu_warn(smmu, "SMMU address space size (0x%lx) differs "
+                  "from mapped region size (0x%lx)!\n", size, smmu->size);
+
+    smmu->num_s2_context_banks = (id >> SMMU_ID1_NUMS2CB_SHIFT) &
+        SMMU_ID1_NUMS2CB_MASK;
+    smmu->num_context_banks = (id >> SMMU_ID1_NUMCB_SHIFT) &
+        SMMU_ID1_NUMCB_MASK;
+    if ( smmu->num_s2_context_banks > smmu->num_context_banks )
+    {
+        smmu_err(smmu, "impossible number of S2 context banks!\n");
+        return -ENODEV;
+    }
+    smmu_info(smmu, "\t%u context banks (%u stage-2 only)\n",
+              smmu->num_context_banks, smmu->num_s2_context_banks);
+
+    /* ID2 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID2);
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_IAS_SHIFT) &
+                                    SMMU_ID2_IAS_MASK);
+
+    /*
+     * Stage-1 output limited by stage-2 input size due to VTCR_EL2
+     * setup (see setup_virt_paging)
+     */
+    /* Current maximum output size of 40 bits */
+    smmu->s1_output_size = min(40UL, size);
+
+    /* The stage-2 output mask is also applied for bypass */
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_OAS_SHIFT) &
+                                    SMMU_ID2_OAS_MASK);
+    smmu->s2_output_size = min((unsigned long)PADDR_BITS, size);
+
+    if ( smmu->version == 1 )
+        smmu->input_size = 32;
+    else
+    {
+#ifdef CONFIG_ARM_64
+        size = (id >> SMMU_ID2_UBS_SHIFT) & SMMU_ID2_UBS_MASK;
+        size = min(39, arm_smmu_id_size_to_bits(size));
+#else
+        size = 32;
+#endif
+        smmu->input_size = size;
+
+        if ( (PAGE_SIZE == SZ_4K && !(id & SMMU_ID2_PTFS_4K) ) ||
+             (PAGE_SIZE == SZ_64K && !(id & SMMU_ID2_PTFS_64K)) ||
+             (PAGE_SIZE != SZ_4K && PAGE_SIZE != SZ_64K) )
+        {
+            smmu_err(smmu, "CPU page size 0x%lx unsupported\n",
+                     PAGE_SIZE);
+            return -ENODEV;
+        }
+    }
+
+    smmu_info(smmu, "\t%lu-bit VA, %lu-bit IPA, %lu-bit PA\n",
+              smmu->input_size, smmu->s1_output_size, smmu->s2_output_size);
+    return 0;
+}
+
+static __init void arm_smmu_device_reset(struct arm_smmu_device *smmu)
+{
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    void __iomem *cb_base;
+    int i = 0;
+    u32 reg;
+
+    smmu_dbg(smmu, "device reset\n");
+
+    /* Clear Global FSR */
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+    writel(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+
+    /* Mark all SMRn as invalid and all S2CRn as bypass */
+    for ( i = 0; i < smmu->num_mapping_groups; ++i )
+    {
+        writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(i));
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT, gr0_base + SMMU_GR0_S2CR(i));
+    }
+
+    /* Make sure all context banks are disabled and clear CB_FSR */
+    for ( i = 0; i < smmu->num_context_banks; ++i )
+    {
+        cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, i);
+        writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+        writel_relaxed(SMMU_FSR_FAULT, cb_base + SMMU_CB_FSR);
+    }
+
+    /* Invalidate the TLB, just in case */
+    writel_relaxed(0, gr0_base + SMMU_GR0_STLBIALL);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLH);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLNSNH);
+
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+
+    /* Enable fault reporting */
+    reg |= (SMMU_sCR0_GFRE | SMMU_sCR0_GFIE |
+            SMMU_sCR0_GCFGFRE | SMMU_sCR0_GCFGFIE);
+
+    /* Disable TLB broadcasting. */
+    reg |= (SMMU_sCR0_VMIDPNE | SMMU_sCR0_PTM);
+
+    /* Enable client access, generate a fault if no mapping is found */
+    reg &= ~(SMMU_sCR0_CLIENTPD);
+    reg |= SMMU_sCR0_USFCFG;
+
+    /* Disable forced broadcasting */
+    reg &= ~SMMU_sCR0_FB;
+
+    /* Don't upgrade barriers */
+    reg &= ~(SMMU_sCR0_BSU_MASK << SMMU_sCR0_BSU_SHIFT);
+
+    /* Push the button */
+    arm_smmu_tlb_sync(smmu);
+    writel_relaxed(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+}
+
+int arm_smmu_iommu_domain_init(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain;
+
+    smmu_domain = xzalloc(struct arm_smmu_domain);
+    if ( !smmu_domain )
+        return -ENOMEM;
+
+    spin_lock_init(&smmu_domain->lock);
+    INIT_LIST_HEAD(&smmu_domain->contexts);
+
+    domain_hvm_iommu(d)->arch.priv = smmu_domain;
+
+    return 0;
+}
+
+void arm_smmu_iommu_dom0_init(struct domain *d)
+{
+    struct arm_smmu_device *smmu;
+    struct rb_node *node;
+
+    printk(XENLOG_DEBUG "arm-smmu: Initialize dom0\n");
+
+    list_for_each_entry( smmu, &arm_smmu_devices, list )
+    {
+        for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
+        {
+            struct arm_smmu_master *master;
+
+            master = container_of(node, struct arm_smmu_master, node);
+
+            if ( dt_device_used_by(master->dt_node) == DOMID_XEN ||
+                 platform_device_is_blacklisted(master->dt_node) )
+                continue;
+
+            arm_smmu_attach_dev(d, master->dt_node);
+        }
+    }
+}
+
+void arm_smmu_iommu_domain_teardown(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg, *_cfg;
+
+    spin_lock(&smmu_domain->lock);
+
+    list_for_each_entry_safe( cfg, _cfg, &smmu_domain->contexts, list )
+    {
+        struct arm_smmu_master *master, *_master;
+
+        list_for_each_entry_safe( master, _master, &cfg->masters, list )
+            arm_smmu_domain_remove_master(master);
+        arm_smmu_destroy_domain_context(cfg);
+    }
+
+    spin_unlock(&smmu_domain->lock);
+
+    xfree(smmu_domain);
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+    .init = arm_smmu_iommu_domain_init,
+    .dom0_init = arm_smmu_iommu_dom0_init,
+    .teardown = arm_smmu_iommu_domain_teardown,
+    .iotlb_flush = arm_smmu_iotlb_flush,
+    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+};
+
+static int __init smmu_init(struct dt_device_node *dev,
+                            const void *data)
+{
+    struct arm_smmu_device *smmu;
+    int res;
+    u64 addr, size;
+    unsigned int num_irqs, i;
+    struct dt_phandle_args masterspec;
+    struct rb_node *node;
+
+    /*
+     * Even if the device can't be initialized, we don't want to give
+     * the SMMU device to dom0.
+     */
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    smmu = xzalloc(struct arm_smmu_device);
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: failed to allocate arm_smmu_device\n",
+               dt_node_full_name(dev));
+        return -ENOMEM;
+    }
+
+    smmu->node = dev;
+    check_driver_options(smmu);
+
+    res = dt_device_get_address(smmu->node, 0, &addr, &size);
+    if ( res )
+    {
+        smmu_err(smmu, "unable to retrieve the base address of the SMMU\n");
+        goto out_err;
+    }
+
+    smmu->base = ioremap_nocache(addr, size);
+    if ( !smmu->base )
+    {
+        smmu_err(smmu, "unable to map the SMMU memory\n");
+        goto out_err;
+    }
+
+    smmu->size = size;
+
+    if ( !dt_property_read_u32(smmu->node, "#global-interrupts",
+                               &smmu->num_global_irqs) )
+    {
+        smmu_err(smmu, "missing #global-interrupts\n");
+        goto out_unmap;
+    }
+
+    num_irqs = dt_number_of_irq(smmu->node);
+    if ( num_irqs > smmu->num_global_irqs )
+        smmu->num_context_irqs = num_irqs - smmu->num_global_irqs;
+
+    if ( !smmu->num_context_irqs )
+    {
+        smmu_err(smmu, "found %d interrupts but expected at least %d\n",
+                 num_irqs, smmu->num_global_irqs + 1);
+        goto out_unmap;
+    }
+
+    smmu->irqs = xzalloc_array(struct dt_irq, num_irqs);
+    if ( !smmu->irqs )
+    {
+        smmu_err(smmu, "failed to allocate %u IRQs\n", num_irqs);
+        goto out_unmap;
+    }
+
+    for ( i = 0; i < num_irqs; i++ )
+    {
+        res = dt_device_get_irq(smmu->node, i, &smmu->irqs[i]);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to get irq index %d\n", i);
+            goto out_free_irqs;
+        }
+    }
+
+    smmu->sids = xzalloc_array(unsigned long,
+                               BITS_TO_LONGS(SMMU_MAX_STREAMIDS));
+    if ( !smmu->sids )
+    {
+        smmu_err(smmu, "failed to allocate bitmap for stream ID tracking\n");
+        goto out_free_masters;
+    }
+
+    i = 0;
+    smmu->masters = RB_ROOT;
+    while ( !dt_parse_phandle_with_args(smmu->node, "mmu-masters",
+                                        "#stream-id-cells", i, &masterspec) )
+    {
+        res = register_smmu_master(smmu, &masterspec);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to add master %s\n",
+                     masterspec.np->name);
+            goto out_free_masters;
+        }
+        i++;
+    }
+
+    smmu_info(smmu, "registered %d master devices\n", i);
+
+    res = arm_smmu_device_cfg_probe(smmu);
+    if ( res )
+    {
+        smmu_err(smmu, "failed to probe the SMMU\n");
+        goto out_free_masters;
+    }
+
+    if ( smmu->version > 1 &&
+         smmu->num_context_banks != smmu->num_context_irqs )
+    {
+        smmu_err(smmu,
+                 "found only %d context interrupt(s) but %d required\n",
+                 smmu->num_context_irqs, smmu->num_context_banks);
+        goto out_free_masters;
+    }
+
+    smmu_dbg(smmu, "register global IRQs handler\n");
+
+    for ( i = 0; i < smmu->num_global_irqs; ++i )
+    {
+        smmu_dbg(smmu, "\t- global IRQ %u\n", smmu->irqs[i].irq);
+        res = request_dt_irq(&smmu->irqs[i], arm_smmu_global_fault,
+                             "arm-smmu global fault", smmu);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to request global IRQ %d (%u)\n",
+                     i, smmu->irqs[i].irq);
+            goto out_release_irqs;
+        }
+    }
+
+    INIT_LIST_HEAD(&smmu->list);
+    list_add(&smmu->list, &arm_smmu_devices);
+
+    arm_smmu_device_reset(smmu);
+
+    iommu_set_ops(&arm_smmu_iommu_ops);
+
+    /* The sids bitmap was only needed while registering masters, free it now */
+    xfree(smmu->sids);
+    smmu->sids = NULL;
+
+    return 0;
+
+out_release_irqs:
+    while ( i-- )
+        release_dt_irq(&smmu->irqs[i], smmu);
+
+out_free_masters:
+    for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+        xfree(master);
+    }
+
+    xfree(smmu->sids);
+
+out_free_irqs:
+    xfree(smmu->irqs);
+
+out_unmap:
+    iounmap(smmu->base);
+
+out_err:
+    xfree(smmu);
+
+    return -ENODEV;
+}
+
+static const char * const smmu_dt_compat[] __initconst =
+{
+    "arm,mmu-400",
+    NULL
+};
+
+DT_DEVICE_START(smmu, "ARM SMMU", DEVICE_IOMMU)
+    .compatible = smmu_dt_compat,
+    .init = smmu_init,
+DT_DEVICE_END
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpSt-0005SF-GF; Fri, 07 Feb 2014 17:43:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpSr-0005Pa-KO
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:43:46 +0000
Received: from [85.158.143.35:63818] by server-1.bemta-4.messagelabs.com id
	20/7F-31661-05B15F25; Fri, 07 Feb 2014 17:43:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391795021!4001486!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29510 invoked from network); 7 Feb 2014 17:43:42 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:43:42 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so1676117eek.19
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:43:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=oXQDGp3n7kkfVtjDLBJhB9KXjITwlexRcF1fgsIhT0A=;
	b=M48+AFwMLv3Yh9AAQi7RcCF0/8jQvAjJ6iEGqhFMEV+NrNsXaeO8OXhOcxj9/PFiHR
	s0Dw0AmRayVtd7cGTsVjkNRkgv3anNg+xswXB2Rp/EkQM0zEgn7G+YprIJC8qmisv+9g
	SXWNltl5pkp1CAirOh+Dn5GC5KQFi+vR+Rke5OwvpdFU3lFeTwJtM/AXAopSKF2X0lpV
	4ahePTMcN+xmQHTQl7GtV/59rNEmg2ngAeY77CxhUzMelCXwx0At9eArzoGV20XhcN9s
	YveI43c39ucvbi1j/5yt1IrgyBbpZhur+Te76TkAbU4MySo0ZVPTvZOhel5EC8hsaVvV
	rxMA==
X-Gm-Message-State: ALoCoQn3cELT2zBb1n0E4P7N4CoomTgwvRw+XK+BsS86wMtDWq3FXMB4b69xeKFevxrKciNWfXzn
X-Received: by 10.14.176.133 with SMTP id b5mr425500eem.105.1391795021643;
	Fri, 07 Feb 2014 09:43:41 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18849063eef.1.2014.02.07.09.43.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 07 Feb 2014 09:43:40 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri,  7 Feb 2014 17:43:11 +0000
Message-Id: <1391794991-5919-13-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
	support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the ARM architected SMMU driver. It's based on
the Linux driver (drivers/iommu/arm-smmu) at commit 89ac23cd.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/smmu.c   | 1701 ++++++++++++++++++++++++++++++++++
 2 files changed, 1702 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu.c

diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index 0484b79..f4cd26e 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1 +1,2 @@
 obj-y += iommu.o
+obj-y += smmu.o
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
new file mode 100644
index 0000000..9bf2aa3
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -0,0 +1,1701 @@
+/*
+ * IOMMU API for ARM architected SMMU implementations.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Based on Linux drivers/iommu/arm-smmu.c (commit 89a23cd)
+ * Copyright (C) 2013 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * Xen modification:
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (C) 2014 Linaro Limited.
+ *
+ * This driver currently supports:
+ *  - SMMUv1 and v2 implementations (v2 has not been tested)
+ *  - Stream-matching and stream-indexing
+ *  - v7/v8 long-descriptor format
+ *  - Non-secure access to the SMMU
+ *  - 4k pages, p2m shared with the processor
+ *  - Up to 40-bit addressing
+ *  - Context fault reporting
+ */
+
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+
+#define SZ_4K                               (1 << 12)
+#define SZ_64K                              (1 << 16)
+
+/* Driver options */
+#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
+
+/* Maximum number of stream IDs assigned to a single device */
+#define MAX_MASTER_STREAMIDS    MAX_PHANDLE_ARGS
+
+/* Maximum stream ID */
+#define SMMU_MAX_STREAMIDS      (SZ_64K - 1)
+
+/* Maximum number of context banks per SMMU */
+#define SMMU_MAX_CBS        128
+
+/* Maximum number of mapping groups per SMMU */
+#define SMMU_MAX_SMRS       128
+
+/* SMMU global address space */
+#define SMMU_GR0(smmu)      ((smmu)->base)
+#define SMMU_GR1(smmu)      ((smmu)->base + (smmu)->pagesize)
+
+/*
+ * SMMU global address space with conditional offset to access secure aliases of
+ * non-secure registers (e.g. nsCR0: 0x400, nsGFSR: 0x448, nsGFSYNR0: 0x450)
+ */
+#define SMMU_GR0_NS(smmu)                                   \
+    ((smmu)->base +                                         \
+     ((smmu->options & SMMU_OPT_SECURE_CONFIG_ACCESS)    \
+        ? 0x400 : 0))
+
+/* Page table bits */
+#define SMMU_PTE_PAGE           (((pteval_t)3) << 0)
+#define SMMU_PTE_CONT           (((pteval_t)1) << 52)
+#define SMMU_PTE_AF             (((pteval_t)1) << 10)
+#define SMMU_PTE_SH_NS          (((pteval_t)0) << 8)
+#define SMMU_PTE_SH_OS          (((pteval_t)2) << 8)
+#define SMMU_PTE_SH_IS          (((pteval_t)3) << 8)
+
+#if PAGE_SIZE == SZ_4K
+#define SMMU_PTE_CONT_ENTRIES   16
+#elif PAGE_SIZE == SZ_64K
+#define SMMU_PTE_CONT_ENTRIES   32
+#else
+#define SMMU_PTE_CONT_ENTRIES   1
+#endif
+
+#define SMMU_PTE_CONT_SIZE      (PAGE_SIZE * SMMU_PTE_CONT_ENTRIES)
+#define SMMU_PTE_CONT_MASK      (~(SMMU_PTE_CONT_SIZE - 1))
+#define SMMU_PTE_HWTABLE_SIZE   (PTRS_PER_PTE * sizeof(pte_t))
+
+/* Stage-1 PTE */
+#define SMMU_PTE_AP_UNPRIV      (((pteval_t)1) << 6)
+#define SMMU_PTE_AP_RDONLY      (((pteval_t)2) << 6)
+#define SMMU_PTE_ATTRINDX_SHIFT 2
+#define SMMU_PTE_nG             (((pteval_t)1) << 11)
+
+/* Stage-2 PTE */
+#define SMMU_PTE_HAP_FAULT      (((pteval_t)0) << 6)
+#define SMMU_PTE_HAP_READ       (((pteval_t)1) << 6)
+#define SMMU_PTE_HAP_WRITE      (((pteval_t)2) << 6)
+#define SMMU_PTE_MEMATTR_OIWB   (((pteval_t)0xf) << 2)
+#define SMMU_PTE_MEMATTR_NC     (((pteval_t)0x5) << 2)
+#define SMMU_PTE_MEMATTR_DEV    (((pteval_t)0x1) << 2)
+
+/* Configuration registers */
+#define SMMU_GR0_sCR0           0x0
+#define SMMU_sCR0_CLIENTPD      (1 << 0)
+#define SMMU_sCR0_GFRE          (1 << 1)
+#define SMMU_sCR0_GFIE          (1 << 2)
+#define SMMU_sCR0_GCFGFRE       (1 << 4)
+#define SMMU_sCR0_GCFGFIE       (1 << 5)
+#define SMMU_sCR0_USFCFG        (1 << 10)
+#define SMMU_sCR0_VMIDPNE       (1 << 11)
+#define SMMU_sCR0_PTM           (1 << 12)
+#define SMMU_sCR0_FB            (1 << 13)
+#define SMMU_sCR0_BSU_SHIFT     14
+#define SMMU_sCR0_BSU_MASK      0x3
+
+/* Identification registers */
+#define SMMU_GR0_ID0            0x20
+#define SMMU_GR0_ID1            0x24
+#define SMMU_GR0_ID2            0x28
+#define SMMU_GR0_ID3            0x2c
+#define SMMU_GR0_ID4            0x30
+#define SMMU_GR0_ID5            0x34
+#define SMMU_GR0_ID6            0x38
+#define SMMU_GR0_ID7            0x3c
+#define SMMU_GR0_sGFSR          0x48
+#define SMMU_GR0_sGFSYNR0       0x50
+#define SMMU_GR0_sGFSYNR1       0x54
+#define SMMU_GR0_sGFSYNR2       0x58
+#define SMMU_GR0_PIDR0          0xfe0
+#define SMMU_GR0_PIDR1          0xfe4
+#define SMMU_GR0_PIDR2          0xfe8
+
+#define SMMU_ID0_S1TS           (1 << 30)
+#define SMMU_ID0_S2TS           (1 << 29)
+#define SMMU_ID0_NTS            (1 << 28)
+#define SMMU_ID0_SMS            (1 << 27)
+#define SMMU_ID0_PTFS_SHIFT     24
+#define SMMU_ID0_PTFS_MASK      0x2
+#define SMMU_ID0_PTFS_V8_ONLY   0x2
+#define SMMU_ID0_CTTW           (1 << 14)
+#define SMMU_ID0_NUMIRPT_SHIFT  16
+#define SMMU_ID0_NUMIRPT_MASK   0xff
+#define SMMU_ID0_NUMSMRG_SHIFT  0
+#define SMMU_ID0_NUMSMRG_MASK   0xff
+
+#define SMMU_ID1_PAGESIZE            (1 << 31)
+#define SMMU_ID1_NUMPAGENDXB_SHIFT   28
+#define SMMU_ID1_NUMPAGENDXB_MASK    7
+#define SMMU_ID1_NUMS2CB_SHIFT       16
+#define SMMU_ID1_NUMS2CB_MASK        0xff
+#define SMMU_ID1_NUMCB_SHIFT         0
+#define SMMU_ID1_NUMCB_MASK          0xff
+
+#define SMMU_ID2_OAS_SHIFT           4
+#define SMMU_ID2_OAS_MASK            0xf
+#define SMMU_ID2_IAS_SHIFT           0
+#define SMMU_ID2_IAS_MASK            0xf
+#define SMMU_ID2_UBS_SHIFT           8
+#define SMMU_ID2_UBS_MASK            0xf
+#define SMMU_ID2_PTFS_4K             (1 << 12)
+#define SMMU_ID2_PTFS_16K            (1 << 13)
+#define SMMU_ID2_PTFS_64K            (1 << 14)
+
+#define SMMU_PIDR2_ARCH_SHIFT        4
+#define SMMU_PIDR2_ARCH_MASK         0xf
+
+/* Global TLB invalidation */
+#define SMMU_GR0_STLBIALL           0x60
+#define SMMU_GR0_TLBIVMID           0x64
+#define SMMU_GR0_TLBIALLNSNH        0x68
+#define SMMU_GR0_TLBIALLH           0x6c
+#define SMMU_GR0_sTLBGSYNC          0x70
+#define SMMU_GR0_sTLBGSTATUS        0x74
+#define SMMU_sTLBGSTATUS_GSACTIVE   (1 << 0)
+#define SMMU_TLB_LOOP_TIMEOUT       1000000 /* 1s! */
+
+/* Stream mapping registers */
+#define SMMU_GR0_SMR(n)             (0x800 + ((n) << 2))
+#define SMMU_SMR_VALID              (1 << 31)
+#define SMMU_SMR_MASK_SHIFT         16
+#define SMMU_SMR_MASK_MASK          0x7fff
+#define SMMU_SMR_ID_SHIFT           0
+#define SMMU_SMR_ID_MASK            0x7fff
+
+#define SMMU_GR0_S2CR(n)        (0xc00 + ((n) << 2))
+#define SMMU_S2CR_CBNDX_SHIFT   0
+#define SMMU_S2CR_CBNDX_MASK    0xff
+#define SMMU_S2CR_TYPE_SHIFT    16
+#define SMMU_S2CR_TYPE_MASK     0x3
+#define SMMU_S2CR_TYPE_TRANS    (0 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_BYPASS   (1 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_FAULT    (2 << SMMU_S2CR_TYPE_SHIFT)
+
+/* Context bank attribute registers */
+#define SMMU_GR1_CBAR(n)                    (0x0 + ((n) << 2))
+#define SMMU_CBAR_VMID_SHIFT                0
+#define SMMU_CBAR_VMID_MASK                 0xff
+#define SMMU_CBAR_S1_MEMATTR_SHIFT          12
+#define SMMU_CBAR_S1_MEMATTR_MASK           0xf
+#define SMMU_CBAR_S1_MEMATTR_WB             0xf
+#define SMMU_CBAR_TYPE_SHIFT                16
+#define SMMU_CBAR_TYPE_MASK                 0x3
+#define SMMU_CBAR_TYPE_S2_TRANS             (0 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_BYPASS   (1 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_FAULT    (2 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_TRANS    (3 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_IRPTNDX_SHIFT             24
+#define SMMU_CBAR_IRPTNDX_MASK              0xff
+
+#define SMMU_GR1_CBA2R(n)                   (0x800 + ((n) << 2))
+#define SMMU_CBA2R_RW64_32BIT               (0 << 0)
+#define SMMU_CBA2R_RW64_64BIT               (1 << 0)
+
+/* Translation context bank */
+#define SMMU_CB_BASE(smmu)                  ((smmu)->base + ((smmu)->size >> 1))
+#define SMMU_CB(smmu, n)                    ((n) * (smmu)->pagesize)
+
+#define SMMU_CB_SCTLR                       0x0
+#define SMMU_CB_RESUME                      0x8
+#define SMMU_CB_TCR2                        0x10
+#define SMMU_CB_TTBR0_LO                    0x20
+#define SMMU_CB_TTBR0_HI                    0x24
+#define SMMU_CB_TCR                         0x30
+#define SMMU_CB_S1_MAIR0                    0x38
+#define SMMU_CB_FSR                         0x58
+#define SMMU_CB_FAR_LO                      0x60
+#define SMMU_CB_FAR_HI                      0x64
+#define SMMU_CB_FSYNR0                      0x68
+#define SMMU_CB_S1_TLBIASID                 0x610
+
+#define SMMU_SCTLR_S1_ASIDPNE               (1 << 12)
+#define SMMU_SCTLR_CFCFG                    (1 << 7)
+#define SMMU_SCTLR_CFIE                     (1 << 6)
+#define SMMU_SCTLR_CFRE                     (1 << 5)
+#define SMMU_SCTLR_E                        (1 << 4)
+#define SMMU_SCTLR_AFE                      (1 << 2)
+#define SMMU_SCTLR_TRE                      (1 << 1)
+#define SMMU_SCTLR_M                        (1 << 0)
+#define SMMU_SCTLR_EAE_SBOP                 (SMMU_SCTLR_AFE | SMMU_SCTLR_TRE)
+
+#define SMMU_RESUME_RETRY                   (0 << 0)
+#define SMMU_RESUME_TERMINATE               (1 << 0)
+
+#define SMMU_TCR_EAE                        (1 << 31)
+
+#define SMMU_TCR_PASIZE_SHIFT               16
+#define SMMU_TCR_PASIZE_MASK                0x7
+
+#define SMMU_TCR_TG0_4K                     (0 << 14)
+#define SMMU_TCR_TG0_64K                    (1 << 14)
+
+#define SMMU_TCR_SH0_SHIFT                  12
+#define SMMU_TCR_SH0_MASK                   0x3
+#define SMMU_TCR_SH_NS                      0
+#define SMMU_TCR_SH_OS                      2
+#define SMMU_TCR_SH_IS                      3
+
+#define SMMU_TCR_ORGN0_SHIFT                10
+#define SMMU_TCR_IRGN0_SHIFT                8
+#define SMMU_TCR_RGN_MASK                   0x3
+#define SMMU_TCR_RGN_NC                     0
+#define SMMU_TCR_RGN_WBWA                   1
+#define SMMU_TCR_RGN_WT                     2
+#define SMMU_TCR_RGN_WB                     3
+
+#define SMMU_TCR_SL0_SHIFT                  6
+#define SMMU_TCR_SL0_MASK                   0x3
+#define SMMU_TCR_SL0_LVL_2                  0
+#define SMMU_TCR_SL0_LVL_1                  1
+
+#define SMMU_TCR_T1SZ_SHIFT                 16
+#define SMMU_TCR_T0SZ_SHIFT                 0
+#define SMMU_TCR_SZ_MASK                    0xf
+
+#define SMMU_TCR2_SEP_SHIFT                 15
+#define SMMU_TCR2_SEP_MASK                  0x7
+
+#define SMMU_TCR2_PASIZE_SHIFT              0
+#define SMMU_TCR2_PASIZE_MASK               0x7
+
+/* Common definitions for PASize and SEP fields */
+#define SMMU_TCR2_ADDR_32                   0
+#define SMMU_TCR2_ADDR_36                   1
+#define SMMU_TCR2_ADDR_40                   2
+#define SMMU_TCR2_ADDR_42                   3
+#define SMMU_TCR2_ADDR_44                   4
+#define SMMU_TCR2_ADDR_48                   5
+
+#define SMMU_TTBRn_HI_ASID_SHIFT            16
+
+#define SMMU_MAIR_ATTR_SHIFT(n)             ((n) << 3)
+#define SMMU_MAIR_ATTR_MASK                 0xff
+#define SMMU_MAIR_ATTR_DEVICE               0x04
+#define SMMU_MAIR_ATTR_NC                   0x44
+#define SMMU_MAIR_ATTR_WBRWA                0xff
+#define SMMU_MAIR_ATTR_IDX_NC               0
+#define SMMU_MAIR_ATTR_IDX_CACHE            1
+#define SMMU_MAIR_ATTR_IDX_DEV              2
+
+#define SMMU_FSR_MULTI                      (1 << 31)
+#define SMMU_FSR_SS                         (1 << 30)
+#define SMMU_FSR_UUT                        (1 << 8)
+#define SMMU_FSR_ASF                        (1 << 7)
+#define SMMU_FSR_TLBLKF                     (1 << 6)
+#define SMMU_FSR_TLBMCF                     (1 << 5)
+#define SMMU_FSR_EF                         (1 << 4)
+#define SMMU_FSR_PF                         (1 << 3)
+#define SMMU_FSR_AFF                        (1 << 2)
+#define SMMU_FSR_TF                         (1 << 1)
+
+#define SMMU_FSR_IGN                        (SMMU_FSR_AFF | SMMU_FSR_ASF |    \
+                                             SMMU_FSR_TLBMCF | SMMU_FSR_TLBLKF)
+#define SMMU_FSR_FAULT                      (SMMU_FSR_MULTI | SMMU_FSR_SS |   \
+                                             SMMU_FSR_UUT | SMMU_FSR_EF |     \
+                                             SMMU_FSR_PF | SMMU_FSR_TF |      \
+                                             SMMU_FSR_IGN)
+
+#define SMMU_FSYNR0_WNR                     (1 << 4)
+
+#define smmu_print(dev, lvl, fmt, ...)                                        \
+    printk(lvl "smmu: %s: " fmt, dt_node_full_name(dev->node), ## __VA_ARGS__)
+
+#define smmu_err(dev, fmt, ...) smmu_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+
+#define smmu_dbg(dev, fmt, ...)                                             \
+    smmu_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
+
+#define smmu_info(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
+
+#define smmu_warn(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
+
+struct arm_smmu_device {
+    const struct dt_device_node *node;
+
+    void __iomem                *base;
+    unsigned long               size;
+    unsigned long               pagesize;
+
+#define SMMU_FEAT_COHERENT_WALK (1 << 0)
+#define SMMU_FEAT_STREAM_MATCH  (1 << 1)
+#define SMMU_FEAT_TRANS_S1      (1 << 2)
+#define SMMU_FEAT_TRANS_S2      (1 << 3)
+#define SMMU_FEAT_TRANS_NESTED  (1 << 4)
+    u32                         features;
+    u32                         options;
+    int                         version;
+
+    u32                         num_context_banks;
+    u32                         num_s2_context_banks;
+    DECLARE_BITMAP(context_map, SMMU_MAX_CBS);
+    atomic_t                    irptndx;
+
+    u32                         num_mapping_groups;
+    DECLARE_BITMAP(smr_map, SMMU_MAX_SMRS);
+
+    unsigned long               input_size;
+    unsigned long               s1_output_size;
+    unsigned long               s2_output_size;
+
+    u32                         num_global_irqs;
+    u32                         num_context_irqs;
+    struct dt_irq               *irqs;
+
+    u32                         smr_mask_mask;
+    u32                         smr_id_mask;
+
+    unsigned long               *sids;
+
+    struct list_head            list;
+    struct rb_root              masters;
+};
+
+struct arm_smmu_smr {
+    u8                          idx;
+    u16                         mask;
+    u16                         id;
+};
+
+#define INVALID_IRPTNDX         0xff
+
+#define SMMU_CB_ASID(cfg)       ((cfg)->cbndx)
+#define SMMU_CB_VMID(cfg)       ((cfg)->cbndx + 1)
+
+struct arm_smmu_domain_cfg {
+    struct arm_smmu_device  *smmu;
+    u8                      cbndx;
+    u8                      irptndx;
+    u32                     cbar;
+    /* Domain associated with this configuration */
+    struct domain           *domain;
+    /* List of masters which use this configuration */
+    struct list_head        masters;
+
+    /* Used to link the domain contexts belonging to a single domain */
+    struct list_head        list;
+};
+
+struct arm_smmu_master {
+    const struct dt_device_node *dt_node;
+
+    /*
+     * The following is specific to the master's position in the
+     * SMMU chain.
+     */
+    struct rb_node              node;
+    u32                         num_streamids;
+    u16                         streamids[MAX_MASTER_STREAMIDS];
+    int                         num_s2crs;
+
+    struct arm_smmu_smr         *smrs;
+    struct arm_smmu_domain_cfg  *cfg;
+
+    /* Used to link masters in a same domain context */
+    struct list_head            list;
+};
+
+static LIST_HEAD(arm_smmu_devices);
+
+struct arm_smmu_domain {
+    spinlock_t lock;
+    struct list_head contexts;
+};
+
+struct arm_smmu_option_prop {
+    u32         opt;
+    const char  *prop;
+};
+
+static const struct arm_smmu_option_prop arm_smmu_options[] __initconst =
+{
+    { SMMU_OPT_SECURE_CONFIG_ACCESS, "calxeda,smmu-secure-config-access" },
+    { 0, NULL },
+};
+
+static void __init check_driver_options(struct arm_smmu_device *smmu)
+{
+    int i = 0;
+
+    do {
+        if ( dt_property_read_bool(smmu->node, arm_smmu_options[i].prop) )
+        {
+            smmu->options |= arm_smmu_options[i].opt;
+            smmu_dbg(smmu, "option %s\n", arm_smmu_options[i].prop);
+        }
+    } while ( arm_smmu_options[++i].opt );
+}
+
+static void arm_smmu_context_fault(int irq, void *data,
+                                   struct cpu_user_regs *regs)
+{
+    u32 fsr, far, fsynr;
+    unsigned long iova;
+    struct arm_smmu_domain_cfg *cfg = data;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    fsr = readl_relaxed(cb_base + SMMU_CB_FSR);
+
+    if ( !(fsr & SMMU_FSR_FAULT) )
+        return;
+
+    if ( fsr & SMMU_FSR_IGN )
+        smmu_err(smmu, "Unexpected context fault (fsr 0x%x)\n", fsr);
+
+    fsynr = readl_relaxed(cb_base + SMMU_CB_FSYNR0);
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_LO);
+    iova = far;
+#ifdef CONFIG_ARM_64
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_HI);
+    iova |= ((unsigned long)far << 32);
+#endif
+
+    smmu_err(smmu,
+             "Unhandled context fault: iova=0x%08lx, fsynr=0x%x, cb=%d\n",
+             iova, fsynr, cfg->cbndx);
+
+    /* Clear the faulting FSR */
+    writel(fsr, cb_base + SMMU_CB_FSR);
+
+    /* Terminate any stalled transactions */
+    if ( fsr & SMMU_FSR_SS )
+        writel_relaxed(SMMU_RESUME_TERMINATE, cb_base + SMMU_CB_RESUME);
+}
+
+static void arm_smmu_global_fault(int irq, void *data,
+                                  struct cpu_user_regs *regs)
+{
+    u32 gfsr, gfsynr0, gfsynr1, gfsynr2;
+    struct arm_smmu_device *smmu = data;
+    void __iomem *gr0_base = SMMU_GR0_NS(smmu);
+
+    gfsr = readl_relaxed(gr0_base + SMMU_GR0_sGFSR);
+    gfsynr0 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR0);
+    gfsynr1 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR1);
+    gfsynr2 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR2);
+
+    if ( !gfsr )
+        return;
+
+    smmu_err(smmu, "Unexpected global fault, this could be serious\n");
+    smmu_err(smmu,
+             "\tGFSR 0x%08x, GFSYNR0 0x%08x, GFSYNR1 0x%08x, GFSYNR2 0x%08x\n",
+             gfsr, gfsynr0, gfsynr1, gfsynr2);
+    writel(gfsr, gr0_base + SMMU_GR0_sGFSR);
+}
+
+static struct arm_smmu_master *
+find_smmu_master(struct arm_smmu_device *smmu,
+                 const struct dt_device_node *dev_node)
+{
+    struct rb_node *node = smmu->masters.rb_node;
+
+    while ( node )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+
+        if ( dev_node < master->dt_node )
+            node = node->rb_left;
+        else if ( dev_node > master->dt_node )
+            node = node->rb_right;
+        else
+            return master;
+    }
+
+    return NULL;
+}
+
+static __init int insert_smmu_master(struct arm_smmu_device *smmu,
+                                     struct arm_smmu_master *master)
+{
+    struct rb_node **new, *parent;
+
+    new = &smmu->masters.rb_node;
+    parent = NULL;
+    while ( *new )
+    {
+        struct arm_smmu_master *this;
+
+        this = container_of(*new, struct arm_smmu_master, node);
+
+        parent = *new;
+        if ( master->dt_node < this->dt_node )
+            new = &((*new)->rb_left);
+        else if ( master->dt_node > this->dt_node )
+            new = &((*new)->rb_right);
+        else
+            return -EEXIST;
+    }
+
+    rb_link_node(&master->node, parent, new);
+    rb_insert_color(&master->node, &smmu->masters);
+    return 0;
+}
+
+static __init int register_smmu_master(struct arm_smmu_device *smmu,
+                                       struct dt_phandle_args *masterspec)
+{
+    int i, sid;
+    struct arm_smmu_master *master;
+    int rc = 0;
+
+    smmu_dbg(smmu, "Trying to add master %s\n", masterspec->np->name);
+
+    master = find_smmu_master(smmu, masterspec->np);
+    if ( master )
+    {
+        smmu_err(smmu,
+                 "rejecting multiple registrations for master device %s\n",
+                 masterspec->np->name);
+        return -EBUSY;
+    }
+
+    if ( masterspec->args_count > MAX_MASTER_STREAMIDS )
+    {
+        smmu_err(smmu,
+            "reached maximum number (%d) of stream IDs for master device %s\n",
+            MAX_MASTER_STREAMIDS, masterspec->np->name);
+        return -ENOSPC;
+    }
+
+    master = xzalloc(struct arm_smmu_master);
+    if ( !master )
+        return -ENOMEM;
+
+    INIT_LIST_HEAD(&master->list);
+    master->dt_node = masterspec->np;
+    master->num_streamids = masterspec->args_count;
+
+    for ( i = 0; i < master->num_streamids; ++i )
+    {
+        sid = masterspec->args[i];
+        if ( test_and_set_bit(sid, smmu->sids) )
+        {
+            smmu_err(smmu, "duplicate stream ID (%d)\n", sid);
+            xfree(master);
+            return -EEXIST;
+        }
+        master->streamids[i] = masterspec->args[i];
+    }
+
+    rc = insert_smmu_master(smmu, master);
+    /* Insertion should never fail */
+    ASSERT(rc == 0);
+
+    return 0;
+}
+
+static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
+{
+    int idx;
+
+    do
+    {
+        idx = find_next_zero_bit(map, end, start);
+        if ( idx == end )
+            return -ENOSPC;
+    } while ( test_and_set_bit(idx, map) );
+
+    return idx;
+}
+
+static void __arm_smmu_free_bitmap(unsigned long *map, int idx)
+{
+    clear_bit(idx, map);
+}
+
+static void arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
+{
+    int count = 0;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    writel_relaxed(0, gr0_base + SMMU_GR0_sTLBGSYNC);
+    while ( readl_relaxed(gr0_base + SMMU_GR0_sTLBGSTATUS) &
+            SMMU_sTLBGSTATUS_GSACTIVE )
+    {
+        cpu_relax();
+        if ( ++count == SMMU_TLB_LOOP_TIMEOUT )
+        {
+            smmu_err(smmu, "TLB sync timed out -- SMMU may be deadlocked\n");
+            return;
+        }
+        udelay(1);
+    }
+}
+
+static void arm_smmu_tlb_inv_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *base = SMMU_GR0(smmu);
+
+    writel_relaxed(SMMU_CB_VMID(cfg),
+                   base + SMMU_GR0_TLBIVMID);
+
+    arm_smmu_tlb_sync(smmu);
+}
+
+static void arm_smmu_iotlb_flush_all(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry(cfg, &smmu_domain->contexts, list)
+        arm_smmu_tlb_inv_context(cfg);
+    spin_unlock(&smmu_domain->lock);
+}
+
+static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                 unsigned int page_count)
+{
+    /* ARM SMMU v1 cannot invalidate by IOVA range, so flush the whole VMID */
+    arm_smmu_iotlb_flush_all(d);
+}
+
+static int determine_smr_mask(struct arm_smmu_device *smmu,
+                              struct arm_smmu_master *master,
+                              struct arm_smmu_smr *smr, int start, int order)
+{
+    u16 i, zero_bits_mask, one_bits_mask, const_mask;
+    int nr;
+
+    nr = 1 << order;
+
+    if ( nr == 1 )
+    {
+        /* no mask, use streamid to match and be done with it */
+        smr->mask = 0;
+        smr->id = master->streamids[start];
+        return 0;
+    }
+
+    zero_bits_mask = 0;
+    one_bits_mask = 0xffff;
+    for ( i = start; i < start + nr; i++ )
+    {
+        zero_bits_mask |= master->streamids[i]; /* const 0 bits */
+        one_bits_mask &= master->streamids[i];  /* const 1 bits */
+    }
+    zero_bits_mask = ~zero_bits_mask;
+
+    /* bits having constant values (either 0 or 1) */
+    const_mask = zero_bits_mask | one_bits_mask;
+
+    i = hweight16(~const_mask);
+    if ( (1 << i) == nr )
+    {
+        smr->mask = ~const_mask;
+        smr->id = one_bits_mask;
+    }
+    else
+        /* no usable mask for this set of streamids */
+        return 1;
+
+    if ( ((smr->mask & smmu->smr_mask_mask) != smr->mask) ||
+         ((smr->id & smmu->smr_id_mask) != smr->id) )
+        /* insufficient number of mask/id bits */
+        return 1;
+
+    return 0;
+}
+
+static int determine_smr_mapping(struct arm_smmu_device *smmu,
+                                 struct arm_smmu_master *master,
+                                 struct arm_smmu_smr *smrs, int max_smrs)
+{
+    int nr_sid, nr, i, bit, start;
+
+    /*
+     * This function is called only once -- when a master is added
+     * to a domain. If master->num_s2crs != 0 then this master
+     * was already added to a domain.
+     */
+    BUG_ON(master->num_s2crs);
+
+    start = nr = 0;
+    nr_sid = master->num_streamids;
+    do
+    {
+        /*
+         * largest power-of-2 number of streamids for which to
+         * determine a usable mask/id pair for stream matching
+         */
+        bit = fls(nr_sid);
+        if ( !bit )
+            return 0;
+
+        /*
+         * iterate over power-of-2 numbers to determine
+         * largest possible mask/id pair for stream matching
+         * of next 2**i streamids
+         */
+        for ( i = bit - 1; i >= 0; i-- )
+        {
+            if ( !determine_smr_mask(smmu, master,
+                                     &smrs[master->num_s2crs],
+                                     start, i) )
+                break;
+        }
+
+        if ( i < 0 )
+            goto out;
+
+        nr = 1 << i;
+        nr_sid -= nr;
+        start += nr;
+        master->num_s2crs++;
+        /* '<' (not '<='): never write past smrs[max_smrs - 1] */
+    } while ( master->num_s2crs < max_smrs );
+
+out:
+    if ( nr_sid )
+    {
+        /* not enough mapping groups available */
+        master->num_s2crs = 0;
+        return -ENOSPC;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
+                                          struct arm_smmu_master *master)
+{
+    int i, max_smrs, ret;
+    struct arm_smmu_smr *smrs;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    if ( !(smmu->features & SMMU_FEAT_STREAM_MATCH) )
+        return 0;
+
+    if ( master->smrs )
+        return -EEXIST;
+
+    max_smrs = min(smmu->num_mapping_groups, master->num_streamids);
+    smrs = xmalloc_array(struct arm_smmu_smr, max_smrs);
+    if ( !smrs )
+    {
+        smmu_err(smmu, "failed to allocate %d SMRs for master %s\n",
+                 max_smrs, dt_node_name(master->dt_node));
+        return -ENOMEM;
+    }
+
+    ret = determine_smr_mapping(smmu, master, smrs, max_smrs);
+    if ( ret )
+        goto err_free_smrs;
+
+    /* Allocate the SMRs on the root SMMU */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
+                                          smmu->num_mapping_groups);
+        if ( idx < 0 )
+        {
+            smmu_err(smmu, "failed to allocate free SMR\n");
+            goto err_free_bitmap;
+        }
+        smrs[i].idx = idx;
+    }
+
+    /* It worked! Now, poke the actual hardware */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 reg = SMMU_SMR_VALID | smrs[i].id << SMMU_SMR_ID_SHIFT |
+            smrs[i].mask << SMMU_SMR_MASK_SHIFT;
+        smmu_dbg(smmu, "SMR%d: 0x%x\n", smrs[i].idx, reg);
+        writel_relaxed(reg, gr0_base + SMMU_GR0_SMR(smrs[i].idx));
+    }
+
+    master->smrs = smrs;
+    return 0;
+
+err_free_bitmap:
+    while ( --i >= 0 )
+        __arm_smmu_free_bitmap(smmu->smr_map, smrs[i].idx);
+    master->num_s2crs = 0;
+err_free_smrs:
+    xfree(smrs);
+    return -ENOSPC;
+}
+
+/* Forward declaration */
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg);
+
+static int arm_smmu_domain_add_master(struct domain *d,
+                                      struct arm_smmu_domain_cfg *cfg,
+                                      struct arm_smmu_master *master)
+{
+    int i, ret;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    if ( master->cfg )
+        return -EBUSY;
+
+    ret = arm_smmu_master_configure_smrs(smmu, master);
+    if ( ret )
+        return ret;
+
+    /* Now we're at the root, time to point at our context bank */
+    if ( !master->num_s2crs )
+        master->num_s2crs = master->num_streamids;
+
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 idx, s2cr;
+
+        idx = smrs ? smrs[i].idx : master->streamids[i];
+        s2cr = (SMMU_S2CR_TYPE_TRANS << SMMU_S2CR_TYPE_SHIFT) |
+            (cfg->cbndx << SMMU_S2CR_CBNDX_SHIFT);
+        smmu_dbg(smmu, "S2CR%d: 0x%x\n", idx, s2cr);
+        writel_relaxed(s2cr, gr0_base + SMMU_GR0_S2CR(idx));
+    }
+
+    master->cfg = cfg;
+    list_add(&master->list, &cfg->masters);
+
+    return 0;
+}
+
+static void arm_smmu_domain_remove_master(struct arm_smmu_master *master)
+{
+    int i;
+    struct arm_smmu_domain_cfg *cfg = master->cfg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    /*
+     * We *must* clear the S2CR first, because freeing the SMR means
+     * that it can be reallocated immediately. Use the same S2CR
+     * indexing as arm_smmu_domain_add_master(): the SMR index when
+     * stream matching is in use, the raw stream ID otherwise.
+     */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u16 idx = smrs ? smrs[i].idx : master->streamids[i];
+
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT,
+                       gr0_base + SMMU_GR0_S2CR(idx));
+    }
+
+    /* Invalidate the SMRs before freeing back to the allocator */
+    if ( smrs )
+    {
+        for ( i = 0; i < master->num_s2crs; ++i )
+        {
+            u8 idx = smrs[i].idx;
+
+            writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(idx));
+            __arm_smmu_free_bitmap(smmu->smr_map, idx);
+        }
+
+        master->smrs = NULL;
+        xfree(smrs);
+    }
+
+    master->cfg = NULL;
+    list_del(&master->list);
+    INIT_LIST_HEAD(&master->list);
+}
+
+static void arm_smmu_init_context_bank(struct arm_smmu_domain_cfg *cfg)
+{
+    u32 reg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base, *gr0_base, *gr1_base;
+    paddr_t p2maddr;
+
+    ASSERT(cfg->domain != NULL);
+    p2maddr = page_to_maddr(cfg->domain->arch.p2m.first_level);
+
+    gr0_base = SMMU_GR0(smmu);
+    gr1_base = SMMU_GR1(smmu);
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+
+    /* CBAR */
+    reg = cfg->cbar;
+    if ( smmu->version == 1 )
+        reg |= cfg->irptndx << SMMU_CBAR_IRPTNDX_SHIFT;
+
+    reg |= SMMU_CB_VMID(cfg) << SMMU_CBAR_VMID_SHIFT;
+    writel_relaxed(reg, gr1_base + SMMU_GR1_CBAR(cfg->cbndx));
+
+    if ( smmu->version > 1 )
+    {
+        /* CBA2R */
+#ifdef CONFIG_ARM_64
+        reg = SMMU_CBA2R_RW64_64BIT;
+#else
+        reg = SMMU_CBA2R_RW64_32BIT;
+#endif
+        writel_relaxed(reg, gr1_base + SMMU_GR1_CBA2R(cfg->cbndx));
+    }
+
+    /* TTBR0 */
+    reg = (p2maddr & ((1ULL << 32) - 1));
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_LO);
+    reg = (p2maddr >> 32);
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_HI);
+
+    /*
+     * TCR
+     * We use long descriptor, with inner-shareable WBWA tables in TTBR0.
+     */
+    if ( smmu->version > 1 )
+    {
+        /* 4K Page Table */
+        if ( PAGE_SIZE == SZ_4K )
+            reg = SMMU_TCR_TG0_4K;
+        else
+            reg = SMMU_TCR_TG0_64K;
+
+        switch ( smmu->s2_output_size )
+        {
+        case 32:
+            reg |= (SMMU_TCR2_ADDR_32 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 36:
+            reg |= (SMMU_TCR2_ADDR_36 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 40:
+            reg |= (SMMU_TCR2_ADDR_40 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 42:
+            reg |= (SMMU_TCR2_ADDR_42 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 44:
+            reg |= (SMMU_TCR2_ADDR_44 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 48:
+            reg |= (SMMU_TCR2_ADDR_48 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        }
+    }
+    else
+        reg = 0;
+
+    /* The attribute to walk the page table should be the same as VTCR_EL2 */
+    reg |= SMMU_TCR_EAE |
+        (SMMU_TCR_SH_NS << SMMU_TCR_SH0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_ORGN0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_IRGN0_SHIFT) |
+        (SMMU_TCR_SL0_LVL_1 << SMMU_TCR_SL0_SHIFT);
+    writel_relaxed(reg, cb_base + SMMU_CB_TCR);
+
+    /* SCTLR */
+    reg = SMMU_SCTLR_CFCFG |
+        SMMU_SCTLR_CFIE |
+        SMMU_SCTLR_CFRE |
+        SMMU_SCTLR_M |
+        SMMU_SCTLR_EAE_SBOP;
+
+    writel_relaxed(reg, cb_base + SMMU_CB_SCTLR);
+}
+
+static struct arm_smmu_domain_cfg *
+arm_smmu_alloc_domain_context(struct domain *d,
+                              struct arm_smmu_device *smmu)
+{
+    const struct dt_irq *irq;
+    int ret, start;
+    struct arm_smmu_domain_cfg *cfg;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+
+    cfg = xzalloc(struct arm_smmu_domain_cfg);
+    if ( !cfg )
+        return NULL;
+
+    cfg->cbar = SMMU_CBAR_TYPE_S2_TRANS;
+    start = 0;
+
+    ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
+                                  smmu->num_context_banks);
+    if ( ret < 0 )
+        goto out_free_mem;
+
+    cfg->cbndx = ret;
+    if ( smmu->version == 1 )
+    {
+        cfg->irptndx = atomic_inc_return(&smmu->irptndx);
+        cfg->irptndx %= smmu->num_context_irqs;
+    }
+    else
+        cfg->irptndx = cfg->cbndx;
+
+    irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+    ret = request_dt_irq(irq, arm_smmu_context_fault,
+                         "arm-smmu-context-fault", cfg);
+    if ( ret )
+    {
+        smmu_err(smmu, "failed to request context IRQ %d (%u)\n",
+                 cfg->irptndx, irq->irq);
+        cfg->irptndx = INVALID_IRPTNDX;
+        goto out_free_context;
+    }
+
+    cfg->domain = d;
+    cfg->smmu = smmu;
+
+    arm_smmu_init_context_bank(cfg);
+    list_add(&cfg->list, &smmu_domain->contexts);
+    INIT_LIST_HEAD(&cfg->masters);
+
+    return cfg;
+
+out_free_context:
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+out_free_mem:
+    xfree(cfg);
+
+    return NULL;
+}
+
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct domain *d = cfg->domain;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+    const struct dt_irq *irq;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+    BUG_ON(!list_empty(&cfg->masters));
+
+    /* Disable the context bank and nuke the TLB before freeing it */
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+    arm_smmu_tlb_inv_context(cfg);
+
+    if ( cfg->irptndx != INVALID_IRPTNDX )
+    {
+        irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+        release_dt_irq(irq, cfg);
+    }
+
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+}
+
+static struct arm_smmu_device *
+arm_smmu_find_smmu_by_dev(const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_master *master = NULL;
+
+    list_for_each_entry( smmu, &arm_smmu_devices, list )
+    {
+        master = find_smmu_master(smmu, dev);
+        if ( master )
+            break;
+    }
+
+    if ( !master )
+        return NULL;
+
+    return smmu;
+}
+
+static int arm_smmu_attach_dev(struct domain *d,
+                               const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu = arm_smmu_find_smmu_by_dev(dev);
+    struct arm_smmu_master *master;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg = NULL;
+    struct arm_smmu_domain_cfg *curr;
+    int ret;
+
+    printk(XENLOG_DEBUG "arm-smmu: attach %s to domain %d\n",
+           dt_node_full_name(dev), d->domain_id);
+
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: cannot attach to SMMU, is it on the same bus?\n",
+               dt_node_full_name(dev));
+        return -ENODEV;
+    }
+
+    master = find_smmu_master(smmu, dev);
+    BUG_ON(master == NULL);
+
+    /* Check if the device is already assigned to someone */
+    if ( master->cfg )
+        return -EBUSY;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry( curr, &smmu_domain->contexts, list )
+    {
+        if ( curr->smmu == smmu )
+        {
+            cfg = curr;
+            break;
+        }
+    }
+
+    if ( !cfg )
+    {
+        cfg = arm_smmu_alloc_domain_context(d, smmu);
+        if ( !cfg )
+        {
+            smmu_err(smmu, "unable to allocate context for domain %u\n",
+                     d->domain_id);
+            spin_unlock(&smmu_domain->lock);
+            return -ENOMEM;
+        }
+    }
+    spin_unlock(&smmu_domain->lock);
+
+    ret = arm_smmu_domain_add_master(d, cfg, master);
+    if ( ret )
+    {
+        spin_lock(&smmu_domain->lock);
+        if ( list_empty(&cfg->masters) )
+            arm_smmu_destroy_domain_context(cfg);
+        spin_unlock(&smmu_domain->lock);
+    }
+
+    return ret;
+}
+
+static __init int arm_smmu_id_size_to_bits(int size)
+{
+    switch ( size )
+    {
+    case 0:
+        return 32;
+    case 1:
+        return 36;
+    case 2:
+        return 40;
+    case 3:
+        return 42;
+    case 4:
+        return 44;
+    case 5:
+    default:
+        return 48;
+    }
+}
+
+static __init int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
+{
+    unsigned long size;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    u32 id;
+
+    smmu_info(smmu, "probing hardware configuration...\n");
+
+    /*
+     * Primecell ID
+     */
+    id = readl_relaxed(gr0_base + SMMU_GR0_PIDR2);
+    smmu->version = ((id >> SMMU_PIDR2_ARCH_SHIFT) & SMMU_PIDR2_ARCH_MASK) + 1;
+    smmu_info(smmu, "SMMUv%d with:\n", smmu->version);
+
+    /* ID0 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID0);
+#ifndef CONFIG_ARM_64
+    if ( ((id >> SMMU_ID0_PTFS_SHIFT) & SMMU_ID0_PTFS_MASK) ==
+            SMMU_ID0_PTFS_V8_ONLY )
+    {
+        smmu_err(smmu, "\tno v7 descriptor support!\n");
+        return -ENODEV;
+    }
+#endif
+    if ( id & SMMU_ID0_S1TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S1;
+        smmu_info(smmu, "\tstage 1 translation\n");
+    }
+
+    if ( id & SMMU_ID0_S2TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S2;
+        smmu_info(smmu, "\tstage 2 translation\n");
+    }
+
+    if ( id & SMMU_ID0_NTS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_NESTED;
+        smmu_info(smmu, "\tnested translation\n");
+    }
+
+    if ( !(smmu->features &
+           (SMMU_FEAT_TRANS_S1 | SMMU_FEAT_TRANS_S2 |
+            SMMU_FEAT_TRANS_NESTED)) )
+    {
+        smmu_err(smmu, "\tno translation support!\n");
+        return -ENODEV;
+    }
+
+    /* We need at least support for Stage 2 */
+    if ( !(smmu->features & SMMU_FEAT_TRANS_S2) )
+    {
+        smmu_err(smmu, "\tno stage 2 translation!\n");
+        return -ENODEV;
+    }
+
+    if ( id & SMMU_ID0_CTTW )
+    {
+        smmu->features |= SMMU_FEAT_COHERENT_WALK;
+        smmu_info(smmu, "\tcoherent table walk\n");
+    }
+
+    if ( id & SMMU_ID0_SMS )
+    {
+        u32 smr, sid, mask;
+
+        smmu->features |= SMMU_FEAT_STREAM_MATCH;
+        smmu->num_mapping_groups = (id >> SMMU_ID0_NUMSMRG_SHIFT) &
+            SMMU_ID0_NUMSMRG_MASK;
+        if ( smmu->num_mapping_groups == 0 )
+        {
+            smmu_err(smmu,
+                     "stream-matching supported, but no SMRs present!\n");
+            return -ENODEV;
+        }
+
+        smr = SMMU_SMR_MASK_MASK << SMMU_SMR_MASK_SHIFT;
+        smr |= (SMMU_SMR_ID_MASK << SMMU_SMR_ID_SHIFT);
+        writel_relaxed(smr, gr0_base + SMMU_GR0_SMR(0));
+        smr = readl_relaxed(gr0_base + SMMU_GR0_SMR(0));
+
+        mask = (smr >> SMMU_SMR_MASK_SHIFT) & SMMU_SMR_MASK_MASK;
+        sid = (smr >> SMMU_SMR_ID_SHIFT) & SMMU_SMR_ID_MASK;
+        if ( (mask & sid) != sid )
+        {
+            smmu_err(smmu,
+                     "SMR mask bits (0x%x) insufficient for ID field (0x%x)\n",
+                     mask, sid);
+            return -ENODEV;
+        }
+        smmu->smr_mask_mask = mask;
+        smmu->smr_id_mask = sid;
+
+        smmu_info(smmu,
+                  "\tstream matching with %u register groups, mask 0x%x\n",
+                  smmu->num_mapping_groups, mask);
+    }
+
+    /* ID1 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID1);
+    smmu->pagesize = (id & SMMU_ID1_PAGESIZE) ? SZ_64K : SZ_4K;
+
+    /* Check for size mismatch of SMMU address space from mapped region */
+    size = 1 << (((id >> SMMU_ID1_NUMPAGENDXB_SHIFT) &
+                  SMMU_ID1_NUMPAGENDXB_MASK) + 1);
+    size *= (smmu->pagesize << 1);
+    if ( smmu->size != size )
+        smmu_warn(smmu, "SMMU address space size (0x%lx) differs "
+                  "from mapped region size (0x%lx)!\n", size, smmu->size);
+
+    smmu->num_s2_context_banks = (id >> SMMU_ID1_NUMS2CB_SHIFT) &
+        SMMU_ID1_NUMS2CB_MASK;
+    smmu->num_context_banks = (id >> SMMU_ID1_NUMCB_SHIFT) &
+        SMMU_ID1_NUMCB_MASK;
+    if ( smmu->num_s2_context_banks > smmu->num_context_banks )
+    {
+        smmu_err(smmu, "impossible number of S2 context banks!\n");
+        return -ENODEV;
+    }
+    smmu_info(smmu, "\t%u context banks (%u stage-2 only)\n",
+              smmu->num_context_banks, smmu->num_s2_context_banks);
+
+    /* ID2 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID2);
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_IAS_SHIFT) &
+                                    SMMU_ID2_IAS_MASK);
+
+    /*
+     * Stage-1 output limited by stage-2 input size due to VTCR_EL2
+     * setup (see setup_virt_paging)
+     */
+    /* Current maximum output size of 40 bits */
+    smmu->s1_output_size = min(40UL, size);
+
+    /* The stage-2 output mask is also applied for bypass */
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_OAS_SHIFT) &
+                                    SMMU_ID2_OAS_MASK);
+    smmu->s2_output_size = min((unsigned long)PADDR_BITS, size);
+
+    if ( smmu->version == 1 )
+        smmu->input_size = 32;
+    else
+    {
+#ifdef CONFIG_ARM_64
+        size = (id >> SMMU_ID2_UBS_SHIFT) & SMMU_ID2_UBS_MASK;
+        size = min(39, arm_smmu_id_size_to_bits(size));
+#else
+        size = 32;
+#endif
+        smmu->input_size = size;
+
+        if ( (PAGE_SIZE == SZ_4K && !(id & SMMU_ID2_PTFS_4K) ) ||
+             (PAGE_SIZE == SZ_64K && !(id & SMMU_ID2_PTFS_64K)) ||
+             (PAGE_SIZE != SZ_4K && PAGE_SIZE != SZ_64K) )
+        {
+            smmu_err(smmu, "CPU page size 0x%lx unsupported\n",
+                     PAGE_SIZE);
+            return -ENODEV;
+        }
+    }
+
+    smmu_info(smmu, "\t%lu-bit VA, %lu-bit IPA, %lu-bit PA\n",
+              smmu->input_size, smmu->s1_output_size, smmu->s2_output_size);
+    return 0;
+}
+
+static __init void arm_smmu_device_reset(struct arm_smmu_device *smmu)
+{
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    void __iomem *cb_base;
+    int i = 0;
+    u32 reg;
+
+    smmu_dbg(smmu, "device reset\n");
+
+    /* Clear Global FSR */
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+    writel(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+
+    /* Mark all SMRn as invalid and all S2CRn as bypass */
+    for ( i = 0; i < smmu->num_mapping_groups; ++i )
+    {
+        writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(i));
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT, gr0_base + SMMU_GR0_S2CR(i));
+    }
+
+    /* Make sure all context banks are disabled and clear CB_FSR  */
+    for ( i = 0; i < smmu->num_context_banks; ++i )
+    {
+        cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, i);
+        writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+        writel_relaxed(SMMU_FSR_FAULT, cb_base + SMMU_CB_FSR);
+    }
+
+    /* Invalidate the TLB, just in case */
+    writel_relaxed(0, gr0_base + SMMU_GR0_STLBIALL);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLH);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLNSNH);
+
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+
+    /* Enable fault reporting */
+    reg |= (SMMU_sCR0_GFRE | SMMU_sCR0_GFIE |
+            SMMU_sCR0_GCFGFRE | SMMU_sCR0_GCFGFIE);
+
+    /* Disable TLB broadcasting. */
+    reg |= (SMMU_sCR0_VMIDPNE | SMMU_sCR0_PTM);
+
+    /* Enable client access, generate a fault if no mapping is found */
+    reg &= ~(SMMU_sCR0_CLIENTPD);
+    reg |= SMMU_sCR0_USFCFG;
+
+    /* Disable forced broadcasting */
+    reg &= ~SMMU_sCR0_FB;
+
+    /* Don't upgrade barriers */
+    reg &= ~(SMMU_sCR0_BSU_MASK << SMMU_sCR0_BSU_SHIFT);
+
+    /* Push the button */
+    arm_smmu_tlb_sync(smmu);
+    writel_relaxed(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+}
+
+int arm_smmu_iommu_domain_init(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain;
+
+    smmu_domain = xzalloc(struct arm_smmu_domain);
+    if ( !smmu_domain )
+        return -ENOMEM;
+
+    spin_lock_init(&smmu_domain->lock);
+    INIT_LIST_HEAD(&smmu_domain->contexts);
+
+    domain_hvm_iommu(d)->arch.priv = smmu_domain;
+
+    return 0;
+}
+
+void arm_smmu_iommu_dom0_init(struct domain *d)
+{
+    struct arm_smmu_device *smmu;
+    struct rb_node *node;
+
+    printk(XENLOG_DEBUG "arm-smmu: Initialize dom0\n");
+
+    list_for_each_entry( smmu, &arm_smmu_devices, list )
+    {
+        for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
+        {
+            struct arm_smmu_master *master;
+
+            master = container_of(node, struct arm_smmu_master, node);
+
+            if ( dt_device_used_by(master->dt_node) == DOMID_XEN ||
+                 platform_device_is_blacklisted(master->dt_node) )
+                continue;
+
+            arm_smmu_attach_dev(d, master->dt_node);
+        }
+    }
+}
+
+void arm_smmu_iommu_domain_teardown(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg, *_cfg;
+
+    spin_lock(&smmu_domain->lock);
+
+    list_for_each_entry_safe( cfg, _cfg, &smmu_domain->contexts, list )
+    {
+        struct arm_smmu_master *master, *_master;
+
+        list_for_each_entry_safe( master, _master, &cfg->masters, list )
+            arm_smmu_domain_remove_master(master);
+        arm_smmu_destroy_domain_context(cfg);
+    }
+
+    spin_unlock(&smmu_domain->lock);
+
+    xfree(smmu_domain);
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+    .init = arm_smmu_iommu_domain_init,
+    .dom0_init = arm_smmu_iommu_dom0_init,
+    .teardown = arm_smmu_iommu_domain_teardown,
+    .iotlb_flush = arm_smmu_iotlb_flush,
+    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+};
+
+static int __init smmu_init(struct dt_device_node *dev,
+                            const void *data)
+{
+    struct arm_smmu_device *smmu;
+    int res;
+    u64 addr, size;
+    unsigned int num_irqs, i;
+    struct dt_phandle_args masterspec;
+    struct rb_node *node;
+
+    /* Even if the device can't be initialized, we don't want to give
+     * the SMMU device to dom0.
+     */
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    smmu = xzalloc(struct arm_smmu_device);
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: failed to allocate arm_smmu_device\n",
+               dt_node_full_name(dev));
+        return -ENOMEM;
+    }
+
+    smmu->node = dev;
+    check_driver_options(smmu);
+
+    res = dt_device_get_address(smmu->node, 0, &addr, &size);
+    if ( res )
+    {
+        smmu_err(smmu, "unable to retrieve the base address of the SMMU\n");
+        goto out_err;
+    }
+
+    smmu->base = ioremap_nocache(addr, size);
+    if ( !smmu->base )
+    {
+        smmu_err(smmu, "unable to map the SMMU memory\n");
+        goto out_err;
+    }
+
+    smmu->size = size;
+
+    if ( !dt_property_read_u32(smmu->node, "#global-interrupts",
+                               &smmu->num_global_irqs) )
+    {
+        smmu_err(smmu, "missing #global-interrupts\n");
+        goto out_unmap;
+    }
+
+    num_irqs = dt_number_of_irq(smmu->node);
+    if ( num_irqs > smmu->num_global_irqs )
+        smmu->num_context_irqs = num_irqs - smmu->num_global_irqs;
+
+    if ( !smmu->num_context_irqs )
+    {
+        smmu_err(smmu, "found %d interrupts but expected at least %d\n",
+                 num_irqs, smmu->num_global_irqs + 1);
+        goto out_unmap;
+    }
+
+    smmu->irqs = xzalloc_array(struct dt_irq, num_irqs);
+    if ( !smmu->irqs )
+    {
+        smmu_err(smmu, "failed to allocate %d irqs\n", num_irqs);
+        goto out_unmap;
+    }
+
+    for ( i = 0; i < num_irqs; i++ )
+    {
+        res = dt_device_get_irq(smmu->node, i, &smmu->irqs[i]);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to get irq index %d\n", i);
+            goto out_free_irqs;
+        }
+    }
+
+    smmu->sids = xzalloc_array(unsigned long,
+                               BITS_TO_LONGS(SMMU_MAX_STREAMIDS));
+    if ( !smmu->sids )
+    {
+        smmu_err(smmu, "failed to allocate bitmap for stream ID tracking\n");
+        goto out_free_masters;
+    }
+
+    i = 0;
+    smmu->masters = RB_ROOT;
+    while ( !dt_parse_phandle_with_args(smmu->node, "mmu-masters",
+                                        "#stream-id-cells", i, &masterspec) )
+    {
+        res = register_smmu_master(smmu, &masterspec);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to add master %s\n",
+                     masterspec.np->name);
+            goto out_free_masters;
+        }
+        i++;
+    }
+
+    smmu_info(smmu, "registered %d master devices\n", i);
+
+    res = arm_smmu_device_cfg_probe(smmu);
+    if ( res )
+    {
+        smmu_err(smmu, "failed to probe the SMMU\n");
+        goto out_free_masters;
+    }
+
+    if ( smmu->version > 1 &&
+         smmu->num_context_banks != smmu->num_context_irqs )
+    {
+        smmu_err(smmu,
+                 "found only %d context interrupt(s) but %d required\n",
+                 smmu->num_context_irqs, smmu->num_context_banks);
+        goto out_free_masters;
+    }
+
+    smmu_dbg(smmu, "register global IRQs handler\n");
+
+    for ( i = 0; i < smmu->num_global_irqs; ++i )
+    {
+        smmu_dbg(smmu, "\t- global IRQ %u\n", smmu->irqs[i].irq);
+        res = request_dt_irq(&smmu->irqs[i], arm_smmu_global_fault,
+                             "arm-smmu global fault", smmu);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to request global IRQ %d (%u)\n",
+                     i, smmu->irqs[i].irq);
+            goto out_release_irqs;
+        }
+    }
+
+    INIT_LIST_HEAD(&smmu->list);
+    list_add(&smmu->list, &arm_smmu_devices);
+
+    arm_smmu_device_reset(smmu);
+
+    iommu_set_ops(&arm_smmu_iommu_ops);
+
+    /* sids field can be freed... */
+    xfree(smmu->sids);
+    smmu->sids = NULL;
+
+    return 0;
+
+out_release_irqs:
+    while ( i-- )
+        release_dt_irq(&smmu->irqs[i], smmu);
+
+out_free_masters:
+    for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+        xfree(master);
+    }
+
+    xfree(smmu->sids);
+
+out_free_irqs:
+    xfree(smmu->irqs);
+
+out_unmap:
+    iounmap(smmu->base);
+
+out_err:
+    xfree(smmu);
+
+    return -ENODEV;
+}
+
+static const char * const smmu_dt_compat[] __initconst =
+{
+    "arm,mmu-400",
+    NULL
+};
+
+DT_DEVICE_START(smmu, "ARM SMMU", DEVICE_IOMMU)
+    .compatible = smmu_dt_compat,
+    .init = smmu_init,
+DT_DEVICE_END
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 17:51:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 17:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBpac-0007Q9-Co; Fri, 07 Feb 2014 17:51:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBpaa-0007Pz-7F
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 17:51:44 +0000
Received: from [85.158.143.35:46194] by server-3.bemta-4.messagelabs.com id
	5A/83-11539-F2D15F25; Fri, 07 Feb 2014 17:51:43 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391795502!4002744!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5192 invoked from network); 7 Feb 2014 17:51:42 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 17:51:42 -0000
Received: by mail-ee0-f42.google.com with SMTP id b15so1678523eek.15
	for <xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 09:51:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=reoFMvfkbAMThfDjk8Td9U1GOFicMj57sZXg/OS4Y8A=;
	b=TO6r1hiZO68756jDXgN4EpAaC0+rDjoL1KHGAErVS217u27R9yH+xeT8az5ZygxIbj
	N8Lb8ywOSvAE6i+bfo2FE7m6R17qVRK7h7nVGPpLbaJ9dkhx/F3o7LZ+njYlKJ+JICJ/
	kRVaL3ubk/gNpRn+9pr4/QIGUV5FftPRDmSeEwHCcGFCt7HlEA1CmJv3ttoLpA7O0h0e
	QiCPKuDd4bDimgJ55xFeCQEbKnAgwhDPxF+7UF5/tOEQdxI04qlpDWfvsOlBoISetoG2
	rQDs90Rkg2/lxmUNoD+BVAx+SDTOt6JfjY5By9UOMU09eNYF2CFtTFx7mrFKcDi7++Ea
	eYQg==
X-Gm-Message-State: ALoCoQlhecPxZzQQHqPJs94dSdQLpM4ZWTFd1xLg49cLmrEoO6tvU4Tu5okfRM7uvAcYHzrXoQPf
X-Received: by 10.15.27.136 with SMTP id p8mr326415eeu.111.1391795501811;
	Fri, 07 Feb 2014 09:51:41 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm18933849eef.1.2014.02.07.09.51.40
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 09:51:41 -0800 (PST)
Message-ID: <52F51D2C.8040305@linaro.org>
Date: Fri, 07 Feb 2014 17:51:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 00/12] IOMMU support for ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I forgot to add a link to repository:
	git://xenbits.xen.org/people/julieng/xen-unstable.git branch smmu

This series also depends on:
   - early printk series : http://lists.xen.org/archives/html/xen-devel/2014-01/msg00288.html
   - interrupt series: http://lists.xen.org/archives/html/xen-devel/2014-01/msg02139.html
   - a few bug fixes on top of the previous series


On 02/07/2014 05:42 PM, Julien Grall wrote:
> Hello,
> 
> This patch series adds support for IOMMU on ARM. It also adds an ARM SMMU
> driver, which is used for instance on Midway.
> 
> The IOMMU architecture for ARM relies on the page table being shared between
> the processor and each IOMMU.
> 
> The patch series is divided as follows:
>     - #1: fixing grant-table with IOMMU. Will be necessary for ARM later
>     - #2-#3: Make static some vtd functions
>     - #4-#5: Adding new device tree functions
>     - #6-#9: Prepare IOMMU code to add support for ARM
>     - #10-#11: Add IOMMU architecture for ARM
>     - #12: Add SMMU drivers
> 
> For now the 1:1 workaround is not removed, because the same platform can have
> some DMA-capable devices behind an IOMMU and some not. This is a problem for
> the swiotlb, which needs to know whether a device is protected when a foreign
> mapping is mapped in dom0.
> 
> When I talked with Stefano, 2 solutions came up:
>     - Having a property in each "protected" device
>     - Listing the protected devices in the hypervisor node
> 
> I haven't yet decided which solution I will use.
> 
> Any comments or questions are welcome.
> 
> Sincerely yours,
> 
> Julien Grall (12):
>   xen/common: grant-table: only call IOMMU if paging mode translate is
>     disabled
>   xen/passthrough: vtd: Don't export iommu_domain_teardown
>   xen/passthrough: vtd: Don't export iommu_set_pgd
>   xen/dts: Add dt_property_read_bool
>   xen/dts: Add dt_parse_phandle_with_args and dt_parse_phandle
>   xen/passthrough: rework dom0_pvh_reqs to use it also on ARM
>   xen/passthrough: iommu: Don't need to map dom0 page when the PT is
>     shared
>   xen/passthrough: iommu: Split generic IOMMU code
>   xen/passthrough: iommu: Introduce arch specific code
>   xen/passthrough: Introduce IOMMU ARM architure
>   MAINTAINERS: Add drivers/passthrough/arm
>   drivers/passthrough: arm: Add support for SMMU drivers
> 
>  MAINTAINERS                                 |    1 +
>  xen/arch/arm/Rules.mk                       |    1 +
>  xen/arch/arm/domain.c                       |    7 +
>  xen/arch/arm/domain_build.c                 |    2 +
>  xen/arch/arm/p2m.c                          |    4 +
>  xen/arch/arm/setup.c                        |    2 +
>  xen/arch/x86/domctl.c                       |    6 +-
>  xen/arch/x86/hvm/io.c                       |    2 +-
>  xen/arch/x86/tboot.c                        |    3 +-
>  xen/common/device_tree.c                    |  157 ++-
>  xen/common/grant_table.c                    |    7 +-
>  xen/drivers/passthrough/Makefile            |    7 +-
>  xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
>  xen/drivers/passthrough/amd/iommu_guest.c   |    8 +-
>  xen/drivers/passthrough/amd/iommu_map.c     |   56 +-
>  xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +-
>  xen/drivers/passthrough/arm/Makefile        |    2 +
>  xen/drivers/passthrough/arm/iommu.c         |   65 +
>  xen/drivers/passthrough/arm/smmu.c          | 1701 +++++++++++++++++++++++++++
>  xen/drivers/passthrough/iommu.c             |  525 +--------
>  xen/drivers/passthrough/iommu_pci.c         |  468 ++++++++
>  xen/drivers/passthrough/iommu_x86.c         |  106 ++
>  xen/drivers/passthrough/vtd/iommu.c         |  124 +-
>  xen/include/asm-arm/device.h                |    3 +-
>  xen/include/asm-arm/domain.h                |    2 +
>  xen/include/asm-arm/hvm/iommu.h             |   10 +
>  xen/include/asm-arm/iommu.h                 |   36 +
>  xen/include/asm-x86/hvm/iommu.h             |   29 +
>  xen/include/asm-x86/iommu.h                 |   50 +
>  xen/include/xen/device_tree.h               |   75 ++
>  xen/include/xen/hvm/iommu.h                 |   27 +-
>  xen/include/xen/iommu.h                     |   51 +-
>  32 files changed, 2891 insertions(+), 701 deletions(-)
>  create mode 100644 xen/drivers/passthrough/arm/Makefile
>  create mode 100644 xen/drivers/passthrough/arm/iommu.c
>  create mode 100644 xen/drivers/passthrough/arm/smmu.c
>  create mode 100644 xen/drivers/passthrough/iommu_pci.c
>  create mode 100644 xen/drivers/passthrough/iommu_x86.c
>  create mode 100644 xen/include/asm-arm/hvm/iommu.h
>  create mode 100644 xen/include/asm-arm/iommu.h
>  create mode 100644 xen/include/asm-x86/iommu.h
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:31:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqCh-0000pt-Az; Fri, 07 Feb 2014 18:31:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBqCf-0000po-R1
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 18:31:06 +0000
Received: from [85.158.139.211:5864] by server-6.bemta-5.messagelabs.com id
	DE/A7-14342-96625F25; Fri, 07 Feb 2014 18:31:05 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391797862!2461013!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30712 invoked from network); 7 Feb 2014 18:31:03 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 18:31:03 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17IUxYQ003099
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 18:31:00 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17IUw30001021
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 7 Feb 2014 18:30:58 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17IUv2i018863; Fri, 7 Feb 2014 18:30:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 10:30:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 429DE1C0972; Fri,  7 Feb 2014 13:30:56 -0500 (EST)
Date: Fri, 7 Feb 2014 13:30:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207183056.GA10265@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> I did not.  I do not have the toolchain installed.  I may have time later
> today to try the patch.  Are there any specific instructions on how to
> patch the src, compile and install?

There should soon be a new Xen 4.4-rcX release which will have the
fix. That might be easier for you?
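[Editor's note: the mechanics of applying a posted patch are the same for any source tree. A minimal runnable sketch follows; the file name and patch contents are invented purely for illustration, and nothing in it is Xen-specific.]

```shell
# Minimal demonstration of applying a unified diff with patch(1).
# "demo.c" and "fix.patch" are made-up names for this illustration;
# for Xen you would run "patch -p1 < the-posted-patch" from the top
# of the source tree instead.
set -e
workdir=$(mktemp -d)
cd "$workdir"
printf 'hello\n' > demo.c
cat > fix.patch <<'EOF'
--- a/demo.c
+++ b/demo.c
@@ -1 +1 @@
-hello
+hello, patched
EOF
# -p1 strips the leading a/ and b/ path components
patch -p1 < fix.patch
grep 'patched' demo.c
```

After patching a Xen tree, the usual rebuild sequence would be along the lines of `make xen tools` followed by `make install`, but check the README of the specific release being built.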
> 
> Regards
> 
> 
> On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > Hi all,
> > >
> > > I am attempting to do PCI passthrough of an Intel ET card (4x1G NIC)
> > to an
> > > HVM.  I have been trying to resolve this issue on the xen-users list,
> > > but it was advised to post this issue to this list. (Initial Message -
> > >
> > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > >
> > > The machine I am using as the host is a Dell PowerEdge server with a Xeon
> > > E3-1220 and 4 GB of RAM.
> > >
> > > The possible bug is the following:
> > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > char device redirected to /dev/pts/5 (label serial0)
> > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > ....
> > >
> > > I believe it may be similar to this thread
> > >
> > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > >
> > >
> > > Additional info that may be helpful is below.
> >
> > Did you try the patch?
> > >
> > > Please let me know if you need any additional information.
> > >
> > > Thanks in advance for any help provided!
> > > Regards
> > >
> > > ###########################################################
> > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > ###########################################################
> > > # Configuration file for Xen HVM
> > >
> > > # HVM Name (as appears in 'xl list')
> > > name="ubuntu-hvm-0"
> > > # HVM Build settings (+ hardware)
> > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > builder='hvm'
> > > device_model='qemu-dm'
> > > memory=1024
> > > vcpus=2
> > >
> > > # Virtual Interface
> > > # Network bridge to USB NIC
> > > vif=['bridge=xenbr0']
> > >
> > > ################### PCI PASSTHROUGH ###################
> > > # PCI Permissive mode toggle
> > > #pci_permissive=1
> > >
> > > # All PCI Devices
> > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > >
> > > # First two ports on Intel 4x1G NIC
> > > #pci=['03:00.0','03:00.1']
> > >
> > > # Last two ports on Intel 4x1G NIC
> > > #pci=['04:00.0', '04:00.1']
> > >
> > > # All ports on Intel 4x1G NIC
> > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > >
> > > # Broadcom 2x1G NIC
> > > #pci=['05:00.0', '05:00.1']
> > > ################### PCI PASSTHROUGH ###################
> > >
> > > # HVM Disks
> > > # Hard disk only
> > > # Boot from HDD first ('c')
> > > boot="c"
> > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > >
> > > # Hard disk with ISO
> > > # Boot from ISO first ('d')
> > > #boot="d"
> > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > >
> > > # ACPI Enable
> > > acpi=1
> > > # HVM Event Modes
> > > on_poweroff='destroy'
> > > on_reboot='restart'
> > > on_crash='restart'
> > >
> > > # Serial Console Configuration (Xen Console)
> > > sdl=0
> > > serial='pty'
> > >
> > > # VNC Configuration
> > > # Reachable from all interfaces (0.0.0.0)
> > > vnc=1
> > > vnclisten="0.0.0.0"
> > > vncpasswd=""
> > >
> > > ###########################################################
> > > Copied from the xen-users list
> > > ###########################################################
> > >
> > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > >
> > >
> > > I rebooted the host and assigned the PCI devices to pciback.  The output
> > > looks like:
> > > root@fiat:~# ./dev_mgmt.sh
> > > Loading Kernel Module 'xen-pciback'
> > > Calling function pciback_dev for:
> > > PCI DEVICE 0000:03:00.0
> > > Unbinding 0000:03:00.0 from igb
> > > Binding 0000:03:00.0 to pciback
> > >
> > > PCI DEVICE 0000:03:00.1
> > > Unbinding 0000:03:00.1 from igb
> > > Binding 0000:03:00.1 to pciback
> > >
> > > PCI DEVICE 0000:04:00.0
> > > Unbinding 0000:04:00.0 from igb
> > > Binding 0000:04:00.0 to pciback
> > >
> > > PCI DEVICE 0000:04:00.1
> > > Unbinding 0000:04:00.1 from igb
> > > Binding 0000:04:00.1 to pciback
> > >
> > > PCI DEVICE 0000:05:00.0
> > > Unbinding 0000:05:00.0 from bnx2
> > > Binding 0000:05:00.0 to pciback
> > >
> > > PCI DEVICE 0000:05:00.1
> > > Unbinding 0000:05:00.1 from bnx2
> > > Binding 0000:05:00.1 to pciback
> > >
> > > Listing PCI Devices Available to Xen
> > > 0000:03:00.0
> > > 0000:03:00.1
> > > 0000:04:00.0
> > > 0000:04:00.1
> > > 0000:05:00.0
> > > 0000:05:00.1
> > >
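[Editor's note: the unbind/bind steps in the quoted output correspond to the standard sysfs interface for xen-pciback. A hedged sketch of what a script like the quoted `dev_mgmt.sh` presumably does — the function name is invented, and the sysfs writes require root plus a loaded xen-pciback module, so the function reports and returns when the device node is absent.]

```shell
# Sketch of rebinding one PCI function (BDF) from its current driver
# to pciback via sysfs.  Requires root and the xen-pciback module;
# if the device node does not exist, report it and return cleanly.
rebind_to_pciback() {
    local bdf=$1
    local dev=/sys/bus/pci/devices/$bdf
    if [ ! -e "$dev" ]; then
        echo "no such device: $bdf"
        return 0
    fi
    # Detach from the current driver (igb, bnx2, ...), if any bound
    [ -e "$dev/driver" ] && echo "$bdf" > "$dev/driver/unbind"
    # Register the slot with pciback, then bind the device to it
    echo "$bdf" > /sys/bus/pci/drivers/pciback/new_slot
    echo "$bdf" > /sys/bus/pci/drivers/pciback/bind
}
# Dry run with a nonexistent PCI domain, so nothing is touched
rebind_to_pciback ffff:00:00.0
```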
> > > ###########################################################
> > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > WARNING: ignoring device_model directive.
> > > WARNING: Use "device_model_override" instead if you really want a
> > > non-default device_model
> > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > > how=(nil) callback=(nil) poller=0x210c3c0
> > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > vdev=hda spec.backend=unknown
> > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > vdev=hda, using backend phy
> > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > bootloader
> > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > domain, skipping bootloader
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210c728: deregister unregistered
> > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > free_memkb=2980
> > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > candidate
> > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > >   Loader:        0000000000100000->00000000001a69a4
> > >   Modules:       0000000000000000->0000000000000000
> > >   TOTAL:         0000000000000000->000000003f800000
> > >   ENTRY ADDRESS: 0000000000100608
> > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > >   4KB PAGES: 0x0000000000000200
> > >   2MB PAGES: 0x00000000000001fb
> > >   1GB PAGES: 0x0000000000000000
> > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > vdev=hda spec.backend=phy
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > register slotnum=3
> > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > inprogress: poller=0x210c3c0, flags=i
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > epath=/local/domain/0/backend/vbd/2/768/state
> > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > state 1
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > epath=/local/domain/0/backend/vbd/2/768/state
> > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x2112f48: deregister unregistered
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/block add
> > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > device-model
> > > /usr/bin/qemu-system-i386 with arguments:
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > /usr/bin/qemu-system-i386
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > chardev=libxl-cmd,mode=control
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > register
> > > slotnum=3
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > epath=/local/domain/0/device-model/2/state
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > epath=/local/domain/0/device-model/2/state
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210c960: deregister unregistered
> > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > /var/run/xen/qmp-libxl-2
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "qmp_capabilities",
> > >     "id": 1
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "query-chardev",
> > >     "id": 2
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "change",
> > >     "id": 3,
> > >     "arguments": {
> > >         "device": "vnc",
> > >         "target": "password",
> > >         "arg": ""
> > >     }
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "query-vnc",
> > >     "id": 4
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > register
> > > slotnum=3
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > epath=/local/domain/0/backend/vif/2/0/state
> > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state
> > 1
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > epath=/local/domain/0/backend/vif/2/0/state
> > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210e8a8: deregister unregistered
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/vif-bridge online
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/vif-bridge add
> > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > /var/run/xen/qmp-libxl-2
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "qmp_capabilities",
> > >     "id": 1
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "device_add",
> > >     "id": 2,
> > >     "arguments": {
> > >         "driver": "xen-pci-passthrough",
> > >         "id": "pci-pt-03_00.0",
> > >         "hostaddr": "0000:03:00.0"
> > >     }
> > > }
> > > '
> > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection
> > reset
> > > by peer
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> > backend
> > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> > > progress report: ignored
> > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > > complete, rc=0
> > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> > destroy
> > > Daemon running with PID 3214
> > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > > xc: debug: hypercall buffer: cache current size:4
> > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > >
> > > ###########################################################
> > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > char device redirected to /dev/pts/5 (label serial0)
> > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > CPU #0:
> > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > ES =0000 00000000 0000ffff 00009300
> > > CS =f000 ffff0000 0000ffff 00009b00
> > > SS =0000 00000000 0000ffff 00009300
> > > DS =0000 00000000 0000ffff 00009300
> > > FS =0000 00000000 0000ffff 00009300
> > > GS =0000 00000000 0000ffff 00009300
> > > LDT=0000 00000000 0000ffff 00008200
> > > TR =0000 00000000 0000ffff 00008b00
> > > GDT=     00000000 0000ffff
> > > IDT=     00000000 0000ffff
> > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > DR6=ffff0ff0 DR7=00000400
> > > EFER=0000000000000000
> > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > XMM00=00000000000000000000000000000000
> > > XMM01=00000000000000000000000000000000
> > > XMM02=00000000000000000000000000000000
> > > XMM03=00000000000000000000000000000000
> > > XMM04=00000000000000000000000000000000
> > > XMM05=00000000000000000000000000000000
> > > XMM06=00000000000000000000000000000000
> > > XMM07=00000000000000000000000000000000
> > > CPU #1:
> > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > ES =0000 00000000 0000ffff 00009300
> > > CS =f000 ffff0000 0000ffff 00009b00
> > > SS =0000 00000000 0000ffff 00009300
> > > DS =0000 00000000 0000ffff 00009300
> > > FS =0000 00000000 0000ffff 00009300
> > > GS =0000 00000000 0000ffff 00009300
> > > LDT=0000 00000000 0000ffff 00008200
> > > TR =0000 00000000 0000ffff 00008b00
> > > GDT=     00000000 0000ffff
> > > IDT=     00000000 0000ffff
> > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > DR6=ffff0ff0 DR7=00000400
> > > EFER=0000000000000000
> > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > XMM00=00000000000000000000000000000000
> > > XMM01=00000000000000000000000000000000
> > > XMM02=00000000000000000000000000000000
> > > XMM03=00000000000000000000000000000000
> > > XMM04=00000000000000000000000000000000
> > > XMM05=00000000000000000000000000000000
> > > XMM06=00000000000000000000000000000000
> > > XMM07=00000000000000000000000000000000
> > >
> > > ###########################################################
> > > /etc/default/grub
> > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > GRUB_HIDDEN_TIMEOUT=0
> > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > GRUB_TIMEOUT=10
> > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > GRUB_CMDLINE_LINUX=""
> > > # biosdevname=0
> > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > register slotnum=3
> > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > inprogress: poller=0x210c3c0, flags=i
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > epath=/local/domain/0/backend/vbd/2/768/state
> > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > state 1
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > epath=/local/domain/0/backend/vbd/2/768/state
> > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x2112f48: deregister unregistered
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/block add
> > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > device-model
> > > /usr/bin/qemu-system-i386 with arguments:
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > /usr/bin/qemu-system-i386
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > chardev=libxl-cmd,mode=control
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > register
> > > slotnum=3
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > epath=/local/domain/0/device-model/2/state
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > epath=/local/domain/0/device-model/2/state
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210c960: deregister unregistered
> > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > /var/run/xen/qmp-libxl-2
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "qmp_capabilities",
> > >     "id": 1
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "query-chardev",
> > >     "id": 2
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "change",
> > >     "id": 3,
> > >     "arguments": {
> > >         "device": "vnc",
> > >         "target": "password",
> > >         "arg": ""
> > >     }
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "query-vnc",
> > >     "id": 4
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > register
> > > slotnum=3
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > epath=/local/domain/0/backend/vif/2/0/state
> > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state
> > 1
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > epath=/local/domain/0/backend/vif/2/0/state
> > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210e8a8: deregister unregistered
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/vif-bridge online
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/vif-bridge add
> > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > /var/run/xen/qmp-libxl-2
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "qmp_capabilities",
> > >     "id": 1
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "device_add",
> > >     "id": 2,
> > >     "arguments": {
> > >         "driver": "xen-pci-passthrough",
> > >         "id": "pci-pt-03_00.0",
> > >         "hostaddr": "0000:03:00.0"
> > >     }
> > > }
> > > '
> > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection
> > reset
> > > by peer
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> > backend
> > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> > > progress report: ignored
> > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > > complete, rc=0
> > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> > destroy
> > > Daemon running with PID 3214
> > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > > xc: debug: hypercall buffer: cache current size:4
> > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > >
> > > ###########################################################
> > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > char device redirected to /dev/pts/5 (label serial0)
> > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > CPU #0:
> > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > ES =0000 00000000 0000ffff 00009300
> > > CS =f000 ffff0000 0000ffff 00009b00
> > > SS =0000 00000000 0000ffff 00009300
> > > DS =0000 00000000 0000ffff 00009300
> > > FS =0000 00000000 0000ffff 00009300
> > > GS =0000 00000000 0000ffff 00009300
> > > LDT=0000 00000000 0000ffff 00008200
> > > TR =0000 00000000 0000ffff 00008b00
> > > GDT=     00000000 0000ffff
> > > IDT=     00000000 0000ffff
> > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > DR6=ffff0ff0 DR7=00000400
> > > EFER=0000000000000000
> > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > XMM00=00000000000000000000000000000000
> > > XMM01=00000000000000000000000000000000
> > > XMM02=00000000000000000000000000000000
> > > XMM03=00000000000000000000000000000000
> > > XMM04=00000000000000000000000000000000
> > > XMM05=00000000000000000000000000000000
> > > XMM06=00000000000000000000000000000000
> > > XMM07=00000000000000000000000000000000
> > > CPU #1:
> > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > ES =0000 00000000 0000ffff 00009300
> > > CS =f000 ffff0000 0000ffff 00009b00
> > > SS =0000 00000000 0000ffff 00009300
> > > DS =0000 00000000 0000ffff 00009300
> > > FS =0000 00000000 0000ffff 00009300
> > > GS =0000 00000000 0000ffff 00009300
> > > LDT=0000 00000000 0000ffff 00008200
> > > TR =0000 00000000 0000ffff 00008b00
> > > GDT=     00000000 0000ffff
> > > IDT=     00000000 0000ffff
> > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > DR6=ffff0ff0 DR7=00000400
> > > EFER=0000000000000000
> > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > XMM00=00000000000000000000000000000000
> > > XMM01=00000000000000000000000000000000
> > > XMM02=00000000000000000000000000000000
> > > XMM03=00000000000000000000000000000000
> > > XMM04=00000000000000000000000000000000
> > > XMM05=00000000000000000000000000000000
> > > XMM06=00000000000000000000000000000000
> > > XMM07=00000000000000000000000000000000
> > >
> > > ###########################################################
> > > /etc/default/grub
> > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > GRUB_HIDDEN_TIMEOUT=0
> > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > GRUB_TIMEOUT=10
> > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > GRUB_CMDLINE_LINUX=""
> > > # biosdevname=0
> > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:42:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqNa-00017r-Ue; Fri, 07 Feb 2014 18:42:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WBqNZ-00017m-QY
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 18:42:21 +0000
Received: from [85.158.143.35:45319] by server-3.bemta-4.messagelabs.com id
	CA/5B-11539-D0925F25; Fri, 07 Feb 2014 18:42:21 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1391798539!4010276!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20004 invoked from network); 7 Feb 2014 18:42:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:42:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="99060107"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 18:42:18 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Fri, 7 Feb 2014
	13:42:18 -0500
Message-ID: <52F52908.4080205@citrix.com>
Date: Fri, 7 Feb 2014 18:42:16 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@citrix.com>
References: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
In-Reply-To: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v8] xen/grant-table: Avoid m2p_override
 during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06/02/14 12:56, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> - it cuts out common parts from m2p_*_override functions to
>   *_foreign_p2m_mapping functions
> 
> It also removes a stray space from page.h and changes ret to 0 if
> XENFEAT_auto_translated_physmap, as that is the only possible return value
> there.
> 
> v2:

Please put this version information after the '---' marker.  It doesn't
need to end up in the commit message.

> @@ -955,10 +957,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}

I think this block and the loop should be in an arch-specific function
(e.g., set_foreign_p2m_mappings()), but I would like to hear Stefano's
opinion.

Similarly for clear_foreign_p2m_mappings().

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:55:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqaY-0001uA-Ce; Fri, 07 Feb 2014 18:55:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBqaW-0001u5-Ti
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:55:45 +0000
Received: from [193.109.254.147:39823] by server-16.bemta-14.messagelabs.com
	id AF/93-21945-03C25F25; Fri, 07 Feb 2014 18:55:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391799341!2846967!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6573 invoked from network); 7 Feb 2014 18:55:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:55:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="99063800"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 18:55:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 13:55:40 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBqaS-0004oC-Gf;
	Fri, 07 Feb 2014 18:55:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBqaS-0007Wh-Ce;
	Fri, 07 Feb 2014 18:55:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24778-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 18:55:40 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24778: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8278008638599630518=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8278008638599630518==
Content-Type: text/plain

flight 24778 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24778/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 24743
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 24743
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24743

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  8 debian-fixup             fail blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  cde17c6a62d97239c76aace92f3fdbad09931ca4
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 673 lines long.)


--===============8278008638599630518==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8278008638599630518==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:55:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqaY-0001uA-Ce; Fri, 07 Feb 2014 18:55:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBqaW-0001u5-Ti
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:55:45 +0000
Received: from [193.109.254.147:39823] by server-16.bemta-14.messagelabs.com
	id AF/93-21945-03C25F25; Fri, 07 Feb 2014 18:55:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391799341!2846967!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6573 invoked from network); 7 Feb 2014 18:55:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:55:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="99063800"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 18:55:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 13:55:40 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBqaS-0004oC-Gf;
	Fri, 07 Feb 2014 18:55:40 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBqaS-0007Wh-Ce;
	Fri, 07 Feb 2014 18:55:40 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24778-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 18:55:40 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24778: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8278008638599630518=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8278008638599630518==
Content-Type: text/plain

flight 24778 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24778/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail REGR. vs. 24743
 test-amd64-i386-xend-qemut-winxpsp3  3 host-install(3)  broken REGR. vs. 24743
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 24743

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  8 debian-fixup             fail blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                    fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                     fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                    fail never pass

version targeted for testing:
 xen                  cde17c6a62d97239c76aace92f3fdbad09931ca4
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          broken  
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 673 lines long.)


--===============8278008638599630518==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8278008638599630518==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:56:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqbC-0001wq-Qt; Fri, 07 Feb 2014 18:56:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBqbA-0001wd-S6
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:56:25 +0000
Received: from [85.158.137.68:63706] by server-16.bemta-3.messagelabs.com id
	E5/97-29917-85C25F25; Fri, 07 Feb 2014 18:56:24 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391799381!403582!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1818 invoked from network); 7 Feb 2014 18:56:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:56:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="100916532"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 18:56:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 13:56:20 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBqb6-0000vA-HF;
	Fri, 07 Feb 2014 18:56:20 +0000
Date: Fri, 7 Feb 2014 18:56:07 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 0/4] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series removes the need for maintenance interrupts for both
hardware and software interrupts in Xen.
It achieves this by setting the GICH_LR_HW bit for hardware interrupts
and by checking the status of the GICH_LR registers on return to the
guest, clearing the registers that have become invalid and handling the
lifecycle of the corresponding interrupts in Xen's data structures.

Please test!!


Stefano Stabellini (4):
      xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
      xen/arm: support HW interrupts in gic_set_lr
      xen/arm: do not request maintenance_interrupts
      xen/arm: set GICH_HCR_NPIE if all the LRs are in use

 xen/arch/arm/domain.c     |    2 +-
 xen/arch/arm/gic.c        |  158 +++++++++++++++++++++++++++++++++++++++++++++++++----------------------------------------------------
 xen/arch/arm/irq.c        |    2 +-
 xen/arch/arm/time.c       |    2 +-
 xen/arch/arm/vgic.c       |    7 ++---
 xen/arch/arm/vtimer.c     |    4 +--
 xen/include/asm-arm/gic.h |    2 +-
 7 files changed, 85 insertions(+), 92 deletions(-)

git://xenbits.xen.org/people/sstabellini/xen-unstable.git no_maintenance_interrupts
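
The LR-scanning idea described above can be sketched as a minimal model
(this is not the actual Xen code: the GICH_LR field constants follow the
GICv2 spec, while `gich_lr`, `NR_LRS` and `scan_lrs` are illustrative
names chosen for the sketch):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative GICv2 GICH_LR field layout (per the GICv2 spec). */
#define GICH_LR_STATE_SHIFT 28
#define GICH_LR_STATE_MASK  (0x3U << GICH_LR_STATE_SHIFT)
#define GICH_LR_PENDING     (0x1U << GICH_LR_STATE_SHIFT)
#define GICH_LR_ACTIVE      (0x2U << GICH_LR_STATE_SHIFT)
#define GICH_LR_HW          (1U << 31)

#define NR_LRS 4

/* Hypothetical in-memory model of one vcpu's list registers; the real
 * registers live in the GIC virtual CPU interface. */
static uint32_t gich_lr[NR_LRS];

/* On return to the guest, scan the LRs: an occupied entry whose state
 * field is zero (neither pending nor active) has been fully handled by
 * the guest, so its slot can be reclaimed without ever taking a
 * maintenance interrupt.  Returns the number of slots freed. */
static int scan_lrs(void)
{
    int freed = 0;
    for (int i = 0; i < NR_LRS; i++) {
        if (gich_lr[i] != 0 && (gich_lr[i] & GICH_LR_STATE_MASK) == 0) {
            gich_lr[i] = 0;   /* clear the invalid entry */
            freed++;
        }
    }
    return freed;
}
```

In this model a hardware interrupt is simply an LR entry with
GICH_LR_HW set; once the guest deactivates it, the state field reads as
zero and the next scan reclaims the slot.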

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:56:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqbS-0001zL-7r; Fri, 07 Feb 2014 18:56:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBqbR-0001z7-LF
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:56:41 +0000
Received: from [85.158.137.68:64406] by server-17.bemta-3.messagelabs.com id
	D6/90-22569-86C25F25; Fri, 07 Feb 2014 18:56:40 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391799398!403618!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2546 invoked from network); 7 Feb 2014 18:56:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:56:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="100916613"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 18:56:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 13:56:37 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBqbI-0000vD-EB;
	Fri, 07 Feb 2014 18:56:32 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Feb 2014 18:56:15 +0000
Message-ID: <1391799378-31664-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 1/4] xen/arm: remove unused virtual
	parameter from vgic_vcpu_inject_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/domain.c     |    2 +-
 xen/arch/arm/gic.c        |    2 +-
 xen/arch/arm/irq.c        |    2 +-
 xen/arch/arm/time.c       |    2 +-
 xen/arch/arm/vgic.c       |    4 ++--
 xen/arch/arm/vtimer.c     |    4 ++--
 xen/include/asm-arm/gic.h |    2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..244738d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -791,7 +791,7 @@ void vcpu_mark_events_pending(struct vcpu *v)
     if ( already_pending )
         return;
 
-    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq, 1);
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
 }
 
 /*
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 50b3a38..acf7195 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -748,7 +748,7 @@ int gic_events_need_delivery(void)
 void gic_inject(void)
 {
     if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
+        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3e326b0..5daa269 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -159,7 +159,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->arch.eoi_cpu = smp_processor_id();
 
         /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
+        vgic_vcpu_inject_irq(d->vcpu[0], irq);
         goto out_no_end;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 68b939d..0548201 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -215,7 +215,7 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
     WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
-    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
+    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
 }
 
 /* Route timer's IRQ on this CPU */
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 90e9707..7d10227 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -455,7 +455,7 @@ static int vgic_to_sgi(struct vcpu *v, register_t sgir)
                      sgir, vcpu_mask);
             continue;
         }
-        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq, 1);
+        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq);
     }
     return 1;
 }
@@ -683,7 +683,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
-void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
+void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 {
     int idx = irq >> 2, byte = irq & 0x3;
     uint8_t priority;
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..87be11e 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -34,14 +34,14 @@ static void phys_timer_expired(void *data)
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_PENDING;
     if ( !(t->ctl & CNTx_CTL_MASK) )
-        vgic_vcpu_inject_irq(t->v, t->irq, 1);
+        vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 static void virt_timer_expired(void *data)
 {
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_MASK;
-    vgic_vcpu_inject_irq(t->v, t->irq, 1);
+    vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 int vcpu_domain_init(struct domain *d)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 071280b..6fce5c2 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -162,7 +162,7 @@ extern void domain_vgic_free(struct domain *d);
 
 extern int vcpu_vgic_init(struct vcpu *v);
 
-extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
+extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 void gic_inject(void)
 {
     if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
+        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3e326b0..5daa269 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -159,7 +159,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->arch.eoi_cpu = smp_processor_id();
 
         /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
+        vgic_vcpu_inject_irq(d->vcpu[0], irq);
         goto out_no_end;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 68b939d..0548201 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -215,7 +215,7 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
     WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
-    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
+    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
 }
 
 /* Route timer's IRQ on this CPU */
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 90e9707..7d10227 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -455,7 +455,7 @@ static int vgic_to_sgi(struct vcpu *v, register_t sgir)
                      sgir, vcpu_mask);
             continue;
         }
-        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq, 1);
+        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq);
     }
     return 1;
 }
@@ -683,7 +683,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
-void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
+void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 {
     int idx = irq >> 2, byte = irq & 0x3;
     uint8_t priority;
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..87be11e 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -34,14 +34,14 @@ static void phys_timer_expired(void *data)
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_PENDING;
     if ( !(t->ctl & CNTx_CTL_MASK) )
-        vgic_vcpu_inject_irq(t->v, t->irq, 1);
+        vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 static void virt_timer_expired(void *data)
 {
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_MASK;
-    vgic_vcpu_inject_irq(t->v, t->irq, 1);
+    vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 int vcpu_domain_init(struct domain *d)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 071280b..6fce5c2 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -162,7 +162,7 @@ extern void domain_vgic_free(struct domain *d);
 
 extern int vcpu_vgic_init(struct vcpu *v);
 
-extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
+extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:56:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqbT-00020L-N3; Fri, 07 Feb 2014 18:56:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBqbS-0001zC-6y
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:56:42 +0000
Received: from [85.158.137.68:64431] by server-6.bemta-3.messagelabs.com id
	46/2F-09180-96C25F25; Fri, 07 Feb 2014 18:56:41 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391799398!403618!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2575 invoked from network); 7 Feb 2014 18:56:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:56:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="100916614"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 18:56:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 13:56:37 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBqbI-0000vD-Jt;
	Fri, 07 Feb 2014 18:56:32 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Feb 2014 18:56:18 +0000
Message-ID: <1391799378-31664-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 4/4] xen/arm: set GICH_HCR_NPIE if all the
	LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On return to guest, if there are no free LRs and we still have more
interrupts to inject, set GICH_HCR_NPIE so that we receive a
maintenance interrupt when no pending interrupts are left in the LR
registers.
The maintenance interrupt handler doesn't do anything anymore, but
receiving the interrupt causes gic_inject to be called on return to
guest, which clears the old LRs and injects new interrupts.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 87bd5d3..bee2618 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -810,8 +810,14 @@ void gic_inject(void)
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
         gic_inject_irq_stop();
-    else
+    else {
         gic_inject_irq_start();
+    }
+
+    if ( !list_empty(&current->arch.vgic.lr_pending) )
+        GICH[GICH_HCR] = GICH_HCR_EN | GICH_HCR_NPIE;
+    else
+        GICH[GICH_HCR] = GICH_HCR_EN;
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:56:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqbU-00020v-4s; Fri, 07 Feb 2014 18:56:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBqbS-0001zB-6W
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:56:42 +0000
Received: from [85.158.139.211:62087] by server-17.bemta-5.messagelabs.com id
	90/5A-31975-96C25F25; Fri, 07 Feb 2014 18:56:41 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391799399!2450761!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11136 invoked from network); 7 Feb 2014 18:56:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:56:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="99064082"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 18:56:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 13:56:37 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBqbI-0000vD-JO;
	Fri, 07 Feb 2014 18:56:32 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Feb 2014 18:56:17 +0000
Message-ID: <1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not set GICH_LR_MAINTENANCE_IRQ for every interrupt written to the
GICH_LR registers.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding interrupts
from the inflight queue if appropriate. Add the interrupt to lr_pending
if GIC_IRQ_GUEST_PENDING is still set.

Call gic_clear_lrs from gic_restore_state and on return to guest
(gic_inject).

Remove the now unused code in maintenance_interrupt and gic_irq_eoi.

In vgic_vcpu_inject_irq, if the target is a vcpu running on another cpu,
send an SGI to it to interrupt it and force it to clear the old LRs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c  |  126 ++++++++++++++++++++++-----------------------------
 xen/arch/arm/vgic.c |    3 +-
 2 files changed, 56 insertions(+), 73 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 215b679..87bd5d3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,6 +67,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_clear_lrs(struct vcpu *v);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -126,6 +128,7 @@ void gic_restore_state(struct vcpu *v)
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
+    gic_clear_lrs(v);
     gic_restore_pending_irqs(v);
 }
 
@@ -628,12 +631,12 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
     if ( p->desc != NULL )
-        GICH[GICH_LR + lr] = GICH_LR_HW | state | GICH_LR_MAINTENANCE_IRQ |
+        GICH[GICH_LR + lr] = GICH_LR_HW | state |
             ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
             ((irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT) |
             ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
     else
-        GICH[GICH_LR + lr] = state | GICH_LR_MAINTENANCE_IRQ |
+        GICH[GICH_LR + lr] = state |
             ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
             ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
 
@@ -695,6 +698,54 @@ out:
     return;
 }
 
+static void gic_clear_lrs(struct vcpu *v)
+{
+    struct pending_irq *p;
+    int i = 0, irq;
+    uint32_t lr;
+    bool_t inflight;
+
+    ASSERT(!local_irq_is_enabled());
+
+    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        lr = GICH[GICH_LR + i];
+        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+        {
+            if ( lr & GICH_LR_HW )
+                irq = (lr >> GICH_LR_PHYSICAL_SHIFT) & GICH_LR_PHYSICAL_MASK;
+            else
+                irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+
+            inflight = 0;
+            GICH[GICH_LR + i] = 0;
+            clear_bit(i, &this_cpu(lr_mask));
+
+            spin_lock(&gic.lock);
+            p = irq_to_pending(v, irq);
+            if ( p->desc != NULL )
+                p->desc->status &= ~IRQ_INPROGRESS;
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+            {
+                inflight = 1;
+                gic_add_to_lr_pending(v, irq, p->priority);
+            }
+            spin_unlock(&gic.lock);
+            if ( !inflight )
+            {
+                spin_lock(&v->arch.vgic.lock);
+                list_del_init(&p->inflight);
+                spin_unlock(&v->arch.vgic.lock);
+            }
+
+        }
+
+        i++;
+    }
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -751,6 +802,8 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
+    gic_clear_lrs(current);
+
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
@@ -908,77 +961,8 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        int cpu;
-        bool_t inflight;
-
-        cpu = -1;
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
-            p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, virq, p->priority);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
-        i++;
-    }
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d10227..da15f4d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -699,8 +699,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
-        i++;
-    }
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d10227..da15f4d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -699,8 +699,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 18:56:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 18:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqbV-000225-3H; Fri, 07 Feb 2014 18:56:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WBqbT-0001zb-0i
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 18:56:43 +0000
Received: from [85.158.137.68:64474] by server-10.bemta-3.messagelabs.com id
	DB/BF-07302-A6C25F25; Fri, 07 Feb 2014 18:56:42 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391799398!403618!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2622 invoked from network); 7 Feb 2014 18:56:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:56:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="100916615"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 07 Feb 2014 18:56:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 7 Feb 2014 13:56:37 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WBqbI-0000vD-Fq;
	Fri, 07 Feb 2014 18:56:32 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 7 Feb 2014 18:56:16 +0000
Message-ID: <1391799378-31664-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 2/4] xen/arm: support HW interrupts in
	gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW.

Also add a struct vcpu* parameter to gic_set_lr.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c |   28 ++++++++++++++++------------
 1 file changed, 16 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index acf7195..215b679 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -618,20 +618,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     return rc;
 }
 
-static inline void gic_set_lr(int lr, unsigned int virtual_irq,
+static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
-    struct pending_irq *p = irq_to_pending(current, virtual_irq);
+    struct pending_irq *p = irq_to_pending(v, irq);
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
-        ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
-        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        GICH[GICH_LR + lr] = GICH_LR_HW | state | GICH_LR_MAINTENANCE_IRQ |
+            ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+            ((irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT) |
+            ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    else
+        GICH[GICH_LR + lr] = state | GICH_LR_MAINTENANCE_IRQ |
+            ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+            ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -666,7 +670,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock(&gic.lock);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
+void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
@@ -679,12 +683,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, virtual_irq, state, priority);
+            gic_set_lr(v, i, irq, state, priority);
             goto out;
         }
     }
 
-    gic_add_to_lr_pending(v, virtual_irq, priority);
+    gic_add_to_lr_pending(v, irq, priority);
 
 out:
     spin_unlock_irqrestore(&gic.lock, flags);
@@ -703,7 +707,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         if ( i >= nr_lrs ) return;
 
         spin_lock_irqsave(&gic.lock, flags);
-        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
+        gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
         spin_unlock_irqrestore(&gic.lock, flags);
@@ -950,7 +954,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
 
         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
             p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2->irq, GICH_LR_PENDING, p2->priority);
+            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
             list_del_init(&p2->lr_queue);
             set_bit(i, &this_cpu(lr_mask));
         }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 19:06:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 19:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBqkK-0003N1-4w; Fri, 07 Feb 2014 19:05:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1WBqkI-0003Ms-3K; Fri, 07 Feb 2014 19:05:50 +0000
Received: from [85.158.139.211:44128] by server-3.bemta-5.messagelabs.com id
	0D/78-13671-D8E25F25; Fri, 07 Feb 2014 19:05:49 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1391799948!2453218!1
X-Originating-IP: [209.85.215.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10851 invoked from network); 7 Feb 2014 19:05:48 -0000
Received: from mail-la0-f42.google.com (HELO mail-la0-f42.google.com)
	(209.85.215.42)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 19:05:48 -0000
Received: by mail-la0-f42.google.com with SMTP id hr13so3001429lab.15
	for <multiple recipients>; Fri, 07 Feb 2014 11:05:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=AWW6d7+HNdEmn3pMxmtBDE+j2mdnjv4DJTfdEvJ57Bc=;
	b=kpr1TL7r8SEEx6pJZhebEGsc/l2KZX7X6Hrf7aWtijWiVsyLxgZ1Iee9Z5XoEapw9Z
	ZrPC1lQ6BJtQY3XNAIEpBPlvpCJPZ4ax9RJM+1YEuybVY+WCgrGs+lKt1sBB5GfULe/K
	ab8423WJnLK08Nvxw+pykkZLvT+tOMWMCoT3FFRNNwcmeISnVfmXf7N1Wbq3LH+OzQlx
	xrCMXsDnZK/ibEeXZ47U/t4/HQoZ6LYQt9A1/qKPLWUyNkDb+IuzqkQ3YY2vrRIYd8fJ
	VrDnj+xgCYVoj9Q7qy6T/lZT0Qn7J6CedsadKLdBuFisjViUsUZiKErGvSM+WOzFfUZZ
	Zqqw==
MIME-Version: 1.0
X-Received: by 10.112.142.40 with SMTP id rt8mr140311lbb.52.1391799947928;
	Fri, 07 Feb 2014 11:05:47 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Fri, 7 Feb 2014 11:05:47 -0800 (PST)
Date: Fri, 7 Feb 2014 14:05:47 -0500
X-Google-Sender-Auth: 7Nn2wbrYwVPfhkDL83EG3TepASo
Message-ID: <CAHehzX0x6UJgWjt=eFP6TnUPb3bw6NF1C8Rn+0kPO-ZhW_Au-A@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org
Subject: [Xen-devel] Want to demo or assist at the Xen Project booth at
	SCALE 12X?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Are you planning to attend SCALE 12X, and can you demo something
running Xen Project software?  Would you like an opportunity to show
that demo at the Xen Project booth?

We are looking for a couple good demos to be performed at the Xen
Project booth during SCALE 12X in Los Angeles later this month.

If you are:

1) A user with something cool or an interesting story involving Xen
Project software,
2) An Open Source project which works with Xen Project software, or
3) A vendor with a solution leveraging Xen Project software

this is your opportunity to do some show-and-tell in the Xen Project
booth (vendors: since this is a non-commercial booth, you can demo and
hand out literature, but no closing business at our booth, okay?).

Contact me and describe what you'd like to do.  You might just get a
chance to show your stuff at the booth.

Also, I could really use a couple people who'd be willing to spend an
hour or two talking to people who come by the booth.  You don't need
to be a guru.  If you have some Xen knowledge, or a story to tell (how
you use Xen in your organization, why you picked Xen, etc.), we'd
welcome having you in the booth for a while to talk to people as they
come by.

If you help out, I'll make sure you get one of our cool Xen Project
T-shirts (which flew out of the booth in record time last year)!

Drop me a line if you're willing to be part of the Xen Project booth
at SCALE 12X.

Thanks!

Russ Pavlicek
Xen Project Evangelist/Booth Guy/Loudmouth

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 19:22:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 19:22:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBr0B-0004Ka-As; Fri, 07 Feb 2014 19:22:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1WBr0A-0004KS-BV; Fri, 07 Feb 2014 19:22:14 +0000
Received: from [85.158.139.211:23164] by server-1.bemta-5.messagelabs.com id
	41/4D-12859-56235F25; Fri, 07 Feb 2014 19:22:13 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391800931!2465863!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5326 invoked from network); 7 Feb 2014 19:22:12 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 19:22:12 -0000
Received: by mail-lb0-f171.google.com with SMTP id c11so3068225lbj.30
	for <multiple recipients>; Fri, 07 Feb 2014 11:22:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:content-type;
	bh=vKWICBi+ppeZLtqhT+GzITG9Kjd6BPYoEUlmm+A83cY=;
	b=Aa+zmNhRVD45X+4PMpsJk3fBhzwqTfHEfYAZbAu4AsxntxZuMklpQtRFgUAsClz3e1
	wsWdVOT+o4EbPLdyWLai37cssf4JuGejIgL0nhLs5RSB3iiaQSvrbNvJ+/kjsH8WyKSD
	FYJ1qH53NPrFeMm4TeER0vRzS+CSXGahgbwY4hy2gy2kXil7N8AbJwvfblcpIPS/c2EP
	pJktj1UfU1K5eHy6LSrSmY/tryFvQk0g2NvTt6AkkuN45oMbV/MeWKt/1zPpH/XLSQel
	ib+tuDoMTxS74yLvtBLZbtiy5YJFi8a1jvyvSbjF4RnIiJRqh+kN/VUf0AQqNIzpchxJ
	QEwA==
MIME-Version: 1.0
X-Received: by 10.152.87.228 with SMTP id bb4mr11341565lab.15.1391800931694;
	Fri, 07 Feb 2014 11:22:11 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Fri, 7 Feb 2014 11:22:11 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Fri, 7 Feb 2014 11:22:11 -0800 (PST)
In-Reply-To: <CAHehzX0x6UJgWjt=eFP6TnUPb3bw6NF1C8Rn+0kPO-ZhW_Au-A@mail.gmail.com>
References: <CAHehzX0x6UJgWjt=eFP6TnUPb3bw6NF1C8Rn+0kPO-ZhW_Au-A@mail.gmail.com>
Date: Fri, 7 Feb 2014 14:22:11 -0500
X-Google-Sender-Auth: rd-cr0zHUjm8RydHOf1PbTPI39Q
Message-ID: <CAHehzX0GxJrZNvv_2Xf8MUpcZ_zUVvY67PLKD_kbZ1or+K=xGQ@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Want to demo or assist at the Xen Project booth at
	SCALE 12X?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1215187419002280242=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1215187419002280242==
Content-Type: multipart/alternative; boundary=001a11c2623021bc7e04f1d5e8bf

--001a11c2623021bc7e04f1d5e8bf
Content-Type: text/plain; charset=ISO-8859-1

For those who don't already know the conference,  their website is here:

http://www.socallinuxexpo.org/

It's a great FOSS conference for those who love Open Source.

Russ
On Feb 7, 2014 2:05 PM, "Russ Pavlicek" <russell.pavlicek@xenproject.org>
wrote:

> Are you planning to attend SCALE 12X, and can you demo something
> running Xen Project software?  Would you like an opportunity to show
> that demo at the Xen Project booth?
>
> We are looking for a couple good demos to be performed at the Xen
> Project booth during SCALE 12X in Los Angeles later this month.
>
> If you are:
>
> 1) A user with something cool or an interesting story involving Xen
> Project software,
> 2) An Open Source project which works with Xen Project software, or
> 3) A vendor with a solution leveraging Xen Project software
>
> this is your opportunity to do some show-and-tell in the Xen Project
> booth (vendors: since this is a non-commercial booth, you can demo and
> hand out literature, but no closing business at our booth, okay?).
>
> Contact me and describe what you'd like to do.  You might just get a
> chance to show your stuff at the booth.
>
> Also, I could really use a couple people who'd be willing to spend an
> hour or two talking to people who come by the booth.  You don't need
> to be a guru.  If you have some Xen knowledge, or a story to tell (how
> you use Xen in your organization, why you picked Xen, etc.), we'd
> welcome having you in the booth for a while to talk to people as they
> come by.
>
> If you help out, I'll make sure you get one of our cool Xen Project
> T-shirts (which flew out of the booth in record time last year)!
>
> Drop me a line if you're willing to be part of the Xen Project booth
> at SCALE 12X.
>
> Thanks!
>
> Russ Pavlicek
> Xen Project Evangelist/Booth Guy/Loudmouth
>


--001a11c2623021bc7e04f1d5e8bf--


--===============1215187419002280242==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1215187419002280242==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 19:22:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 19:22:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBr0B-0004Ka-As; Fri, 07 Feb 2014 19:22:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1WBr0A-0004KS-BV; Fri, 07 Feb 2014 19:22:14 +0000
Received: from [85.158.139.211:23164] by server-1.bemta-5.messagelabs.com id
	41/4D-12859-56235F25; Fri, 07 Feb 2014 19:22:13 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1391800931!2465863!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5326 invoked from network); 7 Feb 2014 19:22:12 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 19:22:12 -0000
Received: by mail-lb0-f171.google.com with SMTP id c11so3068225lbj.30
	for <multiple recipients>; Fri, 07 Feb 2014 11:22:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:content-type;
	bh=vKWICBi+ppeZLtqhT+GzITG9Kjd6BPYoEUlmm+A83cY=;
	b=Aa+zmNhRVD45X+4PMpsJk3fBhzwqTfHEfYAZbAu4AsxntxZuMklpQtRFgUAsClz3e1
	wsWdVOT+o4EbPLdyWLai37cssf4JuGejIgL0nhLs5RSB3iiaQSvrbNvJ+/kjsH8WyKSD
	FYJ1qH53NPrFeMm4TeER0vRzS+CSXGahgbwY4hy2gy2kXil7N8AbJwvfblcpIPS/c2EP
	pJktj1UfU1K5eHy6LSrSmY/tryFvQk0g2NvTt6AkkuN45oMbV/MeWKt/1zPpH/XLSQel
	ib+tuDoMTxS74yLvtBLZbtiy5YJFi8a1jvyvSbjF4RnIiJRqh+kN/VUf0AQqNIzpchxJ
	QEwA==
MIME-Version: 1.0
X-Received: by 10.152.87.228 with SMTP id bb4mr11341565lab.15.1391800931694;
	Fri, 07 Feb 2014 11:22:11 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Fri, 7 Feb 2014 11:22:11 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Fri, 7 Feb 2014 11:22:11 -0800 (PST)
In-Reply-To: <CAHehzX0x6UJgWjt=eFP6TnUPb3bw6NF1C8Rn+0kPO-ZhW_Au-A@mail.gmail.com>
References: <CAHehzX0x6UJgWjt=eFP6TnUPb3bw6NF1C8Rn+0kPO-ZhW_Au-A@mail.gmail.com>
Date: Fri, 7 Feb 2014 14:22:11 -0500
X-Google-Sender-Auth: rd-cr0zHUjm8RydHOf1PbTPI39Q
Message-ID: <CAHehzX0GxJrZNvv_2Xf8MUpcZ_zUVvY67PLKD_kbZ1or+K=xGQ@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: xen-users@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Want to demo or assist at the Xen Project booth at
	SCALE 12X?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1215187419002280242=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1215187419002280242==
Content-Type: multipart/alternative; boundary=001a11c2623021bc7e04f1d5e8bf

--001a11c2623021bc7e04f1d5e8bf
Content-Type: text/plain; charset=ISO-8859-1

For those who don't already know the conference, its website is here:

http://www.socallinuxexpo.org/

It's a great FOSS conference for those who love Open Source.

Russ
On Feb 7, 2014 2:05 PM, "Russ Pavlicek" <russell.pavlicek@xenproject.org>
wrote:

> Are you planning to attend SCALE 12X, and can you demo something
> running Xen Project software?  Would you like an opportunity to show
> that demo at the Xen Project booth?
>
> We are looking for a couple good demos to be performed at the Xen
> Project booth during SCALE 12X in Los Angeles later this month.
>
> If you are:
>
> 1) A user with something cool or an interesting story involving Xen
> Project software,
> 2) An Open Source project which works with Xen Project software, or
> 3) A vendor with a solution leveraging Xen Project software
>
> this is your opportunity to do some show-and-tell in the Xen Project
> booth (vendors: since this is a non-commercial booth, you can demo and
> hand out literature, but no closing business at our booth, okay?).
>
> Contact me and describe what you'd like to do.  You might just get a
> chance to show your stuff at the booth.
>
> Also, I could really use a couple people who'd be willing to spend an
> hour or two talking to people who come by the booth.  You don't need
> to be a guru.  If you have some Xen knowledge, or a story to tell (how
> you use Xen in your organization, why you picked Xen, etc.), we'd
> welcome having you in the booth for a while to talk to people as they
> come by.
>
> If you help out, I'll make sure you get one of our cool Xen Project
> T-shirts (which flew out of the booth in record time last year)!
>
> Drop me a line if you're willing to be part of the Xen Project booth
> at SCALE 12X.
>
> Thanks!
>
> Russ Pavlicek
> Xen Project Evangelist/Booth Guy/Loudmouth
>


--001a11c2623021bc7e04f1d5e8bf--


--===============1215187419002280242==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1215187419002280242==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 19:47:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 19:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBrOb-0006LS-6y; Fri, 07 Feb 2014 19:47:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBrOZ-0006LM-UP
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 19:47:28 +0000
Received: from [193.109.254.147:8002] by server-10.bemta-14.messagelabs.com id
	F0/24-10711-F4835F25; Fri, 07 Feb 2014 19:47:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391802444!2849262!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20873 invoked from network); 7 Feb 2014 19:47:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 19:47:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,802,1384300800"; d="scan'208";a="99079027"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 19:47:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 14:47:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBrOV-00053q-A8;
	Fri, 07 Feb 2014 19:47:23 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBrOV-0007NP-0Q;
	Fri, 07 Feb 2014 19:47:23 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24780-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 19:47:23 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 24780: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8053717368504865359=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8053717368504865359==
Content-Type: text/plain

flight 24780 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24780/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24758
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24758
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 24758 pass in 24780

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24604

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24758 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24758 never pass

version targeted for testing:
 linux                e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
baseline version:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501

------------------------------------------------------------
People who touched revisions under test:
  Andrea Arcangeli <aarcange@redhat.com>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Grover <agrover@redhat.com>
  Aristeu Rozanski <aris@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Boris BREZILLON <b.brezillon@overkiz.com>
  Borislav Petkov <bp@suse.de>
  Chris Mason <clm@fb.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Kravkov <dmitry@broadcom.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@linux.intel.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Jack Pham <jackp@codeaurora.org>
  James Bottomley <JBottomley@Parallels.com>
  Jan Prinsloo <janroot@gmail.com>
  Jean-Jacques Hiblot <jjhiblot@traphandler.com>
  Johan Hovold <jhovold@gmail.com>
  John W. Linville <linville@tuxdriver.com>
  Josef Bacik <jbacik@fb.com>
  Jun zhang <zhang.jun92@zte.com.cn>
  Kevin Hilman <khilman@linaro.org>
  Khalid Aziz <khalid.aziz@oracle.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Leilei Zhao <leilei.zhao@atmel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Roszko <mark.roszko@gmail.com>
  Mark Brown <broonie@linaro.org>
  Matthew Garrett <matthew.garrett@nebula.com>
  Michal Schmidt <mschmidt@redhat.com>
  Mikhail Zolotaryov <lebon@lebon.org.ua>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Mackerras <paulus@samba.org>
  PaX Team <pageexec@freemail.hu>
  Rahul Bedarkar <rahulbedarkar89@gmail.com>
  Richard Weinberger <richard@nod.at>
  Sarah Sharp <sarah.a.sharp@linux.intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Takashi Iwai <tiwai@suse.de>
  Thomas Pugliese <thomas.pugliese@gmail.com>
  Vijaya Mohan Guvva <vmohan@brocade.com>
  Wang Shilong <wangsl.fnst@cn.fujitsu.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  yury <urykhy@gmail.com>
  ZHAO Gang <gamerh2o@gmail.com>
  Éric Piel <eric.piel@tremplin-utc.net>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
+ branch=linux-3.4
+ revision=e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b:tested/linux-3.4
Counting objects: 321, done.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (229/229), 35.80 KiB, done.
Total 229 (delta 194), reused 229 (delta 194)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   a132240..e3b1f41  e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b -> tested/linux-3.4
+ exit 0


--===============8053717368504865359==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8053717368504865359==--

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8053717368504865359=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8053717368504865359==
Content-Type: text/plain

flight 24780 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24780/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 12 guest-localmigrate/x10 fail pass in 24758
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24758
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 24758 pass in 24780

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24604

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24758 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24758 never pass

version targeted for testing:
 linux                e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
baseline version:
 linux                a13224074af5f2813d52d15e67fc97e4c5741501

------------------------------------------------------------
People who touched revisions under test:
  Andrea Arcangeli <aarcange@redhat.com>
  Andrew Honig <ahonig@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Grover <agrover@redhat.com>
  Aristeu Rozanski <aris@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Boris BREZILLON <b.brezillon@overkiz.com>
  Borislav Petkov <bp@suse.de>
  Chris Mason <clm@fb.com>
  David S. Miller <davem@davemloft.net>
  Dmitry Kravkov <dmitry@broadcom.com>
  Eric Dumazet <edumazet@google.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@linux.intel.com>
  Hannes Frederic Sowa <hannes@stressinduktion.org>
  Jack Pham <jackp@codeaurora.org>
  James Bottomley <JBottomley@Parallels.com>
  Jan Prinsloo <janroot@gmail.com>
  Jean-Jacques Hiblot <jjhiblot@traphandler.com>
  Johan Hovold <jhovold@gmail.com>
  John W. Linville <linville@tuxdriver.com>
  Josef Bacik <jbacik@fb.com>
  Jun zhang <zhang.jun92@zte.com.cn>
  Kevin Hilman <khilman@linaro.org>
  Khalid Aziz <khalid.aziz@oracle.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lars-Peter Clausen <lars@metafoo.de>
  Leilei Zhao <leilei.zhao@atmel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Roszko <mark.roszko@gmail.com>
  Mark Brown <broonie@linaro.org>
  Matthew Garrett <matthew.garrett@nebula.com>
  Michal Schmidt <mschmidt@redhat.com>
  Mikhail Zolotaryov <lebon@lebon.org.ua>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Mackerras <paulus@samba.org>
  PaX Team <pageexec@freemail.hu>
  Rahul Bedarkar <rahulbedarkar89@gmail.com>
  Richard Weinberger <richard@nod.at>
  Sarah Sharp <sarah.a.sharp@linux.intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Takashi Iwai <tiwai@suse.de>
  Thomas Pugliese <thomas.pugliese@gmail.com>
  Vijaya Mohan Guvva <vmohan@brocade.com>
  Wang Shilong <wangsl.fnst@cn.fujitsu.com>
  Yegor Yefremov <yegorslists@googlemail.com>
  yury <urykhy@gmail.com>
  ZHAO Gang <gamerh2o@gmail.com>
  Éric Piel <eric.piel@tremplin-utc.net>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
+ branch=linux-3.4
+ revision=e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b:tested/linux-3.4
Counting objects: 321, done.
Compressing objects: 100% (35/35), done.
Writing objects: 100% (229/229), 35.80 KiB, done.
Total 229 (delta 194), reused 229 (delta 194)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   a132240..e3b1f41  e3b1f4138a12a66dcd2a48e5b4a7fa1bba9c2c5b -> tested/linux-3.4
+ exit 0


--===============8053717368504865359==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8053717368504865359==--

From xen-devel-bounces@lists.xen.org Fri Feb 07 20:40:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsD9-0000bY-0O; Fri, 07 Feb 2014 20:39:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBsD8-0000bT-2y
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 20:39:42 +0000
Received: from [85.158.137.68:10089] by server-8.bemta-3.messagelabs.com id
	7E/92-16039-D8445F25; Fri, 07 Feb 2014 20:39:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391805578!419296!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16141 invoked from network); 7 Feb 2014 20:39:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 20:39:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17Kda70015428
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 20:39:37 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17KdZnC012654
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 20:39:35 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17KdZVA026958; Fri, 7 Feb 2014 20:39:35 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 12:39:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 42DFB1C0972; Fri,  7 Feb 2014 15:39:34 -0500 (EST)
Date: Fri, 7 Feb 2014 15:39:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207203934.GA13333@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> I was able to compile and install xen4.4 RC3 on my host, however I am
> getting the error:
> 
> root@fiat:~/git/xen# xl list
> xc: error: Could not obtain handle on privileged command interface (2 = No
> such file or directory): Internal error
> libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> file or directory
> cannot init xl context
> 
> I've Googled for this and an article comes up, but it is not the same issue
> (as far as I can tell).  Running any xl command generates a similar error.
> 
> What can I do to fix this?


You need to run the initscripts for Xen. I don't know what your distro is, but
they are usually in /etc/init.d/ or /etc/rc.d/init.d/ as xen*
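[Editor's note: a minimal check for the usual cause of this toolstack error. The /etc/init.d/xencommons path is a typical location, not guaranteed for every distro.]

```shell
#!/bin/sh
# "Could not obtain handle on privileged command interface (2 = No such
# file or directory)" usually means /proc/xen/privcmd does not exist yet:
# it appears only after xenfs is mounted and the Xen initscripts
# (e.g. xencommons) have run.
if [ -e /proc/xen/privcmd ]; then
  status=present
  echo "privcmd: $status (xl can talk to the hypervisor)"
else
  status=missing
  echo "privcmd: $status - try: /etc/init.d/xencommons start"
fi
```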


> 
> Regards
> 
> 
> On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> mikeneiderhauser@gmail.com> wrote:
> 
> > Much. Do I need to install from src or is there a package I can install.
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> >> > I did not.  I do not have the toolchain installed.  I may have time
> >> later
> >> > today to try the patch.  Are there any specific instructions on how to
> >> > patch the src, compile and install?
> >>
> >> There actually should be a new version of Xen 4.4-rcX which will have the
> >> fix. That might be easier for you?
> >> >
> >> > Regards
> >> >
> >> >
> >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> >> > konrad.wilk@oracle.com> wrote:
> >> >
> >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> >> > > > Hi all,
> >> > > >
> >> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
> >> NIC)
> >> > > to a
> >> > > > HVM.  I have been attempting to resolve this issue on the xen-users
> >> list,
> >> > > > but it was advised to post this issue to this list. (Initial
> >> Message -
> >> > > >
> >> > >
> >> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> >> )
> >> > > >
> >> > > > The machine I am using as host is a Dell Poweredge server with a
> >> Xeon
> >> > > > E31220 with 4GB of ram.
> >> > > >
> >> > > > The possible bug is the following:
> >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> >> > > > char device redirected to /dev/pts/5 (label serial0)
> >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> >> > > > ....
> >> > > >
> >> > > > I believe it may be similar to this thread
> >> > > >
> >> > >
> >> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> >> > > >
> >> > > >
> >> > > > Additional info that may be helpful is below.
> >> > >
> >> > > Did you try the patch?
> >> > > >
> >> > > > Please let me know if you need any additional information.
> >> > > >
> >> > > > Thanks in advance for any help provided!
> >> > > > Regards
> >> > > >
> >> > > > ###########################################################
> >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> >> > > > ###########################################################
> >> > > > # Configuration file for Xen HVM
> >> > > >
> >> > > > # HVM Name (as appears in 'xl list')
> >> > > > name="ubuntu-hvm-0"
> >> > > > # HVM Build settings (+ hardware)
> >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> >> > > > builder='hvm'
> >> > > > device_model='qemu-dm'
> >> > > > memory=1024
> >> > > > vcpus=2
> >> > > >
> >> > > > # Virtual Interface
> >> > > > # Network bridge to USB NIC
> >> > > > vif=['bridge=xenbr0']
> >> > > >
> >> > > > ################### PCI PASSTHROUGH ###################
> >> > > > # PCI Permissive mode toggle
> >> > > > #pci_permissive=1
> >> > > >
> >> > > > # All PCI Devices
> >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> >> '05:00.1']
> >> > > >
> >> > > > # First two ports on Intel 4x1G NIC
> >> > > > #pci=['03:00.0','03:00.1']
> >> > > >
> >> > > > # Last two ports on Intel 4x1G NIC
> >> > > > #pci=['04:00.0', '04:00.1']
> >> > > >
> >> > > > # All ports on Intel 4x1G NIC
> >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> >> > > >
> >> > > > # Broadcom 2x1G NIC
> >> > > > #pci=['05:00.0', '05:00.1']
> >> > > > ################### PCI PASSTHROUGH ###################
> >> > > >
> >> > > > # HVM Disks
> >> > > > # Hard disk only
> >> > > > # Boot from HDD first ('c')
> >> > > > boot="c"
> >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> >> > > >
> >> > > > # Hard disk with ISO
> >> > > > # Boot from ISO first ('d')
> >> > > > #boot="d"
> >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> >> > > >
> >> > > > # ACPI Enable
> >> > > > acpi=1
> >> > > > # HVM Event Modes
> >> > > > on_poweroff='destroy'
> >> > > > on_reboot='restart'
> >> > > > on_crash='restart'
> >> > > >
> >> > > > # Serial Console Configuration (Xen Console)
> >> > > > sdl=0
> >> > > > serial='pty'
> >> > > >
> >> > > > # VNC Configuration
> >> > > > # Only reachable from localhost
> >> > > > vnc=1
> >> > > > vnclisten="0.0.0.0"
> >> > > > vncpasswd=""
> >> > > >
> >> > > > ###########################################################
> >> > > > Copied for xen-users list
> >> > > > ###########################################################
> >> > > >
> >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> >> device.
> >> > > >
> >> > > >
> >> > > > I rebooted the Host.  I ran assigned pci devices to pciback. The
> >> output
> >> > > > looks like:
> >> > > > root@fiat:~# ./dev_mgmt.sh
> >> > > > Loading Kernel Module 'xen-pciback'
> >> > > > Calling function pciback_dev for:
> >> > > > PCI DEVICE 0000:03:00.0
> >> > > > Unbinding 0000:03:00.0 from igb
> >> > > > Binding 0000:03:00.0 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:03:00.1
> >> > > > Unbinding 0000:03:00.1 from igb
> >> > > > Binding 0000:03:00.1 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:04:00.0
> >> > > > Unbinding 0000:04:00.0 from igb
> >> > > > Binding 0000:04:00.0 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:04:00.1
> >> > > > Unbinding 0000:04:00.1 from igb
> >> > > > Binding 0000:04:00.1 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:05:00.0
> >> > > > Unbinding 0000:05:00.0 from bnx2
> >> > > > Binding 0000:05:00.0 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:05:00.1
> >> > > > Unbinding 0000:05:00.1 from bnx2
> >> > > > Binding 0000:05:00.1 to pciback
> >> > > >
> >> > > > Listing PCI Devices Available to Xen
> >> > > > 0000:03:00.0
> >> > > > 0000:03:00.1
> >> > > > 0000:04:00.0
> >> > > > 0000:04:00.1
> >> > > > 0000:05:00.0
> >> > > > 0000:05:00.1
> >> > > >
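[Editor's note: the unbind/bind steps shown in the listing above are normally done through sysfs. A minimal sketch of the sequence a script like dev_mgmt.sh typically performs; to stay runnable anywhere it targets a throwaway fake tree instead of the real /sys/bus/pci, and the device ID is the one from the report.]

```shell
#!/bin/sh
# Detach a PCI device from its native driver (igb here) and hand it to
# xen-pciback via the sysfs unbind/new_slot/bind files. On a real host,
# drop the mktemp lines and set root=/sys/bus/pci.
root=$(mktemp -d)
dev=0000:03:00.0

# Fake the sysfs files a real kernel would provide (for this sketch only).
mkdir -p "$root/devices/$dev/driver" "$root/drivers/pciback"
: > "$root/devices/$dev/driver/unbind"
: > "$root/drivers/pciback/new_slot"
: > "$root/drivers/pciback/bind"

echo "$dev" > "$root/devices/$dev/driver/unbind"  # detach from igb
echo "$dev" > "$root/drivers/pciback/new_slot"    # announce the slot to pciback
echo "$dev" > "$root/drivers/pciback/bind"        # bind the device to pciback
echo "bound: $(cat "$root/drivers/pciback/bind")"
```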
> >> > > > ###########################################################
> >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> >> > > > WARNING: ignoring device_model directive.
> >> > > > WARNING: Use "device_model_override" instead if you really want a
> >> > > > non-default device_model
> >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
> >> create:
> >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> >> Disk
> >> > > > vdev=hda spec.backend=unknown
> >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> >> Disk
> >> > > > vdev=hda, using backend phy
> >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> >> > > bootloader
> >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> >> > > > domain, skipping bootloader
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210c728: deregister unregistered
> >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> >> NUMA
> >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> >> > > > free_memkb=2980
> >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> >> > > candidate
> >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> >> > > >   Loader:        0000000000100000->00000000001a69a4
> >> > > >   Modules:       0000000000000000->0000000000000000
> >> > > >   TOTAL:         0000000000000000->000000003f800000
> >> > > >   ENTRY ADDRESS: 0000000000100608
> >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> >> > > >   4KB PAGES: 0x0000000000000200
> >> > > >   2MB PAGES: 0x00000000000001fb
> >> > > >   1GB PAGES: 0x0000000000000000
> >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> >> 0x7f022c81682d
> >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> >> Disk
> >> > > > vdev=hda spec.backend=phy
> >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> >> > > > register slotnum=3
> >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> >> > > > inprogress: poller=0x210c3c0, flags=i
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> >> > > state 1
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> >> > > > deregister slotnum=3
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x2112f48: deregister unregistered
> >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> >> script:
> >> > > > /etc/xen/scripts/block add
> >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> >> > > device-model
> >> > > > /usr/bin/qemu-system-i386 with arguments:
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > /usr/bin/qemu-system-i386
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > chardev=libxl-cmd,mode=control
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
> >> ,to=99
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> isa-fdc.driveA=
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> vga.vram_size_mb=8
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > >
> >> > >
> >> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> >> > > register
> >> > > > slotnum=3
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> >> > > > epath=/local/domain/0/device-model/2/state
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> >> > > > epath=/local/domain/0/device-model/2/state
> >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> >> > > > deregister slotnum=3
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210c960: deregister unregistered
> >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> >> > > > /var/run/xen/qmp-libxl-2
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "qmp_capabilities",
> >> > > >     "id": 1
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "query-chardev",
> >> > > >     "id": 2
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "change",
> >> > > >     "id": 3,
> >> > > >     "arguments": {
> >> > > >         "device": "vnc",
> >> > > >         "target": "password",
> >> > > >         "arg": ""
> >> > > >     }
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "query-vnc",
> >> > > >     "id": 4
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> >> > > register
> >> > > > slotnum=3
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> >> > > > epath=/local/domain/0/backend/vif/2/0/state
> >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
> >> state
> >> > > 1
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> >> > > > epath=/local/domain/0/backend/vif/2/0/state
> >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> >> > > > deregister slotnum=3
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210e8a8: deregister unregistered
> >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> >> script:
> >> > > > /etc/xen/scripts/vif-bridge online
> >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> >> script:
> >> > > > /etc/xen/scripts/vif-bridge add
> >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> >> > > > /var/run/xen/qmp-libxl-2
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "qmp_capabilities",
> >> > > >     "id": 1
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "device_add",
> >> > > >     "id": 2,
> >> > > >     "arguments": {
> >> > > >         "driver": "xen-pci-passthrough",
> >> > > >         "id": "pci-pt-03_00.0",
> >> > > >         "hostaddr": "0000:03:00.0"
> >> > > >     }
> >> > > > }
> >> > > > '
> >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> >> Connection
> >> > > reset
> >> > > > by peer
> >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> >> error:
> >> > > > Connection refused
> >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> >> error:
> >> > > > Connection refused
> >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> >> error:
> >> > > > Connection refused
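[Editorial sketch: the failing device_add above is one line-delimited JSON message over the QMP Unix socket (/var/run/xen/qmp-libxl-2 in this log). A minimal helper that composes the same message by hand — the socket path, domain id and BDF are taken from the log, and a real session must send qmp_capabilities first, as libxl does:]

```shell
# Compose the QMP device_add command that libxl logs above.
# $1 = QMP command id, $2 = qemu device id suffix, $3 = host PCI address.
# Sending it needs the live qemu socket on a real host, e.g.:
#   qmp_device_add 2 03_00.0 0000:03:00.0 | socat - UNIX:/var/run/xen/qmp-libxl-2
qmp_device_add() {
  printf '{"execute":"device_add","id":%d,"arguments":{"driver":"xen-pci-passthrough","id":"pci-pt-%s","hostaddr":"%s"}}\n' \
    "$1" "$2" "$3"
}

qmp_device_add 2 03_00.0 0000:03:00.0
```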
> >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> >> > > backend
> >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
> >> 0x210c360:
> >> > > > progress report: ignored
> >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> >> > > > complete, rc=0
> >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> >> > > destroy
> >> > > > Daemon running with PID 3214
> >> > > > xc: debug: hypercall buffer: total allocations:793 total
> >> releases:793
> >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> >> allocations:4
> >> > > > xc: debug: hypercall buffer: cache current size:4
> >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> >> > > >
> >> > > > ###########################################################
> >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> >> > > > char device redirected to /dev/pts/5 (label serial0)
> >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> >> > > > CPU #0:
> >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> >> > > > ES =0000 00000000 0000ffff 00009300
> >> > > > CS =f000 ffff0000 0000ffff 00009b00
> >> > > > SS =0000 00000000 0000ffff 00009300
> >> > > > DS =0000 00000000 0000ffff 00009300
> >> > > > FS =0000 00000000 0000ffff 00009300
> >> > > > GS =0000 00000000 0000ffff 00009300
> >> > > > LDT=0000 00000000 0000ffff 00008200
> >> > > > TR =0000 00000000 0000ffff 00008b00
> >> > > > GDT=     00000000 0000ffff
> >> > > > IDT=     00000000 0000ffff
> >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> >> > > > DR6=ffff0ff0 DR7=00000400
> >> > > > EFER=0000000000000000
> >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> >> > > > XMM00=00000000000000000000000000000000
> >> > > > XMM01=00000000000000000000000000000000
> >> > > > XMM02=00000000000000000000000000000000
> >> > > > XMM03=00000000000000000000000000000000
> >> > > > XMM04=00000000000000000000000000000000
> >> > > > XMM05=00000000000000000000000000000000
> >> > > > XMM06=00000000000000000000000000000000
> >> > > > XMM07=00000000000000000000000000000000
> >> > > > CPU #1:
> >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> >> > > > ES =0000 00000000 0000ffff 00009300
> >> > > > CS =f000 ffff0000 0000ffff 00009b00
> >> > > > SS =0000 00000000 0000ffff 00009300
> >> > > > DS =0000 00000000 0000ffff 00009300
> >> > > > FS =0000 00000000 0000ffff 00009300
> >> > > > GS =0000 00000000 0000ffff 00009300
> >> > > > LDT=0000 00000000 0000ffff 00008200
> >> > > > TR =0000 00000000 0000ffff 00008b00
> >> > > > GDT=     00000000 0000ffff
> >> > > > IDT=     00000000 0000ffff
> >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> >> > > > DR6=ffff0ff0 DR7=00000400
> >> > > > EFER=0000000000000000
> >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> >> > > > XMM00=00000000000000000000000000000000
> >> > > > XMM01=00000000000000000000000000000000
> >> > > > XMM02=00000000000000000000000000000000
> >> > > > XMM03=00000000000000000000000000000000
> >> > > > XMM04=00000000000000000000000000000000
> >> > > > XMM05=00000000000000000000000000000000
> >> > > > XMM06=00000000000000000000000000000000
> >> > > > XMM07=00000000000000000000000000000000
> >> > > >
> >> > > > ###########################################################
> >> > > > /etc/default/grub
> >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> >> > > > GRUB_HIDDEN_TIMEOUT=0
> >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> >> > > > GRUB_TIMEOUT=10
> >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> >> > > > GRUB_CMDLINE_LINUX=""
> >> > > > # biosdevname=0
> >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> >> > >
> >> > > > _______________________________________________
> >> > > > Xen-devel mailing list
> >> > > > Xen-devel@lists.xen.org
> >> > > > http://lists.xen.org/xen-devel
> >> > >
> >> > >
> >>
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 20:40:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsD9-0000bY-0O; Fri, 07 Feb 2014 20:39:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBsD8-0000bT-2y
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 20:39:42 +0000
Received: from [85.158.137.68:10089] by server-8.bemta-3.messagelabs.com id
	7E/92-16039-D8445F25; Fri, 07 Feb 2014 20:39:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391805578!419296!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16141 invoked from network); 7 Feb 2014 20:39:40 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 20:39:40 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17Kda70015428
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 20:39:37 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17KdZnC012654
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 20:39:35 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17KdZVA026958; Fri, 7 Feb 2014 20:39:35 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 12:39:35 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 42DFB1C0972; Fri,  7 Feb 2014 15:39:34 -0500 (EST)
Date: Fri, 7 Feb 2014 15:39:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207203934.GA13333@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> getting the error:
> 
> root@fiat:~/git/xen# xl list
> xc: error: Could not obtain handle on privileged command interface (2 = No
> such file or directory): Internal error
> libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> file or directory
> cannot init xl context
> 
> I've searched Google for this and an article comes up, but it is not the same
> issue (as far as I can tell).  Running any xl command generates a similar error.
> 
> What can I do to fix this?


You need to run the init scripts for Xen. I don't know what your distro is, but
they are usually installed as /etc/init.d/xen* (or under /etc/rc.d/).
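[Editorial sketch: the "Could not obtain handle on privileged command interface" error above means /proc/xen/privcmd (or /dev/xen/privcmd) is not available, either because dom0 is not booted under the Xen hypervisor or because the toolstack daemons were never started. A small diagnostic, assuming a typical Xen 4.x layout; the xencommons path is the usual name but varies by distro:]

```shell
# Check for the privileged command interface that xl/libxc opens.
check_privcmd() {
  if [ -e /proc/xen/privcmd ] || [ -e /dev/xen/privcmd ]; then
    echo "privcmd present: xl should be able to open the libxc handle"
  else
    echo "privcmd missing: boot dom0 under Xen and run the init scripts,"
    echo "e.g. /etc/init.d/xencommons start  (exact path varies by distro)"
  fi
}

check_privcmd
```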


> 
> Regards
> 
> 
> On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> mikeneiderhauser@gmail.com> wrote:
> 
> > Much appreciated. Do I need to install from source, or is there a package I can install?
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> >> > I did not.  I do not have the toolchain installed.  I may have time
> >> later
> >> > today to try the patch.  Are there any specific instructions on how to
> >> > patch the src, compile and install?
> >>
> >> There actually should be a new version of Xen 4.4-rcX which will have the
> >> fix. That might be easier for you?
> >> >
> >> > Regards
> >> >
> >> >
> >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> >> > konrad.wilk@oracle.com> wrote:
> >> >
> >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> >> > > > Hi all,
> >> > > >
> >> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
> >> NIC)
> >> > > to an
> >> > > > HVM.  I have been attempting to resolve this issue on the xen-users
> >> list,
> >> > > > but it was advised to post this issue to this list. (Initial
> >> Message -
> >> > > >
> >> > >
> >> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> >> )
> >> > > >
> >> > > > The machine I am using as host is a Dell Poweredge server with a
> >> Xeon
> >> > > > E31220 with 4GB of ram.
> >> > > >
> >> > > > The possible bug is the following:
> >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> >> > > > char device redirected to /dev/pts/5 (label serial0)
> >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> >> > > > ....
> >> > > >
> >> > > > I believe it may be similar to this thread
> >> > > >
> >> > >
> >> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> >> > > >
> >> > > >
> >> > > > Additional info that may be helpful is below.
> >> > >
> >> > > Did you try the patch?
> >> > > >
> >> > > > Please let me know if you need any additional information.
> >> > > >
> >> > > > Thanks in advance for any help provided!
> >> > > > Regards
> >> > > >
> >> > > > ###########################################################
> >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> >> > > > ###########################################################
> >> > > > # Configuration file for Xen HVM
> >> > > >
> >> > > > # HVM Name (as appears in 'xl list')
> >> > > > name="ubuntu-hvm-0"
> >> > > > # HVM Build settings (+ hardware)
> >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> >> > > > builder='hvm'
> >> > > > device_model='qemu-dm'
> >> > > > memory=1024
> >> > > > vcpus=2
> >> > > >
> >> > > > # Virtual Interface
> >> > > > # Network bridge to USB NIC
> >> > > > vif=['bridge=xenbr0']
> >> > > >
> >> > > > ################### PCI PASSTHROUGH ###################
> >> > > > # PCI Permissive mode toggle
> >> > > > #pci_permissive=1
> >> > > >
> >> > > > # All PCI Devices
> >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> >> '05:00.1']
> >> > > >
> >> > > > # First two ports on Intel 4x1G NIC
> >> > > > #pci=['03:00.0','03:00.1']
> >> > > >
> >> > > > # Last two ports on Intel 4x1G NIC
> >> > > > #pci=['04:00.0', '04:00.1']
> >> > > >
> >> > > > # All ports on Intel 4x1G NIC
> >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> >> > > >
> >> > > > # Broadcom 2x1G NIC
> >> > > > #pci=['05:00.0', '05:00.1']
> >> > > > ################### PCI PASSTHROUGH ###################
> >> > > >
> >> > > > # HVM Disks
> >> > > > # Hard disk only
> >> > > > # Boot from HDD first ('c')
> >> > > > boot="c"
> >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> >> > > >
> >> > > > # Hard disk with ISO
> >> > > > # Boot from ISO first ('d')
> >> > > > #boot="d"
> >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> >> > > >
> >> > > > # ACPI Enable
> >> > > > acpi=1
> >> > > > # HVM Event Modes
> >> > > > on_poweroff='destroy'
> >> > > > on_reboot='restart'
> >> > > > on_crash='restart'
> >> > > >
> >> > > > # Serial Console Configuration (Xen Console)
> >> > > > sdl=0
> >> > > > serial='pty'
> >> > > >
> >> > > > # VNC Configuration
> >> > > > # Only reachable from localhost
> >> > > > vnc=1
> >> > > > vnclisten="0.0.0.0"
> >> > > > vncpasswd=""
> >> > > >
> >> > > > ###########################################################
> >> > > > Copied from the xen-users list
> >> > > > ###########################################################
> >> > > >
> >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> >> device.
> >> > > >
> >> > > >
> >> > > > I rebooted the host and assigned the PCI devices to pciback. The
> >> > > > output looks like:
> >> > > > root@fiat:~# ./dev_mgmt.sh
> >> > > > Loading Kernel Module 'xen-pciback'
> >> > > > Calling function pciback_dev for:
> >> > > > PCI DEVICE 0000:03:00.0
> >> > > > Unbinding 0000:03:00.0 from igb
> >> > > > Binding 0000:03:00.0 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:03:00.1
> >> > > > Unbinding 0000:03:00.1 from igb
> >> > > > Binding 0000:03:00.1 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:04:00.0
> >> > > > Unbinding 0000:04:00.0 from igb
> >> > > > Binding 0000:04:00.0 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:04:00.1
> >> > > > Unbinding 0000:04:00.1 from igb
> >> > > > Binding 0000:04:00.1 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:05:00.0
> >> > > > Unbinding 0000:05:00.0 from bnx2
> >> > > > Binding 0000:05:00.0 to pciback
> >> > > >
> >> > > > PCI DEVICE 0000:05:00.1
> >> > > > Unbinding 0000:05:00.1 from bnx2
> >> > > > Binding 0000:05:00.1 to pciback
> >> > > >
> >> > > > Listing PCI Devices Available to Xen
> >> > > > 0000:03:00.0
> >> > > > 0000:03:00.1
> >> > > > 0000:04:00.0
> >> > > > 0000:04:00.1
> >> > > > 0000:05:00.0
> >> > > > 0000:05:00.1
> >> > > >
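[Editorial sketch: a script like the dev_mgmt.sh quoted above typically drives the kernel's sysfs driver bind/unbind interface. Shown here as a dry run that only prints the steps (the BDF and driver name are examples from the listing; on a real host, execute each printed command as root after `modprobe xen-pciback`):]

```shell
# Print the sysfs sequence that moves one PCI function from its native
# driver to xen-pciback. $1 = full BDF, $2 = current driver name.
bind_to_pciback() {
  bdf=$1
  drv=$2
  echo "echo $bdf > /sys/bus/pci/drivers/$drv/unbind"
  echo "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot"
  echo "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
}

bind_to_pciback 0000:03:00.0 igb
```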
> >> > > > ###########################################################
> >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> >> > > > WARNING: ignoring device_model directive.
> >> > > > WARNING: Use "device_model_override" instead if you really want a
> >> > > > non-default device_model
> >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
> >> create:
> >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> >> Disk
> >> > > > vdev=hda spec.backend=unknown
> >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> >> Disk
> >> > > > vdev=hda, using backend phy
> >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> >> > > bootloader
> >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> >> > > > domain, skipping bootloader
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210c728: deregister unregistered
> >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> >> NUMA
> >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> >> > > > free_memkb=2980
> >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> >> > > candidate
> >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> >> > > >   Loader:        0000000000100000->00000000001a69a4
> >> > > >   Modules:       0000000000000000->0000000000000000
> >> > > >   TOTAL:         0000000000000000->000000003f800000
> >> > > >   ENTRY ADDRESS: 0000000000100608
> >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> >> > > >   4KB PAGES: 0x0000000000000200
> >> > > >   2MB PAGES: 0x00000000000001fb
> >> > > >   1GB PAGES: 0x0000000000000000
> >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> >> 0x7f022c81682d
> >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> >> Disk
> >> > > > vdev=hda spec.backend=phy
> >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> >> > > > register slotnum=3
> >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> >> > > > inprogress: poller=0x210c3c0, flags=i
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> >> > > state 1
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> >> > > > deregister slotnum=3
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x2112f48: deregister unregistered
> >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> >> script:
> >> > > > /etc/xen/scripts/block add
> >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> >> > > device-model
> >> > > > /usr/bin/qemu-system-i386 with arguments:
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > /usr/bin/qemu-system-i386
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > chardev=libxl-cmd,mode=control
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
> >> ,to=99
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> isa-fdc.driveA=
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> vga.vram_size_mb=8
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> >> > > >
> >> > >
> >> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> >> > > register
> >> > > > slotnum=3
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> >> > > > epath=/local/domain/0/device-model/2/state
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> >> > > > epath=/local/domain/0/device-model/2/state
> >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> >> > > > deregister slotnum=3
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210c960: deregister unregistered
> >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> >> > > > /var/run/xen/qmp-libxl-2
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "qmp_capabilities",
> >> > > >     "id": 1
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "query-chardev",
> >> > > >     "id": 2
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "change",
> >> > > >     "id": 3,
> >> > > >     "arguments": {
> >> > > >         "device": "vnc",
> >> > > >         "target": "password",
> >> > > >         "arg": ""
> >> > > >     }
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "query-vnc",
> >> > > >     "id": 4
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> >> > > register
> >> > > > slotnum=3
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> >> > > > epath=/local/domain/0/backend/vif/2/0/state
> >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
> >> state
> >> > > 1
> >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> >> > > > epath=/local/domain/0/backend/vif/2/0/state
> >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> >> > > > deregister slotnum=3
> >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> >> > > > w=0x210e8a8: deregister unregistered
> >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> >> script:
> >> > > > /etc/xen/scripts/vif-bridge online
> >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> >> script:
> >> > > > /etc/xen/scripts/vif-bridge add
> >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> >> > > > /var/run/xen/qmp-libxl-2
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "qmp_capabilities",
> >> > > >     "id": 1
> >> > > > }
> >> > > > '
> >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> >> return
> >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> >> > > >     "execute": "device_add",
> >> > > >     "id": 2,
> >> > > >     "arguments": {
> >> > > >         "driver": "xen-pci-passthrough",
> >> > > >         "id": "pci-pt-03_00.0",
> >> > > >         "hostaddr": "0000:03:00.0"
> >> > > >     }
> >> > > > }
> >> > > > '
> >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> >> Connection
> >> > > reset
> >> > > > by peer
> >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> >> error:
> >> > > > Connection refused
> >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> >> error:
> >> > > > Connection refused
> >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> >> error:
> >> > > > Connection refused
> >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> >> > > backend
> >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
> >> 0x210c360:
> >> > > > progress report: ignored
> >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> >> > > > complete, rc=0
> >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> >> > > destroy
> >> > > > Daemon running with PID 3214
> >> > > > xc: debug: hypercall buffer: total allocations:793 total
> >> releases:793
> >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> >> allocations:4
> >> > > > xc: debug: hypercall buffer: cache current size:4
> >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> >> > > >
> >> > > > ###########################################################
> >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> >> > > > char device redirected to /dev/pts/5 (label serial0)
> >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> >> > > > CPU #0:
> >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> >> > > > ES =0000 00000000 0000ffff 00009300
> >> > > > CS =f000 ffff0000 0000ffff 00009b00
> >> > > > SS =0000 00000000 0000ffff 00009300
> >> > > > DS =0000 00000000 0000ffff 00009300
> >> > > > FS =0000 00000000 0000ffff 00009300
> >> > > > GS =0000 00000000 0000ffff 00009300
> >> > > > LDT=0000 00000000 0000ffff 00008200
> >> > > > TR =0000 00000000 0000ffff 00008b00
> >> > > > GDT=     00000000 0000ffff
> >> > > > IDT=     00000000 0000ffff
> >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> >> > > > DR6=ffff0ff0 DR7=00000400
> >> > > > EFER=0000000000000000
> >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> >> > > > XMM00=00000000000000000000000000000000
> >> > > > XMM01=00000000000000000000000000000000
> >> > > > XMM02=00000000000000000000000000000000
> >> > > > XMM03=00000000000000000000000000000000
> >> > > > XMM04=00000000000000000000000000000000
> >> > > > XMM05=00000000000000000000000000000000
> >> > > > XMM06=00000000000000000000000000000000
> >> > > > XMM07=00000000000000000000000000000000
> >> > > > CPU #1:
> >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> >> > > > ES =0000 00000000 0000ffff 00009300
> >> > > > CS =f000 ffff0000 0000ffff 00009b00
> >> > > > SS =0000 00000000 0000ffff 00009300
> >> > > > DS =0000 00000000 0000ffff 00009300
> >> > > > FS =0000 00000000 0000ffff 00009300
> >> > > > GS =0000 00000000 0000ffff 00009300
> >> > > > LDT=0000 00000000 0000ffff 00008200
> >> > > > TR =0000 00000000 0000ffff 00008b00
> >> > > > GDT=     00000000 0000ffff
> >> > > > IDT=     00000000 0000ffff
> >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> >> > > > DR6=ffff0ff0 DR7=00000400
> >> > > > EFER=0000000000000000
> >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> >> > > > XMM00=00000000000000000000000000000000
> >> > > > XMM01=00000000000000000000000000000000
> >> > > > XMM02=00000000000000000000000000000000
> >> > > > XMM03=00000000000000000000000000000000
> >> > > > XMM04=00000000000000000000000000000000
> >> > > > XMM05=00000000000000000000000000000000
> >> > > > XMM06=00000000000000000000000000000000
> >> > > > XMM07=00000000000000000000000000000000
> >> > > >
> >> > > > ###########################################################
> >> > > > /etc/default/grub
> >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> >> > > > GRUB_HIDDEN_TIMEOUT=0
> >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> >> > > > GRUB_TIMEOUT=10
> >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> >> > > > GRUB_CMDLINE_LINUX=""
> >> > > > # biosdevname=0
> >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> >> > >
> >> > > > _______________________________________________
> >> > > > Xen-devel mailing list
> >> > > > Xen-devel@lists.xen.org
> >> > > > http://lists.xen.org/xen-devel
> >> > >
> >> > >
> >>
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 20:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsJo-0000l8-12; Fri, 07 Feb 2014 20:46:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBsJN-0000kZ-V7
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 20:46:10 +0000
Received: from [85.158.137.68:23735] by server-15.bemta-3.messagelabs.com id
	FF/F1-19263-11645F25; Fri, 07 Feb 2014 20:46:09 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391805959!415597!1
X-Originating-IP: [209.85.220.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30196 invoked from network); 7 Feb 2014 20:46:01 -0000
Received: from mail-vc0-f172.google.com (HELO mail-vc0-f172.google.com)
	(209.85.220.172)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 20:46:01 -0000
Received: by mail-vc0-f172.google.com with SMTP id lf12so3039969vcb.17
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 12:45:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=ylKE0UsorZ6aP+fBwfbejTPbnCPNj9qeepvc7UO7CFo=;
	b=ydJJFuvTcxAMY4Ba7i9Ke4ScAIShcGi2AlTNT9Mj1J45L8R2kWj6j9WToOrp95Cyat
	iswmwuLe5FyRMTFjaXGajrgZ61uC6NVUOPDb9fWKszr4AgXfrdlKGknGlKnjuZKA05sJ
	QT7q4gff/2Wz91ApPWgfScmTabNBWiYqN60dHtfDdFlU6Non2SkyPpXvortfc1yZKD1N
	a2n441UbJg/cdFAJsW6Ecd7WnXlPcN3tF+GTumu2gvyHS9Cd4sY0ZnDsrSEHoQ6dGXZB
	ylFaqKRzFa1zsMYc4E3ohNqYuNy3g+VFn23+G2QhfgaHAfNTn/qtifeAWjAo+LLKvO1c
	4KAw==
X-Received: by 10.221.20.199 with SMTP id qp7mr7954704vcb.24.1391805959576;
	Fri, 07 Feb 2014 12:45:59 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 12:45:19 -0800 (PST)
In-Reply-To: <20140207203934.GA13333@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 15:45:19 -0500
Message-ID: <CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 20:46:34 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7898976862245331660=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7898976862245331660==
Content-Type: multipart/alternative; boundary=001a11339e2ed1228504f1d713e7

--001a11339e2ed1228504f1d713e7
Content-Type: text/plain; charset=ISO-8859-1

Ok. I ran the initscripts and now xl works.

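(Editorial note for anyone hitting the same "cannot open libxc handle" error: a minimal pre-flight sketch, assuming the usual Linux dom0 paths that the Xen initscripts such as xencommons set up. /dev/xen/privcmd and /proc/xen/privcmd are the privileged interfaces xl/libxc open; the exact service name varies by distro.)

```shell
# Hedged sketch: check that the privileged command interface exists before
# running xl.  The paths below are the usual Linux dom0 ones (assumption);
# the Xen initscripts (e.g. xencommons) are what make them available.
status="missing"
if [ -e /dev/xen/privcmd ] || [ -e /proc/xen/privcmd ]; then
    status="present"
fi
echo "privcmd $status"
if [ "$status" = "missing" ]; then
    echo "start the Xen initscripts (xencommons or your distro's xen service)"
fi
```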
However, I still see the same behavior as before:

root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
Parsing config from /etc/xen/ubuntu-hvm-0.cfg
libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset
by peer
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
Connection refused
root@fiat:~# xl list
Name                                        ID   Mem VCPUs State Time(s)
Domain-0                                     0  1024     1     r-----   15.2
ubuntu-hvm-0                                 1  1025     1     ------    0.0

(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
be allocated)
(XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
(XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
(XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
(XEN)  Start info:    ffffffff85519000->ffffffff855194b4
(XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
(XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
(XEN)  ENTRY ADDRESS: ffffffff81d261e0
(XEN) Dom0 has maximum 1 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
(XEN) Scrubbing Free RAM: .............................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
to Xen)
(XEN) Freed 260kB init memory.
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:01.0
(XEN) PCI add device 0000:00:1a.0
(XEN) PCI add device 0000:00:1c.0
(XEN) PCI add device 0000:00:1d.0
(XEN) PCI add device 0000:00:1e.0
(XEN) PCI add device 0000:00:1f.0
(XEN) PCI add device 0000:00:1f.2
(XEN) PCI add device 0000:00:1f.3
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:02:02.0
(XEN) PCI add device 0000:02:04.0
(XEN) PCI add device 0000:03:00.0
(XEN) PCI add device 0000:03:00.1
(XEN) PCI add device 0000:04:00.0
(XEN) PCI add device 0000:04:00.1
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:05:00.1
(XEN) PCI add device 0000:06:03.0
(XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
(XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
(200 of 1024)
(d1) HVM Loader
(d1) Detected Xen v4.4-rc2
(d1) Xenbus rings @0xfeffc000, event channel 4
(d1) System requested SeaBIOS
(d1) CPU speed is 3093 MHz
(d1) Relocating guest memory for lowmem MMIO space disabled


Excerpt from /var/log/xen/*
qemu: hardware error: xen: failed to populate ram at 40050000

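(Editorial note: for reference, the sysfs sequence that a helper like the dev_mgmt.sh mentioned later in this thread typically performs looks like the sketch below. This is a hedged illustration, not the poster's actual script; DRY_RUN only prints the writes, and on a real dom0 you would clear it and run as root with xen-pciback loaded. The "pciback" driver name and its new_slot/bind attributes are the standard sysfs interface of the xen-pciback module.)

```shell
# Hedged sketch: hand a PCI device (BDF) from its native driver to pciback.
# DRY_RUN=1 only echoes the sysfs writes it would perform.
DRY_RUN=1
rebind_to_pciback() {
    bdf="$1"                                   # e.g. 0000:03:00.0
    drv="/sys/bus/pci/devices/$bdf/driver"
    if [ -n "$DRY_RUN" ]; then
        echo "echo $bdf > $drv/unbind"
        echo "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot"
        echo "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
    else
        [ -e "$drv" ] && echo "$bdf" > "$drv/unbind"
        echo "$bdf" > /sys/bus/pci/drivers/pciback/new_slot
        echo "$bdf" > /sys/bus/pci/drivers/pciback/bind
    fi
}
out=$(rebind_to_pciback 0000:03:00.0)
echo "$out"
```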

On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > getting the error:
> >
> > root@fiat:~/git/xen# xl list
> > xc: error: Could not obtain handle on privileged command interface (2 =
> No
> > such file or directory): Internal error
> > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No
> such
> > file or directory
> > cannot init xl context
> >
> > I've searched Google for this and an article appears, but it is not the
> > same (as far as I can tell).  Running any xl command generates a similar
> error.
> >
> > What can I do to fix this?
>
>
> You need to run the initscripts for Xen. I don't know what your distro is,
> but
> they are usually put in /etc/init.d/rc.d/xen*
>
>
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > mikeneiderhauser@gmail.com> wrote:
> >
> > > Much. Do I need to install from src or is there a package I can
> install?
> > >
> > > Regards
> > >
> > >
> > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > konrad.wilk@oracle.com> wrote:
> > >
> > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > >> > I did not.  I do not have the toolchain installed.  I may have time
> > >> later
> > >> > today to try the patch.  Are there any specific instructions on how
> to
> > >> > patch the src, compile and install?
> > >>
> > >> There actually should be a new version of Xen 4.4-rcX which will have
> the
> > >> fix. That might be easier for you?
> > >> >
> > >> > Regards
> > >> >
> > >> >
> > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > >> > konrad.wilk@oracle.com> wrote:
> > >> >
> > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > >> > > > Hi all,
> > >> > > >
> > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> (4x1G
> > >> NIC)
> > >> > > to a
> > >> > > > HVM.  I have been attempting to resolve this issue on the
> xen-users
> > >> list,
> > >> > > > but it was advised to post this issue to this list. (Initial
> > >> Message -
> > >> > > >
> > >> > >
> > >>
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > >> )
> > >> > > >
> > >> > > > The machine I am using as host is a Dell Poweredge server with a
> > >> Xeon
> > >> > > > E31220 with 4GB of ram.
> > >> > > >
> > >> > > > The possible bug is the following:
> > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > >> > > > ....
> > >> > > >
> > >> > > > I believe it may be similar to this thread
> > >> > > >
> > >> > >
> > >>
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > >> > > >
> > >> > > >
> > >> > > > Additional info that may be helpful is below.
> > >> > >
> > >> > > Did you try the patch?
> > >> > > >
> > >> > > > Please let me know if you need any additional information.
> > >> > > >
> > >> > > > Thanks in advance for any help provided!
> > >> > > > Regards
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > ###########################################################
> > >> > > > # Configuration file for Xen HVM
> > >> > > >
> > >> > > > # HVM Name (as appears in 'xl list')
> > >> > > > name="ubuntu-hvm-0"
> > >> > > > # HVM Build settings (+ hardware)
> > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > >> > > > builder='hvm'
> > >> > > > device_model='qemu-dm'
> > >> > > > memory=1024
> > >> > > > vcpus=2
> > >> > > >
> > >> > > > # Virtual Interface
> > >> > > > # Network bridge to USB NIC
> > >> > > > vif=['bridge=xenbr0']
> > >> > > >
> > >> > > > ################### PCI PASSTHROUGH ###################
> > >> > > > # PCI Permissive mode toggle
> > >> > > > #pci_permissive=1
> > >> > > >
> > >> > > > # All PCI Devices
> > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > >> '05:00.1']
> > >> > > >
> > >> > > > # First two ports on Intel 4x1G NIC
> > >> > > > #pci=['03:00.0','03:00.1']
> > >> > > >
> > >> > > > # Last two ports on Intel 4x1G NIC
> > >> > > > #pci=['04:00.0', '04:00.1']
> > >> > > >
> > >> > > > # All ports on Intel 4x1G NIC
> > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > >> > > >
> > >> > > > # Broadcom 2x1G NIC
> > >> > > > #pci=['05:00.0', '05:00.1']
> > >> > > > ################### PCI PASSTHROUGH ###################
> > >> > > >
> > >> > > > # HVM Disks
> > >> > > > # Hard disk only
> > >> > > > # Boot from HDD first ('c')
> > >> > > > boot="c"
> > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > >> > > >
> > >> > > > # Hard disk with ISO
> > >> > > > # Boot from ISO first ('d')
> > >> > > > #boot="d"
> > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > >> > > >
> > >> > > > # ACPI Enable
> > >> > > > acpi=1
> > >> > > > # HVM Event Modes
> > >> > > > on_poweroff='destroy'
> > >> > > > on_reboot='restart'
> > >> > > > on_crash='restart'
> > >> > > >
> > >> > > > # Serial Console Configuration (Xen Console)
> > >> > > > sdl=0
> > >> > > > serial='pty'
> > >> > > >
> > >> > > > # VNC Configuration
> > >> > > > # Only reachable from localhost
> > >> > > > vnc=1
> > >> > > > vnclisten="0.0.0.0"
> > >> > > > vncpasswd=""
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > Copied from xen-users list
> > >> > > > ###########################################################
> > >> > > >
> > >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> > >> device.
> > >> > > >
> > >> > > >
> > >> > > > I rebooted the host and assigned the PCI devices to pciback. The
> > >> output
> > >> > > > looks like:
> > >> > > > root@fiat:~# ./dev_mgmt.sh
> > >> > > > Loading Kernel Module 'xen-pciback'
> > >> > > > Calling function pciback_dev for:
> > >> > > > PCI DEVICE 0000:03:00.0
> > >> > > > Unbinding 0000:03:00.0 from igb
> > >> > > > Binding 0000:03:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:03:00.1
> > >> > > > Unbinding 0000:03:00.1 from igb
> > >> > > > Binding 0000:03:00.1 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:04:00.0
> > >> > > > Unbinding 0000:04:00.0 from igb
> > >> > > > Binding 0000:04:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:04:00.1
> > >> > > > Unbinding 0000:04:00.1 from igb
> > >> > > > Binding 0000:04:00.1 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:05:00.0
> > >> > > > Unbinding 0000:05:00.0 from bnx2
> > >> > > > Binding 0000:05:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:05:00.1
> > >> > > > Unbinding 0000:05:00.1 from bnx2
> > >> > > > Binding 0000:05:00.1 to pciback
> > >> > > >
> > >> > > > Listing PCI Devices Available to Xen
> > >> > > > 0000:03:00.0
> > >> > > > 0000:03:00.1
> > >> > > > 0000:04:00.0
> > >> > > > 0000:04:00.1
> > >> > > > 0000:05:00.0
> > >> > > > 0000:05:00.1
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > WARNING: ignoring device_model directive.
> > >> > > > WARNING: Use "device_model_override" instead if you really want
> a
> > >> > > > non-default device_model
> > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> 0x210c360:
> > >> create:
> > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda spec.backend=unknown
> > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda, using backend phy
> > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > >> > > bootloader
> > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not
> a PV
> > >> > > > domain, skipping bootloader
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x210c728: deregister unregistered
> > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New
> best
> > >> NUMA
> > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > >> > > > free_memkb=2980
> > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > >> > > candidate
> > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > >> > > >   Modules:       0000000000000000->0000000000000000
> > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > >> > > >   ENTRY ADDRESS: 0000000000100608
> > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > >> > > >   4KB PAGES: 0x0000000000000200
> > >> > > >   2MB PAGES: 0x00000000000001fb
> > >> > > >   1GB PAGES: 0x0000000000000000
> > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > >> 0x7f022c81682d
> > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda spec.backend=phy
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> watch
> > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> token=3/0:
> > >> > > > register slotnum=3
> > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> 0x210c360:
> > >> > > > inprogress: poller=0x210c3c0, flags=i
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x2112f48
> > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> waiting
> > >> > > state 1
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x2112f48
> > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> token=3/0:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x2112f48: deregister unregistered
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/block add
> > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > >> > > device-model
> > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > /usr/bin/qemu-system-i386
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -xen-domid
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > chardev=libxl-cmd,mode=control
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> ubuntu-hvm-0
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> 0.0.0.0:0
> > >> ,to=99
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> isa-fdc.driveA=
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> vga.vram_size_mb=8
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> 2,maxcpus=2
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > >
> > >> > >
> > >>
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "qmp_capabilities",
> > >> > > >     "id": 1
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "query-chardev",
> > >> > > >     "id": 2
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "change",
> > >> > > >     "id": 3,
> > >> > > >     "arguments": {
> > >> > > >         "device": "vnc",
> > >> > > >         "target": "password",
> > >> > > >         "arg": ""
> > >> > > >     }
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "query-vnc",
> > >> > > >     "id": 4
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "qmp_capabilities",
> > >> > > >     "id": 1
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "device_add",
> > >> > > >     "id": 2,
> > >> > > >     "arguments": {
> > >> > > >         "driver": "xen-pci-passthrough",
> > >> > > >         "id": "pci-pt-03_00.0",
> > >> > > >         "hostaddr": "0000:03:00.0"
> > >> > > >     }
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
> > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
> > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> > >> > > > Daemon running with PID 3214
> > >> > > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > >> > > > xc: debug: hypercall buffer: cache current size:4
> > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > >> > > >
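The QMP exchange that dies with "Connection reset by peer" above can be rebuilt by hand. Below is a minimal sketch (Python; not part of the original thread) that constructs the same command sequence libxl logs — capabilities negotiation followed by the `device_add` of `xen-pci-passthrough`. The socket path `/var/run/xen/qmp-libxl-2` and all field values are taken from the log; replaying them against a live domain (e.g. with `socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-2`) is an assumption about how one might probe this, not something libxl provides.

```python
import json

def qmp_command(cmd_id, execute, arguments=None):
    """Build one QMP command in the wire format shown in the libxl log."""
    msg = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg)

# The sequence libxl sends before the device model drops the connection:
handshake = qmp_command(1, "qmp_capabilities")
passthrough = qmp_command(2, "device_add", {
    "driver": "xen-pci-passthrough",   # values copied from the log above
    "id": "pci-pt-03_00.0",
    "hostaddr": "0000:03:00.0",
})

print(handshake)
print(passthrough)
```

Each line would be written to the QMP UNIX socket after reading the server greeting; the reset happens because QEMU aborts on the "failed to populate ram" error while handling `device_add`, so the retried connections then see "Connection refused".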
> > >> > > > ###########################################################
> > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > >> > > > CPU #0:
> > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > >> > > > ES =0000 00000000 0000ffff 00009300
> > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > >> > > > SS =0000 00000000 0000ffff 00009300
> > >> > > > DS =0000 00000000 0000ffff 00009300
> > >> > > > FS =0000 00000000 0000ffff 00009300
> > >> > > > GS =0000 00000000 0000ffff 00009300
> > >> > > > LDT=0000 00000000 0000ffff 00008200
> > >> > > > TR =0000 00000000 0000ffff 00008b00
> > >> > > > GDT=     00000000 0000ffff
> > >> > > > IDT=     00000000 0000ffff
> > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > >> > > > DR6=ffff0ff0 DR7=00000400
> > >> > > > EFER=0000000000000000
> > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > >> > > > XMM00=00000000000000000000000000000000
> > >> > > > XMM01=00000000000000000000000000000000
> > >> > > > XMM02=00000000000000000000000000000000
> > >> > > > XMM03=00000000000000000000000000000000
> > >> > > > XMM04=00000000000000000000000000000000
> > >> > > > XMM05=00000000000000000000000000000000
> > >> > > > XMM06=00000000000000000000000000000000
> > >> > > > XMM07=00000000000000000000000000000000
> > >> > > > CPU #1:
> > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > >> > > > ES =0000 00000000 0000ffff 00009300
> > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > >> > > > SS =0000 00000000 0000ffff 00009300
> > >> > > > DS =0000 00000000 0000ffff 00009300
> > >> > > > FS =0000 00000000 0000ffff 00009300
> > >> > > > GS =0000 00000000 0000ffff 00009300
> > >> > > > LDT=0000 00000000 0000ffff 00008200
> > >> > > > TR =0000 00000000 0000ffff 00008b00
> > >> > > > GDT=     00000000 0000ffff
> > >> > > > IDT=     00000000 0000ffff
> > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > >> > > > DR6=ffff0ff0 DR7=00000400
> > >> > > > EFER=0000000000000000
> > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > >> > > > XMM00=00000000000000000000000000000000
> > >> > > > XMM01=00000000000000000000000000000000
> > >> > > > XMM02=00000000000000000000000000000000
> > >> > > > XMM03=00000000000000000000000000000000
> > >> > > > XMM04=00000000000000000000000000000000
> > >> > > > XMM05=00000000000000000000000000000000
> > >> > > > XMM06=00000000000000000000000000000000
> > >> > > > XMM07=00000000000000000000000000000000
> > >> > > >
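A quick sanity check on the failing address (my arithmetic, not from the thread): 0x40030000 is only 48 pages past the 1 GiB mark, i.e. just beyond the guest's ~1 GiB allocation (`-m 1016` in the spawn arguments above, `memory=1024` in the config). That is consistent with the device model asking Xen to populate a handful of extra pages for the passthrough setup and the domain having no headroom left under its memory cap.

```python
# Relate the "failed to populate ram at 40030000" address to the
# guest's configured memory (back-of-the-envelope, assumption-level).

GIB = 1 << 30
PAGE = 4096

fail_addr = 0x40030000    # from the qemu-dm log above
guest_mem_mib = 1024      # memory=1024 in ubuntu-hvm-0.cfg

over_gib = fail_addr - GIB          # distance past the 1 GiB line
pages_past = over_gib // PAGE       # extra pages being requested

print(f"populate request is {over_gib} bytes ({pages_past} pages) past 1 GiB")
print(f"1024 MiB guest = {guest_mem_mib * (1 << 20) // PAGE} pages")
```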
> > >> > > > ###########################################################
> > >> > > > /etc/default/grub
> > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > >> > > > GRUB_TIMEOUT=10
> > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > >> > > > GRUB_CMDLINE_LINUX=""
> > >> > > > # biosdevname=0
> > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > >> > >
> > >> > > > _______________________________________________
> > >> > > > Xen-devel mailing list
> > >> > > > Xen-devel@lists.xen.org
> > >> > > > http://lists.xen.org/xen-devel
> > >> > >
> > >> > >
> > >>
> > >
> > >
>

al_dm: =A0 2,maxcpus=3D2<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 -device<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm:<br>
&gt; &gt;&gt; &gt; &gt; &gt; rtl8139,id=3Dnic0,netdev=3Dnet0,mac=3D00:16:3e=
:23:44:2c<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 -netdev<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm:<br>
&gt; &gt;&gt; &gt; &gt; &gt; type=3Dtap,id=3Dnet0,ifname=3Dvif2.0-emu,scrip=
t=3Dno,downscript=3Dno<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 -M<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 xenfv<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 -m<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 1016<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm: =A0 -drive<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_loc=
al_dm:<br>
&gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt; file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=
=3Ddisk,format=3Draw,cache=3Dwriteback<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswa=
tch_register: watch<br>
&gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/local/domain/0/device-m=
odel/2/state token=3D3/1:<br>
&gt; &gt;&gt; &gt; &gt; register<br>
&gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callba=
ck: watch w=3D0x210c960<br>
&gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/device-model/2/state t=
oken=3D3/1: event<br>
&gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/device-model/2/state<b=
r>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callba=
ck: watch w=3D0x210c960<br>
&gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/device-model/2/state t=
oken=3D3/1: event<br>
&gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/device-model/2/state<b=
r>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswa=
tch_deregister: watch<br>
&gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/local/domain/0/device-m=
odel/2/state token=3D3/1:<br>
&gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswa=
tch_deregister: watch<br>
&gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960: deregister unregistered<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initi=
alize: connected to<br>
&gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type: qmp<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;qmp_capabil=
ities&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type:<br>
&gt; &gt;&gt; return<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;query-chard=
ev&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type:<br>
&gt; &gt;&gt; return<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;change&quot=
;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 3,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;device&quot;: &quot;vnc&=
quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;target&quot;: &quot;pass=
word&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;arg&quot;: &quot;&quot;<=
br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type:<br>
&gt; &gt;&gt; return<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;query-vnc&q=
uot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 4<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type:<br>
&gt; &gt;&gt; return<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswa=
tch_register: watch<br>
&gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/local/domain/0/backend/=
vif/2/0/state token=3D3/2:<br>
&gt; &gt;&gt; &gt; &gt; register<br>
&gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callba=
ck: watch w=3D0x210e8a8<br>
&gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/backend/vif/2/0/state =
token=3D3/2: event<br>
&gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/backend/vif/2/0/state<=
br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:647:devstate_watch=
_callback: backend<br>
&gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vif/2/0/state wanted s=
tate 2 still waiting<br>
&gt; &gt;&gt; state<br>
&gt; &gt;&gt; &gt; &gt; 1<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callba=
ck: watch w=3D0x210e8a8<br>
&gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/backend/vif/2/0/state =
token=3D3/2: event<br>
&gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/backend/vif/2/0/state<=
br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:643:devstate_watch=
_callback: backend<br>
&gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vif/2/0/state wanted s=
tate 2 ok<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswa=
tch_deregister: watch<br>
&gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/local/domain/0/backend/=
vif/2/0/state token=3D3/2:<br>
&gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswa=
tch_deregister: watch<br>
&gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister unregistered<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplu=
g: calling hotplug<br>
&gt; &gt;&gt; script:<br>
&gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridge online<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplu=
g: calling hotplug<br>
&gt; &gt;&gt; script:<br>
&gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridge add<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initi=
alize: connected to<br>
&gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type: qmp<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;qmp_capabil=
ities&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type:<br>
&gt; &gt;&gt; return<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;device_add&=
quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driver&quot;: &quot;xen-=
pci-passthrough&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&quot;: &quot;pci-pt-0=
3_00.0&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;hostaddr&quot;: &quot;00=
00:03:00.0&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:454:qmp_next: Socket=
 read error:<br>
&gt; &gt;&gt; Connection<br>
&gt; &gt;&gt; &gt; &gt; reset<br>
&gt; &gt;&gt; &gt; &gt; &gt; by peer<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initi=
alize: Connection<br>
&gt; &gt;&gt; error:<br>
&gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initi=
alize: Connection<br>
&gt; &gt;&gt; error:<br>
&gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initi=
alize: Connection<br>
&gt; &gt;&gt; error:<br>
&gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:81:libxl__create_pci=
_backend: Creating pci<br>
&gt; &gt;&gt; &gt; &gt; backend<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1737:libxl__ao_pro=
gress_report: ao<br>
&gt; &gt;&gt; 0x210c360:<br>
&gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1569:libxl__ao_com=
plete: ao 0x210c360:<br>
&gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1541:libxl__ao__de=
stroy: ao 0x210c360:<br>
&gt; &gt;&gt; &gt; &gt; destroy<br>
&gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 3214<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: total allocations=
:793 total<br>
&gt; &gt;&gt; releases:793<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: current allocatio=
ns:0 maximum<br>
&gt; &gt;&gt; allocations:4<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: cache current siz=
e:4<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: cache hits:785 mi=
sses:4 toobig:4<br>
&gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt; &gt; ##############################################=
#############<br>
&gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm=
-0.log<br>
&gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5 (label se=
rial0)<br>
&gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to populate =
ram at 40030000<br>
&gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>
&gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 E=
DX=3D00000633<br>
&gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 E=
SP=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D=
0 II=3D0 A20=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 C=
R4=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 D=
R3=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=
=3D00001f80<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>
&gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 E=
DX=3D00000633<br>
&gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 E=
SP=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D=
0 II=3D0 A20=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 C=
R4=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 D=
R3=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=
=3D00001f80<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt; &gt; ##############################################=
#############<br>
&gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4.3-amd64&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_release -i -s 2&gt; /d=
ev/null || echo Debian`<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=3D&quot;quiet splas=
h&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot;&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;dom0_mem=3D1024M dom0=
_max_vcpus=3D1&quot;<br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt; &gt; ______________________________________________=
_<br>
&gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen=
-devel@lists.xen.org</a><br>
&gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xen.org/xen-devel" tar=
get=3D"_blank">http://lists.xen.org/xen-devel</a><br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;<br>
&gt; &gt;<br>

--001a11339e2ed1228504f1d713e7--


--===============7898976862245331660==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7898976862245331660==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 20:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsJn-0000kt-3y; Fri, 07 Feb 2014 20:46:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBqMR-00017J-L6
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 18:41:12 +0000
Received: from [85.158.143.35:64510] by server-2.bemta-4.messagelabs.com id
	79/8A-10891-6C825F25; Fri, 07 Feb 2014 18:41:10 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391798466!4020921!1
X-Originating-IP: [209.85.212.51]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8762 invoked from network); 7 Feb 2014 18:41:07 -0000
Received: from mail-vb0-f51.google.com (HELO mail-vb0-f51.google.com)
	(209.85.212.51)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:41:07 -0000
Received: by mail-vb0-f51.google.com with SMTP id 11so2919896vbe.38
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 10:41:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=IHUbxjo1HnoKHrVR4ly2PhmdWxfnaAZYN291alBVVtY=;
	b=P8DbuqI9mttbKoWb4uHAw80EOvcTw8exDBIt9QrE0k7iDZdbV8llXzfXz2FAXnge+y
	JiE7NIUh58x5lzqsBal2NHcB5T02wJaXMIQWFOx6V93NKJeDioDduIzlGAQVAxW8ewt7
	9bDuUTB0sL1XJh4qyNiJZjow40FqFYDkE9OfTN3s+IeTxIYXyYok2fRknow6/J4V7NcK
	d64Oz6xr8fSjgI5ViWpLh2EhDuDtFeRx463EKI7ca5+ws03xT3e63trq1N+97993UJze
	1ge3XuZVCYnsz25Gvj5rwGcan0i16mZ29huanQmn4b0l/+1x/tN9eeXM/VOT++HxxRJ/
	rAGQ==
X-Received: by 10.52.114.99 with SMTP id jf3mr85123vdb.66.1391798466621; Fri,
	07 Feb 2014 10:41:06 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 10:40:26 -0800 (PST)
In-Reply-To: <20140207183056.GA10265@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 13:40:26 -0500
Message-ID: <CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 20:46:34 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8734806169890918992=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8734806169890918992==
Content-Type: multipart/alternative; boundary=bcaec547c9cf33b73f04f1d55595

--bcaec547c9cf33b73f04f1d55595
Content-Type: text/plain; charset=ISO-8859-1

Much easier. Do I need to install from source, or is there a package I can install?

Regards
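
For reference, the per-device rebinding that my dev_mgmt.sh output (quoted below) reflects can be done by hand through sysfs. This is only a minimal sketch, not the actual script: it assumes the in-kernel xen-pciback module (which registers a driver named "pciback"), and the SYSFS variable is an illustrative hook, not something the real /sys interface needs:

```shell
# Rebind one PCI function (full BDF, e.g. 0000:03:00.0) from its
# current driver to xen-pciback via sysfs writes.
rebind_to_pciback() {
    bdf="$1"
    sysfs="${SYSFS:-/sys}"
    dev="$sysfs/bus/pci/devices/$bdf"
    # Detach the device from whatever driver currently holds it (igb, bnx2, ...)
    if [ -e "$dev/driver" ]; then
        echo "$bdf" > "$dev/driver/unbind"
    fi
    # Tell pciback to accept this slot, then bind the device to it
    echo "$bdf" > "$sysfs/bus/pci/drivers/pciback/new_slot"
    echo "$bdf" > "$sysfs/bus/pci/drivers/pciback/bind"
}
```

With the xl toolstack the same result is normally reached with `xl pci-assignable-add 03:00.0`, which is usually preferable to hand-written sysfs writes.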


On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > I did not.  I do not have the toolchain installed.  I may have time later
> > today to try the patch.  Are there any specific instructions on how to
> > patch the src, compile and install?
>
> There actually should be a new version of Xen 4.4-rcX which will have the
> fix. That might be easier for you?
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > Hi all,
> > > >
> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
> NIC)
> > > to a
> > > > HVM.  I have been attempting to resolve this issue on the xen-users
> list,
> > > > but it was advised to post this issue to this list. (Initial Message
> -
> > > >
> > >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > > >
> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > E31220 with 4GB of ram.
> > > >
> > > > The possible bug is the following:
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > ....
> > > >
> > > > I believe it may be similar to this thread
> > > >
> > >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > >
> > > >
> > > > Additional info that may be helpful is below.
> > >
> > > Did you try the patch?
> > > >
> > > > Please let me know if you need any additional information.
> > > >
> > > > Thanks in advance for any help provided!
> > > > Regards
> > > >
> > > > ###########################################################
> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > ###########################################################
> > > > # Configuration file for Xen HVM
> > > >
> > > > # HVM Name (as appears in 'xl list')
> > > > name="ubuntu-hvm-0"
> > > > # HVM Build settings (+ hardware)
> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > builder='hvm'
> > > > device_model='qemu-dm'
> > > > memory=1024
> > > > vcpus=2
> > > >
> > > > # Virtual Interface
> > > > # Network bridge to USB NIC
> > > > vif=['bridge=xenbr0']
> > > >
> > > > ################### PCI PASSTHROUGH ###################
> > > > # PCI Permissive mode toggle
> > > > #pci_permissive=1
> > > >
> > > > # All PCI Devices
> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> '05:00.1']
> > > >
> > > > # First two ports on Intel 4x1G NIC
> > > > #pci=['03:00.0','03:00.1']
> > > >
> > > > # Last two ports on Intel 4x1G NIC
> > > > #pci=['04:00.0', '04:00.1']
> > > >
> > > > # All ports on Intel 4x1G NIC
> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > >
> > > > # Broadcom 2x1G NIC
> > > > #pci=['05:00.0', '05:00.1']
> > > > ################### PCI PASSTHROUGH ###################
> > > >
> > > > # HVM Disks
> > > > # Hard disk only
> > > > # Boot from HDD first ('c')
> > > > boot="c"
> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > >
> > > > # Hard disk with ISO
> > > > # Boot from ISO first ('d')
> > > > #boot="d"
> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > >
> > > > # ACPI Enable
> > > > acpi=1
> > > > # HVM Event Modes
> > > > on_poweroff='destroy'
> > > > on_reboot='restart'
> > > > on_crash='restart'
> > > >
> > > > # Serial Console Configuration (Xen Console)
> > > > sdl=0
> > > > serial='pty'
> > > >
> > > > # VNC Configuration
> > > > # Only reachable from localhost
> > > > vnc=1
> > > > vnclisten="0.0.0.0"
> > > > vncpasswd=""
> > > >
> > > > ###########################################################
> > > > Copied from the xen-users list
> > > > ###########################################################
> > > >
> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > >
> > > >
> > > > I rebooted the host and assigned the PCI devices to pciback. The
> > > > output looks like:
> > > > root@fiat:~# ./dev_mgmt.sh
> > > > Loading Kernel Module 'xen-pciback'
> > > > Calling function pciback_dev for:
> > > > PCI DEVICE 0000:03:00.0
> > > > Unbinding 0000:03:00.0 from igb
> > > > Binding 0000:03:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:03:00.1
> > > > Unbinding 0000:03:00.1 from igb
> > > > Binding 0000:03:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.0
> > > > Unbinding 0000:04:00.0 from igb
> > > > Binding 0000:04:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.1
> > > > Unbinding 0000:04:00.1 from igb
> > > > Binding 0000:04:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.0
> > > > Unbinding 0000:05:00.0 from bnx2
> > > > Binding 0000:05:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.1
> > > > Unbinding 0000:05:00.1 from bnx2
> > > > Binding 0000:05:00.1 to pciback
> > > >
> > > > Listing PCI Devices Available to Xen
> > > > 0000:03:00.0
> > > > 0000:03:00.1
> > > > 0000:04:00.0
> > > > 0000:04:00.1
> > > > 0000:05:00.0
> > > > 0000:05:00.1
> > > >
> > > > ###########################################################
> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > WARNING: ignoring device_model directive.
> > > > WARNING: Use "device_model_override" instead if you really want a
> > > > non-default device_model
> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
> create:
> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=unknown
> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > > vdev=hda, using backend phy
> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > > bootloader
> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > > domain, skipping bootloader
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c728: deregister unregistered
> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> NUMA
> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > free_memkb=2980
> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > > candidate
> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > >   Loader:        0000000000100000->00000000001a69a4
> > > >   Modules:       0000000000000000->0000000000000000
> > > >   TOTAL:         0000000000000000->000000003f800000
> > > >   ENTRY ADDRESS: 0000000000100608
> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > >   4KB PAGES: 0x0000000000000200
> > > >   2MB PAGES: 0x00000000000001fb
> > > >   1GB PAGES: 0x0000000000000000
> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> 0x7f022c81682d
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=phy
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > register slotnum=3
> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > > inprogress: poller=0x210c3c0, flags=i
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > > state 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script:
> > > > /etc/xen/scripts/block add
> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > > device-model
> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > /usr/bin/qemu-system-i386
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > chardev=libxl-cmd,mode=control
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
> ,to=99
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> isa-fdc.driveA=
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> vga.vram_size_mb=8
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >
> > >
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > register
> > > > slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960: deregister unregistered
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-chardev",
> > > >     "id": 2
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "change",
> > > >     "id": 3,
> > > >     "arguments": {
> > > >         "device": "vnc",
> > > >         "target": "password",
> > > >         "arg": ""
> > > >     }
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-vnc",
> > > >     "id": 4
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > register
> > > > slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
> state
> > > 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script:
> > > > /etc/xen/scripts/vif-bridge online
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script:
> > > > /etc/xen/scripts/vif-bridge add
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "device_add",
> > > >     "id": 2,
> > > >     "arguments": {
> > > >         "driver": "xen-pci-passthrough",
> > > >         "id": "pci-pt-03_00.0",
> > > >         "hostaddr": "0000:03:00.0"
> > > >     }
> > > > }
> > > > '
> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection
> > > reset
> > > > by peer
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> error:
> > > > Connection refused
> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> > > backend
> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
> 0x210c360:
> > > > progress report: ignored
> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > > > complete, rc=0
> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> > > destroy
> > > > Daemon running with PID 3214
> > > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> allocations:4
> > > > xc: debug: hypercall buffer: cache current size:4
> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > > >
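The QMP exchange interleaved in the log above is JSON sent over the UNIX socket /var/run/xen/qmp-libxl-2: a capabilities handshake, then the device_add of xen-pci-passthrough that precedes the connection reset. A sketch of those two messages (helper name is ours; bodies are copied from the log, compacted to one line each; on a live host the output could be piped into the monitor socket, e.g. with socat):

```shell
# Emit the two QMP messages libxl sends before the crash: the
# capabilities handshake, then device_add of xen-pci-passthrough.
# Helper name is ours; e.g. pipe into the socket with:
#   socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-2
qmp_device_add() {
    dev_id="$1"; hostaddr="$2"
    printf '{ "execute": "qmp_capabilities", "id": 1 }\n'
    printf '{ "execute": "device_add", "id": 2, "arguments": { "driver": "xen-pci-passthrough", "id": "%s", "hostaddr": "%s" } }\n' \
        "$dev_id" "$hostaddr"
}

qmp_device_add pci-pt-03_00.0 0000:03:00.0
```

In the failing run it is this device_add that QEMU never answers: the socket read error below is libxl noticing that QEMU died while handling it.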
> > > > ###########################################################
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > CPU #0:
> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > ES =0000 00000000 0000ffff 00009300
> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > SS =0000 00000000 0000ffff 00009300
> > > > DS =0000 00000000 0000ffff 00009300
> > > > FS =0000 00000000 0000ffff 00009300
> > > > GS =0000 00000000 0000ffff 00009300
> > > > LDT=0000 00000000 0000ffff 00008200
> > > > TR =0000 00000000 0000ffff 00008b00
> > > > GDT=     00000000 0000ffff
> > > > IDT=     00000000 0000ffff
> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > DR6=ffff0ff0 DR7=00000400
> > > > EFER=0000000000000000
> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > XMM00=00000000000000000000000000000000
> > > > XMM01=00000000000000000000000000000000
> > > > XMM02=00000000000000000000000000000000
> > > > XMM03=00000000000000000000000000000000
> > > > XMM04=00000000000000000000000000000000
> > > > XMM05=00000000000000000000000000000000
> > > > XMM06=00000000000000000000000000000000
> > > > XMM07=00000000000000000000000000000000
> > > > CPU #1:
> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > ES =0000 00000000 0000ffff 00009300
> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > SS =0000 00000000 0000ffff 00009300
> > > > DS =0000 00000000 0000ffff 00009300
> > > > FS =0000 00000000 0000ffff 00009300
> > > > GS =0000 00000000 0000ffff 00009300
> > > > LDT=0000 00000000 0000ffff 00008200
> > > > TR =0000 00000000 0000ffff 00008b00
> > > > GDT=     00000000 0000ffff
> > > > IDT=     00000000 0000ffff
> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > DR6=ffff0ff0 DR7=00000400
> > > > EFER=0000000000000000
> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > XMM00=00000000000000000000000000000000
> > > > XMM01=00000000000000000000000000000000
> > > > XMM02=00000000000000000000000000000000
> > > > XMM03=00000000000000000000000000000000
> > > > XMM04=00000000000000000000000000000000
> > > > XMM05=00000000000000000000000000000000
> > > > XMM06=00000000000000000000000000000000
> > > > XMM07=00000000000000000000000000000000
> > > >
> > > > ###########################################################
> > > > /etc/default/grub
> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > GRUB_TIMEOUT=10
> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > GRUB_CMDLINE_LINUX=""
> > > > # biosdevname=0
> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xen.org
> > > > http://lists.xen.org/xen-devel
> > >
> > >
>

Much. Do I need to install from src or is there a package I can install?

Regards

On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > I did not. I do not have the toolchain installed. I may have time later
> > today to try the patch. Are there any specific instructions on how to
> > patch the src, compile and install?
>
> There actually should be a new version of Xen 4.4-rcX which will have the
> fix. That might be easier for you?
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > Hi all,
> > > >
> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC)
> > > to a
> > > > HVM. I have been attempting to resolve this issue on the xen-users list,
> > > > but it was advised to post this issue to this list. (Initial Message -
> > > >
> > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > > >
> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > E31220 with 4GB of ram.
> > > >
> > > > The possible bug is the following:
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > ....
> > > >
> > > > I believe it may be similar to this thread
> > > >
> > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > >
> > > >
> > > > Additional info that may be helpful is below.
> > >
> > > Did you try the patch?
> > > >
> > > > Please let me know if you need any additional information.
> > > >
> > > > Thanks in advance for any help provided!
> > > > Regards
> > > >
> > > > ###########################################################
> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > ###########################################################
> > > > # Configuration file for Xen HVM
> > > >
> > > > # HVM Name (as appears in 'xl list')
> > > > name="ubuntu-hvm-0"
> > > > # HVM Build settings (+ hardware)
> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > builder='hvm'
> > > > device_model='qemu-dm'
> > > > memory=1024
> > > > vcpus=2
> > > >
> > > > # Virtual Interface
> > > > # Network bridge to USB NIC
> > > > vif=['bridge=xenbr0']
> > > >
> > > > ################### PCI PASSTHROUGH ###################
> > > > # PCI Permissive mode toggle
> > > > #pci_permissive=1
> > > >
> > > > # All PCI Devices
> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > >
> > > > # First two ports on Intel 4x1G NIC
> > > > #pci=['03:00.0','03:00.1']
> > > >
> > > > # Last two ports on Intel 4x1G NIC
> > > > #pci=['04:00.0', '04:00.1']
> > > >
> > > > # All ports on Intel 4x1G NIC
> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > >
> > > > # Broadcom 2x1G NIC
> > > > #pci=['05:00.0', '05:00.1']
> > > > ################### PCI PASSTHROUGH ###################
> > > >
> > > > # HVM Disks
> > > > # Hard disk only
> > > > # Boot from HDD first ('c')
> > > > boot="c"
> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > >
> > > > # Hard disk with ISO
> > > > # Boot from ISO first ('d')
> > > > #boot="d"
> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > >
> > > > # ACPI Enable
> > > > acpi=1
> > > > # HVM Event Modes
> > > > on_poweroff='destroy'
> > > > on_reboot='restart'
> > > > on_crash='restart'
> > > >
> > > > # Serial Console Configuration (Xen Console)
> > > > sdl=0
> > > > serial='pty'
> > > >
> > > > # VNC Configuration
> > > > # Only reachable from localhost
> > > > vnc=1
> > > > vnclisten="0.0.0.0"
> > > > vncpasswd=""
> > > >
> > > > ###########################################################
> > > > Copied from xen-users list
> > > > ###########################################################
> > > >
> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > >
> > > >
> > > > I rebooted the Host and ran the script that assigns PCI devices to
> > > > pciback. The output looks like:
> > > > root@fiat:~# ./dev_mgmt.sh
> > > > Loading Kernel Module 'xen-pciback'
> > > > Calling function pciback_dev for:
> > > > PCI DEVICE 0000:03:00.0
> > > > Unbinding 0000:03:00.0 from igb
> > > > Binding 0000:03:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:03:00.1
> > > > Unbinding 0000:03:00.1 from igb
> > > > Binding 0000:03:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.0
> > > > Unbinding 0000:04:00.0 from igb
> > > > Binding 0000:04:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.1
> > > > Unbinding 0000:04:00.1 from igb
> > > > Binding 0000:04:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.0
> > > > Unbinding 0000:05:00.0 from bnx2
> > > > Binding 0000:05:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.1
> > > > Unbinding 0000:05:00.1 from bnx2
> > > > Binding 0000:05:00.1 to pciback
> > > >
> > > > Listing PCI Devices Available to Xen
> > > > 0000:03:00.0
> > > > 0000:03:00.1
> > > > 0000:04:00.0
> > > > 0000:04:00.1
> > > > 0000:05:00.0
> > > > 0000:05:00.1
> > > >
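The Unbinding/Binding lines above correspond to plain sysfs writes: release the port from its network driver, then hand the slot to xen-pciback. A minimal sketch of one iteration (function name and the SYSFS override are ours; the pciback paths are the standard xen-pciback sysfs interface):

```shell
# bind_to_pciback BDF OLD_DRIVER - detach a PCI device from its current
# driver and hand it to xen-pciback via sysfs writes. SYSFS defaults to
# /sys but is parameterised so the sequence can be dry-run against a
# fake tree.
bind_to_pciback() {
    bdf="$1"; old_drv="$2"
    sysfs="${SYSFS:-/sys}"
    echo "$bdf" > "$sysfs/bus/pci/drivers/$old_drv/unbind"   # e.g. igb or bnx2
    echo "$bdf" > "$sysfs/bus/pci/drivers/pciback/new_slot"  # let pciback claim the slot
    echo "$bdf" > "$sysfs/bus/pci/drivers/pciback/bind"      # bind it
}
```

After binding, `xl pci-assignable-list` should report the device, matching the listing above.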
> > > > ###########################################################
> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > WARNING: ignoring device_model directive.
> > > > WARNING: Use "device_model_override" instead if you really want a
> > > > non-default device_model
> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=unknown
> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > > vdev=hda, using backend phy
> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > > bootloader
> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > > domain, skipping bootloader
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c728: deregister unregistered
> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > free_memkb=2980
> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > > candidate
> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > >   Loader:        0000000000100000->00000000001a69a4
> > > >   Modules:       0000000000000000->0000000000000000
> > > >   TOTAL:         0000000000000000->000000003f800000
> > > >   ENTRY ADDRESS: 0000000000100608
> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > >   4KB PAGES: 0x0000000000000200
> > > >   2MB PAGES: 0x00000000000001fb
> > > >   1GB PAGES: 0x0000000000000000
> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=phy
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > register slotnum=3
> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > > inprogress: poller=0x210c3c0, flags=i
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > > state 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > /etc/xen/scripts/block add
> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > > device-model
> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > /usr/bin/qemu-system-i386
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > chardev=libxl-cmd,mode=control
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >
> > > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > register
> > > > slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960: deregister unregistered
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-chardev",
> > > >     "id": 2
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "change",
> > > >     "id": 3,
> > > >     "arguments": {
> > > >         "device": "vnc",
> > > >         "target": "password",
> > > >         "arg": ""
> > > >     }
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-vnc",
> > > >     "id": 4
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > register
> > > > slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state
> > > 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > /etc/xen/scripts/vif-bridge online
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > /etc/xen/scripts/vif-bridge add
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "device_add",
> > > >     "id": 2,
> > > >     "arguments": {
&gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driver&quot;: &quot;xen-pci-passthroug=
h&quot;,<br>
&gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&quot;: &quot;pci-pt-03_00.0&quot;,<=
br>
&gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;hostaddr&quot;: &quot;0000:03:00.0&quo=
t;<br>
&gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: C=
onnection<br>
&gt; &gt; reset<br>
&gt; &gt; &gt; by peer<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connect=
ion error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connect=
ion error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connect=
ion error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Crea=
ting pci<br>
&gt; &gt; backend<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: =
ao 0x210c360:<br>
&gt; &gt; &gt; progress report: ignored<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x21=
0c360:<br>
&gt; &gt; &gt; complete, rc=3D0<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x21=
0c360:<br>
&gt; &gt; destroy<br>
&gt; &gt; &gt; Daemon running with PID 3214<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: total allocations:793 total rel=
eases:793<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: current allocations:0 maximum a=
llocations:4<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: cache current size:4<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:=
4<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<=
br>
&gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; char device redirected to /dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 4003000=
0<br>
&gt; &gt; &gt; CPU #0:<br>
&gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D00000633<=
br>
&gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D00000000<=
br>
&gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=3D0 A20=
=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt; &gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; &gt; &gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D00000000<=
br>
&gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D00000000<=
br>
&gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; XMM00=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM01=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM02=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM03=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM04=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM05=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM06=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM07=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; CPU #1:<br>
&gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D00000633<=
br>
&gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D00000000<=
br>
&gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=3D0 A20=
=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt; &gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; &gt; &gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D00000000<=
br>
&gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D00000000<=
br>
&gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000000000 0000<br=
>
&gt; &gt; &gt; XMM00=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM01=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM02=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM03=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM04=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM05=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM06=3D00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM07=3D00000000000000000000000000000000<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<=
br>
&gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4.3-amd64&quot;<br>
&gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br>
&gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue<br>
&gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>
&gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_release -i -s 2&gt; /dev/null || ech=
o Debian`<br>
&gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=3D&quot;quiet splash&quot;<br>
&gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot;&quot;<br>
&gt; &gt; &gt; # biosdevname=3D0<br>
&gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;dom0_mem=3D1024M dom0_max_vcpus=3D1=
&quot;<br>
&gt; &gt;<br>
&gt; &gt; &gt; _______________________________________________<br>
&gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt; &gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.x=
en.org</a><br>
&gt; &gt; &gt; <a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank"=
>http://lists.xen.org/xen-devel</a><br>
&gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>

--bcaec547c9cf33b73f04f1d55595--


--===============8734806169890918992==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 20:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:46:47 +0000
MIME-Version: 1.0
In-Reply-To: <CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 15:36:49 -0500
Message-ID: <CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 20:46:34 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8441876354739994021=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8441876354739994021==
Content-Type: multipart/alternative; boundary=047d7b66f2f770a4fb04f1d6f5d3

--047d7b66f2f770a4fb04f1d6f5d3
Content-Type: text/plain; charset=ISO-8859-1

I was able to compile and install Xen 4.4 RC3 on my host; however, I am
now getting this error:

root@fiat:~/git/xen# xl list
xc: error: Could not obtain handle on privileged command interface (2 = No
such file or directory): Internal error
libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
file or directory
cannot init xl context

I've searched Google for this; an article comes up, but it is not the same
issue (as far as I can tell).  Running any xl command generates a similar error.

What can I do to fix this?

Regards
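(For anyone who lands on this thread later: the "Could not obtain handle on privileged command interface" failure usually means the privcmd device node is simply absent, e.g. because a plain non-Xen kernel entry was booted, xenfs is not mounted, or the Xen daemons were never started. A minimal diagnostic sketch under those assumptions; the init-script name matches this sysvinit-era Ubuntu host and may differ elsewhere:)

```shell
# Check for the privileged command interface that libxc opens; if both
# nodes are missing, the box is likely not running under Xen at all,
# or xenfs was never mounted.
check_privcmd() {
    for node in /proc/xen/privcmd /dev/xen/privcmd; do
        if [ -e "$node" ]; then
            echo "present: $node"
        else
            echo "missing: $node"
        fi
    done
}

check_privcmd
# If both report missing, confirm the GRUB Xen entry actually booted
# (e.g. `dmesg | grep -i xen`), then try:
#   mount -t xenfs xenfs /proc/xen    # if the filesystem is not mounted
#   /etc/init.d/xencommons start      # start xenstored and friends
```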


On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
mikeneiderhauser@gmail.com> wrote:

> Much easier. Do I need to install from source, or is there a package I can install?
>
> Regards
>
>
> On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
>
>> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
>> > I did not.  I do not have the toolchain installed.  I may have time
>> later
>> > today to try the patch.  Are there any specific instructions on how to
>> > patch the source, compile, and install?
>>
>> There actually should be a new version of Xen 4.4-rcX which will have the
>> fix. That might be easier for you?
>> >
>> > Regards
>> >
>> >
>> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
>> > konrad.wilk@oracle.com> wrote:
>> >
>> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
>> > > > Hi all,
>> > > >
>> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
>> NIC)
>> > > to an
>> > > > HVM.  I have been attempting to resolve this issue on the xen-users
>> list,
>> > > > but it was advised to post this issue to this list. (Initial
>> Message -
>> > > >
>> > >
>> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>> )
>> > > >
>> > > > The machine I am using as host is a Dell PowerEdge server with a
>> > > > Xeon E3-1220 and 4 GB of RAM.
>> > > >
>> > > > The possible bug is the following:
>> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > ....
>> > > >
>> > > > I believe it may be similar to this thread
>> > > >
>> > >
>> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>> > > >
>> > > >
>> > > > Additional info that may be helpful is below.
>> > >
>> > > Did you try the patch?
>> > > >
>> > > > Please let me know if you need any additional information.
>> > > >
>> > > > Thanks in advance for any help provided!
>> > > > Regards
>> > > >
>> > > > ###########################################################
>> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>> > > > ###########################################################
>> > > > # Configuration file for Xen HVM
>> > > >
>> > > > # HVM Name (as appears in 'xl list')
>> > > > name="ubuntu-hvm-0"
>> > > > # HVM Build settings (+ hardware)
>> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>> > > > builder='hvm'
>> > > > device_model='qemu-dm'
>> > > > memory=1024
>> > > > vcpus=2
>> > > >
>> > > > # Virtual Interface
>> > > > # Network bridge to USB NIC
>> > > > vif=['bridge=xenbr0']
>> > > >
>> > > > ################### PCI PASSTHROUGH ###################
>> > > > # PCI Permissive mode toggle
>> > > > #pci_permissive=1
>> > > >
>> > > > # All PCI Devices
>> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
>> '05:00.1']
>> > > >
>> > > > # First two ports on Intel 4x1G NIC
>> > > > #pci=['03:00.0','03:00.1']
>> > > >
>> > > > # Last two ports on Intel 4x1G NIC
>> > > > #pci=['04:00.0', '04:00.1']
>> > > >
>> > > > # All ports on Intel 4x1G NIC
>> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>> > > >
>> > > > # Broadcom 2x1G NIC
>> > > > #pci=['05:00.0', '05:00.1']
>> > > > ################### PCI PASSTHROUGH ###################
>> > > >
>> > > > # HVM Disks
>> > > > # Hard disk only
>> > > > # Boot from HDD first ('c')
>> > > > boot="c"
>> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>> > > >
>> > > > # Hard disk with ISO
>> > > > # Boot from ISO first ('d')
>> > > > #boot="d"
>> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
>> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>> > > >
>> > > > # ACPI Enable
>> > > > acpi=1
>> > > > # HVM Event Modes
>> > > > on_poweroff='destroy'
>> > > > on_reboot='restart'
>> > > > on_crash='restart'
>> > > >
>> > > > # Serial Console Configuration (Xen Console)
>> > > > sdl=0
>> > > > serial='pty'
>> > > >
>> > > > # VNC Configuration
>> > > > # Only reachable from localhost
>> > > > vnc=1
>> > > > vnclisten="0.0.0.0"
>> > > > vncpasswd=""
>> > > >
>> > > > ###########################################################
>> > > > Copied for xen-users list
>> > > > ###########################################################
>> > > >
>> > > > It appears that it cannot obtain the RAM mapping for this PCI
>> device.
>> > > >
>> > > >
>> > > > I rebooted the host and assigned the PCI devices to pciback.  The
>> > > > output looks like:
>> > > > root@fiat:~# ./dev_mgmt.sh
>> > > > Loading Kernel Module 'xen-pciback'
>> > > > Calling function pciback_dev for:
>> > > > PCI DEVICE 0000:03:00.0
>> > > > Unbinding 0000:03:00.0 from igb
>> > > > Binding 0000:03:00.0 to pciback
>> > > >
>> > > > PCI DEVICE 0000:03:00.1
>> > > > Unbinding 0000:03:00.1 from igb
>> > > > Binding 0000:03:00.1 to pciback
>> > > >
>> > > > PCI DEVICE 0000:04:00.0
>> > > > Unbinding 0000:04:00.0 from igb
>> > > > Binding 0000:04:00.0 to pciback
>> > > >
>> > > > PCI DEVICE 0000:04:00.1
>> > > > Unbinding 0000:04:00.1 from igb
>> > > > Binding 0000:04:00.1 to pciback
>> > > >
>> > > > PCI DEVICE 0000:05:00.0
>> > > > Unbinding 0000:05:00.0 from bnx2
>> > > > Binding 0000:05:00.0 to pciback
>> > > >
>> > > > PCI DEVICE 0000:05:00.1
>> > > > Unbinding 0000:05:00.1 from bnx2
>> > > > Binding 0000:05:00.1 to pciback
>> > > >
>> > > > Listing PCI Devices Available to Xen
>> > > > 0000:03:00.0
>> > > > 0000:03:00.1
>> > > > 0000:04:00.0
>> > > > 0000:04:00.1
>> > > > 0000:05:00.0
>> > > > 0000:05:00.1
>> > > >
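(As an aside, the dev_mgmt.sh run above boils down to three sysfs writes per device: unbind the BDF from its native driver, then hand the slot to pciback. A dry-run sketch of that sequence; the `pciback` driver directory name is the usual one exposed by the xen-pciback module and is an assumption here:)

```shell
# Reassign one PCI device (BDF form 0000:bb:dd.f) to xen-pciback via
# sysfs.  With DRY_RUN=1 the steps are only printed, not executed.
rebind_to_pciback() {
    bdf="$1"
    for step in \
        "echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind" \
        "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot" \
        "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
    do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "$step"        # dry run: show the command only
        else
            eval "$step"        # real run: perform the sysfs write
        fi
    done
}

DRY_RUN=1 rebind_to_pciback 0000:03:00.0
```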
>> > > > ###########################################################
>> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
>> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>> > > > WARNING: ignoring device_model directive.
>> > > > WARNING: Use "device_model_override" instead if you really want a
>> > > > non-default device_model
>> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
>> create:
>> > > > how=(nil) callback=(nil) poller=0x210c3c0
>> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
>> Disk
>> > > > vdev=hda spec.backend=unknown
>> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
>> Disk
>> > > > vdev=hda, using backend phy
>> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
>> > > bootloader
>> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
>> > > > domain, skipping bootloader
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210c728: deregister unregistered
>> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
>> NUMA
>> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
>> > > > free_memkb=2980
>> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
>> > > candidate
>> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
>> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
>> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>> > > >   Loader:        0000000000100000->00000000001a69a4
>> > > >   Modules:       0000000000000000->0000000000000000
>> > > >   TOTAL:         0000000000000000->000000003f800000
>> > > >   ENTRY ADDRESS: 0000000000100608
>> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>> > > >   4KB PAGES: 0x0000000000000200
>> > > >   2MB PAGES: 0x00000000000001fb
>> > > >   1GB PAGES: 0x0000000000000000
>> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
>> 0x7f022c81682d
>> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
>> Disk
>> > > > vdev=hda spec.backend=phy
>> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
>> > > > register slotnum=3
>> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
>> > > > inprogress: poller=0x210c3c0, flags=i
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
>> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
>> > > > epath=/local/domain/0/backend/vbd/2/768/state
>> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
>> > > state 1
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
>> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
>> > > > epath=/local/domain/0/backend/vbd/2/768/state
>> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
>> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
>> > > > deregister slotnum=3
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x2112f48: deregister unregistered
>> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script:
>> > > > /etc/xen/scripts/block add
>> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
>> > > device-model
>> > > > /usr/bin/qemu-system-i386 with arguments:
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > /usr/bin/qemu-system-i386
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > chardev=libxl-cmd,mode=control
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
>> ,to=99
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> isa-fdc.driveA=
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> vga.vram_size_mb=8
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > >
>> > >
>> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
>> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
>> > > register
>> > > > slotnum=3
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
>> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
>> > > > epath=/local/domain/0/device-model/2/state
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
>> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
>> > > > epath=/local/domain/0/device-model/2/state
>> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
>> > > > deregister slotnum=3
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210c960: deregister unregistered
>> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
>> > > > /var/run/xen/qmp-libxl-2
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "qmp_capabilities",
>> > > >     "id": 1
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "query-chardev",
>> > > >     "id": 2
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "change",
>> > > >     "id": 3,
>> > > >     "arguments": {
>> > > >         "device": "vnc",
>> > > >         "target": "password",
>> > > >         "arg": ""
>> > > >     }
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "query-vnc",
>> > > >     "id": 4
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
>> > > register
>> > > > slotnum=3
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
>> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
>> > > > epath=/local/domain/0/backend/vif/2/0/state
>> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
>> state
>> > > 1
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
>> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
>> > > > epath=/local/domain/0/backend/vif/2/0/state
>> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
>> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
>> > > > deregister slotnum=3
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210e8a8: deregister unregistered
>> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script:
>> > > > /etc/xen/scripts/vif-bridge online
>> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script:
>> > > > /etc/xen/scripts/vif-bridge add
>> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
>> > > > /var/run/xen/qmp-libxl-2
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "qmp_capabilities",
>> > > >     "id": 1
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "device_add",
>> > > >     "id": 2,
>> > > >     "arguments": {
>> > > >         "driver": "xen-pci-passthrough",
>> > > >         "id": "pci-pt-03_00.0",
>> > > >         "hostaddr": "0000:03:00.0"
>> > > >     }
>> > > > }
>> > > > '
>> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
>> Connection
>> > > reset
>> > > > by peer
>> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
>> error:
>> > > > Connection refused
>> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
>> error:
>> > > > Connection refused
>> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
>> error:
>> > > > Connection refused
>> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
>> > > backend
>> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
>> 0x210c360:
>> > > > progress report: ignored
>> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
>> > > > complete, rc=0
>> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
>> > > destroy
>> > > > Daemon running with PID 3214
>> > > > xc: debug: hypercall buffer: total allocations:793 total
>> releases:793
>> > > > xc: debug: hypercall buffer: current allocations:0 maximum
>> allocations:4
>> > > > xc: debug: hypercall buffer: cache current size:4
>> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
>> > > >
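(The QMP traffic in the log above is newline-delimited JSON over a UNIX socket, so a single command can be replayed by hand when debugging the device model. A small sketch; the socat invocation in the comment assumes that tool is available, and the socket path is taken from the log:)

```shell
# Build the wire form of one QMP command, matching what libxl logs as
# "next qmp command".
qmp_cmd() {
    printf '{ "execute": "%s", "id": %d }\n' "$1" "$2"
}

qmp_cmd qmp_capabilities 1
qmp_cmd query-vnc 4
# To replay against a live device model (QMP requires qmp_capabilities
# to be sent first in the session), something along the lines of:
#   qmp_cmd query-vnc 4 | socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-2
```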
>> > > > ###########################################################
>> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > CPU #0:
>> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> > > > ES =0000 00000000 0000ffff 00009300
>> > > > CS =f000 ffff0000 0000ffff 00009b00
>> > > > SS =0000 00000000 0000ffff 00009300
>> > > > DS =0000 00000000 0000ffff 00009300
>> > > > FS =0000 00000000 0000ffff 00009300
>> > > > GS =0000 00000000 0000ffff 00009300
>> > > > LDT=0000 00000000 0000ffff 00008200
>> > > > TR =0000 00000000 0000ffff 00008b00
>> > > > GDT=     00000000 0000ffff
>> > > > IDT=     00000000 0000ffff
>> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> > > > DR6=ffff0ff0 DR7=00000400
>> > > > EFER=0000000000000000
>> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> > > > XMM00=00000000000000000000000000000000
>> > > > XMM01=00000000000000000000000000000000
>> > > > XMM02=00000000000000000000000000000000
>> > > > XMM03=00000000000000000000000000000000
>> > > > XMM04=00000000000000000000000000000000
>> > > > XMM05=00000000000000000000000000000000
>> > > > XMM06=00000000000000000000000000000000
>> > > > XMM07=00000000000000000000000000000000
>> > > > CPU #1:
>> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> > > > ES =0000 00000000 0000ffff 00009300
>> > > > CS =f000 ffff0000 0000ffff 00009b00
>> > > > SS =0000 00000000 0000ffff 00009300
>> > > > DS =0000 00000000 0000ffff 00009300
>> > > > FS =0000 00000000 0000ffff 00009300
>> > > > GS =0000 00000000 0000ffff 00009300
>> > > > LDT=0000 00000000 0000ffff 00008200
>> > > > TR =0000 00000000 0000ffff 00008b00
>> > > > GDT=     00000000 0000ffff
>> > > > IDT=     00000000 0000ffff
>> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> > > > DR6=ffff0ff0 DR7=00000400
>> > > > EFER=0000000000000000
>> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> > > > XMM00=00000000000000000000000000000000
>> > > > XMM01=00000000000000000000000000000000
>> > > > XMM02=00000000000000000000000000000000
>> > > > XMM03=00000000000000000000000000000000
>> > > > XMM04=00000000000000000000000000000000
>> > > > XMM05=00000000000000000000000000000000
>> > > > XMM06=00000000000000000000000000000000
>> > > > XMM07=00000000000000000000000000000000
>> > > >
>> > > > ###########################################################
>> > > > /etc/default/grub
>> > > > GRUB_DEFAULT="Xen 4.3-amd64"
>> > > > GRUB_HIDDEN_TIMEOUT=0
>> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
>> > > > GRUB_TIMEOUT=10
>> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
>> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
>> > > > GRUB_CMDLINE_LINUX=""
>> > > > # biosdevname=0
>> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>> > >
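As a side note on the quoted /etc/default/grub: the Xen-specific dom0 options all travel in the single GRUB_CMDLINE_XEN string, and edits there take effect only after regenerating the grub config (update-grub on Debian/Ubuntu) and rebooting. A small sketch of pulling those options apart (the `xen_cmdline_options` helper is hypothetical, not part of any Xen tooling):

```python
import shlex

grub_defaults = '''\
GRUB_DEFAULT="Xen 4.3-amd64"
GRUB_TIMEOUT=10
GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
'''

def xen_cmdline_options(text):
    """Return the dom0 options from GRUB_CMDLINE_XEN as a dict."""
    for line in text.splitlines():
        if line.startswith("GRUB_CMDLINE_XEN="):
            # Strip the shell-style quoting, then split key=value pairs.
            value = shlex.split(line.split("=", 1)[1])[0]
            return dict(opt.split("=", 1) for opt in value.split())
    return {}

print(xen_cmdline_options(grub_defaults))
# {'dom0_mem': '1024M', 'dom0_max_vcpus': '1'}
```

With dom0_mem=1024M on a 4 GB host, roughly 3 GB remains for guests, which matters when sizing the 1024 MB HVM in the config above.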
>> > > > _______________________________________________
>> > > > Xen-devel mailing list
>> > > > Xen-devel@lists.xen.org
>> > > > http://lists.xen.org/xen-devel
>> > >
>> > >
>>
>
>

--047d7b66f2f770a4fb04f1d6f5d3
Content-Type: text/plain; charset=ISO-8859-1

I was able to compile and install Xen 4.4 RC3 on my host; however, I am
getting the error:

root@fiat:~/git/xen# xl list
xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
cannot init xl context

I've searched for this and an article appears, but it is not the same (as
far as I can tell). Running any xl command generates a similar error.

What can I do to fix this?

Regards

On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser
<mikeneiderhauser@gmail.com> wrote:
> Much. Do I need to install from src, or is there a package I can install?
>
> Regards
>
> On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
>> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
>>> I did not. I do not have the toolchain installed. I may have time later
>>> today to try the patch. Are there any specific instructions on how to
>>> patch the src, compile and install?
>>
>> There actually should be a new version of Xen 4.4-rcX which will have the
>> fix. That might be easier for you?
>> [...]

--047d7b66f2f770a4fb04f1d6f5d3--


--===============8441876354739994021==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8441876354739994021==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 20:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsJo-0000l8-12; Fri, 07 Feb 2014 20:46:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBsJN-0000kZ-V7
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 20:46:10 +0000
Received: from [85.158.137.68:23735] by server-15.bemta-3.messagelabs.com id
	FF/F1-19263-11645F25; Fri, 07 Feb 2014 20:46:09 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391805959!415597!1
X-Originating-IP: [209.85.220.172]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30196 invoked from network); 7 Feb 2014 20:46:01 -0000
Received: from mail-vc0-f172.google.com (HELO mail-vc0-f172.google.com)
	(209.85.220.172)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 20:46:01 -0000
Received: by mail-vc0-f172.google.com with SMTP id lf12so3039969vcb.17
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 12:45:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=ylKE0UsorZ6aP+fBwfbejTPbnCPNj9qeepvc7UO7CFo=;
	b=ydJJFuvTcxAMY4Ba7i9Ke4ScAIShcGi2AlTNT9Mj1J45L8R2kWj6j9WToOrp95Cyat
	iswmwuLe5FyRMTFjaXGajrgZ61uC6NVUOPDb9fWKszr4AgXfrdlKGknGlKnjuZKA05sJ
	QT7q4gff/2Wz91ApPWgfScmTabNBWiYqN60dHtfDdFlU6Non2SkyPpXvortfc1yZKD1N
	a2n441UbJg/cdFAJsW6Ecd7WnXlPcN3tF+GTumu2gvyHS9Cd4sY0ZnDsrSEHoQ6dGXZB
	ylFaqKRzFa1zsMYc4E3ohNqYuNy3g+VFn23+G2QhfgaHAfNTn/qtifeAWjAo+LLKvO1c
	4KAw==
X-Received: by 10.221.20.199 with SMTP id qp7mr7954704vcb.24.1391805959576;
	Fri, 07 Feb 2014 12:45:59 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 12:45:19 -0800 (PST)
In-Reply-To: <20140207203934.GA13333@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 15:45:19 -0500
Message-ID: <CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 20:46:34 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7898976862245331660=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7898976862245331660==
Content-Type: multipart/alternative; boundary=001a11339e2ed1228504f1d713e7

--001a11339e2ed1228504f1d713e7
Content-Type: text/plain; charset=ISO-8859-1

Ok. I ran the initscripts and now xl works.

However, I still see the same behavior as before:

root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
Parsing config from /etc/xen/ubuntu-hvm-0.cfg
libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset
by peer
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
Connection refused
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
Connection refused
root@fiat:~# xl list
Name                                        ID   Mem VCPUs State   Time(s)
Domain-0                                     0  1024     1 r-----     15.2
ubuntu-hvm-0                                 1  1025     1 ------      0.0

(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
(XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
(XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
(XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
(XEN)  Start info:    ffffffff85519000->ffffffff855194b4
(XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
(XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
(XEN)  ENTRY ADDRESS: ffffffff81d261e0
(XEN) Dom0 has maximum 1 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
(XEN) Scrubbing Free RAM: .............................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 260kB init memory.
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:01.0
(XEN) PCI add device 0000:00:1a.0
(XEN) PCI add device 0000:00:1c.0
(XEN) PCI add device 0000:00:1d.0
(XEN) PCI add device 0000:00:1e.0
(XEN) PCI add device 0000:00:1f.0
(XEN) PCI add device 0000:00:1f.2
(XEN) PCI add device 0000:00:1f.3
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:02:02.0
(XEN) PCI add device 0000:02:04.0
(XEN) PCI add device 0000:03:00.0
(XEN) PCI add device 0000:03:00.1
(XEN) PCI add device 0000:04:00.0
(XEN) PCI add device 0000:04:00.1
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:05:00.1
(XEN) PCI add device 0000:06:03.0
(XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
(XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
(d1) HVM Loader
(d1) Detected Xen v4.4-rc2
(d1) Xenbus rings @0xfeffc000, event channel 4
(d1) System requested SeaBIOS
(d1) CPU speed is 3093 MHz
(d1) Relocating guest memory for lowmem MMIO space disabled


Excerpt from /var/log/xen/*
qemu: hardware error: xen: failed to populate ram at 40050000

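The "Over-allocation for domain 1: 262401 > 262400" line above says the guest tried to take one page more than its cap, and qemu then fails to populate RAM just above 1 GiB (0x40050000), right where hvmloader would want to carve out MMIO space for the four passed-through NIC functions. One crude experiment (purely a guess on my side, not something confirmed in this thread) is to shrink the guest so its RAM stays well clear of that region:

```
# hypothetical tweak to /etc/xen/ubuntu-hvm-0.cfg -- untested here
memory = 512       # was 1024; keeps guest RAM away from the MMIO hole
vcpus  = 2
pci    = ['03:00.0', '03:00.1', '04:00.0', '04:00.1']
```

If the domain then starts, that points at the memory/MMIO-hole layout rather than the NIC itself.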

On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > getting the error:
> >
> > root@fiat:~/git/xen# xl list
> > xc: error: Could not obtain handle on privileged command interface (2 = No
> > such file or directory): Internal error
> > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No
> > such file or directory
> > cannot init xl context
> >
> > I've searched Google for this and an article appears, but it is not the
> > same issue (as far as I can tell).  Running any xl command generates a
> > similar error.
> >
> > What can I do to fix this?
>
>
> You need to run the initscripts for Xen. I don't know what your distro is,
> but they are usually in /etc/init.d/ (or /etc/rc.d/init.d/), named xen*.
>
>
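[To make Konrad's suggestion concrete: a minimal sketch of starting the toolstack services and checking the interface xl complained about. The service names (xencommons, xend, xendomains) and the /etc/init.d/ path are assumptions; they vary by distro and Xen version.]

```shell
#!/bin/sh
# Start whichever Xen toolstack init scripts are installed (names assumed),
# then confirm the privileged command interface that xl needs is present.
for svc in xencommons xend xendomains; do
    if [ -x "/etc/init.d/$svc" ]; then
        "/etc/init.d/$svc" start
    fi
done

# xl's "Could not obtain handle on privileged command interface
# (No such file or directory)" means one of these nodes is missing
# (xenfs not mounted, or dom0 not actually running under Xen).
if [ -e /dev/xen/privcmd ] || [ -e /proc/xen/privcmd ]; then
    privcmd_status="present"
else
    privcmd_status="missing"
fi
echo "privcmd: $privcmd_status"
```

On a working dom0 this should print "privcmd: present"; if it prints "missing", starting the init scripts alone will not be enough.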
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > mikeneiderhauser@gmail.com> wrote:
> >
> > > Much easier. Do I need to install from source, or is there a package I
> > > can install?
> > >
> > > Regards
> > >
> > >
> > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > konrad.wilk@oracle.com> wrote:
> > >
> > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > >> > I did not.  I do not have the toolchain installed.  I may have time
> > >> later
> > >> > today to try the patch.  Are there any specific instructions on
> > >> > how to patch the source, compile, and install?
> > >>
> > >> There actually should be a new version of Xen 4.4-rcX which will have
> the
> > >> fix. That might be easier for you?
> > >> >
> > >> > Regards
> > >> >
> > >> >
> > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > >> > konrad.wilk@oracle.com> wrote:
> > >> >
> > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > >> > > > Hi all,
> > >> > > >
> > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> (4x1G
> > >> NIC)
> > >> > > to a
> > >> > > > HVM.  I have been attempting to resolve this issue on the
> > >> > > > xen-users list, but I was advised to post it to this list.
> > >> > > > (Initial message:
> > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > >> > > > )
> > >> > > >
> > >> > > > The machine I am using as the host is a Dell PowerEdge server
> > >> > > > with a Xeon E3-1220 and 4 GB of RAM.
> > >> > > >
> > >> > > > The possible bug is the following:
> > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > >> > > > ....
> > >> > > >
> > >> > > > I believe it may be similar to this thread:
> > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > >> > > >
> > >> > > >
> > >> > > > Additional info that may be helpful is below.
> > >> > >
> > >> > > Did you try the patch?
> > >> > > >
> > >> > > > Please let me know if you need any additional information.
> > >> > > >
> > >> > > > Thanks in advance for any help provided!
> > >> > > > Regards
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > ###########################################################
> > >> > > > # Configuration file for Xen HVM
> > >> > > >
> > >> > > > # HVM Name (as appears in 'xl list')
> > >> > > > name="ubuntu-hvm-0"
> > >> > > > # HVM Build settings (+ hardware)
> > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > >> > > > builder='hvm'
> > >> > > > device_model='qemu-dm'
> > >> > > > memory=1024
> > >> > > > vcpus=2
> > >> > > >
> > >> > > > # Virtual Interface
> > >> > > > # Network bridge to USB NIC
> > >> > > > vif=['bridge=xenbr0']
> > >> > > >
> > >> > > > ################### PCI PASSTHROUGH ###################
> > >> > > > # PCI Permissive mode toggle
> > >> > > > #pci_permissive=1
> > >> > > >
> > >> > > > # All PCI Devices
> > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > >> '05:00.1']
> > >> > > >
> > >> > > > # First two ports on Intel 4x1G NIC
> > >> > > > #pci=['03:00.0','03:00.1']
> > >> > > >
> > >> > > > # Last two ports on Intel 4x1G NIC
> > >> > > > #pci=['04:00.0', '04:00.1']
> > >> > > >
> > >> > > > # All ports on Intel 4x1G NIC
> > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > >> > > >
> > >> > > > # Broadcom 2x1G NIC
> > >> > > > #pci=['05:00.0', '05:00.1']
> > >> > > > ################### PCI PASSTHROUGH ###################
> > >> > > >
> > >> > > > # HVM Disks
> > >> > > > # Hard disk only
> > >> > > > # Boot from HDD first ('c')
> > >> > > > boot="c"
> > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > >> > > >
> > >> > > > # Hard disk with ISO
> > >> > > > # Boot from ISO first ('d')
> > >> > > > #boot="d"
> > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > >> > > >
> > >> > > > # ACPI Enable
> > >> > > > acpi=1
> > >> > > > # HVM Event Modes
> > >> > > > on_poweroff='destroy'
> > >> > > > on_reboot='restart'
> > >> > > > on_crash='restart'
> > >> > > >
> > >> > > > # Serial Console Configuration (Xen Console)
> > >> > > > sdl=0
> > >> > > > serial='pty'
> > >> > > >
> > >> > > > # VNC Configuration
> > >> > > > # Only reachable from localhost
> > >> > > > vnc=1
> > >> > > > vnclisten="0.0.0.0"
> > >> > > > vncpasswd=""
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > Copied from xen-users list
> > >> > > > ###########################################################
> > >> > > >
> > >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> > >> device.
> > >> > > >
> > >> > > >
> > >> > > > I rebooted the host, then assigned the PCI devices to pciback.
> > >> > > > The output looks like:
> > >> > > > root@fiat:~# ./dev_mgmt.sh
> > >> > > > Loading Kernel Module 'xen-pciback'
> > >> > > > Calling function pciback_dev for:
> > >> > > > PCI DEVICE 0000:03:00.0
> > >> > > > Unbinding 0000:03:00.0 from igb
> > >> > > > Binding 0000:03:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:03:00.1
> > >> > > > Unbinding 0000:03:00.1 from igb
> > >> > > > Binding 0000:03:00.1 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:04:00.0
> > >> > > > Unbinding 0000:04:00.0 from igb
> > >> > > > Binding 0000:04:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:04:00.1
> > >> > > > Unbinding 0000:04:00.1 from igb
> > >> > > > Binding 0000:04:00.1 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:05:00.0
> > >> > > > Unbinding 0000:05:00.0 from bnx2
> > >> > > > Binding 0000:05:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:05:00.1
> > >> > > > Unbinding 0000:05:00.1 from bnx2
> > >> > > > Binding 0000:05:00.1 to pciback
> > >> > > >
> > >> > > > Listing PCI Devices Available to Xen
> > >> > > > 0000:03:00.0
> > >> > > > 0000:03:00.1
> > >> > > > 0000:04:00.0
> > >> > > > 0000:04:00.1
> > >> > > > 0000:05:00.0
> > >> > > > 0000:05:00.1
> > >> > > >
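[The dev_mgmt.sh script itself is not shown in the thread, so the following is a sketch of what its output implies it does: unbind each function from its native driver (igb/bnx2) and hand it to xen-pciback via sysfs. The SYSFS variable is an addition of mine so the logic can be exercised against a scratch directory instead of the real /sys.]

```shell
#!/bin/sh
# Detach PCI functions from their native drivers and bind them to
# xen-pciback.  SYSFS defaults to the real sysfs tree but is overridable.
SYSFS="${SYSFS:-/sys/bus/pci}"

bind_to_pciback() {              # $1 = full BDF, e.g. 0000:03:00.0
    dev="$SYSFS/devices/$1"
    if [ -e "$dev/driver" ]; then
        echo "$1" > "$dev/driver/unbind"     # e.g. "Unbinding ... from igb"
    fi
    echo "$1" > "$SYSFS/drivers/pciback/new_slot"
    echo "$1" > "$SYSFS/drivers/pciback/bind"
}

modprobe xen-pciback 2>/dev/null || true     # no-op if already loaded

if [ -d "$SYSFS/drivers/pciback" ]; then
    for bdf in 0000:03:00.0 0000:03:00.1 0000:04:00.0 0000:04:00.1; do
        bind_to_pciback "$bdf"
    done
else
    echo "pciback driver not present under $SYSFS" >&2
fi
```

After this, `xl pci-assignable-list` should show the same BDFs the script printed.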
> > >> > > > ###########################################################
> > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > WARNING: ignoring device_model directive.
> > >> > > > WARNING: Use "device_model_override" instead if you really want
> a
> > >> > > > non-default device_model
> > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> 0x210c360:
> > >> create:
> > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda spec.backend=unknown
> > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda, using backend phy
> > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > >> > > bootloader
> > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not
> a PV
> > >> > > > domain, skipping bootloader
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x210c728: deregister unregistered
> > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New
> best
> > >> NUMA
> > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > >> > > > free_memkb=2980
> > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > >> > > candidate
> > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > >> > > >   Modules:       0000000000000000->0000000000000000
> > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > >> > > >   ENTRY ADDRESS: 0000000000100608
> > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > >> > > >   4KB PAGES: 0x0000000000000200
> > >> > > >   2MB PAGES: 0x00000000000001fb
> > >> > > >   1GB PAGES: 0x0000000000000000
> > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > >> 0x7f022c81682d
> > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda spec.backend=phy
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> watch
> > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> token=3/0:
> > >> > > > register slotnum=3
> > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> 0x210c360:
> > >> > > > inprogress: poller=0x210c3c0, flags=i
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x2112f48
> > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> waiting
> > >> > > state 1
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x2112f48
> > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> token=3/0:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x2112f48: deregister unregistered
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/block add
> > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > >> > > device-model
> > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > /usr/bin/qemu-system-i386
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -xen-domid
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > chardev=libxl-cmd,mode=control
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> ubuntu-hvm-0
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> 0.0.0.0:0
> > >> ,to=99
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> isa-fdc.driveA=
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> vga.vram_size_mb=8
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> 2,maxcpus=2
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> watch
> > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> token=3/1:
> > >> > > register
> > >> > > > slotnum=3
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x210c960
> > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > >> > > > epath=/local/domain/0/device-model/2/state
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x210c960
> > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > >> > > > epath=/local/domain/0/device-model/2/state
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> token=3/1:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x210c960: deregister unregistered
> > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected
> to
> > >> > > > /var/run/xen/qmp-libxl-2
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type: qmp
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> command: '{
> > >> > > >     "execute": "qmp_capabilities",
> > >> > > >     "id": 1
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> command: '{
> > >> > > >     "execute": "query-chardev",
> > >> > > >     "id": 2
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> command: '{
> > >> > > >     "execute": "change",
> > >> > > >     "id": 3,
> > >> > > >     "arguments": {
> > >> > > >         "device": "vnc",
> > >> > > >         "target": "password",
> > >> > > >         "arg": ""
> > >> > > >     }
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> command: '{
> > >> > > >     "execute": "query-vnc",
> > >> > > >     "id": 4
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> watch
> > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> token=3/2:
> > >> > > register
> > >> > > > slotnum=3
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x210e8a8
> > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> waiting
> > >> state
> > >> > > 1
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> w=0x210e8a8
> > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> token=3/2:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> watch
> > >> > > > w=0x210e8a8: deregister unregistered
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/vif-bridge online
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/vif-bridge add
> > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected
> to
> > >> > > > /var/run/xen/qmp-libxl-2
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type: qmp
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> command: '{
> > >> > > >     "execute": "qmp_capabilities",
> > >> > > >     "id": 1
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> command: '{
> > >> > > >     "execute": "device_add",
> > >> > > >     "id": 2,
> > >> > > >     "arguments": {
> > >> > > >         "driver": "xen-pci-passthrough",
> > >> > > >         "id": "pci-pt-03_00.0",
> > >> > > >         "hostaddr": "0000:03:00.0"
> > >> > > >     }
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > >> Connection
> > >> > > reset
> > >> > > > by peer
> > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> > >> error:
> > >> > > > Connection refused
> > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> > >> error:
> > >> > > > Connection refused
> > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> > >> error:
> > >> > > > Connection refused
> > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> Creating pci
> > >> > > backend
> > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
> > >> 0x210c360:
> > >> > > > progress report: ignored
> > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> 0x210c360:
> > >> > > > complete, rc=0
> > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> 0x210c360:
> > >> > > destroy
> > >> > > > Daemon running with PID 3214
> > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > >> releases:793
> > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > >> allocations:4
> > >> > > > xc: debug: hypercall buffer: cache current size:4
> > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > >> > > > CPU #0:
> > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > >> > > > ES =0000 00000000 0000ffff 00009300
> > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > >> > > > SS =0000 00000000 0000ffff 00009300
> > >> > > > DS =0000 00000000 0000ffff 00009300
> > >> > > > FS =0000 00000000 0000ffff 00009300
> > >> > > > GS =0000 00000000 0000ffff 00009300
> > >> > > > LDT=0000 00000000 0000ffff 00008200
> > >> > > > TR =0000 00000000 0000ffff 00008b00
> > >> > > > GDT=     00000000 0000ffff
> > >> > > > IDT=     00000000 0000ffff
> > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > >> > > > DR6=ffff0ff0 DR7=00000400
> > >> > > > EFER=0000000000000000
> > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > >> > > > XMM00=00000000000000000000000000000000
> > >> > > > XMM01=00000000000000000000000000000000
> > >> > > > XMM02=00000000000000000000000000000000
> > >> > > > XMM03=00000000000000000000000000000000
> > >> > > > XMM04=00000000000000000000000000000000
> > >> > > > XMM05=00000000000000000000000000000000
> > >> > > > XMM06=00000000000000000000000000000000
> > >> > > > XMM07=00000000000000000000000000000000
> > >> > > > CPU #1:
> > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > >> > > > ES =0000 00000000 0000ffff 00009300
> > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > >> > > > SS =0000 00000000 0000ffff 00009300
> > >> > > > DS =0000 00000000 0000ffff 00009300
> > >> > > > FS =0000 00000000 0000ffff 00009300
> > >> > > > GS =0000 00000000 0000ffff 00009300
> > >> > > > LDT=0000 00000000 0000ffff 00008200
> > >> > > > TR =0000 00000000 0000ffff 00008b00
> > >> > > > GDT=     00000000 0000ffff
> > >> > > > IDT=     00000000 0000ffff
> > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > >> > > > DR6=ffff0ff0 DR7=00000400
> > >> > > > EFER=0000000000000000
> > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > >> > > > XMM00=00000000000000000000000000000000
> > >> > > > XMM01=00000000000000000000000000000000
> > >> > > > XMM02=00000000000000000000000000000000
> > >> > > > XMM03=00000000000000000000000000000000
> > >> > > > XMM04=00000000000000000000000000000000
> > >> > > > XMM05=00000000000000000000000000000000
> > >> > > > XMM06=00000000000000000000000000000000
> > >> > > > XMM07=00000000000000000000000000000000
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > /etc/default/grub
> > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > >> > > > GRUB_TIMEOUT=10
> > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > >> > > > GRUB_CMDLINE_LINUX=""
> > >> > > > # biosdevname=0
> > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > >> > >
> > >> > > > _______________________________________________
> > >> > > > Xen-devel mailing list
> > >> > > > Xen-devel@lists.xen.org
> > >> > > > http://lists.xen.org/xen-devel
> > >> > >
> > >> > >
> > >>
> > >
> > >
>
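[Editorial aside, not part of the original mail: later in this thread the hypervisor logs "page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400" right before QEMU's "failed to populate ram". The numbers are plain 4 KiB page accounting against the memory=1024 config and the 1025 MiB that `xl list` reports, sketched below.]

```shell
# Editorial sketch: 4 KiB page arithmetic behind the over-allocation message.
pages_per_mib=$((1024 / 4))        # Xen counts guest RAM in 4 KiB pages

echo $((1024 * pages_per_mib))     # memory=1024 in the cfg  -> 262144 pages
echo $((1025 * pages_per_mib))     # 1025 MiB in `xl list`   -> 262400, the domain's cap
# 262401 > 262400: the device model tried to populate one page beyond the
# cap, which lines up with the subsequent
# "qemu: hardware error: xen: failed to populate ram".
```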

Ok. I ran the initscripts and now xl works.

However, I still see the same behavior as before:

root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
Parsing config from /etc/xen/ubuntu-hvm-0.cfg
libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
root@fiat:~# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  1024     1     r-----      15.2
ubuntu-hvm-0                                 1  1025     1     ------       0.0

(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
(XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
(XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
(XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
(XEN)  Start info:    ffffffff85519000->ffffffff855194b4
(XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
(XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
(XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
(XEN)  ENTRY ADDRESS: ffffffff81d261e0
(XEN) Dom0 has maximum 1 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
(XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
(XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
(XEN) Scrubbing Free RAM: .............................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) Xen is relinquishing VGA console.
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 260kB init memory.
(XEN) PCI add device 0000:00:00.0
(XEN) PCI add device 0000:00:01.0
(XEN) PCI add device 0000:00:1a.0
(XEN) PCI add device 0000:00:1c.0
(XEN) PCI add device 0000:00:1d.0
(XEN) PCI add device 0000:00:1e.0
(XEN) PCI add device 0000:00:1f.0
(XEN) PCI add device 0000:00:1f.2
(XEN) PCI add device 0000:00:1f.3
(XEN) PCI add device 0000:01:00.0
(XEN) PCI add device 0000:02:02.0
(XEN) PCI add device 0000:02:04.0
(XEN) PCI add device 0000:03:00.0
(XEN) PCI add device 0000:03:00.1
(XEN) PCI add device 0000:04:00.0
(XEN) PCI add device 0000:04:00.1
(XEN) PCI add device 0000:05:00.0
(XEN) PCI add device 0000:05:00.1
(XEN) PCI add device 0000:06:03.0
(XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
(XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
(d1) HVM Loader
(d1) Detected Xen v4.4-rc2
(d1) Xenbus rings @0xfeffc000, event channel 4
(d1) System requested SeaBIOS
(d1) CPU speed is 3093 MHz
(d1) Relocating guest memory for lowmem MMIO space disabled

Excerpt from /var/log/xen/*
qemu: hardware error: xen: failed to populate ram at 40050000

On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > I was able to compile and install xen4.4 RC3 on my host, however I am
> > getting the error:
> >
> > root@fiat:~/git/xen# xl list
> > xc: error: Could not obtain handle on privileged command interface (2 = No
> > such file or directory): Internal error
> > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> > file or directory
> > cannot init xl context
> >
> > I've google searched for this and an article appears, but is not the same
> > (as far as I can tell).  Running any xl command generates a similar error.
> >
> > What can I do to fix this?
>
>
> You need to run the initscripts for Xen. I don't know what your distro is, but
> they are usually put in /etc/init.d/rc.d/xen*
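[Editorial aside, not part of the original mail: the "privileged command interface" that libxc fails to open here is /proc/xen/privcmd, which becomes usable once the hypervisor is booted and the Xen init scripts (xencommons and friends; names vary by distro) have started xenstored. A minimal read-only check:]

```shell
# Editorial sketch: `xl` needs the privcmd interface that the Xen init
# scripts set up. This only inspects the system; it changes nothing.
if [ -e /proc/xen/privcmd ]; then
    echo "privcmd present"
else
    echo "privcmd missing - boot under Xen and run the Xen init scripts"
fi
```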
>
>
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > mikeneiderhauser@gmail.com> wrote:
> >
> > > Much. Do I need to install from src or is there a package I can install.
> > >
> > > Regards
> > >
> > >
> > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > konrad.wilk@oracle.com> wrote:
> > >
> > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > >> > I did not.  I do not have the toolchain installed.  I may have time
> > >> later
> > >> > today to try the patch.  Are there any specific instructions on how to
> > >> > patch the src, compile and install?
> > >>
> > >> There actually should be a new version of Xen 4.4-rcX which will have the
> > >> fix. That might be easier for you?
> > >> >
> > >> > Regards
> > >> >
> > >> >
> > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > >> > konrad.wilk@oracle.com> wrote:
> > >> >
> > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > >> > > > Hi all,
> > >> > > >
> > >> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
> > >> NIC)
> > >> > > to a
> > >> > > > HVM.  I have been attempting to resolve this issue on the xen-users
> > >> list,
> > >> > > > but it was advised to post this issue to this list. (Initial
> > >> Message -
> > >> > > >
> > >> > >
> > >> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > >> )
> > >> > > >
> > >> > > > The machine I am using as host is a Dell Poweredge server with a
> > >> Xeon
> > >> > > > E31220 with 4GB of ram.
> > >> > > >
> > >> > > > The possible bug is the following:
> > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > >> > > > ....
> > >> > > >
> > >> > > > I believe it may be similar to this thread
> > >> > > >
> > >> > >
> > >> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > >> > > >
> > >> > > >
> > >> > > > Additional info that may be helpful is below.
> > >> > >
> > >> > > Did you try the patch?
> > >> > > >
> > >> > > > Please let me know if you need any additional information.
> > >> > > >
> > >> > > > Thanks in advance for any help provided!
> > >> > > > Regards
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > ###########################################################
> > >> > > > # Configuration file for Xen HVM
> > >> > > >
> > >> > > > # HVM Name (as appears in 'xl list')
> > >> > > > name="ubuntu-hvm-0"
> > >> > > > # HVM Build settings (+ hardware)
> > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > >> > > > builder='hvm'
> > >> > > > device_model='qemu-dm'
> > >> > > > memory=1024
> > >> > > > vcpus=2
> > >> > > >
> > >> > > > # Virtual Interface
> > >> > > > # Network bridge to USB NIC
> > >> > > > vif=['bridge=xenbr0']
> > >> > > >
> > >> > > > ################### PCI PASSTHROUGH ###################
> > >> > > > # PCI Permissive mode toggle
> > >> > > > #pci_permissive=1
> > >> > > >
> > >> > > > # All PCI Devices
> > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > >> '05:00.1']
> > >> > > >
> > >> > > > # First two ports on Intel 4x1G NIC
> > >> > > > #pci=['03:00.0','03:00.1']
> > >> > > >
> > >> > > > # Last two ports on Intel 4x1G NIC
> > >> > > > #pci=['04:00.0', '04:00.1']
> > >> > > >
> > >> > > > # All ports on Intel 4x1G NIC
> > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > >> > > >
> > >> > > > # Broadcom 2x1G NIC
> > >> > > > #pci=['05:00.0', '05:00.1']
> > >> > > > ################### PCI PASSTHROUGH ###################
> > >> > > >
> > >> > > > # HVM Disks
> > >> > > > # Hard disk only
> > >> > > > # Boot from HDD first ('c')
> > >> > > > boot="c"
> > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > >> > > >
> > >> > > > # Hard disk with ISO
> > >> > > > # Boot from ISO first ('d')
> > >> > > > #boot="d"
> > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > >> > > >
> > >> > > > # ACPI Enable
> > >> > > > acpi=1
> > >> > > > # HVM Event Modes
> > >> > > > on_poweroff='destroy'
> > >> > > > on_reboot='restart'
> > >> > > > on_crash='restart'
> > >> > > >
> > >> > > > # Serial Console Configuration (Xen Console)
> > >> > > > sdl=0
> > >> > > > serial='pty'
> > >> > > >
> > >> > > > # VNC Configuration
> > >> > > > # Only reachable from localhost
> > >> > > > vnc=1
> > >> > > > vnclisten="0.0.0.0"
> > >> > > > vncpasswd=""
> > >> > > >
> > >> > > > ###########################################################
> > >> > > > Copied for xen-users list
> > >> > > > ###########################################################
> > >> > > >
> > >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> > >> device.
> > >> > > >
> > >> > > >
> > >> > > > I rebooted the Host.  I assigned pci devices to pciback. The
> > >> output
> > >> > > > looks like:
> > >> > > > root@fiat:~# ./dev_mgmt.sh
> > >> > > > Loading Kernel Module 'xen-pciback'
> > >> > > > Calling function pciback_dev for:
> > >> > > > PCI DEVICE 0000:03:00.0
> > >> > > > Unbinding 0000:03:00.0 from igb
> > >> > > > Binding 0000:03:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:03:00.1
> > >> > > > Unbinding 0000:03:00.1 from igb
> > >> > > > Binding 0000:03:00.1 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:04:00.0
> > >> > > > Unbinding 0000:04:00.0 from igb
> > >> > > > Binding 0000:04:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:04:00.1
> > >> > > > Unbinding 0000:04:00.1 from igb
> > >> > > > Binding 0000:04:00.1 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:05:00.0
> > >> > > > Unbinding 0000:05:00.0 from bnx2
> > >> > > > Binding 0000:05:00.0 to pciback
> > >> > > >
> > >> > > > PCI DEVICE 0000:05:00.1
> > >> > > > Unbinding 0000:05:00.1 from bnx2
> > >> > > > Binding 0000:05:00.1 to pciback
> > >> > > >
> > >> > > > Listing PCI Devices Available to Xen
> > >> > > > 0000:03:00.0
> > >> > > > 0000:03:00.1
> > >> > > > 0000:04:00.0
> > >> > > > 0000:04:00.1
> > >> > > > 0000:05:00.0
> > >> > > > 0000:05:00.1
> > >> > > >
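[Editorial aside, not part of the original mail: dev_mgmt.sh is the poster's own script, but the sysfs hand-off such a script typically performs looks roughly like the commands below. They are printed rather than executed here, since the real writes need root on a Xen dom0; the BDF is one of the devices from this thread.]

```shell
# Editorial sketch of the usual xen-pciback hand-off for one device.
# Printed instead of executed: the real sysfs writes need root on dom0.
BDF=0000:03:00.0
cat <<EOF
modprobe xen-pciback
echo $BDF > /sys/bus/pci/drivers/igb/unbind
echo $BDF > /sys/bus/pci/drivers/pciback/new_slot
echo $BDF > /sys/bus/pci/drivers/pciback/bind
EOF
```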
> > >> > > > ###########################################################
> > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > >> > > > WARNING: ignoring device_model directive.
> > >> > > > WARNING: Use "device_model_override" instead if you really want a
> > >> > > > non-default device_model
> > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
> > >> create:
> > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda spec.backend=unknown
> > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda, using backend phy
> > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > >> > > bootloader
> > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > >> > > > domain, skipping bootloader
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x210c728: deregister unregistered
> > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> > >> NUMA
> > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > >> > > > free_memkb=2980
> > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > >> > > candidate
> > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > >> > > >   Modules:       0000000000000000->0000000000000000
> > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > >> > > >   ENTRY ADDRESS: 0000000000100608
> > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > >> > > >   4KB PAGES: 0x0000000000000200
> > >> > > >   2MB PAGES: 0x00000000000001fb
> > >> > > >   1GB PAGES: 0x0000000000000000
> > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > >> 0x7f022c81682d
> > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > >> Disk
> > >> > > > vdev=hda spec.backend=phy
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > >> > > > register slotnum=3
> > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > >> > > > inprogress: poller=0x210c3c0, flags=i
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > >> > > state 1
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x2112f48: deregister unregistered
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/block add
> > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > >> > > device-model
> > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > /usr/bin/qemu-system-i386
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > chardev=libxl-cmd,mode=control
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
> > >> ,to=99
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> isa-fdc.driveA=
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> vga.vram_size_mb=8
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >> > > >
> > >> > >
> > >> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > >> > > register
> > >> > > > slotnum=3
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > >> > > > epath=/local/domain/0/device-model/2/state
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > >> > > > epath=/local/domain/0/device-model/2/state
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x210c960: deregister unregistered
> > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > >> > > > /var/run/xen/qmp-libxl-2
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "qmp_capabilities",
> > >> > > >     "id": 1
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "query-chardev",
> > >> > > >     "id": 2
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "change",
> > >> > > >     "id": 3,
> > >> > > >     "arguments": {
> > >> > > >         "device": "vnc",
> > >> > > >         "target": "password",
> > >> > > >         "arg": ""
> > >> > > >     }
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >> > > >     "execute": "query-vnc",
> > >> > > >     "id": 4
> > >> > > > }
> > >> > > > '
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > >> return
> > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > >> > > register
> > >> > > > slotnum=3
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
> > >> state
> > >> > > 1
> > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > >> > > > deregister slotnum=3
> > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > >> > > > w=0x210e8a8: deregister unregistered
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/vif-bridge online
> > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > >> script:
> > >> > > > /etc/xen/scripts/vif-bridge add
> > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > >> > > > /var/run/xen/qmp-libxl-2
> > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;qmp_capabil=
ities&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_respo=
nse: message type:<br>
&gt; &gt;&gt; return<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare=
: next qmp command: &#39;{<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;device_add&=
quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driver&quot;: &quot;xen-=
pci-passthrough&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&quot;: &quot;pci-pt-0=
3_00.0&quot;,<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;hostaddr&quot;: &quot;00=
00:03:00.0&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:454:qmp_next: Socket=
 read error:<br>
&gt; &gt;&gt; Connection<br>
&gt; &gt;&gt; &gt; &gt; reset<br>
&gt; &gt;&gt; &gt; &gt; &gt; by peer<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initi=
alize: Connection<br>
&gt; &gt;&gt; error:<br>
&gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initi=
alize: Connection<br>
&gt; &gt;&gt; error:<br>
&gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initi=
alize: Connection<br>
&gt; &gt;&gt; error:<br>
&gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:81:libxl__create_pci=
_backend: Creating pci<br>
&gt; &gt;&gt; &gt; &gt; backend<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1737:libxl__ao_pro=
gress_report: ao<br>
&gt; &gt;&gt; 0x210c360:<br>
&gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1569:libxl__ao_com=
plete: ao 0x210c360:<br>
&gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>
&gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1541:libxl__ao__de=
stroy: ao 0x210c360:<br>
&gt; &gt;&gt; &gt; &gt; destroy<br>
&gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 3214<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: total allocations=
:793 total<br>
&gt; &gt;&gt; releases:793<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: current allocatio=
ns:0 maximum<br>
&gt; &gt;&gt; allocations:4<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: cache current siz=
e:4<br>
&gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: cache hits:785 mi=
sses:4 toobig:4<br>
&gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt; &gt; ##############################################=
#############<br>
&gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm=
-0.log<br>
&gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5 (label se=
rial0)<br>
&gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to populate =
ram at 40030000<br>
&gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>
&gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 E=
DX=3D00000633<br>
&gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 E=
SP=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D=
0 II=3D0 A20=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 C=
R4=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 D=
R3=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=
=3D00001f80<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>
&gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 E=
DX=3D00000633<br>
&gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 E=
SP=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D=
0 II=3D0 A20=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ffff 00009300<br>
&gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 00008200<br>
&gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b00<br>
&gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 C=
R4=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 D=
R3=3D00000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=
=3D00001f80<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D0000000000=
000000 0000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D00000000000000000000000000000000<br>
&gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt; &gt; ##############################################=
#############<br>
&gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4.3-amd64&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_release -i -s 2&gt; /d=
ev/null || echo Debian`<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=3D&quot;quiet splas=
h&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot;&quot;<br>
&gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>
&gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;dom0_mem=3D1024M dom0=
_max_vcpus=3D1&quot;<br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt; &gt; ______________________________________________=
_<br>
&gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen=
-devel@lists.xen.org</a><br>
&gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xen.org/xen-devel" tar=
get=3D"_blank">http://lists.xen.org/xen-devel</a><br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>

--001a11339e2ed1228504f1d713e7--


--===============7898976862245331660==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7898976862245331660==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 20:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsJn-0000kt-3y; Fri, 07 Feb 2014 20:46:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBqMR-00017J-L6
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 18:41:12 +0000
Received: from [85.158.143.35:64510] by server-2.bemta-4.messagelabs.com id
	79/8A-10891-6C825F25; Fri, 07 Feb 2014 18:41:10 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391798466!4020921!1
X-Originating-IP: [209.85.212.51]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8762 invoked from network); 7 Feb 2014 18:41:07 -0000
Received: from mail-vb0-f51.google.com (HELO mail-vb0-f51.google.com)
	(209.85.212.51)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 18:41:07 -0000
Received: by mail-vb0-f51.google.com with SMTP id 11so2919896vbe.38
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 10:41:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=IHUbxjo1HnoKHrVR4ly2PhmdWxfnaAZYN291alBVVtY=;
	b=P8DbuqI9mttbKoWb4uHAw80EOvcTw8exDBIt9QrE0k7iDZdbV8llXzfXz2FAXnge+y
	JiE7NIUh58x5lzqsBal2NHcB5T02wJaXMIQWFOx6V93NKJeDioDduIzlGAQVAxW8ewt7
	9bDuUTB0sL1XJh4qyNiJZjow40FqFYDkE9OfTN3s+IeTxIYXyYok2fRknow6/J4V7NcK
	d64Oz6xr8fSjgI5ViWpLh2EhDuDtFeRx463EKI7ca5+ws03xT3e63trq1N+97993UJze
	1ge3XuZVCYnsz25Gvj5rwGcan0i16mZ29huanQmn4b0l/+1x/tN9eeXM/VOT++HxxRJ/
	rAGQ==
X-Received: by 10.52.114.99 with SMTP id jf3mr85123vdb.66.1391798466621; Fri,
	07 Feb 2014 10:41:06 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 10:40:26 -0800 (PST)
In-Reply-To: <20140207183056.GA10265@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 13:40:26 -0500
Message-ID: <CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 20:46:34 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8734806169890918992=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8734806169890918992==
Content-Type: multipart/alternative; boundary=bcaec547c9cf33b73f04f1d55595

--bcaec547c9cf33b73f04f1d55595
Content-Type: text/plain; charset=ISO-8859-1

Much. Do I need to install from src, or is there a package I can install?

Regards


On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > I did not.  I do not have the toolchain installed.  I may have time later
> > today to try the patch.  Are there any specific instructions on how to
> > patch the src, compile and install?
>
> There actually should be a new version of Xen 4.4-rcX which will have the
> fix. That might be easier for you?
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > Hi all,
> > > >
> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
> NIC)
> > > to a
> > > > HVM.  I have been attempting to resolve this issue on the xen-users
> list,
> > > > but it was advised to post this issue to this list. (Initial Message
> -
> > > >
> > >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > > >
> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > E31220 with 4GB of ram.
> > > >
> > > > The possible bug is the following:
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > ....
> > > >
> > > > I believe it may be similar to this thread
> > > >
> > >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > >
> > > >
> > > > Additional info that may be helpful is below.
> > >
> > > Did you try the patch?
> > > >
> > > > Please let me know if you need any additional information.
> > > >
> > > > Thanks in advance for any help provided!
> > > > Regards
> > > >
> > > > ###########################################################
> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > ###########################################################
> > > > # Configuration file for Xen HVM
> > > >
> > > > # HVM Name (as appears in 'xl list')
> > > > name="ubuntu-hvm-0"
> > > > # HVM Build settings (+ hardware)
> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > builder='hvm'
> > > > device_model='qemu-dm'
> > > > memory=1024
> > > > vcpus=2
> > > >
> > > > # Virtual Interface
> > > > # Network bridge to USB NIC
> > > > vif=['bridge=xenbr0']
> > > >
> > > > ################### PCI PASSTHROUGH ###################
> > > > # PCI Permissive mode toggle
> > > > #pci_permissive=1
> > > >
> > > > # All PCI Devices
> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> '05:00.1']
> > > >
> > > > # First two ports on Intel 4x1G NIC
> > > > #pci=['03:00.0','03:00.1']
> > > >
> > > > # Last two ports on Intel 4x1G NIC
> > > > #pci=['04:00.0', '04:00.1']
> > > >
> > > > # All ports on Intel 4x1G NIC
> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > >
> > > > # Broadcom 2x1G NIC
> > > > #pci=['05:00.0', '05:00.1']
> > > > ################### PCI PASSTHROUGH ###################
> > > >
> > > > # HVM Disks
> > > > # Hard disk only
> > > > # Boot from HDD first ('c')
> > > > boot="c"
> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > >
> > > > # Hard disk with ISO
> > > > # Boot from ISO first ('d')
> > > > #boot="d"
> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > >
> > > > # ACPI Enable
> > > > acpi=1
> > > > # HVM Event Modes
> > > > on_poweroff='destroy'
> > > > on_reboot='restart'
> > > > on_crash='restart'
> > > >
> > > > # Serial Console Configuration (Xen Console)
> > > > sdl=0
> > > > serial='pty'
> > > >
> > > > # VNC Configuration
> > > > # Only reachable from localhost
> > > > vnc=1
> > > > vnclisten="0.0.0.0"
> > > > vncpasswd=""
> > > >
> > > > ###########################################################
> > > > Copied from the xen-users list
> > > > ###########################################################
> > > >
> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > >
> > > >
> > > > I rebooted the Host and then ran a script that assigned the pci
> > > > devices to pciback.  The output looks like:
> > > > root@fiat:~# ./dev_mgmt.sh
> > > > Loading Kernel Module 'xen-pciback'
> > > > Calling function pciback_dev for:
> > > > PCI DEVICE 0000:03:00.0
> > > > Unbinding 0000:03:00.0 from igb
> > > > Binding 0000:03:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:03:00.1
> > > > Unbinding 0000:03:00.1 from igb
> > > > Binding 0000:03:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.0
> > > > Unbinding 0000:04:00.0 from igb
> > > > Binding 0000:04:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.1
> > > > Unbinding 0000:04:00.1 from igb
> > > > Binding 0000:04:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.0
> > > > Unbinding 0000:05:00.0 from bnx2
> > > > Binding 0000:05:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.1
> > > > Unbinding 0000:05:00.1 from bnx2
> > > > Binding 0000:05:00.1 to pciback
> > > >
> > > > Listing PCI Devices Available to Xen
> > > > 0000:03:00.0
> > > > 0000:03:00.1
> > > > 0000:04:00.0
> > > > 0000:04:00.1
> > > > 0000:05:00.0
> > > > 0000:05:00.1
> > > >
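[Editorial note: the `dev_mgmt.sh` script itself is not included in the thread. Below is a hedged sketch, in Python, of the sysfs unbind/bind sequence such a script typically performs; the BDFs are taken from the output above, and the driver directory name `pciback` matches the "Binding ... to pciback" lines. The `new_slot` interface is the one the xen-pciback driver has historically exposed; details can vary by kernel version.]

```python
import os

SYSFS_PCI = "/sys/bus/pci"
PCIBACK = os.path.join(SYSFS_PCI, "drivers", "pciback")

def driver_path(bdf):
    """Return the sysfs directory of the driver currently bound to a
    PCI device (e.g. igb or bnx2 above), or None if it is unbound."""
    link = os.path.join(SYSFS_PCI, "devices", bdf, "driver")
    return os.path.realpath(link) if os.path.islink(link) else None

def rebind_to_pciback(bdf):
    """Unbind `bdf` (e.g. '0000:03:00.0') from its current driver and
    hand it to xen-pciback, mirroring the Unbinding/Binding messages
    in the script output above. Must run as root in dom0."""
    current = driver_path(bdf)
    if current and not current.endswith("/pciback"):
        with open(os.path.join(current, "unbind"), "w") as f:
            f.write(bdf)
    # Tell pciback it may own this slot, then bind the device to it.
    with open(os.path.join(PCIBACK, "new_slot"), "w") as f:
        f.write(bdf)
    with open(os.path.join(PCIBACK, "bind"), "w") as f:
        f.write(bdf)
```

Usage (as root in dom0) would be one `rebind_to_pciback("0000:03:00.0")` call per device; on Xen 4.3 the `xl pci-assignable-add` / `xl pci-assignable-list` commands perform essentially the same steps.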
> > > > ###########################################################
> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > WARNING: ignoring device_model directive.
> > > > WARNING: Use "device_model_override" instead if you really want a
> > > > non-default device_model
> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
> create:
> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=unknown
> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > > vdev=hda, using backend phy
> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > > bootloader
> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > > domain, skipping bootloader
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c728: deregister unregistered
> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> NUMA
> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > free_memkb=2980
> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > > candidate
> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > >   Loader:        0000000000100000->00000000001a69a4
> > > >   Modules:       0000000000000000->0000000000000000
> > > >   TOTAL:         0000000000000000->000000003f800000
> > > >   ENTRY ADDRESS: 0000000000100608
> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > >   4KB PAGES: 0x0000000000000200
> > > >   2MB PAGES: 0x00000000000001fb
> > > >   1GB PAGES: 0x0000000000000000
> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> 0x7f022c81682d
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=phy
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > register slotnum=3
> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > > inprogress: poller=0x210c3c0, flags=i
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > > state 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script:
> > > > /etc/xen/scripts/block add
> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > > device-model
> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > /usr/bin/qemu-system-i386
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > chardev=libxl-cmd,mode=control
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
> ,to=99
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> isa-fdc.driveA=
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> vga.vram_size_mb=8
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >
> > >
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > register
> > > > slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960: deregister unregistered
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-chardev",
> > > >     "id": 2
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "change",
> > > >     "id": 3,
> > > >     "arguments": {
> > > >         "device": "vnc",
> > > >         "target": "password",
> > > >         "arg": ""
> > > >     }
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-vnc",
> > > >     "id": 4
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > register
> > > > slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
> state
> > > 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script:
> > > > /etc/xen/scripts/vif-bridge online
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script:
> > > > /etc/xen/scripts/vif-bridge add
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "device_add",
> > > >     "id": 2,
> > > >     "arguments": {
> > > >         "driver": "xen-pci-passthrough",
> > > >         "id": "pci-pt-03_00.0",
> > > >         "hostaddr": "0000:03:00.0"
> > > >     }
> > > > }
> > > > '
> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> > > > Daemon running with PID 3214
> > > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > > > xc: debug: hypercall buffer: cache current size:4
> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > > >
> > > > ###########################################################
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > CPU #0:
> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > ES =0000 00000000 0000ffff 00009300
> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > SS =0000 00000000 0000ffff 00009300
> > > > DS =0000 00000000 0000ffff 00009300
> > > > FS =0000 00000000 0000ffff 00009300
> > > > GS =0000 00000000 0000ffff 00009300
> > > > LDT=0000 00000000 0000ffff 00008200
> > > > TR =0000 00000000 0000ffff 00008b00
> > > > GDT=     00000000 0000ffff
> > > > IDT=     00000000 0000ffff
> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > DR6=ffff0ff0 DR7=00000400
> > > > EFER=0000000000000000
> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > XMM00=00000000000000000000000000000000
> > > > XMM01=00000000000000000000000000000000
> > > > XMM02=00000000000000000000000000000000
> > > > XMM03=00000000000000000000000000000000
> > > > XMM04=00000000000000000000000000000000
> > > > XMM05=00000000000000000000000000000000
> > > > XMM06=00000000000000000000000000000000
> > > > XMM07=00000000000000000000000000000000
> > > > CPU #1:
> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > ES =0000 00000000 0000ffff 00009300
> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > SS =0000 00000000 0000ffff 00009300
> > > > DS =0000 00000000 0000ffff 00009300
> > > > FS =0000 00000000 0000ffff 00009300
> > > > GS =0000 00000000 0000ffff 00009300
> > > > LDT=0000 00000000 0000ffff 00008200
> > > > TR =0000 00000000 0000ffff 00008b00
> > > > GDT=     00000000 0000ffff
> > > > IDT=     00000000 0000ffff
> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > DR6=ffff0ff0 DR7=00000400
> > > > EFER=0000000000000000
> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > XMM00=00000000000000000000000000000000
> > > > XMM01=00000000000000000000000000000000
> > > > XMM02=00000000000000000000000000000000
> > > > XMM03=00000000000000000000000000000000
> > > > XMM04=00000000000000000000000000000000
> > > > XMM05=00000000000000000000000000000000
> > > > XMM06=00000000000000000000000000000000
> > > > XMM07=00000000000000000000000000000000
> > > >
> > > > ###########################################################
> > > > /etc/default/grub
> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > GRUB_TIMEOUT=10
> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > GRUB_CMDLINE_LINUX=""
> > > > # biosdevname=0
> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xen.org
> > > > http://lists.xen.org/xen-devel
> > >
> > >
>
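As background on the "Unbinding ... from igb / Binding ... to pciback" output that the poster's dev_mgmt.sh script reports in this thread: a helper of that kind typically boils down to the standard sysfs unbind/bind sequence for xen-pciback. The sketch below is hypothetical, not the poster's actual script; `SYSFS` is parameterised so the control flow can be exercised against a scratch directory (on a real host it would be /sys, where the pciback `new_slot`/`bind` attributes are driver-provided files, not regular ones, and require root).

```shell
# Parameterise the sysfs root so this can run against a scratch directory.
SYSFS="${SYSFS:-$(mktemp -d)}"
mkdir -p "$SYSFS/bus/pci/drivers/igb" "$SYSFS/bus/pci/drivers/pciback"

# One device's worth of work: detach from its current driver, then hand the
# function to xen-pciback. BDF and driver names are taken from the log above.
pciback_dev() {
    bdf="$1"   # e.g. 0000:03:00.0
    old="$2"   # current driver, e.g. igb
    echo "Unbinding $bdf from $old"
    echo "$bdf" > "$SYSFS/bus/pci/drivers/$old/unbind"
    echo "Binding $bdf to pciback"
    echo "$bdf" > "$SYSFS/bus/pci/drivers/pciback/new_slot"
    echo "$bdf" > "$SYSFS/bus/pci/drivers/pciback/bind"
}

pciback_dev 0000:03:00.0 igb
```

After binding, `xl pci-assignable-list` is the usual way to confirm the device shows up, as the "Listing PCI Devices Available to Xen" section of the script output does.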

Much. Do I need to install from src or is there a package I can install?

Regards

On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > I did not.  I do not have the toolchain installed.  I may have time later
> > today to try the patch.  Are there any specific instructions on how to
> > patch the src, compile and install?
>
> There actually should be a new version of Xen 4.4-rcX which will have the
> fix. That might be easier for you?
> >
> > Regards
> >
> >
> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > Hi all,
> > > >
> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC) to a
> > > > HVM.  I have been attempting to resolve this issue on the xen-users list,
> > > > but it was advised to post this issue to this list. (Initial Message -
> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > > >
> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > E31220 with 4GB of ram.
> > > >
> > > > The possible bug is the following:
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > ....
> > > >
> > > > I believe it may be similar to this thread
> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > >
> > > >
> > > > Additional info that may be helpful is below.
> > >
> > > Did you try the patch?
> > > >
> > > > Please let me know if you need any additional information.
> > > >
> > > > Thanks in advance for any help provided!
> > > > Regards
> > > >
> > > > ###########################################################
> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > ###########################################################
> > > > # Configuration file for Xen HVM
> > > >
> > > > # HVM Name (as appears in 'xl list')
> > > > name="ubuntu-hvm-0"
> > > > # HVM Build settings (+ hardware)
> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > builder='hvm'
> > > > device_model='qemu-dm'
> > > > memory=1024
> > > > vcpus=2
> > > >
> > > > # Virtual Interface
> > > > # Network bridge to USB NIC
> > > > vif=['bridge=xenbr0']
> > > >
> > > > ################### PCI PASSTHROUGH ###################
> > > > # PCI Permissive mode toggle
> > > > #pci_permissive=1
> > > >
> > > > # All PCI Devices
> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > >
> > > > # First two ports on Intel 4x1G NIC
> > > > #pci=['03:00.0','03:00.1']
> > > >
> > > > # Last two ports on Intel 4x1G NIC
> > > > #pci=['04:00.0', '04:00.1']
> > > >
> > > > # All ports on Intel 4x1G NIC
> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > >
> > > > # Broadcom 2x1G NIC
> > > > #pci=['05:00.0', '05:00.1']
> > > > ################### PCI PASSTHROUGH ###################
> > > >
> > > > # HVM Disks
> > > > # Hard disk only
> > > > # Boot from HDD first ('c')
> > > > boot="c"
> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > >
> > > > # Hard disk with ISO
> > > > # Boot from ISO first ('d')
> > > > #boot="d"
> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > >
> > > > # ACPI Enable
> > > > acpi=1
> > > > # HVM Event Modes
> > > > on_poweroff='destroy'
> > > > on_reboot='restart'
> > > > on_crash='restart'
> > > >
> > > > # Serial Console Configuration (Xen Console)
> > > > sdl=0
> > > > serial='pty'
> > > >
> > > > # VNC Configuration
> > > > # Only reachable from localhost
> > > > vnc=1
> > > > vnclisten="0.0.0.0"
> > > > vncpasswd=""
> > > >
> > > > ###########################################################
> > > > Copied for xen-users list
> > > > ###########################################################
> > > >
> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > >
> > > >
> > > > I rebooted the Host.  I assigned pci devices to pciback. The output
> > > > looks like:
> > > > root@fiat:~# ./dev_mgmt.sh
> > > > Loading Kernel Module 'xen-pciback'
> > > > Calling function pciback_dev for:
> > > > PCI DEVICE 0000:03:00.0
> > > > Unbinding 0000:03:00.0 from igb
> > > > Binding 0000:03:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:03:00.1
> > > > Unbinding 0000:03:00.1 from igb
> > > > Binding 0000:03:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.0
> > > > Unbinding 0000:04:00.0 from igb
> > > > Binding 0000:04:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:04:00.1
> > > > Unbinding 0000:04:00.1 from igb
> > > > Binding 0000:04:00.1 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.0
> > > > Unbinding 0000:05:00.0 from bnx2
> > > > Binding 0000:05:00.0 to pciback
> > > >
> > > > PCI DEVICE 0000:05:00.1
> > > > Unbinding 0000:05:00.1 from bnx2
> > > > Binding 0000:05:00.1 to pciback
> > > >
> > > > Listing PCI Devices Available to Xen
> > > > 0000:03:00.0
> > > > 0000:03:00.1
> > > > 0000:04:00.0
> > > > 0000:04:00.1
> > > > 0000:05:00.0
> > > > 0000:05:00.1
> > > >
> > > > ###########################################################
> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > WARNING: ignoring device_model directive.
> > > > WARNING: Use "device_model_override" instead if you really want a
> > > > non-default device_model
> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=unknown
> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > > vdev=hda, using backend phy
> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > > domain, skipping bootloader
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c728: deregister unregistered
> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > free_memkb=2980
> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > >   Loader:        0000000000100000->00000000001a69a4
> > > >   Modules:       0000000000000000->0000000000000000
> > > >   TOTAL:         0000000000000000->000000003f800000
> > > >   ENTRY ADDRESS: 0000000000100608
> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > >   4KB PAGES: 0x0000000000000200
> > > >   2MB PAGES: 0x00000000000001fb
> > > >   1GB PAGES: 0x0000000000000000
> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > vdev=hda spec.backend=phy
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > register slotnum=3
> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > > inprogress: poller=0x210c3c0, flags=i
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x2112f48: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > /etc/xen/scripts/block add
> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > /usr/bin/qemu-system-i386
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > chardev=libxl-cmd,mode=control
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > > register slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > epath=/local/domain/0/device-model/2/state
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210c960: deregister unregistered
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-chardev",
> > > >     "id": 2
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "change",
> > > >     "id": 3,
> > > >     "arguments": {
> > > >         "device": "vnc",
> > > >         "target": "password",
> > > >         "arg": ""
> > > >     }
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "query-vnc",
> > > >     "id": 4
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > > register slotnum=3
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > > deregister slotnum=3
> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > w=0x210e8a8: deregister unregistered
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > /etc/xen/scripts/vif-bridge online
> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > /etc/xen/scripts/vif-bridge add
> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > /var/run/xen/qmp-libxl-2
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "qmp_capabilities",
> > > >     "id": 1
> > > > }
> > > > '
> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > >     "execute": "device_add",
> > > >     "id": 2,
> > > >     "arguments": {
> > > >         "driver": "xen-pci-passthrough",
> > > >         "id": "pci-pt-03_00.0",
> > > >         "hostaddr": "0000:03:00.0"
> > > >     }
> > > > }
> > > > '
> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> > > > progress report: ignored
> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > > > complete, rc=0
> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> > > > Daemon running with PID 3214
> > > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > > > xc: debug: hypercall buffer: cache current size:4
> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > > >
> > > > ###########################################################
> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > CPU #0:
> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > ES =0000 00000000 0000ffff 00009300
> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > SS =0000 00000000 0000ffff 00009300
> > > > DS =0000 00000000 0000ffff 00009300
> > > > FS =0000 00000000 0000ffff 00009300
> > > > GS =0000 00000000 0000ffff 00009300
> > > > LDT=0000 00000000 0000ffff 00008200
> > > > TR =0000 00000000 0000ffff 00008b00
> > > > GDT=     00000000 0000ffff
> > > > IDT=     00000000 0000ffff
> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > DR6=ffff0ff0 DR7=00000400
> > > > EFER=0000000000000000
> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > XMM00=00000000000000000000000000000000
> > > > XMM01=00000000000000000000000000000000
> > > > XMM02=00000000000000000000000000000000
> > > > XMM03=00000000000000000000000000000000
> > > > XMM04=00000000000000000000000000000000
> > > > XMM05=00000000000000000000000000000000
> > > > XMM06=00000000000000000000000000000000
> > > > XMM07=00000000000000000000000000000000
> > > > CPU #1:
> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > ES =0000 00000000 0000ffff 00009300
> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > SS =0000 00000000 0000ffff 00009300
> > > > DS =0000 00000000 0000ffff 00009300
> > > > FS =0000 00000000 0000ffff 00009300
> > > > GS =0000 00000000 0000ffff 00009300
> > > > LDT=0000 00000000 0000ffff 00008200
> > > > TR =0000 00000000 0000ffff 00008b00
> > > > GDT=     00000000 0000ffff
> > > > IDT=     00000000 0000ffff
> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > DR6=ffff0ff0 DR7=00000400
> > > > EFER=0000000000000000
> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > XMM00=00000000000000000000000000000000
> > > > XMM01=00000000000000000000000000000000
> > > > XMM02=00000000000000000000000000000000
> > > > XMM03=00000000000000000000000000000000
> > > > XMM04=00000000000000000000000000000000
> > > > XMM05=00000000000000000000000000000000
> > > > XMM06=00000000000000000000000000000000
> > > > XMM07=00000000000000000000000000000000
> > > >
> > > > ###########################################################
> > > > /etc/default/grub
> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > GRUB_TIMEOUT=10
> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > GRUB_CMDLINE_LINUX=""
> > > > # biosdevname=0
> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xen.org
> > > > http://lists.xen.org/xen-devel
> > >
> > >


--===============8734806169890918992==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8734806169890918992==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 20:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 20:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsJn-0000l0-IH; Fri, 07 Feb 2014 20:46:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBsB5-0000Db-40
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 20:37:35 +0000
Received: from [85.158.139.211:55558] by server-6.bemta-5.messagelabs.com id
	6B/19-14342-E0445F25; Fri, 07 Feb 2014 20:37:34 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391805450!2474395!1
X-Originating-IP: [209.85.212.50]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4805 invoked from network); 7 Feb 2014 20:37:31 -0000
Received: from mail-vb0-f50.google.com (HELO mail-vb0-f50.google.com)
	(209.85.212.50)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 20:37:31 -0000
Received: by mail-vb0-f50.google.com with SMTP id w8so3104439vbj.9
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 12:37:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=AT6mM8K9ySEpCrXiUQdzGHOi+Rg8kX9ipICAaR8+QQM=;
	b=lSnzgy5G41ChkAFZ726dbgPxTDwb8C9KN0g2agQiXhyA2FdzMgmPNj1arXQxTUkZLA
	7tZTagq3DWQHBOOGYg90yi0Td1csZPmxyUHAxGCQd8v80w94YP0geH1/XcONicSa9tX4
	RfGvNbswBUFc6e87Vbwdm2DGK6iQvSbLAWrhRf3fh1z6WigYeHcugjHzjj0m+KVkrCUW
	SoXZQ5UkADbCn6lQ4UJIfcklnkB2Wovv2VEKFtQ2h7tdd/egm/c6GyOPqB/mdRocenvP
	h/w2SFvKdEbePbODH2nhLJ5Corswu48xLj1fmHDlU3k02Jsc6FnxUvBXxN4Oa5YsBu7r
	+NQg==
X-Received: by 10.58.155.162 with SMTP id vx2mr29396veb.46.1391805449936; Fri,
	07 Feb 2014 12:37:29 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 12:36:49 -0800 (PST)
In-Reply-To: <CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 15:36:49 -0500
Message-ID: <CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 20:46:34 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8441876354739994021=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8441876354739994021==
Content-Type: multipart/alternative; boundary=047d7b66f2f770a4fb04f1d6f5d3

--047d7b66f2f770a4fb04f1d6f5d3
Content-Type: text/plain; charset=ISO-8859-1

I was able to compile and install Xen 4.4 RC3 on my host; however, I am
getting the following error:

root@fiat:~/git/xen# xl list
xc: error: Could not obtain handle on privileged command interface (2 = No
such file or directory): Internal error
libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
file or directory
cannot init xl context

I've searched Google for this and an article comes up, but it does not
appear to be the same issue (as far as I can tell).  Running any xl command
generates a similar error.

What can I do to fix this?

Regards
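
For context, this "privileged command interface" error generally means the
toolstack cannot open the privcmd device. A minimal diagnostic sketch
follows; the device paths and the xencommons service name are common
defaults, assumed here rather than taken from this thread:

```shell
# Hedged sketch: check whether the Xen privcmd node the toolstack needs
# is present, and suggest the usual remedies if it is not.
check_privcmd() {
    if [ -e /proc/xen/privcmd ] || [ -e /dev/xen/privcmd ]; then
        echo "privcmd present: xl should be able to reach the hypervisor"
    else
        echo "privcmd missing: try 'modprobe xen-privcmd' and starting"
        echo "the xencommons service, then re-run 'xl list'"
    fi
}

check_privcmd
```

On a freshly built-from-source install, forgetting to install and start the
init scripts (xencommons) is a common cause of exactly this symptom.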


On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
mikeneiderhauser@gmail.com> wrote:

> Much. Do I need to install from source, or is there a package I can install?
>
> Regards
>
>
> On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
>
>> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
>> > I did not.  I do not have the toolchain installed.  I may have time
>> > later today to try the patch.  Are there any specific instructions on
>> > how to patch the source, compile, and install?
>>
>> There actually should be a new version of Xen 4.4-rcX which will have the
>> fix. That might be easier for you?
>> >
>> > Regards
>> >
>> >
>> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
>> > konrad.wilk@oracle.com> wrote:
>> >
>> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
>> > > > Hi all,
>> > > >
>> > > > I am attempting to do PCI passthrough of an Intel ET card (4x1G
>> > > > NIC) to an HVM.  I have been trying to resolve this issue on the
>> > > > xen-users list, but was advised to post it to this list instead.
>> > > > (Initial message -
>> > > >
>> > >
>> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>> )
>> > > >
>> > > > The machine I am using as the host is a Dell PowerEdge server
>> > > > with a Xeon E3-1220 and 4 GB of RAM.
>> > > >
>> > > > The possible bug is the following:
>> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > ....
>> > > >
>> > > > I believe it may be similar to this thread
>> > > >
>> > >
>> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>> > > >
>> > > >
>> > > > Additional info that may be helpful is below.
>> > >
>> > > Did you try the patch?
>> > > >
>> > > > Please let me know if you need any additional information.
>> > > >
>> > > > Thanks in advance for any help provided!
>> > > > Regards
>> > > >
>> > > > ###########################################################
>> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>> > > > ###########################################################
>> > > > # Configuration file for Xen HVM
>> > > >
>> > > > # HVM Name (as appears in 'xl list')
>> > > > name="ubuntu-hvm-0"
>> > > > # HVM Build settings (+ hardware)
>> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>> > > > builder='hvm'
>> > > > device_model='qemu-dm'
>> > > > memory=1024
>> > > > vcpus=2
>> > > >
>> > > > # Virtual Interface
>> > > > # Network bridge to USB NIC
>> > > > vif=['bridge=xenbr0']
>> > > >
>> > > > ################### PCI PASSTHROUGH ###################
>> > > > # PCI Permissive mode toggle
>> > > > #pci_permissive=1
>> > > >
>> > > > # All PCI Devices
>> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
>> '05:00.1']
>> > > >
>> > > > # First two ports on Intel 4x1G NIC
>> > > > #pci=['03:00.0','03:00.1']
>> > > >
>> > > > # Last two ports on Intel 4x1G NIC
>> > > > #pci=['04:00.0', '04:00.1']
>> > > >
>> > > > # All ports on Intel 4x1G NIC
>> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>> > > >
>> > > > # Broadcom 2x1G NIC
>> > > > #pci=['05:00.0', '05:00.1']
>> > > > ################### PCI PASSTHROUGH ###################
>> > > >
>> > > > # HVM Disks
>> > > > # Hard disk only
>> > > > # Boot from HDD first ('c')
>> > > > boot="c"
>> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>> > > >
>> > > > # Hard disk with ISO
>> > > > # Boot from ISO first ('d')
>> > > > #boot="d"
>> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
>> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>> > > >
>> > > > # ACPI Enable
>> > > > acpi=1
>> > > > # HVM Event Modes
>> > > > on_poweroff='destroy'
>> > > > on_reboot='restart'
>> > > > on_crash='restart'
>> > > >
>> > > > # Serial Console Configuration (Xen Console)
>> > > > sdl=0
>> > > > serial='pty'
>> > > >
>> > > > # VNC Configuration
>> > > > # Listening on all interfaces (0.0.0.0), not just localhost
>> > > > vnc=1
>> > > > vnclisten="0.0.0.0"
>> > > > vncpasswd=""
>> > > >
>> > > > ###########################################################
>> > > > Copied from the xen-users list
>> > > > ###########################################################
>> > > >
>> > > > It appears that it cannot obtain the RAM mapping for this PCI
>> device.
>> > > >
>> > > >
>> > > > I rebooted the host and assigned the PCI devices to pciback.
>> > > > The output looks like:
>> > > > root@fiat:~# ./dev_mgmt.sh
>> > > > Loading Kernel Module 'xen-pciback'
>> > > > Calling function pciback_dev for:
>> > > > PCI DEVICE 0000:03:00.0
>> > > > Unbinding 0000:03:00.0 from igb
>> > > > Binding 0000:03:00.0 to pciback
>> > > >
>> > > > PCI DEVICE 0000:03:00.1
>> > > > Unbinding 0000:03:00.1 from igb
>> > > > Binding 0000:03:00.1 to pciback
>> > > >
>> > > > PCI DEVICE 0000:04:00.0
>> > > > Unbinding 0000:04:00.0 from igb
>> > > > Binding 0000:04:00.0 to pciback
>> > > >
>> > > > PCI DEVICE 0000:04:00.1
>> > > > Unbinding 0000:04:00.1 from igb
>> > > > Binding 0000:04:00.1 to pciback
>> > > >
>> > > > PCI DEVICE 0000:05:00.0
>> > > > Unbinding 0000:05:00.0 from bnx2
>> > > > Binding 0000:05:00.0 to pciback
>> > > >
>> > > > PCI DEVICE 0000:05:00.1
>> > > > Unbinding 0000:05:00.1 from bnx2
>> > > > Binding 0000:05:00.1 to pciback
>> > > >
>> > > > Listing PCI Devices Available to Xen
>> > > > 0000:03:00.0
>> > > > 0000:03:00.1
>> > > > 0000:04:00.0
>> > > > 0000:04:00.1
>> > > > 0000:05:00.0
>> > > > 0000:05:00.1
>> > > >
>> > > > ###########################################################
>> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
>> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>> > > > WARNING: ignoring device_model directive.
>> > > > WARNING: Use "device_model_override" instead if you really want a
>> > > > non-default device_model
>> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
>> create:
>> > > > how=(nil) callback=(nil) poller=0x210c3c0
>> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
>> Disk
>> > > > vdev=hda spec.backend=unknown
>> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
>> Disk
>> > > > vdev=hda, using backend phy
>> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
>> > > bootloader
>> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
>> > > > domain, skipping bootloader
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210c728: deregister unregistered
>> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
>> NUMA
>> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
>> > > > free_memkb=2980
>> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
>> > > candidate
>> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
>> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
>> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>> > > >   Loader:        0000000000100000->00000000001a69a4
>> > > >   Modules:       0000000000000000->0000000000000000
>> > > >   TOTAL:         0000000000000000->000000003f800000
>> > > >   ENTRY ADDRESS: 0000000000100608
>> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>> > > >   4KB PAGES: 0x0000000000000200
>> > > >   2MB PAGES: 0x00000000000001fb
>> > > >   1GB PAGES: 0x0000000000000000
>> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
>> 0x7f022c81682d
>> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
>> Disk
>> > > > vdev=hda spec.backend=phy
>> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
>> > > > register slotnum=3
>> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
>> > > > inprogress: poller=0x210c3c0, flags=i
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
>> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
>> > > > epath=/local/domain/0/backend/vbd/2/768/state
>> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
>> > > state 1
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
>> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
>> > > > epath=/local/domain/0/backend/vbd/2/768/state
>> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
>> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
>> > > > deregister slotnum=3
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x2112f48: deregister unregistered
>> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script:
>> > > > /etc/xen/scripts/block add
>> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
>> > > device-model
>> > > > /usr/bin/qemu-system-i386 with arguments:
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > /usr/bin/qemu-system-i386
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > chardev=libxl-cmd,mode=control
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0
>> ,to=99
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> isa-fdc.driveA=
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> vga.vram_size_mb=8
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
>> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
>> > > >
>> > >
>> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
>> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
>> > > register
>> > > > slotnum=3
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
>> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
>> > > > epath=/local/domain/0/device-model/2/state
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
>> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
>> > > > epath=/local/domain/0/device-model/2/state
>> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
>> > > > deregister slotnum=3
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210c960: deregister unregistered
>> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
>> > > > /var/run/xen/qmp-libxl-2
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "qmp_capabilities",
>> > > >     "id": 1
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "query-chardev",
>> > > >     "id": 2
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "change",
>> > > >     "id": 3,
>> > > >     "arguments": {
>> > > >         "device": "vnc",
>> > > >         "target": "password",
>> > > >         "arg": ""
>> > > >     }
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "query-vnc",
>> > > >     "id": 4
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
>> > > register
>> > > > slotnum=3
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
>> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
>> > > > epath=/local/domain/0/backend/vif/2/0/state
>> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting
>> state
>> > > 1
>> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
>> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
>> > > > epath=/local/domain/0/backend/vif/2/0/state
>> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
>> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
>> > > > deregister slotnum=3
>> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > w=0x210e8a8: deregister unregistered
>> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script:
>> > > > /etc/xen/scripts/vif-bridge online
>> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script:
>> > > > /etc/xen/scripts/vif-bridge add
>> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
>> > > > /var/run/xen/qmp-libxl-2
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "qmp_capabilities",
>> > > >     "id": 1
>> > > > }
>> > > > '
>> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
>> return
>> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > >     "execute": "device_add",
>> > > >     "id": 2,
>> > > >     "arguments": {
>> > > >         "driver": "xen-pci-passthrough",
>> > > >         "id": "pci-pt-03_00.0",
>> > > >         "hostaddr": "0000:03:00.0"
>> > > >     }
>> > > > }
>> > > > '
>> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
>> Connection
>> > > reset
>> > > > by peer
>> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
>> error:
>> > > > Connection refused
>> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
>> error:
>> > > > Connection refused
>> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
>> error:
>> > > > Connection refused
>> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
>> > > backend
>> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
>> 0x210c360:
>> > > > progress report: ignored
>> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
>> > > > complete, rc=0
>> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
>> > > destroy
>> > > > Daemon running with PID 3214
>> > > > xc: debug: hypercall buffer: total allocations:793 total
>> releases:793
>> > > > xc: debug: hypercall buffer: current allocations:0 maximum
>> allocations:4
>> > > > xc: debug: hypercall buffer: cache current size:4
>> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
>> > > >
>> > > > ###########################################################
>> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > CPU #0:
>> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> > > > ES =0000 00000000 0000ffff 00009300
>> > > > CS =f000 ffff0000 0000ffff 00009b00
>> > > > SS =0000 00000000 0000ffff 00009300
>> > > > DS =0000 00000000 0000ffff 00009300
>> > > > FS =0000 00000000 0000ffff 00009300
>> > > > GS =0000 00000000 0000ffff 00009300
>> > > > LDT=0000 00000000 0000ffff 00008200
>> > > > TR =0000 00000000 0000ffff 00008b00
>> > > > GDT=     00000000 0000ffff
>> > > > IDT=     00000000 0000ffff
>> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> > > > DR6=ffff0ff0 DR7=00000400
>> > > > EFER=0000000000000000
>> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> > > > XMM00=00000000000000000000000000000000
>> > > > XMM01=00000000000000000000000000000000
>> > > > XMM02=00000000000000000000000000000000
>> > > > XMM03=00000000000000000000000000000000
>> > > > XMM04=00000000000000000000000000000000
>> > > > XMM05=00000000000000000000000000000000
>> > > > XMM06=00000000000000000000000000000000
>> > > > XMM07=00000000000000000000000000000000
>> > > > CPU #1:
>> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> > > > ES =0000 00000000 0000ffff 00009300
>> > > > CS =f000 ffff0000 0000ffff 00009b00
>> > > > SS =0000 00000000 0000ffff 00009300
>> > > > DS =0000 00000000 0000ffff 00009300
>> > > > FS =0000 00000000 0000ffff 00009300
>> > > > GS =0000 00000000 0000ffff 00009300
>> > > > LDT=0000 00000000 0000ffff 00008200
>> > > > TR =0000 00000000 0000ffff 00008b00
>> > > > GDT=     00000000 0000ffff
>> > > > IDT=     00000000 0000ffff
>> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> > > > DR6=ffff0ff0 DR7=00000400
>> > > > EFER=0000000000000000
>> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> > > > XMM00=00000000000000000000000000000000
>> > > > XMM01=00000000000000000000000000000000
>> > > > XMM02=00000000000000000000000000000000
>> > > > XMM03=00000000000000000000000000000000
>> > > > XMM04=00000000000000000000000000000000
>> > > > XMM05=00000000000000000000000000000000
>> > > > XMM06=00000000000000000000000000000000
>> > > > XMM07=00000000000000000000000000000000
>> > > >
>> > > > ###########################################################
>> > > > /etc/default/grub
>> > > > GRUB_DEFAULT="Xen 4.3-amd64"
>> > > > GRUB_HIDDEN_TIMEOUT=0
>> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
>> > > > GRUB_TIMEOUT=10
>> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
>> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
>> > > > GRUB_CMDLINE_LINUX=""
>> > > > # biosdevname=0
>> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>> > >
>> > > > _______________________________________________
>> > > > Xen-devel mailing list
>> > > > Xen-devel@lists.xen.org
>> > > > http://lists.xen.org/xen-devel
>> > >
>> > >
>>
>
>
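
The unbind/bind sequence shown in the quoted dev_mgmt.sh log corresponds
roughly to the standard xen-pciback sysfs interface. A sketch follows; the
helper name and the SYSFS_ROOT override are illustrative (added so the
sequence can be exercised against a scratch directory), and the real script
in the thread was not posted:

```shell
# Hedged sketch of a dev_mgmt.sh-style rebind to xen-pciback.
# SYSFS_ROOT is an illustrative knob, not part of the real sysfs ABI.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

bind_to_pciback() {
    bdf=$1
    # Detach the device from its current driver (e.g. igb or bnx2).
    echo "$bdf" > "$SYSFS_ROOT/bus/pci/devices/$bdf/driver/unbind"
    # Make the slot known to pciback, then bind the device to it.
    echo "$bdf" > "$SYSFS_ROOT/bus/pci/drivers/pciback/new_slot"
    echo "$bdf" > "$SYSFS_ROOT/bus/pci/drivers/pciback/bind"
}
```

On a real host this must run as root with the xen-pciback module loaded;
`xl pci-assignable-list` then confirms which devices Xen can hand out.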

--047d7b66f2f770a4fb04f1d6f5d3
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">I was able to compile and install xen4.4 RC3 on my host, h=
owever I am getting the error:<div><br></div><div><div>root@fiat:~/git/xen#=
 xl list</div><div>xc: error: Could not obtain handle on privileged command=
 interface (2 =3D No such file or directory): Internal error</div>

<div>libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No=
 such file or directory</div><div>cannot init xl context</div></div><div><b=
r></div><div>I&#39;ve google searched for this and an article appears, but =
is not the same (as far as I can tell). =A0Running any xl command generates=
 a similar error.</div>

<div><br></div><div>What can I do to fix this?</div><div><br></div><div>Reg=
ards</div></div><div class=3D"gmail_extra"><br><br><div class=3D"gmail_quot=
e">On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <span dir=3D"ltr">&lt;=
<a href=3D"mailto:mikeneiderhauser@gmail.com" target=3D"_blank">mikeneiderh=
auser@gmail.com</a>&gt;</span> wrote:<br>

<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div dir=3D"ltr">Much. Do I need to install =
from src or is there a package I can install.<div><br></div><div>Regards</d=
iv>

</div><div class=3D"HOEnZb"><div class=3D"h5"><div class=3D"gmail_extra"><b=
r><br><div class=3D"gmail_quote">On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rze=
szutek Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad.wilk@oracle.com"=
 target=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span> wrote:<br>


<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div>On Fri, Feb 07, 2014 at 10:53:22AM -050=
0, Mike Neiderhauser wrote:<br>
&gt; I did not. =A0I do not have the toolchain installed. =A0I may have tim=
e later<br>
&gt; today to try the patch. =A0Are there any specific instructions on how =
to<br>
&gt; patch the src, compile and install?<br>
<br>
</div>There actually should be a new version of Xen 4.4-rcX which will have=
 the<br>
fix. That might be easier for you?<br>
<div><div>&gt;<br>
&gt; Regards<br>
&gt;<br>
&gt;<br>
&gt; On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk &lt;<br>
&gt; <a href=3D"mailto:konrad.wilk@oracle.com" target=3D"_blank">konrad.wil=
k@oracle.com</a>&gt; wrote:<br>
&gt;<br>
&gt; &gt; On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote=
:<br>
&gt; &gt; &gt; Hi all,<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; I am attempting to do a pci passthrough of an Intel ET card =
(4x1G NIC)<br>
&gt; &gt; to a<br>
&gt; &gt; &gt; HVM. =A0I have been attempting to resolve this issue on the =
xen-users list,<br>
&gt; &gt; &gt; but it was advised to post this issue to this list. (Initial=
 Message -<br>
&gt; &gt; &gt;<br>
&gt; &gt; <a href=3D"http://lists.xenproject.org/archives/html/xen-users/20=
14-02/msg00036.html" target=3D"_blank">http://lists.xenproject.org/archives=
/html/xen-users/2014-02/msg00036.html</a>)<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; The machine I am using as host is a Dell Poweredge server wi=
th a Xeon<br>
&gt; &gt; &gt; E31220 with 4GB of ram.<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; The possible bug is the following:<br>
&gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; char device redirected to /dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 4003000=
0<br>
&gt; &gt; &gt; ....<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; I believe it may be similar to this thread<br>
&gt; &gt; &gt;<br>
&gt; &gt; <a href=3D"http://markmail.org/message/3zuiojywempoorxj#query:+pa=
ge:1+mid:gul34vbe4uyog2d4+state:results" target=3D"_blank">http://markmail.=
org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:resul=
ts</a><br>



&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Additional info that may be helpful is below.<br>
&gt; &gt;<br>
&gt; &gt; Did you try the patch?<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Please let me know if you need any additional information.<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Thanks in advance for any help provided!<br>
&gt; &gt; &gt; Regards<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; # Configuration file for Xen HVM<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # HVM Name (as appears in &#39;xl list&#39;)<br>
&gt; &gt; &gt; name=&quot;ubuntu-hvm-0&quot;<br>
&gt; &gt; &gt; # HVM Build settings (+ hardware)<br>
&gt; &gt; &gt; #kernel = &quot;/usr/lib/xen-4.3/boot/hvmloader&quot;<br>
&gt; &gt; &gt; builder=&#39;hvm&#39;<br>
&gt; &gt; &gt; device_model=&#39;qemu-dm&#39;<br>
&gt; &gt; &gt; memory=1024<br>
&gt; &gt; &gt; vcpus=2<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Virtual Interface<br>
&gt; &gt; &gt; # Network bridge to USB NIC<br>
&gt; &gt; &gt; vif=[&#39;bridge=xenbr0&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ################### PCI PASSTHROUGH ###################<br>
&gt; &gt; &gt; # PCI Permissive mode toggle<br>
&gt; &gt; &gt; #pci_permissive=1<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # All PCI Devices<br>
&gt; &gt; &gt; #pci=[&#39;03:00.0&#39;, &#39;03:00.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;, &#39;05:00.0&#39;, &#39;05:00.1&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # First two ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; #pci=[&#39;03:00.0&#39;,&#39;03:00.1&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Last two ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; #pci=[&#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # All ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; pci=[&#39;03:00.0&#39;, &#39;03:00.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Broadcom 2x1G NIC<br>
&gt; &gt; &gt; #pci=[&#39;05:00.0&#39;, &#39;05:00.1&#39;]<br>
&gt; &gt; &gt; ################### PCI PASSTHROUGH ###################<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # HVM Disks<br>
&gt; &gt; &gt; # Hard disk only<br>
&gt; &gt; &gt; # Boot from HDD first (&#39;c&#39;)<br>
&gt; &gt; &gt; boot=&quot;c&quot;<br>
&gt; &gt; &gt; disk=[&#39;phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Hard disk with ISO<br>
&gt; &gt; &gt; # Boot from ISO first (&#39;d&#39;)<br>
&gt; &gt; &gt; #boot=&quot;d&quot;<br>
&gt; &gt; &gt; #disk=[&#39;phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w&#39;,<br>
&gt; &gt; &gt; &#39;file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r&#39;]<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # ACPI Enable<br>
&gt; &gt; &gt; acpi=1<br>
&gt; &gt; &gt; # HVM Event Modes<br>
&gt; &gt; &gt; on_poweroff=&#39;destroy&#39;<br>
&gt; &gt; &gt; on_reboot=&#39;restart&#39;<br>
&gt; &gt; &gt; on_crash=&#39;restart&#39;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Serial Console Configuration (Xen Console)<br>
&gt; &gt; &gt; sdl=0<br>
&gt; &gt; &gt; serial=&#39;pty&#39;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # VNC Configuration<br>
&gt; &gt; &gt; # Only reachable from localhost<br>
&gt; &gt; &gt; vnc=1<br>
&gt; &gt; &gt; vnclisten=&quot;0.0.0.0&quot;<br>
&gt; &gt; &gt; vncpasswd=&quot;&quot;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; Copied for xen-users list<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; It appears that it cannot obtain the RAM mapping for this PCI device.<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; I rebooted the host and assigned the PCI devices to pciback. The output<br>
&gt; &gt; &gt; looks like:<br>
&gt; &gt; &gt; root@fiat:~# ./dev_mgmt.sh<br>
&gt; &gt; &gt; Loading Kernel Module &#39;xen-pciback&#39;<br>
&gt; &gt; &gt; Calling function pciback_dev for:<br>
&gt; &gt; &gt; PCI DEVICE 0000:03:00.0<br>
&gt; &gt; &gt; Unbinding 0000:03:00.0 from igb<br>
&gt; &gt; &gt; Binding 0000:03:00.0 to pciback<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; PCI DEVICE 0000:03:00.1<br>
&gt; &gt; &gt; Unbinding 0000:03:00.1 from igb<br>
&gt; &gt; &gt; Binding 0000:03:00.1 to pciback<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; PCI DEVICE 0000:04:00.0<br>
&gt; &gt; &gt; Unbinding 0000:04:00.0 from igb<br>
&gt; &gt; &gt; Binding 0000:04:00.0 to pciback<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; PCI DEVICE 0000:04:00.1<br>
&gt; &gt; &gt; Unbinding 0000:04:00.1 from igb<br>
&gt; &gt; &gt; Binding 0000:04:00.1 to pciback<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; PCI DEVICE 0000:05:00.0<br>
&gt; &gt; &gt; Unbinding 0000:05:00.0 from bnx2<br>
&gt; &gt; &gt; Binding 0000:05:00.0 to pciback<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; PCI DEVICE 0000:05:00.1<br>
&gt; &gt; &gt; Unbinding 0000:05:00.1 from bnx2<br>
&gt; &gt; &gt; Binding 0000:05:00.1 to pciback<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Listing PCI Devices Available to Xen<br>
&gt; &gt; &gt; 0000:03:00.0<br>
&gt; &gt; &gt; 0000:03:00.1<br>
&gt; &gt; &gt; 0000:04:00.0<br>
&gt; &gt; &gt; 0000:04:00.1<br>
&gt; &gt; &gt; 0000:05:00.0<br>
&gt; &gt; &gt; 0000:05:00.1<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; WARNING: ignoring device_model directive.<br>
&gt; &gt; &gt; WARNING: Use &quot;device_model_override&quot; instead if you really want a<br>
&gt; &gt; &gt; non-default device_model<br>
&gt; &gt; &gt; libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:<br>
&gt; &gt; &gt; how=(nil) callback=(nil) poller=0x210c3c0<br>
&gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk<br>
&gt; &gt; &gt; vdev=hda spec.backend=unknown<br>
&gt; &gt; &gt; libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk<br>
&gt; &gt; &gt; vdev=hda, using backend phy<br>
&gt; &gt; &gt; libxl: debug: libxl_create.c:675:initiate_domain_create: running<br>
&gt; &gt; bootloader<br>
&gt; &gt; &gt; libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV<br>
&gt; &gt; &gt; domain, skipping bootloader<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x210c728: deregister unregistered<br>
&gt; &gt; &gt; libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA<br>
&gt; &gt; &gt; placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,<br>
&gt; &gt; &gt; free_memkb=2980<br>
&gt; &gt; &gt; libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement<br>
&gt; &gt; candidate<br>
&gt; &gt; &gt; with 1 nodes, 4 cpus and 2980 KB free selected<br>
&gt; &gt; &gt; xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4<br>
&gt; &gt; &gt; xc: detail: elf_parse_binary: memory: 0x100000 -&gt; 0x1a69a4<br>
&gt; &gt; &gt; xc: info: VIRTUAL MEMORY ARRANGEMENT:<br>
&gt; &gt; &gt;   Loader:        0000000000100000-&gt;00000000001a69a4<br>
&gt; &gt; &gt;   Modules:       0000000000000000-&gt;0000000000000000<br>
&gt; &gt; &gt;   TOTAL:         0000000000000000-&gt;000000003f800000<br>
&gt; &gt; &gt;   ENTRY ADDRESS: 0000000000100608<br>
&gt; &gt; &gt; xc: info: PHYSICAL MEMORY ALLOCATION:<br>
&gt; &gt; &gt;   4KB PAGES: 0x0000000000000200<br>
&gt; &gt; &gt;   2MB PAGES: 0x00000000000001fb<br>
&gt; &gt; &gt;   1GB PAGES: 0x0000000000000000<br>
&gt; &gt; &gt; xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -&gt; 0x7f022c81682d<br>
&gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk<br>
&gt; &gt; &gt; vdev=hda spec.backend=phy<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; &gt; &gt; w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:<br>
&gt; &gt; &gt; register slotnum=3<br>
&gt; &gt; &gt; libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:<br>
&gt; &gt; &gt; inprogress: poller=0x210c3c0, flags=i<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48<br>
&gt; &gt; &gt; wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event<br>
&gt; &gt; &gt; epath=/local/domain/0/backend/vbd/2/768/state<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend<br>
&gt; &gt; &gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting<br>
&gt; &gt; state 1<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48<br>
&gt; &gt; &gt; wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event<br>
&gt; &gt; &gt; epath=/local/domain/0/backend/vbd/2/768/state<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend<br>
&gt; &gt; &gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 ok<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:<br>
&gt; &gt; &gt; deregister slotnum=3<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x2112f48: deregister unregistered<br>
&gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:<br>
&gt; &gt; &gt; /etc/xen/scripts/block add<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning<br>
&gt; &gt; device-model<br>
&gt; &gt; &gt; /usr/bin/qemu-system-i386 with arguments:<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; &gt; &gt; /usr/bin/qemu-system-i386<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; &gt; &gt; socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; &gt; &gt; chardev=libxl-cmd,mode=control<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   <a href="http://0.0.0.0:0" target="_blank">0.0.0.0:0</a>,to=99<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; &gt; &gt; rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; &gt; &gt; type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive<br>
&gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:<br>
&gt; &gt; &gt;<br>
&gt; &gt; file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; &gt; &gt; w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:<br>
&gt; &gt; register<br>
&gt; &gt; &gt; slotnum=3<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960<br>
&gt; &gt; &gt; wpath=/local/domain/0/device-model/2/state token=3/1: event<br>
&gt; &gt; &gt; epath=/local/domain/0/device-model/2/state<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960<br>
&gt; &gt; &gt; wpath=/local/domain/0/device-model/2/state token=3/1: event<br>
&gt; &gt; &gt; epath=/local/domain/0/device-model/2/state<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:<br>
&gt; &gt; &gt; deregister slotnum=3<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x210c960: deregister unregistered<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to<br>
&gt; &gt; &gt; /var/run/xen/qmp-libxl-2<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt;     &quot;execute&quot;: &quot;qmp_capabilities&quot;,<br>
&gt; &gt; &gt;     &quot;id&quot;: 1<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt;     &quot;execute&quot;: &quot;query-chardev&quot;,<br>
&gt; &gt; &gt;     &quot;id&quot;: 2<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt;     &quot;execute&quot;: &quot;change&quot;,<br>
&gt; &gt; &gt;     &quot;id&quot;: 3,<br>
&gt; &gt; &gt;     &quot;arguments&quot;: {<br>
&gt; &gt; &gt;         &quot;device&quot;: &quot;vnc&quot;,<br>
&gt; &gt; &gt;         &quot;target&quot;: &quot;password&quot;,<br>
&gt; &gt; &gt;         &quot;arg&quot;: &quot;&quot;<br>
&gt; &gt; &gt;     }<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt;     &quot;execute&quot;: &quot;query-vnc&quot;,<br>
&gt; &gt; &gt;     &quot;id&quot;: 4<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; &gt; &gt; w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:<br>
&gt; &gt; register<br>
&gt; &gt; &gt; slotnum=3<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8<br>
&gt; &gt; &gt; wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event<br>
&gt; &gt; &gt; epath=/local/domain/0/backend/vif/2/0/state<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend<br>
&gt; &gt; &gt; /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state<br>
&gt; &gt; 1<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8<br>
&gt; &gt; &gt; wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event<br>
&gt; &gt; &gt; epath=/local/domain/0/backend/vif/2/0/state<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend<br>
&gt; &gt; &gt; /local/domain/0/backend/vif/2/0/state wanted state 2 ok<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:<br>
&gt; &gt; &gt; deregister slotnum=3<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; w=0x210e8a8: deregister unregistered<br>
&gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:<br>
&gt; &gt; &gt; /etc/xen/scripts/vif-bridge online<br>
&gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:<br>
&gt; &gt; &gt; /etc/xen/scripts/vif-bridge add<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to<br>
&gt; &gt; &gt; /var/run/xen/qmp-libxl-2<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt;     &quot;execute&quot;: &quot;qmp_capabilities&quot;,<br>
&gt; &gt; &gt;     &quot;id&quot;: 1<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt;     &quot;execute&quot;: &quot;device_add&quot;,<br>
&gt; &gt; &gt;     &quot;id&quot;: 2,<br>
&gt; &gt; &gt;     &quot;arguments&quot;: {<br>
&gt; &gt; &gt;         &quot;driver&quot;: &quot;xen-pci-passthrough&quot;,<br>
&gt; &gt; &gt;         &quot;id&quot;: &quot;pci-pt-03_00.0&quot;,<br>
&gt; &gt; &gt;         &quot;hostaddr&quot;: &quot;0000:03:00.0&quot;<br>
&gt; &gt; &gt;     }<br>
&gt; &gt; &gt; }<br>
&gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection<br>
&gt; &gt; reset<br>
&gt; &gt; &gt; by peer<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci<br>
&gt; &gt; backend<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:<br>
&gt; &gt; &gt; progress report: ignored<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:<br>
&gt; &gt; &gt; complete, rc=0<br>
&gt; &gt; &gt; libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:<br>
&gt; &gt; destroy<br>
&gt; &gt; &gt; Daemon running with PID 3214<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: total allocations:793 total releases:793<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: current allocations:0 maximum allocations:4<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: cache current size:4<br>
&gt; &gt; &gt; xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; char device redirected to /dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 40030000<br>
&gt; &gt; &gt; CPU #0:<br>
&gt; &gt; &gt; EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633<br>
&gt; &gt; &gt; ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000<br>
&gt; &gt; &gt; EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1<br>
&gt; &gt; &gt; ES =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; CS =f000 ffff0000 0000ffff 00009b00<br>
&gt; &gt; &gt; SS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; DS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; FS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; GS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; LDT=0000 00000000 0000ffff 00008200<br>
&gt; &gt; &gt; TR =0000 00000000 0000ffff 00008b00<br>
&gt; &gt; &gt; GDT=     00000000 0000ffff<br>
&gt; &gt; &gt; IDT=     00000000 0000ffff<br>
&gt; &gt; &gt; CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000<br>
&gt; &gt; &gt; DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000<br>
&gt; &gt; &gt; DR6=ffff0ff0 DR7=00000400<br>
&gt; &gt; &gt; EFER=0000000000000000<br>
&gt; &gt; &gt; FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80<br>
&gt; &gt; &gt; FPR0=0000000000000000 0000 FPR1=0000000000000000 0000<br>
&gt; &gt; &gt; FPR2=0000000000000000 0000 FPR3=0000000000000000 0000<br>
&gt; &gt; &gt; FPR4=0000000000000000 0000 FPR5=0000000000000000 0000<br>
&gt; &gt; &gt; FPR6=0000000000000000 0000 FPR7=0000000000000000 0000<br>
&gt; &gt; &gt; XMM00=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM01=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM02=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM03=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM04=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM05=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM06=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM07=00000000000000000000000000000000<br>
&gt; &gt; &gt; CPU #1:<br>
&gt; &gt; &gt; EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633<br>
&gt; &gt; &gt; ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000<br>
&gt; &gt; &gt; EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1<br>
&gt; &gt; &gt; ES =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; CS =f000 ffff0000 0000ffff 00009b00<br>
&gt; &gt; &gt; SS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; DS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; FS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; GS =0000 00000000 0000ffff 00009300<br>
&gt; &gt; &gt; LDT=0000 00000000 0000ffff 00008200<br>
&gt; &gt; &gt; TR =0000 00000000 0000ffff 00008b00<br>
&gt; &gt; &gt; GDT=     00000000 0000ffff<br>
&gt; &gt; &gt; IDT=     00000000 0000ffff<br>
&gt; &gt; &gt; CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000<br>
&gt; &gt; &gt; DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000<br>
&gt; &gt; &gt; DR6=ffff0ff0 DR7=00000400<br>
&gt; &gt; &gt; EFER=0000000000000000<br>
&gt; &gt; &gt; FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80<br>
&gt; &gt; &gt; FPR0=0000000000000000 0000 FPR1=0000000000000000 0000<br>
&gt; &gt; &gt; FPR2=0000000000000000 0000 FPR3=0000000000000000 0000<br>
&gt; &gt; &gt; FPR4=0000000000000000 0000 FPR5=0000000000000000 0000<br>
&gt; &gt; &gt; FPR6=0000000000000000 0000 FPR7=0000000000000000 0000<br>
&gt; &gt; &gt; XMM00=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM01=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM02=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM03=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM04=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM05=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM06=00000000000000000000000000000000<br>
&gt; &gt; &gt; XMM07=00000000000000000000000000000000<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt; &gt; GRUB_DEFAULT=&quot;Xen 4.3-amd64&quot;<br>
&gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=0<br>
&gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=true<br>
&gt; &gt; &gt; GRUB_TIMEOUT=10<br>
&gt; &gt; &gt; GRUB_DISTRIBUTOR=`lsb_release -i -s 2&gt; /dev/null || echo Debian`<br>
&gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=&quot;quiet splash&quot;<br>
&gt; &gt; &gt; GRUB_CMDLINE_LINUX=&quot;&quot;<br>
&gt; &gt; &gt; # biosdevname=0<br>
&gt; &gt; &gt; GRUB_CMDLINE_XEN=&quot;dom0_mem=1024M dom0_max_vcpus=1&quot;<br>
&gt; &gt;<br>
&gt; &gt; &gt; _______________________________________________<br>
&gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt; &gt; <a href="mailto:Xen-devel@lists.xen.org" target="_blank">Xen-devel@lists.xen.org</a><br>
&gt; &gt; &gt; <a href="http://lists.xen.org/xen-devel" target="_blank">http://lists.xen.org/xen-devel</a><br>
&gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>

--047d7b66f2f770a4fb04f1d6f5d3--


--===============8441876354739994021==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8441876354739994021==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 21:02:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsYW-0001g3-7C; Fri, 07 Feb 2014 21:01:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBsYT-0001fy-Q5
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:01:46 +0000
Received: from [85.158.137.68:58313] by server-10.bemta-3.messagelabs.com id
	25/14-07302-9B945F25; Fri, 07 Feb 2014 21:01:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391806902!421671!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8993 invoked from network); 7 Feb 2014 21:01:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 21:01:43 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17L1dV3006612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 21:01:40 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17L1cUW021676
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 21:01:39 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17L1cYZ021666; Fri, 7 Feb 2014 21:01:38 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 13:01:38 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5278A1C0972; Fri,  7 Feb 2014 16:01:37 -0500 (EST)
Date: Fri, 7 Feb 2014 16:01:37 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207210137.GA13743@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> Ok. I ran the initscripts and now xl works.
> 
> However, I still see the same behavior as before:
> 

Did you use the patch that was mentioned in the URL?

> root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset
> by peer
> libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> Connection refused
> libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> Connection refused
> root@fiat:~# xl list
> Name                                        ID   Mem VCPUs State Time(s)
> Domain-0                                     0  1024     1     r-----   15.2
> ubuntu-hvm-0                                 1  1025     1     ------    0.0
> 
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> be allocated)
> (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> (XEN) Dom0 has maximum 1 VCPUs
> (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> (XEN) Scrubbing Free RAM: .............................done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> (XEN) Freed 260kB init memory.
> (XEN) PCI add device 0000:00:00.0
> (XEN) PCI add device 0000:00:01.0
> (XEN) PCI add device 0000:00:1a.0
> (XEN) PCI add device 0000:00:1c.0
> (XEN) PCI add device 0000:00:1d.0
> (XEN) PCI add device 0000:00:1e.0
> (XEN) PCI add device 0000:00:1f.0
> (XEN) PCI add device 0000:00:1f.2
> (XEN) PCI add device 0000:00:1f.3
> (XEN) PCI add device 0000:01:00.0
> (XEN) PCI add device 0000:02:02.0
> (XEN) PCI add device 0000:02:04.0
> (XEN) PCI add device 0000:03:00.0
> (XEN) PCI add device 0000:03:00.1
> (XEN) PCI add device 0000:04:00.0
> (XEN) PCI add device 0000:04:00.1
> (XEN) PCI add device 0000:05:00.0
> (XEN) PCI add device 0000:05:00.1
> (XEN) PCI add device 0000:06:03.0
> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> (d1) HVM Loader
> (d1) Detected Xen v4.4-rc2
> (d1) Xenbus rings @0xfeffc000, event channel 4
> (d1) System requested SeaBIOS
> (d1) CPU speed is 3093 MHz
> (d1) Relocating guest memory for lowmem MMIO space disabled
> 
> 
> Excerpt from /var/log/xen/*
> qemu: hardware error: xen: failed to populate ram at 40050000
> 
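For reference, the "Over-allocation for domain 1: 262401 > 262400" line in the Xen log is consistent with libxl capping the domain at its configured memory plus a small slack. A rough sketch of the arithmetic, assuming 4 KiB pages and roughly 1 MiB of libxl slack (LIBXL_MAXMEM_CONSTANT) on top of the guest's `memory=1024`:

```shell
# Rough sketch only: reproduces the page counts seen in the Xen log above.
# Assumption: libxl adds ~1 MiB of slack on top of "memory=1024" (MiB)
# when setting the domain's maximum page count.
memory_mib=1024
page_kib=4
target_pages=$(( memory_mib * 1024 / page_kib ))       # pages for 1024 MiB
max_pages=$(( (memory_mib + 1) * 1024 / page_kib ))    # cap with 1 MiB slack
echo "target=$target_pages max=$max_pages"
# target=262144 max=262400 -- the failed populate asked for page 262401,
# one past the cap, while the device model was repopulating RAM around the
# MMIO hole for the passed-through devices.
```

This also matches the `xl list` output above, which shows the guest at 1025 MiB rather than the configured 1024.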
> 
> On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 
> > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > > getting the error:
> > >
> > > root@fiat:~/git/xen# xl list
> > > xc: error: Could not obtain handle on privileged command interface
> > > (2 = No such file or directory): Internal error
> > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> > > No such file or directory
> > > cannot init xl context
> > >
> > > I've searched Google for this and an article appears, but it is not the
> > > same issue (as far as I can tell).  Running any xl command generates a
> > > similar error.
> > >
> > > What can I do to fix this?
> >
> >
> > You need to run the initscripts for Xen. I don't know what your distro
> > is, but they are usually put in /etc/init.d/xen* (or /etc/rc.d/xen*).
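The "Could not obtain handle on privileged command interface" error quoted above means libxc could not open the privcmd node, which only appears once xenfs is mounted and the Xen initscripts have run. A minimal check along these lines can confirm that (hedged: the `xencommons` service name is the one shipped with the Xen 4.x initscripts; distros may differ):

```shell
# Minimal sanity check for the xl toolstack's privileged interface.
# Assumption: Xen 4.x layout with privcmd under /proc/xen or /dev/xen.
check_xen_ready() {
    if [ -e /proc/xen/privcmd ] || [ -e /dev/xen/privcmd ]; then
        echo "privcmd present: xl should be able to talk to the hypervisor"
    else
        echo "privcmd missing: mount xenfs / run the initscripts, e.g."
        echo "  /etc/init.d/xencommons start"
    fi
}
check_xen_ready
```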
> >
> >
> > >
> > > Regards
> > >
> > >
> > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > >
> > > > Much. Do I need to install from source, or is there a package I can
> > > > install?
> > > >
> > > > Regards
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > >
> > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > >> > I did not.  I do not have the toolchain installed.  I may have time
> > > >> > later today to try the patch.  Are there any specific instructions
> > > >> > on how to patch the source, compile, and install?
> > > >>
> > > >> There actually should be a new version of Xen 4.4-rcX which will have
> > > >> the fix. That might be easier for you?
> > > >> >
> > > >> > Regards
> > > >> >
> > > >> >
> > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > >> >
> > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > >> > > > Hi all,
> > > >> > > >
> > > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> > > >> > > > (4x1G NIC) to a HVM.  I have been attempting to resolve this
> > > >> > > > issue on the xen-users list, but it was advised to post this
> > > >> > > > issue to this list. (Initial Message -
> > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > >> > > > The machine I am using as the host is a Dell PowerEdge server
> > > >> > > > with a Xeon E3-1220 and 4GB of RAM.
> > > >> > > >
> > > >> > > > The possible bug is the following:
> > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > >> > > > ....
> > > >> > > >
> > > >> > > > I believe it may be similar to this thread:
> > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > >> > > >
> > > >> > > >
> > > >> > > > Additional info that may be helpful is below.
> > > >> > >
> > > >> > > Did you try the patch?
> > > >> > > >
> > > >> > > > Please let me know if you need any additional information.
> > > >> > > >
> > > >> > > > Thanks in advance for any help provided!
> > > >> > > > Regards
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > >> > > > ###########################################################
> > > >> > > > # Configuration file for Xen HVM
> > > >> > > >
> > > >> > > > # HVM Name (as appears in 'xl list')
> > > >> > > > name="ubuntu-hvm-0"
> > > >> > > > # HVM Build settings (+ hardware)
> > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > >> > > > builder='hvm'
> > > >> > > > device_model='qemu-dm'
> > > >> > > > memory=1024
> > > >> > > > vcpus=2
> > > >> > > >
> > > >> > > > # Virtual Interface
> > > >> > > > # Network bridge to USB NIC
> > > >> > > > vif=['bridge=xenbr0']
> > > >> > > >
> > > >> > > > ################### PCI PASSTHROUGH ###################
> > > >> > > > # PCI Permissive mode toggle
> > > >> > > > #pci_permissive=1
> > > >> > > >
> > > >> > > > # All PCI Devices
> > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > >> '05:00.1']
> > > >> > > >
> > > >> > > > # First two ports on Intel 4x1G NIC
> > > >> > > > #pci=['03:00.0','03:00.1']
> > > >> > > >
> > > >> > > > # Last two ports on Intel 4x1G NIC
> > > >> > > > #pci=['04:00.0', '04:00.1']
> > > >> > > >
> > > >> > > > # All ports on Intel 4x1G NIC
> > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > >> > > >
> > > >> > > > # Broadcom 2x1G NIC
> > > >> > > > #pci=['05:00.0', '05:00.1']
> > > >> > > > ################### PCI PASSTHROUGH ###################
> > > >> > > >
> > > >> > > > # HVM Disks
> > > >> > > > # Hard disk only
> > > >> > > > # Boot from HDD first ('c')
> > > >> > > > boot="c"
> > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > >> > > >
> > > >> > > > # Hard disk with ISO
> > > >> > > > # Boot from ISO first ('d')
> > > >> > > > #boot="d"
> > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > >> > > >
> > > >> > > > # ACPI Enable
> > > >> > > > acpi=1
> > > >> > > > # HVM Event Modes
> > > >> > > > on_poweroff='destroy'
> > > >> > > > on_reboot='restart'
> > > >> > > > on_crash='restart'
> > > >> > > >
> > > >> > > > # Serial Console Configuration (Xen Console)
> > > >> > > > sdl=0
> > > >> > > > serial='pty'
> > > >> > > >
> > > >> > > > # VNC Configuration
> > > >> > > > # Only reachable from localhost
> > > >> > > > vnc=1
> > > >> > > > vnclisten="0.0.0.0"
> > > >> > > > vncpasswd=""
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > Copied from xen-users list
> > > >> > > > ###########################################################
> > > >> > > >
> > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> > > >> > > > device.
> > > >> > > >
> > > >> > > >
> > > >> > > > I rebooted the host and assigned the PCI devices to pciback.
> > > >> > > > The output looks like:
> > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > >> > > > Loading Kernel Module 'xen-pciback'
> > > >> > > > Calling function pciback_dev for:
> > > >> > > > PCI DEVICE 0000:03:00.0
> > > >> > > > Unbinding 0000:03:00.0 from igb
> > > >> > > > Binding 0000:03:00.0 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:03:00.1
> > > >> > > > Unbinding 0000:03:00.1 from igb
> > > >> > > > Binding 0000:03:00.1 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:04:00.0
> > > >> > > > Unbinding 0000:04:00.0 from igb
> > > >> > > > Binding 0000:04:00.0 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:04:00.1
> > > >> > > > Unbinding 0000:04:00.1 from igb
> > > >> > > > Binding 0000:04:00.1 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:05:00.0
> > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > >> > > > Binding 0000:05:00.0 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:05:00.1
> > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > >> > > > Binding 0000:05:00.1 to pciback
> > > >> > > >
> > > >> > > > Listing PCI Devices Available to Xen
> > > >> > > > 0000:03:00.0
> > > >> > > > 0000:03:00.1
> > > >> > > > 0000:04:00.0
> > > >> > > > 0000:04:00.1
> > > >> > > > 0000:05:00.0
> > > >> > > > 0000:05:00.1
> > > >> > > >
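The dev_mgmt.sh script whose output is quoted above is not shown in the thread; a typical unbind/rebind sequence against the standard xen-pciback sysfs interface looks roughly like the sketch below. The DRY_RUN guard is an illustrative addition (it just prints the commands); set DRY_RUN=0 and run as root on a real Xen host with the xen-pciback module loaded to apply them.

```shell
# Sketch of binding one device to xen-pciback via sysfs.
# DRY_RUN=1 (default) only prints the commands it would run.
DRY_RUN=${DRY_RUN:-1}
bind_to_pciback() {
    bdf="$1"    # full BDF, e.g. 0000:03:00.0
    for cmd in \
        "echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind" \
        "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot" \
        "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
    do
        if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else eval "$cmd"; fi
    done
}
bind_to_pciback 0000:03:00.0
```

After a real run, `xl pci-assignable-list` should show the device, matching the listing above.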
> > > >> > > > ###########################################################
> > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > >> > > > WARNING: ignoring device_model directive.
> > > >> > > > WARNING: Use "device_model_override" instead if you really want
> > > >> > > > a non-default device_model
> > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > 0x210c360:
> > > >> create:
> > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > > >> Disk
> > > >> > > > vdev=hda spec.backend=unknown
> > > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> > > >> Disk
> > > >> > > > vdev=hda, using backend phy
> > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > > >> > > bootloader
> > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not
> > a PV
> > > >> > > > domain, skipping bootloader
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210c728: deregister unregistered
> > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New
> > best
> > > >> NUMA
> > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > >> > > > free_memkb=2980
> > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > > >> > > candidate
> > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > >> > > >   4KB PAGES: 0x0000000000000200
> > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > >> > > >   1GB PAGES: 0x0000000000000000
> > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > >> 0x7f022c81682d
> > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > > >> Disk
> > > >> > > > vdev=hda spec.backend=phy
> > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > watch
> > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > token=3/0:
> > > >> > > > register slotnum=3
> > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > 0x210c360:
> > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x2112f48
> > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > waiting
> > > >> > > state 1
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x2112f48
> > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > token=3/0:
> > > >> > > > deregister slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x2112f48: deregister unregistered
> > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > >> script:
> > > >> > > > /etc/xen/scripts/block add
> > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > > >> > > device-model
> > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > /usr/bin/qemu-system-i386
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -xen-domid
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > chardev=libxl-cmd,mode=control
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > ubuntu-hvm-0
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > 0.0.0.0:0
> > > >> ,to=99
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> isa-fdc.driveA=
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> vga.vram_size_mb=8
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > 2,maxcpus=2
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > >
> > > >> > >
> > > >>
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > watch
> > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > token=3/1:
> > > >> > > register
> > > >> > > > slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210c960
> > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > >> > > > epath=/local/domain/0/device-model/2/state
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210c960
> > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > >> > > > epath=/local/domain/0/device-model/2/state
> > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > token=3/1:
> > > >> > > > deregister slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210c960: deregister unregistered
> > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected
> > to
> > > >> > > > /var/run/xen/qmp-libxl-2
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type: qmp
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "qmp_capabilities",
> > > >> > > >     "id": 1
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "query-chardev",
> > > >> > > >     "id": 2
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "change",
> > > >> > > >     "id": 3,
> > > >> > > >     "arguments": {
> > > >> > > >         "device": "vnc",
> > > >> > > >         "target": "password",
> > > >> > > >         "arg": ""
> > > >> > > >     }
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "query-vnc",
> > > >> > > >     "id": 4
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > watch
> > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > token=3/2:
> > > >> > > register
> > > >> > > > slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210e8a8
> > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> > waiting
> > > >> state
> > > >> > > 1
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210e8a8
> > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > token=3/2:
> > > >> > > > deregister slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210e8a8: deregister unregistered
> > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > >> script:
> > > >> > > > /etc/xen/scripts/vif-bridge online
> > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > >> script:
> > > >> > > > /etc/xen/scripts/vif-bridge add
> > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected
> > to
> > > >> > > > /var/run/xen/qmp-libxl-2
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type: qmp
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "qmp_capabilities",
> > > >> > > >     "id": 1
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "device_add",
> > > >> > > >     "id": 2,
> > > >> > > >     "arguments": {
> > > >> > > >         "driver": "xen-pci-passthrough",
> > > >> > > >         "id": "pci-pt-03_00.0",
> > > >> > > >         "hostaddr": "0000:03:00.0"
> > > >> > > >     }
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > > >> > > > Connection reset by peer
> > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > >> > > > Connection error: Connection refused
> > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > >> > > > Connection error: Connection refused
> > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > >> > > > Connection error: Connection refused
> > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > Creating pci
> > > >> > > backend
> > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
> > > >> 0x210c360:
> > > >> > > > progress report: ignored
> > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > 0x210c360:
> > > >> > > > complete, rc=0
> > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > 0x210c360:
> > > >> > > destroy
> > > >> > > > Daemon running with PID 3214
> > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > >> releases:793
> > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > >> allocations:4
> > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > >> > > > CPU #0:
> > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > >> > > > GDT=     00000000 0000ffff
> > > >> > > > IDT=     00000000 0000ffff
> > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > >> > > > EFER=0000000000000000
> > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > >> > > > XMM00=00000000000000000000000000000000
> > > >> > > > XMM01=00000000000000000000000000000000
> > > >> > > > XMM02=00000000000000000000000000000000
> > > >> > > > XMM03=00000000000000000000000000000000
> > > >> > > > XMM04=00000000000000000000000000000000
> > > >> > > > XMM05=00000000000000000000000000000000
> > > >> > > > XMM06=00000000000000000000000000000000
> > > >> > > > XMM07=00000000000000000000000000000000
> > > >> > > > CPU #1:
> > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > >> > > > GDT=     00000000 0000ffff
> > > >> > > > IDT=     00000000 0000ffff
> > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > >> > > > EFER=0000000000000000
> > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > >> > > > XMM00=00000000000000000000000000000000
> > > >> > > > XMM01=00000000000000000000000000000000
> > > >> > > > XMM02=00000000000000000000000000000000
> > > >> > > > XMM03=00000000000000000000000000000000
> > > >> > > > XMM04=00000000000000000000000000000000
> > > >> > > > XMM05=00000000000000000000000000000000
> > > >> > > > XMM06=00000000000000000000000000000000
> > > >> > > > XMM07=00000000000000000000000000000000
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > /etc/default/grub
> > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > >> > > > GRUB_TIMEOUT=10
> > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > >> > > > GRUB_CMDLINE_LINUX=""
> > > >> > > > # biosdevname=0
> > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > > >> > >
> > > >> > > > _______________________________________________
> > > >> > > > Xen-devel mailing list
> > > >> > > > Xen-devel@lists.xen.org
> > > >> > > > http://lists.xen.org/xen-devel
> > > >> > >
> > > >> > >
> > > >>
> > > >
> > > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 21:02:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsYW-0001g3-7C; Fri, 07 Feb 2014 21:01:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBsYT-0001fy-Q5
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:01:46 +0000
Received: from [85.158.137.68:58313] by server-10.bemta-3.messagelabs.com id
	25/14-07302-9B945F25; Fri, 07 Feb 2014 21:01:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391806902!421671!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8993 invoked from network); 7 Feb 2014 21:01:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 21:01:43 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17L1dV3006612
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 21:01:40 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17L1cUW021676
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 21:01:39 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17L1cYZ021666; Fri, 7 Feb 2014 21:01:38 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 13:01:38 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5278A1C0972; Fri,  7 Feb 2014 16:01:37 -0500 (EST)
Date: Fri, 7 Feb 2014 16:01:37 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207210137.GA13743@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> to Xen)
> (XEN) Freed 260kB init memory.
> (XEN) PCI add device 0000:00:00.0
> (XEN) PCI add device 0000:00:01.0
> (XEN) PCI add device 0000:00:1a.0
> (XEN) PCI add device 0000:00:1c.0
> (XEN) PCI add device 0000:00:1d.0
> (XEN) PCI add device 0000:00:1e.0
> (XEN) PCI add device 0000:00:1f.0
> (XEN) PCI add device 0000:00:1f.2
> (XEN) PCI add device 0000:00:1f.3
> (XEN) PCI add device 0000:01:00.0
> (XEN) PCI add device 0000:02:02.0
> (XEN) PCI add device 0000:02:04.0
> (XEN) PCI add device 0000:03:00.0
> (XEN) PCI add device 0000:03:00.1
> (XEN) PCI add device 0000:04:00.0
> (XEN) PCI add device 0000:04:00.1
> (XEN) PCI add device 0000:05:00.0
> (XEN) PCI add device 0000:05:00.1
> (XEN) PCI add device 0000:06:03.0
> (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> (200 of 1024)
> (d1) HVM Loader
> (d1) Detected Xen v4.4-rc2
> (d1) Xenbus rings @0xfeffc000, event channel 4
> (d1) System requested SeaBIOS
> (d1) CPU speed is 3093 MHz
> (d1) Relocating guest memory for lowmem MMIO space disabled
> 
> 
> Excerpt from /var/log/xen/*
> qemu: hardware error: xen: failed to populate ram at 40050000
> 
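The "Over-allocation for domain 1: 262401 > 262400" line in the Xen log above suggests the guest needs slightly more than its memory cap allows when hvmloader populates RAM around the passthrough MMIO hole. One hedged workaround sketch (option names are from xl.cfg; the exact headroom value is a guess, not a verified fix):

```
# /etc/xen/ubuntu-hvm-0.cfg -- give the guest explicit headroom above 'memory'
memory = 1024
maxmem = 1032    # a few MB of slack; the precise amount needed is an assumption
```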
> 
> On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > > getting the error:
> > >
> > > root@fiat:~/git/xen# xl list
> > > xc: error: Could not obtain handle on privileged command interface (2 =
> > No
> > > such file or directory): Internal error
> > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No
> > such
> > > file or directory
> > > cannot init xl context
> > >
> > > I've searched Google for this and an article appears, but it is not the same
> > > (as far as I can tell).  Running any xl command generates a similar
> > error.
> > >
> > > What can I do to fix this?
> >
> >
> > You need to run the initscripts for Xen. I don't know what your distro is,
> > but
> > they are usually put in /etc/init.d/rc.d/xen*
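A quick way to check whether running those scripts worked, sketched under assumptions (the privcmd paths below are the usual locations, but they may differ by kernel/distro):

```shell
# If xl reports "Could not obtain handle on privileged command interface",
# the Xen control device is typically missing. Check whether it exists:
if [ -e /proc/xen/privcmd ] || [ -e /dev/xen/privcmd ]; then
    privcmd_status=present
else
    privcmd_status=missing    # run the Xen initscripts, e.g. xencommons
fi
echo "privcmd: $privcmd_status"
```

On a dom0 where the initscripts have run (and xenfs is mounted), this should report `present`.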
> >
> >
> > >
> > > Regards
> > >
> > >
> > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > mikeneiderhauser@gmail.com> wrote:
> > >
> > > > Much. Do I need to install from src or is there a package I can
> > install?
> > > >
> > > > Regards
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@oracle.com> wrote:
> > > >
> > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > >> > I did not.  I do not have the toolchain installed.  I may have time
> > > >> later
> > > >> > today to try the patch.  Are there any specific instructions on how
> > to
> > > >> > patch the src, compile and install?
> > > >>
> > > >> There actually should be a new version of Xen 4.4-rcX which will have
> > the
> > > >> fix. That might be easier for you?
> > > >> >
> > > >> > Regards
> > > >> >
> > > >> >
> > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > >> > konrad.wilk@oracle.com> wrote:
> > > >> >
> > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > >> > > > Hi all,
> > > >> > > >
> > > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> > (4x1G
> > > >> NIC)
> > > >> > > to a
> > > >> > > > HVM.  I have been attempting to resolve this issue on the
> > xen-users
> > > >> list,
> > > >> > > > but it was advised to post this issue to this list. (Initial
> > > >> Message -
> > > >> > > >
> > > >> > >
> > > >>
> > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > > >> )
> > > >> > > >
> > > >> > > > The machine I am using as host is a Dell Poweredge server with a
> > > >> Xeon
> > > >> > > > E31220 with 4GB of ram.
> > > >> > > >
> > > >> > > > The possible bug is the following:
> > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > >> > > > ....
> > > >> > > >
> > > >> > > > I believe it may be similar to this thread
> > > >> > > >
> > > >> > >
> > > >>
> > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > >> > > >
> > > >> > > >
> > > >> > > > Additional info that may be helpful is below.
> > > >> > >
> > > >> > > Did you try the patch?
> > > >> > > >
> > > >> > > > Please let me know if you need any additional information.
> > > >> > > >
> > > >> > > > Thanks in advance for any help provided!
> > > >> > > > Regards
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > >> > > > ###########################################################
> > > >> > > > # Configuration file for Xen HVM
> > > >> > > >
> > > >> > > > # HVM Name (as appears in 'xl list')
> > > >> > > > name="ubuntu-hvm-0"
> > > >> > > > # HVM Build settings (+ hardware)
> > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > >> > > > builder='hvm'
> > > >> > > > device_model='qemu-dm'
> > > >> > > > memory=1024
> > > >> > > > vcpus=2
> > > >> > > >
> > > >> > > > # Virtual Interface
> > > >> > > > # Network bridge to USB NIC
> > > >> > > > vif=['bridge=xenbr0']
> > > >> > > >
> > > >> > > > ################### PCI PASSTHROUGH ###################
> > > >> > > > # PCI Permissive mode toggle
> > > >> > > > #pci_permissive=1
> > > >> > > >
> > > >> > > > # All PCI Devices
> > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > >> '05:00.1']
> > > >> > > >
> > > >> > > > # First two ports on Intel 4x1G NIC
> > > >> > > > #pci=['03:00.0','03:00.1']
> > > >> > > >
> > > >> > > > # Last two ports on Intel 4x1G NIC
> > > >> > > > #pci=['04:00.0', '04:00.1']
> > > >> > > >
> > > >> > > > # All ports on Intel 4x1G NIC
> > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > >> > > >
> > > >> > > > # Broadcom 2x1G NIC
> > > >> > > > #pci=['05:00.0', '05:00.1']
> > > >> > > > ################### PCI PASSTHROUGH ###################
> > > >> > > >
> > > >> > > > # HVM Disks
> > > >> > > > # Hard disk only
> > > >> > > > # Boot from HDD first ('c')
> > > >> > > > boot="c"
> > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > >> > > >
> > > >> > > > # Hard disk with ISO
> > > >> > > > # Boot from ISO first ('d')
> > > >> > > > #boot="d"
> > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > >> > > >
> > > >> > > > # ACPI Enable
> > > >> > > > acpi=1
> > > >> > > > # HVM Event Modes
> > > >> > > > on_poweroff='destroy'
> > > >> > > > on_reboot='restart'
> > > >> > > > on_crash='restart'
> > > >> > > >
> > > >> > > > # Serial Console Configuration (Xen Console)
> > > >> > > > sdl=0
> > > >> > > > serial='pty'
> > > >> > > >
> > > >> > > > # VNC Configuration
> > > >> > > > # Only reachable from localhost
> > > >> > > > vnc=1
> > > >> > > > vnclisten="0.0.0.0"
> > > >> > > > vncpasswd=""
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > Copied from xen-users list
> > > >> > > > ###########################################################
> > > >> > > >
> > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI
> > > >> device.
> > > >> > > >
> > > >> > > >
> > > >> > > > I rebooted the host and assigned the PCI devices to pciback. The
> > > >> output
> > > >> > > > looks like:
> > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > >> > > > Loading Kernel Module 'xen-pciback'
> > > >> > > > Calling function pciback_dev for:
> > > >> > > > PCI DEVICE 0000:03:00.0
> > > >> > > > Unbinding 0000:03:00.0 from igb
> > > >> > > > Binding 0000:03:00.0 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:03:00.1
> > > >> > > > Unbinding 0000:03:00.1 from igb
> > > >> > > > Binding 0000:03:00.1 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:04:00.0
> > > >> > > > Unbinding 0000:04:00.0 from igb
> > > >> > > > Binding 0000:04:00.0 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:04:00.1
> > > >> > > > Unbinding 0000:04:00.1 from igb
> > > >> > > > Binding 0000:04:00.1 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:05:00.0
> > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > >> > > > Binding 0000:05:00.0 to pciback
> > > >> > > >
> > > >> > > > PCI DEVICE 0000:05:00.1
> > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > >> > > > Binding 0000:05:00.1 to pciback
> > > >> > > >
> > > >> > > > Listing PCI Devices Available to Xen
> > > >> > > > 0000:03:00.0
> > > >> > > > 0000:03:00.1
> > > >> > > > 0000:04:00.0
> > > >> > > > 0000:04:00.1
> > > >> > > > 0000:05:00.0
> > > >> > > > 0000:05:00.1
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > >> > > > WARNING: ignoring device_model directive.
> > > >> > > > WARNING: Use "device_model_override" instead if you really want
> > a
> > > >> > > > non-default device_model
> > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > 0x210c360:
> > > >> create:
> > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > > >> Disk
> > > >> > > > vdev=hda spec.backend=unknown
> > > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> > > >> Disk
> > > >> > > > vdev=hda, using backend phy
> > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > > >> > > bootloader
> > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not
> > a PV
> > > >> > > > domain, skipping bootloader
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210c728: deregister unregistered
> > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New
> > best
> > > >> NUMA
> > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > >> > > > free_memkb=2980
> > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > > >> > > candidate
> > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > >> > > >   4KB PAGES: 0x0000000000000200
> > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > >> > > >   1GB PAGES: 0x0000000000000000
> > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > >> 0x7f022c81682d
> > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > > >> Disk
> > > >> > > > vdev=hda spec.backend=phy
> > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > watch
> > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > token=3/0:
> > > >> > > > register slotnum=3
> > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > 0x210c360:
> > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x2112f48
> > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > waiting
> > > >> > > state 1
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x2112f48
> > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > token=3/0:
> > > >> > > > deregister slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x2112f48: deregister unregistered
> > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > >> script:
> > > >> > > > /etc/xen/scripts/block add
> > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > > >> > > device-model
> > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > /usr/bin/qemu-system-i386
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -xen-domid
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > chardev=libxl-cmd,mode=control
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > ubuntu-hvm-0
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > 0.0.0.0:0
> > > >> ,to=99
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> isa-fdc.driveA=
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> vga.vram_size_mb=8
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > 2,maxcpus=2
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > >> > > >
> > > >> > >
> > > >>
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > watch
> > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > token=3/1:
> > > >> > > register
> > > >> > > > slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210c960
> > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > >> > > > epath=/local/domain/0/device-model/2/state
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210c960
> > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > >> > > > epath=/local/domain/0/device-model/2/state
> > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > token=3/1:
> > > >> > > > deregister slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210c960: deregister unregistered
> > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected
> > to
> > > >> > > > /var/run/xen/qmp-libxl-2
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type: qmp
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "qmp_capabilities",
> > > >> > > >     "id": 1
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "query-chardev",
> > > >> > > >     "id": 2
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "change",
> > > >> > > >     "id": 3,
> > > >> > > >     "arguments": {
> > > >> > > >         "device": "vnc",
> > > >> > > >         "target": "password",
> > > >> > > >         "arg": ""
> > > >> > > >     }
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "query-vnc",
> > > >> > > >     "id": 4
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > watch
> > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > token=3/2:
> > > >> > > register
> > > >> > > > slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210e8a8
> > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> > waiting
> > > >> state
> > > >> > > 1
> > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > w=0x210e8a8
> > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > token=3/2:
> > > >> > > > deregister slotnum=3
> > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > watch
> > > >> > > > w=0x210e8a8: deregister unregistered
> > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > >> script:
> > > >> > > > /etc/xen/scripts/vif-bridge online
> > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > >> script:
> > > >> > > > /etc/xen/scripts/vif-bridge add
> > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected
> > to
> > > >> > > > /var/run/xen/qmp-libxl-2
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type: qmp
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "qmp_capabilities",
> > > >> > > >     "id": 1
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type:
> > > >> return
> > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > command: '{
> > > >> > > >     "execute": "device_add",
> > > >> > > >     "id": 2,
> > > >> > > >     "arguments": {
> > > >> > > >         "driver": "xen-pci-passthrough",
> > > >> > > >         "id": "pci-pt-03_00.0",
> > > >> > > >         "hostaddr": "0000:03:00.0"
> > > >> > > >     }
> > > >> > > > }
> > > >> > > > '
> > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > > >> Connection
> > > >> > > reset
> > > >> > > > by peer
> > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> > > >> error:
> > > >> > > > Connection refused
> > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> > > >> error:
> > > >> > > > Connection refused
> > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection
> > > >> error:
> > > >> > > > Connection refused
> > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > Creating pci
> > > >> > > backend
> > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
> > > >> 0x210c360:
> > > >> > > > progress report: ignored
> > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > 0x210c360:
> > > >> > > > complete, rc=0
> > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > 0x210c360:
> > > >> > > destroy
> > > >> > > > Daemon running with PID 3214
> > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > >> releases:793
> > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > >> allocations:4
> > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > >> > > > CPU #0:
> > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > >> > > > GDT=     00000000 0000ffff
> > > >> > > > IDT=     00000000 0000ffff
> > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > >> > > > EFER=0000000000000000
> > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > >> > > > XMM00=00000000000000000000000000000000
> > > >> > > > XMM01=00000000000000000000000000000000
> > > >> > > > XMM02=00000000000000000000000000000000
> > > >> > > > XMM03=00000000000000000000000000000000
> > > >> > > > XMM04=00000000000000000000000000000000
> > > >> > > > XMM05=00000000000000000000000000000000
> > > >> > > > XMM06=00000000000000000000000000000000
> > > >> > > > XMM07=00000000000000000000000000000000
> > > >> > > > CPU #1:
> > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > >> > > > GDT=     00000000 0000ffff
> > > >> > > > IDT=     00000000 0000ffff
> > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > >> > > > EFER=0000000000000000
> > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > >> > > > XMM00=00000000000000000000000000000000
> > > >> > > > XMM01=00000000000000000000000000000000
> > > >> > > > XMM02=00000000000000000000000000000000
> > > >> > > > XMM03=00000000000000000000000000000000
> > > >> > > > XMM04=00000000000000000000000000000000
> > > >> > > > XMM05=00000000000000000000000000000000
> > > >> > > > XMM06=00000000000000000000000000000000
> > > >> > > > XMM07=00000000000000000000000000000000
> > > >> > > >
> > > >> > > > ###########################################################
> > > >> > > > /etc/default/grub
> > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > >> > > > GRUB_TIMEOUT=10
> > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > >> > > > GRUB_CMDLINE_LINUX=""
> > > >> > > > # biosdevname=0
> > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > > >> > >
> > > >> > > > _______________________________________________
> > > >> > > > Xen-devel mailing list
> > > >> > > > Xen-devel@lists.xen.org
> > > >> > > > http://lists.xen.org/xen-devel
> > > >> > >
> > > >> > >
> > > >>
> > > >
> > > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 21:22:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBssJ-0002mf-IY; Fri, 07 Feb 2014 21:22:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterxianggao@gmail.com>) id 1WBspx-0002m8-1U
	for Xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 21:19:49 +0000
Received: from [193.109.254.147:55587] by server-3.bemta-14.messagelabs.com id
	0B/5A-00432-4FD45F25; Fri, 07 Feb 2014 21:19:48 +0000
X-Env-Sender: peterxianggao@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1391807986!2866053!1
X-Originating-IP: [209.85.219.54]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22976 invoked from network); 7 Feb 2014 21:19:47 -0000
Received: from mail-oa0-f54.google.com (HELO mail-oa0-f54.google.com)
	(209.85.219.54)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 21:19:47 -0000
Received: by mail-oa0-f54.google.com with SMTP id i4so4902424oah.13
	for <Xen-devel@lists.xenproject.org>;
	Fri, 07 Feb 2014 13:19:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=e08BV5wklO9Rr80iz+p4iHQ1W2unyFziyJY1MImqFXg=;
	b=K11a4dPNsQh4jPE10yM9BEDEmO4cYnnWqP8JsgQhyCdorKlJcFpKOwZhqZ5uHxRX1e
	pqZFExtnP01fV9MhwqceQ/0FJVp4usF24ppEyDkZ7P4DX3FXyC8fdfCiz6081nJK0yC0
	tVInGFHWppbbvMameP0km/2Az4jd0DaVVKt72o1Hf74lNJeFmN90hXNwaphWWg0QqWkc
	Xk228rBJGci2ypz35Tbm8C+Dk8FzIvXMMxm3KMYKw8M051HlS1XmS7z/pXtZ2g+sdHfO
	CtWmjCh29UsKwrn1LxKjv6KeHZ0bt0CTuN62lAu1t6kgV53PMKJAlg6fT0P6PF8GPoWX
	gRmA==
MIME-Version: 1.0
X-Received: by 10.60.54.138 with SMTP id j10mr14569772oep.51.1391807985893;
	Fri, 07 Feb 2014 13:19:45 -0800 (PST)
Received: by 10.182.33.34 with HTTP; Fri, 7 Feb 2014 13:19:45 -0800 (PST)
Date: Fri, 7 Feb 2014 13:19:45 -0800
Message-ID: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
From: "Peter X. Gao" <peterxianggao@gmail.com>
To: Xen-devel@lists.xenproject.org
X-Mailman-Approved-At: Fri, 07 Feb 2014 21:22:14 +0000
Subject: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0323673784855029180=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0323673784855029180==
Content-Type: multipart/alternative; boundary=089e0112cd329846b404f1d78c70

--089e0112cd329846b404f1d78c70
Content-Type: text/plain; charset=ISO-8859-1

Hi,

       I am new to Xen and I am trying to run Intel DPDK inside a domU with
virtio on Xen 4.2. Is it possible to do this?

Regards
Peter

--089e0112cd329846b404f1d78c70
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi,<div><br></div><div>=A0 =A0 =A0 =A0I am new to Xen and =
I am trying to run Intel DPDK inside a domU with virtio on Xen 4.2. Is it p=
ossible to do this?=A0</div><div><br></div><div>Regards</div><div>Peter</di=
v></div>

--089e0112cd329846b404f1d78c70--


--===============0323673784855029180==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0323673784855029180==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 21:27:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBsxA-0003Ad-Cl; Fri, 07 Feb 2014 21:27:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WBsx8-0003AW-Uo
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:27:15 +0000
Received: from [85.158.139.211:63018] by server-17.bemta-5.messagelabs.com id
	33/F2-31975-2BF45F25; Fri, 07 Feb 2014 21:27:14 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391808432!2489764!1
X-Originating-IP: [216.32.180.30]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16063 invoked from network); 7 Feb 2014 21:27:13 -0000
Received: from va3ehsobe010.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.30)
	by server-11.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	7 Feb 2014 21:27:13 -0000
Received: from mail183-va3-R.bigfish.com (10.7.14.239) by
	VA3EHSOBE007.bigfish.com (10.7.40.11) with Microsoft SMTP Server id
	14.1.225.22; Fri, 7 Feb 2014 21:27:12 +0000
Received: from mail183-va3 (localhost [127.0.0.1])	by
	mail183-va3-R.bigfish.com (Postfix) with ESMTP id EB57B2C04E8;
	Fri,  7 Feb 2014 21:27:11 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(z579eh37d5kz98dI1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h944hd25hd2bhf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail183-va3 (localhost.localdomain [127.0.0.1]) by mail183-va3
	(MessageSwitch) id 1391808430173274_5018;
	Fri,  7 Feb 2014 21:27:10 +0000 (UTC)
Received: from VA3EHSMHS013.bigfish.com (unknown [10.7.14.228])	by
	mail183-va3.bigfish.com (Postfix) with ESMTP id 1AC6838004B;
	Fri,  7 Feb 2014 21:27:10 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS013.bigfish.com
	(10.7.99.23) with Microsoft SMTP Server id 14.16.227.3; Fri, 7 Feb 2014
	21:27:05 +0000
X-WSS-ID: 0N0NA94-07-47S-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	29C5112C0023;	Fri,  7 Feb 2014 15:27:03 -0600 (CST)
Received: from SATLEXDAG03.amd.com (10.181.40.7) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Fri, 7 Feb 2014 15:27:24 -0600
Received: from arav-dinar (10.180.168.240) by satlexdag03.amd.com
	(10.181.40.7) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Fri, 7 Feb 2014 16:27:03 -0500
Date: Fri, 7 Feb 2014 15:27:25 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140207212724.GD8837@arav-dinar>
References: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52F4CBFD020000780011A2F1@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F4CBFD020000780011A2F1@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 11:05:17AM +0000, Jan Beulich wrote:
> >>> On 07.02.14 at 01:32, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> > -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> > -		v->arch.vmce.bank[1].mci_misc = val; 
> > -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> > -		break;
> > -	case MSR_F10_MC4_MISC2: /* Link error type */
> > -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> > -		/* ignore write: we do not emulate link and l3 cache errors
> > -		 * to the guest.
> > -		 */
> > -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> > -		break;
> > -	default:
> > -		return 0;
> > -	}
> > +    /* If not present, #GP fault, else do nothing as we don't emulate */
> > +    if ( !amd_thresholding_reg_present(msr) )
> > +        return -1;
> 
> The one thing I'm concerned about making this #GP in the guest is
> migration: With it being _newer_ CPUs implementing fewer of these
> MSRs, it would be impossible to migrate a guest from an older system
> to a newer one - a direction that (as long as the newer system
> provides all the hardware capabilities the older one has) is generally
> assumed to work. Bottom line - we're probably better off always
> dropping writes, and always returning zero for reads. Which will
> eliminate the need for amd_thresholding_reg_present().
> 
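
The behavior Jan suggests above (always drop writes and read back zero, instead of raising #GP) could be sketched roughly as below. This is a minimal illustrative sketch, not the actual Xen vmce code: the function names and the MSR range constants are assumptions made for the example.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the "drop writes, read zero" policy for AMD thresholding
 * MSRs, so guests stay migratable across CPU families that implement
 * different subsets of these registers.  The MSR bounds and function
 * names here are hypothetical, not the real Xen definitions.
 */
#define MC4_MISC_THRESHOLD_LO 0xc0000408u  /* hypothetical low bound  */
#define MC4_MISC_THRESHOLD_HI 0xc000040au  /* hypothetical high bound */

static int is_amd_thresholding_msr(uint32_t msr)
{
    return msr >= MC4_MISC_THRESHOLD_LO && msr <= MC4_MISC_THRESHOLD_HI;
}

/* Read: always handled, always zero -- the register is not emulated. */
static int vmce_amd_rdmsr_sketch(uint32_t msr, uint64_t *val)
{
    if ( !is_amd_thresholding_msr(msr) )
        return 0;          /* not ours; let other handlers decide */
    *val = 0;
    return 1;              /* handled: no #GP injected */
}

/* Write: always handled, value silently discarded. */
static int vmce_amd_wrmsr_sketch(uint32_t msr, uint64_t val)
{
    (void)val;             /* deliberately dropped */
    return is_amd_thresholding_msr(msr);
}
```

With this shape there is no need for a presence check: a guest migrated from an older family simply reads zeros from these MSRs instead of taking a fault.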

Before I go ahead and remove the function, a few questions:

Assuming there is a tool in the guest that accesses these MSRs,
wouldn't it be fair to expect the tool to keep in mind that these
MSRs exist only in certain families?

For example: if a guest running on F10 accesses 0xc000040a, that
would be fine. But once we migrate it to a newer family, the guest
should not even generate accesses to that MSR.

Also, returning #GP to guests keeps the behavior consistent with
hardware. If we return zero for reads, that is (IMHO) not correct
information, as the register does not even exist.

Bare-metal cases face the same problem too, but if a register doesn't
exist, shouldn't the OS/hypervisor just say so and let whoever
generated the access deal with it?

Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 21:50:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtJF-00047X-Ms; Fri, 07 Feb 2014 21:50:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBtJE-00047S-Du
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:50:04 +0000
Received: from [85.158.139.211:54187] by server-15.bemta-5.messagelabs.com id
	A0/07-24395-B0555F25; Fri, 07 Feb 2014 21:50:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391809800!2481503!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28323 invoked from network); 7 Feb 2014 21:50:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 21:50:02 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17LnwFo016644
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 21:49:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s17LnveZ013280
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 21:49:57 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17Lnvdt012136; Fri, 7 Feb 2014 21:49:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 13:49:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 006911C0972; Fri,  7 Feb 2014 16:49:55 -0500 (EST)
Date: Fri, 7 Feb 2014 16:49:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207214955.GD14908@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
	<20140207210137.GA13743@phenom.dumpdata.com>
	<CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> I did not use the patch.  I was assuming it was already patched given the
> previous email.  Is the patch for the qemu source or the xen source?

It is for QEMU, but you are right - it should already have been part
of QEMU if you got the latest version of Xen-unstable.

You didn't use a specific tag, just 'staging'?

> 
> 
> On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > Ok. I ran the initscripts and now xl works.
> > >
> > > However, I still see the same behavior as before:
> > >
> >
> > Did you use the patch that was mentioned in the URL?
> >
> > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> > reset
> > > by peer
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > root@fiat:~# xl list
> > > Name                                        ID   Mem VCPUs State Time(s)
> > > Domain-0                                     0  1024     1     r-----
> > >  15.2
> > > ubuntu-hvm-0                                 1  1025     1     ------
> > > 0.0
> > >
> > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> > > be allocated)
> > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > (XEN) Dom0 has maximum 1 VCPUs
> > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > (XEN) Scrubbing Free RAM: .............................done.
> > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > (XEN) Std. Loglevel: All
> > > (XEN) Guest Loglevel: All
> > > (XEN) Xen is relinquishing VGA console.
> > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> > > to Xen)
> > > (XEN) Freed 260kB init memory.
> > > (XEN) PCI add device 0000:00:00.0
> > > (XEN) PCI add device 0000:00:01.0
> > > (XEN) PCI add device 0000:00:1a.0
> > > (XEN) PCI add device 0000:00:1c.0
> > > (XEN) PCI add device 0000:00:1d.0
> > > (XEN) PCI add device 0000:00:1e.0
> > > (XEN) PCI add device 0000:00:1f.0
> > > (XEN) PCI add device 0000:00:1f.2
> > > (XEN) PCI add device 0000:00:1f.3
> > > (XEN) PCI add device 0000:01:00.0
> > > (XEN) PCI add device 0000:02:02.0
> > > (XEN) PCI add device 0000:02:04.0
> > > (XEN) PCI add device 0000:03:00.0
> > > (XEN) PCI add device 0000:03:00.1
> > > (XEN) PCI add device 0000:04:00.0
> > > (XEN) PCI add device 0000:04:00.1
> > > (XEN) PCI add device 0000:05:00.0
> > > (XEN) PCI add device 0000:05:00.1
> > > (XEN) PCI add device 0000:06:03.0
> > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> > > (200 of 1024)
> > > (d1) HVM Loader
> > > (d1) Detected Xen v4.4-rc2
> > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > (d1) System requested SeaBIOS
> > > (d1) CPU speed is 3093 MHz
> > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > >
> > >
> > > Excerpt from /var/log/xen/*
> > > qemu: hardware error: xen: failed to populate ram at 40050000
> > >
> > >
> > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > konrad.wilk@oracle.com> wrote:
> > >
> > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > getting the error:
> > > > >
> > > > > root@fiat:~/git/xen# xl list
> > > > > xc: error: Could not obtain handle on privileged command interface
> > (2 =
> > > > No
> > > > > such file or directory): Internal error
> > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> > No
> > > > such
> > > > > file or directory
> > > > > cannot init xl context
> > > > >
> > > > > I've google searched for this and an article appears, but is not the
> > same
> > > > > (as far as I can tell).  Running any xl command generates a similar
> > > > error.
> > > > >
> > > > > What can I do to fix this?
> > > >
> > > >
> > > > You need to run the initscripts for Xen. I don't know what your distro
> > is,
> > > > but
> > > > they are usually put in /etc/init.d/rc.d/xen*
> > > >
> > > >
> > > > >
> > > > > Regards
> > > > >
> > > > >
> > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > mikeneiderhauser@gmail.com> wrote:
> > > > >
> > > > > > Much. Do I need to install from src or is there a package I can
> > > > install.
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > konrad.wilk@oracle.com> wrote:
> > > > > >
> > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > >> > I did not.  I do not have the toolchain installed.  I may have
> > time
> > > > > >> later
> > > > > >> > today to try the patch.  Are there any specific instructions on
> > how
> > > > to
> > > > > >> > patch the src, compile and install?
> > > > > >>
> > > > > >> There actually should be a new version of Xen 4.4-rcX which will
> > have
> > > > the
> > > > > >> fix. That might be easier for you?
> > > > > >> >
> > > > > >> > Regards
> > > > > >> >
> > > > > >> >
> > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > >> > konrad.wilk@oracle.com> wrote:
> > > > > >> >
> > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser
> > wrote:
> > > > > >> > > > Hi all,
> > > > > >> > > >
> > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> > > > (4x1G
> > > > > >> NIC)
> > > > > >> > > to a
> > > > > >> > > > HVM.  I have been attempting to resolve this issue on the
> > > > xen-users
> > > > > >> list,
> > > > > >> > > > but it was advised to post this issue to this list. (Initial
> > > > > >> Message -
> > > > > >> > > >
> > > > > >> > >
> > > > > >>
> > > >
> > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > > > > >> )
> > > > > >> > > >
> > > > > >> > > > The machine I am using as host is a Dell Poweredge server
> > with a
> > > > > >> Xeon
> > > > > >> > > > E31220 with 4GB of ram.
> > > > > >> > > >
> > > > > >> > > > The possible bug is the following:
> > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > 40030000
> > > > > >> > > > ....
> > > > > >> > > >
> > > > > >> > > > I believe it may be similar to this thread
> > > > > >> > > >
> > > > > >> > >
> > > > > >>
> > > >
> > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > Additional info that may be helpful is below.
> > > > > >> > >
> > > > > >> > > Did you try the patch?
> > > > > >> > > >
> > > > > >> > > > Please let me know if you need any additional information.
> > > > > >> > > >
> > > > > >> > > > Thanks in advance for any help provided!
> > > > > >> > > > Regards
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > ###########################################################
> > > > > >> > > > # Configuration file for Xen HVM
> > > > > >> > > >
> > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > >> > > > name="ubuntu-hvm-0"
> > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > >> > > > builder='hvm'
> > > > > >> > > > device_model='qemu-dm'
> > > > > >> > > > memory=1024
> > > > > >> > > > vcpus=2
> > > > > >> > > >
> > > > > >> > > > # Virtual Interface
> > > > > >> > > > # Network bridge to USB NIC
> > > > > >> > > > vif=['bridge=xenbr0']
> > > > > >> > > >
> > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > >> > > > # PCI Permissive mode toggle
> > > > > >> > > > #pci_permissive=1
> > > > > >> > > >
> > > > > >> > > > # All PCI Devices
> > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > > > >> '05:00.1']
> > > > > >> > > >
> > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > >> > > >
> > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > >> > > >
> > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > >> > > >
> > > > > >> > > > # Broadcom 2x1G NIC
> > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > >> > > >
> > > > > >> > > > # HVM Disks
> > > > > >> > > > # Hard disk only
> > > > > >> > > > # Boot from HDD first ('c')
> > > > > >> > > > boot="c"
> > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > >> > > >
> > > > > >> > > > # Hard disk with ISO
> > > > > >> > > > # Boot from ISO first ('d')
> > > > > >> > > > #boot="d"
> > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > >> > > >
> > > > > >> > > > # ACPI Enable
> > > > > >> > > > acpi=1
> > > > > >> > > > # HVM Event Modes
> > > > > >> > > > on_poweroff='destroy'
> > > > > >> > > > on_reboot='restart'
> > > > > >> > > > on_crash='restart'
> > > > > >> > > >
> > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > >> > > > sdl=0
> > > > > >> > > > serial='pty'
> > > > > >> > > >
> > > > > >> > > > # VNC Configuration
> > > > > >> > > > # Only reachable from localhost
> > > > > >> > > > vnc=1
> > > > > >> > > > vnclisten="0.0.0.0"
> > > > > >> > > > vncpasswd=""
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > Copied from xen-users list
> > > > > >> > > > ###########################################################
> > > > > >> > > >
> > > > > >> > > > It appears that it cannot obtain the RAM mapping for this
> > PCI
> > > > > >> device.
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > I rebooted the host.  I then assigned the PCI devices to
> > > > > >> > > > pciback. The output looks like:
> > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > >> > > > Calling function pciback_dev for:
> > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > >> > > >
> > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > >> > > > 0000:03:00.0
> > > > > >> > > > 0000:03:00.1
> > > > > >> > > > 0000:04:00.0
> > > > > >> > > > 0000:04:00.1
> > > > > >> > > > 0000:05:00.0
> > > > > >> > > > 0000:05:00.1
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > >> > > > WARNING: Use "device_model_override" instead if you really
> > want
> > > > a
> > > > > >> > > > non-default device_model
> > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > > 0x210c360:
> > > > > >> create:
> > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > >> > > > libxl: debug:
> > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > >> Disk
> > > > > >> > > > vdev=hda spec.backend=unknown
> > > > > >> > > > libxl: debug:
> > libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > >> Disk
> > > > > >> > > > vdev=hda, using backend phy
> > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> > running
> > > > > >> > > bootloader
> > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run:
> > not
> > > > a PV
> > > > > >> > > > domain, skipping bootloader
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210c728: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate:
> > New
> > > > best
> > > > > >> NUMA
> > > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> > nr_vcpus=3,
> > > > > >> > > > free_memkb=2980
> > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> > placement
> > > > > >> > > candidate
> > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> > memsz=0xa69a4
> > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > >> 0x7f022c81682d
> > > > > >> > > > libxl: debug:
> > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > >> Disk
> > > > > >> > > > vdev=hda spec.backend=phy
> > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > watch
> > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > token=3/0:
> > > > > >> > > > register slotnum=3
> > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > > 0x210c360:
> > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x2112f48
> > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > event
> > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > > > waiting
> > > > > >> > > state 1
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x2112f48
> > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > event
> > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > >> > > > libxl: debug:
> > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > token=3/0:
> > > > > >> > > > deregister slotnum=3
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x2112f48: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > hotplug
> > > > > >> script:
> > > > > >> > > > /etc/xen/scripts/block add
> > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> > Spawning
> > > > > >> > > device-model
> > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > /usr/bin/qemu-system-i386
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > -xen-domid
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -chardev
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > >
> > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > chardev=libxl-cmd,mode=control
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > ubuntu-hvm-0
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > 0.0.0.0:0
> > > > > >> ,to=99
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -global
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> isa-fdc.driveA=
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -serial
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > cirrus
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -global
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> vga.vram_size_mb=8
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > order=c
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > 2,maxcpus=2
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -device
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -netdev
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -drive
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > >
> > > > > >> > >
> > > > > >>
> > > >
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > watch
> > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > token=3/1:
> > > > > >> > > register
> > > > > >> > > > slotnum=3
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210c960
> > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210c960
> > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > >> > > > libxl: debug:
> > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > token=3/1:
> > > > > >> > > > deregister slotnum=3
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210c960: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > connected
> > > > to
> > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > > type: qmp
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > >> > > >     "id": 1
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "query-chardev",
> > > > > >> > > >     "id": 2
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "change",
> > > > > >> > > >     "id": 3,
> > > > > >> > > >     "arguments": {
> > > > > >> > > >         "device": "vnc",
> > > > > >> > > >         "target": "password",
> > > > > >> > > >         "arg": ""
> > > > > >> > > >     }
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "query-vnc",
> > > > > >> > > >     "id": 4
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > watch
> > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > token=3/2:
> > > > > >> > > register
> > > > > >> > > > slotnum=3
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210e8a8
> > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> > > > waiting
> > > > > >> state
> > > > > >> > > 1
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210e8a8
> > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > >> > > > libxl: debug:
> > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > token=3/2:
> > > > > >> > > > deregister slotnum=3
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210e8a8: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > hotplug
> > > > > >> script:
> > > > > >> > > > /etc/xen/scripts/vif-bridge online
> > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > hotplug
> > > > > >> script:
> > > > > >> > > > /etc/xen/scripts/vif-bridge add
> > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > connected
> > > > to
> > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > > type: qmp
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > >> > > >     "id": 1
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "device_add",
> > > > > >> > > >     "id": 2,
> > > > > >> > > >     "arguments": {
> > > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > > >> > > >     }
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read
> > > > > >> > > > error: Connection reset by peer
> > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > > > >> > > > Connection error: Connection refused
> > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > > > >> > > > Connection error: Connection refused
> > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > > > >> > > > Connection error: Connection refused
> > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > > Creating pci
> > > > > >> > > backend
> > > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report:
> > ao
> > > > > >> 0x210c360:
> > > > > >> > > > progress report: ignored
> > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > > 0x210c360:
> > > > > >> > > > complete, rc=0
> > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > > 0x210c360:
> > > > > >> > > destroy
> > > > > >> > > > Daemon running with PID 3214
> > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > > >> releases:793
> > > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > > > >> allocations:4
> > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> > toobig:4
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > >> > > > CPU #0:
> > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> > HLT=1
> > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > >> > > > GDT=     00000000 0000ffff
> > > > > >> > > > IDT=     00000000 0000ffff
> > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > >> > > > EFER=0000000000000000
> > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > >> > > > CPU #1:
> > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> > HLT=1
> > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > >> > > > GDT=     00000000 0000ffff
> > > > > >> > > > IDT=     00000000 0000ffff
> > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > >> > > > EFER=0000000000000000
> > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > /etc/default/grub
> > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > > >> > > > GRUB_TIMEOUT=10
> > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo
> > Debian`
> > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > > >> > > > # biosdevname=0
> > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
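[Editor's note] The grub stanza above is the reporter's actual configuration. When debugging passthrough problems in general, one common diagnostic (an editorial suggestion, not something proposed in this thread) is to enable verbose IOMMU logging on the Xen command line:

```
# Hypothetical diagnostic variant of the line above (not from the thread):
# add iommu=verbose, then run update-grub and reboot.
GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1 iommu=verbose"
```

After rebooting, `xl dmesg | grep -i -e iommu -e vt-d` shows whether the IOMMU/VT-d support Xen found is actually enabled.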
> > > > > >> > >
> > > > > >> > > > _______________________________________________
> > > > > >> > > > Xen-devel mailing list
> > > > > >> > > > Xen-devel@lists.xen.org
> > > > > >> > > > http://lists.xen.org/xen-devel
> > > > > >> > >
> > > > > >> > >
> > > > > >>
> > > > > >
> > > > > >
> > > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
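[Editor's note] The `dev_mgmt.sh` script whose output appears above is not included in the thread, but the unbind/bind sequence it reports is conventionally done through sysfs. A minimal dry-run sketch of that sequence follows; it only prints the writes it would perform. The device address is taken from the thread's output, and the sysfs paths are the standard kernel interface, not anything quoted from the script itself.

```shell
#!/bin/sh
# Dry-run sketch: print the sysfs writes that hand a PCI device to
# xen-pciback instead of performing them.  Remove the surrounding
# echo wrappers and run as root to do it for real.
BDF="0000:03:00.0"   # example device address from the thread
UNBIND="/sys/bus/pci/devices/$BDF/driver/unbind"

echo "modprobe xen-pciback"
echo "echo $BDF > $UNBIND"
echo "echo $BDF > /sys/bus/pci/drivers/pciback/new_slot"
echo "echo $BDF > /sys/bus/pci/drivers/pciback/bind"
```

Repeating the same three writes per device reproduces the per-device "Unbinding ... / Binding ..." output shown in the log above.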

From xen-devel-bounces@lists.xen.org Fri Feb 07 21:50:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtJF-00047X-Ms; Fri, 07 Feb 2014 21:50:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WBtJE-00047S-Du
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:50:04 +0000
Received: from [85.158.139.211:54187] by server-15.bemta-5.messagelabs.com id
	A0/07-24395-B0555F25; Fri, 07 Feb 2014 21:50:03 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391809800!2481503!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28323 invoked from network); 7 Feb 2014 21:50:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 21:50:02 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s17LnwFo016644
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 7 Feb 2014 21:49:59 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s17LnveZ013280
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 7 Feb 2014 21:49:57 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s17Lnvdt012136; Fri, 7 Feb 2014 21:49:57 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 07 Feb 2014 13:49:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 006911C0972; Fri,  7 Feb 2014 16:49:55 -0500 (EST)
Date: Fri, 7 Feb 2014 16:49:55 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140207214955.GD14908@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
	<20140207210137.GA13743@phenom.dumpdata.com>
	<CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> I did not use the patch.  I assumed it was already patched given the
> previous email.  Is the patch for qemu source or xen source?

It is for QEMU, but you are right - it should have been part of
QEMU if you got the latest version of xen-unstable.

You didn't use a specific tag, just 'staging'?
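[Editor's note] A quick way to answer this kind of "which tree/tag did you build?" question is to ask git directly. The commands below are a sketch printed rather than executed; the clone location is an assumption, not a path mentioned in the thread.

```shell
#!/bin/sh
# Hypothetical commands (printed, not executed) to check which Xen
# tree/tag a build came from.  The clone path is an assumption.
XEN_SRC="${XEN_SRC:-$HOME/xen}"

echo "cd $XEN_SRC"
echo "git rev-parse --abbrev-ref HEAD   # current branch, e.g. 'staging'"
echo "git describe --tags --always      # nearest tag, e.g. a 4.4 RC"
```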

> 
> 
> On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > OK. I ran the initscripts and now xl works.
> > >
> > > However, I still see the same behavior as before:
> > >
> >
> > Did you use the patch that was mentioned in the URL?
> >
> > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> > reset
> > > by peer
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > root@fiat:~# xl list
> > > Name                                        ID   Mem VCPUs State Time(s)
> > > Domain-0                                     0  1024     1     r-----
> > >  15.2
> > > ubuntu-hvm-0                                 1  1025     1     ------
> > > 0.0
> > >
> > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> > > be allocated)
> > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > (XEN) Dom0 has maximum 1 VCPUs
> > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > (XEN) Scrubbing Free RAM: .............................done.
> > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > (XEN) Std. Loglevel: All
> > > (XEN) Guest Loglevel: All
> > > (XEN) Xen is relinquishing VGA console.
> > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> > > to Xen)
> > > (XEN) Freed 260kB init memory.
> > > (XEN) PCI add device 0000:00:00.0
> > > (XEN) PCI add device 0000:00:01.0
> > > (XEN) PCI add device 0000:00:1a.0
> > > (XEN) PCI add device 0000:00:1c.0
> > > (XEN) PCI add device 0000:00:1d.0
> > > (XEN) PCI add device 0000:00:1e.0
> > > (XEN) PCI add device 0000:00:1f.0
> > > (XEN) PCI add device 0000:00:1f.2
> > > (XEN) PCI add device 0000:00:1f.3
> > > (XEN) PCI add device 0000:01:00.0
> > > (XEN) PCI add device 0000:02:02.0
> > > (XEN) PCI add device 0000:02:04.0
> > > (XEN) PCI add device 0000:03:00.0
> > > (XEN) PCI add device 0000:03:00.1
> > > (XEN) PCI add device 0000:04:00.0
> > > (XEN) PCI add device 0000:04:00.1
> > > (XEN) PCI add device 0000:05:00.0
> > > (XEN) PCI add device 0000:05:00.1
> > > (XEN) PCI add device 0000:06:03.0
> > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> > > (200 of 1024)
> > > (d1) HVM Loader
> > > (d1) Detected Xen v4.4-rc2
> > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > (d1) System requested SeaBIOS
> > > (d1) CPU speed is 3093 MHz
> > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > >
> > >
> > > Excerpt from /var/log/xen/*
> > > qemu: hardware error: xen: failed to populate ram at 40050000
> > >
> > >
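The "Over-allocation" line in the boot log above is plain page arithmetic: the domain is capped at 1025 MiB (the configured memory=1024 plus, presumably, 1 MiB of toolstack slack, which matches the 1025 that `xl list` reports for ubuntu-hvm-0), i.e. 262400 4-KiB pages, while the device model asked for one page more. A quick sanity check of those numbers:

```shell
# memory=1024 in the guest config plus 1 MiB of slack (an assumption,
# matching the 1025 MiB that `xl list` reports) expressed in 4 KiB pages:
pages=$(( (1024 + 1) * 1024 * 1024 / 4096 ))
echo "$pages"          # 262400, the limit in the hypervisor message
echo "$((pages + 1))"  # 262401, the request that tripped the check
```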
> > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > konrad.wilk@oracle.com> wrote:
> > >
> > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > > > > getting the error:
> > > > >
> > > > > root@fiat:~/git/xen# xl list
> > > > > xc: error: Could not obtain handle on privileged command interface
> > (2 =
> > > > No
> > > > > such file or directory): Internal error
> > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> > No
> > > > such
> > > > > file or directory
> > > > > cannot init xl context
> > > > >
> > > > > I've searched Google for this and an article appears, but it is not
> > > > > the same (as far as I can tell).  Running any xl command generates a
> > > > > similar error.
> > > > >
> > > > > What can I do to fix this?
> > > >
> > > >
> > > > You need to run the initscripts for Xen. I don't know what your distro
> > > > is, but they are usually put in /etc/init.d/rc.d/xen*
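In concrete terms, the advice above comes down to this: `xl` talks to the hypervisor through `/proc/xen/privcmd`, and the "Could not obtain handle on privileged command interface" error means that interface (or the toolstack daemons) is missing. A minimal sketch of what the Xen initscripts do — the service and module names here are assumptions and vary by distro:

```shell
#!/bin/sh
# Sketch of what the Xen initscripts do; names vary by distro (assumptions).

# xl needs /proc/xen/privcmd; make sure the privcmd module is loaded
# and xenfs is mounted.
if [ ! -e /proc/xen/privcmd ]; then
    modprobe xen-privcmd 2>/dev/null || true
    mount -t xenfs xenfs /proc/xen
fi

# Start the toolstack daemons (xenstored, xenconsoled, ...).
# On Xen 4.4 this is typically the "xencommons" initscript.
/etc/init.d/xencommons start

# Verify: this should now list Domain-0 instead of failing to open libxc.
xl list
```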
> > > >
> > > >
> > > > >
> > > > > Regards
> > > > >
> > > > >
> > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > mikeneiderhauser@gmail.com> wrote:
> > > > >
> > > > > > Much. Do I need to install from source, or is there a package I can
> > > > > > install?
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > konrad.wilk@oracle.com> wrote:
> > > > > >
> > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > >> > I did not.  I do not have the toolchain installed.  I may have
> > > > > >> > time later today to try the patch.  Are there any specific
> > > > > >> > instructions on how to patch the source, compile, and install?
> > > > > >>
> > > > > >> There actually should be a new version of Xen 4.4-rcX which will
> > > > > >> have the fix. That might be easier for you?
> > > > > >> >
> > > > > >> > Regards
> > > > > >> >
> > > > > >> >
> > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > >> > konrad.wilk@oracle.com> wrote:
> > > > > >> >
> > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser
> > wrote:
> > > > > >> > > > Hi all,
> > > > > >> > > >
> > > > > >> > > > I am attempting to do a PCI passthrough of an Intel ET card
> > > > > >> > > > (4x1G NIC) to an HVM.  I have been trying to resolve this
> > > > > >> > > > issue on the xen-users list, but it was advised to post it to
> > > > > >> > > > this list. (Initial Message -
> > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > >> > > >
> > > > > >> > > > The machine I am using as the host is a Dell PowerEdge server
> > > > > >> > > > with a Xeon E3-1220 and 4 GB of RAM.
> > > > > >> > > >
> > > > > >> > > > The possible bug is the following:
> > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > 40030000
> > > > > >> > > > ....
> > > > > >> > > >
> > > > > >> > > > I believe it may be similar to this thread
> > > > > >> > > >
> > > > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > >> > > >
> > > > > >> > > > Additional info that may be helpful is below.
> > > > > >> > >
> > > > > >> > > Did you try the patch?
> > > > > >> > > >
> > > > > >> > > > Please let me know if you need any additional information.
> > > > > >> > > >
> > > > > >> > > > Thanks in advance for any help provided!
> > > > > >> > > > Regards
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > ###########################################################
> > > > > >> > > > # Configuration file for Xen HVM
> > > > > >> > > >
> > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > >> > > > name="ubuntu-hvm-0"
> > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > >> > > > builder='hvm'
> > > > > >> > > > device_model='qemu-dm'
> > > > > >> > > > memory=1024
> > > > > >> > > > vcpus=2
> > > > > >> > > >
> > > > > >> > > > # Virtual Interface
> > > > > >> > > > # Network bridge to USB NIC
> > > > > >> > > > vif=['bridge=xenbr0']
> > > > > >> > > >
> > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > >> > > > # PCI Permissive mode toggle
> > > > > >> > > > #pci_permissive=1
> > > > > >> > > >
> > > > > >> > > > # All PCI Devices
> > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > > > >> '05:00.1']
> > > > > >> > > >
> > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > >> > > >
> > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > >> > > >
> > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > >> > > >
> > > > > >> > > > # Broadcom 2x1G NIC
> > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > >> > > >
> > > > > >> > > > # HVM Disks
> > > > > >> > > > # Hard disk only
> > > > > >> > > > # Boot from HDD first ('c')
> > > > > >> > > > boot="c"
> > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > >> > > >
> > > > > >> > > > # Hard disk with ISO
> > > > > >> > > > # Boot from ISO first ('d')
> > > > > >> > > > #boot="d"
> > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > >> > > >
> > > > > >> > > > # ACPI Enable
> > > > > >> > > > acpi=1
> > > > > >> > > > # HVM Event Modes
> > > > > >> > > > on_poweroff='destroy'
> > > > > >> > > > on_reboot='restart'
> > > > > >> > > > on_crash='restart'
> > > > > >> > > >
> > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > >> > > > sdl=0
> > > > > >> > > > serial='pty'
> > > > > >> > > >
> > > > > >> > > > # VNC Configuration
> > > > > >> > > > # Only reachable from localhost
> > > > > >> > > > vnc=1
> > > > > >> > > > vnclisten="0.0.0.0"
> > > > > >> > > > vncpasswd=""
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > Copied from the xen-users list
> > > > > >> > > > ###########################################################
> > > > > >> > > >
> > > > > >> > > > It appears that it cannot obtain the RAM mapping for this
> > > > > >> > > > PCI device.
> > > > > >> > > >
> > > > > >> > > >
> > > > > >> > > > I rebooted the host and assigned the PCI devices to pciback.
> > > > > >> > > > The output looks like:
> > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > >> > > > Calling function pciback_dev for:
> > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > >> > > >
> > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > >> > > >
> > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > >> > > > 0000:03:00.0
> > > > > >> > > > 0000:03:00.1
> > > > > >> > > > 0000:04:00.0
> > > > > >> > > > 0000:04:00.1
> > > > > >> > > > 0000:05:00.0
> > > > > >> > > > 0000:05:00.1
> > > > > >> > > >
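The dev_mgmt.sh script itself is not shown, but the unbind/bind sequence it reports corresponds to the usual sysfs dance. A sketch, under the assumption that the xen-pciback module is already loaded, run as root on the Xen host:

```shell
#!/bin/sh
# Hand one PCI function over to xen-pciback via sysfs (sketch; run as root).
BDF=0000:03:00.0   # example device taken from the listing above

# Detach the device from its current driver (igb for the Intel ports,
# bnx2 for the Broadcom ones in this report).
echo "$BDF" > "/sys/bus/pci/devices/$BDF/driver/unbind"

# Tell pciback to accept this slot, then bind the device to it.
echo "$BDF" > /sys/bus/pci/drivers/pciback/new_slot
echo "$BDF" > /sys/bus/pci/drivers/pciback/bind

# Confirm Xen can now assign the device to a guest.
xl pci-assignable-list
```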
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > >> > > > WARNING: Use "device_model_override" instead if you really
> > want
> > > > a
> > > > > >> > > > non-default device_model
> > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > > 0x210c360:
> > > > > >> create:
> > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > >> > > > libxl: debug:
> > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > >> Disk
> > > > > >> > > > vdev=hda spec.backend=unknown
> > > > > >> > > > libxl: debug:
> > libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > >> Disk
> > > > > >> > > > vdev=hda, using backend phy
> > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> > running
> > > > > >> > > bootloader
> > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run:
> > not
> > > > a PV
> > > > > >> > > > domain, skipping bootloader
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210c728: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate:
> > New
> > > > best
> > > > > >> NUMA
> > > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> > nr_vcpus=3,
> > > > > >> > > > free_memkb=2980
> > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> > placement
> > > > > >> > > candidate
> > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> > memsz=0xa69a4
> > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > >> 0x7f022c81682d
> > > > > >> > > > libxl: debug:
> > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > >> Disk
> > > > > >> > > > vdev=hda spec.backend=phy
> > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > watch
> > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > token=3/0:
> > > > > >> > > > register slotnum=3
> > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > > 0x210c360:
> > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x2112f48
> > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > event
> > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > > > waiting
> > > > > >> > > state 1
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x2112f48
> > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > event
> > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > >> > > > libxl: debug:
> > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > token=3/0:
> > > > > >> > > > deregister slotnum=3
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x2112f48: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > hotplug
> > > > > >> script:
> > > > > >> > > > /etc/xen/scripts/block add
> > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> > Spawning
> > > > > >> > > device-model
> > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > /usr/bin/qemu-system-i386
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > -xen-domid
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -chardev
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > >
> > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > chardev=libxl-cmd,mode=control
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > ubuntu-hvm-0
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > 0.0.0.0:0
> > > > > >> ,to=99
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -global
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> isa-fdc.driveA=
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -serial
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > cirrus
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -global
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> vga.vram_size_mb=8
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > order=c
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > 2,maxcpus=2
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -device
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -netdev
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > -drive
> > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > >> > > >
> > > > > >> > >
> > > > > >>
> > > >
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > watch
> > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > token=3/1:
> > > > > >> > > register
> > > > > >> > > > slotnum=3
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210c960
> > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210c960
> > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > >> > > > libxl: debug:
> > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > token=3/1:
> > > > > >> > > > deregister slotnum=3
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210c960: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > connected
> > > > to
> > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > > type: qmp
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > >> > > >     "id": 1
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "query-chardev",
> > > > > >> > > >     "id": 2
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "change",
> > > > > >> > > >     "id": 3,
> > > > > >> > > >     "arguments": {
> > > > > >> > > >         "device": "vnc",
> > > > > >> > > >         "target": "password",
> > > > > >> > > >         "arg": ""
> > > > > >> > > >     }
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "query-vnc",
> > > > > >> > > >     "id": 4
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > watch
> > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > token=3/2:
> > > > > >> > > register
> > > > > >> > > > slotnum=3
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210e8a8
> > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> > > > waiting
> > > > > >> state
> > > > > >> > > 1
> > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > w=0x210e8a8
> > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > backend
> > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > >> > > > libxl: debug:
> > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > token=3/2:
> > > > > >> > > > deregister slotnum=3
> > > > > >> > > > libxl: debug:
> > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > watch
> > > > > >> > > > w=0x210e8a8: deregister unregistered
> > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > hotplug
> > > > > >> script:
> > > > > >> > > > /etc/xen/scripts/vif-bridge online
> > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > hotplug
> > > > > >> script:
> > > > > >> > > > /etc/xen/scripts/vif-bridge add
> > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > connected
> > > > to
> > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > > type: qmp
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > >> > > >     "id": 1
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > type:
> > > > > >> return
> > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > command: '{
> > > > > >> > > >     "execute": "device_add",
> > > > > >> > > >     "id": 2,
> > > > > >> > > >     "arguments": {
> > > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > > >> > > >     }
> > > > > >> > > > }
> > > > > >> > > > '
> > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > > > > >> Connection
> > > > > >> > > reset
> > > > > >> > > > by peer
> > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > Connection
> > > > > >> error:
> > > > > >> > > > Connection refused
> > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > Connection
> > > > > >> error:
> > > > > >> > > > Connection refused
> > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > Connection
> > > > > >> error:
> > > > > >> > > > Connection refused
> > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > > Creating pci
> > > > > >> > > backend
> > > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report:
> > ao
> > > > > >> 0x210c360:
> > > > > >> > > > progress report: ignored
> > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > > 0x210c360:
> > > > > >> > > > complete, rc=0
> > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > > 0x210c360:
> > > > > >> > > destroy
> > > > > >> > > > Daemon running with PID 3214
> > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > > >> releases:793
> > > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > > > >> allocations:4
> > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> > toobig:4
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > 40030000
> > > > > >> > > > CPU #0:
> > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> > HLT=1
> > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > >> > > > GDT=     00000000 0000ffff
> > > > > >> > > > IDT=     00000000 0000ffff
> > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > >> > > > EFER=0000000000000000
> > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > >> > > > CPU #1:
> > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> > HLT=1
> > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > >> > > > GDT=     00000000 0000ffff
> > > > > >> > > > IDT=     00000000 0000ffff
> > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > >> > > > EFER=0000000000000000
> > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > /etc/default/grub
> > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > > >> > > > GRUB_TIMEOUT=10
> > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo
> > Debian`
> > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > > >> > > > # biosdevname=0
> > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > > > > >> > >
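A note for anyone reproducing this setup: edits to /etc/default/grub, like the GRUB_CMDLINE_XEN line above, take effect only after the grub configuration is regenerated and the host rebooted. Debian/Ubuntu-style commands are assumed here, since the distro is not stated:

```shell
# Regenerate /boot/grub/grub.cfg from /etc/default/grub, then reboot
# into the updated Xen command line (Debian/Ubuntu tooling assumed).
update-grub
reboot

# After the reboot, Domain-0 should reflect dom0_mem=1024M dom0_max_vcpus=1:
xl list
```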
> > > > > >> > > > _______________________________________________
> > > > > >> > > > Xen-devel mailing list
> > > > > >> > > > Xen-devel@lists.xen.org
> > > > > >> > > > http://lists.xen.org/xen-devel
> > > > > >> > >
> > > > > >> > >
> > > > > >>
> > > > > >
> > > > > >
> > > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 21:52:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtLE-0004bo-Ej; Fri, 07 Feb 2014 21:52:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBtLB-0004bX-0S
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:52:06 +0000
Received: from [85.158.137.68:9766] by server-12.bemta-3.messagelabs.com id
	E8/7B-01674-48555F25; Fri, 07 Feb 2014 21:52:04 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391809921!426337!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2352 invoked from network); 7 Feb 2014 21:52:03 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 21:52:03 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Feb 2014 21:52:00 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,803,1384300800"; d="scan'208";a="648004927"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.22])
	by fldsmtpi02.verizon.com with ESMTP; 07 Feb 2014 21:51:59 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 16:51:50 -0500
Message-Id: <1391809911-13610-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: David Scott <dave.scott@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Don Slutz <dslutz@verizon.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [BUGFIX][PATCH 0/1] Enable 4.4 to build on CentOS/RHEL
	5.10
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I only implemented Andrew Cooper's suggestion of "Applying the top macro".
There is no checking of the OCaml version.

Since the CentOS/RHEL 5.10 distro (released on 22-Oct-2013) provides
OCaml 3.09.3, the current code does not build.

Compile tested and simple tests done on CentOS 5.10 and Fedora 17.

Note: The formatting of the code was changed to be closer to the "Coding
Style for the Xen Hypervisor".  The file has a non-standard emacs
setting at the end (indent of 8, using tabs).  And the example of
code just below was not used as an example of coding style to
follow, due to its complete lack of indentation.

I feel this should have a release exception and go into 4.4 for two
reasons:

1) It is a bugfix for CentOS/RHEL 5.10 and looks to be low risk.

2) It makes the release more awesome by running on these distros.


"What functionality is being fixed / enabled by this patch?"

Xen 4.4 added usage of CAMLreturnT.  OCaml only provides this macro starting with 3.09.4.


"If there was a bug in this patch, what functionality might be broken?"

The OCaml bindings may stop functioning.  Since this is conditional
code, issues should only be possible on CentOS/RHEL 5.10.


"What is the probability that this patch has a bug?"

I feel it is low.

Mail thread:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg00665.html

Don Slutz (1):
  xenlight_stubs.c: Allow it to build with ocaml 3.09.3

 tools/ocaml/libs/xl/xenlight_stubs.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 21:52:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 21:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtLE-0004bx-RF; Fri, 07 Feb 2014 21:52:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBtLB-0004bZ-Ql
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:52:06 +0000
Received: from [85.158.137.68:9795] by server-8.bemta-3.messagelabs.com id
	0D/43-16039-58555F25; Fri, 07 Feb 2014 21:52:05 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391809921!426337!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2374 invoked from network); 7 Feb 2014 21:52:04 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 21:52:04 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 07 Feb 2014 21:52:01 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,803,1384300800"; d="scan'208";a="648004945"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.22])
	by fldsmtpi02.verizon.com with ESMTP; 07 Feb 2014 21:52:00 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 16:51:51 -0500
Message-Id: <1391809911-13610-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1391809911-13610-1-git-send-email-dslutz@verizon.com>
References: <1391809911-13610-1-git-send-email-dslutz@verizon.com>
Cc: David Scott <dave.scott@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Don Slutz <dslutz@verizon.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [BUGFIX][PATCH 1/1] xenlight_stubs.c: Allow it to build
	with ocaml 3.09.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This code was copied from:

http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/ocaml/libs/xl/xenlight_stubs.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 23f253a..8e825ae 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -35,6 +35,19 @@
 
 #include "caml_xentoollog.h"
 
+/*
+ * Starting with ocaml-3.09.3, CAMLreturn can only be used for ``value''
+ * types. CAMLreturnT was only added in 3.09.4, so we define our own
+ * version here if needed.
+ */
+#ifndef CAMLreturnT
+#define CAMLreturnT(type, result) do { \
+    type caml__temp_result = (result); \
+    caml_local_roots = caml__frame; \
+    return (caml__temp_result); \
+} while (0)
+#endif
+
 /* The following is equal to the CAMLreturn macro, but without the return */
 #define CAMLdone do{ \
 caml_local_roots = caml__frame; \
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 22:04:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:04:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtXT-0005Md-BZ; Fri, 07 Feb 2014 22:04:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WBtXS-0005MY-8n
	for xen-devel@lists.xenproject.org; Fri, 07 Feb 2014 22:04:46 +0000
Received: from [85.158.139.211:19773] by server-9.bemta-5.messagelabs.com id
	D9/F4-11237-D7855F25; Fri, 07 Feb 2014 22:04:45 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391810683!2440508!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29670 invoked from network); 7 Feb 2014 22:04:44 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 22:04:44 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 07 Feb 2014 22:04:43 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,803,1384300800"; d="scan'208";a="648015831"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.3.58])
	by fldsmtpi02.verizon.com with ESMTP; 07 Feb 2014 22:04:42 +0000
Message-ID: <52F5587A.4010608@terremark.com>
Date: Fri, 07 Feb 2014 17:04:42 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52E7D3BB02000078001179F5@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
In-Reply-To: <52E8F6B30200007800117E35@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen 4.3.2-rc1 and 4.2.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/29/14 06:40, Jan Beulich wrote:
> All,
>
> aiming at releases with, as before, presumably just one more RC on
> each of them, please test!

Tested 4.3.2-rc1 on CentOS 5.10 and Fedora 17.

CentOS 5.10 has a build issue with QEMU:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg00084.html

has more info.  For this testing I changed:


Author: Don Slutz <dslutz@verizon.com>
Date:   Fri Jan 31 22:37:04 2014 +0000

     Work around QEMU bug #1257099 on CentOS 5.10

diff --git a/tools/Makefile b/tools/Makefile
index e44a3e9..b411e60 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -187,7 +187,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
                 source=.; \
         fi; \
         cd qemu-xen-dir; \
-       $$source/configure --enable-xen --target-list=i386-softmmu \
+       $$source/configure --enable-xen --target-list=i386-softmmu --disable-smartcard-nss \
                 --prefix=$(PREFIX) \
                 --source-path=$$source \
                 --extra-cflags="-I$(XEN_ROOT)/tools/include \

and was able to use the resulting build for some simple testing.  No new 
issues were found.
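
For reference, the change amounts to adding --disable-smartcard-nss to the
qemu-xen configure invocation; a hypothetical sketch (the path is
illustrative and the command is only echoed, not run):

```shell
# Sketch of the workaround: on CentOS 5.10, --disable-smartcard-nss skips
# the libcacard/NSS code that trips QEMU bug #1257099 at build time.
QEMU_SRC=./qemu-xen-dir   # illustrative path, as in tools/Makefile
CONFIGURE_FLAGS="--enable-xen --target-list=i386-softmmu --disable-smartcard-nss"
# Echo rather than execute, since this is only a sketch:
echo "$QEMU_SRC/configure $CONFIGURE_FLAGS"
```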
      -Don Slutz

> Thanks, Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 22:06:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtZQ-0005Sb-TF; Fri, 07 Feb 2014 22:06:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBtZO-0005SU-ME
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 22:06:46 +0000
Received: from [85.158.139.211:64866] by server-10.bemta-5.messagelabs.com id
	28/96-08578-5F855F25; Fri, 07 Feb 2014 22:06:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391810804!2492913!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20427 invoked from network); 7 Feb 2014 22:06:45 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 22:06:45 -0000
Received: by mail-we0-f171.google.com with SMTP id u56so2688190wes.2
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 14:06:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Z+jB62Jhxm7FhLskJF7AHY+qtzSlNLwcJLLOCuenHvw=;
	b=BqTpsRn+mk38jCU+MEJoBNLPdD/ZYyIu5bjsIw5EZRmWng1nrKPYwH2Yose9nI2qq2
	2PzmxD9HoiSANjjIyi29gIn47B4k6TyPt/qiEhK4gnj1+g8bDe0HtqBuV+B9NmtjNzZC
	WpSNMyI/N0wxpd4FSd6k3wSV2eVPalg7lpu8hAaZjJ/TbGYJBWRW0/MmUmD5FXf1FelQ
	u8EV/YOm+WCOmdN2K5UbBFSa01+pt6CYeQrLzW5UvXnRFqamizphi2MZLMxvN6KBos3S
	8rjd2eQoccWPUVUAl1V1s34bsBBFLfYnFSU7432Wx5h7Vo0McC1UrZ7n3XE0qtoutD/G
	C6wA==
X-Gm-Message-State: ALoCoQkxRwB/8VDCrdCYjuHA3H7e5il8NLqrs2k3ONDLfOgYuz6BvWw5IKCCNUunF8O+CTYMQ5tU
X-Received: by 10.194.173.163 with SMTP id bl3mr147177wjc.73.1391810804501;
	Fri, 07 Feb 2014 14:06:44 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id u6sm11809589wif.6.2014.02.07.14.06.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 14:06:43 -0800 (PST)
Message-ID: <52F558F2.9020801@linaro.org>
Date: Fri, 07 Feb 2014 22:06:42 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-1-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1391799378-31664-1-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 1/4] xen/arm: remove unused virtual
 parameter from vgic_vcpu_inject_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 18:56, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
>   xen/arch/arm/domain.c     |    2 +-
>   xen/arch/arm/gic.c        |    2 +-
>   xen/arch/arm/irq.c        |    2 +-
>   xen/arch/arm/time.c       |    2 +-
>   xen/arch/arm/vgic.c       |    4 ++--
>   xen/arch/arm/vtimer.c     |    4 ++--
>   xen/include/asm-arm/gic.h |    2 +-
>   7 files changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
From xen-devel-bounces@lists.xen.org Fri Feb 07 22:06:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtZQ-0005Sb-TF; Fri, 07 Feb 2014 22:06:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBtZO-0005SU-ME
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 22:06:46 +0000
Received: from [85.158.139.211:64866] by server-10.bemta-5.messagelabs.com id
	28/96-08578-5F855F25; Fri, 07 Feb 2014 22:06:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1391810804!2492913!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20427 invoked from network); 7 Feb 2014 22:06:45 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 22:06:45 -0000
Received: by mail-we0-f171.google.com with SMTP id u56so2688190wes.2
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 14:06:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Z+jB62Jhxm7FhLskJF7AHY+qtzSlNLwcJLLOCuenHvw=;
	b=BqTpsRn+mk38jCU+MEJoBNLPdD/ZYyIu5bjsIw5EZRmWng1nrKPYwH2Yose9nI2qq2
	2PzmxD9HoiSANjjIyi29gIn47B4k6TyPt/qiEhK4gnj1+g8bDe0HtqBuV+B9NmtjNzZC
	WpSNMyI/N0wxpd4FSd6k3wSV2eVPalg7lpu8hAaZjJ/TbGYJBWRW0/MmUmD5FXf1FelQ
	u8EV/YOm+WCOmdN2K5UbBFSa01+pt6CYeQrLzW5UvXnRFqamizphi2MZLMxvN6KBos3S
	8rjd2eQoccWPUVUAl1V1s34bsBBFLfYnFSU7432Wx5h7Vo0McC1UrZ7n3XE0qtoutD/G
	C6wA==
X-Gm-Message-State: ALoCoQkxRwB/8VDCrdCYjuHA3H7e5il8NLqrs2k3ONDLfOgYuz6BvWw5IKCCNUunF8O+CTYMQ5tU
X-Received: by 10.194.173.163 with SMTP id bl3mr147177wjc.73.1391810804501;
	Fri, 07 Feb 2014 14:06:44 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id u6sm11809589wif.6.2014.02.07.14.06.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 14:06:43 -0800 (PST)
Message-ID: <52F558F2.9020801@linaro.org>
Date: Fri, 07 Feb 2014 22:06:42 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-1-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1391799378-31664-1-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 1/4] xen/arm: remove unused virtual
 parameter from vgic_vcpu_inject_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 18:56, Stefano Stabellini wrote:
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

> ---
>   xen/arch/arm/domain.c     |    2 +-
>   xen/arch/arm/gic.c        |    2 +-
>   xen/arch/arm/irq.c        |    2 +-
>   xen/arch/arm/time.c       |    2 +-
>   xen/arch/arm/vgic.c       |    4 ++--
>   xen/arch/arm/vtimer.c     |    4 ++--
>   xen/include/asm-arm/gic.h |    2 +-
>   7 files changed, 9 insertions(+), 9 deletions(-)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 635a9a4..244738d 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -791,7 +791,7 @@ void vcpu_mark_events_pending(struct vcpu *v)
>       if ( already_pending )
>           return;
>
> -    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq, 1);
> +    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
>   }
>
>   /*
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 50b3a38..acf7195 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -748,7 +748,7 @@ int gic_events_need_delivery(void)
>   void gic_inject(void)
>   {
>       if ( vcpu_info(current, evtchn_upcall_pending) )
> -        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
> +        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
>
>       gic_restore_pending_irqs(current);
>       if (!gic_events_need_delivery())
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3e326b0..5daa269 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -159,7 +159,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
>           desc->arch.eoi_cpu = smp_processor_id();
>
>           /* XXX: inject irq into all guest vcpus */
> -        vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
> +        vgic_vcpu_inject_irq(d->vcpu[0], irq);
>           goto out_no_end;
>       }
>
> diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
> index 68b939d..0548201 100644
> --- a/xen/arch/arm/time.c
> +++ b/xen/arch/arm/time.c
> @@ -215,7 +215,7 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>   {
>       current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
>       WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
> -    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
> +    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
>   }
>
>   /* Route timer's IRQ on this CPU */
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index 90e9707..7d10227 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -455,7 +455,7 @@ static int vgic_to_sgi(struct vcpu *v, register_t sgir)
>                        sgir, vcpu_mask);
>               continue;
>           }
> -        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq, 1);
> +        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq);
>       }
>       return 1;
>   }
> @@ -683,7 +683,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
>       spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
>   }
>
> -void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
> +void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
>   {
>       int idx = irq >> 2, byte = irq & 0x3;
>       uint8_t priority;
> diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
> index e325f78..87be11e 100644
> --- a/xen/arch/arm/vtimer.c
> +++ b/xen/arch/arm/vtimer.c
> @@ -34,14 +34,14 @@ static void phys_timer_expired(void *data)
>       struct vtimer *t = data;
>       t->ctl |= CNTx_CTL_PENDING;
>       if ( !(t->ctl & CNTx_CTL_MASK) )
> -        vgic_vcpu_inject_irq(t->v, t->irq, 1);
> +        vgic_vcpu_inject_irq(t->v, t->irq);
>   }
>
>   static void virt_timer_expired(void *data)
>   {
>       struct vtimer *t = data;
>       t->ctl |= CNTx_CTL_MASK;
> -    vgic_vcpu_inject_irq(t->v, t->irq, 1);
> +    vgic_vcpu_inject_irq(t->v, t->irq);
>   }
>
>   int vcpu_domain_init(struct domain *d)
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 071280b..6fce5c2 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -162,7 +162,7 @@ extern void domain_vgic_free(struct domain *d);
>
>   extern int vcpu_vgic_init(struct vcpu *v);
>
> -extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
> +extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
>   extern void vgic_clear_pending_irqs(struct vcpu *v);
>   extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
>
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 22:09:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtcH-0005ie-IN; Fri, 07 Feb 2014 22:09:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBszs-0003fo-Ja
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:30:10 +0000
Received: from [85.158.143.35:42364] by server-2.bemta-4.messagelabs.com id
	F6/07-10891-B5055F25; Fri, 07 Feb 2014 21:30:03 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391808599!4041548!1
X-Originating-IP: [209.85.220.181]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19079 invoked from network); 7 Feb 2014 21:30:00 -0000
Received: from mail-vc0-f181.google.com (HELO mail-vc0-f181.google.com)
	(209.85.220.181)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 21:30:00 -0000
Received: by mail-vc0-f181.google.com with SMTP id ie18so3129279vcb.12
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 13:29:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=arLRtME3LMSzLGx8d81ZMsLr+D+rNc98EFiGOdIMIoI=;
	b=RCrGuxLiYoRtVVSq/FO4e2zf3GYHXf7hJgVWTbjWfLNvYV1MSJF4RK2be4avtJG//x
	X1s4y6LeiwCXggEwoBG8E5cdhBBudFol25dXEo/EWOFqTPLMNTzP6EqthDmhnlpmm33m
	DyiFfdFVjgL9pUdLgLiK91ksKG3TyNMFFprs+DlYPUe2FfaE2PPtpIbQdP7L+mns3G1P
	NxEdzn/sUc6YY02g/IKbeXw+M4u0B0VB9nmC1dnEKXbmXzLInDAhbo1p2U5RMVzb4UwH
	/D1N3BLyaHUlfjzohZ9XX+PKOpCep3yN6qz/lVGDjflYBjO/Rt4j9cJrZHooQh+IP7LA
	S/5g==
X-Received: by 10.58.170.69 with SMTP id ak5mr7727867vec.28.1391808598922;
	Fri, 07 Feb 2014 13:29:58 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 13:29:18 -0800 (PST)
In-Reply-To: <20140207210137.GA13743@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
	<20140207210137.GA13743@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 16:29:18 -0500
Message-ID: <CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 22:09:44 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6579243974944900194=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6579243974944900194==
Content-Type: multipart/alternative; boundary=047d7b86e2d622b48204f1d7b141

--047d7b86e2d622b48204f1d7b141
Content-Type: text/plain; charset=ISO-8859-1

I did not use the patch.  I assumed it had already been applied, given the
previous email.  Is the patch for the qemu source or the xen source?


On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > Ok. I ran the initscripts and now xl works.
> >
> > However, I still see the same behavior as before:
> >
>
> Did you use the patch that was mentioned in the URL?
>
> > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> reset
> > by peer
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > Connection refused
> > root@fiat:~# xl list
> > Name                                        ID   Mem VCPUs State Time(s)
> > Domain-0                                     0  1024     1     r-----  15.2
> > ubuntu-hvm-0                                 1  1025     1     ------   0.0
> >
> > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> > be allocated)
> > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > (XEN) Dom0 has maximum 1 VCPUs
> > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > (XEN) Scrubbing Free RAM: .............................done.
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) Xen is relinquishing VGA console.
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> > to Xen)
> > (XEN) Freed 260kB init memory.
> > (XEN) PCI add device 0000:00:00.0
> > (XEN) PCI add device 0000:00:01.0
> > (XEN) PCI add device 0000:00:1a.0
> > (XEN) PCI add device 0000:00:1c.0
> > (XEN) PCI add device 0000:00:1d.0
> > (XEN) PCI add device 0000:00:1e.0
> > (XEN) PCI add device 0000:00:1f.0
> > (XEN) PCI add device 0000:00:1f.2
> > (XEN) PCI add device 0000:00:1f.3
> > (XEN) PCI add device 0000:01:00.0
> > (XEN) PCI add device 0000:02:02.0
> > (XEN) PCI add device 0000:02:04.0
> > (XEN) PCI add device 0000:03:00.0
> > (XEN) PCI add device 0000:03:00.1
> > (XEN) PCI add device 0000:04:00.0
> > (XEN) PCI add device 0000:04:00.1
> > (XEN) PCI add device 0000:05:00.0
> > (XEN) PCI add device 0000:05:00.1
> > (XEN) PCI add device 0000:06:03.0
> > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> > (200 of 1024)
> > (d1) HVM Loader
> > (d1) Detected Xen v4.4-rc2
> > (d1) Xenbus rings @0xfeffc000, event channel 4
> > (d1) System requested SeaBIOS
> > (d1) CPU speed is 3093 MHz
> > (d1) Relocating guest memory for lowmem MMIO space disabled
> >
> >
> > Excerpt from /var/log/xen/*
> > qemu: hardware error: xen: failed to populate ram at 40050000
> >
> >
> > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > getting the error:
> > > >
> > > > root@fiat:~/git/xen# xl list
> > > > xc: error: Could not obtain handle on privileged command interface
> (2 =
> > > No
> > > > such file or directory): Internal error
> > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> No
> > > such
> > > > file or directory
> > > > cannot init xl context
> > > >
> > > > I've Googled this and an article appears, but it is not the
> same
> > > > (as far as I can tell).  Running any xl command generates a similar
> > > error.
> > > >
> > > > What can I do to fix this?
> > >
> > >
> > > You need to run the initscripts for Xen. I don't know what your distro
> is,
> > > but
> > > they are usually put in /etc/init.d/ or /etc/rc.d/ as xen*
> > >
> > >
> > > >
> > > > Regards
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > mikeneiderhauser@gmail.com> wrote:
> > > >
> > > > > Much. Do I need to install from src or is there a package I can
> > > install?
> > > > >
> > > > > Regards
> > > > >
> > > > >
> > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > konrad.wilk@oracle.com> wrote:
> > > > >
> > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > >> > I did not.  I do not have the toolchain installed.  I may have
> time
> > > > >> later
> > > > >> > today to try the patch.  Are there any specific instructions on
> how
> > > to
> > > > >> > patch the src, compile and install?
> > > > >>
> > > > >> There actually should be a new version of Xen 4.4-rcX which will
> have
> > > the
> > > > >> fix. That might be easier for you?
> > > > >> >
> > > > >> > Regards
> > > > >> >
> > > > >> >
> > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > >> > konrad.wilk@oracle.com> wrote:
> > > > >> >
> > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser
> wrote:
> > > > >> > > > Hi all,
> > > > >> > > >
> > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> > > (4x1G
> > > > >> NIC)
> > > > >> > > to a
> > > > >> > > > HVM.  I have been attempting to resolve this issue on the
> > > xen-users
> > > > >> list,
> > > > >> > > > but it was advised to post this issue to this list. (Initial
> > > > >> Message -
> > > > >> > > >
> > > > >> > >
> > > > >>
> > >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > > > >> )
> > > > >> > > >
> > > > >> > > > The machine I am using as host is a Dell Poweredge server
> with a
> > > > >> Xeon
> > > > >> > > > E31220 with 4GB of ram.
> > > > >> > > >
> > > > >> > > > The possible bug is the following:
> > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> 40030000
> > > > >> > > > ....
> > > > >> > > >
> > > > >> > > > I believe it may be similar to this thread
> > > > >> > > >
> > > > >> > >
> > > > >>
> > >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > Additional info that may be helpful is below.
> > > > >> > >
> > > > >> > > Did you try the patch?
> > > > >> > > >
> > > > >> > > > Please let me know if you need any additional information.
> > > > >> > > >
> > > > >> > > > Thanks in advance for any help provided!
> > > > >> > > > Regards
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > >> > > > ###########################################################
> > > > >> > > > # Configuration file for Xen HVM
> > > > >> > > >
> > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > >> > > > name="ubuntu-hvm-0"
> > > > >> > > > # HVM Build settings (+ hardware)
> > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > >> > > > builder='hvm'
> > > > >> > > > device_model='qemu-dm'
> > > > >> > > > memory=1024
> > > > >> > > > vcpus=2
> > > > >> > > >
> > > > >> > > > # Virtual Interface
> > > > >> > > > # Network bridge to USB NIC
> > > > >> > > > vif=['bridge=xenbr0']
> > > > >> > > >
> > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > >> > > > # PCI Permissive mode toggle
> > > > >> > > > #pci_permissive=1
> > > > >> > > >
> > > > >> > > > # All PCI Devices
> > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > > >> '05:00.1']
> > > > >> > > >
> > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > >> > > >
> > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > >> > > >
> > > > >> > > > # All ports on Intel 4x1G NIC
> > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > >> > > >
> > > > >> > > > # Broadcom 2x1G NIC
> > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > >> > > >
> > > > >> > > > # HVM Disks
> > > > >> > > > # Hard disk only
> > > > >> > > > # Boot from HDD first ('c')
> > > > >> > > > boot="c"
> > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > >> > > >
> > > > >> > > > # Hard disk with ISO
> > > > >> > > > # Boot from ISO first ('d')
> > > > >> > > > #boot="d"
> > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > >> > > >
> > > > >> > > > # ACPI Enable
> > > > >> > > > acpi=1
> > > > >> > > > # HVM Event Modes
> > > > >> > > > on_poweroff='destroy'
> > > > >> > > > on_reboot='restart'
> > > > >> > > > on_crash='restart'
> > > > >> > > >
> > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > >> > > > sdl=0
> > > > >> > > > serial='pty'
> > > > >> > > >
> > > > >> > > > # VNC Configuration
> > > > >> > > > # Only reachable from localhost
> > > > >> > > > vnc=1
> > > > >> > > > vnclisten="0.0.0.0"
> > > > >> > > > vncpasswd=""
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > Copied for xen-users list
> > > > >> > > > ###########################################################
> > > > >> > > >
> > > > >> > > > It appears that it cannot obtain the RAM mapping for this
> PCI
> > > > >> device.
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > I rebooted the host and assigned the pci devices to
> pciback. The
> > > > >> output
> > > > >> > > > looks like:
> > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > >> > > > Calling function pciback_dev for:
> > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > >> > > >
> > > > >> > > > Listing PCI Devices Available to Xen
> > > > >> > > > 0000:03:00.0
> > > > >> > > > 0000:03:00.1
> > > > >> > > > 0000:04:00.0
> > > > >> > > > 0000:04:00.1
> > > > >> > > > 0000:05:00.0
> > > > >> > > > 0000:05:00.1
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > >> > > > WARNING: ignoring device_model directive.
> > > > >> > > > WARNING: Use "device_model_override" instead if you really
> want
> > > a
> > > > >> > > > non-default device_model
> > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > 0x210c360:
> > > > >> create:
> > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > >> > > > libxl: debug:
> libxl_device.c:257:libxl__device_disk_set_backend:
> > > > >> Disk
> > > > >> > > > vdev=hda spec.backend=unknown
> > > > >> > > > libxl: debug:
> libxl_device.c:296:libxl__device_disk_set_backend:
> > > > >> Disk
> > > > >> > > > vdev=hda, using backend phy
> > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> running
> > > > >> > > bootloader
> > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run:
> not
> > > a PV
> > > > >> > > > domain, skipping bootloader
> > > > >> > > > libxl: debug:
> libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x210c728: deregister unregistered
> > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate:
> New
> > > best
> > > > >> NUMA
> > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> nr_vcpus=3,
> > > > >> > > > free_memkb=2980
> > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> placement
> > > > >> > > candidate
> > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> memsz=0xa69a4
> > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > >> 0x7f022c81682d
> > > > >> > > > libxl: debug:
> libxl_device.c:257:libxl__device_disk_set_backend:
> > > > >> Disk
> > > > >> > > > vdev=hda spec.backend=phy
> > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > watch
> > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > token=3/0:
> > > > >> > > > register slotnum=3
> > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > >> > > >     "execute": "qmp_capabilities",
> > > > >> > > >     "id": 1
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > >> > > >     "execute": "query-chardev",
> > > > >> > > >     "id": 2
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > >> > > >     "execute": "change",
> > > > >> > > >     "id": 3,
> > > > >> > > >     "arguments": {
> > > > >> > > >         "device": "vnc",
> > > > >> > > >         "target": "password",
> > > > >> > > >         "arg": ""
> > > > >> > > >     }
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > >> > > >     "execute": "query-vnc",
> > > > >> > > >     "id": 4
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > >> > > >     "execute": "qmp_capabilities",
> > > > >> > > >     "id": 1
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > >> > > >     "execute": "device_add",
> > > > >> > > >     "id": 2,
> > > > >> > > >     "arguments": {
> > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > >> > > >     }
> > > > >> > > > }
> > > > >> > > > '
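The QMP exchange logged above can be replayed by hand against the device model's control socket, which is useful for isolating whether the `device_add` itself kills QEMU. A minimal sketch: the socket path `/var/run/xen/qmp-libxl-2` is taken from the log, while the helper names (`qmp_cmd`, `replay_device_add`) are illustrative, not part of any Xen tool.

```python
import json
import socket

def qmp_cmd(execute, arguments=None, cmd_id=None):
    """Build one QMP command in the same shape libxl logs above:
    'execute', optional 'id', optional 'arguments'."""
    msg = {"execute": execute}
    if cmd_id is not None:
        msg["id"] = cmd_id
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg)

def replay_device_add(path="/var/run/xen/qmp-libxl-2"):
    """Hypothetical manual replay of the exchange above: connect to the
    QMP UNIX socket, read the greeting, negotiate capabilities, then send
    the same device_add after which the connection was reset."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    f = s.makefile("rw")
    print(f.readline().strip())  # {"QMP": {...}} greeting banner
    f.write(qmp_cmd("qmp_capabilities", cmd_id=1) + "\n")
    f.flush()
    print(f.readline().strip())
    f.write(qmp_cmd("device_add",
                    {"driver": "xen-pci-passthrough",
                     "id": "pci-pt-03_00.0",
                     "hostaddr": "0000:03:00.0"}, cmd_id=2) + "\n")
    f.flush()
    # An empty read here would mean QEMU closed the socket (as in the log).
    print(f.readline().strip())
    s.close()
```

If QEMU drops the connection on this command alone, the fault is in the device model rather than in libxl's driving of it.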
> > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > > > >> Connection
> > > > >> > > reset
> > > > >> > > > by peer
> > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> Connection
> > > > >> error:
> > > > >> > > > Connection refused
> > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> Connection
> > > > >> error:
> > > > >> > > > Connection refused
> > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> Connection
> > > > >> error:
> > > > >> > > > Connection refused
> > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > Creating pci
> > > > >> > > backend
> > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report:
> ao
> > > > >> 0x210c360:
> > > > >> > > > progress report: ignored
> > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > 0x210c360:
> > > > >> > > > complete, rc=0
> > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > 0x210c360:
> > > > >> > > destroy
> > > > >> > > > Daemon running with PID 3214
> > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > >> releases:793
> > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > > >> allocations:4
> > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> toobig:4
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> 40030000
> > > > >> > > > CPU #0:
> > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> HLT=1
> > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > >> > > > GDT=     00000000 0000ffff
> > > > >> > > > IDT=     00000000 0000ffff
> > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > >> > > > EFER=0000000000000000
> > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > >> > > > XMM00=00000000000000000000000000000000
> > > > >> > > > XMM01=00000000000000000000000000000000
> > > > >> > > > XMM02=00000000000000000000000000000000
> > > > >> > > > XMM03=00000000000000000000000000000000
> > > > >> > > > XMM04=00000000000000000000000000000000
> > > > >> > > > XMM05=00000000000000000000000000000000
> > > > >> > > > XMM06=00000000000000000000000000000000
> > > > >> > > > XMM07=00000000000000000000000000000000
> > > > >> > > > CPU #1:
> > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> HLT=1
> > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > >> > > > GDT=     00000000 0000ffff
> > > > >> > > > IDT=     00000000 0000ffff
> > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > >> > > > EFER=0000000000000000
> > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > >> > > > XMM00=00000000000000000000000000000000
> > > > >> > > > XMM01=00000000000000000000000000000000
> > > > >> > > > XMM02=00000000000000000000000000000000
> > > > >> > > > XMM03=00000000000000000000000000000000
> > > > >> > > > XMM04=00000000000000000000000000000000
> > > > >> > > > XMM05=00000000000000000000000000000000
> > > > >> > > > XMM06=00000000000000000000000000000000
> > > > >> > > > XMM07=00000000000000000000000000000000
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > /etc/default/grub
> > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > >> > > > GRUB_TIMEOUT=10
> > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo
> Debian`
> > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > >> > > > # biosdevname=0
> > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > > > >> > >
> > > > >> > > > _______________________________________________
> > > > >> > > > Xen-devel mailing list
> > > > >> > > > Xen-devel@lists.xen.org
> > > > >> > > > http://lists.xen.org/xen-devel
> > > > >> > >
> > > > >> > >
> > > > >>
> > > > >
> > > > >
> > >
>

I did not use the patch. I was assuming it was already patched given the
previous email. Is the patch for the qemu source or the xen source?

On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > Ok. I ran the initscripts and now xl works.
> >
> > However, I still see the same behavior as before:
> >
>
> Did you use the patch that was mentioned in the URL?
>
> > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > root@fiat:~# xl list
> > Name                                        ID   Mem VCPUs      State   Time(s)
> > Domain-0                                     0  1024     1     r-----      15.2
> > ubuntu-hvm-0                                 1  1025     1     ------       0.0
> >
> > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > (XEN) Dom0 has maximum 1 VCPUs
> > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > (XEN) Scrubbing Free RAM: .............................done.
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) Xen is relinquishing VGA console.
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > (XEN) Freed 260kB init memory.
> > (XEN) PCI add device 0000:00:00.0
> > (XEN) PCI add device 0000:00:01.0
> > (XEN) PCI add device 0000:00:1a.0
> > (XEN) PCI add device 0000:00:1c.0
> > (XEN) PCI add device 0000:00:1d.0
> > (XEN) PCI add device 0000:00:1e.0
> > (XEN) PCI add device 0000:00:1f.0
> > (XEN) PCI add device 0000:00:1f.2
> > (XEN) PCI add device 0000:00:1f.3
> > (XEN) PCI add device 0000:01:00.0
> > (XEN) PCI add device 0000:02:02.0
> > (XEN) PCI add device 0000:02:04.0
> > (XEN) PCI add device 0000:03:00.0
> > (XEN) PCI add device 0000:03:00.1
> > (XEN) PCI add device 0000:04:00.0
> > (XEN) PCI add device 0000:04:00.1
> > (XEN) PCI add device 0000:05:00.0
> > (XEN) PCI add device 0000:05:00.1
> > (XEN) PCI add device 0000:06:03.0
> > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > (d1) HVM Loader
> > (d1) Detected Xen v4.4-rc2
> > (d1) Xenbus rings @0xfeffc000, event channel 4
> > (d1) System requested SeaBIOS
> > (d1) CPU speed is 3093 MHz
> > (d1) Relocating guest memory for lowmem MMIO space disabled
> >
> > Excerpt from /var/log/xen/*
> > qemu: hardware error: xen: failed to populate ram at 40050000
> >
> >
> > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > I was able to compile and install Xen 4.4 RC3 on my host; however,
> > > > I am getting the error:
> > > >
> > > > root@fiat:~/git/xen# xl list
> > > > xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
> > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
> > > > cannot init xl context
> > > >
> > > > I've searched Google for this and an article appears, but it is not
> > > > the same issue (as far as I can tell). Running any xl command
> > > > generates a similar error.
> > > >
> > > > What can I do to fix this?
> > >
> > > You need to run the initscripts for Xen. I don't know what your
> > > distro is, but they are usually put in /etc/init.d/rc.d/xen*
> > >
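The "Could not obtain handle on privileged command interface (2 = No such file or directory)" failure above is libxc failing to open the privcmd node, which the Xen init scripts normally make available. A minimal sketch of that check; the two node paths and the `xencommons` invocation are assumptions (they vary by distro and Xen version), and `privcmd_present` is an illustrative name, not a real tool.

```python
import os

def privcmd_present(paths=("/proc/xen/privcmd", "/dev/xen/privcmd")):
    """Check for the privileged command interface that libxc opens.
    errno 2 ('No such file or directory', as in the error above) means
    neither node exists yet."""
    return any(os.path.exists(p) for p in paths)

if __name__ == "__main__":
    if privcmd_present():
        print("privcmd available - xl should be able to open libxc")
    else:
        # Roughly what the Xen init scripts arrange (names are
        # assumptions; 'xencommons' is typical for source installs):
        print("try: mount -t xenfs xenfs /proc/xen && "
              "/etc/init.d/xencommons start")
```

This matches the outcome in the thread: once the init scripts had run, `xl` commands worked again.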
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Regards<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser &lt;<br>
&gt; &gt; &gt; <a href=3D"mailto:mikeneiderhauser@gmail.com">mikeneiderhaus=
er@gmail.com</a>&gt; wrote:<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; Much. Do I need to install from src or is there a packa=
ge I can<br>
&gt; &gt; install.<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk &=
lt;<br>
&gt; &gt; &gt; &gt; <a href=3D"mailto:konrad.wilk@oracle.com">konrad.wilk@o=
racle.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neid=
erhauser wrote:<br>
&gt; &gt; &gt; &gt;&gt; &gt; I did not. =A0I do not have the toolchain inst=
alled. =A0I may have time<br>
&gt; &gt; &gt; &gt;&gt; later<br>
&gt; &gt; &gt; &gt;&gt; &gt; today to try the patch. =A0Are there any speci=
fic instructions on how<br>
&gt; &gt; to<br>
&gt; &gt; &gt; &gt;&gt; &gt; patch the src, compile and install?<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;&gt; There actually should be a new version of Xen 4.4-r=
cX which will have<br>
&gt; &gt; the<br>
&gt; &gt; &gt; &gt;&gt; fix. That might be easier for you?<br>
&gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; Regards<br>
&gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszu=
tek Wilk &lt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; <a href=3D"mailto:konrad.wilk@oracle.com">konr=
ad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; On Thu, Feb 06, 2014 at 09:39:37AM -0500,=
 Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Hi all,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I am attempting to do a pci passthro=
ugh of an Intel ET card<br>
&gt; &gt; (4x1G<br>
&gt; &gt; &gt; &gt;&gt; NIC)<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; to a<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; HVM. =A0I have been attempting to re=
solve this issue on the<br>
&gt; &gt; xen-users<br>
&gt; &gt; &gt; &gt;&gt; list,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; but it was advised to post this issu=
e to this list. (Initial<br>
&gt; &gt; &gt; &gt;&gt; Message -<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; <a href=3D"http://lists.xenproject.org/archives/html/xen-users/20=
14-02/msg00036.html" target=3D"_blank">http://lists.xenproject.org/archives=
/html/xen-users/2014-02/msg00036.html</a><br>
&gt; &gt; &gt; &gt;&gt; )<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The machine I am using as host is a =
Dell Poweredge server with a<br>
&gt; &gt; &gt; &gt;&gt; Xeon<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; E31220 with 4GB of ram.<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The possible bug is the following:<b=
r>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-=
ubuntu-hvm-0.log<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5=
 (label serial0)<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to=
 populate ram at 40030000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ....<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I believe it may be similar to this =
thread<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; <a href=3D"http://markmail.org/message/3zuiojywempoorxj#query:+pa=
ge:1+mid:gul34vbe4uyog2d4+state:results" target=3D"_blank">http://markmail.=
org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:resul=
ts</a><br>


&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Additional info that may be helpful =
is below.<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; Did you try the patch?<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Please let me know if you need any a=
dditional information.<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Thanks in advance for any help provi=
ded!<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ####################################=
#######################<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# cat /etc/xen/ubuntu-hvm=
-0.cfg<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ####################################=
#######################<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Configuration file for Xen HVM<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Name (as appears in &#39;xl li=
st&#39;)<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; name=3D&quot;ubuntu-hvm-0&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Build settings (+ hardware)<br=
>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #kernel =3D &quot;/usr/lib/xen-4.3/b=
oot/hvmloader&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; builder=3D&#39;hvm&#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; device_model=3D&#39;qemu-dm&#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; memory=3D1024<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vcpus=3D2<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Virtual Interface<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Network bridge to USB NIC<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vif=3D[&#39;bridge=3Dxenbr0&#39;]<br=
>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH =
###################<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # PCI Permissive mode toggle<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci_permissive=3D1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All PCI Devices<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=3D[&#39;03:00.0&#39;, &#39;03:0=
0.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;, &#39;05:00.0&#39;,<br>
&gt; &gt; &gt; &gt;&gt; &#39;05:00.1&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # First two ports on Intel 4x1G NIC<=
br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=3D[&#39;03:00.0&#39;,&#39;03:00=
.1&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Last two ports on Intel 4x1G NIC<b=
r>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=3D[&#39;04:00.0&#39;, &#39;04:0=
0.1&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; pci=3D[&#39;03:00.0&#39;, &#39;03:00=
.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Brodcom 2x1G NIC<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=3D[&#39;05:00.0&#39;, &#39;05:0=
0.1&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH =
###################<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Disks<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk only<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from HDD first (&#39;c&#39;)<=
br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; boot=3D&quot;c&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; disk=3D[&#39;phy:/dev/ubuntu-vg/ubun=
tu-hvm-0,hda,w&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk with ISO<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from ISO first (&#39;d&#39;)<=
br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #boot=3D&quot;d&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #disk=3D[&#39;phy:/dev/ubuntu-vg/ubu=
ntu-hvm-0,hda,w&#39;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;file:/root/ubuntu-12.04.3-serve=
r-amd64.iso,hdc:cdrom,r&#39;]<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # ACPI Enable<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; acpi=3D1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Event Modes<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_poweroff=3D&#39;destroy&#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_reboot=3D&#39;restart&#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_crash=3D&#39;restart&#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Serial Console Configuration (Xen =
Console)<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; sdl=3D0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; serial=3D&#39;pty&#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # VNC Configuration<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Only reacable from localhost<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnc=3D1<br>
> vnclisten="0.0.0.0"
> vncpasswd=""
>
> ###########################################################
> Copied for xen-users list
> ###########################################################
>
> It appears that it cannot obtain the RAM mapping for this PCI device.
>
> I rebooted the host and ran the script that assigns PCI devices to
> pciback. The output looks like:
>
> root@fiat:~# ./dev_mgmt.sh
> Loading Kernel Module 'xen-pciback'
> Calling function pciback_dev for:
> PCI DEVICE 0000:03:00.0
> Unbinding 0000:03:00.0 from igb
> Binding 0000:03:00.0 to pciback
>
> PCI DEVICE 0000:03:00.1
> Unbinding 0000:03:00.1 from igb
> Binding 0000:03:00.1 to pciback
>
> PCI DEVICE 0000:04:00.0
> Unbinding 0000:04:00.0 from igb
> Binding 0000:04:00.0 to pciback
>
> PCI DEVICE 0000:04:00.1
> Unbinding 0000:04:00.1 from igb
> Binding 0000:04:00.1 to pciback
>
> PCI DEVICE 0000:05:00.0
> Unbinding 0000:05:00.0 from bnx2
> Binding 0000:05:00.0 to pciback
>
> PCI DEVICE 0000:05:00.1
> Unbinding 0000:05:00.1 from bnx2
> Binding 0000:05:00.1 to pciback
>
> Listing PCI Devices Available to Xen
> 0000:03:00.0
> 0000:03:00.1
> 0000:04:00.0
> 0000:04:00.1
> 0000:05:00.0
> 0000:05:00.1
>
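[Editor's note: the unbind/rebind sequence that the quoted dev_mgmt.sh output reports can be sketched roughly as below. This is a hypothetical reconstruction, not the poster's actual script; the sysfs paths assume the upstream xen-pciback driver, which registers as `pciback` and exposes `new_slot`/`bind`. Set `RUN=echo` to preview the sysfs writes instead of executing them.]

```shell
#!/bin/sh
# Hypothetical sketch of a dev_mgmt.sh-style helper: detach a PCI device
# from its current driver and hand it to xen-pciback through sysfs.
# RUN=echo turns every sysfs write into a printed command (dry run).
RUN="${RUN:-}"

pciback_dev() {
    bdf="$1"
    echo "PCI DEVICE $bdf"
    # If a driver is currently bound (e.g. igb, bnx2), unbind it first.
    if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
        drv=$(basename "$(readlink "/sys/bus/pci/devices/$bdf/driver")")
        if [ "$drv" != "pciback" ]; then
            echo "Unbinding $bdf from $drv"
            $RUN sh -c "echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind"
        fi
    fi
    echo "Binding $bdf to pciback"
    $RUN sh -c "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot"
    $RUN sh -c "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
}

# Usage: ./pciback-bind.sh 0000:03:00.0 0000:03:00.1 ...
for bdf in "$@"; do
    pciback_dev "$bdf"
done
```

After binding, `xl pci-assignable-list` should report the devices, matching the "Listing PCI Devices Available to Xen" output quoted above.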
> ###########################################################
> root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a non-default device_model
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->00000000001a69a4
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100608
> xc: info: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "change",
>     "id": 3,
>     "arguments": {
>         "device": "vnc",
>         "target": "password",
>         "arg": ""
>     }
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 4
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-03_00.0",
>         "hostaddr": "0000:03:00.0"
>     }
> }
> '
> libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> Daemon running with PID 3214
> xc: debug: hypercall buffer: total allocations:793 total releases:793
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
>
> ###########################################################
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40030000
> CPU #0:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
> CPU #1:
> EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> ES =0000 00000000 0000ffff 00009300
> CS =f000 ffff0000 0000ffff 00009b00
> SS =0000 00000000 0000ffff 00009300
> DS =0000 00000000 0000ffff 00009300
> FS =0000 00000000 0000ffff 00009300
> GS =0000 00000000 0000ffff 00009300
> LDT=0000 00000000 0000ffff 00008200
> TR =0000 00000000 0000ffff 00008b00
> GDT=     00000000 0000ffff
> IDT=     00000000 0000ffff
> CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> DR6=ffff0ff0 DR7=00000400
> EFER=0000000000000000
> FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> XMM00=00000000000000000000000000000000
> XMM01=00000000000000000000000000000000
> XMM02=00000000000000000000000000000000
> XMM03=00000000000000000000000000000000
> XMM04=00000000000000000000000000000000
> XMM05=00000000000000000000000000000000
> XMM06=00000000000000000000000000000000
> XMM07=00000000000000000000000000000000
>
> ###########################################################
> /etc/default/grub
> GRUB_DEFAULT="Xen 4.3-amd64"
> GRUB_HIDDEN_TIMEOUT=0
> GRUB_HIDDEN_TIMEOUT_QUIET=true
> GRUB_TIMEOUT=10
> GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> GRUB_CMDLINE_LINUX=""
> # biosdevname=0
> GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Fri Feb 07 22:09:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtcH-0005ie-IN; Fri, 07 Feb 2014 22:09:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBszs-0003fo-Ja
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 21:30:10 +0000
Received: from [85.158.143.35:42364] by server-2.bemta-4.messagelabs.com id
	F6/07-10891-B5055F25; Fri, 07 Feb 2014 21:30:03 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1391808599!4041548!1
X-Originating-IP: [209.85.220.181]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19079 invoked from network); 7 Feb 2014 21:30:00 -0000
Received: from mail-vc0-f181.google.com (HELO mail-vc0-f181.google.com)
	(209.85.220.181)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 21:30:00 -0000
Received: by mail-vc0-f181.google.com with SMTP id ie18so3129279vcb.12
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 13:29:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=arLRtME3LMSzLGx8d81ZMsLr+D+rNc98EFiGOdIMIoI=;
	b=RCrGuxLiYoRtVVSq/FO4e2zf3GYHXf7hJgVWTbjWfLNvYV1MSJF4RK2be4avtJG//x
	X1s4y6LeiwCXggEwoBG8E5cdhBBudFol25dXEo/EWOFqTPLMNTzP6EqthDmhnlpmm33m
	DyiFfdFVjgL9pUdLgLiK91ksKG3TyNMFFprs+DlYPUe2FfaE2PPtpIbQdP7L+mns3G1P
	NxEdzn/sUc6YY02g/IKbeXw+M4u0B0VB9nmC1dnEKXbmXzLInDAhbo1p2U5RMVzb4UwH
	/D1N3BLyaHUlfjzohZ9XX+PKOpCep3yN6qz/lVGDjflYBjO/Rt4j9cJrZHooQh+IP7LA
	S/5g==
X-Received: by 10.58.170.69 with SMTP id ak5mr7727867vec.28.1391808598922;
	Fri, 07 Feb 2014 13:29:58 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 13:29:18 -0800 (PST)
In-Reply-To: <20140207210137.GA13743@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
	<20140207210137.GA13743@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 16:29:18 -0500
Message-ID: <CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Fri, 07 Feb 2014 22:09:44 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6579243974944900194=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6579243974944900194==
Content-Type: multipart/alternative; boundary=047d7b86e2d622b48204f1d7b141

--047d7b86e2d622b48204f1d7b141
Content-Type: text/plain; charset=ISO-8859-1

I did not use the patch.  I assumed the tree was already patched, given the
previous email.  Is the patch for the qemu source or the xen source?

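One way to answer the qemu-vs-xen question without guessing is to look at the paths in the patch's diff headers: hunks touching `tools/...` belong in the Xen tree, while bare QEMU paths (`xen-all.c`, `hw/...`) belong in the qemu (device model) tree. A minimal sketch — the patch body here is a made-up example, not the actual fix:

```shell
# Decide which tree a patch targets by inspecting its diff paths.
# The patch content below is a hypothetical example.
cat > /tmp/example.patch <<'EOF'
--- a/xen-all.c
+++ b/xen-all.c
EOF

if grep -q '^--- a/tools/' /tmp/example.patch; then
  target="Xen"     # paths under tools/ live in the Xen source tree
else
  target="QEMU"    # bare QEMU paths (xen-all.c, hw/...) -> device model tree
fi
echo "patch targets the $target tree"
```

With a real patch, `git apply --check` run in each candidate tree gives a definitive answer.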

On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > Ok. I ran the initscripts and now xl works.
> >
> > However, I still see the same behavior as before:
> >
>
> Did you use the patch that was mentioned in the URL?
>
> > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> reset
> > by peer
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > Connection refused
> > root@fiat:~# xl list
> > Name                                        ID   Mem VCPUs State Time(s)
> > Domain-0                                     0  1024     1     r-----
> >  15.2
> > ubuntu-hvm-0                                 1  1025     1     ------
> > 0.0
> >
> > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> > be allocated)
> > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > (XEN) Dom0 has maximum 1 VCPUs
> > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > (XEN) Scrubbing Free RAM: .............................done.
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) Xen is relinquishing VGA console.
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> > to Xen)
> > (XEN) Freed 260kB init memory.
> > (XEN) PCI add device 0000:00:00.0
> > (XEN) PCI add device 0000:00:01.0
> > (XEN) PCI add device 0000:00:1a.0
> > (XEN) PCI add device 0000:00:1c.0
> > (XEN) PCI add device 0000:00:1d.0
> > (XEN) PCI add device 0000:00:1e.0
> > (XEN) PCI add device 0000:00:1f.0
> > (XEN) PCI add device 0000:00:1f.2
> > (XEN) PCI add device 0000:00:1f.3
> > (XEN) PCI add device 0000:01:00.0
> > (XEN) PCI add device 0000:02:02.0
> > (XEN) PCI add device 0000:02:04.0
> > (XEN) PCI add device 0000:03:00.0
> > (XEN) PCI add device 0000:03:00.1
> > (XEN) PCI add device 0000:04:00.0
> > (XEN) PCI add device 0000:04:00.1
> > (XEN) PCI add device 0000:05:00.0
> > (XEN) PCI add device 0000:05:00.1
> > (XEN) PCI add device 0000:06:03.0
> > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> > (200 of 1024)
> > (d1) HVM Loader
> > (d1) Detected Xen v4.4-rc2
> > (d1) Xenbus rings @0xfeffc000, event channel 4
> > (d1) System requested SeaBIOS
> > (d1) CPU speed is 3093 MHz
> > (d1) Relocating guest memory for lowmem MMIO space disabled
> >
> >
> > Excerpt from /var/log/xen/*
> > qemu: hardware error: xen: failed to populate ram at 40050000
> >
> >
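For what it's worth, the "Over-allocation for domain 1: 262401 > 262400" line above is plain page arithmetic: with 4 KiB pages, 262400 pages is exactly 1025 MiB — the configured 1024 MiB plus what I take to be libxl's 1 MiB of maxmem slack (that interpretation is my assumption) — and the populate request was one page over the cap. A quick sanity check:

```shell
# Reproduce the page math behind the hypervisor message above.
PAGE_KB=4                       # x86 page size in KiB
MAXMEM_MB=1025                  # what `xl list` reports for the domain
LIMIT_PAGES=$((MAXMEM_MB * 1024 / PAGE_KB))
echo "cap: $LIMIT_PAGES pages"  # 262400 -- one short of the 262401 requested
```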
> > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > I was able to compile and install Xen 4.4 RC3 on my host; however,
> > > > I am getting the following error:
> > > >
> > > > root@fiat:~/git/xen# xl list
> > > > xc: error: Could not obtain handle on privileged command interface
> > > > (2 = No such file or directory): Internal error
> > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> > > > No such file or directory
> > > > cannot init xl context
> > > >
> > > > I've searched Google for this and found an article, but it does not
> > > > describe the same issue (as far as I can tell).  Running any xl
> > > > command generates a similar error.
> > > >
> > > > What can I do to fix this?
> > >
> > >
> > > You need to run the initscripts for Xen. I don't know what your
> > > distro is, but they are usually put in /etc/init.d/ (or /etc/rc.d/)
> > > as xen*
> > >
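For a from-source install, the scripts in question are typically `xencommons` (xenstored, xenconsoled, the dom0 qemu) and `xendomains`; the exact names and locations vary by distro, so the loop below only starts what it actually finds — a sketch, not a guaranteed path:

```shell
# Start Xen's userland daemons if their sysvinit scripts are installed.
# Script names/locations are the common defaults and may differ per distro.
for s in xencommons xendomains; do
  if [ -x "/etc/init.d/$s" ]; then
    "/etc/init.d/$s" start
  else
    echo "$s: not installed in /etc/init.d"
  fi
done
```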
> > >
> > > >
> > > > Regards
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > mikeneiderhauser@gmail.com> wrote:
> > > >
> > > > > Much. Do I need to install from source, or is there a package I
> > > > > can install?
> > > > >
> > > > > Regards
> > > > >
> > > > >
> > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > konrad.wilk@oracle.com> wrote:
> > > > >
> > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > >> > I did not.  I do not have the toolchain installed.  I may
> > > > >> > have time later today to try the patch.  Are there any specific
> > > > >> > instructions on how to patch the source, compile, and install?
> > > > >>
> > > > >> There actually should be a new version of Xen 4.4-rcX which
> > > > >> will have the fix. That might be easier for you?
> > > > >> >
> > > > >> > Regards
> > > > >> >
> > > > >> >
> > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > >> > konrad.wilk@oracle.com> wrote:
> > > > >> >
> > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser
> wrote:
> > > > >> > > > Hi all,
> > > > >> > > >
> > > > >> > > > I am attempting to do PCI passthrough of an Intel ET card
> > > > >> > > > (4x1G NIC) to an HVM guest.  I have been trying to resolve
> > > > >> > > > this on the xen-users list, but I was advised to post it
> > > > >> > > > here. (Initial Message -
> > > > >> > > >
> > > > >> > >
> > > > >>
> > >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > > > >> )
> > > > >> > > >
> > > > >> > > > The machine I am using as the host is a Dell PowerEdge
> > > > >> > > > server with a Xeon E3-1220 and 4 GB of RAM.
> > > > >> > > >
> > > > >> > > > The possible bug is the following:
> > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> 40030000
> > > > >> > > > ....
> > > > >> > > >
> > > > >> > > > I believe it may be similar to this thread
> > > > >> > > >
> > > > >> > >
> > > > >>
> > >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > Additional info that may be helpful is below.
> > > > >> > >
> > > > >> > > Did you try the patch?
> > > > >> > > >
> > > > >> > > > Please let me know if you need any additional information.
> > > > >> > > >
> > > > >> > > > Thanks in advance for any help provided!
> > > > >> > > > Regards
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > >> > > > ###########################################################
> > > > >> > > > # Configuration file for Xen HVM
> > > > >> > > >
> > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > >> > > > name="ubuntu-hvm-0"
> > > > >> > > > # HVM Build settings (+ hardware)
> > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > >> > > > builder='hvm'
> > > > >> > > > device_model='qemu-dm'
> > > > >> > > > memory=1024
> > > > >> > > > vcpus=2
> > > > >> > > >
> > > > >> > > > # Virtual Interface
> > > > >> > > > # Network bridge to USB NIC
> > > > >> > > > vif=['bridge=xenbr0']
> > > > >> > > >
> > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > >> > > > # PCI Permissive mode toggle
> > > > >> > > > #pci_permissive=1
> > > > >> > > >
> > > > >> > > > # All PCI Devices
> > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > > >> '05:00.1']
> > > > >> > > >
> > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > >> > > >
> > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > >> > > >
> > > > >> > > > # All ports on Intel 4x1G NIC
> > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > >> > > >
> > > > >> > > > # Broadcom 2x1G NIC
> > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > >> > > >
> > > > >> > > > # HVM Disks
> > > > >> > > > # Hard disk only
> > > > >> > > > # Boot from HDD first ('c')
> > > > >> > > > boot="c"
> > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > >> > > >
> > > > >> > > > # Hard disk with ISO
> > > > >> > > > # Boot from ISO first ('d')
> > > > >> > > > #boot="d"
> > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > >> > > >
> > > > >> > > > # ACPI Enable
> > > > >> > > > acpi=1
> > > > >> > > > # HVM Event Modes
> > > > >> > > > on_poweroff='destroy'
> > > > >> > > > on_reboot='restart'
> > > > >> > > > on_crash='restart'
> > > > >> > > >
> > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > >> > > > sdl=0
> > > > >> > > > serial='pty'
> > > > >> > > >
> > > > >> > > > # VNC Configuration
> > > > >> > > > # Only reachable from localhost
> > > > >> > > > vnc=1
> > > > >> > > > vnclisten="0.0.0.0"
> > > > >> > > > vncpasswd=""
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > Copied from the xen-users list
> > > > >> > > > ###########################################################
> > > > >> > > >
> > > > >> > > > It appears that it cannot obtain the RAM mapping for this
> > > > >> > > > PCI device.
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > I rebooted the host and assigned the PCI devices to
> > > > >> > > > pciback.  The output looks like:
> > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > >> > > > Calling function pciback_dev for:
> > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > >> > > >
> > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > >> > > >
> > > > >> > > > Listing PCI Devices Available to Xen
> > > > >> > > > 0000:03:00.0
> > > > >> > > > 0000:03:00.1
> > > > >> > > > 0000:04:00.0
> > > > >> > > > 0000:04:00.1
> > > > >> > > > 0000:05:00.0
> > > > >> > > > 0000:05:00.1
> > > > >> > > >
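The dev_mgmt.sh steps above boil down to three sysfs writes per function: unbind from the native driver, tell pciback about the slot, then bind. A minimal per-device sketch (the BDF is an example; run as root on the actual host, and note the sysfs driver directory is `pciback` even though the module is `xen-pciback`):

```shell
# Hand one PCI function to xen-pciback via sysfs (example BDF).
BDF="0000:03:00.0"
DEV="/sys/bus/pci/devices/$BDF"
PCIBACK="/sys/bus/pci/drivers/pciback"
if [ -e "$DEV" ] && [ -d "$PCIBACK" ]; then
  echo "$BDF" > "$DEV/driver/unbind"   # detach igb/bnx2/etc.
  echo "$BDF" > "$PCIBACK/new_slot"    # register the slot with pciback
  echo "$BDF" > "$PCIBACK/bind"        # attach pciback to the function
  result="bound"
else
  result="skipped"                     # device or pciback not present
fi
echo "$BDF: $result"
```

Afterwards, `xl pci-assignable-list` should show the function, as in the listing above.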
> > > > >> > > > ###########################################################
> > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > >> > > > WARNING: ignoring device_model directive.
> > > > >> > > > WARNING: Use "device_model_override" instead if you really
> want
> > > a
> > > > >> > > > non-default device_model
> > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > 0x210c360:
> > > > >> create:
> > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > >> > > > libxl: debug:
> libxl_device.c:257:libxl__device_disk_set_backend:
> > > > >> Disk
> > > > >> > > > vdev=hda spec.backend=unknown
> > > > >> > > > libxl: debug:
> libxl_device.c:296:libxl__device_disk_set_backend:
> > > > >> Disk
> > > > >> > > > vdev=hda, using backend phy
> > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> running
> > > > >> > > bootloader
> > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run:
> not
> > > a PV
> > > > >> > > > domain, skipping bootloader
> > > > >> > > > libxl: debug:
> libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x210c728: deregister unregistered
> > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate:
> New
> > > best
> > > > >> NUMA
> > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> nr_vcpus=3,
> > > > >> > > > free_memkb=2980
> > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> placement
> > > > >> > > candidate
> > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> memsz=0xa69a4
> > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > >> 0x7f022c81682d
> > > > >> > > > libxl: debug:
> libxl_device.c:257:libxl__device_disk_set_backend:
> > > > >> Disk
> > > > >> > > > vdev=hda spec.backend=phy
> > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > watch
> > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > token=3/0:
> > > > >> > > > register slotnum=3
> > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > 0x210c360:
> > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > w=0x2112f48
> > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> event
> > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> backend
> > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > > waiting
> > > > >> > > state 1
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > w=0x2112f48
> > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> event
> > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> backend
> > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > >> > > > libxl: debug:
> libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > token=3/0:
> > > > >> > > > deregister slotnum=3
> > > > >> > > > libxl: debug:
> libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x2112f48: deregister unregistered
> > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> hotplug
> > > > >> script:
> > > > >> > > > /etc/xen/scripts/block add
> > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> Spawning
> > > > >> > > device-model
> > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> > > > /usr/bin/qemu-system-i386
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -xen-domid
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -chardev
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> > > >
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> > > > chardev=libxl-cmd,mode=control
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > ubuntu-hvm-0
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > 0.0.0.0:0
> > > > >> ,to=99
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -global
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> isa-fdc.driveA=
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -serial
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> cirrus
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -global
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> vga.vram_size_mb=8
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> order=c
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > 2,maxcpus=2
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -device
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -netdev
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -drive
> > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > >> > > >
> > > > >> > >
> > > > >>
> > >
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > watch
> > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > token=3/1:
> > > > >> > > register
> > > > >> > > > slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > w=0x210c960
> > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > w=0x210c960
> > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > >> > > > libxl: debug:
> libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > token=3/1:
> > > > >> > > > deregister slotnum=3
> > > > >> > > > libxl: debug:
> libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x210c960: deregister unregistered
> > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> connected
> > > to
> > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type: qmp
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > command: '{
> > > > >> > > >     "execute": "qmp_capabilities",
> > > > >> > > >     "id": 1
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type:
> > > > >> return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > command: '{
> > > > >> > > >     "execute": "query-chardev",
> > > > >> > > >     "id": 2
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type:
> > > > >> return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > command: '{
> > > > >> > > >     "execute": "change",
> > > > >> > > >     "id": 3,
> > > > >> > > >     "arguments": {
> > > > >> > > >         "device": "vnc",
> > > > >> > > >         "target": "password",
> > > > >> > > >         "arg": ""
> > > > >> > > >     }
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type:
> > > > >> return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > command: '{
> > > > >> > > >     "execute": "query-vnc",
> > > > >> > > >     "id": 4
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type:
> > > > >> return
> > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > watch
> > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > token=3/2:
> > > > >> > > register
> > > > >> > > > slotnum=3
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > w=0x210e8a8
> > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> backend
> > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> > > waiting
> > > > >> state
> > > > >> > > 1
> > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > w=0x210e8a8
> > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> backend
> > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > >> > > > libxl: debug:
> libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > token=3/2:
> > > > >> > > > deregister slotnum=3
> > > > >> > > > libxl: debug:
> libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > watch
> > > > >> > > > w=0x210e8a8: deregister unregistered
> > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> hotplug
> > > > >> script:
> > > > >> > > > /etc/xen/scripts/vif-bridge online
> > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> hotplug
> > > > >> script:
> > > > >> > > > /etc/xen/scripts/vif-bridge add
> > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> connected
> > > to
> > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type: qmp
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > command: '{
> > > > >> > > >     "execute": "qmp_capabilities",
> > > > >> > > >     "id": 1
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> type:
> > > > >> return
> > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > command: '{
> > > > >> > > >     "execute": "device_add",
> > > > >> > > >     "id": 2,
> > > > >> > > >     "arguments": {
> > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > >> > > >     }
> > > > >> > > > }
> > > > >> > > > '
> > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > > > >> Connection
> > > > >> > > reset
> > > > >> > > > by peer
> > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> Connection
> > > > >> error:
> > > > >> > > > Connection refused
> > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> Connection
> > > > >> error:
> > > > >> > > > Connection refused
> > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> Connection
> > > > >> error:
> > > > >> > > > Connection refused
> > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > Creating pci
> > > > >> > > backend
> > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report:
> ao
> > > > >> 0x210c360:
> > > > >> > > > progress report: ignored
> > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > 0x210c360:
> > > > >> > > > complete, rc=0
> > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > 0x210c360:
> > > > >> > > destroy
> > > > >> > > > Daemon running with PID 3214
> > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > >> releases:793
> > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > > >> allocations:4
> > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> toobig:4
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> 40030000
> > > > >> > > > CPU #0:
> > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> HLT=1
> > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > >> > > > GDT=     00000000 0000ffff
> > > > >> > > > IDT=     00000000 0000ffff
> > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > >> > > > EFER=0000000000000000
> > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > >> > > > XMM00=00000000000000000000000000000000
> > > > >> > > > XMM01=00000000000000000000000000000000
> > > > >> > > > XMM02=00000000000000000000000000000000
> > > > >> > > > XMM03=00000000000000000000000000000000
> > > > >> > > > XMM04=00000000000000000000000000000000
> > > > >> > > > XMM05=00000000000000000000000000000000
> > > > >> > > > XMM06=00000000000000000000000000000000
> > > > >> > > > XMM07=00000000000000000000000000000000
> > > > >> > > > CPU #1:
> > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> HLT=1
> > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > >> > > > GDT=     00000000 0000ffff
> > > > >> > > > IDT=     00000000 0000ffff
> > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > >> > > > EFER=0000000000000000
> > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > >> > > > XMM00=00000000000000000000000000000000
> > > > >> > > > XMM01=00000000000000000000000000000000
> > > > >> > > > XMM02=00000000000000000000000000000000
> > > > >> > > > XMM03=00000000000000000000000000000000
> > > > >> > > > XMM04=00000000000000000000000000000000
> > > > >> > > > XMM05=00000000000000000000000000000000
> > > > >> > > > XMM06=00000000000000000000000000000000
> > > > >> > > > XMM07=00000000000000000000000000000000
> > > > >> > > >
> > > > >> > > > ###########################################################
> > > > >> > > > /etc/default/grub
> > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > >> > > > GRUB_TIMEOUT=10
> > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > >> > > > # biosdevname=0
> > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
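
[Editorial note: `dom0_mem=1024M` above sets only dom0's starting allocation. Xen's boot option also accepts an explicit ceiling, and pinning the initial and maximum values together is a common way to keep dom0 from ballooning on small hosts. A hedged variant of the same line, using the `max:` suffix from Xen's `dom0_mem` command-line syntax; whether it helps here is untested:]

```
GRUB_CMDLINE_XEN="dom0_mem=1024M,max:1024M dom0_max_vcpus=1"
```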
> > > > >> > >
> > > > >> > > > _______________________________________________
> > > > >> > > > Xen-devel mailing list
> > > > >> > > > Xen-devel@lists.xen.org
> > > > >> > > > http://lists.xen.org/xen-devel

I did not use the patch. I was assuming it was already patched, given the previous email. Is the patch for the qemu source or the xen source?

On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > Ok. I ran the initscripts and now xl works.
> >
> > However, I still see the same behavior as before:
> >
>
> Did you use the patch that was mentioned in the URL?
>
> > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > root@fiat:~# xl list
> > Name                                        ID   Mem VCPUs      State   Time(s)
> > Domain-0                                     0  1024     1     r-----      15.2
> > ubuntu-hvm-0                                 1  1025     1     ------       0.0
> >
> > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > (XEN) Dom0 has maximum 1 VCPUs
> > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > (XEN) Scrubbing Free RAM: .............................done.
> > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > (XEN) Std. Loglevel: All
> > (XEN) Guest Loglevel: All
> > (XEN) Xen is relinquishing VGA console.
> > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > (XEN) Freed 260kB init memory.
> > (XEN) PCI add device 0000:00:00.0
> > (XEN) PCI add device 0000:00:01.0
> > (XEN) PCI add device 0000:00:1a.0
> > (XEN) PCI add device 0000:00:1c.0
> > (XEN) PCI add device 0000:00:1d.0
> > (XEN) PCI add device 0000:00:1e.0
> > (XEN) PCI add device 0000:00:1f.0
> > (XEN) PCI add device 0000:00:1f.2
> > (XEN) PCI add device 0000:00:1f.3
> > (XEN) PCI add device 0000:01:00.0
> > (XEN) PCI add device 0000:02:02.0
> > (XEN) PCI add device 0000:02:04.0
> > (XEN) PCI add device 0000:03:00.0
> > (XEN) PCI add device 0000:03:00.1
> > (XEN) PCI add device 0000:04:00.0
> > (XEN) PCI add device 0000:04:00.1
> > (XEN) PCI add device 0000:05:00.0
> > (XEN) PCI add device 0000:05:00.1
> > (XEN) PCI add device 0000:06:03.0
> > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > (d1) HVM Loader
> > (d1) Detected Xen v4.4-rc2
> > (d1) Xenbus rings @0xfeffc000, event channel 4
> > (d1) System requested SeaBIOS
> > (d1) CPU speed is 3093 MHz
> > (d1) Relocating guest memory for lowmem MMIO space disabled
> >
> >
> > Excerpt from /var/log/xen/*
> > qemu: hardware error: xen: failed to populate ram at 40050000
> >
> >
> > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > > > getting the error:
> > > >
> > > > root@fiat:~/git/xen# xl list
> > > > xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
> > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
> > > > cannot init xl context
> > > >
> > > > I've searched Google for this and an article appears, but it is not the same
> > > > (as far as I can tell). Running any xl command generates a similar error.
> > > >
> > > > What can I do to fix this?
> > >
> > >
> > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > but they are usually put in /etc/init.d/rc.d/xen*
> > >
> > >
> > > >
> > > > Regards
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > > >
> > > > > Much. Do I need to install from src, or is there a package I can install?
> > > > >
> > > > > Regards
> > > > >
> > > > >
> > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > >
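
[Editorial note: Konrad's initscripts advice above can be sketched concretely. A from-source Xen install leaves `xl` unable to open the privileged command interface until the toolstack daemons (xenstored and friends) are running. The paths below are the usual defaults and vary by distro; this is a hedged dry run that echoes each command instead of executing it:]

```shell
# Hedged sketch of the initscript step: start the Xen toolstack daemons
# after a from-source install. Dry run -- each command is printed, not run.
run() { echo "+ $*"; }   # change `echo "+ $*"` to `"$@"` to really execute

run /etc/init.d/xencommons start     # starts xenstored and xenconsoled
run /etc/init.d/xendomains start     # optional: start managed domains
run update-rc.d xencommons defaults  # Debian-style: enable at boot
```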
> > > > > > On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > > I did not. I do not have the toolchain installed. I may have time later
> > > > > > > today to try the patch. Are there any specific instructions on how to
> > > > > > > patch the src, compile, and install?
> > > > > >
> > > > > > There actually should be a new version of Xen 4.4-rcX which will have the
> > > > > > fix. That might be easier for you?
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > >
> > > > > > > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > > > > > Hi all,
> > > > > > > > >
> > > > > > > > > I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC)
> > > > > > > > > to an HVM. I have been attempting to resolve this issue on the xen-users
> > > > > > > > > list, but it was advised to post this issue to this list. (Initial Message -
> > > > > > > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > > > > > > > >
> > > > > > > > > The machine I am using as host is a Dell PowerEdge server with a Xeon
> > > > > > > > > E31220 with 4GB of RAM.
> > > > > > > > >
> > > > > > > > > The possible bug is the following:
> > > > > > > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > > > > ....
> > > > > > > > >
> > > > > > > > > I believe it may be similar to this thread:
> > > > > > > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > Additional info that may be helpful is below.
> > > > > > > >
> > > > > > > > Did you try the patch?
> > > > > > > > >
> > > > > > > > > Please let me know if you need any additional information.
> > > > > > > > >
> > > > > > > > > Thanks in advance for any help provided!
> > > > > > > > > Regards
> > > > > > > > >
> > > > > > > > > ###########################################################
> > > > > > > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > ###########################################################
> > > > > > > > > # Configuration file for Xen HVM
> > > > > > > > >
> > > > > > > > > # HVM Name (as appears in 'xl list')
> > > > > > > > > name="ubuntu-hvm-0"
> > > > > > > > > # HVM Build settings (+ hardware)
> > > > > > > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > > > > builder='hvm'
> > > > > > > > > device_model='qemu-dm'
> > > > > > > > > memory=1024
> > > > > > > > > vcpus=2
> > > > > > > > >
> > > > > > > > > # Virtual Interface
> > > > > > > > > # Network bridge to USB NIC
> > > > > > > > > vif=['bridge=xenbr0']
> > > > > > > > >
> > > > > > > > > ################### PCI PASSTHROUGH ###################
> > > > > > > > > # PCI Permissive mode toggle
> > > > > > > > > #pci_permissive=1
> > > > > > > > >
> > > > > > > > > # All PCI Devices
> > > > > > > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > > > > >
> > > > > > > > > # First two ports on Intel 4x1G NIC
> > > > > > > > > #pci=['03:00.0','03:00.1']
> > > > > > > > >
> > > > > > > > > # Last two ports on Intel 4x1G NIC
> > > > > > > > > #pci=['04:00.0', '04:00.1']
> > > > > > > > >
> > > > > > > > > # All ports on Intel 4x1G NIC
> > > > > > > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > > > >
> > > > > > > > > # Broadcom 2x1G NIC
> > > > > > > > > #pci=['05:00.0', '05:00.1']
> > > > > > > > > ################### PCI PASSTHROUGH ###################
> > > > > > > > >
> > > > > > > > > # HVM Disks
> > > > > > > > > # Hard disk only
> > > > > > > > > # Boot from HDD first ('c')
> > > > > > > > > boot="c"
> > > > > > > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > > > >
> > > > > > > > > # Hard disk with ISO
> > > > > > > > > # Boot from ISO first ('d')
> > > > > > > > > #boot="d"
> > > > > > > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > > > > #      'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > > > >
> > > > > > > > > # ACPI Enable
> > > > > > > > > acpi=1
> > > > > > > > > # HVM Event Modes
> > > > > > > > > on_poweroff='destroy'
> > > > > > > > > on_reboot='restart'
> > > > > > > > > on_crash='restart'
> > > > > > > > >
> > > > > > > > > # Serial Console Configuration (Xen Console)
> > > > > > > > > sdl=0
> > > > > > > > > serial='pty'
> > > > > > > > >
> > > > > > > > > # VNC Configuration
> > > > > > > > > # Only reachable from localhost
> > > > > > > > > vnc=1
> > > > > > > > > vnclisten="0.0.0.0"
> > > > > > > > > vncpasswd=""
> > > > > > > > >
> > > > > > > > > ###########################################################
> > > > > > > > > Copied for xen-users list
> > > > > > > > > ###########################################################
> > > > > > > > >
> > > > > > > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I rebooted the host and assigned the PCI devices to pciback. The output
> > > > > > > > > looks like:
> > > > > > > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > > > > Loading Kernel Module 'xen-pciback'
> > > > > > > > > Calling function pciback_dev for:
> > > > > > > > > PCI DEVICE 0000:03:00.0
> > > > > > > > > Unbinding 0000:03:00.0 from igb
> > > > > > > > > Binding 0000:03:00.0 to pciback
> > > > > > > > >
> > > > > > > > > PCI DEVICE 0000:03:00.1
> > > > > > > > > Unbinding 0000:03:00.1 from igb
> > > > > > > > > Binding 0000:03:00.1 to pciback
> > > > > > > > >
> > > > > > > > > PCI DEVICE 0000:04:00.0
> > > > > > > > > Unbinding 0000:04:00.0 from igb
> > > > > > > > > Binding 0000:04:00.0 to pciback
> > > > > > > > >
> > > > > > > > > PCI DEVICE 0000:04:00.1
> > > > > > > > > Unbinding 0000:04:00.1 from igb
> > > > > > > > > Binding 0000:04:00.1 to pciback
> > > > > > > > >
> > > > > > > > > PCI DEVICE 0000:05:00.0
> > > > > > > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > > > > Binding 0000:05:00.0 to pciback
> > > > > > > > >
> > > > > > > > > PCI DEVICE 0000:05:00.1
> > > > > > > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > > > > Binding 0000:05:00.1 to pciback
> > > > > > > > >
> > > > > > > > > Listing PCI Devices Available to Xen
> > > > > > > > > 0000:03:00.0
> > > > > > > > > 0000:03:00.1
> > > > > > > > > 0000:04:00.0
> > > > > > > > > 0000:04:00.1
> > > > > > > > > 0000:05:00.0
> > > > > > > > > 0000:05:00.1
> > > > > > > > >
> > > > > > > > > ###########################################################
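
[Editorial note: the dev_mgmt.sh output above shows each NIC function being unbound from its native driver (igb or bnx2) and bound to pciback. The script itself is not shown in the thread; the sketch below reconstructs the usual mechanism via the standard Linux sysfs driver bind/unbind interface, with DRY_RUN printing the sysfs writes instead of performing them (the real thing needs root and the xen-pciback module loaded, which registers as driver "pciback"):]

```shell
# Hedged reconstruction of a dev_mgmt.sh-style helper: unbind a PCI function
# from its current driver and hand it to xen-pciback through sysfs.
sysfs_write() {
    if [ -n "$DRY_RUN" ]; then
        echo "echo $1 > $2"      # dry run: show the write we would make
    else
        echo "$1" > "$2"         # real run: perform the sysfs write
    fi
}

bind_to_pciback() {
    bdf="$1"  # full BDF, e.g. 0000:03:00.0
    sysfs_write "$bdf" "/sys/bus/pci/devices/$bdf/driver/unbind"
    sysfs_write "$bdf" "/sys/bus/pci/drivers/pciback/new_slot"
    sysfs_write "$bdf" "/sys/bus/pci/drivers/pciback/bind"
}

DRY_RUN=1
for dev in 0000:03:00.0 0000:03:00.1 0000:04:00.0 0000:04:00.1; do
    bind_to_pciback "$dev"
done
```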
> > > > > > > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > WARNING: ignoring device_model directive.
> > > > > > > > > WARNING: Use "device_model_override" instead if you really want a
> > > > > > > > > non-default device_model
> > > > > > > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > > > > > > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > > > > > > vdev=hda spec.backend=unknown
> > > > > > > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > > > > > > > vdev=hda, using backend phy
> > > > > > > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> > > > > > > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > > > > > > > domain, skipping bootloader
> > > > > > > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > > > > > > w=0x210c728: deregister unregistered
> > > > > > > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > > > > > > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > > > > > > free_memkb=2980
> > > > > > > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
> > > > > > > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > > > > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > > > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > > > > >   Modules:       0000000000000000->0000000000000000
> > > > > > > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > > > > >   ENTRY ADDRESS: 0000000000100608
> > > > > > > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > > > >   4KB PAGES: 0x0000000000000200
> > > > > > > > >   2MB PAGES: 0x00000000000001fb
> > > > > > > > >   1GB PAGES: 0x0000000000000000
> > > > > > > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > > > > > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > > > > > > > vdev=hda spec.backend=phy
> > > > > > > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > > > > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > > > > > > register slotnum=3
> > > > > > > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > > > > > > > inprogress: poller=0x210c3c0, flags=i
> > > > > > > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > > > > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > > > > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > > > > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > > > > > > > > state 1
> > > > > > > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > > > > > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > > > > > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > > > > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > > > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > > > > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > > > > > > > deregister slotnum=3
> > > > > > > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > > > > > > w=0x2112f48: deregister unregistered
> > > > > > > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > > > > > > > /etc/xen/scripts/block add
> > > > > > > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
> > > > > > > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > > > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > > > > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register
> > > > > > > > > slotnum=3
> > > > > > > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > > > > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > > > > > epath=/local/domain/0/device-model/2/state
> > > > > > > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > > > > > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > > > > > epath=/local/domain/0/device-model/2/state
> > > > > > > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > > > > > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > > > > > > > deregister slotnum=3
> > > > > > > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > > > > > > > w=0x210c960: deregister unregistered
> > > > > > > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > > > > > > > /var/run/xen/qmp-libxl-2
> > > > > > > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > > > > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
&gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;q=
mp_capabilities&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_ha=
ndle_response: message type:<br>
&gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_se=
nd_prepare: next qmp<br>
&gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;q=
uery-chardev&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_ha=
ndle_response: message type:<br>
&gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_se=
nd_prepare: next qmp<br>
&gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;c=
hange&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 3,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;device&quot;: =
&quot;vnc&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;target&quot;: =
&quot;password&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;arg&quot;: &qu=
ot;&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_ha=
ndle_response: message type:<br>
&gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_se=
nd_prepare: next qmp<br>
&gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;q=
uery-vnc&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 4<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_ha=
ndle_response: message type:<br>
&gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libx=
l__ev_xswatch_register:<br>
&gt; &gt; watch<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/local/domain/=
0/backend/vif/2/0/state<br>
&gt; &gt; token=3D3/2:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watc=
hfd_callback: watch<br>
&gt; &gt; w=3D0x210e8a8<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/backend/vif/=
2/0/state token=3D3/2: event<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/backend/vif/=
2/0/state<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:647:devs=
tate_watch_callback: backend<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vif/2/0/stat=
e wanted state 2 still<br>
&gt; &gt; waiting<br>
&gt; &gt; &gt; &gt;&gt; state<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; 1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watc=
hfd_callback: watch<br>
&gt; &gt; w=3D0x210e8a8<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/backend/vif/=
2/0/state token=3D3/2: event<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/backend/vif/=
2/0/state<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:643:devs=
tate_watch_callback: backend<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vif/2/0/stat=
e wanted state 2 ok<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libx=
l__ev_xswatch_deregister:<br>
&gt; &gt; watch<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/local/domain/=
0/backend/vif/2/0/state<br>
&gt; &gt; token=3D3/2:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libx=
l__ev_xswatch_deregister:<br>
&gt; &gt; watch<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister unregister=
ed<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:dev=
ice_hotplug: calling hotplug<br>
&gt; &gt; &gt; &gt;&gt; script:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridge online<b=
r>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:dev=
ice_hotplug: calling hotplug<br>
&gt; &gt; &gt; &gt;&gt; script:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridge add<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:707:libxl_=
_qmp_initialize: connected<br>
&gt; &gt; to<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_ha=
ndle_response: message<br>
&gt; &gt; type: qmp<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_se=
nd_prepare: next qmp<br>
&gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;q=
mp_capabilities&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_ha=
ndle_response: message type:<br>
&gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_se=
nd_prepare: next qmp<br>
&gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot;: &quot;d=
evice_add&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&quot;: {<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driver&quot;: =
&quot;xen-pci-passthrough&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&quot;: &quo=
t;pci-pt-03_00.0&quot;,<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;hostaddr&quot;=
: &quot;0000:03:00.0&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:454:qmp_ne=
xt: Socket read error:<br>
&gt; &gt; &gt; &gt;&gt; Connection<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; reset<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; by peer<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl_=
_qmp_initialize: Connection<br>
&gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl_=
_qmp_initialize: Connection<br>
&gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:702:libxl_=
_qmp_initialize: Connection<br>
&gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:81:libxl__=
create_pci_backend:<br>
&gt; &gt; Creating pci<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; backend<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1737:lib=
xl__ao_progress_report: ao<br>
&gt; &gt; &gt; &gt;&gt; 0x210c360:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1569:lib=
xl__ao_complete: ao<br>
&gt; &gt; 0x210c360:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:1541:lib=
xl__ao__destroy: ao<br>
&gt; &gt; 0x210c360:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 3214<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: total a=
llocations:793 total<br>
&gt; &gt; &gt; &gt;&gt; releases:793<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: current=
 allocations:0 maximum<br>
&gt; &gt; &gt; &gt;&gt; allocations:4<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: cache c=
urrent size:4<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffer: cache h=
its:785 misses:4 toobig:4<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ####################################=
#######################<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-=
ubuntu-hvm-0.log<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5=
 (label serial0)<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to=
 populate ram at 40030000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D=
00000000 EDX=3D00000633<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D=
00000000 ESP=3D00000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-----=
--] CPL=3D0 II=3D0 A20=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b0=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 0000820=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b0=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D=
00000000 CR4=3D00000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D=
00000000 DR3=3D00000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=
=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000000 ECX=3D=
00000000 EDX=3D00000633<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000000 EBP=3D=
00000000 ESP=3D00000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000002 [-----=
--] CPL=3D0 II=3D0 A20=3D1 SMM=3D0 HLT=3D1<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ffff 00009b0=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ffff 0000930=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ffff 0000820=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ffff 00008b0=
0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 0000ffff<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000000 CR3=3D=
00000000 CR4=3D00000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000000 DR2=3D=
00000000 DR3=3D00000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000400<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=3D0] FTW=
=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 0000 FPR1=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 0000 FPR3=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 0000 FPR5=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 0000 FPR7=3D=
0000000000000000 0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D0000000000000000000000000000=
0000<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ####################################=
#######################<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4.3-amd64&q=
uot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_release -i -=
s 2&gt; /dev/null || echo Debian`<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=3D&quot;q=
uiet splash&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot;&quot;<br=
>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;dom0_mem=3D=
1024M dom0_max_vcpus=3D1&quot;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ____________________________________=
___________<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-devel@lists.xe=
n.org">Xen-devel@lists.xen.org</a><br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xen.org/xen-=
devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>

--047d7b86e2d622b48204f1d7b141--


--===============6579243974944900194==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6579243974944900194==--


From xen-devel-bounces@lists.xen.org Fri Feb 07 22:23:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtp0-0006aa-8j; Fri, 07 Feb 2014 22:22:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WBtoy-0006aV-Mw
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 22:22:53 +0000
Received: from [85.158.139.211:24722] by server-5.bemta-5.messagelabs.com id
	04/B4-32749-BBC55F25; Fri, 07 Feb 2014 22:22:51 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391811771!2439386!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6588 invoked from network); 7 Feb 2014 22:22:51 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 22:22:51 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1391811771; l=760;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=yt+T6rk/V2RQQkES2cl45XqLVX4=;
	b=OS1TFZgq5Wu6tELoMKkOqkjEd3XVi2x+g8IqPQBlZR62gaQy9JwXPVS8Y4DJDDy/HVs
	9LoLd/5knqVuEvIeN8WRkTf283R9okJcYWHiT20Xt5ge1X2pfI1j1/83Pj7PisO9kJ+yn
	+VApqRnRw6MYkZQVzKrsGXYF1QqWn4RH37Y=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.24 AUTH) with ESMTPSA id D04b9fq17MMmOKc
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Fri, 7 Feb 2014 23:22:48 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 5982050269; Fri,  7 Feb 2014 23:22:47 +0100 (CET)
Date: Fri, 7 Feb 2014 23:22:47 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140207222247.GA23234@aepfle.de>
References: <1391331061.24599.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please keep xen-devel@lists.xen.org in the CC list.

On Fri, Feb 07, Adel Amani wrote:

> yes, for print data, function print_stats() in xc_domain_save.c should run and
> work. I read in function and check.... But i don't know really why this don't
> work!!! :-|... I test 'fprintf(stderr,"STDERR\n"); fprintf(stdout,"STDOUT\n");'
> But again not answer :'(.....

Please make sure the self-compiled binary is actually used. Try this to
verify: grep STDERR /usr/lib/xen/bin/xc_save (assuming the fprintf above
is actually in the compiled code.)
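The check described above can be sketched as a small script. The path /usr/lib/xen/bin/xc_save comes from the thread; the stand-in file below is only so the sketch runs anywhere, and in practice you would point grep at the installed binary directly:

```shell
# Verify that a freshly added fprintf() string is present in the binary
# actually being run. Demonstrated on a stand-in file; substitute
# /usr/lib/xen/bin/xc_save on a real system.
standin=$(mktemp)
printf 'binary bytes ... STDERR ... more bytes' > "$standin"

if grep -q STDERR "$standin"; then
    echo "marker found: this is the freshly built binary"
else
    echo "marker missing: an older binary is still installed"
fi
rm -f "$standin"
```

If the marker is missing, the toolstack is launching a stale copy and no amount of added fprintf() calls will show output.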

> how boot the domU with 'initcall_debug'?! Are affect on total time?!

This is a kernel cmdline option. Please check the documentation about
how to pass additional kernel parameters to a domU.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 22:31:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBtwx-0006oY-7n; Fri, 07 Feb 2014 22:31:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBtwv-0006oT-0U
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 22:31:05 +0000
Received: from [193.109.254.147:52832] by server-6.bemta-14.messagelabs.com id
	F2/65-03396-8AE55F25; Fri, 07 Feb 2014 22:31:04 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1391812262!2869965!1
X-Originating-IP: [74.125.82.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24900 invoked from network); 7 Feb 2014 22:31:02 -0000
Received: from mail-we0-f175.google.com (HELO mail-we0-f175.google.com)
	(74.125.82.175)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 22:31:02 -0000
Received: by mail-we0-f175.google.com with SMTP id q59so2765060wes.34
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 14:31:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=bNQUmHYQB8f/fD0yzvwmQ24PTzu4oTJJENLPD9xXaYg=;
	b=jyFs34ojZvLVmSLwutTvpITTXfdqOAr/4NM8LpNC3+EleLpJyjhPw4vv1FJNLIgxan
	zyqVotbCUZtEEYVd1j9glDRND5n4JdTTtuP70qkn9tnidZBTMqBh1J4fGw8KVaXHc15T
	YfpWMSTuk8s3dXi4EjemUSUo8pe53uiOQcAV+TmR9+gumVki0FEjGouheXqVw5lmHemN
	W9n+tz+wEMixd0NtEgzMEw/sMMoEcObBwa4IOKahDbcLWQCaMh6jMV13T0O2vTCKlgAk
	c6BcAqs9TbTXxG7/oi2tGRodYsSoyKCp79evCHurlgWl7kJ7H1wvcjjH8Vv6lkY2dl5s
	BGtA==
X-Gm-Message-State: ALoCoQmlfBbe8MHPeNd9As+uS5enPeKw/l+INH0cYolSxqXGZMylrEjzdSusXFajJBYVRmMTFQj2
X-Received: by 10.194.62.111 with SMTP id x15mr3858750wjr.55.1391812262305;
	Fri, 07 Feb 2014 14:31:02 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	ci4sm14005295wjc.21.2014.02.07.14.31.01 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 14:31:01 -0800 (PST)
Message-ID: <52F55EA4.2060703@linaro.org>
Date: Fri, 07 Feb 2014 22:31:00 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-2-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1391799378-31664-2-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 2/4] xen/arm: support HW interrupts in
 gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 07/02/14 18:56, Stefano Stabellini wrote:
> If the irq to be injected is a hardware irq (p->desc != NULL), set
> GICH_LR_HW.

If you set GICH_LR_HW, I think you should also remove the EOI of the 
physical interrupt in the maintenance IRQ handler in this patch. 
Otherwise we will EOI twice and, per the documentation, the behavior is 
unpredictable.

> Also add a struct vcpu* parameter to gic_set_lr.
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>   xen/arch/arm/gic.c |   28 ++++++++++++++++------------
>   1 file changed, 16 insertions(+), 12 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index acf7195..215b679 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -618,20 +618,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>       return rc;
>   }
>
> -static inline void gic_set_lr(int lr, unsigned int virtual_irq,
> +static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
>           unsigned int state, unsigned int priority)
>   {
> -    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
> -    struct pending_irq *p = irq_to_pending(current, virtual_irq);
> +    struct pending_irq *p = irq_to_pending(v, irq);
>
>       BUG_ON(lr >= nr_lrs);
>       BUG_ON(lr < 0);
>       BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
>
> -    GICH[GICH_LR + lr] = state |
> -        maintenance_int |
> -        ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> -        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> +    if ( p->desc != NULL )
> +        GICH[GICH_LR + lr] = GICH_LR_HW | state | GICH_LR_MAINTENANCE_IRQ |
> +            ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> +            ((irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT) |

We should not assume that the physical IRQ is equal to the virtual IRQ. 
You should use p->desc->irq instead.

> +            ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> +    else
> +        GICH[GICH_LR + lr] = state | GICH_LR_MAINTENANCE_IRQ |
> +            ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> +            ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);

The value written for a purely virtual IRQ is a subset of the one 
written for a hardware IRQ. Can you factor the code?

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 22:35:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBu0o-0007FV-TR; Fri, 07 Feb 2014 22:35:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WBu0n-0007FQ-Gv
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 22:35:06 +0000
Received: from [193.109.254.147:35625] by server-12.bemta-14.messagelabs.com
	id 25/35-17220-89F55F25; Fri, 07 Feb 2014 22:35:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1391812502!2872481!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14235 invoked from network); 7 Feb 2014 22:35:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 22:35:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,803,1384300800"; d="scan'208";a="99122449"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 07 Feb 2014 22:35:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 7 Feb 2014 17:35:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WBu0j-0005ia-1i;
	Fri, 07 Feb 2014 22:35:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WBu0i-0007hG-Pc;
	Fri, 07 Feb 2014 22:35:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24783-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 7 Feb 2014 22:35:00 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24783: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24783 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24783/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu  5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-xl            5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-xl-credit2    5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot            fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  5 xen-boot                     fail  like 12557
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  5 xen-boot     fail blocked in 12557
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot           fail blocked in 12557
 test-amd64-i386-pv            5 xen-boot                     fail   like 12557
 test-amd64-i386-freebsd10-amd64  5 xen-boot              fail blocked in 12557
 test-amd64-i386-freebsd10-i386  5 xen-boot               fail blocked in 12557
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  5 xen-boot                fail never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                9343224bfd4be6a02e6ae0c0d66426c955c7d76e
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7009 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2368689 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 22:45:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBuAw-0007nM-4h; Fri, 07 Feb 2014 22:45:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBuAu-0007nH-LT
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 22:45:32 +0000
Received: from [85.158.137.68:61215] by server-17.bemta-3.messagelabs.com id
	28/5C-22569-B0265F25; Fri, 07 Feb 2014 22:45:31 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391813130!424120!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7693 invoked from network); 7 Feb 2014 22:45:31 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 22:45:31 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so1301463wib.6
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 14:45:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=HnHmq3lnPg6tsnoyGevPijpDz6paeBano5mcEG8mn40=;
	b=J2wD5tNnBop9ioutk1QqMfuKBV4GrU3/WnibaawblHTwIlIjW5H6VCiVAqsHXzW9s+
	/vzQtUvMOZRqr4xRxCSDbFZgY9eIFWGbHGoQwDNqKGSEwSZMqF+HQMIVvr+fP+UFq61J
	xkOhHn7Pzwi517nHBWekk+BmHaGsbMB1fOfk8NlP9tf1EvMpf+Ibq33aeieoAWdUFh1r
	owlwdRihXOm4+75f9bVJXaYq/t/RofCnTwIwH6jfHYhhP2brBEocZS/j9/GqZM4wm9Bt
From xen-devel-bounces@lists.xen.org Fri Feb 07 22:45:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 22:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBuAw-0007nM-4h; Fri, 07 Feb 2014 22:45:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBuAu-0007nH-LT
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 22:45:32 +0000
Received: from [85.158.137.68:61215] by server-17.bemta-3.messagelabs.com id
	28/5C-22569-B0265F25; Fri, 07 Feb 2014 22:45:31 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391813130!424120!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7693 invoked from network); 7 Feb 2014 22:45:31 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 22:45:31 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so1301463wib.6
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 14:45:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=HnHmq3lnPg6tsnoyGevPijpDz6paeBano5mcEG8mn40=;
	b=J2wD5tNnBop9ioutk1QqMfuKBV4GrU3/WnibaawblHTwIlIjW5H6VCiVAqsHXzW9s+
	/vzQtUvMOZRqr4xRxCSDbFZgY9eIFWGbHGoQwDNqKGSEwSZMqF+HQMIVvr+fP+UFq61J
	xkOhHn7Pzwi517nHBWekk+BmHaGsbMB1fOfk8NlP9tf1EvMpf+Ibq33aeieoAWdUFh1r
	owlwdRihXOm4+75f9bVJXaYq/t/RofCnTwIwH6jfHYhhP2brBEocZS/j9/GqZM4wm9Bt
	sZdRyRouIFMoTvvUXBaiOOVbjMAaQUXFXp9jPhiRNWDvZlQTDu/58chUPT0GQbxd3XOy
	w2Yw==
X-Gm-Message-State: ALoCoQnLa1OwG19O+heJ/uivNOBSMqamwt5RlMOT/QMt3FmiOVCUI/6PhvrEFZKoKzdUZMf6ZBQV
X-Received: by 10.180.187.237 with SMTP id fv13mr1699397wic.26.1391813130579; 
	Fri, 07 Feb 2014 14:45:30 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id j9sm14060462wjz.13.2014.02.07.14.45.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 14:45:29 -0800 (PST)
Message-ID: <52F56208.3090504@linaro.org>
Date: Fri, 07 Feb 2014 22:45:28 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 07/02/14 18:56, Stefano Stabellini wrote:
> Do not set GICH_LR_MAINTENANCE_IRQ for every interrupt set in the
> GICH_LR registers.
>
> Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
> registers, clears the invalid ones and frees the corresponding interrupts
> from the inflight queue if appropriate. Add the interrupt to lr_pending
> if the GIC_IRQ_GUEST_PENDING bit is still set.
>
> Call gic_clear_lrs from gic_restore_state and on return to guest
> (gic_inject).
>
> Remove the now unused code in maintenance_interrupts and gic_irq_eoi.
>
> In vgic_vcpu_inject_irq, if the target is a vcpu running on another cpu,
> send an SGI to it to interrupt it and force it to clear the old LRs.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>   xen/arch/arm/gic.c  |  126 ++++++++++++++++++++++-----------------------------
>   xen/arch/arm/vgic.c |    3 +-
>   2 files changed, 56 insertions(+), 73 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 215b679..87bd5d3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> +static void gic_clear_lrs(struct vcpu *v)
> +{
> +    struct pending_irq *p;
> +    int i = 0, irq;
> +    uint32_t lr;
> +    bool_t inflight;
> +
> +    ASSERT(!local_irq_is_enabled());
> +
> +    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> +                              nr_lrs, i)) < nr_lrs) {
> +        lr = GICH[GICH_LR + i];
> +        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> +        {
> +            if ( lr & GICH_LR_HW )
> +                irq = (lr >> GICH_LR_PHYSICAL_SHIFT) & GICH_LR_PHYSICAL_MASK;
> +            else
> +                irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> +

The if statement can be simplified to:

irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;


> +            inflight = 0;
> +            GICH[GICH_LR + i] = 0;
> +            clear_bit(i, &this_cpu(lr_mask));
> +
> +            spin_lock(&gic.lock);
> +            p = irq_to_pending(v, irq);
> +            if ( p->desc != NULL )
> +                p->desc->status &= ~IRQ_INPROGRESS;
> +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> +            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> +                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> +            {

I would add a WARN_ON(p->desc != NULL) here. AFAIK, this code path 
shouldn't be taken for a physical IRQ.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 23:11:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 23:11:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBuYj-0000iK-GI; Fri, 07 Feb 2014 23:10:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBuYh-0000iC-Aa
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 23:10:07 +0000
Received: from [85.158.143.35:24732] by server-1.bemta-4.messagelabs.com id
	D2/94-31661-EC765F25; Fri, 07 Feb 2014 23:10:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391814605!4049843!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7015 invoked from network); 7 Feb 2014 23:10:06 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 23:10:06 -0000
Received: by mail-wi0-f178.google.com with SMTP id cc10so1308828wib.5
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 15:10:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=foINkfo7EP1j9vFbCPxaSE6m86fw9ED1aNBcpaA7Gnk=;
	b=gzLqhsAjjGwOmd5+I8bx89FKDVeXrOt6u4hkhynA/GLNlmKXHzf2XesYWCU7qaoMFp
	RUFUsitX+sXE7fm6rFhahT/Ze6F14kjw0py31jaV8ntr/7H0syCHOK6MpjjixfuEnwp/
	nvezWgrdXI2hGZa2aNQiHqJ5hCHwhr2d5gcoQZd9mmIMOg2Siyuud/CzDY7RqeJuvKkl
	IlercCaW144HUlm94oiQAQouFW8S9wBMppIufZsSi+mWaB/Xh1Oqn+1JOCoIpM8V+wQ/
	Qns3AVJVuDqU6Bfs+2Ya+Q23Vpjotfqn61NSc0F0cfYsQjvb1IMkllcfjU1m6aSW8yK4
	hDpQ==
X-Gm-Message-State: ALoCoQm7yq0D29S7lsJCgyIUl6nR4mHk1B1WADNjdEvJT+6X67yUOobRr41m7y/7IwmHSEN9tLq7
X-Received: by 10.194.60.103 with SMTP id g7mr12530674wjr.37.1391814605753;
	Fri, 07 Feb 2014 15:10:05 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id u6sm12212989wif.6.2014.02.07.15.10.04
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 15:10:05 -0800 (PST)
Message-ID: <52F567CB.7080302@linaro.org>
Date: Fri, 07 Feb 2014 23:10:03 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 18:56, Stefano Stabellini wrote:
> +static void gic_clear_lrs(struct vcpu *v)
> +{
> +    struct pending_irq *p;
> +    int i = 0, irq;
> +    uint32_t lr;
> +    bool_t inflight;
> +
> +    ASSERT(!local_irq_is_enabled());
> +
> +    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> +                              nr_lrs, i)) < nr_lrs) {

Did you look at the ELRSR{0,1} registers, which list the usable LRs? I 
think you can use them together with this_cpu(lr_mask) to avoid iterating 
over every LR.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 23:22:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 23:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBukZ-0000vo-Qp; Fri, 07 Feb 2014 23:22:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBukY-0000vj-RM
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 23:22:23 +0000
Received: from [85.158.143.35:64783] by server-3.bemta-4.messagelabs.com id
	90/0A-11539-EAA65F25; Fri, 07 Feb 2014 23:22:22 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391815341!4044209!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11407 invoked from network); 7 Feb 2014 23:22:21 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 23:22:21 -0000
Received: by mail-we0-f170.google.com with SMTP id w62so2790043wes.15
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 15:22:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=XYkn7dSzweB+rpaonTPVF5/rIUwlE/7qci08Zrl8RfY=;
	b=QQQqZwSnEcbCe2pEXp9mCF7GVLjPjPvS9AMmdhUsH+goCi3f+woVbmhhEZxNm0jAgc
	WEgMZUlIxjpAavq8tyLeLB6tg/GICbtVkwnfhSgzvKEN1Udx1Hx/f7lPspk9p1V+qdBE
	68Ba03yOkUf2dsNsqB8b2wPPfNzpHclZ4lXDHe0RGDojS9z4UPOyn3KVgEP1FKBKP/ky
	dh6oUEAULJzKDe8UsjH1mtJjFMYzVo2l4Tf8PhM42xjxpTnKr9nbdtQoECzsG13wjHe6
	QEgP/5ilglcdHOjxlfuteo+xCTE7urXvvIuMVcXcpmArHrjoxXufymn39WcuVYEvl3Z8
	RRMg==
X-Gm-Message-State: ALoCoQkrSk6m2J3djug5n1GJufvtdh8PU0Dnj5Zdki5AAAe6gKS8eTOqPO67xOSF4mb7b4i5wKCP
X-Received: by 10.180.91.17 with SMTP id ca17mr1660719wib.41.1391815341097;
	Fri, 07 Feb 2014 15:22:21 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ju6sm14288833wjc.1.2014.02.07.15.22.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 15:22:20 -0800 (PST)
Message-ID: <52F56AAB.7080300@linaro.org>
Date: Fri, 07 Feb 2014 23:22:19 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 0/4] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 18:56, Stefano Stabellini wrote:
> Hi all,

Hi Stefano,

> this patch series removes any need for maintenance interrupts for both
> hardware and software interrupts in Xen.
> It achieves the goal by using the GICH_LR_HW bit for hardware interrupts
> and by checking the status of the GICH_LR registers on return to guest,
> clearing the registers that are invalid and handling the lifecycle of
> the corresponding interrupts in Xen data structures.

After reading your patch series I see a possible race condition with the 
timer interrupt.

As you know, Xen can re-inject the timer interrupt before the previous 
one has been EOIed. As it's the timer, the IRQ is injected on the 
currently running VCPU.

vgic_vcpu_inject_irq(timer)
   -> IRQ already visible to the guest -> set PENDING
return to guest context
<--------------------- Guest EOI the IRQ
.... a few milliseconds
going to hyp mode
   -> doing stuff
   -> reinject the timer IRQ

If I'm not mistaken, with your solution, the next IRQ can be delayed by a 
few milliseconds. That could be fixed by updating the LRs.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 23:38:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 23:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBuzd-0001p5-JK; Fri, 07 Feb 2014 23:37:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ehouby@yahoo.com>) id 1WBuzb-0001p0-OB
	for xen-devel@lists.xen.org; Fri, 07 Feb 2014 23:37:56 +0000
Received: from [85.158.137.68:48310] by server-10.bemta-3.messagelabs.com id
	2E/9F-07302-25E65F25; Fri, 07 Feb 2014 23:37:54 +0000
X-Env-Sender: ehouby@yahoo.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391816272!430637!1
X-Originating-IP: [72.30.239.25]
X-SpamReason: No, hits=0.6 required=7.0 tests=FORGED_YAHOO_RCVD,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25107 invoked from network); 7 Feb 2014 23:37:53 -0000
Received: from nm38-vm9.bullet.mail.bf1.yahoo.com (HELO
	nm38-vm9.bullet.mail.bf1.yahoo.com) (72.30.239.25)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 7 Feb 2014 23:37:53 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=gcom1024;
	t=1391816272; bh=H+y/6vrJNoMatS5TiGn/npfJVPNAGnACO9DYCt+zj3A=;
	h=Received:Received:Received:DKIM-Signature:X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=GPraW6DM8KPZpi5njtXxw3FROW6RSPGhEdplr6ykEgYkNdLWPMnxOsbZDTTArUAzpkR3uFfkm+JCDU4p9Xjq4IUwC7sV++PM/w2daXi/i7FWX7iHyreIbKjV8tb4Sik+tv8qDg6GU+lnTO7M7D+xnF6hLY2NVHN1JQNgqqjrX1w=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=gcom1024; d=yahoo.com;
	b=Cb0Y6p5UgOR+aE9AiuwvHlqgCghfCNhV+YEvvv1/P57jmC9+JW59uCVBO0XYbbowx3Lcp7T9eEGGyQJQ+edWbAjN7sFljMVyVS7Np9xFAb9DMp0rjPsFg9qUxd/14VoT258M6XlR752E5dUEDSq4KIByAfLlK3iKJyN+ZxQi6xc=;
Received: from [98.139.215.140] by nm38.bullet.mail.bf1.yahoo.com with NNFMP;
	07 Feb 2014 23:37:52 -0000
Received: from [68.142.230.65] by tm11.bullet.mail.bf1.yahoo.com with NNFMP;
	07 Feb 2014 23:37:52 -0000
Received: from [127.0.0.1] by smtp222.mail.bf1.yahoo.com with NNFMP;
	07 Feb 2014 23:37:52 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1391816272; bh=H+y/6vrJNoMatS5TiGn/npfJVPNAGnACO9DYCt+zj3A=;
	h=X-Yahoo-Newman-Id:X-Yahoo-Newman-Property:X-YMail-OSG:X-Yahoo-SMTP:X-Rocket-Received:Message-ID:Subject:From:Reply-To:To:Cc:Date:In-Reply-To:References:Content-Type:X-Mailer:Mime-Version:Content-Transfer-Encoding;
	b=mjkh0DSz2A+SMwoh313DHcUJbYUDEEcqTquJMoRyvQXIhNmbW5hdEPGAlYAIq8nb5RPndV7DpKmVEXd/K9JXCb1W5iHCidv0m2gLF3wlVpEF9ipWnIzCmecL84q3v9HmXBMlPDjGig1qi2Hr02XVR/UK6XIS/PdRkYCl+vUQkno=
X-Yahoo-Newman-Id: 199927.42117.bm@smtp222.mail.bf1.yahoo.com
X-Yahoo-Newman-Property: ymail-3
X-YMail-OSG: nyCmCSoVM1mwMIqVuuwJFJI469RcfXePAMcmfP5jDuFORmG
	csczXomCQc5IWq6EC.MAFLEOXLApT0_MBOe.sq69Vsf3Voy0BDwMvAN5JCAE
	pHyPD_fEyf7w_XvooN92i2ksyqE3jzw9qymfUrW7JMdbgbVJGy2GBHbnM1jM
	0GUHKCwAYXEHGkLrs5.dgmLYGZmWP9pyUJafr6SgvVdSf9thVGoc7Jv675FP
	Fb3EbAExDodi2WS86bEejyw31zBg9HUmdqzME1t6qKy9yEcFwJ3XIjSUFIVd
	48v8BSabucPfcKbnGMfG0BSCEZSQePFhjdCMkRlE6HGm6uVseIAeW8pscEZu
	xWM9ncb3auTSOdAiJVmhp6ylCW32xXdNsPeq_JKAfEHe42hU85RFmeVpvdT9
	S1PYrn5wKoM5F81ubj6HL3bKJyrfB_zjivF2fVWAnrZCNP0zrIahrcF1zMc3
	_uPYXIbLostQiOsZ66sztufTbLh_eM7CP3wfK9yp_npztr58.tZrjiom2UbI
	espD8K7xdFXLAsvbgpTNNi5xLAY8NZj2tYzDk.esC5Wa7gnpwphPadP90aaX
	TcXRbnQR2wMsbPAl24Wut4JYPu5Pw74KVEp1GKHqs9nmHkxsvKeu_vpfdoiF
	v_B.LEyX6uM2ofvGe5eKQQjI7bY9as3zlNbx_aeMh5xn2bWegr8ewhi1P4DP
	6878UxS0ogaTgwiOd6R1Gj0tavFRx0sGta_keURkrWLrDXTDSBVhM5LheC0t
	TRo6c1UtY82fHIKTY54n8gxRpGWc4FeVEibHAwiN0DlcU930U8LP16Tzmxdh
	MfWaHQggwuLZY5TPEahoatYf0gH3PBivwy6wJOw--
X-Yahoo-SMTP: QpZsTh.swBBbiXoX3lukB1DLTA--
X-Rocket-Received: from [172.16.10.10] (ehouby@71.196.207.87 with plain
	[98.139.211.125])
	by smtp222.mail.bf1.yahoo.com with SMTP; 07 Feb 2014 23:37:52 +0000 UTC
Message-ID: <1391816270.2943.21.camel@astar.houby.net>
From: Eric Houby <ehouby@yahoo.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date: Fri, 07 Feb 2014 16:37:50 -0700
In-Reply-To: <52F4BB0A.8000007@m2r.biz>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz> <1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
	<1391764139.9917.54.camel@Solace> <52F4BB0A.8000007@m2r.biz>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: xen@lists.fedoraproject.org, Sander Eikelenboom <linux@eikelenboom.it>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: ehouby@yahoo.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 11:52 +0100, Fabio Fantoni wrote:
> Il 07/02/2014 10:08, Dario Faggioli ha scritto:
> > On gio, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:

> 
> The wiki should be updated; the qxl patch for the libxl part is complete
> and correct and can be used for tests:
> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
> The actual problem is outside of libxl, probably in hvmloader, qemu
> and/or the kernel.
> 
> My latest mail about the qxl problem:
> http://lists.xen.org/archives/html/xen-devel/2013-12/msg00758.html
> 

I have everything built and would like to make a quick observation.
From what I have read, the qxl minimum and default video RAM is 128MB,
but the qemu command line shows 64MB even when videoram is set to 128MB
in the xl config file.

spice = 1
spicehost = '0.0.0.0'
spiceport = 6001
spicedisable_ticketing = 1
spicevdagent = 1
videoram = 128
vga = 'qxl'

/usr/local/lib/xen/bin/qemu-system-i386 -xen-domid 18 -chardev
socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-18,server,nowait -mon
chardev=libxl-cmd,mode=control -nodefaults -name f20 -serial pty -spice
port=6001,tls-port=0,addr=0.0.0.0,disable-ticketing,agent-mouse=on
-device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device
virtserialport,chardev=vdagent,name=com.redhat.spice.0 -device
qxl-vga,vram_size_mb=64,ram_size_mb=64 -boot order=c -smp 2,maxcpus=2
-device virtio-net,id=nic0,netdev=net0,mac=00:16:00:00:11:22 -netdev
type=tap,id=net0,ifname=vif18.0-emu,script=no,downscript=no -machine
xenfv -m 3968 -drive
file=/dev/mapper/xen_vm-f20pvhvm,if=ide,index=0,media=disk,format=raw,cache=writeback

Commenting out the videoram=128 line still shows 64MB in the qemu
command line.

-Eric



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 07 23:39:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Feb 2014 23:39:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WBv15-0001xd-1u; Fri, 07 Feb 2014 23:39:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WBv13-0001xW-Ta
	for xen-devel@lists.xensource.com; Fri, 07 Feb 2014 23:39:26 +0000
Received: from [85.158.137.68:54790] by server-8.bemta-3.messagelabs.com id
	AF/C7-16039-DAE65F25; Fri, 07 Feb 2014 23:39:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391816364!430902!1
X-Originating-IP: [209.85.212.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18656 invoked from network); 7 Feb 2014 23:39:24 -0000
Received: from mail-wi0-f177.google.com (HELO mail-wi0-f177.google.com)
	(209.85.212.177)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	7 Feb 2014 23:39:24 -0000
Received: by mail-wi0-f177.google.com with SMTP id e4so1315314wiv.16
	for <xen-devel@lists.xensource.com>;
	Fri, 07 Feb 2014 15:39:23 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=JfWg7lDQXWEhvQXL+9pcvq5Y9uAhfsDVIT82EXkl5zg=;
	b=Z05KLRBsR5TXE7bWFG3yOGxR9HNikif43s3REnjZddyK9ROOwa5K6ZQUbXLAFZGynE
	BS4qIup0Enha5zcqBVahQTG3qsTEXSXGoLUl37Nq3bsL+17ipsj5EIK3vfi7ayWVNAwV
	E7xyfKwU3+0/wHjZGYs+UPtHv813Q8oTG17zTxPV2nOoTHmPzWJhPAliZabY3/Ir9g30
	gag6lZXbK8zJx2bEj5oByL5okfZLGWtecv5js4Py1yrcdYYwrLPk9tpv9p2Yn5ohNKwj
	8k6InLXZsJdlZLhcb4D8kcxcdw9sl2GmGj5W33GlqNXktvj/j32FbZPRk49gdLq99Qdr
	QtQQ==
X-Gm-Message-State: ALoCoQkPftuO/K2OWmAu/33P13GE0ydhMlidTsJZ6GcSZ7LIwl13GIpz663AxREOU1fKoTEaHlNc
X-Received: by 10.180.19.65 with SMTP id c1mr1688857wie.39.1391816363817;
	Fri, 07 Feb 2014 15:39:23 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ju6sm14377072wjc.1.2014.02.07.15.39.22
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 07 Feb 2014 15:39:23 -0800 (PST)
Message-ID: <52F56EA9.9020903@linaro.org>
Date: Fri, 07 Feb 2014 23:39:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-4-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1391799378-31664-4-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 4/4] xen/arm: set GICH_HCR_NPIE if all
 the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 07/02/14 18:56, Stefano Stabellini wrote:
> On return to guest, if there are no free LRs and we still have more
> interrupts to inject, set GICH_HCR_NPIE so that we will receive a
> maintenance interrupt when no pending interrupts are present in the LR
> registers.
> The maintenance interrupt handler won't do anything anymore, but
> receiving the interrupt causes gic_inject to be called on return to
> guest, which clears the old LRs and injects new interrupts.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>   xen/arch/arm/gic.c |    8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 87bd5d3..bee2618 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -810,8 +810,14 @@ void gic_inject(void)
>       gic_restore_pending_irqs(current);
>       if (!gic_events_need_delivery())
>           gic_inject_irq_stop();
> -    else
> +    else {
>           gic_inject_irq_start();
> +    }
> +
> +    if ( !list_empty(&current->arch.vgic.lr_pending) )
> +        GICH[GICH_HCR] = GICH_HCR_EN | GICH_HCR_NPIE;
> +    else
> +        GICH[GICH_HCR] = GICH_HCR_EN;

Any reason not to move this into the else branch?
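
For what it's worth, that restructuring might look like the sketch below. This is a host-compilable mock, not the real Xen code: the GICH array and the two flags are stand-ins for the hypervisor state, and the real helpers are shown only as comments.

```c
#include <assert.h>

/* GICH_HCR, GICH_HCR_EN and GICH_HCR_NPIE match the Xen names (NPIE is
 * bit 3 of GICH_HCR on GICv2); everything else here is a stub. */
#define GICH_HCR       0
#define GICH_HCR_EN    (1u << 0)
#define GICH_HCR_NPIE  (1u << 3)

static unsigned int GICH[1];

static int events_need_delivery;  /* stub for gic_events_need_delivery() */
static int lr_pending_empty;      /* stub for list_empty(&...lr_pending)  */

/* Sketch of gic_inject() with the NPIE decision folded into the else
 * branch: when nothing needs delivery, lr_pending is empty anyway, so
 * plain GICH_HCR_EN suffices there. */
static void gic_inject_sketch(void)
{
    if ( !events_need_delivery )
    {
        /* gic_inject_irq_stop(); */
        GICH[GICH_HCR] = GICH_HCR_EN;
    }
    else
    {
        /* gic_inject_irq_start(); */
        if ( !lr_pending_empty )
            GICH[GICH_HCR] = GICH_HCR_EN | GICH_HCR_NPIE;
        else
            GICH[GICH_HCR] = GICH_HCR_EN;
    }
}
```

i.e. GICH_HCR is still written exactly once per gic_inject() call, but the list_empty() check is only evaluated when interrupts remain to be injected.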

BTW, I think we can safely avoid reinjecting the IRQ for the event 
channels if there are pending events. That would also avoid spurious 
IRQs in some specific cases :).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BS-0003Zn-Da; Sat, 08 Feb 2014 05:10:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BR-0003ZS-Ea
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:29 +0000
Received: from [193.109.254.147:45371] by server-15.bemta-14.messagelabs.com
	id 52/E5-10839-44CB5F25; Sat, 08 Feb 2014 05:10:28 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391836226!2902699!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24147 invoked from network); 8 Feb 2014 05:10:27 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-16.tower-27.messagelabs.com with SMTP;
	8 Feb 2014 05:10:27 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 07 Feb 2014 21:10:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="451748447"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 07 Feb 2014 21:10:24 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:33 +0800
Message-Id: <1391836058-81430-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 2/6] x86: dynamically attach/detach CQM
	service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add hypervisor-side support for dynamically attaching and detaching the
CQM service for a given guest.

When the CQM service is attached to a guest, the system allocates an
RMID for it. When the service is detached, or the guest is shut down,
the RMID is reclaimed for future use.
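
The attach/detach lifecycle boils down to a first-free scan over an RMID-to-domain table, with RMID 0 reserved as the fallback. A minimal host-compilable sketch of that bookkeeping follows — MIN_RMID/MAX_RMID are made-up values here (the real bounds come from CPUID), and the cqm_lock serialization is omitted:

```c
#include <assert.h>

#define MIN_RMID       1
#define MAX_RMID       8          /* hypothetical; real value comes from CPUID */
#define DOMID_INVALID  0x7FF4u    /* matches Xen's DOMID_INVALID */

static unsigned int rmid_to_dom[MAX_RMID + 1];
static unsigned int used_rmid;

static void rmid_table_init(void)
{
    unsigned int r;

    for ( r = 0; r <= MAX_RMID; r++ )
        rmid_to_dom[r] = DOMID_INVALID;
    used_rmid = 0;
}

/* Mirrors alloc_cqm_rmid(): take the first free RMID; on exhaustion the
 * domain keeps the reserved RMID 0 and an error is reported. */
static int alloc_rmid(unsigned int domid, unsigned int *out)
{
    unsigned int rmid;

    for ( rmid = MIN_RMID; rmid <= MAX_RMID; rmid++ )
    {
        if ( rmid_to_dom[rmid] != DOMID_INVALID )
            continue;
        rmid_to_dom[rmid] = domid;
        used_rmid++;
        *out = rmid;
        return 0;
    }
    *out = 0;       /* no RMID available: fall back to RMID 0 */
    return -1;      /* the real code returns -EUSERS */
}

/* Mirrors free_cqm_rmid(): RMID 0 is system-reserved and never freed. */
static void free_rmid(unsigned int rmid)
{
    if ( rmid == 0 )
        return;
    rmid_to_dom[rmid] = DOMID_INVALID;
    used_rmid--;
}
```

Freed RMIDs are immediately reusable by later allocations, which is what lets domains attach and detach the service repeatedly without leaking the (small) RMID space.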

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c        |    3 +++
 xen/arch/x86/domctl.c        |   28 ++++++++++++++++++++
 xen/arch/x86/pqos.c          |   60 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |    2 ++
 xen/include/asm-x86/pqos.h   |   12 +++++++++
 xen/include/public/domctl.h  |   11 ++++++++
 6 files changed, 116 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 16f2b50..2656204 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -60,6 +60,7 @@
 #include <xen/numa.h>
 #include <xen/iommu.h>
 #include <compat/vcpu.h>
+#include <asm/pqos.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 DEFINE_PER_CPU(unsigned long, cr4);
@@ -612,6 +613,8 @@ void arch_domain_destroy(struct domain *d)
 
     free_xenheap_page(d->shared_info);
     cleanup_domain_irq_mapping(d);
+
+    free_cqm_rmid(d);
 }
 
 unsigned long pv_guest_cr4_fixup(const struct vcpu *v, unsigned long guest_cr4)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..7219011 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,6 +35,7 @@
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
+#include <asm/pqos.h>
 
 static int gdbsx_guest_mem_io(
     domid_t domid, struct xen_domctl_gdbsx_memio *iop)
@@ -1245,6 +1246,33 @@ long arch_do_domctl(
     }
     break;
 
+    case XEN_DOMCTL_attach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else
+            ret = alloc_cqm_rmid(d);
+    }
+    break;
+
+    case XEN_DOMCTL_detach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else if ( d->arch.pqos_cqm_rmid > 0 )
+        {
+            free_cqm_rmid(d);
+            ret = 0;
+        }
+        else
+            ret = -ENOENT;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index ba0de37..eb469ac 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <xen/init.h>
 #include <xen/mm.h>
+#include <xen/spinlock.h>
 #include <asm/pqos.h>
 
 static bool_t __initdata opt_pqos = 1;
@@ -145,6 +146,65 @@ void __init init_platform_qos(void)
     init_qos_monitor();
 }
 
+int alloc_cqm_rmid(struct domain *d)
+{
+    int rc = 0;
+    unsigned int rmid;
+
+    ASSERT(system_supports_cqm());
+
+    spin_lock(&cqm->cqm_lock);
+
+    if ( d->arch.pqos_cqm_rmid > 0 )
+    {
+        rc = -EEXIST;
+        goto out;
+    }
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
+            continue;
+
+        cqm->rmid_to_dom[rmid] = d->domain_id;
+        break;
+    }
+
+    /* No CQM RMID available, assign RMID=0 by default */
+    if ( rmid > cqm->max_rmid )
+    {
+        rmid = 0;
+        rc = -EUSERS;
+    }
+    else
+        cqm->used_rmid++;
+
+    d->arch.pqos_cqm_rmid = rmid;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+
+    return rc;
+}
+
+void free_cqm_rmid(struct domain *d)
+{
+    unsigned int rmid;
+
+    spin_lock(&cqm->cqm_lock);
+    rmid = d->arch.pqos_cqm_rmid;
+    /* We do not free system reserved "RMID=0" */
+    if ( rmid == 0 )
+        goto out;
+
+    cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+    d->arch.pqos_cqm_rmid = 0;
+    cqm->used_rmid--;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..662714d 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -313,6 +313,8 @@ struct arch_domain
     spinlock_t e820_lock;
     struct e820entry *e820;
     unsigned int nr_e820;
+
+    unsigned int pqos_cqm_rmid;       /* CQM RMID assigned to the domain */
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 0a8065c..f25037d 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -16,6 +16,7 @@
  */
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
+#include <xen/sched.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -38,6 +39,17 @@ struct pqos_cqm {
 };
 extern struct pqos_cqm *cqm;
 
+static inline bool_t system_supports_cqm(void)
+{
+    return !!cqm;
+}
+
+/* IA32_QM_CTR */
+#define IA32_QM_CTR_ERROR_MASK         (0x3ul << 62)
+
 void init_platform_qos(void);
 
+int alloc_cqm_rmid(struct domain *d);
+void free_cqm_rmid(struct domain *d);
+
 #endif
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f8d9293 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,14 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_qos_type {
+#define _XEN_DOMCTL_pqos_cqm      0
+#define XEN_DOMCTL_pqos_cqm       (1U<<_XEN_DOMCTL_pqos_cqm)
+    uint64_t flags;
+};
+typedef struct xen_domctl_qos_type xen_domctl_qos_type_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_qos_type_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +962,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_attach_pqos                   71
+#define XEN_DOMCTL_detach_pqos                   72
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1014,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
+        struct xen_domctl_qos_type          qos_type;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BP-0003ZH-5I; Sat, 08 Feb 2014 05:10:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BN-0003Z7-CF
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:25 +0000
Received: from [85.158.139.211:56770] by server-10.bemta-5.messagelabs.com id
	E9/D0-08578-04CB5F25; Sat, 08 Feb 2014 05:10:24 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8271 invoked from network); 8 Feb 2014 05:10:23 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:23 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742855"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:19 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:31 +0800
Message-Id: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 0/6] enable Cache QoS Monitoring (CQM) feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v7:
 - Address comments from Andrew Cooper, including:
   * Check CQM capability before allocating cpumask memory.
   * Move one function declaration into the correct patch.

Changes from v6:
 - Address comments from Jan Beulich, including:
   * Remove the unnecessary CPUID feature check.
   * Remove the unnecessary socket_cpu_map.
   * Spin_lock related changes, avoid spin_lock_irqsave().
   * Use a read-only mapping to pass CQM data between Xen and
     userspace, to avoid data copying.
   * Optimize RDMSR/WRMSR logic to avoid unnecessary calls.
   * Misc fixes including __read_mostly prefix, return value, etc.

Changes from v5:
 - Address comments from Dario Faggioli, including:
   * Define a new libxl_cqminfo structure to avoid reference of xc
     structure in libxl functions.
   * Use LOGE() instead of the LIBXL__LOG() functions.

Changes from v4:
 - When comparing the xl cqm parameter, use strcmp instead of strncmp;
   otherwise "xl pqos-attach cqmabcd domid" would be accepted as a
   valid command line.
 - Address comments from Andrew Cooper, including:
   * Adjust the pqos parameter parsing function.
   * Modify the pqos related documentation.
   * Add a check for opt_cqm_max_rmid in initialization code.
   * Do not IPI a CPU that is in the same socket as the current CPU.
 - Address comments from Dario Faggioli, including:
   * Fix a typo in export symbols.
   * Return correct libxl error code for qos related functions.
   * Abstract the error printing logic into a function.
 - Address comment from Daniel De Graaf, including:
   * Add a return value for the pqos related check.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Modify the GPLv2 related file header, remove the address.

Changes from v3:
 - Use structure to better organize CQM related global variables.
 - Address comments from Andrew Cooper, including:
   * Remove the domain creation flag for CQM RMID allocation.
   * Adjust the boot parameter format, use custom_param().
   * Add documentation for the new added boot parameter.
   * Change QoS type flag to be uint64_t.
   * Initialize the per-socket cpu bitmap at system boot time.
   * Remove get_cqm_avail() function.
   * Misc of format changes.
 - Address comment from Daniel De Graaf, including:
   * Use avc_current_has_perm() for XEN2__PQOS_OP that belongs to SECCLASS_XEN2.

Changes from v2:
 - Address comments from Andrew Cooper, including:
   * Merging tools stack changes into one patch.
   * Reduce the IPI number to one per socket.
   * Change structures for CQM data exchange between tools and Xen.
   * Misc of format/variable/function name changes.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Simplify the error printing logic.
   * Add xsm check for the new added hypercalls.

Changes from v1:
 - Address comments from Andrew Cooper, including:
   * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
   * Change some structure element order to save packing cost.
   * Correct some function's return value.
   * Some programming styles change.
   * ...

Future generations of Intel Xeon processors may offer a monitoring
capability in each logical processor to measure specific
quality-of-service metrics, for example Cache QoS Monitoring to get L3
cache occupancy. For detailed information, please refer to Intel SDM
chapter 17.14.

Cache QoS Monitoring provides a layer of abstraction between applications and
logical processors through the use of Resource Monitoring IDs (RMIDs).
In the Xen design, each guest in the system can be assigned an RMID
independently, while RMID=0 is reserved for domains that do not enable
the CQM service. When any of a domain's vcpus is scheduled on a logical
processor, the domain's RMID is activated by programming the value into
a specific MSR, and when the vcpu is scheduled out, RMID=0 is programmed
into that MSR instead. The Cache QoS hardware tracks cache utilization
of memory accesses according to the RMIDs and reports the monitored data
via a counter register. With this solution, we can learn how much L3
cache is used by a certain guest.
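The per-schedule RMID switch described above can be modeled in a small
stand-alone sketch. Assumptions not stated in this series: the MSR name
IA32_PQR_ASSOC (0xC8F) and the 10-bit RMID field follow the Intel SDM
description, and all identifiers below are local to the sketch, not
Xen's.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: compute the IA32_PQR_ASSOC value written on vcpu schedule
 * in/out.  The active RMID lives in the low bits of the MSR; the rest
 * of the register (e.g. the CLOS field) must be preserved.
 */
#define MSR_IA32_PQR_ASSOC   0xc8f      /* per Intel SDM; illustrative */
#define PQR_ASSOC_RMID_MASK  0x3ffULL   /* assumed 10-bit RMID field */

static uint64_t pqr_assoc_value(uint64_t current, unsigned int rmid)
{
    /* Keep every bit except the RMID field, then install the new RMID. */
    return (current & ~PQR_ASSOC_RMID_MASK) |
           (rmid & PQR_ASSOC_RMID_MASK);
}
```

On schedule-in the hypervisor would write this value (with the domain's
RMID) via WRMSR; on schedule-out it would write it again with RMID=0.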

To attach the CQM service to a certain guest, two approaches are provided:
1) Create the guest with "pqos_cqm=1" set in configuration file.
2) Use "xl pqos-attach cqm domid" for a running guest.

To detach the CQM service from a guest, users can:
1) Use "xl pqos-detach cqm domid" for a running guest.
2) Destroy the guest, which also detaches the CQM service.

To get the L3 cache usage, users can use the following command:
$ xl pqos-list cqm

The data below is just an example showing how the CQM related data is
exposed to the end user.

[root@localhost]# xl pqos-list cqm
Name               ID  SocketID        L3C_Usage       SocketID        L3C_Usage
Domain-0            0         0         20127744              1         25231360
ExampleHVMDomain    1         0          3211264              1         10551296

RMID count    56        RMID available    53
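The L3C_Usage figures above come from reading a per-RMID hardware
counter and scaling it. As a rough model (assumptions: bits 63:62 of
the raw counter flag an error or unavailable data, matching the
IA32_QM_CTR_ERROR_MASK this series defines, and the scaling factor is
the one reported by CPUID leaf 0xF, subleaf 1):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: convert a raw IA32_QM_CTR reading into bytes of L3 cache
 * occupancy.  Bits 63:62 signal an error or unavailable data; the
 * upscaling factor is CPUID-enumerated.  Returns 0 for an invalid
 * reading.
 */
#define QM_CTR_ERROR_MASK  (0x3ULL << 62)

static uint64_t l3c_usage_bytes(uint64_t qm_ctr, unsigned int upscale)
{
    if ( qm_ctr & QM_CTR_ERROR_MASK )
        return 0;                 /* error/unavailable: no valid data */
    return qm_ctr * upscale;
}
```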

Dongxiao Xu (6):
  x86: detect and initialize Cache QoS Monitoring feature
  x86: dynamically attach/detach CQM service for a guest
  x86: collect CQM information from all sockets
  x86: enable CQM monitoring for each domain RMID
  xsm: add platform QoS related xsm policies
  tools: enable Cache QoS Monitoring feature for libxl/libxc

 docs/misc/xen-command-line.markdown          |    7 +
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 +-
 tools/libxc/xc_domain.c                      |   37 ++++
 tools/libxc/xenctrl.h                        |   12 ++
 tools/libxl/Makefile                         |    3 +-
 tools/libxl/libxl.h                          |    4 +
 tools/libxl/libxl_pqos.c                     |  132 +++++++++++++
 tools/libxl/libxl_types.idl                  |    7 +
 tools/libxl/xl.h                             |    3 +
 tools/libxl/xl_cmdimpl.c                     |  111 +++++++++++
 tools/libxl/xl_cmdtable.c                    |   15 ++
 xen/arch/x86/Makefile                        |    1 +
 xen/arch/x86/domain.c                        |    8 +
 xen/arch/x86/domctl.c                        |   28 +++
 xen/arch/x86/pqos.c                          |  273 ++++++++++++++++++++++++++
 xen/arch/x86/setup.c                         |    3 +
 xen/arch/x86/sysctl.c                        |   58 ++++++
 xen/include/asm-x86/cpufeature.h             |    1 +
 xen/include/asm-x86/domain.h                 |    2 +
 xen/include/asm-x86/msr-index.h              |    5 +
 xen/include/asm-x86/pqos.h                   |   59 ++++++
 xen/include/public/domctl.h                  |   11 ++
 xen/include/public/sysctl.h                  |   11 ++
 xen/xsm/flask/hooks.c                        |    8 +
 xen/xsm/flask/policy/access_vectors          |   17 +-
 26 files changed, 817 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxl/libxl_pqos.c
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BQ-0003ZO-LJ; Sat, 08 Feb 2014 05:10:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BO-0003ZC-Dt
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:26 +0000
Received: from [85.158.139.211:56786] by server-10.bemta-5.messagelabs.com id
	4B/D0-08578-14CB5F25; Sat, 08 Feb 2014 05:10:25 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8309 invoked from network); 8 Feb 2014 05:10:24 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:24 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742863"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:21 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:32 +0800
Message-Id: <1391836058-81430-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 1/6] x86: detect and initialize Cache QoS
	Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Detect the platform QoS feature status and enumerate the resource
types, one of which is L3 cache occupancy monitoring.

Also introduce a Xen command line parameter to control the QoS
feature status.
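The "pqos=" grammar this patch documents can be exercised with a
simplified stand-alone parser. This is illustrative only: it accepts
plain booleans as "0"/"1" and omits parse_bool's full on/off/no-
handling; all names are local to the sketch.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Simplified model of the "pqos=" option parser: a comma-separated
 * list of <boolean> | cqm:<boolean> | cqm_max_rmid:<integer>.
 */
static int opt_pqos = 1, opt_cqm = 1;
static unsigned long opt_cqm_max_rmid = 255;

static void parse_pqos(const char *arg)
{
    char buf[128], *s = buf, *ss;

    /* Work on a local copy so tokens can be NUL-terminated in place. */
    strncpy(buf, arg, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';

    do {
        ss = strchr(s, ',');
        if ( ss )
            *ss = '\0';

        if ( !strcmp(s, "0") || !strcmp(s, "1") )
            opt_pqos = *s - '0';
        else if ( !strncmp(s, "cqm:", 4) )
            opt_cqm = atoi(s + 4);
        else if ( !strncmp(s, "cqm_max_rmid:", 13) )
            opt_cqm_max_rmid = strtoul(s + 13, NULL, 0);

        s = ss + 1;
    } while ( ss );
}
```

For example, parse_pqos("1,cqm:0,cqm_max_rmid:63") enables pqos,
disables cqm, and caps the RMID pool at 63.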

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/misc/xen-command-line.markdown |    7 ++
 xen/arch/x86/Makefile               |    1 +
 xen/arch/x86/pqos.c                 |  156 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                |    3 +
 xen/include/asm-x86/cpufeature.h    |    1 +
 xen/include/asm-x86/pqos.h          |   43 ++++++++++
 6 files changed, 211 insertions(+)
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 15aa404..7751ffe 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -770,6 +770,13 @@ This option can be specified more than once (up to 8 times at present).
 ### ple\_window
 > `= <integer>`
 
+### pqos (Intel)
+> `= List of ( <boolean> | cqm:<boolean> | cqm_max_rmid:<integer> )`
+
+> Default: `pqos=1,cqm:1,cqm_max_rmid:255`
+
+Configure platform QoS services.
+
 ### reboot
 > `= t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..54962e0 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += pqos.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
new file mode 100644
index 0000000..ba0de37
--- /dev/null
+++ b/xen/arch/x86/pqos.c
@@ -0,0 +1,156 @@
+/*
+ * pqos.c: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#include <asm/processor.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <asm/pqos.h>
+
+static bool_t __initdata opt_pqos = 1;
+static bool_t __initdata opt_cqm = 1;
+static unsigned int __initdata opt_cqm_max_rmid = 255;
+
+static void __init parse_pqos_param(char *s)
+{
+    char *ss, *val_str;
+    int val;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        val = parse_bool(s);
+        if ( val >= 0 )
+            opt_pqos = val;
+        else
+        {
+            val = !!strncmp(s, "no-", 3);
+            if ( !val )
+                s += 3;
+
+            val_str = strchr(s, ':');
+            if ( val_str )
+                *val_str++ = '\0';
+
+            if ( val_str && !strcmp(s, "cqm") &&
+                 (val = parse_bool(val_str)) >= 0 )
+                opt_cqm = val;
+            else if ( val_str && !strcmp(s, "cqm_max_rmid") )
+                opt_cqm_max_rmid = simple_strtoul(val_str, NULL, 0);
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+custom_param("pqos", parse_pqos_param);
+
+struct pqos_cqm __read_mostly *cqm = NULL;
+
+static void __init init_cqm(void)
+{
+    unsigned int rmid;
+    unsigned int eax, edx;
+    unsigned int cqm_pages;
+    unsigned int i;
+
+    if ( !opt_cqm_max_rmid )
+        return;
+
+    cqm = xzalloc(struct pqos_cqm);
+    if ( !cqm )
+        return;
+
+    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
+    if ( !(edx & QOS_MONITOR_EVTID_L3) )
+        goto out;
+
+    cqm->min_rmid = 1;
+    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
+
+    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
+    if ( !cqm->rmid_to_dom )
+        goto out;
+
+    /* Reserve RMID 0 for all domains not being monitored */
+    cqm->rmid_to_dom[0] = DOMID_XEN;
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+
+    /* Allocate CQM buffer size in initialization stage */
+    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
+                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
+                PAGE_SIZE + 1;
+    cqm->buffer_size = cqm_pages * PAGE_SIZE;
+
+    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
+    if ( !cqm->buffer )
+    {
+        xfree(cqm->rmid_to_dom);
+        goto out;
+    }
+    memset(cqm->buffer, 0, cqm->buffer_size);
+
+    for ( i = 0; i < cqm_pages; i++ )
+        share_xen_page_with_privileged_guests(
+            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),
+            XENSHARE_readonly);
+
+    spin_lock_init(&cqm->cqm_lock);
+
+    cqm->used_rmid = 0;
+
+    printk(XENLOG_INFO "Cache QoS Monitoring Enabled.\n");
+
+    return;
+
+out:
+    xfree(cqm);
+    cqm = NULL;
+}
+
+static void __init init_qos_monitor(void)
+{
+    unsigned int qm_features;
+    unsigned int eax, ebx, ecx;
+
+    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
+        return;
+
+    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
+
+    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
+        init_cqm();
+}
+
+void __init init_platform_qos(void)
+{
+    if ( !opt_pqos )
+        return;
+
+    init_qos_monitor();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..639528f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,6 +48,7 @@
 #include <asm/setup.h>
 #include <xen/cpu.h>
 #include <asm/nmi.h>
+#include <asm/pqos.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
@@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     domain_unpause_by_systemcontroller(dom0);
 
+    init_platform_qos();
+
     reset_stack_and_jump(init_done);
 }
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..ca59668 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -147,6 +147,7 @@
 #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
+#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
new file mode 100644
index 0000000..0a8065c
--- /dev/null
+++ b/xen/include/asm-x86/pqos.h
@@ -0,0 +1,43 @@
+/*
+ * pqos.h: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef ASM_PQOS_H
+#define ASM_PQOS_H
+
+#include <public/xen.h>
+#include <xen/spinlock.h>
+
+/* QoS Resource Type Enumeration */
+#define QOS_MONITOR_TYPE_L3            0x2
+
+/* QoS Monitoring Event ID */
+#define QOS_MONITOR_EVTID_L3           0x1
+
+struct pqos_cqm {
+    spinlock_t cqm_lock;
+    uint64_t *buffer;
+    unsigned int min_rmid;
+    unsigned int max_rmid;
+    unsigned int used_rmid;
+    unsigned int upscaling_factor;
+    unsigned int buffer_size;
+    domid_t *rmid_to_dom;
+};
+extern struct pqos_cqm *cqm;
+
+void init_platform_qos(void);
+
+#endif
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BS-0003Zn-Da; Sat, 08 Feb 2014 05:10:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BR-0003ZS-Ea
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:29 +0000
Received: from [193.109.254.147:45371] by server-15.bemta-14.messagelabs.com
	id 52/E5-10839-44CB5F25; Sat, 08 Feb 2014 05:10:28 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1391836226!2902699!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24147 invoked from network); 8 Feb 2014 05:10:27 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-16.tower-27.messagelabs.com with SMTP;
	8 Feb 2014 05:10:27 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 07 Feb 2014 21:10:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="451748447"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 07 Feb 2014 21:10:24 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:33 +0800
Message-Id: <1391836058-81430-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 2/6] x86: dynamically attach/detach CQM
	service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add hypervisor-side support for dynamically attaching and detaching
the CQM service for a given guest.

When the CQM service is attached to a guest, the system allocates an
RMID for it. When the service is detached or the guest is shut down,
the RMID is reclaimed for future use.
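The allocation step can be modeled outside the hypervisor as a scan of
the rmid_to_dom table for a free slot, skipping the reserved RMID 0.
A minimal sketch, with SK_DOMID_INVALID standing in for Xen's
DOMID_INVALID and all names local to the sketch:

```c
#include <assert.h>

/*
 * Sketch: allocate the first free RMID in [1, max_rmid].  A slot is
 * free when it holds the stand-in invalid-domain marker.  Returns the
 * allocated RMID, or 0 when the pool is exhausted (callers then fall
 * back to the reserved RMID=0).
 */
#define SK_DOMID_INVALID  0x7ff5u   /* stand-in for DOMID_INVALID */

static unsigned int alloc_rmid(unsigned short *rmid_to_dom,
                               unsigned int max_rmid,
                               unsigned short domid)
{
    unsigned int rmid;

    for ( rmid = 1; rmid <= max_rmid; rmid++ )
    {
        if ( rmid_to_dom[rmid] != SK_DOMID_INVALID )
            continue;
        rmid_to_dom[rmid] = domid;  /* claim the slot for this domain */
        return rmid;
    }
    return 0;                       /* no free RMID available */
}
```

Freeing is the inverse: mark the slot invalid again and reset the
domain's recorded RMID to 0, as the patch's free_cqm_rmid() does.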

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c        |    3 +++
 xen/arch/x86/domctl.c        |   28 ++++++++++++++++++++
 xen/arch/x86/pqos.c          |   60 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |    2 ++
 xen/include/asm-x86/pqos.h   |   12 +++++++++
 xen/include/public/domctl.h  |   11 ++++++++
 6 files changed, 116 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 16f2b50..2656204 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -60,6 +60,7 @@
 #include <xen/numa.h>
 #include <xen/iommu.h>
 #include <compat/vcpu.h>
+#include <asm/pqos.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 DEFINE_PER_CPU(unsigned long, cr4);
@@ -612,6 +613,8 @@ void arch_domain_destroy(struct domain *d)
 
     free_xenheap_page(d->shared_info);
     cleanup_domain_irq_mapping(d);
+
+    free_cqm_rmid(d);
 }
 
 unsigned long pv_guest_cr4_fixup(const struct vcpu *v, unsigned long guest_cr4)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..7219011 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,6 +35,7 @@
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
+#include <asm/pqos.h>
 
 static int gdbsx_guest_mem_io(
     domid_t domid, struct xen_domctl_gdbsx_memio *iop)
@@ -1245,6 +1246,33 @@ long arch_do_domctl(
     }
     break;
 
+    case XEN_DOMCTL_attach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else
+            ret = alloc_cqm_rmid(d);
+    }
+    break;
+
+    case XEN_DOMCTL_detach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else if ( d->arch.pqos_cqm_rmid > 0 )
+        {
+            free_cqm_rmid(d);
+            ret = 0;
+        }
+        else
+            ret = -ENOENT;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index ba0de37..eb469ac 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <xen/init.h>
 #include <xen/mm.h>
+#include <xen/spinlock.h>
 #include <asm/pqos.h>
 
 static bool_t __initdata opt_pqos = 1;
@@ -145,6 +146,65 @@ void __init init_platform_qos(void)
     init_qos_monitor();
 }
 
+int alloc_cqm_rmid(struct domain *d)
+{
+    int rc = 0;
+    unsigned int rmid;
+
+    ASSERT(system_supports_cqm());
+
+    spin_lock(&cqm->cqm_lock);
+
+    if ( d->arch.pqos_cqm_rmid > 0 )
+    {
+        rc = -EEXIST;
+        goto out;
+    }
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
+            continue;
+
+        cqm->rmid_to_dom[rmid] = d->domain_id;
+        break;
+    }
+
+    /* No CQM RMID available, assign RMID=0 by default */
+    if ( rmid > cqm->max_rmid )
+    {
+        rmid = 0;
+        rc = -EUSERS;
+    }
+    else
+        cqm->used_rmid++;
+
+    d->arch.pqos_cqm_rmid = rmid;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+
+    return rc;
+}
+
+void free_cqm_rmid(struct domain *d)
+{
+    unsigned int rmid;
+
+    spin_lock(&cqm->cqm_lock);
+    rmid = d->arch.pqos_cqm_rmid;
+    /* We do not free system reserved "RMID=0" */
+    if ( rmid == 0 )
+        goto out;
+
+    cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+    d->arch.pqos_cqm_rmid = 0;
+    cqm->used_rmid--;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..662714d 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -313,6 +313,8 @@ struct arch_domain
     spinlock_t e820_lock;
     struct e820entry *e820;
     unsigned int nr_e820;
+
+    unsigned int pqos_cqm_rmid;       /* CQM RMID assigned to the domain */
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 0a8065c..f25037d 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -16,6 +16,7 @@
  */
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
+#include <xen/sched.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -38,6 +39,17 @@ struct pqos_cqm {
 };
 extern struct pqos_cqm *cqm;
 
+static inline bool_t system_supports_cqm(void)
+{
+    return !!cqm;
+}
+
+/* IA32_QM_CTR */
+#define IA32_QM_CTR_ERROR_MASK         (0x3ul << 62)
+
 void init_platform_qos(void);
 
+int alloc_cqm_rmid(struct domain *d);
+void free_cqm_rmid(struct domain *d);
+
 #endif
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f8d9293 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,14 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_qos_type {
+#define _XEN_DOMCTL_pqos_cqm      0
+#define XEN_DOMCTL_pqos_cqm       (1U<<_XEN_DOMCTL_pqos_cqm)
+    uint64_t flags;
+};
+typedef struct xen_domctl_qos_type xen_domctl_qos_type_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_qos_type_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +962,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_attach_pqos                   71
+#define XEN_DOMCTL_detach_pqos                   72
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1014,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
+        struct xen_domctl_qos_type          qos_type;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BP-0003ZH-5I; Sat, 08 Feb 2014 05:10:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BN-0003Z7-CF
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:25 +0000
Received: from [85.158.139.211:56770] by server-10.bemta-5.messagelabs.com id
	E9/D0-08578-04CB5F25; Sat, 08 Feb 2014 05:10:24 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8271 invoked from network); 8 Feb 2014 05:10:23 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:23 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:21 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742855"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:19 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:31 +0800
Message-Id: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 0/6] enable Cache QoS Monitoring (CQM) feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v7:
 - Address comments from Andrew Cooper, including:
   * Check CQM capability before allocating cpumask memory.
   * Move one function declaration into the correct patch.

Changes from v6:
 - Address comments from Jan Beulich, including:
   * Remove the unnecessary CPUID feature check.
   * Remove the unnecessary socket_cpu_map.
   * Spin_lock related changes, avoid spin_lock_irqsave().
   * Use readonly mapping to pass cqm data between Xen/Userspace,
     to avoid data copying.
   * Optimize RDMSR/WRMSR logic to avoid unnecessary calls.
   * Misc fixes including __read_mostly prefix, return value, etc.

Changes from v5:
 - Address comments from Dario Faggioli, including:
   * Define a new libxl_cqminfo structure to avoid reference of xc
     structure in libxl functions.
   * Use LOGE() instead of the LIBXL__LOG() functions.

Changes from v4:
 - When comparing the xl cqm parameter, use strcmp instead of strncmp;
   otherwise "xl pqos-attach cqmabcd domid" would be accepted as a
   valid command line.
 - Address comments from Andrew Cooper, including:
   * Adjust the pqos parameter parsing function.
   * Modify the pqos related documentation.
   * Add a check for opt_cqm_max_rmid in initialization code.
   * Do not IPI CPU that is in same socket with current CPU.
 - Address comments from Dario Faggioli, including:
   * Fix a typo in exported symbols.
   * Return correct libxl error code for qos related functions.
   * Abstract the error printing logic into a function.
 - Address comment from Daniel De Graaf, including:
   * Add a return value for the pqos-related check.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Modify the GPLv2 related file header, remove the address.

Changes from v3:
 - Use structure to better organize CQM related global variables.
 - Address comments from Andrew Cooper, including:
   * Remove the domain creation flag for CQM RMID allocation.
   * Adjust the boot parameter format, use custom_param().
   * Add documentation for the new added boot parameter.
   * Change QoS type flag to be uint64_t.
   * Initialize the per socket cpu bitmap in system boot time.
   * Remove get_cqm_avail() function.
   * Misc of format changes.
 - Address comment from Daniel De Graaf, including:
   * Use avc_current_has_perm() for XEN2__PQOS_OP that belongs to SECCLASS_XEN2.

Changes from v2:
 - Address comments from Andrew Cooper, including:
   * Merging tools stack changes into one patch.
   * Reduce the IPI number to one per socket.
   * Change structures for CQM data exchange between tools and Xen.
   * Misc of format/variable/function name changes.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Simplify the error printing logic.
   * Add xsm check for the new added hypercalls.

Changes from v1:
 - Address comments from Andrew Cooper, including:
   * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
   * Change some structure element order to save packing cost.
   * Correct some function's return value.
   * Some programming styles change.
   * ...

Future generations of Intel Xeon processors may offer a monitoring capability
in each logical processor to measure a specific quality-of-service metric;
for example, Cache QoS Monitoring (CQM) reports L3 cache occupancy.
For detailed information, please refer to Intel SDM chapter 17.14.

Cache QoS Monitoring provides a layer of abstraction between applications and
logical processors through the use of Resource Monitoring IDs (RMIDs).
In the Xen design, each guest in the system can be assigned an RMID
independently, while RMID=0 is reserved for domains that do not have the CQM
service enabled. When any of a domain's vcpus is scheduled on a logical
processor, the domain's RMID is activated by programming its value into a
specific MSR; when the vcpu is scheduled out, RMID=0 is programmed into that
MSR. The Cache QoS hardware tracks the cache utilization of memory accesses
according to the RMIDs and reports the monitored data via a counter register.
With this solution, we can learn how much L3 cache a given guest is using.

To attach the CQM service to a guest, two approaches are provided:
1) Create the guest with "pqos_cqm=1" set in its configuration file.
2) Use "xl pqos-attach cqm domid" on a running guest.

To detach the CQM service from a guest, users can:
1) Use "xl pqos-detach cqm domid" on a running guest.
2) Destroy the guest, which also detaches the CQM service.

To get the L3 cache usage, users can run:
$ xl pqos-list cqm

The data below is an example of how the CQM-related data is exposed to the
end user.

[root@localhost]# xl pqos-list cqm
Name               ID  SocketID        L3C_Usage       SocketID        L3C_Usage
Domain-0            0         0         20127744              1         25231360
ExampleHVMDomain    1         0          3211264              1         10551296

RMID count    56        RMID available    53

Dongxiao Xu (6):
  x86: detect and initialize Cache QoS Monitoring feature
  x86: dynamically attach/detach CQM service for a guest
  x86: collect CQM information from all sockets
  x86: enable CQM monitoring for each domain RMID
  xsm: add platform QoS related xsm policies
  tools: enable Cache QoS Monitoring feature for libxl/libxc

 docs/misc/xen-command-line.markdown          |    7 +
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 +-
 tools/libxc/xc_domain.c                      |   37 ++++
 tools/libxc/xenctrl.h                        |   12 ++
 tools/libxl/Makefile                         |    3 +-
 tools/libxl/libxl.h                          |    4 +
 tools/libxl/libxl_pqos.c                     |  132 +++++++++++++
 tools/libxl/libxl_types.idl                  |    7 +
 tools/libxl/xl.h                             |    3 +
 tools/libxl/xl_cmdimpl.c                     |  111 +++++++++++
 tools/libxl/xl_cmdtable.c                    |   15 ++
 xen/arch/x86/Makefile                        |    1 +
 xen/arch/x86/domain.c                        |    8 +
 xen/arch/x86/domctl.c                        |   28 +++
 xen/arch/x86/pqos.c                          |  273 ++++++++++++++++++++++++++
 xen/arch/x86/setup.c                         |    3 +
 xen/arch/x86/sysctl.c                        |   58 ++++++
 xen/include/asm-x86/cpufeature.h             |    1 +
 xen/include/asm-x86/domain.h                 |    2 +
 xen/include/asm-x86/msr-index.h              |    5 +
 xen/include/asm-x86/pqos.h                   |   59 ++++++
 xen/include/public/domctl.h                  |   11 ++
 xen/include/public/sysctl.h                  |   11 ++
 xen/xsm/flask/hooks.c                        |    8 +
 xen/xsm/flask/policy/access_vectors          |   17 +-
 26 files changed, 817 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxl/libxl_pqos.c
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BZ-0003bV-UL; Sat, 08 Feb 2014 05:10:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BY-0003bA-TI
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:37 +0000
Received: from [85.158.139.211:58617] by server-7.bemta-5.messagelabs.com id
	24/EF-14867-C4CB5F25; Sat, 08 Feb 2014 05:10:36 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!5
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9216 invoked from network); 8 Feb 2014 05:10:35 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:35 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:34 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742898"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:30 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:36 +0800
Message-Id: <1391836058-81430-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 5/6] xsm: add platform QoS related xsm
	policies
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add XSM policies for the pqos attach/detach and get-CQM-info hypercalls.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 ++++-
 xen/xsm/flask/hooks.c                        |    8 ++++++++
 xen/xsm/flask/policy/access_vectors          |   17 ++++++++++++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index dedc035..1f683af 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			getscheduler getvcpuinfo getvcpuextstate getaddrsize
 			getaffinity setaffinity };
-	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim  set_max_evtchn };
+	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn pqos_op };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index bb59fe8..115fcfe 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -64,6 +64,9 @@ allow dom0_t xen_t:xen {
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op tmem_op
 	tmem_control getscheduler setscheduler
 };
+allow dom0_t xen_t:xen2 {
+	pqos_op
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
@@ -76,7 +79,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn
+	set_cpuid gettsc settsc setscheduler set_max_evtchn pqos_op
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..6ee7771 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -730,6 +730,10 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_attach_pqos:
+    case XEN_DOMCTL_detach_pqos:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__PQOS_OP);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -785,6 +789,10 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_numainfo:
         return domain_has_xen(current->domain, XEN__PHYSINFO);
 
+    case XEN_SYSCTL_getcqminfo:
+        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
+                                    XEN2__PQOS_OP, NULL);
+
     default:
         printk("flask_sysctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..91af8b2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -3,9 +3,9 @@
 #
 # class class_name { permission_name ... }
 
-# Class xen consists of dom0-only operations dealing with the hypervisor itself.
-# Unless otherwise specified, the source is the domain executing the hypercall,
-# and the target is the xen initial sid (type xen_t).
+# Classes xen and xen2 consist of dom0-only operations dealing with the
+# hypervisor itself. Unless otherwise specified, the source is the domain
+# executing the hypercall, and the target is the xen initial sid (type xen_t).
 class xen
 {
 # XENPF_settime
@@ -75,6 +75,14 @@ class xen
     setscheduler
 }
 
+# This is a continuation of class xen, since only 32 permissions can be
+# defined per class
+class xen2
+{
+# XEN_SYSCTL_getcqminfo
+    pqos_op
+}
+
 # Classes domain and domain2 consist of operations that a domain performs on
 # another domain or on itself.  Unless otherwise specified, the source is the
 # domain executing the hypercall, and the target is the domain being operated on
@@ -196,6 +204,9 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_attach_pqos
+# XEN_DOMCTL_detach_pqos
+    pqos_op
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0Bc-0003cX-Rk; Sat, 08 Feb 2014 05:10:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0Ba-0003bU-EE
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:38 +0000
Received: from [85.158.139.211:57054] by server-8.bemta-5.messagelabs.com id
	57/66-05298-D4CB5F25; Sat, 08 Feb 2014 05:10:37 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!6
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9254 invoked from network); 8 Feb 2014 05:10:36 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:36 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:35 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="477953152"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga002.fm.intel.com with ESMTP; 07 Feb 2014 21:10:33 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:37 +0800
Message-Id: <1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
	feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce two new xl commands to attach/detach the CQM service for a guest:
$ xl pqos-attach cqm domid
$ xl pqos-detach cqm domid

Introduce one new xl command to retrieve guest CQM information:
$ xl pqos-list cqm

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/libxc/xc_domain.c     |   37 ++++++++++++
 tools/libxc/xenctrl.h       |   12 ++++
 tools/libxl/Makefile        |    3 +-
 tools/libxl/libxl.h         |    4 ++
 tools/libxl/libxl_pqos.c    |  132 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl |    7 +++
 tools/libxl/xl.h            |    3 +
 tools/libxl/xl_cmdimpl.c    |  111 ++++++++++++++++++++++++++++++++++++
 tools/libxl/xl_cmdtable.c   |   15 +++++
 9 files changed, 323 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_pqos.c

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..bcdffd2 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1776,6 +1776,43 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_attach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_detach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
+{
+    int ret = 0;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_getcqminfo;
+    if ( xc_sysctl(xch, &sysctl) < 0 )
+        ret = -1;
+    else
+    {
+        info->buffer_mfn = sysctl.u.getcqminfo.buffer_mfn;
+        info->size = sysctl.u.getcqminfo.size;
+        info->nr_rmids = sysctl.u.getcqminfo.nr_rmids;
+        info->nr_sockets = sysctl.u.getcqminfo.nr_sockets;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..f4eb198 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2427,4 +2427,16 @@ int xc_kexec_load(xc_interface *xch, uint8_t type, uint16_t arch,
  */
 int xc_kexec_unload(xc_interface *xch, int type);
 
+struct xc_cqminfo
+{
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xc_cqminfo xc_cqminfo_t;
+
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info);
 #endif /* XENCTRL_H */
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..8beb7f8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -76,7 +76,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
 			libxl_json.o libxl_aoutils.o libxl_numa.o \
 			libxl_save_callout.o _libxl_save_msgs_callout.o \
-			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
+			libxl_qmp.o libxl_event.o libxl_fork.o libxl_pqos.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
 $(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f3d2202 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
 int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
 int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
 
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);
+
 /* misc */
 
 /* Each of these sets or clears the flag according to whether the
diff --git a/tools/libxl/libxl_pqos.c b/tools/libxl/libxl_pqos.c
new file mode 100644
index 0000000..664a891
--- /dev/null
+++ b/tools/libxl/libxl_pqos.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2014      Intel Corporation
+ * Author Jiongxi Li <jiongxi.li@intel.com>
+ * Author Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+
+static const char * const msg[] = {
+    [EINVAL] = "invalid QoS resource type! Supported types: \"cqm\"",
+    [ENODEV] = "CQM is not supported in this system.",
+    [EEXIST] = "CQM is already attached to this domain.",
+    [ENOENT] = "CQM is not attached to this domain.",
+    [EUSERS] = "there is no free CQM RMID available.",
+    [ESRCH]  = "is this Domain ID valid?",
+};
+
+static void libxl_pqos_err_msg(libxl_ctx *ctx, int err)
+{
+    GC_INIT(ctx);
+
+    switch (err) {
+    case EINVAL:
+    case ENODEV:
+    case EEXIST:
+    case EUSERS:
+    case ESRCH:
+    case ENOENT:
+        LOGE(ERROR, "%s", msg[err]);
+        break;
+    default:
+        LOGE(ERROR, "errno: %d", err);
+    }
+
+    GC_FREE;
+}
+
+static int libxl_pqos_type2flags(const char * qos_type, uint64_t *flags)
+{
+    int rc = 0;
+
+    if (!strcmp(qos_type, "cqm"))
+        *flags |= XEN_DOMCTL_pqos_cqm;
+    else
+        rc = -1;
+
+    return rc;
+}
+
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_attach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_detach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo)
+{
+    int ret;
+    xc_cqminfo_t xcinfo;
+    GC_INIT(ctx);
+
+    ret = xc_domain_getcqminfo(ctx->xch, &xcinfo);
+    if (ret < 0) {
+        LOGE(ERROR, "getting domain cqm info");
+        return;
+    }
+
+    xlinfo->buffer_virt = (uint64_t)xc_map_foreign_range(ctx->xch, DOMID_XEN,
+                              xcinfo.size, PROT_READ, xcinfo.buffer_mfn);
+    if (xlinfo->buffer_virt == 0) {
+        LOGE(ERROR, "Failed to map cqm buffers");
+        return;
+    }
+    xlinfo->size = xcinfo.size;
+    xlinfo->nr_rmids = xcinfo.nr_rmids;
+    xlinfo->nr_sockets = xcinfo.nr_sockets;
+
+    GC_FREE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..43c0f48 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -596,3 +596,10 @@ libxl_event = Struct("event",[
                                  ])),
            ("domain_create_console_available", Struct(None, [])),
            ]))])
+
+libxl_cqminfo = Struct("cqminfo", [
+    ("buffer_virt",    uint64),
+    ("size",           uint32),
+    ("nr_rmids",       uint32),
+    ("nr_sockets",     uint32),
+    ])
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..4362b96 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -106,6 +106,9 @@ int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 int main_devd(int argc, char **argv);
+int main_pqosattach(int argc, char **argv);
+int main_pqosdetach(int argc, char **argv);
+int main_pqoslist(int argc, char **argv);
 
 void help(const char *command);
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..2e0b822 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7364,6 +7364,117 @@ out:
     return ret;
 }
 
+int main_pqosattach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-attach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_attach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+int main_pqosdetach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-detach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_detach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+static void print_cqm_info(const libxl_cqminfo *info)
+{
+    unsigned int i, j, k;
+    char *domname;
+    int print_header;
+    int cqm_domains = 0;
+    uint16_t *rmid_to_dom;
+    uint64_t *l3c_data;
+    uint32_t first_domain = 0;
+    unsigned int num_domains = 1024;
+
+    if (info->nr_rmids == 0) {
+        printf("System doesn't support CQM.\n");
+        return;
+    }
+
+    print_header = 1;
+    l3c_data = (uint64_t *)(info->buffer_virt);
+    rmid_to_dom = (uint16_t *)(info->buffer_virt +
+                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
+    for (i = first_domain; i < (first_domain + num_domains); i++) {
+        for (j = 0; j < info->nr_rmids; j++) {
+            if (rmid_to_dom[j] != i)
+                continue;
+
+            if (print_header) {
+                printf("Name                                        ID");
+                for (k = 0; k < info->nr_sockets; k++)
+                    printf("\tSocketID\tL3C_Usage");
+                print_header = 0;
+            }
+
+            domname = libxl_domid_to_name(ctx, i);
+            printf("\n%-40s %5d", domname, i);
+            free(domname);
+            cqm_domains++;
+
+            for (k = 0; k < info->nr_sockets; k++)
+                printf("%10u %16lu     ",
+                        k, l3c_data[info->nr_rmids * k + j]);
+        }
+    }
+    if (!cqm_domains)
+        printf("No RMID is assigned to any domain.\n");
+    else
+        printf("\n");
+
+    printf("\nRMID count %5d\tRMID available %5d\n",
+           info->nr_rmids, info->nr_rmids - cqm_domains - 1);
+}
+
+int main_pqoslist(int argc, char **argv)
+{
+    int opt;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-list", 1) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+
+    if (!strcmp(qos_type, "cqm")) {
+        libxl_cqminfo info;
+        libxl_map_cqm_buffer(ctx, &info);
+        print_cqm_info(&info);
+    } else {
+        fprintf(stderr, "Supported QoS resource type is: cqm.\n");
+        help("pqos-list");
+        return 2;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..d4af4a9 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
       "[options]",
       "-F                      Run in the foreground",
     },
+    { "pqos-attach",
+      &main_pqosattach, 0, 1,
+      "Allocate and map qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-detach",
+      &main_pqosdetach, 0, 1,
+      "Relinquish qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-list",
+      &main_pqoslist, 0, 0,
+      "List qos information for all domains",
+      "<Resource>",
+    },
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BV-0003aa-Ic; Sat, 08 Feb 2014 05:10:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BU-0003a6-FX
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:32 +0000
Received: from [85.158.139.211:56914] by server-15.bemta-5.messagelabs.com id
	B8/A9-24395-74CB5F25; Sat, 08 Feb 2014 05:10:31 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!4
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8691 invoked from network); 8 Feb 2014 05:10:31 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742892"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:28 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:35 +0800
Message-Id: <1391836058-81430-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 4/6] x86: enable CQM monitoring for each
	domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the CQM service is attached to a domain, its associated RMID is
programmed into the hardware for monitoring whenever one of the domain's
vcpus is scheduled in. When the vcpu is scheduled out, RMID 0 (reserved
for the system) is programmed instead.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c           |    5 +++++
 xen/arch/x86/pqos.c             |   14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |    1 +
 xen/include/asm-x86/pqos.h      |    1 +
 4 files changed, 21 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
 
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
 custom_param("pqos", parse_pqos_param);
 
 struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
 
 static void __init init_cqm(void)
 {
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
 
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 4372af6..87820d5 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -54,5 +54,6 @@ void init_platform_qos(void);
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
 void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
+void cqm_assoc_rmid(unsigned int rmid);
 
 #endif
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BU-0003aI-T1; Sat, 08 Feb 2014 05:10:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BS-0003Zm-PR
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:31 +0000
Received: from [85.158.139.211:7127] by server-3.bemta-5.messagelabs.com id
	12/15-13671-64CB5F25; Sat, 08 Feb 2014 05:10:30 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8402 invoked from network); 8 Feb 2014 05:10:28 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:28 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742883"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:26 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:34 +0800
Message-Id: <1391836058-81430-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 3/6] x86: collect CQM information from all
	sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect CQM information (L3 cache occupancy) from all sockets.
An upper-layer application can parse the resulting data structure to
obtain a guest's L3 cache occupancy on each socket.

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/pqos.c             |   43 +++++++++++++++++++++++++++++
 xen/arch/x86/sysctl.c           |   58 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/msr-index.h |    4 +++
 xen/include/asm-x86/pqos.h      |    3 ++
 xen/include/public/sysctl.h     |   11 ++++++++
 5 files changed, 119 insertions(+)

diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index eb469ac..2cde56e 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -15,6 +15,7 @@
  * more details.
  */
 #include <asm/processor.h>
+#include <asm/msr.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/spinlock.h>
@@ -205,6 +206,48 @@ out:
     spin_unlock(&cqm->cqm_lock);
 }
 
+static void read_cqm_data(void *arg)
+{
+    uint64_t cqm_data;
+    unsigned int rmid;
+    int socket = cpu_to_socket(smp_processor_id());
+    unsigned long i;
+
+    ASSERT(system_supports_cqm());
+
+    if ( socket < 0 )
+        return;
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
+            continue;
+
+        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
+        rdmsrl(MSR_IA32_QMC, cqm_data);
+
+        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
+        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
+            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;
+    }
+}
+
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
+{
+    unsigned int nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+    unsigned int nr_rmids = cqm->max_rmid + 1;
+
+    /* Read CQM data in current CPU */
+    read_cqm_data(NULL);
+    /* Issue IPI to other CPUs to read CQM data */
+    on_selected_cpus(cpu_cqmdata_map, read_cqm_data, NULL, 1);
+
+    /* Copy the rmid_to_dom info to the buffer */
+    memcpy(cqm->buffer + nr_sockets * nr_rmids, cqm->rmid_to_dom,
+           sizeof(domid_t) * (cqm->max_rmid + 1));
+
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..7b0acc9 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -28,6 +28,7 @@
 #include <xen/nodemask.h>
 #include <xen/cpu.h>
 #include <xsm/xsm.h>
+#include <asm/pqos.h>
 
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 
@@ -66,6 +67,30 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+/* Select one random CPU for each socket. Current CPU's socket is excluded */
+static void select_socket_cpu(cpumask_t *cpu_bitmap)
+{
+    int i;
+    unsigned int cpu;
+    int socket, socket_curr = cpu_to_socket(smp_processor_id());
+    DECLARE_BITMAP(sockets, NR_CPUS);
+
+    bitmap_zero(sockets, NR_CPUS);
+    if (socket_curr >= 0)
+        set_bit(socket_curr, sockets);
+
+    cpumask_clear(cpu_bitmap);
+    for ( i = 0; i < NR_CPUS; i++ )
+    {
+        socket = cpu_to_socket(i);
+        if ( socket < 0 || test_and_set_bit(socket, sockets) )
+            continue;
+        cpu = cpumask_any(per_cpu(cpu_core_mask, i));
+        if ( cpu < nr_cpu_ids )
+            cpumask_set_cpu(cpu, cpu_bitmap);
+    }
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +126,39 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_getcqminfo:
+    {
+        cpumask_var_t cpu_cqmdata_map;
+
+        if ( !system_supports_cqm() )
+        {
+            ret = -ENODEV;
+            break;
+        }
+
+        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
+        {
+            ret = -ENOMEM;
+            break;
+        }
+
+        memset(cqm->buffer, 0, cqm->buffer_size);
+
+        select_socket_cpu(cpu_cqmdata_map);
+        get_cqm_info(cpu_cqmdata_map);
+
+        sysctl->u.getcqminfo.buffer_mfn = virt_to_mfn(cqm->buffer);
+        sysctl->u.getcqminfo.size = cqm->buffer_size;
+        sysctl->u.getcqminfo.nr_rmids = cqm->max_rmid + 1;
+        sysctl->u.getcqminfo.nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+
+        if ( __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+
+        free_cpumask_var(cpu_cqmdata_map);
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e3ff10c 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -489,4 +489,8 @@
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
 
+/* Platform QoS register */
+#define MSR_IA32_QOSEVTSEL             0x00000c8d
+#define MSR_IA32_QMC                   0x00000c8e
+
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index f25037d..4372af6 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -17,6 +17,8 @@
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
 #include <xen/sched.h>
+#include <xen/cpumask.h>
+#include <public/domctl.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -51,5 +53,6 @@ void init_platform_qos(void);
 
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
 
 #endif
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..335b1d9 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -632,6 +632,15 @@ struct xen_sysctl_coverage_op {
 typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
+struct xen_sysctl_getcqminfo {
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xen_sysctl_getcqminfo xen_sysctl_getcqminfo_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_getcqminfo_t);
+
 
 struct xen_sysctl {
     uint32_t cmd;
@@ -654,6 +663,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_getcqminfo                    21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +685,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_getcqminfo        getcqminfo;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BZ-0003bV-UL; Sat, 08 Feb 2014 05:10:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BY-0003bA-TI
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:37 +0000
Received: from [85.158.139.211:58617] by server-7.bemta-5.messagelabs.com id
	24/EF-14867-C4CB5F25; Sat, 08 Feb 2014 05:10:36 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!5
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9216 invoked from network); 8 Feb 2014 05:10:35 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:35 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:34 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742898"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:30 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:36 +0800
Message-Id: <1391836058-81430-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 5/6] xsm: add platform QoS related xsm
	policies
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add XSM policies for the attach/detach pqos and get-CQM-info
hypercalls.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 ++++-
 xen/xsm/flask/hooks.c                        |    8 ++++++++
 xen/xsm/flask/policy/access_vectors          |   17 ++++++++++++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index dedc035..1f683af 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			getscheduler getvcpuinfo getvcpuextstate getaddrsize
 			getaffinity setaffinity };
-	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim  set_max_evtchn };
+	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn pqos_op };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index bb59fe8..115fcfe 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -64,6 +64,9 @@ allow dom0_t xen_t:xen {
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op tmem_op
 	tmem_control getscheduler setscheduler
 };
+allow dom0_t xen_t:xen2 {
+	pqos_op
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
@@ -76,7 +79,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn
+	set_cpuid gettsc settsc setscheduler set_max_evtchn pqos_op
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..6ee7771 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -730,6 +730,10 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_attach_pqos:
+    case XEN_DOMCTL_detach_pqos:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__PQOS_OP);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -785,6 +789,10 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_numainfo:
         return domain_has_xen(current->domain, XEN__PHYSINFO);
 
+    case XEN_SYSCTL_getcqminfo:
+        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
+                                    XEN2__PQOS_OP, NULL);
+
     default:
         printk("flask_sysctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..91af8b2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -3,9 +3,9 @@
 #
 # class class_name { permission_name ... }
 
-# Class xen consists of dom0-only operations dealing with the hypervisor itself.
-# Unless otherwise specified, the source is the domain executing the hypercall,
-# and the target is the xen initial sid (type xen_t).
+# Classes xen and xen2 consist of dom0-only operations dealing with the
+# hypervisor itself. Unless otherwise specified, the source is the domain
+# executing the hypercall, and the target is the xen initial sid (type xen_t).
 class xen
 {
 # XENPF_settime
@@ -75,6 +75,14 @@ class xen
     setscheduler
 }
 
+# This is a continuation of class xen, since only 32 permissions can be
+# defined per class
+class xen2
+{
+# XEN_SYSCTL_getcqminfo
+    pqos_op
+}
+
 # Classes domain and domain2 consist of operations that a domain performs on
 # another domain or on itself.  Unless otherwise specified, the source is the
 # domain executing the hypercall, and the target is the domain being operated on
@@ -196,6 +204,9 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_attach_pqos
+# XEN_DOMCTL_detach_pqos
+    pqos_op
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0Bc-0003cX-Rk; Sat, 08 Feb 2014 05:10:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0Ba-0003bU-EE
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:38 +0000
Received: from [85.158.139.211:57054] by server-8.bemta-5.messagelabs.com id
	57/66-05298-D4CB5F25; Sat, 08 Feb 2014 05:10:37 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!6
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9254 invoked from network); 8 Feb 2014 05:10:36 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:36 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:35 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="477953152"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga002.fm.intel.com with ESMTP; 07 Feb 2014 21:10:33 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:37 +0800
Message-Id: <1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
	feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce two new xl commands to attach/detach the CQM service for a guest:
$ xl pqos-attach cqm domid
$ xl pqos-detach cqm domid

Introduce one new xl command to retrieve guest CQM information:
$ xl pqos-list cqm

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/libxc/xc_domain.c     |   37 ++++++++++++
 tools/libxc/xenctrl.h       |   12 ++++
 tools/libxl/Makefile        |    3 +-
 tools/libxl/libxl.h         |    4 ++
 tools/libxl/libxl_pqos.c    |  132 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl |    7 +++
 tools/libxl/xl.h            |    3 +
 tools/libxl/xl_cmdimpl.c    |  111 ++++++++++++++++++++++++++++++++++++
 tools/libxl/xl_cmdtable.c   |   15 +++++
 9 files changed, 323 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_pqos.c

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..bcdffd2 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1776,6 +1776,43 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_attach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_detach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
+{
+    int ret = 0;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_getcqminfo;
+    if ( xc_sysctl(xch, &sysctl) < 0 )
+        ret = -1;
+    else
+    {
+        info->buffer_mfn = sysctl.u.getcqminfo.buffer_mfn;
+        info->size = sysctl.u.getcqminfo.size;
+        info->nr_rmids = sysctl.u.getcqminfo.nr_rmids;
+        info->nr_sockets = sysctl.u.getcqminfo.nr_sockets;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..f4eb198 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2427,4 +2427,16 @@ int xc_kexec_load(xc_interface *xch, uint8_t type, uint16_t arch,
  */
 int xc_kexec_unload(xc_interface *xch, int type);
 
+struct xc_cqminfo
+{
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xc_cqminfo xc_cqminfo_t;
+
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info);
 #endif /* XENCTRL_H */
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..8beb7f8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -76,7 +76,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
 			libxl_json.o libxl_aoutils.o libxl_numa.o \
 			libxl_save_callout.o _libxl_save_msgs_callout.o \
-			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
+			libxl_qmp.o libxl_event.o libxl_fork.o libxl_pqos.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
 $(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f3d2202 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
 int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
 int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
 
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);
+
 /* misc */
 
 /* Each of these sets or clears the flag according to whether the
diff --git a/tools/libxl/libxl_pqos.c b/tools/libxl/libxl_pqos.c
new file mode 100644
index 0000000..664a891
--- /dev/null
+++ b/tools/libxl/libxl_pqos.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2014      Intel Corporation
+ * Author Jiongxi Li <jiongxi.li@intel.com>
+ * Author Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only, with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+
+static const char * const msg[] = {
+    [EINVAL] = "invalid QoS resource type! Supported types: \"cqm\"",
+    [ENODEV] = "CQM is not supported in this system.",
+    [EEXIST] = "CQM is already attached to this domain.",
+    [ENOENT] = "CQM is not attached to this domain.",
+    [EUSERS] = "there is no free CQM RMID available.",
+    [ESRCH]  = "is this Domain ID valid?",
+};
+
+static void libxl_pqos_err_msg(libxl_ctx *ctx, int err)
+{
+    GC_INIT(ctx);
+
+    switch (err) {
+    case EINVAL:
+    case ENODEV:
+    case EEXIST:
+    case EUSERS:
+    case ESRCH:
+    case ENOENT:
+        LOGE(ERROR, "%s", msg[err]);
+        break;
+    default:
+        LOGE(ERROR, "errno: %d", err);
+    }
+
+    GC_FREE;
+}
+
+static int libxl_pqos_type2flags(const char * qos_type, uint64_t *flags)
+{
+    int rc = 0;
+
+    if (!strcmp(qos_type, "cqm"))
+        *flags |= XEN_DOMCTL_pqos_cqm;
+    else
+        rc = -1;
+
+    return rc;
+}
+
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_attach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_detach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo)
+{
+    int ret;
+    xc_cqminfo_t xcinfo;
+    GC_INIT(ctx);
+
+    ret = xc_domain_getcqminfo(ctx->xch, &xcinfo);
+    if (ret < 0) {
+        LOGE(ERROR, "getting domain cqm info");
+        GC_FREE;
+        return;
+    }
+
+    xlinfo->buffer_virt = (uint64_t)xc_map_foreign_range(ctx->xch, DOMID_XEN,
+                              xcinfo.size, PROT_READ, xcinfo.buffer_mfn);
+    if (xlinfo->buffer_virt == 0) {
+        LOGE(ERROR, "failed to map cqm buffers");
+        GC_FREE;
+        return;
+    }
+    xlinfo->size = xcinfo.size;
+    xlinfo->nr_rmids = xcinfo.nr_rmids;
+    xlinfo->nr_sockets = xcinfo.nr_sockets;
+
+    GC_FREE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..43c0f48 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -596,3 +596,10 @@ libxl_event = Struct("event",[
                                  ])),
            ("domain_create_console_available", Struct(None, [])),
            ]))])
+
+libxl_cqminfo = Struct("cqminfo", [
+    ("buffer_virt",    uint64),
+    ("size",           uint32),
+    ("nr_rmids",       uint32),
+    ("nr_sockets",     uint32),
+    ])
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..4362b96 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -106,6 +106,9 @@ int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 int main_devd(int argc, char **argv);
+int main_pqosattach(int argc, char **argv);
+int main_pqosdetach(int argc, char **argv);
+int main_pqoslist(int argc, char **argv);
 
 void help(const char *command);
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..2e0b822 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7364,6 +7364,117 @@ out:
     return ret;
 }
 
+int main_pqosattach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-attach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_attach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+int main_pqosdetach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-detach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_detach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+static void print_cqm_info(const libxl_cqminfo *info)
+{
+    unsigned int i, j, k;
+    char *domname;
+    int print_header;
+    int cqm_domains = 0;
+    uint16_t *rmid_to_dom;
+    uint64_t *l3c_data;
+    uint32_t first_domain = 0;
+    unsigned int num_domains = 1024;
+
+    if (info->nr_rmids == 0) {
+        printf("System doesn't support CQM.\n");
+        return;
+    }
+
+    print_header = 1;
+    l3c_data = (uint64_t *)(info->buffer_virt);
+    rmid_to_dom = (uint16_t *)(info->buffer_virt +
+                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
+    for (i = first_domain; i < (first_domain + num_domains); i++) {
+        for (j = 0; j < info->nr_rmids; j++) {
+            if (rmid_to_dom[j] != i)
+                continue;
+
+            if (print_header) {
+                printf("Name                                        ID");
+                for (k = 0; k < info->nr_sockets; k++)
+                    printf("\tSocketID\tL3C_Usage");
+                print_header = 0;
+            }
+
+            domname = libxl_domid_to_name(ctx, i);
+            printf("\n%-40s %5d", domname, i);
+            free(domname);
+            cqm_domains++;
+
+            for (k = 0; k < info->nr_sockets; k++)
+                printf("%10u %16lu     ",
+                        k, l3c_data[info->nr_rmids * k + j]);
+        }
+    }
+    if (!cqm_domains)
+        printf("No RMID is assigned to any domain.\n");
+    else
+        printf("\n");
+
+    printf("\nRMID count %5d\tRMID available %5d\n",
+           info->nr_rmids, info->nr_rmids - cqm_domains - 1);
+}
+
+int main_pqoslist(int argc, char **argv)
+{
+    int opt;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-list", 1) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+
+    if (!strcmp(qos_type, "cqm")) {
+        libxl_cqminfo info;
+        libxl_map_cqm_buffer(ctx, &info);
+        print_cqm_info(&info);
+    } else {
+        fprintf(stderr, "Supported QoS resource type is: cqm.\n");
+        help("pqos-list");
+        return 2;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..d4af4a9 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
       "[options]",
       "-F                      Run in the foreground",
     },
+    { "pqos-attach",
+      &main_pqosattach, 0, 1,
+      "Allocate and map qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-detach",
+      &main_pqosdetach, 0, 1,
+      "Relinquish qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-list",
+      &main_pqoslist, 0, 0,
+      "List qos information for all domains",
+      "<Resource>",
+    },
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BQ-0003ZO-LJ; Sat, 08 Feb 2014 05:10:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BO-0003ZC-Dt
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:26 +0000
Received: from [85.158.139.211:56786] by server-10.bemta-5.messagelabs.com id
	4B/D0-08578-14CB5F25; Sat, 08 Feb 2014 05:10:25 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!2
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8309 invoked from network); 8 Feb 2014 05:10:24 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:24 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742863"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:21 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:32 +0800
Message-Id: <1391836058-81430-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 1/6] x86: detect and initialize Cache QoS
	Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Detect the platform QoS feature and enumerate its resource types, one
of which is monitoring of L3 cache occupancy.

Also introduce a Xen boot command line parameter ("pqos") to control
the QoS feature status.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/misc/xen-command-line.markdown |    7 ++
 xen/arch/x86/Makefile               |    1 +
 xen/arch/x86/pqos.c                 |  156 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                |    3 +
 xen/include/asm-x86/cpufeature.h    |    1 +
 xen/include/asm-x86/pqos.h          |   43 ++++++++++
 6 files changed, 211 insertions(+)
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 15aa404..7751ffe 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -770,6 +770,13 @@ This option can be specified more than once (up to 8 times at present).
 ### ple\_window
 > `= <integer>`
 
+### pqos (Intel)
+> `= List of ( <boolean> | cqm:<boolean> | cqm_max_rmid:<integer> )`
+
+> Default: `pqos=1,cqm:1,cqm_max_rmid:255`
+
+Configure platform QoS services.
+
 ### reboot
 > `= t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..54962e0 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += pqos.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
new file mode 100644
index 0000000..ba0de37
--- /dev/null
+++ b/xen/arch/x86/pqos.c
@@ -0,0 +1,156 @@
+/*
+ * pqos.c: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#include <asm/processor.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <asm/pqos.h>
+
+static bool_t __initdata opt_pqos = 1;
+static bool_t __initdata opt_cqm = 1;
+static unsigned int __initdata opt_cqm_max_rmid = 255;
+
+static void __init parse_pqos_param(char *s)
+{
+    char *ss, *val_str;
+    int val;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        val = parse_bool(s);
+        if ( val >= 0 )
+            opt_pqos = val;
+        else
+        {
+            val = !!strncmp(s, "no-", 3);
+            if ( !val )
+                s += 3;
+
+            val_str = strchr(s, ':');
+            if ( val_str )
+                *val_str++ = '\0';
+
+            if ( val_str && !strcmp(s, "cqm") &&
+                 (val = parse_bool(val_str)) >= 0 )
+                opt_cqm = val;
+            else if ( val_str && !strcmp(s, "cqm_max_rmid") )
+                opt_cqm_max_rmid = simple_strtoul(val_str, NULL, 0);
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+custom_param("pqos", parse_pqos_param);
+
+struct pqos_cqm __read_mostly *cqm = NULL;
+
+static void __init init_cqm(void)
+{
+    unsigned int rmid;
+    unsigned int eax, edx;
+    unsigned int cqm_pages;
+    unsigned int i;
+
+    if ( !opt_cqm_max_rmid )
+        return;
+
+    cqm = xzalloc(struct pqos_cqm);
+    if ( !cqm )
+        return;
+
+    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
+    if ( !(edx & QOS_MONITOR_EVTID_L3) )
+        goto out;
+
+    cqm->min_rmid = 1;
+    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
+
+    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
+    if ( !cqm->rmid_to_dom )
+        goto out;
+
+    /* Reserve RMID 0 for all domains not being monitored */
+    cqm->rmid_to_dom[0] = DOMID_XEN;
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+
+    /* Allocate the CQM buffer at initialization time */
+    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
+                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
+                PAGE_SIZE + 1;
+    cqm->buffer_size = cqm_pages * PAGE_SIZE;
+
+    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
+    if ( !cqm->buffer )
+    {
+        xfree(cqm->rmid_to_dom);
+        goto out;
+    }
+    memset(cqm->buffer, 0, cqm->buffer_size);
+
+    for ( i = 0; i < cqm_pages; i++ )
+        share_xen_page_with_privileged_guests(
+            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),
+            XENSHARE_readonly);
+
+    spin_lock_init(&cqm->cqm_lock);
+
+    cqm->used_rmid = 0;
+
+    printk(XENLOG_INFO "Cache QoS Monitoring Enabled.\n");
+
+    return;
+
+out:
+    xfree(cqm);
+    cqm = NULL;
+}
+
+static void __init init_qos_monitor(void)
+{
+    unsigned int qm_features;
+    unsigned int eax, ebx, ecx;
+
+    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
+        return;
+
+    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
+
+    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
+        init_cqm();
+}
+
+void __init init_platform_qos(void)
+{
+    if ( !opt_pqos )
+        return;
+
+    init_qos_monitor();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..639528f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,6 +48,7 @@
 #include <asm/setup.h>
 #include <xen/cpu.h>
 #include <asm/nmi.h>
+#include <asm/pqos.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
@@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     domain_unpause_by_systemcontroller(dom0);
 
+    init_platform_qos();
+
     reset_stack_and_jump(init_done);
 }
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..ca59668 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -147,6 +147,7 @@
 #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
+#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
new file mode 100644
index 0000000..0a8065c
--- /dev/null
+++ b/xen/include/asm-x86/pqos.h
@@ -0,0 +1,43 @@
+/*
+ * pqos.h: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef ASM_PQOS_H
+#define ASM_PQOS_H
+
+#include <public/xen.h>
+#include <xen/spinlock.h>
+
+/* QoS Resource Type Enumeration */
+#define QOS_MONITOR_TYPE_L3            0x2
+
+/* QoS Monitoring Event ID */
+#define QOS_MONITOR_EVTID_L3           0x1
+
+struct pqos_cqm {
+    spinlock_t cqm_lock;
+    uint64_t *buffer;
+    unsigned int min_rmid;
+    unsigned int max_rmid;
+    unsigned int used_rmid;
+    unsigned int upscaling_factor;
+    unsigned int buffer_size;
+    domid_t *rmid_to_dom;
+};
+extern struct pqos_cqm *cqm;
+
+void init_platform_qos(void);
+
+#endif
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BV-0003aa-Ic; Sat, 08 Feb 2014 05:10:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BU-0003a6-FX
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:32 +0000
Received: from [85.158.139.211:56914] by server-15.bemta-5.messagelabs.com id
	B8/A9-24395-74CB5F25; Sat, 08 Feb 2014 05:10:31 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!4
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8691 invoked from network); 8 Feb 2014 05:10:31 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:31 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:30 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742892"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:28 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:35 +0800
Message-Id: <1391836058-81430-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 4/6] x86: enable CQM monitoring for each
	domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the CQM service is attached to a domain, its associated RMID is
programmed into hardware for monitoring whenever one of the domain's
vcpus is scheduled in. When the vcpu is scheduled out, RMID 0 (system
reserved) is programmed instead.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c           |    5 +++++
 xen/arch/x86/pqos.c             |   14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |    1 +
 xen/include/asm-x86/pqos.h      |    1 +
 4 files changed, 21 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
 
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
 custom_param("pqos", parse_pqos_param);
 
 struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
 
 static void __init init_cqm(void)
 {
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
 
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 4372af6..87820d5 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -54,5 +54,6 @@ void init_platform_qos(void);
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
 void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
+void cqm_assoc_rmid(unsigned int rmid);
 
 #endif
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Sat Feb 08 05:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0BU-0003aI-T1; Sat, 08 Feb 2014 05:10:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WC0BS-0003Zm-PR
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 05:10:31 +0000
Received: from [85.158.139.211:7127] by server-3.bemta-5.messagelabs.com id
	12/15-13671-64CB5F25; Sat, 08 Feb 2014 05:10:30 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1391836222!2518176!3
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8402 invoked from network); 8 Feb 2014 05:10:28 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-6.tower-206.messagelabs.com with SMTP;
	8 Feb 2014 05:10:28 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 07 Feb 2014 21:10:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,804,1384329600"; d="scan'208";a="471742883"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 07 Feb 2014 21:10:26 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 13:07:34 +0800
Message-Id: <1391836058-81430-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v8 3/6] x86: collect CQM information from all
	sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect CQM information (L3 cache occupancy) from all sockets.
Upper-layer applications can parse the resulting data structure to
obtain a guest's L3 cache occupancy on each socket.

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/pqos.c             |   43 +++++++++++++++++++++++++++++
 xen/arch/x86/sysctl.c           |   58 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/msr-index.h |    4 +++
 xen/include/asm-x86/pqos.h      |    3 ++
 xen/include/public/sysctl.h     |   11 ++++++++
 5 files changed, 119 insertions(+)

diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index eb469ac..2cde56e 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -15,6 +15,7 @@
  * more details.
  */
 #include <asm/processor.h>
+#include <asm/msr.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/spinlock.h>
@@ -205,6 +206,48 @@ out:
     spin_unlock(&cqm->cqm_lock);
 }
 
+static void read_cqm_data(void *arg)
+{
+    uint64_t cqm_data;
+    unsigned int rmid;
+    int socket = cpu_to_socket(smp_processor_id());
+    unsigned long i;
+
+    ASSERT(system_supports_cqm());
+
+    if ( socket < 0 )
+        return;
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
+            continue;
+
+        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
+        rdmsrl(MSR_IA32_QMC, cqm_data);
+
+        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
+        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
+            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;
+    }
+}
+
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
+{
+    unsigned int nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+    unsigned int nr_rmids = cqm->max_rmid + 1;
+
+    /* Read CQM data in current CPU */
+    read_cqm_data(NULL);
+    /* Issue IPI to other CPUs to read CQM data */
+    on_selected_cpus(cpu_cqmdata_map, read_cqm_data, NULL, 1);
+
+    /* Copy the rmid_to_dom info to the buffer */
+    memcpy(cqm->buffer + nr_sockets * nr_rmids, cqm->rmid_to_dom,
+           sizeof(domid_t) * (cqm->max_rmid + 1));
+
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..7b0acc9 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -28,6 +28,7 @@
 #include <xen/nodemask.h>
 #include <xen/cpu.h>
 #include <xsm/xsm.h>
+#include <asm/pqos.h>
 
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 
@@ -66,6 +67,30 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+/* Select one random CPU for each socket. Current CPU's socket is excluded */
+static void select_socket_cpu(cpumask_t *cpu_bitmap)
+{
+    int i;
+    unsigned int cpu;
+    int socket, socket_curr = cpu_to_socket(smp_processor_id());
+    DECLARE_BITMAP(sockets, NR_CPUS);
+
+    bitmap_zero(sockets, NR_CPUS);
+    if (socket_curr >= 0)
+        set_bit(socket_curr, sockets);
+
+    cpumask_clear(cpu_bitmap);
+    for ( i = 0; i < NR_CPUS; i++ )
+    {
+        socket = cpu_to_socket(i);
+        if ( socket < 0 || test_and_set_bit(socket, sockets) )
+            continue;
+        cpu = cpumask_any(per_cpu(cpu_core_mask, i));
+        if ( cpu < nr_cpu_ids )
+            cpumask_set_cpu(cpu, cpu_bitmap);
+    }
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +126,39 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_getcqminfo:
+    {
+        cpumask_var_t cpu_cqmdata_map;
+
+        if ( !system_supports_cqm() )
+        {
+            ret = -ENODEV;
+            break;
+        }
+
+        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
+        {
+            ret = -ENOMEM;
+            break;
+        }
+
+        memset(cqm->buffer, 0, cqm->buffer_size);
+
+        select_socket_cpu(cpu_cqmdata_map);
+        get_cqm_info(cpu_cqmdata_map);
+
+        sysctl->u.getcqminfo.buffer_mfn = virt_to_mfn(cqm->buffer);
+        sysctl->u.getcqminfo.size = cqm->buffer_size;
+        sysctl->u.getcqminfo.nr_rmids = cqm->max_rmid + 1;
+        sysctl->u.getcqminfo.nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+
+        if ( __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+
+        free_cpumask_var(cpu_cqmdata_map);
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e3ff10c 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -489,4 +489,8 @@
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
 
+/* Platform QoS register */
+#define MSR_IA32_QOSEVTSEL             0x00000c8d
+#define MSR_IA32_QMC                   0x00000c8e
+
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index f25037d..4372af6 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -17,6 +17,8 @@
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
 #include <xen/sched.h>
+#include <xen/cpumask.h>
+#include <public/domctl.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -51,5 +53,6 @@ void init_platform_qos(void);
 
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
 
 #endif
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..335b1d9 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -632,6 +632,15 @@ struct xen_sysctl_coverage_op {
 typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
+struct xen_sysctl_getcqminfo {
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xen_sysctl_getcqminfo xen_sysctl_getcqminfo_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_getcqminfo_t);
+
 
 struct xen_sysctl {
     uint32_t cmd;
@@ -654,6 +663,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_getcqminfo                    21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +685,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_getcqminfo        getcqminfo;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 05:57:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 05:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC0uG-0006Nd-Sn; Sat, 08 Feb 2014 05:56:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WC0uF-0006NY-Rw
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 05:56:48 +0000
Received: from [85.158.137.68:29655] by server-14.bemta-3.messagelabs.com id
	E2/4E-08196-F17C5F25; Sat, 08 Feb 2014 05:56:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391839004!462386!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30783 invoked from network); 8 Feb 2014 05:56:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 05:56:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,805,1384300800"; d="scan'208";a="101018781"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Feb 2014 05:56:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 00:56:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WC0uA-0003bf-QP;
	Sat, 08 Feb 2014 05:56:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WC0uA-00051P-AP;
	Sat, 08 Feb 2014 05:56:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24792-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 05:56:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24792: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0577425950218482852=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0577425950218482852==
Content-Type: text/plain

flight 24792 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24792/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24699

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore           fail pass in 24775
 test-amd64-i386-rhel6hvm-intel  7 redhat-install   fail in 24775 pass in 24792
 test-amd64-amd64-xl-qemut-win7-amd64 12 guest-localmigrate/x10 fail in 24775 pass in 24792

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
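
The unsigned-arithmetic argument above can be illustrated with a minimal
sketch. The function name comes from the commit message, but the signature,
the RING_SIZE value, and the caller-side check are assumptions for
illustration, not the actual libvchan code:

```c
#include <stdint.h>

#define RING_SIZE 1024u   /* assumption: some fixed ring size */

/* "Bytes ready to read", computed in unsigned arithmetic: a corrupt
 * (prod, cons) pair set by a hostile peer can only yield a value that
 * wraps to something > RING_SIZE, never a negative count that would
 * later overflow a memcpy. */
static uint32_t raw_get_data_ready(uint32_t prod, uint32_t cons)
{
    return prod - cons;   /* wraps modulo 2^32 if the indices are mad */
}

/* Caller-side contract: anything above the ring size indicates a
 * corrupt ring state and must be rejected. */
static int data_ready_ok(uint32_t prod, uint32_t cons)
{
    return raw_get_data_ready(prod, cons) <= RING_SIZE;
}
```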

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)
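
A minimal sketch of the kind of range check the XSA-84 fix describes,
validating a guest-supplied length before allocating from it; the function
name, the zero-length handling, and the error convention are illustrative
assumptions, not the actual FLASK code:

```c
#include <stddef.h>
#include <errno.h>

#define PAGE_SIZE 4096u   /* the arbitrary upper limit the fix mentions */

/* Reject a guest-specified string size that is empty or exceeds the
 * permitted bound, before any allocation is sized from it. */
static int check_guest_strlen(size_t guest_len, size_t max_len)
{
    if ( guest_len == 0 || guest_len > max_len )
        return -EINVAL;
    return 0;
}
```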


--===============0577425950218482852==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0577425950218482852==--

From xen-devel-bounces@lists.xen.org Sat Feb 08 07:38:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 07:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC2Tz-0002Q9-Ea; Sat, 08 Feb 2014 07:37:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1WC2Ty-0002Q4-B7
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 07:37:46 +0000
Received: from [85.158.137.68:50825] by server-11.bemta-3.messagelabs.com id
	3E/F7-04255-9CED5F25; Sat, 08 Feb 2014 07:37:45 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-2.tower-31.messagelabs.com!1391845063!468443!1
X-Originating-IP: [209.85.214.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11855 invoked from network); 8 Feb 2014 07:37:44 -0000
Received: from mail-ob0-f174.google.com (HELO mail-ob0-f174.google.com)
	(209.85.214.174)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 07:37:44 -0000
Received: by mail-ob0-f174.google.com with SMTP id uy5so5117869obc.5
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 23:37:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=66uz5DxtC72MMP0MO89Sb72Kzfsp5wQa94e6gHEIj3E=;
	b=jGzkB+zmwhqN5MNQ/3VMl0NvlcNgsHpZtNs27txxgpoZBShc8rFDxeKzDZTu28pDPk
	fhimBU49JJ8Lhjf1idNvyXqmJiHpovUZiuwvrxNsV768nBiL3IuwEZIekx2bpexMvKwZ
	gdOy0oSwHBHT8+QvmEXsamXJEZEJRcao3R4EkxcgPvMEiXhZsJt47I11SwPC1JlntKHm
	cB6UQaTx1hM9bZvQz+UEm0hQnbYZtdHzRiRDc6ZnDPcnQ71z7KPYbex/NfXPuIQ14EQC
	DWrcmBkl3ilG+/Ms2XAIu/Wjr/a8ovHSbSrQTRBLxadGicXW3uk2V5lFdTx4ginzv6VW
	03PA==
X-Gm-Message-State: ALoCoQnMPBuflq58wsUvYzrzKWZ0elIvDSw7y2xbVAiXcGK1dO3BubIRBKFj78drfCMlPKSUlKZ+
MIME-Version: 1.0
X-Received: by 10.60.80.137 with SMTP id r9mr16762776oex.30.1391845063272;
	Fri, 07 Feb 2014 23:37:43 -0800 (PST)
Received: by 10.182.120.10 with HTTP; Fri, 7 Feb 2014 23:37:43 -0800 (PST)
In-Reply-To: <1391696408.9917.28.camel@Solace>
References: <1391677118-3071-1-git-send-email-jtweaver@hawaii.edu>
	<52F37D3C0200007800119BDF@nat28.tlf.novell.com>
	<1391696408.9917.28.camel@Solace>
Date: Fri, 7 Feb 2014 21:37:43 -1000
Message-ID: <CA+o8iRXocJkAXPFxGcpTKzfH3M+XvwGZSe3N0oWRf1zNKWai9g@mail.gmail.com>
From: Justin Weaver <jtweaver@hawaii.edu>
To: xen-devel@lists.xen.org
Cc: Marcus.Granado@eu.citrix.com, George Dunlap <george.dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>, esb@ics.hawaii.edu,
	Jan Beulich <JBeulich@suse.com>, Henri Casanova <henric@hawaii.edu>
Subject: Re: [Xen-devel] [PATCH] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>> Apart from that keeping the CPU0 special case at the top is pointless
>> with the cpu0_socket special casing.
>>
> Indeed. If going this route, Justin, I think you can reorganize the
> whole `if (cpu == 0)' (not only the else), and get to a more correct and
> readable solution.

It's still a special case because socket information is not yet
available in this function when it gets called for CPU 0. But I can
make it more readable by not having it stand alone at the top.

>> As to coding style: Please fix your comments and get the indentation
>> of the if/else sequence above right (i.e. either use "else if" with no
>> added indentation, or enclose the inner if/else in curly braces; I'd
>> personally prefer the former).
>>
> Yep, agreed. To be fair, about comments, sched_credit2.c has quite a
> mixture of commenting styles in it, and it's really a hard call to
> decide which one should be used. Anyway, Justin, if you reorganize the
> whole `if () else' block, you are probably better off with a big comment
> describing the whole thing, before the block itself, for which you can
> use the following style:
>
> /*
>  * Long comment...
>  */

Will do.

Thanks,
Justin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 07:55:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 07:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC2kn-0003NU-AC; Sat, 08 Feb 2014 07:55:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WC2kl-0003NM-Cd
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 07:55:07 +0000
Received: from [85.158.143.35:45738] by server-2.bemta-4.messagelabs.com id
	3D/EC-10891-AD2E5F25; Sat, 08 Feb 2014 07:55:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391846104!4096368!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13377 invoked from network); 8 Feb 2014 07:55:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 07:55:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,805,1384300800"; d="scan'208";a="99181109"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Feb 2014 07:55:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 02:55:02 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WC2kg-0004E9-GG;
	Sat, 08 Feb 2014 07:55:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WC2kg-0000Vf-Ct;
	Sat, 08 Feb 2014 07:55:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24796-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 07:55:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24796: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7247328459135321877=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7247328459135321877==
Content-Type: text/plain

flight 24796 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24796/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           3 host-install(3)         broken REGR. vs. 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============7247328459135321877==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7247328459135321877==--

From xen-devel-bounces@lists.xen.org Sat Feb 08 08:56:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 08:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC3hq-0006QF-UU; Sat, 08 Feb 2014 08:56:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WC3hp-0006QA-E0
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 08:56:09 +0000
Received: from [85.158.137.68:55243] by server-11.bemta-3.messagelabs.com id
	1A/4B-04255-821F5F25; Sat, 08 Feb 2014 08:56:08 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-31.messagelabs.com!1391849766!479713!1
X-Originating-IP: [209.85.214.41]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12242 invoked from network); 8 Feb 2014 08:56:07 -0000
Received: from mail-bk0-f41.google.com (HELO mail-bk0-f41.google.com)
	(209.85.214.41)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 08:56:07 -0000
Received: by mail-bk0-f41.google.com with SMTP id na10so1363145bkb.28
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 00:56:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=cEYkq4tOnR/nUfWsyMS/w85EFmPkuon77JMsBg7OnTQ=;
	b=QaGu6gRzgDegpzhFaPNY8frPThJzN0aquTXVJKjdBiAriS/fC9sO8uswl1u01/Z85D
	W6OSHf9XWRvC8Vu9VTIW143SCHyl4PRdXylf1qfVcrFkHYmV5Y9i7c6NbYC1Tk2WreZG
	nzsVO3YGkuslKVAzjzbeb8EsRZC1QKgbC4ASJpbLspfD83GEFZP/CS0B/UpCmxBt8vo4
	MAeko6e0ppcVscQnHfHXyw2Ik3pz+wuEhtaEa4saNxHYxtFTYu5i+B5pfRL2/6osZm5i
	TV1Pgi3kvj2heCIEjiyGC8Ds56YpdTFtk+rx+w64mkJ4hkjHynXlLh88FubimWlb1p4N
	uhIA==
X-Gm-Message-State: ALoCoQkx6I+R6NzBNqQqzgPLzIe0arlGsAtdcNUzsuwIEt9LnBH9yOZOKn/srTUUC5Ekil9x2bgI
MIME-Version: 1.0
X-Received: by 10.205.35.199 with SMTP id sx7mr7175034bkb.32.1391849766574;
	Sat, 08 Feb 2014 00:56:06 -0800 (PST)
Received: by 10.204.103.194 with HTTP; Sat, 8 Feb 2014 00:56:06 -0800 (PST)
X-Originating-IP: [79.7.81.253]
In-Reply-To: <1391816270.2943.21.camel@astar.houby.net>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
	<1391572808.2441.37.camel@astar.houby.net>
	<15510565929.20140205090647@eikelenboom.it>
	<1391645864.2751.9.camel@astar.houby.net>
	<1391660630.2751.12.camel@astar.houby.net>
	<CABMPFzgwkc7RVakyTGTZZMkepQn5ENhsw-H8hAx78g4nxkwk9Q@mail.gmail.com>
	<1391764139.9917.54.camel@Solace> <52F4BB0A.8000007@m2r.biz>
	<1391816270.2943.21.camel@astar.houby.net>
Date: Sat, 8 Feb 2014 09:56:06 +0100
Message-ID: <CABMPFzhqAUZKimKSC7ScSosiSBNog52fZLZbEh=rhktcfE9MPA@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: ehouby@yahoo.com
Cc: xen@lists.fedoraproject.org, Sander Eikelenboom <linux@eikelenboom.it>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2118939146906469740=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2118939146906469740==
Content-Type: multipart/alternative; boundary=bcaec52c60d1eb40a704f1e146d1

--bcaec52c60d1eb40a704f1e146d1
Content-Type: text/plain; charset=ISO-8859-1

2014-02-08 0:37 GMT+01:00 Eric Houby <ehouby@yahoo.com>:

> On Fri, 2014-02-07 at 11:52 +0100, Fabio Fantoni wrote:
> > Il 07/02/2014 10:08, Dario Faggioli ha scritto:
> > > On gio, 2014-02-06 at 08:35 +0100, Fabio Fantoni wrote:
>
> >
> > The wiki should be updated, the qxl patch for libxl part is complete and
> > correct and can be used for tests:
> >
> https://github.com/Fantu/Xen/commit/f1e3f78f7b9580700591cebd98e9263645bff56b
> > the actual problem is outside of libxl;
> > it is probably in hvmloader, qemu and/or the kernel.
> >
> > The latest mail I sent about the qxl problem:
> > http://lists.xen.org/archives/html/xen-devel/2013-12/msg00758.html
> >
>
> I have everything built and would like to make a quick observation.
> From what I have read, the qxl minimum and default video ram is 128MB
> but the qemu command line is showing 64MB even when set to 128MB in the
> xl config file.
>
> spice = 1
> spicehost = '0.0.0.0'
> spiceport = 6001
> spicedisable_ticketing = 1
> spicevdagent = 1
> videoram = 128
> vga = 'qxl'
>
> /usr/local/lib/xen/bin/qemu-system-i386 -xen-domid 18 -chardev
> socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-18,server,nowait -mon
> chardev=libxl-cmd,mode=control -nodefaults -name f20 -serial pty -spice
> port=6001,tls-port=0,addr=0.0.0.0,disable-ticketing,agent-mouse=on
> -device virtio-serial -chardev spicevmc,id=vdagent,name=vdagent -device
> virtserialport,chardev=vdagent,name=com.redhat.spice.0 -device
> qxl-vga,vram_size_mb=64,ram_size_mb=64 -boot order=c -smp 2,maxcpus=2
> -device virtio-net,id=nic0,netdev=net0,mac=00:16:00:00:11:22 -netdev
> type=tap,id=net0,ifname=vif18.0-emu,script=no,downscript=no -machine
> xenfv -m 3968 -drive
>
> file=/dev/mapper/xen_vm-f20pvhvm,if=ide,index=0,media=disk,format=raw,cache=writeback
>
> Commenting out the videoram=128 line still shows 64MB in the qemu
> command line.
>
> -Eric
>
>
>
That is correct; see also the comment on the patch. QXL has two RAM
regions: ram and vram.
I can no longer find the old qemu patch, on one of the mailing lists,
that describes the qxl structure in detail :(
In short, if I remember correctly:
The first region contains the basic framebuffer (16 MB by default, up
to half of the region), which is also used when the qxl driver is not
active (i.e. when working as a standard VGA); the other two parts of
this region hold commands and cache.
The second region contains the render cache and various advanced
operations; when I read the description, it seemed those had not yet
been implemented.
It is quite possible I am misremembering some details of that
description.
However, the parameters are correct: I had confirmation from the
qemu/spice developers last year, and I also remember it was better
not to let the first region's ram drop below 64 MB, so I set the
minimum to 128 MB total.
If I find the official, detailed qxl description I will post it.

--bcaec52c60d1eb40a704f1e146d1--


--===============2118939146906469740==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2118939146906469740==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 09:03:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 09:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC3ov-0006z7-Ug; Sat, 08 Feb 2014 09:03:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1WC3ou-0006z2-7P
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 09:03:28 +0000
Received: from [85.158.143.35:13381] by server-3.bemta-4.messagelabs.com id
	EC/FA-11539-FD2F5F25; Sat, 08 Feb 2014 09:03:27 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391850205!4104587!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19609 invoked from network); 8 Feb 2014 09:03:26 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 09:03:26 -0000
Received: by mail-qc0-f172.google.com with SMTP id c9so7556722qcz.17
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 01:03:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=A6uGzGaq47euHo8phHFb7GR6ub/kKN5F+cvCTboWwTI=;
	b=GcjFvq0BUUqe0u0I9Y7k3XPt11XA2nUstt3hOjg5nwRcmzXmuhiZ3wyOgqy5InnfdG
	suyyX5jSqcOWYyD+XDtj3hek6VnGoJTECbhARcNzcn6/2FUHPJ5xVnmcpL79Ul2i/X1C
	DyfImqb5OrYuIAYd4oGeoREWqsyR5fSG5kL9iYUPnB+X8RO2qbgrdzJyVS7rIYpE0/kd
	JvR3Q1XFUjhXU7cLvJDjeqNnJNLBDu0MJuc91ux5c1f8MeuQ866aiL5Gh4Lwh+y5k8nj
	J2u70Oi63kGPz8YiE/WMEexyzjS8mp/XeqMPNqLFQld3PKV2SdnLyZe5tYd/VBHV9hqj
	hahw==
X-Gm-Message-State: ALoCoQkS5iFYQ7zbGEDzoqf3g9WMOsOSBRebtM3JL4K754CxPukSwGCp0nBI2UsqZa8Rk/LspP4x
X-Received: by 10.140.32.133 with SMTP id h5mr12608941qgh.49.1391850205542;
	Sat, 08 Feb 2014 01:03:25 -0800 (PST)
Received: from debian-vm.localdomain (cpe-72-130-147-24.hawaii.res.rr.com.
	[72.130.147.24]) by mx.google.com with ESMTPSA id
	o75sm13739208qgd.11.2014.02.08.01.03.23 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sat, 08 Feb 2014 01:03:24 -0800 (PST)
From: Justin Weaver <jtweaver@hawaii.edu>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 23:03:18 -1000
Message-Id: <1391850198-3918-1-git-send-email-jtweaver@hawaii.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
	esb@ics.hawaii.edu, henric@hawaii.edu
Subject: [Xen-devel] [PATCH v2] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the issue of the Xen Credit 2 scheduler
creating only one vCPU run queue on systems with multiple physical
processors. It should create one run queue per physical processor.

CPU 0 does not get a STARTING callback, so it is hard-coded to run
queue 0. At the time this happens, socket information is not yet
available for CPU 0.

Socket information is available for each other CPU when it gets the
STARTING callback (by which time socket information is also available
for CPU 0). Each CPU is assigned to a run queue based on its socket.

---
Changes from v1:
* moved comments to the top of the section in one long comment block
* collapsed code to improve readability
* fixed else if indentation style
* updated comment about the runqueue plan
---
 xen/common/sched_credit2.c |   41 +++++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..3ff46a3 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,8 +85,8 @@
  * to a small value, and a fixed credit is added to everyone.
  *
  * The plan is for all cores that share an L2 will share the same
- * runqueue.  At the moment, there is one global runqueue for all
- * cores.
+ * runqueue. At the moment, all cores that share a socket share the same
+ * runqueue.
  */
 
 /*
@@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
 static void init_pcpu(const struct scheduler *ops, int cpu)
 {
     int rqi;
+    int cpu0_socket;
+    int cpu_socket;
     unsigned long flags;
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_runqueue_data *rqd;
@@ -1959,15 +1961,26 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
         return;
     }
 
-    /* Figure out which runqueue to put it in */
+    /*
+     * Choose which run queue to add cpu to based on its socket.
+     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
+     * callback and socket information is not yet available for it).
+     * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
+     * Else if cpu is on socket 0, add it to a run queue based on the socket
+     * CPU 0 is actually on.
+     * Else add it to a run queue based on its own socket.
+     */
+
     rqi = 0;
+    cpu_socket = cpu_to_socket(cpu);
+    cpu0_socket = cpu_to_socket(0);
 
-    /* Figure out which runqueue to put it in */
-    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
-    if ( cpu == 0 )
-        rqi = 0;
+    if ( cpu == 0 || cpu_socket == cpu0_socket )
+        rqi = 0;
+    else if ( cpu_socket == 0 )
+        rqi = cpu0_socket;
     else
-        rqi = cpu_to_socket(cpu);
+        rqi = cpu_socket;
 
     if ( rqi < 0 )
     {
@@ -2010,13 +2023,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
+    /* This function is only for calling init_pcpu on CPU 0
+     * because it does not get a STARTING callback */
+
+    if ( cpu == 0 )
         init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
 
     return (void *)1;
 }
@@ -2072,6 +2083,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 static int
 csched_cpu_starting(int cpu)
 {
+    /* This function is for calling init_pcpu on every CPU, except for CPU 0 */
+
     struct scheduler *ops;
 
     /* Hope this is safe from cpupools switching things around. :-) */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 09:03:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 09:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC3ov-0006z7-Ug; Sat, 08 Feb 2014 09:03:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1WC3ou-0006z2-7P
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 09:03:28 +0000
Received: from [85.158.143.35:13381] by server-3.bemta-4.messagelabs.com id
	EC/FA-11539-FD2F5F25; Sat, 08 Feb 2014 09:03:27 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391850205!4104587!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19609 invoked from network); 8 Feb 2014 09:03:26 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 09:03:26 -0000
Received: by mail-qc0-f172.google.com with SMTP id c9so7556722qcz.17
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 01:03:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=A6uGzGaq47euHo8phHFb7GR6ub/kKN5F+cvCTboWwTI=;
	b=GcjFvq0BUUqe0u0I9Y7k3XPt11XA2nUstt3hOjg5nwRcmzXmuhiZ3wyOgqy5InnfdG
	suyyX5jSqcOWYyD+XDtj3hek6VnGoJTECbhARcNzcn6/2FUHPJ5xVnmcpL79Ul2i/X1C
	DyfImqb5OrYuIAYd4oGeoREWqsyR5fSG5kL9iYUPnB+X8RO2qbgrdzJyVS7rIYpE0/kd
	JvR3Q1XFUjhXU7cLvJDjeqNnJNLBDu0MJuc91ux5c1f8MeuQ866aiL5Gh4Lwh+y5k8nj
	J2u70Oi63kGPz8YiE/WMEexyzjS8mp/XeqMPNqLFQld3PKV2SdnLyZe5tYd/VBHV9hqj
	hahw==
X-Gm-Message-State: ALoCoQkS5iFYQ7zbGEDzoqf3g9WMOsOSBRebtM3JL4K754CxPukSwGCp0nBI2UsqZa8Rk/LspP4x
X-Received: by 10.140.32.133 with SMTP id h5mr12608941qgh.49.1391850205542;
	Sat, 08 Feb 2014 01:03:25 -0800 (PST)
Received: from debian-vm.localdomain (cpe-72-130-147-24.hawaii.res.rr.com.
	[72.130.147.24]) by mx.google.com with ESMTPSA id
	o75sm13739208qgd.11.2014.02.08.01.03.23 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sat, 08 Feb 2014 01:03:24 -0800 (PST)
From: Justin Weaver <jtweaver@hawaii.edu>
To: xen-devel@lists.xen.org
Date: Fri,  7 Feb 2014 23:03:18 -1000
Message-Id: <1391850198-3918-1-git-send-email-jtweaver@hawaii.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
	esb@ics.hawaii.edu, henric@hawaii.edu
Subject: [Xen-devel] [PATCH v2] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the issue of the Xen Credit 2 scheduler
creating only one vCPU run queue on systems with multiple physical
processors. It should create one run queue per physical processor
(socket).

CPU 0 does not get a STARTING callback, so it is hard-coded to run
queue 0. At the time this happens, socket information is not yet
available for CPU 0.

Socket information is available for every other CPU when it gets
its STARTING callback (by which time it is also available for
CPU 0). Each CPU is assigned to a run queue based on its socket.
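
The selection rule described above can be condensed into a small standalone sketch. This is illustration only, not the actual patch (the real logic lives in init_pcpu() in xen/common/sched_credit2.c); the helper names and the demo topology are assumed:

```c
#include <assert.h>

/*
 * Assumed demo topology, for illustration only:
 * CPU 0 and CPU 1 on socket 1, CPUs 2-3 on socket 0, CPUs 4-5 on socket 2.
 */
static const int demo_socket[] = { 1, 1, 0, 0, 2, 2 };

/* Hypothetical stand-in for cpu_to_socket(). */
static int socket_of(int cpu)
{
    return demo_socket[cpu];
}

/* Condensed restatement of the rqi computation in the patch. */
static int pick_runqueue(int cpu)
{
    int cpu_socket = socket_of(cpu);
    int cpu0_socket = socket_of(0);

    if ( cpu == 0 || cpu_socket == cpu0_socket )
        return 0;               /* share run queue 0 with CPU 0 */
    else if ( cpu_socket == 0 )
        return cpu0_socket;     /* socket 0 takes CPU 0's real socket index */
    else
        return cpu_socket;      /* otherwise, one run queue per socket */
}
```

The swap in the middle branch keeps the run-queue indices dense: run queue 0 stands in for CPU 0's real socket, so the CPUs that actually live on socket 0 use the index CPU 0 vacated.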

---
Changes from v1:
* moved comments to the top of the section in one long comment block
* collapsed code to improve readability
* fixed else if indentation style
* updated comment about the runqueue plan
---
 xen/common/sched_credit2.c |   41 +++++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..3ff46a3 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,8 +85,8 @@
  * to a small value, and a fixed credit is added to everyone.
  *
  * The plan is for all cores that share an L2 will share the same
- * runqueue.  At the moment, there is one global runqueue for all
- * cores.
+ * runqueue. At the moment, all cores that share a socket share the same
+ * runqueue.
  */
 
 /*
@@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
 static void init_pcpu(const struct scheduler *ops, int cpu)
 {
     int rqi;
+    int cpu0_socket;
+    int cpu_socket;
     unsigned long flags;
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_runqueue_data *rqd;
@@ -1959,15 +1961,26 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
         return;
     }
 
-    /* Figure out which runqueue to put it in */
+    /*
+     * Choose which run queue to add cpu to based on its socket.
+     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
+     * callback and socket information is not yet available for it).
+     * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
+     * Else If cpu is on socket 0, add it to a run queue based on the socket
+     * CPU 0 is actually on.
+     * Else add it to a run queue based on its own socket. 
+     */
+
     rqi = 0;
+    cpu_socket = cpu_to_socket(cpu);
+    cpu0_socket = cpu_to_socket(0);
 
-    /* Figure out which runqueue to put it in */
-    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
-    if ( cpu == 0 )
-        rqi = 0;
+    if ( cpu == 0 || cpu_socket == cpu0_socket )
+        rqi = 0;    
+    else if ( cpu_socket == 0 )
+        rqi = cpu0_socket;
     else
-        rqi = cpu_to_socket(cpu);
+        rqi = cpu_socket;
 
     if ( rqi < 0 )
     {
@@ -2010,13 +2023,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
+    /* This function is only for calling init_pcpu on CPU 0
+     * because it does not get a STARTING callback */
+
+    if ( cpu == 0 )
         init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
 
     return (void *)1;
 }
@@ -2072,6 +2083,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 static int
 csched_cpu_starting(int cpu)
 {
+    /* This function is for calling init_pcpu on every CPU, except for CPU 0 */
+
     struct scheduler *ops;
 
     /* Hope this is safe from cpupools switching things around. :-) */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 10:02:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 10:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC4jr-00018d-TW; Sat, 08 Feb 2014 10:02:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WC4jq-00018Y-CV
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 10:02:18 +0000
Received: from [85.158.137.68:65062] by server-17.bemta-3.messagelabs.com id
	08/FB-22569-9A006F25; Sat, 08 Feb 2014 10:02:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391853735!489809!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2108 invoked from network); 8 Feb 2014 10:02:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 10:02:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,805,1384300800"; d="scan'208";a="99191352"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Feb 2014 10:02:14 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4; Sat, 8 Feb 2014
	05:02:13 -0500
Message-ID: <1391853732.15093.38.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 10:02:12 +0000
In-Reply-To: <osstest-24796-mainreport@xen.org>
References: <osstest-24796-mainreport@xen.org>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24796: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-08 at 07:55 +0000, xen.org wrote:
> flight 24796 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24796/
> 
> Failures and problems with tests :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl           3 host-install(3)         broken REGR. vs. 24743

Woking apparently couldn't see /usr/groups/xencore:
2014-02-07 20:09:55 Z XenUse overriding $USER to osstest
Can't exec "/usr/groups/xencore/systems/bin/xenuse": Input/output error at Osstest.pm line 274.
/usr/groups/xencore/systems/bin/xenuse --off marilith-n5: -1 Input/output error at Osstest.pm line 275.

It looks like autofs was b0rked. I tried sudo /e/init.d/autofs reload
which didn't help and then restart, which did.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 10:20:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 10:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC511-00026r-Se; Sat, 08 Feb 2014 10:20:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBwY4-00023T-Im
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 01:17:37 +0000
Received: from [85.158.139.211:34637] by server-10.bemta-5.messagelabs.com id
	B0/CE-08578-FA585F25; Sat, 08 Feb 2014 01:17:35 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391822249!2485192!1
X-Originating-IP: [209.85.212.53]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4281 invoked from network); 8 Feb 2014 01:17:30 -0000
Received: from mail-vb0-f53.google.com (HELO mail-vb0-f53.google.com)
	(209.85.212.53)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 01:17:30 -0000
Received: by mail-vb0-f53.google.com with SMTP id p17so3238022vbe.12
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 17:17:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=1mrf3ot8IOoQznA2BljENKP6fcafS0NmT+BcKJsDEx8=;
	b=PV705WFM122iUYtzHVDu4eksoClZ62rwEyhk/kkpAZ6PIwI0c4tpZuRcvlJiy+WsTi
	7Qc/VsivfiSA9PuqHcm5gCyCRv3FvmV60Je5+qsGDpHyzD0NFe4eZwNwV+5c2LRgrC0a
	VdR7dfuZ6d9mtHQ3sDtpJxb6TgvpHiHIYNDVf0T0u0Zd8s2WvNNUtz3ugA35xBC1zezq
	8xGn4bp5fuUsQlqKOUT6DdYGM8FBcXzc9/MXEz2g5owA/Dui+A4JcmoVJ32c81kywFEk
	p/DWQmZfz/Uui3P0k0ZYDvIHnHKIwZ7aSrI6N9XCk9By/H+I2ZCkF4hYaqltlfeIcGfc
	Wbrg==
X-Received: by 10.220.200.6 with SMTP id eu6mr230868vcb.35.1391822248718; Fri,
	07 Feb 2014 17:17:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 17:16:47 -0800 (PST)
In-Reply-To: <20140207214955.GD14908@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
	<20140207210137.GA13743@phenom.dumpdata.com>
	<CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
	<20140207214955.GD14908@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 20:16:47 -0500
Message-ID: <CA+XTOOhGNvbEq9RdzO1OEcg-kuEPRRD4na=71gxhWV91Ls8i=w@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Sat, 08 Feb 2014 10:20:03 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7369840473100185955=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7369840473100185955==
Content-Type: multipart/alternative; boundary=047d7b5d6612b9b61404f1dade1d

--047d7b5d6612b9b61404f1dade1d
Content-Type: text/plain; charset=ISO-8859-1

I followed this site (
http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions).
and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)

git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git

Had to take some additional steps here to get all of the libs:
# apt-get install build-essential
# apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
# apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
# apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
# apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
# apt-get install gettext
# apt-get install libaio-dev
# apt-get install libpixman-1-dev

./configure
make dist
make install



On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > I did not use the patch.  I was assuming it was already patched given
> > previous email.  Is the patch for qemu source or xen source?
>
> It is for QEMU, but you are right - it should have been part
> of QEMU if you got the latest version of Xen-unstable.
>
> You didn't use some specific tag but just 'staging' ?
>
> >
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > > >
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> > > reset
> > > > by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection
> error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection
> error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection
> error:
> > > > Connection refused
> > > > root@fiat:~# xl list
> > > > Name                                        ID   Mem VCPUs State
> Time(s)
> > > > Domain-0                                     0  1024     1     r-----
> > > >  15.2
> > > > ubuntu-hvm-0                                 1  1025     1     ------
> > > > 0.0
> > > >
> > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690
> pages to
> > > > be allocated)
> > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 ->
> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 ->
> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 ->
> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 ->
> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> input
> > > > to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
> > > > (XEN) PCI add device 0000:00:1e.0
> > > > (XEN) PCI add device 0000:00:1f.0
> > > > (XEN) PCI add device 0000:00:1f.2
> > > > (XEN) PCI add device 0000:00:1f.3
> > > > (XEN) PCI add device 0000:01:00.0
> > > > (XEN) PCI add device 0000:02:02.0
> > > > (XEN) PCI add device 0000:02:04.0
> > > > (XEN) PCI add device 0000:03:00.0
> > > > (XEN) PCI add device 0000:03:00.1
> > > > (XEN) PCI add device 0000:04:00.0
> > > > (XEN) PCI add device 0000:04:00.1
> > > > (XEN) PCI add device 0000:05:00.0
> > > > (XEN) PCI add device 0000:05:00.1
> > > > (XEN) PCI add device 0000:06:03.0
> > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 >
> 262400
> > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1
> memflags=0
> > > > (200 of 1024)
> > > > (d1) HVM Loader
> > > > (d1) Detected Xen v4.4-rc2
> > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > (d1) System requested SeaBIOS
> > > > (d1) CPU speed is 3093 MHz
> > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > >
> > > >
> > > > Excerpt from /var/log/xen/*
> > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@oracle.com> wrote:
> > > >
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install xen4.4 RC3 on my host, however
> I am
> > > > > > getting the error:
> > > > > >
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command
> interface
> > > (2 =
> > > > > No
> > > > > > such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc
> handle:
> > > No
> > > > > such
> > > > > > file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've google searched for this and an article appears, but is not
> the
> > > same
> > > > > > (as far as I can tell).  Running any xl command generates a
> similar
> > > > > error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your
> distro
> > > is,
> > > > > but
> > > > > they are usually put in /etc/init.d/rc.d/xen*
> > > > >
> > > > >
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > > mikeneiderhauser@gmail.com> wrote:
> > > > > >
> > > > > > > Much. Do I need to install from src or is there a package I can
> > > > > install.
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > > konrad.wilk@oracle.com> wrote:
> > > > > > >
> > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser
> wrote:
> > > > > > >> > I did not.  I do not have the toolchain installed.  I may
> have
> > > time
> > > > > > >> later
> > > > > > >> > today to try the patch.  Are there any specific
> instructions on
> > > how
> > > > > to
> > > > > > >> > patch the src, compile and install?
> > > > > > >>
> > > > > > >> There actually should be a new version of Xen 4.4-rcX which
> will
> > > have
> > > > > the
> > > > > > >> fix. That might be easier for you?
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > > >> > konrad.wilk@oracle.com> wrote:
> > > > > > >> >
> > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
> Neiderhauser
> > > wrote:
> > > > > > >> > > > Hi all,
> > > > > > >> > > >
> > > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET
> card
> > > > > (4x1G
> > > > > > >> NIC)
> > > > > > >> > > to a
> > > > > > >> > > > HVM.  I have been attempting to resolve this issue on
> the
> > > > > xen-users
> > > > > > >> list,
> > > > > > >> > > > but it was advised to post this issue to this list.
> (Initial
> > > > > > >> Message -
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > >
> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > > > > > >> )
> > > > > > >> > > >
> > > > > > >> > > > The machine I am using as host is a Dell Poweredge
> server
> > > with a
> > > > > > >> Xeon
> > > > > > >> > > > E31220 with 4GB of ram.
> > > > > > >> > > >
> > > > > > >> > > > The possible bug is the following:
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > > 40030000
> > > > > > >> > > > ....
> > > > > > >> > > >
> > > > > > >> > > > I believe it may be similar to this thread
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > >> > >
> > > > > > >> > > Did you try the patch?
> > > > > > >> > > >
> > > > > > >> > > > Please let me know if you need any additional
> information.
> > > > > > >> > > >
> > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > >> > > > Regards
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > >> > > >
> > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > >> > > > builder='hvm'
> > > > > > >> > > > device_model='qemu-dm'
> > > > > > >> > > > memory=1024
> > > > > > >> > > > vcpus=2
> > > > > > >> > > >
> > > > > > >> > > > # Virtual Interface
> > > > > > >> > > > # Network bridge to USB NIC
> > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > >> > > >
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > >> > > > #pci_permissive=1
> > > > > > >> > > >
> > > > > > >> > > > # All PCI Devices
> > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1',
> '05:00.0',
> > > > > > >> '05:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Brodcom 2x1G NIC
> > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > >
> > > > > > >> > > > # HVM Disks
> > > > > > >> > > > # Hard disk only
> > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > >> > > > boot="c"
> > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > >> > > >
> > > > > > >> > > > # Hard disk with ISO
> > > > > > >> > > > # Boot from ISO first ('d')
> > > > > > >> > > > #boot="d"
> > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > >> > > >
> 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > >> > > >
> > > > > > >> > > > # ACPI Enable
> > > > > > >> > > > acpi=1
> > > > > > >> > > > # HVM Event Modes
> > > > > > >> > > > on_poweroff='destroy'
> > > > > > >> > > > on_reboot='restart'
> > > > > > >> > > > on_crash='restart'
> > > > > > >> > > >
> > > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > > >> > > > sdl=0
> > > > > > >> > > > serial='pty'
> > > > > > >> > > >
> > > > > > >> > > > # VNC Configuration
> > > > > > >> > > > # Reachable from any host (vnclisten is "0.0.0.0")
> > > > > > >> > > > vnc=1
> > > > > > >> > > > vnclisten="0.0.0.0"
> > > > > > >> > > > vncpasswd=""
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > Copied for xen-users list
> > > > > > >> > > > ###########################################################
> > > > > > >> > > >
> > > > > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I rebooted the host, then ran the script that assigns the PCI
> > > > > > >> > > > devices to pciback. The output looks like:
> > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > > >> > > > Calling function pciback_dev for:
> > > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > > >> > > > 0000:03:00.0
> > > > > > >> > > > 0000:03:00.1
> > > > > > >> > > > 0000:04:00.0
> > > > > > >> > > > 0000:04:00.1
> > > > > > >> > > > 0000:05:00.0
> > > > > > >> > > > 0000:05:00.1
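
[Editorial note: the unbind/bind sequence shown by dev_mgmt.sh above follows the standard xen-pciback sysfs interface. A minimal sketch of what such a helper might do is below; the function name and the SYSFS variable are illustrative (on a real host SYSFS is simply /sys), and the device/driver names are the ones from the log.]

```shell
#!/bin/sh
# Sketch of the rebind sequence suggested by the dev_mgmt.sh output above.
# SYSFS is parameterised only so the sequence can be exercised outside a
# real Xen host; the sysfs attribute paths are xen-pciback's real ones.
SYSFS="${SYSFS:-/sys}"

rebind_to_pciback() {
    dev="$1"
    # Do nothing unless the pciback driver is actually registered.
    [ -d "$SYSFS/bus/pci/drivers/pciback" ] || return 0
    # Release the device from its current driver (igb/bnx2 in the log), if bound.
    if [ -e "$SYSFS/bus/pci/devices/$dev/driver/unbind" ]; then
        echo "$dev" > "$SYSFS/bus/pci/devices/$dev/driver/unbind"
    fi
    # Register the slot with pciback, then bind the device to it.
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/new_slot"
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/bind"
}

for dev in 0000:03:00.0 0000:03:00.1 0000:04:00.0 0000:04:00.1 \
           0000:05:00.0 0000:05:00.1; do
    rebind_to_pciback "$dev"
done
```

After rebinding, `xl pci-assignable-list` should report the same six BDFs as the listing above.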
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > > >> > > > WARNING: Use "device_model_override" instead if you really want a
> > > > > > >> > > > non-default device_model
> > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> create:
> > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=unknown
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda, using backend phy
> > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> > > running
> > > > > > >> > > bootloader
> > > > > > >> > > > libxl: debug:
> libxl_bootloader.c:321:libxl__bootloader_run:
> > > not
> > > > > a PV
> > > > > > >> > > > domain, skipping bootloader
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c728: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> > > > > > >> > > > NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > > > >> > > > free_memkb=2980
> > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> > > placement
> > > > > > >> > > candidate
> > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> > > memsz=0xa69a4
> > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 ->
> 0x1a69a4
> > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > > >> 0x7f022c81682d
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=phy
> > > > > > >> > > > libxl: debug:
> libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2
> still
> > > > > waiting
> > > > > > >> > > state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2
> ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/block add
> > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> > > Spawning
> > > > > > >> > > device-model
> > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > /usr/bin/qemu-system-i386
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > -xen-domid
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -chardev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -mon
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > chardev=libxl-cmd,mode=control
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -name
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > ubuntu-hvm-0
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -vnc
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 0.0.0.0:0
> > > > > > >> ,to=99
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> isa-fdc.driveA=
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -serial
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> pty
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -vga
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > cirrus
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> vga.vram_size_mb=8
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -boot
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > order=c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -smp
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 2,maxcpus=2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -device
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -netdev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -M
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> xenfv
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -m
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> 1016
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -drive
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > > >> > > > libxl: debug:
> libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > > token=3/1:
> > > > > > >> > > register
> > > > > > >> > > > slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210c960
> > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1:
> event
> > > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210c960
> > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1:
> event
> > > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > > token=3/1:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c960: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > > connected
> > > > > to
> > > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > > > type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > > >> > > >     "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "query-chardev",
> > > > > > >> > > >     "id": 2
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "change",
> > > > > > >> > > >     "id": 3,
> > > > > > >> > > >     "arguments": {
> > > > > > >> > > >         "device": "vnc",
> > > > > > >> > > >         "target": "password",
> > > > > > >> > > >         "arg": ""
> > > > > > >> > > >     }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "query-vnc",
> > > > > > >> > > >     "id": 4
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug:
> libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > > token=3/2:
> > > > > > >> > > register
> > > > > > >> > > > slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210e8a8
> > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> event
> > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2
> still
> > > > > waiting
> > > > > > >> state
> > > > > > >> > > 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210e8a8
> > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> event
> > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > > token=3/2:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/vif-bridge online
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/vif-bridge add
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > > connected
> > > > > to
> > > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > > > type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > > >> > > >     "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "device_add",
> > > > > > >> > > >     "id": 2,
> > > > > > >> > > >     "arguments": {
> > > > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > > > >> > > >     }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read
> error:
> > > > > > >> Connection
> > > > > > >> > > reset
> > > > > > >> > > > by peer
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > > > Creating pci
> > > > > > >> > > backend
> > > > > > >> > > > libxl: debug:
> libxl_event.c:1737:libxl__ao_progress_report:
> > > ao
> > > > > > >> 0x210c360:
> > > > > > >> > > > progress report: ignored
> > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > > > 0x210c360:
> > > > > > >> > > > complete, rc=0
> > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > > > 0x210c360:
> > > > > > >> > > destroy
> > > > > > >> > > > Daemon running with PID 3214
> > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > > > >> releases:793
> > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0
> maximum
> > > > > > >> allocations:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> > > toobig:4
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > > 40030000
> > > > > > >> > > > CPU #0:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1
> SMM=0
> > > HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > > CPU #1:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1
> SMM=0
> > > HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > /etc/default/grub
> > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > > > >> > > > GRUB_TIMEOUT=10
> > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > > > >> > > > # biosdevname=0
> > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
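
[Editorial note: edits to /etc/default/grub only take effect after the GRUB config is regenerated. On a Debian-style install (which the lsb_release fallback above suggests), the usual sequence is sketched below; the verification grep is an assumption about what to check, not something stated in the thread.]

```shell
# Regenerate /boot/grub/grub.cfg so the GRUB_CMDLINE_XEN options are picked up,
# then reboot into the "Xen 4.3-amd64" entry selected by GRUB_DEFAULT.
update-grub

# After the reboot, confirm the hypervisor saw the options:
xl info | grep xen_commandline
```

If `xl info` shows dom0_mem=1024M and dom0_max_vcpus=1, dom0 really booted with the intended memory cap and vCPU limit.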
> > > > > > >> > >
> > > > > > >> > > > _______________________________________________
> > > > > > >> > > > Xen-devel mailing list
> > > > > > >> > > > Xen-devel@lists.xen.org
> > > > > > >> > > > http://lists.xen.org/xen-devel
> > > > > > >> > >
> > > > > > >> > >
> > > > > > >>
> > > > > > >
> > > > > > >
> > > > >
> > >
>

--047d7b5d6612b9b61404f1dade1d
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">I followed this site (<a href=3D"http://wiki.xenproject.or=
g/wiki/Xen_4.4_RC3_test_instructions">http://wiki.xenproject.org/wiki/Xen_4=
.4_RC3_test_instructions</a>).<div>and then followed (<a href=3D"http://wik=
i.xen.org/wiki/Compiling_Xen_From_Source">http://wiki.xen.org/wiki/Compilin=
g_Xen_From_Source</a>)<br>

<div><br></div><div><pre style=3D"padding:1em;border:1px solid rgb(221,221,=
221);color:rgb(0,0,0);background-color:rgb(250,250,250);line-height:1.3em;f=
ont-size:15px"><span style=3D"font-family:arial;line-height:1.3em">git clon=
e -b 4.4.0-rc3 git://<a href=3D"http://xenbits.xen.org/xen.git">xenbits.xen=
.org/xen.git</a></span><br>

</pre><pre style=3D"padding:1em;border:1px solid rgb(221,221,221);backgroun=
d-color:rgb(250,250,250)"><span style=3D"color:rgb(0,0,0);font-size:15px;li=
ne-height:1.3em;font-family:arial">Had to take some additional steps here t=
o get all of the libs
# apt-get install build-essential=20
</span><span style=3D"color:rgb(0,0,0);font-size:15px;line-height:1.3em;fon=
t-family:arial"># apt-get install bcc bin86 gawk bridge-utils iproute libcu=
rl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif=20
</span><span style=3D"color:rgb(0,0,0);font-size:15px;line-height:1.3em;fon=
t-family:arial"># apt-get install texinfo texlive-latex-base texlive-latex-=
recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev merc=
urial
</span><span style=3D"color:rgb(0,0,0);font-size:15px;line-height:1.3em;fon=
t-family:arial"># apt-get install make gcc libc6-dev zlib1g-dev python pyth=
on-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev lib=
jpeg62-dev
</span><span style=3D"color:rgb(0,0,0);font-size:15px;line-height:1.3em;fon=
t-family:arial"># apt-get install iasl libbz2-dev e2fslibs-dev git-core uui=
d-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
</span><span style=3D"color:rgb(0,0,0);font-size:15px;line-height:1.3em;fon=
t-family:arial"># apt-get install gettext
apt-get install </span><span style=3D"background-color:rgb(255,255,255);fon=
t-size:15px;line-height:19.5px"><font color=3D"#000000" face=3D"arial">liba=
io-dev
apt-get install libpixman-1-dev</font></span></pre><pre style=3D"padding:1e=
m;border:1px solid rgb(221,221,221);color:rgb(0,0,0);background-color:rgb(2=
50,250,250);line-height:1.3em;font-size:15px"><span style=3D"line-height:1.=
3em;font-family:arial">./configure
make dist
make install</span></pre></div></div></div><div class=3D"gmail_extra"><br><=
br><div class=3D"gmail_quote">On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszu=
tek Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad.wilk@oracle.com" ta=
rget=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span> wrote:<br>

<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"">On Fri, Feb 07, 2014 at 04:2=
9:18PM -0500, Mike Neiderhauser wrote:<br>
&gt; I did not use the patch. =A0I was assuming it was already patched give=
n<br>
&gt; previous email. =A0Is the patch for qemu source or xen source?<br>
<br>
</div>It is for QEMU, but you are right - it should have been part<br>
of QEMU if you got the latest version of Xen-unstable.<br>
<br>
You didn&#39;t use some specific tag but just &#39;staging&#39; ?<br>
<div class=3D"HOEnZb"><div class=3D"h5"><br>
&gt;<br>
&gt;<br>
&gt; On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk &lt;<br>
&gt; <a href=3D"mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&g=
t; wrote:<br>
&gt;<br>
&gt; &gt; On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote=
:<br>
&gt; &gt; &gt; Ok. I started ran the initscripts and now xl works.<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; However, I still see the same behavior as before:<br>
&gt; &gt; &gt;<br>
&gt; &gt;<br>
&gt; &gt; Did you use the patch that was mentioned in the URL?<br>
&gt; &gt;<br>
&gt; &gt; &gt; root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: C=
onnection<br>
&gt; &gt; reset<br>
&gt; &gt; &gt; by peer<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connect=
ion error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connect=
ion error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connect=
ion error:<br>
&gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; root@fiat:~# xl list<br>
&gt; &gt; &gt; Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0ID =A0 Mem VCPUs State Time(s)<br>
&gt; &gt; &gt; Domain-0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 0 =A01024 =A0 =A0 1 =A0 =A0 r-----<br>
&gt; &gt; &gt; =A015.2<br>
&gt; &gt; &gt; ubuntu-hvm-0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 1 =A01025 =A0 =A0 1 =A0 =A0 ------<br>
&gt; &gt; &gt; 0.0<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; (XEN) =A0Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -&gt=
; 0x23f3000<br>
&gt; &gt; &gt; (XEN) PHYSICAL MEMORY ARRANGEMENT:<br>
&gt; &gt; &gt; (XEN) =A0Dom0 alloc.: =A0 0000000134000000-&gt;0000000138000=
000 (233690 pages to<br>
&gt; &gt; &gt; be allocated)<br>
&gt; &gt; &gt; (XEN) =A0Init. ramdisk: 000000013d0da000-&gt;000000013ffffe0=
0<br>
&gt; &gt; &gt; (XEN) VIRTUAL MEMORY ARRANGEMENT:<br>
&gt; &gt; &gt; (XEN) =A0Loaded kernel: ffffffff81000000-&gt;ffffffff823f300=
0<br>
&gt; &gt; &gt; (XEN) =A0Init. ramdisk: ffffffff823f3000-&gt;ffffffff85318e0=
0<br>
&gt; &gt; &gt; (XEN) =A0Phys-Mach map: ffffffff85319000-&gt;ffffffff8551900=
0<br>
&gt; &gt; &gt; (XEN) =A0Start info: =A0 =A0ffffffff85519000-&gt;ffffffff855=
194b4<br>
&gt; &gt; &gt; (XEN) =A0Page tables: =A0 ffffffff8551a000-&gt;ffffffff85549=
000<br>
&gt; &gt; &gt; (XEN) =A0Boot stack: =A0 =A0ffffffff85549000-&gt;ffffffff855=
4a000<br>
&gt; &gt; &gt; (XEN) =A0TOTAL: =A0 =A0 =A0 =A0 ffffffff80000000-&gt;fffffff=
f85800000<br>
&gt; &gt; &gt; (XEN) =A0ENTRY ADDRESS: ffffffff81d261e0<br>
&gt; &gt; &gt; (XEN) Dom0 has maximum 1 VCPUs<br>
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -&gt; 0x=
ffffffff81b2f000<br>
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -&gt; 0x=
ffffffff81d0f0f0<br>
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -&gt; 0x=
ffffffff81d252c0<br>
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -&gt; 0x=
ffffffff81e6d000<br>
&gt; &gt; &gt; (XEN) Scrubbing Free RAM: .............................done.=
<br>
&gt; &gt; &gt; (XEN) Initial low memory virq threshold set at 0x4000 pages.=
<br>
&gt; &gt; &gt; (XEN) Std. Loglevel: All<br>
&gt; &gt; &gt; (XEN) Guest Loglevel: All<br>
&gt; &gt; &gt; (XEN) Xen is relinquishing VGA console.<br>
&gt; &gt; &gt; (XEN) *** Serial input -&gt; DOM0 (type &#39;CTRL-a&#39; thr=
ee times to switch input<br>
&gt; &gt; &gt; to Xen)<br>
&gt; &gt; &gt; (XEN) Freed 260kB init memory.<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:01.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1a.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1c.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1d.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1e.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.2<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.3<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:01:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:02:02.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:02:04.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:03:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:03:00.1<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:04:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:04:00.1<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:05:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:05:00.1<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:06:03.0<br>
&gt; &gt; &gt; (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262=
401 &gt; 262400<br>
&gt; &gt; &gt; (XEN) memory.c:158:d0 Could not allocate order=3D0 extent: i=
d=3D1 memflags=3D0<br>
&gt; &gt; &gt; (200 of 1024)<br>
&gt; &gt; &gt; (d1) HVM Loader<br>
&gt; &gt; &gt; (d1) Detected Xen v4.4-rc2<br>
&gt; &gt; &gt; (d1) Xenbus rings @0xfeffc000, event channel 4<br>
&gt; &gt; &gt; (d1) System requested SeaBIOS<br>
&gt; &gt; &gt; (d1) CPU speed is 3093 MHz<br>
&gt; &gt; &gt; (d1) Relocating guest memory for lowmem MMIO space disabled<=
br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Excerpt from /var/log/xen/*<br>
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 40050000<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk &lt;<br>
&gt; &gt; &gt; <a href="mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt; &gt; I was able to compile and install xen4.4 RC3 on my host, however I am<br>
&gt; &gt; &gt; &gt; &gt; getting the error:<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; root@fiat:~/git/xen# xl list<br>
&gt; &gt; &gt; &gt; &gt; xc: error: Could not obtain handle on privileged command interface (2 = No<br>
&gt; &gt; &gt; &gt; &gt; such file or directory): Internal error<br>
&gt; &gt; &gt; &gt; &gt; libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such<br>
&gt; &gt; &gt; &gt; &gt; file or directory<br>
&gt; &gt; &gt; &gt; &gt; cannot init xl context<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; I&#39;ve google searched for this and an article appears, but is not the<br>
&gt; &gt; &gt; &gt; &gt; same (as far as I can tell). Running any xl command generates a similar<br>
&gt; &gt; &gt; &gt; &gt; error.<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; What can I do to fix this?<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; You need to run the initscripts for Xen. I don&#39;t know what your distro<br>
&gt; &gt; &gt; &gt; is, but they are usually put in /etc/init.d/rc.d/xen*<br>
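[Editor's note: the "(2 = No such file or directory)" above is libxc failing to open the privileged command interface, which the Xen init scripts normally set up. A minimal sketch of checking for it before running xl; the privcmd paths are the standard ones, but the xencommons service name varies by distro and is only an example here:]

```shell
#!/bin/sh
# Report whether the privileged command interface xl/libxc needs is present.
# It appears as /proc/xen/privcmd (via xenfs) or /dev/xen/privcmd.
privcmd_status() {
    if [ -e /proc/xen/privcmd ] || [ -e /dev/xen/privcmd ]; then
        echo "privcmd present"
    else
        # Typical fix: start the Xen init scripts, which mount xenfs
        # and load the backend modules (service name may differ).
        echo "privcmd missing - start the Xen init scripts, e.g. /etc/init.d/xencommons start"
    fi
}
privcmd_status
```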
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser &lt;<br>
&gt; &gt; &gt; &gt; &gt; <a href="mailto:mikeneiderhauser@gmail.com">mikeneiderhauser@gmail.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; Much. Do I need to install from src or is there a package I can<br>
&gt; &gt; &gt; &gt; install.<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk &lt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; <a href="mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; I did not. I do not have the toolchain installed. I may have time<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; later today to try the patch. Are there any specific instructions on<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; how to patch the src, compile and install?<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; There actually should be a new version of Xen 4.4-rcX which will<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; have the fix. That might be easier for you?<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk &lt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; <a href="mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Hi all,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; to a HVM. I have been attempting to resolve this issue on the xen-users<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; list, but it was advised to post this issue to this list. (Initial Message -<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; <a href=3D"http://lists.xenproject.org/archives/html/xen-users/20=
14-02/msg00036.html" target=3D"_blank">http://lists.xenproject.org/archives=
/html/xen-users/2014-02/msg00036.html</a><br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; )<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The machine I am using as host is a Dell Poweredge server with a<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xeon E31220 with 4GB of ram.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The possible bug is the following:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 40030000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ....<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I believe it may be similar to this thread<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; <a href=3D"http://markmail.org/message/3zuiojywempoorxj#query:+pa=
ge:1+mid:gul34vbe4uyog2d4+state:results" target=3D"_blank">http://markmail.=
org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:resul=
ts</a><br>


&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Additional info that may be helpful is below.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; Did you try the patch?<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Please let me know if you need any additional information.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Thanks in advance for any help provided!<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Configuration file for Xen HVM<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Name (as appears in &#39;xl list&#39;)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; name=&quot;ubuntu-hvm-0&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Build settings (+ hardware)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #kernel = &quot;/usr/lib/xen-4.3/boot/hvmloader&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; builder=&#39;hvm&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; device_model=&#39;qemu-dm&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; memory=1024<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vcpus=2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Virtual Interface<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Network bridge to USB NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vif=[&#39;bridge=xenbr0&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH ###################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # PCI Permissive mode toggle<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci_permissive=1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All PCI Devices<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;03:00.0&#39;, &#39;03:00.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;, &#39;05:00.0&#39;, &#39;05:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # First two ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;03:00.0&#39;,&#39;03:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Last two ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; pci=[&#39;03:00.0&#39;, &#39;03:00.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Broadcom 2x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;05:00.0&#39;, &#39;05:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH ###################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Disks<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk only<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from HDD first (&#39;c&#39;)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; boot=&quot;c&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; disk=[&#39;phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk with ISO<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from ISO first (&#39;d&#39;)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #boot=&quot;d&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #disk=[&#39;phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w&#39;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # ACPI Enable<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; acpi=1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Event Modes<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_poweroff=&#39;destroy&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_reboot=&#39;restart&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_crash=&#39;restart&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Serial Console Configuration (Xen Console)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; sdl=0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; serial=&#39;pty&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # VNC Configuration<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Only reachable from localhost<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnc=1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnclisten=&quot;0.0.0.0&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vncpasswd=&quot;&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Copied for xen-users list<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; It appears that it cannot obtain the RAM mapping for this<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI device.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I rebooted the Host. I ran the script that assigns pci devices to<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; pciback. The output looks like:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# ./dev_mgmt.sh<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Loading Kernel Module &#39;xen-pciback&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Calling function pciback_dev for:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:03:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:03:00.0 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:03:00.0 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:03:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:03:00.1 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:03:00.1 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:04:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:04:00.0 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:04:00.0 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:04:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:04:00.1 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:04:00.1 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:05:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:05:00.0 from bnx2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:05:00.0 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:05:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:05:00.1 from bnx2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:05:00.1 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Listing PCI Devices Available to Xen<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:03:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:03:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:04:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:04:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:05:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:05:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
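[Editor's note: the Unbinding/Binding lines in the dev_mgmt.sh output above correspond to plain sysfs writes. A minimal sketch of that sequence, not the actual dev_mgmt.sh; SYSFS is parameterized here only so the logic can be dry-run off a Xen host, and new_slot is the xen-pciback interface for accepting a slot before binding:]

```shell
#!/bin/sh
# Sketch of the unbind/bind sequence the dev_mgmt.sh log reports.
# SYSFS defaults to /sys; override it to exercise the logic elsewhere.
SYSFS="${SYSFS:-/sys}"

bind_to_pciback() {
    bdf="$1"   # full BDF, e.g. 0000:03:00.0
    drv="$2"   # native driver currently bound, e.g. igb or bnx2
    echo "Unbinding $bdf from $drv"
    echo "$bdf" > "$SYSFS/bus/pci/drivers/$drv/unbind"
    echo "Binding $bdf to pciback"
    echo "$bdf" > "$SYSFS/bus/pci/drivers/pciback/new_slot"
    echo "$bdf" > "$SYSFS/bus/pci/drivers/pciback/bind"
}
```

After binding, `xl pci-assignable-list` should show the devices, matching the listing above.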
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; WARNING: ignoring device_model directive.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; WARNING: Use &quot;device_model_override&quot; instead if you really want a<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; non-default device_model<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; how=(nil) callback=(nil) poller=0x210c3c0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vdev=hda spec.backend=unknown<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vdev=hda, using backend phy<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; domain, skipping bootloader<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x210c728: deregister unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; free_memkb=2980<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; candidate with 1 nodes, 4 cpus and 2980 KB free selected<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_parse_binary: memory: 0x100000 -&gt; 0x1a69a4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: info: VIRTUAL MEMORY ARRANGEMENT:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   Loader:        0000000000100000-&gt;00000000001a69a4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   Modules:       0000000000000000-&gt;0000000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   TOTAL:         0000000000000000-&gt;000000003f800000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   ENTRY ADDRESS: 0000000000100608<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: info: PHYSICAL MEMORY ALLOCATION:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   4KB PAGES: 0x0000000000000200<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   2MB PAGES: 0x00000000000001fb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   1GB PAGES: 0x0000000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -&gt; 0x7f022c81682d<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vdev=hda spec.backend=phy<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; register slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; inprogress: poller=0x210c3c0, flags=i<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=/local/domain/0/backend/vbd/2/768/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; state 1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=/local/domain/0/backend/vbd/2/768/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 ok<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x2112f48: deregister unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/block add<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /usr/bin/qemu-system-i386 with arguments:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=/local/domain/0/device-model/2/state token=3/1: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=/local/domain/0/device-model/2/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=/local/domain/0/device-model/2/state token=3/1: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=/local/domain/0/device-model/2/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x210c960: deregister unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;execute&quot;: &quot;qmp_capabilities&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;id&quot;: 1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;execute&quot;: &quot;query-chardev&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;id&quot;: 2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;execute&quot;: &quot;change&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;id&quot;: 3,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;arguments&quot;: {<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;         &quot;device&quot;: &quot;vnc&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;         &quot;target&quot;: &quot;password&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;         &quot;arg&quot;: &quot;&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;execute&quot;: &quot;query-vnc&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;     &quot;id&quot;: 4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; token=3D3/2:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:647:devstate_watch_callback:<br>
&gt; &gt; backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 still<br>
&gt; &gt; &gt; &gt; waiting<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; 1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:643:devstate_watch_callback:<br>
&gt; &gt; backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 ok<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; token=3D3/2:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister =
unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>
&gt; &gt; hotplug<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e online<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>
&gt; &gt; hotplug<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e add<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>
&gt; &gt; connected<br>
&gt; &gt; &gt; &gt; to<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; &gt; &gt; type: qmp<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;device_add&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,=
<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driv=
er&quot;: &quot;xen-pci-passthrough&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&q=
uot;: &quot;pci-pt-03_00.0&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;host=
addr&quot;: &quot;0000:03:00.0&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
454:qmp_next: Socket read error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; reset<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; by peer<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>
&gt; &gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>
&gt; &gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>
&gt; &gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:=
81:libxl__create_pci_backend:<br>
&gt; &gt; &gt; &gt; Creating pci<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1737:libxl__ao_progress_report:<br>
&gt; &gt; ao<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<b=
r>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1569:libxl__ao_complete: ao<br>
&gt; &gt; &gt; &gt; 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1541:libxl__ao__destroy: ao<br>
&gt; &gt; &gt; &gt; 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 32=
14<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: total allocations:793 total<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; releases:793<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: current allocations:0 maximum<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; allocations:4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache current size:4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache hits:785 misses:4<br>
&gt; &gt; toobig:4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# ca=
t qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to =
/dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>
&gt; &gt; 40030000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>
&gt; &gt; HLT=3D1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>
&gt; &gt; HLT=3D1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4=
.3-amd64&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>
&gt; &gt; Debian`<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D&quot;quiet splash&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot=
;&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;d=
om0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org">Xen-devel@lists.xen.org</a><br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>

--047d7b5d6612b9b61404f1dade1d--


--===============7369840473100185955==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7369840473100185955==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 10:20:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 10:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC511-00026r-Se; Sat, 08 Feb 2014 10:20:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WBwY4-00023T-Im
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 01:17:37 +0000
Received: from [85.158.139.211:34637] by server-10.bemta-5.messagelabs.com id
	B0/CE-08578-FA585F25; Sat, 08 Feb 2014 01:17:35 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1391822249!2485192!1
X-Originating-IP: [209.85.212.53]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4281 invoked from network); 8 Feb 2014 01:17:30 -0000
Received: from mail-vb0-f53.google.com (HELO mail-vb0-f53.google.com)
	(209.85.212.53)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 01:17:30 -0000
Received: by mail-vb0-f53.google.com with SMTP id p17so3238022vbe.12
	for <xen-devel@lists.xen.org>; Fri, 07 Feb 2014 17:17:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=1mrf3ot8IOoQznA2BljENKP6fcafS0NmT+BcKJsDEx8=;
	b=PV705WFM122iUYtzHVDu4eksoClZ62rwEyhk/kkpAZ6PIwI0c4tpZuRcvlJiy+WsTi
	7Qc/VsivfiSA9PuqHcm5gCyCRv3FvmV60Je5+qsGDpHyzD0NFe4eZwNwV+5c2LRgrC0a
	VdR7dfuZ6d9mtHQ3sDtpJxb6TgvpHiHIYNDVf0T0u0Zd8s2WvNNUtz3ugA35xBC1zezq
	8xGn4bp5fuUsQlqKOUT6DdYGM8FBcXzc9/MXEz2g5owA/Dui+A4JcmoVJ32c81kywFEk
	p/DWQmZfz/Uui3P0k0ZYDvIHnHKIwZ7aSrI6N9XCk9By/H+I2ZCkF4hYaqltlfeIcGfc
	Wbrg==
X-Received: by 10.220.200.6 with SMTP id eu6mr230868vcb.35.1391822248718; Fri,
	07 Feb 2014 17:17:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Fri, 7 Feb 2014 17:16:47 -0800 (PST)
In-Reply-To: <20140207214955.GD14908@phenom.dumpdata.com>
References: <CA+XTOOgYa4kS8ZNtnVgjs5fa3Jcs9L=XKsWiTk=9gCQvpHDh5Q@mail.gmail.com>
	<20140207152547.GB3605@phenom.dumpdata.com>
	<CA+XTOOieq3JB_5t=BBSphkRgOhFpCfBZjiGL-GasURdEMD=uUg@mail.gmail.com>
	<20140207183056.GA10265@phenom.dumpdata.com>
	<CA+XTOOhBPzVZiJgtNT99_y-=gb-Z0k1MbdQyLRCZQ1_0-n7k+A@mail.gmail.com>
	<CA+XTOOgoXs7cFeo9_5b=b-1+ta+FXXJX0mmyYNW70qf45wWW3w@mail.gmail.com>
	<20140207203934.GA13333@phenom.dumpdata.com>
	<CA+XTOOjGmzoA8LwPEm5cMAFDCwYYfBA9FD3HKeM4Ve3D4+QWDg@mail.gmail.com>
	<20140207210137.GA13743@phenom.dumpdata.com>
	<CA+XTOOicrbctK3_0b08csx-_XoW65Ur9sfq9wzrW3yJDyfB=mw@mail.gmail.com>
	<20140207214955.GD14908@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Fri, 7 Feb 2014 20:16:47 -0500
Message-ID: <CA+XTOOhGNvbEq9RdzO1OEcg-kuEPRRD4na=71gxhWV91Ls8i=w@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Sat, 08 Feb 2014 10:20:03 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7369840473100185955=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7369840473100185955==
Content-Type: multipart/alternative; boundary=047d7b5d6612b9b61404f1dade1d

--047d7b5d6612b9b61404f1dade1d
Content-Type: text/plain; charset=ISO-8859-1

I followed this page
(http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
and then followed http://wiki.xen.org/wiki/Compiling_Xen_From_Source.

git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git

I had to take some additional steps here to get all of the required libraries:
# apt-get install build-essential
# apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
# apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
# apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
# apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
# apt-get install gettext
# apt-get install libaio-dev
# apt-get install libpixman-1-dev

./configure
make dist
make install
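
As a side note on the QMP "Connection refused" errors in the logs quoted below: the JSON commands libxl sends can be reproduced by hand. This is only a sketch under stated assumptions: the `qmp_cmd` helper is hypothetical (mine, not part of any Xen tool), and piping it through socat to `/var/run/xen/qmp-libxl-2` (the socket path from the debug log) is an untested suggestion.

```shell
# Hypothetical helper: build the same kind of JSON command that libxl's
# qmp_send_prepare logs, e.g. {"execute": "qmp_capabilities", "id": 1}.
qmp_cmd() {
    # $1 = QMP command name, $2 = numeric id
    printf '{ "execute": "%s", "id": %d }' "$1" "$2"
}

qmp_cmd qmp_capabilities 1
# One could then try talking to the domain's QMP socket, e.g.:
#   qmp_cmd qmp_capabilities 1 | socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-2
```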



On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > I did not use the patch.  I was assuming it was already patched given the
> > previous email.  Is the patch for the qemu source or the xen source?
>
> It is for QEMU, but you are right - it should have been part
> of QEMU if you got the latest version of Xen-unstable.
>
> You didn't use a specific tag, just 'staging'?
>
> >
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > > >
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > root@fiat:~# xl list
> > > > Name                                        ID   Mem VCPUs State   Time(s)
> > > > Domain-0                                     0  1024     1     r-----    15.2
> > > > ubuntu-hvm-0                                 1  1025     1     ------     0.0
> > > >
> > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
> > > > (XEN) PCI add device 0000:00:1e.0
> > > > (XEN) PCI add device 0000:00:1f.0
> > > > (XEN) PCI add device 0000:00:1f.2
> > > > (XEN) PCI add device 0000:00:1f.3
> > > > (XEN) PCI add device 0000:01:00.0
> > > > (XEN) PCI add device 0000:02:02.0
> > > > (XEN) PCI add device 0000:02:04.0
> > > > (XEN) PCI add device 0000:03:00.0
> > > > (XEN) PCI add device 0000:03:00.1
> > > > (XEN) PCI add device 0000:04:00.0
> > > > (XEN) PCI add device 0000:04:00.1
> > > > (XEN) PCI add device 0000:05:00.0
> > > > (XEN) PCI add device 0000:05:00.1
> > > > (XEN) PCI add device 0000:06:03.0
> > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > > (d1) HVM Loader
> > > > (d1) Detected Xen v4.4-rc2
> > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > (d1) System requested SeaBIOS
> > > > (d1) CPU speed is 3093 MHz
> > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > >
> > > >
> > > > Excerpt from /var/log/xen/*
> > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > >
> > > >
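If I am reading the numbers right, the over-allocation message above is self-consistent: with 4 KiB pages, the domain's 262400-page limit is exactly 1025 MiB (matching the 1025 in the Mem column of the xl list output), and QEMU's populate request is a single page over it. A quick arithmetic check:

```shell
# Check the arithmetic behind "Over-allocation for domain 1: 262401 > 262400".
limit_pages=262400
page_bytes=4096
limit_mib=$(( limit_pages * page_bytes / 1024 / 1024 ))
over_by=$(( 262401 - limit_pages ))
echo "limit: ${limit_mib} MiB, over by: ${over_by} page(s)"
```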
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@oracle.com> wrote:
> > > >
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install Xen 4.4 RC3 on my host; however, I am
> > > > > > getting the error:
> > > > > >
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command
> interface
> > > (2 =
> > > > > No
> > > > > > such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc
> handle:
> > > No
> > > > > such
> > > > > > file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've searched Google for this and an article appears, but it is not
> > > > > > the same (as far as I can tell).  Running any xl command generates a
> > > > > > similar error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > > but they are usually put in /etc/init.d/rc.d/xen*
> > > > >
> > > > >
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > > mikeneiderhauser@gmail.com> wrote:
> > > > > >
> > > > > > > Much. Do I need to install from source, or is there a package I can
> > > > > > > install?
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > > konrad.wilk@oracle.com> wrote:
> > > > > > >
> > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser
> wrote:
> > > > > > >> > I did not.  I do not have the toolchain installed.  I may have time
> > > > > > >> > later today to try the patch.  Are there any specific instructions on
> > > > > > >> > how to patch the source, compile, and install?
> > > > > > >>
> > > > > > >> There actually should be a new version of Xen 4.4-rcX which will have
> > > > > > >> the fix. That might be easier for you?
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > > >> > konrad.wilk@oracle.com> wrote:
> > > > > > >> >
> > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
> Neiderhauser
> > > wrote:
> > > > > > >> > > > Hi all,
> > > > > > >> > > >
> > > > > > >> > > > I am attempting to do a PCI passthrough of an Intel ET card
> > > > > > >> > > > (4x1G NIC) to an HVM guest.  I have been attempting to resolve
> > > > > > >> > > > this issue on the xen-users list, but it was advised to post it
> > > > > > >> > > > to this list.  (Initial message:
> > > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > > > > > >> > > >
> > > > > > >> > > > The machine I am using as the host is a Dell PowerEdge server
> > > > > > >> > > > with a Xeon E31220 and 4 GB of RAM.
> > > > > > >> > > >
> > > > > > >> > > > The possible bug is the following:
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > > 40030000
> > > > > > >> > > > ....
> > > > > > >> > > >
> > > > > > >> > > > I believe it may be similar to this thread
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > >
> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > >> > >
> > > > > > >> > > Did you try the patch?
> > > > > > >> > > >
> > > > > > >> > > > Please let me know if you need any additional
> information.
> > > > > > >> > > >
> > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > >> > > > Regards
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > >> > > >
> > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > >> > > > builder='hvm'
> > > > > > >> > > > device_model='qemu-dm'
> > > > > > >> > > > memory=1024
> > > > > > >> > > > vcpus=2
> > > > > > >> > > >
> > > > > > >> > > > # Virtual Interface
> > > > > > >> > > > # Network bridge to USB NIC
> > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > >> > > >
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > >> > > > #pci_permissive=1
> > > > > > >> > > >
> > > > > > >> > > > # All PCI Devices
> > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1',
> '05:00.0',
> > > > > > >> '05:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Broadcom 2x1G NIC
> > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > >
> > > > > > >> > > > # HVM Disks
> > > > > > >> > > > # Hard disk only
> > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > >> > > > boot="c"
> > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > >> > > >
> > > > > > >> > > > # Hard disk with ISO
> > > > > > >> > > > # Boot from ISO first ('d')
> > > > > > >> > > > #boot="d"
> > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > >> > > >
> 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > >> > > >
> > > > > > >> > > > # ACPI Enable
> > > > > > >> > > > acpi=1
> > > > > > >> > > > # HVM Event Modes
> > > > > > >> > > > on_poweroff='destroy'
> > > > > > >> > > > on_reboot='restart'
> > > > > > >> > > > on_crash='restart'
> > > > > > >> > > >
> > > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > > >> > > > sdl=0
> > > > > > >> > > > serial='pty'
> > > > > > >> > > >
> > > > > > >> > > > # VNC Configuration
> > > > > > >> > > > # Reachable from any address (vnclisten is 0.0.0.0)
> > > > > > >> > > > vnc=1
> > > > > > >> > > > vnclisten="0.0.0.0"
> > > > > > >> > > > vncpasswd=""
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > Copied for xen-users list
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > >
> > > > > > >> > > > It appears that it cannot obtain the RAM mapping for
> this
> > > PCI
> > > > > > >> device.
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I rebooted the host, then assigned the PCI devices to
> > > > > > >> > > > pciback.  The output looks like:
> > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > > >> > > > Calling function pciback_dev for:
> > > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > > >> > > > 0000:03:00.0
> > > > > > >> > > > 0000:03:00.1
> > > > > > >> > > > 0000:04:00.0
> > > > > > >> > > > 0000:04:00.1
> > > > > > >> > > > 0000:05:00.0
> > > > > > >> > > > 0000:05:00.1
> > > > > > >> > > >
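The unbind/bind output above can be reproduced with a small sysfs script. The following is a sketch of what a script like dev_mgmt.sh typically does; the function name `pciback_dev` and the `SYSFS` variable are illustrative assumptions, not the poster's actual script:

```shell
#!/bin/sh
# Sketch (assumed, not the poster's dev_mgmt.sh): move a PCI device from its
# native driver to xen-pciback through the standard sysfs interface.
SYSFS=${SYSFS:-/sys}    # overridable so the logic can be exercised in a dry run

pciback_dev() {
    dev="$1"            # full BDF, e.g. 0000:03:00.0
    # Resolve which driver currently owns the device via the 'driver' symlink.
    drv=$(basename "$(readlink "$SYSFS/bus/pci/devices/$dev/driver")")
    echo "Unbinding $dev from $drv"
    echo "$dev" > "$SYSFS/bus/pci/devices/$dev/driver/unbind"
    echo "Binding $dev to pciback"
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/new_slot"
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/bind"
}
```

Devices handed over this way then appear in `xl pci-assignable-list`; newer toolstacks can do the same handover with `xl pci-assignable-add 03:00.0`.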
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > > >> > > > WARNING: Use "device_model_override" instead if you
> really
> > > want
> > > > > a
> > > > > > >> > > > non-default device_model
> > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> create:
> > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=unknown
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda, using backend phy
> > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> > > running
> > > > > > >> > > bootloader
> > > > > > >> > > > libxl: debug:
> libxl_bootloader.c:321:libxl__bootloader_run:
> > > not
> > > > > a PV
> > > > > > >> > > > domain, skipping bootloader
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c728: deregister unregistered
> > > > > > >> > > > libxl: debug:
> libxl_numa.c:475:libxl__get_numa_candidate:
> > > New
> > > > > best
> > > > > > >> NUMA
> > > > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> > > nr_vcpus=3,
> > > > > > >> > > > free_memkb=2980
> > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> > > placement
> > > > > > >> > > candidate
> > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> > > memsz=0xa69a4
> > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 ->
> 0x1a69a4
> > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > > >> 0x7f022c81682d
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=phy
> > > > > > >> > > > libxl: debug:
> libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2
> still
> > > > > waiting
> > > > > > >> > > state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2
> ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48
> wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/block add
> > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> > > Spawning
> > > > > > >> > > device-model
> > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > /usr/bin/qemu-system-i386
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > -xen-domid
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -chardev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -mon
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > chardev=libxl-cmd,mode=control
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -name
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > ubuntu-hvm-0
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -vnc
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 0.0.0.0:0
> > > > > > >> ,to=99
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> isa-fdc.driveA=
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -serial
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> pty
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -vga
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > cirrus
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> vga.vram_size_mb=8
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -boot
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > order=c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -smp
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 2,maxcpus=2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -device
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -netdev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -M
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> xenfv
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> -m
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> 1016
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -drive
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > >
> file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > > >> > > > libxl: debug:
> libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > > token=3/1:
> > > > > > >> > > register
> > > > > > >> > > > slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210c960
> > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1:
> event
> > > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210c960
> > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1:
> event
> > > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > > token=3/1:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c960: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > > connected
> > > > > to
> > > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > > > type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > > >> > > >     "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "query-chardev",
> > > > > > >> > > >     "id": 2
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "change",
> > > > > > >> > > >     "id": 3,
> > > > > > >> > > >     "arguments": {
> > > > > > >> > > >         "device": "vnc",
> > > > > > >> > > >         "target": "password",
> > > > > > >> > > >         "arg": ""
> > > > > > >> > > >     }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "query-vnc",
> > > > > > >> > > >     "id": 4
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug:
> libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > > token=3/2:
> > > > > > >> > > register
> > > > > > >> > > > slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210e8a8
> > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> event
> > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2
> still
> > > > > waiting
> > > > > > >> state
> > > > > > >> > > 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210e8a8
> > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> event
> > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > > token=3/2:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/vif-bridge online
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/vif-bridge add
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > > connected
> > > > > to
> > > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > > > type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > > >> > > >     "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
> message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "device_add",
> > > > > > >> > > >     "id": 2,
> > > > > > >> > > >     "arguments": {
> > > > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > > > >> > > >     }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read
> error:
> > > > > > >> Connection
> > > > > > >> > > reset
> > > > > > >> > > > by peer
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > > > Creating pci
> > > > > > >> > > backend
> > > > > > >> > > > libxl: debug:
> libxl_event.c:1737:libxl__ao_progress_report:
> > > ao
> > > > > > >> 0x210c360:
> > > > > > >> > > > progress report: ignored
> > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > > > 0x210c360:
> > > > > > >> > > > complete, rc=0
> > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > > > 0x210c360:
> > > > > > >> > > destroy
> > > > > > >> > > > Daemon running with PID 3214
> > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > > > >> releases:793
> > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0
> maximum
> > > > > > >> allocations:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> > > toobig:4
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > > 40030000
> > > > > > >> > > > CPU #0:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1
> SMM=0
> > > HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > > CPU #1:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1
> SMM=0
> > > HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > >
> > > > > > >> > > >
> ###########################################################
> > > > > > >> > > > /etc/default/grub
> > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > > > >> > > > GRUB_TIMEOUT=10
> > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo
> > > Debian`
> > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > > > >> > > > # biosdevname=0
> > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > > > > > >> > >
> > > > > > >> > > > _______________________________________________
> > > > > > >> > > > Xen-devel mailing list
> > > > > > >> > > > Xen-devel@lists.xen.org
> > > > > > >> > > > http://lists.xen.org/xen-devel
> > > > > > >> > >
> > > > > > >> > >
> > > > > > >>
> > > > > > >
> > > > > > >
> > > > >
> > >
>

I followed this site (http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source).

git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git

Had to take some additional steps here to get all of the libs:
# apt-get install build-essential
# apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
# apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
# apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
# apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
# apt-get install gettext
# apt-get install libaio-dev
# apt-get install libpixman-1-dev

./configure
make dist
make install

On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > I did not use the patch.  I was assuming it was already patched given
> > the previous email.  Is the patch for the qemu source or the xen source?
>
> It is for QEMU, but you are right - it should have been part
> of QEMU if you got the latest version of Xen-unstable.
>
> You didn't use a specific tag, just 'staging'?
>
> >
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > > >
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> > > > reset by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > root@fiat:~# xl list
> > > > Name                                        ID   Mem VCPUs      State   Time(s)
> > > > Domain-0                                     0  1024     1     r-----      15.2
> > > > ubuntu-hvm-0                                 1  1025     1     ------       0.0
> > > >
> > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages
> > > > to be allocated)
> > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> > > > input to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1e.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.2<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.3<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:01:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:02:02.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:02:04.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:03:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:03:00.1<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:04:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:04:00.1<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:05:00.0<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:05:00.1<br>
&gt; &gt; &gt; (XEN) PCI add device 0000:06:03.0<br>
&gt; &gt; &gt; (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262=
401 &gt; 262400<br>
&gt; &gt; &gt; (XEN) memory.c:158:d0 Could not allocate order=3D0 extent: i=
d=3D1 memflags=3D0<br>
&gt; &gt; &gt; (200 of 1024)<br>
&gt; &gt; &gt; (d1) HVM Loader<br>
&gt; &gt; &gt; (d1) Detected Xen v4.4-rc2<br>
&gt; &gt; &gt; (d1) Xenbus rings @0xfeffc000, event channel 4<br>
&gt; &gt; &gt; (d1) System requested SeaBIOS<br>
&gt; &gt; &gt; (d1) CPU speed is 3093 MHz<br>
&gt; &gt; &gt; (d1) Relocating guest memory for lowmem MMIO space disabled<=
br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Excerpt from /var/log/xen/*<br>
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 4005000=
0<br>
&gt; &gt; &gt;<br>
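[Archive editor's note: the "262401 &gt; 262400" over-allocation line above is a one-page overshoot. A back-of-envelope check, assuming 4 KiB pages and the 1025 MiB that xl list reports for the domain -- illustrative arithmetic only, not taken from the Xen source:]

```shell
# Illustrative arithmetic only -- variable names are mine, not Xen's.
pages_per_mib=$((1024 * 1024 / 4096))   # 256 4KiB pages per MiB
max_pages=$((1025 * pages_per_mib))     # xl list shows 1025 MiB for the guest
requested=$((max_pages + 1))            # the allocation the hypervisor refuses
echo "cap=$max_pages requested=$requested"
```

So the device model asks for exactly one page beyond the domain's allocation cap, which matches the subsequent "failed to populate ram" abort from qemu.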
&gt; &gt; &gt;<br>
&gt; &gt; &gt; On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk &lt;<a href="mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt; &gt; I was able to compile and install Xen 4.4 RC3 on my host; however, I am getting the error:<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; root@fiat:~/git/xen# xl list<br>
&gt; &gt; &gt; &gt; &gt; xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error<br>
&gt; &gt; &gt; &gt; &gt; libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory<br>
&gt; &gt; &gt; &gt; &gt; cannot init xl context<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; I&#39;ve searched Google for this; an article appears, but it is not the same issue (as far as I can tell). Running any xl command generates a similar error.<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; What can I do to fix this?<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; You need to run the initscripts for Xen. I don&#39;t know what your distro is, but they are usually put in /etc/init.d/rc.d/xen*<br>
&gt; &gt; &gt; &gt;<br>
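[Archive editor's note: for reference, the start sequence on a from-source install usually looks like the sketch below. The script names (xencommons, xendomains) are the ones a Xen 4.4 `make install` typically drops into /etc/init.d -- an assumption, so check what your install actually placed there. The sketch prints the commands rather than running them:]

```shell
# Sketch only: print the usual start commands instead of running them,
# since the exact script names and paths vary by distro (assumption).
for svc in xencommons xendomains; do
    echo "/etc/init.d/$svc start"
done
# xencommons brings up xenstored and xenconsoled; without xenstored
# running, xl fails with "Could not obtain handle on privileged
# command interface", as in the session quoted above.
```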
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser &lt;<a href="mailto:mikeneiderhauser@gmail.com">mikeneiderhauser@gmail.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; Much. Do I need to install from src, or is there a package I can install?<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk &lt;<a href="mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; I did not. I do not have the toolchain installed. I may have time later today to try the patch. Are there any specific instructions on how to patch the src, compile, and install?<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; There actually should be a new version of Xen 4.4-rcX which will have the fix. That might be easier for you?<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk &lt;<a href="mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Hi all,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC) to an HVM. I have been attempting to resolve this issue on the xen-users list, but it was advised to post it to this list. (Initial message - <a href="http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html" target="_blank">http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html</a>)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The machine I am using as the host is a Dell PowerEdge server with a Xeon E31220 and 4GB of RAM.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The possible bug is the following:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 40030000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ....<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I believe it may be similar to this thread: <a href="http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results" target="_blank">http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results</a><br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Additional info that may be helpful is below.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; Did you try the patch?<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Please let me know if you need any additional information.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Thanks in advance for any help provided!<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Regards<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Configuration file for Xen HVM<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Name (as appears in &#39;xl list&#39;)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; name=&quot;ubuntu-hvm-0&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Build settings (+ hardware)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #kernel = &quot;/usr/lib/xen-4.3/boot/hvmloader&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; builder=&#39;hvm&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; device_model=&#39;qemu-dm&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; memory=1024<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vcpus=2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Virtual Interface<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Network bridge to USB NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vif=[&#39;bridge=xenbr0&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH ###################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # PCI Permissive mode toggle<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci_permissive=1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All PCI Devices<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;03:00.0&#39;, &#39;03:00.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;, &#39;05:00.0&#39;, &#39;05:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # First two ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;03:00.0&#39;,&#39;03:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Last two ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All ports on Intel 4x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; pci=[&#39;03:00.0&#39;, &#39;03:00.1&#39;, &#39;04:00.0&#39;, &#39;04:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Broadcom 2x1G NIC<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=[&#39;05:00.0&#39;, &#39;05:00.1&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH ###################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Disks<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk only<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from HDD first (&#39;c&#39;)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; boot=&quot;c&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; disk=[&#39;phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk with ISO<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from ISO first (&#39;d&#39;)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #boot=&quot;d&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #disk=[&#39;phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w&#39;, &#39;file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r&#39;]<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # ACPI Enable<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; acpi=1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Event Modes<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_poweroff=&#39;destroy&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_reboot=&#39;restart&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_crash=&#39;restart&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Serial Console Configuration (Xen Console)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; sdl=0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; serial=&#39;pty&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # VNC Configuration<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Only reachable from localhost<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnc=1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnclisten=&quot;0.0.0.0&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vncpasswd=&quot;&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Copied for xen-users list<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; It appears that it cannot obtain the RAM mapping for this PCI device.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I rebooted the host and assigned the PCI devices to pciback. The output looks like:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# ./dev_mgmt.sh<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Loading Kernel Module &#39;xen-pciback&#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Calling function pciback_dev for:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:03:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:03:00.0 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:03:00.0 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:03:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:03:00.1 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:03:00.1 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:04:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:04:00.0 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:04:00.0 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:04:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:04:00.1 from igb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:04:00.1 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:05:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:05:00.0 from bnx2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:05:00.0 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:05:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:05:00.1 from bnx2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:05:00.1 to pciback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Listing PCI Devices Available to Xen<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:03:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:03:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:04:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:04:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:05:00.0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:05:00.1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
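[Archive editor's note: the per-device unbind/bind that dev_mgmt.sh reports is the usual sysfs handover to xen-pciback. A dry-run sketch follows -- it prints the writes instead of performing them; the BDF and driver name are taken from the log above, and `new_slot` is the pciback sysfs node I believe is involved, so verify against your kernel:]

```shell
# Dry-run: print the sysfs writes a pciback-binding script performs.
bdf="0000:03:00.0"   # first port of the Intel ET card, from the log above
drv="igb"            # its native driver, from the log above
printf 'echo %s > /sys/bus/pci/drivers/%s/unbind\n' "$bdf" "$drv"
printf 'echo %s > /sys/bus/pci/drivers/pciback/new_slot\n' "$bdf"
printf 'echo %s > /sys/bus/pci/drivers/pciback/bind\n' "$bdf"
```

With the xl toolstack, `xl pci-assignable-add 0000:03:00.0` performs the same detach-and-seize in one step, and `xl pci-assignable-list` shows the result, matching the listing above.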
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; WARNING: ignoring device_model directive.<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; WARNING: Use &quot;device_model_override&quot; instead if you really want a non-default device_model<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_parse_binary: memory: 0x100000 -&gt; 0x1a69a4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: info: VIRTUAL MEMORY ARRANGEMENT:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   Loader:        0000000000100000-&gt;00000000001a69a4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   Modules:       0000000000000000-&gt;0000000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   TOTAL:         0000000000000000-&gt;000000003f800000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   ENTRY ADDRESS: 0000000000100608<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: info: PHYSICAL MEMORY ALLOCATION:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   4KB PAGES: 0x0000000000000200<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   2MB PAGES: 0x00000000000001fb<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   1GB PAGES: 0x0000000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -&gt; 0x7f022c81682d<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   <a href="http://0.0.0.0:0" target="_blank">0.0.0.0:0</a>,to=99<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960: deregister =
unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>
&gt; &gt; connected<br>
&gt; &gt; &gt; &gt; to<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; &gt; &gt; type: qmp<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;query-chardev&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2<=
br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;change&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 3,=
<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;devi=
ce&quot;: &quot;vnc&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;targ=
et&quot;: &quot;password&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;arg&=
quot;: &quot;&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;query-vnc&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 4<=
br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; token=3D3/2:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:647:devstate_watch_callback:<br>
&gt; &gt; backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 still<br>
&gt; &gt; &gt; &gt; waiting<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; 1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:643:devstate_watch_callback:<br>
&gt; &gt; backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 ok<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>
&gt; &gt; &gt; &gt; token=3D3/2:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>
&gt; &gt; &gt; &gt; watch<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister =
unregistered<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>
&gt; &gt; hotplug<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e online<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>
&gt; &gt; hotplug<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e add<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>
&gt; &gt; connected<br>
&gt; &gt; &gt; &gt; to<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; &gt; &gt; type: qmp<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>
&gt; &gt; type:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>
&gt; &gt; &gt; &gt; command: &#39;{<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;device_add&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,=
<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driv=
er&quot;: &quot;xen-pci-passthrough&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&q=
uot;: &quot;pci-pt-03_00.0&quot;,<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;host=
addr&quot;: &quot;0000:03:00.0&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
454:qmp_next: Socket read error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; reset<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; by peer<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>
&gt; &gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>
&gt; &gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>
&gt; &gt; Connection<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:=
81:libxl__create_pci_backend:<br>
&gt; &gt; &gt; &gt; Creating pci<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; backend<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1737:libxl__ao_progress_report:<br>
&gt; &gt; ao<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<b=
r>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1569:libxl__ao_complete: ao<br>
&gt; &gt; &gt; &gt; 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1541:libxl__ao__destroy: ao<br>
&gt; &gt; &gt; &gt; 0x210c360:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 32=
14<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: total allocations:793 total<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; releases:793<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: current allocations:0 maximum<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; allocations:4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache current size:4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache hits:785 misses:4<br>
&gt; &gt; toobig:4<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# ca=
t qemu-dm-ubuntu-hvm-0.log<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to =
/dev/pts/5 (label serial0)<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>
&gt; &gt; 40030000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>
&gt; &gt; HLT=3D1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>
&gt; &gt; HLT=3D1<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4=
.3-amd64&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>
&gt; &gt; Debian`<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D&quot;quiet splash&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot=
;&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;d=
om0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org">Xen-devel@lists.xen.org</a><br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>

--047d7b5d6612b9b61404f1dade1d--


--===============7369840473100185955==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7369840473100185955==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 13:34:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 13:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC82Y-0001h1-74; Sat, 08 Feb 2014 13:33:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WC82X-0001gw-GJ
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 13:33:49 +0000
Received: from [85.158.137.68:46603] by server-6.bemta-3.messagelabs.com id
	79/BC-09180-C3236F25; Sat, 08 Feb 2014 13:33:48 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391866416!517909!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTM5MTcgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27872 invoked from network); 8 Feb 2014 13:33:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 13:33:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,806,1384300800"; 
	d="asc'?scan'208";a="101054800"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Feb 2014 13:33:35 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Sat, 8 Feb 2014
	08:33:34 -0500
Message-ID: <1391866405.12373.11.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Sat, 8 Feb 2014 14:33:25 +0100
In-Reply-To: <1391850198-3918-1-git-send-email-jtweaver@hawaii.edu>
References: <1391850198-3918-1-git-send-email-jtweaver@hawaii.edu>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Marcus.Granado@eu.citrix.com, george.dunlap@eu.citrix.com,
	Juergen Gross <juergen.gross@ts.fujitsu.com>, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH v2] Xen sched: Fix multiple runqueues in
	credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1017536559958343046=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1017536559958343046==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-REKKXwfoGK7OR3DCIHVz"

--=-REKKXwfoGK7OR3DCIHVz
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-02-07 at 23:03 -1000, Justin Weaver wrote:
> This patch attempts to address the issue of the Xen Credit 2
> Scheduler only creating one vCPU run queue on multiple physical
> processor systems. It should be creating one run queue per
> physical processor.
>
> CPU 0 does not get a starting callback, so it is hard coded to run
> queue 0. At the time this happens, socket information is not
> available for CPU 0.
>
> Socket information is available for each individual CPU when each
> gets the STARTING callback (socket information is also available
> for CPU 0 by that time). Each is assigned to a run queue
> based on its socket.
>
Right. Don't forget to put your 'Signed-off-by: xxx' here, as an
indication that you certify the patch under the "Developer's Certificate
of Origin":

http://wiki.xenproject.org/wiki/Submitting_Xen_Patches#Signing_off_a_patch
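For reference, the tag is just a trailer line at the end of the commit message (the address below is yours from this thread), and `git commit -s` will append it for you:

    Signed-off-by: Justin Weaver <jtweaver@hawaii.edu>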

> ---
> Changes from v1:
> * moved comments to the top of the section in one long comment block
> * collapsed code to improve readability
> * fixed else if indentation style
> * updated comment about the runqueue plan
>
Thanks for all this.

> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 4e68375..3ff46a3 100644

>  /*
> @@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
>  static void init_pcpu(const struct scheduler *ops, int cpu)
>  {

> @@ -1959,15 +1961,26 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>          return;
>      }
>
> -    /* Figure out which runqueue to put it in */
> +    /*
> +     * Choose which run queue to add cpu to based on its socket.
> +     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
> +     * callback and socket information is not yet available for it).
> +     * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
> +     * Else If cpu is on socket 0, add it to a run queue based on the socket
> +     * CPU 0 is actually on.
> +     * Else add it to a run queue based on its own socket.
> +     */
> +
>
I don't think you need this extra blank line here.

>      rqi = 0;
> +    cpu_socket = cpu_to_socket(cpu);
> +    cpu0_socket = cpu_to_socket(0);
>
> -    /* Figure out which runqueue to put it in */
> -    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
> -    if ( cpu == 0 )
> -        rqi = 0;
> +    if ( cpu == 0 || cpu_socket == cpu0_socket )
> +        rqi = 0;
> +    else if ( cpu_socket == 0 )
> +        rqi = cpu0_socket;
>      else
> -        rqi = cpu_to_socket(cpu);
> +        rqi = cpu_socket;
>
This looks much better, thanks.
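Just to make sure we are reading the hunk the same way, here is the mapping it implements, restated as a standalone function (illustration only: `pick_runqueue` is not a real Xen function, and the two socket parameters stand in for `cpu_to_socket(cpu)` and `cpu_to_socket(0)`, which this sketch cannot call):

```c
/*
 * Sketch of the run queue selection from the patch above.
 * cpu_socket and cpu0_socket are stand-ins for cpu_to_socket(cpu)
 * and cpu_to_socket(0).
 */
int pick_runqueue(int cpu, int cpu_socket, int cpu0_socket)
{
    int rqi;

    if ( cpu == 0 || cpu_socket == cpu0_socket )
        rqi = 0;            /* CPU 0's run queue is always run queue 0 */
    else if ( cpu_socket == 0 )
        rqi = cpu0_socket;  /* socket 0 uses the queue CPU 0's socket vacated */
    else
        rqi = cpu_socket;   /* otherwise: one run queue per socket */

    return rqi;
}
```

In other words, run queue 0 and the run queue numbered after CPU 0's actual socket simply swap roles, so there is still exactly one run queue per socket.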

So, what about Juergen's comments about cpupools? In case you don't
know, Xen allows for the pCPUs to be pooled in pretty much any way. A
pool is basically a set of pCPUs, and each pool has its own scheduler.
You can have, for instance, pCPUs 0-7 in pool0, with the credit1
scheduler, and pCPUs 8-15 in pool1, with credit2.

More info about cpupools here:
http://wiki.xenproject.org/wiki/Xen_4.2:_cpupools
http://wiki.xenproject.org/wiki/Cpupools_Howto
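To make the scenario concrete (pool name and CPU range made up for illustration), a second pool like the one above is typically described by a small config file passed to `xl cpupool-create`:

    # pool1.cfg -- hypothetical second pool running credit2
    name  = "pool1"
    sched = "credit2"
    cpus  = ["8-15"]

With your fix, pCPUs 8-15 in such a pool would want per-socket run queues of their own, which is exactly the interaction with the cpupool code that needs checking.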

I agree with Juergen that it would be wrong (perhaps just pointless, but
I lean toward the former) to have pCPUs from different cpupools in the
same runqueue. As he says, right now, one runqueue per cpupool is what
is created... I think we need to check more thoroughly what happens
when, after your bugfix, we start creating cpupools.

Can you look into that and report it here? (don't forget to keep Juergen
in Cc).

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-REKKXwfoGK7OR3DCIHVz
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL2MiUACgkQk4XaBE3IOsS64gCgiJFTWCXgRNPS7lBoagPVu2eE
y4QAn0NMPp2aTcSlOhPqY+An3dsKm1NR
=++iC
-----END PGP SIGNATURE-----

--=-REKKXwfoGK7OR3DCIHVz--


--===============1017536559958343046==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1017536559958343046==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 13:34:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 13:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC82Y-0001h1-74; Sat, 08 Feb 2014 13:33:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WC82X-0001gw-GJ
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 13:33:49 +0000
Received: from [85.158.137.68:46603] by server-6.bemta-3.messagelabs.com id
	79/BC-09180-C3236F25; Sat, 08 Feb 2014 13:33:48 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1391866416!517909!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTM5MTcgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27872 invoked from network); 8 Feb 2014 13:33:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 13:33:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,806,1384300800"; 
	d="asc'?scan'208";a="101054800"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Feb 2014 13:33:35 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4; Sat, 8 Feb 2014
	08:33:34 -0500
Message-ID: <1391866405.12373.11.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Justin Weaver <jtweaver@hawaii.edu>
Date: Sat, 8 Feb 2014 14:33:25 +0100
In-Reply-To: <1391850198-3918-1-git-send-email-jtweaver@hawaii.edu>
References: <1391850198-3918-1-git-send-email-jtweaver@hawaii.edu>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Marcus.Granado@eu.citrix.com, george.dunlap@eu.citrix.com,
	Juergen Gross <juergen.gross@ts.fujitsu.com>, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH v2] Xen sched: Fix multiple runqueues in
	credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1017536559958343046=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1017536559958343046==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-REKKXwfoGK7OR3DCIHVz"

--=-REKKXwfoGK7OR3DCIHVz
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-02-07 at 23:03 -1000, Justin Weaver wrote:
> This patch attempts to address the issue of the Xen Credit 2
> Scheduler only creating one vCPU run queue on multiple physical
> processor systems. It should be creating one run queue per
> physical processor.
> 
> CPU 0 does not get a starting callback, so it is hard coded to run
> queue 0. At the time this happens, socket information is not
> available for CPU 0.
> 
> Socket information is available for each individual CPU when each
> gets the STARTING callback (socket information is also available
> for CPU 0 by that time). Each are assigned to a run queue
> based on their socket.
> 
Right. Don't forget to put your 'Signed-off-by: xxx' here, as an
indication that you certify the patch under the "Developer's Certificate
of Origin":

http://wiki.xenproject.org/wiki/Submitting_Xen_Patches#Signing_off_a_patch

> ---
> Changes from v1:
> * moved comments to the top of the section in one long comment block
> * collapsed code to improve readability
> * fixed else if indentation style
> * updated comment about the runqueue plan
>
Thanks for all this.

> diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
> index 4e68375..3ff46a3 100644

>  /*
> @@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
>  static void init_pcpu(const struct scheduler *ops, int cpu)
>  {

> @@ -1959,15 +1961,26 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>          return;
>      }
>  
> -    /* Figure out which runqueue to put it in */
> +    /*
> +     * Choose which run queue to add cpu to based on its socket.
> +     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
> +     * callback and socket information is not yet available for it).
> +     * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
> +     * Else If cpu is on socket 0, add it to a run queue based on the socket
> +     * CPU 0 is actually on.
> +     * Else add it to a run queue based on its own socket.
> +     */
> +
>
I don't think you need this extra blank line here.

>      rqi = 0;
> +    cpu_socket = cpu_to_socket(cpu);
> +    cpu0_socket = cpu_to_socket(0);
>  
> -    /* Figure out which runqueue to put it in */
> -    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
> -    if ( cpu == 0 )
> -        rqi = 0;
> +    if ( cpu == 0 || cpu_socket == cpu0_socket )
> +        rqi = 0;
> +    else if ( cpu_socket == 0 )
> +        rqi = cpu0_socket;
>      else
> -        rqi = cpu_to_socket(cpu);
> +        rqi = cpu_socket;
>
This looks much better, thanks.

So, what about Juergen's comments about cpupools? In case you don't
know, Xen allows for the pCPUs to be pooled in pretty much any way. A
pool is basically a set of pCPUs, and each pool has its own scheduler.
You can have, for instance, pCPUs 0-7 in pool0, with the credit1
scheduler, and pCPUs 8-15 in pool1, with credit2.

More info about cpupools here:
http://wiki.xenproject.org/wiki/Xen_4.2:_cpupools
http://wiki.xenproject.org/wiki/Cpupools_Howto
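For instance, the "pCPUs 8-15 in pool1, with credit2" split would be described to xl with a small config file along these lines (field names as I remember them; the wikis above are the authority, so double-check there):

```
# pool1.cfg (hypothetical file name)
name  = "pool1"
sched = "credit2"
cpus  = ["8-15"]
```

Such a pool would be created with `xl cpupool-create pool1.cfg` and inspected with `xl cpupool-list`.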

I agree with Juergen that it would be wrong (perhaps just pointless, but
I lean toward the former) to have pCPUs from different cpupools in the
same runqueue. As he says, right now, one runqueue per cpupool is what
is created... I think we need to check more thoroughly what happens
when, after your bugfix, we start creating cpupools.

Can you look into that and report it here? (don't forget to keep Juergen
in Cc).

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-REKKXwfoGK7OR3DCIHVz
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL2MiUACgkQk4XaBE3IOsS64gCgiJFTWCXgRNPS7lBoagPVu2eE
y4QAn0NMPp2aTcSlOhPqY+An3dsKm1NR
=++iC
-----END PGP SIGNATURE-----

--=-REKKXwfoGK7OR3DCIHVz--


--===============1017536559958343046==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1017536559958343046==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:16:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC9dg-0005xH-Bq; Sat, 08 Feb 2014 15:16:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WC9de-0005xC-Ar
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 15:16:14 +0000
Received: from [85.158.137.68:33086] by server-5.bemta-3.messagelabs.com id
	4F/C1-04712-D3A46F25; Sat, 08 Feb 2014 15:16:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1391872570!505831!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24717 invoked from network); 8 Feb 2014 15:16:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 15:16:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,806,1384300800"; d="scan'208";a="99217790"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Feb 2014 15:16:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 10:16:09 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WC9dZ-0006Pz-4W;
	Sat, 08 Feb 2014 15:16:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WC9dY-0003yz-TE;
	Sat, 08 Feb 2014 15:16:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24799-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 15:16:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24799: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7813602821693770668=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7813602821693770668==
Content-Type: text/plain

flight 24799 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24799/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            3 host-build-prep           fail REGR. vs. 24699

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)


--===============7813602821693770668==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7813602821693770668==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:37:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC9xn-0006tY-AT; Sat, 08 Feb 2014 15:37:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WC9xm-0006tT-0v
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 15:37:02 +0000
Received: from [85.158.143.35:64347] by server-1.bemta-4.messagelabs.com id
	30/E8-31661-D1F46F25; Sat, 08 Feb 2014 15:37:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391873817!4158308!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4543 invoked from network); 8 Feb 2014 15:36:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Feb 2014 15:36:59 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s18FarsO014585
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 8 Feb 2014 15:36:54 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s18FaqKX006287
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sat, 8 Feb 2014 15:36:52 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s18FapWc005267; Sat, 8 Feb 2014 15:36:51 GMT
MIME-Version: 1.0
Message-ID: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
Date: Sat, 8 Feb 2014 07:36:51 -0800 (PST)
From: Konrad Wilk <konrad.wilk@oracle.com>
To: <mikeneiderhauser@gmail.com>
X-Mailer: Zimbra on Oracle Beehive
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3107846946465527579=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3107846946465527579==
Content-Type: multipart/alternative;
 boundary="__13918738111841123abhmp0004.oracle.com"

--__13918738111841123abhmp0004.oracle.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline


----- mikeneiderhauser@gmail.com wrote:
>
> I followed this site ( http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions )
> and then followed ( http://wiki.xen.org/wiki/Compiling_Xen_From_Source )
>


Ah, so you are looking for "xen_pt: Fix passthrough of device with ROM",
which is not in Xen 4.4-rc3 but in master.


One thing you can do is:


cd xen/tools/qemu-xen-dir
git fetch upstream
git checkout origin/master
[you should see: "HEAD is now at 027c412... configure: Disable libtool if
-fPIE does not work with it (bug #1257099)"]


Go back to the main xen directory:
cd ../../../
./configure
make
make install


and you should now be running a newer version of QEMU with the fix.
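
As an aside, the "Could not obtain handle on privileged command interface" error quoted further down in this thread usually means the Xen control interface is not set up in dom0 yet. A rough sketch of a check, not taken from this thread (it assumes a Linux dom0 where the privileged interface appears as /proc/xen/privcmd; exact paths and initscript names vary by distro and Xen version):

```shell
# Rough sketch only -- assumption: Linux dom0 with the usual /proc/xen layout.
# Paths and initscript names (e.g. xencommons) are distro-dependent.
if [ -e /proc/xen/privcmd ]; then
    echo "privcmd present: xl should be able to open the libxc handle"
else
    echo "privcmd missing: run the Xen initscripts (e.g. xencommons) to mount xenfs first"
fi
```

On a host where the initscripts have not been run, the second branch fires, which matches the `xl list` failure reported later in the thread.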




>
> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
> Had to take some additional steps here to get all of the libs
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> # apt-get install libaio-dev
> # apt-get install libpixman-1-dev
> ./configure
> make dist
> make install
>=20
>=20
>=20
> On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk < konrad.wilk@oracle.com > wrote:
>


> On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > I did not use the patch. I was assuming it was already patched given the
> > previous email. Is the patch for the qemu source or the xen source?
>
> It is for QEMU, but you are right - it should have been part
> of QEMU if you got the latest version of Xen-unstable.
>
> You didn't use some specific tag but just 'staging'?
>
>
>
> >
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com > wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > > >
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > root@fiat:~# xl list
> > > > Name            ID  Mem  VCPUs  State  Time(s)
> > > > Domain-0         0  1024      1  r-----     15.2
> > > > ubuntu-hvm-0     1  1025      1  ------      0.0
> > > >
> > > > (XEN) Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN) Dom0 alloc.: 0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > > (XEN) Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN) Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN) Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN) Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN) Start info: ffffffff85519000->ffffffff855194b4
> > > > (XEN) Page tables: ffffffff8551a000->ffffffff85549000
> > > > (XEN) Boot stack: ffffffff85549000->ffffffff8554a000
> > > > (XEN) TOTAL: ffffffff80000000->ffffffff85800000
> > > > (XEN) ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
> > > > (XEN) PCI add device 0000:00:1e.0
> > > > (XEN) PCI add device 0000:00:1f.0
> > > > (XEN) PCI add device 0000:00:1f.2
> > > > (XEN) PCI add device 0000:00:1f.3
> > > > (XEN) PCI add device 0000:01:00.0
> > > > (XEN) PCI add device 0000:02:02.0
> > > > (XEN) PCI add device 0000:02:04.0
> > > > (XEN) PCI add device 0000:03:00.0
> > > > (XEN) PCI add device 0000:03:00.1
> > > > (XEN) PCI add device 0000:04:00.0
> > > > (XEN) PCI add device 0000:04:00.1
> > > > (XEN) PCI add device 0000:05:00.0
> > > > (XEN) PCI add device 0000:05:00.1
> > > > (XEN) PCI add device 0000:06:03.0
> > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > > (d1) HVM Loader
> > > > (d1) Detected Xen v4.4-rc2
> > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > (d1) System requested SeaBIOS
> > > > (d1) CPU speed is 3093 MHz
> > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > >
> > > >
> > > > Excerpt from /var/log/xen/*
> > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@oracle.com > wrote:
> > > >
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install Xen 4.4 RC3 on my host, however I am
> > > > > > getting the error:
> > > > > >
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've searched Google for this and an article appears, but it is not the same
> > > > > > (as far as I can tell). Running any xl command generates a similar error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > > but they are usually put in /etc/init.d/rc.d/xen*
> > > > >
> > > > >
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > > mikeneiderhauser@gmail.com > wrote:
> > > > > >
> > > > > > > Much. Do I need to install from src or is there a package I can install?
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > > konrad.wilk@oracle.com > wrote:
> > > > > > >
> > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > >> > I did not. I do not have the toolchain installed. I may have time
> > > > > > >> > later today to try the patch. Are there any specific instructions on
> > > > > > >> > how to patch the src, compile and install?
> > > > > > >>
> > > > > > >> There actually should be a new version of Xen 4.4-rcX which will
> > > > > > >> have the fix. That might be easier for you?
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > > >> > konrad.wilk@oracle.com > wrote:
> > > > > > >> >
> > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > > >> > > > Hi all,
> > > > > > >> > > >
> > > > > > >> > > > I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC)
> > > > > > >> > > > to a HVM. I have been attempting to resolve this issue on the xen-users
> > > > > > >> > > > list, but it was advised to post this issue to this list. (Initial Message -
> > > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > > >> > > >
> > > > > > >> > > > The machine I am using as the host is a Dell PowerEdge server with a Xeon
> > > > > > >> > > > E31220 with 4GB of RAM.
> > > > > > >> > > >
> > > > > > >> > > > The possible bug is the following:
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > >> > > > ....
> > > > > > >> > > >
> > > > > > >> > > > I believe it may be similar to this thread
> > > > > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > >> > >
> > > > > > >> > > Did you try the patch?
> > > > > > >> > > >
> > > > > > >> > > > Please let me know if you need any additional information.
> > > > > > >> > > >
> > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > >> > > > Regards
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > >> > > >
> > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > >> > > > builder='hvm'
> > > > > > >> > > > device_model='qemu-dm'
> > > > > > >> > > > memory=1024
> > > > > > >> > > > vcpus=2
> > > > > > >> > > >
> > > > > > >> > > > # Virtual Interface
> > > > > > >> > > > # Network bridge to USB NIC
> > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > >> > > >
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > >> > > > #pci_permissive=1
> > > > > > >> > > >
> > > > > > >> > > > # All PCI Devices
> > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Broadcom 2x1G NIC
> > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > >
> > > > > > >> > > > # HVM Disks
> > > > > > >> > > > # Hard disk only
> > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > >> > > > boot="c"
> > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > >> > > >
> > > > > > >> > > > # Hard disk with ISO
> > > > > > >> > > > # Boot from ISO first ('d')
> > > > > > >> > > > #boot="d"
> > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w', 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > >> > > >
> > > > > > >> > > > # ACPI Enable
> > > > > > >> > > > acpi=1
> > > > > > >> > > > # HVM Event Modes
> > > > > > >> > > > on_poweroff='destroy'
> > > > > > >> > > > on_reboot='restart'
> > > > > > >> > > > on_crash='restart'
> > > > > > >> > > >
> > > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > > >> > > > sdl=0
> > > > > > >> > > > serial='pty'
> > > > > > >> > > >
> > > > > > >> > > > # VNC Configuration
> > > > > > >> > > > # Only reachable from localhost
> > > > > > >> > > > vnc=1
> > > > > > >> > > > vnclisten="0.0.0.0"
> > > > > > >> > > > vncpasswd=""
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > Copied for xen-users list
> > > > > > >> > > > ###########################################################
> > > > > > >> > > >
> > > > > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I rebooted the host and assigned the PCI devices to pciback. The output
> > > > > > >> > > > looks like:
> > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > > >> > > > Calling function pciback_dev for:
> > > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > > >> > > > 0000:03:00.0
> > > > > > >> > > > 0000:03:00.1
> > > > > > >> > > > 0000:04:00.0
> > > > > > >> > > > 0000:04:00.1
> > > > > > >> > > > 0000:05:00.0
> > > > > > >> > > > 0000:05:00.1
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > > >> > > > WARNING: Use "device_model_override" instead if you really want a non-default device_model
> > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> > > > > > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> > > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
> > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > >> > > > Loader: 0000000000100000->00000000001a69a4
> > > > > > >> > > > Modules: 0000000000000000->0000000000000000
> > > > > > >> > > > TOTAL: 0000000000000000->000000003f800000
> > > > > > >> > > > ENTRY ADDRESS: 0000000000100608
> > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > >> > > > 4KB PAGES: 0x0000000000000200
> > > > > > >> > > > 2MB PAGES: 0x00000000000001fb
> > > > > > >> > > > 1GB PAGES: 0x0000000000000000
> > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -xen-domid
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -chardev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -mon
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -name
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: ubuntu-hvm-0
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -vnc
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 0.0.0.0:0,to=99
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: isa-fdc.driveA=
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -serial
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: pty
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -vga
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: cirrus
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: vga.vram_size_mb=8
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -boot
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: order=c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -smp
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 2,maxcpus=2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -device
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -netdev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -M
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: xenfv
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -m
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 1016
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -drive
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "qmp_capabilities",
> > > > > > >> > > > "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "query-chardev",
> > > > > > >> > > > "id": 2
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "change",
> > > > > > >> > > > "id": 3,
> > > > > > >> > > > "arguments": {
> > > > > > >> > > > "device": "vnc",
> > > > > > >> > > > "target": "password",
> > > > > > >> > > > "arg": ""
> > > > > > >> > > > }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "query-vnc",
> > > > > > >> > > > "id": 4
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calli=
ng=20
> > > hotplug=20
> > > > > > >> script:=20
> > > > > > >> > > > /etc/xen/scripts/vif-bridge add=20
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:=
=20
> > > connected=20
> > > > > to=20
> > > > > > >> > > > /var/run/xen/qmp-libxl-2=20
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: mes=
sage=20
> > > > > type: qmp=20
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next q=
mp=20
> > > > > command: '{=20
> > > > > > >> > > > "execute": "qmp_capabilities",=20
> > > > > > >> > > > "id": 1=20
> > > > > > >> > > > }=20
> > > > > > >> > > > '=20
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: mes=
sage=20
> > > type:=20
> > > > > > >> return=20
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next q=
mp=20
> > > > > command: '{=20
> > > > > > >> > > > "execute": "device_add",=20
> > > > > > >> > > > "id": 2,=20
> > > > > > >> > > > "arguments": {=20
> > > > > > >> > > > "driver": "xen-pci-passthrough",=20
> > > > > > >> > > > "id": "pci-pt-03_00.0",=20
> > > > > > >> > > > "hostaddr": "0000:03:00.0"=20
> > > > > > >> > > > }=20
> > > > > > >> > > > }=20
> > > > > > >> > > > '=20
> > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read er=
ror:=20
> > > > > > >> Connection=20
> > > > > > >> > > reset=20
> > > > > > >> > > > by peer=20
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:=
=20
> > > Connection=20
> > > > > > >> error:=20
> > > > > > >> > > > Connection refused=20
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:=
=20
> > > Connection=20
> > > > > > >> error:=20
> > > > > > >> > > > Connection refused=20
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:=
=20
> > > Connection=20
> > > > > > >> error:=20
> > > > > > >> > > > Connection refused=20
> > > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend=
:=20
> > > > > Creating pci=20
> > > > > > >> > > backend=20
> > > > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_re=
port:=20
> > > ao=20
> > > > > > >> 0x210c360:=20
> > > > > > >> > > > progress report: ignored=20
> > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: a=
o=20
> > > > > 0x210c360:=20
> > > > > > >> > > > complete, rc=3D0=20
> > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: a=
o=20
> > > > > 0x210c360:=20
> > > > > > >> > > destroy=20
> > > > > > >> > > > Daemon running with PID 3214=20
> > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 tot=
al=20
> > > > > > >> releases:793=20
> > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0 max=
imum=20
> > > > > > >> allocations:4=20
> > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4=20
> > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4=
=20
> > > toobig:4=20
> > > > > > >> > > >=20
> > > > > > >> > > > ######################################################=
#####=20
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log=
=20
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)=
=20
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at=
=20
> > > 40030000=20
> > > > > > >> > > > CPU #0:=20
> > > > > > >> > > > EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D000=
00633=20
> > > > > > >> > > > ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D000=
00000=20
> > > > > > >> > > > EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=3D0=
 A20=3D1 SMM=3D0=20
> > > HLT=3D1=20
> > > > > > >> > > > ES =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > CS =3Df000 ffff0000 0000ffff 00009b00=20
> > > > > > >> > > > SS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > DS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > FS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > GS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > LDT=3D0000 00000000 0000ffff 00008200=20
> > > > > > >> > > > TR =3D0000 00000000 0000ffff 00008b00=20
> > > > > > >> > > > GDT=3D 00000000 0000ffff=20
> > > > > > >> > > > IDT=3D 00000000 0000ffff=20
> > > > > > >> > > > CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D000=
00000=20
> > > > > > >> > > > DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D000=
00000=20
> > > > > > >> > > > DR6=3Dffff0ff0 DR7=3D00000400=20
> > > > > > >> > > > EFER=3D0000000000000000=20
> > > > > > >> > > > FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D00001f=
80=20
> > > > > > >> > > > FPR0=3D0000000000000000 0000 FPR1=3D0000000000000000 0=
000=20
> > > > > > >> > > > FPR2=3D0000000000000000 0000 FPR3=3D0000000000000000 0=
000=20
> > > > > > >> > > > FPR4=3D0000000000000000 0000 FPR5=3D0000000000000000 0=
000=20
> > > > > > >> > > > FPR6=3D0000000000000000 0000 FPR7=3D0000000000000000 0=
000=20
> > > > > > >> > > > XMM00=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM01=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM02=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM03=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM04=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM05=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM06=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM07=3D00000000000000000000000000000000=20
> > > > > > >> > > > CPU #1:=20
> > > > > > >> > > > EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D000=
00633=20
> > > > > > >> > > > ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D000=
00000=20
> > > > > > >> > > > EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=3D0=
 A20=3D1 SMM=3D0=20
> > > HLT=3D1=20
> > > > > > >> > > > ES =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > CS =3Df000 ffff0000 0000ffff 00009b00=20
> > > > > > >> > > > SS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > DS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > FS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > GS =3D0000 00000000 0000ffff 00009300=20
> > > > > > >> > > > LDT=3D0000 00000000 0000ffff 00008200=20
> > > > > > >> > > > TR =3D0000 00000000 0000ffff 00008b00=20
> > > > > > >> > > > GDT=3D 00000000 0000ffff=20
> > > > > > >> > > > IDT=3D 00000000 0000ffff=20
> > > > > > >> > > > CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D000=
00000=20
> > > > > > >> > > > DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D000=
00000=20
> > > > > > >> > > > DR6=3Dffff0ff0 DR7=3D00000400=20
> > > > > > >> > > > EFER=3D0000000000000000=20
> > > > > > >> > > > FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D00001f=
80=20
> > > > > > >> > > > FPR0=3D0000000000000000 0000 FPR1=3D0000000000000000 0=
000=20
> > > > > > >> > > > FPR2=3D0000000000000000 0000 FPR3=3D0000000000000000 0=
000=20
> > > > > > >> > > > FPR4=3D0000000000000000 0000 FPR5=3D0000000000000000 0=
000=20
> > > > > > >> > > > FPR6=3D0000000000000000 0000 FPR7=3D0000000000000000 0=
000=20
> > > > > > >> > > > XMM00=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM01=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM02=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM03=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM04=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM05=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM06=3D00000000000000000000000000000000=20
> > > > > > >> > > > XMM07=3D00000000000000000000000000000000=20
> > > > > > >> > > >=20
> > > > > > >> > > > ######################################################=
#####=20
> > > > > > >> > > > /etc/default/grub=20
> > > > > > >> > > > GRUB_DEFAULT=3D"Xen 4.3-amd64"=20
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=3D0=20
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue=20
> > > > > > >> > > > GRUB_TIMEOUT=3D10=20
> > > > > > >> > > > GRUB_DISTRIBUTOR=3D`lsb_release -i -s 2> /dev/null || =
echo=20
> > > Debian`=20
> > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT=3D"quiet splash"=20
> > > > > > >> > > > GRUB_CMDLINE_LINUX=3D""=20
> > > > > > >> > > > # biosdevname=3D0=20
> > > > > > >> > > > GRUB_CMDLINE_XEN=3D"dom0_mem=3D1024M dom0_max_vcpus=3D=
1"=20
> > > > > > >> > >=20
> > > > > > >> > > > _______________________________________________=20
> > > > > > >> > > > Xen-devel mailing list=20
> > > > > > >> > > > Xen-devel@lists.xen.org=20
> > > > > > >> > > > http://lists.xen.org/xen-devel=20
> > > > > > >> > >=20
> > > > > > >> > >=20
> > > > > > >>=20
> > > > > > >=20
> > > > > > >=20
> > > > >=20
> > >=20
>=20
>
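[Editorial note: the quoted log above shows libxl driving the QEMU device model over its QMP socket. Each command is a JSON object carrying an "execute" key, an "id" that the eventual "return" message echoes back, and optional "arguments". As a rough illustration only (this is not libxl's code, and the sample reply below is invented), the framing can be sketched in Python:]

```python
import json

def make_qmp_command(execute, cmd_id, arguments=None):
    # QMP commands, as seen in the libxl log above, are JSON objects with an
    # "execute" key, an "id" used to match the reply, and optional "arguments".
    cmd = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# Rebuild the passthrough request that appears just before the socket error.
wire = make_qmp_command("device_add", 2, {
    "driver": "xen-pci-passthrough",
    "id": "pci-pt-03_00.0",
    "hostaddr": "0000:03:00.0",
})

# A success reply is a "return" object echoing the command id; this sample
# reply is made up for illustration.
reply = json.loads('{"return": {}, "id": 2}')
assert reply["id"] == json.loads(wire)["id"]
```

[No "return" ever arrives for the device_add here: the "Connection reset by peer" error is consistent with the device model crashing ("failed to populate ram"), after which libxl's reconnect attempts report "Connection refused".]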

----- mikeneiderhauser@gmail.com wrote:
> I followed this site (http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)

Ah, so you are looking for the "xen_pt: Fix passthrough of device with ROM" fix,
which is not in Xen 4.4-rc3 but in master.

One thing you can do is:

cd xen/tools/qemu-xen-dir
git fetch upstream
git checkout origin/master

[you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]

Go back to the main xen directory:

cd ../../../
./configure
make
make install

and you should now be using a newer version of QEMU with the fix.

> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git

> Had to take some additional steps here to get all of the libs
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> apt-get install libaio-dev
> apt-get install libpixman-1-dev
>
> ./configure
> make dist
> make install
> On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > > I did not use the patch. I was assuming it was already patched given
> > > previous email. Is the patch for qemu source or xen source?
> >
> > It is for QEMU, but you are right - it should have been part
> > of QEMU if you got the latest version of Xen-unstable.
> >
> > You didn't use some specific tag but just 'staging'?
> >
> > >
> > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > >
> > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > > Ok. I ran the initscripts and now xl works.
> > > > >
> > > > > However, I still see the same behavior as before:
> > > >
> > > > Did you use the patch that was mentioned in the URL?
> > > >
> > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset
> > > > > by peer
> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > > Connection refused
> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > > Connection refused
> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > > Connection refused
> > > > > root@fiat:~# xl list
> > > > > Name                              ID   Mem VCPUs State Time(s)
> > > > > Domain-0                           0  1024     1     r-----    15.2
> > > > > ubuntu-hvm-0                       1  1025     1     ------     0.0
> > > > >
> > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> > > > > be allocated)
> > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > > (XEN) Std. Loglevel: All
> > > > > (XEN) Guest Loglevel: All
> > > > > (XEN) Xen is relinquishing VGA console.
> > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> > > > > to Xen)
> > > > > (XEN) Freed 260kB init memory.
> > > > > (XEN) PCI add device 0000:00:00.0
> > > > > (XEN) PCI add device 0000:00:01.0
> > > > > (XEN) PCI add device 0000:00:1a.0
> > > > > (XEN) PCI add device 0000:00:1c.0
> > > > > (XEN) PCI add device 0000:00:1d.0
> > > > > (XEN) PCI add device 0000:00:1e.0
> > > > > (XEN) PCI add device 0000:00:1f.0
> > > > > (XEN) PCI add device 0000:00:1f.2
> > > > > (XEN) PCI add device 0000:00:1f.3
> > > > > (XEN) PCI add device 0000:01:00.0
> > > > > (XEN) PCI add device 0000:02:02.0
> > > > > (XEN) PCI add device 0000:02:04.0
> > > > > (XEN) PCI add device 0000:03:00.0
> > > > > (XEN) PCI add device 0000:03:00.1
> > > > > (XEN) PCI add device 0000:04:00.0
> > > > > (XEN) PCI add device 0000:04:00.1
> > > > > (XEN) PCI add device 0000:05:00.0
> > > > > (XEN) PCI add device 0000:05:00.1
> > > > > (XEN) PCI add device 0000:06:03.0
> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> > > > > (200 of 1024)
> > > > > (d1) HVM Loader
> > > > > (d1) Detected Xen v4.4-rc2
> > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > > (d1) System requested SeaBIOS
> > > > > (d1) CPU speed is 3093 MHz
> > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > > >
> > > > > Excerpt from /var/log/xen/*
> > > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > > >
> > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > >
> > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > > > getting the error:
> > > > > > >
> > > > > > > root@fiat:~/git/xen# xl list
> > > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No
> > > > > > > such file or directory): Internal error
> > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> > > > > > > file or directory
> > > > > > > cannot init xl context
> > > > > > >
> > > > > > > I've googled for this and an article appears, but it is not the same
> > > > > > > (as far as I can tell). Running any xl command generates a similar error.
> > > > > > >
> > > > > > > What can I do to fix this?
> > > > > >
> > > > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > > > but they are usually put in /etc/init.d/rc.d/xen*
> > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > > > > > >
> > > > > > > > Much. Do I need to install from src or is there a package I can install.
> > > > > > > >
> > > > > > > > Regards
> > > > > > > >
> > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > >
> > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > > >> > I did not. I do not have the toolchain installed. I may have time later
> > > > > > > >> > today to try the patch. Are there any specific instructions on how to
> > > > > > > >> > patch the src, compile and install?
> > > > > > > >>
> > > > > > > >> There actually should be a new version of Xen 4.4-rcX which will have the
> > > > > > > >> fix. That might be easier for you?
> > > > > > > >> >
> > > > > > > >> > Regards
> > > > > > > >> >
> > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > >> >
> > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > > > >> > > > Hi all,
> > > > > > > >> > > >
> > > > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC) to a
> > > > > > > >> > > > HVM. I have been attempting to resolve this issue on the xen-users list,
> > > > > > > >> > > > but it was advised to post this issue to this list. (Initial Message -
> > > > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > > > >> > > >
> > > > > > > >> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > > > > >> > > > E31220 with 4GB of ram.
> > > > > > > >> > > >
> > > > > > > >> > > > The possible bug is the following:
> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > > >> > > > ....
> > > > > > > >> > > >
> > > > > > > >> > > > I believe it may be similar to this thread
> > > > > > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > > >> > > >
> > > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > > >> > >
> > > > > > > >> > > Did you try the patch?
> > > > > > > >> > > >
> > > > > > > >> > > > Please let me know if you need any additional information.
> > > > > > > >> > > >
> > > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > > >> > > > Regards
> > > > > > > >> > > >
> > > > > > > >> > > > ###########################################################
> > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > >> > > > ###########################################################
> > > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > > >> > > >
> > > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > > >> > > > builder='hvm'
> > > > > > > >> > > > device_model='qemu-dm'
> > > > > > > >> > > > memory=1024
> > > > > > > >> > > > vcpus=2
> > > > > > > >> > > >
> > > > > > > >> > > > # Virtual Interface
> > > > > > > >> > > > # Network bridge to USB NIC
> > > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > > >> > > >
> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > > >> > > > #pci_permissive=1
> > > > > > > >> > > >
> > > > > > > >> > > > # All PCI Devices
> > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > > > >> > > >
> > > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > > >> > > >
> > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > > >> > > >
> > > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > > >> > > >
> > > > > > > >> > > > # Broadcom 2x1G NIC
> > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > > >> > > >
> > > > > > > >> > > > # HVM Disks
> > > > > > > >> > > > # Hard disk only
> > > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > > >> > > > boot="c"
> > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > > >> > > >
> > > > > > > >> > > > # Hard disk with ISO
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from ISO first ('d'=
)<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #boot=3D"d"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #disk=3D['phy:/dev/ubuntu-=
vg/ubuntu-hvm-0,hda,w',<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 'file:/root/ubuntu-12.04.3=
-server-amd64.iso,hdc:cdrom,r']<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # ACPI Enable<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; acpi=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Event Modes<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_poweroff=3D'destroy'<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_reboot=3D'restart'<br>&=
gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_crash=3D'restart'<br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Serial Console Configura=
tion (Xen Console)<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; sdl=3D0<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; serial=3D'pty'<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # VNC Configuration<br>&gt=
;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Only reacable from local=
host<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnc=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnclisten=3D"0.0.0.0"<br>&=
gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vncpasswd=3D""<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Copied for xen-users list<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
> It appears that it cannot obtain the RAM mapping for this PCI device.
>
>
> I rebooted the host and re-ran the script that assigns the PCI devices to pciback. The output looks like:
> root@fiat:~# ./dev_mgmt.sh
> Loading Kernel Module 'xen-pciback'
> Calling function pciback_dev for:
> PCI DEVICE 0000:03:00.0
> Unbinding 0000:03:00.0 from igb
> Binding 0000:03:00.0 to pciback
>
> PCI DEVICE 0000:03:00.1
> Unbinding 0000:03:00.1 from igb
> Binding 0000:03:00.1 to pciback
>
> PCI DEVICE 0000:04:00.0
> Unbinding 0000:04:00.0 from igb
> Binding 0000:04:00.0 to pciback
>
> PCI DEVICE 0000:04:00.1
> Unbinding 0000:04:00.1 from igb
> Binding 0000:04:00.1 to pciback
>
> PCI DEVICE 0000:05:00.0
> Unbinding 0000:05:00.0 from bnx2
> Binding 0000:05:00.0 to pciback
>
> PCI DEVICE 0000:05:00.1
> Unbinding 0000:05:00.1 from bnx2
> Binding 0000:05:00.1 to pciback
>
> Listing PCI Devices Available to Xen
> 0000:03:00.0
> 0000:03:00.1
> 0000:04:00.0
> 0000:04:00.1
> 0000:05:00.0
> 0000:05:00.1
>
> ###########################################################
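The dev_mgmt.sh script itself is not included in the thread. Assuming it uses the standard xen-pciback sysfs interface (the usual way to detach a device from its native driver and hand it to pciback), the unbind/bind steps it reports would correspond roughly to this sketch; the function name and the PCI_ROOT override are illustrative only:

```shell
#!/bin/sh
# Hypothetical sketch of the sysfs steps behind the dev_mgmt.sh output above;
# the actual script is not shown in the thread. PCI_ROOT defaults to the real
# sysfs tree and can be pointed at a scratch directory for a dry run.
PCI_ROOT="${PCI_ROOT:-/sys/bus/pci}"

pciback_dev() {
    bdf="$1"
    echo "PCI DEVICE $bdf"
    # Detach the device from its current driver (igb/bnx2 in the log), if bound.
    if [ -w "$PCI_ROOT/devices/$bdf/driver/unbind" ]; then
        echo "$bdf" > "$PCI_ROOT/devices/$bdf/driver/unbind"
    fi
    # Register the slot with pciback, then bind, so Xen can assign the device.
    echo "$bdf" > "$PCI_ROOT/drivers/pciback/new_slot"
    echo "$bdf" > "$PCI_ROOT/drivers/pciback/bind"
}
```

After binding, `xl pci-assignable-list` should report the same BDFs the script prints.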
> root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a non-default device_model
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->00000000001a69a4
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100608
> xc: info: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "change",
>     "id": 3,
>     "arguments": {
>         "device": "vnc",
>         "target": "password",
>         "arg": ""
>     }
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 4
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-03_00.0",
>         "hostaddr": "0000:03:00.0"
>     }
> }
> '
> libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1569:libxl__ao_complete: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1541:libxl__ao__destroy: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 32=
14<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: total allocations:793 total<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; releases:793<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: current allocations:0 maximum<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; allocations:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache current size:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache hits:785 misses:4<br>&gt;=20
&gt; &gt; toobig:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# ca=
t qemu-dm-ubuntu-hvm-0.log<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to =
/dev/pts/5 (label serial0)<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>&gt;=20
&gt; &gt; 40030000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D"Xen 4.3-am=
d64"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>&gt;=20
&gt; &gt; Debian`<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D"quiet splash"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D""<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D"dom0_m=
em=3D1024M dom0_max_vcpus=3D1"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org" target=3D"_blank">Xen-devel@lists.xen.org</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
</div></div></blockquote></div><br>&gt; </div>
</div></body></html>
--__13918738111841123abhmp0004.oracle.com--


--===============3107846946465527579==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3107846946465527579==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:37:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WC9xn-0006tY-AT; Sat, 08 Feb 2014 15:37:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WC9xm-0006tT-0v
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 15:37:02 +0000
Received: from [85.158.143.35:64347] by server-1.bemta-4.messagelabs.com id
	30/E8-31661-D1F46F25; Sat, 08 Feb 2014 15:37:01 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1391873817!4158308!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4543 invoked from network); 8 Feb 2014 15:36:59 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Feb 2014 15:36:59 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s18FarsO014585
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 8 Feb 2014 15:36:54 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s18FaqKX006287
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Sat, 8 Feb 2014 15:36:52 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s18FapWc005267; Sat, 8 Feb 2014 15:36:51 GMT
MIME-Version: 1.0
Message-ID: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
Date: Sat, 8 Feb 2014 07:36:51 -0800 (PST)
From: Konrad Wilk <konrad.wilk@oracle.com>
To: <mikeneiderhauser@gmail.com>
X-Mailer: Zimbra on Oracle Beehive
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3107846946465527579=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3107846946465527579==
Content-Type: multipart/alternative;
 boundary="__13918738111841123abhmp0004.oracle.com"

--__13918738111841123abhmp0004.oracle.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline


----- mikeneiderhauser@gmail.com wrote:
>
> I followed this site ( http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions )
> and then followed ( http://wiki.xen.org/wiki/Compiling_Xen_From_Source ).
>

Ah, so you are looking for the "xen_pt: Fix passthrough of device with ROM" patch, which is not in Xen 4.4-rc3 but is in master.


One thing you can do is:

cd xen/tools/qemu-xen-dir
git fetch origin
git checkout origin/master
[you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]

Go back to the main xen directory:
cd ../../
./configure
make
make install

and you should now be using a newer version of QEMU with the fix.
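The steps above can be collected into one script. This is a sketch, not an official recipe: the tree location (`XEN_DIR`) and the remote name `origin` are assumptions about a stock xen.git checkout whose build has already cloned `tools/qemu-xen-dir`, so adjust both for your setup.

```shell
#!/bin/sh
# Sketch of the QEMU-refresh steps above (assumptions: XEN_DIR points
# at your Xen tree, and qemu-xen was cloned with its remote named
# "origin" by the Xen build system).
set -e
XEN_DIR=${XEN_DIR:-"$HOME/xen"}

cd "$XEN_DIR/tools/qemu-xen-dir"
git fetch origin              # grab the latest qemu-xen commits
git checkout origin/master    # detached HEAD at current master

cd "$XEN_DIR"
./configure
make
make install                  # reinstall the tools with the newer QEMU
```

Running `git log -1 --oneline` in qemu-xen-dir afterwards is a quick way to confirm you are on the commit you expect before rebuilding.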




>
> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
> Had to take some additional steps here to get all of the libs:
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> # apt-get install libaio-dev
> # apt-get install libpixman-1-dev
> ./configure
> make dist
> make install
>
>
>
> On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>


> On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > I did not use the patch. I was assuming it was already patched given
> > previous email. Is the patch for qemu source or xen source?
>
> It is for QEMU, but you are right - it should have been part
> of QEMU if you got the latest version of Xen-unstable.
>
> You didn't use some specific tag but just 'staging'?
>
>
>
> >
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > > >
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > root@fiat:~# xl list
> > > > Name                ID   Mem  VCPUs  State  Time(s)
> > > > Domain-0             0  1024      1  r-----    15.2
> > > > ubuntu-hvm-0         1  1025      1  ------     0.0
> > > >
> > > > (XEN) Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN) Dom0 alloc.: 0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > > (XEN) Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN) Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN) Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN) Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN) Start info: ffffffff85519000->ffffffff855194b4
> > > > (XEN) Page tables: ffffffff8551a000->ffffffff85549000
> > > > (XEN) Boot stack: ffffffff85549000->ffffffff8554a000
> > > > (XEN) TOTAL: ffffffff80000000->ffffffff85800000
> > > > (XEN) ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
> > > > (XEN) PCI add device 0000:00:1e.0
> > > > (XEN) PCI add device 0000:00:1f.0
> > > > (XEN) PCI add device 0000:00:1f.2
> > > > (XEN) PCI add device 0000:00:1f.3
> > > > (XEN) PCI add device 0000:01:00.0
> > > > (XEN) PCI add device 0000:02:02.0
> > > > (XEN) PCI add device 0000:02:04.0
> > > > (XEN) PCI add device 0000:03:00.0
> > > > (XEN) PCI add device 0000:03:00.1
> > > > (XEN) PCI add device 0000:04:00.0
> > > > (XEN) PCI add device 0000:04:00.1
> > > > (XEN) PCI add device 0000:05:00.0
> > > > (XEN) PCI add device 0000:05:00.1
> > > > (XEN) PCI add device 0000:06:03.0
> > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > > (d1) HVM Loader
> > > > (d1) Detected Xen v4.4-rc2
> > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > (d1) System requested SeaBIOS
> > > > (d1) CPU speed is 3093 MHz
> > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > >
> > > >
> > > > Excerpt from /var/log/xen/*
> > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > >
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > > getting the error:
> > > > > >
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No
> > > > > > such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> > > > > > file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've google searched for this and an article appears, but is not the same
> > > > > > (as far as I can tell). Running any xl command generates a similar error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > > but they are usually put in /etc/init.d/rc.d/xen*
> > > > >
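[Editor's note: for Xen of this era the daemons that xl depends on (xenstored and friends) are usually started by the xencommons script. A minimal sketch, assuming the conventional script names and paths, which vary by distro:]

```shell
# Start the daemons xl needs; without xenstored running, xl fails with
# "Could not obtain handle on privileged command interface".
# The script paths below are the conventional ones, not universal.
/etc/init.d/xencommons start

# Optional: the helper that auto-starts/saves domains at boot/shutdown.
/etc/init.d/xendomains start

# Sanity check: this should now list Domain-0 instead of erroring out.
xl list
```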
> > > > >
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > > > > >
> > > > > > > Much. Do I need to install from src or is there a package I can install?
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > >
> > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > >> > I did not. I do not have the toolchain installed. I may have time later
> > > > > > >> > today to try the patch. Are there any specific instructions on how to
> > > > > > >> > patch the src, compile and install?
> > > > > > >>
> > > > > > >> There actually should be a new version of Xen 4.4-rcX which will have the
> > > > > > >> fix. That might be easier for you?
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > >> >
> > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > > >> > > > Hi all,
> > > > > > >> > > >
> > > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC) to a
> > > > > > >> > > > HVM. I have been attempting to resolve this issue on the xen-users list,
> > > > > > >> > > > but it was advised to post this issue to this list. (Initial Message -
> > > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > > >> > > >
> > > > > > >> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > > > >> > > > E31220 with 4GB of ram.
> > > > > > >> > > >
> > > > > > >> > > > The possible bug is the following:
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > >> > > > ....
> > > > > > >> > > >
> > > > > > >> > > > I believe it may be similar to this thread
> > > > > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > >> > > >
> > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > >> > >
> > > > > > >> > > Did you try the patch?
> > > > > > >> > > >
> > > > > > >> > > > Please let me know if you need any additional information.
> > > > > > >> > > >
> > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > >> > > > Regards
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > >> > > >
> > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > >> > > > builder='hvm'
> > > > > > >> > > > device_model='qemu-dm'
> > > > > > >> > > > memory=1024
> > > > > > >> > > > vcpus=2
> > > > > > >> > > >
> > > > > > >> > > > # Virtual Interface
> > > > > > >> > > > # Network bridge to USB NIC
> > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > >> > > >
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > >> > > > #pci_permissive=1
> > > > > > >> > > >
> > > > > > >> > > > # All PCI Devices
> > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Broadcom 2x1G NIC
> > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > >
> > > > > > >> > > > # HVM Disks
> > > > > > >> > > > # Hard disk only
> > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > >> > > > boot="c"
> > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > >> > > >
> > > > > > >> > > > # Hard disk with ISO
> > > > > > >> > > > # Boot from ISO first ('d')
> > > > > > >> > > > #boot="d"
> > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > >> > > >
> > > > > > >> > > > # ACPI Enable
> > > > > > >> > > > acpi=1
> > > > > > >> > > > # HVM Event Modes
> > > > > > >> > > > on_poweroff='destroy'
> > > > > > >> > > > on_reboot='restart'
> > > > > > >> > > > on_crash='restart'
> > > > > > >> > > >
> > > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > > >> > > > sdl=0
> > > > > > >> > > > serial='pty'
> > > > > > >> > > >
> > > > > > >> > > > # VNC Configuration
> > > > > > >> > > > # Only reachable from localhost
> > > > > > >> > > > vnc=1
> > > > > > >> > > > vnclisten="0.0.0.0"
> > > > > > >> > > > vncpasswd=""
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > Copied for xen-users list
> > > > > > >> > > > ###########################################################
> > > > > > >> > > >
> > > > > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I rebooted the Host. I assigned pci devices to pciback. The output
> > > > > > >> > > > looks like:
> > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > > >> > > > Calling function pciback_dev for:
> > > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > > >> > > > 0000:03:00.0
> > > > > > >> > > > 0000:03:00.1
> > > > > > >> > > > 0000:04:00.0
> > > > > > >> > > > 0000:04:00.1
> > > > > > >> > > > 0000:05:00.0
> > > > > > >> > > > 0000:05:00.1
> > > > > > >> > > >=20
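[dev_mgmt.sh is the poster's own helper script, so its source is not shown in the thread. A rough Python sketch of the standard sysfs sequence that produces output like the above — the sysfs paths are the usual Linux PCI ones, and the driver directory name `pciback` for the xen-pciback module is an assumption; this needs root and the module loaded:]

```python
import os

SYSFS_PCI = "/sys/bus/pci"

def unbind_path(bdf):
    """sysfs file that detaches a device (e.g. 0000:03:00.0) from its driver."""
    return "%s/devices/%s/driver/unbind" % (SYSFS_PCI, bdf)

def pciback_paths():
    """sysfs files used to hand a slot to pciback: (new_slot, bind)."""
    base = "%s/drivers/pciback" % SYSFS_PCI
    return ("%s/new_slot" % base, "%s/bind" % base)

def assign_to_pciback(bdf):
    # Unbind from the current driver (igb/bnx2 in the log above), if any.
    if os.path.exists(unbind_path(bdf)):
        with open(unbind_path(bdf), "w") as f:
            f.write(bdf)
    new_slot, bind = pciback_paths()
    with open(new_slot, "w") as f:   # declare the slot to pciback
        f.write(bdf)
    with open(bind, "w") as f:       # then bind the device to it
        f.write(bdf)

# Usage (as root, with xen-pciback loaded):
#   for bdf in ["0000:03:00.0", "0000:03:00.1"]:
#       assign_to_pciback(bdf)
```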
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > > >> > > > WARNING: Use "device_model_override" instead if you really want a non-default device_model
> > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> > > > > > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> > > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
> > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > >> > > > Loader: 0000000000100000->00000000001a69a4
> > > > > > >> > > > Modules: 0000000000000000->0000000000000000
> > > > > > >> > > > TOTAL: 0000000000000000->000000003f800000
> > > > > > >> > > > ENTRY ADDRESS: 0000000000100608
> > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > >> > > > 4KB PAGES: 0x0000000000000200
> > > > > > >> > > > 2MB PAGES: 0x00000000000001fb
> > > > > > >> > > > 1GB PAGES: 0x0000000000000000
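[The page counts in the allocation lines above add up exactly to the guest RAM the device model is started with later in this log (`-m 1016`) — a quick arithmetic check:]

```python
# Page counts reported by libxc in the "PHYSICAL MEMORY ALLOCATION" lines.
pages_4k = 0x200   # 512 four-KiB pages  =    2 MiB
pages_2m = 0x1fb   # 507 two-MiB pages   = 1014 MiB
pages_1g = 0x000   # no 1 GiB pages

total_mib = (pages_4k * 4) // 1024 + pages_2m * 2 + pages_1g * 1024
print(total_mib)   # 1016, matching the "-m 1016" qemu argument
```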
> > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -xen-domid
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -chardev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -mon
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -name
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: ubuntu-hvm-0
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -vnc
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 0.0.0.0:0,to=99
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: isa-fdc.driveA=
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -serial
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: pty
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -vga
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: cirrus
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: vga.vram_size_mb=8
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -boot
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: order=c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -smp
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 2,maxcpus=2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -device
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -netdev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -M
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: xenfv
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -m
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 1016
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -drive
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "qmp_capabilities",
> > > > > > >> > > > "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "query-chardev",
> > > > > > >> > > > "id": 2
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "change",
> > > > > > >> > > > "id": 3,
> > > > > > >> > > > "arguments": {
> > > > > > >> > > > "device": "vnc",
> > > > > > >> > > > "target": "password",
> > > > > > >> > > > "arg": ""
> > > > > > >> > > > }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "query-vnc",
> > > > > > >> > > > "id": 4
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "qmp_capabilities",
> > > > > > >> > > > "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > > > > > >> > > > "execute": "device_add",
> > > > > > >> > > > "id": 2,
> > > > > > >> > > > "arguments": {
> > > > > > >> > > > "driver": "xen-pci-passthrough",
> > > > > > >> > > > "id": "pci-pt-03_00.0",
> > > > > > >> > > > "hostaddr": "0000:03:00.0"
> > > > > > >> > > > }
> > > > > > >> > > > }
> > > > > > >> > > > '
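[The QMP exchange libxl logs here can be replayed by hand to isolate the failing device_add from the rest of domain creation. A minimal sketch — the socket path /var/run/xen/qmp-libxl-2 and the command JSON are taken from the log above; the helper names are mine, and the domain must still be running for the connect to succeed:]

```python
import json
import socket

def qmp_cmd(execute, cmd_id, arguments=None):
    """Build a QMP command in the same shape libxl logs above."""
    msg = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg)

def replay_device_add(path="/var/run/xen/qmp-libxl-2"):
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    s.recv(4096)                       # discard the QMP greeting banner
    # Capabilities negotiation must come first on every QMP connection.
    s.sendall(qmp_cmd("qmp_capabilities", 1).encode())
    print(s.recv(4096).decode())
    # The passthrough hot-add that precedes the crash in the log.
    s.sendall(qmp_cmd("device_add", 2, {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-03_00.0",
        "hostaddr": "0000:03:00.0",
    }).encode())
    print(s.recv(4096).decode())       # an empty read here means qemu died
    s.close()
```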
> > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> > > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> > > > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
> > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
> > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> > > > > > >> > > > Daemon running with PID 3214
> > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > >> > > > CPU #0:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > > CPU #1:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > /etc/default/grub
> > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > > > >> > > > GRUB_TIMEOUT=10
> > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > > > >> > > > # biosdevname=0
> > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> > > > > > >> > > > _______________________________________________
> > > > > > >> > > > Xen-devel mailing list
> > > > > > >> > > > Xen-devel@lists.xen.org
> > > > > > >> > > > http://lists.xen.org/xen-devel

----- mikeneiderhauser@gmail.com wrote:
> I followed this site (http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)

Ah, so you are looking for the
    xen_pt: Fix passthrough of device with ROM.
which is not in Xen 4.4-rc3 but is in master.

One thing you can do is:

cd xen/tools/qemu-xen-dir
git fetch upstream
git checkout origin/master
[you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]

Go back to the main xen directory:
cd ../../../
./configure
make
make install

and you should now be using a newer version of QEMU with the fix.

> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
>
> Had to take some additional steps here to get all of the libs:
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> # apt-get install libaio-dev
> # apt-get install libpixman-1-dev
>
> ./configure
> make dist
> make install
>
> On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > > I did not use the patch.  I was assuming it was already patched given
> > > previous email.  Is the patch for qemu source or xen source?
> >
> > It is for QEMU, but you are right - it should have been part
> > of QEMU if you got the latest version of Xen-unstable.
> >
> > You didn't use some specific tag but just 'staging' ?
> >
> > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > > Ok. I ran the initscripts and now xl works.
> > > > >
> > > > > However, I still see the same behavior as before:
> > > >
> > > > Did you use the patch that was mentioned in the URL?
> > > >
> > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > > root@fiat:~# xl list
> > > > > Name                                    ID   Mem VCPUs      State   Time(s)
> > > > > Domain-0                                 0  1024     1     r-----      15.2
> > > > > ubuntu-hvm-0                             1  1025     1     ------       0.0
> > > > >
> > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > > (XEN) Std. Loglevel: All
> > > > > (XEN) Guest Loglevel: All
> > > > > (XEN) Xen is relinquishing VGA console.
> > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > > > (XEN) Freed 260kB init memory.
> > > > > (XEN) PCI add device 0000:00:00.0
> > > > > (XEN) PCI add device 0000:00:01.0
> > > > > (XEN) PCI add device 0000:00:1a.0
> > > > > (XEN) PCI add device 0000:00:1c.0
> > > > > (XEN) PCI add device 0000:00:1d.0
> > > > > (XEN) PCI add device 0000:00:1e.0
> > > > > (XEN) PCI add device 0000:00:1f.0
> > > > > (XEN) PCI add device 0000:00:1f.2
> > > > > (XEN) PCI add device 0000:00:1f.3
> > > > > (XEN) PCI add device 0000:01:00.0
> > > > > (XEN) PCI add device 0000:02:02.0
> > > > > (XEN) PCI add device 0000:02:04.0
> > > > > (XEN) PCI add device 0000:03:00.0
> > > > > (XEN) PCI add device 0000:03:00.1
> > > > > (XEN) PCI add device 0000:04:00.0
> > > > > (XEN) PCI add device 0000:04:00.1
> > > > > (XEN) PCI add device 0000:05:00.0
> > > > > (XEN) PCI add device 0000:05:00.1
> > > > > (XEN) PCI add device 0000:06:03.0
> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > > > (d1) HVM Loader
&gt; &gt; &gt; (d1) Detected Xen v4.4-rc2<br>&gt;=20
&gt; &gt; &gt; (d1) Xenbus rings @0xfeffc000, event channel 4<br>&gt;=20
&gt; &gt; &gt; (d1) System requested SeaBIOS<br>&gt;=20
&gt; &gt; &gt; (d1) CPU speed is 3093 MHz<br>&gt;=20
&gt; &gt; &gt; (d1) Relocating guest memory for lowmem MMIO space disabled<=
br>&gt;=20
&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; Excerpt from /var/log/xen/*<br>&gt;=20
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 4005000=
0<br>&gt;=20
&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt;<br>&gt;=20
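The two (XEN) errors quoted above are consistent with the memory=1024 guest config later in the thread: 262400 pages of 4 KiB is exactly 1025 MiB, the figure `xl list` shows for ubuntu-hvm-0, and the device model then tried to populate one page past that cap. A quick sketch of the arithmetic; treating the extra 1 MiB over the configured memory as slack added by the toolstack is my assumption:

```shell
# Reproduce the cap from the over-allocation message above.
# Assumption: the 1 MiB over memory=1024 is toolstack slack.
PAGE_KIB=4
CONFIGURED_MIB=1024      # memory=1024 in the guest config
SLACK_MIB=1              # assumed slack
MAX_PAGES=$(( (CONFIGURED_MIB + SLACK_MIB) * 1024 / PAGE_KIB ))
echo "$MAX_PAGES"                         # 262400, the cap in the log
echo $(( MAX_PAGES * PAGE_KIB / 1024 ))   # 1025, as shown by 'xl list'
```

The 262401st page is the request that fails, which is why qemu then reports "failed to populate ram".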
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@oracle.com> wrote:
> > > >
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > > getting the error:
> > > > > >
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command interface
> > > (2 =
> > > > > No
> > > > > > such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> > > No
> > > > > such
> > > > > > file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've google searched for this and an article appears, but is not the
> > > same
> > > > > > (as far as I can tell).  Running any xl command generates a similar
> > > > > error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your distro
> > > is,
> > > > > but
> > > > > they are usually put in /etc/init.d/rc.d/xen*
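The initscript advice works because the Xen scripts (typically xencommons) mount xenfs on /proc/xen, which provides the /proc/xen/privcmd node that xl opens via libxc; without it, xl fails with exactly the "No such file or directory" error quoted above. A diagnostic sketch; script names and paths vary by distro, so treat them as examples:

```shell
#!/bin/sh
# 'xl' opens /proc/xen/privcmd via libxc; the node only appears once
# xenfs is mounted, which the xencommons init script normally does.
# (Exact init-script names and locations differ per distro.)
if [ -e /proc/xen/privcmd ]; then
    echo "privcmd present - xl should reach the privileged interface"
else
    echo "privcmd missing - try: mount -t xenfs xenfs /proc/xen"
fi
```

On a booted Xen dom0 where this still prints "missing", running the distro's xencommons (or equivalent) init script is usually the fix.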
> > > > >
> > > > >
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > > mikeneiderhauser@gmail.com> wrote:
> > > > > >
> > > > > > > Much. Do I need to install from src or is there a package I can
> > > > > install.
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > > konrad.wilk@oracle.com> wrote:
> > > > > > >
> > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > >> > I did not.  I do not have the toolchain installed.  I may have
> > > time
> > > > > > >> later
> > > > > > >> > today to try the patch.  Are there any specific instructions on
> > > how
> > > > > to
> > > > > > >> > patch the src, compile and install?
> > > > > > >>
> > > > > > >> There actually should be a new version of Xen 4.4-rcX which will
> > > have
> > > > > the
> > > > > > >> fix. That might be easier for you?
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > > >> > konrad.wilk@oracle.com> wrote:
> > > > > > >> >
> > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser
> > > wrote:
> > > > > > >> > > > Hi all,
> > > > > > >> > > >
> > > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card
> > > > > (4x1G
> > > > > > >> NIC)
> > > > > > >> > > to a
> > > > > > >> > > > HVM.  I have been attempting to resolve this issue on the
> > > > > xen-users
> > > > > > >> list,
> > > > > > >> > > > but it was advised to post this issue to this list. (Initial
> > > > > > >> Message -
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
> > > > > > >> )
> > > > > > >> > > >
> > > > > > >> > > > The machine I am using as host is a Dell Poweredge server
> > > with a
> > > > > > >> Xeon
> > > > > > >> > > > E31220 with 4GB of ram.
> > > > > > >> > > >
> > > > > > >> > > > The possible bug is the following:
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > > 40030000
> > > > > > >> > > > ....
> > > > > > >> > > >
> > > > > > >> > > > I believe it may be similar to this thread
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > >> > >
> > > > > > >> > > Did you try the patch?
> > > > > > >> > > >
> > > > > > >> > > > Please let me know if you need any additional information.
> > > > > > >> > > >
> > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > >> > > > Regards
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > >> > > >
> > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > >> > > > builder='hvm'
> > > > > > >> > > > device_model='qemu-dm'
> > > > > > >> > > > memory=1024
> > > > > > >> > > > vcpus=2
> > > > > > >> > > >
> > > > > > >> > > > # Virtual Interface
> > > > > > >> > > > # Network bridge to USB NIC
> > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > >> > > >
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > >> > > > #pci_permissive=1
> > > > > > >> > > >
> > > > > > >> > > > # All PCI Devices
> > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > > > > >> '05:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Broadcom 2x1G NIC
> > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > >
> > > > > > >> > > > # HVM Disks
> > > > > > >> > > > # Hard disk only
> > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > >> > > > boot="c"
> > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > >> > > >
> > > > > > >> > > > # Hard disk with ISO
> > > > > > >> > > > # Boot from ISO first ('d')
> > > > > > >> > > > #boot="d"
> > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > >> > > >
> > > > > > >> > > > # ACPI Enable
> > > > > > >> > > > acpi=1
> > > > > > >> > > > # HVM Event Modes
> > > > > > >> > > > on_poweroff='destroy'
> > > > > > >> > > > on_reboot='restart'
> > > > > > >> > > > on_crash='restart'
> > > > > > >> > > >
> > > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > > >> > > > sdl=0
> > > > > > >> > > > serial='pty'
> > > > > > >> > > >
> > > > > > >> > > > # VNC Configuration
> > > > > > >> > > > # Only reachable from localhost
> > > > > > >> > > > vnc=1
> > > > > > >> > > > vnclisten="0.0.0.0"
> > > > > > >> > > > vncpasswd=""
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > Copied for xen-users list
> > > > > > >> > > > ###########################################################
> > > > > > >> > > >
> > > > > > >> > > > It appears that it cannot obtain the RAM mapping for this
> > > PCI
> > > > > > >> device.
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I rebooted the Host.  I ran assigned pci devices to
> > > pciback. The
> > > > > > >> output
> > > > > > >> > > > looks like:
> > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > > >> > > > Calling function pciback_dev for:
> > > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > > >> > > > 0000:03:00.0
> > > > > > >> > > > 0000:03:00.1
> > > > > > >> > > > 0000:04:00.0
> > > > > > >> > > > 0000:04:00.1
> > > > > > >> > > > 0000:05:00.0
> > > > > > >> > > > 0000:05:00.1
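The Unbinding/Binding lines from dev_mgmt.sh map onto plain sysfs writes; the script itself isn't shown in the thread, but a script like it typically looks something like this (a sketch with the usual xen-pciback sysfs paths; DRY_RUN=1 only prints the commands, and clearing it requires root on the Xen host with xen-pciback loaded):

```shell
#!/bin/sh
# Sketch of the sysfs dance behind output like
# "Unbinding 0000:03:00.0 from igb" / "Binding 0000:03:00.0 to pciback".
DRY_RUN=1
bind_to_pciback() {
    bdf=$1      # full PCI address, e.g. 0000:03:00.0
    old_drv=$2  # driver currently bound, e.g. igb or bnx2
    for cmd in \
        "echo $bdf > /sys/bus/pci/drivers/$old_drv/unbind" \
        "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot" \
        "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
    do
        if [ "$DRY_RUN" = 1 ]; then echo "$cmd"; else sh -c "$cmd"; fi
    done
}
bind_to_pciback 0000:03:00.0 igb   # first port of the Intel 4x1G NIC
```

The `xl pci-assignable-list` output at the end of the quoted log is the check that all six functions really ended up on pciback.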
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > > >> > > > WARNING: Use "device_model_override" instead if you really
> > > want
> > > > > a
> > > > > > >> > > > non-default device_model
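The two WARNING lines are xl rejecting the old xend-style `device_model='qemu-dm'` from the config quoted earlier. Under xl the supported form is roughly the following (a sketch; the option names are from the xl.cfg syntax, and the override path is only an example):

```
# Replace device_model='qemu-dm' with:
device_model_version = 'qemu-xen-traditional'   # or 'qemu-xen'
# and only if a non-default binary is really needed:
#device_model_override = '/usr/lib/xen/bin/qemu-dm'
```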
> > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> create:
> > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=unknown
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda, using backend phy
> > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> > > running
> > > > > > >> > > bootloader
> > > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run:
> > > not
> > > > > a PV
> > > > > > >> > > > domain, skipping bootloader
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c728: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate:
> > > New
> > > > > best
> > > > > > >> NUMA
> > > > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> > > nr_vcpus=3,
> > > > > > >> > > > free_memkb=2980
> > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> > > placement
> > > > > > >> > > candidate
> > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> > > memsz=0xa69a4
> > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > > >> 0x7f022c81682d
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=phy
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > > > > waiting
> > > > > > >> > > state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/block add
> > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> > > Spawning
> > > > > > >> > > device-model
> > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > /usr/bin/qemu-system-i386
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > -xen-domid
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -chardev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > chardev=libxl-cmd,mode=control
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > ubuntu-hvm-0
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 0.0.0.0:0
> > > > > > >> ,to=99
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> isa-fdc.driveA=
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -serial
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > cirrus
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> vga.vram_size_mb=8
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > order=c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 2,maxcpus=2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -device
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -netdev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -drive
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=3Ddis=
k,format=3Draw,cache=3Dwriteback<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/loc=
al/domain/0/device-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210c960<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/de=
vice-model/2/state token=3D3/1: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/de=
vice-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210c960<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/de=
vice-model/2/state token=3D3/1: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/de=
vice-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/loc=
al/domain/0/device-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960: deregister =
unregistered<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; connected<br>&gt;=20
&gt; &gt; &gt; &gt; to<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; &gt; &gt; type: qmp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: '{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "execute": "=
qmp_capabilities",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "id": 1<br>&=
gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; '<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: '{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "execute": "=
query-chardev",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "id": 2<br>&=
gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; '<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: '{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "execute": "=
change",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "id": 3,<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "arguments":=
 {<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; &nbsp; &nbsp=
; "device": "vnc",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; &nbsp; &nbsp=
; "target": "password",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; &nbsp; &nbsp=
; "arg": ""<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; '<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: '{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "execute": "=
query-vnc",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "id": 4<br>&=
gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; '<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/2:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:647:devstate_watch_callback:<br>&gt;=20
&gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 still<br>&gt;=20
&gt; &gt; &gt; &gt; waiting<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; 1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:643:devstate_watch_callback:<br>&gt;=20
&gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 ok<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/2:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister =
unregistered<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>&gt;=20
&gt; &gt; hotplug<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e online<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>&gt;=20
&gt; &gt; hotplug<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e add<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; connected<br>&gt;=20
&gt; &gt; &gt; &gt; to<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; &gt; &gt; type: qmp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: '{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "execute": "=
qmp_capabilities",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "id": 1<br>&=
gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; '<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: '{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "execute": "=
device_add",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "id": 2,<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; "arguments":=
 {<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; &nbsp; &nbsp=
; "driver": "xen-pci-passthrough",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; &nbsp; &nbsp=
; "id": "pci-pt-03_00.0",<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; &nbsp; &nbsp=
; "hostaddr": "0000:03:00.0"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &nbsp; &nbsp; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; '<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
454:qmp_next: Socket read error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; reset<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; by peer<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:=
81:libxl__create_pci_backend:<br>&gt;=20
&gt; &gt; &gt; &gt; Creating pci<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1737:libxl__ao_progress_report:<br>&gt;=20
&gt; &gt; ao<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1569:libxl__ao_complete: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1541:libxl__ao__destroy: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 32=
14<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: total allocations:793 total<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; releases:793<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: current allocations:0 maximum<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; allocations:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache current size:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache hits:785 misses:4<br>&gt;=20
&gt; &gt; toobig:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# ca=
t qemu-dm-ubuntu-hvm-0.log<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to =
/dev/pts/5 (label serial0)<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>&gt;=20
&gt; &gt; 40030000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D &nbsp; &nbsp; 00000=
000 0000ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D"Xen 4.3-am=
d64"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>&gt;=20
&gt; &gt; Debian`<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D"quiet splash"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D""<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D"dom0_m=
em=3D1024M dom0_max_vcpus=3D1"<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org" target=3D"_blank">Xen-devel@lists.xen.org</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
</div></div></blockquote></div><br>&gt; </div>
</div></body></html>
--__13918738111841123abhmp0004.oracle.com--


--===============3107846946465527579==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3107846946465527579==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:52:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCACD-0007rA-8J; Sat, 08 Feb 2014 15:51:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1WCACB-0007r5-Qq
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 15:51:56 +0000
Received: from [85.158.137.68:39339] by server-1.bemta-3.messagelabs.com id
	F3/AA-17293-B9256F25; Sat, 08 Feb 2014 15:51:55 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391874711!524375!1
X-Originating-IP: [64.18.0.145]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19973 invoked from network); 8 Feb 2014 15:51:53 -0000
Received: from exprod5og103.obsmtp.com (HELO exprod5og103.obsmtp.com)
	(64.18.0.145)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Feb 2014 15:51:53 -0000
Received: from mail-ob0-f177.google.com ([209.85.214.177]) (using TLSv1) by
	exprod5ob103.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUvZSl2qT0V5mYwVrxZeqOstSxujeo3vu@postini.com;
	Sat, 08 Feb 2014 07:51:53 PST
Received: by mail-ob0-f177.google.com with SMTP id wp18so5365510obc.8
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 07:51:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=XWgywsDjMyRxGO/iwF7nlOwa4g5Gd602+JBrLWMm5zU=;
	b=WKMq+r/RD0KTprA6NaJfvCBbWxgBwIOuV4wvOX216Q3JiGQcukOvz3lukwycbth5aL
	JnV6Z1eh3JQZdUmHZHW/IehA/O0nW2WVyeWW/u8fixJHyG8C/rRZ2BOIjU8T3hrpEp3w
	y5cvyjoXVkh8w7yJIv3oqBLzjFeI+T8ZF9d38=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=XWgywsDjMyRxGO/iwF7nlOwa4g5Gd602+JBrLWMm5zU=;
	b=I7XKM0jOMHZJAR04bxBlABn8ctK3pY2GIeu55C33qHM820zU6ARtsblkeyCuxsuSjg
	93bJTw8Oq8Lrvum072Lp/LFs9bUxrnoLBDGW7j/G+4IV0Axu+i36ZbMV1U77nAr+Pa4t
	DsGGQr68pfffBBtqKQqhhBOKEudePjOlJsoqJ3dEHW8sib6Bl24KhJJuB5OtfWjbjbOC
	pJrkZVb3HyED4akLP+YIWPuQtla8JLDNmGaXL8aPLikCqlqAfSwMtytPEqqATmNku3B+
	oe/bsJ/9uau0ItlZ0jy0MXVzlUTrsqodogedjuW+Zkfwcy26IW9Ur3kADZIBrl/f5a56
	We+A==
X-Gm-Message-State: ALoCoQl6Q/a4dL2RcxfS/v2mX20LeFJyc9mSMRxbcGDKGB1SxVAZz1D8QQMgDzezsT4wPLEEkFxmIBKrIuQAUFgPDo1uUjTjJHbE626naAiYTHoVe1v6J5w7DG9dEDiCRRDiJ43701e3uTHQf9aNMDIXr1RBJ/LEj+pFUPL1+U/XanpXjEUBuzQ=
X-Received: by 10.60.161.37 with SMTP id xp5mr18169653oeb.31.1391874710867;
	Sat, 08 Feb 2014 07:51:50 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.161.37 with SMTP id xp5mr18169643oeb.31.1391874710725;
	Sat, 08 Feb 2014 07:51:50 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Sat, 8 Feb 2014 07:51:50 -0800 (PST)
In-Reply-To: <1391522239.10515.79.camel@kazak.uk.xensource.com>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
	<52EFFCF5.5070108@linaro.org>
	<CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>
	<52F0EBA8.3000206@linaro.org>
	<1391522239.10515.79.camel@kazak.uk.xensource.com>
Date: Sat, 8 Feb 2014 17:51:50 +0200
Message-ID: <CAE4oM6wFyRMXw==kkHFr0Gy7JS9=GZQyJ5grHQRr82CdT+gbOw@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3005230050702433463=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3005230050702433463==
Content-Type: multipart/alternative; boundary=089e0118475ab47f1104f1e71560

--089e0118475ab47f1104f1e71560
Content-Type: text/plain; charset=ISO-8859-1

Hi,

>
> >  > To support xentrace on ARM, we will need at least:
> >
> > I would readily do that if you give me some directions on where to look
> > at, or a high-level explanation of:
> >
> >  > - to replace rcu_lock_domain_by_any_id() by a similar function
> >
> > What semantics should this function have?
>
> I would copy in part get_pg_owner (arch/x86/mm/mm.c) in the ARM code.
> The check "unlikely(paging_mode_translate(curr))" will always fail on ARM.

> Probably best just to dig in and ask questions as issues arise.

After making a quick workaround to rcu_lock DOMAIN_XEN on ARM (in much the
same manner as it is done on x86), I've run into a different problem.
Specifically, p2m_lookup for an address in the Xen restricted heap fails on
the very first map call: p2m_map_first returns a zero page because both
p2m->first_level and p2m_first_level_index(addr) are zero. As far as I
understand, Xen simply does not know how to do a p2m translation for its
own restricted heap on ARM. Am I right?

Regards,
  Pavlo

--089e0118475ab47f1104f1e71560--


--===============3005230050702433463==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3005230050702433463==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:52:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCACD-0007rA-8J; Sat, 08 Feb 2014 15:51:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pavlo.suikov@globallogic.com>) id 1WCACB-0007r5-Qq
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 15:51:56 +0000
Received: from [85.158.137.68:39339] by server-1.bemta-3.messagelabs.com id
	F3/AA-17293-B9256F25; Sat, 08 Feb 2014 15:51:55 +0000
X-Env-Sender: pavlo.suikov@globallogic.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1391874711!524375!1
X-Originating-IP: [64.18.0.145]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19973 invoked from network); 8 Feb 2014 15:51:53 -0000
Received: from exprod5og103.obsmtp.com (HELO exprod5og103.obsmtp.com)
	(64.18.0.145)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Feb 2014 15:51:53 -0000
Received: from mail-ob0-f177.google.com ([209.85.214.177]) (using TLSv1) by
	exprod5ob103.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUvZSl2qT0V5mYwVrxZeqOstSxujeo3vu@postini.com;
	Sat, 08 Feb 2014 07:51:53 PST
Received: by mail-ob0-f177.google.com with SMTP id wp18so5365510obc.8
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 07:51:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=XWgywsDjMyRxGO/iwF7nlOwa4g5Gd602+JBrLWMm5zU=;
	b=WKMq+r/RD0KTprA6NaJfvCBbWxgBwIOuV4wvOX216Q3JiGQcukOvz3lukwycbth5aL
	JnV6Z1eh3JQZdUmHZHW/IehA/O0nW2WVyeWW/u8fixJHyG8C/rRZ2BOIjU8T3hrpEp3w
	y5cvyjoXVkh8w7yJIv3oqBLzjFeI+T8ZF9d38=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=XWgywsDjMyRxGO/iwF7nlOwa4g5Gd602+JBrLWMm5zU=;
	b=I7XKM0jOMHZJAR04bxBlABn8ctK3pY2GIeu55C33qHM820zU6ARtsblkeyCuxsuSjg
	93bJTw8Oq8Lrvum072Lp/LFs9bUxrnoLBDGW7j/G+4IV0Axu+i36ZbMV1U77nAr+Pa4t
	DsGGQr68pfffBBtqKQqhhBOKEudePjOlJsoqJ3dEHW8sib6Bl24KhJJuB5OtfWjbjbOC
	pJrkZVb3HyED4akLP+YIWPuQtla8JLDNmGaXL8aPLikCqlqAfSwMtytPEqqATmNku3B+
	oe/bsJ/9uau0ItlZ0jy0MXVzlUTrsqodogedjuW+Zkfwcy26IW9Ur3kADZIBrl/f5a56
	We+A==
X-Gm-Message-State: ALoCoQl6Q/a4dL2RcxfS/v2mX20LeFJyc9mSMRxbcGDKGB1SxVAZz1D8QQMgDzezsT4wPLEEkFxmIBKrIuQAUFgPDo1uUjTjJHbE626naAiYTHoVe1v6J5w7DG9dEDiCRRDiJ43701e3uTHQf9aNMDIXr1RBJ/LEj+pFUPL1+U/XanpXjEUBuzQ=
X-Received: by 10.60.161.37 with SMTP id xp5mr18169653oeb.31.1391874710867;
	Sat, 08 Feb 2014 07:51:50 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.60.161.37 with SMTP id xp5mr18169643oeb.31.1391874710725;
	Sat, 08 Feb 2014 07:51:50 -0800 (PST)
Received: by 10.182.141.69 with HTTP; Sat, 8 Feb 2014 07:51:50 -0800 (PST)
In-Reply-To: <1391522239.10515.79.camel@kazak.uk.xensource.com>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>
	<52EFFCF5.5070108@linaro.org>
	<CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>
	<52F0EBA8.3000206@linaro.org>
	<1391522239.10515.79.camel@kazak.uk.xensource.com>
Date: Sat, 8 Feb 2014 17:51:50 +0200
Message-ID: <CAE4oM6wFyRMXw==kkHFr0Gy7JS9=GZQyJ5grHQRr82CdT+gbOw@mail.gmail.com>
From: Pavlo Suikov <pavlo.suikov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3005230050702433463=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3005230050702433463==
Content-Type: multipart/alternative; boundary=089e0118475ab47f1104f1e71560

--089e0118475ab47f1104f1e71560
Content-Type: text/plain; charset=ISO-8859-1

Hi,

>
> >  > To support xentrace on ARM, we will need at least:
> >
> > I would readily do that if you give me some directions on where to look
> > at, or a high-level explanation of:
> >
> >  > - to replace rcu_lock_domain_by_any_id() by a similar function
> >
> > What semantics should this function have?
>
> I would copy in part get_pg_owner (arch/x86/mm/mm.c) in the ARM code.
> The check "unlikely(paging_mode_translate(curr))" will always fail on ARM.

> Probably best just to dig in and ask questions as issues arise.

After making a quick workaround to rcu_lock DOMAIN_XEN on ARM (in much the
same way it is done on x86), I've run into a different problem. Specifically,
p2m_lookup for an address in the Xen-restricted heap fails on the very first
map call: p2m_map_first returns the zero page because both p2m->first_level
and p2m_first_level_index(addr) are zero. As far as I understand, Xen simply
does not know how to do a p2m translation for its own restricted heap on ARM.
Am I right?

Regards,
  Pavlo

--089e0118475ab47f1104f1e71560--


--===============3005230050702433463==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3005230050702433463==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:54:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCAEc-0007xT-Qt; Sat, 08 Feb 2014 15:54:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WC9zM-0007Jt-7H
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 15:38:40 +0000
Received: from [85.158.143.35:8502] by server-3.bemta-4.messagelabs.com id
	8E/90-11539-F7F46F25; Sat, 08 Feb 2014 15:38:39 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391873913!4155904!1
X-Originating-IP: [209.85.220.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30492 invoked from network); 8 Feb 2014 15:38:34 -0000
Received: from mail-vc0-f174.google.com (HELO mail-vc0-f174.google.com)
	(209.85.220.174)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 15:38:34 -0000
Received: by mail-vc0-f174.google.com with SMTP id im17so3592044vcb.33
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 07:38:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=Mh6F/6m+yrQmCBCfhw88bidbtR7odjiHS75k3LXBGmo=;
	b=GCNoMGb8pZCnTzbjtzRfRPZBd4XjogitkcGH8K2OfN1KIIIUjCPeeGmvPtAyiHsPoM
	Cj7UraQ7B3+AI6oAFs1qeDkCow5ZjAdFg/DIerow8hzdt96BxvAeE/Ci6odT4ZxtfTgR
	ivEwIznwKdoMi3KNg272vbXM25wLZvcIrOSb3SFcMc1fwNcNAcmoKffeuNME+RG1tUiL
	NwD2Pw7LYngQOusbaL16krtbFqLONuut3n2yMe9lpHslajYl3ZYqKiAQaJpWnrozFQGw
	5iib49yqt2Hk97E5KV8jxmDvNTjt9GY09oT+4drkx1J0Y/Z46q8v7MveEGuw7CqLje9S
	ZkmQ==
X-Received: by 10.52.160.233 with SMTP id xn9mr62760vdb.48.1391873913663; Sat,
	08 Feb 2014 07:38:33 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Sat, 8 Feb 2014 07:37:53 -0800 (PST)
In-Reply-To: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Sat, 8 Feb 2014 10:37:53 -0500
Message-ID: <CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Sat, 08 Feb 2014 15:54:25 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8116040961981903080=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8116040961981903080==
Content-Type: multipart/alternative; boundary=089e0160c3ae3238a404f1e6e697

--089e0160c3ae3238a404f1e6e697
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

I will give it a shot.  Thanks!


On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:

>
> ----- mikeneiderhauser@gmail.com wrote:
> >
> > I followed this site (
> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions).
> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)
> >
>
> Ah, so you are looking for the "xen_pt: Fix passthrough of device with ROM"
> patch, which is not in Xen 4.4-rc3 but is in master.
>
> One thing you can do is:
>
> cd xen/tools/qemu-xen-dir
> git fetch upstream
> git checkout origin/master
> [you should see: "HEAD is now at 027c412... configure: Disable libtool if
> -fPIE does not work with it (bug #1257099)"]
>
> Go back to main xen directory:
> cd ../../../
> ./configure
> make
> make install
>
> and you should now be using a newer version of QEMU with the fix.
>
>
> >
>
> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
> >
>
>
> Had to take some additional steps here to get all of the libs:
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> # apt-get install libaio-dev
> # apt-get install libpixman-1-dev
>
> ./configure
> make dist
> make install
>
> >
> >
> >
> > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> >
>>
>> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
>> > > I did not use the patch.  I was assuming it was already patched given
>> > > previous email.  Is the patch for qemu source or xen source?
>> >
>> >
>> It is for QEMU, but you are right - it should have been part
>> > of QEMU if you got the latest version of Xen-unstable.
>> >
>> > You didn't use some specific tag but just 'staging' ?
>> >
>> >
>> >
>> > >
>> > >
>> > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
>> > > konrad.wilk@oracle.com> wrote:
>> > >
>> > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
>> > > > > Ok. I ran the initscripts and now xl works.
>> > > > >
>> > > > > However, I still see the same behavior as before:
>> > > > >
>> > > >
>> > > > Did you use the patch that was mentioned in the URL?
>> > > >
>> > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
>> > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>> > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > root@fiat:~# xl list
>> > > > > Name                                        ID   Mem VCPUs State   Time(s)
>> > > > > Domain-0                                     0  1024     1 r-----     15.2
>> > > > > ubuntu-hvm-0                                 1  1025     1 ------      0.0
>> > > > >
>> > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
>> > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
>> > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
>> > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
>> > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
>> > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
>> > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
>> > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
>> > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
>> > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
>> > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
>> > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
>> > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
>> > > > > (XEN) Dom0 has maximum 1 VCPUs
>> > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
>> > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
>> > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
>> > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
>> > > > > (XEN) Scrubbing Free RAM: .............................done.
>> > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>> > > > > (XEN) Std. Loglevel: All
>> > > > > (XEN) Guest Loglevel: All
>> > > > > (XEN) Xen is relinquishing VGA console.
>> > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
>> > > > > (XEN) Freed 260kB init memory.
>> > > > > (XEN) PCI add device 0000:00:00.0
>> > > > > (XEN) PCI add device 0000:00:01.0
>> > > > > (XEN) PCI add device 0000:00:1a.0
>> > > > > (XEN) PCI add device 0000:00:1c.0
>> > > > > (XEN) PCI add device 0000:00:1d.0
>> > > > > (XEN) PCI add device 0000:00:1e.0
>> > > > > (XEN) PCI add device 0000:00:1f.0
>> > > > > (XEN) PCI add device 0000:00:1f.2
>> > > > > (XEN) PCI add device 0000:00:1f.3
>> > > > > (XEN) PCI add device 0000:01:00.0
>> > > > > (XEN) PCI add device 0000:02:02.0
>> > > > > (XEN) PCI add device 0000:02:04.0
>> > > > > (XEN) PCI add device 0000:03:00.0
>> > > > > (XEN) PCI add device 0000:03:00.1
>> > > > > (XEN) PCI add device 0000:04:00.0
>> > > > > (XEN) PCI add device 0000:04:00.1
>> > > > > (XEN) PCI add device 0000:05:00.0
>> > > > > (XEN) PCI add device 0000:05:00.1
>> > > > > (XEN) PCI add device 0000:06:03.0
>> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
>> > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
>> > > > > (200 of 1024)
>> > > > > (d1) HVM Loader
>> > > > > (d1) Detected Xen v4.4-rc2
>> > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
>> > > > > (d1) System requested SeaBIOS
>> > > > > (d1) CPU speed is 3093 MHz
>> > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
>> > > > >
>> > > > >
>> > > > > Excerpt from /var/log/xen/*
>> > > > > qemu: hardware error: xen: failed to populate ram at 40050000
>> > > > >
>> > > > >
>> > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
>> > > > > konrad.wilk@oracle.com> wrote:
>> > > > >
>> > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser
>> wrote:
>> > > > > > > I was able to compile and install xen4.4 RC3 on my host,
>> however I am
>> > > > > > > getting the error:
>> > > > > > >
>> > > > > > > root@fiat:~/git/xen# xl list
>> > > > > > > xc: error: Could not obtain handle on privileged command interface
>> > > > > > > (2 = No such file or directory): Internal error
>> > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
>> > > > > > > cannot init xl context
>> > > > > > >
>> > > > > > > I've google searched for this and an article appears, but is not the same
>> > > > > > > (as far as I can tell).  Running any xl command generates a similar error.
>> > > > > > >
>> > > > > > > What can I do to fix this?
>> > > > > >
>> > > > > >
>> > > > > > You need to run the initscripts for Xen. I don't know what your
>> > > > > > distro is, but they are usually put in /etc/init.d/rc.d/xen*
>> > > > > >
>> > > > > >
>> > > > > > >
>> > > > > > > Regards
>> > > > > > >
>> > > > > > >
>> > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
>> > > > > > > mikeneiderhauser@gmail.com> wrote:
>> > > > > > >
>> > > > > > > > Much. Do I need to install from src or is there a package I can install?
>> > > > > > > >
>> > > > > > > > Regards
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
>> > > > > > > > konrad.wilk@oracle.com> wrote:
>> > > > > > > >
>> > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike
>> Neiderhauser wrote:
>> > > > > > > >> > I did not.  I do not have the toolchain installed.  I
>> may have
>> > > > time
>> > > > > > > >> later
>> > > > > > > >> > today to try the patch.  Are there any specific
>> instructions on
>> > > > how
>> > > > > > to
>> > > > > > > >> > patch the src, compile and install?
>> > > > > > > >>
>> > > > > > > >> There actually should be a new version of Xen 4.4-rcX
>> which will
>> > > > have
>> > > > > > the
>> > > > > > > >> fix. That might be easier for you?
>> > > > > > > >> >
>> > > > > > > >> > Regards
>> > > > > > > >> >
>> > > > > > > >> >
>> > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
>> > > > > > > >> > konrad.wilk@oracle.com> wrote:
>> > > > > > > >> >
>> > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
>> Neiderhauser
>> > > > wrote:
>> > > > > > > >> > > > Hi all,
>> > > > > > > >> > > >
>> > > > > > > >> > > > I am attempting to do a pci passthrough of an Intel
>> ET card
>> > > > > > (4x1G
>> > > > > > > >> NIC)
>> > > > > > > >> > > to a
>> > > > > > > >> > > > HVM.  I have been attempting to resolve this issue
>> on the
>> > > > > > xen-users
>> > > > > > > >> list,
>> > > > > > > >> > > > but it was advised to post this issue to this list.
>> (Initial
>> > > > > > > >> Message -
>> > > > > > > >> > > >
>> > > > > > > >> > >
>> > > > > > > >>
>> > > > > >
>> > > >
>> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>> > > > > > > >> )
>> > > > > > > >> > > >
>> > > > > > > >> > > > The machine I am using as host is a Dell Poweredge
>> server
>> > > > with a
>> > > > > > > >> Xeon
>> > > > > > > >> > > > E31220 with 4GB of ram.
>> > > > > > > >> > > >
>> > > > > > > >> > > > The possible bug is the following:
>> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > > > > >> > > > ....
>> > > > > > > >> > > >
>> > > > > > > >> > > > I believe it may be similar to this thread
>> > > > > > > >> > > >
>> > > > > > > >> > >
>> > > > > > > >>
>> > > > > >
>> > > >
>> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> > > > > > > >> > > > Additional info that may be helpful is below.
>> > > > > > > >> > >
>> > > > > > > >> > > Did you try the patch?
>> > > > > > > >> > > >
>> > > > > > > >> > > > Please let me know if you need any additional
>> information.
>> > > > > > > >> > > >
>> > > > > > > >> > > > Thanks in advance for any help provided!
>> > > > > > > >> > > > Regards
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > # Configuration file for Xen HVM
>> > > > > > > >> > > >
>> > > > > > > >> > > > # HVM Name (as appears in 'xl list')
>> > > > > > > >> > > > name="ubuntu-hvm-0"
>> > > > > > > >> > > > # HVM Build settings (+ hardware)
>> > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>> > > > > > > >> > > > builder='hvm'
>> > > > > > > >> > > > device_model='qemu-dm'
>> > > > > > > >> > > > memory=1024
>> > > > > > > >> > > > vcpus=2
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Virtual Interface
>> > > > > > > >> > > > # Network bridge to USB NIC
>> > > > > > > >> > > > vif=['bridge=xenbr0']
>> > > > > > > >> > > >
>> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>> > > > > > > >> > > > # PCI Permissive mode toggle
>> > > > > > > >> > > > #pci_permissive=1
>> > > > > > > >> > > >
>> > > > > > > >> > > > # All PCI Devices
>> > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # First two ports on Intel 4x1G NIC
>> > > > > > > >> > > > #pci=['03:00.0','03:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
>> > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # All ports on Intel 4x1G NIC
>> > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Broadcom 2x1G NIC
>> > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
>> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>> > > > > > > >> > > >
>> > > > > > > >> > > > # HVM Disks
>> > > > > > > >> > > > # Hard disk only
>> > > > > > > >> > > > # Boot from HDD first ('c')
>> > > > > > > >> > > > boot="c"
>> > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Hard disk with ISO
>> > > > > > > >> > > > # Boot from ISO first ('d')
>> > > > > > > >> > > > #boot="d"
>> > > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w', 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # ACPI Enable
>> > > > > > > >> > > > acpi=1
>> > > > > > > >> > > > # HVM Event Modes
>> > > > > > > >> > > > on_poweroff='destroy'
>> > > > > > > >> > > > on_reboot='restart'
>> > > > > > > >> > > > on_crash='restart'
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Serial Console Configuration (Xen Console)
>> > > > > > > >> > > > sdl=0
>> > > > > > > >> > > > serial='pty'
>> > > > > > > >> > > >
>> > > > > > > >> > > > # VNC Configuration
>> > > > > > > >> > > > # Only reachable from localhost
>> > > > > > > >> > > > vnc=1
>> > > > > > > >> > > > vnclisten="0.0.0.0"
>> > > > > > > >> > > > vncpasswd=""
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > Copied for xen-users list
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > >
>> > > > > > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> > > > > > > >> > > > I rebooted the host and reassigned the PCI devices to pciback. The output looks like:
>> > > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
>> > > > > > > >> > > > Loading Kernel Module 'xen-pciback'
>> > > > > > > >> > > > Calling function pciback_dev for:
>> > > > > > > >> > > > PCI DEVICE 0000:03:00.0
>> > > > > > > >> > > > Unbinding 0000:03:00.0 from igb
>> > > > > > > >> > > > Binding 0000:03:00.0 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:03:00.1
>> > > > > > > >> > > > Unbinding 0000:03:00.1 from igb
>> > > > > > > >> > > > Binding 0000:03:00.1 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:04:00.0
>> > > > > > > >> > > > Unbinding 0000:04:00.0 from igb
>> > > > > > > >> > > > Binding 0000:04:00.0 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:04:00.1
>> > > > > > > >> > > > Unbinding 0000:04:00.1 from igb
>> > > > > > > >> > > > Binding 0000:04:00.1 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:05:00.0
>> > > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
>> > > > > > > >> > > > Binding 0000:05:00.0 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:05:00.1
>> > > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
>> > > > > > > >> > > > Binding 0000:05:00.1 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > Listing PCI Devices Available to Xen
>> > > > > > > >> > > > 0000:03:00.0
>> > > > > > > >> > > > 0000:03:00.1
>> > > > > > > >> > > > 0000:04:00.0
>> > > > > > > >> > > > 0000:04:00.1
>> > > > > > > >> > > > 0000:05:00.0
>> > > > > > > >> > > > 0000:05:00.1
>> > > > > > > >> > > >
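The dev_mgmt.sh script itself is not shown in the thread, only its output. A minimal sketch of the sysfs unbind/bind sequence such a script typically performs is below; the `pciback` driver name and the `/sys/bus/pci` paths are the standard Linux driver-binding interface, while the `DRY_RUN` guard is purely illustrative and not part of the original script:

```shell
#!/bin/sh
# Sketch: detach a PCI function from its current driver and hand it to
# xen-pciback via sysfs. With DRY_RUN=1 (the default here) the writes are
# only echoed, so the sequence can be inspected without real hardware.
DRY_RUN=${DRY_RUN:-1}

sysfs_write() {
    # $1 = value to write, $2 = sysfs target file
    if [ "$DRY_RUN" = "1" ]; then
        echo "echo $1 > $2"
    else
        echo "$1" > "$2"
    fi
}

pciback_bind() {
    dev="$1"    # full BDF, e.g. 0000:03:00.0
    drv="/sys/bus/pci/devices/$dev/driver"
    # Unbind from the current driver (igb, bnx2, ...) if one is attached.
    [ -e "$drv" ] && sysfs_write "$dev" "$drv/unbind"
    # Tell pciback about the slot, then bind the device to it.
    sysfs_write "$dev" /sys/bus/pci/drivers/pciback/new_slot
    sysfs_write "$dev" /sys/bus/pci/drivers/pciback/bind
}

for d in 0000:03:00.0 0000:03:00.1; do
    pciback_bind "$d"
done
```

Devices bound this way then show up under `xl pci-assignable-list`, matching the listing above.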
>> > > > > > > >> > > > ###########################################################
>> > > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
>> > > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>> > > > > > > >> > > > WARNING: ignoring device_model directive.
>> > > > > > > >> > > > WARNING: Use "device_model_override" instead if you really want a non-default device_model
>> > > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
>> > > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
>> > > > > > > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
>> > > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
>> > > > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
>> > > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
>> > > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
>> > > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
>> > > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>> > > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
>> > > > > > > >> > > >   Modules:       0000000000000000->0000000000000000
>> > > > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
>> > > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
>> > > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>> > > > > > > >> > > >   4KB PAGES: 0x0000000000000200
>> > > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
>> > > > > > > >> > > >   1GB PAGES: 0x0000000000000000
>> > > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
>> > > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
>> > > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
>> > > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
>> > > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>> > > > > > > >> > > >     "id": 1
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "query-chardev",
>> > > > > > > >> > > >     "id": 2
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "change",
>> > > > > > > >> > > >     "id": 3,
>> > > > > > > >> > > >     "arguments": {
>> > > > > > > >> > > >         "device": "vnc",
>> > > > > > > >> > > >         "target": "password",
>> > > > > > > >> > > >         "arg": ""
>> > > > > > > >> > > >     }
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "query-vnc",
>> > > > > > > >> > > >     "id": 4
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
>> > > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>> > > > > > > >> > > >     "id": 1
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "device_add",
>> > > > > > > >> > > >     "id": 2,
>> > > > > > > >> > > >     "arguments": {
>> > > > > > > >> > > >         "driver": "xen-pci-passthrough",
>> > > > > > > >> > > >         "id": "pci-pt-03_00.0",
>> > > > > > > >> > > >         "hostaddr": "0000:03:00.0"
>> > > > > > > >> > > >     }
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
>> > > > > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
>> > > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
>> > > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
>> > > > > > > >> > > > Daemon running with PID 3214
>> > > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total releases:793
>> > > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
>> > > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
>> > > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
>> > > > > > > >> > > >
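The QMP traffic in the log above is newline-delimited JSON sent by libxl over the UNIX socket at /var/run/xen/qmp-libxl-2. A minimal sketch of how such commands are assembled is below; the `qmp_command` helper name is illustrative, not a libxl function, and only the message shapes are taken from the log:

```python
import json

def qmp_command(execute, cmd_id, arguments=None):
    """Build one QMP command line like those shown by qmp_send_prepare."""
    cmd = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# The capabilities handshake, and the device_add after which QEMU dies
# in the log (Socket read error: Connection reset by peer):
handshake = qmp_command("qmp_capabilities", 1)
passthrough = qmp_command("device_add", 2, {
    "driver": "xen-pci-passthrough",
    "id": "pci-pt-03_00.0",
    "hostaddr": "0000:03:00.0",
})
```

The connection reset right after `device_add` means QEMU itself crashed while realizing the passthrough device, which matches the qemu-dm log further down.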
>> > > > > > > >> > > > ###########################################################
>> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > > > > >> > > > CPU #0:
>> > > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> > > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> > > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> > > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
>> > > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
>> > > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
>> > > > > > > >> > > > GDT=     00000000 0000ffff
>> > > > > > > >> > > > IDT=     00000000 0000ffff
>> > > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> > > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> > > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
>> > > > > > > >> > > > EFER=0000000000000000
>> > > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> > > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> > > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> > > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> > > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> > > > > > > >> > > > XMM00=00000000000000000000000000000000
>> > > > > > > >> > > > XMM01=00000000000000000000000000000000
>> > > > > > > >> > > > XMM02=00000000000000000000000000000000
>> > > > > > > >> > > > XMM03=00000000000000000000000000000000
>> > > > > > > >> > > > XMM04=00000000000000000000000000000000
>> > > > > > > >> > > > XMM05=00000000000000000000000000000000
>> > > > > > > >> > > > XMM06=00000000000000000000000000000000
>> > > > > > > >> > > > XMM07=00000000000000000000000000000000
>> > > > > > > >> > > > CPU #1:
>> > > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>> > > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>> > > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>> > > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
>> > > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
>> > > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
>> > > > > > > >> > > > GDT=     00000000 0000ffff
>> > > > > > > >> > > > IDT=     00000000 0000ffff
>> > > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>> > > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>> > > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
>> > > > > > > >> > > > EFER=0000000000000000
>> > > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>> > > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>> > > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>> > > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>> > > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>> > > > > > > >> > > > XMM00=00000000000000000000000000000000
>> > > > > > > >> > > > XMM01=00000000000000000000000000000000
>> > > > > > > >> > > > XMM02=00000000000000000000000000000000
>> > > > > > > >> > > > XMM03=00000000000000000000000000000000
>> > > > > > > >> > > > XMM04=00000000000000000000000000000000
>> > > > > > > >> > > > XMM05=00000000000000000000000000000000
>> > > > > > > >> > > > XMM06=00000000000000000000000000000000
>> > > > > > > >> > > > XMM07=00000000000000000000000000000000
>> > > > > > > >> > > >
>> > > > > > > >> > > > ###########################################################
>> > > > > > > >> > > > /etc/default/grub
>> > > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
>> > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
>> > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
>> > > > > > > >> > > > GRUB_TIMEOUT=10
>> > > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
>> > > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
>> > > > > > > >> > > > GRUB_CMDLINE_LINUX=""
>> > > > > > > >> > > > # biosdevname=0
>> > > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>> > > > > > > >> > >
>> > > > > > > >> > > > _______________________________________________
>> > > > > > > >> > > > Xen-devel mailing list
>> > > > > > > >> > > > Xen-devel@lists.xen.org
>> > > > > > > >> > > > http://lists.xen.org/xen-devel
>> > > > > > > >> > >
>> > > > > > > >> > >
>> > > > > > > >>
>> > > > > > > >
>> > > > > > > >
>> > > > > >
>> > > >
>> >
>>
>
> >
>

--089e0160c3ae3238a404f1e6e697
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">I will give it a shot. =A0Thanks!</div><div class=3D"gmail=
_extra"><br><br><div class=3D"gmail_quote">On Sat, Feb 8, 2014 at 10:36 AM,=
 Konrad Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad.wilk@oracle.com=
" target=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span> wrote:<br>

<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div><div style=3D"font-size:12pt;font-famil=
y:Times New Roman"><br>----- <a href=3D"mailto:mikeneiderhauser@gmail.com" =
target=3D"_blank">mikeneiderhauser@gmail.com</a> wrote:
<br>&gt; <div dir=3D"ltr"><div class=3D"">&gt; I followed this site (<a hre=
f=3D"http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions" target=
=3D"_blank">http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions</=
a>).</div>

<div><div class=3D"">and then followed (<a href=3D"http://wiki.xen.org/wiki=
/Compiling_Xen_From_Source" target=3D"_blank">http://wiki.xen.org/wiki/Comp=
iling_Xen_From_Source</a>)<br>&gt;=20

<div><br></div></div><div>Ah, so you are looking for the=A0<span style=3D"f=
ont-size:12pt">=A0 =A0 xen_pt: Fix passthrough of device with ROM.</span></=
div><div><span style=3D"font-size:12pt">which is not in the Xen 4.4-rc3 but=
 in the master.</span></div>

<div><br></div><div>One thing you can do is:</div><div><br></div><div>cd xe=
n/tools/qemu-xen-dir</div><div>git fetch upstream</div><div>git checkout or=
igin/master</div><div>[you should see: &quot;<span style=3D"font-size:12pt"=
>HEAD is now at 027c412... configure: Disable libtool if -fPIE does not wor=
k with it (bug #1257099)&quot;]</span></div>

<div><span style=3D"font-size:12pt"><br></span></div><div><span style=3D"fo=
nt-size:12pt">Go back to main xen directory:</span></div><div>cd ../../../<=
/div><div>./configure</div><div>make=A0</div><div>make install</div><div><b=
r>

</div><div>and you should be using now an newer version of QEMU with the fi=
x.</div><div><div class=3D"h5"><div><br></div><div><br></div><div>&gt; </di=
v><div><pre style=3D"line-height:1.3em;font-size:15px;background-color:rgb(=
250,250,250);border:1px solid rgb(221,221,221);padding:1em">

<span style=3D"font-family:arial;line-height:1.3em">git clone -b 4.4.0-rc3 =
git://<a href=3D"http://xenbits.xen.org/xen.git" target=3D"_blank">xenbits.=
xen.org/xen.git</a></span><br>&gt;=20

</pre><pre style=3D"padding:1em;border:1px solid rgb(221,221,221);backgroun=
d-color:rgb(250,250,250)"><span style=3D"line-height:1.3em;font-size:15px;f=
ont-family:arial">Had to take some additional steps here to get all of the =
libs
# apt-get install build-essential=20
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-open=
ssl-dev bzip2 module-init-tools transfig tgif=20
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install texinfo texlive-latex-base texlive-latex-recommended texli=
ve-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twi=
sted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml=
-findlib libx11-dev bison flex xz-utils libyajl-dev
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install gettext
apt-get install </span><span style=3D"background-color:rgb(255,255,255);fon=
t-size:15px;line-height:19.5px"><font color=3D"#000000" face=3D"arial">liba=
io-dev
apt-get install libpixman-1-dev</font></span></pre><pre style=3D"line-heigh=
t:1.3em;font-size:15px;background-color:rgb(250,250,250);border:1px solid r=
gb(221,221,221);padding:1em"><span style=3D"line-height:1.3em;font-family:a=
rial">./configure
make dist
make install</span></pre></div></div></div></div></div><div><div class=3D"h=
> On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > > I did not use the patch.  I was assuming it was already patched given
> > > previous email.  Is the patch for qemu source or xen source?
> >
> > It is for QEMU, but you are right - it should have been part
> > of QEMU if you got the latest version of Xen-unstable.
> >
> > You didn't use some specific tag but just 'staging'?
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection
> > > > reset by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error:
> > > > Connection refused
> > > > root@fiat:~# xl list
> > > > Name                                        ID   Mem VCPUs      State   Time(s)
> > > > Domain-0                                     0  1024     1     r-----      15.2
> > > > ubuntu-hvm-0                                 1  1025     1     ------       0.0
> > > >
> > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to
> > > > be allocated)
> > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input
> > > > to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
> > > > (XEN) PCI add device 0000:00:1e.0
> > > > (XEN) PCI add device 0000:00:1f.0
> > > > (XEN) PCI add device 0000:00:1f.2
> > > > (XEN) PCI add device 0000:00:1f.3
> > > > (XEN) PCI add device 0000:01:00.0
> > > > (XEN) PCI add device 0000:02:02.0
> > > > (XEN) PCI add device 0000:02:04.0
> > > > (XEN) PCI add device 0000:03:00.0
> > > > (XEN) PCI add device 0000:03:00.1
> > > > (XEN) PCI add device 0000:04:00.0
> > > > (XEN) PCI add device 0000:04:00.1
> > > > (XEN) PCI add device 0000:05:00.0
> > > > (XEN) PCI add device 0000:05:00.1
> > > > (XEN) PCI add device 0000:06:03.0
> > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
> > > > (200 of 1024)
> > > > (d1) HVM Loader
> > > > (d1) Detected Xen v4.4-rc2
> > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > (d1) System requested SeaBIOS
> > > > (d1) CPU speed is 3093 MHz
> > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > >
> > > > Excerpt from /var/log/xen/*
> > > > qemu: hardware error: xen: failed to populate ram at 40050000
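[One way to read the "Over-allocation for domain 1: 262401 > 262400" line in the pasted log: the domain's cap is 262400 4 KiB pages, which is exactly the 1025 MiB that `xl list` shows for ubuntu-hvm-0 (memory=1024 plus what appears to be 1 MiB of toolstack slack), and the populate request was for one page more than that. A quick shell-arithmetic check of the units:]

```shell
# 262400 pages x 4 KiB/page expressed in MiB -- matches the 1025 MiB
# that "xl list" reports for the guest (memory=1024 plus ~1 MiB slack).
pages=262400
echo $(( pages * 4 / 1024 ))   # cap in MiB
echo $(( 262401 - pages ))     # pages requested over the cap
```

[So the failure is off by exactly one page, which is why it only shows up once the device model starts populating extra RAM.]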
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > > getting the error:
> > > > > >
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No
> > > > > > such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> > > > > > file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've googled for this and an article appears, but it is not the same
> > > > > > (as far as I can tell).  Running any xl command generates a similar error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > > but they are usually put in /etc/init.d/rc.d/xen*
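[The "Could not obtain handle on privileged command interface" failure quoted above is what xl prints when the privcmd interface is not available, i.e. when the Xen daemons have not been started. A minimal sketch of what the advice amounts to, assuming the stock upstream sysvinit script names (`xencommons`, `xendomains`; the exact paths vary by distro, as the reply notes):]

```shell
# Print the start commands for the stock Xen sysvinit scripts, in order.
# xencommons mounts xenfs and starts xenstored/xenconsoled, which is what
# xl needs before any command works; xendomains is the optional autostart.
xen_initscripts() {
    for svc in xencommons xendomains; do
        echo "/etc/init.d/$svc start"
    done
}
xen_initscripts
```

[Running the printed commands as root (and enabling them at boot with the distro's own tools) is what made `xl` work in the 3:45 PM follow-up.]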
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > > > > > > Much. Do I need to install from src or is there a package I can install?
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > > On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > > > > I did not.  I do not have the toolchain installed.  I may have time
> > > > > > > > > later today to try the patch.  Are there any specific instructions on
> > > > > > > > > how to patch the src, compile and install?
> > > > > > > >
> > > > > > > > There actually should be a new version of Xen 4.4-rcX which will have
> > > > > > > > the fix. That might be easier for you?
> > > > > > > >
> > > > > > > > > Regards
> > > > > > > > >
> > > > > > > > > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > > > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > > > > > > > Hi all,
> > > > > > > > > > >
> > > > > > > > > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G
> > > > > > > > > > > NIC) to a HVM.  I have been attempting to resolve this issue on
> > > > > > > > > > > the xen-users list, but it was advised to post this issue to this
> > > > > > > > > > > list. (Initial Message -
> > > > > > > > > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > > > > > > >
> > > > > > > > > > > The machine I am using as host is a Dell Poweredge server with a
> > > > > > > > > > > Xeon E31220 with 4GB of ram.
> > > > > > > > > > >
> > > > > > > > > > > The possible bug is the following:
> > > > > > > > > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > > > > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > > > > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > > > > > > ....
> > > > > > > > > > >
> > > > > > > > > > > I believe it may be similar to this thread:
> > > > > > > > > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > > > > > >
> > > > > > > > > > > Additional info that may be helpful is below.
> > > > > > > > > >
> > > > > > > > > > Did you try the patch?
> > > > > > > > > >
> > > > > > > > > > > Please let me know if you need any additional information.
> > > > > > > > > > >
> > > > > > > > > > > Thanks in advance for any help provided!
> > > > > > > > > > > Regards
> > > > > > > > > > >
> > > > > > > > > > > ###########################################################
> > > > > > > > > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > > > ###########################################################
> > > > > > > > > > > # Configuration file for Xen HVM
> > > > > > > > > > >
> > > > > > > > > > > # HVM Name (as appears in 'xl list')
> > > > > > > > > > > name="ubuntu-hvm-0"
> > > > > > > > > > > # HVM Build settings (+ hardware)
> > > > > > > > > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > > > > > > builder='hvm'
> > > > > > > > > > > device_model='qemu-dm'
> > > > > > > > > > > memory=1024
> > > > > > > > > > > vcpus=2
> > > > > > > > > > >
> > > > > > > > > > > # Virtual Interface
> > > > > > > > > > > # Network bridge to USB NIC
> > > > > > > > > > > vif=['bridge=xenbr0']
> > > > > > > > > > >
> > > > > > > > > > > ################### PCI PASSTHROUGH ###################
> > > > > > > > > > > # PCI Permissive mode toggle
> > > > > > > > > > > #pci_permissive=1
> > > > > > > > > > >
> > > > > > > > > > > # All PCI Devices
> > > > > > > > > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > > > > > > >
> > > > > > > > > > > # First two ports on Intel 4x1G NIC
> > > > > > > > > > > #pci=['03:00.0','03:00.1']
> > > > > > > > > > >
> > > > > > > > > > > # Last two ports on Intel 4x1G NIC
> > > > > > > > > > > #pci=['04:00.0', '04:00.1']
> > > > > > > > > > >
> > > > > > > > > > > # All ports on Intel 4x1G NIC
> > > > > > > > > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > > > > > >
> > > > > > > > > > > # Broadcom 2x1G NIC
> > > > > > > > > > > #pci=['05:00.0', '05:00.1']
> > > > > > > > > > > ################### PCI PASSTHROUGH ###################
> > > > > > > > > > >
> > > > > > > > > > > # HVM Disks
> > > > > > > > > > > # Hard disk only
> > > > > > > > > > > # Boot from HDD first ('c')
> > > > > > > > > > > boot="c"
> > > > > > > > > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > > > > > >
> > > > > > > > > > > # Hard disk with ISO
> > > > > > > > > > > # Boot from ISO first ('d')
> > > > > > > > > > > #boot="d"
> > > > > > > > > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > > > > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > > > > > >
> > > > > > > > > > > # ACPI Enable
> > > > > > > > > > > acpi=1
> > > > > > > > > > > # HVM Event Modes
> > > > > > > > > > > on_poweroff='destroy'
> > > > > > > > > > > on_reboot='restart'
> > > > > > > > > > > on_crash='restart'
> > > > > > > > > > >
> > > > > > > > > > > # Serial Console Configuration (Xen Console)
> > > > > > > > > > > sdl=0
> > > > > > > > > > > serial='pty'
> > > > > > > > > > >
> > > > > > > > > > > # VNC Configuration
> > > > > > > > > > > # Only reachable from localhost
> > > > > > > > > > > vnc=1
> > > > > > > > > > > vnclisten="0.0.0.0"
> > > > > > > > > > > vncpasswd=""
> > > > > > > > > > >
> > > > > > > > > > > ###########################################################
> > > > > > > > > > > Copied for xen-users list
> > > > > > > > > > > ###########################################################
> > > > > > > > > > >
> > > > > > > > > > > It appears that it cannot obtain the RAM mapping for this PCI
> > > > > > > > > > > device.
> > > > > > > > > > >
> > > > > > > > > > > I rebooted the host and assigned the PCI devices to pciback. The
> > > > > > > > > > > output looks like:
> > > > > > > > > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > > > > > > Loading Kernel Module 'xen-pciback'
> > > > > > > > > > > Calling function pciback_dev for:
> > > > > > > > > > > PCI DEVICE 0000:03:00.0
> > > > > > > > > > > Unbinding 0000:03:00.0 from igb
> > > > > > > > > > > Binding 0000:03:00.0 to pciback
> > > > > > > > > > >
> > > > > > > > > > > PCI DEVICE 0000:03:00.1
> > > > > > > > > > > Unbinding 0000:03:00.1 from igb
> > > > > > > > > > > Binding 0000:03:00.1 to pciback
> > > > > > > > > > >
> > > > > > > > > > > PCI DEVICE 0000:04:00.0
> > > > > > > > > > > Unbinding 0000:04:00.0 from igb
> > > > > > > > > > > Binding 0000:04:00.0 to pciback
> > > > > > > > > > >
> > > > > > > > > > > PCI DEVICE 0000:04:00.1
> > > > > > > > > > > Unbinding 0000:04:00.1 from igb
> > > > > > > > > > > Binding 0000:04:00.1 to pciback
> > > > > > > > > > >
> > > > > > > > > > > PCI DEVICE 0000:05:00.0
> > > > > > > > > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > > > > > > Binding 0000:05:00.0 to pciback
> > > > > > > > > > >
> > > > > > > > > > > PCI DEVICE 0000:05:00.1
> > > > > > > > > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > > > > > > Binding 0000:05:00.1 to pciback
> > > > > > > > > > >
> > > > > > > > > > > Listing PCI Devices Available to Xen
> > > > > > > > > > > 0000:03:00.0
> > > > > > > > > > > 0000:03:00.1
> > > > > > > > > > > 0000:04:00.0
> > > > > > > > > > > 0000:04:00.1
> > > > > > > > > > > 0000:05:00.0
> > > > > > > > > > > 0000:05:00.1
> > > > > > > > > > >
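[The unbind/bind steps that dev_mgmt.sh reports can be reproduced directly against sysfs. A minimal sketch, under the assumption that the standard xen-pciback sysfs nodes are in use (the driver registers as "pciback"); the function only prints the commands, so piping its output to `sh` as root would apply them. The BDF is taken from the listing above:]

```shell
# Emit the sysfs writes that hand one PCI device (BDF form 0000:bb:dd.f)
# from its current driver (e.g. igb or bnx2) to xen-pciback, mirroring
# what the dev_mgmt.sh log shows for each device.
pciback_hand_over() {
    bdf=$1
    echo "modprobe xen-pciback"
    echo "echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind"
    echo "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot"
    echo "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
}
pciback_hand_over 0000:03:00.0
```

[On the xl toolstack this handover is also wrapped by `xl pci-assignable-add`, which is usually the easier route when the pciback module is already loaded.]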
> > > > > > > > > > > ###########################################################
> > > > > > > > > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > > > WARNING: ignoring device_model directive.
> > > > > > > > > > > WARNING: Use "device_model_override" instead if you really want a
> > > > > > > > > > > non-default device_model
> > > > > > > > > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360:
> > > > > > > > > > > create: how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > > > > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > > > > > > Disk vdev=hda spec.backend=unknown
> > > > > > > > > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > > > > > > > Disk vdev=hda, using backend phy
> > > > > > > > > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > > > > > > > > > > bootloader
> > > > > > > > > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a
> > > > > > > > > > > PV domain, skipping bootloader
> > > > > > > > > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > > > > > > > watch w=0x210c728: deregister unregistered
> > > > > > > > > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best
> > > > > > > > > > > NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > > > > > > > > > free_memkb=2980
> > > > > > > > > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > > > > > > > > > > candidate with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > > > > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > > > > > > > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > > > > > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > > > > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > > > > > > >   Modules:       0000000000000000->0000000000000000
> > > > > > > > > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > > > > > > >   ENTRY ADDRESS: 0000000000100608
> > > > > > > > > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > > > > > >   4KB PAGES: 0x0000000000000200
> > > > > > > > > > >   2MB PAGES: 0x00000000000001fb
> > > > > > > > > > >   1GB PAGES: 0x0000000000000000
> > > > > > > > > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > > > > > > > 0x7f022c81682d
> > > > > > > > > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > > > > > > Disk vdev=hda spec.backend=phy
> > > > > > > > > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > > > > > > > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > > > token=3/0: register slotnum=3
> > > > > > > > > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > > > > > > > > > inprogress: poller=0x210c3c0, flags=i
> > > > > > > > > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > > > > > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > > > token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > > > > > > > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > > > > > > > > > > waiting state 1
> > > > > > > > > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > > > > > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > > > token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > > > > > > > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > > > > > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > > > > > > > watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > > > > > > > token=3/0: deregister slotnum=3
> > > > > > > > > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > > > > > > > watch w=0x2112f48: deregister unregistered
> > > > > > > > > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> > > > > > > > > > > script: /etc/xen/scripts/block add
> > > > > > > > > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > > > > > > > > > > device-model /usr/bin/qemu-system-i386 with arguments:
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > > > > > > /usr/bin/qemu-system-i386
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > > > > > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > > > > > > chardev=libxl-cmd,mode=control
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > > > > > > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 0.0.0.0:
0</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; ,to=3D99<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -global<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; isa-fdc.driveA=3D<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -serial<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 pty<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 -vga<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; cirrus<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -global<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; vga.vram_size_mb=3D8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 -boot<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; order=3Dc<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 -smp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; 2,maxcpus=3D2<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -device<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; rtl8139,id=3Dnic0,netdev=
=3Dnet0,mac=3D00:16:3e:23:44:2c<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -netdev<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; type=3Dtap,id=3Dnet0,ifnam=
e=3Dvif2.0-emu,script=3Dno,downscript=3Dno<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 -M<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 xenfv<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 -m<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 1016<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -drive<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=3Ddis=
k,format=3Draw,cache=3Dwriteback<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/loc=
al/domain/0/device-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210c960<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/de=
vice-model/2/state token=3D3/1: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/de=
vice-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210c960<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/de=
vice-model/2/state token=3D3/1: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/de=
vice-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/loc=
al/domain/0/device-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960: deregister =
unregistered<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; connected<br>&gt;=20
&gt; &gt; &gt; &gt; to<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; &gt; &gt; type: qmp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;query-chardev&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;change&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 3,=
<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;devi=
ce&quot;: &quot;vnc&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;targ=
et&quot;: &quot;password&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;arg&=
quot;: &quot;&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;query-vnc&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 4<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/2:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:647:devstate_watch_callback:<br>&gt;=20
&gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 still<br>&gt;=20
&gt; &gt; &gt; &gt; waiting<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; 1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:643:devstate_watch_callback:<br>&gt;=20
&gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 ok<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/2:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister =
unregistered<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>&gt;=20
&gt; &gt; hotplug<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e online<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>&gt;=20
&gt; &gt; hotplug<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e add<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; connected<br>&gt;=20
&gt; &gt; &gt; &gt; to<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; &gt; &gt; type: qmp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;device_add&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,=
<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driv=
er&quot;: &quot;xen-pci-passthrough&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&q=
uot;: &quot;pci-pt-03_00.0&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;host=
addr&quot;: &quot;0000:03:00.0&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
454:qmp_next: Socket read error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; reset<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; by peer<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:=
81:libxl__create_pci_backend:<br>&gt;=20
&gt; &gt; &gt; &gt; Creating pci<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1737:libxl__ao_progress_report:<br>&gt;=20
&gt; &gt; ao<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1569:libxl__ao_complete: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1541:libxl__ao__destroy: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 32=
14<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: total allocations:793 total<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; releases:793<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: current allocations:0 maximum<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; allocations:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache current size:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache hits:785 misses:4<br>&gt;=20
&gt; &gt; toobig:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# ca=
t qemu-dm-ubuntu-hvm-0.log<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to =
/dev/pts/5 (label serial0)<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>&gt;=20
&gt; &gt; 40030000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4=
.3-amd64&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>&gt;=20
&gt; &gt; Debian`<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D&quot;quiet splash&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot=
;&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;d=
om0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org" target=3D"_blank">Xen-devel@lists.xen.org</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
</div></div></blockquote></div><br>&gt; </div>
</div></div></div></div></blockquote></div><br></div>

--089e0160c3ae3238a404f1e6e697--


--===============8116040961981903080==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8116040961981903080==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 15:54:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 15:54:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCAEc-0007xT-Qt; Sat, 08 Feb 2014 15:54:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WC9zM-0007Jt-7H
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 15:38:40 +0000
Received: from [85.158.143.35:8502] by server-3.bemta-4.messagelabs.com id
	8E/90-11539-F7F46F25; Sat, 08 Feb 2014 15:38:39 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1391873913!4155904!1
X-Originating-IP: [209.85.220.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30492 invoked from network); 8 Feb 2014 15:38:34 -0000
Received: from mail-vc0-f174.google.com (HELO mail-vc0-f174.google.com)
	(209.85.220.174)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 15:38:34 -0000
Received: by mail-vc0-f174.google.com with SMTP id im17so3592044vcb.33
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 07:38:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=Mh6F/6m+yrQmCBCfhw88bidbtR7odjiHS75k3LXBGmo=;
	b=GCNoMGb8pZCnTzbjtzRfRPZBd4XjogitkcGH8K2OfN1KIIIUjCPeeGmvPtAyiHsPoM
	Cj7UraQ7B3+AI6oAFs1qeDkCow5ZjAdFg/DIerow8hzdt96BxvAeE/Ci6odT4ZxtfTgR
	ivEwIznwKdoMi3KNg272vbXM25wLZvcIrOSb3SFcMc1fwNcNAcmoKffeuNME+RG1tUiL
	NwD2Pw7LYngQOusbaL16krtbFqLONuut3n2yMe9lpHslajYl3ZYqKiAQaJpWnrozFQGw
	5iib49yqt2Hk97E5KV8jxmDvNTjt9GY09oT+4drkx1J0Y/Z46q8v7MveEGuw7CqLje9S
	ZkmQ==
X-Received: by 10.52.160.233 with SMTP id xn9mr62760vdb.48.1391873913663; Sat,
	08 Feb 2014 07:38:33 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Sat, 8 Feb 2014 07:37:53 -0800 (PST)
In-Reply-To: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Sat, 8 Feb 2014 10:37:53 -0500
Message-ID: <CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Sat, 08 Feb 2014 15:54:25 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug


I will give it a shot.  Thanks!


On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:

>
> ----- mikeneiderhauser@gmail.com wrote:
> >
> > I followed this site (
> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions).
> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)
> >
>
> Ah, so you are looking for "xen_pt: Fix passthrough of device with ROM.",
> which is not in Xen 4.4-rc3 but is in master.
>
> One thing you can do is:
>
> cd xen/tools/qemu-xen-dir
> git fetch upstream
> git checkout origin/master
> [you should see: "HEAD is now at 027c412... configure: Disable libtool if
> -fPIE does not work with it (bug #1257099)"]
>
> Go back to the main xen directory:
> cd ../../../
> ./configure
> make
> make install
>
> and you should now be using a newer version of QEMU with the fix.
>
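Once the checkout has moved, a quick way to confirm the fix actually landed in the tree is to grep the commit subjects. This is a hedged sketch, not part of the original instructions; the helper name `has_fix` is invented here, and the commit subject is the one named above.

```shell
# has_fix: read "git log --oneline" output on stdin and succeed if any
# commit subject contains the given string.
has_fix() {
  grep -q -F "$1"
}

# Intended use inside tools/qemu-xen-dir after the fetch/checkout above:
#   git log --oneline origin/master | has_fix "xen_pt: Fix passthrough of device with ROM"
```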
>
> >
>
> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
> >
>
>
> Had to take some additional steps here to get all of the libs
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> apt-get install libaio-dev
> apt-get install libpixman-1-dev
>
> ./configure
> make dist
> make install
>
> >
> >
> >
> > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> >
>>
>> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
>> > > I did not use the patch.  I was assuming it was already patched given
>> > > previous email.  Is the patch for qemu source or xen source?
>> >
>> >
>> It is for QEMU, but you are right - it should have been part
>> > of QEMU if you got the latest version of Xen-unstable.
>> >
>> > You didn't use some specific tag but just 'staging' ?
>> >
>> >
>> >
>> > >
>> > >
>> > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
>> > > konrad.wilk@oracle.com> wrote:
>> > >
>> > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
>> > > > > Ok. I ran the initscripts and now xl works.
>> > > > >
>> > > > > However, I still see the same behavior as before:
>> > > > >
>> > > >
>> > > > Did you use the patch that was mentioned in the URL?
>> > > >
>> > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
>> > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>> > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>> > > > > root@fiat:~# xl list
>> > > > > Name                                        ID   Mem VCPUs State   Time(s)
>> > > > > Domain-0                                     0  1024     1 r-----     15.2
>> > > > > ubuntu-hvm-0                                 1  1025     1 ------      0.0
>> > > > >
>> > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
>> > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
>> > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690
>> pages to
>> > > > > be allocated)
>> > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
>> > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
>> > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
>> > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
>> > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
>> > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
>> > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
>> > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
>> > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
>> > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
>> > > > > (XEN) Dom0 has maximum 1 VCPUs
>> > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 ->
>> 0xffffffff81b2f000
>> > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 ->
>> 0xffffffff81d0f0f0
>> > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 ->
>> 0xffffffff81d252c0
>> > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 ->
>> 0xffffffff81e6d000
>> > > > > (XEN) Scrubbing Free RAM: .............................done.
>> > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>> > > > > (XEN) Std. Loglevel: All
>> > > > > (XEN) Guest Loglevel: All
>> > > > > (XEN) Xen is relinquishing VGA console.
>> > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to
>> switch input
>> > > > > to Xen)
>> > > > > (XEN) Freed 260kB init memory.
>> > > > > (XEN) PCI add device 0000:00:00.0
>> > > > > (XEN) PCI add device 0000:00:01.0
>> > > > > (XEN) PCI add device 0000:00:1a.0
>> > > > > (XEN) PCI add device 0000:00:1c.0
>> > > > > (XEN) PCI add device 0000:00:1d.0
>> > > > > (XEN) PCI add device 0000:00:1e.0
>> > > > > (XEN) PCI add device 0000:00:1f.0
>> > > > > (XEN) PCI add device 0000:00:1f.2
>> > > > > (XEN) PCI add device 0000:00:1f.3
>> > > > > (XEN) PCI add device 0000:01:00.0
>> > > > > (XEN) PCI add device 0000:02:02.0
>> > > > > (XEN) PCI add device 0000:02:04.0
>> > > > > (XEN) PCI add device 0000:03:00.0
>> > > > > (XEN) PCI add device 0000:03:00.1
>> > > > > (XEN) PCI add device 0000:04:00.0
>> > > > > (XEN) PCI add device 0000:04:00.1
>> > > > > (XEN) PCI add device 0000:05:00.0
>> > > > > (XEN) PCI add device 0000:05:00.1
>> > > > > (XEN) PCI add device 0000:06:03.0
>> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
>> > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
>> > > > > (200 of 1024)
>> > > > > (d1) HVM Loader
>> > > > > (d1) Detected Xen v4.4-rc2
>> > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
>> > > > > (d1) System requested SeaBIOS
>> > > > > (d1) CPU speed is 3093 MHz
>> > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
>> > > > >
>> > > > >
>> > > > > Excerpt from /var/log/xen/*
>> > > > > qemu: hardware error: xen: failed to populate ram at 40050000
>> > > > >
>> > > > >
>> > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
>> > > > > konrad.wilk@oracle.com> wrote:
>> > > > >
>> > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser
>> wrote:
>> > > > > > > I was able to compile and install xen4.4 RC3 on my host,
>> however I am
>> > > > > > > getting the error:
>> > > > > > >
>> > > > > > > root@fiat:~/git/xen# xl list
>> > > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
>> > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
>> > > > > > > cannot init xl context
>> > > > > > >
>> > > > > > > I've google searched for this and an article appears, but is
>> not the
>> > > > same
>> > > > > > > (as far as I can tell).  Running any xl command generates a
>> similar
>> > > > > > error.
>> > > > > > >
>> > > > > > > What can I do to fix this?
>> > > > > >
>> > > > > >
>> > > > > > You need to run the initscripts for Xen. I don't know what your distro is,
>> > > > > > but they are usually put in /etc/init.d/rc.d/xen*
>> > > > > >
>> > > > > >
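The "Could not obtain handle on privileged command interface" error usually means the Xen control interface (/proc/xen/privcmd) is not set up, which the distro initscripts normally do. A hedged sketch of starting them follows; the script names xencommons/xendomains are assumptions and vary by distro, so adjust to what your system actually ships.

```shell
# start_xen_services: run the Xen initscripts if they exist under the given
# directory (default /etc/init.d); otherwise print a hint instead of failing.
start_xen_services() {
  local dir="${1:-/etc/init.d}" svc
  for svc in xencommons xendomains; do
    if [ -x "$dir/$svc" ]; then
      "$dir/$svc" start
    else
      echo "$svc: not found in $dir; start it via your distro's service manager"
    fi
  done
}

# Usage on the host (as root):
#   start_xen_services
```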
>> > > > > > >
>> > > > > > > Regards
>> > > > > > >
>> > > > > > >
>> > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
>> > > > > > > mikeneiderhauser@gmail.com> wrote:
>> > > > > > >
>> > > > > > > > Much. Do I need to install from src or is there a package I can install?
>> > > > > > > >
>> > > > > > > > Regards
>> > > > > > > >
>> > > > > > > >
>> > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
>> > > > > > > > konrad.wilk@oracle.com> wrote:
>> > > > > > > >
>> > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike
>> Neiderhauser wrote:
>> > > > > > > >> > I did not.  I do not have the toolchain installed.  I
>> may have
>> > > > time
>> > > > > > > >> later
>> > > > > > > >> > today to try the patch.  Are there any specific
>> instructions on
>> > > > how
>> > > > > > to
>> > > > > > > >> > patch the src, compile and install?
>> > > > > > > >>
>> > > > > > > >> There actually should be a new version of Xen 4.4-rcX which will have the
>> > > > > > > >> fix. That might be easier for you?
>> > > > > > > >> >
>> > > > > > > >> > Regards
>> > > > > > > >> >
>> > > > > > > >> >
>> > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
>> > > > > > > >> > konrad.wilk@oracle.com> wrote:
>> > > > > > > >> >
>> > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
>> Neiderhauser
>> > > > wrote:
>> > > > > > > >> > > > Hi all,
>> > > > > > > >> > > >
>> > > > > > > >> > > > I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC)
>> > > > > > > >> > > > to an HVM.  I have been attempting to resolve this issue on the xen-users
>> > > > > > > >> > > > list, but it was advised to post this issue to this list. (Initial
>> > > > > > > >> > > > Message -
>> > > > > > > >> > > >
>> > > > > > > >> > >
>> > > > > > > >>
>> > > > > >
>> > > >
>> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>> > > > > > > >> )
>> > > > > > > >> > > >
>> > > > > > > >> > > > The machine I am using as the host is a Dell PowerEdge server with a Xeon
>> > > > > > > >> > > > E31220 and 4GB of RAM.
>> > > > > > > >> > > >
>> > > > > > > >> > > > The possible bug is the following:
>> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>> > > > > > > >> > > > ....
>> > > > > > > >> > > >
>> > > > > > > >> > > > I believe it may be similar to this thread
>> > > > > > > >> > > >
>> > > > > > > >> > >
>> > > > > > > >>
>> > > > > >
>> > > >
>> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> > > > > > > >> > > > Additional info that may be helpful is below.
>> > > > > > > >> > >
>> > > > > > > >> > > Did you try the patch?
>> > > > > > > >> > > >
>> > > > > > > >> > > > Please let me know if you need any additional
>> information.
>> > > > > > > >> > > >
>> > > > > > > >> > > > Thanks in advance for any help provided!
>> > > > > > > >> > > > Regards
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > # Configuration file for Xen HVM
>> > > > > > > >> > > >
>> > > > > > > >> > > > # HVM Name (as appears in 'xl list')
>> > > > > > > >> > > > name="ubuntu-hvm-0"
>> > > > > > > >> > > > # HVM Build settings (+ hardware)
>> > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>> > > > > > > >> > > > builder='hvm'
>> > > > > > > >> > > > device_model='qemu-dm'
>> > > > > > > >> > > > memory=1024
>> > > > > > > >> > > > vcpus=2
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Virtual Interface
>> > > > > > > >> > > > # Network bridge to USB NIC
>> > > > > > > >> > > > vif=['bridge=xenbr0']
>> > > > > > > >> > > >
>> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>> > > > > > > >> > > > # PCI Permissive mode toggle
>> > > > > > > >> > > > #pci_permissive=1
>> > > > > > > >> > > >
>> > > > > > > >> > > > # All PCI Devices
>> > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # First two ports on Intel 4x1G NIC
>> > > > > > > >> > > > #pci=['03:00.0','03:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
>> > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # All ports on Intel 4x1G NIC
>> > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Broadcom 2x1G NIC
>> > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
>> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>> > > > > > > >> > > >
>> > > > > > > >> > > > # HVM Disks
>> > > > > > > >> > > > # Hard disk only
>> > > > > > > >> > > > # Boot from HDD first ('c')
>> > > > > > > >> > > > boot="c"
>> > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Hard disk with ISO
>> > > > > > > >> > > > # Boot from ISO first ('d')
>> > > > > > > >> > > > #boot="d"
>> > > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w', 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>> > > > > > > >> > > >
>> > > > > > > >> > > > # ACPI Enable
>> > > > > > > >> > > > acpi=1
>> > > > > > > >> > > > # HVM Event Modes
>> > > > > > > >> > > > on_poweroff='destroy'
>> > > > > > > >> > > > on_reboot='restart'
>> > > > > > > >> > > > on_crash='restart'
>> > > > > > > >> > > >
>> > > > > > > >> > > > # Serial Console Configuration (Xen Console)
>> > > > > > > >> > > > sdl=0
>> > > > > > > >> > > > serial='pty'
>> > > > > > > >> > > >
>> > > > > > > >> > > > # VNC Configuration
>> > > > > > > >> > > > # Listens on all addresses (0.0.0.0)
>> > > > > > > >> > > > vnc=1
>> > > > > > > >> > > > vnclisten="0.0.0.0"
>> > > > > > > >> > > > vncpasswd=""
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > Copied for xen-users list
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > >
>> > > > > > > >> > > > It appears that it cannot obtain the RAM mapping for this PCI device.
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> > > > > > > >> > > > I rebooted the host and assigned the PCI devices to pciback. The output
>> > > > > > > >> > > > looks like:
>> > > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
>> > > > > > > >> > > > Loading Kernel Module 'xen-pciback'
>> > > > > > > >> > > > Calling function pciback_dev for:
>> > > > > > > >> > > > PCI DEVICE 0000:03:00.0
>> > > > > > > >> > > > Unbinding 0000:03:00.0 from igb
>> > > > > > > >> > > > Binding 0000:03:00.0 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:03:00.1
>> > > > > > > >> > > > Unbinding 0000:03:00.1 from igb
>> > > > > > > >> > > > Binding 0000:03:00.1 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:04:00.0
>> > > > > > > >> > > > Unbinding 0000:04:00.0 from igb
>> > > > > > > >> > > > Binding 0000:04:00.0 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:04:00.1
>> > > > > > > >> > > > Unbinding 0000:04:00.1 from igb
>> > > > > > > >> > > > Binding 0000:04:00.1 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:05:00.0
>> > > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
>> > > > > > > >> > > > Binding 0000:05:00.0 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > PCI DEVICE 0000:05:00.1
>> > > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
>> > > > > > > >> > > > Binding 0000:05:00.1 to pciback
>> > > > > > > >> > > >
>> > > > > > > >> > > > Listing PCI Devices Available to Xen
>> > > > > > > >> > > > 0000:03:00.0
>> > > > > > > >> > > > 0000:03:00.1
>> > > > > > > >> > > > 0000:04:00.0
>> > > > > > > >> > > > 0000:04:00.1
>> > > > > > > >> > > > 0000:05:00.0
>> > > > > > > >> > > > 0000:05:00.1
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
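The dev_mgmt.sh script referenced above is not included in the thread; a hedged sketch of the per-device step such a helper performs (unbind from the current driver, hand the BDF to xen-pciback via the standard sysfs interface) might look like this. The function name is invented here; run as root on the host after `modprobe xen-pciback`.

```shell
# bind_to_pciback: detach one PCI function (e.g. 0000:03:00.0) from its
# current driver and register it with the pciback driver through sysfs.
bind_to_pciback() {
  local bdf="$1" dev="/sys/bus/pci/devices/$1"
  [ -e "$dev" ] || { echo "no such device: $bdf"; return 1; }
  # Unbind from the current driver (igb, bnx2, ...) if one is attached.
  [ -e "$dev/driver" ] && echo "$bdf" > "$dev/driver/unbind"
  # Tell pciback to accept this slot, then bind it.
  echo "$bdf" > /sys/bus/pci/drivers/pciback/new_slot
  echo "$bdf" > /sys/bus/pci/drivers/pciback/bind
}

# Usage:
#   modprobe xen-pciback && bind_to_pciback 0000:03:00.0
```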
>> > > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
>> > > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>> > > > > > > >> > > > WARNING: ignoring device_model directive.
>> > > > > > > >> > > > WARNING: Use "device_model_override" instead if you really want a
>> > > > > > > >> > > > non-default device_model
>> > > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
>> > > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
>> > > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
>> > > > > > > >> > > > vdev=hda spec.backend=unknown
>> > > > > > > >> > > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
>> > > > > > > >> > > > vdev=hda, using backend phy
>> > > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
>> > > > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
>> > > > > > > >> > > > domain, skipping bootloader
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x210c728: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
>> > > > > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
>> > > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
>> > > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>> > > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
>> > > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
>> > > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>> > > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
>> > > > > > > >> > > >   Modules:       0000000000000000->0000000000000000
>> > > > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
>> > > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
>> > > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>> > > > > > > >> > > >   4KB PAGES: 0x0000000000000200
>> > > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
>> > > > > > > >> > > >   1GB PAGES: 0x0000000000000000
>> > > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
>> > > > > > > >> > > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
>> > > > > > > >> > > > vdev=hda spec.backend=phy
>> > > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
>> > > > > > > >> > > > register slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
>> > > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
>> > > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
>> > > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
>> > > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
>> > > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
>> > > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
>> > > > > > > >> > > > deregister slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x2112f48: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
>> > > > > > > >> > > > /etc/xen/scripts/block add
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
>> > > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -xen-domid
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 2
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -chardev
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -mon
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -name
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: ubuntu-hvm-0
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -vnc
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 0.0.0.0:0,to=99
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -global
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: isa-fdc.driveA=
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -serial
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: pty
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -vga
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: cirrus
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -global
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: vga.vram_size_mb=8
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -boot
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: order=c
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -smp
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 2,maxcpus=2
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -device
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -netdev
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -M
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: xenfv
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -m
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: 1016
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: -drive
>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm: file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
>> > > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
>> > > > > > > >> > > > register slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
>> > > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
>> > > > > > > >> > > > epath=/local/domain/0/device-model/2/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
>> > > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
>> > > > > > > >> > > > epath=/local/domain/0/device-model/2/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
>> > > > > > > >> > > > deregister slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x210c960: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
>> > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>> > > > > > > >> > > >     "id": 1
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "query-chardev",
>> > > > > > > >> > > >     "id": 2
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "change",
>> > > > > > > >> > > >     "id": 3,
>> > > > > > > >> > > >     "arguments": {
>> > > > > > > >> > > >         "device": "vnc",
>> > > > > > > >> > > >         "target": "password",
>> > > > > > > >> > > >         "arg": ""
>> > > > > > > >> > > >     }
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>> > > > > > > >> > > >     "execute": "query-vnc",
>> > > > > > > >> > > >     "id": 4
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>> > > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> > > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
>> > > > > > > >> > > > register slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
>> > > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
>> > > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
>> > > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
>> > > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
>> > > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
>> > > > > > > >> > > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
>> > > > > > > >> > > > deregister slotnum=3
>> > > > > > > >> > > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> > > > > > > >> > > > w=0x210e8a8: deregister unregistered
>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
>> > > > > > > >> > > > /etc/xen/scripts/vif-bridge online
>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
>> > > > > > > >> > > > /etc/xen/scripts/vif-bridge add
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
>> > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>> message
>> > > > > > type: qmp
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: nex=
t
>> qmp
>> > > > > > command: '{
>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>> > > > > > > >> > > >     "id": 1
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>> message
>> > > > type:
>> > > > > > > >> return
>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: nex=
t
>> qmp
>> > > > > > command: '{
>> > > > > > > >> > > >     "execute": "device_add",
>> > > > > > > >> > > >     "id": 2,
>> > > > > > > >> > > >     "arguments": {
>> > > > > > > >> > > >         "driver": "xen-pci-passthrough",
>> > > > > > > >> > > >         "id": "pci-pt-03_00.0",
>> > > > > > > >> > > >         "hostaddr": "0000:03:00.0"
>> > > > > > > >> > > >     }
>> > > > > > > >> > > > }
>> > > > > > > >> > > > '
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read
>> error:
>> > > > > > > >> Connection
>> > > > > > > >> > > reset
>> > > > > > > >> > > > by peer
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize=
:
>> > > > Connection
>> > > > > > > >> error:
>> > > > > > > >> > > > Connection refused
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize=
:
>> > > > Connection
>> > > > > > > >> error:
>> > > > > > > >> > > > Connection refused
>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize=
:
>> > > > Connection
>> > > > > > > >> error:
>> > > > > > > >> > > > Connection refused
>> > > > > > > >> > > > libxl: debug:
>> libxl_pci.c:81:libxl__create_pci_backend:
>> > > > > > Creating pci
>> > > > > > > >> > > backend
>> > > > > > > >> > > > libxl: debug:
>> libxl_event.c:1737:libxl__ao_progress_report:
>> > > > ao
>> > > > > > > >> 0x210c360:
>> > > > > > > >> > > > progress report: ignored
>> > > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete=
:
>> ao
>> > > > > > 0x210c360:
>> > > > > > > >> > > > complete, rc=3D0
>> > > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy=
:
>> ao
>> > > > > > 0x210c360:
>> > > > > > > >> > > destroy
>> > > > > > > >> > > > Daemon running with PID 3214
>> > > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793
>> total
>> > > > > > > >> releases:793
>> > > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0
>> maximum
>> > > > > > > >> allocations:4
>> > > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
>> > > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:=
4
>> > > > toobig:4
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.lo=
g
>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0=
)
>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram a=
t
>> > > > 40030000
>> > > > > > > >> > > > CPU #0:
>> > > > > > > >> > > > EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D=
00000633
>> > > > > > > >> > > > ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D=
00000000
>> > > > > > > >> > > > EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=
=3D0 A20=3D1
>> SMM=3D0
>> > > > HLT=3D1
>> > > > > > > >> > > > ES =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > CS =3Df000 ffff0000 0000ffff 00009b00
>> > > > > > > >> > > > SS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > DS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > FS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > GS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > LDT=3D0000 00000000 0000ffff 00008200
>> > > > > > > >> > > > TR =3D0000 00000000 0000ffff 00008b00
>> > > > > > > >> > > > GDT=3D     00000000 0000ffff
>> > > > > > > >> > > > IDT=3D     00000000 0000ffff
>> > > > > > > >> > > > CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D=
00000000
>> > > > > > > >> > > > DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D=
00000000
>> > > > > > > >> > > > DR6=3Dffff0ff0 DR7=3D00000400
>> > > > > > > >> > > > EFER=3D0000000000000000
>> > > > > > > >> > > > FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D000=
01f80
>> > > > > > > >> > > > FPR0=3D0000000000000000 0000 FPR1=3D000000000000000=
0 0000
>> > > > > > > >> > > > FPR2=3D0000000000000000 0000 FPR3=3D000000000000000=
0 0000
>> > > > > > > >> > > > FPR4=3D0000000000000000 0000 FPR5=3D000000000000000=
0 0000
>> > > > > > > >> > > > FPR6=3D0000000000000000 0000 FPR7=3D000000000000000=
0 0000
>> > > > > > > >> > > > XMM00=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM01=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM02=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM03=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM04=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM05=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM06=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM07=3D00000000000000000000000000000000
>> > > > > > > >> > > > CPU #1:
>> > > > > > > >> > > > EAX=3D00000000 EBX=3D00000000 ECX=3D00000000 EDX=3D=
00000633
>> > > > > > > >> > > > ESI=3D00000000 EDI=3D00000000 EBP=3D00000000 ESP=3D=
00000000
>> > > > > > > >> > > > EIP=3D0000fff0 EFL=3D00000002 [-------] CPL=3D0 II=
=3D0 A20=3D1
>> SMM=3D0
>> > > > HLT=3D1
>> > > > > > > >> > > > ES =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > CS =3Df000 ffff0000 0000ffff 00009b00
>> > > > > > > >> > > > SS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > DS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > FS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > GS =3D0000 00000000 0000ffff 00009300
>> > > > > > > >> > > > LDT=3D0000 00000000 0000ffff 00008200
>> > > > > > > >> > > > TR =3D0000 00000000 0000ffff 00008b00
>> > > > > > > >> > > > GDT=3D     00000000 0000ffff
>> > > > > > > >> > > > IDT=3D     00000000 0000ffff
>> > > > > > > >> > > > CR0=3D60000010 CR2=3D00000000 CR3=3D00000000 CR4=3D=
00000000
>> > > > > > > >> > > > DR0=3D00000000 DR1=3D00000000 DR2=3D00000000 DR3=3D=
00000000
>> > > > > > > >> > > > DR6=3Dffff0ff0 DR7=3D00000400
>> > > > > > > >> > > > EFER=3D0000000000000000
>> > > > > > > >> > > > FCW=3D037f FSW=3D0000 [ST=3D0] FTW=3D00 MXCSR=3D000=
01f80
>> > > > > > > >> > > > FPR0=3D0000000000000000 0000 FPR1=3D000000000000000=
0 0000
>> > > > > > > >> > > > FPR2=3D0000000000000000 0000 FPR3=3D000000000000000=
0 0000
>> > > > > > > >> > > > FPR4=3D0000000000000000 0000 FPR5=3D000000000000000=
0 0000
>> > > > > > > >> > > > FPR6=3D0000000000000000 0000 FPR7=3D000000000000000=
0 0000
>> > > > > > > >> > > > XMM00=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM01=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM02=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM03=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM04=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM05=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM06=3D00000000000000000000000000000000
>> > > > > > > >> > > > XMM07=3D00000000000000000000000000000000
>> > > > > > > >> > > >
>> > > > > > > >> > > >
>> ###########################################################
>> > > > > > > >> > > > /etc/default/grub
>> > > > > > > >> > > > GRUB_DEFAULT=3D"Xen 4.3-amd64"
>> > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=3D0
>> > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=3Dtrue
>> > > > > > > >> > > > GRUB_TIMEOUT=3D10
>> > > > > > > >> > > > GRUB_DISTRIBUTOR=3D`lsb_release -i -s 2> /dev/null =
||
>> echo
>> > > > Debian`
>> > > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT=3D"quiet splash"
>> > > > > > > >> > > > GRUB_CMDLINE_LINUX=3D""
>> > > > > > > >> > > > # biosdevname=3D0
>> > > > > > > >> > > > GRUB_CMDLINE_XEN=3D"dom0_mem=3D1024M dom0_max_vcpus=
=3D1"
>> > > > > > > >> > >
>> > > > > > > >> > > > _______________________________________________
>> > > > > > > >> > > > Xen-devel mailing list
>> > > > > > > >> > > > Xen-devel@lists.xen.org
>> > > > > > > >> > > > http://lists.xen.org/xen-devel
>> > > > > > > >> > >
>> > > > > > > >> > >
>> > > > > > > >>
>> > > > > > > >
>> > > > > > > >
>> > > > > >
>> > > >
>> >
>>
>
> >
>
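[Editor's note: the failing QMP exchange in the log above (greeting, "qmp_capabilities" handshake, then "device_add" of the xen-pci-passthrough device) can be sketched in Python. This is a minimal illustration of the wire protocol libxl speaks, not libxl's actual code; the socket path and the BDF-to-id mangling are assumptions taken from the log.]

```python
import json
import socket

def qmp_command(execute, cmd_id, arguments=None):
    """Build one newline-terminated QMP command, as seen in the libxl log."""
    msg = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg) + "\n"

def passthrough_session(qmp_path, hostaddr):
    """Replay the exchange from the log against a QMP Unix socket.

    qmp_path is e.g. /var/run/xen/qmp-libxl-2 (from the log above).
    Returns the server's reply to device_add (an error dict on failure).
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(qmp_path)
    f = s.makefile("rw")
    f.readline()                                  # server greeting: {"QMP": ...}
    f.write(qmp_command("qmp_capabilities", 1))
    f.flush()
    f.readline()                                  # expected: {"return": {}, "id": 1}
    # "0000:03:00.0" -> "03_00.0", giving the "pci-pt-03_00.0" id from the log
    slot = hostaddr.split(":", 1)[1].replace(":", "_")
    f.write(qmp_command("device_add", 2, {
        "driver": "xen-pci-passthrough",
        "id": "pci-pt-" + slot,
        "hostaddr": hostaddr,
    }))
    f.flush()
    return f.readline()
```

In the failing run above, QEMU had already aborted on the "failed to populate ram" error, so the socket was reset mid-session and libxl's retries then got "Connection refused".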

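[Editor's note: the pci= lists in the guest config quoted later in this thread use short BDFs like '03:00.0', while the QMP hostaddr argument uses the canonical '0000:03:00.0' form. A hypothetical helper showing that expansion (libxl does this parsing in C; this is only an illustration):]

```python
import re

def normalize_bdf(bdf):
    """Expand a short PCI BDF such as '03:00.0' to the canonical
    domain-qualified '0000:03:00.0' form used as the QMP hostaddr."""
    m = re.fullmatch(
        r'(?:([0-9a-fA-F]{1,4}):)?([0-9a-fA-F]{1,2}):([0-9a-fA-F]{1,2})\.([0-7])',
        bdf)
    if not m:
        raise ValueError("not a PCI BDF: %r" % bdf)
    dom, bus, dev, fn = m.groups()
    return "%04x:%02x:%02x.%x" % (int(dom or "0", 16), int(bus, 16),
                                  int(dev, 16), int(fn, 16))
```

For example, normalize_bdf('03:00.0') yields '0000:03:00.0', matching the hostaddr in the device_add command above.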
I will give it a shot.  Thanks!

On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:
> ----- mikeneiderhauser@gmail.com wrote:
> > I followed this site (http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
> > and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source).
>
> Ah, so you are looking for
>     xen_pt: Fix passthrough of device with ROM.
> which is not in Xen 4.4-rc3 but is in master.
>
> One thing you can do is:
>
> cd xen/tools/qemu-xen-dir
> git fetch upstream
> git checkout origin/master
> [you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]
>
> Go back to the main xen directory:
> cd ../../../
> ./configure
> make
> make install
>
> and you should now be using a newer version of QEMU with the fix.
>
> > git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
> >
> > Had to take some additional steps here to get all of the libs:
> > # apt-get install build-essential
> > # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> > # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> > # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> > # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> > # apt-get install gettext
> > apt-get install libaio-dev
> > apt-get install libpixman-1-dev
> >
> > ./configure
> > make dist
> > make install
>
> > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > > > I did not use the patch.  I was assuming it was already patched given the
> > > > previous email.  Is the patch for qemu source or xen source?
> > >
> > > It is for QEMU, but you are right - it should have been part
> > > of QEMU if you got the latest version of Xen-unstable.
> > >
> > > You didn't use some specific tag but just 'staging' ?
> > >
> > > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > > > Ok. I ran the initscripts and now xl works.
> > > > > >
> > > > > > However, I still see the same behavior as before:
> > > > >
> > > > > Did you use the patch that was mentioned in the URL?
> > > > >
> > > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > > > root@fiat:~# xl list
> > > > > > Name                                    ID   Mem VCPUs State Time(s)
> > > > > > Domain-0                                 0  1024     1     r-----    15.2
> > > > > > ubuntu-hvm-0                             1  1025     1     ------     0.0
> > > > > >
> > > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > > > (XEN) Std. Loglevel: All
> > > > > > (XEN) Guest Loglevel: All
> > > > > > (XEN) Xen is relinquishing VGA console.
> > > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > > > > (XEN) Freed 260kB init memory.
> > > > > > (XEN) PCI add device 0000:00:00.0
> > > > > > (XEN) PCI add device 0000:00:01.0
> > > > > > (XEN) PCI add device 0000:00:1a.0
> > > > > > (XEN) PCI add device 0000:00:1c.0
> > > > > > (XEN) PCI add device 0000:00:1d.0
> > > > > > (XEN) PCI add device 0000:00:1e.0
> > > > > > (XEN) PCI add device 0000:00:1f.0
> > > > > > (XEN) PCI add device 0000:00:1f.2
> > > > > > (XEN) PCI add device 0000:00:1f.3
> > > > > > (XEN) PCI add device 0000:01:00.0
> > > > > > (XEN) PCI add device 0000:02:02.0
> > > > > > (XEN) PCI add device 0000:02:04.0
> > > > > > (XEN) PCI add device 0000:03:00.0
> > > > > > (XEN) PCI add device 0000:03:00.1
> > > > > > (XEN) PCI add device 0000:04:00.0
> > > > > > (XEN) PCI add device 0000:04:00.1
> > > > > > (XEN) PCI add device 0000:05:00.0
> > > > > > (XEN) PCI add device 0000:05:00.1
> > > > > > (XEN) PCI add device 0000:06:03.0
> > > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > > > > (d1) HVM Loader
> > > > > > (d1) Detected Xen v4.4-rc2
> > > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > > > (d1) System requested SeaBIOS
> > > > > > (d1) CPU speed is 3093 MHz
> > > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > > > >
> > > > > > Excerpt from /var/log/xen/*
> > > > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > > > > getting the error:
> > > > > > > >
> > > > > > > > root@fiat:~/git/xen# xl list
> > > > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No
> > > > > > > > such file or directory): Internal error
> > > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such
> > > > > > > > file or directory
> > > > > > > > cannot init xl context
> > > > > > > >
> > > > > > > > I've google searched for this and an article appears, but it is not the same
> > > > > > > > (as far as I can tell).  Running any xl command generates a similar error.
> > > > > > > >
> > > > > > > > What can I do to fix this?
> > > > > > >
> > > > > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > > > > but they are usually put in /etc/init.d/rc.d/xen*
> > > > > > >
> > > > > > > > Regards
> > > > > > > >
> > > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > > > > > > > > Much. Do I need to install from src or is there a package I can install?
> > > > > > > > >
> > > > > > > > > Regards
> > > > > > > > >
> > > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > > > > On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > > > > > > I did not.  I do not have the toolchain installed.  I may have time later
> > > > > > > > > > > today to try the patch.  Are there any specific instructions on how to
> > > > > > > > > > > patch the src, compile and install?
> > > > > > > > > >
> > > > > > > > > > There actually should be a new version of Xen 4.4-rcX which will have the
> > > > > > > > > > fix. That might be easier for you?
> > > > > > > > > > >
> > > > > > > > > > > Regards
> > > > > > > > > > >
> > > > > > > > > > > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > > > > > > > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > > > > > > > > > Hi all,
> > > > > > > > > > > > >
> > > > > > > > > > > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC) to a
> > > > > > > > > > > > > HVM.  I have been attempting to resolve this issue on the xen-users list,
> > > > > > > > > > > > > but it was advised to post this issue to this list. (Initial Message -
> > > > > > > > > > > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > > > > > > > > >
> > > > > > > > > > > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > > > > > > > > > > E31220 with 4GB of ram.
> > > > > > > > > > > > >
> > > > > > > > > > > > > The possible bug is the following:
> > > > > > > > > > > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > > > > > > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > > > > > > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > > > > > > > > ....
> > > > > > > > > > > > >
> > > > > > > > > > > > > I believe it may be similar to this thread
> > > > > > > > > > > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > > > > > > > >
> > > > > > > > > > > > > Additional info that may be helpful is below.
> > > > > > > > > > > >
> > > > > > > > > > > > Did you try the patch?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Please let me know if you need any additional information.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thanks in advance for any help provided!
> > > > > > > > > > > > > Regards
> > > > > > > > > > > > >
> > > > > > > > > > > > > ###########################################################
> > > > > > > > > > > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > > > > > > > > ###########################################################
> > > > > > > > > > > > > # Configuration file for Xen HVM
> > > > > > > > > > > > >
> > > > > > > > > > > > > # HVM Name (as appears in 'xl list')
> > > > > > > > > > > > > name="ubuntu-hvm-0"
> > > > > > > > > > > > > # HVM Build settings (+ hardware)
> > > > > > > > > > > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > > > > > > > > builder='hvm'
> > > > > > > > > > > > > device_model='qemu-dm'
> > > > > > > > > > > > > memory=1024
> > > > > > > > > > > > > vcpus=2
> > > > > > > > > > > > >
> > > > > > > > > > > > > # Virtual Interface
> > > > > > > > > > > > > # Network bridge to USB NIC
> > > > > > > > > > > > > vif=['bridge=xenbr0']
> > > > > > > > > > > > >
> > > > > > > > > > > > > ################### PCI PASSTHROUGH ###################
> > > > > > > > > > > > > # PCI Permissive mode toggle
> > > > > > > > > > > > > #pci_permissive=1
> > > > > > > > > > > > >
> > > > > > > > > > > > > # All PCI Devices
> > > > > > > > > > > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > > > > > > > > >
> > > > > > > > > > > > > # First two ports on Intel 4x1G NIC
> > > > > > > > > > > > > #pci=['03:00.0','03:00.1']
> > > > > > > > > > > > >
> > > > > > > > > > > > > # Last two ports on Intel 4x1G NIC
> > > > > > > > > > > > > #pci=['04:00.0', '04:00.1']
> > > > > > > > > > > > >
> > > > > > > > > > > > > # All ports on Intel 4x1G NIC
> > > > > > > > > > > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > > > > > > > >
> > > > > > > > > > > > > # Broadcom 2x1G NIC
> > > > > > > > > > > > > #pci=['05:00.0', '05:00.1']
> > > > > > > > > > > > > ################### PCI PASSTHROUGH ###################
> > > > > > > > > > > > >
> > > > > > > > > > > > > # HVM Disks
> > > > > > > > > > > > > # Hard disk only
> > > > > > > > > > > > > # Boot from HDD first ('c')
> > > > > > > > > > > > > boot="c"
> > > > > > > > > > > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > > > > > > > >
> > > > > > > > > > > > > # Hard disk with ISO
> > > > > > > > > > > > > # Boot from ISO first ('d')
> > > > > > > > > > > > > #boot="d"
> > > > > > > > > > > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > > > > > > > > # 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > > > > > > > >
> > > > > > > > > > > > > # ACPI Enable
> > > > > > > > > > > > > acpi=1
> > > > > > > > > > > > > # HVM Event Modes
> > > > > > > > > > > > > on_poweroff='destroy'
> > > > > > > > > > > > > on_reboot='restart'
> > > > > > > > > > > > > on_crash='restart'
> > > > > > > > > > > > >
> > > > > > > > > > > > > # Serial Console Configuration (Xen Console)
> > > > > > > > > > > > > sdl=0
> > > > > > > > > > > > > serial='pty'
> > > > > > > > > > > > >
> > > > > > > > > > > > > # VNC Configuration
> > > > > > > > > > > > > # Only reachable from localhost
> > > > > > > > > > > > > vnc=1
> > > > > > > > > > > > > vnclisten="0.0.0.0"
> > > > > > > > > > > > > vncpasswd=""
> > > > > > > > > > > > >
> > > > > > > > > > > > > ##########################
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Copied for xen-users list<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; It appears that it cannot =
obtain the RAM mapping for this<br>&gt;=20
&gt; &gt; PCI<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; device.<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I rebooted the Host. =A0I =
ran assigned pci devices to<br>&gt;=20
&gt; &gt; pciback. The<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; output<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; looks like:<br>&gt;=20
> root@fiat:~# ./dev_mgmt.sh
> Loading Kernel Module 'xen-pciback'
> Calling function pciback_dev for:
> PCI DEVICE 0000:03:00.0
> Unbinding 0000:03:00.0 from igb
> Binding 0000:03:00.0 to pciback
>
> PCI DEVICE 0000:03:00.1
> Unbinding 0000:03:00.1 from igb
> Binding 0000:03:00.1 to pciback
>
> PCI DEVICE 0000:04:00.0
> Unbinding 0000:04:00.0 from igb
> Binding 0000:04:00.0 to pciback
>
> PCI DEVICE 0000:04:00.1
> Unbinding 0000:04:00.1 from igb
> Binding 0000:04:00.1 to pciback
>
> PCI DEVICE 0000:05:00.0
> Unbinding 0000:05:00.0 from bnx2
> Binding 0000:05:00.0 to pciback
>
> PCI DEVICE 0000:05:00.1
> Unbinding 0000:05:00.1 from bnx2
> Binding 0000:05:00.1 to pciback
>
> Listing PCI Devices Available to Xen
> 0000:03:00.0
> 0000:03:00.1
> 0000:04:00.0
> 0000:04:00.1
> 0000:05:00.0
> 0000:05:00.1
>
> ###########################################################
> root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a non-default device_model
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->00000000001a69a4
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100608
> xc: info: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
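[Editor's note: the `qmp_capabilities` exchange libxl logs here is the standard QMP handshake: QEMU sends a greeting as soon as the client connects, and accepts no command other than `qmp_capabilities` until the client answers with it. A minimal sketch of that exchange over the monitor socket, using the path from the log (line-based framing, no error handling):]

```python
import json
import socket

def qmp_handshake(path):
    """Greeting + qmp_capabilities exchange, as performed by
    libxl__qmp_initialize against /var/run/xen/qmp-libxl-<domid>."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    f = s.makefile("rw")
    greeting = json.loads(f.readline())   # {"QMP": {"version": ..., "capabilities": ...}}
    f.write(json.dumps({"execute": "qmp_capabilities", "id": 1}) + "\n")
    f.flush()
    reply = json.loads(f.readline())      # expect {"return": {}, "id": 1}
    s.close()
    return greeting, reply
```

Only after this handshake succeeds can commands like `query-chardev` and `device_add` below be issued on the monitor.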
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "change",
>     "id": 3,
>     "arguments": {
>         "device": "vnc",
>         "target": "password",
>         "arg": ""
>     }
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 4
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-03_00.0",
>         "hostaddr": "0000:03:00.0"
>     }
> }
> '
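[Editor's note: this `device_add` is where the session breaks: the very next monitor read fails with "Connection reset by peer", which suggests the device model dropped the connection while handling the passthrough request, consistent with the RAM-mapping complaint quoted earlier. The message itself is easy to reproduce for experimentation; a small sketch shaping the request exactly as logged (the helper name is hypothetical, the fields mirror the log — note the device id is the BDF without the domain, with ':' replaced by '_'):]

```python
import json

def qmp_pci_passthrough_cmd(hostaddr, cmd_id=2):
    """Build the QMP device_add message for Xen PCI passthrough,
    shaped like the one libxl logs above."""
    # "0000:03:00.0" -> slot "03_00.0" -> device id "pci-pt-03_00.0"
    slot = hostaddr.split(":", 1)[1].replace(":", "_")
    return json.dumps({
        "execute": "device_add",
        "id": cmd_id,
        "arguments": {
            "driver": "xen-pci-passthrough",
            "id": "pci-pt-" + slot,
            "hostaddr": hostaddr,
        },
    })
```

Sending this by hand over the qmp-libxl socket (after the capabilities handshake) is one way to isolate whether `device_add` alone is enough to kill this QEMU.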
> libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> Daemon running with PID 3214
> xc: debug: hypercall buffer: total allocations:793 total releases:793
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
>
> ###########################################################
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>&gt;=20
&gt; &gt; 40030000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4=
.3-amd64&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>&gt;=20
&gt; &gt; Debian`<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D&quot;quiet splash&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot=
;&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;d=
om0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org" target=3D"_blank">Xen-devel@lists.xen.org</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
</div></div></blockquote></div><br>&gt; </div>
</div></div></div></div></blockquote></div><br></div>

--089e0160c3ae3238a404f1e6e697--


--===============8116040961981903080==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8116040961981903080==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 16:54:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 16:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCB9s-0002Xu-1H; Sat, 08 Feb 2014 16:53:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCB9q-0002Xp-PA
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 16:53:34 +0000
Received: from [85.158.143.35:40195] by server-3.bemta-4.messagelabs.com id
	27/1C-11539-E0166F25; Sat, 08 Feb 2014 16:53:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1391878413!4171241!1
X-Originating-IP: [209.85.212.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28760 invoked from network); 8 Feb 2014 16:53:33 -0000
Received: from mail-wi0-f176.google.com (HELO mail-wi0-f176.google.com)
	(209.85.212.176)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 16:53:33 -0000
Received: by mail-wi0-f176.google.com with SMTP id hi5so1691868wib.3
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 08:53:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=AJJJXMYjDOBTQtabrV0HBT4MtF9T5lVyqEj6FZIPzVE=;
	b=cxvLzoPWee0qJTS18LnCeDWSgd8aNEVP5du9GQwDpx2aoY3/9s/imhM1CSQDikUyda
	i7JvwAQDawHyedpfYm96KZctxVeflaoFyEbBCkgh8MzRt87/k5Nwn7TOuytoavR5jxul
	nWd+Fyv2kC3E/AVOLPBXyz8j4gzkNKGZWUC49ERKVNxTszW1n1+iE13siw2r4p6e1bVU
	9meAAsnpvgF0m5toEr8sq7QqJ4NVZj95S8HQl5CJ/atYvVThwgEppmNvemzfCx75WuRQ
	ciTZl12tR1TcXaA2HD2Yzx5gqgArK7aTKTX3HYTiqBwUB/6XhiXBEmZVTfZcvGifLtGx
	Vn2A==
X-Gm-Message-State: ALoCoQlJv6b6nB6jaNd3m/64aAEhR/gzBAAyGjG608NBUy/2GgqHEcXFYSyIX7VpqdIjIvzsC9I6
X-Received: by 10.180.77.74 with SMTP id q10mr4043697wiw.39.1391878412995;
	Sat, 08 Feb 2014 08:53:32 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	k10sm19861869wjf.11.2014.02.08.08.53.31 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 08 Feb 2014 08:53:32 -0800 (PST)
Message-ID: <52F6610A.1060401@linaro.org>
Date: Sat, 08 Feb 2014 16:53:30 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pavlo Suikov <pavlo.suikov@globallogic.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>	<52EFFCF5.5070108@linaro.org>	<CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>	<52F0EBA8.3000206@linaro.org>	<1391522239.10515.79.camel@kazak.uk.xensource.com>
	<CAE4oM6wFyRMXw==kkHFr0Gy7JS9=GZQyJ5grHQRr82CdT+gbOw@mail.gmail.com>
In-Reply-To: <CAE4oM6wFyRMXw==kkHFr0Gy7JS9=GZQyJ5grHQRr82CdT+gbOw@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 08/02/14 15:51, Pavlo Suikov wrote:
> Hi,

Hi Pavlo,

>>
>> >  > To support xentrace on ARM, we will need at least:
>> >
>> > I would readily do that if you give me some directions on where to look
>> > at, or a high-level explanation of:
>> >
>> >  > - to replace rcu_lock_domain_by_any_id() by a similar function
>> >
>> > What semantics should this function have?
>>
>> I would copy in part get_pg_owner (xen/arch/x86/mm.c) in the ARM code.
>> The check "unlikely(paging_mode_translate(curr))" will always fail on ARM.
>
>> Probably best just to dig in and ask questions as issues arise.
>
> After making a quick workaround to rcu_lock DOMID_XEN on arm (in much
> the same manner as it is done on x86) I've run into a different problem.
> Specifically, p2m_lookup for an address in the xen restricted heap fails
> on the very first map call: p2m_map_first returns the zero page because
> both p2m->first_level and p2m_first_level_index(addr) equal zero. As far
> as I understand, xen simply does not know how to do a p2m translation for
> its own restricted heap on arm. Am I right?

For DOMID_XEN, the p2m is not initialized. xentrace requests to map a 
page by directly giving the mfn (machine frame number), so there is no 
need for translation.

I would add a specific case for DOMID_XEN in either get_page_from_gfn or 
p2m_lookup.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 17:21:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 17:21:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCBaf-0003v4-FI; Sat, 08 Feb 2014 17:21:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCBae-0003uz-90
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 17:21:16 +0000
Received: from [85.158.143.35:19108] by server-3.bemta-4.messagelabs.com id
	27/E6-11539-B8766F25; Sat, 08 Feb 2014 17:21:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391880072!4171125!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29806 invoked from network); 8 Feb 2014 17:21:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 17:21:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,806,1384300800"; d="scan'208";a="101075004"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Feb 2014 17:20:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 12:20:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCBaA-00072L-K3;
	Sat, 08 Feb 2014 17:20:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCBaA-0005TZ-Cr;
	Sat, 08 Feb 2014 17:20:46 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24801-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 17:20:46 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24801: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6789447683425532335=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6789447683425532335==
Content-Type: text/plain

flight 24801 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24801/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           3 host-install(3)         broken REGR. vs. 24743

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============6789447683425532335==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6789447683425532335==--

From xen-devel-bounces@lists.xen.org Sat Feb 08 17:21:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 17:21:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCBaf-0003v4-FI; Sat, 08 Feb 2014 17:21:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCBae-0003uz-90
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 17:21:16 +0000
Received: from [85.158.143.35:19108] by server-3.bemta-4.messagelabs.com id
	27/E6-11539-B8766F25; Sat, 08 Feb 2014 17:21:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1391880072!4171125!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29806 invoked from network); 8 Feb 2014 17:21:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 17:21:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,806,1384300800"; d="scan'208";a="101075004"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Feb 2014 17:20:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 12:20:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCBaA-00072L-K3;
	Sat, 08 Feb 2014 17:20:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCBaA-0005TZ-Cr;
	Sat, 08 Feb 2014 17:20:46 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24801-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 17:20:46 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24801: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6789447683425532335=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6789447683425532335==
Content-Type: text/plain

flight 24801 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24801/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           3 host-install(3)         broken REGR. vs. 24743

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============6789447683425532335==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6789447683425532335==--

From xen-devel-bounces@lists.xen.org Sat Feb 08 17:46:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 17:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCByV-0004rq-RI; Sat, 08 Feb 2014 17:45:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WCBvy-0004rH-E3
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 17:43:19 +0000
Received: from [85.158.143.35:50421] by server-2.bemta-4.messagelabs.com id
	A7/34-10891-5BC66F25; Sat, 08 Feb 2014 17:43:17 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391881391!4149662!1
X-Originating-IP: [209.85.128.169]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28513 invoked from network); 8 Feb 2014 17:43:13 -0000
Received: from mail-ve0-f169.google.com (HELO mail-ve0-f169.google.com)
	(209.85.128.169)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 17:43:13 -0000
Received: by mail-ve0-f169.google.com with SMTP id oy12so3884638veb.28
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 09:43:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=dM5AQJ+LaqxyBkGKidQ2vzWe5KEJrp4Md8WoaC0ChcY=;
	b=P1tOQ15EXs2cOH0rM+KdmLE8SCV+CqxaVRo1OGbuuTsy3Tfwl2AD0ebO2jEz7IbVyp
	fCdFU6AYKB6l3rDmfL/ICwf6coW8rePi/ijBjjfPp3+RlpikZV6Cbjil/QInbht5lhhO
	bfgfV1LlXpM3yXfESMs3/hIuyss/8vMWGdRpDY5oepN5Hst3l1chnfmP68KWXC3TULJZ
	6BLgHOURfjaDk7wAyABrGExIseDQi4BMvZSevrITA8HjSrPXOiaqRUgtyLEZSo8d4GhJ
	Fr4m20MoN3tiWKm1nogirkKJic9Qj3m9ulMKtRlitxntWPEHhF2txMop4sNlvMuB/YAR
	5gJA==
X-Received: by 10.220.99.7 with SMTP id s7mr15490562vcn.19.1391881391687; Sat,
	08 Feb 2014 09:43:11 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Sat, 8 Feb 2014 09:42:31 -0800 (PST)
In-Reply-To: <CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Sat, 8 Feb 2014 12:42:31 -0500
Message-ID: <CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Sat, 08 Feb 2014 17:45:54 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5404484784840195602=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5404484784840195602==
Content-Type: multipart/alternative; boundary=001a11c1def2ebcb6304f1e8a35d

--001a11c1def2ebcb6304f1e8a35d
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Works like a charm.  I do not have physical access to the computer this
weekend to verify that the cards are isolated, but the HVM starts and
appears to be working well.

When do you think Xen 4.4 will be released?  The article I read mentioned
it will be released in 2014 (hinting towards the end of February).  I also
read 'When it is ready.'

Any timeline would be great.

Thanks again for your help!


On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser <
mikeneiderhauser@gmail.com> wrote:

> I will give it a shot.  Thanks!
>
>
> On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:
>
>>
>> ----- mikeneiderhauser@gmail.com wrote:
>> >
>> > I followed this site (
>> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions).
>> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)
>> >
>>
>> Ah, so you are looking for "xen_pt: Fix passthrough of device
>> with ROM", which is not in Xen 4.4-rc3 but is in master.
>>
>> One thing you can do is:
>>
>> cd xen/tools/qemu-xen-dir
>> git fetch upstream
>> git checkout origin/master
>> [you should see: "HEAD is now at 027c412... configure: Disable libtool
>> if -fPIE does not work with it (bug #1257099)"]
>>
>> Go back to main xen directory:
>> cd ../../../
>> ./configure
>> make
>> make install
>>
>> and you should now be using a newer version of QEMU with the fix.
>>
>>
>> >
>>
>> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
>> >
>>
>>
>> Had to take some additional steps here to get all of the libs:
>> # apt-get install build-essential
>> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
>> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
>> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
>> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
>> # apt-get install gettext
>> apt-get install libaio-dev
>> apt-get install libpixman-1-dev
>>
>> ./configure
>> make dist
>> make install
>>
>> >
>> >
>> >
>> > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <
>> konrad.wilk@oracle.com> wrote:
>> >
>>>
>>> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
>>> > > I did not use the patch.  I was assuming it was already patched given
>>> > > previous email.  Is the patch for qemu source or xen source?
>>> >
>>> >
>>> It is for QEMU, but you are right - it should have been part
>>> > of QEMU if you got the latest version of Xen-unstable.
>>> >
>>> > You didn't use a specific tag, just 'staging'?
>>> >
>>> >
>>> >
>>> > >
>>> > >
>>> > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
>>> > > konrad.wilk@oracle.com> wrote:
>>> > >
>>> > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
>>> > > > > Ok. I ran the initscripts and now xl works.
>>> > > > >
>>> > > > > However, I still see the same behavior as before:
>>> > > > >
>>> > > >
>>> > > > Did you use the patch that was mentioned in the URL?
>>> > > >
>>> > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error:
>>> Connection
>>> > > > reset
>>> > > > > by peer
>>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection
>>> error:
>>> > > > > Connection refused
>>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection
>>> error:
>>> > > > > Connection refused
>>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection
>>> error:
>>> > > > > Connection refused
>>> > > > > root@fiat:~# xl list
>>> > > > > Name                                        ID   Mem VCPUs State   Time(s)
>>> > > > > Domain-0                                     0  1024     1 r-----    15.2
>>> > > > > ubuntu-hvm-0                                 1  1025     1 ------     0.0
>>> > > > >
>>> > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 ->
>>> 0x23f3000
>>> > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
>>> > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690
>>> > > > > pages to be allocated)
>>> > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
>>> > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
>>> > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
>>> > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
>>> > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
>>> > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
>>> > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
>>> > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
>>> > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
>>> > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
>>> > > > > (XEN) Dom0 has maximum 1 VCPUs
>>> > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 ->
>>> 0xffffffff81b2f000
>>> > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 ->
>>> 0xffffffff81d0f0f0
>>> > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 ->
>>> 0xffffffff81d252c0
>>> > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 ->
>>> 0xffffffff81e6d000
>>> > > > > (XEN) Scrubbing Free RAM: .............................done.
>>> > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>>> > > > > (XEN) Std. Loglevel: All
>>> > > > > (XEN) Guest Loglevel: All
>>> > > > > (XEN) Xen is relinquishing VGA console.
>>> > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to
>>> switch input
>>> > > > > to Xen)
>>> > > > > (XEN) Freed 260kB init memory.
>>> > > > > (XEN) PCI add device 0000:00:00.0
>>> > > > > (XEN) PCI add device 0000:00:01.0
>>> > > > > (XEN) PCI add device 0000:00:1a.0
>>> > > > > (XEN) PCI add device 0000:00:1c.0
>>> > > > > (XEN) PCI add device 0000:00:1d.0
>>> > > > > (XEN) PCI add device 0000:00:1e.0
>>> > > > > (XEN) PCI add device 0000:00:1f.0
>>> > > > > (XEN) PCI add device 0000:00:1f.2
>>> > > > > (XEN) PCI add device 0000:00:1f.3
>>> > > > > (XEN) PCI add device 0000:01:00.0
>>> > > > > (XEN) PCI add device 0000:02:02.0
>>> > > > > (XEN) PCI add device 0000:02:04.0
>>> > > > > (XEN) PCI add device 0000:03:00.0
>>> > > > > (XEN) PCI add device 0000:03:00.1
>>> > > > > (XEN) PCI add device 0000:04:00.0
>>> > > > > (XEN) PCI add device 0000:04:00.1
>>> > > > > (XEN) PCI add device 0000:05:00.0
>>> > > > > (XEN) PCI add device 0000:05:00.1
>>> > > > > (XEN) PCI add device 0000:06:03.0
>>> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
>>> > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
>>> > > > > (200 of 1024)
>>> > > > > (d1) HVM Loader
>>> > > > > (d1) Detected Xen v4.4-rc2
>>> > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
>>> > > > > (d1) System requested SeaBIOS
>>> > > > > (d1) CPU speed is 3093 MHz
>>> > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
>>> > > > >
>>> > > > >
>>> > > > > Excerpt from /var/log/xen/*
>>> > > > > qemu: hardware error: xen: failed to populate ram at 40050000
>>> > > > >
>>> > > > >
>>> > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
>>> > > > > konrad.wilk@oracle.com> wrote:
>>> > > > >
>>> > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser
>>> wrote:
>>> > > > > > > I was able to compile and install xen4.4 RC3 on my host,
>>> however I am
>>> > > > > > > getting the error:
>>> > > > > > >
>>> > > > > > > root@fiat:~/git/xen# xl list
>>> > > > > > > xc: error: Could not obtain handle on privileged command
>>> > > > > > > interface (2 = No such file or directory): Internal error
>>> > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc
>>> handle:
>>> > > > No
>>> > > > > > such
>>> > > > > > > file or directory
>>> > > > > > > cannot init xl context
>>> > > > > > >
>>> > > > > > > I've google searched for this and an article appears, but is not the
>>> > > > same
>>> > > > > > > (as far as I can tell).  Running any xl command generates a
>>> similar
>>> > > > > > error.
>>> > > > > > >
>>> > > > > > > What can I do to fix this?
>>> > > > > >
>>> > > > > >
>>> > > > > > You need to run the initscripts for Xen. I don't know what
>>> your distro
>>> > > > is,
>>> > > > > > but
>>> > > > > > they are usually put in /etc/init.d/rc.d/xen*
>>> > > > > >
>>> > > > > >
>>> > > > > > >
>>> > > > > > > Regards
>>> > > > > > >
>>> > > > > > >
>>> > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
>>> > > > > > > mikeneiderhauser@gmail.com> wrote:
>>> > > > > > >
>>> > > > > > > > Much. Do I need to install from src or is there a package
>>> I can
>>> > > > > > install.
>>> > > > > > > >
>>> > > > > > > > Regards
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
>>> > > > > > > > konrad.wilk@oracle.com> wrote:
>>> > > > > > > >
>>> > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike
>>> Neiderhauser wrote:
>>> > > > > > > >> > I did not.  I do not have the toolchain installed.  I
>>> may have
>>> > > > time
>>> > > > > > > >> later
>>> > > > > > > >> > today to try the patch.  Are there any specific
>>> instructions on
>>> > > > how
>>> > > > > > to
>>> > > > > > > >> > patch the src, compile and install?
>>> > > > > > > >>
>>> > > > > > > >> There actually should be a new version of Xen 4.4-rcX
>>> which will
>>> > > > have
>>> > > > > > the
>>> > > > > > > >> fix. That might be easier for you?
>>> > > > > > > >> >
>>> > > > > > > >> > Regards
>>> > > > > > > >> >
>>> > > > > > > >> >
>>> > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
>>> > > > > > > >> > konrad.wilk@oracle.com> wrote:
>>> > > > > > > >> >
>>> > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
>>> Neiderhauser
>>> > > > wrote:
>>> > > > > > > >> > > > Hi all,
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card
>>> > > > > > (4x1G
>>> > > > > > > >> NIC)
>>> > > > > > > >> > > to a
>>> > > > > > > >> > > > HVM.  I have been attempting to resolve this issue
>>> on the
>>> > > > > > xen-users
>>> > > > > > > >> list,
>>> > > > > > > >> > > > but it was advised to post this issue to this list.
>>> (Initial
>>> > > > > > > >> Message -
>>> > > > > > > >> > > >
>>> > > > > > > >> > >
>>> > > > > > > >>
>>> > > > > >
>>> > > >
>>> http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>>> > > > > > > >> )
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > The machine I am using as host is a Dell Poweredge
>>> server
>>> > > > with a
>>> > > > > > > >> Xeon
>>> > > > > > > >> > > > E31220 with 4GB of ram.
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > The possible bug is the following:
>>> > > > > > > >> > > > root@fiat:/var/log/xen# cat
>>> qemu-dm-ubuntu-hvm-0.log
>>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>>> > > > > > > >> > > > ....
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > I believe it may be similar to this thread
>>> > > > > > > >> > > >
>>> > > > > > > >> > >
>>> > > > > > > >>
>>> > > > > >
>>> > > >
>>> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Additional info that may be helpful is below.
>>> > > > > > > >> > >
>>> > > > > > > >> > > Did you try the patch?
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Please let me know if you need any additional
>>> information.
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Thanks in advance for any help provided!
>>> > > > > > > >> > > > Regards
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > # Configuration file for Xen HVM
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # HVM Name (as appears in 'xl list')
>>> > > > > > > >> > > > name="ubuntu-hvm-0"
>>> > > > > > > >> > > > # HVM Build settings (+ hardware)
>>> > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>>> > > > > > > >> > > > builder='hvm'
>>> > > > > > > >> > > > device_model='qemu-dm'
>>> > > > > > > >> > > > memory=1024
>>> > > > > > > >> > > > vcpus=2
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Virtual Interface
>>> > > > > > > >> > > > # Network bridge to USB NIC
>>> > > > > > > >> > > > vif=['bridge=xenbr0']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > ################### PCI PASSTHROUGH
>>> ###################
>>> > > > > > > >> > > > # PCI Permissive mode toggle
>>> > > > > > > >> > > > #pci_permissive=1
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # All PCI Devices
>>> > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # First two ports on Intel 4x1G NIC
>>> > > > > > > >> > > > #pci=['03:00.0','03:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
>>> > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # All ports on Intel 4x1G NIC
>>> > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Brodcom 2x1G NIC
>>> > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
>>> > > > > > > >> > > > ################### PCI PASSTHROUGH
>>> ###################
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # HVM Disks
>>> > > > > > > >> > > > # Hard disk only
>>> > > > > > > >> > > > # Boot from HDD first ('c')
>>> > > > > > > >> > > > boot="c"
>>> > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Hard disk with ISO
>>> > > > > > > >> > > > # Boot from ISO first ('d')
>>> > > > > > > >> > > > #boot="d"
>>> > > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w', 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # ACPI Enable
>>> > > > > > > >> > > > acpi=1
>>> > > > > > > >> > > > # HVM Event Modes
>>> > > > > > > >> > > > on_poweroff='destroy'
>>> > > > > > > >> > > > on_reboot='restart'
>>> > > > > > > >> > > > on_crash='restart'
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Serial Console Configuration (Xen Console)
>>> > > > > > > >> > > > sdl=0
>>> > > > > > > >> > > > serial='pty'
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # VNC Configuration
>>> > > > > > > >> > > > # Only reachable from localhost
>>> > > > > > > >> > > > vnc=1
>>> > > > > > > >> > > > vnclisten="0.0.0.0"
>>> > > > > > > >> > > > vncpasswd=""
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > Copied for xen-users list
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > It appears that it cannot obtain the RAM mapping
>>> for this
>>> > > > PCI
>>> > > > > > > >> device.
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > I rebooted the Host.  I assigned pci devices to
>>> pciback. The
>>> > > > > > > >> output
>>> > > > > > > >> > > > looks like:
>>> > > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
>>> > > > > > > >> > > > Loading Kernel Module 'xen-pciback'
>>> > > > > > > >> > > > Calling function pciback_dev for:
>>> > > > > > > >> > > > PCI DEVICE 0000:03:00.0
>>> > > > > > > >> > > > Unbinding 0000:03:00.0 from igb
>>> > > > > > > >> > > > Binding 0000:03:00.0 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:03:00.1
>>> > > > > > > >> > > > Unbinding 0000:03:00.1 from igb
>>> > > > > > > >> > > > Binding 0000:03:00.1 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:04:00.0
>>> > > > > > > >> > > > Unbinding 0000:04:00.0 from igb
>>> > > > > > > >> > > > Binding 0000:04:00.0 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:04:00.1
>>> > > > > > > >> > > > Unbinding 0000:04:00.1 from igb
>>> > > > > > > >> > > > Binding 0000:04:00.1 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:05:00.0
>>> > > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
>>> > > > > > > >> > > > Binding 0000:05:00.0 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:05:00.1
>>> > > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
>>> > > > > > > >> > > > Binding 0000:05:00.1 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Listing PCI Devices Available to Xen
>>> > > > > > > >> > > > 0000:03:00.0
>>> > > > > > > >> > > > 0000:03:00.1
>>> > > > > > > >> > > > 0000:04:00.0
>>> > > > > > > >> > > > 0000:04:00.1
>>> > > > > > > >> > > > 0000:05:00.0
>>> > > > > > > >> > > > 0000:05:00.1
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > root@fiat:~# xl -vvv create
>>> /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > > > >> > > > WARNING: ignoring device_model directive.
>>> > > > > > > >> > > > WARNING: Use "device_model_override" instead if you really
>>> > > > want
>>> > > > > > a
>>> > > > > > > >> > > > non-default device_model
>>> > > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create=
:
>>> ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> create:
>>> > > > > > > >> > > > how=3D(nil) callback=3D(nil) poller=3D0x210c3c0
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_device.c:257:libxl__device_disk_set_backend:
>>> > > > > > > >> Disk
>>> > > > > > > >> > > > vdev=3Dhda spec.backend=3Dunknown
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_device.c:296:libxl__device_disk_set_backend:
>>> > > > > > > >> Disk
>>> > > > > > > >> > > > vdev=3Dhda, using backend phy
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_create.c:675:initiate_domain_create:
>>> > > > running
>>> > > > > > > >> > > bootloader
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_bootloader.c:321:libxl__bootloader_run:
>>> > > > not
>>> > > > > > a PV
>>> > > > > > > >> > > > domain, skipping bootloader
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c728: deregister unregistered
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_numa.c:475:libxl__get_numa_candidate:
>>> > > > New
>>> > > > > > best
>>> > > > > > > >> NUMA
>>> > > > > > > >> > > > placement candidate found: nr_nodes=3D1, nr_cpus=
=3D4,
>>> > > > nr_vcpus=3D3,
>>> > > > > > > >> > > > free_memkb=3D2980
>>> > > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain:
>>> NUMA
>>> > > > placement
>>> > > > > > > >> > > candidate
>>> > > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>>> > > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=3D0x1000=
00
>>> > > > memsz=3D0xa69a4
>>> > > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 ->
>>> 0x1a69a4
>>> > > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>>> > > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a=
4
>>> > > > > > > >> > > >   Modules:       0000000000000000->000000000000000=
0
>>> > > > > > > >> > > >   TOTAL:         0000000000000000->000000003f80000=
0
>>> > > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
>>> > > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>>> > > > > > > >> > > >   4KB PAGES: 0x0000000000000200
>>> > > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
>>> > > > > > > >> > > >   1GB PAGES: 0x0000000000000000
>>> > > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at
>>> 0x7f022c779000 ->
>>> > > > > > > >> 0x7f022c81682d
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_device.c:257:libxl__device_disk_set_backend:
>>> > > > > > > >> Disk
>>> > > > > > > >> > > > vdev=3Dhda spec.backend=3Dphy
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:559:libxl__ev_xswatch_register:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x2112f48
>>> wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > token=3D3/0:
>>> > > > > > > >> > > > register slotnum=3D3
>>> > > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create=
:
>>> ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> > > > inprogress: poller=3D0x210c3c0, flags=3Di
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x2112f48
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> token=3D3/0:
>>> > > > event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:647:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>>> state 2 still
>>> > > > > > waiting
>>> > > > > > > >> > > state 1
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x2112f48
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> token=3D3/0:
>>> > > > event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:643:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>>> state 2 ok
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x2112f48
>>> wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > token=3D3/0:
>>> > > > > > > >> > > > deregister slotnum=3D3
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x2112f48: deregister unregistered
>>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug:
>>> calling
>>> > > > hotplug
>>> > > > > > > >> script:
>>> > > > > > > >> > > > /etc/xen/scripts/block add
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_d=
m:
>>> > > > Spawning
>>> > > > > > > >> > > device-model
>>> > > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > > /usr/bin/qemu-system-i386
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > -xen-domid
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   2
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -chardev
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > >
>>> > > > socket,id=3Dlibxl-cmd,path=3D/var/run/xen/qmp-libxl-2,server,nowa=
it
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > > chardev=3Dlibxl-cmd,mode=3Dcontrol
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > ubuntu-hvm-0
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > 0.0.0.0:0
>>> > > > > > > >> ,to=3D99
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -global
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> isa-fdc.driveA=3D
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -serial
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > cirrus
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -global
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> vga.vram_size_mb=3D8
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > order=3Dc
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > 2,maxcpus=3D2
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -device
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > > rtl8139,id=3Dnic0,netdev=3Dnet0,mac=3D00:16:3e:23:=
44:2c
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -netdev
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > >
>>> type=3Dtap,id=3Dnet0,ifname=3Dvif2.0-emu,script=3Dno,downscript=3Dno
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -drive
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > >
>>> > > > > > > >> > >
>>> > > > > > > >>
>>> > > > > >
>>> > > >
>>> file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=3Ddisk,form=
at=3Draw,cache=3Dwriteback
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:559:libxl__ev_xswatch_register:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c960
>>> wpath=3D/local/domain/0/device-model/2/state
>>> > > > > > token=3D3/1:
>>> > > > > > > >> > > register
>>> > > > > > > >> > > > slotnum=3D3
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210c960
>>> > > > > > > >> > > > wpath=3D/local/domain/0/device-model/2/state
>>> token=3D3/1: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/device-model/2/state
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210c960
>>> > > > > > > >> > > > wpath=3D/local/domain/0/device-model/2/state
>>> token=3D3/1: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/device-model/2/state
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c960
>>> wpath=3D/local/domain/0/device-model/2/state
>>> > > > > > token=3D3/1:
>>> > > > > > > >> > > > deregister slotnum=3D3
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c960: deregister unregistered
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initializ=
e:
>>> > > > connected
>>> > > > > > to
>>> > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > > > type: qmp
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>>> > > > > > > >> > > >     "id": 1
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "query-chardev",
>>> > > > > > > >> > > >     "id": 2
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "change",
>>> > > > > > > >> > > >     "id": 3,
>>> > > > > > > >> > > >     "arguments": {
>>> > > > > > > >> > > >         "device": "vnc",
>>> > > > > > > >> > > >         "target": "password",
>>> > > > > > > >> > > >         "arg": ""
>>> > > > > > > >> > > >     }
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "query-vnc",
>>> > > > > > > >> > > >     "id": 4
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:559:libxl__ev_xswatch_register:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210e8a8
>>> wpath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > token=3D3/2:
>>> > > > > > > >> > > register
>>> > > > > > > >> > > > slotnum=3D3
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210e8a8
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vif/2/0/state
>>> token=3D3/2: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:647:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state
>>> 2 still
>>> > > > > > waiting
>>> > > > > > > >> state
>>> > > > > > > >> > > 1
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210e8a8
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vif/2/0/state
>>> token=3D3/2: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:643:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state
>>> 2 ok
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210e8a8
>>> wpath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > token=3D3/2:
>>> > > > > > > >> > > > deregister slotnum=3D3
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210e8a8: deregister unregistered
>>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug:
>>> calling
>>> > > > hotplug
>>> > > > > > > >> script:
>>> > > > > > > >> > > > /etc/xen/scripts/vif-bridge online
>>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug:
>>> calling
>>> > > > hotplug
>>> > > > > > > >> script:
>>> > > > > > > >> > > > /etc/xen/scripts/vif-bridge add
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initializ=
e:
>>> > > > connected
>>> > > > > > to
>>> > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > > > type: qmp
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>>> > > > > > > >> > > >     "id": 1
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "device_add",
>>> > > > > > > >> > > >     "id": 2,
>>> > > > > > > >> > > >     "arguments": {
>>> > > > > > > >> > > >         "driver": "xen-pci-passthrough",
>>> > > > > > > >> > > >         "id": "pci-pt-03_00.0",
>>> > > > > > > >> > > >         "hostaddr": "0000:03:00.0"
>>> > > > > > > >> > > >     }
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket rea=
d
>>> error:
>>> > > > > > > >> Connection
>>> > > > > > > >> > > reset
>>> > > > > > > >> > > > by peer
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initializ=
e:
>>> > > > Connection
>>> > > > > > > >> error:
>>> > > > > > > >> > > > Connection refused
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initializ=
e:
>>> > > > Connection
>>> > > > > > > >> error:
>>> > > > > > > >> > > > Connection refused
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initializ=
e:
>>> > > > Connection
>>> > > > > > > >> error:
>>> > > > > > > >> > > > Connection refused
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_pci.c:81:libxl__create_pci_backend:
>>> > > > > > Creating pci
>>> > > > > > > >> > > backend
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:1737:libxl__ao_progress_report:
>>> > > > ao
>>> > > > > > > >> 0x210c360:
>>> > > > > > > >> > > > progress report: ignored
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:1569:libxl__ao_complete: ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> > > > complete, rc=3D0
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:1541:libxl__ao__destroy: ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> > > destroy
>>> > > > > > > >> > > > Daemon running with PID 3214
>>> > > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793
>>> total
>>> > > > > > > >> releases:793
>>> > > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0
>>> maximum
>>> > > > > > > >> allocations:4
>>> > > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
>>> > > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses=
:4
>>> > > > toobig:4
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
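The `qmp_send_prepare` lines above show libxl driving QEMU over its QMP control socket (`/var/run/xen/qmp-libxl-2`): a greeting, `qmp_capabilities` negotiation, then lockstep command/response. A minimal sketch of that exchange in Python (the `QMPClient` class is ours for illustration; libxl does this in C in `libxl_qmp.c`):

```python
# Illustrative QMP client mirroring the exchange in the libxl log above.
import json
import socket

class QMPClient:
    def __init__(self, path="/var/run/xen/qmp-libxl-2"):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(path)
        self.reader = self.sock.makefile()
        # The server speaks first with a greeting: {"QMP": {...}}
        self.greeting = json.loads(self.reader.readline())
        # Leave capabilities-negotiation mode before sending real commands.
        self.command("qmp_capabilities")

    def command(self, execute, arguments=None, cmd_id=1):
        """Send one QMP command and return the next response object."""
        cmd = {"execute": execute, "id": cmd_id}
        if arguments is not None:
            cmd["arguments"] = arguments
        self.sock.sendall(json.dumps(cmd).encode() + b"\n")
        return json.loads(self.reader.readline())
```

The failing `device_add` of `xen-pci-passthrough` in the log is exactly such a `command("device_add", {...})` call; "Socket read error: Connection reset by peer" means QEMU exited while libxl was waiting for the response, which is why the subsequent reconnect attempts are refused.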
###########################################################
root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
char device redirected to /dev/pts/5 (label serial0)
qemu: hardware error: xen: failed to populate ram at 40030000
CPU #0:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000
CPU #1:
EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
ES =0000 00000000 0000ffff 00009300
CS =f000 ffff0000 0000ffff 00009b00
SS =0000 00000000 0000ffff 00009300
DS =0000 00000000 0000ffff 00009300
FS =0000 00000000 0000ffff 00009300
GS =0000 00000000 0000ffff 00009300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 0000ffff
IDT=     00000000 0000ffff
CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
DR6=ffff0ff0 DR7=00000400
EFER=0000000000000000
FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
XMM00=00000000000000000000000000000000
XMM01=00000000000000000000000000000000
XMM02=00000000000000000000000000000000
XMM03=00000000000000000000000000000000
XMM04=00000000000000000000000000000000
XMM05=00000000000000000000000000000000
XMM06=00000000000000000000000000000000
XMM07=00000000000000000000000000000000

###########################################################
/etc/default/grub
GRUB_DEFAULT="Xen 4.3-amd64"
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
# biosdevname=0
GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Works like a charm. I do not have physical access to the computer this weekend to verify that the cards are isolated, but the HVM starts and appears to be working well.

When do you think Xen 4.4 will be released? The article I read mentioned it will be released in 2014 (hinting towards the end of February). I also read 'When it is ready.'

Any timeline would be great.

Thanks again for your help!

On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> I will give it a shot. Thanks!
>
> On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:
>> ----- mikeneiderhauser@gmail.com wrote:
>>> I followed this site (http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
>>> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source).
>>
>> Ah, so you are looking for
>>     xen_pt: Fix passthrough of device with ROM.
>> which is not in Xen 4.4-rc3 but in master.
>>
>> One thing you can do is:
>>
>> cd xen/tools/qemu-xen-dir
>> git fetch upstream
>> git checkout origin/master
>> [you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]
>>
>> Go back to the main xen directory:
>> cd ../../../
>> ./configure
>> make
>> make install
>>
>> and you should now be using a newer version of QEMU with the fix.
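The update steps described in this reply can be collected into one helper. This is a sketch only: the `qemu_xen_update` function name and the `DRY_RUN` switch are ours, the `upstream` remote name comes from the mail and may be `origin` in other checkouts, and the script assumes it is run from the top of a xen.git tree.

```shell
# Refresh Xen's bundled QEMU to upstream master and rebuild.
qemu_xen_update() {
    # With DRY_RUN set, only print each step instead of running it.
    for cmd in \
        "cd tools/qemu-xen-dir" \
        "git fetch upstream" \
        "git checkout origin/master" \
        "cd ../.." \
        "./configure" \
        "make" \
        "make install"
    do
        if [ -n "$DRY_RUN" ]; then echo "$cmd"; else eval "$cmd" || return 1; fi
    done
}
```

Run `DRY_RUN=1 qemu_xen_update` first to review the steps before letting it touch the tree.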

<span style=3D"font-family:arial;line-height:1.3em">git clone -b 4.4.0-rc3 =
git://<a href=3D"http://xenbits.xen.org/xen.git" target=3D"_blank">xenbits.=
xen.org/xen.git</a></span><br>&gt;=20

</pre><pre style=3D"padding:1em;border:1px solid rgb(221,221,221);backgroun=
d-color:rgb(250,250,250)"><span style=3D"line-height:1.3em;font-size:15px;f=
ont-family:arial">Had to take some additional steps here to get all of the =
libs
# apt-get install build-essential=20
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-open=
ssl-dev bzip2 module-init-tools transfig tgif=20
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install texinfo texlive-latex-base texlive-latex-recommended texli=
ve-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twi=
sted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml=
-findlib libx11-dev bison flex xz-utils libyajl-dev
</span><span style=3D"line-height:1.3em;font-size:15px;font-family:arial">#=
 apt-get install gettext
apt-get install </span><span style=3D"background-color:rgb(255,255,255);fon=
t-size:15px;line-height:19.5px"><font color=3D"#000000" face=3D"arial">liba=
io-dev
apt-get install libpixman-1-dev</font></span></pre><pre style=3D"line-heigh=
t:1.3em;font-size:15px;background-color:rgb(250,250,250);border:1px solid r=
gb(221,221,221);padding:1em"><span style=3D"line-height:1.3em;font-family:a=
rial">./configure
make dist
make install</span></pre></div></div></div></div></div><div><div><div class=
=3D"gmail_extra">&gt; <br>&gt; <br>&gt; <div class=3D"gmail_quote">&gt; On =
Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <span dir=3D"ltr">&lt;<a=
 href=3D"mailto:konrad.wilk@oracle.com" target=3D"_blank">konrad.wilk@oracl=
e.com</a>&gt;</span> wrote:<br>


&gt;=20

<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div>&gt; On Fri, Feb 07, 2014 at 04:29:18PM=
 -0500, Mike Neiderhauser wrote:<br>&gt;=20
&gt; I did not use the patch. =A0I was assuming it was already patched give=
n<br>&gt;=20
&gt; previous email. =A0Is the patch for qemu source or xen source?<br>&gt;=
=20
<br>&gt;=20
</div>It is for QEMU, but you are right - it should have been part<br>&gt;=
=20
of QEMU if you got the latest version of Xen-unstable.<br>&gt;=20
<br>&gt;=20
You didn&#39;t use some specific tag but just &#39;staging&#39; ?<br>&gt;=
=20
<div>&gt; <div>&gt; <br>&gt;=20
&gt;<br>&gt;=20
&gt;<br>&gt;=20
&gt; On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk &lt;<br>&gt;=20
&gt; <a href=3D"mailto:konrad.wilk@oracle.com" target=3D"_blank">konrad.wil=
k@oracle.com</a>&gt; wrote:<br>&gt;=20
&gt;<br>&gt;=20
&gt; &gt; On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote=
:<br>&gt;=20
&gt; &gt; &gt; Ok. I started ran the initscripts and now xl works.<br>&gt;=
=20
&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; However, I still see the same behavior as before:<br>&gt;=20
&gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
&gt; &gt; Did you use the patch that was mentioned in the URL?<br>&gt;=20
&gt; &gt;<br>&gt;=20
&gt; &gt; &gt; root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg<br>&gt;=20
&gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg<br>&gt;=20
&gt; &gt; &gt; libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: C=
onnection<br>&gt;=20
&gt; &gt; reset<br>&gt;=20
&gt; &gt; &gt; by peer<br>&gt;=20
&gt; &gt; &gt; libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connect=
ion error:<br>&gt;=20
&gt; &gt; &gt; Connection refused<br>&gt;=20
&gt; &gt; &gt; libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connect=
ion error:<br>&gt;=20
&gt; &gt; &gt; Connection refused<br>&gt;=20
&gt; &gt; &gt; libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connect=
ion error:<br>&gt;=20
&gt; &gt; &gt; Connection refused<br>&gt;=20
&gt; &gt; &gt; root@fiat:~# xl list<br>&gt;=20
&gt; &gt; &gt; Name =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 =A0ID =A0 Mem VCPUs State Time(s)<br>&gt;=20
&gt; &gt; &gt; Domain-0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 0 =A01024 =A0 =A0 1 =A0 =A0 r-----<br>&gt;=20
&gt; &gt; &gt; =A015.2<br>&gt;=20
&gt; &gt; &gt; ubuntu-hvm-0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 1 =A01025 =A0 =A0 1 =A0 =A0 ------<br>&gt;=20
&gt; &gt; &gt; 0.0<br>&gt;=20
&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -&gt; 0x23f3000
&gt; &gt; &gt; (XEN) PHYSICAL MEMORY ARRANGEMENT:
&gt; &gt; &gt; (XEN)  Dom0 alloc.:   0000000134000000-&gt;0000000138000000 (233690 pages to be allocated)
&gt; &gt; &gt; (XEN)  Init. ramdisk: 000000013d0da000-&gt;000000013ffffe00
&gt; &gt; &gt; (XEN) VIRTUAL MEMORY ARRANGEMENT:
&gt; &gt; &gt; (XEN)  Loaded kernel: ffffffff81000000-&gt;ffffffff823f3000
&gt; &gt; &gt; (XEN)  Init. ramdisk: ffffffff823f3000-&gt;ffffffff85318e00
&gt; &gt; &gt; (XEN)  Phys-Mach map: ffffffff85319000-&gt;ffffffff85519000
&gt; &gt; &gt; (XEN)  Start info:    ffffffff85519000-&gt;ffffffff855194b4
&gt; &gt; &gt; (XEN)  Page tables:   ffffffff8551a000-&gt;ffffffff85549000
&gt; &gt; &gt; (XEN)  Boot stack:    ffffffff85549000-&gt;ffffffff8554a000
&gt; &gt; &gt; (XEN)  TOTAL:         ffffffff80000000-&gt;ffffffff85800000
&gt; &gt; &gt; (XEN)  ENTRY ADDRESS: ffffffff81d261e0
&gt; &gt; &gt; (XEN) Dom0 has maximum 1 VCPUs
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -&gt; 0xffffffff81b2f000
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -&gt; 0xffffffff81d0f0f0
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -&gt; 0xffffffff81d252c0
&gt; &gt; &gt; (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -&gt; 0xffffffff81e6d000
&gt; &gt; &gt; (XEN) Scrubbing Free RAM: .............................done.
&gt; &gt; &gt; (XEN) Initial low memory virq threshold set at 0x4000 pages.
&gt; &gt; &gt; (XEN) Std. Loglevel: All
&gt; &gt; &gt; (XEN) Guest Loglevel: All
&gt; &gt; &gt; (XEN) Xen is relinquishing VGA console.
&gt; &gt; &gt; (XEN) *** Serial input -&gt; DOM0 (type 'CTRL-a' three times to switch input to Xen)
&gt; &gt; &gt; (XEN) Freed 260kB init memory.
&gt; &gt; &gt; (XEN) PCI add device 0000:00:00.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:01.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1a.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1c.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1d.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1e.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.0
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.2
&gt; &gt; &gt; (XEN) PCI add device 0000:00:1f.3
&gt; &gt; &gt; (XEN) PCI add device 0000:01:00.0
&gt; &gt; &gt; (XEN) PCI add device 0000:02:02.0
&gt; &gt; &gt; (XEN) PCI add device 0000:02:04.0
&gt; &gt; &gt; (XEN) PCI add device 0000:03:00.0
&gt; &gt; &gt; (XEN) PCI add device 0000:03:00.1
&gt; &gt; &gt; (XEN) PCI add device 0000:04:00.0
&gt; &gt; &gt; (XEN) PCI add device 0000:04:00.1
&gt; &gt; &gt; (XEN) PCI add device 0000:05:00.0
&gt; &gt; &gt; (XEN) PCI add device 0000:05:00.1
&gt; &gt; &gt; (XEN) PCI add device 0000:06:03.0
&gt; &gt; &gt; (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 &gt; 262400
&gt; &gt; &gt; (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
&gt; &gt; &gt; (d1) HVM Loader
&gt; &gt; &gt; (d1) Detected Xen v4.4-rc2
&gt; &gt; &gt; (d1) Xenbus rings @0xfeffc000, event channel 4
&gt; &gt; &gt; (d1) System requested SeaBIOS
&gt; &gt; &gt; (d1) CPU speed is 3093 MHz
&gt; &gt; &gt; (d1) Relocating guest memory for lowmem MMIO space disabled
&gt; &gt; &gt;
&gt; &gt; &gt;
&gt; &gt; &gt; Excerpt from /var/log/xen/*
&gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 40050000
&gt; &gt; &gt;
&gt; &gt; &gt;
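[Editorial note: the `Over-allocation for domain 1: 262401 &gt; 262400` line and the later `failed to populate ram` error are consistent with the device model asking to populate one page beyond the domain's page limit. A back-of-the-envelope sketch of where 262400 comes from, assuming 4 KiB pages and roughly 1 MiB of libxl slack on top of `memory=1024` (the slack amount is an assumption; note 262400 pages = 1025 MiB, matching the Mem column in `xl list`):]

```shell
# Rough check only, not authoritative: express the guest memory limit in pages.
memory_mib=1024      # from the guest config (memory=1024)
slack_mib=1          # assumed libxl headroom for the device model
page_kib=4
max_pages=$(( (memory_mib + slack_mib) * 1024 / page_kib ))
echo "$max_pages"    # should match the 262400 limit in the hypervisor log
```

Populating page 262401 then exceeds this limit, which is exactly what the hypervisor rejects.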
&gt; &gt; &gt; On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt; wrote:
&gt; &gt; &gt;
&gt; &gt; &gt; &gt; On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
&gt; &gt; &gt; &gt; &gt; I was able to compile and install Xen 4.4 RC3 on my host; however, I am
&gt; &gt; &gt; &gt; &gt; getting the error:
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; root@fiat:~/git/xen# xl list
&gt; &gt; &gt; &gt; &gt; xc: error: Could not obtain handle on privileged command interface
&gt; &gt; &gt; &gt; &gt; (2 = No such file or directory): Internal error
&gt; &gt; &gt; &gt; &gt; libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
&gt; &gt; &gt; &gt; &gt; No such file or directory
&gt; &gt; &gt; &gt; &gt; cannot init xl context
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; I searched for this and an article appears, but it is not the same
&gt; &gt; &gt; &gt; &gt; problem (as far as I can tell). Running any xl command generates a
&gt; &gt; &gt; &gt; &gt; similar error.
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; What can I do to fix this?
&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; You need to run the initscripts for Xen. I don't know what your distro
&gt; &gt; &gt; &gt; is, but they are usually put in /etc/init.d/rc.d/xen*
&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt;
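[Editorial note: the "Could not obtain handle on privileged command interface" failure above generally means the privcmd node is absent because the Xen support services were never started. A hedged sketch of the usual check; the node lives at `/proc/xen/privcmd` on older kernels and `/dev/xen/privcmd` on newer ones, and the upstream service name is `xencommons` (distro packaging varies):]

```shell
# Sketch only; service names and paths are distro-specific assumptions.
advise() {
    # $1 = path to the privcmd node to test for
    if [ -e "$1" ]; then
        echo "privcmd present; xl should work"
    else
        echo "privcmd missing; run e.g. /etc/init.d/xencommons start"
    fi
}
advise /proc/xen/privcmd
```

Once the services are up, `xl list` can open the libxc handle and the "cannot init xl context" error goes away.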
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; Regards
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser &lt;mikeneiderhauser@gmail.com&gt; wrote:
&gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt; Much. Do I need to install from source, or is there a package I can
&gt; &gt; &gt; &gt; &gt; &gt; install?
&gt; &gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt; Regards
&gt; &gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt; On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt; wrote:
&gt; &gt; &gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; I did not. I do not have the toolchain installed. I may have time
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; later today to try the patch. Are there any specific instructions on
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; how to patch the src, compile and install?
&gt; &gt; &gt; &gt; &gt; &gt;&gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; There actually should be a new version of Xen 4.4-rcX which will have
&gt; &gt; &gt; &gt; &gt; &gt;&gt; the fix. That might be easier for you?
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; Regards
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt; wrote:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Hi all,
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I am attempting to do a PCI passthrough of an Intel ET card (4x1G NIC)
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; to an HVM. I have been attempting to resolve this issue on the
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xen-users list, but it was advised to post this issue to this list.
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; (Initial Message -
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The machine I am using as host is a Dell PowerEdge server with a
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xeon E31220 with 4GB of RAM.
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; The possible bug is the following:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to /dev/pts/5 (label serial0)
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen: failed to populate ram at 40030000
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ....
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I believe it may be similar to this thread
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Additional info that may be helpful is below.
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; Did you try the patch?
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Please let me know if you need any additional information.
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Thanks in advance for any help provided!
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Regards
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Configuration file for Xen HVM
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Name (as appears in 'xl list')
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; name="ubuntu-hvm-0"
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Build settings (+ hardware)
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; builder='hvm'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; device_model='qemu-dm'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; memory=1024
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vcpus=2
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Virtual Interface
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Network bridge to USB NIC
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vif=['bridge=xenbr0']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH ###################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # PCI Permissive mode toggle
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci_permissive=1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All PCI Devices
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # First two ports on Intel 4x1G NIC
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=['03:00.0','03:00.1']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Last two ports on Intel 4x1G NIC
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=['04:00.0', '04:00.1']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # All ports on Intel 4x1G NIC
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Broadcom 2x1G NIC
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #pci=['05:00.0', '05:00.1']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ################### PCI PASSTHROUGH ###################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Disks
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk only
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from HDD first ('c')
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; boot="c"
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Hard disk with ISO
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Boot from ISO first ('d')
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #boot="d"
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # ACPI Enable
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; acpi=1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # HVM Event Modes
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_poweroff='destroy'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_reboot='restart'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; on_crash='restart'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Serial Console Configuration (Xen Console)
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; sdl=0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; serial='pty'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # VNC Configuration
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # Only reachable from localhost
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnc=1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vnclisten="0.0.0.0"
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vncpasswd=""
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Copied for xen-users list
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
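[Editorial note: the `xl -vvv create` output further down warns that xl ignores the xm-style `device_model` line in this config. A hedged sketch of the xl-style replacement; `device_model_version` and `device_model_override` are real xl options, but the particular values shown here are assumptions, not what the poster needs:]

```
# Replace the ignored xm-style line:
#   device_model='qemu-dm'
# with the xl-style selector, e.g.:
device_model_version='qemu-xen-traditional'
# or, to point at a specific binary (path assumed):
#device_model_override='/usr/lib/xen/bin/qemu-dm'
```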
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; It appears that it cannot obtain the RAM mapping for this PCI
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; device.
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; I rebooted the host and ran the script that assigns the PCI devices
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; to pciback. The output looks like:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# ./dev_mgmt.sh
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Loading Kernel Module 'xen-pciback'
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Calling function pciback_dev for:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:03:00.0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:03:00.0 from igb
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:03:00.0 to pciback
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:03:00.1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:03:00.1 from igb
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:03:00.1 to pciback
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:04:00.0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:04:00.0 from igb
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:04:00.0 to pciback
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:04:00.1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:04:00.1 from igb
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:04:00.1 to pciback
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:05:00.0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:05:00.0 from bnx2
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:05:00.0 to pciback
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; PCI DEVICE 0000:05:00.1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Unbinding 0000:05:00.1 from bnx2
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Binding 0000:05:00.1 to pciback
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Listing PCI Devices Available to Xen
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:03:00.0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:03:00.1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:04:00.0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:04:00.1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:05:00.0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; 0000:05:00.1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;
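[Editorial note: `dev_mgmt.sh` itself was not posted, but its output matches the usual sysfs sequence for handing a device to xen-pciback. A dry-run sketch that only prints the writes such a script typically performs; this is an approximation, not the poster's actual script:]

```shell
# Dry-run sketch of per-device pciback assignment: prints the sysfs writes
# instead of performing them (the real dev_mgmt.sh was not posted).
pciback_cmds() {
    bdf="$1"   # PCI address in domain:bus:device.function form
    echo "echo -n $bdf > /sys/bus/pci/devices/$bdf/driver/unbind"
    echo "echo -n $bdf > /sys/bus/pci/drivers/pciback/new_slot"
    echo "echo -n $bdf > /sys/bus/pci/drivers/pciback/bind"
}
for dev in 0000:03:00.0 0000:03:00.1 0000:04:00.0 0000:04:00.1; do
    pciback_cmds "$dev"
done
```

Piping these through `sh` (as root, after `modprobe xen-pciback`) would reproduce the Unbinding/Binding steps shown in the output above.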
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ###########################################################
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-0.cfg
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; WARNING: ignoring device_model directive.
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; WARNING: Use "device_model_override" instead if you really want a
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; non-default device_model
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; how=(nil) callback=(nil) poller=0x210c3c0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vdev=hda spec.backend=unknown
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vdev=hda, using backend phy
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; domain, skipping bootloader
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x210c728: deregister unregistered
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; free_memkb=2980
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; with 1 nodes, 4 cpus and 2980 KB free selected
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_parse_binary: memory: 0x100000 -&gt; 0x1a69a4
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: info: VIRTUAL MEMORY ARRANGEMENT:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   Loader:        0000000000100000-&gt;00000000001a69a4
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   Modules:       0000000000000000-&gt;0000000000000000
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   TOTAL:         0000000000000000-&gt;000000003f800000
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   ENTRY ADDRESS: 0000000000100608
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: info: PHYSICAL MEMORY ALLOCATION:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   4KB PAGES: 0x0000000000000200
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   2MB PAGES: 0x00000000000001fb
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;   1GB PAGES: 0x0000000000000000
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -&gt; 0x7f022c81682d
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; vdev=hda spec.backend=phy
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; register slotnum=3
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; inprogress: poller=0x210c3c0, flags=i
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=/local/domain/0/backend/vbd/2/768/state
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; state 1
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=/local/domain/0/backend/vbd/2/768/state
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=0x2112f48: deregister unregistered
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/block add
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /usr/bin/qemu-system-i386 with arguments:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 xenfv<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 -m<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm: =A0 1016<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; -drive<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_dm.c:1=
208:libxl__spawn_local_dm:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=3Ddis=
k,format=3Draw,cache=3Dwriteback<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/loc=
al/domain/0/device-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210c960<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/de=
vice-model/2/state token=3D3/1: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/de=
vice-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210c960<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/de=
vice-model/2/state token=3D3/1: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/de=
vice-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960 wpath=3D/loc=
al/domain/0/device-model/2/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210c960: deregister =
unregistered<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; connected<br>&gt;=20
&gt; &gt; &gt; &gt; to<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; &gt; &gt; type: qmp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;query-chardev&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;change&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 3,=
<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;devi=
ce&quot;: &quot;vnc&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;targ=
et&quot;: &quot;password&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;arg&=
quot;: &quot;&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;query-vnc&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 4<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:559:libxl__ev_xswatch_register:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/2:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; register<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; slotnum=3D3<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:647:devstate_watch_callback:<br>&gt;=20
&gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 still<br>&gt;=20
&gt; &gt; &gt; &gt; waiting<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; 1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:503:watchfd_callback: watch<br>&gt;=20
&gt; &gt; &gt; &gt; w=3D0x210e8a8<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; wpath=3D/local/domain/0/ba=
ckend/vif/2/0/state token=3D3/2: event<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; epath=3D/local/domain/0/ba=
ckend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:643:devstate_watch_callback:<br>&gt;=20
&gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /local/domain/0/backend/vi=
f/2/0/state wanted state 2 ok<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:596:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8 wpath=3D/loc=
al/domain/0/backend/vif/2/0/state<br>&gt;=20
&gt; &gt; &gt; &gt; token=3D3/2:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; deregister slotnum=3D3<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug:<br>&gt;=20
&gt; &gt; libxl_event.c:608:libxl__ev_xswatch_deregister:<br>&gt;=20
&gt; &gt; &gt; &gt; watch<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; w=3D0x210e8a8: deregister =
unregistered<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>&gt;=20
&gt; &gt; hotplug<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e online<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_device=
.c:959:device_hotplug: calling<br>&gt;=20
&gt; &gt; hotplug<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; script:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/xen/scripts/vif-bridg=
e add<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
707:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; connected<br>&gt;=20
&gt; &gt; &gt; &gt; to<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /var/run/xen/qmp-libxl-2<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; &gt; &gt; type: qmp<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;qmp_capabilities&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 1<=
br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
299:qmp_handle_response: message<br>&gt;=20
&gt; &gt; type:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; return<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_qmp.c:=
555:qmp_send_prepare: next qmp<br>&gt;=20
&gt; &gt; &gt; &gt; command: &#39;{<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;execute&quot=
;: &quot;device_add&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;id&quot;: 2,=
<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 &quot;arguments&qu=
ot;: {<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;driv=
er&quot;: &quot;xen-pci-passthrough&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;id&q=
uot;: &quot;pci-pt-03_00.0&quot;,<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 =A0 =A0 &quot;host=
addr&quot;: &quot;0000:03:00.0&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; =A0 =A0 }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; }<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; &#39;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
454:qmp_next: Socket read error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; reset<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; by peer<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: error: libxl_qmp.c:=
702:libxl__qmp_initialize:<br>&gt;=20
&gt; &gt; Connection<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; error:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Connection refused<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_pci.c:=
81:libxl__create_pci_backend:<br>&gt;=20
&gt; &gt; &gt; &gt; Creating pci<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; backend<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1737:libxl__ao_progress_report:<br>&gt;=20
&gt; &gt; ao<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; progress report: ignored<b=
r>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1569:libxl__ao_complete: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; complete, rc=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; libxl: debug: libxl_event.=
c:1541:libxl__ao__destroy: ao<br>&gt;=20
&gt; &gt; &gt; &gt; 0x210c360:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; destroy<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Daemon running with PID 32=
14<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: total allocations:793 total<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; releases:793<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: current allocations:0 maximum<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; allocations:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache current size:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; xc: debug: hypercall buffe=
r: cache hits:785 misses:4<br>&gt;=20
&gt; &gt; toobig:4<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; root@fiat:/var/log/xen# ca=
t qemu-dm-ubuntu-hvm-0.log<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; char device redirected to =
/dev/pts/5 (label serial0)<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; qemu: hardware error: xen:=
 failed to populate ram at<br>&gt;=20
&gt; &gt; 40030000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #0:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4=
.3-amd64&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>&gt;=20
&gt; &gt; Debian`<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D&quot;quiet splash&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot=
;&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;d=
om0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org" target=3D"_blank">Xen-devel@lists.xen.org</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
</div></div></blockquote></div><br>&gt; </div>
</div></div></div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>
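[Editorial note: the QMP exchanges quoted above are plain JSON objects sent over the Unix socket /var/run/xen/qmp-libxl-<domid>. As a rough illustration only (not libxl's actual C code), the commands the log shows can be built like this; the helper name `qmp_command` is hypothetical:]

```python
import json

def qmp_command(execute, cmd_id, arguments=None):
    """Build one QMP command string like those in the libxl log (hypothetical helper)."""
    cmd = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# The passthrough hot-add that immediately precedes the socket read error:
msg = qmp_command("device_add", 2, {
    "driver": "xen-pci-passthrough",
    "id": "pci-pt-03_00.0",
    "hostaddr": "0000:03:00.0",
})
print(msg)
```

[After the mandatory `qmp_capabilities` handshake, a command like this would be written to the socket and a `{"return": ...}` or `{"error": ...}` reply read back.]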

--001a11c1def2ebcb6304f1e8a35d--


--===============5404484784840195602==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5404484784840195602==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 17:46:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 17:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCByV-0004rq-RI; Sat, 08 Feb 2014 17:45:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WCBvy-0004rH-E3
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 17:43:19 +0000
Received: from [85.158.143.35:50421] by server-2.bemta-4.messagelabs.com id
	A7/34-10891-5BC66F25; Sat, 08 Feb 2014 17:43:17 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1391881391!4149662!1
X-Originating-IP: [209.85.128.169]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28513 invoked from network); 8 Feb 2014 17:43:13 -0000
Received: from mail-ve0-f169.google.com (HELO mail-ve0-f169.google.com)
	(209.85.128.169)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 17:43:13 -0000
Received: by mail-ve0-f169.google.com with SMTP id oy12so3884638veb.28
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 09:43:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=dM5AQJ+LaqxyBkGKidQ2vzWe5KEJrp4Md8WoaC0ChcY=;
	b=P1tOQ15EXs2cOH0rM+KdmLE8SCV+CqxaVRo1OGbuuTsy3Tfwl2AD0ebO2jEz7IbVyp
	fCdFU6AYKB6l3rDmfL/ICwf6coW8rePi/ijBjjfPp3+RlpikZV6Cbjil/QInbht5lhhO
	bfgfV1LlXpM3yXfESMs3/hIuyss/8vMWGdRpDY5oepN5Hst3l1chnfmP68KWXC3TULJZ
	6BLgHOURfjaDk7wAyABrGExIseDQi4BMvZSevrITA8HjSrPXOiaqRUgtyLEZSo8d4GhJ
	Fr4m20MoN3tiWKm1nogirkKJic9Qj3m9ulMKtRlitxntWPEHhF2txMop4sNlvMuB/YAR
	5gJA==
X-Received: by 10.220.99.7 with SMTP id s7mr15490562vcn.19.1391881391687; Sat,
	08 Feb 2014 09:43:11 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Sat, 8 Feb 2014 09:42:31 -0800 (PST)
In-Reply-To: <CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Sat, 8 Feb 2014 12:42:31 -0500
Message-ID: <CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
To: Konrad Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Sat, 08 Feb 2014 17:45:54 +0000
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5404484784840195602=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5404484784840195602==
Content-Type: multipart/alternative; boundary=001a11c1def2ebcb6304f1e8a35d

--001a11c1def2ebcb6304f1e8a35d
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Works like a charm.  I do not have physical access to the computer this
weekend to verify that the cards are isolated, but the HVM starts and
appears to be working well.

When do you think Xen 4.4 will be released?  The article I read mentioned
it will be released in 2014 (hinting towards the end of February).  I also
read 'When it is ready.'

Any timeline would be great.

Thanks again for your help!


On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser <
mikeneiderhauser@gmail.com> wrote:

> I will give it a shot.  Thanks!
>
>
> On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:
>
>>
>> ----- mikeneiderhauser@gmail.com wrote:
>> >
>> > I followed this site (
>> http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions).
>> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)
>> >
>>
>> Ah, so you are looking for the     xen_pt: Fix passthrough of device
>> with ROM.
>> which is not in the Xen 4.4-rc3 but in the master.
>>
>> One thing you can do is:
>>
>> cd xen/tools/qemu-xen-dir
>> git fetch upstream
>> git checkout origin/master
>> [you should see: "HEAD is now at 027c412... configure: Disable libtool
>> if -fPIE does not work with it (bug #1257099)"]
>>
>> Go back to main xen directory:
>> cd ../../../
>> ./configure
>> make
>> make install
>>
>> and you should now be using a newer version of QEMU with the fix.
>>
>>
>> >
>>
>> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
>> >
>>
>>
>> Had to take some additional steps here to get all of the libs:
>> # apt-get install build-essential
>> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
>> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
>> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
>> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
>> # apt-get install gettext
>> # apt-get install libaio-dev
>> # apt-get install libpixman-1-dev
>>
>> ./configure
>> make dist
>> make install
>>
>> >
>> >
>> >
>> > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <
>> konrad.wilk@oracle.com> wrote:
>> >
>>>
>>> > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
>>> > > I did not use the patch.  I was assuming it was already patched given
>>> > > the previous email.  Is the patch for qemu source or xen source?
>>> >
>>> >
>>> > It is for QEMU, but you are right - it should have been part
>>> > of QEMU if you got the latest version of Xen-unstable.
>>> >
>>> > You didn't use a specific tag, just 'staging'?
>>> >
>>> >
>>> >
>>> > >
>>> > >
>>> > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
>>> > > konrad.wilk@oracle.com> wrote:
>>> > >
>>> > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
>>> > > > > Ok. I ran the initscripts and now xl works.
>>> > > > >
>>> > > > > However, I still see the same behavior as before:
>>> > > > >
>>> > > >
>>> > > > Did you use the patch that was mentioned in the URL?
>>> > > >
>>> > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
>>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>>> > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>>> > > > > root@fiat:~# xl list
>>> > > > > Name                                        ID   Mem VCPUs State   Time(s)
>>> > > > > Domain-0                                     0  1024     1 r-----      15.2
>>> > > > > ubuntu-hvm-0                                 1  1025     1 ------       0.0
>>> > > > >
>>> > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
>>> > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
>>> > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690
>>> > > > > pages to be allocated)
>>> > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
>>> > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
>>> > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
>>> > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
>>> > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
>>> > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
>>> > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
>>> > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
>>> > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
>>> > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
>>> > > > > (XEN) Dom0 has maximum 1 VCPUs
>>> > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
>>> > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
>>> > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
>>> > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
>>> > > > > (XEN) Scrubbing Free RAM: .............................done.
>>> > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>>> > > > > (XEN) Std. Loglevel: All
>>> > > > > (XEN) Guest Loglevel: All
>>> > > > > (XEN) Xen is relinquishing VGA console.
>>> > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
>>> > > > > (XEN) Freed 260kB init memory.
>>> > > > > (XEN) PCI add device 0000:00:00.0
>>> > > > > (XEN) PCI add device 0000:00:01.0
>>> > > > > (XEN) PCI add device 0000:00:1a.0
>>> > > > > (XEN) PCI add device 0000:00:1c.0
>>> > > > > (XEN) PCI add device 0000:00:1d.0
>>> > > > > (XEN) PCI add device 0000:00:1e.0
>>> > > > > (XEN) PCI add device 0000:00:1f.0
>>> > > > > (XEN) PCI add device 0000:00:1f.2
>>> > > > > (XEN) PCI add device 0000:00:1f.3
>>> > > > > (XEN) PCI add device 0000:01:00.0
>>> > > > > (XEN) PCI add device 0000:02:02.0
>>> > > > > (XEN) PCI add device 0000:02:04.0
>>> > > > > (XEN) PCI add device 0000:03:00.0
>>> > > > > (XEN) PCI add device 0000:03:00.1
>>> > > > > (XEN) PCI add device 0000:04:00.0
>>> > > > > (XEN) PCI add device 0000:04:00.1
>>> > > > > (XEN) PCI add device 0000:05:00.0
>>> > > > > (XEN) PCI add device 0000:05:00.1
>>> > > > > (XEN) PCI add device 0000:06:03.0
>>> > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
>>> > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0
>>> > > > > (200 of 1024)
>>> > > > > (d1) HVM Loader
>>> > > > > (d1) Detected Xen v4.4-rc2
>>> > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
>>> > > > > (d1) System requested SeaBIOS
>>> > > > > (d1) CPU speed is 3093 MHz
>>> > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
>>> > > > >
>>> > > > >
>>> > > > > Excerpt from /var/log/xen/*
>>> > > > > qemu: hardware error: xen: failed to populate ram at 40050000
>>> > > > >
>>> > > > >
>>> > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
>>> > > > > konrad.wilk@oracle.com> wrote:
>>> > > > >
>>> > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser
>>> wrote:
>>> > > > > > > I was able to compile and install xen4.4 RC3 on my host,
>>> however I am
>>> > > > > > > getting the error:
>>> > > > > > >
>>> > > > > > > root@fiat:~/git/xen# xl list
>>> > > > > > > xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
>>> > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
>>> > > > > > > cannot init xl context
>>> > > > > > >
>>> > > > > > > I've google searched for this and an article appears, but it
>>> > > > > > > is not the same (as far as I can tell).  Running any xl
>>> > > > > > > command generates a similar error.
>>> > > > > > >
>>> > > > > > > What can I do to fix this?
>>> > > > > >
>>> > > > > >
>>> > > > > > You need to run the initscripts for Xen. I don't know what
>>> your distro
>>> > > > is,
>>> > > > > > but
>>> > > > > > they are usually put in /etc/init.d/rc.d/xen*
>>> > > > > >
>>> > > > > >
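The "Could not obtain handle on privileged command interface" error quoted above generally means the privcmd device node is absent or the Xen services were never started. A minimal check, assuming the usual /proc/xen/privcmd path and the xencommons initscript (`check_privcmd` is an illustrative helper; the path is a parameter so the sketch can be exercised anywhere):

```shell
# Report whether the Xen privileged command interface is available.
# On a real dom0 you would call: check_privcmd /proc/xen/privcmd
check_privcmd() {
    if [ -e "$1" ]; then
        echo "privcmd available: $1"
    else
        echo "privcmd missing: $1 (try the Xen initscripts, e.g. /etc/init.d/xencommons start)"
    fi
}
```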
>>> > > > > > >
>>> > > > > > > Regards
>>> > > > > > >
>>> > > > > > >
>>> > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
>>> > > > > > > mikeneiderhauser@gmail.com> wrote:
>>> > > > > > >
>>> > > > > > > > Much. Do I need to install from src or is there a package
>>> > > > > > > > I can install?
>>> > > > > > > >
>>> > > > > > > > Regards
>>> > > > > > > >
>>> > > > > > > >
>>> > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
>>> > > > > > > > konrad.wilk@oracle.com> wrote:
>>> > > > > > > >
>>> > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike
>>> Neiderhauser wrote:
>>> > > > > > > >> > I did not.  I do not have the toolchain installed.  I
>>> may have
>>> > > > time
>>> > > > > > > >> later
>>> > > > > > > >> > today to try the patch.  Are there any specific
>>> instructions on
>>> > > > how
>>> > > > > > to
>>> > > > > > > >> > patch the src, compile and install?
>>> > > > > > > >>
>>> > > > > > > >> There actually should be a new version of Xen 4.4-rcX
>>> which will
>>> > > > have
>>> > > > > > the
>>> > > > > > > >> fix. That might be easier for you?
>>> > > > > > > >> >
>>> > > > > > > >> > Regards
>>> > > > > > > >> >
>>> > > > > > > >> >
>>> > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
>>> > > > > > > >> > konrad.wilk@oracle.com> wrote:
>>> > > > > > > >> >
>>> > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
>>> Neiderhauser
>>> > > > wrote:
>>> > > > > > > >> > > > Hi all,
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > I am attempting to do a pci passthrough of an Intel
>>> > > > > > > >> > > > ET card (4x1G NIC) to a HVM.  I have been attempting
>>> > > > > > > >> > > > to resolve this issue on the xen-users list, but it
>>> > > > > > > >> > > > was advised to post this issue to this list.
>>> > > > > > > >> > > > (Initial Message -
>>> > > > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > The machine I am using as host is a Dell PowerEdge
>>> > > > > > > >> > > > server with a Xeon E31220 and 4GB of RAM.
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > The possible bug is the following:
>>> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>>> > > > > > > >> > > > ....
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > I believe it may be similar to this thread
>>> > > > > > > >> > > >
>>> > > > > > > >> > >
>>> > > > > > > >>
>>> > > > > >
>>> > > >
>>> http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Additional info that may be helpful is below.
>>> > > > > > > >> > >
>>> > > > > > > >> > > Did you try the patch?
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Please let me know if you need any additional
>>> information.
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Thanks in advance for any help provided!
>>> > > > > > > >> > > > Regards
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > # Configuration file for Xen HVM
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # HVM Name (as appears in 'xl list')
>>> > > > > > > >> > > > name="ubuntu-hvm-0"
>>> > > > > > > >> > > > # HVM Build settings (+ hardware)
>>> > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>>> > > > > > > >> > > > builder='hvm'
>>> > > > > > > >> > > > device_model='qemu-dm'
>>> > > > > > > >> > > > memory=1024
>>> > > > > > > >> > > > vcpus=2
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Virtual Interface
>>> > > > > > > >> > > > # Network bridge to USB NIC
>>> > > > > > > >> > > > vif=['bridge=xenbr0']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>>> > > > > > > >> > > > # PCI Permissive mode toggle
>>> > > > > > > >> > > > #pci_permissive=1
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # All PCI Devices
>>> > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # First two ports on Intel 4x1G NIC
>>> > > > > > > >> > > > #pci=['03:00.0','03:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
>>> > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # All ports on Intel 4x1G NIC
>>> > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Broadcom 2x1G NIC
>>> > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
>>> > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # HVM Disks
>>> > > > > > > >> > > > # Hard disk only
>>> > > > > > > >> > > > # Boot from HDD first ('c')
>>> > > > > > > >> > > > boot="c"
>>> > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Hard disk with ISO
>>> > > > > > > >> > > > # Boot from ISO first ('d')
>>> > > > > > > >> > > > #boot="d"
>>> > > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w', 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # ACPI Enable
>>> > > > > > > >> > > > acpi=1
>>> > > > > > > >> > > > # HVM Event Modes
>>> > > > > > > >> > > > on_poweroff='destroy'
>>> > > > > > > >> > > > on_reboot='restart'
>>> > > > > > > >> > > > on_crash='restart'
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # Serial Console Configuration (Xen Console)
>>> > > > > > > >> > > > sdl=0
>>> > > > > > > >> > > > serial='pty'
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > # VNC Configuration
>>> > > > > > > >> > > > # Only reachable from localhost
>>> > > > > > > >> > > > vnc=1
>>> > > > > > > >> > > > vnclisten="0.0.0.0"
>>> > > > > > > >> > > > vncpasswd=""
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > Copied for xen-users list
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > It appears that it cannot obtain the RAM mapping for
>>> > > > > > > >> > > > this PCI device.
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > I rebooted the host and assigned the pci devices to
>>> > > > > > > >> > > > pciback.  The output looks like:
>>> > > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
>>> > > > > > > >> > > > Loading Kernel Module 'xen-pciback'
>>> > > > > > > >> > > > Calling function pciback_dev for:
>>> > > > > > > >> > > > PCI DEVICE 0000:03:00.0
>>> > > > > > > >> > > > Unbinding 0000:03:00.0 from igb
>>> > > > > > > >> > > > Binding 0000:03:00.0 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:03:00.1
>>> > > > > > > >> > > > Unbinding 0000:03:00.1 from igb
>>> > > > > > > >> > > > Binding 0000:03:00.1 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:04:00.0
>>> > > > > > > >> > > > Unbinding 0000:04:00.0 from igb
>>> > > > > > > >> > > > Binding 0000:04:00.0 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:04:00.1
>>> > > > > > > >> > > > Unbinding 0000:04:00.1 from igb
>>> > > > > > > >> > > > Binding 0000:04:00.1 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:05:00.0
>>> > > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
>>> > > > > > > >> > > > Binding 0000:05:00.0 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > PCI DEVICE 0000:05:00.1
>>> > > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
>>> > > > > > > >> > > > Binding 0000:05:00.1 to pciback
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > Listing PCI Devices Available to Xen
>>> > > > > > > >> > > > 0000:03:00.0
>>> > > > > > > >> > > > 0000:03:00.1
>>> > > > > > > >> > > > 0000:04:00.0
>>> > > > > > > >> > > > 0000:04:00.1
>>> > > > > > > >> > > > 0000:05:00.0
>>> > > > > > > >> > > > 0000:05:00.1
>>> > > > > > > >> > > >
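For reference, the unbind/bind steps that dev_mgmt.sh performs can also be done by hand through sysfs. This is a dry-run sketch: the helper only prints the commands (it does not touch sysfs), and the BDF and `show_pciback_steps` name are illustrative:

```shell
# Print (not execute) the sysfs commands that move one PCI device
# from its current driver to xen-pciback, as dev_mgmt.sh does.
show_pciback_steps() {
    bdf="$1"
    echo "echo $bdf > /sys/bus/pci/devices/$bdf/driver/unbind"
    echo "echo $bdf > /sys/bus/pci/drivers/pciback/new_slot"
    echo "echo $bdf > /sys/bus/pci/drivers/pciback/bind"
}

show_pciback_steps 0000:03:00.0
```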
>>> > > > > > > >> > > >
>>> ###########################################################
>>> > > > > > > >> > > > root@fiat:~# xl -vvv create
>>> /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>>> > > > > > > >> > > > WARNING: ignoring device_model directive.
>>> > > > > > > >> > > > WARNING: Use "device_model_override" instead if you
>>> > > > > > > >> > > > really want a non-default device_model
>>> > > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create=
:
>>> ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> create:
>>> > > > > > > >> > > > how=3D(nil) callback=3D(nil) poller=3D0x210c3c0
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_device.c:257:libxl__device_disk_set_backend:
>>> > > > > > > >> Disk
>>> > > > > > > >> > > > vdev=3Dhda spec.backend=3Dunknown
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_device.c:296:libxl__device_disk_set_backend:
>>> > > > > > > >> Disk
>>> > > > > > > >> > > > vdev=3Dhda, using backend phy
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_create.c:675:initiate_domain_create:
>>> > > > running
>>> > > > > > > >> > > bootloader
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_bootloader.c:321:libxl__bootloader_run:
>>> > > > not
>>> > > > > > a PV
>>> > > > > > > >> > > > domain, skipping bootloader
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c728: deregister unregistered
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_numa.c:475:libxl__get_numa_candidate:
>>> > > > New
>>> > > > > > best
>>> > > > > > > >> NUMA
>>> > > > > > > >> > > > placement candidate found: nr_nodes=3D1, nr_cpus=
=3D4,
>>> > > > nr_vcpus=3D3,
>>> > > > > > > >> > > > free_memkb=3D2980
>>> > > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain:
>>> NUMA
>>> > > > placement
>>> > > > > > > >> > > candidate
>>> > > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>>> > > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=3D0x1000=
00
>>> > > > memsz=3D0xa69a4
>>> > > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 ->
>>> 0x1a69a4
>>> > > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>>> > > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a=
4
>>> > > > > > > >> > > >   Modules:       0000000000000000->000000000000000=
0
>>> > > > > > > >> > > >   TOTAL:         0000000000000000->000000003f80000=
0
>>> > > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
>>> > > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>>> > > > > > > >> > > >   4KB PAGES: 0x0000000000000200
>>> > > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
>>> > > > > > > >> > > >   1GB PAGES: 0x0000000000000000
>>> > > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at
>>> 0x7f022c779000 ->
>>> > > > > > > >> 0x7f022c81682d
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_device.c:257:libxl__device_disk_set_backend:
>>> > > > > > > >> Disk
>>> > > > > > > >> > > > vdev=3Dhda spec.backend=3Dphy
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:559:libxl__ev_xswatch_register:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x2112f48
>>> wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > token=3D3/0:
>>> > > > > > > >> > > > register slotnum=3D3
>>> > > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create=
:
>>> ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> > > > inprogress: poller=3D0x210c3c0, flags=3Di
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x2112f48
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> token=3D3/0:
>>> > > > event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:647:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>>> state 2 still
>>> > > > > > waiting
>>> > > > > > > >> > > state 1
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x2112f48
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> token=3D3/0:
>>> > > > event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:643:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>>> state 2 ok
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x2112f48
>>> wpath=3D/local/domain/0/backend/vbd/2/768/state
>>> > > > > > token=3D3/0:
>>> > > > > > > >> > > > deregister slotnum=3D3
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x2112f48: deregister unregistered
>>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug:
>>> calling
>>> > > > hotplug
>>> > > > > > > >> script:
>>> > > > > > > >> > > > /etc/xen/scripts/block add
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_d=
m:
>>> > > > Spawning
>>> > > > > > > >> > > device-model
>>> > > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > > /usr/bin/qemu-system-i386
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > -xen-domid
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   2
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -chardev
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > >
>>> > > > socket,id=3Dlibxl-cmd,path=3D/var/run/xen/qmp-libxl-2,server,nowa=
it
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > > chardev=3Dlibxl-cmd,mode=3Dcontrol
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > ubuntu-hvm-0
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > 0.0.0.0:0
>>> > > > > > > >> ,to=3D99
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -global
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> isa-fdc.driveA=3D
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -serial
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > cirrus
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -global
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> vga.vram_size_mb=3D8
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > order=3Dc
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > 2,maxcpus=3D2
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -device
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > > rtl8139,id=3Dnic0,netdev=3Dnet0,mac=3D00:16:3e:23:=
44:2c
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -netdev
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > >
>>> type=3Dtap,id=3Dnet0,ifname=3Dvif2.0-emu,script=3Dno,downscript=3Dno
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > -drive
>>> > > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_d=
m:
>>> > > > > > > >> > > >
>>> > > > > > > >> > >
>>> > > > > > > >>
>>> > > > > >
>>> > > >
>>> file=3D/dev/ubuntu-vg/ubuntu-hvm-0,if=3Dide,index=3D0,media=3Ddisk,form=
at=3Draw,cache=3Dwriteback
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:559:libxl__ev_xswatch_register:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c960
>>> wpath=3D/local/domain/0/device-model/2/state
>>> > > > > > token=3D3/1:
>>> > > > > > > >> > > register
>>> > > > > > > >> > > > slotnum=3D3
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210c960
>>> > > > > > > >> > > > wpath=3D/local/domain/0/device-model/2/state
>>> token=3D3/1: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/device-model/2/state
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210c960
>>> > > > > > > >> > > > wpath=3D/local/domain/0/device-model/2/state
>>> token=3D3/1: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/device-model/2/state
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c960
>>> wpath=3D/local/domain/0/device-model/2/state
>>> > > > > > token=3D3/1:
>>> > > > > > > >> > > > deregister slotnum=3D3
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210c960: deregister unregistered
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initializ=
e:
>>> > > > connected
>>> > > > > > to
>>> > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > > > type: qmp
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>>> > > > > > > >> > > >     "id": 1
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "query-chardev",
>>> > > > > > > >> > > >     "id": 2
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "change",
>>> > > > > > > >> > > >     "id": 3,
>>> > > > > > > >> > > >     "arguments": {
>>> > > > > > > >> > > >         "device": "vnc",
>>> > > > > > > >> > > >         "target": "password",
>>> > > > > > > >> > > >         "arg": ""
>>> > > > > > > >> > > >     }
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "query-vnc",
>>> > > > > > > >> > > >     "id": 4
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:559:libxl__ev_xswatch_register:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210e8a8
>>> wpath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > token=3D3/2:
>>> > > > > > > >> > > register
>>> > > > > > > >> > > > slotnum=3D3
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210e8a8
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vif/2/0/state
>>> token=3D3/2: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:647:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state
>>> 2 still
>>> > > > > > waiting
>>> > > > > > > >> state
>>> > > > > > > >> > > 1
>>> > > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback:
>>> watch
>>> > > > > > w=3D0x210e8a8
>>> > > > > > > >> > > > wpath=3D/local/domain/0/backend/vif/2/0/state
>>> token=3D3/2: event
>>> > > > > > > >> > > > epath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:643:devstate_watch_callback:
>>> > > > backend
>>> > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state
>>> 2 ok
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210e8a8
>>> wpath=3D/local/domain/0/backend/vif/2/0/state
>>> > > > > > token=3D3/2:
>>> > > > > > > >> > > > deregister slotnum=3D3
>>> > > > > > > >> > > > libxl: debug:
>>> > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>>> > > > > > watch
>>> > > > > > > >> > > > w=3D0x210e8a8: deregister unregistered
>>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug:
>>> calling
>>> > > > hotplug
>>> > > > > > > >> script:
>>> > > > > > > >> > > > /etc/xen/scripts/vif-bridge online
>>> > > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug:
>>> calling
>>> > > > hotplug
>>> > > > > > > >> script:
>>> > > > > > > >> > > > /etc/xen/scripts/vif-bridge add
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initializ=
e:
>>> > > > connected
>>> > > > > > to
>>> > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > > > type: qmp
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "qmp_capabilities",
>>> > > > > > > >> > > >     "id": 1
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response:
>>> message
>>> > > > type:
>>> > > > > > > >> return
>>> > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>>> next qmp
>>> > > > > > command: '{
>>> > > > > > > >> > > >     "execute": "device_add",
>>> > > > > > > >> > > >     "id": 2,
>>> > > > > > > >> > > >     "arguments": {
>>> > > > > > > >> > > >         "driver": "xen-pci-passthrough",
>>> > > > > > > >> > > >         "id": "pci-pt-03_00.0",
>>> > > > > > > >> > > >         "hostaddr": "0000:03:00.0"
>>> > > > > > > >> > > >     }
>>> > > > > > > >> > > > }
>>> > > > > > > >> > > > '
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read
>>> > > > > > > >> > > > error: Connection reset by peer
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
>>> > > > > > > >> > > > Connection error: Connection refused
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
>>> > > > > > > >> > > > Connection error: Connection refused
>>> > > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
>>> > > > > > > >> > > > Connection error: Connection refused
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_pci.c:81:libxl__create_pci_backend:
>>> > > > > > Creating pci
>>> > > > > > > >> > > backend
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:1737:libxl__ao_progress_report:
>>> > > > ao
>>> > > > > > > >> 0x210c360:
>>> > > > > > > >> > > > progress report: ignored
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:1569:libxl__ao_complete: ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> > > > complete, rc=3D0
>>> > > > > > > >> > > > libxl: debug:
>>> libxl_event.c:1541:libxl__ao__destroy: ao
>>> > > > > > 0x210c360:
>>> > > > > > > >> > > destroy
>>> > > > > > > >> > > > Daemon running with PID 3214
>>> > > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793
>>> total
>>> > > > > > > >> releases:793
>>> > > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0
>>> maximum
>>> > > > > > > >> allocations:4
>>> > > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
>>> > > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses=
:4
>>> > > > toobig:4
>>> > > > > > > >> > > >
>>> > > > > > > >> > > >
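The QMP exchange in the log above is plain JSON over a UNIX socket: libxl first sends the mandatory `qmp_capabilities` handshake, then a `device_add` naming the `xen-pci-passthrough` driver. A minimal Python sketch of the same two messages (illustrative only — the socket path `/var/run/xen/qmp-libxl-2` and the device id are copied from the log; this is not libxl's actual code):

```python
import json
import socket

def qmp_command(execute, cmd_id, arguments=None):
    """Build one QMP command in the same shape libxl logs."""
    msg = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg)

# The capabilities handshake must be the first command on any QMP connection.
caps = qmp_command("qmp_capabilities", 1)

# Then the passthrough request, as seen in the log for host device 0000:03:00.0.
attach = qmp_command("device_add", 2, {
    "driver": "xen-pci-passthrough",
    "id": "pci-pt-03_00.0",
    "hostaddr": "0000:03:00.0",
})

def send_to_qmp(path, payloads):
    """Connect to a QMP UNIX socket (e.g. /var/run/xen/qmp-libxl-2),
    consume the greeting, and send each payload in turn. Untested sketch:
    it assumes each reply fits in one recv()."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.recv(4096)              # {"QMP": {...}} greeting from QEMU
        for p in payloads:
            s.sendall(p.encode())
            s.recv(4096)          # {"return": ...} or {"error": ...}
```

The "Connection reset by peer" in the log means QEMU died mid-exchange (see the qemu-dm log further down), so the subsequent reconnect attempts are refused.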
>>> > > > > > > >> > > > ###########################################################
>>> > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>>> > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>>> > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>>> > > > > > > >> > > > CPU #0:
>>> > > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>>> > > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>>> > > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>>> > > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
>>> > > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
>>> > > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
>>> > > > > > > >> > > > GDT=     00000000 0000ffff
>>> > > > > > > >> > > > IDT=     00000000 0000ffff
>>> > > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>>> > > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>>> > > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
>>> > > > > > > >> > > > EFER=0000000000000000
>>> > > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>>> > > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>>> > > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>>> > > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>>> > > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>>> > > > > > > >> > > > XMM00=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM01=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM02=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM03=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM04=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM05=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM06=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM07=00000000000000000000000000000000
>>> > > > > > > >> > > > CPU #1:
>>> > > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>>> > > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>>> > > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>>> > > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
>>> > > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
>>> > > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
>>> > > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
>>> > > > > > > >> > > > GDT=     00000000 0000ffff
>>> > > > > > > >> > > > IDT=     00000000 0000ffff
>>> > > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>>> > > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>>> > > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
>>> > > > > > > >> > > > EFER=0000000000000000
>>> > > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>>> > > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>>> > > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>>> > > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>>> > > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>>> > > > > > > >> > > > XMM00=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM01=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM02=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM03=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM04=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM05=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM06=00000000000000000000000000000000
>>> > > > > > > >> > > > XMM07=00000000000000000000000000000000
>>> > > > > > > >> > > >
>>> > > > > > > >> > > > ###########################################################
>>> > > > > > > >> > > > /etc/default/grub
>>> > > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
>>> > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
>>> > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
>>> > > > > > > >> > > > GRUB_TIMEOUT=10
>>> > > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
>>> > > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
>>> > > > > > > >> > > > GRUB_CMDLINE_LINUX=""
>>> > > > > > > >> > > > # biosdevname=0
>>> > > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
>>> > > > > > > >> > > > _______________________________________________
>>> > > > > > > >> > > > Xen-devel mailing list
>>> > > > > > > >> > > > Xen-devel@lists.xen.org
>>> > > > > > > >> > > > http://lists.xen.org/xen-devel
Works like a charm.  I do not have physical access to the computer this weekend to verify that the cards are isolated, but the HVM starts and appears to be working well.

When do you think Xen 4.4 will be released?  The article I read mentioned it will be released in 2014 (hinting towards the end of February).  I also read 'When it is ready.'

Any timeline would be great.

Thanks again for your help!

On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:

> I will give it a shot.  Thanks!
>
> On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@oracle.com> wrote:

----- mikeneiderhauser@gmail.com wrote:
> I followed this site (http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
> and then followed (http://wiki.xen.org/wiki/Compiling_Xen_From_Source)

Ah, so you are looking for
    xen_pt: Fix passthrough of device with ROM.
which is not in Xen 4.4-rc3 but is in master.

One thing you can do is:

    cd xen/tools/qemu-xen-dir
    git fetch upstream
    git checkout origin/master

[you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]

Go back to the main xen directory:

    cd ../../../
    ./configure
    make
    make install

and you should now be using a newer version of QEMU with the fix.

> git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
>
> Had to take some additional steps here to get all of the libs:
> # apt-get install build-essential
> # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
> # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
> # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
> # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
> # apt-get install gettext
> apt-get install libaio-dev
> apt-get install libpixman-1-dev
>
> ./configure
> make dist
> make install

On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> I did not use the patch.  I was assuming it was already patched given
> previous email.  Is the patch for qemu source or xen source?

It is for QEMU, but you are right - it should have been part
of QEMU if you got the latest version of Xen-unstable.

You didn't use some specific tag but just 'staging'?

> On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>
> > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > Ok. I ran the initscripts and now xl works.
> > >
> > > However, I still see the same behavior as before:
> >
> > Did you use the patch that was mentioned in the URL?
> >
> > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > root@fiat:~# xl list
> > > Name                                        ID   Mem VCPUs      State   Time(s)
> > > Domain-0                                     0  1024     1     r-----      15.2
> > > ubuntu-hvm-0                                 1  1025     1     ------       0.0
> > >
> > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > (XEN) Dom0 has maximum 1 VCPUs
> > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > (XEN) Scrubbing Free RAM: .............................done.
> > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > (XEN) Std. Loglevel: All
> > > (XEN) Guest Loglevel: All
> > > (XEN) Xen is relinquishing VGA console.
> > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > (XEN) Freed 260kB init memory.
> > > (XEN) PCI add device 0000:00:00.0
> > > (XEN) PCI add device 0000:00:01.0
> > > (XEN) PCI add device 0000:00:1a.0
> > > (XEN) PCI add device 0000:00:1c.0
> > > (XEN) PCI add device 0000:00:1d.0
> > > (XEN) PCI add device 0000:00:1e.0
> > > (XEN) PCI add device 0000:00:1f.0
> > > (XEN) PCI add device 0000:00:1f.2
> > > (XEN) PCI add device 0000:00:1f.3
> > > (XEN) PCI add device 0000:01:00.0
> > > (XEN) PCI add device 0000:02:02.0
> > > (XEN) PCI add device 0000:02:04.0
> > > (XEN) PCI add device 0000:03:00.0
> > > (XEN) PCI add device 0000:03:00.1
> > > (XEN) PCI add device 0000:04:00.0
> > > (XEN) PCI add device 0000:04:00.1
> > > (XEN) PCI add device 0000:05:00.0
> > > (XEN) PCI add device 0000:05:00.1
> > > (XEN) PCI add device 0000:06:03.0
> > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > (d1) HVM Loader
> > > (d1) Detected Xen v4.4-rc2
> > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > (d1) System requested SeaBIOS
> > > (d1) CPU speed is 3093 MHz
> > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > >
> > > Excerpt from /var/log/xen/*
> > > qemu: hardware error: xen: failed to populate ram at 40050000
> > >
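One aside on the "Over-allocation for domain 1: 262401 > 262400" line above: 262400 pages of 4 KiB each is exactly 1025 MiB, which matches the Mem column xl list reports for this memory=1024 guest — the toolstack caps the domain at the configured size plus roughly 1 MiB of slack, and the failing request then overshot that cap by a single page. The arithmetic, as a quick sketch (the slack interpretation is a reading of these numbers, not something stated in the thread):

```python
PAGE_SIZE = 4096           # bytes per x86 page
MiB = 1 << 20

limit_pages = 262400       # cap from the page_alloc.c message
limit_mib = limit_pages * PAGE_SIZE // MiB

print(limit_mib)               # 1025 -> memory=1024 plus 1 MiB of slack
print(262401 - limit_pages)    # 1    -> the request overshot by one page
```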
> > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > >
> > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > I was able to compile and install xen4.4 RC3 on my host, however I am
> > > > > getting the error:
> > > > >
> > > > > root@fiat:~/git/xen# xl list
> > > > > xc: error: Could not obtain handle on privileged command interface (2 = No such file or directory): Internal error
> > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle: No such file or directory
> > > > > cannot init xl context
> > > > >
> > > > > I've searched Google for this and an article appears, but it is not the same
> > > > > (as far as I can tell).  Running any xl command generates a similar error.
> > > > >
> > > > > What can I do to fix this?
> > > >
> > > > You need to run the initscripts for Xen. I don't know what your distro is,
> > > > but they are usually put in /etc/init.d/rc.d/xen*
> > > >
> > > > > Regards
> > > > >
> > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <mikeneiderhauser@gmail.com> wrote:
> > > > >
> > > > > > Much. Do I need to install from src or is there a package I can install?
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > >
> > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > >> > I did not.  I do not have the toolchain installed.  I may have time later
> > > > > >> > today to try the patch.  Are there any specific instructions on how to
> > > > > >> > patch the src, compile and install?
> > > > > >>
> > > > > >> There actually should be a new version of Xen 4.4-rcX which will have the
> > > > > >> fix. That might be easier for you?
> > > > > >> >
> > > > > >> > Regards
> > > > > >> >
> > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > > > > >> >
> > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > > > >> > > > Hi all,
> > > > > >> > > >
> > > > > >> > > > I am attempting to do a pci passthrough of an Intel ET card (4x1G NIC) to a
> > > > > >> > > > HVM.  I have been attempting to resolve this issue on the xen-users list,
> > > > > >> > > > but it was advised to post this issue to this list. (Initial Message -
> > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > >> > > >
> > > > > >> > > > The machine I am using as host is a Dell Poweredge server with a Xeon
> > > > > >> > > > E31220 with 4GB of ram.
> > > > > >> > > >
> > > > > >> > > > The possible bug is the following:
> > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > >> > > > ....
> > > > > >> > > >
> > > > > >> > > > I believe it may be similar to this thread
> > > > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > >> > > >
> > > > > >> > > > Additional info that may be helpful is below.
> > > > > >> > >
> > > > > >> > > Did you try the patch?
> > > > > >> > > >
> > > > > >> > > > Please let me know if you need any additional information.
> > > > > >> > > >
> > > > > >> > > > Thanks in advance for any help provided!
> > > > > >> > > > Regards
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > >> > > > ###########################################################
> > > > > >> > > > # Configuration file for Xen HVM
> > > > > >> > > >
> > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > >> > > > name="ubuntu-hvm-0"
> > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > >> > > > builder='hvm'
> > > > > >> > > > device_model='qemu-dm'
> > > > > >> > > > memory=1024
> > > > > >> > > > vcpus=2
> > > > > >> > > >
> > > > > >> > > > # Virtual Interface
> > > > > >> > > > # Network bridge to USB NIC
> > > > > >> > > > vif=['bridge=xenbr0']
> > > > > >> > > >
> > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > >> > > > # PCI Permissive mode toggle
> > > > > >> > > > #pci_permissive=1
> > > > > >> > > >
> > > > > >> > > > # All PCI Devices
> > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > > > > >> > > >
> > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > >> > > >
> > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > >> > > >
> > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > >> > > >
> > > > > >> > > > # Broadcom 2x1G NIC
> > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > >> > > >
> > > > > >> > > > # HVM Disks
> > > > > >> > > > # Hard disk only
> > > > > >> > > > # Boot from HDD first ('c')
> > > > > >> > > > boot="c"
> > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > >> > > >
> > > > > >> > > > # Hard disk with ISO
> > > > > >> > > > # Boot from ISO first ('d')
> > > > > >> > > > #boot="d"
> > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > >> > > >
> > > > > >> > > > # ACPI Enable
> > > > > >> > > > acpi=1
> > > > > >> > > > # HVM Event Modes
> > > > > >> > > > on_poweroff='destroy'
> > > > > >> > > > on_reboot='restart'
> > > > > >> > > > on_crash='restart'
> > > > > >> > > >
> > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > >> > > > sdl=0
> > > > > >> > > > serial='pty'
> > > > > >> > > >
> > > > > >> > > > # VNC Configuration
> > > > > >> > > > # Only reachable from localhost
> > > > > >> > > > vnc=1
> > > > > >> > > > vnclisten="0.0.0.0"
> > > > > >> > > > vncpasswd=""
> > > > > >> > > >
> > > > > >> > > > ###########################################################
> > > > > >> > > > Copied for xen-users list
> > > > > >> > > > ##########################
#################################<br>&gt;=20
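Since the `pci = [...]` entries in the config above are plain `bus:dev.fn` strings, a typo in one of them only surfaces at `xl create` time. A small sketch that pre-checks the list (a hypothetical helper, not part of the original config or thread):

```python
import re

# A short BDF such as '03:00.1'; xl also accepts the domain-qualified
# form '0000:03:00.1'. (Hypothetical helper, not from the thread.)
BDF_RE = re.compile(r'^(?:[0-9a-fA-F]{4}:)?[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.[0-7]$')

def bad_bdfs(pci):
    """Return the entries that do not look like valid PCI addresses."""
    return [bdf for bdf in pci if not BDF_RE.match(bdf)]

pci = ['03:00.0', '03:00.1', '04:00.0', '04:00.1']
print(bad_bdfs(pci))         # -> [] when every entry parses
print(bad_bdfs(['3:00.0']))  # -> ['3:00.0'] (bus needs two hex digits)
```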
>
> It appears that it cannot obtain the RAM mapping for this PCI device.
>
> I rebooted the host and re-ran the script that assigns the PCI devices to
> pciback. The output looks like:
> root@fiat:~# ./dev_mgmt.sh
> Loading Kernel Module 'xen-pciback'
> Calling function pciback_dev for:
> PCI DEVICE 0000:03:00.0
> Unbinding 0000:03:00.0 from igb
> Binding 0000:03:00.0 to pciback
>
> PCI DEVICE 0000:03:00.1
> Unbinding 0000:03:00.1 from igb
> Binding 0000:03:00.1 to pciback
>
> PCI DEVICE 0000:04:00.0
> Unbinding 0000:04:00.0 from igb
> Binding 0000:04:00.0 to pciback
>
> PCI DEVICE 0000:04:00.1
> Unbinding 0000:04:00.1 from igb
> Binding 0000:04:00.1 to pciback
>
> PCI DEVICE 0000:05:00.0
> Unbinding 0000:05:00.0 from bnx2
> Binding 0000:05:00.0 to pciback
>
> PCI DEVICE 0000:05:00.1
> Unbinding 0000:05:00.1 from bnx2
> Binding 0000:05:00.1 to pciback
>
> Listing PCI Devices Available to Xen
> 0000:03:00.0
> 0000:03:00.1
> 0000:04:00.0
> 0000:04:00.1
> 0000:05:00.0
> 0000:05:00.1
>
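The unbind/bind messages above correspond to the usual sysfs handover: detach the device from its native driver (igb/bnx2), tell pciback about the slot, then bind it. A sketch of those writes, assuming the standard `/sys/bus/pci` layout and the `pciback` driver name shown in the listing (printed here as a dry run; actually performing the writes needs root on a pciback-enabled host):

```python
from pathlib import Path

SYSFS = Path('/sys/bus/pci')

def pciback_steps(bdf):
    """The three sysfs writes that hand one device over to pciback.
    Returned as (path, value) pairs; performing them requires root."""
    return [
        (SYSFS / 'devices' / bdf / 'driver' / 'unbind', bdf),  # detach native driver
        (SYSFS / 'drivers' / 'pciback' / 'new_slot', bdf),     # let pciback claim the slot
        (SYSFS / 'drivers' / 'pciback' / 'bind', bdf),         # attach to pciback
    ]

for path, value in pciback_steps('0000:03:00.0'):
    print(f'echo {value} > {path}')
```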
> ###########################################################
> root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> WARNING: ignoring device_model directive.
> WARNING: Use "device_model_override" instead if you really want a non-default device_model
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create: how=(nil) callback=(nil) poller=0x210c3c0
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV domain, skipping bootloader
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c728: deregister unregistered
> libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3, free_memkb=2980
> libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement candidate with 1 nodes, 4 cpus and 2980 KB free selected
> xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> xc: info: VIRTUAL MEMORY ARRANGEMENT:
>   Loader:        0000000000100000->00000000001a69a4
>   Modules:       0000000000000000->0000000000000000
>   TOTAL:         0000000000000000->000000003f800000
>   ENTRY ADDRESS: 0000000000100608
> xc: info: PHYSICAL MEMORY ALLOCATION:
>   4KB PAGES: 0x0000000000000200
>   2MB PAGES: 0x00000000000001fb
>   1GB PAGES: 0x0000000000000000
> xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360: inprogress: poller=0x210c3c0, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event epath=/local/domain/0/backend/vbd/2/768/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x2112f48: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/block add
> libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   /usr/bin/qemu-system-i386
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   chardev=libxl-cmd,mode=control
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-chardev",
>     "id": 2
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "change",
>     "id": 3,
>     "arguments": {
>         "device": "vnc",
>         "target": "password",
>         "arg": ""
>     }
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "query-vnc",
>     "id": 4
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
> libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "qmp_capabilities",
>     "id": 1
> }
> '
> libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>     "execute": "device_add",
>     "id": 2,
>     "arguments": {
>         "driver": "xen-pci-passthrough",
>         "id": "pci-pt-03_00.0",
>         "hostaddr": "0000:03:00.0"
>     }
> }
> '
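The failing step is an ordinary QMP exchange over the device model's UNIX socket (`/var/run/xen/qmp-libxl-2` here): the mandatory `qmp_capabilities` handshake, then the `device_add` shown above. A stripped-down sketch of what libxl sends (a hypothetical client for illustration only; the real code lives in `libxl_qmp.c`):

```python
import json
import socket

def qmp_passthrough_cmds(hostaddr, dev_id):
    """The two QMP commands from the log above: the capabilities
    handshake, then device_add for one host PCI device."""
    return [
        {"execute": "qmp_capabilities", "id": 1},
        {"execute": "device_add", "id": 2,
         "arguments": {"driver": "xen-pci-passthrough",
                       "id": dev_id,
                       "hostaddr": hostaddr}},
    ]

def send(sock_path, cmds):
    """Fire the commands at the device model's QMP socket (needs a live domain)."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.recv(4096)  # discard the QMP greeting banner
        for cmd in cmds:
            s.sendall(json.dumps(cmd).encode() + b"\n")
            print(s.recv(4096).decode())  # one response (or error) per command

if __name__ == "__main__":
    cmds = qmp_passthrough_cmds("0000:03:00.0", "pci-pt-03_00.0")
    # send("/var/run/xen/qmp-libxl-2", cmds)  # only on a host with the domain running
    print(json.dumps(cmds[1], indent=4))
```

In the log, QEMU dies while servicing this `device_add` (the "Connection reset by peer" is libxl noticing the socket closed), which is why the subsequent reconnect attempts get "Connection refused".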
> libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
> libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
> Daemon running with PID 3214
> xc: debug: hypercall buffer: total allocations:793 total releases:793
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> xc: debug: hypercall buffer: cache current size:4
> xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
>
> ###########################################################
> root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> char device redirected to /dev/pts/5 (label serial0)
> qemu: hardware error: xen: failed to populate ram at 40030000
> CPU #0:
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CPU #1:<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EAX=3D00000000 EBX=3D00000=
000 ECX=3D00000000 EDX=3D00000633<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ESI=3D00000000 EDI=3D00000=
000 EBP=3D00000000 ESP=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EIP=3D0000fff0 EFL=3D00000=
002 [-------] CPL=3D0 II=3D0 A20=3D1 SMM=3D0<br>&gt;=20
&gt; &gt; HLT=3D1<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ES =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CS =3Df000 ffff0000 0000ff=
ff 00009b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; SS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GS =3D0000 00000000 0000ff=
ff 00009300<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; LDT=3D0000 00000000 0000ff=
ff 00008200<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; TR =3D0000 00000000 0000ff=
ff 00008b00<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; IDT=3D =A0 =A0 00000000 00=
00ffff<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; CR0=3D60000010 CR2=3D00000=
000 CR3=3D00000000 CR4=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR0=3D00000000 DR1=3D00000=
000 DR2=3D00000000 DR3=3D00000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; DR6=3Dffff0ff0 DR7=3D00000=
400<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; EFER=3D0000000000000000<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FCW=3D037f FSW=3D0000 [ST=
=3D0] FTW=3D00 MXCSR=3D00001f80<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR0=3D0000000000000000 00=
00 FPR1=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR2=3D0000000000000000 00=
00 FPR3=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR4=3D0000000000000000 00=
00 FPR5=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; FPR6=3D0000000000000000 00=
00 FPR7=3D0000000000000000 0000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM00=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM01=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM02=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM03=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM04=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM05=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM06=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; XMM07=3D000000000000000000=
00000000000000<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; ##########################=
#################################<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; /etc/default/grub<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DEFAULT=3D&quot;Xen 4=
.3-amd64&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT=3D0<br=
>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_HIDDEN_TIMEOUT_QUIET=
=3Dtrue<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_TIMEOUT=3D10<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_DISTRIBUTOR=3D`lsb_re=
lease -i -s 2&gt; /dev/null || echo<br>&gt;=20
&gt; &gt; Debian`<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX_DEFAULT=
=3D&quot;quiet splash&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_LINUX=3D&quot=
;&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; # biosdevname=3D0<br>&gt;=
=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; GRUB_CMDLINE_XEN=3D&quot;d=
om0_mem=3D1024M dom0_max_vcpus=3D1&quot;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; __________________________=
_____________________<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; Xen-devel mailing list<br>=
&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"mailto:Xen-deve=
l@lists.xen.org" target=3D"_blank">Xen-devel@lists.xen.org</a><br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt; &gt; <a href=3D"http://lists.xe=
n.org/xen-devel" target=3D"_blank">http://lists.xen.org/xen-devel</a><br>&g=
t;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;&gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt; &gt; &gt;<br>&gt;=20
&gt; &gt;<br>&gt;=20
</div></div></blockquote></div><br>&gt; </div>
</div></div></div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>

--001a11c1def2ebcb6304f1e8a35d--


--===============5404484784840195602==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5404484784840195602==--


From xen-devel-bounces@lists.xen.org Sat Feb 08 20:03:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 20:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCE6x-0002Om-Rd; Sat, 08 Feb 2014 20:02:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WCE6v-0002Of-Dz
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 20:02:46 +0000
Received: from [85.158.137.68:14284] by server-9.bemta-3.messagelabs.com id
	75/D9-10184-46D86F25; Sat, 08 Feb 2014 20:02:44 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391889762!551924!1
X-Originating-IP: [62.142.5.110]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTEwID0+IDkyMjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2642 invoked from network); 8 Feb 2014 20:02:43 -0000
Received: from emh04.mail.saunalahti.fi (HELO emh04.mail.saunalahti.fi)
	(62.142.5.110)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Feb 2014 20:02:43 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh04.mail.saunalahti.fi (Postfix) with ESMTP id 629A41A2655;
	Sat,  8 Feb 2014 22:02:42 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 4AE0636C01F; Sat,  8 Feb 2014 22:02:42 +0200 (EET)
Date: Sat, 8 Feb 2014 22:02:42 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140208200242.GT2924@reaktio.net>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 08, 2014 at 12:42:31PM -0500, Mike Neiderhauser wrote:
>    Works like a charm.  I do not have physical access to the computer this
>    weekend to verify that the cards are isolated, but the HVM starts and
>    appears to be working well.
>    When do you think Xen 4.4 will be released?  The article I read mentioned
>    it will be released in 2014 (hinting towards the end of February).  I also
>    read 'When it is ready.'
>    Any timeline would be great.
>    Thanks again for your help!
> 

I *assume* there is going to be at least one more rc..

-- Pasi

>    On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser
>    <[1]mikeneiderhauser@gmail.com> wrote:
> 
>      I will give it a shot.  Thanks!
> 
>      On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <[2]konrad.wilk@oracle.com>
>      wrote:
> 
>        ----- [3]mikeneiderhauser@gmail.com wrote:
>        >
>        > I followed this site
>        > ([4]http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions)
>        > and then followed
>        > ([5]http://wiki.xen.org/wiki/Compiling_Xen_From_Source).
>        >
>        Ah, so you are looking for the "xen_pt: Fix passthrough of device
>        with ROM" fix, which is not in Xen 4.4-rc3 but is in master.
>        One thing you can do is:
>        cd xen/tools/qemu-xen-dir
>        git fetch upstream
>        git checkout origin/master
>        [you should see: "HEAD is now at 027c412... configure: Disable libtool
>        if -fPIE does not work with it (bug #1257099)"]
>        Go back to main xen directory:
>        cd ../../../
>        ./configure
>        make
>        make install
>        and you should now be using a newer version of QEMU with the fix.
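For convenience, Konrad's steps above can be collected into a single helper. This is only a sketch: it assumes the xen.git checkout layout from the RC3 test instructions, and that an `upstream` remote is already configured inside tools/qemu-xen-dir, as the original commands imply.

```shell
# Sketch of the procedure above as one function; assumes the xen.git
# layout from the RC3 test instructions and an existing 'upstream'
# remote inside tools/qemu-xen-dir.
update_qemu_xen() (
    set -e
    cd xen/tools/qemu-xen-dir
    git fetch upstream            # pull the latest qemu-xen history
    git checkout origin/master    # master carries the xen_pt ROM fix
    cd ../../../
    ./configure
    make
    make install
)
# Invoke from the directory containing the xen clone (install needs root):
# update_qemu_xen
```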
>        >
> 
> 
>  git clone -b 4.4.0-rc3 git://[6]xenbits.xen.org/xen.git
>  >
> 
> 
>  Had to take some additional steps here to get all of the libs
>  # apt-get install build-essential
>  # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
>  # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
>  # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
>  # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
>  # apt-get install gettext
>  # apt-get install libaio-dev
>  # apt-get install libpixman-1-dev
> 
>  ./configure
>  make dist
>  make install
> 
>        >
>        >
>        >
>        > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk
>        <[7]konrad.wilk@oracle.com> wrote:
>        >
> 
>          > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
>          > > I did not use the patch.  I was assuming it was already patched
>          > > given the previous email.  Is the patch for qemu source or xen
>          > > source?
>          >
>          >
>          > It is for QEMU, but you are right - it should have been part
>          > of QEMU if you got the latest version of Xen-unstable.
>          >
>          > You didn't use some specific tag but just 'staging' ?
>          >
>          >
>          >
>          > >
>          > >
>          > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
>          > > [8]konrad.wilk@oracle.com> wrote:
>          > >
>          > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
>          > > > > Ok.  I ran the initscripts and now xl works.
>          > > > >
>          > > > > However, I still see the same behavior as before:
>          > > > >
>          > > >
>          > > > Did you use the patch that was mentioned in the URL?
>          > > >
>          > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
>          > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>          > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
>          > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>          > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>          > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
>          > > > > root@fiat:~# xl list
>          > > > > Name                                        ID   Mem VCPUs      State   Time(s)
>          > > > > Domain-0                                     0  1024     1     r-----      15.2
>          > > > > ubuntu-hvm-0                                 1  1025     1     ------       0.0
>          > > > >
>          > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
>          > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
>          > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
>          > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
>          > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
>          > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
>          > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
>          > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
>          > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
>          > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
>          > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
>          > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
>          > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
>          > > > > (XEN) Dom0 has maximum 1 VCPUs
>          > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
>          > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
>          > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
>          > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
>          > > > > (XEN) Scrubbing Free RAM: .............................done.
>          > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>          > > > > (XEN) Std. Loglevel: All
>          > > > > (XEN) Guest Loglevel: All
>          > > > > (XEN) Xen is relinquishing VGA console.
>          > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
>          > > > > (XEN) Freed 260kB init memory.
>          > > > > (XEN) PCI add device 0000:00:00.0
>          > > > > (XEN) PCI add device 0000:00:01.0
>          > > > > (XEN) PCI add device 0000:00:1a.0
>          > > > > (XEN) PCI add device 0000:00:1c.0
>          > > > > (XEN) PCI add device 0000:00:1d.0
>          > > > > (XEN) PCI add device 0000:00:1e.0
>          > > > > (XEN) PCI add device 0000:00:1f.0
>          > > > > (XEN) PCI add device 0000:00:1f.2
>          > > > > (XEN) PCI add device 0000:00:1f.3
>          > > > > (XEN) PCI add device 0000:01:00.0
>          > > > > (XEN) PCI add device 0000:02:02.0
>          > > > > (XEN) PCI add device 0000:02:04.0
>          > > > > (XEN) PCI add device 0000:03:00.0
>          > > > > (XEN) PCI add device 0000:03:00.1
>          > > > > (XEN) PCI add device 0000:04:00.0
>          > > > > (XEN) PCI add device 0000:04:00.1
>          > > > > (XEN) PCI add device 0000:05:00.0
>          > > > > (XEN) PCI add device 0000:05:00.1
>          > > > > (XEN) PCI add device 0000:06:03.0
>          > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
>          > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
>          > > > > (d1) HVM Loader
>          > > > > (d1) Detected Xen v4.4-rc2
>          > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
>          > > > > (d1) System requested SeaBIOS
>          > > > > (d1) CPU speed is 3093 MHz
>          > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
>          > > > >
>          > > > >
>          > > > > Excerpt from /var/log/xen/*
>          > > > > qemu: hardware error: xen: failed to populate ram at 40050000
>          > > > >
>          > > > >
>          > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
>          > > > > [9]konrad.wilk@oracle.com> wrote:
>          > > > >
>          > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
>          > > > > > > I was able to compile and install Xen 4.4 RC3 on my host,
>          > > > > > > however I am getting the error:
>          > > > > > >
>          > > > > > > root@fiat:~/git/xen# xl list
>          > > > > > > xc: error: Could not obtain handle on privileged command
>          > > > > > > interface (2 = No such file or directory): Internal error
>          > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open
>          > > > > > > libxc handle: No such file or directory
>          > > > > > > cannot init xl context
>          > > > > > >
>          > > > > > > I've searched Google for this and an article appears, but
>          > > > > > > it is not the same (as far as I can tell).  Running any xl
>          > > > > > > command generates a similar error.
>          > > > > > >
>          > > > > > > What can I do to fix this?
>          > > > > >
>          > > > > >
>          > > > > > You need to run the initscripts for Xen.  I don't know what
>          > > > > > your distro is, but they are usually put in
>          > > > > > /etc/init.d/rc.d/xen*
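As an aside (not part of the original exchange), the failure mode Konrad describes can be checked for mechanically. `check_privcmd` below is a hypothetical helper name; the sketch assumes xl's privileged interface appears as /proc/xen/privcmd or /dev/xen/privcmd once dom0 is booted under Xen and the init scripts have been run.

```shell
# Hypothetical diagnostic for "Could not obtain handle on privileged
# command interface": xl opens the privcmd node, which exists only when
# dom0 runs under Xen and the Xen init scripts have been started.
check_privcmd() {
    if [ -e /proc/xen/privcmd ] || [ -e /dev/xen/privcmd ]; then
        echo "privcmd present: xl should work"
    else
        echo "privcmd missing: boot under Xen and run the Xen init scripts"
    fi
}
check_privcmd
```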
>          > > > > >
>          > > > > >
>          > > > > > >
>          > > > > > > Regards
>          > > > > > >
>          > > > > > >
>          > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
>          > > > > > > [10]mikeneiderhauser@gmail.com> wrote:
>          > > > > > >
>          > > > > > > > Much.  Do I need to install from src or is there a
>          > > > > > > > package I can install?
>          > > > > > > >
>          > > > > > > > Regards
>          > > > > > > >
>          > > > > > > >
>          > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk
>          > > > > > > > <[11]konrad.wilk@oracle.com> wrote:
>          > > > > > > >
>          > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
>          > > > > > > >> > I did not.  I do not have the toolchain installed.  I may
>          > > > > > > >> > have time later today to try the patch.  Are there any
>          > > > > > > >> > specific instructions on how to patch the src, compile
>          > > > > > > >> > and install?
>          > > > > > > >>
>          > > > > > > >> There actually should be a new version of Xen 4.4-rcX
>          > > > > > > >> which will have the fix.  That might be easier for you?
>          > > > > > > >> >
>          > > > > > > >> > Regards
>          > > > > > > >> >
>          > > > > > > >> >
>          > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk
>          > > > > > > >> > <[12]konrad.wilk@oracle.com> wrote:
>          > > > > > > >> >
>          > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
>          > > > > > > >> > > Neiderhauser wrote:
>          > > > > > > >> > > > Hi all,
>          > > > > > > >> > > >
>          > > > > > > >> > > > I am attempting to do a pci passthrough of an
>          > > > > > > >> > > > Intel ET card (4x1G NIC) to a HVM.  I have been
>          > > > > > > >> > > > attempting to resolve this issue on the xen-users
>          > > > > > > >> > > > list, but it was advised to post this issue to
>          > > > > > > >> > > > this list.  (Initial Message -
>          > > > > > > >> > > > [13]http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
>          > > > > > > >> > > >
>          > > > > > > >> > > > The machine I am using as host is a Dell PowerEdge
>          > > > > > > >> > > > server with a Xeon E31220 with 4GB of RAM.
>          > > > > > > >> > > >
>          > > > > > > >> > > > The possible bug is the following:
>          > > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>          > > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
>          > > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
>          > > > > > > >> > > > ....
>          > > > > > > >> > > >
>          > > > > > > >> > > > I believe it may be similar to this thread:
>          > > > > > > >> > > > [14]http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          > > > > > > >> > > > Additional info that may be helpful is below.
>          > > > > > > >> > >
>          > > > > > > >> > > Did you try the patch?
>          > > > > > > >> > > >
>          > > > > > > >> > > > Please let me know if you need any additional
>          > > > > > > >> > > > information.
>          > > > > > > >> > > >
>          > > > > > > >> > > > Thanks in advance for any help provided!
>          > > > > > > >> > > > Regards
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          > > > > > > >> > > > ###########################################################
>          > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>          > > > > > > >> > > > ###########################################################
>          > > > > > > >> > > > # Configuration file for Xen HVM
>          > > > > > > >> > > >
>          > > > > > > >> > > > # HVM Name (as appears in 'xl list')
>          > > > > > > >> > > > name="ubuntu-hvm-0"
>          > > > > > > >> > > > # HVM Build settings (+ hardware)
>          > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>          > > > > > > >> > > > builder='hvm'
>          > > > > > > >> > > > device_model='qemu-dm'
>          > > > > > > >> > > > memory=1024
>          > > > > > > >> > > > vcpus=2
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Virtual Interface
>          > > > > > > >> > > > # Network bridge to USB NIC
>          > > > > > > >> > > > vif=['bridge=xenbr0']
>          > > > > > > >> > > >
>          > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>          > > > > > > >> > > > # PCI Permissive mode toggle
>          > > > > > > >> > > > #pci_permissive=1
>          > > > > > > >> > > >
>          > > > > > > >> > > > # All PCI Devices
>          > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # First two ports on Intel 4x1G NIC
>          > > > > > > >> > > > #pci=['03:00.0','03:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
>          > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # All ports on Intel 4x1G NIC
>          > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Broadcom 2x1G NIC
>          > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
>          > > > > > > >> > > > ################### PCI PASSTHROUGH ###################
>          > > > > > > >> > > >
>          > > > > > > >> > > > # HVM Disks
>          > > > > > > >> > > > # Hard disk only
>          > > > > > > >> > > > # Boot from HDD first ('c')
>          > > > > > > >> > > > boot="c"
>          > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Hard disk with ISO
>          > > > > > > >> > > > # Boot from ISO first ('d')
>          > > > > > > >> > > > #boot="d"
>          > > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w', 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # ACPI Enable
>          > > > > > > >> > > > acpi=1
>          > > > > > > >> > > > # HVM Event Modes
>          > > > > > > >> > > > on_poweroff='destroy'
>          > > > > > > >> > > > on_reboot='restart'
>          > > > > > > >> > > > on_crash='restart'
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Serial Console Configuration (Xen Console)
>          > > > > > > >> > > > sdl=0
>          > > > > > > >> > > > serial='pty'
>          > > > > > > >> > > >
>          > > > > > > >> > > > # VNC Configuration
>          > > > > > > >> > > > # Only reachable from localhost
>          > > > > > > >> > > > vnc=1
>          > > > > > > >> > > > vnclisten="0.0.0.0"
>          > > > > > > >> > > > vncpasswd=""
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > Copied for xen-users list
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > >
>          > > > > > > >> > > > It appears that it cannot obtain the RAM
>          > > > > > > >> > > > mapping for this PCI device.
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          > > > > > > >> > > > I rebooted the host and ran the script that
>          > > > > > > >> > > > assigns the PCI devices to pciback. The output
>          > > > > > > >> > > > looks like:
>          > > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
>          > > > > > > >> > > > Loading Kernel Module 'xen-pciback'
>          > > > > > > >> > > > Calling function pciback_dev for:
>          > > > > > > >> > > > PCI DEVICE 0000:03:00.0
>          > > > > > > >> > > > Unbinding 0000:03:00.0 from igb
>          > > > > > > >> > > > Binding 0000:03:00.0 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:03:00.1
>          > > > > > > >> > > > Unbinding 0000:03:00.1 from igb
>          > > > > > > >> > > > Binding 0000:03:00.1 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:04:00.0
>          > > > > > > >> > > > Unbinding 0000:04:00.0 from igb
>          > > > > > > >> > > > Binding 0000:04:00.0 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:04:00.1
>          > > > > > > >> > > > Unbinding 0000:04:00.1 from igb
>          > > > > > > >> > > > Binding 0000:04:00.1 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:05:00.0
>          > > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
>          > > > > > > >> > > > Binding 0000:05:00.0 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:05:00.1
>          > > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
>          > > > > > > >> > > > Binding 0000:05:00.1 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > Listing PCI Devices Available to Xen
>          > > > > > > >> > > > 0000:03:00.0
>          > > > > > > >> > > > 0000:03:00.1
>          > > > > > > >> > > > 0000:04:00.0
>          > > > > > > >> > > > 0000:04:00.1
>          > > > > > > >> > > > 0000:05:00.0
>          > > > > > > >> > > > 0000:05:00.1
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > root@fiat:~# xl -vvv create
>          /etc/xen/ubuntu-hvm-0.cfg
>          > > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>          > > > > > > >> > > > WARNING: ignoring device_model directive.
>          > > > > > > >> > > > WARNING: Use "device_model_override" instead if
>          you really
>          > > > want
>          > > > > > a
>          > > > > > > >> > > > non-default device_model
>          > > > > > > >> > > > libxl: debug:
>          libxl_create.c:1230:do_domain_create: ao
>          > > > > > 0x210c360:
>          > > > > > > >> create:
>          > > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_device.c:257:libxl__device_disk_set_backend:
>          > > > > > > >> Disk
>          > > > > > > >> > > > vdev=hda spec.backend=unknown
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_device.c:296:libxl__device_disk_set_backend:
>          > > > > > > >> Disk
>          > > > > > > >> > > > vdev=hda, using backend phy
>          > > > > > > >> > > > libxl: debug:
>          libxl_create.c:675:initiate_domain_create:
>          > > > running
>          > > > > > > >> > > bootloader
>          > > > > > > >> > > > libxl: debug:
>          libxl_bootloader.c:321:libxl__bootloader_run:
>          > > > not
>          > > > > > a PV
>          > > > > > > >> > > > domain, skipping bootloader
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210c728: deregister unregistered
>          > > > > > > >> > > > libxl: debug:
>          libxl_numa.c:475:libxl__get_numa_candidate:
>          > > > New
>          > > > > > best
>          > > > > > > >> NUMA
>          > > > > > > >> > > > placement candidate found: nr_nodes=1,
>          nr_cpus=4,
>          > > > nr_vcpus=3,
>          > > > > > > >> > > > free_memkb=2980
>          > > > > > > >> > > > libxl: detail:
>          libxl_dom.c:195:numa_place_domain: NUMA
>          > > > placement
>          > > > > > > >> > > candidate
>          > > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>          > > > > > > >> > > > xc: detail: elf_parse_binary: phdr:
>          paddr=0x100000
>          > > > memsz=0xa69a4
>          > > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000
>          -> 0x1a69a4
>          > > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>          > > > > > > >> > > >   Loader:
>           0000000000100000->00000000001a69a4
>          > > > > > > >> > > >   Modules:
>          0000000000000000->0000000000000000
>          > > > > > > >> > > >   TOTAL:
>          0000000000000000->000000003f800000
>          > > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
>          > > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>          > > > > > > >> > > >   4KB PAGES: 0x0000000000000200
>          > > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
>          > > > > > > >> > > >   1GB PAGES: 0x0000000000000000
>          > > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at
>          0x7f022c779000 ->
>          > > > > > > >> 0x7f022c81682d
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_device.c:257:libxl__device_disk_set_backend:
>          > > > > > > >> Disk
>          > > > > > > >> > > > vdev=hda spec.backend=phy
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:559:libxl__ev_xswatch_register:
>          > > > > > watch
>          > > > > > > >> > > > w=0x2112f48
>          wpath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > token=3/0:
>          > > > > > > >> > > > register slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          libxl_create.c:1243:do_domain_create: ao
>          > > > > > 0x210c360:
>          > > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x2112f48
>          > > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state
>          token=3/0:
>          > > > event
>          > > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:647:devstate_watch_callback:
>          > > > backend
>          > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>          state 2 still
>          > > > > > waiting
>          > > > > > > >> > > state 1
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x2112f48
>          > > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state
>          token=3/0:
>          > > > event
>          > > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:643:devstate_watch_callback:
>          > > > backend
>          > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>          state 2 ok
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x2112f48
>          wpath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > token=3/0:
>          > > > > > > >> > > > deregister slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x2112f48: deregister unregistered
>          > > > > > > >> > > > libxl: debug:
>          libxl_device.c:959:device_hotplug: calling
>          > > > hotplug
>          > > > > > > >> script:
>          > > > > > > >> > > > /etc/xen/scripts/block add
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1206:libxl__spawn_local_dm:
>          > > > Spawning
>          > > > > > > >> > > device-model
>          > > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > > /usr/bin/qemu-system-i386
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > -xen-domid
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   2
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -chardev
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > >
>          > > >
>          socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > > chardev=libxl-cmd,mode=control
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > ubuntu-hvm-0
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > 0.0.0.0:0
>          > > > > > > >> ,to=99
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -global
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> isa-fdc.driveA=
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -serial
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > cirrus
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -global
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> vga.vram_size_mb=8
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > order=c
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > 2,maxcpus=2
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -device
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > >
>          rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -netdev
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > >
>          type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -drive
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > >
>          > > > > > > >> > >
>          > > > > > > >>
>          > > > > >
>          > > >
>          file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:559:libxl__ev_xswatch_register:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210c960
>          wpath=/local/domain/0/device-model/2/state
>          > > > > > token=3/1:
>          > > > > > > >> > > register
>          > > > > > > >> > > > slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x210c960
>          > > > > > > >> > > > wpath=/local/domain/0/device-model/2/state
>          token=3/1: event
>          > > > > > > >> > > > epath=/local/domain/0/device-model/2/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x210c960
>          > > > > > > >> > > > wpath=/local/domain/0/device-model/2/state
>          token=3/1: event
>          > > > > > > >> > > > epath=/local/domain/0/device-model/2/state
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210c960
>          wpath=/local/domain/0/device-model/2/state
>          > > > > > token=3/1:
>          > > > > > > >> > > > deregister slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210c960: deregister unregistered
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:707:libxl__qmp_initialize:
>          > > > connected
>          > > > > > to
>          > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > > > type: qmp
>          > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>          next qmp
>          > > > > > command: '{
>          > > > > > > >> > > >     "execute": "qmp_capabilities",
>          > > > > > > >> > > >     "id": 1
>          > > > > > > >> > > > }
>          > > > > > > >> > > > '
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > type:
>          > > > > > > >> return
>          > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>          next qmp
>          > > > > > command: '{
>          > > > > > > >> > > >     "execute": "query-chardev",
>          > > > > > > >> > > >     "id": 2
>          > > > > > > >> > > > }
>          > > > > > > >> > > > '
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > type:
>          > > > > > > >> return
>          > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>          next qmp
>          > > > > > command: '{
>          > > > > > > >> > > >     "execute": "change",
>          > > > > > > >> > > >     "id": 3,
>          > > > > > > >> > > >     "arguments": {
>          > > > > > > >> > > >         "device": "vnc",
>          > > > > > > >> > > >         "target": "password",
>          > > > > > > >> > > >         "arg": ""
>          > > > > > > >> > > >     }
>          > > > > > > >> > > > }
>          > > > > > > >> > > > '
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > type:
>          > > > > > > >> return
>          > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>          next qmp
>          > > > > > command: '{
>          > > > > > > >> > > >     "execute": "query-vnc",
>          > > > > > > >> > > >     "id": 4
>          > > > > > > >> > > > }
>          > > > > > > >> > > > '
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > type:
>          > > > > > > >> return
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:559:libxl__ev_xswatch_register:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210e8a8
>          wpath=/local/domain/0/backend/vif/2/0/state
>          > > > > > token=3/2:
>          > > > > > > >> > > register
>          > > > > > > >> > > > slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x210e8a8
>          > > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state
>          token=3/2: event
>          > > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:647:devstate_watch_callback:
>          > > > backend
>          > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted
>          state 2 still
>          > > > > > waiting
>          > > > > > > >> state
>          > > > > > > >> > > 1
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x210e8a8
>          > > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state
>          token=3/2: event
>          > > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:643:devstate_watch_callback:
>          > > > backend
>          > > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted
>          state 2 ok
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210e8a8
>          wpath=/local/domain/0/backend/vif/2/0/state
>          > > > > > token=3/2:
>          > > > > > > >> > > > deregister slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210e8a8: deregister unregistered
>          > > > > > > >> > > > libxl: debug:
>          libxl_device.c:959:device_hotplug: calling
>          > > > hotplug
>          > > > > > > >> script:
>          > > > > > > >> > > > /etc/xen/scripts/vif-bridge online
>          > > > > > > >> > > > libxl: debug:
>          libxl_device.c:959:device_hotplug: calling
>          > > > hotplug
>          > > > > > > >> script:
>          > > > > > > >> > > > /etc/xen/scripts/vif-bridge add
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:707:libxl__qmp_initialize:
>          > > > connected
>          > > > > > to
>          > > > > > > >> > > > /var/run/xen/qmp-libxl-2
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > > > type: qmp
>          > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>          next qmp
>          > > > > > command: '{
>          > > > > > > >> > > >     "execute": "qmp_capabilities",
>          > > > > > > >> > > >     "id": 1
>          > > > > > > >> > > > }
>          > > > > > > >> > > > '
>          > > > > > > >> > > > libxl: debug:
>          libxl_qmp.c:299:qmp_handle_response: message
>          > > > type:
>          > > > > > > >> return
>          > > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare:
>          next qmp
>          > > > > > command: '{
>          > > > > > > >> > > >     "execute": "device_add",
>          > > > > > > >> > > >     "id": 2,
>          > > > > > > >> > > >     "arguments": {
>          > > > > > > >> > > >         "driver": "xen-pci-passthrough",
>          > > > > > > >> > > >         "id": "pci-pt-03_00.0",
>          > > > > > > >> > > >         "hostaddr": "0000:03:00.0"
>          > > > > > > >> > > >     }
>          > > > > > > >> > > > }
>          > > > > > > >> > > > '
>          > > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket
>          read error:
>          > > > > > > >> Connection
>          > > > > > > >> > > reset
>          > > > > > > >> > > > by peer
>          > > > > > > >> > > > libxl: error:
>          libxl_qmp.c:702:libxl__qmp_initialize:
>          > > > Connection
>          > > > > > > >> error:
>          > > > > > > >> > > > Connection refused
>          > > > > > > >> > > > libxl: error:
>          libxl_qmp.c:702:libxl__qmp_initialize:
>          > > > Connection
>          > > > > > > >> error:
>          > > > > > > >> > > > Connection refused
>          > > > > > > >> > > > libxl: error:
>          libxl_qmp.c:702:libxl__qmp_initialize:
>          > > > Connection
>          > > > > > > >> error:
>          > > > > > > >> > > > Connection refused
>          > > > > > > >> > > > libxl: debug:
>          libxl_pci.c:81:libxl__create_pci_backend:
>          > > > > > Creating pci
>          > > > > > > >> > > backend
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:1737:libxl__ao_progress_report:
>          > > > ao
>          > > > > > > >> 0x210c360:
>          > > > > > > >> > > > progress report: ignored
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:1569:libxl__ao_complete: ao
>          > > > > > 0x210c360:
>          > > > > > > >> > > > complete, rc=0
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:1541:libxl__ao__destroy: ao
>          > > > > > 0x210c360:
>          > > > > > > >> > > destroy
>          > > > > > > >> > > > Daemon running with PID 3214
>          > > > > > > >> > > > xc: debug: hypercall buffer: total
>          allocations:793 total
>          > > > > > > >> releases:793
>          > > > > > > >> > > > xc: debug: hypercall buffer: current
>          allocations:0 maximum
>          > > > > > > >> allocations:4
>          > > > > > > >> > > > xc: debug: hypercall buffer: cache current
>          size:4
>          > > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785
>          misses:4
>          > > > toobig:4
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > root@fiat:/var/log/xen# cat
>          qemu-dm-ubuntu-hvm-0.log
>          > > > > > > >> > > > char device redirected to /dev/pts/5 (label
>          serial0)
>          > > > > > > >> > > > qemu: hardware error: xen: failed to populate
>          ram at
>          > > > 40030000
>          > > > > > > >> > > > CPU #0:
>          > > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000
>          EDX=00000633
>          > > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000
>          ESP=00000000
>          > > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0
>          A20=1 SMM=0
>          > > > HLT=1
>          > > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
>          > > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
>          > > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
>          > > > > > > >> > > > GDT=     00000000 0000ffff
>          > > > > > > >> > > > IDT=     00000000 0000ffff
>          > > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000
>          CR4=00000000
>          > > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000
>          DR3=00000000
>          > > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
>          > > > > > > >> > > > EFER=0000000000000000
>          > > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>          > > > > > > >> > > > FPR0=0000000000000000 0000
>          FPR1=0000000000000000 0000
>          > > > > > > >> > > > FPR2=0000000000000000 0000
>          FPR3=0000000000000000 0000
>          > > > > > > >> > > > FPR4=0000000000000000 0000
>          FPR5=0000000000000000 0000
>          > > > > > > >> > > > FPR6=0000000000000000 0000
>          FPR7=0000000000000000 0000
>          > > > > > > >> > > > XMM00=00000000000000000000000000000000
>          > > > > > > >> > > > XMM01=00000000000000000000000000000000
>          > > > > > > >> > > > XMM02=00000000000000000000000000000000
>          > > > > > > >> > > > XMM03=00000000000000000000000000000000
>          > > > > > > >> > > > XMM04=00000000000000000000000000000000
>          > > > > > > >> > > > XMM05=00000000000000000000000000000000
>          > > > > > > >> > > > XMM06=00000000000000000000000000000000
>          > > > > > > >> > > > XMM07=00000000000000000000000000000000
>          > > > > > > >> > > > CPU #1:
>          > > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000
>          EDX=00000633
>          > > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000
>          ESP=00000000
>          > > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0
>          A20=1 SMM=0
>          > > > HLT=1
>          > > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
>          > > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
>          > > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
>          > > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
>          > > > > > > >> > > > GDT=     00000000 0000ffff
>          > > > > > > >> > > > IDT=     00000000 0000ffff
>          > > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000
>          CR4=00000000
>          > > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000
>          DR3=00000000
>          > > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
>          > > > > > > >> > > > EFER=0000000000000000
>          > > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>          > > > > > > >> > > > FPR0=0000000000000000 0000
>          FPR1=0000000000000000 0000
>          > > > > > > >> > > > FPR2=0000000000000000 0000
>          FPR3=0000000000000000 0000
>          > > > > > > >> > > > FPR4=0000000000000000 0000
>          FPR5=0000000000000000 0000
>          > > > > > > >> > > > FPR6=0000000000000000 0000
>          FPR7=0000000000000000 0000
>          > > > > > > >> > > > XMM00=00000000000000000000000000000000
>          > > > > > > >> > > > XMM01=00000000000000000000000000000000
>          > > > > > > >> > > > XMM02=00000000000000000000000000000000
>          > > > > > > >> > > > XMM03=00000000000000000000000000000000
>          > > > > > > >> > > > XMM04=00000000000000000000000000000000
>          > > > > > > >> > > > XMM05=00000000000000000000000000000000
>          > > > > > > >> > > > XMM06=00000000000000000000000000000000
>          > > > > > > >> > > > XMM07=00000000000000000000000000000000
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > /etc/default/grub
>          > > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
>          > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
>          > > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
>          > > > > > > >> > > > GRUB_TIMEOUT=10
>          > > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2>
>          /dev/null || echo
>          > > > Debian`
>          > > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
>          > > > > > > >> > > > GRUB_CMDLINE_LINUX=""
>          > > > > > > >> > > > # biosdevname=0
>          > > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M
>          dom0_max_vcpus=1"
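One thing worth double-checking with an /etc/default/grub setup like the above: GRUB_CMDLINE_XEN options only reach the hypervisor after grub.cfg is regenerated and the host rebooted. A small sketch (`dom0_mem_of` is a hypothetical helper for pulling one token out of the booted command line):

```shell
#!/bin/sh
# Verify that Xen actually booted with the intended command line.
# dom0_mem_of is a pure string helper; the update-grub/xl steps need
# a real dom0, so they are shown as comments.

dom0_mem_of() {
    # Print the value of dom0_mem=... from a Xen command line string.
    for tok in $1; do
        case "$tok" in
            dom0_mem=*) printf '%s\n' "${tok#dom0_mem=}"; return 0 ;;
        esac
    done
    return 1
}

# To apply changes:    update-grub && reboot
# Then confirm with:   xl info | grep xen_commandline
# and feed that line to dom0_mem_of to spot typos (e.g. a missing M).
```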
>          > > > > > > >> > >
>          > > > > > > >> > > > _______________________________________________
>          > > > > > > >> > > > Xen-devel mailing list
>          > > > > > > >> > > > [16]Xen-devel@lists.xen.org
>          > > > > > > >> > > > [17]http://lists.xen.org/xen-devel
>          > > > > > > >> > >
>          > > > > > > >> > >
>          > > > > > > >>
>          > > > > > > >
>          > > > > > > >
>          > > > > >
>          > > >
>          >
> 
>        >
> 
> References
> 
>    Visible links
>    1. mailto:mikeneiderhauser@gmail.com
>    2. mailto:konrad.wilk@oracle.com
>    3. mailto:mikeneiderhauser@gmail.com
>    4. http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
>    5. http://wiki.xen.org/wiki/Compiling_Xen_From_Source
>    6. http://xenbits.xen.org/xen.git
>    7. mailto:konrad.wilk@oracle.com
>    8. mailto:konrad.wilk@oracle.com
>    9. mailto:konrad.wilk@oracle.com
>   10. mailto:mikeneiderhauser@gmail.com
>   11. mailto:konrad.wilk@oracle.com
>   12. mailto:konrad.wilk@oracle.com
>   13. http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>   14. http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>   15. http://0.0.0.0:0/
>   16. mailto:Xen-devel@lists.xen.org
>   17. http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Sat Feb 08 20:03:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 20:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCE6x-0002Om-Rd; Sat, 08 Feb 2014 20:02:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WCE6v-0002Of-Dz
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 20:02:46 +0000
Received: from [85.158.137.68:14284] by server-9.bemta-3.messagelabs.com id
	75/D9-10184-46D86F25; Sat, 08 Feb 2014 20:02:44 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-16.tower-31.messagelabs.com!1391889762!551924!1
X-Originating-IP: [62.142.5.110]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTEwID0+IDkyMjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2642 invoked from network); 8 Feb 2014 20:02:43 -0000
Received: from emh04.mail.saunalahti.fi (HELO emh04.mail.saunalahti.fi)
	(62.142.5.110)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 8 Feb 2014 20:02:43 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh04.mail.saunalahti.fi (Postfix) with ESMTP id 629A41A2655;
	Sat,  8 Feb 2014 22:02:42 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 4AE0636C01F; Sat,  8 Feb 2014 22:02:42 +0200 (EET)
Date: Sat, 8 Feb 2014 22:02:42 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140208200242.GT2924@reaktio.net>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 08, 2014 at 12:42:31PM -0500, Mike Neiderhauser wrote:
>    Works like a charm.  I do not have physical access to the computer this
>    weekend to verify that the cards are isolated, but the HVM starts and
>    appears to be working well.
>    When do you think Xen 4.4 will be released?  The article I read mentioned
>    it will be released in 2014 (hinting towards the end of February).  I also
>    read 'When it is ready.'
>    Any timeline would be great.
>    Thanks again for your help!
> 

I *assume* there is going to be at least one more rc..

-- Pasi

>    On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser
>    <[1]mikeneiderhauser@gmail.com> wrote:
> 
>      I will give it a shot.  Thanks!
> 
>      On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <[2]konrad.wilk@oracle.com>
>      wrote:
> 
>        ----- [3]mikeneiderhauser@gmail.com wrote:
>        >
>        > I followed this site
>        ([4]http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions).
>        and then followed
>        ([5]http://wiki.xen.org/wiki/Compiling_Xen_From_Source)
>        >
>        Ah, so you are looking for "xen_pt: Fix passthrough of device
>        with ROM", which is not in Xen 4.4-rc3 but is in master.
>        One thing you can do is:
>        cd xen/tools/qemu-xen-dir
>        git fetch upstream
>        git checkout origin/master
>        [you should see: "HEAD is now at 027c412... configure: Disable libtool
>        if -fPIE does not work with it (bug #1257099)"]
>        Go back to main xen directory:
>        cd ../../../
>        ./configure
>        make
>        make install
>        and you should now be running a newer version of QEMU with the fix.
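Collected into one runnable sequence, the steps above look roughly like this (a sketch, not part of the original mail: it assumes you start in the top-level xen source tree and that the in-tree QEMU checkout already has an `upstream` remote, as the instructions above imply; the `DRY_RUN` guard is an addition here so the plan can be previewed without building anything):

```shell
#!/bin/sh
# Sketch of the QEMU update sequence described in the mail above.
# Assumes the current directory is the top-level xen source tree and
# that the in-tree QEMU checkout has an 'upstream' remote configured.
set -e

run() {
    # With DRY_RUN=1, print each command instead of executing it.
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi
}

update_qemu() {
    run cd tools/qemu-xen-dir       # enter the in-tree QEMU checkout
    run git fetch upstream          # fetch the latest upstream commits
    run git checkout origin/master  # master carries the ROM passthrough fix
    run cd ../..                    # back to the top-level xen directory
    run ./configure
    run make
    run make install
}

# Preview the plan without touching anything:
DRY_RUN=1 update_qemu
```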
>        >
> 
> 
>  git clone -b 4.4.0-rc3 git://[6]xenbits.xen.org/xen.git
>  >
> 
> 
>  Had to take some additional steps here to get all of the libs
>  # apt-get install build-essential
>  # apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif
>  # apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
>  # apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
>  # apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
>  # apt-get install gettext
>  apt-get install libaio-dev
>  apt-get install libpixman-1-dev
> 
>  ./configure
>  make dist
>  make install
> 
>        >
>        >
>        >
>        > On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk
>        <[7]konrad.wilk@oracle.com> wrote:
>        >
> 
>          > On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
>          > > I did not use the patch.  I was assuming it was already patched
>          given
>          > > previous email.  Is the patch for qemu source or xen source?
>          >
>          >
>          It is for QEMU, but you are right - it should have been part
>          > of QEMU if you got the latest version of Xen-unstable.
>          >
>          > You didn't use some specific tag but just 'staging' ?
>          >
>          >
>          >
>          > >
>          > >
>          > > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
>          > > [8]konrad.wilk@oracle.com> wrote:
>          > >
>          > > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser
>          wrote:
>          > > > > Ok. I ran the initscripts and now xl works.
>          > > > >
>          > > > > However, I still see the same behavior as before:
>          > > > >
>          > > >
>          > > > Did you use the patch that was mentioned in the URL?
>          > > >
>          > > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
>          > > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>          > > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error:
>          Connection
>          > > > reset
>          > > > > by peer
>          > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize:
>          Connection error:
>          > > > > Connection refused
>          > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize:
>          Connection error:
>          > > > > Connection refused
>          > > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize:
>          Connection error:
>          > > > > Connection refused
>          > > > > root@fiat:~# xl list
>          > > > > Name                                        ID   Mem VCPUs
>          State Time(s)
>          > > > > Domain-0                                     0  1024     1
>            r-----
>          > > > >  15.2
>          > > > > ubuntu-hvm-0                                 1  1025     1
>            ------
>          > > > > 0.0
>          > > > >
>          > > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 ->
>          0x23f3000
>          > > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
>          > > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000
>          (233690 pages to
>          > > > > be allocated)
>          > > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
>          > > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
>          > > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
>          > > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
>          > > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
>          > > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
>          > > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
>          > > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
>          > > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
>          > > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
>          > > > > (XEN) Dom0 has maximum 1 VCPUs
>          > > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 ->
>          0xffffffff81b2f000
>          > > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 ->
>          0xffffffff81d0f0f0
>          > > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 ->
>          0xffffffff81d252c0
>          > > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 ->
>          0xffffffff81e6d000
>          > > > > (XEN) Scrubbing Free RAM: .............................done.
>          > > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
>          > > > > (XEN) Std. Loglevel: All
>          > > > > (XEN) Guest Loglevel: All
>          > > > > (XEN) Xen is relinquishing VGA console.
>          > > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to
>          switch input
>          > > > > to Xen)
>          > > > > (XEN) Freed 260kB init memory.
>          > > > > (XEN) PCI add device 0000:00:00.0
>          > > > > (XEN) PCI add device 0000:00:01.0
>          > > > > (XEN) PCI add device 0000:00:1a.0
>          > > > > (XEN) PCI add device 0000:00:1c.0
>          > > > > (XEN) PCI add device 0000:00:1d.0
>          > > > > (XEN) PCI add device 0000:00:1e.0
>          > > > > (XEN) PCI add device 0000:00:1f.0
>          > > > > (XEN) PCI add device 0000:00:1f.2
>          > > > > (XEN) PCI add device 0000:00:1f.3
>          > > > > (XEN) PCI add device 0000:01:00.0
>          > > > > (XEN) PCI add device 0000:02:02.0
>          > > > > (XEN) PCI add device 0000:02:04.0
>          > > > > (XEN) PCI add device 0000:03:00.0
>          > > > > (XEN) PCI add device 0000:03:00.1
>          > > > > (XEN) PCI add device 0000:04:00.0
>          > > > > (XEN) PCI add device 0000:04:00.1
>          > > > > (XEN) PCI add device 0000:05:00.0
>          > > > > (XEN) PCI add device 0000:05:00.1
>          > > > > (XEN) PCI add device 0000:06:03.0
>          > > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1:
>          262401 > 262400
>          > > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent:
>          id=1 memflags=0
>          > > > > (200 of 1024)
>          > > > > (d1) HVM Loader
>          > > > > (d1) Detected Xen v4.4-rc2
>          > > > > (d1) Xenbus rings @0xfeffc000, event channel 4
>          > > > > (d1) System requested SeaBIOS
>          > > > > (d1) CPU speed is 3093 MHz
>          > > > > (d1) Relocating guest memory for lowmem MMIO space disabled
>          > > > >
>          > > > >
>          > > > > Excerpt from /var/log/xen/*
>          > > > > qemu: hardware error: xen: failed to populate ram at
>          40050000
>          > > > >
>          > > > >
>          > > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
>          > > > > [9]konrad.wilk@oracle.com> wrote:
>          > > > >
>          > > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike
>          Neiderhauser wrote:
>          > > > > > > I was able to compile and install xen4.4 RC3 on my host,
>          however I am
>          > > > > > > getting the error:
>          > > > > > >
>          > > > > > > root@fiat:~/git/xen# xl list
>          > > > > > > xc: error: Could not obtain handle on privileged command
>          interface
>          > > > (2 =
>          > > > > > No
>          > > > > > > such file or directory): Internal error
>          > > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open
>          libxc handle:
>          > > > No
>          > > > > > such
>          > > > > > > file or directory
>          > > > > > > cannot init xl context
>          > > > > > >
>          > > > > > > I've google searched for this and an article appears,
>          but is not the
>          > > > same
>          > > > > > > (as far as I can tell).  Running any xl command
>          generates a similar
>          > > > > > error.
>          > > > > > >
>          > > > > > > What can I do to fix this?
>          > > > > >
>          > > > > >
>          > > > > > You need to run the initscripts for Xen. I don't know what
>          your distro
>          > > > is,
>          > > > > > but
>          > > > > > they are usually put in /etc/init.d/rc.d/xen*
>          > > > > >
>          > > > > >
>          > > > > > >
>          > > > > > > Regards
>          > > > > > >
>          > > > > > >
>          > > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
>          > > > > > > [10]mikeneiderhauser@gmail.com> wrote:
>          > > > > > >
>          > > > > > > > Much. Do I need to install from src or is there a
>          package I can
>          > > > > > install?
>          > > > > > > >
>          > > > > > > > Regards
>          > > > > > > >
>          > > > > > > >
>          > > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk
>          <
>          > > > > > > > [11]konrad.wilk@oracle.com> wrote:
>          > > > > > > >
>          > > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike
>          Neiderhauser wrote:
>          > > > > > > >> > I did not.  I do not have the toolchain installed.
>           I may have
>          > > > time
>          > > > > > > >> later
>          > > > > > > >> > today to try the patch.  Are there any specific
>          instructions on
>          > > > how
>          > > > > > to
>          > > > > > > >> > patch the src, compile and install?
>          > > > > > > >>
>          > > > > > > >> There actually should be a new version of Xen 4.4-rcX
>          which will
>          > > > have
>          > > > > > the
>          > > > > > > >> fix. That might be easier for you?
>          > > > > > > >> >
>          > > > > > > >> > Regards
>          > > > > > > >> >
>          > > > > > > >> >
>          > > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek
>          Wilk <
>          > > > > > > >> > [12]konrad.wilk@oracle.com> wrote:
>          > > > > > > >> >
>          > > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike
>          Neiderhauser
>          > > > wrote:
>          > > > > > > >> > > > Hi all,
>          > > > > > > >> > > >
>          > > > > > > >> > > > I am attempting to do a pci passthrough of an
>          Intel ET card
>          > > > > > (4x1G
>          > > > > > > >> NIC)
>          > > > > > > >> > > to a
>          > > > > > > >> > > > HVM.  I have been attempting to resolve this
>          issue on the
>          > > > > > xen-users
>          > > > > > > >> list,
>          > > > > > > >> > > > but it was advised to post this issue to this
>          list. (Initial
>          > > > > > > >> Message -
>          > > > > > > >> > > >
>          > > > > > > >> > >
>          > > > > > > >>
>          > > > > >
>          > > >
>          [13]http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>          > > > > > > >> )
>          > > > > > > >> > > >
>          > > > > > > >> > > > The machine I am using as host is a Dell
>          Poweredge server
>          > > > with a
>          > > > > > > >> Xeon
>          > > > > > > >> > > > E31220 with 4GB of ram.
>          > > > > > > >> > > >
>          > > > > > > >> > > > The possible bug is the following:
>          > > > > > > >> > > > root@fiat:/var/log/xen# cat
>          qemu-dm-ubuntu-hvm-0.log
>          > > > > > > >> > > > char device redirected to /dev/pts/5 (label
>          serial0)
>          > > > > > > >> > > > qemu: hardware error: xen: failed to populate
>          ram at
>          > > > 40030000
>          > > > > > > >> > > > ....
>          > > > > > > >> > > >
>          > > > > > > >> > > > I believe it may be similar to this thread
>          > > > > > > >> > > >
>          > > > > > > >> > >
>          > > > > > > >>
>          > > > > >
>          > > >
>          [14]http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          > > > > > > >> > > > Additional info that may be helpful is below.
>          > > > > > > >> > >
>          > > > > > > >> > > Did you try the patch?
>          > > > > > > >> > > >
>          > > > > > > >> > > > Please let me know if you need any additional
>          information.
>          > > > > > > >> > > >
>          > > > > > > >> > > > Thanks in advance for any help provided!
>          > > > > > > >> > > > Regards
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > # Configuration file for Xen HVM
>          > > > > > > >> > > >
>          > > > > > > >> > > > # HVM Name (as appears in 'xl list')
>          > > > > > > >> > > > name="ubuntu-hvm-0"
>          > > > > > > >> > > > # HVM Build settings (+ hardware)
>          > > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
>          > > > > > > >> > > > builder='hvm'
>          > > > > > > >> > > > device_model='qemu-dm'
>          > > > > > > >> > > > memory=1024
>          > > > > > > >> > > > vcpus=2
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Virtual Interface
>          > > > > > > >> > > > # Network bridge to USB NIC
>          > > > > > > >> > > > vif=['bridge=xenbr0']
>          > > > > > > >> > > >
>          > > > > > > >> > > > ################### PCI PASSTHROUGH
>          ###################
>          > > > > > > >> > > > # PCI Permissive mode toggle
>          > > > > > > >> > > > #pci_permissive=1
>          > > > > > > >> > > >
>          > > > > > > >> > > > # All PCI Devices
>          > > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0',
>          '04:00.1', '05:00.0',
>          > > > > > > >> '05:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # First two ports on Intel 4x1G NIC
>          > > > > > > >> > > > #pci=['03:00.0','03:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Last two ports on Intel 4x1G NIC
>          > > > > > > >> > > > #pci=['04:00.0', '04:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # All ports on Intel 4x1G NIC
>          > > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0',
>          '04:00.1']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Broadcom 2x1G NIC
>          > > > > > > >> > > > #pci=['05:00.0', '05:00.1']
>          > > > > > > >> > > > ################### PCI PASSTHROUGH
>          ###################
>          > > > > > > >> > > >
>          > > > > > > >> > > > # HVM Disks
>          > > > > > > >> > > > # Hard disk only
>          > > > > > > >> > > > # Boot from HDD first ('c')
>          > > > > > > >> > > > boot="c"
>          > > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Hard disk with ISO
>          > > > > > > >> > > > # Boot from ISO first ('d')
>          > > > > > > >> > > > #boot="d"
>          > > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
>          > > > > > > >> > > >
>          'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
>          > > > > > > >> > > >
>          > > > > > > >> > > > # ACPI Enable
>          > > > > > > >> > > > acpi=1
>          > > > > > > >> > > > # HVM Event Modes
>          > > > > > > >> > > > on_poweroff='destroy'
>          > > > > > > >> > > > on_reboot='restart'
>          > > > > > > >> > > > on_crash='restart'
>          > > > > > > >> > > >
>          > > > > > > >> > > > # Serial Console Configuration (Xen Console)
>          > > > > > > >> > > > sdl=0
>          > > > > > > >> > > > serial='pty'
>          > > > > > > >> > > >
>          > > > > > > >> > > > # VNC Configuration
>          > > > > > > >> > > > # Only reachable from localhost
>          > > > > > > >> > > > vnc=1
>          > > > > > > >> > > > vnclisten="0.0.0.0"
>          > > > > > > >> > > > vncpasswd=""
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > > Copied for xen-users list
>          > > > > > > >> > > >
>          ###########################################################
>          > > > > > > >> > > >
>          > > > > > > >> > > > It appears that it cannot obtain the RAM
>          mapping for this
>          > > > PCI
>          > > > > > > >> device.
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          > > > > > > >> > > > I rebooted the host and assigned the PCI
>          > > > > > > >> > > > devices to pciback. The output looks like:
>          > > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
>          > > > > > > >> > > > Loading Kernel Module 'xen-pciback'
>          > > > > > > >> > > > Calling function pciback_dev for:
>          > > > > > > >> > > > PCI DEVICE 0000:03:00.0
>          > > > > > > >> > > > Unbinding 0000:03:00.0 from igb
>          > > > > > > >> > > > Binding 0000:03:00.0 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:03:00.1
>          > > > > > > >> > > > Unbinding 0000:03:00.1 from igb
>          > > > > > > >> > > > Binding 0000:03:00.1 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:04:00.0
>          > > > > > > >> > > > Unbinding 0000:04:00.0 from igb
>          > > > > > > >> > > > Binding 0000:04:00.0 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:04:00.1
>          > > > > > > >> > > > Unbinding 0000:04:00.1 from igb
>          > > > > > > >> > > > Binding 0000:04:00.1 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:05:00.0
>          > > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
>          > > > > > > >> > > > Binding 0000:05:00.0 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > PCI DEVICE 0000:05:00.1
>          > > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
>          > > > > > > >> > > > Binding 0000:05:00.1 to pciback
>          > > > > > > >> > > >
>          > > > > > > >> > > > Listing PCI Devices Available to Xen
>          > > > > > > >> > > > 0000:03:00.0
>          > > > > > > >> > > > 0000:03:00.1
>          > > > > > > >> > > > 0000:04:00.0
>          > > > > > > >> > > > 0000:04:00.1
>          > > > > > > >> > > > 0000:05:00.0
>          > > > > > > >> > > > 0000:05:00.1
>          > > > > > > >> > > >
>          > > > > > > >> > > >
>          ###########################################################
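The dev_mgmt.sh script itself is not included in the thread, but a minimal hypothetical helper producing output like the listing above would drive the pciback sysfs interface directly. This is a sketch, not the poster's actual script: the `SYSFS` variable is an addition so the paths can be redirected for testing, and the driver directory name `pciback` is the name the `xen-pciback` module registers.

```shell
#!/bin/sh
# Hypothetical sketch of a dev_mgmt.sh-style helper: unbind a PCI device
# from its current driver and hand it to xen-pciback via sysfs.
# SYSFS is overridable so the logic can be exercised against a scratch tree.
SYSFS="${SYSFS:-/sys}"

pciback_dev() {
    dev="$1"                                    # e.g. 0000:03:00.0
    drv="$SYSFS/bus/pci/devices/$dev/driver"
    echo "PCI DEVICE $dev"
    if [ -e "$drv" ]; then
        # The driver entry is a symlink; its basename is the driver name.
        echo "Unbinding $dev from $(basename "$(readlink "$drv")")"
        echo "$dev" > "$drv/unbind"             # release from current driver
    fi
    echo "Binding $dev to pciback"
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/new_slot"
    echo "$dev" > "$SYSFS/bus/pci/drivers/pciback/bind"
}

# Typical use (after 'modprobe xen-pciback'):
#   for d in 0000:03:00.0 0000:03:00.1; do pciback_dev "$d"; done
```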
>          > > > > > > >> > > > root@fiat:~# xl -vvv create
>          /etc/xen/ubuntu-hvm-0.cfg
>          > > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
>          > > > > > > >> > > > WARNING: ignoring device_model directive.
>          > > > > > > >> > > > WARNING: Use "device_model_override" instead if
>          you really
>          > > > want
>          > > > > > a
>          > > > > > > >> > > > non-default device_model
>          > > > > > > >> > > > libxl: debug:
>          libxl_create.c:1230:do_domain_create: ao
>          > > > > > 0x210c360:
>          > > > > > > >> create:
>          > > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_device.c:257:libxl__device_disk_set_backend:
>          > > > > > > >> Disk
>          > > > > > > >> > > > vdev=hda spec.backend=unknown
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_device.c:296:libxl__device_disk_set_backend:
>          > > > > > > >> Disk
>          > > > > > > >> > > > vdev=hda, using backend phy
>          > > > > > > >> > > > libxl: debug:
>          libxl_create.c:675:initiate_domain_create:
>          > > > running
>          > > > > > > >> > > bootloader
>          > > > > > > >> > > > libxl: debug:
>          libxl_bootloader.c:321:libxl__bootloader_run:
>          > > > not
>          > > > > > a PV
>          > > > > > > >> > > > domain, skipping bootloader
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x210c728: deregister unregistered
>          > > > > > > >> > > > libxl: debug:
>          libxl_numa.c:475:libxl__get_numa_candidate:
>          > > > New
>          > > > > > best
>          > > > > > > >> NUMA
>          > > > > > > >> > > > placement candidate found: nr_nodes=1,
>          nr_cpus=4,
>          > > > nr_vcpus=3,
>          > > > > > > >> > > > free_memkb=2980
>          > > > > > > >> > > > libxl: detail:
>          libxl_dom.c:195:numa_place_domain: NUMA
>          > > > placement
>          > > > > > > >> > > candidate
>          > > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
>          > > > > > > >> > > > xc: detail: elf_parse_binary: phdr:
>          paddr=0x100000
>          > > > memsz=0xa69a4
>          > > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000
>          -> 0x1a69a4
>          > > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
>          > > > > > > >> > > >   Loader:
>           0000000000100000->00000000001a69a4
>          > > > > > > >> > > >   Modules:
>          0000000000000000->0000000000000000
>          > > > > > > >> > > >   TOTAL:
>          0000000000000000->000000003f800000
>          > > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
>          > > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
>          > > > > > > >> > > >   4KB PAGES: 0x0000000000000200
>          > > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
>          > > > > > > >> > > >   1GB PAGES: 0x0000000000000000
>          > > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at
>          0x7f022c779000 ->
>          > > > > > > >> 0x7f022c81682d
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_device.c:257:libxl__device_disk_set_backend:
>          > > > > > > >> Disk
>          > > > > > > >> > > > vdev=hda spec.backend=phy
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:559:libxl__ev_xswatch_register:
>          > > > > > watch
>          > > > > > > >> > > > w=0x2112f48
>          wpath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > token=3/0:
>          > > > > > > >> > > > register slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          libxl_create.c:1243:do_domain_create: ao
>          > > > > > 0x210c360:
>          > > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x2112f48
>          > > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state
>          token=3/0:
>          > > > event
>          > > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:647:devstate_watch_callback:
>          > > > backend
>          > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>          state 2 still
>          > > > > > waiting
>          > > > > > > >> > > state 1
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:503:watchfd_callback: watch
>          > > > > > w=0x2112f48
>          > > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state
>          token=3/0:
>          > > > event
>          > > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > > >> > > > libxl: debug:
>          libxl_event.c:643:devstate_watch_callback:
>          > > > backend
>          > > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted
>          state 2 ok
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:596:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x2112f48
>          wpath=/local/domain/0/backend/vbd/2/768/state
>          > > > > > token=3/0:
>          > > > > > > >> > > > deregister slotnum=3
>          > > > > > > >> > > > libxl: debug:
>          > > > libxl_event.c:608:libxl__ev_xswatch_deregister:
>          > > > > > watch
>          > > > > > > >> > > > w=0x2112f48: deregister unregistered
>          > > > > > > >> > > > libxl: debug:
>          libxl_device.c:959:device_hotplug: calling
>          > > > hotplug
>          > > > > > > >> script:
>          > > > > > > >> > > > /etc/xen/scripts/block add
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1206:libxl__spawn_local_dm:
>          > > > Spawning
>          > > > > > > >> > > device-model
>          > > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > > /usr/bin/qemu-system-i386
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > -xen-domid
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   2
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > -chardev
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > >
>          > > >
>          socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
>          > > > > > > >> > > > libxl: debug:
>          libxl_dm.c:1208:libxl__spawn_local_dm:
>          > > > > > > >> > > > chardev=libxl-cmd,mode=control
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   [15]0.0.0.0:0,to=99
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
>          libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
>          libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: register slotnum=3
>          libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
>          libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: event epath=/local/domain/0/device-model/2/state
>          libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1: deregister slotnum=3
>          libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210c960: deregister unregistered
>          libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>          libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>              "execute": "qmp_capabilities",
>              "id": 1
>          }
>          '
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>          libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>              "execute": "query-chardev",
>              "id": 2
>          }
>          '
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>          libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>              "execute": "change",
>              "id": 3,
>              "arguments": {
>                  "device": "vnc",
>                  "target": "password",
>                  "arg": ""
>              }
>          }
>          '
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>          libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>              "execute": "query-vnc",
>              "id": 4
>          }
>          '
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>          libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: register slotnum=3
>          libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
>          libxl: debug: libxl_event.c:647:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
>          libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event epath=/local/domain/0/backend/vif/2/0/state
>          libxl: debug: libxl_event.c:643:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
>          libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2: deregister slotnum=3
>          libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch w=0x210e8a8: deregister unregistered
>          libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
>          libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge add
>          libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
>          libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>              "execute": "qmp_capabilities",
>              "id": 1
>          }
>          '
>          libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
>          libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
>              "execute": "device_add",
>              "id": 2,
>              "arguments": {
>                  "driver": "xen-pci-passthrough",
>                  "id": "pci-pt-03_00.0",
>                  "hostaddr": "0000:03:00.0"
>              }
>          }
>          '
>          libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection reset by peer
>          libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
>          libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
>          libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error: Connection refused
>          libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci backend
>          libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360: progress report: ignored
>          libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360: complete, rc=0
>          libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360: destroy
>          Daemon running with PID 3214
>          xc: debug: hypercall buffer: total allocations:793 total releases:793
>          xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
>          xc: debug: hypercall buffer: cache current size:4
>          xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> 
>          ###########################################################
>          root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
>          char device redirected to /dev/pts/5 (label serial0)
>          qemu: hardware error: xen: failed to populate ram at 40030000
>          CPU #0:
>          EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>          ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>          EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>          ES =0000 00000000 0000ffff 00009300
>          CS =f000 ffff0000 0000ffff 00009b00
>          SS =0000 00000000 0000ffff 00009300
>          DS =0000 00000000 0000ffff 00009300
>          FS =0000 00000000 0000ffff 00009300
>          GS =0000 00000000 0000ffff 00009300
>          LDT=0000 00000000 0000ffff 00008200
>          TR =0000 00000000 0000ffff 00008b00
>          GDT=     00000000 0000ffff
>          IDT=     00000000 0000ffff
>          CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>          DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>          DR6=ffff0ff0 DR7=00000400
>          EFER=0000000000000000
>          FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>          FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>          FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>          FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>          FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>          XMM00=00000000000000000000000000000000
>          XMM01=00000000000000000000000000000000
>          XMM02=00000000000000000000000000000000
>          XMM03=00000000000000000000000000000000
>          XMM04=00000000000000000000000000000000
>          XMM05=00000000000000000000000000000000
>          XMM06=00000000000000000000000000000000
>          XMM07=00000000000000000000000000000000
>          CPU #1:
>          EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
>          ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
>          EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
>          ES =0000 00000000 0000ffff 00009300
>          CS =f000 ffff0000 0000ffff 00009b00
>          SS =0000 00000000 0000ffff 00009300
>          DS =0000 00000000 0000ffff 00009300
>          FS =0000 00000000 0000ffff 00009300
>          GS =0000 00000000 0000ffff 00009300
>          LDT=0000 00000000 0000ffff 00008200
>          TR =0000 00000000 0000ffff 00008b00
>          GDT=     00000000 0000ffff
>          IDT=     00000000 0000ffff
>          CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
>          DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
>          DR6=ffff0ff0 DR7=00000400
>          EFER=0000000000000000
>          FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
>          FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
>          FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
>          FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
>          FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
>          XMM00=00000000000000000000000000000000
>          XMM01=00000000000000000000000000000000
>          XMM02=00000000000000000000000000000000
>          XMM03=00000000000000000000000000000000
>          XMM04=00000000000000000000000000000000
>          XMM05=00000000000000000000000000000000
>          XMM06=00000000000000000000000000000000
>          XMM07=00000000000000000000000000000000
> 
>          ###########################################################
>          /etc/default/grub
>          GRUB_DEFAULT="Xen 4.3-amd64"
>          GRUB_HIDDEN_TIMEOUT=0
>          GRUB_HIDDEN_TIMEOUT_QUIET=true
>          GRUB_TIMEOUT=10
>          GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
>          GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
>          GRUB_CMDLINE_LINUX=""
>          # biosdevname=0
>          GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
> 
>          _______________________________________________
>          Xen-devel mailing list
>          [16]Xen-devel@lists.xen.org
>          [17]http://lists.xen.org/xen-devel
> 
> References
> 
>    Visible links
>    1. mailto:mikeneiderhauser@gmail.com
>    2. mailto:konrad.wilk@oracle.com
>    3. mailto:mikeneiderhauser@gmail.com
>    4. http://wiki.xenproject.org/wiki/Xen_4.4_RC3_test_instructions
>    5. http://wiki.xen.org/wiki/Compiling_Xen_From_Source
>    6. http://xenbits.xen.org/xen.git
>    7. mailto:konrad.wilk@oracle.com
>    8. mailto:konrad.wilk@oracle.com
>    9. mailto:konrad.wilk@oracle.com
>   10. mailto:mikeneiderhauser@gmail.com
>   11. mailto:konrad.wilk@oracle.com
>   12. mailto:konrad.wilk@oracle.com
>   13. http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html
>   14. http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
>   15. http://0.0.0.0:0/
>   16. mailto:Xen-devel@lists.xen.org
>   17. http://lists.xen.org/xen-devel

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
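
[Archive note] The QMP exchange in the log above (the qmp_capabilities handshake, then query-chardev / change / query-vnc, and finally device_add for xen-pci-passthrough) is plain JSON sent over a Unix socket. As a rough sketch of how those commands are put together in the style of libxl's qmp_send_prepare (the helper name qmp_command is invented for illustration; the socket path /var/run/xen/qmp-libxl-2 is taken from the log):

```python
import json

def qmp_command(execute, cmd_id, arguments=None):
    """Build one QMP command: a JSON object with "execute", "id",
    and an optional "arguments" dictionary, as seen in the libxl log."""
    cmd = {"execute": execute, "id": cmd_id}
    if arguments is not None:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# The handshake and the passthrough request that appear in the log:
handshake = qmp_command("qmp_capabilities", 1)
passthrough = qmp_command("device_add", 2, {
    "driver": "xen-pci-passthrough",
    "id": "pci-pt-03_00.0",
    "hostaddr": "0000:03:00.0",
})

# To actually talk to the device model one would connect an AF_UNIX
# socket to /var/run/xen/qmp-libxl-<domid>, read the {"QMP": ...}
# greeting, send the handshake, and only then send further commands.
```

In the failing run above, it is the device_add command that makes QEMU die, which is why the subsequent reconnect attempts report "Connection refused".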

From xen-devel-bounces@lists.xen.org Sat Feb 08 21:04:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 21:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCF4J-00051D-4K; Sat, 08 Feb 2014 21:04:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCF4I-000518-DW
	for xen-devel@lists.xen.org; Sat, 08 Feb 2014 21:04:06 +0000
Received: from [85.158.139.211:54334] by server-9.bemta-5.messagelabs.com id
	ED/98-11237-5CB96F25; Sat, 08 Feb 2014 21:04:05 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391893444!2599664!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29521 invoked from network); 8 Feb 2014 21:04:05 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 21:04:05 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so3209533wes.39
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 13:04:04 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=fwbjk1JEFURQiMduGOq/X6jzgBmXLpYoGsgRFIN+E48=;
	b=NiqzRqGI0dyD+0qWMfRh1ZacZs6ka/oTfgc5wKECMWhIWevq3kK+NmdkhpP6Zuz95E
	2u19XjYBGLyWKRVuy0jOl3AKjXxkGpjDfDdzofFCTWh9UWsUw3E1j8gJpxQM9y564se6
	7ltlJdo480L0yJ/cLtYduCYohXt31VVbtU4KI3zErATBRetk3HSfVY4TOragn/nORT0u
	V1rynB1rQw+UqH1GwKhYHC81SEhM+XkA8B3JhEGCHB7gayYUcUJ2w78B/7HE6aYlB8Dx
	+WNgBHW5CVTeYFAQzj8b6Wg4n84YaFpbaTrxsp3wtBqYA1jxNNUaROyPO4LJdddwSjbs
	SpQQ==
X-Gm-Message-State: ALoCoQnfA3J5s27qUgdFOo7NsbGmJyNTy224CATPdR+4TKAg7vmnu7o5fq8lMfxEnU0zPUiWLQfz
X-Received: by 10.194.75.198 with SMTP id e6mr15603218wjw.3.1391893444731;
	Sat, 08 Feb 2014 13:04:04 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id uq2sm21300627wjc.5.2014.02.08.13.04.03
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 08 Feb 2014 13:04:04 -0800 (PST)
Message-ID: <52F69BC2.8010103@linaro.org>
Date: Sat, 08 Feb 2014 21:04:02 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Pavlo Suikov <pavlo.suikov@globallogic.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
References: <CAE4oM6wXAZxDjyNesDprUVYgwCcxH75qvYUiFY1hm4JMqU0ekw@mail.gmail.com>	<52EFFCF5.5070108@linaro.org>	<CAE4oM6ypZH_mgg7kuJJsKzPXRhy==HS2zw2pCx2VUHEP+i_D=w@mail.gmail.com>	<52F0EBA8.3000206@linaro.org>	<1391522239.10515.79.camel@kazak.uk.xensource.com>
	<CAE4oM6wFyRMXw==kkHFr0Gy7JS9=GZQyJ5grHQRr82CdT+gbOw@mail.gmail.com>
In-Reply-To: <CAE4oM6wFyRMXw==kkHFr0Gy7JS9=GZQyJ5grHQRr82CdT+gbOw@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xentrace, arm, hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Pavlo,

On 08/02/14 15:51, Pavlo Suikov wrote:
> After making a quick workaround to rcu_lock DOMAIN_XEN on arm (in
> much the same manner as it is done on x86) I've faced a different problem.
> Specifically, p2m_lookup for an address in the xen restricted heap fails
> on the very first map call: p2m_map_first returns a zero page because both
> p2m->first_level and p2m_first_level_index(addr) are equal to zero. As far
> as I understand, xen simply does not know how to make a p2m translation for
> its own restricted heap on arm. Am I right?

Another thing to take into account: xen heap pages can be mapped 
read-only or read-write. You will have to check that when the page is 
mapped into the domain.
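
As a schematic illustration of that check (purely illustrative Python, not Xen code; the permission names are invented): a read-only mapping of a xen heap page is always acceptable, but a writable mapping must be refused unless the page itself is read-write.

```python
# Schematic model of the check described above: before mapping a xen
# heap page into a domain, the requested access must not exceed the
# permissions the page carries in the hypervisor's own mapping.
READ_ONLY, READ_WRITE = "r", "rw"

def can_map(page_perm, requested):
    """Allow a read-only mapping of any page; allow a writable mapping
    only of pages that are themselves mapped read-write."""
    if requested == READ_ONLY:
        return True
    return page_perm == READ_WRITE
```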

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 21:42:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 21:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCFfL-0006m4-CK; Sat, 08 Feb 2014 21:42:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCFfJ-0006lz-6V
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 21:42:21 +0000
Received: from [193.109.254.147:62506] by server-13.bemta-14.messagelabs.com
	id 92/53-01226-CB4A6F25; Sat, 08 Feb 2014 21:42:20 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391895738!2967691!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30180 invoked from network); 8 Feb 2014 21:42:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 21:42:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,807,1384300800"; d="scan'208";a="101091925"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 08 Feb 2014 21:42:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 16:42:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCFfE-0008KX-Cq;
	Sat, 08 Feb 2014 21:42:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCFfE-0007sC-2M;
	Sat, 08 Feb 2014 21:42:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24804-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 21:42:16 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24804: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24804 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24804/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu  5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-rhel6hvm-amd  5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-xl            5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-xl-credit2    5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-qemuu-rhel6hvm-amd  5 xen-boot            fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  5 xen-boot                     fail  like 12557
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  5 xen-boot     fail blocked in 12557
 test-amd64-i386-pv            5 xen-boot                     fail   like 12557
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot           fail blocked in 12557
 test-amd64-i386-freebsd10-amd64  5 xen-boot              fail blocked in 12557
 test-amd64-i386-freebsd10-i386  5 xen-boot               fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  5 xen-boot                fail never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                34a9bff4abd4db833c3c338d2f942eafe927ca92
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7012 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           fail    
 test-amd64-i386-rhel6hvm-amd                                 fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2370031 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 08 22:33:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Feb 2014 22:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCGS5-0000Tz-GI; Sat, 08 Feb 2014 22:32:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCGS3-0000Ts-Bk
	for xen-devel@lists.xensource.com; Sat, 08 Feb 2014 22:32:43 +0000
Received: from [85.158.137.68:41191] by server-9.bemta-3.messagelabs.com id
	D9/E7-10184-A80B6F25; Sat, 08 Feb 2014 22:32:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1391898759!554493!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6284 invoked from network); 8 Feb 2014 22:32:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	8 Feb 2014 22:32:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,808,1384300800"; d="scan'208";a="99249385"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 08 Feb 2014 22:32:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 17:32:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCGRy-00008U-1p;
	Sat, 08 Feb 2014 22:32:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCGRx-0008EY-Tp;
	Sat, 08 Feb 2014 22:32:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24806-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 8 Feb 2014 22:32:37 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24806: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7165503470148650694=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7165503470148650694==
Content-Type: text/plain

flight 24806 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24806/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24699
 build-i386-oldkern            3 host-build-prep  fail in 24799 REGR. vs. 24699

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10      fail pass in 24799
 test-amd64-i386-qemuu-freebsd10-i386  9 guest-saverestore   fail pass in 24799

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100
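
    The unsigned-arithmetic argument above can be sketched as follows. This
    is a minimal editorial illustration, not the actual libvchan source:
    the function names follow the commit message, but the signatures and
    ring geometry are assumptions. The point is that computing "available
    bytes" in an unsigned type turns both a "negative" result and a
    too-large result into a single out-of-range condition (> ring_size)
    that can be rejected before it feeds a memcpy length.

    ```c
    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical sanitising accessors in the style XSA-86 describes.
     * prod/cons are ring indices the (possibly hostile) peer can set to
     * anything; ring_size is trusted local state.  Return the sanitised
     * byte count, or -1 if the ring state is mad. */
    static int raw_get_data_ready(uint32_t prod, uint32_t cons,
                                  uint32_t ring_size)
    {
        uint32_t ready = prod - cons; /* unsigned: wraps instead of going negative */
        if (ready > ring_size)
            return -1;                /* "negative" or oversized: reject */
        return (int)ready;           /* safe: 0 <= ready <= ring_size */
    }

    static int raw_get_buffer_space(uint32_t prod, uint32_t cons,
                                    uint32_t ring_size)
    {
        uint32_t used = prod - cons;
        if (used > ring_size)
            return -1;
        return (int)(ring_size - used);
    }
    ```

    Note that legitimate index wraparound (cons near UINT32_MAX, prod past
    zero) still yields a small in-range difference, so honest peers are
    unaffected.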

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)
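
The XSA-84 range check above can be sketched as follows. This is a
hypothetical illustration, not the actual flask code: bool_names,
longest_bool_name and check_guest_strlen are invented for the sketch.
It shows the two limits the commit message describes: an arbitrary
PAGE_SIZE cap for the policy-restricted calls, and a limit derived from
the longest boolean name for FLASK_[GS]ETBOOL.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* Assumed table of boolean settings; the real code derives its limit
 * the same way, from the longest name across all booleans. */
static const char *bool_names[] = { "secure_mode", "allow_foo" };

static size_t longest_bool_name(void)
{
    size_t i, max = 0;
    for (i = 0; i < sizeof(bool_names) / sizeof(bool_names[0]); i++) {
        size_t n = strlen(bool_names[i]);
        if (n > max)
            max = n;
    }
    return max;
}

/* Validate a guest-supplied string length BEFORE allocating from it:
 * 0 on success, -1 if it is zero or exceeds the caller's limit. */
static int check_guest_strlen(size_t size, size_t limit)
{
    return (size > 0 && size <= limit) ? 0 : -1;
}
```

A caller would pass PAGE_SIZE as the limit for the trusted-guest paths
and longest_bool_name() (plus a NUL byte) for the boolean path, and only
allocate once the check succeeds.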


--===============7165503470148650694==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7165503470148650694==--

 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Thu Feb 6 17:39:17 2014 +0100

    libvchan: Fix handling of invalid ring buffer indices
    
    The remote (hostile) process can set ring buffer indices to any value
    at any time. If that happens, it is possible to get "buffer space"
    (either for writing data, or ready for reading) negative or greater
    than buffer size.  This will end up with buffer overflow in the second
    memcpy inside of do_send/do_recv.
    
    Fix this by introducing new available bytes accessor functions
    raw_get_data_ready and raw_get_buffer_space which are robust against
    mad ring states, and only return sanitised values.
    
    Proof sketch of correctness:
    
    Now {rd,wr}_{cons,prod} are only ever used in the raw available bytes
    functions, and in do_send and do_recv.
    
    The raw available bytes functions do unsigned arithmetic on the
    returned values.  If the result is "negative" or too big it will be
    >ring_size (since we used unsigned arithmetic).  Otherwise the result
    is a positive in-range value representing a reasonable ring state, in
    which case we can safely convert it to int (as the rest of the code
    expects).
    
    do_send and do_recv immediately mask the ring index value with the
    ring size.  The result is always going to be plausible.  If the ring
    state has become mad, the worst case is that our behaviour is
    inconsistent with the peer's ring pointer.  I.e. we read or write to
    arguably-incorrect parts of the ring - but always parts of the ring.
    And of course if a peer misoperates the ring they can achieve this
    effect anyway.
    
    So the security problem is fixed.
    
    This is XSA-86.
    
    (The patch is essentially Ian Jackson's work, although parts of the
    commit message are by Marek.)
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
    master commit: 2efcb0193bf3916c8ce34882e845f5ceb1e511f7
    master date: 2014-02-06 16:44:41 +0100

commit 1d65af769af52199132df8ebacc0e7a52fd1f4ca
Author: Matthew Daley <mattd@bugfuzz.com>
Date:   Thu Feb 6 17:38:22 2014 +0100

    xsm/flask: correct off-by-one in flask_security_avc_cachestats cpu id check
    
    This is XSA-85.
    
    Signed-off-by: Matthew Daley <mattd@bugfuzz.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Ian Campbell <ian.campbell@citrix.com>
    master commit: 2e1cba2da4631c5cd7218a8f30d521dce0f41370
    master date: 2014-02-06 16:42:36 +0100

commit 6c6b5568e8d2342de1bb653eb70aee4615430db0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 6 17:36:40 2014 +0100

    flask: fix reading strings from guest memory
    
    Since the string size is being specified by the guest, we must range
    check it properly before doing allocations based on it. While for the
    two cases that are exposed only to trusted guests (via policy
    restriction) this just uses an arbitrary upper limit (PAGE_SIZE), for
    the FLASK_[GS]ETBOOL case (which any guest can use) the upper limit
    gets enforced based on the longest name across all boolean settings.
    
    This is XSA-84.
    
    Reported-by: Matthew Daley <mattd@bugfuzz.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    master commit: 6c79e0ab9ac6042e60434c02e1d99b0cf0cc3470
    master date: 2014-02-06 16:33:50 +0100
(qemu changes not included)


--===============7165503470148650694==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7165503470148650694==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 01:58:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 01:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCJf9-0004cr-RP; Sun, 09 Feb 2014 01:58:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1WCJf7-0004ZA-VM
	for xen-devel@lists.xen.org; Sun, 09 Feb 2014 01:58:26 +0000
Received: from [85.158.143.35:62136] by server-1.bemta-4.messagelabs.com id
	63/02-31661-0C0E6F25; Sun, 09 Feb 2014 01:58:24 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-3.tower-21.messagelabs.com!1391911102!4204669!1
X-Originating-IP: [209.85.216.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28814 invoked from network); 9 Feb 2014 01:58:23 -0000
Received: from mail-qc0-f177.google.com (HELO mail-qc0-f177.google.com)
	(209.85.216.177)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 01:58:23 -0000
Received: by mail-qc0-f177.google.com with SMTP id i8so8414045qcq.22
	for <xen-devel@lists.xen.org>; Sat, 08 Feb 2014 17:58:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=wWH1h2z0r1AlCrbCSLfs0UAdew+C/NmIg+usyvqHu9c=;
	b=NAZHSdG+uHv1iH6x9WingchhKYICM1VHdauqNinvbdknUpyR5V0F+tBqFuds0SvMFK
	mMFb/HTkoRzcvrK2Ah05CbIHgUYQ1kJS1LK8+Jqwa5NKYaW94FQsPNf1z7sNipdGIZCY
	3FTz8oMpuFzcn3xEPnrm9PW56MCA3aAvTsIZ3TP18JPlf4x5Wpt9iGygu62/muBF3td6
	I9GE6rXlk7B0XAGM9rp0ynTNneMWwy/t8Ro5u1xb/SYKg0IxNryVWMsrGzjZsyhmfyi2
	sZp5iSh8YlTE6zBKC7PASLkZwZMI7zsg/t4B4LsuKkm1OksZaQTJenUZp9J4yLr7iY6P
	TrCg==
X-Gm-Message-State: ALoCoQkNgpPuZAPLEGHfcSv/f379VIOFoNpQFSeBtR3x95wYK1fgnCu+9ll18tzBhm0xl4NDYpo9
X-Received: by 10.140.91.23 with SMTP id y23mr33680482qgd.3.1391911102285;
	Sat, 08 Feb 2014 17:58:22 -0800 (PST)
Received: from debian-vm.localdomain (cpe-72-130-147-24.hawaii.res.rr.com.
	[72.130.147.24]) by mx.google.com with ESMTPSA id
	f10sm28586143qar.12.2014.02.08.17.58.19 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sat, 08 Feb 2014 17:58:21 -0800 (PST)
From: Justin Weaver <jtweaver@hawaii.edu>
To: xen-devel@lists.xen.org
Date: Sat,  8 Feb 2014 15:57:46 -1000
Message-Id: <1391911066-2572-1-git-send-email-jtweaver@hawaii.edu>
X-Mailer: git-send-email 1.7.10.4
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, dario.faggioli@citrix.com,
	esb@ics.hawaii.edu, henric@hawaii.edu, juergen.gross@ts.fujitsu.com
Subject: [Xen-devel] [PATCH v3] Xen sched: Fix multiple runqueues in credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch addresses the issue of the Xen Credit 2 scheduler
creating only one vCPU run queue on systems with multiple physical
processors. It should create one run queue per physical processor.

CPU 0 does not get a STARTING callback, so it is hard-coded to run
queue 0. At the time this happens, socket information is not yet
available for CPU 0.

Socket information is available for each other CPU when it gets the
STARTING callback (by which time it is also available for CPU 0).
Each CPU is then assigned to a run queue based on its socket.

Signed-off-by: Justin Weaver <jtweaver@hawaii.edu>
---
Changes from v2:
* removed extra blank line

Changes from v1:
* moved comments to the top of the section in one long comment block
* collapsed code to improve readability
* fixed else if indentation style
* updated comment about the runqueue plan
---
 xen/common/sched_credit2.c |   40 ++++++++++++++++++++++++++--------------
 1 file changed, 26 insertions(+), 14 deletions(-)

diff --git a/xen/common/sched_credit2.c b/xen/common/sched_credit2.c
index 4e68375..14d2e37 100644
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -85,8 +85,8 @@
  * to a small value, and a fixed credit is added to everyone.
  *
  * The plan is for all cores that share an L2 will share the same
- * runqueue.  At the moment, there is one global runqueue for all
- * cores.
+ * runqueue. At the moment, all cores that share a socket share the same
+ * runqueue.
  */
 
 /*
@@ -1945,6 +1945,8 @@ static void deactivate_runqueue(struct csched_private *prv, int rqi)
 static void init_pcpu(const struct scheduler *ops, int cpu)
 {
     int rqi;
+    int cpu0_socket;
+    int cpu_socket;
     unsigned long flags;
     struct csched_private *prv = CSCHED_PRIV(ops);
     struct csched_runqueue_data *rqd;
@@ -1959,15 +1961,25 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
         return;
     }
 
-    /* Figure out which runqueue to put it in */
+    /*
+     * Choose which run queue to add cpu to based on its socket.
+     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
+     * callback and socket information is not yet available for it).
+     * If cpu is on the same socket as CPU 0, add it to run queue 0 with CPU 0.
+     * Else If cpu is on socket 0, add it to a run queue based on the socket
+     * CPU 0 is actually on.
+     * Else add it to a run queue based on its own socket. 
+     */
     rqi = 0;
+    cpu_socket = cpu_to_socket(cpu);
+    cpu0_socket = cpu_to_socket(0);
 
-    /* Figure out which runqueue to put it in */
-    /* NB: cpu 0 doesn't get a STARTING callback, so we hard-code it to runqueue 0. */
-    if ( cpu == 0 )
-        rqi = 0;
+    if ( cpu == 0 || cpu_socket == cpu0_socket )
+        rqi = 0;    
+    else if ( cpu_socket == 0 )
+        rqi = cpu0_socket;
     else
-        rqi = cpu_to_socket(cpu);
+        rqi = cpu_socket;
 
     if ( rqi < 0 )
     {
@@ -2010,13 +2022,11 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
 static void *
 csched_alloc_pdata(const struct scheduler *ops, int cpu)
 {
-    /* Check to see if the cpu is online yet */
-    /* Note: cpu 0 doesn't get a STARTING callback */
-    if ( cpu == 0 || cpu_to_socket(cpu) >= 0 )
+    /* This function is only for calling init_pcpu on CPU 0
+     * because it does not get a STARTING callback */
+
+    if ( cpu == 0 )
         init_pcpu(ops, cpu);
-    else
-        printk("%s: cpu %d not online yet, deferring initializatgion\n",
-               __func__, cpu);
 
     return (void *)1;
 }
@@ -2072,6 +2082,8 @@ csched_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
 static int
 csched_cpu_starting(int cpu)
 {
+    /* This function is for calling init_pcpu on every CPU, except for CPU 0 */
+
     struct scheduler *ops;
 
     /* Hope this is safe from cpupools switching things around. :-) */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 02:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 02:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCJqa-0005rD-D5; Sun, 09 Feb 2014 02:10:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCJqZ-0005r8-9q
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 02:10:15 +0000
Received: from [85.158.137.68:22687] by server-9.bemta-3.messagelabs.com id
	1F/55-10184-683E6F25; Sun, 09 Feb 2014 02:10:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391911811!582704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29319 invoked from network); 9 Feb 2014 02:10:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 02:10:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,809,1384300800"; d="scan'208";a="99263803"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Feb 2014 02:10:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 21:10:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCJqU-0001E0-0d;
	Sun, 09 Feb 2014 02:10:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCJqT-00073m-Jv;
	Sun, 09 Feb 2014 02:10:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24807-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 02:10:09 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24807: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3218862002440952259=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3218862002440952259==
Content-Type: text/plain

flight 24807 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24807/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl          10 capture-logs(10)        broken REGR. vs. 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============3218862002440952259==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3218862002440952259==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 02:10:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 02:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCJqa-0005rD-D5; Sun, 09 Feb 2014 02:10:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCJqZ-0005r8-9q
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 02:10:15 +0000
Received: from [85.158.137.68:22687] by server-9.bemta-3.messagelabs.com id
	1F/55-10184-683E6F25; Sun, 09 Feb 2014 02:10:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1391911811!582704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29319 invoked from network); 9 Feb 2014 02:10:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 02:10:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,809,1384300800"; d="scan'208";a="99263803"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Feb 2014 02:10:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 21:10:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCJqU-0001E0-0d;
	Sun, 09 Feb 2014 02:10:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCJqT-00073m-Jv;
	Sun, 09 Feb 2014 02:10:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24807-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 02:10:09 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24807: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3218862002440952259=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3218862002440952259==
Content-Type: text/plain

flight 24807 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24807/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl          10 capture-logs(10)        broken REGR. vs. 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============3218862002440952259==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3218862002440952259==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 03:50:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 03:50:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCLPS-0001bn-SS; Sun, 09 Feb 2014 03:50:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCLPS-0001bi-4D
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 03:50:22 +0000
Received: from [85.158.137.68:29456] by server-9.bemta-3.messagelabs.com id
	09/E6-10184-DFAF6F25; Sun, 09 Feb 2014 03:50:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1391917818!581562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11936 invoked from network); 9 Feb 2014 03:50:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 03:50:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,809,1384300800"; d="scan'208";a="101112586"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Feb 2014 03:50:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 22:50:17 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCLPM-0001iG-Sn;
	Sun, 09 Feb 2014 03:50:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCLPM-0008C4-0l;
	Sun, 09 Feb 2014 03:50:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24809-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 03:50:16 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24809: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0161749750161187425=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0161749750161187425==
Content-Type: text/plain

flight 24809 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24809/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24743
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24743
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24743
 build-amd64                   4 xen-build                 fail REGR. vs. 24743
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start        fail in 24807 never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24807 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24807 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24807 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 24807 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24807 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24807 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24807 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24807 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24807 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24807 never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24807 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============0161749750161187425==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0161749750161187425==--

 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============0161749750161187425==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0161749750161187425==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 04:36:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 04:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCM85-0003VV-4H; Sun, 09 Feb 2014 04:36:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCM83-0003VQ-EA
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 04:36:27 +0000
Received: from [85.158.139.211:36600] by server-10.bemta-5.messagelabs.com id
	E2/3A-08578-AC507F25; Sun, 09 Feb 2014 04:36:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1391920584!2626374!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11789 invoked from network); 9 Feb 2014 04:36:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 04:36:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,810,1384300800"; d="scan'208";a="99271265"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Feb 2014 04:36:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 8 Feb 2014 23:36:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCM7y-0001wu-MP;
	Sun, 09 Feb 2014 04:36:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCM7y-0006Ec-Lx;
	Sun, 09 Feb 2014 04:36:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24808-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 04:36:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24808: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8793559412051416730=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8793559412051416730==
Content-Type: text/plain

flight 24808 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24808/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
baseline version:
 xen                  0037ec360b8792f966acc154e06ac9f627b00f9f

------------------------------------------------------------
People who touched revisions under test:
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
+ branch=xen-4.2-testing
+ revision=ae5d69f1c6d6cf5960e72d79ac0840eec1d75856
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git ae5d69f1c6d6cf5960e72d79ac0840eec1d75856:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   0037ec3..ae5d69f  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856 -> stable-4.2


--===============8793559412051416730==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8793559412051416730==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 05:15:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 05:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCMjL-0005Tc-NX; Sun, 09 Feb 2014 05:14:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCMjK-0005TX-In
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 05:14:58 +0000
Received: from [193.109.254.147:42917] by server-10.bemta-14.messagelabs.com
	id 0B/BD-10711-1DE07F25; Sun, 09 Feb 2014 05:14:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391922895!2994669!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28532 invoked from network); 9 Feb 2014 05:14:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 05:14:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,810,1384300800"; d="scan'208";a="101116753"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Feb 2014 05:14:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 00:14:54 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCMjG-00028F-61;
	Sun, 09 Feb 2014 05:14:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCMjG-0007KS-5Y;
	Sun, 09 Feb 2014 05:14:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24811-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 05:14:54 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24811: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7326386891865390545=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7326386891865390545==
Content-Type: text/plain

flight 24811 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24811/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24743
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24743
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24743
 build-amd64                   4 xen-build                 fail REGR. vs. 24743
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start        fail in 24807 never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24807 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24807 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24807 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 24807 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24807 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24807 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24807 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24807 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24807 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24807 never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24807 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============7326386891865390545==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7326386891865390545==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 06:24:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 06:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCNoB-0008U6-Eg; Sun, 09 Feb 2014 06:24:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mjt@tls.msk.ru>) id 1WCNo9-0008U1-SE
	for xen-devel@lists.xen.org; Sun, 09 Feb 2014 06:24:02 +0000
Received: from [85.158.139.211:48470] by server-17.bemta-5.messagelabs.com id
	E9/7C-31975-10F17F25; Sun, 09 Feb 2014 06:24:01 +0000
X-Env-Sender: mjt@tls.msk.ru
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391927039!2664503!1
X-Originating-IP: [86.62.121.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6894 invoked from network); 9 Feb 2014 06:24:00 -0000
Received: from isrv.corpit.ru (HELO isrv.corpit.ru) (86.62.121.231)
	by server-7.tower-206.messagelabs.com with SMTP;
	9 Feb 2014 06:24:00 -0000
Received: from [192.168.88.2] (mjt.vpn.tls.msk.ru [192.168.177.99])
	by isrv.corpit.ru (Postfix) with ESMTP id 0922C40E4D;
	Sun,  9 Feb 2014 10:23:59 +0400 (MSK)
Message-ID: <52F71EFF.2040803@msgid.tls.msk.ru>
Date: Sun, 09 Feb 2014 10:23:59 +0400
From: Michael Tokarev <mjt@tls.msk.ru>
Organization: Telecom Service, JSC
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ijc@hellion.org.uk>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
	<52F40718.4070904@msgid.tls.msk.ru>
	<1391768389.2162.26.camel@kazak.uk.xensource.com>
	<52F4B3AA.5050403@msgid.tls.msk.ru>
	<1391773231.2162.82.camel@kazak.uk.xensource.com>
In-Reply-To: <1391773231.2162.82.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.5.1
OpenPGP: id=804465C5
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

07.02.2014 15:40, Ian Campbell wrote:
[]
> Note that the SeaBIOS image is baked into Xen at build time, not picked
> up at runtime like with (I think) qemu.

Oh, now things are much clearer.  Indeed, qemu does not embed seabios in
any image at build time; it simply loads the bios from a given file, and you
can even tell it to use a different bios.  The same goes for the other optional
roms: network boot roms, vga bios and so on.
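
To illustrate the point about runtime loading: QEMU's real `-bios` option
selects the firmware image file at startup (the BIOS is not baked into the
qemu binary).  The paths and the rest of the command line below are just
illustrative assumptions, not taken from this thread:

```shell
# Default: qemu picks up bios-256k.bin from its own data directory.
# To substitute a different (e.g. locally built) SeaBIOS image, point
# -bios at the file; nothing needs to be rebuilt in qemu itself.
qemu-system-x86_64 \
    -bios /usr/share/seabios/bios-256k.bin \
    -m 512 \
    -nographic
```

This is exactly the flexibility the paragraph above describes, and what Xen
lacks when the SeaBIOS image is linked into hvmloader at build time.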

So yes, new xen builds should just pick up bios-256k.bin from qemu instead
of bios.bin, which means qemu no longer needs to build xen support into the
smaller bios.bin it ships.  Now I finally see the good reason behind what
Gerd did when removing xen support from the small bios.bin.

Is it possible to change xen to do something similar?

Thanks,

/mjt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 06:24:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 06:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCNoB-0008U6-Eg; Sun, 09 Feb 2014 06:24:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mjt@tls.msk.ru>) id 1WCNo9-0008U1-SE
	for xen-devel@lists.xen.org; Sun, 09 Feb 2014 06:24:02 +0000
Received: from [85.158.139.211:48470] by server-17.bemta-5.messagelabs.com id
	E9/7C-31975-10F17F25; Sun, 09 Feb 2014 06:24:01 +0000
X-Env-Sender: mjt@tls.msk.ru
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391927039!2664503!1
X-Originating-IP: [86.62.121.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6894 invoked from network); 9 Feb 2014 06:24:00 -0000
Received: from isrv.corpit.ru (HELO isrv.corpit.ru) (86.62.121.231)
	by server-7.tower-206.messagelabs.com with SMTP;
	9 Feb 2014 06:24:00 -0000
Received: from [192.168.88.2] (mjt.vpn.tls.msk.ru [192.168.177.99])
	by isrv.corpit.ru (Postfix) with ESMTP id 0922C40E4D;
	Sun,  9 Feb 2014 10:23:59 +0400 (MSK)
Message-ID: <52F71EFF.2040803@msgid.tls.msk.ru>
Date: Sun, 09 Feb 2014 10:23:59 +0400
From: Michael Tokarev <mjt@tls.msk.ru>
Organization: Telecom Service, JSC
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <ijc@hellion.org.uk>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
	<52F40718.4070904@msgid.tls.msk.ru>
	<1391768389.2162.26.camel@kazak.uk.xensource.com>
	<52F4B3AA.5050403@msgid.tls.msk.ru>
	<1391773231.2162.82.camel@kazak.uk.xensource.com>
In-Reply-To: <1391773231.2162.82.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.5.1
OpenPGP: id=804465C5
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

07.02.2014 15:40, Ian Campbell wrote:
[]
> Note that the SeaBIOS image is baked into Xen at build time, not picked
> up at runtime like with (I think) qemu.

Oh. Now things are much clearer.  Indeed, qemu does not embed seabios inside
any image at build time; it simply loads the bios from a given file, and you
can even tell it to use a different bios.  The same goes for other optional
roms: network boot roms, vga bios and so on.
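As an illustrative sketch of that (the file paths below are assumptions and
vary by distribution; qemu's -bios option is what does the override):

```shell
# Default: qemu loads its bundled firmware (e.g. bios-256k.bin) at startup.
qemu-system-x86_64 -hda disk.img

# Override: point qemu at any other BIOS image file instead.
qemu-system-x86_64 -bios /usr/share/seabios/bios.bin -hda disk.img
```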

And so yes, newer xen builds should just pick up bios-256k.bin from qemu
instead of bios.bin, which means qemu no longer needs to build xen support
into the smaller bios.bin it ships.  So finally I see the good reason for
what Gerd did when he removed xen support from the small bios.bin.

Is it possible to change xen to do something similar?

Thanks,

/mjt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 07:23:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 07:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCOjR-0002ox-GV; Sun, 09 Feb 2014 07:23:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WCGV0-0000aP-0X
	for xen-devel@lists.xenproject.org; Sat, 08 Feb 2014 22:35:46 +0000
Received: from [85.158.137.68:4311] by server-7.bemta-3.messagelabs.com id
	75/D5-13775-141B6F25; Sat, 08 Feb 2014 22:35:45 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-5.tower-31.messagelabs.com!1391898944!561423!1
X-Originating-IP: [213.75.39.7]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjc1LjM5LjcgPT4gNTcxMzEw\n, ML_RADAR_SPEW_LINKS_14, 
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19207 invoked from network); 8 Feb 2014 22:35:44 -0000
Received: from cpsmtpb-ews04.kpnxchange.com (HELO
	cpsmtpb-ews04.kpnxchange.com) (213.75.39.7)
	by server-5.tower-31.messagelabs.com with SMTP;
	8 Feb 2014 22:35:44 -0000
Received: from cpsps-ews13.kpnxchange.com ([10.94.84.180]) by
	cpsmtpb-ews04.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Sat, 8 Feb 2014 23:35:44 +0100
Received: from CPSMTPM-TLF102.kpnxchange.com ([195.121.3.5]) by
	cpsps-ews13.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Sat, 8 Feb 2014 23:35:44 +0100
Received: from [192.168.1.105] ([82.169.24.127]) by
	CPSMTPM-TLF102.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Sat, 8 Feb 2014 23:35:43 +0100
Message-ID: <1391898943.27190.5.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>
Date: Sat, 08 Feb 2014 23:35:43 +0100
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 08 Feb 2014 22:35:43.0701 (UTC)
	FILETIME=[1A450050:01CF251E]
X-RcptDomain: lists.xenproject.org
X-Mailman-Approved-At: Sun, 09 Feb 2014 07:23:12 +0000
Cc: xen-devel@lists.xenproject.org, Tony Luck <tony.luck@intel.com>,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH] ia64/xen: Remove Xen support for ia64 even more
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit d52eefb47d4e ("ia64/xen: Remove Xen support for ia64") removed
the Kconfig symbol XEN_XENCOMM. But it didn't remove the code depending
on that symbol. Remove that code now.

Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
---
Basically only tested with "git grep". There's a chance I might have
missed some second-order effects.

 drivers/xen/Makefile            |   1 -
 drivers/xen/xencomm.c           | 219 ----------------------------------------
 include/xen/interface/xencomm.h |  41 --------
 include/xen/xencomm.h           |  77 --------------
 4 files changed, 338 deletions(-)
 delete mode 100644 drivers/xen/xencomm.c
 delete mode 100644 include/xen/interface/xencomm.h
 delete mode 100644 include/xen/xencomm.h

diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index d75c811..45e00af 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -16,7 +16,6 @@ xen-pad-$(CONFIG_X86) += xen-acpi-pad.o
 dom0-$(CONFIG_X86) += pcpu.o
 obj-$(CONFIG_XEN_DOM0)			+= $(dom0-y)
 obj-$(CONFIG_BLOCK)			+= biomerge.o
-obj-$(CONFIG_XEN_XENCOMM)		+= xencomm.o
 obj-$(CONFIG_XEN_BALLOON)		+= xen-balloon.o
 obj-$(CONFIG_XEN_SELFBALLOONING)	+= xen-selfballoon.o
 obj-$(CONFIG_XEN_DEV_EVTCHN)		+= xen-evtchn.o
diff --git a/drivers/xen/xencomm.c b/drivers/xen/xencomm.c
deleted file mode 100644
index 4793fc5..0000000
--- a/drivers/xen/xencomm.c
+++ /dev/null
@@ -1,219 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
- *
- * Copyright (C) IBM Corp. 2006
- *
- * Authors: Hollis Blanchard <hollisb@us.ibm.com>
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/mm.h>
-#include <linux/slab.h>
-#include <asm/page.h>
-#include <xen/xencomm.h>
-#include <xen/interface/xen.h>
-#include <asm/xen/xencomm.h>	/* for xencomm_is_phys_contiguous() */
-
-static int xencomm_init(struct xencomm_desc *desc,
-			void *buffer, unsigned long bytes)
-{
-	unsigned long recorded = 0;
-	int i = 0;
-
-	while ((recorded < bytes) && (i < desc->nr_addrs)) {
-		unsigned long vaddr = (unsigned long)buffer + recorded;
-		unsigned long paddr;
-		int offset;
-		int chunksz;
-
-		offset = vaddr % PAGE_SIZE; /* handle partial pages */
-		chunksz = min(PAGE_SIZE - offset, bytes - recorded);
-
-		paddr = xencomm_vtop(vaddr);
-		if (paddr == ~0UL) {
-			printk(KERN_DEBUG "%s: couldn't translate vaddr %lx\n",
-			       __func__, vaddr);
-			return -EINVAL;
-		}
-
-		desc->address[i++] = paddr;
-		recorded += chunksz;
-	}
-
-	if (recorded < bytes) {
-		printk(KERN_DEBUG
-		       "%s: could only translate %ld of %ld bytes\n",
-		       __func__, recorded, bytes);
-		return -ENOSPC;
-	}
-
-	/* mark remaining addresses invalid (just for safety) */
-	while (i < desc->nr_addrs)
-		desc->address[i++] = XENCOMM_INVALID;
-
-	desc->magic = XENCOMM_MAGIC;
-
-	return 0;
-}
-
-static struct xencomm_desc *xencomm_alloc(gfp_t gfp_mask,
-					  void *buffer, unsigned long bytes)
-{
-	struct xencomm_desc *desc;
-	unsigned long buffer_ulong = (unsigned long)buffer;
-	unsigned long start = buffer_ulong & PAGE_MASK;
-	unsigned long end = (buffer_ulong + bytes) | ~PAGE_MASK;
-	unsigned long nr_addrs = (end - start + 1) >> PAGE_SHIFT;
-	unsigned long size = sizeof(*desc) +
-		sizeof(desc->address[0]) * nr_addrs;
-
-	/*
-	 * slab allocator returns at least sizeof(void*) aligned pointer.
-	 * When sizeof(*desc) > sizeof(void*), struct xencomm_desc might
-	 * cross page boundary.
-	 */
-	if (sizeof(*desc) > sizeof(void *)) {
-		unsigned long order = get_order(size);
-		desc = (struct xencomm_desc *)__get_free_pages(gfp_mask,
-							       order);
-		if (desc == NULL)
-			return NULL;
-
-		desc->nr_addrs =
-			((PAGE_SIZE << order) - sizeof(struct xencomm_desc)) /
-			sizeof(*desc->address);
-	} else {
-		desc = kmalloc(size, gfp_mask);
-		if (desc == NULL)
-			return NULL;
-
-		desc->nr_addrs = nr_addrs;
-	}
-	return desc;
-}
-
-void xencomm_free(struct xencomm_handle *desc)
-{
-	if (desc && !((ulong)desc & XENCOMM_INLINE_FLAG)) {
-		struct xencomm_desc *desc__ = (struct xencomm_desc *)desc;
-		if (sizeof(*desc__) > sizeof(void *)) {
-			unsigned long size = sizeof(*desc__) +
-				sizeof(desc__->address[0]) * desc__->nr_addrs;
-			unsigned long order = get_order(size);
-			free_pages((unsigned long)__va(desc), order);
-		} else
-			kfree(__va(desc));
-	}
-}
-
-static int xencomm_create(void *buffer, unsigned long bytes,
-			  struct xencomm_desc **ret, gfp_t gfp_mask)
-{
-	struct xencomm_desc *desc;
-	int rc;
-
-	pr_debug("%s: %p[%ld]\n", __func__, buffer, bytes);
-
-	if (bytes == 0) {
-		/* don't create a descriptor; Xen recognizes NULL. */
-		BUG_ON(buffer != NULL);
-		*ret = NULL;
-		return 0;
-	}
-
-	BUG_ON(buffer == NULL); /* 'bytes' is non-zero */
-
-	desc = xencomm_alloc(gfp_mask, buffer, bytes);
-	if (!desc) {
-		printk(KERN_DEBUG "%s failure\n", "xencomm_alloc");
-		return -ENOMEM;
-	}
-
-	rc = xencomm_init(desc, buffer, bytes);
-	if (rc) {
-		printk(KERN_DEBUG "%s failure: %d\n", "xencomm_init", rc);
-		xencomm_free((struct xencomm_handle *)__pa(desc));
-		return rc;
-	}
-
-	*ret = desc;
-	return 0;
-}
-
-static struct xencomm_handle *xencomm_create_inline(void *ptr)
-{
-	unsigned long paddr;
-
-	BUG_ON(!xencomm_is_phys_contiguous((unsigned long)ptr));
-
-	paddr = (unsigned long)xencomm_pa(ptr);
-	BUG_ON(paddr & XENCOMM_INLINE_FLAG);
-	return (struct xencomm_handle *)(paddr | XENCOMM_INLINE_FLAG);
-}
-
-/* "mini" routine, for stack-based communications: */
-static int xencomm_create_mini(void *buffer,
-	unsigned long bytes, struct xencomm_mini *xc_desc,
-	struct xencomm_desc **ret)
-{
-	int rc = 0;
-	struct xencomm_desc *desc;
-	BUG_ON(((unsigned long)xc_desc) % sizeof(*xc_desc) != 0);
-
-	desc = (void *)xc_desc;
-
-	desc->nr_addrs = XENCOMM_MINI_ADDRS;
-
-	rc = xencomm_init(desc, buffer, bytes);
-	if (!rc)
-		*ret = desc;
-
-	return rc;
-}
-
-struct xencomm_handle *xencomm_map(void *ptr, unsigned long bytes)
-{
-	int rc;
-	struct xencomm_desc *desc;
-
-	if (xencomm_is_phys_contiguous((unsigned long)ptr))
-		return xencomm_create_inline(ptr);
-
-	rc = xencomm_create(ptr, bytes, &desc, GFP_KERNEL);
-
-	if (rc || desc == NULL)
-		return NULL;
-
-	return xencomm_pa(desc);
-}
-
-struct xencomm_handle *__xencomm_map_no_alloc(void *ptr, unsigned long bytes,
-			struct xencomm_mini *xc_desc)
-{
-	int rc;
-	struct xencomm_desc *desc = NULL;
-
-	if (xencomm_is_phys_contiguous((unsigned long)ptr))
-		return xencomm_create_inline(ptr);
-
-	rc = xencomm_create_mini(ptr, bytes, xc_desc,
-				&desc);
-
-	if (rc)
-		return NULL;
-
-	return xencomm_pa(desc);
-}
diff --git a/include/xen/interface/xencomm.h b/include/xen/interface/xencomm.h
deleted file mode 100644
index ac45e07..0000000
--- a/include/xen/interface/xencomm.h
+++ /dev/null
@@ -1,41 +0,0 @@
-/*
- * Permission is hereby granted, free of charge, to any person obtaining a copy
- * of this software and associated documentation files (the "Software"), to
- * deal in the Software without restriction, including without limitation the
- * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
- * sell copies of the Software, and to permit persons to whom the Software is
- * furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
- * DEALINGS IN THE SOFTWARE.
- *
- * Copyright (C) IBM Corp. 2006
- */
-
-#ifndef _XEN_XENCOMM_H_
-#define _XEN_XENCOMM_H_
-
-/* A xencomm descriptor is a scatter/gather list containing physical
- * addresses corresponding to a virtually contiguous memory area. The
- * hypervisor translates these physical addresses to machine addresses to copy
- * to and from the virtually contiguous area.
- */
-
-#define XENCOMM_MAGIC 0x58434F4D /* 'XCOM' */
-#define XENCOMM_INVALID (~0UL)
-
-struct xencomm_desc {
-    uint32_t magic;
-    uint32_t nr_addrs; /* the number of entries in address[] */
-    uint64_t address[0];
-};
-
-#endif /* _XEN_XENCOMM_H_ */
diff --git a/include/xen/xencomm.h b/include/xen/xencomm.h
deleted file mode 100644
index e43b039..0000000
--- a/include/xen/xencomm.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/*
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
- *
- * Copyright (C) IBM Corp. 2006
- *
- * Authors: Hollis Blanchard <hollisb@us.ibm.com>
- *          Jerone Young <jyoung5@us.ibm.com>
- */
-
-#ifndef _LINUX_XENCOMM_H_
-#define _LINUX_XENCOMM_H_
-
-#include <xen/interface/xencomm.h>
-
-#define XENCOMM_MINI_ADDRS 3
-struct xencomm_mini {
-	struct xencomm_desc _desc;
-	uint64_t address[XENCOMM_MINI_ADDRS];
-};
-
-/* To avoid additionnal virt to phys conversion, an opaque structure is
-   presented.  */
-struct xencomm_handle;
-
-extern void xencomm_free(struct xencomm_handle *desc);
-extern struct xencomm_handle *xencomm_map(void *ptr, unsigned long bytes);
-extern struct xencomm_handle *__xencomm_map_no_alloc(void *ptr,
-			unsigned long bytes,  struct xencomm_mini *xc_area);
-
-#if 0
-#define XENCOMM_MINI_ALIGNED(xc_desc, n)				\
-	struct xencomm_mini xc_desc ## _base[(n)]			\
-	__attribute__((__aligned__(sizeof(struct xencomm_mini))));	\
-	struct xencomm_mini *xc_desc = &xc_desc ## _base[0];
-#else
-/*
- * gcc bug workaround:
- * http://gcc.gnu.org/bugzilla/show_bug.cgi?id=16660
- * gcc doesn't handle properly stack variable with
- * __attribute__((__align__(sizeof(struct xencomm_mini))))
- */
-#define XENCOMM_MINI_ALIGNED(xc_desc, n)				\
-	unsigned char xc_desc ## _base[((n) + 1 ) *			\
-				       sizeof(struct xencomm_mini)];	\
-	struct xencomm_mini *xc_desc = (struct xencomm_mini *)		\
-		((unsigned long)xc_desc ## _base +			\
-		 (sizeof(struct xencomm_mini) -				\
-		  ((unsigned long)xc_desc ## _base) %			\
-		  sizeof(struct xencomm_mini)));
-#endif
-#define xencomm_map_no_alloc(ptr, bytes)			\
-	({ XENCOMM_MINI_ALIGNED(xc_desc, 1);			\
-		__xencomm_map_no_alloc(ptr, bytes, xc_desc); })
-
-/* provided by architecture code: */
-extern unsigned long xencomm_vtop(unsigned long vaddr);
-
-static inline void *xencomm_pa(void *ptr)
-{
-	return (void *)xencomm_vtop((unsigned long)ptr);
-}
-
-#define xen_guest_handle(hnd)  ((hnd).p)
-
-#endif /* _LINUX_XENCOMM_H_ */
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 10:58:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 10:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCS54-0003Oa-62; Sun, 09 Feb 2014 10:57:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCS52-0003OV-4H
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 10:57:44 +0000
Received: from [85.158.137.68:53739] by server-12.bemta-3.messagelabs.com id
	8A/3F-01674-72F57F25; Sun, 09 Feb 2014 10:57:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1391943460!620180!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17782 invoked from network); 9 Feb 2014 10:57:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 10:57:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,812,1384300800"; d="scan'208";a="101137744"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Feb 2014 10:57:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 05:57:39 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCS4w-0003rI-Q1;
	Sun, 09 Feb 2014 10:57:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCS4w-0005Le-AM;
	Sun, 09 Feb 2014 10:57:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24816-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 10:57:38 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24816: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4020372309589560003=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4020372309589560003==
Content-Type: text/plain

flight 24816 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24816/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24743
 build-amd64-pvops             4 kernel-build              fail REGR. vs. 24743
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 3 host-install(3) broken pass in 24807

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start        fail in 24807 never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24807 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24807 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 24807 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24807 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop      fail in 24807 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop    fail in 24807 never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop            fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============4020372309589560003==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4020372309589560003==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 10:59:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 10:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCS6u-0003qb-Se; Sun, 09 Feb 2014 10:59:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rashika.kheria@gmail.com>) id 1WCS6s-0003qR-HG
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 10:59:38 +0000
Received: from [193.109.254.147:26933] by server-6.bemta-14.messagelabs.com id
	4E/67-03396-99F57F25; Sun, 09 Feb 2014 10:59:37 +0000
X-Env-Sender: rashika.kheria@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391943575!3034696!1
X-Originating-IP: [209.85.160.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9029 invoked from network); 9 Feb 2014 10:59:37 -0000
Received: from mail-pb0-f53.google.com (HELO mail-pb0-f53.google.com)
	(209.85.160.53)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 10:59:37 -0000
Received: by mail-pb0-f53.google.com with SMTP id md12so5059298pbc.40
	for <xen-devel@lists.xenproject.org>;
	Sun, 09 Feb 2014 02:59:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:message-id:mime-version:content-type
	:content-disposition:content-transfer-encoding:user-agent;
	bh=/UBh27FOPSgYAR4Fi9ubdLRrRonCLAAldT9QHOqWuxE=;
	b=APdKoqdwY3CyPGgSGKqbOKL2ZqKqogDOByrf0SMAKeT4lcbR2i6RMqvQ0m+T01vo3n
	uxaoLqQvziwfnQWvt1Jj0Ttil31wBKx/dYcyLSHSYXItJv/NeFcMEFR/3FypwqXomSxD
	KIhP7usyA00X8yzsnWFLlXfPHJv8QFQMxe+MEQP36qQpfAwHjZB+C4Ekz0c9YeSPAiaB
	Qqtp5yMvmftihW7F4aC9Qe1LvfMaZmvVTqVwXTqHs85VxZ4+rb5z1MDySEPKKGOSPp+N
	gm/WokdhWLXBF6ivCjlguhRX+nDnvtx2IUzerWJZLTrJEeYYIeavqzjel/B/duQS41Xc
	oUNg==
X-Received: by 10.68.178.197 with SMTP id da5mr31011768pbc.28.1391943575122;
	Sun, 09 Feb 2014 02:59:35 -0800 (PST)
Received: from rashika ([14.139.82.6]) by mx.google.com with ESMTPSA id
	qq5sm31512645pbb.24.2014.02.09.02.59.33 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 02:59:34 -0800 (PST)
Date: Sun, 9 Feb 2014 16:29:30 +0530
From: Rashika Kheria <rashika.kheria@gmail.com>
To: linux-kernel@vger.kernel.org
Message-ID: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, josh@joshtriplett.org
Subject: [Xen-devel] [PATCH 1/4] drivers: xen: Mark function as static in
	platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

TWFyayBmdW5jdGlvbiBhcyBzdGF0aWMgaW4geGVuL3BsYXRmb3JtLXBjaS5jIGJlY2F1c2UgaXQg
aXMgbm90IHVzZWQKb3V0c2lkZSB0aGlzIGZpbGUuCgpUaGlzIGVsaW1pbmF0ZXMgdGhlIGZvbGxv
d2luZyB3YXJuaW5nIGluIHhlbi9wbGF0Zm9ybS1wY2kuYzoKZHJpdmVycy94ZW4vcGxhdGZvcm0t
cGNpLmM6NDg6MTU6IHdhcm5pbmc6IG5vIHByZXZpb3VzIHByb3RvdHlwZSBmb3Ig4oCYYWxsb2Nf
eGVuX21taW/igJkgWy1XbWlzc2luZy1wcm90b3R5cGVzXQoKU2lnbmVkLW9mZi1ieTogUmFzaGlr
YSBLaGVyaWEgPHJhc2hpa2Eua2hlcmlhQGdtYWlsLmNvbT4KUmV2aWV3ZWQtYnk6IEpvc2ggVHJp
cGxldHQgPGpvc2hAam9zaHRyaXBsZXR0Lm9yZz4KLS0tCiBkcml2ZXJzL3hlbi9wbGF0Zm9ybS1w
Y2kuYyB8ICAgIDIgKy0KIDEgZmlsZSBjaGFuZ2VkLCAxIGluc2VydGlvbigrKSwgMSBkZWxldGlv
bigtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL3BsYXRmb3JtLXBjaS5jIGIvZHJpdmVycy94
ZW4vcGxhdGZvcm0tcGNpLmMKaW5kZXggYTEzNjFjMy4uMzQ1NDk3MyAxMDA2NDQKLS0tIGEvZHJp
dmVycy94ZW4vcGxhdGZvcm0tcGNpLmMKKysrIGIvZHJpdmVycy94ZW4vcGxhdGZvcm0tcGNpLmMK
QEAgLTQ1LDcgKzQ1LDcgQEAgc3RhdGljIHVuc2lnbmVkIGxvbmcgcGxhdGZvcm1fbW1pb19hbGxv
YzsKIHN0YXRpYyB1bnNpZ25lZCBsb25nIHBsYXRmb3JtX21taW9sZW47CiBzdGF0aWMgdWludDY0
X3QgY2FsbGJhY2tfdmlhOwogCi11bnNpZ25lZCBsb25nIGFsbG9jX3hlbl9tbWlvKHVuc2lnbmVk
IGxvbmcgbGVuKQorc3RhdGljIHVuc2lnbmVkIGxvbmcgYWxsb2NfeGVuX21taW8odW5zaWduZWQg
bG9uZyBsZW4pCiB7CiAJdW5zaWduZWQgbG9uZyBhZGRyOwogCi0tIAoxLjcuOS41CgoKX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxp
bmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4t
ZGV2ZWwK

From xen-devel-bounces@lists.xen.org Sun Feb 09 10:59:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 10:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCS6u-0003qb-Se; Sun, 09 Feb 2014 10:59:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rashika.kheria@gmail.com>) id 1WCS6s-0003qR-HG
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 10:59:38 +0000
Received: from [193.109.254.147:26933] by server-6.bemta-14.messagelabs.com id
	4E/67-03396-99F57F25; Sun, 09 Feb 2014 10:59:37 +0000
X-Env-Sender: rashika.kheria@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1391943575!3034696!1
X-Originating-IP: [209.85.160.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9029 invoked from network); 9 Feb 2014 10:59:37 -0000
Received: from mail-pb0-f53.google.com (HELO mail-pb0-f53.google.com)
	(209.85.160.53)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 10:59:37 -0000
Received: by mail-pb0-f53.google.com with SMTP id md12so5059298pbc.40
	for <xen-devel@lists.xenproject.org>;
	Sun, 09 Feb 2014 02:59:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:message-id:mime-version:content-type
	:content-disposition:content-transfer-encoding:user-agent;
	bh=/UBh27FOPSgYAR4Fi9ubdLRrRonCLAAldT9QHOqWuxE=;
	b=APdKoqdwY3CyPGgSGKqbOKL2ZqKqogDOByrf0SMAKeT4lcbR2i6RMqvQ0m+T01vo3n
	uxaoLqQvziwfnQWvt1Jj0Ttil31wBKx/dYcyLSHSYXItJv/NeFcMEFR/3FypwqXomSxD
	KIhP7usyA00X8yzsnWFLlXfPHJv8QFQMxe+MEQP36qQpfAwHjZB+C4Ekz0c9YeSPAiaB
	Qqtp5yMvmftihW7F4aC9Qe1LvfMaZmvVTqVwXTqHs85VxZ4+rb5z1MDySEPKKGOSPp+N
	gm/WokdhWLXBF6ivCjlguhRX+nDnvtx2IUzerWJZLTrJEeYYIeavqzjel/B/duQS41Xc
	oUNg==
X-Received: by 10.68.178.197 with SMTP id da5mr31011768pbc.28.1391943575122;
	Sun, 09 Feb 2014 02:59:35 -0800 (PST)
Received: from rashika ([14.139.82.6]) by mx.google.com with ESMTPSA id
	qq5sm31512645pbb.24.2014.02.09.02.59.33 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 02:59:34 -0800 (PST)
Date: Sun, 9 Feb 2014 16:29:30 +0530
From: Rashika Kheria <rashika.kheria@gmail.com>
To: linux-kernel@vger.kernel.org
Message-ID: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, josh@joshtriplett.org
Subject: [Xen-devel] [PATCH 1/4] drivers: xen: Mark function as static in
	platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Mark function as static in xen/platform-pci.c because it is not used
outside this file.

This eliminates the following warning in xen/platform-pci.c:
drivers/xen/platform-pci.c:48:15: warning: no previous prototype for ‘alloc_xen_mmio’ [-Wmissing-prototypes]

Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
---
 drivers/xen/platform-pci.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index a1361c3..3454973 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -45,7 +45,7 @@ static unsigned long platform_mmio_alloc;
 static unsigned long platform_mmiolen;
 static uint64_t callback_via;
 
-unsigned long alloc_xen_mmio(unsigned long len)
+static unsigned long alloc_xen_mmio(unsigned long len)
 {
 	unsigned long addr;
 
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
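[Editor's note] The patch above marks the file-local alloc_xen_mmio() as static to silence -Wmissing-prototypes. A minimal user-space sketch of that pattern (not the kernel code; the bump-allocator body and names below are hypothetical stand-ins):

```c
/* A helper used only within this translation unit. A non-static
 * definition with no prior prototype trips GCC's -Wmissing-prototypes;
 * "static" documents the file-local scope and silences the warning. */

static unsigned long mmio_cursor;  /* file-scope state, like platform_mmio_alloc */

/* stand-in for alloc_xen_mmio(): hand out the next free offset */
static unsigned long alloc_mmio(unsigned long len)
{
    unsigned long addr = mmio_cursor;  /* current allocation point */
    mmio_cursor += len;                /* advance the cursor */
    return addr;
}
```

With the function static, the compiler knows the full set of callers is in this file, so no external prototype is expected.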

From xen-devel-bounces@lists.xen.org Sun Feb 09 11:01:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 11:01:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCS94-00040X-E0; Sun, 09 Feb 2014 11:01:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rashika.kheria@gmail.com>) id 1WCS93-00040Q-LQ
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 11:01:53 +0000
Received: from [193.109.254.147:63463] by server-10.bemta-14.messagelabs.com
	id 48/DE-10711-02067F25; Sun, 09 Feb 2014 11:01:52 +0000
X-Env-Sender: rashika.kheria@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1391943710!3004424!1
X-Originating-IP: [209.85.192.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22097 invoked from network); 9 Feb 2014 11:01:52 -0000
Received: from mail-pd0-f172.google.com (HELO mail-pd0-f172.google.com)
	(209.85.192.172)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 11:01:52 -0000
Received: by mail-pd0-f172.google.com with SMTP id p10so4942460pdj.31
	for <xen-devel@lists.xenproject.org>;
	Sun, 09 Feb 2014 03:01:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=mlgtaK1xAFHaad6lzxW9j+iRxMa2882IyU1gyt4lN4o=;
	b=yS3W2ageZ1SI6fpF+5VVAZoHUBtvmhY6MSZ67FhGXoBywqg5CgV96IrlhWSnTJQ4Cx
	Vs3Qk9zBoamjUr3AirFjgCOPKhqGZWchM6phGqvAh+9vxXf5Z7PCtWq/dU70h5OK+BGw
	EWaS2GbA2iYwQR4hMt1Jja3VBN1muT0GUhtZDKXx6CFRSozIpJHWHD/i8t6hn+ApOE54
	Tw9BFa7xMGu5MA96+pjAzDLBd+qpb8wq+xQYIB5UUdaZNXJyRU2bDoUQebt7Fs2TeuKe
	4EUcnmYrwvACr2MF+bChjJibUi6cA3AKr+XFX7oylqCMyduNtbyj/LTZiGmxfcclwd31
	9kDg==
X-Received: by 10.68.240.36 with SMTP id vx4mr31160930pbc.140.1391943710286;
	Sun, 09 Feb 2014 03:01:50 -0800 (PST)
Received: from rashika ([14.139.82.6]) by mx.google.com with ESMTPSA id
	xn12sm82387420pac.12.2014.02.09.03.01.48 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 03:01:49 -0800 (PST)
Date: Sun, 9 Feb 2014 16:31:46 +0530
From: Rashika Kheria <rashika.kheria@gmail.com>
To: linux-kernel@vger.kernel.org
Message-ID: <a7672a06595d907ce9aacc65b3cbe0179684f5e0.1391943416.git.rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>, josh@joshtriplett.org
Subject: [Xen-devel] [PATCH 2/4] drivers: xen: Include appropriate header
	file in pcpu.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Include appropriate header file in xen/pcpu.c because include/xen/acpi.h
contains prototype declaration of functions defined in the file.

This eliminates the following warning in xen/pcpu.c:
drivers/xen/pcpu.c:336:6: warning: no previous prototype for ‘xen_pcpu_hotplug_sync’ [-Wmissing-prototypes]
drivers/xen/pcpu.c:346:5: warning: no previous prototype for ‘xen_pcpu_id’ [-Wmissing-prototypes]

Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
---
 drivers/xen/pcpu.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
index 79e1dff..0aac403 100644
--- a/drivers/xen/pcpu.c
+++ b/drivers/xen/pcpu.c
@@ -40,6 +40,7 @@
 #include <linux/capability.h>
 
 #include <xen/xen.h>
+#include <xen/acpi.h>
 #include <xen/xenbus.h>
 #include <xen/events.h>
 #include <xen/interface/platform.h>
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
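[Editor's note] This patch has pcpu.c include the header that declares its own exported functions, so the compiler can check each definition against its public prototype. A single-file sketch of that pairing (the header contents are inlined here, and all names are illustrative, not the kernel's):

```c
/* --- as if from "pcpu_demo.h" --- */
int demo_pcpu_id(int cpu);      /* public prototype other files rely on */

/* --- as if from "pcpu_demo.c", after #include "pcpu_demo.h" --- */
/* Because the prototype is in scope, a mismatch in return type or
 * parameters would now be a compile error, and -Wmissing-prototypes
 * is satisfied. The identity mapping is a trivial placeholder body. */
int demo_pcpu_id(int cpu)
{
    return cpu;
}
```

The key point is that the defining translation unit must see the declaration first; otherwise the compiler has nothing to verify the definition against.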

From xen-devel-bounces@lists.xen.org Sun Feb 09 11:09:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 11:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCSG2-0004Zp-DE; Sun, 09 Feb 2014 11:09:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rashika.kheria@gmail.com>) id 1WCSG1-0004Zk-Ez
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 11:09:05 +0000
Received: from [85.158.139.211:46558] by server-12.bemta-5.messagelabs.com id
	7B/B8-15415-0D167F25; Sun, 09 Feb 2014 11:09:04 +0000
X-Env-Sender: rashika.kheria@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1391944141!2637211!1
X-Originating-IP: [209.85.192.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2694 invoked from network); 9 Feb 2014 11:09:03 -0000
Received: from mail-pd0-f177.google.com (HELO mail-pd0-f177.google.com)
	(209.85.192.177)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 11:09:03 -0000
Received: by mail-pd0-f177.google.com with SMTP id x10so4885768pdj.8
	for <xen-devel@lists.xenproject.org>;
	Sun, 09 Feb 2014 03:09:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=QIuep+BCE5DytDeTDdeUJC8jcP5HlGTaisvckYCFJMo=;
	b=lUHSOPK8mqaEk+b0oUobaDh1BpezJY8zl8tKtMR/aOKq3q5KOqDuEu5iPweMKpsw+V
	cgX/uMOo8u/aBUzhXS1mGmq6hqDrfpYU2u6viTZ3xBHxmhS6DIxjojuGYXJ3FjgPaOW0
	WAyzFlsBLesgtrTmRZdBa+jrZQSLQIwbO5A0iAbZmZhuRw9oCwel59d3w4QLX7SliVRd
	FlX/dS6yffzXIRORgEcqpJHCPvIsz7L+Of44YR3vCJILY+P/XXx33zmXFWvuXrOTk1LK
	2Ty/380HgXhIGji3L/TDc9t8fC9CL17jgrr8tJQTDfqycsidRsskN/qkKbjwUHOkZ81w
	LIqg==
X-Received: by 10.68.159.228 with SMTP id xf4mr31112345pbb.74.1391944141641;
	Sun, 09 Feb 2014 03:09:01 -0800 (PST)
Received: from rashika ([14.139.82.6])
	by mx.google.com with ESMTPSA id e6sm31627412pbg.4.2014.02.09.03.09.00
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 03:09:00 -0800 (PST)
Date: Sun, 9 Feb 2014 16:38:57 +0530
From: Rashika Kheria <rashika.kheria@gmail.com>
To: linux-kernel@vger.kernel.org
Message-ID: <a609cbd7fcce1a4f5df0b00e3f866461b9ad071f.1391943416.git.rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: x86@kernel.org, josh@joshtriplett.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 3/4] drivers: xen: Move prototype declaration to
 appropriate header file from arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move prototype declaration to header file include/xen/xen-ops.h from
arch/x86/xen/xen-ops.h because they are used by more than one file.

This eliminates the following warning in drivers/xen/events/:
drivers/xen/events_2l.c:1231:13: warning: no previous prototype for ‘xen_debug_interrupt’ [-Wmissing-prototypes]

Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
---
 arch/x86/xen/xen-ops.h |    2 --
 include/xen/xen-ops.h  |    3 +++
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 1cb6f4c..e5edc7f 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -55,8 +55,6 @@ void xen_setup_cpu_clockevents(void);
 void __init xen_init_time_ops(void);
 void __init xen_hvm_init_time_ops(void);
 
-irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
-
 bool xen_vcpu_stolen(int vcpu);
 
 void xen_setup_vcpu_info_placement(void);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..9a86337 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -2,6 +2,7 @@
 #define INCLUDE_XEN_OPS_H
 
 #include <linux/percpu.h>
+#include <linux/interrupt.h>
 #include <asm/xen/interface.h>
 
 DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
@@ -35,4 +36,6 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
 
+irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
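[Editor's note] This patch moves a prototype into the shared include/xen/xen-ops.h and, crucially, adds `#include <linux/interrupt.h>` so the header itself provides the `irqreturn_t` type its new prototype mentions. A self-contained sketch of that rule, with a typedef standing in for irqreturn_t (all names here are illustrative):

```c
/* A shared header must pull in the types its prototypes use, rather
 * than relying on every includer to have done so already. */
typedef int demo_irqreturn_t;   /* stand-in for irqreturn_t */

/* --- as if from the shared header: one prototype, many users --- */
demo_irqreturn_t demo_debug_interrupt(int irq, void *dev_id);

/* --- as if from one of the .c files sharing that prototype --- */
demo_irqreturn_t demo_debug_interrupt(int irq, void *dev_id)
{
    (void)dev_id;               /* unused in this sketch */
    return irq >= 0 ? 1 : 0;    /* report "handled" for valid irq numbers */
}
```

Had the typedef been omitted from the header, every file including it without the type already in scope would fail to compile, which is exactly why the patch adds the interrupt.h include alongside the moved prototype.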

From xen-devel-bounces@lists.xen.org Sun Feb 09 11:12:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 11:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCSJX-0004h9-1s; Sun, 09 Feb 2014 11:12:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rashika.kheria@gmail.com>) id 1WCSJV-0004h3-3u
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 11:12:41 +0000
Received: from [85.158.137.68:17466] by server-4.bemta-3.messagelabs.com id
	A3/CB-11750-8A267F25; Sun, 09 Feb 2014 11:12:40 +0000
X-Env-Sender: rashika.kheria@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1391944357!624339!1
X-Originating-IP: [209.85.220.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23332 invoked from network); 9 Feb 2014 11:12:39 -0000
Received: from mail-pa0-f43.google.com (HELO mail-pa0-f43.google.com)
	(209.85.220.43)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 11:12:39 -0000
Received: by mail-pa0-f43.google.com with SMTP id rd3so5022409pab.2
	for <xen-devel@lists.xenproject.org>;
	Sun, 09 Feb 2014 03:12:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:content-transfer-encoding
	:in-reply-to:user-agent;
	bh=mQLkhd6xzp6L/Veue5DyhgfRVyQxpaRUvALa6TPaJDY=;
	b=udAtkBOsaFs+1Tl06/ZumZEHzMteYtGCjJ3fV823J1xF2KyitjjkxJqPYS4h6Wdcxy
	vyLhfT7OWW4/R6ZAyWW813N0y0arPtshJJGYgh5sDOZbIbyVeMxgXGVBqLBOw2jN+fNb
	BeIQsflyq1/iXbMOlt/qf/FDZJ8J9gILk0Mdc4PGL/1fGqk/Xa2byAKzLv36VLjaPyq6
	zcvyUYjL59tBaf15lLbJFSNLnPtZu6sPKmIP4BLa/Hx8POtxu2F/t2utHzrl5zEIuWfN
	IAh6Qx0QZ7VfrdpWhJ8L9uM0cKpXHqpgAp2ni538HZjGD1rGIC1zfb6v/Zw0hpST1Qpy
	sbpQ==
X-Received: by 10.66.160.195 with SMTP id xm3mr18959058pab.93.1391944357578;
	Sun, 09 Feb 2014 03:12:37 -0800 (PST)
Received: from rashika ([14.139.82.6]) by mx.google.com with ESMTPSA id
	sy2sm31590645pbc.28.2014.02.09.03.12.35 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 03:12:36 -0800 (PST)
Date: Sun, 9 Feb 2014 16:42:33 +0530
From: Rashika Kheria <rashika.kheria@gmail.com>
To: linux-kernel@vger.kernel.org
Message-ID: <98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Wei Liu <wei.liu2@citrix.com>, x86@kernel.org, josh@joshtriplett.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 4/4] drivers: xen: Move prototype declaration to
 header file include/xen/xen-ops.h from arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move prototype declaration to header file include/xen/xen-ops.h from
arch/x86/xen/xen-ops.h because it is used by more than one file. Also,
remove else condition from xen/events/events_base.c to eliminate
conflicting definitions when CONFIG_XEN_PVHVM is not defined.

This eliminates the following warning in xen/events/events_base.c:
drivers/xen/events/events_base.c:1640:6: warning: no previous prototype for ‘xen_callback_vector’ [-Wmissing-prototypes]

Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
---
 arch/x86/xen/xen-ops.h           |    1 -
 drivers/xen/events/events_base.c |    2 --
 include/xen/xen-ops.h            |    7 +++++++
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index e5edc7f..aa8a979 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -39,7 +39,6 @@ void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
-void xen_callback_vector(void);
 void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..5466543 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1656,8 +1656,6 @@ void xen_callback_vector(void)
 					xen_hvm_callback_vector);
 	}
 }
-#else
-void xen_callback_vector(void) {}
 #endif
 
 #undef MODULE_PARAM_PREFIX
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 9a86337..cdea45b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -38,4 +38,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
 
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
+
+#ifdef CONFIG_XEN_PVHVM
+void xen_callback_vector(void);
+#else
+static inline void xen_callback_vector(void) {}
+#endif
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
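[Editor's note] This patch replaces an out-of-line dummy definition with the common header idiom: when the config option is on, callers get the real prototype; when it is off, they get an empty `static inline` stub, so call sites compile unchanged and no duplicate symbol can conflict. A runnable sketch of the idiom, where DEMO_PVHVM stands in for CONFIG_XEN_PVHVM and the int return (the kernel function returns void) exists only so the branch taken can be observed:

```c
#define DEMO_PVHVM 1            /* remove this define to take the stub path */

#ifdef DEMO_PVHVM
/* feature on: in the kernel this would be a plain prototype, with the
 * real definition living in exactly one .c file */
static int demo_callback_vector(void)
{
    return 1;                   /* pretend the callback was wired up */
}
#else
/* feature off: empty static-inline stub; each includer gets its own
 * local copy, which the optimizer removes at the call site */
static inline int demo_callback_vector(void)
{
    return 0;
}
#endif
```

Compared with the removed `#else void xen_callback_vector(void) {}` in events_base.c, putting the stub in the header means the choice is made wherever the header is included, instead of depending on which .c files happen to be built.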

	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 03:12:36 -0800 (PST)
Date: Sun, 9 Feb 2014 16:42:33 +0530
From: Rashika Kheria <rashika.kheria@gmail.com>
To: linux-kernel@vger.kernel.org
Message-ID: <98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Wei Liu <wei.liu2@citrix.com>, x86@kernel.org, josh@joshtriplett.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 4/4] drivers: xen: Move prototype declaration to
 header file include/xen/xen-ops.h from arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move prototype declaration to header file include/xen/xen-ops.h from
arch/x86/xen/xen-ops.h because it is used by more than one file. Also,
remove else condition from xen/events/events_base.c to eliminate
conflicting definitions when CONFIG_XEN_PVHVM is not defined.

This eliminates the following warning in xen/events/events_base.c:
drivers/xen/events/events_base.c:1640:6: warning: no previous prototype for ‘xen_callback_vector’ [-Wmissing-prototypes]

Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
---
 arch/x86/xen/xen-ops.h           |    1 -
 drivers/xen/events/events_base.c |    2 --
 include/xen/xen-ops.h            |    7 +++++++
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index e5edc7f..aa8a979 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -39,7 +39,6 @@ void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
-void xen_callback_vector(void);
 void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..5466543 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1656,8 +1656,6 @@ void xen_callback_vector(void)
 					xen_hvm_callback_vector);
 	}
 }
-#else
-void xen_callback_vector(void) {}
 #endif
 
 #undef MODULE_PARAM_PREFIX
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 9a86337..cdea45b 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -38,4 +38,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
 
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
+
+#ifdef CONFIG_XEN_PVHVM
+void xen_callback_vector(void);
+#else
+static inline void xen_callback_vector(void) {}
+#endif
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 12:21:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 12:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCTNM-0008A6-7F; Sun, 09 Feb 2014 12:20:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <josh@joshtriplett.org>) id 1WCTNL-0008A1-Bj
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 12:20:43 +0000
Received: from [85.158.139.211:3253] by server-1.bemta-5.messagelabs.com id
	09/13-12859-A9277F25; Sun, 09 Feb 2014 12:20:42 +0000
X-Env-Sender: josh@joshtriplett.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1391948441!2674200!1
X-Originating-IP: [217.70.183.195]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjE3LjcwLjE4My4xOTUgPT4gMzc4NjI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9904 invoked from network); 9 Feb 2014 12:20:41 -0000
Received: from relay3-d.mail.gandi.net (HELO relay3-d.mail.gandi.net)
	(217.70.183.195)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 9 Feb 2014 12:20:41 -0000
Received: from mfilter6-d.gandi.net (mfilter6-d.gandi.net [217.70.178.135])
	by relay3-d.mail.gandi.net (Postfix) with ESMTP id 83128A80B6;
	Sun,  9 Feb 2014 13:20:41 +0100 (CET)
X-Virus-Scanned: Debian amavisd-new at mfilter6-d.gandi.net
Received: from relay3-d.mail.gandi.net ([217.70.183.195])
	by mfilter6-d.gandi.net (mfilter6-d.gandi.net [10.0.15.180])
	(amavisd-new, port 10024)
	with ESMTP id agIEmRTs9yXq; Sun,  9 Feb 2014 13:20:39 +0100 (CET)
X-Originating-IP: 50.43.14.201
Received: from leaf (static-50-43-14-201.bvtn.or.frontiernet.net
	[50.43.14.201]) (Authenticated sender: josh@joshtriplett.org)
	by relay3-d.mail.gandi.net (Postfix) with ESMTPSA id ADB67A80B4;
	Sun,  9 Feb 2014 13:20:34 +0100 (CET)
Date: Sun, 9 Feb 2014 04:20:32 -0800
From: Josh Triplett <josh@joshtriplett.org>
To: Rashika Kheria <rashika.kheria@gmail.com>
Message-ID: <20140209122032.GB29984@leaf>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
	<98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Wei Liu <wei.liu2@citrix.com>, x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/4] drivers: xen: Move prototype
 declaration to header file include/xen/xen-ops.h from
 arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Feb 09, 2014 at 04:42:33PM +0530, Rashika Kheria wrote:
> Move prototype declaration to header file include/xen/xen-ops.h from
> arch/x86/xen/xen-ops.h because it is used by more than one file. Also,
> remove else condition from xen/events/events_base.c to eliminate
> conflicting definitions when CONFIG_XEN_PVHVM is not defined.
> 
> This eliminates the following warning in xen/events/events_base.c:
> drivers/xen/events/events_base.c:1640:6: warning: no previous prototype for ‘xen_callback_vector’ [-Wmissing-prototypes]
> 
> Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>

Reviewed-by: Josh Triplett <josh@joshtriplett.org>

>  arch/x86/xen/xen-ops.h           |    1 -
>  drivers/xen/events/events_base.c |    2 --
>  include/xen/xen-ops.h            |    7 +++++++
>  3 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index e5edc7f..aa8a979 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -39,7 +39,6 @@ void xen_enable_sysenter(void);
>  void xen_enable_syscall(void);
>  void xen_vcpu_restore(void);
>  
> -void xen_callback_vector(void);
>  void xen_hvm_init_shared_info(void);
>  void xen_unplug_emulated_devices(void);
>  
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index 4672e00..5466543 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -1656,8 +1656,6 @@ void xen_callback_vector(void)
>  					xen_hvm_callback_vector);
>  	}
>  }
> -#else
> -void xen_callback_vector(void) {}
>  #endif
>  
>  #undef MODULE_PARAM_PREFIX
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 9a86337..cdea45b 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -38,4 +38,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
>  
>  irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
> +
> +#ifdef CONFIG_XEN_PVHVM
> +void xen_callback_vector(void);
> +#else
> +static inline void xen_callback_vector(void) {}
> +#endif
> +
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 1.7.9.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 16:05:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 16:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCWsF-0008T8-Mv; Sun, 09 Feb 2014 16:04:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCWsC-0008T3-VT
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 16:04:49 +0000
Received: from [85.158.139.211:26593] by server-12.bemta-5.messagelabs.com id
	3C/D8-15415-027A7F25; Sun, 09 Feb 2014 16:04:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1391961885!2665819!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4208 invoked from network); 9 Feb 2014 16:04:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 16:04:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,812,1384300800"; d="scan'208";a="99319459"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 09 Feb 2014 16:04:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 11:04:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCWs2-00016N-KJ;
	Sun, 09 Feb 2014 16:04:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCWs2-0003Ds-4Q;
	Sun, 09 Feb 2014 16:04:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24817-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 16:04:38 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24817: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24817 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24817/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                494479038d97f1b9f76fc633a360a681acdf035c
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7013 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2370853 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                494479038d97f1b9f76fc633a360a681acdf035c
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7013 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2370853 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 16:51:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 16:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCXbV-00029R-Cn; Sun, 09 Feb 2014 16:51:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCXbT-00029M-T9
	for xen-devel@lists.xenproject.org; Sun, 09 Feb 2014 16:51:36 +0000
Received: from [193.109.254.147:24586] by server-12.bemta-14.messagelabs.com
	id 9C/84-17220-612B7F25; Sun, 09 Feb 2014 16:51:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1391964693!3074338!1
X-Originating-IP: [74.125.82.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29391 invoked from network); 9 Feb 2014 16:51:34 -0000
Received: from mail-we0-f177.google.com (HELO mail-we0-f177.google.com)
	(74.125.82.177)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 16:51:34 -0000
Received: by mail-we0-f177.google.com with SMTP id t61so3616042wes.36
	for <xen-devel@lists.xenproject.org>;
	Sun, 09 Feb 2014 08:51:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=TTWyzNJNa1M1Cismjslwf0kHr74cHlE6Qb6a/No6FwI=;
	b=edMA0BUQ4KEMTogYxThH4Xx0R8JujuSFnRaOBa1vmRMMV+BG6UWUs1dH6Q9pPQE2nv
	9KiFyd1gE8zz9XQkbtOO9C0mqUWN2aww16pjuKypu+jQOT/pJD7euCzoIU6EbR+m69RT
	cKE31Rih5/GZg0eMyfieoOJcxKCvsWZnXe21obgt6/nVWuB2uwgGopWwJT3Lj8+rO8et
	G3WPX4hGz4yUEhUU4Ldvvwc0uDROHjIu1yi2D8PMqZlqQWcHOBML6cpjf/VQHTrNMULR
	PzwZFWMJuajCRHIJhpqa2y9hBP7bp94NqNWJFBgSvH1swTliMar/tcvDUijbJwQzR/Z0
	Eb8w==
X-Gm-Message-State: ALoCoQkxgXG5DtkxsK0N7+6BOeeD/a3q/JdGFlMxdlzY3K63BL1/XLb3SvuXSbQDRZc05IJ6fKst
X-Received: by 10.194.60.103 with SMTP id g7mr2863398wjr.37.1391964693515;
	Sun, 09 Feb 2014 08:51:33 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ux5sm27915175wjc.6.2014.02.09.08.51.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 09 Feb 2014 08:51:32 -0800 (PST)
Message-ID: <52F7B213.3000302@linaro.org>
Date: Sun, 09 Feb 2014 16:51:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>	<20140127175550.4cc67171@mantra.us.oracle.com>	<52E7951802000078001177F6@nat28.tlf.novell.com>	<20140128180802.152b3f8d@mantra.us.oracle.com>	<1390992026.31814.63.camel@kazak.uk.xensource.com>	<20140129113846.GA54797@deinos.phlegethon.org>	<1390995662.31814.76.camel@kazak.uk.xensource.com>	<20140129114837.GB54797@deinos.phlegethon.org>	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
In-Reply-To: <20140129173315.592e593e@mantra.us.oracle.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Mukesh,

On 30/01/14 01:33, Mukesh Rathor wrote:
>>> I'm not sure what you mean:
>>>   - the code that Mukesh is adding doesn't have a struct page, it's
>>>     just grabbing the foreign domid from the hypercall arg;
>>>   - if we did have a struct page, we'd just need to take a ref to
>>>     stop the owner changing underfoot; and
>>>   - get_pg_owner() takes a domid anyway.
>>
>> Sorry, I was confused/mislead by the name...
>>
>> rcu_lock_live_remote_domain_by_id does look like what is needed.

Following the xentrace thread: 
http://www.gossamer-threads.com/lists/xen/devel/315883, 
rcu_lock_live_remote_domain_by_id will not work correctly.

On Xen on ARM, xentrace uses this hypercall to map Xen pages (via 
DOMID_XEN). In this case, rcu_lock_*domain* will always fail, which will 
prevent xentrace from working on Xen on ARM (and on PVH).

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 09 20:26:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 20:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCax8-00020Z-4v; Sun, 09 Feb 2014 20:26:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCax6-00020R-1w
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 20:26:08 +0000
Received: from [85.158.139.211:17679] by server-12.bemta-5.messagelabs.com id
	9A/C0-15415-F54E7F25; Sun, 09 Feb 2014 20:26:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391977564!2743385!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28688 invoked from network); 9 Feb 2014 20:26:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 20:26:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,813,1384300800"; d="scan'208";a="101179127"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Feb 2014 20:26:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 15:26:03 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCax0-0002NK-Vb;
	Sun, 09 Feb 2014 20:26:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCax0-0004so-RA;
	Sun, 09 Feb 2014 20:26:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24818-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 20:26:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24818: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6039101826729763767=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6039101826729763767==
Content-Type: text/plain

flight 24818 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24818/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24807
 test-amd64-amd64-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail pass in 24807

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============6039101826729763767==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6039101826729763767==--

From xen-devel-bounces@lists.xen.org Sun Feb 09 20:26:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Feb 2014 20:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCax8-00020Z-4v; Sun, 09 Feb 2014 20:26:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCax6-00020R-1w
	for xen-devel@lists.xensource.com; Sun, 09 Feb 2014 20:26:08 +0000
Received: from [85.158.139.211:17679] by server-12.bemta-5.messagelabs.com id
	9A/C0-15415-F54E7F25; Sun, 09 Feb 2014 20:26:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1391977564!2743385!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28688 invoked from network); 9 Feb 2014 20:26:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	9 Feb 2014 20:26:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,813,1384300800"; d="scan'208";a="101179127"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 09 Feb 2014 20:26:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 15:26:03 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCax0-0002NK-Vb;
	Sun, 09 Feb 2014 20:26:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCax0-0004so-RA;
	Sun, 09 Feb 2014 20:26:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24818-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 9 Feb 2014 20:26:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24818: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6039101826729763767=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6039101826729763767==
Content-Type: text/plain

flight 24818 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24818/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10      fail pass in 24807
 test-amd64-amd64-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail pass in 24807

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============6039101826729763767==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6039101826729763767==--

From xen-devel-bounces@lists.xen.org Mon Feb 10 02:28:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 02:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCgax-0003Tm-Ik; Mon, 10 Feb 2014 02:27:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCgaw-0003Th-Jd
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 02:27:38 +0000
Received: from [193.109.254.147:32259] by server-14.bemta-14.messagelabs.com
	id 4F/4D-29228-91938F25; Mon, 10 Feb 2014 02:27:37 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1391999254!3119065!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32448 invoked from network); 10 Feb 2014 02:27:36 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-8.tower-27.messagelabs.com with SMTP;
	10 Feb 2014 02:27:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,815,1384272000"; 
   d="scan'208";a="9494136"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 10:23:45 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A2RQlW000833;
	Mon, 10 Feb 2014 10:27:27 +0800
Received: from [10.167.226.103] ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021010253260-1682028 ;
	Mon, 10 Feb 2014 10:25:32 +0800 
Message-ID: <52F8399F.5030804@cn.fujitsu.com>
Date: Mon, 10 Feb 2014 10:29:51 +0800
From: Lai Jiangshan <laijs@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.2.9) Gecko/20100921 Fedora/3.1.4-1.fc14 Thunderbird/3.1.4
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390295117-718-1-git-send-email-laijs@cn.fujitsu.com>	
	<1390295117-718-3-git-send-email-laijs@cn.fujitsu.com>	
	<CAP8mzPPhwsQ=mn-xJAG7FzvHoBdSBVdrtFdzgMJM2gac_CpmuA@mail.gmail.com>	
	<52E60568.7060305@cn.fujitsu.com>	
	<CAP8mzPOdaTzGPHRzg=v_CUOEJk-40e=Qv6rAyG3joUnoTbeoaA@mail.gmail.com>
	<1390901372.7753.2.camel@kazak.uk.xensource.com>
In-Reply-To: <1390901372.7753.2.camel@kazak.uk.xensource.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 10:25:32,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 10:25:38,
	Serialize complete at 2014/02/10 10:25:38
Cc: Dong Eddie <eddie.dong@intel.com>, Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	rshriram@cs.ubc.ca, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 02/13 V6] remus: implement network buffering
 hotplug scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 01/28/2014 05:29 PM, Ian Campbell wrote:
> On Mon, 2014-01-27 at 10:05 -0800, Shriram Rajagopalan wrote:
>> On Sun, Jan 26, 2014 at 11:06 PM, Wen Congyang <wency@cn.fujitsu.com>
>> wrote:
>>         > The last time I posted this script, the feedback was that
>>         the script and the code invoking the script should be in a
>>         single patch. So I would suggest doing the same.
>>         
>>         
>>         We use the script in patch6. It adds 479 lines. These two
>>         patches are big patches(add more than 100 lines), so why put
>>         them into a single patch?
> 
>>
>> That is a valid question. IIRC, IanJ was the one who wanted the code
>> and the script together. IanJ, any thoughts?

> 
> Unless the patches are so big they won't get past the mailing list
> filters (which are 100s of Kb I think) the important thing is the
> logical separation of functionality into separate patches, not the
> individual line count of each patch.
> 

We did separate them logically: the patches are split by functionality, ^_^.
They are split at a somewhat finer granularity than in previous versions;
finer-grained patches are much easier to review on the mailing list and in
the future changelog.

If anyone insists on the original arrangement, we will change it back.

thx,
Lai

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 02:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 02:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCgkc-00040o-4K; Mon, 10 Feb 2014 02:37:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCgkZ-00040j-Pv
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 02:37:36 +0000
Received: from [85.158.139.211:36416] by server-15.bemta-5.messagelabs.com id
	F0/B0-24395-F6B38F25; Mon, 10 Feb 2014 02:37:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391999852!2766961!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23125 invoked from network); 10 Feb 2014 02:37:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 02:37:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,815,1384300800"; d="scan'208";a="101208106"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 02:37:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 21:37:31 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCgkV-0004De-3H;
	Mon, 10 Feb 2014 02:37:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCgkU-00006o-LH;
	Mon, 10 Feb 2014 02:37:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24819-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 02:37:30 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24819: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8717625246403405095=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8717625246403405095==
Content-Type: text/plain

flight 24819 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24819/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807
 test-amd64-i386-pv            9 guest-start                 fail pass in 24818
 test-amd64-i386-pair          4 host-install/dst_host(4)  broken pass in 24818
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 24818 pass in 24819
 test-amd64-amd64-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail in 24818 pass in 24819

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============8717625246403405095==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8717625246403405095==--

From xen-devel-bounces@lists.xen.org Mon Feb 10 02:37:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 02:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCgkc-00040o-4K; Mon, 10 Feb 2014 02:37:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCgkZ-00040j-Pv
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 02:37:36 +0000
Received: from [85.158.139.211:36416] by server-15.bemta-5.messagelabs.com id
	F0/B0-24395-F6B38F25; Mon, 10 Feb 2014 02:37:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1391999852!2766961!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23125 invoked from network); 10 Feb 2014 02:37:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 02:37:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,815,1384300800"; d="scan'208";a="101208106"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 02:37:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 9 Feb 2014 21:37:31 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCgkV-0004De-3H;
	Mon, 10 Feb 2014 02:37:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCgkU-00006o-LH;
	Mon, 10 Feb 2014 02:37:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24819-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 02:37:30 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24819: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8717625246403405095=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8717625246403405095==
Content-Type: text/plain

flight 24819 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24819/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807
 test-amd64-i386-pv            9 guest-start                 fail pass in 24818
 test-amd64-i386-pair          4 host-install/dst_host(4)  broken pass in 24818
 test-amd64-amd64-xl-sedf 14 guest-localmigrate/x10 fail in 24818 pass in 24819
 test-amd64-amd64-xl-qemuu-win7-amd64 11 guest-localmigrate.2 fail in 24818 pass in 24819

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl           4 capture-logs(4)        broken blocked in 24743

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         broken  
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============8717625246403405095==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8717625246403405095==--

From xen-devel-bounces@lists.xen.org Mon Feb 10 05:34:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 05:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCjV2-000349-77; Mon, 10 Feb 2014 05:33:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Suravee.Suthikulpanit@amd.com>) id 1WCjV1-000342-3N
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 05:33:43 +0000
Received: from [85.158.137.68:13664] by server-7.bemta-3.messagelabs.com id
	22/65-13775-6B468F25; Mon, 10 Feb 2014 05:33:42 +0000
X-Env-Sender: Suravee.Suthikulpanit@amd.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392010419!729596!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6250 invoked from network); 10 Feb 2014 05:33:41 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-15.tower-31.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Feb 2014 05:33:41 -0000
Received: from mail119-ch1-R.bigfish.com (10.43.68.229) by
	CH1EHSOBE003.bigfish.com (10.43.70.53) with Microsoft SMTP Server id
	14.1.225.22; Mon, 10 Feb 2014 05:33:39 +0000
Received: from mail119-ch1 (localhost [127.0.0.1])	by
	mail119-ch1-R.bigfish.com (Postfix) with ESMTP id 8EBD04E0796;
	Mon, 10 Feb 2014 05:33:39 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(zzbb2dI98dI9371I1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail119-ch1 (localhost.localdomain [127.0.0.1]) by mail119-ch1
	(MessageSwitch) id 1392010418583836_12027;
	Mon, 10 Feb 2014 05:33:38 +0000 (UTC)
Received: from CH1EHSMHS032.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.249])	by mail119-ch1.bigfish.com (Postfix) with ESMTP id
	7F77B60074;	Mon, 10 Feb 2014 05:33:38 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CH1EHSMHS032.bigfish.com
	(10.43.70.32) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 10 Feb 2014 05:33:38 +0000
X-WSS-ID: 0N0RM40-07-AZB-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2F19612C000E;	Sun,  9 Feb 2014 23:33:35 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Sun, 9 Feb 2014 23:33:37 -0600
Received: from [10.224.13.4] (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9; Mon, 10 Feb 2014
	00:33:36 -0500
Message-ID: <52F864B0.50209@amd.com>
Date: Sun, 9 Feb 2014 23:33:36 -0600
From: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
In-Reply-To: <52F4B38C020000780011A15D@nat28.tlf.novell.com>
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Subject: Re: [Xen-devel] [PATCH] AMD IOMMU: fail if there is no southbridge
	IO-APIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Acked-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>

On 2/7/2014 3:21 AM, Jan Beulich wrote:
> ... but interrupt remapping is requested (with per-device remapping
> tables). Without it, the timer interrupt is usually not working.
>
> Inspired by Linux'es "iommu/amd: Work around wrong IOAPIC device-id in
> IVRS table" (commit c2ff5cf5294bcbd7fa50f7d860e90a66db7e5059) by Joerg
> Roedel <joerg.roedel@amd.com>.
>
> Reported-by: Eric Houby <ehouby@yahoo.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Eric Houby <ehouby@yahoo.com>
>
> --- a/xen/drivers/passthrough/amd/iommu_acpi.c
> +++ b/xen/drivers/passthrough/amd/iommu_acpi.c
> @@ -984,6 +984,7 @@ static int __init parse_ivrs_table(struc
>       const struct acpi_ivrs_header *ivrs_block;
>       unsigned long length;
>       unsigned int apic;
> +    bool_t sb_ioapic = !iommu_intremap;
>       int error = 0;
>
>       BUG_ON(!table);
> @@ -1017,8 +1018,15 @@ static int __init parse_ivrs_table(struc
>       /* Each IO-APIC must have been mentioned in the table. */
>       for ( apic = 0; !error && iommu_intremap && apic < nr_ioapics; ++apic )
>       {
> -        if ( !nr_ioapic_entries[apic] ||
> -             ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
> +        if ( !nr_ioapic_entries[apic] )
> +            continue;
> +
> +        if ( !ioapic_sbdf[IO_APIC_ID(apic)].seg &&
> +             /* SB IO-APIC is always on this device in AMD systems. */
> +             ioapic_sbdf[IO_APIC_ID(apic)].bdf == PCI_BDF(0, 0x14, 0) )
> +            sb_ioapic = 1;
> +
> +        if ( ioapic_sbdf[IO_APIC_ID(apic)].pin_2_idx )
>               continue;
>
>           if ( !test_bit(IO_APIC_ID(apic), ioapic_cmdline) )
> @@ -1041,6 +1049,14 @@ static int __init parse_ivrs_table(struc
>           }
>       }
>
> +    if ( !error && !sb_ioapic )
> +    {
> +        if ( amd_iommu_perdev_intremap )
> +            error = -ENXIO;
> +        printk("%sNo southbridge IO-APIC found in IVRS table\n",
> +               amd_iommu_perdev_intremap ? XENLOG_ERR : XENLOG_WARNING);
> +    }
> +
>       return error;
>   }
>
>
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 06:19:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 06:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCkCe-00056K-Mi; Mon, 10 Feb 2014 06:18:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WCkCd-00056F-IU
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 06:18:47 +0000
Received: from [85.158.143.35:34225] by server-3.bemta-4.messagelabs.com id
	6D/12-11539-64F68F25; Mon, 10 Feb 2014 06:18:46 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392013125!4371382!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjYzMTcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5348 invoked from network); 10 Feb 2014 06:18:45 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-8.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 06:18:45 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga101.ch.intel.com with ESMTP; 09 Feb 2014 22:18:44 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,816,1384329600"; d="scan'208";a="472571733"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by fmsmga001.fm.intel.com with ESMTP; 09 Feb 2014 22:18:41 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 14:14:00 +0800
Message-Id: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: Yang Zhang <yang.z.zhang@Intel.com>, andrew.cooper3@citrix.com, tim@xen.org,
	xiantao.zhang@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty
	to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

When log dirty mode is enabled, all of the guest's memory is set to readonly.
In a HAP-enabled domain this clears the write bit in every EPT entry. That
causes a problem when VT-d shares page tables with EPT: the device may issue
a DMA write request, hit the now readonly mapping, and trigger a VT-d fault.

Currently, two places enable log dirty mode: migration and vram tracking.
Migration with a device assigned is not allowed, so that path is fine. Vram
tracking, however, does not need all memory set to readonly; tracking only
the vram range is enough.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/mm/hap/hap.c       |   20 ++++++++++++++------
 xen/arch/x86/mm/paging.c        |    9 +++++----
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/include/asm-x86/domain.h    |    2 +-
 xen/include/asm-x86/paging.h    |    5 +++--
 xen/include/asm-x86/shadow.h    |    2 +-
 6 files changed, 25 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d3f64bd..5f75636 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -82,7 +82,7 @@ int hap_track_dirty_vram(struct domain *d,
         if ( !paging_mode_log_dirty(d) )
         {
             hap_logdirty_init(d);
-            rc = paging_log_dirty_enable(d);
+            rc = paging_log_dirty_enable(d, 0);
             if ( rc )
                 goto out;
         }
@@ -167,17 +167,25 @@ out:
 /*            HAP LOG DIRTY SUPPORT             */
 /************************************************/
 
-/* hap code to call when log_dirty is enable. return 0 if no problem found. */
-static int hap_enable_log_dirty(struct domain *d)
+/*
+ * hap code to call when log_dirty is enable. return 0 if no problem found.
+ *
+ * NB: Domain that having device assigned should not set log_global. Because
+ * there is no way to track the memory updating from device.
+ */
+static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
 {
     /* turn on PG_log_dirty bit in paging mode */
     paging_lock(d);
     d->arch.paging.mode |= PG_log_dirty;
     paging_unlock(d);
 
-    /* set l1e entries of P2M table to be read-only. */
-    p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-    flush_tlb_mask(d->domain_dirty_cpumask);
+    if ( log_global )
+    {
+        /* set l1e entries of P2M table to be read-only. */
+        p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
+        flush_tlb_mask(d->domain_dirty_cpumask);
+    }
     return 0;
 }
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 21344e5..ab5eacb 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -164,7 +164,7 @@ void paging_free_log_dirty_bitmap(struct domain *d)
     paging_unlock(d);
 }
 
-int paging_log_dirty_enable(struct domain *d)
+int paging_log_dirty_enable(struct domain *d, bool_t log_global)
 {
     int ret;
 
@@ -172,7 +172,7 @@ int paging_log_dirty_enable(struct domain *d)
         return -EINVAL;
 
     domain_pause(d);
-    ret = d->arch.paging.log_dirty.enable_log_dirty(d);
+    ret = d->arch.paging.log_dirty.enable_log_dirty(d, log_global);
     domain_unpause(d);
 
     return ret;
@@ -489,7 +489,8 @@ void paging_log_dirty_range(struct domain *d,
  * These function pointers must not be followed with the log-dirty lock held.
  */
 void paging_log_dirty_init(struct domain *d,
-                           int    (*enable_log_dirty)(struct domain *d),
+                           int    (*enable_log_dirty)(struct domain *d,
+                                                      bool_t log_global),
                            int    (*disable_log_dirty)(struct domain *d),
                            void   (*clean_dirty_bitmap)(struct domain *d))
 {
@@ -590,7 +591,7 @@ int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
     case XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY:
         if ( hap_enabled(d) )
             hap_logdirty_init(d);
-        return paging_log_dirty_enable(d);
+        return paging_log_dirty_enable(d, 1);
 
     case XEN_DOMCTL_SHADOW_OP_OFF:
         if ( paging_mode_log_dirty(d) )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0bfa595..11c6b62 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3418,7 +3418,7 @@ shadow_write_p2m_entry(struct vcpu *v, unsigned long gfn,
 /* Shadow specific code which is called in paging_log_dirty_enable().
  * Return 0 if no problem found.
  */
-int shadow_enable_log_dirty(struct domain *d)
+int shadow_enable_log_dirty(struct domain *d, bool_t log_global)
 {
     int ret;
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..4ff89f0 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -169,7 +169,7 @@ struct log_dirty_domain {
     unsigned int   dirty_count;
 
     /* functions which are paging mode specific */
-    int            (*enable_log_dirty   )(struct domain *d);
+    int            (*enable_log_dirty   )(struct domain *d, bool_t log_global);
     int            (*disable_log_dirty  )(struct domain *d);
     void           (*clean_dirty_bitmap )(struct domain *d);
 };
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index cd7ee3b..8dd2a61 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -143,14 +143,15 @@ void paging_log_dirty_range(struct domain *d,
                             uint8_t *dirty_bitmap);
 
 /* enable log dirty */
-int paging_log_dirty_enable(struct domain *d);
+int paging_log_dirty_enable(struct domain *d, bool_t log_global);
 
 /* disable log dirty */
 int paging_log_dirty_disable(struct domain *d);
 
 /* log dirty initialization */
 void paging_log_dirty_init(struct domain *d,
-                           int  (*enable_log_dirty)(struct domain *d),
+                           int  (*enable_log_dirty)(struct domain *d,
+                                                    bool_t log_global),
                            int  (*disable_log_dirty)(struct domain *d),
                            void (*clean_dirty_bitmap)(struct domain *d));
 
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 852023d..348915e 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -82,7 +82,7 @@ void shadow_teardown(struct domain *d);
 void shadow_final_teardown(struct domain *d);
 
 /* shadow code to call when log dirty is enabled */
-int shadow_enable_log_dirty(struct domain *d);
+int shadow_enable_log_dirty(struct domain *d, bool_t log_global);
 
 /* shadow code to call when log dirty is disabled */
 int shadow_disable_log_dirty(struct domain *d);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 06:19:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 06:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCkCe-00056K-Mi; Mon, 10 Feb 2014 06:18:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WCkCd-00056F-IU
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 06:18:47 +0000
Received: from [85.158.143.35:34225] by server-3.bemta-4.messagelabs.com id
	6D/12-11539-64F68F25; Mon, 10 Feb 2014 06:18:46 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392013125!4371382!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjYzMTcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5348 invoked from network); 10 Feb 2014 06:18:45 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-8.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 06:18:45 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga101.ch.intel.com with ESMTP; 09 Feb 2014 22:18:44 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,816,1384329600"; d="scan'208";a="472571733"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by fmsmga001.fm.intel.com with ESMTP; 09 Feb 2014 22:18:41 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 14:14:00 +0800
Message-Id: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: Yang Zhang <yang.z.zhang@Intel.com>, andrew.cooper3@citrix.com, tim@xen.org,
	xiantao.zhang@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty
	to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

When enabling log dirty mode, it sets all guest's memory to readonly.
And in HAP enabled domain, it modifies all EPT entries to clear write bit
to make sure it is readonly. This will cause problem if VT-d shares page
table with EPT: the device may issue a DMA write request, then VT-d engine
tells it the target memory is readonly and result in VT-d fault.

Currnetly, there are two places will enable log dirty mode: migration and vram
tracking. Migration with device assigned is not allowed, so it is ok. For vram,
it doesn't need to set all memory to readonly. Only track the vram range is enough.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/mm/hap/hap.c       |   20 ++++++++++++++------
 xen/arch/x86/mm/paging.c        |    9 +++++----
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/include/asm-x86/domain.h    |    2 +-
 xen/include/asm-x86/paging.h    |    5 +++--
 xen/include/asm-x86/shadow.h    |    2 +-
 6 files changed, 25 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d3f64bd..5f75636 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -82,7 +82,7 @@ int hap_track_dirty_vram(struct domain *d,
         if ( !paging_mode_log_dirty(d) )
         {
             hap_logdirty_init(d);
-            rc = paging_log_dirty_enable(d);
+            rc = paging_log_dirty_enable(d, 0);
             if ( rc )
                 goto out;
         }
@@ -167,17 +167,25 @@ out:
 /*            HAP LOG DIRTY SUPPORT             */
 /************************************************/
 
-/* hap code to call when log_dirty is enable. return 0 if no problem found. */
-static int hap_enable_log_dirty(struct domain *d)
+/*
+ * hap code to call when log_dirty is enabled. Returns 0 if no problem found.
+ *
+ * NB: Domains with a device assigned should not set log_global, because
+ * there is no way to track memory updates performed by the device.
+ */
+static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
 {
     /* turn on PG_log_dirty bit in paging mode */
     paging_lock(d);
     d->arch.paging.mode |= PG_log_dirty;
     paging_unlock(d);
 
-    /* set l1e entries of P2M table to be read-only. */
-    p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
-    flush_tlb_mask(d->domain_dirty_cpumask);
+    if ( log_global )
+    {
+        /* set l1e entries of P2M table to be read-only. */
+        p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
+        flush_tlb_mask(d->domain_dirty_cpumask);
+    }
     return 0;
 }
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 21344e5..ab5eacb 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -164,7 +164,7 @@ void paging_free_log_dirty_bitmap(struct domain *d)
     paging_unlock(d);
 }
 
-int paging_log_dirty_enable(struct domain *d)
+int paging_log_dirty_enable(struct domain *d, bool_t log_global)
 {
     int ret;
 
@@ -172,7 +172,7 @@ int paging_log_dirty_enable(struct domain *d)
         return -EINVAL;
 
     domain_pause(d);
-    ret = d->arch.paging.log_dirty.enable_log_dirty(d);
+    ret = d->arch.paging.log_dirty.enable_log_dirty(d, log_global);
     domain_unpause(d);
 
     return ret;
@@ -489,7 +489,8 @@ void paging_log_dirty_range(struct domain *d,
  * These function pointers must not be followed with the log-dirty lock held.
  */
 void paging_log_dirty_init(struct domain *d,
-                           int    (*enable_log_dirty)(struct domain *d),
+                           int    (*enable_log_dirty)(struct domain *d,
+                                                      bool_t log_global),
                            int    (*disable_log_dirty)(struct domain *d),
                            void   (*clean_dirty_bitmap)(struct domain *d))
 {
@@ -590,7 +591,7 @@ int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
     case XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY:
         if ( hap_enabled(d) )
             hap_logdirty_init(d);
-        return paging_log_dirty_enable(d);
+        return paging_log_dirty_enable(d, 1);
 
     case XEN_DOMCTL_SHADOW_OP_OFF:
         if ( paging_mode_log_dirty(d) )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0bfa595..11c6b62 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3418,7 +3418,7 @@ shadow_write_p2m_entry(struct vcpu *v, unsigned long gfn,
 /* Shadow specific code which is called in paging_log_dirty_enable().
  * Return 0 if no problem found.
  */
-int shadow_enable_log_dirty(struct domain *d)
+int shadow_enable_log_dirty(struct domain *d, bool_t log_global)
 {
     int ret;
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..4ff89f0 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -169,7 +169,7 @@ struct log_dirty_domain {
     unsigned int   dirty_count;
 
     /* functions which are paging mode specific */
-    int            (*enable_log_dirty   )(struct domain *d);
+    int            (*enable_log_dirty   )(struct domain *d, bool_t log_global);
     int            (*disable_log_dirty  )(struct domain *d);
     void           (*clean_dirty_bitmap )(struct domain *d);
 };
diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
index cd7ee3b..8dd2a61 100644
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -143,14 +143,15 @@ void paging_log_dirty_range(struct domain *d,
                             uint8_t *dirty_bitmap);
 
 /* enable log dirty */
-int paging_log_dirty_enable(struct domain *d);
+int paging_log_dirty_enable(struct domain *d, bool_t log_global);
 
 /* disable log dirty */
 int paging_log_dirty_disable(struct domain *d);
 
 /* log dirty initialization */
 void paging_log_dirty_init(struct domain *d,
-                           int  (*enable_log_dirty)(struct domain *d),
+                           int  (*enable_log_dirty)(struct domain *d,
+                                                    bool_t log_global),
                            int  (*disable_log_dirty)(struct domain *d),
                            void (*clean_dirty_bitmap)(struct domain *d));
 
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 852023d..348915e 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -82,7 +82,7 @@ void shadow_teardown(struct domain *d);
 void shadow_final_teardown(struct domain *d);
 
 /* shadow code to call when log dirty is enabled */
-int shadow_enable_log_dirty(struct domain *d);
+int shadow_enable_log_dirty(struct domain *d, bool_t log_global);
 
 /* shadow code to call when log dirty is disabled */
 int shadow_disable_log_dirty(struct domain *d);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 07:39:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 07:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClS7-00082W-Hh; Mon, 10 Feb 2014 07:38:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WClS6-00082O-An
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 07:38:50 +0000
Received: from [193.109.254.147:7520] by server-15.bemta-14.messagelabs.com id
	7B/F6-10839-90288F25; Mon, 10 Feb 2014 07:38:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392017928!3172284!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24912 invoked from network); 10 Feb 2014 07:38:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 07:38:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 07:38:48 +0000
Message-Id: <52F89013020000780011A952@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 07:38:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <52E7D3BB02000078001179F5@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52F5587A.4010608@terremark.com>
In-Reply-To: <52F5587A.4010608@terremark.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.3.2-rc1 and 4.2.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 23:04, Don Slutz <dslutz@verizon.com> wrote:
> On 01/29/14 06:40, Jan Beulich wrote:
>> All,
>>
>> aiming at releases with, as before, presumably just one more RC on
>> each of them, please test!
> 
> Tested 4.3.2-rc1 on CentOS 5.10 and Fedora 17.
> 
> CentOS 5.10 has a build issue with QEMU:
> 
> http://lists.xen.org/archives/html/xen-devel/2014-02/msg00084.html 

Is this a regression over 4.3.1?

In any event, it would be Stefano to take care of this, just like for
4.4.

Jan

> Has more info, for this testing I changed:
> 
> 
> Author: Don Slutz <dslutz@verizon.com>
> Date:   Fri Jan 31 22:37:04 2014 +0000
> 
>      Work around QEMU bug #1257099 on CentOS 5.10
> 
> diff --git a/tools/Makefile b/tools/Makefile
> index e44a3e9..b411e60 100644
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -187,7 +187,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>                  source=.; \
>          fi; \
>          cd qemu-xen-dir; \
> -       $$source/configure --enable-xen --target-list=i386-softmmu \
> +       $$source/configure --enable-xen --target-list=i386-softmmu --disable-smartcard-nss \
>                  --prefix=$(PREFIX) \
>                  --source-path=$$source \
>                  --extra-cflags="-I$(XEN_ROOT)/tools/include \
> 
> and was able to use the resulting build for some simple testing.  No new 
> issues were found.
>       -Don Slutz
> 
>> Thanks, Jan
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org 
>> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 07:42:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 07:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClV5-0008Uf-5G; Mon, 10 Feb 2014 07:41:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WClV4-0008UY-7J
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 07:41:54 +0000
Received: from [193.109.254.147:35772] by server-6.bemta-14.messagelabs.com id
	06/43-03396-1C288F25; Mon, 10 Feb 2014 07:41:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392018112!3169214!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14482 invoked from network); 10 Feb 2014 07:41:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 07:41:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 07:41:52 +0000
Message-Id: <52F890CD020000780011A95D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 07:41:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52F4CBFD020000780011A2F1@nat28.tlf.novell.com>
	<20140207212724.GD8837@arav-dinar>
In-Reply-To: <20140207212724.GD8837@arav-dinar>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 22:27, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
wrote:
> On Fri, Feb 07, 2014 at 11:05:17AM +0000, Jan Beulich wrote:
>> >>> On 07.02.14 at 01:32, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> 
> wrote:
>> > -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>> > -		v->arch.vmce.bank[1].mci_misc = val; 
>> > -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> > -		break;
>> > -	case MSR_F10_MC4_MISC2: /* Link error type */
>> > -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>> > -		/* ignore write: we do not emulate link and l3 cache errors
>> > -		 * to the guest.
>> > -		 */
>> > -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> > -		break;
>> > -	default:
>> > -		return 0;
>> > -	}
>> > +    /* If not present, #GP fault, else do nothing as we don't emulate */
>> > +    if ( !amd_thresholding_reg_present(msr) )
>> > +        return -1;
>> 
>> The one thing I'm concerned about making this #GP in the guest is
>> migration: With it being _newer_ CPUs implementing fewer of these
>> MSRs, it would be impossible to migrate a guest from an older system
>> to a newer one - a direction that (as long as the newer system
>> provides all the hardware capabilities the older one has) is generally
>> assumed to work. Bottom line - we're probably better off always
>> dropping writes, and always returning zero for reads. Which will
>> eliminate the need for amd_thresholding_reg_present().
>> 
> 
> Before I go ahead and remove the function, few questions-
> 
> Assuming there is a tool in the guest that accesses these MSRs,
> wouldn't it be fair to expect that the tool keep in mind these MSRs
> exist only in certain families?
> 
> For example:
> if there's a guest running on F10 that accesses 0xc000040a, that would
> be fine. But once we migrate to a newer family, then the guest should
> not even generate accesses to the MSR.

All correct, provided the family check and the MSR access aren't
separated by a migration.

> Also, returning #GP to guests would mean keeping it consistent with HW
> behavior. If we return zero for reads, (IMHO) it's not necessarily
> correct information as the register does not even exist.. 
> 
> Bare-metal cases will face same problems too.. but if a register doesn't
> exist, then shouldn't OS/hypervisor just say so and let whoever
> generated the access deal with it?

That's all valid argumentation as long as you leave migration out
of the picture.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 07:58:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 07:58:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClkN-0000cJ-2W; Mon, 10 Feb 2014 07:57:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WClkL-0000cE-A2
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 07:57:41 +0000
Received: from [85.158.139.211:50745] by server-10.bemta-5.messagelabs.com id
	41/5D-08578-37688F25; Mon, 10 Feb 2014 07:57:39 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392019058!2791340!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16150 invoked from network); 10 Feb 2014 07:57:39 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 07:57:39 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392019058; l=1606;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=g9srjzcbzdrUlPf6NT/KrBaZaGY=;
	b=XAcq+QQgZJYHrYIuig/BfER4zY12yq6CN8ahxm3p2eFqqy1klD0YdjYtRRNlK9K6pSv
	FAH8zZJdDWONZnonAoQk/f+6WIdGJSJN9uM7ONGEpppp80hVMmsl+KrpqJCJbG8lXJ/pG
	SBdcyfGiYv+co0wCFlw0zF22JStsp/9pUt8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.25 AUTH) with ESMTPSA id y06a37q1A7vc6V3
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 10 Feb 2014 08:57:38 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id EF27250269; Mon, 10 Feb 2014 08:57:37 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 08:57:34 +0100
Message-Id: <1392019054-23740-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] tools/xend: move assert to exception block
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The two asserts in restore sometimes trigger after hundreds of
migrations. If they trigger, the destination host will not destroy the
newly created, yet empty, guest. After a second migration attempt to
this host there will be two guests with the same name and uuid. This
situation is poorly handled by the xm tools.
With this change the empty guest will be destroyed instead.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---

This is a resend of an old patch, which never made it into the tree:

http://lists.xenproject.org/archives/html/xen-devel/2013-03/msg02550.html
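The failure mode being fixed can be illustrated with a minimal Python sketch (hypothetical names; only the control flow mirrors the patch): an assert that fires before the try block escapes the cleanup handler, while the same assert inside the try block lets the handler run and destroy the empty guest.

```python
# Sketch of restore() before the patch: the assert sits outside the
# try/except, so an AssertionError skips the cleanup entirely.
def restore_old(store_port, actions):
    assert store_port
    try:
        actions.append("create image")
    except Exception:
        actions.append("destroy empty guest")
        raise

# Sketch of restore() after the patch: the assert is inside the try,
# so the except block can destroy the empty guest before re-raising.
def restore_new(store_port, actions):
    try:
        assert store_port
        actions.append("create image")
    except Exception:
        actions.append("destroy empty guest")
        raise

old_actions, new_actions = [], []
try:
    restore_old(0, old_actions)   # simulate the assert firing
except AssertionError:
    pass
try:
    restore_new(0, new_actions)
except AssertionError:
    pass

assert old_actions == []                       # empty guest left behind
assert new_actions == ["destroy empty guest"]  # guest cleaned up
```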

 tools/python/xen/xend/XendCheckpoint.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/python/xen/xend/XendCheckpoint.py b/tools/python/xen/xend/XendCheckpoint.py
index a433ffa..b8caf02 100644
--- a/tools/python/xen/xend/XendCheckpoint.py
+++ b/tools/python/xen/xend/XendCheckpoint.py
@@ -249,9 +249,6 @@ def restore(xd, fd, dominfo = None, paused = False, relocating = False):
     store_port   = dominfo.getStorePort()
     console_port = dominfo.getConsolePort()
 
-    assert store_port
-    assert console_port
-
     # if hvm, pass mem size to calculate the store_mfn
     if is_hvm:
         apic = int(dominfo.info['platform'].get('apic', 0))
@@ -263,6 +260,9 @@ def restore(xd, fd, dominfo = None, paused = False, relocating = False):
         pae  = 0
 
     try:
+        assert store_port
+        assert console_port
+
         restore_image = image.create(dominfo, dominfo.info)
         memory = restore_image.getRequiredAvailableMemory(
             dominfo.info['memory_dynamic_max'] / 1024)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The two asserts in restore() sometimes trigger after hundreds of
migrations. If they trigger, the destination host will not destroy the
newly created, still empty guest. After a second migration attempt to this
host there will be two guests with the same name and UUID, a situation
the xm tools handle poorly.
With this change the empty guest will be destroyed.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---

This is a resend of an old patch, which never made it into the tree:

http://lists.xenproject.org/archives/html/xen-devel/2013-03/msg02550.html

 tools/python/xen/xend/XendCheckpoint.py | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/python/xen/xend/XendCheckpoint.py b/tools/python/xen/xend/XendCheckpoint.py
index a433ffa..b8caf02 100644
--- a/tools/python/xen/xend/XendCheckpoint.py
+++ b/tools/python/xen/xend/XendCheckpoint.py
@@ -249,9 +249,6 @@ def restore(xd, fd, dominfo = None, paused = False, relocating = False):
     store_port   = dominfo.getStorePort()
     console_port = dominfo.getConsolePort()
 
-    assert store_port
-    assert console_port
-
     # if hvm, pass mem size to calculate the store_mfn
     if is_hvm:
         apic = int(dominfo.info['platform'].get('apic', 0))
@@ -263,6 +260,9 @@ def restore(xd, fd, dominfo = None, paused = False, relocating = False):
         pae  = 0
 
     try:
+        assert store_port
+        assert console_port
+
         restore_image = image.create(dominfo, dominfo.info)
         memory = restore_image.getRequiredAvailableMemory(
             dominfo.info['memory_dynamic_max'] / 1024)
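
The reason the asserts must live inside the try block can be sketched with a
minimal Python example (the `destroy` callback stands in for xend's cleanup
path; all names here are illustrative, not xend's real API):

```python
def restore(store_port, console_port, destroy):
    """Sketch: an assert that fires inside the try block reaches the
    exception handler, which can destroy the half-created guest
    instead of leaking it on the destination host."""
    try:
        assert store_port
        assert console_port
        # ... create restore image, read savefile, etc. ...
        return "restored"
    except Exception:
        destroy()   # tear down the empty guest on any failure
        raise

destroyed = []
try:
    # store_port=0 models the rare condition seen after many migrations
    restore(store_port=0, console_port=9600,
            destroy=lambda: destroyed.append(True))
except AssertionError:
    pass
assert destroyed == [True]   # the failed restore cleaned up after itself
```

With the asserts above the try block, the AssertionError would propagate
without ever reaching the cleanup path, which is exactly the leak the patch
fixes.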

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:02:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClpJ-0001ZN-MI; Mon, 10 Feb 2014 08:02:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WClpI-0001ZG-5Q
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 08:02:48 +0000
Received: from [85.158.139.211:25487] by server-3.bemta-5.messagelabs.com id
	83/DE-13671-7A788F25; Mon, 10 Feb 2014 08:02:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392019366!2790374!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4385 invoked from network); 10 Feb 2014 08:02:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:02:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 08:02:46 +0000
Message-Id: <52F895B1020000780011A974@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 08:02:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-2-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-2-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com, patches@linaro.org,
	tim@xen.org, stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC for-4.5 01/12] xen/common: grant-table: only
 call IOMMU if paging mode translate is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
> From Xen's point of view, ARM guests are PV guests with paging auto
> translate enabled.
> 
> When IOMMU support is added for ARM, mapping a grant ref will always
> crash Xen due to the BUG_ON in __gnttab_map_grant_ref.
> 
> On x86:
>     - PV guests always have paging mode translate disabled
>     - PVH and HVM guests always have paging mode translate enabled
> 
> This means we can safely replace the check that the domain is a PV guest
> with a check that the guest has paging mode translate disabled.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> Cc: Keir Fraser <keir@xen.org>
> ---
>  xen/common/grant_table.c |    7 ++-----
>  1 file changed, 2 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 107b000..778bdb7 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -721,12 +721,10 @@ __gnttab_map_grant_ref(
>  
>      double_gt_lock(lgt, rgt);
>  
> -    if ( is_pv_domain(ld) && need_iommu(ld) )
> +    if ( !paging_mode_translate(ld) && need_iommu(ld) )
>      {
>          unsigned int wrc, rdc;
>          int err = 0;
> -        /* Shouldn't happen, because you can't use iommu in a HVM domain. */
> -        BUG_ON(paging_mode_translate(ld));
>          /* We're not translated, so we know that gmfns and mfns are
>             the same things, so the IOMMU entry is always 1-to-1. */
>          mapcount(lgt, rd, frame, &wrc, &rdc);
> @@ -931,11 +929,10 @@ __gnttab_unmap_common(
>              act->pin -= GNTPIN_hstw_inc;
>      }
>  
> -    if ( is_pv_domain(ld) && need_iommu(ld) )
> +    if ( !paging_mode_translate(ld) && need_iommu(ld) )
>      {
>          unsigned int wrc, rdc;
>          int err = 0;
> -        BUG_ON(paging_mode_translate(ld));
>          mapcount(lgt, rd, op->frame, &wrc, &rdc);
>          if ( (wrc + rdc) == 0 )
>              err = iommu_unmap_page(ld, op->frame);
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 
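
The predicate change can be modelled outside Xen (a toy `Domain` type, not
Xen's real data structures): the old `is_pv_domain()` test and the new
`!paging_mode_translate()` test agree for every x86 guest type, but only the
new one keeps ARM guests (PV with auto-translate) away from the 1:1 IOMMU
mapping path:

```python
from collections import namedtuple

# Toy model of the guest types involved; not Xen's real structures.
Domain = namedtuple("Domain", ["is_pv", "paging_mode_translate"])

x86_pv  = Domain(is_pv=True,  paging_mode_translate=False)
x86_hvm = Domain(is_pv=False, paging_mode_translate=True)
x86_pvh = Domain(is_pv=False, paging_mode_translate=True)
arm     = Domain(is_pv=True,  paging_mode_translate=True)  # PV + auto-translate

def old_check(d):   # models: is_pv_domain(ld)
    return d.is_pv

def new_check(d):   # models: !paging_mode_translate(ld)
    return not d.paging_mode_translate

# On x86 the two checks agree for every guest type...
for d in (x86_pv, x86_hvm, x86_pvh):
    assert old_check(d) == new_check(d)

# ...but on ARM the old check would enter the 1:1 IOMMU path and hit
# the (now removed) BUG_ON(paging_mode_translate(ld)); the new one
# correctly skips it.
assert old_check(arm) and not new_check(arm)
```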




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:03:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClpp-0001c8-4N; Mon, 10 Feb 2014 08:03:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WClpn-0001bz-VO
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 08:03:20 +0000
Received: from [85.158.139.211:2337] by server-2.bemta-5.messagelabs.com id
	90/30-23037-7C788F25; Mon, 10 Feb 2014 08:03:19 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392019398!2799465!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1314 invoked from network); 10 Feb 2014 08:03:18 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:03:18 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WClpi-0000Jv-D2; Mon, 10 Feb 2014 08:03:14 +0000
Date: Mon, 10 Feb 2014 09:03:14 +0100
From: Tim Deegan <tim@xen.org>
To: Yang Zhang <yang.z.zhang@intel.com>
Message-ID: <20140210080314.GA758@deinos.phlegethon.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: andrew.cooper3@citrix.com, xiantao.zhang@intel.com, JBeulich@suse.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> When enabling log dirty mode, Xen sets all of the guest's memory to
> read-only. In a HAP-enabled domain it clears the write bit in every EPT
> entry to make sure the memory is read-only. This causes a problem if VT-d
> shares the page table with EPT: the device may issue a DMA write request,
> the VT-d engine will see the target memory as read-only, and the result
> is a VT-d fault.

So that's a problem even if only the VGA framebuffer is being tracked
-- DMA from a passthrough device will either cause a spurious error or
fail to update the dirty bitmap. 

I think it would be better not to allow VT-d and EPT to share
pagetables in cases where devices are passed through (i.e. all cases
where VT-d is in use).

Tim.
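
The failure mode under discussion can be modelled with a toy shared page
table (purely illustrative; none of these names are Xen structures): once
log-dirty clears the write bit in the shared entries, a CPU write takes a
fixable EPT violation, but a device DMA write through the same table simply
faults and never marks the page dirty.

```python
# Toy model: one page table shared between CPU (EPT) and device (VT-d).
shared_pt = {0x1000: {"writable": True}}

def enable_log_dirty(pt):
    # Log-dirty clears the write bit so writes fault and can be logged.
    for entry in pt.values():
        entry["writable"] = False

def cpu_write(pt, gfn, dirty_bitmap):
    if not pt[gfn]["writable"]:
        # The EPT violation is handled: log the page, re-allow the write.
        dirty_bitmap.add(gfn)
        pt[gfn]["writable"] = True
    # write proceeds

def device_dma_write(pt, gfn):
    # VT-d has no equivalent fix-up path: a read-only entry is a fault.
    if not pt[gfn]["writable"]:
        raise RuntimeError("VT-d fault: DMA write to read-only entry")

dirty = set()
enable_log_dirty(shared_pt)
try:
    device_dma_write(shared_pt, 0x1000)
except RuntimeError:
    pass                       # the DMA write is lost, page never logged
assert 0x1000 not in dirty

cpu_write(shared_pt, 0x1000, dirty)
assert 0x1000 in dirty         # the CPU path, by contrast, logs and proceeds
```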

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:04:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClrD-0001lH-Lv; Mon, 10 Feb 2014 08:04:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WClrC-0001l2-VA
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 08:04:47 +0000
Received: from [85.158.139.211:54795] by server-8.bemta-5.messagelabs.com id
	9A/24-05298-D1888F25; Mon, 10 Feb 2014 08:04:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392019484!2762026!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16090 invoked from network); 10 Feb 2014 08:04:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:04:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 08:04:45 +0000
Message-Id: <52F89627020000780011A977@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 08:04:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-4-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-4-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantoa Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 03/12] xen/passthrough: vtd: Don't
 export iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
> iommu_set_pgd is only used internally in
> xen/drivers/passthrough/vtd/iommu.c
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantoa Zhang <xiantao.zhang@intel.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>  xen/include/xen/iommu.h             |    1 -
>  2 files changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
> b/xen/drivers/passthrough/vtd/iommu.c
> index a8d33fc..d5ce5b7 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
>  /*
>   * set VT-d page table directory to EPT table if allowed
>   */
> -void iommu_set_pgd(struct domain *d)
> +static void iommu_set_pgd(struct domain *d)
>  {
>      struct hvm_iommu *hd  = domain_hvm_iommu(d);
>      mfn_t pgd_mfn;
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 8bb0a1d..fcbc432 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, 
> unsigned long mfn,
>                     unsigned int flags);
>  int iommu_unmap_page(struct domain *d, unsigned long gfn);
>  void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int 
> present);
> -void iommu_set_pgd(struct domain *d);
>  void iommu_domain_teardown(struct domain *d);
>  
>  void pt_pci_init(void);
> -- 
> 1.7.10.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:07:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClte-0001wS-8n; Mon, 10 Feb 2014 08:07:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCltc-0001wH-HX
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 08:07:16 +0000
Received: from [85.158.139.211:26541] by server-5.bemta-5.messagelabs.com id
	D6/14-32749-3B888F25; Mon, 10 Feb 2014 08:07:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392019634!2808210!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3785 invoked from network); 10 Feb 2014 08:07:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:07:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 08:07:15 +0000
Message-Id: <52F896BE020000780011A995@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 08:07:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-7-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
> DOM0 on ARM will have the same requirements as DOM0 PVH when the iommu is
> enabled.
> Both PVH and ARM guests have paging mode translate enabled, so Xen can use
> it to know whether it needs to check the requirements.
> 
> Rename the function and remove "pvh" word in the commit message.

s/commit/panic/ ?

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>

Other than the above,
Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/drivers/passthrough/iommu.c |   14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 19b0e23..26a5d91 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
>      return hd->platform_ops->init(d);
>  }
>  
> -static __init void check_dom0_pvh_reqs(struct domain *d)
> +static __init void check_dom0_reqs(struct domain *d)
>  {
> +    if ( !paging_mode_translate(d) )
> +        return;
> +
>      if ( !iommu_enabled )
> -        panic("Presently, iommu must be enabled for pvh dom0\n");
> +        panic("Presently, iommu must be enabled to use dom0 with translate "
> +              "paging mode\n");
>  
>      if ( iommu_passthrough )
> -        panic("For pvh dom0, dom0-passthrough must not be enabled\n");
> +        panic("Dom0 uses translate paging mode, dom0-passthrough must not be "
> +              "enabled\n");
>  
>      iommu_dom0_strict = 1;
>  }
> @@ -145,8 +150,7 @@ void __init iommu_dom0_init(struct domain *d)
>  {
>      struct hvm_iommu *hd = domain_hvm_iommu(d);
>  
> -    if ( is_pvh_domain(d) )
> -        check_dom0_pvh_reqs(d);
> +    check_dom0_reqs(d);
>  
>      if ( !iommu_enabled )
>          return;
> -- 
> 1.7.10.4
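[Editor's note: the control flow of the reworked check can be modelled in a few lines of plain C. This is a minimal sketch, not Xen's real code: the globals and `paging_mode_translate()` below are stand-ins, and panic() is modelled as a negative return value so both failure paths can be exercised.]

```c
#include <stdbool.h>

/* Stand-ins for Xen's command-line globals (illustration only). */
static bool iommu_enabled;
static bool iommu_passthrough;
static bool iommu_dom0_strict;

struct domain { bool translated; };

static bool paging_mode_translate(const struct domain *d)
{
    return d->translated;
}

/* Models check_dom0_reqs(): 0 on success; negative values stand in
 * for the two panic() paths in the patch. */
static int check_dom0_reqs(const struct domain *d)
{
    if ( !paging_mode_translate(d) )
        return 0;               /* classic PV dom0: nothing to enforce */

    if ( !iommu_enabled )
        return -1;              /* panic: iommu required */

    if ( iommu_passthrough )
        return -2;              /* panic: dom0-passthrough forbidden */

    iommu_dom0_strict = true;
    return 0;
}
```

The point of the rework is visible in the first test: a non-translated (classic PV) dom0 skips every requirement, so the same function serves x86 PVH and ARM without an explicit is_pvh_domain() check.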





From xen-devel-bounces@lists.xen.org Mon Feb 10 08:07:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCltw-0001zw-Lj; Mon, 10 Feb 2014 08:07:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiantao.zhang@intel.com>) id 1WCltv-0001ze-Ae
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 08:07:35 +0000
Received: from [85.158.137.68:52955] by server-2.bemta-3.messagelabs.com id
	F2/68-06531-6C888F25; Mon, 10 Feb 2014 08:07:34 +0000
X-Env-Sender: xiantao.zhang@intel.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392019652!746921!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16599 invoked from network); 10 Feb 2014 08:07:33 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-7.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 08:07:33 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 10 Feb 2014 00:07:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,816,1384329600"; d="scan'208";a="452622325"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 10 Feb 2014 00:07:31 -0800
Received: from fmsmsx156.amr.corp.intel.com (10.18.116.74) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 00:07:31 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	fmsmsx156.amr.corp.intel.com (10.18.116.74) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 00:07:31 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Mon, 10 Feb 2014 16:07:29 +0800
From: "Zhang, Xiantao" <xiantao.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Julien Grall <julien.grall@linaro.org>
Thread-Topic: [RFC for-4.5 03/12] xen/passthrough: vtd: Don't export
	iommu_set_pgd
Thread-Index: AQHPJCwfROmkgQcq9EWsqsaWi/wwwpqtnwmAgACGsCA=
Date: Mon, 10 Feb 2014 08:07:29 +0000
Message-ID: <B6C2EB9186482D47BD0C5A9A4834564404EAF377@SHSMSX104.ccr.corp.intel.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-4-git-send-email-julien.grall@linaro.org>
	<52F89627020000780011A977@nat28.tlf.novell.com>
In-Reply-To: <52F89627020000780011A977@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"tim@xen.org" <tim@xen.org>,
	"stefano.stabellini@citrix.com" <stefano.stabellini@citrix.com>,
	"ian.campbell@citrix.com" <ian.campbell@citrix.com>,
	"patches@linaro.org" <patches@linaro.org>
Subject: Re: [Xen-devel] [RFC for-4.5 03/12] xen/passthrough: vtd: Don't
 export iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Acked-by: Xiantao Zhang <Xiantao.zhang@intel.com>
Xiantao
-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Monday, February 10, 2014 4:05 PM
To: Julien Grall
Cc: ian.campbell@citrix.com; stefano.stabellini@citrix.com; Zhang, Xiantao; patches@linaro.org; xen-devel@lists.xenproject.org; tim@xen.org
Subject: Re: [RFC for-4.5 03/12] xen/passthrough: vtd: Don't export iommu_set_pgd

>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
> iommu_set_pgd is only used internally in 
> xen/drivers/passthrough/vtd/iommu.c
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>  xen/include/xen/iommu.h             |    1 -
>  2 files changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index a8d33fc..d5ce5b7 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
>  /*
>   * set VT-d page table directory to EPT table if allowed
>   */
> -void iommu_set_pgd(struct domain *d)
> +static void iommu_set_pgd(struct domain *d)
>  {
>      struct hvm_iommu *hd  = domain_hvm_iommu(d);
>      mfn_t pgd_mfn;
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 8bb0a1d..fcbc432 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
>                     unsigned int flags);
>  int iommu_unmap_page(struct domain *d, unsigned long gfn);
>  void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
> -void iommu_set_pgd(struct domain *d);
>  void iommu_domain_teardown(struct domain *d);
>  
>  void pt_pci_init(void);
> --
> 1.7.10.4





From xen-devel-bounces@lists.xen.org Mon Feb 10 08:11:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:11:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WClxu-0002bp-EV; Mon, 10 Feb 2014 08:11:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WClxt-0002bf-2U
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 08:11:41 +0000
Received: from [85.158.143.35:52973] by server-1.bemta-4.messagelabs.com id
	EC/51-31661-CB988F25; Mon, 10 Feb 2014 08:11:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392019899!4394595!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13008 invoked from network); 10 Feb 2014 08:11:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:11:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 08:11:38 +0000
Message-Id: <52F897C6020000780011A9AA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 08:11:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-8-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-8-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 07/12] xen/passthrough: iommu: Don't
 need to map dom0 page when the PT is shared
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
> Currently iommu_dom0_init browses the page list and calls the map_page
> callback on each page.
> 
> In both the AMD and VT-d drivers, the callback returns immediately if the
> page table is shared with the processor, so Xen can safely avoid running
> through the page list.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/drivers/passthrough/iommu.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 26a5d91..0a26956 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -157,7 +157,7 @@ void __init iommu_dom0_init(struct domain *d)
>  
>      register_keyhandler('o', &iommu_p2m_table);
>      d->need_iommu = !!iommu_dom0_strict;
> -    if ( need_iommu(d) )
> +    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>      {
>          struct page_info *page;
>          unsigned int i = 0;
> -- 
> 1.7.10.4
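[Editor's note: the guarded loop in the patched iommu_dom0_init() can be modelled as below. This is a sketch with stand-in names, not Xen's real API: the per-page mapping only runs when the domain needs an IOMMU *and* the IOMMU does not share the CPU's HAP/EPT page tables.]

```c
#include <stdbool.h>

/* Counts calls that would go to ops->map_page() in the real code. */
static int pages_mapped;

/* Models the patched condition: with a shared HAP/EPT page table the
 * IOMMU already sees the CPU's mappings, so the walk is pointless. */
static void iommu_dom0_map(bool need_iommu, bool use_hap_pt, int nr_pages)
{
    if ( need_iommu && !use_hap_pt )
        for ( int i = 0; i < nr_pages; i++ )
            pages_mapped++;     /* stand-in for mapping one dom0 page */
}
```

On ARM (and on x86 with shared EPT) the second argument is true, so dom0 initialisation no longer walks the whole page list just to have every callback return immediately.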





From xen-devel-bounces@lists.xen.org Mon Feb 10 08:14:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:14:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCm09-0002iG-1z; Mon, 10 Feb 2014 08:14:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCm07-0002iB-JM
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 08:13:59 +0000
Received: from [85.158.139.211:61084] by server-5.bemta-5.messagelabs.com id
	D0/DC-32749-64A88F25; Mon, 10 Feb 2014 08:13:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392020036!2816021!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4458 invoked from network); 10 Feb 2014 08:13:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 08:13:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384300800"; d="scan'208";a="99404149"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 08:13:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 03:13:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WCm03-0005sz-8z;
	Mon, 10 Feb 2014 08:13:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WCm03-0004NQ-7J;
	Mon, 10 Feb 2014 08:13:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24821-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 08:13:55 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24821: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1557458780994054862=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1557458780994054862==
Content-Type: text/plain

flight 24821 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24821/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           3 host-install(3)           broken pass in 24807

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-armhf-armhf-xl           9 guest-start           fail in 24807 never pass

version targeted for testing:
 xen                  4d3ebb84df43d90db4cc25a48f4658709bd11678
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 708 lines long.)


--===============1557458780994054862==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1557458780994054862==--

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:15:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCm1r-0002rN-OQ; Mon, 10 Feb 2014 08:15:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WCm1p-0002rG-IU
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 08:15:45 +0000
Received: from [85.158.137.68:25873] by server-12.bemta-3.messagelabs.com id
	2E/84-01674-0BA88F25; Mon, 10 Feb 2014 08:15:44 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392020142!753112!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4420 invoked from network); 10 Feb 2014 08:15:44 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-16.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 08:15:44 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 10 Feb 2014 00:15:42 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,816,1384329600"; d="scan'208";a="480773311"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 10 Feb 2014 00:15:22 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 00:15:18 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 00:15:18 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Mon, 10 Feb 2014 16:15:16 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Tim Deegan <tim@xen.org>
Thread-Topic: [PATCH] Don't track all memory when enabling log dirty to
	track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGw
Date: Mon, 10 Feb 2014 08:15:16 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
In-Reply-To: <20140210080314.GA758@deinos.phlegethon.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Tim Deegan wrote on 2014-02-10:
> At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>> 
>> When log-dirty mode is enabled, all of the guest's memory is set to
>> read-only. In a HAP-enabled domain, this clears the write bit in all
>> EPT entries to make the memory read-only. This causes a problem when
>> VT-d shares page tables with EPT: a device may issue a DMA write
>> request, the VT-d engine then finds the target memory read-only, and
>> the result is a VT-d fault.
> 
> So that's a problem even if only the VGA framebuffer is being tracked
> -- DMA from a passthrough device will either cause a spurious error or
> fail to update the dirty bitmap.

Do you mean the VGA framebuffer will be used as a DMA buffer in the guest? If so, I think it is the guest's responsibility to ensure that never happens.

> 
> I think it would be better not to allow VT-d and EPT to share
> pagetables in cases where devices are passed through (i.e. all cases where VT-d is in use).

Even if VT-d and EPT do not share page tables, we still cannot track memory updates made by DMA. I think that is the point: memory updates via DMA cannot be tracked at all, so users should enable log-dirty mode carefully. Also, I am not sure whether memory updates from dom0 and QEMU are currently tracked.

> 
> Tim.


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:22:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCm8Y-0003R0-Oj; Mon, 10 Feb 2014 08:22:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCm8X-0003Qv-ET
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 08:22:41 +0000
Received: from [193.109.254.147:41570] by server-2.bemta-14.messagelabs.com id
	87/AE-01236-05C88F25; Mon, 10 Feb 2014 08:22:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392020559!3147462!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26747 invoked from network); 10 Feb 2014 08:22:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:22:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 08:22:39 +0000
Message-Id: <52F89A5A020000780011A9C9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 08:22:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-9-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-9-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
> The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
> functions specific to x86 and PCI.
> 
> Split the framework into 3 distinct files:
>     - iommu.c: contains generic functions shared between x86 and ARM
>                (when it will be supported)
>     - iommu_pci.c: contains specific functions for PCI passthrough
>     - iommu_x86.c: contains specific functions for x86
> 
> iommu_pci.c will only be compiled when PCI is supported by the
> architecture (e.g. HAS_PCI is defined).
> 
> This patch is mostly code movement in new files.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/drivers/passthrough/Makefile    |    6 +-
>  xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
>  xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++

There's xen/drivers/passthrough/pci.c already - any reason not to
move the code there?

>  xen/drivers/passthrough/iommu_x86.c |   65 +++++

Same here for xen/drivers/passthrough/x86/.

> @@ -696,125 +344,6 @@ void iommu_crash_shutdown(void)
>      iommu_enabled = iommu_intremap = 0;
>  }
>  
> -int iommu_do_domctl(
> -    struct xen_domctl *domctl, struct domain *d,
> -    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)

The function itself should probably not be moved out. Either the
PCI-specific pieces of it should be made conditional, or a
descendant function be created. Since (afaict) you'll need all of
the domctl-s (with different arguments) too for pass-through on
ARM - what's your plan for them?

> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1784,31 +1784,31 @@ static int intel_iommu_unmap_page(struct domain *d, 
> unsigned long gfn)
>  
>  void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
>                       int order, int present)
> -{
> -    struct acpi_drhd_unit *drhd;
> -    struct iommu *iommu = NULL;
> -    struct hvm_iommu *hd = domain_hvm_iommu(d);
> -    int flush_dev_iotlb;
> -    int iommu_domid;
> +    {
> +        struct acpi_drhd_unit *drhd;
> +        struct iommu *iommu = NULL;
> +        struct hvm_iommu *hd = domain_hvm_iommu(d);
> +        int flush_dev_iotlb;
> +        int iommu_domid;
>  
> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>  
> -    for_each_drhd_unit ( drhd )
> -    {
> -        iommu = drhd->iommu;
> -        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
> -            continue;
> +        for_each_drhd_unit ( drhd )
> +        {
> +            iommu = drhd->iommu;
> +            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
> +                continue;
>  
> -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> -        iommu_domid= domain_iommu_domid(d, iommu);
> -        if ( iommu_domid == -1 )
> -            continue;
> -        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> -                                   (paddr_t)gfn << PAGE_SHIFT_4K,
> -                                   order, !present, flush_dev_iotlb) )
> -            iommu_flush_write_buffer(iommu);
> +            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
> +            iommu_domid= domain_iommu_domid(d, iommu);
> +            if ( iommu_domid == -1 )
> +                continue;
> +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
> +                                       (paddr_t)gfn << PAGE_SHIFT_4K,
> +                                       order, !present, flush_dev_iotlb) )
> +                iommu_flush_write_buffer(iommu);
> +        }
>      }
> -}

What are these changes to indentation about? Are you
deliberately breaking common rules here, or is this some sort of
unintentional leftover?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 08:53:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 08:53:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmbT-0004ih-2N; Mon, 10 Feb 2014 08:52:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCmbS-0004ic-Bd
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 08:52:34 +0000
Received: from [193.109.254.147:56026] by server-14.bemta-14.messagelabs.com
	id 3C/78-29228-05398F25; Mon, 10 Feb 2014 08:52:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392022352!3180846!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12392 invoked from network); 10 Feb 2014 08:52:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 08:52:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 08:52:34 +0000
Message-Id: <52F8A15A020000780011A9E2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 08:52:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Justin Weaver" <jtweaver@hawaii.edu>
References: <1391911066-2572-1-git-send-email-jtweaver@hawaii.edu>
In-Reply-To: <1391911066-2572-1-git-send-email-jtweaver@hawaii.edu>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Marcus.Granado@eu.citrix.com, george.dunlap@eu.citrix.com,
	dario.faggioli@citrix.com, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, henric@hawaii.edu, juergen.gross@ts.fujitsu.com
Subject: Re: [Xen-devel] [PATCH v3] Xen sched: Fix multiple runqueues in
 credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 09.02.14 at 02:57, Justin Weaver <jtweaver@hawaii.edu> wrote:
> @@ -1959,15 +1961,25 @@ static void init_pcpu(const struct scheduler *ops, int cpu)
>          return;
>      }
>  
> -    /* Figure out which runqueue to put it in */
> +    /*
> +     * Choose which run queue to add cpu to based on its socket.
> +     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
> +     * callback and socket information is not yet available for it).

Did you verify that last part to be the case? Because if so, we would
probably be better off fixing the initialization ordering.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzZ-0005pX-U9; Mon, 10 Feb 2014 09:17:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzX-0005nV-RM
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:28 +0000
Received: from [85.158.137.68:28567] by server-17.bemta-3.messagelabs.com id
	C6/79-22569-62998F25; Mon, 10 Feb 2014 09:17:26 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!5
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13566 invoked from network); 10 Feb 2014 09:17:25 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497757"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8aT030292;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151422-1694491 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:28 +0800
Message-Id: <1392023972-24675-7-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 06/10 V7] remus: implement the API to
	buffer/release packages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch implements two APIs:
1. libxl__remus_netbuf_start_new_epoch()
   It marks a new epoch. The packets sent before this epoch will
   be flushed, and the packets sent after this epoch will be buffered.
   It will be called after the guest is suspended.
2. libxl__remus_netbuf_release_prev_epoch()
   It flushes the buffered packets to the client, and it will be
   called when a checkpoint finishes.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_internal.h    |    6 ++++
 tools/libxl/libxl_netbuffer.c   |   49 +++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c |   14 +++++++++++
 3 files changed, 69 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 4006174..c13296b 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2330,6 +2330,12 @@ _hidden void libxl__remus_teardown_done(libxl__egc *egc,
 _hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
                                           libxl__domain_suspend_state *dss);
 
+_hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                               libxl__remus_state *remus_state);
+
+_hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                                  libxl__remus_state *remus_state);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 2c77076..f358f4b 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -503,6 +503,55 @@ void libxl__remus_netbuf_teardown(libxl__egc *egc,
         libxl__remus_teardown_done(egc, dss);
 }
 
+/* The buffer_op's value, not the value passed to kernel */
+enum {
+    tc_buffer_start,
+    tc_buffer_release
+};
+
+static int remus_netbuf_op(libxl__gc *gc, uint32_t domid,
+                           libxl__remus_state *remus_state,
+                           int buffer_op)
+{
+    int i, ret;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    for (i = 0; i < netbuf_state->num_netbufs; ++i) {
+        if (buffer_op == tc_buffer_start)
+            ret = rtnl_qdisc_plug_buffer(netbuf_state->netbuf_qdisc_list[i]);
+        else
+            ret = rtnl_qdisc_plug_release_one(netbuf_state->netbuf_qdisc_list[i]);
+
+        if (!ret)
+            ret = rtnl_qdisc_add(netbuf_state->nlsock,
+                                 netbuf_state->netbuf_qdisc_list[i],
+                                 NLM_F_REQUEST);
+        if (ret) {
+            LOG(ERROR, "Remus: cannot do netbuf op %s on %s:%s",
+                ((buffer_op == tc_buffer_start) ?
+                 "start_new_epoch" : "release_prev_epoch"),
+                netbuf_state->ifb_list[i], nl_geterror(ret));
+            return ERROR_FAIL;
+        }
+    }
+
+    return 0;
+}
+
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_start);
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_release);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index 559d0a6..92f35bc 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -33,6 +33,20 @@ void libxl__remus_netbuf_teardown(libxl__egc *egc,
 {
 }
 
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmza-0005pu-CJ; Mon, 10 Feb 2014 09:17:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzY-0005nn-82
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:28 +0000
Received: from [85.158.143.35:21721] by server-2.bemta-4.messagelabs.com id
	5A/43-10891-72998F25; Mon, 10 Feb 2014 09:17:27 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392023841!4445500!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19680 invoked from network); 10 Feb 2014 09:17:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-13.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497758"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H89p030290;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151425-1694493 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:30 +0800
Message-Id: <1392023972-24675-9-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:19,
	Serialize complete at 2014/02/10 17:15:19
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 08/10 V7] libxl: rename remus_failover_cb() to
	remus_replication_failure_cb()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Failover means the machine on which the primary VM is running is
down, and we need to start the secondary VM to take over from it.
remus_failover_cb() is called when Remus replication fails, not when
we need to do failover, so rename it to remus_replication_failure_cb().

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c |   12 +++++++-----
 1 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 83d3772..70e34c0 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -702,8 +702,9 @@ out:
     return ptr;
 }
 
-static void remus_failover_cb(libxl__egc *egc,
-                              libxl__domain_suspend_state *dss, int rc);
+static void remus_replication_failure_cb(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss,
+                                         int rc);
 
 /* TODO: Explicit Checkpoint acknowledgements via recv_fd. */
 int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
@@ -722,7 +723,7 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     GCNEW(dss);
     dss->ao = ao;
-    dss->callback = remus_failover_cb;
+    dss->callback = remus_replication_failure_cb;
     dss->domid = domid;
     dss->fd = send_fd;
     /* TODO do something with recv_fd */
@@ -769,8 +770,9 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     return AO_ABORT(rc);
 }
 
-static void remus_failover_cb(libxl__egc *egc,
-                              libxl__domain_suspend_state *dss, int rc)
+static void remus_replication_failure_cb(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss,
+                                         int rc)
 {
     STATE_AO_GC(dss->ao);
     /*
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzR-0005lx-1P; Mon, 10 Feb 2014 09:17:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzQ-0005ls-3L
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:20 +0000
Received: from [85.158.137.68:23579] by server-3.bemta-3.messagelabs.com id
	68/D8-14520-F1998F25; Mon, 10 Feb 2014 09:17:19 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11880 invoked from network); 10 Feb 2014 09:17:17 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497749"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bD030288;
	Mon, 10 Feb 2014 17:17:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151391-1694485 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:22 +0800
Message-Id: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:16,
	Serialize complete at 2014/02/10 17:15:16
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 00/10 V7] Remus/Libxl: Network buffering support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series adds support for network buffering to the Remus code
in libxl.

Changes in V7:
  Addressed the remaining review comments from Ian Jackson.
  Addressed Shriram's comments.

  Merged the tangled network-buffering setup/teardown code into one
  patch.  (2/6/8 in V6 => 5 in V7; 9/10 in V6 => 7 in V7)

Changes in V6:
  Addressed Ian Jackson's comments on the V5 series.
  [PATCH 2/4 V5] has been split into smaller, self-contained patches.

  [PATCH 4/4 V5] --> [PATCH 13/13]: network buffering is now enabled
  by default.

Changes in V5:

Merge hotplug script patch (2/5) and hotplug script setup/teardown
patch (3/5) into a single patch.

Changes in V4:

[1/5] Remove check for libnl command line utils in autoconf checks

[2/5] minor nits

[3/5] define LIBXL_HAVE_REMUS_NETBUF in libxl.h

[4/5] clean ups. Make the usleep in checkpoint callback asynchronous

[5/5] minor nits

Changes in V3:
[1/5] Fix redundant checks in configure scripts
      (based on Ian Campbell's suggestions)

[2/5] Introduce locking in the script, during IFB setup.
      Add xenstore paths used by netbuf scripts
      to xenstore-paths.markdown

[3/5] Hotplug script setup/teardown invocations are now asynchronous,
      following Ian Jackson's feedback.  However, the invocations are
      still sequential.

[5/5] Allow per-domain specification of netbuffer scripts in the
      xl remus command.

And minor nits throughout the series, based on feedback on the
previous version.

Changes in V2:
[1/5] Configure script will automatically enable/disable network
      buffer support depending on the availability of the appropriate
      libnl3 version. [If libnl3 is unavailable, a warning message will be
      printed to let the user know that the feature has been disabled.]

      use macros from pkg.m4 instead of pkg-config commands
      removed redundant checks for libnl3 libraries.

[3,4/5] - Minor nits.

Version 1:

[1/5] Changes to autoconf scripts to check for libnl3. Add linker flags
      to libxl Makefile.

[2/5] External script to setup/teardown network buffering using libnl3's
      CLI. This script will be invoked by libxl before starting Remus.
      The script's main job is to bring up an IFB device with plug qdisc
      attached to it.  It then re-routes egress traffic from the guest's
      vif to the IFB device.

[3/5] Libxl code to invoke the external setup script, followed by netlink
      related setup to obtain a handle on the output buffers attached
      to each vif.

[4/5] Libxl interaction with network buffer module in the kernel via
      libnl3 API.

[5/5] xl cmdline switch to explicitly enable network buffering when
      starting remus.
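
The checkpoint-time buffer/release cycle that [4/5] implements over the
libnl3 API is not shown in this summary; its effect can be sketched with
libnl's CLI.  The snippet below only prints the nl-qdisc-add invocations
it would run (the real commands need root and a live IFB device, the
device name ifb0 is illustrative, and the exact plug-module flag
spellings should be treated as assumptions):

```shell
#!/bin/sh
# Sketch of the plug-qdisc control cycle used during Remus checkpoints.
# These helpers only print the nl-qdisc-add invocations; the real
# commands need root and an existing IFB device ("ifb0" is illustrative).
IFB=${IFB:-ifb0}

# Start of a new epoch: insert a plug so packets generated from now on
# are held back until the matching checkpoint is acknowledged.
plug_buffer() {
    echo nl-qdisc-add --dev="$IFB" --parent root --update plug --buffer
}

# Checkpoint acknowledged by the backup: release everything that was
# queued before the most recent plug.
plug_release() {
    echo nl-qdisc-add --dev="$IFB" --parent root --update plug --release-one
}

plug_buffer
plug_release
```

In the series itself, libxl performs the equivalent operations through
the libnl3 C API rather than the CLI.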


  A few things to note (by Shriram):

    a) Based on previous email discussions, the setup/teardown task has
       been moved to a hotplug-style shell script that can be customized
       as desired, instead of being implemented as C code inside libxl.

    b) libnl3 is not available on NetBSD, nor on CentOS (Linux), so
       network buffering support has been made an optional feature that
       can be disabled if desired.

    c) Since NetBSD lacks libnl3, the setup script is installed under
       the tools/hotplug/Linux folder.

thanks
Lai



Shriram Rajagopalan (8):
  remus: add libnl3 dependency to autoconf scripts
  tools/libxl: update libxl_domain_remus_info
  tools/libxl: introduce a new structure libxl__remus_state
  remus: introduce a function to check whether network buffering is
    enabled
  remus: Remus network buffering core and APIs to setup/teardown
  remus: implement the APIs to buffer/release packages
  libxl: use the APIs to setup/teardown network buffering
  libxl: rename remus_failover_cb() to remus_replication_failure_cb()
  libxl: control network buffering in remus callbacks
  libxl: network buffering cmdline switch

 README                                 |    4 +
 config/Tools.mk.in                     |    3 +
 docs/man/xl.conf.pod.5                 |    6 +
 docs/man/xl.pod.1                      |   11 +-
 docs/misc/xenstore-paths.markdown      |    4 +
 tools/configure.ac                     |   15 +
 tools/hotplug/Linux/Makefile           |    1 +
 tools/hotplug/Linux/remus-netbuf-setup |  183 +++++++++++
 tools/libxl/Makefile                   |   11 +
 tools/libxl/libxl.c                    |   48 ++-
 tools/libxl/libxl.h                    |   13 +
 tools/libxl/libxl_dom.c                |  118 ++++++--
 tools/libxl/libxl_internal.h           |   54 +++-
 tools/libxl/libxl_netbuffer.c          |  561 ++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c        |   56 ++++
 tools/libxl/libxl_remus.c              |   64 ++++
 tools/libxl/libxl_types.idl            |    2 +
 tools/libxl/xl.c                       |    4 +
 tools/libxl/xl.h                       |    1 +
 tools/libxl/xl_cmdimpl.c               |   28 ++-
 tools/libxl/xl_cmdtable.c              |    3 +
 tools/remus/README                     |    6 +
 22 files changed, 1155 insertions(+), 41 deletions(-)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c
 create mode 100644 tools/libxl/libxl_remus.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzY-0005oD-Dl; Mon, 10 Feb 2014 09:17:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzW-0005mn-FE
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:26 +0000
Received: from [85.158.143.35:31395] by server-3.bemta-4.messagelabs.com id
	87/BA-11539-52998F25; Mon, 10 Feb 2014 09:17:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392023840!4434687!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29314 invoked from network); 10 Feb 2014 09:17:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497752"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8pI030289;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151420-1694490 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:27 +0800
Message-Id: <1392023972-24675-6-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 05/10 V7] remus: Remus network buffering core
	and APIs to setup/teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch introduces the remus-netbuf-setup hotplug script, which is
responsible for setting up and tearing down the infrastructure required
for network output buffering in Remus.  The script is intended to be
invoked by libxl for each guest interface when starting or stopping
Remus.

Apart from returning a success/failure indication via the usual hotplug
entries in xenstore, the script also writes to xenstore the name of the
IFB device to be used to control the vif's network output.

The script relies on the libnl3 command-line utilities to perform the
various setup/teardown functions.  It is confined to Linux, since
NetBSD does not have libnl3.

The following steps are taken during setup:
 a) Call the hotplug script for each vif to set up its network buffer.

 b) Establish a dedicated Remus context containing libnl-related
    state (netlink sockets, qdisc caches, etc.).

 c) Obtain handles to the plug qdiscs installed on the IFB devices
    chosen by the hotplug scripts.

During teardown, the netlink resources are released, and the hotplug
scripts are then invoked to remove the IFB devices.
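
The contract between libxl and the script in step (a) -- environment
variables in, IFB name out via XENBUS_PATH/ifb -- can be sketched as
follows.  This is a plain-shell sketch: the function merely formats the
invocation that exec_netbuf_script builds, it does not run the script,
and the domid/devid/vif names are illustrative:

```shell
#!/bin/sh
# Sketch of the env/arg contract between libxl and remus-netbuf-setup
# (cf. exec_netbuf_script).  The function only formats the invocation;
# libxl actually forks the script via libxl__ev_child_fork under a
# hotplug timeout.
netbuf_script_cmdline() {
    op=$1 domid=$2 devid=$3 vifname=$4 ifb=$5
    env="vifname=$vifname XENBUS_PATH=/libxl/$domid/remus/netbuf/$devid"
    # The IFB name is only passed in on teardown; on setup the script
    # chooses one itself and stores it at XENBUS_PATH/ifb.
    if [ "$op" = teardown ]; then
        env="$env IFB=$ifb"
    fi
    echo "$env remus-netbuf-setup $op"
}

netbuf_script_cmdline setup 1 0 vif1.0
netbuf_script_cmdline teardown 1 0 vif1.0 ifb0
```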

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/misc/xenstore-paths.markdown      |    4 +
 tools/hotplug/Linux/Makefile           |    1 +
 tools/hotplug/Linux/remus-netbuf-setup |  183 ++++++++++++
 tools/libxl/Makefile                   |    2 +
 tools/libxl/libxl_dom.c                |    7 +-
 tools/libxl/libxl_internal.h           |   17 ++
 tools/libxl/libxl_netbuffer.c          |  481 ++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c        |   11 +
 tools/libxl/libxl_remus.c              |   41 +++
 9 files changed, 742 insertions(+), 5 deletions(-)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
 create mode 100644 tools/libxl/libxl_remus.c

diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 70ab7f4..7a0d2c9 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
 
 The device model version for a domain.
 
+#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
+
+IFB device used by Remus to buffer network output from the associated vif.
+
 [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
 [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
 [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
diff --git a/tools/hotplug/Linux/Makefile b/tools/hotplug/Linux/Makefile
index 47655f6..6139c1f 100644
--- a/tools/hotplug/Linux/Makefile
+++ b/tools/hotplug/Linux/Makefile
@@ -16,6 +16,7 @@ XEN_SCRIPTS += network-nat vif-nat
 XEN_SCRIPTS += vif-openvswitch
 XEN_SCRIPTS += vif2
 XEN_SCRIPTS += vif-setup
+XEN_SCRIPTS-$(CONFIG_REMUS_NETBUF) += remus-netbuf-setup
 XEN_SCRIPTS += block
 XEN_SCRIPTS += block-enbd block-nbd
 XEN_SCRIPTS-$(CONFIG_BLKTAP1) += blktap
diff --git a/tools/hotplug/Linux/remus-netbuf-setup b/tools/hotplug/Linux/remus-netbuf-setup
new file mode 100644
index 0000000..3467db2
--- /dev/null
+++ b/tools/hotplug/Linux/remus-netbuf-setup
@@ -0,0 +1,183 @@
+#!/bin/bash
+#============================================================================
+# ${XEN_SCRIPT_DIR}/remus-netbuf-setup
+#
+# Script for attaching a network buffer to the specified vif (in any mode).
+# The hotplugging system will call this script when starting remus via libxl
+# API, libxl_domain_remus_start.
+#
+# Usage:
+# remus-netbuf-setup (setup|teardown)
+#
+# Environment vars:
+# vifname     vif interface name (required).
+# XENBUS_PATH path in Xenstore, where the IFB device details will be stored
+#                      or read from (required).
+#             (libxl passes /libxl/<domid>/remus/netbuf/<devid>)
+# IFB         ifb interface to be cleaned up (required). [for teardown op only]
+
+# Written to the store: (setup operation)
+# XENBUS_PATH/ifb=<ifbdevName> the IFB device serving
+#  as the intermediate buffer through which the interface's network output
+#  can be controlled.
+#
+# To install a network buffer on a guest vif (vif1.0) using ifb (ifb0)
+# we need to do the following
+#
+#  ip link set dev ifb0 up
+#  tc qdisc add dev vif1.0 ingress
+#  tc filter add dev vif1.0 parent ffff: proto ip \
+#    prio 10 u32 match u32 0 0 action mirred egress redirect dev ifb0
+#  nl-qdisc-add --dev=ifb0 --parent root plug
+#  nl-qdisc-add --dev=ifb0 --parent root --update plug --limit=10000000
+#                                                (10MB limit on buffer)
+#
+# So the order of operations when installing a network buffer on vif1.0 is:
+# 1. find a free ifb and bring up the device
+# 2. redirect traffic from vif1.0 to ifb:
+#   2.1 add ingress qdisc to vif1.0 (to capture outgoing packets from guest)
+#   2.2 use tc filter command with actions mirred egress + redirect
+# 3. install plug_qdisc on ifb device, with which we can buffer/release
+#    guest's network output from vif1.0
+#
+#
+
+#============================================================================
+
+# Unlike other vif scripts, vif-common is not needed here, as it executes
+# vif-specific setup code such as renaming.
+dir=$(dirname "$0")
+. "$dir/xen-hotplug-common.sh"
+
+findCommand "$@"
+
+if [ "$command" != "setup" -a  "$command" != "teardown" ]
+then
+  echo "Invalid command: $command"
+  log err "Invalid command: $command"
+  exit 1
+fi
+
+evalVariables "$@"
+
+: ${vifname:?}
+: ${XENBUS_PATH:?}
+
+check_libnl_tools() {
+    if ! command -v nl-qdisc-list > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-list tool"
+    fi
+    if ! command -v nl-qdisc-add > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-add tool"
+    fi
+    if ! command -v nl-qdisc-delete > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-delete tool"
+    fi
+}
+
+# We only check for modules; we don't load them.
+# The user/admin is expected to load the ifb module at boot time,
+# ensuring that there are enough free IFBs in the system.
+# Other modules will be loaded automatically by tc commands.
+check_modules() {
+    for m in ifb sch_plug sch_ingress act_mirred cls_u32
+    do
+        if ! modinfo $m > /dev/null 2>&1; then
+            fatal "Unable to find $m kernel module"
+        fi
+    done
+}
+
+setup_ifb() {
+
+    for ifb in `ifconfig -a -s|egrep ^ifb|cut -d ' ' -f1`
+    do
+        local installed=`nl-qdisc-list -d $ifb`
+        [ -n "$installed" ] && continue
+        IFB="$ifb"
+        break
+    done
+
+    if [ -z "$IFB" ]
+    then
+        fatal "Unable to find a free IFB device for $vifname"
+    fi
+
+    do_or_die ip link set dev "$IFB" up
+}
+
+redirect_vif_traffic() {
+    local vif=$1
+    local ifb=$2
+
+    do_or_die tc qdisc add dev "$vif" ingress
+
+    tc filter add dev "$vif" parent ffff: proto ip prio 10 \
+        u32 match u32 0 0 action mirred egress redirect dev "$ifb" >/dev/null 2>&1
+
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to redirect traffic from $vif to $ifb"
+    fi
+}
+
+add_plug_qdisc() {
+    local vif=$1
+    local ifb=$2
+
+    nl-qdisc-add --dev="$ifb" --parent root plug >/dev/null 2>&1
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to add plug qdisc to $ifb"
+    fi
+
+    # Set the IFB buffering limit in bytes. It's okay if this command fails.
+    nl-qdisc-add --dev="$ifb" --parent root \
+        --update plug --limit=10000000 >/dev/null 2>&1
+}
+
+teardown_netbuf() {
+    local vif=$1
+    local ifb=$2
+
+    if [ "$ifb" ]; then
+        do_without_error ip link set dev "$ifb" down
+        do_without_error nl-qdisc-delete --dev="$ifb" --parent root plug >/dev/null 2>&1
+        xenstore-rm -t "$XENBUS_PATH/ifb" 2>/dev/null || true
+    fi
+    do_without_error tc qdisc del dev "$vif" ingress
+    xenstore-rm -t "$XENBUS_PATH/hotplug-status" 2>/dev/null || true
+}
+
+xs_write_failed() {
+    local vif=$1
+    local ifb=$2
+    teardown_netbuf "$vifname" "$IFB"
+    fatal "failed to write ifb name to xenstore"
+}
+
+case "$command" in
+    setup)
+        check_libnl_tools
+        check_modules
+
+        claim_lock "pickifb"
+        setup_ifb
+        redirect_vif_traffic "$vifname" "$IFB"
+        add_plug_qdisc "$vifname" "$IFB"
+        release_lock "pickifb"
+
+        # Not using xenstore_write, which automatically exits on error,
+        # because we need to clean up on failure.
+        _xenstore_write "$XENBUS_PATH/ifb" "$IFB" || xs_write_failed "$vifname" "$IFB"
+        success
+        ;;
+    teardown)
+        : ${IFB:?}
+        teardown_netbuf "$vifname" "$IFB"
+        ;;
+esac
+
+log debug "Successful remus-netbuf-setup $command for $vifname, ifb $IFB."
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 84a467c..218f55e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -52,6 +52,8 @@ else
 LIBXL_OBJS-y += libxl_nonetbuffer.o
 endif
 
+LIBXL_OBJS-y += libxl_remus.o
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 8d63f90..e3e9f6f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
 
 /*==================== Domain suspend (save) ====================*/
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc);
-
 /*----- complicated callback, called by xc_domain_save -----*/
 
 /*
@@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     dss->save_dm_callback(egc, dss, our_rc);
 }
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc)
+void domain_suspend_done(libxl__egc *egc,
+                         libxl__domain_suspend_state *dss, int rc)
 {
     STATE_AO_GC(dss->ao);
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2f64382..4006174 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2313,6 +2313,23 @@ typedef struct libxl__remus_state {
 
 _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
 
+_hidden void domain_suspend_done(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss,
+                                 int rc);
+
+_hidden void libxl__remus_setup_done(libxl__egc *egc,
+                                     libxl__domain_suspend_state *dss,
+                                     int rc);
+
+_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
+                                       libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_teardown_done(libxl__egc *egc,
+                                        libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                          libxl__domain_suspend_state *dss);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 8e23d75..2c77076 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -17,11 +17,492 @@
 
 #include "libxl_internal.h"
 
+#include <netlink/cache.h>
+#include <netlink/socket.h>
+#include <netlink/attr.h>
+#include <netlink/route/link.h>
+#include <netlink/route/route.h>
+#include <netlink/route/qdisc.h>
+#include <netlink/route/qdisc/plug.h>
+
+typedef struct libxl__remus_netbuf_state {
+    struct rtnl_qdisc **netbuf_qdisc_list;
+    struct nl_sock *nlsock;
+    struct nl_cache *qdisc_cache;
+    const char **vif_list;
+    const char **ifb_list;
+    uint32_t num_netbufs;
+    uint32_t unused;
+} libxl__remus_netbuf_state;
+
 int libxl__netbuffer_enabled(libxl__gc *gc)
 {
     return 1;
 }
 
+/* If the device has a vifname, then use that instead of
+ * the vifX.Y format.
+ */
+static const char *get_vifname(libxl__gc *gc, uint32_t domid,
+                               libxl_device_nic *nic)
+{
+    const char *vifname = NULL;
+    const char *path;
+    int rc;
+
+    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
+                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
+    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
+    if (!rc && !vifname) {
+        /* use the default name */
+        vifname = libxl__device_nic_devname(gc, domid,
+                                            nic->devid,
+                                            nic->nictype);
+    }
+
+    return vifname;
+}
+
+static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
+                                       int *num_vifs)
+{
+    libxl_device_nic *nics = NULL;
+    int nb, i = 0;
+    const char **vif_list = NULL;
+
+    *num_vifs = 0;
+    nics = libxl_device_nic_list(CTX, domid, &nb);
+    if (!nics)
+        return NULL;
+
+    /* Ensure that none of the vifs are backed by driver domains */
+    for (i = 0; i < nb; i++) {
+        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            const char *vifname = get_vifname(gc, domid, &nics[i]);
+
+            if (!vifname)
+              vifname = "(unknown)";
+            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
+                "Network buffering is not supported with driver domains",
+                vifname, nics[i].backend_domid);
+            *num_vifs = -1;
+            goto out;
+        }
+    }
+
+    GCNEW_ARRAY(vif_list, nb);
+    for (i = 0; i < nb; ++i) {
+        vif_list[i] = get_vifname(gc, domid, &nics[i]);
+        if (!vif_list[i]) {
+            vif_list = NULL;
+            goto out;
+        }
+    }
+    *num_vifs = nb;
+
+ out:
+    for (i = 0; i < nb; i++)
+        libxl_device_nic_dispose(&nics[i]);
+    free(nics);
+    return vif_list;
+}
+
+static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
+{
+    int i;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* free qdiscs */
+    for (i = 0; i < netbuf_state->num_netbufs; i++) {
+        qdisc = netbuf_state->netbuf_qdisc_list[i];
+        if (!qdisc)
+            break;
+
+        nl_object_put((struct nl_object *)qdisc);
+        netbuf_state->netbuf_qdisc_list[i] = NULL;
+    }
+
+    /* free qdisc cache */
+    if (netbuf_state->qdisc_cache) {
+      nl_cache_clear(netbuf_state->qdisc_cache);
+      nl_cache_free(netbuf_state->qdisc_cache);
+      netbuf_state->qdisc_cache = NULL;
+    }
+
+    /* close & free nlsock */
+    if (netbuf_state->nlsock) {
+      nl_close(netbuf_state->nlsock);
+      nl_socket_free(netbuf_state->nlsock);
+      netbuf_state->nlsock = NULL;
+    }
+}
+
+static int init_qdiscs(libxl__gc *gc,
+                       libxl__remus_state *remus_state)
+{
+    int i, ret, ifindex;
+    struct rtnl_link *ifb = NULL;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
+    const int num_netbufs = netbuf_state->num_netbufs;
+    const char ** const ifb_list = netbuf_state->ifb_list;
+
+    /* Now that we have brought up the IFB devices with a plug qdisc
+     * for each vif, let's get a netlink handle on each plug qdisc for
+     * use during checkpointing.
+     */
+    netbuf_state->nlsock = nl_socket_alloc();
+    if (!netbuf_state->nlsock) {
+        LOG(ERROR, "cannot allocate nl socket");
+        goto out;
+    }
+
+    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
+    if (ret) {
+        LOG(ERROR, "failed to open netlink socket: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* get list of all qdiscs installed on network devs. */
+    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
+                                 &netbuf_state->qdisc_cache);
+    if (ret) {
+        LOG(ERROR, "failed to allocate qdisc cache: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* list of handles to plug qdiscs */
+    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
+
+    for (i = 0; i < num_netbufs; ++i) {
+
+        /* get a handle to the IFB interface */
+        ifb = NULL;
+        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
+                                   ifb_list[i], &ifb);
+        if (ret) {
+            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
+                nl_geterror(ret));
+            goto out;
+        }
+
+        ifindex = rtnl_link_get_ifindex(ifb);
+        if (!ifindex) {
+            LOG(ERROR, "interface %s has no index", ifb_list[i]);
+            goto out;
+        }
+
+        /* Get a reference to the root qdisc installed on the IFB, by
+         * querying the qdisc list we obtained earlier. The netbufscript
+         * sets up the plug qdisc as the root qdisc, so we don't have to
+         * search the entire qdisc tree on the IFB dev.
+         *
+         * There is no need to explicitly free this qdisc, as it's just a
+         * reference from the qdisc cache we allocated earlier.
+         */
+        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
+                                         TC_H_ROOT);
+
+        if (qdisc) {
+            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
+            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
+            if (!tc_kind || strcmp(tc_kind, "plug")) {
+                nl_object_put((struct nl_object *)qdisc);
+                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
+                goto out;
+            }
+            netbuf_state->netbuf_qdisc_list[i] = qdisc;
+        } else {
+            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
+            goto out;
+        }
+        rtnl_link_put(ifb);
+    }
+
+    return 0;
+
+ out:
+    if (ifb)
+        rtnl_link_put(ifb);
+    free_qdiscs(netbuf_state);
+    return ERROR_FAIL;
+}
+
+static void netbuf_setup_timeout_cb(libxl__egc *egc,
+                                    libxl__ev_time *ev,
+                                    const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+    assert(libxl__ev_child_inuse(&remus_state->child));
+
+    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
+        remus_state->netbufscript, vif);
+
+    if (kill(remus_state->child.pid, SIGKILL)) {
+        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
+              remus_state->netbufscript,
+              (unsigned long)remus_state->child.pid);
+    }
+
+    return;
+}
+
+/* the script needs the following env & args
+ * $vifname
+ * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
+ * $IFB (for teardown)
+ * setup/teardown as command line arg.
+ * In return, the script writes the name of IFB device (during setup) to be
+ * used for output buffering into XENBUS_PATH/ifb
+ */
+static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
+                              char *op, libxl__ev_child_callback *death)
+{
+    int arraysize, nr = 0;
+    char **env = NULL, **args = NULL;
+    pid_t pid;
+
+    /* Convenience aliases */
+    libxl__ev_child *const child = &remus_state->child;
+    libxl__ev_time *const timeout = &remus_state->timeout;
+    char *const script = libxl__strdup(gc, remus_state->netbufscript);
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char *const ifb = netbuf_state->ifb_list[devid];
+
+    arraysize = 7;
+    GCNEW_ARRAY(env, arraysize);
+    env[nr++] = "vifname";
+    env[nr++] = libxl__strdup(gc, vif);
+    env[nr++] = "XENBUS_PATH";
+    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
+                          libxl__xs_libxl_path(gc, domid), devid);
+    if (!strcmp(op, "teardown")) {
+        env[nr++] = "IFB";
+        env[nr++] = libxl__strdup(gc, ifb);
+    }
+    env[nr++] = NULL;
+    assert(nr <= arraysize);
+
+    arraysize = 3; nr = 0;
+    GCNEW_ARRAY(args, arraysize);
+    args[nr++] = script;
+    args[nr++] = op;
+    args[nr++] = NULL;
+    assert(nr == arraysize);
+
+    /* Set hotplug timeout */
+    if (libxl__ev_time_register_rel(gc, timeout,
+                                    netbuf_setup_timeout_cb,
+                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
+        LOG(ERROR, "unable to register timeout for "
+            "netbuf setup script %s on vif %s", script, vif);
+        return ERROR_FAIL;
+    }
+
+    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
+        script, op, vif);
+
+    /* Fork and exec netbuf script */
+    pid = libxl__ev_child_fork(gc, child, death);
+    if (pid == -1) {
+        LOG(ERROR, "unable to fork netbuf script %s", script);
+        return ERROR_FAIL;
+    }
+
+    if (!pid) {
+        /* child: Launch netbuf script */
+        libxl__exec(gc, -1, -1, -1, args[0], args, env);
+        /* notreached */
+        abort();
+    }
+
+    return 0;
+}
+
+static void netbuf_setup_script_cb(libxl__egc *egc,
+                                   libxl__ev_child *child,
+                                   pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+    const char *out_path_base, *hotplug_error = NULL;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char **const ifb = &netbuf_state->ifb_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    out_path_base = GCSPRINTF("%s/remus/netbuf/%d",
+                              libxl__xs_libxl_path(gc, domid), devid);
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/hotplug-error", out_path_base),
+                                &hotplug_error);
+    if (rc)
+        goto out;
+
+    if (hotplug_error) {
+        LOG(ERROR, "netbuf script %s setup failed for vif %s: %s",
+            remus_state->netbufscript,
+            netbuf_state->vif_list[devid], hotplug_error);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/remus/netbuf/%d/ifb",
+                                          libxl__xs_libxl_path(gc, domid),
+                                          devid),
+                                ifb);
+    if (rc)
+        goto out;
+
+    if (!(*ifb)) {
+        LOG(ERROR, "Cannot get ifb dev name for domain %u dev %s",
+            domid, vif);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    LOG(DEBUG, "%s will buffer packets from vif %s", *ifb, vif);
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        rc = exec_netbuf_script(gc, remus_state,
+                                "setup", netbuf_setup_script_cb);
+        if (rc)
+            goto out;
+
+        return;
+    }
+
+    rc = init_qdiscs(gc, remus_state);
+ out:
+    libxl__remus_setup_done(egc, remus_state->dss, rc);
+}
+
+/* Scan through the list of vifs belonging to domid and invoke
+ * netbufscript to set up the IFB device and plug qdisc for each vif.
+ * Then scan through the list of IFB devices to obtain a handle on the
+ * plug qdisc installed on these IFB devices. Network output buffering
+ * is controlled via these qdiscs.
+ */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+    libxl__remus_netbuf_state *netbuf_state = NULL;
+    int num_netbufs = 0;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = dss->domid;
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    GCNEW(netbuf_state);
+    netbuf_state->vif_list = get_guest_vif_list(gc, domid, &num_netbufs);
+    if (!num_netbufs) {
+        rc = 0;
+        goto out;
+    }
+
+    if (num_netbufs < 0) goto out;
+
+    GCNEW_ARRAY(netbuf_state->ifb_list, num_netbufs);
+    netbuf_state->num_netbufs = num_netbufs;
+    remus_state->netbuf_state = netbuf_state;
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "setup",
+                           netbuf_setup_script_cb))
+        goto out;
+    return;
+
+ out:
+    libxl__remus_setup_done(egc, dss, rc);
+}
+
+static void netbuf_teardown_script_cb(libxl__egc *egc,
+                                      libxl__ev_child *child,
+                                      pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+    }
+
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        if (exec_netbuf_script(gc, remus_state,
+                               "teardown", netbuf_teardown_script_cb))
+            goto out;
+        return;
+    }
+
+ out:
+    libxl__remus_teardown_done(egc, remus_state->dss);
+}
+
+/* Note: This function will be called in the same gc context as
+ * libxl__remus_netbuf_setup, created during the libxl_domain_remus_start
+ * API call.
+ */
+void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                  libxl__domain_suspend_state *dss)
+{
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    STATE_AO_GC(dss->ao);
+
+    free_qdiscs(netbuf_state);
+
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "teardown",
+                           netbuf_teardown_script_cb))
+        libxl__remus_teardown_done(egc, dss);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index 6aa4bf1..559d0a6 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -22,6 +22,17 @@ int libxl__netbuffer_enabled(libxl__gc *gc)
     return 0;
 }
 
+/* Remus network buffer related stubs */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+}
+
+void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                  libxl__domain_suspend_state *dss)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
new file mode 100644
index 0000000..4e40412
--- /dev/null
+++ b/tools/libxl/libxl_remus.c
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2014
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+/*----- remus setup/teardown code -----*/
+
+void libxl__remus_setup_done(libxl__egc *egc,
+                             libxl__domain_suspend_state *dss,
+                             int rc)
+{
+    STATE_AO_GC(dss->ao);
+    if (!rc) {
+        libxl__domain_suspend(egc, dss);
+        return;
+    }
+
+    LOG(ERROR, "Remus: failed to setup network buffering"
+        " for guest with domid %u", dss->domid);
+    domain_suspend_done(egc, dss, rc);
+}
+
+void libxl__remus_teardown_done(libxl__egc *egc,
+                                libxl__domain_suspend_state *dss)
+{
+    dss->callback(egc, dss, dss->remus_state->saved_rc);
+}
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzW-0005n8-FX; Mon, 10 Feb 2014 09:17:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzV-0005mZ-Kh
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:25 +0000
Received: from [85.158.143.35:31267] by server-1.bemta-4.messagelabs.com id
	48/A3-31661-42998F25; Mon, 10 Feb 2014 09:17:24 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392023841!4445500!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18772 invoked from network); 10 Feb 2014 09:17:22 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-13.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497753"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H89o030290;
	Mon, 10 Feb 2014 17:17:11 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151397-1694488 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:25 +0800
Message-Id: <1392023972-24675-4-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/10 V7] tools/libxl: introduce a new structure
	libxl__remus_state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl_domain_remus_info contains only the arguments of the 'xl remus'
command, so introduce a new structure, libxl__remus_state, to hold the
Remus runtime state.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |   25 +++++++++++++++++++++++--
 tools/libxl/libxl_dom.c      |   12 ++++--------
 tools/libxl/libxl_internal.h |   22 ++++++++++++++++++++--
 3 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..25af816 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -729,11 +729,32 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     dss->type = type;
     dss->live = 1;
     dss->debug = 0;
-    dss->remus = info;
 
     assert(info);
 
-    /* TBD: Remus setup - i.e. attach qdisc, enable disk buffering, etc */
+    GCNEW(dss->remus_state);
+
+    /* convenience shorthand */
+    libxl__remus_state *remus_state = dss->remus_state;
+    remus_state->blackhole = info->blackhole;
+    remus_state->interval = info->interval;
+    remus_state->compression = info->compression;
+    remus_state->dss = dss;
+    libxl__ev_child_init(&remus_state->child);
+
+    /* TODO: enable disk buffering */
+
+    /* Setup network buffering */
+    if (info->netbuf) {
+        if (info->netbufscript) {
+            remus_state->netbufscript =
+                libxl__strdup(gc, info->netbufscript);
+        } else {
+            remus_state->netbufscript =
+                GCSPRINTF("%s/remus-netbuf-setup",
+                          libxl__xen_script_dir_path());
+        }
+    }
 
     /* Point of no return */
     libxl__domain_suspend(egc, dss);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..8d63f90 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1290,7 +1290,7 @@ static void remus_checkpoint_dm_saved(libxl__egc *egc,
     /* REMUS TODO: Wait for disk and memory ack, release network buffer */
     /* REMUS TODO: make this asynchronous */
     assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->interval * 1000);
+    usleep(dss->remus_state->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
@@ -1308,7 +1308,6 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     const libxl_domain_type type = dss->type;
     const int live = dss->live;
     const int debug = dss->debug;
-    const libxl_domain_remus_info *const r_info = dss->remus;
     libxl__srm_save_autogen_callbacks *const callbacks =
         &dss->shs.callbacks.save.a;
 
@@ -1343,11 +1342,8 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     dss->guest_responded = 0;
     dss->dm_savefile = libxl__device_model_savefile(gc, domid);
 
-    if (r_info != NULL) {
-        dss->interval = r_info->interval;
-        if (r_info->compression)
-            dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
-    }
+    if (dss->remus_state && dss->remus_state->compression)
+        dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
 
     dss->xce = xc_evtchn_open(NULL, 0);
     if (dss->xce == NULL)
@@ -1366,7 +1362,7 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     }
 
     memset(callbacks, 0, sizeof(*callbacks));
-    if (r_info != NULL) {
+    if (dss->remus_state != NULL) {
         callbacks->suspend = libxl__remus_domain_suspend_callback;
         callbacks->postcopy = libxl__remus_domain_resume_callback;
         callbacks->checkpoint = libxl__remus_domain_checkpoint_callback;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..9970780 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2292,6 +2292,25 @@ typedef struct libxl__logdirty_switch {
     libxl__ev_time timeout;
 } libxl__logdirty_switch;
 
+typedef struct libxl__remus_state {
+    /* filled by the user */
+    /* checkpoint interval */
+    int interval;
+    int blackhole;
+    int compression;
+    /* Script to setup/teardown network buffers */
+    const char *netbufscript;
+    libxl__domain_suspend_state *dss;
+
+    /* private */
+    int saved_rc;
+    int dev_id;
+    /* Opaque context containing network buffer related stuff */
+    void *netbuf_state;
+    libxl__ev_time timeout;
+    libxl__ev_child child;
+} libxl__remus_state;
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
@@ -2302,7 +2321,7 @@ struct libxl__domain_suspend_state {
     libxl_domain_type type;
     int live;
     int debug;
-    const libxl_domain_remus_info *remus;
+    libxl__remus_state *remus_state;
     /* private */
     xc_evtchn *xce; /* event channel handle */
     int suspend_eventchn;
@@ -2310,7 +2329,6 @@ struct libxl__domain_suspend_state {
     int xcflags;
     int guest_responded;
     const char *dm_savefile;
-    int interval; /* checkpoint interval (for Remus) */
     libxl__save_helper_state shs;
     libxl__logdirty_switch logdirty;
     /* private for libxl__domain_save_device_model */
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzX-0005nt-Tb; Mon, 10 Feb 2014 09:17:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzV-0005ma-QN
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:26 +0000
Received: from [85.158.137.68:28287] by server-11.bemta-3.messagelabs.com id
	B1/88-04255-42998F25; Mon, 10 Feb 2014 09:17:24 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!4
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13231 invoked from network); 10 Feb 2014 09:17:23 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497754"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:25 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bE030288;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151399-1694489 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:26 +0800
Message-Id: <1392023972-24675-5-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 04/10 V7] remus: introduce a function to check
	whether network buffering is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl__netbuffer_enabled() returns 1 when network buffering support is
compiled in, and 0 when it is not.

If network buffering is not compiled in but the user requests it, report
an error and exit.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/Makefile            |    7 +++++++
 tools/libxl/libxl.c             |    5 +++++
 tools/libxl/libxl_internal.h    |    2 ++
 tools/libxl/libxl_netbuffer.c   |   31 +++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c |   31 +++++++++++++++++++++++++++++++
 5 files changed, 76 insertions(+), 0 deletions(-)
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index da27c84..84a467c 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -45,6 +45,13 @@ LIBXL_OBJS-y += libxl_blktap2.o
 else
 LIBXL_OBJS-y += libxl_noblktap2.o
 endif
+
+ifeq ($(CONFIG_REMUS_NETBUF),y)
+LIBXL_OBJS-y += libxl_netbuffer.o
+else
+LIBXL_OBJS-y += libxl_nonetbuffer.o
+endif
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 25af816..026206a 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -746,6 +746,11 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     /* Setup network buffering */
     if (info->netbuf) {
+        if (!libxl__netbuffer_enabled(gc)) {
+            LOG(ERROR, "Remus: No support for network buffering");
+            goto out;
+        }
+
         if (info->netbufscript) {
             remus_state->netbufscript =
                 libxl__strdup(gc, info->netbufscript);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9970780..2f64382 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2311,6 +2311,8 @@ typedef struct libxl__remus_state {
     libxl__ev_child child;
 } libxl__remus_state;
 
+_hidden int libxl__netbuffer_enabled(libxl__gc *gc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
new file mode 100644
index 0000000..8e23d75
--- /dev/null
+++ b/tools/libxl/libxl_netbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
new file mode 100644
index 0000000..6aa4bf1
--- /dev/null
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzV-0005mb-QC; Mon, 10 Feb 2014 09:17:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzT-0005mB-LJ
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:23 +0000
Received: from [85.158.137.68:18929] by server-13.bemta-3.messagelabs.com id
	CD/C5-26923-22998F25; Mon, 10 Feb 2014 09:17:22 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12845 invoked from network); 10 Feb 2014 09:17:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497750"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8pH030289;
	Mon, 10 Feb 2014 17:17:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151392-1694486 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:23 +0800
Message-Id: <1392023972-24675-2-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:16,
	Serialize complete at 2014/02/10 17:15:16
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 01/10 V7] remus: add libnl3 dependency to
	autoconf scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libnl3 is required for controlling Remus network buffering. This patch
adds a dependency on libnl3 (>= 3.2.8) to the autoconf scripts. It also
provides the ability to configure the tools without libnl3 support,
i.e. without network buffering support.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 README               |    4 ++++
 config/Tools.mk.in   |    3 +++
 tools/configure.ac   |   15 +++++++++++++++
 tools/libxl/Makefile |    2 ++
 tools/remus/README   |    6 ++++++
 5 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/README b/README
index 4148a26..7bb25fb 100644
--- a/README
+++ b/README
@@ -72,6 +72,10 @@ disabled at compile time:
     * cmake (if building vtpm stub domains)
     * markdown
     * figlet (for generating the traditional Xen start of day banner)
+    * Development install of libnl3 (e.g., libnl-3-200,
+      libnl-3-dev, etc).  Required if network buffering is desired
+      when using Remus with libxl.  See tools/remus/README for detailed
+      information.
 
 Second, you need to acquire a suitable kernel for use in domain 0. If
 possible you should use a kernel provided by your OS distributor. If
diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index d9d3239..81802b3 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -38,6 +38,8 @@ PTHREAD_LIBS        := @PTHREAD_LIBS@
 
 PTYFUNCS_LIBS       := @PTYFUNCS_LIBS@
 
+LIBNL3_LIBS         := @LIBNL3_LIBS@
+LIBNL3_CFLAGS       := @LIBNL3_CFLAGS@
 # Download GIT repositories via HTTP or GIT's own protocol?
 # GIT's protocol is faster and more robust, when it works at all (firewalls
 # may block it). We make it the default, but if your GIT repository downloads
@@ -56,6 +58,7 @@ CONFIG_QEMU_TRAD    := @qemu_traditional@
 CONFIG_QEMU_XEN     := @qemu_xen@
 CONFIG_XEND         := @xend@
 CONFIG_BLKTAP1      := @blktap1@
+CONFIG_REMUS_NETBUF := @remus_netbuf@
 
 #System options
 ZLIB                := @zlib@
diff --git a/tools/configure.ac b/tools/configure.ac
index 0754f0e..f95956d 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -236,6 +236,21 @@ esac
 # Checks for header files.
 AC_CHECK_HEADERS([yajl/yajl_version.h sys/eventfd.h])
 
+# Check for libnl3 >= 3.2.8. If present, enable Remus network buffering.
+PKG_CHECK_MODULES(LIBNL3, [libnl-3.0 >= 3.2.8 libnl-route-3.0 >= 3.2.8],
+		[libnl3_lib="y"], [libnl3_lib="n"])
+
+AS_IF([test "x$libnl3_lib" = "xn" ], [
+	    AC_MSG_WARN([Disabling support for Remus network buffering.
+	    Please install libnl3 libraries, command line tools and devel
+	    headers - version 3.2.8 or higher])
+	    AC_SUBST(remus_netbuf, [n])
+	    ],[
+	    AC_SUBST(LIBNL3_LIBS)
+	    AC_SUBST(LIBNL3_CFLAGS)
+	    AC_SUBST(remus_netbuf, [y])
+])
+
 AC_OUTPUT()
 
 AS_IF([test "x$xend" = "xy" ], [
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..da27c84 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -21,11 +21,13 @@ endif
 
 LIBXL_LIBS =
 LIBXL_LIBS = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libblktapctl) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
+LIBXL_LIBS += $(LIBNL3_LIBS)
 
 CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
 CFLAGS_LIBXL += $(CFLAGS_libxenguest)
 CFLAGS_LIBXL += $(CFLAGS_libxenstore)
 CFLAGS_LIBXL += $(CFLAGS_libblktapctl) 
+CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
 CFLAGS_LIBXL += -Wshadow
 
 LIBXL_LIBS-$(CONFIG_ARM) += -lfdt
diff --git a/tools/remus/README b/tools/remus/README
index 9e8140b..4736252 100644
--- a/tools/remus/README
+++ b/tools/remus/README
@@ -2,3 +2,9 @@ Remus provides fault tolerance for virtual machines by sending continuous
 checkpoints to a backup, which will activate if the target VM fails.
 
 See the website at http://nss.cs.ubc.ca/remus/ for details.
+
+Using Remus with libxl on Xen 4.4 and higher:
+ To enable network buffering, you need libnl3 version 3.2.8
+ or higher, along with its development headers and command line utilities.
+ If your distro does not ship an appropriate libnl3 version, you can find
+ the latest source tarball of libnl3 at http://www.carisma.slowglass.com/~tgr/libnl/
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmza-0005qU-UW; Mon, 10 Feb 2014 09:17:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzY-0005o6-L4
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:28 +0000
Received: from [85.158.143.35:21776] by server-3.bemta-4.messagelabs.com id
	C9/CA-11539-82998F25; Mon, 10 Feb 2014 09:17:28 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392023840!4434687!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30778 invoked from network); 10 Feb 2014 09:17:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497759"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:27 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8pJ030289;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151424-1694492 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:29 +0800
Message-Id: <1392023972-24675-8-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:18,
	Serialize complete at 2014/02/10 17:15:18
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 07/10 V7] libxl: use the API to setup/teardown
	network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

If a network buffering hotplug script is present, call
libxl__remus_netbuf_setup() to set up network buffering and
libxl__remus_netbuf_teardown() to tear it down.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |    6 +-----
 tools/libxl/libxl_dom.c      |   11 +++++++++++
 tools/libxl/libxl_internal.h |    7 +++++++
 tools/libxl/libxl_remus.c    |   23 +++++++++++++++++++++++
 4 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 026206a..83d3772 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -762,7 +762,7 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     }
 
     /* Point of no return */
-    libxl__domain_suspend(egc, dss);
+    libxl__remus_setup_initiate(egc, dss);
     return AO_INPROGRESS;
 
  out:
@@ -778,10 +778,6 @@ static void remus_failover_cb(libxl__egc *egc,
      * backup died or some network error occurred preventing us
      * from sending checkpoints.
      */
-
-    /* TBD: Remus cleanup - i.e. detach qdisc, release other
-     * resources.
-     */
     libxl__ao_complete(egc, ao, rc);
 }
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e3e9f6f..912a6e4 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1519,6 +1519,17 @@ void domain_suspend_done(libxl__egc *egc,
     if (dss->xce != NULL)
         xc_evtchn_close(dss->xce);
 
+    if (dss->remus_state) {
+        /*
+         * With Remus, if we reach this point, it means either
+         * backup died or some network error occurred preventing us
+         * from sending checkpoints. Teardown the network buffers and
+         * release netlink resources.  This is an async op.
+         */
+        libxl__remus_teardown_initiate(egc, dss, rc);
+        return;
+    }
+
     dss->callback(egc, dss, rc);
 }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c13296b..1bd2bba 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2336,6 +2336,13 @@ _hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
 _hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
                                                   libxl__remus_state *remus_state);
 
+_hidden void libxl__remus_setup_initiate(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                            libxl__domain_suspend_state *dss,
+                                            int rc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index 4e40412..cdc1c16 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -19,6 +19,16 @@
 
 /*----- remus setup/teardown code -----*/
 
+void libxl__remus_setup_initiate(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss)
+{
+    libxl__ev_time_init(&dss->remus_state->timeout);
+    if (!dss->remus_state->netbufscript)
+        libxl__remus_setup_done(egc, dss, 0);
+    else
+        libxl__remus_netbuf_setup(egc, dss);
+}
+
 void libxl__remus_setup_done(libxl__egc *egc,
                              libxl__domain_suspend_state *dss,
                              int rc)
@@ -34,6 +44,19 @@ void libxl__remus_setup_done(libxl__egc *egc,
     domain_suspend_done(egc, dss, rc);
 }
 
+void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                    libxl__domain_suspend_state *dss,
+                                    int rc)
+{
+    /* stash rc somewhere before invoking teardown ops. */
+    dss->remus_state->saved_rc = rc;
+
+    if (!dss->remus_state->netbuf_state)
+        libxl__remus_teardown_done(egc, dss);
+    else
+        libxl__remus_netbuf_teardown(egc, dss);
+}
+
 void libxl__remus_teardown_done(libxl__egc *egc,
                                 libxl__domain_suspend_state *dss)
 {
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzZ-0005ow-2G; Mon, 10 Feb 2014 09:17:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzX-0005nJ-6L
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:27 +0000
Received: from [85.158.143.35:31479] by server-2.bemta-4.messagelabs.com id
	B1/43-10891-62998F25; Mon, 10 Feb 2014 09:17:26 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392023840!4434687!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29907 invoked from network); 10 Feb 2014 09:17:25 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497756"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bG030288;
	Mon, 10 Feb 2014 17:17:13 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151428-1694495 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:32 +0800
Message-Id: <1392023972-24675-11-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:19,
	Serialize complete at 2014/02/10 17:15:19
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 10/10 V7] libxl: network buffering cmdline switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add a command line switch to the 'xl remus' command to control network
buffering, and pass the flag on to libxl so that it can act accordingly.
Also update the man pages to reflect the new 'xl remus' option.

Note: network buffering is enabled by default. To disable it, use the
-n option.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/man/xl.conf.pod.5    |    6 ++++++
 docs/man/xl.pod.1         |   11 ++++++++++-
 tools/libxl/xl.c          |    4 ++++
 tools/libxl/xl.h          |    1 +
 tools/libxl/xl_cmdimpl.c  |   28 ++++++++++++++++++++++------
 tools/libxl/xl_cmdtable.c |    3 +++
 6 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.conf.pod.5 b/docs/man/xl.conf.pod.5
index 7c43bde..8ae19bb 100644
--- a/docs/man/xl.conf.pod.5
+++ b/docs/man/xl.conf.pod.5
@@ -105,6 +105,12 @@ Configures the default gateway device to set for virtual network devices.
 
 Default: C<None>
 
+=item B<remus.default.netbufscript="PATH">
+
+Configures the default script used by Remus to setup network buffering.
+
+Default: C</etc/xen/scripts/remus-netbuf-setup>
+
 =item B<output_format="json|sxp">
 
 Configures the default output format used by xl when printing "machine
diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..3c5f246 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -399,7 +399,7 @@ Enable Remus HA for domain. By default B<xl> relies on ssh as a transport
 mechanism between the two hosts.
 
 N.B: Remus support in xl is still in experimental (proof-of-concept) phase.
-     There is no support for network or disk buffering at the moment.
+     There is no support for disk buffering at the moment.
 
 B<OPTIONS>
 
@@ -418,6 +418,15 @@ Generally useful for debugging.
 
 Disable memory checkpoint compression.
 
+=item B<-n>
+
+Disable network output buffering.
+
+=item B<-N> I<netbufscript>
+
+Use <netbufscript> to set up network buffering instead of the default
+(/etc/xen/scripts/remus-netbuf-setup).
+
 =item B<-s> I<sshcommand>
 
 Use <sshcommand> instead of ssh.  String will be passed to sh.
diff --git a/tools/libxl/xl.c b/tools/libxl/xl.c
index 657610b..e02a618 100644
--- a/tools/libxl/xl.c
+++ b/tools/libxl/xl.c
@@ -46,6 +46,7 @@ char *default_vifscript = NULL;
 char *default_bridge = NULL;
 char *default_gatewaydev = NULL;
 char *default_vifbackend = NULL;
+char *default_remus_netbufscript = NULL;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
 int claim_mode = 1;
 
@@ -177,6 +178,9 @@ static void parse_global_config(const char *configfile,
     if (!xlu_cfg_get_long (config, "claim_mode", &l, 0))
         claim_mode = l;
 
+    xlu_cfg_replace_string (config, "remus.default.netbufscript",
+                            &default_remus_netbufscript, 0);
+
     xlu_cfg_destroy(config);
 }
 
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..d991fd3 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -153,6 +153,7 @@ extern char *default_vifscript;
 extern char *default_bridge;
 extern char *default_gatewaydev;
 extern char *default_vifbackend;
+extern char *default_remus_netbufscript;
 extern char *blkdev_start;
 
 enum output_format {
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..6d41775 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7265,8 +7265,9 @@ int main_remus(int argc, char **argv)
     r_info.interval = 200;
     r_info.blackhole = 0;
     r_info.compression = 1;
+    r_info.netbuf = 1;
 
-    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
+    SWITCH_FOREACH_OPT(opt, "buni:s:N:e", NULL, "remus", 2) {
     case 'i':
         r_info.interval = atoi(optarg);
         break;
@@ -7276,6 +7277,12 @@ int main_remus(int argc, char **argv)
     case 'u':
         r_info.compression = 0;
         break;
+    case 'n':
+        r_info.netbuf = 0;
+        break;
+    case 'N':
+        r_info.netbufscript = optarg;
+        break;
     case 's':
         ssh_command = optarg;
         break;
@@ -7287,6 +7294,9 @@ int main_remus(int argc, char **argv)
     domid = find_domain(argv[optind]);
     host = argv[optind + 1];
 
+    if (!r_info.netbufscript)
+        r_info.netbufscript = default_remus_netbufscript;
+
     if (r_info.blackhole) {
         send_fd = open("/dev/null", O_RDWR, 0644);
         if (send_fd < 0) {
@@ -7324,13 +7334,19 @@ int main_remus(int argc, char **argv)
     /* Point of no return */
     rc = libxl_domain_remus_start(ctx, &r_info, domid, send_fd, recv_fd, 0);
 
-    /* If we are here, it means backup has failed/domain suspend failed.
-     * Try to resume the domain and exit gracefully.
-     * TODO: Split-Brain check.
+    /* Check if the domain still exists; the user may have destroyed
+     * it with xl to force failover
      */
-    fprintf(stderr, "remus sender: libxl_domain_suspend failed"
-            " (rc=%d)\n", rc);
+    if (libxl_domain_info(ctx, 0, domid)) {
+        fprintf(stderr, "Remus: Primary domain has been destroyed.\n");
+        close(send_fd);
+        return 0;
+    }
 
+    /* If we are here, it means remus setup/domain suspend/backup has
+     * failed. Try to resume the domain and exit gracefully.
+     * TODO: Split-Brain check.
+     */
     if (rc == ERROR_GUEST_TIMEDOUT)
         fprintf(stderr, "Failed to suspend domain at primary.\n");
     else {
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..9b7104c 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
       "-i MS                   Checkpoint domain memory every MS milliseconds (def. 200ms).\n"
       "-b                      Replicate memory checkpoints to /dev/null (blackhole)\n"
       "-u                      Disable memory checkpoint compression.\n"
+      "-n                      Disable network output buffering.\n"
      "-N <netbufscript>       Use netbufscript to set up network buffering instead of\n"
      "                        the default (/etc/xen/scripts/remus-netbuf-setup).\n"
       "-s <sshcommand>         Use <sshcommand> instead of ssh.  String will be passed\n"
       "                        to sh. If empty, run <host> instead of \n"
       "                        ssh <host> xl migrate-receive -r [-e]\n"
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzT-0005mD-DP; Mon, 10 Feb 2014 09:17:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzR-0005m4-SU
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:22 +0000
Received: from [85.158.137.68:38990] by server-3.bemta-3.messagelabs.com id
	97/E8-14520-12998F25; Mon, 10 Feb 2014 09:17:21 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12477 invoked from network); 10 Feb 2014 09:17:19 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497751"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8aS030292;
	Mon, 10 Feb 2014 17:17:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151394-1694487 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:24 +0800
Message-Id: <1392023972-24675-3-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:16,
	Serialize complete at 2014/02/10 17:15:16
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/10 V7] tools/libxl: update
	libxl_domain_remus_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add two members:
1. netbuf: whether network buffering is enabled
2. netbufscript: the path of the script that will be run to set up
     and tear down buffering on the guest's interface.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.h         |   13 +++++++++++++
 tools/libxl/libxl_types.idl |    2 ++
 2 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..d89ad0a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_REMUS_NETBUF 1
+ *
+ * If this is defined, then the libxl_domain_remus_info structure will
+ * have a boolean field (netbuf) and a string field (netbufscript).
+ *
+ * netbuf, if true, indicates that network buffering should be enabled.
+ *
+ * netbufscript, if set, indicates the path to the hotplug script to
+ * setup or teardown network buffers.
+ */
+#define LIBXL_HAVE_REMUS_NETBUF 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..e49945a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -561,6 +561,8 @@ libxl_domain_remus_info = Struct("domain_remus_info",[
     ("interval",     integer),
     ("blackhole",    bool),
     ("compression",  bool),
+    ("netbuf",       bool),
+    ("netbufscript", string),
     ])
 
 libxl_event_type = Enumeration("event_type", [
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzZ-0005pF-GQ; Mon, 10 Feb 2014 09:17:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzX-0005nT-I8
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:27 +0000
Received: from [85.158.143.35:21484] by server-3.bemta-4.messagelabs.com id
	F7/BA-11539-52998F25; Mon, 10 Feb 2014 09:17:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392023841!4445500!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19315 invoked from network); 10 Feb 2014 09:17:24 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-13.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497755"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bF030288;
	Mon, 10 Feb 2014 17:17:13 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151427-1694494 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:31 +0800
Message-Id: <1392023972-24675-10-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:18,
	Serialize complete at 2014/02/10 17:15:18
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 09/10 V7] libxl: control network buffering in
	remus callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch constitutes the core network buffering logic
and does the following:
 a) create a new network buffer when the domain is suspended
    (remus_domain_suspend_callback)
 b) release the previous network buffer pertaining to the
    committed checkpoint (remus_domain_checkpoint_dm_saved)

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_dom.c |   90 ++++++++++++++++++++++++++++++++++++++++++----
 1 files changed, 82 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 912a6e4..a4ffdfd 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1243,8 +1243,30 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
 
 static int libxl__remus_domain_suspend_callback(void *data)
 {
-    /* REMUS TODO: Issue disk and network checkpoint reqs. */
-    return libxl__domain_suspend_common_callback(data);
+    libxl__save_helper_state *shs = data;
+    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
+
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    /* REMUS TODO: Issue disk checkpoint reqs. */
+    int ok = libxl__domain_suspend_common_callback(data);
+
+    if (!remus_state->netbuf_state || !ok) goto out;
+
+    /* The domain was suspended successfully. Start a new network
+     * buffer for the next epoch. If this operation fails, then act
+     * as though domain suspend failed -- libxc exits its infinite
+     * loop and ultimately, the replication stops.
+     */
+    if (libxl__remus_netbuf_start_new_epoch(gc, dss->domid,
+                                            remus_state))
+        ok = 0;
+
+ out:
+    return ok;
 }
 
 static int libxl__remus_domain_resume_callback(void *data)
@@ -1257,7 +1279,7 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* REMUS TODO: Deal with disk. Start a new network output buffer */
+    /* REMUS TODO: Deal with disk. */
     return 1;
 }
 
@@ -1266,11 +1288,17 @@ static int libxl__remus_domain_resume_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc);
 
+static void remus_next_checkpoint(libxl__egc *egc, libxl__ev_time *ev,
+                                  const struct timeval *requested_abs);
+
 static void libxl__remus_domain_checkpoint_callback(void *data)
 {
     libxl__save_helper_state *shs = data;
     libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
-    libxl__egc *egc = dss->shs.egc;
+
+    /* Convenience aliases */
+    libxl__egc *const egc = dss->shs.egc;
+
     STATE_AO_GC(dss->ao);
 
     /* This would go into tailbuf. */
@@ -1284,10 +1312,56 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc)
 {
-    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
-    /* REMUS TODO: make this asynchronous */
-    assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->remus_state->interval * 1000);
+    /* Convenience aliases */
+    /*
+     * REMUS TODO: Wait for disk and explicit memory ack (through restore
+     * callback from remote) before releasing network buffer.
+     */
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    if (rc) {
+        LOG(ERROR, "Failed to save device model. Terminating Remus..");
+        goto out;
+    }
+
+    if (remus_state->netbuf_state) {
+        rc = libxl__remus_netbuf_release_prev_epoch(gc, dss->domid,
+                                                    remus_state);
+        if (rc) {
+            LOG(ERROR, "Failed to release network buffer."
+                " Terminating Remus..");
+            goto out;
+        }
+    }
+
+    /* Set checkpoint interval timeout */
+    rc = libxl__ev_time_register_rel(gc, &remus_state->timeout,
+                                     remus_next_checkpoint,
+                                     dss->remus_state->interval);
+    if (rc) {
+        LOG(ERROR, "unable to register timeout for next epoch."
+            " Terminating Remus..");
+        goto out;
+    }
+    return;
+
+ out:
+    libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 0);
+}
+
+static void remus_next_checkpoint(libxl__egc *egc, libxl__ev_time *ev,
+                                  const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    libxl__domain_suspend_state *const dss = remus_state->dss;
+
+    STATE_AO_GC(dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzX-0005nt-Tb; Mon, 10 Feb 2014 09:17:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzV-0005ma-QN
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:26 +0000
Received: from [85.158.137.68:28287] by server-11.bemta-3.messagelabs.com id
	B1/88-04255-42998F25; Mon, 10 Feb 2014 09:17:24 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!4
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13231 invoked from network); 10 Feb 2014 09:17:23 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497754"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:25 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bE030288;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151399-1694489 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:26 +0800
Message-Id: <1392023972-24675-5-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 04/10 V7] remus: introduce a function to check
	whether network buffering is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl__netbuffer_enabled() returns 1 when network buffering support is
compiled in, and 0 when it is not.

If network buffering support is not compiled in but the user asks for
it, report an error and exit.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/Makefile            |    7 +++++++
 tools/libxl/libxl.c             |    5 +++++
 tools/libxl/libxl_internal.h    |    2 ++
 tools/libxl/libxl_netbuffer.c   |   31 +++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c |   31 +++++++++++++++++++++++++++++++
 5 files changed, 76 insertions(+), 0 deletions(-)
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index da27c84..84a467c 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -45,6 +45,13 @@ LIBXL_OBJS-y += libxl_blktap2.o
 else
 LIBXL_OBJS-y += libxl_noblktap2.o
 endif
+
+ifeq ($(CONFIG_REMUS_NETBUF),y)
+LIBXL_OBJS-y += libxl_netbuffer.o
+else
+LIBXL_OBJS-y += libxl_nonetbuffer.o
+endif
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 25af816..026206a 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -746,6 +746,11 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     /* Setup network buffering */
     if (info->netbuf) {
+        if (!libxl__netbuffer_enabled(gc)) {
+            LOG(ERROR, "Remus: No support for network buffering");
+            goto out;
+        }
+
         if (info->netbufscript) {
             remus_state->netbufscript =
                 libxl__strdup(gc, info->netbufscript);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 9970780..2f64382 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2311,6 +2311,8 @@ typedef struct libxl__remus_state {
     libxl__ev_child child;
 } libxl__remus_state;
 
+_hidden int libxl__netbuffer_enabled(libxl__gc *gc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
new file mode 100644
index 0000000..8e23d75
--- /dev/null
+++ b/tools/libxl/libxl_netbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
new file mode 100644
index 0000000..6aa4bf1
--- /dev/null
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2013
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+int libxl__netbuffer_enabled(libxl__gc *gc)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzR-0005lx-1P; Mon, 10 Feb 2014 09:17:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzQ-0005ls-3L
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:20 +0000
Received: from [85.158.137.68:23579] by server-3.bemta-3.messagelabs.com id
	68/D8-14520-F1998F25; Mon, 10 Feb 2014 09:17:19 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11880 invoked from network); 10 Feb 2014 09:17:17 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497749"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bD030288;
	Mon, 10 Feb 2014 17:17:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151391-1694485 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:22 +0800
Message-Id: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:16,
	Serialize complete at 2014/02/10 17:15:16
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 00/10 V7] Remus/Libxl: Network buffering support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch series adds support for network buffering in the Remus
codebase in libxl. 

Changes in V7:
  Applied missing comments (by IanJ).
  Applied Shriram's comments.

  Merged the tangled network buffering setup/teardown code into one
  patch. (2/6/8 in V6 => 5 in V7; 9/10 in V6 => 7 in V7)

Changes in V6:
  Applied Ian Jackson's comments on the V5 series.
  [PATCH 2/4 V5] has been split into smaller functional pieces.

  [PATCH 4/4 V5] --> [PATCH 13/13]: netbuffer is enabled by default.

Changes in V5:

Merge hotplug script patch (2/5) and hotplug script setup/teardown
patch (3/5) into a single patch.

Changes in V4:

[1/5] Remove check for libnl command line utils in autoconf checks

[2/5] minor nits

[3/5] define LIBXL_HAVE_REMUS_NETBUF in libxl.h

[4/5] clean ups. Make the usleep in checkpoint callback asynchronous

[5/5] minor nits

Changes in V3:
[1/5] Fix redundant checks in configure scripts
      (based on Ian Campbell's suggestions)

[2/5] Introduce locking in the script, during IFB setup.
      Add xenstore paths used by netbuf scripts
      to xenstore-paths.markdown

[3/5] Hotplug scripts setup/teardown invocations are now asynchronous
      following IanJ's feedback.  However, the invocations are still
      sequential. 

[5/5] Allow per-domain specification of netbuffer scripts in the
      xl remus command.

And minor nits throughout the series, based on feedback on
the last version.

Changes in V2:
[1/5] Configure script will automatically enable/disable network
      buffer support depending on the availability of the appropriate
      libnl3 version. [If libnl3 is unavailable, a warning message will be
      printed to let the user know that the feature has been disabled.]

      use macros from pkg.m4 instead of pkg-config commands
      removed redundant checks for libnl3 libraries.

[3,4/5] - Minor nits.

Version 1:

[1/5] Changes to autoconf scripts to check for libnl3. Add linker flags
      to libxl Makefile.

[2/5] External script to setup/teardown network buffering using libnl3's
      CLI. This script will be invoked by libxl before starting Remus.
      The script's main job is to bring up an IFB device with plug qdisc
      attached to it.  It then re-routes egress traffic from the guest's
      vif to the IFB device.

[3/5] Libxl code to invoke the external setup script, followed by netlink
      related setup to obtain a handle on the output buffers attached
      to each vif.

[4/5] Libxl interaction with network buffer module in the kernel via
      libnl3 API.

[5/5] xl cmdline switch to explicitly enable network buffering when
      starting remus.


  A few things to note (by Shriram): 

    a) Based on previous email discussions, the setup/teardown task has
    been moved to a hotplug style shell script which can be customized as
    desired, instead of implementing it as C code inside libxl.

    b) Libnl3 is not available on NetBSD, nor on CentOS (Linux), so I
    have made network buffering support an optional feature that can
    be disabled if desired.

    c) Since the setup script depends on libnl3, which NetBSD lacks,
    it is placed under the tools/hotplug/Linux folder.

thanks
Lai



Shriram Rajagopalan (8):
  remus: add libnl3 dependency to autoconf scripts
  tools/libxl: update libxl_domain_remus_info
  tools/libxl: introduce a new structure libxl__remus_state
  remus: introduce a function to check whether network buffering is
    enabled
  remus: Remus network buffering core and APIs to setup/teardown
  remus: implement the APIs to buffer/release packages
  libxl: use the APIs to setup/teardown network buffering
  libxl: rename remus_failover_cb() to remus_replication_failure_cb()
  libxl: control network buffering in remus callbacks
  libxl: network buffering cmdline switch

 README                                 |    4 +
 config/Tools.mk.in                     |    3 +
 docs/man/xl.conf.pod.5                 |    6 +
 docs/man/xl.pod.1                      |   11 +-
 docs/misc/xenstore-paths.markdown      |    4 +
 tools/configure.ac                     |   15 +
 tools/hotplug/Linux/Makefile           |    1 +
 tools/hotplug/Linux/remus-netbuf-setup |  183 +++++++++++
 tools/libxl/Makefile                   |   11 +
 tools/libxl/libxl.c                    |   48 ++-
 tools/libxl/libxl.h                    |   13 +
 tools/libxl/libxl_dom.c                |  118 ++++++--
 tools/libxl/libxl_internal.h           |   54 +++-
 tools/libxl/libxl_netbuffer.c          |  561 ++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c        |   56 ++++
 tools/libxl/libxl_remus.c              |   64 ++++
 tools/libxl/libxl_types.idl            |    2 +
 tools/libxl/xl.c                       |    4 +
 tools/libxl/xl.h                       |    1 +
 tools/libxl/xl_cmdimpl.c               |   28 ++-
 tools/libxl/xl_cmdtable.c              |    3 +
 tools/remus/README                     |    6 +
 22 files changed, 1155 insertions(+), 41 deletions(-)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
 create mode 100644 tools/libxl/libxl_netbuffer.c
 create mode 100644 tools/libxl/libxl_nonetbuffer.c
 create mode 100644 tools/libxl/libxl_remus.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzZ-0005pX-U9; Mon, 10 Feb 2014 09:17:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzX-0005nV-RM
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:28 +0000
Received: from [85.158.137.68:28567] by server-17.bemta-3.messagelabs.com id
	C6/79-22569-62998F25; Mon, 10 Feb 2014 09:17:26 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!5
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13566 invoked from network); 10 Feb 2014 09:17:25 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497757"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8aT030292;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151422-1694491 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:28 +0800
Message-Id: <1392023972-24675-7-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 06/10 V7] remus: implement the API to
	buffer/release packages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch implements two APIs:
1. libxl__remus_netbuf_start_new_epoch()
   It marks a new epoch: the packets queued before this epoch will be
   flushed, and the packets arriving after it will be buffered. It is
   called after the guest is suspended.
2. libxl__remus_netbuf_release_prev_epoch()
   It flushes the buffered packets out to the clients, and it is
   called when a checkpoint finishes.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_internal.h    |    6 ++++
 tools/libxl/libxl_netbuffer.c   |   49 +++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c |   14 +++++++++++
 3 files changed, 69 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 4006174..c13296b 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2330,6 +2330,12 @@ _hidden void libxl__remus_teardown_done(libxl__egc *egc,
 _hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
                                           libxl__domain_suspend_state *dss);
 
+_hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                               libxl__remus_state *remus_state);
+
+_hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                                  libxl__remus_state *remus_state);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 2c77076..f358f4b 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -503,6 +503,55 @@ void libxl__remus_netbuf_teardown(libxl__egc *egc,
         libxl__remus_teardown_done(egc, dss);
 }
 
+/* The buffer_op's value, not the value passed to kernel */
+enum {
+    tc_buffer_start,
+    tc_buffer_release
+};
+
+static int remus_netbuf_op(libxl__gc *gc, uint32_t domid,
+                           libxl__remus_state *remus_state,
+                           int buffer_op)
+{
+    int i, ret;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    for (i = 0; i < netbuf_state->num_netbufs; ++i) {
+        if (buffer_op == tc_buffer_start)
+            ret = rtnl_qdisc_plug_buffer(netbuf_state->netbuf_qdisc_list[i]);
+        else
+            ret = rtnl_qdisc_plug_release_one(netbuf_state->netbuf_qdisc_list[i]);
+
+        if (!ret)
+            ret = rtnl_qdisc_add(netbuf_state->nlsock,
+                                 netbuf_state->netbuf_qdisc_list[i],
+                                 NLM_F_REQUEST);
+        if (ret) {
+            LOG(ERROR, "Remus: cannot do netbuf op %s on %s:%s",
+                ((buffer_op == tc_buffer_start) ?
+                 "start_new_epoch" : "release_prev_epoch"),
+                netbuf_state->ifb_list[i], nl_geterror(ret));
+            return ERROR_FAIL;
+        }
+    }
+
+    return 0;
+}
+
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_start);
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    return remus_netbuf_op(gc, domid, remus_state, tc_buffer_release);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index 559d0a6..92f35bc 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -33,6 +33,20 @@ void libxl__remus_netbuf_teardown(libxl__egc *egc,
 {
 }
 
+int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
+                                        libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
+int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
+                                           libxl__remus_state *remus_state)
+{
+    LOG(ERROR, "Remus: No support for network buffering");
+    return ERROR_FAIL;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmza-0005pu-CJ; Mon, 10 Feb 2014 09:17:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzY-0005nn-82
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:28 +0000
Received: from [85.158.143.35:21721] by server-2.bemta-4.messagelabs.com id
	5A/43-10891-72998F25; Mon, 10 Feb 2014 09:17:27 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392023841!4445500!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19680 invoked from network); 10 Feb 2014 09:17:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-13.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497758"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H89p030290;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151425-1694493 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:30 +0800
Message-Id: <1392023972-24675-9-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:19,
	Serialize complete at 2014/02/10 17:15:19
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 08/10 V7] libxl: rename remus_failover_cb() to
	remus_replication_failure_cb()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Failover means that the machine on which the primary VM is running
has gone down, and we need to start the secondary VM to take over
from it. remus_failover_cb() is called when Remus replication fails,
not when we need to do failover, so rename it to
remus_replication_failure_cb().

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c |   12 +++++++-----
 1 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 83d3772..70e34c0 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -702,8 +702,9 @@ out:
     return ptr;
 }
 
-static void remus_failover_cb(libxl__egc *egc,
-                              libxl__domain_suspend_state *dss, int rc);
+static void remus_replication_failure_cb(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss,
+                                         int rc);
 
 /* TODO: Explicit Checkpoint acknowledgements via recv_fd. */
 int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
@@ -722,7 +723,7 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
 
     GCNEW(dss);
     dss->ao = ao;
-    dss->callback = remus_failover_cb;
+    dss->callback = remus_replication_failure_cb;
     dss->domid = domid;
     dss->fd = send_fd;
     /* TODO do something with recv_fd */
@@ -769,8 +770,9 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     return AO_ABORT(rc);
 }
 
-static void remus_failover_cb(libxl__egc *egc,
-                              libxl__domain_suspend_state *dss, int rc)
+static void remus_replication_failure_cb(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss,
+                                         int rc)
 {
     STATE_AO_GC(dss->ao);
     /*
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmza-0005qU-UW; Mon, 10 Feb 2014 09:17:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzY-0005o6-L4
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:28 +0000
Received: from [85.158.143.35:21776] by server-3.bemta-4.messagelabs.com id
	C9/CA-11539-82998F25; Mon, 10 Feb 2014 09:17:28 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392023840!4434687!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30778 invoked from network); 10 Feb 2014 09:17:26 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497759"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:27 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8pJ030289;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151424-1694492 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:29 +0800
Message-Id: <1392023972-24675-8-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:18,
	Serialize complete at 2014/02/10 17:15:18
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 07/10 V7] libxl: use the API to setup/teardown
	network buffering
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

If there is a network buffering hotplug script, call
libxl__remus_netbuf_setup() to set up network
buffering and libxl__remus_netbuf_teardown() to
tear it down.
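The conditional described above can be sketched in shell, with stand-in
function names that mirror (but are not) the libxl helpers: the netbuffer
setup/teardown path is taken only when a network buffering hotplug script
is configured, otherwise the operation completes immediately.

```shell
#!/bin/bash
# Hypothetical sketch of the dispatch rule; the function names below
# only mirror the libxl ones, they are not the real libxl API.
netbufscript=""   # set to a script path to take the buffering path

remus_setup_initiate() {
    if [ -z "$netbufscript" ]; then
        echo "setup_done"       # no script: setup completes at once
    else
        echo "netbuf_setup"     # script configured: set up buffering
    fi
}

remus_teardown_initiate() {
    if [ -z "$netbufscript" ]; then
        echo "teardown_done"    # nothing was buffered, nothing to undo
    else
        echo "netbuf_teardown"  # release the buffering resources
    fi
}
```

With `netbufscript` empty, `remus_setup_initiate` prints `setup_done`;
after setting it to a script path, it prints `netbuf_setup`.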

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |    6 +-----
 tools/libxl/libxl_dom.c      |   11 +++++++++++
 tools/libxl/libxl_internal.h |    7 +++++++
 tools/libxl/libxl_remus.c    |   23 +++++++++++++++++++++++
 4 files changed, 42 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 026206a..83d3772 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -762,7 +762,7 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     }
 
     /* Point of no return */
-    libxl__domain_suspend(egc, dss);
+    libxl__remus_setup_initiate(egc, dss);
     return AO_INPROGRESS;
 
  out:
@@ -778,10 +778,6 @@ static void remus_failover_cb(libxl__egc *egc,
      * backup died or some network error occurred preventing us
      * from sending checkpoints.
      */
-
-    /* TBD: Remus cleanup - i.e. detach qdisc, release other
-     * resources.
-     */
     libxl__ao_complete(egc, ao, rc);
 }
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e3e9f6f..912a6e4 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1519,6 +1519,17 @@ void domain_suspend_done(libxl__egc *egc,
     if (dss->xce != NULL)
         xc_evtchn_close(dss->xce);
 
+    if (dss->remus_state) {
+        /*
+         * With Remus, if we reach this point, it means either
+         * backup died or some network error occurred preventing us
+         * from sending checkpoints. Teardown the network buffers and
+         * release netlink resources.  This is an async op.
+         */
+        libxl__remus_teardown_initiate(egc, dss, rc);
+        return;
+    }
+
     dss->callback(egc, dss, rc);
 }
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c13296b..1bd2bba 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2336,6 +2336,13 @@ _hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
 _hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
                                                   libxl__remus_state *remus_state);
 
+_hidden void libxl__remus_setup_initiate(libxl__egc *egc,
+                                         libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                            libxl__domain_suspend_state *dss,
+                                            int rc);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index 4e40412..cdc1c16 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -19,6 +19,16 @@
 
 /*----- remus setup/teardown code -----*/
 
+void libxl__remus_setup_initiate(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss)
+{
+    libxl__ev_time_init(&dss->remus_state->timeout);
+    if (!dss->remus_state->netbufscript)
+        libxl__remus_setup_done(egc, dss, 0);
+    else
+        libxl__remus_netbuf_setup(egc, dss);
+}
+
 void libxl__remus_setup_done(libxl__egc *egc,
                              libxl__domain_suspend_state *dss,
                              int rc)
@@ -34,6 +44,19 @@ void libxl__remus_setup_done(libxl__egc *egc,
     domain_suspend_done(egc, dss, rc);
 }
 
+void libxl__remus_teardown_initiate(libxl__egc *egc,
+                                    libxl__domain_suspend_state *dss,
+                                    int rc)
+{
+    /* stash rc somewhere before invoking teardown ops. */
+    dss->remus_state->saved_rc = rc;
+
+    if (!dss->remus_state->netbuf_state)
+        libxl__remus_teardown_done(egc, dss);
+    else
+        libxl__remus_netbuf_teardown(egc, dss);
+}
+
 void libxl__remus_teardown_done(libxl__egc *egc,
                                 libxl__domain_suspend_state *dss)
 {
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzY-0005oD-Dl; Mon, 10 Feb 2014 09:17:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzW-0005mn-FE
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:26 +0000
Received: from [85.158.143.35:31395] by server-3.bemta-4.messagelabs.com id
	87/BA-11539-52998F25; Mon, 10 Feb 2014 09:17:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392023840!4434687!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29314 invoked from network); 10 Feb 2014 09:17:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497752"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8pI030289;
	Mon, 10 Feb 2014 17:17:12 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151420-1694490 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:27 +0800
Message-Id: <1392023972-24675-6-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 05/10 V7] remus: Remus network buffering core
	and APIs to setup/teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch introduces the remus-netbuf-setup hotplug script, responsible
for setting up and tearing down the infrastructure required for network
output buffering in Remus.  The script is intended to be invoked by
libxl for each guest interface when starting or stopping Remus.

Apart from returning a success/failure indication via the usual hotplug
entries in xenstore, this script also writes to xenstore the name of
the IFB device to be used to control the vif's network output.

The script relies on the libnl3 command line utilities to perform the
various setup/teardown functions. The script is confined to Linux
platforms, since NetBSD does not appear to have libnl3.

The following steps are taken during setup:
 a) call the hotplug script for each vif to set up its network buffer

 b) establish a dedicated Remus context containing libnl-related
    state (netlink sockets, qdisc caches, etc.)

 c) obtain handles to the plug qdiscs installed on the IFB devices
    chosen by the hotplug scripts

During teardown, the netlink resources are released, followed by
invocation of the hotplug scripts to remove the IFB devices.
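The manual command sequence that the hotplug script automates can be
sketched as a dry run. The device names (vif1.0, ifb0) are illustrative,
and the `run` helper echoes each command instead of executing it, so the
ordering can be inspected without root privileges or real devices.

```shell
#!/bin/bash
# Dry-run sketch of the buffer install/teardown ordering; vif1.0 and
# ifb0 are illustrative names. Swap run() for direct execution (as
# root, with the ifb module loaded) to apply the sequence for real.
VIF=vif1.0
IFB=ifb0
run() { echo "$@"; }

# -- setup: bring up the IFB, redirect vif traffic to it, add plug qdisc --
run ip link set dev "$IFB" up
run tc qdisc add dev "$VIF" ingress
run tc filter add dev "$VIF" parent ffff: proto ip prio 10 \
    u32 match u32 0 0 action mirred egress redirect dev "$IFB"
run nl-qdisc-add --dev="$IFB" --parent root plug
run nl-qdisc-add --dev="$IFB" --parent root --update plug --limit=10000000

# -- teardown: reverse order, releasing the IFB last --
run nl-qdisc-delete --dev="$IFB" --parent root plug
run tc qdisc del dev "$VIF" ingress
run ip link set dev "$IFB" down
```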

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/misc/xenstore-paths.markdown      |    4 +
 tools/hotplug/Linux/Makefile           |    1 +
 tools/hotplug/Linux/remus-netbuf-setup |  183 ++++++++++++
 tools/libxl/Makefile                   |    2 +
 tools/libxl/libxl_dom.c                |    7 +-
 tools/libxl/libxl_internal.h           |   17 ++
 tools/libxl/libxl_netbuffer.c          |  481 ++++++++++++++++++++++++++++++++
 tools/libxl/libxl_nonetbuffer.c        |   11 +
 tools/libxl/libxl_remus.c              |   41 +++
 9 files changed, 742 insertions(+), 5 deletions(-)
 create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
 create mode 100644 tools/libxl/libxl_remus.c

diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
index 70ab7f4..7a0d2c9 100644
--- a/docs/misc/xenstore-paths.markdown
+++ b/docs/misc/xenstore-paths.markdown
@@ -385,6 +385,10 @@ The guest's virtual time offset from UTC in seconds.
 
 The device model version for a domain.
 
+#### /libxl/$DOMID/remus/netbuf/$DEVID/ifb = STRING [n,INTERNAL]
+
+IFB device used by Remus to buffer network output from the associated vif.
+
 [BLKIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,blkif.h.html
 [FBIF]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,io,fbif.h.html
 [HVMPARAMS]: http://xenbits.xen.org/docs/unstable/hypercall/include,public,hvm,params.h.html
diff --git a/tools/hotplug/Linux/Makefile b/tools/hotplug/Linux/Makefile
index 47655f6..6139c1f 100644
--- a/tools/hotplug/Linux/Makefile
+++ b/tools/hotplug/Linux/Makefile
@@ -16,6 +16,7 @@ XEN_SCRIPTS += network-nat vif-nat
 XEN_SCRIPTS += vif-openvswitch
 XEN_SCRIPTS += vif2
 XEN_SCRIPTS += vif-setup
+XEN_SCRIPTS-$(CONFIG_REMUS_NETBUF) += remus-netbuf-setup
 XEN_SCRIPTS += block
 XEN_SCRIPTS += block-enbd block-nbd
 XEN_SCRIPTS-$(CONFIG_BLKTAP1) += blktap
diff --git a/tools/hotplug/Linux/remus-netbuf-setup b/tools/hotplug/Linux/remus-netbuf-setup
new file mode 100644
index 0000000..3467db2
--- /dev/null
+++ b/tools/hotplug/Linux/remus-netbuf-setup
@@ -0,0 +1,183 @@
+#!/bin/bash
+#============================================================================
+# ${XEN_SCRIPT_DIR}/remus-netbuf-setup
+#
+# Script for attaching a network buffer to the specified vif (in any mode).
+# The hotplug system will call this script when starting Remus via the
+# libxl API libxl_domain_remus_start.
+#
+# Usage:
+# remus-netbuf-setup (setup|teardown)
+#
+# Environment vars:
+# vifname     vif interface name (required).
+# XENBUS_PATH path in Xenstore, where the IFB device details will be stored
+#                      or read from (required).
+#             (libxl passes /libxl/<domid>/remus/netbuf/<devid>)
+# IFB         ifb interface to be cleaned up (required). [for teardown op only]
+
+# Written to the store: (setup operation)
+# XENBUS_PATH/ifb=<ifbdevName> the IFB device serving
+#  as the intermediate buffer through which the interface's network output
+#  can be controlled.
+#
+# To install a network buffer on a guest vif (vif1.0) using ifb (ifb0)
+# we need to do the following
+#
+#  ip link set dev ifb0 up
+#  tc qdisc add dev vif1.0 ingress
+#  tc filter add dev vif1.0 parent ffff: proto ip \
+#    prio 10 u32 match u32 0 0 action mirred egress redirect dev ifb0
+#  nl-qdisc-add --dev=ifb0 --parent root plug
+#  nl-qdisc-add --dev=ifb0 --parent root --update plug --limit=10000000
+#                                                (10MB limit on buffer)
+#
+# So the order of operations when installing a network buffer on vif1.0 is:
+# 1. find a free ifb and bring up the device
+# 2. redirect traffic from vif1.0 to ifb:
+#   2.1 add ingress qdisc to vif1.0 (to capture outgoing packets from guest)
+#   2.2 use tc filter command with actions mirred egress + redirect
+# 3. install plug_qdisc on ifb device, with which we can buffer/release
+#    guest's network output from vif1.0
+#
+#
+
+#============================================================================
+
+# Unlike other vif scripts, vif-common is not needed here as it executes
+# vif-specific setup code such as renaming.
+dir=$(dirname "$0")
+. "$dir/xen-hotplug-common.sh"
+
+findCommand "$@"
+
+if [ "$command" != "setup" -a  "$command" != "teardown" ]
+then
+  echo "Invalid command: $command"
+  log err "Invalid command: $command"
+  exit 1
+fi
+
+evalVariables "$@"
+
+: ${vifname:?}
+: ${XENBUS_PATH:?}
+
+check_libnl_tools() {
+    if ! command -v nl-qdisc-list > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-list tool"
+    fi
+    if ! command -v nl-qdisc-add > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-add tool"
+    fi
+    if ! command -v nl-qdisc-delete > /dev/null 2>&1; then
+        fatal "Unable to find nl-qdisc-delete tool"
+    fi
+}
+
+# We only check for modules. We don't load them.
+# User/Admin is supposed to load ifb during boot time,
+# ensuring that there are enough free ifbs in the system.
+# Other modules will be loaded automatically by tc commands.
+check_modules() {
+    for m in ifb sch_plug sch_ingress act_mirred cls_u32
+    do
+        if ! modinfo $m > /dev/null 2>&1; then
+            fatal "Unable to find $m kernel module"
+        fi
+    done
+}
+
+setup_ifb() {
+
+    for ifb in `ifconfig -a -s|egrep ^ifb|cut -d ' ' -f1`
+    do
+        local installed=`nl-qdisc-list -d $ifb`
+        [ -n "$installed" ] && continue
+        IFB="$ifb"
+        break
+    done
+
+    if [ -z "$IFB" ]
+    then
+        fatal "Unable to find a free IFB device for $vifname"
+    fi
+
+    do_or_die ip link set dev "$IFB" up
+}
+
+redirect_vif_traffic() {
+    local vif=$1
+    local ifb=$2
+
+    do_or_die tc qdisc add dev "$vif" ingress
+
+    tc filter add dev "$vif" parent ffff: proto ip prio 10 \
+        u32 match u32 0 0 action mirred egress redirect dev "$ifb" >/dev/null 2>&1
+
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to redirect traffic from $vif to $ifb"
+    fi
+}
+
+add_plug_qdisc() {
+    local vif=$1
+    local ifb=$2
+
+    nl-qdisc-add --dev="$ifb" --parent root plug >/dev/null 2>&1
+    if [ $? -ne 0 ]
+    then
+        do_without_error tc qdisc del dev "$vif" ingress
+        fatal "Failed to add plug qdisc to $ifb"
+    fi
+
+    # Set the ifb buffering limit in bytes. It's okay if this command fails.
+    nl-qdisc-add --dev="$ifb" --parent root \
+        --update plug --limit=10000000 >/dev/null 2>&1
+}
+
+teardown_netbuf() {
+    local vif=$1
+    local ifb=$2
+
+    if [ "$ifb" ]; then
+        do_without_error ip link set dev "$ifb" down
+        do_without_error nl-qdisc-delete --dev="$ifb" --parent root plug >/dev/null 2>&1
+        xenstore-rm -t "$XENBUS_PATH/ifb" 2>/dev/null || true
+    fi
+    do_without_error tc qdisc del dev "$vif" ingress
+    xenstore-rm -t "$XENBUS_PATH/hotplug-status" 2>/dev/null || true
+}
+
+xs_write_failed() {
+    local vif=$1
+    local ifb=$2
+    teardown_netbuf "$vifname" "$IFB"
+    fatal "failed to write ifb name to xenstore"
+}
+
+case "$command" in
+    setup)
+        check_libnl_tools
+        check_modules
+
+        claim_lock "pickifb"
+        setup_ifb
+        redirect_vif_traffic "$vifname" "$IFB"
+        add_plug_qdisc "$vifname" "$IFB"
+        release_lock "pickifb"
+
+        # Not using xenstore_write, which automatically exits on error,
+        # because we need to clean up on failure.
+        _xenstore_write "$XENBUS_PATH/ifb" "$IFB" || xs_write_failed "$vifname" "$IFB"
+        success
+        ;;
+    teardown)
+        : ${IFB:?}
+        teardown_netbuf "$vifname" "$IFB"
+        ;;
+esac
+
+log debug "Successful remus-netbuf-setup $command for $vifname, ifb $IFB."
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 84a467c..218f55e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -52,6 +52,8 @@ else
 LIBXL_OBJS-y += libxl_nonetbuffer.o
 endif
 
+LIBXL_OBJS-y += libxl_remus.o
+
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
 
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 8d63f90..e3e9f6f 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -753,9 +753,6 @@ int libxl__toolstack_restore(uint32_t domid, const uint8_t *buf,
 
 /*==================== Domain suspend (save) ====================*/
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc);
-
 /*----- complicated callback, called by xc_domain_save -----*/
 
 /*
@@ -1508,8 +1505,8 @@ static void save_device_model_datacopier_done(libxl__egc *egc,
     dss->save_dm_callback(egc, dss, our_rc);
 }
 
-static void domain_suspend_done(libxl__egc *egc,
-                        libxl__domain_suspend_state *dss, int rc)
+void domain_suspend_done(libxl__egc *egc,
+                         libxl__domain_suspend_state *dss, int rc)
 {
     STATE_AO_GC(dss->ao);
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 2f64382..4006174 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2313,6 +2313,23 @@ typedef struct libxl__remus_state {
 
 _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
 
+_hidden void domain_suspend_done(libxl__egc *egc,
+                                 libxl__domain_suspend_state *dss,
+                                 int rc);
+
+_hidden void libxl__remus_setup_done(libxl__egc *egc,
+                                     libxl__domain_suspend_state *dss,
+                                     int rc);
+
+_hidden void libxl__remus_netbuf_setup(libxl__egc *egc,
+                                       libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_teardown_done(libxl__egc *egc,
+                                        libxl__domain_suspend_state *dss);
+
+_hidden void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                          libxl__domain_suspend_state *dss);
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
diff --git a/tools/libxl/libxl_netbuffer.c b/tools/libxl/libxl_netbuffer.c
index 8e23d75..2c77076 100644
--- a/tools/libxl/libxl_netbuffer.c
+++ b/tools/libxl/libxl_netbuffer.c
@@ -17,11 +17,492 @@
 
 #include "libxl_internal.h"
 
+#include <netlink/cache.h>
+#include <netlink/socket.h>
+#include <netlink/attr.h>
+#include <netlink/route/link.h>
+#include <netlink/route/route.h>
+#include <netlink/route/qdisc.h>
+#include <netlink/route/qdisc/plug.h>
+
+typedef struct libxl__remus_netbuf_state {
+    struct rtnl_qdisc **netbuf_qdisc_list;
+    struct nl_sock *nlsock;
+    struct nl_cache *qdisc_cache;
+    const char **vif_list;
+    const char **ifb_list;
+    uint32_t num_netbufs;
+    uint32_t unused;
+} libxl__remus_netbuf_state;
+
 int libxl__netbuffer_enabled(libxl__gc *gc)
 {
     return 1;
 }
 
+/* If the device has a vifname, then use that instead of
+ * the vifX.Y format.
+ */
+static const char *get_vifname(libxl__gc *gc, uint32_t domid,
+                               libxl_device_nic *nic)
+{
+    const char *vifname = NULL;
+    const char *path;
+    int rc;
+
+    path = libxl__sprintf(gc, "%s/backend/vif/%d/%d/vifname",
+                          libxl__xs_get_dompath(gc, 0), domid, nic->devid);
+    rc = libxl__xs_read_checked(gc, XBT_NULL, path, &vifname);
+    if (!rc && !vifname) {
+        /* use the default name */
+        vifname = libxl__device_nic_devname(gc, domid,
+                                            nic->devid,
+                                            nic->nictype);
+    }
+
+    return vifname;
+}
+
+static const char **get_guest_vif_list(libxl__gc *gc, uint32_t domid,
+                                       int *num_vifs)
+{
+    libxl_device_nic *nics = NULL;
+    int nb, i = 0;
+    const char **vif_list = NULL;
+
+    *num_vifs = 0;
+    nics = libxl_device_nic_list(CTX, domid, &nb);
+    if (!nics)
+        return NULL;
+
+    /* Ensure that none of the vifs are backed by driver domains */
+    for (i = 0; i < nb; i++) {
+        if (nics[i].backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            const char *vifname = get_vifname(gc, domid, &nics[i]);
+
+            if (!vifname)
+              vifname = "(unknown)";
+            LOG(ERROR, "vif %s has driver domain (%u) as its backend. "
+                "Network buffering is not supported with driver domains",
+                vifname, nics[i].backend_domid);
+            *num_vifs = -1;
+            goto out;
+        }
+    }
+
+    GCNEW_ARRAY(vif_list, nb);
+    for (i = 0; i < nb; ++i) {
+        vif_list[i] = get_vifname(gc, domid, &nics[i]);
+        if (!vif_list[i]) {
+            vif_list = NULL;
+            goto out;
+        }
+    }
+    *num_vifs = nb;
+
+ out:
+    for (i = 0; i < nb; i++)
+        libxl_device_nic_dispose(&nics[i]);
+    free(nics);
+    return vif_list;
+}
+
+static void free_qdiscs(libxl__remus_netbuf_state *netbuf_state)
+{
+    int i;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* free qdiscs */
+    for (i = 0; i < netbuf_state->num_netbufs; i++) {
+        qdisc = netbuf_state->netbuf_qdisc_list[i];
+        if (!qdisc)
+            break;
+
+        nl_object_put((struct nl_object *)qdisc);
+        netbuf_state->netbuf_qdisc_list[i] = NULL;
+    }
+
+    /* free qdisc cache */
+    if (netbuf_state->qdisc_cache) {
+      nl_cache_clear(netbuf_state->qdisc_cache);
+      nl_cache_free(netbuf_state->qdisc_cache);
+      netbuf_state->qdisc_cache = NULL;
+    }
+
+    /* close & free nlsock */
+    if (netbuf_state->nlsock) {
+      nl_close(netbuf_state->nlsock);
+      nl_socket_free(netbuf_state->nlsock);
+      netbuf_state->nlsock = NULL;
+    }
+}
+
+static int init_qdiscs(libxl__gc *gc,
+                       libxl__remus_state *remus_state)
+{
+    int i, ret, ifindex;
+    struct rtnl_link *ifb = NULL;
+    struct rtnl_qdisc *qdisc = NULL;
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state * const netbuf_state = remus_state->netbuf_state;
+    const int num_netbufs = netbuf_state->num_netbufs;
+    const char ** const ifb_list = netbuf_state->ifb_list;
+
+    /* Now that we have brought up IFB devices with plug qdisc for
+     * each vif, lets get a netlink handle on the plug qdisc for use
+     * during checkpointing.
+     */
+    netbuf_state->nlsock = nl_socket_alloc();
+    if (!netbuf_state->nlsock) {
+        LOG(ERROR, "cannot allocate nl socket");
+        goto out;
+    }
+
+    ret = nl_connect(netbuf_state->nlsock, NETLINK_ROUTE);
+    if (ret) {
+        LOG(ERROR, "failed to open netlink socket: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* get list of all qdiscs installed on network devs. */
+    ret = rtnl_qdisc_alloc_cache(netbuf_state->nlsock,
+                                 &netbuf_state->qdisc_cache);
+    if (ret) {
+        LOG(ERROR, "failed to allocate qdisc cache: %s",
+            nl_geterror(ret));
+        goto out;
+    }
+
+    /* list of handles to plug qdiscs */
+    GCNEW_ARRAY(netbuf_state->netbuf_qdisc_list, num_netbufs);
+
+    for (i = 0; i < num_netbufs; ++i) {
+
+        /* get a handle to the IFB interface */
+        ifb = NULL;
+        ret = rtnl_link_get_kernel(netbuf_state->nlsock, 0,
+                                   ifb_list[i], &ifb);
+        if (ret) {
+            LOG(ERROR, "cannot obtain handle for %s: %s", ifb_list[i],
+                nl_geterror(ret));
+            goto out;
+        }
+
+        ifindex = rtnl_link_get_ifindex(ifb);
+        if (!ifindex) {
+            LOG(ERROR, "interface %s has no index", ifb_list[i]);
+            goto out;
+        }
+
+        /* Get a reference to the root qdisc installed on the IFB, by
+         * querying the qdisc list we obtained earlier. The netbufscript
+         * sets up the plug qdisc as the root qdisc, so we don't have to
+         * search the entire qdisc tree on the IFB dev.
+         *
+         * There is no need to explicitly free this qdisc as it's just a
+         * reference from the qdisc cache we allocated earlier.
+         */
+        qdisc = rtnl_qdisc_get_by_parent(netbuf_state->qdisc_cache, ifindex,
+                                         TC_H_ROOT);
+
+        if (qdisc) {
+            const char *tc_kind = rtnl_tc_get_kind(TC_CAST(qdisc));
+            /* Sanity check: Ensure that the root qdisc is a plug qdisc. */
+            if (!tc_kind || strcmp(tc_kind, "plug")) {
+                nl_object_put((struct nl_object *)qdisc);
+                LOG(ERROR, "plug qdisc is not installed on %s", ifb_list[i]);
+                goto out;
+            }
+            netbuf_state->netbuf_qdisc_list[i] = qdisc;
+        } else {
+            LOG(ERROR, "Cannot get qdisc handle from ifb %s", ifb_list[i]);
+            goto out;
+        }
+        rtnl_link_put(ifb);
+    }
+
+    return 0;
+
+ out:
+    if (ifb)
+        rtnl_link_put(ifb);
+    free_qdiscs(netbuf_state);
+    return ERROR_FAIL;
+}
+
+static void netbuf_setup_timeout_cb(libxl__egc *egc,
+                                    libxl__ev_time *ev,
+                                    const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+    assert(libxl__ev_child_inuse(&remus_state->child));
+
+    LOG(DEBUG, "killing hotplug script %s (on vif %s) because of timeout",
+        remus_state->netbufscript, vif);
+
+    if (kill(remus_state->child.pid, SIGKILL)) {
+        LOGEV(ERROR, errno, "unable to kill hotplug script %s [%ld]",
+              remus_state->netbufscript,
+              (unsigned long)remus_state->child.pid);
+    }
+
+    return;
+}
+
+/* The script needs the following env vars & args:
+ * $vifname
+ * $XENBUS_PATH (/libxl/<domid>/remus/netbuf/<devid>/)
+ * $IFB (for teardown)
+ * setup/teardown as the command line argument.
+ * In return (during setup), the script writes the name of the IFB device
+ * to be used for output buffering into XENBUS_PATH/ifb.
+ */
+static int exec_netbuf_script(libxl__gc *gc, libxl__remus_state *remus_state,
+                              char *op, libxl__ev_child_callback *death)
+{
+    int arraysize, nr = 0;
+    char **env = NULL, **args = NULL;
+    pid_t pid;
+
+    /* Convenience aliases */
+    libxl__ev_child *const child = &remus_state->child;
+    libxl__ev_time *const timeout = &remus_state->timeout;
+    char *const script = libxl__strdup(gc, remus_state->netbufscript);
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char *const ifb = netbuf_state->ifb_list[devid];
+
+    arraysize = 7;
+    GCNEW_ARRAY(env, arraysize);
+    env[nr++] = "vifname";
+    env[nr++] = libxl__strdup(gc, vif);
+    env[nr++] = "XENBUS_PATH";
+    env[nr++] = GCSPRINTF("%s/remus/netbuf/%d",
+                          libxl__xs_libxl_path(gc, domid), devid);
+    if (!strcmp(op, "teardown")) {
+        env[nr++] = "IFB";
+        env[nr++] = libxl__strdup(gc, ifb);
+    }
+    env[nr++] = NULL;
+    assert(nr <= arraysize);
+
+    arraysize = 3; nr = 0;
+    GCNEW_ARRAY(args, arraysize);
+    args[nr++] = script;
+    args[nr++] = op;
+    args[nr++] = NULL;
+    assert(nr == arraysize);
+
+    /* Set hotplug timeout */
+    if (libxl__ev_time_register_rel(gc, timeout,
+                                    netbuf_setup_timeout_cb,
+                                    LIBXL_HOTPLUG_TIMEOUT * 1000)) {
+        LOG(ERROR, "unable to register timeout for "
+            "netbuf setup script %s on vif %s", script, vif);
+        return ERROR_FAIL;
+    }
+
+    LOG(DEBUG, "Calling netbuf script: %s %s on vif %s",
+        script, op, vif);
+
+    /* Fork and exec netbuf script */
+    pid = libxl__ev_child_fork(gc, child, death);
+    if (pid == -1) {
+        LOG(ERROR, "unable to fork netbuf script %s", script);
+        return ERROR_FAIL;
+    }
+
+    if (!pid) {
+        /* child: Launch netbuf script */
+        libxl__exec(gc, -1, -1, -1, args[0], args, env);
+        /* notreached */
+        abort();
+    }
+
+    return 0;
+}
+
+static void netbuf_setup_script_cb(libxl__egc *egc,
+                                   libxl__ev_child *child,
+                                   pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+    const char *out_path_base, *hotplug_error = NULL;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = remus_state->dss->domid;
+    const int devid = remus_state->dev_id;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+    const char *const vif = netbuf_state->vif_list[devid];
+    const char **const ifb = &netbuf_state->ifb_list[devid];
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    out_path_base = GCSPRINTF("%s/remus/netbuf/%d",
+                              libxl__xs_libxl_path(gc, domid), devid);
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/hotplug-error", out_path_base),
+                                &hotplug_error);
+    if (rc)
+        goto out;
+
+    if (hotplug_error) {
+        LOG(ERROR, "netbuf script %s setup failed for vif %s: %s",
+            remus_state->netbufscript,
+            netbuf_state->vif_list[devid], hotplug_error);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL,
+                                GCSPRINTF("%s/remus/netbuf/%d/ifb",
+                                          libxl__xs_libxl_path(gc, domid),
+                                          devid),
+                                ifb);
+    if (rc)
+        goto out;
+
+    if (!(*ifb)) {
+        LOG(ERROR, "Cannot get ifb dev name for domain %u dev %s",
+            domid, vif);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    LOG(DEBUG, "%s will buffer packets from vif %s", *ifb, vif);
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        rc = exec_netbuf_script(gc, remus_state,
+                                "setup", netbuf_setup_script_cb);
+        if (rc)
+            goto out;
+
+        return;
+    }
+
+    rc = init_qdiscs(gc, remus_state);
+ out:
+    libxl__remus_setup_done(egc, remus_state->dss, rc);
+}
+
+/* Scan through the list of vifs belonging to domid and
+ * invoke the netbufscript to setup the IFB device & plug qdisc
+ * for each vif. Then scan through the list of IFB devices to obtain
+ * a handle on the plug qdisc installed on these IFB devices.
+ * Network output buffering is controlled via these qdiscs.
+ */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+    libxl__remus_netbuf_state *netbuf_state = NULL;
+    int num_netbufs = 0;
+    int rc = ERROR_FAIL;
+
+    /* Convenience aliases */
+    const uint32_t domid = dss->domid;
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    GCNEW(netbuf_state);
+    netbuf_state->vif_list = get_guest_vif_list(gc, domid, &num_netbufs);
+    if (!num_netbufs) {
+        rc = 0;
+        goto out;
+    }
+
+    if (num_netbufs < 0) goto out;
+
+    GCNEW_ARRAY(netbuf_state->ifb_list, num_netbufs);
+    netbuf_state->num_netbufs = num_netbufs;
+    remus_state->netbuf_state = netbuf_state;
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "setup",
+                           netbuf_setup_script_cb))
+        goto out;
+    return;
+
+ out:
+    libxl__remus_setup_done(egc, dss, rc);
+}
+
+static void netbuf_teardown_script_cb(libxl__egc *egc,
+                                      libxl__ev_child *child,
+                                      pid_t pid, int status)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(child, *remus_state, child);
+
+    /* Convenience aliases */
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    STATE_AO_GC(remus_state->dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
+
+    if (status) {
+        libxl_report_child_exitstatus(CTX, LIBXL__LOG_ERROR,
+                                      remus_state->netbufscript,
+                                      pid, status);
+    }
+
+    remus_state->dev_id++;
+    if (remus_state->dev_id < netbuf_state->num_netbufs) {
+        if (exec_netbuf_script(gc, remus_state,
+                               "teardown", netbuf_teardown_script_cb))
+            goto out;
+        return;
+    }
+
+ out:
+    libxl__remus_teardown_done(egc, remus_state->dss);
+}
+
+/* Note: This function will be called in the same gc context as
+ * libxl__remus_netbuf_setup, created during the libxl_domain_remus_start
+ * API call.
+ */
+void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                  libxl__domain_suspend_state *dss)
+{
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+    libxl__remus_netbuf_state *const netbuf_state = remus_state->netbuf_state;
+
+    STATE_AO_GC(dss->ao);
+
+    free_qdiscs(netbuf_state);
+
+    remus_state->dev_id = 0;
+    if (exec_netbuf_script(gc, remus_state, "teardown",
+                           netbuf_teardown_script_cb))
+        libxl__remus_teardown_done(egc, dss);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_nonetbuffer.c b/tools/libxl/libxl_nonetbuffer.c
index 6aa4bf1..559d0a6 100644
--- a/tools/libxl/libxl_nonetbuffer.c
+++ b/tools/libxl/libxl_nonetbuffer.c
@@ -22,6 +22,17 @@ int libxl__netbuffer_enabled(libxl__gc *gc)
     return 0;
 }
 
+/* Remus network buffer related stubs */
+void libxl__remus_netbuf_setup(libxl__egc *egc,
+                               libxl__domain_suspend_state *dss)
+{
+}
+
+void libxl__remus_netbuf_teardown(libxl__egc *egc,
+                                  libxl__domain_suspend_state *dss)
+{
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
new file mode 100644
index 0000000..4e40412
--- /dev/null
+++ b/tools/libxl/libxl_remus.c
@@ -0,0 +1,41 @@
+/*
+ * Copyright (C) 2014
+ * Author Shriram Rajagopalan <rshriram@cs.ubc.ca>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only, with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+/*----- remus setup/teardown code -----*/
+
+void libxl__remus_setup_done(libxl__egc *egc,
+                             libxl__domain_suspend_state *dss,
+                             int rc)
+{
+    STATE_AO_GC(dss->ao);
+    if (!rc) {
+        libxl__domain_suspend(egc, dss);
+        return;
+    }
+
+    LOG(ERROR, "Remus: failed to setup network buffering"
+        " for guest with domid %u", dss->domid);
+    domain_suspend_done(egc, dss, rc);
+}
+
+void libxl__remus_teardown_done(libxl__egc *egc,
+                                libxl__domain_suspend_state *dss)
+{
+    dss->callback(egc, dss, dss->remus_state->saved_rc);
+}
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzV-0005mb-QC; Mon, 10 Feb 2014 09:17:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzT-0005mB-LJ
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:23 +0000
Received: from [85.158.137.68:18929] by server-13.bemta-3.messagelabs.com id
	CD/C5-26923-22998F25; Mon, 10 Feb 2014 09:17:22 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!3
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12845 invoked from network); 10 Feb 2014 09:17:21 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497750"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8pH030289;
	Mon, 10 Feb 2014 17:17:08 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151392-1694486 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:23 +0800
Message-Id: <1392023972-24675-2-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:16,
	Serialize complete at 2014/02/10 17:15:16
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 01/10 V7] remus: add libnl3 dependency to
	autoconf scripts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Libnl3 is required for controlling Remus network buffering.
This patch adds a dependency on libnl3 (>= 3.2.8) to the autoconf
scripts. It also provides the ability to configure the tools without
libnl3 support, i.e. without network buffering support.
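
The `>= 3.2.8` version gate that PKG_CHECK_MODULES enforces can be sketched in C as a component-wise comparison of dotted version strings (a hypothetical helper for illustration; pkg-config performs the real check, and the sketch assumes well-formed digits-and-dots input):

```c
#include <assert.h>

/* Component-wise comparison of dotted version strings, so that
 * "3.2.10" compares greater than "3.2.8" (plain strcmp would not). */
static int vercmp(const char *a, const char *b)
{
    while (*a || *b) {
        long x = 0, y = 0;
        while (*a >= '0' && *a <= '9') x = x * 10 + (*a++ - '0');
        while (*b >= '0' && *b <= '9') y = y * 10 + (*b++ - '0');
        if (x != y) return x < y ? -1 : 1;
        /* step past the '.' separators; anything else ends the walk */
        if (*a == '.') a++; else if (*a) break;
        if (*b == '.') b++; else if (*b) break;
    }
    return 0;
}

/* The configure gate: libnl3 is usable iff its version >= 3.2.8. */
static int libnl3_ok(const char *version)
{
    return vercmp(version, "3.2.8") >= 0;
}
```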

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 README               |    4 ++++
 config/Tools.mk.in   |    3 +++
 tools/configure.ac   |   15 +++++++++++++++
 tools/libxl/Makefile |    2 ++
 tools/remus/README   |    6 ++++++
 5 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/README b/README
index 4148a26..7bb25fb 100644
--- a/README
+++ b/README
@@ -72,6 +72,10 @@ disabled at compile time:
     * cmake (if building vtpm stub domains)
     * markdown
     * figlet (for generating the traditional Xen start of day banner)
+    * Development install of libnl3 (e.g., libnl-3-200,
+      libnl-3-dev, etc).  Required if network buffering is desired
+      when using Remus with libxl.  See tools/remus/README for detailed
+      information.
 
 Second, you need to acquire a suitable kernel for use in domain 0. If
 possible you should use a kernel provided by your OS distributor. If
diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index d9d3239..81802b3 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -38,6 +38,8 @@ PTHREAD_LIBS        := @PTHREAD_LIBS@
 
 PTYFUNCS_LIBS       := @PTYFUNCS_LIBS@
 
+LIBNL3_LIBS         := @LIBNL3_LIBS@
+LIBNL3_CFLAGS       := @LIBNL3_CFLAGS@
 # Download GIT repositories via HTTP or GIT's own protocol?
 # GIT's protocol is faster and more robust, when it works at all (firewalls
 # may block it). We make it the default, but if your GIT repository downloads
@@ -56,6 +58,7 @@ CONFIG_QEMU_TRAD    := @qemu_traditional@
 CONFIG_QEMU_XEN     := @qemu_xen@
 CONFIG_XEND         := @xend@
 CONFIG_BLKTAP1      := @blktap1@
+CONFIG_REMUS_NETBUF := @remus_netbuf@
 
 #System options
 ZLIB                := @zlib@
diff --git a/tools/configure.ac b/tools/configure.ac
index 0754f0e..f95956d 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -236,6 +236,21 @@ esac
 # Checks for header files.
 AC_CHECK_HEADERS([yajl/yajl_version.h sys/eventfd.h])
 
+# Check for libnl3 >= 3.2.8. If present, enable Remus network buffering.
+PKG_CHECK_MODULES(LIBNL3, [libnl-3.0 >= 3.2.8 libnl-route-3.0 >= 3.2.8],
+		[libnl3_lib="y"], [libnl3_lib="n"])
+
+AS_IF([test "x$libnl3_lib" = "xn" ], [
+	    AC_MSG_WARN([Disabling support for Remus network buffering.
+	    Please install libnl3 libraries, command line tools and devel
+	    headers - version 3.2.8 or higher])
+	    AC_SUBST(remus_netbuf, [n])
+	    ],[
+	    AC_SUBST(LIBNL3_LIBS)
+	    AC_SUBST(LIBNL3_CFLAGS)
+	    AC_SUBST(remus_netbuf, [y])
+])
+
 AC_OUTPUT()
 
 AS_IF([test "x$xend" = "xy" ], [
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..da27c84 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -21,11 +21,13 @@ endif
 
 LIBXL_LIBS =
 LIBXL_LIBS = $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libblktapctl) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
+LIBXL_LIBS += $(LIBNL3_LIBS)
 
 CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
 CFLAGS_LIBXL += $(CFLAGS_libxenguest)
 CFLAGS_LIBXL += $(CFLAGS_libxenstore)
 CFLAGS_LIBXL += $(CFLAGS_libblktapctl) 
+CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
 CFLAGS_LIBXL += -Wshadow
 
 LIBXL_LIBS-$(CONFIG_ARM) += -lfdt
diff --git a/tools/remus/README b/tools/remus/README
index 9e8140b..4736252 100644
--- a/tools/remus/README
+++ b/tools/remus/README
@@ -2,3 +2,9 @@ Remus provides fault tolerance for virtual machines by sending continuous
 checkpoints to a backup, which will activate if the target VM fails.
 
 See the website at http://nss.cs.ubc.ca/remus/ for details.
+
+Using Remus with libxl on Xen 4.4 and higher:
+ To enable network buffering, you need libnl3 version 3.2.8
+ or higher, along with its development headers and command line utilities.
+ If your distro does not have the appropriate libnl3 version, you can find
+ the latest source tarball of libnl3 at http://www.carisma.slowglass.com/~tgr/libnl/
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzW-0005n8-FX; Mon, 10 Feb 2014 09:17:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzV-0005mZ-Kh
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:25 +0000
Received: from [85.158.143.35:31267] by server-1.bemta-4.messagelabs.com id
	48/A3-31661-42998F25; Mon, 10 Feb 2014 09:17:24 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392023841!4445500!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18772 invoked from network); 10 Feb 2014 09:17:22 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-13.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497753"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H89o030290;
	Mon, 10 Feb 2014 17:17:11 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151397-1694488 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:25 +0800
Message-Id: <1392023972-24675-4-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:17,
	Serialize complete at 2014/02/10 17:15:17
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 03/10 V7] tools/libxl: introduce a new structure
	libxl__remus_state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

libxl_domain_remus_info only contains the arguments of the
'xl remus' command, so introduce a new structure, libxl__remus_state,
to hold the Remus state.
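
The split between the public info struct and the internal state struct can be sketched as follows (a hypothetical, simplified field set for illustration; the real structures live in libxl.h and libxl_internal.h):

```c
#include <assert.h>
#include <stdlib.h>

/* Public struct: only what the 'xl remus' command line supplies. */
typedef struct {
    int interval;       /* checkpoint interval in ms */
    int compression;    /* compress the checkpoint stream? */
} remus_info;

/* Internal struct: the user-supplied arguments plus run-time state. */
typedef struct {
    int interval;
    int compression;
    int saved_rc;       /* private: result to report at teardown */
    int dev_id;         /* private: device currently being set up */
} remus_state;

/* Mirror of what libxl_domain_remus_start does: copy the user-supplied
 * fields into a freshly allocated internal state, leaving the private
 * fields zero-initialised. */
static remus_state *remus_state_init(const remus_info *info)
{
    remus_state *rs = calloc(1, sizeof(*rs));
    if (!rs) return NULL;
    rs->interval = info->interval;
    rs->compression = info->compression;
    return rs;
}
```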

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.c          |   25 +++++++++++++++++++++++--
 tools/libxl/libxl_dom.c      |   12 ++++--------
 tools/libxl/libxl_internal.h |   22 ++++++++++++++++++++--
 3 files changed, 47 insertions(+), 12 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 2845ca4..25af816 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -729,11 +729,32 @@ int libxl_domain_remus_start(libxl_ctx *ctx, libxl_domain_remus_info *info,
     dss->type = type;
     dss->live = 1;
     dss->debug = 0;
-    dss->remus = info;
 
     assert(info);
 
-    /* TBD: Remus setup - i.e. attach qdisc, enable disk buffering, etc */
+    GCNEW(dss->remus_state);
+
+    /* convenience shorthand */
+    libxl__remus_state *remus_state = dss->remus_state;
+    remus_state->blackhole = info->blackhole;
+    remus_state->interval = info->interval;
+    remus_state->compression = info->compression;
+    remus_state->dss = dss;
+    libxl__ev_child_init(&remus_state->child);
+
+    /* TODO: enable disk buffering */
+
+    /* Setup network buffering */
+    if (info->netbuf) {
+        if (info->netbufscript) {
+            remus_state->netbufscript =
+                libxl__strdup(gc, info->netbufscript);
+        } else {
+            remus_state->netbufscript =
+                GCSPRINTF("%s/remus-netbuf-setup",
+                          libxl__xen_script_dir_path());
+        }
+    }
 
     /* Point of no return */
     libxl__domain_suspend(egc, dss);
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..8d63f90 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1290,7 +1290,7 @@ static void remus_checkpoint_dm_saved(libxl__egc *egc,
     /* REMUS TODO: Wait for disk and memory ack, release network buffer */
     /* REMUS TODO: make this asynchronous */
     assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->interval * 1000);
+    usleep(dss->remus_state->interval * 1000);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
@@ -1308,7 +1308,6 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     const libxl_domain_type type = dss->type;
     const int live = dss->live;
     const int debug = dss->debug;
-    const libxl_domain_remus_info *const r_info = dss->remus;
     libxl__srm_save_autogen_callbacks *const callbacks =
         &dss->shs.callbacks.save.a;
 
@@ -1343,11 +1342,8 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     dss->guest_responded = 0;
     dss->dm_savefile = libxl__device_model_savefile(gc, domid);
 
-    if (r_info != NULL) {
-        dss->interval = r_info->interval;
-        if (r_info->compression)
-            dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
-    }
+    if (dss->remus_state && dss->remus_state->compression)
+        dss->xcflags |= XCFLAGS_CHECKPOINT_COMPRESS;
 
     dss->xce = xc_evtchn_open(NULL, 0);
     if (dss->xce == NULL)
@@ -1366,7 +1362,7 @@ void libxl__domain_suspend(libxl__egc *egc, libxl__domain_suspend_state *dss)
     }
 
     memset(callbacks, 0, sizeof(*callbacks));
-    if (r_info != NULL) {
+    if (dss->remus_state != NULL) {
         callbacks->suspend = libxl__remus_domain_suspend_callback;
         callbacks->postcopy = libxl__remus_domain_resume_callback;
         callbacks->checkpoint = libxl__remus_domain_checkpoint_callback;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd23ff..9970780 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2292,6 +2292,25 @@ typedef struct libxl__logdirty_switch {
     libxl__ev_time timeout;
 } libxl__logdirty_switch;
 
+typedef struct libxl__remus_state {
+    /* filled by the user */
+    /* checkpoint interval */
+    int interval;
+    int blackhole;
+    int compression;
+    /* Script to setup/teardown network buffers */
+    const char *netbufscript;
+    libxl__domain_suspend_state *dss;
+
+    /* private */
+    int saved_rc;
+    int dev_id;
+    /* Opaque context containing network buffer related stuff */
+    void *netbuf_state;
+    libxl__ev_time timeout;
+    libxl__ev_child child;
+} libxl__remus_state;
+
 struct libxl__domain_suspend_state {
     /* set by caller of libxl__domain_suspend */
     libxl__ao *ao;
@@ -2302,7 +2321,7 @@ struct libxl__domain_suspend_state {
     libxl_domain_type type;
     int live;
     int debug;
-    const libxl_domain_remus_info *remus;
+    libxl__remus_state *remus_state;
     /* private */
     xc_evtchn *xce; /* event channel handle */
     int suspend_eventchn;
@@ -2310,7 +2329,6 @@ struct libxl__domain_suspend_state {
     int xcflags;
     int guest_responded;
     const char *dm_savefile;
-    int interval; /* checkpoint interval (for Remus) */
     libxl__save_helper_state shs;
     libxl__logdirty_switch logdirty;
     /* private for libxl__domain_save_device_model */
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzZ-0005ow-2G; Mon, 10 Feb 2014 09:17:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzX-0005nJ-6L
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:27 +0000
Received: from [85.158.143.35:31479] by server-2.bemta-4.messagelabs.com id
	B1/43-10891-62998F25; Mon, 10 Feb 2014 09:17:26 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392023840!4434687!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29907 invoked from network); 10 Feb 2014 09:17:25 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-12.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497756"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bG030288;
	Mon, 10 Feb 2014 17:17:13 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151428-1694495 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:32 +0800
Message-Id: <1392023972-24675-11-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:19,
	Serialize complete at 2014/02/10 17:15:19
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 10/10 V7] libxl: network buffering cmdline switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add a command line switch to the 'xl remus' command to enable or
disable network buffering, and pass the flag on to libxl so that it
can act accordingly. Also update the man pages to reflect the
addition of the new option to the 'xl remus' command.

Note: network buffering is enabled by default. To disable it, use
the -n option.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 docs/man/xl.conf.pod.5    |    6 ++++++
 docs/man/xl.pod.1         |   11 ++++++++++-
 tools/libxl/xl.c          |    4 ++++
 tools/libxl/xl.h          |    1 +
 tools/libxl/xl_cmdimpl.c  |   28 ++++++++++++++++++++++------
 tools/libxl/xl_cmdtable.c |    3 +++
 6 files changed, 46 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.conf.pod.5 b/docs/man/xl.conf.pod.5
index 7c43bde..8ae19bb 100644
--- a/docs/man/xl.conf.pod.5
+++ b/docs/man/xl.conf.pod.5
@@ -105,6 +105,12 @@ Configures the default gateway device to set for virtual network devices.
 
 Default: C<None>
 
+=item B<remus.default.netbufscript="PATH">
+
+Configures the default script used by Remus to set up network buffering.
+
+Default: C</etc/xen/scripts/remus-netbuf-setup>
+
 =item B<output_format="json|sxp">
 
 Configures the default output format used by xl when printing "machine
diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..3c5f246 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -399,7 +399,7 @@ Enable Remus HA for domain. By default B<xl> relies on ssh as a transport
 mechanism between the two hosts.
 
 N.B: Remus support in xl is still in experimental (proof-of-concept) phase.
-     There is no support for network or disk buffering at the moment.
+     There is no support for disk buffering at the moment.
 
 B<OPTIONS>
 
@@ -418,6 +418,15 @@ Generally useful for debugging.
 
 Disable memory checkpoint compression.
 
+=item B<-n>
+
+Disable network output buffering.
+
+=item B<-N> I<netbufscript>
+
+Use <netbufscript> to set up network buffering instead of
+the default (/etc/xen/scripts/remus-netbuf-setup).
+
 =item B<-s> I<sshcommand>
 
 Use <sshcommand> instead of ssh.  String will be passed to sh.
diff --git a/tools/libxl/xl.c b/tools/libxl/xl.c
index 657610b..e02a618 100644
--- a/tools/libxl/xl.c
+++ b/tools/libxl/xl.c
@@ -46,6 +46,7 @@ char *default_vifscript = NULL;
 char *default_bridge = NULL;
 char *default_gatewaydev = NULL;
 char *default_vifbackend = NULL;
+char *default_remus_netbufscript = NULL;
 enum output_format default_output_format = OUTPUT_FORMAT_JSON;
 int claim_mode = 1;
 
@@ -177,6 +178,9 @@ static void parse_global_config(const char *configfile,
     if (!xlu_cfg_get_long (config, "claim_mode", &l, 0))
         claim_mode = l;
 
+    xlu_cfg_replace_string (config, "remus.default.netbufscript",
+                            &default_remus_netbufscript, 0);
+
     xlu_cfg_destroy(config);
 }
 
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..d991fd3 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -153,6 +153,7 @@ extern char *default_vifscript;
 extern char *default_bridge;
 extern char *default_gatewaydev;
 extern char *default_vifbackend;
+extern char *default_remus_netbufscript;
 extern char *blkdev_start;
 
 enum output_format {
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..6d41775 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7265,8 +7265,9 @@ int main_remus(int argc, char **argv)
     r_info.interval = 200;
     r_info.blackhole = 0;
     r_info.compression = 1;
+    r_info.netbuf = 1;
 
-    SWITCH_FOREACH_OPT(opt, "bui:s:e", NULL, "remus", 2) {
+    SWITCH_FOREACH_OPT(opt, "buni:s:N:e", NULL, "remus", 2) {
     case 'i':
         r_info.interval = atoi(optarg);
         break;
@@ -7276,6 +7277,12 @@ int main_remus(int argc, char **argv)
     case 'u':
         r_info.compression = 0;
         break;
+    case 'n':
+        r_info.netbuf = 0;
+        break;
+    case 'N':
+        r_info.netbufscript = optarg;
+        break;
     case 's':
         ssh_command = optarg;
         break;
@@ -7287,6 +7294,9 @@ int main_remus(int argc, char **argv)
     domid = find_domain(argv[optind]);
     host = argv[optind + 1];
 
+    if(!r_info.netbufscript)
+        r_info.netbufscript = default_remus_netbufscript;
+
     if (r_info.blackhole) {
         send_fd = open("/dev/null", O_RDWR, 0644);
         if (send_fd < 0) {
@@ -7324,13 +7334,19 @@ int main_remus(int argc, char **argv)
     /* Point of no return */
     rc = libxl_domain_remus_start(ctx, &r_info, domid, send_fd, recv_fd, 0);
 
-    /* If we are here, it means backup has failed/domain suspend failed.
-     * Try to resume the domain and exit gracefully.
-     * TODO: Split-Brain check.
+    /* check if the domain exists. User may have xl destroyed the
+     * domain to force failover
      */
-    fprintf(stderr, "remus sender: libxl_domain_suspend failed"
-            " (rc=%d)\n", rc);
+    if (libxl_domain_info(ctx, 0, domid)) {
+        fprintf(stderr, "Remus: Primary domain has been destroyed.\n");
+        close(send_fd);
+        return 0;
+    }
 
+    /* If we are here, it means remus setup/domain suspend/backup has
+     * failed. Try to resume the domain and exit gracefully.
+     * TODO: Split-Brain check.
+     */
     if (rc == ERROR_GUEST_TIMEDOUT)
         fprintf(stderr, "Failed to suspend domain at primary.\n");
     else {
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..9b7104c 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -481,6 +481,9 @@ struct cmd_spec cmd_table[] = {
       "-i MS                   Checkpoint domain memory every MS milliseconds (def. 200ms).\n"
       "-b                      Replicate memory checkpoints to /dev/null (blackhole)\n"
       "-u                      Disable memory checkpoint compression.\n"
+      "-n                      Disable network output buffering.\n"
+      "-N <netbufscript>       Use netbufscript to setup network buffering instead of the\n"
+      "                        instead of the default (/etc/xen/scripts/remus-netbuf-setup).\n"
       "-s <sshcommand>         Use <sshcommand> instead of ssh.  String will be passed\n"
       "                        to sh. If empty, run <host> instead of \n"
       "                        ssh <host> xl migrate-receive -r [-e]\n"
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzT-0005mD-DP; Mon, 10 Feb 2014 09:17:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzR-0005m4-SU
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:22 +0000
Received: from [85.158.137.68:38990] by server-3.bemta-3.messagelabs.com id
	97/E8-14520-12998F25; Mon, 10 Feb 2014 09:17:21 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392023836!765585!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12477 invoked from network); 10 Feb 2014 09:17:19 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-9.tower-31.messagelabs.com with SMTP;
	10 Feb 2014 09:17:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497751"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:24 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8aS030292;
	Mon, 10 Feb 2014 17:17:09 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151394-1694487 ;
	Mon, 10 Feb 2014 17:15:13 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:24 +0800
Message-Id: <1392023972-24675-3-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:13,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:16,
	Serialize complete at 2014/02/10 17:15:16
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 02/10 V7] tools/libxl: update
	libxl_domain_remus_info
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

Add two members:
1. netbuf: whether network buffering is enabled
2. netbufscript: the path of the script that will be run to set up
     and tear down the guest's interface.

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl.h         |   13 +++++++++++++
 tools/libxl/libxl_types.idl |    2 ++
 2 files changed, 15 insertions(+), 0 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..d89ad0a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -409,6 +409,19 @@
  */
 #define LIBXL_HAVE_DRIVER_DOMAIN_CREATION 1
 
+/*
+ * LIBXL_HAVE_REMUS_NETBUF 1
+ *
+ * If this is defined, then the libxl_domain_remus_info structure will
+ * have a boolean field (netbuf) and a string field (netbufscript).
+ *
+ * netbuf, if true, indicates that network buffering should be enabled.
+ *
+ * netbufscript, if set, indicates the path to the hotplug script to
+ * setup or teardown network buffers.
+ */
+#define LIBXL_HAVE_REMUS_NETBUF 1
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..e49945a 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -561,6 +561,8 @@ libxl_domain_remus_info = Struct("domain_remus_info",[
     ("interval",     integer),
     ("blackhole",    bool),
     ("compression",  bool),
+    ("netbuf",       bool),
+    ("netbufscript", string),
     ])
 
 libxl_event_type = Enumeration("event_type", [
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:17:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCmzZ-0005pF-GQ; Mon, 10 Feb 2014 09:17:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WCmzX-0005nT-I8
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:17:27 +0000
Received: from [85.158.143.35:21484] by server-3.bemta-4.messagelabs.com id
	F7/BA-11539-52998F25; Mon, 10 Feb 2014 09:17:25 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392023841!4445500!2
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19315 invoked from network); 10 Feb 2014 09:17:24 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-13.tower-21.messagelabs.com with SMTP;
	10 Feb 2014 09:17:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384272000"; 
   d="scan'208";a="9497755"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 10 Feb 2014 17:13:26 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1A9H8bF030288;
	Mon, 10 Feb 2014 17:17:13 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021017151427-1694494 ;
	Mon, 10 Feb 2014 17:15:14 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 17:19:31 +0800
Message-Id: <1392023972-24675-10-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/10 17:15:14,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/10 17:15:18,
	Serialize complete at 2014/02/10 17:15:18
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH 09/10 V7] libxl: control network buffering in
	remus callbacks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Shriram Rajagopalan <rshriram@cs.ubc.ca>

This patch constitutes the core network buffering logic and does
the following:
 a) creates a new network buffer when the domain is suspended
    (remus_domain_suspend_callback)
 b) releases the previous network buffer pertaining to the
    committed checkpoint (remus_domain_checkpoint_dm_saved)

Signed-off-by: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Wen Congyang <wency@cn.fujitsu.com>
---
 tools/libxl/libxl_dom.c |   90 ++++++++++++++++++++++++++++++++++++++++++----
 1 files changed, 82 insertions(+), 8 deletions(-)

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 912a6e4..a4ffdfd 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1243,8 +1243,30 @@ int libxl__toolstack_save(uint32_t domid, uint8_t **buf,
 
 static int libxl__remus_domain_suspend_callback(void *data)
 {
-    /* REMUS TODO: Issue disk and network checkpoint reqs. */
-    return libxl__domain_suspend_common_callback(data);
+    libxl__save_helper_state *shs = data;
+    libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
+
+    /* Convenience aliases */
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    /* REMUS TODO: Issue disk checkpoint reqs. */
+    int ok = libxl__domain_suspend_common_callback(data);
+
+    if (!remus_state->netbuf_state || !ok) goto out;
+
+    /* The domain was suspended successfully. Start a new network
+     * buffer for the next epoch. If this operation fails, then act
+     * as though domain suspend failed -- libxc exits its infinite
+     * loop and ultimately, the replication stops.
+     */
+    if (libxl__remus_netbuf_start_new_epoch(gc, dss->domid,
+                                            remus_state))
+        ok = 0;
+
+ out:
+    return ok;
 }
 
 static int libxl__remus_domain_resume_callback(void *data)
@@ -1257,7 +1279,7 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* REMUS TODO: Deal with disk. Start a new network output buffer */
+    /* REMUS TODO: Deal with disk. */
     return 1;
 }
 
@@ -1266,11 +1288,17 @@ static int libxl__remus_domain_resume_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc);
 
+static void remus_next_checkpoint(libxl__egc *egc, libxl__ev_time *ev,
+                                  const struct timeval *requested_abs);
+
 static void libxl__remus_domain_checkpoint_callback(void *data)
 {
     libxl__save_helper_state *shs = data;
     libxl__domain_suspend_state *dss = CONTAINER_OF(shs, *dss, shs);
-    libxl__egc *egc = dss->shs.egc;
+
+    /* Convenience aliases */
+    libxl__egc *const egc = dss->shs.egc;
+
     STATE_AO_GC(dss->ao);
 
     /* This would go into tailbuf. */
@@ -1284,10 +1312,56 @@ static void libxl__remus_domain_checkpoint_callback(void *data)
 static void remus_checkpoint_dm_saved(libxl__egc *egc,
                                       libxl__domain_suspend_state *dss, int rc)
 {
-    /* REMUS TODO: Wait for disk and memory ack, release network buffer */
-    /* REMUS TODO: make this asynchronous */
-    assert(!rc); /* REMUS TODO handle this error properly */
-    usleep(dss->remus_state->interval * 1000);
+    /* Convenience aliases */
+    /*
+     * REMUS TODO: Wait for disk and explicit memory ack (through restore
+     * callback from remote) before releasing network buffer.
+     */
+    libxl__remus_state *const remus_state = dss->remus_state;
+
+    STATE_AO_GC(dss->ao);
+
+    if (rc) {
+        LOG(ERROR, "Failed to save device model. Terminating Remus..");
+        goto out;
+    }
+
+    if (remus_state->netbuf_state) {
+        rc = libxl__remus_netbuf_release_prev_epoch(gc, dss->domid,
+                                                    remus_state);
+        if (rc) {
+            LOG(ERROR, "Failed to release network buffer."
+                " Terminating Remus..");
+            goto out;
+        }
+    }
+
+    /* Set checkpoint interval timeout */
+    rc = libxl__ev_time_register_rel(gc, &remus_state->timeout,
+                                     remus_next_checkpoint,
+                                     dss->remus_state->interval);
+    if (rc) {
+        LOG(ERROR, "unable to register timeout for next epoch."
+            " Terminating Remus..");
+        goto out;
+    }
+    return;
+
+ out:
+    libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 0);
+}
+
+static void remus_next_checkpoint(libxl__egc *egc, libxl__ev_time *ev,
+                                  const struct timeval *requested_abs)
+{
+    libxl__remus_state *remus_state = CONTAINER_OF(ev, *remus_state, timeout);
+
+    /* Convenience aliases */
+    libxl__domain_suspend_state *const dss = remus_state->dss;
+
+    STATE_AO_GC(dss->ao);
+
+    libxl__ev_time_deregister(gc, &remus_state->timeout);
     libxl__xc_domain_saverestore_async_callback_done(egc, &dss->shs, 1);
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:23:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCn5b-0007hx-17; Mon, 10 Feb 2014 09:23:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCn5Z-0007hs-FU
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 09:23:41 +0000
Received: from [85.158.143.35:8496] by server-3.bemta-4.messagelabs.com id
	1C/B7-11539-C9A98F25; Mon, 10 Feb 2014 09:23:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392024219!4436111!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1984 invoked from network); 10 Feb 2014 09:23:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 09:23:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384300800"; d="scan'208";a="99415379"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 09:23:38 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 04:23:38 -0500
Message-ID: <1392024217.5117.4.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 09:23:37 +0000
In-Reply-To: <osstest-24821-mainreport@xen.org>
References: <osstest-24821-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24821: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 08:13 +0000, xen.org wrote:
> flight 24821 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24821/
> 
> Failures and problems with tests :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl        10 capture-logs(10) broken in 24807 REGR. vs. 24743

2014-02-10 03:41:24 Z XenUse overriding $USER to osstest
Can't exec "/usr/groups/xencore/systems/bin/xenuse": Input/output error at Osstest.pm line 274.
/usr/groups/xencore/systems/bin/xenuse --off marilith-n5: -1 Input/output error at Osstest.pm line 275.

There seems to have been a lot of this over the w/e on woking. When I
looked yesterday /u/g/xencore just wasn't there, but I restarted autofs
and it came back.

Trying it this morning I got the same I/O error but then it started
working, all I did was ls the directory containing xenuse.

Very odd.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:47:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnSm-0000DY-QC; Mon, 10 Feb 2014 09:47:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WCnSl-0000DT-Pz
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:47:40 +0000
Received: from [85.158.143.35:64541] by server-2.bemta-4.messagelabs.com id
	C7/03-10891-B30A8F25; Mon, 10 Feb 2014 09:47:39 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392025655!4444489!1
X-Originating-IP: [209.85.192.179]
X-SpamReason: No, hits=-1.5 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,ML_RADAR_FP_R_14,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19434 invoked from network); 10 Feb 2014 09:47:36 -0000
Received: from mail-pd0-f179.google.com (HELO mail-pd0-f179.google.com)
	(209.85.192.179)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 09:47:36 -0000
Received: by mail-pd0-f179.google.com with SMTP id fp1so5484075pdb.24
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 01:47:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:subject:mime-version:message-id:content-type;
	bh=XlP2gn1MwW3NIHMuXGIF151BV7DzdCNVkRDe3RxIjY8=;
	b=YUdgwcaGHy4F/hmw2DuFhVXFTINoKyMKyFPlx4mO24ox0yzQd6iTXtrcXlWyzbNgy6
	eD7uq65tYTXYAjudtxB4/PLmLSkIjgr6bhgW3QU+Ft+jlhY527annsBpLWeA2uBK6M4R
	inAwD6xVAjdGRf0CljJPQSprsg5gnPVKP+ATTJOzDjkKmmx6rZPu8Mg5EsrdT36Hb86B
	FAJV6op/5iqky+a5XyBxrqIZj5LWSSSrrWE3G4tBPgDcCVpfari+6praRtvQrDklxFkS
	orgT3KnxY55hErEUOtojEaUn/peuo9Xb7pEh6+LikRKzLmSgUDQ5emGpks6dpTuvS1wX
	kmSQ==
X-Received: by 10.66.233.71 with SMTP id tu7mr24719439pac.22.1392025654571;
	Mon, 10 Feb 2014 01:47:34 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id
	ns7sm40629911pbc.32.2014.02.10.01.47.32 for <xen-devel@lists.xen.org>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 10 Feb 2014 01:47:33 -0800 (PST)
Date: Mon, 10 Feb 2014 17:47:33 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021017472829063425@gmail.com>
Subject: [Xen-devel] [XEN BUG] XEN + Qemu upstream + Qcow2 + coroutine = BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4738104836848553965=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============4738104836848553965==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart213635228552_=----"

This is a multi-part message in MIME format.

------=_001_NextPart213635228552_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: base64

RGVhciBBTEwhDQoNCnFjb3cyIGZvcm1hdCBkaXNrIHdvcmtzIHdlbGwgd2l0aCB4ZW4gZm9yIG1v
c3QgY2FzZXM7DQpCVVQgaWYgdXNpbmcgY29yb3V0aW5lIGZ1bmN0aW9uIHN1Y2ggYXMgYmxvY2st
c3RyZWFtLCB0aGUgcWVtdSBwcm9jZXNzIGNyYXNoZWQhDQpBbmQgd2UgY2FuIGZvdW5kIGZvbGxv
d2luZyBtc2cgZnJvbSBjbWQgZG1lc2cNCg0KcWVtdS1zeXN0ZW0taTM4WzIwNTE0XTogc2VnZmF1
bHQgYXQgNjAgaXAgMDAwMDdmMmI2MGRjODc5YSBzcCAwMDAwN2ZmZmZjYzJmMDAwIGVycm9yIDQg
aW4gcWVtdS1zeXN0ZW0taTM4Nls3ZjJiNjBjZGYwMDArNTI5MDAwXSANCg0KQW55IHN1Z2dlc3Rp
b24/DQoNCi0tLS0tLS0tLS0tLS0tLS0NClhFTiB2ZXJzaW9uo7ogeGVuIDQuNC4wLXJjMw0KZ3Vl
c3Qgb3MgOiBjZW50b3MgNi40DQoNCg0KDQoNCg0KaGVyYmVydCBjbGFuZA==

------=_001_NextPart213635228552_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3Dgb2312" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =CE=A2=C8=ED=D1=C5=BA=DA; COLOR: #000000;=
 LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear ALL!</DIV>
<DIV>&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">qcow2 format disk works well with xen for =
most=20
cases;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">BUT if using coroutine function such as=20
block-stream, the qemu process crashed!</DIV>
<DIV style=3D"TEXT-INDENT: 2em">And we can found following msg from cmd=20
dmesg</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">qemu-system-i38[20514]: segfault at 60 ip=20
00007f2b60dc879a sp 00007ffffcc2f000 error 4 in=20
qemu-system-i386[7f2b60cdf000+529000] </DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Any suggestion?</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">----------------</DIV>
<DIV style=3D"TEXT-INDENT: 2em">XEN version=A3=BA xen 4.4.0-rc3</DIV>
<DIV style=3D"TEXT-INDENT: 2em">guest os : centos 6.4</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV></BODY></HTML>

------=_001_NextPart213635228552_=------



--===============4738104836848553965==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4738104836848553965==--
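[Archive note: the dmesg line quoted in the report above already contains enough to locate the crash site. Subtracting the mapping base from the faulting ip gives a file offset that can be fed to `addr2line -e qemu-system-i386`. A minimal sketch of that arithmetic, with the values taken from the report; the bounds check is our assumption about how one would sanity-check the offset:]

```c
#include <stdint.h>

/* "segfault at 60 ip 00007f2b60dc879a ... in
 * qemu-system-i386[7f2b60cdf000+529000]" means the text mapping starts at
 * 0x7f2b60cdf000 and is 0x529000 bytes long. The offset usable with
 * `addr2line -e qemu-system-i386 <off>` is ip - base, provided it falls
 * inside the mapping. */
static uint64_t fault_offset(uint64_t ip, uint64_t base, uint64_t size)
{
    uint64_t off = ip - base;
    return off < size ? off : (uint64_t)-1; /* (uint64_t)-1: ip not in mapping */
}
```

With that offset, addr2line or `objdump -d` against the matching qemu binary names the crashing function, which is what a reply to this report would ask for.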



From xen-devel-bounces@lists.xen.org Mon Feb 10 09:52:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnXe-0000kI-In; Mon, 10 Feb 2014 09:52:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WCnXd-0000kD-Rf
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 09:52:42 +0000
Received: from [85.158.137.68:21171] by server-13.bemta-3.messagelabs.com id
	21/F3-26923-961A8F25; Mon, 10 Feb 2014 09:52:41 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392025950!775496!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjE2MzcgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6634 invoked from network); 10 Feb 2014 09:52:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 09:52:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384300800"; 
	d="asc'?scan'208";a="99420175"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 09:52:29 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 04:52:28 -0500
Message-ID: <1392025946.12373.22.camel@Abyss>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 10 Feb 2014 10:52:26 +0100
In-Reply-To: <52F8A15A020000780011A9E2@nat28.tlf.novell.com>
References: <1391911066-2572-1-git-send-email-jtweaver@hawaii.edu>
	<52F8A15A020000780011A9E2@nat28.tlf.novell.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, juergen.gross@ts.fujitsu.com,
	esb@ics.hawaii.edu, xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH v3] Xen sched: Fix multiple runqueues in
 credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7555661680099232760=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7555661680099232760==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-FVdf3VovzcPNGLtzvkNG"

--=-FVdf3VovzcPNGLtzvkNG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-02-10 at 08:52 +0000, Jan Beulich wrote:
> >>> On 09.02.14 at 02:57, Justin Weaver <jtweaver@hawaii.edu> wrote:
> > @@ -1959,15 +1961,25 @@ static void init_pcpu(const struct scheduler *o=
ps, int cpu)
> >          return;
> >      }
> > =20
> > -    /* Figure out which runqueue to put it in */
> > +    /*
> > +     * Choose which run queue to add cpu to based on its socket.
> > +     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a ST=
ARTING
> > +     * callback and socket information is not yet available for it).
>=20
> Did you verify that last part to be the case? Because if so, we would
> probably be better off fixing the initialization ordering.
>=20
Last part =3D=3D "socket information is not yet available" ? If yes, yes, a=
t
least on my system, cpu_to_socket() always return 0 (or, if I statically
initialize the array to -1, it always return -1) at that time, and I
have CPU0 on socket 1, so I'm quite sure that is the case.

By fixing the init order, do you mean moving whatever does the
cpu-to-socket mapping before scheduler's initialization?

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-FVdf3VovzcPNGLtzvkNG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL4oVoACgkQk4XaBE3IOsTjuQCcDE0rhFPaRpP6d4ivhRadFoEk
i+0An1INJIiAbDxZxgqL1bYWn9PVtyyQ
=psPA
-----END PGP SIGNATURE-----

--=-FVdf3VovzcPNGLtzvkNG--


--===============7555661680099232760==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7555661680099232760==--
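[Archive note: the init_pcpu() logic Jan and Dario are discussing above can be sketched roughly as below. cpu_to_socket() is a stub for illustration (four CPUs per socket is an arbitrary assumption); the real topology lookup lives inside Xen:]

```c
/* Rough sketch of the run-queue selection under discussion: each pCPU joins
 * the run queue matching its socket, except CPU 0, which is hard-coded to
 * run queue 0 because (per the thread) cpu_to_socket() does not yet return
 * valid data when CPU 0 is initialised. */
static int cpu_to_socket(int cpu)
{
    return cpu / 4; /* pretend 4 CPUs per socket; real code reads topology */
}

static int pick_runqueue(int cpu)
{
    if (cpu == 0)
        return 0; /* no STARTING callback, socket info not yet available */
    return cpu_to_socket(cpu);
}
```

Jan's counter-suggestion amounts to making cpu_to_socket() valid before the scheduler runs, which would let the CPU 0 special case disappear.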


From xen-devel-bounces@lists.xen.org Mon Feb 10 09:55:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnad-0000sT-7s; Mon, 10 Feb 2014 09:55:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCnac-0000sM-EP
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 09:55:46 +0000
Received: from [85.158.143.35:31278] by server-3.bemta-4.messagelabs.com id
	8E/4F-11539-122A8F25; Mon, 10 Feb 2014 09:55:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392026144!4429201!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5801 invoked from network); 10 Feb 2014 09:55:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 09:55:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384300800"; d="scan'208";a="101256485"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 09:55:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 04:55:42 -0500
Message-ID: <1392026141.5117.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 10 Feb 2014 09:55:41 +0000
In-Reply-To: <52F509F3.1000806@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Ian
	Jackson <Ian.Jackson@eu.citrix.com>, Rob Hoes <Rob.Hoes@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xenproject.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
> On 07/02/14 16:22, Don Slutz wrote:
> > On 02/07/14 05:05, Ian Campbell wrote:
> >> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
> >>>>>> cc1: warnings being treated as errors
> >>>>>> xenlight_stubs.c: In function 'Defbool_val':
> >>>>>> xenlight_stubs.c:344: warning: implicit declaration of function
> >>>>>> 'CAMLreturnT'
> >>>>>> xenlight_stubs.c:344: error: expected expression before
> >>>>>> 'libxl_defbool'
> >>>>>> xenlight_stubs.c: In function 'String_option_val':
> >>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
> >>>>>> xenlight_stubs.c: In function 'aohow_val':
> >>>>>> xenlight_stubs.c:440: error: expected expression before
> >>>>>> 'libxl_asyncop_how'
> >>> Any idea on what to do about ocaml issue?
> >> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
> >> What version do you have?
> >>
> >> Ian.
> >>
> > dcs-xen-53:~>ocaml -version
> > The Objective Caml toplevel, version 3.09.3
> >
> >    -Don Slutz
> >
> 
> Which, according to google, was introduced in 3.09.4
> 
> I think the ./configure script needs a min version check.

Yes, I think so too. Rob, could you advise on a suitable minimum and
perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary.

Also CCing Roger who added the ocaml autoconf stuff.

> Applying the top macro from
> http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
> as a compatibility hack might also be a good idea.

Not a bad idea.

Ian.
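[Archive note: the compatibility hack Andrew points at boils down to defining CAMLreturnT when the installed OCaml headers predate 3.09.4. A standalone sketch of that shape; caml_local_roots and caml__frame are stubbed here so the snippet compiles on its own, whereas real C stubs get them from <caml/memory.h> via CAMLparam*():]

```c
/* Stand-in so this sketch compiles outside an OCaml tree (assumption:
 * real stubs use the declaration from <caml/memory.h>). */
static void *caml_local_roots;

#ifndef CAMLreturnT
/* Shape of the macro that OCaml >= 3.09.4 ships: restore the local-roots
 * frame saved by CAMLparam*(), then return the result. */
#define CAMLreturnT(type, result) do { \
    type caml__temp_result = (result); \
    caml_local_roots = caml__frame; \
    return (caml__temp_result); \
} while (0)
#endif

static int example_stub(void)
{
    void *caml__frame = caml_local_roots; /* what CAMLparam0() would set up */
    CAMLreturnT(int, 42);
}
```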


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
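[Archive note: the minimum-version check Ian asks for amounts to a dotted-version comparison at configure time against `ocamlc -version` output, with 3.09.4 as the floor per the thread. The helper below sketches just the comparison; its name and placement are ours, and the real check would go in m4/ocaml.m4:]

```c
#include <stdio.h>

/* Compare dotted version strings like "3.09.3" numerically; returns nonzero
 * when `have` >= `need`. A configure-time check would reject toolchains
 * where version_ge(ocamlc_version, "3.09.4") fails. */
static int version_ge(const char *have, const char *need)
{
    int h[3] = {0, 0, 0}, n[3] = {0, 0, 0};
    sscanf(have, "%d.%d.%d", &h[0], &h[1], &h[2]);
    sscanf(need, "%d.%d.%d", &n[0], &n[1], &n[2]);
    for (int i = 0; i < 3; i++)
        if (h[i] != n[i])
            return h[i] > n[i];
    return 1; /* equal counts as new enough */
}
```

Don's 3.09.3 toplevel would fail this check, which matches the build break he reported.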

From xen-devel-bounces@lists.xen.org Mon Feb 10 09:55:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 09:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnad-0000sT-7s; Mon, 10 Feb 2014 09:55:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCnac-0000sM-EP
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 09:55:46 +0000
Received: from [85.158.143.35:31278] by server-3.bemta-4.messagelabs.com id
	8E/4F-11539-122A8F25; Mon, 10 Feb 2014 09:55:45 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392026144!4429201!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5801 invoked from network); 10 Feb 2014 09:55:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 09:55:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384300800"; d="scan'208";a="101256485"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 09:55:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 04:55:42 -0500
Message-ID: <1392026141.5117.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 10 Feb 2014 09:55:41 +0000
In-Reply-To: <52F509F3.1000806@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Ian
	Jackson <Ian.Jackson@eu.citrix.com>, Rob Hoes <Rob.Hoes@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xenproject.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
> On 07/02/14 16:22, Don Slutz wrote:
> > On 02/07/14 05:05, Ian Campbell wrote:
> >> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
> >>>>>> cc1: warnings being treated as errors
> >>>>>> xenlight_stubs.c: In function 'Defbool_val':
> >>>>>> xenlight_stubs.c:344: warning: implicit declaration of function
> >>>>>> 'CAMLreturnT'
> >>>>>> xenlight_stubs.c:344: error: expected expression before
> >>>>>> 'libxl_defbool'
> >>>>>> xenlight_stubs.c: In function 'String_option_val':
> >>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
> >>>>>> xenlight_stubs.c: In function 'aohow_val':
> >>>>>> xenlight_stubs.c:440: error: expected expression before
> >>>>>> 'libxl_asyncop_how'
> >>> Any idea on what to do about ocaml issue?
> >> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
> >> What version do you have?
> >>
> >> Ian.
> >>
> > dcs-xen-53:~>ocaml -version
> > The Objective Caml toplevel, version 3.09.3
> >
> >    -Don Slutz
> >
> 
> Which, according to google, was introduced in 3.09.4
> 
> I think the ./configure script needs a min version check.

Yes, I think so too. Rob, could you advise on a suitable minimum and
perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary?

Also CCing Roger who added the ocaml autoconf stuff.
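For illustration, a minimal shell sketch of such a floor check (the 3.09.4
minimum comes from this thread; in the real tree this would be an autoconf
macro in m4/ocaml.m4 rather than raw shell, and `sort -V` assumes GNU
coreutils):

```shell
# Sketch of a minimum ocaml version check; the real patch would be an
# AC_* macro in m4/ocaml.m4, not inline shell like this.
min_version="3.09.4"
ocaml_version=$(ocamlc -version 2>/dev/null || echo "0")

# True if $1 >= $2 when compared as dotted version strings.
# Relies on GNU sort's -V (version sort).
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

if version_ge "$ocaml_version" "$min_version"; then
    echo "ocaml $ocaml_version is new enough"
else
    echo "ocaml $ocaml_version is too old (need >= $min_version)" >&2
fi
```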

> Applying the top macro from
> http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
> as a compatibility hack might also be a good idea.

Not a bad idea.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:01:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCng2-0001Tv-5N; Mon, 10 Feb 2014 10:01:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCng0-0001To-Il
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 10:01:20 +0000
Received: from [193.109.254.147:52184] by server-7.bemta-14.messagelabs.com id
	2B/98-23424-F63A8F25; Mon, 10 Feb 2014 10:01:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392026479!3209349!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13395 invoked from network); 10 Feb 2014 10:01:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 10:01:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 10:02:43 +0000
Message-Id: <52F8B17A020000780011AA71@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 10:01:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dario Faggioli" <dario.faggioli@citrix.com>
References: <1391911066-2572-1-git-send-email-jtweaver@hawaii.edu>
	<52F8A15A020000780011A9E2@nat28.tlf.novell.com>
	<1392025946.12373.22.camel@Abyss>
In-Reply-To: <1392025946.12373.22.camel@Abyss>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Marcus.Granado@eu.citrix.com, Justin Weaver <jtweaver@hawaii.edu>,
	george.dunlap@eu.citrix.com, juergen.gross@ts.fujitsu.com,
	esb@ics.hawaii.edu, xen-devel@lists.xen.org, henric@hawaii.edu
Subject: Re: [Xen-devel] [PATCH v3] Xen sched: Fix multiple runqueues in
 credit2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 10:52, Dario Faggioli <dario.faggioli@citrix.com> wrote:
> On Mon, 2014-02-10 at 08:52 +0000, Jan Beulich wrote:
>> >>> On 09.02.14 at 02:57, Justin Weaver <jtweaver@hawaii.edu> wrote:
>> > @@ -1959,15 +1961,25 @@ static void init_pcpu(const struct scheduler *ops, 
> int cpu)
>> >          return;
>> >      }
>> >  
>> > -    /* Figure out which runqueue to put it in */
>> > +    /*
>> > +     * Choose which run queue to add cpu to based on its socket.
>> > +     * If it's CPU 0, hard code it to run queue 0 (it doesn't get a STARTING
>> > +     * callback and socket information is not yet available for it).
>> 
>> Did you verify that last part to be the case? Because if so, we would
>> probably be better off fixing the initialization ordering.
>> 
> Last part == "socket information is not yet available"? If yes, yes, at
> least on my system, cpu_to_socket() always returns 0 (or, if I statically
> initialize the array to -1, it always returns -1) at that time, and I
> have CPU0 on socket 1, so I'm quite sure that is the case.

Okay.

> By fixing the init order, do you mean moving whatever does the
> cpu-to-socket mapping before scheduler's initialization?

Yes (or vice versa), if reasonably possible.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:11:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:11:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnpw-0002Cb-H9; Mon, 10 Feb 2014 10:11:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCnjV-0001b9-UF
	for Xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 10:04:58 +0000
Received: from [193.109.254.147:10041] by server-13.bemta-14.messagelabs.com
	id 9F/47-01226-944A8F25; Mon, 10 Feb 2014 10:04:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392026695!3204621!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15376 invoked from network); 10 Feb 2014 10:04:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:04:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,816,1384300800"; d="scan'208";a="99422045"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 10:04:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:04:50 -0500
Message-ID: <1392026689.5117.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Peter X. Gao" <peterxianggao@gmail.com>
Date: Mon, 10 Feb 2014 10:04:49 +0000
In-Reply-To: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
X-Mailman-Approved-At: Mon, 10 Feb 2014 10:11:34 +0000
Cc: xen-users <xen-users@lists.xen.org>
Subject: Re: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These sorts of questions are more appropriate to the users list, so
moving there.

On Fri, 2014-02-07 at 13:19 -0800, Peter X. Gao wrote:

>        I am new to Xen and I am trying to run Intel DPDK inside a domU
> with virtio on Xen 4.2. Is it possible to do this? 

There is no mainline support for virtio under Xen.

You can find info on the wiki about a GSoC project from a few years back
which prototyped this for some (but not all) configurations, but this
was only a prototype and is not ongoing work.

Why do you want virtio? The Xen PV drivers have been present in mainline
kernels for many years and are enabled in most distros these days.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:16:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:16:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnuG-0002Qn-Jn; Mon, 10 Feb 2014 10:16:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCnuF-0002Qf-7P
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 10:16:03 +0000
Received: from [193.109.254.147:50765] by server-14.bemta-14.messagelabs.com
	id FC/30-29228-2E6A8F25; Mon, 10 Feb 2014 10:16:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392027359!3205823!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8003 invoked from network); 10 Feb 2014 10:16:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:16:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101259520"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:15:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:15:58 -0500
Message-ID: <1392027356.5117.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Mon, 10 Feb 2014 10:15:56 +0000
In-Reply-To: <1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
	<1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-08 at 13:07 +0800, Dongxiao Xu wrote:
> Introduced two new xl commands to attach/detach CQM service for a guest
> $ xl pqos-attach cqm domid
> $ xl pqos-detach cqm domid
> 
> Introduce one new xl command to retrive guest CQM information

"retrieve"

> $ xl pqos-list cqm

Please patch the xl manpages to describe all these new commands.

I wonder though -- are these aimed at end users or are they really for
developer use? I'm wondering if they should be exposed this way or
whether they should be exposed by some lower level tool (similar to how
xenpm is separate). I don't know what the correct answer is here.

> +int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
> +{
> +    int ret = 0;
> +    DECLARE_SYSCTL;
> +
> +    sysctl.cmd = XEN_SYSCTL_getcqminfo;
> +    if ( xc_sysctl(xch, &sysctl) < 0 )
> +        ret = -1;

xc_sysctl returns -1 on error AFAICT. So:
	ret = xc_sysctl(...);
	if (ret >= 0)
	{
		info->etc
	}
	return ret;

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..43c0f48 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
>                                   ])),
>             ("domain_create_console_available", Struct(None, [])),
>             ]))])
> +
> +libxl_cqminfo = Struct("cqminfo", [

You need to also patch libxl.h to add a suitable LIBXL_HAVE_FOO define,
see the existing examples in that header.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:21:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCnzJ-0002jt-F0; Mon, 10 Feb 2014 10:21:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>) id 1WCnzH-0002jj-Vz
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 10:21:16 +0000
Received: from [193.109.254.147:58753] by server-10.bemta-14.messagelabs.com
	id 50/0C-10711-B18A8F25; Mon, 10 Feb 2014 10:21:15 +0000
X-Env-Sender: ijc@hellion.org.uk
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392027674!3207452!1
X-Originating-IP: [212.110.190.137]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32165 invoked from network); 10 Feb 2014 10:21:14 -0000
Received: from benson.vm.bytemark.co.uk (HELO benson.vm.bytemark.co.uk)
	(212.110.190.137)
	by server-4.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Feb 2014 10:21:14 -0000
Received: from [185.25.64.249] (helo=[10.80.2.80])
	by benson.vm.bytemark.co.uk with esmtpsa
	(TLS1.0:RSA_AES_256_CBC_SHA1:32) (Exim 4.72)
	(envelope-from <ijc@hellion.org.uk>)
	id 1WCnz1-0007Fo-M8; Mon, 10 Feb 2014 10:20:59 +0000
Message-ID: <1392027658.5117.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <ijc@hellion.org.uk>
To: Michael Tokarev <mjt@tls.msk.ru>
Date: Mon, 10 Feb 2014 10:20:58 +0000
In-Reply-To: <52F71EFF.2040803@msgid.tls.msk.ru>
References: <79F7254377EC0747833BC5079E9285D639B241C2@SACEXCMBX03-PRD.hq.netapp.com>
	<1391675519.17309.27.camel@nilsson.home.kraxel.org>
	<1391675812.22033.2.camel@dagon.hellion.org.uk>
	<79F7254377EC0747833BC5079E9285D639B24512@SACEXCMBX03-PRD.hq.netapp.com>
	<52F40718.4070904@msgid.tls.msk.ru>
	<1391768389.2162.26.camel@kazak.uk.xensource.com>
	<52F4B3AA.5050403@msgid.tls.msk.ru>
	<1391773231.2162.82.camel@kazak.uk.xensource.com>
	<52F71EFF.2040803@msgid.tls.msk.ru>
X-Mailer: Evolution 3.4.4-3 
Mime-Version: 1.0
Cc: "Hoyer, David" <David.Hoyer@netapp.com>,
	"seabios@seabios.org" <seabios@seabios.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel <xen-devel@lists.xen.org>, Gerd Hoffmann <kraxel@redhat.com>,
	Jan Beulich <JBeulich@suse.com>, "Moyer, Keith" <Keith.Moyer@netapp.com>
Subject: Re: [Xen-devel] [SeaBIOS] SeaBIOS causing Xen HVM VCPU Triple fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-09 at 10:23 +0400, Michael Tokarev wrote:
> 07.02.2014 15:40, Ian Campbell wrote:
> []
> > Note that the SeaBIOS image is baked into Xen at build time, not picked
> > up at runtime like with (I think) qemu.
> 
> Oh. Now things are much clearer.  Indeed, qemu does not embed seabios inside
> any image at build time, it just loads the bios from a given file, and you
> can even tell it to use another bios.  Ditto for other optional roms, network
> boot roms, vga bios and other things.
> 
> And so yes, with new xen builds it should just pick bios-256k.bin from qemu
> instead of bios.bin, so indeed, qemu does not need to build xen support for
> a smaller version of bios.bin it ships, anymore.  So finally I see the good
> reason of what Gerd did when removing xen support from small bios.bin.
> 
> Is it possible to change xen to do something similar?

I suppose it's a SMOP, so all that is needed is for someone who is
interested to do the work.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-09 at 10:23 +0400, Michael Tokarev wrote:
> 07.02.2014 15:40, Ian Campbell wrote:
> []
> > Note that the SeaBIOS image is baked into Xen at build time, not picked
> > up at runtime like with (I think) qemu.
> 
> Oh, now things are much clearer.  Indeed, qemu does not embed SeaBIOS inside
> any image at build time; it just loads the BIOS from a given file, and you
> can even tell it to use a different BIOS.  The same goes for the other
> optional ROMs: network boot ROMs, the VGA BIOS and so on.
> 
> So yes, new Xen builds should just pick up bios-256k.bin from qemu instead
> of bios.bin, which means qemu no longer needs to build Xen support into the
> smaller bios.bin it ships.  Now I finally see the good reason for what Gerd
> did when he removed Xen support from the small bios.bin.
> 
> Is it possible to change Xen to do something similar?

I suppose it's a SMOP (a simple matter of programming), so all that is
needed is for someone who is interested to do the work.

Ian.
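Michael's description of runtime BIOS selection can be sketched in shell. The `-bios` flag is qemu's actual mechanism for pointing at an alternative image, but the `find_bios` helper and the `/usr/share/seabios` path below are illustrative assumptions, not part of either project:

```shell
#!/bin/sh
# Sketch of runtime BIOS selection as described above: qemu reads the BIOS
# image from a file at startup, so preferring the 256k image over the legacy
# small one is just a lookup.  find_bios is a hypothetical helper; the
# install path varies by distribution.
find_bios () {
    dir=$1
    for f in bios-256k.bin bios.bin; do        # prefer the 256k image
        if [ -e "$dir/$f" ]; then
            echo "$dir/$f"
            return 0
        fi
    done
    return 1
}

# Typical use (not run here):
#   qemu-system-x86_64 -bios "$(find_bios /usr/share/seabios)"
```

Doing the same lookup at domain-build time, instead of baking the blob into the hypervisor tools, is essentially the change Michael asks about.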


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:32:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCo9Z-0003YU-W5; Mon, 10 Feb 2014 10:31:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCo9X-0003YP-Pz
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 10:31:51 +0000
Received: from [193.109.254.147:32366] by server-13.bemta-14.messagelabs.com
	id AA/2E-01226-69AA8F25; Mon, 10 Feb 2014 10:31:50 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392028308!3205397!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7392 invoked from network); 10 Feb 2014 10:31:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:31:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101261950"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:31:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:31:34 -0500
Message-ID: <52F8AA78.80805@citrix.com>
Date: Mon, 10 Feb 2014 10:31:20 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Rashika Kheria <rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
In-Reply-To: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-kernel@vger.kernel.org, josh@joshtriplett.org
Subject: Re: [Xen-devel] [PATCH 1/4] drivers: xen: Mark function as static
	in platform-pci.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/02/14 10:59, Rashika Kheria wrote:
> Mark function as static in xen/platform-pci.c because it is not used
> outside this file.
> 
> This eliminates the following warning in xen/platform-pci.c:
> drivers/xen/platform-pci.c:48:15: warning: no previous prototype for ‘alloc_xen_mmio’ [-Wmissing-prototypes]
> 
> Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
> Reviewed-by: Josh Triplett <josh@joshtriplett.org>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:32:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:32:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoA2-0003aP-Es; Mon, 10 Feb 2014 10:32:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCoA1-0003aI-Ew
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 10:32:21 +0000
Received: from [85.158.139.211:41238] by server-16.bemta-5.messagelabs.com id
	E2/B7-05060-4BAA8F25; Mon, 10 Feb 2014 10:32:20 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392028338!2846087!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5979 invoked from network); 10 Feb 2014 10:32:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:32:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101262090"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:32:17 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:32:17 -0500
Message-ID: <52F8AAAF.5040909@citrix.com>
Date: Mon, 10 Feb 2014 10:32:15 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Rashika Kheria <rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
	<a609cbd7fcce1a4f5df0b00e3f866461b9ad071f.1391943416.git.rashika.kheria@gmail.com>
In-Reply-To: <a609cbd7fcce1a4f5df0b00e3f866461b9ad071f.1391943416.git.rashika.kheria@gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, josh@joshtriplett.org,
	Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] [PATCH 3/4] drivers: xen: Move prototype
 declaration to appropriate header file from arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/02/14 11:08, Rashika Kheria wrote:
> Move prototype declaration to header file include/xen/xen-ops.h from
> arch/x86/xen/xen-ops.h because they are used by more than one file.
> 
> This eliminates the following warning in drivers/xen/events/:
> drivers/xen/events_2l.c:1231:13: warning: no previous prototype for ‘xen_debug_interrupt’ [-Wmissing-prototypes]
> 
> Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
> Reviewed-by: Josh Triplett <josh@joshtriplett.org>
[...]
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -2,6 +2,7 @@
>  #define INCLUDE_XEN_OPS_H
>  
>  #include <linux/percpu.h>
> +#include <linux/interrupt.h>
>  #include <asm/xen/interface.h>
>  
>  DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu);
> @@ -35,4 +36,6 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  			       int numpgs, struct page **pages);
>  
>  bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
> +
> +irqreturn_t xen_debug_interrupt(int irq, void *dev_id);

This should be moved to include/xen/events.h instead.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:37:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoEq-0003tw-3x; Mon, 10 Feb 2014 10:37:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCoEo-0003th-7x
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 10:37:18 +0000
Received: from [193.109.254.147:56715] by server-16.bemta-14.messagelabs.com
	id 12/CF-21945-DDBA8F25; Mon, 10 Feb 2014 10:37:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392028635!3212107!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20768 invoked from network); 10 Feb 2014 10:37:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:37:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101262985"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:37:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:37:14 -0500
Message-ID: <1392028633.5117.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 10:37:13 +0000
In-Reply-To: <osstest-24817-mainreport@xen.org>
References: <osstest-24817-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-09 at 16:04 +0000, xen.org wrote:
> 
>  build-armhf-pvops             4 kernel-build                 fail   never pass 

+ git checkout 494479038d97f1b9f76fc633a360a681acdf035c
fatal: reference is not a tree: 494479038d97f1b9f76fc633a360a681acdf035c

This is because it is using git://xenbits.xen.org/linux-pvops.git
instead of the tree it should be testing...

The following fixes it for me, although even with the results as I wanted,
I'm not 100% sure this override was right in the first place.  In my
experiments with cr-daily-branch I see:

        Branch		$TREE_LINUX		$TREE_LINUX_ARM
        
        xen-unstable	pvops			pvops
        linux-linus	torvalds		pvops
        linux-arm-xen	arm-xen			arm-xen

        Key:
        [arm-xen] git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
        [torvalds] git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
        [pvops] git://xenbits.xen.org/linux-pvops.git
        
IOW $TREE_LINUX is always correct.

When invoking make-flight directly both are always "pvops".
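The runvar defaulting in the patch that follows rests on POSIX `${VAR:-default}` parameter expansion; a minimal illustration, with variable names mirroring the patch and values that are made up for the example:

```shell
#!/bin/sh
# ${VAR:-default} expands to $VAR if it is set and non-empty, otherwise to
# the default; this is how revision_linux falls back when REVISION_LINUX is
# not supplied.  Values here are illustrative only.
DEFAULT_REVISION_LINUX=master
unset REVISION_LINUX
revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
echo "$revision_linux"        # prints "master" (fallback taken)

REVISION_LINUX=v3.13
revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
echo "$revision_linux"        # prints "v3.13" (explicit value wins)
```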

8<---------------------------------------

>From 344b0ca5e623d8212ee9d5452f19aba2df2c97fe Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 10 Feb 2014 09:33:00 +0000
Subject: [PATCH] mfi-common: Only override the pvops kernel repo for
 linux-arm-xen branch

Otherwise e.g. linux-linus tries to use the wrong tree and fails.

I have confirmed that for flights on xen-unstable, linux-linus and
linux-arm-xen the only difference in the runvars is
linux-linus.build-armhf-pvops.tree_linux which changes from
git://xenbits.xen.org/linux-pvops.git to
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git as
expected.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/mfi-common b/mfi-common
index 8f56092..f7f981e 100644
--- a/mfi-common
+++ b/mfi-common
@@ -44,6 +44,11 @@ create_build_jobs () {
 
     if [ "x$arch" = xdisable ]; then continue; fi
 
+    pvops_kernel="
+      tree_linux=$TREE_LINUX
+      revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+    "
+
     case "$arch" in
     armhf)
       case "$branch" in
@@ -59,10 +64,13 @@ create_build_jobs () {
       xen-4.1-testing) continue;;
       xen-4.2-testing) continue;;
       esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX_ARM
-        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-      "
+
+      if [ "$branch" = "linux-arm-xen" ]; then
+        pvops_kernel="
+          tree_linux=$TREE_LINUX_ARM
+          revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+        "
+      fi
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
       "
@@ -71,10 +79,6 @@ create_build_jobs () {
       case "$branch" in
       linux-arm-xen) continue;;
       esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX
-        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
-      "
       ;;
     esac
 
-- 
1.8.5.2
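The shape of the fix, setting the generic default first and overriding it only for the one branch that needs the ARM tree, can be condensed into a standalone sketch. `pick_pvops_tree` is a made-up helper (the real logic lives in `create_build_jobs` in mfi-common), and the URLs are the ones from the key above, used here as fixed illustrative values:

```shell
#!/bin/sh
# Condensed sketch of the patch's default-then-override pattern.
# pick_pvops_tree is hypothetical and hardcodes its inputs for clarity.
pick_pvops_tree () {
    branch=$1
    TREE_LINUX=git://xenbits.xen.org/linux-pvops.git
    TREE_LINUX_ARM=git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git

    tree_linux=$TREE_LINUX                    # default for every branch
    if [ "$branch" = "linux-arm-xen" ]; then
        tree_linux=$TREE_LINUX_ARM            # the only branch that overrides
    fi
    echo "$tree_linux"
}

pick_pvops_tree linux-linus       # -> git://xenbits.xen.org/linux-pvops.git
pick_pvops_tree linux-arm-xen     # -> the sstabellini xen.git tree
```

The old code did the reverse, overriding unconditionally in the armhf arm and only setting the generic default in the other arm, which is how linux-linus ended up pointed at the wrong tree.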




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:37:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoEq-0003tw-3x; Mon, 10 Feb 2014 10:37:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCoEo-0003th-7x
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 10:37:18 +0000
Received: from [193.109.254.147:56715] by server-16.bemta-14.messagelabs.com
	id 12/CF-21945-DDBA8F25; Mon, 10 Feb 2014 10:37:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392028635!3212107!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20768 invoked from network); 10 Feb 2014 10:37:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:37:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101262985"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:37:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:37:14 -0500
Message-ID: <1392028633.5117.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 10:37:13 +0000
In-Reply-To: <osstest-24817-mainreport@xen.org>
References: <osstest-24817-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-09 at 16:04 +0000, xen.org wrote:
> 
>  build-armhf-pvops             4 kernel-build                 fail   never pass 

+ git checkout 494479038d97f1b9f76fc633a360a681acdf035c
fatal: reference is not a tree: 494479038d97f1b9f76fc633a360a681acdf035c

This is because it is using git://xenbits.xen.org/linux-pvops.git
instead of the tree it should be testing...

The following fixes it for me, but although the results are as I wanted
I'm not 100% sure about this override in the first place. In my
experiments with cr-daily-branch I see:

        Branch		$TREE_LINUX		$TREE_LINUX_ARM
        
        xen-unstable	pvops			pvops
        linux-linus	torvalds		pvops
        linux-arm-xen	arm-xen			arm-xen

        Key:
        [arm-xen] git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
        [torvalds] git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
        [pvops] git://xenbits.xen.org/linux-pvops.git
        
IOW $TREE_LINUX is always correct.

When invoking make-flight directly both are always "pvops".

8<---------------------------------------

From 344b0ca5e623d8212ee9d5452f19aba2df2c97fe Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 10 Feb 2014 09:33:00 +0000
Subject: [PATCH] mfi-common: Only override the pvops kernel repo for
 linux-arm-xen branch

Otherwise e.g. linux-linus tries to use the wrong tree and fails.

I have confirmed that for flights on xen-unstable, linux-linus and
linux-arm-xen the only difference in the runvars is
linux-linus.build-armhf-pvops.tree_linux which changes from
git://xenbits.xen.org/linux-pvops.git to
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git as
expected.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/mfi-common b/mfi-common
index 8f56092..f7f981e 100644
--- a/mfi-common
+++ b/mfi-common
@@ -44,6 +44,11 @@ create_build_jobs () {
 
     if [ "x$arch" = xdisable ]; then continue; fi
 
+    pvops_kernel="
+      tree_linux=$TREE_LINUX
+      revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+    "
+
     case "$arch" in
     armhf)
       case "$branch" in
@@ -59,10 +64,13 @@ create_build_jobs () {
       xen-4.1-testing) continue;;
       xen-4.2-testing) continue;;
       esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX_ARM
-        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-      "
+
+      if [ "$branch" = "linux-arm-xen" ]; then
+        pvops_kernel="
+          tree_linux=$TREE_LINUX_ARM
+          revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+        "
+      fi
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
       "
@@ -71,10 +79,6 @@ create_build_jobs () {
       case "$branch" in
       linux-arm-xen) continue;;
       esac
-      pvops_kernel="
-        tree_linux=$TREE_LINUX
-        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
-      "
       ;;
     esac
 
-- 
1.8.5.2




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:42:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoJo-0004S1-HT; Mon, 10 Feb 2014 10:42:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WCoJn-0004Rg-32
	for Xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 10:42:27 +0000
Received: from [85.158.139.211:31983] by server-7.bemta-5.messagelabs.com id
	85/14-14867-21DA8F25; Mon, 10 Feb 2014 10:42:26 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392028944!2850575!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13419 invoked from network); 10 Feb 2014 10:42:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:42:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101263652"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:42:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:42:23 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WCoJj-0006oh-7H;
	Mon, 10 Feb 2014 10:42:23 +0000
Date: Mon, 10 Feb 2014 10:42:23 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Peter X. Gao" <peterxianggao@gmail.com>
Message-ID: <20140210104223.GN15387@zion.uk.xensource.com>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Xen-devel@lists.xenproject.org, wei.liu2@citrix.com
Subject: Re: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
> Hi,
> 
>        I am new to Xen and I am trying to run Intel DPDK inside a domU with
> virtio on Xen 4.2. Is it possible to do this?
> 

DPDK doesn't seem to be tightly coupled with VirtIO, does it?

Could you look at Xen's PV network protocol instead? VirtIO has no
mainline support on Xen, while Xen's PV protocol has been in mainline
for years and is very likely to be enabled by default nowadays.
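A quick way to check whether a guest kernel already carries the PV
network frontend is to grep its config for CONFIG_XEN_NETDEV_FRONTEND
(the Kconfig symbol for xen-netfront). This helper is hypothetical, not
part of any Xen tooling:

```shell
# has_xen_netfront FILE: succeed if the given kernel config enables the
# Xen PV network frontend, either built in (=y) or as a module (=m).
has_xen_netfront () {
    grep -Eq '^CONFIG_XEN_NETDEV_FRONTEND=(y|m)' "$1"
}

# e.g. inside the domU:
#   has_xen_netfront /boot/config-"$(uname -r)" && echo "PV netfront present"
```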

Wei.

> Regards
> Peter

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 10:42:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 10:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoJn-0004Rh-4Z; Mon, 10 Feb 2014 10:42:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCoJl-0004RY-P7
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 10:42:25 +0000
Received: from [85.158.139.211:27257] by server-3.bemta-5.messagelabs.com id
	DE/79-13671-01DA8F25; Mon, 10 Feb 2014 10:42:24 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392028942!2810580!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3611 invoked from network); 10 Feb 2014 10:42:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:42:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99428498"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 10:42:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:42:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCoJR-0006ob-BY;
	Mon, 10 Feb 2014 10:42:05 +0000
Message-ID: <52F8ACFD.7090801@citrix.com>
Date: Mon, 10 Feb 2014 10:42:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
In-Reply-To: <20140210080314.GA758@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Yang Zhang <yang.z.zhang@intel.com>, xiantao.zhang@intel.com,
	JBeulich@suse.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 08:03, Tim Deegan wrote:
> At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>>
>> When enabling log dirty mode, it sets all guest's memory to readonly.
>> And in HAP enabled domain, it modifies all EPT entries to clear write bit
>> to make sure it is readonly. This will cause problem if VT-d shares page
>> table with EPT: the device may issue a DMA write request, then VT-d engine
>> tells it the target memory is readonly and result in VT-d fault.
> So that's a problem even if only the VGA framebuffer is being tracked
> -- DMA from a passthrough device will either cause a spurious error or
> fail to update the dirty bitmap.
>
> I think it would be better not to allow VT-d and EPT to share
> pagetables in cases where devices are passed through (i.e. all cases
> where VT-d is in use).
>
> Tim.

Sadly, this would make shared VT-d/EPT completely pointless as a
feature, causing extra memory overhead in Xen by having to maintain EPT
and IOMMU tables separately.

Any usecase which doesn't involve dirty vram tracking (e.g. headless VM
with SRIOV, PVH dom0) would be adversely affected.
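For reference, page-table sharing can be controlled per-host from the
hypervisor command line rather than disallowed outright. A sketch of a
GRUB stanza; the "no-sharept" sub-option spelling is from memory, so
check docs/misc/xen-command-line.markdown for the exact form in a given
release:

```shell
# Illustrative GRUB fragment: boot Xen with VT-d/EPT page-table sharing
# disabled, so the IOMMU maintains its own tables separately from EPT.
multiboot /boot/xen.gz iommu=verbose,no-sharept
```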

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:00:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoal-0005we-Av; Mon, 10 Feb 2014 10:59:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WCoak-0005wG-0u
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 10:59:58 +0000
Received: from [85.158.137.68:43057] by server-1.bemta-3.messagelabs.com id
	51/AA-17293-C21B8F25; Mon, 10 Feb 2014 10:59:56 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392029994!797187!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4409 invoked from network); 10 Feb 2014 10:59:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 10:59:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101266000"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 10:59:54 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 05:59:53 -0500
Message-ID: <52F8B128.80800@citrix.com>
Date: Mon, 10 Feb 2014 11:59:52 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
In-Reply-To: <1392026141.5117.10.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 10:55, Ian Campbell wrote:
> On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
>> On 07/02/14 16:22, Don Slutz wrote:
>>> On 02/07/14 05:05, Ian Campbell wrote:
>>>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>>>>> cc1: warnings being treated as errors
>>>>>>>> xenlight_stubs.c: In function 'Defbool_val':
>>>>>>>> xenlight_stubs.c:344: warning: implicit declaration of function
>>>>>>>> 'CAMLreturnT'
>>>>>>>> xenlight_stubs.c:344: error: expected expression before
>>>>>>>> 'libxl_defbool'
>>>>>>>> xenlight_stubs.c: In function 'String_option_val':
>>>>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
>>>>>>>> xenlight_stubs.c: In function 'aohow_val':
>>>>>>>> xenlight_stubs.c:440: error: expected expression before
>>>>>>>> 'libxl_asyncop_how'
>>>>> Any idea on what to do about ocaml issue?
>>>> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
>>>> What version do you have?
>>>>
>>>> Ian.
>>>>
>>> dcs-xen-53:~>ocaml -version
>>> The Objective Caml toplevel, version 3.09.3
>>>
>>>    -Don Slutz
>>>
>>
>> Which, according to google, was introduced in 3.09.4
>>
>> I think the ./configure script needs a min version check.
>
> Yes, I think so too. Rob, could you advise on a suitable minimum and
> perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary.
>
> Also CCing Roger who added the ocaml autoconf stuff.

The Ocaml autoconf stuff was picked from http://forge.ocamlcore.org/.
Here is an untested patch for our configure script to check for the
minimum required OCaml version (3.09.3):

(remember to re-generate the configure script after applying)

---
commit e49609cc7b93c2633cf5a49206cb29d6bdd612be
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Mon Feb 10 11:54:13 2014 +0100

    tools: check OCaml version is at least 3.09.3

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

diff --git a/tools/configure.ac b/tools/configure.ac
index 0754f0e..6d1e2ee 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -161,6 +161,12 @@ AS_IF([test "x$ocamltools" = "xy"], [
         AS_IF([test "x$enable_ocamltools" = "xyes"], [
             AC_MSG_ERROR([Ocaml tools enabled, but unable to find Ocaml])])
         ocamltools="n"
+    ], [
+        AX_COMPARE_VERSION([$OCAMLVERSION], [lt], [3.09.4], [
+            AS_IF([test "x$enable_ocamltools" = "xyes"], [
+                AC_MSG_ERROR([Your version of OCaml: $OCAMLVERSION is not supported])])
+            ocamltools="n"
+        ])
     ])
 ])
 AS_IF([test "x$xsmpolicy" = "xy"], [



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
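The configure check in the patch above compares dotted version numbers
with AX_COMPARE_VERSION. For checking a version by hand, a rough shell
analogue looks like this (a sketch relying on GNU coreutils "sort -V";
not part of the Xen build system):

```shell
# version_lt A B: succeed when version A sorts strictly before version B.
# Roughly what AX_COMPARE_VERSION([A], [lt], [B], ...) tests in autoconf.
version_lt () {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

# e.g.: version_lt "$(ocaml -vnum)" 3.09.4 && echo "OCaml too old"
```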

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:12:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WComj-0006JJ-5i; Mon, 10 Feb 2014 11:12:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WComi-0006JE-D6
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 11:12:20 +0000
Received: from [85.158.137.68:55395] by server-2.bemta-3.messagelabs.com id
	D0/E5-06531-314B8F25; Mon, 10 Feb 2014 11:12:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392030737!799809!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32404 invoked from network); 10 Feb 2014 11:12:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:12:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101268251"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 11:12:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 06:12:16 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WComd-0006ka-RU;
	Mon, 10 Feb 2014 11:12:15 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 11:12:15 +0000
Message-ID: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] cr-daily-branch: Make it possible to
	suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is undesirable (most of the time) in a standalone environment, where you
are most likely to be interested in the current version and not historical
comparisons.

I'm not sure there isn't a better way.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 cr-daily-branch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index da6cf2f..2a829c5 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -87,7 +87,7 @@ check_tested () {
 
 testedflight=`check_tested --revision-$tree="$OLD_REVISION"`
 
-if [ "x$testedflight" = x ]; then
+if [ "${OSSTEST_NO_BASELINE:-n}" != "y" -a "x$testedflight" = x ]; then
         wantpush=false
         skipidentical=false
         force_baseline=true
-- 
1.8.5.2
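The patched condition above can be illustrated in isolation. This is a minimal sketch, not part of osstest itself: the helper `check_force_baseline` is hypothetical, but the test expression mirrors the hunk exactly, so a baseline is only forced when `OSSTEST_NO_BASELINE` is not "y" AND no previously tested flight was found.

```shell
# Hypothetical helper mirroring the patched test in cr-daily-branch.
# $1 stands in for $testedflight (empty when no prior flight exists).
check_force_baseline () {
    if [ "${OSSTEST_NO_BASELINE:-n}" != "y" -a "x$1" = x ]; then
        echo "force_baseline=true"
    else
        echo "force_baseline=false"
    fi
}

unset OSSTEST_NO_BASELINE
check_force_baseline ""        # no prior flight: baseline is forced
check_force_baseline "12345"   # prior flight found: no forcing

OSSTEST_NO_BASELINE=y
check_force_baseline ""        # suppressed even with no prior flight
```

In a standalone run, exporting `OSSTEST_NO_BASELINE=y` before invoking cr-daily-branch would therefore skip the baseline flight regardless of history.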


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:18:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCos8-0006Yx-21; Mon, 10 Feb 2014 11:17:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCos6-0006Ys-G5
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 11:17:54 +0000
Received: from [85.158.139.211:59113] by server-3.bemta-5.messagelabs.com id
	35/16-13671-165B8F25; Mon, 10 Feb 2014 11:17:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392031071!2862118!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17989 invoked from network); 10 Feb 2014 11:17:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:17:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101269128"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 11:17:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 06:17:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WCos2-0007Lo-07; Mon, 10 Feb 2014 11:17:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 10 Feb 2014 11:17:48 +0000
Message-ID: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel]  [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

VGhpcyByZXZlcnRzIGxhcmdlIGFtb3VudHMgb2Y6CiAgOTYwNzMyN2FiYmQzZTc3YmRlNmNjN2I1
MzI3ZjNlZmQ3ODFmYzA2ZQogICAgIng4Ni9IVk06IHByb3Blcmx5IGhhbmRsZSBSVEMgcGVyaW9k
aWMgdGltZXIgZXZlbiB3aGVuICFSVENfUElFIgogIDYyMGQ1ZGFkNTQwMDhlNDA3OThjNGEwYzQz
MjJhZWYyNzRjMzZmYTMKICAgICJ4ODYvSFZNOiBhc3NvcnRlZCBSVEMgZW11bGF0aW9uIGFkanVz
dG1lbnRzIgoKYW5kIGJ5IGV4dGVudHNpb246CiAgZjMzNDdmNTIwY2I0ZDhhYTQ1NjYxODJiMDEz
YzY3NThkODBjYmU4OAogICAgIng4Ni9IVk06IGFkanVzdCBJUlEgKGRlLSlhc3NlcnRpb24iCiAg
YzJmNzljNDY0ODQ5ZTVmNzk2YWE5ZDFkMGYyNmZlMzU2YWJkMWExYQogICAgIng4Ni9IVk06IGZp
eCBwcm9jZXNzaW5nIG9mIFJUQyBSRUdfQiB3cml0ZXMiCiAgNTI3ODI0ZjQxZjVmYWM5Y2JhM2Q0
NDQxYjJlNzNkMzExOGQ5ODgzNwogICAgIng4Ni9odm06IENlbnRyYWxpemUgYW5kIHNpbXBsaWZ5
IHRoZSBSVEMgSVJRIGxvZ2ljLiIKClRoZSBjdXJyZW50IGNvZGUgaGFzIGEgcGF0aG9sb2dpY2Fs
IGNhc2UsIHRpY2tsZWQgYnkgdGhlIGFjY2VzcyBwYXR0ZXJuIG9mCldpbmRvd3MgMjAwMyBTZXJ2
ZXIgU1AyLiAgT2NjYXNvbmFsbHkgb24gYm9vdCAod2hpY2ggSSBwcmVzdW1lIGlzIGR1cmluZyBh
CnRpbWUgY2FsaWJyYXRpb24gYWdhaW5zdCB0aGUgUlRDIFBlcmlvZGljIFRpbWVyKSwgV2luZG93
cyBnZXRzIHN0dWNrIGluIGFuCmluZmluaXRlIGxvb3AgcmVhZGluZyBSVEMgUkVHX0MuICBUaGlz
IGFmZmVjdHMgMzIgYW5kIDY0IGJpdCBndWVzdHMuCgpJbiB0aGUgcGF0aG9sb2dpY2FsIGNhc2Us
IHRoZSBWTSBzdGF0ZSBsb29rcyBsaWtlIHRoaXM6CiAgKiBSVEM6IDY0SHogcGVyaW9kLCBwZXJp
b2RpYyBpbnRlcnJ1cHRzIGVuYWJsZWQKICAqIFJUQ19JUlEgaW4gSU9BUElDIGFzIHZlY3RvciAw
eGQxLCBlZGdlIHRyaWdnZXJlZCwgbm90IHBlbmRpbmcKICAqIHZlY3RvciAweGQxIHNldCBpbiBM
QVBJQyBJUlIgYW5kIElTUiwgVFBSIGF0IDB4ZDAKICAqIFJlYWRzIGZyb20gUkVHX0MgcmV0dXJu
ICdSVENfUEYgfCBSVENfSVJRRicKCldpdGggYW4gaW50c3RydW1lbnRlZCBYZW4sIGR1bXBpbmcg
dGhlIHBlcmlvZGljIHRpbWVycyB3aXRoIGEgZ3Vlc3QgaW4gdGhpcwpzdGF0ZSBzaG93cyBhIHNp
bmdsZSB0aW1lciB3aXRoIHB0LT5pcnFfaXNzdWVkPTEgYW5kIHB0LT5wZW5kaW5nX2ludHJfbnI9
Mi4KCldpbmRvd3MgaXMgcHJlc3VtYWJseSB3YWl0aW5nIGZvciByZWFkcyBvZiBSRUdfQyB0byBk
cm9wIHRvIDAsIGFuZCByZWFkaW5nClJFR19DIGNsZWFycyB0aGUgdmFsdWUgZWFjaCB0aW1lIGlu
IHRoZSBlbXVsYXRlZCBSVEMuICBIb3dldmVyOgoKICAqIHtzdm0sdm14fV9pbnRyX2Fzc2lzdCgp
IGNhbGxzIHB0X3VwZGF0ZV9pcnEoKSB1bmNvbmRpdGlvbmFsbHkuCiAgKiBwdF91cGRhdGVfaXJx
KCkgYWx3YXlzIGZpbmRzIHRoZSBSVEMgYXMgZWFybGllc3RfcHQuCiAgKiBydGNfcGVyaW9kaWNf
aW50ZXJydXB0KCkgdW5jb25kaXRpb25hbGx5IHNldHMgUlRDX1BGIGluIG5vX2FjayBtb2RlLiAg
SXQKICAgIHJldHVybnMgdHJ1ZSwgaW5kaWNhdGluZyB0aGF0IHB0X3VwZGF0ZV9pcnEoKSBzaG91
bGQgcmVhbGx5IGluamVjdCB0aGUKICAgIGludGVycnVwdC4KICAqIHB0X3VwZGF0ZV9pcnEoKSBk
ZWNpZGVzIHRoYXQgaXQgZG9lc24ndCBuZWVkIHRvIGZha2UgdXAgcGFydCBvZgogICAgcHRfaW50
cl9wb3N0KCkgYmVjYXVzZSB0aGlzIGlzIGEgcmVhbCBpbnRlcnJ1cHQuCiAgKiB7c3ZtLHZteH1f
aW50cl9hc3Npc3QoKSBjYW4ndCBpbmplY3QgdGhlIGludGVycnVwdCBhcyBpdCBpcyBhbHJlYWR5
CiAgICBwZW5kaW5nLCBzbyBleGl0cyBlYXJseSB3aXRob3V0IGNhbGxpbmcgcHRfaW50cl9wb3N0
KCkuCgpUaGUgdW5kZXJseWluZyBwcm9ibGVtIGhlcmUgY29tZXMgYmVjYXVzZSB0aGUgQUYgYW5k
IFVGIGJpdHMgb2YgUlRDIGludGVycnVwdApzdGF0ZSBpcyBtb2RlbGxlZCBieSB0aGUgUlRDIGNv
ZGUsIGJ1dCB0aGUgUEYgaXMgbW9kZWxsZWQgYnkgdGhlIHB0IGNvZGUuICBUaGUKcm9vdCBjYXVz
ZSBvZiB3aW5kb3dzIGluZmluaXRlIGxvb3AgaXMgdGhhdCBSVENfUEYgaXMgYmVpbmcgcmUtc2V0
IG9uIHZtZW50cnkKYmVmb3JlIHRoZSBpbnRlcnJ1cHQgbG9naWMgaGFzIHdvcmtlZCBvdXQgdGhh
dCBpdCBjYW4ndCBhY3R1YWxseSBpbmplY3QgYW4gUlRDCmludGVycnVwdCwgY2F1c2luZyBXaW5k
b3dzIHRvIGVycm9uaW91c2x5IHJlYWQgKFJUQ19QRnxSVENfSVJRRikgd2hlbiBpdApzaG91bGQg
YmUgcmVhZGluZyAwLgoKVGhpcyBwYXRjaCByZXZlcnRzIHRoZSBSVENfUEYgbG9naWMgaGFuZGxp
bmcgdG8gaXRzIGZvcm1lciBzdGF0ZSwgd2hlcmVieQpydGNfcGVyaW9kaWNfY2IoKSBpcyBjYWxs
ZWQgc3RyaWN0bHkgd2hlbiB0aGUgcGVyaW9kaWMgdGltZXIgbG9naWMgaGFzCnN1Y2Nlc3NmdWxs
eSBpbmplY3RlZCBhIHBlcmlvZGljIGludGVycnVwdC4gIEluIGRvaW5nIHNvLCBpdCBpcyBpbXBv
cnRhbnQgdGhhdAp0aGUgUlRDIGNvZGUgaXRzZWxmIG5ldmVyIGRpcmVjdGx5IHRyaWdnZXJzIGFu
IGludGVycnVwdCBmb3IgdGhlIHBlcmlvZGljCnRpbWVyIChvdGhlciB0aGFuIHRoZSBjYXNlIHdo
ZW4gc2V0dGluZyBSRUdfQi5QSUUsIHdoZXJlIHRoZSBwdCBjb2RlIHdpbGwgaGF2ZQpkcm9wcGVk
IHRoZSBpbnRlcnJ1cHQpLgoKU2lnbmVkLW9mZi1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogVGltIERlZWdhbiA8dGltQHhlbi5vcmc+
CkNDOiBLZWlyIEZyYXNlciA8a2VpckB4ZW4ub3JnPgpDQzogSmFuIEJldWxpY2ggPEpCZXVsaWNo
QHN1c2UuY29tPgpDQzogR2VvcmdlIER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBldS5jaXRyaXguY29t
PgpDQzogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+Ci0tLQoKSSBzdGls
bCBkb250IGtub3cgZXhhY3RseSB3aGF0IGNvbmRpdGlvbiBjYXVzZXMgd2luZG93cyB0byB0aWNr
bGUgdGhpcwpiZWhhdm91ci4gIEl0IGlzIHNlZW4gYWJvdXQgMSBvciAyIHRpbWVzIGluIDkgdGVz
dHMgcnVubmluZyBhIDEyIGhvdXIgVk0KbGlmZWN5Y2xlIHRlc3QuICBPdmVyIHRoZSB3ZWVrZW5k
LCAxMDAgb2YgdGhlc2UgdGVzdHMgaGF2ZSBwYXNzZWQgd2l0aG91dCBhCnNpbmdsZSByZW9jY3Vy
ZW5jZSBvZiB0aGUgaW5maW5pdGUgbG9vcC4gIFRoZSBjaGFuZ2UgaGFzIGFsc28gcGFzc2VkIGEg
d2luZG93cwpleHRlbmRlZCByZWdyZXNzaW9uIHRlc3QsIHNvIGl0IHdvdWxkIGFwcGVhciB0aGF0
IG90aGVyIHZlcnNpb25zIG9mIHdpbmRvd3MKYXJlIHN0aWxsIGZpbmUgd2l0aCB0aGUgY2hhbmdl
LgoKUm9nZXI6IGFzIHRoaXMgY2F1c2VkIGlzc3VlcyBmb3IgRnJlZUJTRCwgd291bGQgeW91IG1p
bmQgdGVzdGluZyBpdCBhZ2FpbgpwbGVhc2U/CgpHZW9yZ2U6IFJlZ2FyZGluZyA0LjQgLSBJIHJl
cXVlc3QgYSByZWxlYXNlIGFjay4gIEhvd2V2ZXIsIHRoaXMgaXMgcXVpdGUgYSBiaWcKYW5kIGNv
bXBsaWNhdGVkIHBhdGNoOyBpdCB0b29rIFRpbSBhbmQgbXlzZWxmIHNldmVyYWwgaG91cnMgdG8g
d3JpdGUsIGV2ZW4KZ2l2ZW4gYW4gdW5kZXJzdGFuZGluZyBvZiB0aGUgcGF0aGFsb2dpY2FsIGNh
c2UuICBIYXZpbmcgc2FpZCB0aGF0LCBhYm91dCBoYWxmCnRoZSBwYXRjaCBpcyBqdXN0IHJldmVy
c2lvbnMgKGxpc3RlZCBhYm92ZSkgd2l0aCB0aGUgb3RoZXIgaGFsZiBiZWluZyBicmFuZApuZXcg
bG9naWMuCgpJbiBzdW1hcnk6CiAgKiBBcyBYZW4gY3VycmVudGx5IHN0YW5kcywgdzJrMyBTUDIg
KHN0aWxsIHN1cHBvcnRlZCkgaXMgbGlhYmxlIHRvIGZhbGwgaW50bwogICAgYW4gaW5maW5pdGUg
bG9vcCBvbiBib290LCBiZWNhdXNlIG9mIGEgYnVnIGluIFhlbidzIGVtdWxhdGlvbiBvZiB0aGUg
UlRDLgogICAgT3RoZXIgZ3Vlc3RzIHJpc2sgdGhlIHNhbWUgaW5maW5pdGUgbG9vcC4KICAqIFRo
ZXJlIGlzIGEgcmlzayB0aGF0IHNvbWUgb2YgdGhlIGxvZ2ljIGlzIG5vdCBxdWl0ZSBjb3JyZWN0
OyB0aGUgUlRDCiAgICBlbXVsYXRpb24gaGFzIHByb3ZlZCB0cmlja3kgdGltZSBhbmQgdGltZSBh
Z2Fpbi4KICAqIFRoZSByZXN1bHRzIGZyb20gWGVuUlQgc3VnZ2VzdCB0aGF0IHRoZSBuZXcgZW11
bGF0aW9uIGlzIGJldHRlciB0aGFuIHRoZQogICAgb2xkLgotLS0KIHhlbi9hcmNoL3g4Ni9odm0v
cnRjLmMgfCAgIDk0ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tLS0tLS0tLS0tLS0t
LS0tLQogeGVuL2FyY2gveDg2L2h2bS92cHQuYyB8ICAgNTAgKysrLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0KIDIgZmlsZXMgY2hhbmdlZCwgNjMgaW5zZXJ0aW9ucygrKSwgODEgZGVsZXRpb25zKC0p
CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2h2bS9ydGMuYyBiL3hlbi9hcmNoL3g4Ni9odm0v
cnRjLmMKaW5kZXggY2RlZGVmZS4uZWJlYjY3NCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2h2
bS9ydGMuYworKysgYi94ZW4vYXJjaC94ODYvaHZtL3J0Yy5jCkBAIC01OSw0OCArNTksNzggQEAg
c3RhdGljIHZvaWQgcnRjX3NldF90aW1lKFJUQ1N0YXRlICpzKTsKIHN0YXRpYyBpbmxpbmUgaW50
IGZyb21fYmNkKFJUQ1N0YXRlICpzLCBpbnQgYSk7CiBzdGF0aWMgaW5saW5lIGludCBjb252ZXJ0
X2hvdXIoUlRDU3RhdGUgKnMsIGludCBob3VyKTsKIAotc3RhdGljIHZvaWQgcnRjX3VwZGF0ZV9p
cnEoUlRDU3RhdGUgKnMpCisvKgorICogU2VuZCBhbiBlZGdlIG9uIHRoZSBSVEMgSVNBIElSUSBs
aW5lLiAgVGhlIFJUQyBzcGVjIHN0YXRlcyB0aGF0IGl0IHNob3VsZAorICogYmUgYSBsaW5lIGxl
dmVsIGludGVycnVwdCwgYnV0IHRoZSBQSUlYMyBzdGF0ZXMgdGhhdCBpdCBtdXN0IGJlIGVkZ2UK
KyAqIHRyaWdnZXJlZC4gIFdlIG1vZGVsIHRoZSBSVEMgdXNpbmcgZWRnZSBzZW1hbnRpY3MuCisg
Ki8KK3N0YXRpYyB2b2lkIHJ0Y190b2dnbGVfaXJxKFJUQ1N0YXRlICpzKQogeworICAgIGh2bV9p
c2FfaXJxX2RlYXNzZXJ0KHZydGNfZG9tYWluKHMpLCBSVENfSVJRKTsKKyAgICBodm1faXNhX2ly
cV9hc3NlcnQodnJ0Y19kb21haW4ocyksIFJUQ19JUlEpOworfQorCitzdGF0aWMgdm9pZCBydGNf
dXBkYXRlX3JlZ2IoUlRDU3RhdGUgKnMsIHVpbnQ4X3QgbmV3X2IpCit7CisgICAgdWludDhfdCBu
ZXdfYyA9IHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdICYgflJUQ19JUlFGOworCiAgICAgQVNT
RVJUKHNwaW5faXNfbG9ja2VkKCZzLT5sb2NrKSk7CiAKLSAgICBpZiAoIHJ0Y19tb2RlX2lzKHMs
IHN0cmljdCkgJiYgKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdICYgUlRDX0lSUUYpICkKLSAg
ICAgICAgcmV0dXJuOworICAgIGlmICggbmV3X2IgJiBuZXdfYyAmIChSVENfUEYgfCBSVENfQUYg
fCBSVENfVUYpICkKKyAgICAgICAgbmV3X2MgfD0gUlRDX0lSUUY7CiAKLSAgICAvKiBJUlEgaXMg
cmFpc2VkIGlmIGFueSBzb3VyY2UgaXMgYm90aCByYWlzZWQgJiBlbmFibGVkICovCi0gICAgaWYg
KCAhKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYKLSAgICAgICAgICAgcy0+aHcuY21vc19k
YXRhW1JUQ19SRUdfQ10gJgotICAgICAgICAgICAoUlRDX1BGIHwgUlRDX0FGIHwgUlRDX1VGKSkg
KQotICAgICAgICByZXR1cm47CisgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQl0gPSBuZXdf
YjsKKyAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19DXSA9IG5ld19jOwogCi0gICAgcy0+aHcu
Y21vc19kYXRhW1JUQ19SRUdfQ10gfD0gUlRDX0lSUUY7Ci0gICAgaWYgKCBydGNfbW9kZV9pcyhz
LCBub19hY2spICkKLSAgICAgICAgaHZtX2lzYV9pcnFfZGVhc3NlcnQodnJ0Y19kb21haW4ocyks
IFJUQ19JUlEpOwotICAgIGh2bV9pc2FfaXJxX2Fzc2VydCh2cnRjX2RvbWFpbihzKSwgUlRDX0lS
USk7CisgICAgaWYgKCBuZXdfYyAmIFJUQ19JUlFGICkKKyAgICAgICAgcnRjX3RvZ2dsZV9pcnEo
cyk7Cit9CisKKy8qCisgKiBTdHJpY3RseSBvbmx5IGNhbGxlZCB3aGVuIHNldHRpbmcgUkVHX0Mu
e0FGLFVGfS4gIFRoZSBsb2dpYyBkZXBlbmQgb24KKyAqIGtub3dpbmcgdGhhdCBSRUJfQiBpcyB1
bmNoYW5nZWQsIGFuZCBSRUdfQyBkb2VzIG5vdCBoYXZlIGEgZmFsbGluZyBlZGdlLgorICogJ2V2
ZW50JyBzaG91bGQgYmUgUlRDX0FGIG9yIFJUQ19VRi4KKyAqLworc3RhdGljIHZvaWQgcnRjX2ly
cV9ldmVudChSVENTdGF0ZSAqcywgdWludDhfdCBldmVudCkKK3sKKyAgICB1aW50OF90IGIgPSBz
LT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXTsKKyAgICB1aW50OF90IG9sZF9jID0gcy0+aHcuY21v
c19kYXRhW1JUQ19SRUdfQ107CisgICAgdWludDhfdCBuZXdfYyA9IG9sZF9jICYgflJUQ19JUlFG
OworCisgICAgQVNTRVJUKHNwaW5faXNfbG9ja2VkKCZzLT5sb2NrKSk7CisKKyAgICBpZiAoIGIg
JiBuZXdfYyAmIChSVENfUEYgfCBSVENfQUYgfCBSVENfVUYpICkKKyAgICAgICAgbmV3X2MgfD0g
UlRDX0lSUUY7CisKKyAgICBpZiAoIChiICYgZXZlbnQpICYmCisgICAgICAgICAocnRjX21vZGVf
aXMocywgbm9fYWNrKSB8fCAhKG9sZF9jICYgUlRDX0lSUUYpKSApCisgICAgICAgIHJ0Y190b2dn
bGVfaXJxKHMpOworCisgICAgcy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQ10gPSBuZXdfYzsKIH0K
IAotYm9vbF90IHJ0Y19wZXJpb2RpY19pbnRlcnJ1cHQodm9pZCAqb3BhcXVlKQorLyoKKyAqIENh
bGxiYWNrIGZyb20gdGhlIHBlcmlvZGljIHRpbWVyIHN0YXRlIG1hY2hpbmUsIGluZGljYXRpbmcg
dGhhdCBhIHBlcmlvZGljCisgKiB0aW1lciBpbnRlcnJ1cHQgaGFzIGJlZW4gaW5qZWN0ZWQgdG8g
dGhlIGd1ZXN0IG9uIG91ciBiZWhhbGYuCisgKi8KK3N0YXRpYyB2b2lkIHJ0Y19wZXJpb2RpY19j
YihzdHJ1Y3QgdmNwdSAqdiwgdm9pZCAqb3BhcXVlKQogewogICAgIFJUQ1N0YXRlICpzID0gb3Bh
cXVlOwotICAgIGJvb2xfdCByZXQ7CiAKICAgICBzcGluX2xvY2soJnMtPmxvY2spOwotICAgIHJl
dCA9IHJ0Y19tb2RlX2lzKHMsIG5vX2FjaykgfHwgIShzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19D
XSAmIFJUQ19JUlFGKTsKLSAgICBpZiAoIHJ0Y19tb2RlX2lzKHMsIG5vX2FjaykgfHwgIShzLT5o
dy5jbW9zX2RhdGFbUlRDX1JFR19DXSAmIFJUQ19QRikgKQotICAgIHsKLSAgICAgICAgcy0+aHcu
Y21vc19kYXRhW1JUQ19SRUdfQ10gfD0gUlRDX1BGOwotICAgICAgICBydGNfdXBkYXRlX2lycShz
KTsKLSAgICB9Ci0gICAgZWxzZSBpZiAoICsrKHMtPnB0X2RlYWRfdGlja3MpID49IDEwICkKKwor
ICAgIGlmICggIXJ0Y19tb2RlX2lzKHMsIG5vX2FjaykgJiYKKyAgICAgICAgIChzLT5ody5jbW9z
X2RhdGFbUlRDX1JFR19DXSAmIFJUQ19QRikgJiYKKyAgICAgICAgICgrKyhzLT5wdF9kZWFkX3Rp
Y2tzKSA+PSAxMCkgKQogICAgIHsKICAgICAgICAgLyogVk0gaXMgaWdub3JpbmcgaXRzIFJUQzsg
bm8gcG9pbnQgaW4gcnVubmluZyB0aGUgdGltZXIgKi8KICAgICAgICAgZGVzdHJveV9wZXJpb2Rp
Y190aW1lKCZzLT5wdCk7CiAgICAgICAgIHMtPnB0X2NvZGUgPSAwOwogICAgIH0KLSAgICBpZiAo
ICEocy0+aHcuY21vc19kYXRhW1JUQ19SRUdfQ10gJiBSVENfSVJRRikgKQotICAgICAgICByZXQg
PSAwOwotICAgIHNwaW5fdW5sb2NrKCZzLT5sb2NrKTsKIAotICAgIHJldHVybiByZXQ7CisgICAg
LyogU3BlY2lmaWNhbGx5IG5vdCByYWlzaW5nIHRoZSBpcnEgYXMgaXQgaGFzIGFscmVhZHkgYmVl
biBkb25lIGZvciB1cyAqLworICAgIHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0NdIHw9IFJUQ19Q
RiB8IFJUQ19JUlFGOworICAgIHNwaW5fdW5sb2NrKCZzLT5sb2NrKTsKIH0KIAogLyogRW5hYmxl
L2NvbmZpZ3VyZS9kaXNhYmxlIHRoZSBwZXJpb2RpYyB0aW1lciBiYXNlZCBvbiB0aGUgUlRDX1BJ
RSBhbmQKQEAgLTEzNSw3ICsxNjUsNyBAQCBzdGF0aWMgdm9pZCBydGNfdGltZXJfdXBkYXRlKFJU
Q1N0YXRlICpzKQogICAgICAgICAgICAgICAgIGVsc2UKICAgICAgICAgICAgICAgICAgICAgZGVs
dGEgPSBwZXJpb2QgLSAoKE5PVygpIC0gcy0+c3RhcnRfdGltZSkgJSBwZXJpb2QpOwogICAgICAg
ICAgICAgICAgIGNyZWF0ZV9wZXJpb2RpY190aW1lKHYsICZzLT5wdCwgZGVsdGEsIHBlcmlvZCwK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBSVENfSVJRLCBOVUxMLCBzKTsK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBSVENfSVJRLCBydGNfcGVyaW9k
aWNfY2IsIHMpOwogICAgICAgICAgICAgfQogICAgICAgICAgICAgYnJlYWs7CiAgICAgICAgIH0K
QEAgLTIxMSw5ICsyNDEsOCBAQCBzdGF0aWMgdm9pZCBydGNfdXBkYXRlX3RpbWVyMih2b2lkICpv
cGFxdWUpCiAgICAgc3Bpbl9sb2NrKCZzLT5sb2NrKTsKICAgICBpZiAoIShzLT5ody5jbW9zX2Rh
dGFbUlRDX1JFR19CXSAmIFJUQ19TRVQpKQogICAgIHsKLSAgICAgICAgcy0+aHcuY21vc19kYXRh
W1JUQ19SRUdfQ10gfD0gUlRDX1VGOwogICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19B
XSAmPSB+UlRDX1VJUDsKLSAgICAgICAgcnRjX3VwZGF0ZV9pcnEocyk7CisgICAgICAgIHJ0Y19p
cnFfZXZlbnQocywgUlRDX1VGKTsKICAgICAgICAgY2hlY2tfdXBkYXRlX3RpbWVyKHMpOwogICAg
IH0KICAgICBzcGluX3VubG9jaygmcy0+bG9jayk7CkBAIC00MDIsOCArNDMxLDcgQEAgc3RhdGlj
IHZvaWQgcnRjX2FsYXJtX2NiKHZvaWQgKm9wYXF1ZSkKICAgICBzcGluX2xvY2soJnMtPmxvY2sp
OwogICAgIGlmICghKHMtPmh3LmNtb3NfZGF0YVtSVENfUkVHX0JdICYgUlRDX1NFVCkpCiAgICAg
ewotICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19DXSB8PSBSVENfQUY7Ci0gICAgICAg
IHJ0Y191cGRhdGVfaXJxKHMpOworICAgICAgICBydGNfaXJxX2V2ZW50KHMsIFJUQ19BRik7CiAg
ICAgICAgIGFsYXJtX3RpbWVyX3VwZGF0ZShzKTsKICAgICB9CiAgICAgc3Bpbl91bmxvY2soJnMt
PmxvY2spOwpAQCAtNDg0LDEyICs1MTIsMTEgQEAgc3RhdGljIGludCBydGNfaW9wb3J0X3dyaXRl
KHZvaWQgKm9wYXF1ZSwgdWludDMyX3QgYWRkciwgdWludDMyX3QgZGF0YSkKICAgICAgICAgICAg
IGlmICggb3JpZyAmIFJUQ19TRVQgKQogICAgICAgICAgICAgICAgIHJ0Y19zZXRfdGltZShzKTsK
ICAgICAgICAgfQotICAgICAgICBzLT5ody5jbW9zX2RhdGFbUlRDX1JFR19CXSA9IGRhdGE7CiAg
ICAgICAgIC8qCiAgICAgICAgICAqIElmIHRoZSBpbnRlcnJ1cHQgaXMgYWxyZWFkeSBzZXQgd2hl
biB0aGUgaW50ZXJydXB0IGJlY29tZXMKICAgICAgICAgICogZW5hYmxlZCwgcmFpc2UgYW4gaW50
ZXJydXB0IGltbWVkaWF0ZWx5LgogICAgICAgICAgKi8KLSAgICAgICAgcnRjX3VwZGF0ZV9pcnEo
cyk7CisgICAgICAgIHJ0Y191cGRhdGVfcmVnYihzLCBkYXRhKTsKICAgICAgICAgaWYgKCAoZGF0
YSAmIFJUQ19QSUUpICYmICEob3JpZyAmIFJUQ19QSUUpICkKICAgICAgICAgICAgIHJ0Y190aW1l
cl91cGRhdGUocyk7CiAgICAgICAgIGlmICggKGRhdGEgXiBvcmlnKSAmIFJUQ19TRVQgKQpAQCAt
NjQ3LDkgKzY3NCw2IEBAIHN0YXRpYyB1aW50MzJfdCBydGNfaW9wb3J0X3JlYWQoUlRDU3RhdGUg
KnMsIHVpbnQzMl90IGFkZHIpCiAgICAgY2FzZSBSVENfUkVHX0M6CiAgICAgICAgIHJldCA9IHMt
Pmh3LmNtb3NfZGF0YVtzLT5ody5jbW9zX2luZGV4XTsKICAgICAgICAgcy0+aHcuY21vc19kYXRh
W1JUQ19SRUdfQ10gPSAweDAwOwotICAgICAgICBpZiAoIChyZXQgJiBSVENfSVJRRikgJiYgIXJ0
Y19tb2RlX2lzKHMsIG5vX2FjaykgKQotICAgICAgICAgICAgaHZtX2lzYV9pcnFfZGVhc3NlcnQo
ZCwgUlRDX0lSUSk7Ci0gICAgICAgIHJ0Y191cGRhdGVfaXJxKHMpOwogICAgICAgICBjaGVja191
cGRhdGVfdGltZXIocyk7CiAgICAgICAgIGFsYXJtX3RpbWVyX3VwZGF0ZShzKTsKICAgICAgICAg
cnRjX3RpbWVyX3VwZGF0ZShzKTsKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9odm0vdnB0LmMg
Yi94ZW4vYXJjaC94ODYvaHZtL3ZwdC5jCmluZGV4IDE5NjFiZGEuLmUxMmU5NDAgMTAwNjQ0Ci0t
LSBhL3hlbi9hcmNoL3g4Ni9odm0vdnB0LmMKKysrIGIveGVuL2FyY2gveDg2L2h2bS92cHQuYwpA
QCAtMjIsNyArMjIsNiBAQAogI2luY2x1ZGUgPGFzbS9odm0vdnB0Lmg+CiAjaW5jbHVkZSA8YXNt
L2V2ZW50Lmg+CiAjaW5jbHVkZSA8YXNtL2FwaWMuaD4KLSNpbmNsdWRlIDxhc20vbWMxNDY4MThy
dGMuaD4KIAogI2RlZmluZSBtb2RlX2lzKGQsIG5hbWUpIFwKICAgICAoKGQpLT5hcmNoLmh2bV9k
b21haW4ucGFyYW1zW0hWTV9QQVJBTV9USU1FUl9NT0RFXSA9PSBIVk1QVE1fIyNuYW1lKQpAQCAt
MjI4LDIzICsyMjcsMTcgQEAgc3RhdGljIHZvaWQgcHRfdGltZXJfZm4odm9pZCAqZGF0YSkKIGlu
dCBwdF91cGRhdGVfaXJxKHN0cnVjdCB2Y3B1ICp2KQogewogICAgIHN0cnVjdCBsaXN0X2hlYWQg
KmhlYWQgPSAmdi0+YXJjaC5odm1fdmNwdS50bV9saXN0OwotICAgIHN0cnVjdCBwZXJpb2RpY190
aW1lICpwdCwgKnRlbXAsICplYXJsaWVzdF9wdDsKLSAgICB1aW50NjRfdCBtYXhfbGFnOworICAg
IHN0cnVjdCBwZXJpb2RpY190aW1lICpwdCwgKnRlbXAsICplYXJsaWVzdF9wdCA9IE5VTEw7Cisg
ICAgdWludDY0X3QgbWF4X2xhZyA9IC0xVUxMOwogICAgIGludCBpcnEsIGlzX2xhcGljOwotICAg
IHZvaWQgKnB0X3ByaXY7CiAKLSByZXNjYW46CiAgICAgc3Bpbl9sb2NrKCZ2LT5hcmNoLmh2bV92
Y3B1LnRtX2xvY2spOwogCi0gcmVzY2FuX2xvY2tlZDoKLSAgICBlYXJsaWVzdF9wdCA9IE5VTEw7
Ci0gICAgbWF4X2xhZyA9IC0xVUxMOwogICAgIGxpc3RfZm9yX2VhY2hfZW50cnlfc2FmZSAoIHB0
LCB0ZW1wLCBoZWFkLCBsaXN0ICkKICAgICB7CiAgICAgICAgIGlmICggcHQtPnBlbmRpbmdfaW50
cl9uciApCiAgICAgICAgIHsKLSAgICAgICAgICAgIC8qIFJUQyBjb2RlIHRha2VzIGNhcmUgb2Yg
ZGlzYWJsaW5nIHRoZSB0aW1lciBpdHNlbGYuICovCi0gICAgICAgICAgICBpZiAoIChwdC0+aXJx
ICE9IFJUQ19JUlEgfHwgIXB0LT5wcml2KSAmJiBwdF9pcnFfbWFza2VkKHB0KSApCisgICAgICAg
ICAgICBpZiAoIHB0X2lycV9tYXNrZWQocHQpICkKICAgICAgICAgICAgIHsKICAgICAgICAgICAg
ICAgICAvKiBzdXNwZW5kIHRpbWVyIGVtdWxhdGlvbiAqLwogICAgICAgICAgICAgICAgIGxpc3Rf
ZGVsKCZwdC0+bGlzdCk7CkBAIC0yNzAsNDcgKzI2MywxMiBAQCBpbnQgcHRfdXBkYXRlX2lycShz
dHJ1Y3QgdmNwdSAqdikKICAgICBlYXJsaWVzdF9wdC0+aXJxX2lzc3VlZCA9IDE7CiAgICAgaXJx
ID0gZWFybGllc3RfcHQtPmlycTsKICAgICBpc19sYXBpYyA9IChlYXJsaWVzdF9wdC0+c291cmNl
ID09IFBUU1JDX2xhcGljKTsKLSAgICBwdF9wcml2ID0gZWFybGllc3RfcHQtPnByaXY7CiAKICAg
ICBzcGluX3VubG9jaygmdi0+YXJjaC5odm1fdmNwdS50bV9sb2NrKTsKIAogICAgIGlmICggaXNf
bGFwaWMgKQotICAgICAgICB2bGFwaWNfc2V0X2lycSh2Y3B1X3ZsYXBpYyh2KSwgaXJxLCAwKTsK
LSAgICBlbHNlIGlmICggaXJxID09IFJUQ19JUlEgJiYgcHRfcHJpdiApCiAgICAgewotICAgICAg
ICBpZiAoICFydGNfcGVyaW9kaWNfaW50ZXJydXB0KHB0X3ByaXYpICkKLSAgICAgICAgICAgIGly
cSA9IC0xOwotCi0gICAgICAgIHB0X2xvY2soZWFybGllc3RfcHQpOwotCi0gICAgICAgIGlmICgg
aXJxIDwgMCAmJiBlYXJsaWVzdF9wdC0+cGVuZGluZ19pbnRyX25yICkKLSAgICAgICAgewotICAg
ICAgICAgICAgLyoKLSAgICAgICAgICAgICAqIFJUQyBwZXJpb2RpYyB0aW1lciBydW5zIHdpdGhv
dXQgdGhlIGNvcnJlc3BvbmRpbmcgaW50ZXJydXB0Ci0gICAgICAgICAgICAgKiBiZWluZyBlbmFi
bGVkIC0gbmVlZCB0byBtaW1pYyBlbm91Z2ggb2YgcHRfaW50cl9wb3N0KCkgdG8ga2VlcAotICAg
ICAgICAgICAgICogdGhpbmdzIGdvaW5nLgotICAgICAgICAgICAgICovCi0gICAgICAgICAgICBl
YXJsaWVzdF9wdC0+cGVuZGluZ19pbnRyX25yID0gMDsKLSAgICAgICAgICAgIGVhcmxpZXN0X3B0
LT5pcnFfaXNzdWVkID0gMDsKLSAgICAgICAgICAgIHNldF90aW1lcigmZWFybGllc3RfcHQtPnRp
bWVyLCBlYXJsaWVzdF9wdC0+c2NoZWR1bGVkKTsKLSAgICAgICAgfQotICAgICAgICBlbHNlIGlm
ICggaXJxID49IDAgJiYgcHRfaXJxX21hc2tlZChlYXJsaWVzdF9wdCkgKQotICAgICAgICB7Ci0g
ICAgICAgICAgICBpZiAoIGVhcmxpZXN0X3B0LT5vbl9saXN0ICkKLSAgICAgICAgICAgIHsKLSAg
ICAgICAgICAgICAgICAvKiBzdXNwZW5kIHRpbWVyIGVtdWxhdGlvbiAqLwotICAgICAgICAgICAg
ICAgIGxpc3RfZGVsKCZlYXJsaWVzdF9wdC0+bGlzdCk7Ci0gICAgICAgICAgICAgICAgZWFybGll
c3RfcHQtPm9uX2xpc3QgPSAwOwotICAgICAgICAgICAgfQotICAgICAgICAgICAgaXJxID0gLTE7
Ci0gICAgICAgIH0KLQotICAgICAgICAvKiBBdm9pZCBkcm9wcGluZyB0aGUgbG9jayBpZiB3ZSBj
YW4uICovCi0gICAgICAgIGlmICggaXJxIDwgMCAmJiB2ID09IGVhcmxpZXN0X3B0LT52Y3B1ICkK
LSAgICAgICAgICAgIGdvdG8gcmVzY2FuX2xvY2tlZDsKLSAgICAgICAgcHRfdW5sb2NrKGVhcmxp
ZXN0X3B0KTsKLSAgICAgICAgaWYgKCBpcnEgPCAwICkKLSAgICAgICAgICAgIGdvdG8gcmVzY2Fu
OworICAgICAgICB2bGFwaWNfc2V0X2lycSh2Y3B1X3ZsYXBpYyh2KSwgaXJxLCAwKTsKICAgICB9
CiAgICAgZWxzZQogICAgIHsKLS0gCjEuNy4xMC40CgoKX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxA
bGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:18:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCos8-0006Yx-21; Mon, 10 Feb 2014 11:17:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCos6-0006Ys-G5
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 11:17:54 +0000
Received: from [85.158.139.211:59113] by server-3.bemta-5.messagelabs.com id
	35/16-13671-165B8F25; Mon, 10 Feb 2014 11:17:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392031071!2862118!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17989 invoked from network); 10 Feb 2014 11:17:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:17:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101269128"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 11:17:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 06:17:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WCos2-0007Lo-07; Mon, 10 Feb 2014 11:17:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 10 Feb 2014 11:17:48 +0000
Message-ID: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel]  [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts large amounts of:
  9607327abbd3e77bde6cc7b5327f3efd781fc06e
    "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
  620d5dad54008e40798c4a0c4322aef274c36fa3
    "x86/HVM: assorted RTC emulation adjustments"

and by extension:
  f3347f520cb4d8aa4566182b013c6758d80cbe88
    "x86/HVM: adjust IRQ (de-)assertion"
  c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
    "x86/HVM: fix processing of RTC REG_B writes"
  527824f41f5fac9cba3d4441b2e73d3118d98837
    "x86/hvm: Centralize and simplify the RTC IRQ logic."

The current code has a pathological case, tickled by the access pattern of
Windows 2003 Server SP2.  Occasionally on boot (which I presume is during a
time calibration against the RTC Periodic Timer), Windows gets stuck in an
infinite loop reading RTC REG_C.  This affects 32 and 64 bit guests.

In the pathological case, the VM state looks like this:
  * RTC: 64Hz period, periodic interrupts enabled
  * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
  * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
  * Reads from REG_C return 'RTC_PF | RTC_IRQF'

With an instrumented Xen, dumping the periodic timers with a guest in this
state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.

Windows is presumably waiting for reads of REG_C to drop to 0, and reading
REG_C clears the value each time in the emulated RTC.  However:

  * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
  * pt_update_irq() always finds the RTC as earliest_pt.
  * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack mode.  It
    returns true, indicating that pt_update_irq() should really inject the
    interrupt.
  * pt_update_irq() decides that it doesn't need to fake up part of
    pt_intr_post() because this is a real interrupt.
  * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
    pending, so exits early without calling pt_intr_post().

The underlying problem here comes because the AF and UF bits of RTC interrupt
state are modelled by the RTC code, but the PF bit is modelled by the pt code.
The root cause of Windows' infinite loop is that RTC_PF is being re-set on
vmentry before the interrupt logic has worked out that it can't actually
inject an RTC interrupt, causing Windows to erroneously read
(RTC_PF|RTC_IRQF) when it should be reading 0.

This patch reverts the RTC_PF logic handling to its former state, whereby
rtc_periodic_cb() is called strictly when the periodic timer logic has
successfully injected a periodic interrupt.  In doing so, it is important that
the RTC code itself never directly triggers an interrupt for the periodic
timer (other than the case when setting REG_B.PIE, where the pt code will have
dropped the interrupt).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Tim Deegan <tim@xen.org>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
---

I still don't know exactly what condition causes Windows to tickle this
behaviour.  It is seen about 1 or 2 times in 9 tests running a 12 hour VM
lifecycle test.  Over the weekend, 100 of these tests have passed without a
single reoccurrence of the infinite loop.  The change has also passed a
Windows extended regression test, so it would appear that other versions of
Windows are still fine with the change.

Roger: as this caused issues for FreeBSD, would you mind testing it again
please?

George: Regarding 4.4 - I request a release ack.  However, this is quite a big
and complicated patch; it took Tim and myself several hours to write, even
given an understanding of the pathological case.  Having said that, about half
the patch is just reversions (listed above) with the other half being brand
new logic.

In summary:
  * As Xen currently stands, w2k3 SP2 (still supported) is liable to fall into
    an infinite loop on boot, because of a bug in Xen's emulation of the RTC.
    Other guests risk the same infinite loop.
  * There is a risk that some of the logic is not quite correct; the RTC
    emulation has proved tricky time and time again.
  * The results from XenRT suggest that the new emulation is better than the
    old.
---
 xen/arch/x86/hvm/rtc.c |   94 ++++++++++++++++++++++++++++++++--------------
 xen/arch/x86/hvm/vpt.c |   50 +++-----------------------
 2 files changed, 63 insertions(+), 81 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cdedefe..ebeb674 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -59,48 +59,78 @@ static void rtc_set_time(RTCState *s);
 static inline int from_bcd(RTCState *s, int a);
 static inline int convert_hour(RTCState *s, int hour);
 
-static void rtc_update_irq(RTCState *s)
+/*
+ * Send an edge on the RTC ISA IRQ line.  The RTC spec states that it should
+ * be a line level interrupt, but the PIIX3 states that it must be edge
+ * triggered.  We model the RTC using edge semantics.
+ */
+static void rtc_toggle_irq(RTCState *s)
 {
+    hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
+    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
+}
+
+static void rtc_update_regb(RTCState *s, uint8_t new_b)
+{
+    uint8_t new_c = s->hw.cmos_data[RTC_REG_C] & ~RTC_IRQF;
+
     ASSERT(spin_is_locked(&s->lock));
 
-    if ( rtc_mode_is(s, strict) && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        return;
+    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
+        new_c |= RTC_IRQF;
 
-    /* IRQ is raised if any source is both raised & enabled */
-    if ( !(s->hw.cmos_data[RTC_REG_B] &
-           s->hw.cmos_data[RTC_REG_C] &
-           (RTC_PF | RTC_AF | RTC_UF)) )
-        return;
+    s->hw.cmos_data[RTC_REG_B] = new_b;
+    s->hw.cmos_data[RTC_REG_C] = new_c;
 
-    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
-    if ( rtc_mode_is(s, no_ack) )
-        hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
-    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
+    if ( new_c & RTC_IRQF )
+        rtc_toggle_irq(s);
+}
+
+/*
+ * Strictly only called when setting REG_C.{AF,UF}.  The logic depends on
+ * knowing that REG_B is unchanged, and REG_C does not have a falling edge.
+ * 'event' should be RTC_AF or RTC_UF.
+ */
+static void rtc_irq_event(RTCState *s, uint8_t event)
+{
+    uint8_t b = s->hw.cmos_data[RTC_REG_B];
+    uint8_t old_c = s->hw.cmos_data[RTC_REG_C];
+    uint8_t new_c = old_c & ~RTC_IRQF;
+
+    ASSERT(spin_is_locked(&s->lock));
+
+    if ( b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
+        new_c |= RTC_IRQF;
+
+    if ( (b & event) &&
+         (rtc_mode_is(s, no_ack) || !(old_c & RTC_IRQF)) )
+        rtc_toggle_irq(s);
+
+    s->hw.cmos_data[RTC_REG_C] = new_c;
 }
 
-bool_t rtc_periodic_interrupt(void *opaque)
+/*
+ * Callback from the periodic timer state machine, indicating that a periodic
+ * timer interrupt has been injected to the guest on our behalf.
+ */
+static void rtc_periodic_cb(struct vcpu *v, void *opaque)
 {
     RTCState *s = opaque;
-    bool_t ret;
 
     spin_lock(&s->lock);
-    ret = rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF);
-    if ( rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_PF) )
-    {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
-        rtc_update_irq(s);
-    }
-    else if ( ++(s->pt_dead_ticks) >= 10 )
+
+    if ( !rtc_mode_is(s, no_ack) &&
+         (s->hw.cmos_data[RTC_REG_C] & RTC_PF) &&
+         (++(s->pt_dead_ticks) >= 10) )
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
         s->pt_code = 0;
     }
-    if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        ret = 0;
-    spin_unlock(&s->lock);
 
-    return ret;
+    /* Specifically not raising the irq as it has already been done for us */
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF | RTC_IRQF;
+    spin_unlock(&s->lock);
 }
 
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
@@ -135,7 +165,7 @@ static void rtc_timer_update(RTCState *s)
                 else
                     delta = period - ((NOW() - s->start_time) % period);
                 create_periodic_time(v, &s->pt, delta, period,
-                                     RTC_IRQ, NULL, s);
+                                     RTC_IRQ, rtc_periodic_cb, s);
             }
             break;
         }
@@ -211,9 +241,8 @@ static void rtc_update_timer2(void *opaque)
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
     {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_UF;
         s->hw.cmos_data[RTC_REG_A] &= ~RTC_UIP;
-        rtc_update_irq(s);
+        rtc_irq_event(s, RTC_UF);
         check_update_timer(s);
     }
     spin_unlock(&s->lock);
@@ -402,8 +431,7 @@ static void rtc_alarm_cb(void *opaque)
     spin_lock(&s->lock);
     if (!(s->hw.cmos_data[RTC_REG_B] & RTC_SET))
     {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_AF;
-        rtc_update_irq(s);
+        rtc_irq_event(s, RTC_AF);
         alarm_timer_update(s);
     }
     spin_unlock(&s->lock);
@@ -484,12 +512,11 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
             if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
-        s->hw.cmos_data[RTC_REG_B] = data;
         /*
          * If the interrupt is already set when the interrupt becomes
          * enabled, raise an interrupt immediately.
          */
-        rtc_update_irq(s);
+        rtc_update_regb(s, data);
         if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
             rtc_timer_update(s);
         if ( (data ^ orig) & RTC_SET )
@@ -647,9 +674,6 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
     case RTC_REG_C:
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
-        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
-            hvm_isa_irq_deassert(d, RTC_IRQ);
-        rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
         rtc_timer_update(s);
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 1961bda..e12e940 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -22,7 +22,6 @@
 #include <asm/hvm/vpt.h>
 #include <asm/event.h>
 #include <asm/apic.h>
-#include <asm/mc146818rtc.h>
 
 #define mode_is(d, name) \
     ((d)->arch.hvm_domain.params[HVM_PARAM_TIMER_MODE] == HVMPTM_##name)
@@ -228,23 +227,17 @@ static void pt_timer_fn(void *data)
 int pt_update_irq(struct vcpu *v)
 {
     struct list_head *head = &v->arch.hvm_vcpu.tm_list;
-    struct periodic_time *pt, *temp, *earliest_pt;
-    uint64_t max_lag;
+    struct periodic_time *pt, *temp, *earliest_pt = NULL;
+    uint64_t max_lag = -1ULL;
     int irq, is_lapic;
-    void *pt_priv;
 
- rescan:
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
- rescan_locked:
-    earliest_pt = NULL;
-    max_lag = -1ULL;
     list_for_each_entry_safe ( pt, temp, head, list )
     {
         if ( pt->pending_intr_nr )
         {
-            /* RTC code takes care of disabling the timer itself. */
-            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) )
+            if ( pt_irq_masked(pt) )
             {
                 /* suspend timer emulation */
                 list_del(&pt->list);
@@ -270,47 +263,12 @@ int pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
-    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
-        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    else if ( irq == RTC_IRQ && pt_priv )
     {
-        if ( !rtc_periodic_interrupt(pt_priv) )
-            irq = -1;
-
-        pt_lock(earliest_pt);
-
-        if ( irq < 0 && earliest_pt->pending_intr_nr )
-        {
-            /*
-             * RTC periodic timer runs without the corresponding interrupt
-             * being enabled - need to mimic enough of pt_intr_post() to keep
-             * things going.
-             */
-            earliest_pt->pending_intr_nr = 0;
-            earliest_pt->irq_issued = 0;
-            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
-        }
-        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
-        {
-            if ( earliest_pt->on_list )
-            {
-                /* suspend timer emulation */
-                list_del(&earliest_pt->list);
-                earliest_pt->on_list = 0;
-            }
-            irq = -1;
-        }
-
-        /* Avoid dropping the lock if we can. */
-        if ( irq < 0 && v == earliest_pt->vcpu )
-            goto rescan_locked;
-        pt_unlock(earliest_pt);
-        if ( irq < 0 )
-            goto rescan;
+        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
     }
     else
     {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:18:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCosq-0006cG-MC; Mon, 10 Feb 2014 11:18:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WCoso-0006c4-EK
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 11:18:38 +0000
Received: from [85.158.139.211:18137] by server-15.bemta-5.messagelabs.com id
	D7/EE-24395-D85B8F25; Mon, 10 Feb 2014 11:18:37 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392031115!2851407!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15696 invoked from network); 10 Feb 2014 11:18:36 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-13.tower-206.messagelabs.com with SMTP;
	10 Feb 2014 11:18:36 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 10 Feb 2014 03:18:34 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,817,1384329600"; d="scan'208";a="472701828"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga001.fm.intel.com with ESMTP; 10 Feb 2014 03:18:32 -0800
Received: from fmsmsx120.amr.corp.intel.com (10.19.9.29) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 03:18:32 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	fmsmsx120.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 03:18:32 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Mon, 10 Feb 2014 19:18:30 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH v8 6/6] tools: enable Cache QoS Monitoring feature for
	libxl/libxc
Thread-Index: AQHPJkksaIXDDB2LwE6tYeMe5l1qi5quVnSQ
Date: Mon, 10 Feb 2014 11:18:30 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A91193518B@SHSMSX104.ccr.corp.intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
	<1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
	<1392027356.5117.21.camel@kazak.uk.xensource.com>
In-Reply-To: <1392027356.5117.21.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
> Sent: Monday, February 10, 2014 6:16 PM
> To: Xu, Dongxiao
> Cc: xen-devel@lists.xen.org; keir@xen.org; JBeulich@suse.com;
> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
> andrew.cooper3@citrix.com; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov
> Subject: Re: [PATCH v8 6/6] tools: enable Cache QoS Monitoring feature for
> libxl/libxc
> 
> On Sat, 2014-02-08 at 13:07 +0800, Dongxiao Xu wrote:
> > Introduced two new xl commands to attach/detach CQM service for a guest
> > $ xl pqos-attach cqm domid
> > $ xl pqos-detach cqm domid
> >
> > Introduce one new xl command to retrive guest CQM information
> 
> "retrieve"

Thanks.

> 
> > $ xl pqos-list cqm
> 
> Please patch the xl manpages to describe all these new commands.

OK.

> 
> I wonder though -- are these aimed at end users or are they really for
> developer use? I'm wondering if they should be exposed this way or
> whether they should be exposed by some lower level tool (similar to how
> xenpm is separate). I don't know what the correct answer is here.

I think one target should be the sys admins, who need to watch the system cache usage and determine how to balance the resources.

> 
> > +int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
> > +{
> > +    int ret = 0;
> > +    DECLARE_SYSCTL;
> > +
> > +    sysctl.cmd = XEN_SYSCTL_getcqminfo;
> > +    if ( xc_sysctl(xch, &sysctl) < 0 )
> > +        ret = -1;
> 
> xc_sysctl returns -1 on error AFAICT. So:
> 	ret = xc_sysctl(...);
> 	if (ret >= 0)
> 	{
> 		info->etc
> 	}
> 	return ret;

OK.
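
The return-value pattern Ian describes can be sketched standalone like so (the stub types, the XEN_SYSCTL_getcqminfo value and the nr_sockets field below are illustrative stand-ins, not the real libxc definitions):

```c
/*
 * Minimal sketch: propagate xc_sysctl()'s own result instead of tracking a
 * separate flag, and only copy data out to the caller on success.
 */
#include <string.h>

typedef struct { int unused; } xc_interface;          /* stand-in type */
typedef struct { int nr_sockets; } xc_cqminfo_t;      /* stand-in type */
struct xen_sysctl { int cmd; int nr_sockets; };       /* stand-in type */

#define XEN_SYSCTL_getcqminfo 51                      /* illustrative value */

/* Stand-in for the real hypercall wrapper: 0 on success, -1 on error. */
static int xc_sysctl(xc_interface *xch, struct xen_sysctl *sysctl)
{
    (void)xch;
    if ( sysctl->cmd != XEN_SYSCTL_getcqminfo )
        return -1;
    sysctl->nr_sockets = 2;   /* pretend the hypervisor filled this in */
    return 0;
}

int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
{
    struct xen_sysctl sysctl;
    int ret;

    memset(&sysctl, 0, sizeof(sysctl));
    sysctl.cmd = XEN_SYSCTL_getcqminfo;

    ret = xc_sysctl(xch, &sysctl);
    if ( ret >= 0 )
        info->nr_sockets = sysctl.nr_sockets;   /* copy out only on success */

    return ret;
}
```

This keeps the -1/errno convention of the underlying call visible to the caller rather than collapsing it into a locally invented status variable.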

> 
> > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > index 649ce50..43c0f48 100644
> > --- a/tools/libxl/libxl_types.idl
> > +++ b/tools/libxl/libxl_types.idl
> > @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
> >                                   ])),
> >             ("domain_create_console_available", Struct(None, [])),
> >             ]))])
> > +
> > +libxl_cqminfo = Struct("cqminfo", [
> 
> You need to also patch libxl.h to add a suitable LIBXL_HAVE_FOO define,
> see the existing examples in that header.

OK.

Thanks,
Dongxiao

> 
> Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCotk-0006m3-5G; Mon, 10 Feb 2014 11:19:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WCotj-0006lp-9r
	for Xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:19:35 +0000
Received: from [85.158.139.211:14060] by server-12.bemta-5.messagelabs.com id
	D7/A9-15415-6C5B8F25; Mon, 10 Feb 2014 11:19:34 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392031173!2849791!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3940 invoked from network); 10 Feb 2014 11:19:33 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:19:33 -0000
Received: by mail-wg0-f51.google.com with SMTP id z12so4016494wgg.6
	for <Xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 03:19:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=r9FSNX8zkUO44V+n8OsiFJHmwLHA/DZcK5V4nF8T8ec=;
	b=LhjInLTdlnY5x0xBM1K01mmRI7YAGJwuDHeRNlgquetL0AFn8+e74kf315WDwkqnlZ
	PPZl89S2oPBRZ4vL8yCqEROi2imt59nybieGOIsNybjKzVjBegFwpWP9pU5d78170Osi
	tgtZnRsKzbQizAbP0f1veMfgntyvV73IEGA0O+ppXwJe34NQEYE4NFIOdLQhj9Ng/fwU
	WzStt/Wxx8mBPKQQ5JJ7fzzsnENKBeXAkTh18lKSc4t+q7RdFLsBD2w9XCZyOuWg0NNT
	GSaZ89KqCcYM+HZDjVAAxkdn+cxGNpPYA+COcEvILuvcoMZqQ39Sed/IYNnS+BqbSpVq
	nSbw==
X-Gm-Message-State: ALoCoQlYJmMLpShnouumvKzqUM8e7i9bhhCtjFBz4xzV7xLpz5vy78oSNfL2cp5iXxrQDUIa6fmi
X-Received: by 10.180.92.169 with SMTP id cn9mr9804344wib.35.1392031173149;
	Mon, 10 Feb 2014 03:19:33 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id jd2sm33758486wic.9.2014.02.10.03.19.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 03:19:32 -0800 (PST)
Message-ID: <52F8B5C3.1020308@m2r.biz>
Date: Mon, 10 Feb 2014 12:19:31 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, 
 "Peter X. Gao" <peterxianggao@gmail.com>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
	<20140210104223.GN15387@zion.uk.xensource.com>
In-Reply-To: <20140210104223.GN15387@zion.uk.xensource.com>
Cc: Xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/2014 11:42, Wei Liu wrote:
> On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
>> Hi,
>>
>>         I am new to Xen and I am trying to run Intel DPDK inside a domU with
>> virtio on Xen 4.2. Is it possible to do this?
>>

Based on my tests of virtio:
- virtio-serial seems to work out of the box with Windows domUs, and 
also alongside the Xen PV drivers. On Linux domUs it also works out of 
the box with old kernels (tested 2.6.32), but newer kernels (tested 
>=3.2) require pci=nomsi to work correctly (it also works with the Xen 
PVHVM drivers). So far I have not found a solution for the MSI 
problem; there are some posts about it.
- virtio-net used to work out of the box, but with recent qemu 
versions it is broken due to a qemu regression. I narrowed it down 
with bisect (one commit between 4 Jul 2013 and 22 Jul 2013), but I was 
unable to find the exact commit of the regression because there are 
other critical problems with Xen in that range.
- I have not tested virtio-disk, and I do not know whether it works 
with recent Xen and qemu versions.

> DPDK doesn't seem to be tightly coupled with VirtIO, does it?
>
> Could you look at Xen's PV network protocol instead? VirtIO has no
> mainline support on Xen while Xen's PV protocol has been in mainline for
> years. And it's very likely to be enabled by default nowadays.
>
> Wei.
>
>> Regards
>> Peter
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:23:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:23:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCoxn-0007Bz-4M; Mon, 10 Feb 2014 11:23:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCoxl-0007Bu-TG
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:23:46 +0000
Received: from [193.109.254.147:18433] by server-12.bemta-14.messagelabs.com
	id 85/CC-17220-1C6B8F25; Mon, 10 Feb 2014 11:23:45 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392031422!3223081!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3330 invoked from network); 10 Feb 2014 11:23:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:23:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99435606"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 11:23:41 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 06:23:40 -0500
Message-ID: <52F8B6BB.6040205@citrix.com>
Date: Mon, 10 Feb 2014 11:23:39 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Rashika Kheria <rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
	<a7672a06595d907ce9aacc65b3cbe0179684f5e0.1391943416.git.rashika.kheria@gmail.com>
In-Reply-To: <a7672a06595d907ce9aacc65b3cbe0179684f5e0.1391943416.git.rashika.kheria@gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-kernel@vger.kernel.org, josh@joshtriplett.org
Subject: Re: [Xen-devel] [PATCH 2/4] drivers: xen: Include appropriate
 header file in pcpu.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/02/14 11:01, Rashika Kheria wrote:
> Include appropriate header file in xen/pcpu.c because include/xen/acpi.h
> contains prototype declaration of functions defined in the file.
> 
> This eliminates the following warning in xen/pcpu.c:
> drivers/xen/pcpu.c:336:6: warning: no previous prototype for ‘xen_pcpu_hotplug_sync’ [-Wmissing-prototypes]
> drivers/xen/pcpu.c:346:5: warning: no previous prototype for ‘xen_pcpu_id’ [-Wmissing-prototypes]
> 
> Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>
> Reviewed-by: Josh Triplett <josh@joshtriplett.org>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:26:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCp01-0007bu-OE; Mon, 10 Feb 2014 11:26:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCp00-0007bn-Oz
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:26:05 +0000
Received: from [85.158.137.68:29772] by server-4.bemta-3.messagelabs.com id
	0B/07-11750-B47B8F25; Mon, 10 Feb 2014 11:26:03 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392031561!797907!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23414 invoked from network); 10 Feb 2014 11:26:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:26:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99436056"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 11:26:01 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 06:26:00 -0500
Message-ID: <52F8B746.9020008@citrix.com>
Date: Mon, 10 Feb 2014 11:25:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Rashika Kheria <rashika.kheria@gmail.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
	<98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
In-Reply-To: <98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, x86@kernel.org, linux-kernel@vger.kernel.org,
	josh@joshtriplett.org, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/4] drivers: xen: Move prototype
 declaration to header file include/xen/xen-ops.h from
 arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 09/02/14 11:12, Rashika Kheria wrote:
> Move prototype declaration to header file include/xen/xen-ops.h from
> arch/x86/xen/xen-ops.h because it is used by more than one file. Also,
> remove else condition from xen/events/events_base.c to eliminate
> conflicting definitions when CONFIG_XEN_PVHVM is not defined.
> 
> This eliminates the following warning in xen/events/events_base.c:
> drivers/xen/events/events_base.c:1640:6: warning: no previous prototype for ‘xen_callback_vector’ [-Wmissing-prototypes]
[...]
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -38,4 +38,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
>  
>  irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
> +
> +#ifdef CONFIG_XEN_PVHVM
> +void xen_callback_vector(void);
> +#else
> +static inline void xen_callback_vector(void) {}
> +#endif
> +

This should be in include/xen/events.h

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:26:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCp0U-0007fD-Ji; Mon, 10 Feb 2014 11:26:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp0S-0007el-Id; Mon, 10 Feb 2014 11:26:32 +0000
Received: from [85.158.137.68:36912] by server-16.bemta-3.messagelabs.com id
	AA/24-29917-767B8F25; Mon, 10 Feb 2014 11:26:31 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392031589!798079!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28639 invoked from network); 10 Feb 2014 11:26:30 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-2.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Feb 2014 11:26:30 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp0K-000265-Jz; Mon, 10 Feb 2014 11:26:24 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp0J-0004NW-OI; Mon, 10 Feb 2014 11:26:24 +0000
Date: Mon, 10 Feb 2014 11:26:23 +0000
Message-Id: <E1WCp0J-0004NW-OI@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 85 (CVE-2014-1895) - Off-by-one
 error in FLASK_AVC_CACHESTAT hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

             Xen Security Advisory CVE-2014-1895 / XSA-85
                              version 3

          Off-by-one error in FLASK_AVC_CACHESTAT hypercall

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

The FLASK_AVC_CACHESTAT hypercall, which provides access to per-cpu
statistics on the Flask security policy, incorrectly validates the
CPU for which statistics are being requested.
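Decoded, the attached xsa85.patch is a one-character fix: the bounds check used `>` where `>=` was needed, so a CPU index equal to the array size slipped through. A minimal C sketch of the flaw, using illustrative names and a fixed size rather than Xen's actual definitions:

```c
#include <assert.h>

/* Illustrative stand-in: 4 per-CPU stat slots, valid indices 0..3. */
enum { NR_CPU_IDS = 4 };

/* Flawed check: '>' lets cpu == NR_CPU_IDS through, indexing one
 * entry past the end of the per-CPU statistics array. */
int cpu_id_ok_buggy(unsigned int cpu)
{
    return !(cpu > NR_CPU_IDS);
}

/* Fixed check, as in xsa85.patch: reject cpu >= nr_cpu_ids. */
int cpu_id_ok_fixed(unsigned int cpu)
{
    return !(cpu >= NR_CPU_IDS);
}
```

With cpu == 4 the buggy check accepts the request, and the read one slot past the array produces the crash or information leak described above; the fixed check rejects it (the real hypercall returns -ENOENT).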

IMPACT
======

An attacker can cause the hypervisor to read past the end of an
array. This may result in either a host crash, leading to a denial of
service, or access to a small and static region of hypervisor memory,
leading to an information leak.

VULNERABLE SYSTEMS
==================

Xen versions 4.2 and later are vulnerable to this issue when built with
XSM/Flask support. XSM support is disabled by default and is enabled
by building with XSM_ENABLE=y.

Only systems with the maximum supported number of physical CPUs are
vulnerable. Systems with a greater number of physical CPUs will only
make use of the maximum supported number and are therefore also
vulnerable.

By default the following maximums apply:
 * x86_32: 128 (only until Xen 4.2.x)
 * x86_64: 256
These defaults can be overridden at build time via max_phys_cpus=N.

The vulnerable hypercall is exposed to all domains.

MITIGATION
==========

Rebuilding Xen with more supported physical CPUs can avoid the
vulnerability, provided that the supported number is strictly greater
than the actual number of CPUs on any host on which the hypervisor is
to run.

If XSM is compiled in, but not actually in use, compiling it out (with
XSM_ENABLE=n) will avoid the vulnerability.

CREDITS
=======

This issue was discovered by Matthew Daley.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa85.patch        xen-unstable, Xen 4.3.x, Xen 4.2.x

$ sha256sum xsa85*.patch
20571024e6815eeb40d2f92a3d70ae699047cffafb5431ec74b652e0843a5315  xsa85.patch
$

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS+LcqAAoJEIP+FMlX6CvZPk8H/iA8bLP81SKPT6IUlaw8RjzU
ZECj3ord+tLAcjvu93RmI5WVANNscwNdxhBIVQApzFOqMC5LGho5HHXgvi2WuRo4
zc3b4djT0PN6tTMAhJZU9WwZxIQx+60VSDpIJbVGyLrEjGHxS/l/liM3cOuj5FZs
ZpT3cQ47yHskkgCXGhdR4keAaXEA9qBtQ6EbraMWt/ynjXmZ2UGQyRB+md3IaG38
FOhzVIVvsGJ0ZrxhByrBrNYN04Fdnqx707dNIg5fYflqzuTJkuMiL4dLlBJBMeiP
aVEIAW1TD3ObiXNbC3/AjrXdgttA5e1JIHGJb9LV0RO1rhjuyZGLiLNp+Omx3KI=
=wpcu
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa85.patch"
Content-Disposition: attachment; filename="xsa85.patch"
Content-Transfer-Encoding: base64

RnJvbSA1OTNiYzhjNjNkNTgyZWMwZmMyYjNhMzUzMzYxMDZjZjljM2E4YjM0
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBNYXR0aGV3IERhbGV5
IDxtYXR0ZEBidWdmdXp6LmNvbT4KRGF0ZTogU3VuLCAxMiBKYW4gMjAxNCAx
NDoyOTozMiArMTMwMApTdWJqZWN0OiBbUEFUQ0hdIHhzbS9mbGFzazogY29y
cmVjdCBvZmYtYnktb25lIGluCiBmbGFza19zZWN1cml0eV9hdmNfY2FjaGVz
dGF0cyBjcHUgaWQgY2hlY2sKClRoaXMgaXMgWFNBLTg1CgpTaWduZWQtb2Zm
LWJ5OiBNYXR0aGV3IERhbGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KUmV2aWV3
ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3
ZWQtYnk6IElhbiBDYW1wYmVsbCA8aWFuLmNhbXBiZWxsQGNpdHJpeC5jb20+
Ci0tLQogeGVuL3hzbS9mbGFzay9mbGFza19vcC5jIHwgMiArLQogMSBmaWxl
IGNoYW5nZWQsIDEgaW5zZXJ0aW9uKCspLCAxIGRlbGV0aW9uKC0pCgpkaWZm
IC0tZ2l0IGEveGVuL3hzbS9mbGFzay9mbGFza19vcC5jIGIveGVuL3hzbS9m
bGFzay9mbGFza19vcC5jCmluZGV4IDQ0MjZhYjkuLjIyODc4ZjUgMTAwNjQ0
Ci0tLSBhL3hlbi94c20vZmxhc2svZmxhc2tfb3AuYworKysgYi94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKQEAgLTQ1Nyw3ICs0NTcsNyBAQCBzdGF0aWMg
aW50IGZsYXNrX3NlY3VyaXR5X2F2Y19jYWNoZXN0YXRzKHN0cnVjdCB4ZW5f
Zmxhc2tfY2FjaGVfc3RhdHMgKmFyZykKIHsKICAgICBzdHJ1Y3QgYXZjX2Nh
Y2hlX3N0YXRzICpzdDsKIAotICAgIGlmICggYXJnLT5jcHUgPiBucl9jcHVf
aWRzICkKKyAgICBpZiAoIGFyZy0+Y3B1ID49IG5yX2NwdV9pZHMgKQogICAg
ICAgICByZXR1cm4gLUVOT0VOVDsKICAgICBpZiAoICFjcHVfb25saW5lKGFy
Zy0+Y3B1KSApCiAgICAgICAgIHJldHVybiAtRU5PRU5UOwotLSAKMS44LjUu
MgoK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--



From xen-devel-bounces@lists.xen.org Mon Feb 10 11:26:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCp0m-0007kN-8j; Mon, 10 Feb 2014 11:26:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp0k-0007jL-5z; Mon, 10 Feb 2014 11:26:50 +0000
Received: from [85.158.143.35:36616] by server-1.bemta-4.messagelabs.com id
	89/97-31661-977B8F25; Mon, 10 Feb 2014 11:26:49 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392031607!4487956!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26827 invoked from network); 10 Feb 2014 11:26:48 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-16.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Feb 2014 11:26:48 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp0e-00026M-1r; Mon, 10 Feb 2014 11:26:44 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp0d-0004Oa-VA; Mon, 10 Feb 2014 11:26:44 +0000
Date: Mon, 10 Feb 2014 11:26:43 +0000
Message-Id: <E1WCp0d-0004Oa-VA@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 86 (CVE-2014-1896) - libvchan
 failure handling malicious ring indexes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

             Xen Security Advisory CVE-2014-1896 / XSA-86
                              version 3

           libvchan failure handling malicious ring indexes

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

libvchan (a library for inter-domain communication) does not correctly
handle unusual or malicious contents in the xenstore ring.  A
malicious guest can exploit this to cause a libvchan-using facility to
read or write past the end of the ring.
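The attached patch (readable once base64-decoded) addresses this by computing the ring's available-byte counts with unsigned arithmetic and clamping implausible values before they can reach a memcpy length. A minimal sketch of that idea, with illustrative names and a fixed ring size rather than libvchan's actual accessors:

```c
#include <stdint.h>

enum { RING_SIZE = 1024 };  /* illustrative; libvchan reads this from the ring */

/* A hostile peer can set the producer/consumer indices to any value at
 * any time. Unsigned subtraction wraps, so a "negative" count shows up
 * as a huge value; anything >= RING_SIZE is an implausible ring state
 * and is clamped to 0 rather than used as a copy length. */
int sanitised_data_ready(uint32_t prod, uint32_t cons)
{
    uint32_t ready = prod - cons;
    if (ready >= RING_SIZE)
        return 0;
    return (int)ready;  /* now a safe, in-range int */
}
```

Per the patch description, do_send/do_recv additionally mask ring indices with the ring size, so even an inconsistent-but-plausible state only ever touches bytes within the ring.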

IMPACT
======

libvchan-using facilities are vulnerable to denial of service and
perhaps privilege escalation.

There are no such services provided in the upstream Xen Project
codebase.

VULNERABLE SYSTEMS
==================

All versions of libvchan are vulnerable.  Only installations which use
libvchan for communication involving untrusted domains are vulnerable.

libvirt, xapi, xend, libxl and xl do not use libvchan.  If your
installation contains other Xen-related software components it is
possible that they use libvchan and might be vulnerable.

Xen versions 4.1 and earlier do not contain libvchan.

MITIGATION
==========

Disabling libvchan-based facilities could be used to mitigate the
vulnerability.

CREDITS
=======

This issue was discovered by Marek Marczykowski-Górecki of Invisible
Things Lab.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

After the patch is applied to the Xen tree and built, any software
which is statically linked against libvchan will need to be relinked
against the new libvchan.a for the fix to take effect.

xsa86.patch        Xen 4.2.x, 4.3.x, 4.4-RC series, and xen-unstable

$ sha256sum xsa86*.patch
cd2df017e42717dd2a1b6f2fdd3ad30a38d3c0fbdd9d08b5f56ee0a01cd87b51  xsa86.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS+LcuAAoJEIP+FMlX6CvZBjgH/RdmdarkaX/Bravq46egUtWT
OohBLoP+tnkg3w3DSvWlD45dlnwH2ptD/PTxyoH7XMoiajX0h3WRYf8ddu63Nwtl
qghb6EDuYF+iLf9nthdYqreVLdKQOJYXCv6c3i6odHRzGadb3cWTIv1xSDZcn+Qw
djSk2huXpuRVkpJeX05PNCkBktRe0Shwy0zgTUNC0GjWItma+NIKdvRODkON1Ai9
ilRsmlQXc2BJ7RcJGmvtcHEdIgLMJ8MzRZWspFPTuqRbQ1+XUJUxxQvJBAqIYRQ3
29iS0GxqXZDSWtTlY4xwAEdwtzsqVZx8VMQioxLUSB4fqm1s4XEfQEkH5VwoBs8=
=HSDt
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa86.patch"
Content-Disposition: attachment; filename="xsa86.patch"
Content-Transfer-Encoding: base64

RnJvbSBiNGM0NTI2NDZlZmQzN2I0Y2QwOTk2MjU2ZGQwYWI3YmY2Y2NiN2Y2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiA9P1VURi04P3E/TWFy
ZWs9MjBNYXJjenlrb3dza2ktRz1DMz1CM3JlY2tpPz0KIDxtYXJtYXJla0Bp
bnZpc2libGV0aGluZ3NsYWIuY29tPgpEYXRlOiBNb24sIDIwIEphbiAyMDE0
IDE1OjUxOjU2ICswMDAwClN1YmplY3Q6IFtQQVRDSF0gbGlidmNoYW46IEZp
eCBoYW5kbGluZyBvZiBpbnZhbGlkIHJpbmcgYnVmZmVyIGluZGljZXMKTUlN
RS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBlOiB0ZXh0L3BsYWluOyBjaGFy
c2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDhiaXQKClRo
ZSByZW1vdGUgKGhvc3RpbGUpIHByb2Nlc3MgY2FuIHNldCByaW5nIGJ1ZmZl
ciBpbmRpY2VzIHRvIGFueSB2YWx1ZQphdCBhbnkgdGltZS4gSWYgdGhhdCBo
YXBwZW5zLCBpdCBpcyBwb3NzaWJsZSB0byBnZXQgImJ1ZmZlciBzcGFjZSIK
KGVpdGhlciBmb3Igd3JpdGluZyBkYXRhLCBvciByZWFkeSBmb3IgcmVhZGlu
ZykgbmVnYXRpdmUgb3IgZ3JlYXRlcgp0aGFuIGJ1ZmZlciBzaXplLiAgVGhp
cyB3aWxsIGVuZCB1cCB3aXRoIGJ1ZmZlciBvdmVyZmxvdyBpbiB0aGUgc2Vj
b25kCm1lbWNweSBpbnNpZGUgb2YgZG9fc2VuZC9kb19yZWN2LgoKRml4IHRo
aXMgYnkgaW50cm9kdWNpbmcgbmV3IGF2YWlsYWJsZSBieXRlcyBhY2Nlc3Nv
ciBmdW5jdGlvbnMKcmF3X2dldF9kYXRhX3JlYWR5IGFuZCByYXdfZ2V0X2J1
ZmZlcl9zcGFjZSB3aGljaCBhcmUgcm9idXN0IGFnYWluc3QKbWFkIHJpbmcg
c3RhdGVzLCBhbmQgb25seSByZXR1cm4gc2FuaXRpc2VkIHZhbHVlcy4KClBy
b29mIHNrZXRjaCBvZiBjb3JyZWN0bmVzczoKCk5vdyB7cmQsd3J9X3tjb25z
LHByb2R9IGFyZSBvbmx5IGV2ZXIgdXNlZCBpbiB0aGUgcmF3IGF2YWlsYWJs
ZSBieXRlcwpmdW5jdGlvbnMsIGFuZCBpbiBkb19zZW5kIGFuZCBkb19yZWN2
LgoKVGhlIHJhdyBhdmFpbGFibGUgYnl0ZXMgZnVuY3Rpb25zIGRvIHVuc2ln
bmVkIGFyaXRobWV0aWMgb24gdGhlCnJldHVybmVkIHZhbHVlcy4gIElmIHRo
ZSByZXN1bHQgaXMgIm5lZ2F0aXZlIiBvciB0b28gYmlnIGl0IHdpbGwgYmUK
PnJpbmdfc2l6ZSAoc2luY2Ugd2UgdXNlZCB1bnNpZ25lZCBhcml0aG1ldGlj
KS4gIE90aGVyd2lzZSB0aGUgcmVzdWx0CmlzIGEgcG9zaXRpdmUgaW4tcmFu
Z2UgdmFsdWUgcmVwcmVzZW50aW5nIGEgcmVhc29uYWJsZSByaW5nIHN0YXRl
LCBpbgp3aGljaCBjYXNlIHdlIGNhbiBzYWZlbHkgY29udmVydCBpdCB0byBp
bnQgKGFzIHRoZSByZXN0IG9mIHRoZSBjb2RlCmV4cGVjdHMpLgoKZG9fc2Vu
ZCBhbmQgZG9fcmVjdiBpbW1lZGlhdGVseSBtYXNrIHRoZSByaW5nIGluZGV4
IHZhbHVlIHdpdGggdGhlCnJpbmcgc2l6ZS4gIFRoZSByZXN1bHQgaXMgYWx3
YXlzIGdvaW5nIHRvIGJlIHBsYXVzaWJsZS4gIElmIHRoZSByaW5nCnN0YXRl
IGhhcyBiZWNvbWUgbWFkLCB0aGUgd29yc3QgY2FzZSBpcyB0aGF0IG91ciBi
ZWhhdmlvdXIgaXMKaW5jb25zaXN0ZW50IHdpdGggdGhlIHBlZXIncyByaW5n
IHBvaW50ZXIuICBJLmUuIHdlIHJlYWQgb3Igd3JpdGUgdG8KYXJndWFibHkt
aW5jb3JyZWN0IHBhcnRzIG9mIHRoZSByaW5nIC0gYnV0IGFsd2F5cyBwYXJ0
cyBvZiB0aGUgcmluZy4KQW5kIG9mIGNvdXJzZSBpZiBhIHBlZXIgbWlzb3Bl
cmF0ZXMgdGhlIHJpbmcgdGhleSBjYW4gYWNoaWV2ZSB0aGlzCmVmZmVjdCBh
bnl3YXkuCgpTbyB0aGUgc2VjdXJpdHkgcHJvYmxlbSBpcyBmaXhlZC4KClRo
aXMgaXMgWFNBLTg2LgoKKFRoZSBwYXRjaCBpcyBlc3NlbnRpYWxseSBJYW4g
SmFja3NvbidzIHdvcmssIGFsdGhvdWdoIHBhcnRzIG9mIHRoZQpjb21taXQg
bWVzc2FnZSBhcmUgYnkgTWFyZWsuKQoKU2lnbmVkLW9mZi1ieTogTWFyZWsg
TWFyY3p5a293c2tpLUfDs3JlY2tpIDxtYXJtYXJla0BpbnZpc2libGV0aGlu
Z3NsYWIuY29tPgpTaWduZWQtb2ZmLWJ5OiBJYW4gSmFja3NvbiA8aWFuLmph
Y2tzb25AZXUuY2l0cml4LmNvbT4KQ2M6IE1hcmVrIE1hcmN6eWtvd3NraS1H
w7NyZWNraSA8bWFybWFyZWtAaW52aXNpYmxldGhpbmdzbGFiLmNvbT4KQ2M6
IEpvYW5uYSBSdXRrb3dza2EgPGpvYW5uYUBpbnZpc2libGV0aGluZ3NsYWIu
Y29tPgotLS0KIHRvb2xzL2xpYnZjaGFuL2lvLmMgfCAgIDQ3ICsrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrLS0tLS0tCiAxIGZp
bGUgY2hhbmdlZCwgNDEgaW5zZXJ0aW9ucygrKSwgNiBkZWxldGlvbnMoLSkK
CmRpZmYgLS1naXQgYS90b29scy9saWJ2Y2hhbi9pby5jIGIvdG9vbHMvbGli
dmNoYW4vaW8uYwppbmRleCAyMzgzMzY0Li44MDRjNjNjIDEwMDY0NAotLS0g
YS90b29scy9saWJ2Y2hhbi9pby5jCisrKyBiL3Rvb2xzL2xpYnZjaGFuL2lv
LmMKQEAgLTExMSwxMiArMTExLDI2IEBAIHN0YXRpYyBpbmxpbmUgaW50IHNl
bmRfbm90aWZ5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgdWludDhfdCBi
aXQpCiAJCXJldHVybiAwOwogfQogCisvKgorICogR2V0IHRoZSBhbW91bnQg
b2YgYnVmZmVyIHNwYWNlIGF2YWlsYWJsZSwgYW5kIGRvIG5vdGhpbmcgYWJv
dXQKKyAqIG5vdGlmaWNhdGlvbnMuCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50
IHJhd19nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
Cit7CisJdWludDMyX3QgcmVhZHkgPSByZF9wcm9kKGN0cmwpIC0gcmRfY29u
cyhjdHJsKTsKKwlpZiAocmVhZHkgPj0gcmRfcmluZ19zaXplKGN0cmwpKQor
CQkvKiBXZSBoYXZlIG5vIHdheSB0byByZXR1cm4gZXJyb3JzLiAgTG9ja2lu
ZyB1cCB0aGUgcmluZyBpcworCQkgKiBiZXR0ZXIgdGhhbiB0aGUgYWx0ZXJu
YXRpdmVzLiAqLworCQlyZXR1cm4gMDsKKwlyZXR1cm4gcmVhZHk7Cit9CisK
IC8qKgogICogR2V0IHRoZSBhbW91bnQgb2YgYnVmZmVyIHNwYWNlIGF2YWls
YWJsZSBhbmQgZW5hYmxlIG5vdGlmaWNhdGlvbnMgaWYgbmVlZGVkLgogICov
CiBzdGF0aWMgaW5saW5lIGludCBmYXN0X2dldF9kYXRhX3JlYWR5KHN0cnVj
dCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJlcXVlc3QpCiB7Ci0JaW50
IHJlYWR5ID0gcmRfcHJvZChjdHJsKSAtIHJkX2NvbnMoY3RybCk7CisJaW50
IHJlYWR5ID0gcmF3X2dldF9kYXRhX3JlYWR5KGN0cmwpOwogCWlmIChyZWFk
eSA+PSByZXF1ZXN0KQogCQlyZXR1cm4gcmVhZHk7CiAJLyogV2UgcGxhbiB0
byBjb25zdW1lIGFsbCBkYXRhOyBwbGVhc2UgdGVsbCB1cyBpZiB5b3Ugc2Vu
ZCBtb3JlICovCkBAIC0xMjYsNyArMTQwLDcgQEAgc3RhdGljIGlubGluZSBp
bnQgZmFzdF9nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0
cmwsIHNpemVfdCByZXF1ZXN0KQogCSAqIHdpbGwgbm90IGdldCBub3RpZmll
ZCBldmVuIHRob3VnaCB0aGUgYWN0dWFsIGFtb3VudCBvZiBkYXRhIHJlYWR5
IGlzCiAJICogYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHJkX3Byb2QgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiByZF9wcm9kKGN0cmwpIC0g
cmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRhX3JlYWR5KGN0
cmwpOwogfQogCiBpbnQgbGlieGVudmNoYW5fZGF0YV9yZWFkeShzdHJ1Y3Qg
bGlieGVudmNoYW4gKmN0cmwpCkBAIC0xMzUsNyArMTQ5LDIxIEBAIGludCBs
aWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCkKIAkgKiB3aGVuIGl0IGNoYW5nZXMKIAkgKi8KIAlyZXF1ZXN0X25vdGlm
eShjdHJsLCBWQ0hBTl9OT1RJRllfV1JJVEUpOwotCXJldHVybiByZF9wcm9k
KGN0cmwpIC0gcmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRh
X3JlYWR5KGN0cmwpOworfQorCisvKioKKyAqIEdldCB0aGUgYW1vdW50IG9m
IGJ1ZmZlciBzcGFjZSBhdmFpbGFibGUsIGFuZCBkbyBub3RoaW5nCisgKiBh
Ym91dCBub3RpZmljYXRpb25zCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50IHJh
d19nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCkK
K3sKKwl1aW50MzJfdCByZWFkeSA9IHdyX3Jpbmdfc2l6ZShjdHJsKSAtICh3
cl9wcm9kKGN0cmwpIC0gd3JfY29ucyhjdHJsKSk7CisJaWYgKHJlYWR5ID4g
d3JfcmluZ19zaXplKGN0cmwpKQorCQkvKiBXZSBoYXZlIG5vIHdheSB0byBy
ZXR1cm4gZXJyb3JzLiAgTG9ja2luZyB1cCB0aGUgcmluZyBpcworCQkgKiBi
ZXR0ZXIgdGhhbiB0aGUgYWx0ZXJuYXRpdmVzLiAqLworCQlyZXR1cm4gMDsK
KwlyZXR1cm4gcmVhZHk7CiB9CiAKIC8qKgpAQCAtMTQzLDcgKzE3MSw3IEBA
IGludCBsaWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hh
biAqY3RybCkKICAqLwogc3RhdGljIGlubGluZSBpbnQgZmFzdF9nZXRfYnVm
ZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJl
cXVlc3QpCiB7Ci0JaW50IHJlYWR5ID0gd3JfcmluZ19zaXplKGN0cmwpIC0g
KHdyX3Byb2QoY3RybCkgLSB3cl9jb25zKGN0cmwpKTsKKwlpbnQgcmVhZHkg
PSByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIAlpZiAocmVhZHkgPj0g
cmVxdWVzdCkKIAkJcmV0dXJuIHJlYWR5OwogCS8qIFdlIHBsYW4gdG8gZmls
bCB0aGUgYnVmZmVyOyBwbGVhc2UgdGVsbCB1cyB3aGVuIHlvdSd2ZSByZWFk
IGl0ICovCkBAIC0xNTMsNyArMTgxLDcgQEAgc3RhdGljIGlubGluZSBpbnQg
ZmFzdF9nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCwgc2l6ZV90IHJlcXVlc3QKIAkgKiB3aWxsIG5vdCBnZXQgbm90aWZpZWQg
ZXZlbiB0aG91Z2ggdGhlIGFjdHVhbCBhbW91bnQgb2YgYnVmZmVyIHNwYWNl
CiAJICogaXMgYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHdyX2NvbnMgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiB3cl9yaW5nX3NpemUoY3Ry
bCkgLSAod3JfcHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVy
biByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhl
bnZjaGFuX2J1ZmZlcl9zcGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
CkBAIC0xNjIsNyArMTkwLDcgQEAgaW50IGxpYnhlbnZjaGFuX2J1ZmZlcl9z
cGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwpCiAJICogd2hlbiBpdCBj
aGFuZ2VzCiAJICovCiAJcmVxdWVzdF9ub3RpZnkoY3RybCwgVkNIQU5fTk9U
SUZZX1JFQUQpOwotCXJldHVybiB3cl9yaW5nX3NpemUoY3RybCkgLSAod3Jf
cHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVybiByYXdfZ2V0
X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhlbnZjaGFuX3dh
aXQoc3RydWN0IGxpYnhlbnZjaGFuICpjdHJsKQpAQCAtMTc2LDYgKzIwNCw4
IEBAIGludCBsaWJ4ZW52Y2hhbl93YWl0KHN0cnVjdCBsaWJ4ZW52Y2hhbiAq
Y3RybCkKIAogLyoqCiAgKiByZXR1cm5zIC0xIG9uIGVycm9yLCBvciBzaXpl
IG9uIHN1Y2Nlc3MKKyAqCisgKiBjYWxsZXIgbXVzdCBoYXZlIGNoZWNrZWQg
dGhhdCBlbm91Z2ggc3BhY2UgaXMgYXZhaWxhYmxlCiAgKi8KIHN0YXRpYyBp
bnQgZG9fc2VuZChzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIGNvbnN0IHZv
aWQgKmRhdGEsIHNpemVfdCBzaXplKQogewpAQCAtMjQ4LDYgKzI3OCwxMSBA
QCBpbnQgbGlieGVudmNoYW5fd3JpdGUoc3RydWN0IGxpYnhlbnZjaGFuICpj
dHJsLCBjb25zdCB2b2lkICpkYXRhLCBzaXplX3Qgc2l6ZSkKIAl9CiB9CiAK
Ky8qKgorICogcmV0dXJucyAtMSBvbiBlcnJvciwgb3Igc2l6ZSBvbiBzdWNj
ZXNzCisgKgorICogY2FsbGVyIG11c3QgaGF2ZSBjaGVja2VkIHRoYXQgZW5v
dWdoIGRhdGEgaXMgYXZhaWxhYmxlCisgKi8KIHN0YXRpYyBpbnQgZG9fcmVj
dihzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIHZvaWQgKmRhdGEsIHNpemVf
dCBzaXplKQogewogCWludCByZWFsX2lkeCA9IHJkX2NvbnMoY3RybCkgJiAo
cmRfcmluZ19zaXplKGN0cmwpIC0gMSk7Ci0tIAoxLjcuMTAuNAoK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


bmRfbm90aWZ5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgdWludDhfdCBi
aXQpCiAJCXJldHVybiAwOwogfQogCisvKgorICogR2V0IHRoZSBhbW91bnQg
b2YgYnVmZmVyIHNwYWNlIGF2YWlsYWJsZSwgYW5kIGRvIG5vdGhpbmcgYWJv
dXQKKyAqIG5vdGlmaWNhdGlvbnMuCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50
IHJhd19nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
Cit7CisJdWludDMyX3QgcmVhZHkgPSByZF9wcm9kKGN0cmwpIC0gcmRfY29u
cyhjdHJsKTsKKwlpZiAocmVhZHkgPj0gcmRfcmluZ19zaXplKGN0cmwpKQor
CQkvKiBXZSBoYXZlIG5vIHdheSB0byByZXR1cm4gZXJyb3JzLiAgTG9ja2lu
ZyB1cCB0aGUgcmluZyBpcworCQkgKiBiZXR0ZXIgdGhhbiB0aGUgYWx0ZXJu
YXRpdmVzLiAqLworCQlyZXR1cm4gMDsKKwlyZXR1cm4gcmVhZHk7Cit9CisK
IC8qKgogICogR2V0IHRoZSBhbW91bnQgb2YgYnVmZmVyIHNwYWNlIGF2YWls
YWJsZSBhbmQgZW5hYmxlIG5vdGlmaWNhdGlvbnMgaWYgbmVlZGVkLgogICov
CiBzdGF0aWMgaW5saW5lIGludCBmYXN0X2dldF9kYXRhX3JlYWR5KHN0cnVj
dCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJlcXVlc3QpCiB7Ci0JaW50
IHJlYWR5ID0gcmRfcHJvZChjdHJsKSAtIHJkX2NvbnMoY3RybCk7CisJaW50
IHJlYWR5ID0gcmF3X2dldF9kYXRhX3JlYWR5KGN0cmwpOwogCWlmIChyZWFk
eSA+PSByZXF1ZXN0KQogCQlyZXR1cm4gcmVhZHk7CiAJLyogV2UgcGxhbiB0
byBjb25zdW1lIGFsbCBkYXRhOyBwbGVhc2UgdGVsbCB1cyBpZiB5b3Ugc2Vu
ZCBtb3JlICovCkBAIC0xMjYsNyArMTQwLDcgQEAgc3RhdGljIGlubGluZSBp
bnQgZmFzdF9nZXRfZGF0YV9yZWFkeShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0
cmwsIHNpemVfdCByZXF1ZXN0KQogCSAqIHdpbGwgbm90IGdldCBub3RpZmll
ZCBldmVuIHRob3VnaCB0aGUgYWN0dWFsIGFtb3VudCBvZiBkYXRhIHJlYWR5
IGlzCiAJICogYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHJkX3Byb2QgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiByZF9wcm9kKGN0cmwpIC0g
cmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRhX3JlYWR5KGN0
cmwpOwogfQogCiBpbnQgbGlieGVudmNoYW5fZGF0YV9yZWFkeShzdHJ1Y3Qg
bGlieGVudmNoYW4gKmN0cmwpCkBAIC0xMzUsNyArMTQ5LDIxIEBAIGludCBs
aWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCkKIAkgKiB3aGVuIGl0IGNoYW5nZXMKIAkgKi8KIAlyZXF1ZXN0X25vdGlm
eShjdHJsLCBWQ0hBTl9OT1RJRllfV1JJVEUpOwotCXJldHVybiByZF9wcm9k
KGN0cmwpIC0gcmRfY29ucyhjdHJsKTsKKwlyZXR1cm4gcmF3X2dldF9kYXRh
X3JlYWR5KGN0cmwpOworfQorCisvKioKKyAqIEdldCB0aGUgYW1vdW50IG9m
IGJ1ZmZlciBzcGFjZSBhdmFpbGFibGUsIGFuZCBkbyBub3RoaW5nCisgKiBh
Ym91dCBub3RpZmljYXRpb25zCisgKi8KK3N0YXRpYyBpbmxpbmUgaW50IHJh
d19nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCkK
K3sKKwl1aW50MzJfdCByZWFkeSA9IHdyX3Jpbmdfc2l6ZShjdHJsKSAtICh3
cl9wcm9kKGN0cmwpIC0gd3JfY29ucyhjdHJsKSk7CisJaWYgKHJlYWR5ID4g
d3JfcmluZ19zaXplKGN0cmwpKQorCQkvKiBXZSBoYXZlIG5vIHdheSB0byBy
ZXR1cm4gZXJyb3JzLiAgTG9ja2luZyB1cCB0aGUgcmluZyBpcworCQkgKiBi
ZXR0ZXIgdGhhbiB0aGUgYWx0ZXJuYXRpdmVzLiAqLworCQlyZXR1cm4gMDsK
KwlyZXR1cm4gcmVhZHk7CiB9CiAKIC8qKgpAQCAtMTQzLDcgKzE3MSw3IEBA
IGludCBsaWJ4ZW52Y2hhbl9kYXRhX3JlYWR5KHN0cnVjdCBsaWJ4ZW52Y2hh
biAqY3RybCkKICAqLwogc3RhdGljIGlubGluZSBpbnQgZmFzdF9nZXRfYnVm
ZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3RybCwgc2l6ZV90IHJl
cXVlc3QpCiB7Ci0JaW50IHJlYWR5ID0gd3JfcmluZ19zaXplKGN0cmwpIC0g
KHdyX3Byb2QoY3RybCkgLSB3cl9jb25zKGN0cmwpKTsKKwlpbnQgcmVhZHkg
PSByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIAlpZiAocmVhZHkgPj0g
cmVxdWVzdCkKIAkJcmV0dXJuIHJlYWR5OwogCS8qIFdlIHBsYW4gdG8gZmls
bCB0aGUgYnVmZmVyOyBwbGVhc2UgdGVsbCB1cyB3aGVuIHlvdSd2ZSByZWFk
IGl0ICovCkBAIC0xNTMsNyArMTgxLDcgQEAgc3RhdGljIGlubGluZSBpbnQg
ZmFzdF9nZXRfYnVmZmVyX3NwYWNlKHN0cnVjdCBsaWJ4ZW52Y2hhbiAqY3Ry
bCwgc2l6ZV90IHJlcXVlc3QKIAkgKiB3aWxsIG5vdCBnZXQgbm90aWZpZWQg
ZXZlbiB0aG91Z2ggdGhlIGFjdHVhbCBhbW91bnQgb2YgYnVmZmVyIHNwYWNl
CiAJICogaXMgYWJvdmUgcmVxdWVzdC4gUmVyZWFkIHdyX2NvbnMgdG8gY292
ZXIgdGhpcyBjYXNlLgogCSAqLwotCXJldHVybiB3cl9yaW5nX3NpemUoY3Ry
bCkgLSAod3JfcHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVy
biByYXdfZ2V0X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhl
bnZjaGFuX2J1ZmZlcl9zcGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwp
CkBAIC0xNjIsNyArMTkwLDcgQEAgaW50IGxpYnhlbnZjaGFuX2J1ZmZlcl9z
cGFjZShzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwpCiAJICogd2hlbiBpdCBj
aGFuZ2VzCiAJICovCiAJcmVxdWVzdF9ub3RpZnkoY3RybCwgVkNIQU5fTk9U
SUZZX1JFQUQpOwotCXJldHVybiB3cl9yaW5nX3NpemUoY3RybCkgLSAod3Jf
cHJvZChjdHJsKSAtIHdyX2NvbnMoY3RybCkpOworCXJldHVybiByYXdfZ2V0
X2J1ZmZlcl9zcGFjZShjdHJsKTsKIH0KIAogaW50IGxpYnhlbnZjaGFuX3dh
aXQoc3RydWN0IGxpYnhlbnZjaGFuICpjdHJsKQpAQCAtMTc2LDYgKzIwNCw4
IEBAIGludCBsaWJ4ZW52Y2hhbl93YWl0KHN0cnVjdCBsaWJ4ZW52Y2hhbiAq
Y3RybCkKIAogLyoqCiAgKiByZXR1cm5zIC0xIG9uIGVycm9yLCBvciBzaXpl
IG9uIHN1Y2Nlc3MKKyAqCisgKiBjYWxsZXIgbXVzdCBoYXZlIGNoZWNrZWQg
dGhhdCBlbm91Z2ggc3BhY2UgaXMgYXZhaWxhYmxlCiAgKi8KIHN0YXRpYyBp
bnQgZG9fc2VuZChzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIGNvbnN0IHZv
aWQgKmRhdGEsIHNpemVfdCBzaXplKQogewpAQCAtMjQ4LDYgKzI3OCwxMSBA
QCBpbnQgbGlieGVudmNoYW5fd3JpdGUoc3RydWN0IGxpYnhlbnZjaGFuICpj
dHJsLCBjb25zdCB2b2lkICpkYXRhLCBzaXplX3Qgc2l6ZSkKIAl9CiB9CiAK
Ky8qKgorICogcmV0dXJucyAtMSBvbiBlcnJvciwgb3Igc2l6ZSBvbiBzdWNj
ZXNzCisgKgorICogY2FsbGVyIG11c3QgaGF2ZSBjaGVja2VkIHRoYXQgZW5v
dWdoIGRhdGEgaXMgYXZhaWxhYmxlCisgKi8KIHN0YXRpYyBpbnQgZG9fcmVj
dihzdHJ1Y3QgbGlieGVudmNoYW4gKmN0cmwsIHZvaWQgKmRhdGEsIHNpemVf
dCBzaXplKQogewogCWludCByZWFsX2lkeCA9IHJkX2NvbnMoY3RybCkgJiAo
cmRfcmluZ19zaXplKGN0cmwpIC0gMSk7Ci0tIAoxLjcuMTAuNAoK

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Feb 10 11:30:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCp3R-0000CV-PB; Mon, 10 Feb 2014 11:29:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp3N-0000B6-In; Mon, 10 Feb 2014 11:29:33 +0000
Received: from [193.109.254.147:17754] by server-3.bemta-14.messagelabs.com id
	A5/98-00432-C18B8F25; Mon, 10 Feb 2014 11:29:32 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392031770!3201751!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18772 invoked from network); 10 Feb 2014 11:29:31 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-2.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	10 Feb 2014 11:29:31 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp3H-00028Y-Ew; Mon, 10 Feb 2014 11:29:27 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WCp3H-0004it-7u; Mon, 10 Feb 2014 11:29:27 +0000
Date: Mon, 10 Feb 2014 11:29:27 +0000
Message-Id: <E1WCp3H-0004it-7u@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 84 (CVE-2014-1891, CVE-2014-1892,
 CVE-2014-1893,
 CVE-2014-1894) - integer overflow in several XSM/Flask hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

 Xen Security Advisory CVE-2014-1891,CVE-2014-1892,CVE-2014-1893,CVE-2014-1894 / XSA-84
                              version 3

           integer overflow in several XSM/Flask hypercalls

UPDATES IN VERSION 3
====================

CVE numbers have been assigned.

ISSUE DESCRIPTION
=================

The FLASK_{GET,SET}BOOL, FLASK_USER and FLASK_CONTEXT_TO_SID
suboperations of the flask hypercall are vulnerable to an integer
overflow on the input size. The hypercalls attempt to allocate a
buffer which is 1 byte larger than this size; the addition can
overflow, leading to the allocation of, and subsequent access beyond,
a zero byte buffer.  (CVE-2014-1891)

Xen 3.3 through 4.1, while not affected by the above overflow, have a
different overflow issue on FLASK_{GET,SET}BOOL (CVE-2014-1893) and
expose unreasonably large memory allocations to arbitrary guests
(CVE-2014-1892).

Xen 3.2 (and presumably earlier) exhibit both problems with the
overflow issue being present for more than just the suboperations
listed above.  (CVE-2014-1894 for the subops not covered above.)

The FLASK_GETBOOL op is available to all domains.

The FLASK_SETBOOL op is only available to domains which are granted
access via the Flask policy.  However, the permissions check is
performed only after running the vulnerable code, so the vulnerability
via this subop is exposed to all domains.

The FLASK_USER and FLASK_CONTEXT_TO_SID ops are only available to
domains which are granted access via the Flask policy.

IMPACT
======

Attempting to access the result of a zero byte allocation results in
a processor fault leading to a denial of service.

VULNERABLE SYSTEMS
==================

All Xen versions back to at least 3.2 are vulnerable to this issue
when built with XSM/Flask support.  XSM support is disabled by default
and is enabled by building with XSM_ENABLE=y.

We have not checked earlier versions of Xen, but it is likely that
they are vulnerable to this or related vulnerabilities.

MITIGATION
==========

There is no useful mitigation available in installations where XSM
support is actually in use.

In other systems, compiling XSM out (with XSM_ENABLE=n) will avoid
the vulnerability.

CREDITS
=======

This issue was discovered by Matthew Daley.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa84-unstable-4.3.patch        xen-unstable, Xen 4.3.x
xsa84-4.2.patch                 Xen 4.2.x
xsa84-4.1.patch                 Xen 4.1.x


$ sha256sum xsa84*.patch
e33dd94499959363ad01bebefda9733683c49fd42a9641cf2d7edcd87f853d55  xsa84-4.1.patch
433f3c8a202482c51a48dc0e9e47ac8751d1c0d0759b7bcd22804e1856279a89  xsa84-4.2.patch
64ae433eb606c5446184c08e6fceb9f660ed9a9c28ec112c8cc529251b3b49fb  xsa84-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS+LgGAAoJEIP+FMlX6CvZH1MH/00JKMYdEyaSA3oVGRTeV3Wk
/ZgZl0dTuEBYLWTh/sE8txPGVb7jOvc4pzuhZ8Z0rvh4J10EKjqIUutSs0QR6m3U
+3H+C/eHW98oselKT1csUoIZuf+3oTkZeryVeTyUi7g04xoYHpljT/u+gku8Twuz
G8D3ckchHx5Zi40u0hQWAIOyJxwlpXD74mv2hnHa7X30anpLgGhsBxGLoghJSJwd
x+i82krxbs0Ac7zKQBeVpPhVHE7QHR5Em1BqkxxtT8c93aujeD0Lkdw2H2ki1uOc
+XOEwl/kT9TqiiHy+D+wZwY08xwijC4MZrxvVW35M6DupAG/4i9mv/ICs1GGfK8=
=GrAi
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa84-4.1.patch"
Content-Disposition: attachment; filename="xsa84-4.1.patch"
Content-Transfer-Encoding: base64

UmVmZXJlbmNlczogYm5jIzg2MDE2MyBYU0EtODQKCmZsYXNrOiByZXN0cmlj
dCBhbGxvY2F0aW9ucyBkb25lIGJ5IGh5cGVyY2FsbCBpbnRlcmZhY2UKCk90
aGVyIHRoYW4gaW4gNC4yIGFuZCBuZXdlciwgd2UncmUgbm90IGhhdmluZyBh
biBvdmVyZmxvdyBpc3N1ZSBoZXJlLApidXQgdW5jb250cm9sbGVkIGV4cG9z
dXJlIG9mIHRoZSBvcGVyYXRpb25zIG9wZW5zIHRoZSBob3N0IHRvIGJlIGRy
aXZlbgpvdXQgb2YgbWVtb3J5IGJ5IGFuIGFyYml0cmFyeSBndWVzdC4gU2lu
Y2UgYWxsIG9wZXJhdGlvbnMgb3RoZXIgdGhhbgpGTEFTS19MT0FEIHNpbXBs
eSBkZWFsIHdpdGggQVNDSUkgc3RyaW5ncywgbGltaXRpbmcgdGhlIGFsbG9j
YXRpb25zCihhbmQgaW5jb21pbmcgYnVmZmVyIHNpemVzKSB0byBhIHBhZ2Ug
d29ydGggb2YgbWVtb3J5IHNlZW1zIGxpa2UgdGhlCmJlc3QgdGhpbmcgd2Ug
Y2FuIGRvLgoKQ29uc2VxdWVudGx5LCBpbiBvcmRlciB0byBub3QgZXhwb3Nl
IHRoZSBsYXJnZXIgYWxsb2NhdGlvbiB0byBhcmJpdHJhcnkKZ3Vlc3RzLCB0
aGUgcGVybWlzc2lvbiBjaGVjayBmb3IgRkxBU0tfTE9BRCBuZWVkcyB0byBi
ZSBwdWxsZWQgYWhlYWQgb2YKdGhlIGFsbG9jYXRpb24gKGFuZCBpdCdzIHBl
cmhhcHMgd29ydGggbm90aW5nIHRoYXQgLSBhZmFpY3QgLSBpdCB3YXMKcG9p
bnRsZXNzbHkgZG9uZSB3aXRoIHRoZSBzZWxfc2VtIHNwaW4gbG9jayBoZWxk
KS4KCk5vdGUgdGhhdCB0aGlzIGJyZWFrcyBGTEFTS19BVkNfQ0FDSEVTVEFU
UyBvbiBzeXN0ZW1zIHdpdGggc3VmZmljaWVudGx5Cm1hbnkgQ1BVcyAoYXMg
cmVxdWlyaW5nIGEgYnVmZmVyIGJpZ2dlciB0aGFuIFBBR0VfU0laRSB0aGVy
ZSkuIE5vCmF0dGVtcHQgaXMgbWFkZSB0byBhZGRyZXNzIHRoaXMgaGVyZSwg
YXMgaXQgd291bGQgbmVlZGxlc3NseSBjb21wbGljYXRlCnRoaXMgZml4IHdp
dGggcmF0aGVyIGxpdHRsZSBnYWluLgoKVGhpcyBpcyBYU0EtODQuCgpSZXBv
cnRlZC1ieTogTWF0dGhldyBEYWxleSA8bWF0dGRAYnVnZnV6ei5jb20+ClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
ClRoZSBpbmRleCBvZiBib29sZWFuIHZhcmlhYmxlcyBpbiBGTEFTS197R0VU
LFNFVH1CT09MIHdhcyBub3QgYWx3YXlzCmNoZWNrZWQgYWdhaW5zdCB0aGUg
Ym91bmRzIG9mIHRoZSBhcnJheS4KClJlcG9ydGVkLWJ5OiBKb2huIE1jRGVy
bW90dCA8am9obi5tY2Rlcm1vdHRAbnJsLm5hdnkubWlsPgpTaWduZWQtb2Zm
LWJ5OiBEYW5pZWwgRGUgR3JhYWYgPGRnZGVncmFAdHljaG8ubnNhLmdvdj4K
Ci0tLSBhL3hlbi94c20vZmxhc2svZmxhc2tfb3AuYworKysgYi94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKQEAgLTU3Myw3ICs1NzMsNyBAQCBzdGF0aWMg
aW50IGZsYXNrX3NlY3VyaXR5X3NldGF2Y190aHJlc2hvCiBzdGF0aWMgaW50
IGZsYXNrX3NlY3VyaXR5X3NldF9ib29sKGNoYXIgKmJ1ZiwgdWludDMyX3Qg
Y291bnQpCiB7CiAgICAgaW50IGxlbmd0aCA9IC1FRkFVTFQ7Ci0gICAgaW50
IGksIG5ld192YWx1ZTsKKyAgICB1bnNpZ25lZCBpbnQgaSwgbmV3X3ZhbHVl
OwogCiAgICAgc3Bpbl9sb2NrKCZzZWxfc2VtKTsKIApAQCAtNTg1LDYgKzU4
NSw5IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfc2V0X2Jvb2woY2hh
ciAKICAgICBpZiAoIHNzY2FuZihidWYsICIlZCAlZCIsICZpLCAmbmV3X3Zh
bHVlKSAhPSAyICkKICAgICAgICAgZ290byBvdXQ7CiAKKyAgICBpZiAoIGkg
Pj0gYm9vbF9udW0gKQorICAgICAgICBnb3RvIG91dDsKKwogICAgIGlmICgg
bmV3X3ZhbHVlICkKICAgICB7CiAgICAgICAgIG5ld192YWx1ZSA9IDE7CkBA
IC03MzQsMTAgKzczNyw2IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlf
bG9hZChjaGFyICpidWYKIAogICAgIHNwaW5fbG9jaygmc2VsX3NlbSk7CiAK
LSAgICBsZW5ndGggPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRv
bWFpbiwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKLSAgICBpZiAoIGxlbmd0
aCApCi0gICAgICAgIGdvdG8gb3V0OwotCiAgICAgbGVuZ3RoID0gc2VjdXJp
dHlfbG9hZF9wb2xpY3koYnVmLCBjb3VudCk7CiAgICAgaWYgKCBsZW5ndGgg
KQogICAgICAgICBnb3RvIG91dDsKQEAgLTg1Myw3ICs4NTIsMTUgQEAgbG9u
ZyBkb19mbGFza19vcChYRU5fR1VFU1RfSEFORExFKHhzbV9vcAogICAgIGlm
ICggb3AtPmNtZCA+IEZMQVNLX0xBU1QpCiAgICAgICAgIHJldHVybiAtRUlO
VkFMOwogCi0gICAgaWYgKCBvcC0+c2l6ZSA+IE1BWF9QT0xJQ1lfU0laRSAp
CisgICAgaWYgKCBvcC0+Y21kID09IEZMQVNLX0xPQUQgKQorICAgIHsKKyAg
ICAgICAgcmMgPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRvbWFp
biwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKKyAgICAgICAgaWYgKCByYyAp
CisgICAgICAgICAgICByZXR1cm4gcmM7CisgICAgICAgIGlmICggb3AtPnNp
emUgPiBNQVhfUE9MSUNZX1NJWkUgKQorICAgICAgICAgICAgcmV0dXJuIC1F
SU5WQUw7CisgICAgfQorICAgIGVsc2UgaWYgKCBvcC0+c2l6ZSA+PSBQQUdF
X1NJWkUgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKIAogICAgIGlmICgg
KG9wLT5idWYgPT0gTlVMTCAmJiBvcC0+c2l6ZSAhPSAwKSB8fCAKLS0tIGEv
eGVuL3hzbS9mbGFzay9zcy9zZXJ2aWNlcy5jCisrKyBiL3hlbi94c20vZmxh
c2svc3Mvc2VydmljZXMuYwpAQCAtMTk5MSw3ICsxOTkxLDcgQEAgaW50IHNl
Y3VyaXR5X2dldF9ib29sX3ZhbHVlKGludCBib29sKQogICAgIFBPTElDWV9S
RExPQ0s7CiAKICAgICBsZW4gPSBwb2xpY3lkYi5wX2Jvb2xzLm5wcmltOwot
ICAgIGlmICggYm9vbCA+PSBsZW4gKQorICAgIGlmICggYm9vbCA+PSBsZW4g
fHwgYm9vbCA8IDAgKQogICAgIHsKICAgICAgICAgcmMgPSAtRUZBVUxUOwog
ICAgICAgICBnb3RvIG91dDsK

--=separator
Content-Type: application/octet-stream; name="xsa84-4.2.patch"
Content-Disposition: attachment; filename="xsa84-4.2.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qgc2l6ZSkK
K3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VFU1RfSEFO
RExFKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXplX3QgbWF4X3NpemUp
CiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRlcyhzaXplICsgMSk7
CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXplID4gbWF4X3NpemUg
KQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAgIHRtcCA9IHhtYWxs
b2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlmICggIXRtcCApCiAg
ICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3ICsxMDYsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3RydWN0IHhlCiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVzZXIsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+dS51c2Vy
LCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3ICsyMTcsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQoc3RydWN0CiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZidWYsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+Y29udGV4
dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3ICszMTAsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVfYm9vbChzCiAgICAg
aWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAgICByZXR1cm4gMDsK
IAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPm5hbWUsICZu
YW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmlu
ZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJvb2xfbWF4c3RyKTsK
ICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2OwogCkBAIC0zMzQs
NyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1cml0eV9zZXRfYm9v
bChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAgICBpbnQgKnZhbHVl
czsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9ib29scygmbnVtLCBO
VUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAgICAgICAgIGlmICgg
cnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsKIApAQCAtNDQwLDcg
KzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfbWFrZV9ib29s
cyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRpbmdfdmFsdWVzKTsK
ICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9vbHMoJm51bSwgTlVM
TCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZu
dW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7CiAgICAgaWYgKCBy
ZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0tLSBhL3hlbi94c20v
Zmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBiL3hlbi94c20vZmxh
c2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3ICsxMyw5IEBACiAj
aWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2RlZmluZSBfRkxBU0tf
Q09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzKTsKKyNpbmNsdWRl
IDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzLCBzaXplX3QgKm1h
eHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMoaW50IGxlbiwgaW50
ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2svc3Mvc2VydmljZXMu
YworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2VzLmMKQEAgLTE5MDAs
NyArMTkwMCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jvb2woY29uc3QgY2hh
ciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWludCBzZWN1cml0eV9n
ZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVl
cykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioq
bmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhzdHIpCiB7CiAgICAg
aW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTkwOCw2ICsxOTA4LDggQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogICAg
IGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBOVUxMOwogICAgICp2
YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkKKyAgICAgICAgKm1h
eHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIucF9ib29scy5ucHJp
bTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE5MjksMTYgKzE5MzEsMTcgQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogCiAg
ICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQogICAgIHsKLSAgICAg
ICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXplX3QgbmFtZV9sZW4g
PSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKTsKKwog
ICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5ib29sX3ZhbF90b19z
dHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5hbWVzICkgewotICAg
ICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXSA9
IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVfbGVuKTsKKyAgICAg
ICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJheShjaGFyLCBuYW1l
X2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpuYW1lcylbaV0gKQog
ICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAgICAgICAgc3RybGNw
eSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ld
LCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXVtuYW1lX2xl
biAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHkoKCpuYW1lcylbaV0s
IHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwgbmFtZV9sZW4gKyAx
KTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0ciAmJiBuYW1lX2xl
biA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0ciA9IG5hbWVfbGVu
OwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0yMDU2LDcgKzIwNTks
NyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZlX2Jvb2xzKHN0cnVj
CiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9vbGRhdHVtOwogICAg
IHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJjID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFsdWVzKTsKKyAgICBy
YyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAmYm5hbWVzLCAmYnZh
bHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAgICAgIGdvdG8gb3V0
OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBpKysgKQo=

--=separator
Content-Type: application/octet-stream; name="xsa84-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa84-unstable-4.3.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRV9QQVJBTShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qg
c2l6ZSkKK3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VF
U1RfSEFORExFX1BBUkFNKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXpl
X3QgbWF4X3NpemUpCiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRl
cyhzaXplICsgMSk7CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXpl
ID4gbWF4X3NpemUgKQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAg
IHRtcCA9IHhtYWxsb2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlm
ICggIXRtcCApCiAgICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3
ICsxMDYsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3Ry
dWN0IHhlCiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVz
ZXIsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+dS51c2VyLCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3
ICsyMTcsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQo
c3RydWN0CiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZi
dWYsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+Y29udGV4dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3
ICszMTAsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVf
Ym9vbChzCiAgICAgaWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAg
ICByZXR1cm4gMDsKIAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhh
cmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tf
Y29weWluX3N0cmluZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJv
b2xfbWF4c3RyKTsKICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2
OwogCkBAIC0zMzQsNyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1
cml0eV9zZXRfYm9vbChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAg
ICBpbnQgKnZhbHVlczsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9i
b29scygmbnVtLCBOVUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1
cml0eV9nZXRfYm9vbHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAg
ICAgICAgIGlmICggcnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsK
IApAQCAtNDQwLDcgKzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJp
dHlfbWFrZV9ib29scyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRp
bmdfdmFsdWVzKTsKICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZudW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7
CiAgICAgaWYgKCByZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0t
LSBhL3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBi
L3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3
ICsxMyw5IEBACiAjaWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2Rl
ZmluZSBfRkxBU0tfQ09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
KTsKKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
LCBzaXplX3QgKm1heHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMo
aW50IGxlbiwgaW50ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2sv
c3Mvc2VydmljZXMuYworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2Vz
LmMKQEAgLTE4NTAsNyArMTg1MCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jv
b2woY29uc3QgY2hhciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWlu
dCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMs
IGludCAqKnZhbHVlcykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICps
ZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhz
dHIpCiB7CiAgICAgaW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTg1OCw2
ICsxODU4LDggQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogICAgIGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBO
VUxMOwogICAgICp2YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkK
KyAgICAgICAgKm1heHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIu
cF9ib29scy5ucHJpbTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE4NzksMTYg
KzE4ODEsMTcgQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogCiAgICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQog
ICAgIHsKLSAgICAgICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXpl
X3QgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19u
YW1lW2ldKTsKKwogICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5i
b29sX3ZhbF90b19zdHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5h
bWVzICkgewotICAgICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5
ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAo
Km5hbWVzKVtpXSA9IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVf
bGVuKTsKKyAgICAgICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJh
eShjaGFyLCBuYW1lX2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpu
YW1lcylbaV0gKQogICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAg
ICAgICAgc3RybGNweSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldLCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVz
KVtpXVtuYW1lX2xlbiAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHko
KCpuYW1lcylbaV0sIHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwg
bmFtZV9sZW4gKyAxKTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0
ciAmJiBuYW1lX2xlbiA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0
ciA9IG5hbWVfbGVuOwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0y
MDA2LDcgKzIwMDksNyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZl
X2Jvb2xzKHN0cnVjCiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9v
bGRhdHVtOwogICAgIHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJj
ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFs
dWVzKTsKKyAgICByYyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAm
Ym5hbWVzLCAmYnZhbHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAg
ICAgIGdvdG8gb3V0OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBp
KysgKQo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

 Xen Security Advisory CVE-2014-1891,CVE-2014-1892,CVE-2014-1893,CVE-2014-1894 / XSA-84
                              version 3

           integer overflow in several XSM/Flask hypercalls

UPDATES IN VERSION 3
====================

CVE numbers have been assigned.

ISSUE DESCRIPTION
=================

The FLASK_{GET,SET}BOOL, FLASK_USER and FLASK_CONTEXT_TO_SID
suboperations of the flask hypercall are vulnerable to an integer
overflow on the input size. The hypercalls attempt to allocate a
buffer which is 1 larger than this size and is therefore vulnerable to
integer overflow and an attempt to allocate then access a zero byte
buffer.  (CVE-2014-1891)

Xen 3.3 through 4.1, while not affected by the above overflow, have a
different overflow issue on FLASK_{GET,SET}BOOL (CVE-2014-1893) and
expose unreasonably large memory allocations to arbitrary guests
(CVE-2014-1892).

Xen 3.2 (and presumably earlier) exhibit both problems with the
overflow issue being present for more than just the suboperations
listed above.  (CVE-2014-1894 for the subops not covered above.)

The FLASK_GETBOOL op is available to all domains.

The FLASK_SETBOOL op is only available to domains which are granted
access via the Flask policy.  However, the permissions check is
performed only after running the vulnerable code, so the vulnerability
via this subop is exposed to all domains.

The FLASK_USER and FLASK_CONTEXT_TO_SID ops are only available to
domains which are granted access via the Flask policy.

IMPACT
======

Attempting to access the result of a zero byte allocation results in
a processor fault leading to a denial of service.

VULNERABLE SYSTEMS
==================

All Xen versions back to at least 3.2 are vulnerable to this issue when
built with XSM/Flask support (XSM_ENABLE=y). XSM support is disabled by
default.

We have not checked earlier versions of Xen, but it is likely that
they are vulnerable to this or related vulnerabilities.

MITIGATION
==========

There is no useful mitigation available in installations where XSM
support is actually in use.

In other systems, compiling it out (with XSM_ENABLE=n) will avoid the
vulnerability.

CREDITS
=======

This issue was discovered by Matthew Daley.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa84-unstable-4.3.patch        xen-unstable,Xen 4.3.x
xsa84-4.2.patch                 Xen 4.2.x
xsa84-4.1.patch                 Xen 4.1.x


$ sha256sum xsa84*.patch
e33dd94499959363ad01bebefda9733683c49fd42a9641cf2d7edcd87f853d55  xsa84-4.1.patch
433f3c8a202482c51a48dc0e9e47ac8751d1c0d0759b7bcd22804e1856279a89  xsa84-4.2.patch
64ae433eb606c5446184c08e6fceb9f660ed9a9c28ec112c8cc529251b3b49fb  xsa84-unstable-4.3.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS+LgGAAoJEIP+FMlX6CvZH1MH/00JKMYdEyaSA3oVGRTeV3Wk
/ZgZl0dTuEBYLWTh/sE8txPGVb7jOvc4pzuhZ8Z0rvh4J10EKjqIUutSs0QR6m3U
+3H+C/eHW98oselKT1csUoIZuf+3oTkZeryVeTyUi7g04xoYHpljT/u+gku8Twuz
G8D3ckchHx5Zi40u0hQWAIOyJxwlpXD74mv2hnHa7X30anpLgGhsBxGLoghJSJwd
x+i82krxbs0Ac7zKQBeVpPhVHE7QHR5Em1BqkxxtT8c93aujeD0Lkdw2H2ki1uOc
+XOEwl/kT9TqiiHy+D+wZwY08xwijC4MZrxvVW35M6DupAG/4i9mv/ICs1GGfK8=
=GrAi
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa84-4.1.patch"
Content-Disposition: attachment; filename="xsa84-4.1.patch"
Content-Transfer-Encoding: base64

UmVmZXJlbmNlczogYm5jIzg2MDE2MyBYU0EtODQKCmZsYXNrOiByZXN0cmlj
dCBhbGxvY2F0aW9ucyBkb25lIGJ5IGh5cGVyY2FsbCBpbnRlcmZhY2UKCk90
aGVyIHRoYW4gaW4gNC4yIGFuZCBuZXdlciwgd2UncmUgbm90IGhhdmluZyBh
biBvdmVyZmxvdyBpc3N1ZSBoZXJlLApidXQgdW5jb250cm9sbGVkIGV4cG9z
dXJlIG9mIHRoZSBvcGVyYXRpb25zIG9wZW5zIHRoZSBob3N0IHRvIGJlIGRy
aXZlbgpvdXQgb2YgbWVtb3J5IGJ5IGFuIGFyYml0cmFyeSBndWVzdC4gU2lu
Y2UgYWxsIG9wZXJhdGlvbnMgb3RoZXIgdGhhbgpGTEFTS19MT0FEIHNpbXBs
eSBkZWFsIHdpdGggQVNDSUkgc3RyaW5ncywgbGltaXRpbmcgdGhlIGFsbG9j
YXRpb25zCihhbmQgaW5jb21pbmcgYnVmZmVyIHNpemVzKSB0byBhIHBhZ2Ug
d29ydGggb2YgbWVtb3J5IHNlZW1zIGxpa2UgdGhlCmJlc3QgdGhpbmcgd2Ug
Y2FuIGRvLgoKQ29uc2VxdWVudGx5LCBpbiBvcmRlciB0byBub3QgZXhwb3Nl
IHRoZSBsYXJnZXIgYWxsb2NhdGlvbiB0byBhcmJpdHJhcnkKZ3Vlc3RzLCB0
aGUgcGVybWlzc2lvbiBjaGVjayBmb3IgRkxBU0tfTE9BRCBuZWVkcyB0byBi
ZSBwdWxsZWQgYWhlYWQgb2YKdGhlIGFsbG9jYXRpb24gKGFuZCBpdCdzIHBl
cmhhcHMgd29ydGggbm90aW5nIHRoYXQgLSBhZmFpY3QgLSBpdCB3YXMKcG9p
bnRsZXNzbHkgZG9uZSB3aXRoIHRoZSBzZWxfc2VtIHNwaW4gbG9jayBoZWxk
KS4KCk5vdGUgdGhhdCB0aGlzIGJyZWFrcyBGTEFTS19BVkNfQ0FDSEVTVEFU
UyBvbiBzeXN0ZW1zIHdpdGggc3VmZmljaWVudGx5Cm1hbnkgQ1BVcyAoYXMg
cmVxdWlyaW5nIGEgYnVmZmVyIGJpZ2dlciB0aGFuIFBBR0VfU0laRSB0aGVy
ZSkuIE5vCmF0dGVtcHQgaXMgbWFkZSB0byBhZGRyZXNzIHRoaXMgaGVyZSwg
YXMgaXQgd291bGQgbmVlZGxlc3NseSBjb21wbGljYXRlCnRoaXMgZml4IHdp
dGggcmF0aGVyIGxpdHRsZSBnYWluLgoKVGhpcyBpcyBYU0EtODQuCgpSZXBv
cnRlZC1ieTogTWF0dGhldyBEYWxleSA8bWF0dGRAYnVnZnV6ei5jb20+ClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
ClRoZSBpbmRleCBvZiBib29sZWFuIHZhcmlhYmxlcyBpbiBGTEFTS197R0VU
LFNFVH1CT09MIHdhcyBub3QgYWx3YXlzCmNoZWNrZWQgYWdhaW5zdCB0aGUg
Ym91bmRzIG9mIHRoZSBhcnJheS4KClJlcG9ydGVkLWJ5OiBKb2huIE1jRGVy
bW90dCA8am9obi5tY2Rlcm1vdHRAbnJsLm5hdnkubWlsPgpTaWduZWQtb2Zm
LWJ5OiBEYW5pZWwgRGUgR3JhYWYgPGRnZGVncmFAdHljaG8ubnNhLmdvdj4K
Ci0tLSBhL3hlbi94c20vZmxhc2svZmxhc2tfb3AuYworKysgYi94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKQEAgLTU3Myw3ICs1NzMsNyBAQCBzdGF0aWMg
aW50IGZsYXNrX3NlY3VyaXR5X3NldGF2Y190aHJlc2hvCiBzdGF0aWMgaW50
IGZsYXNrX3NlY3VyaXR5X3NldF9ib29sKGNoYXIgKmJ1ZiwgdWludDMyX3Qg
Y291bnQpCiB7CiAgICAgaW50IGxlbmd0aCA9IC1FRkFVTFQ7Ci0gICAgaW50
IGksIG5ld192YWx1ZTsKKyAgICB1bnNpZ25lZCBpbnQgaSwgbmV3X3ZhbHVl
OwogCiAgICAgc3Bpbl9sb2NrKCZzZWxfc2VtKTsKIApAQCAtNTg1LDYgKzU4
NSw5IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfc2V0X2Jvb2woY2hh
ciAKICAgICBpZiAoIHNzY2FuZihidWYsICIlZCAlZCIsICZpLCAmbmV3X3Zh
bHVlKSAhPSAyICkKICAgICAgICAgZ290byBvdXQ7CiAKKyAgICBpZiAoIGkg
Pj0gYm9vbF9udW0gKQorICAgICAgICBnb3RvIG91dDsKKwogICAgIGlmICgg
bmV3X3ZhbHVlICkKICAgICB7CiAgICAgICAgIG5ld192YWx1ZSA9IDE7CkBA
IC03MzQsMTAgKzczNyw2IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlf
bG9hZChjaGFyICpidWYKIAogICAgIHNwaW5fbG9jaygmc2VsX3NlbSk7CiAK
LSAgICBsZW5ndGggPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRv
bWFpbiwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKLSAgICBpZiAoIGxlbmd0
aCApCi0gICAgICAgIGdvdG8gb3V0OwotCiAgICAgbGVuZ3RoID0gc2VjdXJp
dHlfbG9hZF9wb2xpY3koYnVmLCBjb3VudCk7CiAgICAgaWYgKCBsZW5ndGgg
KQogICAgICAgICBnb3RvIG91dDsKQEAgLTg1Myw3ICs4NTIsMTUgQEAgbG9u
ZyBkb19mbGFza19vcChYRU5fR1VFU1RfSEFORExFKHhzbV9vcAogICAgIGlm
ICggb3AtPmNtZCA+IEZMQVNLX0xBU1QpCiAgICAgICAgIHJldHVybiAtRUlO
VkFMOwogCi0gICAgaWYgKCBvcC0+c2l6ZSA+IE1BWF9QT0xJQ1lfU0laRSAp
CisgICAgaWYgKCBvcC0+Y21kID09IEZMQVNLX0xPQUQgKQorICAgIHsKKyAg
ICAgICAgcmMgPSBkb21haW5faGFzX3NlY3VyaXR5KGN1cnJlbnQtPmRvbWFp
biwgU0VDVVJJVFlfX0xPQURfUE9MSUNZKTsKKyAgICAgICAgaWYgKCByYyAp
CisgICAgICAgICAgICByZXR1cm4gcmM7CisgICAgICAgIGlmICggb3AtPnNp
emUgPiBNQVhfUE9MSUNZX1NJWkUgKQorICAgICAgICAgICAgcmV0dXJuIC1F
SU5WQUw7CisgICAgfQorICAgIGVsc2UgaWYgKCBvcC0+c2l6ZSA+PSBQQUdF
X1NJWkUgKQogICAgICAgICByZXR1cm4gLUVJTlZBTDsKIAogICAgIGlmICgg
KG9wLT5idWYgPT0gTlVMTCAmJiBvcC0+c2l6ZSAhPSAwKSB8fCAKLS0tIGEv
eGVuL3hzbS9mbGFzay9zcy9zZXJ2aWNlcy5jCisrKyBiL3hlbi94c20vZmxh
c2svc3Mvc2VydmljZXMuYwpAQCAtMTk5MSw3ICsxOTkxLDcgQEAgaW50IHNl
Y3VyaXR5X2dldF9ib29sX3ZhbHVlKGludCBib29sKQogICAgIFBPTElDWV9S
RExPQ0s7CiAKICAgICBsZW4gPSBwb2xpY3lkYi5wX2Jvb2xzLm5wcmltOwot
ICAgIGlmICggYm9vbCA+PSBsZW4gKQorICAgIGlmICggYm9vbCA+PSBsZW4g
fHwgYm9vbCA8IDAgKQogICAgIHsKICAgICAgICAgcmMgPSAtRUZBVUxUOwog
ICAgICAgICBnb3RvIG91dDsK

--=separator
Content-Type: application/octet-stream; name="xsa84-4.2.patch"
Content-Disposition: attachment; filename="xsa84-4.2.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qgc2l6ZSkK
K3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VFU1RfSEFO
RExFKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXplX3QgbWF4X3NpemUp
CiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRlcyhzaXplICsgMSk7
CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXplID4gbWF4X3NpemUg
KQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAgIHRtcCA9IHhtYWxs
b2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlmICggIXRtcCApCiAg
ICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3ICsxMDYsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3RydWN0IHhlCiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVzZXIsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+dS51c2Vy
LCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3ICsyMTcsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQoc3RydWN0CiAgICAg
aWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAotICAgIHJ2ID0gZmxh
c2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZidWYsIGFyZy0+c2l6
ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5nKGFyZy0+Y29udGV4
dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwogICAgIGlmICggcnYg
KQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3ICszMTAsNyBAQCBz
dGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVfYm9vbChzCiAgICAg
aWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAgICByZXR1cm4gMDsK
IAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPm5hbWUsICZu
YW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmlu
ZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJvb2xfbWF4c3RyKTsK
ICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2OwogCkBAIC0zMzQs
NyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1cml0eV9zZXRfYm9v
bChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAgICBpbnQgKnZhbHVl
czsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9ib29scygmbnVtLCBO
VUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAgICAgICAgIGlmICgg
cnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsKIApAQCAtNDQwLDcg
KzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJpdHlfbWFrZV9ib29s
cyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRpbmdfdmFsdWVzKTsK
ICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9vbHMoJm51bSwgTlVM
TCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZu
dW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7CiAgICAgaWYgKCBy
ZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0tLSBhL3hlbi94c20v
Zmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBiL3hlbi94c20vZmxh
c2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3ICsxMyw5IEBACiAj
aWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2RlZmluZSBfRkxBU0tf
Q09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzKTsKKyNpbmNsdWRl
IDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQg
KmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVzLCBzaXplX3QgKm1h
eHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMoaW50IGxlbiwgaW50
ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2svc3Mvc2VydmljZXMu
YworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2VzLmMKQEAgLTE5MDAs
NyArMTkwMCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jvb2woY29uc3QgY2hh
ciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWludCBzZWN1cml0eV9n
ZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVl
cykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioq
bmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhzdHIpCiB7CiAgICAg
aW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTkwOCw2ICsxOTA4LDggQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogICAg
IGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBOVUxMOwogICAgICp2
YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkKKyAgICAgICAgKm1h
eHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIucF9ib29scy5ucHJp
bTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE5MjksMTYgKzE5MzEsMTcgQEAg
aW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwgY2hhciAqKgogCiAg
ICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQogICAgIHsKLSAgICAg
ICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXplX3QgbmFtZV9sZW4g
PSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKTsKKwog
ICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5ib29sX3ZhbF90b19z
dHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5hbWVzICkgewotICAg
ICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXSA9
IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVfbGVuKTsKKyAgICAg
ICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJheShjaGFyLCBuYW1l
X2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpuYW1lcylbaV0gKQog
ICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAgICAgICAgc3RybGNw
eSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3ZhbF90b19uYW1lW2ld
LCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVzKVtpXVtuYW1lX2xl
biAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHkoKCpuYW1lcylbaV0s
IHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwgbmFtZV9sZW4gKyAx
KTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0ciAmJiBuYW1lX2xl
biA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0ciA9IG5hbWVfbGVu
OwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0yMDU2LDcgKzIwNTks
NyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZlX2Jvb2xzKHN0cnVj
CiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9vbGRhdHVtOwogICAg
IHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJjID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFsdWVzKTsKKyAgICBy
YyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAmYm5hbWVzLCAmYnZh
bHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAgICAgIGdvdG8gb3V0
OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBpKysgKQo=

--=separator
Content-Type: application/octet-stream; name="xsa84-unstable-4.3.patch"
Content-Disposition: attachment; filename="xsa84-unstable-4.3.patch"
Content-Transfer-Encoding: base64

Zmxhc2s6IGZpeCByZWFkaW5nIHN0cmluZ3MgZnJvbSBndWVzdCBtZW1vcnkK
ClNpbmNlIHRoZSBzdHJpbmcgc2l6ZSBpcyBiZWluZyBzcGVjaWZpZWQgYnkg
dGhlIGd1ZXN0LCB3ZSBtdXN0IHJhbmdlCmNoZWNrIGl0IHByb3Blcmx5IGJl
Zm9yZSBkb2luZyBhbGxvY2F0aW9ucyBiYXNlZCBvbiBpdC4gV2hpbGUgZm9y
IHRoZQp0d28gY2FzZXMgdGhhdCBhcmUgZXhwb3NlZCBvbmx5IHRvIHRydXN0
ZWQgZ3Vlc3RzICh2aWEgcG9saWN5CnJlc3RyaWN0aW9uKSB0aGlzIGp1c3Qg
dXNlcyBhbiBhcmJpdHJhcnkgdXBwZXIgbGltaXQgKFBBR0VfU0laRSksIGZv
cgp0aGUgRkxBU0tfW0dTXUVUQk9PTCBjYXNlICh3aGljaCBhbnkgZ3Vlc3Qg
Y2FuIHVzZSkgdGhlIHVwcGVyIGxpbWl0CmdldHMgZW5mb3JjZWQgYmFzZWQg
b24gdGhlIGxvbmdlc3QgbmFtZSBhY3Jvc3MgYWxsIGJvb2xlYW4gc2V0dGlu
Z3MuCgpUaGlzIGlzIFhTQS04NC4KClJlcG9ydGVkLWJ5OiBNYXR0aGV3IERh
bGV5IDxtYXR0ZEBidWdmdXp6LmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpBY2tlZC1ieTogRGFuaWVsIERl
IEdyYWFmIDxkZ2RlZ3JhQHR5Y2hvLm5zYS5nb3Y+CgotLS0gYS94ZW4veHNt
L2ZsYXNrL2ZsYXNrX29wLmMKKysrIGIveGVuL3hzbS9mbGFzay9mbGFza19v
cC5jCkBAIC01Myw2ICs1Myw3IEBAIHN0YXRpYyBERUZJTkVfU1BJTkxPQ0so
c2VsX3NlbSk7CiAvKiBnbG9iYWwgZGF0YSBmb3IgYm9vbGVhbnMgKi8KIHN0
YXRpYyBpbnQgYm9vbF9udW0gPSAwOwogc3RhdGljIGludCAqYm9vbF9wZW5k
aW5nX3ZhbHVlcyA9IE5VTEw7CitzdGF0aWMgc2l6ZV90IGJvb2xfbWF4c3Ry
Owogc3RhdGljIGludCBmbGFza19zZWN1cml0eV9tYWtlX2Jvb2xzKHZvaWQp
OwogCiBleHRlcm4gaW50IHNzX2luaXRpYWxpemVkOwpAQCAtNzEsOSArNzIs
MTUgQEAgc3RhdGljIGludCBkb21haW5faGFzX3NlY3VyaXR5KHN0cnVjdCBk
bwogICAgICAgICAgICAgICAgICAgICAgICAgcGVybXMsIE5VTEwpOwogfQog
Ci1zdGF0aWMgaW50IGZsYXNrX2NvcHlpbl9zdHJpbmcoWEVOX0dVRVNUX0hB
TkRMRV9QQVJBTShjaGFyKSB1X2J1ZiwgY2hhciAqKmJ1ZiwgdWludDMyX3Qg
c2l6ZSkKK3N0YXRpYyBpbnQgZmxhc2tfY29weWluX3N0cmluZyhYRU5fR1VF
U1RfSEFORExFX1BBUkFNKGNoYXIpIHVfYnVmLCBjaGFyICoqYnVmLAorICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHNpemVfdCBzaXplLCBzaXpl
X3QgbWF4X3NpemUpCiB7Ci0gICAgY2hhciAqdG1wID0geG1hbGxvY19ieXRl
cyhzaXplICsgMSk7CisgICAgY2hhciAqdG1wOworCisgICAgaWYgKCBzaXpl
ID4gbWF4X3NpemUgKQorICAgICAgICByZXR1cm4gLUVOT0VOVDsKKworICAg
IHRtcCA9IHhtYWxsb2NfYXJyYXkoY2hhciwgc2l6ZSArIDEpOwogICAgIGlm
ICggIXRtcCApCiAgICAgICAgIHJldHVybiAtRU5PTUVNOwogCkBAIC05OSw3
ICsxMDYsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3VzZXIoc3Ry
dWN0IHhlCiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPnUudXNlciwgJnVz
ZXIsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+dS51c2VyLCAmdXNlciwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTIxMCw3
ICsyMTcsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X2NvbnRleHQo
c3RydWN0CiAgICAgaWYgKCBydiApCiAgICAgICAgIHJldHVybiBydjsKIAot
ICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhhcmctPmNvbnRleHQsICZi
dWYsIGFyZy0+c2l6ZSk7CisgICAgcnYgPSBmbGFza19jb3B5aW5fc3RyaW5n
KGFyZy0+Y29udGV4dCwgJmJ1ZiwgYXJnLT5zaXplLCBQQUdFX1NJWkUpOwog
ICAgIGlmICggcnYgKQogICAgICAgICByZXR1cm4gcnY7CiAKQEAgLTMwMyw3
ICszMTAsNyBAQCBzdGF0aWMgaW50IGZsYXNrX3NlY3VyaXR5X3Jlc29sdmVf
Ym9vbChzCiAgICAgaWYgKCBhcmctPmJvb2xfaWQgIT0gLTEgKQogICAgICAg
ICByZXR1cm4gMDsKIAotICAgIHJ2ID0gZmxhc2tfY29weWluX3N0cmluZyhh
cmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUpOworICAgIHJ2ID0gZmxhc2tf
Y29weWluX3N0cmluZyhhcmctPm5hbWUsICZuYW1lLCBhcmctPnNpemUsIGJv
b2xfbWF4c3RyKTsKICAgICBpZiAoIHJ2ICkKICAgICAgICAgcmV0dXJuIHJ2
OwogCkBAIC0zMzQsNyArMzQxLDcgQEAgc3RhdGljIGludCBmbGFza19zZWN1
cml0eV9zZXRfYm9vbChzdHJ1YwogICAgICAgICBpbnQgbnVtOwogICAgICAg
ICBpbnQgKnZhbHVlczsKIAotICAgICAgICBydiA9IHNlY3VyaXR5X2dldF9i
b29scygmbnVtLCBOVUxMLCAmdmFsdWVzKTsKKyAgICAgICAgcnYgPSBzZWN1
cml0eV9nZXRfYm9vbHMoJm51bSwgTlVMTCwgJnZhbHVlcywgTlVMTCk7CiAg
ICAgICAgIGlmICggcnYgIT0gMCApCiAgICAgICAgICAgICBnb3RvIG91dDsK
IApAQCAtNDQwLDcgKzQ0Nyw3IEBAIHN0YXRpYyBpbnQgZmxhc2tfc2VjdXJp
dHlfbWFrZV9ib29scyh2b2kKICAgICAKICAgICB4ZnJlZShib29sX3BlbmRp
bmdfdmFsdWVzKTsKICAgICAKLSAgICByZXQgPSBzZWN1cml0eV9nZXRfYm9v
bHMoJm51bSwgTlVMTCwgJnZhbHVlcyk7CisgICAgcmV0ID0gc2VjdXJpdHlf
Z2V0X2Jvb2xzKCZudW0sIE5VTEwsICZ2YWx1ZXMsICZib29sX21heHN0cik7
CiAgICAgaWYgKCByZXQgIT0gMCApCiAgICAgICAgIGdvdG8gb3V0OwogCi0t
LSBhL3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCisrKyBi
L3hlbi94c20vZmxhc2svaW5jbHVkZS9jb25kaXRpb25hbC5oCkBAIC0xMyw3
ICsxMyw5IEBACiAjaWZuZGVmIF9GTEFTS19DT05ESVRJT05BTF9IXwogI2Rl
ZmluZSBfRkxBU0tfQ09ORElUSU9OQUxfSF8KIAotaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
KTsKKyNpbmNsdWRlIDx4ZW4vdHlwZXMuaD4KKworaW50IHNlY3VyaXR5X2dl
dF9ib29scyhpbnQgKmxlbiwgY2hhciAqKipuYW1lcywgaW50ICoqdmFsdWVz
LCBzaXplX3QgKm1heHN0cik7CiAKIGludCBzZWN1cml0eV9zZXRfYm9vbHMo
aW50IGxlbiwgaW50ICp2YWx1ZXMpOwogCi0tLSBhL3hlbi94c20vZmxhc2sv
c3Mvc2VydmljZXMuYworKysgYi94ZW4veHNtL2ZsYXNrL3NzL3NlcnZpY2Vz
LmMKQEAgLTE4NTAsNyArMTg1MCw3IEBAIGludCBzZWN1cml0eV9maW5kX2Jv
b2woY29uc3QgY2hhciAqbmFtZSkKICAgICByZXR1cm4gcnY7CiB9CiAKLWlu
dCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICpsZW4sIGNoYXIgKioqbmFtZXMs
IGludCAqKnZhbHVlcykKK2ludCBzZWN1cml0eV9nZXRfYm9vbHMoaW50ICps
ZW4sIGNoYXIgKioqbmFtZXMsIGludCAqKnZhbHVlcywgc2l6ZV90ICptYXhz
dHIpCiB7CiAgICAgaW50IGksIHJjID0gLUVOT01FTTsKIApAQCAtMTg1OCw2
ICsxODU4LDggQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogICAgIGlmICggbmFtZXMgKQogICAgICAgICAqbmFtZXMgPSBO
VUxMOwogICAgICp2YWx1ZXMgPSBOVUxMOworICAgIGlmICggbWF4c3RyICkK
KyAgICAgICAgKm1heHN0ciA9IDA7CiAKICAgICAqbGVuID0gcG9saWN5ZGIu
cF9ib29scy5ucHJpbTsKICAgICBpZiAoICEqbGVuICkKQEAgLTE4NzksMTYg
KzE4ODEsMTcgQEAgaW50IHNlY3VyaXR5X2dldF9ib29scyhpbnQgKmxlbiwg
Y2hhciAqKgogCiAgICAgZm9yICggaSA9IDA7IGkgPCAqbGVuOyBpKysgKQog
ICAgIHsKLSAgICAgICAgc2l6ZV90IG5hbWVfbGVuOworICAgICAgICBzaXpl
X3QgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5ZGIucF9ib29sX3ZhbF90b19u
YW1lW2ldKTsKKwogICAgICAgICAoKnZhbHVlcylbaV0gPSBwb2xpY3lkYi5i
b29sX3ZhbF90b19zdHJ1Y3RbaV0tPnN0YXRlOwogICAgICAgICBpZiAoIG5h
bWVzICkgewotICAgICAgICAgICAgbmFtZV9sZW4gPSBzdHJsZW4ocG9saWN5
ZGIucF9ib29sX3ZhbF90b19uYW1lW2ldKSArIDE7Ci0gICAgICAgICAgICAo
Km5hbWVzKVtpXSA9IChjaGFyKil4bWFsbG9jX2FycmF5KGNoYXIsIG5hbWVf
bGVuKTsKKyAgICAgICAgICAgICgqbmFtZXMpW2ldID0geG1hbGxvY19hcnJh
eShjaGFyLCBuYW1lX2xlbiArIDEpOwogICAgICAgICAgICAgaWYgKCAhKCpu
YW1lcylbaV0gKQogICAgICAgICAgICAgICAgIGdvdG8gZXJyOwotICAgICAg
ICAgICAgc3RybGNweSgoKm5hbWVzKVtpXSwgcG9saWN5ZGIucF9ib29sX3Zh
bF90b19uYW1lW2ldLCBuYW1lX2xlbik7Ci0gICAgICAgICAgICAoKm5hbWVz
KVtpXVtuYW1lX2xlbiAtIDFdID0gMDsKKyAgICAgICAgICAgIHN0cmxjcHko
KCpuYW1lcylbaV0sIHBvbGljeWRiLnBfYm9vbF92YWxfdG9fbmFtZVtpXSwg
bmFtZV9sZW4gKyAxKTsKICAgICAgICAgfQorICAgICAgICBpZiAoIG1heHN0
ciAmJiBuYW1lX2xlbiA+ICptYXhzdHIgKQorICAgICAgICAgICAgKm1heHN0
ciA9IG5hbWVfbGVuOwogICAgIH0KICAgICByYyA9IDA7CiBvdXQ6CkBAIC0y
MDA2LDcgKzIwMDksNyBAQCBzdGF0aWMgaW50IHNlY3VyaXR5X3ByZXNlcnZl
X2Jvb2xzKHN0cnVjCiAgICAgc3RydWN0IGNvbmRfYm9vbF9kYXR1bSAqYm9v
bGRhdHVtOwogICAgIHN0cnVjdCBjb25kX25vZGUgKmN1cjsKIAotICAgIHJj
ID0gc2VjdXJpdHlfZ2V0X2Jvb2xzKCZuYm9vbHMsICZibmFtZXMsICZidmFs
dWVzKTsKKyAgICByYyA9IHNlY3VyaXR5X2dldF9ib29scygmbmJvb2xzLCAm
Ym5hbWVzLCAmYnZhbHVlcywgTlVMTCk7CiAgICAgaWYgKCByYyApCiAgICAg
ICAgIGdvdG8gb3V0OwogICAgIGZvciAoIGkgPSAwOyBpIDwgbmJvb2xzOyBp
KysgKQo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Mon Feb 10 11:43:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpG3-0001tW-JF; Mon, 10 Feb 2014 11:42:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCpG2-0001tP-16
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:42:38 +0000
Received: from [85.158.139.211:46034] by server-11.bemta-5.messagelabs.com id
	C8/A6-23886-D2BB8F25; Mon, 10 Feb 2014 11:42:37 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392032556!2869519!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1935 invoked from network); 10 Feb 2014 11:42:36 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:42:36 -0000
Received: by mail-wg0-f47.google.com with SMTP id m15so4119173wgh.14
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 03:42:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=BKUfAwfJnjrKPAKX5gSyKA8Czy4TGX9/K22hiiNlylc=;
	b=YRgzrMhEoyWU+q9z1OCdChUbeGJNTE3iQmlyLwzdwiYEIup7+PY3esP4YD94VcNCL1
	KIGeGAMjNqRYxx0q7Y90Zq73F4TyGdGH9x0ZZEmT3yAW4HalMGj8bIby4zInh+uck9pu
	pBL07pxrQ+jnE2bQbL5ocVf5dwL5OVac3epsY6kgJBgRUgofTy+qhB4E1cXpJfA1jxE8
	ZkQyg2Uv3taMEaFZHfDxW2YhJqCe++JkQU3jO5YSdBv+K5oUPUiswOmrLrpnPWjezF/o
	/nzAfVrKXSriUWK9YNuSY6/ya7xOZyBwMX8G624JKYqnCxZqaY384eH5dqsj1bTARYqQ
	YXcg==
X-Gm-Message-State: ALoCoQnUgbi9mtSvZzGdO8Bx0mCdHl5lW2S5Zk7yYEZBsCfbBCvo0h3YZ58WzuWSt7H1hfyT6Vet
X-Received: by 10.194.47.144 with SMTP id d16mr21812221wjn.2.1392032555914;
	Mon, 10 Feb 2014 03:42:35 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id dd3sm34608579wjb.9.2014.02.10.03.42.34
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 03:42:34 -0800 (PST)
Message-ID: <52F8BB29.5020004@linaro.org>
Date: Mon, 10 Feb 2014 11:42:33 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
	<52F896BE020000780011A995@nat28.tlf.novell.com>
In-Reply-To: <52F896BE020000780011A995@nat28.tlf.novell.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Jan,

On 10/02/14 08:07, Jan Beulich wrote:
>>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
>> DOM0 on ARM will have the same requirements as DOM0 PVH when iommu is
>> enabled.
>> Both PVH and ARM guest has paging mode translate enabled, so Xen can use it
>> to know if it needs to check the requirements.
>>
>> Rename the function and remove "pvh" word in the commit message.
>
> s/commit/panic/ ?

It's an error I made when writing the commit message. I will fix it in 
the next version.


>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>
> Other than the above,
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:43:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpG3-0001tW-JF; Mon, 10 Feb 2014 11:42:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCpG2-0001tP-16
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:42:38 +0000
Received: from [85.158.139.211:46034] by server-11.bemta-5.messagelabs.com id
	C8/A6-23886-D2BB8F25; Mon, 10 Feb 2014 11:42:37 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392032556!2869519!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1935 invoked from network); 10 Feb 2014 11:42:36 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:42:36 -0000
Received: by mail-wg0-f47.google.com with SMTP id m15so4119173wgh.14
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 03:42:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=BKUfAwfJnjrKPAKX5gSyKA8Czy4TGX9/K22hiiNlylc=;
	b=YRgzrMhEoyWU+q9z1OCdChUbeGJNTE3iQmlyLwzdwiYEIup7+PY3esP4YD94VcNCL1
	KIGeGAMjNqRYxx0q7Y90Zq73F4TyGdGH9x0ZZEmT3yAW4HalMGj8bIby4zInh+uck9pu
	pBL07pxrQ+jnE2bQbL5ocVf5dwL5OVac3epsY6kgJBgRUgofTy+qhB4E1cXpJfA1jxE8
	ZkQyg2Uv3taMEaFZHfDxW2YhJqCe++JkQU3jO5YSdBv+K5oUPUiswOmrLrpnPWjezF/o
	/nzAfVrKXSriUWK9YNuSY6/ya7xOZyBwMX8G624JKYqnCxZqaY384eH5dqsj1bTARYqQ
	YXcg==
X-Gm-Message-State: ALoCoQnUgbi9mtSvZzGdO8Bx0mCdHl5lW2S5Zk7yYEZBsCfbBCvo0h3YZ58WzuWSt7H1hfyT6Vet
X-Received: by 10.194.47.144 with SMTP id d16mr21812221wjn.2.1392032555914;
	Mon, 10 Feb 2014 03:42:35 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id dd3sm34608579wjb.9.2014.02.10.03.42.34
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 03:42:34 -0800 (PST)
Message-ID: <52F8BB29.5020004@linaro.org>
Date: Mon, 10 Feb 2014 11:42:33 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
	<52F896BE020000780011A995@nat28.tlf.novell.com>
In-Reply-To: <52F896BE020000780011A995@nat28.tlf.novell.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Jan,

On 10/02/14 08:07, Jan Beulich wrote:
>>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
>> DOM0 on ARM will have the same requirements as DOM0 PVH when iommu is
>> enabled.
>> Both PVH and ARM guest has paging mode translate enabled, so Xen can use it
>> to know if it needs to check the requirements.
>>
>> Rename the function and remove "pvh" word in the commit message.
>
> s/commit/panic/ ?

That was a mistake when I wrote the commit message. I will fix it in the 
next version.
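The idea in the commit message (key the requirements check off translated paging rather than a PVH-specific flag) can be modeled in a few lines. This is an illustrative sketch only; the struct contents and helper names are stand-ins, not the real Xen symbols:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the real struct domain; only the field used here. */
struct domain {
    bool paging_translate;
};

/* Stand-in for Xen's paging_mode_translate() predicate. */
static bool paging_mode_translate(const struct domain *d)
{
    return d->paging_translate;
}

/*
 * Both PVH and ARM guests run with translated paging, so the
 * requirements check can key off that property instead of a
 * PVH-specific flag.
 */
static bool needs_hwdom_reqs_check(const struct domain *d, bool iommu_enabled)
{
    return iommu_enabled && paging_mode_translate(d);
}
```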


>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>
> Other than the above,
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:48:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpLK-0002TQ-F7; Mon, 10 Feb 2014 11:48:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WCpLI-0002TH-OV
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 11:48:04 +0000
Received: from [85.158.139.211:28411] by server-7.bemta-5.messagelabs.com id
	44/53-14867-37CB8F25; Mon, 10 Feb 2014 11:48:03 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392032882!2820567!1
X-Originating-IP: [62.142.5.116]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTE2ID0+IDk3MTU2\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28314 invoked from network); 10 Feb 2014 11:48:03 -0000
Received: from emh06.mail.saunalahti.fi (HELO emh06.mail.saunalahti.fi)
	(62.142.5.116)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 11:48:03 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh06.mail.saunalahti.fi (Postfix) with ESMTP id CADC2699DE;
	Mon, 10 Feb 2014 13:48:01 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 9E74636C01F; Mon, 10 Feb 2014 13:48:01 +0200 (EET)
Date: Mon, 10 Feb 2014 13:48:01 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140210114801.GV2924@reaktio.net>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392028633.5117.29.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 10, 2014 at 10:37:13AM +0000, Ian Campbell wrote:
> On Sun, 2014-02-09 at 16:04 +0000, xen.org wrote:
> > 
> >  build-armhf-pvops             4 kernel-build                 fail   never pass 
> 
> + git checkout 494479038d97f1b9f76fc633a360a681acdf035c
> fatal: reference is not a tree: 494479038d97f1b9f76fc633a360a681acdf035c
> 
> This is because it is using git://xenbits.xen.org/linux-pvops.git
> instead of the tree it should be testing...
> 
> The following fixes it for me, but although the results are as I wanted
> I'm not 100% sure about this override in the first place. In my
> experiments with cr-daily-branch I see:
> 
>         Branch		$TREE_LINUX		$TREE_LINUX_ARM
>         
>         xen-unstable	pvops			pvops
>         linux-linus	torvalds		pvops
>         linux-arm-xen	arm-xen			arm-xen
> 
>         Key:
>         [arm-xen] git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
>         [torvalds] git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git

Isn't that the "old" repo (but a symlink to the new one)? The new one being:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
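As an aside, the failure above ("reference is not a tree") just means the SHA is absent from the repository being used, so one can test for a commit's presence before attempting the checkout. A generic git sketch in a scratch repository, not part of osstest:

```shell
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.name=t -c user.email=t@example.invalid \
    commit -q --allow-empty -m init
missing=494479038d97f1b9f76fc633a360a681acdf035c
# 'git cat-file -e <sha>^{commit}' exits non-zero when the object is
# not in this repository, which is the same condition that makes
# 'git checkout <sha>' fail with "reference is not a tree".
if git cat-file -e "$missing^{commit}" 2>/dev/null; then
    state=present
else
    state=absent
fi
echo "commit is $state"
```

In a script like cr-daily-branch this kind of probe would distinguish "wrong tree configured" from "commit genuinely gone".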

-- Pasi



From xen-devel-bounces@lists.xen.org Mon Feb 10 11:52:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpP5-0002mh-6l; Mon, 10 Feb 2014 11:51:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCpP3-0002mc-UE
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 11:51:58 +0000
Received: from [85.158.143.35:59474] by server-3.bemta-4.messagelabs.com id
	E2/76-11539-D5DB8F25; Mon, 10 Feb 2014 11:51:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392033115!4489083!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25526 invoked from network); 10 Feb 2014 11:51:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:51:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99440826"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 11:51:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 06:51:54 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpP0-0007wj-7D;
	Mon, 10 Feb 2014 11:51:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpOz-0006xz-UD;
	Mon, 10 Feb 2014 11:51:54 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21240.48473.641987.485823@mariner.uk.xensource.com>
Date: Mon, 10 Feb 2014 11:51:53 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392028633.5117.29.camel@kazak.uk.xensource.com>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] mfi-common: Only override the pvops kernel repo for linux-arm-xen branch (Was: Re: [Xen-devel] [linux-linus test] 24817: regressions - FAIL)"):
> On Sun, 2014-02-09 at 16:04 +0000, xen.org wrote:
> > 
> >  build-armhf-pvops             4 kernel-build                 fail   never pass 

Surely this is correct.  Linus's tip doesn't pass this test.

> + git checkout 494479038d97f1b9f76fc633a360a681acdf035c
> fatal: reference is not a tree: 494479038d97f1b9f76fc633a360a681acdf035c
> 
> This is because it is using git://xenbits.xen.org/linux-pvops.git
> instead of the tree it should be testing...

That's exactly what it should be using, isn't it ?  After all, in
principle, eventually, Linus's tip should work.  In the meantime it
doesn't and this is a real bug.

Ian.


From xen-devel-bounces@lists.xen.org Mon Feb 10 11:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpPl-0002qf-MT; Mon, 10 Feb 2014 11:52:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCpPk-0002qR-N9
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:52:40 +0000
Received: from [193.109.254.147:63178] by server-3.bemta-14.messagelabs.com id
	60/69-00432-78DB8F25; Mon, 10 Feb 2014 11:52:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392033158!3240493!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18012 invoked from network); 10 Feb 2014 11:52:39 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:52:39 -0000
Received: by mail-wi0-f178.google.com with SMTP id cc10so2702413wib.11
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 03:52:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8yx7ZC0NDCWdPo/DU4OxjL+pVcYuk9jDJGKIabCv6fA=;
	b=lIbBjFs03XJiinKzszplYU08uiWb6OQsF2FPwhQVKCFxaUH3NreU02PhAkxJ2k28fx
	akYFeODBbKL5UA5X8pbhTBuZqeuC89XKFUmtWMlpQmTC27TX0gAueRl01n4VD8E/3Kkc
	XAoguucdwACd8HDKWmqtytB75fAy/+Dy5mCfDVFnMNSRooEdQ8C7DKdR34XNavMwlrgC
	dYBTh7l99J0P46UaaGLh7bIGYhPGnssO0WlkLKB3PNoybXN8a8Ju+2cW4oVoqeLZ6T1Q
	Lc2Yx8OFqImHciP/vuRThjnp66znKpRW92SBzs3e5ZRWGBSQssim0lMdsJyg8zwDH5Vj
	zFTQ==
X-Gm-Message-State: ALoCoQlxumV7cgOUfeD1scVRGvS9WW9sCu4LH+YyFQyi2lpgvwolc/7qKNWPvwrXR4GExj8rZ44v
X-Received: by 10.180.9.51 with SMTP id w19mr9973946wia.27.1392033158425;
	Mon, 10 Feb 2014 03:52:38 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id hy8sm34704402wjb.2.2014.02.10.03.52.37
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 03:52:37 -0800 (PST)
Message-ID: <52F8BD84.3040805@linaro.org>
Date: Mon, 10 Feb 2014 11:52:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-9-git-send-email-julien.grall@linaro.org>
	<52F89A5A020000780011A9C9@nat28.tlf.novell.com>
In-Reply-To: <52F89A5A020000780011A9C9@nat28.tlf.novell.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Jan,

Thanks for the review.

On 10/02/14 08:22, Jan Beulich wrote:
>>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
>> The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
>> functions specific to x86 and PCI.
>>
>> Split the framework in 3 distincts files:
>>      - iommu.c: contains generic functions shared between x86 and ARM
>>                 (when it will be supported)
>>      - iommu_pci.c: contains specific functions for PCI passthrough
>>      - iommu_x86.c: contains specific functions for x86
>>
>> iommu_pci.c will be only compiled when PCI is supported by the architecture
>> (eg. HAS_PCI is defined).
>>
>> This patch is mostly code movement in new files.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> ---
>>   xen/drivers/passthrough/Makefile    |    6 +-
>>   xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
>>   xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++
>
> There's xen/drivers/passthrough/pci.c already - any reason not to
> move the code there?

I didn't think about moving the code directly into passthrough/pci.c. I 
will do so in the next version.
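The HAS_PCI-conditional build described in the quoted commit message would presumably amount to something like the following in xen/drivers/passthrough/Makefile (a sketch; the exact variable and object names are assumptions, following the obj-y convention the Xen build system already uses):

```make
obj-y += iommu.o
obj-$(x86) += iommu_x86.o
# Only built on architectures that define HAS_PCI:
obj-$(HAS_PCI) += iommu_pci.o
```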

>
>>   xen/drivers/passthrough/iommu_x86.c |   65 +++++
>
> Same here for xen/drivers/passthrough/x86/.
>
>> @@ -696,125 +344,6 @@ void iommu_crash_shutdown(void)
>>       iommu_enabled = iommu_intremap = 0;
>>   }
>>
>> -int iommu_do_domctl(
>> -    struct xen_domctl *domctl, struct domain *d,
>> -    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>
> The function itself should probably not be moved out. Either the
> PCI-specific pieces of it should be made conditional, or a
> descendant function be created. Since (afaict) you'll need all of
> the domctl-s (with different arguments) too for pass-through on
> ARM - what's your plan for them?

PCI passthrough will be supported on ARM in the future, but we will also 
have to support non-PCI passthrough (via the device tree). I think we 
will add new DOMCTLs for that; I haven't really thought about it yet.

I will add a descendant function to handle the PCI DOMCTLs.
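For what it's worth, the descendant-function split Jan suggests could take roughly this shape. Everything here (the type contents, the command value, the helper name) is illustrative stand-in code, not the real Xen definitions:

```c
#include <assert.h>
#include <errno.h>

/* Minimal stand-ins for the real Xen types. */
struct xen_domctl { unsigned int cmd; };
struct domain { int domain_id; };

enum { DOMCTL_assign_device = 1 };  /* illustrative command value */

#ifdef HAS_PCI
/* PCI-specific DOMCTL handling is compiled in only with HAS_PCI. */
static int iommu_do_pci_domctl(struct xen_domctl *domctl, struct domain *d)
{
    (void)domctl; (void)d;
    return 0;
}
#endif

/*
 * Generic entry point: bus-neutral cases stay here, and anything
 * PCI-specific is delegated to the descendant function when built in.
 */
static int iommu_do_domctl(struct xen_domctl *domctl, struct domain *d)
{
    switch ( domctl->cmd )
    {
    case DOMCTL_assign_device:
#ifdef HAS_PCI
        return iommu_do_pci_domctl(domctl, d);
#else
        return -ENOSYS;
#endif
    default:
        return -ENOSYS;
    }
}
```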

>
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -1784,31 +1784,31 @@ static int intel_iommu_unmap_page(struct domain *d,
>> unsigned long gfn)
>>
>>   void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
>>                        int order, int present)
>> -{
>> -    struct acpi_drhd_unit *drhd;
>> -    struct iommu *iommu = NULL;
>> -    struct hvm_iommu *hd = domain_hvm_iommu(d);
>> -    int flush_dev_iotlb;
>> -    int iommu_domid;
>> +    {
>> +        struct acpi_drhd_unit *drhd;
>> +        struct iommu *iommu = NULL;
>> +        struct hvm_iommu *hd = domain_hvm_iommu(d);
>> +        int flush_dev_iotlb;
>> +        int iommu_domid;
>>
>> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>>
>> -    for_each_drhd_unit ( drhd )
>> -    {
>> -        iommu = drhd->iommu;
>> -        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
>> -            continue;
>> +        for_each_drhd_unit ( drhd )
>> +        {
>> +            iommu = drhd->iommu;
>> +            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
>> +                continue;
>>
>> -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>> -        iommu_domid= domain_iommu_domid(d, iommu);
>> -        if ( iommu_domid == -1 )
>> -            continue;
>> -        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
>> -                                   (paddr_t)gfn << PAGE_SHIFT_4K,
>> -                                   order, !present, flush_dev_iotlb) )
>> -            iommu_flush_write_buffer(iommu);
>> +            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>> +            iommu_domid= domain_iommu_domid(d, iommu);
>> +            if ( iommu_domid == -1 )
>> +                continue;
>> +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
>> +                                       (paddr_t)gfn << PAGE_SHIFT_4K,
>> +                                       order, !present, flush_dev_iotlb) )
>> +                iommu_flush_write_buffer(iommu);
>> +        }
>>       }
>> -}
>
> What are these changes to indentation about? Are you
> deliberately breaking common rules here, or is this some sort of
> unintentional leftover?

That was a mistake when I created this patch. I will remove it.

Regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 10 11:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpPl-0002qf-MT; Mon, 10 Feb 2014 11:52:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCpPk-0002qR-N9
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 11:52:40 +0000
Received: from [193.109.254.147:63178] by server-3.bemta-14.messagelabs.com id
	60/69-00432-78DB8F25; Mon, 10 Feb 2014 11:52:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392033158!3240493!1
X-Originating-IP: [209.85.212.178]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18012 invoked from network); 10 Feb 2014 11:52:39 -0000
Received: from mail-wi0-f178.google.com (HELO mail-wi0-f178.google.com)
	(209.85.212.178)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 11:52:39 -0000
Received: by mail-wi0-f178.google.com with SMTP id cc10so2702413wib.11
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 03:52:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8yx7ZC0NDCWdPo/DU4OxjL+pVcYuk9jDJGKIabCv6fA=;
	b=lIbBjFs03XJiinKzszplYU08uiWb6OQsF2FPwhQVKCFxaUH3NreU02PhAkxJ2k28fx
	akYFeODBbKL5UA5X8pbhTBuZqeuC89XKFUmtWMlpQmTC27TX0gAueRl01n4VD8E/3Kkc
	XAoguucdwACd8HDKWmqtytB75fAy/+Dy5mCfDVFnMNSRooEdQ8C7DKdR34XNavMwlrgC
	dYBTh7l99J0P46UaaGLh7bIGYhPGnssO0WlkLKB3PNoybXN8a8Ju+2cW4oVoqeLZ6T1Q
	Lc2Yx8OFqImHciP/vuRThjnp66znKpRW92SBzs3e5ZRWGBSQssim0lMdsJyg8zwDH5Vj
	zFTQ==
X-Gm-Message-State: ALoCoQlxumV7cgOUfeD1scVRGvS9WW9sCu4LH+YyFQyi2lpgvwolc/7qKNWPvwrXR4GExj8rZ44v
X-Received: by 10.180.9.51 with SMTP id w19mr9973946wia.27.1392033158425;
	Mon, 10 Feb 2014 03:52:38 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id hy8sm34704402wjb.2.2014.02.10.03.52.37
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 03:52:37 -0800 (PST)
Message-ID: <52F8BD84.3040805@linaro.org>
Date: Mon, 10 Feb 2014 11:52:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-9-git-send-email-julien.grall@linaro.org>
	<52F89A5A020000780011A9C9@nat28.tlf.novell.com>
In-Reply-To: <52F89A5A020000780011A9C9@nat28.tlf.novell.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Jan,

Thanks for the review.

On 10/02/14 08:22, Jan Beulich wrote:
>>>> On 07.02.14 at 18:43, Julien Grall <julien.grall@linaro.org> wrote:
>> The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
>> functions specific to x86 and PCI.
>>
>> Split the framework in 3 distincts files:
>>      - iommu.c: contains generic functions shared between x86 and ARM
>>                 (when it will be supported)
>>      - iommu_pci.c: contains specific functions for PCI passthrough
>>      - iommu_x86.c: contains specific functions for x86
>>
>> iommu_pci.c will be only compiled when PCI is supported by the architecture
>> (eg. HAS_PCI is defined).
>>
>> This patch is mostly code movement in new files.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> ---
>>   xen/drivers/passthrough/Makefile    |    6 +-
>>   xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
>>   xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++
>
> There's xen/drivers/passthrough/pci.c already - any reason not to
> move the code there?

I didn't think specifically about moving the code directly into 
passthrough/pci.c. I will do so in the next version.

>
>>   xen/drivers/passthrough/iommu_x86.c |   65 +++++
>
> Same here for xen/drivers/passthrough/x86/.
>
>> @@ -696,125 +344,6 @@ void iommu_crash_shutdown(void)
>>       iommu_enabled = iommu_intremap = 0;
>>   }
>>
>> -int iommu_do_domctl(
>> -    struct xen_domctl *domctl, struct domain *d,
>> -    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>
> The function itself should probably not be moved out. Either the
> PCI-specific pieces of it should be made conditional, or a
> descendant function be created. Since (afaict) you'll need all of
> the domctl-s (with different arguments) too for pass-through on
> ARM - what's your plan for them?

PCI passthrough will be supported on ARM in the future, but we will 
also have to support passthrough of devices described via the device 
tree. I think we will add new DOMCTLs for that; I haven't really 
thought about it yet.

I will add a descendant function to handle PCI DOMCTL.
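
The split could look roughly like the following sketch: the generic 
handler keeps architecture-neutral logic and forwards PCI-only sub-ops 
to a descendant function that would only be compiled when HAS_PCI is 
defined. The struct, command numbers and function bodies below are 
illustrative stand-ins, not the real Xen definitions.

```c
#include <errno.h>

/* Stand-in for xen_domctl; the real type carries a union of
 * per-command payloads. */
struct xen_domctl { unsigned int cmd; };

/* Stand-in command numbers for the PCI-specific sub-ops. */
enum { XEN_DOMCTL_assign_device = 1, XEN_DOMCTL_deassign_device = 2 };

/* Descendant: PCI-specific handling, living in passthrough/pci.c
 * and compiled only under HAS_PCI. */
static int iommu_do_pci_domctl(struct xen_domctl *domctl)
{
    /* ... device (de)assignment would go here ... */
    return 0;
}

/* Generic entry point: dispatches PCI-only commands to the
 * descendant, so iommu.c itself stays free of PCI knowledge. */
int iommu_do_domctl(struct xen_domctl *domctl)
{
    switch ( domctl->cmd )
    {
    case XEN_DOMCTL_assign_device:
    case XEN_DOMCTL_deassign_device:
        return iommu_do_pci_domctl(domctl);
    default:
        return -ENOSYS; /* not an IOMMU sub-op we handle here */
    }
}
```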

>
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -1784,31 +1784,31 @@ static int intel_iommu_unmap_page(struct domain *d,
>> unsigned long gfn)
>>
>>   void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
>>                        int order, int present)
>> -{
>> -    struct acpi_drhd_unit *drhd;
>> -    struct iommu *iommu = NULL;
>> -    struct hvm_iommu *hd = domain_hvm_iommu(d);
>> -    int flush_dev_iotlb;
>> -    int iommu_domid;
>> +    {
>> +        struct acpi_drhd_unit *drhd;
>> +        struct iommu *iommu = NULL;
>> +        struct hvm_iommu *hd = domain_hvm_iommu(d);
>> +        int flush_dev_iotlb;
>> +        int iommu_domid;
>>
>> -    iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>> +        iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
>>
>> -    for_each_drhd_unit ( drhd )
>> -    {
>> -        iommu = drhd->iommu;
>> -        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
>> -            continue;
>> +        for_each_drhd_unit ( drhd )
>> +        {
>> +            iommu = drhd->iommu;
>> +            if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
>> +                continue;
>>
>> -        flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>> -        iommu_domid= domain_iommu_domid(d, iommu);
>> -        if ( iommu_domid == -1 )
>> -            continue;
>> -        if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
>> -                                   (paddr_t)gfn << PAGE_SHIFT_4K,
>> -                                   order, !present, flush_dev_iotlb) )
>> -            iommu_flush_write_buffer(iommu);
>> +            flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
>> +            iommu_domid= domain_iommu_domid(d, iommu);
>> +            if ( iommu_domid == -1 )
>> +                continue;
>> +            if ( iommu_flush_iotlb_psi(iommu, iommu_domid,
>> +                                       (paddr_t)gfn << PAGE_SHIFT_4K,
>> +                                       order, !present, flush_dev_iotlb) )
>> +                iommu_flush_write_buffer(iommu);
>> +        }
>>       }
>> -}
>
> What are these changes to indentation about? Are you
> deliberately breaking common rules here, or is this some sort of
> unintentional leftover?

It's an error I made when creating this patch. I will remove it.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:57:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpU8-00036k-G2; Mon, 10 Feb 2014 11:57:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WCpU6-00036c-Gb
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 11:57:11 +0000
Received: from [193.109.254.147:16610] by server-12.bemta-14.messagelabs.com
	id E9/3E-17220-59EB8F25; Mon, 10 Feb 2014 11:57:09 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392033427!3217328!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27501 invoked from network); 10 Feb 2014 11:57:08 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 11:57:08 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392033427; l=22265;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=jPXQrKMf8tqDlbLPayTCvic394M=;
	b=XZ8bVzm6x9wAqA/ja3QYbg8LHjErJsD1Uc2gKEcfcfWqG+dnPS/Lbbyy5hz5a+nblLy
	ah3SwfUz07lXiAkVyGN5vIG9K20/Jq2WrQ1GAdqpffhMd7jnjI2TXYw+GcPnZWuJfojDe
	owcCwE2CsEacMUF2vYatVO+j1vIspLIqoUs=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.25 AUTH) with ESMTPSA id j01f6dq1ABv79KI
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 10 Feb 2014 12:57:07 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 1949B50269; Mon, 10 Feb 2014 12:57:06 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 12:57:05 +0100
Message-Id: <1392033425-1863-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] tools/xc: pass errno to callers of
	xc_domain_save
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Callers of xc_domain_save use errno to print diagnostics if the call
fails. But xc_domain_save does not preserve the actual errno in case of
a failure.

This change preserves errno in all cases where the code jumps to the
label "out". In addition, a new label "exit" is added to also catch
code paths which used to simply "return 1".

Now libxl_save_helper:complete can print the actual error string. Also
checkpoint is updated to pass the errno to its caller.

Note: some of the functions used in xc_domain_save do not use errno to
indicate a reason. In these cases errno remains undefined, as it was
before this change.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libxc/xc_domain_save.c                       | 88 ++++++++++++++++++++--
 .../python/xen/lowlevel/checkpoint/libcheckpoint.c |  4 +-
 2 files changed, 85 insertions(+), 7 deletions(-)

diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 42c4752..f32ac81 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -806,6 +806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     xc_dominfo_t info;
     DECLARE_DOMCTL;
 
+    int errnoval = 0;
     int rc = 1, frc, i, j, last_iter = 0, iter = 0;
     int live  = (flags & XCFLAGS_LIVE);
     int debug = (flags & XCFLAGS_DEBUG);
@@ -898,8 +899,8 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( hvm && !callbacks->switch_qemu_logdirty )
     {
         ERROR("No switch_qemu_logdirty callback provided.");
-        errno = EINVAL;
-        return 1;
+        errnoval = EINVAL;
+        goto exit;
     }
 
     outbuf_init(xch, &ob_pagebuf, OUTBUF_SIZE);
@@ -913,14 +914,16 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( !get_platform_info(xch, dom,
                             &ctx->max_mfn, &ctx->hvirt_start, &ctx->pt_levels, &dinfo->guest_width) )
     {
+        errnoval = errno;
         ERROR("Unable to get platform info.");
-        return 1;
+        goto exit;
     }
 
     if ( xc_domain_getinfo(xch, dom, 1, &info) != 1 )
     {
+        errnoval = errno;
         PERROR("Could not get domain info");
-        return 1;
+        goto exit;
     }
 
     shared_info_frame = info.shared_info_frame;
@@ -932,6 +935,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                            PROT_READ, shared_info_frame);
         if ( !live_shinfo )
         {
+            errnoval = errno;
             PERROR("Couldn't map live_shinfo");
             goto out;
         }
@@ -942,6 +946,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( dinfo->p2m_size > ~XEN_DOMCTL_PFINFO_LTAB_MASK )
     {
+        errnoval = E2BIG;
         ERROR("Cannot save this big a guest");
         goto out;
     }
@@ -967,6 +972,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             
             if ( frc < 0 )
             {
+                errnoval = errno;
                 PERROR("Couldn't enable shadow mode (rc %d) (errno %d)", frc, errno );
                 goto out;
             }
@@ -975,6 +981,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         /* Enable qemu-dm logging dirty pages to xen */
         if ( hvm && callbacks->switch_qemu_logdirty(dom, 1, callbacks->data) )
         {
+            errnoval = errno;
             PERROR("Couldn't enable qemu log-dirty mode (errno %d)", errno);
             goto out;
         }
@@ -985,6 +992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -994,6 +1002,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     {
         if (!(compress_ctx = xc_compression_create_context(xch, dinfo->p2m_size)))
         {
+            errnoval = errno;
             ERROR("Failed to create compression context");
             goto out;
         }
@@ -1012,6 +1021,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( !to_send || !to_fix || !to_skip )
     {
+        errnoval = ENOMEM;
         ERROR("Couldn't allocate to_send array");
         goto out;
     }
@@ -1024,12 +1034,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
         if ( hvm_buf_size == -1 )
         {
+            errnoval = errno;
             PERROR("Couldn't get HVM context size from Xen");
             goto out;
         }
         hvm_buf = malloc(hvm_buf_size);
         if ( !hvm_buf )
         {
+            errnoval = ENOMEM;
             ERROR("Couldn't allocate memory");
             goto out;
         }
@@ -1043,7 +1055,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( (pfn_type == NULL) || (pfn_batch == NULL) || (pfn_err == NULL) )
     {
         ERROR("failed to alloc memory for pfn_type and/or pfn_batch arrays");
-        errno = ENOMEM;
+        errnoval = ENOMEM;
         goto out;
     }
     memset(pfn_type, 0,
@@ -1052,6 +1064,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Setup the mfn_to_pfn table mapping */
     if ( !(ctx->live_m2p = xc_map_m2p(xch, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) )
     {
+        errnoval = errno;
         PERROR("Failed to map live M2P table");
         goto out;
     }
@@ -1059,6 +1072,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Start writing out the saved-domain record. */
     if ( write_exact(io_fd, &dinfo->p2m_size, sizeof(unsigned long)) )
     {
+        errnoval = errno;
         PERROR("write: p2m_size");
         goto out;
     }
@@ -1071,6 +1085,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ctx->live_p2m = map_and_save_p2m_table(xch, io_fd, dom, ctx, live_shinfo);
         if ( ctx->live_p2m == NULL )
         {
+            errnoval = errno;
             PERROR("Failed to map/save the p2m frame list");
             goto out;
         }
@@ -1097,12 +1112,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     tmem_saved = xc_tmem_save(xch, dom, io_fd, live, XC_SAVE_ID_TMEM);
     if ( tmem_saved == -1 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tmem)");
         goto out;
     }
 
     if ( !live && save_tsc_info(xch, dom, io_fd) < 0 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tsc)");
         goto out;
     }
@@ -1143,6 +1160,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     dinfo->p2m_size, NULL, 0, NULL);
                 if ( frc != dinfo->p2m_size )
                 {
+                    errnoval = errno;
                     ERROR("Error peeking shadow bitmap");
                     goto out;
                 }
@@ -1257,6 +1275,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 xch, dom, PROT_READ, pfn_type, pfn_err, batch);
             if ( region_base == NULL )
             {
+                errnoval = errno;
                 PERROR("map batch failed");
                 goto out;
             }
@@ -1264,6 +1283,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* Get page types */
             if ( xc_get_pfn_type_batch(xch, dom, batch, pfn_type) )
             {
+                errnoval = errno;
                 PERROR("get_pfn_type_batch failed");
                 goto out;
             }
@@ -1332,6 +1352,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
             if ( wrexact(io_fd, &batch, sizeof(unsigned int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (2)");
                 goto out;
             }
@@ -1341,6 +1362,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     ((unsigned long *)pfn_type)[j] = pfn_type[j];
             if ( wrexact(io_fd, pfn_type, sizeof(unsigned long)*batch) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (3)");
                 goto out;
             }
@@ -1368,6 +1390,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                        (char*)region_base+(PAGE_SIZE*(j-run)), 
                                        PAGE_SIZE*run) != PAGE_SIZE*run )
                         {
+                            errnoval = errno;
                             PERROR("Error when writing to state file (4a)"
                                   " (errno %d)", errno);
                             goto out;
@@ -1396,6 +1419,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                     if ( race && !live )
                     {
+                        errnoval = errno;
                         ERROR("Fatal PT race (pfn %lx, type %08lx)", pfn,
                               pagetype);
                         goto out;
@@ -1409,6 +1433,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                                         pfn, 1 /* raw page */);
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add pagetable page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1428,6 +1453,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                              */
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4b)\n");
                                 goto out;
@@ -1437,6 +1463,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     else if ( wruncached(io_fd, live, page,
                                          PAGE_SIZE) != PAGE_SIZE )
                     {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (4b)"
                               " (errno %d)", errno);
                         goto out;
@@ -1456,6 +1483,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1465,6 +1493,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                         {
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4c)\n");
                                 goto out;
@@ -1483,6 +1512,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                (char*)region_base+(PAGE_SIZE*(j-run)), 
                                PAGE_SIZE*run) != PAGE_SIZE*run )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (4c)"
                           " (errno %d)", errno);
                     goto out;
@@ -1520,6 +1550,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* send "-1" to put receiver into debug mode */
             if ( wrexact(io_fd, &id, sizeof(int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (6)");
                 goto out;
             }
@@ -1542,6 +1573,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( suspend_and_state(callbacks->suspend, callbacks->data,
                                        xch, io_fd, dom, &info) )
                 {
+                    errnoval = errno;
                     ERROR("Domain appears not to have suspended");
                     goto out;
                 }
@@ -1550,12 +1582,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( (tmem_saved > 0) &&
                      (xc_tmem_save_extra(xch,dom,io_fd,XC_SAVE_ID_TMEM_EXTRA) == -1) )
                 {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (tmem)");
                         goto out;
                 }
 
                 if ( save_tsc_info(xch, dom, io_fd) < 0 )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (tsc)");
                     goto out;
                 }
@@ -1567,6 +1601,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                    XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send),
                                    dinfo->p2m_size, NULL, 0, &shadow_stats) != dinfo->p2m_size )
             {
+                errnoval = errno;
                 PERROR("Error flushing shadow PT");
                 goto out;
             }
@@ -1598,6 +1633,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( info.max_vcpu_id >= XC_SR_MAX_VCPUS )
         {
+            errnoval = E2BIG;
             ERROR("Too many VCPUS in guest!");
             goto out;
         }
@@ -1614,6 +1650,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, &chunk, offsetof(struct chunk, vcpumap)
                      + vcpumap_sz(info.max_vcpu_id)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file");
             goto out;
         }
@@ -1633,6 +1670,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the generation id buffer location for guest");
             goto out;
         }
@@ -1645,6 +1683,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the ident_pt for EPT guest");
             goto out;
         }
@@ -1657,6 +1696,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the paging ring pfn for guest");
             goto out;
         }
@@ -1669,6 +1709,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the access ring pfn for guest");
             goto out;
         }
@@ -1681,6 +1722,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the sharing ring pfn for guest");
             goto out;
         }
@@ -1693,6 +1735,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the vm86 TSS for guest");
             goto out;
         }
@@ -1705,6 +1748,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the console pfn for guest");
             goto out;
         }
@@ -1716,6 +1760,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ((chunk.data != 0) && wrexact(io_fd, &chunk, sizeof(chunk)))
         {
+            errnoval = errno;
             PERROR("Error when writing the firmware ioport version");
             goto out;
         }
@@ -1728,6 +1773,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the viridian flag");
             goto out;
         }
@@ -1741,6 +1787,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( callbacks->toolstack_save(dom, &buf, &len, callbacks->data) < 0 )
         {
+            errnoval = errno;
             PERROR("Error calling toolstack_save");
             goto out;
         }
@@ -1759,6 +1806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_LAST_CHECKPOINT;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing last checkpoint chunk");
             goto out;
         }
@@ -1778,6 +1826,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_ENABLE_COMPRESSION;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing enable_compression marker");
             goto out;
         }
@@ -1787,6 +1836,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     i = 0;
     if ( wrexact(io_fd, &i, sizeof(int)) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (6')");
         goto out;
     }
@@ -1805,6 +1855,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                          (unsigned long *)&magic_pfns[2]);
         if ( wrexact(io_fd, magic_pfns, sizeof(magic_pfns)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (7)");
             goto out;
         }
@@ -1813,18 +1864,21 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (rec_size = xc_domain_hvm_getcontext(xch, dom, hvm_buf, 
                                                   hvm_buf_size)) == -1 )
         {
+            errnoval = errno;
             PERROR("HVM:Could not get hvm buffer");
             goto out;
         }
         
         if ( wrexact(io_fd, &rec_size, sizeof(uint32_t)) )
         {
+            errnoval = errno;
             PERROR("error write hvm buffer size");
             goto out;
         }
         
         if ( wrexact(io_fd, hvm_buf, rec_size) )
         {
+            errnoval = errno;
             PERROR("write HVM info failed!");
             goto out;
         }
@@ -1849,6 +1903,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( wrexact(io_fd, &j, sizeof(unsigned int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (6a)");
             goto out;
         }
@@ -1863,6 +1918,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             {
                 if ( wrexact(io_fd, &pfntab, sizeof(unsigned long)*j) )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (6b)");
                     goto out;
                 }
@@ -1873,6 +1929,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( xc_vcpu_getcontext(xch, dom, 0, &ctxt) )
     {
+        errnoval = errno;
         PERROR("Could not get vcpu context");
         goto out;
     }
@@ -1888,6 +1945,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     mfn = GET_FIELD(&ctxt, user_regs.edx);
     if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
     {
+        errnoval = ERANGE;
         ERROR("Suspend record is not in range of pseudophys map");
         goto out;
     }
@@ -1900,6 +1958,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( (i != 0) && xc_vcpu_getcontext(xch, dom, i, &ctxt) )
         {
+            errnoval = errno;
             PERROR("No context for VCPU%d", i);
             goto out;
         }
@@ -1910,6 +1969,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             mfn = GET_FIELD(&ctxt, gdt_frames[j]);
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
             {
+                errnoval = ERANGE;
                 ERROR("GDT frame is not in range of pseudophys map");
                 goto out;
             }
@@ -1920,6 +1980,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(
                                            GET_FIELD(&ctxt, ctrlreg[3]))) )
         {
+            errnoval = ERANGE;
             ERROR("PT base is not in range of pseudophys map");
             goto out;
         }
@@ -1931,6 +1992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         {
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(ctxt.x64.ctrlreg[1])) )
             {
+                errnoval = ERANGE;
                 ERROR("PT base is not in range of pseudophys map");
                 goto out;
             }
@@ -1943,6 +2005,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                         ? sizeof(ctxt.x64) 
                                         : sizeof(ctxt.x32))) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (1)");
             goto out;
         }
@@ -1953,11 +2016,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.ext_vcpucontext.vcpu = i;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No extended context for VCPU%d", i);
             goto out;
         }
         if ( wrexact(io_fd, &domctl.u.ext_vcpucontext, 128) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (2)");
             goto out;
         }
@@ -1971,6 +2036,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.vcpuextstate.size = 0;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             goto out;
         }
@@ -1982,6 +2048,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         buffer = xc_hypercall_buffer_alloc(xch, buffer, domctl.u.vcpuextstate.size);
         if ( !buffer )
         {
+            errnoval = errno;
             PERROR("Insufficient memory for getting eXtended states for"
                    "VCPU%d", i);
             goto out;
@@ -1989,6 +2056,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         set_xen_guest_handle(domctl.u.vcpuextstate.buffer, buffer);
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2000,6 +2068,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                      sizeof(domctl.u.vcpuextstate.size)) ||
              wrexact(io_fd, buffer, domctl.u.vcpuextstate.size) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file VCPU extended state");
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2015,6 +2084,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
               arch.pfn_to_mfn_frame_list_list, 0);
     if ( wrexact(io_fd, page, PAGE_SIZE) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (1)");
         goto out;
     }
@@ -2022,6 +2092,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Flush last write and check for errors. */
     if ( fsync(io_fd) && errno != EINVAL )
     {
+        errnoval = errno;
         PERROR("Error when flushing state file");
         goto out;
     }
@@ -2043,6 +2114,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ob = &ob_pagebuf;
         if (wrcompressed(io_fd) < 0)
         {
+            errnoval = errno;
             ERROR("Error when writing compressed data, after postcopy\n");
             rc = 1;
             goto out;
@@ -2051,6 +2123,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, ob_tailbuf.buf, ob_tailbuf.pos) )
         {
             rc = 1;
+            errnoval = errno;
             PERROR("Error when copying tailbuf into outbuf");
             goto out;
         }
@@ -2079,6 +2152,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -2130,7 +2204,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     free(hvm_buf);
     outbuf_free(&ob_pagebuf);
 
-    DPRINTF("Save exit of domid %u with rc=%d\n", dom, rc);
+exit:
+    DPRINTF("Save exit of domid %u with rc=%d, errno=%d\n", dom, rc, errnoval);
+    errno = errnoval;
 
     return !!rc;
 }
diff --git a/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c b/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c
index 01c0d47..270d5a3 100644
--- a/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c
+++ b/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c
@@ -173,7 +173,7 @@ int checkpoint_start(checkpoint_state* s, int fd,
 		     struct save_callbacks* callbacks,
 		     unsigned int remus_flags)
 {
-    int hvm, rc;
+    int hvm, rc, errnoval;
     int flags = XCFLAGS_LIVE;
     unsigned long vm_generationid_addr;
 
@@ -209,9 +209,11 @@ int checkpoint_start(checkpoint_state* s, int fd,
     rc = xc_domain_save(s->xch, fd, s->domid, 0, 0, flags, callbacks, hvm,
                         vm_generationid_addr);
 
+    errnoval = errno;
     if (hvm)
        switch_qemu_logdirty(s, 0);
 
+    errno = errnoval;
     return rc;
 }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 11:57:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 11:57:20 +0000
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Mon, 10 Feb 2014 12:57:05 +0100
Message-Id: <1392033425-1863-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] tools/xc: pass errno to callers of
	xc_domain_save

Callers of xc_domain_save use errno to print diagnostics if the call
fails, but xc_domain_save does not preserve the actual errno in case of
a failure.

This change preserves errno in all cases where the code jumps to the
label "out". In addition, a new label "exit" is added to also catch code
paths which used to simply "return 1".

Now libxl_save_helper:complete can print the actual error string. The
checkpoint code is also updated to pass errno on to its caller.

Note: some of the functions used in xc_domain_save do not set errno to
indicate the reason for a failure. In those cases errno remains
undefined, as it was before this change.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libxc/xc_domain_save.c                       | 88 ++++++++++++++++++++--
 .../python/xen/lowlevel/checkpoint/libcheckpoint.c |  4 +-
 2 files changed, 85 insertions(+), 7 deletions(-)

diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 42c4752..f32ac81 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -806,6 +806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     xc_dominfo_t info;
     DECLARE_DOMCTL;
 
+    int errnoval = 0;
     int rc = 1, frc, i, j, last_iter = 0, iter = 0;
     int live  = (flags & XCFLAGS_LIVE);
     int debug = (flags & XCFLAGS_DEBUG);
@@ -898,8 +899,8 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( hvm && !callbacks->switch_qemu_logdirty )
     {
         ERROR("No switch_qemu_logdirty callback provided.");
-        errno = EINVAL;
-        return 1;
+        errnoval = EINVAL;
+        goto exit;
     }
 
     outbuf_init(xch, &ob_pagebuf, OUTBUF_SIZE);
@@ -913,14 +914,16 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( !get_platform_info(xch, dom,
                             &ctx->max_mfn, &ctx->hvirt_start, &ctx->pt_levels, &dinfo->guest_width) )
     {
+        errnoval = errno;
         ERROR("Unable to get platform info.");
-        return 1;
+        goto exit;
     }
 
     if ( xc_domain_getinfo(xch, dom, 1, &info) != 1 )
     {
+        errnoval = errno;
         PERROR("Could not get domain info");
-        return 1;
+        goto exit;
     }
 
     shared_info_frame = info.shared_info_frame;
@@ -932,6 +935,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                            PROT_READ, shared_info_frame);
         if ( !live_shinfo )
         {
+            errnoval = errno;
             PERROR("Couldn't map live_shinfo");
             goto out;
         }
@@ -942,6 +946,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( dinfo->p2m_size > ~XEN_DOMCTL_PFINFO_LTAB_MASK )
     {
+        errnoval = E2BIG;
         ERROR("Cannot save this big a guest");
         goto out;
     }
@@ -967,6 +972,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             
             if ( frc < 0 )
             {
+                errnoval = errno;
                 PERROR("Couldn't enable shadow mode (rc %d) (errno %d)", frc, errno );
                 goto out;
             }
@@ -975,6 +981,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         /* Enable qemu-dm logging dirty pages to xen */
         if ( hvm && callbacks->switch_qemu_logdirty(dom, 1, callbacks->data) )
         {
+            errnoval = errno;
             PERROR("Couldn't enable qemu log-dirty mode (errno %d)", errno);
             goto out;
         }
@@ -985,6 +992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -994,6 +1002,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     {
         if (!(compress_ctx = xc_compression_create_context(xch, dinfo->p2m_size)))
         {
+            errnoval = errno;
             ERROR("Failed to create compression context");
             goto out;
         }
@@ -1012,6 +1021,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( !to_send || !to_fix || !to_skip )
     {
+        errnoval = ENOMEM;
         ERROR("Couldn't allocate to_send array");
         goto out;
     }
@@ -1024,12 +1034,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
         if ( hvm_buf_size == -1 )
         {
+            errnoval = errno;
             PERROR("Couldn't get HVM context size from Xen");
             goto out;
         }
         hvm_buf = malloc(hvm_buf_size);
         if ( !hvm_buf )
         {
+            errnoval = ENOMEM;
             ERROR("Couldn't allocate memory");
             goto out;
         }
@@ -1043,7 +1055,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( (pfn_type == NULL) || (pfn_batch == NULL) || (pfn_err == NULL) )
     {
         ERROR("failed to alloc memory for pfn_type and/or pfn_batch arrays");
-        errno = ENOMEM;
+        errnoval = ENOMEM;
         goto out;
     }
     memset(pfn_type, 0,
@@ -1052,6 +1064,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Setup the mfn_to_pfn table mapping */
     if ( !(ctx->live_m2p = xc_map_m2p(xch, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) )
     {
+        errnoval = errno;
         PERROR("Failed to map live M2P table");
         goto out;
     }
@@ -1059,6 +1072,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Start writing out the saved-domain record. */
     if ( write_exact(io_fd, &dinfo->p2m_size, sizeof(unsigned long)) )
     {
+        errnoval = errno;
         PERROR("write: p2m_size");
         goto out;
     }
@@ -1071,6 +1085,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ctx->live_p2m = map_and_save_p2m_table(xch, io_fd, dom, ctx, live_shinfo);
         if ( ctx->live_p2m == NULL )
         {
+            errnoval = errno;
             PERROR("Failed to map/save the p2m frame list");
             goto out;
         }
@@ -1097,12 +1112,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     tmem_saved = xc_tmem_save(xch, dom, io_fd, live, XC_SAVE_ID_TMEM);
     if ( tmem_saved == -1 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tmem)");
         goto out;
     }
 
     if ( !live && save_tsc_info(xch, dom, io_fd) < 0 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tsc)");
         goto out;
     }
@@ -1143,6 +1160,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     dinfo->p2m_size, NULL, 0, NULL);
                 if ( frc != dinfo->p2m_size )
                 {
+                    errnoval = errno;
                     ERROR("Error peeking shadow bitmap");
                     goto out;
                 }
@@ -1257,6 +1275,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 xch, dom, PROT_READ, pfn_type, pfn_err, batch);
             if ( region_base == NULL )
             {
+                errnoval = errno;
                 PERROR("map batch failed");
                 goto out;
             }
@@ -1264,6 +1283,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* Get page types */
             if ( xc_get_pfn_type_batch(xch, dom, batch, pfn_type) )
             {
+                errnoval = errno;
                 PERROR("get_pfn_type_batch failed");
                 goto out;
             }
@@ -1332,6 +1352,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
             if ( wrexact(io_fd, &batch, sizeof(unsigned int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (2)");
                 goto out;
             }
@@ -1341,6 +1362,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     ((unsigned long *)pfn_type)[j] = pfn_type[j];
             if ( wrexact(io_fd, pfn_type, sizeof(unsigned long)*batch) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (3)");
                 goto out;
             }
@@ -1368,6 +1390,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                        (char*)region_base+(PAGE_SIZE*(j-run)), 
                                        PAGE_SIZE*run) != PAGE_SIZE*run )
                         {
+                            errnoval = errno;
                             PERROR("Error when writing to state file (4a)"
                                   " (errno %d)", errno);
                             goto out;
@@ -1396,6 +1419,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                     if ( race && !live )
                     {
+                        errnoval = errno;
                         ERROR("Fatal PT race (pfn %lx, type %08lx)", pfn,
                               pagetype);
                         goto out;
@@ -1409,6 +1433,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                                         pfn, 1 /* raw page */);
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add pagetable page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1428,6 +1453,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                              */
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4b)\n");
                                 goto out;
@@ -1437,6 +1463,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     else if ( wruncached(io_fd, live, page,
                                          PAGE_SIZE) != PAGE_SIZE )
                     {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (4b)"
                               " (errno %d)", errno);
                         goto out;
@@ -1456,6 +1483,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1465,6 +1493,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                         {
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4c)\n");
                                 goto out;
@@ -1483,6 +1512,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                (char*)region_base+(PAGE_SIZE*(j-run)), 
                                PAGE_SIZE*run) != PAGE_SIZE*run )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (4c)"
                           " (errno %d)", errno);
                     goto out;
@@ -1520,6 +1550,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* send "-1" to put receiver into debug mode */
             if ( wrexact(io_fd, &id, sizeof(int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (6)");
                 goto out;
             }
@@ -1542,6 +1573,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( suspend_and_state(callbacks->suspend, callbacks->data,
                                        xch, io_fd, dom, &info) )
                 {
+                    errnoval = errno;
                     ERROR("Domain appears not to have suspended");
                     goto out;
                 }
@@ -1550,12 +1582,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( (tmem_saved > 0) &&
                      (xc_tmem_save_extra(xch,dom,io_fd,XC_SAVE_ID_TMEM_EXTRA) == -1) )
                 {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (tmem)");
                         goto out;
                 }
 
                 if ( save_tsc_info(xch, dom, io_fd) < 0 )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (tsc)");
                     goto out;
                 }
@@ -1567,6 +1601,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                    XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send),
                                    dinfo->p2m_size, NULL, 0, &shadow_stats) != dinfo->p2m_size )
             {
+                errnoval = errno;
                 PERROR("Error flushing shadow PT");
                 goto out;
             }
@@ -1598,6 +1633,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( info.max_vcpu_id >= XC_SR_MAX_VCPUS )
         {
+            errnoval = E2BIG;
             ERROR("Too many VCPUS in guest!");
             goto out;
         }
@@ -1614,6 +1650,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, &chunk, offsetof(struct chunk, vcpumap)
                      + vcpumap_sz(info.max_vcpu_id)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file");
             goto out;
         }
@@ -1633,6 +1670,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the generation id buffer location for guest");
             goto out;
         }
@@ -1645,6 +1683,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the ident_pt for EPT guest");
             goto out;
         }
@@ -1657,6 +1696,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the paging ring pfn for guest");
             goto out;
         }
@@ -1669,6 +1709,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the access ring pfn for guest");
             goto out;
         }
@@ -1681,6 +1722,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the sharing ring pfn for guest");
             goto out;
         }
@@ -1693,6 +1735,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the vm86 TSS for guest");
             goto out;
         }
@@ -1705,6 +1748,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the console pfn for guest");
             goto out;
         }
@@ -1716,6 +1760,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ((chunk.data != 0) && wrexact(io_fd, &chunk, sizeof(chunk)))
         {
+            errnoval = errno;
             PERROR("Error when writing the firmware ioport version");
             goto out;
         }
@@ -1728,6 +1773,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the viridian flag");
             goto out;
         }
@@ -1741,6 +1787,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( callbacks->toolstack_save(dom, &buf, &len, callbacks->data) < 0 )
         {
+            errnoval = errno;
             PERROR("Error calling toolstack_save");
             goto out;
         }
@@ -1759,6 +1806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_LAST_CHECKPOINT;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing last checkpoint chunk");
             goto out;
         }
@@ -1778,6 +1826,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_ENABLE_COMPRESSION;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing enable_compression marker");
             goto out;
         }
@@ -1787,6 +1836,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     i = 0;
     if ( wrexact(io_fd, &i, sizeof(int)) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (6')");
         goto out;
     }
@@ -1805,6 +1855,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                          (unsigned long *)&magic_pfns[2]);
         if ( wrexact(io_fd, magic_pfns, sizeof(magic_pfns)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (7)");
             goto out;
         }
@@ -1813,18 +1864,21 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (rec_size = xc_domain_hvm_getcontext(xch, dom, hvm_buf, 
                                                   hvm_buf_size)) == -1 )
         {
+            errnoval = errno;
             PERROR("HVM:Could not get hvm buffer");
             goto out;
         }
         
         if ( wrexact(io_fd, &rec_size, sizeof(uint32_t)) )
         {
+            errnoval = errno;
             PERROR("error write hvm buffer size");
             goto out;
         }
         
         if ( wrexact(io_fd, hvm_buf, rec_size) )
         {
+            errnoval = errno;
             PERROR("write HVM info failed!");
             goto out;
         }
@@ -1849,6 +1903,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( wrexact(io_fd, &j, sizeof(unsigned int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (6a)");
             goto out;
         }
@@ -1863,6 +1918,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             {
                 if ( wrexact(io_fd, &pfntab, sizeof(unsigned long)*j) )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (6b)");
                     goto out;
                 }
@@ -1873,6 +1929,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( xc_vcpu_getcontext(xch, dom, 0, &ctxt) )
     {
+        errnoval = errno;
         PERROR("Could not get vcpu context");
         goto out;
     }
@@ -1888,6 +1945,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     mfn = GET_FIELD(&ctxt, user_regs.edx);
     if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
     {
+        errnoval = ERANGE;
         ERROR("Suspend record is not in range of pseudophys map");
         goto out;
     }
@@ -1900,6 +1958,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( (i != 0) && xc_vcpu_getcontext(xch, dom, i, &ctxt) )
         {
+            errnoval = errno;
             PERROR("No context for VCPU%d", i);
             goto out;
         }
@@ -1910,6 +1969,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             mfn = GET_FIELD(&ctxt, gdt_frames[j]);
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
             {
+                errnoval = ERANGE;
                 ERROR("GDT frame is not in range of pseudophys map");
                 goto out;
             }
@@ -1920,6 +1980,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(
                                            GET_FIELD(&ctxt, ctrlreg[3]))) )
         {
+            errnoval = ERANGE;
             ERROR("PT base is not in range of pseudophys map");
             goto out;
         }
@@ -1931,6 +1992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         {
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(ctxt.x64.ctrlreg[1])) )
             {
+                errnoval = ERANGE;
                 ERROR("PT base is not in range of pseudophys map");
                 goto out;
             }
@@ -1943,6 +2005,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                         ? sizeof(ctxt.x64) 
                                         : sizeof(ctxt.x32))) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (1)");
             goto out;
         }
@@ -1953,11 +2016,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.ext_vcpucontext.vcpu = i;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No extended context for VCPU%d", i);
             goto out;
         }
         if ( wrexact(io_fd, &domctl.u.ext_vcpucontext, 128) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (2)");
             goto out;
         }
@@ -1971,6 +2036,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.vcpuextstate.size = 0;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             goto out;
         }
@@ -1982,6 +2048,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         buffer = xc_hypercall_buffer_alloc(xch, buffer, domctl.u.vcpuextstate.size);
         if ( !buffer )
         {
+            errnoval = errno;
             PERROR("Insufficient memory for getting eXtended states for"
                    "VCPU%d", i);
             goto out;
@@ -1989,6 +2056,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         set_xen_guest_handle(domctl.u.vcpuextstate.buffer, buffer);
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2000,6 +2068,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                      sizeof(domctl.u.vcpuextstate.size)) ||
              wrexact(io_fd, buffer, domctl.u.vcpuextstate.size) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file VCPU extended state");
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2015,6 +2084,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
               arch.pfn_to_mfn_frame_list_list, 0);
     if ( wrexact(io_fd, page, PAGE_SIZE) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (1)");
         goto out;
     }
@@ -2022,6 +2092,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Flush last write and check for errors. */
     if ( fsync(io_fd) && errno != EINVAL )
     {
+        errnoval = errno;
         PERROR("Error when flushing state file");
         goto out;
     }
@@ -2043,6 +2114,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ob = &ob_pagebuf;
         if (wrcompressed(io_fd) < 0)
         {
+            errnoval = errno;
             ERROR("Error when writing compressed data, after postcopy\n");
             rc = 1;
             goto out;
@@ -2051,6 +2123,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, ob_tailbuf.buf, ob_tailbuf.pos) )
         {
             rc = 1;
+            errnoval = errno;
             PERROR("Error when copying tailbuf into outbuf");
             goto out;
         }
@@ -2079,6 +2152,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -2130,7 +2204,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     free(hvm_buf);
     outbuf_free(&ob_pagebuf);
 
-    DPRINTF("Save exit of domid %u with rc=%d\n", dom, rc);
+exit:
+    DPRINTF("Save exit of domid %u with rc=%d, errno=%d\n", dom, rc, errnoval);
+    errno = errnoval;
 
     return !!rc;
 }
diff --git a/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c b/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c
index 01c0d47..270d5a3 100644
--- a/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c
+++ b/tools/python/xen/lowlevel/checkpoint/libcheckpoint.c
@@ -173,7 +173,7 @@ int checkpoint_start(checkpoint_state* s, int fd,
 		     struct save_callbacks* callbacks,
 		     unsigned int remus_flags)
 {
-    int hvm, rc;
+    int hvm, rc, errnoval;
     int flags = XCFLAGS_LIVE;
     unsigned long vm_generationid_addr;
 
@@ -209,9 +209,11 @@ int checkpoint_start(checkpoint_state* s, int fd,
     rc = xc_domain_save(s->xch, fd, s->domid, 0, 0, flags, callbacks, hvm,
                         vm_generationid_addr);
 
+    errnoval = errno;
     if (hvm)
        switch_qemu_logdirty(s, 0);
 
+    errno = errnoval;
     return rc;
 }
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:01:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpXo-0003SW-7z; Mon, 10 Feb 2014 12:01:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCpXm-0003SJ-An
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 12:00:58 +0000
Received: from [85.158.137.68:29260] by server-9.bemta-3.messagelabs.com id
	3E/52-10184-97FB8F25; Mon, 10 Feb 2014 12:00:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392033654!810745!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23804 invoked from network); 10 Feb 2014 12:00:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:00:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101276054"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 12:00:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:00:53 -0500
Message-ID: <1392033652.5117.54.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 12:00:52 +0000
In-Reply-To: <21240.48473.641987.485823@mariner.uk.xensource.com>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
	<21240.48473.641987.485823@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 11:51 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] mfi-common: Only override the pvops kernel repo for linux-arm-xen branch (Was: Re: [Xen-devel] [linux-linus test] 24817: regressions - FAIL)"):
> > On Sun, 2014-02-09 at 16:04 +0000, xen.org wrote:
> > > 
> > >  build-armhf-pvops             4 kernel-build                 fail   never pass 
> 
> Surely this is correct.  Linus's tip doesn't pass this test.
> 
> > + git checkout 494479038d97f1b9f76fc633a360a681acdf035c
> > fatal: reference is not a tree: 494479038d97f1b9f76fc633a360a681acdf035c
> > 
> > This is because it is using git://xenbits.xen.org/linux-pvops.git
> > instead of the tree it should be testing...
> 
> That's exactly what it should be using, isn't it ?  After all, in
> principle, eventually, Linus's tip should work.  In the meantime it
> doesn't and this is a real bug.

Linus' tip is git.kernel.org/...torvalds/xen.git. This linux-pvops.git
tree contains the output of this pushgate.

The error is that it appears to be looking in the output tree for the
new revision, which obviously only exists in the input tree.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:01:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:01:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpYD-0003Zc-Lz; Mon, 10 Feb 2014 12:01:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCpYC-0003Yf-Pl
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 12:01:25 +0000
Received: from [85.158.143.35:41928] by server-1.bemta-4.messagelabs.com id
	05/5F-31661-39FB8F25; Mon, 10 Feb 2014 12:01:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392033681!4498467!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2197 invoked from network); 10 Feb 2014 12:01:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:01:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101276147"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 12:01:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:01:20 -0500
Message-ID: <1392033679.5117.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Pasi =?ISO-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
Date: Mon, 10 Feb 2014 12:01:19 +0000
In-Reply-To: <20140210114801.GV2924@reaktio.net>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
	<20140210114801.GV2924@reaktio.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 13:48 +0200, Pasi Kärkkäinen wrote:
> On Mon, Feb 10, 2014 at 10:37:13AM +0000, Ian Campbell wrote:
> > On Sun, 2014-02-09 at 16:04 +0000, xen.org wrote:
> > > 
> > >  build-armhf-pvops             4 kernel-build                 fail   never pass 
> > 
> > + git checkout 494479038d97f1b9f76fc633a360a681acdf035c
> > fatal: reference is not a tree: 494479038d97f1b9f76fc633a360a681acdf035c
> > 
> > This is because it is using git://xenbits.xen.org/linux-pvops.git
> > instead of the tree it should be testing...
> > 
> > The following fixes it for me, but although the results are as I wanted
> > I'm not 100% sure about this override in the first place. In my
> > experiments with cr-daily-branch I see:
> > 
> >         Branch		$TREE_LINUX		$TREE_LINUX_ARM
> >         
> >         xen-unstable	pvops			pvops
> >         linux-linus	torvalds		pvops
> >         linux-arm-xen	arm-xen			arm-xen
> > 
> >         Key:
> >         [arm-xen] git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git
> >         [torvalds] git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
> 
> Isn't that the "old" repo (but a symlink to the new one)? The new one being:
> git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
> 

Presumably. I don't think it really matters here though.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:01:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpYY-0003eS-4W; Mon, 10 Feb 2014 12:01:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCpYX-0003eA-5l
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 12:01:45 +0000
Received: from [85.158.139.211:54723] by server-16.bemta-5.messagelabs.com id
	E4/3D-05060-8AFB8F25; Mon, 10 Feb 2014 12:01:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392033702!2877754!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8271 invoked from network); 10 Feb 2014 12:01:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:01:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99442339"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:01:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 07:01:41 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpYT-0002JO-84;
	Mon, 10 Feb 2014 12:01:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpYT-00070D-1m;
	Mon, 10 Feb 2014 12:01:41 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21240.49060.911482.728763@mariner.uk.xensource.com>
Date: Mon, 10 Feb 2014 12:01:40 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
References: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] icr-daily-branch: Make it possible
	to suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] icr-daily-branch: Make it possible to suppress the forcing of a baseline test"):
> This is undesirable (most of the time) in a standalone environment, where you
> are most likely to be interested in the current version and not historical
> comparisons.
> 
> Not sure there isn't a better way.

I think this is fine, although it still runs sg-check-tested
unnecessarily.

> -if [ "x$testedflight" = x ]; then
> +if [ "${OSSTEST_NO_BASELINE:-n}" != "y" -a "x$testedflight" = x ]; then

I do have a stylistic quibble: this seems quite fiddly syntax and not
the way that things are done elsewhere.  I would suggest doing it like
OSSTEST_IGNORE_STOP in cri-args-hostlists:

> +if [ "x$OSSTEST_NO_BASELINE" != xy -a "x$testedflight" = x ]; then

or similar earlier if you want to make check_tested conditional.

The thing with the x prefix is to make it clear that the arguments are
being parsed correctly (ie that [ is going to parse its arguments
how we hope it to).  Without something like the "x" trick there are
some cases where it is very complicated to reason about ['s argument
parsing, and it is IMO simpler to always deploy the x than to prove in
each case that the parsing is predictable.
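The idiom above can be illustrated with a minimal standalone sketch (reusing the variable names from the patch under discussion; the values assigned here are made up). Prefixing both operands with a literal x guarantees neither side can ever look to [ like an operator or a parenthesis, whatever the variables expand to:

```shell
#!/bin/sh
# The "x" prefix idiom.  Without it, a value such as "(" or "-a" can
# make [ 's argument parsing ambiguous, especially when conditions are
# combined with -a; with the prefix, both operands always begin with a
# harmless literal character.
OSSTEST_NO_BASELINE=""
testedflight=""
if [ "x$OSSTEST_NO_BASELINE" != xy -a "x$testedflight" = x ]; then
    baseline_needed=y
else
    baseline_needed=n
fi
echo "baseline_needed=$baseline_needed"    # prints "baseline_needed=y"
```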

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:01:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpYY-0003eS-4W; Mon, 10 Feb 2014 12:01:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCpYX-0003eA-5l
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 12:01:45 +0000
Received: from [85.158.139.211:54723] by server-16.bemta-5.messagelabs.com id
	E4/3D-05060-8AFB8F25; Mon, 10 Feb 2014 12:01:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392033702!2877754!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8271 invoked from network); 10 Feb 2014 12:01:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:01:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99442339"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:01:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 07:01:41 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpYT-0002JO-84;
	Mon, 10 Feb 2014 12:01:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpYT-00070D-1m;
	Mon, 10 Feb 2014 12:01:41 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21240.49060.911482.728763@mariner.uk.xensource.com>
Date: Mon, 10 Feb 2014 12:01:40 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
References: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] icr-daily-branch: Make it possible
	to suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] icr-daily-branch: Make it possible to suppress the forcing of a baseline test"):
> This is undesirable (most of the time) in a standalone environment, where you
> are mostl ikely to be interested in the current version and not historical
> comparissons.
> 
> Not sure there isn't a better way.

I think this is fine, although it still runs sg-check-tested
unnecessarily.

> -if [ "x$testedflight" = x ]; then
> +if [ "${OSSTEST_NO_BASELINE:-n}" != "y" -a "x$testedflight" = x ]; then

I do have a stylistic quibble: this seems quite fiddly syntax and not
the way that things are done elsewhere.  I would suggest doing it like
OSSTEST_IGNORE_STOP in cri-args-hostlists:

> +if [ "x$OSSTEST_NO_BASELINE" != xy -a "x$testedflight" = x ]; then

or similar earlier if you want to make check_tested conditional.

The thing with the x prefix is to make it clear that the arguments are
being parsed correctly (ie that [ is going to parse its arguments
the way we hope).  Without something like the "x" trick there are
some cases where it is very complicated to reason about ['s argument
parsing, and it is IMO simpler to always deploy the x than to prove in
each case that the parsing is predictable.
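
The x-prefix point above can be sketched concretely; this is an
illustrative standalone fragment, not code from osstest (the variable
values are made up):

```shell
#!/bin/sh
# With some test(1) implementations, an operand that itself looks like
# an operator (e.g. "!", "(", "=") can derail [ 's argument parsing.
flag="!"
# Prefixing both operands with "x" forces each to be an ordinary
# non-empty word, so [ reliably sees:  [ STRING != STRING ]
if [ "x$flag" != xy ]; then
    echo "not y"
fi
# Applied to the patch under discussion: force the baseline test only
# when OSSTEST_NO_BASELINE is not set to y.
OSSTEST_NO_BASELINE=y
if [ "x$OSSTEST_NO_BASELINE" != xy ]; then
    echo "would force baseline test"
else
    echo "baseline suppressed"
fi
```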

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:05:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpcI-00047t-2m; Mon, 10 Feb 2014 12:05:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCpcF-00047k-S3
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 12:05:36 +0000
Received: from [193.109.254.147:59097] by server-15.bemta-14.messagelabs.com
	id 7F/17-10839-F80C8F25; Mon, 10 Feb 2014 12:05:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392033933!3220287!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16488 invoked from network); 10 Feb 2014 12:05:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:05:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101277111"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 12:05:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 07:05:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpcC-0002pv-Ab;
	Mon, 10 Feb 2014 12:05:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpcC-00070y-4E;
	Mon, 10 Feb 2014 12:05:32 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21240.49291.804800.272953@mariner.uk.xensource.com>
Date: Mon, 10 Feb 2014 12:05:31 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392033652.5117.54.camel@kazak.uk.xensource.com>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
	<21240.48473.641987.485823@mariner.uk.xensource.com>
	<1392033652.5117.54.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH OSSTEST] mfi-common: Only override the pvops kernel repo for linux-arm-xen branch (Was: Re: [Xen-devel] [linux-linus test] 24817: regressions - FAIL)"):
> Linus' tip is git.kernel.org/...torvalds/xen.git. This linux-pvops.git
> tree contains the output of this pushgate.
> 
> The error is that it appears to be looking in the output tree for the
> new revision, which obviously only exists in the input tree.

Oh, I see, yes.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Unless you have more bugfixes in the pipeline, you should probably
push that right away so that it doesn't get entangled with the
forthcoming change to use FreeBSD RELEASE instead (as prompted by
Roger).

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:10:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:10:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCphH-0004Ry-U8; Mon, 10 Feb 2014 12:10:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCphG-0004Rt-QO
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 12:10:47 +0000
Received: from [85.158.137.68:13860] by server-13.bemta-3.messagelabs.com id
	89/AF-26923-6C1C8F25; Mon, 10 Feb 2014 12:10:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392034243!824256!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18656 invoked from network); 10 Feb 2014 12:10:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:10:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99444854"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:10:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:10:42 -0500
Message-ID: <1392034241.5117.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 12:10:41 +0000
In-Reply-To: <21240.49291.804800.272953@mariner.uk.xensource.com>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
	<21240.48473.641987.485823@mariner.uk.xensource.com>
	<1392033652.5117.54.camel@kazak.uk.xensource.com>
	<21240.49291.804800.272953@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 12:05 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH OSSTEST] mfi-common: Only override the pvops kernel repo for linux-arm-xen branch (Was: Re: [Xen-devel] [linux-linus test] 24817: regressions - FAIL)"):
> > Linus' tip is git.kernel.org/...torvalds/xen.git. This linux-pvops.git
> > tree contains the output of this pushgate.
> > 
> > The error is that it appears to be looking in the output tree for the
> > new revision, which obviously only exists in the input tree.
> 
> Oh, I see, yes.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Unless you have more bugfixes in the pipeline, you should probably
> push that right away so that it doesn't get entangled with the
> forthcoming change to use FreeBSD RELEASE instead (as prompted by
> Roger).

I don't have anything else pending[0], but I'm about to borrow one of
the marilith machines and do some local manual runs to get a quicker
turnaround on this stuff. I wouldn't expect that to result in any
patches before tomorrow though.

Ian.

[0] actually not quite true, I have a bunch of older minor patches:
        [PATCH OSSTEST] README: Add some core concepts and terminology
        [PATCH OSSTEST] standalone: add blessing to flights table.
        [PATCH OSSTEST] cri-args-hostlists: Allow environment to control OSSTEST_CONFIG

But these aren't urgent or tied up in anything else.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:13:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpjp-0004Yo-I0; Mon, 10 Feb 2014 12:13:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCpjn-0004Yh-P5
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 12:13:23 +0000
Received: from [85.158.143.35:58916] by server-3.bemta-4.messagelabs.com id
	EC/26-11539-362C8F25; Mon, 10 Feb 2014 12:13:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392034401!4504289!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4794 invoked from network); 10 Feb 2014 12:13:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:13:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101278786"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 12:13:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 07:13:15 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpjf-0002sJ-D1;
	Mon, 10 Feb 2014 12:13:15 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpjf-00076S-47;
	Mon, 10 Feb 2014 12:13:15 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 10 Feb 2014 12:13:12 +0000
Message-ID: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 make-flight |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/make-flight b/make-flight
index 056fc7a..033c3f0 100755
--- a/make-flight
+++ b/make-flight
@@ -108,7 +108,7 @@ do_freebsd_tests () {
  job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
                         test-freebsd xl $xenarch $dom0arch \
                         freebsd_arch=$freebsdarch \
- freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
+ freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20140116-r260789.qcow2.xz} \
                         all_hostflags=$most_hostflags
 
   done
-- 
1.7.10.4
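
The `${VAR-default}` forms in the hunk above are POSIX default-value
parameter expansions. A minimal sketch of how the replaced line
composes the image name (values taken from the patch; the surrounding
make-flight context is assumed):

```shell
#!/bin/sh
# ${VAR-default} substitutes "default" only when VAR is unset;
# ${VAR:-default} would also substitute when VAR is set but empty.
unset FREEBSD_IMAGE_PREFIX FREEBSD_IMAGE_SUFFIX
freebsdarch=amd64
image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20140116-r260789.qcow2.xz}
echo "$image"   # FreeBSD-10.0-RELEASE-amd64-20140116-r260789.qcow2.xz

# Setting the variable in the environment overrides the baked-in default,
# e.g. to keep testing the old BETA3 image:
FREEBSD_IMAGE_PREFIX=FreeBSD-10.0-BETA3-
image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-RELEASE-}$freebsdarch
echo "$image"   # FreeBSD-10.0-BETA3-amd64
```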


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:14:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpl1-0004kY-20; Mon, 10 Feb 2014 12:14:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCpky-0004kL-TI
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 12:14:37 +0000
Received: from [85.158.143.35:12745] by server-2.bemta-4.messagelabs.com id
	7B/BD-10891-CA2C8F25; Mon, 10 Feb 2014 12:14:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392034474!4506699!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18951 invoked from network); 10 Feb 2014 12:14:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:14:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99445666"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:14:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:14:33 -0500
Message-ID: <1392034472.5117.65.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 12:14:32 +0000
In-Reply-To: <21240.49060.911482.728763@mariner.uk.xensource.com>
References: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
	<21240.49060.911482.728763@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] icr-daily-branch: Make it possible
 to suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 12:01 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] icr-daily-branch: Make it possible to suppress the forcing of a baseline test"):
> > This is undesirable (most of the time) in a standalone environment, where you
> > are most likely to be interested in the current version and not historical
> > comparisons.
> > 
> > Not sure there isn't a better way.
> 
> I think this is fine, although it still runs sg-check-tested
> unnecessarily.
> 
> > -if [ "x$testedflight" = x ]; then
> > +if [ "${OSSTEST_NO_BASELINE:-n}" != "y" -a "x$testedflight" = x ]; then
> 
> I do have a stylistic quibble: this seems quite fiddly syntax and not
> the way that things are done elsewhere.  I would suggest doing it like
> OSSTEST_IGNORE_STOP in cri-args-hostlists:
> 
> > +if [ "x$OSSTEST_NO_BASELINE" != xy -a "x$testedflight" = x ]; then
> 
> or similar earlier if you want to make check_tested conditional.
> 
> The thing with the x prefix is to make it clear that the arguments are
> > being parsed correctly (ie that [ is going to parse its arguments
> > the way we hope).  Without something like the "x" trick there are
> some cases where it is very complicated to reason about ['s argument
> parsing, and it is IMO simpler to always deploy the x than to prove in
> each case that the parsing is predictable.

Ack. I'll make a change along those lines, and have a look at
check_tested too.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:14:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpl1-0004kY-20; Mon, 10 Feb 2014 12:14:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCpky-0004kL-TI
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 12:14:37 +0000
Received: from [85.158.143.35:12745] by server-2.bemta-4.messagelabs.com id
	7B/BD-10891-CA2C8F25; Mon, 10 Feb 2014 12:14:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392034474!4506699!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18951 invoked from network); 10 Feb 2014 12:14:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:14:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99445666"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:14:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:14:33 -0500
Message-ID: <1392034472.5117.65.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 12:14:32 +0000
In-Reply-To: <21240.49060.911482.728763@mariner.uk.xensource.com>
References: <1392030735-23710-1-git-send-email-ian.campbell@citrix.com>
	<21240.49060.911482.728763@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] icr-daily-branch: Make it possible
 to suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 12:01 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] icr-daily-branch: Make it possible to suppress the forcing of a baseline test"):
> > This is undesirable (most of the time) in a standalone environment, where you
> > are most likely to be interested in the current version and not historical
> > comparisons.
> > 
> > Not sure there isn't a better way.
> 
> I think this is fine, although it still runs sg-check-tested
> unnecessarily.
> 
> > -if [ "x$testedflight" = x ]; then
> > +if [ "${OSSTEST_NO_BASELINE:-n}" != "y" -a "x$testedflight" = x ]; then
> 
> I do have a stylistic quibble: this seems quite fiddly syntax and not
> the way that things are done elsewhere.  I would suggest doing it like
> OSSTEST_IGNORE_STOP in cri-args-hostlists:
> 
> > +if [ "x$OSSTEST_NO_BASELINE" != xy -a "x$testedflight" = x ]; then
> 
> or similar earlier if you want to make check_tested conditional.
> 
> The thing with the x prefix is to make it clear that the arguments are
> being parsed correctly (ie that [ is going to parse its arguments
> how we hope it to).  Without something like the "x" trick there are
> some cases where it is very complicated to reason about ['s argument
> parsing, and it is IMO simpler to always deploy the x than to prove in
> each case that the parsing is predictable.
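
The two idioms under discussion, the ${VAR:-default} expansion and the "x" prefix for [ comparisons, can be sketched as follows (the variable values here are illustrative, not taken from the patch):

```shell
#!/bin/sh
# Illustrative only; values are hypothetical.

# ${VAR:-default}: substitute "n" when OSSTEST_NO_BASELINE is unset or empty.
unset OSSTEST_NO_BASELINE
echo "no-baseline: ${OSSTEST_NO_BASELINE:-n}"

# The "x" prefix keeps [ from seeing an operand that looks like one of its
# own operators (e.g. "-a", "!", "("), so the comparison stays unambiguous.
testedflight="-a"
if [ "x$testedflight" = "x-a" ]; then
    echo "matched safely"
fi
```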

Ack. I'll make a change along those lines, and have a look at
check_tested too.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:19:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCppb-0005Qd-TE; Mon, 10 Feb 2014 12:19:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WCppa-0005Ph-O7
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 12:19:22 +0000
Received: from [85.158.139.211:5504] by server-12.bemta-5.messagelabs.com id
	34/BC-15415-AC3C8F25; Mon, 10 Feb 2014 12:19:22 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392034752!2883220!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5751 invoked from network); 10 Feb 2014 12:19:20 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 12:19:20 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WCppO-0004Zp-KP; Mon, 10 Feb 2014 12:19:10 +0000
Date: Mon, 10 Feb 2014 13:19:10 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140210121910.GA1370@deinos.phlegethon.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 11:17 +0000 on 10 Feb (1392027468), Andrew Cooper wrote:
> The current code has a pathological case, tickled by the access pattern of
> Windows 2003 Server SP2.  Occasionally on boot (which I presume is during a
> time calibration against the RTC Periodic Timer), Windows gets stuck in an
> infinite loop reading RTC REG_C.  This affects 32 and 64 bit guests.
> 
> In the pathological case, the VM state looks like this:
>   * RTC: 64Hz period, periodic interrupts enabled
>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
> 
> With an instrumented Xen, dumping the periodic timers with a guest in this
> state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.
> 
> Windows is presumably waiting for reads of REG_C to drop to 0

s/presumably/definitely/; we disassembled the kernel code in question.

>, and reading
> REG_C clears the value each time in the emulated RTC.  However:
> 
>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>   * pt_update_irq() always finds the RTC as earliest_pt.

This is, AFAICT, because we are in no-missed-ticks mode, and there are
multiple RTC ticks queued waiting to be delivered.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:20:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpqm-0005bA-Hm; Mon, 10 Feb 2014 12:20:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WCpqk-0005ax-IM
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 12:20:34 +0000
Received: from [193.109.254.147:55203] by server-10.bemta-14.messagelabs.com
	id 2A/6A-10711-114C8F25; Mon, 10 Feb 2014 12:20:33 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392034832!3254718!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20915 invoked from network); 10 Feb 2014 12:20:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:20:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99446734"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:20:31 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:20:31 -0500
Message-ID: <52F8C40E.7010707@citrix.com>
Date: Mon, 10 Feb 2014 13:20:30 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>, <xen-devel@lists.xenproject.org>
References: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:13, Ian Jackson wrote:
> ---
>  make-flight |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/make-flight b/make-flight
> index 056fc7a..033c3f0 100755
> --- a/make-flight
> +++ b/make-flight
> @@ -108,7 +108,7 @@ do_freebsd_tests () {
>   job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
>                          test-freebsd xl $xenarch $dom0arch \
>                          freebsd_arch=$freebsdarch \
> - freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-BETA3-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20131103-r257580.qcow2.xz} \
> + freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20140116-r260789.qcow2.xz} \
>                          all_hostflags=$most_hostflags
>  
>    done

Thanks for the patch. I think it's missing the following chunk:

---
diff --git a/ts-freebsd-install b/ts-freebsd-install
index 6c6abbe..72542c2 100755
--- a/ts-freebsd-install
+++ b/ts-freebsd-install
@@ -36,7 +36,7 @@ our $gho;

 our $mnt= '/root/freebsd_root';

-our $freebsd_version= "10.0-BETA3";
+our $freebsd_version= "10.0-RELEASE";

 # Folder where the FreeBSD VM images are stored inside of the host
 #
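
As a side note, the freebsd_image value in the quoted make-flight hunk is assembled with ${VAR-default} expansions. A minimal sketch of how the name is built when neither override variable is set (the arch value is chosen for illustration):

```shell
#!/bin/sh
# Illustrative reconstruction of the image-name assembly from the quoted
# make-flight hunk; freebsdarch is set to an example value.
unset FREEBSD_IMAGE_PREFIX FREEBSD_IMAGE_SUFFIX
freebsdarch=amd64

# ${VAR-default} uses the default only when VAR is unset
# (unlike ${VAR:-default}, which also replaces an empty value).
freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.0-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20140116-r260789.qcow2.xz}
echo "$freebsd_image"
# With both variables unset this prints:
#   FreeBSD-10.0-RELEASE-amd64-20140116-r260789.qcow2.xz
```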



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:27:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCpwy-0006EW-2u; Mon, 10 Feb 2014 12:27:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCpww-0006EN-BM
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 12:26:58 +0000
Received: from [85.158.143.35:24692] by server-3.bemta-4.messagelabs.com id
	FC/72-11539-195C8F25; Mon, 10 Feb 2014 12:26:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392035215!4510871!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12697 invoked from network); 10 Feb 2014 12:26:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:26:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="99447778"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 12:26:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 07:26:55 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpws-0002wU-OS;
	Mon, 10 Feb 2014 12:26:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCpws-0007Fb-GJ;
	Mon, 10 Feb 2014 12:26:54 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21240.50574.203262.432094@mariner.uk.xensource.com>
Date: Mon, 10 Feb 2014 12:26:54 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
In-Reply-To: <52F8C40E.7010707@citrix.com>
References: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F8C40E.7010707@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monné writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
> Thanks for the patch. I think it's missing the following chunk:
...
> -our $freebsd_version= "10.0-BETA3";
> +our $freebsd_version= "10.0-RELEASE";

Oh.  Err, why is this hardcoded in the script ?  Changing the
runvar(s) ought to be sufficient.

... (looks at the code) ...

Oh, I see, that's just the default.  Perhaps the default should be
removed entirely ?  None of the other scripts have a default image
filename.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:38:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCq7z-00071L-Kg; Mon, 10 Feb 2014 12:38:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WCq7z-00071G-3p
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 12:38:23 +0000
Received: from [85.158.139.211:47386] by server-2.bemta-5.messagelabs.com id
	B0/11-23037-E38C8F25; Mon, 10 Feb 2014 12:38:22 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392035900!2884505!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14409 invoked from network); 10 Feb 2014 12:38:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101283362"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 12:38:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:38:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WCq7v-0008Ud-0q;
	Mon, 10 Feb 2014 12:38:19 +0000
Message-ID: <52F8C834.5070104@eu.citrix.com>
Date: Mon, 10 Feb 2014 12:38:12 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1391809911-13610-1-git-send-email-dslutz@verizon.com>
	<1391809911-13610-2-git-send-email-dslutz@verizon.com>
In-Reply-To: <1391809911-13610-2-git-send-email-dslutz@verizon.com>
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	David Scott <dave.scott@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/1] xenlight_stubs.c: Allow it to
 build with ocaml 3.09.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 09:51 PM, Don Slutz wrote:
> This code was copied from:
>
> http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
>
> Signed-off-by: Don Slutz <dslutz@verizon.com>

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
>   tools/ocaml/libs/xl/xenlight_stubs.c | 13 +++++++++++++
>   1 file changed, 13 insertions(+)
>
> diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
> index 23f253a..8e825ae 100644
> --- a/tools/ocaml/libs/xl/xenlight_stubs.c
> +++ b/tools/ocaml/libs/xl/xenlight_stubs.c
> @@ -35,6 +35,19 @@
>   
>   #include "caml_xentoollog.h"
>   
> +/*
> + * Starting with ocaml-3.09.3, CAMLreturn can only be used for ``value''
> + * types. CAMLreturnT was only added in 3.09.4, so we define our own
> + * version here if needed.
> + */
> +#ifndef CAMLreturnT
> +#define CAMLreturnT(type, result) do { \
> +    type caml__temp_result = (result); \
> +    caml_local_roots = caml__frame; \
> +    return (caml__temp_result); \
> +} while (0)
> +#endif
> +
>   /* The following is equal to the CAMLreturn macro, but without the return */
>   #define CAMLdone do{ \
>   caml_local_roots = caml__frame; \


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 12:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 12:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCq9v-00078t-6g; Mon, 10 Feb 2014 12:40:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WCq9u-00078n-6G
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 12:40:22 +0000
Received: from [85.158.137.68:62010] by server-13.bemta-3.messagelabs.com id
	75/7D-26923-5B8C8F25; Mon, 10 Feb 2014 12:40:21 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392036019!824254!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6833 invoked from network); 10 Feb 2014 12:40:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 12:40:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101283776"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 12:40:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 07:40:17 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WCq9p-0008WB-Vv;
	Mon, 10 Feb 2014 12:40:17 +0000
Message-ID: <52F8C8AB.9000003@eu.citrix.com>
Date: Mon, 10 Feb 2014 12:40:11 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, "Zhang, Yang Z"
	<yang.z.zhang@intel.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
	<20140207154128.GE3605@phenom.dumpdata.com>
In-Reply-To: <20140207154128.GE3605@phenom.dumpdata.com>
X-DLP: MIA1
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 03:41 PM, Konrad Rzeszutek Wilk wrote:
> On Fri, Feb 07, 2014 at 02:28:07AM +0000, Zhang, Yang Z wrote:
>> Konrad Rzeszutek Wilk wrote on 2014-02-05:
>>> On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
>>>> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
>>>>> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
>>>>>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
>>> <konrad.wilk@oracle.com> wrote:
>>>>>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>>>>>>>> Wasn't it that Mukesh's patch simply was yours with the two
>>>>>>>> get_ioreq()s folded by using a local variable?
>>>>>>> Yes. As so
>>>>>> Thanks. Except that ...
>>>>>>
>>>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>>>>>>>       struct vcpu *v = current;
>>>>>>>       struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>>>       struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>>>> -
>>>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>> ... you don't want to drop the blank line, and naming the new
>>>>>> variable "ioreq" would seem preferable.
>>>>>>
>>>>>>>       /*
>>>>>>>        * a pending IO emualtion may still no finished. In this case,
>>>>>>>        * no virtual vmswith is allowed. Or else, the following IO
>>>>>>>        * emulation will handled in a wrong VCPU context.
>>>>>>>        */
>>>>>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>>>>>>> +    if ( p && p->state != STATE_IOREQ_NONE )
>>>>>> And, as said before, I'd think "!p ||" instead of "p &&" would be
>>>>>> the right thing here. Yang, Jun?
>>>>> I have two patches - a simpler, pretty straightforward one, and
>>>>> the one you suggested. Either one fixes PVH
>>>>> guests. I also did bootup tests with HVM guests to make sure they worked.
>>>>>
>>>>> Attached and inline.
>> Sorry for the late response. I just got back from the Chinese New Year holiday.
>>
>>>> But they do different things -- one does "ioreq && ioreq->state..."
>>> Correct.
>>>> and the other does "!ioreq || ioreq->state...".  The first one is
>>>> incorrect, AFAICT.
>>> Both of them fix the hypervisor blowing up with any PVH guest.
>> Both fixes look right to me.
>> The only question is what we want to do here:
>> "ioreq && ioreq->state..." will only let a VCPU that supports the IO request emulation mechanism (currently an HVM VCPU) continue with the nested check.
>> And "!ioreq || ioreq->state..." will also bail out for a VCPU that doesn't support the IO request emulation mechanism (currently a PVH VCPU).
>>
>> My original patch only meant to let an HVM VCPU with no pending IO request continue with the nested check, not to use the check to distinguish HVM from PVH. So I'd prefer to let only an HVM VCPU get here, as Jan mentioned before that a non-HVM domain should never call nested-related functions at all unless it also supports nesting.
> So it sounds like you prefer the #2 patch.
>
> Can I stick Acked-by on it?
>
>
>  From d76fc0d2f59ac65bd692adfa5f215da9ecf85d6a Mon Sep 17 00:00:00 2001
> From: Mukesh Rathor <mukesh.rathor@oracle.com>
> Date: Mon, 3 Feb 2014 11:45:52 -0500
> Subject: [PATCH] pvh: Fix regression due to assumption that HVM paths MUST
>   use io-backend device.
>
> The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
> "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
> assumes that the HVM paths are only taken by HVM guests. With PVH
> enabled that is no longer the case, which means that the
> IO-backend device (QEMU) need not be present at all.
>
> As such, that patch can crash the hypervisor:
>
> Xen call trace:
>      [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
>      [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
>
> Pagetable walk from 000000000000001e:
>    L4[0x000] = 0000000000000000 ffffffffffffffff
>
> ****************************************
> Panic on CPU 7:
> FATAL PAGE FAULT
> [error_code=0000]
> Faulting linear address: 000000000000001e
> ****************************************
>
> as we do not have an io-based backend. In the case that the
> PVH guest itself runs an HVM guest inside, we need to do
> further work to support this, and for now the check will
> bail us out.
>
> We also fix spelling mistakes and the sentence structure.
>
> CC: Yang Zhang <yang.z.zhang@Intel.com>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> Suggested-by: Jan Beulich <JBeulich@suse.com>
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

I forget if I release-acked this yet, but just in case:

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
>   xen/arch/x86/hvm/vmx/vvmx.c |   10 +++++++---
>   1 files changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index d2ba435..71522cf 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -1394,13 +1394,17 @@ void nvmx_switch_guest(void)
>       struct vcpu *v = current;
>       struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>       struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    ioreq_t *ioreq = get_ioreq(v);
>   
>       /*
> -     * a pending IO emualtion may still no finished. In this case,
> +     * A pending IO emulation may still be not finished. In this case,
>        * no virtual vmswith is allowed. Or else, the following IO
> -     * emulation will handled in a wrong VCPU context.
> +     * emulation will be handled in a wrong VCPU context. If there are
> +     * no IO backends - PVH guest by itself or a PVH guest with an HVM guest
> +     * running inside - we don't want to continue as this setup is not
> +     * implemented nor supported as of right now.
>        */
> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
> +    if ( !ioreq || ioreq->state != STATE_IOREQ_NONE )
>           return;
>       /*
>        * a softirq may interrupt us between a virtual vmentry is


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:23:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:23:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCqpY-00014B-7m; Mon, 10 Feb 2014 13:23:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCqpX-000146-4U
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:23:23 +0000
Received: from [85.158.139.211:12888] by server-15.bemta-5.messagelabs.com id
	CB/18-24395-AC2D8F25; Mon, 10 Feb 2014 13:23:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392038600!2896939!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15516 invoked from network); 10 Feb 2014 13:23:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:23:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101292359"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:23:19 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:23:18 -0500
Message-ID: <52F8D2C5.6040507@citrix.com>
Date: Mon, 10 Feb 2014 13:23:17 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Paul Bolle <pebolle@tiscali.nl>
References: <1391898943.27190.5.camel@x220>
In-Reply-To: <1391898943.27190.5.camel@x220>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Tony Luck <tony.luck@intel.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] ia64/xen: Remove Xen support for ia64 even
 more
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/02/14 22:35, Paul Bolle wrote:
> Commit d52eefb47d4e ("ia64/xen: Remove Xen support for ia64") removed
> the Kconfig symbol XEN_XENCOMM. But it didn't remove the code depending
> on that symbol. Remove that code now.

Acked-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:28:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:28:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCquI-0001CN-3J; Mon, 10 Feb 2014 13:28:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCquG-0001CF-Q9
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:28:17 +0000
Received: from [193.109.254.147:61554] by server-1.bemta-14.messagelabs.com id
	1B/58-15438-0F3D8F25; Mon, 10 Feb 2014 13:28:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392038894!3269672!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 719 invoked from network); 10 Feb 2014 13:28:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:28:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,817,1384300800"; d="scan'208";a="101293015"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:27:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:27:09 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCqtB-0000hT-C8;
	Mon, 10 Feb 2014 13:27:09 +0000
Date: Mon, 10 Feb 2014 13:26:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Rashika Kheria <rashika.kheria@gmail.com>
In-Reply-To: <98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
Message-ID: <alpine.DEB.2.02.1402101326460.4373@kaball.uk.xensource.com>
References: <3826f170dae13e3ee700ad9e26a8c88be4b7aaa2.1391943416.git.rashika.kheria@gmail.com>
	<98aa418fd8664807c4bde1ff7cf2cd5969749129.1391943416.git.rashika.kheria@gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1702294038-1392038813=:4373"
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, x86@kernel.org, linux-kernel@vger.kernel.org,
	josh@joshtriplett.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 4/4] drivers: xen: Move prototype
 declaration to header file include/xen/xen-ops.h from
 arch/x86/xen/xen-ops.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1702294038-1392038813=:4373
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Sun, 9 Feb 2014, Rashika Kheria wrote:
> Move prototype declaration to header file include/xen/xen-ops.h from
> arch/x86/xen/xen-ops.h because it is used by more than one file. Also,
> remove else condition from xen/events/events_base.c to eliminate
> conflicting definitions when CONFIG_XEN_PVHVM is not defined.
> 
> This eliminates the following warning in xen/events/events_base.c:
> drivers/xen/events/events_base.c:1640:6: warning: no previous prototype for ‘xen_callback_vector’ [-Wmissing-prototypes]
> 
> Signed-off-by: Rashika Kheria <rashika.kheria@gmail.com>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>  arch/x86/xen/xen-ops.h           |    1 -
>  drivers/xen/events/events_base.c |    2 --
>  include/xen/xen-ops.h            |    7 +++++++
>  3 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
> index e5edc7f..aa8a979 100644
> --- a/arch/x86/xen/xen-ops.h
> +++ b/arch/x86/xen/xen-ops.h
> @@ -39,7 +39,6 @@ void xen_enable_sysenter(void);
>  void xen_enable_syscall(void);
>  void xen_vcpu_restore(void);
>  
> -void xen_callback_vector(void);
>  void xen_hvm_init_shared_info(void);
>  void xen_unplug_emulated_devices(void);
>  
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index 4672e00..5466543 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -1656,8 +1656,6 @@ void xen_callback_vector(void)
>  					xen_hvm_callback_vector);
>  	}
>  }
> -#else
> -void xen_callback_vector(void) {}
>  #endif
>  
>  #undef MODULE_PARAM_PREFIX
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 9a86337..cdea45b 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -38,4 +38,11 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
>  
>  irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
> +
> +#ifdef CONFIG_XEN_PVHVM
> +void xen_callback_vector(void);
> +#else
> +static inline void xen_callback_vector(void) {}
> +#endif
> +
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 1.7.9.5
> 
--1342847746-1702294038-1392038813=:4373
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1702294038-1392038813=:4373--


From xen-devel-bounces@lists.xen.org Mon Feb 10 13:36:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCr20-0001ia-9i; Mon, 10 Feb 2014 13:36:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCr1y-0001iV-62
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 13:36:14 +0000
Received: from [85.158.139.211:9398] by server-5.bemta-5.messagelabs.com id
	2E/2E-32749-DC5D8F25; Mon, 10 Feb 2014 13:36:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392039371!2891967!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2648 invoked from network); 10 Feb 2014 13:36:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:36:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101295009"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:36:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 08:36:10 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WCr1u-0003J8-47;
	Mon, 10 Feb 2014 13:36:10 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Mon, 10 Feb 2014 13:36:10 +0000
Message-ID: <1392039370-26216-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST v2] cr-daily-branch: Make it possible to
	suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is undesirable (most of the time) in a standalone environment, where you
are most likely to be interested in the current version and not historical
comparisons.

Not sure there isn't a better way.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: Remove spurious leading "i" from subject (damn you vi!)
    Use safer test conditional/more obvious syntax
    Remove unneeded call to check_tested as well.
---
 cr-daily-branch | 26 ++++++++++++++------------
 1 file changed, 14 insertions(+), 12 deletions(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index da6cf2f..c4a0872 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -85,18 +85,20 @@ check_tested () {
 	  "$@"
 }
 
-testedflight=`check_tested --revision-$tree="$OLD_REVISION"`
-
-if [ "x$testedflight" = x ]; then
-        wantpush=false
-        skipidentical=false
-        force_baseline=true
-	if [ "x$treeurl" != xnone: ]; then
-		treearg=--tree-$tree=$treeurl
-	fi
-	tested_revision=`check_tested $treearg --print-revision=$tree`
-	if [ "x$tested_revision" != x ]; then
-		OLD_REVISION="$tested_revision"
+if [ "x$OSSTEST_NO_BASELINE" != xy ] ; then
+	testedflight=`check_tested --revision-$tree="$OLD_REVISION"`
+
+	if [ "x$testedflight" = x ]; then
+		wantpush=false
+		skipidentical=false
+		force_baseline=true
+		if [ "x$treeurl" != xnone: ]; then
+			treearg=--tree-$tree=$treeurl
+		fi
+		tested_revision=`check_tested $treearg --print-revision=$tree`
+		if [ "x$tested_revision" != x ]; then
+			OLD_REVISION="$tested_revision"
+		fi
 	fi
 fi
 
-- 
1.8.5.2
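
[Editorial note: the hunk above wraps the whole baseline lookup in a guard on $OSSTEST_NO_BASELINE, using the classic "x"-prefix test(1) idiom. A minimal sketch of the pattern; OSSTEST_NO_BASELINE is the variable from the patch, while maybe_lookup_baseline is a hypothetical stand-in for the check_tested/force_baseline logic.]

```shell
#!/bin/sh
# Sketch of the gate added in this patch.

maybe_lookup_baseline () {
	# The "x" prefixes keep test(1) well-behaved if the variable
	# is empty or starts with "-".
	if [ "x$OSSTEST_NO_BASELINE" != xy ]; then
		echo "baseline lookup runs"
	else
		echo "baseline lookup suppressed"
	fi
}

OSSTEST_NO_BASELINE=
maybe_lookup_baseline	# prints "baseline lookup runs"
OSSTEST_NO_BASELINE=y
maybe_lookup_baseline	# prints "baseline lookup suppressed"
```

Setting OSSTEST_NO_BASELINE=y in a standalone run therefore skips the lookup block entirely, leaving wantpush, skipidentical and force_baseline at their defaults.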


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:38:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCr3n-0001rC-1a; Mon, 10 Feb 2014 13:38:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCr3k-0001r2-Eh
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:38:06 +0000
Received: from [193.109.254.147:50553] by server-6.bemta-14.messagelabs.com id
	60/11-03396-B36D8F25; Mon, 10 Feb 2014 13:38:03 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392039481!3276446!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11646 invoked from network); 10 Feb 2014 13:38:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:38:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101295376"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:38:01 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:38:01 -0500
Message-ID: <52F8D637.30204@citrix.com>
Date: Mon, 10 Feb 2014 13:37:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1391898943.27190.5.camel@x220> <52F8D2C5.6040507@citrix.com>
In-Reply-To: <52F8D2C5.6040507@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Paul Bolle <pebolle@tiscali.nl>,
	Tony Luck <tony.luck@intel.com>, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] ia64/xen: Remove Xen support for ia64 even
 more
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:23, David Vrabel wrote:
> On 08/02/14 22:35, Paul Bolle wrote:
>> Commit d52eefb47d4e ("ia64/xen: Remove Xen support for ia64") removed
>> the Kconfig symbol XEN_XENCOMM. But it didn't remove the code depending
>> on that symbol. Remove that code now.
> 
> Acked-by: David Vrabel <david.vrabel@citrix.com>

Forgot to say, since this only touches drivers/xen this is best merged
via the Xen tree.

Thanks

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:38:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCr3n-0001rC-1a; Mon, 10 Feb 2014 13:38:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCr3k-0001r2-Eh
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:38:06 +0000
Received: from [193.109.254.147:50553] by server-6.bemta-14.messagelabs.com id
	60/11-03396-B36D8F25; Mon, 10 Feb 2014 13:38:03 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392039481!3276446!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11646 invoked from network); 10 Feb 2014 13:38:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:38:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101295376"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:38:01 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:38:01 -0500
Message-ID: <52F8D637.30204@citrix.com>
Date: Mon, 10 Feb 2014 13:37:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1391898943.27190.5.camel@x220> <52F8D2C5.6040507@citrix.com>
In-Reply-To: <52F8D2C5.6040507@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	xen-devel@lists.xenproject.org, Paul Bolle <pebolle@tiscali.nl>,
	Tony Luck <tony.luck@intel.com>, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH] ia64/xen: Remove Xen support for ia64 even
 more
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:23, David Vrabel wrote:
> On 08/02/14 22:35, Paul Bolle wrote:
>> Commit d52eefb47d4e ("ia64/xen: Remove Xen support for ia64") removed
>> the Kconfig symbol XEN_XENCOMM. But it didn't remove the code depending
>> on that symbol. Remove that code now.
> 
> Acked-by: David Vrabel <david.vrabel@citrix.com>

Forgot to say: since this only touches drivers/xen, this is best merged
via the Xen tree.

Thanks

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:42:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCr7m-00022k-8j; Mon, 10 Feb 2014 13:42:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCr7i-00022d-1g
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:42:12 +0000
Received: from [85.158.137.68:34217] by server-3.bemta-3.messagelabs.com id
	61/48-14520-137D8F25; Mon, 10 Feb 2014 13:42:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392039726!850952!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30776 invoked from network); 10 Feb 2014 13:42:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:42:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99467325"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 13:42:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:42:05 -0500
Message-ID: <1392039724.5117.94.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 10 Feb 2014 13:42:04 +0000
In-Reply-To: <52F7B213.3000302@linaro.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-09 at 16:51 +0000, Julien Grall wrote:
> Hello Mukesh,
> 
> On 30/01/14 01:33, Mukesh Rathor wrote:
> >>> I'm not sure what you mean:
> >>>   - the code that Mukesh is adding doesn't have a struct page, it's
> >>>     just grabbing the foreign domid from the hypercall arg;
> >>>   - if we did have a struct page, we'd just need to take a ref to
> >>>     stop the owner changing underfoot; and
> >>>   - get_pg_owner() takes a domid anyway.
> >>
> >> Sorry, I was confused/misled by the name...
> >>
> >> rcu_lock_live_remote_domain_by_id does look like what is needed.
> 
> Following the xentrace thread 
> (http://www.gossamer-threads.com/lists/xen/devel/315883), 
> rcu_lock_live_remote_domain_by_id will not work correctly.
> 
> On Xen on ARM, xentrace uses this hypercall to map Xen pages (via 
> DOMID_XEN). In this case, rcu_lock_*domain* will always fail, which will 
> prevent xentrace from working on Xen on ARM (and on PVH).

I'm not sure how that extends to add_to_physmap though -- doing add to
physmap of a DOMID_XEN owned page through the "back door" in this way
isn't supposed to work.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:43:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCr8X-00025s-0C; Mon, 10 Feb 2014 13:43:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCr8V-00025k-3q
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:42:59 +0000
Received: from [85.158.139.211:49689] by server-15.bemta-5.messagelabs.com id
	A5/21-24395-267D8F25; Mon, 10 Feb 2014 13:42:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392039777!2891609!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29942 invoked from network); 10 Feb 2014 13:42:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 13:42:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 13:42:56 +0000
Message-Id: <52F8E56D020000780011AC43@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 13:42:53 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part586BFF4D.1__="
Cc: xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] VT-d: fix RMRR handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part586BFF4D.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Removing mapped RMRR tracking structures in dma_pte_clear_one() is
wrong for two reasons: First, these regions may cover more than a
single page. And second, multiple devices (and hence multiple devices
assigned to any particular guest) may share a single RMRR (whether
assigning such devices to distinct guests is a safe thing to do is
another question).

Therefore move the removal of the tracking structures into the
counterpart function to the one doing the insertion -
intel_iommu_remove_device(), and add a reference count to the tracking
structure.
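
The shared-RMRR lifecycle described above can be sketched in isolation. This
is a minimal userspace illustration of the reference-counting scheme the
patch adds, not Xen code: the names (struct tracked_rmrr, rmrr_get,
rmrr_put) and the plain singly linked list are hypothetical stand-ins for
Xen's struct mapped_rmrr and list_head machinery.

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for the patched struct mapped_rmrr: one tracking
 * entry per distinct RMRR, shared by all devices using that region. */
struct tracked_rmrr {
    uint64_t base, end;
    unsigned int count;        /* number of devices referencing the RMRR */
    struct tracked_rmrr *next; /* singly linked list, for this sketch */
};

/* Insertion: bump the count if the region is already tracked,
 * otherwise allocate a fresh entry with count 1. */
static struct tracked_rmrr *rmrr_get(struct tracked_rmrr **head,
                                     uint64_t base, uint64_t end)
{
    struct tracked_rmrr *t;

    for ( t = *head; t; t = t->next )
        if ( t->base == base && t->end == end )
        {
            ++t->count;
            return t;
        }

    t = calloc(1, sizeof(*t));
    if ( !t )
        return NULL;
    t->base = base;
    t->end = end;
    t->count = 1;
    t->next = *head;
    *head = t;
    return t;
}

/* Removal: only dropping the last reference actually unlinks and frees
 * the entry (where the real code would also unmap the region). */
static void rmrr_put(struct tracked_rmrr **head, uint64_t base, uint64_t end)
{
    struct tracked_rmrr **pp;

    for ( pp = head; *pp; pp = &(*pp)->next )
        if ( (*pp)->base == base && (*pp)->end == end )
        {
            if ( --(*pp)->count == 0 )
            {
                struct tracked_rmrr *t = *pp;
                *pp = t->next;
                free(t);
            }
            return;
        }
}
```

With two devices sharing one RMRR, two rmrr_get() calls yield a single
entry with count 2, and only the second rmrr_put() tears it down.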

Further, for the handling of the mappings of the respective memory
regions to be correct, RMRRs must not overlap. Add a respective check
to acpi_parse_one_rmrr().
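
The overlap condition used by that check is the standard closed-interval
intersection test: [b1,e1] and [b2,e2] intersect iff b1 <= e2 and
b2 <= e1 (RMRR base/end addresses are inclusive). A minimal sketch; the
helper name rmrr_overlaps is hypothetical, the patch open-codes the test:

```c
#include <stdbool.h>
#include <stdint.h>

/* Closed-interval overlap test, as used by the new check in
 * acpi_parse_one_rmrr(): base/end addresses are both inclusive, so
 * regions sharing even a single byte count as overlapping. */
static bool rmrr_overlaps(uint64_t base1, uint64_t end1,
                          uint64_t base2, uint64_t end2)
{
    return base1 <= end2 && base2 <= end1;
}
```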

And finally, with all of this being VT-d specific, move the cleanup
of the list as well as the structure type definition where it belongs -
in VT-d specific rather than IOMMU generic code.

Note that this doesn't address yet another issue associated with RMRR
handling: The purpose of the RMRRs as well as the way the respective
IOMMU page table mappings get inserted both suggest that these regions
would need to be marked E820_RESERVED in all (HVM?) guests' memory
maps, yet nothing like this is being done in hvmloader. (For PV guests
this would also seem to be necessary, but may conflict with PV guests
possibly assuming there to be just a single E820 entry representing all
of its RAM.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -580,6 +580,16 @@ acpi_parse_one_rmrr(struct acpi_dmar_hea
     if ( (ret = acpi_dmar_check_length(header, sizeof(*rmrr))) != 0 )
         return ret;
 
+    list_for_each_entry(rmrru, &acpi_rmrr_units, list)
+       if ( base_addr <= rmrru->end_address && rmrru->base_address <= end_addr )
+       {
+           printk(XENLOG_ERR VTDPREFIX
+                  "Overlapping RMRRs [%"PRIx64",%"PRIx64"] and [%"PRIx64",%"PRIx64"]\n",
+                  rmrru->base_address, rmrru->end_address,
+                  base_addr, end_addr);
+           return -EEXIST;
+       }
+
     /* This check is here simply to detect when RMRR values are
      * not properly represented in the system memory map and
      * inform the user
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -412,9 +412,8 @@ static int iommu_populate_page_table(str
 void iommu_domain_destroy(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
-    struct list_head *ioport_list, *rmrr_list, *tmp;
+    struct list_head *ioport_list, *tmp;
     struct g2m_ioport *ioport;
-    struct mapped_rmrr *mrmrr;
 
     if ( !iommu_enabled || !hd->platform_ops )
         return;
@@ -428,13 +427,6 @@ void iommu_domain_destroy(struct domain 
         list_del(&ioport->list);
         xfree(ioport);
     }
-
-    list_for_each_safe ( rmrr_list, tmp, &hd->mapped_rmrrs )
-    {
-        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
-        list_del(&mrmrr->list);
-        xfree(mrmrr);
-    }
 }
 
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -43,6 +43,12 @@
 #include "vtd.h"
 #include "../ats.h"
 
+struct mapped_rmrr {
+    struct list_head list;
+    u64 base, end;
+    unsigned int count;
+};
+
 /* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
 bool_t __read_mostly untrusted_msi;
 
@@ -620,7 +626,6 @@ static void dma_pte_clear_one(struct dom
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
     struct dma_pte *page = NULL, *pte = NULL;
     u64 pg_maddr;
-    struct mapped_rmrr *mrmrr;
 
     spin_lock(&hd->mapping_lock);
     /* get last level pte */
@@ -649,21 +654,6 @@ static void dma_pte_clear_one(struct dom
         __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
 
     unmap_vtd_domain_page(page);
-
-    /* if the cleared address is between mapped RMRR region,
-     * remove the mapped RMRR
-     */
-    spin_lock(&hd->mapping_lock);
-    list_for_each_entry ( mrmrr, &hd->mapped_rmrrs, list )
-    {
-        if ( addr >= mrmrr->base && addr <= mrmrr->end )
-        {
-            list_del(&mrmrr->list);
-            xfree(mrmrr);
-            break;
-        }
-    }
-    spin_unlock(&hd->mapping_lock);
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1704,10 +1694,17 @@ static int reassign_device_ownership(
 void iommu_domain_teardown(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct mapped_rmrr *mrmrr, *tmp;
 
     if ( list_empty(&acpi_drhd_units) )
         return;
 
+    list_for_each_entry_safe ( mrmrr, tmp, &hd->mapped_rmrrs, list )
+    {
+        list_del(&mrmrr->list);
+        xfree(mrmrr);
+    }
+
     if ( iommu_use_hap_pt(d) )
         return;
 
@@ -1852,14 +1849,17 @@ static int rmrr_identity_mapping(struct 
     ASSERT(rmrr->base_address < rmrr->end_address);
 
     /*
-     * No need to acquire hd->mapping_lock, as the only theoretical race is
-     * with the insertion below (impossible due to holding pcidevs_lock).
+     * No need to acquire hd->mapping_lock: Both insertion and removal
+     * get done while holding pcidevs_lock.
      */
     list_for_each_entry( mrmrr, &hd->mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
+        {
+            ++mrmrr->count;
             return 0;
+        }
     }
 
     base = rmrr->base_address & PAGE_MASK_4K;
@@ -1880,9 +1880,8 @@ static int rmrr_identity_mapping(struct 
         return -ENOMEM;
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
-    spin_lock(&hd->mapping_lock);
+    mrmrr->count = 1;
     list_add_tail(&mrmrr->list, &hd->mapped_rmrrs);
-    spin_unlock(&hd->mapping_lock);
 
     return 0;
 }
@@ -1944,17 +1943,50 @@ static int intel_iommu_remove_device(u8 
     if ( !pdev->domain )
         return -EINVAL;
 
-    /* If the device belongs to dom0, and it has RMRR, don't remove it
-     * from dom0, because BIOS may use RMRR at booting time.
-     */
-    if ( pdev->domain->domain_id == 0 )
+    for_each_rmrr_device ( rmrr, bdf, i )
     {
-        for_each_rmrr_device ( rmrr, bdf, i )
+        struct hvm_iommu *hd;
+        struct mapped_rmrr *mrmrr, *tmp;
+
+        if ( rmrr->segment != pdev->seg ||
+             PCI_BUS(bdf) != pdev->bus ||
+             PCI_DEVFN2(bdf) != devfn )
+            continue;
+
+        /*
+         * If the device belongs to dom0, and it has RMRR, don't remove
+         * it from dom0, because BIOS may use RMRR at booting time.
+         */
+        if ( is_hardware_domain(pdev->domain) )
+            return 0;
+
+        hd = domain_hvm_iommu(pdev->domain);
+
+        /*
+         * No need to acquire hd->mapping_lock: Both insertion and removal
+         * get done while holding pcidevs_lock.
+         */
+        ASSERT(spin_is_locked(&pcidevs_lock));
+        list_for_each_entry_safe ( mrmrr, tmp, &hd->mapped_rmrrs, list )
         {
-            if ( rmrr->segment == pdev->seg &&
-                 PCI_BUS(bdf) == pdev->bus &&
-                 PCI_DEVFN2(bdf) == devfn )
-                return 0;
+            unsigned long base_pfn, end_pfn;
+
+            if ( rmrr->base_address != mrmrr->base ||
+                 rmrr->end_address != mrmrr->end ||
+                 --mrmrr->count )
+                continue;
+
+            base_pfn = (mrmrr->base & PAGE_MASK_4K) >> PAGE_SHIFT_4K;
+            end_pfn = PAGE_ALIGN_4K(mrmrr->end) >> PAGE_SHIFT_4K;
+            while ( base_pfn < end_pfn )
+            {
+                if ( intel_iommu_unmap_page(pdev->domain, base_pfn) )
+                    return -ENXIO;
+                base_pfn++;
+            }
+
+            list_del(&mrmrr->list);
+            xfree(mrmrr);
         }
     }
 
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -29,12 +29,6 @@ struct g2m_ioport {
     unsigned int np;
 };
 
-struct mapped_rmrr {
-    struct list_head list;
-    u64 base;
-    u64 end;
-};
-
 struct hvm_iommu {
     u64 pgd_maddr;                 /* io page directory machine address */
     spinlock_t mapping_lock;       /* io page table lock */



--=__Part586BFF4D.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part586BFF4D.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 10 13:43:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCr8X-00025s-0C; Mon, 10 Feb 2014 13:43:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCr8V-00025k-3q
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 13:42:59 +0000
Received: from [85.158.139.211:49689] by server-15.bemta-5.messagelabs.com id
	A5/21-24395-267D8F25; Mon, 10 Feb 2014 13:42:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392039777!2891609!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29942 invoked from network); 10 Feb 2014 13:42:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 13:42:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 13:42:56 +0000
Message-Id: <52F8E56D020000780011AC43@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 13:42:53 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part586BFF4D.1__="
Cc: xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] VT-d: fix RMRR handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part586BFF4D.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Removing mapped RMRR tracking structures in dma_pte_clear_one() is
wrong for two reasons: First, these regions may cover more than a
single page. And second, multiple devices (and hence multiple devices
assigned to any particular guest) may share a single RMRR (whether
assigning such devices to distinct guests is a safe thing to do is
another question).

Therefore move the removal of the tracking structures into the
counterpart function to the one doing the insertion -
intel_iommu_remove_device(), and add a reference count to the tracking
structure.

Further, for the handling of the mappings of the respective memory
regions to be correct, RMRRs must not overlap. Add a respective check
to acpi_parse_one_rmrr().

And finally, with all of this being VT-d specific, move the cleanup
of the list as well as the structure type definition to where they
belong: VT-d-specific rather than IOMMU-generic code.

Note that this doesn't address yet another issue associated with RMRR
handling: The purpose of the RMRRs as well as the way the respective
IOMMU page table mappings get inserted both suggest that these regions
would need to be marked E820_RESERVED in all (HVM?) guests' memory
maps, yet nothing like this is being done in hvmloader. (For PV guests
this would also seem to be necessary, but may conflict with PV guests
possibly assuming there to be just a single E820 entry representing all
of their RAM.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -580,6 +580,16 @@ acpi_parse_one_rmrr(struct acpi_dmar_hea
     if ( (ret =3D acpi_dmar_check_length(header, sizeof(*rmrr))) !=3D 0 )
         return ret;
=20
+    list_for_each_entry(rmrru, &acpi_rmrr_units, list)
+       if ( base_addr <=3D rmrru->end_address && rmrru->base_address <=3D =
end_addr )
+       {
+           printk(XENLOG_ERR VTDPREFIX
+                  "Overlapping RMRRs [%"PRIx64",%"PRIx64"] and [%"PRIx64",=
%"PRIx64"]\n",
+                  rmrru->base_address, rmrru->end_address,
+                  base_addr, end_addr);
+           return -EEXIST;
+       }
+
     /* This check is here simply to detect when RMRR values are
      * not properly represented in the system memory map and
      * inform the user
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -412,9 +412,8 @@ static int iommu_populate_page_table(str
 void iommu_domain_destroy(struct domain *d)
 {
     struct hvm_iommu *hd  =3D domain_hvm_iommu(d);
-    struct list_head *ioport_list, *rmrr_list, *tmp;
+    struct list_head *ioport_list, *tmp;
     struct g2m_ioport *ioport;
-    struct mapped_rmrr *mrmrr;
=20
     if ( !iommu_enabled || !hd->platform_ops )
         return;
@@ -428,13 +427,6 @@ void iommu_domain_destroy(struct domain=20
         list_del(&ioport->list);
         xfree(ioport);
     }
-
-    list_for_each_safe ( rmrr_list, tmp, &hd->mapped_rmrrs )
-    {
-        mrmrr =3D list_entry(rmrr_list, struct mapped_rmrr, list);
-        list_del(&mrmrr->list);
-        xfree(mrmrr);
-    }
 }
=20
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long =
mfn,
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -43,6 +43,12 @@
 #include "vtd.h"
 #include "../ats.h"
=20
+struct mapped_rmrr {
+    struct list_head list;
+    u64 base, end;
+    unsigned int count;
+};
+
 /* Possible unfiltered LAPIC/MSI messages from untrusted sources? */
 bool_t __read_mostly untrusted_msi;
=20
@@ -620,7 +626,6 @@ static void dma_pte_clear_one(struct dom
     struct hvm_iommu *hd =3D domain_hvm_iommu(domain);
     struct dma_pte *page =3D NULL, *pte =3D NULL;
     u64 pg_maddr;
-    struct mapped_rmrr *mrmrr;
=20
     spin_lock(&hd->mapping_lock);
     /* get last level pte */
@@ -649,21 +654,6 @@ static void dma_pte_clear_one(struct dom
         __intel_iommu_iotlb_flush(domain, addr >> PAGE_SHIFT_4K, 1, 1);
=20
     unmap_vtd_domain_page(page);
-
-    /* if the cleared address is between mapped RMRR region,
-     * remove the mapped RMRR
-     */
-    spin_lock(&hd->mapping_lock);
-    list_for_each_entry ( mrmrr, &hd->mapped_rmrrs, list )
-    {
-        if ( addr >=3D mrmrr->base && addr <=3D mrmrr->end )
-        {
-            list_del(&mrmrr->list);
-            xfree(mrmrr);
-            break;
-        }
-    }
-    spin_unlock(&hd->mapping_lock);
 }
=20
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1704,10 +1694,17 @@ static int reassign_device_ownership(
 void iommu_domain_teardown(struct domain *d)
 {
     struct hvm_iommu *hd =3D domain_hvm_iommu(d);
+    struct mapped_rmrr *mrmrr, *tmp;
=20
     if ( list_empty(&acpi_drhd_units) )
         return;
=20
+    list_for_each_entry_safe ( mrmrr, tmp, &hd->mapped_rmrrs, list )
+    {
+        list_del(&mrmrr->list);
+        xfree(mrmrr);
+    }
+
     if ( iommu_use_hap_pt(d) )
         return;
=20
@@ -1852,14 +1849,17 @@ static int rmrr_identity_mapping(struct=20
     ASSERT(rmrr->base_address < rmrr->end_address);
=20
     /*
-     * No need to acquire hd->mapping_lock, as the only theoretical race =
is
-     * with the insertion below (impossible due to holding pcidevs_lock).
+     * No need to acquire hd->mapping_lock: Both insertion and removal
+     * get done while holding pcidevs_lock.
      */
     list_for_each_entry( mrmrr, &hd->mapped_rmrrs, list )
     {
         if ( mrmrr->base =3D=3D rmrr->base_address &&
              mrmrr->end =3D=3D rmrr->end_address )
+        {
+            ++mrmrr->count;
             return 0;
+        }
     }
=20
     base =3D rmrr->base_address & PAGE_MASK_4K;
@@ -1880,9 +1880,8 @@ static int rmrr_identity_mapping(struct=20
         return -ENOMEM;
     mrmrr->base =3D rmrr->base_address;
     mrmrr->end =3D rmrr->end_address;
-    spin_lock(&hd->mapping_lock);
+    mrmrr->count =3D 1;
     list_add_tail(&mrmrr->list, &hd->mapped_rmrrs);
-    spin_unlock(&hd->mapping_lock);
=20
     return 0;
 }
@@ -1944,17 +1943,50 @@ static int intel_iommu_remove_device(u8=20
     if ( !pdev->domain )
         return -EINVAL;
=20
-    /* If the device belongs to dom0, and it has RMRR, don't remove it
-     * from dom0, because BIOS may use RMRR at booting time.
-     */
-    if ( pdev->domain->domain_id =3D=3D 0 )
+    for_each_rmrr_device ( rmrr, bdf, i )
     {
-        for_each_rmrr_device ( rmrr, bdf, i )
+        struct hvm_iommu *hd;
+        struct mapped_rmrr *mrmrr, *tmp;
+
+        if ( rmrr->segment !=3D pdev->seg ||
+             PCI_BUS(bdf) !=3D pdev->bus ||
+             PCI_DEVFN2(bdf) !=3D devfn )
+            continue;
+
+        /*
+         * If the device belongs to dom0, and it has RMRR, don't remove
+         * it from dom0, because BIOS may use RMRR at booting time.
+         */
+        if ( is_hardware_domain(pdev->domain) )
+            return 0;
+
+        hd =3D domain_hvm_iommu(pdev->domain);
+
+        /*
+         * No need to acquire hd->mapping_lock: Both insertion and =
removal
+         * get done while holding pcidevs_lock.
+         */
+        ASSERT(spin_is_locked(&pcidevs_lock));
+        list_for_each_entry_safe ( mrmrr, tmp, &hd->mapped_rmrrs, list )
         {
-            if ( rmrr->segment =3D=3D pdev->seg &&
-                 PCI_BUS(bdf) =3D=3D pdev->bus &&
-                 PCI_DEVFN2(bdf) =3D=3D devfn )
-                return 0;
+            unsigned long base_pfn, end_pfn;
+
+            if ( rmrr->base_address !=3D mrmrr->base ||
+                 rmrr->end_address !=3D mrmrr->end ||
+                 --mrmrr->count )
+                continue;
+
+            base_pfn =3D (mrmrr->base & PAGE_MASK_4K) >> PAGE_SHIFT_4K;
+            end_pfn =3D PAGE_ALIGN_4K(mrmrr->end) >> PAGE_SHIFT_4K;
+            while ( base_pfn < end_pfn )
+            {
+                if ( intel_iommu_unmap_page(pdev->domain, base_pfn) )
+                    return -ENXIO;
+                base_pfn++;
+            }
+
+            list_del(&mrmrr->list);
+            xfree(mrmrr);
         }
     }
=20
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -29,12 +29,6 @@ struct g2m_ioport {
     unsigned int np;
 };
=20
-struct mapped_rmrr {
-    struct list_head list;
-    u64 base;
-    u64 end;
-};
-
 struct hvm_iommu {
     u64 pgd_maddr;                 /* io page directory machine address =
*/
     spinlock_t mapping_lock;       /* io page table lock */



--=__Part586BFF4D.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--=__Part586BFF4D.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 10 13:49:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrEf-0002lN-BN; Mon, 10 Feb 2014 13:49:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCrES-0002lA-4c
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 13:49:19 +0000
Received: from [85.158.137.68:6772] by server-1.bemta-3.messagelabs.com id
	10/B4-17293-3D8D8F25; Mon, 10 Feb 2014 13:49:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392040146!852475!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10487 invoked from network); 10 Feb 2014 13:49:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 13:49:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 13:49:05 +0000
Message-Id: <52F8E6DE020000780011AC53@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 13:49:02 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<52F4E663020000780011A3B0@nat28.tlf.novell.com>
In-Reply-To: <52F4E663020000780011A3B0@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 07.02.14 at 13:57, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 07.02.14 at 13:12, Ian Campbell <ian.campbell@citrix.com> wrote:
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>>  
>> +/*
>> + * ARM: Clean and invalidate caches associated with given region of
>> + * guest memory.
>> + */
>> +struct xen_domctl_cacheflush {
>> +    /* IN: page range to flush. */
>> +    xen_pfn_t start_pfn, nr_pfns;
>> +};
> 
> The name here (and of the libxc interface) is now certainly
> counterintuitive. But it's a domctl (and an internal interface),
> which we can change post-4.4 (I'd envision it to actually take
> a flags parameter indicating the kind of flush that's wanted).

Actually the naming of things in the hypervisor part of the patch
is now bogus too - sysc_page_to_ram(), for example, does in no
way imply that the cache needs not just flushing, but also
invalidating. The need for which, btw, I continue to not
understand: Are there ways in ARM for one guest to observe
cache contents put in place by another guest (incl Dom0)?

Jan



From xen-devel-bounces@lists.xen.org Mon Feb 10 13:50:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:50:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrFh-0002pn-F0; Mon, 10 Feb 2014 13:50:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Dave.Scott@citrix.com>) id 1WCrFT-0002pK-KE
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 13:50:23 +0000
Received: from [193.109.254.147:33773] by server-2.bemta-14.messagelabs.com id
	57/2F-01236-219D8F25; Mon, 10 Feb 2014 13:50:10 +0000
X-Env-Sender: Dave.Scott@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392040207!3243698!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8387 invoked from network); 10 Feb 2014 13:50:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:50:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101297777"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:50:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:50:06 -0500
Received: from [10.80.239.111]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <dave.scott@eu.citrix.com>)	id 1WCrFO-000117-Vu;
	Mon, 10 Feb 2014 13:50:06 +0000
Message-ID: <52F8D8F6.1010209@eu.citrix.com>
Date: Mon, 10 Feb 2014 13:49:42 +0000
From: David Scott <dave.scott@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.1.1
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>, Don Slutz
	<dslutz@verizon.com>, <xen-devel@lists.xen.org>
References: <1391809911-13610-1-git-send-email-dslutz@verizon.com>
	<1391809911-13610-2-git-send-email-dslutz@verizon.com>
	<52F8C834.5070104@eu.citrix.com>
In-Reply-To: <52F8C834.5070104@eu.citrix.com>
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/1] xenlight_stubs.c: Allow it to
 build with ocaml 3.09.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 12:38, George Dunlap wrote:
> On 02/07/2014 09:51 PM, Don Slutz wrote:
>> This code was copied from:
>>
>> http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
>>
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Also looks fine to me.
Acked-by: David Scott <dave.scott@citrix.com>

>
>> ---
>>   tools/ocaml/libs/xl/xenlight_stubs.c | 13 +++++++++++++
>>   1 file changed, 13 insertions(+)
>>
>> diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c
>> b/tools/ocaml/libs/xl/xenlight_stubs.c
>> index 23f253a..8e825ae 100644
>> --- a/tools/ocaml/libs/xl/xenlight_stubs.c
>> +++ b/tools/ocaml/libs/xl/xenlight_stubs.c
>> @@ -35,6 +35,19 @@
>>   #include "caml_xentoollog.h"
>> +/*
>> + * Starting with ocaml-3.09.3, CAMLreturn can only be used for ``value''
>> + * types. CAMLreturnT was only added in 3.09.4, so we define our own
>> + * version here if needed.
>> + */
>> +#ifndef CAMLreturnT
>> +#define CAMLreturnT(type, result) do { \
>> +    type caml__temp_result = (result); \
>> +    caml_local_roots = caml__frame; \
>> +    return (caml__temp_result); \
>> +} while (0)
>> +#endif
>> +
>>   /* The following is equal to the CAMLreturn macro, but without the
>> return */
>>   #define CAMLdone do{ \
>>   caml_local_roots = caml__frame; \
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 13:54:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 13:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrJQ-0003E6-9b; Mon, 10 Feb 2014 13:54:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCrJO-0003CE-De
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 13:54:14 +0000
Received: from [85.158.137.68:22616] by server-8.bemta-3.messagelabs.com id
	05/92-16039-50AD8F25; Mon, 10 Feb 2014 13:54:13 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392040451!845246!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20892 invoked from network); 10 Feb 2014 13:54:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 13:54:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101298650"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 13:54:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 08:54:10 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WCrJK-000153-GW;
	Mon, 10 Feb 2014 13:54:10 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 10 Feb 2014 13:54:02 +0000
Message-ID: <1392040442-22457-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH] xen: install xen/gntdev.h and xen/gntalloc.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

xen/gntdev.h and xen/gntalloc.h both provide userspace ABIs so they
should be installed.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 include/uapi/xen/Kbuild           |    2 ++
 include/{ => uapi}/xen/gntalloc.h |    0
 include/{ => uapi}/xen/gntdev.h   |    0
 3 files changed, 2 insertions(+), 0 deletions(-)
 rename include/{ => uapi}/xen/gntalloc.h (100%)
 rename include/{ => uapi}/xen/gntdev.h (100%)

diff --git a/include/uapi/xen/Kbuild b/include/uapi/xen/Kbuild
index 61257cb..5c45962 100644
--- a/include/uapi/xen/Kbuild
+++ b/include/uapi/xen/Kbuild
@@ -1,3 +1,5 @@
 # UAPI Header export list
 header-y += evtchn.h
+header-y += gntalloc.h
+header-y += gntdev.h
 header-y += privcmd.h
diff --git a/include/xen/gntalloc.h b/include/uapi/xen/gntalloc.h
similarity index 100%
rename from include/xen/gntalloc.h
rename to include/uapi/xen/gntalloc.h
diff --git a/include/xen/gntdev.h b/include/uapi/xen/gntdev.h
similarity index 100%
rename from include/xen/gntdev.h
rename to include/uapi/xen/gntdev.h
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:02:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrR9-0004Ev-EC; Mon, 10 Feb 2014 14:02:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCrR7-0004Eq-3G
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 14:02:13 +0000
Received: from [85.158.139.211:46406] by server-2.bemta-5.messagelabs.com id
	FE/0F-23037-4EBD8F25; Mon, 10 Feb 2014 14:02:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392040927!2860922!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26420 invoked from network); 10 Feb 2014 14:02:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:02:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99472798"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 14:02:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:02:06 -0500
Message-ID: <1392040924.5117.103.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Mon, 10 Feb 2014 14:02:04 +0000
In-Reply-To: <52F8E6DE020000780011AC53@nat28.tlf.novell.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<52F4E663020000780011A3B0@nat28.tlf.novell.com>
	<52F8E6DE020000780011AC53@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 13:49 +0000, Jan Beulich wrote:
> >>> On 07.02.14 at 13:57, "Jan Beulich" <JBeulich@suse.com> wrote:
> >>>> On 07.02.14 at 13:12, Ian Campbell <ian.campbell@citrix.com> wrote:
> >> --- a/xen/include/public/domctl.h
> >> +++ b/xen/include/public/domctl.h
> >> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
> >>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
> >>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
> >>  
> >> +/*
> >> + * ARM: Clean and invalidate caches associated with given region of
> >> + * guest memory.
> >> + */
> >> +struct xen_domctl_cacheflush {
> >> +    /* IN: page range to flush. */
> >> +    xen_pfn_t start_pfn, nr_pfns;
> >> +};
> > 
> > The name here (and of the libxc interface) is now certainly
> > counterintuitive. But it's a domctl (and an internal interface),
> > which we can change post-4.4 (I'd envision it to actually take
> > a flags parameter indicating the kind of flush that's wanted).
> 
> Actually the naming of things in the hypervisor part of the patch
> is now bogus too - sync_page_to_ram(), for example, does in no
> way imply that the cache needs not just flushing, but also
> invalidating.

sync_and_clean_page? Not quite right, I think.

>  The need for which, btw, I continue to not
> understand: Are there ways in ARM for one guest to observe
> cache contents put in place by another guest (incl Dom0)?

The data cache is physically tagged on armv7+ (which is all we support),
so if I understand your question correctly the answer is: a guest cannot
see cache contents placed by another guest *unless* that page is freed
back to the hypervisor and recycled as a new page for that guest, which
is why we want to clean+invalidate in the allocation path, after the
page has been scrubbed if required.

If you don't invalidate the cache then a guest which starts without
caches enabled and writes to RAM may be surprised when it enables the
cache and finds the data it wrote is now shadowed by some stale cache
entries.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:20:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCriI-00054q-NC; Mon, 10 Feb 2014 14:19:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCriH-00054O-6R
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 14:19:57 +0000
Received: from [85.158.139.211:17370] by server-4.bemta-5.messagelabs.com id
	A0/1F-08092-C00E8F25; Mon, 10 Feb 2014 14:19:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392041994!2924512!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2857 invoked from network); 10 Feb 2014 14:19:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:19:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99477770"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 14:19:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:19:52 -0500
Message-ID: <1392041992.26657.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: herbert cland <herbert.cland@gmail.com>
Date: Mon, 10 Feb 2014 14:19:52 +0000
In-Reply-To: <2014021017472829063425@gmail.com>
References: <2014021017472829063425@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [XEN BUG] XEN + Qemu upstream + Qcow2 + coroutine =
 BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:47 +0800, herbert cland wrote:
> Dear ALL!
>  
> A qcow2 format disk works well with Xen in most cases,
> BUT when using a coroutine function such as block-stream, the qemu
> process crashes!
> And we found the following message via dmesg:
>  
> qemu-system-i38[20514]: segfault at 60 ip 00007f2b60dc879a sp
> 00007ffffcc2f000 error 4 in qemu-system-i386[7f2b60cdf000+529000] 
>  
> Any suggestion?

Please can you try installing the necessary debug symbols package (I
don't know how to do this on CentOS) and capture a core dump (which
might require adjusting rlimits), or run qemu under gdb; either way,
getting a backtrace out of this crash will help us make progress.

Please also take a look at
http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen to see what other
info might be useful, such as the guest config file, which toolstack
you are using, and other relevant logs.

Since this is a qemu issue you might also want to include the qemu list
in your report.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> Dear ALL!
>  
> A qcow2 format disk works well with Xen in most cases,
> but when using a coroutine-based operation such as block-stream, the
> qemu process crashes!
> We found the following message in the dmesg output:
>  
> qemu-system-i38[20514]: segfault at 60 ip 00007f2b60dc879a sp
> 00007ffffcc2f000 error 4 in qemu-system-i386[7f2b60cdf000+529000] 
>  
> Any suggestions?

Please try installing the necessary debug symbols package (I don't know
how to do this on CentOS) and then either capture a core dump (which
might require adjusting rlimits) or run qemu under gdb. In either case,
getting a backtrace out of this crash will help us make progress.
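In case it helps, here is a minimal sketch of the core-dump route. The
numbers are taken from the dmesg line above; the binary path and the
addr2line/gdb steps assume matching debug symbols are installed:

```shell
# Allow core dumps in the shell that will launch qemu
# (the "adjusting rlimits" step).
ulimit -c unlimited

# From the dmesg line:
#   segfault at 60 ip 00007f2b60dc879a ... in qemu-system-i386[7f2b60cdf000+529000]
# the faulting offset inside the qemu binary is ip minus the mapping base:
printf 'offset: 0x%x\n' $(( 0x7f2b60dc879a - 0x7f2b60cdf000 ))

# With debug symbols, that offset can be resolved to a source line, or
# the core file (name depends on kernel.core_pattern) loaded into gdb:
#   addr2line -e /usr/bin/qemu-system-i386 0xe979a
#   gdb /usr/bin/qemu-system-i386 core    # then: (gdb) bt
```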

Please also take a look at
http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen to see what other
info might be useful, such as the guest config file, which toolstack you
are using, and other relevant logs.

Since this is a qemu issue you might also want to include the qemu list
in your report.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:23:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrm0-0005P5-GY; Mon, 10 Feb 2014 14:23:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCrlz-0005Oy-Pe
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 14:23:47 +0000
Received: from [193.109.254.147:24302] by server-3.bemta-14.messagelabs.com id
	2B/46-00432-3F0E8F25; Mon, 10 Feb 2014 14:23:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392042224!3284891!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31754 invoked from network); 10 Feb 2014 14:23:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:23:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101306789"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 14:23:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:23:43 -0500
Message-ID: <1392042223.26657.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Miguel Clara <miguelmclara@gmail.com>
Date: Mon, 10 Feb 2014 14:23:43 +0000
In-Reply-To: <CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-06 at 16:03 +0000, Miguel Clara wrote:
> 
> If I use kernel/ramdisk files instead of pygrub it works, with the same
> disk syntax!

Ah, I suspect that something isn't realising that drbd:resource-name
isn't a local disk name, and so it tries to run pygrub on it directly
instead of creating a loopback xvd to use.

Some of this has been improved since 4.3, but an "xl -vvv create ..."
might shed some light on exactly what is going wrong.
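For the record, a hypothetical guest config exercising this path, and
the command to capture the verbose toolstack log (the resource and file
names here are made up for illustration):

```
# guest.cfg (illustrative)
bootloader = "pygrub"
disk = [ "drbd:myresource,xvda,w" ]

# Capture the verbose creation log:
#   xl -vvv create guest.cfg 2>&1 | tee xl-create.log
```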

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:27:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:27:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrp1-0005XM-5h; Mon, 10 Feb 2014 14:26:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCrp0-0005XG-CC
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 14:26:54 +0000
Received: from [85.158.139.211:11729] by server-9.bemta-5.messagelabs.com id
	AA/26-11237-DA1E8F25; Mon, 10 Feb 2014 14:26:53 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392042412!2915386!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21187 invoked from network); 10 Feb 2014 14:26:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 14:26:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 14:26:52 +0000
Message-Id: <52F8EFBA020000780011AC7C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 14:26:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<52F4E663020000780011A3B0@nat28.tlf.novell.com>
	<52F8E6DE020000780011AC53@nat28.tlf.novell.com>
	<1392040924.5117.103.camel@kazak.uk.xensource.com>
In-Reply-To: <1392040924.5117.103.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 15:02, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> If you don't invalidate the cache then a guest which starts without
> caches enabled and writes to RAM may be surprised when it enables the
> cache and finds the data it wrote is now shadowed by some stale cache
> entries.

Oh, right, I forgot that guests get started with caches disabled on
ARM (for whatever that's worth).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrpi-0005av-KN; Mon, 10 Feb 2014 14:27:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCrph-0005aZ-7c
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 14:27:37 +0000
Received: from [85.158.137.68:35525] by server-14.bemta-3.messagelabs.com id
	6B/55-08196-8D1E8F25; Mon, 10 Feb 2014 14:27:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392042454!863157!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13332 invoked from network); 10 Feb 2014 14:27:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:27:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101307805"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 14:27:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:27:21 -0500
Message-ID: <1392042440.26657.9.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Date: Mon, 10 Feb 2014 14:27:20 +0000
In-Reply-To: <52F26C40.2060901@scytl.com>
References: <52F26C40.2060901@scytl.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

CCing the vTPM maintainer.

On Wed, 2014-02-05 at 17:52 +0100, Jordi Cucurull Juan wrote:
> Dear all,
> 
> I have recently configured a Xen 4.3 server with the vTPM enabled and a
> guest virtual machine that takes advantage of it. After playing a bit
> with it, I have a few questions:
> 
> 1. According to the documentation, shutting down the vTPM stubdom only
> requires a normal shutdown of the guest VM; the vTPM stubdom should
> then shut down automatically. Nevertheless, if I shut down the guest,
> the vTPM stubdom remains active and, moreover, I can start the machine
> again and the vTPM still holds the values from the previous instance
> of the guest. Is this normal?

I don't know much about vTPM, but this seems odd to me. Which toolstack
are you using? Can you please provide details of your config, along
with logs from both startup and shutdown?

I've no clue about #2 or #3 I'm afraid.

> 2. The documentation recommends avoiding access to the physical TPM
> from Dom0 at the same time as the vTPM Manager stubdom. Nevertheless,
> I currently have IMA and TrouSerS enabled in Dom0 without any apparent
> issue. Why is directly accessing the physical TPM from Dom0 not
> recommended?
> 
> 3. If directly accessing the physical TPM from Dom0 is not
> recommended, what is the advisable way to check the integrity of this
> domain? With solutions such as TBOOT and Intel TXT?
> 
> Best regards,
> Jordi.
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrpi-0005b2-Vb; Mon, 10 Feb 2014 14:27:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCrph-0005ac-Kb
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 14:27:37 +0000
Received: from [85.158.143.35:5529] by server-3.bemta-4.messagelabs.com id
	AD/02-11539-8D1E8F25; Mon, 10 Feb 2014 14:27:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392042455!4539444!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13774 invoked from network); 10 Feb 2014 14:27:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:27:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101307919"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 14:27:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:27:33 -0500
Message-ID: <52F8E1D4.1070209@citrix.com>
Date: Mon, 10 Feb 2014 14:27:32 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1392040442-22457-1-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1392040442-22457-1-git-send-email-david.vrabel@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: install xen/gntdev.h and xen/gntalloc.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:54, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> xen/gntdev.h and xen/gntalloc.h both provide userspace ABIs so they
> should be installed.

I think this is a bug fix so please queue for 3.14 and Cc: stable, thanks.

David
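For context, exporting a UAPI header in kernels of this era is typically
a one-line addition per header to the relevant Kbuild list; the exact
file path below is an assumption, not taken from the patch itself:

```
# include/uapi/xen/Kbuild (assumed location for v3.14-era kernels)
header-y += gntalloc.h
header-y += gntdev.h
```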

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:27:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrpi-0005b2-Vb; Mon, 10 Feb 2014 14:27:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCrph-0005ac-Kb
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 14:27:37 +0000
Received: from [85.158.143.35:5529] by server-3.bemta-4.messagelabs.com id
	AD/02-11539-8D1E8F25; Mon, 10 Feb 2014 14:27:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392042455!4539444!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13774 invoked from network); 10 Feb 2014 14:27:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:27:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101307919"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 14:27:34 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:27:33 -0500
Message-ID: <52F8E1D4.1070209@citrix.com>
Date: Mon, 10 Feb 2014 14:27:32 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1392040442-22457-1-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1392040442-22457-1-git-send-email-david.vrabel@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: install xen/gntdev.h and xen/gntalloc.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:54, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> xen/gntdev.h and xen/gntalloc.h both provide userspace ABIs so they
> should be installed.
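
(Editor's note: for context, a userspace ABI header on kernels of this era was exported by listing it in the relevant Kbuild file for `make headers_install`. The patch presumably boils down to something like the following sketch; the file path and entries are assumed, not taken from the actual diff.)

```make
# Sketch: include/uapi/xen/Kbuild (assumed path)
header-y += gntalloc.h
header-y += gntdev.h
```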

I think this is a bug fix so please queue for 3.14 and Cc: stable, thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:34:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:34:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrwV-0006X8-Jx; Mon, 10 Feb 2014 14:34:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WCrwU-0006X1-Bq
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 14:34:38 +0000
Received: from [85.158.139.211:9218] by server-1.bemta-5.messagelabs.com id
	90/E0-12859-D73E8F25; Mon, 10 Feb 2014 14:34:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392042875!2909471!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21935 invoked from network); 10 Feb 2014 14:34:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 14:34:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101310054"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 14:34:35 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 09:34:34 -0500
Message-ID: <52F8E379.4020702@citrix.com>
Date: Mon, 10 Feb 2014 15:34:33 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>	<52F8C40E.7010707@citrix.com>
	<21240.50574.203262.432094@mariner.uk.xensource.com>
In-Reply-To: <21240.50574.203262.432094@mariner.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:26, Ian Jackson wrote:
> Roger Pau Monné writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
>> Thanks for the patch. I think it's missing the following chunk:
> ...
>> -our $freebsd_version= "10.0-BETA3";
>> +our $freebsd_version= "10.0-RELEASE";
>
> Oh.  Err, why is this hardcoded in the script ?  Changing the
> runvar(s) ought to be sufficient.
>
> ... (looks at the code) ...
>
> Oh, I see, that's just the default.  Perhaps the default should be
> removed entirely ?  None of the other scripts have a default image
> filename.

So $freebsd_image is going to contain the absolute path to the image?
I'm asking because ts-freebsd-install searches for the image in
/var/images, so do we have to do something like /var/images/$freebsd_image
to get the absolute image path?

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:37:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCrzS-0006ik-FT; Mon, 10 Feb 2014 14:37:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WCrzQ-0006iZ-Rw
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 14:37:41 +0000
Received: from [193.109.254.147:34985] by server-2.bemta-14.messagelabs.com id
	CA/EE-01236-434E8F25; Mon, 10 Feb 2014 14:37:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392043057!3290384!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29007 invoked from network); 10 Feb 2014 14:37:39 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 14:37:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1AEbVio029585
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Feb 2014 14:37:32 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AEbTH9001718
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Feb 2014 14:37:29 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AEbTwZ020436; Mon, 10 Feb 2014 14:37:29 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 06:37:28 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7EA361BFA73; Mon, 10 Feb 2014 09:37:27 -0500 (EST)
Date: Mon, 10 Feb 2014 09:37:27 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stanislaw Gruszka <sgruszka@redhat.com>
Message-ID: <20140210143727.GA15771@phenom.dumpdata.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
	<20140131160140.GC23648@phenom.dumpdata.com>
	<20140203101215.GA1725@redhat.com>
	<20140203141429.GD3400@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140203141429.GD3400@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, David Rientjes <rientjes@google.com>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 03, 2014 at 09:14:29AM -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Feb 03, 2014 at 11:12:16AM +0100, Stanislaw Gruszka wrote:
> > On Fri, Jan 31, 2014 at 11:01:40AM -0500, Konrad Rzeszutek Wilk wrote:
> > > Perhaps by using 'subsys_system_register' and stick it there?
> > 
> > This will not call ->resume callback as it is only called for
> > devices, so additional dummy device is needed, for example:
> > 
> > struct device xap_dev = {
> >         .init_name = "xen-acpi-processor-dev",
> >         .bus = &xap_bus,
> > };
> > ...
> > subsys_system_register(&xap_bus, NULL);
> > device_register(&xap_dev);
> > 
> > But I'm not sure if that is a good solution. It creates some unnecessary
> > sysfs directories and files. Additionally, it can restore CPU C-states
> > after some other drivers resume, which perhaps require proper C-states.
> 
> Yes.
> > 
> > Hence maybe adding a direct notification from the Xen core resume path
> > would be a better idea (proposed patch below). Please let me know which
> > you prefer; I'll provide the chosen solution to the bug reporters for testing.
> 
> Let me think about it for a day or so.

Sorry for the delay. I think this is fine.
> 
> Thanks!
> > 
> > Thanks
> > Stanislaw
> > 
> > diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> > index 624e8dc..96e4173 100644
> > --- a/drivers/xen/manage.c
> > +++ b/drivers/xen/manage.c
> > @@ -13,6 +13,7 @@
> >  #include <linux/freezer.h>
> >  #include <linux/syscore_ops.h>
> >  #include <linux/export.h>
> > +#include <linux/notifier.h>
> >  
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > @@ -46,6 +47,20 @@ struct suspend_info {
> >  	void (*post)(int cancelled);
> >  };
> >  
> > +static RAW_NOTIFIER_HEAD(xen_resume_notifier);
> > +
> > +void xen_resume_notifier_register(struct notifier_block *nb)
> > +{
> > +	raw_notifier_chain_register(&xen_resume_notifier, nb);
> > +}
> > +EXPORT_SYMBOL_GPL(xen_resume_notifier_register);
> > +
> > +void xen_resume_notifier_unregister(struct notifier_block *nb)
> > +{
> > +	raw_notifier_chain_unregister(&xen_resume_notifier, nb);
> > +}
> > +EXPORT_SYMBOL_GPL(xen_resume_notifier_unregister);
> > +
> >  #ifdef CONFIG_HIBERNATE_CALLBACKS
> >  static void xen_hvm_post_suspend(int cancelled)
> >  {
> > @@ -152,6 +167,8 @@ static void do_suspend(void)
> >  
> >  	err = stop_machine(xen_suspend, &si, cpumask_of(0));
> >  
> > +	raw_notifier_call_chain(&xen_resume_notifier, 0, NULL);
> > +
> >  	dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
> >  
> >  	if (err) {
> > diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
> > index 7231859..82358d1 100644
> > --- a/drivers/xen/xen-acpi-processor.c
> > +++ b/drivers/xen/xen-acpi-processor.c
> > @@ -27,10 +27,10 @@
> >  #include <linux/init.h>
> >  #include <linux/module.h>
> >  #include <linux/types.h>
> > -#include <linux/syscore_ops.h>
> >  #include <linux/acpi.h>
> >  #include <acpi/processor.h>
> >  #include <xen/xen.h>
> > +#include <xen/xen-ops.h>
> >  #include <xen/interface/platform.h>
> >  #include <asm/xen/hypercall.h>
> >  
> > @@ -495,14 +495,15 @@ static int xen_upload_processor_pm_data(void)
> >  	return rc;
> >  }
> >  
> > -static void xen_acpi_processor_resume(void)
> > +static int xen_acpi_processor_resume(struct notifier_block *nb,
> > +				     unsigned long action, void *data)
> >  {
> >  	bitmap_zero(acpi_ids_done, nr_acpi_bits);
> > -	xen_upload_processor_pm_data();
> > +	return xen_upload_processor_pm_data();
> >  }
> >  
> > -static struct syscore_ops xap_syscore_ops = {
> > -	.resume	= xen_acpi_processor_resume,
> > +struct notifier_block xen_acpi_processor_resume_nb = {
> > +	.notifier_call = xen_acpi_processor_resume,
> >  };
> >  
> >  static int __init xen_acpi_processor_init(void)
> > @@ -555,7 +556,7 @@ static int __init xen_acpi_processor_init(void)
> >  	if (rc)
> >  		goto err_unregister;
> >  
> > -	register_syscore_ops(&xap_syscore_ops);
> > +	xen_resume_notifier_register(&xen_acpi_processor_resume_nb);
> >  
> >  	return 0;
> >  err_unregister:
> > @@ -574,7 +575,7 @@ static void __exit xen_acpi_processor_exit(void)
> >  {
> >  	int i;
> >  
> > -	unregister_syscore_ops(&xap_syscore_ops);
> > +	xen_resume_notifier_unregister(&xen_acpi_processor_resume_nb);
> >  	kfree(acpi_ids_done);
> >  	kfree(acpi_id_present);
> >  	kfree(acpi_id_cst_present);
> > diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> > index fb2ea8f..6412358 100644
> > --- a/include/xen/xen-ops.h
> > +++ b/include/xen/xen-ops.h
> > @@ -16,6 +16,9 @@ void xen_mm_unpin_all(void);
> >  void xen_timer_resume(void);
> >  void xen_arch_resume(void);
> >  
> > +void xen_resume_notifier_register(struct notifier_block *nb);
> > +void xen_resume_notifier_unregister(struct notifier_block *nb);
> > +
> >  int xen_setup_shutdown_event(void);
> >  
> >  extern unsigned long *xen_contiguous_bitmap;

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 14:55:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 14:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsGD-0007l4-He; Mon, 10 Feb 2014 14:55:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WCsGC-0007kz-0L
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 14:55:00 +0000
Received: from [85.158.139.211:17059] by server-17.bemta-5.messagelabs.com id
	DE/A1-31975-348E8F25; Mon, 10 Feb 2014 14:54:59 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392044097!2884398!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17660 invoked from network); 10 Feb 2014 14:54:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 14:54:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1AEsrDl020385
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Feb 2014 14:54:54 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AEsoR1005556
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Feb 2014 14:54:53 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AEsocP014556; Mon, 10 Feb 2014 14:54:50 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 06:54:50 -0800
Message-ID: <52F8E88D.3020600@oracle.com>
Date: Mon, 10 Feb 2014 09:56:13 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: xen-devel@lists.xenproject.org
References: <1391898943.27190.5.camel@x220>
In-Reply-To: <1391898943.27190.5.camel@x220>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, Paul Bolle <pebolle@tiscali.nl>,
	Tony Luck <tony.luck@intel.com>, David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] ia64/xen: Remove Xen support for ia64 even
 more
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/08/2014 05:35 PM, Paul Bolle wrote:
> Commit d52eefb47d4e ("ia64/xen: Remove Xen support for ia64") removed
> the Kconfig symbol XEN_XENCOMM. But it didn't remove the code depending
> on that symbol. Remove that code now.
>
> Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
> ---
> Basically only tested with "git grep". There's a chance I might have
> missed some second order effects.
>
>   drivers/xen/Makefile            |   1 -
>   drivers/xen/xencomm.c           | 219 ----------------------------------------
>   include/xen/interface/xencomm.h |  41 --------
>   include/xen/xencomm.h           |  77 --------------
>   4 files changed, 338 deletions(-)
>   delete mode 100644 drivers/xen/xencomm.c
>   delete mode 100644 include/xen/interface/xencomm.h
>   delete mode 100644 include/xen/xencomm.h
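
[Editorial note: the "git grep" verification Paul describes can be sketched as
below. The sample tree under /tmp is hypothetical, only to make the check
self-contained; in practice the grep runs at the root of a kernel tree after
applying the patch.]

```shell
# Sketch of the post-removal check: scan the remaining tree for stale
# references to the removed xencomm code. The tree path and its contents
# are hypothetical stand-ins for a kernel source tree.
tree=/tmp/xencomm-check
mkdir -p "$tree/drivers/xen"
printf 'obj-$(CONFIG_XEN_GNTDEV) += gntdev.o\n' > "$tree/drivers/xen/Makefile"
if grep -rqi 'xencomm' "$tree"; then
    echo "stale xencomm references remain"
else
    echo "no xencomm references found"
fi
```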

Speaking of xencomm, do we still need it in Xen tree?

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:05:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:05:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsQ8-0008Ip-4T; Mon, 10 Feb 2014 15:05:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCsQ7-0008Id-0L
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 15:05:15 +0000
Received: from [85.158.139.211:26605] by server-11.bemta-5.messagelabs.com id
	98/A0-23886-AAAE8F25; Mon, 10 Feb 2014 15:05:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392044711!2879901!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23903 invoked from network); 10 Feb 2014 15:05:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:05:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101320328"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 15:05:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 10:05:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCsQ2-00028w-Qo;
	Mon, 10 Feb 2014 15:05:10 +0000
Date: Mon, 10 Feb 2014 15:04:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52F89013020000780011A952@nat28.tlf.novell.com>
Message-ID: <alpine.DEB.2.02.1402101502040.4373@kaball.uk.xensource.com>
References: <52E7D3BB02000078001179F5@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52F5587A.4010608@terremark.com>
	<52F89013020000780011A952@nat28.tlf.novell.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Xen 4.3.2-rc1 and 4.2.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Feb 2014, Jan Beulich wrote:
> >>> On 07.02.14 at 23:04, Don Slutz <dslutz@verizon.com> wrote:
> > On 01/29/14 06:40, Jan Beulich wrote:
> >> All,
> >>
> >> aiming at releases with, as before, presumably just one more RC on
> >> each of them, please test!
> > 
> > Tested 4.3.2-rc1 on CentOS 5.10 and Fedora 17.
> > 
> > CentOS 5.10 has a build issue with QEMU:
> > 
> > http://lists.xen.org/archives/html/xen-devel/2014-02/msg00084.html 
> 
> Is this a regression over 4.3.1?
> 
> In any event, it would be Stefano to take care of this, just like for
> 4.4.
> 

Don, would a backport of "configure: Disable libtool if -fPIE does not
work with it (bug #1257099)" (027c412ff71ad8bff6e335cc7932857f4ea74391
in qemu-upstream-unstable.git) fix the issue in 4.3 as well?  If so, I
would rather do that than introduce a workaround.

 
> > Has more info, for this testing I changed:
> > 
> > 
> > Author: Don Slutz <dslutz@verizon.com>
> > Date:   Fri Jan 31 22:37:04 2014 +0000
> > 
> >      Work around QEMU bug #1257099 on CentOS 5.10
> > 
> > diff --git a/tools/Makefile b/tools/Makefile
> > index e44a3e9..b411e60 100644
> > --- a/tools/Makefile
> > +++ b/tools/Makefile
> > @@ -187,7 +187,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
> >                  source=.; \
> >          fi; \
> >          cd qemu-xen-dir; \
> > -       $$source/configure --enable-xen --target-list=i386-softmmu \
> > +       $$source/configure --enable-xen --target-list=i386-softmmu --disable-smartcard-nss \
> >                  --prefix=$(PREFIX) \
> >                  --source-path=$$source \
> >                  --extra-cflags="-I$(XEN_ROOT)/tools/include \
> > 
> > and was able to use the resulting build for some simple testing.  No new 
> > issues were found.
> >       -Don Slutz
> > 
> >> Thanks, Jan
> >>
> >>
> >> _______________________________________________
> >> Xen-devel mailing list
> >> Xen-devel@lists.xen.org 
> >> http://lists.xen.org/xen-devel 
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:16:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:16:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsaj-0000gU-NN; Mon, 10 Feb 2014 15:16:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCsai-0000gP-FP
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 15:16:12 +0000
Received: from [193.109.254.147:60496] by server-7.bemta-14.messagelabs.com id
	F2/F7-23424-B3DE8F25; Mon, 10 Feb 2014 15:16:11 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392045370!3300364!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11642 invoked from network); 10 Feb 2014 15:16:10 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:16:10 -0000
Received: by mail-ea0-f176.google.com with SMTP id h14so3062900eaj.7
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 07:16:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ddxtMYVLHuYlxgkHvDjWXL9ukF8ItxOpRK2ewOFejzI=;
	b=LmjpzHJeBPBYFAOdo47IXEJpezm5mJ5JBtGq3+PYuU2C48V3jcSrIUCdW7YLfoEUgh
	bE6l6QeFkCA918Grh536Sb+BCqdJe65zJoYMGmgBVNBEr7NN30OvPfeP+sXJKgAlfXsy
	DoFV2DoTvifMhTRuQ9X++CEjVBQCPzXiTWF8VJTYP8tzsO8MDW8BJ/dsQ9RoAtfVTJm6
	d+Far6oBmIFkaM0SVILfsmHAeP5mbiNAEU4lD9+PNL+N74cIAyxkiJW7dmpPKMrMowNb
	pm/FcYDHDOn2dVolLLbR5B1Y7z+/fJtjDZtCDq3Uy3ijdL9b9kkvMYlSN6TbsEA2sR/Q
	GsDQ==
X-Gm-Message-State: ALoCoQmEjTUhWiS61aw97GLxu6Bmh7Qav/0Ra/nGD21Ia1g1HlD0sL8rFI2p5eSfXIDK2G/Om6zF
X-Received: by 10.14.179.129 with SMTP id h1mr37632221eem.26.1392045370180;
	Mon, 10 Feb 2014 07:16:10 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	n41sm25897157eeg.16.2014.02.10.07.16.06 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 07:16:08 -0800 (PST)
Message-ID: <52F8ED31.609@linaro.org>
Date: Mon, 10 Feb 2014 15:16:01 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
	<1392039724.5117.94.camel@kazak.uk.xensource.com>
In-Reply-To: <1392039724.5117.94.camel@kazak.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/10/2014 01:42 PM, Ian Campbell wrote:
> On Sun, 2014-02-09 at 16:51 +0000, Julien Grall wrote:
>> Hello Mukesh,
>>
>> On 30/01/14 01:33, Mukesh Rathor wrote:
>>>>> I'm not sure what you mean:
>>>>>   - the code that Mukesh is adding doesn't have a struct page, it's
>>>>>     just grabbing the foreign domid from the hypercall arg;
>>>>>   - if we did have a struct page, we'd just need to take a ref to
>>>>>     stop the owner changing underfoot; and
>>>>>   - get_pg_owner() takes a domid anyway.
>>>>
>>>> Sorry, I was confused/mislead by the name...
>>>>
>>>> rcu_lock_live_remote_domain_by_id does look like what is needed.
>>
>> Following the xentrace thread: 
>> http://www.gossamer-threads.com/lists/xen/devel/315883, 
>> rcu_lock_live_remote_domain_by_id will not work correctly.
>>
>> On Xen on ARM, xentrace uses this hypercall to map Xen pages (via 
>> DOMID_XEN). In this case, rcu_lock_*domain* will always fail, which will 
>> prevent xentrace from working on Xen on ARM (and on PVH).
> 
> I'm not sure how that extends to add_to_physmap though -- doing add to
> physmap of a DOMID_XEN owned page through the "back door" in this way
> isn't supposed to work.

Currently xentrace uses xc_map_foreign_page to map the trace buffer
(passing DOMID_XEN as the domid argument).

AFAIK, on an x86 PV domain, this call results in a HYPERVISOR_mmu_update,
which allows a privileged domain to map Xen pages (with the dummy XSM
policy).

On ARM, a call to xc_map_foreign_page ends up in
XENMEM_add_to_physmap_range(XENMAPSPACE_gmfn_foreign).

For both architectures, you can look at the function
xen_remap_domain_mfn_range (implemented differently on ARM and x86),
which is the last function called before entering the hypervisor.
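
[Editorial note: the call chains described in this message can be summarized
as the rough sketch below, drawn only from the message itself; it is not
verified against the Linux or Xen source.]

```text
xentrace (tool)
  -> xc_map_foreign_page(..., DOMID_XEN, ...)        # libxc
       -> xen_remap_domain_mfn_range                 # kernel, per-arch
            x86 PV: HYPERVISOR_mmu_update
            ARM:    XENMEM_add_to_physmap_range(XENMAPSPACE_gmfn_foreign)
```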

If we don't modify the XENMEM_add_to_physmap hypercall, we will have to
add a new way to map Xen pages for xentrace & co.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:17:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCscE-0000m3-8S; Mon, 10 Feb 2014 15:17:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WCscC-0000lo-Bs
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 15:17:44 +0000
Received: from [85.158.139.211:36964] by server-3.bemta-5.messagelabs.com id
	60/9F-13671-79DE8F25; Mon, 10 Feb 2014 15:17:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392045461!2943483!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17193 invoked from network); 10 Feb 2014 15:17:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:17:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101325182"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 15:17:40 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 10:17:40 -0500
Message-ID: <52F8ED92.3050505@citrix.com>
Date: Mon, 10 Feb 2014 16:17:38 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 12:17, Andrew Cooper wrote:
> This reverts large amounts of:
>   9607327abbd3e77bde6cc7b5327f3efd781fc06e
>     "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
>   620d5dad54008e40798c4a0c4322aef274c36fa3
>     "x86/HVM: assorted RTC emulation adjustments"
> 
> and by extension:
>   f3347f520cb4d8aa4566182b013c6758d80cbe88
>     "x86/HVM: adjust IRQ (de-)assertion"
>   c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
>     "x86/HVM: fix processing of RTC REG_B writes"
>   527824f41f5fac9cba3d4441b2e73d3118d98837
>     "x86/hvm: Centralize and simplify the RTC IRQ logic."
> 
> The current code has a pathological case, tickled by the access pattern of
> Windows 2003 Server SP2.  Occasionally on boot (which I presume is during a
> time calibration against the RTC Periodic Timer), Windows gets stuck in an
> infinite loop reading RTC REG_C.  This affects 32- and 64-bit guests.
> 
> In the pathological case, the VM state looks like this:
>   * RTC: 64Hz period, periodic interrupts enabled
>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
> 
> With an instrumented Xen, dumping the periodic timers with a guest in this
> state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.
> 
> Windows is presumably waiting for reads of REG_C to drop to 0, and reading
> REG_C clears the value each time in the emulated RTC.  However:
> 
>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>   * pt_update_irq() always finds the RTC as earliest_pt.
>   * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack mode.  It
>     returns true, indicating that pt_update_irq() should really inject the
>     interrupt.
>   * pt_update_irq() decides that it doesn't need to fake up part of
>     pt_intr_post() because this is a real interrupt.
>   * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
>     pending, so exits early without calling pt_intr_post().
> 
> The underlying problem here arises because the AF and UF bits of RTC
> interrupt state are modelled by the RTC code, but PF is modelled by the pt
> code.  The root cause of the Windows infinite loop is that RTC_PF is being
> re-set on vmentry before the interrupt logic has worked out that it can't
> actually inject an RTC interrupt, causing Windows to erroneously read
> (RTC_PF|RTC_IRQF) when it should be reading 0.
> 
> This patch reverts the RTC_PF handling logic to its former state, whereby
> rtc_periodic_cb() is called strictly when the periodic timer logic has
> successfully injected a periodic interrupt.  In doing so, it is important that
> the RTC code itself never directly triggers an interrupt for the periodic
> timer (other than the case when setting REG_B.PIE, where the pt code will have
> dropped the interrupt).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Tim Deegan <tim@xen.org>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
> 
> I still don't know exactly what condition causes Windows to tickle this
> behaviour.  It is seen about 1 or 2 times in 9 tests running a 12-hour VM
> lifecycle test.  Over the weekend, 100 of these tests have passed without a
> single recurrence of the infinite loop.  The change has also passed an
> extended Windows regression test, so it would appear that other versions of
> Windows are still fine with the change.
> 
> Roger: as this caused issues for FreeBSD, would you mind testing it again
> please?

Tested-by: Roger Pau Monné <roger.pau@citrix.com>
On FreeBSD 10.0, 9.2 and 8.4

No apparent regressions AFAICT.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:27:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCslZ-0001NN-Hs; Mon, 10 Feb 2014 15:27:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCslX-0001NI-F6
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 15:27:23 +0000
Received: from [85.158.137.68:13016] by server-14.bemta-3.messagelabs.com id
	8E/5E-08196-ADFE8F25; Mon, 10 Feb 2014 15:27:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392046040!874752!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20971 invoked from network); 10 Feb 2014 15:27:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99501347"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 15:27:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 10:27:19 -0500
Message-ID: <1392046038.26657.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 10 Feb 2014 15:27:18 +0000
In-Reply-To: <52F8ED31.609@linaro.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
	<1392039724.5117.94.camel@kazak.uk.xensource.com>
	<52F8ED31.609@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 15:16 +0000, Julien Grall wrote:
> Hi Ian,
> 
> On 02/10/2014 01:42 PM, Ian Campbell wrote:
> > On Sun, 2014-02-09 at 16:51 +0000, Julien Grall wrote:
> >> Hello Mukesh,
> >>
> >> On 30/01/14 01:33, Mukesh Rathor wrote:
> >>>>> I'm not sure what you mean:
> >>>>>   - the code that Mukesh is adding doesn't have a struct page, it's
> >>>>>     just grabbing the foreign domid from the hypercall arg;
> >>>>>   - if we did have a struct page, we'd just need to take a ref to
> >>>>>     stop the owner changing underfoot; and
> >>>>>   - get_pg_owner() takes a domid anyway.
> >>>>
> >>>> Sorry, I was confused/mislead by the name...
> >>>>
> >>>> rcu_lock_live_remote_domain_by_id does look like what is needed.
> >>
> >> Following the xentrace thread: 
> >> http://www.gossamer-threads.com/lists/xen/devel/315883, 
> >> rcu_lock_live_remote_domain_by_id will not work correctly.
> >>
> >> On Xen on ARM, xentrace uses this hypercall to map Xen pages (via 
> >> DOMID_XEN). In this case, rcu_lock_*domain* will always fail, which 
> >> will prevent xentrace from working on Xen on ARM (and on PVH).
> > 
> > I'm not sure how that extends to add_to_physmap though -- doing add to
> > physmap of a DOMID_XEN owned page through the "back door" in this way
> > isn't supposed to work.
> 
> Currently xentrace uses xc_map_foreign_page to map the trace buffer
> (with DOMID_XEN as the domid argument).

Sorry, I misunderstood, I thought you were suggesting that rcu_lock_...
didn't work for xentrace and so couldn't work for add_to_physmap either
-- but actually xentrace ends up using add_to_physmap itself.

> AFAIK, on an x86 PV domain, this call results in a
> HYPERVISOR_mmu_update, which allows a privileged domain to map Xen
> pages (with the dummy XSM policy).
> 
> On ARM, a call to xc_map_foreign_page ends up in
> XENMEM_add_to_physmap_range(XENMAPSPACE_gmfn_foreign).
> 
> For both architectures, you can look at the function
> xen_remap_domain_mfn_range (implemented differently on ARM and x86),
> which is the last function called before entering the hypervisor.
> 
> If we don't modify the XENMEM_add_to_physmap hypercall, we will have
> to add a new way to map Xen pages for xentrace & co.

Wouldn't it be incorrect to generically return OK for mapping a
DOMID_XEN-owned page? At the very least, something needs to validate
that the particular mfn being mapped is supposed to be shared with the
guest in question.

TBH, this mechanism for sharing Xen pages with guests doesn't seem
like a good fit for PVH or HVM guests/dom0. Perhaps a specific
mapspace would be more appropriate?

Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:32:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsqR-0001vo-JA; Mon, 10 Feb 2014 15:32:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WCsqP-0001vh-Tj
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 15:32:26 +0000
Received: from [85.158.137.68:49210] by server-15.bemta-3.messagelabs.com id
	27/2D-19263-901F8F25; Mon, 10 Feb 2014 15:32:25 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392046342!859323!1
X-Originating-IP: [141.146.126.69]
From xen-devel-bounces@lists.xen.org Mon Feb 10 15:32:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsqR-0001vo-JA; Mon, 10 Feb 2014 15:32:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WCsqP-0001vh-Tj
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 15:32:26 +0000
Received: from [85.158.137.68:49210] by server-15.bemta-3.messagelabs.com id
	27/2D-19263-901F8F25; Mon, 10 Feb 2014 15:32:25 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392046342!859323!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18781 invoked from network); 10 Feb 2014 15:32:24 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 15:32:24 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1AFWK6R023294
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Feb 2014 15:32:21 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1AFWJ26004855
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Feb 2014 15:32:20 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AFWJMZ024969; Mon, 10 Feb 2014 15:32:19 GMT
Received: from [IPV6:2607:fb90:290d:93c3:e41f:c12d:c066:260e] (/172.56.22.79)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 07:32:19 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52F8E1D4.1070209@citrix.com>
References: <1392040442-22457-1-git-send-email-david.vrabel@citrix.com>
	<52F8E1D4.1070209@citrix.com>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 10 Feb 2014 10:32:16 -0500
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <35e280df-f715-4cba-af1f-c34a1e8f1f55@email.android.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: install xen/gntdev.h and xen/gntalloc.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On February 10, 2014 9:27:32 AM EST, David Vrabel <david.vrabel@citrix.com> wrote:
>On 10/02/14 13:54, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>> 
>> xen/gntdev.h and xen/gntalloc.h both provide userspace ABIs so they
>> should be installed.
>
>I think this is a bug fix so please queue for 3.14 and Cc: stable,
>thanks.
>
>David
>
>_______________________________________________
>Xen-devel mailing list
>Xen-devel@lists.xen.org
>http://lists.xen.org/xen-devel

Talking to yourself, eh? :-)

In case it was meant for me, I will stash it along with the other patches today on #linux-next
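
For anyone following along: making a kernel UAPI header visible to `make headers_install` amounts to one Kbuild entry per header. A sketch of what David's patch presumably contains (the path include/uapi/xen/Kbuild is an assumption; the diff itself is not quoted in this thread):

```make
# include/uapi/xen/Kbuild (hypothetical excerpt)
header-y += gntalloc.h
header-y += gntdev.h
```

After such an entry, `make headers_install` copies both files into the exported include tree alongside the other xen/ ABI headers.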

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:33:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsr9-00020Q-14; Mon, 10 Feb 2014 15:33:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WCsr7-00020F-ML
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 15:33:09 +0000
Received: from [193.109.254.147:42825] by server-12.bemta-14.messagelabs.com
	id DD/0D-17220-531F8F25; Mon, 10 Feb 2014 15:33:09 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392046387!3302947!1
X-Originating-IP: [74.125.82.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19112 invoked from network); 10 Feb 2014 15:33:08 -0000
Received: from mail-we0-f182.google.com (HELO mail-we0-f182.google.com)
	(74.125.82.182)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:33:08 -0000
Received: by mail-we0-f182.google.com with SMTP id u57so4289210wes.41
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 07:33:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=82/HLLJ5qT5CHemh3b+jmRFenduJjf5QCoBFC9WeBaA=;
	b=gF36x+OHQf/l2mmouy3nJUBcHKpIPmrgVKbb0CdXPlsN90XtwLbaeyS/B0NBfbyrCu
	zwLjz+Uu7vSNI0v0nnAzpaa4Nrb8NwlCO7Vnzphz3qv3HOb6jMlSfUbVc8ZZZ6kmjg8I
	ZYpN2xTSBmoI2Bq6LSq+fwGapp7rM+14QArYM2XceD3NL3K2tLsTBHTE7QifMQjjr1Gu
	jB4u+RZF7QJFhUVUERYnFviziA4gKY6NIahPqbP20UHrCBp0UMgOJdGdu8naQ3k/q9BG
	5HMD4dzJ4KnRXhPfsRq6liw+fY/SW/0ITJL5GegwY6W+zIVuZ8FTSPWsPzLG786oftto
	CH5g==
X-Received: by 10.180.100.72 with SMTP id ew8mr11040933wib.16.1392046387852;
	Mon, 10 Feb 2014 07:33:07 -0800 (PST)
Received: from [172.16.10.73] (212.44.45.197.ip.redstone-isp.net.
	[212.44.45.197])
	by mx.google.com with ESMTPSA id y13sm36259194wjr.8.2014.02.10.07.33.03
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 07:33:06 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Mon, 10 Feb 2014 15:33:02 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Message-ID: <CF1EA1AE.70810%keir.xen@gmail.com>
Thread-Topic: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
Thread-Index: Ac8mdWJYsa3rBmCRE0GC5tG/uGHI7A==
In-Reply-To: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
Mime-version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?ISO-8859-1?B?TW9ubuk=?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/2014 11:17, "Andrew Cooper" <andrew.cooper3@citrix.com> wrote:

> George: Regarding 4.4 - I request a release ack.  However, this is quite a big
> and complicated patch; it took Tim and myself several hours to write, even
> given an understanding of the pathological case.  Having said that, about half
> the patch is just reversions (listed above) with the other half being brand
> new logic.

It does at least look to simplify the code and make it clearer, as well as
fixing this bug.

Acked-by: Keir Fraser <keir@xen.org>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:33:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsrR-00023I-FI; Mon, 10 Feb 2014 15:33:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCsrQ-000237-P5
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 15:33:28 +0000
Received: from [193.109.254.147:48818] by server-5.bemta-14.messagelabs.com id
	CA/36-16688-841F8F25; Mon, 10 Feb 2014 15:33:28 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392046407!3306637!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6580 invoked from network); 10 Feb 2014 15:33:27 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:33:27 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so3070571eak.16
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 07:33:26 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=dQbmpDh4Ve+a9zbesqredo5AK6sIay1km0VgaZFhPbg=;
	b=bSvuLrGmBeSE53KjrjUBCXbwtk3GV8h8CAIh72mYBMW2pGw3qx5fZub0ANUWgQRNFh
	gXNw2qnkIq22yJ2tPRT0m7c0lj7g1g50YLGp2NSKxfuHEo6sarptg5qYtA+g8TjqDlFQ
	0sy0Y/em99WlcJi9sNfQ1Cv4X92T5im+WhNRguh3T9TpUlLu4orehaCf7bhzVn8+qaE2
	xzux7dn2sRWahJYrkNr53vfgen3IBASelB0NU1hi78dAbyKeZ+rJJqleJE6zAyK6c75S
	+2bBSjeRkq36f7KESDmoA2U4wexzUUSl71q6TFBPcRj5mFlPdoXF6anOIqcMyfeOl01Z
	UQ7Q==
X-Gm-Message-State: ALoCoQmuwv4NpURzbQUDBUawHQwDAEO7gi3WG/gmfL539088E9XA9S5laDrppVMS15iA8D5eejok
X-Received: by 10.15.98.68 with SMTP id bi44mr3550285eeb.67.1392046406579;
	Mon, 10 Feb 2014 07:33:26 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	o45sm42325466eeb.18.2014.02.10.07.33.24 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 07:33:25 -0800 (PST)
Message-ID: <52F8F13E.1070308@linaro.org>
Date: Mon, 10 Feb 2014 15:33:18 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
	<1392039724.5117.94.camel@kazak.uk.xensource.com>
	<52F8ED31.609@linaro.org>
	<1392046038.26657.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1392046038.26657.19.camel@kazak.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 03:27 PM, Ian Campbell wrote:
> On Mon, 2014-02-10 at 15:16 +0000, Julien Grall wrote:
>> Hi Ian,
>>
>> On 02/10/2014 01:42 PM, Ian Campbell wrote:
>>> On Sun, 2014-02-09 at 16:51 +0000, Julien Grall wrote:
>>>> Hello Mukesh,
>>>>
>>>> On 30/01/14 01:33, Mukesh Rathor wrote:
>>>>>>> I'm not sure what you mean:
>>>>>>>   - the code that Mukesh is adding doesn't have a struct page, it's
>>>>>>>     just grabbing the foreign domid from the hypercall arg;
>>>>>>>   - if we did have a struct page, we'd just need to take a ref to
>>>>>>>     stop the owner changing underfoot; and
>>>>>>>   - get_pg_owner() takes a domid anyway.
>>>>>>
>>>>>> Sorry, I was confused/misled by the name...
>>>>>>
>>>>>> rcu_lock_live_remote_domain_by_id does look like what is needed.
>>>>
>>>> Following the xentrace thread: 
>>>> http://www.gossamer-threads.com/lists/xen/devel/315883, 
>>>> rcu_lock_live_remote_domain_by_id will not work correctly.
>>>>
>>>> On Xen on ARM, xentrace uses this hypercall to map a XEN page (via 
>>>> DOMID_XEN). In this case, rcu_lock_*domain* will always fail, which will 
>>>> prevent xentrace from working on Xen on ARM (and on PVH).
>>>
>>> I'm not sure how that extends to add_to_physmap though -- doing add to
>>> physmap of a DOMID_XEN owned page through the "back door" in this way
>>> isn't supposed to work.
>>
>> Currently xentrace is using xc_map_foreign_page to map the trace buffer
>> (with DOMID_XEN in argument).
> 
> Sorry, I misunderstood, I thought you were suggesting that rcu_lock_...
> didn't work for xentrace and so couldn't work for add_to_physmap either
> -- but actually xentrace ends up using add_to_physmap itself.
> 
>> AFAIK, on an x86 PV domain, this call results in a
>> HYPERVISOR_mmu_update, which allows mapping a Xen page in a privileged
>> domain (with the dummy XSM policy).
>>
>> For ARM, a call to xc_map_foreign_page will end up as
>> XENMEM_add_to_physmap_range(XENMAPSPACE_gmfn_foreign).
>>
>> For both architectures, you can look at the function
>> xen_remap_map_domain_mfn_range (implemented differently on ARM and x86),
>> which is the last function called before going to the hypervisor.
>>
>> If we don't modify the XENMEM_add_to_physmap hypercall, we will have to
>> add a new way to map Xen pages for xentrace & co.
> 
> Wouldn't it be incorrect to generically return OK for mapping a
> DOMID_XEN owned page -- at least something needs to validate that the
> particular mfn being mapped is supposed to be shared with the guest in
> question.

It's already the case. By default a Xen heap page doesn't belong to
DOMID_XEN; Xen has to explicitly call share_xen_page_with_privileged_guests
(see an example in xen/common/trace.c:244) to assign DOMID_XEN to the
given page.

> 
> TBH, it doesn't seem that this mechanism for sharing xenpages with
> guests is a good fit for PVH or HVM guests/dom0. Perhaps a specific
> mapspace would be more appropriate?

Following my explanation just above, I don't think we need a specific
mapspace. XSM and DOMID_XEN already correctly protect the mapping of
Xen heap pages.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 15:38:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 15:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCsvz-0002MJ-8n; Mon, 10 Feb 2014 15:38:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCsvx-0002MD-Ny
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 15:38:09 +0000
Received: from [85.158.139.211:48074] by server-3.bemta-5.messagelabs.com id
	A0/73-13671-062F8F25; Mon, 10 Feb 2014 15:38:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392046686!2889205!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20776 invoked from network); 10 Feb 2014 15:38:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 15:38:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99504737"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 15:37:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 10:37:34 -0500
Message-ID: <1392046653.26657.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 10 Feb 2014 15:37:33 +0000
In-Reply-To: <52F8F13E.1070308@linaro.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
	<1392039724.5117.94.camel@kazak.uk.xensource.com>
	<52F8ED31.609@linaro.org>
	<1392046038.26657.19.camel@kazak.uk.xensource.com>
	<52F8F13E.1070308@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 15:33 +0000, Julien Grall wrote:
> On 02/10/2014 03:27 PM, Ian Campbell wrote:
> > Wouldn't it be incorrect to generically return OK for mapping a
> > DOMID_XEN owned page -- at least something needs to validate that the
> > particular mfn being mapped is supposed to be shared with the guest in
> > question.
> 
> It's already the case. By default a xen heap page doesn't belong to
> DOMID_XEN. Xen has to explicitly call share_xen_page_with_privileged_guests
> (see an example in xen/common/trace.c:244) to set DOMID_XEN on the
> given page.

Ah, I didn't know this.

> > TBH, it doesn't seem that this mechanism for sharing xenpages with
> > guests is a good fit for PVH or HVM guests/dom0. Perhaps a specific
> > mapspace would be more appropriate?
> 
> Following my explanation just above, I don't think we need a
> specific mapspace. XSM and DOMID_XEN will already correctly protect the
> mapping of xen heap pages.

Right.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtRO-0004TE-8l; Mon, 10 Feb 2014 16:10:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCtRM-0004T3-M1
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 16:10:36 +0000
Received: from [85.158.143.35:41024] by server-3.bemta-4.messagelabs.com id
	18/73-11539-BF9F8F25; Mon, 10 Feb 2014 16:10:35 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392048634!4572211!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24061 invoked from network); 10 Feb 2014 16:10:35 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 16:10:35 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so3029347eek.19
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 08:10:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=yQiDMdBqzbxubvOAGFoGk/9kbZMnV6wgoHsQp45+bPc=;
	b=cxPADYTYYsYtYl5jfvRimlXW2pa4QBNQvBHi4w1NDwdd3w+GzVmtOcCUOc15Z8J578
	pfZwR1FvwCP9crK2upy2N8a1rClvGMTAPgGl1pmfEdTyebT91RYvrl84EMUT2V7weT+C
	BB4E4YMiEyu+heiwC8oke65VNpoISpwjSRtk427F4nEisRaAjdc93ciTa4ZhtFqSolMf
	iNN6v0Pi/bab3UFPpgSv0+cXor/BZfcwPOGezs2S93HWZMg3Lm234TcZqllxQpTJvnAF
	71SfyQ3GdKyokmE5jsbuRz0lH/aAHeVaQEnmJXWw+9Bm0OHlcHrBDgSMCEeh2dRPgPTI
	pnCw==
X-Gm-Message-State: ALoCoQmIFifPVsw1nbNvNkkc7ZVseHl0ZKLlTBa/YShBHO80CgDgRxt9V1VcPJLW8hchyOV+66GY
X-Received: by 10.14.223.71 with SMTP id u47mr3762078eep.89.1392048634594;
	Mon, 10 Feb 2014 08:10:34 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm35850015eef.2.2014.02.10.08.10.33
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 08:10:33 -0800 (PST)
Message-ID: <52F8F9F8.9000702@linaro.org>
Date: Mon, 10 Feb 2014 16:10:32 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1391794991-5919-7-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xenproject.org, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 05:43 PM, Julien Grall wrote:
> DOM0 on ARM will have the same requirements as DOM0 PVH when the iommu is enabled.
> Both PVH and ARM guests have paging mode translate enabled, so Xen can use that
> to know whether it needs to check the requirements.
> 
> Rename the function and remove the word "pvh" from the commit message.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/drivers/passthrough/iommu.c |   14 +++++++++-----
>  1 file changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 19b0e23..26a5d91 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
>      return hd->platform_ops->init(d);
>  }
>  
> -static __init void check_dom0_pvh_reqs(struct domain *d)
> +static __init void check_dom0_reqs(struct domain *d)
>  {
> +    if ( !paging_mode_translate(d) )
> +        return;
> +
>      if ( !iommu_enabled )
> -        panic("Presently, iommu must be enabled for pvh dom0\n");
> +        panic("Presently, iommu must be enabled to use dom0 with translate "
> +              "paging mode\n");

Hmmm... this change is wrong. I forgot that the iommu doesn't exist on some
ARM platforms (for instance the Arndale).

Do we really need this check for PVH? If yes, I will replace the check
with: is_pvh_domain(d) && !iommu_enabled.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:13:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtUY-0004aa-63; Mon, 10 Feb 2014 16:13:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WCtUW-0004aR-Is
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:13:52 +0000
Received: from [85.158.143.35:10848] by server-3.bemta-4.messagelabs.com id
	CC/49-11539-FBAF8F25; Mon, 10 Feb 2014 16:13:51 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392048830!4583207!1
X-Originating-IP: [209.85.212.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27614 invoked from network); 10 Feb 2014 16:13:50 -0000
Received: from mail-wi0-f174.google.com (HELO mail-wi0-f174.google.com)
	(209.85.212.174)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 16:13:50 -0000
Received: by mail-wi0-f174.google.com with SMTP id f8so2984337wiw.7
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 08:13:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=h59J3fNYxHzJ1kyshaUw0P8cmzKFgd8c0A34nDotSaY=;
	b=avSuHwpNaVuI+Q1pCAQ/ehNRFPyonIUZRMOtE8QcsHAGFPF/g2wTpVUfSfzIYdhsHV
	Fe/nj8bgI5TtUgYhYo0g9l2Hh2tRmbakGBThJQ5hP0WjHBNg5X3ZNyJkGbXUmwL7+/JQ
	nKBw3+2HoI/bU+ejC+TPzfPoLEGapw4WJgteiFPHn6xlnJMD51oGXwMFfxn4+5B/kra5
	bE00TQ+/Gw1XNJ921V3r7UYqfaKznfe7JW4e0/MBoYuBB0pmSE/hJ8SHv6g3wWTtH8aP
	m06hu1JLncqQJ7kobsbJoApyGfv9PzGEcoE5XjJl6NGTFntTw7ibPk9VBRI8slSv41/1
	CSOw==
MIME-Version: 1.0
X-Received: by 10.180.185.197 with SMTP id fe5mr11014321wic.56.1392048830622; 
	Mon, 10 Feb 2014 08:13:50 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 10 Feb 2014 08:13:50 -0800 (PST)
In-Reply-To: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
Date: Mon, 10 Feb 2014 16:13:50 +0000
X-Google-Sender-Auth: x9f5VOQzyWbAE9XSbbb1UaOYWRo
Message-ID: <CAFLBxZYrQf6w3N_uz4=6eEyKL_rU6EcDoKSD90ASSUt1jOLBkw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Yang Zhang <yang.z.zhang@intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it HAP dirty vram tracking breaks pass-through
thanks

On Mon, Feb 10, 2014 at 6:14 AM, Yang Zhang <yang.z.zhang@intel.com> wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
>
> When enabling log dirty mode, Xen sets all of the guest's memory to read-only.
> In a HAP-enabled domain, it clears the write bit in every EPT entry to make
> the memory read-only. This causes a problem if VT-d shares the page table
> with EPT: a device may issue a DMA write request, the VT-d engine will find
> the target memory read-only, and the result is a VT-d fault.
>
> Currently, two places enable log dirty mode: migration and vram tracking.
> Migration with a device assigned is not allowed, so that case is fine. For vram,
> there is no need to set all memory to read-only; tracking only the vram range is enough.
>
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/mm/hap/hap.c       |   20 ++++++++++++++------
>  xen/arch/x86/mm/paging.c        |    9 +++++----
>  xen/arch/x86/mm/shadow/common.c |    2 +-
>  xen/include/asm-x86/domain.h    |    2 +-
>  xen/include/asm-x86/paging.h    |    5 +++--
>  xen/include/asm-x86/shadow.h    |    2 +-
>  6 files changed, 25 insertions(+), 15 deletions(-)
>
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index d3f64bd..5f75636 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -82,7 +82,7 @@ int hap_track_dirty_vram(struct domain *d,
>          if ( !paging_mode_log_dirty(d) )
>          {
>              hap_logdirty_init(d);
> -            rc = paging_log_dirty_enable(d);
> +            rc = paging_log_dirty_enable(d, 0);
>              if ( rc )
>                  goto out;
>          }
> @@ -167,17 +167,25 @@ out:
>  /*            HAP LOG DIRTY SUPPORT             */
>  /************************************************/
>
> -/* hap code to call when log_dirty is enable. return 0 if no problem found. */
> -static int hap_enable_log_dirty(struct domain *d)
> +/*
> + * hap code to call when log_dirty is enabled. Return 0 if no problem found.
> + *
> + * NB: A domain that has a device assigned should not set log_global, because
> + * there is no way to track memory updates coming from the device.
> + */
> +static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
>  {
>      /* turn on PG_log_dirty bit in paging mode */
>      paging_lock(d);
>      d->arch.paging.mode |= PG_log_dirty;
>      paging_unlock(d);
>
> -    /* set l1e entries of P2M table to be read-only. */
> -    p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
> -    flush_tlb_mask(d->domain_dirty_cpumask);
> +    if ( log_global )
> +    {
> +        /* set l1e entries of P2M table to be read-only. */
> +        p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
> +        flush_tlb_mask(d->domain_dirty_cpumask);
> +    }
>      return 0;
>  }
>
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 21344e5..ab5eacb 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -164,7 +164,7 @@ void paging_free_log_dirty_bitmap(struct domain *d)
>      paging_unlock(d);
>  }
>
> -int paging_log_dirty_enable(struct domain *d)
> +int paging_log_dirty_enable(struct domain *d, bool_t log_global)
>  {
>      int ret;
>
> @@ -172,7 +172,7 @@ int paging_log_dirty_enable(struct domain *d)
>          return -EINVAL;
>
>      domain_pause(d);
> -    ret = d->arch.paging.log_dirty.enable_log_dirty(d);
> +    ret = d->arch.paging.log_dirty.enable_log_dirty(d, log_global);
>      domain_unpause(d);
>
>      return ret;
> @@ -489,7 +489,8 @@ void paging_log_dirty_range(struct domain *d,
>   * These function pointers must not be followed with the log-dirty lock held.
>   */
>  void paging_log_dirty_init(struct domain *d,
> -                           int    (*enable_log_dirty)(struct domain *d),
> +                           int    (*enable_log_dirty)(struct domain *d,
> +                                                      bool_t log_global),
>                             int    (*disable_log_dirty)(struct domain *d),
>                             void   (*clean_dirty_bitmap)(struct domain *d))
>  {
> @@ -590,7 +591,7 @@ int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
>      case XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY:
>          if ( hap_enabled(d) )
>              hap_logdirty_init(d);
> -        return paging_log_dirty_enable(d);
> +        return paging_log_dirty_enable(d, 1);
>
>      case XEN_DOMCTL_SHADOW_OP_OFF:
>          if ( paging_mode_log_dirty(d) )
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 0bfa595..11c6b62 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -3418,7 +3418,7 @@ shadow_write_p2m_entry(struct vcpu *v, unsigned long gfn,
>  /* Shadow specific code which is called in paging_log_dirty_enable().
>   * Return 0 if no problem found.
>   */
> -int shadow_enable_log_dirty(struct domain *d)
> +int shadow_enable_log_dirty(struct domain *d, bool_t log_global)
>  {
>      int ret;
>
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index ea72db2..4ff89f0 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -169,7 +169,7 @@ struct log_dirty_domain {
>      unsigned int   dirty_count;
>
>      /* functions which are paging mode specific */
> -    int            (*enable_log_dirty   )(struct domain *d);
> +    int            (*enable_log_dirty   )(struct domain *d, bool_t log_global);
>      int            (*disable_log_dirty  )(struct domain *d);
>      void           (*clean_dirty_bitmap )(struct domain *d);
>  };
> diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
> index cd7ee3b..8dd2a61 100644
> --- a/xen/include/asm-x86/paging.h
> +++ b/xen/include/asm-x86/paging.h
> @@ -143,14 +143,15 @@ void paging_log_dirty_range(struct domain *d,
>                              uint8_t *dirty_bitmap);
>
>  /* enable log dirty */
> -int paging_log_dirty_enable(struct domain *d);
> +int paging_log_dirty_enable(struct domain *d, bool_t log_global);
>
>  /* disable log dirty */
>  int paging_log_dirty_disable(struct domain *d);
>
>  /* log dirty initialization */
>  void paging_log_dirty_init(struct domain *d,
> -                           int  (*enable_log_dirty)(struct domain *d),
> +                           int  (*enable_log_dirty)(struct domain *d,
> +                                                    bool_t log_global),
>                             int  (*disable_log_dirty)(struct domain *d),
>                             void (*clean_dirty_bitmap)(struct domain *d));
>
> diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
> index 852023d..348915e 100644
> --- a/xen/include/asm-x86/shadow.h
> +++ b/xen/include/asm-x86/shadow.h
> @@ -82,7 +82,7 @@ void shadow_teardown(struct domain *d);
>  void shadow_final_teardown(struct domain *d);
>
>  /* shadow code to call when log dirty is enabled */
> -int shadow_enable_log_dirty(struct domain *d);
> +int shadow_enable_log_dirty(struct domain *d, bool_t log_global);
>
>  /* shadow code to call when log dirty is disabled */
>  int shadow_disable_log_dirty(struct domain *d);
> --
> 1.7.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it HAP dirty vram tracking breaks pass-through
thanks

On Mon, Feb 10, 2014 at 6:14 AM, Yang Zhang <yang.z.zhang@intel.com> wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
>
> When enabling log dirty mode, all of the guest's memory is set to read-only.
> In a HAP-enabled domain, the write bit is cleared in every EPT entry to make
> the memory read-only. This causes a problem if VT-d shares page tables with
> EPT: a device may issue a DMA write request, the VT-d engine finds the
> target memory read-only, and a VT-d fault results.
>
> Currently, two places enable log dirty mode: migration and vram tracking.
> Migration with a device assigned is not allowed, so that case is fine. Vram
> tracking does not need to set all memory to read-only; tracking only the
> vram range is enough.
>
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/mm/hap/hap.c       |   20 ++++++++++++++------
>  xen/arch/x86/mm/paging.c        |    9 +++++----
>  xen/arch/x86/mm/shadow/common.c |    2 +-
>  xen/include/asm-x86/domain.h    |    2 +-
>  xen/include/asm-x86/paging.h    |    5 +++--
>  xen/include/asm-x86/shadow.h    |    2 +-
>  6 files changed, 25 insertions(+), 15 deletions(-)
>
> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
> index d3f64bd..5f75636 100644
> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -82,7 +82,7 @@ int hap_track_dirty_vram(struct domain *d,
>          if ( !paging_mode_log_dirty(d) )
>          {
>              hap_logdirty_init(d);
> -            rc = paging_log_dirty_enable(d);
> +            rc = paging_log_dirty_enable(d, 0);
>              if ( rc )
>                  goto out;
>          }
> @@ -167,17 +167,25 @@ out:
>  /*            HAP LOG DIRTY SUPPORT             */
>  /************************************************/
>
> -/* hap code to call when log_dirty is enable. return 0 if no problem found. */
> -static int hap_enable_log_dirty(struct domain *d)
> +/*
> + * hap code to call when log_dirty is enabled. Return 0 if no problem found.
> + *
> + * NB: A domain that has a device assigned should not set log_global,
> + * because there is no way to track memory updates from the device.
> + */
> +static int hap_enable_log_dirty(struct domain *d, bool_t log_global)
>  {
>      /* turn on PG_log_dirty bit in paging mode */
>      paging_lock(d);
>      d->arch.paging.mode |= PG_log_dirty;
>      paging_unlock(d);
>
> -    /* set l1e entries of P2M table to be read-only. */
> -    p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
> -    flush_tlb_mask(d->domain_dirty_cpumask);
> +    if ( log_global )
> +    {
> +        /* set l1e entries of P2M table to be read-only. */
> +        p2m_change_entry_type_global(d, p2m_ram_rw, p2m_ram_logdirty);
> +        flush_tlb_mask(d->domain_dirty_cpumask);
> +    }
>      return 0;
>  }
>
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index 21344e5..ab5eacb 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -164,7 +164,7 @@ void paging_free_log_dirty_bitmap(struct domain *d)
>      paging_unlock(d);
>  }
>
> -int paging_log_dirty_enable(struct domain *d)
> +int paging_log_dirty_enable(struct domain *d, bool_t log_global)
>  {
>      int ret;
>
> @@ -172,7 +172,7 @@ int paging_log_dirty_enable(struct domain *d)
>          return -EINVAL;
>
>      domain_pause(d);
> -    ret = d->arch.paging.log_dirty.enable_log_dirty(d);
> +    ret = d->arch.paging.log_dirty.enable_log_dirty(d, log_global);
>      domain_unpause(d);
>
>      return ret;
> @@ -489,7 +489,8 @@ void paging_log_dirty_range(struct domain *d,
>   * These function pointers must not be followed with the log-dirty lock held.
>   */
>  void paging_log_dirty_init(struct domain *d,
> -                           int    (*enable_log_dirty)(struct domain *d),
> +                           int    (*enable_log_dirty)(struct domain *d,
> +                                                      bool_t log_global),
>                             int    (*disable_log_dirty)(struct domain *d),
>                             void   (*clean_dirty_bitmap)(struct domain *d))
>  {
> @@ -590,7 +591,7 @@ int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
>      case XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY:
>          if ( hap_enabled(d) )
>              hap_logdirty_init(d);
> -        return paging_log_dirty_enable(d);
> +        return paging_log_dirty_enable(d, 1);
>
>      case XEN_DOMCTL_SHADOW_OP_OFF:
>          if ( paging_mode_log_dirty(d) )
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 0bfa595..11c6b62 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -3418,7 +3418,7 @@ shadow_write_p2m_entry(struct vcpu *v, unsigned long gfn,
>  /* Shadow specific code which is called in paging_log_dirty_enable().
>   * Return 0 if no problem found.
>   */
> -int shadow_enable_log_dirty(struct domain *d)
> +int shadow_enable_log_dirty(struct domain *d, bool_t log_global)
>  {
>      int ret;
>
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index ea72db2..4ff89f0 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -169,7 +169,7 @@ struct log_dirty_domain {
>      unsigned int   dirty_count;
>
>      /* functions which are paging mode specific */
> -    int            (*enable_log_dirty   )(struct domain *d);
> +    int            (*enable_log_dirty   )(struct domain *d, bool_t log_global);
>      int            (*disable_log_dirty  )(struct domain *d);
>      void           (*clean_dirty_bitmap )(struct domain *d);
>  };
> diff --git a/xen/include/asm-x86/paging.h b/xen/include/asm-x86/paging.h
> index cd7ee3b..8dd2a61 100644
> --- a/xen/include/asm-x86/paging.h
> +++ b/xen/include/asm-x86/paging.h
> @@ -143,14 +143,15 @@ void paging_log_dirty_range(struct domain *d,
>                              uint8_t *dirty_bitmap);
>
>  /* enable log dirty */
> -int paging_log_dirty_enable(struct domain *d);
> +int paging_log_dirty_enable(struct domain *d, bool_t log_global);
>
>  /* disable log dirty */
>  int paging_log_dirty_disable(struct domain *d);
>
>  /* log dirty initialization */
>  void paging_log_dirty_init(struct domain *d,
> -                           int  (*enable_log_dirty)(struct domain *d),
> +                           int  (*enable_log_dirty)(struct domain *d,
> +                                                    bool_t log_global),
>                             int  (*disable_log_dirty)(struct domain *d),
>                             void (*clean_dirty_bitmap)(struct domain *d));
>
> diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
> index 852023d..348915e 100644
> --- a/xen/include/asm-x86/shadow.h
> +++ b/xen/include/asm-x86/shadow.h
> @@ -82,7 +82,7 @@ void shadow_teardown(struct domain *d);
>  void shadow_final_teardown(struct domain *d);
>
>  /* shadow code to call when log dirty is enabled */
> -int shadow_enable_log_dirty(struct domain *d);
> +int shadow_enable_log_dirty(struct domain *d, bool_t log_global);
>
>  /* shadow code to call when log dirty is disabled */
>  int shadow_disable_log_dirty(struct domain *d);
> --
> 1.7.1
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:25:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtfv-0005BK-RR; Mon, 10 Feb 2014 16:25:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WCtfu-0005BF-IO
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:25:38 +0000
Received: from [85.158.139.211:5204] by server-7.bemta-5.messagelabs.com id
	C1/1F-14867-18DF8F25; Mon, 10 Feb 2014 16:25:37 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392049536!2952613!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27977 invoked from network); 10 Feb 2014 16:25:36 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 10 Feb 2014 16:25:36 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WCtkA-00009z-Lk; Mon, 10 Feb 2014 16:30:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1392049802.617@bugs.xenproject.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<CAFLBxZYrQf6w3N_uz4=6eEyKL_rU6EcDoKSD90ASSUt1jOLBkw@mail.gmail.com>
In-Reply-To: <CAFLBxZYrQf6w3N_uz4=6eEyKL_rU6EcDoKSD90ASSUt1jOLBkw@mail.gmail.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Mon, 10 Feb 2014 16:30:02 +0000
Subject: [Xen-devel] Processed: Re: [PATCH] Don't track all memory when
 enabling log dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #38 rooted at `<1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>'
Title: `Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty to track vram'
> title it HAP dirty vram tracking breaks pass-through
Set title for #38 to `HAP dirty vram tracking breaks pass-through'
> thanks
Finished processing.

Modified/created Bugs:
 - 38: http://bugs.xenproject.org/xen/bug/38 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtoD-0005R6-14; Mon, 10 Feb 2014 16:34:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCtoA-0005R0-PX
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:34:11 +0000
Received: from [85.158.137.68:28740] by server-3.bemta-3.messagelabs.com id
	A2/61-14520-08FF8F25; Mon, 10 Feb 2014 16:34:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392050047!880805!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24156 invoked from network); 10 Feb 2014 16:34:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 16:34:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 16:34:06 +0000
Message-Id: <52F90D8C020000780011AD54@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 16:34:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 12:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> This reverts large amounts of:
>   9607327abbd3e77bde6cc7b5327f3efd781fc06e
>     "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
>   620d5dad54008e40798c4a0c4322aef274c36fa3
>     "x86/HVM: assorted RTC emulation adjustments"
> 
> and by extension:
>   f3347f520cb4d8aa4566182b013c6758d80cbe88
>     "x86/HVM: adjust IRQ (de-)assertion"
>   c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
>     "x86/HVM: fix processing of RTC REG_B writes"
>   527824f41f5fac9cba3d4441b2e73d3118d98837
>     "x86/hvm: Centralize and simplify the RTC IRQ logic."

So what does "by extension" mean here? Are these being
reverted?

> The current code has a pathological case, tickled by the access pattern of
> Windows 2003 Server SP2.  Occasionally on boot (which I presume is during a
> time calibration against the RTC Periodic Timer), Windows gets stuck in an
> infinite loop reading RTC REG_C.  This affects 32-bit and 64-bit guests.
> 
> In the pathological case, the VM state looks like this:
>   * RTC: 64Hz period, periodic interrupts enabled
>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
> 
> With an instrumented Xen, dumping the periodic timers with a guest in this
> state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.
> 
> Windows is presumably waiting for reads of REG_C to drop to 0, and reading
> REG_C clears the value each time in the emulated RTC.  However:
> 
>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>   * pt_update_irq() always finds the RTC as earliest_pt.
>   * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack mode.  It
>     returns true, indicating that pt_update_irq() should really inject the
>     interrupt.
>   * pt_update_irq() decides that it doesn't need to fake up part of
>     pt_intr_post() because this is a real interrupt.
>   * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
>     pending, so exits early without calling pt_intr_post().
> 
> The underlying problem arises because the AF and UF bits of the RTC
> interrupt state are modelled by the RTC code, but PF is modelled by the pt
> code.  The root cause of Windows's infinite loop is that RTC_PF is re-set
> on vmentry before the interrupt logic has worked out that it can't actually
> inject an RTC interrupt, causing Windows to erroneously read
> (RTC_PF|RTC_IRQF) when it should read 0.

So you're undoing a whole lot of changes done with the goal of
getting the overall emulation closer to what real hardware does,
just to paper over an issue elsewhere in the code? Not really an
approach I'm in favor of.

>   * The results from XenRT suggest that the new emulation is better than the
>     old.

"Better" in the sense of the limited set of uses of the virtual hardware
by whatever selection of guest OSes is being run there. But very
likely not "better" in the sense of matching up with how the respective
hardware specification would require it to behave.

> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -59,48 +59,78 @@ static void rtc_set_time(RTCState *s);
>  static inline int from_bcd(RTCState *s, int a);
>  static inline int convert_hour(RTCState *s, int hour);
>  
> -static void rtc_update_irq(RTCState *s)
> +/*
> + * Send an edge on the RTC ISA IRQ line.  The RTC spec states that it should
> + * be a line level interrupt, but the PIIX3 states that it must be edge
> + * triggered.  We model the RTC using edge semantics.
> + */
> +static void rtc_toggle_irq(RTCState *s)
>  {
> +    hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
> +    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
> +}
> +
> +static void rtc_update_regb(RTCState *s, uint8_t new_b)
> +{
> +    uint8_t new_c = s->hw.cmos_data[RTC_REG_C] & ~RTC_IRQF;
> +
>      ASSERT(spin_is_locked(&s->lock));
>  
> -    if ( rtc_mode_is(s, strict) && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
> -        return;
> +    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
> +        new_c |= RTC_IRQF;

Without going back to reading the spec, iirc RTC_IRQF is a sticky
bit cleared _only_ by REG_C reads, i.e. you shouldn't clear it
earlier in the function and then conditionally set it here.

>  
> -    /* IRQ is raised if any source is both raised & enabled */
> -    if ( !(s->hw.cmos_data[RTC_REG_B] &
> -           s->hw.cmos_data[RTC_REG_C] &
> -           (RTC_PF | RTC_AF | RTC_UF)) )
> -        return;
> +    s->hw.cmos_data[RTC_REG_B] = new_b;
> +    s->hw.cmos_data[RTC_REG_C] = new_c;
>  
> -    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> -    if ( rtc_mode_is(s, no_ack) )
> -        hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
> -    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
> +    if ( new_c & RTC_IRQF )
> +        rtc_toggle_irq(s);

Which then implies that the condition here would also need to
consider the old state of the flag.

> +static void rtc_irq_event(RTCState *s, uint8_t event)
> +{
> +    uint8_t b = s->hw.cmos_data[RTC_REG_B];
> +    uint8_t old_c = s->hw.cmos_data[RTC_REG_C];
> +    uint8_t new_c = old_c & ~RTC_IRQF;
> +
> +    ASSERT(spin_is_locked(&s->lock));
> +
> +    if ( b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
> +        new_c |= RTC_IRQF;

Same comment as above.

> @@ -647,9 +674,6 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
>      case RTC_REG_C:
>          ret = s->hw.cmos_data[s->hw.cmos_index];
>          s->hw.cmos_data[RTC_REG_C] = 0x00;
> -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
> -            hvm_isa_irq_deassert(d, RTC_IRQ);

Why? With RTC_IRQF going from 1 to 0, the interrupt line should
get de-asserted.

> -        rtc_update_irq(s);

So given the problem description, this would seem to be the most
important part at a first glance. But looking more closely, I'm getting
the impression that the call to rtc_update_irq() had no effect at all
here anyway: The function would always bail on the second if() due
to REG_C having got cleared a few lines up.

> @@ -270,47 +263,12 @@ int pt_update_irq(struct vcpu *v)
>      earliest_pt->irq_issued = 1;
>      irq = earliest_pt->irq;
>      is_lapic = (earliest_pt->source == PTSRC_lapic);
> -    pt_priv = earliest_pt->priv;
>  
>      spin_unlock(&v->arch.hvm_vcpu.tm_lock);
>  
>      if ( is_lapic )
> -        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> -    else if ( irq == RTC_IRQ && pt_priv )
>      {
> -        if ( !rtc_periodic_interrupt(pt_priv) )
> -            irq = -1;
> -
> -        pt_lock(earliest_pt);
> -
> -        if ( irq < 0 && earliest_pt->pending_intr_nr )
> -        {
> -            /*
> -             * RTC periodic timer runs without the corresponding interrupt
> -             * being enabled - need to mimic enough of pt_intr_post() to keep
> -             * things going.
> -             */
> -            earliest_pt->pending_intr_nr = 0;
> -            earliest_pt->irq_issued = 0;
> -            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
> -        }
> -        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
> -        {
> -            if ( earliest_pt->on_list )
> -            {
> -                /* suspend timer emulation */
> -                list_del(&earliest_pt->list);
> -                earliest_pt->on_list = 0;
> -            }
> -            irq = -1;
> -        }
> -
> -        /* Avoid dropping the lock if we can. */
> -        if ( irq < 0 && v == earliest_pt->vcpu )
> -            goto rescan_locked;
> -        pt_unlock(earliest_pt);
> -        if ( irq < 0 )
> -            goto rescan;
> +        vlapic_set_irq(vcpu_vlapic(v), irq, 0);

If you didn't put this single function call in braces, the patch would
be clearer, as it would then be exactly the "else if()" branch that got
removed by it.

As a round-up: I'm not going to veto this, but I'm also not going to
be putting my name under it, nor am I going to make another
attempt to clean up the RTC emulation if this is to go in unchanged.
I'm personally getting the impression that the root cause of the
observed problem is still being left in place (and perhaps still not
being fully understood), and hence this whole change goes in the
wrong direction, _even_ if it makes the problem it is aiming at
fixing indeed appear to go away.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtoD-0005R6-14; Mon, 10 Feb 2014 16:34:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCtoA-0005R0-PX
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:34:11 +0000
Received: from [85.158.137.68:28740] by server-3.bemta-3.messagelabs.com id
	A2/61-14520-08FF8F25; Mon, 10 Feb 2014 16:34:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392050047!880805!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24156 invoked from network); 10 Feb 2014 16:34:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 16:34:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 16:34:06 +0000
Message-Id: <52F90D8C020000780011AD54@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 16:34:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 12:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> This reverts large amounts of:
>   9607327abbd3e77bde6cc7b5327f3efd781fc06e
>     "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
>   620d5dad54008e40798c4a0c4322aef274c36fa3
>     "x86/HVM: assorted RTC emulation adjustments"
> 
> and by extentsion:
>   f3347f520cb4d8aa4566182b013c6758d80cbe88
>     "x86/HVM: adjust IRQ (de-)assertion"
>   c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
>     "x86/HVM: fix processing of RTC REG_B writes"
>   527824f41f5fac9cba3d4441b2e73d3118d98837
>     "x86/hvm: Centralize and simplify the RTC IRQ logic."

So what does "by extension" mean here? Are these being
reverted?

> The current code has a pathological case, tickled by the access pattern of
> Windows 2003 Server SP2.  Occasonally on boot (which I presume is during a
> time calibration against the RTC Periodic Timer), Windows gets stuck in an
> infinite loop reading RTC REG_C.  This affects 32 and 64 bit guests.
> 
> In the pathological case, the VM state looks like this:
>   * RTC: 64Hz period, periodic interrupts enabled
>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
> 
> With an instrumented Xen, dumping the periodic timers with a guest in this
> state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.
> 
> Windows is presumably waiting for reads of REG_C to drop to 0, and reading
> REG_C clears the value each time in the emulated RTC.  However:
> 
>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>   * pt_update_irq() always finds the RTC as earliest_pt.
>   * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack mode.  It
>     returns true, indicating that pt_update_irq() should really inject the
>     interrupt.
>   * pt_update_irq() decides that it doesn't need to fake up part of
>     pt_intr_post() because this is a real interrupt.
>   * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
>     pending, so exits early without calling pt_intr_post().
> 
> The underlying problem here comes about because the AF and UF bits of the
> RTC interrupt state are modelled by the RTC code, but PF is modelled by
> the pt code.  The root cause of Windows' infinite loop is that RTC_PF is
> being re-set on vmentry before the interrupt logic has worked out that it
> can't actually inject an RTC interrupt, causing Windows to erroneously
> read (RTC_PF|RTC_IRQF) when it should be reading 0.

So you're undoing a whole lot of changes done with the goal of
getting the overall emulation closer to what real hardware does,
just to paper over an issue elsewhere in the code? Not really an
approach I'm in favor of.

>   * The results from XenRT suggest that the new emulation is better than the
>     old.

"Better" in the sense of the limited set of uses of the virtual hardware
by whatever selection of guest OSes is being run there. But very
likely not "better" in the sense of matching up with how the respective
hardware specification would require it to behave.

> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -59,48 +59,78 @@ static void rtc_set_time(RTCState *s);
>  static inline int from_bcd(RTCState *s, int a);
>  static inline int convert_hour(RTCState *s, int hour);
>  
> -static void rtc_update_irq(RTCState *s)
> +/*
> + * Send an edge on the RTC ISA IRQ line.  The RTC spec states that it should
> + * be a line level interrupt, but the PIIX3 states that it must be edge
> + * triggered.  We model the RTC using edge semantics.
> + */
> +static void rtc_toggle_irq(RTCState *s)
>  {
> +    hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
> +    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
> +}
> +
> +static void rtc_update_regb(RTCState *s, uint8_t new_b)
> +{
> +    uint8_t new_c = s->hw.cmos_data[RTC_REG_C] & ~RTC_IRQF;
> +
>      ASSERT(spin_is_locked(&s->lock));
>  
> -    if ( rtc_mode_is(s, strict) && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
> -        return;
> +    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
> +        new_c |= RTC_IRQF;

Without going back to reading the spec, iirc RTC_IRQF is a sticky
bit cleared _only_ by REG_C reads, i.e. you shouldn't clear it
earlier in the function and then conditionally set it here.
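For reference, the sticky-IRQF behaviour described above can be sketched as a minimal model. This assumes the standard MC146818 flag values (as used in Xen's rtc.c); the helper names and the dict-based state are hypothetical scaffolding, not Xen's actual interfaces:

```python
# RTC REG_B/REG_C flag bits (standard MC146818 values, as in Xen's rtc.c).
RTC_UF, RTC_AF, RTC_PF, RTC_IRQF = 0x10, 0x20, 0x40, 0x80

def regb_write_update_c(new_b, old_c):
    """A REG_B write may SET RTC_IRQF (if an enabled source is already
    pending) but must never clear it -- IRQF is sticky."""
    new_c = old_c  # deliberately do NOT mask RTC_IRQF off here
    if new_b & new_c & (RTC_PF | RTC_AF | RTC_UF):
        new_c |= RTC_IRQF
    return new_c

def regc_read(state):
    """A guest read of REG_C returns the flags and clears them all --
    the only event that clears RTC_IRQF."""
    ret = state['reg_c']
    state['reg_c'] = 0
    return ret
```

Under this model, disabling all interrupt sources via REG_B leaves a previously set IRQF intact until the guest actually reads REG_C.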

>  
> -    /* IRQ is raised if any source is both raised & enabled */
> -    if ( !(s->hw.cmos_data[RTC_REG_B] &
> -           s->hw.cmos_data[RTC_REG_C] &
> -           (RTC_PF | RTC_AF | RTC_UF)) )
> -        return;
> +    s->hw.cmos_data[RTC_REG_B] = new_b;
> +    s->hw.cmos_data[RTC_REG_C] = new_c;
>  
> -    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> -    if ( rtc_mode_is(s, no_ack) )
> -        hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
> -    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
> +    if ( new_c & RTC_IRQF )
> +        rtc_toggle_irq(s);

Which then implies that the condition here would also need to
consider the old state of the flag.

> +static void rtc_irq_event(RTCState *s, uint8_t event)
> +{
> +    uint8_t b = s->hw.cmos_data[RTC_REG_B];
> +    uint8_t old_c = s->hw.cmos_data[RTC_REG_C];
> +    uint8_t new_c = old_c & ~RTC_IRQF;
> +
> +    ASSERT(spin_is_locked(&s->lock));
> +
> +    if ( b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
> +        new_c |= RTC_IRQF;

Same comment as above.

> @@ -647,9 +674,6 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
>      case RTC_REG_C:
>          ret = s->hw.cmos_data[s->hw.cmos_index];
>          s->hw.cmos_data[RTC_REG_C] = 0x00;
> -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
> -            hvm_isa_irq_deassert(d, RTC_IRQ);

Why? With RTC_IRQF going from 1 to 0, the interrupt line should
get de-asserted.

> -        rtc_update_irq(s);

So given the problem description, this would seem to be the most
important part at first glance. But looking more closely, I'm getting
the impression that the call to rtc_update_irq() had no effect at all
here anyway: the function would always bail on the second if() due
to REG_C having been cleared a few lines up.

> @@ -270,47 +263,12 @@ int pt_update_irq(struct vcpu *v)
>      earliest_pt->irq_issued = 1;
>      irq = earliest_pt->irq;
>      is_lapic = (earliest_pt->source == PTSRC_lapic);
> -    pt_priv = earliest_pt->priv;
>  
>      spin_unlock(&v->arch.hvm_vcpu.tm_lock);
>  
>      if ( is_lapic )
> -        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> -    else if ( irq == RTC_IRQ && pt_priv )
>      {
> -        if ( !rtc_periodic_interrupt(pt_priv) )
> -            irq = -1;
> -
> -        pt_lock(earliest_pt);
> -
> -        if ( irq < 0 && earliest_pt->pending_intr_nr )
> -        {
> -            /*
> -             * RTC periodic timer runs without the corresponding interrupt
> -             * being enabled - need to mimic enough of pt_intr_post() to keep
> -             * things going.
> -             */
> -            earliest_pt->pending_intr_nr = 0;
> -            earliest_pt->irq_issued = 0;
> -            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
> -        }
> -        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
> -        {
> -            if ( earliest_pt->on_list )
> -            {
> -                /* suspend timer emulation */
> -                list_del(&earliest_pt->list);
> -                earliest_pt->on_list = 0;
> -            }
> -            irq = -1;
> -        }
> -
> -        /* Avoid dropping the lock if we can. */
> -        if ( irq < 0 && v == earliest_pt->vcpu )
> -            goto rescan_locked;
> -        pt_unlock(earliest_pt);
> -        if ( irq < 0 )
> -            goto rescan;
> +        vlapic_set_irq(vcpu_vlapic(v), irq, 0);

If you didn't put this single function call in braces, the patch would
become clearer, as it would then be exactly the "else if()" branch
that got removed by it.

As a round-up: I'm not going to veto this, but I'm also not going to
be putting my name under it, nor am I going to make another
attempt to clean up the RTC emulation if this goes in unchanged.
I'm personally getting the impression that the root cause of the
observed problem is still being left in place (and perhaps still not
fully understood), and hence this whole change goes in the
wrong direction, _even_ if it does make the problem it aims to
fix appear to go away.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:36:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtpv-0005VP-Jg; Mon, 10 Feb 2014 16:35:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCtpt-0005VH-VR
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 16:35:58 +0000
Received: from [85.158.143.35:23908] by server-1.bemta-4.messagelabs.com id
	A3/2C-31661-CEFF8F25; Mon, 10 Feb 2014 16:35:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392050155!4578092!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2723 invoked from network); 10 Feb 2014 16:35:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 16:35:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 16:35:55 +0000
Message-Id: <52F90DF8020000780011AD65@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 16:35:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
	<52F8F9F8.9000702@linaro.org>
In-Reply-To: <52F8F9F8.9000702@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 17:10, Julien Grall <julien.grall@linaro.org> wrote:
> On 02/07/2014 05:43 PM, Julien Grall wrote:
>> DOM0 on ARM will have the same requirements as DOM0 PVH when the iommu
>> is enabled. Both PVH and ARM guests have paging mode translate enabled,
>> so Xen can use it to know whether it needs to check the requirements.
>> 
>> Rename the function and remove the word "pvh" in the commit message.
>> 
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> ---
>>  xen/drivers/passthrough/iommu.c |   14 +++++++++-----
>>  1 file changed, 9 insertions(+), 5 deletions(-)
>> 
>> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
>> index 19b0e23..26a5d91 100644
>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
>>      return hd->platform_ops->init(d);
>>  }
>>  
>> -static __init void check_dom0_pvh_reqs(struct domain *d)
>> +static __init void check_dom0_reqs(struct domain *d)
>>  {
>> +    if ( !paging_mode_translate(d) )
>> +        return;
>> +
>>      if ( !iommu_enabled )
>> -        panic("Presently, iommu must be enabled for pvh dom0\n");
>> +        panic("Presently, iommu must be enabled to use dom0 with translate "
>> +              "paging mode\n");
> 
> Hmmm... this change is wrong. I forgot that iommu doesn't exist on some
> ARM platform (for instance Arndale).
> 
> Do we really need this check for PVH? If yes, I will replace the check
> with: is_pvh_domain(d) && !iommu_enabled.

Of course we need it: How would PVH Dom0 be able to do any kind
of DMA without an IOMMU?

Jan
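(For reference, the check under discussion, with Julien's proposed narrowing, might look roughly like the sketch below. Only `paging_mode_translate`, `is_pvh_domain` and `iommu_enabled` come from the thread; the dict-based domain and the return value are hypothetical scaffolding, and the sketch returns an error string where Xen would panic().)

```python
def check_dom0_reqs(domain, iommu_enabled):
    """Sketch of Julien's suggestion: tolerate IOMMU-less ARM platforms
    (e.g. Arndale) while still insisting on an IOMMU for PVH dom0,
    which needs one for any kind of DMA (Jan's point)."""
    if not domain['paging_mode_translate']:
        return None                      # non-translated dom0: nothing to check
    if domain['is_pvh'] and not iommu_enabled:
        return "iommu must be enabled for pvh dom0"   # Xen would panic() here
    return None
```

An ARM dom0 without an IOMMU passes, while a PVH dom0 without one still trips the check.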


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:40:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:40:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtuH-0005ss-Ce; Mon, 10 Feb 2014 16:40:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WCtuF-0005sm-BB
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:40:27 +0000
Received: from [85.158.137.68:44919] by server-3.bemta-3.messagelabs.com id
	A7/8A-14520-AF009F25; Mon, 10 Feb 2014 16:40:26 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392050425!897551!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32442 invoked from network); 10 Feb 2014 16:40:25 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 16:40:25 -0000
Received: by mail-wi0-f179.google.com with SMTP id hn9so3008088wib.6
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 08:40:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type:content-transfer-encoding;
	bh=I9UOE4oluNHspLgEf0UZL+ZE+JD0B11VflCRLs+tPRo=;
	b=hKkBfJDPoHbZc+D0r1TR8tTzA9I3pU2loLHszdl3CFcH7E2Yh4hDejqr09yqbMgodC
	cPtbJerNW0Kzhmpq1k97cUWtxnc3AB2J0pOWiF3uuMkVg0d+Ull9kUZKUAdbL4SS7cZm
	0jpaI9PNGcm/BIwoiLJk+Dxlw9TSUF8ehySCD41lkMs6NUTCLPHPjGhEEFl9vsMULDRy
	n5+Iw+9LOXUnZjl0rnNasIrGDkbqjlIvTvKddt5i4HkuV/84hyZKCjEyylx0MFFBtgc6
	h0+rcf2sAXyaKfIAmMXxpGslxqM9ocA2XnYT+76bQIZiJu/AiM5Q6a7phlrWM6YMlYDY
	hPgQ==
MIME-Version: 1.0
X-Received: by 10.180.75.105 with SMTP id b9mr11162564wiw.6.1392050425116;
	Mon, 10 Feb 2014 08:40:25 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 10 Feb 2014 08:40:25 -0800 (PST)
In-Reply-To: <20140204181023.GA5293@citrix.com>
References: <20140204181023.GA5293@citrix.com>
Date: Mon, 10 Feb 2014 16:40:25 +0000
X-Google-Sender-Auth: 0BCBBWCbeMZxjEHO1yDBiqS_oJ8
Message-ID: <CAFLBxZZOxFGRsHdDdz+LzMBsuSBkOMmXtKKLX1bpHRW4H+zU6A@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Joby Poriyath <joby.poriyath@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7
 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it RHEL 7 doesn't boot under pygrub
thanks

On Tue, Feb 4, 2014 at 6:10 PM, Joby Poriyath <joby.poriyath@citrix.com> wrote:
> The menuentry blocks in grub2/grub.cfg use the linux16 and initrd16
> commands instead of linux and initrd. Because of this, a RHEL 7 (beta)
> guest failed to boot after the installation.
>
> In addition to this, RHEL 7 menu entries have two different single-quote
> delimited strings on the same line, and the greedy grouping in the
> menuentry parsing captures both strings, plus the options in between.
>
> Signed-off-by: Joby Poriyath <joby.poriyath@citrix.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: george.dunlap@citrix.com
> ---
> v2: Added RHEL 7 grub.cfg in pygrub/examples
> v3 & v4: Tidied the commit message based on Andrew Cooper's feedback
>
> Kindly consider this patch for xen-4.4 as RHEL 7 (beta) fails to boot
> on Xen.
>
>  tools/pygrub/examples/rhel-7-beta.grub2 |  118 +++++++++++++++++++++++++++++++
>  tools/pygrub/src/GrubConf.py            |    4 +-
>  2 files changed, 121 insertions(+), 1 deletion(-)
>  create mode 100644 tools/pygrub/examples/rhel-7-beta.grub2
>
> diff --git a/tools/pygrub/examples/rhel-7-beta.grub2 b/tools/pygrub/examples/rhel-7-beta.grub2
> new file mode 100644
> index 0000000..88f0f99
> --- /dev/null
> +++ b/tools/pygrub/examples/rhel-7-beta.grub2
> @@ -0,0 +1,118 @@
> +#
> +# DO NOT EDIT THIS FILE
> +#
> +# It is automatically generated by grub2-mkconfig using templates
> +# from /etc/grub.d and settings from /etc/default/grub
> +#
> +
> +### BEGIN /etc/grub.d/00_header ###
> +set pager=1
> +
> +if [ -s $prefix/grubenv ]; then
> +  load_env
> +fi
> +if [ "${next_entry}" ] ; then
> +   set default="${next_entry}"
> +   set next_entry=
> +   save_env next_entry
> +   set boot_once=true
> +else
> +   set default="${saved_entry}"
> +fi
> +
> +if [ x"${feature_menuentry_id}" = xy ]; then
> +  menuentry_id_option="--id"
> +else
> +  menuentry_id_option=""
> +fi
> +
> +export menuentry_id_option
> +
> +if [ "${prev_saved_entry}" ]; then
> +  set saved_entry="${prev_saved_entry}"
> +  save_env saved_entry
> +  set prev_saved_entry=
> +  save_env prev_saved_entry
> +  set boot_once=true
> +fi
> +
> +function savedefault {
> +  if [ -z "${boot_once}" ]; then
> +    saved_entry="${chosen}"
> +    save_env saved_entry
> +  fi
> +}
> +
> +function load_video {
> +  if [ x$feature_all_video_module = xy ]; then
> +    insmod all_video
> +  else
> +    insmod efi_gop
> +    insmod efi_uga
> +    insmod ieee1275_fb
> +    insmod vbe
> +    insmod vga
> +    insmod video_bochs
> +    insmod video_cirrus
> +  fi
> +}
> +
> +terminal_output console
> +set timeout=5
> +### END /etc/grub.d/00_header ###
> +
> +### BEGIN /etc/grub.d/10_linux ###
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 3.10.0-54.0.1.el7.x86_64' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.10.0-54.0.1.el7.x86_64-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +       load_video
> +       set gfxpayload=keep
> +       insmod gzio
> +       insmod part_msdos
> +       insmod xfs
> +       set root='hd0,msdos1'
> +       if [ x$feature_platform_search_hint = xy ]; then
> +         search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +       else
> +         search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +       fi
> +       linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
> +       initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
> +}
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +       load_video
> +       insmod gzio
> +       insmod part_msdos
> +       insmod xfs
> +       set root='hd0,msdos1'
> +       if [ x$feature_platform_search_hint = xy ]; then
> +         search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +       else
> +         search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +       fi
> +       linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
> +       initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
> +}
> +
> +### END /etc/grub.d/10_linux ###
> +
> +### BEGIN /etc/grub.d/20_linux_xen ###
> +### END /etc/grub.d/20_linux_xen ###
> +
> +### BEGIN /etc/grub.d/20_ppc_terminfo ###
> +### END /etc/grub.d/20_ppc_terminfo ###
> +
> +### BEGIN /etc/grub.d/30_os-prober ###
> +### END /etc/grub.d/30_os-prober ###
> +
> +### BEGIN /etc/grub.d/40_custom ###
> +# This file provides an easy way to add custom menu entries.  Simply type the
> +# menu entries you want to add after this comment.  Be careful not to change
> +# the 'exec tail' line above.
> +### END /etc/grub.d/40_custom ###
> +
> +### BEGIN /etc/grub.d/41_custom ###
> +if [ -f  ${config_directory}/custom.cfg ]; then
> +  source ${config_directory}/custom.cfg
> +elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
> +  source $prefix/custom.cfg;
> +fi
> +### END /etc/grub.d/41_custom ###
> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> index cb853c9..974cded 100644
> --- a/tools/pygrub/src/GrubConf.py
> +++ b/tools/pygrub/src/GrubConf.py
> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>
>      commands = {'set:root': 'root',
>                  'linux': 'kernel',
> +                'linux16': 'kernel',
>                  'initrd': 'initrd',
> +                'initrd16': 'initrd',
>                  'echo': None,
>                  'insmod': None,
>                  'search': None}
> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>                  continue
>
>              # new image
> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>              if title_match:
>                  if img is not None:
>                      raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
> --
> 1.7.10.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
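The one-character regex change in the patch above (greedy `(.*)` to non-greedy `(.*?)`) can be demonstrated in isolation. The menuentry line below is a shortened stand-in for the RHEL 7 entries in the example grub.cfg:

```python
import re

# Shortened stand-in for a RHEL 7 menuentry line, which carries two
# single-quoted strings: the title and the $menuentry_id_option id.
line = ("menuentry 'Red Hat Enterprise Linux' --class red "
        "$menuentry_id_option 'gnulinux-3.10.0' {")

# The greedy group swallows everything up to the LAST quote, taking the
# --class options and the id string along with the title; the lazy group
# stops at the first closing quote and captures just the title.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)
lazy   = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)
```

Here `lazy.group(1)` is just `Red Hat Enterprise Linux`, whereas `greedy.group(1)` is the whole run up to the final quote, options and id string included.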

> +       load_video
> +       set gfxpayload=keep
> +       insmod gzio
> +       insmod part_msdos
> +       insmod xfs
> +       set root='hd0,msdos1'
> +       if [ x$feature_platform_search_hint = xy ]; then
> +         search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +       else
> +         search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +       fi
> +       linux16 /vmlinuz-3.10.0-54.0.1.el7.x86_64 root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16 LANG=en_GB.UTF-8
> +       initrd16 /initramfs-3.10.0-54.0.1.el7.x86_64.img
> +}
> +menuentry 'Red Hat Enterprise Linux Everything, with Linux 0-rescue-af34f0b8cf364cdbbe6d093f8228a37f' --class red --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f-advanced-d23b8b49-4cfe-4900-8ef1-ec80bc633163' {
> +       load_video
> +       insmod gzio
> +       insmod part_msdos
> +       insmod xfs
> +       set root='hd0,msdos1'
> +       if [ x$feature_platform_search_hint = xy ]; then
> +         search --no-floppy --fs-uuid --set=root --hint='hd0,msdos1'  89ffef78-82b3-457c-bc57-42cccc373851
> +       else
> +         search --no-floppy --fs-uuid --set=root 89ffef78-82b3-457c-bc57-42cccc373851
> +       fi
> +       linux16 /vmlinuz-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f root=/dev/mapper/rhel-root ro rd.lvm.lv=rhel/swap vconsole.keymap=uk crashkernel=auto rd.lvm.lv=rhel/root vconsole.font=latarcyrheb-sun16
> +       initrd16 /initramfs-0-rescue-af34f0b8cf364cdbbe6d093f8228a37f.img
> +}
> +
> +### END /etc/grub.d/10_linux ###
> +
> +### BEGIN /etc/grub.d/20_linux_xen ###
> +### END /etc/grub.d/20_linux_xen ###
> +
> +### BEGIN /etc/grub.d/20_ppc_terminfo ###
> +### END /etc/grub.d/20_ppc_terminfo ###
> +
> +### BEGIN /etc/grub.d/30_os-prober ###
> +### END /etc/grub.d/30_os-prober ###
> +
> +### BEGIN /etc/grub.d/40_custom ###
> +# This file provides an easy way to add custom menu entries.  Simply type the
> +# menu entries you want to add after this comment.  Be careful not to change
> +# the 'exec tail' line above.
> +### END /etc/grub.d/40_custom ###
> +
> +### BEGIN /etc/grub.d/41_custom ###
> +if [ -f  ${config_directory}/custom.cfg ]; then
> +  source ${config_directory}/custom.cfg
> +elif [ -z "${config_directory}" -a -f  $prefix/custom.cfg ]; then
> +  source $prefix/custom.cfg;
> +fi
> +### END /etc/grub.d/41_custom ###
> diff --git a/tools/pygrub/src/GrubConf.py b/tools/pygrub/src/GrubConf.py
> index cb853c9..974cded 100644
> --- a/tools/pygrub/src/GrubConf.py
> +++ b/tools/pygrub/src/GrubConf.py
> @@ -348,7 +348,9 @@ class Grub2Image(_GrubImage):
>
>      commands = {'set:root': 'root',
>                  'linux': 'kernel',
> +                'linux16': 'kernel',
>                  'initrd': 'initrd',
> +                'initrd16': 'initrd',
>                  'echo': None,
>                  'insmod': None,
>                  'search': None}
> @@ -394,7 +396,7 @@ class Grub2ConfigFile(_GrubConfigFile):
>                  continue
>
>              # new image
> -            title_match = re.match('^menuentry ["\'](.*)["\'] (.*){', l)
> +            title_match = re.match('^menuentry ["\'](.*?)["\'] (.*){', l)
>              if title_match:
>                  if img is not None:
>                      raise RuntimeError, "syntax error: cannot nest menuentry (%d %s)" % (len(img),img)
> --
> 1.7.10.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
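As an aside, the greedy-vs-non-greedy difference this patch fixes is easy to demonstrate in isolation. A minimal sketch: the two patterns are the ones from the diff above; the menuentry line is abbreviated from the RHEL 7 example config, not copied verbatim:

```python
import re

# Abbreviated RHEL 7 menuentry line: note the TWO single-quoted strings.
line = ("menuentry 'Red Hat Enterprise Linux' --class red --class os "
        "$menuentry_id_option 'gnulinux-3.10.0-54-advanced-d23b8b49' {")

# Old pattern: greedy (.*) backtracks only to the LAST quote, so the
# "title" group swallows the options and the second quoted string too.
greedy = re.match(r'^menuentry ["\'](.*)["\'] (.*){', line)

# Patched pattern: non-greedy (.*?) stops at the FIRST closing quote,
# so the title group is just the menu entry title.
lazy = re.match(r'^menuentry ["\'](.*?)["\'] (.*){', line)

print(greedy.group(1))  # title plus options plus the id string
print(lazy.group(1))    # Red Hat Enterprise Linux
```

Both patterns match the line; only the non-greedy one recovers the intended title, which is why the one-character change is enough.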

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:40:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCtuR-0005x2-3f; Mon, 10 Feb 2014 16:40:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WCtuQ-0005wq-9d
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:40:38 +0000
Received: from [85.158.137.68:32395] by server-1.bemta-3.messagelabs.com id
	17/8A-17293-50109F25; Mon, 10 Feb 2014 16:40:37 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392050435!892428!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16735 invoked from network); 10 Feb 2014 16:40:36 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 10 Feb 2014 16:40:36 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WCtyg-0000JV-5L; Mon, 10 Feb 2014 16:45:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1392050702.1207@bugs.xenproject.org>
References: <20140204181023.GA5293@citrix.com>
	<CAFLBxZZOxFGRsHdDdz+LzMBsuSBkOMmXtKKLX1bpHRW4H+zU6A@mail.gmail.com>
In-Reply-To: <CAFLBxZZOxFGRsHdDdz+LzMBsuSBkOMmXtKKLX1bpHRW4H+zU6A@mail.gmail.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Mon, 10 Feb 2014 16:45:02 +0000
Subject: [Xen-devel] Processed: Re: [PATCH v4] xen/pygrub: grub2/grub.cfg
 from RHEL 7 has new commands in menuentry
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #39 rooted at `<20140204181023.GA5293@citrix.com>'
Title: `Re: [Xen-devel] [PATCH v4] xen/pygrub: grub2/grub.cfg from RHEL 7 has new commands in menuentry'
> title it RHEL 7 doesn't boot under pygrub
Set title for #39 to `RHEL 7 doesn't boot under pygrub'
> thanks
Finished processing.

Modified/created Bugs:
 - 39: http://bugs.xenproject.org/xen/bug/39 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:50:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu47-00071g-Nt; Mon, 10 Feb 2014 16:50:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WCu46-00071Z-HE
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:50:38 +0000
Received: from [85.158.143.35:19680] by server-2.bemta-4.messagelabs.com id
	1C/A1-10891-D5309F25; Mon, 10 Feb 2014 16:50:37 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392051037!4585007!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6087 invoked from network); 10 Feb 2014 16:50:37 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 16:50:37 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392051037; l=688;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=m7OXw0gli8rOvOZucZBRLTdyRi0=;
	b=Dh9UtPQr1SIN+aE0L2DD4013FQ5lIEPVKu0+6CVcgoXc4EDfPpTuO1nWVvQ0irxEhPq
	5TF71T1ZYfVgaJ21c/yWtOAB5XSMwSf7QrjJUq53zwn36exXtuUnDAYOOzilAFtqU5j8g
	gYQtBwqcjlmdz+THEJpZtB8H3Q/8p/1ig4s=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.25 AUTH) with ESMTPSA id L07764q1AGoVFa5
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 10 Feb 2014 17:50:31 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 28CFE5026A; Mon, 10 Feb 2014 17:50:31 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Mon, 10 Feb 2014 17:50:24 +0100
Message-Id: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 0/3] xen-block: changes for discard support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fix blkfront to handle all sorts of backends.

Also let blkback recognize a xenstore property to disable discard for a
given device. It requires upcoming libxl changes ("discard-enable"):
http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg02632.html

Olaf Hering (3):
  xen-blkfront: remove type check from blkfront_setup_discard
  xen blkif.h: fix comment typo in discard-alignment
  xen/blkback: disable discard feature if requested by toolstack

 drivers/block/xen-blkback/xenbus.c |  7 ++++++-
 drivers/block/xen-blkfront.c       | 40 +++++++++++++-------------------------
 include/xen/interface/io/blkif.h   |  2 +-
 3 files changed, 21 insertions(+), 28 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:50:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu3w-00070H-8J; Mon, 10 Feb 2014 16:50:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCu3u-0006zm-MZ
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 16:50:27 +0000
Received: from [85.158.139.211:45480] by server-9.bemta-5.messagelabs.com id
	26/FF-11237-15309F25; Mon, 10 Feb 2014 16:50:25 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392051023!2907918!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7739 invoked from network); 10 Feb 2014 16:50:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 16:50:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101358248"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 16:50:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 11:50:22 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCu3q-0003b1-En;
	Mon, 10 Feb 2014 16:50:22 +0000
Date: Mon, 10 Feb 2014 16:50:06 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52F55EA4.2060703@linaro.org>
Message-ID: <alpine.DEB.2.02.1402101649120.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-2-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F55EA4.2060703@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 2/4] xen/arm: support HW interrupts in
 gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Julien Grall wrote:
> Hi Stefano,
> 
> On 07/02/14 18:56, Stefano Stabellini wrote:
> > If the irq to be injected is a hardware irq (p->desc != NULL), set
> > GICH_LR_HW.
> 
> If you set GICH_LR_HW, I think you should remove the EOI of the physical
> interrupt in the maintenance IRQ handler in this patch. Otherwise we will
> EOI twice, and according to the documentation the behavior is unpredictable.

Yes, you are right.


> > Also add a struct vcpu* parameter to gic_set_lr.
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >   xen/arch/arm/gic.c |   28 ++++++++++++++++------------
> >   1 file changed, 16 insertions(+), 12 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index acf7195..215b679 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -618,20 +618,24 @@ int __init setup_dt_irq(const struct dt_irq *irq,
> > struct irqaction *new)
> >       return rc;
> >   }
> > 
> > -static inline void gic_set_lr(int lr, unsigned int virtual_irq,
> > +static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
> >           unsigned int state, unsigned int priority)
> >   {
> > -    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
> > -    struct pending_irq *p = irq_to_pending(current, virtual_irq);
> > +    struct pending_irq *p = irq_to_pending(v, irq);
> > 
> >       BUG_ON(lr >= nr_lrs);
> >       BUG_ON(lr < 0);
> >       BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
> > 
> > -    GICH[GICH_LR + lr] = state |
> > -        maintenance_int |
> > -        ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> > -        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> > +    if ( p->desc != NULL )
> > +        GICH[GICH_LR + lr] = GICH_LR_HW | state | GICH_LR_MAINTENANCE_IRQ |
> > +            ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> > +            ((irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT) |
> 
> We should not assume that the physical IRQ == virtual IRQ. You should use
> p->desc->irq

Right, I'll make the change.


> > +            ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> > +    else
> > +        GICH[GICH_LR + lr] = state | GICH_LR_MAINTENANCE_IRQ |
> > +            ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> > +            ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> 
> The LR value for a virtual IRQ is a subset of the one for a hardware IRQ.
> Can you factor out the common code?

Yeah
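
For what it's worth, the factoring being asked for can be sketched as follows. Python is used here purely to illustrate the bit composition; the GICH_LR_* values are illustrative, non-overlapping assumptions (the real layout lives in Xen's GIC headers), and `pirq` stands in for p->desc->irq:

```python
# Illustrative field positions -- assumptions for this demo, NOT the real
# Xen/GICv2 values. The point is the shape of the factoring: build the
# common LR bits once, then OR in the hardware-only fields.
GICH_LR_HW              = 1 << 31
GICH_LR_MAINTENANCE_IRQ = 1 << 30
GICH_LR_PRIORITY_SHIFT  = 20
GICH_LR_PHYSICAL_MASK   = 0x3FF
GICH_LR_PHYSICAL_SHIFT  = 10
GICH_LR_VIRTUAL_MASK    = 0x3FF  # virtual field sits at bits 0-9 here

def make_lr(virq, pirq, state, priority):
    """pirq is None for a purely virtual IRQ (p->desc == NULL in Xen)."""
    # Common part, shared by both branches of the quoted patch.
    lr = (state | GICH_LR_MAINTENANCE_IRQ |
          ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
          (virq & GICH_LR_VIRTUAL_MASK))
    if pirq is not None:
        # Hardware IRQ: add GICH_LR_HW and the *physical* IRQ number
        # (p->desc->irq, per Julien's earlier comment, not virq).
        lr |= GICH_LR_HW | ((pirq & GICH_LR_PHYSICAL_MASK)
                            << GICH_LR_PHYSICAL_SHIFT)
    return lr
```

The two LR values then differ only in the HW bit and the physical-ID field, which is exactly the "virtual is a subset of hardware" observation above.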

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:50:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:50:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu4D-00072i-4H; Mon, 10 Feb 2014 16:50:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WCu4B-00071x-46
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:50:43 +0000
Received: from [85.158.139.211:41841] by server-14.bemta-5.messagelabs.com id
	0E/FA-27598-26309F25; Mon, 10 Feb 2014 16:50:42 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392051041!2954093!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32192 invoked from network); 10 Feb 2014 16:50:41 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 16:50:41 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392051041; l=972;
	s=domk; d=aepfle.de;
	h=References:In-Reply-To:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=ZdjBNDZVKpgHqbnBLs5JD9LinZg=;
	b=IVeIH4pSmuB+ua77FkPahaSvmjYqtoSYcdhJ8Ajwx2p2tHCA0u92H2z/RIiTu+wtmLs
	dG1LgPOtfE70Z8ELZFo4/OASU6k8UQDw1lb94vy9SlmKJYLHN65eq7RrewKY7C4nzi3dD
	K7uHOOMi3TGHKpkPzMzbZ4NpFyy6KWfqkzc=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.25 AUTH) with ESMTPSA id h00ff2q1AGobHHu
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 10 Feb 2014 17:50:37 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 57AC35025A; Mon, 10 Feb 2014 17:50:31 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Mon, 10 Feb 2014 17:50:26 +0100
Message-Id: <1392051027-29516-3-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
References: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 2/3] xen blkif.h: fix comment typo in
	discard-alignment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the missing 'n' to discard-alignment

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 include/xen/interface/io/blkif.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/xen/interface/io/blkif.h b/include/xen/interface/io/blkif.h
index ae665ac..19ebcc5 100644
--- a/include/xen/interface/io/blkif.h
+++ b/include/xen/interface/io/blkif.h
@@ -86,7 +86,7 @@ typedef uint64_t blkif_sector_t;
  *     Interface%20manuals/100293068c.pdf
  * The backend can optionally provide three extra XenBus attributes to
  * further optimize the discard functionality:
- * 'discard-aligment' - Devices that support discard functionality may
+ * 'discard-alignment' - Devices that support discard functionality may
  * internally allocate space in units that are bigger than the exported
  * logical block size. The discard-alignment parameter indicates how many bytes
  * the beginning of the partition is offset from the internal allocation unit's

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:50:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu4H-00074F-Jp; Mon, 10 Feb 2014 16:50:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WCu4G-00073x-Qc
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:50:49 +0000
Received: from [193.109.254.147:40542] by server-4.bemta-14.messagelabs.com id
	D3/BA-32066-86309F25; Mon, 10 Feb 2014 16:50:48 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392051046!3320822!1
X-Originating-IP: [81.169.146.160]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MCA9PiA1NTc3MTg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7397 invoked from network); 10 Feb 2014 16:50:47 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.160)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 16:50:47 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392051046; l=2488;
	s=domk; d=aepfle.de;
	h=References:In-Reply-To:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=K/kFzY+owLx8498/I8YMPidxioQ=;
	b=WnPQkHBz9UGrvxO6019QOYbfL/fuGx23MS7kaaZjtuU3sb5ZQypslStM3kt3u7vth/g
	k3Io8HTaDrU0RDsXKs0XchH54Sb2cf59HaijMFT/Tl+F4CRL6KH74H+hIdoROUZpEjmEF
	XzimNd0CiaLBCAq+4t6vUk3SnChnds3fvyc=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.25 AUTH) with ESMTPSA id j05dacq1AGogIWu
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 10 Feb 2014 17:50:42 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 889F95026B; Mon, 10 Feb 2014 17:50:31 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Mon, 10 Feb 2014 17:50:25 +0100
Message-Id: <1392051027-29516-2-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
References: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 1/3] xen-blkfront: remove type check from
	blkfront_setup_discard
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In its initial implementation a check for "type" was added, but only phy
and file are handled. This breaks advertised discard support for other
type values such as qdisk.

Fix and simplify this function: If the backend advertises discard
support it is supposed to implement it properly, so enable
feature_discard unconditionally. If the backend advertises the need for
a certain granularity and alignment then propagate both properties to
the block layer. The discard-secure property is a boolean; update the code
to reflect that.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 drivers/block/xen-blkfront.c | 40 ++++++++++++++--------------------------
 1 file changed, 14 insertions(+), 26 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 8dcfb54..4d8ddea 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1635,36 +1635,24 @@ blkfront_closing(struct blkfront_info *info)
 static void blkfront_setup_discard(struct blkfront_info *info)
 {
 	int err;
-	char *type;
 	unsigned int discard_granularity;
 	unsigned int discard_alignment;
 	unsigned int discard_secure;
 
-	type = xenbus_read(XBT_NIL, info->xbdev->otherend, "type", NULL);
-	if (IS_ERR(type))
-		return;
-
-	info->feature_secdiscard = 0;
-	if (strncmp(type, "phy", 3) == 0) {
-		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
-			"discard-granularity", "%u", &discard_granularity,
-			"discard-alignment", "%u", &discard_alignment,
-			NULL);
-		if (!err) {
-			info->feature_discard = 1;
-			info->discard_granularity = discard_granularity;
-			info->discard_alignment = discard_alignment;
-		}
-		err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
-			    "discard-secure", "%d", &discard_secure,
-			    NULL);
-		if (!err)
-			info->feature_secdiscard = discard_secure;
-
-	} else if (strncmp(type, "file", 4) == 0)
-		info->feature_discard = 1;
-
-	kfree(type);
+	info->feature_discard = 1;
+	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+		"discard-granularity", "%u", &discard_granularity,
+		"discard-alignment", "%u", &discard_alignment,
+		NULL);
+	if (!err) {
+		info->discard_granularity = discard_granularity;
+		info->discard_alignment = discard_alignment;
+	}
+	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
+		    "discard-secure", "%d", &discard_secure,
+		    NULL);
+	if (!err)
+		info->feature_secdiscard = !!discard_secure;
 }
 
 static int blkfront_setup_indirect(struct blkfront_info *info)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:50:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu4O-00076W-3i; Mon, 10 Feb 2014 16:50:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WCu4M-00075x-Cp
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:50:54 +0000
Received: from [85.158.143.35:4359] by server-1.bemta-4.messagelabs.com id
	5D/E2-31661-D6309F25; Mon, 10 Feb 2014 16:50:53 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392051052!4590425!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23123 invoked from network); 10 Feb 2014 16:50:52 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.162)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 16:50:52 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392051052; l=1258;
	s=domk; d=aepfle.de;
	h=References:In-Reply-To:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:
	X-RZG-AUTH; bh=LUOT5RWABoq+FbPK33Fj9cWccrE=;
	b=Pcr9SsCdtyvapQFOAIPQ1jj2ODyGW5Q3ZDBY29y/DCrGeMyyOS5Q0GMm07n8g/CH4kQ
	1OdqFBtZvKgGrI9XJ23dWMVAODFFg8SE8AlK6rfwIQU8CleCrtdtTwgbKxS9ZFq534QEs
	zVAs3gg8favRav+ObT8XI1L0EEiDK/B9CZA=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.25 AUTH) with ESMTPSA id w07194q1AGolFDv
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Mon, 10 Feb 2014 17:50:47 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id A0C8650269; Mon, 10 Feb 2014 17:50:31 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: konrad.wilk@oracle.com
Date: Mon, 10 Feb 2014 17:50:27 +0100
Message-Id: <1392051027-29516-4-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
References: <1392051027-29516-1-git-send-email-olaf@aepfle.de>
Cc: boris.ostrovsky@oracle.com, david.vrabel@citrix.com,
	Olaf Hering <olaf@aepfle.de>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH 3/3] xen/blkback: disable discard feature if
	requested by toolstack
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Newer toolstacks may provide a boolean property "discard-enable" in the
backend node. Its purpose is to disable discard for file backed storage
to avoid fragmentation. Recognize this setting also for physical
storage.  If that property exists and is false, do not advertise
"feature-discard" to the frontend.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 drivers/block/xen-blkback/xenbus.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index c2014a0..83125e2 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -467,10 +467,15 @@ static void xen_blkbk_discard(struct xenbus_transaction xbt, struct backend_info
 	struct xenbus_device *dev = be->dev;
 	struct xen_blkif *blkif = be->blkif;
 	int err;
-	int state = 0;
+	int state = 0, discard_enable;
 	struct block_device *bdev = be->blkif->vbd.bdev;
 	struct request_queue *q = bdev_get_queue(bdev);
 
+	err = xenbus_scanf(XBT_NIL, dev->nodename, "discard-enable", "%d",
+			   &discard_enable);
+	if (err == 1 && !discard_enable)
+		return;
+
 	if (blk_queue_discard(q)) {
 		err = xenbus_printf(xbt, dev->nodename,
 			"discard-granularity", "%u",

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:54:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu7n-0007e3-IX; Mon, 10 Feb 2014 16:54:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WCu7m-0007dq-9J
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:54:26 +0000
Received: from [85.158.137.68:29047] by server-7.bemta-3.messagelabs.com id
	51/D7-13775-14409F25; Mon, 10 Feb 2014 16:54:25 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392051263!900528!1
X-Originating-IP: [207.46.163.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17103 invoked from network); 10 Feb 2014 16:54:24 -0000
Received: from co9ehsobe005.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.28)
	by server-15.tower-31.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Feb 2014 16:54:24 -0000
Received: from mail118-co9-R.bigfish.com (10.236.132.226) by
	CO9EHSOBE002.bigfish.com (10.236.130.65) with Microsoft SMTP Server id
	14.1.225.22; Mon, 10 Feb 2014 16:54:22 +0000
Received: from mail118-co9 (localhost [127.0.0.1])	by
	mail118-co9-R.bigfish.com (Postfix) with ESMTP id 8E270CC0085;
	Mon, 10 Feb 2014 16:54:22 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(z579eh37d5kzbb2dI98dI9371Ida00h1432I4015Idc73hzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24ach24d7h2516h2545h1155h)
Received: from mail118-co9 (localhost.localdomain [127.0.0.1]) by mail118-co9
	(MessageSwitch) id 1392051260217527_9646;
	Mon, 10 Feb 2014 16:54:20 +0000 (UTC)
Received: from CO9EHSMHS011.bigfish.com (unknown [10.236.132.227])	by
	mail118-co9.bigfish.com (Postfix) with ESMTP id 2F5A4C4004A;
	Mon, 10 Feb 2014 16:54:20 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO9EHSMHS011.bigfish.com
	(10.236.130.21) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 10 Feb 2014 16:54:19 +0000
X-WSS-ID: 0N0SHME-08-BMW-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	26346D16089;	Mon, 10 Feb 2014 10:54:14 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 10 Feb 2014 10:54:18 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 10 Feb 2014 11:53:10 -0500
Message-ID: <52F90438.1060203@amd.com>
Date: Mon, 10 Feb 2014 10:54:16 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52F4CBFD020000780011A2F1@nat28.tlf.novell.com>
	<20140207212724.GD8837@arav-dinar>
	<52F890CD020000780011A95D@nat28.tlf.novell.com>
In-Reply-To: <52F890CD020000780011A95D@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/10/2014 1:41 AM, Jan Beulich wrote:
> That's all valid argumentation as long as you leave migration out
> of the picture.
>
> Jan
>
>
Hmm. All right; I am sending a revised version of the patch with the
'amd_thresholding_reg_present' function removed.

Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 16:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 16:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCu7y-0007gU-3z; Mon, 10 Feb 2014 16:54:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WCu7w-0007g1-Qg
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 16:54:37 +0000
Received: from [85.158.139.211:29531] by server-5.bemta-5.messagelabs.com id
	E6/F1-32749-C4409F25; Mon, 10 Feb 2014 16:54:36 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392051273!2954985!1
X-Originating-IP: [216.32.180.184]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29119 invoked from network); 10 Feb 2014 16:54:35 -0000
Received: from co1ehsobe001.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.184)
	by server-4.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Feb 2014 16:54:35 -0000
Received: from mail190-co1-R.bigfish.com (10.243.78.230) by
	CO1EHSOBE035.bigfish.com (10.243.66.100) with Microsoft SMTP Server id
	14.1.225.22; Mon, 10 Feb 2014 16:54:33 +0000
Received: from mail190-co1 (localhost [127.0.0.1])	by
	mail190-co1-R.bigfish.com (Postfix) with ESMTP id 0CD8B9006AF;
	Mon, 10 Feb 2014 16:54:33 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail190-co1 (localhost.localdomain [127.0.0.1]) by mail190-co1
	(MessageSwitch) id 1392051270143032_14487;
	Mon, 10 Feb 2014 16:54:30 +0000 (UTC)
Received: from CO1EHSMHS023.bigfish.com (unknown [10.243.78.247])	by
	mail190-co1.bigfish.com (Postfix) with ESMTP id 1EDC3940075;
	Mon, 10 Feb 2014 16:54:30 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO1EHSMHS023.bigfish.com
	(10.243.66.33) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 10 Feb 2014 16:54:29 +0000
X-WSS-ID: 0N0SHMO-08-BNL-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	29E48D2200C;	Mon, 10 Feb 2014 10:54:24 -0600 (CST)
Received: from SATLEXDAG01.amd.com (10.181.40.3) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 10 Feb 2014 10:54:28 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by SATLEXDAG01.amd.com
	(10.181.40.3) with Microsoft SMTP Server id 14.2.328.9;
	Mon, 10 Feb 2014 11:54:26 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>, <JBeulich@suse.com>
Date: Mon, 10 Feb 2014 10:38:58 -0600
Message-ID: <1392050338-2915-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH V2] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers. But due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we wrongly mask off the high bits that distinguish those MSRs, so the
register accesses never made it to the vmce_amd_* functions.

Correct this by modifying the mask so that the AMD thresholding
registers fall through to the 'default' case, which in turn allows the
vmce_amd_* functions to handle accesses to those registers.

While at it, remove some clutter in the vmce_amd_* functions. The
current policy of returning zero for reads and ignoring writes is
retained.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
 xen/arch/x86/cpu/mcheck/vmce.c    |    4 ++--
 2 files changed, 8 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..03797ab 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    /* Do nothing as we don't emulate this MC bank currently */
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    /* Assign '0' as we don't emulate this MC bank currently */
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..be9bb5e 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /*
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:00:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuDL-0008Bu-9N; Mon, 10 Feb 2014 17:00:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCuDF-0008Bm-Vb
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:00:10 +0000
Received: from [85.158.137.68:24459] by server-2.bemta-3.messagelabs.com id
	2C/F0-06531-59509F25; Mon, 10 Feb 2014 17:00:05 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392051602!892978!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5615 invoked from network); 10 Feb 2014 17:00:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:00:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99535526"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:00:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:00:02 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCuDB-0003jo-QM;
	Mon, 10 Feb 2014 17:00:01 +0000
Date: Mon, 10 Feb 2014 16:59:46 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52F56EA9.9020903@linaro.org>
Message-ID: <alpine.DEB.2.02.1402101657030.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F56EA9.9020903@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 4/4] xen/arm: set GICH_HCR_NPIE if all
 the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Julien Grall wrote:
> On 07/02/14 18:56, Stefano Stabellini wrote:
> > On return to guest, if there are no free LRs and we still have more
> > interrupts to inject, set GICH_HCR_NPIE so that we are going to receive a
> > maintenance interrupt when no pending interrupts are present in the LR
> > registers.
> > The maintenance interrupt handler won't do anything anymore, but
> > receiving the interrupt is going to cause gic_inject to be called on
> > return to guest that is going to clear the old LRs and inject new
> > interrupts.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >   xen/arch/arm/gic.c |    8 +++++++-
> >   1 file changed, 7 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 87bd5d3..bee2618 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -810,8 +810,14 @@ void gic_inject(void)
> >       gic_restore_pending_irqs(current);
> >       if (!gic_events_need_delivery())
> >           gic_inject_irq_stop();
> > -    else
> > +    else {
> >           gic_inject_irq_start();
> > +    }
> > +
> > +    if ( !list_empty(&current->arch.vgic.lr_pending) )
> > +        GICH[GICH_HCR] = GICH_HCR_EN | GICH_HCR_NPIE;
> > +    else
> > +        GICH[GICH_HCR] = GICH_HCR_EN;
> 
> Any reason not to move this into the else branch?

Yes: we need to be able to disable GICH_HCR_NPIE even if there are no
irqs to inject.
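Stefano's point can be sketched as a tiny helper. The bit values below follow the GICv2 GICH_HCR layout (En in bit 0, NPIE in bit 3), but treat them as illustrative restatements rather than the exact Xen definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* GICv2 hypervisor control register bits (illustrative values). */
#define GICH_HCR_EN    (1u << 0)   /* virtual interface enable */
#define GICH_HCR_NPIE  (1u << 3)   /* "no pending" maintenance interrupt */

/* Value written to GICH_HCR on return to guest: keep the interface
 * enabled, and request a maintenance interrupt on "no pending LRs"
 * only while interrupts are still queued in software (lr_pending
 * non-empty).  Because this write happens unconditionally on every
 * exit, NPIE is also *cleared* once the queue drains -- which is why
 * it cannot be folded into the else branch as suggested. */
static uint32_t gich_hcr_on_exit(bool lr_pending_nonempty)
{
    return lr_pending_nonempty ? (GICH_HCR_EN | GICH_HCR_NPIE)
                               : GICH_HCR_EN;
}
```

With the queue empty the function returns plain GICH_HCR_EN, dropping any previously set NPIE bit.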


> BTW, I think we can safely avoid reinjecting the IRQ for the event channels if
> there are pending events. It will also avoid spurious IRQs in some specific
> cases :).

Currently we don't set GIC_IRQ_GUEST_PENDING for evtchn_irq if the irq
is already inflight. I'll see if I can come up with a patch to
streamline that behaviour.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:01:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuEU-0008Ff-7g; Mon, 10 Feb 2014 17:01:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WCuES-0008FW-6T
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:01:20 +0000
Received: from [85.158.139.211:52503] by server-7.bemta-5.messagelabs.com id
	15/86-14867-FD509F25; Mon, 10 Feb 2014 17:01:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392051677!2970524!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13931 invoked from network); 10 Feb 2014 17:01:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 17:01:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 10 Feb 2014 17:01:16 +0000
Message-Id: <52F913EA020000780011ADB3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 10 Feb 2014 17:01:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1392050338-2915-1-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1392050338-2915-1-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 17:38, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> vmce_amd_[rd|wr]msr functions can handle accesses to the AMD thresholding
> registers. But due to this statement here:
> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> we are wrongly masking off the top two bits, which meant the register
> accesses never reached the vmce_amd_* functions.
> 
> Corrected this problem by modifying the mask in this patch to allow the
> AMD thresholding registers to fall through to the 'default' case, which
> in turn allows the vmce_amd_* functions to handle accesses to the
> registers.
> 
> While at it, remove some clutter in the vmce_amd* functions. Retained the
> current policy of returning zero for reads and ignoring writes.
> 
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Reviewed-by: Christoph Egger <chegger@amazon.de>

Are these tags for _this_ version of the patch, or an earlier one?
The nature of the changes done on this latest round (which finally
looks good to me) would require you to drop all earlier acks and
reviews, unless some reviewing went on behind the scenes.

Jan
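The mask bug described in the quoted commit message can be reproduced with two MSR numbers: MSR_IA32_MC0_CTL is architecturally 0x400, and 0xC0000408 stands in here for one of the AMD thresholding MSRs (the exact register chosen is an assumption for illustration):

```c
#include <stdint.h>

#define MSR_IA32_MC0_CTL      0x400u       /* architectural MCA bank 0 control */
#define AMD_THRESHOLDING_MSR  0xC0000408u  /* assumed AMD thresholding MSR */

/* The original switch key: masking with (MSR_IA32_MC0_CTL | 3) == 0x403
 * discards the high bits that identify the AMD 0xC000_xxxx range, so an
 * AMD thresholding MSR aliases onto MSR_IA32_MC0_CTL and is handled by
 * the wrong switch case instead of falling through to 'default', where
 * the vmce_amd_* functions would see it. */
static uint32_t old_switch_key(uint32_t msr)
{
    return msr & (MSR_IA32_MC0_CTL | 3);
}
```

Since 0xC0000408 & 0x403 == 0x400, the thresholding access lands in the MC0_CTL case, which is exactly the misrouting the patch fixes.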


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:04:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:04:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuH6-0008QQ-WF; Mon, 10 Feb 2014 17:04:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCuH5-0008QE-1L
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:04:03 +0000
Received: from [85.158.139.211:63203] by server-2.bemta-5.messagelabs.com id
	1A/39-23037-28609F25; Mon, 10 Feb 2014 17:04:02 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392051840!2961177!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21703 invoked from network); 10 Feb 2014 17:04:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:04:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101362613"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 17:03:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:03:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCuH1-0003oZ-3O;
	Mon, 10 Feb 2014 17:03:59 +0000
Date: Mon, 10 Feb 2014 17:03:43 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52F56208.3090504@linaro.org>
Message-ID: <alpine.DEB.2.02.1402101700000.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F56208.3090504@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Julien Grall wrote:
> Hi Stefano,
> 
> On 07/02/14 18:56, Stefano Stabellini wrote:
> > Do not set GICH_LR_MAINTENANCE_IRQ for every interrupt set in the
> > GICH_LR registers.
> > 
> > Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
> > registers, clears the invalid ones and frees the corresponding interrupts
> > from the inflight queue if appropriate. Add the interrupt to lr_pending
> > if GIC_IRQ_GUEST_PENDING is still set.
> > 
> > Call gic_clear_lrs from gic_restore_state and on return to guest
> > (gic_inject).
> > 
> > Remove the now unused code in maintenance_interrupts and gic_irq_eoi.
> > 
> > In vgic_vcpu_inject_irq, if the target is a vcpu running on another cpu,
> > send an SGI to it to interrupt it and force it to clear the old LRs.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > ---
> >   xen/arch/arm/gic.c  |  126
> > ++++++++++++++++++++++-----------------------------
> >   xen/arch/arm/vgic.c |    3 +-
> >   2 files changed, 56 insertions(+), 73 deletions(-)
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index 215b679..87bd5d3 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > +static void gic_clear_lrs(struct vcpu *v)
> > +{
> > +    struct pending_irq *p;
> > +    int i = 0, irq;
> > +    uint32_t lr;
> > +    bool_t inflight;
> > +
> > +    ASSERT(!local_irq_is_enabled());
> > +
> > +    while ((i = find_next_bit((const long unsigned int *)
> > &this_cpu(lr_mask),
> > +                              nr_lrs, i)) < nr_lrs) {
> > +        lr = GICH[GICH_LR + i];
> > +        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> > +        {
> > +            if ( lr & GICH_LR_HW )
> > +                irq = (lr >> GICH_LR_PHYSICAL_SHIFT) &
> > GICH_LR_PHYSICAL_MASK;
> > +            else
> > +                irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> > +
> 
> The if statement can be simplified to:
> 
> irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;

right

 
> > +            inflight = 0;
> > +            GICH[GICH_LR + i] = 0;
> > +            clear_bit(i, &this_cpu(lr_mask));
> > +
> > +            spin_lock(&gic.lock);
> > +            p = irq_to_pending(v, irq);
> > +            if ( p->desc != NULL )
> > +                p->desc->status &= ~IRQ_INPROGRESS;
> > +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > +            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> > +                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> > +            {
> 
> I would add a WARN_ON(p->desc != NULL) here. AFAIK, this code path shouldn't
> be used for physical IRQ.

That's not true: an edge physical irq can come through while another one
of the same type is being handled. In fact, the pending and active bits
exist even on the physical GIC interface.
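For reference, a minimal sketch of the LR field extraction being discussed, using the GICv2 list-register layout (VirtualID in bits [9:0], PhysicalID in bits [19:10], HW flag in bit 31); the macro names mirror the quoted Xen code, but the values are restated here as assumptions:

```c
#include <stdint.h>

/* GICv2 list register field layout (illustrative restatement). */
#define GICH_LR_VIRTUAL_SHIFT   0
#define GICH_LR_VIRTUAL_MASK    0x3ffu
#define GICH_LR_PHYSICAL_SHIFT  10
#define GICH_LR_PHYSICAL_MASK   0x3ffu
#define GICH_LR_HW              (1u << 31)

/* Julien's simplification: since Xen programs the virtual and physical
 * IDs to the same value for hardware interrupts, reading the virtual
 * field alone is sufficient in both the HW and non-HW cases. */
static uint32_t lr_irq(uint32_t lr)
{
    return (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
}
```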

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:06:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuJ1-0000I4-Ih; Mon, 10 Feb 2014 17:06:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WCuIz-0000Hy-PS
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:06:02 +0000
Received: from [193.109.254.147:19465] by server-7.bemta-14.messagelabs.com id
	FD/7F-23424-9F609F25; Mon, 10 Feb 2014 17:06:01 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392051955!3326894!1
X-Originating-IP: [209.85.212.172]
X-SpamReason: No, hits=2.0 required=7.0 tests=BIZ_TLD,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 606 invoked from network); 10 Feb 2014 17:05:55 -0000
Received: from mail-wi0-f172.google.com (HELO mail-wi0-f172.google.com)
	(209.85.212.172)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:05:55 -0000
Received: by mail-wi0-f172.google.com with SMTP id e4so3045364wiv.5
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 09:05:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=EMLHR2kEkYFgDUD86OX2mAl7MpEYP3drOyCVPSlOiwc=;
	b=XVEgTUyWIHgRxX9RfouCuQmxZ3K68nhth4foty+804cbX3G1iAl/R95W/xqR3xfGaJ
	Iju9F1427TR+AKy5ELrWSVJ2sBpEBC/T1VXjBj8wg36dLIuU23rV7GuuizylHoSXwyrD
	ikLc2CpkcZO/YUQOjIQY6uNw84AqEyj4LjdKfcoKQivbo1F12Uo7x3i7NcjTZG8/8qOI
	6xkiSOTwyJiFd1C3YpJzwwdWjKS631KguvrlAiltiy77zJrr4itGmdT9xFGNwbY8FyLd
	L1pt9dHHuIVZEA4oYzPyOeBR075ym1i9+mXUvn76viJq5TG8wkQQTRsPZ5CRgyugt8Ex
	Ycug==
MIME-Version: 1.0
X-Received: by 10.180.77.129 with SMTP id s1mr11208691wiw.56.1392051955157;
	Mon, 10 Feb 2014 09:05:55 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 10 Feb 2014 09:05:55 -0800 (PST)
In-Reply-To: <52F10EEF.7050402@m2r.biz>
References: <1391528492.2441.26.camel@astar.houby.net>
	<52F10EEF.7050402@m2r.biz>
Date: Mon, 10 Feb 2014 17:05:55 +0000
X-Google-Sender-Auth: HqeonH9zWDdju50abC_ZScAz2-g
Message-ID: <CAFLBxZagZC2-C000jBjU3qb14Fxva08NQSwT_X9s1JC8JMvKQQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
Cc: ehouby@yahoo.com, xen@lists.fedoraproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 4, 2014 at 4:01 PM, Fabio Fantoni <fabio.fantoni@m2r.biz> wrote:
> Il 04/02/2014 16:41, Eric Houby ha scritto:
>
>> Xen list,
>>
>> I am trying to boot a F20 guest and connect using Spice but have run
>> into an issue.
>>
>> My VM config file includes:
>> spice = 1
>> spicehost='0.0.0.0'
>> spiceport=6001
>> spicedisable_ticketing=1
>>
>>
>> Is Spice supported with qemu-xen-traditional?
>
>
> No, only with upstream qemu; and if you compile Xen and qemu from source you
> must also enable spice support in the qemu build. For example, in my Xen
> build tests I add:
>
> tools/Makefile
> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>          --datadir=$(SHAREDIR)/qemu-xen \
>          --localstatedir=/var \
>          --disable-kvm \
> +        --enable-spice \
> +        --enable-usb-redir \
>          --disable-docs \
>          --disable-guest-agent \
>          --python=$(PYTHON) \
>
> If you use upstream qemu from a distribution package, it probably already has
> Spice built in; for example, on Debian I have already tested this and it works.

It might be nice at some point to have this integrated into the
top-level configure, possibly enabled by default (gated on the
appropriate development libraries being available).

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:06:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuJe-0000Oh-Vl; Mon, 10 Feb 2014 17:06:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WCuJe-0000OX-1y
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:06:42 +0000
Received: from [85.158.139.211:4444] by server-16.bemta-5.messagelabs.com id
	4E/20-05060-12709F25; Mon, 10 Feb 2014 17:06:41 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392051998!2947745!1
X-Originating-IP: [216.32.180.188]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1220 invoked from network); 10 Feb 2014 17:06:40 -0000
Received: from co1ehsobe005.messaging.microsoft.com (HELO
	co1outboundpool.messaging.microsoft.com) (216.32.180.188)
	by server-16.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Feb 2014 17:06:40 -0000
Received: from mail126-co1-R.bigfish.com (10.243.78.227) by
	CO1EHSOBE015.bigfish.com (10.243.66.78) with Microsoft SMTP Server id
	14.1.225.22; Mon, 10 Feb 2014 17:06:38 +0000
Received: from mail126-co1 (localhost [127.0.0.1])	by
	mail126-co1-R.bigfish.com (Postfix) with ESMTP id 1BCC26A00F2;
	Mon, 10 Feb 2014 17:06:38 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: 1
X-BigFish: VPS1(z579eh37d5kz98dI1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h944hd25hd2bhf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail126-co1 (localhost.localdomain [127.0.0.1]) by mail126-co1
	(MessageSwitch) id 1392051996396732_17460;
	Mon, 10 Feb 2014 17:06:36 +0000 (UTC)
Received: from CO1EHSMHS019.bigfish.com (unknown [10.243.78.226])	by
	mail126-co1.bigfish.com (Postfix) with ESMTP id 5C64D860062;
	Mon, 10 Feb 2014 17:06:36 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CO1EHSMHS019.bigfish.com
	(10.243.66.29) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 10 Feb 2014 17:06:33 +0000
X-WSS-ID: 0N0SI6S-08-D7R-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2647DD16095;	Mon, 10 Feb 2014 11:06:28 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 10 Feb 2014 11:06:32 -0600
Received: from arav-dinar (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 10 Feb 2014 12:06:30 -0500
Date: Mon, 10 Feb 2014 11:07:15 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140210170714.GA14285@arav-dinar>
References: <1392050338-2915-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52F913EA020000780011ADB3@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F913EA020000780011ADB3@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 10, 2014 at 05:01:14PM +0000, Jan Beulich wrote:
> >>> On 10.02.14 at 17:38, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> > Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> > Reviewed-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> > Reviewed-by: Christoph Egger <chegger@amazon.de>
> 
> Are these tags for _this_ version of the patch, or an earlier one?
> The nature of the changes done on this latest round (which finally
> looks good to me) would require you to drop all earlier acks and
> reviews, unless some reviewing went on behind the scenes.
> 

Ah. Nope. I assumed I had to retain the 'Reviewed-by' lines, as the patch is
related to the earlier ones I had sent out.

Sent a corrected one now.

-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:06:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuJh-0000PZ-HT; Mon, 10 Feb 2014 17:06:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WCuJg-0000P8-Au
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:06:44 +0000
Received: from [85.158.139.211:4631] by server-3.bemta-5.messagelabs.com id
	48/35-13671-32709F25; Mon, 10 Feb 2014 17:06:43 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392052001!2961786!1
X-Originating-IP: [65.55.88.13]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2047 invoked from network); 10 Feb 2014 17:06:42 -0000
Received: from tx2ehsobe003.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.13)
	by server-12.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	10 Feb 2014 17:06:42 -0000
Received: from mail128-tx2-R.bigfish.com (10.9.14.240) by
	TX2EHSOBE010.bigfish.com (10.9.40.30) with Microsoft SMTP Server id
	14.1.225.22; Mon, 10 Feb 2014 17:06:40 +0000
Received: from mail128-tx2 (localhost [127.0.0.1])	by
	mail128-tx2-R.bigfish.com (Postfix) with ESMTP id B885338087C;
	Mon, 10 Feb 2014 17:06:40 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h1155h)
Received: from mail128-tx2 (localhost.localdomain [127.0.0.1]) by mail128-tx2
	(MessageSwitch) id 1392051982336221_26287;
	Mon, 10 Feb 2014 17:06:22 +0000 (UTC)
Received: from TX2EHSMHS023.bigfish.com (unknown [10.9.14.239])	by
	mail128-tx2.bigfish.com (Postfix) with ESMTP id 570FB2C0164;
	Mon, 10 Feb 2014 17:06:20 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by TX2EHSMHS023.bigfish.com
	(10.9.99.123) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 10 Feb 2014 17:06:11 +0000
X-WSS-ID: 0N0SI69-07-BQJ-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2D6D712C0077;	Mon, 10 Feb 2014 11:06:08 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 10 Feb 2014 11:06:11 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server id 14.2.328.9;
	Mon, 10 Feb 2014 12:06:09 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>, <JBeulich@suse.com>
Date: Mon, 10 Feb 2014 10:50:41 -0600
Message-ID: <1392051041-3372-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH V3] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
thresholding registers, but due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we were wrongly masking off the top bits, which meant the register
accesses never made it to the vmce_amd_* functions.

Correct this by modifying the mask so that the AMD thresholding
registers fall through to the 'default' case, which in turn allows
the vmce_amd_* functions to handle accesses to those registers.

While at it, remove some clutter in the vmce_amd_* functions. Retain
the current policy of returning zero for reads and ignoring writes.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
 xen/arch/x86/cpu/mcheck/vmce.c    |    4 ++--
 2 files changed, 8 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..03797ab 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    /* Do nothing as we don't emulate this MC bank currently */
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    /* Assign '0' as we don't emulate this MC bank currently */
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..be9bb5e 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /*
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:07:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuK0-0000WP-F7; Mon, 10 Feb 2014 17:07:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCuJy-0000VM-Fg
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:07:04 +0000
Received: from [85.158.143.35:37476] by server-1.bemta-4.messagelabs.com id
	CA/3D-31661-53709F25; Mon, 10 Feb 2014 17:07:01 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392052020!4595143!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3713 invoked from network); 10 Feb 2014 17:07:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:07:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99537799"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:06:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:06:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCuJu-0003rk-Vc;
	Mon, 10 Feb 2014 17:06:58 +0000
Date: Mon, 10 Feb 2014 17:06:43 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52F567CB.7080302@linaro.org>
Message-ID: <alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Julien Grall wrote:
> On 07/02/14 18:56, Stefano Stabellini wrote:
>    > +static void gic_clear_lrs(struct vcpu *v)
> > +{
> > +    struct pending_irq *p;
> > +    int i = 0, irq;
> > +    uint32_t lr;
> > +    bool_t inflight;
> > +
> > +    ASSERT(!local_irq_is_enabled());
> > +
> > +    while ((i = find_next_bit((const long unsigned int *)
> > &this_cpu(lr_mask),
> > +                              nr_lrs, i)) < nr_lrs) {
> 
> Did you look at the ELRSR{0,1} registers, which list the usable LRs? I think you
> can use them together with this_cpu(lr_mask) to avoid browsing every LR.

Given that we only have 4 LR registers, I think that unconditionally
reading 2 ELRSR registers would cost more than simply checking lr_mask
on average.
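The loop under discussion only visits the set bits of a per-CPU mask. A self-contained sketch of that pattern, where `find_next_bit32` is a stand-in for Xen's `find_next_bit` restricted to a single 32-bit word (names are illustrative, not Xen's):

```c
#include <stdint.h>

/* Stand-in for find_next_bit() over one 32-bit word: return the index of
 * the first set bit at or after 'offset', or 'size' if there is none. */
static unsigned int find_next_bit32(uint32_t word, unsigned int size,
                                    unsigned int offset)
{
    unsigned int i;

    for (i = offset; i < size; i++)
        if (word & (1u << i))
            return i;
    return size;
}

/* Mirror of the quoted while loop: visit each in-use LR index exactly
 * once. Returns how many LRs were visited; a real gic_clear_lrs() would
 * read and possibly clear List Register i in the loop body. */
static unsigned int walk_lr_mask(uint32_t lr_mask, unsigned int nr_lrs)
{
    unsigned int i = 0, visited = 0;

    while ((i = find_next_bit32(lr_mask, nr_lrs, i)) < nr_lrs) {
        visited++;
        i++; /* advance past the bit we just handled */
    }
    return visited;
}
```

With only 4 LRs and a typically sparse mask, this scans at most a handful of bits per exit, which is the cost Stefano is weighing against two unconditional ELRSR reads.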

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuLd-0000sF-17; Mon, 10 Feb 2014 17:08:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCuLb-0000qy-GU
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:08:43 +0000
Received: from [85.158.137.68:43397] by server-14.bemta-3.messagelabs.com id
	F7/C6-08196-A9709F25; Mon, 10 Feb 2014 17:08:42 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392052120!895285!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4076 invoked from network); 10 Feb 2014 17:08:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:08:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101364038"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 17:08:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:08:39 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCuLX-0003tw-8C;
	Mon, 10 Feb 2014 17:08:39 +0000
Date: Mon, 10 Feb 2014 17:08:23 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52F56AAB.7080300@linaro.org>
Message-ID: <alpine.DEB.2.02.1402101707230.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<52F56AAB.7080300@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 0/4] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 7 Feb 2014, Julien Grall wrote:
> On 07/02/14 18:56, Stefano Stabellini wrote:
> > Hi all,
> 
> Hi Stefano,
> 
> > this patch series removes any needs for maintenance interrupts for both
> > hardware and software interrupts in Xen.
> > It achieves the goal by using the GICH_LR_HW bit for hardware interrupts
> > and by checking the status of the GICH_LR registers on return to guest,
> > clearing the registers that are invalid and handling the lifecycle of
> > the corresponding interrupts in Xen data structures.
> 
> After reading your patch series I see a possible race condition with the timer
> interrupt.
> 
> As you know, Xen can re-inject the timer interrupt before the previous one is
> EOIed. As it's the timer, the IRQ is injected on the currently running VCPU.
> 
> vgic_vcpu_inject_irq(timer)
>   -> IRQ already visible to the guest -> set PENDING
> return to guest context
> <--------------------- Guest EOI the IRQ
> .... a few milliseconds
> going to hyp mode
>   -> doing stuff
>   -> reinject the timer IRQ
> 
> If I'm not mistaken, with your solution, the next IRQ can be delayed for a few
> milliseconds. That could be fixed by updating the LRs.

You are right, this race exists. I'll work on a fix for the next
iteration of the series.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:10:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:10:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuMv-0001Kq-TA; Mon, 10 Feb 2014 17:10:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCuMu-0001KU-1O
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:10:04 +0000
Received: from [85.158.137.68:10205] by server-17.bemta-3.messagelabs.com id
	34/06-22569-9E709F25; Mon, 10 Feb 2014 17:10:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392052199!902898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28081 invoked from network); 10 Feb 2014 17:10:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:10:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99538942"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:09:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:09:58 -0500
Message-ID: <1392052197.26657.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Feb 2014 17:09:57 +0000
In-Reply-To: <alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
	<alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: julien.grall@citrix.com, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:06 +0000, Stefano Stabellini wrote:
> On Fri, 7 Feb 2014, Julien Grall wrote:
> > On 07/02/14 18:56, Stefano Stabellini wrote:
> >    > +static void gic_clear_lrs(struct vcpu *v)
> > > +{
> > > +    struct pending_irq *p;
> > > +    int i = 0, irq;
> > > +    uint32_t lr;
> > > +    bool_t inflight;
> > > +
> > > +    ASSERT(!local_irq_is_enabled());
> > > +
> > > +    while ((i = find_next_bit((const long unsigned int *)
> > > &this_cpu(lr_mask),
> > > +                              nr_lrs, i)) < nr_lrs) {
> > 
> > Did you look at the ELRSR{0,1} registers, which list the usable LRs? I think you
> > can use them together with this_cpu(lr_mask) to avoid browsing every LR.
> 
> Given that we only have 4 LR registers, I think that unconditionally
> reading 2 ELRSR registers would cost more than simply checking lr_mask
> on average.

You also need to actually read the LR and do some bit masking etc., don't
you?

What about implementations with >4 LRs?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:11:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuNz-0001Yk-FV; Mon, 10 Feb 2014 17:11:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCuNy-0001YU-8H
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:11:10 +0000
Received: from [85.158.139.211:27761] by server-9.bemta-5.messagelabs.com id
	91/82-11237-D2809F25; Mon, 10 Feb 2014 17:11:09 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392052269!2958473!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15011 invoked from network); 10 Feb 2014 17:11:09 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:11:09 -0000
Received: by mail-ee0-f41.google.com with SMTP id e51so2636891eek.14
	for <xen-devel@lists.xensource.com>;
	Mon, 10 Feb 2014 09:11:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=AwGgaKekgs1KE/2h7p8G7Je36gaLqXYAdSD/W7cljyQ=;
	b=GOk/Q6fr3NCtOMPbV0LT2q62/snaiZxZSlbNqDHUYP2aHbU+/L0qx0jChxKu1vGyjO
	yKj0VFsTfEee6kh1Z0wrMFGHThflr/v1SawWKkO3iRFpNLr8gQYJONYIauwo6RJkhMXK
	ZAHXdxdOSkL/Wo2QyNxWZrNpaL+NextRipqMTeccEaUU2i12oLgvHRViBOx3nJJUdiUS
	ninrlG6IIZek/YPwccnnB9UdF3Q23ggKL4pDW8o8AA9dvys6WSZVewlPCwAgQRTLWD36
	e6vJmQBBbvjHY6ly2+h4176rvv7Zbb7mNb19NOfgDn1jLXaq7qEWIg8zIKdrEQjIJ4QZ
	4FYQ==
X-Gm-Message-State: ALoCoQmiAerSLkLUb325ivhckpVg4WA+/qF1im08P4KNmXIK2h4C/rm8ib37pm6U4beW6BUxoZEM
X-Received: by 10.14.127.72 with SMTP id c48mr37521689eei.16.1392052268775;
	Mon, 10 Feb 2014 09:11:08 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m1sm56615310een.7.2014.02.10.09.11.05
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 09:11:07 -0800 (PST)
Message-ID: <52F90828.4060809@linaro.org>
Date: Mon, 10 Feb 2014 17:11:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
	<alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 05:06 PM, Stefano Stabellini wrote:
> On Fri, 7 Feb 2014, Julien Grall wrote:
>> On 07/02/14 18:56, Stefano Stabellini wrote:
>>    > +static void gic_clear_lrs(struct vcpu *v)
>>> +{
>>> +    struct pending_irq *p;
>>> +    int i = 0, irq;
>>> +    uint32_t lr;
>>> +    bool_t inflight;
>>> +
>>> +    ASSERT(!local_irq_is_enabled());
>>> +
>>> +    while ((i = find_next_bit((const long unsigned int *)
>>> &this_cpu(lr_mask),
>>> +                              nr_lrs, i)) < nr_lrs) {
>>
>> Did you look at the ELRSR{0,1} registers, which list the usable LRs? I think
>> you can use them with this_cpu(lr_mask) to avoid scanning every LR.
> 
> Given that we only have 4 LR registers, I think that unconditionally
> reading 2 ELRSR registers would cost more than simply checking lr_mask
> on average.

The maximum number of LR registers is 64. I agree that current
hardware only handles 4 ... but we should think about future hardware.
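[Archive note: the lr_mask walk under discussion can be sketched in plain C. This is a hypothetical simplification, not the Xen code: find_next_bit here is a stand-in for Xen's bitmap helper, and count_used_lrs plays the role of gic_clear_lrs()'s loop over in-use List Registers.]

```c
#include <stdint.h>

/* Minimal stand-in for Xen's find_next_bit(): return the index of the
 * first set bit at or after 'start', or 'nbits' if none is set. */
static int find_next_bit(uint64_t mask, int nbits, int start)
{
    for (int i = start; i < nbits; i++)
        if (mask & (1ULL << i))
            return i;
    return nbits;
}

/* Visit only the LRs marked in-use in lr_mask, instead of touching all
 * nr_lrs entries (up to 64 on future hardware).  Returns how many LRs
 * were visited. */
static int count_used_lrs(uint64_t lr_mask, int nr_lrs)
{
    int count = 0;
    int i = 0;

    while ((i = find_next_bit(lr_mask, nr_lrs, i)) < nr_lrs)
    {
        count++;   /* a real implementation would read and clear LR i here */
        i++;       /* resume the scan after the bit just handled */
    }
    return count;
}
```

The point of the thread: with only 4 LRs the linear scan of lr_mask is cheap, while on hardware with up to 64 LRs a hardware-maintained summary such as ELRSR could shortcut it.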

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:11:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuOP-0001cp-Sm; Mon, 10 Feb 2014 17:11:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WCuOO-0001cM-5X
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:11:36 +0000
Received: from [193.109.254.147:14768] by server-5.bemta-14.messagelabs.com id
	90/2D-16688-74809F25; Mon, 10 Feb 2014 17:11:35 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392052294!3328239!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4930 invoked from network); 10 Feb 2014 17:11:34 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:11:34 -0000
Received: by mail-we0-f174.google.com with SMTP id x55so4525794wes.33
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 09:11:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=E25L7wK9RxMSnfP3BFJGWYy6s1FCv7unMVDq6OYbWe0=;
	b=WZk8FIhcIr1XgI977jwikRfepua4Sqv508sWRq6oJvSxPSgXQ7RqXH+7wHh+Gl5s4/
	W22uKC3qAtFP51ZRukwasn9fMj13/QE34DHRtLlJjU/AdBqPyWL3bvNanJFf8zwtppOC
	5H/gSemexTIPy098S96SmIx+OEOTR7qRZ6uo8IgTmr8V7OunyJvU99IvlUDlduxhSnN9
	VkZChXA+bSOlOxu4tRYHu4+qLyqi3Q3NDLpEpGF0lxCi+oQ3i4oJvDtRdtfRJjHKYLLw
	8Y4OSNxP+piDWLrV80eZNd0/KrjaBbZ9+vKxotRBZ9xNISa4VeYWPXetCtitNF8Z5fiT
	BL4Q==
MIME-Version: 1.0
X-Received: by 10.180.19.130 with SMTP id f2mr11264766wie.6.1392052294491;
	Mon, 10 Feb 2014 09:11:34 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 10 Feb 2014 09:11:34 -0800 (PST)
In-Reply-To: <CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
Date: Mon, 10 Feb 2014 17:11:34 +0000
X-Google-Sender-Auth: Jvi7SfhlreIjsylLgYZJz-SZQJI
Message-ID: <CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
<mikeneiderhauser@gmail.com> wrote:
> Works like a charm.  I do not have physical access to the computer this
> weekend to verify that the cards are isolated, but the HVM starts and
> appears to be working well.
>
> When do you think Xen 4.4 will be released?  The article I read mentioned it
> will be released in 2014 (hinting towards the end of February).  I also read
> 'When it is ready.'
>
> Any timeline would be great.

I'm afraid that's about all we can give. :-)  We've locked down
development for 2 months now and are working on finding and fixing
bugs.  If there are no more blocker bugs or other unforeseen delays,
it should be out by the end of February.  But there are necessarily
significant unknowns, so we can't make any promises.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:13:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuQE-0001qG-HR; Mon, 10 Feb 2014 17:13:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCuQC-0001q1-Di
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:13:28 +0000
Received: from [193.109.254.147:42781] by server-9.bemta-14.messagelabs.com id
	04/AE-24895-7B809F25; Mon, 10 Feb 2014 17:13:27 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392052404!3333796!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2843 invoked from network); 10 Feb 2014 17:13:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:13:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99539891"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:13:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:13:09 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCuPt-0003y6-3E;
	Mon, 10 Feb 2014 17:13:09 +0000
Message-ID: <52F908A4.20902@citrix.com>
Date: Mon, 10 Feb 2014 17:13:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
In-Reply-To: <52F90D8C020000780011AD54@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 16:34, Jan Beulich wrote:
>>>> On 10.02.14 at 12:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> This reverts large amounts of:
>>   9607327abbd3e77bde6cc7b5327f3efd781fc06e
>>     "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
>>   620d5dad54008e40798c4a0c4322aef274c36fa3
>>     "x86/HVM: assorted RTC emulation adjustments"
>>
>> and by extension:
>>   f3347f520cb4d8aa4566182b013c6758d80cbe88
>>     "x86/HVM: adjust IRQ (de-)assertion"
>>   c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
>>     "x86/HVM: fix processing of RTC REG_B writes"
>>   527824f41f5fac9cba3d4441b2e73d3118d98837
>>     "x86/hvm: Centralize and simplify the RTC IRQ logic."
> So what does "by extension" mean here? Are these being
> reverted?

The logic here was based on the logic being reverted, so the changes
themselves are mostly gone.

>
>> The current code has a pathological case, tickled by the access pattern
>> of Windows 2003 Server SP2.  Occasionally on boot (which I presume is
>> during a time calibration against the RTC Periodic Timer), Windows gets
>> stuck in an infinite loop reading RTC REG_C.  This affects 32 and 64 bit
>> guests.
>>
>> In the pathological case, the VM state looks like this:
>>   * RTC: 64Hz period, periodic interrupts enabled
>>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
>>
>> With an instrumented Xen, dumping the periodic timers with a guest in
>> this state shows a single timer with pt->irq_issued=1 and
>> pt->pending_intr_nr=2.
>>
>> Windows is presumably waiting for reads of REG_C to drop to 0, and
>> reading REG_C clears the value each time in the emulated RTC.  However:
>>
>>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>>   * pt_update_irq() always finds the RTC as earliest_pt.
>>   * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack
>>     mode.  It returns true, indicating that pt_update_irq() should
>>     really inject the interrupt.
>>   * pt_update_irq() decides that it doesn't need to fake up part of
>>     pt_intr_post() because this is a real interrupt.
>>   * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
>>     pending, so exits early without calling pt_intr_post().
>>
>> The underlying problem here comes because the AF and UF bits of RTC
>> interrupt state are modelled by the RTC code, but the PF is modelled by
>> the pt code.  The root cause of Windows' infinite loop is that RTC_PF is
>> being re-set on vmentry before the interrupt logic has worked out that
>> it can't actually inject an RTC interrupt, causing Windows to
>> erroneously read (RTC_PF|RTC_IRQF) when it should be reading 0.
> So you're undoing a whole lot of changes done with the goal of
> getting the overall emulation closer to what real hardware does,
> just to paper over an issue elsewhere in the code? Not really an
> approach I'm in favor of.

I fail to see how the current code is closer to what hardware does than
this proposed patch.

>
>>   * The results from XenRT suggest that the new emulation is better
>>     than the old.
> "Better" in the sense of the limited set of uses of the virtual hardware
> by whatever selection of guest OSes is being run there. But very
> likely not "better" in the sense of matching up with how the respective
> hardware specification would require it to behave.
>
>> --- a/xen/arch/x86/hvm/rtc.c
>> +++ b/xen/arch/x86/hvm/rtc.c
>> @@ -59,48 +59,78 @@ static void rtc_set_time(RTCState *s);
>>  static inline int from_bcd(RTCState *s, int a);
>>  static inline int convert_hour(RTCState *s, int hour);
>>
>> -static void rtc_update_irq(RTCState *s)
>> +/*
>> + * Send an edge on the RTC ISA IRQ line.  The RTC spec states that it should
>> + * be a line level interrupt, but the PIIX3 states that it must be edge
>> + * triggered.  We model the RTC using edge semantics.
>> + */
>> +static void rtc_toggle_irq(RTCState *s)
>>  {
>> +    hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
>> +    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
>> +}
>> +
>> +static void rtc_update_regb(RTCState *s, uint8_t new_b)
>> +{
>> +    uint8_t new_c = s->hw.cmos_data[RTC_REG_C] & ~RTC_IRQF;
>> +
>>      ASSERT(spin_is_locked(&s->lock));
>>
>> -    if ( rtc_mode_is(s, strict) && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
>> -        return;
>> +    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
>> +        new_c |= RTC_IRQF;
> Without going back to reading the spec, iirc RTC_IRQF is a sticky
> bit cleared _only_ by REG_C reads, i.e. you shouldn't clear it
> earlier in the function and then conditionally set it here.

The main problem is that the MC146818 states that the interrupt is line
level, while the PIIX3 mandates that it is edge triggered, and our ACPI
tables define it to be edge triggered.

The datasheet states "The IRQF bit in Register C is a '1' whenever the
¬IRQ pin is being driven low".  With edge semantics where the line never
actually stays asserted, this would degrade to being sticky until read.

However, with edge semantics, we need to send new interrupts when
enabling bits in REG_B even if IRQF is already outstanding, which is why
the logic starts by masking it back out.  Furthermore, in no-ack mode,
we expect IRQF to be set (as the guest isn't reading REG_C) while
interrupts are still wanted.

Perhaps IRQF should be or'd back together when writing the state back.
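[Archive note: the mask-then-reassert logic described above can be illustrated with a toy model. This is a hypothetical sketch, not the Xen implementation: register names follow the thread, but the state is reduced to two bytes and the IRQ side effect to a return value.]

```c
#include <stdint.h>

#define RTC_UF   0x10  /* update-ended flag  */
#define RTC_AF   0x20  /* alarm flag         */
#define RTC_PF   0x40  /* periodic flag      */
#define RTC_IRQF 0x80  /* interrupt flag     */

/* Toy model of the REG_B write path: IRQF is masked out first so that a
 * newly enabled source generates a fresh edge even if IRQF was already
 * outstanding.  Returns nonzero if an IRQ edge should be sent. */
static int update_regb(uint8_t *reg_b, uint8_t *reg_c, uint8_t new_b)
{
    uint8_t new_c = *reg_c & ~RTC_IRQF;

    /* IRQF is (re-)asserted iff some source is both pending and enabled. */
    if (new_b & new_c & (RTC_PF | RTC_AF | RTC_UF))
        new_c |= RTC_IRQF;

    *reg_b = new_b;
    *reg_c = new_c;
    return (new_c & RTC_IRQF) != 0;
}
```

For example, with PF already pending in REG_C, enabling PIE in REG_B produces a fresh edge even though IRQF may already have been set; with nothing pending, no edge is produced.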

>
>>
>> -    /* IRQ is raised if any source is both raised & enabled */
>> -    if ( !(s->hw.cmos_data[RTC_REG_B] &
>> -           s->hw.cmos_data[RTC_REG_C] &
>> -           (RTC_PF | RTC_AF | RTC_UF)) )
>> -        return;
>> +    s->hw.cmos_data[RTC_REG_B] = new_b;
>> +    s->hw.cmos_data[RTC_REG_C] = new_c;
>>
>> -    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
>> -    if ( rtc_mode_is(s, no_ack) )
>> -        hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
>> -    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
>> +    if ( new_c & RTC_IRQF )
>> +        rtc_toggle_irq(s);
> Which then implies that the condition here would also need to
> consider the old state of the flag.
>
>> +static void rtc_irq_event(RTCState *s, uint8_t event)
>> +{
>> +    uint8_t b = s->hw.cmos_data[RTC_REG_B];
>> +    uint8_t old_c = s->hw.cmos_data[RTC_REG_C];
>> +    uint8_t new_c = old_c & ~RTC_IRQF;
>> +
>> +    ASSERT(spin_is_locked(&s->lock));
>> +
>> +    if ( b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
>> +        new_c |= RTC_IRQF;
> Same comment as above.
>
>> @@ -647,9 +674,6 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
>>      case RTC_REG_C:
>>          ret = s->hw.cmos_data[s->hw.cmos_index];
>>          s->hw.cmos_data[RTC_REG_C] = 0x00;
>> -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
>> -            hvm_isa_irq_deassert(d, RTC_IRQ);
> Why? With RTC_IRQF going from 1 to 0, the interrupt line should
> get de-asserted.
>
>> -        rtc_update_irq(s);
> So given the problem description, this would seem to be the most
> important part at a first glance. But looking more closely, I'm getting
> the impression that the call to rtc_update_irq() had no effect at all
> here anyway: The function would always bail on the second if() due
> to REG_C having got cleared a few lines up.
>
>> @@ -270,47 +263,12 @@ int pt_update_irq(struct vcpu *v)
>>      earliest_pt->irq_issued = 1;
>>      irq = earliest_pt->irq;
>>      is_lapic = (earliest_pt->source == PTSRC_lapic);
>> -    pt_priv = earliest_pt->priv;
>>
>>      spin_unlock(&v->arch.hvm_vcpu.tm_lock);
>>
>>      if ( is_lapic )
>> -        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
>> -    else if ( irq == RTC_IRQ && pt_priv )
>>      {
>> -        if ( !rtc_periodic_interrupt(pt_priv) )
>> -            irq =3D -1;
>> -
>> -        pt_lock(earliest_pt);
>> -
>> -        if ( irq < 0 && earliest_pt->pending_intr_nr )
>> -        {
>> -            /*
>> -             * RTC periodic timer runs without the corresponding interrupt
>> -             * being enabled - need to mimic enough of pt_intr_post() to keep
>> -             * things going.
>> -             */
>> -            earliest_pt->pending_intr_nr = 0;
>> -            earliest_pt->irq_issued = 0;
>> -            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
>> -        }
>> -        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
>> -        {
>> -            if ( earliest_pt->on_list )
>> -            {
>> -                /* suspend timer emulation */
>> -                list_del(&earliest_pt->list);
>> -                earliest_pt->on_list = 0;
>> -            }
>> -            irq = -1;
>> -        }
>> -
>> -        /* Avoid dropping the lock if we can. */
>> -        if ( irq < 0 && v == earliest_pt->vcpu )
>> -            goto rescan_locked;
>> -        pt_unlock(earliest_pt);
>> -        if ( irq < 0 )
>> -            goto rescan;
>> +        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> If you didn't put this single function call in braces, the patch would
> become more clear, as it would then be exactly the "else if()"
> branch that got removed by it.

Right - This part of the patch was produced with reversion alone, but
dropping the braces would make it clearer.

>
> As a round-up: I'm not going to veto this, but I'm also not going to
> be putting my name under it, nor am I going to make another
> attempt to clean up the RTC emulation if this is to go in unchanged.
> I'm personally getting the impression that the root cause of the
> observed problem is still being left in place (and perhaps still not
> being fully understood), and hence this whole change goes in the
> wrong direction, _even_ if it makes the problem it is aiming at
> fixing indeed appear to go away.
>
> Jan

I am not sure I follow you.  The root cause of the infinite loop is
that Xen was erroneously setting REG_C.PF, because pt_update_irq()
was trying to pre-guess what the interrupt injection logic would do
(and getting it wrong).

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:13:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuQE-0001qG-HR; Mon, 10 Feb 2014 17:13:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCuQC-0001q1-Di
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:13:28 +0000
Received: from [193.109.254.147:42781] by server-9.bemta-14.messagelabs.com id
	04/AE-24895-7B809F25; Mon, 10 Feb 2014 17:13:27 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392052404!3333796!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2843 invoked from network); 10 Feb 2014 17:13:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:13:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99539891"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:13:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:13:09 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCuPt-0003y6-3E;
	Mon, 10 Feb 2014 17:13:09 +0000
Message-ID: <52F908A4.20902@citrix.com>
Date: Mon, 10 Feb 2014 17:13:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
In-Reply-To: <52F90D8C020000780011AD54@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 16:34, Jan Beulich wrote:
>>>> On 10.02.14 at 12:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> This reverts large amounts of:
>>   9607327abbd3e77bde6cc7b5327f3efd781fc06e
>>     "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
>>   620d5dad54008e40798c4a0c4322aef274c36fa3
>>     "x86/HVM: assorted RTC emulation adjustments"
>>
>> and by extension:
>>   f3347f520cb4d8aa4566182b013c6758d80cbe88
>>     "x86/HVM: adjust IRQ (de-)assertion"
>>   c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
>>     "x86/HVM: fix processing of RTC REG_B writes"
>>   527824f41f5fac9cba3d4441b2e73d3118d98837
>>     "x86/hvm: Centralize and simplify the RTC IRQ logic."
> So what does "by extension" mean here? Are these being
> reverted?

The logic here was based on the logic being reverted, so the changes
themselves are mostly gone.

>
>> The current code has a pathological case, tickled by the access pattern of
>> Windows 2003 Server SP2.  Occasionally on boot (which I presume is during a
>> time calibration against the RTC Periodic Timer), Windows gets stuck in an
>> infinite loop reading RTC REG_C.  This affects 32 and 64 bit guests.
>>
>> In the pathological case, the VM state looks like this:
>>   * RTC: 64Hz period, periodic interrupts enabled
>>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
>>
>> With an instrumented Xen, dumping the periodic timers with a guest in this
>> state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.
>>
>> Windows is presumably waiting for reads of REG_C to drop to 0, and reading
>> REG_C clears the value each time in the emulated RTC.  However:
>>
>>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>>   * pt_update_irq() always finds the RTC as earliest_pt.
>>   * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack mode.  It
>>     returns true, indicating that pt_update_irq() should really inject the
>>     interrupt.
>>   * pt_update_irq() decides that it doesn't need to fake up part of
>>     pt_intr_post() because this is a real interrupt.
>>   * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
>>     pending, so exits early without calling pt_intr_post().
>>
>> The underlying problem here comes because the AF and UF bits of RTC interrupt
>> state are modelled by the RTC code, but the PF is modelled by the pt code.
>> The root cause of Windows' infinite loop is that RTC_PF is being re-set on
>> vmentry before the interrupt logic has worked out that it can't actually
>> inject an RTC interrupt, causing Windows to erroneously read (RTC_PF|RTC_IRQF)
>> when it should be reading 0.
> So you're undoing a whole lot of changes done with the goal of
> getting the overall emulation closer to what real hardware does,
> just to paper over an issue elsewhere in the code? Not really an
> approach I'm in favor of.

I fail to see how the current code is closer to what hardware does than
this proposed patch.

>
>>   * The results from XenRT suggest that the new emulation is better than the
>>     old.
> "Better" in the sense of the limited set of uses of the virtual hardware
> by whatever selection of guest OSes is being run there. But very
> likely not "better" in the sense of matching up with how the respective
> hardware specification would require it to behave.
>
>> --- a/xen/arch/x86/hvm/rtc.c
>> +++ b/xen/arch/x86/hvm/rtc.c
>> @@ -59,48 +59,78 @@ static void rtc_set_time(RTCState *s);
>>  static inline int from_bcd(RTCState *s, int a);
>>  static inline int convert_hour(RTCState *s, int hour);
>> 
>> -static void rtc_update_irq(RTCState *s)
>> +/*
>> + * Send an edge on the RTC ISA IRQ line.  The RTC spec states that it should
>> + * be a line level interrupt, but the PIIX3 states that it must be edge
>> + * triggered.  We model the RTC using edge semantics.
>> + */
>> +static void rtc_toggle_irq(RTCState *s)
>>  {
>> +    hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
>> +    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
>> +}
>> +
>> +static void rtc_update_regb(RTCState *s, uint8_t new_b)
>> +{
>> +    uint8_t new_c = s->hw.cmos_data[RTC_REG_C] & ~RTC_IRQF;
>> +
>>      ASSERT(spin_is_locked(&s->lock));
>> 
>> -    if ( rtc_mode_is(s, strict) && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
>> -        return;
>> +    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
>> +        new_c |= RTC_IRQF;
> Without going back to reading the spec, iirc RTC_IRQF is a sticky
> bit cleared _only_ by REG_C reads, i.e. you shouldn't clear it
> earlier in the function and then conditionally set it here.

The main problem is that the MC146818 states that the interrupt is line
level, while the PIIX3 mandates that it is edge triggered, and our ACPI
tables define it to be edge triggered.

The datasheet states "The IRQF bit in Register C is a '1' whenever the
¬IRQ pin is being driven low".  With edge semantics where the line never
actually stays asserted, this would degrade to being sticky until read.

However, with edge semantics, we need to send new interrupts when
enabling bits in REG_B even if IRQF is already outstanding, which is why
the logic starts by masking it back out.  Furthermore, in no-ack mode,
we expect IRQF to be set (as the guest isn't reading REG_C), but we still
want interrupts.

Perhaps IRQF should be or'd back together when writing the state back.

>
>> 
>> -    /* IRQ is raised if any source is both raised & enabled */
>> -    if ( !(s->hw.cmos_data[RTC_REG_B] &
>> -           s->hw.cmos_data[RTC_REG_C] &
>> -           (RTC_PF | RTC_AF | RTC_UF)) )
>> -        return;
>> +    s->hw.cmos_data[RTC_REG_B] = new_b;
>> +    s->hw.cmos_data[RTC_REG_C] = new_c;
>> 
>> -    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
>> -    if ( rtc_mode_is(s, no_ack) )
>> -        hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
>> -    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
>> +    if ( new_c & RTC_IRQF )
>> +        rtc_toggle_irq(s);
> Which then implies that the condition here would also need to
> consider the old state of the flag.
>
>> +static void rtc_irq_event(RTCState *s, uint8_t event)
>> +{
>> +    uint8_t b = s->hw.cmos_data[RTC_REG_B];
>> +    uint8_t old_c = s->hw.cmos_data[RTC_REG_C];
>> +    uint8_t new_c = old_c & ~RTC_IRQF;
>> +
>> +    ASSERT(spin_is_locked(&s->lock));
>> +
>> +    if ( b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
>> +        new_c |= RTC_IRQF;
> Same comment as above.
>
>> @@ -647,9 +674,6 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
>>      case RTC_REG_C:
>>          ret = s->hw.cmos_data[s->hw.cmos_index];
>>          s->hw.cmos_data[RTC_REG_C] = 0x00;
>> -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
>> -            hvm_isa_irq_deassert(d, RTC_IRQ);
> Why? With RTC_IRQF going from 1 to 0, the interrupt line should
> get de-asserted.
>
>> -        rtc_update_irq(s);
> So given the problem description, this would seem to be the most
> important part at a first glance. But looking more closely, I'm getting
> the impression that the call to rtc_update_irq() had no effect at all
> here anyway: The function would always bail on the second if() due
> to REG_C having got cleared a few lines up.
>
>> @@ -270,47 +263,12 @@ int pt_update_irq(struct vcpu *v)
>>      earliest_pt->irq_issued = 1;
>>      irq = earliest_pt->irq;
>>      is_lapic = (earliest_pt->source == PTSRC_lapic);
>> -    pt_priv = earliest_pt->priv;
>> 
>>      spin_unlock(&v->arch.hvm_vcpu.tm_lock);
>> 
>>      if ( is_lapic )
>> -        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
>> -    else if ( irq == RTC_IRQ && pt_priv )
>>      {
>> -        if ( !rtc_periodic_interrupt(pt_priv) )
>> -            irq = -1;
>> -
>> -        pt_lock(earliest_pt);
>> -
>> -        if ( irq < 0 && earliest_pt->pending_intr_nr )
>> -        {
>> -            /*
>> -             * RTC periodic timer runs without the corresponding interrupt
>> -             * being enabled - need to mimic enough of pt_intr_post() to keep
>> -             * things going.
>> -             */
>> -            earliest_pt->pending_intr_nr = 0;
>> -            earliest_pt->irq_issued = 0;
>> -            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
>> -        }
>> -        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
>> -        {
>> -            if ( earliest_pt->on_list )
>> -            {
>> -                /* suspend timer emulation */
>> -                list_del(&earliest_pt->list);
>> -                earliest_pt->on_list = 0;
>> -            }
>> -            irq = -1;
>> -        }
>> -
>> -        /* Avoid dropping the lock if we can. */
>> -        if ( irq < 0 && v == earliest_pt->vcpu )
>> -            goto rescan_locked;
>> -        pt_unlock(earliest_pt);
>> -        if ( irq < 0 )
>> -            goto rescan;
>> +        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> If you didn't put this single function call in braces, the patch would
> become more clear, as it would then be exactly the "else if()"
> branch that got removed by it.

Right - This part of the patch was produced with reversion alone, but
dropping the braces would make it clearer.

>
> As a round-up: I'm not going to veto this, but I'm also not going to
> be putting my name under it, nor am I going to make another
> attempt to clean up the RTC emulation if this is to go in unchanged.
> I'm personally getting the impression that the root cause of the
> observed problem is still being left in place (and perhaps still not
> being fully understood), and hence this whole change goes in the
> wrong direction, _even_ if it makes the problem it is aiming at
> fixing indeed appear to go away.
>
> Jan

I am not sure I follow you.  The root cause of the infinite loop is
that Xen was erroneously setting REG_C.PF, because pt_update_irq()
was trying to pre-guess what the interrupt injection logic would do
(and getting it wrong).

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:14:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuRI-00020E-5y; Mon, 10 Feb 2014 17:14:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCuRG-000200-Iy
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:14:34 +0000
Received: from [85.158.139.211:8674] by server-8.bemta-5.messagelabs.com id
	E6/74-05298-9F809F25; Mon, 10 Feb 2014 17:14:33 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392052473!2973524!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30553 invoked from network); 10 Feb 2014 17:14:33 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:14:33 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so3097498eek.38
	for <xen-devel@lists.xensource.com>;
	Mon, 10 Feb 2014 09:14:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=kbPZ6GHZ0Ks/vIXbaX9OEa6Bir9d5ZrY50FJipuZQM4=;
	b=iwBAhWCkr+Ac1zio77F3STsNIDXKocPNIT3MDrL841XyHzuN6cfO0t6QL5QVl2n8pO
	8TfpRFCZMGKQVQ1EXmKhGNKLbuEm4iGsnnc+EGYKcgobOb4oE0g+3/9nU17EDYEFxUvu
	2A1CmQ6BZ1i9fDm1wSScms1jCEmXJU4ihf0KmeUUmKqBuQSl2EjydAE6B8+0KXSyeWaZ
	W8ErxpuNJX6p3/V33ydceuNYzyhhjodyeKeuzagN5GW9E6LotI/OMZP14w3iNxBy1cCY
	BXaSl9AkVqx3EQ5GpaF7UoNZwWeet3WG16gfne5HyIl7VtvFtDtOStwlLCmK0YZY8WcW
	3GFQ==
X-Gm-Message-State: ALoCoQlCffwlBqkr4vERLgTCw5srn5MQt6ekC+XkafjRTaal70KcoKf1526qfjmyDj6DlCnsAL5X
X-Received: by 10.15.93.203 with SMTP id w51mr38228419eez.33.1392052472976;
	Mon, 10 Feb 2014 09:14:32 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	y47sm56794463eel.14.2014.02.10.09.14.31 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 09:14:32 -0800 (PST)
Message-ID: <52F908F7.30405@linaro.org>
Date: Mon, 10 Feb 2014 17:14:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F56EA9.9020903@linaro.org>
	<alpine.DEB.2.02.1402101657030.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402101657030.4373@kaball.uk.xensource.com>
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 4/4] xen/arm: set GICH_HCR_NPIE if all
 the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 04:59 PM, Stefano Stabellini wrote:
> On Fri, 7 Feb 2014, Julien Grall wrote:
>> On 07/02/14 18:56, Stefano Stabellini wrote:
>>> On return to guest, if there are no free LRs and we still have more
>>> interrupts to inject, set GICH_HCR_NPIE so that we are going to receive a
>>> maintenance interrupt when no pending interrupts are present in the LR
>>> registers.
>>> The maintenance interrupt handler won't do anything anymore, but
>>> receiving the interrupt is going to cause gic_inject to be called on
>>> return to guest that is going to clear the old LRs and inject new
>>> interrupts.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>>> ---
>>>   xen/arch/arm/gic.c |    8 +++++++-
>>>   1 file changed, 7 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>>> index 87bd5d3..bee2618 100644
>>> --- a/xen/arch/arm/gic.c
>>> +++ b/xen/arch/arm/gic.c
>>> @@ -810,8 +810,14 @@ void gic_inject(void)
>>>       gic_restore_pending_irqs(current);
>>>       if (!gic_events_need_delivery())
>>>           gic_inject_irq_stop();
>>> -    else
>>> +    else {
>>>           gic_inject_irq_start();
>>> +    }
>>> +
>>> +    if ( !list_empty(&current->arch.vgic.lr_pending) )
>>> +        GICH[GICH_HCR] = GICH_HCR_EN | GICH_HCR_NPIE;
>>> +    else
>>> +        GICH[GICH_HCR] = GICH_HCR_EN;
>>
>> Any reason not to move this into the else?
> 
> Yes: we need to be able to disable GICH_HCR_NPIE even if there are no
> irqs to inject.

In this case, can we simply enable/disable GICH_HCR_NPIE instead of
resetting the register every time? GICH_HCR contains other bits that
could be set in the future.

>> BTW, I think we can safely avoid reinjecting the IRQ for the event channels if
>> there are pending events. It will also avoid spurious IRQs in some specific
>> cases :).
> 
> Currently we don't set GIC_IRQ_GUEST_PENDING for evtchn_irq if the irq
> is already inflight. I'll see if I can come up with a patch to
> streamline that behaviour.

Ok.


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 10 17:16:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:16:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuTI-0002C5-Ov; Mon, 10 Feb 2014 17:16:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCuTG-0002Bu-OJ
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:16:38 +0000
Received: from [193.109.254.147:8710] by server-13.bemta-14.messagelabs.com id
	04/6B-01226-67909F25; Mon, 10 Feb 2014 17:16:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392052596!3332420!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2578 invoked from network); 10 Feb 2014 17:16:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:16:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99540955"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:16:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:16:35 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCuTD-00043Q-En;
	Mon, 10 Feb 2014 17:16:35 +0000
Date: Mon, 10 Feb 2014 17:16:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392052197.26657.26.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402101715150.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
	<alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
	<1392052197.26657.26.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Feb 2014, Ian Campbell wrote:
> On Mon, 2014-02-10 at 17:06 +0000, Stefano Stabellini wrote:
> > On Fri, 7 Feb 2014, Julien Grall wrote:
> > > On 07/02/14 18:56, Stefano Stabellini wrote:
> > >    > +static void gic_clear_lrs(struct vcpu *v)
> > > > +{
> > > > +    struct pending_irq *p;
> > > > +    int i = 0, irq;
> > > > +    uint32_t lr;
> > > > +    bool_t inflight;
> > > > +
> > > > +    ASSERT(!local_irq_is_enabled());
> > > > +
> > > > +    while ((i = find_next_bit((const long unsigned int *)
> > > > &this_cpu(lr_mask),
> > > > +                              nr_lrs, i)) < nr_lrs) {
> > > 
>>> Did you look at the ELRSR{0,1} registers, which list the usable LRs? I think
>>> you can use them with this_cpu(lr_mask) to avoid scanning every LR.
> > 
> > Given that we only have 4 LR registers, I think that unconditionally
> > reading 2 ELRSR registers would cost more than simply checking lr_mask
> > on average.
> 
> You also need to actually read the LR and do some bit masking etc don't
> you?

No bit masking, but I need to read the LRs, which are just 4.

> What about implementations with >4 LRs?

I could read ELRSR only if nr_lrs > 8 or something like that.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 10 17:17:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:17:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuTh-0002F9-5l; Mon, 10 Feb 2014 17:17:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCuTg-0002Ev-Be
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:17:04 +0000
Received: from [85.158.139.211:46499] by server-11.bemta-5.messagelabs.com id
	5E/DB-23886-F8909F25; Mon, 10 Feb 2014 17:17:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392052621!2952491!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17078 invoked from network); 10 Feb 2014 17:17:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:17:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99541108"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:17:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:17:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCuTc-00043a-O5;
	Mon, 10 Feb 2014 17:17:00 +0000
Date: Mon, 10 Feb 2014 17:16:44 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <52F908F7.30405@linaro.org>
Message-ID: <alpine.DEB.2.02.1402101716360.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F56EA9.9020903@linaro.org>
	<alpine.DEB.2.02.1402101657030.4373@kaball.uk.xensource.com>
	<52F908F7.30405@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 4/4] xen/arm: set GICH_HCR_NPIE if all
 the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Feb 2014, Julien Grall wrote:
> On 02/10/2014 04:59 PM, Stefano Stabellini wrote:
> > On Fri, 7 Feb 2014, Julien Grall wrote:
> >> On 07/02/14 18:56, Stefano Stabellini wrote:
> >>> On return to guest, if there are no free LRs and we still have more
> >>> interrupt to inject, set GICH_HCR_NPIE so that we are going to receive a
> >>> maintenance interrupt when no pending interrupts are present in the LR
> >>> registers.
> >>> The maintenance interrupt handler won't do anything anymore, but
> >>> receiving the interrupt is going to cause gic_inject to be called on
> >>> return to guest that is going to clear the old LRs and inject new
> >>> interrupts.
> >>>
> >>> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> >>> ---
> >>>   xen/arch/arm/gic.c |    8 +++++++-
> >>>   1 file changed, 7 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> >>> index 87bd5d3..bee2618 100644
> >>> --- a/xen/arch/arm/gic.c
> >>> +++ b/xen/arch/arm/gic.c
> >>> @@ -810,8 +810,14 @@ void gic_inject(void)
> >>>       gic_restore_pending_irqs(current);
> >>>       if (!gic_events_need_delivery())
> >>>           gic_inject_irq_stop();
> >>> -    else
> >>> +    else {
> >>>           gic_inject_irq_start();
> >>> +    }
> >>> +
> >>> +    if ( !list_empty(&current->arch.vgic.lr_pending) )
> >>> +        GICH[GICH_HCR] = GICH_HCR_EN | GICH_HCR_NPIE;
> >>> +    else
> >>> +        GICH[GICH_HCR] = GICH_HCR_EN;
> >>
> >> Any reason to not move this in the else?
> > 
> > Yes: we need to be able to disable GICH_HCR_NPIE even if there are no
> > irqs to inject.
> 
> In this case, can we simply enable/disable GICH_HCR_NPIE instead of
> resetting the register every time? GICH_HCR contains other bits that
> could be set in the future.

Sure

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:18:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuVA-0002QT-N8; Mon, 10 Feb 2014 17:18:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCuV9-0002QC-Os
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:18:35 +0000
Received: from [85.158.137.68:38182] by server-15.bemta-3.messagelabs.com id
	FB/2B-19263-BE909F25; Mon, 10 Feb 2014 17:18:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392052711!897609!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29437 invoked from network); 10 Feb 2014 17:18:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:18:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99541414"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:18:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:18:20 -0500
Message-ID: <1392052699.26657.29.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Feb 2014 17:18:19 +0000
In-Reply-To: <alpine.DEB.2.02.1402101715150.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
	<alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
	<1392052197.26657.26.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402101715150.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@citrix.com, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:16 +0000, Stefano Stabellini wrote:
> On Mon, 10 Feb 2014, Ian Campbell wrote:
> > On Mon, 2014-02-10 at 17:06 +0000, Stefano Stabellini wrote:
> > > On Fri, 7 Feb 2014, Julien Grall wrote:
> > > > On 07/02/14 18:56, Stefano Stabellini wrote:
> > > >    > +static void gic_clear_lrs(struct vcpu *v)
> > > > > +{
> > > > > +    struct pending_irq *p;
> > > > > +    int i = 0, irq;
> > > > > +    uint32_t lr;
> > > > > +    bool_t inflight;
> > > > > +
> > > > > +    ASSERT(!local_irq_is_enabled());
> > > > > +
> > > > > +    while ((i = find_next_bit((const long unsigned int *)
> > > > > &this_cpu(lr_mask),
> > > > > +                              nr_lrs, i)) < nr_lrs) {
> > > > 
> > > > Did you look at to ELRSR{0,1} registers which list the usable LRs? I think you
> > > > can use it with the this_cpu(lr_mask) to avoid browsing every LRs.
> > > 
> > > Given that we only have 4 LR registers, I think that unconditionally
> > > reading 2 ELRSR registers would cost more than simply checking lr_mask
> > > on average.
> > 
> > You also need to actually read the LR and do some bit masking etc don't
> > you?
> 
> No bit masking but I need to read the LRs, that are just 4.

Having read them then what do you do with the result? Surely you need to
examine some or all of the bits to determine if it is free or not?

> > What about implementations with >4 LRs?
> 
> I could read ELRSR only if nr_lrs > 8 or something like that.

That would be the worst of both worlds IMHO.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:21:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:21:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuXQ-0002j6-BV; Mon, 10 Feb 2014 17:20:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WCuXO-0002hQ-Ht
	for Xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:20:54 +0000
Received: from [85.158.143.35:30972] by server-2.bemta-4.messagelabs.com id
	1A/2F-10891-57A09F25; Mon, 10 Feb 2014 17:20:53 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392052851!4592533!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25722 invoked from network); 10 Feb 2014 17:20:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:20:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99542171"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:20:51 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:20:50 -0500
Message-ID: <52F90A71.40802@citrix.com>
Date: Mon, 10 Feb 2014 17:20:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is a draft of a proposal for a new domain save image format.  It
does not currently cover all use cases (e.g., images for HVM guest are
not considered).

http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf

Introduction
============

Revision History
----------------

--------------------------------------------------------------------
Version  Date         Changes
-------  -----------  ----------------------------------------------
Draft A  6 Feb 2014   Initial draft.

Draft B  10 Feb 2014  Corrected image header field widths.

                      Minor updates and clarifications.
--------------------------------------------------------------------

Purpose
-------

The _domain save image_ is the context of a running domain used for
snapshots of a domain or for transferring domains between hosts during
migration.

There are a number of problems with the format of the domain save
image used in Xen 4.4 and earlier (the _legacy format_).

* Dependent on toolstack word size.  A number of fields within the
  image are native types such as `unsigned long` which have different
  sizes between 32-bit and 64-bit hosts.  This prevents domains from
  being migrated between 32-bit and 64-bit hosts.

* There is no header identifying the image.

* The image has no version information.

A new format that addresses the above is required.

ARM does not yet have a domain save image format specified, and the
format described in this specification should be suitable.


Overview
========

The image format consists of two main sections:

* _Headers_
* _Records_

Headers
-------

There are two headers: the _image header_, and the _domain header_.
The image header describes the format of the image (version etc.).
The _domain header_ contains general information about the domain
(architecture, type etc.).

Records
-------

The main part of the format is a sequence of different _records_.
Each record type contains information about the domain context.  At a
minimum there is an END record marking the end of the records section.


Fields
------

All the fields within the headers and records have a fixed width.

Fields are always aligned to their size.

Padding and reserved fields are set to zero on save and must be
ignored during restore.

Integer (numeric) fields in the image header are always in big-endian
byte order.

Integer fields in the domain header and in the records are in the
endianness described in the image header (which will typically be the
native ordering).

Headers
=======

Image Header
------------

The image header identifies an image as a Xen domain save image.  It
includes the version of this specification that the image complies
with.

Tools supporting version _V_ of the specification shall always save
images using version _V_.  Tools shall support restoring from version
_V_ and version _V_ - 1.  Tools may additionally support restoring
from earlier versions.

The marker field can be used to distinguish between legacy images and
those corresponding to this specification.  Legacy images will have
one or more zero bits within the first 8 octets of the image.

Fields within the image header are always in _big-endian_ byte order,
regardless of the setting of the endianness bit.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+
    | marker                                          |
    +-----------------------+-------------------------+
    | id                    | version                 |
    +-----------+-----------+-------------------------+
    | options   |                                     |
    +-----------+-------------------------------------+


--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
marker      0xFFFFFFFFFFFFFFFF.

id          0x58454E46 ("XENF" in ASCII).

version     0x00000001.  The version of this specification.

options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.

            bit 1-15: Reserved.
--------------------------------------------------------------------
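As an illustrative sketch only (not part of the specification), the
image header above can be serialized and validated as follows; the
constants follow the field table, while the helper names and the use of
Python are assumptions for illustration:

```python
import struct

IH_MARKER = 0xFFFFFFFFFFFFFFFF
IH_ID = 0x58454E46             # "XENF" in ASCII
IH_VERSION = 0x00000001
IH_OPT_BIG_ENDIAN = 1 << 0     # options bit 0: endianness

def pack_image_header(big_endian=False):
    # All image header fields are big-endian regardless of the
    # endianness bit; 6 reserved octets pad the header to 24 octets.
    options = IH_OPT_BIG_ENDIAN if big_endian else 0
    return struct.pack(">QIIH6x", IH_MARKER, IH_ID, IH_VERSION, options)

def parse_image_header(buf):
    marker, ident, version, options = struct.unpack(">QIIH", buf[:18])
    if marker != IH_MARKER:
        # Legacy images have one or more zero bits in the first 8 octets.
        raise ValueError("not a save image in this format (legacy?)")
    if ident != IH_ID or version != IH_VERSION:
        raise ValueError("unsupported image id or version")
    # Return the endianness used by the domain header and records.
    return bool(options & IH_OPT_BIG_ENDIAN)
```

Note how the all-ones marker cannot occur in the first 8 octets of a
legacy image, which is what makes the two formats distinguishable.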

Domain Header
-------------

The domain header includes general properties of the domain.

     0      1     2     3     4     5     6     7 octet
    +-----------+-----------+-----------+-------------+
    | arch      | type      | page_shift| (reserved)  |
    +-----------+-----------+-----------+-------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
arch        0x0000: Reserved.

            0x0001: x86.

            0x0002: ARM.

type        0x0000: Reserved.

            0x0001: x86 PV.

            0x0002 - 0xFFFF: Reserved.

page_shift  Size of a guest page as a power of two.

            i.e., page size = 2^page_shift^.
--------------------------------------------------------------------
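A minimal parsing sketch for the domain header, assuming the endianness
flag has already been read from the image header (helper name and
language are illustrative, not part of the specification):

```python
import struct

ARCH_X86, ARCH_ARM = 0x0001, 0x0002

def parse_domain_header(buf, big_endian):
    # arch and type are 16-bit, page_shift is 8-bit; 3 reserved octets
    # pad the header to 8 octets.  Byte order comes from the image header.
    fmt = (">" if big_endian else "<") + "HHB3x"
    arch, dom_type, page_shift = struct.unpack(fmt, buf[:8])
    return arch, dom_type, 1 << page_shift  # page size = 2^page_shift
```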


Records
=======

A record has a record header, type specific data and a trailing
footer.  If body_length is not a multiple of 8, the body is padded
with zeroes to align the checksum field on an 8 octet boundary.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | type                  | body_length             |
    +-----------+-----------+-------------------------+
    | options   | (reserved)                          |
    +-----------+-------------------------------------+
    ...
    Record body of length body_length octets followed by
    0 to 7 octets of padding.
    ...
    +-----------------------+-------------------------+
    | checksum              | (reserved)              |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field        Description
-----------  -------------------------------------------------------
type         0x00000000: END

             0x00000001: PAGE_DATA

             0x00000002: VCPU_INFO

             0x00000003: VCPU_CONTEXT

             0x00000004: X86_PV_INFO

             0x00000005: P2M

             0x00000006 - 0xFFFFFFFF: Reserved

body_length  Length in octets of the record body.

options      Bit 0: 0 = checksum invalid, 1 = checksum valid.

             Bit 1-15: Reserved.

checksum     CRC-32 checksum of the record body (including any trailing
             padding), or 0x00000000 if the checksum field is invalid.
--------------------------------------------------------------------
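The framing rules above (pad the body to an 8-octet boundary, checksum
the padded body with CRC-32) can be sketched as follows.  This is an
illustration, not normative: little-endian records are assumed, and the
helper names are invented here:

```python
import struct
import zlib

OPT_CHECKSUM_VALID = 1 << 0   # options bit 0

def pack_record(rtype, body, with_checksum=True):
    # Pad the body with zeroes so the trailing checksum field lands on
    # an 8-octet boundary.
    pad = (-len(body)) % 8
    padded = body + b"\0" * pad
    options = OPT_CHECKSUM_VALID if with_checksum else 0
    # The checksum covers the body including the trailing padding;
    # it is zero when the checksum-valid bit is clear.
    csum = zlib.crc32(padded) if with_checksum else 0
    # Record header: type (4), body_length (4), options (2), reserved (6).
    header = struct.pack("<IIH6x", rtype, len(body), options)
    # Footer: checksum (4), reserved (4).
    footer = struct.pack("<II", csum, 0)
    return header + padded + footer
```

With this framing an END record (type 0, empty body) is exactly 24
octets: a 16-octet header and an 8-octet footer with no body.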

The following sub-sections specify the record body format for each of
the record types.

END
----

An END record marks the end of the image.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+

The end record contains no fields; its body_length is 0.

PAGE_DATA
---------

The bulk of an image consists of many PAGE_DATA records containing the
memory contents.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | count (C)             | (reserved)              |
    +-----------------------+-------------------------+
    | pfn[0]                                          |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | pfn[C-1]                                        |
    +-------------------------------------------------+
    | page_data[0]...                                 |
    ...
    +-------------------------------------------------+
    | page_data[N-1]...                               |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
count       Number of pages described in this record.

pfn         An array of count PFNs. Bits 63-60 contain
            the XEN\_DOMCTL\_PFINFO_* value for that PFN.

page_data   page_size octets of uncompressed page contents for each page
            set as present in the pfn array.
--------------------------------------------------------------------

VCPU_INFO
---------

> [ This is a combination of parts of the extended-info and
> XC_SAVE_ID_VCPU_INFO chunks. ]

The VCPU_INFO record includes the maximum possible VCPU ID.  This will
be followed a VCPU_CONTEXT record for each online VCPU.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+------------------------+
    | max_vcpu_id           | (reserved)             |
    +-----------------------+------------------------+

--------------------------------------------------------------------
Field        Description
-----------  ---------------------------------------------------
max_vcpu_id  Maximum possible VCPU ID.
--------------------------------------------------------------------


VCPU_CONTEXT
------------

The context for a single VCPU.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

[ vcpu_ctx format TBD. ]


X86_PV_INFO
-----------

> [ This record replaces part of the extended-info chunk. ]

     0     1     2     3     4     5     6     7 octet
    +-----+-----+-----+-------------------------------+
    | w   | ptl | o   | (reserved)                    |
    +-----+-----+-----+-------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
guest_width (w)  Guest width in octets (either 4 or 8).

pt_levels (ptl)  Number of page table levels (either 3 or 4).

options (o)      Bit 0: 0 - no VMASST_pae_extended_cr3,
                 1 - VMASST_pae_extended_cr3.

                 Bit 1-7: Reserved.
--------------------------------------------------------------------


P2M
---

[ This is a more flexible replacement for the old p2m_size field and
p2m array. ]

The P2M record contains a portion of the source domain's P2M.
Multiple P2M records may be sent if the source P2M changes during the
stream.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+
    | pfn_begin                                       |
    +-------------------------------------------------+
    | pfn_end                                         |
    +-------------------------------------------------+
    | mfn[0]                                          |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | mfn[N-1]                                        |
    +-------------------------------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
pfn_begin   The first PFN in this portion of the P2M

pfn_end     One past the last PFN in this portion of the P2M.

mfn         Array of (pfn_end - pfn-begin) MFNs corresponding to
            the set of PFNs in the range [pfn_begin, pfn_end).
--------------------------------------------------------------------


Layout
======

The set of valid records depends on the guest architecture and type.

x86 PV Guest
------------

An x86 PV guest image will have in this order:

1. Image header
2. Domain header
3. X86_PV_INFO record
4. At least one P2M record
5. At least one PAGE_DATA record
6. VCPU_INFO record
6. At least one VCPU_CONTEXT record
7. END record


Legacy Images (x86 only)
========================

Restoring legacy images from older tools shall be handled by
translating the legacy format image into this new format.

It shall not be possible to save in the legacy format.

There are two different legacy images depending on whether they were
generated by a 32-bit or a 64-bit toolstack. These shall be
distinguished by inspecting octets 4-7 in the image.  If these are
zero then it is a 64-bit image.

Toolstack  Field                            Value
---------  -----                            -----
64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
32-bit     Chunk type (HVM)                 < 0
32-bit     Page count (HVM)                 > 0

Table: Possible values for octet 4-7 in legacy images

This assumes the presence of the extended-info chunk which was
introduced in Xen 3.0.


Future Extensions
=================

All changes to this format require the image version to be increased.

The format may be extended by adding additional record types.

Extending an existing record type must be done by adding a new record
type.  This allows old images with the old record to still be
restored.

The image header may be extended by _appending_ additional fields.  In
particular, the `marker`, `id` and `version` fields must never change
size or location.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:21:03 2014
Message-ID: <52F90A71.40802@citrix.com>
Date: Mon, 10 Feb 2014 17:20:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Domain Save Image Format proposal (draft B)

Here is a draft of a proposal for a new domain save image format.  It
does not currently cover all use cases (e.g., images for HVM guests
are not considered).

http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf

Introduction
============

Revision History
----------------

--------------------------------------------------------------------
Version  Date         Changes
-------  -----------  ----------------------------------------------
Draft A  6 Feb 2014   Initial draft.

Draft B  10 Feb 2014  Corrected image header field widths.

                      Minor updates and clarifications.
--------------------------------------------------------------------

Purpose
-------

The _domain save image_ is the context of a running domain used for
snapshots of a domain or for transferring domains between hosts during
migration.

There are a number of problems with the format of the domain save
image used in Xen 4.4 and earlier (the _legacy format_).

* Dependent on toolstack word size.  A number of fields within the
  image are native types such as `unsigned long` which have different
  sizes between 32-bit and 64-bit hosts.  This prevents domains from
  being migrated between 32-bit and 64-bit hosts.

* There is no header identifying the image.

* The image has no version information.

A new format that addresses the above is required.

ARM does not yet have a domain save image format specified, and the
format described in this specification should be suitable.


Overview
========

The image format consists of two main sections:

* _Headers_
* _Records_

Headers
-------

There are two headers: the _image header_, and the _domain header_.
The image header describes the format of the image (version etc.).
The _domain header_ contains general information about the domain
(architecture, type etc.).

Records
-------

The main part of the format is a sequence of different _records_.
Each record type contains information about the domain context.  At a
minimum there is an END record marking the end of the records section.


Fields
------

All the fields within the headers and records have a fixed width.

Fields are always aligned to their size.

Padding and reserved fields are set to zero on save and must be
ignored during restore.

Integer (numeric) fields in the image header are always in big-endian
byte order.

Integer fields in the domain header and in the records are in the
endianness described in the image header (which will typically be the
native ordering).

Headers
=======

Image Header
------------

The image header identifies an image as a Xen domain save image.  It
includes the version of this specification that the image complies
with.

Tools supporting version _V_ of the specification shall always save
images using version _V_.  Tools shall support restoring from version
_V_ and version _V_ - 1.  Tools may additionally support restoring
from earlier versions.
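
The version policy above can be sketched in a few lines (a sketch only;
the helper name is mine, not part of any toolstack):

```python
SPEC_VERSION = 1  # the version a hypothetical tool saves with

def can_restore(image_version, tool_version=SPEC_VERSION):
    # Restoring must work for versions V and V-1; older versions are optional.
    return image_version in (tool_version, tool_version - 1)
```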

The marker field can be used to distinguish between legacy images and
those corresponding to this specification.  Legacy images will have
one or more zero bits within the first 8 octets of the image.

Fields within the image header are always in _big-endian_ byte order,
regardless of the setting of the endianness bit.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+
    | marker                                          |
    +-----------------------+-------------------------+
    | id                    | version                 |
    +-----------+-----------+-------------------------+
    | options   |                                     |
    +-----------+-------------------------------------+


--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
marker      0xFFFFFFFFFFFFFFFF.

id          0x58454E46 ("XENF" in ASCII).

version     0x00000001.  The version of this specification.

options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.

            bit 1-15: Reserved.
--------------------------------------------------------------------
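
As an illustration, the image header can be built and checked with
Python's `struct` module.  This is a sketch: the helper names are mine,
and the 24-octet total simply follows from the three 8-octet rows in
the diagram above.

```python
import struct

MARKER = 0xFFFFFFFFFFFFFFFF
ID = 0x58454E46          # "XENF" in ASCII
VERSION = 0x00000001

def build_image_header(big_endian=False):
    # Image header fields are always big-endian, whatever the options bit says.
    options = 1 if big_endian else 0
    return struct.pack(">QIIH6x", MARKER, ID, VERSION, options)

def parse_image_header(data):
    marker, ident, version, options = struct.unpack_from(">QIIH", data)
    if marker != MARKER or ident != ID:
        raise ValueError("not a Xen domain save image")
    return version, options & 1  # (version, endianness bit)
```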

Domain Header
-------------

The domain header includes general properties of the domain.

     0      1     2     3     4     5     6     7 octet
    +-----------+-----------+-----------+-------------+
    | arch      | type      | page_shift| (reserved)  |
    +-----------+-----------+-----------+-------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
arch        0x0000: Reserved.

            0x0001: x86.

            0x0002: ARM.

type        0x0000: Reserved.

            0x0001: x86 PV.

            0x0002 - 0xFFFF: Reserved.

page_shift  Size of a guest page as a power of two.

            i.e., page size = 2^page_shift^.
--------------------------------------------------------------------
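
A corresponding sketch for the domain header (again with invented
helper names); note that the page size is derived from `page_shift`,
not stored directly:

```python
import struct

ARCH_X86, ARCH_ARM = 0x0001, 0x0002
TYPE_X86_PV = 0x0001

def parse_domain_header(data, big_endian=False):
    end = ">" if big_endian else "<"
    arch, dom_type, page_shift = struct.unpack_from(end + "HHB", data)
    page_size = 1 << page_shift      # page size = 2^page_shift octets
    return arch, dom_type, page_size
```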


Records
=======

A record has a record header, type-specific data, and a trailing
footer.  If body_length is not a multiple of 8, the body is padded
with zeroes to align the checksum field on an 8 octet boundary.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | type                  | body_length             |
    +-----------+-----------+-------------------------+
    | options   | (reserved)                          |
    +-----------+-------------------------------------+
    ...
    Record body of length body_length octets followed by
    0 to 7 octets of padding.
    ...
    +-----------------------+-------------------------+
    | checksum              | (reserved)              |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field        Description
-----------  -------------------------------------------------------
type         0x00000000: END

             0x00000001: PAGE_DATA

             0x00000002: VCPU_INFO

             0x00000003: VCPU_CONTEXT

             0x00000004: X86_PV_INFO

             0x00000005: P2M

             0x00000006 - 0xFFFFFFFF: Reserved

body_length  Length in octets of the record body.

options      Bit 0: 0 = checksum invalid, 1 = checksum valid.

             Bit 1-15: Reserved.

checksum     CRC-32 checksum of the record body (including any trailing
             padding), or 0x00000000 if the checksum field is invalid.
--------------------------------------------------------------------
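
The padding and checksum rules can be made concrete with a small
writer sketch.  Two assumptions here: the helper name is mine, and I
take "CRC-32" to mean the common IEEE CRC-32 (what zlib computes); the
spec does not pin down the variant.

```python
import struct
import zlib

def write_record(rec_type, body, with_checksum=True, big_endian=False):
    end = ">" if big_endian else "<"
    pad = (-len(body)) % 8                  # 0-7 octets, aligns the footer
    padded = body + b"\x00" * pad
    options = 1 if with_checksum else 0
    # The checksum covers the body *including* the trailing padding.
    csum = zlib.crc32(padded) if with_checksum else 0
    header = struct.pack(end + "IIH6x", rec_type, len(body), options)
    footer = struct.pack(end + "I4x", csum)
    return header + padded + footer
```

Note that body_length records the unpadded length; a reader recomputes
the padding the same way.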

The following sub-sections specify the record body format for each of
the record types.

END
----

An END record marks the end of the image.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+

The end record contains no fields; its body_length is 0.

PAGE_DATA
---------

The bulk of an image consists of many PAGE_DATA records containing the
memory contents.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | count (C)             | (reserved)              |
    +-----------------------+-------------------------+
    | pfn[0]                                          |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | pfn[C-1]                                        |
    +-------------------------------------------------+
    | page_data[0]...                                 |
    ...
    +-------------------------------------------------+
    | page_data[N-1]...                               |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
count       Number of pages described in this record.

pfn         An array of count PFNs. Bits 63-60 of each entry contain
            the XEN\_DOMCTL\_PFINFO\_* value for that PFN.

page_data   page_size octets of uncompressed page contents for each
            page marked as present in the pfn array.
--------------------------------------------------------------------
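
The split of each pfn entry into PFINFO type and frame number can be
sketched as follows (the constant names are mine):

```python
PFINFO_SHIFT = 60
PFN_MASK = (1 << PFINFO_SHIFT) - 1

def split_pfn_entry(entry):
    # Bits 63-60 carry the XEN_DOMCTL_PFINFO_* type; bits 59-0 the PFN.
    return entry >> PFINFO_SHIFT, entry & PFN_MASK
```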

VCPU_INFO
---------

> [ This is a combination of parts of the extended-info and
> XC_SAVE_ID_VCPU_INFO chunks. ]

The VCPU_INFO record includes the maximum possible VCPU ID.  This will
be followed by a VCPU_CONTEXT record for each online VCPU.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+------------------------+
    | max_vcpu_id           | (reserved)             |
    +-----------------------+------------------------+

--------------------------------------------------------------------
Field        Description
-----------  ---------------------------------------------------
max_vcpu_id  Maximum possible VCPU ID.
--------------------------------------------------------------------


VCPU_CONTEXT
------------

The context for a single VCPU.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

[ vcpu_ctx format TBD. ]


X86_PV_INFO
-----------

> [ This record replaces part of the extended-info chunk. ]

     0     1     2     3     4     5     6     7 octet
    +-----+-----+-----+-------------------------------+
    | w   | ptl | o   | (reserved)                    |
    +-----+-----+-----+-------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
guest_width (w)  Guest width in octets (either 4 or 8).

pt_levels (ptl)  Number of page table levels (either 3 or 4).

options (o)      Bit 0: 0 = no VMASST_pae_extended_cr3,
                 1 = VMASST_pae_extended_cr3.

                 Bit 1-7: Reserved.
--------------------------------------------------------------------


P2M
---

> [ This is a more flexible replacement for the old p2m_size field and
> p2m array. ]

The P2M record contains a portion of the source domain's P2M.
Multiple P2M records may be sent if the source P2M changes during the
stream.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+
    | pfn_begin                                       |
    +-------------------------------------------------+
    | pfn_end                                         |
    +-------------------------------------------------+
    | mfn[0]                                          |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | mfn[N-1]                                        |
    +-------------------------------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
pfn_begin   The first PFN in this portion of the P2M.

pfn_end     One past the last PFN in this portion of the P2M.

mfn         Array of (pfn_end - pfn_begin) MFNs corresponding to
            the set of PFNs in the range [pfn_begin, pfn_end).
--------------------------------------------------------------------
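
The body length of a P2M record follows directly from the field layout
above (a sketch; the function name is mine):

```python
def p2m_body_length(pfn_begin, pfn_end):
    # 8 octets each for pfn_begin and pfn_end, then one 8-octet MFN
    # per PFN in the half-open range [pfn_begin, pfn_end).
    return 16 + 8 * (pfn_end - pfn_begin)
```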


Layout
======

The set of valid records depends on the guest architecture and type.

x86 PV Guest
------------

An x86 PV guest image will have in this order:

1. Image header
2. Domain header
3. X86_PV_INFO record
4. At least one P2M record
5. At least one PAGE_DATA record
6. VCPU_INFO record
7. At least one VCPU_CONTEXT record
8. END record


Legacy Images (x86 only)
========================

Restoring legacy images from older tools shall be handled by
translating the legacy format image into this new format.

It shall not be possible to save in the legacy format.

There are two different legacy images depending on whether they were
generated by a 32-bit or a 64-bit toolstack. These shall be
distinguished by inspecting octets 4-7 in the image.  If these are
zero then it is a 64-bit image.

Toolstack  Field                            Value
---------  -----                            -----
64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
32-bit     Chunk type (HVM)                 < 0
32-bit     Page count (HVM)                 > 0

Table: Possible values for octet 4-7 in legacy images

This assumes the presence of the extended-info chunk which was
introduced in Xen 3.0.
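
The detection logic described above amounts to the following (a sketch
with invented names; it only classifies the first 8 octets):

```python
def classify_image_start(first8):
    # New-format images begin with eight 0xFF marker octets; any legacy
    # image has at least one zero bit there.
    if first8 == b"\xff" * 8:
        return "new"
    # Octets 4-7 all zero => 64-bit legacy toolstack, otherwise 32-bit.
    return "legacy-64" if first8[4:8] == b"\x00" * 4 else "legacy-32"
```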


Future Extensions
=================

All changes to this format require the image version to be increased.

The format may be extended by adding additional record types.

Extending an existing record type must be done by adding a new record
type.  This allows old images with the old record to still be
restored.

The image header may be extended by _appending_ additional fields.  In
particular, the `marker`, `id` and `version` fields must never change
size or location.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:21:58 2014
Message-ID: <52F90AAE.90306@linaro.org>
Date: Mon, 10 Feb 2014 17:21:50 +0000
From: Julien Grall <julien.grall@linaro.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <alpine.DEB.2.02.1402101700000.4373@kaball.uk.xensource.com>
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
	maintenance_interrupts

On 02/10/2014 05:03 PM, Stefano Stabellini wrote:
>  
>>> +            inflight = 0;
>>> +            GICH[GICH_LR + i] = 0;
>>> +            clear_bit(i, &this_cpu(lr_mask));
>>> +
>>> +            spin_lock(&gic.lock);
>>> +            p = irq_to_pending(v, irq);
>>> +            if ( p->desc != NULL )
>>> +                p->desc->status &= ~IRQ_INPROGRESS;
>>> +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>>> +            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
>>> +                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
>>> +            {
>>
>> I would add a WARN_ON(p->desc != NULL) here. AFAIK, this code path shouldn't
>> be used for physical IRQ.
> 
> That's not true: an edge physical irq can come through while another one
> of the same type is being handled. In fact pending and active bits exist
> even on the physical GIC interface.
> 

It won't be fired until the previous one is EOIed. The physical GIC
interface will keep it internally.

But ... after thinking about it, the WARN is stupid here because we
can have the following case:

IRQ A fired
    -> inject IRQ A
IRQ A eoied

IRQ A fired
   -> set pending bits

clear lrs
   -> re-inject IRQ A

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 10 17:21:58 2014
Message-ID: <20140210172140.GA34003@deinos.phlegethon.org>
Date: Mon, 10 Feb 2014 18:21:40 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52F90D8C020000780011AD54@nat28.tlf.novell.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling

At 16:34 +0000 on 10 Feb (1392046444), Jan Beulich wrote:
> > The underlying problem here comes because the AF and UF bits of RTC 
> > interrupt
> > state is modelled by the RTC code, but the PF is modelled by the pt code.  
> > The
> > root cause of windows infinite loop is that RTC_PF is being re-set on 
> > vmentry
> > before the interrupt logic has worked out that it can't actually inject an 
> > RTC
> > interrupt, causing Windows to erroneously read (RTC_PF|RTC_IRQF) when it
> > should be reading 0.
> 
> So you're undoing a whole lot of changes done with the goal of
> getting the overall emulation closer to what real hardware does,
> just to paper over an issue elsewhere in the code? Not really an
> approach I'm in favor of.

My understanding was that the problem is explicitly in the change to
how RTC code is called from vpt code.

Originally, the RTC callback was called from pt_intr_post, like other
vpt sources.  Your rework changed it to be called much earlier, when
the vpt was considering which time source to choose.  AIUI that was to
let the RTC code tell the VPT not to inject, if the guest hasn't acked
the last interrupt, right?

Since that was changed later to allow a certain number of dead ticks
before deciding to stop the timer chain, the decision no longer has to
be made so early -- we can allow one more IRQ to go in and then
disable it. 

That is the main change of this cset:  we go back to driving
the interrupt from the vpt code and fixing up the RTC state after vpt
tells us it's injected an interrupt.

> > +    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
> > +        new_c |= RTC_IRQF;
> 
> Without going back to reading the spec, iirc RTC_IRQF is a sticky
> bit cleared _only_ by REG_C reads, i.e. you shouldn't clear it
> earlier in the function and then conditionally set it here.

All those bits are sticky, so if IRQF was set before it will be now;
but true that we could drop the mask at the top of the function.
(+ again in rtc_irq_event)
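
To make the stickiness concrete, here is a minimal standalone model of the
REG_B/REG_C flag logic (the function name and shape are mine, for
illustration -- not the patch's actual code):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative model only: PF/AF/UF accumulate in REG_C until a
 * guest read clears them, and IRQF is set whenever an enabled
 * source is pending.  Because the old value is OR-ed through,
 * an IRQF that was already set survives until the REG_C read. */
#define RTC_UF   0x10
#define RTC_AF   0x20
#define RTC_PF   0x40
#define RTC_IRQF 0x80

static uint8_t regc_update(uint8_t reg_b, uint8_t old_c, uint8_t new_events)
{
    uint8_t c = old_c | new_events;          /* source flags are sticky */

    if ( reg_b & c & (RTC_PF | RTC_AF | RTC_UF) )
        c |= RTC_IRQF;                       /* enabled source pending */

    return c;                                /* IRQF in old_c survives too */
}
```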

> >  
> > -    /* IRQ is raised if any source is both raised & enabled */
> > -    if ( !(s->hw.cmos_data[RTC_REG_B] &
> > -           s->hw.cmos_data[RTC_REG_C] &
> > -           (RTC_PF | RTC_AF | RTC_UF)) )
> > -        return;
> > +    s->hw.cmos_data[RTC_REG_B] = new_b;
> > +    s->hw.cmos_data[RTC_REG_C] = new_c;
> >  
> > -    s->hw.cmos_data[RTC_REG_C] |= RTC_IRQF;
> > -    if ( rtc_mode_is(s, no_ack) )
> > -        hvm_isa_irq_deassert(vrtc_domain(s), RTC_IRQ);
> > -    hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
> > +    if ( new_c & RTC_IRQF )
> > +        rtc_toggle_irq(s);
> 
> Which then implies that the condition here would also need to
> consider the old state of the flag.

Sure.

> > @@ -647,9 +674,6 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
> >      case RTC_REG_C:
> >          ret = s->hw.cmos_data[s->hw.cmos_index];
> >          s->hw.cmos_data[RTC_REG_C] = 0x00;
> > -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
> > -            hvm_isa_irq_deassert(d, RTC_IRQ);
> 
> Why? With RTC_IRQF going from 1 to 0, the interrupt line should
> get de-asserted.

After this patch the RTC model only sends edges (as mentioned in the
description, maybe not clearly enough).  I think that might be
caused by some confusion on my part.  I'll go into it a bit more below.

> > -        rtc_update_irq(s);
> 
> So given the problem description, this would seem to be the most
> important part at a first glance. But looking more closely, I'm getting
> the impression that the call to rtc_update_irq() had no effect at all
> here anyway: The function would always bail on the second if() due
> to REG_C having got cleared a few lines up.

Yeah, this has nothing to do with the bug being fixed here.  The old
REG_C read was operating correctly, but on the return-to-guest path:
 - vpt sees another RTC interrupt is due and calls RTC code
 - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
 - vlapic code sees the last interrupt is still in the ISR and does
   nothing;
 - we return to the guest having set IRQF but not consumed a timer
   event, so vpt state is the same
 - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
   waiting for a read of 0.
 - repeat forever.
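
The loop above can be reproduced with a toy model (illustrative C, not the
guest's or Xen's actual code): because the pending vpt tick is never
consumed, every simulated return-to-guest re-sets PF|IRQF, and the guest's
poll for REG_C == 0 never terminates.

```c
#include <assert.h>
#include <stdint.h>

/* Toy reproduction of the livelock: the guest reads REG_C (which
 * clears it) and waits for 0, but the emulator re-sets PF|IRQF on
 * every vmentry because the timer event was never consumed. */
#define RTC_PF   0x40
#define RTC_IRQF 0x80

static int polls_until_zero(int max_polls)
{
    uint8_t reg_c = RTC_PF | RTC_IRQF;

    for ( int i = 1; i <= max_polls; i++ )
    {
        uint8_t ret = reg_c;
        reg_c = 0;                      /* the read clears REG_C ... */
        if ( ret == 0 )
            return i;                   /* guest would stop polling here */
        reg_c = RTC_PF | RTC_IRQF;      /* ... but vmentry re-sets it */
    }
    return -1;                          /* guest spins forever */
}
```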

AFAICS this is caused by the disconnect where the RTC tries to
reinject but doesn't use up a tick.  A real RTC doesn't have that
problem, because it doesn't have all the scheduling artefacts that the
VPT code is intended to work around.

So IMO the right fix is to go back to letting the VPT code control IRQ
injection for the PF source, which is what this patch does.

(I don't consider it to be a revert of any particular csets, BTW --
there have been a lot of other improvements in the RTC as part of that
and later series that this leaves in place).

The switch to always sending edges is incidental, and I'd be happy to
get rid of it.  IIRC the reason we ended up with that is that when the
vpt code injects an interrupt, we need to set IRQF|PF in REG_C but
_not_ assert the line (which would cause a second interrupt).
Tracking that disconnect seemed tricky, esp. across save/restore. 

If you have any idea how we could sort that out, I'd be inclined to go
back to a level-triggered model (i.e. following the RTC spec) even in
no-ack mode.  

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:21:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuYO-0002wY-Dh; Mon, 10 Feb 2014 17:21:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCuYN-0002wI-7A
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:21:55 +0000
Received: from [85.158.143.35:56863] by server-2.bemta-4.messagelabs.com id
	34/80-10891-2BA09F25; Mon, 10 Feb 2014 17:21:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392052913!4592829!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2732 invoked from network); 10 Feb 2014 17:21:53 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:21:53 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so3063176eek.37
	for <xen-devel@lists.xensource.com>;
	Mon, 10 Feb 2014 09:21:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=OcABFrYnrdq6ljHw4eCtHeQxq3qBvg9aJHk6RdDoVIE=;
	b=Yja7EogAwj0ICoMMsn5MIVRdZRkIwwVRPMrZ/R2SDS0e23m84cNRNmFT24IgqmF+me
	3hnXr9M89CSBmrc/ngA12gJM74uDkgm+wBs7E91ZQstmuX3KnxelLCFKQ34DE050gZnM
	2itBKZdl0ED+85ZJY6hO9XLaz5TO3Jx+4mWMth5QyclFwhgtlgerodKcq3DHQNQhtA6Q
	ZFqNYhFAbHfP0S8miq7eWMiTykN592uqDm+SEkNb6M4vrUguv66ISsAdMPwWW0eQkgt9
	GI3Zk7/lzNohK9xXB4BXBDWrrXX5jJcc1TAUDuI8WGToblPG8e/9ZGPgI67QXDxhmjhQ
	lKmA==
X-Gm-Message-State: ALoCoQnBfYeNXRRtcH+4Yk3K0vG7Ml83r49dcV90K7vU5n4Un7sKF15XGdPh5u4Pg8d1nyEcr03u
X-Received: by 10.14.94.135 with SMTP id n7mr38327342eef.40.1392052913324;
	Mon, 10 Feb 2014 09:21:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id q44sm21721597eez.1.2014.02.10.09.21.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 09:21:52 -0800 (PST)
Message-ID: <52F90AAE.90306@linaro.org>
Date: Mon, 10 Feb 2014 17:21:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F56208.3090504@linaro.org>
	<alpine.DEB.2.02.1402101700000.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402101700000.4373@kaball.uk.xensource.com>
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 05:03 PM, Stefano Stabellini wrote:
>  
>>> +            inflight = 0;
>>> +            GICH[GICH_LR + i] = 0;
>>> +            clear_bit(i, &this_cpu(lr_mask));
>>> +
>>> +            spin_lock(&gic.lock);
>>> +            p = irq_to_pending(v, irq);
>>> +            if ( p->desc != NULL )
>>> +                p->desc->status &= ~IRQ_INPROGRESS;
>>> +            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>>> +            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
>>> +                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
>>> +            {
>>
>> I would add a WARN_ON(p->desc != NULL) here. AFAIK, this code path shouldn't
>> be used for physical IRQ.
> 
> That's not true: an edge physical irq can come through while another one
> of the same type is being handled. In fact pending and active bits exist
> even on the physical GIC interface.
> 

The second interrupt won't be fired until the previous one is EOIed.
The physical GIC will keep it latched internally.

But... after thinking about it, the WARN is stupid here, because we can
have the following case:

IRQ A fired
    -> inject IRQ A
IRQ A eoied

IRQ A fired
   -> set pending bits

clear lrs
   -> re-inject IRQ A
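
FWIW, the sequence above can be sketched as a toy state machine
(illustrative only, not the actual vgic code): a second edge that arrives
while the LR is still occupied is latched as pending and re-injected when
the LRs are swept.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of one virtual IRQ: in_lr mirrors "occupies a list
 * register", pending mirrors the latched second edge. */
struct virq {
    bool in_lr;
    bool pending;
    int injections;
};

static void virq_fire(struct virq *v)
{
    if ( v->in_lr )
        v->pending = true;      /* LR busy: latch the edge */
    else
    {
        v->in_lr = true;
        v->injections++;        /* inject straight away */
    }
}

static void virq_eoi(struct virq *v)
{
    (void)v;    /* LRs are only swept lazily, so nothing changes here */
}

static void virq_clear_lrs(struct virq *v)
{
    v->in_lr = false;
    if ( v->pending )           /* re-inject the latched edge */
    {
        v->pending = false;
        v->in_lr = true;
        v->injections++;
    }
}
```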

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:23:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuZZ-0003CI-V5; Mon, 10 Feb 2014 17:23:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jordi.cucurull@scytl.com>) id 1WCuZY-0003Br-5N
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:23:08 +0000
Received: from [193.109.254.147:38446] by server-9.bemta-14.messagelabs.com id
	C9/9B-24895-BFA09F25; Mon, 10 Feb 2014 17:23:07 +0000
X-Env-Sender: jordi.cucurull@scytl.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392052985!3328060!1
X-Originating-IP: [217.111.179.100]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7466 invoked from network); 10 Feb 2014 17:23:05 -0000
Received: from mail3.scytl.com (HELO mail3.scytl.com) (217.111.179.100)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 17:23:05 -0000
Received: from [10.0.16.210] (217.111.178.66) by mail3.scytl.com (Axigen)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPSA id 256F8D;
	Mon, 10 Feb 2014 18:23:04 +0100
Message-ID: <52F90AF7.603@scytl.com>
Date: Mon, 10 Feb 2014 18:23:03 +0100
From: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
	Gecko/20130912 Thunderbird/17.0.9
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>,  xen-devel@lists.xenproject.org
References: <52F26C40.2060901@scytl.com>
	<1392042440.26657.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1392042440.26657.9.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
Content-Type: multipart/mixed; boundary="------------030304040406040803060905"
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello Ian,

I am using the "xl" toolstack. I have included the configuration and
screen logs of the vTPM-Mgr stub domain, vTPM stub domain and DomU.

As you can see in the logs, I have enabled the vTPM Mgr and vTPM stub
domains once. Then I have enabled the DomU two consecutive times without
disconnecting the stub domains (in each case issuing the command
"xl create -c /var/xen/configuration.cfg").

When the DomU shuts down (after issuing a poweroff command over an ssh
connection), the vTPM stub domain does not stop. Instead the following
entries appear in its log:

Tpmback:Info Frontend 14/0 disconnected
Failed to read /local/domain/14/device/vtpm/0/state.
Tpmback:Info Frontend 14/0 disconnected

and later, when the DomU is started again:

Tpmback:Info Frontend 15/0 connected

In addition, one can see that the measurements performed by the
"pv-grub" differ from the first to the second boot of the DomU (since
the vTPM domain instance has been kept alive):

[root@localhost ~]# cat /sys/class/misc/tpm0/device/pcrs
...
PCR-04: 5A 4D CA AA C4 90 19 78 9A CB 7A C9 87 A6 08 A8 7C A2 7B DB
PCR-05: E5 6C FC F9 65 D2 D0 FC 7A 24 7F 42 66 28 D5 F9 D3 10 EF 72
...

[root@localhost ~]# cat /sys/class/misc/tpm0/device/pcrs
...
PCR-04: BB 67 AA F3 9E B6 4B 8F 7E 76 57 7A 16 14 FB 0C B2 57 DF 69
PCR-05: C0 A5 04 68 85 93 1B CD AE 61 F7 DA 49 ED 72 9E 2E D7 06 F0
...


Does anybody know if this is the expected behaviour? Can this be changed?


Thanks!
Jordi.



On 02/10/2014 03:27 PM, Ian Campbell wrote:
> CCing the vTPM maintainer.
>
> On Wed, 2014-02-05 at 17:52 +0100, Jordi Cucurull Juan wrote:
>> Dear all,
>>
>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
>> guest virtual machine that takes advantage of it. After playing a bit
>> with it, I have a few questions:
>>
>> 1. According to the documentation, to shut down the vTPM stubdom it is
>> only necessary to shut down the guest VM normally. Theoretically, the
>> vTPM stubdom automatically shuts down after this. Nevertheless, if I
>> shut down the guest, the vTPM stubdom continues active and, moreover,
>> I can start the machine again and the vTPM's values are those left
>> over from the previous instance of the guest. Is this normal?
> I don't know much about vTPM but this seems odd to me. Which toolstack
> are you using? Can you provide details of your config and logs from both
> the startup and shutdown etc please.
>
> I've no clue about #2 or #3 I'm afraid.
>
>> 2. In the documentation it is recommended to avoid accessing the
>> physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
>> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
>> without any apparent issue. Why is it not recommended to directly
>> access the physical TPM from Dom0?
>>
>> 3. If it is not recommended to directly access the physical TPM from
>> Dom0, what is the advisable way to check the integrity of this domain?
>> With solutions such as TBOOT and Intel TXT?
>>
>> Best regards,
>> Jordi.
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>


--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8;
 name="conf-domu.cfg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="conf-domu.cfg"

IyBDb25maWd1cmF0aW9uIG9mIHB2LWdydWIKa2VybmVsID0gIi91c3IvbG9jYWwvbGliL3hl
bi9ib290L3B2LWdydWIteDg2XzY0Lmd6IgpleHRyYT0gIihoZDAsMCkvZ3J1Yi9ncnViLmNv
bmYiCgojIENvbmZpZ3VyYXRpb24gb2YgZ3Vlc3QKbmFtZSA9ICJ2aXJ0dWFsMSIKbWVtb3J5
ID0gIjUxMiIKZGlzayA9IFsgJ3RhcDphaW86L3Zhci94ZW4vdmlydHVhbDEvdmlydHVhbDEu
aW1nLHh2ZGEsdycgXQp2aWYgPSBbICdtYWM9MDA6MTY6M0U6NUM6NDg6QTIsaXA9MTAuMC4w
LjEnIF0KdmNwdXM9MQpvbl9yZWJvb3QgPSAnZGVzdHJveScKb25fY3Jhc2ggPSAnZGVzdHJv
eScKdnRwbT1bImJhY2tlbmQ9ZG9tdS12dHBtMSJdCgo=
--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8;
 name="conf-vtpm.cfg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="conf-vtpm.cfg"

a2VybmVsPSIvdXNyL2xvY2FsL2xpYi94ZW4vYm9vdC92dHBtLXN0dWJkb20uZ3oiCm1lbW9y
eT04CmRpc2s9WyJmaWxlOi9ob21lL2pjdWN1cnVsbC9YZW4vdmlydHVhbDEvdnRwbS5pbWcs
aGRhLHciXQpuYW1lPSJkb211LXZ0cG0xIgp2dHBtPVsiYmFja2VuZD12dHBtbWdyLHV1aWQ9
Yjg1Y2Q1MmMtZDM5Yy00MzY0LTkzMDYtMmJmYTQ3NmJlMmUyIl0KZXh0cmE9Imh3aW5pdHBj
cj1ub25lIgoK
--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8;
 name="conf-vtpmmgr.cfg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="conf-vtpmmgr.cfg"

a2VybmVsPSIvdXNyL2xvY2FsL2xpYi94ZW4vYm9vdC92dHBtbWdyLXN0dWJkb20uZ3oiCm1l
bW9yeT0xNgpkaXNrPVsiZmlsZTovdmFyL3hlbi92dHBtbWdyLXN0dWJkb20uaW1nLGhkYSx3
Il0KbmFtZT0idnRwbW1nciIKaW9tZW09WyJmZWQ0MCw1Il0KCg==
--------------030304040406040803060905
Content-Type: application/x-gzip;
 name="enable-domu.log.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="enable-domu.log.gz"

H4sICM0H+VIAA2VuYWJsZS1kb211LmxvZwDtGmlv28j18wrQL8iXqYFi7SIURVmSFaEu4DOb
2k4C28l2ESzYITmUBiY5zAwpWVvsf+97M6QOipKVbbeLApLhROI75l3zLvkjlYonI+KLJOQj
EkoRE3tCpf3MEnvCZZbTyLF9lmRC9Vt+OGr8q0HgdRCImAcHQ5LkUfTaPDI84JlBMc9cnoRi
5Zl+ns1SBk8P0snB61XImKYI+GvAQppH2d+qcCHUVrhSPJAsBJx2BZLQWJ9ZqlWlzHOt0UHP
oyeB0+5bnbB/bHVD37foG6dnddud4x497h+zk06V9lkFNKNazxXArxW8NKJZKGS8I7YQkZap
qorME3cssjTKR67yJU8zbZRHmbODRg23A2+DH2L67E78NEdyp3IInVAezaFfVoD4aq88+blC
DWSxduWXxla8RARsN0Tg54L9fBZDOG6NgUz5bgx8EanAqaKg4jGLnzzA6XW6ncGgyoPKEcu2
40x4wMQcxaoaUI1pIKYLFk5nzY+Z72Y8ZiIMFctqHM2eme9uDulI+DRCBlvNEXBFvYi5MR9J
mm3HBbfpiHvBGV70FLCJq8BK2UoWWBzKJtxn2g0QRAzSjEh2wFRZ7kFuoTzZrtISzS5cN1qQ
PWeSvqyvRnMhW+2IOZ7EL6Mqf8wCN6WSxmrtai4w0BB58pSIaVIxg8aZMj4aZzXxp6G+vlq1
oJRJLoJNUBWB+TYBIYuxxJ9tAmsbFGFpOVtTHBepkJl62Vhcft0FS8BtexnNjyiP5ynimkaK
rVWDeo88MZnokDuwcyVtfQHtiHu6YHpCZHY6sUYy96znQd/td1ujX+qcpiCPPW3MHBoFmUWC
BkzWBngFx4V0VWOfGuWNAeIg4onW/nActF+3j2wUWv/TwkJeJzTEKaSSp83isEGnDXVJVbLz
agDU1ifku15lauzvgd1YErhl/9Gus8oCp6j5G+RNMYelNBtrf661PeWbFo9HdfaYADlSPk8C
Wgcv5Hjh/mI7QLXFJJ3Wxoou8Ju1kCwWE0zwG8whGQ2mkuvEXxdoXLl+AI0fkm9w1VIIHSTc
/997CrP5JiZxlm+CbCwPBkx9NHu7PXT6w2M27PnD7mBIq82dxuW6K3XaLfypto5GF8mD0RYV
eLhdxbmboa0IrYRWWxaNBMYvWudNwQCZ2fVmGVMu5Hfo+jImJzTaFBqIXeK4uWLatXWYI8Cc
0pmJeDx7h0hJfeiPJpWstIwwCb1t0Ket0CyNf8c4xAEnt/CQWmdvi8dyjmgXL6vmn/K1KTcu
ayoSNxVTJqFJNA2tyqSYHaxiSIa1YDN8SjN/HIjRZgxfUjVeBjeMSL82Gv9gCbnjCY9pRD48
/KkJEN356aECjPBMqQfKHH4+O2pqqkRCWh0xhbAO6qlJxlRCvwM0+Nin/VAT3ZVEJM1cjypm
GLI5Q2SWuSFUH8PwjUGPw8SNuMr0o9Xz4d6b1hRhS2JpQMQSeNwsdQ8jOtJsi0dQGV0sjUNS
XxgL5f2noaGHw/vUQbc+vxngm2bj7m5I3iU8K89wM2iIqqK4rHx64r2h3TnEBUfjdKjVOlmx
KsDmED8oIVoYMlfXSLNEhGRJYKxETxYg48E0BGtQjxY2halIPymcdkfTFNcD0KgIOSOSJiOG
vtOxS6zSuxoXBpgMcUFNqxCdYOkRSTQD6BNPUxYA1Gkv24hT8CEjGC2ERthOZUISqInE87qo
vfnvyCoPOizfHBkmwJ81G5cwQyYByJ8SkF8RmmkV8CygLN+2mo0fGKBIBvMAK7AQiL843xdv
AW8uG6qErawkOk+GMH9WoBAVSkSMtFotLQ1Qj5Iso56bYVVGoVBzOMyYy6k5QTf6ecRks/E4
RquRg3eBruip0Oeam2QE7LVflyEIvvCN+Usy6GAUmLCedNBeIQ0MKZB4uQL9SncERCQEGm7i
4C1Dn/X7Ha+zOESN8yzAfqbuELjWy4cwc8hlHsczgoPdcCl5nOpb01lWoJj91vgen/RX+IaG
r8Hf2MaSZmPiBTB7nzgdwhUBrGbjL+svcn57c33/4f2jjj0zPNpAaRvKBWKzgT8PYAAd7WgF
cnhknugPkn3NIYMyuHLHNRYj7BlasQACoCg6GBjFHGEGX7ttFyD7K/bFttM1UjQb15RH4J1M
6Ju1I5kdMprlklkelZIzCUcPjgeDfntAoOBDrCgiQtxOEN031Jpnrnypz5MXhFLoPcyap7q9
k7oIqLP69fm60UPPbq8ZvI745vxynRoEqyGvGG75qNJqFkenXN3ff7gfmlTk5okncvBQaKin
PBsT6Z9azgZ2+uzfwu/MgyGYwNCcKOpnHG4fNuwYX5gYLcmg7Px4dv9+SN4LMoZMB5lCa43D
M2mT6RgqNHDnSQBEJa4fCcVcjXKILdtReTCe2em0CB8lkCkCMNBHSImK/PPq4eJ71IqhQ0k2
huTFkhwTWxeqw8tYxzthdXbCcnbCagPWqy8/vPryd0LOoQNCm31/AXgfHsjhcctxWr2j75ug
YbMhAVwWdfj4R9t87SYtMgO5BhI1gyQSE2z3MXFBs9AJ1WsQTmZci6sh7efBMShj1hLEnsTQ
u+S/WEZzIiFAQetTGyLU1nVI2p9F9FaKPLWiiatNIgP3/Qf39tPNA7k9e//2lCUucPz00Pr0
eG0NCvDdZVnnTscTH6x+c/XT49n57dUpVA5Auf18B7+nJXMbmKsplNqHnx6u4YqeAscIehfp
z+SYeZbKE6dPdMtpRD+lOd6mWkZLUl7eETkeeeRrzoEjywj0pKdYrGLqZr4Hz7CKSUiN+n8a
h6qwBQ7xJg5Ol1+6CSGPH+/INTqBnFZe4Kc01v4ZvoN6RX6kJkZ0h1IkcDBMwnQItVpV/PMC
58LgaMevIJRtwC9UO/VCxGkErWFAVO77EPwhBNGsSrRafrICtousYGQdolskRXBADqdL1JIV
OkKh0+TEiyC3F0xPd6tDBFf9p9XSCh4btLsdfcvgEJg8InPRIFE2G4HkuDu2ZebbY6hVcCla
/hCune6sQBmRwi0EaJGDySG8x9bw1Z/fXp89nt0OyZ3Axgo6LMVd1MlVPk1Alsyc0YKI+O67
H1nki1hzLJIHPH7ANgVNkAPzIQGWXwj5cAPTWROhRb+L+y4cGok2An5q6XfGGjAl1BHlKbkV
Iw5oBCI9B+o7mkDawa82cLDoADcDnmjwoTqC/qz4QEZ4M8hBeUcOQJspwSQ2YXDG8nkXY+Y/
aS/OM4pafgx1SYNIAUNrfLEVJDU7VP5TC7JOlxw6R8SyiP0zWTyzKNmaVIDPNjgmSkaT16R7
Mhj07E4bm3wj5GvSPz55c+LYA6fntLsQacJ/Utvkwsm3TjbcyzlzSfSn+bk9x3agHYYkVBz6
xjnpdmz8jgYEWZy5ZMt7GILyRLtPq7gwKboGk7ml12w4ZbKK3+9KSh0ay96oIF5hZCOizSDk
Q2iiPIKJdB0PKiHiJSKxdP9l/G966zxdDt9YQI4RsrzMn9+S0h2/IdQKZpDIVq3D074ed0DQ
szSNZiboJJtiiMlcA1bxvwn9HMUvbk4kRIopZjGRVYyzjDzHISwbtwHxErKrjKE8AJyHhKeE
BoFuM8rtHpZbGqFDZ2iJHNLdosPUbKDzqKg/NzbNA54Fq8LPgUW4gMFH2DDXRsjm2LhnmeRI
WfYUmJcIm0DOUPXHqfEmSVLIUyF/Buj12bvbq8tVqA9loUIJScMkRsngBoFJ+tBfHF7zhEaQ
bm9M/1G0HVDHIMWaL0CQsjYz6rCD/PiQCbNkqErbmEPWxFmA9E0tXaO/saYej3g2G4Kj9ZWg
WRVPDXHb8eXh5t3Hn7GW6GuNM9BaFjsMYqt9RDSCmYrtZbcvxFj3emVE3Oj5Ct5auG7EfOEW
LF/Isur4Y7C7wiKXioj7M3J2cXH18XGIMYeNdT31dZSr8fb7uYz+KcFvpEzeCWoQ/yOhvk2m
3USaO/G/micxrMpMuRIyTM8LuvKmUmCLx5QeaR6v7u+IgvmARjjZ7ERz8+72dgMNnehKBYOL
iGHQx33EMvwxlzoFijDUBWYV+imJl/MRqUtIutUekntm4V8p5BrZlGgEAPwH/NuEedbDnNkw
a2pwvrkOY8BA+3z85j9DuqRQjaHq5onWQo/SH99dQj/Re/M77auDk5P9vnq/r97vq/9P9tXO
fl/9B+2re79tX93b76v3++r9vnq/r97vq/f76j9wX93b76v3++pv3FfDw/2+er+v3u+r9/vq
/b56v6/e76v3++qd9tW9Y6fT+Df9EGIHhjsAAA==
--------------030304040406040803060905
Content-Type: application/x-gzip;
 name="enable-vtpm.log.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="enable-vtpm.log.gz"

H4sICM0H+VIAA2VuYWJsZS12dHBtLmxvZwDtnW1z28iVRr+ryv8B6y8r71oiQYJvrnWqPLI9
mZporJU1k02lUlwQaEpckwADQrKdrfz37QYoihQgWfGasaTnqMqyTdwGGn36dvft5wI8CrPF
JDn1ojQZT069cZbOvMZZOjON/4nOo/PsfDpt/JdJGheTLD8Pp37jIp/P9hb5+ShOZ/vR+HTn
f3c8+/PU/ncSP33hJbbE8/Kj8pz2s9Kk/Gw4ScbpxmfF5/nnubGfPp1fPH2+eeQsnLsD/xGb
cXg+zX93/XiaLm49vlhM4syMrU3z2pEknBXXtFU/33P35V8ve35e3NPTbqfph81uf69nuqO9
II56e4Nxv23/2467gen3+u3getlPizjMw+JONw78/ZrdfBrm4zSb3dE6TadFna7fTHaeDM/S
fD49Px0uomwyz4tmOcnOzdOdmrM9Hd1AYhZ+Gl5E8/NFzUXCi3AyXR3982ZV/3LN2FrNCnZf
sEvS2NzN0J5vaJsrMjOT5LdCzxfRcGbPW+Atba6buPucmdmHkbXp+4PW9TOE2anJb7O4mMQm
XRns+df73VkYpx9Xx7vBdWB5NMwnM5OOxwuT1zS2+WSi4c29d5pG4dSd4NaGiCeLcDQ1w9nk
NAvz220tsKJrfQHDaPohNhfDhW2hfMPhry5qLiaRKQDY3mLsCJMmd7BcjirhJLn9ltbK3OWs
N7ag+ZRn4ZfvtzAb2oHpjpZnF7Mvmy6iMxMP52EWzhYVH7yycA1xnnxI0o/JtWYobD6ayelZ
XtP7iqNR4VS1h+Ymm6TxTUcXU9t8Nx20w5VJos83HS7aYNkt9/xbx7JJOk+z/A4jyST7612s
7Kx1h5aPpuFkthoc3obThakM+/VEPpgsKbrc08b5ImsUDtiYTkaNT3aCHKVpvjk5nv6tDtnC
jl8fbhw1ChN3qmkaxiar7d7XbIZ2oKppnZpbL29/Fk8nSXHvZx8nySSfR9nLJE2uN0JhbHun
HUA+3FwN02817bSzuDYab2KvnX7cectqr9vWtPrItpdJ4uHlAqNZ1xpXNstJ/Yb6zt3INQ/z
s4Lil9Y5+5PZaV2zXNizFA0Yh3WHl7X5gu+6OT8s2i0LP9b2lGIWv/leMjNLL9zgfkOjZCaM
P2aTYtCv62aTxTCK7XrPFb8B2FoHeppMomvdbP3oPLKz4cUtBhfj0W1HP9x61LLYYl8pzj87
zeoguKnkpjNdrg5H/U4Ud1rRXtweRHtBuxvsDdrN7l5rNA6DXndkWqZ1k0us32aaDOfpR5PZ
FUG5blnkWfr56aZFZpzrF93GlFPw5vGPYR6dxenpzWeIsnBxtn54p6zS33d2rAd4h3ZQmIVT
7937f3lijxTXKJaKthHsiqTZbO7+9urZk6JQkllnOjULd6jfbBb2Z2FmZzZbwH0Yhf1+xxU5
vCzizfPhKFyY4mzj1dncmfLh2A445dk6pfVsnAynk0VeWPfXr20H8HIJ4g6tVak4MDWJ/fjJ
5W2Pp+FpcdblR3YMHLpB8IW3OQQu7zf68KIs1/wU9vyouWfvw/39ZOfw8IX3ky1weeZhbqe7
6xUYmstPe4MoCFZHhpasW+QXjTXeaEd77PLIwA+D5ZGiKt7qJou6rJVxpZK4aJlez6yOlMDm
Y9sC496yFe1St/igYHQYzucu5LNzUJp99rIwOTX2JEHT/Xh7Jcqms7Tr0txZ2vvbW9bZc6NK
mkw/26MfJvO5ie1Rv7C+bJxJaIkZz3UML5y6WTJPM2/s/gzcXRe/n+2VF9kt/3pWFo8LCK9t
OJDEttZzz9Z64YW5rbi7xl7L/ir/uf9k5/fGGlgnsANPYVMedL/3gtU/rd2qTu5W3Mok8yZJ
brKxDSSuHbUR6yKdGm9/f7+oiy19muR5OBrmbqB1VXJ3bC/mGsmvOX+xajufmuzJzsmZayvv
6U9xMUTP0+KqjtdlTTvN55f9zTaiXzb6ZTG7qFjYhqsvalttvWirLGqLjM4X3mQFIfbSxLOr
J893nuQsuwMzMFcXWZyd57GboOouMh5tXKRdXuT1+Wz22XOr9Bdrg8NL5yOtaK3+y3V85bTt
XnfjtEF52tK+sijxnuy8t3UsuqGrqLf7rPyk+E9m/npuBzFjvaBdc1Oe+WSnv9gyKmbz6EXQ
6rg+6sayoururCdHh96b2fm06KW7/n5rv7nf2wt6nWerYu1O77LYe1fsfO5GGeNNFt5/R1MT
Zv96Zdpfmb6aTr2jg+MrHn+zPPLUW66SvItwait/VXLgX5Z0VToMXbslYRIZ7yCdOY9YeMto
Lr66oeaq0B/SU29qLszUs27rrrPr1g32Jl6u/xQeWlzgh1cHP3svr/1cNaI9v5sp99zoa5Ji
KXrXHnLDBd9maZLXXXE+G7tDL9x9eH8MJwUWN14sp2rnlomJchtF7u9ft/9haXNQ2ri22TS4
dM+/he4Eri3nUztCx97iPIrMYjG2rfS5bFAb2ToQwWWTroq6Cs1dHOvaIvd++e3Qc75pR7gn
O092/q364/3wh5/fHr/75aS4jzIebVyM4kav2/euzFzpJzuXt2mHlWVQUUbBjWZjeajxV7dc
bvhtV/7JzttwMi37UsHqLoUaYxPm55nZG4VZNjGZbUe/2+4HtrNE9k4WXjr2On7LG33OXZ+s
uaVVlW0rDdO5SXY3b+uZt/c754XLnun3Vq72m6O/3pbupq3V0M3DZ7Zj2+HS9WXfb7/wXpvR
+ekLzx02S68cOgfa9Z+7eaj86S19053BXauzutZxOSI4YG+SKPs8L6B/MJ/Lrc1l06zjbg06
q6se2+YsRvFpaof4sohrRFfhqxK+P7i6nG3/7mCwbLiyhOsea71l2VO83fKszzb6WnNQf3H/
6uLebrlF9DLwB0G7GTz7hrXxn5UkFuWwVjTH2ig0dNR2lzZ52a7OJmivG7030/GJPfjWelLV
uD3YgOoODOdZclq17F5ZHqZJOprYZY+dFP3gumGvuTI8Sj+4Ecnz/N5+1c5f2bntUd/ZtTqt
wXP7O+hXrFsb1q3irK1e97n9HfgV6/aGddtZd9u2k3Y77YptsGEbOFsbIDy3v6q2nQ3bTlGL
jj2vH3Qqtt0N2+6/v3C2dkj2O9W26G3W11Wied3I9/0qLLui96uw/E6vano2C6Ma035QNc0W
4fDNz3XG3Q1jZ3fqJqAwN0PrxzUlBmuVXvq+9XE7qBX7xM75q0V6lYssJqfJ7vH7V8P39s/R
zwfv/eH737/yq52/2ayUvTDZZPz5bqXbd7ny6zfHNUWrLVN74drCfrXWphwgi9Jv3heFf/M7
NUWDStHY3LFof1W0rKo3n9pJynMRUsW45X+piu9evTm6oV1bnS9V8rbCg3+gmt3VkssNfHtu
5CvXEiZ2q4/N4fTaSFl+vuuqcFIusMpFSrE2cmsAP2g0L9c7l2e7Pk0Gm12oPDSMykXibvNZ
Xal20LnqAH92dTl59ePw+D9/HR68Ozx89cvrv9SW6q8NCEWpd8evh28+uaouC7gl4Wk2yT8v
h/qrmy3NdmurEzT7wfpqd1n56+3oguJrPX9hl9FusWKnNFvC+XcZOXcH3fb6tLg2mfxgTi3L
P7qtsML45eX6tR+Zpl0zJy+LOdNNiC+b3sJd4XKuXT9jv39Z4z9mqT3V2kRrV2L10+zGMqXV
XOs5SVyOUxtrFHue1QoF8v9c8j7kRcnfH5+/ZU6wkcD9mRY63W61i/xo8oNwHo4mU9szLsut
Pijw9Nb7yob9qsusF+j5vc3LHLw6Gh4dvzt6c3zyp7or+EG/vsDw5Kf39s/hm3e/ntTe0r3t
nAxL973PDQY39LnXvx6/Ovnp3S/3ucPF7Qc5Gt6fobDv13TLo4NjtxdTM112+uvdcWn3wOZL
hiRd9vdndIA9fg97/B72+D3s8XvY4/ewx+9hj9+jhOD1kL9l17eDz0Men5ciH+HzmuTNPdb4
IL9V8iE+r0l+7OPzouT7+Lwo+TE+L0l+0Ozi86LkY3xek7wf4POi5Ef4vCb5VgufFyU/wOc1
ybeb+Lwo+R4+L0qenBxR8gH6vCp59HlR8k3ieVHyPvG8KnnieVHyLeJ5VfLE86rkvz6eB8w2
wfRZgImSD0iQVCVPgqQqeRIkRcl3SJBUJU+CpCj5LgmSquQRVETJ94jnVckjqIiS7yOoqJJH
UFElT4KkKPkBCZKq5EmQFCUf8gIjVfK8wEiU/Ah9XpX8A9Tn6RFbXfkx/6uSZ/4XJR8y/6uS
f4DzP+S/CXny80TJj8jPUyVPfp4o+Yj8PFXy5OeJko/Jz1MlT36eKHlDfp4qefLzVMmTnydK
PiSeFyU/Ip5XJU88L0o+Ip5XJU88L0o+Jp5XJU88r0qeeF6UvPn65+0As00wY5KiVcmTFK1J
PmySFK1KnqRoVfIkRYuS90mKViWPiCpKvoWIqkl+YIjnVckTz4uSHxPPq5InnlclTzyvST5s
Es+rkieeFyXvE88/dvI04zYdiLf7y5In21CVPNmGouR5u78sed7uL0qet/vLkkf4EiXP2/1l
ySN8qZJH+BIlHyF8iZLvE8+rkieeFyXPt/XIkieeFyXPt/XIkieeVyVPPC9Knm/rkSVPIqso
eb6tR5Y8b/cVJc+39ciS5+2+ouT5th5Z8g8w3/7eVYiu+E2mHx4AECX//3jdMOQfNnkSBkTJ
8yZrWfIkDGiSH/Ema1nyJAyokidhQJQ8b7J+9ORpxm06UAs9RpR8Gz1GlfwD1GMg/03II3+I
kg+QP1TJI3+Iku8gf6iSR/4QJd9F/lAlj/whSr5FPK9KnnhelHybeF6VPPG8KPmAeF6VPPG8
KPkO8bwqeeJ5VfKkM4qS75LOqEqe9x+Jku/x/iNV8rz/SJR8n/cfqZIn316U/IB8e1Xy6POq
5NHnRcl3iOdFyXcfYDxPj9hqnE+0p0qeaE+UfJ9oT5U80Z4qeaI9UfIDsrFVyZONLUo+JBtb
lTzZ2KLkR2Rjq5InG1uVPNnYouQjsrFVyaPeipKPH6B6C/lvspNDPK9KnnhelHxIPK9Knnhe
lTzxvCj5EfG8KnnieVHyEfG8KnmerhYlH5Nvr0qefHtR8oZ8e1Xy5NurkiffXpT8mHx7VfLk
22uSj5ro86rk0ec1yY8i4nlR8jHxvCp54nlV8sTzouQN8bwqeeJ5UfJj4nlV8sTzmuSjJvn2
quTJt1clT769KHmffHtV8uTbi5JvkW+vSp58e1HybfLtVcmjz4uSD9DnNcmPxsTzmuSjJvG8
KnnieVHyPvG8KnnieVHyLeJ5VfLE86Lk28TzquTJt1clT769KPmAfHtV8uTbi5LvkG+vSp58
e1HyXfLtVcmTb69KHn1elHwPfV6UfJt4XpU88bwo+YB4XpU88bwo+Q7xvCp54nlV8sTzouS7
xPOq5Mm3FyXfI99elTz59qLk++Tbq5In316U/IB8e1Xy5Nurkn+A+fb0iK2u+Zn/Rcn3mf9V
yTP/q5J/gPM/5L/Jyo/8PFXy5OeJkg/Jz1MlT36eKPkR+Xmq5MnPUyVPfp4o+Yj8PFXy5OeJ
ko/Jz1MlT36eKHlDfp4o+ZB4XpU88bwqeeJ5UfIj4nlV8sTzouQj4nlV8sTzouRj4nlV8uTb
i5I35NurkiffXpU8+fai5Mfk26uSJ99ek3zcJN9elTz59qLkffR5VfLo85rko5h4XpU88bwo
eUM8r0qeeF6U/Jh4XpU88bwm+bhJPK9KnnhelTz59qLkffLtVcmTby9KvkW+vSp58u1FybfJ
t1clT769KPmAfHtV8ujzquTR50XJN4nnRcn7xPOq5L8+ngfMVoMult6q5Fl6q5Jn6S1KPiA1
VpU8qbGi5DukxqqSJzVWlHyX1FhV8qTGqpInNVaUfI/UWFXySGmi5PtIaaLkA+J5VfLE86Lk
O8TzquSJ51XJE8+Lku8Sz6uS/37xPGC3CTZEfhclPyJcVyVPuC5KvseTrKLk+6TTq5InnV6V
POn0ouQHpNOrkieeFyUfEs+rkieeFyU/Qn5XJY/8rkoe+V2UfIT8rkqedHpR8jHp9KrkedO0
KHnDm6ZVyT9AfZ4esdVoj5WfKPmIlZ8qeVZ+ouRjVn6q5B/gyg/y32TNT2amKnkyM1XJk5l5
E/lOt1sl/6PJD8J5OJpMLfDLcqsPilbvrXeBDftVT1gv0PN7m5c5eHU0PDp+d/Tm+ORPdVfo
BfX2w6OD49o7uSddbUwq6M2DTE1XsziPTVg3ynT6611saXevh5mqbMgwo8I+riwr8XsZ9pVU
YPxehn0lSQi/l2Ff2UTC72XYVx78we9l2FdSgvF7FfbVZCH8XoZ9ZW8Hv3807L97Az7mHXjj
8/yUKnmen4I8Pi9FvsWmmCp5JDBR8u1K/jQ+L0K+kj+Nz2uQD9gMUyWP9CVKvkOiiyr5Sv40
Pi9CvpI/zdeRPAqwvcqrDBnMNcj3K68yZDAXIV+R4vB5EfIIMqLkBwgyouQ75CypkidDWZR8
l+eRVMkTz4uS7xHPq5InnlclTzwvSr5PPK9KngRLUfIDEixVyZNgKUo+JMFSlTwJlqLkRyRY
qpInwVKV/PdLsIT8dyUfoc+rkkeff+zkacatbosQIomSDwmRVMkTIqmSJ0QSJT8iRFIlT4gk
Sj4ihVmVPCnMouRjUphVyZPCrEqeFGZR8oYUZlXy9ziF+cRe1/77hSvgvc3SxLWm5weNphdP
FlGaJCbKXUO9DSdTE7uymQljrzFNo3DaiNNZOEka1j42F5PINFxFGs3GIg9zs/8PnP4Gw441
XLOin26zn45JuIb8vRuhIL9V8pVU+/vj87fMCZuTx3fuIvf1K8X9oF9fYHjy03v75/DNu19P
7vF3i4+blacBGJbue58bDG7oc69/PX518tO7X+51h7vH+ssDWSE/9K9F+0IP8dFpVNkbc4/X
SrD/Z7PH73XZ4/e67PF7Xfb4vS57/F6XPX6vyx6/11BC8HpZ8jxXCXl8Xpw8Pi9C/h5rfJDf
Knlyr0XJj3mtpCp5nqVWJc8zVqrk2cNTJc87UzTJj5s8S61KnqeWVMnzbjRV8rwzRZX8PX6W
GvLbJO/zDlRV8jxzo0qed6aokkefVyWPPq9KHn1elHwLfV6VPPq8Knn0eVXyX6/PA2abYAIW
YKLkOyzAVMmzAFMlzwJMlTwJkqrkSZAUJd8lQVKVPAmSquRJkFQlT4KkKnkSJEXJ90iQVCVP
gqQqeRIkVcmTIKlKngRJVfLo86Lk++jzquTR51XJfz99HrBbBYv8rkoe+V2U/AD5XZU88rsq
eeR3VfLI76rkkd9FyYfI76rkkd9VySO/q5JHflclj/yuSh75XZT8CPldlTzyuyp5Ho9XJY8+
r0oefV6UfIQ+r0oefV6VPPq8Knn0eVXy6POi5GP0eVXy6POq5L9enwfMNsEYttVVybOtrkqe
bXVV8myri5Ifs62uSp5tdVXybKurkmdbXZU82+qS5H37C58XJc+2uip5HntTJc9jb6rkv/6x
N8BsE0wPQUWVPIKKKnkEFVHyfQQVVfIIKqrkEVRUySOoqJJHUBElP0BQUSWPoKJKHkFFlTyC
iip53iOoSp73CD528vB72Pxu8NwRe7Cq5NmDVSXPHqwqefZgRclH7MGqkmcPVpU8e7Cq5NmD
VSXPHuxjJ08zbtOBxgydquQZOlXJI19pkvebfA2WKnne16lKnsdLVcnzeKkqeR4vFSXv83ip
KnlSmx47eZpxqw5Etogo+RbZIqrkyRZRJU+2iCp5JE9V8kiequSRPEXJt5E8VckjeaqSR/JU
JY/kqUoeyVOUfIDkqUoeyVOVPG9zUCXP2xxUyaPPi5LvoM+rkkefVyWPPq9KHn1elTz6vCp5
9HlR8l30eVXy6POq5NHnVcmjz6uSR58XJd97gPo8PWKbPaLP3p4qefb2VMmzt6dKnr09UfID
9vZUybO3p0qevT1V8uztqZJnb0+UfPgA9/Yg/03I8+zNYydPM27VgXicQZT8iMcZVMnzOIMq
eSRPVfJInqrkkTxVySN5ipKPkDxVySN5qpJH8lQlj+SpSh7JU5R8jOSpSh7JU5U8rxtUJc/r
BlXJo8+Lkjfo86rk0edVyaPPq5JHn1cljz6vSh59XpT8GH1elTz6vCp59HlV8ujzquTR5zXJ
WxT4vCh59HlV8ujzquTR51XJo8+LkvfR51XJo8+rkkefVyWPPq9KHn1elTz6vCj5Fvq8Knn0
eVXy6POq5NHnVcmjz4uSb6PPq5JHn1cljz6vSh59XpU8+rwo+QB9XpU8+rwqefR5VfLo86rk
0edVyaPPP3byNOM2HaiD5KlKHslTlTySpyj5LpKnKnkkT1XySJ6q5JE8VckjeYqS7yF5qpJH
8nzs5GnGrToQ+qEqefRDVfLoh6Lk+zzfq0qe53tVySN2q5JH7FYlj9gtSn6A2K1KHrFblTxi
typ5xG5V8ojdouRDxG5V8ojdquR5vleVPPq8KvkHqM/TI7bZIyKiPVXyRHui5GOiPVXyRHuq
5In2VMkT7amSf4DRHuS/CXmysUXJG7KxVck/wGxsesQ2e8SYvD1V8uTtqZInb0+VPEqOKnmU
HE3y7SZKjip5lBxV8ig5quRRclTJo+SokkfJESXvo+Sokn+ASg7kvwl53qujSp736jx28jTj
Nh2oxSa4KPk2m+Cq5NkEVyXPJrgqeTbBVcmzCa5Knk1wUfIBm+Cq5NkEVyXPJrgqeTbBVcnz
cnlR8h0eUlMlz0NqquR5SE2VPA+pqZJHnxcl30WfVyWPPq9KHn1elTz6/GMnTzNu04F6CF+q
5BG+VMkjfKmSR/gSJd9H+FIlj/ClSh7hS5U8wpcqeYQvUfIDhC9V8ghfquQRvlTJI3ypkufB
VFXyPJgqSj7kwVRV8ujzquTR51XJo88/dvI04zYdaITwpUoe4UuVPMKXKPkI4UuVPMKXKnmE
L1XyCF+q5BG+biLf6Xar5H80+UE4D0eTqQV+WW71QdHqvfUusGG/6gnrBXp+b/MyB6+OhkfH
747eHJ/8qe4KvaDefnh0cFx7J/ekq8Vky988yNR0NYvz2IR1o0ynv97Flnb3epipsmeYkWFf
eVYCv5dhX9k0xu9l2Fe2jfF7GfaVjWP8XoW9qWwd4/cy7Cubx/i9DPvK9jF+L8O+soGM3z8a
9t+9AR/zDnzgswOvSp5HT0TJt3j0RJU8j56okufRE1XyPHqiSp5XQ4qSb5PsokqeV0OqkucJ
OVXyX/+EHGC2CSYgsV2VPNvqquTZVhcl32FbXZU82+qq5NlWVyXPtroqebbVRcl32VZXJc+2
uip5ttVVyfPiOVXyvHhOlHyPF8+pkufFc6rkefGcKnn0eVXy6POq5NHnRcn30edVyaPPq5JH
n1cljz6vSh59/rGTpxm36UADJE9V8kiequSRPEXJh0iequSRPFXJI3mqkkfyVCWP5KlKHslT
lPwIyVOVPJKnKnkkT1Xy91jyPLHXtf9+4Qp4b7M0ca3p+Z1G04sniyhNEhPlrqHehpOpiV3Z
zISx15imUThtxOksnCQNax+bi0lkGq4ijWZjkYe52f8HTv9/85oj8Z1XBgA=
--------------030304040406040803060905
Content-Type: application/x-gzip;
 name="enable-vtpmmgr.log.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="enable-vtpmmgr.log.gz"

H4sICM0H+VIAA2VuYWJsZS12dHBtbWdyLmxvZwDt2f1v28YZwPH9OgP+H67+Zc5QRq+WZGMp
kMTxajRugjjtChSFcCSPEmHxJSQl2x36v++5oyRbr3GHrui2rwLEEe+5u8+9H533uijjdKSC
LI3ikYqKLFENUwWNO5M2ZlWeJKPCK6upH2bJ8yAaHfzzQMnnSL7G4dGZSqeTyZf1o7oIeVaH
1M+GcRplK8/c8+o+N/L0KJ8dfbmaMta5TfhbaCI9nVRfradnWbk3vSzjsDCRxDTXUlKduDrn
rVrPOJ26Bh11g14v6oZNz/e19rr904GnTdD0utFJ1w/D9uC0c7qe964MdaVdM1cSflmLyye6
irIieWJ0lk2cab0lxTQdjrMqn0xHwzIo4rxyffKxmJqjgy2lHfk7hiHRd8NZkE/LLZXomY4n
y9QfV6k/rQVLVOIG7jNxaRaapwVKeUPprsAkJq32jnhVBsNEyrVB85j1ENvOxCQ3vsS0ep1B
d70IXYxMtTdkFocmW0Z4rfVpN9ZhdvtQQnuwPmZVMKzixGRRVJpqS3+bOxMMd8/eSRboiS1g
b1+Ecan9iRkm8ajQ1f5YGTM3uz4zEv7kJjSzYSl9VK0s+IdKzSwOjBsDmTBGNpQsfULkfFfR
cbq/SY/yPKXUnT1o7qpCf769LmwoG9MTI8ez5POhZTA24TDXhU7KjWX4EGE7YprepNltutYN
LubWxKNxtWX6udTArautSbkp4izclVpOpPt2JcqOZdLgfley64P5tPRae7ezOMuzonrCZhIX
n54Slclic2HrqM3erZs5n8GtZrfT7/c2G+OiZOPxTSFhJxvpv+zfAic6TpYb0YWelGbjiNk+
9DemSN3cPmpMy6LhVnpjEvvuDPazrNo4iEc/b5sepWyXNzu3KBdiS5tkOnQt3FxKazFD2Ra3
jMSW1tc9kISTOHXN38aTuS/b083uis2g3ZRzrVzb7lcn1dbzzZZbbsyFLV3tSw+ZNBwuri/N
be1/iJlfGXZ4c7sv5roau6Gb6WLrpSlORtt6YyaZbb5xqLclzxGf2RDsXUK77ir07dYp4W4H
u5tQmCSb2RNjR18URoe3RexOkm3zKS6HQSh3Rpt9xzg9milHaRyszafHqXkgZ+xsT8As8vel
3uxNlWHZk5ylwzy7NYUcz/U9oqyK7P5oNaIwdm247jb1brKafqurYBxmo90lBIUux4+TD+rO
+uXg4D038Xny4iYe6IE5aXVbXt+PBl7XH3S9QdBre2HrNDxtml7UH6yvHW7i3MS5iXMT3x3K
TZybODdxbuIPmbmJb6T+f9/EfzCpuorTONET9e76i0NJcXW4q5t0753uNJvN4+9fPjt0mdJC
5t7IlDapJSkuw1gXcs5IDvtUbrKDvs1ztcij8mro69K44nrL4mxR1TCSBVoXd1JHJ1E6nMRl
ZR+d+o8rl12uvhDYpEcmlzAxqTw+XLQ7muiRK3X+SHaJod0mztS8icHNWR0pFeu+bnrNu779
eXhwdXWmLtO4WpQ1rOS4Wa9yaBZPO6eDk+4yZSiDae/ZNqXbW+k6SVumDIJFiqOoZbOc5VEe
mysNXV/oMFim1GOUR9Jmbeb9JrdN96Aeliud5/YVSzbmrLhXhU5HxtbctB/lzYfPhcrNsLKh
0kJvrlZ2CWbp5F5Sb+I8N+FyvBfdE2sZJaPsbFB6Yo+PKiuU7A3Kt5Ucu7+fefNajuc/n9UF
SNnCPpdbeRqKPFciL5WuLN5Gem330/7z+eHB10YiZPLLQnVBdWLbxnWX/5S4Jcu2xl4PChWn
lSkiudCvpcprY5lNjHr+/LnDSO5RWlXaH1Z2Y7Im22ipzLW7taUCd3eaTkxxePBxbPtLHV2G
bk/LM1etHbUF9aT55WLWNe/Cut8XuWT/LqXvtuccNB/nNHVOyeFPS2ncYhhClaVKbjCqZdeP
NffCvh881FGOp1Vo9/NtdUT+4zqiuo7zaZLcK3tRPnu0Jbywy2SgH+nnV+mNUjv93uNSW/PJ
VsfLKrwWkZt2lqWOn9VP3JfCfJrKTmVk3ne2NEGZOzkbQjse3168+/H7j++vfjpT19Zoy5vJ
d3GnMjELVV/1VyPf5ZW8K5yp7+p5kifyilSqsIhndiz/uvlRr95+c/Hh3bcf3fSuL/6NmR82
+r2Begg7PLB/5meonTrzS1VtaDQb86TGJ3tzaLTaNv/hwYW88coAVplbdE/K1IiMrqaF8Xxd
FLEppCs6bYspTSAzqVRZpE5abeXfV6bc2qQlWV60hllu0uPVZj1T3le28188/rh1r2z3vnr5
+hv1Yu3zMFTSp5bt2X3cpO7atzE/up1oy6zbUd/Hy2t17gZIrdd5+e7qzZW60sFYtnf1Ss4Z
9TKUl7FStv+LN+fdutw3qV3WoXprO1dWjTtyZBa2nrddBfPme3EoM/yVmkmPZ4V8Uy9U6+S8
JUMzm3+zW5jNsdxZlLwCaT+uS1XHzbuBm+pR9Ozs8ODPr7PEbXIfpGPupUGVup7m9pVE0i5t
EcU0r9SbUHZRub3LETXaTPjgNpyV52/NzEykNbdbnn4t72vyeN7Ue/V67Db/1bqvq1J9L+nh
2vNzOaLUS/tbmNWE+TKp54qbHd35qqoXlfw9/LupXi/64n51zX2ti/BWrgk27mw1Sc1f3uXk
Um3VUa3+WnqZm8C17Ey115Kk3eL9YGabSfUQXp7LaXXxw5+2Jl5LwXEUB9fxz3IxONkbI9Ol
07Kb2MBOp6c3/GIyLcd2n2kpGQY5H47LZ3Z52t9FqtZGSS58UedvVk/7N6jnd0r9IPgs+RUp
7y5fvl95+HJajdW1rH83p+RC2hqYZk/ZiSsbgH+/zLV2gLyVt093Hsk+q+RCLEtmbOwL6WrY
d2kuW5a7AMhWa6NuzGZLbGHfrD2X77IO7Ni4m95JYIKuXnOt5ltW6ceps9lfF6v6hlLeJ4mp
ijjYWv936SvJslrMuQmK+7xaLWc15IMZ2S1brvPKvrqo09NeO2oGoRf0w1Ovq3u+Z39P7fnd
k5b2TUv+BPtL8AcnQXjSDrywcxp43U6v6512mj2v7Ue62+/5pm3am+Mgudsuf/nFauI/dOwa
YA/ioN5ay/r/D+yx/5dS9paP9eFzdinXFXVRZLI7yv7b6jSa9taXyglp1jrmvS7Lalxk09F4
+2x7euX/yWL3j+6828byqqfsgSOzwpX4b47Cr0dtkq71DBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKE6H9f9MdUIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQ/beJ/pgqRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEiRIgQIUKECBEi
RIgQIUKECBEiRIgQIUKECBEiRIgQIUL0e4n+BdDW9YD0WQIA
--------------030304040406040803060905
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------030304040406040803060905--


From xen-devel-bounces@lists.xen.org Mon Feb 10 17:23:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuZZ-0003CI-V5; Mon, 10 Feb 2014 17:23:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jordi.cucurull@scytl.com>) id 1WCuZY-0003Br-5N
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:23:08 +0000
Received: from [193.109.254.147:38446] by server-9.bemta-14.messagelabs.com id
	C9/9B-24895-BFA09F25; Mon, 10 Feb 2014 17:23:07 +0000
X-Env-Sender: jordi.cucurull@scytl.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392052985!3328060!1
X-Originating-IP: [217.111.179.100]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7466 invoked from network); 10 Feb 2014 17:23:05 -0000
Received: from mail3.scytl.com (HELO mail3.scytl.com) (217.111.179.100)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 17:23:05 -0000
Received: from [10.0.16.210] (217.111.178.66) by mail3.scytl.com (Axigen)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPSA id 256F8D;
	Mon, 10 Feb 2014 18:23:04 +0100
Message-ID: <52F90AF7.603@scytl.com>
Date: Mon, 10 Feb 2014 18:23:03 +0100
From: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
	Gecko/20130912 Thunderbird/17.0.9
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>,  xen-devel@lists.xenproject.org
References: <52F26C40.2060901@scytl.com>
	<1392042440.26657.9.camel@kazak.uk.xensource.com>
In-Reply-To: <1392042440.26657.9.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
Content-Type: multipart/mixed; boundary="------------030304040406040803060905"
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello Ian,

I am using the "xl" toolstack. I have included the configuration and
screen logs of the vTPM-Mgr stub domain, vTPM stub domain and DomU.

As you can see in the logs, I have started the vTPM Manager and vTPM stub
domains once. Then I have started the DomU two consecutive times without
disconnecting the stub domains (in all cases issuing the command "xl
create -c /var/xen/configuration.cfg").
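
For reference, the start-up sequence described above, as a sketch (only
/var/xen/configuration.cfg appears verbatim in this mail; the other two
config paths are my assumption, inferred from the attached conf-*.cfg
attachment names, and the commands only run on a Xen dom0):

```shell
# Boot order: vTPM Manager first, then the vTPM stub domain, then the guest.
xl create -c /var/xen/conf-vtpmmgr.cfg    # vtpmmgr stubdom (talks to the hardware TPM)
xl create -c /var/xen/conf-vtpm.cfg       # vTPM stubdom "domu-vtpm1" (backend=vtpmmgr)
xl create -c /var/xen/configuration.cfg   # DomU "virtual1" with vtpm=["backend=domu-vtpm1"]
```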

When the DomU shuts down (after issuing a poweroff command over an ssh
connection), the vTPM stub domain does not stop. Instead, the following
entries appear in its log:

Tpmback:Info Frontend 14/0 disconnected
Failed to read /local/domain/14/device/vtpm/0/state.
Tpmback:Info Frontend 14/0 disconnected

and later, when the DomU is started again:

Tpmback:Info Frontend 15/0 connected

In addition, one can see that the measurements recorded by pv-grub
differ between the first and the second boot of the DomU (since the
vTPM stub domain instance has been kept alive):

[root@localhost ~]# cat /sys/class/misc/tpm0/device/pcrs
...
PCR-04: 5A 4D CA AA C4 90 19 78 9A CB 7A C9 87 A6 08 A8 7C A2 7B DB
PCR-05: E5 6C FC F9 65 D2 D0 FC 7A 24 7F 42 66 28 D5 F9 D3 10 EF 72
...

[root@localhost ~]# cat /sys/class/misc/tpm0/device/pcrs
...
PCR-04: BB 67 AA F3 9E B6 4B 8F 7E 76 57 7A 16 14 FB 0C B2 57 DF 69
PCR-05: C0 A5 04 68 85 93 1B CD AE 61 F7 DA 49 ED 72 9E 2E D7 06 F0
...
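
The difference between the two dumps can be checked mechanically; a
minimal sketch, using the PCR-04/PCR-05 values quoted above (the temp-file
paths are arbitrary):

```shell
# Compare two saved PCR dumps and list the registers that changed between
# boots. The sample values are the PCR-04/PCR-05 lines quoted above.
boot1='PCR-04: 5A 4D CA AA C4 90 19 78 9A CB 7A C9 87 A6 08 A8 7C A2 7B DB
PCR-05: E5 6C FC F9 65 D2 D0 FC 7A 24 7F 42 66 28 D5 F9 D3 10 EF 72'
boot2='PCR-04: BB 67 AA F3 9E B6 4B 8F 7E 76 57 7A 16 14 FB 0C B2 57 DF 69
PCR-05: C0 A5 04 68 85 93 1B CD AE 61 F7 DA 49 ED 72 9E 2E D7 06 F0'
printf '%s\n' "$boot1" > /tmp/pcrs-boot1
printf '%s\n' "$boot2" > /tmp/pcrs-boot2
# diff marks boot1-only lines with "<"; keep the PCR name of each such line.
changed=$(diff /tmp/pcrs-boot1 /tmp/pcrs-boot2 | awk '/^</ {print $2}' | tr -d ':' | sort -u)
echo "$changed"   # lists PCR-04 and PCR-05
```

On a live system one would instead `cat /sys/class/misc/tpm0/device/pcrs`
into the two files after each boot.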


Does anybody know if this is the expected behaviour? Can this be changed?


Thanks!
Jordi.



On 02/10/2014 03:27 PM, Ian Campbell wrote:
> CCing the vTPM maintainer.
>
> On Wed, 2014-02-05 at 17:52 +0100, Jordi Cucurull Juan wrote:
>> Dear all,
>>
>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
>> guest virtual machine that takes advantage of it. After playing a bit
>> with it, I have a few questions:
>>
>> 1. According to the documentation, to shut down the vTPM stubdom it is
>> only necessary to shut down the guest VM normally; the vTPM stubdom
>> should then shut down automatically. Nevertheless, if I shut down
>> the guest, the vTPM stubdom remains active and, moreover, I can start
>> the machine again and the vTPM values are the last ones from the
>> previous instance of the guest. Is this normal?
> I don't know much about vTPM but this seems odd to me. Which toolstack
> are you using? Can you provide details of your config and logs from both
> the startup and shutdown etc please.
>
> I've no clue about #2 or #3 I'm afraid.
>
>> 2. In the documentation it is recommended to avoid accessing the physical
>> TPM from Dom0 at the same time as the vTPM Manager stubdom.
>> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
>> without any apparent issue. Why is it not recommended to access the
>> physical TPM directly from Dom0?
>>
>> 3. If it is not recommended to access the physical TPM directly in
>> Dom0, what is the advisable way to check the integrity of this domain?
>> With solutions such as TBOOT and Intel TXT?
>>
>> Best regards,
>> Jordi.
>>
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>


--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8;
 name="conf-domu.cfg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="conf-domu.cfg"

IyBDb25maWd1cmF0aW9uIG9mIHB2LWdydWIKa2VybmVsID0gIi91c3IvbG9jYWwvbGliL3hl
bi9ib290L3B2LWdydWIteDg2XzY0Lmd6IgpleHRyYT0gIihoZDAsMCkvZ3J1Yi9ncnViLmNv
bmYiCgojIENvbmZpZ3VyYXRpb24gb2YgZ3Vlc3QKbmFtZSA9ICJ2aXJ0dWFsMSIKbWVtb3J5
ID0gIjUxMiIKZGlzayA9IFsgJ3RhcDphaW86L3Zhci94ZW4vdmlydHVhbDEvdmlydHVhbDEu
aW1nLHh2ZGEsdycgXQp2aWYgPSBbICdtYWM9MDA6MTY6M0U6NUM6NDg6QTIsaXA9MTAuMC4w
LjEnIF0KdmNwdXM9MQpvbl9yZWJvb3QgPSAnZGVzdHJveScKb25fY3Jhc2ggPSAnZGVzdHJv
eScKdnRwbT1bImJhY2tlbmQ9ZG9tdS12dHBtMSJdCgo=
--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8;
 name="conf-vtpm.cfg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="conf-vtpm.cfg"

a2VybmVsPSIvdXNyL2xvY2FsL2xpYi94ZW4vYm9vdC92dHBtLXN0dWJkb20uZ3oiCm1lbW9y
eT04CmRpc2s9WyJmaWxlOi9ob21lL2pjdWN1cnVsbC9YZW4vdmlydHVhbDEvdnRwbS5pbWcs
aGRhLHciXQpuYW1lPSJkb211LXZ0cG0xIgp2dHBtPVsiYmFja2VuZD12dHBtbWdyLHV1aWQ9
Yjg1Y2Q1MmMtZDM5Yy00MzY0LTkzMDYtMmJmYTQ3NmJlMmUyIl0KZXh0cmE9Imh3aW5pdHBj
cj1ub25lIgoK
--------------030304040406040803060905
Content-Type: text/plain; charset=UTF-8;
 name="conf-vtpmmgr.cfg"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="conf-vtpmmgr.cfg"

a2VybmVsPSIvdXNyL2xvY2FsL2xpYi94ZW4vYm9vdC92dHBtbWdyLXN0dWJkb20uZ3oiCm1l
bW9yeT0xNgpkaXNrPVsiZmlsZTovdmFyL3hlbi92dHBtbWdyLXN0dWJkb20uaW1nLGhkYSx3
Il0KbmFtZT0idnRwbW1nciIKaW9tZW09WyJmZWQ0MCw1Il0KCg==
--------------030304040406040803060905
Content-Type: application/x-gzip;
 name="enable-domu.log.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="enable-domu.log.gz"

H4sICM0H+VIAA2VuYWJsZS1kb211LmxvZwDtGmlv28j18wrQL8iXqYFi7SIURVmSFaEu4DOb
2k4C28l2ESzYITmUBiY5zAwpWVvsf+97M6QOipKVbbeLApLhROI75l3zLvkjlYonI+KLJOQj
EkoRE3tCpf3MEnvCZZbTyLF9lmRC9Vt+OGr8q0HgdRCImAcHQ5LkUfTaPDI84JlBMc9cnoRi
5Zl+ns1SBk8P0snB61XImKYI+GvAQppH2d+qcCHUVrhSPJAsBJx2BZLQWJ9ZqlWlzHOt0UHP
oyeB0+5bnbB/bHVD37foG6dnddud4x497h+zk06V9lkFNKNazxXArxW8NKJZKGS8I7YQkZap
qorME3cssjTKR67yJU8zbZRHmbODRg23A2+DH2L67E78NEdyp3IInVAezaFfVoD4aq88+blC
DWSxduWXxla8RARsN0Tg54L9fBZDOG6NgUz5bgx8EanAqaKg4jGLnzzA6XW6ncGgyoPKEcu2
40x4wMQcxaoaUI1pIKYLFk5nzY+Z72Y8ZiIMFctqHM2eme9uDulI+DRCBlvNEXBFvYi5MR9J
mm3HBbfpiHvBGV70FLCJq8BK2UoWWBzKJtxn2g0QRAzSjEh2wFRZ7kFuoTzZrtISzS5cN1qQ
PWeSvqyvRnMhW+2IOZ7EL6Mqf8wCN6WSxmrtai4w0BB58pSIaVIxg8aZMj4aZzXxp6G+vlq1
oJRJLoJNUBWB+TYBIYuxxJ9tAmsbFGFpOVtTHBepkJl62Vhcft0FS8BtexnNjyiP5ynimkaK
rVWDeo88MZnokDuwcyVtfQHtiHu6YHpCZHY6sUYy96znQd/td1ujX+qcpiCPPW3MHBoFmUWC
BkzWBngFx4V0VWOfGuWNAeIg4onW/nActF+3j2wUWv/TwkJeJzTEKaSSp83isEGnDXVJVbLz
agDU1ifku15lauzvgd1YErhl/9Gus8oCp6j5G+RNMYelNBtrf661PeWbFo9HdfaYADlSPk8C
Wgcv5Hjh/mI7QLXFJJ3Wxoou8Ju1kCwWE0zwG8whGQ2mkuvEXxdoXLl+AI0fkm9w1VIIHSTc
/997CrP5JiZxlm+CbCwPBkx9NHu7PXT6w2M27PnD7mBIq82dxuW6K3XaLfypto5GF8mD0RYV
eLhdxbmboa0IrYRWWxaNBMYvWudNwQCZ2fVmGVMu5Hfo+jImJzTaFBqIXeK4uWLatXWYI8Cc
0pmJeDx7h0hJfeiPJpWstIwwCb1t0Ket0CyNf8c4xAEnt/CQWmdvi8dyjmgXL6vmn/K1KTcu
ayoSNxVTJqFJNA2tyqSYHaxiSIa1YDN8SjN/HIjRZgxfUjVeBjeMSL82Gv9gCbnjCY9pRD48
/KkJEN356aECjPBMqQfKHH4+O2pqqkRCWh0xhbAO6qlJxlRCvwM0+Nin/VAT3ZVEJM1cjypm
GLI5Q2SWuSFUH8PwjUGPw8SNuMr0o9Xz4d6b1hRhS2JpQMQSeNwsdQ8jOtJsi0dQGV0sjUNS
XxgL5f2noaGHw/vUQbc+vxngm2bj7m5I3iU8K89wM2iIqqK4rHx64r2h3TnEBUfjdKjVOlmx
KsDmED8oIVoYMlfXSLNEhGRJYKxETxYg48E0BGtQjxY2halIPymcdkfTFNcD0KgIOSOSJiOG
vtOxS6zSuxoXBpgMcUFNqxCdYOkRSTQD6BNPUxYA1Gkv24hT8CEjGC2ERthOZUISqInE87qo
vfnvyCoPOizfHBkmwJ81G5cwQyYByJ8SkF8RmmkV8CygLN+2mo0fGKBIBvMAK7AQiL843xdv
AW8uG6qErawkOk+GMH9WoBAVSkSMtFotLQ1Qj5Iso56bYVVGoVBzOMyYy6k5QTf6ecRks/E4
RquRg3eBruip0Oeam2QE7LVflyEIvvCN+Usy6GAUmLCedNBeIQ0MKZB4uQL9SncERCQEGm7i
4C1Dn/X7Ha+zOESN8yzAfqbuELjWy4cwc8hlHsczgoPdcCl5nOpb01lWoJj91vgen/RX+IaG
r8Hf2MaSZmPiBTB7nzgdwhUBrGbjL+svcn57c33/4f2jjj0zPNpAaRvKBWKzgT8PYAAd7WgF
cnhknugPkn3NIYMyuHLHNRYj7BlasQACoCg6GBjFHGEGX7ttFyD7K/bFttM1UjQb15RH4J1M
6Ju1I5kdMprlklkelZIzCUcPjgeDfntAoOBDrCgiQtxOEN031Jpnrnypz5MXhFLoPcyap7q9
k7oIqLP69fm60UPPbq8ZvI745vxynRoEqyGvGG75qNJqFkenXN3ff7gfmlTk5okncvBQaKin
PBsT6Z9azgZ2+uzfwu/MgyGYwNCcKOpnHG4fNuwYX5gYLcmg7Px4dv9+SN4LMoZMB5lCa43D
M2mT6RgqNHDnSQBEJa4fCcVcjXKILdtReTCe2em0CB8lkCkCMNBHSImK/PPq4eJ71IqhQ0k2
huTFkhwTWxeqw8tYxzthdXbCcnbCagPWqy8/vPryd0LOoQNCm31/AXgfHsjhcctxWr2j75ug
YbMhAVwWdfj4R9t87SYtMgO5BhI1gyQSE2z3MXFBs9AJ1WsQTmZci6sh7efBMShj1hLEnsTQ
u+S/WEZzIiFAQetTGyLU1nVI2p9F9FaKPLWiiatNIgP3/Qf39tPNA7k9e//2lCUucPz00Pr0
eG0NCvDdZVnnTscTH6x+c/XT49n57dUpVA5Auf18B7+nJXMbmKsplNqHnx6u4YqeAscIehfp
z+SYeZbKE6dPdMtpRD+lOd6mWkZLUl7eETkeeeRrzoEjywj0pKdYrGLqZr4Hz7CKSUiN+n8a
h6qwBQ7xJg5Ol1+6CSGPH+/INTqBnFZe4Kc01v4ZvoN6RX6kJkZ0h1IkcDBMwnQItVpV/PMC
58LgaMevIJRtwC9UO/VCxGkErWFAVO77EPwhBNGsSrRafrICtousYGQdolskRXBADqdL1JIV
OkKh0+TEiyC3F0xPd6tDBFf9p9XSCh4btLsdfcvgEJg8InPRIFE2G4HkuDu2ZebbY6hVcCla
/hCune6sQBmRwi0EaJGDySG8x9bw1Z/fXp89nt0OyZ3Axgo6LMVd1MlVPk1Alsyc0YKI+O67
H1nki1hzLJIHPH7ANgVNkAPzIQGWXwj5cAPTWROhRb+L+y4cGok2An5q6XfGGjAl1BHlKbkV
Iw5oBCI9B+o7mkDawa82cLDoADcDnmjwoTqC/qz4QEZ4M8hBeUcOQJspwSQ2YXDG8nkXY+Y/
aS/OM4pafgx1SYNIAUNrfLEVJDU7VP5TC7JOlxw6R8SyiP0zWTyzKNmaVIDPNjgmSkaT16R7
Mhj07E4bm3wj5GvSPz55c+LYA6fntLsQacJ/Utvkwsm3TjbcyzlzSfSn+bk9x3agHYYkVBz6
xjnpdmz8jgYEWZy5ZMt7GILyRLtPq7gwKboGk7ml12w4ZbKK3+9KSh0ay96oIF5hZCOizSDk
Q2iiPIKJdB0PKiHiJSKxdP9l/G966zxdDt9YQI4RsrzMn9+S0h2/IdQKZpDIVq3D074ed0DQ
szSNZiboJJtiiMlcA1bxvwn9HMUvbk4kRIopZjGRVYyzjDzHISwbtwHxErKrjKE8AJyHhKeE
BoFuM8rtHpZbGqFDZ2iJHNLdosPUbKDzqKg/NzbNA54Fq8LPgUW4gMFH2DDXRsjm2LhnmeRI
WfYUmJcIm0DOUPXHqfEmSVLIUyF/Buj12bvbq8tVqA9loUIJScMkRsngBoFJ+tBfHF7zhEaQ
bm9M/1G0HVDHIMWaL0CQsjYz6rCD/PiQCbNkqErbmEPWxFmA9E0tXaO/saYej3g2G4Kj9ZWg
WRVPDXHb8eXh5t3Hn7GW6GuNM9BaFjsMYqt9RDSCmYrtZbcvxFj3emVE3Oj5Ct5auG7EfOEW
LF/Isur4Y7C7wiKXioj7M3J2cXH18XGIMYeNdT31dZSr8fb7uYz+KcFvpEzeCWoQ/yOhvk2m
3USaO/G/micxrMpMuRIyTM8LuvKmUmCLx5QeaR6v7u+IgvmARjjZ7ERz8+72dgMNnehKBYOL
iGHQx33EMvwxlzoFijDUBWYV+imJl/MRqUtIutUekntm4V8p5BrZlGgEAPwH/NuEedbDnNkw
a2pwvrkOY8BA+3z85j9DuqRQjaHq5onWQo/SH99dQj/Re/M77auDk5P9vnq/r97vq/9P9tXO
fl/9B+2re79tX93b76v3++r9vnq/r97vq/f76j9wX93b76v3++pv3FfDw/2+er+v3u+r9/vq
/b56v6/e76v3++qd9tW9Y6fT+Df9EGIHhjsAAA==
--------------030304040406040803060905
Content-Type: application/x-gzip;
 name="enable-vtpm.log.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="enable-vtpm.log.gz"

H4sICM0H+VIAA2VuYWJsZS12dHBtLmxvZwDtnW1z28iVRr+ryv8B6y8r71oiQYJvrnWqPLI9
mZporJU1k02lUlwQaEpckwADQrKdrfz37QYoihQgWfGasaTnqMqyTdwGGn36dvft5wI8CrPF
JDn1ojQZT069cZbOvMZZOjON/4nOo/PsfDpt/JdJGheTLD8Pp37jIp/P9hb5+ShOZ/vR+HTn
f3c8+/PU/ncSP33hJbbE8/Kj8pz2s9Kk/Gw4ScbpxmfF5/nnubGfPp1fPH2+eeQsnLsD/xGb
cXg+zX93/XiaLm49vlhM4syMrU3z2pEknBXXtFU/33P35V8ve35e3NPTbqfph81uf69nuqO9
II56e4Nxv23/2467gen3+u3getlPizjMw+JONw78/ZrdfBrm4zSb3dE6TadFna7fTHaeDM/S
fD49Px0uomwyz4tmOcnOzdOdmrM9Hd1AYhZ+Gl5E8/NFzUXCi3AyXR3982ZV/3LN2FrNCnZf
sEvS2NzN0J5vaJsrMjOT5LdCzxfRcGbPW+Atba6buPucmdmHkbXp+4PW9TOE2anJb7O4mMQm
XRns+df73VkYpx9Xx7vBdWB5NMwnM5OOxwuT1zS2+WSi4c29d5pG4dSd4NaGiCeLcDQ1w9nk
NAvz220tsKJrfQHDaPohNhfDhW2hfMPhry5qLiaRKQDY3mLsCJMmd7BcjirhJLn9ltbK3OWs
N7ag+ZRn4ZfvtzAb2oHpjpZnF7Mvmy6iMxMP52EWzhYVH7yycA1xnnxI0o/JtWYobD6ayelZ
XtP7iqNR4VS1h+Ymm6TxTUcXU9t8Nx20w5VJos83HS7aYNkt9/xbx7JJOk+z/A4jyST7612s
7Kx1h5aPpuFkthoc3obThakM+/VEPpgsKbrc08b5ImsUDtiYTkaNT3aCHKVpvjk5nv6tDtnC
jl8fbhw1ChN3qmkaxiar7d7XbIZ2oKppnZpbL29/Fk8nSXHvZx8nySSfR9nLJE2uN0JhbHun
HUA+3FwN02817bSzuDYab2KvnX7cectqr9vWtPrItpdJ4uHlAqNZ1xpXNstJ/Yb6zt3INQ/z
s4Lil9Y5+5PZaV2zXNizFA0Yh3WHl7X5gu+6OT8s2i0LP9b2lGIWv/leMjNLL9zgfkOjZCaM
P2aTYtCv62aTxTCK7XrPFb8B2FoHeppMomvdbP3oPLKz4cUtBhfj0W1HP9x61LLYYl8pzj87
zeoguKnkpjNdrg5H/U4Ud1rRXtweRHtBuxvsDdrN7l5rNA6DXndkWqZ1k0us32aaDOfpR5PZ
FUG5blnkWfr56aZFZpzrF93GlFPw5vGPYR6dxenpzWeIsnBxtn54p6zS33d2rAd4h3ZQmIVT
7937f3lijxTXKJaKthHsiqTZbO7+9urZk6JQkllnOjULd6jfbBb2Z2FmZzZbwH0Yhf1+xxU5
vCzizfPhKFyY4mzj1dncmfLh2A445dk6pfVsnAynk0VeWPfXr20H8HIJ4g6tVak4MDWJ/fjJ
5W2Pp+FpcdblR3YMHLpB8IW3OQQu7zf68KIs1/wU9vyouWfvw/39ZOfw8IX3ky1weeZhbqe7
6xUYmstPe4MoCFZHhpasW+QXjTXeaEd77PLIwA+D5ZGiKt7qJou6rJVxpZK4aJlez6yOlMDm
Y9sC496yFe1St/igYHQYzucu5LNzUJp99rIwOTX2JEHT/Xh7Jcqms7Tr0txZ2vvbW9bZc6NK
mkw/26MfJvO5ie1Rv7C+bJxJaIkZz3UML5y6WTJPM2/s/gzcXRe/n+2VF9kt/3pWFo8LCK9t
OJDEttZzz9Z64YW5rbi7xl7L/ir/uf9k5/fGGlgnsANPYVMedL/3gtU/rd2qTu5W3Mok8yZJ
brKxDSSuHbUR6yKdGm9/f7+oiy19muR5OBrmbqB1VXJ3bC/mGsmvOX+xajufmuzJzsmZayvv
6U9xMUTP0+KqjtdlTTvN55f9zTaiXzb6ZTG7qFjYhqsvalttvWirLGqLjM4X3mQFIfbSxLOr
J893nuQsuwMzMFcXWZyd57GboOouMh5tXKRdXuT1+Wz22XOr9Bdrg8NL5yOtaK3+y3V85bTt
XnfjtEF52tK+sijxnuy8t3UsuqGrqLf7rPyk+E9m/npuBzFjvaBdc1Oe+WSnv9gyKmbz6EXQ
6rg+6sayoururCdHh96b2fm06KW7/n5rv7nf2wt6nWerYu1O77LYe1fsfO5GGeNNFt5/R1MT
Zv96Zdpfmb6aTr2jg+MrHn+zPPLUW66SvItwait/VXLgX5Z0VToMXbslYRIZ7yCdOY9YeMto
Lr66oeaq0B/SU29qLszUs27rrrPr1g32Jl6u/xQeWlzgh1cHP3svr/1cNaI9v5sp99zoa5Ji
KXrXHnLDBd9maZLXXXE+G7tDL9x9eH8MJwUWN14sp2rnlomJchtF7u9ft/9haXNQ2ri22TS4
dM+/he4Eri3nUztCx97iPIrMYjG2rfS5bFAb2ToQwWWTroq6Cs1dHOvaIvd++e3Qc75pR7gn
O092/q364/3wh5/fHr/75aS4jzIebVyM4kav2/euzFzpJzuXt2mHlWVQUUbBjWZjeajxV7dc
bvhtV/7JzttwMi37UsHqLoUaYxPm55nZG4VZNjGZbUe/2+4HtrNE9k4WXjr2On7LG33OXZ+s
uaVVlW0rDdO5SXY3b+uZt/c754XLnun3Vq72m6O/3pbupq3V0M3DZ7Zj2+HS9WXfb7/wXpvR
+ekLzx02S68cOgfa9Z+7eaj86S19053BXauzutZxOSI4YG+SKPs8L6B/MJ/Lrc1l06zjbg06
q6se2+YsRvFpaof4sohrRFfhqxK+P7i6nG3/7mCwbLiyhOsea71l2VO83fKszzb6WnNQf3H/
6uLebrlF9DLwB0G7GTz7hrXxn5UkFuWwVjTH2ig0dNR2lzZ52a7OJmivG7030/GJPfjWelLV
uD3YgOoODOdZclq17F5ZHqZJOprYZY+dFP3gumGvuTI8Sj+4Ecnz/N5+1c5f2bntUd/ZtTqt
wXP7O+hXrFsb1q3irK1e97n9HfgV6/aGddtZd9u2k3Y77YptsGEbOFsbIDy3v6q2nQ3bTlGL
jj2vH3Qqtt0N2+6/v3C2dkj2O9W26G3W11Wied3I9/0qLLui96uw/E6vano2C6Ma035QNc0W
4fDNz3XG3Q1jZ3fqJqAwN0PrxzUlBmuVXvq+9XE7qBX7xM75q0V6lYssJqfJ7vH7V8P39s/R
zwfv/eH737/yq52/2ayUvTDZZPz5bqXbd7ny6zfHNUWrLVN74drCfrXWphwgi9Jv3heFf/M7
NUWDStHY3LFof1W0rKo3n9pJynMRUsW45X+piu9evTm6oV1bnS9V8rbCg3+gmt3VkssNfHtu
5CvXEiZ2q4/N4fTaSFl+vuuqcFIusMpFSrE2cmsAP2g0L9c7l2e7Pk0Gm12oPDSMykXibvNZ
Xal20LnqAH92dTl59ePw+D9/HR68Ozx89cvrv9SW6q8NCEWpd8evh28+uaouC7gl4Wk2yT8v
h/qrmy3NdmurEzT7wfpqd1n56+3oguJrPX9hl9FusWKnNFvC+XcZOXcH3fb6tLg2mfxgTi3L
P7qtsML45eX6tR+Zpl0zJy+LOdNNiC+b3sJd4XKuXT9jv39Z4z9mqT3V2kRrV2L10+zGMqXV
XOs5SVyOUxtrFHue1QoF8v9c8j7kRcnfH5+/ZU6wkcD9mRY63W61i/xo8oNwHo4mU9szLsut
Pijw9Nb7yob9qsusF+j5vc3LHLw6Gh4dvzt6c3zyp7or+EG/vsDw5Kf39s/hm3e/ntTe0r3t
nAxL973PDQY39LnXvx6/Ovnp3S/3ucPF7Qc5Gt6fobDv13TLo4NjtxdTM112+uvdcWn3wOZL
hiRd9vdndIA9fg97/B72+D3s8XvY4/ewx+9hj9+jhOD1kL9l17eDz0Men5ciH+HzmuTNPdb4
IL9V8iE+r0l+7OPzouT7+Lwo+TE+L0l+0Ozi86LkY3xek7wf4POi5Ef4vCb5VgufFyU/wOc1
ybeb+Lwo+R4+L0qenBxR8gH6vCp59HlR8k3ieVHyPvG8KnnieVHyLeJ5VfLE86rkvz6eB8w2
wfRZgImSD0iQVCVPgqQqeRIkRcl3SJBUJU+CpCj5LgmSquQRVETJ94jnVckjqIiS7yOoqJJH
UFElT4KkKPkBCZKq5EmQFCUf8gIjVfK8wEiU/Ah9XpX8A9Tn6RFbXfkx/6uSZ/4XJR8y/6uS
f4DzP+S/CXny80TJj8jPUyVPfp4o+Yj8PFXy5OeJko/Jz1MlT36eKHlDfp4qefLzVMmTnydK
PiSeFyU/Ip5XJU88L0o+Ip5XJU88L0o+Jp5XJU88r0qeeF6UvPn65+0As00wY5KiVcmTFK1J
PmySFK1KnqRoVfIkRYuS90mKViWPiCpKvoWIqkl+YIjnVckTz4uSHxPPq5InnlclTzyvST5s
Es+rkieeFyXvE88/dvI04zYdiLf7y5In21CVPNmGouR5u78sed7uL0qet/vLkkf4EiXP2/1l
ySN8qZJH+BIlHyF8iZLvE8+rkieeFyXPt/XIkieeFyXPt/XIkieeVyVPPC9Knm/rkSVPIqso
--------------030304040406040803060905
Content-Type: application/x-gzip;
 name="enable-vtpmmgr.log.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="enable-vtpmmgr.log.gz"

--------------030304040406040803060905
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------030304040406040803060905--


From xen-devel-bounces@lists.xen.org Mon Feb 10 17:25:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCubX-0003XN-OF; Mon, 10 Feb 2014 17:25:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WCubP-0003WU-6y
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:25:10 +0000
Received: from [193.109.254.147:4936] by server-5.bemta-14.messagelabs.com id
	C0/EC-16688-D6B09F25; Mon, 10 Feb 2014 17:25:01 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392053099!3331082!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22275 invoked from network); 10 Feb 2014 17:25:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:25:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99543698"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:24:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:24:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WCubK-0004AU-Cz;
	Mon, 10 Feb 2014 17:24:58 +0000
Date: Mon, 10 Feb 2014 17:24:42 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392052699.26657.29.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402101721000.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
	<alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com> 
	<1392052197.26657.26.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402101715150.4373@kaball.uk.xensource.com>
	<1392052699.26657.29.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Feb 2014, Ian Campbell wrote:
> On Mon, 2014-02-10 at 17:16 +0000, Stefano Stabellini wrote:
> > On Mon, 10 Feb 2014, Ian Campbell wrote:
> > > On Mon, 2014-02-10 at 17:06 +0000, Stefano Stabellini wrote:
> > > > On Fri, 7 Feb 2014, Julien Grall wrote:
> > > > > On 07/02/14 18:56, Stefano Stabellini wrote:
> > > > >    > +static void gic_clear_lrs(struct vcpu *v)
> > > > > > +{
> > > > > > +    struct pending_irq *p;
> > > > > > +    int i = 0, irq;
> > > > > > +    uint32_t lr;
> > > > > > +    bool_t inflight;
> > > > > > +
> > > > > > +    ASSERT(!local_irq_is_enabled());
> > > > > > +
> > > > > > +    while ((i = find_next_bit((const long unsigned int *)
> > > > > > &this_cpu(lr_mask),
> > > > > > +                              nr_lrs, i)) < nr_lrs) {
> > > > > 
> > > > > Did you look at the ELRSR{0,1} registers, which list the usable LRs? I think
> > > > > you can use them with this_cpu(lr_mask) to avoid browsing every LR.
> > > > 
> > > > Given that we only have 4 LR registers, I think that unconditionally
> > > > reading 2 ELRSR registers would cost more than simply checking lr_mask
> > > > on average.
> > > 
> > > You also need to actually read the LR and do some bit masking etc don't
> > > you?
> > 
> > No bit masking, but I need to read the LRs, which are just 4.
> 
> Having read them then what do you do with the result? Surely you need to
> examine some or all of the bits to determine if it is free or not?

Right:

if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )

I thought you meant we need to write back a masked value.


> > > What about implementations with >4 LRs?
> > 
> > I could read ELRSR only if nr_lrs > 8 or something like that.
> 
> That would be the worst of both worlds IMHO.

On second thought you are right: if we always clear the LRs, the logic
is simpler but may be slower with a high number of LRs and concurrent
irqs.

If we don't clear all the LRs on return to guest and instead use ELRSR
to filter them, we are slower with a small number of LRs / concurrent
irqs and faster with a high number of LRs and concurrent irqs, but the
code would be much more complex.

Overall checking for nr_lrs > 8 would add too much complexity for little
benefit.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:32:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuin-0003uQ-VV; Mon, 10 Feb 2014 17:32:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCuil-0003uL-VF
	for Xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:32:40 +0000
Received: from [85.158.143.35:60292] by server-2.bemta-4.messagelabs.com id
	18/51-10891-73D09F25; Mon, 10 Feb 2014 17:32:39 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392053557!4600946!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18381 invoked from network); 10 Feb 2014 17:32:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:32:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99546114"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:32:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:32:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCuig-0004Go-W1;
	Mon, 10 Feb 2014 17:32:35 +0000
Message-ID: <52F90D32.7080807@citrix.com>
Date: Mon, 10 Feb 2014 17:32:34 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <52F90A71.40802@citrix.com>
In-Reply-To: <52F90A71.40802@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 17:20, David Vrabel wrote:
> Here is a draft of a proposal for a new domain save image format.  It
> does not currently cover all use cases (e.g., images for HVM guest are
> not considered).
>
> http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>
> Introduction
> ============
>
> Revision History
> ----------------
>
> --------------------------------------------------------------------
> Version  Date         Changes
> -------  -----------  ----------------------------------------------
> Draft A  6 Feb 2014   Initial draft.
>
> Draft B  10 Feb 2014  Corrected image header field widths.
>
>                       Minor updates and clarifications.
> --------------------------------------------------------------------
>
> Purpose
> -------
>
> The _domain save image_ is the context of a running domain used for
> snapshots of a domain or for transferring domains between hosts during
> migration.
>
> There are a number of problems with the format of the domain save
> image used in Xen 4.4 and earlier (the _legacy format_).
>
> * Dependent on toolstack word size.  A number of fields within the
>   image are native types such as `unsigned long` which have different
>   sizes between 32-bit and 64-bit hosts.  This prevents domains from
>   being migrated between 32-bit and 64-bit hosts.

s/hosts/toolstacks/ (or toolstack domains)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:33:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:33:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCujH-0003wa-DB; Mon, 10 Feb 2014 17:33:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WCujF-0003wO-W7
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 17:33:10 +0000
Received: from [85.158.137.68:50937] by server-14.bemta-3.messagelabs.com id
	C5/0A-08196-55D09F25; Mon, 10 Feb 2014 17:33:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392053586!900730!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15149 invoked from network); 10 Feb 2014 17:33:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:33:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101371325"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 17:33:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:33:06 -0500
Message-ID: <1392053584.26657.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Mon, 10 Feb 2014 17:33:04 +0000
In-Reply-To: <alpine.DEB.2.02.1402101721000.4373@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402071847430.4373@kaball.uk.xensource.com>
	<1391799378-31664-3-git-send-email-stefano.stabellini@eu.citrix.com>
	<52F567CB.7080302@linaro.org>
	<alpine.DEB.2.02.1402101705340.4373@kaball.uk.xensource.com>
	<1392052197.26657.26.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402101715150.4373@kaball.uk.xensource.com>
	<1392052699.26657.29.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402101721000.4373@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: julien.grall@citrix.com, Julien Grall <julien.grall@linaro.org>,
	xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH-4.5 3/4] xen/arm: do not request
 maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:24 +0000, Stefano Stabellini wrote:
> On Mon, 10 Feb 2014, Ian Campbell wrote:
> > On Mon, 2014-02-10 at 17:16 +0000, Stefano Stabellini wrote:
> > > On Mon, 10 Feb 2014, Ian Campbell wrote:
> > > > On Mon, 2014-02-10 at 17:06 +0000, Stefano Stabellini wrote:
> > > > > On Fri, 7 Feb 2014, Julien Grall wrote:
> > > > > > On 07/02/14 18:56, Stefano Stabellini wrote:
> > > > > > > +static void gic_clear_lrs(struct vcpu *v)
> > > > > > > +{
> > > > > > > +    struct pending_irq *p;
> > > > > > > +    int i = 0, irq;
> > > > > > > +    uint32_t lr;
> > > > > > > +    bool_t inflight;
> > > > > > > +
> > > > > > > +    ASSERT(!local_irq_is_enabled());
> > > > > > > +
> > > > > > > +    while ((i = find_next_bit((const long unsigned int *)
> > > > > > > &this_cpu(lr_mask),
> > > > > > > +                              nr_lrs, i)) < nr_lrs) {
> > > > > > 
> > > > > > Did you look at the ELRSR{0,1} registers, which list the usable LRs? I think
> > > > > > you can use them together with this_cpu(lr_mask) to avoid browsing every LR.
> > > > > 
> > > > > Given that we only have 4 LR registers, I think that unconditionally
> > > > > reading 2 ELRSR registers would cost more than simply checking lr_mask
> > > > > on average.
> > > > 
> > > > You also need to actually read the LR and do some bit masking etc don't
> > > > you?
> > > 
> > > No bit masking, but I need to read the LRs, which are just 4.
> > 
> > Having read them then what do you do with the result? Surely you need to
> > examine some or all of the bits to determine if it is free or not?
> 
> Right:
> 
> if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )

This is exactly what the ELRSR registers do for you.

Why not just use the h/w feature which gives you exactly what you need
instead of reinventing it yourself because you think it might be faster?
I think there is a certain amount of premature optimisation going on
here.
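A minimal sketch of the ELRSR-based scan being discussed, assuming GICv2 semantics; the raw GICH_ELRSR0/1 values are taken as plain parameters here (the actual MMIO reads, and the function name first_free_lr, are not from Xen):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of an ELRSR-based free-LR scan.  Per the GICv2 architecture,
 * GICH_ELRSR0/1 hold one bit per List Register: a set bit means the LR
 * holds no pending or active interrupt and is usable.  The register
 * reads are left out; the raw values are passed in so only the scan
 * logic itself is shown.
 */
static int first_free_lr(uint32_t elrsr0, uint32_t elrsr1,
                         unsigned int nr_lrs)
{
    uint64_t elrsr = ((uint64_t)elrsr1 << 32) | elrsr0;
    unsigned int i;

    for ( i = 0; i < nr_lrs; i++ )
        if ( elrsr & (1ULL << i) )  /* set bit => LR i is free */
            return i;

    return -1;                      /* every LR is in use */
}
```

With 4 LRs this replaces the manual GICH_LR_PENDING/GICH_LR_ACTIVE test per LR with a single bitmap lookup.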

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:35:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCukw-00046v-0I; Mon, 10 Feb 2014 17:34:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCukv-00046m-4F
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:34:53 +0000
Received: from [85.158.143.35:32641] by server-3.bemta-4.messagelabs.com id
	A3/4D-11539-CBD09F25; Mon, 10 Feb 2014 17:34:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392053691!4605094!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5234 invoked from network); 10 Feb 2014 17:34:51 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:34:51 -0000
Received: by mail-ea0-f177.google.com with SMTP id m10so536397eaj.8
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 09:34:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=mL0Bfnnu9+xYpMvc654pW3yukGLkbiq3v511/5pBa3U=;
	b=bJmBvMCSosvQbfQmyASW/OVhIPPb6tJYmACSheosXpAFxJssWPhJmCSqJxCR2E+j2B
	/mwCCD8CKiTxDtGfb4y/F/i93M2yG0e1mS9LiBbovJmzDNVKZs3rlxBEcwCXEEczLulz
	jM0wIRrLSeK1mIfldSFPsnomNysRmmthZ9NWLYw+msF/2v2oImnWSxf4pRNd8t3lofge
	n+UFoYxXrhkL6LUbIKhqpDNCXUuSxqaf+4nsZ/8k/V4k8dy8zYON56jqqEdQEvWHw9cr
	2FWbA445W7vEKV+qut7hiEUEz9gK7CGYoETvghUnyuiRrqTr99AByPs9gMf4T5iin0cu
	D6Ag==
X-Gm-Message-State: ALoCoQmjtsd7zRmc2255Qtm1e+i5h5OjURVQRieTFBF/B/+HUqKFwcnpoNlDqagxd0qiF7OukP+7
X-Received: by 10.14.149.131 with SMTP id x3mr23181462eej.7.1392053691167;
	Mon, 10 Feb 2014 09:34:51 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	k41sm57201787een.19.2014.02.10.09.34.49 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 10 Feb 2014 09:34:50 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 10 Feb 2014 17:34:46 +0000
Message-Id: <1392053686-16843-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH for-4.4] xen/arm: Correctly boot with an initrd
	and no linux command line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When the DOM0 device tree is built, the initrd properties are only added
if there is a Linux command line. This later results in a panic:

(XEN) *** LOADING DOMAIN 0 ***
(XEN) Populate P2M 0x20000000->0x40000000 (1:1 mapping for dom0)
(XEN) Loading kernel from boot module 2
(XEN) Loading zImage from 0000000001000000 to 0000000027c00000-0000000027eafb48
(XEN) Loading dom0 initrd from 0000000002000000 to 0x0000000028200000-0x0000000028c00000
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Cannot fix up "linux,initrd-start" property
(XEN) ****************************************
(XEN)

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This is a bug fix for Xen 4.4. Without this patch, Xen won't boot with an
initrd when the Linux command line is not set.
---
 xen/arch/arm/domain_build.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 47b781b..5ca2f15 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -209,12 +209,15 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
             return res;
     }
 
-    if ( dt_node_path_is_equal(node, "/chosen") && bootargs )
+    if ( dt_node_path_is_equal(node, "/chosen") )
     {
-        res = fdt_property(kinfo->fdt, "bootargs", bootargs,
-                           strlen(bootargs) + 1);
-        if ( res )
-            return res;
+        if ( bootargs )
+        {
+            res = fdt_property(kinfo->fdt, "bootargs", bootargs,
+                               strlen(bootargs) + 1);
+            if ( res )
+                return res;
+        }
 
         /*
          * If the bootloader provides an initrd, we must create a placeholder
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:36:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCumI-0004Em-HF; Mon, 10 Feb 2014 17:36:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCumG-0004EV-PE
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:36:16 +0000
Received: from [85.158.137.68:55778] by server-3.bemta-3.messagelabs.com id
	21/07-14520-F0E09F25; Mon, 10 Feb 2014 17:36:15 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392053774!913476!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20464 invoked from network); 10 Feb 2014 17:36:15 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:36:15 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so1392001eak.29
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 09:36:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=79A+Nz7vQuGsPhNym2Sz/IkE3absR+VIjtd5ulI1o6s=;
	b=fxVcsbj++GNhbYjNHUeqTripBTUoBq7Tp0dYk2Rc31Rqtz/TDgfkQ0+E7l+ecQVCS4
	5HcDKhpymOvqivoXp7Tki6phuRSAgj+tjpsJ//C9S6sm5PjFFq1FnvXiB37N7N5Mtd8s
	2C43NNtjdlOj8ZgMXU4CTmO6NF61gj9fLOs6bo48H2YvjK0yF2OMEnm4rxr7UyeF4F6w
	dXibKisV2VSY0ig/9KqMoxYIYUhFdsYxQLema1V+DAo4jJLmr0fF1KBePfgVkFef+GYb
	Fcv/5aQj3GNK3ILSPGT8A+aPuyw86t58JR99HaJyY59zQ1IP+QPBgxfwibguvTpleH7b
	IlhQ==
X-Gm-Message-State: ALoCoQkzFdJDbAsxt53S7gvSbBJlzhb9kF1iXE0iHroXUJzDXj6UfwNzxuCwsSlKIt3lwy2pf/MU
X-Received: by 10.14.174.5 with SMTP id w5mr27654088eel.14.1392053774818;
	Mon, 10 Feb 2014 09:36:14 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 8sm57050951eeq.15.2014.02.10.09.36.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 09:36:14 -0800 (PST)
Message-ID: <52F90E0C.4000008@linaro.org>
Date: Mon, 10 Feb 2014 17:36:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1392053686-16843-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1392053686-16843-1-git-send-email-julien.grall@linaro.org>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xenproject.org,
	tim@xen.org, ian.campbell@citrix.com, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH for-4.4] xen/arm: Correctly boot with an
 initrd and no linux command line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Forgot to cc George.

On 02/10/2014 05:34 PM, Julien Grall wrote:
> When the DOM0 device tree is built, the initrd properties are only added
> if there is a Linux command line. This later results in a panic:
> 
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Populate P2M 0x20000000->0x40000000 (1:1 mapping for dom0)
> (XEN) Loading kernel from boot module 2
> (XEN) Loading zImage from 0000000001000000 to 0000000027c00000-0000000027eafb48
> (XEN) Loading dom0 initrd from 0000000002000000 to 0x0000000028200000-0x0000000028c00000
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Cannot fix up "linux,initrd-start" property
> (XEN) ****************************************
> (XEN)
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
>     This is a bug fix for Xen 4.4. Without this patch, Xen won't boot with an
> initrd when the Linux command line is not set.
> ---
>  xen/arch/arm/domain_build.c |   13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 47b781b..5ca2f15 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -209,12 +209,15 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
>              return res;
>      }
>  
> -    if ( dt_node_path_is_equal(node, "/chosen") && bootargs )
> +    if ( dt_node_path_is_equal(node, "/chosen") )
>      {
> -        res = fdt_property(kinfo->fdt, "bootargs", bootargs,
> -                           strlen(bootargs) + 1);
> -        if ( res )
> -            return res;
> +        if ( bootargs )
> +        {
> +            res = fdt_property(kinfo->fdt, "bootargs", bootargs,
> +                               strlen(bootargs) + 1);
> +            if ( res )
> +                return res;
> +        }
>  
>          /*
>           * If the bootloader provides an initrd, we must create a placeholder
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:38:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:38:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuoP-0004RK-7U; Mon, 10 Feb 2014 17:38:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WCuoO-0004R7-6g
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:38:28 +0000
Received: from [85.158.137.68:17323] by server-5.bemta-3.messagelabs.com id
	42/F4-04712-39E09F25; Mon, 10 Feb 2014 17:38:27 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392053904!913723!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18029 invoked from network); 10 Feb 2014 17:38:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:38:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="99547767"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 17:37:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:37:58 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WCunu-0004Kk-0E;
	Mon, 10 Feb 2014 17:37:58 +0000
Message-ID: <52F90E6F.9040409@eu.citrix.com>
Date: Mon, 10 Feb 2014 17:37:51 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1392053686-16843-1-git-send-email-julien.grall@linaro.org>
	<52F90E0C.4000008@linaro.org>
In-Reply-To: <52F90E0C.4000008@linaro.org>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH for-4.4] xen/arm: Correctly boot with an
 initrd and no linux command line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 05:36 PM, Julien Grall wrote:
> Forgot to cc George.
>
> On 02/10/2014 05:34 PM, Julien Grall wrote:
>> When the DOM0 device tree is being built, the initrd properties will
>> only be added if there is a Linux command line. This results in a panic
>> later:
>>
>> (XEN) *** LOADING DOMAIN 0 ***
>> (XEN) Populate P2M 0x20000000->0x40000000 (1:1 mapping for dom0)
>> (XEN) Loading kernel from boot module 2
>> (XEN) Loading zImage from 0000000001000000 to 0000000027c00000-0000000027eafb48
>> (XEN) Loading dom0 initrd from 0000000002000000 to 0x0000000028200000-0x0000000028c00000
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) Cannot fix up "linux,initrd-start" property
>> (XEN) ****************************************
>> (XEN)
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> ---
>>      This is a bug fix for Xen 4.4. Without this patch, Xen won't boot with an
>> initrd when the Linux command line is not set.

Oops. :-)  Looks like a good risk:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>> ---
>>   xen/arch/arm/domain_build.c |   13 ++++++++-----
>>   1 file changed, 8 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 47b781b..5ca2f15 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -209,12 +209,15 @@ static int write_properties(struct domain *d, struct kernel_info *kinfo,
>>               return res;
>>       }
>>   
>> -    if ( dt_node_path_is_equal(node, "/chosen") && bootargs )
>> +    if ( dt_node_path_is_equal(node, "/chosen") )
>>       {
>> -        res = fdt_property(kinfo->fdt, "bootargs", bootargs,
>> -                           strlen(bootargs) + 1);
>> -        if ( res )
>> -            return res;
>> +        if ( bootargs )
>> +        {
>> +            res = fdt_property(kinfo->fdt, "bootargs", bootargs,
>> +                               strlen(bootargs) + 1);
>> +            if ( res )
>> +                return res;
>> +        }
>>   
>>           /*
>>            * If the bootloader provides an initrd, we must create a placeholder
>>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:39:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuot-0004VE-Lz; Mon, 10 Feb 2014 17:38:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCuor-0004V1-Sm
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:38:58 +0000
Received: from [193.109.254.147:3665] by server-12.bemta-14.messagelabs.com id
	8B/6E-17220-1BE09F25; Mon, 10 Feb 2014 17:38:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392053935!3349249!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31572 invoked from network); 10 Feb 2014 17:38:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:38:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,818,1384300800"; d="scan'208";a="101373300"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 17:38:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:38:54 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCuoo-0004LT-CW;
	Mon, 10 Feb 2014 17:38:54 +0000
Message-ID: <52F90EAE.6000603@citrix.com>
Date: Mon, 10 Feb 2014 17:38:54 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F8E56D020000780011AC43@nat28.tlf.novell.com>
In-Reply-To: <52F8E56D020000780011AC43@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] VT-d: fix RMRR handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 13:42, Jan Beulich wrote:
> Removing mapped RMRR tracking structures in dma_pte_clear_one() is
> wrong for two reasons: First, these regions may cover more than a
> single page. And second, multiple devices (and hence multiple devices
> assigned to any particular guest) may share a single RMRR (whether
> assigning such devices to distinct guests is a safe thing to do is
> another question).
>
> Therefore move the removal of the tracking structures into the
> counterpart function to the one doing the insertion -
> intel_iommu_remove_device(), and add a reference count to the tracking
> structure.
>
> Further, for the handling of the mappings of the respective memory
> regions to be correct, RMRRs must not overlap. Add a respective check
> to acpi_parse_one_rmrr().
>
> And finally, with all of this being VT-d specific, move the cleanup
> of the list as well as the structure type definition where it belongs -
> in VT-d specific rather than IOMMU generic code.
>
> Note that this doesn't address yet another issue associated with RMRR
> handling: The purpose of the RMRRs as well as the way the respective
> IOMMU page table mappings get inserted both suggest that these regions
> would need to be marked E820_RESERVED in all (HVM?) guests' memory
> maps, yet nothing like this is being done in hvmloader. (For PV guests
> this would also seem to be necessary, but may conflict with PV guests
> possibly assuming there to be just a single E820 entry representing all
> of its RAM.)
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Having read up on RMRRs again, I suspect it is more complicated.  RMRRs
are for legacy devices which need to DMA to the BIOS to function
correctly.  In the case of dom0, they should necessarily be identity
mapped in the IOMMU tables (which does appear to be the case currently).

For domains with passthrough of a device using an RMRR, the region
should be marked as reserved (and possibly removed from the physmap for
HVM guests?), and the IOMMU mappings again identity mapped, so the
device can talk to the BIOS.  Having the device talk to an
equally-positioned RMRR in the guest address space seems pointless. 
Furthermore, XenServer has had one support escalation which touched upon
this.  It turned out to be unrelated to the problem causing the
escalation, but was identified as something which should be handled better.

One way or another, security when passing through devices covered by an
RMRR would end up being reduced.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:42:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCusO-0004xa-Kp; Mon, 10 Feb 2014 17:42:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCusN-0004xD-Hh
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:42:35 +0000
Received: from [193.109.254.147:20927] by server-13.bemta-14.messagelabs.com
	id BC/DA-01226-A8F09F25; Mon, 10 Feb 2014 17:42:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392054153!3332247!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13583 invoked from network); 10 Feb 2014 17:42:34 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:42:34 -0000
Received: by mail-ea0-f169.google.com with SMTP id h10so3210189eak.28
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 09:42:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=DRNLnKu+qp3NhTKIEmJ33g3RTs8iZVPDV271RA5//go=;
	b=RQKOl7C/YXUNJ2w9t3U2YT5wBAXPwFR0vpZXgllqLc97fD3jd1uY0monVhhgQJkVvt
	DJVRMA87NUuTw0S+sQm9/EoQ1HEzoj1yg67dfhV3+FoqBrU+tU+x7S7P3hVCjFOhtzlx
	sKc4Ua/LUG+m2xRMRQXs+NQ/1VRrw4aRXwCa5393qZsNW65vlUKqoC+/0ikCG+PT95yG
	UMnDXrHwrlADO8IiDGqmZYv8WFkiHChmA3mfa921MjaOWVI2J7Y1+Zf6QT6AodsKxpSY
	VxstPCCpA+s3+nEHiCqjTscRySK3Vmh3Gk255MOMgj2c5xivEIuwcuub+iIXwuIm6SJX
	sAPg==
X-Gm-Message-State: ALoCoQk640bW9uaAzK3NWC+3JmFB2sJm8ZSnqolBOwqE4YzTLdxHmAZeOC9jxgJ5km4i4daE9ycH
X-Received: by 10.14.95.134 with SMTP id p6mr4358521eef.73.1392054153454;
	Mon, 10 Feb 2014 09:42:33 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm56881971eeg.5.2014.02.10.09.42.30
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 09:42:32 -0800 (PST)
Message-ID: <52F90F84.2090506@linaro.org>
Date: Mon, 10 Feb 2014 17:42:28 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
	<52F8F9F8.9000702@linaro.org>
	<52F90DF8020000780011AD65@nat28.tlf.novell.com>
In-Reply-To: <52F90DF8020000780011AD65@nat28.tlf.novell.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 04:35 PM, Jan Beulich wrote:
>>>> On 10.02.14 at 17:10, Julien Grall <julien.grall@linaro.org> wrote:
>> On 02/07/2014 05:43 PM, Julien Grall wrote:
>>> DOM0 on ARM will have the same requirements as DOM0 PVH when the iommu is
>>> enabled. Both PVH and ARM guests have paging mode translate enabled, so Xen
>>> can use it to decide whether it needs to check the requirements.
>>>
>>> Rename the function and remove the word "pvh" in the commit message.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> ---
>>>  xen/drivers/passthrough/iommu.c |   14 +++++++++-----
>>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
>>> index 19b0e23..26a5d91 100644
>>> --- a/xen/drivers/passthrough/iommu.c
>>> +++ b/xen/drivers/passthrough/iommu.c
>>> @@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
>>>      return hd->platform_ops->init(d);
>>>  }
>>>  
>>> -static __init void check_dom0_pvh_reqs(struct domain *d)
>>> +static __init void check_dom0_reqs(struct domain *d)
>>>  {
>>> +    if ( !paging_mode_translate(d) )
>>> +        return;
>>> +
>>>      if ( !iommu_enabled )
>>> -        panic("Presently, iommu must be enabled for pvh dom0\n");
>>> +        panic("Presently, iommu must be enabled to use dom0 with translate "
>>> +              "paging mode\n");
>>
>> Hmmm... this change is wrong. I forgot that an iommu doesn't exist on some
>> ARM platforms (for instance, the Arndale).
>>
>> Do we really need this check for PVH? If yes, I will replace the check
>> with: is_pvh_domain(d) && !iommu_enabled.
> 
> Of course we need it: How would PVH Dom0 be able to do any kind
> of DMA without an IOMMU?

Right, on ARM we have the 1:1 memory mapping to avoid this issue.

I will fix it. Can I keep your ack on this patch?

> Jan
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:42:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCusO-0004xa-Kp; Mon, 10 Feb 2014 17:42:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WCusN-0004xD-Hh
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 17:42:35 +0000
Received: from [193.109.254.147:20927] by server-13.bemta-14.messagelabs.com
	id BC/DA-01226-A8F09F25; Mon, 10 Feb 2014 17:42:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392054153!3332247!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13583 invoked from network); 10 Feb 2014 17:42:34 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:42:34 -0000
Received: by mail-ea0-f169.google.com with SMTP id h10so3210189eak.28
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 09:42:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=DRNLnKu+qp3NhTKIEmJ33g3RTs8iZVPDV271RA5//go=;
	b=RQKOl7C/YXUNJ2w9t3U2YT5wBAXPwFR0vpZXgllqLc97fD3jd1uY0monVhhgQJkVvt
	DJVRMA87NUuTw0S+sQm9/EoQ1HEzoj1yg67dfhV3+FoqBrU+tU+x7S7P3hVCjFOhtzlx
	sKc4Ua/LUG+m2xRMRQXs+NQ/1VRrw4aRXwCa5393qZsNW65vlUKqoC+/0ikCG+PT95yG
	UMnDXrHwrlADO8IiDGqmZYv8WFkiHChmA3mfa921MjaOWVI2J7Y1+Zf6QT6AodsKxpSY
	VxstPCCpA+s3+nEHiCqjTscRySK3Vmh3Gk255MOMgj2c5xivEIuwcuub+iIXwuIm6SJX
	sAPg==
X-Gm-Message-State: ALoCoQk640bW9uaAzK3NWC+3JmFB2sJm8ZSnqolBOwqE4YzTLdxHmAZeOC9jxgJ5km4i4daE9ycH
X-Received: by 10.14.95.134 with SMTP id p6mr4358521eef.73.1392054153454;
	Mon, 10 Feb 2014 09:42:33 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm56881971eeg.5.2014.02.10.09.42.30
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 09:42:32 -0800 (PST)
Message-ID: <52F90F84.2090506@linaro.org>
Date: Mon, 10 Feb 2014 17:42:28 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
	<52F8F9F8.9000702@linaro.org>
	<52F90DF8020000780011AD65@nat28.tlf.novell.com>
In-Reply-To: <52F90DF8020000780011AD65@nat28.tlf.novell.com>
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 04:35 PM, Jan Beulich wrote:
>>>> On 10.02.14 at 17:10, Julien Grall <julien.grall@linaro.org> wrote:
>> On 02/07/2014 05:43 PM, Julien Grall wrote:
>>> DOM0 on ARM will have the same requirements as DOM0 PVH when iommu is
>>> enabled. Both PVH and ARM guests have paging mode translate enabled, so
>>> Xen can use it to know if it needs to check the requirements.
>>>
>>> Rename the function and remove the "pvh" word from the commit message.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> ---
>>>  xen/drivers/passthrough/iommu.c |   14 +++++++++-----
>>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
>>> index 19b0e23..26a5d91 100644
>>> --- a/xen/drivers/passthrough/iommu.c
>>> +++ b/xen/drivers/passthrough/iommu.c
>>> @@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
>>>      return hd->platform_ops->init(d);
>>>  }
>>>  
>>> -static __init void check_dom0_pvh_reqs(struct domain *d)
>>> +static __init void check_dom0_reqs(struct domain *d)
>>>  {
>>> +    if ( !paging_mode_translate(d) )
>>> +        return;
>>> +
>>>      if ( !iommu_enabled )
>>> -        panic("Presently, iommu must be enabled for pvh dom0\n");
>>> +        panic("Presently, iommu must be enabled to use dom0 with translate "
>>> +              "paging mode\n");
>>
>> Hmmm... this change is wrong. I forgot that iommu doesn't exist on some
>> ARM platforms (for instance, Arndale).
>>
>> Do we really need this check for PVH? If yes, I will replace the check
>> with: is_pvh_domain(d) && !iommu_enabled.
> 
> Of course we need it: How would PVH Dom0 be able to do any kind
> of DMA without an IOMMU?

Right, on ARM we have the 1:1 memory mapping to avoid this issue.

I will fix it. Can I keep your ack on this patch?

> Jan
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuwH-0005T1-Ep; Mon, 10 Feb 2014 17:46:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WCuwG-0005Sp-82
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:46:36 +0000
Received: from [193.109.254.147:63622] by server-5.bemta-14.messagelabs.com id
	DC/E3-16688-B7019F25; Mon, 10 Feb 2014 17:46:35 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392054393!3305349!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8908 invoked from network); 10 Feb 2014 17:46:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:46:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="101375265"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 17:46:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:46:32 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WCuwC-0004RA-FR;
	Mon, 10 Feb 2014 17:46:32 +0000
Message-ID: <52F91071.1080007@eu.citrix.com>
Date: Mon, 10 Feb 2014 17:46:25 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
In-Reply-To: <52F90D8C020000780011AD54@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	KeirFraser <keir@xen.org>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/2014 04:34 PM, Jan Beulich wrote:
>>    * The results from XenRT suggest that the new emulation is better than the
>>      old.
> "Better" in the sense of the limited set of uses of the virtual hardware
> by whatever selection of guest OSes is being run there. But very
> likely not "better" in the sense on matching up with how the respective
> hardware specification would require it to behave.

The context of the above sentence was a justification for including 
it in 4.4.  Obviously "occasionally gets stuck during boot" is a pretty 
bad bug that we'd like to see fixed.  But given the tricky nature of this 
whole area, there's a risk that this will cause regressions in *other* 
situations or operating systems.  What I understand Andy to be saying is 
that with the patch, the RTC appears to cause fewer problems than without it.

What your analysis is missing, Andy, is what the effects might be if 
there were a bug.  Obviously other guests might hang during boot; but 
what else?  Might they hang at some point much later, perhaps when being 
pounded with interrupts due to heavy network traffic? Might the clock 
begin to drift or jump around?  Would the XenRT testing catch that if it 
happened?  And, would those potential bugs be worse than what we have now?

There's a reason for trying to go through the whole exercise, 
particularly in bugs like this.  I do have a lot of faith in our 
intuition to consider hundreds of individual factors and come up with a 
reasonably good judgement of the probabilities -- but only if it is 
guided properly.  We are all very prone to only consider the things we 
happen to be thinking about, and to completely ignore all the things we 
don't happen to be looking at.  My own temptation, looking at this bug, 
is to say, "Random hangs during boot, yeah, that's pretty bad; we should 
take it."  But then I'm only looking at the positives of the patch: I'm 
not really making a balance of the positives versus the negatives.

The goal of going through the "worst-case-scenario" exercise is to bring 
to our minds the potential outcomes we are prone to ignore. Only then 
can we reasonably trust our intuition to make a properly informed judgement.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:49:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuyh-0005vI-1l; Mon, 10 Feb 2014 17:49:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WCuyf-0005v8-14
	for Xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:49:05 +0000
Received: from [193.109.254.147:29156] by server-9.bemta-14.messagelabs.com id
	F8/28-24895-01119F25; Mon, 10 Feb 2014 17:49:04 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392054542!3325509!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10731 invoked from network); 10 Feb 2014 17:49:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:49:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="101375868"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 17:49:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 12:49:01 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WCuyb-0004TH-KS;
	Mon, 10 Feb 2014 17:49:01 +0000
Message-ID: <1392054536.28219.2.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Mon, 10 Feb 2014 17:48:56 +0000
In-Reply-To: <52F90A71.40802@citrix.com>
References: <52F90A71.40802@citrix.com>
X-Mailer: Evolution 3.6.2-0ubuntu0.1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:20 +0000, David Vrabel wrote:
> Here is a draft of a proposal for a new domain save image format.  It
> does not currently cover all use cases (e.g., images for HVM guest are
> not considered).
> 
> http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
> 
....
> 
> Integer fields in the domain header and in the records are in the
> endianess described in the image header (which will typically be the
> native ordering).
> 

This means that we could have an Intel image coded in big-endian format.
That's fine, but we have to support both native and non-native images.

...

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:49:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:49:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuz7-00066Z-H4; Mon, 10 Feb 2014 17:49:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WCuz5-00066R-Tq
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:49:32 +0000
Received: from [85.158.139.211:27321] by server-12.bemta-5.messagelabs.com id
	A7/4C-15415-B2119F25; Mon, 10 Feb 2014 17:49:31 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392054569!2966138!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18684 invoked from network); 10 Feb 2014 17:49:30 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:49:30 -0000
Received: by mail-wg0-f49.google.com with SMTP id a1so4368786wgh.4
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 09:49:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=0Y4nBL5EIxyfco2DrUzOptF3bnfRpVYClw1NHhZWkPk=;
	b=LCiK7EPcemPwf4tQK6fHTKjStWETjhku5eWqNGI/eTPv1cCX63fIi4UDnyzprmnoRL
	y9qANu2YSUV/aRNh9Jf6fb7i9WwFhcXV80UQv4l6b5f7sDqFj6d42LWMFvxrNPk/n4Op
	tsMlyZAr+C9o1JqPqQIq+TjDpp/DjN2Jrms99qBxe5dcdxzqiguG2nINJBtNJ58L9R+E
	Tw+jamPeMwfCGjqiNljh0GfYJfocOW0FDHX2aAjCbTW0dhsAfa28rUwCWqUVPtHvjZwX
	Wr4bsMPyvHMkc4y2WlmJNaWAnHEvY2Lkfv7TgxWcOeDuT/kEbIc54l+m7Z0+ituRmuVH
	bZHQ==
MIME-Version: 1.0
X-Received: by 10.194.60.37 with SMTP id e5mr22463560wjr.32.1392054569761;
	Mon, 10 Feb 2014 09:49:29 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 10 Feb 2014 09:49:29 -0800 (PST)
Date: Mon, 10 Feb 2014 17:49:29 +0000
X-Google-Sender-Auth: FL42q7mbcT-xfkYnZyg7xFNbXlQ
Message-ID: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.4 development update: RC4 end of this week
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This information will be mirrored on the Xen 4.4 Roadmap wiki page:
 http://wiki.xen.org/wiki/Xen_Roadmap/4.4

Well since rc3, we've had a number of fairly major bugs reported, the
fixes for which are unfortunately a bit on the risky side.  I've
listed the ones accepted under "Major patches since RC3".
Additionally, there are a number of open issues which may involve
slightly risky changes (the w2k3 RTC loop bug, the PVH regression,
dirty vram / IOMMU, &c).

We've got a test day planned for 18 February, so we'll probably cut
RC4 at the end of this week (14 Feb).  We definitely shouldn't rush,
but from a priority perspective, anything to make it into RC4 will
have to be in the tree by the morning of 13 Feb at the latest so that
it can pass the push gate.  (But of course, if there are any test
failures it will mean missing the RC.)

If all the fixes are in, and the testing of RC4 goes well, then we
could conceivably branch on the 24th and start the wheels of the
marketing machine in motion for an announcement the following week.

But those are two very big ifs, so don't book the table for the
celebration party just yet. :-)

= Timeline =

Here is our current timeline based on a 6-month release:

* Feature freeze: 18 October 2013
* Code freezing point: 18 November 2013
* First RCs: 6 December 2013  <== WE ARE HERE
* Release: When it's ready (Probably by the end of February).

Last updated: 10 February 2014

== Completed ==

* Event channel scalability (FIFO event channels)

* Non-udev scripts for driver domains (non-Linux driver domains)

* Multi-vector PCI MSI (Hypervisor side)

* Improved Spice support on libxl
 - Added Spice vdagent support
 - Added Spice clipboard sharing support
 - Spice usbredirection support for upstream qemu

* PHV domU (experimental only)

* pvgrub2 checked into grub upstream

* ARM64 guest

* Guest EFI booting (tianocore)

* kexec

* Testing: Xen on ARM

* Update to SeaBIOS 1.7.3.1

* Update to qemu 1.6.2

* SWIOTLB (in Linux 3.13)

* Disk: indirect descriptors (in 3.11)

* Reworked ocaml bindings

== Resolved since last update ==

* qemu-* parses "008" as octal in USB bus.addr format

* Claim mode and PoD

* Disable IOMMU if no southbridge

* osstest windows-install failures

* libxl / libvirt races

== Major patches since RC3 ==

* qemu-xen DMA corruption issue
  > http://bugs.xenproject.org/xen/bug/29

* FP save/restore support for ARM guests

* libxl async patch series

* AMD IOMMU: fail if there is no southbridge IO-APIC

== Open ==

* Win2k3 SP2 RTC infinite loops
   > Regression introduced late in Xen-4.3 development
   owner: andrew.cooper@citrix
   status: patches posted, undergoing review.

* PVH regression

* dirty vram / IOMMU bug
 > http://bugs.xenproject.org/xen/bug/38
 status: Patch posted

* RHEL 7 pygrub patches
 > http://bugs.xenproject.org/xen/bug/39
 status: Wait for 4.4.1?

* credit2 runqueues
 > http://bugs.xenproject.org/xen/bug/36

* RHEL 5.x ocaml build bug
  status: patch posted

* libxl / xl does not handle failure of remote qemu gracefully
  > Related to http://bugs.xenproject.org/xen/bug/30
  > Easiest way to reproduce:
  >  - set "vncunused=0" and do a local migrate
  >  - The "remote" qemu will fail because the vnc port is in use
  > The failure isn't the problem, but everything being stuck afterwards is
 Ian J investigating

* qemu memory leak?
  > http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 17:49:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 17:49:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCuz7-00066Z-H4; Mon, 10 Feb 2014 17:49:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WCuz5-00066R-Tq
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 17:49:32 +0000
Received: from [85.158.139.211:27321] by server-12.bemta-5.messagelabs.com id
	A7/4C-15415-B2119F25; Mon, 10 Feb 2014 17:49:31 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392054569!2966138!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18684 invoked from network); 10 Feb 2014 17:49:30 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 17:49:30 -0000
Received: by mail-wg0-f49.google.com with SMTP id a1so4368786wgh.4
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 09:49:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=0Y4nBL5EIxyfco2DrUzOptF3bnfRpVYClw1NHhZWkPk=;
	b=LCiK7EPcemPwf4tQK6fHTKjStWETjhku5eWqNGI/eTPv1cCX63fIi4UDnyzprmnoRL
	y9qANu2YSUV/aRNh9Jf6fb7i9WwFhcXV80UQv4l6b5f7sDqFj6d42LWMFvxrNPk/n4Op
	tsMlyZAr+C9o1JqPqQIq+TjDpp/DjN2Jrms99qBxe5dcdxzqiguG2nINJBtNJ58L9R+E
	Tw+jamPeMwfCGjqiNljh0GfYJfocOW0FDHX2aAjCbTW0dhsAfa28rUwCWqUVPtHvjZwX
	Wr4bsMPyvHMkc4y2WlmJNaWAnHEvY2Lkfv7TgxWcOeDuT/kEbIc54l+m7Z0+ituRmuVH
	bZHQ==
MIME-Version: 1.0
X-Received: by 10.194.60.37 with SMTP id e5mr22463560wjr.32.1392054569761;
	Mon, 10 Feb 2014 09:49:29 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 10 Feb 2014 09:49:29 -0800 (PST)
Date: Mon, 10 Feb 2014 17:49:29 +0000
X-Google-Sender-Auth: FL42q7mbcT-xfkYnZyg7xFNbXlQ
Message-ID: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.4 development update: RC4 end of this week
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This information will be mirrored on the Xen 4.4 Roadmap wiki page:
 http://wiki.xen.org/wiki/Xen_Roadmap/4.4

Well, since RC3 we've had a number of fairly major bugs reported, the
fixes for which are unfortunately a bit on the risky side.  I've
listed the ones accepted under "Major patches since RC3".
Additionally, there are a number of open issues which may involve
slightly risky changes (the w2k3 RTC loop bug, the PVH regression,
dirty vram / IOMMU, &c).

We've got a test day planned for 18 February, so we'll probably cut
RC4 at the end of this week (14 Feb).  We definitely shouldn't rush,
but from a priority perspective, anything that is to make it into RC4
will have to be in the tree by the morning of 13 Feb at the latest so
it can pass the push gate.  (But of course, if there are any test
failures it will mean missing the RC.)

If all the fixes are in, and the testing of RC4 goes well, then we
could conceivably branch on the 24th and start the wheels of the
marketing machine in motion for an announcement the following week.

But those are two very big ifs, so don't book the table for the
celebration party just yet. :-)

= Timeline =

Here is our current timeline based on a 6-month release:

* Feature freeze: 18 October 2013
* Code freezing point: 18 November 2013
* First RCs: 6 December 2013  <== WE ARE HERE
* Release: When it's ready (Probably by the end of February).

Last updated: 10 February 2014

== Completed ==

* Event channel scalability (FIFO event channels)

* Non-udev scripts for driver domains (non-Linux driver domains)

* Multi-vector PCI MSI (Hypervisor side)

* Improved Spice support on libxl
 - Added Spice vdagent support
 - Added Spice clipboard sharing support
 - Spice usbredirection support for upstream qemu

* PVH domU (experimental only)

* pvgrub2 checked into grub upstream

* ARM64 guest

* Guest EFI booting (tianocore)

* kexec

* Testing: Xen on ARM

* Update to SeaBIOS 1.7.3.1

* Update to qemu 1.6.2

* SWIOTLB (in Linux 3.13)

* Disk: indirect descriptors (in 3.11)

* Reworked ocaml bindings

== Resolved since last update ==

* qemu-* parses "008" as octal in USB bus.addr format

* Claim mode and PoD

* Disable IOMMU if no southbridge

* osstest windows-install failures

* libxl / libvirt races

== Major patches since RC3 ==

* qemu-xen DMA corruption issue
  > http://bugs.xenproject.org/xen/bug/29

* FP save/restore support for ARM guests

* libxl async patch series

* AMD IOMMU: fail if there is no southbridge IO-APIC

== Open ==

* Win2k3 SP2 RTC infinite loops
   > Regression introduced late in Xen-4.3 development
   owner: andrew.cooper@citrix
   status: patches posted, undergoing review.

* PVH regression

* dirty vram / IOMMU bug
 > http://bugs.xenproject.org/xen/bug/38
 status: Patch posted

* RHEL 7 pygrub patches
 > http://bugs.xenproject.org/xen/bug/39
 status: Wait for 4.4.1?

* credit2 runqueues
 > http://bugs.xenproject.org/xen/bug/36

* RHEL 5.x ocaml build bug
  status: patch posted

* libxl / xl does not handle failure of remote qemu gracefully
  > Related to http://bugs.xenproject.org/xen/bug/30
  > Easiest way to reproduce:
  >  - set "vncunused=0" and do a local migrate
  >  - The "remote" qemu will fail because the vnc port is in use
  > The failure isn't the problem, but everything being stuck afterwards is
 Ian J investigating

* qemu memory leak?
  > http://lists.xen.org/archives/html/xen-users/2013-03/msg00276.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:14:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvMw-0007Mt-0G; Mon, 10 Feb 2014 18:14:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WCvMu-0007Ml-6Y
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:14:08 +0000
Received: from [85.158.143.35:35581] by server-3.bemta-4.messagelabs.com id
	91/88-11539-FE619F25; Mon, 10 Feb 2014 18:14:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392056044!4601280!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28781 invoked from network); 10 Feb 2014 18:14:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:14:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="99558381"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 18:14:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 13:14:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCvMo-0000JK-Ob;
	Mon, 10 Feb 2014 18:14:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WCvMo-0000xl-IF;
	Mon, 10 Feb 2014 18:14:02 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21241.5866.285503.630110@mariner.uk.xensource.com>
Date: Mon, 10 Feb 2014 18:14:02 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392039370-26216-1-git-send-email-ian.campbell@citrix.com>
References: <1392039370-26216-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST v2] cr-daily-branch: Make it
	possible to suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST v2] cr-daily-branch: Make it possible to suppress the forcing of a baseline test"):
> This is undesirable (most of the time) in a standalone environment, where you
> are most likely to be interested in the current version and not historical
> comparisons.
> 
> Not sure there isn't a better way.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

(But there's a lot of stuff in the queue so don't push it just yet...)

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:32:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCveT-0008O9-MP; Mon, 10 Feb 2014 18:32:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WCveS-0008O1-3m
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:32:16 +0000
Received: from [85.158.137.68:11225] by server-14.bemta-3.messagelabs.com id
	32/AA-08196-F2B19F25; Mon, 10 Feb 2014 18:32:15 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392057127!904571!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17759 invoked from network); 10 Feb 2014 18:32:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 18:32:14 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1AIW26n010058
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Feb 2014 18:32:03 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1AIW15m005024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Feb 2014 18:32:02 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1AIW1H0004948; Mon, 10 Feb 2014 18:32:01 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 10:32:00 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0AD751C0954; Mon, 10 Feb 2014 13:32:00 -0500 (EST)
Date: Mon, 10 Feb 2014 13:31:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>, JBeulich@suse.com,
	yang.z.zhang@intel.com
Message-ID: <20140210183159.GC17601@phenom.dumpdata.com>
References: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 development update: RC4 end of this week
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> == Open ==
> 
..
 
> * PVH regression

Patch posted, just needs an Ack from the Intel folks (which
I think they did give: "Both of fixings are right to me.")

So if Jan is OK with it (and since he suggested the fix I think
he would be), then the fix should go in.

Jan, do you want me to repost it with the right 'Acked' by
tags?

> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:43:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvpV-0000Uf-DR; Mon, 10 Feb 2014 18:43:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCvpU-0000Ua-LS
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:43:40 +0000
Received: from [85.158.143.35:19483] by server-2.bemta-4.messagelabs.com id
	28/18-10891-CDD19F25; Mon, 10 Feb 2014 18:43:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392057817!4602237!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17741 invoked from network); 10 Feb 2014 18:43:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:43:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="99567421"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 18:43:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 13:43:24 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCvpE-0005CP-FP;
	Mon, 10 Feb 2014 18:43:24 +0000
Message-ID: <52F91DCC.1060007@citrix.com>
Date: Mon, 10 Feb 2014 18:43:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<52F91071.1080007@eu.citrix.com>
In-Reply-To: <52F91071.1080007@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	KeirFraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 17:46, George Dunlap wrote:
> On 02/10/2014 04:34 PM, Jan Beulich wrote:
>>>    * The results from XenRT suggest that the new emulation is better
>>> than the
>>>      old.
>> "Better" in the sense of the limited set of uses of the virtual hardware
>> by whatever selection of guest OSes is being run there. But very
>> likely not "better" in the sense of matching up with how the respective
>> hardware specification would require it to behave.
>
> The context of the above sentence was in a justification for including
> it in 4.4.  Obviously "occasionally gets stuck during boot" is a
> pretty bad bug that we'd like to see fixed.  But given the tricky nature
> of this whole area, there's a risk that this will cause regressions in
> *other* situations or operating systems.  What I understand Andy to be
> saying is that with the patch, the RTC appears to cause fewer problems
> than without it.

That was indeed the message I was trying to convey.

>
> What your analysis is missing, Andy, is what the effects might be if
> there were a bug.  Obviously other guests might hang during boot; but
> what else?  Might they hang at some point much later, perhaps when
> being pounded with interrupts due to heavy network traffic? Might the
> clock begin to drift or jump around?  Would the XenRT testing catch
> that if it happened?  And, would those potential bugs be worse than
> what we have now?

This is much more complicated to answer.  Let's try.

The patch only touches the RTC and Periodic Timer code.  All of the
Periodic Timer changes are reversions to the previous code (mid Xen-4.3
and before), along with an attempted justification as to why the old
code was more correct than the current code.

Some of the RTC changes are reversions, but there is also new logic. 
All new logic is to do with how to update REG_B and REG_C correctly. 
None of the other functionality is touched.

REG_B and REG_C are to do with interrupts, and which events
should(B)/have(C) generated interrupts.  The worst case is that a guest
gets none/too-few/too-many interrupts when trying to drive the RTC. 
None of this should lead to clock skew, as reading the time values
directly will still provide the same information as before, although any
guest which attempts to guess time based on counting periodic interrupts
from the RTC is a) already broken and b) already having massive skew as
a VM due to vcpu scheduling.

XenRT does have tests for clock drift, but I don't know for certain
whether they have been run against the new code yet.

I will ensure they get run on v2 of the patch.

~Andrew

>
> There's a reason for trying to go through the whole exercise,
> particularly in bugs like this.  I do have a lot of faith in our
> intuition to consider hundreds of individual factors and come up with
> a reasonably good judgement of the probabilities -- but only if it is
> guided properly.  We are all very prone to only consider the things we
> happen to be thinking about, and to completely ignore all the things
> we don't happen to be looking at.  My own temptation, looking at this
> bug, is to say, "Random hangs during boot, yeah, that's pretty bad; we
> should take it."  But then I'm only looking at the positives of the
> patch: I'm not really making a balance of the positives versus the
> negatives.
>
> The goal of going through the "worst-case-scenario" exercise is to
> bring to our minds the potential outcomes we are prone to ignore. Only
> then can we reasonably trust our intuition to make a properly informed
> judgement.
>
>  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:43:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvpV-0000Uf-DR; Mon, 10 Feb 2014 18:43:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WCvpU-0000Ua-LS
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:43:40 +0000
Received: from [85.158.143.35:19483] by server-2.bemta-4.messagelabs.com id
	28/18-10891-CDD19F25; Mon, 10 Feb 2014 18:43:40 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392057817!4602237!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17741 invoked from network); 10 Feb 2014 18:43:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:43:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="99567421"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 18:43:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 13:43:24 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WCvpE-0005CP-FP;
	Mon, 10 Feb 2014 18:43:24 +0000
Message-ID: <52F91DCC.1060007@citrix.com>
Date: Mon, 10 Feb 2014 18:43:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <george.dunlap@eu.citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<52F91071.1080007@eu.citrix.com>
In-Reply-To: <52F91071.1080007@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Tim Deegan <tim@xen.org>, Xen-devel <xen-devel@lists.xen.org>,
	KeirFraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 17:46, George Dunlap wrote:
> On 02/10/2014 04:34 PM, Jan Beulich wrote:
>>>    * The results from XenRT suggest that the new emulation is better
>>> than the
>>>      old.
>> "Better" in the sense of the limited set of uses of the virtual hardware
>> by whatever selection of guest OSes is being run there. But very
>> likely not "better" in the sense on matching up with how the respective
>> hardware specification would require it to behave.
>
> The context of the above sentence was in a justification for including
> it in 4.4.  Obviously "occasionally gets stuck during boot" is a
> pretty bad bug that we'd like to see fix.  But given the tricky nature
> of this whole area, there's a risk that this will cause regressions in
> *other* situations or operating systems.  What I understand Andy to be
> saying is that with the patch, the RTC appears to cause less problems
> than without it.

That was indeed the message I was trying to convey.

>
> What your analysis is missing, Andy, is what the effects might be if
> there were a bug.  Obviously other guests might hang during boot; but
> what else?  Might they hang at some point much later, perhaps when
> being pounded with interrupts due to heavy network traffic? Might the
> clock begin to drift or jump around?  Would the XenRT testing catch
> that if it happened?  And, would those potential bugs be worse than
> what we have now?

This is much more complicated to answer.  Lets try.

The patch only touches the RTC and Periodic Timer code.  All of the
Periodic Timer changes are reversions to the previous code (mid-Xen-4.3
and before), along with a justification of why the old code was more
correct than the current code.

Some of the RTC changes are also reversions, but there is new logic as
well.  All of the new logic concerns updating REG_B and REG_C
correctly; none of the other functionality is touched.

REG_B and REG_C relate to interrupts: REG_B controls which events
should generate interrupts, and REG_C records which events have done
so.  The worst case is that a guest gets none/too-few/too-many
interrupts when trying to drive the RTC.  None of this should lead to
clock skew, as reading the time values directly will still provide the
same information as before.  Any guest which attempts to track time by
counting periodic interrupts from the RTC is a) already broken and b)
already suffering massive skew as a VM due to vcpu scheduling.
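
As a sketch of the REG_B/REG_C relationship described above (bit values
are from the MC146818 datasheet; the helper names are mine for
illustration, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

/* MC146818 RTC interrupt bits (REG_B enables, REG_C flags).
 * The enable bit in REG_B and the event flag in REG_C share the
 * same bit position for each event. */
#define RTC_PF   0x40  /* periodic event (PIE in REG_B, PF in REG_C) */
#define RTC_AF   0x20  /* alarm event    (AIE in REG_B, AF in REG_C) */
#define RTC_UF   0x10  /* update-ended   (UIE in REG_B, UF in REG_C) */
#define RTC_IRQF 0x80  /* REG_C only: some enabled event has occurred */

/* Recompute IRQF: the interrupt line is asserted whenever an event
 * that has occurred (REG_C) is also enabled (REG_B). */
static uint8_t rtc_update_irqf(uint8_t reg_b, uint8_t reg_c)
{
    if (reg_b & reg_c & (RTC_PF | RTC_AF | RTC_UF))
        reg_c |= RTC_IRQF;
    else
        reg_c &= ~RTC_IRQF;
    return reg_c;
}

/* A read of REG_C returns the accumulated flags and clears them,
 * which is what deasserts the interrupt line. */
static uint8_t rtc_read_reg_c(uint8_t *reg_c)
{
    uint8_t val = *reg_c;
    *reg_c = 0;
    return val;
}
```

Getting these two operations to interleave correctly with guest writes
to REG_B is exactly the area the new logic touches; a mistake shows up
as spurious or missing interrupts, not as wrong time values.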

XenRT does have tests for clock drift, but I don't know for certain
whether they have been run against the new code yet.

I will ensure they get run on v2 of the patch.

~Andrew

>
> There's a reason for trying to go through the whole exercise,
> particularly in bugs like this.  I do have a lot of faith in our
> intuition to consider hundreds of individual factors and come up with
> a reasonably good judgement of the probabilities -- but only if it is
> guided properly.  We are all very prone to only consider the things we
> happen to be thinking about, and to completely ignore all the things
> we don't happen to be looking at.  My own temptation, looking at this
> bug, is to say, "Random hangs during boot, yeah, that's pretty bad; we
> should take it."  But then I'm only looking at the positives of the
> patch: I'm not really making a balance of the positives versus the
> negatives.
>
> The goal of going through the "worst-case-scenario" exercise is to
> bring to our minds the potential outcomes we are prone to ignore. Only
> then can we reasonably trust our intuition to make a properly informed
> judgement.
>
>  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:44:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvqA-0000YC-6O; Mon, 10 Feb 2014 18:44:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WCvq8-0000Y2-9c
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 18:44:20 +0000
Received: from [193.109.254.147:47974] by server-12.bemta-14.messagelabs.com
	id E1/3B-17220-30E19F25; Mon, 10 Feb 2014 18:44:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392057857!3314923!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2918 invoked from network); 10 Feb 2014 18:44:18 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 18:44:18 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1AIiF8R025361
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Feb 2014 18:44:15 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AIiDiN014779
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Feb 2014 18:44:14 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1AIiD7Z011889; Mon, 10 Feb 2014 18:44:13 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 10:44:13 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 71C2F1C0954; Mon, 10 Feb 2014 13:44:12 -0500 (EST)
Date: Mon, 10 Feb 2014 13:44:12 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: axboe@kernel.dk, linux-kernel@vger.kernel.org,
	xen-devel@lists.xensource.com
Message-ID: <20140210184412.GA18198@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8548459873320567521=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============8548459873320567521==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="CE+1k2dSO48ffgeK"
Content-Disposition: inline


--CE+1k2dSO48ffgeK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Jens,

Please git pull the following branch:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14

which is based off v3.13-rc6. If you would like me to rebase it on
a different branch/tag I would be more than happy to do so.

The patches are all bug-fixes and hopefully can go into 3.14.

They fix memory leaks as well as shutdown races in the xen-blkback
shutdown path.  They should go to the stable tree and, if you are OK
with it, I will ask for those fixes to be backported.

There is also a fix to xen-blkfront to deal with an unexpected state
transition, and lastly a fix to the header, where it was using
__aligned__ unnecessarily.

Please pull!


 drivers/block/xen-blkback/blkback.c | 63 ++++++++++++++++++++++++-------------
 drivers/block/xen-blkback/common.h  |  4 ++-
 drivers/block/xen-blkback/xenbus.c  | 13 ++++++++
 drivers/block/xen-blkfront.c        | 11 ++++---
 include/xen/interface/io/blkif.h    | 34 +++++++++-----------
 5 files changed, 79 insertions(+), 46 deletions(-)

David Vrabel (1):
      xen-blkfront: handle backend CLOSED without CLOSING

Matt Rushton (1):
      xen-blkback: fix memory leak when persistent grants are used

Roger Pau Monne (3):
      xen-blkback: fix memory leaks
      xen-blkback: fix shutdown race
      xen-blkif: drop struct blkif_request_segment_aligned


--CE+1k2dSO48ffgeK
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJS+R33AAoJEFjIrFwIi8fJpAgH/0TR1KeZMyV3+/c06eNBUGYV
sHOOr9PmfQgMxTL6elHJ/5BPYnVz+VmmyPukb42sH5dcGn/urQt4zwOgZg7SbynJ
RJNcKhnElEute7upiFg8TGj96FeBS3EvwVGHFlXdjJANCrrI6AkFouV7uSXtGYDd
MYd1t1+mX+b+v/SlBXxKwrQ8i+IaBxGb0FsgpAIMo6ATbR8yTXacVoXJGDrs9pmL
tNtQw88vZy638wAuojpuJtNQTM466XXa0GsaVpqbZq5Lw4tErsMvtz4qkqBIP5dN
dQBOK5JIWtDkz6JXlF7oYmoQHw+Ey4ZuF116QYWPdNZ5BvUeSn30ISDvKQuvGcI=
=R7SI
-----END PGP SIGNATURE-----

--CE+1k2dSO48ffgeK--


--===============8548459873320567521==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8548459873320567521==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 18:47:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:47:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvtB-0000lJ-U4; Mon, 10 Feb 2014 18:47:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WCvtB-0000lB-5p
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:47:29 +0000
Received: from [85.158.143.35:41437] by server-2.bemta-4.messagelabs.com id
	12/9B-10891-0CE19F25; Mon, 10 Feb 2014 18:47:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392058046!4595732!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9279 invoked from network); 10 Feb 2014 18:47:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 18:47:27 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1AIl9pU028732
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 10 Feb 2014 18:47:09 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AIl8GT024479
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 10 Feb 2014 18:47:09 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1AIl8jV024475; Mon, 10 Feb 2014 18:47:08 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 10:47:08 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 838FC1C0954; Mon, 10 Feb 2014 13:47:07 -0500 (EST)
Date: Mon, 10 Feb 2014 13:47:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140210184707.GA18755@phenom.dumpdata.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
	<CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote:
> Thanks for the answers on the timeline.
> 
> When I start the HVM with the Broadcom adapter, I get this message back.
> Parsing config from /etc/xen/ubuntu-hvm-1.cfg
> libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> support reset from sysfs for PCI device 0000:05:00.0
> libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> support reset from sysfs for PCI device 0000:05:00.1
> 
> However, the devices appear in the HVM.  Is this something that I should be
> concerned about?

No. Xen pciback does the reset automatically.

Actually we might want to ditch that reporting in libxl, or maybe just
implement a stub function in xen-pciback so that libxl will be happy.
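
For anyone curious what that libxl message is actually probing for, a
minimal sketch (the function name and the sysfs_root parameter are mine
for illustration; libxl uses /sys directly):

```c
#include <assert.h>
#include <stdio.h>
#include <unistd.h>

/* Minimal sketch of the probe behind that libxl warning: the kernel
 * exposes a writable per-device "reset" file in sysfs only when it
 * knows how to reset the device; libxl warns when the file cannot be
 * opened for writing.  The sysfs_root parameter is illustrative so the
 * check can be pointed at a test directory. */
static int pci_sysfs_reset_supported(const char *sysfs_root, const char *bdf)
{
    char path[256];

    /* e.g. /sys/bus/pci/devices/0000:05:00.0/reset */
    snprintf(path, sizeof(path), "%s/bus/pci/devices/%s/reset",
             sysfs_root, bdf);
    return access(path, W_OK) == 0;
}
```

As noted above, the warning is harmless when pciback is performing the
reset itself; the sysfs file is simply not there for libxl to find.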

> 
> 
> 
> On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu> wrote:
> 
> > On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> > <mikeneiderhauser@gmail.com> wrote:
> > > Works like a charm.  I do not have physical access to the computer this
> > > weekend to verify that the cards are isolated, but the HVM starts and
> > > appears to be working well.
> > >
> > > When do you think Xen 4.4 will be released?  The article I read
> > > mentioned it will be released in 2014 (hinting towards the end of
> > > February).  I also read 'When it is ready.'
> > >
> > > Any timeline would be great.
> >
> > I'm afraid that's about all we can give. :-)  We've locked down
> > development for 2 months now and are working on finding and fixing
> > bugs.  If there are no more blocker bugs or other unforeseen delays,
> > it should be out by the end of February.  But there are necessarily
> > significant unknowns, so we can't make any promises.
> >
> >  -George
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:48:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:48:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvuN-0000tF-ED; Mon, 10 Feb 2014 18:48:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WCvuL-0000sc-RM
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 18:48:42 +0000
Received: from [85.158.139.211:28960] by server-12.bemta-5.messagelabs.com id
	D3/08-15415-90F19F25; Mon, 10 Feb 2014 18:48:41 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392058119!2936831!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28921 invoked from network); 10 Feb 2014 18:48:40 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 18:48:40 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 10 Feb 2014 18:48:38 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="650193091"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.182])
	by fldsmtpi03.verizon.com with ESMTP; 10 Feb 2014 18:48:36 +0000
Message-ID: <52F91F04.6030507@terremark.com>
Date: Mon, 10 Feb 2014 13:48:36 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com>
In-Reply-To: <52F8B128.80800@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/14 05:59, Roger Pau Monné wrote:
> On 10/02/14 10:55, Ian Campbell wrote:
>> On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
>>> On 07/02/14 16:22, Don Slutz wrote:
>>>> On 02/07/14 05:05, Ian Campbell wrote:
>>>>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>>>>>> cc1: warnings being treated as errors
>>>>>>>>> xenlight_stubs.c: In function 'Defbool_val':
>>>>>>>>> xenlight_stubs.c:344: warning: implicit declaration of function
>>>>>>>>> 'CAMLreturnT'
>>>>>>>>> xenlight_stubs.c:344: error: expected expression before
>>>>>>>>> 'libxl_defbool'
>>>>>>>>> xenlight_stubs.c: In function 'String_option_val':
>>>>>>>>> xenlight_stubs.c:379: error: expected expression before 'char'
>>>>>>>>> xenlight_stubs.c: In function 'aohow_val':
>>>>>>>>> xenlight_stubs.c:440: error: expected expression before
>>>>>>>>> 'libxl_asyncop_how'
>>>>>> Any idea on what to do about ocaml issue?
>>>>> My guess is that your ocaml is too old and doesn't supply CAMLreturnT.
>>>>> What version do you have?
>>>>>
>>>>> Ian.
>>>>>
>>>> dcs-xen-53:~>ocaml -version
>>>> The Objective Caml toplevel, version 3.09.3
>>>>
>>>>     -Don Slutz
>>>>
>>> Which, according to google, was introduced in 3.09.4
>>>
>>> I think the ./configure script needs a min version check.
>> Yes, I think so too. Rob, could you advise on a suitable minimum and
>> perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary.
>>
>> Also CCing Roger who added the ocaml autoconf stuff.
> The Ocaml autoconf stuff was picked from http://forge.ocamlcore.org/.
> Here is an untested patch for our configure script to check for the
> minimum required OCaml version (3.09.3):
>
> (remember to re-generate the configure script after applying)

Not sure if the older Autoconf (2.68) is at fault, but I get:

dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
configure.ac:14: error: possibly undefined macro: AS_IF
       If this token and others are legitimate, please use m4_pattern_allow.
       See the Autoconf documentation.
configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR

and the generated script is bad:


     if test "x$OCAMLC" = "xno" || test "x$OCAMLFIND" = "xno"; then :

         if test "x$enable_ocamltools" = "xyes"; then :

             as_fn_error $? "Ocaml tools enabled, but unable to find Ocaml" "$LINENO" 5
fi
         ocamltools="n"

else

         AX_COMPARE_VERSION($OCAMLVERSION, lt, 3.09.4,
             AS_IF([test "x$enable_ocamltools" = "xyes"], [
                 AC_MSG_ERROR([Your version of OCaml: $OCAMLVERSION is not supported])])
             ocamltools="n"
         )

fi

fi


    -Don Slutz



> ---
> commit e49609cc7b93c2633cf5a49206cb29d6bdd612be
> Author: Roger Pau Monne <roger.pau@citrix.com>
> Date:   Mon Feb 10 11:54:13 2014 +0100
>
>      tools: check OCaml version is at least 3.09.3
>      
>      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>
> diff --git a/tools/configure.ac b/tools/configure.ac
> index 0754f0e..6d1e2ee 100644
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -161,6 +161,12 @@ AS_IF([test "x$ocamltools" = "xy"], [
>           AS_IF([test "x$enable_ocamltools" = "xyes"], [
>               AC_MSG_ERROR([Ocaml tools enabled, but unable to find Ocaml])])
>           ocamltools="n"
> +    ], [
> +        AX_COMPARE_VERSION([$OCAMLVERSION], [lt], [3.09.4], [
> +            AS_IF([test "x$enable_ocamltools" = "xyes"], [
> +                AC_MSG_ERROR([Your version of OCaml: $OCAMLVERSION is not supported])])
> +            ocamltools="n"
> +        ])
>       ])
>   ])
>   AS_IF([test "x$xsmpolicy" = "xy"], [
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lis
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:48:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:48:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCvuN-0000tF-ED; Mon, 10 Feb 2014 18:48:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WCvuL-0000sc-RM
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 18:48:42 +0000
Received: from [85.158.139.211:28960] by server-12.bemta-5.messagelabs.com id
	D3/08-15415-90F19F25; Mon, 10 Feb 2014 18:48:41 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392058119!2936831!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28921 invoked from network); 10 Feb 2014 18:48:40 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 18:48:40 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 10 Feb 2014 18:48:38 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="650193091"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.182])
	by fldsmtpi03.verizon.com with ESMTP; 10 Feb 2014 18:48:36 +0000
Message-ID: <52F91F04.6030507@terremark.com>
Date: Mon, 10 Feb 2014 13:48:36 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com>
In-Reply-To: <52F8B128.80800@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMTAvMTQgMDU6NTksIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4gT24gMTAvMDIvMTQg
MTA6NTUsIElhbiBDYW1wYmVsbCB3cm90ZToKPj4gT24gRnJpLCAyMDE0LTAyLTA3IGF0IDE2OjI5
ICswMDAwLCBBbmRyZXcgQ29vcGVyIHdyb3RlOgo+Pj4gT24gMDcvMDIvMTQgMTY6MjIsIERvbiBT
bHV0eiB3cm90ZToKPj4+PiBPbiAwMi8wNy8xNCAwNTowNSwgSWFuIENhbXBiZWxsIHdyb3RlOgo+
Pj4+PiBPbiBUaHUsIDIwMTQtMDItMDYgYXQgMTk6MTUgLTA1MDAsIERvbiBTbHV0eiB3cm90ZToK
Pj4+Pj4+Pj4+IGNjMTogd2FybmluZ3MgYmVpbmcgdHJlYXRlZCBhcyBlcnJvcnMKPj4+Pj4+Pj4+
IHhlbmxpZ2h0X3N0dWJzLmM6IEluIGZ1bmN0aW9uICdEZWZib29sX3ZhbCc6Cj4+Pj4+Pj4+PiB4
ZW5saWdodF9zdHVicy5jOjM0NDogd2FybmluZzogaW1wbGljaXQgZGVjbGFyYXRpb24gb2YgZnVu
Y3Rpb24KPj4+Pj4+Pj4+ICdDQU1McmV0dXJuVCcKPj4+Pj4+Pj4+IHhlbmxpZ2h0X3N0dWJzLmM6
MzQ0OiBlcnJvcjogZXhwZWN0ZWQgZXhwcmVzc2lvbiBiZWZvcmUKPj4+Pj4+Pj4+ICdsaWJ4bF9k
ZWZib29sJwo+Pj4+Pj4+Pj4geGVubGlnaHRfc3R1YnMuYzogSW4gZnVuY3Rpb24gJ1N0cmluZ19v
cHRpb25fdmFsJzoKPj4+Pj4+Pj4+IHhlbmxpZ2h0X3N0dWJzLmM6Mzc5OiBlcnJvcjogZXhwZWN0
ZWQgZXhwcmVzc2lvbiBiZWZvcmUgJ2NoYXInCj4+Pj4+Pj4+PiB4ZW5saWdodF9zdHVicy5jOiBJ
biBmdW5jdGlvbiAnYW9ob3dfdmFsJzoKPj4+Pj4+Pj4+IHhlbmxpZ2h0X3N0dWJzLmM6NDQwOiBl
cnJvcjogZXhwZWN0ZWQgZXhwcmVzc2lvbiBiZWZvcmUKPj4+Pj4+Pj4+ICdsaWJ4bF9hc3luY29w
X2hvdycKPj4+Pj4+IEFueSBpZGVhIG9uIHdoYXQgdG8gZG8gYWJvdXQgb2NhbWwgaXNzdWU/Cj4+
Pj4+IE15IGd1ZXNzIGlzIHRoYXQgeW91ciBvY2FtbCBpcyB0b28gb2xkIGFuZCBkb2Vzbid0IHN1
cHBseSBDQU1McmV0dXJuVC4KPj4+Pj4gV2hhdCB2ZXJzaW9uIGRvIHlvdSBoYXZlPwo+Pj4+Pgo+
Pj4+PiBJYW4uCj4+Pj4+Cj4+Pj4gZGNzLXhlbi01Mzp+Pm9jYW1sIC12ZXJzaW9uCj4+Pj4gVGhl
IE9iamVjdGl2ZSBDYW1sIHRvcGxldmVsLCB2ZXJzaW9uIDMuMDkuMwo+Pj4+Cj4+Pj4gICAgIC1E
b24gU2x1dHoKPj4+Pgo+Pj4gV2hpY2gsIGFjY29yZGluZyB0byBnb29nbGUsIHdhcyBpbnRyb2R1
Y2VkIGluIDMuMDkuNAo+Pj4KPj4+IEkgdGhpbmsgdGhlIC4vY29uZmlndXJlIHNjcmlwdCBuZWVk
cyBhIG1pbiB2ZXJzaW9uIGNoZWNrLgo+PiBZZXMsIEkgdGhpbmsgc28gdG9vLiBSb2IsIGNvdWxk
IHlvdSBhZHZpc2Ugb24gYSBzdWl0YWJsZSBtaW5pbXVtIGFuZAo+PiBwZXJoYXBzIHBhdGNoIHRv
b2xzL2NvbmZpZ3VyZS5hYyBhbmQvb3IgbTQvb2NhbWwubTQgYXMgbmVjZXNzYXJ5Lgo+Pgo+PiBB
bHNvIENDaW5nIFJvZ2VyIHdobyBhZGRlZCB0aGUgb2NhbWwgYXV0b2NvbmYgc3R1ZmYuCj4gVGhl
IE9jYW1sIGF1dG9jb25mIHN0dWZmIHdhcyBwaWNrZWQgZnJvbSBodHRwOi8vZm9yZ2Uub2NhbWxj
b3JlLm9yZy8uCj4gSGVyZSBpcyBhbiB1bnRlc3RlZCBwYXRjaCBmb3Igb3VyIGNvbmZpZ3VyZSBz
Y3JpcHQgdG8gY2hlY2sgZm9yIHRoZQo+IG1pbmltdW0gcmVxdWlyZWQgT0NhbWwgdmVyc2lvbiAo
My4wOS4zKToKPgo+IChyZW1lbWJlciB0byByZS1nZW5lcmF0ZSB0aGUgY29uZmlndXJlIHNjcmlw
dCBhZnRlciBhcHBseWluZykKCk5vdCBzdXJlIGlmIHRoZSBvbGRlciBBdXRvY29uZiAoMi42OCkg
aXMgYXQgZmF1bHQsIGJ1dCBJIGdldDoKCmRjcy14ZW4tNTQ6fi94ZW4vdG9vbHM+YXV0b2NvbmYg
Y29uZmlndXJlLmFjID5jb25maWd1cmUKY29uZmlndXJlLmFjOjE0OiBlcnJvcjogcG9zc2libHkg
dW5kZWZpbmVkIG1hY3JvOiBBU19JRgogICAgICAgSWYgdGhpcyB0b2tlbiBhbmQgb3RoZXJzIGFy
ZSBsZWdpdGltYXRlLCBwbGVhc2UgdXNlIG00X3BhdHRlcm5fYWxsb3cuCiAgICAgICBTZWUgdGhl
IEF1dG9jb25mIGRvY3VtZW50YXRpb24uCmNvbmZpZ3VyZS5hYzoxNjI6IGVycm9yOiBwb3NzaWJs
eSB1bmRlZmluZWQgbWFjcm86IEFDX01TR19FUlJPUgoKYW5kIHRoZSBnZW5lcmF0ZWQgc2NyaXB0
IGlzIGJhZDoKCgogICAgIGlmIHRlc3QgIngkT0NBTUxDIiA9ICJ4bm8iIHx8IHRlc3QgIngkT0NB
TUxGSU5EIiA9ICJ4bm8iOyB0aGVuIDoKCiAgICAgICAgIGlmIHRlc3QgIngkZW5hYmxlX29jYW1s
dG9vbHMiID0gInh5ZXMiOyB0aGVuIDoKCiAgICAgICAgICAgICBhc19mbl9lcnJvciAkPyAiT2Nh
bWwgdG9vbHMgZW5hYmxlZCwgYnV0IHVuYWJsZSB0byBmaW5kIE9jYW1sIiAiJExJTkVOTyIgNQpm
aQogICAgICAgICBvY2FtbHRvb2xzPSJuIgoKZWxzZQoKICAgICAgICAgQVhfQ09NUEFSRV9WRVJT
SU9OKCRPQ0FNTFZFUlNJT04sIGx0LCAzLjA5LjQsCiAgICAgICAgICAgICBBU19JRihbdGVzdCAi
eCRlbmFibGVfb2NhbWx0b29scyIgPSAieHllcyJdLCBbCiAgICAgICAgICAgICAgICAgQUNfTVNH
X0VSUk9SKFtZb3VyIHZlcnNpb24gb2YgT0NhbWw6ICRPQ0FNTFZFUlNJT04gaXMgbm90IHN1cHBv
cnRlZF0pXSkKICAgICAgICAgICAgIG9jYW1sdG9vbHM9Im4iCiAgICAgICAgICkKCmZpCgpmaQoK
CiAgICAtRG9uIFNsdXR6CgoKCj4gLS0tCj4gY29tbWl0IGU0OTYwOWNjN2I5M2MyNjMzY2Y1YTQ5
MjA2Y2IyOWQ2YmRkNjEyYmUKPiBBdXRob3I6IFJvZ2VyIFBhdSBNb25uZSA8cm9nZXIucGF1QGNp
dHJpeC5jb20+Cj4gRGF0ZTogICBNb24gRmViIDEwIDExOjU0OjEzIDIwMTQgKzAxMDAKPgo+ICAg
ICAgdG9vbHM6IGNoZWNrIE9DYW1sIHZlcnNpb24gaXMgYXQgbGVhc3QgMy4wOS4zCj4gICAgICAK
PiAgICAgIFNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXgu
Y29tPgo+Cj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2NvbmZpZ3VyZS5hYyBiL3Rvb2xzL2NvbmZpZ3Vy
ZS5hYwo+IGluZGV4IDA3NTRmMGUuLjZkMWUyZWUgMTAwNjQ0Cj4gLS0tIGEvdG9vbHMvY29uZmln
dXJlLmFjCj4gKysrIGIvdG9vbHMvY29uZmlndXJlLmFjCj4gQEAgLTE2MSw2ICsxNjEsMTIgQEAg
QVNfSUYoW3Rlc3QgIngkb2NhbWx0b29scyIgPSAieHkiXSwgWwo+ICAgICAgICAgICBBU19JRihb
dGVzdCAieCRlbmFibGVfb2NhbWx0b29scyIgPSAieHllcyJdLCBbCj4gICAgICAgICAgICAgICBB
Q19NU0dfRVJST1IoW09jYW1sIHRvb2xzIGVuYWJsZWQsIGJ1dCB1bmFibGUgdG8gZmluZCBPY2Ft
bF0pXSkKPiAgICAgICAgICAgb2NhbWx0b29scz0ibiIKPiArICAgIF0sIFsKPiArICAgICAgICBB
WF9DT01QQVJFX1ZFUlNJT04oWyRPQ0FNTFZFUlNJT05dLCBbbHRdLCBbMy4wOS40XSwgWwo+ICsg
ICAgICAgICAgICBBU19JRihbdGVzdCAieCRlbmFibGVfb2NhbWx0b29scyIgPSAieHllcyJdLCBb
Cj4gKyAgICAgICAgICAgICAgICBBQ19NU0dfRVJST1IoW1lvdXIgdmVyc2lvbiBvZiBPQ2FtbDog
JE9DQU1MVkVSU0lPTiBpcyBub3Qgc3VwcG9ydGVkXSldKQo+ICsgICAgICAgICAgICBvY2FtbHRv
b2xzPSJuIgo+ICsgICAgICAgIF0pCj4gICAgICAgXSkKPiAgIF0pCj4gICBBU19JRihbdGVzdCAi
eCR4c21wb2xpY3kiID0gInh5Il0sIFsKPgo+CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Feb 10 18:59:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 18:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCw4e-0001Xh-Th; Mon, 10 Feb 2014 18:59:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WCw4c-0001Xc-MI
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 18:59:18 +0000
Received: from [193.109.254.147:27341] by server-1.bemta-14.messagelabs.com id
	E3/FE-15438-68129F25; Mon, 10 Feb 2014 18:59:18 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392058756!3348189!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27200 invoked from network); 10 Feb 2014 18:59:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:59:17 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="99571587"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 10 Feb 2014 18:59:15 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 13:59:14 -0500
Message-ID: <52F92181.8010907@citrix.com>
Date: Mon, 10 Feb 2014 19:59:13 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
In-Reply-To: <52F91F04.6030507@terremark.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMTAvMDIvMTQgMTk6NDgsIERvbiBTbHV0eiB3cm90ZToKPiBPbiAwMi8xMC8xNCAwNTo1OSwg
Um9nZXIgUGF1IE1vbm7DqSB3cm90ZToKPj4gT24gMTAvMDIvMTQgMTA6NTUsIElhbiBDYW1wYmVs
bCB3cm90ZToKPj4+IE9uIEZyaSwgMjAxNC0wMi0wNyBhdCAxNjoyOSArMDAwMCwgQW5kcmV3IENv
b3BlciB3cm90ZToKPj4+PiBPbiAwNy8wMi8xNCAxNjoyMiwgRG9uIFNsdXR6IHdyb3RlOgo+Pj4+
PiBPbiAwMi8wNy8xNCAwNTowNSwgSWFuIENhbXBiZWxsIHdyb3RlOgo+Pj4+Pj4gT24gVGh1LCAy
MDE0LTAyLTA2IGF0IDE5OjE1IC0wNTAwLCBEb24gU2x1dHogd3JvdGU6Cj4+Pj4+Pj4+Pj4gY2Mx
OiB3YXJuaW5ncyBiZWluZyB0cmVhdGVkIGFzIGVycm9ycwo+Pj4+Pj4+Pj4+IHhlbmxpZ2h0X3N0
dWJzLmM6IEluIGZ1bmN0aW9uICdEZWZib29sX3ZhbCc6Cj4+Pj4+Pj4+Pj4geGVubGlnaHRfc3R1
YnMuYzozNDQ6IHdhcm5pbmc6IGltcGxpY2l0IGRlY2xhcmF0aW9uIG9mIGZ1bmN0aW9uCj4+Pj4+
Pj4+Pj4gJ0NBTUxyZXR1cm5UJwo+Pj4+Pj4+Pj4+IHhlbmxpZ2h0X3N0dWJzLmM6MzQ0OiBlcnJv
cjogZXhwZWN0ZWQgZXhwcmVzc2lvbiBiZWZvcmUKPj4+Pj4+Pj4+PiAnbGlieGxfZGVmYm9vbCcK
Pj4+Pj4+Pj4+PiB4ZW5saWdodF9zdHVicy5jOiBJbiBmdW5jdGlvbiAnU3RyaW5nX29wdGlvbl92
YWwnOgo+Pj4+Pj4+Pj4+IHhlbmxpZ2h0X3N0dWJzLmM6Mzc5OiBlcnJvcjogZXhwZWN0ZWQgZXhw
cmVzc2lvbiBiZWZvcmUgJ2NoYXInCj4+Pj4+Pj4+Pj4geGVubGlnaHRfc3R1YnMuYzogSW4gZnVu
Y3Rpb24gJ2FvaG93X3ZhbCc6Cj4+Pj4+Pj4+Pj4geGVubGlnaHRfc3R1YnMuYzo0NDA6IGVycm9y
OiBleHBlY3RlZCBleHByZXNzaW9uIGJlZm9yZQo+Pj4+Pj4+Pj4+ICdsaWJ4bF9hc3luY29wX2hv
dycKPj4+Pj4+PiBBbnkgaWRlYSBvbiB3aGF0IHRvIGRvIGFib3V0IG9jYW1sIGlzc3VlPwo+Pj4+
Pj4gTXkgZ3Vlc3MgaXMgdGhhdCB5b3VyIG9jYW1sIGlzIHRvbyBvbGQgYW5kIGRvZXNuJ3Qgc3Vw
cGx5Cj4+Pj4+PiBDQU1McmV0dXJuVC4KPj4+Pj4+IFdoYXQgdmVyc2lvbiBkbyB5b3UgaGF2ZT8K
Pj4+Pj4+Cj4+Pj4+PiBJYW4uCj4+Pj4+Pgo+Pj4+PiBkY3MteGVuLTUzOn4+b2NhbWwgLXZlcnNp
b24KPj4+Pj4gVGhlIE9iamVjdGl2ZSBDYW1sIHRvcGxldmVsLCB2ZXJzaW9uIDMuMDkuMwo+Pj4+
Pgo+Pj4+PiAgICAgLURvbiBTbHV0ego+Pj4+Pgo+Pj4+IFdoaWNoLCBhY2NvcmRpbmcgdG8gZ29v
Z2xlLCB3YXMgaW50cm9kdWNlZCBpbiAzLjA5LjQKPj4+Pgo+Pj4+IEkgdGhpbmsgdGhlIC4vY29u
ZmlndXJlIHNjcmlwdCBuZWVkcyBhIG1pbiB2ZXJzaW9uIGNoZWNrLgo+Pj4gWWVzLCBJIHRoaW5r
IHNvIHRvby4gUm9iLCBjb3VsZCB5b3UgYWR2aXNlIG9uIGEgc3VpdGFibGUgbWluaW11bSBhbmQK
Pj4+IHBlcmhhcHMgcGF0Y2ggdG9vbHMvY29uZmlndXJlLmFjIGFuZC9vciBtNC9vY2FtbC5tNCBh
cyBuZWNlc3NhcnkuCj4+Pgo+Pj4gQWxzbyBDQ2luZyBSb2dlciB3aG8gYWRkZWQgdGhlIG9jYW1s
IGF1dG9jb25mIHN0dWZmLgo+PiBUaGUgT2NhbWwgYXV0b2NvbmYgc3R1ZmYgd2FzIHBpY2tlZCBm
cm9tIGh0dHA6Ly9mb3JnZS5vY2FtbGNvcmUub3JnLy4KPj4gSGVyZSBpcyBhbiB1bnRlc3RlZCBw
YXRjaCBmb3Igb3VyIGNvbmZpZ3VyZSBzY3JpcHQgdG8gY2hlY2sgZm9yIHRoZQo+PiBtaW5pbXVt
IHJlcXVpcmVkIE9DYW1sIHZlcnNpb24gKDMuMDkuMyk6Cj4+Cj4+IChyZW1lbWJlciB0byByZS1n
ZW5lcmF0ZSB0aGUgY29uZmlndXJlIHNjcmlwdCBhZnRlciBhcHBseWluZykKPiAKPiBOb3Qgc3Vy
ZSBpZiB0aGUgb2xkZXIgQXV0b2NvbmYgKDIuNjgpIGlzIGF0IGZhdWx0LCBidXQgSSBnZXQ6Cj4g
Cj4gZGNzLXhlbi01NDp+L3hlbi90b29scz5hdXRvY29uZiBjb25maWd1cmUuYWMgPmNvbmZpZ3Vy
ZQo+IGNvbmZpZ3VyZS5hYzoxNDogZXJyb3I6IHBvc3NpYmx5IHVuZGVmaW5lZCBtYWNybzogQVNf
SUYKPiAgICAgICBJZiB0aGlzIHRva2VuIGFuZCBvdGhlcnMgYXJlIGxlZ2l0aW1hdGUsIHBsZWFz
ZSB1c2UgbTRfcGF0dGVybl9hbGxvdy4KPiAgICAgICBTZWUgdGhlIEF1dG9jb25mIGRvY3VtZW50
YXRpb24uCj4gY29uZmlndXJlLmFjOjE2MjogZXJyb3I6IHBvc3NpYmx5IHVuZGVmaW5lZCBtYWNy
bzogQUNfTVNHX0VSUk9SCgpPbiBEZWJpYW4gc3lzdGVtcyB5b3UgbmVlZCB0aGUgYXV0b2NvbmYt
YXJjaGl2ZSBwYWNrYWdlIHdoaWNoIHByb3ZpZGVzCnRoZSBBWF9DT01QQVJFX1ZFUlNJT04gbWFj
cm8sIGFuZCB0aGVuIHlvdSBhbHNvIG5lZWQgdG8gcnVuIGFjbG9jYWwKYmVmb3JlIGF1dG9jb25m
LgoKUm9nZXIuCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8v
bGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Mon Feb 10 19:24:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 19:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCwSP-0002ct-O3; Mon, 10 Feb 2014 19:23:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WCwSO-0002cb-6v
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 19:23:52 +0000
Received: from [85.158.143.35:14220] by server-2.bemta-4.messagelabs.com id
	7C/68-10891-74729F25; Mon, 10 Feb 2014 19:23:51 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392060229!4614756!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26283 invoked from network); 10 Feb 2014 19:23:50 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 19:23:50 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 10 Feb 2014 19:23:48 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="650226818"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.5.216])
	by fldsmtpi03.verizon.com with ESMTP; 10 Feb 2014 19:23:47 +0000
Message-ID: <52F92743.8090901@terremark.com>
Date: Mon, 10 Feb 2014 14:23:47 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	Don Slutz <dslutz@verizon.com>, Ian Campbell <Ian.Campbell@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com>
In-Reply-To: <52F92181.8010907@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMTAvMTQgMTM6NTksIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4gT24gMTAvMDIvMTQg
MTk6NDgsIERvbiBTbHV0eiB3cm90ZToKPj4gT24gMDIvMTAvMTQgMDU6NTksIFJvZ2VyIFBhdSBN
b25uw6kgd3JvdGU6Cj4+PiBPbiAxMC8wMi8xNCAxMDo1NSwgSWFuIENhbXBiZWxsIHdyb3RlOgo+
Pj4+IE9uIEZyaSwgMjAxNC0wMi0wNyBhdCAxNjoyOSArMDAwMCwgQW5kcmV3IENvb3BlciB3cm90
ZToKPj4+Pj4gT24gMDcvMDIvMTQgMTY6MjIsIERvbiBTbHV0eiB3cm90ZToKPj4+Pj4+IE9uIDAy
LzA3LzE0IDA1OjA1LCBJYW4gQ2FtcGJlbGwgd3JvdGU6Cj4+Pj4+Pj4gT24gVGh1LCAyMDE0LTAy
LTA2IGF0IDE5OjE1IC0wNTAwLCBEb24gU2x1dHogd3JvdGU6Cj4+Pj4+Pgpbc25pcF0KPj4+Pj4g
V2hpY2gsIGFjY29yZGluZyB0byBnb29nbGUsIHdhcyBpbnRyb2R1Y2VkIGluIDMuMDkuNAo+Pj4+
Pgo+Pj4+PiBJIHRoaW5rIHRoZSAuL2NvbmZpZ3VyZSBzY3JpcHQgbmVlZHMgYSBtaW4gdmVyc2lv
biBjaGVjay4KPj4+PiBZZXMsIEkgdGhpbmsgc28gdG9vLiBSb2IsIGNvdWxkIHlvdSBhZHZpc2Ug
b24gYSBzdWl0YWJsZSBtaW5pbXVtIGFuZAo+Pj4+IHBlcmhhcHMgcGF0Y2ggdG9vbHMvY29uZmln
dXJlLmFjIGFuZC9vciBtNC9vY2FtbC5tNCBhcyBuZWNlc3NhcnkuCj4+Pj4KPj4+PiBBbHNvIEND
aW5nIFJvZ2VyIHdobyBhZGRlZCB0aGUgb2NhbWwgYXV0b2NvbmYgc3R1ZmYuCj4+PiBUaGUgT2Nh
bWwgYXV0b2NvbmYgc3R1ZmYgd2FzIHBpY2tlZCBmcm9tIGh0dHA6Ly9mb3JnZS5vY2FtbGNvcmUu
b3JnLy4KPj4+IEhlcmUgaXMgYW4gdW50ZXN0ZWQgcGF0Y2ggZm9yIG91ciBjb25maWd1cmUgc2Ny
aXB0IHRvIGNoZWNrIGZvciB0aGUKPj4+IG1pbmltdW0gcmVxdWlyZWQgT0NhbWwgdmVyc2lvbiAo
My4wOS4zKToKPj4+Cj4+PiAocmVtZW1iZXIgdG8gcmUtZ2VuZXJhdGUgdGhlIGNvbmZpZ3VyZSBz
Y3JpcHQgYWZ0ZXIgYXBwbHlpbmcpCj4+IE5vdCBzdXJlIGlmIHRoZSBvbGRlciBBdXRvY29uZiAo
Mi42OCkgaXMgYXQgZmF1bHQsIGJ1dCBJIGdldDoKPj4KPj4gZGNzLXhlbi01NDp+L3hlbi90b29s
cz5hdXRvY29uZiBjb25maWd1cmUuYWMgPmNvbmZpZ3VyZQo+PiBjb25maWd1cmUuYWM6MTQ6IGVy
cm9yOiBwb3NzaWJseSB1bmRlZmluZWQgbWFjcm86IEFTX0lGCj4+ICAgICAgICBJZiB0aGlzIHRv
a2VuIGFuZCBvdGhlcnMgYXJlIGxlZ2l0aW1hdGUsIHBsZWFzZSB1c2UgbTRfcGF0dGVybl9hbGxv
dy4KPj4gICAgICAgIFNlZSB0aGUgQXV0b2NvbmYgZG9jdW1lbnRhdGlvbi4KPj4gY29uZmlndXJl
LmFjOjE2MjogZXJyb3I6IHBvc3NpYmx5IHVuZGVmaW5lZCBtYWNybzogQUNfTVNHX0VSUk9SCj4g
T24gRGViaWFuIHN5c3RlbXMgeW91IG5lZWQgdGhlIGF1dG9jb25mLWFyY2hpdmUgcGFja2FnZSB3
aGljaCBwcm92aWRlcwo+IHRoZSBBWF9DT01QQVJFX1ZFUlNJT04gbWFjcm8sIGFuZCB0aGVuIHlv
dSBhbHNvIG5lZWQgdG8gcnVuIGFjbG9jYWwKPiBiZWZvcmUgYXV0b2NvbmYuCj4KPiBSb2dlci4K
PgoKVGhlIENlbnRPUyA1LjEwIHN5c3RlbSBoYXMgYSB0b28gb2xkIGF1dG9jb25mIHRvIGRvIHRo
ZSByZWJ1aWxkLCBTbyBJIHdlbnQgdG8gYSBGZWRvcmEgMTcgc3lzdGVtLgoKSXQgZG9lcyBoYXZl
IGF1dG9jb25mLWFyY2hpdmUgcGFja2FnZSwgbnV0IHRoYXQgbWFrZSBubyBkaWZmZXJlbmNlOgoK
Ckluc3RhbGxlZDoKICAgYXV0b2NvbmYtYXJjaGl2ZS5ub2FyY2ggMDoyMDEyLjA5LjA4LTEuZmMx
NwoKQ29tcGxldGUhCmRjcy14ZW4tNTQ6fi94ZW4vdG9vbHM+YXV0b2NvbmYgY29uZmlndXJlLmFj
ID5jb25maWd1cmUKY29uZmlndXJlLmFjOjE0OiBlcnJvcjogcG9zc2libHkgdW5kZWZpbmVkIG1h
Y3JvOiBBU19JRgogICAgICAgSWYgdGhpcyB0b2tlbiBhbmQgb3RoZXJzIGFyZSBsZWdpdGltYXRl
LCBwbGVhc2UgdXNlIG00X3BhdHRlcm5fYWxsb3cuCiAgICAgICBTZWUgdGhlIEF1dG9jb25mIGRv
Y3VtZW50YXRpb24uCmNvbmZpZ3VyZS5hYzoxNjI6IGVycm9yOiBwb3NzaWJseSB1bmRlZmluZWQg
bWFjcm86IEFDX01TR19FUlJPUgoKCkFuZCB0aGUgc2FtZSBiYWQgY29kZSBpcyBzdGlsbCBnZW5l
cmF0ZWQ6CgoKICAgICBpZiB0ZXN0ICJ4JE9DQU1MQyIgPSAieG5vIiB8fCB0ZXN0ICJ4JE9DQU1M
RklORCIgPSAieG5vIjsgdGhlbiA6CgogICAgICAgICBpZiB0ZXN0ICJ4JGVuYWJsZV9vY2FtbHRv
b2xzIiA9ICJ4eWVzIjsgdGhlbiA6CgogICAgICAgICAgICAgYXNfZm5fZXJyb3IgJD8gIk9jYW1s
IHRvb2xzIGVuYWJsZWQsIGJ1dCB1bmFibGUgdG8gZmluZCBPY2FtbCIgIiRMSU5FTk8iIDUKZmkK
ICAgICAgICAgb2NhbWx0b29scz0ibiIKCmVsc2UKCiAgICAgICAgIEFYX0NPTVBBUkVfVkVSU0lP
TigkT0NBTUxWRVJTSU9OLCBsdCwgMy4wOS40LAogICAgICAgICAgICAgQVNfSUYoW3Rlc3QgIngk
ZW5hYmxlX29jYW1sdG9vbHMiID0gInh5ZXMiXSwgWwogICAgICAgICAgICAgICAgIEFDX01TR19F
UlJPUihbWW91ciB2ZXJzaW9uIG9mIE9DYW1sOiAkT0NBTUxWRVJTSU9OIGlzIG5vdCBzdXBwb3J0
ZWRdKV0pCiAgICAgICAgICAgICBvY2FtbHRvb2xzPSJuIgogICAgICAgICApCgpmaQoKCgogICAg
LURvbiBTbHV0egoKCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9yZwpodHRw
Oi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com>
In-Reply-To: <52F92181.8010907@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/14 13:59, Roger Pau Monné wrote:
> On 10/02/14 19:48, Don Slutz wrote:
>> On 02/10/14 05:59, Roger Pau Monné wrote:
>>> On 10/02/14 10:55, Ian Campbell wrote:
>>>> On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
>>>>> On 07/02/14 16:22, Don Slutz wrote:
>>>>>> On 02/07/14 05:05, Ian Campbell wrote:
>>>>>>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>>>
[snip]
>>>>> Which, according to google, was introduced in 3.09.4
>>>>>
>>>>> I think the ./configure script needs a min version check.
>>>> Yes, I think so too. Rob, could you advise on a suitable minimum and
>>>> perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary.
>>>>
>>>> Also CCing Roger who added the ocaml autoconf stuff.
>>> The Ocaml autoconf stuff was picked from http://forge.ocamlcore.org/.
>>> Here is an untested patch for our configure script to check for the
>>> minimum required OCaml version (3.09.3):
>>>
>>> (remember to re-generate the configure script after applying)
>> Not sure if the older Autoconf (2.68) is at fault, but I get:
>>
>> dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
>> configure.ac:14: error: possibly undefined macro: AS_IF
>>        If this token and others are legitimate, please use m4_pattern_allow.
>>        See the Autoconf documentation.
>> configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR
> On Debian systems you need the autoconf-archive package which provides
> the AX_COMPARE_VERSION macro, and then you also need to run aclocal
> before autoconf.
>
> Roger.
>

The CentOS 5.10 system has too old an autoconf to do the rebuild, so I went to a Fedora 17 system.

It does have the autoconf-archive package, but that makes no difference:


Installed:
   autoconf-archive.noarch 0:2012.09.08-1.fc17

Complete!
dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
configure.ac:14: error: possibly undefined macro: AS_IF
       If this token and others are legitimate, please use m4_pattern_allow.
       See the Autoconf documentation.
configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR


And the same bad code is still generated:


     if test "x$OCAMLC" = "xno" || test "x$OCAMLFIND" = "xno"; then :

         if test "x$enable_ocamltools" = "xyes"; then :

             as_fn_error $? "Ocaml tools enabled, but unable to find Ocaml" "$LINENO" 5
fi
         ocamltools="n"

else

         AX_COMPARE_VERSION($OCAMLVERSION, lt, 3.09.4,
             AS_IF([test "x$enable_ocamltools" = "xyes"], [
                 AC_MSG_ERROR([Your version of OCaml: $OCAMLVERSION is not supported])])
             ocamltools="n"
         )

fi

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
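[Editor's note: the `lt` comparison that AX_COMPARE_VERSION performs in the generated code above can be sketched in plain sh. This is a hypothetical helper for illustration only, not part of the proposed patch; it relies on GNU `sort -V` for version-aware ordering.]

```shell
#!/bin/sh
# version_lt A B: succeed if dotted version A sorts strictly before B.
# Mirrors the AX_COMPARE_VERSION($OCAMLVERSION, lt, 3.09.4, ...) test.
version_lt() {
    [ "$1" != "$2" ] &&
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if version_lt "3.09.2" "3.09.4"; then
    echo "OCaml 3.09.2 is older than the required 3.09.4"
fi
```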

From xen-devel-bounces@lists.xen.org Mon Feb 10 19:33:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 19:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCwbc-0003Di-7A; Mon, 10 Feb 2014 19:33:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WCwbb-0003Dc-BC
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 19:33:23 +0000
Received: from [85.158.137.68:26385] by server-15.bemta-3.messagelabs.com id
	B6/81-19263-28929F25; Mon, 10 Feb 2014 19:33:22 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392060799!930423!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8599 invoked from network); 10 Feb 2014 19:33:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 19:33:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,819,1384300800"; d="scan'208";a="101407085"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 10 Feb 2014 19:33:10 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 10 Feb 2014 14:33:09 -0500
Message-ID: <52F92973.9080804@citrix.com>
Date: Mon, 10 Feb 2014 20:33:07 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>
In-Reply-To: <52F92743.8090901@terremark.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 20:23, Don Slutz wrote:
> On 02/10/14 13:59, Roger Pau Monné wrote:
>> On 10/02/14 19:48, Don Slutz wrote:
>>> On 02/10/14 05:59, Roger Pau Monné wrote:
>>>> On 10/02/14 10:55, Ian Campbell wrote:
>>>>> On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
>>>>>> On 07/02/14 16:22, Don Slutz wrote:
>>>>>>> On 02/07/14 05:05, Ian Campbell wrote:
>>>>>>>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>>>>>>>
> [snip]
>>>>>> Which, according to google, was introduced in 3.09.4
>>>>>>
>>>>>> I think the ./configure script needs a min version check.
>>>>> Yes, I think so too. Rob, could you advise on a suitable minimum and
>>>>> perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary.
>>>>>
>>>>> Also CCing Roger who added the ocaml autoconf stuff.
>>>> The Ocaml autoconf stuff was picked from http://forge.ocamlcore.org/.
>>>> Here is an untested patch for our configure script to check for the
>>>> minimum required OCaml version (3.09.3):
>>>>
>>>> (remember to re-generate the configure script after applying)
>>> Not sure if the older Autoconf (2.68) is at fault, but I get:
>>>
>>> dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
>>> configure.ac:14: error: possibly undefined macro: AS_IF
>>>        If this token and others are legitimate, please use
>>> m4_pattern_allow.
>>>        See the Autoconf documentation.
>>> configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR
>> On Debian systems you need the autoconf-archive package which provides
>> the AX_COMPARE_VERSION macro, and then you also need to run aclocal
>> before autoconf.
>>
>> Roger.
>>
>
> The CentOS 5.10 system has too old an autoconf to do the rebuild, so I
> went to a Fedora 17 system.
>
> It does have the autoconf-archive package, but that makes no difference:
>
>
> Installed:
>   autoconf-archive.noarch 0:2012.09.08-1.fc17
>
> Complete!
> dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
> configure.ac:14: error: possibly undefined macro: AS_IF
>       If this token and others are legitimate, please use m4_pattern_allow.
>       See the Autoconf documentation.
> configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR

Did you run aclocal before autoconf?

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
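[Editor's note: for readers hitting the same errors, the regeneration order Roger is asking about is sketched below. This is an illustrative how-to fragment, assuming the macros live under tools/m4/ as the thread suggests; paths may differ.]

```
cd tools
aclocal -I m4   # pulls macro definitions (e.g. AX_COMPARE_VERSION) into aclocal.m4
autoconf        # configure.ac can now expand the macros instead of leaving them undefined
```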

From xen-devel-bounces@lists.xen.org Mon Feb 10 19:42:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 19:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCwjy-0003pU-QL; Mon, 10 Feb 2014 19:42:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WCwjx-0003pP-9x
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 19:42:01 +0000
Received: from [193.109.254.147:4032] by server-8.bemta-14.messagelabs.com id
	8A/24-18529-88B29F25; Mon, 10 Feb 2014 19:42:00 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392061319!3331446!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15300 invoked from network); 10 Feb 2014 19:41:59 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-15.tower-27.messagelabs.com with SMTP;
	10 Feb 2014 19:41:59 -0000
X-TM-IMSS-Message-ID: <75e8b45c0003eed0@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 75e8b45c0003eed0 ;
	Mon, 10 Feb 2014 14:43:12 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1AJfqKh029343; 
	Mon, 10 Feb 2014 14:41:54 -0500
Message-ID: <52F92B4A.3010805@tycho.nsa.gov>
Date: Mon, 10 Feb 2014 14:40:58 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jordi Cucurull Juan <jordi.cucurull@scytl.com>,
	xen-devel@lists.xenproject.org
References: <52F26C40.2060901@scytl.com>
In-Reply-To: <52F26C40.2060901@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
> Dear all,
>
> I have recently configured a Xen 4.3 server with the vTPM enabled and a
> guest virtual machine that takes advantage of it. After playing a bit
> with it, I have a few questions:
>
> 1. According to the documentation, to shut down the vTPM stubdom it is
> only necessary to shut down the guest VM normally; the vTPM stubdom
> should then shut down automatically. Nevertheless, if I shut down the
> guest, the vTPM stubdom stays active and, moreover, I can start the
> machine again and the vTPM's values are the last ones from the
> previous instance of the guest. Is this normal?

The documentation is in error here; while this was originally how the vTPM
domain behaved, this automatic shutdown was not reliable: it was not done
if the peer domain did not use the vTPM, and it was incorrectly triggered
by pv-grub's use of the vTPM to record guest kernel measurements (which was
the immediate reason for its removal). The solution now is to either send a
shutdown request or simply destroy the vTPM upon guest shutdown.

An alternative that may require less work on your part is to destroy
the vTPM stub domain during a guest's construction, something like:

#!/bin/sh -e
# Recreate the guest's vTPM domain just before (re)creating the guest,
# so a fresh vTPM is always paired with a fresh guest instance.
xl destroy "$1-vtpm" || true    # ignore the error if no vTPM is running
xl create "$1-vtpm.cfg"
xl create "$1-domu.cfg"

Allowing a vTPM to remain active across a guest restart will cause the
PCR values extended by pv-grub to be incorrect, as you observed in your
second email. In order for the vTPM's PCRs to be useful for quotes or
releasing sealed secrets, you need to ensure that a new vTPM is started
if and only if it is paired with a corresponding guest.
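[Editor's note: as a concrete illustration of that pairing, the guest's xl configuration can name its dedicated vTPM domain as the backend. A sketch only; `guest1-vtpm` is a placeholder for whatever the vTPM domain is actually called.]

```
# Hypothetical fragment of guest1-domu.cfg: attach the guest's TPM
# frontend to its dedicated vTPM backend domain.
vtpm = [ 'backend=guest1-vtpm' ]
```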

> 2. In the documentation it is recommended to avoid accessing the
> physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
> without any apparent issue. Why is it not recommended to access the
> physical TPM directly from Dom0?

While most of the time it is not a problem to have two entities talking to
the physical TPM, it is possible for the TrouSerS daemon in dom0 to interfere
with key handles used by the TPM Manager. There are also certain operations
of the TPM that may not handle concurrency, although I do not believe that
TrouSerS uses them: SHA1Start, the DAA commands, and certain audit logs
come to mind.

The other reason why it is recommended to avoid pTPM access from dom0 is
because the ability to send unseal/unbind requests to the physical TPM makes
it possible for applications running in dom0 to decrypt the TPM Manager's
data (and thereby access vTPM private keys).

At present, sharing the physical TPM between dom0 and the TPM Manager is
the only way to get full integrity checks.

> 3. If it is not recommended to access the physical TPM directly from
> Dom0, what is the advisable way to check the integrity of this domain?
> With solutions such as TBOOT and Intel TXT?

While the TPM Manager in Xen 4.3/4.4 does not yet have this functionality,
an update which I will be submitting for inclusion in Xen 4.5 has the
ability to get physical TPM quotes using a virtual TPM. Combined with an
early domain builder, the eventual goal is to have dom0 use a vTPM for
its integrity/reporting/sealing operations, and use the physical TPM only
to secure the secrets of vTPMs and for deep quotes to provide fresh proofs
of the system's state.

> Best regards,
> Jordi.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 19:42:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 19:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCwjy-0003pU-QL; Mon, 10 Feb 2014 19:42:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WCwjx-0003pP-9x
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 19:42:01 +0000
Received: from [193.109.254.147:4032] by server-8.bemta-14.messagelabs.com id
	8A/24-18529-88B29F25; Mon, 10 Feb 2014 19:42:00 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392061319!3331446!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15300 invoked from network); 10 Feb 2014 19:41:59 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-15.tower-27.messagelabs.com with SMTP;
	10 Feb 2014 19:41:59 -0000
X-TM-IMSS-Message-ID: <75e8b45c0003eed0@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 75e8b45c0003eed0 ;
	Mon, 10 Feb 2014 14:43:12 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1AJfqKh029343; 
	Mon, 10 Feb 2014 14:41:54 -0500
Message-ID: <52F92B4A.3010805@tycho.nsa.gov>
Date: Mon, 10 Feb 2014 14:40:58 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jordi Cucurull Juan <jordi.cucurull@scytl.com>,
	xen-devel@lists.xenproject.org
References: <52F26C40.2060901@scytl.com>
In-Reply-To: <52F26C40.2060901@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
> Dear all,
>
> I have recently configured a Xen 4.3 server with the vTPM enabled and a
> guest virtual machine that takes advantage of it. After playing a bit
> with it, I have a few questions:
>
> 1.According to the documentation, to shutdown the vTPM stubdom it is
> only needed to normally shutdown the guest VM. Theoretically, the vTPM
> stubdom automatically shuts down after this. Nevertheless, if I shutdown
> the guest the vTPM stubdom continues active and, moreover, I can start
> the machine again and the values of the vTPM are the last ones there
> were in the previous instance of the guest. Is this normal?

The documentation is in error here; while this was originally how the vTPM
domain behaved, this automatic shutdown was not reliable: it was not done
if the peer domain did not use the vTPM, and it was incorrectly triggered
by pv-grub's use of the vTPM to record guest kernel measurements (which was
the immediate reason for its removal). The solution now is to either send a
shutdown request or simply destroy the vTPM upon guest shutdown.

An alternative that may require less work on your part is to destroy
the vTPM stub domain during a guest's construction, something like:

#!/bin/sh -e
xl destroy "$1-vtpm" || true
xl create $1-vtpm.cfg
xl create $1-domu.cfg

Allowing a vTPM to remain active across a guest restart will cause the
PCR values extended by pv-grub to be incorrect, as you observed in your
second email. In order for the vTPM's PCRs to be useful for quotes or
releasing sealed secrets, you need to ensure that a new vTPM is started
if and only if it is paired with a corresponding guest.

> 2. In the documentation it is recommended to avoid accessing the
> physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
> without any apparent issue. Why is directly accessing the physical
> TPM from Dom0 not recommended?

While most of the time it is not a problem to have two entities talking to
the physical TPM, it is possible for the TrouSerS daemon in dom0 to interfere
with key handles used by the TPM Manager. There are also certain TPM
operations that may not handle concurrency, although I do not believe that
TrouSerS uses them: SHA1Start, the DAA commands, and certain audit logs
come to mind.

The other reason why it is recommended to avoid pTPM access from dom0 is
because the ability to send unseal/unbind requests to the physical TPM makes
it possible for applications running in dom0 to decrypt the TPM Manager's
data (and thereby access vTPM private keys).

At present, sharing the physical TPM between dom0 and the TPM Manager is
the only way to get full integrity checks.

> 3. If it is not recommended to directly access the physical TPM in
> Dom0, what is the advisable way to check the integrity of this domain?
> With solutions such as tboot and Intel TXT?

While the TPM Manager in Xen 4.3/4.4 does not yet have this functionality,
an update which I will be submitting for inclusion in Xen 4.5 has the
ability to get physical TPM quotes using a virtual TPM. Combined with an
early domain builder, the eventual goal is to have dom0 use a vTPM for
its integrity/reporting/sealing operations, and use the physical TPM only
to secure the secrets of vTPMs and for deep quotes to provide fresh proofs
of the system's state.

> Best regards,
> Jordi.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 19:54:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 19:54:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCwvo-0004Gb-Q5; Mon, 10 Feb 2014 19:54:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <axboe@kernel.dk>) id 1WCwvi-0004GW-SO
	for xen-devel@lists.xensource.com; Mon, 10 Feb 2014 19:54:15 +0000
Received: from [85.158.137.68:57515] by server-7.bemta-3.messagelabs.com id
	7A/78-13775-26E29F25; Mon, 10 Feb 2014 19:54:10 +0000
X-Env-Sender: axboe@kernel.dk
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392062047!909411!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29620 invoked from network); 10 Feb 2014 19:54:09 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 19:54:09 -0000
Received: by mail-pb0-f45.google.com with SMTP id un15so6663060pbc.18
	for <xen-devel@lists.xensource.com>;
	Mon, 10 Feb 2014 11:54:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to;
	bh=3UYrwujjUQ485rUTIgqEe9BLJJQeWXAgbxasoZDXxbo=;
	b=lzdvXkz8b68YNNOaBuPWccsWFse4SYL1zLMgE7gD9GsE6t3m4RlWFdWO1q6LnylAX2
	qmDWJQoAtFxj859FsASt9S4XPaotDrVMS9c27TPj2pVuIu4DTv20HbXPeSuHgG2ZL88f
	hxXUTQDlNxH3DE9WnCF09Z2hY3HcTN5RlTOuAY5hWj/MFYYxRKdzpGhRQGiDaRhFXDeI
	9z8JO1K2oDhHIMcmAwwmwDvJ1d+MslZqftpsMeuOKyBZo5UhsXPkOezUGT9nje6cUV7E
	q2wsltlTmCmQ59Yh2qX7V3gPO7O8UrTt4JNG5TP1AXpnfke3CMQXC2Vw6XqUPWq8Mqot
	GbDg==
X-Gm-Message-State: ALoCoQk8iPTPCHUiDS8rF30w0gESmq0+RZr6zM/Xj4VT94DU3zBxuGWMX6CGYIP3sq9tRfRZkU+6
X-Received: by 10.68.170.66 with SMTP id ak2mr39396464pbc.5.1392062046920;
	Mon, 10 Feb 2014 11:54:06 -0800 (PST)
Received: from localhost (66.29.187.52.static.utbb.net. [66.29.187.52])
	by mx.google.com with ESMTPSA id
	dr1sm45222974pbc.18.2014.02.10.11.54.04 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 11:54:05 -0800 (PST)
Date: Mon, 10 Feb 2014 12:54:02 -0700
From: Jens Axboe <axboe@kernel.dk>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140210195402.GA3924@kernel.dk>
References: <20140210184412.GA18198@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140210184412.GA18198@phenom.dumpdata.com>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
> Hey Jens,
> 
> Please git pull the following branch:
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
> 
> which is based off v3.13-rc6. If you would like me to rebase it on
> a different branch/tag I would be more than happy to do so.

Older is fine, it's only an issue if you are ahead of the branch you
want to go into.

> The patches are all bug-fixes and hopefully can go in 3.14.
> 
> They deal with xen-blkback shutdown issues that caused memory leaks
> as well as shutdown races. They should go to the stable tree, and if
> you are OK with it I will ask for those fixes to be backported.
> 
> There is also a fix to xen-blkfront to deal with an unexpected state
> transition. And lastly a fix to the header where it was using
> __aligned__ unnecessarily.

Pulled!

-- 
Jens Axboe


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 20:01:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 20:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCx2t-0004nc-Bi; Mon, 10 Feb 2014 20:01:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WCx2r-0004nX-1R
	for Xen-devel@lists.xen.org; Mon, 10 Feb 2014 20:01:33 +0000
Received: from [193.109.254.147:13540] by server-12.bemta-14.messagelabs.com
	id F2/8E-17220-C1039F25; Mon, 10 Feb 2014 20:01:32 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392062488!3351181!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9064 invoked from network); 10 Feb 2014 20:01:29 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 20:01:29 -0000
Received: from mail-ie0-f179.google.com (mail-ie0-f179.google.com
	[209.85.223.179]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1AK1Q59023214
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Mon, 10 Feb 2014 12:01:27 -0800
Received: by mail-ie0-f179.google.com with SMTP id ar20so3886423iec.38
	for <Xen-devel@lists.xen.org>; Mon, 10 Feb 2014 12:01:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=sR/2KMeWYScDjft8OoIijmh/ydmqUZxNMWbBVl5lses=;
	b=jyu8l/MNyiqAWFvywCME/RfuzA1fWgbLRwXoHozc82gboXtbHio5Y1cwI2uO8Wgit+
	ACQjvecpaw1d1+g5x+xH+EIQsXyJgcJT2Hfy9wMGwiRnDPqL5vXGx/tw9u9m1u6aNX5G
	j6/CUOZyO4r0OHFKWxH4T3opugQ4qrYV8NUqGpsvHpETeVdJCZ0MzVDmIdx0kjcsxtIG
	PA8ypy4fuX3raaxzmrxqOgJ91hs9Na/av0CCr1D06xk0sHkVDQ0jNo826nuLzw1JJy/0
	GHoqSaCfFDO/7JwE9KDzknx6MVhT85xy7HQ+fRSBi61RB9gHwq0d1bSPbyx5BdqRn1NT
	lAgQ==
X-Received: by 10.50.78.200 with SMTP id d8mr16024759igx.38.1392062485040;
	Mon, 10 Feb 2014 12:01:25 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Mon, 10 Feb 2014 12:00:44 -0800 (PST)
In-Reply-To: <52F90A71.40802@citrix.com>
References: <52F90A71.40802@citrix.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Mon, 10 Feb 2014 12:00:44 -0800
Message-ID: <CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7328091245157788830=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7328091245157788830==
Content-Type: multipart/alternative; boundary=089e013c6eeaed1ee804f212cd90

--089e013c6eeaed1ee804f212cd90
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com> wrote:

> Here is a draft of a proposal for a new domain save image format.  It
> does not currently cover all use cases (e.g., images for HVM guest are
> not considered).
>
> http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>
> Introduction
> ============
>
> Revision History
> ----------------
>
> --------------------------------------------------------------------
> Version  Date         Changes
> -------  -----------  ----------------------------------------------
> Draft A  6 Feb 2014   Initial draft.
>
> Draft B  10 Feb 2014  Corrected image header field widths.
>
>                       Minor updates and clarifications.
> --------------------------------------------------------------------
>
> Purpose
> -------
>
> The _domain save image_ is the context of a running domain used for
> snapshots of a domain or for transferring domains between hosts during
> migration.
>
> There are a number of problems with the format of the domain save
> image used in Xen 4.4 and earlier (the _legacy format_).
>
> * Dependent on toolstack word size.  A number of fields within the
>   image are native types such as `unsigned long` which have different
>   sizes between 32-bit and 64-bit hosts.  This prevents domains from
>   being migrated between 32-bit and 64-bit hosts.
>
> * There is no header identifying the image.
>
> * The image has no version information.
>
> A new format that addresses the above is required.
>
> ARM does not yet have a domain save image format specified; the
> format described in this specification should be suitable.
>
>

I suggest keeping the processing overhead in mind when designing the new
image format. Some key things have been addressed, such as making sure
data is always padded to maintain alignment. But there are also some
aspects of this proposal that seem awfully unnecessary. More details below.


>
> Overview
> ========
>
> The image format consists of two main sections:
>
> * _Headers_
> * _Records_
>
> Headers
> -------
>
> There are two headers: the _image header_, and the _domain header_.
> The image header describes the format of the image (version etc.).
> The _domain header_ contains general information about the domain
> (architecture, type etc.).
>
> Records
> -------
>
> The main part of the format is a sequence of different _records_.
> Each record type contains information about the domain context.  At a
> minimum there is an END record marking the end of the records section.
>
>
> Fields
> ------
>
> All the fields within the headers and records have a fixed width.
>
> Fields are always aligned to their size.
>
> Padding and reserved fields are set to zero on save and must be
> ignored during restore.
>
>
So far so good.


> Integer (numeric) fields in the image header are always in big-endian
> byte order.
>
> Integer fields in the domain header and in the records are in the
> endianess described in the image header (which will typically be the
> native ordering).
>
>

It's tempting to adopt all the TCP-style madness for transferring a set of
structured data.  Why this endianness mess?  Am I missing something here?
I am assuming that the lion's share of Xen's deployments are on x86
(not including Amazon). So that leaves ARM.  Why not let those
processors take the hit of the endianness conversion?

Headers
> =======
>
> Image Header
> ------------
>
> The image header identifies an image as a Xen domain save image.  It
> includes the version of this specification that the image complies
> with.
>
> Tools supporting version _V_ of the specification shall always save
> images using version _V_.  Tools shall support restoring from version
> _V_ and version _V_ - 1.  Tools may additionally support restoring
> from earlier versions.
>
> The marker field can be used to distinguish between legacy images and
> those corresponding to this specification.  Legacy images will have
> one or more zero bits within the first 8 octets of the image.
>
> Fields within the image header are always in _big-endian_ byte order,
> regardless of the setting of the endianness bit.
>

and more endian-ness mess.


>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | marker                                          |
>     +-----------------------+-------------------------+
>     | id                    | version                 |
>     +-----------+-----------+-------------------------+
>     | options   |                                     |
>     +-----------+-------------------------------------+
>
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> marker      0xFFFFFFFFFFFFFFFF.
>
> id          0x58454E46 ("XENF" in ASCII).
>
> version     0x00000001.  The version of this specification.
>
> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
>
>             bit 1-15: Reserved.
> --------------------------------------------------------------------
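Reading the table and diagram above, the fixed 24-octet layout can be
sketched in Python (the helper names and the use of Python's struct
module are my own illustration, not part of the draft):

```python
import struct

MARKER = 0xFFFFFFFFFFFFFFFF
ID = 0x58454E46          # "XENF" in ASCII
VERSION = 0x00000001

def pack_image_header(big_endian_stream=False):
    # Image-header fields are always big-endian; the options bit only
    # describes the endianness of the domain header and records.
    options = 1 if big_endian_stream else 0
    # marker(8) + id(4) + version(4) + options(2) + 6 reserved octets
    return struct.pack(">QIIH6x", MARKER, ID, VERSION, options)

def parse_image_header(buf):
    marker, ident, version, options = struct.unpack_from(">QIIH", buf)
    if marker != MARKER:
        raise ValueError("zero bits in first 8 octets: legacy image")
    if ident != ID:
        raise ValueError("not a Xen domain save image")
    return version, bool(options & 1)
```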
>
> Domain Header
> -------------
>
> The domain header includes general properties of the domain.
>
>      0      1     2     3     4     5     6     7 octet
>     +-----------+-----------+-----------+-------------+
>     | arch      | type      | page_shift| (reserved)  |
>     +-----------+-----------+-----------+-------------+
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> arch        0x0000: Reserved.
>
>             0x0001: x86.
>
>             0x0002: ARM.
>
> type        0x0000: Reserved.
>
>             0x0001: x86 PV.
>
>             0x0002 - 0xFFFF: Reserved.
>
> page_shift  Size of a guest page as a power of two.
>
>             i.e., page size = 2^page_shift^.
> --------------------------------------------------------------------
>
>
> Records
> =======
>
> A record has a record header, type specific data and a trailing
> footer.  If body_length is not a multiple of 8, the body is padded
> with zeroes to align the checksum field on an 8 octet boundary.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | type                  | body_length             |
>     +-----------+-----------+-------------------------+
>     | options   | (reserved)                          |
>     +-----------+-------------------------------------+
>     ...
>     Record body of length body_length octets followed by
>     0 to 7 octets of padding.
>     ...
>     +-----------------------+-------------------------+
>     | checksum              | (reserved)              |
>     +-----------------------+-------------------------+
>
>
I am assuming that the checksum field is present only
for debugging purposes? Otherwise, I see no reason for the
computational overhead, given that we are already sending data
over a reliable channel, and IIRC we already have an image-wide
checksum when saving the image to disk.

If debugging is the only use case, then I guess the type field
can be prefixed with a 1/0 bit, eliminating the need for the
1-bit checksum options field + 7-byte padding. Similarly, if debugging
mode is not set, why waste another 8 bytes at the end for the checksum
field? Unless, that is, you think there may be more record types that
need special options.

Feel free to correct me if I am missing something elementary here.




> --------------------------------------------------------------------
> Field        Description
> -----------  -------------------------------------------------------
> type         0x00000000: END
>
>              0x00000001: PAGE_DATA
>
>              0x00000002: VCPU_INFO
>
>              0x00000003: VCPU_CONTEXT
>
>              0x00000004: X86_PV_INFO
>
>              0x00000005: P2M
>
>              0x00000006 - 0xFFFFFFFF: Reserved
>
> body_length  Length in octets of the record body.
>
> options      Bit 0: 0 - checksum invalid, 1 = checksum valid.
>
>              Bit 1-15: Reserved.
>
> checksum     CRC-32 checksum of the record body (including any trailing
>              padding), or 0x00000000 if the checksum field is invalid.
> --------------------------------------------------------------------
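The padding and checksum rules in the table above can be sketched as
follows (a sketch assuming little-endian integers; the helper name and
framing are mine, not the draft's):

```python
import struct
import zlib

def pack_record(rtype, body, with_checksum=True):
    # Pad the body to an 8-octet boundary so the trailing checksum
    # field stays aligned, as the record layout requires.
    padded = body + b"\x00" * ((-len(body)) % 8)
    options = 1 if with_checksum else 0
    # type(4) + body_length(4) + options(2) + 6 reserved octets
    header = struct.pack("<IIH6x", rtype, len(body), options)
    # The CRC-32 covers the body including any trailing padding.
    csum = zlib.crc32(padded) if with_checksum else 0
    footer = struct.pack("<I4x", csum)
    return header + padded + footer
```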
>
> The following sub-sections specify the record body format for each of
> the record types.
>
> END
> ----
>
> An END record marks the end of the image.
>
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>
> The end record contains no fields; its body_length is 0.
>
> PAGE_DATA
> ---------
>
> The bulk of an image consists of many PAGE_DATA records containing the
> memory contents.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | count (C)             | (reserved)              |
>     +-----------------------+-------------------------+
>     | pfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | pfn[C-1]                                        |
>     +-------------------------------------------------+
>     | page_data[0]...                                 |
>     ...
>     +-------------------------------------------------+
>     | page_data[N-1]...                               |
>     ...
>     +-------------------------------------------------+
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> count       Number of pages described in this record.
>
> pfn         An array of count PFNs. Bits 63-60 contain
>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.
>
> page_data   page_size octets of uncompressed page contents for each page
>             set as present in the pfn array.
> --------------------------------------------------------------------
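Since the pfn entries pack the type into the top nibble, splitting an
entry is a couple of shifts (a sketch; the names are mine):

```python
PFINFO_SHIFT = 60

def split_pfn_entry(entry):
    # Bits 63-60 hold the XEN_DOMCTL_PFINFO_* value; bits 59-0 the PFN.
    pfinfo = entry >> PFINFO_SHIFT
    pfn = entry & ((1 << PFINFO_SHIFT) - 1)
    return pfinfo, pfn
```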
>
>
s/uncompressed/(compressed/uncompressed)/
(Remus sends compressed data)


> VCPU_INFO
> ---------
>
> > [ This is a combination of parts of the extended-info and
> > XC_SAVE_ID_VCPU_INFO chunks. ]
>
> The VCPU_INFO record includes the maximum possible VCPU ID.  This will
> be followed by a VCPU_CONTEXT record for each online VCPU.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+------------------------+
>     | max_vcpu_id           | (reserved)             |
>     +-----------------------+------------------------+
>
> --------------------------------------------------------------------
> Field        Description
> -----------  ---------------------------------------------------
> max_vcpu_id  Maximum possible VCPU ID.
> --------------------------------------------------------------------
>
>
> VCPU_CONTEXT
> ------------
>
> The context for a single VCPU.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | vcpu_id               | (reserved)              |
>     +-----------------------+-------------------------+
>     | vcpu_ctx...                                     |
>     ...
>     +-------------------------------------------------+
>
> --------------------------------------------------------------------
> Field            Description
> -----------      ---------------------------------------------------
> vcpu_id          The VCPU ID.
>
> vcpu_ctx         Context for this VCPU.
> --------------------------------------------------------------------
>
> [ vcpu_ctx format TBD. ]
>
>
> X86_PV_INFO
> -----------
>
> > [ This record replaces part of the extended-info chunk. ]
>
>      0     1     2     3     4     5     6     7 octet
>     +-----+-----+-----+-------------------------------+
>     | w   | ptl | o   | (reserved)                    |
>     +-----+-----+-----+-------------------------------+
>
> --------------------------------------------------------------------
> Field            Description
> -----------      ---------------------------------------------------
> guest_width (w)  Guest width in octets (either 4 or 8).
>
> pt_levels (ptl)  Number of page table levels (either 3 or 4).
>
> options (o)      Bit 0: 0 - no VMASST_pae_extended_cr3,
>                  1 - VMASST_pae_extended_cr3.
>
>                  Bit 1-7: Reserved.
> --------------------------------------------------------------------
>
>
> P2M
> ---
>
> [ This is a more flexible replacement for the old p2m_size field and
> p2m array. ]
>
> The P2M record contains a portion of the source domain's P2M.
> Multiple P2M records may be sent if the source P2M changes during the
> stream.
>
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | pfn_begin                                       |
>     +-------------------------------------------------+
>     | pfn_end                                         |
>     +-------------------------------------------------+
>     | mfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | mfn[N-1]                                        |
>     +-------------------------------------------------+
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> pfn_begin   The first PFN in this portion of the P2M
>
> pfn_end     One past the last PFN in this portion of the P2M.
>
> mfn         Array of (pfn_end - pfn_begin) MFNs corresponding to
>             the set of PFNs in the range [pfn_begin, pfn_end).
> --------------------------------------------------------------------
>
>
> Layout
> ======
>
> The set of valid records depends on the guest architecture and type.
>
> x86 PV Guest
> ------------
>
> An x86 PV guest image will have in this order:
>
> 1. Image header
> 2. Domain header
> 3. X86_PV_INFO record
> 4. At least one P2M record
> 5. At least one PAGE_DATA record

> 6. VCPU_INFO record
> 7. At least one VCPU_CONTEXT record
> 8. END record
>
>
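The layout above boils down to a loop that walks records until END; a
minimal reader sketch (assuming a 16-octet record header, 8-octet
footer, and little-endian integers; the helper name is mine):

```python
import struct
from io import BytesIO

REC_END = 0x00000000

def read_records(stream):
    # Yield (type, body) pairs, stopping after the END record.
    while True:
        rtype, body_len, _options = struct.unpack("<IIH6x", stream.read(16))
        # The body is padded with zeroes to an 8-octet boundary.
        padded_len = body_len + ((-body_len) % 8)
        body = stream.read(padded_len)[:body_len]
        stream.read(8)  # checksum + reserved footer
        yield rtype, body
        if rtype == REC_END:
            return
```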
There seems to be a bunch of info missing. Here are some missing
elements that I can recall at the moment:

a) There is no support for sending one-time markers that switch the
receiver's operating mode in the middle of a data stream, e.g.
XC_SAVE_ENABLE_COMPRESSION, XC_SAVE_ENABLE_VERIFY_MODE,
XC_SAVE_ID_LAST_CHECKPOINT, etc.

b) In the PV case, the tail also has a list of unmapped PFNs at the
end of every iteration.

c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device context
information (generally for HVMs).


>
> Legacy Images (x86 only)
> ========================
>
> Restoring legacy images from older tools shall be handled by
> translating the legacy format image into this new format.
>
> It shall not be possible to save in the legacy format.
>
> There are two different legacy images depending on whether they were
> generated by a 32-bit or a 64-bit toolstack. These shall be
> distinguished by inspecting octets 4-7 in the image.  If these are
> zero then it is a 64-bit image.
>
> Toolstack  Field                            Value
> ---------  -----                            -----
> 64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
> 32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
> 32-bit     Chunk type (HVM)                 < 0
> 32-bit     Page count (HVM)                 > 0
>
> Table: Possible values for octet 4-7 in legacy images
>
> This assumes the presence of the extended-info chunk which was
> introduced in Xen 3.0.
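The detection described above amounts to a couple of byte tests on the
first 8 octets (a sketch; the function names are mine):

```python
def is_legacy_image(first8):
    # A new-format image begins with the marker 0xFFFFFFFFFFFFFFFF; any
    # zero bit in the first 8 octets therefore means a legacy image.
    return first8 != b"\xff" * 8

def legacy_toolstack_bits(first8):
    # On a 64-bit toolstack, octets 4-7 are the high half of the
    # little-endian p2m_size, which is zero since p2m_size < 2^32.
    return 64 if first8[4:8] == b"\x00" * 4 else 32
```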
>
>
> Future Extensions
> =================
>
> All changes to this format require the image version to be increased.
>
> The format may be extended by adding additional record types.
>
> Extending an existing record type must be done by adding a new record
> type.  This allows old images with the old record to still be
> restored.
>
> The image header may be extended by _appending_ additional fields.  In
> particular, the `marker`, `id` and `version` fields must never change
> size or location.
>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Mon Feb 10 20:01:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 20:01:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCx2t-0004nc-Bi; Mon, 10 Feb 2014 20:01:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WCx2r-0004nX-1R
	for Xen-devel@lists.xen.org; Mon, 10 Feb 2014 20:01:33 +0000
Received: from [193.109.254.147:13540] by server-12.bemta-14.messagelabs.com
	id F2/8E-17220-C1039F25; Mon, 10 Feb 2014 20:01:32 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392062488!3351181!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9064 invoked from network); 10 Feb 2014 20:01:29 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 20:01:29 -0000
Received: from mail-ie0-f179.google.com (mail-ie0-f179.google.com
	[209.85.223.179]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1AK1Q59023214
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Mon, 10 Feb 2014 12:01:27 -0800
Received: by mail-ie0-f179.google.com with SMTP id ar20so3886423iec.38
	for <Xen-devel@lists.xen.org>; Mon, 10 Feb 2014 12:01:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=sR/2KMeWYScDjft8OoIijmh/ydmqUZxNMWbBVl5lses=;
	b=jyu8l/MNyiqAWFvywCME/RfuzA1fWgbLRwXoHozc82gboXtbHio5Y1cwI2uO8Wgit+
	ACQjvecpaw1d1+g5x+xH+EIQsXyJgcJT2Hfy9wMGwiRnDPqL5vXGx/tw9u9m1u6aNX5G
	j6/CUOZyO4r0OHFKWxH4T3opugQ4qrYV8NUqGpsvHpETeVdJCZ0MzVDmIdx0kjcsxtIG
	PA8ypy4fuX3raaxzmrxqOgJ91hs9Na/av0CCr1D06xk0sHkVDQ0jNo826nuLzw1JJy/0
	GHoqSaCfFDO/7JwE9KDzknx6MVhT85xy7HQ+fRSBi61RB9gHwq0d1bSPbyx5BdqRn1NT
	lAgQ==
X-Received: by 10.50.78.200 with SMTP id d8mr16024759igx.38.1392062485040;
	Mon, 10 Feb 2014 12:01:25 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Mon, 10 Feb 2014 12:00:44 -0800 (PST)
In-Reply-To: <52F90A71.40802@citrix.com>
References: <52F90A71.40802@citrix.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Mon, 10 Feb 2014 12:00:44 -0800
Message-ID: <CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7328091245157788830=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com>wrote:

> Here is a draft of a proposal for a new domain save image format.  It
> does not currently cover all use cases (e.g., images for HVM guest are
> not considered).
>
> http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>
> Introduction
> ============
>
> Revision History
> ----------------
>
> --------------------------------------------------------------------
> Version  Date         Changes
> -------  -----------  ----------------------------------------------
> Draft A  6 Feb 2014   Initial draft.
>
> Draft B  10 Feb 2014  Corrected image header field widths.
>
>                       Minor updates and clarifications.
> --------------------------------------------------------------------
>
> Purpose
> -------
>
> The _domain save image_ is the context of a running domain used for
> snapshots of a domain or for transferring domains between hosts during
> migration.
>
> There are a number of problems with the format of the domain save
> image used in Xen 4.4 and earlier (the _legacy format_).
>
> * Dependent on toolstack word size.  A number of fields within the
>   image are native types such as `unsigned long` which have different
>   sizes between 32-bit and 64-bit hosts.  This prevents domains from
>   being migrated between 32-bit and 64-bit hosts.
>
> * There is no header identifying the image.
>
> * The image has no version information.
>
> A new format that addresses the above is required.
>
> ARM does not yet have a domain save image format specified and
> the format described in this specification should be suitable.
>
>

I suggest keeping the processing overhead in mind when designing the new
image format. Some key things have been addressed, such as making sure data
is always padded to maintain alignment. But there are also some aspects of
this proposal that seem awfully unnecessary. More details below.


>
> Overview
> ========
>
> The image format consists of two main sections:
>
> * _Headers_
> * _Records_
>
> Headers
> -------
>
> There are two headers: the _image header_, and the _domain header_.
> The image header describes the format of the image (version etc.).
> The _domain header_ contains general information about the domain
> (architecture, type etc.).
>
> Records
> -------
>
> The main part of the format is a sequence of different _records_.
> Each record type contains information about the domain context.  At a
> minimum there is an END record marking the end of the records section.
>
>
> Fields
> ------
>
> All the fields within the headers and records have a fixed width.
>
> Fields are always aligned to their size.
>
> Padding and reserved fields are set to zero on save and must be
> ignored during restore.
>
>
So far so good.


> Integer (numeric) fields in the image header are always in big-endian
> byte order.
>
> Integer fields in the domain header and in the records are in the
> endianness described in the image header (which will typically be the
> native ordering).
>
>

It's tempting to adopt all the TCP-style madness for transferring a set of
structured data.  Why this endianness mess?  Am I missing something here?
I am assuming that the lion's share of Xen's deployment is on x86
(not including Amazon). So that leaves ARM.  Why not let these
processors take the hit of the endianness conversion?

> Headers
> =======
>
> Image Header
> ------------
>
> The image header identifies an image as a Xen domain save image.  It
> includes the version of this specification that the image complies
> with.
>
> Tools supporting version _V_ of the specification shall always save
> images using version _V_.  Tools shall support restoring from version
> _V_ and version _V_ - 1.  Tools may additionally support restoring
> from earlier versions.
>
> The marker field can be used to distinguish between legacy images and
> those corresponding to this specification.  Legacy images will have
> one or more zero bits within the first 8 octets of the image.
>
> Fields within the image header are always in _big-endian_ byte order,
> regardless of the setting of the endianness bit.
>

And more endianness mess.


>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | marker                                          |
>     +-----------------------+-------------------------+
>     | id                    | version                 |
>     +-----------+-----------+-------------------------+
>     | options   |                                     |
>     +-----------+-------------------------------------+
>
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> marker      0xFFFFFFFFFFFFFFFF.
>
> id          0x58454E46 ("XENF" in ASCII).
>
> version     0x00000001.  The version of this specification.
>
> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
>
>             bit 1-15: Reserved.
> --------------------------------------------------------------------
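The 24-octet image header above is fixed-width, so it can be packed and parsed directly. A sketch (helper names are mine; field values and the always-big-endian rule are from the table):

```python
import struct

MARKER = 0xFFFFFFFFFFFFFFFF
ID = 0x58454E46            # "XENF" in ASCII
VERSION = 0x00000001
OPT_BIG_ENDIAN = 1 << 0    # options bit 0: endianness of the rest of the image

# marker (8) | id (4) | version (4) | options (2) | reserved (6)
# Image header fields are always big-endian, hence the ">" prefix.
IHDR = struct.Struct(">QIIH6x")

def write_image_header(big_endian: bool = False) -> bytes:
    return IHDR.pack(MARKER, ID, VERSION, OPT_BIG_ENDIAN if big_endian else 0)

def read_image_header(buf: bytes):
    marker, ident, version, options = IHDR.unpack(buf[:IHDR.size])
    if marker != MARKER or ident != ID:
        raise ValueError("not a Xen domain save image")
    return version, bool(options & OPT_BIG_ENDIAN)
```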
>
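
For concreteness, a restore tool might validate the first 16 octets
roughly like this (a sketch against the layout above; the big-endian
accessors and function names are mine, not from any existing tool):

```c
#include <stdint.h>

/* Read big-endian integers, per the draft's rule that image header
 * fields are always big-endian. */
static uint64_t be64(const uint8_t *p)
{
    uint64_t v = 0;
    for (int i = 0; i < 8; i++)
        v = (v << 8) | p[i];
    return v;
}

static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8) | p[3];
}

/* Returns 0 for a valid v1 image header, -1 otherwise (the options
 * field would be checked separately). */
static int check_image_header(const uint8_t hdr[16])
{
    if (be64(hdr) != UINT64_C(0xFFFFFFFFFFFFFFFF))
        return -1;                    /* a legacy image has a zero bit here */
    if (be32(hdr + 8) != 0x58454E46)  /* "XENF" */
        return -1;
    if (be32(hdr + 12) != 0x00000001) /* only version 1 so far */
        return -1;
    return 0;
}
```
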
> Domain Header
> -------------
>
> The domain header includes general properties of the domain.
>
>      0      1     2     3     4     5     6     7 octet
>     +-----------+-----------+-----------+-------------+
>     | arch      | type      | page_shift| (reserved)  |
>     +-----------+-----------+-----------+-------------+
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> arch        0x0000: Reserved.
>
>             0x0001: x86.
>
>             0x0002: ARM.
>
> type        0x0000: Reserved.
>
>             0x0001: x86 PV.
>
>             0x0002 - 0xFFFF: Reserved.
>
> page_shift  Size of a guest page as a power of two.
>
>             i.e., page size = 2^page_shift^.
> --------------------------------------------------------------------
>
>
> Records
> =======
>
> A record has a record header, type specific data and a trailing
> footer.  If body_length is not a multiple of 8, the body is padded
> with zeroes to align the checksum field on an 8 octet boundary.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | type                  | body_length             |
>     +-----------+-----------+-------------------------+
>     | options   | (reserved)                          |
>     +-----------+-------------------------------------+
>     ...
>     Record body of length body_length octets followed by
>     0 to 7 octets of padding.
>     ...
>     +-----------------------+-------------------------+
>     | checksum              | (reserved)              |
>     +-----------------------+-------------------------+
>
>
I am assuming that the checksum field is present only
for debugging purposes? Otherwise, I see no reason for the
computational overhead, given that we are already sending data
over a reliable channel + IIRC we already have an image-wide checksum
when saving the image to disk.

If debugging is the only use case, then I guess the type field
could be prefixed with a 1/0 bit, eliminating the need for the
1-bit checksum options field + 7 bytes of padding. Similarly, if
debugging mode is not set, why waste another 8 bytes at the end for
the checksum field?

Unless you think there may be more record types that need special
options?

Feel free to correct me if I am missing something elementary here.




> --------------------------------------------------------------------
> Field        Description
> -----------  -------------------------------------------------------
> type         0x00000000: END
>
>              0x00000001: PAGE_DATA
>
>              0x00000002: VCPU_INFO
>
>              0x00000003: VCPU_CONTEXT
>
>              0x00000004: X86_PV_INFO
>
>              0x00000005: P2M
>
>              0x00000006 - 0xFFFFFFFF: Reserved
>
> body_length  Length in octets of the record body.
>
> options      Bit 0: 0 = checksum invalid, 1 = checksum valid.
>
>              Bit 1-15: Reserved.
>
> checksum     CRC-32 checksum of the record body (including any trailing
>              padding), or 0x00000000 if the checksum field is invalid.
> --------------------------------------------------------------------
>
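
To make the overhead question concrete, here is the size/checksum
arithmetic the record layout implies (a sketch; the draft only says
"CRC-32", so I am assuming the usual IEEE/zlib variant, and the
function names are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

/* Body is zero-padded so the checksum field lands on an 8-octet
 * boundary. */
static size_t padded_len(size_t body_length)
{
    return (body_length + 7) & ~(size_t)7;
}

/* Whole-record size: 16-octet header + padded body + 8-octet
 * checksum footer, per the record diagram. */
static size_t record_size(uint32_t body_length)
{
    return 16 + padded_len(body_length) + 8;
}

/* Bitwise CRC-32 (reflected, polynomial 0xEDB88320, as used by
 * zlib/Ethernet -- an assumption, since the draft does not name
 * the variant). */
static uint32_t crc32_buf(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1));
    }
    return ~crc;
}
```

So even an empty record costs 24 octets on the wire, which is part
of what I am questioning above.
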
> The following sub-sections specify the record body format for each of
> the record types.
>
> END
> ----
>
> An end record marks the end of the image.
>
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>
> The end record contains no fields; its body_length is 0.
>
> PAGE_DATA
> ---------
>
> The bulk of an image consists of many PAGE_DATA records containing the
> memory contents.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | count (C)             | (reserved)              |
>     +-----------------------+-------------------------+
>     | pfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | pfn[C-1]                                        |
>     +-------------------------------------------------+
>     | page_data[0]...                                 |
>     ...
>     +-------------------------------------------------+
>     | page_data[N-1]...                               |
>     ...
>     +-------------------------------------------------+
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> count       Number of pages described in this record.
>
> pfn         An array of count PFNs. Bits 63-60 contain
>             the XEN\_DOMCTL\_PFINFO\_* value for that PFN.
>
> page_data   page_size octets of uncompressed page contents for each page
>             set as present in the pfn array.
> --------------------------------------------------------------------
>
>
s/uncompressed/(compressed/uncompressed)/
(Remus sends compressed data)
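
For what it's worth, decoding a pfn entry per the layout above is
cheap (a sketch; which XEN_DOMCTL_PFINFO_* values mean "page data
follows" is defined by Xen's domctl.h -- I am assuming type 0 is a
normal, present page):

```c
#include <stddef.h>
#include <stdint.h>

/* A PAGE_DATA pfn entry: bits 63-60 hold the XEN_DOMCTL_PFINFO_*
 * type, bits 59-0 the PFN itself. */
#define PFN_TYPE(e)   ((unsigned)((e) >> 60))
#define PFN_NUMBER(e) ((e) & ((UINT64_C(1) << 60) - 1))

/* Count entries with type 0, i.e. (assumed) normal pages whose
 * contents appear in the record's page_data section. */
static size_t count_present_pages(const uint64_t *pfn, size_t count)
{
    size_t n = 0;
    for (size_t i = 0; i < count; i++)
        if (PFN_TYPE(pfn[i]) == 0)
            n++;
    return n;
}
```
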


> VCPU_INFO
> ---------
>
> > [ This is a combination of parts of the extended-info and
> > XC_SAVE_ID_VCPU_INFO chunks. ]
>
> The VCPU_INFO record includes the maximum possible VCPU ID.  This will
> be followed by a VCPU_CONTEXT record for each online VCPU.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+------------------------+
>     | max_vcpu_id           | (reserved)             |
>     +-----------------------+------------------------+
>
> --------------------------------------------------------------------
> Field        Description
> -----------  ---------------------------------------------------
> max_vcpu_id  Maximum possible VCPU ID.
> --------------------------------------------------------------------
>
>
> VCPU_CONTEXT
> ------------
>
> The context for a single VCPU.
>
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | vcpu_id               | (reserved)              |
>     +-----------------------+-------------------------+
>     | vcpu_ctx...                                     |
>     ...
>     +-------------------------------------------------+
>
> --------------------------------------------------------------------
> Field            Description
> -----------      ---------------------------------------------------
> vcpu_id          The VCPU ID.
>
> vcpu_ctx         Context for this VCPU.
> --------------------------------------------------------------------
>
> [ vcpu_ctx format TBD. ]
>
>
> X86_PV_INFO
> -----------
>
> > [ This record replaces part of the extended-info chunk. ]
>
>      0     1     2     3     4     5     6     7 octet
>     +-----+-----+-----+-------------------------------+
>     | w   | ptl | o   | (reserved)                    |
>     +-----+-----+-----+-------------------------------+
>
> --------------------------------------------------------------------
> Field            Description
> -----------      ---------------------------------------------------
> guest_width (w)  Guest width in octets (either 4 or 8).
>
> pt_levels (ptl)  Number of page table levels (either 3 or 4).
>
> options (o)      Bit 0: 0 - no VMASST_pae_extended_cr3,
>                  1 - VMASST_pae_extended_cr3.
>
>                  Bit 1-7: Reserved.
> --------------------------------------------------------------------
>
>
> P2M
> ---
>
> [ This is a more flexible replacement for the old p2m_size field and
> p2m array. ]
>
> The P2M record contains a portion of the source domain's P2M.
> Multiple P2M records may be sent if the source P2M changes during the
> stream.
>
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | pfn_begin                                       |
>     +-------------------------------------------------+
>     | pfn_end                                         |
>     +-------------------------------------------------+
>     | mfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | mfn[N-1]                                        |
>     +-------------------------------------------------+
>
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> pfn_begin   The first PFN in this portion of the P2M.
>
> pfn_end     One past the last PFN in this portion of the P2M.
>
> mfn         Array of (pfn_end - pfn_begin) MFNs corresponding to
>             the set of PFNs in the range [pfn_begin, pfn_end).
> --------------------------------------------------------------------
>
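
Applying such a record on restore is then a straight copy into the
receiver's P2M (a sketch; a real tool's P2M representation will
differ, and pfn_end may require growing it first):

```c
#include <stdint.h>

/* Entry i of the mfn array belongs to PFN pfn_begin + i, covering
 * the half-open range [pfn_begin, pfn_end).  p2m here is just a
 * flat array indexed by PFN. */
static void apply_p2m_record(uint64_t *p2m, uint64_t pfn_begin,
                             uint64_t pfn_end, const uint64_t *mfn)
{
    for (uint64_t pfn = pfn_begin; pfn < pfn_end; pfn++)
        p2m[pfn] = mfn[pfn - pfn_begin];
}
```
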
>
> Layout
> ======
>
> The set of valid records depends on the guest architecture and type.
>
> x86 PV Guest
> ------------
>
> An x86 PV guest image will have in this order:
>
> 1. Image header
> 2. Domain header
> 3. X86_PV_INFO record
> 4. At least one P2M record
> 5. At least one PAGE_DATA record

> 6. VCPU_INFO record
> 7. At least one VCPU_CONTEXT record
> 8. END record
>
>
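
As a reader's-eye view of this layout, the restore side basically
just walks (type, body_length) headers until END (a sketch over an
in-memory buffer, little-endian integer fields assumed; real code
would read from an fd and dispatch per record type):

```c
#include <stddef.h>
#include <stdint.h>

/* Returns the number of records preceding END (type 0), or -1 if
 * the buffer runs out before an END record is seen. */
static int count_records(const uint8_t *buf, size_t len)
{
    size_t off = 0;
    int n = 0;
    while (off + 16 <= len) {
        uint32_t type = buf[off]     | buf[off + 1] << 8 |
                        buf[off + 2] << 16 | (uint32_t)buf[off + 3] << 24;
        uint32_t blen = buf[off + 4] | buf[off + 5] << 8 |
                        buf[off + 6] << 16 | (uint32_t)buf[off + 7] << 24;
        if (type == 0)          /* END */
            return n;
        /* 16-octet header + padded body + 8-octet checksum footer */
        off += 16 + ((blen + 7) & ~(uint32_t)7) + 8;
        n++;
    }
    return -1;
}
```
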
There seems to be a bunch of info missing. Here are some missing
elements that I can recall at the moment:

a) There is no support for sending one-time markers that switch the
receiver's operating mode in the middle of a data stream, e.g.,
XC_SAVE_ID_ENABLE_COMPRESSION, XC_SAVE_ID_LAST_CHECKPOINT,
XC_SAVE_ID_ENABLE_VERIFY_MODE, etc.

b) In the PV case, the tail also has a list of unmapped PFNs at the
end of every iteration.

c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device context
information (generally for HVMs).


>
> Legacy Images (x86 only)
> ========================
>
> Restoring legacy images from older tools shall be handled by
> translating the legacy format image into this new format.
>
> It shall not be possible to save in the legacy format.
>
> There are two different legacy images depending on whether they were
> generated by a 32-bit or a 64-bit toolstack. These shall be
> distinguished by inspecting octets 4-7 in the image.  If these are
> zero then it is a 64-bit image.
>
> Toolstack  Field                            Value
> ---------  -----                            -----
> 64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
> 32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
> 32-bit     Chunk type (HVM)                 < 0
> 32-bit     Page count (HVM)                 > 0
>
> Table: Possible values for octet 4-7 in legacy images
>
> This assumes the presence of the extended-info chunk which was
> introduced in Xen 3.0.
>
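
The discrimination logic in this section is simple enough to state
as code (a sketch; assumes the first 8 octets of the image are
available, names are mine):

```c
#include <stdint.h>

enum img_kind { IMG_NEW, IMG_LEGACY_64BIT, IMG_LEGACY_32BIT };

/* The new format's marker is all ones; legacy images have a zero
 * bit somewhere in the first 8 octets, and octets 4-7 then
 * distinguish 64-bit toolstacks (zero, the high half of p2m_size)
 * from 32-bit ones (nonzero, per the table above). */
static enum img_kind classify_image(const uint8_t img[8])
{
    int all_ones = 1, top_zero = 1;
    for (int i = 0; i < 8; i++)
        if (img[i] != 0xFF)
            all_ones = 0;
    for (int i = 4; i < 8; i++)
        if (img[i] != 0x00)
            top_zero = 0;
    if (all_ones)
        return IMG_NEW;
    return top_zero ? IMG_LEGACY_64BIT : IMG_LEGACY_32BIT;
}
```
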
>
> Future Extensions
> =================
>
> All changes to this format require the image version to be increased.
>
> The format may be extended by adding additional record types.
>
> Extending an existing record type must be done by adding a new record
> type.  This allows old images with the old record to still be
> restored.
>
> The image header may be extended by _appending_ additional fields.  In
> particular, the `marker`, `id` and `version` fields must never change
> size or location.
>
>

--089e013c6eeaed1ee804f212cd90
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div class=3D"gmail_extra"><div class=3D"gmail_quote">On M=
on, Feb 10, 2014 at 9:20 AM, David Vrabel <span dir=3D"ltr">&lt;<a href=3D"=
mailto:david.vrabel@citrix.com" target=3D"_blank">david.vrabel@citrix.com</=
a>&gt;</span> wrote:<br>

<blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-=
left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;p=
adding-left:1ex">Here is a draft of a proposal for a new domain save image =
format. =A0It<br>


does not currently cover all use cases (e.g., images for HVM guest are<br>
not considered).<br>
<br>
<a href=3D"http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf" =
target=3D"_blank">http://xenbits.xen.org/people/dvrabel/domain-save-format-=
B.pdf</a><br>
<br>
Introduction<br>
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D<br>
<br>
Revision History<br>
----------------<br>
<br>
--------------------------------------------------------------------<br>
Version =A0Date =A0 =A0 =A0 =A0 Changes<br>
------- =A0----------- =A0----------------------------------------------<br=
>
Draft A =A06 Feb 2014 =A0 Initial draft.<br>
<br>
Draft B =A010 Feb 2014 =A0Corrected image header field widths.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 Minor updates and clarification=
s.<br>
--------------------------------------------------------------------<br>
<br>
Purpose<br>
-------<br>
<br>
The _domain save image_ is the context of a running domain used for<br>
snapshots of a domain or for transferring domains between hosts during<br>
migration.<br>
<br>
There are a number of problems with the format of the domain save<br>
image used in Xen 4.4 and earlier (the _legacy format_).<br>
<br>
* Dependant on toolstack word size. =A0A number of fields within the<br>
=A0 image are native types such as `unsigned long` which have different<br>
=A0 sizes between 32-bit and 64-bit hosts. =A0This prevents domains from<br=
>
=A0 being migrated between 32-bit and 64-bit hosts.<br>
<br>
* There is no header identifying the image.<br>
<br>
* The image has no version information.<br>
<br>
A new format that addresses the above is required.<br>
<br>
ARM does not yet have have a domain save image format specified and<br>
the format described in this specification should be suitable.<br>
<br></blockquote><div><br></div><div><br></div><div>I suggest keeping the p=
rocessing overhead in mind when designing the new</div><div>image format. S=
ome key things have been addressed, such as making sure data</div><div>

is always padded to maintain alignment. But there are also some aspects of =
this</div><div>proposal that seem awfully unnecessary.. More details below.=
</div><div>=A0</div><blockquote class=3D"gmail_quote" style=3D"margin:0px 0=
px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);borde=
r-left-style:solid;padding-left:1ex">


<br>
Overview<br>
=3D=3D=3D=3D=3D=3D=3D=3D<br>
<br>
The image format consists of two main sections:<br>
<br>
* _Headers_<br>
* _Records_<br>
<br>
Headers<br>
-------<br>
<br>
There are two headers: the _image header_, and the _domain header_.<br>
The image header describes the format of the image (version etc.).<br>
The _domain header_ contains general information about the domain<br>
(architecture, type etc.).<br>
<br>
Records<br>
-------<br>
<br>
The main part of the format is a sequence of different _records_.<br>
Each record type contains information about the domain context. =A0At a<br>
minimum there is a END record marking the end of the records section.<br>
<br>
<br>
Fields<br>
------<br>
<br>
All the fields within the headers and records have a fixed width.<br>
<br>
Fields are always aligned to their size.<br>
<br>
Padding and reserved fields are set to zero on save and must be<br>
ignored during restore.<br>
<br></blockquote><div><br></div><div>So far so good.</div><div>=A0</div><bl=
ockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-lef=
t-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padd=
ing-left:1ex">


Integer (numeric) fields in the image header are always in big-endian<br>
byte order.<br>
<br>
Integer fields in the domain header and in the records are in the<br>
endianess described in the image header (which will typically be the<br>
native ordering).<br>
<br></blockquote><div><br></div><div><br></div><div><div><div>Its tempting =
to adopt all the TCP-style madness for transferring a set of</div><div>stru=
ctured data. =A0Why this endian-ness mess? =A0Am I missing something here?<=
/div>

<div>I am assuming that a lion&#39;s share of Xen&#39;s deployment is on x8=
6=A0</div><div>(not including Amazon). So that leaves ARM. =A0Why not let t=
hese=A0</div><div>processors take the hit of endian-ness conversion?</div><=
/div>

<div><br></div></div><blockquote class=3D"gmail_quote" style=3D"margin:0px =
0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);bord=
er-left-style:solid;padding-left:1ex">
Headers<br>
=3D=3D=3D=3D=3D=3D=3D<br>
<br>
Image Header<br>
------------<br>
<br>
The image header identifies an image as a Xen domain save image. =A0It<br>
includes the version of this specification that the image complies<br>
with.<br>
<br>
Tools supporting version _V_ of the specification shall always save<br>
images using version _V_. =A0Tools shall support restoring from version<br>
_V_ and version _V_ - 1. =A0Tools may additionally support restoring<br>
from earlier versions.<br>
<br>
The marker field can be used to distinguish between legacy images and<br>
those corresponding to this specification. =A0Legacy images will have at<br=
>
one or more zero bits within the first 8 octets of the image.<br>
<br>
Fields within the image header are always in _big-endian_ byte order,<br>
regardless of the setting of the endianness bit.<br></blockquote><div><br><=
/div><div>and more endian-ness mess.</div><div><br></div><blockquote class=
=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-left-width:1px;bo=
rder-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">


<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | marker =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0|<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | id =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0| version =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 |<br>
=A0 =A0 +-----------+-----------+-------------------------+<br>
=A0 =A0 | options =A0 | =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 |<br>
=A0 =A0 +-----------+-------------------------------------+<br>
<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 Description<br>
----------- --------------------------------------------------------<br>
marker =A0 =A0 =A00xFFFFFFFFFFFFFFFF.<br>
<br>
id =A0 =A0 =A0 =A0 =A00x58454E46 (&quot;XENF&quot; in ASCII).<br>
<br>
version =A0 =A0 0x00000001. =A0The version of this specification.<br>
<br>
options =A0 =A0 bit 0: Endianness. =A00 =3D little-endian, 1 =3D big-endian=
.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 bit 1-15: Reserved.<br>
--------------------------------------------------------------------<br>
<br>
Domain Header<br>
-------------<br>
<br>
The domain header includes general properties of the domain.<br>
<br>
=A0 =A0 =A00 =A0 =A0 =A01 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6=
 =A0 =A0 7 octet<br>
=A0 =A0 +-----------+-----------+-----------+-------------+<br>
=A0 =A0 | arch =A0 =A0 =A0| type =A0 =A0 =A0| page_shift| (reserved) =A0|<b=
r>
=A0 =A0 +-----------+-----------+-----------+-------------+<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 Description<br>
----------- --------------------------------------------------------<br>
arch =A0 =A0 =A0 =A00x0000: Reserved.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 0x0001: x86.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 0x0002: ARM.<br>
<br>
type =A0 =A0 =A0 =A00x0000: Reserved.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 0x0001: x86 PV.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 0x0002 - 0xFFFF: Reserved.<br>
<br>
page_shift =A0Size of a guest page as a power of two.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 i.e., page size =3D 2^page_shift^.<br>
--------------------------------------------------------------------<br>
<br>
<br>
Records<br>
=3D=3D=3D=3D=3D=3D=3D<br>
<br>
A record has a record header, type specific data and a trailing<br>
footer. =A0If body_length is not a multiple of 8, the body is padded<br>
with zeroes to align the checksum field on an 8 octet boundary.<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | type =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0| body_length =A0 =A0 =A0=
 =A0 =A0 =A0 |<br>
=A0 =A0 +-----------+-----------+-------------------------+<br>
=A0 =A0 | options =A0 | (reserved) =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0|<br>
=A0 =A0 +-----------+-------------------------------------+<br>
=A0 =A0 ...<br>
=A0 =A0 Record body of length body_length octets followed by<br>
=A0 =A0 0 to 7 octets of padding.<br>
=A0 =A0 ...<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | checksum =A0 =A0 =A0 =A0 =A0 =A0 =A0| (reserved) =A0 =A0 =A0 =A0 =
=A0 =A0 =A0|<br>
=A0 =A0 +-----------------------+-------------------------+<br>
<br></blockquote><div><br></div><div>I am assuming that you the checksum fi=
eld is present only</div><div>for debugging purposes? Otherwise, I see no r=
eason for the</div><div>computational overhead, given that we are already s=
ending data</div>

<div>over a reliable channel + IIRC we already have an image-wide checksum=
=A0</div><div>when saving the image to disk.</div><div><br></div><div>If de=
bugging is the only use case, then I guess, the type field</div><div>can be=
 prefixed with a 1/0 bit, eliminating the need for the=A0</div>

<div>1-bit checkum options field + 7-byte padding. Similarly, if debugging=
=A0</div><div>mode is not set, why waste another 8 bytes in the end for the=
 checksum field.</div><div><br></div><div>Unless you think there may be mor=
e types with need of special options,</div>

<div><br></div><div>Feel free to correct me if I am missing something eleme=
ntary here..</div><div><br></div><div><br></div><div>=A0</div><blockquote c=
lass=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-left-width:1p=
x;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1=
ex">


--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 =A0Description<br>
----------- =A0-------------------------------------------------------<br>
type =A0 =A0 =A0 =A0 0x00000000: END<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A00x00000001: PAGE_DATA<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A00x00000002: VCPU_INFO<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A00x00000003: VCPU_CONTEXT<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A00x00000004: X86_PV_INFO<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A00x00000005: P2M<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A00x00000006 - 0xFFFFFFFF: Reserved<br>
<br>
body_length =A0Length in octets of the record body.<br>
<br>
options =A0 =A0 =A0Bit 0: 0 - checksum invalid, 1 =3D checksum valid.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0Bit 1-15: Reserved.<br>
<br>
checksum =A0 =A0 CRC-32 checksum of the record body (including any trailing=
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0padding), or 0x00000000 if the checksum field is=
 invalid.<br>
--------------------------------------------------------------------<br>
<br></blockquote><blockquote class=3D"gmail_quote" style=3D"margin:0px 0px =
0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-l=
eft-style:solid;padding-left:1ex">The following sub-sections specify the re=
cord body format for each of<br>


the record types.<br>
<br>
END<br>
----<br>
<br>
A end record marks the end of the image.<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-------------------------------------------------+<br>
<br>
The end record contains no fields; its body_length is 0.<br>
<br>
PAGE_DATA<br>
---------<br>
<br>
The bulk of an image consists of many PAGE_DATA records containing the<br>
memory contents.<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | count (C) =A0 =A0 =A0 =A0 =A0 =A0 | (reserved) =A0 =A0 =A0 =A0 =
=A0 =A0 =A0|<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | pfn[0] =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0|<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 ...<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | pfn[C-1] =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0|<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | page_data[0]... =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 |<br>
=A0 =A0 ...<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | page_data[N-1]... =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 |<br>
=A0 =A0 ...<br>
=A0 =A0 +-------------------------------------------------+<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 Description<br>
----------- --------------------------------------------------------<br>
count =A0 =A0 =A0 Number of pages described in this record.<br>
<br>
pfn =A0 =A0 =A0 =A0 An array of count PFNs. Bits 63-60 contain<br>
=A0 =A0 =A0 =A0 =A0 =A0 the XEN\_DOMCTL\_PFINFO_* value for that PFN.<br>
<br>
page_data =A0 page_size octets of uncompressed page contents for each page<=
br>
=A0 =A0 =A0 =A0 =A0 =A0 set as present in the pfn array.<br>
--------------------------------------------------------------------<br>
<br></blockquote><div><br></div><div>s/uncompressed/(compressed/uncompresse=
d)/</div><div>(Remus sends compressed data)</div><div>=A0</div><blockquote =
class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-left-width:1=
px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:=
1ex">


VCPU_INFO<br>
---------<br>
<br>
&gt; [ This is a combination of parts of the extended-info and<br>
&gt; XC_SAVE_ID_VCPU_INFO chunks. ]<br>
<br>
The VCPU_INFO record includes the maximum possible VCPU ID. =A0This will<br=
>
be followed a VCPU_CONTEXT record for each online VCPU.<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-----------------------+------------------------+<br>
=A0 =A0 | max_vcpu_id =A0 =A0 =A0 =A0 =A0 | (reserved) =A0 =A0 =A0 =A0 =A0 =
=A0 |<br>
=A0 =A0 +-----------------------+------------------------+<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 =A0Description<br>
----------- =A0---------------------------------------------------<br>
max_vcpu_id =A0Maximum possible VCPU ID.<br>
--------------------------------------------------------------------<br>
<br>
<br>
VCPU_CONTEXT<br>
------------<br>
<br>
The context for a single VCPU.<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | vcpu_id =A0 =A0 =A0 =A0 =A0 =A0 =A0 | (reserved) =A0 =A0 =A0 =A0 =
=A0 =A0 =A0|<br>
=A0 =A0 +-----------------------+-------------------------+<br>
=A0 =A0 | vcpu_ctx... =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 |<br>
=A0 =A0 ...<br>
=A0 =A0 +-------------------------------------------------+<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 =A0 =A0 =A0Description<br>
----------- =A0 =A0 =A0---------------------------------------------------<=
br>
vcpu_id =A0 =A0 =A0 =A0 =A0The VCPU ID.<br>
<br>
vcpu_ctx =A0 =A0 =A0 =A0 Context for this VCPU.<br>
--------------------------------------------------------------------<br>
<br>
[ vcpu_ctx format TBD. ]<br>
<br>
<br>
X86_PV_INFO<br>
-----------<br>
<br>
&gt; [ This record replaces part of the extended-info chunk. ]<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-----+-----+-----+-------------------------------+<br>
=A0 =A0 | w =A0 | ptl | o =A0 | (reserved) =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0|<br>
=A0 =A0 +-----+-----+-----+-------------------------------+<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 =A0 =A0 =A0Description<br>
----------- =A0 =A0 =A0---------------------------------------------------<=
br>
guest_width (w) =A0Guest width in octets (either 4 or 8).<br>
<br>
pt_levels (ptl) =A0Number of page table levels (either 3 or 4).<br>
<br>
options (o) =A0 =A0 =A0Bit 0: 0 - no VMASST_pae_extended_cr3,<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A01 - VMASST_pae_extended_cr3.<br>
<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0Bit 1-7: Reserved.<br>
--------------------------------------------------------------------<br>
<br>
<br>
P2M<br>
---<br>
<br>
[ This is a more flexible replacement for the old p2m_size field and<br>
p2m array. ]<br>
<br>
The P2M record contains a portion of the source domain&#39;s P2M.<br>
Multiple P2M records may be sent if the source P2M changes during the<br>
stream.<br>
<br>
=A0 =A0 =A00 =A0 =A0 1 =A0 =A0 2 =A0 =A0 3 =A0 =A0 4 =A0 =A0 5 =A0 =A0 6 =
=A0 =A0 7 octet<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | pfn_begin =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0=
 =A0 =A0 =A0 =A0 =A0 |<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | pfn_end =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 |<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | mfn[0] =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0 =A0|<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 ...<br>
=A0 =A0 +-------------------------------------------------+<br>
=A0 =A0 | mfn[N-1] =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =
=A0 =A0 =A0 =A0 =A0 =A0|<br>
=A0 =A0 +-------------------------------------------------+<br>
<br>
--------------------------------------------------------------------<br>
Field =A0 =A0 =A0 Description<br>
----------- --------------------------------------------------------<br>
pfn_begin =A0 The first PFN in this portion of the P2M<br>
<br>
pfn_end =A0 =A0 One past the last PFN in this portion of the P2M.<br>
<br>
mfn =A0 =A0 =A0 =A0 Array of (pfn_end - pfn-begin) MFNs corresponding to<br=
>
=A0 =A0 =A0 =A0 =A0 =A0 the set of PFNs in the range [pfn_begin, pfn_end).<=
br>
--------------------------------------------------------------------<br>
<br>
<br>
Layout<br>
=3D=3D=3D=3D=3D=3D<br>
<br>
The set of valid records depends on the guest architecture and type.<br>
<br>
x86 PV Guest<br>
------------<br>
<br>
An x86 PV guest image will have in this order:<br>
<br>
1. Image header<br>
2. Domain header<br>
3. X86_PV_INFO record<br>
4. At least one P2M record<br>
5. At least one PAGE_DATA record</blockquote><blockquote class=3D"gmail_quo=
te" style=3D"margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-col=
or:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
6. VCPU_INFO record<br>
6. At least one VCPU_CONTEXT record</blockquote><blockquote class=3D"gmail_=
quote" style=3D"margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-=
color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
7. END record<br>

There seems to be a bunch of info missing. Here are some missing
elements that I can recall at the moment:

a) there is no support for sending over one-time markers that switch
the receiver's operating mode in the middle of a data stream.
E.g., XC_SAVE_ENABLE_COMPRESSION, XC_SAVE_ENABLE_VERIFY_MODE,
XC_SAVE_ID_LAST_CHECKPOINT, etc.

b) in the PV case, the tail also has a list of unmapped PFNs at the
end of every iteration.

c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device context
information (generally for HVMs).

> Legacy Images (x86 only)
> ========================
>
> Restoring legacy images from older tools shall be handled by
> translating the legacy format image into this new format.
>
> It shall not be possible to save in the legacy format.
>
> There are two different legacy images depending on whether they were
> generated by a 32-bit or a 64-bit toolstack.  These shall be
> distinguished by inspecting octets 4-7 in the image.  If these are
> zero then it is a 64-bit image.
>
> Toolstack  Field                             Value
> ---------  -----                             -----
> 64-bit     Bits 32-63 of the p2m_size field  0 (since p2m_size < 2^32^)
> 32-bit     extended-info chunk ID (PV)       0xFFFFFFFF
> 32-bit     Chunk type (HVM)                  < 0
> 32-bit     Page count (HVM)                  > 0
>
> Table: Possible values for octets 4-7 in legacy images
>
> This assumes the presence of the extended-info chunk which was
> introduced in Xen 3.0.
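A minimal sketch of the distinction described above -- treating an image as 64-bit exactly when octets 4-7 are all zero -- could look like this; the function name and buffer convention are mine, not the spec's:

```c
#include <stdint.h>

/* Classify a legacy image by octets 4-7, per the table above: all four
 * zero means a 64-bit toolstack produced it (the high half of p2m_size
 * is zero); anything else indicates a 32-bit toolstack.  "img" is
 * assumed to hold at least the first 8 octets of the image. */
static int legacy_image_is_64bit(const uint8_t *img)
{
    return img[4] == 0 && img[5] == 0 && img[6] == 0 && img[7] == 0;
}
```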
>
>
> Future Extensions
> =================
>
> All changes to this format require the image version to be increased.
>
> The format may be extended by adding additional record types.
>
> Extending an existing record type must be done by adding a new record
> type.  This allows old images with the old record to still be
> restored.
>
> The image header may be extended by _appending_ additional fields.  In
> particular, the `marker`, `id` and `version` fields must never change
> size or location.
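One way to read that header rule: the fixed prefix must stay at the same offsets forever so any parser can always locate it, and new fields only ever go after it. A toy sketch, where the concrete field widths are illustrative assumptions rather than the spec's:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy image header: the first three fields must keep their size and
 * offset across all versions; extensions may only be appended after
 * them.  The widths chosen here are illustrative, not the spec's. */
struct toy_image_header {
    uint64_t marker;    /* fixed: never moves or changes size */
    uint32_t id;        /* fixed */
    uint32_t version;   /* bumped on every format change */
    /* v2+ fields would be appended here, leaving the above intact */
};
```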




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Mon Feb 10 20:23:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 20:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCxOF-00069q-88; Mon, 10 Feb 2014 20:23:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WCxOE-00069l-2i
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 20:23:38 +0000
Received: from [85.158.139.211:17630] by server-3.bemta-5.messagelabs.com id
	76/68-13671-94539F25; Mon, 10 Feb 2014 20:23:37 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392063816!2980426!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29446 invoked from network); 10 Feb 2014 20:23:36 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-3.tower-206.messagelabs.com with SMTP;
	10 Feb 2014 20:23:36 -0000
X-TM-IMSS-Message-ID: <998e3c7a00046c30@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.9]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 998e3c7a00046c30 ;
	Mon, 10 Feb 2014 15:22:51 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1AKNRor031714; 
	Mon, 10 Feb 2014 15:23:29 -0500
Message-ID: <52F93509.2030304@tycho.nsa.gov>
Date: Mon, 10 Feb 2014 15:22:33 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
In-Reply-To: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/4] flask: XSA-84 follow-ups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/07/2014 04:41 AM, Jan Beulich wrote:
> 1: fix memory leaks
> 2: fix error propagation from flask_security_set_bool()
> 3: check permissions first thing in flask_security_set_bool()
> 4: add compat mode guest support
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Release-wise, I would think that 1-3 should certainly go in. While I'd
> like 4 to be in for 4.4 too, I realize that's a little more intrusive than
> one would want at this point.
>
> Jan

All four patches look correct to me.  I assume the movement of the
flask_security_commit_bools call inside the #ifdef is made possible by
the xlat.lst parsing, but I didn't look too closely at how that was
done.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Re: what goes in release - I agree that #4 would be nice but I wouldn't
push too hard to make an exception for it. The users of the XSM interface
would primarily be toolstack and related domains where a requirement to
be 64-bit should not be too restrictive (not to say this shouldn't be
fixed, of course).

-- 
Daniel De Graaf
National Security Agency


From xen-devel-bounces@lists.xen.org Mon Feb 10 21:08:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 21:08:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCy4r-0007x1-S9; Mon, 10 Feb 2014 21:07:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <john.stultz@linaro.org>) id 1WCy4p-0007ww-J8
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 21:07:39 +0000
Received: from [85.158.139.211:29491] by server-5.bemta-5.messagelabs.com id
	E3/B3-32749-A9F39F25; Mon, 10 Feb 2014 21:07:38 +0000
X-Env-Sender: john.stultz@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392066455!3007558!1
X-Originating-IP: [209.85.160.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25369 invoked from network); 10 Feb 2014 21:07:37 -0000
Received: from mail-pb0-f51.google.com (HELO mail-pb0-f51.google.com)
	(209.85.160.51)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 21:07:37 -0000
Received: by mail-pb0-f51.google.com with SMTP id un15so6788945pbc.10
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 13:07:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=tmOF5lKvuzbdesv/8CskTspUUeVKdaTPpZ4sAU8ykks=;
	b=e7zekxwNXnCGn2xLu2OwDOrvTpgZYO/Upx7t+2jAGGePSk2gv/xppRNw38mnbmH75E
	Q2h78BraN0bY8G6BhkUz0EzcHBf2lSPvVA5jgaqmwTM/RH2G0c/I/iIy8VxEuCZqtvnX
	m43g5A6IZPCj24FN4ISRVkcfWftsZ+h8yIdoA8kFxWFQzyK6+Fo1dnP2F3hO4bM0saXi
	wjVhusvPt62aZRLCsHWzScACYFbH5lIeySrQgCnAAIWUDh/C/HyqnJDMoDapHR/sm6fW
	/nJa7JH9pYbA91ind5CxEIxdpJa4FRyPYoLWkr21gEIxlI0DhqwErkSAJ/SprgEj1/MD
	Q5PA==
X-Gm-Message-State: ALoCoQkntJDXUvY5cjWFldcckBMjQp3HwLXRcauXtsiI2ESvPT+Wv1cRzDKPZjtYt8JfWosszPY1
X-Received: by 10.67.3.97 with SMTP id bv1mr28091138pad.54.1392066455242;
	Mon, 10 Feb 2014 13:07:35 -0800 (PST)
Received: from localhost.localdomain (c-67-170-153-23.hsd1.or.comcast.net.
	[67.170.153.23]) by mx.google.com with ESMTPSA id
	bz4sm45596387pbb.12.2014.02.10.13.07.33 for <multiple recipients>
	(version=TLSv1.1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 13:07:34 -0800 (PST)
From: John Stultz <john.stultz@linaro.org>
To: stable <stable@vger.kernel.org>
Date: Mon, 10 Feb 2014 13:07:19 -0800
Message-Id: <1392066444-4940-3-git-send-email-john.stultz@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1392066444-4940-1-git-send-email-john.stultz@linaro.org>
References: <1392066444-4940-1-git-send-email-john.stultz@linaro.org>
Cc: Prarit Bhargava <prarit@redhat.com>,
	Richard Cochran <richardcochran@gmail.com>,
	xen-devel@lists.xen.org, John Stultz <john.stultz@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Sasha Levin <sasha.levin@oracle.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [PATCH 2/7] 3.13.y: timekeeping: Fix potential lost pv
	notification of time change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a 3.13-stable backport of 5258d3f25c76f6ab86e9333abf97a55a877d3870

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also changes the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Cc: stable <stable@vger.kernel.org> #3.11+
Signed-off-by: John Stultz <john.stultz@linaro.org>
---
 kernel/time/timekeeping.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 6bad3d9..419faec 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
-- 
1.8.3.2



From xen-devel-bounces@lists.xen.org Mon Feb 10 22:30:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzMl-0003L7-8H; Mon, 10 Feb 2014 22:30:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WCzMk-0003Kx-Ax
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 22:30:14 +0000
Received: from [85.158.137.68:21100] by server-3.bemta-3.messagelabs.com id
	67/1A-14520-5F259F25; Mon, 10 Feb 2014 22:30:13 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392071410!933620!1
X-Originating-IP: [209.85.220.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24367 invoked from network); 10 Feb 2014 22:30:12 -0000
Received: from mail-pa0-f45.google.com (HELO mail-pa0-f45.google.com)
	(209.85.220.45)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 22:30:12 -0000
Received: by mail-pa0-f45.google.com with SMTP id lf10so6798297pab.32
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 14:30:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id;
	bh=H83VPmq81wZ1xGRb5iTzEVOggxKCGFRbB00Fdn+A3Tk=;
	b=TmuyEJ8CiMrylRQcebYLOmA8Uj9IEJQg91LK6rxYkJFTtl6ZAYAuPrtxUB8VxA1iJ/
	bIvicntEFw/4tTOGUOHxiDEGzIvvOLCPP0Zp+fE10OgVLWMZ8EJJ7lKf4JS+C9WsI4FX
	wJI0MOrfVxTcKwRL2AzbQGgShh6jgzNBWAnSrvKfMZmM+NtfY/GlIDr4mPJMwaNLM8fY
	+4/XE+ag2dd5Req7T4Nhcriol/oeIfKqpugTqYWaOH18+3aJCGexuN9ylYnU77AMxTDq
	nAmRn5JE1wuRyiY1Dqqv7uzQRKV6KnxsKP5mMKPDl77yLHFJ3J5ixmH+jcTsVr7g4PnF
	hQag==
X-Received: by 10.66.142.107 with SMTP id rv11mr28652693pab.17.1392071405809; 
	Mon, 10 Feb 2014 14:30:05 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	qh2sm120444656pab.13.2014.02.10.14.30.02 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 14:30:04 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Mon, 10 Feb 2014 14:30:01 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Mon, 10 Feb 2014 14:29:49 -0800
Message-Id: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
Cc: xen-devel@lists.xenproject.org,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Subject: [Xen-devel] [RFC 0/2] xen-backend interfaces and IFF_MULTICAST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Virtualization hypervisors do not make use of backend Ethernet interfaces
like typical Ethernet interfaces. In my current testing, xen-netback at
least requires a struct in_device with a proper MTU setting in order for
the front end interface to function properly. Under the current design this
will kick off ipv6 autoconfiguration and DAD on these interfaces. This does
not happen for xen's TAP interface when HVM is used. KVM only uses TAP
interfaces, but their backend interfaces *do not* perform ipv6
autoconfiguration and DAD even though those TAP interfaces do have
multicast enabled.

In xen's case, some xen users used to run into issues with the current
architecture when bundles of xen guests were on the same network and ipv6
autoconfiguration was performed. This happens because the MAC address is
static; while that can be corrected by randomizing it, an ipv6 address
is simply not needed for these guests. There is currently no way to
disable ipv6 on specific types of interfaces, but this is just begging
for review of the architecture: why is an interface even needed at all,
what about ipv4 addresses, and why do we need inetdev_init() on these
virtualized interfaces?

Disabling multicast on an interface should disable ipv6 autoconfiguration
and DAD, but the note in include/uapi/linux/if_link.h makes it clear that
IFF_MULTICAST should be considered carefully, given that non-NBMA links
are known to support multicast; this includes all IFF_POINTOPOINT and
IFF_BROADCAST links as well. If we are to follow the RFCs on ipv6
autoconfiguration and DAD, however, it's clear that multicast is required
-- but if we have no reliable way of determining this capability we won't
know when we can perform autoconfiguration and DAD properly.

If the patch to require IFF_MULTICAST for autoconfiguration and DAD is
valid, then xen-netback can simply clear the flag; clearing it is required
because ether_setup() is used during net_device allocation. I'm currently
reviewing the need for any proper-MTU interface on xen-netback, but in the
meantime I'd like some feedback on IFF_MULTICAST and the following
patches.

Luis R. Rodriguez (2):
  ipv6: disable autoconfiguration and DAD on non-multicast links
  xen-netback: disable multicast and use a random hw MAC address

 drivers/net/xen-netback/interface.c | 14 +++++---------
 net/ipv6/addrconf.c                 | 18 ++++++++++++------
 2 files changed, 17 insertions(+), 15 deletions(-)

-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 22:30:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzMr-0003La-GT; Mon, 10 Feb 2014 22:30:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WCzMp-0003LL-Bf
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 22:30:19 +0000
Received: from [193.109.254.147:37403] by server-11.bemta-14.messagelabs.com
	id E5/B5-24604-AF259F25; Mon, 10 Feb 2014 22:30:18 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392071415!3376240!1
X-Originating-IP: [209.85.220.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10070 invoked from network); 10 Feb 2014 22:30:17 -0000
Received: from mail-pa0-f51.google.com (HELO mail-pa0-f51.google.com)
	(209.85.220.51)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 22:30:17 -0000
Received: by mail-pa0-f51.google.com with SMTP id ld10so6827741pab.10
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 14:30:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=bZCDQ4rPJYti59Au7qUDX4RK6wYsxCtBIDWHU8mMPJU=;
	b=rcylrC88GQD4awmewHrS04WQ6kneWKbdLxGPhT5A5mYAurh2qPZM0ruiBEboP4mv1r
	UQBQVj+MQlDv80T3OK+F2UnHPhhrtCLWAx++y2w8Ik6E+xQvht5K/NjFIwr99RGOlu3C
	9NT8t8VYBlkBj6cx/hiRs6w7BTN31s+ec/v9E3QDkn9sSmWYQG2TZMDlt9hgy/yPtZy4
	I6i3yXyNT7lU0fC8MCidnJ8TXCYjzo2JHJPWb1Ka+AEsOcKaHPewQn1DAp0iD/uuoeFb
	Gp0gwzrqXLlQb2r9nfxh1muPnEks2nbzND179geCJRKFFVUsRZW3k+UPPJ8GxqMlhT2b
	6v6w==
X-Received: by 10.68.138.165 with SMTP id qr5mr40748688pbb.123.1392071415240; 
	Mon, 10 Feb 2014 14:30:15 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223])
	by mx.google.com with ESMTPSA id c7sm46023850pbt.0.2014.02.10.14.30.11
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 14:30:13 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Mon, 10 Feb 2014 14:30:10 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Mon, 10 Feb 2014 14:29:51 -0800
Message-Id: <1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
Cc: xen-devel@lists.xenproject.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
	random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

Although the xen-netback interfaces do not participate in the
link as typical Ethernet devices, interfaces for them are
still required under the current architecture. IPv6 addresses
do not need to be created or assigned on the xen-netback interfaces,
however, even if the frontend devices do need them, so clear the
multicast flag to ensure the net core does not initiate IPv6
Stateless Address Autoconfiguration. Clearing the multicast
flag is required given that the net_device is set up with the
ether_setup() helper.

There is also no good reason why the special MAC address
FE:FF:FF:FF:FF:FF is being used other than to avoid issues
with STP, and using it can create a problem if a user
decides to enable multicast on the backend interfaces.
Instead, simply use a random MAC address with the Xen OUI
prefix, as is done for the frontend through the Xen udev
scripts.

Cc: Paul Durrant <Paul.Durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 drivers/net/xen-netback/interface.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index b9de31e..479fbd1 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -42,6 +42,8 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static const u8 xen_oui[3] = { 0x00, 0x16, 0x3e };
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -347,15 +349,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->mmap_pages[i] = NULL;
 
-	/*
-	 * Initialise a dummy MAC address. We choose the numerically
-	 * largest non-broadcast address to prevent the address getting
-	 * stolen by an Ethernet bridge for STP purposes.
-	 * (FE:FF:FF:FF:FF:FF)
-	 */
-	memset(dev->dev_addr, 0xFF, ETH_ALEN);
-	dev->dev_addr[0] &= ~0x01;
-
+	eth_hw_addr_random(dev);
+	memcpy(dev->dev_addr, xen_oui, 3);
+	dev->flags &= ~IFF_MULTICAST;
 	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
 
 	netif_carrier_off(dev);
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 22:30:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzMl-0003LE-MW; Mon, 10 Feb 2014 22:30:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WCzMk-0003Ky-H4
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 22:30:14 +0000
Received: from [85.158.137.68:33394] by server-1.bemta-3.messagelabs.com id
	3E/AC-17293-5F259F25; Mon, 10 Feb 2014 22:30:13 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392071410!928615!1
X-Originating-IP: [209.85.220.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18183 invoked from network); 10 Feb 2014 22:30:12 -0000
Received: from mail-pa0-f44.google.com (HELO mail-pa0-f44.google.com)
	(209.85.220.44)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 22:30:12 -0000
Received: by mail-pa0-f44.google.com with SMTP id kq14so6823924pab.3
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 14:30:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=aussYPwSvZV6SF21+5dTA+poZCitOf3XR3y5aylnehg=;
	b=vLjQiNB7jpmEpjXOzMRTM4+EZO4hrmTD1iHU+LTAvbnCCpyLTNWi2dRmohmiRKza1d
	0mBGWqR1OZvmnbwBcalHKSiKG9uhkoLrDrQVss5IZ6S5EDXZQMIn6P0mEsXzmJwbw++J
	17MXrcpwxusGGF1mizjXDRW8LKyCpb7EMdBwbyVIWJOPinCh2vPd8Iy7R4HaPVmpMw5g
	P6BjDtxg9REveThPfeLF5h6kcyWPiyPCtbctvCVLOm94Mu+faUlcIhhzSjNsQ+O3lpqR
	VgXJY2ZwuWwMGDbnAgiuVQXu3ega4FQ7yREZC8twpuMUvnW5uJy97oPDzkI1eJbRbbuF
	yK8g==
X-Received: by 10.68.189.100 with SMTP id gh4mr24229864pbc.21.1392071410333;
	Mon, 10 Feb 2014 14:30:10 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	ug2sm120427384pac.21.2014.02.10.14.30.07 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 14:30:09 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Mon, 10 Feb 2014 14:30:05 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Mon, 10 Feb 2014 14:29:50 -0800
Message-Id: <1392071391-13215-2-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
Cc: Olaf Kirch <okir@suse.de>, "Luis R. Rodriguez" <mcgrof@suse.com>,
	James Morris <jmorris@namei.org>, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: [Xen-devel] [RFC 1/2] ipv6: disable autoconfiguration and DAD on
	non-multicast links
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

RFC 4862 [0], IPv6 Stateless Address Autoconfiguration, states in
Sections 4 and 5 that autoconfiguration is performed only on
multicast-capable links. Multicast is used to ensure the
automatically assigned address is unique, by sending Neighbor
Solicitation messages and listening for these same messages
on both the all-nodes multicast address and the solicited-node
multicast address of the tentative address; this is called
Duplicate Address Detection (DAD) and is documented in Section 5.4.
DAD has an optimization, Optimistic DAD [1], which also requires
multicast. Skip autoconfiguration and all forms of DAD on
non-multicast links.

We don't *fully* disable IPv6 for non-multicast links, as there are
signs that non-multicast IPv6 devices are meant to be supported, one
example being the ipv6 autoconf module parameter. It should be noted,
though, that RFC 4862 Section 5.4 makes it clear that DAD *MUST* be
performed on all unicast addresses prior to assigning them to an
interface, regardless of whether they are obtained through stateless
autoconfiguration, DHCPv6, or manual configuration, with the following
exceptions:

   -  When DupAddrDetectTransmits is set to zero, DAD
      can be skipped
   -  Anycast addresses can skip DAD

In the case that autoconfiguration is disabled, the interface
still gets assigned a temporary address via ipv6_create_tempaddr();
however, it will be kept as temporary (IFA_F_TEMPORARY).

[0] http://tools.ietf.org/html/rfc4862
[1] http://tools.ietf.org/html/rfc4429

Cc: Olaf Kirch <okir@suse.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 net/ipv6/addrconf.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index ad23569..362f64f 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -2211,7 +2211,8 @@ void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len, bool sllao)
 
 	/* Try to figure out our local address for this prefix */
 
-	if (pinfo->autoconf && in6_dev->cnf.autoconf) {
+	if (pinfo->autoconf && in6_dev->cnf.autoconf &&
+	    dev->flags & IFF_MULTICAST) {
 		struct inet6_ifaddr *ifp;
 		struct in6_addr addr;
 		int create = 0, update_lft = 0;
@@ -2248,7 +2249,8 @@ ok:
 
 #ifdef CONFIG_IPV6_OPTIMISTIC_DAD
 			if (in6_dev->cnf.optimistic_dad &&
-			    !net->ipv6.devconf_all->forwarding && sllao)
+			    !net->ipv6.devconf_all->forwarding && sllao &&
+			    dev->flags & IFF_MULTICAST)
 				addr_flags = IFA_F_OPTIMISTIC;
 #endif
 
@@ -3161,6 +3163,7 @@ static void addrconf_dad_start(struct inet6_ifaddr *ifp)
 		goto out;
 
 	if (dev->flags&(IFF_NOARP|IFF_LOOPBACK) ||
+	    !(dev->flags&IFF_MULTICAST) ||
 	    idev->cnf.accept_dad < 1 ||
 	    !(ifp->flags&IFA_F_TENTATIVE) ||
 	    ifp->flags & IFA_F_NODAD) {
@@ -3288,6 +3291,7 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp)
 	send_rs = send_mld &&
 		  ipv6_accept_ra(ifp->idev) &&
 		  ifp->idev->cnf.rtr_solicits > 0 &&
+		  (dev->flags&IFF_MULTICAST) &&
 		  (dev->flags&IFF_LOOPBACK) == 0;
 	read_unlock_bh(&ifp->idev->lock);
 
@@ -4192,8 +4196,9 @@ errout:
 		rtnl_set_sk_err(net, RTNLGRP_IPV6_IFADDR, err);
 }
 
-static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
-				__s32 *array, int bytes)
+static inline void ipv6_store_devconf(struct net_device *dev,
+				      struct ipv6_devconf *cnf,
+				      __s32 *array, int bytes)
 {
 	BUG_ON(bytes < (DEVCONF_MAX * 4));
 
@@ -4203,7 +4208,8 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
 	array[DEVCONF_MTU6] = cnf->mtu6;
 	array[DEVCONF_ACCEPT_RA] = cnf->accept_ra;
 	array[DEVCONF_ACCEPT_REDIRECTS] = cnf->accept_redirects;
-	array[DEVCONF_AUTOCONF] = cnf->autoconf;
+	if (dev->flags & IFF_MULTICAST)
+		array[DEVCONF_AUTOCONF] = cnf->autoconf;
 	array[DEVCONF_DAD_TRANSMITS] = cnf->dad_transmits;
 	array[DEVCONF_RTR_SOLICITS] = cnf->rtr_solicits;
 	array[DEVCONF_RTR_SOLICIT_INTERVAL] =
@@ -4326,7 +4332,7 @@ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev)
 	nla = nla_reserve(skb, IFLA_INET6_CONF, DEVCONF_MAX * sizeof(s32));
 	if (nla == NULL)
 		goto nla_put_failure;
-	ipv6_store_devconf(&idev->cnf, nla_data(nla), nla_len(nla));
+	ipv6_store_devconf(idev->dev, &idev->cnf, nla_data(nla), nla_len(nla));
 
 	/* XXX - MC not implemented */
 
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 22:30:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzMl-0003L7-8H; Mon, 10 Feb 2014 22:30:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WCzMk-0003Kx-Ax
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 22:30:14 +0000
Received: from [85.158.137.68:21100] by server-3.bemta-3.messagelabs.com id
	67/1A-14520-5F259F25; Mon, 10 Feb 2014 22:30:13 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392071410!933620!1
X-Originating-IP: [209.85.220.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24367 invoked from network); 10 Feb 2014 22:30:12 -0000
Received: from mail-pa0-f45.google.com (HELO mail-pa0-f45.google.com)
	(209.85.220.45)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 22:30:12 -0000
Received: by mail-pa0-f45.google.com with SMTP id lf10so6798297pab.32
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 14:30:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id;
	bh=H83VPmq81wZ1xGRb5iTzEVOggxKCGFRbB00Fdn+A3Tk=;
	b=TmuyEJ8CiMrylRQcebYLOmA8Uj9IEJQg91LK6rxYkJFTtl6ZAYAuPrtxUB8VxA1iJ/
	bIvicntEFw/4tTOGUOHxiDEGzIvvOLCPP0Zp+fE10OgVLWMZ8EJJ7lKf4JS+C9WsI4FX
	wJI0MOrfVxTcKwRL2AzbQGgShh6jgzNBWAnSrvKfMZmM+NtfY/GlIDr4mPJMwaNLM8fY
	+4/XE+ag2dd5Req7T4Nhcriol/oeIfKqpugTqYWaOH18+3aJCGexuN9ylYnU77AMxTDq
	nAmRn5JE1wuRyiY1Dqqv7uzQRKV6KnxsKP5mMKPDl77yLHFJ3J5ixmH+jcTsVr7g4PnF
	hQag==
X-Received: by 10.66.142.107 with SMTP id rv11mr28652693pab.17.1392071405809; 
	Mon, 10 Feb 2014 14:30:05 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	qh2sm120444656pab.13.2014.02.10.14.30.02 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 14:30:04 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Mon, 10 Feb 2014 14:30:01 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Mon, 10 Feb 2014 14:29:49 -0800
Message-Id: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
Cc: xen-devel@lists.xenproject.org,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Subject: [Xen-devel] [RFC 0/2] xen-backend interfaces and IFF_MULTICAST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Backend Ethernet interfaces on virtualization hypervisors are not used
like typical Ethernet interfaces. In my current testing, xen-netback at
least requires a struct in_device with a proper MTU setting in order for
the frontend interface to function properly. Under the current design this
will kick off IPv6 autoconfiguration and DAD on these interfaces. This does
not happen for Xen's TAP interface when HVM is used. KVM only uses TAP
interfaces, and its backend interfaces *do not* perform IPv6
autoconfiguration and DAD even though its TAP interfaces do have multicast
enabled.

In Xen's case, some users used to run into issues with the current
architecture when bundles of Xen guests were on the same network and IPv6
autoconfiguration was performed. This happened because the MAC address is
static, and while that can be corrected by randomizing it, an IPv6 address
is simply not needed for these interfaces. There is currently no way to
disable IPv6 on specific types of interfaces, but this begs a review of
the architecture: why is an interface even needed at all? And what about
IPv4 addresses -- why do we need inetdev_init() on these virtualized
interfaces?

Disabling multicast on an interface should disable IPv6 autoconfiguration
and DAD, but the note in include/uapi/linux/if_link.h makes it clear that
IFF_MULTICAST should be considered carefully, given that non-NBMA links
are known to support multicast; this includes all IFF_POINTOPOINT and
IFF_BROADCAST links as well. If we are to follow the RFCs on IPv6
autoconfiguration and DAD, however, it's clear that multicast is required
-- but if we have no reliable way of determining this capability, we won't
know when we can perform autoconfiguration and DAD properly.

If the patch to require IFF_MULTICAST for autoconfiguration and DAD is
valid, then xen-netback can simply clear the flag; clearing it is required
because ether_setup() is used during net_device allocation. I'm currently
reviewing the need for any proper-MTU interface on xen-netback, but in the
meantime I'd like some feedback on IFF_MULTICAST and the following
patches.

Luis R. Rodriguez (2):
  ipv6: disable autoconfiguration and DAD on non-multicast links
  xen-netback: disable multicast and use a random hw MAC address

 drivers/net/xen-netback/interface.c | 14 +++++---------
 net/ipv6/addrconf.c                 | 18 ++++++++++++------
 2 files changed, 17 insertions(+), 15 deletions(-)

-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 22:30:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzMl-0003LE-MW; Mon, 10 Feb 2014 22:30:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WCzMk-0003Ky-H4
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 22:30:14 +0000
Received: from [85.158.137.68:33394] by server-1.bemta-3.messagelabs.com id
	3E/AC-17293-5F259F25; Mon, 10 Feb 2014 22:30:13 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392071410!928615!1
X-Originating-IP: [209.85.220.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18183 invoked from network); 10 Feb 2014 22:30:12 -0000
Received: from mail-pa0-f44.google.com (HELO mail-pa0-f44.google.com)
	(209.85.220.44)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 22:30:12 -0000
Received: by mail-pa0-f44.google.com with SMTP id kq14so6823924pab.3
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 14:30:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=aussYPwSvZV6SF21+5dTA+poZCitOf3XR3y5aylnehg=;
	b=vLjQiNB7jpmEpjXOzMRTM4+EZO4hrmTD1iHU+LTAvbnCCpyLTNWi2dRmohmiRKza1d
	0mBGWqR1OZvmnbwBcalHKSiKG9uhkoLrDrQVss5IZ6S5EDXZQMIn6P0mEsXzmJwbw++J
	17MXrcpwxusGGF1mizjXDRW8LKyCpb7EMdBwbyVIWJOPinCh2vPd8Iy7R4HaPVmpMw5g
	P6BjDtxg9REveThPfeLF5h6kcyWPiyPCtbctvCVLOm94Mu+faUlcIhhzSjNsQ+O3lpqR
	VgXJY2ZwuWwMGDbnAgiuVQXu3ega4FQ7yREZC8twpuMUvnW5uJy97oPDzkI1eJbRbbuF
	yK8g==
X-Received: by 10.68.189.100 with SMTP id gh4mr24229864pbc.21.1392071410333;
	Mon, 10 Feb 2014 14:30:10 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	ug2sm120427384pac.21.2014.02.10.14.30.07 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 10 Feb 2014 14:30:09 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Mon, 10 Feb 2014 14:30:05 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Mon, 10 Feb 2014 14:29:50 -0800
Message-Id: <1392071391-13215-2-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
Cc: Olaf Kirch <okir@suse.de>, "Luis R. Rodriguez" <mcgrof@suse.com>,
	James Morris <jmorris@namei.org>, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: [Xen-devel] [RFC 1/2] ipv6: disable autoconfiguration and DAD on
	non-multicast links
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

RFC4862 [0] on IPv6 Stateless Address Autoconfiguration states in
Sections 4 and 5 that autoconfiguration is performed only on
multicast-capable links. Multicast is used to ensure that an
automatically assigned address is unique: Neighbor Solicitation
messages are sent, and the node listens for those same messages on
both the all-nodes multicast address and the solicited-node
multicast address of the tentative address. This procedure is called
Duplicate Address Detection (DAD) and is documented in Section 5.4.
DAD has an optimization, Optimistic DAD [1], which also requires
multicast. Skip autoconfiguration and all forms of DAD on
non-multicast links.

We don't *fully* disable IPv6 for non-multicast links, as there
are signs that non-multicast IPv6 devices are intended to be
supported, one example being the ipv6 autoconf module parameter.
It should be noted, however, that RFC4862 Section 5.4 makes it
clear that DAD *MUST* be performed on all unicast addresses prior
to assigning them to an interface, regardless of whether they are
obtained through stateless autoconfiguration, DHCPv6, or manual
configuration, with the following exceptions:

   -  When DupAddrDetectTransmits is set to zero, DAD
      can be skipped
   -  Anycast addresses can skip DAD
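The conditions above can be condensed into a small predicate. The
following is an illustrative Python sketch of the post-patch
addrconf_dad_start() decision, not kernel code; the function name is
invented, the flag values are the standard Linux ones:

```python
# Standard Linux flag values (from linux/if.h and linux/if_addr.h)
IFF_LOOPBACK = 0x8
IFF_NOARP = 0x80
IFF_MULTICAST = 0x1000
IFA_F_NODAD = 0x02
IFA_F_TENTATIVE = 0x40

def dad_should_run(dev_flags, ifa_flags, accept_dad=1, dad_transmits=1):
    """Condensed sketch of when DAD actually runs after this patch;
    the new requirement is that the link must be multicast-capable."""
    if dev_flags & (IFF_NOARP | IFF_LOOPBACK):
        return False
    if not dev_flags & IFF_MULTICAST:  # the check this patch adds
        return False
    if accept_dad < 1 or dad_transmits == 0:  # DupAddrDetectTransmits == 0
        return False
    if not ifa_flags & IFA_F_TENTATIVE or ifa_flags & IFA_F_NODAD:
        return False
    return True
```

For example, a point-to-point device without IFF_MULTICAST now skips
DAD entirely, while an ordinary Ethernet device is unaffected.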

In the case that autoconfiguration is disabled, the interface
still gets assigned a temporary address via ipv6_create_tempaddr();
however, it will be kept as temporary (IFA_F_TEMPORARY).

[0] http://tools.ietf.org/html/rfc4862
[1] http://tools.ietf.org/html/rfc4429

Cc: Olaf Kirch <okir@suse.de>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 net/ipv6/addrconf.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index ad23569..362f64f 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -2211,7 +2211,8 @@ void addrconf_prefix_rcv(struct net_device *dev, u8 *opt, int len, bool sllao)
 
 	/* Try to figure out our local address for this prefix */
 
-	if (pinfo->autoconf && in6_dev->cnf.autoconf) {
+	if (pinfo->autoconf && in6_dev->cnf.autoconf &&
+	    dev->flags & IFF_MULTICAST) {
 		struct inet6_ifaddr *ifp;
 		struct in6_addr addr;
 		int create = 0, update_lft = 0;
@@ -2248,7 +2249,8 @@ ok:
 
 #ifdef CONFIG_IPV6_OPTIMISTIC_DAD
 			if (in6_dev->cnf.optimistic_dad &&
-			    !net->ipv6.devconf_all->forwarding && sllao)
+			    !net->ipv6.devconf_all->forwarding && sllao &&
+			    dev->flags & IFF_MULTICAST)
 				addr_flags = IFA_F_OPTIMISTIC;
 #endif
 
@@ -3161,6 +3163,7 @@ static void addrconf_dad_start(struct inet6_ifaddr *ifp)
 		goto out;
 
 	if (dev->flags&(IFF_NOARP|IFF_LOOPBACK) ||
+	    !(dev->flags&IFF_MULTICAST) ||
 	    idev->cnf.accept_dad < 1 ||
 	    !(ifp->flags&IFA_F_TENTATIVE) ||
 	    ifp->flags & IFA_F_NODAD) {
@@ -3288,6 +3291,7 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp)
 	send_rs = send_mld &&
 		  ipv6_accept_ra(ifp->idev) &&
 		  ifp->idev->cnf.rtr_solicits > 0 &&
+		  (dev->flags&IFF_MULTICAST) &&
 		  (dev->flags&IFF_LOOPBACK) == 0;
 	read_unlock_bh(&ifp->idev->lock);
 
@@ -4192,8 +4196,9 @@ errout:
 		rtnl_set_sk_err(net, RTNLGRP_IPV6_IFADDR, err);
 }
 
-static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
-				__s32 *array, int bytes)
+static inline void ipv6_store_devconf(struct net_device *dev,
+				      struct ipv6_devconf *cnf,
+				      __s32 *array, int bytes)
 {
 	BUG_ON(bytes < (DEVCONF_MAX * 4));
 
@@ -4203,7 +4208,8 @@ static inline void ipv6_store_devconf(struct ipv6_devconf *cnf,
 	array[DEVCONF_MTU6] = cnf->mtu6;
 	array[DEVCONF_ACCEPT_RA] = cnf->accept_ra;
 	array[DEVCONF_ACCEPT_REDIRECTS] = cnf->accept_redirects;
-	array[DEVCONF_AUTOCONF] = cnf->autoconf;
+	if (dev->flags & IFF_MULTICAST)
+		array[DEVCONF_AUTOCONF] = cnf->autoconf;
 	array[DEVCONF_DAD_TRANSMITS] = cnf->dad_transmits;
 	array[DEVCONF_RTR_SOLICITS] = cnf->rtr_solicits;
 	array[DEVCONF_RTR_SOLICIT_INTERVAL] =
@@ -4326,7 +4332,7 @@ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev)
 	nla = nla_reserve(skb, IFLA_INET6_CONF, DEVCONF_MAX * sizeof(s32));
 	if (nla == NULL)
 		goto nla_put_failure;
-	ipv6_store_devconf(&idev->cnf, nla_data(nla), nla_len(nla));
+	ipv6_store_devconf(idev->dev, &idev->cnf, nla_data(nla), nla_len(nla));
 
 	/* XXX - MC not implemented */
 
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 10 22:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzeW-0004Gx-TX; Mon, 10 Feb 2014 22:48:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bmenges@gogrid.com>) id 1WCvYR-0007uF-Bl
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:26:03 +0000
Received: from [85.158.143.35:42790] by server-2.bemta-4.messagelabs.com id
	DE/D8-10891-AB919F25; Mon, 10 Feb 2014 18:26:02 +0000
X-Env-Sender: bmenges@gogrid.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392056759!4606165!1
X-Originating-IP: [216.93.160.25]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23515 invoked from network); 10 Feb 2014 18:26:01 -0000
Received: from smtp1.servepath.com (HELO smtp1.servepath.com) (216.93.160.25)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 18:26:01 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=january; d=gogrid.com;
	h=Received:Received:From:To:Subject:Thread-Topic:Thread-Index:Date:Message-ID:Accept-Language:Content-Language:X-MS-Has-Attach:X-MS-TNEF-Correlator:x-originating-ip:Content-Type:MIME-Version;
	b=F1M0S4p/w5eotidXtJaG+8FbldLuqo53EuFBa8G8zZ3pHCCUacRiP3PBBJ9GOLAwLGlnSrs2iAr+oh1/WYUdiHpMOZzyorMYWahD5naNrulvsiFaOs+Er6YHl5gB6GHf;
Received: from [192.168.6.220] (helo=ex-001-sfo.servepath.com)
	by smtp1.servepath.com with esmtp (Exim 4.68 (FreeBSD))
	(envelope-from <bmenges@gogrid.com>) id 1WCvYB-00029U-2x
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 10:26:00 -0800
Received: from EX-002-SFO.servepath.com ([169.254.2.202]) by
	ex-001-sfo.servepath.com ([169.254.1.228]) with mapi id 14.03.0123.003;
	Mon, 10 Feb 2014 10:13:04 -0800
From: Brian Menges <bmenges@gogrid.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: build xen 4.3 backport for wheezy
Thread-Index: Ac8mi7yUvYMzBKP8TbC0RLZqc1DA5Q==
Date: Mon, 10 Feb 2014 18:13:04 +0000
Message-ID: <F33FED1E326F7448A0623CC9BFA2D4F9188292@ex-002-sfo.servepath.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.3.1]
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 10 Feb 2014 22:48:34 +0000
Subject: [Xen-devel] build xen 4.3 backport for wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1121073818030779364=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1121073818030779364==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_"

--_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

I'm attempting to build 4.3 from Jessie (Debian) and I'm bumping into an interesting dependency:

[POC][root@debian xen-4.3.0]# debuild -us -uc -b -d
dpkg-buildpackage -rfakeroot -d -us -uc -b
dpkg-buildpackage: warning: using a gain-root-command while being root
dpkg-buildpackage: source package xen
dpkg-buildpackage: source version 4.3.0-3.1
dpkg-buildpackage: source changed by Brian Menges <bmenges>
dpkg-source --before-build xen-4.3.0
dpkg-buildpackage: host architecture amd64
fakeroot debian/rules clean
md5sum --check debian/control.md5sum --status || \
              /usr/bin/make -f debian/rules debian/control-real
make[1]: Entering directory `/root/wheezy-xen-4.3/xen-4.3.0'
debian/bin/gencontrol.py
Traceback (most recent call last):
  File "debian/bin/gencontrol.py", line 6, in <module>
    from debian_xen.debian import VersionXen
  File "/root/wheezy-xen-4.3/xen-4.3.0/debian/bin/../lib/python/debian_xen/__init__.py", line 19, in <module>
    _setup()
  File "/root/wheezy-xen-4.3/xen-4.3.0/debian/bin/../lib/python/debian_xen/__init__.py", line 16, in _setup
    raise RuntimeError("Can't find %s, please install the linux-support-%s package" % (support, version))
RuntimeError: Can't find /usr/src/linux-support-3.10-3, please install the linux-support-3.10-3 package
make[1]: *** [debian/control-real] Error 1
make[1]: Leaving directory `/root/wheezy-xen-4.3/xen-4.3.0'
make: *** [debian/control] Error 2
dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
debuild: fatal error at line 1357:
dpkg-buildpackage -rfakeroot -d -us -uc -b failed

It looks like the build scripts aren't detecting my installation of linux-support-3.12-0.bpo.1 and have a version lock on 3.10-3 (which isn't backported and is available only in Jessie, even though BPO has the newest 3.12).

I tried looking into debian_xen/__init__.py; I see the line there with 'linux-support-%s', and when I checked 'version' it says None. So I'm wondering where else this might be getting a frozen version from and not attempting to check for other, or newer, versions.
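Reconstructing from the traceback alone, _setup() appears to probe a
single hard-coded path. This sketch is a guess at that logic (the
function name find_linux_support is invented; the real debian_xen code
may differ):

```python
import os

def find_linux_support(version, src_dir="/usr/src"):
    """Guess at the check debian_xen's _setup() performs, based only on
    the traceback above: the support tree must be unpacked at exactly
    /usr/src/linux-support-<version>, where <version> comes from the
    xen packaging itself, not from whatever linux-support is installed."""
    support = os.path.join(src_dir, "linux-support-%s" % version)
    if not os.path.isdir(support):
        raise RuntimeError("Can't find %s, please install the "
                           "linux-support-%s package" % (support, version))
    return support
```

If that reading is right, having linux-support-3.12-0.bpo.1 installed
cannot help: the version string is fixed by the packaging, so only the
3.10-3 path is ever probed.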

- Brian Menges
Principal Engineer, DevOps
GoGrid | ServePath | ColoServe | UpStream Networks


________________________________

The information contained in this message, and any attachments, may contain confidential and legally privileged material. It is solely for the use of the person or entity to which it is addressed. Any review, retransmission, dissemination, or action taken in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you receive this in error, please contact the sender and delete the material from any computer.

--_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_--


--===============1121073818030779364==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1121073818030779364==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 22:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzeX-0004Hc-AT; Mon, 10 Feb 2014 22:48:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WCvcL-0008AE-Fy
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:30:09 +0000
Received: from [85.158.143.35:44107] by server-1.bemta-4.messagelabs.com id
	FF/5D-31661-CAA19F25; Mon, 10 Feb 2014 18:30:04 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392057002!4611237!1
X-Originating-IP: [209.85.212.53]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11907 invoked from network); 10 Feb 2014 18:30:03 -0000
Received: from mail-vb0-f53.google.com (HELO mail-vb0-f53.google.com)
	(209.85.212.53)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:30:03 -0000
Received: by mail-vb0-f53.google.com with SMTP id p17so4956321vbe.40
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 10:30:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=Jm8r9bWx8zcX+k2esYFE8BxoDJ4u80OKvQqYGlK/xAE=;
	b=JLvZ69zD/aZmqbO4r8KHZYA/zgL5f+nhflQ6SpeAhYe5/TmeCkm4zUb5DyNn3rpSUK
	9LKe466wBo6awaO4UACOziN/1hn3LQUTSkqMsriikhLFiYwwTAJUhzOdiMAUjgSGpQTU
	6gkjXfJpf5wPet8oLvtKkrX2v26rm7sOqleuyaGZoMKTJvCLBXDHSFtNg59qkAqI8633
	Wco7E7ToD+M2nzRdUn7o9KVA0SI9wZcEAuE2Lyy7y/EAORHttJyMW2qKx5IoDnTBmn6M
	gk5QPCWPgBzE3dQrZtHj5pURZFgKbeGyco0L2AB/bTqFJRhveCQIapWCaHOz14cXyw8L
	mgYw==
X-Received: by 10.52.160.233 with SMTP id xn9mr314531vdb.48.1392057002400;
	Mon, 10 Feb 2014 10:30:02 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Mon, 10 Feb 2014 10:29:22 -0800 (PST)
In-Reply-To: <CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Mon, 10 Feb 2014 13:29:22 -0500
Message-ID: <CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
To: George Dunlap <dunlapg@umich.edu>
X-Mailman-Approved-At: Mon, 10 Feb 2014 22:48:34 +0000
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4102429627951789400=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4102429627951789400==
Content-Type: multipart/alternative; boundary=089e0160c3ae22a0dd04f21187a3

--089e0160c3ae22a0dd04f21187a3
Content-Type: text/plain; charset=ISO-8859-1

Thanks for the answers on the timeline.

When I start the HVM with the Broadcom adapter, I get this message back.
Parsing config from /etc/xen/ubuntu-hvm-1.cfg
libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
support reset from sysfs for PCI device 0000:05:00.0
libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
support reset from sysfs for PCI device 0000:05:00.1

However, the devices appear in the HVM.  Is this something that I should be
concerned about?
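The libxl warning means the kernel does not expose a sysfs 'reset'
attribute for those two PCI functions, i.e. it knows no reset method
(FLR, power-management reset, etc.) for them. Whether a given function
has one can be checked directly; this is an illustrative helper, not
libxl code, assuming the standard sysfs layout:

```python
import os

def pci_supports_sysfs_reset(bdf, sysfs_root="/sys/bus/pci/devices"):
    """True when the kernel exposes a 'reset' attribute for the PCI
    function named by bdf (e.g. "0000:05:00.0"); libxl prints the
    warning above when this attribute is absent."""
    return os.path.exists(os.path.join(sysfs_root, bdf, "reset"))
```

Passthrough generally still works without a reset method, but the
device is handed over with whatever state its previous owner left
behind, which can matter when a guest crashes or is restarted.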



On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu> wrote:

> On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> <mikeneiderhauser@gmail.com> wrote:
> > Works like a charm.  I do not have physical access to the computer this
> > weekend to verify that the cards are isolated, but the HVM starts and
> > appears to be working well.
> >
> > When do you think Xen 4.4 will be released.  The article I read
> mentioned it
> > will be released in 2014 (hinting towards the end of February).  I also
> read
> > 'When it is ready.'
> >
> > Any timeline would be great.
>
> I'm afraid that's about all we can give. :-)  We've locked down
> development for 2 months now and are working on finding and fixing
> bugs.  If there are no more blocker bugs or other unforeseen delays,
> it should be out by the end of February.  But there are necessarily
> significant unknowns, so we can't make any promises.
>
>  -George
>

--089e0160c3ae22a0dd04f21187a3--


--===============4102429627951789400==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4102429627951789400==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 22:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzeW-0004GT-Ck; Mon, 10 Feb 2014 22:48:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterxianggao@gmail.com>) id 1WCvGT-0006xB-0a
	for Xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 18:07:29 +0000
Received: from [85.158.139.211:19069] by server-1.bemta-5.messagelabs.com id
	0E/DA-12859-06519F25; Mon, 10 Feb 2014 18:07:28 +0000
X-Env-Sender: peterxianggao@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392055638!2959296!1
X-Originating-IP: [209.85.219.49]
X-SpamReason: No, hits=2.9 required=7.0 tests=BIZ_TLD,HTML_20_30,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31566 invoked from network); 10 Feb 2014 18:07:19 -0000
Received: from mail-oa0-f49.google.com (HELO mail-oa0-f49.google.com)
	(209.85.219.49)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:07:19 -0000
Received: by mail-oa0-f49.google.com with SMTP id i7so7780459oag.8
	for <Xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 10:07:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=4GwZ8MHp4oViCsetBkxydPLAb1jqGcCPGFZPRFJWgm0=;
	b=fCqzukxz31/LBzUNQP6lt84S1T6rldX7F6hUUVYcl+Mx0kqLNWarrZIIE/biaric3U
	EImcj9IA8LnDKTFMIgSJM8uey50NastCMdfU8EGq4874rWFsbc9jH1lyME3VZ28UDDxf
	maC6rDuHJV68EtkDeCMAcexmF/xz9yw8lauEbc+xUjG/5qdu/yk4UgLSOe4pePiRPLnK
	IBlk9s9wwbgwhlgokx14qdpIBocuzRl1qnn8TyxnMjTpa/KmS8fjjhqnmQUI+RtwDkmJ
	qYICtrePYhnvSUXpCudlzKGxfg7SfLpwnBOBUgM2a2srrKiyFj1hN+e7/WDLSQRO5IYH
	kMfg==
MIME-Version: 1.0
X-Received: by 10.60.146.194 with SMTP id te2mr28347144oeb.3.1392055638142;
	Mon, 10 Feb 2014 10:07:18 -0800 (PST)
Received: by 10.182.33.34 with HTTP; Mon, 10 Feb 2014 10:07:17 -0800 (PST)
In-Reply-To: <52F8B5C3.1020308@m2r.biz>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
	<20140210104223.GN15387@zion.uk.xensource.com>
	<52F8B5C3.1020308@m2r.biz>
Date: Mon, 10 Feb 2014 10:07:17 -0800
Message-ID: <CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
From: "Peter X. Gao" <peterxianggao@gmail.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
X-Mailman-Approved-At: Mon, 10 Feb 2014 22:48:34 +0000
Cc: Xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	xen-users@lists.xen.org
Subject: Re: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0653985584819691314=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0653985584819691314==
Content-Type: multipart/alternative; boundary=047d7b5d3d7ad1ca5004f21135eb

--047d7b5d3d7ad1ca5004f21135eb
Content-Type: text/plain; charset=ISO-8859-1

Thanks for your reply. I am now using virtio-net and it seems to be
working. However, Intel DPDK also requires hugepages. When a DPDK
application initializes its hugepages, I get the following error. Do I
need to configure something in Xen to support hugepages?
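Whether the domU actually has hugepages reserved is visible in its
/proc/meminfo; DPDK's EAL init needs HugePages_Total to be non-zero
(hugepages are typically reserved via the vm.nr_hugepages sysctl or a
hugepages= kernel command-line option in the guest). A small
illustrative parser, not DPDK code:

```python
def hugepage_counters(meminfo_text):
    """Extract the HugePages_* counters from /proc/meminfo contents.
    A guest showing HugePages_Total: 0 has no hugepages reserved."""
    counters = {}
    for line in meminfo_text.splitlines():
        if line.startswith("HugePages_"):
            key, value = line.split(":")
            counters[key.strip()] = int(value.split()[0])
    return counters
```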



[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.2.0-58-generic (buildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)
[    0.000000] Command line: root=/dev/xvda2 ro root=/dev/xvda2 ro ip=:127.0.255.255::::eth0:dhcp iommu=soft
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] ACPI in unprivileged domain disabled
[    0.000000] Released 0 pages of unused memory
[    0.000000] Set 0 page(s) to 1-1 mapping
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
[    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
[    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI not present or invalid.
[    0.000000] No AGP bridge found
[    0.000000] last_pfn = 0x100800 max_arch_pfn = 0x400000000
[    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
[    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
[    0.000000] init_memory_mapping: 0000000100000000-0000000100800000
[    0.000000] RAMDISK: 02060000 - 045e3000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at 0000000000000000-0000000100800000
[    0.000000] Initmem setup node 0 0000000000000000-0000000100800000
[    0.000000]   NODE_DATA [00000000ffff5000 - 00000000ffff9fff]
[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00000010 -> 0x00001000
[    0.000000]   DMA32    0x00001000 -> 0x00100000
[    0.000000]   Normal   0x00100000 -> 0x00100800
[    0.000000] Movable zone start PFN for each node
[    0.000000] early_node_map[2] active PFN ranges
[    0.000000]     0: 0x00000010 -> 0x000000a0
[    0.000000]     0: 0x00000100 -> 0x00100800
[    0.000000] SFI: Simple Firmware Interface v0.81
http://simplefirmware.org
[    0.000000] SMP: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] No local APIC present
[    0.000000] APIC: disable apic facility
[    0.000000] APIC: switched to apic NOOP
[    0.000000] PM: Registered nosave memory: 00000000000a0000 -
0000000000100000
[    0.000000] PCI: Warning: Cannot find a gap in the 32bit address range
[    0.000000] PCI: Unassigned devices with 32bit resource registers may
break!
[    0.000000] Allocating PCI resources starting at 100900000 (gap:
100900000:400000)
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.2.1 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8
nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136 r8192
d23360 u262144
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.
 Total pages: 1032084
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
ip=:127.0.255.255::::eth0:dhcp iommu=soft
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Placing 64MB software IO TLB between ffff8800f7400000 -
ffff8800fb400000
[    0.000000] software IO TLB at phys 0xf7400000 - 0xfb400000
[    0.000000] Memory: 3988436k/4202496k available (6588k kernel code, 448k
absent, 213612k reserved, 6617k data, 924k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
CPUs=8, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:16640 nr_irqs:336 16
[    0.000000] Console: colour dummy device 80x25
[    0.000000] console [tty0] enabled
[    0.000000] console [hvc0] enabled
[    0.000000] allocated 34603008 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want
memory cgroups
[    0.000000] installing Xen timer for CPU 0
[    0.000000] Detected 2793.098 MHz processor.
[    0.004000] Calibrating delay loop (skipped), value calculated using
timer frequency.. 5586.19 BogoMIPS (lpj=11172392)
[    0.004000] pid_max: default: 32768 minimum: 301
[    0.004000] Security Framework initialized
[    0.004000] AppArmor: AppArmor initialized
[    0.004000] Yama: becoming mindful.
[    0.004000] Dentry cache hash table entries: 524288 (order: 10, 4194304
bytes)
[    0.004000] Inode-cache hash table entries: 262144 (order: 9, 2097152
bytes)
[    0.004000] Mount-cache hash table entries: 256
[    0.004000] Initializing cgroup subsys cpuacct
[    0.004000] Initializing cgroup subsys memory
[    0.004000] Initializing cgroup subsys devices
[    0.004000] Initializing cgroup subsys freezer
[    0.004000] Initializing cgroup subsys blkio
[    0.004000] Initializing cgroup subsys perf_event
[    0.004000] CPU: Physical Processor ID: 0
[    0.004000] CPU: Processor Core ID: 0
[    0.004000] SMP alternatives: switching to UP code
[    0.031040] ftrace: allocating 26602 entries in 105 pages
[    0.032055] cpu 0 spinlock event irq 17
[    0.032115] Performance Events: unsupported p6 CPU model 26 no PMU
driver, software events only.
[    0.032244] NMI watchdog disabled (cpu0): hardware events not enabled
[    0.032350] installing Xen timer for CPU 1
[    0.032363] cpu 1 spinlock event irq 23
[    0.032623] SMP alternatives: switching to SMP code
[    0.057953] NMI watchdog disabled (cpu1): hardware events not enabled
[    0.058085] installing Xen timer for CPU 2
[    0.058103] cpu 2 spinlock event irq 29
[    0.058542] NMI watchdog disabled (cpu2): hardware events not enabled
[    0.058696] installing Xen timer for CPU 3
[    0.058724] cpu 3 spinlock event irq 35
[    0.059115] NMI watchdog disabled (cpu3): hardware events not enabled
[    0.059227] installing Xen timer for CPU 4
[    0.059246] cpu 4 spinlock event irq 41
[    0.059423] NMI watchdog disabled (cpu4): hardware events not enabled
[    0.059544] installing Xen timer for CPU 5
[    0.059562] cpu 5 spinlock event irq 47
[    0.059724] NMI watchdog disabled (cpu5): hardware events not enabled
[    0.059833] installing Xen timer for CPU 6
[    0.059852] cpu 6 spinlock event irq 53
[    0.060003] NMI watchdog disabled (cpu6): hardware events not enabled
[    0.060037] installing Xen timer for CPU 7
[    0.060056] cpu 7 spinlock event irq 59
[    0.060209] NMI watchdog disabled (cpu7): hardware events not enabled
[    0.060243] Brought up 8 CPUs
[    0.060494] devtmpfs: initialized
[    0.061531] EVM: security.selinux
[    0.061537] EVM: security.SMACK64
[    0.061542] EVM: security.capability
[    0.061711] Grant table initialized
[    0.061711] print_constraints: dummy:
[    0.083057] RTC time: 165:165:165, date: 165/165/65
[    0.083093] NET: Registered protocol family 16
[    0.083159] Trying to unpack rootfs image as initramfs...
[    0.084665] PCI: setting up Xen PCI frontend stub
[    0.086003] bio: create slab <bio-0> at 0
[    0.086003] ACPI: Interpreter disabled.
[    0.086003] xen/balloon: Initialising balloon driver.
[    0.088136] xen-balloon: Initialising balloon driver.
[    0.088139] vgaarb: loaded
[    0.088184] i2c-core: driver [aat2870] using legacy suspend method
[    0.088192] i2c-core: driver [aat2870] using legacy resume method
[    0.088283] SCSI subsystem initialized
[    0.088341] usbcore: registered new interface driver usbfs
[    0.088341] usbcore: registered new interface driver hub
[    0.088341] usbcore: registered new device driver usb
[    0.088341] PCI: System does not support PCI
[    0.088341] PCI: System does not support PCI
[    0.088341] NetLabel: Initializing
[    0.088341] NetLabel:  domain hash size = 128
[    0.184026] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.184051] NetLabel:  unlabeled traffic allowed by default
[    0.184159] Switching to clocksource xen
[    0.188203] Freeing initrd memory: 38412k freed
[    0.202280] AppArmor: AppArmor Filesystem Enabled
[    0.202308] pnp: PnP ACPI: disabled
[    0.205341] NET: Registered protocol family 2
[    0.205661] IP route cache hash table entries: 131072 (order: 8, 1048576
bytes)
[    0.207989] TCP established hash table entries: 524288 (order: 11,
8388608 bytes)
[    0.209497] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.209644] TCP: Hash tables configured (established 524288 bind 65536)
[    0.209650] TCP reno registered
[    0.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)
[    0.209704] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
[    0.209817] NET: Registered protocol family 1
[    0.210139] platform rtc_cmos: registered platform RTC device (no PNP
device found)
[    0.211002] audit: initializing netlink socket (disabled)
[    0.211015] type=2000 audit(1392055157.599:1): initialized
[    0.229178] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.230818] VFS: Disk quotas dquot_6.5.2
[    0.230873] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.231462] fuse init (API version 7.17)
[    0.231605] msgmni has been set to 7864
[    0.232267] Block layer SCSI generic (bsg) driver version 0.4 loaded
(major 253)
[    0.232382] io scheduler noop registered
[    0.232417] io scheduler deadline registered
[    0.232449] io scheduler cfq registered (default)
[    0.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.232529] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.233195] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    0.437179] Linux agpgart interface v0.103
[    0.439329] brd: module loaded
[    0.440557] loop: module loaded
[    0.442439] blkfront device/vbd/51714 num-ring-pages 1 nr_ents 32.
[    0.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents 32.
[    0.447233] blkfront: xvda2: flush diskcache: enabled
[    0.447810] Fixed MDIO Bus: probed
[    0.447856] tun: Universal TUN/TAP device driver, 1.6
[    0.447864] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[    0.447945] PPP generic driver version 2.4.2
[    0.448029] Initialising Xen virtual ethernet driver.
[    0.453923] blkfront: xvda1: flush diskcache: enabled
[    0.455000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.455031] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.455048] uhci_hcd: USB Universal Host Controller Interface driver
[    0.455100] usbcore: registered new interface driver libusual
[    0.455134] i8042: PNP: No PS/2 controller found. Probing ports directly.
[    1.455791] i8042: No controller found
[    1.456071] mousedev: PS/2 mouse device common for all mice
[    1.496241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
[    1.496489] rtc_cmos: probe of rtc_cmos failed with error -38
[    1.496624] device-mapper: uevent: version 1.0.3
...............
...............
...............
...............


[  135.957086] BUG: unable to handle kernel paging request at
ffff8800f36c0960
[  135.957105] IP: [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
[  135.957122] PGD 1c06067 PUD dd1067 PMD f6d067 PTE 80100000f36c0065
[  135.957134] Oops: 0003 [#1] SMP
[  135.957141] CPU 0
[  135.957144] Modules linked in: igb_uio(O) uio
[  135.957155]
[  135.957160] Pid: 659, comm: helloworld Tainted: G           O
3.2.0-58-generic #88-Ubuntu
[  135.957171] RIP: e030:[<ffffffff81008efe>]  [<ffffffff81008efe>]
xen_set_pte_at+0x3e/0x210
[  135.957183] RSP: e02b:ffff8800037ddc88  EFLAGS: 00010297
[  135.957189] RAX: 0000000000000000 RBX: 800000008c6000e7 RCX:
800000008c6000e7
[  135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI:
ffff880003044980
[  135.957205] RBP: ffff8800037ddcd8 R08: 0000000000000000 R09:
dead000000100100
[  135.957212] R10: dead000000200200 R11: 00007f4a64f7e02a R12:
ffffea0003c48000
[  135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15:
0000000000000001
[  135.957232] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
knlGS:0000000000000000
[  135.957241] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4:
0000000000002660
[  135.957255] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  135.957263] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  135.957271] Process helloworld (pid: 659, threadinfo ffff8800037dc000,
task ffff8800034c1700)
[  135.957279] Stack:
[  135.957283]  00007f4a65800000 ffff880003044980 dead000000200200
dead000000100100
[  135.957297]  0000000000000000 0000000000000000 ffffea0003c48000
800000008c6000e7
[  135.957310]  ffff8800030449ec 0000000000000001 ffff8800037ddd68
ffffffff81158453
[  135.957322] Call Trace:
[  135.957333]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  135.957342]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  135.957351]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  135.957361]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  135.957370]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  135.957378]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  135.957391]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  135.957399]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  135.957408]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  135.957417]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48 89 7d b8
48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83 f8 01 74 75
<49> 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0 4c 8b
[  135.957507] RIP  [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
[  135.957517]  RSP <ffff8800037ddc88>
[  135.957521] CR2: ffff8800f36c0960
[  135.957528] ---[ end trace f6a013072f2aee83 ]---
[  160.032062] BUG: soft lockup - CPU#0 stuck for 23s! [helloworld:659]
[  160.032129] Modules linked in: igb_uio(O) uio
[  160.032140] CPU 0
[  160.032143] Modules linked in: igb_uio(O) uio
[  160.032153]
[  160.032159] Pid: 659, comm: helloworld Tainted: G      D    O
3.2.0-58-generic #88-Ubuntu
[  160.032170] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>]
hypercall_page+0x3aa/0x1000
[  160.032190] RSP: e02b:ffff8800037dd730  EFLAGS: 00000202
[  160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
ffffffff810013aa
[  160.032204] RDX: 0000000000000000 RSI: ffff8800037dd748 RDI:
0000000000000003
[  160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09:
ffff8800f6c000a0
[  160.032220] R10: 0000000000007ff0 R11: 0000000000000202 R12:
0000000000000011
[  160.032227] R13: 0000000000000201 R14: ffff880003044901 R15:
ffff880003044900
[  160.032239] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
knlGS:0000000000000000
[  160.032248] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  160.032255] CR2: ffff8800f36c0960 CR3: 0000000001c05000 CR4:
0000000000002660
[  160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  160.032271] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  160.032279] Process helloworld (pid: 659, threadinfo ffff8800037dc000,
task ffff8800034c1700)
[  160.032287] Stack:
[  160.032291]  0000000000000011 00000000fffffffa ffffffff813adade
ffff8800037dd764
[  160.032304]  ffffffff00000001 0000000000000000 00000004813ad17e
ffff8800037dd778
[  160.032317]  ffff8800030449ec ffff8800037dd788 ffffffff813af5e0
ffff8800037dd7d8
[  160.032329] Call Trace:
[  160.032341]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
[  160.032350]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
[  160.032360]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
[  160.032370]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
[  160.032381]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
[  160.032390]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
[  160.032400]  [<ffffffff81146408>] exit_mmap+0x58/0x140
[  160.032408]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
[  160.032416]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
[  160.032425]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032433]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032442]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032452]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
[  160.032460]  [<ffffffff81065f39>] mmput+0x29/0x30
[  160.032470]  [<ffffffff8106c943>] exit_mm+0x113/0x130
[  160.032479]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
[  160.032488]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
[  160.032496]  [<ffffffff8106cace>] do_exit+0x16e/0x450
[  160.032504]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
[  160.032513]  [<ffffffff8164812f>] no_context+0x150/0x15d
[  160.032520]  [<ffffffff81648307>] __bad_area_nosemaphore+0x1cb/0x1ea
[  160.032529]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
[  160.032537]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
[  160.032545]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
[  160.032555]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
[  160.032564]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init+0x48/0x160
[  160.032575]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
[  160.032588]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
[  160.032597]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.032605]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
[  160.032613]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
[  160.032622]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  160.032630]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  160.032638]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  160.032648]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  160.032656]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  160.032664]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  160.032673]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  160.032681]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  160.032689]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  160.032697]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc
cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05
<41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[  160.032781] Call Trace:
[  160.032787]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
[  160.032795]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
[  160.032803]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
[  160.032811]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
[  160.032818]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
[  160.032826]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
[  160.032833]  [<ffffffff81146408>] exit_mmap+0x58/0x140
[  160.032841]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
[  160.032849]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
[  160.032857]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032866]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032874]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032882]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
[  160.032889]  [<ffffffff81065f39>] mmput+0x29/0x30
[  160.032896]  [<ffffffff8106c943>] exit_mm+0x113/0x130
[  160.032904]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
[  160.032912]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
[  160.032920]  [<ffffffff8106cace>] do_exit+0x16e/0x450
[  160.032928]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
[  160.032935]  [<ffffffff8164812f>] no_context+0x150/0x15d
[  160.032943]  [<ffffffff81648307>] __bad_area_nosemaphore+0x1cb/0x1ea
[  160.032951]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
[  160.032959]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
[  160.032967]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
[  160.032975]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
[  160.036054]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init+0x48/0x160
[  160.036054]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
[  160.036054]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
[  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.036054]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
[  160.036054]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
[  160.036054]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  160.036054]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  160.036054]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  160.036054]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  160.036054]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  160.036054]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  160.036054]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  160.036054]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  160.036054]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30



On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <fabio.fantoni@m2r.biz> wrote:

> Il 10/02/2014 11:42, Wei Liu ha scritto:
>
>  On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
>>
>>> Hi,
>>>
>>>         I am new to Xen and I am trying to run Intel DPDK inside a domU
>>> with
>>> virtio on Xen 4.2. Is it possible to do this?
>>>
>>>
> Based on my tests with virtio:
> - virtio-serial seems to work out of the box with Windows domUs (also with
> the Xen PV drivers). On Linux domUs with an old kernel (tested 2.6.32) it
> also works out of the box, but newer kernels (tested >=3.2) require
> pci=nomsi to work correctly (it also works with the Xen PVHVM drivers).
> For now I have not found a solution for the MSI problem; there are some
> posts about it.
> - virtio-net used to work out of the box, but with recent qemu versions it
> is broken due to a qemu regression. I have narrowed it down with bisect to
> one commit between 4 Jul 2013 and 22 Jul 2013, but I was unable to find
> the exact commit because there are other critical problems with Xen in
> that range.
> - I have not tested virtio-disk, so I do not know whether it works with
> recent Xen and qemu versions.
>
>
>  DPDK doesn't seem to be tightly coupled with VirtIO, does it?
>>
>> Could you look at Xen's PV network protocol instead? VirtIO has no
>> mainline support on Xen while Xen's PV protocol has been in mainline for
>> years. And it's very likely to be enabled by default nowadays.
>>
>> Wei.
>>
>>  Regards
>>> Peter
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>>
>>
>>
>
>
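The pci=nomsi workaround mentioned in the quoted reply is a guest kernel parameter, so inside an HVM domU it would typically go into the guest's own bootloader configuration rather than the Xen side. A sketch, assuming a grub2-based guest:

```shell
# /etc/default/grub inside the domU; run update-grub afterwards and reboot.
GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=nomsi"
```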

<div>[ =A0 =A00.232267] Block layer SCSI generic (bsg) driver version 0.4 l=
oaded (major 253)</div><div>[ =A0 =A00.232382] io scheduler noop registered=
</div><div>[ =A0 =A00.232417] io scheduler deadline registered</div><div>[ =
=A0 =A00.232449] io scheduler cfq registered (default)</div>
<div>[ =A0 =A00.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5</di=
v><div>[ =A0 =A00.232529] pciehp: PCI Express Hot Plug Controller Driver ve=
rsion: 0.4</div><div>[ =A0 =A00.233195] Serial: 8250/16550 driver, 32 ports=
, IRQ sharing enabled</div>
<div>[ =A0 =A00.437179] Linux agpgart interface v0.103</div><div>[ =A0 =A00=
.439329] brd: module loaded</div><div>[ =A0 =A00.440557] loop: module loade=
d</div><div>[ =A0 =A00.442439] blkfront device/vbd/51714 num-ring-pages 1 n=
r_ents 32.</div>
<div>[ =A0 =A00.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents =
32.</div><div>[ =A0 =A00.447233] blkfront: xvda2: flush diskcache: enabled<=
/div><div>[ =A0 =A00.447810] Fixed MDIO Bus: probed</div><div>[ =A0 =A00.44=
7856] tun: Universal TUN/TAP device driver, 1.6</div>
<div>[ =A0 =A00.447864] tun: (C) 1999-2004 Max Krasnyansky &lt;<a href=3D"m=
ailto:maxk@qualcomm.com">maxk@qualcomm.com</a>&gt;</div><div>[ =A0 =A00.447=
945] PPP generic driver version 2.4.2</div><div>[ =A0 =A00.448029] Initiali=
sing Xen virtual ethernet driver.</div>
<div>[ =A0 =A00.453923] blkfront: xvda1: flush diskcache: enabled</div><div=
>[ =A0 =A00.455000] ehci_hcd: USB 2.0 &#39;Enhanced&#39; Host Controller (E=
HCI) Driver</div><div>[ =A0 =A00.455031] ohci_hcd: USB 1.1 &#39;Open&#39; H=
ost Controller (OHCI) Driver</div>
<div>[ =A0 =A00.455048] uhci_hcd: USB Universal Host Controller Interface d=
river</div><div>[ =A0 =A00.455100] usbcore: registered new interface driver=
 libusual</div><div>[ =A0 =A00.455134] i8042: PNP: No PS/2 controller found=
. Probing ports directly.</div>
<div>[ =A0 =A01.455791] i8042: No controller found</div><div>[ =A0 =A01.456=
071] mousedev: PS/2 mouse device common for all mice</div><div>[ =A0 =A01.4=
96241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0</div><div>[=
 =A0 =A01.496489] rtc_cmos: probe of rtc_cmos failed with error -38</div>
<div>[ =A0 =A01.496624] device-mapper: uevent: version 1.0.3</div><div>....=
...........</div><div><div>...............</div></div><div><div>...........=
....</div></div><div><div>...............</div></div><div><br></div><div><b=
r>
</div><div>[ =A0135.957086] BUG: unable to handle kernel paging request at =
ffff8800f36c0960</div><div>[ =A0135.957105] IP: [&lt;ffffffff81008efe&gt;] =
xen_set_pte_at+0x3e/0x210</div><div>[ =A0135.957122] PGD 1c06067 PUD dd1067=
 PMD f6d067 PTE 80100000f36c0065</div>
<div>[ =A0135.957134] Oops: 0003 [#1] SMP=A0</div><div>[ =A0135.957141] CPU=
 0=A0</div><div>[ =A0135.957144] Modules linked in: igb_uio(O) uio</div><di=
v>[ =A0135.957155]=A0</div><div>[ =A0135.957160] Pid: 659, comm: helloworld=
 Tainted: G =A0 =A0 =A0 =A0 =A0 O 3.2.0-58-generic #88-Ubuntu =A0</div>
<div>[ =A0135.957171] RIP: e030:[&lt;ffffffff81008efe&gt;] =A0[&lt;ffffffff=
81008efe&gt;] xen_set_pte_at+0x3e/0x210</div><div>[ =A0135.957183] RSP: e02=
b:ffff8800037ddc88 =A0EFLAGS: 00010297</div><div>[ =A0135.957189] RAX: 0000=
000000000000 RBX: 800000008c6000e7 RCX: 800000008c6000e7</div>
<div>[ =A0135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI: ffff=
880003044980</div><div>[ =A0135.957205] RBP: ffff8800037ddcd8 R08: 00000000=
00000000 R09: dead000000100100</div><div>[ =A0135.957212] R10: dead00000020=
0200 R11: 00007f4a64f7e02a R12: ffffea0003c48000</div>
<div>[ =A0135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15: 0000=
000000000001</div><div>[ =A0135.957232] FS: =A000007f4a656e8800(0000) GS:ff=
ff8800ffc00000(0000) knlGS:0000000000000000</div><div>[ =A0135.957241] CS: =
=A0e033 DS: 0000 ES: 0000 CR0: 000000008005003b</div>
<div>[ =A0135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4: 0000=
000000002660</div><div>[ =A0135.957255] DR0: 0000000000000000 DR1: 00000000=
00000000 DR2: 0000000000000000</div><div>[ =A0135.957263] DR3: 000000000000=
0000 DR6: 00000000ffff0ff0 DR7: 0000000000000400</div>
<div>[ =A0135.957271] Process helloworld (pid: 659, threadinfo ffff8800037d=
c000, task ffff8800034c1700)</div><div>[ =A0135.957279] Stack:</div><div>[ =
=A0135.957283] =A000007f4a65800000 ffff880003044980 dead000000200200 dead00=
0000100100</div>
<div>[ =A0135.957297] =A00000000000000000 0000000000000000 ffffea0003c48000=
 800000008c6000e7</div><div>[ =A0135.957310] =A0ffff8800030449ec 0000000000=
000001 ffff8800037ddd68 ffffffff81158453</div><div>[ =A0135.957322] Call Tr=
ace:</div>
<div>[ =A0135.957333] =A0[&lt;ffffffff81158453&gt;] hugetlb_no_page+0x233/0=
x370</div><div>[ =A0135.957342] =A0[&lt;ffffffff8100640e&gt;] ? xen_pud_val=
+0xe/0x10</div><div>[ =A0135.957351] =A0[&lt;ffffffff810053b5&gt;] ? __raw_=
callee_save_xen_pud_val+0x11/0x1e</div>
<div>[ =A0135.957361] =A0[&lt;ffffffff8115883e&gt;] hugetlb_fault+0x1fe/0x3=
40</div><div>[ =A0135.957370] =A0[&lt;ffffffff81143e18&gt;] ? vma_link+0x88=
/0xe0</div><div>[ =A0135.957378] =A0[&lt;ffffffff81140a3c&gt;] handle_mm_fa=
ult+0x2ec/0x370</div>
<div>[ =A0135.957391] =A0[&lt;ffffffff816658be&gt;] do_page_fault+0x17e/0x5=
40</div><div>[ =A0135.957399] =A0[&lt;ffffffff81145af8&gt;] ? do_mmap_pgoff=
+0x348/0x360</div><div>[ =A0135.957408] =A0[&lt;ffffffff81145bf1&gt;] ? sys=
_mmap_pgoff+0xe1/0x230</div>
<div>[ =A0135.957417] =A0[&lt;ffffffff816624f5&gt;] page_fault+0x25/0x30</d=
iv><div>[ =A0135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48=
 89 7d b8 48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83 f8=
 01 74 75 &lt;49&gt; 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0 =
4c 8b=A0</div>
<div>[ =A0135.957507] RIP =A0[&lt;ffffffff81008efe&gt;] xen_set_pte_at+0x3e=
/0x210</div><div>[ =A0135.957517] =A0RSP &lt;ffff8800037ddc88&gt;</div><div=
>[ =A0135.957521] CR2: ffff8800f36c0960</div><div>[ =A0135.957528] ---[ end=
 trace f6a013072f2aee83 ]---</div>
<div>[ =A0160.032062] BUG: soft lockup - CPU#0 stuck for 23s! [helloworld:6=
59]</div><div>[ =A0160.032129] Modules linked in: igb_uio(O) uio</div><div>=
[ =A0160.032140] CPU 0=A0</div><div>[ =A0160.032143] Modules linked in: igb=
_uio(O) uio</div>
<div>[ =A0160.032153]=A0</div><div>[ =A0160.032159] Pid: 659, comm: hellowo=
rld Tainted: G =A0 =A0 =A0D =A0 =A0O 3.2.0-58-generic #88-Ubuntu =A0</div><=
div>[ =A0160.032170] RIP: e030:[&lt;ffffffff810013aa&gt;] =A0[&lt;ffffffff8=
10013aa&gt;] hypercall_page+0x3aa/0x1000</div>
<div>[ =A0160.032190] RSP: e02b:ffff8800037dd730 =A0EFLAGS: 00000202</div><=
div>[ =A0160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX: fffff=
fff810013aa</div><div>[ =A0160.032204] RDX: 0000000000000000 RSI: ffff88000=
37dd748 RDI: 0000000000000003</div>
<div>[ =A0160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09: ffff=
8800f6c000a0</div><div>[ =A0160.032220] R10: 0000000000007ff0 R11: 00000000=
00000202 R12: 0000000000000011</div><div>[ =A0160.032227] R13: 000000000000=
0201 R14: ffff880003044901 R15: ffff880003044900</div>
<div>[ =A0160.032239] FS: =A000007f4a656e8800(0000) GS:ffff8800ffc00000(000=
0) knlGS:0000000000000000</div><div>[ =A0160.032248] CS: =A0e033 DS: 0000 E=
S: 0000 CR0: 000000008005003b</div><div>[ =A0160.032255] CR2: ffff8800f36c0=
960 CR3: 0000000001c05000 CR4: 0000000000002660</div>
<div>[ =A0160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000=
000000000000</div><div>[ =A0160.032271] DR3: 0000000000000000 DR6: 00000000=
ffff0ff0 DR7: 0000000000000400</div><div>[ =A0160.032279] Process helloworl=
d (pid: 659, threadinfo ffff8800037dc000, task ffff8800034c1700)</div>
<div>[ =A0160.032287] Stack:</div><div>[ =A0160.032291] =A00000000000000011=
 00000000fffffffa ffffffff813adade ffff8800037dd764</div><div>[ =A0160.0323=
04] =A0ffffffff00000001 0000000000000000 00000004813ad17e ffff8800037dd778<=
/div><div>
[ =A0160.032317] =A0ffff8800030449ec ffff8800037dd788 ffffffff813af5e0 ffff=
8800037dd7d8</div><div>[ =A0160.032329] Call Trace:</div><div>[ =A0160.0323=
41] =A0[&lt;ffffffff813adade&gt;] ? xen_poll_irq_timeout+0x3e/0x50</div><di=
v>[ =A0160.032350] =A0[&lt;ffffffff813af5e0&gt;] xen_poll_irq+0x10/0x20</di=
v>
<div>[ =A0160.032360] =A0[&lt;ffffffff81646686&gt;] xen_spin_lock_slow+0x98=
/0xf4</div><div>[ =A0160.032370] =A0[&lt;ffffffff810124ba&gt;] xen_spin_loc=
k+0x4a/0x50</div><div>[ =A0160.032381] =A0[&lt;ffffffff81661d8e&gt;] _raw_s=
pin_lock+0xe/0x20</div>
<div>[ =A0160.032390] =A0[&lt;ffffffff81007d9a&gt;] xen_exit_mmap+0x2a/0x60=
</div><div>[ =A0160.032400] =A0[&lt;ffffffff81146408&gt;] exit_mmap+0x58/0x=
140</div><div>[ =A0160.032408] =A0[&lt;ffffffff8166275a&gt;] ? error_exit+0=
x2a/0x60</div>
<div>[ =A0160.032416] =A0[&lt;ffffffff8166227c&gt;] ? retint_restore_args+0=
x5/0x6</div><div>[ =A0160.032425] =A0[&lt;ffffffff8100132a&gt;] ? hypercall=
_page+0x32a/0x1000</div><div>[ =A0160.032433] =A0[&lt;ffffffff8100132a&gt;]=
 ? hypercall_page+0x32a/0x1000</div>
<div>[ =A0160.032442] =A0[&lt;ffffffff8100132a&gt;] ? hypercall_page+0x32a/=
0x1000</div><div>[ =A0160.032452] =A0[&lt;ffffffff81065e22&gt;] mmput.part.=
16+0x42/0x130</div><div>[ =A0160.032460] =A0[&lt;ffffffff81065f39&gt;] mmpu=
t+0x29/0x30</div>
<div>[ =A0160.032470] =A0[&lt;ffffffff8106c943&gt;] exit_mm+0x113/0x130</di=
v><div>[ =A0160.032479] =A0[&lt;ffffffff810e58c5&gt;] ? taskstats_exit+0x45=
/0x240</div><div>[ =A0160.032488] =A0[&lt;ffffffff81662075&gt;] ? _raw_spin=
_lock_irq+0x15/0x20</div>
<div>[ =A0160.032496] =A0[&lt;ffffffff8106cace&gt;] do_exit+0x16e/0x450</di=
v><div>[ =A0160.032504] =A0[&lt;ffffffff81662f20&gt;] oops_end+0xb0/0xf0</d=
iv><div>[ =A0160.032513] =A0[&lt;ffffffff8164812f&gt;] no_context+0x150/0x1=
5d</div>
<div>[ =A0160.032520] =A0[&lt;ffffffff81648307&gt;] __bad_area_nosemaphore+=
0x1cb/0x1ea</div><div>[ =A0160.032529] =A0[&lt;ffffffff816622ad&gt;] ? rest=
ore_args+0x30/0x30</div><div>[ =A0160.032537] =A0[&lt;ffffffff8164795b&gt;]=
 ? pte_offset_kernel+0xe/0x37</div>
<div>[ =A0160.032545] =A0[&lt;ffffffff81648339&gt;] bad_area_nosemaphore+0x=
13/0x15</div><div>[ =A0160.032555] =A0[&lt;ffffffff81665bab&gt;] do_page_fa=
ult+0x46b/0x540</div><div>[ =A0160.032564] =A0[&lt;ffffffff8115c3f8&gt;] ? =
mpol_shared_policy_init+0x48/0x160</div>
<div>[ =A0160.032575] =A0[&lt;ffffffff811667bd&gt;] ? kmem_cache_alloc+0x11=
d/0x140</div><div>[ =A0160.032588] =A0[&lt;ffffffff8126d5fb&gt;] ? hugetlbf=
s_alloc_inode+0x5b/0xa0</div><div>[ =A0160.032597] =A0[&lt;ffffffff816624f5=
&gt;] page_fault+0x25/0x30</div>
<div>[ =A0160.032605] =A0[&lt;ffffffff81008efe&gt;] ? xen_set_pte_at+0x3e/0=
x210</div><div>[ =A0160.032613] =A0[&lt;ffffffff81008ef9&gt;] ? xen_set_pte=
_at+0x39/0x210</div><div>[ =A0160.032622] =A0[&lt;ffffffff81158453&gt;] hug=
etlb_no_page+0x233/0x370</div>
<div>[ =A0160.032630] =A0[&lt;ffffffff8100640e&gt;] ? xen_pud_val+0xe/0x10<=
/div><div>[ =A0160.032638] =A0[&lt;ffffffff810053b5&gt;] ? __raw_callee_sav=
e_xen_pud_val+0x11/0x1e</div><div>[ =A0160.032648] =A0[&lt;ffffffff8115883e=
&gt;] hugetlb_fault+0x1fe/0x340</div>
<div>[ =A0160.032656] =A0[&lt;ffffffff81143e18&gt;] ? vma_link+0x88/0xe0</d=
iv><div>[ =A0160.032664] =A0[&lt;ffffffff81140a3c&gt;] handle_mm_fault+0x2e=
c/0x370</div><div>[ =A0160.032673] =A0[&lt;ffffffff816658be&gt;] do_page_fa=
ult+0x17e/0x540</div>
<div>[ =A0160.032681] =A0[&lt;ffffffff81145af8&gt;] ? do_mmap_pgoff+0x348/0=
x360</div><div>[ =A0160.032689] =A0[&lt;ffffffff81145bf1&gt;] ? sys_mmap_pg=
off+0xe1/0x230</div><div>[ =A0160.032697] =A0[&lt;ffffffff816624f5&gt;] pag=
e_fault+0x25/0x30</div>
<div>[ =A0160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc=
 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00=
 0f 05 &lt;41&gt; 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc =
cc=A0</div>
<div>[ =A0160.032781] Call Trace:</div><div>[ =A0160.032787] =A0[&lt;ffffff=
ff813adade&gt;] ? xen_poll_irq_timeout+0x3e/0x50</div><div>[ =A0160.032795]=
 =A0[&lt;ffffffff813af5e0&gt;] xen_poll_irq+0x10/0x20</div><div>[ =A0160.03=
2803] =A0[&lt;ffffffff81646686&gt;] xen_spin_lock_slow+0x98/0xf4</div>
<div>[ =A0160.032811] =A0[&lt;ffffffff810124ba&gt;] xen_spin_lock+0x4a/0x50=
</div><div>[ =A0160.032818] =A0[&lt;ffffffff81661d8e&gt;] _raw_spin_lock+0x=
e/0x20</div><div>[ =A0160.032826] =A0[&lt;ffffffff81007d9a&gt;] xen_exit_mm=
ap+0x2a/0x60</div>
<div>[ =A0160.032833] =A0[&lt;ffffffff81146408&gt;] exit_mmap+0x58/0x140</d=
iv><div>[ =A0160.032841] =A0[&lt;ffffffff8166275a&gt;] ? error_exit+0x2a/0x=
60</div><div>[ =A0160.032849] =A0[&lt;ffffffff8166227c&gt;] ? retint_restor=
e_args+0x5/0x6</div>
<div>[ =A0160.032857] =A0[&lt;ffffffff8100132a&gt;] ? hypercall_page+0x32a/=
0x1000</div><div>[ =A0160.032866] =A0[&lt;ffffffff8100132a&gt;] ? hypercall=
_page+0x32a/0x1000</div><div>[ =A0160.032874] =A0[&lt;ffffffff8100132a&gt;]=
 ? hypercall_page+0x32a/0x1000</div>
<div>[ =A0160.032882] =A0[&lt;ffffffff81065e22&gt;] mmput.part.16+0x42/0x13=
0</div><div>[ =A0160.032889] =A0[&lt;ffffffff81065f39&gt;] mmput+0x29/0x30<=
/div><div>[ =A0160.032896] =A0[&lt;ffffffff8106c943&gt;] exit_mm+0x113/0x13=
0</div><div>
[ =A0160.032904] =A0[&lt;ffffffff810e58c5&gt;] ? taskstats_exit+0x45/0x240<=
/div><div>[ =A0160.032912] =A0[&lt;ffffffff81662075&gt;] ? _raw_spin_lock_i=
rq+0x15/0x20</div><div>[ =A0160.032920] =A0[&lt;ffffffff8106cace&gt;] do_ex=
it+0x16e/0x450</div>
<div>[ =A0160.032928] =A0[&lt;ffffffff81662f20&gt;] oops_end+0xb0/0xf0</div=
><div>[ =A0160.032935] =A0[&lt;ffffffff8164812f&gt;] no_context+0x150/0x15d=
</div><div>[ =A0160.032943] =A0[&lt;ffffffff81648307&gt;] __bad_area_nosema=
phore+0x1cb/0x1ea</div>
<div>[ =A0160.032951] =A0[&lt;ffffffff816622ad&gt;] ? restore_args+0x30/0x3=
0</div><div>[ =A0160.032959] =A0[&lt;ffffffff8164795b&gt;] ? pte_offset_ker=
nel+0xe/0x37</div><div>[ =A0160.032967] =A0[&lt;ffffffff81648339&gt;] bad_a=
rea_nosemaphore+0x13/0x15</div>
<div>[ =A0160.032975] =A0[&lt;ffffffff81665bab&gt;] do_page_fault+0x46b/0x5=
40</div><div>[ =A0160.036054] =A0[&lt;ffffffff8115c3f8&gt;] ? mpol_shared_p=
olicy_init+0x48/0x160</div><div>[ =A0160.036054] =A0[&lt;ffffffff811667bd&g=
t;] ? kmem_cache_alloc+0x11d/0x140</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff8126d5fb&gt;] ? hugetlbfs_alloc_inode=
+0x5b/0xa0</div><div>[ =A0160.036054] =A0[&lt;ffffffff816624f5&gt;] page_fa=
ult+0x25/0x30</div><div>[ =A0160.036054] =A0[&lt;ffffffff81008efe&gt;] ? xe=
n_set_pte_at+0x3e/0x210</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff81008ef9&gt;] ? xen_set_pte_at+0x39/0=
x210</div><div>[ =A0160.036054] =A0[&lt;ffffffff81158453&gt;] hugetlb_no_pa=
ge+0x233/0x370</div><div>[ =A0160.036054] =A0[&lt;ffffffff8100640e&gt;] ? x=
en_pud_val+0xe/0x10</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff810053b5&gt;] ? __raw_callee_save_xen=
_pud_val+0x11/0x1e</div><div>[ =A0160.036054] =A0[&lt;ffffffff8115883e&gt;]=
 hugetlb_fault+0x1fe/0x340</div><div>[ =A0160.036054] =A0[&lt;ffffffff81143=
e18&gt;] ? vma_link+0x88/0xe0</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff81140a3c&gt;] handle_mm_fault+0x2ec/0=
x370</div><div>[ =A0160.036054] =A0[&lt;ffffffff816658be&gt;] do_page_fault=
+0x17e/0x540</div><div>[ =A0160.036054] =A0[&lt;ffffffff81145af8&gt;] ? do_=
mmap_pgoff+0x348/0x360</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff81145bf1&gt;] ? sys_mmap_pgoff+0xe1/0=
x230</div><div>[ =A0160.036054] =A0[&lt;ffffffff816624f5&gt;] page_fault+0x=
25/0x30</div></div><div><br></div></div><div class=3D"gmail_extra"><br><br>=
<div class="gmail_quote">
On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <span dir="ltr">&lt;<a href="mailto:fabio.fantoni@m2r.biz" target="_blank">fabio.fantoni@m2r.biz</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On 10/02/2014 11:42, Wei Liu wrote:<div class=""><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
I am new to Xen and I am trying to run Intel DPDK inside a domU with<br>
virtio on Xen 4.2. Is it possible to do this?<br>
<br>
</blockquote></blockquote>
<br></div>
Based on my tests with virtio:<br>
- virtio-serial seems to work out of the box with Windows domUs, including with the Xen PV drivers. On Linux domUs it also works out of the box with an old kernel (tested 2.6.32), but newer kernels (tested &gt;= 3.2) require pci=nomsi to work correctly; with that it also works alongside the Xen PVHVM drivers. So far I have not found a solution for the MSI problem; there are some posts about it.<br>

- virtio-net used to work out of the box, but with recent qemu versions it is broken by a qemu regression. I narrowed it down with bisect to one commit between 4 Jul 2013 and 22 Jul 2013, but I was unable to find the exact commit because there are other critical problems with xen in that range.<br>
- I have not tested virtio-disk, so I don't know whether it works with recent xen and qemu versions.<div class="HOEnZb"><div class="h5"><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
DPDK doesn&#39;t seem to be tightly coupled with VirtIO, does it?<br>
<br>
Could you look at Xen&#39;s PV network protocol instead? VirtIO has no<br>
mainline support on Xen, while Xen&#39;s PV protocol has been in mainline for<br>
years. And it&#39;s very likely to be enabled by default nowadays.<br>
<br>
Wei.<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Regards<br>
Peter<br>
_______________________________________________<br>
Xen-devel mailing list<br>
<a href="mailto:Xen-devel@lists.xen.org" target="_blank">Xen-devel@lists.xen.org</a><br>
<a href="http://lists.xen.org/xen-devel" target="_blank">http://lists.xen.org/xen-devel</a><br>
</blockquote>
<br>
</blockquote>
<br>
</div></div></blockquote></div><br></div>
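[Editor's note: the pci=nomsi workaround mentioned above is a guest kernel command-line parameter. One common way to pass it to a domU is through the guest's xl configuration file; this fragment is an illustrative sketch based on the standard xl.cfg `extra` option, not a setting taken from this thread.]

```
# domU .cfg sketch: append pci=nomsi to the guest kernel command line so
# virtio-serial keeps working on kernels >= 3.2 (per the report above)
extra = "pci=nomsi"
```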

--047d7b5d3d7ad1ca5004f21135eb--


--===============0653985584819691314==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0653985584819691314==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 22:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzeW-0004Gx-TX; Mon, 10 Feb 2014 22:48:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bmenges@gogrid.com>) id 1WCvYR-0007uF-Bl
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:26:03 +0000
Received: from [85.158.143.35:42790] by server-2.bemta-4.messagelabs.com id
	DE/D8-10891-AB919F25; Mon, 10 Feb 2014 18:26:02 +0000
X-Env-Sender: bmenges@gogrid.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392056759!4606165!1
X-Originating-IP: [216.93.160.25]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23515 invoked from network); 10 Feb 2014 18:26:01 -0000
Received: from smtp1.servepath.com (HELO smtp1.servepath.com) (216.93.160.25)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 10 Feb 2014 18:26:01 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=january; d=gogrid.com;
	h=Received:Received:From:To:Subject:Thread-Topic:Thread-Index:Date:Message-ID:Accept-Language:Content-Language:X-MS-Has-Attach:X-MS-TNEF-Correlator:x-originating-ip:Content-Type:MIME-Version;
	b=F1M0S4p/w5eotidXtJaG+8FbldLuqo53EuFBa8G8zZ3pHCCUacRiP3PBBJ9GOLAwLGlnSrs2iAr+oh1/WYUdiHpMOZzyorMYWahD5naNrulvsiFaOs+Er6YHl5gB6GHf;
Received: from [192.168.6.220] (helo=ex-001-sfo.servepath.com)
	by smtp1.servepath.com with esmtp (Exim 4.68 (FreeBSD))
	(envelope-from <bmenges@gogrid.com>) id 1WCvYB-00029U-2x
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 10:26:00 -0800
Received: from EX-002-SFO.servepath.com ([169.254.2.202]) by
	ex-001-sfo.servepath.com ([169.254.1.228]) with mapi id 14.03.0123.003;
	Mon, 10 Feb 2014 10:13:04 -0800
From: Brian Menges <bmenges@gogrid.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: build xen 4.3 backport for wheezy
Thread-Index: Ac8mi7yUvYMzBKP8TbC0RLZqc1DA5Q==
Date: Mon, 10 Feb 2014 18:13:04 +0000
Message-ID: <F33FED1E326F7448A0623CC9BFA2D4F9188292@ex-002-sfo.servepath.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.3.1]
MIME-Version: 1.0
X-Mailman-Approved-At: Mon, 10 Feb 2014 22:48:34 +0000
Subject: [Xen-devel] build xen 4.3 backport for wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1121073818030779364=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1121073818030779364==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_"

--_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

I'm attempting to build 4.3 from Jessie (Debian) and I'm bumping into an interesting dependency:

[POC][root@debian xen-4.3.0]# debuild -us -uc -b -d
dpkg-buildpackage -rfakeroot -d -us -uc -b
dpkg-buildpackage: warning: using a gain-root-command while being root
dpkg-buildpackage: source package xen
dpkg-buildpackage: source version 4.3.0-3.1
dpkg-buildpackage: source changed by Brian Menges <bmenges>
dpkg-source --before-build xen-4.3.0
dpkg-buildpackage: host architecture amd64
fakeroot debian/rules clean
md5sum --check debian/control.md5sum --status || \
              /usr/bin/make -f debian/rules debian/control-real
make[1]: Entering directory `/root/wheezy-xen-4.3/xen-4.3.0'
debian/bin/gencontrol.py
Traceback (most recent call last):
  File "debian/bin/gencontrol.py", line 6, in <module>
    from debian_xen.debian import VersionXen
  File "/root/wheezy-xen-4.3/xen-4.3.0/debian/bin/../lib/python/debian_xen/__init__.py", line 19, in <module>
    _setup()
  File "/root/wheezy-xen-4.3/xen-4.3.0/debian/bin/../lib/python/debian_xen/__init__.py", line 16, in _setup
    raise RuntimeError("Can't find %s, please install the linux-support-%s package" % (support, version))
RuntimeError: Can't find /usr/src/linux-support-3.10-3, please install the linux-support-3.10-3 package
make[1]: *** [debian/control-real] Error 1
make[1]: Leaving directory `/root/wheezy-xen-4.3/xen-4.3.0'
make: *** [debian/control] Error 2
dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
debuild: fatal error at line 1357:
dpkg-buildpackage -rfakeroot -d -us -uc -b failed

It looks like the build scripts aren't detecting my installation of linux-support-3.12-0.bpo.1 and have a version lock on 3.10-3 (which isn't backported and is available only in Jessie, even though backports has the newest 3.12).

I tried looking into debian_xen/__init__.py and found the 'linux-support-%s' string; when I checked 'version' there, it was None. So I'm wondering where else this might be getting a frozen version from, and why it doesn't attempt to check for other, newer versions.
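[Editor's note: the failure above can be reproduced in isolation. The traceback shows the packaging probing a single hard-coded /usr/src/linux-support-&lt;version&gt; path and raising if it is absent, so an installed 3.12 backport is never considered. A minimal sketch of that behaviour follows; the function names and the more-forgiving `available_linux_support` helper are illustrative, reconstructed from the traceback rather than copied from the actual debian_xen source.]

```python
import os

def find_linux_support(version):
    """Mimic the packaging's probe: only the pinned kernel series is tried."""
    support = "/usr/src/linux-support-%s" % version
    if not os.path.isdir(support):
        # This mirrors the RuntimeError in the traceback above.
        raise RuntimeError(
            "Can't find %s, please install the linux-support-%s package"
            % (support, version))
    return support

def available_linux_support(base="/usr/src"):
    """What a more forgiving probe could do: list every installed series."""
    prefix = "linux-support-"
    try:
        entries = os.listdir(base)
    except OSError:
        return []
    return sorted(e[len(prefix):] for e in entries if e.startswith(prefix))
```

With only linux-support-3.12-0.bpo.1 installed, the first function still raises for "3.10-3", matching the traceback: the pinned version comes from the debian/ packaging metadata, not from what is actually installed.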

- Brian Menges
Principal Engineer, DevOps
GoGrid | ServePath | ColoServe | UpStream Networks


________________________________

The information contained in this message, and any attachments, may contain confidential and legally privileged material. It is solely for the use of the person or entity to which it is addressed. Any review, retransmission, dissemination, or action taken in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you receive this in error, please contact the sender and delete the material from any computer.

--_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<style>
<!--
@font-face
	{font-family:"Cambria Math"}
@font-face
	{font-family:Calibri}
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri","sans-serif"}
a:link, span.MsoHyperlink
	{color:#0563C1;
	text-decoration:underline}
a:visited, span.MsoHyperlinkFollowed
	{color:#954F72;
	text-decoration:underline}
span.EmailStyle17
	{font-family:"Calibri","sans-serif";
	color:windowtext}
.MsoChpDefault
	{font-family:"Calibri","sans-serif"}
@page WordSection1
	{margin:1.0in 1.0in 1.0in 1.0in}
div.WordSection1
	{}
-->
</style>
</head>
<body lang=3D"EN-US" link=3D"#0563C1" vlink=3D"#954F72">
<div class=3D"WordSection1">
<p class=3D"MsoNormal">I&#8217;m attempting to build 4.3 from Jessie (Debia=
n) and I&#8217;m bumping into an interesting dependency:</p>
<p class=3D"MsoNormal">&nbsp;</p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">[POC][root@debian xen-4.3.0]# debuild -us -uc -b -d</span>=
</p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage -rfakeroot -d -us -uc -b</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage: warning: using a gain-root-command whil=
e being root</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage: source package xen</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage: source version 4.3.0-3.1</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage: source changed by Brian Menges &lt;bmen=
ges&gt;</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-source --before-build xen-4.3.0</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage: host architecture amd64</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">fakeroot debian/rules clean</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">md5sum --check debian/control.md5sum --status || \</span><=
/p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp; /usr/bin/make -f debian/rules debian/control-real</spa=
n></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">make[1]: Entering directory `/root/wheezy-xen-4.3/xen-4.3.=
0'</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">debian/bin/gencontrol.py</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">Traceback (most recent call last):</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp; File &quot;debian/bin/gencontrol.py&quot;, line 6, =
in &lt;module&gt;</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp;&nbsp;&nbsp; from debian_xen.debian import VersionXe=
n</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp; File &quot;/root/wheezy-xen-4.3/xen-4.3.0/debian/bi=
n/../lib/python/debian_xen/__init__.py&quot;, line 19, in &lt;module&gt;</s=
pan></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp;&nbsp;&nbsp; _setup()</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp; File &quot;/root/wheezy-xen-4.3/xen-4.3.0/debian/bi=
n/../lib/python/debian_xen/__init__.py&quot;, line 16, in _setup</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">&nbsp;&nbsp;&nbsp; raise RuntimeError(&quot;Can't find %s,=
 please install the linux-support-%s package&quot; % (support, version))</s=
pan></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">RuntimeError: Can't find /usr/src/linux-support-3.10-3, pl=
ease install the linux-support-3.10-3 package</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">make[1]: *** [debian/control-real] Error 1</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">make[1]: Leaving directory `/root/wheezy-xen-4.3/xen-4.3.0=
'</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">make: *** [debian/control] Error 2</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage: error: fakeroot debian/rules clean gave=
 error exit status 2</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">debuild: fatal error at line 1357:</span></p>
<p class=3D"MsoNormal"><span style=3D"font-size:8.0pt; font-family:&quot;Co=
urier New&quot;">dpkg-buildpackage -rfakeroot -d -us -uc -b failed</span></=
p>
<p class=3D"MsoNormal">&nbsp;</p>
<p class=3D"MsoNormal">It looks like the build scripts aren&#8217;t detect=
ing my installation of linux-support-3.12-0.bpo.1 and have a version lock =
on 3.10-3 (which isn&#8217;t backported and is available only in Jessie, e=
ven though BPO has the newest 3.12).</p>
<p class=3D"MsoNormal">&nbsp;</p>
<p class=3D"MsoNormal">I tried looking into debian_xen/__init__.py and=
 found the &#8216;linux-support-%s&#8217; line; when I checked=
 &#8216;version&#8217; there, it was None. So I&#8217;m wondering where=
 else this might be getting a frozen version from instead of checking=
 for other or newer versions.</p>
<p class=3D"MsoNormal">&nbsp;</p>
<p class=3D"MsoNormal">- Brian Menges</p>
<p class=3D"MsoNormal">Principal Engineer, DevOps</p>
<p class=3D"MsoNormal">GoGrid | ServePath | ColoServe | UpStream Networks</=
p>
<p class=3D"MsoNormal">&nbsp;</p>
</div>
<br>
<hr>
</body>
</html>

--_000_F33FED1E326F7448A0623CC9BFA2D4F9188292ex002sfoservepath_--


--===============1121073818030779364==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1121073818030779364==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 22:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzeW-0004GT-Ck; Mon, 10 Feb 2014 22:48:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterxianggao@gmail.com>) id 1WCvGT-0006xB-0a
	for Xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 18:07:29 +0000
Received: from [85.158.139.211:19069] by server-1.bemta-5.messagelabs.com id
	0E/DA-12859-06519F25; Mon, 10 Feb 2014 18:07:28 +0000
X-Env-Sender: peterxianggao@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392055638!2959296!1
X-Originating-IP: [209.85.219.49]
X-SpamReason: No, hits=2.9 required=7.0 tests=BIZ_TLD,HTML_20_30,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31566 invoked from network); 10 Feb 2014 18:07:19 -0000
Received: from mail-oa0-f49.google.com (HELO mail-oa0-f49.google.com)
	(209.85.219.49)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:07:19 -0000
Received: by mail-oa0-f49.google.com with SMTP id i7so7780459oag.8
	for <Xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 10:07:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=4GwZ8MHp4oViCsetBkxydPLAb1jqGcCPGFZPRFJWgm0=;
	b=fCqzukxz31/LBzUNQP6lt84S1T6rldX7F6hUUVYcl+Mx0kqLNWarrZIIE/biaric3U
	EImcj9IA8LnDKTFMIgSJM8uey50NastCMdfU8EGq4874rWFsbc9jH1lyME3VZ28UDDxf
	maC6rDuHJV68EtkDeCMAcexmF/xz9yw8lauEbc+xUjG/5qdu/yk4UgLSOe4pePiRPLnK
	IBlk9s9wwbgwhlgokx14qdpIBocuzRl1qnn8TyxnMjTpa/KmS8fjjhqnmQUI+RtwDkmJ
	qYICtrePYhnvSUXpCudlzKGxfg7SfLpwnBOBUgM2a2srrKiyFj1hN+e7/WDLSQRO5IYH
	kMfg==
MIME-Version: 1.0
X-Received: by 10.60.146.194 with SMTP id te2mr28347144oeb.3.1392055638142;
	Mon, 10 Feb 2014 10:07:18 -0800 (PST)
Received: by 10.182.33.34 with HTTP; Mon, 10 Feb 2014 10:07:17 -0800 (PST)
In-Reply-To: <52F8B5C3.1020308@m2r.biz>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
	<20140210104223.GN15387@zion.uk.xensource.com>
	<52F8B5C3.1020308@m2r.biz>
Date: Mon, 10 Feb 2014 10:07:17 -0800
Message-ID: <CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
From: "Peter X. Gao" <peterxianggao@gmail.com>
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
X-Mailman-Approved-At: Mon, 10 Feb 2014 22:48:34 +0000
Cc: Xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	xen-users@lists.xen.org
Subject: Re: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0653985584819691314=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0653985584819691314==
Content-Type: multipart/alternative; boundary=047d7b5d3d7ad1ca5004f21135eb

--047d7b5d3d7ad1ca5004f21135eb
Content-Type: text/plain; charset=ISO-8859-1

Thanks for your reply. I am now using virtio-net and it seems to be
working. However, Intel DPDK also requires hugepages. When a DPDK
application initializes its hugepages, I get the following error. Do I
need to configure something in Xen to support hugepages?
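
(Editor's illustrative aside, not part of the original message: before
launching a DPDK application in the domU, one can check whether any
hugepages are actually reserved by parsing /proc/meminfo. The helper below
is a hypothetical sketch, not code from this thread or from DPDK.)

```python
# Hypothetical helper (not from this thread): parse the HugePages_* counters
# out of /proc/meminfo text so a DPDK launcher can bail out early when the
# guest has no hugepages reserved.
def parse_hugepages(meminfo_text):
    """Return a dict of hugepage-related counters (Hugepagesize in kB)."""
    info = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        key = key.strip()
        if key.startswith("HugePages_") or key == "Hugepagesize":
            # the first token after the colon is the numeric value
            info[key] = int(rest.split()[0])
    return info

# Sample /proc/meminfo excerpt (illustrative values)
sample = (
    "MemTotal:        4202496 kB\n"
    "HugePages_Total:     512\n"
    "HugePages_Free:      512\n"
    "Hugepagesize:       2048 kB\n"
)
counters = parse_hugepages(sample)
print(counters["HugePages_Total"], counters["Hugepagesize"])
```

In a real guest one would feed it open("/proc/meminfo").read() and, if
HugePages_Total is 0, reserve pages with sysctl vm.nr_hugepages before
mounting hugetlbfs; note, though, that the oops below happens inside the
hugetlb fault path itself, so reservation alone may not be the whole story.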



[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 3.2.0-58-generic (buildd@allspice) (gcc
version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubuntu SMP Tue Dec 3
17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)
[    0.000000] Command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
ip=:127.0.255.255::::eth0:dhcp iommu=soft
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
[    0.000000] ACPI in unprivileged domain disabled
[    0.000000] Released 0 pages of unused memory
[    0.000000] Set 0 page(s) to 1-1 mapping
[    0.000000] BIOS-provided physical RAM map:
[    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
[    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
[    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] DMI not present or invalid.
[    0.000000] No AGP bridge found
[    0.000000] last_pfn = 0x100800 max_arch_pfn = 0x400000000
[    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
[    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
[    0.000000] init_memory_mapping: 0000000100000000-0000000100800000
[    0.000000] RAMDISK: 02060000 - 045e3000
[    0.000000] NUMA turned off
[    0.000000] Faking a node at 0000000000000000-0000000100800000
[    0.000000] Initmem setup node 0 0000000000000000-0000000100800000
[    0.000000]   NODE_DATA [00000000ffff5000 - 00000000ffff9fff]
[    0.000000] Zone PFN ranges:
[    0.000000]   DMA      0x00000010 -> 0x00001000
[    0.000000]   DMA32    0x00001000 -> 0x00100000
[    0.000000]   Normal   0x00100000 -> 0x00100800
[    0.000000] Movable zone start PFN for each node
[    0.000000] early_node_map[2] active PFN ranges
[    0.000000]     0: 0x00000010 -> 0x000000a0
[    0.000000]     0: 0x00000100 -> 0x00100800
[    0.000000] SFI: Simple Firmware Interface v0.81
http://simplefirmware.org
[    0.000000] SMP: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] No local APIC present
[    0.000000] APIC: disable apic facility
[    0.000000] APIC: switched to apic NOOP
[    0.000000] PM: Registered nosave memory: 00000000000a0000 -
0000000000100000
[    0.000000] PCI: Warning: Cannot find a gap in the 32bit address range
[    0.000000] PCI: Unassigned devices with 32bit resource registers may
break!
[    0.000000] Allocating PCI resources starting at 100900000 (gap:
100900000:400000)
[    0.000000] Booting paravirtualized kernel on Xen
[    0.000000] Xen version: 4.2.1 (preserve-AD)
[    0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8
nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136 r8192
d23360 u262144
[    0.000000] Built 1 zonelists in Node order, mobility grouping on.
 Total pages: 1032084
[    0.000000] Policy zone: Normal
[    0.000000] Kernel command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
ip=:127.0.255.255::::eth0:dhcp iommu=soft
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Placing 64MB software IO TLB between ffff8800f7400000 -
ffff8800fb400000
[    0.000000] software IO TLB at phys 0xf7400000 - 0xfb400000
[    0.000000] Memory: 3988436k/4202496k available (6588k kernel code, 448k
absent, 213612k reserved, 6617k data, 924k init)
[    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
CPUs=8, Nodes=1
[    0.000000] Hierarchical RCU implementation.
[    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:16640 nr_irqs:336 16
[    0.000000] Console: colour dummy device 80x25
[    0.000000] console [tty0] enabled
[    0.000000] console [hvc0] enabled
[    0.000000] allocated 34603008 bytes of page_cgroup
[    0.000000] please try 'cgroup_disable=memory' option if you don't want
memory cgroups
[    0.000000] installing Xen timer for CPU 0
[    0.000000] Detected 2793.098 MHz processor.
[    0.004000] Calibrating delay loop (skipped), value calculated using
timer frequency.. 5586.19 BogoMIPS (lpj=11172392)
[    0.004000] pid_max: default: 32768 minimum: 301
[    0.004000] Security Framework initialized
[    0.004000] AppArmor: AppArmor initialized
[    0.004000] Yama: becoming mindful.
[    0.004000] Dentry cache hash table entries: 524288 (order: 10, 4194304
bytes)
[    0.004000] Inode-cache hash table entries: 262144 (order: 9, 2097152
bytes)
[    0.004000] Mount-cache hash table entries: 256
[    0.004000] Initializing cgroup subsys cpuacct
[    0.004000] Initializing cgroup subsys memory
[    0.004000] Initializing cgroup subsys devices
[    0.004000] Initializing cgroup subsys freezer
[    0.004000] Initializing cgroup subsys blkio
[    0.004000] Initializing cgroup subsys perf_event
[    0.004000] CPU: Physical Processor ID: 0
[    0.004000] CPU: Processor Core ID: 0
[    0.004000] SMP alternatives: switching to UP code
[    0.031040] ftrace: allocating 26602 entries in 105 pages
[    0.032055] cpu 0 spinlock event irq 17
[    0.032115] Performance Events: unsupported p6 CPU model 26 no PMU
driver, software events only.
[    0.032244] NMI watchdog disabled (cpu0): hardware events not enabled
[    0.032350] installing Xen timer for CPU 1
[    0.032363] cpu 1 spinlock event irq 23
[    0.032623] SMP alternatives: switching to SMP code
[    0.057953] NMI watchdog disabled (cpu1): hardware events not enabled
[    0.058085] installing Xen timer for CPU 2
[    0.058103] cpu 2 spinlock event irq 29
[    0.058542] NMI watchdog disabled (cpu2): hardware events not enabled
[    0.058696] installing Xen timer for CPU 3
[    0.058724] cpu 3 spinlock event irq 35
[    0.059115] NMI watchdog disabled (cpu3): hardware events not enabled
[    0.059227] installing Xen timer for CPU 4
[    0.059246] cpu 4 spinlock event irq 41
[    0.059423] NMI watchdog disabled (cpu4): hardware events not enabled
[    0.059544] installing Xen timer for CPU 5
[    0.059562] cpu 5 spinlock event irq 47
[    0.059724] NMI watchdog disabled (cpu5): hardware events not enabled
[    0.059833] installing Xen timer for CPU 6
[    0.059852] cpu 6 spinlock event irq 53
[    0.060003] NMI watchdog disabled (cpu6): hardware events not enabled
[    0.060037] installing Xen timer for CPU 7
[    0.060056] cpu 7 spinlock event irq 59
[    0.060209] NMI watchdog disabled (cpu7): hardware events not enabled
[    0.060243] Brought up 8 CPUs
[    0.060494] devtmpfs: initialized
[    0.061531] EVM: security.selinux
[    0.061537] EVM: security.SMACK64
[    0.061542] EVM: security.capability
[    0.061711] Grant table initialized
[    0.061711] print_constraints: dummy:
[    0.083057] RTC time: 165:165:165, date: 165/165/65
[    0.083093] NET: Registered protocol family 16
[    0.083159] Trying to unpack rootfs image as initramfs...
[    0.084665] PCI: setting up Xen PCI frontend stub
[    0.086003] bio: create slab <bio-0> at 0
[    0.086003] ACPI: Interpreter disabled.
[    0.086003] xen/balloon: Initialising balloon driver.
[    0.088136] xen-balloon: Initialising balloon driver.
[    0.088139] vgaarb: loaded
[    0.088184] i2c-core: driver [aat2870] using legacy suspend method
[    0.088192] i2c-core: driver [aat2870] using legacy resume method
[    0.088283] SCSI subsystem initialized
[    0.088341] usbcore: registered new interface driver usbfs
[    0.088341] usbcore: registered new interface driver hub
[    0.088341] usbcore: registered new device driver usb
[    0.088341] PCI: System does not support PCI
[    0.088341] PCI: System does not support PCI
[    0.088341] NetLabel: Initializing
[    0.088341] NetLabel:  domain hash size = 128
[    0.184026] NetLabel:  protocols = UNLABELED CIPSOv4
[    0.184051] NetLabel:  unlabeled traffic allowed by default
[    0.184159] Switching to clocksource xen
[    0.188203] Freeing initrd memory: 38412k freed
[    0.202280] AppArmor: AppArmor Filesystem Enabled
[    0.202308] pnp: PnP ACPI: disabled
[    0.205341] NET: Registered protocol family 2
[    0.205661] IP route cache hash table entries: 131072 (order: 8, 1048576
bytes)
[    0.207989] TCP established hash table entries: 524288 (order: 11,
8388608 bytes)
[    0.209497] TCP bind hash table entries: 65536 (order: 8, 1048576 bytes)
[    0.209644] TCP: Hash tables configured (established 524288 bind 65536)
[    0.209650] TCP reno registered
[    0.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)
[    0.209704] UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes)
[    0.209817] NET: Registered protocol family 1
[    0.210139] platform rtc_cmos: registered platform RTC device (no PNP
device found)
[    0.211002] audit: initializing netlink socket (disabled)
[    0.211015] type=2000 audit(1392055157.599:1): initialized
[    0.229178] HugeTLB registered 2 MB page size, pre-allocated 0 pages
[    0.230818] VFS: Disk quotas dquot_6.5.2
[    0.230873] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
[    0.231462] fuse init (API version 7.17)
[    0.231605] msgmni has been set to 7864
[    0.232267] Block layer SCSI generic (bsg) driver version 0.4 loaded
(major 253)
[    0.232382] io scheduler noop registered
[    0.232417] io scheduler deadline registered
[    0.232449] io scheduler cfq registered (default)
[    0.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[    0.232529] pciehp: PCI Express Hot Plug Controller Driver version: 0.4
[    0.233195] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
[    0.437179] Linux agpgart interface v0.103
[    0.439329] brd: module loaded
[    0.440557] loop: module loaded
[    0.442439] blkfront device/vbd/51714 num-ring-pages 1 nr_ents 32.
[    0.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents 32.
[    0.447233] blkfront: xvda2: flush diskcache: enabled
[    0.447810] Fixed MDIO Bus: probed
[    0.447856] tun: Universal TUN/TAP device driver, 1.6
[    0.447864] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[    0.447945] PPP generic driver version 2.4.2
[    0.448029] Initialising Xen virtual ethernet driver.
[    0.453923] blkfront: xvda1: flush diskcache: enabled
[    0.455000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[    0.455031] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
[    0.455048] uhci_hcd: USB Universal Host Controller Interface driver
[    0.455100] usbcore: registered new interface driver libusual
[    0.455134] i8042: PNP: No PS/2 controller found. Probing ports directly.
[    1.455791] i8042: No controller found
[    1.456071] mousedev: PS/2 mouse device common for all mice
[    1.496241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0
[    1.496489] rtc_cmos: probe of rtc_cmos failed with error -38
[    1.496624] device-mapper: uevent: version 1.0.3
...............
...............
...............
...............


[  135.957086] BUG: unable to handle kernel paging request at
ffff8800f36c0960
[  135.957105] IP: [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
[  135.957122] PGD 1c06067 PUD dd1067 PMD f6d067 PTE 80100000f36c0065
[  135.957134] Oops: 0003 [#1] SMP
[  135.957141] CPU 0
[  135.957144] Modules linked in: igb_uio(O) uio
[  135.957155]
[  135.957160] Pid: 659, comm: helloworld Tainted: G           O
3.2.0-58-generic #88-Ubuntu
[  135.957171] RIP: e030:[<ffffffff81008efe>]  [<ffffffff81008efe>]
xen_set_pte_at+0x3e/0x210
[  135.957183] RSP: e02b:ffff8800037ddc88  EFLAGS: 00010297
[  135.957189] RAX: 0000000000000000 RBX: 800000008c6000e7 RCX:
800000008c6000e7
[  135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI:
ffff880003044980
[  135.957205] RBP: ffff8800037ddcd8 R08: 0000000000000000 R09:
dead000000100100
[  135.957212] R10: dead000000200200 R11: 00007f4a64f7e02a R12:
ffffea0003c48000
[  135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15:
0000000000000001
[  135.957232] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
knlGS:0000000000000000
[  135.957241] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4:
0000000000002660
[  135.957255] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  135.957263] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  135.957271] Process helloworld (pid: 659, threadinfo ffff8800037dc000,
task ffff8800034c1700)
[  135.957279] Stack:
[  135.957283]  00007f4a65800000 ffff880003044980 dead000000200200
dead000000100100
[  135.957297]  0000000000000000 0000000000000000 ffffea0003c48000
800000008c6000e7
[  135.957310]  ffff8800030449ec 0000000000000001 ffff8800037ddd68
ffffffff81158453
[  135.957322] Call Trace:
[  135.957333]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  135.957342]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  135.957351]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  135.957361]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  135.957370]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  135.957378]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  135.957391]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  135.957399]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  135.957408]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  135.957417]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48 89 7d b8
48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83 f8 01 74 75
<49> 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0 4c 8b
[  135.957507] RIP  [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
[  135.957517]  RSP <ffff8800037ddc88>
[  135.957521] CR2: ffff8800f36c0960
[  135.957528] ---[ end trace f6a013072f2aee83 ]---
[  160.032062] BUG: soft lockup - CPU#0 stuck for 23s! [helloworld:659]
[  160.032129] Modules linked in: igb_uio(O) uio
[  160.032140] CPU 0
[  160.032143] Modules linked in: igb_uio(O) uio
[  160.032153]
[  160.032159] Pid: 659, comm: helloworld Tainted: G      D    O
3.2.0-58-generic #88-Ubuntu
[  160.032170] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>]
hypercall_page+0x3aa/0x1000
[  160.032190] RSP: e02b:ffff8800037dd730  EFLAGS: 00000202
[  160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
ffffffff810013aa
[  160.032204] RDX: 0000000000000000 RSI: ffff8800037dd748 RDI:
0000000000000003
[  160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09:
ffff8800f6c000a0
[  160.032220] R10: 0000000000007ff0 R11: 0000000000000202 R12:
0000000000000011
[  160.032227] R13: 0000000000000201 R14: ffff880003044901 R15:
ffff880003044900
[  160.032239] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
knlGS:0000000000000000
[  160.032248] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[  160.032255] CR2: ffff8800f36c0960 CR3: 0000000001c05000 CR4:
0000000000002660
[  160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
0000000000000000
[  160.032271] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
0000000000000400
[  160.032279] Process helloworld (pid: 659, threadinfo ffff8800037dc000,
task ffff8800034c1700)
[  160.032287] Stack:
[  160.032291]  0000000000000011 00000000fffffffa ffffffff813adade
ffff8800037dd764
[  160.032304]  ffffffff00000001 0000000000000000 00000004813ad17e
ffff8800037dd778
[  160.032317]  ffff8800030449ec ffff8800037dd788 ffffffff813af5e0
ffff8800037dd7d8
[  160.032329] Call Trace:
[  160.032341]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
[  160.032350]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
[  160.032360]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
[  160.032370]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
[  160.032381]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
[  160.032390]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
[  160.032400]  [<ffffffff81146408>] exit_mmap+0x58/0x140
[  160.032408]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
[  160.032416]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
[  160.032425]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032433]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032442]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032452]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
[  160.032460]  [<ffffffff81065f39>] mmput+0x29/0x30
[  160.032470]  [<ffffffff8106c943>] exit_mm+0x113/0x130
[  160.032479]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
[  160.032488]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
[  160.032496]  [<ffffffff8106cace>] do_exit+0x16e/0x450
[  160.032504]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
[  160.032513]  [<ffffffff8164812f>] no_context+0x150/0x15d
[  160.032520]  [<ffffffff81648307>] __bad_area_nosemaphore+0x1cb/0x1ea
[  160.032529]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
[  160.032537]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
[  160.032545]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
[  160.032555]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
[  160.032564]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init+0x48/0x160
[  160.032575]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
[  160.032588]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
[  160.032597]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.032605]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
[  160.032613]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
[  160.032622]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  160.032630]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  160.032638]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  160.032648]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  160.032656]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  160.032664]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  160.032673]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  160.032681]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  160.032689]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  160.032697]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc
cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05
<41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[  160.032781] Call Trace:
[  160.032787]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
[  160.032795]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
[  160.032803]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
[  160.032811]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
[  160.032818]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
[  160.032826]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
[  160.032833]  [<ffffffff81146408>] exit_mmap+0x58/0x140
[  160.032841]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
[  160.032849]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
[  160.032857]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032866]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032874]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
[  160.032882]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
[  160.032889]  [<ffffffff81065f39>] mmput+0x29/0x30
[  160.032896]  [<ffffffff8106c943>] exit_mm+0x113/0x130
[  160.032904]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
[  160.032912]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
[  160.032920]  [<ffffffff8106cace>] do_exit+0x16e/0x450
[  160.032928]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
[  160.032935]  [<ffffffff8164812f>] no_context+0x150/0x15d
[  160.032943]  [<ffffffff81648307>] __bad_area_nosemaphore+0x1cb/0x1ea
[  160.032951]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
[  160.032959]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
[  160.032967]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
[  160.032975]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
[  160.036054]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init+0x48/0x160
[  160.036054]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
[  160.036054]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
[  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
[  160.036054]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
[  160.036054]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
[  160.036054]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
[  160.036054]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
[  160.036054]  [<ffffffff810053b5>] ?
__raw_callee_save_xen_pud_val+0x11/0x1e
[  160.036054]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
[  160.036054]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
[  160.036054]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
[  160.036054]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
[  160.036054]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
[  160.036054]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
[  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30



On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <fabio.fantoni@m2r.biz> wrote:

> Il 10/02/2014 11:42, Wei Liu ha scritto:
>
>  On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:
>>
>>> Hi,
>>>
>>>         I am new to Xen and I am trying to run Intel DPDK inside a domU
>>> with
>>> virtio on Xen 4.2. Is it possible to do this?
>>>
>>>
> Based on my virtio tests:
> - virtio-serial seems to work out of the box with Windows domUs (also
> with the Xen PV drivers). On Linux domUs with an old kernel (tested
> 2.6.32) it also works out of the box, but newer kernels (tested >=3.2)
> require pci=nomsi to work correctly (it also works with the Xen PVHVM
> drivers). For now I have not found a solution for the MSI problem; there
> are some posts about it.
> - virtio-net was working out of the box but is broken with recent qemu
> versions due to a qemu regression. I have narrowed it down with bisect
> (one commit between 4 Jul 2013 and 22 Jul 2013) but was unable to find
> the exact commit because there are other critical problems with Xen in
> that range.
> - I have not tested virtio-disk and don't know whether it works with
> recent Xen and qemu versions.
>
>
>  DPDK doesn't seem to be tightly coupled with VirtIO, does it?
>>
>> Could you look at Xen's PV network protocol instead? VirtIO has no
>> mainline support on Xen while Xen's PV protocol has been in mainline for
>> years. And it's very likely to be enabled by default nowadays.
>>
>> Wei.
>>
>>  Regards
>>> Peter
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>>
>
>
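
(Editor's illustrative note, not from the thread: the pci=nomsi workaround
Fabio mentions is normally applied on the guest kernel command line. For a
PV guest booted with xl, that might look like the fragment below; the
kernel/initrd paths are placeholders, not values from this thread.)

```
# Hypothetical xl guest config fragment -- paths are placeholders
kernel  = "/path/to/vmlinuz"
ramdisk = "/path/to/initrd.img"
extra   = "root=/dev/xvda2 ro pci=nomsi"
```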

--047d7b5d3d7ad1ca5004f21135eb
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Thanks for your reply. I am now using virtio-net and it s=
eems to be working. However, Intel DPDK also requires hugepages. When a DP=
DK application initializes its hugepages, I get the following error. Do I =
need to configure something in Xen to support hugepages?<div>
<br></div><div><br></div><div><br></div><div><div>[ =A0 =A00.000000] Initia=
lizing cgroup subsys cpuset</div><div>[ =A0 =A00.000000] Initializing cgrou=
p subsys cpu</div><div>[ =A0 =A00.000000] Linux version 3.2.0-58-generic (b=
uildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubu=
ntu SMP Tue Dec 3 17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)</di=
v>
<div>[ =A0 =A00.000000] Command line: root=3D/dev/xvda2 ro root=3D/dev/xvda=
2 ro ip=3D:127.0.255.255::::eth0:dhcp iommu=3Dsoft</div><div>[ =A0 =A00.000=
000] KERNEL supported cpus:</div><div>[ =A0 =A00.000000] =A0 Intel GenuineI=
ntel</div><div>[ =A0 =A00.000000] =A0 AMD AuthenticAMD</div>
<div>[ =A0 =A00.000000] =A0 Centaur CentaurHauls</div><div>[ =A0 =A00.00000=
0] ACPI in unprivileged domain disabled</div><div>[ =A0 =A00.000000] Releas=
ed 0 pages of unused memory</div><div>[ =A0 =A00.000000] Set 0 page(s) to 1=
-1 mapping</div>
<div>[ =A0 =A00.000000] BIOS-provided physical RAM map:</div><div>[ =A0 =A0=
0.000000] =A0Xen: 0000000000000000 - 00000000000a0000 (usable)</div><div>[ =
=A0 =A00.000000] =A0Xen: 00000000000a0000 - 0000000000100000 (reserved)</di=
v><div>[ =A0 =A00.000000] =A0Xen: 0000000000100000 - 0000000100800000 (usab=
le)</div>
<div>[ =A0 =A00.000000] NX (Execute Disable) protection: active</div><div>[=
 =A0 =A00.000000] DMI not present or invalid.</div><div>[ =A0 =A00.000000] =
No AGP bridge found</div><div>[ =A0 =A00.000000] last_pfn =3D 0x100800 max_=
arch_pfn =3D 0x400000000</div>
<div>[ =A0 =A00.000000] last_pfn =3D 0x100000 max_arch_pfn =3D 0x400000000<=
/div><div>[ =A0 =A00.000000] init_memory_mapping: 0000000000000000-00000001=
00000000</div><div>[ =A0 =A00.000000] init_memory_mapping: 0000000100000000=
-0000000100800000</div>
<div>[ =A0 =A00.000000] RAMDISK: 02060000 - 045e3000</div><div>[ =A0 =A00.0=
00000] NUMA turned off</div><div>[ =A0 =A00.000000] Faking a node at 000000=
0000000000-0000000100800000</div><div>[ =A0 =A00.000000] Initmem setup node=
 0 0000000000000000-0000000100800000</div>
<div>[ =A0 =A00.000000] =A0 NODE_DATA [00000000ffff5000 - 00000000ffff9fff]=
</div><div>[ =A0 =A00.000000] Zone PFN ranges:</div><div>[ =A0 =A00.000000]=
 =A0 DMA =A0 =A0 =A00x00000010 -&gt; 0x00001000</div><div>[ =A0 =A00.000000=
] =A0 DMA32 =A0 =A00x00001000 -&gt; 0x00100000</div>
<div>[ =A0 =A00.000000] =A0 Normal =A0 0x00100000 -&gt; 0x00100800</div><di=
v>[ =A0 =A00.000000] Movable zone start PFN for each node</div><div>[ =A0 =
=A00.000000] early_node_map[2] active PFN ranges</div><div>[ =A0 =A00.00000=
0] =A0 =A0 0: 0x00000010 -&gt; 0x000000a0</div>
<div>[ =A0 =A00.000000] =A0 =A0 0: 0x00000100 -&gt; 0x00100800</div><div>[ =
=A0 =A00.000000] SFI: Simple Firmware Interface v0.81 <a href=3D"http://sim=
plefirmware.org">http://simplefirmware.org</a></div><div>[ =A0 =A00.000000]=
 SMP: Allowing 8 CPUs, 0 hotplug CPUs</div>
<div>[ =A0 =A00.000000] No local APIC present</div><div>[ =A0 =A00.000000] =
APIC: disable apic facility</div><div>[ =A0 =A00.000000] APIC: switched to =
apic NOOP</div><div>[ =A0 =A00.000000] PM: Registered nosave memory: 000000=
00000a0000 - 0000000000100000</div>
<div>[ =A0 =A00.000000] PCI: Warning: Cannot find a gap in the 32bit addres=
s range</div><div>[ =A0 =A00.000000] PCI: Unassigned devices with 32bit res=
ource registers may break!</div><div>[ =A0 =A00.000000] Allocating PCI reso=
urces starting at 100900000 (gap: 100900000:400000)</div>
<div>[ =A0 =A00.000000] Booting paravirtualized kernel on Xen</div><div>[ =
=A0 =A00.000000] Xen version: 4.2.1 (preserve-AD)</div><div>[ =A0 =A00.0000=
00] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256 nr_cpu_ids:8 nr_node_ids:=
1</div><div>
[ =A0 =A00.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136 r=
8192 d23360 u262144</div><div>[ =A0 =A00.000000] Built 1 zonelists in Node =
order, mobility grouping on. =A0Total pages: 1032084</div><div>[ =A0 =A00.0=
00000] Policy zone: Normal</div>
<div>[ =A0 =A00.000000] Kernel command line: root=3D/dev/xvda2 ro root=3D/d=
ev/xvda2 ro ip=3D:127.0.255.255::::eth0:dhcp iommu=3Dsoft</div><div>[ =A0 =
=A00.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)</div><div=
>[ =A0 =A00.000000] Placing 64MB software IO TLB between ffff8800f7400000 -=
 ffff8800fb400000</div>
<div>[ =A0 =A00.000000] software IO TLB at phys 0xf7400000 - 0xfb400000</di=
v><div>[ =A0 =A00.000000] Memory: 3988436k/4202496k available (6588k kernel=
 code, 448k absent, 213612k reserved, 6617k data, 924k init)</div><div>[ =
=A0 =A00.000000] SLUB: Genslabs=3D15, HWalign=3D64, Order=3D0-3, MinObjects=
=3D0, CPUs=3D8, Nodes=3D1</div>
<div>[ =A0 =A00.000000] Hierarchical RCU implementation.</div><div>[ =A0 =
=A00.000000] <span class=3D"" style=3D"white-space:pre">	</span>RCU dyntick=
-idle grace-period acceleration is enabled.</div><div>[ =A0 =A00.000000] NR=
_IRQS:16640 nr_irqs:336 16</div>
<div>[ =A0 =A00.000000] Console: colour dummy device 80x25</div><div>[ =A0 =
=A00.000000] console [tty0] enabled</div><div>[ =A0 =A00.000000] console [h=
vc0] enabled</div><div>[ =A0 =A00.000000] allocated 34603008 bytes of page_=
cgroup</div>
<div>[ =A0 =A00.000000] please try &#39;cgroup_disable=3Dmemory&#39; option=
 if you don&#39;t want memory cgroups</div><div>[ =A0 =A00.000000] installi=
ng Xen timer for CPU 0</div><div>[ =A0 =A00.000000] Detected 2793.098 MHz p=
rocessor.</div>
<div>[ =A0 =A00.004000] Calibrating delay loop (skipped), value calculated =
using timer frequency.. 5586.19 BogoMIPS (lpj=3D11172392)</div><div>[ =A0 =
=A00.004000] pid_max: default: 32768 minimum: 301</div><div>[ =A0 =A00.0040=
00] Security Framework initialized</div>
<div>[ =A0 =A00.004000] AppArmor: AppArmor initialized</div><div>[ =A0 =A00=
.004000] Yama: becoming mindful.</div><div>[ =A0 =A00.004000] Dentry cache =
hash table entries: 524288 (order: 10, 4194304 bytes)</div><div>[ =A0 =A00.=
004000] Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes)</d=
iv>
<div>[ =A0 =A00.004000] Mount-cache hash table entries: 256</div><div>[ =A0=
 =A00.004000] Initializing cgroup subsys cpuacct</div><div>[ =A0 =A00.00400=
0] Initializing cgroup subsys memory</div><div>[ =A0 =A00.004000] Initializ=
ing cgroup subsys devices</div>
<div>[ =A0 =A00.004000] Initializing cgroup subsys freezer</div><div>[ =A0 =
=A00.004000] Initializing cgroup subsys blkio</div><div>[ =A0 =A00.004000] =
Initializing cgroup subsys perf_event</div><div>[ =A0 =A00.004000] CPU: Phy=
sical Processor ID: 0</div>
<div>[ =A0 =A00.004000] CPU: Processor Core ID: 0</div><div>[ =A0 =A00.0040=
00] SMP alternatives: switching to UP code</div><div>[ =A0 =A00.031040] ftr=
ace: allocating 26602 entries in 105 pages</div><div>[ =A0 =A00.032055] cpu=
 0 spinlock event irq 17</div>
<div>[ =A0 =A00.032115] Performance Events: unsupported p6 CPU model 26 no =
PMU driver, software events only.</div><div>[ =A0 =A00.032244] NMI watchdog=
 disabled (cpu0): hardware events not enabled</div><div>[ =A0 =A00.032350] =
installing Xen timer for CPU 1</div>
<div>[ =A0 =A00.032363] cpu 1 spinlock event irq 23</div><div>[ =A0 =A00.03=
2623] SMP alternatives: switching to SMP code</div><div>[ =A0 =A00.057953] =
NMI watchdog disabled (cpu1): hardware events not enabled</div><div>[ =A0 =
=A00.058085] installing Xen timer for CPU 2</div>
<div>[ =A0 =A00.058103] cpu 2 spinlock event irq 29</div><div>[ =A0 =A00.05=
8542] NMI watchdog disabled (cpu2): hardware events not enabled</div><div>[=
 =A0 =A00.058696] installing Xen timer for CPU 3</div><div>[ =A0 =A00.05872=
4] cpu 3 spinlock event irq 35</div>
<div>[ =A0 =A00.059115] NMI watchdog disabled (cpu3): hardware events not e=
nabled</div><div>[ =A0 =A00.059227] installing Xen timer for CPU 4</div><di=
v>[ =A0 =A00.059246] cpu 4 spinlock event irq 41</div><div>[ =A0 =A00.05942=
3] NMI watchdog disabled (cpu4): hardware events not enabled</div>
<div>[ =A0 =A00.059544] installing Xen timer for CPU 5</div><div>[ =A0 =A00=
.059562] cpu 5 spinlock event irq 47</div><div>[ =A0 =A00.059724] NMI watch=
dog disabled (cpu5): hardware events not enabled</div><div>[ =A0 =A00.05983=
3] installing Xen timer for CPU 6</div>
<div>[ =A0 =A00.059852] cpu 6 spinlock event irq 53</div><div>[ =A0 =A00.06=
0003] NMI watchdog disabled (cpu6): hardware events not enabled</div><div>[=
 =A0 =A00.060037] installing Xen timer for CPU 7</div><div>[ =A0 =A00.06005=
6] cpu 7 spinlock event irq 59</div>
<div>[ =A0 =A00.060209] NMI watchdog disabled (cpu7): hardware events not e=
nabled</div><div>[ =A0 =A00.060243] Brought up 8 CPUs</div><div>[ =A0 =A00.=
060494] devtmpfs: initialized</div><div>[ =A0 =A00.061531] EVM: security.se=
linux</div><div>
[ =A0 =A00.061537] EVM: security.SMACK64</div><div>[ =A0 =A00.061542] EVM: =
security.capability</div><div>[ =A0 =A00.061711] Grant table initialized</d=
iv><div>[ =A0 =A00.061711] print_constraints: dummy:=A0</div><div>[ =A0 =A0=
0.083057] RTC time: 165:165:165, date: 165/165/65</div>
<div>[ =A0 =A00.083093] NET: Registered protocol family 16</div><div>[ =A0 =
=A00.083159] Trying to unpack rootfs image as initramfs...</div><div>[ =A0 =
=A00.084665] PCI: setting up Xen PCI frontend stub</div><div>[ =A0 =A00.086=
003] bio: create slab &lt;bio-0&gt; at 0</div>
<div>[ =A0 =A00.086003] ACPI: Interpreter disabled.</div><div>[ =A0 =A00.08=
6003] xen/balloon: Initialising balloon driver.</div><div>[ =A0 =A00.088136=
] xen-balloon: Initialising balloon driver.</div><div>[ =A0 =A00.088139] vg=
aarb: loaded</div>
<div>[ =A0 =A00.088184] i2c-core: driver [aat2870] using legacy suspend met=
hod</div><div>[ =A0 =A00.088192] i2c-core: driver [aat2870] using legacy re=
sume method</div><div>[ =A0 =A00.088283] SCSI subsystem initialized</div><d=
iv>[ =A0 =A00.088341] usbcore: registered new interface driver usbfs</div>
<div>[ =A0 =A00.088341] usbcore: registered new interface driver hub</div><=
div>[ =A0 =A00.088341] usbcore: registered new device driver usb</div><div>=
[ =A0 =A00.088341] PCI: System does not support PCI</div><div>[ =A0 =A00.08=
8341] PCI: System does not support PCI</div>
<div>[ =A0 =A00.088341] NetLabel: Initializing</div><div>[ =A0 =A00.088341]=
 NetLabel: =A0domain hash size =3D 128</div><div>[ =A0 =A00.184026] NetLabe=
l: =A0protocols =3D UNLABELED CIPSOv4</div><div>[ =A0 =A00.184051] NetLabel=
: =A0unlabeled traffic allowed by default</div>
<div>[ =A0 =A00.184159] Switching to clocksource xen</div><div>[ =A0 =A00.1=
88203] Freeing initrd memory: 38412k freed</div><div>[ =A0 =A00.202280] App=
Armor: AppArmor Filesystem Enabled</div><div>[ =A0 =A00.202308] pnp: PnP AC=
PI: disabled</div>
<div>[ =A0 =A00.205341] NET: Registered protocol family 2</div><div>[ =A0 =
=A00.205661] IP route cache hash table entries: 131072 (order: 8, 1048576 b=
ytes)</div><div>[ =A0 =A00.207989] TCP established hash table entries: 5242=
88 (order: 11, 8388608 bytes)</div>
<div>[ =A0 =A00.209497] TCP bind hash table entries: 65536 (order: 8, 10485=
76 bytes)</div><div>[ =A0 =A00.209644] TCP: Hash tables configured (establi=
shed 524288 bind 65536)</div><div>[ =A0 =A00.209650] TCP reno registered</d=
iv><div>
[ =A0 =A00.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)</di=
v><div>[ =A0 =A00.209704] UDP-Lite hash table entries: 2048 (order: 4, 6553=
6 bytes)</div><div>[ =A0 =A00.209817] NET: Registered protocol family 1</di=
v><div>[ =A0 =A00.210139] platform rtc_cmos: registered platform RTC device=
 (no PNP device found)</div>
<div>[ =A0 =A00.211002] audit: initializing netlink socket (disabled)</div>=
<div>[ =A0 =A00.211015] type=3D2000 audit(1392055157.599:1): initialized</d=
iv><div>[ =A0 =A00.229178] HugeTLB registered 2 MB page size, pre-allocated=
 0 pages</div>
<div>[ =A0 =A00.230818] VFS: Disk quotas dquot_6.5.2</div><div>[ =A0 =A00.2=
30873] Dquot-cache hash table entries: 512 (order 0, 4096 bytes)</div><div>=
[ =A0 =A00.231462] fuse init (API version 7.17)</div><div>[ =A0 =A00.231605=
] msgmni has been set to 7864</div>
<div>[ =A0 =A00.232267] Block layer SCSI generic (bsg) driver version 0.4 l=
oaded (major 253)</div><div>[ =A0 =A00.232382] io scheduler noop registered=
</div><div>[ =A0 =A00.232417] io scheduler deadline registered</div><div>[ =
=A0 =A00.232449] io scheduler cfq registered (default)</div>
<div>[ =A0 =A00.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5</di=
v><div>[ =A0 =A00.232529] pciehp: PCI Express Hot Plug Controller Driver ve=
rsion: 0.4</div><div>[ =A0 =A00.233195] Serial: 8250/16550 driver, 32 ports=
, IRQ sharing enabled</div>
<div>[ =A0 =A00.437179] Linux agpgart interface v0.103</div><div>[ =A0 =A00=
.439329] brd: module loaded</div><div>[ =A0 =A00.440557] loop: module loade=
d</div><div>[ =A0 =A00.442439] blkfront device/vbd/51714 num-ring-pages 1 n=
r_ents 32.</div>
<div>[ =A0 =A00.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents =
32.</div><div>[ =A0 =A00.447233] blkfront: xvda2: flush diskcache: enabled<=
/div><div>[ =A0 =A00.447810] Fixed MDIO Bus: probed</div><div>[ =A0 =A00.44=
7856] tun: Universal TUN/TAP device driver, 1.6</div>
<div>[ =A0 =A00.447864] tun: (C) 1999-2004 Max Krasnyansky &lt;<a href=3D"m=
ailto:maxk@qualcomm.com">maxk@qualcomm.com</a>&gt;</div><div>[ =A0 =A00.447=
945] PPP generic driver version 2.4.2</div><div>[ =A0 =A00.448029] Initiali=
sing Xen virtual ethernet driver.</div>
<div>[ =A0 =A00.453923] blkfront: xvda1: flush diskcache: enabled</div><div=
>[ =A0 =A00.455000] ehci_hcd: USB 2.0 &#39;Enhanced&#39; Host Controller (E=
HCI) Driver</div><div>[ =A0 =A00.455031] ohci_hcd: USB 1.1 &#39;Open&#39; H=
ost Controller (OHCI) Driver</div>
<div>[ =A0 =A00.455048] uhci_hcd: USB Universal Host Controller Interface d=
river</div><div>[ =A0 =A00.455100] usbcore: registered new interface driver=
 libusual</div><div>[ =A0 =A00.455134] i8042: PNP: No PS/2 controller found=
. Probing ports directly.</div>
<div>[ =A0 =A01.455791] i8042: No controller found</div><div>[ =A0 =A01.456=
071] mousedev: PS/2 mouse device common for all mice</div><div>[ =A0 =A01.4=
96241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as rtc0</div><div>[=
 =A0 =A01.496489] rtc_cmos: probe of rtc_cmos failed with error -38</div>
<div>[ =A0 =A01.496624] device-mapper: uevent: version 1.0.3</div><div>....=
...........</div><div><div>...............</div></div><div><div>...........=
....</div></div><div><div>...............</div></div><div><br></div><div><b=
r>
</div><div>[ =A0135.957086] BUG: unable to handle kernel paging request at =
ffff8800f36c0960</div><div>[ =A0135.957105] IP: [&lt;ffffffff81008efe&gt;] =
xen_set_pte_at+0x3e/0x210</div><div>[ =A0135.957122] PGD 1c06067 PUD dd1067=
 PMD f6d067 PTE 80100000f36c0065</div>
<div>[ =A0135.957134] Oops: 0003 [#1] SMP=A0</div><div>[ =A0135.957141] CPU=
 0=A0</div><div>[ =A0135.957144] Modules linked in: igb_uio(O) uio</div><di=
v>[ =A0135.957155]=A0</div><div>[ =A0135.957160] Pid: 659, comm: helloworld=
 Tainted: G =A0 =A0 =A0 =A0 =A0 O 3.2.0-58-generic #88-Ubuntu =A0</div>
<div>[ =A0135.957171] RIP: e030:[&lt;ffffffff81008efe&gt;] =A0[&lt;ffffffff=
81008efe&gt;] xen_set_pte_at+0x3e/0x210</div><div>[ =A0135.957183] RSP: e02=
b:ffff8800037ddc88 =A0EFLAGS: 00010297</div><div>[ =A0135.957189] RAX: 0000=
000000000000 RBX: 800000008c6000e7 RCX: 800000008c6000e7</div>
<div>[ =A0135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI: ffff=
880003044980</div><div>[ =A0135.957205] RBP: ffff8800037ddcd8 R08: 00000000=
00000000 R09: dead000000100100</div><div>[ =A0135.957212] R10: dead00000020=
0200 R11: 00007f4a64f7e02a R12: ffffea0003c48000</div>
<div>[ =A0135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15: 0000=
000000000001</div><div>[ =A0135.957232] FS: =A000007f4a656e8800(0000) GS:ff=
ff8800ffc00000(0000) knlGS:0000000000000000</div><div>[ =A0135.957241] CS: =
=A0e033 DS: 0000 ES: 0000 CR0: 000000008005003b</div>
<div>[ =A0135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4: 0000=
000000002660</div><div>[ =A0135.957255] DR0: 0000000000000000 DR1: 00000000=
00000000 DR2: 0000000000000000</div><div>[ =A0135.957263] DR3: 000000000000=
0000 DR6: 00000000ffff0ff0 DR7: 0000000000000400</div>
<div>[ =A0135.957271] Process helloworld (pid: 659, threadinfo ffff8800037d=
c000, task ffff8800034c1700)</div><div>[ =A0135.957279] Stack:</div><div>[ =
=A0135.957283] =A000007f4a65800000 ffff880003044980 dead000000200200 dead00=
0000100100</div>
<div>[ =A0135.957297] =A00000000000000000 0000000000000000 ffffea0003c48000=
 800000008c6000e7</div><div>[ =A0135.957310] =A0ffff8800030449ec 0000000000=
000001 ffff8800037ddd68 ffffffff81158453</div><div>[ =A0135.957322] Call Tr=
ace:</div>
<div>[ =A0135.957333] =A0[&lt;ffffffff81158453&gt;] hugetlb_no_page+0x233/0=
x370</div><div>[ =A0135.957342] =A0[&lt;ffffffff8100640e&gt;] ? xen_pud_val=
+0xe/0x10</div><div>[ =A0135.957351] =A0[&lt;ffffffff810053b5&gt;] ? __raw_=
callee_save_xen_pud_val+0x11/0x1e</div>
<div>[ =A0135.957361] =A0[&lt;ffffffff8115883e&gt;] hugetlb_fault+0x1fe/0x3=
40</div><div>[ =A0135.957370] =A0[&lt;ffffffff81143e18&gt;] ? vma_link+0x88=
/0xe0</div><div>[ =A0135.957378] =A0[&lt;ffffffff81140a3c&gt;] handle_mm_fa=
ult+0x2ec/0x370</div>
<div>[ =A0135.957391] =A0[&lt;ffffffff816658be&gt;] do_page_fault+0x17e/0x5=
40</div><div>[ =A0135.957399] =A0[&lt;ffffffff81145af8&gt;] ? do_mmap_pgoff=
+0x348/0x360</div><div>[ =A0135.957408] =A0[&lt;ffffffff81145bf1&gt;] ? sys=
_mmap_pgoff+0xe1/0x230</div>
<div>[ =A0135.957417] =A0[&lt;ffffffff816624f5&gt;] page_fault+0x25/0x30</d=
iv><div>[ =A0135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48=
 89 7d b8 48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83 f8=
 01 74 75 &lt;49&gt; 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0 =
4c 8b=A0</div>
<div>[ =A0135.957507] RIP =A0[&lt;ffffffff81008efe&gt;] xen_set_pte_at+0x3e=
/0x210</div><div>[ =A0135.957517] =A0RSP &lt;ffff8800037ddc88&gt;</div><div=
>[ =A0135.957521] CR2: ffff8800f36c0960</div><div>[ =A0135.957528] ---[ end=
 trace f6a013072f2aee83 ]---</div>
<div>[ =A0160.032062] BUG: soft lockup - CPU#0 stuck for 23s! [helloworld:6=
59]</div><div>[ =A0160.032129] Modules linked in: igb_uio(O) uio</div><div>=
[ =A0160.032140] CPU 0=A0</div><div>[ =A0160.032143] Modules linked in: igb=
_uio(O) uio</div>
<div>[ =A0160.032153]=A0</div><div>[ =A0160.032159] Pid: 659, comm: hellowo=
rld Tainted: G =A0 =A0 =A0D =A0 =A0O 3.2.0-58-generic #88-Ubuntu =A0</div><=
div>[ =A0160.032170] RIP: e030:[&lt;ffffffff810013aa&gt;] =A0[&lt;ffffffff8=
10013aa&gt;] hypercall_page+0x3aa/0x1000</div>
<div>[ =A0160.032190] RSP: e02b:ffff8800037dd730 =A0EFLAGS: 00000202</div><=
div>[ =A0160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX: fffff=
fff810013aa</div><div>[ =A0160.032204] RDX: 0000000000000000 RSI: ffff88000=
37dd748 RDI: 0000000000000003</div>
<div>[ =A0160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09: ffff=
8800f6c000a0</div><div>[ =A0160.032220] R10: 0000000000007ff0 R11: 00000000=
00000202 R12: 0000000000000011</div><div>[ =A0160.032227] R13: 000000000000=
0201 R14: ffff880003044901 R15: ffff880003044900</div>
<div>[ =A0160.032239] FS: =A000007f4a656e8800(0000) GS:ffff8800ffc00000(000=
0) knlGS:0000000000000000</div><div>[ =A0160.032248] CS: =A0e033 DS: 0000 E=
S: 0000 CR0: 000000008005003b</div><div>[ =A0160.032255] CR2: ffff8800f36c0=
960 CR3: 0000000001c05000 CR4: 0000000000002660</div>
<div>[ =A0160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000=
000000000000</div><div>[ =A0160.032271] DR3: 0000000000000000 DR6: 00000000=
ffff0ff0 DR7: 0000000000000400</div><div>[ =A0160.032279] Process helloworl=
d (pid: 659, threadinfo ffff8800037dc000, task ffff8800034c1700)</div>
<div>[ =A0160.032287] Stack:</div><div>[ =A0160.032291] =A00000000000000011=
 00000000fffffffa ffffffff813adade ffff8800037dd764</div><div>[ =A0160.0323=
04] =A0ffffffff00000001 0000000000000000 00000004813ad17e ffff8800037dd778<=
/div><div>
[ =A0160.032317] =A0ffff8800030449ec ffff8800037dd788 ffffffff813af5e0 ffff=
8800037dd7d8</div><div>[ =A0160.032329] Call Trace:</div><div>[ =A0160.0323=
41] =A0[&lt;ffffffff813adade&gt;] ? xen_poll_irq_timeout+0x3e/0x50</div><di=
v>[ =A0160.032350] =A0[&lt;ffffffff813af5e0&gt;] xen_poll_irq+0x10/0x20</di=
v>
<div>[ =A0160.032360] =A0[&lt;ffffffff81646686&gt;] xen_spin_lock_slow+0x98=
/0xf4</div><div>[ =A0160.032370] =A0[&lt;ffffffff810124ba&gt;] xen_spin_loc=
k+0x4a/0x50</div><div>[ =A0160.032381] =A0[&lt;ffffffff81661d8e&gt;] _raw_s=
pin_lock+0xe/0x20</div>
<div>[ =A0160.032390] =A0[&lt;ffffffff81007d9a&gt;] xen_exit_mmap+0x2a/0x60=
</div><div>[ =A0160.032400] =A0[&lt;ffffffff81146408&gt;] exit_mmap+0x58/0x=
140</div><div>[ =A0160.032408] =A0[&lt;ffffffff8166275a&gt;] ? error_exit+0=
x2a/0x60</div>
<div>[ =A0160.032416] =A0[&lt;ffffffff8166227c&gt;] ? retint_restore_args+0=
x5/0x6</div><div>[ =A0160.032425] =A0[&lt;ffffffff8100132a&gt;] ? hypercall=
_page+0x32a/0x1000</div><div>[ =A0160.032433] =A0[&lt;ffffffff8100132a&gt;]=
 ? hypercall_page+0x32a/0x1000</div>
<div>[ =A0160.032442] =A0[&lt;ffffffff8100132a&gt;] ? hypercall_page+0x32a/=
0x1000</div><div>[ =A0160.032452] =A0[&lt;ffffffff81065e22&gt;] mmput.part.=
16+0x42/0x130</div><div>[ =A0160.032460] =A0[&lt;ffffffff81065f39&gt;] mmpu=
t+0x29/0x30</div>
<div>[ =A0160.032470] =A0[&lt;ffffffff8106c943&gt;] exit_mm+0x113/0x130</di=
v><div>[ =A0160.032479] =A0[&lt;ffffffff810e58c5&gt;] ? taskstats_exit+0x45=
/0x240</div><div>[ =A0160.032488] =A0[&lt;ffffffff81662075&gt;] ? _raw_spin=
_lock_irq+0x15/0x20</div>
<div>[ =A0160.032496] =A0[&lt;ffffffff8106cace&gt;] do_exit+0x16e/0x450</di=
v><div>[ =A0160.032504] =A0[&lt;ffffffff81662f20&gt;] oops_end+0xb0/0xf0</d=
iv><div>[ =A0160.032513] =A0[&lt;ffffffff8164812f&gt;] no_context+0x150/0x1=
5d</div>
<div>[ =A0160.032520] =A0[&lt;ffffffff81648307&gt;] __bad_area_nosemaphore+=
0x1cb/0x1ea</div><div>[ =A0160.032529] =A0[&lt;ffffffff816622ad&gt;] ? rest=
ore_args+0x30/0x30</div><div>[ =A0160.032537] =A0[&lt;ffffffff8164795b&gt;]=
 ? pte_offset_kernel+0xe/0x37</div>
<div>[ =A0160.032545] =A0[&lt;ffffffff81648339&gt;] bad_area_nosemaphore+0x=
13/0x15</div><div>[ =A0160.032555] =A0[&lt;ffffffff81665bab&gt;] do_page_fa=
ult+0x46b/0x540</div><div>[ =A0160.032564] =A0[&lt;ffffffff8115c3f8&gt;] ? =
mpol_shared_policy_init+0x48/0x160</div>
<div>[ =A0160.032575] =A0[&lt;ffffffff811667bd&gt;] ? kmem_cache_alloc+0x11=
d/0x140</div><div>[ =A0160.032588] =A0[&lt;ffffffff8126d5fb&gt;] ? hugetlbf=
s_alloc_inode+0x5b/0xa0</div><div>[ =A0160.032597] =A0[&lt;ffffffff816624f5=
&gt;] page_fault+0x25/0x30</div>
<div>[ =A0160.032605] =A0[&lt;ffffffff81008efe&gt;] ? xen_set_pte_at+0x3e/0=
x210</div><div>[ =A0160.032613] =A0[&lt;ffffffff81008ef9&gt;] ? xen_set_pte=
_at+0x39/0x210</div><div>[ =A0160.032622] =A0[&lt;ffffffff81158453&gt;] hug=
etlb_no_page+0x233/0x370</div>
<div>[ =A0160.032630] =A0[&lt;ffffffff8100640e&gt;] ? xen_pud_val+0xe/0x10<=
/div><div>[ =A0160.032638] =A0[&lt;ffffffff810053b5&gt;] ? __raw_callee_sav=
e_xen_pud_val+0x11/0x1e</div><div>[ =A0160.032648] =A0[&lt;ffffffff8115883e=
&gt;] hugetlb_fault+0x1fe/0x340</div>
<div>[ =A0160.032656] =A0[&lt;ffffffff81143e18&gt;] ? vma_link+0x88/0xe0</d=
iv><div>[ =A0160.032664] =A0[&lt;ffffffff81140a3c&gt;] handle_mm_fault+0x2e=
c/0x370</div><div>[ =A0160.032673] =A0[&lt;ffffffff816658be&gt;] do_page_fa=
ult+0x17e/0x540</div>
<div>[ =A0160.032681] =A0[&lt;ffffffff81145af8&gt;] ? do_mmap_pgoff+0x348/0=
x360</div><div>[ =A0160.032689] =A0[&lt;ffffffff81145bf1&gt;] ? sys_mmap_pg=
off+0xe1/0x230</div><div>[ =A0160.032697] =A0[&lt;ffffffff816624f5&gt;] pag=
e_fault+0x25/0x30</div>
<div>[ =A0160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc=
 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00=
 0f 05 &lt;41&gt; 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc =
cc=A0</div>
<div>[ =A0160.032781] Call Trace:</div><div>[ =A0160.032787] =A0[&lt;ffffff=
ff813adade&gt;] ? xen_poll_irq_timeout+0x3e/0x50</div><div>[ =A0160.032795]=
 =A0[&lt;ffffffff813af5e0&gt;] xen_poll_irq+0x10/0x20</div><div>[ =A0160.03=
2803] =A0[&lt;ffffffff81646686&gt;] xen_spin_lock_slow+0x98/0xf4</div>
<div>[ =A0160.032811] =A0[&lt;ffffffff810124ba&gt;] xen_spin_lock+0x4a/0x50=
</div><div>[ =A0160.032818] =A0[&lt;ffffffff81661d8e&gt;] _raw_spin_lock+0x=
e/0x20</div><div>[ =A0160.032826] =A0[&lt;ffffffff81007d9a&gt;] xen_exit_mm=
ap+0x2a/0x60</div>
<div>[ =A0160.032833] =A0[&lt;ffffffff81146408&gt;] exit_mmap+0x58/0x140</d=
iv><div>[ =A0160.032841] =A0[&lt;ffffffff8166275a&gt;] ? error_exit+0x2a/0x=
60</div><div>[ =A0160.032849] =A0[&lt;ffffffff8166227c&gt;] ? retint_restor=
e_args+0x5/0x6</div>
<div>[ =A0160.032857] =A0[&lt;ffffffff8100132a&gt;] ? hypercall_page+0x32a/=
0x1000</div><div>[ =A0160.032866] =A0[&lt;ffffffff8100132a&gt;] ? hypercall=
_page+0x32a/0x1000</div><div>[ =A0160.032874] =A0[&lt;ffffffff8100132a&gt;]=
 ? hypercall_page+0x32a/0x1000</div>
<div>[ =A0160.032882] =A0[&lt;ffffffff81065e22&gt;] mmput.part.16+0x42/0x13=
0</div><div>[ =A0160.032889] =A0[&lt;ffffffff81065f39&gt;] mmput+0x29/0x30<=
/div><div>[ =A0160.032896] =A0[&lt;ffffffff8106c943&gt;] exit_mm+0x113/0x13=
0</div><div>
[ =A0160.032904] =A0[&lt;ffffffff810e58c5&gt;] ? taskstats_exit+0x45/0x240<=
/div><div>[ =A0160.032912] =A0[&lt;ffffffff81662075&gt;] ? _raw_spin_lock_i=
rq+0x15/0x20</div><div>[ =A0160.032920] =A0[&lt;ffffffff8106cace&gt;] do_ex=
it+0x16e/0x450</div>
<div>[ =A0160.032928] =A0[&lt;ffffffff81662f20&gt;] oops_end+0xb0/0xf0</div=
><div>[ =A0160.032935] =A0[&lt;ffffffff8164812f&gt;] no_context+0x150/0x15d=
</div><div>[ =A0160.032943] =A0[&lt;ffffffff81648307&gt;] __bad_area_nosema=
phore+0x1cb/0x1ea</div>
<div>[ =A0160.032951] =A0[&lt;ffffffff816622ad&gt;] ? restore_args+0x30/0x3=
0</div><div>[ =A0160.032959] =A0[&lt;ffffffff8164795b&gt;] ? pte_offset_ker=
nel+0xe/0x37</div><div>[ =A0160.032967] =A0[&lt;ffffffff81648339&gt;] bad_a=
rea_nosemaphore+0x13/0x15</div>
<div>[ =A0160.032975] =A0[&lt;ffffffff81665bab&gt;] do_page_fault+0x46b/0x5=
40</div><div>[ =A0160.036054] =A0[&lt;ffffffff8115c3f8&gt;] ? mpol_shared_p=
olicy_init+0x48/0x160</div><div>[ =A0160.036054] =A0[&lt;ffffffff811667bd&g=
t;] ? kmem_cache_alloc+0x11d/0x140</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff8126d5fb&gt;] ? hugetlbfs_alloc_inode=
+0x5b/0xa0</div><div>[ =A0160.036054] =A0[&lt;ffffffff816624f5&gt;] page_fa=
ult+0x25/0x30</div><div>[ =A0160.036054] =A0[&lt;ffffffff81008efe&gt;] ? xe=
n_set_pte_at+0x3e/0x210</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff81008ef9&gt;] ? xen_set_pte_at+0x39/0=
x210</div><div>[ =A0160.036054] =A0[&lt;ffffffff81158453&gt;] hugetlb_no_pa=
ge+0x233/0x370</div><div>[ =A0160.036054] =A0[&lt;ffffffff8100640e&gt;] ? x=
en_pud_val+0xe/0x10</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff810053b5&gt;] ? __raw_callee_save_xen=
_pud_val+0x11/0x1e</div><div>[ =A0160.036054] =A0[&lt;ffffffff8115883e&gt;]=
 hugetlb_fault+0x1fe/0x340</div><div>[ =A0160.036054] =A0[&lt;ffffffff81143=
e18&gt;] ? vma_link+0x88/0xe0</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff81140a3c&gt;] handle_mm_fault+0x2ec/0=
x370</div><div>[ =A0160.036054] =A0[&lt;ffffffff816658be&gt;] do_page_fault=
+0x17e/0x540</div><div>[ =A0160.036054] =A0[&lt;ffffffff81145af8&gt;] ? do_=
mmap_pgoff+0x348/0x360</div>
<div>[ =A0160.036054] =A0[&lt;ffffffff81145bf1&gt;] ? sys_mmap_pgoff+0xe1/0=
x230</div><div>[ =A0160.036054] =A0[&lt;ffffffff816624f5&gt;] page_fault+0x=
25/0x30</div></div><div><br></div></div><div class=3D"gmail_extra"><br><br>=
<div class=3D"gmail_quote">
On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <span dir=3D"ltr">&lt;<a hre=
f=3D"mailto:fabio.fantoni@m2r.biz" target=3D"_blank">fabio.fantoni@m2r.biz<=
/a>&gt;</span> wrote:<br><blockquote class=3D"gmail_quote" style=3D"margin:=
0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Il 10/02/2014 11:42, Wei Liu ha scritto:<div class=3D""><br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
Hi,<br>
<br>
=A0 =A0 =A0 =A0 I am new to Xen and I am trying to run Intel DPDK inside a =
domU with<br>
virtio on Xen 4.2. Is it possible to do this?<br>
<br>
</blockquote></blockquote>
<br></div>
Based on my tests of virtio:<br>
- virtio-serial seems to work out of the box with Windows domUs (also with=
 the Xen PV drivers). On Linux domUs it also works out of the box with old=
 kernels (tested 2.6.32), but newer kernels (tested &gt;=3D3.2) require pc=
i=3Dnomsi to work correctly; it also works with the Xen PVHVM drivers. So =
far I have found no solution for the MSI problem; there are some posts abo=
ut it.<br>

- virtio-net used to work out of the box, but with recent qemu versions it=
 is broken due to a qemu regression. I narrowed it down with bisect to one=
 commit between 4 Jul 2013 and 22 Jul 2013, but I could not identify the e=
xact offending commit because there are other critical problems with Xen i=
n that range.<br>
- I have not tested virtio-disk, so I don't know whether it works with rec=
ent Xen and qemu versions.<div class=3D"HOEnZb"><div class=3D"h5"><br>
<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
DPDK doesn&#39;t seem to be tightly coupled with VirtIO, does it?<br>
<br>
Could you look at Xen&#39;s PV network protocol instead? VirtIO has no<br>
mainline support on Xen while Xen&#39;s PV protocol has been in mainline fo=
r<br>
years. And it&#39;s very likely to be enabled by default nowadays.<br>
<br>
Wei.<br>
<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">
Regards<br>
Peter<br>
______________________________<u></u>_________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org" target=3D"_blank">Xen-devel@list=
s.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
</blockquote>
<br>
______________________________<u></u>_________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org" target=3D"_blank">Xen-devel@list=
s.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
</blockquote>
<br>
</div></div></blockquote></div><br></div>

--047d7b5d3d7ad1ca5004f21135eb--


--===============0653985584819691314==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0653985584819691314==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 22:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 22:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzeX-0004Hc-AT; Mon, 10 Feb 2014 22:48:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WCvcL-0008AE-Fy
	for xen-devel@lists.xen.org; Mon, 10 Feb 2014 18:30:09 +0000
Received: from [85.158.143.35:44107] by server-1.bemta-4.messagelabs.com id
	FF/5D-31661-CAA19F25; Mon, 10 Feb 2014 18:30:04 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392057002!4611237!1
X-Originating-IP: [209.85.212.53]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11907 invoked from network); 10 Feb 2014 18:30:03 -0000
Received: from mail-vb0-f53.google.com (HELO mail-vb0-f53.google.com)
	(209.85.212.53)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 18:30:03 -0000
Received: by mail-vb0-f53.google.com with SMTP id p17so4956321vbe.40
	for <xen-devel@lists.xen.org>; Mon, 10 Feb 2014 10:30:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=Jm8r9bWx8zcX+k2esYFE8BxoDJ4u80OKvQqYGlK/xAE=;
	b=JLvZ69zD/aZmqbO4r8KHZYA/zgL5f+nhflQ6SpeAhYe5/TmeCkm4zUb5DyNn3rpSUK
	9LKe466wBo6awaO4UACOziN/1hn3LQUTSkqMsriikhLFiYwwTAJUhzOdiMAUjgSGpQTU
	6gkjXfJpf5wPet8oLvtKkrX2v26rm7sOqleuyaGZoMKTJvCLBXDHSFtNg59qkAqI8633
	Wco7E7ToD+M2nzRdUn7o9KVA0SI9wZcEAuE2Lyy7y/EAORHttJyMW2qKx5IoDnTBmn6M
	gk5QPCWPgBzE3dQrZtHj5pURZFgKbeGyco0L2AB/bTqFJRhveCQIapWCaHOz14cXyw8L
	mgYw==
X-Received: by 10.52.160.233 with SMTP id xn9mr314531vdb.48.1392057002400;
	Mon, 10 Feb 2014 10:30:02 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Mon, 10 Feb 2014 10:29:22 -0800 (PST)
In-Reply-To: <CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Mon, 10 Feb 2014 13:29:22 -0500
Message-ID: <CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
To: George Dunlap <dunlapg@umich.edu>
X-Mailman-Approved-At: Mon, 10 Feb 2014 22:48:34 +0000
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4102429627951789400=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4102429627951789400==
Content-Type: multipart/alternative; boundary=089e0160c3ae22a0dd04f21187a3

--089e0160c3ae22a0dd04f21187a3
Content-Type: text/plain; charset=ISO-8859-1

Thanks for the answers on the timeline.

When I start the HVM with the Broadcom adapter, I get this message back:
Parsing config from /etc/xen/ubuntu-hvm-1.cfg
libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
support reset from sysfs for PCI device 0000:05:00.0
libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
support reset from sysfs for PCI device 0000:05:00.1

However, the devices appear in the HVM.  Is this something that I should be
concerned about?
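[Editor's note: a quick way to see whether the kernel exposes the sysfs reset hook that libxl is warning about is to look for the per-device `reset` file yourself. This is a sketch; the device address 0000:05:00.0 is taken from the log above, and the helper name is hypothetical.]

```shell
# Check whether the kernel exposes a sysfs reset hook for a PCI device.
# libxl prints the warning above when it cannot reset via sysfs.
has_sysfs_reset() {
    # $1: sysfs directory of the device,
    #     e.g. /sys/bus/pci/devices/0000:05:00.0
    [ -f "$1/reset" ]
}

dev=/sys/bus/pci/devices/0000:05:00.0
if has_sysfs_reset "$dev"; then
    echo "$dev: kernel supports sysfs reset"
else
    echo "$dev: no sysfs reset file (matches the libxl warning)"
fi
```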



On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu> wrote:

> On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> <mikeneiderhauser@gmail.com> wrote:
> > Works like a charm.  I do not have physical access to the computer this
> > weekend to verify that the cards are isolated, but the HVM starts and
> > appears to be working well.
> >
> > When do you think Xen 4.4 will be released?  The article I read
> > mentioned it will be released in 2014 (hinting towards the end of
> > February).  I also read 'When it is ready.'
> >
> > Any timeline would be great.
>
> I'm afraid that's about all we can give. :-)  We've locked down
> development for 2 months now and are working on finding and fixing
> bugs.  If there are no more blocker bugs or other unforeseen delays,
> it should be out by the end of February.  But there are necessarily
> significant unknowns, so we can't make any promises.
>
>  -George
>

--089e0160c3ae22a0dd04f21187a3--


--===============4102429627951789400==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4102429627951789400==--


From xen-devel-bounces@lists.xen.org Mon Feb 10 23:08:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 23:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WCzxc-0005Rj-2N; Mon, 10 Feb 2014 23:08:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WCzxG-0005Re-SH
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 23:07:59 +0000
Received: from [193.109.254.147:19669] by server-6.bemta-14.messagelabs.com id
	EF/34-03396-ECB59F25; Mon, 10 Feb 2014 23:07:58 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392073674!3348935!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15030 invoked from network); 10 Feb 2014 23:07:56 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 23:07:56 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 10 Feb 2014 23:07:42 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,820,1384300800"; 
	d="scan'208,223";a="650407468"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.72])
	by fldsmtpi03.verizon.com with ESMTP; 10 Feb 2014 23:07:42 +0000
Message-ID: <52F95BBD.1080509@terremark.com>
Date: Mon, 10 Feb 2014 18:07:41 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Jan Beulich <JBeulich@suse.com>
References: <52E7D3BB02000078001179F5@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52E8F6B30200007800117E35@nat28.tlf.novell.com>
	<52F5587A.4010608@terremark.com>
	<52F89013020000780011A952@nat28.tlf.novell.com>
	<alpine.DEB.2.02.1402101502040.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402101502040.4373@kaball.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------080301060203090908010708"
Cc: xen-devel <xen-devel@lists.xenproject.org>, Don Slutz <dslutz@verizon.com>
Subject: Re: [Xen-devel] Xen 4.3.2-rc1 and 4.2.4-rc1 have been tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------080301060203090908010708
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 02/10/14 10:04, Stefano Stabellini wrote:
> On Mon, 10 Feb 2014, Jan Beulich wrote:
>>>>> On 07.02.14 at 23:04, Don Slutz <dslutz@verizon.com> wrote:
>>> On 01/29/14 06:40, Jan Beulich wrote:
>>>> All,
>>>>
>>>> aiming at releases with, as before, presumably just one more RC on
>>>> each of them, please test!
>>> Tested 4.3.2-rc1 on CentOS 5.10 and Fedora 17.
>>>
>>> CentOS 5.10 has a build issue with QEMU:
>>>
>>> http://lists.xen.org/archives/html/xen-devel/2014-02/msg00084.html
>> Is this a regression over 4.3.1?

Using

http://bits.xensource.com/oss-xen/release/4.3.1/xen-4.3.1.tar.gz

I get the same failure:

   lt LINK libcacard.la
/usr/bin/ld: libcacard/.libs/vcard.o: relocation R_X86_64_PC32 against `vcard_delete_applet' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: ld returned 1 exit status


So no, this is not a regression.



>>
>> In any event, it would be Stefano to take care of this, just like for
>> 4.4.
>>
> Don, would a backport of "configure: Disable libtool if -fPIE does not
> work with it (bug #1257099)" (027c412ff71ad8bff6e335cc7932857f4ea74391
> in qemu-upstream-unstable.git) fix the issue in 4.3 as well?  If so, I
> would rather do that than introduce a workaround.
>
>   

The backport of "configure: Disable libtool if -fPIE does not work with it
(bug #1257099)" (027c412ff71ad8bff6e335cc7932857f4ea74391 in
qemu-upstream-unstable.git) turns out not to be enough.  You also need to
backport "libcacard: require libtool to build it"
(b6fc675b25d32f018870e202eb4b2a6eb509f88b) to avoid the message:

libtool is missing, please install and rerun configure


I have attached my version of these two backports.

Using them, I am able to build Xen, and the resulting bits pass some simple
tests.
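[Editor's note: for anyone reproducing this, the two attached backports can be applied to the QEMU tree with git am before rebuilding. This is a command-line sketch; the checkout directory is an assumption, not something stated in the thread.]

```shell
# Apply the two backported patches to the QEMU tree used by the Xen build.
# The directory below is an assumption; use wherever qemu-xen was cloned.
cd tools/qemu-xen-dir
git am \
    0001-configure-Disable-libtool-if-fPIE-does-not-work-with.patch \
    0002-libcacard-require-libtool-to-build-it.patch
```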

    -Don Slutz


>>> Has more info, for this testing I changed:
>>>
>>>
>>> Author: Don Slutz <dslutz@verizon.com>
>>> Date:   Fri Jan 31 22:37:04 2014 +0000
>>>
>>>       Work around QEMU bug #1257099 on CentOS 5.10
>>>
>>> diff --git a/tools/Makefile b/tools/Makefile
>>> index e44a3e9..b411e60 100644
>>> --- a/tools/Makefile
>>> +++ b/tools/Makefile
>>> @@ -187,7 +187,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>>>                   source=.; \
>>>           fi; \
>>>           cd qemu-xen-dir; \
>>> -       $$source/configure --enable-xen --target-list=i386-softmmu \
>>> +       $$source/configure --enable-xen --target-list=i386-softmmu
>>> --disable-smartcard-nss\
>>>                   --prefix=$(PREFIX) \
>>>                   --source-path=$$source \
>>>                   --extra-cflags="-I$(XEN_ROOT)/tools/include \
>>>
>>> and was able to use the resulting build for some simple testing.  No new
>>> issues were found.
>>>        -Don Slutz
>>>
>>>> Thanks, Jan
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>
>>


--------------080301060203090908010708
Content-Type: text/x-patch;
 name="0001-configure-Disable-libtool-if-fPIE-does-not-work-with.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0001-configure-Disable-libtool-if-fPIE-does-not-work-with.pa";
 filename*1="tch"

>From 9a91ed949b0c5a7c3058bc12ac393018cbbd73e3 Mon Sep 17 00:00:00 2001
From: Don Slutz <dslutz@verizon.com>
Date: Sat, 14 Dec 2013 19:43:56 +0000
Subject: [PATCH 1/2] configure: Disable libtool if -fPIE does not work with it
 (bug #1257099)

Adjust TMPO and added TMPB, TMPL, and TMPA.  libtool needs the names
to be fixed (TMPB).

Add new functions do_libtool and libtool_prog.

Add check for broken gcc and libtool.

Signed-off-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 027c412ff71ad8bff6e335cc7932857f4ea74391)

Conflicts:
	configure
---
 configure | 63 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 62 insertions(+), 1 deletion(-)

diff --git a/configure b/configure
index 994f731..cc87f7b 100755
--- a/configure
+++ b/configure
@@ -12,7 +12,10 @@ else
 fi
 
 TMPC="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.c"
-TMPO="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.o"
+TMPB="qemu-conf-${RANDOM}-$$-${RANDOM}"
+TMPO="${TMPDIR1}/${TMPB}.o"
+TMPL="${TMPDIR1}/${TMPB}.lo"
+TMPA="${TMPDIR1}/lib${TMPB}.la"
 TMPE="${TMPDIR1}/qemu-conf-${RANDOM}-$$-${RANDOM}.exe"
 
 # NB: do not call "exit" in the trap handler; this is buggy with some shells;
@@ -63,6 +66,38 @@ compile_prog() {
   do_cc $QEMU_CFLAGS $local_cflags -o $TMPE $TMPC $LDFLAGS $local_ldflags
 }
 
+do_libtool() {
+    local mode=$1
+    shift
+    # Run the compiler, capturing its output to the log.
+    echo $libtool $mode --tag=CC $cc "$@" >> config.log
+    $libtool $mode --tag=CC $cc "$@" >> config.log 2>&1 || return $?
+    # Test passed. If this is an --enable-werror build, rerun
+    # the test with -Werror and bail out if it fails. This
+    # makes warning-generating-errors in configure test code
+    # obvious to developers.
+    if test "$werror" != "yes"; then
+        return 0
+    fi
+    # Don't bother rerunning the compile if we were already using -Werror
+    case "$*" in
+        *-Werror*)
+           return 0
+        ;;
+    esac
+    echo $libtool $mode --tag=CC $cc -Werror "$@" >> config.log
+    $libtool $mode --tag=CC $cc -Werror "$@" >> config.log 2>&1 && return $?
+    error_exit "configure test passed without -Werror but failed with -Werror." \
+        "This is probably a bug in the configure script. The failing command" \
+        "will be at the bottom of config.log." \
+        "You can run configure with --disable-werror to bypass this check."
+}
+
+libtool_prog() {
+    do_libtool --mode=compile $QEMU_CFLAGS -c -fPIE -DPIE -o $TMPO $TMPC || return $?
+    do_libtool --mode=link $LDFLAGS -o $TMPA $TMPL -rpath /usr/local/lib
+}
+
 # symbolically link $1 to $2.  Portable version of "ln -sf".
 symlink() {
   rm -rf "$2"
@@ -1249,6 +1284,32 @@ EOF
   fi
 fi
 
+# check for broken gcc and libtool in RHEL5
+if test -n "$libtool" -a "$pie" != "no" ; then
+  cat > $TMPC <<EOF
+
+void *f(unsigned char *buf, int len);
+void *g(unsigned char *buf, int len);
+
+void *
+f(unsigned char *buf, int len)
+{
+    return (void*)0L;
+}
+
+void *
+g(unsigned char *buf, int len)
+{
+    return f(buf, len);
+}
+
+EOF
+  if ! libtool_prog; then
+    echo "Disabling libtool due to broken toolchain support"
+    libtool=
+  fi
+fi
+
 #
 # Solaris specific configure tool chain decisions
 #
-- 
1.8.2.1


--------------080301060203090908010708
Content-Type: text/x-patch;
 name="0002-libcacard-require-libtool-to-build-it.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="0002-libcacard-require-libtool-to-build-it.patch"

>From 576072fdaf653aff7d708a0d940f14727d3d6a25 Mon Sep 17 00:00:00 2001
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Thu, 20 Dec 2012 20:40:35 +0100
Subject: [PATCH 2/2] libcacard: require libtool to build it

Do not fail at build time, instead just disable the library if libtool
is not present.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit b6fc675b25d32f018870e202eb4b2a6eb509f88b)

Conflicts:
	configure
	rules.mak
---
 Makefile           |  8 +-------
 configure          | 45 +++++++++++++++++++++++----------------------
 libcacard/Makefile |  8 --------
 rules.mak          |  6 ++----
 4 files changed, 26 insertions(+), 41 deletions(-)

diff --git a/Makefile b/Makefile
index 9ecbcbb..c8a35a0 100644
--- a/Makefile
+++ b/Makefile
@@ -168,14 +168,8 @@ libqemustub.a: $(stub-obj-y)
 ######################################################################
 # Support building shared library libcacard
 
+ifeq ($(CONFIG_SMARTCARD_NSS),y)
 .PHONY: libcacard.la install-libcacard
-ifeq ($(LIBTOOL),)
-libcacard.la:
-	@echo "libtool is missing, please install and rerun configure"; exit 1
-
-install-libcacard:
-	@echo "libtool is missing, please install and rerun configure"; exit 1
-else
 libcacard.la: $(oslib-obj-y) qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
 	$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" libcacard.la,)
 
diff --git a/configure b/configure
index cc87f7b..4f0d0e0 100755
--- a/configure
+++ b/configure
@@ -2832,29 +2832,30 @@ if test "$smartcard" != "no" ; then
 #include <pk11pub.h>
 int main(void) { PK11_FreeSlot(0); return 0; }
 EOF
-        smartcard_includes="-I\$(SRC_PATH)/libcacard"
-        libcacard_libs="$($pkg_config --libs nss 2>/dev/null) $glib_libs"
-        libcacard_cflags="$($pkg_config --cflags nss 2>/dev/null) $glib_cflags"
-        test_cflags="$libcacard_cflags"
-        # The header files in nss < 3.13.3 have a bug which causes them to
-        # emit a warning. If we're going to compile QEMU with -Werror, then
-        # test that the headers don't have this bug. Otherwise we would pass
-        # the configure test but fail to compile QEMU later.
-        if test "$werror" = "yes"; then
-            test_cflags="-Werror $test_cflags"
-        fi
-        if $pkg_config --atleast-version=3.12.8 nss >/dev/null 2>&1 && \
-          compile_prog "$test_cflags" "$libcacard_libs"; then
-            smartcard_nss="yes"
-            QEMU_CFLAGS="$QEMU_CFLAGS $libcacard_cflags"
-            QEMU_INCLUDES="$QEMU_INCLUDES $smartcard_includes"
-            libs_softmmu="$libcacard_libs $libs_softmmu"
-        else
-            if test "$smartcard_nss" = "yes"; then
-                feature_not_found "nss"
-            fi
-            smartcard_nss="no"
+    smartcard_includes="-I\$(SRC_PATH)/libcacard"
+    libcacard_libs="$($pkg_config --libs nss 2>/dev/null) $glib_libs"
+    libcacard_cflags="$($pkg_config --cflags nss 2>/dev/null) $glib_cflags"
+    test_cflags="$libcacard_cflags"
+    # The header files in nss < 3.13.3 have a bug which causes them to
+    # emit a warning. If we're going to compile QEMU with -Werror, then
+    # test that the headers don't have this bug. Otherwise we would pass
+    # the configure test but fail to compile QEMU later.
+    if test "$werror" = "yes"; then
+        test_cflags="-Werror $test_cflags"
+    fi
+    if test -n "$libtool" &&
+            $pkg_config --atleast-version=3.12.8 nss >/dev/null 2>&1 && \
+      compile_prog "$test_cflags" "$libcacard_libs"; then
+        smartcard_nss="yes"
+        QEMU_CFLAGS="$QEMU_CFLAGS $libcacard_cflags"
+        QEMU_INCLUDES="$QEMU_INCLUDES $smartcard_includes"
+        libs_softmmu="$libcacard_libs $libs_softmmu"
+    else
+        if test "$smartcard_nss" = "yes"; then
+            feature_not_found "nss"
         fi
+        smartcard_nss="no"
+    fi
     fi
 fi
 if test "$smartcard" = "no" ; then
diff --git a/libcacard/Makefile b/libcacard/Makefile
index c26aac6..2bb5aea 100644
--- a/libcacard/Makefile
+++ b/libcacard/Makefile
@@ -28,13 +28,6 @@ all: libcacard.la libcacard.pc
 #########################################################################
 # Rules for building libcacard standalone library
 
-ifeq ($(LIBTOOL),)
-libcacard.la:
-	@echo "libtool is missing, please install and rerun configure"; exit 1
-
-install-libcacard:
-	@echo "libtool is missing, please install and rerun configure"; exit 1
-else
 libcacard.la: $(libcacard.lib-y) $(QEMU_OBJS_LIB)
 	$(call quiet-command,$(LIBTOOL) --mode=link --quiet --tag=CC $(CC) -rpath $(libdir) -o $@ $^ $(libcacard_libs),"  lt LINK $@")
 
@@ -60,4 +53,3 @@ install-libcacard: libcacard.pc libcacard.la vscclient
 	for inc in *.h; do \
 		$(LIBTOOL) --mode=install $(INSTALL_DATA) $(libcacard_srcpath)/$$inc "$(DESTDIR)$(libcacard_includedir)"; \
 	done
-endif
diff --git a/rules.mak b/rules.mak
index d0b04e4..c41c08b 100644
--- a/rules.mak
+++ b/rules.mak
@@ -18,12 +18,10 @@ QEMU_DGFLAGS += -MMD -MP -MT $@ -MF $(*D)/$(*F).d
 	$(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  CC    $(TARGET_DIR)$@")
 
 ifeq ($(LIBTOOL),)
-%.lo: %.c
-	@echo "missing libtool. please install and rerun configure"; exit 1
-else
+LIBTOOL = /bin/false
+endif
 %.lo: %.c
 	$(call quiet-command,$(LIBTOOL) --mode=compile --quiet --tag=CC $(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  lt CC $@")
-endif
 
 %.o: %.S
 	$(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  AS    $(TARGET_DIR)$@")
-- 
1.8.2.1


--------------080301060203090908010708
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080301060203090908010708--


+    # test that the headers don't have this bug. Otherwise we would pass
+    # the configure test but fail to compile QEMU later.
+    if test "$werror" = "yes"; then
+        test_cflags="-Werror $test_cflags"
+    fi
+    if test -n "$libtool" &&
+            $pkg_config --atleast-version=3.12.8 nss >/dev/null 2>&1 && \
+      compile_prog "$test_cflags" "$libcacard_libs"; then
+        smartcard_nss="yes"
+        QEMU_CFLAGS="$QEMU_CFLAGS $libcacard_cflags"
+        QEMU_INCLUDES="$QEMU_INCLUDES $smartcard_includes"
+        libs_softmmu="$libcacard_libs $libs_softmmu"
+    else
+        if test "$smartcard_nss" = "yes"; then
+            feature_not_found "nss"
         fi
+        smartcard_nss="no"
+    fi
     fi
 fi
 if test "$smartcard" = "no" ; then
diff --git a/libcacard/Makefile b/libcacard/Makefile
index c26aac6..2bb5aea 100644
--- a/libcacard/Makefile
+++ b/libcacard/Makefile
@@ -28,13 +28,6 @@ all: libcacard.la libcacard.pc
 #########################################################################
 # Rules for building libcacard standalone library
 
-ifeq ($(LIBTOOL),)
-libcacard.la:
-	@echo "libtool is missing, please install and rerun configure"; exit 1
-
-install-libcacard:
-	@echo "libtool is missing, please install and rerun configure"; exit 1
-else
 libcacard.la: $(libcacard.lib-y) $(QEMU_OBJS_LIB)
 	$(call quiet-command,$(LIBTOOL) --mode=link --quiet --tag=CC $(CC) -rpath $(libdir) -o $@ $^ $(libcacard_libs),"  lt LINK $@")
 
@@ -60,4 +53,3 @@ install-libcacard: libcacard.pc libcacard.la vscclient
 	for inc in *.h; do \
 		$(LIBTOOL) --mode=install $(INSTALL_DATA) $(libcacard_srcpath)/$$inc "$(DESTDIR)$(libcacard_includedir)"; \
 	done
-endif
diff --git a/rules.mak b/rules.mak
index d0b04e4..c41c08b 100644
--- a/rules.mak
+++ b/rules.mak
@@ -18,12 +18,10 @@ QEMU_DGFLAGS += -MMD -MP -MT $@ -MF $(*D)/$(*F).d
 	$(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  CC    $(TARGET_DIR)$@")
 
 ifeq ($(LIBTOOL),)
-%.lo: %.c
-	@echo "missing libtool. please install and rerun configure"; exit 1
-else
+LIBTOOL = /bin/false
+endif
 %.lo: %.c
 	$(call quiet-command,$(LIBTOOL) --mode=compile --quiet --tag=CC $(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  lt CC $@")
-endif
 
 %.o: %.S
 	$(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  AS    $(TARGET_DIR)$@")
-- 
1.8.2.1
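
The rules.mak hunk above uses a common fallback idiom: when configure found no libtool, substitute /bin/false so that only the targets which actually invoke libtool fail, instead of the whole build aborting up front. A minimal stand-alone sketch of that idiom (hypothetical variable values, not part of the patch):

```shell
# Sketch of the fallback idiom from the rules.mak hunk above (illustrative,
# not part of the patch): default LIBTOOL to /bin/false when configure
# found none, so failure is deferred to the first rule that needs libtool.
LIBTOOL=""                      # simulate "configure found no libtool"
if [ -z "$LIBTOOL" ]; then
    LIBTOOL=/bin/false
fi

# A rule that needs libtool now fails only when it is actually invoked.
if "$LIBTOOL" --mode=compile true 2>/dev/null; then
    echo "libtool ran"
else
    echo "libtool missing: .lo rules fail only when used"
fi
```

The same deferral is what lets `make` succeed for configurations that never build libcacard.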


--------------080301060203090908010708
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------080301060203090908010708--


From xen-devel-bounces@lists.xen.org Mon Feb 10 23:29:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 23:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD0IH-0006pK-43; Mon, 10 Feb 2014 23:29:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WD0IF-0006pF-DT
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 23:29:39 +0000
Received: from [193.109.254.147:5947] by server-13.bemta-14.messagelabs.com id
	06/1F-01226-2E069F25; Mon, 10 Feb 2014 23:29:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392074977!3351253!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28166 invoked from network); 10 Feb 2014 23:29:37 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	10 Feb 2014 23:29:37 -0000
Received: by mail-we0-f176.google.com with SMTP id q58so4841705wes.35
	for <xen-devel@lists.xenproject.org>;
	Mon, 10 Feb 2014 15:29:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=P+mtEywRX+DftsGbOUbZQae//j9GzVhYuYLMpqvvs2U=;
	b=DyE9jpN4UnZ8rQL6xOmBBLAcb8PuFXMpq6qnHOOQ+iMbrLsU+R5ge1WDr7t+GTl1wH
	68Bq6R3T9De7HJhDfwP76dQgSjzMa4JphC1178PhdrJpPJHso3SZ2yA76YVJoLNlcWhU
	fXPT8hGSyguwv5EHKlTd77EQOoHUkIFVlbjrQRnFV0o7hIAbKzFewJwoXrghw0weYClU
	/wQAh2OsGSyVxGXJgVqfJPhhiJwVTTPeNwrR2PFmnIbBNS8ijrqo8zavTPX/KQ2+WePk
	65zGs7ufxbo24lUhJT3kJpo0c/BE10Y6ztPoN6mi0GVLgjz9yVLIDke71GdXTwPN+kEf
	qQEg==
X-Gm-Message-State: ALoCoQniu3VwM1Gz+GZmp3S5ROUWHVmRiRID1RX1Em1kenNBUzLZNqhwm/LxKFOLJXwCv/iN+nxs
X-Received: by 10.180.164.229 with SMTP id yt5mr12325729wib.49.1392074977191; 
	Mon, 10 Feb 2014 15:29:37 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	ga20sm34684036wic.0.2014.02.10.15.29.35 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 10 Feb 2014 15:29:36 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 10 Feb 2014 23:29:34 +0000
Message-Id: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.8.5.3
Cc: Ian.Jackson@eu.citrix.com, keir@xen.org, tim@xen.org,
	ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>
Subject: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
compilation with clang:

In file included from sched_sedf.c:8:
In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
/home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
not found with <angled> include; use "quotes" instead
           ^~~~~~~~~~
           "stdarg.h"
In file included from sched_sedf.c:8:
/home/julieng/works/xen/xen/include/xen/lib.h:101:63: error: unknown type name 'va_list'
extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
                                                              ^
/home/julieng/works/xen/xen/include/xen/lib.h:105:64: error: unknown type name 'va_list'
extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)

I have the same errors with different versions of clang:
    - clang 3.0 on Debian wheezy
    - clang 3.3 on Fedora 20
    - clang 3.5 built from trunk

Removing -nostdinc fixes the build with clang.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/Rules.mk | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/Rules.mk b/xen/Rules.mk
index df1428f..ed9b8d0 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -46,7 +46,8 @@ CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
 CFLAGS += -pipe -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
 # Solaris puts stdarg.h &c in the system include directory.
 ifneq ($(XEN_OS),SunOS)
-CFLAGS += -nostdinc -iwithprefix include
+CFLAGS-y        += -iwithprefix include
+CFLAGS-$(gcc)   += -nostdinc
 endif
 
 CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
-- 
1.8.5.3
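
The failure mode this patch works around can be reproduced without Xen at all: -nostdinc drops every standard search path, including the compiler's private include directory where stdarg.h lives. A hypothetical stand-alone reproduction (file name and messages are illustrative):

```shell
# Hypothetical reproduction of the -nostdinc failure discussed above:
# <stdarg.h> ships in the compiler's private include directory, which
# -nostdinc removes from the search path along with the system ones.
cat > nostdinc_demo.c <<'EOF'
#include <stdarg.h>
int sum(int n, ...)
{
    va_list ap;
    int s = 0;
    va_start(ap, n);
    while (n--)
        s += va_arg(ap, int);
    va_end(ap);
    return s;
}
int main(void) { return sum(3, 1, 2, 3) == 6 ? 0 : 1; }
EOF
cc nostdinc_demo.c -o nostdinc_demo && ./nostdinc_demo \
    && echo "default include paths: OK"
cc -nostdinc nostdinc_demo.c -o /dev/null 2>/dev/null \
    || echo "-nostdinc: stdarg.h not found"
```

With GCC, adding `-iwithprefix include` restores that private directory, which is presumably why the original flag combination worked there; clang evidently resolves the prefix differently, hence the per-compiler split in the patch.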



From xen-devel-bounces@lists.xen.org Mon Feb 10 23:54:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Feb 2014 23:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD0fp-0007nN-FX; Mon, 10 Feb 2014 23:54:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WD0fn-0007nI-6w
	for xen-devel@lists.xenproject.org; Mon, 10 Feb 2014 23:53:59 +0000
Received: from [85.158.137.68:54806] by server-4.bemta-3.messagelabs.com id
	64/CE-11750-69669F25; Mon, 10 Feb 2014 23:53:58 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392076435!960391!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3803 invoked from network); 10 Feb 2014 23:53:57 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 10 Feb 2014 23:53:57 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 10 Feb 2014 23:53:54 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,821,1384300800"; d="scan'208";a="650430800"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.239])
	by fldsmtpi03.verizon.com with ESMTP; 10 Feb 2014 23:53:53 +0000
Message-ID: <52F96691.7000801@terremark.com>
Date: Mon, 10 Feb 2014 18:53:53 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	Don Slutz <dslutz@verizon.com>, Ian Campbell <Ian.Campbell@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>	
	<52EC2C9C.9090202@terremark.com>	
	<21231.33273.164782.738071@mariner.uk.xensource.com>	
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>	
	<52F425AD.50209@terremark.com>	
	<1391767501.2162.20.camel@kazak.uk.xensource.com>	
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>
	<52F92973.9080804@citrix.com>
In-Reply-To: <52F92973.9080804@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/10/14 14:33, Roger Pau Monné wrote:
> On 10/02/14 20:23, Don Slutz wrote:
>> On 02/10/14 13:59, Roger Pau Monné wrote:
>>> On 10/02/14 19:48, Don Slutz wrote:
>>>> On 02/10/14 05:59, Roger Pau Monné wrote:
>>>>> On 10/02/14 10:55, Ian Campbell wrote:
>>>>>> On Fri, 2014-02-07 at 16:29 +0000, Andrew Cooper wrote:
>>>>>>> On 07/02/14 16:22, Don Slutz wrote:
>>>>>>>> On 02/07/14 05:05, Ian Campbell wrote:
>>>>>>>>> On Thu, 2014-02-06 at 19:15 -0500, Don Slutz wrote:
>> [snip]
>>>>>>> Which, according to google, was introduced in 3.09.4
>>>>>>>
>>>>>>> I think the ./configure script needs a min version check.
>>>>>> Yes, I think so too. Rob, could you advise on a suitable minimum and
>>>>>> perhaps patch tools/configure.ac and/or m4/ocaml.m4 as necessary.
>>>>>>
>>>>>> Also CCing Roger who added the ocaml autoconf stuff.
>>>>> The Ocaml autoconf stuff was picked from http://forge.ocamlcore.org/.
>>>>> Here is an untested patch for our configure script to check for the
>>>>> minimum required OCaml version (3.09.3):
>>>>>
>>>>> (remember to re-generate the configure script after applying)
>>>> Not sure if the older Autoconf (2.68) is at fault, but I get:
>>>>
>>>> dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
>>>> configure.ac:14: error: possibly undefined macro: AS_IF
>>>>         If this token and others are legitimate, please use
>>>> m4_pattern_allow.
>>>>         See the Autoconf documentation.
>>>> configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR
>>> On Debian systems you need the autoconf-archive package which provides
>>> the AX_COMPARE_VERSION macro, and then you also need to run aclocal
>>> before autoconf.
>>>
>>> Roger.
>>>
>> The CentOS 5.10 system has a too old autoconf to do the rebuild, So I
>> went to a Fedora 17 system.
>>
>> It does have autoconf-archive package, nut that make no difference:
>>
>>
>> Installed:
>>    autoconf-archive.noarch 0:2012.09.08-1.fc17
>>
>> Complete!
>> dcs-xen-54:~/xen/tools>autoconf configure.ac >configure
>> configure.ac:14: error: possibly undefined macro: AS_IF
>>        If this token and others are legitimate, please use m4_pattern_allow.
>>        See the Autoconf documentation.
>> configure.ac:162: error: possibly undefined macro: AC_MSG_ERROR
> Did you run aclocal before autoconf?
>
> Thanks, Roger.
>

That did what was needed.  And it looks to be working.  However, with patch:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg00826.html

Which looks to be going in, the version that is checked should be changed to 3.09.3 from 3.09.4.

Note: The commit message does not match the code.

You can add:

Tested-by: Don Slutz <dslutz@verizon.com>

    -Don Slutz
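
The minimum-version floor discussed in this thread (AX_COMPARE_VERSION against OCaml 3.09.3) amounts to a component-wise version comparison. A rough stand-alone equivalent using GNU `sort -V` (illustrative values only; the real check lives in configure.ac):

```shell
# Rough stand-alone sketch of the version floor discussed in this thread
# (illustrative only; the actual check uses AX_COMPARE_VERSION in
# configure.ac, and configure would read the output of `ocamlc -version`).
min=3.09.3
detected=3.09.4   # placeholder value for a detected OCaml version
lowest=$(printf '%s\n%s\n' "$min" "$detected" | sort -V | head -n 1)
if [ "$lowest" = "$min" ]; then
    echo "OCaml $detected satisfies the $min minimum"
else
    echo "OCaml $detected is older than the required $min"
fi
```

Note that plain string comparison would get this wrong (e.g. "3.9" vs "3.10"), which is why configure defers to a version-aware macro.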

From xen-devel-bounces@lists.xen.org Tue Feb 11 00:18:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 00:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD131-0000ll-AO; Tue, 11 Feb 2014 00:17:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WD12z-0000lg-94
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 00:17:57 +0000
Received: from [193.109.254.147:62226] by server-2.bemta-14.messagelabs.com id
	55/B8-01236-43C69F25; Tue, 11 Feb 2014 00:17:56 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392077875!3385621!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31490 invoked from network); 11 Feb 2014 00:17:55 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-27.messagelabs.com with SMTP;
	11 Feb 2014 00:17:55 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 10 Feb 2014 16:13:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,821,1384329600"; d="scan'208";a="481256970"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga002.jf.intel.com with ESMTP; 10 Feb 2014 16:17:40 -0800
Received: from fmsmsx158.amr.corp.intel.com (10.18.116.75) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 16:17:40 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	fmsmsx158.amr.corp.intel.com (10.18.116.75) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 16:17:41 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 11 Feb 2014 08:17:38 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [PATCH] pvh: Fix regression caused by assumption that HVM
	paths MUST use io-backend device.
Thread-Index: AQHPIn+njVLnofJuHUihOs0NqzOmUZqmQkSAgALI8BCAAF/RAIAFx1CA
Date: Tue, 11 Feb 2014 00:17:37 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D8376@SHSMSX104.ccr.corp.intel.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
	<20140207154128.GE3605@phenom.dumpdata.com>
In-Reply-To: <20140207154128.GE3605@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Konrad
	Rzeszutek Wilk <konrad@kernel.org>, "Nakajima,
	Jun" <jun.nakajima@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk wrote on 2014-02-07:
> On Fri, Feb 07, 2014 at 02:28:07AM +0000, Zhang, Yang Z wrote:
>> Konrad Rzeszutek Wilk wrote on 2014-02-05:
>>> On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
>>>> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
>>>>> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
>>>>>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
>>> <konrad.wilk@oracle.com> wrote:
>>>>>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>>>>>>>> Wasn't it that Mukesh's patch simply was yours with the two
>>>>>>>> get_ioreq()s folded by using a local variable?
>>>>>>> Yes. As so
>>>>>> Thanks. Except that ...
>>>>>> 
>>>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>>>>>>>      struct vcpu *v = current;
>>>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>>>> -
>>>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>> ... you don't want to drop the blank line, and naming the new
>>>>>> variable "ioreq" would seem preferable.
>>>>>> 
>>>>>>>      /*
>>>>>>>       * a pending IO emualtion may still no finished. In this case,
>>>>>>>       * no virtual vmswith is allowed. Or else, the following IO
>>>>>>>       * emulation will handled in a wrong VCPU context.
>>>>>>>       */
>>>>>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>>>>>>> +    if ( p && p->state != STATE_IOREQ_NONE )
>>>>>> And, as said before, I'd think "!p ||" instead of "p &&" would be
>>>>>> the right thing here. Yang, Jun?
>>>>> I have two patches - one is the simpler one that is pretty
>>>>> straightforward, and the other is the one you suggested. Either one fixes PVH
>>>>> guests. I also did bootup tests with HVM guests to make sure they
>>>>> worked.
>>>>> 
>>>>> Attached and inline.
>>>> 
>> 
>> Sorry for the late response. I just got back from the Chinese New Year holiday.
>> 
>>>> But they do different things -- one does "ioreq && ioreq->state..."
>>> 
>>> Correct.
>>>> and the other does "!ioreq || ioreq->state...".  The first one is
>>>> incorrect, AFAICT.
>>> 
>>> Both of them fix the hypervisor blowing up with any PVH guest.
>> 
>> Both fixes look right to me.
>> The only question is what we want to do here:
>> "ioreq && ioreq->state..." only lets a VCPU that supports the IO
>> request emulation mechanism (which currently means an HVM VCPU)
>> continue with the nested check.
>> And "!ioreq || ioreq->state..." additionally covers a VCPU that does
>> not support the IO request emulation mechanism (which currently means
>> a PVH VCPU).
>> 
>> My original patch only intended to let an HVM VCPU with no pending IO
>> request continue with the nested check, not to distinguish HVM from
>> PVH. So I prefer to only let an HVM VCPU get here since, as Jan
>> mentioned before, a non-HVM domain should never call nested-related
>> functions at all unless it also supports nesting.
> 
> So it sounds like the #2 patch is preferable by you.
> 
> Can I stick Acked-by on it?
> 

Sure.

Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 01:36:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 01:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD2GS-00089y-Sj; Tue, 11 Feb 2014 01:35:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WD2GQ-00089t-TE
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 01:35:55 +0000
Received: from [85.158.137.68:53124] by server-13.bemta-3.messagelabs.com id
	C7/5E-26923-97E79F25; Tue, 11 Feb 2014 01:35:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392082550!970265!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19999 invoked from network); 11 Feb 2014 01:35:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 01:35:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,821,1384300800"; 
	d="scan'208,217";a="101479628"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 01:35:49 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 10 Feb 2014 20:35:48 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Tue, 11 Feb 2014
	02:35:45 +0100
Message-ID: <52F97E6F.2000402@citrix.com>
Date: Tue, 11 Feb 2014 01:35:43 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: <rshriram@cs.ubc.ca>, David Vrabel <david.vrabel@citrix.com>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
In-Reply-To: <CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8346258649584390089=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8346258649584390089==
Content-Type: multipart/alternative;
	boundary="------------050200090706030607020009"

--------------050200090706030607020009
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 10/02/2014 20:00, Shriram Rajagopalan wrote:
> On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com
> <mailto:david.vrabel@citrix.com>> wrote:
>
>     Here is a draft of a proposal for a new domain save image format.  It
>     does not currently cover all use cases (e.g., images for HVM guest are
>     not considered).
>
>     http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>
>     Introduction
>     ============
>
>     Revision History
>     ----------------
>
>     --------------------------------------------------------------------
>     Version  Date         Changes
>     -------  -----------  ----------------------------------------------
>     Draft A  6 Feb 2014   Initial draft.
>
>     Draft B  10 Feb 2014  Corrected image header field widths.
>
>                           Minor updates and clarifications.
>     --------------------------------------------------------------------
>
>     Purpose
>     -------
>
>     The _domain save image_ is the context of a running domain used for
>     snapshots of a domain or for transferring domains between hosts during
>     migration.
>
>     There are a number of problems with the format of the domain save
>     image used in Xen 4.4 and earlier (the _legacy format_).
>
>     * Dependant on toolstack word size.  A number of fields within the
>       image are native types such as `unsigned long` which have different
>       sizes between 32-bit and 64-bit hosts.  This prevents domains from
>       being migrated between 32-bit and 64-bit hosts.
>
>     * There is no header identifying the image.
>
>     * The image has no version information.
>
>     A new format that addresses the above is required.
>
>     ARM does not yet have a domain save image format specified and
>     the format described in this specification should be suitable.
>
>
>
> I suggest keeping the processing overhead in mind when designing the new
> image format. Some key things have been addressed, such as making sure
> data is always padded to maintain alignment. But there are also some
> aspects of this proposal that seem awfully unnecessary. More details below.
>  
>
>
>     Overview
>     ========
>
>     The image format consists of two main sections:
>
>     * _Headers_
>     * _Records_
>
>     Headers
>     -------
>
>     There are two headers: the _image header_, and the _domain header_.
>     The image header describes the format of the image (version etc.).
>     The _domain header_ contains general information about the domain
>     (architecture, type etc.).
>
>     Records
>     -------
>
>     The main part of the format is a sequence of different _records_.
>     Each record type contains information about the domain context.  At a
>     minimum there is an END record marking the end of the records section.
>
>
>     Fields
>     ------
>
>     All the fields within the headers and records have a fixed width.
>
>     Fields are always aligned to their size.
>
>     Padding and reserved fields are set to zero on save and must be
>     ignored during restore.
>
>
> So far so good.
>  
>
>     Integer (numeric) fields in the image header are always in big-endian
>     byte order.
>
>     Integer fields in the domain header and in the records are in the
>     endianess described in the image header (which will typically be the
>     native ordering).
>
>
>
> It's tempting to adopt all the TCP-style madness for transferring a set of
> structured data.  Why this endian-ness mess?  Am I missing something here?
> I am assuming that the lion's share of Xen's deployment is on x86
> (not including Amazon). So that leaves ARM.  Why not let these
> processors take the hit of endian-ness conversion?

The large majority is indeed x86, but don't discount ARM because it is
currently in the minority.  With the current requirements, the vast
majority of the data will still be little endian on x86.

>
>     Headers
>     =======
>
>     Image Header
>     ------------
>
>     The image header identifies an image as a Xen domain save image.  It
>     includes the version of this specification that the image complies
>     with.
>
>     Tools supporting version _V_ of the specification shall always save
>     images using version _V_.  Tools shall support restoring from version
>     _V_ and version _V_ - 1.  Tools may additionally support restoring
>     from earlier versions.
>
>     The marker field can be used to distinguish between legacy images and
>     those corresponding to this specification.  Legacy images will have
>     one or more zero bits within the first 8 octets of the image.
>
>     Fields within the image header are always in _big-endian_ byte order,
>     regardless of the setting of the endianness bit.
>
>
> and more endian-ness mess.

Network order is perfectly valid.  It is how all your network packets
arrive...
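
[Editor's note: validating the fixed big-endian image header is cheap on a host of either endianness. A minimal C sketch, assuming the 24-octet layout and field values from draft B; the helper names are mine, not part of the proposal:]

```c
#include <stdint.h>

/* Read a 32-bit big-endian field, independent of host byte order. */
static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

/* Check the 24-octet image header from draft B: the all-ones marker,
 * the "XENF" id, and version 1.  Returns 1 if the header matches. */
int image_header_ok(const uint8_t hdr[24])
{
    for (int i = 0; i < 8; i++)
        if (hdr[i] != 0xFF)   /* legacy images have zero bits here */
            return 0;
    if (be32(hdr + 8) != 0x58454E46u)   /* "XENF" in ASCII */
        return 0;
    if (be32(hdr + 12) != 0x00000001u)  /* version of this spec */
        return 0;
    return 1;
}
```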

>
>
>          0     1     2     3     4     5     6     7 octet
>         +-------------------------------------------------+
>         | marker                                          |
>         +-----------------------+-------------------------+
>         | id                    | version                 |
>         +-----------+-----------+-------------------------+
>         | options   |                                     |
>         +-----------+-------------------------------------+
>
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     marker      0xFFFFFFFFFFFFFFFF.
>
>     id          0x58454E46 ("XENF" in ASCII).
>
>     version     0x00000001.  The version of this specification.
>
>     options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
>
>                 bit 1-15: Reserved.
>     --------------------------------------------------------------------
>
>     Domain Header
>     -------------
>
>     The domain header includes general properties of the domain.
>
>          0      1     2     3     4     5     6     7 octet
>         +-----------+-----------+-----------+-------------+
>         | arch      | type      | page_shift| (reserved)  |
>         +-----------+-----------+-----------+-------------+
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     arch        0x0000: Reserved.
>
>                 0x0001: x86.
>
>                 0x0002: ARM.
>
>     type        0x0000: Reserved.
>
>                 0x0001: x86 PV.
>
>                 0x0002 - 0xFFFF: Reserved.
>
>     page_shift  Size of a guest page as a power of two.
>
>                 i.e., page size = 2^page_shift^.
>     --------------------------------------------------------------------
>
>
>     Records
>     =======
>
>     A record has a record header, type specific data and a trailing
>     footer.  If body_length is not a multiple of 8, the body is padded
>     with zeroes to align the checksum field on an 8 octet boundary.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+-------------------------+
>         | type                  | body_length             |
>         +-----------+-----------+-------------------------+
>         | options   | (reserved)                          |
>         +-----------+-------------------------------------+
>         ...
>         Record body of length body_length octets followed by
>         0 to 7 octets of padding.
>         ...
>         +-----------------------+-------------------------+
>         | checksum              | (reserved)              |
>         +-----------------------+-------------------------+
>
>
> I am assuming that the checksum field is present only
> for debugging purposes? Otherwise, I see no reason for the
> computational overhead, given that we are already sending data
> over a reliable channel + IIRC we already have an image-wide checksum
> when saving the image to disk.
>
> If debugging is the only use case, then I guess the type field
> can be prefixed with a 1/0 bit, eliminating the need for the
> 1-bit checksum options field + 7-byte padding. Similarly, if debugging
> mode is not set, why waste another 8 bytes at the end for the checksum
> field.
>
> Unless you think there may be more types with need of special options,
>
> Feel free to correct me if I am missing something elementary here..

What image-wide checksum?

Are you certain that all your data is moving over reliable channels?
Are you certain that your hard drives are bit perfect?  Are you certain
that your network connection is bit perfect?

Given the amount of data sent as part of a migration, 8 bytes per record
is not a substantial overhead.
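
[Editor's note: the draft names CRC-32 but does not pin down the variant. Assuming the common reflected IEEE 802.3 polynomial (the one zlib uses), a bitwise sketch of the per-record checksum looks like this; the function name is illustrative:]

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-32 over a record body plus its zero padding.  This assumes the
 * reflected IEEE 802.3 polynomial with 0xFFFFFFFF init and final XOR
 * (the zlib variant); the draft does not specify which variant. */
uint32_t record_crc32(const uint8_t *body, size_t padded_len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < padded_len; i++) {
        crc ^= body[i];
        for (int b = 0; b < 8; b++)
            /* shift right, conditionally XOR the reflected polynomial */
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return ~crc;
}
```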

>
>
>  
>
>     --------------------------------------------------------------------
>     Field        Description
>     -----------  -------------------------------------------------------
>     type         0x00000000: END
>
>                  0x00000001: PAGE_DATA
>
>                  0x00000002: VCPU_INFO
>
>                  0x00000003: VCPU_CONTEXT
>
>                  0x00000004: X86_PV_INFO
>
>                  0x00000005: P2M
>
>                  0x00000006 - 0xFFFFFFFF: Reserved
>
>     body_length  Length in octets of the record body.
>
>     options      Bit 0: 0 = checksum invalid, 1 = checksum valid.
>
>                  Bit 1-15: Reserved.
>
>     checksum     CRC-32 checksum of the record body (including any
>                  trailing padding), or 0x00000000 if the checksum
>                  field is invalid.
>     --------------------------------------------------------------------
>
>     The following sub-sections specify the record body format for each of
>     the record types.
>
>     END
>     ----
>
>     An end record marks the end of the image.
>
>          0     1     2     3     4     5     6     7 octet
>         +-------------------------------------------------+
>
>     The end record contains no fields; its body_length is 0.
>
>     PAGE_DATA
>     ---------
>
>     The bulk of an image consists of many PAGE_DATA records containing the
>     memory contents.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+-------------------------+
>         | count (C)             | (reserved)              |
>         +-----------------------+-------------------------+
>         | pfn[0]                                          |
>         +-------------------------------------------------+
>         ...
>         +-------------------------------------------------+
>         | pfn[C-1]                                        |
>         +-------------------------------------------------+
>         | page_data[0]...                                 |
>         ...
>         +-------------------------------------------------+
>         | page_data[N-1]...                               |
>         ...
>         +-------------------------------------------------+
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     count       Number of pages described in this record.
>
>     pfn         An array of count PFNs. Bits 63-60 contain
>                 the XEN\_DOMCTL\_PFINFO_* value for that PFN.
>
>     page_data   page_size octets of uncompressed page contents for
>                 each page set as present in the pfn array.
>     --------------------------------------------------------------------
>
>
> s/uncompressed/(compressed/uncompressed)/
> (Remus sends compressed data)

IIRC, remus sends XOR+RLE encoded pages?  Along with HVM domains, this
needs covering in a future draft.

~Andrew
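
[Editor's note: splitting the pfn entries described in the PAGE_DATA record above is a couple of masks and shifts. A hedged sketch; the helper names are mine:]

```c
#include <stdint.h>

/* Each pfn entry in a PAGE_DATA record carries the PFN in bits 59-0
 * and the XEN_DOMCTL_PFINFO_* type value in bits 63-60. */
static uint64_t pfn_of(uint64_t entry)
{
    return entry & ((1ULL << 60) - 1);   /* low 60 bits: the PFN */
}

static unsigned pfn_type_of(uint64_t entry)
{
    return (unsigned)(entry >> 60);      /* top 4 bits: PFINFO type */
}
```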

>  
>
>     VCPU_INFO
>     ---------
>
>     > [ This is a combination of parts of the extended-info and
>     > XC_SAVE_ID_VCPU_INFO chunks. ]
>
>     The VCPU_INFO record includes the maximum possible VCPU ID.  This will
>     be followed by a VCPU_CONTEXT record for each online VCPU.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+------------------------+
>         | max_vcpu_id           | (reserved)             |
>         +-----------------------+------------------------+
>
>     --------------------------------------------------------------------
>     Field        Description
>     -----------  ---------------------------------------------------
>     max_vcpu_id  Maximum possible VCPU ID.
>     --------------------------------------------------------------------
>
>
>     VCPU_CONTEXT
>     ------------
>
>     The context for a single VCPU.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+-------------------------+
>         | vcpu_id               | (reserved)              |
>         +-----------------------+-------------------------+
>         | vcpu_ctx...                                     |
>         ...
>         +-------------------------------------------------+
>
>     --------------------------------------------------------------------
>     Field            Description
>     -----------      ---------------------------------------------------
>     vcpu_id          The VCPU ID.
>
>     vcpu_ctx         Context for this VCPU.
>     --------------------------------------------------------------------
>
>     [ vcpu_ctx format TBD. ]
>
>
>     X86_PV_INFO
>     -----------
>
>     > [ This record replaces part of the extended-info chunk. ]
>
>          0     1     2     3     4     5     6     7 octet
>         +-----+-----+-----+-------------------------------+
>         | w   | ptl | o   | (reserved)                    |
>         +-----+-----+-----+-------------------------------+
>
>     --------------------------------------------------------------------
>     Field            Description
>     -----------      ---------------------------------------------------
>     guest_width (w)  Guest width in octets (either 4 or 8).
>
>     pt_levels (ptl)  Number of page table levels (either 3 or 4).
>
>     options (o)      Bit 0: 0 - no VMASST_pae_extended_cr3,
>                      1 - VMASST_pae_extended_cr3.
>
>                      Bit 1-7: Reserved.
>     --------------------------------------------------------------------
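[ Illustrative only: the X86_PV_INFO body is one octet each for w, ptl
and o, followed by five reserved octets. A sketch of encoding it,
assuming little-endian per the typical x86 image header: ]

```python
import struct

def pack_x86_pv_info(guest_width, pt_levels, pae_extended_cr3):
    # One octet each for w, ptl and o, then five zeroed reserved
    # octets ("5x").  Bit 0 of options flags VMASST_pae_extended_cr3.
    assert guest_width in (4, 8) and pt_levels in (3, 4)
    options = 1 if pae_extended_cr3 else 0
    return struct.pack("<BBB5x", guest_width, pt_levels, options)

body = pack_x86_pv_info(8, 4, True)
assert len(body) == 8
assert body == b"\x08\x04\x01\x00\x00\x00\x00\x00"
```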
>
>
>     P2M
>     ---
>
>     [ This is a more flexible replacement for the old p2m_size field and
>     p2m array. ]
>
>     The P2M record contains a portion of the source domain's P2M.
>     Multiple P2M records may be sent if the source P2M changes during the
>     stream.
>
>          0     1     2     3     4     5     6     7 octet
>         +-------------------------------------------------+
>         | pfn_begin                                       |
>         +-------------------------------------------------+
>         | pfn_end                                         |
>         +-------------------------------------------------+
>         | mfn[0]                                          |
>         +-------------------------------------------------+
>         ...
>         +-------------------------------------------------+
>         | mfn[N-1]                                        |
>         +-------------------------------------------------+
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     pfn_begin   The first PFN in this portion of the P2M.
>
>     pfn_end     One past the last PFN in this portion of the P2M.
>
>     mfn         Array of (pfn_end - pfn_begin) MFNs corresponding to
>                 the set of PFNs in the range [pfn_begin, pfn_end).
>     --------------------------------------------------------------------
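[ Illustrative only: a sketch of building and parsing a P2M record body.
The MFN count is implied by pfn_end - pfn_begin; 64-bit little-endian
fields are an assumption of this sketch, not mandated by the draft. ]

```python
import struct

def pack_p2m_body(pfn_begin, mfns):
    # 64-bit pfn_begin and pfn_end, then one 64-bit MFN per PFN in
    # the half-open range [pfn_begin, pfn_end).
    pfn_end = pfn_begin + len(mfns)
    return (struct.pack("<QQ", pfn_begin, pfn_end) +
            struct.pack("<%dQ" % len(mfns), *mfns))

def unpack_p2m_body(body):
    pfn_begin, pfn_end = struct.unpack_from("<QQ", body)
    n = pfn_end - pfn_begin
    return pfn_begin, list(struct.unpack_from("<%dQ" % n, body, 16))

body = pack_p2m_body(0x100, [7, 8, 9])
assert unpack_p2m_body(body) == (0x100, [7, 8, 9])
```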
>
>
>     Layout
>     ======
>
>     The set of valid records depends on the guest architecture and type.
>
>     x86 PV Guest
>     ------------
>
>     An x86 PV guest image will have, in this order:
>
>     1. Image header
>     2. Domain header
>     3. X86_PV_INFO record
>     4. At least one P2M record
>     5. At least one PAGE_DATA record
>     6. VCPU_INFO record
>     7. At least one VCPU_CONTEXT record
>     8. END record
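As a toy illustration of the ordering constraint above (headers
excluded, since they are not records), a validator for the x86 PV record
sequence might look like this; repeats of P2M and PAGE_DATA are allowed,
but interleaving across migration rounds is not modelled:

```python
import re

def valid_pv_layout(records):
    # Accept exactly: X86_PV_INFO, one or more P2M, one or more
    # PAGE_DATA, VCPU_INFO, one or more VCPU_CONTEXT, then END.
    stream = " ".join(records)
    return re.fullmatch(
        r"X86_PV_INFO( P2M)+( PAGE_DATA)+"
        r" VCPU_INFO( VCPU_CONTEXT)+ END", stream) is not None

assert valid_pv_layout(
    ["X86_PV_INFO", "P2M", "PAGE_DATA", "PAGE_DATA",
     "VCPU_INFO", "VCPU_CONTEXT", "END"])
assert not valid_pv_layout(["X86_PV_INFO", "END"])
```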
>
>
> There seems to be a bunch of info missing. Here are some
> missing elements that I can recall at the moment:
> a) there is no support for sending one-time markers that switch the
> receiver's operating mode in the middle of a data stream.
> E.g., XC_SAVE_ENABLE_COMPRESSION, XC_SAVE_ID_LAST_CHECKPOINT,
> XC_SAVE_ENABLE_VERIFY_MODE, etc.
>
> b) in pv case, the tail also has a list of unmapped PFNs at the end of
> every iteration.
>
> c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device context
> information (generally for HVMs).
>
>
>     Legacy Images (x86 only)
>     ========================
>
>     Restoring legacy images from older tools shall be handled by
>     translating the legacy format image into this new format.
>
>     It shall not be possible to save in the legacy format.
>
>     There are two different legacy images depending on whether they were
>     generated by a 32-bit or a 64-bit toolstack. These shall be
>     distinguished by inspecting octets 4-7 in the image.  If these are
>     zero then it is a 64-bit image.
>
>     Toolstack  Field                            Value
>     ---------  -----                            -----
>     64-bit     Bits 31-63 of the p2m_size field 0 (since p2m_size < 2^32^)
>     32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
>     32-bit     Chunk type (HVM)                 < 0
>     32-bit     Page count (HVM)                 > 0
>
>     Table: Possible values for octets 4-7 in legacy images
>
>     This assumes the presence of the extended-info chunk, which was
>     introduced in Xen 3.0.
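For illustration, the detection heuristics above could be sketched as
follows (new-format images start with the all-ones marker, so only
legacy images need the octets 4-7 test; the two HVM sub-cases are not
split out here):

```python
import struct

def classify_image(first8):
    # New-format images begin with the 0xFFFFFFFFFFFFFFFF marker.
    if first8 == b"\xff" * 8:
        return "new-format"
    # Legacy: octets 4-7 are zero only for 64-bit toolstack images
    # (high half of p2m_size, which is < 2^32).
    hi = struct.unpack_from("<I", first8, 4)[0]
    return "legacy-64bit" if hi == 0 else "legacy-32bit"

assert classify_image(b"\xff" * 8) == "new-format"
assert classify_image(struct.pack("<Q", 0x100000)) == "legacy-64bit"
assert classify_image(struct.pack("<II", 123, 0xFFFFFFFF)) == "legacy-32bit"
```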
>
>
>     Future Extensions
>     =================
>
>     All changes to this format require the image version to be increased.
>
>     The format may be extended by adding additional record types.
>
>     Extending an existing record type must be done by adding a new record
>     type.  This allows old images with the old record to still be
>     restored.
>
>     The image header may be extended by _appending_ additional fields.  In
>     particular, the `marker`, `id` and `version` fields must never change
>     size or location.
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------050200090706030607020009
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 10/02/2014 20:00, Shriram
      Rajagopalan wrote:<br>
    </div>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">On Mon, Feb 10, 2014 at 9:20 AM,
            David Vrabel <span dir="ltr">&lt;<a moz-do-not-send="true"
                href="mailto:david.vrabel@citrix.com" target="_blank">david.vrabel@citrix.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Here
              is a draft of a proposal for a new domain save image
              format. &nbsp;It<br>
              does not currently cover all use cases (e.g., images for
              HVM guests are<br>
              not considered).<br>
              <br>
              <a moz-do-not-send="true"
                href="http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf"
                target="_blank">http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf</a><br>
              <br>
              Introduction<br>
              ============<br>
              <br>
              Revision History<br>
              ----------------<br>
              <br>
--------------------------------------------------------------------<br>
              Version &nbsp;Date &nbsp; &nbsp; &nbsp; &nbsp; Changes<br>
              ------- &nbsp;-----------
              &nbsp;----------------------------------------------<br>
              Draft A &nbsp;6 Feb 2014 &nbsp; Initial draft.<br>
              <br>
              Draft B &nbsp;10 Feb 2014 &nbsp;Corrected image header field widths.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Minor updates and clarifications.<br>
--------------------------------------------------------------------<br>
              <br>
              Purpose<br>
              -------<br>
              <br>
              The _domain save image_ is the context of a running domain
              used for<br>
              snapshots of a domain or for transferring domains between
              hosts during<br>
              migration.<br>
              <br>
              There are a number of problems with the format of the
              domain save<br>
              image used in Xen 4.4 and earlier (the _legacy format_).<br>
              <br>
              * Dependent on toolstack word size. &nbsp;A number of fields
              within the<br>
              &nbsp; image are native types such as `unsigned long` which
              have different<br>
              &nbsp; sizes between 32-bit and 64-bit hosts. &nbsp;This prevents
              domains from<br>
              &nbsp; being migrated between 32-bit and 64-bit hosts.<br>
              <br>
              * There is no header identifying the image.<br>
              <br>
              * The image has no version information.<br>
              <br>
              A new format that addresses the above is required.<br>
              <br>
              ARM does not yet have a domain save image format
              specified and<br>
              the format described in this specification should be
              suitable.<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>I suggest keeping the processing overhead in mind when
              designing the new</div>
            <div>image format. Some key things have been addressed, such
              as making sure data</div>
            <div>
              is always padded to maintain alignment. But there are also
              some aspects of this</div>
            <div>proposal that seem awfully unnecessary.. More details
              below.</div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
              Overview<br>
              ========<br>
              <br>
              The image format consists of two main sections:<br>
              <br>
              * _Headers_<br>
              * _Records_<br>
              <br>
              Headers<br>
              -------<br>
              <br>
              There are two headers: the _image header_, and the _domain
              header_.<br>
              The image header describes the format of the image
              (version etc.).<br>
              The _domain header_ contains general information about the
              domain<br>
              (architecture, type etc.).<br>
              <br>
              Records<br>
              -------<br>
              <br>
              The main part of the format is a sequence of different
              _records_.<br>
              Each record type contains information about the domain
              context. &nbsp;At a<br>
              minimum there is an END record marking the end of the
              records section.<br>
              <br>
              <br>
              Fields<br>
              ------<br>
              <br>
              All the fields within the headers and records have a fixed
              width.<br>
              <br>
              Fields are always aligned to their size.<br>
              <br>
              Padding and reserved fields are set to zero on save and
              must be<br>
              ignored during restore.<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div>So far so good.</div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Integer
              (numeric) fields in the image header are always in
              big-endian<br>
              byte order.<br>
              <br>
              Integer fields in the domain header and in the records are
              in the<br>
              endianness described in the image header (which will
              typically be the<br>
              native ordering).<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>
              <div>
                <div>It's tempting to adopt all the TCP-style madness for
                  transferring a set of</div>
                <div>structured data. &nbsp;Why this endian-ness mess? &nbsp;Am I
                  missing something here?</div>
                <div>I am assuming that the lion's share of Xen's
                  deployment is on x86&nbsp;</div>
                <div>(not including Amazon). So that leaves ARM. &nbsp;Why
                  not let these&nbsp;</div>
                <div>processors take the hit of endian-ness conversion?</div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    The large majority is indeed x86, but don't discount ARM because it
    is currently in the minority.&nbsp; With the current requirements, the
    vast majority of the data will still be little endian on x86.<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>
              <div><br>
              </div>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Headers<br>
              =======<br>
              <br>
              Image Header<br>
              ------------<br>
              <br>
              The image header identifies an image as a Xen domain save
              image. &nbsp;It<br>
              includes the version of this specification that the image
              complies<br>
              with.<br>
              <br>
              Tools supporting version _V_ of the specification shall
              always save<br>
              images using version _V_. &nbsp;Tools shall support restoring
              from version<br>
              _V_ and version _V_ - 1. &nbsp;Tools may additionally support
              restoring<br>
              from earlier versions.<br>
              <br>
              The marker field can be used to distinguish between legacy
              images and<br>
              those corresponding to this specification. &nbsp;Legacy images
              will have<br>
              one or more zero bits within the first 8 octets of the
              image.<br>
              <br>
              Fields within the image header are always in _big-endian_
              byte order,<br>
              regardless of the setting of the endianness bit.<br>
            </blockquote>
            <div><br>
            </div>
            <div>and more endian-ness mess.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    Network order is perfectly valid.&nbsp; It is how all your network
    packets arrive...<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div><br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | marker &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| version &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------+-----------+-------------------------+<br>
              &nbsp; &nbsp; | options &nbsp; | &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------+-------------------------------------+<br>
              <br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              marker &nbsp; &nbsp; &nbsp;0xFFFFFFFFFFFFFFFF.<br>
              <br>
              id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x58454E46 ("XENF" in ASCII).<br>
              <br>
              version &nbsp; &nbsp; 0x00000001. &nbsp;The version of this
              specification.<br>
              <br>
              options &nbsp; &nbsp; bit 0: Endianness. &nbsp;0 = little-endian, 1 =
              big-endian.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; bit 1-15: Reserved.<br>
--------------------------------------------------------------------<br>
              <br>
              Domain Header<br>
              -------------<br>
              <br>
              The domain header includes general properties of the
              domain.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------+-----------+-----------+-------------+<br>
              &nbsp; &nbsp; | arch &nbsp; &nbsp; &nbsp;| type &nbsp; &nbsp; &nbsp;| page_shift| (reserved) &nbsp;|<br>
              &nbsp; &nbsp; +-----------+-----------+-----------+-------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              arch &nbsp; &nbsp; &nbsp; &nbsp;0x0000: Reserved.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0001: x86.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0002: ARM.<br>
              <br>
              type &nbsp; &nbsp; &nbsp; &nbsp;0x0000: Reserved.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0001: x86 PV.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0002 - 0xFFFF: Reserved.<br>
              <br>
              page_shift &nbsp;Size of a guest page as a power of two.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; i.e., page size = 2^page_shift^.<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              Records<br>
              =======<br>
              <br>
              A record has a record header, type specific data and a
              trailing<br>
              footer. &nbsp;If body_length is not a multiple of 8, the body
              is padded<br>
              with zeroes to align the checksum field on an 8 octet
              boundary.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | type &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| body_length &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------+-----------+-------------------------+<br>
              &nbsp; &nbsp; | options &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------+-------------------------------------+<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; Record body of length body_length octets followed by<br>
              &nbsp; &nbsp; 0 to 7 octets of padding.<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | checksum &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div>I am assuming that the checksum field is present
              only</div>
            <div>for debugging purposes? Otherwise, I see no reason for
              the</div>
            <div>computational overhead, given that we are already
              sending data</div>
            <div>over a reliable channel + IIRC we already have an
              image-wide checksum&nbsp;</div>
            <div>when saving the image to disk.</div>
            <div><br>
            </div>
            <div>If debugging is the only use case, then I guess the
              type field</div>
            <div>can be prefixed with a 1/0 bit, eliminating the need
              for the&nbsp;</div>
            <div>1-bit checksum options field + 7-byte padding.
              Similarly, if debugging&nbsp;</div>
            <div>mode is not set, why waste another 8 bytes at the end
              for the checksum field.</div>
            <div><br>
            </div>
            <div>Unless you think there may be more record types needing
              special options.</div>
            <div><br>
            </div>
            <div>Feel free to correct me if I am missing something
              elementary here..</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    What image-wide checksum?<br>
    <br>
    Are you certain that all your data is moving over reliable
    channels?&nbsp; Are you certain that your hard drives are bit perfect?&nbsp;
    Are you certain that your network connection is bit perfect?<br>
    <br>
    Given the amount of data sent as part of a migration, 8 bytes per
    record is not a substantial overhead.<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div><br>
            </div>
            <div><br>
            </div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              -----------
              &nbsp;-------------------------------------------------------<br>
              type &nbsp; &nbsp; &nbsp; &nbsp; 0x00000000: END<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000001: PAGE_DATA<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000002: VCPU_INFO<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000003: VCPU_CONTEXT<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000004: X86_PV_INFO<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000005: P2M<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000006 - 0xFFFFFFFF: Reserved<br>
              <br>
              body_length &nbsp;Length in octets of the record body.<br>
              <br>
              options &nbsp; &nbsp; &nbsp;Bit 0: 0 = checksum invalid, 1 = checksum
              valid.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Bit 1-15: Reserved.<br>
              <br>
              checksum &nbsp; &nbsp; CRC-32 checksum of the record body (including
              any trailing<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;padding), or 0x00000000 if the checksum field
              is invalid.<br>
--------------------------------------------------------------------<br>
              <br>
            </blockquote>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">The
              following sub-sections specify the record body format for
              each of<br>
              the record types.<br>
              <br>
              END<br>
              ----<br>
              <br>
              An end record marks the end of the image.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
              The end record contains no fields; its body_length is 0.<br>
              <br>
              PAGE_DATA<br>
              ---------<br>
              <br>
              The bulk of an image consists of many PAGE_DATA records
              containing the<br>
              memory contents.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | count (C) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | pfn[0] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | pfn[C-1] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | page_data[0]... &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | page_data[N-1]... &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              count &nbsp; &nbsp; &nbsp; Number of pages described in this record.<br>
              <br>
              pfn &nbsp; &nbsp; &nbsp; &nbsp; An array of count PFNs. Bits 63-60 contain<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; the XEN\_DOMCTL\_PFINFO_* value for that PFN.<br>
              <br>
              page_data &nbsp; page_size octets of uncompressed page contents
              for each page<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; set as present in the pfn array.<br>
--------------------------------------------------------------------<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div>s/uncompressed/(compressed/uncompressed)/</div>
            <div>(Remus sends compressed data)</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    IIRC, remus sends XOR+RLE encoded pages?&nbsp; Along with HVM domains,
    this needs covering in a future draft.<br>
    <br>
    ~Andrew<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">VCPU_INFO<br>
              ---------<br>
              <br>
              &gt; [ This is a combination of parts of the extended-info
              and<br>
              &gt; XC_SAVE_ID_VCPU_INFO chunks. ]<br>
              <br>
              The VCPU_INFO record includes the maximum possible VCPU
              ID. &nbsp;This will<br>
              be followed by a VCPU_CONTEXT record for each online VCPU.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+------------------------+<br>
              &nbsp; &nbsp; | max_vcpu_id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------------------+------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              -----------
              &nbsp;---------------------------------------------------<br>
              max_vcpu_id &nbsp;Maximum possible VCPU ID.<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              VCPU_CONTEXT<br>
              ------------<br>
              <br>
              The context for a single VCPU.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | vcpu_id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | vcpu_ctx... &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              ----------- &nbsp; &nbsp;
              &nbsp;---------------------------------------------------<br>
              vcpu_id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;The VCPU ID.<br>
              <br>
              vcpu_ctx &nbsp; &nbsp; &nbsp; &nbsp; Context for this VCPU.<br>
--------------------------------------------------------------------<br>
              <br>
              [ vcpu_ctx format TBD. ]<br>
              <br>
              <br>
              X86_PV_INFO<br>
              -----------<br>
              <br>
              &gt; [ This record replaces part of the extended-info
              chunk. ]<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----+-----+-----+-------------------------------+<br>
              &nbsp; &nbsp; | w &nbsp; | ptl | o &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----+-----+-----+-------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              ----------- &nbsp; &nbsp;
              &nbsp;---------------------------------------------------<br>
              guest_width (w) &nbsp;Guest width in octets (either 4 or 8).<br>
              <br>
              pt_levels (ptl) &nbsp;Number of page table levels (either 3 or
              4).<br>
              <br>
              options (o) &nbsp; &nbsp; &nbsp;Bit 0: 0 = no VMASST_pae_extended_cr3,<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1 = VMASST_pae_extended_cr3.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Bits 1-7: Reserved.<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              P2M<br>
              ---<br>
              <br>
              [ This is a more flexible replacement for the old p2m_size
              field and<br>
              p2m array. ]<br>
              <br>
              The P2M record contains a portion of the source domain's
              P2M.<br>
              Multiple P2M records may be sent if the source P2M changes
              during the<br>
              stream.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | pfn_begin &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | pfn_end &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | mfn[0] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | mfn[N-1] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              pfn_begin &nbsp; The first PFN in this portion of the P2M<br>
              <br>
              pfn_end &nbsp; &nbsp; One past the last PFN in this portion of the
              P2M.<br>
              <br>
              mfn &nbsp; &nbsp; &nbsp; &nbsp; Array of (pfn_end - pfn_begin) MFNs
              corresponding to<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; the set of PFNs in the range [pfn_begin,
              pfn_end).<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              Layout<br>
              ======<br>
              <br>
              The set of valid records depends on the guest architecture
              and type.<br>
              <br>
              x86 PV Guest<br>
              ------------<br>
              <br>
              An x86 PV guest image will have in this order:<br>
              <br>
              1. Image header<br>
              2. Domain header<br>
              3. X86_PV_INFO record<br>
              4. At least one P2M record<br>
              5. At least one PAGE_DATA record</blockquote>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">6.
              VCPU_INFO record<br>
              7. At least one VCPU_CONTEXT record</blockquote>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">8.
              END record<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div>There seems to be a bunch of info missing. Here are
              some</div>
            <div>missing elements that I can recall at the moment:</div>
            <div>a) there is no support for sending over one-time
              markers that switch the</div>
            <div>receiver's operating mode in the middle of a data
              stream.&nbsp;</div>
            <div>E.g., XC_SAVE_ENABLE_COMPRESSION,
              XC_SAVE_ID_LAST_CHECKPOINT, etc.</div>
            <div>XC_SAVE_ENABLE_VERIFY_MODE,</div>
            <div><br>
            </div>
            <div>b) in pv case, the tail also has a list of unmapped
              PFNs at the end of every iteration.</div>
            <div><br>
            </div>
            <div>c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device
              context information (generally<br>
            </div>
            <div>for HVMs).</div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
              Legacy Images (x86 only)<br>
              ========================<br>
              <br>
              Restoring legacy images from older tools shall be handled
              by<br>
              translating the legacy format image into this new format.<br>
              <br>
              It shall not be possible to save in the legacy format.<br>
              <br>
              There are two different legacy images depending on whether
              they were<br>
              generated by a 32-bit or a 64-bit toolstack. These shall
              be<br>
              distinguished by inspecting octets 4-7 in the image. &nbsp;If
              these are<br>
              zero then it is a 64-bit image.<br>
              <br>
              Toolstack &nbsp;Field &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Value<br>
              --------- &nbsp;----- &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;-----<br>
              64-bit &nbsp; &nbsp; Bits 31-63 of the p2m_size field &nbsp;0 (since
              p2m_size &lt; 2^32^)<br>
              32-bit &nbsp; &nbsp; extended-info chunk ID (PV) &nbsp; &nbsp; &nbsp;0xFFFFFFFF<br>
              32-bit &nbsp; &nbsp; Chunk type (HVM) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &lt; 0<br>
              32-bit &nbsp; &nbsp; Page count (HVM) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &gt; 0<br>
              <br>
              Table: Possible values for octet 4-7 in legacy images<br>
              <br>
              This assumes the presence of the extended-info chunk which
              was<br>
              introduced in Xen 3.0.<br>
              <br>
              <br>
              Future Extensions<br>
              =================<br>
              <br>
              All changes to this format require the image version to be
              increased.<br>
              <br>
              The format may be extended by adding additional record
              types.<br>
              <br>
              Extending an existing record type must be done by adding a
              new record<br>
              type. &nbsp;This allows old images with the old record to still
              be<br>
              restored.<br>
              <br>
              The image header may be extended by _appending_ additional
              fields. &nbsp;In<br>
              particular, the `marker`, `id` and `version` fields must
              never change<br>
              size or location.<br>
              <br>
            </blockquote>
          </div>
          <br>
        </div>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>

--------------050200090706030607020009--


--===============8346258649584390089==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8346258649584390089==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 01:36:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 01:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD2GS-00089y-Sj; Tue, 11 Feb 2014 01:35:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WD2GQ-00089t-TE
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 01:35:55 +0000
Received: from [85.158.137.68:53124] by server-13.bemta-3.messagelabs.com id
	C7/5E-26923-97E79F25; Tue, 11 Feb 2014 01:35:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392082550!970265!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19999 invoked from network); 11 Feb 2014 01:35:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 01:35:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,821,1384300800"; 
	d="scan'208,217";a="101479628"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 01:35:49 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL03.citrite.net (10.13.107.80) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Mon, 10 Feb 2014 20:35:48 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Tue, 11 Feb 2014
	02:35:45 +0100
Message-ID: <52F97E6F.2000402@citrix.com>
Date: Tue, 11 Feb 2014 01:35:43 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: <rshriram@cs.ubc.ca>, David Vrabel <david.vrabel@citrix.com>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
In-Reply-To: <CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA2
Cc: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8346258649584390089=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8346258649584390089==
Content-Type: multipart/alternative;
	boundary="------------050200090706030607020009"

--------------050200090706030607020009
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 10/02/2014 20:00, Shriram Rajagopalan wrote:
> On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com
> <mailto:david.vrabel@citrix.com>> wrote:
>
>     Here is a draft of a proposal for a new domain save image format.  It
>     does not currently cover all use cases (e.g., images for HVM guest are
>     not considered).
>
>     http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>
>     Introduction
>     ============
>
>     Revision History
>     ----------------
>
>     --------------------------------------------------------------------
>     Version  Date         Changes
>     -------  -----------  ----------------------------------------------
>     Draft A  6 Feb 2014   Initial draft.
>
>     Draft B  10 Feb 2014  Corrected image header field widths.
>
>                           Minor updates and clarifications.
>     --------------------------------------------------------------------
>
>     Purpose
>     -------
>
>     The _domain save image_ is the context of a running domain used for
>     snapshots of a domain or for transferring domains between hosts during
>     migration.
>
>     There are a number of problems with the format of the domain save
>     image used in Xen 4.4 and earlier (the _legacy format_).
>
>     * Dependent on toolstack word size.  A number of fields within the
>       image are native types such as `unsigned long` which have different
>       sizes between 32-bit and 64-bit hosts.  This prevents domains from
>       being migrated between 32-bit and 64-bit hosts.
>
>     * There is no header identifying the image.
>
>     * The image has no version information.
>
>     A new format that addresses the above is required.
>
>     ARM does not yet have a domain save image format specified and
>     the format described in this specification should be suitable.
>
>
>
> I suggest keeping the processing overhead in mind when designing the new
> image format. Some key things have been addressed, such as making sure
> data
> is always padded to maintain alignment. But there are also some
> aspects of this
> proposal that seem awfully unnecessary. More details below.
>  
>
>
>     Overview
>     ========
>
>     The image format consists of two main sections:
>
>     * _Headers_
>     * _Records_
>
>     Headers
>     -------
>
>     There are two headers: the _image header_, and the _domain header_.
>     The image header describes the format of the image (version etc.).
>     The _domain header_ contains general information about the domain
>     (architecture, type etc.).
>
>     Records
>     -------
>
>     The main part of the format is a sequence of different _records_.
>     Each record type contains information about the domain context.  At a
>     minimum there is an END record marking the end of the records section.
>
>
>     Fields
>     ------
>
>     All the fields within the headers and records have a fixed width.
>
>     Fields are always aligned to their size.
>
>     Padding and reserved fields are set to zero on save and must be
>     ignored during restore.
>
>
> So far so good.
>  
>
>     Integer (numeric) fields in the image header are always in big-endian
>     byte order.
>
>     Integer fields in the domain header and in the records are in the
>     endianness described in the image header (which will typically be the
>     native ordering).
>
>
>
> It's tempting to adopt all the TCP-style madness for transferring a set of
> structured data.  Why this endian-ness mess?  Am I missing something here?
> I am assuming that a lion's share of Xen's deployment is on x86 
> (not including Amazon). So that leaves ARM.  Why not let these 
> processors take the hit of endian-ness conversion?

The large majority is indeed x86, but don't discount ARM because it is
currently in the minority.  With the current requirements, the vast
majority of the data will still be little endian on x86.

>
>     Headers
>     =======
>
>     Image Header
>     ------------
>
>     The image header identifies an image as a Xen domain save image.  It
>     includes the version of this specification that the image complies
>     with.
>
>     Tools supporting version _V_ of the specification shall always save
>     images using version _V_.  Tools shall support restoring from version
>     _V_ and version _V_ - 1.  Tools may additionally support restoring
>     from earlier versions.
>
>     The marker field can be used to distinguish between legacy images and
>     those corresponding to this specification.  Legacy images will have
>     one or more zero bits within the first 8 octets of the image.
>
>     Fields within the image header are always in _big-endian_ byte order,
>     regardless of the setting of the endianness bit.
>
>
> and more endian-ness mess.

Network order is perfectly valid.  It is how all your network packets
arrive...

>
>
>          0     1     2     3     4     5     6     7 octet
>         +-------------------------------------------------+
>         | marker                                          |
>         +-----------------------+-------------------------+
>         | id                    | version                 |
>         +-----------+-----------+-------------------------+
>         | options   |                                     |
>         +-----------+-------------------------------------+
>
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     marker      0xFFFFFFFFFFFFFFFF.
>
>     id          0x58454E46 ("XENF" in ASCII).
>
>     version     0x00000001.  The version of this specification.
>
>     options     Bit 0: Endianness.  0 = little-endian, 1 = big-endian.
>
>                 Bits 1-15: Reserved.
>     --------------------------------------------------------------------
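As a concrete illustration (a sketch, not part of the proposal), the image header above can be validated with Python's struct module; offsets and constants follow the diagram and table, and all fields are read big-endian as specified:

```python
import struct

IMAGE_HEADER_FMT = ">QLLH6x"    # marker, id, version, options + 6 reserved octets
MARKER = 0xFFFFFFFFFFFFFFFF
ID_XENF = 0x58454E46            # "XENF" in ASCII

def parse_image_header(data):
    """Parse the 24-octet image header; every field is big-endian."""
    marker, ident, version, options = struct.unpack_from(IMAGE_HEADER_FMT, data)
    if marker != MARKER:
        # Legacy images have one or more zero bits in the first 8 octets.
        raise ValueError("not an image in this format")
    if ident != ID_XENF:
        raise ValueError("bad id field")
    big_endian = bool(options & 1)  # bit 0: endianness of later integers
    return version, big_endian
```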
>
>     Domain Header
>     -------------
>
>     The domain header includes general properties of the domain.
>
>          0      1     2     3     4     5     6     7 octet
>         +-----------+-----------+-----------+-------------+
>         | arch      | type      | page_shift| (reserved)  |
>         +-----------+-----------+-----------+-------------+
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     arch        0x0000: Reserved.
>
>                 0x0001: x86.
>
>                 0x0002: ARM.
>
>     type        0x0000: Reserved.
>
>                 0x0001: x86 PV.
>
>                 0x0002 - 0xFFFF: Reserved.
>
>     page_shift  Size of a guest page as a power of two.
>
>                 i.e., page size = 2^page_shift^.
>     --------------------------------------------------------------------
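A minimal sketch of reading the domain header, assuming the four fields are 16 bits each (the diagram's column widths suggest this) and that integer order follows the image header's endianness bit:

```python
import struct

def parse_domain_header(data, big_endian=False):
    """Parse the 8-octet domain header into (arch, type, page_size).

    16-bit field widths are an assumption taken from the diagram; the
    integer byte order comes from the image header's endianness bit.
    """
    end = ">" if big_endian else "<"
    arch, dtype, page_shift, _reserved = struct.unpack_from(end + "HHHH", data)
    return arch, dtype, 1 << page_shift  # page size = 2^page_shift
```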
>
>
>     Records
>     =======
>
>     A record has a record header, type specific data and a trailing
>     footer.  If body_length is not a multiple of 8, the body is padded
>     with zeroes to align the checksum field on an 8 octet boundary.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+-------------------------+
>         | type                  | body_length             |
>         +-----------+-----------+-------------------------+
>         | options   | (reserved)                          |
>         +-----------+-------------------------------------+
>         ...
>         Record body of length body_length octets followed by
>         0 to 7 octets of padding.
>         ...
>         +-----------------------+-------------------------+
>         | checksum              | (reserved)              |
>         +-----------------------+-------------------------+
>
>
> I am assuming that the checksum field is present only
> for debugging purposes? Otherwise, I see no reason for the
> computational overhead, given that we are already sending data
> over a reliable channel + IIRC we already have an image-wide checksum 
> when saving the image to disk.
>
> If debugging is the only use case, then I guess, the type field
> can be prefixed with a 1/0 bit, eliminating the need for the 
> 1-bit checksum options field + 7-byte padding. Similarly, if debugging
> mode is not set, why waste another 8 bytes at the end for the checksum
> field.
>
> Unless you think there may be more types with need of special options,
>
> Feel free to correct me if I am missing something elementary here..

What image-wide checksum?

Are you certain that all your data is moving over reliable channels? 
Are you certain that your hard drives are bit-perfect?  Are you certain
that your network connection is bit-perfect?

Given the amount of data sent as part of a migration, 8 bytes per record
is not a substantial overhead.

>
>
>  
>
>     --------------------------------------------------------------------
>     Field        Description
>     -----------  -------------------------------------------------------
>     type         0x00000000: END
>
>                  0x00000001: PAGE_DATA
>
>                  0x00000002: VCPU_INFO
>
>                  0x00000003: VCPU_CONTEXT
>
>                  0x00000004: X86_PV_INFO
>
>                  0x00000005: P2M
>
>                  0x00000006 - 0xFFFFFFFF: Reserved
>
>     body_length  Length in octets of the record body.
>
>     options      Bit 0: 0 = checksum invalid, 1 = checksum valid.
>
>                  Bits 1-15: Reserved.
>
>     checksum     CRC-32 checksum of the record body (including any
>     trailing
>                  padding), or 0x00000000 if the checksum field is invalid.
>     --------------------------------------------------------------------
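The framing rules above (body padded to an 8-octet boundary, optional CRC-32 over the padded body) can be sketched as follows; little-endian integers are assumed here purely for illustration:

```python
import struct
import zlib

def write_record(rtype, body, with_checksum=True):
    """Frame one record: 16-octet header, padded body, 8-octet footer.

    Layout per the diagram above: type (u32), body_length (u32),
    options (u16) plus 6 reserved octets, then the body plus 0 to 7
    octets of zero padding, then checksum (u32) plus 4 reserved octets.
    """
    pad = (-len(body)) % 8                      # pad body to 8-octet boundary
    padded = body + b"\x00" * pad
    options = 1 if with_checksum else 0         # bit 0: checksum valid
    csum = zlib.crc32(padded) if with_checksum else 0
    header = struct.pack("<LLH6x", rtype, len(body), options)
    footer = struct.pack("<L4x", csum)
    return header + padded + footer
```

Note the checksum covers the trailing padding as well as the body, matching the table's description.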
>
>     The following sub-sections specify the record body format for each of
>     the record types.
>
>     END
>     ----
>
>     An end record marks the end of the image.
>
>          0     1     2     3     4     5     6     7 octet
>         +-------------------------------------------------+
>
>     The end record contains no fields; its body_length is 0.
>
>     PAGE_DATA
>     ---------
>
>     The bulk of an image consists of many PAGE_DATA records containing the
>     memory contents.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+-------------------------+
>         | count (C)             | (reserved)              |
>         +-----------------------+-------------------------+
>         | pfn[0]                                          |
>         +-------------------------------------------------+
>         ...
>         +-------------------------------------------------+
>         | pfn[C-1]                                        |
>         +-------------------------------------------------+
>         | page_data[0]...                                 |
>         ...
>         +-------------------------------------------------+
>         | page_data[N-1]...                               |
>         ...
>         +-------------------------------------------------+
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     count       Number of pages described in this record.
>
>     pfn         An array of count PFNs. Bits 63-60 contain
>                 the XEN\_DOMCTL\_PFINFO_* value for that PFN.
>
>     page_data   page_size octets of uncompressed page contents for
>     each page
>                 set as present in the pfn array.
>     --------------------------------------------------------------------
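Extracting the type nibble from a pfn array entry is then a matter of masking bits 63-60 (a sketch; the shift value of 60 is taken from the description above):

```python
XEN_DOMCTL_PFINFO_SHIFT = 60  # bits 63-60 of each pfn entry hold the type

def split_pfn_entry(entry):
    """Split one 64-bit pfn array entry into (pfn, pfinfo_type)."""
    pfn = entry & ((1 << XEN_DOMCTL_PFINFO_SHIFT) - 1)  # low 60 bits
    ptype = entry >> XEN_DOMCTL_PFINFO_SHIFT            # top nibble
    return pfn, ptype
```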
>
>
> s/uncompressed/(compressed/uncompressed)/
> (Remus sends compressed data)

IIRC, remus sends XOR+RLE encoded pages?  Along with HVM domains, this
needs covering in a future draft.

~Andrew

>  
>
>     VCPU_INFO
>     ---------
>
>     > [ This is a combination of parts of the extended-info and
>     > XC_SAVE_ID_VCPU_INFO chunks. ]
>
>     The VCPU_INFO record includes the maximum possible VCPU ID.  This will
>     be followed by a VCPU_CONTEXT record for each online VCPU.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+------------------------+
>         | max_vcpu_id           | (reserved)             |
>         +-----------------------+------------------------+
>
>     --------------------------------------------------------------------
>     Field        Description
>     -----------  ---------------------------------------------------
>     max_vcpu_id  Maximum possible VCPU ID.
>     --------------------------------------------------------------------
>
>
>     VCPU_CONTEXT
>     ------------
>
>     The context for a single VCPU.
>
>          0     1     2     3     4     5     6     7 octet
>         +-----------------------+-------------------------+
>         | vcpu_id               | (reserved)              |
>         +-----------------------+-------------------------+
>         | vcpu_ctx...                                     |
>         ...
>         +-------------------------------------------------+
>
>     --------------------------------------------------------------------
>     Field            Description
>     -----------      ---------------------------------------------------
>     vcpu_id          The VCPU ID.
>
>     vcpu_ctx         Context for this VCPU.
>     --------------------------------------------------------------------
>
>     [ vcpu_ctx format TBD. ]
>
>
>     X86_PV_INFO
>     -----------
>
>     > [ This record replaces part of the extended-info chunk. ]
>
>          0     1     2     3     4     5     6     7 octet
>         +-----+-----+-----+-------------------------------+
>         | w   | ptl | o   | (reserved)                    |
>         +-----+-----+-----+-------------------------------+
>
>     --------------------------------------------------------------------
>     Field            Description
>     -----------      ---------------------------------------------------
>     guest_width (w)  Guest width in octets (either 4 or 8).
>
>     pt_levels (ptl)  Number of page table levels (either 3 or 4).
>
>     options (o)      Bit 0: 0 = no VMASST_pae_extended_cr3,
>                      1 = VMASST_pae_extended_cr3.
>
>                      Bits 1-7: Reserved.
>     --------------------------------------------------------------------
>
>
>     P2M
>     ---
>
>     [ This is a more flexible replacement for the old p2m_size field and
>     p2m array. ]
>
>     The P2M record contains a portion of the source domain's P2M.
>     Multiple P2M records may be sent if the source P2M changes during the
>     stream.
>
>          0     1     2     3     4     5     6     7 octet
>         +-------------------------------------------------+
>         | pfn_begin                                       |
>         +-------------------------------------------------+
>         | pfn_end                                         |
>         +-------------------------------------------------+
>         | mfn[0]                                          |
>         +-------------------------------------------------+
>         ...
>         +-------------------------------------------------+
>         | mfn[N-1]                                        |
>         +-------------------------------------------------+
>
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     pfn_begin   The first PFN in this portion of the P2M
>
>     pfn_end     One past the last PFN in this portion of the P2M.
>
>     mfn         Array of (pfn_end - pfn_begin) MFNs corresponding to
>                 the set of PFNs in the range [pfn_begin, pfn_end).
>     --------------------------------------------------------------------
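On the receive side, applying a P2M record could look like this sketch, where `p2m` is a growing pfn-to-mfn map and later records overwrite earlier ones (matching "multiple P2M records may be sent if the source P2M changes during the stream"):

```python
def apply_p2m_record(p2m, pfn_begin, pfn_end, mfns):
    """Merge one P2M record into a pfn -> mfn mapping.

    The mfn array covers the half-open range [pfn_begin, pfn_end), so
    its length must be exactly pfn_end - pfn_begin.
    """
    if len(mfns) != pfn_end - pfn_begin:
        raise ValueError("mfn array length must equal pfn_end - pfn_begin")
    for i, mfn in enumerate(mfns):
        p2m[pfn_begin + i] = mfn
    return p2m
```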
>
>
>     Layout
>     ======
>
>     The set of valid records depends on the guest architecture and type.
>
>     x86 PV Guest
>     ------------
>
>     An x86 PV guest image will have in this order:
>
>     1. Image header
>     2. Domain header
>     3. X86_PV_INFO record
>     4. At least one P2M record
>     5. At least one PAGE_DATA record
>
>     6. VCPU_INFO record
>     7. At least one VCPU_CONTEXT record
>
>     8. END record
>
>
> There seems to be a bunch of info missing. Here are some
> missing elements that I can recall at the moment:
> a) there is no support for sending over one-time markers that switch the
> receiver's operating mode in the middle of a data stream. 
> E.g., XC_SAVE_ENABLE_COMPRESSION, XC_SAVE_ID_LAST_CHECKPOINT, etc.
> XC_SAVE_ENABLE_VERIFY_MODE,
>
> b) in pv case, the tail also has a list of unmapped PFNs at the end of
> every iteration.
>
> c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device context
> information (generally
> for HVMs).
>  
>
>
>     Legacy Images (x86 only)
>     ========================
>
>     Restoring legacy images from older tools shall be handled by
>     translating the legacy format image into this new format.
>
>     It shall not be possible to save in the legacy format.
>
>     There are two different legacy images depending on whether they were
>     generated by a 32-bit or a 64-bit toolstack. These shall be
>     distinguished by inspecting octets 4-7 in the image.  If these are
>     zero then it is a 64-bit image.
>
>     Toolstack  Field                            Value
>     ---------  -----                            -----
>     64-bit     Bits 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
>     32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
>     32-bit     Chunk type (HVM)                 < 0
>     32-bit     Page count (HVM)                 > 0
>
>     Table: Possible values for octet 4-7 in legacy images
>
>     This assumes the presence of the extended-info chunk which was
>     introduced in Xen 3.0.
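The detection rule reduces to a check on octets 4-7 of the image, assuming (per the table) that any non-zero value there implies a 32-bit toolstack image:

```python
def classify_legacy_image(first8):
    """Guess the toolstack word size of a legacy image from octets 4-7.

    A 64-bit toolstack wrote p2m_size as a 64-bit value < 2^32, so
    octets 4-7 are zero; the listed 32-bit cases (extended-info chunk
    ID, HVM chunk type, HVM page count) are all non-zero there.
    Assumes the extended-info chunk, i.e. images from Xen >= 3.0.
    """
    if len(first8) < 8:
        raise ValueError("need at least the first 8 octets")
    return "64-bit" if first8[4:8] == b"\x00\x00\x00\x00" else "32-bit"
```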
>
>
>     Future Extensions
>     =================
>
>     All changes to this format require the image version to be increased.
>
>     The format may be extended by adding additional record types.
>
>     Extending an existing record type must be done by adding a new record
>     type.  This allows old images with the old record to still be
>     restored.
>
>     The image header may be extended by _appending_ additional fields.  In
>     particular, the `marker`, `id` and `version` fields must never change
>     size or location.
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------050200090706030607020009
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 10/02/2014 20:00, Shriram
      Rajagopalan wrote:<br>
    </div>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">On Mon, Feb 10, 2014 at 9:20 AM,
            David Vrabel <span dir="ltr">&lt;<a moz-do-not-send="true"
                href="mailto:david.vrabel@citrix.com" target="_blank">david.vrabel@citrix.com</a>&gt;</span>
            wrote:<br>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Here
              is a draft of a proposal for a new domain save image
              format. &nbsp;It<br>
              does not currently cover all use cases (e.g., images for
              HVM guest are<br>
              not considered).<br>
              <br>
              <a moz-do-not-send="true"
                href="http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf"
                target="_blank">http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf</a><br>
              <br>
              Introduction<br>
              ============<br>
              <br>
              Revision History<br>
              ----------------<br>
              <br>
--------------------------------------------------------------------<br>
              Version &nbsp;Date &nbsp; &nbsp; &nbsp; &nbsp; Changes<br>
              ------- &nbsp;-----------
              &nbsp;----------------------------------------------<br>
              Draft A &nbsp;6 Feb 2014 &nbsp; Initial draft.<br>
              <br>
              Draft B &nbsp;10 Feb 2014 &nbsp;Corrected image header field widths.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; Minor updates and clarifications.<br>
--------------------------------------------------------------------<br>
              <br>
              Purpose<br>
              -------<br>
              <br>
              The _domain save image_ is the context of a running domain
              used for<br>
              snapshots of a domain or for transferring domains between
              hosts during<br>
              migration.<br>
              <br>
              There are a number of problems with the format of the
              domain save<br>
              image used in Xen 4.4 and earlier (the _legacy format_).<br>
              <br>
              * Dependent on toolstack word size. &nbsp;A number of fields
              within the<br>
              &nbsp; image are native types such as `unsigned long` which
              have different<br>
              &nbsp; sizes between 32-bit and 64-bit hosts. &nbsp;This prevents
              domains from<br>
              &nbsp; being migrated between 32-bit and 64-bit hosts.<br>
              <br>
              * There is no header identifying the image.<br>
              <br>
              * The image has no version information.<br>
              <br>
              A new format that addresses the above is required.<br>
              <br>
              ARM does not yet have a domain save image format
              specified and<br>
              the format described in this specification should be
              suitable.<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>I suggest keeping the processing overhead in mind when
              designing the new</div>
            <div>image format. Some key things have been addressed, such
              as making sure data</div>
            <div>
              is always padded to maintain alignment. But there are also
              some aspects of this</div>
            <div>proposal that seem awfully unnecessary. More details
              below.</div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
              Overview<br>
              ========<br>
              <br>
              The image format consists of two main sections:<br>
              <br>
              * _Headers_<br>
              * _Records_<br>
              <br>
              Headers<br>
              -------<br>
              <br>
              There are two headers: the _image header_, and the _domain
              header_.<br>
              The image header describes the format of the image
              (version etc.).<br>
              The _domain header_ contains general information about the
              domain<br>
              (architecture, type etc.).<br>
              <br>
              Records<br>
              -------<br>
              <br>
              The main part of the format is a sequence of different
              _records_.<br>
              Each record type contains information about the domain
              context. &nbsp;At a<br>
              minimum there is an END record marking the end of the
              records section.<br>
              <br>
              <br>
              Fields<br>
              ------<br>
              <br>
              All the fields within the headers and records have a fixed
              width.<br>
              <br>
              Fields are always aligned to their size.<br>
              <br>
              Padding and reserved fields are set to zero on save and
              must be<br>
              ignored during restore.<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div>So far so good.</div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Integer
              (numeric) fields in the image header are always in
              big-endian<br>
              byte order.<br>
              <br>
              Integer fields in the domain header and in the records are
              in the<br>
              endianness described in the image header (which will
              typically be the<br>
              native ordering).<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div><br>
            </div>
            <div>
              <div>
                <div>It's tempting to adopt all the TCP-style madness for
                  transferring a set of</div>
                <div>structured data. &nbsp;Why this endian-ness mess? &nbsp;Am I
                  missing something here?</div>
                <div>I am assuming that a lion's share of Xen's
                  deployment is on x86&nbsp;</div>
                <div>(not including Amazon). So that leaves ARM. &nbsp;Why
                  not let these&nbsp;</div>
                <div>processors take the hit of endian-ness conversion?</div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    The large majority is indeed x86, but don't discount ARM because it
    is currently in the minority.&nbsp; With the current requirements, the
    vast majority of the data will still be little endian on x86.<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>
              <div><br>
              </div>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Headers<br>
              =======<br>
              <br>
              Image Header<br>
              ------------<br>
              <br>
              The image header identifies an image as a Xen domain save
              image. &nbsp;It<br>
              includes the version of this specification that the image
              complies<br>
              with.<br>
              <br>
              Tools supporting version _V_ of the specification shall
              always save<br>
              images using version _V_. &nbsp;Tools shall support restoring
              from version<br>
              _V_ and version _V_ - 1. &nbsp;Tools may additionally support
              restoring<br>
              from earlier versions.<br>
              <br>
              The marker field can be used to distinguish between legacy
              images and<br>
              those corresponding to this specification. &nbsp;Legacy images
              will have<br>
              one or more zero bits within the first 8 octets of the
              image.<br>
              <br>
              Fields within the image header are always in _big-endian_
              byte order,<br>
              regardless of the setting of the endianness bit.<br>
            </blockquote>
            <div><br>
            </div>
            <div>and more endian-ness mess.</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    Network order is perfectly valid.&nbsp; It is how all your network
    packets arrive...<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div><br>
            </div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | marker &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| version &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------+-----------+-------------------------+<br>
              &nbsp; &nbsp; | options &nbsp; | &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------+-------------------------------------+<br>
              <br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              marker &nbsp; &nbsp; &nbsp;0xFFFFFFFFFFFFFFFF.<br>
              <br>
              id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x58454E46 ("XENF" in ASCII).<br>
              <br>
              version &nbsp; &nbsp; 0x00000001. &nbsp;The version of this
              specification.<br>
              <br>
              options &nbsp; &nbsp; bit 0: Endianness. &nbsp;0 = little-endian, 1 =
              big-endian.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; bit 1-15: Reserved.<br>
--------------------------------------------------------------------<br>
              <br>
              Domain Header<br>
              -------------<br>
              <br>
              The domain header includes general properties of the
              domain.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; &nbsp;1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------+-----------+-----------+-------------+<br>
              &nbsp; &nbsp; | arch &nbsp; &nbsp; &nbsp;| type &nbsp; &nbsp; &nbsp;| page_shift| (reserved) &nbsp;|<br>
              &nbsp; &nbsp; +-----------+-----------+-----------+-------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              arch &nbsp; &nbsp; &nbsp; &nbsp;0x0000: Reserved.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0001: x86.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0002: ARM.<br>
              <br>
              type &nbsp; &nbsp; &nbsp; &nbsp;0x0000: Reserved.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0001: x86 PV.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; 0x0002 - 0xFFFF: Reserved.<br>
              <br>
              page_shift &nbsp;Size of a guest page as a power of two.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; i.e., page size = 2^page_shift^.<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              Records<br>
              =======<br>
              <br>
              A record has a record header, type specific data and a
              trailing<br>
              footer. &nbsp;If body_length is not a multiple of 8, the body
              is padded<br>
              with zeroes to align the checksum field on an 8 octet
              boundary.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | type &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| body_length &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------+-----------+-------------------------+<br>
              &nbsp; &nbsp; | options &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------+-------------------------------------+<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; Record body of length body_length octets followed by<br>
              &nbsp; &nbsp; 0 to 7 octets of padding.<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | checksum &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;| (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              <br>
            </blockquote>
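As a concrete sketch of how a restorer might consume the image header above (the struct and helper names are mine, and the 18-octet offsets are my reading of the diagrams, not normative), assembling each field byte-by-byte gives the mandated big-endian interpretation on any host:

```c
/* Read a big-endian 32-bit field, independent of host byte order. */
static unsigned long be32(const unsigned char *p)
{
    return ((unsigned long)p[0] << 24) | ((unsigned long)p[1] << 16) |
           ((unsigned long)p[2] << 8) | p[3];
}

struct image_header {
    unsigned long long marker;   /* 0xFFFFFFFFFFFFFFFF */
    unsigned long id;            /* 0x58454E46, "XENF" */
    unsigned long version;       /* 0x00000001 */
    unsigned options;            /* bit 0: endianness of the rest */
};

/* Parse and validate the fixed-location fields; returns 1 if the
 * marker and id identify a Xen domain save image, 0 otherwise. */
static int parse_image_header(const unsigned char *buf,
                              struct image_header *hdr)
{
    hdr->marker  = ((unsigned long long)be32(buf) << 32) | be32(buf + 4);
    hdr->id      = be32(buf + 8);
    hdr->version = be32(buf + 12);
    hdr->options = (buf[16] << 8) | buf[17];

    return hdr->marker == 0xFFFFFFFFFFFFFFFFULL && hdr->id == 0x58454E46UL;
}
```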
            <div><br>
            </div>
            <div>I am assuming that the checksum field is present
              only</div>
            <div>for debugging purposes? Otherwise, I see no reason for
              the</div>
            <div>computational overhead, given that we are already
              sending data</div>
            <div>over a reliable channel + IIRC we already have an
              image-wide checksum&nbsp;</div>
            <div>when saving the image to disk.</div>
            <div><br>
            </div>
            <div>If debugging is the only use case, then I guess the
              type field</div>
            <div>can be prefixed with a 1/0 bit, eliminating the need
              for the&nbsp;</div>
            <div>1-bit checksum options field + 7-byte padding.
              Similarly, if debugging&nbsp;</div>
            <div>mode is not set, why waste another 8 bytes at the end
              for the checksum field.</div>
            <div><br>
            </div>
            <div>Unless you think there may be more record types that need
              special options.</div>
            <div><br>
            </div>
            <div>Feel free to correct me if I am missing something
              elementary here..</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    What image-wide checksum?<br>
    <br>
    Are you certain that all your data is moving over reliable
    channels?&nbsp; Are you certain that your hard drives are bit perfect?&nbsp;
    Are you certain that your network connection is bit perfect?<br>
    <br>
    Given the amount of data sent as part of a migration, 8 bytes per
    record is not a substantial overhead.<br>
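For concreteness, the per-record checksum described above might be computed like this (a sketch assuming the common reflected CRC-32 polynomial 0xEDB88320; the draft does not pin down the CRC variant, and the function names are mine). Chaining the update over body and padding means a sender can checksum while streaming, without buffering the padded record:

```c
/* Bitwise CRC-32 (reflected 0xEDB88320 polynomial), chainable:
 * crc32_update(crc32_update(0, a, n), b, m) == CRC-32 of a||b. */
static unsigned long crc32_update(unsigned long crc,
                                  const unsigned char *buf,
                                  unsigned long len)
{
    crc ^= 0xFFFFFFFFUL;
    for (unsigned long i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320UL & (0UL - (crc & 1)));
    }
    return crc ^ 0xFFFFFFFFUL;
}

/* CRC-32 of the record body including its zero padding up to the
 * next 8-octet boundary, as the record footer requires. */
static unsigned long record_checksum(const unsigned char *body,
                                     unsigned long body_length)
{
    static const unsigned char zeroes[7];      /* 0-7 octets of padding */
    unsigned long pad = (8 - body_length % 8) % 8;

    return crc32_update(crc32_update(0, body, body_length), zeroes, pad);
}
```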
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div><br>
            </div>
            <div><br>
            </div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              -----------
              &nbsp;-------------------------------------------------------<br>
              type &nbsp; &nbsp; &nbsp; &nbsp; 0x00000000: END<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000001: PAGE_DATA<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000002: VCPU_INFO<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000003: VCPU_CONTEXT<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000004: X86_PV_INFO<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000005: P2M<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;0x00000006 - 0xFFFFFFFF: Reserved<br>
              <br>
              body_length &nbsp;Length in octets of the record body.<br>
              <br>
              options &nbsp; &nbsp; &nbsp;Bit 0: 0 = checksum invalid, 1 = checksum
              valid.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Bit 1-15: Reserved.<br>
              <br>
              checksum &nbsp; &nbsp; CRC-32 checksum of the record body (including
              any trailing<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;padding), or 0x00000000 if the checksum field
              is invalid.<br>
--------------------------------------------------------------------<br>
              <br>
            </blockquote>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">The
              following sub-sections specify the record body format for
              each of<br>
              the record types.<br>
              <br>
              END<br>
              ----<br>
              <br>
              An end record marks the end of the image.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
              The end record contains no fields; its body_length is 0.<br>
              <br>
              PAGE_DATA<br>
              ---------<br>
              <br>
              The bulk of an image consists of many PAGE_DATA records
              containing the<br>
              memory contents.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | count (C) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | pfn[0] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | pfn[C-1] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | page_data[0]... &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | page_data[N-1]... &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              count &nbsp; &nbsp; &nbsp; Number of pages described in this record.<br>
              <br>
              pfn &nbsp; &nbsp; &nbsp; &nbsp; An array of count PFNs. Bits 63-60 contain<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; the XEN_DOMCTL_PFINFO_* value for that PFN.<br>
              <br>
              page_data &nbsp; page_size octets of uncompressed page contents
              for each page<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; set as present in the pfn array.<br>
--------------------------------------------------------------------<br>
              <br>
            </blockquote>
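A sketch of decoding the pfn entries in this record (the shift constant follows the bits-63-60 split in the table above; the helper names are mine, not from the draft):

```c
/* Each pfn[] entry packs the XEN_DOMCTL_PFINFO_* type into bits
 * 63-60 and the PFN itself into bits 59-0. */
#define PFINFO_SHIFT 60

static unsigned long long pfn_of(unsigned long long entry)
{
    return entry & ((1ULL << PFINFO_SHIFT) - 1);
}

static unsigned pfinfo_of(unsigned long long entry)
{
    return (unsigned)(entry >> PFINFO_SHIFT);
}
```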
            <div><br>
            </div>
            <div>s/uncompressed/(compressed/uncompressed)/</div>
            <div>(Remus sends compressed data)</div>
          </div>
        </div>
      </div>
    </blockquote>
    <br>
    IIRC, Remus sends XOR+RLE-encoded pages?&nbsp; Along with HVM domains,
    this needs covering in a future draft.<br>
    <br>
    ~Andrew<br>
    <br>
    <blockquote
cite="mid:CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_extra">
          <div class="gmail_quote">
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">VCPU_INFO<br>
              ---------<br>
              <br>
              &gt; [ This is a combination of parts of the extended-info
              and<br>
              &gt; XC_SAVE_ID_VCPU_INFO chunks. ]<br>
              <br>
              The VCPU_INFO record includes the maximum possible VCPU
              ID. &nbsp;This will<br>
              be followed by a VCPU_CONTEXT record for each online VCPU.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+------------------------+<br>
              &nbsp; &nbsp; | max_vcpu_id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-----------------------+------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              -----------
              &nbsp;---------------------------------------------------<br>
              max_vcpu_id &nbsp;Maximum possible VCPU ID.<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              VCPU_CONTEXT<br>
              ------------<br>
              <br>
              The context for a single VCPU.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | vcpu_id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----------------------+-------------------------+<br>
              &nbsp; &nbsp; | vcpu_ctx... &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              ----------- &nbsp; &nbsp;
              &nbsp;---------------------------------------------------<br>
              vcpu_id &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;The VCPU ID.<br>
              <br>
              vcpu_ctx &nbsp; &nbsp; &nbsp; &nbsp; Context for this VCPU.<br>
--------------------------------------------------------------------<br>
              <br>
              [ vcpu_ctx format TBD. ]<br>
              <br>
              <br>
              X86_PV_INFO<br>
              -----------<br>
              <br>
              &gt; [ This record replaces part of the extended-info
              chunk. ]<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-----+-----+-----+-------------------------------+<br>
              &nbsp; &nbsp; | w &nbsp; | ptl | o &nbsp; | (reserved) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-----+-----+-----+-------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Description<br>
              ----------- &nbsp; &nbsp;
              &nbsp;---------------------------------------------------<br>
              guest_width (w) &nbsp;Guest width in octets (either 4 or 8).<br>
              <br>
              pt_levels (ptl) &nbsp;Number of page table levels (either 3 or
              4).<br>
              <br>
              options (o) &nbsp; &nbsp; &nbsp;Bit 0: 0 = no VMASST_pae_extended_cr3,<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;1 = VMASST_pae_extended_cr3.<br>
              <br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Bit 1-7: Reserved.<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              P2M<br>
              ---<br>
              <br>
              [ This is a more flexible replacement for the old p2m_size
              field and<br>
              p2m array. ]<br>
              <br>
              The P2M record contains a portion of the source domain's
              P2M.<br>
              Multiple P2M records may be sent if the source P2M changes
              during the<br>
              stream.<br>
              <br>
              &nbsp; &nbsp; &nbsp;0 &nbsp; &nbsp; 1 &nbsp; &nbsp; 2 &nbsp; &nbsp; 3 &nbsp; &nbsp; 4 &nbsp; &nbsp; 5 &nbsp; &nbsp; 6 &nbsp; &nbsp; 7 octet<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | pfn_begin &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | pfn_end &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; |<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | mfn[0] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; ...<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              &nbsp; &nbsp; | mfn[N-1] &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;|<br>
              &nbsp; &nbsp; +-------------------------------------------------+<br>
              <br>
--------------------------------------------------------------------<br>
              Field &nbsp; &nbsp; &nbsp; Description<br>
              -----------
              --------------------------------------------------------<br>
              pfn_begin &nbsp; The first PFN in this portion of the P2M<br>
              <br>
              pfn_end &nbsp; &nbsp; One past the last PFN in this portion of the
              P2M.<br>
              <br>
              mfn &nbsp; &nbsp; &nbsp; &nbsp; Array of (pfn_end - pfn_begin) MFNs
              corresponding to<br>
              &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; the set of PFNs in the range [pfn_begin,
              pfn_end).<br>
--------------------------------------------------------------------<br>
              <br>
              <br>
              Layout<br>
              ======<br>
              <br>
              The set of valid records depends on the guest architecture
              and type.<br>
              <br>
              x86 PV Guest<br>
              ------------<br>
              <br>
              An x86 PV guest image will have, in this order:<br>
              <br>
              1. Image header<br>
              2. Domain header<br>
              3. X86_PV_INFO record<br>
              4. At least one P2M record<br>
              5. At least one PAGE_DATA record</blockquote>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">6.
              VCPU_INFO record<br>
              7. At least one VCPU_CONTEXT record</blockquote>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">8.
              END record<br>
              <br>
            </blockquote>
            <div><br>
            </div>
            <div>There seems to be a fair amount of information missing.
              Here are some missing elements that I can recall at the
              moment:</div>
            <div>a) there is no support for sending one-time markers that
              switch the receiver's operating mode in the middle of a data
              stream,&nbsp;</div>
            <div>e.g. XC_SAVE_ENABLE_COMPRESSION,
              XC_SAVE_ID_LAST_CHECKPOINT,</div>
            <div>XC_SAVE_ENABLE_VERIFY_MODE, etc.</div>
            <div><br>
            </div>
            <div>b) in the PV case, the tail also has a list of unmapped
              PFNs at the end of every iteration.</div>
            <div><br>
            </div>
            <div>c) XC_SAVE_ID_TOOLSTACK -- used by xl to pass device
              context information (generally for HVMs).<br>
            </div>
            <div>&nbsp;</div>
            <blockquote class="gmail_quote" style="margin:0px 0px 0px
0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
              Legacy Images (x86 only)<br>
              ========================<br>
              <br>
              Restoring legacy images from older tools shall be handled
              by<br>
              translating the legacy format image into this new format.<br>
              <br>
              It shall not be possible to save in the legacy format.<br>
              <br>
              There are two different legacy images depending on whether
              they were<br>
              generated by a 32-bit or a 64-bit toolstack. These shall
              be<br>
              distinguished by inspecting octets 4-7 in the image. &nbsp;If
              these are<br>
              zero then it is a 64-bit image.<br>
              <br>
              Toolstack &nbsp;Field &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Value<br>
              --------- &nbsp;----- &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;-----<br>
              64-bit &nbsp; &nbsp; Bit 31-63 of the p2m_size field &nbsp;0 (since
              p2m_size &lt; 2^32^)<br>
              32-bit &nbsp; &nbsp; extended-info chunk ID (PV) &nbsp; &nbsp; &nbsp;0xFFFFFFFF<br>
              32-bit &nbsp; &nbsp; Chunk type (HVM) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &lt; 0<br>
              32-bit &nbsp; &nbsp; Page count (HVM) &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &gt; 0<br>
              <br>
              Table: Possible values for octet 4-7 in legacy images<br>
              <br>
              This assumes the presence of the extended-info chunk which
              was<br>
              introduced in Xen 3.0.<br>
              <br>
              <br>
              Future Extensions<br>
              =================<br>
              <br>
              All changes to this format require the image version to be
              increased.<br>
              <br>
              The format may be extended by adding additional record
              types.<br>
              <br>
              Extending an existing record type must be done by adding a
              new record<br>
              type. &nbsp;This allows old images with the old record to still
              be<br>
              restored.<br>
              <br>
              The image header may be extended by _appending_ additional
              fields. &nbsp;In<br>
              particular, the `marker`, `id` and `version` fields must
              never change<br>
              size or location.<br>
              <br>
            </blockquote>
          </div>
          <br>
        </div>
      </div>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>

--------------050200090706030607020009--


--===============8346258649584390089==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8346258649584390089==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 02:04:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 02:04:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD2hh-0001SS-EB; Tue, 11 Feb 2014 02:04:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WD2hf-0001SN-86
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 02:04:03 +0000
Received: from [85.158.137.68:63686] by server-11.bemta-3.messagelabs.com id
	1E/2F-04255-21589F25; Tue, 11 Feb 2014 02:04:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392084240!973235!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10158 invoked from network); 11 Feb 2014 02:04:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 02:04:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,821,1384300800"; d="scan'208";a="101482936"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 02:03:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 10 Feb 2014 21:03:39 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WD2hH-0007sQ-43;
	Tue, 11 Feb 2014 02:03:39 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WD2hH-000488-2W;
	Tue, 11 Feb 2014 02:03:39 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24830-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 02:03:39 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24830: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3009323473635838048=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3009323473635838048==
Content-Type: text/plain

flight 24830 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24830/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           2 hosts-allocate          broken REGR. vs. 24743
 test-amd64-i386-xl            7 debian-install           running [st=running!]

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  06bbcaf48d09c18a41c482866941ddd5d2846b44
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Eric Houby <ehouby@yahoo.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 727 lines long.)


--===============3009323473635838048==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3009323473635838048==--

From xen-devel-bounces@lists.xen.org Tue Feb 11 04:14:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 04:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD4jA-00077X-32; Tue, 11 Feb 2014 04:13:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WD4j7-00077Q-OW
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 04:13:42 +0000
Received: from [85.158.143.35:7451] by server-3.bemta-4.messagelabs.com id
	FC/62-11539-473A9F25; Tue, 11 Feb 2014 04:13:40 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392092017!4677945!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29687 invoked from network); 11 Feb 2014 04:13:38 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 04:13:38 -0000
Received: from mail-ig0-f175.google.com (mail-ig0-f175.google.com
	[209.85.213.175]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1B4DZjC022716
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Mon, 10 Feb 2014 20:13:36 -0800
Received: by mail-ig0-f175.google.com with SMTP id uq10so8175018igb.2
	for <Xen-devel@lists.xen.org>; Mon, 10 Feb 2014 20:13:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=I403ZxdkTkp4E/zIDi71ABCOcl6XvYxqQCfVCB8Wvyw=;
	b=WMjmX6PeWSnlp5iP1/ef+mqXcXc70HOalNBMJaLEigK1+ndKXSzf5Hu8NHiSiMQOT1
	Idb6+EFppy02k2nnqpugLzTGVayqBKscEXucWAL4INcuG977evy8/E4RchVXg7j/a8CA
	e95v6mfl0cTBhmBnCuGNsIp21j2GhcR+V1+WClaAxj+gYYOXXCNrWPls7a6Kg3D+/j9D
	oS7XdU0uL811VpRvH8ZpqLDaw+HBNKiNwXIxy7XX2nwFtAZ7p4SgUXJbjvp2H64bG2TK
	RAIbXhyO5tjVtXsf/FqFgybp17YYlKLjRlJXaddAQOcZ3gaqf07XIINuw5PmsqKrveLD
	Q5Pw==
X-Received: by 10.50.79.198 with SMTP id l6mr17604208igx.23.1392092014462;
	Mon, 10 Feb 2014 20:13:34 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Mon, 10 Feb 2014 20:12:53 -0800 (PST)
In-Reply-To: <52F97E6F.2000402@citrix.com>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
	<52F97E6F.2000402@citrix.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Mon, 10 Feb 2014 22:12:53 -0600
Message-ID: <CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1212042773042195788=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1212042773042195788==
Content-Type: multipart/alternative; boundary=089e013a1102044f8904f219ae76

--089e013a1102044f8904f219ae76
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Feb 10, 2014 at 7:35 PM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

>  On 10/02/2014 20:00, Shriram Rajagopalan wrote:
>
>  On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com> wrote:
>
>> Here is a draft of a proposal for a new domain save image format.  It
>> does not currently cover all use cases (e.g., images for HVM guest are
>> not considered).
>>
>> http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>>
>> Introduction
>> ============
>>
>> Revision History
>> ----------------
>>
>> --------------------------------------------------------------------
>> Version  Date         Changes
>> -------  -----------  ----------------------------------------------
>> Draft A  6 Feb 2014   Initial draft.
>>
>> Draft B  10 Feb 2014  Corrected image header field widths.
>>
>>                       Minor updates and clarifications.
>> --------------------------------------------------------------------
>>
>> Purpose
>> -------
>>
>> The _domain save image_ is the context of a running domain used for
>> snapshots of a domain or for transferring domains between hosts during
>> migration.
>>
>> There are a number of problems with the format of the domain save
>> image used in Xen 4.4 and earlier (the _legacy format_).
>>
>> * Dependent on toolstack word size.  A number of fields within the
>>   image are native types such as `unsigned long` which have different
>>   sizes between 32-bit and 64-bit hosts.  This prevents domains from
>>   being migrated between 32-bit and 64-bit hosts.
>>
>> * There is no header identifying the image.
>>
>> * The image has no version information.
>>
>> A new format that addresses the above is required.
>>
>> ARM does not yet have a domain save image format specified and
>> the format described in this specification should be suitable.
>>
>>
>
>  I suggest keeping the processing overhead in mind when designing the new
> image format. Some key things have been addressed, such as making sure data
>  is always padded to maintain alignment. But there are also some aspects
> of this
> proposal that seem awfully unnecessary. More details below.
>
>
>>
>> Overview
>> ========
>>
>> The image format consists of two main sections:
>>
>> * _Headers_
>> * _Records_
>>
>> Headers
>> -------
>>
>> There are two headers: the _image header_, and the _domain header_.
>> The image header describes the format of the image (version etc.).
>> The _domain header_ contains general information about the domain
>> (architecture, type etc.).
>>
>> Records
>> -------
>>
>> The main part of the format is a sequence of different _records_.
>> Each record type contains information about the domain context.  At a
>> minimum there is an END record marking the end of the records section.
>>
>>
>> Fields
>> ------
>>
>> All the fields within the headers and records have a fixed width.
>>
>> Fields are always aligned to their size.
>>
>> Padding and reserved fields are set to zero on save and must be
>> ignored during restore.
>>
>>
>  So far so good.
>
>
>> Integer (numeric) fields in the image header are always in big-endian
>> byte order.
>>
>> Integer fields in the domain header and in the records are in the
>> endianness described in the image header (which will typically be the
>> native ordering).
>>
>>
>
>   It's tempting to adopt all the TCP-style madness for transferring a set
> of
> structured data.  Why this endian-ness mess?  Am I missing something here?
> I am assuming that the lion's share of Xen's deployment is on x86
> (not including Amazon). So that leaves ARM.  Why not let these
> processors take the hit of endian-ness conversion?
>
>
> The large majority is indeed x86, but don't discount ARM because it is
> currently in the minority.  With the current requirements, the vast
> majority of the data will still be little endian on x86.
>
>
>
>> Headers
>> =======
>>
>> Image Header
>> ------------
>>
>> The image header identifies an image as a Xen domain save image.  It
>> includes the version of this specification that the image complies
>> with.
>>
>> Tools supporting version _V_ of the specification shall always save
>> images using version _V_.  Tools shall support restoring from version
>> _V_ and version _V_ - 1.  Tools may additionally support restoring
>> from earlier versions.
>>
>> The marker field can be used to distinguish between legacy images and
>> those corresponding to this specification.  Legacy images will have
>> one or more zero bits within the first 8 octets of the image.
>>
>> Fields within the image header are always in _big-endian_ byte order,
>> regardless of the setting of the endianness bit.
>>
>
>  and more endianness mess.
>
>
> Network order is perfectly valid.  It is how all your network packets
> arrive...
>
>
>

True. But why should we explicitly convert the application-level data to
network byte order and then convert it back to host byte order, when it's
already going to be done by the underlying stack, as you put it?
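For what it's worth, the per-field cost being argued over is a single byte
swap at (de)serialisation time. A minimal Python sketch (illustrative only,
not from the draft):

```python
# One 8-octet integer field, serialised in big-endian ("network")
# order as the draft requires for the image header, and in the
# host's typical little-endian order for comparison.
value = 0x1122334455667788

big = value.to_bytes(8, "big")        # explicit conversion on save
little = value.to_bytes(8, "little")  # what x86 would write natively

# Restoring is the mirror conversion; the two orders are byte
# reverses of each other, so the "cost" is one swap per field.
restored = int.from_bytes(big, "big")
```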


>
>
>>      0     1     2     3     4     5     6     7 octet
>>     +-------------------------------------------------+
>>     | marker                                          |
>>     +-----------------------+-------------------------+
>>     | id                    | version                 |
>>     +-----------+-----------+-------------------------+
>>     | options   |                                     |
>>     +-----------+-------------------------------------+
>>
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> marker      0xFFFFFFFFFFFFFFFF.
>>
>> id          0x58454E46 ("XENF" in ASCII).
>>
>> version     0x00000001.  The version of this specification.
>>
>> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
>>
>>             bits 1-15: Reserved.
>> --------------------------------------------------------------------
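(For illustration, the image header above maps onto Python's `struct` like
so; the format string is my reading of the diagram, not part of the draft.)

```python
import struct

# marker (8 octets) | id (4) | version (4) | options (2) | 6 reserved.
# ">" forces big-endian, which the spec mandates for the image header.
IMAGE_HEADER = ">QIIH6x"

def pack_image_header(version=0x00000001, options=0):
    return struct.pack(IMAGE_HEADER, 0xFFFFFFFFFFFFFFFF, 0x58454E46,
                       version, options)

hdr = pack_image_header()
marker, ident, version, options = struct.unpack(IMAGE_HEADER, hdr)
```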
>>
>> Domain Header
>> -------------
>>
>> The domain header includes general properties of the domain.
>>
>>      0      1     2     3     4     5     6     7 octet
>>     +-----------+-----------+-----------+-------------+
>>     | arch      | type      | page_shift| (reserved)  |
>>     +-----------+-----------+-----------+-------------+
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> arch        0x0000: Reserved.
>>
>>             0x0001: x86.
>>
>>             0x0002: ARM.
>>
>> type        0x0000: Reserved.
>>
>>             0x0001: x86 PV.
>>
>>             0x0002 - 0xFFFF: Reserved.
>>
>> page_shift  Size of a guest page as a power of two.
>>
>>             i.e., page size = 2^page_shift^.
>> --------------------------------------------------------------------
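(Again purely illustrative: the domain header row above, assuming the
little-endian setting from the image header and a 4 KiB x86 page size.)

```python
import struct

# arch (2 octets) | type (2) | page_shift (1) | 3 reserved octets.
DOMAIN_HEADER = "<HHB3x"

hdr = struct.pack(DOMAIN_HEADER, 0x0001, 0x0001, 12)  # x86, x86 PV
arch, dom_type, page_shift = struct.unpack(DOMAIN_HEADER, hdr)
page_size = 1 << page_shift  # page size = 2^page_shift
```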
>>
>>
>> Records
>> =======
>>
>> A record has a record header, type specific data and a trailing
>> footer.  If body_length is not a multiple of 8, the body is padded
>> with zeroes to align the checksum field on an 8 octet boundary.
>>
>>      0     1     2     3     4     5     6     7 octet
>>     +-----------------------+-------------------------+
>>     | type                  | body_length             |
>>     +-----------+-----------+-------------------------+
>>     | options   | (reserved)                          |
>>     +-----------+-------------------------------------+
>>     ...
>>     Record body of length body_length octets followed by
>>     0 to 7 octets of padding.
>>     ...
>>     +-----------------------+-------------------------+
>>     | checksum              | (reserved)              |
>>     +-----------------------+-------------------------+
>>
>>
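(A sketch of how the quoted record framing works out: 8-octet header, body
padded to the next 8-octet boundary, then the 8-octet checksum trailer.)

```python
def record_length(body_length):
    # Header is 8 octets (type, body_length, options, reserved).
    # The body is padded with 0-7 zero octets so the trailing
    # checksum field starts on an 8-octet boundary; the trailer
    # then adds another 8 octets (checksum + reserved).
    padding = (-body_length) % 8
    return 8 + body_length + padding + 8
```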
>  I am assuming that the checksum field is present only
> for debugging purposes? Otherwise, I see no reason for the
> computational overhead, given that we are already sending data
> over a reliable channel + IIRC we already have an image-wide checksum
> when saving the image to disk.
>
>  If debugging is the only use case, then I guess the type field
> can be prefixed with a 1/0 bit, eliminating the need for the
> 1-bit checksum options field + 7-byte padding. Similarly, if debugging
> mode is not set, why waste another 8 bytes at the end for the checksum
> field.
>
>  Unless you think there may be more record types in need of special options.
>
>  Feel free to correct me if I am missing something elementary here.
>
>
> What image-wide checksum?
>
>
Maybe I got it wrong. I vaguely recall some sort of a CRC checksum being
stored along with the saved memory snapshots. But that could have been
someone else's research code. Sorry.


> Are you certain that all your data is moving over reliable channels?
>

Let's see. Am I certain that all migration is happening over TCP? Yes.
Worst case, reliable UDP. By reliable, I just mean no bit errors and
such. I am not talking about security.


> Are you certain that your hard drives are bit perfect?
>

Absolutely not. Which is why I was under the impression that the
image-wide checksum would detect a corrupt image.

>   Are you certain that your network connection is bit perfect?
>
>
Nope. But I am fairly certain that good old TCP and IP checksums + the
Ethernet checksum have been put in place to detect these errors and recover
transparently to the application. Are you implying that there is some remote
corner case that allows corrupt data to escape all three of these checks in
the network stack and percolate up to the application layer? I don't think so.

If you are implying that DRAM causes memory bit errors that flip bits here
and there, wreaking havoc, then probably yes, checksums make sense. However,
with ECC memory modules being the norm (please correct me if I am wrong
about this), why start bothering now, if we didn't over the last 3 years?
What has changed?

My point here being, checksums seem like unnecessary compute overhead when
doing live migration or Remus. One can simply set this field to 0 when doing
live migration/Remus. And, as you said later in this mail, the data
transmission overhead is not that much.

However, as far as storing snapshots on disk is concerned, I totally agree
that there needs to be some form of a checksum to ensure that the data has
not been corrupted. But why have record-level checksums? It is not as if we
can recover the corrupted records. The majority of the use cases are, IMO,
do or die: if the checksum is correct, start the restore process; else
abort. So why not have an image-wide checksum?
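To make the comparison concrete, here is a sketch of the two schemes (using
CRC32 purely as a stand-in; the draft does not specify a checksum algorithm):

```python
import zlib

records = [b"record-one", b"record-two", b"record-three"]

# Per-record scheme: one checksum per record; a corrupt record can
# be pinpointed on restore.
per_record = [zlib.crc32(r) for r in records]

# Image-wide scheme: one checksum over everything; the same single
# pass over the bytes, but corruption is only detected, not located.
image_wide = zlib.crc32(b"".join(records))

corrupted = list(records)
corrupted[1] = b"record-twX"  # flip one byte
bad = [i for i, r in enumerate(corrupted) if zlib.crc32(r) != per_record[i]]
```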



>  Given the amount of data sent as part of a migration, 8 bytes per record
> is not a substantial overhead.
>
>
thanks
shriram



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Tue Feb 11 04:14:13 2014
In-Reply-To: <52F97E6F.2000402@citrix.com>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
	<52F97E6F.2000402@citrix.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Mon, 10 Feb 2014 22:12:53 -0600
Message-ID: <CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)


On Mon, Feb 10, 2014 at 7:35 PM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

>  On 10/02/2014 20:00, Shriram Rajagopalan wrote:
>
>  On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com> wrote:
>
>> Here is a draft of a proposal for a new domain save image format.  It
>> does not currently cover all use cases (e.g., images for HVM guest are
>> not considered).
>>
>> http://xenbits.xen.org/people/dvrabel/domain-save-format-B.pdf
>>
>> Introduction
>> ============
>>
>> Revision History
>> ----------------
>>
>> --------------------------------------------------------------------
>> Version  Date         Changes
>> -------  -----------  ----------------------------------------------
>> Draft A  6 Feb 2014   Initial draft.
>>
>> Draft B  10 Feb 2014  Corrected image header field widths.
>>
>>                       Minor updates and clarifications.
>> --------------------------------------------------------------------
>>
>> Purpose
>> -------
>>
>> The _domain save image_ is the context of a running domain used for
>> snapshots of a domain or for transferring domains between hosts during
>> migration.
>>
>> There are a number of problems with the format of the domain save
>> image used in Xen 4.4 and earlier (the _legacy format_).
>>
>> * Dependent on toolstack word size.  A number of fields within the
>>   image are native types such as `unsigned long` which have different
>>   sizes between 32-bit and 64-bit hosts.  This prevents domains from
>>   being migrated between 32-bit and 64-bit hosts.
>>
>> * There is no header identifying the image.
>>
>> * The image has no version information.
>>
>> A new format that addresses the above is required.
>>
>> ARM does not yet have a domain save image format specified and
>> the format described in this specification should be suitable.
>>
>>
>
>  I suggest keeping the processing overhead in mind when designing the new
> image format. Some key things have been addressed, such as making sure data
> is always padded to maintain alignment. But there are also some aspects of
> this proposal that seem awfully unnecessary. More details below.
>
>
>>
>> Overview
>> ========
>>
>> The image format consists of two main sections:
>>
>> * _Headers_
>> * _Records_
>>
>> Headers
>> -------
>>
>> There are two headers: the _image header_, and the _domain header_.
>> The image header describes the format of the image (version etc.).
>> The _domain header_ contains general information about the domain
>> (architecture, type etc.).
>>
>> Records
>> -------
>>
>> The main part of the format is a sequence of different _records_.
>> Each record type contains information about the domain context.  At a
>> minimum there is an END record marking the end of the records section.
>>
>>
>> Fields
>> ------
>>
>> All the fields within the headers and records have a fixed width.
>>
>> Fields are always aligned to their size.
>>
>> Padding and reserved fields are set to zero on save and must be
>> ignored during restore.
>>
>>
>  So far so good.
>
>
>> Integer (numeric) fields in the image header are always in big-endian
>> byte order.
>>
>> Integer fields in the domain header and in the records are in the
>> endianness described in the image header (which will typically be the
>> native ordering).
>>
>>
>
>   It's tempting to adopt all the TCP-style madness for transferring a set
> of structured data.  Why this endianness mess?  Am I missing something here?
> I am assuming that the lion's share of Xen's deployment is on x86
> (not including Amazon). So that leaves ARM.  Why not let these
> processors take the hit of endianness conversion?
>
>
> The large majority is indeed x86, but don't discount ARM because it is
> currently in the minority.  With the current requirements, the vast
> majority of the data will still be little endian on x86.
>
>
>
>  Headers
>> =======
>>
>> Image Header
>> ------------
>>
>> The image header identifies an image as a Xen domain save image.  It
>> includes the version of this specification that the image complies
>> with.
>>
>> Tools supporting version _V_ of the specification shall always save
>> images using version _V_.  Tools shall support restoring from version
>> _V_ and version _V_ - 1.  Tools may additionally support restoring
>> from earlier versions.
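
(Illustration, not part of the draft: the version-compatibility rule above, "restore
version V and V - 1", amounts to a trivial check. `SPEC_VERSION` and the helper name
are mine, chosen for the sketch.)

```python
SPEC_VERSION = 1  # the version of this specification the tools target

def can_restore(image_version: int) -> bool:
    """Tools supporting version V must restore V and V - 1;
    restoring earlier versions is optional."""
    return image_version in (SPEC_VERSION, SPEC_VERSION - 1)
```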
>>
>> The marker field can be used to distinguish between legacy images and
>> those corresponding to this specification.  Legacy images will have
>> one or more zero bits within the first 8 octets of the image.
>>
>> Fields within the image header are always in _big-endian_ byte order,
>> regardless of the setting of the endianness bit.
>>
>
>  and more endian-ness mess.
>
>
> Network order is perfectly valid.  It is how all your network packets
> arrive...
>
>
>

True. But why should we explicitly convert the application-level data to
network byte order and then convert it back to host byte order, when it's
already going to be done by the underlying stack, as you put it?
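
(For concreteness, the conversion being debated is just a byte swap per integer
field. A minimal illustration, not from the proposal, using Python's `struct`
notation:)

```python
import struct
import sys

value = 0x11223344

net = struct.pack(">I", value)   # big-endian / "network order" encoding
host = struct.pack("=I", value)  # native encoding, as the draft permits

# On a little-endian host the two encodings differ only by a byte swap;
# on a big-endian host they are identical.
if sys.byteorder == "little":
    assert net == host[::-1]
else:
    assert net == host
```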


>
>
>>      0     1     2     3     4     5     6     7 octet
>>     +-------------------------------------------------+
>>     | marker                                          |
>>     +-----------------------+-------------------------+
>>     | id                    | version                 |
>>     +-----------+-----------+-------------------------+
>>     | options   |                                     |
>>     +-----------+-------------------------------------+
>>
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> marker      0xFFFFFFFFFFFFFFFF.
>>
>> id          0x58454E46 ("XENF" in ASCII).
>>
>> version     0x00000001.  The version of this specification.
>>
>> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
>>
>>             bit 1-15: Reserved.
>> --------------------------------------------------------------------
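
(Illustration, not part of the draft: the 24-octet image header laid out above
could be packed and validated like this. Field names follow the table; the helper
names are mine.)

```python
import struct

MARKER = 0xFFFFFFFFFFFFFFFF
ID = 0x58454E46       # "XENF" in ASCII
VERSION = 0x00000001

def write_image_header(data_big_endian=False):
    """Pack the 24-octet image header; every field is big-endian
    regardless of the options endianness bit, as the spec requires."""
    options = 0x0001 if data_big_endian else 0x0000
    return struct.pack(">QIIH6x", MARKER, ID, VERSION, options)

def parse_image_header(buf):
    marker, ident, version, options = struct.unpack(">QIIH6x", buf[:24])
    if marker != MARKER:
        # legacy images have one or more zero bits in the first 8 octets
        raise ValueError("legacy image, not this format")
    if ident != ID:
        raise ValueError("not a Xen domain save image")
    return version, bool(options & 0x0001)
```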
>>
>> Domain Header
>> -------------
>>
>> The domain header includes general properties of the domain.
>>
>>      0      1     2     3     4     5     6     7 octet
>>     +-----------+-----------+-----------+-------------+
>>     | arch      | type      | page_shift| (reserved)  |
>>     +-----------+-----------+-----------+-------------+
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> arch        0x0000: Reserved.
>>
>>             0x0001: x86.
>>
>>             0x0002: ARM.
>>
>> type        0x0000: Reserved.
>>
>>             0x0001: x86 PV.
>>
>>             0x0002 - 0xFFFF: Reserved.
>>
>> page_shift  Size of a guest page as a power of two.
>>
>>             i.e., page size = 2^page_shift^.
>> --------------------------------------------------------------------
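
(Again as an illustration only: the 8-octet domain header above, with the arch
and type codes from the table. The endianness argument reflects that these
fields follow the image header's options bit, unlike the image header itself.)

```python
import struct

ARCH_X86, ARCH_ARM = 0x0001, 0x0002
TYPE_X86_PV = 0x0001

def write_domain_header(arch, dom_type, page_shift, fmt="<"):
    """Pack the 8-octet domain header; fmt is '<' or '>' to match the
    endianness declared in the image header."""
    return struct.pack(fmt + "HHHxx", arch, dom_type, page_shift)
```

For example, `write_domain_header(ARCH_X86, TYPE_X86_PV, 12)` would describe an
x86 PV guest with 2^12 = 4096-byte pages.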
>>
>>
>> Records
>> =======
>>
>> A record has a record header, type specific data and a trailing
>> footer.  If body_length is not a multiple of 8, the body is padded
>> with zeroes to align the checksum field on an 8 octet boundary.
>>
>>      0     1     2     3     4     5     6     7 octet
>>     +-----------------------+-------------------------+
>>     | type                  | body_length             |
>>     +-----------+-----------+-------------------------+
>>     | options   | (reserved)                          |
>>     +-----------+-------------------------------------+
>>     ...
>>     Record body of length body_length octets followed by
>>     0 to 7 octets of padding.
>>     ...
>>     +-----------------------+-------------------------+
>>     | checksum              | (reserved)              |
>>     +-----------------------+-------------------------+
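
(Illustration, not part of the draft: framing one record per the diagram above,
including the 0-7 octets of padding that keep the checksum footer 8-octet
aligned. Little-endian shown; the helper name is mine.)

```python
import struct

def write_record(rec_type, body, options=0, checksum=0):
    """Frame one record: 16-octet header (type, body_length, options,
    reserved), body padded with zeroes to an 8-octet boundary, then the
    8-octet footer (checksum, reserved)."""
    pad = (-len(body)) % 8          # 0 to 7 octets of padding
    header = struct.pack("<IIH6x", rec_type, len(body), options)
    footer = struct.pack("<I4x", checksum)
    return header + body + b"\x00" * pad + footer
```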
>>
>>
>  I am assuming that the checksum field is present only
> for debugging purposes? Otherwise, I see no reason for the
> computational overhead, given that we are already sending data
> over a reliable channel, plus IIRC we already have an image-wide checksum
> when saving the image to disk.
>
>  If debugging is the only use case, then I guess the type field
> can be prefixed with a 1/0 bit, eliminating the need for the
> 1-bit checksum options field + 7 bytes of padding. Similarly, if debugging
> mode is not set, why waste another 8 bytes at the end for the checksum
> field.
>
>  Unless you think there may be more record types in need of special options.
>
>  Feel free to correct me if I am missing something elementary here.
>
>
> What image-wide checksum?
>
>
Maybe I got it wrong. I vaguely recall some sort of CRC checksum being
stored along with the saved memory snapshots. But that could have been
someone else's research code. Sorry.


> Are you certain that all your data is moving over reliable channels?
>

Let's see. Am I certain that all migration is happening over TCP? Yes.
Worst case, reliable UDP. By reliable, I just mean no bit errors or
similar; I am not talking about security.


> Are you certain that your hard drives are bit perfect?
>

Absolutely not. Which is why I was under the impression that the
image-wide checksum would detect a corrupt image.

> Are you certain that your network connection is bit perfect?
>
>
Nope. But I am fairly certain that good old TCP and IP checksums, plus the
Ethernet checksum, have been put in place to detect these errors and
recover transparently to the application. Are you implying that there is
some remote corner case that allows corrupt data to escape all three of
these checks in the network stack and percolate up to the application
layer? I don't think so.

If you are implying that DRAM causes memory bit errors that flip bits here
and there, wreaking havoc, then checksums probably make sense. However,
with ECC memory modules being the norm (please correct me if I'm wrong
about this), why start bothering now, if we didn't over the last 3 years?
What has changed?

My point here is that checksums seem like unnecessary compute overhead
when doing live migration or Remus. One can simply set this field to 0
when doing live migration/Remus. And, as you said later in this mail, the
data transmission overhead is not that much.

However, as far as storing snapshots on disk is concerned, I totally agree
that there needs to be some form of checksum to ensure that the data has
not been corrupted. But why have record-level checksums? It is not as if
we can recover the corrupted records. The majority of the use cases are,
IMO, do or die: if the checksum is correct, start the restore process;
else abort. So why not have an image-wide checksum?
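
(The image-wide alternative being argued for here could be as simple as one
streaming CRC accumulated over everything written, checked once on restore.
A sketch, not anything the draft specifies:)

```python
import zlib

def image_crc(chunks):
    """Accumulate a single CRC32 across every header and record as it is
    streamed out, instead of carrying one checksum per record."""
    crc = 0
    for chunk in chunks:
        crc = zlib.crc32(chunk, crc)  # crc32 supports incremental updates
    return crc & 0xFFFFFFFF
```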



>  Given the amount of data sent as part of a migration, 8 bytes per record
> is not a substantial overhead.
>
>
thanks
shriram

--089e013a1102044f8904f219ae76--


--===============1212042773042195788==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1212042773042195788==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 05:08:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 05:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD5Zq-00019g-3b; Tue, 11 Feb 2014 05:08:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WD5Zo-00019Y-El
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 05:08:08 +0000
Received: from [85.158.143.35:31301] by server-3.bemta-4.messagelabs.com id
	DC/2D-11539-730B9F25; Tue, 11 Feb 2014 05:08:07 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392095286!4687051!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7668 invoked from network); 11 Feb 2014 05:08:07 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-14.tower-21.messagelabs.com with SMTP;
	11 Feb 2014 05:08:07 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 10 Feb 2014 21:08:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,823,1384329600"; d="scan'208";a="481365957"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga002.jf.intel.com with ESMTP; 10 Feb 2014 21:08:04 -0800
Received: from FMSMSX109.amr.corp.intel.com (10.18.116.9) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 21:08:04 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx109.amr.corp.intel.com (10.18.116.9) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 10 Feb 2014 21:08:04 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Tue, 11 Feb 2014 13:08:02 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [Xen-devel] Xen 4.4 development update: RC4 end of this week
Thread-Index: AQHPJoiZHusDqFUeokKV5/kyhf4DI5qvgDnw
Date: Tue, 11 Feb 2014 05:08:02 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D85A7@SHSMSX104.ccr.corp.intel.com>
References: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
In-Reply-To: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] Xen 4.4 development update: RC4 end of this week
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote on 2014-02-11:
> == Open ==
> 
> * Win2k3 SP2 RTC infinite loops
>> Regression introduced late in Xen-4.3 development
>    owner: andrew.cooper@citrix
>    status: patches posted, undergoing review.
> * PVH regression
> 
> * dirty vram / IOMMU bug
>> http://bugs.xenproject.org/xen/bug/38
>> status: Patch posted

Though I have a patch to fix it, it still needs more discussion. I hope we can fix it before the Xen 4.4 release.

Best regards,
Yang




From xen-devel-bounces@lists.xen.org Tue Feb 11 06:59:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 06:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD7J3-0006Np-BA; Tue, 11 Feb 2014 06:58:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WD7J1-0006Nk-Ry
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 06:58:56 +0000
Received: from [193.109.254.147:10669] by server-1.bemta-14.messagelabs.com id
	D2/EB-15438-E2AC9F25; Tue, 11 Feb 2014 06:58:54 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392101934!3429097!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24744 invoked from network); 11 Feb 2014 06:58:54 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 06:58:54 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392101934; l=721;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=YdycA9wYZA8jzB3Qvusz0idF5SE=;
	b=vs+wWSpKzQACZZWicJEptInma7tK3ZmjSBNkib/FLtfwvpgDMrq8fKz09tj9d/O+V9M
	lyxDCS+lFIXXpdL8L5QV+mDdmsdLpGshPMSaVIOfkPP3Mz3Kwnvgdr07c1snd+oaFPsCk
	IIqo3qsdbHRqT3kzFoN/tLW9hQYInyT0DXw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id R01cf4q1B6wr3AF
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 11 Feb 2014 07:58:53 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 51D5E50269; Tue, 11 Feb 2014 07:58:53 +0100 (CET)
Date: Tue, 11 Feb 2014 07:58:53 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140211065853.GB21167@aepfle.de>
References: <1392033425-1863-1-git-send-email-olaf@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392033425-1863-1-git-send-email-olaf@aepfle.de>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] tools/xc: pass errno to callers of
	xc_domain_save
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 10, Olaf Hering wrote:

> Now libxl_save_helper:complete can print the actual error string. Also
> checkpoint is updated to pass the errno to its caller.

> @@ -209,9 +209,11 @@ int checkpoint_start(checkpoint_state* s, int fd,
>      rc = xc_domain_save(s->xch, fd, s->domid, 0, 0, flags, callbacks, hvm,
>                          vm_generationid_addr);
>  
> +    errnoval = errno;
>      if (hvm)
>         switch_qemu_logdirty(s, 0);
>  
> +    errno = errnoval;
>      return rc;

It just occurred to me that this part is wrong: the return value should have
been 'rc ? -1 : 0;'. In addition, the caller pycheckpoint_start expects
checkpoint_state->errstr to be set to some error string. So this part of the
patch has to be dropped.
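The errno-preservation idiom the hunk above adds, combined with the return-value normalisation suggested here, can be sketched in isolation. In this sketch, `cleanup_may_clobber_errno` and `failing_op` are hypothetical stand-ins for switch_qemu_logdirty() and a failing xc_domain_save():

```c
#include <errno.h>

/* Hypothetical stand-in for switch_qemu_logdirty(): cleanup work that
 * may overwrite errno before the caller can read it. */
static void cleanup_may_clobber_errno(void)
{
    errno = 0;
}

/* Hypothetical stand-in for a failing xc_domain_save() call. */
static int failing_op(void)
{
    errno = EIO;
    return 1;                  /* non-zero: the save failed */
}

/* Capture errno before cleanup runs, restore it afterwards, and
 * normalise the return value to -1/0 as suggested above. */
static int do_save(int (*op)(void))
{
    int rc = op();
    int errnoval = errno;      /* capture before cleanup clobbers it */

    cleanup_may_clobber_errno();

    errno = errnoval;          /* restore for the caller */
    return rc ? -1 : 0;
}
```

Without the save/restore pair, the caller would see whatever errno the cleanup step happened to leave behind rather than the reason the save itself failed.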

Olaf


From xen-devel-bounces@lists.xen.org Tue Feb 11 07:46:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 07:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD82V-0008Oc-DL; Tue, 11 Feb 2014 07:45:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WD82U-0008OX-RV
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 07:45:55 +0000
Received: from [193.109.254.147:21756] by server-11.bemta-14.messagelabs.com
	id EC/CA-24604-235D9F25; Tue, 11 Feb 2014 07:45:54 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392104752!3442617!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13067 invoked from network); 11 Feb 2014 07:45:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 07:45:53 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1B7jlnA008042
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Feb 2014 07:45:48 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1B7jjMm007040
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 11 Feb 2014 07:45:46 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1B7jjcI028532; Tue, 11 Feb 2014 07:45:45 GMT
Received: from [192.168.0.101] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 23:45:44 -0800
Message-ID: <52F9D518.1090908@oracle.com>
Date: Tue, 11 Feb 2014 15:45:28 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1383872637-15486-1-git-send-email-bob.liu@oracle.com>
	<1383872637-15486-12-git-send-email-bob.liu@oracle.com>
	<20140207154800.GA4855@phenom.dumpdata.com>
In-Reply-To: <20140207154800.GA4855@phenom.dumpdata.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xenproject.org, Bob Liu <lliubbo@gmail.com>, keir@xen.org,
	ian.campbell@citrix.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 11/11] tmem: cleanup: drop useless
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

On 02/07/2014 11:48 PM, Konrad Rzeszutek Wilk wrote:
> On Fri, Nov 08, 2013 at 09:03:57AM +0800, Bob Liu wrote:
>> Function tmem_release_avail_pages_to_host() and tmem_scrub_page() only used
>> once, no need to separate them out.
> 
> All of the patches look good to me. Let me put them in my tree
> and do a sanity check tonight and then send a git pull to Jan
> on Monday.
> 

This series of patches has already been merged.
I have three series of patches for tmem; the first two have been merged.

The third one, which you haven't reviewed yet, is:
[PATCH RESEND 00/14] xen: new patches for tmem

Thanks,
-Bob

> Thank you for making the code much easier to read!
>>
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/tmem.c          |   19 +++++++++++++++++--
>>  xen/common/tmem_xen.c      |   24 ------------------------
>>  xen/include/xen/tmem_xen.h |    3 ---
>>  3 files changed, 17 insertions(+), 29 deletions(-)
>>
>> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
>> index f009fd8..3d15ead 100644
>> --- a/xen/common/tmem.c
>> +++ b/xen/common/tmem.c
>> @@ -1418,7 +1418,19 @@ static unsigned long tmem_relinquish_npages(unsigned long n)
>>              break;
>>      }
>>      if ( avail_pages )
>> -        tmem_release_avail_pages_to_host();
>> +    {
>> +        spin_lock(&tmem_page_list_lock);
>> +        while ( !page_list_empty(&tmem_page_list) )
>> +        {
>> +            struct page_info *pg = page_list_remove_head(&tmem_page_list);
>> +            scrub_one_page(pg);
>> +            tmem_page_list_pages--;
>> +            free_domheap_page(pg);
>> +        }
>> +        ASSERT(tmem_page_list_pages == 0);
>> +        INIT_PAGE_LIST_HEAD(&tmem_page_list);
>> +        spin_unlock(&tmem_page_list_lock);
>> +    }
>>      return avail_pages;
>>  }
>>  
>> @@ -2911,9 +2923,12 @@ EXPORT void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
>>      }
>>      if ( evicts_per_relinq > max_evicts_per_relinq )
>>          max_evicts_per_relinq = evicts_per_relinq;
>> -    tmem_scrub_page(pfp, memflags);
>>      if ( pfp != NULL )
>> +    {
>> +        if ( !(memflags & MEMF_tmem) )
>> +            scrub_one_page(pfp);
>>          relinq_pgs++;
>> +    }
>>  
>>      if ( tmem_called_from_tmem(memflags) )
>>      {
>> diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
>> index 0f5955d..d6e2e0d 100644
>> --- a/xen/common/tmem_xen.c
>> +++ b/xen/common/tmem_xen.c
>> @@ -289,30 +289,6 @@ EXPORT DEFINE_SPINLOCK(tmem_page_list_lock);
>>  EXPORT PAGE_LIST_HEAD(tmem_page_list);
>>  EXPORT unsigned long tmem_page_list_pages = 0;
>>  
>> -/* free anything on tmem_page_list to Xen's scrub list */
>> -EXPORT void tmem_release_avail_pages_to_host(void)
>> -{
>> -    spin_lock(&tmem_page_list_lock);
>> -    while ( !page_list_empty(&tmem_page_list) )
>> -    {
>> -        struct page_info *pg = page_list_remove_head(&tmem_page_list);
>> -        scrub_one_page(pg);
>> -        tmem_page_list_pages--;
>> -        free_domheap_page(pg);
>> -    }
>> -    ASSERT(tmem_page_list_pages == 0);
>> -    INIT_PAGE_LIST_HEAD(&tmem_page_list);
>> -    spin_unlock(&tmem_page_list_lock);
>> -}
>> -
>> -EXPORT void tmem_scrub_page(struct page_info *pi, unsigned int memflags)
>> -{
>> -    if ( pi == NULL )
>> -        return;
>> -    if ( !(memflags & MEMF_tmem) )
>> -        scrub_one_page(pi);
>> -}
>> -
>>  static noinline void *tmem_mempool_page_get(unsigned long size)
>>  {
>>      struct page_info *pi;
>> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
>> index f9639a5..034fd5c 100644
>> --- a/xen/include/xen/tmem_xen.h
>> +++ b/xen/include/xen/tmem_xen.h
>> @@ -42,9 +42,6 @@ extern void tmem_copy_page(char *to, char*from);
>>  extern int tmem_init(void);
>>  #define tmem_hash hash_long
>>  
>> -extern void tmem_release_avail_pages_to_host(void);
>> -extern void tmem_scrub_page(struct page_info *pi, unsigned int memflags);
>> -
>>  extern bool_t opt_tmem_compress;
>>  static inline bool_t tmem_compression_enabled(void)
>>  {
>> -- 
>> 1.7.10.4
>>


From xen-devel-bounces@lists.xen.org Tue Feb 11 07:46:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 07:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD82V-0008Oc-DL; Tue, 11 Feb 2014 07:45:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WD82U-0008OX-RV
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 07:45:55 +0000
Received: from [193.109.254.147:21756] by server-11.bemta-14.messagelabs.com
	id EC/CA-24604-235D9F25; Tue, 11 Feb 2014 07:45:54 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392104752!3442617!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13067 invoked from network); 11 Feb 2014 07:45:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 07:45:53 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1B7jlnA008042
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Feb 2014 07:45:48 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1B7jjMm007040
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 11 Feb 2014 07:45:46 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1B7jjcI028532; Tue, 11 Feb 2014 07:45:45 GMT
Received: from [192.168.0.101] (/116.227.28.52)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 10 Feb 2014 23:45:44 -0800
Message-ID: <52F9D518.1090908@oracle.com>
Date: Tue, 11 Feb 2014 15:45:28 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1383872637-15486-1-git-send-email-bob.liu@oracle.com>
	<1383872637-15486-12-git-send-email-bob.liu@oracle.com>
	<20140207154800.GA4855@phenom.dumpdata.com>
In-Reply-To: <20140207154800.GA4855@phenom.dumpdata.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xenproject.org, Bob Liu <lliubbo@gmail.com>, keir@xen.org,
	ian.campbell@citrix.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v2 11/11] tmem: cleanup: drop useless
	functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

On 02/07/2014 11:48 PM, Konrad Rzeszutek Wilk wrote:
> On Fri, Nov 08, 2013 at 09:03:57AM +0800, Bob Liu wrote:
>> Function tmem_release_avail_pages_to_host() and tmem_scrub_page() only used
>> once, no need to separate them out.
> 
> All of the patches look good to me. Let me put them in my tree
> and do a sanity check tonight and then send a git pull to Jan
> on Monday.
> 

This series of patches have already get merged.
I have three series of patches on tmem, the previous two have been merged.

The third one which you haven't review is:
[PATCH RESEND 00/14] xen: new patches for tmem

Thanks,
-Bob

> Thank you for making the code much easier to read!
>>
>> Signed-off-by: Bob Liu <bob.liu@oracle.com>
>> ---
>>  xen/common/tmem.c          |   19 +++++++++++++++++--
>>  xen/common/tmem_xen.c      |   24 ------------------------
>>  xen/include/xen/tmem_xen.h |    3 ---
>>  3 files changed, 17 insertions(+), 29 deletions(-)
>>
>> diff --git a/xen/common/tmem.c b/xen/common/tmem.c
>> index f009fd8..3d15ead 100644
>> --- a/xen/common/tmem.c
>> +++ b/xen/common/tmem.c
>> @@ -1418,7 +1418,19 @@ static unsigned long tmem_relinquish_npages(unsigned long n)
>>              break;
>>      }
>>      if ( avail_pages )
>> -        tmem_release_avail_pages_to_host();
>> +    {
>> +        spin_lock(&tmem_page_list_lock);
>> +        while ( !page_list_empty(&tmem_page_list) )
>> +        {
>> +            struct page_info *pg = page_list_remove_head(&tmem_page_list);
>> +            scrub_one_page(pg);
>> +            tmem_page_list_pages--;
>> +            free_domheap_page(pg);
>> +        }
>> +        ASSERT(tmem_page_list_pages == 0);
>> +        INIT_PAGE_LIST_HEAD(&tmem_page_list);
>> +        spin_unlock(&tmem_page_list_lock);
>> +    }
>>      return avail_pages;
>>  }
>>  
>> @@ -2911,9 +2923,12 @@ EXPORT void *tmem_relinquish_pages(unsigned int order, unsigned int memflags)
>>      }
>>      if ( evicts_per_relinq > max_evicts_per_relinq )
>>          max_evicts_per_relinq = evicts_per_relinq;
>> -    tmem_scrub_page(pfp, memflags);
>>      if ( pfp != NULL )
>> +    {
>> +        if ( !(memflags & MEMF_tmem) )
>> +            scrub_one_page(pfp);
>>          relinq_pgs++;
>> +    }
>>  
>>      if ( tmem_called_from_tmem(memflags) )
>>      {
>> diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
>> index 0f5955d..d6e2e0d 100644
>> --- a/xen/common/tmem_xen.c
>> +++ b/xen/common/tmem_xen.c
>> @@ -289,30 +289,6 @@ EXPORT DEFINE_SPINLOCK(tmem_page_list_lock);
>>  EXPORT PAGE_LIST_HEAD(tmem_page_list);
>>  EXPORT unsigned long tmem_page_list_pages = 0;
>>  
>> -/* free anything on tmem_page_list to Xen's scrub list */
>> -EXPORT void tmem_release_avail_pages_to_host(void)
>> -{
>> -    spin_lock(&tmem_page_list_lock);
>> -    while ( !page_list_empty(&tmem_page_list) )
>> -    {
>> -        struct page_info *pg = page_list_remove_head(&tmem_page_list);
>> -        scrub_one_page(pg);
>> -        tmem_page_list_pages--;
>> -        free_domheap_page(pg);
>> -    }
>> -    ASSERT(tmem_page_list_pages == 0);
>> -    INIT_PAGE_LIST_HEAD(&tmem_page_list);
>> -    spin_unlock(&tmem_page_list_lock);
>> -}
>> -
>> -EXPORT void tmem_scrub_page(struct page_info *pi, unsigned int memflags)
>> -{
>> -    if ( pi == NULL )
>> -        return;
>> -    if ( !(memflags & MEMF_tmem) )
>> -        scrub_one_page(pi);
>> -}
>> -
>>  static noinline void *tmem_mempool_page_get(unsigned long size)
>>  {
>>      struct page_info *pi;
>> diff --git a/xen/include/xen/tmem_xen.h b/xen/include/xen/tmem_xen.h
>> index f9639a5..034fd5c 100644
>> --- a/xen/include/xen/tmem_xen.h
>> +++ b/xen/include/xen/tmem_xen.h
>> @@ -42,9 +42,6 @@ extern void tmem_copy_page(char *to, char*from);
>>  extern int tmem_init(void);
>>  #define tmem_hash hash_long
>>  
>> -extern void tmem_release_avail_pages_to_host(void);
>> -extern void tmem_scrub_page(struct page_info *pi, unsigned int memflags);
>> -
>>  extern bool_t opt_tmem_compress;
>>  static inline bool_t tmem_compression_enabled(void)
>>  {
>> -- 
>> 1.7.10.4
>>
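The hunks above fold the old tmem_scrub_page() helper into its only caller: a relinquished page is scrubbed only when it is actually leaving tmem, i.e. when MEMF_tmem is clear. A minimal standalone sketch of that pattern follows; the MEMF_tmem bit value, the dummy page, and the scrub counter are illustrative stand-ins, not Xen's actual definitions.

```c
/* Sketch of the conditional-scrub pattern: scrub only pages that
 * leave tmem (MEMF_tmem clear); pages recycled inside tmem skip it. */
#include <stdbool.h>
#include <stddef.h>

#define MEMF_tmem (1u << 7)            /* hypothetical bit position */

static unsigned int pages_scrubbed;    /* counts scrubs, for illustration */
static int dummy_page;                 /* stand-in for a struct page_info */

static void scrub_one_page(void *pg)
{
    (void)pg;                          /* real code wipes the page here */
    pages_scrubbed++;
}

/* Returns true when the page was scrubbed. */
static bool maybe_scrub(void *pfp, unsigned int memflags)
{
    if ( pfp == NULL )
        return false;
    if ( !(memflags & MEMF_tmem) )
    {
        scrub_one_page(pfp);
        return true;
    }
    return false;
}
```

Inlining the check also makes the pfp != NULL test explicit at the call site, instead of hiding it inside the helper.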

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 07:47:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 07:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD841-0008Tk-TM; Tue, 11 Feb 2014 07:47:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD840-0008Tf-E7
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 07:47:28 +0000
Received: from [85.158.137.68:27860] by server-1.bemta-3.messagelabs.com id
	44/4B-17293-F85D9F25; Tue, 11 Feb 2014 07:47:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392104846!1006523!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 525 invoked from network); 11 Feb 2014 07:47:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 07:47:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 07:47:23 +0000
Message-Id: <52F9E39A020000780011B00A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 07:47:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-7-git-send-email-julien.grall@linaro.org>
	<52F8F9F8.9000702@linaro.org>
	<52F90DF8020000780011AD65@nat28.tlf.novell.com>
	<52F90F84.2090506@linaro.org>
In-Reply-To: <52F90F84.2090506@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ian.campbell@citrix.com, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 06/12] xen/passthrough: rework
 dom0_pvh_reqs to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 18:42, Julien Grall <julien.grall@linaro.org> wrote:
> On 02/10/2014 04:35 PM, Jan Beulich wrote:
>>>>> On 10.02.14 at 17:10, Julien Grall <julien.grall@linaro.org> wrote:
>>> On 02/07/2014 05:43 PM, Julien Grall wrote:
>>>> DOM0 on ARM will have the same requirements as DOM0 PVH when the iommu is
>>>> enabled. Both PVH and ARM guests have paging mode translate enabled, so Xen
>>>> can use it to know whether it needs to check the requirements.
>>>>
>>>> Rename the function and remove the word "pvh" from the commit message.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>>  xen/drivers/passthrough/iommu.c |   14 +++++++++-----
>>>>  1 file changed, 9 insertions(+), 5 deletions(-)
>>>>
>>>> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
>>>> index 19b0e23..26a5d91 100644
>>>> --- a/xen/drivers/passthrough/iommu.c
>>>> +++ b/xen/drivers/passthrough/iommu.c
>>>> @@ -130,13 +130,18 @@ int iommu_domain_init(struct domain *d)
>>>>      return hd->platform_ops->init(d);
>>>>  }
>>>>  
>>>> -static __init void check_dom0_pvh_reqs(struct domain *d)
>>>> +static __init void check_dom0_reqs(struct domain *d)
>>>>  {
>>>> +    if ( !paging_mode_translate(d) )
>>>> +        return;
>>>> +
>>>>      if ( !iommu_enabled )
>>>> -        panic("Presently, iommu must be enabled for pvh dom0\n");
>>>> +        panic("Presently, iommu must be enabled to use dom0 with translate "
>>>> +              "paging mode\n");
>>>
>>> Hmmm... this change is wrong. I forgot that an iommu doesn't exist on some
>>> ARM platforms (for instance, the Arndale).
>>>
>>> Do we really need this check for PVH? If yes, I will replace the check
>>> with: is_pvh_domain(d) && !iommu_enabled.
>> 
>> Of course we need it: How would PVH Dom0 be able to do any kind
>> of DMA without an IOMMU?
> 
> Right, on ARM we have the 1:1 memory mapping to avoid this issue.
> 
> I will fix it. Can I keep your ack on this patch?

For this simple an adjustment - sure.

Jan
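The check this thread converges on can be sketched as follows. Only a translated-paging dom0 that is PVH strictly needs an IOMMU, while an ARM dom0 can rely on its 1:1 memory mapping. The struct domain layout and the predicate stubs below are hypothetical stand-ins for Xen's real definitions.

```c
/* Sketch of the agreed dom0 requirements check, with stub types. */
#include <stdbool.h>

struct domain {
    bool is_pvh;              /* PVH guest? */
    bool translated;          /* paging mode translate enabled? */
};

static bool iommu_enabled;    /* mirrors Xen's global flag */

static bool is_pvh_domain(const struct domain *d) { return d->is_pvh; }
static bool paging_mode_translate(const struct domain *d) { return d->translated; }

static struct domain dom_pv  = { false, false };  /* classic PV dom0 */
static struct domain dom_pvh = { true,  true  };  /* PVH dom0 */
static struct domain dom_arm = { false, true  };  /* ARM dom0, 1:1 mapped */

/* Returns true when dom0's requirements are satisfied. */
static bool dom0_reqs_ok(const struct domain *d)
{
    if ( !paging_mode_translate(d) )
        return true;                  /* nothing to check for classic PV */
    if ( is_pvh_domain(d) && !iommu_enabled )
        return false;                 /* PVH dom0 cannot DMA without an IOMMU */
    return true;                      /* ARM: 1:1 mapping covers the no-IOMMU case */
}
```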


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 07:54:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 07:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD8At-0000dZ-Vf; Tue, 11 Feb 2014 07:54:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD8Ar-0000dU-Pi
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 07:54:34 +0000
Received: from [85.158.137.68:38718] by server-9.bemta-3.messagelabs.com id
	4E/66-10184-837D9F25; Tue, 11 Feb 2014 07:54:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392105271!1008303!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11077 invoked from network); 11 Feb 2014 07:54:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 07:54:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 07:54:30 +0000
Message-Id: <52F9E546020000780011B029@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 07:54:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <52F8E56D020000780011AC43@nat28.tlf.novell.com>
	<52F90EAE.6000603@citrix.com>
In-Reply-To: <52F90EAE.6000603@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>, xiantao.zhang@intel.com
Subject: Re: [Xen-devel] [PATCH] VT-d: fix RMRR handling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 18:38, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 10/02/14 13:42, Jan Beulich wrote:
>> Removing mapped RMRR tracking structures in dma_pte_clear_one() is
>> wrong for two reasons: First, these regions may cover more than a
>> single page. And second, multiple devices (and hence multiple devices
>> assigned to any particular guest) may share a single RMRR (whether
>> assigning such devices to distinct guests is a safe thing to do is
>> another question).
>>
>> Therefore move the removal of the tracking structures into the
>> counterpart function to the one doing the insertion -
>> intel_iommu_remove_device(), and add a reference count to the tracking
>> structure.
>>
>> Further, for the handling of the mappings of the respective memory
>> regions to be correct, RMRRs must not overlap. Add a respective check
>> to acpi_parse_one_rmrr().
>>
>> And finally, with all of this being VT-d specific, move the cleanup
>> of the list as well as the structure type definition where it belongs -
>> in VT-d specific rather than IOMMU generic code.
>>
>> Note that this doesn't address yet another issue associated with RMRR
>> handling: The purpose of the RMRRs as well as the way the respective
>> IOMMU page table mappings get inserted both suggest that these regions
>> would need to be marked E820_RESERVED in all (HVM?) guests' memory
>> maps, yet nothing like this is being done in hvmloader. (For PV guests
>> this would also seem to be necessary, but may conflict with PV guests
>> possibly assuming there to be just a single E820 entry representing all
>> of its RAM.)
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Having read up on RMRRs again, I suspect it is more complicated.  RMRRs
> are for legacy devices which need to DMA to the BIOS to function
> correctly.  In the case of dom0, they should necessarily be identity
> mapped in the IOMMU tables (which does appear to be the case currently).
> 
> For domains with passthrough of a device using an RMRR, then the region
> should be marked as reserved (and possibly removed from the physmap for
> HVM guests?), and the IOMMU mappings again identity mapped, so the
> device can talk to the BIOS.  Having the device talk to an
> equally-positioned RMRR in the guest address space seems pointless. 

I at least meant to say exactly that. Except that I implied that _all_
RMRR specified regions should be marked reserved (and "holes"
punched _and_ enforced to remain holes in the physmap), since
otherwise you can't hotplug such a device. (Of course we could
consider the alternative of not allowing hotplug of devices listed
underneath some RMRR, or not allowing assignment of such
devices at all.)

> One way or another, the security when passing through devices covered by
> RMRR would end up being reduced.

If multiple devices share a region, and they're being assigned to
different guests (with Dom0 counted as guest) - yes. But if such
a group was assigned collectively, I don't think security would be
reduced.

Jan
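The reference-counted tracking the patch description proposes can be sketched as below: because several devices may share one RMRR, the tracking entry is only torn down when the last device referencing it is removed, rather than on the first dma_pte_clear_one(). The structure layout and function names here are illustrative, not Xen's actual definitions.

```c
/* Sketch of refcounted RMRR tracking shared by multiple devices. */
#include <stdlib.h>

struct mapped_rmrr {
    unsigned long base_pfn, end_pfn;  /* a region may span many pages */
    unsigned int refcount;            /* devices currently sharing it */
};

/* First device referencing the region allocates the entry. */
static struct mapped_rmrr *rmrr_track(unsigned long base, unsigned long end)
{
    struct mapped_rmrr *m = malloc(sizeof(*m));

    if ( m )
    {
        m->base_pfn = base;
        m->end_pfn = end;
        m->refcount = 1;
    }
    return m;
}

/* Further devices sharing the same RMRR just take a reference. */
static void rmrr_get(struct mapped_rmrr *m) { m->refcount++; }

/* Returns 1 when the last reference was dropped and the entry freed. */
static int rmrr_put(struct mapped_rmrr *m)
{
    if ( --m->refcount )
        return 0;
    free(m);
    return 1;
}
```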


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 08:02:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 08:02:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD8IR-0001fa-Bi; Tue, 11 Feb 2014 08:02:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD8IP-0001fV-Km
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 08:02:21 +0000
Received: from [85.158.139.211:46386] by server-5.bemta-5.messagelabs.com id
	F9/B5-32749-C09D9F25; Tue, 11 Feb 2014 08:02:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392105740!3061337!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28803 invoked from network); 11 Feb 2014 08:02:20 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 08:02:20 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 08:02:18 +0000
Message-Id: <52F9E719020000780011B034@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 08:02:17 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Daniel De Graaf" <dgdegra@tycho.nsa.gov>
References: <52F4B840020000780011A1E2@nat28.tlf.novell.com>
	<52F93509.2030304@tycho.nsa.gov>
In-Reply-To: <52F93509.2030304@tycho.nsa.gov>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH 0/4] flask: XSA-84 follow-ups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 21:22, Daniel De Graaf <dgdegra@tycho.nsa.gov> wrote:
> On 02/07/2014 04:41 AM, Jan Beulich wrote:
>> 1: fix memory leaks
>> 2: fix error propagation from flask_security_set_bool()
>> 3: check permissions first thing in flask_security_set_bool()
>> 4: add compat mode guest support
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Release-wise, I would think that 1-3 should certainly go in. While I'd
>> like 4 to be in for 4.4 too, I realize that's a little more intrusive than
>> one would want at this point.
> 
> All four patches look correct to me. I assume the movement of the 
> flask_security_commit_bools inside the #ifdef is made possible by
> the xlat.lst parsing, but didn't look too closely at how that was
> done.

No, that has nothing to do with the xlat.lst parsing. It's solely done with
the goal of having one less #ifndef COMPAT code section: since static
helper functions must be defined exactly once in the whole compilation
unit, yet the file includes itself after #define-ing COMPAT, all these
functions must live inside such conditionals.
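The self-inclusion pattern described here can be illustrated structurally as follows. This is a standalone sketch, not flask's actual source; the self-#include is left commented out so the fragment stands alone, and the names are hypothetical.

```c
/* Sketch: a translation unit that re-includes itself with COMPAT
 * defined. Any static helper left outside the "#ifndef COMPAT"
 * guard would be defined twice on the second pass. */
#ifndef COMPAT

/* Helpers that must exist exactly once live in the guarded section. */
static int shared_helper(int x) { return x * 2; }

static int native_op(int x) { return shared_helper(x) + 1; }

#define COMPAT 1
/* #include "this_file.c"  -- real code re-includes itself here */

#else

/* Second pass compiles only the compat variants, no new statics. */
static int compat_op(int x) { return shared_helper(x) - 1; }

#endif
```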

> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> Re: what goes in release - I agree that #4 would be nice but I wouldn't
> push too hard to make an exception for it. The users of the XSM interface
> would primarily be toolstack and related domains where a requirement to
> be 64-bit should not be too restrictive (not to say this shouldn't be
> fixed, of course).

With George's feedback on the same matter, I already pushed the
patch back to my 4.5 queue.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 08:21:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 08:21:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD8aM-0002ej-JR; Tue, 11 Feb 2014 08:20:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD8aL-0002ee-GE
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 08:20:53 +0000
Received: from [85.158.139.211:51273] by server-12.bemta-5.messagelabs.com id
	BF/67-15415-46DD9F25; Tue, 11 Feb 2014 08:20:52 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392106851!3063801!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8259 invoked from network); 11 Feb 2014 08:20:52 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 08:20:52 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 08:20:49 +0000
Message-Id: <52F9EB71020000780011B055@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 08:20:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <CAFLBxZaZP91VW0aDRzRHiGyZ+r4+y2hHQyTnExyDKqtOb=JzjQ@mail.gmail.com>
	<20140210183159.GC17601@phenom.dumpdata.com>
In-Reply-To: <20140210183159.GC17601@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, yang.z.zhang@intel.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 development update: RC4 end of this week
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 19:31, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
>>  == Open ==
>> 
> ..
>  
>> * PVH regression
> 
> Patch posted, just needs an Ack from the Intel folks (which
> I think they did give: "Both of fixings are right to me.")
> 
> So if Jan is OK with it (and since he suggested the fix I think
> he would be), then the fix should go in.
> 
> Jan, do you want me to repost it with the right 'Acked' by
> tags?

No need to, unless you have the hope of catching the Intel folks'
attention by doing so (rather unlikely imo).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 08:37:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 08:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD8q8-0003Kz-MO; Tue, 11 Feb 2014 08:37:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD8q6-0003Ku-R9
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 08:37:11 +0000
Received: from [193.109.254.147:57547] by server-2.bemta-14.messagelabs.com id
	CC/C9-01236-631E9F25; Tue, 11 Feb 2014 08:37:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392107829!3449753!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 779 invoked from network); 11 Feb 2014 08:37:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 08:37:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 08:37:08 +0000
Message-Id: <52F9EF3F020000780011B06A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 08:37:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 00:29, Julien Grall <julien.grall@linaro.org> wrote:
> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
> compilation with clang:
> 
> In file included from sched_sedf.c:8:
> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error:
> 'stdarg.h' file not found with <angled> include; use "quotes" instead
>            ^~~~~~~~~~
>            "stdarg.h"
> In file included from sched_sedf.c:8:
> /home/julieng/works/xen/xen/include/xen/lib.h:101:63: error: unknown
> type name 'va_list'
> extern int vsnprintf(char *buf, size_t size, const char *fmt, va_list args)
>                                                               ^
> /home/julieng/works/xen/xen/include/xen/lib.h:105:64: error: unknown
> type name 'va_list'
> extern int vscnprintf(char *buf, size_t size, const char *fmt, va_list args)
> 
> I have the same errors with different versions of clang:
>     - clang 3.0 on Debian wheezy
>     - clang 3.3 on Fedora 20
>     - clang 3.5 built from trunk
> 
> Removing -nostdinc fixes the build with clang.

But does this also do the right thing? I.e. I doubt you're then immune
against picking up headers you don't want to include in a hypervisor
build, or that the build would properly fail if - for whatever reason,
e.g. during development after having made a mistake - a header can't
be found in the paths we want the compiler to search, but can be
found in a "standard" include directory.

IOW I think it first needs to be understood/explained why and by
how much clang's behavior differs here.

Jan

> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/Rules.mk | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index df1428f..ed9b8d0 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -46,7 +46,8 @@ CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
>  CFLAGS += -pipe -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
>  # Solaris puts stdarg.h &c in the system include directory.
>  ifneq ($(XEN_OS),SunOS)
> -CFLAGS += -nostdinc -iwithprefix include
> +CFLAGS-y        += -iwithprefix include
> +CFLAGS-$(gcc)   += -nostdinc
>  endif
>  
>  CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
> -- 
> 1.8.5.3
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 08:43:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 08:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD8wE-0003sq-L8; Tue, 11 Feb 2014 08:43:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WD8wD-0003sl-4m
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 08:43:29 +0000
Received: from [85.158.143.35:14650] by server-3.bemta-4.messagelabs.com id
	1A/36-11539-0B2E9F25; Tue, 11 Feb 2014 08:43:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392108206!4711105!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14514 invoked from network); 11 Feb 2014 08:43:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 08:43:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="99720282"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 08:43:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 03:43:25 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WD8w9-0007a1-Hc;
	Tue, 11 Feb 2014 08:43:25 +0000
Message-ID: <1392108205.22033.16.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 11 Feb 2014 08:43:25 +0000
In-Reply-To: <1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, "Luis R.
	Rodriguez" <mcgrof@suse.com>, Paul Durrant <Paul.Durrant@citrix.com>, Wei
	Liu <wei.liu2@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> 
> Although the xen-netback interfaces do not participate in the
> link as a typical Ethernet device does, interfaces for them are
> still required under the current architecture. IPv6 addresses
> do not need to be created or assigned on the xen-netback interfaces,
> however, even if the frontend devices do need them, so clear the
> multicast flag to ensure the net core does not initiate IPv6
> Stateless Address Autoconfiguration.

How does disabling SAA flow from the absence of multicast? Surely these
should be controlled logically independently even if there is some
notional linkage. Can SAA not be disabled directly?

>  Clearing the multicast
> flag is required given that the net_device is using the
> ether_setup() helper.
> 
> There's also no good reason why the special MAC address of
> FE:FF:FF:FF:FF:FF is being used other than to avoid issues
> with STP,

With your change there is a random probability on reboot that the bridge
will end up with a randomly generated MAC address instead of a static
MAC address (usually that of the physical NIC on the bridge), since the
bridge tends to inherit the lowest MAC of any port.

Since IP configuration is done on the bridge this will break DHCP,
whether it is using static or dynamic mappings from MAC to IP address,
and the host will randomly change IP address on reboot.

So Nack for that reason.

>  since using this can create an issue if a user
> decides to enable multicast on the backend interfaces

Please explain what this issue is.

Also how can a user enable multicast on the b/e? AFAIK only Solaris ever
implemented the m/c bits of the Xen PV network protocol (not that I
wouldn't welcome attempts to add it to other platforms).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 08:47:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 08:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD903-000411-CH; Tue, 11 Feb 2014 08:47:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WD902-00040w-6Y
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 08:47:26 +0000
Received: from [85.158.137.68:11177] by server-12.bemta-3.messagelabs.com id
	88/31-01674-D93E9F25; Tue, 11 Feb 2014 08:47:25 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392108444!1017458!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9549 invoked from network); 11 Feb 2014 08:47:25 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 08:47:25 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WD8zw-000O7U-4W; Tue, 11 Feb 2014 08:47:20 +0000
Date: Tue, 11 Feb 2014 09:47:20 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140211084720.GA92054@deinos.phlegethon.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<52F91071.1080007@eu.citrix.com> <52F91DCC.1060007@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F91DCC.1060007@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:43 +0000 on 10 Feb (1392054204), Andrew Cooper wrote:
> REG_B and REG_C are to do with interrupts, and which events
> should(B)/have(C) generated interrupts.  The worst case is that a guest
> gets none/too-few/too-many interrupts when trying to drive the RTC. 
> None of this should lead to clock skew, as reading the time values
> directly will still provide the same information as before, although any
> guest which attempts to guess time based on counting periodic interrupts
> from the RTC is a) already broken and b) already having massive skew as
> a VM due to vcpu scheduling.

Sadly, not the case.  There absolutely are OSes that compute wallclock
time by counting timer ticks -- that's why we have the no-missed-ticks
mode.

Based on what these bugs have looked like in the past, the main risk
is that some kind of guest behaviour will cause clock skew or even
a clock freeze in the guest.  Some of these bugs have depended on
workload, e.g. something that changes the tick frequency.

> XenRT does have tests for clock drift, but don't know for certain
> whether they have been run against the new code yet.
> 
> I will ensure they get run on v2 of the patch.

Excellent!

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 18:43 +0000 on 10 Feb (1392054204), Andrew Cooper wrote:
> REG_B and REG_C are to do with interrupts, and which events
> should(B)/have(C) generated interrupts.  The worst case is that a guest
> gets none/too-few/too-many interrupts when trying to drive the RTC. 
> None of this should lead to clock skew, as reading the time values
> directly will still provide the same information as before, although any
> guest which attempts to guess time based on counting periodic interrupts
> from the RTC is a) already broken and b) already having massive skew as
> a VM due to vcpu scheduling.
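
For illustration, the REG_B/REG_C split described in the quote above can be
sketched as a toy model (MC146818-style semantics, not Xen's emulation code):
REG_B enables interrupt sources, REG_C latches which sources fired, and the
summary IRQ flag is only raised when a latched event is also enabled.

```c
/* Illustrative RTC register model; bit names follow the MC146818 layout. */
#define RTC_UIE  0x10   /* update-ended interrupt enable / flag */
#define RTC_AIE  0x20   /* alarm interrupt enable / flag */
#define RTC_PIE  0x40   /* periodic interrupt enable / flag */
#define RTC_IRQF 0x80   /* REG_C summary bit: "an enabled event fired" */

/* Given the current REG_B enable mask and an event bit, return the REG_C
 * value after the event: the event flag latches regardless of REG_B, but
 * the IRQ is only asserted (IRQF set) when the event is enabled. */
static unsigned char reg_c_on_event(unsigned char reg_b, unsigned char event)
{
    unsigned char reg_c = event;
    if (reg_b & event)
        reg_c |= RTC_IRQF;
    return reg_c;
}
```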

Sadly, not the case.  There absolutely are OSes that compute wallclock
time by counting timer ticks -- that's why we have the no-missed-ticks
mode.

Based on what these bugs have looked like in the past, the main risk
is that some kind of guest behaviour will cause clock skew or even
clock freeze in the guest.   In the past some of these have depended
on workload, e.g. something that changes the tick frequency. 
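
A toy model (illustrative only, not Xen code) of why this matters: a guest
that derives wallclock time by counting periodic RTC interrupts runs slow by
exactly the number of ticks it never receives; the no-missed-ticks policy
delivers make-up interrupts after the vCPU was descheduled, so the count --
and hence the guest's clock -- stays correct.

```c
/* Guest's idea of elapsed seconds, derived purely from counted ticks. */
static int guest_clock_s(int ticks_counted, int hz)
{
    return ticks_counted / hz;
}

/* Ticks delivered under a no-missed-ticks policy: the on-time ticks plus
 * make-up ticks for every tick lost while the vCPU was off-cpu. */
static int ticks_no_missed_ticks(int hz, int elapsed_s, int lost)
{
    return (hz * elapsed_s - lost) + lost;
}
```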

> XenRT does have tests for clock drift, but I don't know for certain
> whether they have been run against the new code yet.
> 
> I will ensure they get run on v2 of the patch.

Excellent!

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 08:53:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 08:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD95m-0004H6-Ak; Tue, 11 Feb 2014 08:53:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WD95k-0004H0-Aq
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 08:53:20 +0000
Received: from [85.158.143.35:50343] by server-3.bemta-4.messagelabs.com id
	97/C6-11539-FF4E9F25; Tue, 11 Feb 2014 08:53:19 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392108798!4730616!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 425 invoked from network); 11 Feb 2014 08:53:19 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 08:53:19 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WD95h-000ODV-H6; Tue, 11 Feb 2014 08:53:17 +0000
Date: Tue, 11 Feb 2014 09:53:17 +0100
From: Tim Deegan <tim@xen.org>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140211085317.GB92054@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 23:29 +0000 on 10 Feb (1392071374), Julien Grall wrote:
> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
> compilation with clang:
> 
> In file included from sched_sedf.c:8:
> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
> not found with <angled> include; use "quotes" instead
>            ^~~~~~~~~~
>            "stdarg.h"

Looks like on your system stdarg.h doesn't live in a compiler-specific
path, like we have for the BSDs.  I think we should just go to using
our own definitions for stdarg/stdbool everywhere; trying to chase the
compiler-specific versions around is a PITA, and the pieces we
actually need are trivial.
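
As a sketch of that direction (not Xen's actual header), the trivial pieces
of stdarg can be expressed purely in terms of compiler builtins that both
GCC and clang provide, so no compiler-specific include path is needed at
all:

```c
/* Minimal stand-alone stdarg built on compiler builtins. */
typedef __builtin_va_list va_list;
#define va_start(ap, last) __builtin_va_start(ap, last)
#define va_arg(ap, type)   __builtin_va_arg(ap, type)
#define va_end(ap)         __builtin_va_end(ap)

/* Sanity check: sum n ints passed as varargs. */
static int sum_ints(int n, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, n);
    while (n--)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
```

stdbool is even simpler, needing only the bool/true/false definitions.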

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:02:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9EK-00059k-NZ; Tue, 11 Feb 2014 09:02:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WD9EJ-00059f-69
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:02:11 +0000
Received: from [85.158.143.35:35409] by server-2.bemta-4.messagelabs.com id
	92/2E-10891-217E9F25; Tue, 11 Feb 2014 09:02:10 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392109327!4735289!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31313 invoked from network); 11 Feb 2014 09:02:07 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 09:02:07 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WD9EA-000ONR-NP; Tue, 11 Feb 2014 09:02:02 +0000
Date: Tue, 11 Feb 2014 10:02:02 +0100
From: Tim Deegan <tim@xen.org>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Message-ID: <20140211090202.GC92054@deinos.phlegethon.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 08:15 +0000 on 10 Feb (1392016516), Zhang, Yang Z wrote:
> Tim Deegan wrote on 2014-02-10:
> > At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
> >> From: Yang Zhang <yang.z.zhang@Intel.com>
> >> 
> >> When log dirty mode is enabled, all of the guest's memory is set to
> >> readonly.  In a HAP-enabled domain, this clears the write bit in all
> >> EPT entries to make the memory readonly.  This causes a problem if
> >> VT-d shares the page tables with EPT: a device may issue a DMA write
> >> request, and the VT-d engine then treats the target memory as
> >> readonly, resulting in a VT-d
> > fault.
> > 
> > So that's a problem even if only the VGA framebuffer is being tracked
> > -- DMA from a passthrough device will either cause a spurious error or
> > fail to update the dirty bitmap.
> 
> Do you mean the VGA framebuffer will be used as a DMA buffer in the guest? If so, I think it is the guest's responsibility to ensure that never happens.
> 

I don't think that works.  We can't expect arbitrary OSes to (a) know
they're running on Xen and (b) know that that means they can't DMA to
or from their framebuffers.

> Without VT-d and EPT share page, we still cannot track the memory
> updating from DMA.

Yeah, but at least we don't risk crashing the _host_ by throwing DMA
failures around. 

> I think the point is that we cannot track the
> memory updating via DMA. So the user should use the log dirty mode
> carefully. Also, I am not sure whether the memory updating from dom0
> and QEMU is tracked currently.

Yes, dom0 and qemu updates are tracked in the log-dirty bitmaps.
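
A toy sketch of the mechanism under discussion (illustrative only, not Xen's
implementation): under log-dirty tracking, pages start read-only; a CPU
write takes a fault, which marks the page in the dirty bitmap and restores
write access.  A DMA write from a passthrough device never goes through
this fault path, which is exactly why read-only entries in shared EPT/VT-d
tables fault instead of dirtying the bitmap.

```c
#include <stdint.h>

#define NPAGES 64

static uint8_t dirty_bitmap[NPAGES / 8];
static uint8_t writable[NPAGES];        /* 1 = write bit set in the PTE */

/* Enabling log-dirty clears the write bit everywhere. */
static void log_dirty_enable(void)
{
    int i;
    for (i = 0; i < NPAGES; i++)
        writable[i] = 0;
}

/* CPU write path: fault on a read-only page, record it as dirty, and let
 * subsequent writes through without faulting again. */
static void cpu_write(int pfn)
{
    if (!writable[pfn]) {
        dirty_bitmap[pfn / 8] |= 1 << (pfn % 8);
        writable[pfn] = 1;
    }
}

static int test_dirty(int pfn)
{
    return (dirty_bitmap[pfn / 8] >> (pfn % 8)) & 1;
}
```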

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:03:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9F3-0005Cg-5B; Tue, 11 Feb 2014 09:02:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD9F1-0005CV-Mw
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:02:55 +0000
Received: from [85.158.143.35:43375] by server-3.bemta-4.messagelabs.com id
	67/09-11539-F37E9F25; Tue, 11 Feb 2014 09:02:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392109374!4734931!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9389 invoked from network); 11 Feb 2014 09:02:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 09:02:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 09:02:53 +0000
Message-Id: <52F9F549020000780011B097@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 09:02:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<52F91071.1080007@eu.citrix.com> <52F91DCC.1060007@citrix.com>
In-Reply-To: <52F91DCC.1060007@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 19:43, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> REG_B and REG_C are to do with interrupts, and which events
> should(B)/have(C) generated interrupts.  The worst case is that a guest
> gets none/too-few/too-many interrupts when trying to drive the RTC. 
> None of this should lead to clock skew, as reading the time values
> directly will still provide the same information as before, although any
> guest which attempts to guess time based on counting periodic interrupts
> from the RTC is a) already broken

Why? If an OS itself guarantees low enough interrupt latency (for
the chosen period), no interrupt would ever get lost, and counting
would be precise afaict.

> and b) already having massive skew as
> a VM due to vcpu scheduling.

Yes, very likely. But that's a problem we (the ones virtualizing the
OS) have to solve, not the OS (it would be the other way around
only if the OS was PV or at least virtualization aware).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:05:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:05:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9H2-0005MU-OP; Tue, 11 Feb 2014 09:05:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WD9H1-0005MH-Hv
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:05:00 +0000
Received: from [85.158.137.68:35959] by server-8.bemta-3.messagelabs.com id
	16/FE-16039-AB7E9F25; Tue, 11 Feb 2014 09:04:58 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392109497!1027678!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12920 invoked from network); 11 Feb 2014 09:04:57 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 09:04:57 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392109497; l=21450;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=XH0PDybLaHDV29Jvh9iM4mgQURU=;
	b=JM1MVjMXULefhbS9LN2+DGtFJPErIRWIsKZ2ShtKLatuginC8Hs9WTP+W7YfiK7XfHY
	cqufF5QBLF2BmocvcFjLbvmhFM4r0Vc6cQvMcI6rlLIh+iR+Mk5qsbSO8q2RlLu7FRIsi
	0hxTDQdXRW6zAsq9ngMJTM7yqMLjx3jqmMA=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id K028a6q1B94u53O
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 11 Feb 2014 10:04:56 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 6BE0750269; Tue, 11 Feb 2014 10:04:56 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Tue, 11 Feb 2014 10:04:52 +0100
Message-Id: <1392109492-17436-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH v2] tools/xc: pass errno to callers of
	xc_domain_save
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Callers of xc_domain_save use errno to print diagnostics if the call
fails, but xc_domain_save does not preserve the actual errno on
failure.

This change preserves errno in all cases where the code jumps to the
label "out". In addition, a new label "exit" is added to also catch
code paths which used to simply "return 1".

Now libxl_save_helper:complete can print the actual error string.

Note: some of the functions used in xc_domain_save do not use errno to
indicate the reason for failure. In those cases errno remains
undefined, as it was before this change.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
v2:
 remove the checkpoint changes; the caller needs fixing to make use of errno

 tools/libxc/xc_domain_save.c | 88 +++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 82 insertions(+), 6 deletions(-)

diff --git a/tools/libxc/xc_domain_save.c b/tools/libxc/xc_domain_save.c
index 42c4752..f32ac81 100644
--- a/tools/libxc/xc_domain_save.c
+++ b/tools/libxc/xc_domain_save.c
@@ -806,6 +806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     xc_dominfo_t info;
     DECLARE_DOMCTL;
 
+    int errnoval = 0;
     int rc = 1, frc, i, j, last_iter = 0, iter = 0;
     int live  = (flags & XCFLAGS_LIVE);
     int debug = (flags & XCFLAGS_DEBUG);
@@ -898,8 +899,8 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( hvm && !callbacks->switch_qemu_logdirty )
     {
         ERROR("No switch_qemu_logdirty callback provided.");
-        errno = EINVAL;
-        return 1;
+        errnoval = EINVAL;
+        goto exit;
     }
 
     outbuf_init(xch, &ob_pagebuf, OUTBUF_SIZE);
@@ -913,14 +914,16 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( !get_platform_info(xch, dom,
                             &ctx->max_mfn, &ctx->hvirt_start, &ctx->pt_levels, &dinfo->guest_width) )
     {
+        errnoval = errno;
         ERROR("Unable to get platform info.");
-        return 1;
+        goto exit;
     }
 
     if ( xc_domain_getinfo(xch, dom, 1, &info) != 1 )
     {
+        errnoval = errno;
         PERROR("Could not get domain info");
-        return 1;
+        goto exit;
     }
 
     shared_info_frame = info.shared_info_frame;
@@ -932,6 +935,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                            PROT_READ, shared_info_frame);
         if ( !live_shinfo )
         {
+            errnoval = errno;
             PERROR("Couldn't map live_shinfo");
             goto out;
         }
@@ -942,6 +946,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( dinfo->p2m_size > ~XEN_DOMCTL_PFINFO_LTAB_MASK )
     {
+        errnoval = E2BIG;
         ERROR("Cannot save this big a guest");
         goto out;
     }
@@ -967,6 +972,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             
             if ( frc < 0 )
             {
+                errnoval = errno;
                 PERROR("Couldn't enable shadow mode (rc %d) (errno %d)", frc, errno );
                 goto out;
             }
@@ -975,6 +981,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         /* Enable qemu-dm logging dirty pages to xen */
         if ( hvm && callbacks->switch_qemu_logdirty(dom, 1, callbacks->data) )
         {
+            errnoval = errno;
             PERROR("Couldn't enable qemu log-dirty mode (errno %d)", errno);
             goto out;
         }
@@ -985,6 +992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -994,6 +1002,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     {
         if (!(compress_ctx = xc_compression_create_context(xch, dinfo->p2m_size)))
         {
+            errnoval = errno;
             ERROR("Failed to create compression context");
             goto out;
         }
@@ -1012,6 +1021,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( !to_send || !to_fix || !to_skip )
     {
+        errnoval = ENOMEM;
         ERROR("Couldn't allocate to_send array");
         goto out;
     }
@@ -1024,12 +1034,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
         if ( hvm_buf_size == -1 )
         {
+            errnoval = errno;
             PERROR("Couldn't get HVM context size from Xen");
             goto out;
         }
         hvm_buf = malloc(hvm_buf_size);
         if ( !hvm_buf )
         {
+            errnoval = ENOMEM;
             ERROR("Couldn't allocate memory");
             goto out;
         }
@@ -1043,7 +1055,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( (pfn_type == NULL) || (pfn_batch == NULL) || (pfn_err == NULL) )
     {
         ERROR("failed to alloc memory for pfn_type and/or pfn_batch arrays");
-        errno = ENOMEM;
+        errnoval = ENOMEM;
         goto out;
     }
     memset(pfn_type, 0,
@@ -1052,6 +1064,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Setup the mfn_to_pfn table mapping */
     if ( !(ctx->live_m2p = xc_map_m2p(xch, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) )
     {
+        errnoval = errno;
         PERROR("Failed to map live M2P table");
         goto out;
     }
@@ -1059,6 +1072,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Start writing out the saved-domain record. */
     if ( write_exact(io_fd, &dinfo->p2m_size, sizeof(unsigned long)) )
     {
+        errnoval = errno;
         PERROR("write: p2m_size");
         goto out;
     }
@@ -1071,6 +1085,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ctx->live_p2m = map_and_save_p2m_table(xch, io_fd, dom, ctx, live_shinfo);
         if ( ctx->live_p2m == NULL )
         {
+            errnoval = errno;
             PERROR("Failed to map/save the p2m frame list");
             goto out;
         }
@@ -1097,12 +1112,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     tmem_saved = xc_tmem_save(xch, dom, io_fd, live, XC_SAVE_ID_TMEM);
     if ( tmem_saved == -1 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tmem)");
         goto out;
     }
 
     if ( !live && save_tsc_info(xch, dom, io_fd) < 0 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tsc)");
         goto out;
     }
@@ -1143,6 +1160,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     dinfo->p2m_size, NULL, 0, NULL);
                 if ( frc != dinfo->p2m_size )
                 {
+                    errnoval = errno;
                     ERROR("Error peeking shadow bitmap");
                     goto out;
                 }
@@ -1257,6 +1275,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 xch, dom, PROT_READ, pfn_type, pfn_err, batch);
             if ( region_base == NULL )
             {
+                errnoval = errno;
                 PERROR("map batch failed");
                 goto out;
             }
@@ -1264,6 +1283,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* Get page types */
             if ( xc_get_pfn_type_batch(xch, dom, batch, pfn_type) )
             {
+                errnoval = errno;
                 PERROR("get_pfn_type_batch failed");
                 goto out;
             }
@@ -1332,6 +1352,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
             if ( wrexact(io_fd, &batch, sizeof(unsigned int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (2)");
                 goto out;
             }
@@ -1341,6 +1362,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     ((unsigned long *)pfn_type)[j] = pfn_type[j];
             if ( wrexact(io_fd, pfn_type, sizeof(unsigned long)*batch) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (3)");
                 goto out;
             }
@@ -1368,6 +1390,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                        (char*)region_base+(PAGE_SIZE*(j-run)), 
                                        PAGE_SIZE*run) != PAGE_SIZE*run )
                         {
+                            errnoval = errno;
                             PERROR("Error when writing to state file (4a)"
                                   " (errno %d)", errno);
                             goto out;
@@ -1396,6 +1419,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                     if ( race && !live )
                     {
+                        errnoval = errno;
                         ERROR("Fatal PT race (pfn %lx, type %08lx)", pfn,
                               pagetype);
                         goto out;
@@ -1409,6 +1433,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                                         pfn, 1 /* raw page */);
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add pagetable page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1428,6 +1453,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                              */
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4b)\n");
                                 goto out;
@@ -1437,6 +1463,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     else if ( wruncached(io_fd, live, page,
                                          PAGE_SIZE) != PAGE_SIZE )
                     {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (4b)"
                               " (errno %d)", errno);
                         goto out;
@@ -1456,6 +1483,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1465,6 +1493,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                         {
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4c)\n");
                                 goto out;
@@ -1483,6 +1512,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                (char*)region_base+(PAGE_SIZE*(j-run)), 
                                PAGE_SIZE*run) != PAGE_SIZE*run )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (4c)"
                           " (errno %d)", errno);
                     goto out;
@@ -1520,6 +1550,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* send "-1" to put receiver into debug mode */
             if ( wrexact(io_fd, &id, sizeof(int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (6)");
                 goto out;
             }
@@ -1542,6 +1573,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( suspend_and_state(callbacks->suspend, callbacks->data,
                                        xch, io_fd, dom, &info) )
                 {
+                    errnoval = errno;
                     ERROR("Domain appears not to have suspended");
                     goto out;
                 }
@@ -1550,12 +1582,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( (tmem_saved > 0) &&
                      (xc_tmem_save_extra(xch,dom,io_fd,XC_SAVE_ID_TMEM_EXTRA) == -1) )
                 {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (tmem)");
                         goto out;
                 }
 
                 if ( save_tsc_info(xch, dom, io_fd) < 0 )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (tsc)");
                     goto out;
                 }
@@ -1567,6 +1601,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                    XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send),
                                    dinfo->p2m_size, NULL, 0, &shadow_stats) != dinfo->p2m_size )
             {
+                errnoval = errno;
                 PERROR("Error flushing shadow PT");
                 goto out;
             }
@@ -1598,6 +1633,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( info.max_vcpu_id >= XC_SR_MAX_VCPUS )
         {
+            errnoval = E2BIG;
             ERROR("Too many VCPUS in guest!");
             goto out;
         }
@@ -1614,6 +1650,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, &chunk, offsetof(struct chunk, vcpumap)
                      + vcpumap_sz(info.max_vcpu_id)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file");
             goto out;
         }
@@ -1633,6 +1670,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the generation id buffer location for guest");
             goto out;
         }
@@ -1645,6 +1683,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the ident_pt for EPT guest");
             goto out;
         }
@@ -1657,6 +1696,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the paging ring pfn for guest");
             goto out;
         }
@@ -1669,6 +1709,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the access ring pfn for guest");
             goto out;
         }
@@ -1681,6 +1722,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the sharing ring pfn for guest");
             goto out;
         }
@@ -1693,6 +1735,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the vm86 TSS for guest");
             goto out;
         }
@@ -1705,6 +1748,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the console pfn for guest");
             goto out;
         }
@@ -1716,6 +1760,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ((chunk.data != 0) && wrexact(io_fd, &chunk, sizeof(chunk)))
         {
+            errnoval = errno;
             PERROR("Error when writing the firmware ioport version");
             goto out;
         }
@@ -1728,6 +1773,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the viridian flag");
             goto out;
         }
@@ -1741,6 +1787,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( callbacks->toolstack_save(dom, &buf, &len, callbacks->data) < 0 )
         {
+            errnoval = errno;
             PERROR("Error calling toolstack_save");
             goto out;
         }
@@ -1759,6 +1806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_LAST_CHECKPOINT;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing last checkpoint chunk");
             goto out;
         }
@@ -1778,6 +1826,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_ENABLE_COMPRESSION;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing enable_compression marker");
             goto out;
         }
@@ -1787,6 +1836,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     i = 0;
     if ( wrexact(io_fd, &i, sizeof(int)) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (6')");
         goto out;
     }
@@ -1805,6 +1855,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                          (unsigned long *)&magic_pfns[2]);
         if ( wrexact(io_fd, magic_pfns, sizeof(magic_pfns)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (7)");
             goto out;
         }
@@ -1813,18 +1864,21 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (rec_size = xc_domain_hvm_getcontext(xch, dom, hvm_buf, 
                                                   hvm_buf_size)) == -1 )
         {
+            errnoval = errno;
             PERROR("HVM:Could not get hvm buffer");
             goto out;
         }
         
         if ( wrexact(io_fd, &rec_size, sizeof(uint32_t)) )
         {
+            errnoval = errno;
             PERROR("error write hvm buffer size");
             goto out;
         }
         
         if ( wrexact(io_fd, hvm_buf, rec_size) )
         {
+            errnoval = errno;
             PERROR("write HVM info failed!");
             goto out;
         }
@@ -1849,6 +1903,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( wrexact(io_fd, &j, sizeof(unsigned int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (6a)");
             goto out;
         }
@@ -1863,6 +1918,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             {
                 if ( wrexact(io_fd, &pfntab, sizeof(unsigned long)*j) )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (6b)");
                     goto out;
                 }
@@ -1873,6 +1929,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( xc_vcpu_getcontext(xch, dom, 0, &ctxt) )
     {
+        errnoval = errno;
         PERROR("Could not get vcpu context");
         goto out;
     }
@@ -1888,6 +1945,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     mfn = GET_FIELD(&ctxt, user_regs.edx);
     if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
     {
+        errnoval = ERANGE;
         ERROR("Suspend record is not in range of pseudophys map");
         goto out;
     }
@@ -1900,6 +1958,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( (i != 0) && xc_vcpu_getcontext(xch, dom, i, &ctxt) )
         {
+            errnoval = errno;
             PERROR("No context for VCPU%d", i);
             goto out;
         }
@@ -1910,6 +1969,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             mfn = GET_FIELD(&ctxt, gdt_frames[j]);
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
             {
+                errnoval = ERANGE;
                 ERROR("GDT frame is not in range of pseudophys map");
                 goto out;
             }
@@ -1920,6 +1980,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(
                                            GET_FIELD(&ctxt, ctrlreg[3]))) )
         {
+            errnoval = ERANGE;
             ERROR("PT base is not in range of pseudophys map");
             goto out;
         }
@@ -1931,6 +1992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         {
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(ctxt.x64.ctrlreg[1])) )
             {
+                errnoval = ERANGE;
                 ERROR("PT base is not in range of pseudophys map");
                 goto out;
             }
@@ -1943,6 +2005,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                         ? sizeof(ctxt.x64) 
                                         : sizeof(ctxt.x32))) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (1)");
             goto out;
         }
@@ -1953,11 +2016,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.ext_vcpucontext.vcpu = i;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No extended context for VCPU%d", i);
             goto out;
         }
         if ( wrexact(io_fd, &domctl.u.ext_vcpucontext, 128) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (2)");
             goto out;
         }
@@ -1971,6 +2036,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.vcpuextstate.size = 0;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             goto out;
         }
@@ -1982,6 +2048,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         buffer = xc_hypercall_buffer_alloc(xch, buffer, domctl.u.vcpuextstate.size);
         if ( !buffer )
         {
+            errnoval = errno;
             PERROR("Insufficient memory for getting eXtended states for"
                    "VCPU%d", i);
             goto out;
@@ -1989,6 +2056,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         set_xen_guest_handle(domctl.u.vcpuextstate.buffer, buffer);
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2000,6 +2068,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                      sizeof(domctl.u.vcpuextstate.size)) ||
              wrexact(io_fd, buffer, domctl.u.vcpuextstate.size) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file VCPU extended state");
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2015,6 +2084,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
               arch.pfn_to_mfn_frame_list_list, 0);
     if ( wrexact(io_fd, page, PAGE_SIZE) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (1)");
         goto out;
     }
@@ -2022,6 +2092,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Flush last write and check for errors. */
     if ( fsync(io_fd) && errno != EINVAL )
     {
+        errnoval = errno;
         PERROR("Error when flushing state file");
         goto out;
     }
@@ -2043,6 +2114,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ob = &ob_pagebuf;
         if (wrcompressed(io_fd) < 0)
         {
+            errnoval = errno;
             ERROR("Error when writing compressed data, after postcopy\n");
             rc = 1;
             goto out;
@@ -2051,6 +2123,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, ob_tailbuf.buf, ob_tailbuf.pos) )
         {
             rc = 1;
+            errnoval = errno;
             PERROR("Error when copying tailbuf into outbuf");
             goto out;
         }
@@ -2079,6 +2152,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -2130,7 +2204,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     free(hvm_buf);
     outbuf_free(&ob_pagebuf);
 
-    DPRINTF("Save exit of domid %u with rc=%d\n", dom, rc);
+exit:
+    DPRINTF("Save exit of domid %u with rc=%d, errno=%d\n", dom, rc, errnoval);
+    errno = errnoval;
 
     return !!rc;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

@@ -942,6 +946,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( dinfo->p2m_size > ~XEN_DOMCTL_PFINFO_LTAB_MASK )
     {
+        errnoval = E2BIG;
         ERROR("Cannot save this big a guest");
         goto out;
     }
@@ -967,6 +972,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             
             if ( frc < 0 )
             {
+                errnoval = errno;
                 PERROR("Couldn't enable shadow mode (rc %d) (errno %d)", frc, errno );
                 goto out;
             }
@@ -975,6 +981,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         /* Enable qemu-dm logging dirty pages to xen */
         if ( hvm && callbacks->switch_qemu_logdirty(dom, 1, callbacks->data) )
         {
+            errnoval = errno;
             PERROR("Couldn't enable qemu log-dirty mode (errno %d)", errno);
             goto out;
         }
@@ -985,6 +992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -994,6 +1002,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     {
         if (!(compress_ctx = xc_compression_create_context(xch, dinfo->p2m_size)))
         {
+            errnoval = errno;
             ERROR("Failed to create compression context");
             goto out;
         }
@@ -1012,6 +1021,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( !to_send || !to_fix || !to_skip )
     {
+        errnoval = ENOMEM;
         ERROR("Couldn't allocate to_send array");
         goto out;
     }
@@ -1024,12 +1034,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
         if ( hvm_buf_size == -1 )
         {
+            errnoval = errno;
             PERROR("Couldn't get HVM context size from Xen");
             goto out;
         }
         hvm_buf = malloc(hvm_buf_size);
         if ( !hvm_buf )
         {
+            errnoval = ENOMEM;
             ERROR("Couldn't allocate memory");
             goto out;
         }
@@ -1043,7 +1055,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     if ( (pfn_type == NULL) || (pfn_batch == NULL) || (pfn_err == NULL) )
     {
         ERROR("failed to alloc memory for pfn_type and/or pfn_batch arrays");
-        errno = ENOMEM;
+        errnoval = ENOMEM;
         goto out;
     }
     memset(pfn_type, 0,
@@ -1052,6 +1064,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Setup the mfn_to_pfn table mapping */
     if ( !(ctx->live_m2p = xc_map_m2p(xch, ctx->max_mfn, PROT_READ, &ctx->m2p_mfn0)) )
     {
+        errnoval = errno;
         PERROR("Failed to map live M2P table");
         goto out;
     }
@@ -1059,6 +1072,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Start writing out the saved-domain record. */
     if ( write_exact(io_fd, &dinfo->p2m_size, sizeof(unsigned long)) )
     {
+        errnoval = errno;
         PERROR("write: p2m_size");
         goto out;
     }
@@ -1071,6 +1085,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ctx->live_p2m = map_and_save_p2m_table(xch, io_fd, dom, ctx, live_shinfo);
         if ( ctx->live_p2m == NULL )
         {
+            errnoval = errno;
             PERROR("Failed to map/save the p2m frame list");
             goto out;
         }
@@ -1097,12 +1112,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     tmem_saved = xc_tmem_save(xch, dom, io_fd, live, XC_SAVE_ID_TMEM);
     if ( tmem_saved == -1 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tmem)");
         goto out;
     }
 
     if ( !live && save_tsc_info(xch, dom, io_fd) < 0 )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (tsc)");
         goto out;
     }
@@ -1143,6 +1160,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     dinfo->p2m_size, NULL, 0, NULL);
                 if ( frc != dinfo->p2m_size )
                 {
+                    errnoval = errno;
                     ERROR("Error peeking shadow bitmap");
                     goto out;
                 }
@@ -1257,6 +1275,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 xch, dom, PROT_READ, pfn_type, pfn_err, batch);
             if ( region_base == NULL )
             {
+                errnoval = errno;
                 PERROR("map batch failed");
                 goto out;
             }
@@ -1264,6 +1283,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* Get page types */
             if ( xc_get_pfn_type_batch(xch, dom, batch, pfn_type) )
             {
+                errnoval = errno;
                 PERROR("get_pfn_type_batch failed");
                 goto out;
             }
@@ -1332,6 +1352,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
             if ( wrexact(io_fd, &batch, sizeof(unsigned int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (2)");
                 goto out;
             }
@@ -1341,6 +1362,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     ((unsigned long *)pfn_type)[j] = pfn_type[j];
             if ( wrexact(io_fd, pfn_type, sizeof(unsigned long)*batch) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (3)");
                 goto out;
             }
@@ -1368,6 +1390,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                        (char*)region_base+(PAGE_SIZE*(j-run)), 
                                        PAGE_SIZE*run) != PAGE_SIZE*run )
                         {
+                            errnoval = errno;
                             PERROR("Error when writing to state file (4a)"
                                   " (errno %d)", errno);
                             goto out;
@@ -1396,6 +1419,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                     if ( race && !live )
                     {
+                        errnoval = errno;
                         ERROR("Fatal PT race (pfn %lx, type %08lx)", pfn,
                               pagetype);
                         goto out;
@@ -1409,6 +1433,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                                         pfn, 1 /* raw page */);
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add pagetable page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1428,6 +1453,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                              */
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4b)\n");
                                 goto out;
@@ -1437,6 +1463,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                     else if ( wruncached(io_fd, live, page,
                                          PAGE_SIZE) != PAGE_SIZE )
                     {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (4b)"
                               " (errno %d)", errno);
                         goto out;
@@ -1456,6 +1483,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
                         if (c_err == -2) /* OOB PFN */
                         {
+                            errnoval = errno;
                             ERROR("Could not add page "
                                   "(pfn:%" PRIpfn "to page buffer\n", pfn);
                             goto out;
@@ -1465,6 +1493,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                         {
                             if (wrcompressed(io_fd) < 0)
                             {
+                                errnoval = errno;
                                 ERROR("Error when writing compressed"
                                       " data (4c)\n");
                                 goto out;
@@ -1483,6 +1512,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                (char*)region_base+(PAGE_SIZE*(j-run)), 
                                PAGE_SIZE*run) != PAGE_SIZE*run )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (4c)"
                           " (errno %d)", errno);
                     goto out;
@@ -1520,6 +1550,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             /* send "-1" to put receiver into debug mode */
             if ( wrexact(io_fd, &id, sizeof(int)) )
             {
+                errnoval = errno;
                 PERROR("Error when writing to state file (6)");
                 goto out;
             }
@@ -1542,6 +1573,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( suspend_and_state(callbacks->suspend, callbacks->data,
                                        xch, io_fd, dom, &info) )
                 {
+                    errnoval = errno;
                     ERROR("Domain appears not to have suspended");
                     goto out;
                 }
@@ -1550,12 +1582,14 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                 if ( (tmem_saved > 0) &&
                      (xc_tmem_save_extra(xch,dom,io_fd,XC_SAVE_ID_TMEM_EXTRA) == -1) )
                 {
+                        errnoval = errno;
                         PERROR("Error when writing to state file (tmem)");
                         goto out;
                 }
 
                 if ( save_tsc_info(xch, dom, io_fd) < 0 )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (tsc)");
                     goto out;
                 }
@@ -1567,6 +1601,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                    XEN_DOMCTL_SHADOW_OP_CLEAN, HYPERCALL_BUFFER(to_send),
                                    dinfo->p2m_size, NULL, 0, &shadow_stats) != dinfo->p2m_size )
             {
+                errnoval = errno;
                 PERROR("Error flushing shadow PT");
                 goto out;
             }
@@ -1598,6 +1633,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( info.max_vcpu_id >= XC_SR_MAX_VCPUS )
         {
+            errnoval = E2BIG;
             ERROR("Too many VCPUS in guest!");
             goto out;
         }
@@ -1614,6 +1650,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, &chunk, offsetof(struct chunk, vcpumap)
                      + vcpumap_sz(info.max_vcpu_id)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file");
             goto out;
         }
@@ -1633,6 +1670,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the generation id buffer location for guest");
             goto out;
         }
@@ -1645,6 +1683,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the ident_pt for EPT guest");
             goto out;
         }
@@ -1657,6 +1696,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the paging ring pfn for guest");
             goto out;
         }
@@ -1669,6 +1709,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the access ring pfn for guest");
             goto out;
         }
@@ -1681,6 +1722,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the sharing ring pfn for guest");
             goto out;
         }
@@ -1693,6 +1735,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the vm86 TSS for guest");
             goto out;
         }
@@ -1705,6 +1748,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the console pfn for guest");
             goto out;
         }
@@ -1716,6 +1760,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ((chunk.data != 0) && wrexact(io_fd, &chunk, sizeof(chunk)))
         {
+            errnoval = errno;
             PERROR("Error when writing the firmware ioport version");
             goto out;
         }
@@ -1728,6 +1773,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (chunk.data != 0) &&
              wrexact(io_fd, &chunk, sizeof(chunk)) )
         {
+            errnoval = errno;
             PERROR("Error when writing the viridian flag");
             goto out;
         }
@@ -1741,6 +1787,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( callbacks->toolstack_save(dom, &buf, &len, callbacks->data) < 0 )
         {
+            errnoval = errno;
             PERROR("Error calling toolstack_save");
             goto out;
         }
@@ -1759,6 +1806,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_LAST_CHECKPOINT;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing last checkpoint chunk");
             goto out;
         }
@@ -1778,6 +1826,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         i = XC_SAVE_ID_ENABLE_COMPRESSION;
         if ( wrexact(io_fd, &i, sizeof(int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing enable_compression marker");
             goto out;
         }
@@ -1787,6 +1836,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     i = 0;
     if ( wrexact(io_fd, &i, sizeof(int)) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (6')");
         goto out;
     }
@@ -1805,6 +1855,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                          (unsigned long *)&magic_pfns[2]);
         if ( wrexact(io_fd, magic_pfns, sizeof(magic_pfns)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (7)");
             goto out;
         }
@@ -1813,18 +1864,21 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( (rec_size = xc_domain_hvm_getcontext(xch, dom, hvm_buf, 
                                                   hvm_buf_size)) == -1 )
         {
+            errnoval = errno;
             PERROR("HVM:Could not get hvm buffer");
             goto out;
         }
         
         if ( wrexact(io_fd, &rec_size, sizeof(uint32_t)) )
         {
+            errnoval = errno;
             PERROR("error write hvm buffer size");
             goto out;
         }
         
         if ( wrexact(io_fd, hvm_buf, rec_size) )
         {
+            errnoval = errno;
             PERROR("write HVM info failed!");
             goto out;
         }
@@ -1849,6 +1903,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( wrexact(io_fd, &j, sizeof(unsigned int)) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (6a)");
             goto out;
         }
@@ -1863,6 +1918,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             {
                 if ( wrexact(io_fd, &pfntab, sizeof(unsigned long)*j) )
                 {
+                    errnoval = errno;
                     PERROR("Error when writing to state file (6b)");
                     goto out;
                 }
@@ -1873,6 +1929,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
     if ( xc_vcpu_getcontext(xch, dom, 0, &ctxt) )
     {
+        errnoval = errno;
         PERROR("Could not get vcpu context");
         goto out;
     }
@@ -1888,6 +1945,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     mfn = GET_FIELD(&ctxt, user_regs.edx);
     if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
     {
+        errnoval = ERANGE;
         ERROR("Suspend record is not in range of pseudophys map");
         goto out;
     }
@@ -1900,6 +1958,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
 
         if ( (i != 0) && xc_vcpu_getcontext(xch, dom, i, &ctxt) )
         {
+            errnoval = errno;
             PERROR("No context for VCPU%d", i);
             goto out;
         }
@@ -1910,6 +1969,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
             mfn = GET_FIELD(&ctxt, gdt_frames[j]);
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(mfn) )
             {
+                errnoval = ERANGE;
                 ERROR("GDT frame is not in range of pseudophys map");
                 goto out;
             }
@@ -1920,6 +1980,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(
                                            GET_FIELD(&ctxt, ctrlreg[3]))) )
         {
+            errnoval = ERANGE;
             ERROR("PT base is not in range of pseudophys map");
             goto out;
         }
@@ -1931,6 +1992,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         {
             if ( !MFN_IS_IN_PSEUDOPHYS_MAP(UNFOLD_CR3(ctxt.x64.ctrlreg[1])) )
             {
+                errnoval = ERANGE;
                 ERROR("PT base is not in range of pseudophys map");
                 goto out;
             }
@@ -1943,6 +2005,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                                         ? sizeof(ctxt.x64) 
                                         : sizeof(ctxt.x32))) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (1)");
             goto out;
         }
@@ -1953,11 +2016,13 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.ext_vcpucontext.vcpu = i;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No extended context for VCPU%d", i);
             goto out;
         }
         if ( wrexact(io_fd, &domctl.u.ext_vcpucontext, 128) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file (2)");
             goto out;
         }
@@ -1971,6 +2036,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         domctl.u.vcpuextstate.size = 0;
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             goto out;
         }
@@ -1982,6 +2048,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         buffer = xc_hypercall_buffer_alloc(xch, buffer, domctl.u.vcpuextstate.size);
         if ( !buffer )
         {
+            errnoval = errno;
             PERROR("Insufficient memory for getting eXtended states for"
                    "VCPU%d", i);
             goto out;
@@ -1989,6 +2056,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         set_xen_guest_handle(domctl.u.vcpuextstate.buffer, buffer);
         if ( xc_domctl(xch, &domctl) < 0 )
         {
+            errnoval = errno;
             PERROR("No eXtended states (XSAVE) for VCPU%d", i);
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2000,6 +2068,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
                      sizeof(domctl.u.vcpuextstate.size)) ||
              wrexact(io_fd, buffer, domctl.u.vcpuextstate.size) )
         {
+            errnoval = errno;
             PERROR("Error when writing to state file VCPU extended state");
             xc_hypercall_buffer_free(xch, buffer);
             goto out;
@@ -2015,6 +2084,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
               arch.pfn_to_mfn_frame_list_list, 0);
     if ( wrexact(io_fd, page, PAGE_SIZE) )
     {
+        errnoval = errno;
         PERROR("Error when writing to state file (1)");
         goto out;
     }
@@ -2022,6 +2092,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     /* Flush last write and check for errors. */
     if ( fsync(io_fd) && errno != EINVAL )
     {
+        errnoval = errno;
         PERROR("Error when flushing state file");
         goto out;
     }
@@ -2043,6 +2114,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         ob = &ob_pagebuf;
         if (wrcompressed(io_fd) < 0)
         {
+            errnoval = errno;
             ERROR("Error when writing compressed data, after postcopy\n");
             rc = 1;
             goto out;
@@ -2051,6 +2123,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( wrexact(io_fd, ob_tailbuf.buf, ob_tailbuf.pos) )
         {
             rc = 1;
+            errnoval = errno;
             PERROR("Error when copying tailbuf into outbuf");
             goto out;
         }
@@ -2079,6 +2152,7 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
         if ( suspend_and_state(callbacks->suspend, callbacks->data, xch,
                                io_fd, dom, &info) )
         {
+            errnoval = errno;
             ERROR("Domain appears not to have suspended");
             goto out;
         }
@@ -2130,7 +2204,9 @@ int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom, uint32_t max_iter
     free(hvm_buf);
     outbuf_free(&ob_pagebuf);
 
-    DPRINTF("Save exit of domid %u with rc=%d\n", dom, rc);
+exit:
+    DPRINTF("Save exit of domid %u with rc=%d, errno=%d\n", dom, rc, errnoval);
+    errno = errnoval;
 
     return !!rc;
 }

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:15:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9R7-00061M-FC; Tue, 11 Feb 2014 09:15:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD9R6-00061H-23
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:15:24 +0000
Received: from [85.158.143.35:8447] by server-1.bemta-4.messagelabs.com id
	01/A7-31661-B2AE9F25; Tue, 11 Feb 2014 09:15:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392110122!4739482!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1764 invoked from network); 11 Feb 2014 09:15:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 09:15:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 09:15:21 +0000
Message-Id: <52F9F838020000780011B0AF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 09:15:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
In-Reply-To: <20140210172140.GA34003@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
> At 16:34 +0000 on 10 Feb (1392046444), Jan Beulich wrote:
>> > The underlying problem here comes because the AF and UF bits of RTC 
>> > interrupt
>> > state is modelled by the RTC code, but the PF is modelled by the pt code.  
>> > The
>> > root cause of windows infinite loop is that RTC_PF is being re-set on 
>> > vmentry
>> > before the interrupt logic has worked out that it can't actually inject an 
>> > RTC
>> > interrupt, causing Windows to erroneously read (RTC_PF|RTC_IRQF) when it
>> > should be reading 0.
>> 
>> So you're undoing a whole lot of changes done with the goal of
>> getting the overall emulation closer to what real hardware does,
>> just to paper over an issue elsewhere in the code? Not really an
>> approach I'm in favor of.
> 
> My understanding was that the problem is explicitly in the change to
> how RTC code is called from vpt code.
> 
> Originally, the RTC callback was called from pt_intr_post, like other
> vpt sources.  Your rework changed it to be called much earlier, when
> the vpt was considering which time source to choose.  AIUI that was to
> let the RTC code tell the VPT not to inject, if the guest hasn't acked
> the last interrupt, right?

Not only that; it was also a necessary pre-adjustment for the !PIE case
to work (see below).

> Since that was changed later to allow a certain number of dead ticks
> before deciding to stop the timer chain, the decision no longer has to
> be made so early -- we can allow one more IRQ to go in and then
> disable it. 
> 
> That is the main change of this cset:  we go back to driving
> the interrupt from the vpt code and fixing up the RTC state after vpt
> tells us it's injected an interrupt.

And that's what is wrong imo, as it doesn't allow driving PF correctly
when !PIE.

>> > -        rtc_update_irq(s);
>> 
>> So given the problem description, this would seem to be the most
>> important part at a first glance. But looking more closely, I'm getting
>> the impression that the call to rtc_update_irq() had no effect at all
>> here anyway: The function would always bail on the second if() due
>> to REG_C having got cleared a few lines up.
> 
> Yeah, this has nothing to do with the bug being fixed here.  The old
> REG_C read was operating correctly, but on the return-to-guest path:
>  - vpt sees another RTC interrupt is due and calls RTC code
>  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
>  - vlapic code sees the last interrupt is still in the ISR and does
>    nothing;
>  - we return to the guest having set IRQF but not consumed a timer
>    event, so vpt stste is the same
>  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
>    waiting for a read of 0.
>  - repeat forever.

Which would call for a flag suppressing the setting of PF|IRQF
until the timer event got consumed. Possibly with some safety
belt for this to not get deferred indefinitely (albeit if the interrupt
doesn't get injected for extended periods of time, the guest
would presumably have more severe problems than these flags
not getting updated as expected).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:15:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9R7-00061M-FC; Tue, 11 Feb 2014 09:15:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD9R6-00061H-23
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:15:24 +0000
Received: from [85.158.143.35:8447] by server-1.bemta-4.messagelabs.com id
	01/A7-31661-B2AE9F25; Tue, 11 Feb 2014 09:15:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392110122!4739482!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1764 invoked from network); 11 Feb 2014 09:15:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 09:15:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 09:15:21 +0000
Message-Id: <52F9F838020000780011B0AF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 09:15:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
In-Reply-To: <20140210172140.GA34003@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
> At 16:34 +0000 on 10 Feb (1392046444), Jan Beulich wrote:
>> > The underlying problem here comes because the AF and UF bits of RTC
>> > interrupt state are modelled by the RTC code, but PF is modelled by
>> > the pt code.  The root cause of Windows' infinite loop is that RTC_PF
>> > is being re-set on vmentry before the interrupt logic has worked out
>> > that it can't actually inject an RTC interrupt, causing Windows to
>> > erroneously read (RTC_PF|RTC_IRQF) when it should be reading 0.
>> 
>> So you're undoing a whole lot of changes done with the goal of
>> getting the overall emulation closer to what real hardware does,
>> just to paper over an issue elsewhere in the code? Not really an
>> approach I'm in favor of.
> 
> My understanding was that the problem is explicitly in the change to
> how RTC code is called from vpt code.
> 
> Originally, the RTC callback was called from pt_intr_post, like other
> vpt sources.  Your rework changed it to be called much earlier, when
> the vpt was considering which time source to choose.  AIUI that was to
> let the RTC code tell the VPT not to inject, if the guest hasn't acked
> the last interrupt, right?

Not only that: it was also a necessary pre-adjustment for the !PIE
case to work (see below).

> Since that was changed later to allow a certain number of dead ticks
> before deciding to stop the timer chain, the decision no longer has to
> be made so early -- we can allow one more IRQ to go in and then
> disable it. 
> 
> That is the main change of this cset:  we go back to driving
> the interrupt from the vpt code and fixing up the RTC state after vpt
> tells us it's injected an interrupt.

And that's what is wrong imo, as it doesn't allow driving PF correctly
when !PIE.

>> > -        rtc_update_irq(s);
>> 
>> So given the problem description, this would seem to be the most
>> important part at a first glance. But looking more closely, I'm getting
>> the impression that the call to rtc_update_irq() had no effect at all
>> here anyway: The function would always bail on the second if() due
>> to REG_C having got cleared a few lines up.
> 
> Yeah, this has nothing to do with the bug being fixed here.  The old
> REG_C read was operating correctly, but on the return-to-guest path:
>  - vpt sees another RTC interrupt is due and calls RTC code
>  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
>  - vlapic code sees the last interrupt is still in the ISR and does
>    nothing;
>  - we return to the guest having set IRQF but not consumed a timer
>    event, so vpt state is the same
>  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
>    waiting for a read of 0.
>  - repeat forever.

Which would call for a flag suppressing the setting of PF|IRQF
until the timer event got consumed, possibly with some safety
belt so that this doesn't get deferred indefinitely (though if the
interrupt doesn't get injected for extended periods of time, the
guest would presumably have more severe problems than these
flags not getting updated as expected).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:28:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9ds-0006d3-2s; Tue, 11 Feb 2014 09:28:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WD9dq-0006bs-Qr
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:28:35 +0000
Received: from [85.158.139.211:17977] by server-13.bemta-5.messagelabs.com id
	B9/F9-18801-24DE9F25; Tue, 11 Feb 2014 09:28:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392110913!3085032!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31734 invoked from network); 11 Feb 2014 09:28:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 09:28:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 09:28:31 +0000
Message-Id: <52F9FB4C020000780011B0D0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 09:28:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<52F908A4.20902@citrix.com>
In-Reply-To: <52F908A4.20902@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 18:13, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 10/02/14 16:34, Jan Beulich wrote:
>>>>> On 10.02.14 at 12:17, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> This reverts large amounts of:
>>>   9607327abbd3e77bde6cc7b5327f3efd781fc06e
>>>     "x86/HVM: properly handle RTC periodic timer even when !RTC_PIE"
>>>   620d5dad54008e40798c4a0c4322aef274c36fa3
>>>     "x86/HVM: assorted RTC emulation adjustments"
>>>
>>> and by extension:
>>>   f3347f520cb4d8aa4566182b013c6758d80cbe88
>>>     "x86/HVM: adjust IRQ (de-)assertion"
>>>   c2f79c464849e5f796aa9d1d0f26fe356abd1a1a
>>>     "x86/HVM: fix processing of RTC REG_B writes"
>>>   527824f41f5fac9cba3d4441b2e73d3118d98837
>>>     "x86/hvm: Centralize and simplify the RTC IRQ logic."
>> So what does "by extension" mean here? Are these being
>> reverted?
> 
> The logic here was based on the logic being reverted, so the changes
> themselves are mostly gone.
> 
>>
>>> The current code has a pathological case, tickled by the access pattern of
>>> Windows 2003 Server SP2.  Occasionally on boot (which I presume is during a
>>> time calibration against the RTC Periodic Timer), Windows gets stuck in an
>>> infinite loop reading RTC REG_C.  This affects 32 and 64 bit guests.
>>>
>>> In the pathological case, the VM state looks like this:
>>>   * RTC: 64Hz period, periodic interrupts enabled
>>>   * RTC_IRQ in IOAPIC as vector 0xd1, edge triggered, not pending
>>>   * vector 0xd1 set in LAPIC IRR and ISR, TPR at 0xd0
>>>   * Reads from REG_C return 'RTC_PF | RTC_IRQF'
>>>
>>> With an instrumented Xen, dumping the periodic timers with a guest in this
>>> state shows a single timer with pt->irq_issued=1 and pt->pending_intr_nr=2.
>>>
>>> Windows is presumably waiting for reads of REG_C to drop to 0, and reading
>>> REG_C clears the value each time in the emulated RTC.  However:
>>>
>>>   * {svm,vmx}_intr_assist() calls pt_update_irq() unconditionally.
>>>   * pt_update_irq() always finds the RTC as earliest_pt.
>>>   * rtc_periodic_interrupt() unconditionally sets RTC_PF in no_ack mode.  It
>>>     returns true, indicating that pt_update_irq() should really inject the
>>>     interrupt.
>>>   * pt_update_irq() decides that it doesn't need to fake up part of
>>>     pt_intr_post() because this is a real interrupt.
>>>   * {svm,vmx}_intr_assist() can't inject the interrupt as it is already
>>>     pending, so exits early without calling pt_intr_post().
>>>
>>> The underlying problem here comes because the AF and UF bits of RTC
>>> interrupt state are modelled by the RTC code, but PF is modelled by the
>>> pt code.  The root cause of Windows' infinite loop is that RTC_PF is
>>> being re-set on vmentry before the interrupt logic has worked out that
>>> it can't actually inject an RTC interrupt, causing Windows to
>>> erroneously read (RTC_PF|RTC_IRQF) when it should be reading 0.
>> So you're undoing a whole lot of changes done with the goal of
>> getting the overall emulation closer to what real hardware does,
>> just to paper over an issue elsewhere in the code? Not really an
>> approach I'm in favor of.
> 
> I fail to see how the current code is closer to what hardware does than
> this proposed patch.

The thing is that if any of the changes you revert (fully or partly)
was actively wrong, you should have broken up the patch such
that it reverts the broken pieces in individual steps. The way I see
the situation right now is that you effectively say "individually all
those changes have been fine, but collectively they have a
problem which we can't really isolate, so let's revert them altogether".

The most prominent example of further diverging from actual
hardware behavior is the correct driving of PF without PIE, as
just explained in another response on this thread.

>>> -    if ( rtc_mode_is(s, strict) && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
>>> -        return;
>>> +    if ( new_b & new_c & (RTC_PF | RTC_AF | RTC_UF) )
>>> +        new_c |= RTC_IRQF;
>> Without going back to reading the spec, IIRC RTC_IRQF is a sticky
>> bit cleared _only_ by REG_C reads, i.e. you shouldn't clear it
>> earlier in the function and then conditionally set it here.
> 
> The main problem is that the MC146818 states that the interrupt is line
> level, while the PIIX3 mandates that it is edge triggered, and our ACPI
> tables define it to be edge triggered.
> 
> The datasheet states "The IRQF bit in Register C is a '1' whenever the
> ¬IRQ pin is being driven low".  With edge semantics where the line never
> actually stays asserted, this would degrade to being sticky until read.

Edge semantics say nothing about the line remaining asserted, i.e.
an edge triggered interrupt could have its line remaining asserted
for all the time except when a new interrupt is needed, at which
point it would get briefly de-asserted and then asserted again.
Hence the interrupt assertion requirements of the PIIX3 and those of
the MC146818 aren't really contradicting one another. And our code
(in no-ack mode) mostly does what I just described: it de-asserts the
line briefly before re-asserting it. In "strict" mode the de-assertion
happens when IRQF gets cleared.

> However, with edge semantics, we need to send new interrupts when
> enabling bits in REG_B even if IRQF is already outstanding,

Why?

> which is why
> the logic starts by masking it back out.  Furthermore, in no-ack mode,
> we expect IRQF to be set (as the guest isn't reading REG_C), but still
> wanting interrupts.

Which is precisely what rtc_update_irq() does.

>> As a round-up: I'm not going to veto this, but I'm also not going to
>> be putting my name under it, nor am I going to make another
>> attempt to clean up the RTC emulation if this is to go in unchanged.
>> I'm personally getting the impression that the root cause of the
>> observed problem is still being left in place (and perhaps still not
>> being fully understood), and hence this whole change goes in the
>> wrong direction, _even_ if it makes the problem it is aiming at
>> fixing indeed appear to go away.
> 
> I am not sure I follow you.  The root cause of the infinite loop is
> that Xen was erroneously setting REG_C.PF, because pt_update_irq()
> was trying to pre-guess what the interrupt injection logic would do
> (and getting it wrong).

So it would be - as also said in the response to Tim - that
interaction between vpt and rtc code that needs further
tweaking.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:31:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9gL-00077E-19; Tue, 11 Feb 2014 09:31:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WD9gJ-000772-Fw
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:31:07 +0000
Received: from [193.109.254.147:27005] by server-15.bemta-14.messagelabs.com
	id 5D/29-10839-ADDE9F25; Tue, 11 Feb 2014 09:31:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392111060!3468667!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28419 invoked from network); 11 Feb 2014 09:31:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:31:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="99729010"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 09:30:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:30:42 -0500
Message-ID: <1392111040.26657.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 11 Feb 2014 09:30:40 +0000
In-Reply-To: <52F90A71.40802@citrix.com>
References: <52F90A71.40802@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:20 +0000, David Vrabel wrote:
> It
> does not currently cover all use cases (e.g., images for HVM guests are
> not considered).

Other currently missing cases:
      * live migration
      * remus
      * page compression
      * ...?

I think it is useful to enumerate them, and if necessary to note which
ones you aren't planning to consider in the immediate future.

> Headers
> =======
> 
> Image Header
> ------------
> 
> The image header identifies an image as a Xen domain save image.  It
> includes the version of this specification that the image complies
> with.
> 
> Tools supporting version _V_ of the specification shall always save
> images using version _V_.  Tools shall support restoring from version
> _V_ and version _V_ - 1.

This isn't quite right since it is in terms of image format version and
not Xen version (unless the two are to be linked somehow). The Xen
project supports migration from version X-1 to version X (where X is the
Xen version). It's not inconceivable that the image format version
wouldn't change over Xen releases.

This might become a more obvious issue once you add HVM to the spec and
introduce the necessary Xen HVM save state block.

> Tools may additionally support restoring from earlier versions.

> The marker field can be used to distinguish between legacy images and
> those corresponding to this specification.  Legacy images will have
> one or more zero bits within the first 8 octets of the image.
> 
> Fields within the image header are always in _big-endian_ byte order,
> regardless of the setting of the endianness bit.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | marker                                          |
>     +-----------------------+-------------------------+
>     | id                    | version                 |
>     +-----------+-----------+-------------------------+
>     | options   |                                     |
>     +-----------+-------------------------------------+
> 
> 
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> marker      0xFFFFFFFFFFFFFFFF.
> 
> id          0x58454E46 ("XENF" in ASCII).
> 
> version     0x00000001.  The version of this specification.
> 
> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.

Couldn't we just specify that things are in a specific endianness
related to the header's arch field?

I appreciate the desire to make the format endian neutral and explicit
about which is in use but (apart from the header) why would you ever
want to go non-native endian for a given arch?

>             bit 1-15: Reserved.
> --------------------------------------------------------------------
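[For illustration, a minimal sketch of parsing this header, with field
widths and offsets taken from the diagram above (8 + 4 + 4 + 2 + 6
reserved octets = 24), big-endian throughout as the draft requires; the
function name is mine, not part of the proposal:]

```python
import struct

# 24-octet image header: marker (8), id (4), version (4),
# options (2), reserved (6).  Always big-endian per the draft.
IMAGE_HEADER = struct.Struct(">QIIH6x")

def parse_image_header(buf):
    """Return (version, endianness) or raise on a non-XENF image."""
    marker, ident, version, options = IMAGE_HEADER.unpack_from(buf)
    if marker != 0xFFFFFFFFFFFFFFFF:
        raise ValueError("legacy image: marker has zero bits")
    if ident != 0x58454E46:  # "XENF" in ASCII
        raise ValueError("id field is not XENF")
    endianness = "big" if options & 1 else "little"
    return version, endianness
```
<test>
buf = struct.pack(">QIIH6x", 0xFFFFFFFFFFFFFFFF, 0x58454E46, 1, 0)
assert parse_image_header(buf) == (1, "little")
buf_be = struct.pack(">QIIH6x", 0xFFFFFFFFFFFFFFFF, 0x58454E46, 1, 1)
assert parse_image_header(buf_be) == (1, "big")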
> 
> Domain Header
> -------------
> 
> The domain header includes general properties of the domain.
> 
>      0      1     2     3     4     5     6     7 octet
>     +-----------+-----------+-----------+-------------+
>     | arch      | type      | page_shift| (reserved)  |
>     +-----------+-----------+-----------+-------------+
> 
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> arch        0x0000: Reserved.
> 
>             0x0001: x86.
> 
>             0x0002: ARM.
> 
> type        0x0000: Reserved.
> 
>             0x0001: x86 PV.
> 
>             0x0002 - 0xFFFF: Reserved.

Is the type field per-arch? i.e. if arch=0x0002 can we use type = 0x0001
for ARM domains?

> page_shift  Size of a guest page as a power of two.
> 
>             i.e., page size = 2^page_shift^.

This doesn't really need 16 bits, no harm though I suppose.

> --------------------------------------------------------------------
> 
> 
> Records
> =======
> 
> A record has a record header, type specific data and a trailing
> footer.  If body_length is not a multiple of 8, the body is padded
> with zeroes to align the checksum field on an 8 octet boundary.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | type                  | body_length             |
>     +-----------+-----------+-------------------------+
>     | options   | (reserved)                          |
>     +-----------+-------------------------------------+
>     ...
>     Record body of length body_length octets followed by
>     0 to 7 octets of padding.
>     ...
>     +-----------------------+-------------------------+
>     | checksum              | (reserved)              |
>     +-----------------------+-------------------------+
> 
> --------------------------------------------------------------------
> Field        Description
> -----------  -------------------------------------------------------
> type         0x00000000: END
> 
>              0x00000001: PAGE_DATA
> 
>              0x00000002: VCPU_INFO
> 
>              0x00000003: VCPU_CONTEXT
> 
>              0x00000004: X86_PV_INFO
> 
>              0x00000005: P2M
> 
>              0x00000006 - 0xFFFFFFFF: Reserved
> 
> body_length  Length in octets of the record body.
> 
> options      Bit 0: 0 = checksum invalid, 1 = checksum valid.
> 
>              Bit 1-15: Reserved.
> 
> checksum     CRC-32 checksum of the record body (including any trailing
>              padding), or 0x00000000 if the checksum field is invalid.

Which CRC-32 :-P

(wikipedia claims there are 6 variants using various polynomials...)
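[For illustration, a record writer under two assumptions the draft does
not pin down: records in little-endian, and "CRC-32" meaning the IEEE
802.3 polynomial that zlib implements - exactly the ambiguity raised
above. The helper name is mine:]

```python
import struct
import zlib

RECORD_HEADER = struct.Struct("<IIH6x")  # type, body_length, options, reserved
RECORD_FOOTER = struct.Struct("<I4x")    # checksum, reserved

def write_record(rtype, body):
    """Frame one record: header, body zero-padded to an 8-octet
    boundary, then a CRC-32 footer computed over the padded body."""
    padded = body + b"\x00" * (-len(body) % 8)
    header = RECORD_HEADER.pack(rtype, len(body), 1)  # options bit 0: checksum valid
    footer = RECORD_FOOTER.pack(zlib.crc32(padded))
    return header + padded + footer
```
<test>
rec = write_record(1, b"abcde")
assert len(rec) == 16 + 8 + 8  # header + padded body + footer
rtype, blen, opts = struct.unpack_from("<IIH", rec)
assert (rtype, blen, opts) == (1, 5, 1)
assert struct.unpack_from("<I", rec, 24)[0] == zlib.crc32(b"abcde\x00\x00\x00")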

> P2M
> ---
> 
> [ This is a more flexible replacement for the old p2m_size field and
> p2m array. ]


What is the latter for again?

I don't know if it really fits into this spec, but it might be useful to
include the rationale for the inclusion of each record type.

> PAGE_DATA
> ---------
> 
> The bulk of an image consists of many PAGE_DATA records containing the
> memory contents.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | count (C)             | (reserved)              |
>     +-----------------------+-------------------------+
>     | pfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | pfn[C-1]                                        |
>     +-------------------------------------------------+
>     | page_data[0]...                                 |
>     ...
>     +-------------------------------------------------+
>     | page_data[N-1]...                               |
>     ...
>     +-------------------------------------------------+
> 
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> count       Number of pages described in this record.
> 
> pfn         An array of count PFNs. Bits 63-60 contain
>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.

Now might be a good time to remove this intertwining? I suppose 60 bits
is a lot of PFNs, but if the VM's address space is sparse it isn't out
of the question.

It might be useful to be explicit (perhaps in a glossary) about what you
mean by "PFN" for various guest types -- some of the terminology can be
a bit confused in actual usage between the various guest types,
including in some of the existing interface definitions.
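[The packing being questioned here is, at least, easy to express; the
sketch below assumes the draft keeps the existing 4-bit
XEN_DOMCTL_PFINFO type field in bits 63-60, and the helper names are
mine:]

```python
PFINFO_SHIFT = 60
PFN_MASK = (1 << PFINFO_SHIFT) - 1  # low 60 bits hold the PFN itself

def split_pfn(entry):
    """Split one pfn[] array entry into (pfinfo_type, pfn)."""
    return entry >> PFINFO_SHIFT, entry & PFN_MASK

def join_pfn(pfinfo_type, pfn):
    """Inverse: pack a type nibble back into bits 63-60."""
    assert pfn <= PFN_MASK and pfinfo_type <= 0xF
    return (pfinfo_type << PFINFO_SHIFT) | pfn
```
<test>
assert split_pfn((0x9 << 60) | 0x1234) == (0x9, 0x1234)
assert join_pfn(0x9, 0x1234) == (0x9 << 60) | 0x1234
assert split_pfn(join_pfn(0xF, (1 << 60) - 1)) == (0xF, (1 << 60) - 1)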

> Layout
> ======
> 
> The set of valid records depends on the guest architecture and type.
> 
> x86 PV Guest
> ------------
> 
> An x86 PV guest image will have in this order:
> 
> 1. Image header
> 2. Domain header
> 3. X86_PV_INFO record
> 4. At least one P2M record
> 5. At least one PAGE_DATA record
> 6. VCPU_INFO record
> 7. At least one VCPU_CONTEXT record
> 8. END record

It might be good to be more flexible in the ordering (or rather, less
prescriptive here), unless there is a reason for a strict ordering,
i.e. dependencies between records.

For example it seems like the p2m cannot change after you start sending
page_data, which I don't think works for live migration (although I'm
not sure you've yet incorporated that as a feature).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:35:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9kZ-0007NS-Qz; Tue, 11 Feb 2014 09:35:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WD9kY-0007NK-8Q
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:35:30 +0000
Received: from [85.158.139.211:39070] by server-7.bemta-5.messagelabs.com id
	47/1F-14867-1EEE9F25; Tue, 11 Feb 2014 09:35:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392111327!3087250!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20982 invoked from network); 11 Feb 2014 09:35:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:35:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="101546887"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 09:35:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:35:26 -0500
Message-ID: <1392111325.26657.53.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Brian Menges <bmenges@gogrid.com>
Date: Tue, 11 Feb 2014 09:35:25 +0000
In-Reply-To: <F33FED1E326F7448A0623CC9BFA2D4F9188292@ex-002-sfo.servepath.com>
References: <F33FED1E326F7448A0623CC9BFA2D4F9188292@ex-002-sfo.servepath.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] build xen 4.3 backport for wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 18:13 +0000, Brian Menges wrote:
> I'm attempting to build 4.3 from Jessie (Debian) and I'm bumping into
> an interesting dependency:

> RuntimeError: Can't find /usr/src/linux-support-3.10-3, please install
> the linux-support-3.10-3 package

> It looks like the build scripts aren't detecting my installation of
> linux-support-3.12-0.bpo.1 and has a version lock on 3.10-3 (which
> isn't backported, and available only in Jessie even though BPO has the
> newest 3.12).

This is a Debian packaging thing, not a Xen thing at all, and certainly
not a xen-devel thing.

IIRC the Xen packages in Debian use some of the packaging infrastructure
provided by Linux at source package preparation time (not actual build
time) and end up with a specific dependency.

You might have some luck frobbing xen/debian/rules.defs, or you might
find it easier to backport the required version.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:35:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9kr-0007PB-7g; Tue, 11 Feb 2014 09:35:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WD9kq-0007Ov-4T
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:35:48 +0000
Received: from [85.158.139.211:13207] by server-13.bemta-5.messagelabs.com id
	FC/AA-18801-3FEE9F25; Tue, 11 Feb 2014 09:35:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392111345!3103916!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23900 invoked from network); 11 Feb 2014 09:35:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:35:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="101546913"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 09:35:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:35:44 -0500
Message-ID: <1392111343.26657.54.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 09:35:43 +0000
In-Reply-To: <21241.5866.285503.630110@mariner.uk.xensource.com>
References: <1392039370-26216-1-git-send-email-ian.campbell@citrix.com>
	<21241.5866.285503.630110@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST v2] cr-daily-branch: Make it
 possible to suppress the forcing of a baseline test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 18:14 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST v2] cr-daily-branch: Make it possible to suppress the forcing of a baseline test"):
> > This is undesirable (most of the time) in a standalone environment, where you
> > are most likely to be interested in the current version and not historical
> > comparisons.
> > 
> > Not sure there isn't a better way.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks.

> (But there's a lot of stuff in the queue so don't push it just yet...)

Understood.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:37:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9m9-0007aO-1B; Tue, 11 Feb 2014 09:37:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WD9m8-0007aJ-7O
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 09:37:08 +0000
Received: from [85.158.143.35:58920] by server-1.bemta-4.messagelabs.com id
	AC/A1-31661-34FE9F25; Tue, 11 Feb 2014 09:37:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392111424!4744749!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7994 invoked from network); 11 Feb 2014 09:37:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:37:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="101547071"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 09:37:04 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:37:04 -0500
Message-ID: <1392111423.26657.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Tue, 11 Feb 2014 09:37:03 +0000
In-Reply-To: <52F92B4A.3010805@tycho.nsa.gov>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 14:40 -0500, Daniel De Graaf wrote:
> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
> > Dear all,
> >
> > I have recently configured a Xen 4.3 server with the vTPM enabled and a
> > guest virtual machine that takes advantage of it. After playing a bit
> > with it, I have a few questions:
> >
> > 1. According to the documentation, to shut down the vTPM stubdom it is
> > only necessary to shut down the guest VM normally. Theoretically, the vTPM
> > stubdom shuts down automatically after this. Nevertheless, if I shut down
> > the guest, the vTPM stubdom remains active and, moreover, I can start
> > the machine again and the vTPM values are the last ones from the
> > previous instance of the guest. Is this normal?
> 
> The documentation is in error here;

Can you send a patch, please?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:41:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:41:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WD9qO-0007wB-Pp; Tue, 11 Feb 2014 09:41:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WD9qN-0007w6-L1
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 09:41:31 +0000
Received: from [85.158.139.211:17642] by server-6.bemta-5.messagelabs.com id
	B0/30-14342-A40F9F25; Tue, 11 Feb 2014 09:41:30 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392111688!2992741!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8718 invoked from network); 11 Feb 2014 09:41:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:41:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="99730764"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 09:41:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:41:28 -0500
Message-ID: <1392111686.26657.56.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Tue, 11 Feb 2014 09:41:26 +0000
In-Reply-To: <52F96691.7000801@terremark.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, xen-devel@lists.xenproject.org,
	Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gTW9uLCAyMDE0LTAyLTEwIGF0IDE4OjUzIC0wNTAwLCBEb24gU2x1dHogd3JvdGU6Cj4gT24g
MDIvMTAvMTQgMTQ6MzMsIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4gPiBEaWQgeW91IHJ1biBh
Y2xvY2FsIGJlZm9yZSBhdXRvY29uZj8KCj4gVGhhdCBkaWQgd2hhdCB3YXMgbmVlZGVkLgoKRm9y
IGZ1dHVyZSByZWZlcmVuY2UgeW91IGNhbiBqdXN0IHJ1biAuL2F1dG9nZW4uc2ggYXQgdGhlIHRv
cCBsZXZlbCBhbmQKaXQgZG9lcyB0aGUgcmlnaHQgdGhpbmcuCgoob3IgaWYgaXQgZG9lc24ndCBJ
J2QgYXBwcmVjaWF0ZSBhIHBhdGNoIHRvIG1ha2UgaXQgZG8gdGhlIHJpZ2h0IHRoaW5nKQoKSWFu
LgoKCl9fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1k
ZXZlbCBtYWlsaW5nIGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhl
bi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:56:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDA4g-0000Oc-Hj; Tue, 11 Feb 2014 09:56:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDA4f-0000OX-Pn
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 09:56:17 +0000
Received: from [193.109.254.147:36072] by server-12.bemta-14.messagelabs.com
	id 75/E9-17220-1C3F9F25; Tue, 11 Feb 2014 09:56:17 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392112575!3453762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27789 invoked from network); 11 Feb 2014 09:56:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:56:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="99733352"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 09:56:15 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:56:14 -0500
Message-ID: <52F9F3BE.1010106@citrix.com>
Date: Tue, 11 Feb 2014 10:56:14 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Don Slutz <dslutz@verizon.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>		
	<52EC2C9C.9090202@terremark.com>		
	<21231.33273.164782.738071@mariner.uk.xensource.com>		
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>		
	<52F425AD.50209@terremark.com>		
	<1391767501.2162.20.camel@kazak.uk.xensource.com>		
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>	
	<1392026141.5117.10.camel@kazak.uk.xensource.com>	
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>	
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>	
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>
	<1392111686.26657.56.camel@kazak.uk.xensource.com>
In-Reply-To: <1392111686.26657.56.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMTEvMDIvMTQgMTA6NDEsIElhbiBDYW1wYmVsbCB3cm90ZToKPiBPbiBNb24sIDIwMTQtMDIt
MTAgYXQgMTg6NTMgLTA1MDAsIERvbiBTbHV0eiB3cm90ZToKPj4gT24gMDIvMTAvMTQgMTQ6MzMs
IFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4+PiBEaWQgeW91IHJ1biBhY2xvY2FsIGJlZm9yZSBh
dXRvY29uZj8KPiAKPj4gVGhhdCBkaWQgd2hhdCB3YXMgbmVlZGVkLgo+IAo+IEZvciBmdXR1cmUg
cmVmZXJlbmNlIHlvdSBjYW4ganVzdCBydW4gLi9hdXRvZ2VuLnNoIGF0IHRoZSB0b3AgbGV2ZWwg
YW5kCj4gaXQgZG9lcyB0aGUgcmlnaHQgdGhpbmcuCj4gCj4gKG9yIGlmIGl0IGRvZXNuJ3QgSSdk
IGFwcHJlY2lhdGUgYSBwYXRjaCB0byBtYWtlIGl0IGRvIHRoZSByaWdodCB0aGluZykKCkkgdGhp
bmsgd2UgaGF2ZSB0d28gb3B0aW9ucyBoZXJlIGlmIHdlIHdhbnQgdG8gYWRkIHRoZSBjaGVjayB2
ZXJzaW9uCnN0dWZmLCBlaXRoZXIgd2UgcnVuIGFjbG9jYWwgYmVmb3JlIGF1dG9jb25mIChhbmQg
cmVxdWlyZSB0aGUgdXNlciB0bwpoYXZlIHRoZSBhdXRvY29uZi1hcmNoaXZlIHBhY2thZ2UpIG9y
IHdlIHBpY2sgdGhlIEFYX0NPTVBBUkVfVkVSU0lPTgptYWNybyBzb3VyY2UgWzBdIGFuZCBhZGQg
aXQgdG8gb3VyIGxvY2FsIG00LyBmb2xkZXIuCgpbMF0KaHR0cDovL2dpdC5zYXZhbm5haC5nbnUu
b3JnL2dpdHdlYi8/cD1hdXRvY29uZi1hcmNoaXZlLmdpdDthPWJsb2JfcGxhaW47Zj1tNC9heF9j
b21wYXJlX3ZlcnNpb24ubTQKClJvZ2VyLgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMu
eGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:56:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDA4q-0000Oz-UU; Tue, 11 Feb 2014 09:56:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WDA4p-0000Op-8p
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 09:56:27 +0000
Received: from [193.109.254.147:37140] by server-15.bemta-14.messagelabs.com
	id 7E/2F-10839-AC3F9F25; Tue, 11 Feb 2014 09:56:26 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392112585!3453829!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28906 invoked from network); 11 Feb 2014 09:56:25 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:56:25 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so3408617eek.5
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 01:56:25 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=O20w5Rbg3TKtsbYAtTxv9fyU555hrXjzjA4op5c5DsQ=;
	b=Rp703Wbon5efyGiAOUIcBsmz5yjDtqlENXXLbmwS0PYNXaoqjE3/bZnPNC1Vr22keT
	y8GjLNsOmx8KxK2KRBbut0qS2kakcN+XE1hLvu68yvm8OfK+dihzi0NmvNlEALiHqZtq
	f//RHEpm0iBq2u7QwhnUBBriCS3yoc+7IUsY5aFfXaEh5qBCCkdxdZzL1PaXLUAF5Oa/
	T+/Is+MvoYaJDZpF1j1ifaDZ7dsDga6yJdMEXVYF+7lrPHYhL/eY4pF4R9WNxAcwMbVh
	k9A48Lpx+fRR+YuGnZoBPl7jW6bPAMeYLGY+qc6CSrsJs9GnxAnYjOi2sp6UWCt+PSbi
	VTHA==
X-Gm-Message-State: ALoCoQnt+VDE3GF/9z3Xu10r4WkZQJgJxZY5fxOEk/BTUmPimMZ31akYvBdhz6SZpXJmrK2rL5kb
X-Received: by 10.14.3.72 with SMTP id 48mr43187593eeg.34.1392112585328;
	Tue, 11 Feb 2014 01:56:25 -0800 (PST)
Received: from [192.168.1.31] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id u6sm65335805eep.11.2014.02.11.01.56.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 01:56:24 -0800 (PST)
Message-ID: <52F9F3C7.6060204@m2r.biz>
Date: Tue, 11 Feb 2014 10:56:23 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <1391528492.2441.26.camel@astar.houby.net>	<52F10EEF.7050402@m2r.biz>
	<CAFLBxZagZC2-C000jBjU3qb14Fxva08NQSwT_X9s1JC8JMvKQQ@mail.gmail.com>
In-Reply-To: <CAFLBxZagZC2-C000jBjU3qb14Fxva08NQSwT_X9s1JC8JMvKQQ@mail.gmail.com>
Cc: ehouby@yahoo.com, xen@lists.fedoraproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [TestDay] F20 Xen 4.4 RC3 Spice support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 10/02/2014 18:05, George Dunlap ha scritto:
> On Tue, Feb 4, 2014 at 4:01 PM, Fabio Fantoni <fabio.fantoni@m2r.biz> wrote:
>> Il 04/02/2014 16:41, Eric Houby ha scritto:
>>
>>> Xen list,
>>>
>>> I am trying to boot a F20 guest and connect using Spice but have run
>>> into an issue.
>>>
>>> My VM config file includes:
>>> spice = 1
>>> spicehost='0.0.0.0'
>>> spiceport=6001
>>> spicedisable_ticketing=1
>>>
>>>
>>> Is Spice supported with qemu-xen-traditional?
>>
>> No, only with upstream qemu; and if you compile Xen and qemu from source you
>> must also enable spice support in the qemu build. For example, in my Xen
>> build tests I add:
>>
>> tools/Makefile
>> @@ -201,6 +201,8 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
>>           --datadir=$(SHAREDIR)/qemu-xen \
>>           --localstatedir=/var \
>>           --disable-kvm \
>> +        --enable-spice \
>> +        --enable-usb-redir \
>>           --disable-docs \
>>           --disable-guest-agent \
>>           --python=$(PYTHON) \
>>
>> If you use upstream qemu from a distribution package, it probably already
>> has Spice built in; for example, on Debian I have already tested this and it works.
> It might be nice at some point to have this integrated into the
> top-level configure, possibly enabled by default (gated on the
> appropriate development libraries being enabled).
>
>   -George

Some time ago I already wrote a patch to do this, in three different 
ways, but all were rejected :(
One reason was the lack of the needed libraries on many distro versions 
used with Xen.
Now all newer versions include the needed spice and usbredir libraries, 
so if you would like to reconsider the patch, please point me in the 
right direction so that this can be done correctly.
The three ways I tried were:
- enabled by default
- added optional build configs for qemu upstream (the debug part is no 
longer needed because it is already present): 
http://lists.xen.org/archives/html/xen-devel/2012-03/msg00506.html
- Autoconf: add a variable to pass arbitrary options to qemu upstream: 
http://lists.xen.org/archives/html/xen-devel/2012-03/msg01677.html
There are probably also newer versions of these patches, but I could not 
find them anymore.
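To illustrate what George's "gated on the development libraries" idea could 
look like in the top-level configure, here is a rough autoconf sketch (the 
option name, minimum version and variable names are only hypothetical, not 
from any submitted patch):

```
# Hypothetical configure.ac fragment: enable Spice by default when the
# spice-server development library is found via pkg-config.
AC_ARG_ENABLE([spice],
    AS_HELP_STRING([--disable-spice], [Disable Spice support in qemu-xen]))
AS_IF([test "x$enable_spice" != "xno"],
    [PKG_CHECK_MODULES([SPICE], [spice-server >= 0.12.0],
        [enable_spice=yes], [enable_spice=no])])
```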

Thanks for any reply.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 09:56:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 09:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDA4g-0000Oc-Hj; Tue, 11 Feb 2014 09:56:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDA4f-0000OX-Pn
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 09:56:17 +0000
Received: from [193.109.254.147:36072] by server-12.bemta-14.messagelabs.com
	id 75/E9-17220-1C3F9F25; Tue, 11 Feb 2014 09:56:17 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392112575!3453762!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27789 invoked from network); 11 Feb 2014 09:56:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 09:56:16 -0000
X-IronPort-AV: E=Sophos;i="4.95,824,1384300800"; d="scan'208";a="99733352"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 09:56:15 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 04:56:14 -0500
Message-ID: <52F9F3BE.1010106@citrix.com>
Date: Tue, 11 Feb 2014 10:56:14 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Don Slutz <dslutz@verizon.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>		
	<52EC2C9C.9090202@terremark.com>		
	<21231.33273.164782.738071@mariner.uk.xensource.com>		
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>		
	<52F425AD.50209@terremark.com>		
	<1391767501.2162.20.camel@kazak.uk.xensource.com>		
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>	
	<1392026141.5117.10.camel@kazak.uk.xensource.com>	
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>	
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>	
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>
	<1392111686.26657.56.camel@kazak.uk.xensource.com>
In-Reply-To: <1392111686.26657.56.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 10:41, Ian Campbell wrote:
> On Mon, 2014-02-10 at 18:53 -0500, Don Slutz wrote:
>> On 02/10/14 14:33, Roger Pau Monné wrote:
>>> Did you run aclocal before autoconf?
> 
>> That did what was needed.
> 
> For future reference you can just run ./autogen.sh at the top level and
> it does the right thing.
> 
> (or if it doesn't I'd appreciate a patch to make it do the right thing)

I think we have two options here if we want to add the check version
stuff, either we run aclocal before autoconf (and require the user to
have the autoconf-archive package) or we pick the AX_COMPARE_VERSION
macro source [0] and add it to our local m4/ folder.

[0]
http://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_compare_version.m4

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
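For reference, the AX_COMPARE_VERSION macro Roger points to is invoked along 
these lines (a sketch only; the version variable and the 2.22 threshold are 
made-up examples, not values from the Xen tree):

```
# configure.ac sketch using autoconf-archive's
# AX_COMPARE_VERSION(VERSION_A, OP, VERSION_B, [IF-TRUE], [IF-FALSE]):
# compare a discovered tool version against a required minimum.
AX_COMPARE_VERSION([$glib_version], [ge], [2.22],
    [AC_MSG_NOTICE([glib is new enough])],
    [AC_MSG_ERROR([glib >= 2.22 is required])])
```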

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:01:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDA9B-00018U-RU; Tue, 11 Feb 2014 10:00:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDA9A-00018M-0Z
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 10:00:56 +0000
Received: from [85.158.143.35:38608] by server-1.bemta-4.messagelabs.com id
	8A/E3-31661-7D4F9F25; Tue, 11 Feb 2014 10:00:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392112853!4755439!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27656 invoked from network); 11 Feb 2014 10:00:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:00:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101550862"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 10:00:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 05:00:52 -0500
Message-ID: <1392112851.26657.59.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Tue, 11 Feb 2014 10:00:51 +0000
In-Reply-To: <52F9F3BE.1010106@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>
	<1392111686.26657.56.camel@kazak.uk.xensource.com>
	<52F9F3BE.1010106@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVHVlLCAyMDE0LTAyLTExIGF0IDEwOjU2ICswMTAwLCBSb2dlciBQYXUgTW9ubsOpIHdyb3Rl
Ogo+IE9uIDExLzAyLzE0IDEwOjQxLCBJYW4gQ2FtcGJlbGwgd3JvdGU6Cj4gPiBPbiBNb24sIDIw
MTQtMDItMTAgYXQgMTg6NTMgLTA1MDAsIERvbiBTbHV0eiB3cm90ZToKPiA+PiBPbiAwMi8xMC8x
NCAxNDozMywgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToKPiA+Pj4gRGlkIHlvdSBydW4gYWNsb2Nh
bCBiZWZvcmUgYXV0b2NvbmY/Cj4gPiAKPiA+PiBUaGF0IGRpZCB3aGF0IHdhcyBuZWVkZWQuCj4g
PiAKPiA+IEZvciBmdXR1cmUgcmVmZXJlbmNlIHlvdSBjYW4ganVzdCBydW4gLi9hdXRvZ2VuLnNo
IGF0IHRoZSB0b3AgbGV2ZWwgYW5kCj4gPiBpdCBkb2VzIHRoZSByaWdodCB0aGluZy4KPiA+IAo+
ID4gKG9yIGlmIGl0IGRvZXNuJ3QgSSdkIGFwcHJlY2lhdGUgYSBwYXRjaCB0byBtYWtlIGl0IGRv
IHRoZSByaWdodCB0aGluZykKPiAKPiBJIHRoaW5rIHdlIGhhdmUgdHdvIG9wdGlvbnMgaGVyZSBp
ZiB3ZSB3YW50IHRvIGFkZCB0aGUgY2hlY2sgdmVyc2lvbgo+IHN0dWZmLCBlaXRoZXIgd2UgcnVu
IGFjbG9jYWwgYmVmb3JlIGF1dG9jb25mIChhbmQgcmVxdWlyZSB0aGUgdXNlciB0bwo+IGhhdmUg
dGhlIGF1dG9jb25mLWFyY2hpdmUgcGFja2FnZSkgb3Igd2UgcGljayB0aGUgQVhfQ09NUEFSRV9W
RVJTSU9OCj4gbWFjcm8gc291cmNlIFswXSBhbmQgYWRkIGl0IHRvIG91ciBsb2NhbCBtNC8gZm9s
ZGVyLgoKSXMgaXQgdGhlIGNhc2UgdGhhdCB3ZSBjdXJyZW50bHkgZG8gdGhlIGxhdHRlciBmb3Ig
YWwgdGhlIGV4aXN0aW5nCmNoZWNrcz8KCj4gCj4gWzBdCj4gaHR0cDovL2dpdC5zYXZhbm5haC5n
bnUub3JnL2dpdHdlYi8/cD1hdXRvY29uZi1hcmNoaXZlLmdpdDthPWJsb2JfcGxhaW47Zj1tNC9h
eF9jb21wYXJlX3ZlcnNpb24ubTQKPiAKPiBSb2dlci4KCgoKX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2
ZWxAbGlzdHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:02:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:02:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDAAC-0001E4-Ao; Tue, 11 Feb 2014 10:02:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jordi.cucurull@scytl.com>) id 1WDAAB-0001Dw-4o
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 10:01:59 +0000
Received: from [85.158.143.35:42161] by server-3.bemta-4.messagelabs.com id
	DB/D5-11539-615F9F25; Tue, 11 Feb 2014 10:01:58 +0000
X-Env-Sender: jordi.cucurull@scytl.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392112917!4754353!1
X-Originating-IP: [217.111.179.100]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20700 invoked from network); 11 Feb 2014 10:01:57 -0000
Received: from mail3.scytl.com (HELO mail3.scytl.com) (217.111.179.100)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 10:01:57 -0000
Received: from [10.0.16.210] (217.111.178.66) by mail3.scytl.com (Axigen)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPSA id 07FC43;
	Tue, 11 Feb 2014 11:01:56 +0100
Message-ID: <52F9F514.8040907@scytl.com>
Date: Tue, 11 Feb 2014 11:01:56 +0100
From: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
	Gecko/20130912 Thunderbird/17.0.9
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
In-Reply-To: <52F92B4A.3010805@tycho.nsa.gov>
X-Enigmail-Version: 1.6
Cc: xen-devel@lists.xenproject.org,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Daniel,

Thanks for your thorough answer. I have a few comments below.

On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>> Dear all,
>>
>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
>> guest virtual machine that takes advantage of it. After playing a bit
>> with it, I have a few questions:
>>
>> 1. According to the documentation, to shut down the vTPM stubdom it is
>> only necessary to shut down the guest VM normally; theoretically, the
>> vTPM stubdom automatically shuts down after this. Nevertheless, if I
>> shut down the guest, the vTPM stubdom stays active and, moreover, I can
>> start the machine again and the vTPM still holds the last values from
>> the previous instance of the guest. Is this normal?
>
> The documentation is in error here; while this was originally how the
> vTPM domain behaved, this automatic shutdown was not reliable: it was
> not done if the peer domain did not use the vTPM, and it was
> incorrectly triggered by pv-grub's use of the vTPM to record guest
> kernel measurements (which was the immediate reason for its removal).
> The solution now is to either send a shutdown request or simply
> destroy the vTPM upon guest shutdown.
>
> An alternative that may require less work on your part is to destroy
> and recreate the vTPM stub domain during a guest's construction,
> something like:
>
> #!/bin/sh -e
> xl destroy "$1-vtpm" || true
> xl create "$1-vtpm.cfg"
> xl create "$1-domu.cfg"
>
> Allowing a vTPM to remain active across a guest restart will cause the
> PCR values extended by pv-grub to be incorrect, as you observed in your
> second email. In order for the vTPM's PCRs to be useful for quotes or
> releasing sealed secrets, you need to ensure that a new vTPM is started
> if and only if it is paired with a corresponding guest.

I see a potential threat due to this behaviour (please correct me if I
am wrong).

Assume an administrator of Dom0 becomes malicious. Since the hypervisor
does not enforce the shutdown of the vTPM domain, the malicious
administrator could try the following: 1) make a copy of the peer
domain, 2) manipulate the copy of the peer domain and disable its
measurements, 3) boot the original peer domain, 4) switch it off or
pause it, 5) boot the manipulated copy of the peer domain.

Then, the PCR values shown by the manipulated copy of the peer domain
are the ones measured by the original peer domain during the first boot,
but the manipulated copy is the one actually running. Hence, this could
not be detected by quoting either the vTPM or the pTPM.


Maybe one possible solution would be to enforce an XSM FLASK policy that
prevents any user in Dom0 from destroying, shutting down or pausing a
domain, and then measure the policy into a PCR of the physical TPM when
Dom0 starts. Nevertheless, on the one hand I do not know whether this is
feasible and, on the other hand, it would prevent the system from
destroying the vTPM domain when the peer domain shuts down.
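(For illustration only, with entirely hypothetical type names: since FLASK
is default-deny, such a policy would simply never grant Dom0 the destructive
domain permissions on the vTPM's type, along these lines:)

```
# Hypothetical FLASK policy fragment: dom0 gets only a read-style
# permission on domains labelled vtpm_t; destroy/shutdown/pause are
# denied implicitly because they are never allowed.
allow dom0_t vtpm_t:domain { getdomaininfo };
```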

>
>> 2. In the documentation it is recommended to avoid accessing the physical
>> TPM from Dom0 at the same time as the vTPM Manager stubdom.
>> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
>> without any apparent issue. Why is directly accessing the physical TPM
>> from Dom0 not recommended?
>
> While most of the time it is not a problem to have two entities talking
> to the physical TPM, it is possible for the trousers daemon in dom0 to
> interfere with key handles used by the TPM Manager. There are also
> certain operations of the TPM that may not handle concurrency, although
> I do not believe that trousers uses them - SHA1Start, the DAA commands,
> and certain audit logs come to mind.
>
> The other reason why it is recommended to avoid pTPM access from dom0 is
> because the ability to send unseal/unbind requests to the physical TPM
> makes
> it possible for applications running in dom0 to decrypt the TPM Manager's
> data (and thereby access vTPM private keys).
>
> At present, sharing the physical TPM between dom0 and the TPM Manager is
> the only way to get full integrity checks.

OK, I see. Hence leaving TPM support enabled in Dom0 opens a security
problem for the vTPM. But if we do not enable this support, the
integrity of Dom0 cannot be proven using the TPM (e.g. by remote
attestation).

>
>> 3. If it is not recommended to directly access the physical TPM in
>> Dom0, what is the advisable way to check the integrity of this domain?
>> With solutions such as TBOOT and Intel TXT?
>
> While the TPM Manager in Xen 4.3/4.4 does not yet have this
> functionality, an update which I will be submitting for inclusion in
> Xen 4.5 has the ability to get physical TPM quotes using a virtual TPM.
> Combined with an early domain builder, the eventual goal is to have
> dom0 use a vTPM for its integrity/reporting/sealing operations, and use
> the physical TPM only to secure the secrets of vTPMs and for deep
> quotes to provide fresh proofs of the system's state.

This sounds really good. I look forward to trying it in Xen 4.5!


Thank you for your answers!
Jordi.

>
>> Best regards,
>> Jordi.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Daniel,

Thanks for your thorough answer. I have a few comments below.

On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>> Dear all,
>>
>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
>> guest virtual machine that takes advantage of it. After playing a bit
>> with it, I have a few questions:
>>
>> 1.According to the documentation, to shutdown the vTPM stubdom it is
>> only needed to normally shutdown the guest VM. Theoretically, the vTPM
>> stubdom automatically shuts down after this. Nevertheless, if I shutdown
>> the guest the vTPM stubdom continues active and, moreover, I can start
>> the machine again and the values of the vTPM are the last ones there
>> were in the previous instance of the guest. Is this normal?
>
> The documentation is in error here; while this was originally how the
> vTPM
> domain behaved, this automatic shutdown was not reliable: it was not done
> if the peer domain did not use the vTPM, and it was incorrectly triggered
> by pv-grub's use of the vTPM to record guest kernel measurements
> (which was
> the immediate reason for its removal). The solution now is to either
> send a
> shutdown request or simply destroy the vTPM upon guest shutdown.
>
> An alternative that may require less work on your part is to destroy
> the vTPM stub domain during a guest's construction, something like:
>
> #!/bin/sh -e
> xl destroy "$1-vtpm" || true
> xl create "$1-vtpm.cfg"
> xl create "$1-domu.cfg"
>
> Allowing a vTPM to remain active across a guest restart will cause the
> PCR values extended by pv-grub to be incorrect, as you observed in your
> second email. In order for the vTPM's PCRs to be useful for quotes or
> releasing sealed secrets, you need to ensure that a new vTPM is started
> if and only if it is paired with a corresponding guest.
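The "new vTPM if and only if it is paired with a corresponding guest" rule can be automated. Below is a minimal sketch in Python: the `xl destroy`/`xl create` subcommands are real, but the `<name>-vtpm` / `<name>-domu` config-file naming convention is just the hypothetical one from the script above, and the injectable `run` parameter exists only so the command sequence can be exercised without a Xen host.

```python
import subprocess

def start_guest_with_fresh_vtpm(name, run=subprocess.check_call):
    """Tear down any stale vTPM, then build the vTPM and its guest
    together, so the vTPM's PCR history matches exactly one guest boot."""
    try:
        run(["xl", "destroy", name + "-vtpm"])  # stale vTPM may not exist
    except Exception:
        pass
    run(["xl", "create", name + "-vtpm.cfg"])
    run(["xl", "create", name + "-domu.cfg"])
```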

I see a potential threat due to this behaviour (please correct me if I
am wrong).

Assume an administrator of Dom0 becomes malicious. Since the hypervisor
does not enforce the shut down of the vTPM domain, the malicious
administrator could try the following: 1) make a copy of the peer
domain, 2) manipulate the copy of the peer domain and disable its
measurements, 3) boot the original peer domain, 4) switch it off or
pause it, 5) boot the manipulated copy of the peer domain.

Then, the PCR values shown for the manipulated copy of the peer domain
are the ones measured by the original peer domain during its first boot,
but the manipulated copy is the one actually running. Hence, this could
not be detected by quoting either the vTPM or the pTPM.


One possible solution could be to enforce an XSM FLASK policy that
prevents any user in Dom0 from destroying, shutting down or pausing a
domain, and then measure the policy into a PCR of the physical TPM when
Dom0 starts. Nevertheless, on the one hand I do not know if this is
feasible and, on the other hand, it prevents the system from destroying
the vTPM domain when the peer domain shuts down.

>
>> 2.In the documentation it is recommended to avoid accessing the physical
>> TPM from Dom0 at the same time than the vTPM Manager stubdom.
>> Nevertheless, I currently have the IMA and the Trousers enabled in Dom0
>> without any apparent issue. Why is not recommended directly accessing
>> the physical TPM of Dom0?
>
> While most of the time it is not a problem to have two entities
> talking to
> the physical TPM, it is possible for the trousers daemon in dom0 to
> interfere
> with key handles used by the TPM Manager. There are also certain
> operations
> of the TPM that may not handle concurrency, although I do not believe
> that
> trousers uses them - SHA1Start, the DAA commands, and certain audit logs
> come to mind.
>
> The other reason why it is recommended to avoid pTPM access from dom0 is
> because the ability to send unseal/unbind requests to the physical TPM
> makes
> it possible for applications running in dom0 to decrypt the TPM Manager's
> data (and thereby access vTPM private keys).
>
> At present, sharing the physical TPM between dom0 and the TPM Manager is
> the only way to get full integrity checks.

OK, I see. Hence leaving TPM support enabled in Dom0 opens a security
hole for the vTPM; but if we do not enable the support, the integrity
of Dom0 cannot be proved using the TPM (e.g. by remote attestation).

>
>> 3.If it is not recommended to directly accessing the physical TPM in
>> Dom0, which is the advisable way to check the integrity of this domain?
>> With solutions such as TBOOT and IntelTXT?
>
> While the TPM Manager in Xen 4.3/4.4 does not yet have this
> functionality,
> an update which I will be submitting for inclusion in Xen 4.5 has the
> ability to get physical TPM quotes using a virtual TPM. Combined with an
> early domain builder, the eventual goal is to have dom0 use a vTPM for
> its integrity/reporting/sealing operations, and use the physical TPM only
> to secure the secrets of vTPMs and for deep quotes to provide fresh
> proofs
> of the system's state.

This sounds really good. I look forward to trying it in Xen 4.5!


Thank you for your answers!
Jordi.

>
>> Best regards,
>> Jordi.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:06:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:06:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDAEl-0001QW-6N; Tue, 11 Feb 2014 10:06:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDAEj-0001QP-Ip
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 10:06:41 +0000
Received: from [193.109.254.147:5175] by server-16.bemta-14.messagelabs.com id
	72/91-21945-F26F9F25; Tue, 11 Feb 2014 10:06:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392113198!3452656!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30553 invoked from network); 11 Feb 2014 10:06:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 10:06:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 10:06:35 +0000
Message-Id: <52FA043B020000780011B10C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 10:06:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <52F90A71.40802@citrix.com>
In-Reply-To: <52F90A71.40802@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 10.02.14 at 18:20, David Vrabel <david.vrabel@citrix.com> wrote:
> Fields
> ------
> 
> All the fields within the headers and records have a fixed width.
> 
> Fields are always aligned to their size.
> 
> Padding and reserved fields are set to zero on save and must be
> ignored during restore.

Meaning it would be impossible to assign a meaning to these fields
later. I'd rather mandate that the restore side has to check these
fields are zero, and bail if they aren't.
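A restore-side check of the sort suggested here might look like the following sketch. The layout (four 16-bit big-endian fields) is read off the draft's domain-header diagram quoted further down and is an assumption about the eventual encoding, not the final spec.

```python
import struct

def parse_domain_header(buf):
    """Parse the draft's 8-octet domain header, bailing on nonzero
    reserved bits instead of silently ignoring them, so the field can
    safely be given a meaning in a later version."""
    if len(buf) != 8:
        raise ValueError("domain header must be exactly 8 octets")
    arch, dtype, page_shift, reserved = struct.unpack(">HHHH", buf)
    if reserved != 0:
        raise ValueError("reserved field is nonzero: %#x" % reserved)
    return {"arch": arch, "type": dtype, "page_shift": page_shift}
```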

> Integer (numeric) fields in the image header are always in big-endian
> byte order.

Why would big endian be preferable when both currently
supported architectures use little endian?

> Domain Header
> -------------
> 
> The domain header includes general properties of the domain.
> 
>      0      1     2     3     4     5     6     7 octet
>     +-----------+-----------+-----------+-------------+
>     | arch      | type      | page_shift| (reserved)  |
>     +-----------+-----------+-----------+-------------+
> 
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> arch        0x0000: Reserved.
> 
>             0x0001: x86.
> 
>             0x0002: ARM.
> 
> type        0x0000: Reserved.
> 
>             0x0001: x86 PV.
> 
>             0x0002 - 0xFFFF: Reserved.

So how would ARM, x86 HVM, and x86 PVH be expressed?

> P2M
> ---
> 
> [ This is a more flexible replacement for the old p2m_size field and
> p2m array. ]
> 
> The P2M record contains a portion of the source domain's P2M.
> Multiple P2M records may be sent if the source P2M changes during the
> stream.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | pfn_begin                                       |
>     +-------------------------------------------------+
>     | pfn_end                                         |
>     +-------------------------------------------------+
>     | mfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | mfn[N-1]                                        |
>     +-------------------------------------------------+
> 
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> pfn_begin   The first PFN in this portion of the P2M
> 
> pfn_end     One past the last PFN in this portion of the P2M.

I'd favor an inclusive range here, such that if we ever reach a
fully populatable 64-bit PFN space (on some future architecture)
there'd still be no issue with special casing the then unavoidable
wraparound.
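The wraparound concern can be made concrete with hypothetical values, masking arithmetic to 64 bits as a fixed-width image field would:

```python
MASK64 = (1 << 64) - 1

# Exclusive "one past the end" encoding: a P2M chunk covering the very
# top of a fully-populated 64-bit PFN space cannot be represented,
# because pfn_end wraps around to a value below pfn_begin.
last_pfn = MASK64                             # 2^64 - 1, highest PFN
pfn_begin = last_pfn - 3                      # a 4-entry chunk at the top
pfn_end_exclusive = (last_pfn + 1) & MASK64   # wraps to 0
assert pfn_end_exclusive < pfn_begin          # ordering is broken

# An inclusive encoding has no such edge case: the last PFN stays
# representable and the entry count is still easy to derive.
pfn_last_inclusive = last_pfn
assert pfn_last_inclusive >= pfn_begin
count = pfn_last_inclusive - pfn_begin + 1
```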

> Layout
> ======
> 
> The set of valid records depends on the guest architecture and type.
> 
> x86 PV Guest
> ------------
> 
> An x86 PV guest image will have in this order:
> 
> 1. Image header
> 2. Domain header
> 3. X86_PV_INFO record
> 4. At least one P2M record
> 5. At least one PAGE_DATA record
> 6. VCPU_INFO record
> 7. At least one VCPU_CONTEXT record
> 8. END record

Is it necessary to require this strict ordering? Obviously the headers
have to be first and the END record last, but at least some of the
other records don't seem to need to be at exactly the place you list
them.

> Legacy Images (x86 only)
> ========================
> 
> Restoring legacy images from older tools shall be handled by
> translating the legacy format image into this new format.
> 
> It shall not be possible to save in the legacy format.
> 
> There are two different legacy images depending on whether they were
> generated by a 32-bit or a 64-bit toolstack. These shall be
> distinguished by inspecting octets 4-7 in the image.  If these are
> zero then it is a 64-bit image.
> 
> Toolstack  Field                            Value
> ---------  -----                            -----
> 64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)

Afaics this is being determined via xc_domain_maximum_gpfn(),
which I don't think guarantees the result to be limited to 2^32.
Or in fact the libxc interface wrongly limits the value (by
truncating the "long" returned from the hypercall to an "int"). So
in practice consistent images would have the field limited to 2^31
on 64-bit tool stacks (since for larger values the negative function
return value would get converted by sign-extension, but all sorts
of other trouble would result due to the now huge p2m_size).
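The truncation failure mode described here can be illustrated in isolation. This sketch mimics C's conversion of a 64-bit `long` to a 32-bit signed `int`; the sample values are hypothetical max_gpfn results above 2^31 and 2^32.

```python
def truncate_long_to_int(value):
    """Mimic C's long -> int conversion on a typical 64-bit platform:
    keep the low 32 bits, then sign-extend bit 31."""
    low = value & 0xFFFFFFFF
    return low - (1 << 32) if low & 0x80000000 else low

# A value just above 2^32 silently loses its high bits...
assert truncate_long_to_int(0x100000001) == 1
# ...and a value with bit 31 set comes back negative, exactly the
# failure mode described for p2m_size on 64-bit tool stacks.
assert truncate_long_to_int(0x80000000) == -0x80000000
```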

> Future Extensions
> =================
> 
> All changes to this format require the image version to be increased.

Oh, okay, this partly deals with the first question above. Question
is whether that's a useful requirement, i.e. whether that wouldn't
lead to an inflation of versions needing conversion (for a tool stack
that wants to support more than just migration from N-1).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:12:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDAKH-00022Q-AF; Tue, 11 Feb 2014 10:12:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDAKG-00022L-LP
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 10:12:24 +0000
Received: from [85.158.143.35:48069] by server-3.bemta-4.messagelabs.com id
	48/3F-11539-887F9F25; Tue, 11 Feb 2014 10:12:24 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392113542!4758345!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32338 invoked from network); 11 Feb 2014 10:12:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:12:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99736391"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 10:12:21 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 05:12:21 -0500
Message-ID: <52F9F783.2040708@citrix.com>
Date: Tue, 11 Feb 2014 11:12:19 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>			
	<52EC2C9C.9090202@terremark.com>			
	<21231.33273.164782.738071@mariner.uk.xensource.com>			
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>			
	<52F425AD.50209@terremark.com>			
	<1391767501.2162.20.camel@kazak.uk.xensource.com>			
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>		
	<1392026141.5117.10.camel@kazak.uk.xensource.com>		
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>		
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>		
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>	
	<1392111686.26657.56.camel@kazak.uk.xensource.com>	
	<52F9F3BE.1010106@citrix.com>
	<1392112851.26657.59.camel@kazak.uk.xensource.com>
In-Reply-To: <1392112851.26657.59.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 11:00, Ian Campbell wrote:
> On Tue, 2014-02-11 at 10:56 +0100, Roger Pau Monné wrote:
>> On 11/02/14 10:41, Ian Campbell wrote:
>>> On Mon, 2014-02-10 at 18:53 -0500, Don Slutz wrote:
>>>> On 02/10/14 14:33, Roger Pau Monné wrote:
>>>>> Did you run aclocal before autoconf?
>>>
>>>> That did what was needed.
>>>
>>> For future reference you can just run ./autogen.sh at the top level and
>>> it does the right thing.
>>>
>>> (or if it doesn't I'd appreciate a patch to make it do the right thing)
>>
>> I think we have two options here if we want to add the check version
>> stuff, either we run aclocal before autoconf (and require the user to
>> have the autoconf-archive package) or we pick the AX_COMPARE_VERSION
>> macro source [0] and add it to our local m4/ folder.
>
> Is it the case that we currently do the latter for al the existing
> checks?

Yes, but I'm not sure if any of the macros we have inside of m4/ are
also part of autoconf-archive (at least some of those are custom-made
AFAIK). I also think it's best to do the latter, I'm going to add the
macro itself to my patch and resend.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

ZSAoYXQgbGVhc3Qgc29tZSBvZiB0aG9zZSBhcmUgY3VzdG9tLW1hZGUKQUZBSUspLiBJIGFsc28g
dGhpbmsgaXQncyBiZXN0IHRvIGRvIHRoZSBsYXR0ZXIsIEknbSBnb2luZyB0byBhZGQgdGhlCm1h
Y3JvIGl0c2VsZiB0byBteSBwYXRjaCBhbmQgcmVzZW5kLgoKUm9nZXIuCgpfX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0
Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:14:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDAM6-00028j-RZ; Tue, 11 Feb 2014 10:14:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDAM4-00028d-KI
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 10:14:17 +0000
Received: from [193.109.254.147:40426] by server-6.bemta-14.messagelabs.com id
	8F/48-03396-7F7F9F25; Tue, 11 Feb 2014 10:14:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392113654!3459618!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7744 invoked from network); 11 Feb 2014 10:14:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:14:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101553031"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 10:14:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 05:14:13 -0500
Message-ID: <1392113651.26657.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Tue, 11 Feb 2014 10:14:11 +0000
In-Reply-To: <52F9F783.2040708@citrix.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>
	<1392111686.26657.56.camel@kazak.uk.xensource.com>
	<52F9F3BE.1010106@citrix.com>
	<1392112851.26657.59.camel@kazak.uk.xensource.com>
	<52F9F783.2040708@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gVHVlLCAyMDE0LTAyLTExIGF0IDExOjEyICswMTAwLCBSb2dlciBQYXUgTW9ubsOpIHdyb3Rl
Ogo+IE9uIDExLzAyLzE0IDExOjAwLCBJYW4gQ2FtcGJlbGwgd3JvdGU6Cj4gPiBPbiBUdWUsIDIw
MTQtMDItMTEgYXQgMTA6NTYgKzAxMDAsIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4gPj4gT24g
MTEvMDIvMTQgMTA6NDEsIElhbiBDYW1wYmVsbCB3cm90ZToKPiA+Pj4gT24gTW9uLCAyMDE0LTAy
LTEwIGF0IDE4OjUzIC0wNTAwLCBEb24gU2x1dHogd3JvdGU6Cj4gPj4+PiBPbiAwMi8xMC8xNCAx
NDozMywgUm9nZXIgUGF1IE1vbm7DqSB3cm90ZToKPiA+Pj4+PiBEaWQgeW91IHJ1biBhY2xvY2Fs
IGJlZm9yZSBhdXRvY29uZj8KPiA+Pj4KPiA+Pj4+IFRoYXQgZGlkIHdoYXQgd2FzIG5lZWRlZC4K
PiA+Pj4KPiA+Pj4gRm9yIGZ1dHVyZSByZWZlcmVuY2UgeW91IGNhbiBqdXN0IHJ1biAuL2F1dG9n
ZW4uc2ggYXQgdGhlIHRvcCBsZXZlbCBhbmQKPiA+Pj4gaXQgZG9lcyB0aGUgcmlnaHQgdGhpbmcu
Cj4gPj4+Cj4gPj4+IChvciBpZiBpdCBkb2Vzbid0IEknZCBhcHByZWNpYXRlIGEgcGF0Y2ggdG8g
bWFrZSBpdCBkbyB0aGUgcmlnaHQgdGhpbmcpCj4gPj4KPiA+PiBJIHRoaW5rIHdlIGhhdmUgdHdv
IG9wdGlvbnMgaGVyZSBpZiB3ZSB3YW50IHRvIGFkZCB0aGUgY2hlY2sgdmVyc2lvbgo+ID4+IHN0
dWZmLCBlaXRoZXIgd2UgcnVuIGFjbG9jYWwgYmVmb3JlIGF1dG9jb25mIChhbmQgcmVxdWlyZSB0
aGUgdXNlciB0bwo+ID4+IGhhdmUgdGhlIGF1dG9jb25mLWFyY2hpdmUgcGFja2FnZSkgb3Igd2Ug
cGljayB0aGUgQVhfQ09NUEFSRV9WRVJTSU9OCj4gPj4gbWFjcm8gc291cmNlIFswXSBhbmQgYWRk
IGl0IHRvIG91ciBsb2NhbCBtNC8gZm9sZGVyLgo+ID4gCj4gPiBJcyBpdCB0aGUgY2FzZSB0aGF0
IHdlIGN1cnJlbnRseSBkbyB0aGUgbGF0dGVyIGZvciBhbCB0aGUgZXhpc3RpbmcKPiA+IGNoZWNr
cz8KPiAKPiBZZXMsIGJ1dCBJJ20gbm90IHN1cmUgaWYgYW55IG9mIHRoZSBtYWNyb3Mgd2UgaGF2
ZSBpbnNpZGUgb2YgbTQvIGFyZQo+IGFsc28gcGFydCBvZiBhdXRvY29uZi1hcmNoaXZlIChhdCBs
ZWFzdCBzb21lIG9mIHRob3NlIGFyZSBjdXN0b20tbWFkZQo+IEFGQUlLKS4gSSBhbHNvIHRoaW5r
IGl0J3MgYmVzdCB0byBkbyB0aGUgbGF0dGVyLCBJJ20gZ29pbmcgdG8gYWRkIHRoZQo+IG1hY3Jv
IGl0c2VsZiB0byBteSBwYXRjaCBhbmQgcmVzZW5kLgoKUmlnaHQsIEkgdGhpbmsgdGhhdCBpcyBi
ZXN0IGZvciBub3cuCgpJYW4uCgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9y
ZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:38:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDAjX-0003ET-Lr; Tue, 11 Feb 2014 10:38:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDAjW-0003EO-OU
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 10:38:31 +0000
Received: from [85.158.139.211:63998] by server-1.bemta-5.messagelabs.com id
	05/29-12859-5ADF9F25; Tue, 11 Feb 2014 10:38:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392115107!3107512!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3019 invoked from network); 11 Feb 2014 10:38:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:38:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99741005"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 10:38:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 05:38:26 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WDAjS-0000gW-2d;
	Tue, 11 Feb 2014 10:38:26 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 11:38:24 +0100
Message-ID: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

U2lnbmVkLW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+CkNj
OiBEb24gU2x1dHogPGRzbHV0ekB2ZXJpem9uLmNvbT4KQ2M6IElhbiBDYW1wYmVsbCA8aWFuLmNh
bXBiZWxsQGNpdHJpeC5jb20+CkNjOiBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25AY2l0cml4LmNv
bT4KLS0tCkkndmUgcmVtb3ZlZCBEb24ncyAiVGVzdGVkLWJ5IiBiZWNhdXNlIHRoZSBwYXRjaCBo
YXMgY2hhbmdlZCwgY291bGQKeW91IHBsZWFzZSByZS10ZXN0IGl0IERvbj8KClBsZWFzZSBydW4g
YXV0b2dlbi5zaCBhZnRlciBhcHBseWluZy4KLS0tCiBtNC9heF9jb21wYXJlX3ZlcnNpb24ubTQg
fCAgMTc5ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysKIHRv
b2xzL2NvbmZpZ3VyZS5hYyAgICAgICB8ICAgIDcgKysKIDIgZmlsZXMgY2hhbmdlZCwgMTg2IGlu
c2VydGlvbnMoKyksIDAgZGVsZXRpb25zKC0pCiBjcmVhdGUgbW9kZSAxMDA2NDQgbTQvYXhfY29t
cGFyZV92ZXJzaW9uLm00CgpkaWZmIC0tZ2l0IGEvbTQvYXhfY29tcGFyZV92ZXJzaW9uLm00IGIv
bTQvYXhfY29tcGFyZV92ZXJzaW9uLm00Cm5ldyBmaWxlIG1vZGUgMTAwNjQ0CmluZGV4IDAwMDAw
MDAuLjI2ZjRkZWMKLS0tIC9kZXYvbnVsbAorKysgYi9tNC9heF9jb21wYXJlX3ZlcnNpb24ubTQK
QEAgLTAsMCArMSwxNzkgQEAKKyMgRmV0Y2hlZCBmcm9tIGh0dHA6Ly9naXQuc2F2YW5uYWguZ251
Lm9yZy9naXR3ZWIvP3A9YXV0b2NvbmYtYXJjaGl2ZS5naXQ7YT1ibG9iX3BsYWluO2Y9bTQvYXhf
Y29tcGFyZV92ZXJzaW9uLm00CisjIENvbW1pdCBJRDogMjc5NDhmNDljYTMwZTQyMjJiYjdjZGQ1
NTE4MmJkNzM0MWFjNTBjNQorIyA9PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT0KKyMgICAgaHR0cDovL3d3dy5n
bnUub3JnL3NvZnR3YXJlL2F1dG9jb25mLWFyY2hpdmUvYXhfY29tcGFyZV92ZXJzaW9uLmh0bWwK
KyMgPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09CisjCisjIFNZTk9QU0lTCisjCisjICAgQVhfQ09NUEFSRV9W
RVJTSU9OKFZFUlNJT05fQSwgT1AsIFZFUlNJT05fQiwgW0FDVElPTi1JRi1UUlVFXSwgW0FDVElP
Ti1JRi1GQUxTRV0pCisjCisjIERFU0NSSVBUSU9OCisjCisjICAgVGhpcyBtYWNybyBjb21wYXJl
cyB0d28gdmVyc2lvbiBzdHJpbmdzLiBEdWUgdG8gdGhlIHZhcmlvdXMgbnVtYmVyIG9mCisjICAg
bWlub3ItdmVyc2lvbiBudW1iZXJzIHRoYXQgY2FuIGV4aXN0LCBhbmQgdGhlIGZhY3QgdGhhdCBz
dHJpbmcKKyMgICBjb21wYXJpc29ucyBhcmUgbm90IGNvbXBhdGlibGUgd2l0aCBudW1lcmljIGNv
bXBhcmlzb25zLCB0aGlzIGlzIG5vdAorIyAgIG5lY2Vzc2FyaWx5IHRyaXZpYWwgdG8gZG8gaW4g
YSBhdXRvY29uZiBzY3JpcHQuIFRoaXMgbWFjcm8gbWFrZXMgZG9pbmcKKyMgICB0aGVzZSBjb21w
YXJpc29ucyBlYXN5LgorIworIyAgIFRoZSBzaXggYmFzaWMgY29tcGFyaXNvbnMgYXJlIGF2YWls
YWJsZSwgYXMgd2VsbCBhcyBjaGVja2luZyBlcXVhbGl0eQorIyAgIGxpbWl0ZWQgdG8gYSBjZXJ0
YWluIG51bWJlciBvZiBtaW5vci12ZXJzaW9uIGxldmVscy4KKyMKKyMgICBUaGUgb3BlcmF0b3Ig
T1AgZGV0ZXJtaW5lcyB3aGF0IHR5cGUgb2YgY29tcGFyaXNvbiB0byBkbywgYW5kIGNhbiBiZSBv
bmUKKyMgICBvZjoKKyMKKyMgICAgZXEgIC0gZXF1YWwgKHRlc3QgQSA9PSBCKQorIyAgICBuZSAg
LSBub3QgZXF1YWwgKHRlc3QgQSAhPSBCKQorIyAgICBsZSAgLSBsZXNzIHRoYW4gb3IgZXF1YWwg
KHRlc3QgQSA8PSBCKQorIyAgICBnZSAgLSBncmVhdGVyIHRoYW4gb3IgZXF1YWwgKHRlc3QgQSA+
PSBCKQorIyAgICBsdCAgLSBsZXNzIHRoYW4gKHRlc3QgQSA8IEIpCisjICAgIGd0ICAtIGdyZWF0
ZXIgdGhhbiAodGVzdCBBID4gQikKKyMKKyMgICBBZGRpdGlvbmFsbHksIHRoZSBlcSBhbmQgbmUg
b3BlcmF0b3IgY2FuIGhhdmUgYSBudW1iZXIgYWZ0ZXIgaXQgdG8gbGltaXQKKyMgICB0aGUgdGVz
dCB0byB0aGF0IG51bWJlciBvZiBtaW5vciB2ZXJzaW9ucy4KKyMKKyMgICAgZXEwIC0gZXF1YWwg
dXAgdG8gdGhlIGxlbmd0aCBvZiB0aGUgc2hvcnRlciB2ZXJzaW9uCisjICAgIG5lMCAtIG5vdCBl
cXVhbCB1cCB0byB0aGUgbGVuZ3RoIG9mIHRoZSBzaG9ydGVyIHZlcnNpb24KKyMgICAgZXFOIC0g
ZXF1YWwgdXAgdG8gTiBzdWItdmVyc2lvbiBsZXZlbHMKKyMgICAgbmVOIC0gbm90IGVxdWFsIHVw
IHRvIE4gc3ViLXZlcnNpb24gbGV2ZWxzCisjCisjICAgV2hlbiB0aGUgY29uZGl0aW9uIGlzIHRy
dWUsIHNoZWxsIGNvbW1hbmRzIEFDVElPTi1JRi1UUlVFIGFyZSBydW4sCisjICAgb3RoZXJ3aXNl
IHNoZWxsIGNvbW1hbmRzIEFDVElPTi1JRi1GQUxTRSBhcmUgcnVuLiBUaGUgZW52aXJvbm1lbnQK
KyMgICB2YXJpYWJsZSAnYXhfY29tcGFyZV92ZXJzaW9uJyBpcyBhbHdheXMgc2V0IHRvIGVpdGhl
ciAndHJ1ZScgb3IgJ2ZhbHNlJworIyAgIGFzIHdlbGwuCisjCisjICAgRXhhbXBsZXM6CisjCisj
ICAgICBBWF9DT01QQVJFX1ZFUlNJT04oWzMuMTUuN10sW2x0XSxbMy4xNS44XSkKKyMgICAgIEFY
X0NPTVBBUkVfVkVSU0lPTihbMy4xNV0sW2x0XSxbMy4xNS44XSkKKyMKKyMgICB3b3VsZCBib3Ro
IGJlIHRydWUuCisjCisjICAgICBBWF9DT01QQVJFX1ZFUlNJT04oWzMuMTUuN10sW2VxXSxbMy4x
NS44XSkKKyMgICAgIEFYX0NPTVBBUkVfVkVSU0lPTihbMy4xNV0sW2d0XSxbMy4xNS44XSkKKyMK
KyMgICB3b3VsZCBib3RoIGJlIGZhbHNlLgorIworIyAgICAgQVhfQ09NUEFSRV9WRVJTSU9OKFsz
LjE1LjddLFtlcTJdLFszLjE1LjhdKQorIworIyAgIHdvdWxkIGJlIHRydWUgYmVjYXVzZSBpdCBp
cyBvbmx5IGNvbXBhcmluZyB0d28gbWlub3IgdmVyc2lvbnMuCisjCisjICAgICBBWF9DT01QQVJF
X1ZFUlNJT04oWzMuMTUuN10sW2VxMF0sWzMuMTVdKQorIworIyAgIHdvdWxkIGJlIHRydWUgYmVj
YXVzZSBpdCBpcyBvbmx5IGNvbXBhcmluZyB0aGUgbGVzc2VyIG51bWJlciBvZiBtaW5vcgorIyAg
IHZlcnNpb25zIG9mIHRoZSB0d28gdmFsdWVzLgorIworIyAgIE5vdGU6IFRoZSBjaGFyYWN0ZXJz
IHRoYXQgc2VwYXJhdGUgdGhlIHZlcnNpb24gbnVtYmVycyBkbyBub3QgbWF0dGVyLiBBbgorIyAg
IGVtcHR5IHN0cmluZyBpcyB0aGUgc2FtZSBhcyB2ZXJzaW9uIDAuIE9QIGlzIGV2YWx1YXRlZCBi
eSBhdXRvY29uZiwgbm90CisjICAgY29uZmlndXJlLCBzbyBtdXN0IGJlIGEgc3RyaW5nLCBub3Qg
YSB2YXJpYWJsZS4KKyMKKyMgICBUaGUgYXV0aG9yIHdvdWxkIGxpa2UgdG8gYWNrbm93bGVkZ2Ug
R3VpZG8gRHJhaGVpbSB3aG9zZSBhZHZpY2UgYWJvdXQKKyMgICB0aGUgbTRfY2FzZSBhbmQgbTRf
aWZ2YWxuIGZ1bmN0aW9ucyBtYWtlIHRoaXMgbWFjcm8gb25seSBpbmNsdWRlIHRoZQorIyAgIHBv
cnRpb25zIG5lY2Vzc2FyeSB0byBwZXJmb3JtIHRoZSBzcGVjaWZpYyBjb21wYXJpc29uIHNwZWNp
ZmllZCBieSB0aGUKKyMgICBPUCBhcmd1bWVudCBpbiB0aGUgZmluYWwgY29uZmlndXJlIHNjcmlw
dC4KKyMKKyMgTElDRU5TRQorIworIyAgIENvcHlyaWdodCAoYykgMjAwOCBUaW0gVG9vbGFuIDx0
b29sYW5AZWxlLnVyaS5lZHU+CisjCisjICAgQ29weWluZyBhbmQgZGlzdHJpYnV0aW9uIG9mIHRo
aXMgZmlsZSwgd2l0aCBvciB3aXRob3V0IG1vZGlmaWNhdGlvbiwgYXJlCisjICAgcGVybWl0dGVk
IGluIGFueSBtZWRpdW0gd2l0aG91dCByb3lhbHR5IHByb3ZpZGVkIHRoZSBjb3B5cmlnaHQgbm90
aWNlCisjICAgYW5kIHRoaXMgbm90aWNlIGFyZSBwcmVzZXJ2ZWQuIFRoaXMgZmlsZSBpcyBvZmZl
cmVkIGFzLWlzLCB3aXRob3V0IGFueQorIyAgIHdhcnJhbnR5LgorCisjc2VyaWFsIDExCisKK2Ru
bCAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjCitBQ19ERUZVTihbQVhfQ09NUEFSRV9WRVJTSU9OXSwgWworICBB
Q19SRVFVSVJFKFtBQ19QUk9HX0FXS10pCisKKyAgIyBVc2VkIHRvIGluZGljYXRlIHRydWUgb3Ig
ZmFsc2UgY29uZGl0aW9uCisgIGF4X2NvbXBhcmVfdmVyc2lvbj1mYWxzZQorCisgICMgQ29udmVy
dCB0aGUgdHdvIHZlcnNpb24gc3RyaW5ncyB0byBiZSBjb21wYXJlZCBpbnRvIGEgZm9ybWF0IHRo
YXQKKyAgIyBhbGxvd3MgYSBzaW1wbGUgc3RyaW5nIGNvbXBhcmlzb24uICBUaGUgZW5kIHJlc3Vs
dCBpcyB0aGF0IGEgdmVyc2lvbgorICAjIHN0cmluZyBvZiB0aGUgZm9ybSAxLjEyLjUtcjYxNyB3
aWxsIGJlIGNvbnZlcnRlZCB0byB0aGUgZm9ybQorICAjIDAwMDEwMDEyMDAwNTA2MTcuICBJbiBv
dGhlciB3b3JkcywgZWFjaCBudW1iZXIgaXMgemVybyBwYWRkZWQgdG8gZm91cgorICAjIGRpZ2l0
cywgYW5kIG5vbiBkaWdpdHMgYXJlIHJlbW92ZWQuCisgIEFTX1ZBUl9QVVNIREVGKFtBXSxbYXhf
Y29tcGFyZV92ZXJzaW9uX0FdKQorICBBPWBlY2hvICIkMSIgfCBzZWQgLWUgJ3MvXChbWzAtOV1d
KlwpL1pcMVovZycgXAorICAgICAgICAgICAgICAgICAgICAgLWUgJ3MvWlwoW1swLTldXVwpWi9a
MFwxWi9nJyBcCisgICAgICAgICAgICAgICAgICAgICAtZSAncy9aXChbWzAtOV1dW1swLTldXVwp
Wi9aMFwxWi9nJyBcCisgICAgICAgICAgICAgICAgICAgICAtZSAncy9aXChbWzAtOV1dW1swLTld
XVtbMC05XV1cKVovWjBcMVovZycgXAorICAgICAgICAgICAgICAgICAgICAgLWUgJ3MvW1teMC05
XV0vL2cnYAorCisgIEFTX1ZBUl9QVVNIREVGKFtCXSxbYXhfY29tcGFyZV92ZXJzaW9uX0JdKQor
ICBCPWBlY2hvICIkMyIgfCBzZWQgLWUgJ3MvXChbWzAtOV1dKlwpL1pcMVovZycgXAorICAgICAg
ICAgICAgICAgICAgICAgLWUgJ3MvWlwoW1swLTldXVwpWi9aMFwxWi9nJyBcCisgICAgICAgICAg
ICAgICAgICAgICAtZSAncy9aXChbWzAtOV1dW1swLTldXVwpWi9aMFwxWi9nJyBcCisgICAgICAg
ICAgICAgICAgICAgICAtZSAncy9aXChbWzAtOV1dW1swLTldXVtbMC05XV1cKVovWjBcMVovZycg
XAorICAgICAgICAgICAgICAgICAgICAgLWUgJ3MvW1teMC05XV0vL2cnYAorCisgIGRubCAjIElu
IHRoZSBjYXNlIG9mIGxlLCBnZSwgbHQsIGFuZCBndCwgdGhlIHN0cmluZ3MgYXJlIHNvcnRlZCBh
cyBuZWNlc3NhcnkKKyAgZG5sICMgdGhlbiB0aGUgZmlyc3QgbGluZSBpcyB1c2VkIHRvIGRldGVy
bWluZSBpZiB0aGUgY29uZGl0aW9uIGlzIHRydWUuCisgIGRubCAjIFRoZSBzZWQgcmlnaHQgYWZ0
ZXIgdGhlIGVjaG8gaXMgdG8gcmVtb3ZlIGFueSBpbmRlbnRlZCB3aGl0ZSBzcGFjZS4KKyAgbTRf
Y2FzZShtNF90b2xvd2VyKCQyKSwKKyAgW2x0XSxbCisgICAgYXhfY29tcGFyZV92ZXJzaW9uPWBl
Y2hvICJ4JEEKK3gkQiIgfCBzZWQgJ3MvXiAqLy8nIHwgc29ydCAtciB8IHNlZCAicy94JHtBfS9m
YWxzZS87cy94JHtCfS90cnVlLzsxcSJgCisgIF0sCisgIFtndF0sWworICAgIGF4X2NvbXBhcmVf
dmVyc2lvbj1gZWNobyAieCRBCit4JEIiIHwgc2VkICdzL14gKi8vJyB8IHNvcnQgfCBzZWQgInMv
eCR7QX0vZmFsc2UvO3MveCR7Qn0vdHJ1ZS87MXEiYAorICBdLAorICBbbGVdLFsKKyAgICBheF9j
b21wYXJlX3ZlcnNpb249YGVjaG8gIngkQQoreCRCIiB8IHNlZCAncy9eICovLycgfCBzb3J0IHwg
c2VkICJzL3gke0F9L3RydWUvO3MveCR7Qn0vZmFsc2UvOzFxImAKKyAgXSwKKyAgW2dlXSxbCisg
ICAgYXhfY29tcGFyZV92ZXJzaW9uPWBlY2hvICJ4JEEKK3gkQiIgfCBzZWQgJ3MvXiAqLy8nIHwg
c29ydCAtciB8IHNlZCAicy94JHtBfS90cnVlLztzL3gke0J9L2ZhbHNlLzsxcSJgCisgIF0sWwor
ICAgIGRubCBTcGxpdCB0aGUgb3BlcmF0b3IgZnJvbSB0aGUgc3VidmVyc2lvbiBjb3VudCBpZiBw
cmVzZW50LgorICAgIG00X2JtYXRjaChtNF9zdWJzdHIoJDIsMiksCisgICAgWzBdLFsKKyAgICAg
ICMgQSBjb3VudCBvZiB6ZXJvIG1lYW5zIHVzZSB0aGUgbGVuZ3RoIG9mIHRoZSBzaG9ydGVyIHZl
cnNpb24uCisgICAgICAjIERldGVybWluZSB0aGUgbnVtYmVyIG9mIGNoYXJhY3RlcnMgaW4gQSBh
bmQgQi4KKyAgICAgIGF4X2NvbXBhcmVfdmVyc2lvbl9sZW5fQT1gZWNobyAiJEEiIHwgJEFXSyAn
e3ByaW50KGxlbmd0aCl9J2AKKyAgICAgIGF4X2NvbXBhcmVfdmVyc2lvbl9sZW5fQj1gZWNobyAi
JEIiIHwgJEFXSyAne3ByaW50KGxlbmd0aCl9J2AKKworICAgICAgIyBTZXQgQSB0byBubyBtb3Jl
IHRoYW4gQidzIGxlbmd0aCBhbmQgQiB0byBubyBtb3JlIHRoYW4gQSdzIGxlbmd0aC4KKyAgICAg
IEE9YGVjaG8gIiRBIiB8IHNlZCAicy9cKC5ceyRheF9jb21wYXJlX3ZlcnNpb25fbGVuX0JcfVwp
LiovXDEvImAKKyAgICAgIEI9YGVjaG8gIiRCIiB8IHNlZCAicy9cKC5ceyRheF9jb21wYXJlX3Zl
cnNpb25fbGVuX0FcfVwpLiovXDEvImAKKyAgICBdLAorICAgIFtbMC05XStdLFsKKyAgICAgICMg
QSBjb3VudCBncmVhdGVyIHRoYW4gemVybyBtZWFucyB1c2Ugb25seSB0aGF0IG1hbnkgc3VidmVy
c2lvbnMKKyAgICAgIEE9YGVjaG8gIiRBIiB8IHNlZCAicy9cKFwoW1swLTldXVx7NFx9XClce200
X3N1YnN0cigkMiwyKVx9XCkuKi9cMS8iYAorICAgICAgQj1gZWNobyAiJEIiIHwgc2VkICJzL1wo
XChbWzAtOV1dXHs0XH1cKVx7bTRfc3Vic3RyKCQyLDIpXH1cKS4qL1wxLyJgCisgICAgXSwKKyAg
ICBbLitdLFsKKyAgICAgIEFDX1dBUk5JTkcoCisgICAgICAgIFtpbGxlZ2FsIE9QIG51bWVyaWMg
cGFyYW1ldGVyOiAkMl0pCisgICAgXSxbXSkKKworICAgICMgUGFkIHplcm9zIGF0IGVuZCBvZiBu
dW1iZXJzIHRvIG1ha2Ugc2FtZSBsZW5ndGguCisgICAgYXhfY29tcGFyZV92ZXJzaW9uX3RtcF9B
PSIkQWBlY2hvICRCIHwgc2VkICdzLy4vMC9nJ2AiCisgICAgQj0iJEJgZWNobyAkQSB8IHNlZCAn
cy8uLzAvZydgIgorICAgIEE9IiRheF9jb21wYXJlX3ZlcnNpb25fdG1wX0EiCisKKyAgICAjIENo
ZWNrIGZvciBlcXVhbGl0eSBvciBpbmVxdWFsaXR5IGFzIG5lY2Vzc2FyeS4KKyAgICBtNF9jYXNl
KG00X3RvbG93ZXIobTRfc3Vic3RyKCQyLDAsMikpLAorICAgIFtlcV0sWworICAgICAgdGVzdCAi
eCRBIiA9ICJ4JEIiICYmIGF4X2NvbXBhcmVfdmVyc2lvbj10cnVlCisgICAgXSwKKyAgICBbbmVd
LFsKKyAgICAgIHRlc3QgIngkQSIgIT0gIngkQiIgJiYgYXhfY29tcGFyZV92ZXJzaW9uPXRydWUK
KyAgICBdLFsKKyAgICAgIEFDX1dBUk5JTkcoW2lsbGVnYWwgT1AgcGFyYW1ldGVyOiAkMl0pCisg
ICAgXSkKKyAgXSkKKworICBBU19WQVJfUE9QREVGKFtBXSlkbmwKKyAgQVNfVkFSX1BPUERFRihb
Ql0pZG5sCisKKyAgZG5sICMgRXhlY3V0ZSBBQ1RJT04tSUYtVFJVRSAvIEFDVElPTi1JRi1GQUxT
RS4KKyAgaWYgdGVzdCAiJGF4X2NvbXBhcmVfdmVyc2lvbiIgPSAidHJ1ZSIgOyB0aGVuCisgICAg
bTRfaWZ2YWxuKFskNF0sWyQ0XSxbOl0pZG5sCisgICAgbTRfaWZ2YWxuKFskNV0sW2Vsc2UgJDVd
KWRubAorICBmaQorXSkgZG5sIEFYX0NPTVBBUkVfVkVSU0lPTgpkaWZmIC0tZ2l0IGEvdG9vbHMv
Y29uZmlndXJlLmFjIGIvdG9vbHMvY29uZmlndXJlLmFjCmluZGV4IDA3NTRmMGUuLjM0OTZjMTIg
MTAwNjQ0Ci0tLSBhL3Rvb2xzL2NvbmZpZ3VyZS5hYworKysgYi90b29scy9jb25maWd1cmUuYWMK
QEAgLTQ2LDYgKzQ2LDcgQEAgbTRfaW5jbHVkZShbLi4vbTQvcHRocmVhZC5tNF0pCiBtNF9pbmNs
dWRlKFsuLi9tNC9wdHlmdW5jcy5tNF0pCiBtNF9pbmNsdWRlKFsuLi9tNC9leHRmcy5tNF0pCiBt
NF9pbmNsdWRlKFsuLi9tNC9mZXRjaGVyLm00XSkKK200X2luY2x1ZGUoWy4uL200L2F4X2NvbXBh
cmVfdmVyc2lvbi5tNF0pCiAKICMgRW5hYmxlL2Rpc2FibGUgb3B0aW9ucwogQVhfQVJHX0RFRkFV
TFRfRElTQUJMRShbZ2l0aHR0cF0sIFtEb3dubG9hZCBHSVQgcmVwb3NpdG9yaWVzIHZpYSBIVFRQ
XSkKQEAgLTE2MSw2ICsxNjIsMTIgQEAgQVNfSUYoW3Rlc3QgIngkb2NhbWx0b29scyIgPSAieHki
XSwgWwogICAgICAgICBBU19JRihbdGVzdCAieCRlbmFibGVfb2NhbWx0b29scyIgPSAieHllcyJd
LCBbCiAgICAgICAgICAgICBBQ19NU0dfRVJST1IoW09jYW1sIHRvb2xzIGVuYWJsZWQsIGJ1dCB1
bmFibGUgdG8gZmluZCBPY2FtbF0pXSkKICAgICAgICAgb2NhbWx0b29scz0ibiIKKyAgICBdLCBb
CisgICAgICAgIEFYX0NPTVBBUkVfVkVSU0lPTihbJE9DQU1MVkVSU0lPTl0sIFtsdF0sIFszLjA5
LjNdLCBbCisgICAgICAgICAgICBBU19JRihbdGVzdCAieCRlbmFibGVfb2NhbWx0b29scyIgPSAi
eHllcyJdLCBbCisgICAgICAgICAgICAgICAgQUNfTVNHX0VSUk9SKFtZb3VyIHZlcnNpb24gb2Yg
T0NhbWw6ICRPQ0FNTFZFUlNJT04gaXMgbm90IHN1cHBvcnRlZF0pXSkKKyAgICAgICAgICAgIG9j
YW1sdG9vbHM9Im4iCisgICAgICAgIF0pCiAgICAgXSkKIF0pCiBBU19JRihbdGVzdCAieCR4c21w
b2xpY3kiID0gInh5Il0sIFsKLS0gCjEuNy43LjUgKEFwcGxlIEdpdC0yNikKCgpfX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBs
aXN0Clhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZl
bAo=

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:38:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDAjX-0003ET-Lr; Tue, 11 Feb 2014 10:38:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDAjW-0003EO-OU
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 10:38:31 +0000
Received: from [85.158.139.211:63998] by server-1.bemta-5.messagelabs.com id
	05/29-12859-5ADF9F25; Tue, 11 Feb 2014 10:38:29 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392115107!3107512!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3019 invoked from network); 11 Feb 2014 10:38:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:38:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99741005"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 10:38:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 05:38:26 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WDAjS-0000gW-2d;
	Tue, 11 Feb 2014 10:38:26 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 11:38:24 +0100
Message-ID: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: Don Slutz <dslutz@verizon.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Ian Jackson <ian.jackson@citrix.com>
---
I've removed Don's "Tested-by" because the patch has changed, could
you please re-test it Don?

Please run autogen.sh after applying.
---
 m4/ax_compare_version.m4 |  179 ++++++++++++++++++++++++++++++++++++++++++++++
 tools/configure.ac       |    7 ++
 2 files changed, 186 insertions(+), 0 deletions(-)
 create mode 100644 m4/ax_compare_version.m4

diff --git a/m4/ax_compare_version.m4 b/m4/ax_compare_version.m4
new file mode 100644
index 0000000..26f4dec
--- /dev/null
+++ b/m4/ax_compare_version.m4
@@ -0,0 +1,179 @@
+# Fetched from http://git.savannah.gnu.org/gitweb/?p=autoconf-archive.git;a=blob_plain;f=m4/ax_compare_version.m4
+# Commit ID: 27948f49ca30e4222bb7cdd55182bd7341ac50c5
+# ===========================================================================
+#    http://www.gnu.org/software/autoconf-archive/ax_compare_version.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_COMPARE_VERSION(VERSION_A, OP, VERSION_B, [ACTION-IF-TRUE], [ACTION-IF-FALSE])
+#
+# DESCRIPTION
+#
+#   This macro compares two version strings. Due to the various number of
+#   minor-version numbers that can exist, and the fact that string
+#   comparisons are not compatible with numeric comparisons, this is not
+#   necessarily trivial to do in a autoconf script. This macro makes doing
+#   these comparisons easy.
+#
+#   The six basic comparisons are available, as well as checking equality
+#   limited to a certain number of minor-version levels.
+#
+#   The operator OP determines what type of comparison to do, and can be one
+#   of:
+#
+#    eq  - equal (test A == B)
+#    ne  - not equal (test A != B)
+#    le  - less than or equal (test A <= B)
+#    ge  - greater than or equal (test A >= B)
+#    lt  - less than (test A < B)
+#    gt  - greater than (test A > B)
+#
+#   Additionally, the eq and ne operator can have a number after it to limit
+#   the test to that number of minor versions.
+#
+#    eq0 - equal up to the length of the shorter version
+#    ne0 - not equal up to the length of the shorter version
+#    eqN - equal up to N sub-version levels
+#    neN - not equal up to N sub-version levels
+#
+#   When the condition is true, shell commands ACTION-IF-TRUE are run,
+#   otherwise shell commands ACTION-IF-FALSE are run. The environment
+#   variable 'ax_compare_version' is always set to either 'true' or 'false'
+#   as well.
+#
+#   Examples:
+#
+#     AX_COMPARE_VERSION([3.15.7],[lt],[3.15.8])
+#     AX_COMPARE_VERSION([3.15],[lt],[3.15.8])
+#
+#   would both be true.
+#
+#     AX_COMPARE_VERSION([3.15.7],[eq],[3.15.8])
+#     AX_COMPARE_VERSION([3.15],[gt],[3.15.8])
+#
+#   would both be false.
+#
+#     AX_COMPARE_VERSION([3.15.7],[eq2],[3.15.8])
+#
+#   would be true because it is only comparing two minor versions.
+#
+#     AX_COMPARE_VERSION([3.15.7],[eq0],[3.15])
+#
+#   would be true because it is only comparing the lesser number of minor
+#   versions of the two values.
+#
+#   Note: The characters that separate the version numbers do not matter. An
+#   empty string is the same as version 0. OP is evaluated by autoconf, not
+#   configure, so must be a string, not a variable.
+#
+#   The author would like to acknowledge Guido Draheim whose advice about
+#   the m4_case and m4_ifvaln functions make this macro only include the
+#   portions necessary to perform the specific comparison specified by the
+#   OP argument in the final configure script.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Tim Toolan <toolan@ele.uri.edu>
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 11
+
+dnl #########################################################################
+AC_DEFUN([AX_COMPARE_VERSION], [
+  AC_REQUIRE([AC_PROG_AWK])
+
+  # Used to indicate true or false condition
+  ax_compare_version=false
+
+  # Convert the two version strings to be compared into a format that
+  # allows a simple string comparison.  The end result is that a version
+  # string of the form 1.12.5-r617 will be converted to the form
+  # 0001001200050617.  In other words, each number is zero padded to four
+  # digits, and non digits are removed.
+  AS_VAR_PUSHDEF([A],[ax_compare_version_A])
+  A=`echo "$1" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \
+                     -e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/[[^0-9]]//g'`
+
+  AS_VAR_PUSHDEF([B],[ax_compare_version_B])
+  B=`echo "$3" | sed -e 's/\([[0-9]]*\)/Z\1Z/g' \
+                     -e 's/Z\([[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/Z\([[0-9]][[0-9]][[0-9]]\)Z/Z0\1Z/g' \
+                     -e 's/[[^0-9]]//g'`
+
+  dnl # In the case of le, ge, lt, and gt, the strings are sorted as necessary
+  dnl # then the first line is used to determine if the condition is true.
+  dnl # The sed right after the echo is to remove any indented white space.
+  m4_case(m4_tolower($2),
+  [lt],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/false/;s/x${B}/true/;1q"`
+  ],
+  [gt],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort | sed "s/x${A}/false/;s/x${B}/true/;1q"`
+  ],
+  [le],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort | sed "s/x${A}/true/;s/x${B}/false/;1q"`
+  ],
+  [ge],[
+    ax_compare_version=`echo "x$A
+x$B" | sed 's/^ *//' | sort -r | sed "s/x${A}/true/;s/x${B}/false/;1q"`
+  ],[
+    dnl Split the operator from the subversion count if present.
+    m4_bmatch(m4_substr($2,2),
+    [0],[
+      # A count of zero means use the length of the shorter version.
+      # Determine the number of characters in A and B.
+      ax_compare_version_len_A=`echo "$A" | $AWK '{print(length)}'`
+      ax_compare_version_len_B=`echo "$B" | $AWK '{print(length)}'`
+
+      # Set A to no more than B's length and B to no more than A's length.
+      A=`echo "$A" | sed "s/\(.\{$ax_compare_version_len_B\}\).*/\1/"`
+      B=`echo "$B" | sed "s/\(.\{$ax_compare_version_len_A\}\).*/\1/"`
+    ],
+    [[0-9]+],[
+      # A count greater than zero means use only that many subversions
+      A=`echo "$A" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"`
+      B=`echo "$B" | sed "s/\(\([[0-9]]\{4\}\)\{m4_substr($2,2)\}\).*/\1/"`
+    ],
+    [.+],[
+      AC_WARNING(
+        [illegal OP numeric parameter: $2])
+    ],[])
+
+    # Pad zeros at end of numbers to make same length.
+    ax_compare_version_tmp_A="$A`echo $B | sed 's/./0/g'`"
+    B="$B`echo $A | sed 's/./0/g'`"
+    A="$ax_compare_version_tmp_A"
+
+    # Check for equality or inequality as necessary.
+    m4_case(m4_tolower(m4_substr($2,0,2)),
+    [eq],[
+      test "x$A" = "x$B" && ax_compare_version=true
+    ],
+    [ne],[
+      test "x$A" != "x$B" && ax_compare_version=true
+    ],[
+      AC_WARNING([illegal OP parameter: $2])
+    ])
+  ])
+
+  AS_VAR_POPDEF([A])dnl
+  AS_VAR_POPDEF([B])dnl
+
+  dnl # Execute ACTION-IF-TRUE / ACTION-IF-FALSE.
+  if test "$ax_compare_version" = "true" ; then
+    m4_ifvaln([$4],[$4],[:])dnl
+    m4_ifvaln([$5],[else $5])dnl
+  fi
+]) dnl AX_COMPARE_VERSION
diff --git a/tools/configure.ac b/tools/configure.ac
index 0754f0e..3496c12 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -46,6 +46,7 @@ m4_include([../m4/pthread.m4])
 m4_include([../m4/ptyfuncs.m4])
 m4_include([../m4/extfs.m4])
 m4_include([../m4/fetcher.m4])
+m4_include([../m4/ax_compare_version.m4])
 
 # Enable/disable options
 AX_ARG_DEFAULT_DISABLE([githttp], [Download GIT repositories via HTTP])
@@ -161,6 +162,12 @@ AS_IF([test "x$ocamltools" = "xy"], [
         AS_IF([test "x$enable_ocamltools" = "xyes"], [
             AC_MSG_ERROR([Ocaml tools enabled, but unable to find Ocaml])])
         ocamltools="n"
+    ], [
+        AX_COMPARE_VERSION([$OCAMLVERSION], [lt], [3.09.3], [
+            AS_IF([test "x$enable_ocamltools" = "xyes"], [
+                AC_MSG_ERROR([Your version of OCaml: $OCAMLVERSION is not supported])])
+            ocamltools="n"
+        ])
     ])
 ])
 AS_IF([test "x$xsmpolicy" = "xy"], [
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
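[Editor's note: the normalization trick the macro's comments describe (zero-pad each numeric component to four digits, strip everything else, then compare as plain strings) can be sketched in a few lines of Python. This is an illustration of the idea only, not the m4/sed implementation from the patch above.]

```python
import re

def normalize(version, width=4):
    # Zero-pad each numeric component to `width` digits and drop all
    # non-digits, so "1.12.5-r617" becomes "0001001200050617" and an
    # ordinary string comparison then orders versions correctly.
    return "".join(n.zfill(width) for n in re.findall(r"[0-9]+", version))

assert normalize("1.12.5-r617") == "0001001200050617"
# The configure.ac requirement ($OCAMLVERSION must not be lt 3.09.3),
# expressed as a string comparison on the normalized forms:
assert normalize("3.09.2") < normalize("3.09.3")
assert not (normalize("3.12.1") < normalize("3.09.3"))
```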

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:59:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDB3Q-0004RK-4K; Tue, 11 Feb 2014 10:59:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WDB3N-0004QS-5x
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 10:59:01 +0000
Received: from [85.158.143.35:15344] by server-1.bemta-4.messagelabs.com id
	F6/80-31661-4720AF25; Tue, 11 Feb 2014 10:59:00 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392116338!4776091!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1374 invoked from network); 11 Feb 2014 10:58:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:58:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99744559"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 10:58:58 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 05:58:57 -0500
Message-ID: <52FA0270.7040501@citrix.com>
Date: Tue, 11 Feb 2014 10:58:56 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: <rshriram@cs.ubc.ca>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <52F90A71.40802@citrix.com>	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>	<52F97E6F.2000402@citrix.com>
	<CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
In-Reply-To: <CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 04:12, Shriram Rajagopalan wrote:
> On Mon, Feb 10, 2014 at 7:35 PM, Andrew Cooper
>>         The marker field can be used to distinguish between legacy
>>         images and those corresponding to this specification.  Legacy
>>         images will have one or more zero bits within the first 8
>>         octets of the image.
>>
>>         Fields within the image header are always in _big-endian_ byte
>>         order, regardless of the setting of the endianness bit.
>>
>>
>>     and more endian-ness mess.
>
>     Network order is perfectly valid.  It is how all your network
>     packets arrive...
>
>
>
>
> True. But why should we explicitly convert the application-level data to
> network byte order and then convert it back to host byte order, when it's
> already going to be done by the underlying stack, as you put it?

TCP doesn't change the byte order of the payload; only the header fields
are defined to be big endian, AFAIK.
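[Editor's note: this distinction (wire headers are big-endian, payload bytes pass through untouched) is why a save-image format that wants fixed-endian fields must convert explicitly. A minimal sketch using Python's struct module; the 8-byte marker and 32-bit version field are illustrative, not the exact draft-B layout.]

```python
import struct

# Pack a header-style record with explicitly big-endian (">") fields:
# an 8-byte all-ones marker followed by a 32-bit version number.
# TCP would carry these payload bytes unchanged; only the explicit
# pack/unpack format fixes the byte order.
marker, version = 0xFFFFFFFFFFFFFFFF, 2
header = struct.pack(">QI", marker, version)

# Unpacking with ">" yields the same values on any host, regardless of
# native endianness; native ("=") or little-endian ("<") would not.
m, v = struct.unpack(">QI", header)
assert (m, v) == (marker, version)
assert header[:8] == b"\xff" * 8
```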

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:59:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDB42-0004jw-IR; Tue, 11 Feb 2014 10:59:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WDB41-0004jk-3F
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 10:59:41 +0000
Received: from [85.158.139.211:8474] by server-4.bemta-5.messagelabs.com id
	38/21-08092-C920AF25; Tue, 11 Feb 2014 10:59:40 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392116379!3139805!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31819 invoked from network); 11 Feb 2014 10:59:39 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:59:39 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so5343779wes.39
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 02:59:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=TB7P316J1pcSS18kyFHS1BxOC0lcoZghbl9+FVV1zy0=;
	b=jXcCtq8mvvnPf/bO0jdG4I3YLB2kk/qnFu2mrxrPBGKJsQjKrSZqCuNQxQ4XAFmrDm
	f3aX/AIHbWfHal70ZQS5SqUP5ugzWZC6lnfv0E4w01587qJM11BlDlEbTdcVUkGJTBMN
	JRWA2GqZK4ZhBpnEs5+QYCzyUBbDcjWmaEZvuNDVZn41SyJv022WSzEJjFIXrn5zIyl0
	hOKtgTh0JnshVPceWWPjBKvnrLetjtjNxLOnQHHxK7hCCbPpb9ueosNc6OEHlOwdcCTs
	YMdE29VQzR0mnaad69oC+BDh+vWa/OfJet+GxTTkP/CSZPDN0vUI/syhaFqhXaCA6f1U
	jZAw==
MIME-Version: 1.0
X-Received: by 10.194.22.129 with SMTP id d1mr26456865wjf.22.1392116379048;
	Tue, 11 Feb 2014 02:59:39 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 11 Feb 2014 02:59:38 -0800 (PST)
In-Reply-To: <20140211090202.GC92054@deinos.phlegethon.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
Date: Tue, 11 Feb 2014 10:59:38 +0000
X-Google-Sender-Auth: XLtDqNu4uglBFu9BP6ribMgEknQ
Message-ID: <CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Tim Deegan <tim@xen.org>
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 9:02 AM, Tim Deegan <tim@xen.org> wrote:
> At 08:15 +0000 on 10 Feb (1392016516), Zhang, Yang Z wrote:
>> Tim Deegan wrote on 2014-02-10:
>> > At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
>> >> From: Yang Zhang <yang.z.zhang@Intel.com>
>> >>
>> >> When log-dirty mode is enabled, it sets all of the guest's memory to
>> >> read-only. In a HAP-enabled domain, it clears the write bit in all
>> >> EPT entries to make sure the memory is read-only. This will cause a
>> >> problem if VT-d shares page tables with EPT: the device may issue a
>> >> DMA write request, and the VT-d engine will tell it the target
>> >> memory is read-only, resulting in a VT-d fault.
>> >
>> > So that's a problem even if only the VGA framebuffer is being tracked
>> > -- DMA from a passthrough device will either cause a spurious error or
>> > fail to update the dirty bitmap.
>>
>> Do you mean the VGA framebuffer will be used as a DMA buffer in the guest? If yes, I think it's the guest's responsibility to ensure that never happens.
>>
>
> I don't think that works.  We can't expect arbitrary OSes to (a) know
> they're running on Xen and (b) know that that means they can't DMA to
> or from their framebuffers.
>
>> Without VT-d and EPT sharing page tables, we still cannot track memory
>> updates coming from DMA.
>
> Yeah, but at least we don't risk crashing the _host_ by throwing DMA
> failures around.

What I'm missing here is what you think a proper solution is.  It seems we have:

A. Share EPT/IOMMU tables, only do log-dirty tracking on the buffer
being tracked, and hope the guest doesn't DMA into video ram; DMA
causes IOMMU fault. (This really shouldn't crash the host under normal
circumstances; if it does it's a hardware bug.)
B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA into
video ram.  DMA causes missed update to dirty bitmap, which will
hopefully just cause screen corruption.
C. Do buffer scanning rather than dirty vram tracking (SLOW)
D. Don't allow both a virtual video card and pass-through

Given that most operating systems will probably *not* DMA into video
ram, and that an IOMMU fault isn't *supposed* to be able to crash the
host, 'A' seems like the most reasonable option to me.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 10:59:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 10:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDB42-0004jw-IR; Tue, 11 Feb 2014 10:59:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WDB41-0004jk-3F
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 10:59:41 +0000
Received: from [85.158.139.211:8474] by server-4.bemta-5.messagelabs.com id
	38/21-08092-C920AF25; Tue, 11 Feb 2014 10:59:40 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392116379!3139805!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31819 invoked from network); 11 Feb 2014 10:59:39 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 10:59:39 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so5343779wes.39
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 02:59:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=TB7P316J1pcSS18kyFHS1BxOC0lcoZghbl9+FVV1zy0=;
	b=jXcCtq8mvvnPf/bO0jdG4I3YLB2kk/qnFu2mrxrPBGKJsQjKrSZqCuNQxQ4XAFmrDm
	f3aX/AIHbWfHal70ZQS5SqUP5ugzWZC6lnfv0E4w01587qJM11BlDlEbTdcVUkGJTBMN
	JRWA2GqZK4ZhBpnEs5+QYCzyUBbDcjWmaEZvuNDVZn41SyJv022WSzEJjFIXrn5zIyl0
	hOKtgTh0JnshVPceWWPjBKvnrLetjtjNxLOnQHHxK7hCCbPpb9ueosNc6OEHlOwdcCTs
	YMdE29VQzR0mnaad69oC+BDh+vWa/OfJet+GxTTkP/CSZPDN0vUI/syhaFqhXaCA6f1U
	jZAw==
MIME-Version: 1.0
X-Received: by 10.194.22.129 with SMTP id d1mr26456865wjf.22.1392116379048;
	Tue, 11 Feb 2014 02:59:39 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 11 Feb 2014 02:59:38 -0800 (PST)
In-Reply-To: <20140211090202.GC92054@deinos.phlegethon.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
Date: Tue, 11 Feb 2014 10:59:38 +0000
X-Google-Sender-Auth: XLtDqNu4uglBFu9BP6ribMgEknQ
Message-ID: <CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Tim Deegan <tim@xen.org>
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 9:02 AM, Tim Deegan <tim@xen.org> wrote:
> At 08:15 +0000 on 10 Feb (1392016516), Zhang, Yang Z wrote:
>> Tim Deegan wrote on 2014-02-10:
>> > At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
>> >> From: Yang Zhang <yang.z.zhang@Intel.com>
>> >>
>> >> When enabling log dirty mode, it sets all guest's memory to readonly.
>> >> And in HAP enabled domain, it modifies all EPT entries to clear
>> >> write bit to make sure it is readonly. This will cause problem if
>> >> VT-d shares page table with EPT: the device may issue a DMA write
>> >> request, then VT-d engine tells it the target memory is readonly and
>> >> result in VT-d
>> > fault.
>> >
>> > So that's a problem even if only the VGA framebuffer is being tracked
>> > -- DMA from a passthrough device will either cause a spurious error or
>> > fail to update the dirt bitmap.
>>
>> Do you mean the VGA frambuffer will be used as DMA buffer in guest? If yes, I think it's guest's responsibility to ensure it never happens.
>>
>
> I don't think that works.  We can't expect arbitrary OSes to (a) know
> they're running on Xen and (b) know that that means they can't DMA to
> or from their framebuffers.
>
>> Without VT-d and EPT share page, we still cannot track the memory
>> updating from DMA.
>
> Yeah, but at least we don't risk crashing the _host_ by throwing DMA
> failures around.

What I'm missing here is what you think a proper solution is.  It seems we have:

A. Share EPT/IOMMU tables, only do log-dirty tracking on the buffer
being tracked, and hope the guest doesn't DMA into video ram; DMA
causes IOMMU fault. (This really shouldn't crash the host under normal
circumstances; if it does it's a hardware bug.)
B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA into
video ram.  DMA causes missed update to dirty bitmap, which will
hopefully just cause screen corruption.
C. Do buffer scanning rather than dirty vram tracking (SLOW)
D. Don't allow both a virtual video card and pass-through

Given that most operating systems will probably *not* DMA into video
ram, and that an IOMMU fault isn't *supposed* to be able to crash the
host, 'A' seems like the most reasonable option to me.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:11:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBFO-0005Ro-3B; Tue, 11 Feb 2014 11:11:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDBFL-0005Rj-EA
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 11:11:24 +0000
Received: from [193.109.254.147:64154] by server-1.bemta-14.messagelabs.com id
	B6/E1-15438-A550AF25; Tue, 11 Feb 2014 11:11:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392117077!3510870!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28817 invoked from network); 11 Feb 2014 11:11:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 11:11:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101562751"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 11:11:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 06:11:14 -0500
Message-ID: <1392117073.26657.84.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 11:11:13 +0000
In-Reply-To: <osstest-24830-mainreport@xen.org>
References: <osstest-24830-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24830: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 02:03 +0000, xen.org wrote:
> flight 24830 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24830/
> 
> Failures and problems with tests :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl           2 hosts-allocate          broken REGR. vs. 24743

        2014-02-11 01:45:06 Z host allocation: planned start in 38912 seconds.
        2014-02-11 01:45:06 Z resource allocation: booking {"Bookings":[{"End":41212,"Xinfo":"host","Reso":"host marilith-n5","Start":"38912"}]}
        2014-02-11 01:45:06 Z resource allocation: we are in the plan.
        2014-02-11 01:45:06 Z resource allocation: deferring
        2014-02-11 01:45:06 Z resource allocation: awaiting our slot...
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
        [long run of undecodable (U+FFFD) log output elided]
        2014-02-11 01:57:11 Z resource allocation: base plan {"Events":{"host bedbug":[{"Avail":0,"Type":"Start","Time":-22779584},{"Avail":1,"Type":"End","Time":604800}],"host marilith-n5":[{"Avail":0,"Type":"Start","Time":-48213},{"Avail":1,"Type":"End","Time":38187}]}}
        Input/output error at Osstest/TestSupport.pm line 175, <GEN4> line 568.
        2014-02-11 01:57:11 Z resource allocation: queue-server trouble (Input/output error at Osstest/TestSupport.pm line 179, <GEN4> line 568.)
        + rc=5
        + date -u '+%Y-%m-%d %H:%M:%S Z exit status 5'
        2014-02-11 01:57:11 Z exit status 5

More NFS suckitude I suspect.

>  test-amd64-i386-xl            7 debian-install           running [st=running!]

Not a clue...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:11:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBFO-0005Ro-3B; Tue, 11 Feb 2014 11:11:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDBFL-0005Rj-EA
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 11:11:24 +0000
Received: from [193.109.254.147:64154] by server-1.bemta-14.messagelabs.com id
	B6/E1-15438-A550AF25; Tue, 11 Feb 2014 11:11:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392117077!3510870!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28817 invoked from network); 11 Feb 2014 11:11:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 11:11:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101562751"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 11:11:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 06:11:14 -0500
Message-ID: <1392117073.26657.84.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 11:11:13 +0000
In-Reply-To: <osstest-24830-mainreport@xen.org>
References: <osstest-24830-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-unstable test] 24830: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 02:03 +0000, xen.org wrote:
> flight 24830 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24830/
> 
> Failures and problems with tests :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-armhf-armhf-xl           2 hosts-allocate          broken REGR. vs. 24743

        2014-02-11 01:45:06 Z host allocation: planned start in 38912 seconds.
        2014-02-11 01:45:06 Z resource allocation: booking {"Bookings":[{"End":41212,"Xinfo":"host","Reso":"host marilith-n5","Start":"38912"}]}
        2014-02-11 01:45:06 Z resource allocation: we are in the plan.
        2014-02-11 01:45:06 Z resource allocation: deferring
        2014-02-11 01:45:06 Z resource allocation: awaiting our slot...
        [remainder of log output is undecodable (U+FFFD) and has been elided]
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/ve+/
2014-02-11 01:57:11 Z resource allocation: base plan {"Events":{"host bedbug":[{"Avail":0,"Type":"Start","Time":-22779584},{"Avail":1,"Type":"End","Time":604800}],"host marilith-n5":[{"Avail":0,"Type":"Start","Time":-48213},{"Avail":1,"Type":"End","Time":38187}]}}
        Input/output error at Osstest/TestSupport.pm line 175, <GEN4> line 568.
        2014-02-11 01:57:11 Z resource allocation: queue-server trouble (Input/output error at Osstest/TestSupport.pm line 179, <GEN4> line 568.)
        + rc=5
        + date -u '+%Y-%m-%d %H:%M:%S Z exit status 5'
        2014-02-11 01:57:11 Z exit status 5


More NFS suckitude I suspect.

>  test-amd64-i386-xl            7 debian-install           running [st=running!]

Not a clue...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:14:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBHs-0005ZE-1i; Tue, 11 Feb 2014 11:14:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDBHq-0005Z0-NE
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 11:13:58 +0000
Received: from [193.109.254.147:14612] by server-4.bemta-14.messagelabs.com id
	10/AF-32066-5F50AF25; Tue, 11 Feb 2014 11:13:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392117235!3506493!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3573 invoked from network); 11 Feb 2014 11:13:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 11:13:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99747515"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 11:13:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 06:13:55 -0500
Message-ID: <1392117234.26657.87.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 11:13:54 +0000
In-Reply-To: <1392028633.5117.29.camel@kazak.uk.xensource.com>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH OSSTEST] mfi-common: Only override the pvops
 kernel repo for linux-arm-xen branch (Was: Re: [linux-linus test] 24817:
 regressions - FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 10:37 +0000, Ian Campbell wrote:
> The following fixes it for me, but although the results are as I wanted
> I'm not 100% sure about this override in the first place. In my
> experiments with cr-daily-branch I see:

But I didn't consider {DEFAULT_}REVISION_LINUX{_ARM} and how it
interacts:

        Branch		$TREE_LINUX#$REVISION_LINUX	$TREE_LINUX_ARM#$R_L_ARM
        xen-unstable	pvops#tested/linux-3.4		pvops#tested/linux-arm-xen
        linux-linus	torvalds#master			pvops#XXX
        linux-arm-xen	stefano#master			stefano#master
        osstest		pvops#tested/linux-3.4		pvops#tested/linux-arm-xen

pvops#XXX == the revision from torvalds#master, but with the pvops tree
i.e. the brokenness I started out trying to fix.

What this means is that osstest's own push gate is currently blocked
because it is trying to test using the standard tree on armhf -- which
is a 3.4 tree and has no support for any of the stuff we need. I suspect
xen-unstable would also be broken by this (so it's good the push gate
caught it).

One way to bodge around this would be to use $R_L on any branch
matching linux-* *except* linux-arm-xen, and to use $R_L_A for
anything else (on armhf only, of course). This has the effect (I hope)
of using the right branch for gating trees like linux-linus and
linux-arm-xen but continuing to use the correct tested branch for the
arch.
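
For concreteness, that rule reduces to a three-way case on the branch
name. A minimal standalone sketch (pick_pvops_kernel is illustrative
only, not a function in mfi-common):

```shell
# Decide which pvops kernel spec an armhf build job should use:
# linux-* branches (other than linux-arm-xen) gate mainline kernels,
# so they keep $TREE_LINUX#$REVISION_LINUX; linux-arm-xen and all
# non-linux branches use the tested ARM branch instead.
pick_pvops_kernel () {
  branch=$1
  case $branch in
  linux-arm-xen) echo arm ;;       # gating the ARM tree itself
  linux-*)       echo mainline ;;  # e.g. linux-linus tests torvalds#master
  *)             echo arm ;;       # xen-unstable, osstest, ...
  esac
}

pick_pvops_kernel linux-linus      # prints "mainline"
pick_pvops_kernel linux-arm-xen    # prints "arm"
pick_pvops_kernel xen-unstable     # prints "arm"
```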

But I can't help thinking that this should perhaps be solved further up
by e.g. making $TREE_LINUX_ARM#$REVISION_LINUX_ARM be correct for
linux-linus -- i.e. by making some change in ap-common:info_linux_tree.

I'm not sure what that would look like though -- it might be easiest to
thrash this stuff out f2f later.

In the meantime, here is something which I think might fix mfi-common
as I described three paragraphs ago...

Ian.

diff --git a/mfi-common b/mfi-common
index f7f981e..6ec164e 100644
--- a/mfi-common
+++ b/mfi-common
@@ -44,10 +44,19 @@ create_build_jobs () {
 
     if [ "x$arch" = xdisable ]; then continue; fi
 
+    if [ "x$arch" = "xarmhf" ] ; then
+      echo LINUX $TREE_LINUX \"${REVISION_LINUX}\" \"${DEFAULT_REVISION_LINUX}\" >&2
+      echo ARM $TREE_LINUX_ARM \"${REVISION_LINUX_ARM}\" \"${DEFAULT_REVISION_LINUX_ARM}\" >&2
+    fi
+
     pvops_kernel="
       tree_linux=$TREE_LINUX
       revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
     "
+    pvops_kernel_arm="
+      tree_linux=$TREE_LINUX_ARM
+      revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+    "
 
     case "$arch" in
     armhf)
@@ -65,12 +74,12 @@ create_build_jobs () {
       xen-4.2-testing) continue;;
       esac
 
-      if [ "$branch" = "linux-arm-xen" ]; then
-        pvops_kernel="
-          tree_linux=$TREE_LINUX_ARM
-          revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-        "
-      fi
+      case $branch in
+      linux-arm-xen) pvops_kernel="$pvops_kernel_arm" ;;
+      linux-*) ;; # Apart from linux-arm-xen these should test mainline kernels
+      *) pvops_kernel="$pvops_kernel_arm" ;;
+      esac
+
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
       "



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:40:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:40:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBhK-00074A-19; Tue, 11 Feb 2014 11:40:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDBhI-000745-NI
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 11:40:16 +0000
Received: from [85.158.137.68:6751] by server-8.bemta-3.messagelabs.com id
	B5/DD-16039-F1C0AF25; Tue, 11 Feb 2014 11:40:15 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392118813!1073434!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25948 invoked from network); 11 Feb 2014 11:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 11:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99752565"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 11:40:12 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 06:40:12 -0500
Message-ID: <52FA0C1A.5080004@citrix.com>
Date: Tue, 11 Feb 2014 11:40:10 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52F90A71.40802@citrix.com>
	<1392111040.26657.50.camel@kazak.uk.xensource.com>
In-Reply-To: <1392111040.26657.50.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 09:30, Ian Campbell wrote:
> On Mon, 2014-02-10 at 17:20 +0000, David Vrabel wrote:
>> 
>> Tools supporting version _V_ of the specification shall always save
>> images using version _V_.  Tools shall support restoring from version
>> _V_ and version _V_ - 1.
> 
> This isn't quite right since it is in terms of image format version and
> not Xen version (unless the two are to be linked somehow). The Xen
> project supports migration from version X-1 to version X (where X is the
> Xen version). It's not inconceivable that the image format version
> wouldn't change over Xen releases.

I was expecting it to be only necessary to bump the format version with
each Xen x.y release, but I think you're right.  It may be necessary to
bump the version for an x.y.z release.

>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> marker      0xFFFFFFFFFFFFFFFF.
>>
>> id          0x58454E46 ("XENF" in ASCII).
>>
>> version     0x00000001.  The version of this specification.
>>
>> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
> 
> Couldn't we just specify that things are in a specific endianness
> related to the header's arch field?
> 
> I appreciate the desire to make the format endian neutral and explicit
> about which is in use but (apart from the header) why would you ever
> want to go non-native endian for a given arch?

I am anticipating bi-endian architectures which could mean (for example)
migrating a little-endian guest from a little-endian host to a
big-endian host.

I would prefer to retain this bit, but I think we can specify that
certain architectures always use a specific endianness so initially we
wouldn't need to support anything other than the native endianness
(little-endian on both x86 and ARM).
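
For what it's worth, a little-endian header with these values can be
mocked up and inspected from the shell. This assumes the obvious layout
of a 64-bit marker followed by three 32-bit fields, which the draft
excerpt quoted here doesn't actually pin down:

```shell
# Emit a 32-bit value as 4 little-endian bytes.
emit_le32 () {
  v=$1
  for i in 0 1 2 3; do
    printf "\\x$(printf '%02x' $(( (v >> (8 * i)) & 0xff )))"
  done
}

# Mock image header: marker, id "XENF", version 1,
# options with bit 0 clear (little-endian).
{
  printf '\xff\xff\xff\xff\xff\xff\xff\xff'
  emit_le32 $(( 0x58454E46 ))
  emit_le32 1
  emit_le32 0
} | od -A n -t x1
```

Note the id bytes come out as 46 4e 45 58 ("FNEX") in a little-endian
dump, which is one reason an explicit endianness bit (or a fixed-endian
header) matters for detecting foreign-endian images.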

>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> arch        0x0000: Reserved.
>>
>>             0x0001: x86.
>>
>>             0x0002: ARM.
>>
>> type        0x0000: Reserved.
>>
>>             0x0001: x86 PV.
>>
>>             0x0002 - 0xFFFF: Reserved.
> 
> Is the type field per-arch? i.e. if arch=0x0002 can we use type = 0x0001
> for ARM domains?

I think it would be best to avoid reusing types for different
architectures -- it's not like we're going to be short on types.

>> P2M
>> ---
>>
>> [ This is a more flexible replacement for the old p2m_size field and
>> p2m array. ]
> 
> 
> What is the latter for again?

Er.  I think I've misunderstood the code and gotten confused here.

>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> count       Number of pages described in this record.
>>
>> pfn         An array of count PFNs. Bits 63-60 contain
>>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> 
> Now might be a good time to remove this intertwining? I suppose 60 bits
> is a lot of PFNs, but if the VM's address space is sparse it isn't out
> of the question.

I don't think we want to consider systems with > 64 bits of address
space, so 60 bits is more than enough for PFNs.
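
The packing being discussed (PFN in bits 0-59, XEN_DOMCTL_PFINFO_* type
in bits 63-60) unpacks with plain shell arithmetic; the entry value
below is made up for illustration:

```shell
# Hypothetical pfn-array entry: type 0x9 in bits 63-60, PFN 0x12345 below.
entry=$(( (0x9 << 60) | 0x12345 ))

pfn=$((  entry & ((1 << 60) - 1) ))  # bits 0-59: the PFN itself
ptype=$(( (entry >> 60) & 0xf ))     # bits 63-60: the PFINFO type; the
                                     # & 0xf discards sign-extension from
                                     # the (signed) arithmetic shift

printf 'type=%#x pfn=%#x\n' "$ptype" "$pfn"   # prints "type=0x9 pfn=0x12345"
```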

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:56:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBwc-0007hi-9X; Tue, 11 Feb 2014 11:56:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDBwa-0007hd-Nr
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 11:56:04 +0000
Received: from [85.158.137.68:17063] by server-16.bemta-3.messagelabs.com id
	44/C7-29917-3DF0AF25; Tue, 11 Feb 2014 11:56:03 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392119763!66627!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25941 invoked from network); 11 Feb 2014 11:56:03 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 11:56:03 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDBwP-00019s-4n; Tue, 11 Feb 2014 11:55:53 +0000
Date: Tue, 11 Feb 2014 12:55:53 +0100
From: Tim Deegan <tim@xen.org>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20140211115553.GB97288@deinos.phlegethon.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
> On Tue, Feb 11, 2014 at 9:02 AM, Tim Deegan <tim@xen.org> wrote:
> > At 08:15 +0000 on 10 Feb (1392016516), Zhang, Yang Z wrote:
> >> Tim Deegan wrote on 2014-02-10:
> >> > At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
> >> >> From: Yang Zhang <yang.z.zhang@Intel.com>
> >> >>
> >> >> When enabling log dirty mode, it sets all of the guest's memory to readonly.
> >> >> And in a HAP-enabled domain, it modifies all EPT entries to clear the
> >> >> write bit to make sure it is readonly. This will cause a problem if
> >> >> VT-d shares page tables with EPT: the device may issue a DMA write
> >> >> request, then the VT-d engine tells it the target memory is readonly and
> >> >> results in a VT-d
> >> > fault.
> >> >
> >> > So that's a problem even if only the VGA framebuffer is being tracked
> >> > -- DMA from a passthrough device will either cause a spurious error or
> >> > fail to update the dirt bitmap.
> >>
> >> Do you mean the VGA framebuffer will be used as a DMA buffer in the guest? If yes, I think it's the guest's responsibility to ensure that never happens.
> >>
> >
> > I don't think that works.  We can't expect arbitrary OSes to (a) know
> > they're running on Xen and (b) know that that means they can't DMA to
> > or from their framebuffers.
> >
> >> Without VT-d and EPT share page, we still cannot track the memory
> >> updating from DMA.
> >
> > Yeah, but at least we don't risk crashing the _host_ by throwing DMA
> > failures around.
> 
> What I'm missing here is what you think a proper solution is.

A _proper_ solution would be for the IOMMU h/w to allow restartable
faults, so that we can do all the usual fault-driven virtual memory
operations with DMA. :)  In the meantime...

>  It seems we have:
> 
> A. Share EPT/IOMMU tables, only do log-dirty tracking on the buffer
> being tracked, and hope the guest doesn't DMA into video ram; DMA
> causes IOMMU fault. (This really shouldn't crash the host under normal
> circumstances; if it does it's a hardware bug.)

Note "hope" and "shouldn't" there. :)

> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA into
> video ram.  DMA causes missed update to dirty bitmap, which will
> hopefully just cause screen corruption.

Yep.  At a cost of about 0.2% in space and some extra bookkeeping
(for VMs that actually have devices passed through to them).
The extra bookkeeping could be expensive in some cases, but basically
all of those cases are already incompatible with IOMMU.

> C. Do buffer scanning rather than dirty vram tracking (SLOW)
> D. Don't allow both a virtual video card and pass-through

E. Share EPT and IOMMU tables until someone turns on log-dirty mode
and then split them out.  That one 

> Given that most operating systems will probably *not* DMA into video
> ram, and that an IOMMU fault isn't *supposed* to be able to crash the
> host, 'A' seems like the most reasonable option to me.

Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
seems to have most support from other people.  On that basis this
patch can have my Ack.

Is it intended for 4.4 as a bugfix (i.e. should I be checking it in?)

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:56:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBwc-0007hi-9X; Tue, 11 Feb 2014 11:56:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDBwa-0007hd-Nr
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 11:56:04 +0000
Received: from [85.158.137.68:17063] by server-16.bemta-3.messagelabs.com id
	44/C7-29917-3DF0AF25; Tue, 11 Feb 2014 11:56:03 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392119763!66627!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25941 invoked from network); 11 Feb 2014 11:56:03 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 11:56:03 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDBwP-00019s-4n; Tue, 11 Feb 2014 11:55:53 +0000
Date: Tue, 11 Feb 2014 12:55:53 +0100
From: Tim Deegan <tim@xen.org>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Message-ID: <20140211115553.GB97288@deinos.phlegethon.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"JBeulich@suse.com" <JBeulich@suse.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
> On Tue, Feb 11, 2014 at 9:02 AM, Tim Deegan <tim@xen.org> wrote:
> > At 08:15 +0000 on 10 Feb (1392016516), Zhang, Yang Z wrote:
> >> Tim Deegan wrote on 2014-02-10:
> >> > At 14:14 +0800 on 10 Feb (1392038040), Yang Zhang wrote:
> >> >> From: Yang Zhang <yang.z.zhang@Intel.com>
> >> >>
> >> >> When enabling log-dirty mode, it sets all of the guest's memory to
> >> >> read-only. In a HAP-enabled domain, it clears the write bit in all
> >> >> EPT entries to make them read-only. This causes a problem if VT-d
> >> >> shares the page table with EPT: the device may issue a DMA write
> >> >> request, the VT-d engine will find the target memory read-only, and
> >> >> the result is a VT-d fault.
> >> >
> >> > So that's a problem even if only the VGA framebuffer is being tracked
> >> > -- DMA from a passthrough device will either cause a spurious error or
> >> > fail to update the dirty bitmap.
> >>
> >> Do you mean the VGA framebuffer will be used as a DMA buffer in the guest? If yes, I think it's the guest's responsibility to ensure that never happens.
> >>
> >
> > I don't think that works.  We can't expect arbitrary OSes to (a) know
> > they're running on Xen and (b) know that that means they can't DMA to
> > or from their framebuffers.
> >
> >> Even without VT-d and EPT sharing page tables, we still cannot track
> >> memory updates from DMA.
> >
> > Yeah, but at least we don't risk crashing the _host_ by throwing DMA
> > failures around.
> 
> What I'm missing here is what you think a proper solution is.

A _proper_ solution would be for the IOMMU h/w to allow restartable
faults, so that we can do all the usual fault-driven virtual memory
operations with DMA. :)  In the meantime...

>  It seems we have:
> 
> A. Share EPT/IOMMU tables, only do log-dirty tracking on the buffer
> being tracked, and hope the guest doesn't DMA into video ram; DMA
> causes IOMMU fault. (This really shouldn't crash the host under normal
> circumstances; if it does it's a hardware bug.)

Note "hope" and "shouldn't" there. :)

> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA into
> video ram.  DMA causes missed update to dirty bitmap, which will
> hopefully just cause screen corruption.

Yep.  At a cost of about 0.2% in space and some extra bookkeeping
(for VMs that actually have devices passed through to them).
The extra bookkeeping could be expensive in some cases, but basically
all of those cases are already incompatible with IOMMU.

> C. Do buffer scanning rather than dirty vram tracking (SLOW)
> D. Don't allow both a virtual video card and pass-through

E. Share EPT and IOMMU tables until someone turns on log-dirty mode
and then split them out.  That one 

> Given that most operating systems will probably *not* DMA into video
> ram, and that an IOMMU fault isn't *supposed* to be able to crash the
> host, 'A' seems like the most reasonable option to me.

Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
seems to have most support from other people.  On that basis this
patch can have my Ack.

Is it intended for 4.4 as a bugfix (i.e. should I be checking it in?)

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 11:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 11:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDBz5-000832-TS; Tue, 11 Feb 2014 11:58:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDBz4-000808-0d
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 11:58:38 +0000
Received: from [193.109.254.147:38102] by server-9.bemta-14.messagelabs.com id
	B6/F1-24895-D601AF25; Tue, 11 Feb 2014 11:58:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392119915!3524512!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25721 invoked from network); 11 Feb 2014 11:58:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 11:58:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99755315"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 11:58:35 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 06:58:34 -0500
Message-ID: <52FA1069.2040709@citrix.com>
Date: Tue, 11 Feb 2014 11:58:33 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: <rshriram@cs.ubc.ca>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
In-Reply-To: <CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10/02/14 20:00, Shriram Rajagopalan wrote:
> On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com
> <mailto:david.vrabel@citrix.com>> wrote:
> 
> 
> It's tempting to adopt all the TCP-style madness for transferring a set of
> structured data.  Why this endianness mess?  Am I missing something here?
> I am assuming that the lion's share of Xen's deployment is on x86
> (not including Amazon). So that leaves ARM.  Why not let those
> processors take the hit of endianness conversion?

I'm not sure I would characterize a spec being precise about byte
ordering as "endianness mess".

I think it would be a pretty poor specification if it didn't specify
byte ordering -- we can't have the tools having to make assumptions
about the ordering.

However, I do think it can be specified in such a way that all the
current use cases don't have to do any byte swapping (except for the
minimal header).

>         +-----------------------+-------------------------+
>         | checksum              | (reserved)              |
>         +-----------------------+-------------------------+
> 
> 
> I am assuming that the checksum field is present only
> for debugging purposes? Otherwise, I see no reason for the
> computational overhead, given that we are already sending data
> over a reliable channel, and IIRC we already have an image-wide checksum
> when saving the image to disk.

I'm not aware of any image-wide checksum.

The checksum seems like a potentially useful feature, but I don't have a
requirement for it, so if no one else thinks it is useful it can be removed.

>     PAGE_DATA
>     ---------
[...]
>     --------------------------------------------------------------------
>     Field       Description
>     ----------- --------------------------------------------------------
>     count       Number of pages described in this record.
> 
>     pfn         An array of count PFNs. Bits 63-60 contain
>                 the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> 
>     page_data   page_size octets of uncompressed page contents for each page
>                 set as present in the pfn array.
>     --------------------------------------------------------------------
> 
> 
> s/uncompressed/(compressed/uncompressed)/
> (Remus sends compressed data)

No.  I think compressed page data should have its own record type. The
current scheme of mode flipping records seems crazy to me.

>     x86 PV Guest
>     ------------
> 
>     An x86 PV guest image will have in this order:
> 
>     1. Image header
>     2. Domain header
>     3. X86_PV_INFO record
>     4. At least one P2M record
>     5. At least one PAGE_DATA record
> 
>     6. VCPU_INFO record
>     6. At least one VCPU_CONTEXT record
> 
>     7. END record
> 
> 
> There seems to be a bunch of info missing. Here are some
> missing elements that I can recall at the moment:
> a) there is no support for sending one-time markers that switch the
> receiver's operating mode in the middle of a data stream.
> E.g., XC_SAVE_ENABLE_COMPRESSION, XC_SAVE_ID_LAST_CHECKPOINT, etc.
> XC_SAVE_ENABLE_VERIFY_MODE,

Yes. As I noted, this specification is not yet complete.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:10:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDC9x-0000YD-Se; Tue, 11 Feb 2014 12:09:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDC9w-0000Y8-Ut
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:09:53 +0000
Received: from [193.109.254.147:4251] by server-15.bemta-14.messagelabs.com id
	8E/ED-10839-0131AF25; Tue, 11 Feb 2014 12:09:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392120590!3526999!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4195 invoked from network); 11 Feb 2014 12:09:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:09:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99757591"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 12:09:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 07:09:49 -0500
Message-ID: <1392120588.26657.99.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 11 Feb 2014 12:09:48 +0000
In-Reply-To: <52FA0C1A.5080004@citrix.com>
References: <52F90A71.40802@citrix.com>
	<1392111040.26657.50.camel@kazak.uk.xensource.com>
	<52FA0C1A.5080004@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 11:40 +0000, David Vrabel wrote:
> On 11/02/14 09:30, Ian Campbell wrote:
> > On Mon, 2014-02-10 at 17:20 +0000, David Vrabel wrote:
> >> 
> >> Tools supporting version _V_ of the specification shall always save
> >> images using version _V_.  Tools shall support restoring from version
> >> _V_ and version _V_ - 1.
> > 
> > This isn't quite right since it is in terms of image format version and
> > not Xen version (unless the two are to be linked somehow). The Xen
> > project supports migration from version X-1 to version X (where X is the
> > Xen version). It's not inconceivable that the image format version
> > wouldn't change over Xen releases.
> 
> I was expecting it to be only necessary to bump the format version with
> each Xen x.y release but I think you're right.  It may be needed to bump
> the version of a x.y.z release.

Actually, we try very hard to avoid that (i.e. changing the save format
in a stable branch).

What I meant was:
	Xen = 4.5, V = 1
	Xen = 4.6, V = 1 (nothing changed in the save file spec since 4.5)
	Xen = 4.7, V = 2

The spec with its current wording about V would seem to say that Xen 4.7
must support migration from Xen 4.5, which is not the case.

For HVM support there are going to be (at least) two additional record
types "opaque blob of Xen state" and "opaque blob of qemu state". It is
very unlikely that anyone will remember to bump V in the tools when
changing those.

I think the solution is to also include the Xen version in the header,
as well as the "file format" version V. You could do this as part of the
header of those opaque blobs, but it might be better in the actual
overall headers as well.

I've no idea how to deal with the qemu blob -- but I think qemu upstream
has been working to make cross-version migration work better, so maybe
it isn't a problem any more.

> 
> >> --------------------------------------------------------------------
> >> Field       Description
> >> ----------- --------------------------------------------------------
> >> marker      0xFFFFFFFFFFFFFFFF.
> >>
> >> id          0x58454E46 ("XENF" in ASCII).
> >>
> >> version     0x00000001.  The version of this specification.
> >>
> >> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
> > 
> > Couldn't we just specify that things are in a specific endianness
> > related to the header's arch field?
> > 
> > I appreciate the desire to make the format endian neutral and explicit
> > about which is in use but (apart from the header) why would you ever
> > want to go non-native endian for a given arch?
> 
> I am anticipating bi-endian architectures which could mean (for example)
> migrating a little-endian guest from a little-endian host to a
> big-endian host.

It's possible that this would end up looking like a totally separate
arch anyway (cf "armbe"), not just at the save format level, but at the
hypercall and PVIO layers too.

> I would prefer to retain this bit, but I think we can specify that
> certain architectures always use a specific endianness so initially we
> wouldn't need to support anything other than the native endianness
> (little-endian on both x86 and ARM).

I think that makes sense initially.

It's always a bit tricky with a spec to separate the notion of what is
possible within the spec from what you actually intend to implement now.

Perhaps a reasonable compromise is to document a requirement that savers
populate this field accurately, but only require restorers to support
their native endianness (leaving anything further as optional).

> >> --------------------------------------------------------------------
> >> Field       Description
> >> ----------- --------------------------------------------------------
> >> arch        0x0000: Reserved.
> >>
> >>             0x0001: x86.
> >>
> >>             0x0002: ARM.
> >>
> >> type        0x0000: Reserved.
> >>
> >>             0x0001: x86 PV.
> >>
> >>             0x0002 - 0xFFFF: Reserved.
> > 
> > Is the type field per-arch? i.e. if arch=0x0002 can we use type = 0x0001
> > for ARM domains?
> 
> I think it would be best to avoid reusing types for different
> architectures -- it's not like we're going to be short on types.

Any reason not to combine the arch+type into a single field then?

> >> --------------------------------------------------------------------
> >> Field       Description
> >> ----------- --------------------------------------------------------
> >> count       Number of pages described in this record.
> >>
> >> pfn         An array of count PFNs. Bits 63-60 contain
> >>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> > 
> > Now might be a good time to remove this intertwining? I suppose 60 bits
> > is a lot of PFNs, but if the VM's address space is sparse it isn't out
> > of the question.
> 
> I don't think we want to consider systems with > 64 bits of address
> space, so 60 bits is more than enough for PFNs.

Is it? What about systems with 61..63 bits of address space?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:10:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDC9x-0000YD-Se; Tue, 11 Feb 2014 12:09:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDC9w-0000Y8-Ut
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:09:53 +0000
Received: from [193.109.254.147:4251] by server-15.bemta-14.messagelabs.com id
	8E/ED-10839-0131AF25; Tue, 11 Feb 2014 12:09:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392120590!3526999!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4195 invoked from network); 11 Feb 2014 12:09:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:09:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99757591"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 12:09:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 07:09:49 -0500
Message-ID: <1392120588.26657.99.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 11 Feb 2014 12:09:48 +0000
In-Reply-To: <52FA0C1A.5080004@citrix.com>
References: <52F90A71.40802@citrix.com>
	<1392111040.26657.50.camel@kazak.uk.xensource.com>
	<52FA0C1A.5080004@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 11:40 +0000, David Vrabel wrote:
> On 11/02/14 09:30, Ian Campbell wrote:
> > On Mon, 2014-02-10 at 17:20 +0000, David Vrabel wrote:
> >> 
> >> Tools supporting version _V_ of the specification shall always save
> >> images using version _V_.  Tools shall support restoring from version
> >> _V_ and version _V_ - 1.
> > 
> > This isn't quite right since it is in terms of image format version and
> > not Xen version (unless the two are to be linked somehow). The Xen
> > project supports migration from version X-1 to version X (where X is the
> > Xen version). It's not inconceivable that the image format version
> > wouldn't change over Xen releases.
> 
> I was expecting it to be only necessary to bump the format version with
> each Xen x.y release but I think you're right.  It may be necessary to
> bump the version for an x.y.z release.

We try very hard to avoid that (i.e. changing the save format in a
stable branch) actually. 

What I meant was:
	Xen = 4.5, V = 1
	Xen = 4.6, V = 1 (nothing changed in the save file spec for 4.5)
	Xen = 4.7, V = 2

The spec with its current wording about V would seem to say that Xen 4.7
must support migration from Xen 4.5, which is not the case.

For HVM support there are going to be (at least) two additional record
types "opaque blob of Xen state" and "opaque blob of qemu state". It is
very unlikely that anyone will remember to bump V in the tools when
changing those.

I think the solution is to also include the Xen version in the header,
as well as the "file format" version V. You could do this as part of the
header of those opaque blobs, but it might be better in the actual
overall headers (or as well).

I've no idea how to deal with the qemu blob -- but I think qemu upstream
has been working to make cross version migration work better, so maybe
it isn't a problem any more.

> 
> >> --------------------------------------------------------------------
> >> Field       Description
> >> ----------- --------------------------------------------------------
> >> marker      0xFFFFFFFFFFFFFFFF.
> >>
> >> id          0x58454E46 ("XENF" in ASCII).
> >>
> >> version     0x00000001.  The version of this specification.
> >>
> >> options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
> > 
> > Couldn't we just specify that things are in a specific endianness
> > related to the header's arch field?
> > 
> > I appreciate the desire to make the format endian neutral and explicit
> > about which is in use but (apart from the header) why would you ever
> > want to go non-native endian for a given arch?
> 
> I am anticipating bi-endian architectures which could mean (for example)
> migrating a little-endian guest from a little-endian host to a
> big-endian host.

It's possible that this would end up looking like a totally separate
arch anyway (cf "armbe"), not just at the save format level, but at the
hypercall and PVIO layers too.

> I would prefer to retain this bit, but I think we can specify that
> certain architectures always use a specific endianness so initially we
> wouldn't need to support anything other than the native endianness
> (little-endian on both x86 and ARM).

I think that makes sense initially.

It's always a bit tricky with a spec to separate the notion of what is
possible within the spec from what you actually intend to implement now.

Perhaps a reasonable compromise is to document a requirement that savers
populate this field accurately but only require restorers to support
their native endianness (leaving anything further as optional).

> >> --------------------------------------------------------------------
> >> Field       Description
> >> ----------- --------------------------------------------------------
> >> arch        0x0000: Reserved.
> >>
> >>             0x0001: x86.
> >>
> >>             0x0002: ARM.
> >>
> >> type        0x0000: Reserved.
> >>
> >>             0x0001: x86 PV.
> >>
> >>             0x0002 - 0xFFFF: Reserved.
> > 
> > Is the type field per-arch? i.e. if arch=0x0002 can we use type = 0x0001
> > for ARM domains?
> 
> I think it would be best to avoid reusing types for different
> architectures -- it's not like we're going to be short on types.

Any reason not to combine the arch+type into a single field then?

> >> --------------------------------------------------------------------
> >> Field       Description
> >> ----------- --------------------------------------------------------
> >> count       Number of pages described in this record.
> >>
> >> pfn         An array of count PFNs. Bits 63-60 contain
> >>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> > 
> > Now might be a good time to remove this intertwining? I suppose 60 bits
> > is a lot of PFNs, but if the VM's address space is sparse it isn't out
> > of the question.
> 
> I don't think we want to consider systems with > 64 bits of address
> space, so 60 bits is more than enough for PFNs.

Is it? What about systems with 61..63 bits of address space?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:11:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCBX-0000d0-FP; Tue, 11 Feb 2014 12:11:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDCBV-0000cu-Nc
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:11:29 +0000
Received: from [85.158.137.68:26745] by server-5.bemta-3.messagelabs.com id
	4E/91-04712-0731AF25; Tue, 11 Feb 2014 12:11:28 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392120688!1097117!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23383 invoked from network); 11 Feb 2014 12:11:28 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 12:11:28 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDCBN-0001Rd-A7; Tue, 11 Feb 2014 12:11:21 +0000
Date: Tue, 11 Feb 2014 13:11:21 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140211121121.GC97288@deinos.phlegethon.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F9F838020000780011B0AF@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
> >>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
> > At 16:34 +0000 on 10 Feb (1392046444), Jan Beulich wrote:
> >> > The underlying problem here comes because the AF and UF bits of RTC 
> >> > interrupt
> >> > state is modelled by the RTC code, but the PF is modelled by the pt code.  
> >> > The
> >> > root cause of Windows' infinite loop is that RTC_PF is being re-set on
> >> > vmentry
> >> > before the interrupt logic has worked out that it can't actually inject an
> >> > RTC
> >> > interrupt, causing Windows to erroneously read (RTC_PF|RTC_IRQF) when it
> >> > should be reading 0.
> >> 
> >> So you're undoing a whole lot of changes done with the goal of
> >> getting the overall emulation closer to what real hardware does,
> >> just to paper over an issue elsewhere in the code? Not really an
> >> approach I'm in favor of.
> > 
> > My understanding was that the problem is explicitly in the change to
> > how RTC code is called from vpt code.
> > 
> > Originally, the RTC callback was called from pt_intr_post, like other
> > vpt sources.  Your rework changed it to be called much earlier, when
> > the vpt was considering which time source to choose.  AIUI that was to
> > let the RTC code tell the VPT not to inject, if the guest hasn't acked
> > the last interrupt, right?
> 
> Not only, it was also a necessary pre-adjustment for the !PIE case
> to work (see below).
> 
> > Since that was changed later to allow a certain number of dead ticks
> > before deciding to stop the timer chain, the decision no longer has to
> > be made so early -- we can allow one more IRQ to go in and then
> > disable it. 
> > 
> > That is the main change of this cset:  we go back to driving
> > the interrupt from the vpt code and fixing up the RTC state after vpt
> > tells us it's injected an interrupt.
> 
> And that's what is wrong imo, as it doesn't allow driving PF correctly
> when !PIE.

Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
you remember why not?  Have I forgotten some wrinkle or race here?

> >> > -        rtc_update_irq(s);
> >> 
> >> So given the problem description, this would seem to be the most
> >> important part at a first glance. But looking more closely, I'm getting
> >> the impression that the call to rtc_update_irq() had no effect at all
> >> here anyway: The function would always bail on the second if() due
> >> to REG_C having got cleared a few lines up.
> > 
> > Yeah, this has nothing to do with the bug being fixed here.  The old
> > REG_C read was operating correctly, but on the return-to-guest path:
> >  - vpt sees another RTC interrupt is due and calls RTC code
> >  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
> >  - vlapic code sees the last interrupt is still in the ISR and does
> >    nothing;
> >  - we return to the guest having set IRQF but not consumed a timer
> >    event, so vpt state is the same
> >  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
> >    waiting for a read of 0.
> >  - repeat forever.
> 
> Which would call for a flag suppressing the setting of PF|IRQF
> until the timer event got consumed. Possibly with some safety
> belt for this to not get deferred indefinitely (albeit if the interrupt
> doesn't get injected for extended periods of time, the guest
> would presumably have more severe problems than these flags
> not getting updated as expected).

That's pretty much what we're doing here -- the pt_intr_post callback
sets PF|IRQF when the interrupt is injected.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:30:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCTw-0001iW-Lh; Tue, 11 Feb 2014 12:30:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDCTv-0001iR-6i
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 12:30:31 +0000
Received: from [193.109.254.147:28588] by server-9.bemta-14.messagelabs.com id
	31/2C-24895-6E71AF25; Tue, 11 Feb 2014 12:30:30 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392121829!3524741!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19352 invoked from network); 11 Feb 2014 12:30:30 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:30:30 -0000
Received: by mail-we0-f171.google.com with SMTP id u56so5523832wes.16
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 04:30:29 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=lJGQ7QvVTz/Xz1X4tIM4yHc7tfn3sT1jUyXWdLOJ1GU=;
	b=Erjfkz0dn0C4zDY4t8zsgujGQTaAbZQLOpp5r56AqFN6R5+ibh4HBI3YuDowotc5Zy
	KEx01DN/CjKVCh25+uWMAXYrDu3cCxZRAMyBH0ci8C+8pGP7k2BZBLJVPkdNGHM7INuE
	80byWA06mr/Bf3yoNo474sOsenSp1rUX3o/4CmBzP3FyLG1ZJtPplQTCNWiQ2VUQ5eN3
	suM21eJnlFdRriTSczxAex6iSz6IwhVCZHAJCnYnUNTh7fCrTJJlHfkxYKMqDmbtBWDi
	9K3a12IkGRd7ZlrR2cmlFHLyxJIXId1gBUooJZZYBLRP+0IYRtQludgb0fNhc4Iyb6W2
	3GYw==
X-Gm-Message-State: ALoCoQnpuuHHmtjrjgu4+MUHg2241x639uf/4Gw8GAFPwWrOa4mEn5Y2B2f6hzKT2noqFo6jAj5O
X-Received: by 10.194.63.228 with SMTP id j4mr26833428wjs.34.1392121829506;
	Tue, 11 Feb 2014 04:30:29 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	ci4sm43769593wjc.21.2014.02.11.04.30.28 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 04:30:28 -0800 (PST)
Message-ID: <52FA17E3.9070105@linaro.org>
Date: Tue, 11 Feb 2014 12:30:27 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
In-Reply-To: <20140211085317.GB92054@deinos.phlegethon.org>
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 11/02/14 08:53, Tim Deegan wrote:
> At 23:29 +0000 on 10 Feb (1392071374), Julien Grall wrote:
>> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
>> compilation with clang:
>>
>> In file included from sched_sedf.c:8:
>> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
>> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
>> not found with <angled> include; use "quotes" instead
>>             ^~~~~~~~~~
>>             "stdarg.h"
>
> Looks like on your system stdarg.h doesn't live in a compiler-specific
> path, like we have for the BSDs.  I think we should just go to using
> our own definitions for stdarg/stdbool everywhere; trying to chase the
> compiler-specific versions around is a PITA, and the pieces we
> actually need are trivial.

For BSDs, we are using our own stdarg/stdbool, so we don't include the
system <stdarg.h>.

Linux is using $(CC) -print-file-name=include to get the right path. It
works with both gcc and clang on Linux distros, but not on FreeBSD.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:35:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCYb-0001rD-GJ; Tue, 11 Feb 2014 12:35:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDCYZ-0001r5-GJ
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 12:35:19 +0000
Received: from [85.158.139.211:32990] by server-3.bemta-5.messagelabs.com id
	F7/2A-13671-6091AF25; Tue, 11 Feb 2014 12:35:18 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392122117!3139504!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22849 invoked from network); 11 Feb 2014 12:35:17 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 12:35:17 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDCYV-0001or-NB; Tue, 11 Feb 2014 12:35:15 +0000
Date: Tue, 11 Feb 2014 13:35:15 +0100
From: Tim Deegan <tim@xen.org>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140211123515.GD97288@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA17E3.9070105@linaro.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 12:30 +0000 on 11 Feb (1392118227), Julien Grall wrote:
> 
> 
> On 11/02/14 08:53, Tim Deegan wrote:
> > At 23:29 +0000 on 10 Feb (1392071374), Julien Grall wrote:
> >> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
> >> compilation with clang:
> >>
> >> In file included from sched_sedf.c:8:
> >> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
> >> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
> >> not found with <angled> include; use "quotes" instead
> >>             ^~~~~~~~~~
> >>             "stdarg.h"
> >
> > Looks like on your system stdarg.h doesn't live in a compiler-specific
> > path, like we have for the BSDs.  I think we should just go to using
> > our own definitions for stdarg/stdbool everywhere; trying to chase the
> > compiler-specific versions around is a PITA, and the pieces we
> > actually need are trivial.
> 
> For BSDs, we are using our own stdarg/stdbool.  So we don't include the 
> system <stdarg.h>.
> 
> Linux is using $(CC) -print-file-name=include to get the right path. It 
> works with both gcc and clang on Linux distros, but not on FreeBSD.

Wait - is the error message you posted from clang on FreeBSD?
That's surprising; on FreeBSD xen/stdarg.h shouldn't be trying to
include <stdarg.h> at all.  Is __FreeBSD__ not being defined?

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:36:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCZg-0001w5-0I; Tue, 11 Feb 2014 12:36:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDCZe-0001vv-NI
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 12:36:26 +0000
Received: from [85.158.143.35:26936] by server-2.bemta-4.messagelabs.com id
	CC/B2-10891-A491AF25; Tue, 11 Feb 2014 12:36:26 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392122185!4806952!1
X-Originating-IP: [209.85.212.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21771 invoked from network); 11 Feb 2014 12:36:25 -0000
Received: from mail-wi0-f180.google.com (HELO mail-wi0-f180.google.com)
	(209.85.212.180)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:36:25 -0000
Received: by mail-wi0-f180.google.com with SMTP id hm4so4149940wib.13
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 04:36:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=VJXUQaJLRShKrRkOV35uSpNZf1O2FQQgVY5VM4ljdec=;
	b=Asyzx1dwPUEdQkDPm46lkRf7AZcN9au/7RiQ5iYr8aillD0VJxgdLM9DhiQBhs1uDm
	AO36pZtleiPULvtHr6jbDoHHeSvPGqbgbUyNmKf4me2OZWJN1dmO6ssOUG/MOPuFMP+W
	wg3D8JKOsoxG+Zs4QJ95YnwbGMDnVoxSyn+BnmqaApsFQUeLribfRaRImd7s3FM6tEuw
	0zsP6dmpMcmQ4mVyRDk7jNgIHIT8iEEolrHd+DnWOEwO3HuLZget6q0vOATaRbtHCMG4
	+xD+H0g3HgAoxS9NGuIa8KQHww6yAyKD8Q5AtQ9FAha22uYIYoIL4aiFi1detdAJR3wG
	LOiA==
X-Gm-Message-State: ALoCoQnDmCqFv/aRvFjqn4R17YXKu1UiNPHvuuBNgjKEmA8i9HeyAubu3ogvXX1WJhfaKwJImBDr
X-Received: by 10.181.13.11 with SMTP id eu11mr14550604wid.30.1392122184615;
	Tue, 11 Feb 2014 04:36:24 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id r1sm45703715wia.5.2014.02.11.04.36.22
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 04:36:23 -0800 (PST)
Message-ID: <52FA1945.8010400@linaro.org>
Date: Tue, 11 Feb 2014 12:36:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
In-Reply-To: <20140211123515.GD97288@deinos.phlegethon.org>
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 11/02/14 12:35, Tim Deegan wrote:
> At 12:30 +0000 on 11 Feb (1392118227), Julien Grall wrote:
>>
>>
>> On 11/02/14 08:53, Tim Deegan wrote:
>>> At 23:29 +0000 on 10 Feb (1392071374), Julien Grall wrote:
>>>> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
>>>> compilation with clang:
>>>>
>>>> In file included from sched_sedf.c:8:
>>>> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
>>>> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
>>>> not found with <angled> include; use "quotes" instead
>>>>              ^~~~~~~~~~
>>>>              "stdarg.h"
>>>
>>> Looks like on your system stdarg.h doesn't live in a compiler-specific
>>> path, like we have for the BSDs.  I think we should just go to using
>>> our own definitions for stdarg/stdbool everywhere; trying to chase the
>>> compiler-specific versions around is a PITA, and the pieces we
>>> actually need are trivial.
>>
>> For BSDs, we are using our own stdarg/stdbool.  So we don't include the
>> system <stdarg.h>.
>>
>> Linux is using $(CC) -print-file-name=include to get the right path. It
>> works with both gcc and clang on Linux distros, but not on FreeBSD.
>
> Wait - is the error message you posted from clang on FreeBSD?
> That's surprising; on FreeBSD xen/stdarg.h shouldn't be trying to
> include <stdarg.h> at all.  Is __FreeBSD__ not being defined?


No it's from Linux (Debian Wheezy and Fedora). I just gave a try to the 
"-print-file-name" solution on FreeBSD.
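
For readers following along, the probe under discussion can be sketched
roughly as below. This is a hedged illustration only, not the actual
Rules.mk fragment; the fallback path is a made-up placeholder.

```shell
# Ask the compiler where its private headers (stdarg.h etc.) live,
# as Linux builds do with "$(CC) -print-file-name=include".
CC=${CC:-cc}
if command -v "$CC" >/dev/null 2>&1; then
    inc_dir=$("$CC" -print-file-name=include)
else
    inc_dir=/usr/lib/gcc/include   # hypothetical fallback for illustration
fi
# With -nostdinc the system paths are dropped, so the compiler's own
# include directory must be re-added explicitly.
CFLAGS="-nostdinc -isystem $inc_dir"
echo "$CFLAGS"
```

On FreeBSD the base-system compiler may not report a usable directory
this way, which is the incompatibility being described.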

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:40:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCd6-0002ZP-NN; Tue, 11 Feb 2014 12:40:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCd4-0002ZK-SX
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 12:39:59 +0000
Received: from [193.109.254.147:4383] by server-4.bemta-14.messagelabs.com id
	32/61-32066-D1A1AF25; Tue, 11 Feb 2014 12:39:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392122396!3503484!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19940 invoked from network); 11 Feb 2014 12:39:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:39:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99764359"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 12:39:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 07:39:54 -0500
Message-ID: <1392122393.26657.102.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 12:39:53 +0000
In-Reply-To: <1392117234.26657.87.camel@kazak.uk.xensource.com>
References: <osstest-24817-mainreport@xen.org>
	<1392028633.5117.29.camel@kazak.uk.xensource.com>
	<1392117234.26657.87.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: [Xen-devel] [PATCH OSSTEST 0/2] fix tree and revision selection for
 linux branches under armhf
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 11:13 +0000, Ian Campbell wrote:
> But I can't help thinking that this should perhaps be solved further up
> by e.g. making $TREE_LINUX_ARM#$REVISION_LINUX_ARM be correct for
> linux-linus -- i.e. by making some change in ap-common:info_linux_tree.
> 
> I'm not sure what that would look like though -- it might be easiest to
> thrash this stuff out f2f later.

Which is what we did. 2 patches will follow, the first to revert the
incorrect change, the second to introduce the proper fix.

Ian -- it seems that osstest.git#pretest is in a state where I could
rewind instead of reverting the first change -- which do you prefer
under the circumstances?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:42:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCf7-0002iE-RU; Tue, 11 Feb 2014 12:42:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCf6-0002i0-Nb
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:42:04 +0000
Received: from [85.158.143.35:18301] by server-3.bemta-4.messagelabs.com id
	5F/7B-11539-B9A1AF25; Tue, 11 Feb 2014 12:42:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392122521!4812911!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21584 invoked from network); 11 Feb 2014 12:42:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:42:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101578912"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 12:42:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 07:42:00 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDCf2-000856-7V;
	Tue, 11 Feb 2014 12:42:00 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 12:41:59 +0000
Message-ID: <1392122520-5453-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392122393.26657.102.camel@kazak.uk.xensource.com>
References: <1392122393.26657.102.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 1/2] Revert "mfi-common: Only override
	the pvops kernel repo for linux-arm-xen branch"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit a528b10cd6a552e92ed2b1b809c991a421ebdba6.

Fixing this in make-flight was incorrect -- make-flight should do as it was
told and use the appropriate TREE_LINUX(_ARCH) and REVISION_LINUX(_ARCH)
values it is given.

The real issue here was that these inputs were bogus for the linux-linus tree
when running on armhf and this will be fixed in a followup patch.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mfi-common b/mfi-common
index f7f981e..8f56092 100644
--- a/mfi-common
+++ b/mfi-common
@@ -44,11 +44,6 @@ create_build_jobs () {
 
     if [ "x$arch" = xdisable ]; then continue; fi
 
-    pvops_kernel="
-      tree_linux=$TREE_LINUX
-      revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
-    "
-
     case "$arch" in
     armhf)
       case "$branch" in
@@ -64,13 +59,10 @@ create_build_jobs () {
       xen-4.1-testing) continue;;
       xen-4.2-testing) continue;;
       esac
-
-      if [ "$branch" = "linux-arm-xen" ]; then
-        pvops_kernel="
-          tree_linux=$TREE_LINUX_ARM
-          revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-        "
-      fi
+      pvops_kernel="
+        tree_linux=$TREE_LINUX_ARM
+        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+      "
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
       "
@@ -79,6 +71,10 @@ create_build_jobs () {
       case "$branch" in
       linux-arm-xen) continue;;
       esac
+      pvops_kernel="
+        tree_linux=$TREE_LINUX
+        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+      "
       ;;
     esac
 
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:42:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCf6-0002i2-Ey; Tue, 11 Feb 2014 12:42:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCf5-0002hv-Qn
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:42:04 +0000
Received: from [85.158.143.35:18231] by server-3.bemta-4.messagelabs.com id
	18/7B-11539-B9A1AF25; Tue, 11 Feb 2014 12:42:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392122521!4812911!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21468 invoked from network); 11 Feb 2014 12:42:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:42:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101578911"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 12:42:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 07:42:00 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDCf2-000856-AK;
	Tue, 11 Feb 2014 12:42:00 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 12:42:00 +0000
Message-ID: <1392122520-5453-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392122393.26657.102.camel@kazak.uk.xensource.com>
References: <1392122393.26657.102.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 2/2] cr-daily-branch: make sure we test
	the correct tree for Linux branches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These branches should test the specific Linux tree which they correspond
to, and so should not apply the per-arch overrides, which are only intended
to pick up an already-tested Linux branch for use when testing some other
non-Linux branch.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
Applied on top of a revert of a528b10cd6a552e92ed2b1b809c991a421ebdba6.
---
 cr-daily-branch | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/cr-daily-branch b/cr-daily-branch
index 41ca796..02fef15 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -122,9 +122,16 @@ if [ "x$REVISION_LINUX" = x ]; then
         export REVISION_LINUX
 fi
 if [ "x$REVISION_LINUX_ARM" = x ]; then
+    if [ "x$tree" = "xlinux" ] ; then
+	TREE_LINUX_ARM=$TREE_LINUX
+	export TREE_LINUX_ARM
+	REVISION_LINUX_ARM=$REVISION_LINUX
+        export REVISION_LINUX_ARM
+    else
 	determine_version REVISION_LINUX_ARM ${linuxbranch:-linux-arm-xen} \
 		LINUX_ARM
         export REVISION_LINUX_ARM
+    fi
 fi
 if [ "x$REVISION_LINUXFIRMWARE" = x ]; then
 	determine_version REVISION_LINUXFIRMWARE linuxfirmware
-- 
1.8.5.2
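[Editorial note: the control flow of the hunk above can be exercised standalone. The sketch below is illustrative only — the tree URL, the revision value, and the determine_version stand-in are invented placeholders, not osstest's real defaults.]

```shell
#!/bin/sh
# Illustrative sketch of the override logic in the hunk above: the
# per-arch ARM variables are only computed when unset, and when the
# branch under test is itself a Linux branch they are forced to follow
# the primary TREE_LINUX/REVISION_LINUX rather than a separate baseline.
unset TREE_LINUX_ARM REVISION_LINUX_ARM       # clean slate for the demo

TREE_LINUX=git://example.org/linux.git        # stand-in value
REVISION_LINUX=deadbeef                       # stand-in value
tree=linux                                    # pretend: a Linux branch itself

if [ "x$REVISION_LINUX_ARM" = x ]; then       # "x" prefix guards empty values
    if [ "x$tree" = "xlinux" ]; then
        TREE_LINUX_ARM=$TREE_LINUX
        REVISION_LINUX_ARM=$REVISION_LINUX
    else
        # stand-in for determine_version, which would look up an
        # already-tested linux-arm-xen baseline revision
        REVISION_LINUX_ARM=some-tested-baseline
    fi
    export TREE_LINUX_ARM REVISION_LINUX_ARM
fi

echo "$REVISION_LINUX_ARM"                    # -> deadbeef
```

Running the same sketch with tree set to anything other than linux takes the determine_version path instead.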


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:42:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:42:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCf7-0002iE-RU; Tue, 11 Feb 2014 12:42:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCf6-0002i0-Nb
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:42:04 +0000
Received: from [85.158.143.35:18301] by server-3.bemta-4.messagelabs.com id
	5F/7B-11539-B9A1AF25; Tue, 11 Feb 2014 12:42:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392122521!4812911!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21584 invoked from network); 11 Feb 2014 12:42:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:42:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101578912"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 12:42:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 07:42:00 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDCf2-000856-7V;
	Tue, 11 Feb 2014 12:42:00 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 12:41:59 +0000
Message-ID: <1392122520-5453-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392122393.26657.102.camel@kazak.uk.xensource.com>
References: <1392122393.26657.102.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST 1/2] Revert "mfi-common: Only override
	the pvops kernel repo for linux-arm-xen branch"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit a528b10cd6a552e92ed2b1b809c991a421ebdba6.

Fixing this in make-flight was incorrect -- make-flight should do as it was
told and use the appropriate TREE_LINUX(_ARCH) and REVISION_LINUX(_ARCH)
values it is given.

The real issue here was that these inputs were bogus for the linux-linus tree
when running on armhf; this will be fixed in a follow-up patch.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mfi-common b/mfi-common
index f7f981e..8f56092 100644
--- a/mfi-common
+++ b/mfi-common
@@ -44,11 +44,6 @@ create_build_jobs () {
 
     if [ "x$arch" = xdisable ]; then continue; fi
 
-    pvops_kernel="
-      tree_linux=$TREE_LINUX
-      revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
-    "
-
     case "$arch" in
     armhf)
       case "$branch" in
@@ -64,13 +59,10 @@ create_build_jobs () {
       xen-4.1-testing) continue;;
       xen-4.2-testing) continue;;
       esac
-
-      if [ "$branch" = "linux-arm-xen" ]; then
-        pvops_kernel="
-          tree_linux=$TREE_LINUX_ARM
-          revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
-        "
-      fi
+      pvops_kernel="
+        tree_linux=$TREE_LINUX_ARM
+        revision_linux=${REVISION_LINUX_ARM:-${DEFAULT_REVISION_LINUX_ARM}}
+      "
       pvops_kconfig_overrides="
         kconfig_override_y=CONFIG_EXT4_FS
       "
@@ -79,6 +71,10 @@ create_build_jobs () {
       case "$branch" in
       linux-arm-xen) continue;;
       esac
+      pvops_kernel="
+        tree_linux=$TREE_LINUX
+        revision_linux=${REVISION_LINUX:-${DEFAULT_REVISION_LINUX}}
+      "
       ;;
     esac
 
-- 
1.8.5.2
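[Editorial note: the shape of the per-arch selection this revert restores can be sketched in isolation. Everything below is a hedged illustration with invented placeholder values, not the real mfi-common variable contents.]

```shell
#!/bin/sh
# Illustrative only: after the revert, each arch arm of the case
# statement sets pvops_kernel itself -- armhf always takes the ARM
# tree, other arches take the default tree -- instead of a shared
# default that only the linux-arm-xen branch used to override.
TREE_LINUX=git://example.org/linux.git            # placeholder
TREE_LINUX_ARM=git://example.org/linux-arm.git    # placeholder
arch=armhf

case "$arch" in
armhf)
    pvops_kernel="tree_linux=$TREE_LINUX_ARM"
    ;;
*)
    pvops_kernel="tree_linux=$TREE_LINUX"
    ;;
esac

echo "$pvops_kernel"    # -> tree_linux=git://example.org/linux-arm.git
```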


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:50:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCmp-0003SB-2v; Tue, 11 Feb 2014 12:50:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCmn-0003S6-NI
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 12:50:01 +0000
Received: from [85.158.143.35:14837] by server-3.bemta-4.messagelabs.com id
	D3/6B-11539-97C1AF25; Tue, 11 Feb 2014 12:50:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392122999!4819092!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32418 invoked from network); 11 Feb 2014 12:50:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:50:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101580423"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 12:49:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 07:49:57 -0500
Message-ID: <1392122996.26657.107.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Tue, 11 Feb 2014 12:49:56 +0000
In-Reply-To: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
References: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xenproject.org,
	Don Slutz <dslutz@verizon.com>, Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 11:38 +0100, Roger Pau Monne wrote:
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Don Slutz <dslutz@verizon.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Ian Jackson <ian.jackson@citrix.com>
> ---
> I've removed Don's "Tested-by" because the patch has changed, could
> you please re-test it Don?
> 
> Please run autogen.sh after applying.

Is this intended for 4.4? If Yes then you need to make the case to
George.

Or shall we simply release note the minimum requirement and enforce it
for 4.5? 

I'm probably unduly nervous about changing autoconf stuff...

I've applied Don's CAMLreturnT fix BTW.

> ---


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:50:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCnW-0003V9-TX; Tue, 11 Feb 2014 12:50:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCnV-0003Ur-AH
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:50:45 +0000
Received: from [85.158.137.68:33773] by server-9.bemta-3.messagelabs.com id
	1B/E4-10184-4AC1AF25; Tue, 11 Feb 2014 12:50:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392123042!1108011!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24894 invoked from network); 11 Feb 2014 12:50:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 12:50:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101580570"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 12:50:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 07:50:41 -0500
Message-ID: <1392123040.26657.108.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 11 Feb 2014 12:50:40 +0000
In-Reply-To: <1392109492-17436-1-git-send-email-olaf@aepfle.de>
References: <1392109492-17436-1-git-send-email-olaf@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] tools/xc: pass errno to callers of
	xc_domain_save
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 10:04 +0100, Olaf Hering wrote:
> Callers of xc_domain_save use errno to print diagnostics if the call
> fails. But xc_domain_save does not preserve the actual errno in case of
> a failure.
> 
> This change preserves errno in all cases where code jumps to the label
> "out". In addition a new label "exit" is added to catch also code which
> used to do just "return 1".
> 
> Now libxl_save_helper:complete can print the actual error string.
> 
> Note: some of the functions used in xc_domain_save do not use errno to
> indicate a reason. In these cases the errno remains undefined as it used
> to be without this change.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

I'm assuming this is 4.5 material at this point.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 11 12:51:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:51:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCno-0003Y6-C2; Tue, 11 Feb 2014 12:51:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDCnl-0003Xj-EY
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:51:01 +0000
Received: from [193.109.254.147:60256] by server-5.bemta-14.messagelabs.com id
	E1/60-16688-4BC1AF25; Tue, 11 Feb 2014 12:51:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392123059!3540549!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3563 invoked from network); 11 Feb 2014 12:51:00 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 12:51:00 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 12:50:55 +0000
Message-Id: <52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 12:50:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
In-Reply-To: <20140211121121.GC97288@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 13:11, Tim Deegan <tim@xen.org> wrote:
> At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
>> >>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
>> > My understanding was that the problem is explicitly in the change to
>> > how RTC code is called from vpt code.
>> > 
>> > Originally, the RTC callback was called from pt_intr_post, like other
>> > vpt sources.  Your rework changed it to be called much earlier, when
>> > the vpt was considering which time source to choose.  AIUI that was to
>> > let the RTC code tell the VPT not to inject, if the guest hasn't acked
>> > the last interrupt, right?
>> 
>> Not only, it was also a necessary pre-adjustment for the !PIE case
>> to work (see below).
>> 
>> > Since that was changed later to allow a certain number of dead ticks
>> > before deciding to stop the timer chain, the decision no longer has to
>> > be made so early -- we can allow one more IRQ to go in and then
>> > disable it. 
>> > 
>> > That is the main change of this cset:  we go back to driving
>> > the interrupt from the vpt code and fixing up the RTC state after vpt
>> > tells us it's injected an interrupt.
>> 
>> And that's what is wrong imo, as it doesn't allow driving PF correctly
>> when !PIE.
> 
> Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
> you remember why not?  Have I forgotten some wrinkle or race here?

Because an OS could inspect PF without setting PIE.

>> > Yeah, this has nothing to do with the bug being fixed here.  The old
>> > REG_C read was operating correctly, but on the return-to-guest path:
>> >  - vpt sees another RTC interrupt is due and calls RTC code
>> >  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
>> >  - vlapic code sees the last interrupt is still in the ISR and does
>> >    nothing;
>> >  - we return to the guest having set IRQF but not consumed a timer
>> >    event, so vpt state is the same
>> >  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
>> >    waiting for a read of 0.
>> >  - repeat forever.
>> 
>> Which would call for a flag suppressing the setting of PF|IRQF
>> until the timer event got consumed. Possibly with some safety
>> belt for this to not get deferred indefinitely (albeit if the interrupt
>> doesn't get injected for extended periods of time, the guest
>> would presumably have more severe problems than these flags
>> not getting updated as expected).
> 
> That's pretty much what we're doing here -- the pt_intr_post callback
> sets PF|IRQF when the interrupt is injected.

Right, except you do this by reverting other stuff rather than
adding the missing functionality on top.

Jan
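For illustration, the REG_C livelock described in the quoted bullet list can be modelled in a few lines of C. All identifiers below are invented for this sketch and are not Xen's actual vpt/RTC symbols; it only demonstrates why the guest's poll of register C never sees zero when PF|IRQF are re-asserted on every return-to-guest path without a timer event being consumed.

```c
/* Minimal model of the REG_C livelock -- illustrative names only. */
#include <assert.h>
#include <stdint.h>

#define RTC_PF   0x40  /* periodic interrupt flag */
#define RTC_IRQF 0x80  /* overall interrupt flag */

static uint8_t reg_c;  /* virtual RTC register C */

/* On the return-to-guest path, vpt sees another tick due and the RTC
 * model sets PF|IRQF -- but the vlapic refuses to inject while the old
 * interrupt is still in the ISR, so no timer event is consumed. */
static void return_to_guest_path(void)
{
    reg_c |= RTC_PF | RTC_IRQF;
}

/* The guest's read of REG_C returns the flags and clears them. */
static uint8_t guest_read_reg_c(void)
{
    uint8_t v = reg_c;
    reg_c = 0;
    return v;
}

/* The guest polls REG_C waiting for 0; since every exit/entry re-sets
 * PF|IRQF before the read, it never sees 0.  Bounded here for the demo;
 * returns the iteration count on success, -1 on livelock. */
static int guest_polls_until_clear(int max_iters)
{
    for (int i = 0; i < max_iters; i++) {
        return_to_guest_path();        /* hypervisor re-asserts flags */
        if (guest_read_reg_c() == 0)
            return i;                  /* loop would terminate */
    }
    return -1;                         /* livelock: never saw 0 */
}
```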


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:58:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:58:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCuW-0003rt-E5; Tue, 11 Feb 2014 12:58:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDCuV-0003ro-5a
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 12:57:59 +0000
Received: from [85.158.143.35:21087] by server-1.bemta-4.messagelabs.com id
	07/0A-31661-65E1AF25; Tue, 11 Feb 2014 12:57:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392123477!4821761!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13983 invoked from network); 11 Feb 2014 12:57:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 12:57:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 12:57:52 +0000
Message-Id: <52FA2C63020000780011B201@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 12:57:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
In-Reply-To: <20140211115553.GB97288@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 12:55, Tim Deegan <tim@xen.org> wrote:
> At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
>> What I'm missing here is what you think a proper solution is.
> 
> A _proper_ solution would be for the IOMMU h/w to allow restartable
> faults, so that we can do all the usual fault-driven virtual memory
> operations with DMA. :)  In the meantime...

Or maintaining the A/D bits for IOMMU side accesses too.

>>  It seems we have:
>> 
>> A. Share EPT/IOMMU tables, only do log-dirty tracking on the buffer
>> being tracked, and hope the guest doesn't DMA into video ram; DMA
>> causes IOMMU fault. (This really shouldn't crash the host under normal
>> circumstances; if it does it's a hardware bug.)
> 
> Note "hope" and "shouldn't" there. :)
> 
>> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA into
>> video ram.  DMA causes missed update to dirty bitmap, which will
>> hopefully just cause screen corruption.
> 
> Yep.  At a cost of about 0.2% in space and some extra bookkeeping
> (for VMs that actually have devices passed through to them).
> The extra bookkeeping could be expensive in some cases, but basically
> all of those cases are already incompatible with IOMMU.
> 
>> C. Do buffer scanning rather than dirty vram tracking (SLOW)
>> D. Don't allow both a virtual video card and pass-through
> 
> E. Share EPT and IOMMU tables until someone turns on log-dirty mode
> and then split them out.  That one 

Wouldn't that be problematic in terms of memory being available,
namely when using ballooning in Dom0?

>> Given that most operating systems will probably *not* DMA into video
>> ram, and that an IOMMU fault isn't *supposed* to be able to crash the
>> host, 'A' seems like the most reasonable option to me.
> 
> Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
> seems to have most support from other people.  On that basis this
> patch can have my Ack.

I too would consider B better than A.

Jan
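Tim's ~0.2% space figure for option B can be sanity-checked with a back-of-the-envelope calculation: an unshared set of IOMMU page tables costs roughly one extra 8-byte leaf PTE per 4 KiB of guest memory. The layout assumed here is for illustration only, not Xen's precise accounting.

```python
# Rough cost of keeping a second (unshared) set of page tables:
# one 8-byte 64-bit PTE per 4 KiB page mapped at the leaf level.
PTE_SIZE = 8           # bytes per page-table entry
PAGE_SIZE = 4096       # bytes per mapped page

leaf_overhead = PTE_SIZE / PAGE_SIZE   # ~0.00195, i.e. just under 0.2%

# Non-leaf levels each add ~1/512 of the level below (512 entries per
# 4 KiB table), so they barely move the total.
total_overhead = leaf_overhead * (1 + 1/512 + 1/512**2)

print(f"{total_overhead:.3%}")         # roughly 0.2%, matching the estimate
```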


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 12:59:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 12:59:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCw0-0004OW-Uy; Tue, 11 Feb 2014 12:59:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDCw0-0004OR-17
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 12:59:32 +0000
Received: from [85.158.143.35:19091] by server-1.bemta-4.messagelabs.com id
	CE/AC-31661-3BE1AF25; Tue, 11 Feb 2014 12:59:31 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392123570!4812281!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18716 invoked from network); 11 Feb 2014 12:59:30 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 12:59:30 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDCvw-0002Cf-M3; Tue, 11 Feb 2014 12:59:28 +0000
Date: Tue, 11 Feb 2014 13:59:28 +0100
From: Tim Deegan <tim@xen.org>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140211125928.GE97288@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA1945.8010400@linaro.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 12:36 +0000 on 11 Feb (1392118581), Julien Grall wrote:
> 
> 
> On 11/02/14 12:35, Tim Deegan wrote:
> > At 12:30 +0000 on 11 Feb (1392118227), Julien Grall wrote:
> >>
> >>
> >> On 11/02/14 08:53, Tim Deegan wrote:
> >>> At 23:29 +0000 on 10 Feb (1392071374), Julien Grall wrote:
> >>>> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
> >>>> compilation with clang:
> >>>>
> >>>> In file included from sched_sedf.c:8:
> >>>> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
> >>>> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
> >>>> not found with <angled> include; use "quotes" instead
> >>>>              ^~~~~~~~~~
> >>>>              "stdarg.h"
> >>>
> >>> Looks like on your system stdarg.h doesn't live in a compiler-specific
> >>> path, like we have for the BSDs.  I think we should just go to using
> >>> our own definitions for stdarg/stdbool everywhere; trying to chase the
> >>> compiler-specific versions around is a PITA, and the pieces we
> >>> actually need are trivial.
> >>
> >> For BSDs, we are using our own stdargs/stdbool.  So we don't include the
> >> system <stdarg.h>.
> >>
> >> Linux is using $(CC) -print-file-name=include to get the right path. It
> >> works with both gcc and clang on Linux distros, but not on FreeBSD.
> >
> > Wait - is the error message you posted from clang on FreeBSD?
> > That's surprising; on FreeBSD xen/stdarg.h shouldn't be trying to
> > include <stdarg.h> at all.  Is __FreeBSD__ not being defined?
> 
> 
> No it's from Linux (Debian Wheezy and Fedora). I just gave a try to the 
> "-print-file-name" solution on FreeBSD.

Oh, OK.  Yeah, we knew that didn't work there, because on *BSD the
compiler-specific headers like stdarg.h actually live in /usr/include
and can themselves include other system headers.  That's why we have
our own implementation of the bits we need, that we use on BSD.

Are you using a very old version of clang?  As 06a9c7e points out,
our current runes didn't work before clang v3.0.

If not, rather than chasing this around any further, I think we should
abandon trying to use the compiler-provided headers even on linux.
Does this patch fix your build issue?

commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
Author: Tim Deegan <tim@xen.org>
Date:   Tue Feb 11 12:44:09 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>

diff --git a/xen/include/xen/stdarg.h b/xen/include/xen/stdarg.h
index d1b2540..0283f06 100644
--- a/xen/include/xen/stdarg.h
+++ b/xen/include/xen/stdarg.h
@@ -1,23 +1,21 @@
 #ifndef __XEN_STDARG_H__
 #define __XEN_STDARG_H__
 
-#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
-   typedef __builtin_va_list va_list;
-#  ifdef __GNUC__
-#    define __GNUC_PREREQ__(x, y)                                       \
-        ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
-         (__GNUC__ > (x)))
-#  else
-#    define __GNUC_PREREQ__(x, y)   0
-#  endif
-#  if !__GNUC_PREREQ__(4, 5)
-#    define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
-#  endif
-#  define va_start(ap, last)    __builtin_va_start((ap), (last))
-#  define va_end(ap)            __builtin_va_end(ap)
-#  define va_arg                __builtin_va_arg
+#ifdef __GNUC__
+#  define __GNUC_PREREQ__(x, y)                                       \
+      ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
+       (__GNUC__ > (x)))
 #else
-#  include <stdarg.h>
+#  define __GNUC_PREREQ__(x, y)   0
 #endif
 
+#if !__GNUC_PREREQ__(4, 5)
+#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
+#endif
+
+typedef __builtin_va_list va_list;
+#define va_start(ap, last)    __builtin_va_start((ap), (last))
+#define va_end(ap)            __builtin_va_end(ap)
+#define va_arg                __builtin_va_arg
+
 #endif /* __XEN_STDARG_H__ */
diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
index f0faedf..b0947a6 100644
--- a/xen/include/xen/stdbool.h
+++ b/xen/include/xen/stdbool.h
@@ -1,13 +1,9 @@
 #ifndef __XEN_STDBOOL_H__
 #define __XEN_STDBOOL_H__
 
-#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
-#  define bool _Bool
-#  define true 1
-#  define false 0
-#  define __bool_true_false_are_defined   1
-#else
-#  include <stdbool.h>
-#endif
+#define bool _Bool
+#define true 1
+#define false 0
+#define __bool_true_false_are_defined   1
 
 #endif /* __XEN_STDBOOL_H__ */
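As a quick sanity check, the builtin-backed macros the patch above installs can be exercised standalone. This sketch inlines the same definitions rather than including the Xen headers, and assumes a GCC-compatible compiler (gcc or clang), where __builtin_va_list and friends are always available without any system <stdarg.h>.

```c
/* Standalone check that the builtin-based stdarg macros behave like the
 * usual stdarg interface.  Definitions copied from the patch above. */
#include <assert.h>

typedef __builtin_va_list va_list;
#define va_start(ap, last)  __builtin_va_start((ap), (last))
#define va_end(ap)          __builtin_va_end(ap)
#define va_arg              __builtin_va_arg

/* Sum 'count' int arguments using only the compiler builtins. */
static int sum_ints(int count, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);

    return total;
}
```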

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:02:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCyP-0004az-9t; Tue, 11 Feb 2014 13:02:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDCyN-0004ah-G9
	for Xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:01:59 +0000
Received: from [85.158.139.211:37870] by server-2.bemta-5.messagelabs.com id
	1B/5C-23037-64F1AF25; Tue, 11 Feb 2014 13:01:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392123716!3124773!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21771 invoked from network); 11 Feb 2014 13:01:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:01:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99768723"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:01:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:01:42 -0500
Message-ID: <1392123700.26657.112.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Peter X. Gao" <peterxianggao@gmail.com>
Date: Tue, 11 Feb 2014 13:01:40 +0000
In-Reply-To: <CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
	<20140210104223.GN15387@zion.uk.xensource.com>
	<52F8B5C3.1020308@m2r.biz>
	<CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Xen-devel@lists.xenproject.org, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Wei Liu <wei.liu2@citrix.com>, xen-users@lists.xen.org
Subject: Re: [Xen-devel] Virtio on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
> Thanks for your reply. I am now using virtio-net and it seems working.
> However, Intel DPDK also requires hugepage. When a DPDK application is
> initiating hugepage, I got the following error. Do I need to config
> something in Xen to support hugepage?

I'm not sure about the status of superpage support in mainline kernels
for PV Xen guests. IIRC there was a requirement to add a Xen command
line flag to enable it at the hypervisor level.

Or you could just use an HVM guest, since no special support is needed
for hugepages there.

But maybe I'm confused, because I think your use of virtio-net would
necessarily require that you be using an HVM guest, not a PV one.

But then looking at your logs I see Xen PV block and net but no sign of
virtio -- so I suspect you are actually doing PV and not using
virtio-net at all.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:03:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDCzQ-0004j1-Pr; Tue, 11 Feb 2014 13:03:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDCzP-0004ij-3d
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:03:03 +0000
Received: from [193.109.254.147:37978] by server-4.bemta-14.messagelabs.com id
	E5/8E-32066-68F1AF25; Tue, 11 Feb 2014 13:03:02 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392123780!3541997!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 993 invoked from network); 11 Feb 2014 13:03:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:03:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99769240"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:03:00 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:02:59 -0500
Message-ID: <52FA1F82.10307@citrix.com>
Date: Tue, 11 Feb 2014 14:02:58 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
	<1392122996.26657.107.camel@kazak.uk.xensource.com>
In-Reply-To: <1392122996.26657.107.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xenproject.org,
	Don Slutz <dslutz@verizon.com>, Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 13:49, Ian Campbell wrote:
> On Tue, 2014-02-11 at 11:38 +0100, Roger Pau Monne wrote:
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> Cc: Don Slutz <dslutz@verizon.com>
>> Cc: Ian Campbell <ian.campbell@citrix.com>
>> Cc: Ian Jackson <ian.jackson@citrix.com>
>> ---
>> I've removed Don's "Tested-by" because the patch has changed, could
>> you please re-test it Don?
>>
>> Please run autogen.sh after applying.
> 
> Is this intended for 4.4? If Yes then you need to make the case to
> George.
> 
> Or shall we simply release note the minimum requirement and enforce it
> for 4.5? 
> 
> I'm probably unduly nervous about changing autoconf stuff...
> 
> I've applied Don's CAMLreturnT fix BTW.

IMHO I think it can wait until 4.5 and then backport it to previous
releases as needed.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:04:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:04:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD0a-0004se-BY; Tue, 11 Feb 2014 13:04:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDD0Y-0004sM-39
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:04:14 +0000
Received: from [85.158.137.68:2397] by server-11.bemta-3.messagelabs.com id
	31/17-04255-DCF1AF25; Tue, 11 Feb 2014 13:04:13 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392123850!1095943!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20677 invoked from network); 11 Feb 2014 13:04:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:04:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99769590"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:04:10 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:04:09 -0500
Message-ID: <52FA1FC8.7010104@citrix.com>
Date: Tue, 11 Feb 2014 13:04:08 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
In-Reply-To: <52FA043B020000780011B10C@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 10:06, Jan Beulich wrote:
>>>> On 10.02.14 at 18:20, David Vrabel <david.vrabel@citrix.com> wrote:
>> Fields
>> ------
>>
>> All the fields within the headers and records have a fixed width.
>>
>> Fields are always aligned to their size.
>>
>> Padding and reserved fields are set to zero on save and must be
>> ignored during restore.
> 
> Meaning it would be impossible to assign a meaning to these fields
> later. I'd rather mandate that the restore side has to check these
> fields are zero, and bail if they aren't.

Reserved fields/bits can be used but it would require a new record type
and a bump of the format version.

I was aiming to minimize the number of ways the format can be extended.

>> Integer (numeric) fields in the image header are always in big-endian
>> byte order.
> 
> Why would big endian be preferable when both currently
> supported architectures use little endian?

Mostly to encourage tools to pay attention to byte order rather than
assuming native and getting away with it...

>> Domain Header
>> -------------
>>
>> The domain header includes general properties of the domain.
>>
>>      0      1     2     3     4     5     6     7 octet
>>     +-----------+-----------+-----------+-------------+
>>     | arch      | type      | page_shift| (reserved)  |
>>     +-----------+-----------+-----------+-------------+
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> arch        0x0000: Reserved.
>>
>>             0x0001: x86.
>>
>>             0x0002: ARM.
>>
>> type        0x0000: Reserved.
>>
>>             0x0001: x86 PV.
>>
>>             0x0002 - 0xFFFF: Reserved.
> 
> So how would ARM, x86 HVM, and x86 PVH be expressed?

Something like:

  0x0001: x86 PV.
  0x0002: x86 HVM.
  0x0003: x86 PVH.
  0x0004: ARM.

Which does make the arch field a bit redundant, I suppose.
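Purely as an illustration (the two-octet field widths are an assumption read off the 8-octet diagram above, and the combined type values are the ones just suggested, not a settled spec), the header might be decoded like:

```c
/* Hypothetical decoder for the draft domain header discussed above.
 * Assumptions: each field is two octets wide (read off the 8-octet
 * diagram), and the draft's big-endian rule for integer fields
 * applies here too.  Type values are the combined ones suggested in
 * this mail, not anything final. */
#include <stdint.h>

struct domain_header {
    uint16_t arch;        /* 0x0001: x86, 0x0002: ARM */
    uint16_t type;        /* 0x0001: PV, 0x0002: HVM, 0x0003: PVH, 0x0004: ARM */
    uint16_t page_shift;  /* e.g. 12 for 4 KiB pages */
    uint16_t reserved;    /* zero on save */
};

/* Read a big-endian 16-bit field regardless of host byte order. */
static uint16_t be16_get(const unsigned char *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

static struct domain_header parse_domain_header(const unsigned char buf[8])
{
    struct domain_header h;

    h.arch       = be16_get(buf + 0);
    h.type       = be16_get(buf + 2);
    h.page_shift = be16_get(buf + 4);
    h.reserved   = be16_get(buf + 6);
    return h;
}
```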

>> P2M
>> ---
>>
>> [ This is a more flexible replacement for the old p2m_size field and
>> p2m array. ]
>>
>> The P2M record contains a portion of the source domain's P2M.
>> Multiple P2M records may be sent if the source P2M changes during the
>> stream.
>>
>>      0     1     2     3     4     5     6     7 octet
>>     +-------------------------------------------------+
>>     | pfn_begin                                       |
>>     +-------------------------------------------------+
>>     | pfn_end                                         |
>>     +-------------------------------------------------+
>>     | mfn[0]                                          |
>>     +-------------------------------------------------+
>>     ...
>>     +-------------------------------------------------+
>>     | mfn[N-1]                                        |
>>     +-------------------------------------------------+
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> pfn_begin   The first PFN in this portion of the P2M
>>
>> pfn_end     One past the last PFN in this portion of the P2M.
> 
> I'd favor an inclusive range here, such that if we ever reach a
> fully populatable 64-bit PFN space (on some future architecture)
> there'd still be no issue with special casing the then unavoidable
> wraparound.

Ok, but a 64-bit PFN space would suggest 76 bits of address space, which
seems somewhat far off.  Is that something we want to consider now?

>> Legacy Images (x86 only)
>> ========================
>>
>> Restoring legacy images from older tools shall be handled by
>> translating the legacy format image into this new format.
>>
>> It shall not be possible to save in the legacy format.
>>
>> There are two different legacy images depending on whether they were
>> generated by a 32-bit or a 64-bit toolstack. These shall be
>> distinguished by inspecting octets 4-7 in the image.  If these are
>> zero then it is a 64-bit image.
>>
>> Toolstack  Field                            Value
>> ---------  -----                            -----
>> 64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
> 
> Afaics this is being determined via xc_domain_maximum_gpfn(),
> which I don't think guarantees the result to be limited to 2^32.
> Or in fact the libxc interface wrongly limits the value (by
> truncating the "long" returned from the hypercall to an "int"). So
> in practice consistent images would have the field limited to 2^31
> on 64-bit tool stacks (since for larger values the negative function
> return value would get converted by sign-extension, but all sorts
> of other trouble would result due to the now huge p2m_size).

For the handling of legacy images I think we need to only consider
images that could have been practically generated by older tools.
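For illustration only (this is just the rule quoted from the draft, not the actual libxc code), the 32-bit/64-bit discrimination could look like:

```c
/* Sketch of the legacy-image width heuristic quoted above: a legacy
 * image starts with p2m_size in the toolstack's native word size, so
 * on a little-endian 64-bit toolstack octets 4-7 hold the high half
 * of p2m_size and are zero for any consistently generated image.
 * Illustrative only; not the real libxc code. */
#include <stddef.h>

static int legacy_image_from_64bit_toolstack(const unsigned char *img,
                                             size_t len)
{
    if (len < 8)
        return 0;   /* too short to inspect octets 4-7 */
    return (img[4] | img[5] | img[6] | img[7]) == 0;
}
```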

>> Future Extensions
>> =================
>>
>> All changes to this format require the image version to be increased.
> 
> Oh, okay, this partly deals with the first question above. Question
> is whether that's a useful requirement, i.e. whether that wouldn't
> lead to an inflation of versions needing conversion (for a tool stack
> that wants to support more than just migration from N-1).

Only legacy images would be converted to the newest format.  I would
expect version V-1 images would be handled by (mostly) the same code as
V images.  Particularly if V is V-1 with extra record types.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>     +-------------------------------------------------+
>>
>> --------------------------------------------------------------------
>> Field       Description
>> ----------- --------------------------------------------------------
>> pfn_begin   The first PFN in this portion of the P2M
>>
>> pfn_end     One past the last PFN in this portion of the P2M.
> 
> I'd favor an inclusive range here, such that if we ever reach a
> fully populatable 64-bit PFN space (on some future architecture)
> there'd still be no issue with special casing the then unavoidable
> wraparound.

Ok, but a 64-bit PFN space would imply 76 bits of address space (with
4 KiB pages), which seems somewhat far off.  Is that something we want
to consider now?

>> Legacy Images (x86 only)
>> ========================
>>
>> Restoring legacy images from older tools shall be handled by
>> translating the legacy format image into this new format.
>>
>> It shall not be possible to save in the legacy format.
>>
>> There are two different legacy images depending on whether they were
>> generated by a 32-bit or a 64-bit toolstack. These shall be
>> distinguished by inspecting octets 4-7 in the image.  If these are
>> zero then it is a 64-bit image.
>>
>> Toolstack  Field                            Value
>> ---------  -----                            -----
>> 64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
> 
> Afaics this is being determined via xc_domain_maximum_gpfn(),
> which I don't think guarantees the result to be limited to 2^32.
> Or in fact the libxc interface wrongly limits the value (by
> truncating the "long" returned from the hypercall to an "int"). So
> in practice consistent images would have the field limited to 2^31
> on 64-bit tool stacks (since for larger values the negative function
> return value would get converted by sign-extension, but all sorts
> of other trouble would result due to the now huge p2m_size).

For the handling of legacy images, I think we need only consider images
that could practically have been generated by older tools.

>> Future Extensions
>> =================
>>
>> All changes to this format require the image version to be increased.
> 
> Oh, okay, this partly deals with the first question above. Question
> is whether that's a useful requirement, i.e. whether that wouldn't
> lead to an inflation of versions needing conversion (for a tool stack
> that wants to support more than just migration from N-1).

Only legacy images would be converted to the newest format.  I would
expect version V-1 images to be handled by (mostly) the same code as
version V images, particularly if V is V-1 plus extra record types.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:06:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:06:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD34-00057b-5a; Tue, 11 Feb 2014 13:06:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDD32-00057O-EA
	for Xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:06:48 +0000
Received: from [85.158.137.68:15004] by server-8.bemta-3.messagelabs.com id
	92/CB-16039-6602AF25; Tue, 11 Feb 2014 13:06:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392124003!1110697!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14438 invoked from network); 11 Feb 2014 13:06:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:06:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101584238"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 13:06:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:06:41 -0500
Message-ID: <1392124000.26657.114.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Peter X. Gao" <peterxianggao@gmail.com>
Date: Tue, 11 Feb 2014 13:06:40 +0000
In-Reply-To: <CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
	<20140210104223.GN15387@zion.uk.xensource.com>
	<52F8B5C3.1020308@m2r.biz>
	<CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	David Vrabel <david.vrabel@citrix.com>, Xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [Xen-devel] Application using hugepages crashes PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
> Thanks for your reply. I am now using virtio-net and it seems to be
> working. However, Intel DPDK also requires hugepages. When a DPDK
> application is initializing hugepages, I get the following error. Do I
> need to configure something in Xen to support hugepages?

Retitling the thread and adding the Linux Xen maintainers, since,
regardless of whether it is supported, trying to use hugepages shouldn't
crash the kernel.

That said, this is with 3.2.0 so it may well be fixed already. Are you
able to try a more recent mainline kernel?

> 
> 
> 
> 
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Linux version 3.2.0-58-generic (buildd@allspice) (gcc
> version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubuntu SMP Tue Dec
> 3 17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)
> [    0.000000] Command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
> ip=:127.0.255.255::::eth0:dhcp iommu=soft
> [    0.000000] KERNEL supported cpus:
> [    0.000000]   Intel GenuineIntel
> [    0.000000]   AMD AuthenticAMD
> [    0.000000]   Centaur CentaurHauls
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] Released 0 pages of unused memory
> [    0.000000] Set 0 page(s) to 1-1 mapping
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] No AGP bridge found
> [    0.000000] last_pfn = 0x100800 max_arch_pfn = 0x400000000
> [    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
> [    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
> [    0.000000] init_memory_mapping: 0000000100000000-0000000100800000
> [    0.000000] RAMDISK: 02060000 - 045e3000
> [    0.000000] NUMA turned off
> [    0.000000] Faking a node at 0000000000000000-0000000100800000
> [    0.000000] Initmem setup node 0 0000000000000000-0000000100800000
> [    0.000000]   NODE_DATA [00000000ffff5000 - 00000000ffff9fff]
> [    0.000000] Zone PFN ranges:
> [    0.000000]   DMA      0x00000010 -> 0x00001000
> [    0.000000]   DMA32    0x00001000 -> 0x00100000
> [    0.000000]   Normal   0x00100000 -> 0x00100800
> [    0.000000] Movable zone start PFN for each node
> [    0.000000] early_node_map[2] active PFN ranges
> [    0.000000]     0: 0x00000010 -> 0x000000a0
> [    0.000000]     0: 0x00000100 -> 0x00100800
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] SMP: Allowing 8 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] PM: Registered nosave memory: 00000000000a0000 -
> 0000000000100000
> [    0.000000] PCI: Warning: Cannot find a gap in the 32bit address
> range
> [    0.000000] PCI: Unassigned devices with 32bit resource registers
> may break!
> [    0.000000] Allocating PCI resources starting at 100900000 (gap:
> 100900000:400000)
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.2.1 (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256
> nr_cpu_ids:8 nr_node_ids:1
> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136
> r8192 d23360 u262144
> [    0.000000] Built 1 zonelists in Node order, mobility grouping on.
>  Total pages: 1032084
> [    0.000000] Policy zone: Normal
> [    0.000000] Kernel command line: root=/dev/xvda2 ro root=/dev/xvda2
> ro ip=:127.0.255.255::::eth0:dhcp iommu=soft
> [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
> [    0.000000] Placing 64MB software IO TLB between ffff8800f7400000 -
> ffff8800fb400000
> [    0.000000] software IO TLB at phys 0xf7400000 - 0xfb400000
> [    0.000000] Memory: 3988436k/4202496k available (6588k kernel code,
> 448k absent, 213612k reserved, 6617k data, 924k init)
> [    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
> CPUs=8, Nodes=1
> [    0.000000] Hierarchical RCU implementation.
> [    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
> [    0.000000] NR_IRQS:16640 nr_irqs:336 16
> [    0.000000] Console: colour dummy device 80x25
> [    0.000000] console [tty0] enabled
> [    0.000000] console [hvc0] enabled
> [    0.000000] allocated 34603008 bytes of page_cgroup
> [    0.000000] please try 'cgroup_disable=memory' option if you don't
> want memory cgroups
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] Detected 2793.098 MHz processor.
> [    0.004000] Calibrating delay loop (skipped), value calculated
> using timer frequency.. 5586.19 BogoMIPS (lpj=11172392)
> [    0.004000] pid_max: default: 32768 minimum: 301
> [    0.004000] Security Framework initialized
> [    0.004000] AppArmor: AppArmor initialized
> [    0.004000] Yama: becoming mindful.
> [    0.004000] Dentry cache hash table entries: 524288 (order: 10,
> 4194304 bytes)
> [    0.004000] Inode-cache hash table entries: 262144 (order: 9,
> 2097152 bytes)
> [    0.004000] Mount-cache hash table entries: 256
> [    0.004000] Initializing cgroup subsys cpuacct
> [    0.004000] Initializing cgroup subsys memory
> [    0.004000] Initializing cgroup subsys devices
> [    0.004000] Initializing cgroup subsys freezer
> [    0.004000] Initializing cgroup subsys blkio
> [    0.004000] Initializing cgroup subsys perf_event
> [    0.004000] CPU: Physical Processor ID: 0
> [    0.004000] CPU: Processor Core ID: 0
> [    0.004000] SMP alternatives: switching to UP code
> [    0.031040] ftrace: allocating 26602 entries in 105 pages
> [    0.032055] cpu 0 spinlock event irq 17
> [    0.032115] Performance Events: unsupported p6 CPU model 26 no PMU
> driver, software events only.
> [    0.032244] NMI watchdog disabled (cpu0): hardware events not
> enabled
> [    0.032350] installing Xen timer for CPU 1
> [    0.032363] cpu 1 spinlock event irq 23
> [    0.032623] SMP alternatives: switching to SMP code
> [    0.057953] NMI watchdog disabled (cpu1): hardware events not
> enabled
> [    0.058085] installing Xen timer for CPU 2
> [    0.058103] cpu 2 spinlock event irq 29
> [    0.058542] NMI watchdog disabled (cpu2): hardware events not
> enabled
> [    0.058696] installing Xen timer for CPU 3
> [    0.058724] cpu 3 spinlock event irq 35
> [    0.059115] NMI watchdog disabled (cpu3): hardware events not
> enabled
> [    0.059227] installing Xen timer for CPU 4
> [    0.059246] cpu 4 spinlock event irq 41
> [    0.059423] NMI watchdog disabled (cpu4): hardware events not
> enabled
> [    0.059544] installing Xen timer for CPU 5
> [    0.059562] cpu 5 spinlock event irq 47
> [    0.059724] NMI watchdog disabled (cpu5): hardware events not
> enabled
> [    0.059833] installing Xen timer for CPU 6
> [    0.059852] cpu 6 spinlock event irq 53
> [    0.060003] NMI watchdog disabled (cpu6): hardware events not
> enabled
> [    0.060037] installing Xen timer for CPU 7
> [    0.060056] cpu 7 spinlock event irq 59
> [    0.060209] NMI watchdog disabled (cpu7): hardware events not
> enabled
> [    0.060243] Brought up 8 CPUs
> [    0.060494] devtmpfs: initialized
> [    0.061531] EVM: security.selinux
> [    0.061537] EVM: security.SMACK64
> [    0.061542] EVM: security.capability
> [    0.061711] Grant table initialized
> [    0.061711] print_constraints: dummy: 
> [    0.083057] RTC time: 165:165:165, date: 165/165/65
> [    0.083093] NET: Registered protocol family 16
> [    0.083159] Trying to unpack rootfs image as initramfs...
> [    0.084665] PCI: setting up Xen PCI frontend stub
> [    0.086003] bio: create slab <bio-0> at 0
> [    0.086003] ACPI: Interpreter disabled.
> [    0.086003] xen/balloon: Initialising balloon driver.
> [    0.088136] xen-balloon: Initialising balloon driver.
> [    0.088139] vgaarb: loaded
> [    0.088184] i2c-core: driver [aat2870] using legacy suspend method
> [    0.088192] i2c-core: driver [aat2870] using legacy resume method
> [    0.088283] SCSI subsystem initialized
> [    0.088341] usbcore: registered new interface driver usbfs
> [    0.088341] usbcore: registered new interface driver hub
> [    0.088341] usbcore: registered new device driver usb
> [    0.088341] PCI: System does not support PCI
> [    0.088341] PCI: System does not support PCI
> [    0.088341] NetLabel: Initializing
> [    0.088341] NetLabel:  domain hash size = 128
> [    0.184026] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.184051] NetLabel:  unlabeled traffic allowed by default
> [    0.184159] Switching to clocksource xen
> [    0.188203] Freeing initrd memory: 38412k freed
> [    0.202280] AppArmor: AppArmor Filesystem Enabled
> [    0.202308] pnp: PnP ACPI: disabled
> [    0.205341] NET: Registered protocol family 2
> [    0.205661] IP route cache hash table entries: 131072 (order: 8,
> 1048576 bytes)
> [    0.207989] TCP established hash table entries: 524288 (order: 11,
> 8388608 bytes)
> [    0.209497] TCP bind hash table entries: 65536 (order: 8, 1048576
> bytes)
> [    0.209644] TCP: Hash tables configured (established 524288 bind
> 65536)
> [    0.209650] TCP reno registered
> [    0.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.209704] UDP-Lite hash table entries: 2048 (order: 4, 65536
> bytes)
> [    0.209817] NET: Registered protocol family 1
> [    0.210139] platform rtc_cmos: registered platform RTC device (no
> PNP device found)
> [    0.211002] audit: initializing netlink socket (disabled)
> [    0.211015] type=2000 audit(1392055157.599:1): initialized
> [    0.229178] HugeTLB registered 2 MB page size, pre-allocated 0
> pages
> [    0.230818] VFS: Disk quotas dquot_6.5.2
> [    0.230873] Dquot-cache hash table entries: 512 (order 0, 4096
> bytes)
> [    0.231462] fuse init (API version 7.17)
> [    0.231605] msgmni has been set to 7864
> [    0.232267] Block layer SCSI generic (bsg) driver version 0.4
> loaded (major 253)
> [    0.232382] io scheduler noop registered
> [    0.232417] io scheduler deadline registered
> [    0.232449] io scheduler cfq registered (default)
> [    0.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.232529] pciehp: PCI Express Hot Plug Controller Driver version:
> 0.4
> [    0.233195] Serial: 8250/16550 driver, 32 ports, IRQ sharing
> enabled
> [    0.437179] Linux agpgart interface v0.103
> [    0.439329] brd: module loaded
> [    0.440557] loop: module loaded
> [    0.442439] blkfront device/vbd/51714 num-ring-pages 1 nr_ents 32.
> [    0.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents 32.
> [    0.447233] blkfront: xvda2: flush diskcache: enabled
> [    0.447810] Fixed MDIO Bus: probed
> [    0.447856] tun: Universal TUN/TAP device driver, 1.6
> [    0.447864] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.447945] PPP generic driver version 2.4.2
> [    0.448029] Initialising Xen virtual ethernet driver.
> [    0.453923] blkfront: xvda1: flush diskcache: enabled
> [    0.455000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI)
> Driver
> [    0.455031] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.455048] uhci_hcd: USB Universal Host Controller Interface
> driver
> [    0.455100] usbcore: registered new interface driver libusual
> [    0.455134] i8042: PNP: No PS/2 controller found. Probing ports
> directly.
> [    1.455791] i8042: No controller found
> [    1.456071] mousedev: PS/2 mouse device common for all mice
> [    1.496241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as
> rtc0
> [    1.496489] rtc_cmos: probe of rtc_cmos failed with error -38
> [    1.496624] device-mapper: uevent: version 1.0.3
> ...............
> ...............
> ...............
> ...............
> 
> 
> 
> 
> [  135.957086] BUG: unable to handle kernel paging request at
> ffff8800f36c0960
> [  135.957105] IP: [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
> [  135.957122] PGD 1c06067 PUD dd1067 PMD f6d067 PTE 80100000f36c0065
> [  135.957134] Oops: 0003 [#1] SMP 
> [  135.957141] CPU 0 
> [  135.957144] Modules linked in: igb_uio(O) uio
> [  135.957155] 
> [  135.957160] Pid: 659, comm: helloworld Tainted: G           O
> 3.2.0-58-generic #88-Ubuntu  
> [  135.957171] RIP: e030:[<ffffffff81008efe>]  [<ffffffff81008efe>]
> xen_set_pte_at+0x3e/0x210
> [  135.957183] RSP: e02b:ffff8800037ddc88  EFLAGS: 00010297
> [  135.957189] RAX: 0000000000000000 RBX: 800000008c6000e7 RCX:
> 800000008c6000e7
> [  135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI:
> ffff880003044980
> [  135.957205] RBP: ffff8800037ddcd8 R08: 0000000000000000 R09:
> dead000000100100
> [  135.957212] R10: dead000000200200 R11: 00007f4a64f7e02a R12:
> ffffea0003c48000
> [  135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15:
> 0000000000000001
> [  135.957232] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> [  135.957241] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4:
> 0000000000002660
> [  135.957255] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  135.957263] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  135.957271] Process helloworld (pid: 659, threadinfo
> ffff8800037dc000, task ffff8800034c1700)
> [  135.957279] Stack:
> [  135.957283]  00007f4a65800000 ffff880003044980 dead000000200200
> dead000000100100
> [  135.957297]  0000000000000000 0000000000000000 ffffea0003c48000
> 800000008c6000e7
> [  135.957310]  ffff8800030449ec 0000000000000001 ffff8800037ddd68
> ffffffff81158453
> [  135.957322] Call Trace:
> [  135.957333]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  135.957342]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  135.957351]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  135.957361]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  135.957370]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  135.957378]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  135.957391]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  135.957399]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  135.957408]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  135.957417]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48 89
> 7d b8 48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83
> f8 01 74 75 <49> 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0
> 4c 8b 
> [  135.957507] RIP  [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
> [  135.957517]  RSP <ffff8800037ddc88>
> [  135.957521] CR2: ffff8800f36c0960
> [  135.957528] ---[ end trace f6a013072f2aee83 ]---
> [  160.032062] BUG: soft lockup - CPU#0 stuck for 23s!
> [helloworld:659]
> [  160.032129] Modules linked in: igb_uio(O) uio
> [  160.032140] CPU 0 
> [  160.032143] Modules linked in: igb_uio(O) uio
> [  160.032153] 
> [  160.032159] Pid: 659, comm: helloworld Tainted: G      D    O
> 3.2.0-58-generic #88-Ubuntu  
> [  160.032170] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>]
> hypercall_page+0x3aa/0x1000
> [  160.032190] RSP: e02b:ffff8800037dd730  EFLAGS: 00000202
> [  160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
> ffffffff810013aa
> [  160.032204] RDX: 0000000000000000 RSI: ffff8800037dd748 RDI:
> 0000000000000003
> [  160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09:
> ffff8800f6c000a0
> [  160.032220] R10: 0000000000007ff0 R11: 0000000000000202 R12:
> 0000000000000011
> [  160.032227] R13: 0000000000000201 R14: ffff880003044901 R15:
> ffff880003044900
> [  160.032239] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> [  160.032248] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  160.032255] CR2: ffff8800f36c0960 CR3: 0000000001c05000 CR4:
> 0000000000002660
> [  160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  160.032271] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  160.032279] Process helloworld (pid: 659, threadinfo
> ffff8800037dc000, task ffff8800034c1700)
> [  160.032287] Stack:
> [  160.032291]  0000000000000011 00000000fffffffa ffffffff813adade
> ffff8800037dd764
> [  160.032304]  ffffffff00000001 0000000000000000 00000004813ad17e
> ffff8800037dd778
> [  160.032317]  ffff8800030449ec ffff8800037dd788 ffffffff813af5e0
> ffff8800037dd7d8
> [  160.032329] Call Trace:
> [  160.032341]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
> [  160.032350]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
> [  160.032360]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
> [  160.032370]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
> [  160.032381]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
> [  160.032390]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
> [  160.032400]  [<ffffffff81146408>] exit_mmap+0x58/0x140
> [  160.032408]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
> [  160.032416]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
> [  160.032425]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032433]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032442]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032452]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
> [  160.032460]  [<ffffffff81065f39>] mmput+0x29/0x30
> [  160.032470]  [<ffffffff8106c943>] exit_mm+0x113/0x130
> [  160.032479]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
> [  160.032488]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
> [  160.032496]  [<ffffffff8106cace>] do_exit+0x16e/0x450
> [  160.032504]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
> [  160.032513]  [<ffffffff8164812f>] no_context+0x150/0x15d
> [  160.032520]  [<ffffffff81648307>] __bad_area_nosemaphore
> +0x1cb/0x1ea
> [  160.032529]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
> [  160.032537]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
> [  160.032545]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
> [  160.032555]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
> [  160.032564]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init
> +0x48/0x160
> [  160.032575]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
> [  160.032588]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
> [  160.032597]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.032605]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
> [  160.032613]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
> [  160.032622]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  160.032630]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  160.032638]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  160.032648]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  160.032656]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  160.032664]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  160.032673]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  160.032681]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  160.032689]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  160.032697]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc
> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00
> 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
> cc cc 
> [  160.032781] Call Trace:
> [  160.032787]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
> [  160.032795]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
> [  160.032803]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
> [  160.032811]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
> [  160.032818]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
> [  160.032826]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
> [  160.032833]  [<ffffffff81146408>] exit_mmap+0x58/0x140
> [  160.032841]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
> [  160.032849]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
> [  160.032857]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032866]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032874]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032882]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
> [  160.032889]  [<ffffffff81065f39>] mmput+0x29/0x30
> [  160.032896]  [<ffffffff8106c943>] exit_mm+0x113/0x130
> [  160.032904]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
> [  160.032912]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
> [  160.032920]  [<ffffffff8106cace>] do_exit+0x16e/0x450
> [  160.032928]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
> [  160.032935]  [<ffffffff8164812f>] no_context+0x150/0x15d
> [  160.032943]  [<ffffffff81648307>] __bad_area_nosemaphore
> +0x1cb/0x1ea
> [  160.032951]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
> [  160.032959]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
> [  160.032967]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
> [  160.032975]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
> [  160.036054]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init
> +0x48/0x160
> [  160.036054]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
> [  160.036054]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
> [  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.036054]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
> [  160.036054]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
> [  160.036054]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  160.036054]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  160.036054]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  160.036054]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  160.036054]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  160.036054]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  160.036054]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  160.036054]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  160.036054]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
> 
> 
> 
> 
> On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <fabio.fantoni@m2r.biz>
> wrote:
>         Il 10/02/2014 11:42, Wei Liu ha scritto:
>         
>                 On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao
>                 wrote:
>                         Hi,
>                         
>                                 I am new to Xen and I am trying to run
>                         Intel DPDK inside a domU with
>                         virtio on Xen 4.2. Is it possible to do this?
>                         
>         
>         
>         Based on my tests of virtio:
>         - virtio-serial seems to work out of the box with Windows
>         domUs (also with the Xen PV drivers). On Linux domUs it also
>         works out of the box with old kernels (tested 2.6.32), but
>         newer kernels (tested >=3.2) require pci=nomsi to work
>         correctly (this also works with the Xen PVHVM drivers). I
>         have not yet found a solution for the MSI problem; there are
>         some posts about it.
>         - virtio-net used to work out of the box but is broken with
>         recent qemu versions due to a qemu regression. I narrowed it
>         down with bisect to one commit between 4 Jul 2013 and 22 Jul
>         2013, but I was unable to find the exact commit because there
>         are other critical problems with Xen in that range.
>         - I have not tested virtio-disk and do not know whether it
>         works with recent Xen and qemu versions.
>         
>         
>                 DPDK doesn't seem to be tightly coupled with VirtIO,
>                 does it?
>                 
>                 Could you look at Xen's PV network protocol instead?
>                 VirtIO has no
>                 mainline support on Xen while Xen's PV protocol has
>                 been in mainline for
>                 years. And it's very likely to be enabled by default
>                 nowadays.
>                 
>                 Wei.
>                 
>                         Regards
>                         Peter
>                         _______________________________________________
>                         Xen-devel mailing list
>                         Xen-devel@lists.xen.org
>                         http://lists.xen.org/xen-devel
>                 
>         
>         
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:06:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:06:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD34-00057b-5a; Tue, 11 Feb 2014 13:06:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDD32-00057O-EA
	for Xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:06:48 +0000
Received: from [85.158.137.68:15004] by server-8.bemta-3.messagelabs.com id
	92/CB-16039-6602AF25; Tue, 11 Feb 2014 13:06:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392124003!1110697!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14438 invoked from network); 11 Feb 2014 13:06:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:06:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101584238"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 13:06:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:06:41 -0500
Message-ID: <1392124000.26657.114.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Peter X. Gao" <peterxianggao@gmail.com>
Date: Tue, 11 Feb 2014 13:06:40 +0000
In-Reply-To: <CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
References: <CAAjP99Yu=tf07FqDNhJm00xid44LzqcvNNmRVLOte2OL02PqRA@mail.gmail.com>
	<20140210104223.GN15387@zion.uk.xensource.com>
	<52F8B5C3.1020308@m2r.biz>
	<CAAjP99Y-xKrhV+jB=H9g4S-MoJCQDn8yMwHX9OjCKpv5bxHCvw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	David Vrabel <david.vrabel@citrix.com>, Xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: [Xen-devel] Application using hugepages crashes PV guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 10:07 -0800, Peter X. Gao wrote:
> Thanks for your reply. I am now using virtio-net and it seems to be
> working. However, Intel DPDK also requires hugepages. When a DPDK
> application initializes its hugepages, I get the following error. Do I
> need to configure something in Xen to support hugepages?

Retitling the thread and adding the Linux Xen maintainers, since,
regardless of whether it is supported, trying to use hugepages shouldn't
crash the kernel.

That said, this is with 3.2.0 so it may well be fixed already. Are you
able to try a more recent mainline kernel?
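[Editor's note: the usual hugepage setup a DPDK guest needs can be sketched as below, assuming a Linux domU. The read-only check is safe anywhere; the mutating steps are commented out because, per this thread, reserving hugepages on an affected 3.2-era PV kernel is exactly what triggers the oops quoted below, so run them only on a recent kernel.]

```shell
#!/bin/sh
# Sketch of conventional hugepage setup inside the guest. Paths are the
# standard kernel interfaces, not anything specific to this thread.

# 1. HugeTLB support: the boot log above shows "HugeTLB registered
#    2 MB page size"; these counters confirm it at runtime.
grep -i huge /proc/meminfo

# 2. Reserve 2 MB pages (root only; may oops an affected PV kernel):
# echo 64 > /proc/sys/vm/nr_hugepages

# 3. Mount hugetlbfs so mmap() can be hugepage-backed:
# mkdir -p /mnt/huge
# mount -t hugetlbfs nodev /mnt/huge
```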

> 
> 
> 
> 
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Linux version 3.2.0-58-generic (buildd@allspice) (gcc
> version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #88-Ubuntu SMP Tue Dec
> 3 17:37:58 UTC 2013 (Ubuntu 3.2.0-58.88-generic 3.2.53)
> [    0.000000] Command line: root=/dev/xvda2 ro root=/dev/xvda2 ro
> ip=:127.0.255.255::::eth0:dhcp iommu=soft
> [    0.000000] KERNEL supported cpus:
> [    0.000000]   Intel GenuineIntel
> [    0.000000]   AMD AuthenticAMD
> [    0.000000]   Centaur CentaurHauls
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] Released 0 pages of unused memory
> [    0.000000] Set 0 page(s) to 1-1 mapping
> [    0.000000] BIOS-provided physical RAM map:
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] No AGP bridge found
> [    0.000000] last_pfn = 0x100800 max_arch_pfn = 0x400000000
> [    0.000000] last_pfn = 0x100000 max_arch_pfn = 0x400000000
> [    0.000000] init_memory_mapping: 0000000000000000-0000000100000000
> [    0.000000] init_memory_mapping: 0000000100000000-0000000100800000
> [    0.000000] RAMDISK: 02060000 - 045e3000
> [    0.000000] NUMA turned off
> [    0.000000] Faking a node at 0000000000000000-0000000100800000
> [    0.000000] Initmem setup node 0 0000000000000000-0000000100800000
> [    0.000000]   NODE_DATA [00000000ffff5000 - 00000000ffff9fff]
> [    0.000000] Zone PFN ranges:
> [    0.000000]   DMA      0x00000010 -> 0x00001000
> [    0.000000]   DMA32    0x00001000 -> 0x00100000
> [    0.000000]   Normal   0x00100000 -> 0x00100800
> [    0.000000] Movable zone start PFN for each node
> [    0.000000] early_node_map[2] active PFN ranges
> [    0.000000]     0: 0x00000010 -> 0x000000a0
> [    0.000000]     0: 0x00000100 -> 0x00100800
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] SMP: Allowing 8 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] PM: Registered nosave memory: 00000000000a0000 -
> 0000000000100000
> [    0.000000] PCI: Warning: Cannot find a gap in the 32bit address
> range
> [    0.000000] PCI: Unassigned devices with 32bit resource registers
> may break!
> [    0.000000] Allocating PCI resources starting at 100900000 (gap:
> 100900000:400000)
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.2.1 (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:256 nr_cpumask_bits:256
> nr_cpu_ids:8 nr_node_ids:1
> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s83136
> r8192 d23360 u262144
> [    0.000000] Built 1 zonelists in Node order, mobility grouping on.
>  Total pages: 1032084
> [    0.000000] Policy zone: Normal
> [    0.000000] Kernel command line: root=/dev/xvda2 ro root=/dev/xvda2
> ro ip=:127.0.255.255::::eth0:dhcp iommu=soft
> [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
> [    0.000000] Placing 64MB software IO TLB between ffff8800f7400000 -
> ffff8800fb400000
> [    0.000000] software IO TLB at phys 0xf7400000 - 0xfb400000
> [    0.000000] Memory: 3988436k/4202496k available (6588k kernel code,
> 448k absent, 213612k reserved, 6617k data, 924k init)
> [    0.000000] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0,
> CPUs=8, Nodes=1
> [    0.000000] Hierarchical RCU implementation.
> [    0.000000] RCU dyntick-idle grace-period acceleration is enabled.
> [    0.000000] NR_IRQS:16640 nr_irqs:336 16
> [    0.000000] Console: colour dummy device 80x25
> [    0.000000] console [tty0] enabled
> [    0.000000] console [hvc0] enabled
> [    0.000000] allocated 34603008 bytes of page_cgroup
> [    0.000000] please try 'cgroup_disable=memory' option if you don't
> want memory cgroups
> [    0.000000] installing Xen timer for CPU 0
> [    0.000000] Detected 2793.098 MHz processor.
> [    0.004000] Calibrating delay loop (skipped), value calculated
> using timer frequency.. 5586.19 BogoMIPS (lpj=11172392)
> [    0.004000] pid_max: default: 32768 minimum: 301
> [    0.004000] Security Framework initialized
> [    0.004000] AppArmor: AppArmor initialized
> [    0.004000] Yama: becoming mindful.
> [    0.004000] Dentry cache hash table entries: 524288 (order: 10,
> 4194304 bytes)
> [    0.004000] Inode-cache hash table entries: 262144 (order: 9,
> 2097152 bytes)
> [    0.004000] Mount-cache hash table entries: 256
> [    0.004000] Initializing cgroup subsys cpuacct
> [    0.004000] Initializing cgroup subsys memory
> [    0.004000] Initializing cgroup subsys devices
> [    0.004000] Initializing cgroup subsys freezer
> [    0.004000] Initializing cgroup subsys blkio
> [    0.004000] Initializing cgroup subsys perf_event
> [    0.004000] CPU: Physical Processor ID: 0
> [    0.004000] CPU: Processor Core ID: 0
> [    0.004000] SMP alternatives: switching to UP code
> [    0.031040] ftrace: allocating 26602 entries in 105 pages
> [    0.032055] cpu 0 spinlock event irq 17
> [    0.032115] Performance Events: unsupported p6 CPU model 26 no PMU
> driver, software events only.
> [    0.032244] NMI watchdog disabled (cpu0): hardware events not
> enabled
> [    0.032350] installing Xen timer for CPU 1
> [    0.032363] cpu 1 spinlock event irq 23
> [    0.032623] SMP alternatives: switching to SMP code
> [    0.057953] NMI watchdog disabled (cpu1): hardware events not
> enabled
> [    0.058085] installing Xen timer for CPU 2
> [    0.058103] cpu 2 spinlock event irq 29
> [    0.058542] NMI watchdog disabled (cpu2): hardware events not
> enabled
> [    0.058696] installing Xen timer for CPU 3
> [    0.058724] cpu 3 spinlock event irq 35
> [    0.059115] NMI watchdog disabled (cpu3): hardware events not
> enabled
> [    0.059227] installing Xen timer for CPU 4
> [    0.059246] cpu 4 spinlock event irq 41
> [    0.059423] NMI watchdog disabled (cpu4): hardware events not
> enabled
> [    0.059544] installing Xen timer for CPU 5
> [    0.059562] cpu 5 spinlock event irq 47
> [    0.059724] NMI watchdog disabled (cpu5): hardware events not
> enabled
> [    0.059833] installing Xen timer for CPU 6
> [    0.059852] cpu 6 spinlock event irq 53
> [    0.060003] NMI watchdog disabled (cpu6): hardware events not
> enabled
> [    0.060037] installing Xen timer for CPU 7
> [    0.060056] cpu 7 spinlock event irq 59
> [    0.060209] NMI watchdog disabled (cpu7): hardware events not
> enabled
> [    0.060243] Brought up 8 CPUs
> [    0.060494] devtmpfs: initialized
> [    0.061531] EVM: security.selinux
> [    0.061537] EVM: security.SMACK64
> [    0.061542] EVM: security.capability
> [    0.061711] Grant table initialized
> [    0.061711] print_constraints: dummy: 
> [    0.083057] RTC time: 165:165:165, date: 165/165/65
> [    0.083093] NET: Registered protocol family 16
> [    0.083159] Trying to unpack rootfs image as initramfs...
> [    0.084665] PCI: setting up Xen PCI frontend stub
> [    0.086003] bio: create slab <bio-0> at 0
> [    0.086003] ACPI: Interpreter disabled.
> [    0.086003] xen/balloon: Initialising balloon driver.
> [    0.088136] xen-balloon: Initialising balloon driver.
> [    0.088139] vgaarb: loaded
> [    0.088184] i2c-core: driver [aat2870] using legacy suspend method
> [    0.088192] i2c-core: driver [aat2870] using legacy resume method
> [    0.088283] SCSI subsystem initialized
> [    0.088341] usbcore: registered new interface driver usbfs
> [    0.088341] usbcore: registered new interface driver hub
> [    0.088341] usbcore: registered new device driver usb
> [    0.088341] PCI: System does not support PCI
> [    0.088341] PCI: System does not support PCI
> [    0.088341] NetLabel: Initializing
> [    0.088341] NetLabel:  domain hash size = 128
> [    0.184026] NetLabel:  protocols = UNLABELED CIPSOv4
> [    0.184051] NetLabel:  unlabeled traffic allowed by default
> [    0.184159] Switching to clocksource xen
> [    0.188203] Freeing initrd memory: 38412k freed
> [    0.202280] AppArmor: AppArmor Filesystem Enabled
> [    0.202308] pnp: PnP ACPI: disabled
> [    0.205341] NET: Registered protocol family 2
> [    0.205661] IP route cache hash table entries: 131072 (order: 8,
> 1048576 bytes)
> [    0.207989] TCP established hash table entries: 524288 (order: 11,
> 8388608 bytes)
> [    0.209497] TCP bind hash table entries: 65536 (order: 8, 1048576
> bytes)
> [    0.209644] TCP: Hash tables configured (established 524288 bind
> 65536)
> [    0.209650] TCP reno registered
> [    0.209674] UDP hash table entries: 2048 (order: 4, 65536 bytes)
> [    0.209704] UDP-Lite hash table entries: 2048 (order: 4, 65536
> bytes)
> [    0.209817] NET: Registered protocol family 1
> [    0.210139] platform rtc_cmos: registered platform RTC device (no
> PNP device found)
> [    0.211002] audit: initializing netlink socket (disabled)
> [    0.211015] type=2000 audit(1392055157.599:1): initialized
> [    0.229178] HugeTLB registered 2 MB page size, pre-allocated 0
> pages
> [    0.230818] VFS: Disk quotas dquot_6.5.2
> [    0.230873] Dquot-cache hash table entries: 512 (order 0, 4096
> bytes)
> [    0.231462] fuse init (API version 7.17)
> [    0.231605] msgmni has been set to 7864
> [    0.232267] Block layer SCSI generic (bsg) driver version 0.4
> loaded (major 253)
> [    0.232382] io scheduler noop registered
> [    0.232417] io scheduler deadline registered
> [    0.232449] io scheduler cfq registered (default)
> [    0.232511] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
> [    0.232529] pciehp: PCI Express Hot Plug Controller Driver version:
> 0.4
> [    0.233195] Serial: 8250/16550 driver, 32 ports, IRQ sharing
> enabled
> [    0.437179] Linux agpgart interface v0.103
> [    0.439329] brd: module loaded
> [    0.440557] loop: module loaded
> [    0.442439] blkfront device/vbd/51714 num-ring-pages 1 nr_ents 32.
> [    0.445706] blkfront device/vbd/51713 num-ring-pages 1 nr_ents 32.
> [    0.447233] blkfront: xvda2: flush diskcache: enabled
> [    0.447810] Fixed MDIO Bus: probed
> [    0.447856] tun: Universal TUN/TAP device driver, 1.6
> [    0.447864] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
> [    0.447945] PPP generic driver version 2.4.2
> [    0.448029] Initialising Xen virtual ethernet driver.
> [    0.453923] blkfront: xvda1: flush diskcache: enabled
> [    0.455000] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI)
> Driver
> [    0.455031] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
> [    0.455048] uhci_hcd: USB Universal Host Controller Interface
> driver
> [    0.455100] usbcore: registered new interface driver libusual
> [    0.455134] i8042: PNP: No PS/2 controller found. Probing ports
> directly.
> [    1.455791] i8042: No controller found
> [    1.456071] mousedev: PS/2 mouse device common for all mice
> [    1.496241] rtc_cmos rtc_cmos: rtc core: registered rtc_cmos as
> rtc0
> [    1.496489] rtc_cmos: probe of rtc_cmos failed with error -38
> [    1.496624] device-mapper: uevent: version 1.0.3
> ...............
> ...............
> ...............
> ...............
> 
> 
> 
> 
> [  135.957086] BUG: unable to handle kernel paging request at
> ffff8800f36c0960
> [  135.957105] IP: [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
> [  135.957122] PGD 1c06067 PUD dd1067 PMD f6d067 PTE 80100000f36c0065
> [  135.957134] Oops: 0003 [#1] SMP 
> [  135.957141] CPU 0 
> [  135.957144] Modules linked in: igb_uio(O) uio
> [  135.957155] 
> [  135.957160] Pid: 659, comm: helloworld Tainted: G           O
> 3.2.0-58-generic #88-Ubuntu  
> [  135.957171] RIP: e030:[<ffffffff81008efe>]  [<ffffffff81008efe>]
> xen_set_pte_at+0x3e/0x210
> [  135.957183] RSP: e02b:ffff8800037ddc88  EFLAGS: 00010297
> [  135.957189] RAX: 0000000000000000 RBX: 800000008c6000e7 RCX:
> 800000008c6000e7
> [  135.957197] RDX: 0000000000000000 RSI: 00007f4a65800000 RDI:
> ffff880003044980
> [  135.957205] RBP: ffff8800037ddcd8 R08: 0000000000000000 R09:
> dead000000100100
> [  135.957212] R10: dead000000200200 R11: 00007f4a64f7e02a R12:
> ffffea0003c48000
> [  135.957220] R13: 800000008c6000e7 R14: ffff8800f36c0960 R15:
> 0000000000000001
> [  135.957232] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> [  135.957241] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  135.957247] CR2: ffff8800f36c0960 CR3: 0000000002d08000 CR4:
> 0000000000002660
> [  135.957255] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  135.957263] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  135.957271] Process helloworld (pid: 659, threadinfo
> ffff8800037dc000, task ffff8800034c1700)
> [  135.957279] Stack:
> [  135.957283]  00007f4a65800000 ffff880003044980 dead000000200200
> dead000000100100
> [  135.957297]  0000000000000000 0000000000000000 ffffea0003c48000
> 800000008c6000e7
> [  135.957310]  ffff8800030449ec 0000000000000001 ffff8800037ddd68
> ffffffff81158453
> [  135.957322] Call Trace:
> [  135.957333]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  135.957342]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  135.957351]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  135.957361]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  135.957370]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  135.957378]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  135.957391]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  135.957399]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  135.957408]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  135.957417]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  135.957424] Code: e8 4c 89 75 f0 4c 89 7d f8 66 66 66 66 90 48 89
> 7d b8 48 89 75 b0 49 89 d6 48 89 cb 66 66 66 66 90 e8 77 5a 03 00 83
> f8 01 74 75 <49> 89 1e 48 8b 5d d8 4c 8b 65 e0 4c 8b 6d e8 4c 8b 75 f0
> 4c 8b 
> [  135.957507] RIP  [<ffffffff81008efe>] xen_set_pte_at+0x3e/0x210
> [  135.957517]  RSP <ffff8800037ddc88>
> [  135.957521] CR2: ffff8800f36c0960
> [  135.957528] ---[ end trace f6a013072f2aee83 ]---
> [  160.032062] BUG: soft lockup - CPU#0 stuck for 23s!
> [helloworld:659]
> [  160.032129] Modules linked in: igb_uio(O) uio
> [  160.032140] CPU 0 
> [  160.032143] Modules linked in: igb_uio(O) uio
> [  160.032153] 
> [  160.032159] Pid: 659, comm: helloworld Tainted: G      D    O
> 3.2.0-58-generic #88-Ubuntu  
> [  160.032170] RIP: e030:[<ffffffff810013aa>]  [<ffffffff810013aa>]
> hypercall_page+0x3aa/0x1000
> [  160.032190] RSP: e02b:ffff8800037dd730  EFLAGS: 00000202
> [  160.032197] RAX: 0000000000000000 RBX: 0000000000000000 RCX:
> ffffffff810013aa
> [  160.032204] RDX: 0000000000000000 RSI: ffff8800037dd748 RDI:
> 0000000000000003
> [  160.032212] RBP: ffff8800037dd778 R08: ffff8800f7008000 R09:
> ffff8800f6c000a0
> [  160.032220] R10: 0000000000007ff0 R11: 0000000000000202 R12:
> 0000000000000011
> [  160.032227] R13: 0000000000000201 R14: ffff880003044901 R15:
> ffff880003044900
> [  160.032239] FS:  00007f4a656e8800(0000) GS:ffff8800ffc00000(0000)
> knlGS:0000000000000000
> [  160.032248] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
> [  160.032255] CR2: ffff8800f36c0960 CR3: 0000000001c05000 CR4:
> 0000000000002660
> [  160.032263] DR0: 0000000000000000 DR1: 0000000000000000 DR2:
> 0000000000000000
> [  160.032271] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7:
> 0000000000000400
> [  160.032279] Process helloworld (pid: 659, threadinfo
> ffff8800037dc000, task ffff8800034c1700)
> [  160.032287] Stack:
> [  160.032291]  0000000000000011 00000000fffffffa ffffffff813adade
> ffff8800037dd764
> [  160.032304]  ffffffff00000001 0000000000000000 00000004813ad17e
> ffff8800037dd778
> [  160.032317]  ffff8800030449ec ffff8800037dd788 ffffffff813af5e0
> ffff8800037dd7d8
> [  160.032329] Call Trace:
> [  160.032341]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
> [  160.032350]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
> [  160.032360]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
> [  160.032370]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
> [  160.032381]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
> [  160.032390]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
> [  160.032400]  [<ffffffff81146408>] exit_mmap+0x58/0x140
> [  160.032408]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
> [  160.032416]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
> [  160.032425]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032433]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032442]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032452]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
> [  160.032460]  [<ffffffff81065f39>] mmput+0x29/0x30
> [  160.032470]  [<ffffffff8106c943>] exit_mm+0x113/0x130
> [  160.032479]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
> [  160.032488]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
> [  160.032496]  [<ffffffff8106cace>] do_exit+0x16e/0x450
> [  160.032504]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
> [  160.032513]  [<ffffffff8164812f>] no_context+0x150/0x15d
> [  160.032520]  [<ffffffff81648307>] __bad_area_nosemaphore
> +0x1cb/0x1ea
> [  160.032529]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
> [  160.032537]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
> [  160.032545]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
> [  160.032555]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
> [  160.032564]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init
> +0x48/0x160
> [  160.032575]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
> [  160.032588]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
> [  160.032597]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.032605]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
> [  160.032613]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
> [  160.032622]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  160.032630]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  160.032638]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  160.032648]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  160.032656]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  160.032664]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  160.032673]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  160.032681]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  160.032689]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  160.032697]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.032703] Code: cc 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc
> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00
> 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
> cc cc 
> [  160.032781] Call Trace:
> [  160.032787]  [<ffffffff813adade>] ? xen_poll_irq_timeout+0x3e/0x50
> [  160.032795]  [<ffffffff813af5e0>] xen_poll_irq+0x10/0x20
> [  160.032803]  [<ffffffff81646686>] xen_spin_lock_slow+0x98/0xf4
> [  160.032811]  [<ffffffff810124ba>] xen_spin_lock+0x4a/0x50
> [  160.032818]  [<ffffffff81661d8e>] _raw_spin_lock+0xe/0x20
> [  160.032826]  [<ffffffff81007d9a>] xen_exit_mmap+0x2a/0x60
> [  160.032833]  [<ffffffff81146408>] exit_mmap+0x58/0x140
> [  160.032841]  [<ffffffff8166275a>] ? error_exit+0x2a/0x60
> [  160.032849]  [<ffffffff8166227c>] ? retint_restore_args+0x5/0x6
> [  160.032857]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032866]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032874]  [<ffffffff8100132a>] ? hypercall_page+0x32a/0x1000
> [  160.032882]  [<ffffffff81065e22>] mmput.part.16+0x42/0x130
> [  160.032889]  [<ffffffff81065f39>] mmput+0x29/0x30
> [  160.032896]  [<ffffffff8106c943>] exit_mm+0x113/0x130
> [  160.032904]  [<ffffffff810e58c5>] ? taskstats_exit+0x45/0x240
> [  160.032912]  [<ffffffff81662075>] ? _raw_spin_lock_irq+0x15/0x20
> [  160.032920]  [<ffffffff8106cace>] do_exit+0x16e/0x450
> [  160.032928]  [<ffffffff81662f20>] oops_end+0xb0/0xf0
> [  160.032935]  [<ffffffff8164812f>] no_context+0x150/0x15d
> [  160.032943]  [<ffffffff81648307>] __bad_area_nosemaphore
> +0x1cb/0x1ea
> [  160.032951]  [<ffffffff816622ad>] ? restore_args+0x30/0x30
> [  160.032959]  [<ffffffff8164795b>] ? pte_offset_kernel+0xe/0x37
> [  160.032967]  [<ffffffff81648339>] bad_area_nosemaphore+0x13/0x15
> [  160.032975]  [<ffffffff81665bab>] do_page_fault+0x46b/0x540
> [  160.036054]  [<ffffffff8115c3f8>] ? mpol_shared_policy_init
> +0x48/0x160
> [  160.036054]  [<ffffffff811667bd>] ? kmem_cache_alloc+0x11d/0x140
> [  160.036054]  [<ffffffff8126d5fb>] ? hugetlbfs_alloc_inode+0x5b/0xa0
> [  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
> [  160.036054]  [<ffffffff81008efe>] ? xen_set_pte_at+0x3e/0x210
> [  160.036054]  [<ffffffff81008ef9>] ? xen_set_pte_at+0x39/0x210
> [  160.036054]  [<ffffffff81158453>] hugetlb_no_page+0x233/0x370
> [  160.036054]  [<ffffffff8100640e>] ? xen_pud_val+0xe/0x10
> [  160.036054]  [<ffffffff810053b5>] ? __raw_callee_save_xen_pud_val
> +0x11/0x1e
> [  160.036054]  [<ffffffff8115883e>] hugetlb_fault+0x1fe/0x340
> [  160.036054]  [<ffffffff81143e18>] ? vma_link+0x88/0xe0
> [  160.036054]  [<ffffffff81140a3c>] handle_mm_fault+0x2ec/0x370
> [  160.036054]  [<ffffffff816658be>] do_page_fault+0x17e/0x540
> [  160.036054]  [<ffffffff81145af8>] ? do_mmap_pgoff+0x348/0x360
> [  160.036054]  [<ffffffff81145bf1>] ? sys_mmap_pgoff+0xe1/0x230
> [  160.036054]  [<ffffffff816624f5>] page_fault+0x25/0x30
> 
> 
> 
> 
> On Mon, Feb 10, 2014 at 3:19 AM, Fabio Fantoni <fabio.fantoni@m2r.biz>
> wrote:
>         Il 10/02/2014 11:42, Wei Liu ha scritto:
>         
>                 On Fri, Feb 07, 2014 at 01:19:45PM -0800, Peter X. Gao
>                 wrote:
>                         Hi,
>                         
>                                 I am new to Xen and I am trying to run
>                         Intel DPDK inside a domU with
>                         virtio on Xen 4.2. Is it possible to do this?
>                         
>         
>         
>         Based on my tests of virtio:
>         - virtio-serial seems to work out of the box with Windows
>         domUs (also with the Xen PV drivers). On Linux domUs it also
>         works out of the box with an old kernel (tested 2.6.32), but
>         newer kernels (tested >=3.2) require pci=nomsi to work
>         correctly (and it also works with the Xen PVHVM drivers).
>         I have not yet found a solution for the MSI problem; there
>         are some posts about it.
>         - virtio-net used to work out of the box but is broken with
>         recent QEMU versions due to a QEMU regression. I narrowed it
>         down with bisect (one commit between 4 Jul 2013 and
>         22 Jul 2013) but could not find the exact commit because
>         there are other critical Xen problems in that range.
>         - I have not tested virtio-disk and do not know whether it
>         works with recent Xen and QEMU versions.
>         
>         
>                 DPDK doesn't seem to be tightly coupled with VirtIO,
>                 does it?
>                 
>                 Could you look at Xen's PV network protocol instead?
>                 VirtIO has no
>                 mainline support on Xen while Xen's PV protocol has
>                 been in mainline for
>                 years. And it's very likely to be enabled by default
>                 nowadays.
>                 
>                 Wei.
>                 
>                         Regards
>                         Peter
>                 
>         
>         
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:08:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:08:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD4B-0005HB-F4; Tue, 11 Feb 2014 13:07:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDD4A-0005Gq-6z
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:07:58 +0000
Received: from [85.158.139.211:6448] by server-3.bemta-5.messagelabs.com id
	47/93-13671-DA02AF25; Tue, 11 Feb 2014 13:07:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392124075!3157482!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 313 invoked from network); 11 Feb 2014 13:07:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:07:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101584598"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 13:07:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:07:54 -0500
Message-ID: <1392124072.26657.115.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Tue, 11 Feb 2014 13:07:52 +0000
In-Reply-To: <52F90E6F.9040409@eu.citrix.com>
References: <1392053686-16843-1-git-send-email-julien.grall@linaro.org>
	<52F90E0C.4000008@linaro.org> <52F90E6F.9040409@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH for-4.4] xen/arm: Correctly boot with an
 initrd and no linux command line
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 17:37 +0000, George Dunlap wrote:
> On 02/10/2014 05:36 PM, Julien Grall wrote:
> > Forgot to cc George.
> >
> > On 02/10/2014 05:34 PM, Julien Grall wrote:
> >> When the DOM0 device tree is being built, the initrd properties are
> >> only added if there is a linux command line. This later results in a
> >> panic:
> >>
> >> (XEN) *** LOADING DOMAIN 0 ***
> >> (XEN) Populate P2M 0x20000000->0x40000000 (1:1 mapping for dom0)
> >> (XEN) Loading kernel from boot module 2
> >> (XEN) Loading zImage from 0000000001000000 to 0000000027c00000-0000000027eafb48
> >> (XEN) Loading dom0 initrd from 0000000002000000 to 0x0000000028200000-0x0000000028c00000
> >> (XEN)
> >> (XEN) ****************************************
> >> (XEN) Panic on CPU 0:
> >> (XEN) Cannot fix up "linux,initrd-start" property
> >> (XEN) ****************************************
> >> (XEN)
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> >>
> >> ---
> >>      This is a bug fix for Xen 4.4. Without this patch, Xen won't boot with
> >> an initrd when the linux command line is not set.
> 
> Oops. :-)  Looks like a good risk:
> 
> Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

Acked + Applied, thanks.
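For readers skimming the archive, the shape of the fix can be sketched as follows. All names here are illustrative placeholders and do not match the actual code in Xen's dom0 device-tree builder:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical sketch of the control-flow fix; PROP_* and chosen_props()
 * are invented for illustration only. */
enum { PROP_BOOTARGS = 1, PROP_INITRD = 2 };

/* Returns a bitmask of the /chosen properties that would be emitted. */
static unsigned int chosen_props(bool has_cmdline, bool has_initrd)
{
    unsigned int props = 0;

    if (has_cmdline)
        props |= PROP_BOOTARGS;

    /* The bug: this block used to sit inside the has_cmdline branch, so an
     * initrd with no command line left "linux,initrd-start" without its
     * fixup and Xen panicked. The fix hoists it out so it always runs. */
    if (has_initrd)
        props |= PROP_INITRD;

    return props;
}
```

With the hoisted check, the initrd-but-no-command-line configuration that previously panicked now emits the initrd properties.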



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:08:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD4Z-0005Ld-Sl; Tue, 11 Feb 2014 13:08:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDD4Y-0005LK-4h
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:08:22 +0000
Received: from [193.109.254.147:63546] by server-1.bemta-14.messagelabs.com id
	09/C3-15438-5C02AF25; Tue, 11 Feb 2014 13:08:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392124099!3540568!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15157 invoked from network); 11 Feb 2014 13:08:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:08:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99770945"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:08:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:08:18 -0500
Message-ID: <1392124097.26657.117.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Scott <dave.scott@eu.citrix.com>
Date: Tue, 11 Feb 2014 13:08:17 +0000
In-Reply-To: <52F8D8F6.1010209@eu.citrix.com>
References: <1391809911-13610-1-git-send-email-dslutz@verizon.com>
	<1391809911-13610-2-git-send-email-dslutz@verizon.com>
	<52F8C834.5070104@eu.citrix.com> <52F8D8F6.1010209@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUGFIX][PATCH 1/1] xenlight_stubs.c: Allow it to
 build with ocaml 3.09.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 13:49 +0000, David Scott wrote:
> On 10/02/14 12:38, George Dunlap wrote:
> > On 02/07/2014 09:51 PM, Don Slutz wrote:
> >> This code was copied from:
> >>
> >> http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c

I see an LGPL 2.1 statement in that file, so I think it is OK to copy. 

> >>
> >>
> >> Signed-off-by: Don Slutz <dslutz@verizon.com>
> >
> > Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
> 
> Also looks fine to me
> Acked-by: David Scott <dave.scott@citrix.com>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:08:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:08:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD53-0005SQ-AW; Tue, 11 Feb 2014 13:08:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDD52-0005S5-64
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:08:52 +0000
Received: from [85.158.139.211:41721] by server-7.bemta-5.messagelabs.com id
	89/A2-14867-3E02AF25; Tue, 11 Feb 2014 13:08:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392124129!3167292!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27806 invoked from network); 11 Feb 2014 13:08:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101584790"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 13:08:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:08:47 -0500
Message-ID: <1392124126.26657.118.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 11 Feb 2014 13:08:46 +0000
In-Reply-To: <52F4DEE4.3080106@linaro.org>
References: <1391777836-12260-1-git-send-email-pranavkumar@linaro.org>
	<52F4DEE4.3080106@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anup Patel <anup.patel@linaro.org>, patches@linaro.org, patches@apm.com,
	xen-devel@lists.xen.org, stefano.stabellini@citrix.com,
	Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Subject: Re: [Xen-devel] [PATCH V2] xen: arm: arm64: Fix memory cloberring
 issues during VFP save restore.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 13:25 +0000, Julien Grall wrote:
> Hello Pranavkumar,
> 
> On 07/02/14 12:57, Pranavkumar Sawargaonkar wrote:
> > This patch addresses the memory clobbering issue mentioned by Julien
> > Grall in my earlier patch -
> > Commit Id: 712eb2e04da2cbcd9908f74ebd47c6df60d6d12f
> >
> > Discussion related to this fix -
> > http://www.gossamer-threads.com/lists/xen/devel/316247
> >
> > V2: Incorporating comments received on V1.
> > V1: Initial Patch
> >
> > Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
> > Signed-off-by: Anup Patel <anup.patel@linaro.org>
> Acked-by: Julien Grall <julien.grall@linaro.org>

Acked. George release-acked v1 so I have also applied.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:10:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD6q-00069H-7D; Tue, 11 Feb 2014 13:10:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDD6o-00068e-AJ
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:10:42 +0000
Received: from [85.158.139.211:25857] by server-15.bemta-5.messagelabs.com id
	C8/70-24395-1512AF25; Tue, 11 Feb 2014 13:10:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392124240!3124763!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22696 invoked from network); 11 Feb 2014 13:10:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:10:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 13:10:35 +0000
Message-Id: <52FA2F5E020000780011B234@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 13:10:38 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>,
	"Ian Campbell" <Ian.Campbell@citrix.com>
References: <52F90A71.40802@citrix.com>
	<1392111040.26657.50.camel@kazak.uk.xensource.com>
	<52FA0C1A.5080004@citrix.com>
	<1392120588.26657.99.camel@kazak.uk.xensource.com>
In-Reply-To: <1392120588.26657.99.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 13:09, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-11 at 11:40 +0000, David Vrabel wrote:
>> On 11/02/14 09:30, Ian Campbell wrote:
>> > On Mon, 2014-02-10 at 17:20 +0000, David Vrabel wrote:
>> >> --------------------------------------------------------------------
>> >> Field       Description
>> >> ----------- --------------------------------------------------------
>> >> count       Number of pages described in this record.
>> >>
>> >> pfn         An array of count PFNs. Bits 63-60 contain
>> >>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.
>> > 
>> > Now might be a good time to remove this intertwining? I suppose 60-bits
>> > is a lot of pfn's, but if the VMs address space is sparse it isn't out
>> > of the question.
>> 
>> I don't think we want to consider systems with > 64 bits of address
>> space so 60-bits is more than enough for PFNs.
> 
> Is it? What about systems with 61..63 bits of address space?

Their PFNs would, assuming 4k page size, still only be 49..51
bits wide.

That said - if reasonably cleanly doable within such a revised save
image format, I would second Ian's desire to split the type from
the PFN. Not so much because of limited PFN space, but because
of the chance of needing to introduce further types, requiring
more than 4 bits.
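For reference, the encoding being debated stores a 4-bit XEN_DOMCTL_PFINFO_* type in bits 63-60 of each 64-bit PFN entry, leaving 60 bits for the frame number. A minimal sketch of the pack/unpack arithmetic (the macro and function names are placeholders, not the actual Xen definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder names; the real constants live in Xen's public headers. */
#define PFINFO_SHIFT 60
#define PFINFO_MASK  (0xfULL << PFINFO_SHIFT)

/* Pack a 4-bit page-type value into the top nibble of a PFN entry. */
static inline uint64_t pfn_entry_pack(uint64_t pfn, unsigned int type)
{
    return (pfn & ~PFINFO_MASK) | ((uint64_t)type << PFINFO_SHIFT);
}

/* Recover the 60-bit frame number. */
static inline uint64_t pfn_entry_pfn(uint64_t entry)
{
    return entry & ~PFINFO_MASK;
}

/* Recover the 4-bit type from bits 63-60. */
static inline unsigned int pfn_entry_type(uint64_t entry)
{
    return (unsigned int)(entry >> PFINFO_SHIFT);
}
```

Splitting type and PFN into separate fields, as suggested in the thread, would remove this masking entirely and lift the 16-type ceiling.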

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:12:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDD8g-0006MK-UB; Tue, 11 Feb 2014 13:12:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDD8e-0006M0-Pe
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:12:36 +0000
Received: from [85.158.139.211:39294] by server-4.bemta-5.messagelabs.com id
	D9/FF-08092-4C12AF25; Tue, 11 Feb 2014 13:12:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392124354!3180519!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6301 invoked from network); 11 Feb 2014 13:12:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:12:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99772459"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:12:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:12:33 -0500
Message-ID: <1392124352.26657.120.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 11 Feb 2014 13:12:32 +0000
In-Reply-To: <1392120588.26657.99.camel@kazak.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<1392111040.26657.50.camel@kazak.uk.xensource.com>
	<52FA0C1A.5080004@citrix.com>
	<1392120588.26657.99.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>, Stefano
	Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 12:09 +0000, Ian Campbell wrote:
> 
> 
> > >>
> --------------------------------------------------------------------
> > >> Field       Description
> > >> -----------
> --------------------------------------------------------
> > >> count       Number of pages described in this record.
> > >>
> > >> pfn         An array of count PFNs. Bits 63-60 contain
> > >>             the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> > > 
> > > Now might be a good time to remove this intertwining? I suppose
> 60-bits
> > > is a lot of pfn's, but if the VMs address space is sparse it isn't
> out
> > > of the question.
> > 
> > I don't think we want to consider systems with > 64 bits of address
> > space so 60-bits is more than enough for PFNs.
> 
> Is it? What about systems with 61..63 bits of address space?

Never mind, another reply in the thread reminded me that these are PFNs,
so a 64-bit PFN == 76 bits of address space (with 4k pages).

Although, perhaps it would be better to spec this as a nibble and a
60-bit field rather than by pretending the top of a 64-bit field is special.

(or an octet and a 7 octet field if you prefer)
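[Editorial illustration, not part of the original mail: the "octet and a 7 octet field" alternative, serialized explicitly rather than by overloading the top of a 64-bit value. Field layout and names are invented for this sketch.]

```c
#include <stdint.h>

/* Hypothetical encoding of the octet-plus-7-octet layout: the type
 * is its own octet and the PFN occupies the remaining seven, so the
 * split is explicit in the wire format rather than implied by "the
 * top bits of a 64-bit field are special". */
static void entry_encode(uint8_t out[8], uint64_t pfn, uint8_t type)
{
    int i;
    for (i = 0; i < 7; i++)          /* little-endian 7-octet PFN */
        out[i] = (uint8_t)(pfn >> (8 * i));
    out[7] = type;                   /* type gets a whole octet */
}

static uint64_t entry_pfn(const uint8_t in[8])
{
    uint64_t pfn = 0;
    int i;
    for (i = 0; i < 7; i++)
        pfn |= (uint64_t)in[i] << (8 * i);
    return pfn;
}
```

This trades four bits of PFN range (56 instead of 60) for 256 possible type values instead of 16.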

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:15:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDBK-0006Ys-Nj; Tue, 11 Feb 2014 13:15:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDDBJ-0006Yf-1l
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:15:21 +0000
Received: from [85.158.139.211:10353] by server-6.bemta-5.messagelabs.com id
	E3/7D-14342-8622AF25; Tue, 11 Feb 2014 13:15:20 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392124519!3125894!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16291 invoked from network); 11 Feb 2014 13:15:19 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:15:19 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDDB7-0002Vh-QK; Tue, 11 Feb 2014 13:15:09 +0000
Date: Tue, 11 Feb 2014 14:15:09 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140211131509.GF97288@deinos.phlegethon.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
> >>> On 11.02.14 at 13:11, Tim Deegan <tim@xen.org> wrote:
> > At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
> >> >>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
> >> > That is the main change of this cset:  we go back to driving
> >> > the interrupt from the vpt code and fixing up the RTC state after vpt
> >> > tells us it's injected an interrupt.
> >> 
> >> And that's what is wrong imo, as it doesn't allow driving PF correctly
> >> when !PIE.
> > 
> > Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
> > you remember why not?  Have I forgotten some wrinkle or race here?
> 
> Because an OS could inspect PF without setting PIE.

Ugh. :( 

> >> > Yeah, this has nothing to do with the bug being fixed here.  The old
> >> > REG_C read was operating correctly, but on the return-to-guest path:
> >> >  - vpt sees another RTC interrupt is due and calls RTC code
> >> >  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
> >> >  - vlapic code sees the last interrupt is still in the ISR and does
> >> >    nothing;
> >> >  - we return to the guest having set IRQF but not consumed a timer
> >> >    event, so vpt state is the same
> >> >  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
> >> >    waiting for a read of 0.
> >> >  - repeat forever.
> >> 
> >> Which would call for a flag suppressing the setting of PF|IRQF
> >> until the timer event got consumed. Possibly with some safety
> >> belt for this to not get deferred indefinitely (albeit if the interrupt
> >> doesn't get injected for extended periods of time, the guest
> >> would presumably have more severe problems than these flags
> >> not getting updated as expected).
> > 
> > That's pretty much what we're doing here -- the pt_intr_post callback
> > sets PF|IRQF when the interrupt is injected.
> 
> Right, except you do this by reverting other stuff rather than
> adding the missing functionality on top.

Absolutely -- because once we went back to having PF set only when the
interrupt was injected, it seemed better to reduce the amount of
special-case plumbing for RTC than to add yet more.

But for the case of an OS polling for PF with PIE clear, I guess we
might need to keep all the current special cases.  Was that a known
observed bug or a theoretical one?  I can't see a way of handling
both that case and the w2k3 case.

Either we always set PF when the tick happens, even if the interrupt
is masked (which breaks w2k3) or we don't set it until we can deliver
the interrupt (which breaks pollers).

Or equivalently, either setting PF consumes a tick (which breaks
no-missed-ticks mode for OSes that don't poll) or it doesn't (which
breaks w2k3).
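[Editorial illustration, not part of the original mail: the livelock described in the quoted trace, reduced to a toy model. This is illustrative C, not Xen's vpt/RTC code; all names are invented.]

```c
#include <stdbool.h>
#include <stdint.h>

#define PF   0x40
#define IRQF 0x80

static uint8_t reg_c;                /* guest-visible RTC register C */
static bool prev_irq_in_isr = true;  /* earlier interrupt not yet EOIed */

/* Buggy return-to-guest fixup: a tick is pending and REG_C reads as
 * clear, so PF|IRQF get set again -- but no timer event is consumed,
 * so the same thing happens on the next vmentry too. */
static void vmentry_fixup(void)
{
    if (prev_irq_in_isr && reg_c == 0)
        reg_c |= PF | IRQF;
}

/* Guest-side read of REG_C; per the RTC spec, the read clears it. */
static uint8_t guest_read_reg_c(void)
{
    uint8_t v = reg_c;
    reg_c = 0;
    return v;
}
```

Interleaving `vmentry_fixup()` before every `guest_read_reg_c()` yields PF|IRQF on every iteration: a guest draining REG_C "waiting for a read of 0" spins forever, exactly the loop in the trace.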

Or do we have to treat 'masked in REG_B/IOAPIC' differently from
'masked by ISR/TPR/RFLAGS.IF/...'?

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:20:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:20:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDGH-00076N-UR; Tue, 11 Feb 2014 13:20:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDDGG-00076F-7w
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:20:28 +0000
Received: from [85.158.143.35:6551] by server-2.bemta-4.messagelabs.com id
	76/25-10891-B932AF25; Tue, 11 Feb 2014 13:20:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392124826!4823596!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21675 invoked from network); 11 Feb 2014 13:20:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:20:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 13:20:21 +0000
Message-Id: <52FA31A9020000780011B261@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 13:20:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<52FA1FC8.7010104@citrix.com>
In-Reply-To: <52FA1FC8.7010104@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	StefanoStabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 14:04, David Vrabel <david.vrabel@citrix.com> wrote:
> On 11/02/14 10:06, Jan Beulich wrote:
>>>>> On 10.02.14 at 18:20, David Vrabel <david.vrabel@citrix.com> wrote:
>>> Domain Header
>>> -------------
>>>
>>> The domain header includes general properties of the domain.
>>>
>>>      0      1     2     3     4     5     6     7 octet
>>>     +-----------+-----------+-----------+-------------+
>>>     | arch      | type      | page_shift| (reserved)  |
>>>     +-----------+-----------+-----------+-------------+
>>>
>>> --------------------------------------------------------------------
>>> Field       Description
>>> ----------- --------------------------------------------------------
>>> arch        0x0000: Reserved.
>>>
>>>             0x0001: x86.
>>>
>>>             0x0002: ARM.
>>>
>>> type        0x0000: Reserved.
>>>
>>>             0x0001: x86 PV.
>>>
>>>             0x0002 - 0xFFFF: Reserved.
>> 
>> So how would ARM, x86 HVM, and x86 PVH be expressed?
> 
> Something like:
> 
>   0x0001: x86 PV.
>   0x0002: x86 HVM.
>   0x0003: x86 PVH.
>   0x0004: ARM.

Ah, so the list above wasn't meant to be exhaustive (for the
current set of things to care about).

> Which does make the arch field a bit redundant, I suppose.

Indeed.

>>> P2M
>>> ---
>>>
>>> [ This is a more flexible replacement for the old p2m_size field and
>>> p2m array. ]
>>>
>>> The P2M record contains a portion of the source domain's P2M.
>>> Multiple P2M records may be sent if the source P2M changes during the
>>> stream.
>>>
>>>      0     1     2     3     4     5     6     7 octet
>>>     +-------------------------------------------------+
>>>     | pfn_begin                                       |
>>>     +-------------------------------------------------+
>>>     | pfn_end                                         |
>>>     +-------------------------------------------------+
>>>     | mfn[0]                                          |
>>>     +-------------------------------------------------+
>>>     ...
>>>     +-------------------------------------------------+
>>>     | mfn[N-1]                                        |
>>>     +-------------------------------------------------+
>>>
>>> --------------------------------------------------------------------
>>> Field       Description
>>> ----------- --------------------------------------------------------
>>> pfn_begin   The first PFN in this portion of the P2M
>>>
>>> pfn_end     One past the last PFN in this portion of the P2M.
>> 
>> I'd favor an inclusive range here, such that if we ever reach a
>> fully populatable 64-bit PFN space (on some future architecture)
>> there'd still be no issue with special casing the then unavoidable
>> wraparound.
> 
> Ok, but 64-bit PFN space would suggest 76 bits of address space, which
> seems somewhat far off.  Is that something we want to consider now?

If it's as cheap as using an inclusive range instead of a half-inclusive
one, I'd say yes.
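[Editorial illustration, not part of the original mail: the wraparound being traded off, with invented helper names. With a half-open [pfn_begin, pfn_end) pair, a range ending at the top PFN of a fully populated 64-bit space would need pfn_end == 2^64, which a 64-bit field cannot represent; an inclusive pfn_last has no such edge case.]

```c
#include <stdint.h>

/* Inclusive ranges can name the top PFN; the cost is a "+ 1" in the
 * count, which itself wraps only for the full 2^64-page space. */
static uint64_t count_inclusive(uint64_t first, uint64_t last)
{
    return last - first + 1;
}

/* A half-open range needs pfn_end = last_pfn + 1 to fit in 64 bits,
 * so it cannot describe a range ending at UINT64_MAX. */
static int half_open_can_end_at(uint64_t last_pfn)
{
    return last_pfn != UINT64_MAX;
}
```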

>>> Legacy Images (x86 only)
>>> ========================
>>>
>>> Restoring legacy images from older tools shall be handled by
>>> translating the legacy format image into this new format.
>>>
>>> It shall not be possible to save in the legacy format.
>>>
>>> There are two different legacy images depending on whether they were
>>> generated by a 32-bit or a 64-bit toolstack. These shall be
>>> distinguished by inspecting octets 4-7 in the image.  If these are
>>> zero then it is a 64-bit image.
>>>
>>> Toolstack  Field                            Value
>>> ---------  -----                            -----
>>> 64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
>> 
>> Afaics this is being determined via xc_domain_maximum_gpfn(),
>> which I don't think guarantees the result to be limited to 2^32.
>> Or in fact the libxc interface wrongly limits the value (by
>> truncating the "long" returned from the hypercall to an "int"). So
>> in practice consistent images would have the field limited to 2^31
>> on 64-bit tool stacks (since for larger values the negative function
>> return value would get converted by sign-extension, but all sorts
>> of other trouble would result due to the now huge p2m_size).
> 
> For the handling of legacy images I think we need to only consider
> images that could have been practically generated by older tools.

Right. That's what I meant to say with everything following the
first sentence.
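[Editorial illustration, not part of the original mail: the detection rule from the quoted draft, as a hypothetical helper. The real check would live in the legacy-image converter.]

```c
#include <stdint.h>

/* Draft-B legacy-image heuristic: the image starts with p2m_size.  A
 * 64-bit toolstack wrote it as a 64-bit little-endian value that is
 * in practice < 2^32, so octets 4-7 are zero; a 32-bit toolstack's
 * layout puts other (non-zero) data there. */
static int legacy_image_from_64bit_tools(const uint8_t hdr[8])
{
    return (hdr[4] | hdr[5] | hdr[6] | hdr[7]) == 0;
}
```

As the discussion notes, this only has to be right for images that older tools could actually have produced, not for every value the format could in theory encode.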

>>> Future Extensions
>>> =================
>>>
>>> All changes to this format require the image version to be increased.
>> 
>> Oh, okay, this partly deals with the first question above. Question
>> is whether that's a useful requirement, i.e. whether that wouldn't
>> lead to an inflation of versions needing conversion (for a tool stack
>> that wants to support more than just migration from N-1).
> 
> Only legacy images would be converted to the newest format.  I would
> expect version V-1 images would be handled by (mostly) the same code as
> V images.  Particularly if V is V-1 with extra record types.

Just consider distros, namely those that have a lower release
frequency than we do. They'd necessarily want to cover at least
the range between their V'-1 and V', which could just end up
being V-2 ... V from Xen pov, but - especially if we were to
further shorten the release cycle - could easily become a larger
range.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>>     +-------------------------------------------------+
>>>     ...
>>>     +-------------------------------------------------+
>>>     | mfn[N-1]                                        |
>>>     +-------------------------------------------------+
>>>
>>> --------------------------------------------------------------------
>>> Field       Description
>>> ----------- --------------------------------------------------------
>>> pfn_begin   The first PFN in this portion of the P2M
>>>
>>> pfn_end     One past the last PFN in this portion of the P2M.
>> 
>> I'd favor an inclusive range here, such that if we ever reach a
>> fully populatable 64-bit PFN space (on some future architecture)
>> there'd still be no issue with special casing the then unavoidable
>> wraparound.
> 
> Ok, but a 64-bit PFN space would suggest 76 bits of address space, which
> seems somewhat far off.  Is that something we want to consider now?

If it's as cheap as using an inclusive range instead of a half-open
one, I'd say yes.

>>> Legacy Images (x86 only)
>>> ========================
>>>
>>> Restoring legacy images from older tools shall be handled by
>>> translating the legacy format image into this new format.
>>>
>>> It shall not be possible to save in the legacy format.
>>>
>>> There are two different legacy images depending on whether they were
>>> generated by a 32-bit or a 64-bit toolstack. These shall be
>>> distinguished by inspecting octets 4-7 in the image.  If these are
>>> zero then it is a 64-bit image.
>>>
>>> Toolstack  Field                            Value
>>> ---------  -----                            -----
>>> 64-bit     Bits 32-63 of the p2m_size field 0 (since p2m_size < 2^32^)
>> 
>> Afaics this is being determined via xc_domain_maximum_gpfn(),
>> which I don't think guarantees the result to be limited to 2^32.
>> Or in fact the libxc interface wrongly limits the value (by
>> truncating the "long" returned from the hypercall to an "int"). So
>> in practice consistent images would have the field limited to 2^31
>> on 64-bit tool stacks (since for larger values the negative function
>> return value would get converted by sign-extension, but all sorts
>> of other trouble would result due to the now huge p2m_size).
> 
> For the handling of legacy images I think we need to only consider
> images that could have been practically generated by older tools.

Right. That's what I meant to say with everything following the
first sentence.

>>> Future Extensions
>>> =================
>>>
>>> All changes to this format require the image version to be increased.
>> 
>> Oh, okay, this partly deals with the first question above. Question
>> is whether that's a useful requirement, i.e. whether that wouldn't
>> lead to an inflation of versions needing conversion (for a tool stack
>> that wants to support more than just migration from N-1).
> 
> Only legacy images would be converted to the newest format.  I would
> expect version V-1 images would be handled by (mostly) the same code as
> V images.  Particularly if V is V-1 with extra record types.

Just consider distros, particularly those with a lower release
frequency than ours. They'd necessarily want to cover at least
the range between their V'-1 and V', which might end up being
only V-2 ... V from the Xen point of view, but - especially if we
were to further shorten the release cycle - could easily become a
larger range.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:20:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:20:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDGj-00079W-HX; Tue, 11 Feb 2014 13:20:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDDGi-00079F-2u
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:20:56 +0000
Received: from [85.158.137.68:6840] by server-3.bemta-3.messagelabs.com id
	21/28-14520-7B32AF25; Tue, 11 Feb 2014 13:20:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392124854!1081086!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28815 invoked from network); 11 Feb 2014 13:20:54 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:20:54 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm4so4230577wib.8
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 05:20:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Lz+W0giyFjrAX3qsW6vs6NNO6otmP3U8LXtfQ7jEIUg=;
	b=G8ufo++pqzBTsXkzleoXCaY4Ld6D4FCUiZ2VCzrPIRAZfBnV2rhfIzQkgChMPtUa2i
	JgiYBn1UGUFFV560kUJLehNLxbI7mHEzDxT1TnVSNMdYgLRGYlTKroZIk3PTHd8R7XO5
	2azcggVwkllDwZ4H4CI04cnqkID3mDVlLHaORUAzMmn3jVyNPX0su0EO/us+8ScXmi1X
	mc1UfzPCfpnIoqx1WRcXNNwi3V99sN+cUHjoCovhjvD9d+GPnXb52EMrSYXrXhTer0zo
	nxrj5a/rB9YuJCqsRBj5Ko+mhmHjXm0oRiYge9vcXgaWtpthDow4SLxXEIR/T4iE8mGt
	U8Kg==
X-Gm-Message-State: ALoCoQms1MTqmrJcaGyO7chwaa3Y0S/og3oGkhFAui0ybgRlWonbTBX9WMr1MRSZmor40Fx00WiB
X-Received: by 10.194.188.80 with SMTP id fy16mr26442831wjc.30.1392124853936; 
	Tue, 11 Feb 2014 05:20:53 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id ju6sm44079681wjc.1.2014.02.11.05.20.52
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 05:20:53 -0800 (PST)
Message-ID: <52FA23B4.5060203@linaro.org>
Date: Tue, 11 Feb 2014 13:20:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
In-Reply-To: <20140211125928.GE97288@deinos.phlegethon.org>
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 11/02/14 12:59, Tim Deegan wrote:
> Are you using a very old version of clang?  As 06a9c7e points out,
> our current runes didn't work before clang v3.0.

I'm using clang 3.5 (which has other issues compiling Xen), but I have
also tried 3.0 and 3.3.

> If not, rather than chasing this around any further, I think we should
> abandon trying to use the compiler-provided headers even on linux.
> Does this patch fix your build issue?
>
> commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
> Author: Tim Deegan <tim@xen.org>
> Date:   Tue Feb 11 12:44:09 2014 +0000
>
>      xen: stop trying to use the system <stdarg.h> and <stdbool.h>

With this patch, -iwithprefix include is no longer necessary. I wonder
if we can remove it from the command line.

>      We already have our own versions of the stdarg/stdbool definitions, for
>      systems where those headers are installed in /usr/include.
>
>      On linux, they're typically installed in compiler-specific paths, but
>      finding them has proved unreliable.  Drop that and use our own versions
>      everywhere.
>
>      Signed-off-by: Tim Deegan <tim@xen.org>

This patch works fine to build Xen with clang 3.0 and 3.3.
I have other issues building with clang 3.5.

Tested-by: Julien Grall <julien.grall@linaro.org>

Thanks!

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:22:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:22:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDHp-0007IG-0O; Tue, 11 Feb 2014 13:22:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDDHn-0007Hv-6D
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:22:03 +0000
Received: from [85.158.137.68:21531] by server-14.bemta-3.messagelabs.com id
	35/C9-08196-AF32AF25; Tue, 11 Feb 2014 13:22:02 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392124921!1110813!1
X-Originating-IP: [74.125.82.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6067 invoked from network); 11 Feb 2014 13:22:01 -0000
Received: from mail-we0-f182.google.com (HELO mail-we0-f182.google.com)
	(74.125.82.182)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:22:01 -0000
Received: by mail-we0-f182.google.com with SMTP id u57so5617281wes.13
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 05:22:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=dH5PRgex0fIGP23w7Bs7BDB5AtQPeumDSGNordf1X9U=;
	b=Aqa5yCsTacAkzmVHa+BlmFrEWEO3Shm1zxX7UvPqhLzj+1XJD5k/WUicArDTyGnIeP
	Vcj/hYR+YC8hjAakbJ9nD343+zUuMFykCLJL+Uhi1z0MF5m3qSitx/MqxMh/zZ9YCFHM
	sNWeU6cvleU3m0cr1RfYVI417RYZQcqrYNcY26cbBOtdCIgXTVMPyynxj+gj730YXOQk
	DogWS19FHXuYQqFQbuFVEYEtuzlppUYupvkWISS0hiW72Ug2cBgiT2VCTMtVfMlqJXLR
	lTbwvM90vz5K8hO4sWzVEm5JxCkFaZ2yvJPPsZsXd++0u/Z3AbctHuQ1rfpyzAMdJKsG
	4pwA==
X-Gm-Message-State: ALoCoQmioLi1GYvnOP983y59bW2DC9QJkPX+bDoVLbwkwpnzyLR8V4d0X24jrvCpQTpIwq+Epra2
X-Received: by 10.180.149.206 with SMTP id uc14mr15252302wib.10.1392124920987; 
	Tue, 11 Feb 2014 05:22:00 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id fb6sm41950189wib.2.2014.02.11.05.21.59
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 05:22:00 -0800 (PST)
Message-ID: <52FA23F7.1020209@linaro.org>
Date: Tue, 11 Feb 2014 13:21:59 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
In-Reply-To: <20140211125928.GE97288@deinos.phlegethon.org>
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 11/02/14 12:59, Tim Deegan wrote:
> At 12:36 +0000 on 11 Feb (1392118581), Julien Grall wrote:
>>
>>
>> On 11/02/14 12:35, Tim Deegan wrote:
>>> At 12:30 +0000 on 11 Feb (1392118227), Julien Grall wrote:
>>>>
>>>>
>>>> On 11/02/14 08:53, Tim Deegan wrote:
>>>>> At 23:29 +0000 on 10 Feb (1392071374), Julien Grall wrote:
>>>>>> Commit 06a9c7e "xen: move -nostdinc into common Rules.mk." breaks
>>>>>> compilation with clang:
>>>>>>
>>>>>> In file included from sched_sedf.c:8:
>>>>>> In file included from /home/julieng/works/xen/xen/include/xen/lib.h:5:
>>>>>> /home/julieng/works/xen/xen/include/xen/stdarg.h:20:12: error: 'stdarg.h' file
>>>>>> not found with <angled> include; use "quotes" instead
>>>>>>               ^~~~~~~~~~
>>>>>>               "stdarg.h"
>>>>>
>>>>> Looks like on your system stdarg.h doesn't live in a compiler-specific
>>>>> path, like we have for the BSDs.  I think we should just go to using
>>>>> our own definitions for stdarg/stdbool everywhere; trying to chase the
>>>>> compiler-specific versions around is a PITA, and the pieces we
>>>>> actually need are trivial.
>>>>
>>>> For BSDs, we are using our own stdarg/stdbool.  So we don't include the
>>>> system <stdarg.h>.
>>>>
>>>> Linux is using $(CC) -print-file-name=include to get the right path. It
>>>> works with both gcc and clang on Linux distros, but not on FreeBSD.
>>>
>>> Wait - is the error message you posted from clang on FreeBSD?
>>> That's surprising; on FreeBSD xen/stdarg.h shouldn't be trying to
>>> include <stdarg.h> at all.  Is __FreeBSD__ not being defined?
>>
>>
>> No it's from Linux (Debian Wheezy and Fedora). I just gave a try to the
>> "-print-file-name" solution on FreeBSD.
>
> Oh, OK.  Yeah, we knew that didn't work there, because on *BSD the
> compiler-specific headers like stdarg.h actually live in /usr/include
> and can themselves include other system headers.  That's why we have
> our own implementation of the bits we need, that we use on BSD.
>
> Are you using a very old version of clang?  As 06a9c7e points out,
> our current runes didn't work before clang v3.0.
>
> If not, rather than chasing this around any further, I think we should
> abandon trying to use the compiler-provided headers even on linux.
> Does this patch fix your build issue?
>
> commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
> Author: Tim Deegan <tim@xen.org>
> Date:   Tue Feb 11 12:44:09 2014 +0000
>
>      xen: stop trying to use the system <stdarg.h> and <stdbool.h>
>
>      We already have our own versions of the stdarg/stdbool definitions, for
>      systems where those headers are installed in /usr/include.
>
>      On linux, they're typically installed in compiler-specific paths, but
>      finding them has proved unreliable.  Drop that and use our own versions
>      everywhere.
>
>      Signed-off-by: Tim Deegan <tim@xen.org>
>
> diff --git a/xen/include/xen/stdarg.h b/xen/include/xen/stdarg.h
> index d1b2540..0283f06 100644
> --- a/xen/include/xen/stdarg.h
> +++ b/xen/include/xen/stdarg.h
> @@ -1,23 +1,21 @@
>   #ifndef __XEN_STDARG_H__
>   #define __XEN_STDARG_H__
>
> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
> -   typedef __builtin_va_list va_list;
> -#  ifdef __GNUC__
> -#    define __GNUC_PREREQ__(x, y)                                       \
> -        ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
> -         (__GNUC__ > (x)))
> -#  else
> -#    define __GNUC_PREREQ__(x, y)   0
> -#  endif
> -#  if !__GNUC_PREREQ__(4, 5)
> -#    define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
> -#  endif
> -#  define va_start(ap, last)    __builtin_va_start((ap), (last))
> -#  define va_end(ap)            __builtin_va_end(ap)
> -#  define va_arg                __builtin_va_arg
> +#ifdef __GNUC__
> +#  define __GNUC_PREREQ__(x, y)                                       \
> +      ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
> +       (__GNUC__ > (x)))
>   #else
> -#  include <stdarg.h>
> +#  define __GNUC_PREREQ__(x, y)   0
>   #endif
>
> +#if !__GNUC_PREREQ__(4, 5)
> +#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
> +#endif
> +
> +typedef __builtin_va_list va_list;
> +#define va_start(ap, last)    __builtin_va_start((ap), (last))
> +#define va_end(ap)            __builtin_va_end(ap)
> +#define va_arg                __builtin_va_arg
> +
>   #endif /* __XEN_STDARG_H__ */
> diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
> index f0faedf..b0947a6 100644
> --- a/xen/include/xen/stdbool.h
> +++ b/xen/include/xen/stdbool.h
> @@ -1,13 +1,9 @@
>   #ifndef __XEN_STDBOOL_H__
>   #define __XEN_STDBOOL_H__
>
> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
> -#  define bool _Bool
> -#  define true 1
> -#  define false 0
> -#  define __bool_true_false_are_defined   1
> -#else
> -#  include <stdbool.h>
> -#endif
> +#define bool _Bool
> +#define true 1
> +#define false 0
> +#define __bool_true_false_are_defined   1
>
>   #endif /* __XEN_STDBOOL_H__ */
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>   #endif
>
> +#if !__GNUC_PREREQ__(4, 5)
> +#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
> +#endif
> +
> +typedef __builtin_va_list va_list;
> +#define va_start(ap, last)    __builtin_va_start((ap), (last))
> +#define va_end(ap)            __builtin_va_end(ap)
> +#define va_arg                __builtin_va_arg
> +
>   #endif /* __XEN_STDARG_H__ */
> diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
> index f0faedf..b0947a6 100644
> --- a/xen/include/xen/stdbool.h
> +++ b/xen/include/xen/stdbool.h
> @@ -1,13 +1,9 @@
>   #ifndef __XEN_STDBOOL_H__
>   #define __XEN_STDBOOL_H__
>
> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
> -#  define bool _Bool
> -#  define true 1
> -#  define false 0
> -#  define __bool_true_false_are_defined   1
> -#else
> -#  include <stdbool.h>
> -#endif
> +#define bool _Bool
> +#define true 1
> +#define false 0
> +#define __bool_true_false_are_defined   1
>
>   #endif /* __XEN_STDBOOL_H__ */
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:29:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:29:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDPM-0007nF-5Z; Tue, 11 Feb 2014 13:29:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDDPK-0007mK-6i
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:29:50 +0000
Received: from [193.109.254.147:55331] by server-2.bemta-14.messagelabs.com id
	67/06-01236-CC52AF25; Tue, 11 Feb 2014 13:29:48 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392125386!3550414!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5914 invoked from network); 11 Feb 2014 13:29:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:29:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101589456"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 13:29:45 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:29:45 -0500
Message-ID: <1392125383.26657.124.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Tue, 11 Feb 2014 13:29:43 +0000
In-Reply-To: <1392040924.5117.103.camel@kazak.uk.xensource.com>
References: <1391775139.2162.88.camel@kazak.uk.xensource.com>
	<1391775176-30313-3-git-send-email-ian.campbell@citrix.com>
	<52F4E663020000780011A3B0@nat28.tlf.novell.com>
	<52F8E6DE020000780011AC53@nat28.tlf.novell.com>
	<1392040924.5117.103.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v4 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-10 at 14:02 +0000, Ian Campbell wrote:
> On Mon, 2014-02-10 at 13:49 +0000, Jan Beulich wrote:
> > >>> On 07.02.14 at 13:57, "Jan Beulich" <JBeulich@suse.com> wrote:
> > >>>> On 07.02.14 at 13:12, Ian Campbell <ian.campbell@citrix.com> wrote:
> > >> --- a/xen/include/public/domctl.h
> > >> +++ b/xen/include/public/domctl.h
> > >> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
> > >>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
> > >>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
> > >>  
> > >> +/*
> > >> + * ARM: Clean and invalidate caches associated with given region of
> > >> + * guest memory.
> > >> + */
> > >> +struct xen_domctl_cacheflush {
> > >> +    /* IN: page range to flush. */
> > >> +    xen_pfn_t start_pfn, nr_pfns;
> > >> +};
> > > 
> > > The name here (and of the libxc interface) is now certainly
> > > counterintuitive. But it's a domctl (and an internal interface),
> > > which we can change post-4.4 (I'd envision it to actually take
> > > a flags parameter indicating the kind of flush that's wanted).
> > 
> > Actually the naming of things in the hypervisor part of the patch
> > is now bogus too - sync_page_to_ram(), for example, in no way
> > implies that the cache needs not just flushing, but also
> > invalidating.
> 
> sync_and_clean_page ? Not quite right I think.

Unless there are objections, my next posting will use
"flush_page_to_ram".


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:31:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDRA-0008Co-PX; Tue, 11 Feb 2014 13:31:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDDR9-0008Cc-Ro
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:31:44 +0000
Received: from [85.158.137.68:52620] by server-9.bemta-3.messagelabs.com id
	AE/A4-10184-F362AF25; Tue, 11 Feb 2014 13:31:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392125502!1105795!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11966 invoked from network); 11 Feb 2014 13:31:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:31:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 13:31:36 +0000
Message-Id: <52FA344B020000780011B276@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 13:31:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
In-Reply-To: <20140211131509.GF97288@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 14:15, Tim Deegan <tim@xen.org> wrote:
> At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
>> Right, except you do this by reverting other stuff rather than
>> adding the missing functionality on top.
> 
> Absolutely -- because once we went back to having PF set only when the
> interrupt was injected, it seemed better to reduce the amount of
> special-case plumbing for RTC than to add yet more.
> 
> But for the case of an OS polling for PF with PIE clear, I guess we
> might need to keep all the current special cases.  Was that a known
> observed bug or a theoretical one?

A theoretical one. Along the lines of the general theme of all the
patches around that time: Get the emulation closer to what real
hardware does.

> I can't see a way of handling both that case and the w2k3 case.

Since hardware can, software certainly could as well.

> Either we always set PF when the tick happens, even if the interrupt
> is masked (which breaks w2k3) or we don't set it until we can deliver
> the interrupt (which breaks pollers).
> 
> Or equivalently, either setting PF consumes a tick (which breaks
> no-missed-ticks mode for OSes that don't poll) or it doesn't (which
> breaks w2k3).

Are these two really equivalent (perhaps they are in our current
implementation, but I ask from an abstract POV)? In particular
since for the above it's not immediately clear what "masked"
means, i.e. ...

> Or do we have to treat 'masked in REG_B/IOAPIC' differently from
> 'masked by ISR/TPR/RFLAGS.IF/...'?

... this might be the right direction (albeit I think it would be REG_B
on one side and collectively IOAPIC/ISR/TPR/EFLAGS.IF).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:32:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDRT-0008Fv-Ig; Tue, 11 Feb 2014 13:32:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDDRQ-0008Fc-E9
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:32:00 +0000
Received: from [85.158.137.68:47922] by server-11.bemta-3.messagelabs.com id
	A5/1C-04255-F462AF25; Tue, 11 Feb 2014 13:31:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392125517!1116840!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29695 invoked from network); 11 Feb 2014 13:31:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:31:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99777518"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:31:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 08:31:43 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDDR9-0008K0-7q;
	Tue, 11 Feb 2014 13:31:43 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 13:31:43 +0000
Message-ID: <1392125503-15803-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] Allow forcing the use of current
	osstest HEAD for branch=osstest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise cr-daily-branch expects $HOME/testing.git to exist and will
git-reset it etc, which is rather annoying when in standalone mode...

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 ap-fetch-version     | 8 ++++++--
 ap-fetch-version-old | 8 ++++++--
 cr-daily-branch      | 4 +++-
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/ap-fetch-version b/ap-fetch-version
index 1f3c6e9..5a5d5c4 100755
--- a/ap-fetch-version
+++ b/ap-fetch-version
@@ -70,8 +70,12 @@ linuxfirmware)
 		$UPSTREAM_TREE_LINUXFIRMWARE master daily-cron.$branch
 	;;
 osstest)
-	git-fetch $HOME/testing.git pretest:ap-fetch >&2
-        git-rev-parse ap-fetch^0
+	if [ "x$OSSTEST_USE_HEAD" != "xy" ] ; then
+	    git-fetch $HOME/testing.git pretest:ap-fetch >&2
+            git-rev-parse ap-fetch^0
+	else
+	    git-rev-parse HEAD^0
+	fi
         ;;
 *)
 	echo >&2 "branch $branch ?"
diff --git a/ap-fetch-version-old b/ap-fetch-version-old
index 353a817..0f5af49 100755
--- a/ap-fetch-version-old
+++ b/ap-fetch-version-old
@@ -74,8 +74,12 @@ linuxfirmware)
 		$TREE_LINUXFIRMWARE master daily-cron-old.$branch
 	;;
 osstest)
-	git-fetch -f $HOME/testing.git incoming:ap-fetch
-        git-rev-parse ap-fetch^0
+	if [ "x$OSSTEST_USE_HEAD" != "xy" ] ; then
+	    git-fetch -f $HOME/testing.git incoming:ap-fetch
+            git-rev-parse ap-fetch^0
+	else
+	    git-rev-parse HEAD^0
+	fi
         ;;
 *)
 	echo >&2 "branch $branch ?"
diff --git a/cr-daily-branch b/cr-daily-branch
index c4a0872..41ca796 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -146,7 +146,9 @@ osstest)
 	determine_version REVISION_OSSTEST osstest
         realtree=
 	NEW_REVISION=$REVISION_OSSTEST
-	git reset --hard $REVISION_OSSTEST
+	if [ "x$OSSTEST_USE_HEAD" != "xy" ] ; then
+	    git reset --hard $REVISION_OSSTEST
+	fi
 	;;
 qemuu)
 	realtree=qemu-upstream-${xenbranch#xen-}
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:32:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDRT-0008Fv-Ig; Tue, 11 Feb 2014 13:32:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDDRQ-0008Fc-E9
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:32:00 +0000
Received: from [85.158.137.68:47922] by server-11.bemta-3.messagelabs.com id
	A5/1C-04255-F462AF25; Tue, 11 Feb 2014 13:31:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392125517!1116840!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29695 invoked from network); 11 Feb 2014 13:31:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:31:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99777518"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 13:31:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 08:31:43 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDDR9-0008K0-7q;
	Tue, 11 Feb 2014 13:31:43 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 13:31:43 +0000
Message-ID: <1392125503-15803-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] Allow forcing the use of current
	osstest HEAD for branch=osstest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise cr-daily-branch expects $HOME/testing.git to exist and will
git-reset it etc, which is rather annoying when in standalone mode...

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 ap-fetch-version     | 8 ++++++--
 ap-fetch-version-old | 8 ++++++--
 cr-daily-branch      | 4 +++-
 3 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/ap-fetch-version b/ap-fetch-version
index 1f3c6e9..5a5d5c4 100755
--- a/ap-fetch-version
+++ b/ap-fetch-version
@@ -70,8 +70,12 @@ linuxfirmware)
 		$UPSTREAM_TREE_LINUXFIRMWARE master daily-cron.$branch
 	;;
 osstest)
-	git-fetch $HOME/testing.git pretest:ap-fetch >&2
-        git-rev-parse ap-fetch^0
+	if [ "x$OSSTEST_USE_HEAD" != "xy" ] ; then
+	    git-fetch $HOME/testing.git pretest:ap-fetch >&2
+            git-rev-parse ap-fetch^0
+	else
+	    git-rev-parse HEAD^0
+	fi
         ;;
 *)
 	echo >&2 "branch $branch ?"
diff --git a/ap-fetch-version-old b/ap-fetch-version-old
index 353a817..0f5af49 100755
--- a/ap-fetch-version-old
+++ b/ap-fetch-version-old
@@ -74,8 +74,12 @@ linuxfirmware)
 		$TREE_LINUXFIRMWARE master daily-cron-old.$branch
 	;;
 osstest)
-	git-fetch -f $HOME/testing.git incoming:ap-fetch
-        git-rev-parse ap-fetch^0
+	if [ "x$OSSTEST_USE_HEAD" != "xy" ] ; then
+	    git-fetch -f $HOME/testing.git incoming:ap-fetch
+            git-rev-parse ap-fetch^0
+	else
+	    git-rev-parse HEAD^0
+	fi
         ;;
 *)
 	echo >&2 "branch $branch ?"
diff --git a/cr-daily-branch b/cr-daily-branch
index c4a0872..41ca796 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -146,7 +146,9 @@ osstest)
 	determine_version REVISION_OSSTEST osstest
         realtree=
 	NEW_REVISION=$REVISION_OSSTEST
-	git reset --hard $REVISION_OSSTEST
+	if [ "x$OSSTEST_USE_HEAD" != "xy" ] ; then
+	    git reset --hard $REVISION_OSSTEST
+	fi
 	;;
 qemuu)
 	realtree=qemu-upstream-${xenbranch#xen-}
-- 
1.8.5.2
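
For standalone use, the intent is to export OSSTEST_USE_HEAD=y before running
cr-daily-branch. A minimal sketch of the gating pattern the patch adds
(illustrative only — the revision values are placeholders, not the real
osstest git plumbing):

```shell
#!/bin/sh
# Sketch of the OSSTEST_USE_HEAD gate added by this patch; the revision
# strings are placeholders standing in for git-fetch/git-rev-parse output.
OSSTEST_USE_HEAD=y

if [ "x$OSSTEST_USE_HEAD" != "xy" ]; then
    # Normal mode: fetch pretest from $HOME/testing.git and reset to it.
    revision=fetched-pretest
else
    # Forced mode: use the current HEAD and leave the working tree untouched.
    revision=current-head
fi
echo "$revision"
```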


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:58:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDqM-0000wB-6U; Tue, 11 Feb 2014 13:57:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDDqL-0000w6-9g
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:57:45 +0000
Received: from [85.158.137.68:60872] by server-8.bemta-3.messagelabs.com id
	B0/8E-16039-85C2AF25; Tue, 11 Feb 2014 13:57:44 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392127063!1109123!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12666 invoked from network); 11 Feb 2014 13:57:43 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:57:43 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDDq7-0003Aw-JJ; Tue, 11 Feb 2014 13:57:31 +0000
Date: Tue, 11 Feb 2014 14:57:31 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140211135731.GA10482@deinos.phlegethon.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
	<52FA344B020000780011B276@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA344B020000780011B276@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:31 +0000 on 11 Feb (1392121899), Jan Beulich wrote:
> >>> On 11.02.14 at 14:15, Tim Deegan <tim@xen.org> wrote:
> > At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
> >> Right, except you do this by reverting other stuff rather than
> >> adding the missing functionality on top.
> > 
> > Absolutely -- because once we went back to having PF set only when the
> > interrupt was injected, it seemed better to reduce the amount of
> > special-case plumbing for RTC than to add yet more.
> > 
> > But for the case of an OS polling for PF with PIE clear, I guess we
> > might need to keep all the current special cases.  Was that a known
> > observed bug or a theoretical one?
> 
> A theoretical one. Along the lines of the general theme of all the
> patches around that time: Get the emulation closer to what real
> hardware does.

Righto.

> > I can't see a way of handling both that case and the w2k3 case.
> 
> Since hardware can, software certainly could as well.

Hardware doesn't have to do things like no-missed-ticks mode; it
certainly doesn't have to deal with no-ack mode. :(

> > Or do we have to treat 'masked in REG_B/IOAPIC' differently from
> > 'masked by ISR/TPR/RFLAGS.IF/...'?
> 
> ... this might be the right direction (albeit I think it would be REG_B
> on one side and collectively IOAPIC/ISR/TPR/EFLAGS.IF).

Yeah, maybe.

Something like, if !PIE then we set PF and consume the tick in the
early vpt callback, otherwise we do it in the late callback (and in
both cases, the early vpt callback doesn't actually assert the line)?

Or: we drop the early callback, and go back to setting PF|IRQF in the
pt_intr_post callback (much like the patch under discussion), and
disable the vpt when the guest clears PIE.  Then in the REG_C read, if
!PIE, we can set the PF bit if a tick should have happened since
the last read (but saving ourselves the bother of running the actual
timer).  Then the only case where things are wrong is an OS which
_both_ polls for the edge _and_ relies on the vpt logic for adding the
right number of ticks after being descheduled.  Or an OS which asks
for interrupts, masks them in the *APIC/RFLAGS and then polls for PF.
But that guest, I think, is indistinguishable from w2k3 in the stuck
state.

Actually that second patch doesn't sound too bad.  If that sounds OK
to you I can look into coding it up on Thursday.
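
That second scheme can be modelled roughly as follows — a toy sketch with
made-up names, not the actual Xen vpt/RTC code: PF is set on injection while
PIE is enabled, and with PIE clear the timer is off and PF is synthesised
lazily on the REG_C read if a period has elapsed since the last read.

```python
class RtcModel:
    """Toy model of the proposed REG_C handling (hypothetical, not Xen code)."""

    def __init__(self):
        self.pie = False    # REG_B.PIE
        self.pf = False     # REG_C.PF
        self.last_read = 0  # time of last REG_C read, in RTC periods

    def intr_post(self):
        # Late injection path: only runs while the vpt is active (PIE set).
        if self.pie:
            self.pf = True

    def read_reg_c(self, now):
        # With PIE clear the periodic timer is off; set PF on the read if
        # a tick should have happened since the last read.
        if not self.pie and now > self.last_read:
            self.pf = True
        pf, self.pf = self.pf, False  # REG_C is clear-on-read
        self.last_read = now
        return pf
```

In this model a poller with PIE clear still sees PF once per elapsed period,
while a re-read within the same period sees it clear — which is the w2k3
"wait for REG_C to read 0" case.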

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:59:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDrk-00014H-T7; Tue, 11 Feb 2014 13:59:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDDri-00013J-SB
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:59:11 +0000
Received: from [85.158.143.35:42738] by server-2.bemta-4.messagelabs.com id
	AE/6A-10891-DAC2AF25; Tue, 11 Feb 2014 13:59:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392127147!4837450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31545 invoked from network); 11 Feb 2014 13:59:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:59:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101596757"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 13:59:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 08:59:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDDre-0003QM-HG;
	Tue, 11 Feb 2014 13:59:06 +0000
Message-ID: <52FA2CAA.7000303@citrix.com>
Date: Tue, 11 Feb 2014 13:59:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
In-Reply-To: <20140211131509.GF97288@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 13:15, Tim Deegan wrote:
> At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
>>>>> On 11.02.14 at 13:11, Tim Deegan <tim@xen.org> wrote:
>>> At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
>>>>>>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
>>>>> That is the main change of this cset:  we go back to driving
>>>>> the interrupt from the vpt code and fixing up the RTC state after vpt
>>>>> tells us it's injected an interrupt.
>>>> And that's what is wrong imo, as it doesn't allow driving PF correctly
>>>> when !PIE.
>>> Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
>>> you remember why not?  Have I forgotten some wrinkle or race here?
>> Because an OS could inspect PF without setting PIE.
> Ugh. :( 
>
>>>>> Yeah, this has nothing to do with the bug being fixed here.  The old
>>>>> REG_C read was operating correctly, but on the return-to-guest path:
>>>>>  - vpt sees another RTC interrupt is due and calls RTC code
>>>>>  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
>>>>>  - vlapic code sees the last interrupt is still in the ISR and does
>>>>>    nothing;
>>>>>  - we return to the guest having set IRQF but not consumed a timer
>>>>>    event, so vpt state is the same
>>>>>  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
>>>>>    waiting for a read of 0.
>>>>>  - repeat forever.
>>>> Which would call for a flag suppressing the setting of PF|IRQF
>>>> until the timer event got consumed. Possibly with some safety
>>>> belt for this to not get deferred indefinitely (albeit if the interrupt
>>>> doesn't get injected for extended periods of time, the guest
>>>> would presumably have more severe problems than these flags
>>>> not getting updated as expected).
>>> That's pretty much what we're doing here -- the pt_intr_post callback
>>> sets PF|IRQF when the interrupt is injected.
>> Right, except you do this by reverting other stuff rather than
>> adding the missing functionality on top.
> Absolutely -- because once we went back to having PF set only when the
> interrupt was injected, it seemed better to reduce the amount of
> special-case plumbing for RTC than to add yet more.
>
> But for the case of an OS polling for PF with PIE clear, I guess we
> might need to keep all the current special cases.  Was that a known
> observed bug or a theoretical one?  I can't see a way of handling
> both that case and the w2k3 case.
>
> Either we always set PF when the tick happens, even if the interrupt
> is masked (which breaks w2k3) or we don't set it until we can deliver
> the interrupt (which breaks pollers).

This doesn't break w2k3.  Setting PF when a tick happens (or should
happen for !PIE) is the correct thing to do.

The bug is that we see an interrupt pending and set PF when we
shouldn't, so w2k3 is constantly seeing PF set when it shouldn't be. 
The first read of REG_C should clear PF, which should then stay clear
until the next 64Hz period is up, whether or not interrupts are pending
injection.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:59:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDs1-00016s-9D; Tue, 11 Feb 2014 13:59:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDDs0-00016a-GM
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:59:28 +0000
Received: from [85.158.137.68:22692] by server-16.bemta-3.messagelabs.com id
	1C/C6-29917-FBC2AF25; Tue, 11 Feb 2014 13:59:27 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392127166!1121612!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6870 invoked from network); 11 Feb 2014 13:59:27 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:59:27 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDDry-0003DN-EA; Tue, 11 Feb 2014 13:59:26 +0000
Date: Tue, 11 Feb 2014 14:59:26 +0100
From: Tim Deegan <tim@xen.org>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140211135926.GB10482@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA23B4.5060203@linaro.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:20 +0000 on 11 Feb (1392121252), Julien Grall wrote:
> 
> 
> On 11/02/14 12:59, Tim Deegan wrote:
> > Are you using a very old version of clang?  As 06a9c7e points out,
> > our current runes didn't work before clang v3.0.
> 
> I'm using clang 3.5 (which has other issues compiling Xen), but I have
> also tried 3.0 and 3.3.
> 
> > If not, rather than chasing this around any further, I think we should
> > abandon trying to use the compiler-provided headers even on linux.
> > Does this patch fix your build issue?
> >
> > commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
> > Author: Tim Deegan <tim@xen.org>
> > Date:   Tue Feb 11 12:44:09 2014 +0000
> >
> >      xen: stop trying to use the system <stdarg.h> and <stdbool.h>
> 
> With this patch, -iwithprefix include is no longer necessary. I'm wondering
> if we can remove it from the command line.

Yes, I think so.

> >      We already have our own versions of the stdarg/stdbool definitions, for
> >      systems where those headers are installed in /usr/include.
> >
> >      On linux, they're typically installed in compiler-specific paths, but
> >      finding them has proved unreliable.  Drop that and use our own versions
> >      everywhere.
> >
> >      Signed-off-by: Tim Deegan <tim@xen.org>
> 
> This patch works fine to build Xen with clang 3.0 and 3.3.
> I have other issues building with clang 3.5.
> 
> Tested-by: Julien Grall <julien.grall@linaro.org>

Great!  Assuming you'll have a series of patches to fix the clang-3.5
build, do you want to just take this into that series, and drop the
-iwithprefix at the same time?

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 13:59:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 13:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDDs1-00016s-9D; Tue, 11 Feb 2014 13:59:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDDs0-00016a-GM
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 13:59:28 +0000
Received: from [85.158.137.68:22692] by server-16.bemta-3.messagelabs.com id
	1C/C6-29917-FBC2AF25; Tue, 11 Feb 2014 13:59:27 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392127166!1121612!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6870 invoked from network); 11 Feb 2014 13:59:27 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 13:59:27 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDDry-0003DN-EA; Tue, 11 Feb 2014 13:59:26 +0000
Date: Tue, 11 Feb 2014 14:59:26 +0100
From: Tim Deegan <tim@xen.org>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140211135926.GB10482@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA23B4.5060203@linaro.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:20 +0000 on 11 Feb (1392121252), Julien Grall wrote:
> 
> 
> On 11/02/14 12:59, Tim Deegan wrote:
> > Are you using a very old version of clang?  As 06a9c7e points out,
> > our current runes didn't work before clang v3.0.
> 
> I'm using clang 3.5 (which has other issues compiling Xen), but I have
> also tried 3.0 and 3.3.
> 
> > If not, rather than chasing this around any further, I think we should
> > abandon trying to use the compiler-provided headers even on linux.
> > Does this patch fix your build issue?
> >
> > commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
> > Author: Tim Deegan <tim@xen.org>
> > Date:   Tue Feb 11 12:44:09 2014 +0000
> >
> >      xen: stop trying to use the system <stdarg.h> and <stdbool.h>
> 
> With this patch, -iwithprefix include is no longer necessary. I'm
> wondering if we can remove it from the command line.

Yes, I think so.

> >      We already have our own versions of the stdarg/stdbool definitions, for
> >      systems where those headers are installed in /usr/include.
> >
> >      On linux, they're typically installed in compiler-specific paths, but
> >      finding them has proved unreliable.  Drop that and use our own versions
> >      everywhere.
> >
> >      Signed-off-by: Tim Deegan <tim@xen.org>
> 
> This patch works fine for building Xen with clang 3.0 and 3.3.
> I have other issues building with clang 3.5.
> 
> Tested-by: Julien Grall <julien.grall@linaro.org>

Great!  Assuming you'll have a series of patches to fix the clang-3.5
build, do you want to just take this into that series, and drop the
-iwithprefix at the same time?

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:10:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE2F-0002Rz-6X; Tue, 11 Feb 2014 14:10:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE2D-0002Rq-HH
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:10:01 +0000
Received: from [193.109.254.147:42729] by server-8.bemta-14.messagelabs.com id
	05/DD-18529-83F2AF25; Tue, 11 Feb 2014 14:10:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392127798!3557154!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31122 invoked from network); 11 Feb 2014 14:09:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:09:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99788297"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 14:09:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 09:09:57 -0500
Message-ID: <1392127796.26657.130.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:09:56 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH 0/5 v5] xen/arm: fix guest builder cache
 coherency (again, again)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George gave a release ack to v3.

Both 32 and 64 bit have survived ~10,000 boots with v4.

Changes in v5:
        avoid get_order_from_pages and just use 1<<MAX_ORDER

        s/sync_page_to_ram/flush_page_to_ram/g

        remove hard tab, add an emacs magic block

Changes in v4:
        make sure to actually invalidate the cache, not just
        clean it

        rename existing cache flush functions to avoid catching
        me out that way again.

        switch to using a start + length in the domctl interface

Changes in v3:
        s/cacheflush_page/sync_page_to_ram/

        xc interface takes a length instead of an end

        make the domctl range inclusive.

        make xc interface internal -- it isn't needed from libxl
        in the current design and it is easier to expose an
        interface in the future than to hide it.

Changes in v2:
        Flush on page alloc and do targeted flushes at domain build time
        rather than a big flush after domain build. This adds a new call
        to common code, which is stubbed out on x86. This avoids needing
        to worry about preemptability of the new domctl and also catches
        cases related to ballooning where things might not be flushed
        (e.g. a guest scrubs a page but doesn't clean the cache).

This has done 12000 boot loops on arm32 and 10000 on arm64.

Given the security aspect I would like to put this in 4.4.

Original blurb:

On ARM we need to take care of cache coherency for guests which we have
just built because they start with their caches disabled.

Our current strategy for dealing with this, which is to make guest
memory default to cacheable regardless of the in-guest configuration
(the HCR.DC bit), is flawed because it doesn't handle guests which
enable their MMU before enabling their caches, which at least FreeBSD
does. (NB: Setting HCR.DC while the guest MMU is enabled is
UNPREDICTABLE, hence we must disable it when the guest turns its MMU
on.)

There is also a security aspect here since the current strategy means
that a guest which enables its MMU before its caches can potentially see
unscrubbed data in RAM (because the scrubbed bytes are still held in the
cache).

As well as the new stuff this series removes the HCR.DC support and
performs two purely cosmetic renames.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:10:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE2q-0002UB-SJ; Tue, 11 Feb 2014 14:10:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDE2p-0002Tw-Us
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:10:40 +0000
Received: from [85.158.137.68:63242] by server-6.bemta-3.messagelabs.com id
	90/6B-09180-F5F2AF25; Tue, 11 Feb 2014 14:10:39 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392127838!1125574!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2458 invoked from network); 11 Feb 2014 14:10:38 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 14:10:38 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDE2k-0003R3-Lz; Tue, 11 Feb 2014 14:10:34 +0000
Date: Tue, 11 Feb 2014 15:10:34 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140211141034.GC10482@deinos.phlegethon.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
	<52FA2CAA.7000303@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA2CAA.7000303@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:59 +0000 on 11 Feb (1392123546), Andrew Cooper wrote:
> On 11/02/14 13:15, Tim Deegan wrote:
> > At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
> >>>>> On 11.02.14 at 13:11, Tim Deegan <tim@xen.org> wrote:
> >>> At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
> >>>>>>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
> >>>>> That is the main change of this cset:  we go back to driving
> >>>>> the interrupt from the vpt code and fixing up the RTC state after vpt
> >>>>> tells us it's injected an interrupt.
> >>>> And that's what is wrong imo, as it doesn't allow driving PF correctly
> >>>> when !PIE.
> >>> Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
> >>> you remember why not?  Have I forgotten some wrinkle or race here?
> >> Because an OS could inspect PF without setting PIE.
> > Ugh. :( 
> >
> >>>>> Yeah, this has nothing to do with the bug being fixed here.  The old
> >>>>> REG_C read was operating correctly, but on the return-to-guest path:
> >>>>>  - vpt sees another RTC interrupt is due and calls RTC code
> >>>>>  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
> >>>>>  - vlapic code sees the last interrupt is still in the ISR and does
> >>>>>    nothing;
> >>>>>  - we return to the guest having set IRQF but not consumed a timer
> >>>>>    event, so vpt state is the same
> >>>>>  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
> >>>>>    waiting for a read of 0.
> >>>>>  - repeat forever.
> >>>> Which would call for a flag suppressing the setting of PF|IRQF
> >>>> until the timer event got consumed. Possibly with some safety
> >>>> belt for this to not get deferred indefinitely (albeit if the interrupt
> >>>> doesn't get injected for extended periods of time, the guest
> >>>> would presumably have more severe problems than these flags
> >>>> not getting updated as expected).
> >>> That's pretty much what we're doing here -- the pt_intr_post callback
> >>> sets PF|IRQF when the interrupt is injected.
> >> Right, except you do this by reverting other stuff rather than
> >> adding the missing functionality on top.
> > Absolutely -- because once we went back to having PF set only when the
> > interrupt was injected, it seemed better to reduce the amount of
> > special-case plumbing for RTC than to add yet more.
> >
> > But for the case of an OS polling for PF with PIE clear, I guess we
> > might need to keep all the current special cases.  Was that a known
> > observed bug or a theoretical one?  I can't see a way of handling
> > both that case and the w2k3 case.
> >
> > Either we always set PF when the tick happens, even if the interrupt
> > is masked (which breaks w2k3) or we don't set it until we can deliver
> > the interrupt (which breaks pollers).
> 
> This doesn't break w2k3.  Setting PF when a tick happens (or should
> happen for !PIE) is the correct thing to do.
> 
> The bug is that we see an interrupt pending and set PF when we
> shouldn't

We _are_ setting PF when the tick happens; it's just that because of
no-missed-ticks mode the tick happens before w2k3 has finished
handling the last one.  At that point, anything we do breaks w2k3 in
some way -- either we leave the tick pending until the interrupt is
actually delivered (which leads to the hang) or we consume the tick
even though the interrupt will be lost (which causes clock drift).

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3K-0002Xw-BL; Tue, 11 Feb 2014 14:11:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3I-0002Xj-Uz
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:09 +0000
Received: from [85.158.139.211:42889] by server-2.bemta-5.messagelabs.com id
	5B/0B-23037-C7F2AF25; Tue, 11 Feb 2014 14:11:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392127865!3133880!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21595 invoked from network); 11 Feb 2014 14:11:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="99788793"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-94;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:00 +0000
Message-ID: <1392127864-2605-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v5 1/5] xen: arm: rename create_p2m_entries to
	apply_p2m_changes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function hasn't been solely about creating entries for quite a while.

This is purely a rename.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c |   28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 85ca330..ace3c54 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -230,7 +230,7 @@ enum p2m_operation {
     RELINQUISH,
 };
 
-static int create_p2m_entries(struct domain *d,
+static int apply_p2m_changes(struct domain *d,
                      enum p2m_operation op,
                      paddr_t start_gpaddr,
                      paddr_t end_gpaddr,
@@ -438,8 +438,8 @@ int p2m_populate_ram(struct domain *d,
                      paddr_t start,
                      paddr_t end)
 {
-    return create_p2m_entries(d, ALLOCATE, start, end,
-                              0, MATTR_MEM, p2m_ram_rw);
+    return apply_p2m_changes(d, ALLOCATE, start, end,
+                             0, MATTR_MEM, p2m_ram_rw);
 }
 
 int map_mmio_regions(struct domain *d,
@@ -447,8 +447,8 @@ int map_mmio_regions(struct domain *d,
                      paddr_t end_gaddr,
                      paddr_t maddr)
 {
-    return create_p2m_entries(d, INSERT, start_gaddr, end_gaddr,
-                              maddr, MATTR_DEV, p2m_mmio_direct);
+    return apply_p2m_changes(d, INSERT, start_gaddr, end_gaddr,
+                             maddr, MATTR_DEV, p2m_mmio_direct);
 }
 
 int guest_physmap_add_entry(struct domain *d,
@@ -457,20 +457,20 @@ int guest_physmap_add_entry(struct domain *d,
                             unsigned long page_order,
                             p2m_type_t t)
 {
-    return create_p2m_entries(d, INSERT,
-                              pfn_to_paddr(gpfn),
-                              pfn_to_paddr(gpfn + (1 << page_order)),
-                              pfn_to_paddr(mfn), MATTR_MEM, t);
+    return apply_p2m_changes(d, INSERT,
+                             pfn_to_paddr(gpfn),
+                             pfn_to_paddr(gpfn + (1 << page_order)),
+                             pfn_to_paddr(mfn), MATTR_MEM, t);
 }
 
 void guest_physmap_remove_page(struct domain *d,
                                unsigned long gpfn,
                                unsigned long mfn, unsigned int page_order)
 {
-    create_p2m_entries(d, REMOVE,
-                       pfn_to_paddr(gpfn),
-                       pfn_to_paddr(gpfn + (1<<page_order)),
-                       pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
+    apply_p2m_changes(d, REMOVE,
+                      pfn_to_paddr(gpfn),
+                      pfn_to_paddr(gpfn + (1<<page_order)),
+                      pfn_to_paddr(mfn), MATTR_MEM, p2m_invalid);
 }
 
 int p2m_alloc_table(struct domain *d)
@@ -618,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
 {
     struct p2m_domain *p2m = &d->arch.p2m;
 
-    return create_p2m_entries(d, RELINQUISH,
+    return apply_p2m_changes(d, RELINQUISH,
                               pfn_to_paddr(p2m->next_gfn_to_relinquish),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3N-0002Yp-OB; Tue, 11 Feb 2014 14:11:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3M-0002YQ-6h
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:12 +0000
Received: from [85.158.137.68:26589] by server-7.bemta-3.messagelabs.com id
	74/D9-13775-F7F2AF25; Tue, 11 Feb 2014 14:11:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392127868!1127751!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23126 invoked from network); 11 Feb 2014 14:11:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101601009"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-DB;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:01 +0000
Message-ID: <1392127864-2605-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v5 2/5] xen: arm: rename p2m
	next_gfn_to_relinquish to lowest_mapped_gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This has uses other than during relinquish, so rename it for clarity.

This is a pure rename.
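
As a hedged illustration of the field's role: it is a low-water mark
recording how far a ranged operation got, so a preempted pass resumes from
lowest_mapped_gfn rather than restarting at gfn 0. The budget-based yield
below stands in for hypercall_preempt_check(), and the whole sketch is
hypothetical, not the real Xen code.

```c
/* Toy model of the preemptible-teardown pattern behind the rename. */
struct toy_p2m {
    unsigned long lowest_mapped_gfn;   /* resume point */
    unsigned long max_mapped_gfn;
    unsigned long unmapped;            /* gfns torn down so far */
};

static int toy_relinquish(struct toy_p2m *p2m, unsigned long budget)
{
    unsigned long gfn, done = 0;

    for ( gfn = p2m->lowest_mapped_gfn; gfn < p2m->max_mapped_gfn; gfn++ )
    {
        if ( done++ == budget )
        {
            p2m->lowest_mapped_gfn = gfn;   /* remember where to resume */
            return -1;                      /* -EAGAIN in the real code */
        }
        p2m->unmapped++;                    /* stand-in for the unmap */
    }
    p2m->lowest_mapped_gfn = p2m->max_mapped_gfn;
    return 0;
}
```

The caller simply retries until the walker reports completion, exactly as
a preempted hypercall continuation would.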

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/p2m.c        |    9 ++++-----
 xen/include/asm-arm/p2m.h |    8 +++++---
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ace3c54..a61edeb 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -388,7 +388,7 @@ static int apply_p2m_changes(struct domain *d,
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_gfn_to_relinquish = addr >> PAGE_SHIFT;
+                p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
                 rc = -EAGAIN;
                 goto out;
             }
@@ -415,8 +415,7 @@ static int apply_p2m_changes(struct domain *d,
         unsigned long egfn = paddr_to_pfn(end_gpaddr);
 
         p2m->max_mapped_gfn = MAX(p2m->max_mapped_gfn, egfn);
-        /* Use next_gfn_to_relinquish to store the lowest gfn mapped */
-        p2m->next_gfn_to_relinquish = MIN(p2m->next_gfn_to_relinquish, sgfn);
+        p2m->lowest_mapped_gfn = MIN(p2m->lowest_mapped_gfn, sgfn);
     }
 
     rc = 0;
@@ -606,7 +605,7 @@ int p2m_init(struct domain *d)
     p2m->first_level = NULL;
 
     p2m->max_mapped_gfn = 0;
-    p2m->next_gfn_to_relinquish = ULONG_MAX;
+    p2m->lowest_mapped_gfn = ULONG_MAX;
 
 err:
     spin_unlock(&p2m->lock);
@@ -619,7 +618,7 @@ int relinquish_p2m_mapping(struct domain *d)
     struct p2m_domain *p2m = &d->arch.p2m;
 
     return apply_p2m_changes(d, RELINQUISH,
-                              pfn_to_paddr(p2m->next_gfn_to_relinquish),
+                              pfn_to_paddr(p2m->lowest_mapped_gfn),
                               pfn_to_paddr(p2m->max_mapped_gfn),
                               pfn_to_paddr(INVALID_MFN),
                               MATTR_MEM, p2m_invalid);
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 53b3266..e9c884a 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -24,9 +24,11 @@ struct p2m_domain {
      */
     unsigned long max_mapped_gfn;
 
-    /* When releasing mapped gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_gfn_to_relinquish;
+    /* Lowest mapped gfn in the p2m. When releasing mapped gfn's in a
+     * preemptible manner this is updated to track where to resume
+     * the search. Apart from during teardown this can only
+     * decrease. */
+    unsigned long lowest_mapped_gfn;
 };
 
 /* List of possible type for each page in the p2m entry.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3P-0002aE-AS; Tue, 11 Feb 2014 14:11:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3N-0002Yh-Cm
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:13 +0000
Received: from [85.158.137.68:12060] by server-16.bemta-3.messagelabs.com id
	84/D1-29917-08F2AF25; Tue, 11 Feb 2014 14:11:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392127870!1125740!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6266 invoked from network); 11 Feb 2014 14:11:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101601027"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-Lk;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:04 +0000
Message-ID: <1392127864-2605-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v5 5/5] xen: arm: correct terminology for cache
	flush macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The term "flush" is slightly ambiguous. The correct ARM term for this
operation is clean, as opposed to clean+invalidate, for which we also now
have a function.
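
The distinction can be modelled in portable C. On ARMv8 the real
operations are single instructions (DC CVAC for clean, DC CIVAC for
clean+invalidate, as the patch below shows); this struct and these helpers
are purely illustrative and not how hardware or Xen represents caches.

```c
#include <string.h>

/* Toy model of one cache line, for illustration only. */
struct toy_line {
    int valid;                 /* line holds data */
    int dirty;                 /* data newer than backing memory */
    unsigned char data[64];
    unsigned char *backing;    /* memory this line caches */
};

/* "Clean": write dirty data back, but keep the line valid. */
static void toy_clean(struct toy_line *l)
{
    if ( l->valid && l->dirty )
    {
        memcpy(l->backing, l->data, sizeof(l->data));
        l->dirty = 0;
    }
}

/* "Clean+invalidate": write back, then drop the line entirely, so the
 * next access refetches from memory. */
static void toy_clean_invalidate(struct toy_line *l)
{
    toy_clean(l);
    l->valid = 0;
}
```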

This is a pure rename, no functional change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
This could easily be left for 4.5.
---
 xen/arch/arm/guestcopy.c         |    2 +-
 xen/arch/arm/kernel.c            |    2 +-
 xen/arch/arm/mm.c                |   16 ++++++++--------
 xen/arch/arm/smpboot.c           |    2 +-
 xen/include/asm-arm/arm32/page.h |    2 +-
 xen/include/asm-arm/arm64/page.h |    2 +-
 xen/include/asm-arm/page.h       |   10 +++++-----
 7 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index bd0a355..af0af6b 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -24,7 +24,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         p += offset;
         memcpy(p, from, size);
         if ( flush_dcache )
-            flush_xen_dcache_va_range(p, size);
+            clean_xen_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
         len -= size;
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..1e3107d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -58,7 +58,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 
         set_fixmap(FIXMAP_MISC, p, attrindx);
         memcpy(dst, src + s, l);
-        flush_xen_dcache_va_range(dst, l);
+        clean_xen_dcache_va_range(dst, l);
 
         paddr += l;
         dst += l;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 98d054b..af7b189 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -484,13 +484,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Clear the copy of the boot pagetables. Each secondary CPU
      * rebuilds these itself (see head.S) */
     memset(boot_pgtable, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_pgtable);
+    clean_xen_dcache(boot_pgtable);
 #ifdef CONFIG_ARM_64
     memset(boot_first, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_first);
+    clean_xen_dcache(boot_first);
 #endif
     memset(boot_second, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_second);
+    clean_xen_dcache(boot_second);
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < LPAE_ENTRIES; i++ )
@@ -528,7 +528,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 
     /* Make sure it is clear */
     memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-    flush_xen_dcache_va_range(this_cpu(xen_dommap),
+    clean_xen_dcache_va_range(this_cpu(xen_dommap),
                               DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 #endif
 }
@@ -539,7 +539,7 @@ int init_secondary_pagetables(int cpu)
     /* Set init_ttbr for this CPU coming up. All CPUs share a single set of
      * pagetables, but rewrite it each time for consistency with 32 bit. */
     init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
     return 0;
 }
 #else
@@ -574,15 +574,15 @@ int init_secondary_pagetables(int cpu)
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
 
-    flush_xen_dcache_va_range(first, PAGE_SIZE);
-    flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
+    clean_xen_dcache_va_range(first, PAGE_SIZE);
+    clean_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 
     per_cpu(xen_pgtable, cpu) = first;
     per_cpu(xen_dommap, cpu) = domheap;
 
     /* Set init_ttbr for this CPU coming up */
     init_ttbr = __pa(first);
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
 
     return 0;
 }
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index c53c765..a829957 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -378,7 +378,7 @@ int __cpu_up(unsigned int cpu)
 
     /* Open the gate for this CPU */
     smp_up_cpu = cpu_logical_map(cpu);
-    flush_xen_dcache(smp_up_cpu);
+    clean_xen_dcache(smp_up_cpu);
 
     rc = arch_cpu_up(cpu);
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cb6add4..b8221ca 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -20,7 +20,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+#define __clean_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index baf8903..3352821 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -15,7 +15,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+#define __clean_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 5a371da..e00be9e 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -229,26 +229,26 @@ extern size_t cacheline_bytes;
 /* Function for flushing medium-sized areas.
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes. */
-static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
+static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
 {
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        asm volatile (__flush_xen_dcache_one(0) : : "r" (p));
+        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }
 
 /* Macro for flushing a single small item.  The predicate is always
  * compile-time constant so this will compile down to 3 instructions in
  * the common case. */
-#define flush_xen_dcache(x) do {                                        \
+#define clean_xen_dcache(x) do {                                        \
     typeof(x) *_p = &(x);                                               \
     if ( sizeof(x) > MIN_CACHELINE_BYTES || sizeof(x) > alignof(x) )    \
-        flush_xen_dcache_va_range(_p, sizeof(x));                       \
+        clean_xen_dcache_va_range(_p, sizeof(x));                       \
     else                                                                \
         asm volatile (                                                  \
             "dsb sy;"   /* Finish all earlier writes */                 \
-            __flush_xen_dcache_one(0)                                   \
+            __clean_xen_dcache_one(0)                                   \
             "dsb sy;"   /* Finish flush before continuing */            \
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3P-0002aE-AS; Tue, 11 Feb 2014 14:11:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3N-0002Yh-Cm
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:13 +0000
Received: from [85.158.137.68:12060] by server-16.bemta-3.messagelabs.com id
	84/D1-29917-08F2AF25; Tue, 11 Feb 2014 14:11:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392127870!1125740!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6266 invoked from network); 11 Feb 2014 14:11:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101601027"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-Lk;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:04 +0000
Message-ID: <1392127864-2605-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v5 5/5] xen: arm: correct terminology for cache
	flush macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The term "flush" is slightly ambiguous. The correct ARM term for this
operation is clean, as opposed to clean+invalidate, for which we also now have
a function.

This is a pure rename, no functional change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
This could easily be left for 4.5.
---
 xen/arch/arm/guestcopy.c         |    2 +-
 xen/arch/arm/kernel.c            |    2 +-
 xen/arch/arm/mm.c                |   16 ++++++++--------
 xen/arch/arm/smpboot.c           |    2 +-
 xen/include/asm-arm/arm32/page.h |    2 +-
 xen/include/asm-arm/arm64/page.h |    2 +-
 xen/include/asm-arm/page.h       |   10 +++++-----
 7 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index bd0a355..af0af6b 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -24,7 +24,7 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
         p += offset;
         memcpy(p, from, size);
         if ( flush_dcache )
-            flush_xen_dcache_va_range(p, size);
+            clean_xen_dcache_va_range(p, size);
 
         unmap_domain_page(p - offset);
         len -= size;
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 6a5772b..1e3107d 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -58,7 +58,7 @@ void copy_from_paddr(void *dst, paddr_t paddr, unsigned long len, int attrindx)
 
         set_fixmap(FIXMAP_MISC, p, attrindx);
         memcpy(dst, src + s, l);
-        flush_xen_dcache_va_range(dst, l);
+        clean_xen_dcache_va_range(dst, l);
 
         paddr += l;
         dst += l;
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 98d054b..af7b189 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -484,13 +484,13 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Clear the copy of the boot pagetables. Each secondary CPU
      * rebuilds these itself (see head.S) */
     memset(boot_pgtable, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_pgtable);
+    clean_xen_dcache(boot_pgtable);
 #ifdef CONFIG_ARM_64
     memset(boot_first, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_first);
+    clean_xen_dcache(boot_first);
 #endif
     memset(boot_second, 0x0, PAGE_SIZE);
-    flush_xen_dcache(boot_second);
+    clean_xen_dcache(boot_second);
 
     /* Break up the Xen mapping into 4k pages and protect them separately. */
     for ( i = 0; i < LPAE_ENTRIES; i++ )
@@ -528,7 +528,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 
     /* Make sure it is clear */
     memset(this_cpu(xen_dommap), 0, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
-    flush_xen_dcache_va_range(this_cpu(xen_dommap),
+    clean_xen_dcache_va_range(this_cpu(xen_dommap),
                               DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 #endif
 }
@@ -539,7 +539,7 @@ int init_secondary_pagetables(int cpu)
     /* Set init_ttbr for this CPU coming up. All CPus share a single setof
      * pagetables, but rewrite it each time for consistency with 32 bit. */
     init_ttbr = (uintptr_t) xen_pgtable + phys_offset;
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
     return 0;
 }
 #else
@@ -574,15 +574,15 @@ int init_secondary_pagetables(int cpu)
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
 
-    flush_xen_dcache_va_range(first, PAGE_SIZE);
-    flush_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
+    clean_xen_dcache_va_range(first, PAGE_SIZE);
+    clean_xen_dcache_va_range(domheap, DOMHEAP_SECOND_PAGES*PAGE_SIZE);
 
     per_cpu(xen_pgtable, cpu) = first;
     per_cpu(xen_dommap, cpu) = domheap;
 
     /* Set init_ttbr for this CPU coming up */
     init_ttbr = __pa(first);
-    flush_xen_dcache(init_ttbr);
+    clean_xen_dcache(init_ttbr);
 
     return 0;
 }
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index c53c765..a829957 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -378,7 +378,7 @@ int __cpu_up(unsigned int cpu)
 
     /* Open the gate for this CPU */
     smp_up_cpu = cpu_logical_map(cpu);
-    flush_xen_dcache(smp_up_cpu);
+    clean_xen_dcache(smp_up_cpu);
 
     rc = arch_cpu_up(cpu);
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cb6add4..b8221ca 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -20,7 +20,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
+#define __clean_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index baf8903..3352821 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -15,7 +15,7 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 }
 
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
-#define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
+#define __clean_xen_dcache_one(R) "dc cvac, %" #R ";"
 
 /* Inline ASM to clean and invalidate dcache on register R (may be an
  * inline asm operand) */
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 5a371da..e00be9e 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -229,26 +229,26 @@ extern size_t cacheline_bytes;
 /* Function for flushing medium-sized areas.
  * if 'range' is large enough we might want to use model-specific
  * full-cache flushes. */
-static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
+static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
 {
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        asm volatile (__flush_xen_dcache_one(0) : : "r" (p));
+        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }
 
 /* Macro for flushing a single small item.  The predicate is always
  * compile-time constant so this will compile down to 3 instructions in
  * the common case. */
-#define flush_xen_dcache(x) do {                                        \
+#define clean_xen_dcache(x) do {                                        \
     typeof(x) *_p = &(x);                                               \
     if ( sizeof(x) > MIN_CACHELINE_BYTES || sizeof(x) > alignof(x) )    \
-        flush_xen_dcache_va_range(_p, sizeof(x));                       \
+        clean_xen_dcache_va_range(_p, sizeof(x));                       \
     else                                                                \
         asm volatile (                                                  \
             "dsb sy;"   /* Finish all earlier writes */                 \
-            __flush_xen_dcache_one(0)                                   \
+            __clean_xen_dcache_one(0)                                   \
             "dsb sy;"   /* Finish flush before continuing */            \
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3P-0002al-NE; Tue, 11 Feb 2014 14:11:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3N-0002Yk-Pv
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:14 +0000
Received: from [85.158.137.68:26777] by server-15.bemta-3.messagelabs.com id
	F7/65-19263-18F2AF25; Tue, 11 Feb 2014 14:11:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392127868!1127751!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23277 invoked from network); 11 Feb 2014 14:11:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101601015"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-H7;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:02 +0000
Message-ID: <1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v5 3/5] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled and so we need to make sure
they see consistent data in RAM (requiring a cache clean) but also that they
do not have old stale data suddenly appear in the caches when they enable
their caches (requiring the invalidate).

This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time,
since that would miss pages which are ballooned out by the guest (where the
guest must scrub itself if it cares about not leaking the page content). We
need to clean as well as invalidate to make sure that any scrubbing which has
occurred gets committed to real RAM. To achieve this, add a new
flush_page_to_ram function, which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
--
v5: avoid get_order_from_pages and just use 1<<MAX_ORDER

    s/sync_page_to_ram/flush_page_to_ram/g

    remove hard tab, add an emacs magic block

v4: introduce a function to clean and invalidate as intended

    make the domctl take a length not an end.

v3:
    s/cacheflush_page/sync_page_to_ram/

    xc interface takes a length instead of an end

    make the domctl range inclusive.

    make xc interface internal -- it isn't needed from libxl in the current
    design and it is easier to expose an interface in the future than to hide
    it.

v2:
   Switch to cleaning at page allocation time + explicit flushing of the
   regions which the toolstack touches.

   Add XSM for new domctl.

   New domctl restricts the amount of space it is willing to flush, to avoid
   thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    2 ++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xc_private.h            |    3 +++
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |   12 ++++++++++++
 xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/arm32/page.h    |    4 ++++
 xen/include/asm-arm/arm64/page.h    |    4 ++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |   13 +++++++++++++
 xen/xsm/flask/policy/access_vectors |    2 ++
 17 files changed, 122 insertions(+)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3d4d107 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
+     * enabled its caches. But lets be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
+
     return 0;
 }
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..b9d1015 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e1d1bec..369c3f3 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.nr_pfns = nr_pfns;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..33ed15b 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 92271c9..a610f0c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
+
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..45974e7 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
+
+        if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
+            return -EINVAL;
+
+        if ( e < s )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index cf4e7d4..98d054b 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -342,6 +342,18 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
 }
 #endif
 
+void flush_page_to_ram(unsigned long mfn)
+{
+    void *p, *v = map_domain_page(mfn);
+
+    dsb();           /* So the CPU issues all writes to the range */
+    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
+        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
+    dsb();           /* So we know the flushes happen before continuing */
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..d00c882 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include <asm/gic.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
+#include <asm/page.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                        break;
+
+                    flush_page_to_ram(pte.p2m.base);
+                }
+                break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..601319c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        flush_page_to_ram(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..cb6add4 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
+
 /*
  * Flush all hypervisor mappings from the TLB and branch predictor.
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..baf8903 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
+
 /*
  * Flush all hypervisor mappings from the TLB
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..5a371da 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
+/* Flush the dcache for an entire page. */
+void flush_page_to_ram(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..ccc268d 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void flush_page_to_ram(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t
 perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f22fe2e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush. */
+    xen_pfn_t start_pfn, nr_pfns;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..d515702 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -1617,3 +1620,13 @@ static __init int flask_init(void)
 }
 
 xsm_initcall(flask_init);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3P-0002al-NE; Tue, 11 Feb 2014 14:11:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3N-0002Yk-Pv
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:14 +0000
Received: from [85.158.137.68:26777] by server-15.bemta-3.messagelabs.com id
	F7/65-19263-18F2AF25; Tue, 11 Feb 2014 14:11:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392127868!1127751!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23277 invoked from network); 11 Feb 2014 14:11:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101601015"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:06 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-H7;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:02 +0000
Message-ID: <1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, jbeulich@suse.com
Subject: [Xen-devel] [PATCH v5 3/5] xen/arm: clean and invalidate all guest
	caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Guests are initially started with caches disabled, so we need to make sure
they see consistent data in RAM (requiring a cache clean) and also that
stale data does not suddenly reappear in the caches when they enable
their caches (requiring the invalidate).

This can be split into two halves. First we must flush each page as it is
allocated to the guest. It is not sufficient to do the flush at scrub time
since this would miss pages which are ballooned out by the guest (where the
guest must scrub if it cares about not leaking the page content). We need to
clean as well as invalidate to make sure that any scrubbing which has occurred
gets committed to real RAM. To achieve this add a new flush_page_to_ram
function, which is a stub on x86.

Secondly we need to flush anything which the domain builder touches, which we
do via a new domctl.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: jbeulich@suse.com
Cc: keir@xen.org
Cc: ian.jackson@eu.citrix.com
---
v5: avoid get_order_from_pages and just use 1<<MAX_ORDER

    s/sync_page_to_ram/flush_page_to_ram/g

    remove hard tab, add an emacs magic block

v4: introduce a function to clean and invalidate as intended

    make the domctl take a length not an end.

v3:
    s/cacheflush_page/sync_page_to_ram/

    xc interface takes a length instead of an end

    make the domctl range inclusive.

    make xc interface internal -- it isn't needed from libxl in the current
    design and it is easier to expose an interface in the future than to hide
    it.

v2:
   Switch to cleaning at page allocation time + explicit flushing of the
   regions which the toolstack touches.

   Add XSM for new domctl.

   New domctl restricts the amount of space it is willing to flush, to avoid
   thinking about preemption.
---
 tools/libxc/xc_dom_boot.c           |    4 ++++
 tools/libxc/xc_dom_core.c           |    2 ++
 tools/libxc/xc_domain.c             |   10 ++++++++++
 tools/libxc/xc_private.c            |    2 ++
 tools/libxc/xc_private.h            |    3 +++
 xen/arch/arm/domctl.c               |   14 ++++++++++++++
 xen/arch/arm/mm.c                   |   12 ++++++++++++
 xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
 xen/common/page_alloc.c             |    5 +++++
 xen/include/asm-arm/arm32/page.h    |    4 ++++
 xen/include/asm-arm/arm64/page.h    |    4 ++++
 xen/include/asm-arm/p2m.h           |    3 +++
 xen/include/asm-arm/page.h          |    3 +++
 xen/include/asm-x86/page.h          |    3 +++
 xen/include/public/domctl.h         |   13 +++++++++++++
 xen/xsm/flask/hooks.c               |   13 +++++++++++++
 xen/xsm/flask/policy/access_vectors |    2 ++
 17 files changed, 122 insertions(+)

diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
index 5a9cfc6..3d4d107 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xc_dom_boot.c
@@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
         return -1;
     }
 
+    /* Guest shouldn't really touch its grant table until it has
+     * enabled its caches. But let's be nice. */
+    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
+
     return 0;
 }
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 77a4e64..b9d1015 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
         prev->next = phys->next;
     else
         dom->phys_pages = phys->next;
+
+    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
 }
 
 void xc_dom_unmap_all(struct xc_dom_image *dom)
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index e1d1bec..369c3f3 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
     return 0;
 }
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_cacheflush;
+    domctl.domain = (domid_t)domid;
+    domctl.u.cacheflush.start_pfn = start_pfn;
+    domctl.u.cacheflush.nr_pfns = nr_pfns;
+    return do_domctl(xch, &domctl);
+}
 
 int xc_domain_pause(xc_interface *xch,
                     uint32_t domid)
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 838fd21..33ed15b 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
         return -1;
     memcpy(vaddr, src_page, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
@@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
         return -1;
     memset(vaddr, 0, PAGE_SIZE);
     munmap(vaddr, PAGE_SIZE);
+    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
     return 0;
 }
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 92271c9..a610f0c 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
+int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
+			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
+
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 546e86b..45974e7 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
 {
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_cacheflush:
+    {
+        unsigned long s = domctl->u.cacheflush.start_pfn;
+        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
+
+        if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
+            return -EINVAL;
+
+        if ( e < s )
+            return -EINVAL;
+
+        return p2m_cache_flush(d, s, e);
+    }
+
     default:
         return subarch_do_domctl(domctl, d, u_domctl);
     }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index cf4e7d4..98d054b 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -342,6 +342,18 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
 }
 #endif
 
+void flush_page_to_ram(unsigned long mfn)
+{
+    void *p, *v = map_domain_page(mfn);
+
+    dsb();           /* So the CPU issues all writes to the range */
+    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
+        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
+    dsb();           /* So we know the flushes happen before continuing */
+
+    unmap_domain_page(v);
+}
+
 void __init arch_init_memory(void)
 {
     /*
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index a61edeb..d00c882 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -8,6 +8,7 @@
 #include <asm/gic.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
+#include <asm/page.h>
 
 /* First level P2M is 2 consecutive pages */
 #define P2M_FIRST_ORDER 1
@@ -228,6 +229,7 @@ enum p2m_operation {
     ALLOCATE,
     REMOVE,
     RELINQUISH,
+    CACHEFLUSH,
 };
 
 static int apply_p2m_changes(struct domain *d,
@@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
                     count++;
                 }
                 break;
+
+            case CACHEFLUSH:
+                {
+                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
+                        break;
+
+                    flush_page_to_ram(pte.p2m.base);
+                }
+                break;
         }
 
         /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
@@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
                               MATTR_MEM, p2m_invalid);
 }
 
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+
+    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
+    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
+
+    return apply_p2m_changes(d, CACHEFLUSH,
+                             pfn_to_paddr(start_mfn),
+                             pfn_to_paddr(end_mfn),
+                             pfn_to_paddr(INVALID_MFN),
+                             MATTR_MEM, p2m_invalid);
+}
+
 unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
 {
     paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 5f484a2..601319c 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
         /* Initialise fields which have other uses for free pages. */
         pg[i].u.inuse.type_info = 0;
         page_set_owner(&pg[i], NULL);
+
+        /* Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        flush_page_to_ram(page_to_mfn(&pg[i]));
     }
 
     spin_unlock(&heap_lock);
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index cf12a89..cb6add4 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
+
 /*
  * Flush all hypervisor mappings from the TLB and branch predictor.
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 9551f90..baf8903 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
 
+/* Inline ASM to clean and invalidate dcache on register R (may be an
+ * inline asm operand) */
+#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
+
 /*
  * Flush all hypervisor mappings from the TLB
  * This is needed after changing Xen code mappings.
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index e9c884a..3b39c45 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
 /* Look up the MFN corresponding to a domain's PFN. */
 paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
 
+/* Clean & invalidate caches corresponding to a region of guest address space */
+int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
+
 /* Setup p2m RAM mapping for domain d from start-end. */
 int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
 /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 670d4e7..5a371da 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
             : : "r" (_p), "m" (*_p));                                   \
 } while (0)
 
+/* Flush the dcache for an entire page. */
+void flush_page_to_ram(unsigned long mfn);
+
 /* Print a walk of an arbitrary page table */
 void dump_pt_walk(lpae_t *table, paddr_t addr);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 7a46af5..ccc268d 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
     return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
 }
 
+/* No cache maintenance required on x86 architecture. */
+static inline void flush_page_to_ram(unsigned long mfn) {}
+
 /* return true if permission increased */
 static inline bool_t
 perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f22fe2e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+/*
+ * ARM: Clean and invalidate caches associated with given region of
+ * guest memory.
+ */
+struct xen_domctl_cacheflush {
+    /* IN: page range to flush. */
+    xen_pfn_t start_pfn, nr_pfns;
+};
+typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +965,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_cacheflush                    71
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1012,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_max_evtchn    set_max_evtchn;
         struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
+        struct xen_domctl_cacheflush        cacheflush;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
         uint8_t                             pad[128];
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 50a35fc..d515702 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_cacheflush:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -1617,3 +1620,13 @@ static __init int flask_init(void)
 }
 
 xsm_initcall(flask_init);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..a0ed13d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -196,6 +196,8 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_cacheflush
+    cacheflush
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 14:11:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:11:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDE3R-0002bo-50; Tue, 11 Feb 2014 14:11:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDE3O-0002ZW-TX
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:11:15 +0000
Received: from [85.158.137.68:26857] by server-4.bemta-3.messagelabs.com id
	ED/02-11750-18F2AF25; Tue, 11 Feb 2014 14:11:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392127870!1125740!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6375 invoked from network); 11 Feb 2014 14:11:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:11:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,825,1384300800"; d="scan'208";a="101601013"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 14:11:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 09:11:04 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDE3E-00005n-JZ;
	Tue, 11 Feb 2014 14:11:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 14:11:03 +0000
Message-ID: <1392127864-2605-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392127796.26657.130.camel@kazak.uk.xensource.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v5 4/5] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first or at
the same time. It turns out that FreeBSD does this.

This has now been fixed (yet) another way (third time is the charm!) so remove
this support. The original commit contained some fixes which are still
relevant even with the revert of the bulk of the patch:
 - Correction to HSR_SYSREG_CRN_MASK
 - Rename of HSR_SYSCTL macros to avoid naming clash
 - Definition of some additional cp reg specifications

Since these are still useful they are not reverted.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
v2: Move to end of series
    Do not revert useful bits
---
 xen/arch/arm/domain.c        |    7 --
 xen/arch/arm/traps.c         |  158 ------------------------------------------
 xen/include/asm-arm/domain.h |    2 -
 3 files changed, 167 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..8f20fdf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -477,7 +471,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8f2e82..a15b59e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
-- 
1.7.10.4



Subject: [Xen-devel] [PATCH v5 4/5] Revert "xen: arm: force guest memory
	accesses to cacheable when MMU is disabled"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This reverts commit 89eb02c2204a0b42a0aa169f107bc346a3fef802.

This approach has a shortcoming in that it breaks when a guest enables its
MMU (SCTLR.M, disabling HCR.DC) without enabling caches (SCTLR.C) first or at
the same time. It turns out that FreeBSD does this.

This has now been fixed (yet) another way (third time is the charm!) so remove
this support. The original commit contained some fixes which are still
relevant even with the revert of the bulk of the patch:
 - Correction to HSR_SYSREG_CRN_MASK
 - Rename of HSR_SYSCTL macros to avoid naming clash
 - Definition of some additional cp reg specifications

Since these are still useful they are not reverted.
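For illustration only (this is a sketch, not Xen code), the condition the reverted update_sctlr() keyed on can be written out with the SCTLR bit positions from the ARM ARM; it shows why the FreeBSD case (M set, C clear) still took the HCR.DC-dropping path:

```python
# Illustrative sketch of the reverted trap condition; bit positions
# follow the ARMv7/ARMv8 SCTLR layout.
SCTLR_M = 1 << 0   # MMU enable
SCTLR_C = 1 << 2   # data cache enable

def reverted_handler_drops_dc(sctlr_val):
    """Mirror of the check in the reverted update_sctlr(): HCR.DC (and
    the HCR.TVM trap) were dropped as soon as the guest set SCTLR.M,
    regardless of whether SCTLR.C was enabled at the same time."""
    return bool(sctlr_val & SCTLR_M)

# The FreeBSD-style write from the commit message: MMU on, caches off.
freebsd_write = SCTLR_M
print(reverted_handler_drops_dc(freebsd_write))            # True
print(reverted_handler_drops_dc(SCTLR_M | SCTLR_C))        # True
```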

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
v2: Move to end of series
    Do not revert useful bits
---
 xen/arch/arm/domain.c        |    7 --
 xen/arch/arm/traps.c         |  158 ------------------------------------------
 xen/include/asm-arm/domain.h |    2 -
 3 files changed, 167 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..8f20fdf 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,7 +19,6 @@
 #include <xen/errno.h>
 #include <xen/bitops.h>
 #include <xen/grant_table.h>
-#include <xen/stdbool.h>
 
 #include <asm/current.h>
 #include <asm/event.h>
@@ -220,11 +219,6 @@ static void ctxt_switch_to(struct vcpu *n)
     else
         hcr |= HCR_RW;
 
-    if ( n->arch.default_cache )
-        hcr |= (HCR_TVM|HCR_DC);
-    else
-        hcr &= ~(HCR_TVM|HCR_DC);
-
     WRITE_SYSREG(hcr, HCR_EL2);
     isb();
 
@@ -477,7 +471,6 @@ int vcpu_initialise(struct vcpu *v)
         return rc;
 
     v->arch.sctlr = SCTLR_GUEST_INIT;
-    v->arch.default_cache = true;
 
     /*
      * By default exposes an SMP system with AFF0 set to the VCPU ID
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b8f2e82..a15b59e 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -29,14 +29,12 @@
 #include <xen/hypercall.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
-#include <xen/stdbool.h>
 #include <public/sched.h>
 #include <public/xen.h>
 #include <asm/event.h>
 #include <asm/regs.h>
 #include <asm/cpregs.h>
 #include <asm/psci.h>
-#include <asm/flushtlb.h>
 
 #include "decode.h"
 #include "io.h"
@@ -1292,29 +1290,6 @@ static void advance_pc(struct cpu_user_regs *regs, union hsr hsr)
     regs->pc += hsr.len ? 4 : 2;
 }
 
-static void update_sctlr(struct vcpu *v, uint32_t val)
-{
-    /*
-     * If MMU (SCTLR_M) is now enabled then we must disable HCR.DC
-     * because they are incompatible.
-     *
-     * Once HCR.DC is disabled then we do not need HCR_TVM either,
-     * since it's only purpose was to catch the MMU being enabled.
-     *
-     * Both are set appropriately on context switch but we need to
-     * clear them now since we may not context switch on return to
-     * guest.
-     */
-    if ( val & SCTLR_M )
-    {
-        WRITE_SYSREG(READ_SYSREG(HCR_EL2) & ~(HCR_DC|HCR_TVM), HCR_EL2);
-        /* ARM ARM 0406C.b B3.2.1: Disabling HCR.DC without changing
-         * VMID requires us to flush the TLB for that VMID. */
-        flush_tlb();
-        v->arch.default_cache = false;
-    }
-}
-
 static void do_cp15_32(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
@@ -1374,89 +1349,6 @@ static void do_cp15_32(struct cpu_user_regs *regs,
         if ( cp32.read )
            *r = v->arch.actlr;
         break;
-
-/* Passthru a 32-bit AArch32 register which is also 32-bit under AArch64 */
-#define CP32_PASSTHRU32(R...) do {              \
-    if ( cp32.read )                            \
-        *r = READ_SYSREG32(R);                  \
-    else                                        \
-        WRITE_SYSREG32(*r, R);                  \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates the lower 32-bits and clears the upper bits.
- */
-#define CP32_PASSTHRU64(R...) do {              \
-    if ( cp32.read )                            \
-        *r = (uint32_t)READ_SYSREG64(R);        \
-    else                                        \
-        WRITE_SYSREG64((uint64_t)*r, R);        \
-} while(0)
-
-/*
- * Passthru a 32-bit AArch32 register which is 64-bit under AArch64.
- * Updates either the HI ([63:32]) or LO ([31:0]) 32-bits preserving
- * the other half.
- */
-#ifdef CONFIG_ARM_64
-#define CP32_PASSTHRU64_HI(R...) do {                   \
-    if ( cp32.read )                                    \
-        *r = (uint32_t)(READ_SYSREG64(R) >> 32);        \
-    else                                                \
-    {                                                   \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffffUL;   \
-        t |= ((uint64_t)(*r)) << 32;                    \
-        WRITE_SYSREG64(t, R);                           \
-    }                                                   \
-} while(0)
-#define CP32_PASSTHRU64_LO(R...) do {                           \
-    if ( cp32.read )                                            \
-        *r = (uint32_t)(READ_SYSREG64(R) & 0xffffffff);         \
-    else                                                        \
-    {                                                           \
-        uint64_t t = READ_SYSREG64(R) & 0xffffffff00000000UL;   \
-        t |= *r;                                                \
-        WRITE_SYSREG64(t, R);                                   \
-    }                                                           \
-} while(0)
-#endif
-
-    /* HCR.TVM */
-    case HSR_CPREG32(SCTLR):
-        CP32_PASSTHRU32(SCTLR_EL1);
-        update_sctlr(v, *r);
-        break;
-    case HSR_CPREG32(TTBR0_32):   CP32_PASSTHRU64(TTBR0_EL1);      break;
-    case HSR_CPREG32(TTBR1_32):   CP32_PASSTHRU64(TTBR1_EL1);      break;
-    case HSR_CPREG32(TTBCR):      CP32_PASSTHRU32(TCR_EL1);        break;
-    case HSR_CPREG32(DACR):       CP32_PASSTHRU32(DACR32_EL2);     break;
-    case HSR_CPREG32(DFSR):       CP32_PASSTHRU32(ESR_EL1);        break;
-    case HSR_CPREG32(IFSR):       CP32_PASSTHRU32(IFSR32_EL2);     break;
-    case HSR_CPREG32(ADFSR):      CP32_PASSTHRU32(AFSR0_EL1);      break;
-    case HSR_CPREG32(AIFSR):      CP32_PASSTHRU32(AFSR1_EL1);      break;
-    case HSR_CPREG32(CONTEXTIDR): CP32_PASSTHRU32(CONTEXTIDR_EL1); break;
-
-#ifdef CONFIG_ARM_64
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU64_LO(FAR_EL1);     break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU64_HI(FAR_EL1);     break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU64_LO(MAIR_EL1);    break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU64_HI(MAIR_EL1);    break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU64_LO(AMAIR_EL1);   break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU64_HI(AMAIR_EL1);   break;
-#else
-    case HSR_CPREG32(DFAR):       CP32_PASSTHRU32(DFAR);           break;
-    case HSR_CPREG32(IFAR):       CP32_PASSTHRU32(IFAR);           break;
-    case HSR_CPREG32(MAIR0):      CP32_PASSTHRU32(MAIR0);          break;
-    case HSR_CPREG32(MAIR1):      CP32_PASSTHRU32(MAIR1);          break;
-    case HSR_CPREG32(AMAIR0):     CP32_PASSTHRU32(AMAIR0);         break;
-    case HSR_CPREG32(AMAIR1):     CP32_PASSTHRU32(AMAIR1);         break;
-#endif
-
-#undef CP32_PASSTHRU32
-#undef CP32_PASSTHRU64
-#undef CP32_PASSTHRU64_LO
-#undef CP32_PASSTHRU64_HI
     default:
         printk("%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                cp32.read ? "mrc" : "mcr",
@@ -1470,9 +1362,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
                        union hsr hsr)
 {
     struct hsr_cp64 cp64 = hsr.cp64;
-    uint32_t *r1 = (uint32_t *)select_user_reg(regs, cp64.reg1);
-    uint32_t *r2 = (uint32_t *)select_user_reg(regs, cp64.reg2);
-    uint64_t r;
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1490,26 +1379,6 @@ static void do_cp15_64(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define CP64_PASSTHRU(R...) do {                                  \
-    if ( cp64.read )                                            \
-    {                                                           \
-        r = READ_SYSREG64(R);                                   \
-        *r1 = r & 0xffffffffUL;                                 \
-        *r2 = r >> 32;                                          \
-    }                                                           \
-    else                                                        \
-    {                                                           \
-        r = (*r1) | (((uint64_t)(*r2))<<32);                    \
-        WRITE_SYSREG64(r, R);                                   \
-    }                                                           \
-} while(0)
-
-    case HSR_CPREG64(TTBR0): CP64_PASSTHRU(TTBR0_EL1); break;
-    case HSR_CPREG64(TTBR1): CP64_PASSTHRU(TTBR1_EL1); break;
-
-#undef CP64_PASSTHRU
-
     default:
         printk("%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                cp64.read ? "mrrc" : "mcrr",
@@ -1524,8 +1393,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
                       union hsr hsr)
 {
     struct hsr_sysreg sysreg = hsr.sysreg;
-    register_t *x = select_user_reg(regs, sysreg.reg);
-    struct vcpu *v = current;
 
     switch ( hsr.bits & HSR_SYSREG_REGS_MASK )
     {
@@ -1538,31 +1405,6 @@ static void do_sysreg(struct cpu_user_regs *regs,
             domain_crash_synchronous();
         }
         break;
-
-#define SYSREG_PASSTHRU(R...) do {              \
-    if ( sysreg.read )                          \
-        *x = READ_SYSREG(R);                    \
-    else                                        \
-        WRITE_SYSREG(*x, R);                    \
-} while(0)
-
-    case HSR_SYSREG_SCTLR_EL1:
-        SYSREG_PASSTHRU(SCTLR_EL1);
-        update_sctlr(v, *x);
-        break;
-    case HSR_SYSREG_TTBR0_EL1:      SYSREG_PASSTHRU(TTBR0_EL1);      break;
-    case HSR_SYSREG_TTBR1_EL1:      SYSREG_PASSTHRU(TTBR1_EL1);      break;
-    case HSR_SYSREG_TCR_EL1:        SYSREG_PASSTHRU(TCR_EL1);        break;
-    case HSR_SYSREG_ESR_EL1:        SYSREG_PASSTHRU(ESR_EL1);        break;
-    case HSR_SYSREG_FAR_EL1:        SYSREG_PASSTHRU(FAR_EL1);        break;
-    case HSR_SYSREG_AFSR0_EL1:      SYSREG_PASSTHRU(AFSR0_EL1);      break;
-    case HSR_SYSREG_AFSR1_EL1:      SYSREG_PASSTHRU(AFSR1_EL1);      break;
-    case HSR_SYSREG_MAIR_EL1:       SYSREG_PASSTHRU(MAIR_EL1);       break;
-    case HSR_SYSREG_AMAIR_EL1:      SYSREG_PASSTHRU(AMAIR_EL1);      break;
-    case HSR_SYSREG_CONTEXTIDR_EL1: SYSREG_PASSTHRU(CONTEXTIDR_EL1); break;
-
-#undef SYSREG_PASSTHRU
-
     default:
         printk("%s %d, %d, c%d, c%d, %d %s x%d @ 0x%"PRIregister"\n",
                sysreg.read ? "mrs" : "msr",
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..bc20a15 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -257,8 +257,6 @@ struct arch_vcpu
     uint64_t event_mask;
     uint64_t lr_mask;
 
-    bool_t default_cache;
-
     struct {
         /*
          * SGIs and PPIs are per-VCPU, SPIs are domain global and in
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:24:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDEG3-0003tD-B3; Tue, 11 Feb 2014 14:24:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDEG0-0003t8-U2
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 14:24:17 +0000
Received: from [85.158.137.68:58213] by server-13.bemta-3.messagelabs.com id
	9F/0D-26923-0923AF25; Tue, 11 Feb 2014 14:24:16 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392128655!1125787!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16487 invoked from network); 11 Feb 2014 14:24:15 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 14:24:15 -0000
Received: by mail-wi0-f169.google.com with SMTP id e4so4131011wiv.2
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 06:24:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Q6xe+opgVmEcHyyeHj2PETqbBVEOL+d56szI+dvYfrw=;
	b=fmNdEv7qGYqCDFQ/NT++d7bbM5Qm9O4JSkFlq5ozaXaBkN2h/1l0GrA1/276a7p3XD
	1I3PVr/pxiUgY+TGGJPSAyR2zFTGQ0V42nA8QHLPvfwodaxPeG4u0Y7GFbGXkLtEqtuy
	Ugxrw5E90uLct7pm4rvJi3wpZVkLBNl+Iy+xdwuPvXA4Lycu7Eiw5LbFobwvzM+rxNmQ
	scQLwYWJseUnj5zmKfC6FXJGt1hkKnRRZVLX/wxPhE25kIYPhKLzzyyhO0T6SfNW2IJP
	wBd0rvt5LFfMbq/u5qTVo7UjR/h9GkOxW8d30cYLDqULy41yp5eAsZROfA4B2N+c/QP1
	dIcA==
X-Gm-Message-State: ALoCoQkSv9bmswhUKTGIBMaqU/rtHko/eo9syWTucmGETp8Kce08qX6KNLnqeiC+FFaSPmkHiKxh
X-Received: by 10.180.13.5 with SMTP id d5mr15098373wic.54.1392128654479;
	Tue, 11 Feb 2014 06:24:14 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id cm5sm46412720wid.5.2014.02.11.06.24.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 06:24:13 -0800 (PST)
Message-ID: <52FA328C.4000103@linaro.org>
Date: Tue, 11 Feb 2014 14:24:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
In-Reply-To: <20140211135926.GB10482@deinos.phlegethon.org>
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(Add George as release manager)

On 11/02/14 13:59, Tim Deegan wrote:
> At 13:20 +0000 on 11 Feb (1392121252), Julien Grall wrote:
>>
>>
>> On 11/02/14 12:59, Tim Deegan wrote:
>>> Are you using a very old version of clang?  As 06a9c7e points out,
>>> our current runes didn't work before clang v3.0.
>>
>> I'm using clang 3.5 (which have other issue to compile Xen), but I have
>> also tried 3.0 and 3.3.
>>
>>> If not, rather than chasing this around any further, I think we should
>>> abandon trying to use the compiler-provided headers even on linux.
>>> Does this patch fix your build issue?
>>>
>>> commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
>>> Author: Tim Deegan <tim@xen.org>
>>> Date:   Tue Feb 11 12:44:09 2014 +0000
>>>
>>>       xen: stop trying to use the system <stdarg.h> and <stdbool.h>
>>
>> With this patch, -iwithprefix include is not necessary now. I wondering
>> if we can remove it from the command line.
>
> Yes, I think so.
>
>>>       We already have our own versions of the stdarg/stdbool definitions, for
>>>       systems where those headers are installed in /usr/include.
>>>
>>>       On linux, they're typically installed in compiler-specific paths, but
>>>       finding them has proved unreliable.  Drop that and use our own versions
>>>       everywhere.
>>>
>>>       Signed-off-by: Tim Deegan <tim@xen.org>
>>
>> This patch is working fine to build xen clang 3.0, 3.3.
>> I have others issue to build with clang 3.5.
>>
>> Tested-by: Julien Grall <julien.grall@linaro.org>
>
> Great!  Assuming you'll have a series of patches to fix the clang-3.5
> build, do you want to just take this into that series, and drop the
> -iwithprefix at the same time?

If possible, I'd like this patch to go into Xen 4.4 to fix the build with 
official releases of clang (up to 3.4).

Clang 3.5 is still under development, so I don't think it's important to 
support it in Xen 4.4.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:27:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDEJA-00040O-7d; Tue, 11 Feb 2014 14:27:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WDEJ8-00040C-8E
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:27:30 +0000
Received: from [85.158.137.68:18493] by server-15.bemta-3.messagelabs.com id
	46/D7-19263-1533AF25; Tue, 11 Feb 2014 14:27:29 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392128848!1132970!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21150 invoked from network); 11 Feb 2014 14:27:28 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 14:27:28 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392128848; l=1276;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=+FHfkXft+1jGApLkrOuufcVMZEA=;
	b=QPq2lYxL9gvaJS8ZMn2HnX6ocuFVOO2PYTiQb0ot9NEwT4xYIOHXbmZ2L8Yc7CuW1c0
	uucMf9jCQ+TCJCS/2R03bZva2DKIhlc6XPXy2DZy6pQ3Dd86CVPIElhB7lBKJCosTBd0W
	xVPXVaSuqUBJy5ZfkLszfE4j4BfyC8W7w/A=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id t05aedq1BERSArg
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 11 Feb 2014 15:27:28 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 06B9850269; Tue, 11 Feb 2014 15:27:27 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Tue, 11 Feb 2014 15:27:24 +0100
Message-Id: <1392128844-18318-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] xend/pvscsi: recognize also SCSI CDROM devices
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Attaching a CDROM device with 'xm scsi-attach domU /dev/sr0 0:0:0:0'
fails because the sr driver was, for some reason, not handled at all in
the match list. With this change the above command succeeds and the
device is attached.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

Once this change gets committed, it should also be backported to the
maintained trees which contain these fixes (4.3+):

89bb46e xend/pvscsi: update sysfs parser for Linux 3.0
65ddfc5 xend/pvscsi: fix usage of persistant device names for SCSI devices
a6046ec xend/pvscsi: fix passing of SCSI control LUNs

 tools/python/xen/util/vscsi_util.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/python/xen/util/vscsi_util.py b/tools/python/xen/util/vscsi_util.py
index 5872e65..a4f5ad3 100644
--- a/tools/python/xen/util/vscsi_util.py
+++ b/tools/python/xen/util/vscsi_util.py
@@ -66,6 +66,9 @@ def _vscsi_get_hctl_by(phyname, scsi_devices):
     if re.match('/dev/sd[a-z]+([1-9]|1[0-5])?$', phyname):
         # sd driver
         name = re.sub('(^/dev/)|([1-9]|1[0-5])?$', '', phyname)
+    elif re.match('/dev/sr[0-9]+$', phyname):
+        # sr driver
+        name = re.sub('^/dev/', '', phyname)
     elif re.match('/dev/sg[0-9]+$', phyname):
         # sg driver
         name = re.sub('^/dev/', '', phyname)
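As an illustration of what the patched matcher accepts, the three device-name patterns from the hunk above can be exercised directly. The regexes are copied from the patch; the standalone helper name below is made up for this sketch:

```python
import re

def classify_phyname(phyname):
    """Classify a physical device path the way the patched
    _vscsi_get_hctl_by() does, returning (driver, kernel name)."""
    if re.match(r'/dev/sd[a-z]+([1-9]|1[0-5])?$', phyname):
        # sd driver: strip the /dev/ prefix and any partition suffix
        return 'sd', re.sub(r'(^/dev/)|([1-9]|1[0-5])?$', '', phyname)
    elif re.match(r'/dev/sr[0-9]+$', phyname):
        # sr driver (SCSI CDROM) -- newly recognized by this patch
        return 'sr', re.sub(r'^/dev/', '', phyname)
    elif re.match(r'/dev/sg[0-9]+$', phyname):
        # sg driver (generic SCSI)
        return 'sg', re.sub(r'^/dev/', '', phyname)
    return None, None

print(classify_phyname('/dev/sr0'))   # ('sr', 'sr0')
print(classify_phyname('/dev/sda3'))  # ('sd', 'sda')
```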

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Attaching a CDROM device with 'xm scsi-attach domU /dev/sr0 0:0:0:0'
fails because the sr driver was not handled at all in the match list.
With this change, the above command succeeds and the device is
attached.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

Once this change gets committed, it should also be backported to the
maintained trees which contain these fixes (4.3+):

89bb46e xend/pvscsi: update sysfs parser for Linux 3.0
65ddfc5 xend/pvscsi: fix usage of persistant device names for SCSI devices
a6046ec xend/pvscsi: fix passing of SCSI control LUNs

 tools/python/xen/util/vscsi_util.py | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/python/xen/util/vscsi_util.py b/tools/python/xen/util/vscsi_util.py
index 5872e65..a4f5ad3 100644
--- a/tools/python/xen/util/vscsi_util.py
+++ b/tools/python/xen/util/vscsi_util.py
@@ -66,6 +66,9 @@ def _vscsi_get_hctl_by(phyname, scsi_devices):
     if re.match('/dev/sd[a-z]+([1-9]|1[0-5])?$', phyname):
         # sd driver
         name = re.sub('(^/dev/)|([1-9]|1[0-5])?$', '', phyname)
+    elif re.match('/dev/sr[0-9]+$', phyname):
+        # sr driver
+        name = re.sub('^/dev/', '', phyname)
     elif re.match('/dev/sg[0-9]+$', phyname):
         # sg driver
         name = re.sub('^/dev/', '', phyname)
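For readers following along, the match list this hunk extends can be
sketched as a small standalone helper. This is only an illustration:
the function name `devname_from_phyname` is invented here, and the real
logic lives inside `_vscsi_get_hctl_by` in vscsi_util.py, but the three
regex branches are taken verbatim from the patched file:

```python
import re

def devname_from_phyname(phyname):
    """Map a /dev path to the bare kernel device name.

    sd devices may carry a partition suffix (1-15) that is stripped;
    sr (SCSI CDROM, the case added by this patch) and sg (generic
    SCSI) devices are used as-is after dropping the /dev/ prefix.
    """
    if re.match('/dev/sd[a-z]+([1-9]|1[0-5])?$', phyname):
        # sd driver: drop the /dev/ prefix and any partition number
        return re.sub('(^/dev/)|([1-9]|1[0-5])?$', '', phyname)
    elif re.match('/dev/sr[0-9]+$', phyname):
        # sr driver: the branch added by this patch
        return re.sub('^/dev/', '', phyname)
    elif re.match('/dev/sg[0-9]+$', phyname):
        # sg driver
        return re.sub('^/dev/', '', phyname)
    # Unrecognized device path: before this patch, /dev/sr0 fell
    # through to here, which is why 'xm scsi-attach' failed for CDROMs.
    return None
```

With the sr branch in place, '/dev/sr0' maps to 'sr0', which can then
be matched against the parsed sysfs device list; without it the path
falls through unrecognized.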

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:32:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDENN-0004KS-Fz; Tue, 11 Feb 2014 14:31:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WDD16-0004y6-QA
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:04:49 +0000
Received: from [85.158.143.35:22449] by server-2.bemta-4.messagelabs.com id
	0F/E7-10891-0FF1AF25; Tue, 11 Feb 2014 13:04:48 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392123885!4816768!1
X-Originating-IP: [209.85.128.175]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2512 invoked from network); 11 Feb 2014 13:04:46 -0000
Received: from mail-ve0-f175.google.com (HELO mail-ve0-f175.google.com)
	(209.85.128.175)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:04:46 -0000
Received: by mail-ve0-f175.google.com with SMTP id c14so5966619vea.6
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 05:04:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=aoMoXlaJMPgq/vT5dK02nozW+JA7DVS48HbeCit23Es=;
	b=a/gxMzX6ZX9onzOPBISdsGs6jweCPZ5KhWLioH30ARjcRecIbdOHTTVmvTukqcYMnV
	e6BDi4jizKeMF8RkXcry/Olr0mgT0TKikIriHR9S9ejsaHFyfQ1cGl8ItWZZSIhiAvjU
	KucxSiH99KPLI7eaZhPPSbtERmWmwY2ROU93usohi3cmY/nw1vWbPPoKXKZLkwgnEAKV
	EDZpqh6KTR+b8tRh4c865j0AEgoF3va6y2qAYHbFf/Jx9Y8WHSC5M/kQPIEaprMq437P
	1qdqHxVJqBfp+2+xxL9q6JX/ptXCq/DdKad/T3mn0yqehCJckq4NnZDZoTPZ6Cu79qpV
	w53Q==
X-Received: by 10.220.11.141 with SMTP id t13mr755045vct.30.1392123885164;
	Tue, 11 Feb 2014 05:04:45 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Tue, 11 Feb 2014 05:04:04 -0800 (PST)
In-Reply-To: <20140210184707.GA18755@phenom.dumpdata.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
	<CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
	<20140210184707.GA18755@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Tue, 11 Feb 2014 08:04:04 -0500
Message-ID: <CA+XTOOhnP4OGBeMF4m8k5JxTLN4qxZ8XhJA8bXg_HBDH1TX5kg@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Tue, 11 Feb 2014 14:31:51 +0000
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2505890737519724385=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2505890737519724385==
Content-Type: multipart/alternative; boundary=001a11c3c970a89d6d04f22119a9

--001a11c3c970a89d6d04f22119a9
Content-Type: text/plain; charset=ISO-8859-1

So I went ahead and tested the setup I am trying to achieve using Xen.
This setup basically requires two isolated machines that can be used for
network testing.  On the HVM mentioned above, this testing fails due to
something I cannot wrap my head around.  I believe it is still related
to the PCI passthrough of a device, and I believe it is related to the
libxl error mentioned above.  Can anyone shed some light on what is
going on?  Is it a driver issue? (Broadcom Corporation NetXtreme II
BCM5716 Gigabit Ethernet)

[79464.816085] ------------[ cut here ]------------
[79464.816093] WARNING: at
/build/buildd/linux-lts-raring-3.8.0/net/sched/sch_generic.c:254
dev_watchdog+0x262/0x270()
[79464.816094] Hardware name: HVM domU
[79464.816096] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 1 timed out
[79464.816096] Modules linked in: esp6(F) ah6(F) xfrm6_mode_tunnel(F)
xfrm_user(F) xfrm4_tunnel(F) tunnel4(F) ipcomp(F) xfrm_ipcomp(F) authenc(F)
esp4(F) ah4(F) xfrm4_mode_tunnel(F) deflate(F) zlib_deflate(F) ctr(F)
twofish_generic(F) twofish_avx_x86_64(F) twofish_x86_64_3way(F)
twofish_x86_64(F) twofish_common(F) camellia_generic(F)
camellia_aesni_avx_x86_64(F) camellia_x86_64(F) serpent_avx_x86_64(F)
serpent_sse2_x86_64(F) glue_helper(F) serpent_generic(F)
blowfish_generic(F) blowfish_x86_64(F) blowfish_common(F)
cast5_avx_x86_64(F) cast5_generic(F) cast_common(F) des_generic(F) xcbc(F)
rmd160(F) crypto_null(F) af_key(F) xfrm_algo(F) 8021q(F) garp(F) stp(F)
llc(F) ipmi_msghandler(F) autofs4(F) ghash_clmulni_intel(F) aesni_intel(F)
ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) cirrus(F)
ttm(F) drm_kms_helper(F) nfsd(F) nfs_acl(F) drm(F) auth_rpcgss(F)
sysimgblt(F) nfs(F) psmouse(F) sysfillrect(F) syscopyarea(F) fscache(F)
lockd(F) microcode(F) joydev(F) xen_kbdfront(F) i2c_piix4(F) serio_raw(F)
lp(F) mac_hid(F) sunrpc(F) parport(F) floppy(F) bnx2(F) [last unloaded:
ipmi_msghandler]
[79464.816139] Pid: 0, comm: swapper/1 Tainted: GF
 3.8.0-29-generic #42~precise1-Ubuntu
[79464.816140] Call Trace:
[79464.816142]  <IRQ>  [<ffffffff81059b0f>] warn_slowpath_common+0x7f/0xc0
[79464.816149]  [<ffffffff8135b9d4>] ? timerqueue_add+0x64/0xb0
[79464.816151]  [<ffffffff81059c06>] warn_slowpath_fmt+0x46/0x50
[79464.816154]  [<ffffffff81076794>] ? wake_up_worker+0x24/0x30
[79464.816157]  [<ffffffff81602062>] dev_watchdog+0x262/0x270
[79464.816160]  [<ffffffff810771f0>] ? __queue_work+0x2d0/0x2d0
[79464.816161]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
[79464.816164]  [<ffffffff8106995b>] call_timer_fn+0x3b/0x150
[79464.816167]  [<ffffffff8144f5a1>] ? add_interrupt_randomness+0x41/0x190
[79464.816170]  [<ffffffff8106b427>] run_timer_softirq+0x267/0x2c0
[79464.816173]  [<ffffffff810ee3c9>] ? handle_irq_event_percpu+0xa9/0x210
[79464.816175]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
[79464.816177]  [<ffffffff81062620>] __do_softirq+0xc0/0x240
[79464.816180]  [<ffffffff810b67cd>] ? tick_do_update_jiffies64+0x9d/0xd0
[79464.816184]  [<ffffffff816fdd5c>] call_softirq+0x1c/0x30
[79464.816188]  [<ffffffff81016775>] do_softirq+0x65/0xa0
[79464.816189]  [<ffffffff810628fe>] irq_exit+0x8e/0xb0
[79464.816193]  [<ffffffff8140a125>] xen_evtchn_do_upcall+0x35/0x50
[79464.816195]  [<ffffffff816fdeed>] xen_hvm_callback_vector+0x6d/0x80
[79464.816196]  <EOI>  [<ffffffff81084008>] ? hrtimer_start+0x18/0x20
[79464.816201]  [<ffffffff81045136>] ? native_safe_halt+0x6/0x10
[79464.816204]  [<ffffffff8101cc33>] default_idle+0x53/0x1f0
[79464.816206]  [<ffffffff8101dad9>] cpu_idle+0xd9/0x120
[79464.816209]  [<ffffffff816d10fe>] start_secondary+0xc3/0xc5
[79464.816210] ---[ end trace 48cf6b13be16e0ae ]---
[79464.816214] bnx2 0000:00:05.0 eth1: <--- start FTQ dump --->
[79464.816220] bnx2 0000:00:05.0 eth1: RV2P_PFTQ_CTL 00010000
[79464.816225] bnx2 0000:00:05.0 eth1: RV2P_TFTQ_CTL 00020000
[79464.816258] bnx2 0000:00:05.0 eth1: RV2P_MFTQ_CTL 00004000
[79464.816263] bnx2 0000:00:05.0 eth1: TBDR_FTQ_CTL 00004000
[79464.816268] bnx2 0000:00:05.0 eth1: TDMA_FTQ_CTL 00010000
[79464.816272] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
[79464.816276] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
[79464.816280] bnx2 0000:00:05.0 eth1: TPAT_FTQ_CTL 00010000
[79464.816285] bnx2 0000:00:05.0 eth1: RXP_CFTQ_CTL 00008000
[79464.816289] bnx2 0000:00:05.0 eth1: RXP_FTQ_CTL 00100000
[79464.816293] bnx2 0000:00:05.0 eth1: COM_COMXQ_FTQ_CTL 00010000
[79464.816297] bnx2 0000:00:05.0 eth1: COM_COMTQ_FTQ_CTL 00020000
[79464.816302] bnx2 0000:00:05.0 eth1: COM_COMQ_FTQ_CTL 00010000
[79464.816306] bnx2 0000:00:05.0 eth1: CP_CPQ_FTQ_CTL 00004000
[79464.816308] bnx2 0000:00:05.0 eth1: CPU states:
[79464.816321] bnx2 0000:00:05.0 eth1: 045000 mode b84c state 80001000
evt_mask 500 pc 8001288 pc 8001288 instr 38640001
[79464.816335] bnx2 0000:00:05.0 eth1: 085000 mode b84c state 80005000
evt_mask 500 pc 8000a5c pc 8000a5c instr 10400016
[79464.816349] bnx2 0000:00:05.0 eth1: 0c5000 mode b84c state 80000000
evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
[79464.816362] bnx2 0000:00:05.0 eth1: 105000 mode b8cc state 80000000
evt_mask 500 pc 8000a9c pc 8000a94 instr 8c420020
[79464.816375] bnx2 0000:00:05.0 eth1: 145000 mode b880 state 80000000
evt_mask 500 pc 8009c00 pc 800d948 instr 30420040
[79464.816389] bnx2 0000:00:05.0 eth1: 185000 mode b8cc state 80008000
evt_mask 500 pc 8000c58 pc 8000c58 instr 1092823
[79464.816392] bnx2 0000:00:05.0 eth1: <--- end FTQ dump --->
[79464.816394] bnx2 0000:00:05.0 eth1: <--- start TBDC dump --->
[79464.816399] bnx2 0000:00:05.0 eth1: TBDC free cnt: 32
[79464.816401] bnx2 0000:00:05.0 eth1: LINE     CID  BIDX   CMD  VALIDS
[79464.816410] bnx2 0000:00:05.0 eth1: 00    001000  00e8   00    [0]
[79464.816420] bnx2 0000:00:05.0 eth1: 01    001000  00e8   00    [0]
[79464.816429] bnx2 0000:00:05.0 eth1: 02    000800  afc8   00    [0]
[79464.816438] bnx2 0000:00:05.0 eth1: 03    000800  afb8   00    [0]
[79464.816447] bnx2 0000:00:05.0 eth1: 04    000800  afd8   00    [0]
[79464.816456] bnx2 0000:00:05.0 eth1: 05    000800  afe0   00    [0]
[79464.816465] bnx2 0000:00:05.0 eth1: 06    000800  afe8   00    [0]
[79464.816474] bnx2 0000:00:05.0 eth1: 07    000800  afd0   00    [0]
[79464.816485] bnx2 0000:00:05.0 eth1: 08    001000  3510   00    [0]
[79464.816494] bnx2 0000:00:05.0 eth1: 09    000800  aec0   00    [0]
[79464.816504] bnx2 0000:00:05.0 eth1: 0a    001000  3530   00    [0]
[79464.816514] bnx2 0000:00:05.0 eth1: 0b    000800  aec8   00    [0]
[79464.816523] bnx2 0000:00:05.0 eth1: 0c    000800  aed0   00    [0]
[79464.816559] bnx2 0000:00:05.0 eth1: 0d    001000  34f8   00    [0]
[79464.816570] bnx2 0000:00:05.0 eth1: 0e    001000  3500   00    [0]
[79464.816580] bnx2 0000:00:05.0 eth1: 0f    001000  3518   00    [0]
[79464.816590] bnx2 0000:00:05.0 eth1: 10    1fbc00  2fe8   7d    [0]
[79464.816599] bnx2 0000:00:05.0 eth1: 11    1ab780  fff8   7d    [0]
[79464.816608] bnx2 0000:00:05.0 eth1: 12    17ff00  b908   f7    [0]
[79464.816618] bnx2 0000:00:05.0 eth1: 13    0cb700  ff40   d7    [0]
[79464.816627] bnx2 0000:00:05.0 eth1: 14    177a80  efe0   03    [0]
[79464.816637] bnx2 0000:00:05.0 eth1: 15    037d80  9f88   72    [0]
[79464.816646] bnx2 0000:00:05.0 eth1: 16    1bae00  eef8   ce    [0]
[79464.816657] bnx2 0000:00:05.0 eth1: 17    1bbc80  a7f8   df    [0]
[79464.816666] bnx2 0000:00:05.0 eth1: 18    17e180  6aa8   e4    [0]
[79464.816675] bnx2 0000:00:05.0 eth1: 19    07ff80  6e50   dd    [0]
[79464.816683] bnx2 0000:00:05.0 eth1: 1a    1fda80  f790   6e    [0]
[79464.816694] bnx2 0000:00:05.0 eth1: 1b    151580  d7b0   fc    [0]
[79464.816703] bnx2 0000:00:05.0 eth1: 1c    1b9f80  cef8   6b    [0]
[79464.816712] bnx2 0000:00:05.0 eth1: 1d    1ebf00  ffa8   df    [0]
[79464.816723] bnx2 0000:00:05.0 eth1: 1e    1e7e00  ff78   ef    [0]
[79464.816731] bnx2 0000:00:05.0 eth1: 1f    166e80  fbd8   aa    [0]
[79464.816733] bnx2 0000:00:05.0 eth1: <--- end TBDC dump --->
[79464.817101] bnx2 0000:00:05.0 eth1: DEBUG: intr_sem[0] PCI_CMD[00100406]
[79464.817239] bnx2 0000:00:05.0 eth1: DEBUG: PCI_PM[19002008]
PCI_MISC_CFG[92000088]
[79464.817248] bnx2 0000:00:05.0 eth1: DEBUG: EMAC_TX_STATUS[00000008]
EMAC_RX_STATUS[00000000]
[79464.817252] bnx2 0000:00:05.0 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[79464.817257] bnx2 0000:00:05.0 eth1: DEBUG:
HC_STATS_INTERRUPT_STATUS[01fc0003]
[79464.817261] bnx2 0000:00:05.0 eth1: DEBUG: PBA[00000003]
[79464.817264] bnx2 0000:00:05.0 eth1: <--- start MCP states dump --->
[79464.817270] bnx2 0000:00:05.0 eth1: DEBUG: MCP_STATE_P0[0003611e]
MCP_STATE_P1[0003611e]
[79464.817278] bnx2 0000:00:05.0 eth1: DEBUG: MCP mode[0000b880]
state[80000000] evt_mask[00000500]
[79464.817286] bnx2 0000:00:05.0 eth1: DEBUG: pc[08003ebc] pc[0800d994]
instr[32020020]
[79464.817288] bnx2 0000:00:05.0 eth1: DEBUG: shmem states:
[79464.817296] bnx2 0000:00:05.0 eth1: DEBUG: drv_mb[01030027]
fw_mb[00000027] link_status[0008506b]
[79464.817301]  drv_pulse_mb[0000338a]
[79464.817306] bnx2 0000:00:05.0 eth1: DEBUG: dev_info_signature[44564907]
reset_type[01005254]
[79464.817310]  condition[0003611e]
[79464.817318] bnx2 0000:00:05.0 eth1: DEBUG: 000001c0: 01005254 42530083
0003611e 00000000
[79464.817328] bnx2 0000:00:05.0 eth1: DEBUG: 000003cc: 00000000 00000000
00000000 00000000
[79464.817357] bnx2 0000:00:05.0 eth1: DEBUG: 000003dc: 00000000 00000000
00000000 00000000
[79464.817367] bnx2 0000:00:05.0 eth1: DEBUG: 000003ec: 00000000 00000000
00000000 00000000
[79464.817372] bnx2 0000:00:05.0 eth1: DEBUG: 0x3fc[00000000]
[79464.817374] bnx2 0000:00:05.0 eth1: <--- end MCP states dump --->
[79464.872988] bnx2 0000:00:05.0 eth1: NIC Copper Link is Down



On Mon, Feb 10, 2014 at 1:47 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote:
> > Thanks for the answers on the timeline.
> >
> > When I start the HVM with the Broadcom adapter, I get this message back.
> > Parsing config from /etc/xen/ubuntu-hvm-1.cfg
> > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > support reset from sysfs for PCI device 0000:05:00.0
> > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > support reset from sysfs for PCI device 0000:05:00.1
> >
> > However, the devices appear in the HVM.  Is this something that I should
> be
> > concerned about?
>
> No. Xen pciback does the reset automatically.
>
> Actually we might want to ditch that reporting in libxl, or maybe just
> implement a stub function in xen-pciback so that libxl will be happy.
>
> >
> >
> >
> > On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu>
> wrote:
> >
> > > On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> > > <mikeneiderhauser@gmail.com> wrote:
> > > > Works like a charm.  I do not have physical access to the computer
> this
> > > > weekend to verify that the cards are isolated, but the HVM starts and
> > > > appears to be working well.
> > > >
> > > > When do you think Xen 4.4 will be released?  The article I read
> > > mentioned it
> > > > will be released in 2014 (hinting towards the end of February).  I
> also
> > > read
> > > > 'When it is ready.'
> > > >
> > > > Any timeline would be great.
> > >
> > > I'm afraid that's about all we can give. :-)  We've locked down
> > > development for 2 months now and are working on finding and fixing
> > > bugs.  If there are no more blocker bugs or other unforeseen delays,
> > > it should be out by the end of February.  But there are necessarily
> > > significant unknowns, so we can't make any promises.
> > >
> > >  -George
> > >
>
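For anyone else hitting the same libxl message: the condition being
reported can be approximated by a short sketch. This is illustrative
only, not libxl's actual C code in libxl_pci.c, and it assumes the
standard Linux sysfs layout for PCI devices:

```python
import os

def kernel_supports_sysfs_reset(bdf):
    """Return True if the kernel exposes a sysfs reset node for the
    PCI device given as a BDF string such as '0000:05:00.0'.

    libxl__device_pci_reset prints the warning quoted above when this
    node is absent; as explained in the reply, xen-pciback performs
    the reset itself, so the warning is harmless for passthrough.
    """
    return os.path.exists('/sys/bus/pci/devices/%s/reset' % bdf)
```

On a dom0 where this returns False for a passed-through device, the
two "doesn't support reset from sysfs" lines are expected and the
device still works, exactly as observed above.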

&gt; &gt; &gt; Any timeline would be great.<br>
&gt; &gt;<br>
&gt; &gt; I&#39;m afraid that&#39;s about all we can give. :-) =A0We&#39;ve=
 locked down<br>
&gt; &gt; development for 2 months now and are working on finding and fixin=
g<br>
&gt; &gt; bugs. =A0If there are no more blocker bugs or other unforeseen de=
lays,<br>
&gt; &gt; it should be out by the end of February. =A0But there are necessa=
rily<br>
&gt; &gt; significant unknowns, so we can&#39;t make any promises.<br>
&gt; &gt;<br>
&gt; &gt; =A0-George<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>

--001a11c3c970a89d6d04f22119a9--


--===============2505890737519724385==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2505890737519724385==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 14:32:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDENN-0004KS-Fz; Tue, 11 Feb 2014 14:31:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WDD16-0004y6-QA
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 13:04:49 +0000
Received: from [85.158.143.35:22449] by server-2.bemta-4.messagelabs.com id
	0F/E7-10891-0FF1AF25; Tue, 11 Feb 2014 13:04:48 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392123885!4816768!1
X-Originating-IP: [209.85.128.175]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2512 invoked from network); 11 Feb 2014 13:04:46 -0000
Received: from mail-ve0-f175.google.com (HELO mail-ve0-f175.google.com)
	(209.85.128.175)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 13:04:46 -0000
Received: by mail-ve0-f175.google.com with SMTP id c14so5966619vea.6
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 05:04:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=aoMoXlaJMPgq/vT5dK02nozW+JA7DVS48HbeCit23Es=;
	b=a/gxMzX6ZX9onzOPBISdsGs6jweCPZ5KhWLioH30ARjcRecIbdOHTTVmvTukqcYMnV
	e6BDi4jizKeMF8RkXcry/Olr0mgT0TKikIriHR9S9ejsaHFyfQ1cGl8ItWZZSIhiAvjU
	KucxSiH99KPLI7eaZhPPSbtERmWmwY2ROU93usohi3cmY/nw1vWbPPoKXKZLkwgnEAKV
	EDZpqh6KTR+b8tRh4c865j0AEgoF3va6y2qAYHbFf/Jx9Y8WHSC5M/kQPIEaprMq437P
	1qdqHxVJqBfp+2+xxL9q6JX/ptXCq/DdKad/T3mn0yqehCJckq4NnZDZoTPZ6Cu79qpV
	w53Q==
X-Received: by 10.220.11.141 with SMTP id t13mr755045vct.30.1392123885164;
	Tue, 11 Feb 2014 05:04:45 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Tue, 11 Feb 2014 05:04:04 -0800 (PST)
In-Reply-To: <20140210184707.GA18755@phenom.dumpdata.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
	<CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
	<20140210184707.GA18755@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Tue, 11 Feb 2014 08:04:04 -0500
Message-ID: <CA+XTOOhnP4OGBeMF4m8k5JxTLN4qxZ8XhJA8bXg_HBDH1TX5kg@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Tue, 11 Feb 2014 14:31:51 +0000
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2505890737519724385=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2505890737519724385==
Content-Type: multipart/alternative; boundary=001a11c3c970a89d6d04f22119a9

--001a11c3c970a89d6d04f22119a9
Content-Type: text/plain; charset=ISO-8859-1

So I went ahead and tested the setup I am trying to achieve using Xen.  This
setup basically requires two isolated machines that can be used for network
testing.  On the HVM mentioned above, this testing fails due to something I
cannot wrap my head around.  I believe it is still related to the PCI
passthrough of a device, and related to the libxl error
mentioned above. Can anyone shed some light on what is going on?  Is it a
driver issue? (Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet)
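[Editorial note: the libxl warning quoted below in this thread is emitted when
the per-device sysfs `reset` attribute is absent. A minimal sketch of how one
might check for it, using the BDFs from the log above; the script itself is
illustrative and not from the thread:]

```shell
#!/bin/sh
# Check whether the kernel exposes the sysfs reset hook that libxl looks for.
# A missing file triggers "The kernel doesn't support reset from sysfs", but
# xen-pciback resets the function itself, so the warning is harmless.
check_sysfs_reset() {
    if [ -e "/sys/bus/pci/devices/$1/reset" ]; then
        echo "$1: kernel supports reset from sysfs"
    else
        echo "$1: no sysfs reset (libxl warns; pciback resets it anyway)"
    fi
}

check_sysfs_reset 0000:05:00.0
check_sysfs_reset 0000:05:00.1
```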

[79464.816085] ------------[ cut here ]------------
[79464.816093] WARNING: at
/build/buildd/linux-lts-raring-3.8.0/net/sched/sch_generic.c:254
dev_watchdog+0x262/0x270()
[79464.816094] Hardware name: HVM domU
[79464.816096] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 1 timed out
[79464.816096] Modules linked in: esp6(F) ah6(F) xfrm6_mode_tunnel(F)
xfrm_user(F) xfrm4_tunnel(F) tunnel4(F) ipcomp(F) xfrm_ipcomp(F) authenc(F)
esp4(F) ah4(F) xfrm4_mode_tunnel(F) deflate(F) zlib_deflate(F) ctr(F)
twofish_generic(F) twofish_avx_x86_64(F) twofish_x86_64_3way(F)
twofish_x86_64(F) twofish_common(F) camellia_generic(F)
camellia_aesni_avx_x86_64(F) camellia_x86_64(F) serpent_avx_x86_64(F)
serpent_sse2_x86_64(F) glue_helper(F) serpent_generic(F)
blowfish_generic(F) blowfish_x86_64(F) blowfish_common(F)
cast5_avx_x86_64(F) cast5_generic(F) cast_common(F) des_generic(F) xcbc(F)
rmd160(F) crypto_null(F) af_key(F) xfrm_algo(F) 8021q(F) garp(F) stp(F)
llc(F) ipmi_msghandler(F) autofs4(F) ghash_clmulni_intel(F) aesni_intel(F)
ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) cirrus(F)
ttm(F) drm_kms_helper(F) nfsd(F) nfs_acl(F) drm(F) auth_rpcgss(F)
sysimgblt(F) nfs(F) psmouse(F) sysfillrect(F) syscopyarea(F) fscache(F)
lockd(F) microcode(F) joydev(F) xen_kbdfront(F) i2c_piix4(F) serio_raw(F)
lp(F) mac_hid(F) sunrpc(F) parport(F) floppy(F) bnx2(F) [last unloaded:
ipmi_msghandler]
[79464.816139] Pid: 0, comm: swapper/1 Tainted: GF
 3.8.0-29-generic #42~precise1-Ubuntu
[79464.816140] Call Trace:
[79464.816142]  <IRQ>  [<ffffffff81059b0f>] warn_slowpath_common+0x7f/0xc0
[79464.816149]  [<ffffffff8135b9d4>] ? timerqueue_add+0x64/0xb0
[79464.816151]  [<ffffffff81059c06>] warn_slowpath_fmt+0x46/0x50
[79464.816154]  [<ffffffff81076794>] ? wake_up_worker+0x24/0x30
[79464.816157]  [<ffffffff81602062>] dev_watchdog+0x262/0x270
[79464.816160]  [<ffffffff810771f0>] ? __queue_work+0x2d0/0x2d0
[79464.816161]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
[79464.816164]  [<ffffffff8106995b>] call_timer_fn+0x3b/0x150
[79464.816167]  [<ffffffff8144f5a1>] ? add_interrupt_randomness+0x41/0x190
[79464.816170]  [<ffffffff8106b427>] run_timer_softirq+0x267/0x2c0
[79464.816173]  [<ffffffff810ee3c9>] ? handle_irq_event_percpu+0xa9/0x210
[79464.816175]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
[79464.816177]  [<ffffffff81062620>] __do_softirq+0xc0/0x240
[79464.816180]  [<ffffffff810b67cd>] ? tick_do_update_jiffies64+0x9d/0xd0
[79464.816184]  [<ffffffff816fdd5c>] call_softirq+0x1c/0x30
[79464.816188]  [<ffffffff81016775>] do_softirq+0x65/0xa0
[79464.816189]  [<ffffffff810628fe>] irq_exit+0x8e/0xb0
[79464.816193]  [<ffffffff8140a125>] xen_evtchn_do_upcall+0x35/0x50
[79464.816195]  [<ffffffff816fdeed>] xen_hvm_callback_vector+0x6d/0x80
[79464.816196]  <EOI>  [<ffffffff81084008>] ? hrtimer_start+0x18/0x20
[79464.816201]  [<ffffffff81045136>] ? native_safe_halt+0x6/0x10
[79464.816204]  [<ffffffff8101cc33>] default_idle+0x53/0x1f0
[79464.816206]  [<ffffffff8101dad9>] cpu_idle+0xd9/0x120
[79464.816209]  [<ffffffff816d10fe>] start_secondary+0xc3/0xc5
[79464.816210] ---[ end trace 48cf6b13be16e0ae ]---
[79464.816214] bnx2 0000:00:05.0 eth1: <--- start FTQ dump --->
[79464.816220] bnx2 0000:00:05.0 eth1: RV2P_PFTQ_CTL 00010000
[79464.816225] bnx2 0000:00:05.0 eth1: RV2P_TFTQ_CTL 00020000
[79464.816258] bnx2 0000:00:05.0 eth1: RV2P_MFTQ_CTL 00004000
[79464.816263] bnx2 0000:00:05.0 eth1: TBDR_FTQ_CTL 00004000
[79464.816268] bnx2 0000:00:05.0 eth1: TDMA_FTQ_CTL 00010000
[79464.816272] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
[79464.816276] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
[79464.816280] bnx2 0000:00:05.0 eth1: TPAT_FTQ_CTL 00010000
[79464.816285] bnx2 0000:00:05.0 eth1: RXP_CFTQ_CTL 00008000
[79464.816289] bnx2 0000:00:05.0 eth1: RXP_FTQ_CTL 00100000
[79464.816293] bnx2 0000:00:05.0 eth1: COM_COMXQ_FTQ_CTL 00010000
[79464.816297] bnx2 0000:00:05.0 eth1: COM_COMTQ_FTQ_CTL 00020000
[79464.816302] bnx2 0000:00:05.0 eth1: COM_COMQ_FTQ_CTL 00010000
[79464.816306] bnx2 0000:00:05.0 eth1: CP_CPQ_FTQ_CTL 00004000
[79464.816308] bnx2 0000:00:05.0 eth1: CPU states:
[79464.816321] bnx2 0000:00:05.0 eth1: 045000 mode b84c state 80001000
evt_mask 500 pc 8001288 pc 8001288 instr 38640001
[79464.816335] bnx2 0000:00:05.0 eth1: 085000 mode b84c state 80005000
evt_mask 500 pc 8000a5c pc 8000a5c instr 10400016
[79464.816349] bnx2 0000:00:05.0 eth1: 0c5000 mode b84c state 80000000
evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
[79464.816362] bnx2 0000:00:05.0 eth1: 105000 mode b8cc state 80000000
evt_mask 500 pc 8000a9c pc 8000a94 instr 8c420020
[79464.816375] bnx2 0000:00:05.0 eth1: 145000 mode b880 state 80000000
evt_mask 500 pc 8009c00 pc 800d948 instr 30420040
[79464.816389] bnx2 0000:00:05.0 eth1: 185000 mode b8cc state 80008000
evt_mask 500 pc 8000c58 pc 8000c58 instr 1092823
[79464.816392] bnx2 0000:00:05.0 eth1: <--- end FTQ dump --->
[79464.816394] bnx2 0000:00:05.0 eth1: <--- start TBDC dump --->
[79464.816399] bnx2 0000:00:05.0 eth1: TBDC free cnt: 32
[79464.816401] bnx2 0000:00:05.0 eth1: LINE     CID  BIDX   CMD  VALIDS
[79464.816410] bnx2 0000:00:05.0 eth1: 00    001000  00e8   00    [0]
[79464.816420] bnx2 0000:00:05.0 eth1: 01    001000  00e8   00    [0]
[79464.816429] bnx2 0000:00:05.0 eth1: 02    000800  afc8   00    [0]
[79464.816438] bnx2 0000:00:05.0 eth1: 03    000800  afb8   00    [0]
[79464.816447] bnx2 0000:00:05.0 eth1: 04    000800  afd8   00    [0]
[79464.816456] bnx2 0000:00:05.0 eth1: 05    000800  afe0   00    [0]
[79464.816465] bnx2 0000:00:05.0 eth1: 06    000800  afe8   00    [0]
[79464.816474] bnx2 0000:00:05.0 eth1: 07    000800  afd0   00    [0]
[79464.816485] bnx2 0000:00:05.0 eth1: 08    001000  3510   00    [0]
[79464.816494] bnx2 0000:00:05.0 eth1: 09    000800  aec0   00    [0]
[79464.816504] bnx2 0000:00:05.0 eth1: 0a    001000  3530   00    [0]
[79464.816514] bnx2 0000:00:05.0 eth1: 0b    000800  aec8   00    [0]
[79464.816523] bnx2 0000:00:05.0 eth1: 0c    000800  aed0   00    [0]
[79464.816559] bnx2 0000:00:05.0 eth1: 0d    001000  34f8   00    [0]
[79464.816570] bnx2 0000:00:05.0 eth1: 0e    001000  3500   00    [0]
[79464.816580] bnx2 0000:00:05.0 eth1: 0f    001000  3518   00    [0]
[79464.816590] bnx2 0000:00:05.0 eth1: 10    1fbc00  2fe8   7d    [0]
[79464.816599] bnx2 0000:00:05.0 eth1: 11    1ab780  fff8   7d    [0]
[79464.816608] bnx2 0000:00:05.0 eth1: 12    17ff00  b908   f7    [0]
[79464.816618] bnx2 0000:00:05.0 eth1: 13    0cb700  ff40   d7    [0]
[79464.816627] bnx2 0000:00:05.0 eth1: 14    177a80  efe0   03    [0]
[79464.816637] bnx2 0000:00:05.0 eth1: 15    037d80  9f88   72    [0]
[79464.816646] bnx2 0000:00:05.0 eth1: 16    1bae00  eef8   ce    [0]
[79464.816657] bnx2 0000:00:05.0 eth1: 17    1bbc80  a7f8   df    [0]
[79464.816666] bnx2 0000:00:05.0 eth1: 18    17e180  6aa8   e4    [0]
[79464.816675] bnx2 0000:00:05.0 eth1: 19    07ff80  6e50   dd    [0]
[79464.816683] bnx2 0000:00:05.0 eth1: 1a    1fda80  f790   6e    [0]
[79464.816694] bnx2 0000:00:05.0 eth1: 1b    151580  d7b0   fc    [0]
[79464.816703] bnx2 0000:00:05.0 eth1: 1c    1b9f80  cef8   6b    [0]
[79464.816712] bnx2 0000:00:05.0 eth1: 1d    1ebf00  ffa8   df    [0]
[79464.816723] bnx2 0000:00:05.0 eth1: 1e    1e7e00  ff78   ef    [0]
[79464.816731] bnx2 0000:00:05.0 eth1: 1f    166e80  fbd8   aa    [0]
[79464.816733] bnx2 0000:00:05.0 eth1: <--- end TBDC dump --->
[79464.817101] bnx2 0000:00:05.0 eth1: DEBUG: intr_sem[0] PCI_CMD[00100406]
[79464.817239] bnx2 0000:00:05.0 eth1: DEBUG: PCI_PM[19002008]
PCI_MISC_CFG[92000088]
[79464.817248] bnx2 0000:00:05.0 eth1: DEBUG: EMAC_TX_STATUS[00000008]
EMAC_RX_STATUS[00000000]
[79464.817252] bnx2 0000:00:05.0 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
[79464.817257] bnx2 0000:00:05.0 eth1: DEBUG:
HC_STATS_INTERRUPT_STATUS[01fc0003]
[79464.817261] bnx2 0000:00:05.0 eth1: DEBUG: PBA[00000003]
[79464.817264] bnx2 0000:00:05.0 eth1: <--- start MCP states dump --->
[79464.817270] bnx2 0000:00:05.0 eth1: DEBUG: MCP_STATE_P0[0003611e]
MCP_STATE_P1[0003611e]
[79464.817278] bnx2 0000:00:05.0 eth1: DEBUG: MCP mode[0000b880]
state[80000000] evt_mask[00000500]
[79464.817286] bnx2 0000:00:05.0 eth1: DEBUG: pc[08003ebc] pc[0800d994]
instr[32020020]
[79464.817288] bnx2 0000:00:05.0 eth1: DEBUG: shmem states:
[79464.817296] bnx2 0000:00:05.0 eth1: DEBUG: drv_mb[01030027]
fw_mb[00000027] link_status[0008506b]
[79464.817301]  drv_pulse_mb[0000338a]
[79464.817306] bnx2 0000:00:05.0 eth1: DEBUG: dev_info_signature[44564907]
reset_type[01005254]
[79464.817310]  condition[0003611e]
[79464.817318] bnx2 0000:00:05.0 eth1: DEBUG: 000001c0: 01005254 42530083
0003611e 00000000
[79464.817328] bnx2 0000:00:05.0 eth1: DEBUG: 000003cc: 00000000 00000000
00000000 00000000
[79464.817357] bnx2 0000:00:05.0 eth1: DEBUG: 000003dc: 00000000 00000000
00000000 00000000
[79464.817367] bnx2 0000:00:05.0 eth1: DEBUG: 000003ec: 00000000 00000000
00000000 00000000
[79464.817372] bnx2 0000:00:05.0 eth1: DEBUG: 0x3fc[00000000]
[79464.817374] bnx2 0000:00:05.0 eth1: <--- end MCP states dump --->
[79464.872988] bnx2 0000:00:05.0 eth1: NIC Copper Link is Down



On Mon, Feb 10, 2014 at 1:47 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote:
> > Thanks for the answers on the timeline.
> >
> > When I start the HVM with the Broadcom adapter, I get this message back.
> > Parsing config from /etc/xen/ubuntu-hvm-1.cfg
> > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > support reset from sysfs for PCI device 0000:05:00.0
> > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > support reset from sysfs for PCI device 0000:05:00.1
> >
> > However, the devices appear in the HVM.  Is this something that I should
> be
> > concerned about?
>
> No. Xen pciback does the reset automatically.
>
> Actually we might want to ditch that reporting in libxl, or maybe just
> implement a stub function in xen-pciback so that libxl will be happy.
>
> >
> >
> >
> > On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu>
> wrote:
> >
> > > On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> > > <mikeneiderhauser@gmail.com> wrote:
> > > > Works like a charm.  I do not have physical access to the computer
> this
> > > > weekend to verify that the cards are isolated, but the HVM starts and
> > > > appears to be working well.
> > > >
> > > > When do you think Xen 4.4 will be released?  The article I read
> > > mentioned it
> > > > will be released in 2014 (hinting towards the end of February).  I
> also
> > > read
> > > > 'When it is ready.'
> > > >
> > > > Any timeline would be great.
> > >
> > > I'm afraid that's about all we can give. :-)  We've locked down
> > > development for 2 months now and are working on finding and fixing
> > > bugs.  If there are no more blocker bugs or other unforeseen delays,
> > > it should be out by the end of February.  But there are necessarily
> > > significant unknowns, so we can't make any promises.
> > >
> > >  -George
> > >
>
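[Editorial note: for reference, the passthrough setup under discussion reduces
to listing both functions of the NIC in the guest config. A minimal
illustrative fragment (device BDFs taken from the errors above; all other
values are placeholders):]

```
# /etc/xen/ubuntu-hvm-1.cfg (illustrative fragment only)
builder = 'hvm'
name    = 'ubuntu-hvm-1'
# Both functions of the BCM5716 are assigned; they must be bound to
# xen-pciback (e.g. via `xl pci-assignable-add`) before the domain starts.
pci = [ '05:00.0', '05:00.1' ]
```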

--001a11c3c970a89d6d04f22119a9--


--===============2505890737519724385==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2505890737519724385==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 14:33:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDEPI-0004ea-Mp; Tue, 11 Feb 2014 14:33:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDEPG-0004eQ-GY
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 14:33:50 +0000
Received: from [85.158.139.211:52984] by server-1.bemta-5.messagelabs.com id
	F6/41-12859-DC43AF25; Tue, 11 Feb 2014 14:33:49 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392129228!3161861!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16779 invoked from network); 11 Feb 2014 14:33:48 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 14:33:48 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDEPC-0003pS-9X; Tue, 11 Feb 2014 14:33:46 +0000
Date: Tue, 11 Feb 2014 15:33:46 +0100
From: Tim Deegan <tim@xen.org>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140211143346.GE10482@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FA328C.4000103@linaro.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, keir@xen.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:24 +0000 on 11 Feb (1392125052), Julien Grall wrote:
> (Add George as release manager)
> 
> On 11/02/14 13:59, Tim Deegan wrote:
> > At 13:20 +0000 on 11 Feb (1392121252), Julien Grall wrote:
> >>
> >>
> >> On 11/02/14 12:59, Tim Deegan wrote:
> >>> Are you using a very old version of clang?  As 06a9c7e points out,
> >>> our current runes didn't work before clang v3.0.
> >>
> >> I'm using clang 3.5 (which has other issues compiling Xen), but I have
> >> also tried 3.0 and 3.3.
> >>
> >>> If not, rather than chasing this around any further, I think we should
> >>> abandon trying to use the compiler-provided headers even on linux.
> >>> Does this patch fix your build issue?
> >>>
> >>> commit e7003f174e0df9192dde6fa8d33b0a20f99ce053
> >>> Author: Tim Deegan <tim@xen.org>
> >>> Date:   Tue Feb 11 12:44:09 2014 +0000
> >>>
> >>>       xen: stop trying to use the system <stdarg.h> and <stdbool.h>
> >>
> >> With this patch, -iwithprefix include is no longer necessary. I wonder
> >> if we can remove it from the command line.
> >
> > Yes, I think so.
> >
> >>>       We already have our own versions of the stdarg/stdbool definitions, for
> >>>       systems where those headers are installed in /usr/include.
> >>>
> >>>       On linux, they're typically installed in compiler-specific paths, but
> >>>       finding them has proved unreliable.  Drop that and use our own versions
> >>>       everywhere.
> >>>
> >>>       Signed-off-by: Tim Deegan <tim@xen.org>
> >>
> >> This patch works fine for building Xen with clang 3.0 and 3.3.
> >> I still have other issues building with clang 3.5.
> >>
> >> Tested-by: Julien Grall <julien.grall@linaro.org>
> >
> > Great!  Assuming you'll have a series of patches to fix the clang-3.5
> > build, do you want to just take this into that series, and drop the
> > -iwithprefix at the same time?
> 
> If possible, I'd like this patch to go into Xen 4.4 to fix the build with
> official versions of clang (up to 3.4).
> 
> Clang 3.5 is still under development, so I don't think it's important to 
> have support for it in Xen 4.4.

Fair enough.  In that case it needs a release ack from George.  It:
 - fixes a compile issue on some versions of clang;
 - might cause a regression with other compilers, but the regression
   is likely to be obvious (i.e. a compile-time failure).

And it needs an ack from Keir for changing common code.

v2 is below, removing "-iwithprefix".  I've kept your tested-by; hope
that's OK.

Cheers,

Tim.

commit 1d62fcb9ad8d2b409ac2cf0e8a3824e19ca3313f
Author: Tim Deegan <tim@xen.org>
Date:   Tue Feb 11 12:44:09 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>

diff --git a/xen/Rules.mk b/xen/Rules.mk
index df1428f..3a6cec5 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -44,10 +44,7 @@ ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
 CFLAGS += -fno-builtin -fno-common
 CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
 CFLAGS += -pipe -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
-# Solaris puts stdarg.h &c in the system include directory.
-ifneq ($(XEN_OS),SunOS)
-CFLAGS += -nostdinc -iwithprefix include
-endif
+CFLAGS += -nostdinc
 
 CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
 CFLAGS-$(FLASK_ENABLE)  += -DFLASK_ENABLE -DXSM_MAGIC=0xf97cff8c
diff --git a/xen/include/xen/stdarg.h b/xen/include/xen/stdarg.h
index d1b2540..0283f06 100644
--- a/xen/include/xen/stdarg.h
+++ b/xen/include/xen/stdarg.h
@@ -1,23 +1,21 @@
 #ifndef __XEN_STDARG_H__
 #define __XEN_STDARG_H__
 
-#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
-   typedef __builtin_va_list va_list;
-#  ifdef __GNUC__
-#    define __GNUC_PREREQ__(x, y)                                       \
-        ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
-         (__GNUC__ > (x)))
-#  else
-#    define __GNUC_PREREQ__(x, y)   0
-#  endif
-#  if !__GNUC_PREREQ__(4, 5)
-#    define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
-#  endif
-#  define va_start(ap, last)    __builtin_va_start((ap), (last))
-#  define va_end(ap)            __builtin_va_end(ap)
-#  define va_arg                __builtin_va_arg
+#ifdef __GNUC__
+#  define __GNUC_PREREQ__(x, y)                                       \
+      ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
+       (__GNUC__ > (x)))
 #else
-#  include <stdarg.h>
+#  define __GNUC_PREREQ__(x, y)   0
 #endif
 
+#if !__GNUC_PREREQ__(4, 5)
+#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
+#endif
+
+typedef __builtin_va_list va_list;
+#define va_start(ap, last)    __builtin_va_start((ap), (last))
+#define va_end(ap)            __builtin_va_end(ap)
+#define va_arg                __builtin_va_arg
+
 #endif /* __XEN_STDARG_H__ */
diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
index f0faedf..b0947a6 100644
--- a/xen/include/xen/stdbool.h
+++ b/xen/include/xen/stdbool.h
@@ -1,13 +1,9 @@
 #ifndef __XEN_STDBOOL_H__
 #define __XEN_STDBOOL_H__
 
-#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
-#  define bool _Bool
-#  define true 1
-#  define false 0
-#  define __bool_true_false_are_defined   1
-#else
-#  include <stdbool.h>
-#endif
+#define bool _Bool
+#define true 1
+#define false 0
+#define __bool_true_false_are_defined   1
 
 #endif /* __XEN_STDBOOL_H__ */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 14:35:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 14:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDEQk-0004lh-LO; Tue, 11 Feb 2014 14:35:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDEQe-0004l7-Qt
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 14:35:19 +0000
Received: from [193.109.254.147:64495] by server-9.bemta-14.messagelabs.com id
	7F/03-24895-4253AF25; Tue, 11 Feb 2014 14:35:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392129315!3563434!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11901 invoked from network); 11 Feb 2014 14:35:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 11 Feb 2014 14:35:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 11 Feb 2014 14:35:07 +0000
Message-Id: <52FA4331020000780011B32F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 11 Feb 2014 14:35:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
	<52FA344B020000780011B276@nat28.tlf.novell.com>
	<20140211135731.GA10482@deinos.phlegethon.org>
In-Reply-To: <20140211135731.GA10482@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 14:57, Tim Deegan <tim@xen.org> wrote:
> At 13:31 +0000 on 11 Feb (1392121899), Jan Beulich wrote:
>> >>> On 11.02.14 at 14:15, Tim Deegan <tim@xen.org> wrote:
>> > Or do we have to treat 'masked in REG_B/IOAPIC' differently from
>> > 'masked by ISR/TPR/RFLAGS.IF/...'?
>> 
>> ... this might be the right direction (albeit I think it would be REG_B
>> on one side and collectively IOAPIC/ISR/TPR/EFLAGS.IF).
> 
> Yeah, maybe.
> 
> Something like, if !PIE then we set PF and consume the tick in the
> early vpt callback, otherwise we do it in the late callback (and in
> both cases, the early vpt callback doesn't actually assert the line)?

Yes. Question is how intrusive a change that would be (namely to
have the two callbacks deal correctly with an RTC register - in
particular PIE - changing between them being called).

> Or: we drop the early callback, and go back to setting PF|IRQF in the
> pt_intr_post callback (much like the patch under discussion), and
> disable the vpt when the guest clears PIE.  Then in the REG_C read, if
> !PIE, we can set the PF bit if a tick should have happened since
> the last read (but saving ourselves the bother of running the actual
> timer).  Then the only case where things are wrong is an OS which
> _both_ polls for the edge _and_ relies on the vpt logic for adding the
> right number of ticks after being descheduled.  Or an OS which asks
> for interrupts, masks them in the *APIC/RFLAGS and then polls for PF.
> But that guest, I think, is indistinguishable from w2k3 in the stuck
> state.
> 
> Actually that second patch doesn't sound too bad.  If that sounds OK
> to you I can look into coding it up on Thursday.

That second one might be okay too, as long as it is clearly written
down in at least the commit message which modes we expect to
work and which not.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:02:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDEqb-0006FD-13; Tue, 11 Feb 2014 15:02:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WDEqY-0006F8-Co
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:02:03 +0000
Received: from [85.158.143.35:29731] by server-2.bemta-4.messagelabs.com id
	93/D8-10891-96B3AF25; Tue, 11 Feb 2014 15:02:01 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392130920!4863833!1
X-Originating-IP: [209.85.215.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8054 invoked from network); 11 Feb 2014 15:02:00 -0000
Received: from mail-ea0-f175.google.com (HELO mail-ea0-f175.google.com)
	(209.85.215.175)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:02:00 -0000
Received: by mail-ea0-f175.google.com with SMTP id n15so973866ead.6
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 07:02:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:date:subject:from:to:cc:message-id:thread-topic
	:thread-index:in-reply-to:mime-version:content-type
	:content-transfer-encoding;
	bh=7Ob82QOlSu3Wq53s2TyncCX490nPaApGUyRMtIdzUTw=;
	b=YKgD6Sh6Mr4+kXOxw/C84ks9jw3jjI9I9bRrrTLqWkDhMxz/7m3LeUj2FSXM//33S+
	CSmd1kPqyEa0dysIXBb/HG0X4YdoRVOl7SM1xJKS9xRXOR/2OpVdimZJn3HWG3C9/KHU
	HDw8qyMQ7gC5RhdZW5Lvi34u9+vWa/rzXF9bJJ5uS590ak3+RbvPdSxpIcs2v+z2ikk6
	YXEKsNnmMsRNIu7F1zMQCpztKD9dGDLm5DZjHb3NPJiMAq3W5J9VwQXCS3PNaGPI+fZd
	yxp5Sw/I7Rk6dXiSfRmKSUeVYq1tpx+DWbsa8K919T9e5aE2ElVyWLOUudzqNWH1UQRD
	jjlw==
X-Received: by 10.14.204.9 with SMTP id g9mr3736894eeo.82.1392130920375;
	Tue, 11 Feb 2014 07:02:00 -0800 (PST)
Received: from [192.168.1.3] (host86-139-172-165.range86-139.btcentralplus.com.
	[86.139.172.165])
	by mx.google.com with ESMTPSA id g1sm68109571eet.6.2014.02.11.07.01.56
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 07:01:59 -0800 (PST)
User-Agent: Microsoft-Entourage/12.36.0.130206
Date: Tue, 11 Feb 2014 15:01:53 +0000
From: Keir Fraser <keir.xen@gmail.com>
To: Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@linaro.org>
Message-ID: <CF1FEBE1.51E26%keir.xen@gmail.com>
Thread-Topic: [PATCH] xen: Don't use -nostdinc flags with CLANG
Thread-Index: Ac8nOjK/+RBhp7SP3U2t9zupV5YXoA==
In-Reply-To: <20140211143346.GE10482@deinos.phlegethon.org>
Mime-version: 1.0
Cc: xen-devel@lists.xenproject.org, Ian.Jackson@eu.citrix.com,
	ian.campbell@citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/2014 14:33, "Tim Deegan" <tim@xen.org> wrote:

> Fair enough.  In that case it needs a release ack from George.  It:
>  - fixes a compile issue on some versions of clang;
>  - might cause a regression with other compilers, but the regression
>    is likely to be obvious (i.e. a compile-time failure).
> 
> And it needs an ack from Keir, for changing common code.
> 
> v2 is below, removing "-iwithprefix".  I've kept your tested-by; hope
> that's OK.
> 
> Cheers,
> 
> Tim.
> 
> commit 1d62fcb9ad8d2b409ac2cf0e8a3824e19ca3313f
> Author: Tim Deegan <tim@xen.org>
> Date:   Tue Feb 11 12:44:09 2014 +0000
> 
>     xen: stop trying to use the system <stdarg.h> and <stdbool.h>
>     
>     We already have our own versions of the stdarg/stdbool definitions, for
>     systems where those headers are installed in /usr/include.
>     
>     On linux, they're typically installed in compiler-specific paths, but
>     finding them has proved unreliable.  Drop that and use our own versions
>     everywhere.
>     
>     Signed-off-by: Tim Deegan <tim@xen.org>
>     Tested-by: Julien Grall <julien.grall@linaro.org>

I'm fine with the principle of it. I don't know how risky it is for
4.4.

Acked-by: Keir Fraser <keir@xen.org>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:07:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDEvV-0006cd-Kz; Tue, 11 Feb 2014 15:07:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDEvT-0006cU-Kp
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 15:07:07 +0000
Received: from [85.158.139.211:12915] by server-9.bemta-5.messagelabs.com id
	6F/FD-11237-A9C3AF25; Tue, 11 Feb 2014 15:07:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392131224!3172349!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24692 invoked from network); 11 Feb 2014 15:07:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:07:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101619298"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:06:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:06:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDEvL-0000Oi-0z;
	Tue, 11 Feb 2014 15:06:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDEvK-0001o0-QO;
	Tue, 11 Feb 2014 15:06:58 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.15506.465097.940904@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 15:06:58 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392122520-5453-2-git-send-email-ian.campbell@citrix.com>
References: <1392122393.26657.102.camel@kazak.uk.xensource.com>
	<1392122520-5453-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST 2/2] cr-daily-branch: make sure we
	test the correct tree for Linux branches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST 2/2] cr-daily-branch: make sure we test the correct tree for Linux branches"):
> These branches should test the specific Linux tree which they
> correspond to, and so should not apply the per-arch overrides, which
> are only intended to pick up an already-verified Linux branch for use
> when testing some other non-Linux branch.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> ---
> Applied on top of a revert of a528b10cd6a552e92ed2b1b809c991a421ebdba6.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:09:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDExU-0006lZ-AZ; Tue, 11 Feb 2014 15:09:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDExT-0006lT-1C
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 15:09:11 +0000
Received: from [85.158.143.35:57768] by server-3.bemta-4.messagelabs.com id
	98/EA-11539-61D3AF25; Tue, 11 Feb 2014 15:09:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392131348!4857596!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27307 invoked from network); 11 Feb 2014 15:09:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:09:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101620668"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:09:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 10:09:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDEhF-00046A-Ay;
	Tue, 11 Feb 2014 14:52:25 +0000
Message-ID: <52FA3929.70100@citrix.com>
Date: Tue, 11 Feb 2014 14:52:25 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
	<52FA2CAA.7000303@citrix.com>
	<20140211141034.GC10482@deinos.phlegethon.org>
In-Reply-To: <20140211141034.GC10482@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 14:10, Tim Deegan wrote:
> At 13:59 +0000 on 11 Feb (1392123546), Andrew Cooper wrote:
>> On 11/02/14 13:15, Tim Deegan wrote:
>>> At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
>>>>>>> On 11.02.14 at 13:11, Tim Deegan <tim@xen.org> wrote:
>>>>> At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
>>>>>>>>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
>>>>>>> That is the main change of this cset:  we go back to driving
>>>>>>> the interrupt from the vpt code and fixing up the RTC state after vpt
>>>>>>> tells us it's injected an interrupt.
>>>>>> And that's what is wrong imo, as it doesn't allow driving PF correctly
>>>>>> when !PIE.
>>>>> Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
>>>>> you remember why not?  Have I forgotten some wrinkle or race here?
>>>> Because an OS could inspect PF without setting PIE.
>>> Ugh. :( 
>>>
>>>>>>> Yeah, this has nothing to do with the bug being fixed here.  The old
>>>>>>> REG_C read was operating correctly, but on the return-to-guest path:
>>>>>>>  - vpt sees another RTC interrupt is due and calls RTC code
>>>>>>>  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
>>>>>>>  - vlapic code sees the last interrupt is still in the ISR and does
>>>>>>>    nothing;
>>>>>>>  - we return to the guest having set IRQF but not consumed a timer
>>>>>>>    event, so vpt state is the same
>>>>>>>  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
>>>>>>>    waiting for a read of 0.
>>>>>>>  - repeat forever.
>>>>>> Which would call for a flag suppressing the setting of PF|IRQF
>>>>>> until the timer event got consumed. Possibly with some safety
>>>>>> belt for this to not get deferred indefinitely (albeit if the interrupt
>>>>>> doesn't get injected for extended periods of time, the guest
>>>>>> would presumably have more severe problems than these flags
>>>>>> not getting updated as expected).
>>>>> That's pretty much what we're doing here -- the pt_intr_post callback
>>>>> sets PF|IRQF when the interrupt is injected.
>>>> Right, except you do this by reverting other stuff rather than
>>>> adding the missing functionality on top.
>>> Absolutely -- because once we went back to having PF set only when the
>>> interrupt was injected, it seemed better to reduce the amount of
>>> special-case plumbing for RTC than to add yet more.
>>>
>>> But for the case of an OS polling for PF with PIE clear, I guess we
>>> might need to keep all the current special cases.  Was that a known
>>> observed bug or a theoretical one?  I can't see a way of handling
>>> both that case and the w2k3 case.
>>>
>>> Either we always set PF when the tick happens, even if the interrupt
>>> is masked (which breaks w2k3) or we don't set it until we can deliver
>>> the interrupt (which breaks pollers).
>> This doesn't break w2k3.  Setting PF when a tick happens (or should
>> happen for !PIE) is the correct thing to do.
>>
>> The bug is that we see an interrupt pending and set PF when we
>> shouldn't
> We _are_ setting PF when the tick happens; it's just that because of
> no-missed-ticks mode the tick happens before w2k3 has finished
> handling the last one.  At that point, anything we do breaks w2k3 in
> some way -- either we leave the tick pending until the interrupt is
> actually delivered (which leads to the hang) or we consume the tick
> even though the interrupt will be lost (which causes clock drift).
>
> Tim.

No - we are setting PF on every vmentry, not every tick.

* pt_update_irq() finds the timer pending and decides to inject an
interrupt.  This sets REG_C.PF
* {svm,vmx}_intr_assist() bails early because it can't actually inject
the interrupt.
* pt_intr_post() doesn't run, so the PT state isn't updated; and
because pt_update_irq() thought it was injecting an interrupt, it
didn't run its faked-up pt_intr_post() either.
* Next VMentry finds an erroneously pending tick and decides to inject
an interrupt.

If w2k3 were to repeatedly read REG_C alone without writing to the index
register in-between, it would observe PF being alternately set and clear.


w2k3 would still work perfectly fine if we only set PF when actually
injecting the interrupt, which is why the patch at the root of this
thread fixes the observed hang.  However, Jan's comment about the
behaviour of the PF bit when !PIE is quite correct.

Therefore, for !PIE, PF must be updated ahead of time, but for PIE, it
must be updated when the interrupt is actually injected.

~Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:09:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDExU-0006lZ-AZ; Tue, 11 Feb 2014 15:09:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDExT-0006lT-1C
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 15:09:11 +0000
Received: from [85.158.143.35:57768] by server-3.bemta-4.messagelabs.com id
	98/EA-11539-61D3AF25; Tue, 11 Feb 2014 15:09:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392131348!4857596!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27307 invoked from network); 11 Feb 2014 15:09:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:09:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101620668"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:09:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 10:09:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDEhF-00046A-Ay;
	Tue, 11 Feb 2014 14:52:25 +0000
Message-ID: <52FA3929.70100@citrix.com>
Date: Tue, 11 Feb 2014 14:52:25 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392031068-21943-1-git-send-email-andrew.cooper3@citrix.com>
	<52F90D8C020000780011AD54@nat28.tlf.novell.com>
	<20140210172140.GA34003@deinos.phlegethon.org>
	<52F9F838020000780011B0AF@nat28.tlf.novell.com>
	<20140211121121.GC97288@deinos.phlegethon.org>
	<52FA2AC1020000780011B1DF@nat28.tlf.novell.com>
	<20140211131509.GF97288@deinos.phlegethon.org>
	<52FA2CAA.7000303@citrix.com>
	<20140211141034.GC10482@deinos.phlegethon.org>
In-Reply-To: <20140211141034.GC10482@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>, KeirFraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [Patch] x86/HVM: Fix RTC interrupt modelling
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 14:10, Tim Deegan wrote:
> At 13:59 +0000 on 11 Feb (1392123546), Andrew Cooper wrote:
>> On 11/02/14 13:15, Tim Deegan wrote:
>>> At 12:50 +0000 on 11 Feb (1392119457), Jan Beulich wrote:
>>>>>>> On 11.02.14 at 13:11, Tim Deegan <tim@xen.org> wrote:
>>>>> At 09:15 +0000 on 11 Feb (1392106520), Jan Beulich wrote:
>>>>>>>>> On 10.02.14 at 18:21, Tim Deegan <tim@xen.org> wrote:
>>>>>>> That is the main change of this cset:  we go back to driving
>>>>>>> the interrupt from the vpt code and fixing up the RTC state after vpt
>>>>>>> tells us it's injected an interrupt.
>>>>>> And that's what is wrong imo, as it doesn't allow driving PF correctly
>>>>>> when !PIE.
>>>>> Oh, I see -- the current code doesn't turn the vpt off when !PIE.  Can
>>>>> you remember why not?  Have I forgotten some wrinkle or race here?
>>>> Because an OS could inspect PF without setting PIE.
>>> Ugh. :( 
>>>
>>>>>>> Yeah, this has nothing to do with the bug being fixed here.  The old
>>>>>>> REG_C read was operating correctly, but on the return-to-guest path:
>>>>>>>  - vpt sees another RTC interrupt is due and calls RTC code
>>>>>>>  - RTC code sees REG_C clear, sets PF|IRQF and asserts the line
>>>>>>>  - vlapic code sees the last interrupt is still in the ISR and does
>>>>>>>    nothing;
>>>>>>>  - we return to the guest having set IRQF but not consumed a timer
>>>>>>>    event, so vpt stste is the same
>>>>>>>  - the guest sees the old REG_C, with PF|IRQF set, and re-reads, 
>>>>>>>    waiting for a read of 0.
>>>>>>>  - repeat forever.
>>>>>> Which would call for a flag suppressing the setting of PF|IRQF
>>>>>> until the timer event got consumed. Possibly with some safety
>>>>>> belt for this to not get deferred indefinitely (albeit if the interrupt
>>>>>> doesn't get injected for extended periods of time, the guest
>>>>>> would presumably have more severe problems than these flags
>>>>>> not getting updated as expected).
>>>>> That's pretty much what we're doing here -- the pt_intr_post callback
>>>>> sets PF|IRQF when the interrupt is injected.
>>>> Right, except you do this be reverting other stuff rather than
>>>> adding the missing functionality on top.
>>> Absolutely -- because once we went back to having PF set only when the
>>> interrupt was injected, it seemed better to reduce the amount of
>>> special-case plumbing for RTC than to add yet more.
>>>
>>> But for the case of an OS polling for PF with PIE clear, I guess we
>>> might need to keep all the current special cases.  Was that a known
>>> observed bug or a theoretical one?  I can't see a way of handling
>>> both that case and the w2k3 case.
>>>
>>> Either we always set PF when the tick happens, even if the interrupt
>>> is masked (which breaks w2k3) or we don't set it until we can deliver
>>> the interrupt (which breaks pollers).
>> This doesn't break w2k3.  Setting PF when a tick happens (or should
>> happen for !PIE) is the correct thing to do.
>>
>> The bug is that we see an interrupt pending and set PF when we
>> shouldn't
> We _are_ setting PF when the tick happens; it's just that because of
> no-missed-ticks mode the tick happens before w2k3 has finished
> handling the last one.  At that point, anything we do breaks w2k3 in
> some way -- either we leave the tick pending until the interrupt is
> actually delivered (which leads to the hang) or we consume the tick
> even though the interrupt will be lost (which causes clock drift).
>
> Tim.

No - we are setting PF on every vmentry, not every tick.

* pt_update_irq() finds the timer pending and decides to inject an
interrupt.  This sets REG_C.PF.
* {svm,vmx}_intr_assist() bails early because it can't actually inject
the interrupt.
* pt_intr_post() doesn't run, so the vpt state isn't updated; and
because pt_update_irq() thought it was injecting the interrupt, it
skipped its faked-up pt_intr_post() path as well.
* The next VMentry finds an erroneously pending tick and decides to
inject an interrupt.

If w2k3 were to repeatedly read REG_C alone without writing to the index
register in-between, it would observe PF being alternately set and clear.


w2k3 would still work perfectly fine if we only set PF when actually
injecting the interrupt, which is why the patch at the root of this
thread fixes the observed hang.  However, Jan's comment about the
behaviour of the PF bit when !PIE is quite correct.

Therefore, for !PIE, PF must be updated ahead of time, but for PIE, it
must be updated when the interrupt is actually injected.

~Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:12:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDF0O-000782-2y; Tue, 11 Feb 2014 15:12:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDF0M-00077b-63
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 15:12:10 +0000
Received: from [85.158.143.35:5799] by server-3.bemta-4.messagelabs.com id
	1E/F0-11539-9CD3AF25; Tue, 11 Feb 2014 15:12:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392131526!4843707!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16692 invoked from network); 11 Feb 2014 15:12:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:12:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99811102"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:11:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 10:11:52 -0500
Message-ID: <1392131512.26657.131.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 15:11:52 +0000
In-Reply-To: <21242.15506.465097.940904@mariner.uk.xensource.com>
References: <1392122393.26657.102.camel@kazak.uk.xensource.com>
	<1392122520-5453-2-git-send-email-ian.campbell@citrix.com>
	<21242.15506.465097.940904@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST 2/2] cr-daily-branch: make sure we
 test the correct tree for Linux branches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 15:06 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST 2/2] cr-daily-branch: make sure we test the correct tree for Linux branches"):
> > These branches should test the specific Linux tree they correspond to, and so
> > should not apply the per-arch overrides which are only intended to
> > be used to pick up an already verified tested Linux branch for use
> > when testing some other non-linux branch.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > ---
> > Applied on top of a revert of a528b10cd6a552e92ed2b1b809c991a421ebdba6.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks. I rewound pretest from a528b and stuck this on top as discussed.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDx-00080F-GL; Tue, 11 Feb 2014 15:26:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDv-0007zx-Sx
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:12 +0000
Received: from [85.158.139.211:61579] by server-12.bemta-5.messagelabs.com id
	6D/CA-15415-3114AF25; Tue, 11 Feb 2014 15:26:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392132369!3218677!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6929 invoked from network); 11 Feb 2014 15:26:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627168"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDn-0000VX-0O;
	Tue, 11 Feb 2014 15:26:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-00020G-Py;
	Tue, 11 Feb 2014 15:26:02 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:52 +0000
Message-ID: <1392132354-7594-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 10/12] ts-guests-nbd-mirror: set
	"oldstyle=true"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Newer NBDs (wheezy's and later) need a config option to say we're
using the old-style port-based addressing.  At some point we will
probably switch to using the new-style addressing, but not yet.
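For reference, the generated server config then looks roughly like the
following sketch of /etc/nbd-server/config (the export section name,
path and port are illustrative, made up for this example; `oldstyle`
selects port-based rather than name-based export addressing):

```
[generic]
    user = root
    oldstyle = true

# Old-style: each export is addressed by the TCP port the
# client connects to, not by an export name.
[export1]
    exportname = /dev/vg/lv
    port = 4000
```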

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-guests-nbd-mirror |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/ts-guests-nbd-mirror b/ts-guests-nbd-mirror
index af200f4..ee7ff60 100755
--- a/ts-guests-nbd-mirror
+++ b/ts-guests-nbd-mirror
@@ -59,6 +59,9 @@ sub configserver () {
 [generic]
     user = root
 END
+    $scfg .= <<END unless $sho->{Suite} =~ m/sarge|lenny|squeeze/;
+    oldstyle = true
+END
     foreach my $v (@vols) {
 	$v->{Port}= unique_incrementing_runvar("${srvhost}_nextport",4000);
 	$v->{Path}= "/dev/$v->{Gho}{Vg}/$v->{Lv}";
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDo-0007z8-14; Tue, 11 Feb 2014 15:26:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDm-0007yx-2i
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:02 +0000
Received: from [85.158.139.211:12108] by server-17.bemta-5.messagelabs.com id
	BE/48-31975-9014AF25; Tue, 11 Feb 2014 15:26:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392132358!3194965!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31939 invoked from network); 11 Feb 2014 15:26:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99816987"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:25:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:25:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDh-0000V0-RX;
	Tue, 11 Feb 2014 15:25:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDh-0001zT-Ix;
	Tue, 11 Feb 2014 15:25:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:42 +0000
Message-ID: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 00/12] Wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Workarounds for bugs and incompatibilities in wheezy:
 01/12 ts-xen-build-prep: avoid lvextend segfault
 10/12 ts-guests-nbd-mirror: set "oldstyle=true"

A bunch of bugfixes:
 02/12 ts-kernel-build: force CONFIG_BLK_DEV_NBD=y
 03/12 ts-host-install: set `IPAPPEND 2' (if interface isn't forced)
 04/12 ts-xen-install: nodhcp: restructure
 05/12 ts-xen-install: default the interface to the
 06/12 TestSupport: break out target_run_apt
 07/12 TestSupport: Suppress prompting by apt
 08/12 ts-guests-nbd-mirror: purge old packages first
 09/12 ts-guests-nbd-mirror: add checkaccessible test

And switch the default suite:
 11/12 Debian: Switch to wheezy
 12/12 make-flight: abolish special-casing of suite


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFE1-00082c-3N; Tue, 11 Feb 2014 15:26:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDz-00081G-I3
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:15 +0000
Received: from [85.158.143.35:52827] by server-1.bemta-4.messagelabs.com id
	DC/58-31661-6114AF25; Tue, 11 Feb 2014 15:26:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392132366!4863033!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25956 invoked from network); 11 Feb 2014 15:26:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627171"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDn-0000Va-5c;
	Tue, 11 Feb 2014 15:26:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-00020L-W9;
	Tue, 11 Feb 2014 15:26:03 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:53 +0000
Message-ID: <1392132354-7594-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <iwj@woking.cam.xci-test.com>
Subject: [Xen-devel] [OSSTEST PATCH 11/12] Debian: Switch to wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <iwj@woking.cam.xci-test.com>

This involves updating the d-i version too.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest.pm        |    2 +-
 production-config |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest.pm b/Osstest.pm
index 9ba2882..8df0c15 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -59,7 +59,7 @@ our %c = qw(
     Logs logs
     Results results
 
-    DebianSuite squeeze
+    DebianSuite wheezy
     DebianMirrorSubpath debian
 
     TestHostKeypairPath id_rsa_osstest
diff --git a/production-config b/production-config
index a0e73cb..1ee0ba0 100644
--- a/production-config
+++ b/production-config
@@ -71,7 +71,7 @@ TftpPxeDir /
 TftpPxeTemplates %ipaddrhex%/pxelinux.cfg
 
 TftpPxeGroup osstest
-TftpDiVersion 2013-09-23
+TftpDiVersion 2013-12-12
 
 XenUsePath /usr/groups/xencore/systems/bin/xenuse
 XenUseUser osstest
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDq-0007zV-OZ; Tue, 11 Feb 2014 15:26:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDo-0007z7-F5
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:04 +0000
Received: from [85.158.139.211:34713] by server-12.bemta-5.messagelabs.com id
	C5/9A-15415-B014AF25; Tue, 11 Feb 2014 15:26:03 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392132358!3194965!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32261 invoked from network); 11 Feb 2014 15:26:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99817006"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:01 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDl-0000VF-S3;
	Tue, 11 Feb 2014 15:26:01 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0001zl-Rm;
	Tue, 11 Feb 2014 15:26:00 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:46 +0000
Message-ID: <1392132354-7594-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 04/12] ts-xen-install: nodhcp:
	restructure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move target_editfile_root to contain all the meat of the function.
This is useful because we're going to possibly want to read the input
interfaces file twice.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-install |   42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)

diff --git a/ts-xen-install b/ts-xen-install
index fc96516..09c90ce 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -206,33 +206,37 @@ END
 }
 
 sub nodhcp () {
-    my ($iface,$bridgex);
-    my $physif= get_host_property($ho,'interface force','eth0');
-    if ($initscripts_nobridge) {
-        $iface= 'xenbr0';
-        $bridgex= <<END;
+    target_editfile_root($ho, "/etc/network/interfaces",
+                         "etc-network-interfaces", sub {
+        my $physif= get_host_property($ho,'interface force',undef);
+
+	my ($iface,$bridgex);
+
+	if ($initscripts_nobridge) {
+	    $iface= 'xenbr0';
+	    $bridgex= <<END;
     bridge_ports $physif
     bridge_fd 0
     bridge_stp off
 END
-    } else {
-        $iface= $physif;
-        $bridgex= '';
-    }
+	} else {
+	    $iface= $physif;
+	    $bridgex= '';
+        }
 
-    my $routes= target_cmd_output_root($ho, "route -n");
+	my $routes= target_cmd_output_root($ho, "route -n");
 
-    $routes =~ m/^ [0-9.]+ \s+ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ \S*U\S* \s /mxi
-        or die "no own local network in route ?  $routes ";
-    my $netmask= $1;
+	$routes =~ m/^ [0-9.]+ \s+ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ \S*U\S* \s /mxi
+	    or die "no own local network in route ?  $routes ";
+	my $netmask= $1;
 
-    $routes =~ m/^ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ 0\.0\.0\.0 \s+ \S*UG\S* \s /mxi
-        or die "no default gateway ?  $routes ";
-    my $gateway= $1;
+	$routes =~
+	    m/^ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ 0\.0\.0\.0 \s+ \S*UG\S* \s /mxi
+	    or die "no default gateway ?  $routes ";
+	my $gateway= $1;
+
+	logm("iface $iface mask=$netmask gw=$gateway");
 
-    logm("iface $iface mask=$netmask gw=$gateway");
-    target_editfile_root($ho, "/etc/network/interfaces",
-                         "etc-network-interfaces", sub {
         my $suppress= 0;
         while (<EI>) {
             $suppress= 0 unless m/^\s+/;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFEA-00087w-NK; Tue, 11 Feb 2014 15:26:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFE9-00086r-5F
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:25 +0000
Received: from [85.158.143.35:3019] by server-3.bemta-4.messagelabs.com id
	D1/9D-11539-0214AF25; Tue, 11 Feb 2014 15:26:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392132367!4868902!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24555 invoked from network); 11 Feb 2014 15:26:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627156"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:00 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0000VC-SG;
	Tue, 11 Feb 2014 15:26:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0001zg-Kz;
	Tue, 11 Feb 2014 15:26:00 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:45 +0000
Message-ID: <1392132354-7594-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 03/12] ts-host-install: set `IPAPPEND 2'
	(if interface isn't forced)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This causes BOOTIF=<mac-address> to appear on command line.  This
makes d-i use that interface.  (See also Debian #615600.)

This is a better approach to interface name instability than setting
the force interface host property.
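For context, `IPAPPEND 2` is a pxelinux directive that appends
`BOOTIF=<hardware-address>` to the kernel command line.  A hypothetical
pxelinux.cfg stanza (label, kernel path and MAC address all made up for
illustration) and its effect would look roughly like:

```
label install
	kernel debian-installer/amd64/linux
	append auto=true priority=critical
	ipappend 2

# The kernel then boots with something like:
#   auto=true priority=critical BOOTIF=01-00-1b-21-aa-bb-cc
# (01- prefix = Ethernet hardware type, then the dash-separated MAC),
# which d-i uses to pick the boot interface.
```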

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-install |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/ts-host-install b/ts-host-install
index 5c0018e..8e119ca 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -174,8 +174,10 @@ sub setup_pxeboot_firstboot($) {
     system qw(rm -rf --),"$initrd_overlay.d";
     mkdir "$initrd_overlay.d" or die "$initrd_overlay.d: $!";
 
+    my $ipappend = 2;
     my $wantphysif= get_host_property($ho,'interface force','auto');
     if ($wantphysif ne 'auto') {
+	$ipappend = 0;
 	die "need Ether for $ho->{Name} ($wantphysif)"
 	    unless defined $ho->{Ether};
         system_checked(qw(mkdir -p --), "$initrd_overlay.d/etc/udev/rules.d");
@@ -219,6 +221,7 @@ label overwrite
 	menu default
 	kernel $kernel
 	append $installcmdline
+	ipappend $ipappend
 default overwrite
 END
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFE1-00082c-3N; Tue, 11 Feb 2014 15:26:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDz-00081G-I3
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:15 +0000
Received: from [85.158.143.35:52827] by server-1.bemta-4.messagelabs.com id
	DC/58-31661-6114AF25; Tue, 11 Feb 2014 15:26:14 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392132366!4863033!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25956 invoked from network); 11 Feb 2014 15:26:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627171"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDn-0000Va-5c;
	Tue, 11 Feb 2014 15:26:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-00020L-W9;
	Tue, 11 Feb 2014 15:26:03 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:53 +0000
Message-ID: <1392132354-7594-12-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <iwj@woking.cam.xci-test.com>
Subject: [Xen-devel] [OSSTEST PATCH 11/12] Debian: Switch to wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <iwj@woking.cam.xci-test.com>

This involves updating the d-i version too.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest.pm        |    2 +-
 production-config |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest.pm b/Osstest.pm
index 9ba2882..8df0c15 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -59,7 +59,7 @@ our %c = qw(
     Logs logs
     Results results
 
-    DebianSuite squeeze
+    DebianSuite wheezy
     DebianMirrorSubpath debian
 
     TestHostKeypairPath id_rsa_osstest
diff --git a/production-config b/production-config
index a0e73cb..1ee0ba0 100644
--- a/production-config
+++ b/production-config
@@ -71,7 +71,7 @@ TftpPxeDir /
 TftpPxeTemplates %ipaddrhex%/pxelinux.cfg
 
 TftpPxeGroup osstest
-TftpDiVersion 2013-09-23
+TftpDiVersion 2013-12-12
 
 XenUsePath /usr/groups/xencore/systems/bin/xenuse
 XenUseUser osstest
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDo-0007z8-14; Tue, 11 Feb 2014 15:26:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDm-0007yx-2i
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:02 +0000
Received: from [85.158.139.211:12108] by server-17.bemta-5.messagelabs.com id
	BE/48-31975-9014AF25; Tue, 11 Feb 2014 15:26:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392132358!3194965!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31939 invoked from network); 11 Feb 2014 15:26:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99816987"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:25:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:25:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDh-0000V0-RX;
	Tue, 11 Feb 2014 15:25:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDh-0001zT-Ix;
	Tue, 11 Feb 2014 15:25:57 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:42 +0000
Message-ID: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 00/12] Wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Workarounds for bugs and incompatibilities in wheezy:
 01/12 ts-xen-build-prep: avoid lvextend segfault
 10/12 ts-guests-nbd-mirror: set "oldstyle=true"

A bunch of bugfixes:
 02/12 ts-kernel-build: force CONFIG_BLK_DEV_NBD=y
 03/12 ts-host-install: set `IPAPPEND 2' (if
 04/12 ts-xen-install: nodhcp: restructure
 05/12 ts-xen-install: default the interface to the
 06/12 TestSupport: break out target_run_apt
 07/12 TestSupport: Suppress prompting by apt
 08/12 ts-guests-nbd-mirror: purge old packages first
 09/12 ts-guests-nbd-mirror: add checkaccessible test

And switch the default suite:
 11/12 Debian: Switch to wheezy
 12/12 make-flight: abolish special-casing of suite


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDx-00080F-GL; Tue, 11 Feb 2014 15:26:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDv-0007zx-Sx
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:12 +0000
Received: from [85.158.139.211:61579] by server-12.bemta-5.messagelabs.com id
	6D/CA-15415-3114AF25; Tue, 11 Feb 2014 15:26:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392132369!3218677!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6929 invoked from network); 11 Feb 2014 15:26:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627168"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:03 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDn-0000VX-0O;
	Tue, 11 Feb 2014 15:26:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-00020G-Py;
	Tue, 11 Feb 2014 15:26:02 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:52 +0000
Message-ID: <1392132354-7594-11-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 10/12] ts-guests-nbd-mirror: set
	"oldstyle=true"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Newer nbd-server versions (wheezy's and later) need a config option to
say we're using the old-style port-based addressing.  At some point we
will probably switch to the new-style addressing, but not yet.
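
The server config this script generates can be sketched roughly as
follows.  This fragment is illustrative only: the export section name,
device path, and port number are invented, and the per-export option
names are assumed from the nbd-server config format of that era; only
the [generic] stanza is taken from the script itself:

```
[generic]
    user = root
    oldstyle = true

[guest-disk]
    exportname = /dev/vg/guest-disk
    port = 4000
```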

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-guests-nbd-mirror |    3 +++
 1 file changed, 3 insertions(+)

diff --git a/ts-guests-nbd-mirror b/ts-guests-nbd-mirror
index af200f4..ee7ff60 100755
--- a/ts-guests-nbd-mirror
+++ b/ts-guests-nbd-mirror
@@ -59,6 +59,9 @@ sub configserver () {
 [generic]
     user = root
 END
+    $scfg .= <<END unless $sho->{Suite} =~ m/sarge|lenny|squeeze/;
+    oldstyle = true
+END
     foreach my $v (@vols) {
 	$v->{Port}= unique_incrementing_runvar("${srvhost}_nextport",4000);
 	$v->{Path}= "/dev/$v->{Gho}{Vg}/$v->{Lv}";
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDq-0007zV-OZ; Tue, 11 Feb 2014 15:26:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDo-0007z7-F5
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:04 +0000
Received: from [85.158.139.211:34713] by server-12.bemta-5.messagelabs.com id
	C5/9A-15415-B014AF25; Tue, 11 Feb 2014 15:26:03 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392132358!3194965!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32261 invoked from network); 11 Feb 2014 15:26:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99817006"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:01 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDl-0000VF-S3;
	Tue, 11 Feb 2014 15:26:01 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0001zl-Rm;
	Tue, 11 Feb 2014 15:26:00 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:46 +0000
Message-ID: <1392132354-7594-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 04/12] ts-xen-install: nodhcp:
	restructure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Move target_editfile_root so that it wraps all the meat of the
function.  This is useful because we may want to read the input
interfaces file twice.

No functional change.
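
The two extractions that move inside the callback can be illustrated
standalone.  A sketch in Python of what the Perl regexes pull out of
`route -n' output (the sample output and addresses below are
invented):

```python
import re

# Sample `route -n` output (invented addresses).
ROUTES = """\
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.80.2.1       0.0.0.0         UG    0      0        0 eth0
10.80.2.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
"""

def parse_routes(routes):
    # Local network: gateway 0.0.0.0, flags containing U; capture the netmask.
    m = re.search(r'^[0-9.]+\s+0\.0\.0\.0\s+([0-9.]+)\s+\S*U\S*\s',
                  routes, re.M)
    netmask = m.group(1)
    # Default route: destination 0.0.0.0, flags containing UG; capture gateway.
    m = re.search(r'^0\.0\.0\.0\s+([0-9.]+)\s+0\.0\.0\.0\s+\S*UG\S*\s',
                  routes, re.M)
    gateway = m.group(1)
    return netmask, gateway

print(parse_routes(ROUTES))  # ('255.255.255.0', '10.80.2.1')
```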

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-install |   42 +++++++++++++++++++++++-------------------
 1 file changed, 23 insertions(+), 19 deletions(-)

diff --git a/ts-xen-install b/ts-xen-install
index fc96516..09c90ce 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -206,33 +206,37 @@ END
 }
 
 sub nodhcp () {
-    my ($iface,$bridgex);
-    my $physif= get_host_property($ho,'interface force','eth0');
-    if ($initscripts_nobridge) {
-        $iface= 'xenbr0';
-        $bridgex= <<END;
+    target_editfile_root($ho, "/etc/network/interfaces",
+                         "etc-network-interfaces", sub {
+        my $physif= get_host_property($ho,'interface force',undef);
+
+	my ($iface,$bridgex);
+
+	if ($initscripts_nobridge) {
+	    $iface= 'xenbr0';
+	    $bridgex= <<END;
     bridge_ports $physif
     bridge_fd 0
     bridge_stp off
 END
-    } else {
-        $iface= $physif;
-        $bridgex= '';
-    }
+	} else {
+	    $iface= $physif;
+	    $bridgex= '';
+        }
 
-    my $routes= target_cmd_output_root($ho, "route -n");
+	my $routes= target_cmd_output_root($ho, "route -n");
 
-    $routes =~ m/^ [0-9.]+ \s+ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ \S*U\S* \s /mxi
-        or die "no own local network in route ?  $routes ";
-    my $netmask= $1;
+	$routes =~ m/^ [0-9.]+ \s+ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ \S*U\S* \s /mxi
+	    or die "no own local network in route ?  $routes ";
+	my $netmask= $1;
 
-    $routes =~ m/^ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ 0\.0\.0\.0 \s+ \S*UG\S* \s /mxi
-        or die "no default gateway ?  $routes ";
-    my $gateway= $1;
+	$routes =~
+	    m/^ 0\.0\.0\.0 \s+ ([0-9.]+) \s+ 0\.0\.0\.0 \s+ \S*UG\S* \s /mxi
+	    or die "no default gateway ?  $routes ";
+	my $gateway= $1;
+
+	logm("iface $iface mask=$netmask gw=$gateway");
 
-    logm("iface $iface mask=$netmask gw=$gateway");
-    target_editfile_root($ho, "/etc/network/interfaces",
-                         "etc-network-interfaces", sub {
         my $suppress= 0;
         while (<EI>) {
             $suppress= 0 unless m/^\s+/;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFE0-00082J-NF; Tue, 11 Feb 2014 15:26:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDz-00080S-Gb
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:15 +0000
Received: from [85.158.143.35:63106] by server-3.bemta-4.messagelabs.com id
	BF/2D-11539-5114AF25; Tue, 11 Feb 2014 15:26:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392132366!4863033!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25862 invoked from network); 11 Feb 2014 15:26:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627152"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:00 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0000V9-LW;
	Tue, 11 Feb 2014 15:26:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0001zb-DI;
	Tue, 11 Feb 2014 15:26:00 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:44 +0000
Message-ID: <1392132354-7594-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 02/12] ts-kernel-build: force
	CONFIG_BLK_DEV_NBD=y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise, with wheezy and Linux 3.4.77:

Jan 23 04:36:42 lake-frog nbd_client[4274]: Cannot open NBD: No such file or directory#012Please ensure the nbd module is loaded.
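
What the new setopt line amounts to in the generated kernel .config
can be sketched as follows; the config fragment and the helper are
illustrative only (invented), not part of ts-kernel-build:

```python
# Invented .config fragment: the point of the patch is that
# CONFIG_BLK_DEV_NBD ends up built in (=y) rather than modular (=m),
# since nothing loads the nbd module before nbd-client runs.
FRAGMENT = """\
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_NBD=y
# CONFIG_BLK_DEV_RAM is not set
"""

def config_value(fragment, option):
    """Return the value ('y', 'm', ...) of a Kconfig option in a .config
    fragment, or None if it is unset or absent."""
    for line in fragment.splitlines():
        if line.startswith(option + "="):
            return line.split("=", 1)[1]
        if line == "# %s is not set" % option:
            return None
    return None

print(config_value(FRAGMENT, "CONFIG_BLK_DEV_NBD"))  # y
```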

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-kernel-build |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ts-kernel-build b/ts-kernel-build
index 742d2b4..9b92ffc 100755
--- a/ts-kernel-build
+++ b/ts-kernel-build
@@ -174,6 +174,10 @@ setopt CONFIG_EXT4_FS m
 setopt CONFIG_UFS_FS m
 setopt CONFIG_UFS_FS_WRITE y
 
+setopt CONFIG_BLK_DEV_NBD y
+# At least with Linux 3.4.77 on wheezy, the nbd module is
+# not loaded automatically.
+
 END
 
 our $config_features= <<END;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFE0-00082J-NF; Tue, 11 Feb 2014 15:26:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDz-00080S-Gb
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:15 +0000
Received: from [85.158.143.35:63106] by server-3.bemta-4.messagelabs.com id
	BF/2D-11539-5114AF25; Tue, 11 Feb 2014 15:26:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392132366!4863033!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25862 invoked from network); 11 Feb 2014 15:26:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627152"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:00 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0000V9-LW;
	Tue, 11 Feb 2014 15:26:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0001zb-DI;
	Tue, 11 Feb 2014 15:26:00 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:44 +0000
Message-ID: <1392132354-7594-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 02/12] ts-kernel-build: force
	CONFIG_BLK_DEV_NBD=y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise, with wheezy and Linux 3.4.77:

Jan 23 04:36:42 lake-frog nbd_client[4274]: Cannot open NBD: No such file or directory#012Please ensure the nbd module is loaded.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-kernel-build |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ts-kernel-build b/ts-kernel-build
index 742d2b4..9b92ffc 100755
--- a/ts-kernel-build
+++ b/ts-kernel-build
@@ -174,6 +174,10 @@ setopt CONFIG_EXT4_FS m
 setopt CONFIG_UFS_FS m
 setopt CONFIG_UFS_FS_WRITE y
 
+setopt CONFIG_BLK_DEV_NBD y
+# At least with Linux 3.4.77 on wheezy, the nbd module is
+# not loaded automatically.
+
 END
 
 our $config_features= <<END;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
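[Editorial note, not part of the patch: the `setopt CONFIG_BLK_DEV_NBD y` line above forces a kernel config symbol to a value. A rough stand-alone Python rendering of that idea is sketched below; `setopt` here is a hypothetical re-implementation, not osstest's actual Perl helper.]

```python
import re

def setopt(config_text, option, value):
    # Force OPTION=VALUE in a kernel .config, replacing an existing
    # "OPTION=..." or "# OPTION is not set" line, else appending one.
    # (Loose sketch of what ts-kernel-build's setopt achieves.)
    pat = re.compile(r'^(# )?%s[ =].*$' % re.escape(option), re.M)
    line = '%s=%s' % (option, value)
    if pat.search(config_text):
        return pat.sub(line, config_text)
    return config_text.rstrip('\n') + '\n' + line + '\n'

cfg = "CONFIG_BLK_DEV_NBD=m\nCONFIG_EXT4_FS=m\n"
print(setopt(cfg, 'CONFIG_BLK_DEV_NBD', 'y'))
```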

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDr-0007zc-3Q; Tue, 11 Feb 2014 15:26:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDp-0007zQ-EO
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:05 +0000
Received: from [85.158.139.211:56946] by server-16.bemta-5.messagelabs.com id
	C2/88-05060-C014AF25; Tue, 11 Feb 2014 15:26:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392132358!3194965!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32330 invoked from network); 11 Feb 2014 15:26:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99817007"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-0000VI-2C;
	Tue, 11 Feb 2014 15:26:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDl-0001zq-RZ;
	Tue, 11 Feb 2014 15:26:01 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:47 +0000
Message-ID: <1392132354-7594-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 05/12] ts-xen-install: default the
	interface to the one in /etc/network/interfaces
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The default was simply eth0.  This is the other piece of automatically
coping with the boot interface not being eth0.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-install |   18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/ts-xen-install b/ts-xen-install
index 09c90ce..a1f5998 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -22,6 +22,7 @@ use File::Path;
 use POSIX;
 use Osstest::Debian;
 use Osstest::TestSupport;
+use Data::Dumper;
 
 my $checkmode= 0;
 
@@ -210,6 +211,23 @@ sub nodhcp () {
                          "etc-network-interfaces", sub {
         my $physif= get_host_property($ho,'interface force',undef);
 
+	if (!defined $physif) {
+	    # preread /etc/network/interfaces to figure out the interface
+	    my %candidates;
+	    while (<EI>) {
+		next unless
+		    m{^ \s* (  auto \s+ (\S+)               ) \s* $}x ||
+		    m{^ \s* (  allow-hotplug \s+ (\S+)      ) \s* $}x ||
+		    m{^ \s* (  iface \s+ (\S+) \s+ inet \s+ ) \s* $}x ;
+		push @{ $candidates{$2} }, $1;
+	    }
+	    EI->error and die $!;
+	    delete $candidates{'lo'};
+	    die Dumper(\%candidates)." -- cannot determine default interface"
+		unless (scalar keys %candidates) == 1;
+	    ($physif,) = keys %candidates;
+	    seek EI,0,0 or die $!;
+	}
 	my ($iface,$bridgex);
 
 	if ($initscripts_nobridge) {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
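[Editorial note, not part of the patch: the Perl hunk above scans /etc/network/interfaces for `auto`, `allow-hotplug`, and `iface ... inet` stanzas, drops `lo`, and insists on a unique remaining interface. A rough Python rendering of that logic, for illustration only:]

```python
import re
from collections import defaultdict

def guess_default_interface(interfaces_text):
    # Collect candidate interface names from auto/allow-hotplug/iface
    # stanzas, as the patch's regexes do; keys are names, values the
    # matching lines (useful in the error message).
    candidates = defaultdict(list)
    for line in interfaces_text.splitlines():
        m = (re.match(r'\s*(auto\s+(\S+))\s*$', line) or
             re.match(r'\s*(allow-hotplug\s+(\S+))\s*$', line) or
             re.match(r'\s*(iface\s+(\S+)\s+inet\s.*)$', line))
        if m:
            candidates[m.group(2)].append(m.group(1))
    candidates.pop('lo', None)          # loopback is never the answer
    if len(candidates) != 1:
        raise ValueError('cannot determine default interface: %r'
                         % dict(candidates))
    return next(iter(candidates))

sample = "auto lo\niface lo inet loopback\nauto eth1\niface eth1 inet dhcp\n"
print(guess_default_interface(sample))  # eth1
```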

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFE6-00085D-9h; Tue, 11 Feb 2014 15:26:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WDFE5-00083z-3g
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:21 +0000
Received: from [85.158.137.68:50049] by server-10.bemta-3.messagelabs.com id
	AD/2E-07302-C114AF25; Tue, 11 Feb 2014 15:26:20 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392132379!1156185!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10286 invoked from network); 11 Feb 2014 15:26:19 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-10.tower-31.messagelabs.com with SMTP;
	11 Feb 2014 15:26:19 -0000
X-TM-IMSS-Message-ID: <7a24f21d0004800f@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 7a24f21d0004800f ;
	Tue, 11 Feb 2014 10:27:29 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1BFQCTd017123; 
	Tue, 11 Feb 2014 10:26:13 -0500
Message-ID: <52FA40DD.6060106@tycho.nsa.gov>
Date: Tue, 11 Feb 2014 10:25:17 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<1392111423.26657.55.camel@kazak.uk.xensource.com>
In-Reply-To: <1392111423.26657.55.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: [Xen-devel] [PATCH] docs/vtpm: fix auto-shutdown reference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/11/2014 04:37 AM, Ian Campbell wrote:
> On Mon, 2014-02-10 at 14:40 -0500, Daniel De Graaf wrote:
>> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>>> Dear all,
>>>
>>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
>>> guest virtual machine that takes advantage of it. After playing a bit
>>> with it, I have a few questions:
>>>
>>> 1.According to the documentation, to shutdown the vTPM stubdom it is
>>> only needed to normally shutdown the guest VM. Theoretically, the vTPM
>>> stubdom automatically shuts down after this. Nevertheless, if I shutdown
>>> the guest the vTPM stubdom continues active and, moreover, I can start
>>> the machine again and the values of the vTPM are the last ones there
>>> were in the previous instance of the guest. Is this normal?
>>
>> The documentation is in error here;
>
> Can you send a patch please.
>
> Ian.
>
Patch below.

------------------------->8--------------------------------------

The automatic shutdown feature of the vTPM was removed because it
interfered with pv-grub measurement support and was also not triggered
if the guest did not use the vTPM. Virtual TPM domains will need to be
shut down or destroyed on guest shutdown via a script or other user
action.

This also fixes an incorrect reference to the vTPM being PV-only.

Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
  docs/misc/vtpm.txt | 12 +++---------
  1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index b8979a3..df1dfae 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -234,7 +234,7 @@ the Linux tpmfront driver. Add the following line:
  
  vtpm=["backend=domu-vtpm"]
  
-Currently only paravirtualized guests are supported.
+Currently only Linux guests are supported (PV or HVM with PV drivers).
  
  Launching and shut down:
  ------------------------
@@ -280,14 +280,8 @@ You should also see the command being sent to the vtpm console as well
  as the vtpm saving its state. You should see the vtpm key being
  encrypted and stored on the vtpmmgr console.
  
-To shutdown the guest and its vtpm, you just have to shutdown the guest
-normally. As soon as the guest vm disconnects, the vtpm will shut itself
-down automatically.
-
-On guest:
-# shutdown -h now
-
-You may wish to write a script to start your vtpm and guest together.
+You may wish to write a script to start your vtpm and guest together and
+to destroy the vtpm when the guest shuts down.
  
  ------------------------------
  INTEGRATION WITH PV-GRUB
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
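[Editorial note, not part of the patch: the revised docs above say the vTPM domain must now be destroyed explicitly when the guest shuts down, e.g. from a wrapper script. One possible shape for such a wrapper is sketched below as the `xl` command sequence it would run; the config and domain names are hypothetical.]

```python
def vtpm_lifecycle_cmds(vtpm_cfg, guest_cfg, guest_name, vtpm_name):
    # Commands a wrapper script might issue: start the vtpm and guest
    # together, wait for the guest to shut down, then destroy the vtpm
    # (which, per the patch above, no longer exits automatically).
    return [
        ['xl', 'create', vtpm_cfg],
        ['xl', 'create', guest_cfg],
        ['xl', 'shutdown', '-w', guest_name],  # -w waits for completion
        ['xl', 'destroy', vtpm_name],
    ]

for cmd in vtpm_lifecycle_cmds('vtpm.cfg', 'guest.cfg', 'guest', 'vtpm'):
    print(' '.join(cmd))
```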

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDx-00080V-SS; Tue, 11 Feb 2014 15:26:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDw-0007zy-CC
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:12 +0000
Received: from [85.158.137.68:42985] by server-5.bemta-3.messagelabs.com id
	F1/7A-04712-3114AF25; Tue, 11 Feb 2014 15:26:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392132369!1144056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14514 invoked from network); 11 Feb 2014 15:26:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99817040"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:08 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDn-0000Vd-E0;
	Tue, 11 Feb 2014 15:26:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDn-00020Q-54;
	Tue, 11 Feb 2014 15:26:03 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:54 +0000
Message-ID: <1392132354-7594-13-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 12/12] make-flight: abolish
	special-casing of suite for armhf
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the suite is now wheezy by default this is no longer needed.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 mfi-common |    2 --
 1 file changed, 2 deletions(-)

diff --git a/mfi-common b/mfi-common
index 8f56092..a95703d 100644
--- a/mfi-common
+++ b/mfi-common
@@ -79,7 +79,6 @@ create_build_jobs () {
     esac
 
     case "$arch" in
-    armhf) suite="wheezy";;
     *)     suite=$defsuite;;
     esac
 
@@ -267,7 +266,6 @@ test_matrix_iterate () {
     esac
 
     case "$xenarch" in
-    armhf) suite="wheezy";  guestsuite="wheezy";;
     *)     suite=$defsuite; guestsuite=$defguestsuite;;
     esac
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDz-00081o-9s; Tue, 11 Feb 2014 15:26:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDy-00080p-K7
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:14 +0000
Received: from [85.158.143.35:52766] by server-1.bemta-4.messagelabs.com id
	98/58-31661-5114AF25; Tue, 11 Feb 2014 15:26:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392132366!4863033!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25925 invoked from network); 11 Feb 2014 15:26:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627165"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-0000VR-KT;
	Tue, 11 Feb 2014 15:26:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-000206-E8;
	Tue, 11 Feb 2014 15:26:02 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:50 +0000
Message-ID: <1392132354-7594-9-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 08/12] ts-guests-nbd-mirror: purge old
	packages first
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Purge any old nbd-client and nbd-server _before_ we make their config.

This only has any effect if the packages are installed before this
script starts, which isn't the case in any of the automatically-run
recipes.  But it can occur when the script is being tested by hand.
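
The reordering can be sketched in plain shell (dpkg and mkdir are stubbed with echo here so the sketch runs anywhere; on the real target host these run as root):

```shell
# Sketch of the new ordering, with the real commands stubbed by echo.
# The "||:" suffix matters: ":" is the always-true shell no-op, so
# "dpkg --purge ... ||:" succeeds even when the package was never installed.
set -e
purge() { echo "dpkg --purge $1"; false; }  # stub; pretend dpkg reported failure
purge nbd-server ||:                        # purge first, tolerating absence
echo "mkdir -p /etc/nbd-server"             # then recreate the config afresh
```

Purging (rather than merely removing) also deletes the package's conffiles, which is why doing it before writing the new config guarantees a clean slate.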

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-guests-nbd-mirror |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/ts-guests-nbd-mirror b/ts-guests-nbd-mirror
index 89508dd..c0a021a 100755
--- a/ts-guests-nbd-mirror
+++ b/ts-guests-nbd-mirror
@@ -52,6 +52,8 @@ sub findvols () {
 }
 
 sub configserver () {
+    target_cmd_root($sho, "dpkg --purge nbd-server ||:");
+
     my $scfg= <<END;
 # generated by $0
 [generic]
@@ -67,7 +69,6 @@ END
 END
     }
 
-    target_cmd_root($sho, "dpkg --purge nbd-server ||:");
     target_cmd_root($sho, "mkdir -p /etc/nbd-server");
     target_putfilecontents_root_stash($sho,10, $scfg,
         "/etc/nbd-server/config", "${srvhost}_nbd-server_config");
@@ -75,6 +76,8 @@ END
 }
 
 sub configclient () {
+    target_cmd_root($cho, "dpkg --purge nbd-client ||:");
+
     my $mydaemon= '/root/nbd-client-async';
     target_putfilecontents_root_stash($cho,10,<<'END',$mydaemon);
 #!/bin/sh
@@ -99,7 +102,6 @@ NBD_HOST[$v->{Ix}]=$sho->{Name}
 NBD_PORT[$v->{Ix}]=$v->{Port}
 END
     }
-    target_cmd_root($cho, "dpkg --purge nbd-client ||:");
     target_putfilecontents_root_stash($cho,10,$ccfg,"/etc/nbd-client");
     target_install_packages($cho, qw(nbd-client));
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFED-0008BO-Ha; Tue, 11 Feb 2014 15:26:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFEC-00089c-EU
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:28 +0000
Received: from [85.158.143.35:54633] by server-1.bemta-4.messagelabs.com id
	82/E8-31661-3214AF25; Tue, 11 Feb 2014 15:26:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392132367!4868902!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24610 invoked from network); 11 Feb 2014 15:26:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627166"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-0000VU-QR;
	Tue, 11 Feb 2014 15:26:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-00020B-Jx;
	Tue, 11 Feb 2014 15:26:02 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:51 +0000
Message-ID: <1392132354-7594-10-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 09/12] ts-guests-nbd-mirror: add
	checkaccessible test
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the NBD devices are not properly accessible on the client, bomb out
here rather than futilely starting a guest, and then timing out when
the guest fails to boot because it can't find its root fs.
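
A rough shell equivalent of the check (the Perl runs file(1) on each volume as root on the client; the pattern test is factored out here so it can be exercised standalone, and /dev/nbd0 is a hypothetical example path):

```shell
# looks_like_data: succeed iff a file(1) description mentions a filesystem
# or swap signature -- the same case-insensitive pattern the Perl matches.
looks_like_data () {
    printf '%s\n' "$1" | grep -Eiq 'filesystem|swap'
}

# check_nbd_vol: what checkaccessible does per volume, for one device path.
check_nbd_vol () {
    finfo=$(file -Ls "$1")   # -L follow symlinks, -s read block/special files
    echo "file(1) on $1: $finfo"
    looks_like_data "$finfo" || {
        echo "unexpected contents for $1, nbd failed?" >&2
        return 1
    }
}
```

If the NBD connection is broken, file(1) typically reports just "data" or an error, which fails the pattern and aborts the test step early.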

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-guests-nbd-mirror |   11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/ts-guests-nbd-mirror b/ts-guests-nbd-mirror
index c0a021a..af200f4 100755
--- a/ts-guests-nbd-mirror
+++ b/ts-guests-nbd-mirror
@@ -116,7 +116,18 @@ sub shuffleconfigs () {
     }
 }
 
+sub checkaccessible () {
+    foreach my $v (@vols) {
+	my $finfo = target_cmd_output_root($cho, "file -Ls $v->{Path}");
+	chomp $finfo;
+	logm("file(1) on $v->{Path}: $finfo");
+	die "unexpected contents for $v->{Path}, nbd failed?"
+	    unless $finfo =~ m/filesystem|swap/i;
+    }
+}
+
 findvols();
 configserver();
 configclient();
 shuffleconfigs();
+checkaccessible();
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFE1-000839-Qu; Tue, 11 Feb 2014 15:26:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFE0-00082F-OI
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:16 +0000
Received: from [85.158.143.35:63318] by server-1.bemta-4.messagelabs.com id
	88/68-31661-8114AF25; Tue, 11 Feb 2014 15:26:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392132366!4863033!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25885 invoked from network); 11 Feb 2014 15:26:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627164"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-0000VO-Ea;
	Tue, 11 Feb 2014 15:26:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-000201-82;
	Tue, 11 Feb 2014 15:26:02 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:49 +0000
Message-ID: <1392132354-7594-8-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 07/12] TestSupport: Suppress prompting
	by apt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Always set DEBIAN_PRIORITY=critical UCF_FORCE_CONFFOLD=y in the
environment of apt-get, to suppress some prompts.
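
The env-var prefix form the patch uses scopes the variables to the single apt-get invocation (standard POSIX shell behaviour); apt-get is replaced by a child shell here so the sketch runs without a Debian target:

```shell
# Variables in a command prefix reach only that command's environment;
# the invoking shell itself is unaffected.
DEBIAN_PRIORITY=critical UCF_FORCE_CONFFOLD=y \
    sh -c 'echo "child sees: $DEBIAN_PRIORITY $UCF_FORCE_CONFFOLD"'
echo "parent sees: '${DEBIAN_PRIORITY:-unset}'"
```

DEBIAN_PRIORITY=critical limits debconf to critical questions only, and UCF_FORCE_CONFFOLD=y makes ucf keep the existing configuration file instead of prompting.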

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index e546dc9..545095f 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -420,7 +420,7 @@ sub target_putfile_root ($$$$;$) {
 sub target_run_apt {
     my ($ho, $timeout, @aptopts) = @_;
     target_cmd_root($ho,
-                    "apt-get @aptopts",
+   "DEBIAN_PRIORITY=critical UCF_FORCE_CONFFOLD=y apt-get @aptopts",
                     $timeout);
 }
 sub target_install_packages {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFDo-0007zF-CI; Tue, 11 Feb 2014 15:26:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFDn-0007z2-4Y
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:03 +0000
Received: from [85.158.139.211:56658] by server-16.bemta-5.messagelabs.com id
	65/78-05060-A014AF25; Tue, 11 Feb 2014 15:26:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392132358!3194965!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32113 invoked from network); 11 Feb 2014 15:26:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99816998"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:00 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0000V3-E1;
	Tue, 11 Feb 2014 15:26:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDk-0001zY-6G;
	Tue, 11 Feb 2014 15:26:00 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:43 +0000
Message-ID: <1392132354-7594-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 01/12] ts-xen-build-prep: avoid lvextend
	segfault (Debian #736173) with wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-build-prep |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index b395584..528d0a4 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -133,7 +133,9 @@ sub lvextend1 ($$$) {
 
     my $stripes_free = $stripe_count * $stripe_minfree;
     $do_limit_pe->(\$stripes_free, 'striped');
-    if ($stripe_minfree && $stripe_count>1) {
+    if ($stripe_minfree && $stripe_count>1
+	&& $ho->{Suite} !~ m/wheezy/ # bugs.debian.org/736173
+	) {
         overall_limit_pe(\$stripes_free);
         $more_pe += $stripes_free;
         target_cmd_root($ho, "lvextend -i$stripe_count -l +$stripes_free $lv");
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:26:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFED-0008Aw-4L; Tue, 11 Feb 2014 15:26:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFEB-000885-Gp
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:26:27 +0000
Received: from [85.158.143.35:3361] by server-2.bemta-4.messagelabs.com id
	A4/09-10891-2214AF25; Tue, 11 Feb 2014 15:26:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392132367!4868902!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24588 invoked from network); 11 Feb 2014 15:26:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:26:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101627163"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:26:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:26:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-0000VL-8W;
	Tue, 11 Feb 2014 15:26:02 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDFDm-0001zw-1h;
	Tue, 11 Feb 2014 15:26:02 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Tue, 11 Feb 2014 15:25:48 +0000
Message-ID: <1392132354-7594-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392132354-7594-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 06/12] TestSupport: break out
	target_run_apt
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to add some environment variables to the apt invocation,
so centralise where this happens.  No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm |   16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index a513540..e546dc9 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -56,6 +56,7 @@ BEGIN {
 		      target_putfilecontents_root_stash
                       target_put_guest_image
                       target_editfile_root target_file_exists
+                      target_run_apt
                       target_install_packages target_install_packages_norec
                       target_extract_jobdistpath target_guest_lv_name
 
@@ -416,16 +417,21 @@ sub target_putfile ($$$$;$) {
 sub target_putfile_root ($$$$;$) {
     tputfileex('root', @_);
 }
+sub target_run_apt {
+    my ($ho, $timeout, @aptopts) = @_;
+    target_cmd_root($ho,
+                    "apt-get @aptopts",
+                    $timeout);
+}
 sub target_install_packages {
     my ($ho, @packages) = @_;
-    target_cmd_root($ho, "apt-get -y install @packages",
-                    300 + 100 * @packages);
+    target_run_apt($ho, 300 + 100 * @packages,
+		   qw(-y install), @packages);
 }
 sub target_install_packages_norec {
     my ($ho, @packages) = @_;
-    target_cmd_root($ho,
-                    "apt-get --no-install-recommends -y install @packages",
-                    300 + 100 * @packages);
+    target_run_apt($ho, 300 + 100 * @packages,
+		   qw(--no-install-recommends -y install), @packages);
 }
 
 sub target_somefile_getleaf ($$$) {
-- 
1.7.10.4
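
The shape of this refactoring can be illustrated with a minimal shell analogue (hypothetical names, not osstest code; osstest does this in Perl via target_run_apt and target_cmd_root): every caller funnels through one helper, so options or environment variables such as DEBIAN_FRONTEND=noninteractive can later be added in a single place.

```shell
# Minimal shell analogue of the target_run_apt refactoring (hypothetical,
# for illustration only): route every apt-get invocation through one helper.
# APT_CMD is overridable so the sketch can be exercised without apt-get.
APT_CMD="${APT_CMD:-apt-get}"

run_apt() {
    # Environment variables for apt would be prepended here, once,
    # instead of at every call site.
    "$APT_CMD" "$@"
}

install_packages() {
    run_apt -y install "$@"
}

install_packages_norec() {
    run_apt --no-install-recommends -y install "$@"
}
```

With APT_CMD=echo the helpers simply print their argument vectors, which is enough to see that both install paths now share one entry point.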


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:27:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFFd-0000hE-Bg; Tue, 11 Feb 2014 15:27:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WDFFc-0000gU-6K
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 15:27:56 +0000
Received: from [193.109.254.147:24054] by server-9.bemta-14.messagelabs.com id
	D3/62-24895-B714AF25; Tue, 11 Feb 2014 15:27:55 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392132474!3588745!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29693 invoked from network); 11 Feb 2014 15:27:54 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-5.tower-27.messagelabs.com with SMTP;
	11 Feb 2014 15:27:54 -0000
X-TM-IMSS-Message-ID: <9da5e47100050e0e@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.9]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 9da5e47100050e0e ;
	Tue, 11 Feb 2014 10:27:10 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1BFRoLE017446; 
	Tue, 11 Feb 2014 10:27:51 -0500
Message-ID: <52FA413F.1040608@tycho.nsa.gov>
Date: Tue, 11 Feb 2014 10:26:55 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<52F9F514.8040907@scytl.com>
In-Reply-To: <52F9F514.8040907@scytl.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/11/2014 05:01 AM, Jordi Cucurull Juan wrote:
> Hello Daniel,
>
> Thanks for your thorough answer. I have a few comments below.
>
> On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
>> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>>> Dear all,
>>>
>>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
>>> guest virtual machine that takes advantage of it. After playing a bit
>>> with it, I have a few questions:
>>>
>>> 1. According to the documentation, to shut down the vTPM stubdom it is
>>> only necessary to shut down the guest VM normally. Theoretically, the
>>> vTPM stubdom automatically shuts down after this. Nevertheless, if I
>>> shut down the guest, the vTPM stubdom remains active and, moreover, I
>>> can start the machine again and the vTPM values are the last ones from
>>> the previous instance of the guest. Is this normal?
>>
>> The documentation is in error here; while this was originally how the
>> vTPM domain behaved, this automatic shutdown was not reliable: it was
>> not done if the peer domain did not use the vTPM, and it was
>> incorrectly triggered by pv-grub's use of the vTPM to record guest
>> kernel measurements (which was the immediate reason for its removal).
>> The solution now is to either send a shutdown request or simply destroy
>> the vTPM upon guest shutdown.
>>
>> An alternative that may require less work on your part is to destroy
>> the vTPM stub domain during a guest's construction, something like:
>>
>> #!/bin/sh -e
>> xl destroy "$1-vtpm" || true
>> xl create "$1-vtpm.cfg"
>> xl create "$1-domu.cfg"
>>
>> Allowing a vTPM to remain active across a guest restart will cause the
>> PCR values extended by pv-grub to be incorrect, as you observed in your
>> second email. In order for the vTPM's PCRs to be useful for quotes or
>> releasing sealed secrets, you need to ensure that a new vTPM is started
>> if and only if it is paired with a corresponding guest.
>
> I see a potential threat due to this behaviour (please correct me if I
> am wrong).
>
> Assume an administrator of Dom0 becomes malicious. Since the hypervisor
> does not enforce the shutdown of the vTPM domain, the malicious
> administrator could try the following: 1) make a copy of the peer
> domain, 2) manipulate the copy of the peer domain and disable its
> measurements, 3) boot the original peer domain, 4) switch it off or
> pause it, 5) boot the manipulated copy of the peer domain.
>
> Then, the shown PCR values of the manipulated copy of the peer domain
> are the ones measured by the original peer domain during the first boot.
> But the manipulated copy is the one actually running. Hence, this could
> not be detected by quoting either the vTPM or the pTPM.
>

A malicious dom0 has a much simpler attack vector: start the domain with
a custom version of pv-grub that extends arbitrary measurements instead
of the real kernel's measurements. Then, a user kernel with disabled or
similarly false measuring capabilities can be booted. Alternatively,
if XSM policies do not restrict it, a debugger could be attached to the
guest so that it can be manipulated online.

> Maybe one possible solution could be to enforce an XSM FLASK policy that
> prevents any user in Dom0 from destroying, shutting down or pausing a
> domain, and then measure the policy into a PCR of the physical TPM when
> Dom0 starts. Nevertheless, on the one hand I do not know if this is
> feasible and, on the other hand, this prevents the system from
> destroying the vTPM domain when the peer domain shuts down.

The solution to this problem is to disaggregate dom0 and relocate the
domain building component to a stub domain that is completely measured
in the pTPM (perhaps by TBOOT). The domain builder could use a static
library of domains to build (hardware domain and TPM Manager built only
once; vTPM and pv-grub domain pairs built on request). An XSM policy
could then restrict vTPM communication so that only correctly built
guests are allowed to talk to their paired vTPM. In this case, dom0
would have permission to shut down either VM, but could not start a
replacement.
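
Daniel's earlier point, that a new vTPM should be started if and only if it is paired with a freshly built guest, can be sketched as a small wrapper. This is a hedged illustration, not osstest or Xen tooling; the <guest>-vtpm.cfg / <guest>-domu.cfg naming and the XL override are hypothetical.

```shell
#!/bin/sh
# Hedged sketch: restart a guest together with its vTPM so stale PCR
# state never survives a guest restart. Config file naming and the XL
# override are hypothetical, for illustration only.
XL="${XL:-xl}"

restart_pair() {
    guest="$1"
    # Destroy any leftover vTPM first; ignore failure if none is running.
    "$XL" destroy "${guest}-vtpm" 2>/dev/null || true
    # Start the vTPM before the guest so pv-grub can extend fresh PCRs.
    "$XL" create "${guest}-vtpm.cfg"
    "$XL" create "${guest}-domu.cfg"
}
```

With XL=echo the function just prints the xl commands it would run, which makes the intended ordering (destroy vTPM, create vTPM, create guest) easy to check.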

>>> 2. In the documentation it is recommended to avoid accessing the
>>> physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
>>> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
>>> without any apparent issue. Why is it not recommended to directly
>>> access the physical TPM from Dom0?
>>
>> While most of the time it is not a problem to have two entities talking
>> to the physical TPM, it is possible for the trousers daemon in dom0 to
>> interfere with key handles used by the TPM Manager. There are also
>> certain operations of the TPM that may not handle concurrency, although
>> I do not believe that trousers uses them - SHA1Start, the DAA commands,
>> and certain audit logs come to mind.
>>
>> The other reason why it is recommended to avoid pTPM access from dom0
>> is that the ability to send unseal/unbind requests to the physical TPM
>> makes it possible for applications running in dom0 to decrypt the TPM
>> Manager's data (and thereby access vTPM private keys).
>>
>> At present, sharing the physical TPM between dom0 and the TPM Manager is
>> the only way to get full integrity checks.
>
> OK, I see. Hence, leaving TPM support enabled in Dom0 creates a
> security problem for the vTPM. But if we do not enable the support, the
> integrity of Dom0 cannot be proved using the TPM (e.g. by remote
> attestation).

Right. Since dom0 currently must be trusted (as discussed above), this is
the best way to handle the dom0 attestation problem.

>>
>>> 3. If it is not recommended to directly access the physical TPM in
>>> Dom0, what is the advisable way to check the integrity of this domain?
>>> With solutions such as TBOOT and Intel TXT?
>>
>> While the TPM Manager in Xen 4.3/4.4 does not yet have this
>> functionality, an update which I will be submitting for inclusion in
>> Xen 4.5 has the ability to get physical TPM quotes using a virtual TPM.
>> Combined with an early domain builder, the eventual goal is to have
>> dom0 use a vTPM for its integrity/reporting/sealing operations, and use
>> the physical TPM only to secure the secrets of vTPMs and for deep
>> quotes to provide fresh proofs of the system's state.
>
> This sounds really good. I look forward to trying it in Xen 4.5!
>
>
> Thank you for your answers!
> Jordi.
>


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:35:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFN0-0001tf-UM; Tue, 11 Feb 2014 15:35:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0119358d1f=mlabriol@gdeb.com>)
	id 1WDFMy-0001tS-Uy; Tue, 11 Feb 2014 15:35:33 +0000
Received: from [85.158.139.211:32534] by server-1.bemta-5.messagelabs.com id
	9E/8E-12859-4434AF25; Tue, 11 Feb 2014 15:35:32 +0000
X-Env-Sender: prvs=0119358d1f=mlabriol@gdeb.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392132930!3221281!1
X-Originating-IP: [153.11.250.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MSA9PiAyMDMzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8140 invoked from network); 11 Feb 2014 15:35:31 -0000
Received: from mx2.gd-ms.com (HELO mx2.gd-ms.com) (153.11.250.41)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 15:35:31 -0000
Received: from ebsmtp.gdeb.com ([153.11.13.41])
	by mx2.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1WDFMr-0006hH-VR; Tue, 11 Feb 2014 10:35:26 -0500
In-Reply-To: <20140124144938.GD12946@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
X-KeepSent: 60145CF5:5A89BE42-85257C7B:006FD806;
 type=4; name=$KeepSent
Message-ID: <OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Tue, 11 Feb 2014 10:35:18 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx2.gd-ms.com  1WDFMr-0006hH-VR
X-GDM-EVAL: score: /30; hits: 
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014 
09:49:38 AM:

> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> To: Michael D Labriola <mlabriol@gdeb.com>, 
> Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
> bounces@lists.xen.org
> Date: 01/24/2014 09:50 AM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> 
> On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> > 
> > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > Date: 01/21/2014 04:59 PM
> > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > Sent by: xen-devel-bounces@lists.xen.org
> > > 
> > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 

> > > > 10:38:27 AM:
> > > > 
> > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > > > Date: 01/20/2014 10:38 AM
> > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > 
> > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola 
wrote:
> > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 
> > 10:14:36 
> > > > AM:
> > > > > > 
> > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > > > > > Date: 01/20/2014 10:14 AM
> > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > > > 
> > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola 

> > wrote:
> > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having 
> > consistent 
> > > > > > crashes 
> > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and 
> > unusably 
> > > > 
> > > > > > slow 
> > > > > > > > graphics with a newer HD7000 (can see each line refresh 
> > > > individually on 
> > > > > > 
> > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare 
metal.
> > > > > > > 
> > > > > > > I hadn't been using DRM, just Xserver. Is that what you 
mean?
> > > > > > 
> > > > > > The R600 problems happen when in X, using OpenGL, on my dom0. 
The 
> > 
> > > > > > RadeonSI sluggishness is when using the KMS framebuffer device 
for 
> > a 
> > > > plain 
> > > > > > text console login.
> > > > > 
> > > > > So sluggish is probably due to the PAT not being enabled. This 
patch
> > > > > should be applied:
> > > > > 
> > > > > lkml.org/lkml/2011/11/8/406
> > > > > 
> > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > > > > 
> > > > > and these two reverted:
> > > > > 
> > > > >  "xen/pat: Disable PAT support for now."
> > > > >  "xen/pat: Disable PAT using pat_enabled value."
> > > > > 
> > > > > Which is to say do:
> > > > > 
> > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > > > 
> > > > Thanks!  I cherry-picked that patch out of your testing tree, 
reverted 
> > 
> > > > those 2 commits, recompiled and installed.  Definitely fixed the 
HD 
> > 7000 
> > > > sluggishness and appears to have fixed the R600 crashes (although 
it's 
> > 
> > > > only been running a few hours).
> > > > 
> > > > How come that patch didn't get into mainline?  It looks pretty 
> > innocuous 
> > > > to me...
> > > 
> > > <Sigh> the x86 maintainers wanted a different route. And I hadn't 
had
> > > the chance nor time to implement it.
> > 
> > I see.  Well, I've got a handful of boxes in my lab that need that 
patch 
> > to be usable.  If you do come up with a more mainline-able solution, 
I'd 
> > gladly test it for you.  ;-)
> 
> Thank you!

Uh, oh.  Looks like those reverts and patches didn't entirely fix my 
problem.  My box with the HD5450 (r600 gallium3d) started going bonkers 
again yesterday.  After being solid as a rock for 2 weeks as my primary 
workstation, X has crashed a half dozen or so times so far this week.  I've 
been in Xen with 2 paravirtual Linux guests running almost constantly for 
this whole period.  I don't understand what's changed, but my system is 
now entirely unstable.  I did recompile my kernel... but all I did 
was merge the v3.13.1 stable commit into my working tree and turn a few 
things on (netfilter, wifi, a couple of drivers here and there).  I 
just went and verified that those patches are still applied in my tree 
(i.e., I didn't accidentally undo them).  I'm scratching my head (and 
staring at a TTY login).

When X crashes, my kernel log prints a couple dozen iterations of the 
messages below, and 3D acceleration no longer functions unless I reboot.  
If memory serves, the unpatched behavior upon an X crash was that the 
kernel continued to spew these errors until the whole box locked up.  At 
least that's not happening any more... ;-)

[  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
[  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
[  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
[  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
[  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate GEM object (8192, 2, 4096, -12)
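(The "r:-12" in these TTM messages follows the kernel's negative-errno convention: -12 is -ENOMEM, i.e. the cached pool failed to allocate pages. A small userspace sketch to confirm the mapping, illustrative only and not part of the driver:)

```python
import errno
import os

# Kernel functions return negative errno values; TTM's "(r:-12)"
# therefore decodes as -ENOMEM (memory allocation failure).
code = 12
print(errno.errorcode[code], "->", os.strerror(code))
# ENOMEM -> Cannot allocate memory
```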

and here's a slightly different variant that happened while I was typing 
this email (on a different machine, luckily):

[ 3107.713039] sdf: detected capacity change from 31625052160 to 0
[ 3114.491717] usb 9-1: USB disconnect, device number 2
[64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
[64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
[64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
[64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
[64348.297561] [TTM] Buffer eviction failed
[64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
[64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
[64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate GEM object (16384, 2, 4096, -12)

Any ideas?


---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)



 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:52:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:52:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFdE-0002qY-Lv; Tue, 11 Feb 2014 15:52:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WDFdE-0002qT-1O
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 15:52:20 +0000
Received: from [85.158.143.35:50338] by server-3.bemta-4.messagelabs.com id
	C0/FD-11539-3374AF25; Tue, 11 Feb 2014 15:52:19 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392133938!4873613!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27462 invoked from network); 11 Feb 2014 15:52:18 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 11 Feb 2014 15:52:18 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55127 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WDFcv-00075g-VS; Tue, 11 Feb 2014 16:52:02 +0100
Date: Tue, 11 Feb 2014 16:52:15 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <771950784.20140211165215@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140210195402.GA3924@kernel.dk>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
MIME-Version: 1.0
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
	to register non-static key. the code is fine but needs
	lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Today I decided to try out another kernel RC with your pull request to Jens on top of it ... and I encountered this one:


[  438.029756] INFO: trying to register non-static key.
[  438.029759] the code is fine but needs lockdep annotation.
[  438.029760] turning off the locking correctness validator.
[  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
[  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
[  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
[  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
[  438.029799] Call Trace:
[  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
[  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
[  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
[  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
[  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
[  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
[  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
[  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
[  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
[  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
[  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
[  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
[  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
[  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
[  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
[  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
[  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
[  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
[  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
[  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
[  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
[  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70

Doesn't seem too serious ... but nevertheless :-)

--

Sander


Monday, February 10, 2014, 8:54:02 PM, you wrote:

> On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
>> Hey Jens,
>> 
>> Please git pull the following branch:
>> 
>>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
>> 
>> which is based off v3.13-rc6. If you would like me to rebase it on
>> a different branch/tag I would be more than happy to do so.

> Older is fine, it's only an issue if you are ahead of the branch you
> want to go into.

>> 
>> The patches are all bug-fixes and hopefully can go in 3.14.
>> 
>> They deal with xen-blkback shutdown problems that cause memory leaks
>> as well as shutdown races. They should go to the stable tree, and if
>> you are OK with it I will ask them to backport those fixes.
>> 
>> There is also a fix to xen-blkfront to deal with an unexpected state
>> transition, and lastly a fix to the header where it was using
>> __aligned__ unnecessarily.

> Pulled!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:52:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:52:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFdE-0002qY-Lv; Tue, 11 Feb 2014 15:52:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WDFdE-0002qT-1O
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 15:52:20 +0000
Received: from [85.158.143.35:50338] by server-3.bemta-4.messagelabs.com id
	C0/FD-11539-3374AF25; Tue, 11 Feb 2014 15:52:19 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392133938!4873613!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27462 invoked from network); 11 Feb 2014 15:52:18 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 11 Feb 2014 15:52:18 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55127 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WDFcv-00075g-VS; Tue, 11 Feb 2014 16:52:02 +0100
Date: Tue, 11 Feb 2014 16:52:15 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <771950784.20140211165215@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140210195402.GA3924@kernel.dk>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
MIME-Version: 1.0
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
	to register non-static key. the code is fine but needs
	lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Today decided to tryout another kernel RC and your pull request to Jens on top of it .. and I encoutered this one:


[  438.029756] INFO: trying to register non-static key.
[  438.029759] the code is fine but needs lockdep annotation.
[  438.029760] turning off the locking correctness validator.
[  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
[  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
[  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
[  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
[  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
[  438.029799] Call Trace:
[  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
[  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
[  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
[  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
[  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
[  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
[  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
[  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
[  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
[  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
[  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
[  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
[  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
[  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
[  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
[  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
[  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
[  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
[  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
[  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
[  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
[  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70

Doesn't seem to serious .. but never the less :-)

--

Sander


Monday, February 10, 2014, 8:54:02 PM, you wrote:

> On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
>> Hey Jens,
>> 
>> Please git pull the following branch:
>> 
>>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
>> 
>> which is based off v3.13-rc6. If you would like me to rebase it on
>> a different branch/tag I would be more than happy to do so.

> Older is fine, it's only an issue if you are ahead of the branch you
> want to go into.

dd>> 
>> The patches are all bug-fixes and hopefully can go in 3.14.
>> 
>> They deal with xen-blkback shutdown and fix memory leaks
>> as well as shutdown races. They should go to the stable tree and if you
>> are OK with it I will ask for those fixes to be backported.
>> 
>> There is also a fix to xen-blkfront to deal with unexpected state
>> transition. And lastly a fix to the header where it was using the
>> __aligned__ unnecessarily.

> Pulled!



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:54:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFfG-0002xA-I1; Tue, 11 Feb 2014 15:54:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFfE-0002x2-DC
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 15:54:24 +0000
Received: from [85.158.139.211:63246] by server-10.bemta-5.messagelabs.com id
	EE/A2-08578-FA74AF25; Tue, 11 Feb 2014 15:54:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392134060!3186484!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21353 invoked from network); 11 Feb 2014 15:54:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:54:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99828068"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:54:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:54:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDFf9-0000g2-L7;
	Tue, 11 Feb 2014 15:54:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDFf9-0003Nl-3f;
	Tue, 11 Feb 2014 15:54:19 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24839-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 15:54:19 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24839: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2162330747615774108=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2162330747615774108==
Content-Type: text/plain

flight 24839 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24839/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  06bbcaf48d09c18a41c482866941ddd5d2846b44
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Eric Houby <ehouby@yahoo.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=06bbcaf48d09c18a41c482866941ddd5d2846b44
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 06bbcaf48d09c18a41c482866941ddd5d2846b44
+ branch=xen-unstable
+ revision=06bbcaf48d09c18a41c482866941ddd5d2846b44
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 06bbcaf48d09c18a41c482866941ddd5d2846b44:master
Counting objects: 252, done.
Compressing objects: 100% (78/78), done.
Writing objects: 100% (204/204), 131.15 KiB, done.
Total 204 (delta 129), reused 200 (delta 125)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   fee6163..06bbcaf  06bbcaf48d09c18a41c482866941ddd5d2846b44 -> master


--===============2162330747615774108==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2162330747615774108==--

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:54:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFfG-0002xA-I1; Tue, 11 Feb 2014 15:54:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDFfE-0002x2-DC
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 15:54:24 +0000
Received: from [85.158.139.211:63246] by server-10.bemta-5.messagelabs.com id
	EE/A2-08578-FA74AF25; Tue, 11 Feb 2014 15:54:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392134060!3186484!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21353 invoked from network); 11 Feb 2014 15:54:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:54:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99828068"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 15:54:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 10:54:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDFf9-0000g2-L7;
	Tue, 11 Feb 2014 15:54:19 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDFf9-0003Nl-3f;
	Tue, 11 Feb 2014 15:54:19 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24839-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 15:54:19 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24839: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2162330747615774108=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2162330747615774108==
Content-Type: text/plain

flight 24839 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24839/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  06bbcaf48d09c18a41c482866941ddd5d2846b44
baseline version:
 xen                  fee61634e8f3fec7c137f0d16478c64c7f355587

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Anup Patel <anup.patel@linaro.org>
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Eric Houby <ehouby@yahoo.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.gral@linaro.org>
  Julien Grall <julien.grall@linaro.org>
  Marek Marczykowski-GÃ³recki <marmarek@invisiblethingslab.com>
  Matthew Daley <mattd@bugfuzz.com>
  Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=06bbcaf48d09c18a41c482866941ddd5d2846b44
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 06bbcaf48d09c18a41c482866941ddd5d2846b44
+ branch=xen-unstable
+ revision=06bbcaf48d09c18a41c482866941ddd5d2846b44
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 06bbcaf48d09c18a41c482866941ddd5d2846b44:master
Counting objects: 252, done.
Compressing objects: 100% (78/78), done.
Writing objects: 100% (204/204), 131.15 KiB, done.
Total 204 (delta 129), reused 200 (delta 125)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   fee6163..06bbcaf  06bbcaf48d09c18a41c482866941ddd5d2846b44 -> master


--===============2162330747615774108==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2162330747615774108==--

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:56:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:56:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFh3-0003FR-C7; Tue, 11 Feb 2014 15:56:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDFh2-0003F5-At
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 15:56:16 +0000
Received: from [85.158.137.68:7904] by server-7.bemta-3.messagelabs.com id
	65/FF-13775-F184AF25; Tue, 11 Feb 2014 15:56:15 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392134172!1146665!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19032 invoked from network); 11 Feb 2014 15:56:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 15:56:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101638426"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 15:55:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 10:55:59 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[0.0.0.0])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDFgl-0005NT-2P;
	Tue, 11 Feb 2014 15:55:59 +0000
Message-ID: <52FA480D.9040707@eu.citrix.com>
Date: Tue, 11 Feb 2014 15:55:57 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
In-Reply-To: <52FA2C63020000780011B201@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/11/2014 12:57 PM, Jan Beulich wrote:
>>>> On 11.02.14 at 12:55, Tim Deegan <tim@xen.org> wrote:
>> At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
>>> What I'm missing here is what you think a proper solution is.
>>
>> A _proper_ solution would be for the IOMMU h/w to allow restartable
>> faults, so that we can do all the usual fault-driven virtual memory
>> operations with DMA. :)  In the meantime...
>
> Or maintaining the A/D bits for IOMMU side accesses too.
>
>>>   It seems we have:
>>>
>>> A. Share EPT/IOMMU tables, only do log-dirty tracking on the buffer
>>> being tracked, and hope the guest doesn't DMA into video ram; DMA
>>> causes IOMMU fault. (This really shouldn't crash the host under normal
>>> circumstances; if it does it's a hardware bug.)
>>
>> Note "hope" and "shouldn't" there. :)
>>
>>> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA into
>>> video ram.  DMA causes missed update to dirty bitmap, which will
>>> hopefully just cause screen corruption.
>>
>> Yep.  At a cost of about 0.2% in space and some extra bookkeeping
>> (for VMs that actually have devices passed through to them).
>> The extra bookkeeping could be expensive in some cases, but basically
>> all of those cases are already incompatible with IOMMU.
>>
>>> C. Do buffer scanning rather than dirty vram tracking (SLOW)
>>> D. Don't allow both a virtual video card and pass-through
>>
>> E. Share EPT and IOMMU tables until someone turns on log-dirty mode
>> and then split them out.  That one
>
> Wouldn't that be problematic in terms of memory being available,
> namely when using ballooning in Dom0?
>
>>> Given that most operating systems will probably *not* DMA into video
>>> ram, and that an IOMMU fault isn't *supposed* to be able to crash the
>>> host, 'A' seems like the most reasonable option to me.
>>
>> Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
>> seems to have most support from other people.  On that basis this
>> patch can have my Ack.
>
> I too would consider B better than A.

I think I got a bit distracted with the "A isn't really so bad" thing. 
Actually, if the overhead of not sharing tables isn't very high, then B 
isn't such a bad option.  In fact, B is what I expected Yang to submit 
when he originally described the problem.

I was going to say, from a release perspective, B is probably the safest 
option for now.  But on the other hand, if we've been testing sharing 
all this time, maybe switching back over to non-sharing whole-hog has 
the higher risk?

Anyway, both are at least probably equal risk-wise.  How easy is it to 
implement?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 15:57:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 15:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFhv-0003Tn-Cw; Tue, 11 Feb 2014 15:57:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WDFht-0003St-UZ
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 15:57:10 +0000
Received: from [85.158.143.35:54574] by server-2.bemta-4.messagelabs.com id
	77/CF-10891-5584AF25; Tue, 11 Feb 2014 15:57:09 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392134227!4856652!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13849 invoked from network); 11 Feb 2014 15:57:08 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 15:57:08 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1BFuuws010340
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 11 Feb 2014 15:56:57 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1BFuuuu004200
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 11 Feb 2014 15:56:56 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1BFuuA6021102; Tue, 11 Feb 2014 15:56:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 11 Feb 2014 07:56:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6DACB1C0972; Tue, 11 Feb 2014 10:56:50 -0500 (EST)
Date: Tue, 11 Feb 2014 10:56:50 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Sander Eikelenboom <linux@eikelenboom.it>, david.vrabel@citrix.com,
	roger.pau@citrix.com, mrushton@amazon.com, msw@amazon.com,
	boris.ostrovsky@oracle.com
Message-ID: <20140211155650.GA23026@phenom.dumpdata.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <771950784.20140211165215@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
> Hi Konrad,
> 
> Today I decided to try out another kernel RC with your pull request to Jens on top of it .. and I encountered this one:

Thank you for testing!

Could you provide the .config file please?

Did you see this _before_ the pull request with Jens? I presume
not, but just double checking?

And lastly - what were you doing when you triggered this? Just launching
a guest?

CC-ing Roger and other folks who were on the patches.

> 
> 
> [  438.029756] INFO: trying to register non-static key.
> [  438.029759] the code is fine but needs lockdep annotation.
> [  438.029760] turning off the locking correctness validator.
> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
> [  438.029799] Call Trace:
> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
> 
> Doesn't seem too serious .. but nevertheless :-)
> 
> --
> 
> Sander
> 
> 
> Monday, February 10, 2014, 8:54:02 PM, you wrote:
> 
> > On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
> >> Hey Jens,
> >> 
> >> Please git pull the following branch:
> >> 
> >>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
> >> 
> >> which is based off v3.13-rc6. If you would like me to rebase it on
> >> a different branch/tag I would be more than happy to do so.
> 
> > Older is fine, it's only an issue if you are ahead of the branch you
> > want to go into.
> 
> >> 
> >> The patches are all bug-fixes and hopefully can go in 3.14.
> >> 
> >> They deal with xen-blkback shutdown and cause memory leaks
> >> as well as shutdown races. They should go to stable tree and if you
> >> are OK with I will ask them to backport those fixes.
> >> 
> >> There is also a fix to xen-blkfront to deal with unexpected state
> >> transition. And lastly a fix to the header where it was using the
> >> __aligned__ unnecessarily.
> 
> > Pulled!
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
> 
> Doesn't seem too serious .. but nevertheless :-)
> 
> --
> 
> Sander
> 
> 
> Monday, February 10, 2014, 8:54:02 PM, you wrote:
> 
> > On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
> >> Hey Jens,
> >> 
> >> Please git pull the following branch:
> >> 
> >>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
> >> 
> >> which is based off v3.13-rc6. If you would like me to rebase it on
> >> a different branch/tag I would be more than happy to do so.
> 
> > Older is fine, it's only an issue if you are ahead of the branch you
> > want to go into.
> 
> >>
> >> The patches are all bug-fixes and hopefully can go in 3.14.
> >> 
> >> They deal with xen-blkback shutdown, which caused memory leaks
> >> as well as shutdown races. They should go to the stable tree, and if you
> >> are OK with it I will ask for those fixes to be backported.
> >> 
> >> There is also a fix to xen-blkfront to deal with unexpected state
> >> transition. And lastly a fix to the header where it was using the
> >> __aligned__ unnecessarily.
> 
> > Pulled!
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:08:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:08:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFsM-0005Hm-5X; Tue, 11 Feb 2014 16:07:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WDFsK-0005Hh-Mv
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 16:07:57 +0000
Received: from [85.158.137.68:65394] by server-15.bemta-3.messagelabs.com id
	7C/55-19263-BDA4AF25; Tue, 11 Feb 2014 16:07:55 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392134874!1148410!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24769 invoked from network); 11 Feb 2014 16:07:54 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 11 Feb 2014 16:07:54 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55302 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WDFs1-0008OY-Lf; Tue, 11 Feb 2014 17:07:38 +0100
Date: Tue, 11 Feb 2014 17:07:50 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1542261541.20140211170750@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140211155650.GA23026@phenom.dumpdata.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------0000A013B0B93F0E3"
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	david.vrabel@citrix.com, msw@amazon.com,
	boris.ostrovsky@oracle.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
	to register non-static key. the code is fine but needs
	lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------0000A013B0B93F0E3
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Tuesday, February 11, 2014, 4:56:50 PM, you wrote:

> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> Today I decided to try out another kernel RC with your pull request to Jens on top of it .. and I encountered this one:

> Thank you for testing!

> Could you provide the .config file please?

Attached

> Did you see this _before_ the pull request to Jens? I presume
> not, but just double-checking.

Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment)

> And lastly - what were you doing when you triggered this? Just launching
> a guest?

Nope, it triggers on guest shutdown ..


> CC-ing Roger and other folks who were on the patches.

>> 
>> 
>> [  438.029756] INFO: trying to register non-static key.
>> [  438.029759] the code is fine but needs lockdep annotation.
>> [  438.029760] turning off the locking correctness validator.
>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>> [  438.029799] Call Trace:
>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>> 
>> Doesn't seem too serious .. but nevertheless :-)
>> 
>> --
>> 
>> Sander
>> 
>> 
>> Monday, February 10, 2014, 8:54:02 PM, you wrote:
>> 
>> > On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
>> >> Hey Jens,
>> >> 
>> >> Please git pull the following branch:
>> >> 
>> >>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
>> >> 
>> >> which is based off v3.13-rc6. If you would like me to rebase it on
>> >> a different branch/tag I would be more than happy to do so.
>> 
>> > Older is fine, it's only an issue if you are ahead of the branch you
>> > want to go into.
>> 
>> >>
>> >> The patches are all bug-fixes and hopefully can go in 3.14.
>> >> 
>> >> They deal with xen-blkback shutdown, which caused memory leaks
>> >> as well as shutdown races. They should go to the stable tree, and if you
>> >> are OK with it I will ask for those fixes to be backported.
>> >> 
>> >> There is also a fix to xen-blkfront to deal with unexpected state
>> >> transition. And lastly a fix to the header where it was using the
>> >> __aligned__ unnecessarily.
>> 
>> > Pulled!
>> 
>> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/

------------0000A013B0B93F0E3
Content-Type: application/octet-stream;
 name=".config"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename=".config"

IwojIEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGZpbGU7IERPIE5PVCBFRElULgojIExpbnV4
L3g4Nl82NCAzLjE0LjAtcmMyLTIwMTQwMjExLXBjaXJlc2V0LW5ldC1idHJldmVydC14ZW5i
bG9jay1kbWFkZWJ1ZyBLZXJuZWwgQ29uZmlndXJhdGlvbgojCkNPTkZJR182NEJJVD15CkNP
TkZJR19YODZfNjQ9eQpDT05GSUdfWDg2PXkKQ09ORklHX0lOU1RSVUNUSU9OX0RFQ09ERVI9
eQpDT05GSUdfT1VUUFVUX0ZPUk1BVD0iZWxmNjQteDg2LTY0IgpDT05GSUdfQVJDSF9ERUZD
T05GSUc9ImFyY2gveDg2L2NvbmZpZ3MveDg2XzY0X2RlZmNvbmZpZyIKQ09ORklHX0xPQ0tE
RVBfU1VQUE9SVD15CkNPTkZJR19TVEFDS1RSQUNFX1NVUFBPUlQ9eQpDT05GSUdfSEFWRV9M
QVRFTkNZVE9QX1NVUFBPUlQ9eQpDT05GSUdfTU1VPXkKQ09ORklHX05FRURfRE1BX01BUF9T
VEFURT15CkNPTkZJR19ORUVEX1NHX0RNQV9MRU5HVEg9eQpDT05GSUdfR0VORVJJQ19JU0Ff
RE1BPXkKQ09ORklHX0dFTkVSSUNfQlVHPXkKQ09ORklHX0dFTkVSSUNfQlVHX1JFTEFUSVZF
X1BPSU5URVJTPXkKQ09ORklHX0dFTkVSSUNfSFdFSUdIVD15CkNPTkZJR19BUkNIX01BWV9I
QVZFX1BDX0ZEQz15CkNPTkZJR19SV1NFTV9YQ0hHQUREX0FMR09SSVRITT15CkNPTkZJR19H
RU5FUklDX0NBTElCUkFURV9ERUxBWT15CkNPTkZJR19BUkNIX0hBU19DUFVfUkVMQVg9eQpD
T05GSUdfQVJDSF9IQVNfQ0FDSEVfTElORV9TSVpFPXkKQ09ORklHX0FSQ0hfSEFTX0NQVV9B
VVRPUFJPQkU9eQpDT05GSUdfSEFWRV9TRVRVUF9QRVJfQ1BVX0FSRUE9eQpDT05GSUdfTkVF
RF9QRVJfQ1BVX0VNQkVEX0ZJUlNUX0NIVU5LPXkKQ09ORklHX05FRURfUEVSX0NQVV9QQUdF
X0ZJUlNUX0NIVU5LPXkKQ09ORklHX0FSQ0hfSElCRVJOQVRJT05fUE9TU0lCTEU9eQpDT05G
SUdfQVJDSF9TVVNQRU5EX1BPU1NJQkxFPXkKQ09ORklHX0FSQ0hfV0FOVF9IVUdFX1BNRF9T
SEFSRT15CkNPTkZJR19BUkNIX1dBTlRfR0VORVJBTF9IVUdFVExCPXkKQ09ORklHX1pPTkVf
RE1BMzI9eQpDT05GSUdfQVVESVRfQVJDSD15CkNPTkZJR19BUkNIX1NVUFBPUlRTX09QVElN
SVpFRF9JTkxJTklORz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX0RFQlVHX1BBR0VBTExPQz15
CkNPTkZJR19YODZfNjRfU01QPXkKQ09ORklHX1g4Nl9IVD15CkNPTkZJR19BUkNIX0hXRUlH
SFRfQ0ZMQUdTPSItZmNhbGwtc2F2ZWQtcmRpIC1mY2FsbC1zYXZlZC1yc2kgLWZjYWxsLXNh
dmVkLXJkeCAtZmNhbGwtc2F2ZWQtcmN4IC1mY2FsbC1zYXZlZC1yOCAtZmNhbGwtc2F2ZWQt
cjkgLWZjYWxsLXNhdmVkLXIxMCAtZmNhbGwtc2F2ZWQtcjExIgpDT05GSUdfQVJDSF9TVVBQ
T1JUU19VUFJPQkVTPXkKQ09ORklHX0RFRkNPTkZJR19MSVNUPSIvbGliL21vZHVsZXMvJFVO
QU1FX1JFTEVBU0UvLmNvbmZpZyIKQ09ORklHX0lSUV9XT1JLPXkKQ09ORklHX0JVSUxEVElN
RV9FWFRBQkxFX1NPUlQ9eQoKIwojIEdlbmVyYWwgc2V0dXAKIwpDT05GSUdfSU5JVF9FTlZf
QVJHX0xJTUlUPTMyCkNPTkZJR19DUk9TU19DT01QSUxFPSIiCiMgQ09ORklHX0NPTVBJTEVf
VEVTVCBpcyBub3Qgc2V0CkNPTkZJR19MT0NBTFZFUlNJT049IiIKIyBDT05GSUdfTE9DQUxW
RVJTSU9OX0FVVE8gaXMgbm90IHNldApDT05GSUdfSEFWRV9LRVJORUxfR1pJUD15CkNPTkZJ
R19IQVZFX0tFUk5FTF9CWklQMj15CkNPTkZJR19IQVZFX0tFUk5FTF9MWk1BPXkKQ09ORklH
X0hBVkVfS0VSTkVMX1haPXkKQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15CkNPTkZJR19IQVZF
X0tFUk5FTF9MWjQ9eQpDT05GSUdfS0VSTkVMX0daSVA9eQojIENPTkZJR19LRVJORUxfQlpJ
UDIgaXMgbm90IHNldAojIENPTkZJR19LRVJORUxfTFpNQSBpcyBub3Qgc2V0CiMgQ09ORklH
X0tFUk5FTF9YWiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFUk5FTF9MWk8gaXMgbm90IHNldAoj
IENPTkZJR19LRVJORUxfTFo0IGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfSE9TVE5BTUU9
Iihub25lKSIKQ09ORklHX1NXQVA9eQpDT05GSUdfU1lTVklQQz15CkNPTkZJR19TWVNWSVBD
X1NZU0NUTD15CiMgQ09ORklHX1BPU0lYX01RVUVVRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZI
QU5ETEUgaXMgbm90IHNldApDT05GSUdfQVVESVQ9eQpDT05GSUdfQVVESVRTWVNDQUxMPXkK
Q09ORklHX0FVRElUX1dBVENIPXkKQ09ORklHX0FVRElUX1RSRUU9eQoKIwojIElSUSBzdWJz
eXN0ZW0KIwpDT05GSUdfR0VORVJJQ19JUlFfUFJPQkU9eQpDT05GSUdfR0VORVJJQ19JUlFf
U0hPVz15CkNPTkZJR19HRU5FUklDX1BFTkRJTkdfSVJRPXkKQ09ORklHX0lSUV9ET01BSU49
eQojIENPTkZJR19JUlFfRE9NQUlOX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklHX0lSUV9GT1JD
RURfVEhSRUFESU5HPXkKQ09ORklHX1NQQVJTRV9JUlE9eQpDT05GSUdfQ0xPQ0tTT1VSQ0Vf
V0FUQ0hET0c9eQpDT05GSUdfQVJDSF9DTE9DS1NPVVJDRV9EQVRBPXkKQ09ORklHX0dFTkVS
SUNfVElNRV9WU1lTQ0FMTD15CkNPTkZJR19HRU5FUklDX0NMT0NLRVZFTlRTPXkKQ09ORklH
X0dFTkVSSUNfQ0xPQ0tFVkVOVFNfQlVJTEQ9eQpDT05GSUdfR0VORVJJQ19DTE9DS0VWRU5U
U19CUk9BRENBU1Q9eQpDT05GSUdfR0VORVJJQ19DTE9DS0VWRU5UU19NSU5fQURKVVNUPXkK
Q09ORklHX0dFTkVSSUNfQ01PU19VUERBVEU9eQoKIwojIFRpbWVycyBzdWJzeXN0ZW0KIwpD
T05GSUdfVElDS19PTkVTSE9UPXkKQ09ORklHX05PX0haX0NPTU1PTj15CiMgQ09ORklHX0ha
X1BFUklPRElDIGlzIG5vdCBzZXQKQ09ORklHX05PX0haX0lETEU9eQojIENPTkZJR19OT19I
Wl9GVUxMIGlzIG5vdCBzZXQKQ09ORklHX05PX0haPXkKQ09ORklHX0hJR0hfUkVTX1RJTUVS
Uz15CgojCiMgQ1BVL1Rhc2sgdGltZSBhbmQgc3RhdHMgYWNjb3VudGluZwojCkNPTkZJR19U
SUNLX0NQVV9BQ0NPVU5USU5HPXkKIyBDT05GSUdfVklSVF9DUFVfQUNDT1VOVElOR19HRU4g
aXMgbm90IHNldAojIENPTkZJR19JUlFfVElNRV9BQ0NPVU5USU5HIGlzIG5vdCBzZXQKQ09O
RklHX0JTRF9QUk9DRVNTX0FDQ1Q9eQojIENPTkZJR19CU0RfUFJPQ0VTU19BQ0NUX1YzIGlz
IG5vdCBzZXQKQ09ORklHX1RBU0tTVEFUUz15CkNPTkZJR19UQVNLX0RFTEFZX0FDQ1Q9eQpD
T05GSUdfVEFTS19YQUNDVD15CkNPTkZJR19UQVNLX0lPX0FDQ09VTlRJTkc9eQoKIwojIFJD
VSBTdWJzeXN0ZW0KIwpDT05GSUdfVFJFRV9SQ1U9eQojIENPTkZJR19QUkVFTVBUX1JDVSBp
cyBub3Qgc2V0CkNPTkZJR19SQ1VfU1RBTExfQ09NTU9OPXkKIyBDT05GSUdfUkNVX1VTRVJf
UVMgaXMgbm90IHNldApDT05GSUdfUkNVX0ZBTk9VVD02NApDT05GSUdfUkNVX0ZBTk9VVF9M
RUFGPTE2CiMgQ09ORklHX1JDVV9GQU5PVVRfRVhBQ1QgaXMgbm90IHNldApDT05GSUdfUkNV
X0ZBU1RfTk9fSFo9eQojIENPTkZJR19UUkVFX1JDVV9UUkFDRSBpcyBub3Qgc2V0CiMgQ09O
RklHX1JDVV9OT0NCX0NQVSBpcyBub3Qgc2V0CkNPTkZJR19JS0NPTkZJRz15CiMgQ09ORklH
X0lLQ09ORklHX1BST0MgaXMgbm90IHNldApDT05GSUdfTE9HX0JVRl9TSElGVD0xOApDT05G
SUdfSEFWRV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX05V
TUFfQkFMQU5DSU5HPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfSU5UMTI4PXkKQ09ORklHX0FS
Q0hfV0FOVFNfUFJPVF9OVU1BX1BST1RfTk9ORT15CiMgQ09ORklHX05VTUFfQkFMQU5DSU5H
IGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUFM9eQojIENPTkZJR19DR1JPVVBfREVCVUcgaXMg
bm90IHNldApDT05GSUdfQ0dST1VQX0ZSRUVaRVI9eQojIENPTkZJR19DR1JPVVBfREVWSUNF
IGlzIG5vdCBzZXQKQ09ORklHX0NQVVNFVFM9eQpDT05GSUdfUFJPQ19QSURfQ1BVU0VUPXkK
Q09ORklHX0NHUk9VUF9DUFVBQ0NUPXkKQ09ORklHX1JFU09VUkNFX0NPVU5URVJTPXkKIyBD
T05GSUdfTUVNQ0cgaXMgbm90IHNldAojIENPTkZJR19DR1JPVVBfSFVHRVRMQiBpcyBub3Qg
c2V0CiMgQ09ORklHX0NHUk9VUF9QRVJGIGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUF9TQ0hF
RD15CkNPTkZJR19GQUlSX0dST1VQX1NDSEVEPXkKIyBDT05GSUdfQ0ZTX0JBTkRXSURUSCBp
cyBub3Qgc2V0CiMgQ09ORklHX1JUX0dST1VQX1NDSEVEIGlzIG5vdCBzZXQKQ09ORklHX0JM
S19DR1JPVVA9eQojIENPTkZJR19ERUJVR19CTEtfQ0dST1VQIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hFQ0tQT0lOVF9SRVNUT1JFIGlzIG5vdCBzZXQKQ09ORklHX05BTUVTUEFDRVM9eQpD
T05GSUdfVVRTX05TPXkKQ09ORklHX0lQQ19OUz15CiMgQ09ORklHX1VTRVJfTlMgaXMgbm90
IHNldApDT05GSUdfUElEX05TPXkKQ09ORklHX05FVF9OUz15CkNPTkZJR19TQ0hFRF9BVVRP
R1JPVVA9eQojIENPTkZJR19TWVNGU19ERVBSRUNBVEVEIGlzIG5vdCBzZXQKIyBDT05GSUdf
UkVMQVkgaXMgbm90IHNldApDT05GSUdfQkxLX0RFVl9JTklUUkQ9eQpDT05GSUdfSU5JVFJB
TUZTX1NPVVJDRT0iIgpDT05GSUdfUkRfR1pJUD15CkNPTkZJR19SRF9CWklQMj15CkNPTkZJ
R19SRF9MWk1BPXkKQ09ORklHX1JEX1haPXkKQ09ORklHX1JEX0xaTz15CkNPTkZJR19SRF9M
WjQ9eQojIENPTkZJR19DQ19PUFRJTUlaRV9GT1JfU0laRSBpcyBub3Qgc2V0CkNPTkZJR19T
WVNDVEw9eQpDT05GSUdfQU5PTl9JTk9ERVM9eQpDT05GSUdfSEFWRV9VSUQxNj15CkNPTkZJ
R19TWVNDVExfRVhDRVBUSU9OX1RSQUNFPXkKQ09ORklHX0hBVkVfUENTUEtSX1BMQVRGT1JN
PXkKIyBDT05GSUdfRVhQRVJUIGlzIG5vdCBzZXQKQ09ORklHX1VJRDE2PXkKIyBDT05GSUdf
U1lTQ1RMX1NZU0NBTEwgaXMgbm90IHNldApDT05GSUdfS0FMTFNZTVM9eQpDT05GSUdfS0FM
TFNZTVNfQUxMPXkKQ09ORklHX1BSSU5USz15CkNPTkZJR19CVUc9eQpDT05GSUdfRUxGX0NP
UkU9eQpDT05GSUdfUENTUEtSX1BMQVRGT1JNPXkKQ09ORklHX0JBU0VfRlVMTD15CkNPTkZJ
R19GVVRFWD15CkNPTkZJR19FUE9MTD15CkNPTkZJR19TSUdOQUxGRD15CkNPTkZJR19USU1F
UkZEPXkKQ09ORklHX0VWRU5URkQ9eQpDT05GSUdfU0hNRU09eQpDT05GSUdfQUlPPXkKQ09O
RklHX1BDSV9RVUlSS1M9eQojIENPTkZJR19FTUJFRERFRCBpcyBub3Qgc2V0CkNPTkZJR19I
QVZFX1BFUkZfRVZFTlRTPXkKCiMKIyBLZXJuZWwgUGVyZm9ybWFuY2UgRXZlbnRzIEFuZCBD
b3VudGVycwojCkNPTkZJR19QRVJGX0VWRU5UUz15CiMgQ09ORklHX0RFQlVHX1BFUkZfVVNF
X1ZNQUxMT0MgaXMgbm90IHNldApDT05GSUdfVk1fRVZFTlRfQ09VTlRFUlM9eQpDT05GSUdf
U0xVQl9ERUJVRz15CiMgQ09ORklHX0NPTVBBVF9CUksgaXMgbm90IHNldAojIENPTkZJR19T
TEFCIGlzIG5vdCBzZXQKQ09ORklHX1NMVUI9eQpDT05GSUdfU0xVQl9DUFVfUEFSVElBTD15
CiMgQ09ORklHX1BST0ZJTElORyBpcyBub3Qgc2V0CkNPTkZJR19UUkFDRVBPSU5UUz15CkNP
TkZJR19IQVZFX09QUk9GSUxFPXkKQ09ORklHX09QUk9GSUxFX05NSV9USU1FUj15CiMgQ09O
RklHX0tQUk9CRVMgaXMgbm90IHNldApDT05GSUdfSlVNUF9MQUJFTD15CiMgQ09ORklHX0hB
VkVfNjRCSVRfQUxJR05FRF9BQ0NFU1MgaXMgbm90IHNldApDT05GSUdfSEFWRV9FRkZJQ0lF
TlRfVU5BTElHTkVEX0FDQ0VTUz15CkNPTkZJR19BUkNIX1VTRV9CVUlMVElOX0JTV0FQPXkK
Q09ORklHX0hBVkVfSU9SRU1BUF9QUk9UPXkKQ09ORklHX0hBVkVfS1BST0JFUz15CkNPTkZJ
R19IQVZFX0tSRVRQUk9CRVM9eQpDT05GSUdfSEFWRV9PUFRQUk9CRVM9eQpDT05GSUdfSEFW
RV9LUFJPQkVTX09OX0ZUUkFDRT15CkNPTkZJR19IQVZFX0FSQ0hfVFJBQ0VIT09LPXkKQ09O
RklHX0hBVkVfRE1BX0FUVFJTPXkKQ09ORklHX0dFTkVSSUNfU01QX0lETEVfVEhSRUFEPXkK
Q09ORklHX0hBVkVfUkVHU19BTkRfU1RBQ0tfQUNDRVNTX0FQST15CkNPTkZJR19IQVZFX0RN
QV9BUElfREVCVUc9eQpDT05GSUdfSEFWRV9IV19CUkVBS1BPSU5UPXkKQ09ORklHX0hBVkVf
TUlYRURfQlJFQUtQT0lOVFNfUkVHUz15CkNPTkZJR19IQVZFX1VTRVJfUkVUVVJOX05PVElG
SUVSPXkKQ09ORklHX0hBVkVfUEVSRl9FVkVOVFNfTk1JPXkKQ09ORklHX0hBVkVfUEVSRl9S
RUdTPXkKQ09ORklHX0hBVkVfUEVSRl9VU0VSX1NUQUNLX0RVTVA9eQpDT05GSUdfSEFWRV9B
UkNIX0pVTVBfTEFCRUw9eQpDT05GSUdfQVJDSF9IQVZFX05NSV9TQUZFX0NNUFhDSEc9eQpD
T05GSUdfSEFWRV9BTElHTkVEX1NUUlVDVF9QQUdFPXkKQ09ORklHX0hBVkVfQ01QWENIR19M
T0NBTD15CkNPTkZJR19IQVZFX0NNUFhDSEdfRE9VQkxFPXkKQ09ORklHX0FSQ0hfV0FOVF9D
T01QQVRfSVBDX1BBUlNFX1ZFUlNJT049eQpDT05GSUdfQVJDSF9XQU5UX09MRF9DT01QQVRf
SVBDPXkKQ09ORklHX0hBVkVfQVJDSF9TRUNDT01QX0ZJTFRFUj15CkNPTkZJR19TRUNDT01Q
X0ZJTFRFUj15CkNPTkZJR19IQVZFX0NDX1NUQUNLUFJPVEVDVE9SPXkKIyBDT05GSUdfQ0Nf
U1RBQ0tQUk9URUNUT1IgaXMgbm90IHNldApDT05GSUdfQ0NfU1RBQ0tQUk9URUNUT1JfTk9O
RT15CiMgQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SX1JFR1VMQVIgaXMgbm90IHNldAojIENP
TkZJR19DQ19TVEFDS1BST1RFQ1RPUl9TVFJPTkcgaXMgbm90IHNldApDT05GSUdfSEFWRV9D
T05URVhUX1RSQUNLSU5HPXkKQ09ORklHX0hBVkVfVklSVF9DUFVfQUNDT1VOVElOR19HRU49
eQpDT05GSUdfSEFWRV9JUlFfVElNRV9BQ0NPVU5USU5HPXkKQ09ORklHX0hBVkVfQVJDSF9U
UkFOU1BBUkVOVF9IVUdFUEFHRT15CkNPTkZJR19IQVZFX0FSQ0hfU09GVF9ESVJUWT15CkNP
TkZJR19NT0RVTEVTX1VTRV9FTEZfUkVMQT15CkNPTkZJR19IQVZFX0lSUV9FWElUX09OX0lS
UV9TVEFDSz15CkNPTkZJR19PTERfU0lHU1VTUEVORDM9eQpDT05GSUdfQ09NUEFUX09MRF9T
SUdBQ1RJT049eQoKIwojIEdDT1YtYmFzZWQga2VybmVsIHByb2ZpbGluZwojCiMgQ09ORklH
X0dDT1ZfS0VSTkVMIGlzIG5vdCBzZXQKIyBDT05GSUdfSEFWRV9HRU5FUklDX0RNQV9DT0hF
UkVOVCBpcyBub3Qgc2V0CkNPTkZJR19TTEFCSU5GTz15CkNPTkZJR19SVF9NVVRFWEVTPXkK
Q09ORklHX0JBU0VfU01BTEw9MAojIENPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HIGlz
IG5vdCBzZXQKQ09ORklHX01PRFVMRVM9eQojIENPTkZJR19NT0RVTEVfRk9SQ0VfTE9BRCBp
cyBub3Qgc2V0CkNPTkZJR19NT0RVTEVfVU5MT0FEPXkKIyBDT05GSUdfTU9EVUxFX0ZPUkNF
X1VOTE9BRCBpcyBub3Qgc2V0CiMgQ09ORklHX01PRFZFUlNJT05TIGlzIG5vdCBzZXQKIyBD
T05GSUdfTU9EVUxFX1NSQ1ZFUlNJT05fQUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9EVUxF
X1NJRyBpcyBub3Qgc2V0CkNPTkZJR19TVE9QX01BQ0hJTkU9eQpDT05GSUdfQkxPQ0s9eQpD
T05GSUdfQkxLX0RFVl9CU0c9eQojIENPTkZJR19CTEtfREVWX0JTR0xJQiBpcyBub3Qgc2V0
CkNPTkZJR19CTEtfREVWX0lOVEVHUklUWT15CiMgQ09ORklHX0JMS19ERVZfVEhST1RUTElO
RyBpcyBub3Qgc2V0CiMgQ09ORklHX0JMS19DTURMSU5FX1BBUlNFUiBpcyBub3Qgc2V0Cgoj
CiMgUGFydGl0aW9uIFR5cGVzCiMKQ09ORklHX1BBUlRJVElPTl9BRFZBTkNFRD15CiMgQ09O
RklHX0FDT1JOX1BBUlRJVElPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0FJWF9QQVJUSVRJT04g
aXMgbm90IHNldApDT05GSUdfT1NGX1BBUlRJVElPTj15CkNPTkZJR19BTUlHQV9QQVJUSVRJ
T049eQojIENPTkZJR19BVEFSSV9QQVJUSVRJT04gaXMgbm90IHNldApDT05GSUdfTUFDX1BB
UlRJVElPTj15CkNPTkZJR19NU0RPU19QQVJUSVRJT049eQpDT05GSUdfQlNEX0RJU0tMQUJF
TD15CkNPTkZJR19NSU5JWF9TVUJQQVJUSVRJT049eQpDT05GSUdfU09MQVJJU19YODZfUEFS
VElUSU9OPXkKQ09ORklHX1VOSVhXQVJFX0RJU0tMQUJFTD15CiMgQ09ORklHX0xETV9QQVJU
SVRJT04gaXMgbm90IHNldApDT05GSUdfU0dJX1BBUlRJVElPTj15CiMgQ09ORklHX1VMVFJJ
WF9QQVJUSVRJT04gaXMgbm90IHNldApDT05GSUdfU1VOX1BBUlRJVElPTj15CkNPTkZJR19L
QVJNQV9QQVJUSVRJT049eQpDT05GSUdfRUZJX1BBUlRJVElPTj15CiMgQ09ORklHX1NZU1Y2
OF9QQVJUSVRJT04gaXMgbm90IHNldAojIENPTkZJR19DTURMSU5FX1BBUlRJVElPTiBpcyBu
b3Qgc2V0CkNPTkZJR19CTE9DS19DT01QQVQ9eQoKIwojIElPIFNjaGVkdWxlcnMKIwpDT05G
SUdfSU9TQ0hFRF9OT09QPXkKQ09ORklHX0lPU0NIRURfREVBRExJTkU9eQpDT05GSUdfSU9T
Q0hFRF9DRlE9eQpDT05GSUdfQ0ZRX0dST1VQX0lPU0NIRUQ9eQojIENPTkZJR19ERUZBVUxU
X0RFQURMSU5FIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfQ0ZRPXkKIyBDT05GSUdfREVG
QVVMVF9OT09QIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfSU9TQ0hFRD0iY2ZxIgpDT05G
SUdfVU5JTkxJTkVfU1BJTl9VTkxPQ0s9eQpDT05GSUdfRlJFRVpFUj15CgojCiMgUHJvY2Vz
c29yIHR5cGUgYW5kIGZlYXR1cmVzCiMKQ09ORklHX1pPTkVfRE1BPXkKQ09ORklHX1NNUD15
CkNPTkZJR19YODZfWDJBUElDPXkKIyBDT05GSUdfWDg2X01QUEFSU0UgaXMgbm90IHNldAoj
IENPTkZJR19YODZfRVhURU5ERURfUExBVEZPUk0gaXMgbm90IHNldAojIENPTkZJR19YODZf
SU5URUxfTFBTUyBpcyBub3Qgc2V0CkNPTkZJR19YODZfU1VQUE9SVFNfTUVNT1JZX0ZBSUxV
UkU9eQpDT05GSUdfU0NIRURfT01JVF9GUkFNRV9QT0lOVEVSPXkKQ09ORklHX0hZUEVSVklT
T1JfR1VFU1Q9eQpDT05GSUdfUEFSQVZJUlQ9eQpDT05GSUdfUEFSQVZJUlRfREVCVUc9eQpD
T05GSUdfUEFSQVZJUlRfU1BJTkxPQ0tTPXkKQ09ORklHX1hFTj15CkNPTkZJR19YRU5fRE9N
MD15CkNPTkZJR19YRU5fUFJJVklMRUdFRF9HVUVTVD15CkNPTkZJR19YRU5fUFZIVk09eQpD
T05GSUdfWEVOX01BWF9ET01BSU5fTUVNT1JZPTUwMApDT05GSUdfWEVOX1NBVkVfUkVTVE9S
RT15CkNPTkZJR19YRU5fREVCVUdfRlM9eQpDT05GSUdfWEVOX1BWSD15CiMgQ09ORklHX0tW
TV9HVUVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX1BBUkFWSVJUX1RJTUVfQUNDT1VOVElORyBp
cyBub3Qgc2V0CkNPTkZJR19QQVJBVklSVF9DTE9DSz15CkNPTkZJR19OT19CT09UTUVNPXkK
IyBDT05GSUdfTUVNVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX01LOCBpcyBub3Qgc2V0CiMg
Q09ORklHX01QU0MgaXMgbm90IHNldAojIENPTkZJR19NQ09SRTIgaXMgbm90IHNldAojIENP
TkZJR19NQVRPTSBpcyBub3Qgc2V0CkNPTkZJR19HRU5FUklDX0NQVT15CkNPTkZJR19YODZf
SU5URVJOT0RFX0NBQ0hFX1NISUZUPTYKQ09ORklHX1g4Nl9MMV9DQUNIRV9TSElGVD02CkNP
TkZJR19YODZfVFNDPXkKQ09ORklHX1g4Nl9DTVBYQ0hHNjQ9eQpDT05GSUdfWDg2X0NNT1Y9
eQpDT05GSUdfWDg2X01JTklNVU1fQ1BVX0ZBTUlMWT02NApDT05GSUdfWDg2X0RFQlVHQ1RM
TVNSPXkKQ09ORklHX0NQVV9TVVBfSU5URUw9eQpDT05GSUdfQ1BVX1NVUF9BTUQ9eQpDT05G
SUdfQ1BVX1NVUF9DRU5UQVVSPXkKQ09ORklHX0hQRVRfVElNRVI9eQpDT05GSUdfSFBFVF9F
TVVMQVRFX1JUQz15CkNPTkZJR19ETUk9eQpDT05GSUdfR0FSVF9JT01NVT15CiMgQ09ORklH
X0NBTEdBUllfSU9NTVUgaXMgbm90IHNldApDT05GSUdfU1dJT1RMQj15CkNPTkZJR19JT01N
VV9IRUxQRVI9eQojIENPTkZJR19NQVhTTVAgaXMgbm90IHNldApDT05GSUdfTlJfQ1BVUz04
CkNPTkZJR19TQ0hFRF9TTVQ9eQpDT05GSUdfU0NIRURfTUM9eQojIENPTkZJR19QUkVFTVBU
X05PTkUgaXMgbm90IHNldApDT05GSUdfUFJFRU1QVF9WT0xVTlRBUlk9eQojIENPTkZJR19Q
UkVFTVBUIGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9MT0NBTF9BUElDPXkKQ09ORklHX1g4Nl9J
T19BUElDPXkKQ09ORklHX1g4Nl9SRVJPVVRFX0ZPUl9CUk9LRU5fQk9PVF9JUlFTPXkKQ09O
RklHX1g4Nl9NQ0U9eQpDT05GSUdfWDg2X01DRV9JTlRFTD15CkNPTkZJR19YODZfTUNFX0FN
RD15CkNPTkZJR19YODZfTUNFX1RIUkVTSE9MRD15CiMgQ09ORklHX1g4Nl9NQ0VfSU5KRUNU
IGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9USEVSTUFMX1ZFQ1RPUj15CiMgQ09ORklHX0k4SyBp
cyBub3Qgc2V0CiMgQ09ORklHX01JQ1JPQ09ERSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JP
Q09ERV9JTlRFTF9FQVJMWSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JPQ09ERV9BTURfRUFS
TFkgaXMgbm90IHNldApDT05GSUdfWDg2X01TUj15CkNPTkZJR19YODZfQ1BVSUQ9eQpDT05G
SUdfQVJDSF9QSFlTX0FERFJfVF82NEJJVD15CkNPTkZJR19BUkNIX0RNQV9BRERSX1RfNjRC
SVQ9eQpDT05GSUdfRElSRUNUX0dCUEFHRVM9eQpDT05GSUdfTlVNQT15CkNPTkZJR19BTURf
TlVNQT15CkNPTkZJR19YODZfNjRfQUNQSV9OVU1BPXkKQ09ORklHX05PREVTX1NQQU5fT1RI
RVJfTk9ERVM9eQojIENPTkZJR19OVU1BX0VNVSBpcyBub3Qgc2V0CkNPTkZJR19OT0RFU19T
SElGVD04CkNPTkZJR19BUkNIX1NQQVJTRU1FTV9FTkFCTEU9eQpDT05GSUdfQVJDSF9TUEFS
U0VNRU1fREVGQVVMVD15CkNPTkZJR19BUkNIX1NFTEVDVF9NRU1PUllfTU9ERUw9eQpDT05G
SUdfQVJDSF9QUk9DX0tDT1JFX1RFWFQ9eQpDT05GSUdfSUxMRUdBTF9QT0lOVEVSX1ZBTFVF
PTB4ZGVhZDAwMDAwMDAwMDAwMApDT05GSUdfU0VMRUNUX01FTU9SWV9NT0RFTD15CkNPTkZJ
R19TUEFSU0VNRU1fTUFOVUFMPXkKQ09ORklHX1NQQVJTRU1FTT15CkNPTkZJR19ORUVEX01V
TFRJUExFX05PREVTPXkKQ09ORklHX0hBVkVfTUVNT1JZX1BSRVNFTlQ9eQpDT05GSUdfU1BB
UlNFTUVNX0VYVFJFTUU9eQpDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkKQ09O
RklHX1NQQVJTRU1FTV9BTExPQ19NRU1fTUFQX1RPR0VUSEVSPXkKQ09ORklHX1NQQVJTRU1F
TV9WTUVNTUFQPXkKQ09ORklHX0hBVkVfTUVNQkxPQ0s9eQpDT05GSUdfSEFWRV9NRU1CTE9D
S19OT0RFX01BUD15CkNPTkZJR19BUkNIX0RJU0NBUkRfTUVNQkxPQ0s9eQojIENPTkZJR19N
T1ZBQkxFX05PREUgaXMgbm90IHNldAojIENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RF
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUVNT1JZX0hPVFBMVUcgaXMgbm90IHNldApDT05GSUdf
UEFHRUZMQUdTX0VYVEVOREVEPXkKQ09ORklHX1NQTElUX1BUTE9DS19DUFVTPTQKQ09ORklH
X0FSQ0hfRU5BQkxFX1NQTElUX1BNRF9QVExPQ0s9eQpDT05GSUdfQ09NUEFDVElPTj15CkNP
TkZJR19NSUdSQVRJT049eQpDT05GSUdfUEhZU19BRERSX1RfNjRCSVQ9eQpDT05GSUdfWk9O
RV9ETUFfRkxBRz0xCkNPTkZJR19CT1VOQ0U9eQpDT05GSUdfTkVFRF9CT1VOQ0VfUE9PTD15
CkNPTkZJR19WSVJUX1RPX0JVUz15CkNPTkZJR19NTVVfTk9USUZJRVI9eQojIENPTkZJR19L
U00gaXMgbm90IHNldApDT05GSUdfREVGQVVMVF9NTUFQX01JTl9BRERSPTQwOTYKQ09ORklH
X0FSQ0hfU1VQUE9SVFNfTUVNT1JZX0ZBSUxVUkU9eQojIENPTkZJR19NRU1PUllfRkFJTFVS
RSBpcyBub3Qgc2V0CkNPTkZJR19UUkFOU1BBUkVOVF9IVUdFUEFHRT15CkNPTkZJR19UUkFO
U1BBUkVOVF9IVUdFUEFHRV9BTFdBWVM9eQojIENPTkZJR19UUkFOU1BBUkVOVF9IVUdFUEFH
RV9NQURWSVNFIGlzIG5vdCBzZXQKQ09ORklHX0NST1NTX01FTU9SWV9BVFRBQ0g9eQojIENP
TkZJR19DTEVBTkNBQ0hFIGlzIG5vdCBzZXQKIyBDT05GSUdfRlJPTlRTV0FQIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQ01BIGlzIG5vdCBzZXQKIyBDT05GSUdfWkJVRCBpcyBub3Qgc2V0CiMg
Q09ORklHX1pTTUFMTE9DIGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NPUlJV
UFRJT049eQpDT05GSUdfWDg2X0JPT1RQQVJBTV9NRU1PUllfQ09SUlVQVElPTl9DSEVDSz15
CkNPTkZJR19YODZfUkVTRVJWRV9MT1c9NjQKQ09ORklHX01UUlI9eQpDT05GSUdfTVRSUl9T
QU5JVElaRVI9eQpDT05GSUdfTVRSUl9TQU5JVElaRVJfRU5BQkxFX0RFRkFVTFQ9MApDT05G
SUdfTVRSUl9TQU5JVElaRVJfU1BBUkVfUkVHX05SX0RFRkFVTFQ9MQpDT05GSUdfWDg2X1BB
VD15CkNPTkZJR19BUkNIX1VTRVNfUEdfVU5DQUNIRUQ9eQpDT05GSUdfQVJDSF9SQU5ET009
eQpDT05GSUdfWDg2X1NNQVA9eQojIENPTkZJR19FRkkgaXMgbm90IHNldApDT05GSUdfU0VD
Q09NUD15CiMgQ09ORklHX0haXzEwMCBpcyBub3Qgc2V0CiMgQ09ORklHX0haXzI1MCBpcyBu
b3Qgc2V0CkNPTkZJR19IWl8zMDA9eQojIENPTkZJR19IWl8xMDAwIGlzIG5vdCBzZXQKQ09O
RklHX0haPTMwMApDT05GSUdfU0NIRURfSFJUSUNLPXkKQ09ORklHX0tFWEVDPXkKQ09ORklH
X0NSQVNIX0RVTVA9eQpDT05GSUdfUEhZU0lDQUxfU1RBUlQ9MHgxMDAwMDAwCkNPTkZJR19S
RUxPQ0FUQUJMRT15CiMgQ09ORklHX1JBTkRPTUlaRV9CQVNFIGlzIG5vdCBzZXQKQ09ORklH
X1BIWVNJQ0FMX0FMSUdOPTB4MTAwMDAwMApDT05GSUdfSE9UUExVR19DUFU9eQojIENPTkZJ
R19CT09UUEFSQU1fSE9UUExVR19DUFUwIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfSE9U
UExVR19DUFUwIGlzIG5vdCBzZXQKIyBDT05GSUdfQ09NUEFUX1ZEU08gaXMgbm90IHNldAoj
IENPTkZJR19DTURMSU5FX0JPT0wgaXMgbm90IHNldApDT05GSUdfQVJDSF9FTkFCTEVfTUVN
T1JZX0hPVFBMVUc9eQpDT05GSUdfVVNFX1BFUkNQVV9OVU1BX05PREVfSUQ9eQoKIwojIFBv
d2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9ucwojCiMgQ09ORklHX1NVU1BFTkQgaXMg
bm90IHNldApDT05GSUdfSElCRVJOQVRFX0NBTExCQUNLUz15CiMgQ09ORklHX0hJQkVSTkFU
SU9OIGlzIG5vdCBzZXQKQ09ORklHX1BNX1NMRUVQPXkKQ09ORklHX1BNX1NMRUVQX1NNUD15
CiMgQ09ORklHX1BNX0FVVE9TTEVFUCBpcyBub3Qgc2V0CiMgQ09ORklHX1BNX1dBS0VMT0NL
UyBpcyBub3Qgc2V0CiMgQ09ORklHX1BNX1JVTlRJTUUgaXMgbm90IHNldApDT05GSUdfUE09
eQpDT05GSUdfUE1fREVCVUc9eQojIENPTkZJR19QTV9BRFZBTkNFRF9ERUJVRyBpcyBub3Qg
c2V0CkNPTkZJR19QTV9TTEVFUF9ERUJVRz15CiMgQ09ORklHX1BNX1RSQUNFX1JUQyBpcyBu
b3Qgc2V0CiMgQ09ORklHX1dRX1BPV0VSX0VGRklDSUVOVF9ERUZBVUxUIGlzIG5vdCBzZXQK
Q09ORklHX0FDUEk9eQpDT05GSUdfQUNQSV9QUk9DRlM9eQojIENPTkZJR19BQ1BJX0VDX0RF
QlVHRlMgaXMgbm90IHNldApDT05GSUdfQUNQSV9BQz15CkNPTkZJR19BQ1BJX0JBVFRFUlk9
eQpDT05GSUdfQUNQSV9CVVRUT049eQpDT05GSUdfQUNQSV9WSURFTz15CkNPTkZJR19BQ1BJ
X0ZBTj15CiMgQ09ORklHX0FDUElfRE9DSyBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX1BST0NF
U1NPUj15CkNPTkZJR19BQ1BJX0hPVFBMVUdfQ1BVPXkKQ09ORklHX0FDUElfUFJPQ0VTU09S
X0FHR1JFR0FUT1I9eQpDT05GSUdfQUNQSV9USEVSTUFMPXkKQ09ORklHX0FDUElfTlVNQT15
CkNPTkZJR19BQ1BJX0NVU1RPTV9EU0RUX0ZJTEU9IiIKIyBDT05GSUdfQUNQSV9DVVNUT01f
RFNEVCBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX0lOSVRSRF9UQUJMRV9PVkVSUklERT15CiMg
Q09ORklHX0FDUElfREVCVUcgaXMgbm90IHNldApDT05GSUdfQUNQSV9QQ0lfU0xPVD15CkNP
TkZJR19YODZfUE1fVElNRVI9eQpDT05GSUdfQUNQSV9DT05UQUlORVI9eQojIENPTkZJR19B
Q1BJX1NCUyBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX0hFRD15CiMgQ09ORklHX0FDUElfQ1VT
VE9NX01FVEhPRCBpcyBub3Qgc2V0CiMgQ09ORklHX0FDUElfQVBFSSBpcyBub3Qgc2V0CiMg
Q09ORklHX0FDUElfRVhUTE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfU0ZJIGlzIG5vdCBzZXQK
CiMKIyBDUFUgRnJlcXVlbmN5IHNjYWxpbmcKIwpDT05GSUdfQ1BVX0ZSRVE9eQpDT05GSUdf
Q1BVX0ZSRVFfR09WX0NPTU1PTj15CiMgQ09ORklHX0NQVV9GUkVRX1NUQVQgaXMgbm90IHNl
dAojIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9QRVJGT1JNQU5DRSBpcyBub3Qgc2V0
CkNPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9VU0VSU1BBQ0U9eQojIENPTkZJR19DUFVf
RlJFUV9ERUZBVUxUX0dPVl9PTkRFTUFORCBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVV9GUkVR
X0RFRkFVTFRfR09WX0NPTlNFUlZBVElWRSBpcyBub3Qgc2V0CkNPTkZJR19DUFVfRlJFUV9H
T1ZfUEVSRk9STUFOQ0U9eQojIENPTkZJR19DUFVfRlJFUV9HT1ZfUE9XRVJTQVZFIGlzIG5v
dCBzZXQKQ09ORklHX0NQVV9GUkVRX0dPVl9VU0VSU1BBQ0U9eQpDT05GSUdfQ1BVX0ZSRVFf
R09WX09OREVNQU5EPXkKIyBDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTlNFUlZBVElWRSBpcyBu
b3Qgc2V0CgojCiMgeDg2IENQVSBmcmVxdWVuY3kgc2NhbGluZyBkcml2ZXJzCiMKIyBDT05G
SUdfWDg2X0lOVEVMX1BTVEFURSBpcyBub3Qgc2V0CkNPTkZJR19YODZfUENDX0NQVUZSRVE9
eQpDT05GSUdfWDg2X0FDUElfQ1BVRlJFUT15CkNPTkZJR19YODZfQUNQSV9DUFVGUkVRX0NQ
Qj15CiMgQ09ORklHX1g4Nl9QT1dFUk5PV19LOCBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9B
TURfRlJFUV9TRU5TSVRJVklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9TUEVFRFNURVBf
Q0VOVFJJTk8gaXMgbm90IHNldAojIENPTkZJR19YODZfUDRfQ0xPQ0tNT0QgaXMgbm90IHNl
dAoKIwojIHNoYXJlZCBvcHRpb25zCiMKIyBDT05GSUdfWDg2X1NQRUVEU1RFUF9MSUIgaXMg
bm90IHNldAoKIwojIENQVSBJZGxlCiMKQ09ORklHX0NQVV9JRExFPXkKIyBDT05GSUdfQ1BV
X0lETEVfTVVMVElQTEVfRFJJVkVSUyBpcyBub3Qgc2V0CkNPTkZJR19DUFVfSURMRV9HT1Zf
TEFEREVSPXkKQ09ORklHX0NQVV9JRExFX0dPVl9NRU5VPXkKIyBDT05GSUdfQVJDSF9ORUVE
U19DUFVfSURMRV9DT1VQTEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5URUxfSURMRSBpcyBu
b3Qgc2V0CgojCiMgTWVtb3J5IHBvd2VyIHNhdmluZ3MKIwojIENPTkZJR19JNzMwMF9JRExF
IGlzIG5vdCBzZXQKCiMKIyBCdXMgb3B0aW9ucyAoUENJIGV0Yy4pCiMKQ09ORklHX1BDST15
CkNPTkZJR19QQ0lfRElSRUNUPXkKQ09ORklHX1BDSV9NTUNPTkZJRz15CkNPTkZJR19QQ0lf
WEVOPXkKQ09ORklHX1BDSV9ET01BSU5TPXkKQ09ORklHX1BDSUVQT1JUQlVTPXkKQ09ORklH
X0hPVFBMVUdfUENJX1BDSUU9eQpDT05GSUdfUENJRUFFUj15CkNPTkZJR19QQ0lFX0VDUkM9
eQpDT05GSUdfUENJRUFFUl9JTkpFQ1Q9eQpDT05GSUdfUENJRUFTUE09eQpDT05GSUdfUENJ
RUFTUE1fREVCVUc9eQpDT05GSUdfUENJRUFTUE1fREVGQVVMVD15CiMgQ09ORklHX1BDSUVB
U1BNX1BPV0VSU0FWRSBpcyBub3Qgc2V0CiMgQ09ORklHX1BDSUVBU1BNX1BFUkZPUk1BTkNF
IGlzIG5vdCBzZXQKQ09ORklHX1BDSV9NU0k9eQpDT05GSUdfUENJX0RFQlVHPXkKQ09ORklH
X1BDSV9SRUFMTE9DX0VOQUJMRV9BVVRPPXkKQ09ORklHX1BDSV9TVFVCPXkKQ09ORklHX1hF
Tl9QQ0lERVZfRlJPTlRFTkQ9eQpDT05GSUdfSFRfSVJRPXkKQ09ORklHX1BDSV9BVFM9eQpD
T05GSUdfUENJX0lPVj15CkNPTkZJR19QQ0lfUFJJPXkKQ09ORklHX1BDSV9QQVNJRD15CkNP
TkZJR19QQ0lfSU9BUElDPXkKQ09ORklHX1BDSV9MQUJFTD15CgojCiMgUENJIGhvc3QgY29u
dHJvbGxlciBkcml2ZXJzCiMKQ09ORklHX0lTQV9ETUFfQVBJPXkKQ09ORklHX0FNRF9OQj15
CiMgQ09ORklHX1BDQ0FSRCBpcyBub3Qgc2V0CkNPTkZJR19IT1RQTFVHX1BDST15CkNPTkZJ
R19IT1RQTFVHX1BDSV9BQ1BJPXkKQ09ORklHX0hPVFBMVUdfUENJX0FDUElfSUJNPXkKQ09O
RklHX0hPVFBMVUdfUENJX0NQQ0k9eQojIENPTkZJR19IT1RQTFVHX1BDSV9DUENJX1pUNTU1
MCBpcyBub3Qgc2V0CkNPTkZJR19IT1RQTFVHX1BDSV9DUENJX0dFTkVSSUM9eQpDT05GSUdf
SE9UUExVR19QQ0lfU0hQQz15CiMgQ09ORklHX1JBUElESU8gaXMgbm90IHNldAojIENPTkZJ
R19YODZfU1lTRkIgaXMgbm90IHNldAoKIwojIEV4ZWN1dGFibGUgZmlsZSBmb3JtYXRzIC8g
RW11bGF0aW9ucwojCkNPTkZJR19CSU5GTVRfRUxGPXkKQ09ORklHX0NPTVBBVF9CSU5GTVRf
RUxGPXkKQ09ORklHX0FSQ0hfQklORk1UX0VMRl9SQU5ET01JWkVfUElFPXkKQ09ORklHX0NP
UkVfRFVNUF9ERUZBVUxUX0VMRl9IRUFERVJTPXkKQ09ORklHX0JJTkZNVF9TQ1JJUFQ9eQoj
IENPTkZJR19IQVZFX0FPVVQgaXMgbm90IHNldApDT05GSUdfQklORk1UX01JU0M9eQpDT05G
SUdfQ09SRURVTVA9eQpDT05GSUdfSUEzMl9FTVVMQVRJT049eQojIENPTkZJR19JQTMyX0FP
VVQgaXMgbm90IHNldAojIENPTkZJR19YODZfWDMyIGlzIG5vdCBzZXQKQ09ORklHX0NPTVBB
VD15CkNPTkZJR19DT01QQVRfRk9SX1U2NF9BTElHTk1FTlQ9eQpDT05GSUdfU1lTVklQQ19D
T01QQVQ9eQpDT05GSUdfS0VZU19DT01QQVQ9eQpDT05GSUdfWDg2X0RFVl9ETUFfT1BTPXkK
Q09ORklHX05FVD15CgojCiMgTmV0d29ya2luZyBvcHRpb25zCiMKQ09ORklHX1BBQ0tFVD15
CiMgQ09ORklHX1BBQ0tFVF9ESUFHIGlzIG5vdCBzZXQKQ09ORklHX1VOSVg9eQojIENPTkZJ
R19VTklYX0RJQUcgaXMgbm90IHNldAojIENPTkZJR19YRlJNX1VTRVIgaXMgbm90IHNldAoj
IENPTkZJR19ORVRfS0VZIGlzIG5vdCBzZXQKQ09ORklHX0lORVQ9eQpDT05GSUdfSVBfTVVM
VElDQVNUPXkKQ09ORklHX0lQX0FEVkFOQ0VEX1JPVVRFUj15CiMgQ09ORklHX0lQX0ZJQl9U
UklFX1NUQVRTIGlzIG5vdCBzZXQKQ09ORklHX0lQX01VTFRJUExFX1RBQkxFUz15CkNPTkZJ
R19JUF9ST1VURV9NVUxUSVBBVEg9eQpDT05GSUdfSVBfUk9VVEVfVkVSQk9TRT15CkNPTkZJ
R19JUF9ST1VURV9DTEFTU0lEPXkKQ09ORklHX0lQX1BOUD15CkNPTkZJR19JUF9QTlBfREhD
UD15CkNPTkZJR19JUF9QTlBfQk9PVFA9eQpDT05GSUdfSVBfUE5QX1JBUlA9eQojIENPTkZJ
R19ORVRfSVBJUCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9JUEdSRV9ERU1VWCBpcyBub3Qg
c2V0CiMgQ09ORklHX05FVF9JUF9UVU5ORUwgaXMgbm90IHNldAojIENPTkZJR19JUF9NUk9V
VEUgaXMgbm90IHNldApDT05GSUdfU1lOX0NPT0tJRVM9eQojIENPTkZJR19JTkVUX0FIIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU5FVF9FU1AgaXMgbm90IHNldAojIENPTkZJR19JTkVUX0lQ
Q09NUCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9UVU5ORUwgaXMgbm90IHNldAoj
IENPTkZJR19JTkVUX1RVTk5FTCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9NT0RF
X1RSQU5TUE9SVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9NT0RFX1RVTk5FTCBp
cyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9NT0RFX0JFRVQgaXMgbm90IHNldApDT05G
SUdfSU5FVF9MUk89eQojIENPTkZJR19JTkVUX0RJQUcgaXMgbm90IHNldApDT05GSUdfVENQ
X0NPTkdfQURWQU5DRUQ9eQojIENPTkZJR19UQ1BfQ09OR19CSUMgaXMgbm90IHNldApDT05G
SUdfVENQX0NPTkdfQ1VCSUM9eQojIENPTkZJR19UQ1BfQ09OR19XRVNUV09PRCBpcyBub3Qg
c2V0CiMgQ09ORklHX1RDUF9DT05HX0hUQ1AgaXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09O
R19IU1RDUCBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX0hZQkxBIGlzIG5vdCBzZXQK
IyBDT05GSUdfVENQX0NPTkdfVkVHQVMgaXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09OR19T
Q0FMQUJMRSBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX0xQIGlzIG5vdCBzZXQKIyBD
T05GSUdfVENQX0NPTkdfVkVOTyBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX1lFQUgg
aXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09OR19JTExJTk9JUyBpcyBub3Qgc2V0CkNPTkZJ
R19ERUZBVUxUX0NVQklDPXkKIyBDT05GSUdfREVGQVVMVF9SRU5PIGlzIG5vdCBzZXQKQ09O
RklHX0RFRkFVTFRfVENQX0NPTkc9ImN1YmljIgojIENPTkZJR19UQ1BfTUQ1U0lHIGlzIG5v
dCBzZXQKIyBDT05GSUdfSVBWNiBpcyBub3Qgc2V0CkNPTkZJR19ORVRXT1JLX1NFQ01BUks9
eQojIENPTkZJR19ORVRXT1JLX1BIWV9USU1FU1RBTVBJTkcgaXMgbm90IHNldApDT05GSUdf
TkVURklMVEVSPXkKIyBDT05GSUdfTkVURklMVEVSX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklH
X05FVEZJTFRFUl9BRFZBTkNFRD15CkNPTkZJR19CUklER0VfTkVURklMVEVSPXkKCiMKIyBD
b3JlIE5ldGZpbHRlciBDb25maWd1cmF0aW9uCiMKQ09ORklHX05FVEZJTFRFUl9ORVRMSU5L
PXkKQ09ORklHX05FVEZJTFRFUl9ORVRMSU5LX0FDQ1Q9eQpDT05GSUdfTkVURklMVEVSX05F
VExJTktfUVVFVUU9eQpDT05GSUdfTkVURklMVEVSX05FVExJTktfTE9HPXkKQ09ORklHX05G
X0NPTk5UUkFDSz15CkNPTkZJR19ORl9DT05OVFJBQ0tfTUFSSz15CkNPTkZJR19ORl9DT05O
VFJBQ0tfU0VDTUFSSz15CkNPTkZJR19ORl9DT05OVFJBQ0tfUFJPQ0ZTPXkKQ09ORklHX05G
X0NPTk5UUkFDS19FVkVOVFM9eQojIENPTkZJR19ORl9DT05OVFJBQ0tfVElNRU9VVCBpcyBu
b3Qgc2V0CkNPTkZJR19ORl9DT05OVFJBQ0tfVElNRVNUQU1QPXkKIyBDT05GSUdfTkZfQ1Rf
UFJPVE9fRENDUCBpcyBub3Qgc2V0CkNPTkZJR19ORl9DVF9QUk9UT19HUkU9eQojIENPTkZJ
R19ORl9DVF9QUk9UT19TQ1RQIGlzIG5vdCBzZXQKIyBDT05GSUdfTkZfQ1RfUFJPVE9fVURQ
TElURSBpcyBub3Qgc2V0CiMgQ09ORklHX05GX0NPTk5UUkFDS19BTUFOREEgaXMgbm90IHNl
dApDT05GSUdfTkZfQ09OTlRSQUNLX0ZUUD15CkNPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15
CkNPTkZJR19ORl9DT05OVFJBQ0tfSVJDPXkKIyBDT05GSUdfTkZfQ09OTlRSQUNLX05FVEJJ
T1NfTlMgaXMgbm90IHNldAojIENPTkZJR19ORl9DT05OVFJBQ0tfU05NUCBpcyBub3Qgc2V0
CkNPTkZJR19ORl9DT05OVFJBQ0tfUFBUUD15CiMgQ09ORklHX05GX0NPTk5UUkFDS19TQU5F
IGlzIG5vdCBzZXQKQ09ORklHX05GX0NPTk5UUkFDS19TSVA9eQojIENPTkZJR19ORl9DT05O
VFJBQ0tfVEZUUCBpcyBub3Qgc2V0CkNPTkZJR19ORl9DVF9ORVRMSU5LPXkKIyBDT05GSUdf
TkZfQ1RfTkVUTElOS19USU1FT1VUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVURklMVEVSX05F
VExJTktfUVVFVUVfQ1QgaXMgbm90IHNldApDT05GSUdfTkZfTkFUPXkKQ09ORklHX05GX05B
VF9ORUVERUQ9eQojIENPTkZJR19ORl9OQVRfQU1BTkRBIGlzIG5vdCBzZXQKQ09ORklHX05G
X05BVF9GVFA9eQpDT05GSUdfTkZfTkFUX0lSQz15CkNPTkZJR19ORl9OQVRfU0lQPXkKIyBD
T05GSUdfTkZfTkFUX1RGVFAgaXMgbm90IHNldAojIENPTkZJR19ORl9UQUJMRVMgaXMgbm90
IHNldApDT05GSUdfTkVURklMVEVSX1hUQUJMRVM9eQoKIwojIFh0YWJsZXMgY29tYmluZWQg
bW9kdWxlcwojCkNPTkZJR19ORVRGSUxURVJfWFRfTUFSSz15CkNPTkZJR19ORVRGSUxURVJf
WFRfQ09OTk1BUks9eQojIENPTkZJR19ORVRGSUxURVJfWFRfU0VUIGlzIG5vdCBzZXQKCiMK
IyBYdGFibGVzIHRhcmdldHMKIwpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9BVURJVD15
CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0NIRUNLU1VNPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfQ0xBU1NJRlk9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9DT05O
TUFSSz15CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0NPTk5TRUNNQVJLPXkKIyBDT05G
SUdfTkVURklMVEVSX1hUX1RBUkdFVF9DVCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJf
WFRfVEFSR0VUX0RTQ1A9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ITD15CiMgQ09O
RklHX05FVEZJTFRFUl9YVF9UQVJHRVRfSE1BUksgaXMgbm90IHNldApDT05GSUdfTkVURklM
VEVSX1hUX1RBUkdFVF9JRExFVElNRVI9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9M
T0c9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9NQVJLPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfTkVUTUFQPXkKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfTkZMT0c9
eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORlFVRVVFPXkKIyBDT05GSUdfTkVURklM
VEVSX1hUX1RBUkdFVF9OT1RSQUNLIGlzIG5vdCBzZXQKQ09ORklHX05FVEZJTFRFUl9YVF9U
QVJHRVRfUkFURUVTVD15CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX1JFRElSRUNUPXkK
Q09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfVEVFPXkKIyBDT05GSUdfTkVURklMVEVSX1hU
X1RBUkdFVF9UUFJPWFkgaXMgbm90IHNldAojIENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VU
X1RSQUNFIGlzIG5vdCBzZXQKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfU0VDTUFSSz15
CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX1RDUE1TUz15CiMgQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfVENQT1BUU1RSSVAgaXMgbm90IHNldAoKIwojIFh0YWJsZXMgbWF0Y2hl
cwojCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQUREUlRZUEU9eQojIENPTkZJR19ORVRG
SUxURVJfWFRfTUFUQ0hfQlBGIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVURklMVEVSX1hUX01B
VENIX0NHUk9VUCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ0xVU1RF
Uj15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09NTUVOVD15CkNPTkZJR19ORVRGSUxU
RVJfWFRfTUFUQ0hfQ09OTkJZVEVTPXkKIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0NP
Tk5MQUJFTCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09OTkxJTUlU
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTUFSSz15CkNPTkZJR19ORVRGSUxU
RVJfWFRfTUFUQ0hfQ09OTlRSQUNLPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DUFU9
eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0RDQ1A9eQpDT05GSUdfTkVURklMVEVSX1hU
X01BVENIX0RFVkdST1VQPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9EU0NQPXkKQ09O
RklHX05FVEZJTFRFUl9YVF9NQVRDSF9FQ049eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENI
X0VTUD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEFTSExJTUlUPXkKQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9IRUxQRVI9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0hM
PXkKIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0lQQ09NUCBpcyBub3Qgc2V0CkNPTkZJ
R19ORVRGSUxURVJfWFRfTUFUQ0hfSVBSQU5HRT15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFU
Q0hfSVBWUz15CiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9MMlRQIGlzIG5vdCBzZXQK
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9MRU5HVEg9eQpDT05GSUdfTkVURklMVEVSX1hU
X01BVENIX0xJTUlUPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9NQUM9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX01BUks9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX01V
TFRJUE9SVD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTkZBQ0NUPXkKQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9PU0Y9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX09XTkVS
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9QSFlTREVWPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9QS1RUWVBFPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9RVU9UQT15
CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfUkFURUVTVD15CkNPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfUkVBTE09eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1JFQ0VOVD15CiMg
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TQ1RQIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVU
RklMVEVSX1hUX01BVENIX1NPQ0tFVCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfU1RBVEU9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1NUQVRJU1RJQz15CkNP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU1RSSU5HPXkKQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9UQ1BNU1M9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1RJTUU9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX1UzMj15CkNPTkZJR19JUF9TRVQ9eQpDT05GSUdfSVBfU0VU
X01BWD0yNTYKQ09ORklHX0lQX1NFVF9CSVRNQVBfSVA9eQpDT05GSUdfSVBfU0VUX0JJVE1B
UF9JUE1BQz15CkNPTkZJR19JUF9TRVRfQklUTUFQX1BPUlQ9eQpDT05GSUdfSVBfU0VUX0hB
U0hfSVA9eQpDT05GSUdfSVBfU0VUX0hBU0hfSVBQT1JUPXkKQ09ORklHX0lQX1NFVF9IQVNI
X0lQUE9SVElQPXkKQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVE5FVD15CkNPTkZJR19JUF9T
RVRfSEFTSF9ORVRQT1JUTkVUPXkKQ09ORklHX0lQX1NFVF9IQVNIX05FVD15CkNPTkZJR19J
UF9TRVRfSEFTSF9ORVRORVQ9eQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUUE9SVD15CkNPTkZJ
R19JUF9TRVRfSEFTSF9ORVRJRkFDRT15CkNPTkZJR19JUF9TRVRfTElTVF9TRVQ9eQpDT05G
SUdfSVBfVlM9eQojIENPTkZJR19JUF9WU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19JUF9W
U19UQUJfQklUUz0xMgoKIwojIElQVlMgdHJhbnNwb3J0IHByb3RvY29sIGxvYWQgYmFsYW5j
aW5nIHN1cHBvcnQKIwojIENPTkZJR19JUF9WU19QUk9UT19UQ1AgaXMgbm90IHNldAojIENP
TkZJR19JUF9WU19QUk9UT19VRFAgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19QUk9UT19B
SF9FU1AgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19QUk9UT19FU1AgaXMgbm90IHNldAoj
IENPTkZJR19JUF9WU19QUk9UT19BSCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX1BST1RP
X1NDVFAgaXMgbm90IHNldAoKIwojIElQVlMgc2NoZWR1bGVyCiMKIyBDT05GSUdfSVBfVlNf
UlIgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19XUlIgaXMgbm90IHNldAojIENPTkZJR19J
UF9WU19MQyBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX1dMQyBpcyBub3Qgc2V0CiMgQ09O
RklHX0lQX1ZTX0xCTEMgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19MQkxDUiBpcyBub3Qg
c2V0CiMgQ09ORklHX0lQX1ZTX0RIIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfVlNfU0ggaXMg
bm90IHNldAojIENPTkZJR19JUF9WU19TRUQgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19O
USBpcyBub3Qgc2V0CgojCiMgSVBWUyBTSCBzY2hlZHVsZXIKIwpDT05GSUdfSVBfVlNfU0hf
VEFCX0JJVFM9OAoKIwojIElQVlMgYXBwbGljYXRpb24gaGVscGVyCiMKQ09ORklHX0lQX1ZT
X05GQ1Q9eQoKIwojIElQOiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbgojCkNPTkZJR19ORl9E
RUZSQUdfSVBWND15CkNPTkZJR19ORl9DT05OVFJBQ0tfSVBWND15CkNPTkZJR19ORl9DT05O
VFJBQ0tfUFJPQ19DT01QQVQ9eQpDT05GSUdfSVBfTkZfSVBUQUJMRVM9eQpDT05GSUdfSVBf
TkZfTUFUQ0hfQUg9eQpDT05GSUdfSVBfTkZfTUFUQ0hfRUNOPXkKIyBDT05GSUdfSVBfTkZf
TUFUQ0hfUlBGSUxURVIgaXMgbm90IHNldApDT05GSUdfSVBfTkZfTUFUQ0hfVFRMPXkKQ09O
RklHX0lQX05GX0ZJTFRFUj15CkNPTkZJR19JUF9ORl9UQVJHRVRfUkVKRUNUPXkKIyBDT05G
SUdfSVBfTkZfVEFSR0VUX1NZTlBST1hZIGlzIG5vdCBzZXQKQ09ORklHX0lQX05GX1RBUkdF
VF9VTE9HPXkKQ09ORklHX05GX05BVF9JUFY0PXkKQ09ORklHX0lQX05GX1RBUkdFVF9NQVNR
VUVSQURFPXkKQ09ORklHX0lQX05GX1RBUkdFVF9ORVRNQVA9eQpDT05GSUdfSVBfTkZfVEFS
R0VUX1JFRElSRUNUPXkKQ09ORklHX05GX05BVF9QUk9UT19HUkU9eQpDT05GSUdfTkZfTkFU
X1BQVFA9eQpDT05GSUdfTkZfTkFUX0gzMjM9eQpDT05GSUdfSVBfTkZfTUFOR0xFPXkKIyBD
T05GSUdfSVBfTkZfVEFSR0VUX0NMVVNURVJJUCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX05G
X1RBUkdFVF9FQ04gaXMgbm90IHNldAojIENPTkZJR19JUF9ORl9UQVJHRVRfVFRMIGlzIG5v
dCBzZXQKQ09ORklHX0lQX05GX1JBVz15CiMgQ09ORklHX0lQX05GX0FSUFRBQkxFUyBpcyBu
b3Qgc2V0CkNPTkZJR19CUklER0VfTkZfRUJUQUJMRVM9eQojIENPTkZJR19CUklER0VfRUJU
X0JST1VURSBpcyBub3Qgc2V0CiMgQ09ORklHX0JSSURHRV9FQlRfVF9GSUxURVIgaXMgbm90
IHNldAojIENPTkZJR19CUklER0VfRUJUX1RfTkFUIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJ
REdFX0VCVF84MDJfMyBpcyBub3Qgc2V0CiMgQ09ORklHX0JSSURHRV9FQlRfQU1PTkcgaXMg
bm90IHNldAojIENPTkZJR19CUklER0VfRUJUX0FSUCBpcyBub3Qgc2V0CiMgQ09ORklHX0JS
SURHRV9FQlRfSVAgaXMgbm90IHNldAojIENPTkZJR19CUklER0VfRUJUX0xJTUlUIGlzIG5v
dCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9NQVJLIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJ
REdFX0VCVF9QS1RUWVBFIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9TVFAgaXMg
bm90IHNldAojIENPTkZJR19CUklER0VfRUJUX1ZMQU4gaXMgbm90IHNldAojIENPTkZJR19C
UklER0VfRUJUX0FSUFJFUExZIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9ETkFU
IGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9NQVJLX1QgaXMgbm90IHNldAojIENP
TkZJR19CUklER0VfRUJUX1JFRElSRUNUIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VC
VF9TTkFUIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9MT0cgaXMgbm90IHNldAoj
IENPTkZJR19CUklER0VfRUJUX1VMT0cgaXMgbm90IHNldAojIENPTkZJR19CUklER0VfRUJU
X05GTE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfRENDUCBpcyBub3Qgc2V0CiMgQ09ORklH
X0lQX1NDVFAgaXMgbm90IHNldAojIENPTkZJR19SRFMgaXMgbm90IHNldAojIENPTkZJR19U
SVBDIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRNIGlzIG5vdCBzZXQKIyBDT05GSUdfTDJUUCBp
cyBub3Qgc2V0CkNPTkZJR19TVFA9eQpDT05GSUdfQlJJREdFPXkKQ09ORklHX0JSSURHRV9J
R01QX1NOT09QSU5HPXkKQ09ORklHX0hBVkVfTkVUX0RTQT15CiMgQ09ORklHX1ZMQU5fODAy
MVEgaXMgbm90IHNldAojIENPTkZJR19ERUNORVQgaXMgbm90IHNldApDT05GSUdfTExDPXkK
IyBDT05GSUdfTExDMiBpcyBub3Qgc2V0CiMgQ09ORklHX0lQWCBpcyBub3Qgc2V0CiMgQ09O
RklHX0FUQUxLIGlzIG5vdCBzZXQKIyBDT05GSUdfWDI1IGlzIG5vdCBzZXQKIyBDT05GSUdf
TEFQQiBpcyBub3Qgc2V0CiMgQ09ORklHX1BIT05FVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lF
RUU4MDIxNTQgaXMgbm90IHNldApDT05GSUdfNkxPV1BBTl9JUEhDPXkKQ09ORklHX05FVF9T
Q0hFRD15CgojCiMgUXVldWVpbmcvU2NoZWR1bGluZwojCiMgQ09ORklHX05FVF9TQ0hfQ0JR
IGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9IVEIgaXMgbm90IHNldAojIENPTkZJR19O
RVRfU0NIX0hGU0MgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX1BSSU8gaXMgbm90IHNl
dAojIENPTkZJR19ORVRfU0NIX01VTFRJUSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hf
UkVEIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9TRkIgaXMgbm90IHNldAojIENPTkZJ
R19ORVRfU0NIX1NGUSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfVEVRTCBpcyBub3Qg
c2V0CiMgQ09ORklHX05FVF9TQ0hfVEJGIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9H
UkVEIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9EU01BUksgaXMgbm90IHNldAojIENP
TkZJR19ORVRfU0NIX05FVEVNIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9EUlIgaXMg
bm90IHNldAojIENPTkZJR19ORVRfU0NIX01RUFJJTyBpcyBub3Qgc2V0CiMgQ09ORklHX05F
VF9TQ0hfQ0hPS0UgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX1FGUSBpcyBub3Qgc2V0
CiMgQ09ORklHX05FVF9TQ0hfQ09ERUwgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX0ZR
X0NPREVMIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9GUSBpcyBub3Qgc2V0CiMgQ09O
RklHX05FVF9TQ0hfSEhGIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9QSUUgaXMgbm90
IHNldAojIENPTkZJR19ORVRfU0NIX0lOR1JFU1MgaXMgbm90IHNldAojIENPTkZJR19ORVRf
U0NIX1BMVUcgaXMgbm90IHNldAoKIwojIENsYXNzaWZpY2F0aW9uCiMKQ09ORklHX05FVF9D
TFM9eQojIENPTkZJR19ORVRfQ0xTX0JBU0lDIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NM
U19UQ0lOREVYIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19ST1VURTQgaXMgbm90IHNl
dAojIENPTkZJR19ORVRfQ0xTX0ZXIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19VMzIg
aXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xTX1JTVlAgaXMgbm90IHNldAojIENPTkZJR19O
RVRfQ0xTX1JTVlA2IGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19GTE9XIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX0NMU19DR1JPVVAgaXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xT
X0JQRiBpcyBub3Qgc2V0CkNPTkZJR19ORVRfRU1BVENIPXkKQ09ORklHX05FVF9FTUFUQ0hf
U1RBQ0s9MzIKIyBDT05GSUdfTkVUX0VNQVRDSF9DTVAgaXMgbm90IHNldAojIENPTkZJR19O
RVRfRU1BVENIX05CWVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0VNQVRDSF9VMzIgaXMg
bm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX01FVEEgaXMgbm90IHNldAojIENPTkZJR19O
RVRfRU1BVENIX1RFWFQgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX0lQU0VUIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9DTFNfQUNUPXkKIyBDT05GSUdfTkVUX0FDVF9QT0xJQ0Ug
aXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX0dBQ1QgaXMgbm90IHNldAojIENPTkZJR19O
RVRfQUNUX01JUlJFRCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9BQ1RfSVBUIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX0FDVF9OQVQgaXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX1BF
RElUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0FDVF9TSU1QIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX0FDVF9TS0JFRElUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0FDVF9DU1VNIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9TQ0hfRklGTz15CiMgQ09ORklHX0RDQiBpcyBub3Qgc2V0
CiMgQ09ORklHX0ROU19SRVNPTFZFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0JBVE1BTl9BRFYg
aXMgbm90IHNldAojIENPTkZJR19PUEVOVlNXSVRDSCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZT
T0NLRVRTIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUTElOS19NTUFQIGlzIG5vdCBzZXQKIyBD
T05GSUdfTkVUTElOS19ESUFHIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX01QTFNfR1NPIGlz
IG5vdCBzZXQKIyBDT05GSUdfSFNSIGlzIG5vdCBzZXQKQ09ORklHX1JQUz15CkNPTkZJR19S
RlNfQUNDRUw9eQpDT05GSUdfWFBTPXkKIyBDT05GSUdfQ0dST1VQX05FVF9QUklPIGlzIG5v
dCBzZXQKIyBDT05GSUdfQ0dST1VQX05FVF9DTEFTU0lEIGlzIG5vdCBzZXQKQ09ORklHX05F
VF9SWF9CVVNZX1BPTEw9eQpDT05GSUdfQlFMPXkKIyBDT05GSUdfQlBGX0pJVCBpcyBub3Qg
c2V0CkNPTkZJR19ORVRfRkxPV19MSU1JVD15CgojCiMgTmV0d29yayB0ZXN0aW5nCiMKIyBD
T05GSUdfTkVUX1BLVEdFTiBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9EUk9QX01PTklUT1Ig
aXMgbm90IHNldAojIENPTkZJR19IQU1SQURJTyBpcyBub3Qgc2V0CiMgQ09ORklHX0NBTiBp
cyBub3Qgc2V0CiMgQ09ORklHX0lSREEgaXMgbm90IHNldApDT05GSUdfQlQ9eQpDT05GSUdf
QlRfUkZDT01NPXkKQ09ORklHX0JUX1JGQ09NTV9UVFk9eQpDT05GSUdfQlRfQk5FUD15CkNP
TkZJR19CVF9CTkVQX01DX0ZJTFRFUj15CkNPTkZJR19CVF9CTkVQX1BST1RPX0ZJTFRFUj15
CkNPTkZJR19CVF9ISURQPXkKCiMKIyBCbHVldG9vdGggZGV2aWNlIGRyaXZlcnMKIwpDT05G
SUdfQlRfSENJQlRVU0I9eQpDT05GSUdfQlRfSENJVUFSVD15CkNPTkZJR19CVF9IQ0lVQVJU
X0g0PXkKQ09ORklHX0JUX0hDSVVBUlRfQkNTUD15CkNPTkZJR19CVF9IQ0lVQVJUX0FUSDNL
PXkKQ09ORklHX0JUX0hDSVVBUlRfTEw9eQpDT05GSUdfQlRfSENJVUFSVF8zV0lSRT15CkNP
TkZJR19CVF9IQ0lCQ00yMDNYPXkKQ09ORklHX0JUX0hDSUJQQTEwWD15CkNPTkZJR19CVF9I
Q0lCRlVTQj15CkNPTkZJR19CVF9IQ0lWSENJPXkKQ09ORklHX0JUX01SVkw9eQpDT05GSUdf
QlRfQVRIM0s9eQojIENPTkZJR19BRl9SWFJQQyBpcyBub3Qgc2V0CkNPTkZJR19GSUJfUlVM
RVM9eQojIENPTkZJR19XSVJFTEVTUyBpcyBub3Qgc2V0CiMgQ09ORklHX1dJTUFYIGlzIG5v
dCBzZXQKIyBDT05GSUdfUkZLSUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUXzlQIGlzIG5v
dCBzZXQKIyBDT05GSUdfQ0FJRiBpcyBub3Qgc2V0CkNPTkZJR19DRVBIX0xJQj15CiMgQ09O
RklHX0NFUEhfTElCX1BSRVRUWURFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0VQSF9MSUJf
VVNFX0ROU19SRVNPTFZFUiBpcyBub3Qgc2V0CiMgQ09ORklHX05GQyBpcyBub3Qgc2V0CkNP
TkZJR19IQVZFX0JQRl9KSVQ9eQoKIwojIERldmljZSBEcml2ZXJzCiMKCiMKIyBHZW5lcmlj
IERyaXZlciBPcHRpb25zCiMKQ09ORklHX1VFVkVOVF9IRUxQRVJfUEFUSD0iL3NiaW4vaG90
cGx1ZyIKQ09ORklHX0RFVlRNUEZTPXkKQ09ORklHX0RFVlRNUEZTX01PVU5UPXkKIyBDT05G
SUdfU1RBTkRBTE9ORSBpcyBub3Qgc2V0CiMgQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJ
TEQgaXMgbm90IHNldApDT05GSUdfRldfTE9BREVSPXkKQ09ORklHX0ZJUk1XQVJFX0lOX0tF
Uk5FTD15CkNPTkZJR19FWFRSQV9GSVJNV0FSRT0iIgpDT05GSUdfRldfTE9BREVSX1VTRVJf
SEVMUEVSPXkKIyBDT05GSUdfREVCVUdfRFJJVkVSIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVH
X0RFVlJFUz15CkNPTkZJR19TWVNfSFlQRVJWSVNPUj15CiMgQ09ORklHX0dFTkVSSUNfQ1BV
X0RFVklDRVMgaXMgbm90IHNldApDT05GSUdfRE1BX1NIQVJFRF9CVUZGRVI9eQoKIwojIEJ1
cyBkZXZpY2VzCiMKQ09ORklHX0NPTk5FQ1RPUj15CkNPTkZJR19QUk9DX0VWRU5UUz15CiMg
Q09ORklHX01URCBpcyBub3Qgc2V0CiMgQ09ORklHX1BBUlBPUlQgaXMgbm90IHNldApDT05G
SUdfQVJDSF9NSUdIVF9IQVZFX1BDX1BBUlBPUlQ9eQpDT05GSUdfUE5QPXkKQ09ORklHX1BO
UF9ERUJVR19NRVNTQUdFUz15CgojCiMgUHJvdG9jb2xzCiMKQ09ORklHX1BOUEFDUEk9eQpD
T05GSUdfQkxLX0RFVj15CiMgQ09ORklHX0JMS19ERVZfTlVMTF9CTEsgaXMgbm90IHNldAoj
IENPTkZJR19CTEtfREVWX0ZEIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9QQ0lFU1NE
X01USVAzMlhYIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0NQUV9DSVNTX0RBIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQkxLX0RFVl9EQUM5NjAgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVW
X1VNRU0gaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX0NPV19DT01NT04gaXMgbm90IHNl
dApDT05GSUdfQkxLX0RFVl9MT09QPXkKQ09ORklHX0JMS19ERVZfTE9PUF9NSU5fQ09VTlQ9
OAojIENPTkZJR19CTEtfREVWX0NSWVBUT0xPT1AgaXMgbm90IHNldAojIENPTkZJR19CTEtf
REVWX0RSQkQgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX05CRCBpcyBub3Qgc2V0CiMg
Q09ORklHX0JMS19ERVZfTlZNRSBpcyBub3Qgc2V0CiMgQ09ORklHX0JMS19ERVZfU0tEIGlz
IG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9TWDggaXMgbm90IHNldApDT05GSUdfQkxLX0RF
Vl9SQU09eQpDT05GSUdfQkxLX0RFVl9SQU1fQ09VTlQ9MTYKQ09ORklHX0JMS19ERVZfUkFN
X1NJWkU9MTYzODQKIyBDT05GSUdfQkxLX0RFVl9YSVAgaXMgbm90IHNldAojIENPTkZJR19D
RFJPTV9QS1RDRFZEIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRBX09WRVJfRVRIIGlzIG5vdCBz
ZXQKQ09ORklHX1hFTl9CTEtERVZfRlJPTlRFTkQ9eQpDT05GSUdfWEVOX0JMS0RFVl9CQUNL
RU5EPXkKIyBDT05GSUdfQkxLX0RFVl9IRCBpcyBub3Qgc2V0CiMgQ09ORklHX0JMS19ERVZf
UkJEIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9SU1hYIGlzIG5vdCBzZXQKCiMKIyBN
aXNjIGRldmljZXMKIwojIENPTkZJR19TRU5TT1JTX0xJUzNMVjAyRCBpcyBub3Qgc2V0CiMg
Q09ORklHX0FENTI1WF9EUE9UIGlzIG5vdCBzZXQKIyBDT05GSUdfRFVNTVlfSVJRIGlzIG5v
dCBzZXQKIyBDT05GSUdfSUJNX0FTTSBpcyBub3Qgc2V0CiMgQ09ORklHX1BIQU5UT00gaXMg
bm90IHNldAojIENPTkZJR19TR0lfSU9DNCBpcyBub3Qgc2V0CiMgQ09ORklHX1RJRk1fQ09S
RSBpcyBub3Qgc2V0CiMgQ09ORklHX0lDUzkzMlM0MDEgaXMgbm90IHNldAojIENPTkZJR19B
VE1FTF9TU0MgaXMgbm90IHNldAojIENPTkZJR19FTkNMT1NVUkVfU0VSVklDRVMgaXMgbm90
IHNldAojIENPTkZJR19IUF9JTE8gaXMgbm90IHNldAojIENPTkZJR19BUERTOTgwMkFMUyBp
cyBub3Qgc2V0CiMgQ09ORklHX0lTTDI5MDAzIGlzIG5vdCBzZXQKIyBDT05GSUdfSVNMMjkw
MjAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RTTDI1NTAgaXMgbm90IHNldAojIENP
TkZJR19TRU5TT1JTX0JIMTc4MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQkgxNzcw
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BUERTOTkwWCBpcyBub3Qgc2V0CiMgQ09O
RklHX0hNQzYzNTIgaXMgbm90IHNldAojIENPTkZJR19EUzE2ODIgaXMgbm90IHNldAojIENP
TkZJR19WTVdBUkVfQkFMTE9PTiBpcyBub3Qgc2V0CiMgQ09ORklHX0JNUDA4NV9JMkMgaXMg
bm90IHNldAojIENPTkZJR19QQ0hfUEhVQiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TV0lU
Q0hfRlNBOTQ4MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NSQU0gaXMgbm90IHNldAojIENPTkZJ
R19DMlBPUlQgaXMgbm90IHNldAoKIwojIEVFUFJPTSBzdXBwb3J0CiMKIyBDT05GSUdfRUVQ
Uk9NX0FUMjQgaXMgbm90IHNldAojIENPTkZJR19FRVBST01fTEVHQUNZIGlzIG5vdCBzZXQK
IyBDT05GSUdfRUVQUk9NX01BWDY4NzUgaXMgbm90IHNldAojIENPTkZJR19FRVBST01fOTND
WDYgaXMgbm90IHNldAojIENPTkZJR19DQjcxMF9DT1JFIGlzIG5vdCBzZXQKCiMKIyBUZXhh
cyBJbnN0cnVtZW50cyBzaGFyZWQgdHJhbnNwb3J0IGxpbmUgZGlzY2lwbGluZQojCiMgQ09O
RklHX1NFTlNPUlNfTElTM19JMkMgaXMgbm90IHNldAoKIwojIEFsdGVyYSBGUEdBIGZpcm13
YXJlIGRvd25sb2FkIG1vZHVsZQojCkNPTkZJR19BTFRFUkFfU1RBUEw9eQojIENPTkZJR19J
TlRFTF9NRUkgaXMgbm90IHNldAojIENPTkZJR19JTlRFTF9NRUlfTUUgaXMgbm90IHNldAoj
IENPTkZJR19WTVdBUkVfVk1DSSBpcyBub3Qgc2V0CgojCiMgSW50ZWwgTUlDIEhvc3QgRHJp
dmVyCiMKIyBDT05GSUdfSU5URUxfTUlDX0hPU1QgaXMgbm90IHNldAoKIwojIEludGVsIE1J
QyBDYXJkIERyaXZlcgojCiMgQ09ORklHX0lOVEVMX01JQ19DQVJEIGlzIG5vdCBzZXQKIyBD
T05GSUdfR0VOV1FFIGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfSURFPXkKIyBDT05GSUdfSURF
IGlzIG5vdCBzZXQKCiMKIyBTQ1NJIGRldmljZSBzdXBwb3J0CiMKQ09ORklHX1NDU0lfTU9E
PXkKIyBDT05GSUdfUkFJRF9BVFRSUyBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJPXkKQ09ORklH
X1NDU0lfRE1BPXkKIyBDT05GSUdfU0NTSV9UR1QgaXMgbm90IHNldAojIENPTkZJR19TQ1NJ
X05FVExJTksgaXMgbm90IHNldApDT05GSUdfU0NTSV9QUk9DX0ZTPXkKCiMKIyBTQ1NJIHN1
cHBvcnQgdHlwZSAoZGlzaywgdGFwZSwgQ0QtUk9NKQojCkNPTkZJR19CTEtfREVWX1NEPXkK
IyBDT05GSUdfQ0hSX0RFVl9TVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NIUl9ERVZfT1NTVCBp
cyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX1NSPXkKQ09ORklHX0JMS19ERVZfU1JfVkVORE9S
PXkKQ09ORklHX0NIUl9ERVZfU0c9eQojIENPTkZJR19DSFJfREVWX1NDSCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NDU0lfTVVMVElfTFVOIGlzIG5vdCBzZXQKQ09ORklHX1NDU0lfQ09OU1RB
TlRTPXkKIyBDT05GSUdfU0NTSV9MT0dHSU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9T
Q0FOX0FTWU5DIGlzIG5vdCBzZXQKCiMKIyBTQ1NJIFRyYW5zcG9ydHMKIwpDT05GSUdfU0NT
SV9TUElfQVRUUlM9eQojIENPTkZJR19TQ1NJX0ZDX0FUVFJTIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0NTSV9JU0NTSV9BVFRSUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfU0FTX0FUVFJT
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9TQVNfTElCU0FTIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0NTSV9TUlBfQVRUUlMgaXMgbm90IHNldAojIENPTkZJR19TQ1NJX0xPV0xFVkVMIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0NTSV9ESCBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfT1NE
X0lOSVRJQVRPUiBpcyBub3Qgc2V0CkNPTkZJR19BVEE9eQojIENPTkZJR19BVEFfTk9OU1RB
TkRBUkQgaXMgbm90IHNldApDT05GSUdfQVRBX1ZFUkJPU0VfRVJST1I9eQpDT05GSUdfQVRB
X0FDUEk9eQojIENPTkZJR19TQVRBX1pQT0REIGlzIG5vdCBzZXQKQ09ORklHX1NBVEFfUE1Q
PXkKCiMKIyBDb250cm9sbGVycyB3aXRoIG5vbi1TRkYgbmF0aXZlIGludGVyZmFjZQojCkNP
TkZJR19TQVRBX0FIQ0k9eQpDT05GSUdfU0FUQV9BSENJX1BMQVRGT1JNPXkKIyBDT05GSUdf
U0FUQV9JTklDMTYyWCBpcyBub3Qgc2V0CiMgQ09ORklHX1NBVEFfQUNBUkRfQUhDSSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NBVEFfU0lMMjQgaXMgbm90IHNldAojIENPTkZJR19BVEFfU0ZG
IGlzIG5vdCBzZXQKQ09ORklHX01EPXkKIyBDT05GSUdfQkxLX0RFVl9NRCBpcyBub3Qgc2V0
CkNPTkZJR19CQ0FDSEU9eQojIENPTkZJR19CQ0FDSEVfREVCVUcgaXMgbm90IHNldAojIENP
TkZJR19CQ0FDSEVfQ0xPU1VSRVNfREVCVUcgaXMgbm90IHNldApDT05GSUdfQkxLX0RFVl9E
TV9CVUlMVElOPXkKQ09ORklHX0JMS19ERVZfRE09eQpDT05GSUdfRE1fREVCVUc9eQpDT05G
SUdfRE1fQlVGSU89eQpDT05GSUdfRE1fQklPX1BSSVNPTj15CkNPTkZJR19ETV9QRVJTSVNU
RU5UX0RBVEE9eQpDT05GSUdfRE1fQ1JZUFQ9eQpDT05GSUdfRE1fU05BUFNIT1Q9eQojIENP
TkZJR19ETV9USElOX1BST1ZJU0lPTklORyBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX0RFQlVH
X0JMT0NLX1NUQUNLX1RSQUNJTkcgaXMgbm90IHNldApDT05GSUdfRE1fQ0FDSEU9eQpDT05G
SUdfRE1fQ0FDSEVfTVE9eQpDT05GSUdfRE1fQ0FDSEVfQ0xFQU5FUj15CkNPTkZJR19ETV9N
SVJST1I9eQojIENPTkZJR19ETV9MT0dfVVNFUlNQQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdf
RE1fUkFJRCBpcyBub3Qgc2V0CkNPTkZJR19ETV9aRVJPPXkKIyBDT05GSUdfRE1fTVVMVElQ
QVRIIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1fREVMQVkgaXMgbm90IHNldAojIENPTkZJR19E
TV9VRVZFTlQgaXMgbm90IHNldAojIENPTkZJR19ETV9GTEFLRVkgaXMgbm90IHNldAojIENP
TkZJR19ETV9WRVJJVFkgaXMgbm90IHNldAojIENPTkZJR19ETV9TV0lUQ0ggaXMgbm90IHNl
dAojIENPTkZJR19UQVJHRVRfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZVU0lPTiBpcyBu
b3Qgc2V0CgojCiMgSUVFRSAxMzk0IChGaXJlV2lyZSkgc3VwcG9ydAojCiMgQ09ORklHX0ZJ
UkVXSVJFIGlzIG5vdCBzZXQKIyBDT05GSUdfRklSRVdJUkVfTk9TWSBpcyBub3Qgc2V0CiMg
Q09ORklHX0kyTyBpcyBub3Qgc2V0CiMgQ09ORklHX01BQ0lOVE9TSF9EUklWRVJTIGlzIG5v
dCBzZXQKQ09ORklHX05FVERFVklDRVM9eQpDT05GSUdfTUlJPXkKQ09ORklHX05FVF9DT1JF
PXkKIyBDT05GSUdfQk9ORElORyBpcyBub3Qgc2V0CiMgQ09ORklHX0RVTU1ZIGlzIG5vdCBz
ZXQKIyBDT05GSUdfRVFVQUxJWkVSIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0ZDIGlzIG5v
dCBzZXQKIyBDT05GSUdfSUZCIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1RFQU0gaXMgbm90
IHNldAojIENPTkZJR19NQUNWTEFOIGlzIG5vdCBzZXQKIyBDT05GSUdfVlhMQU4gaXMgbm90
IHNldApDT05GSUdfTkVUQ09OU09MRT15CkNPTkZJR19ORVRQT0xMPXkKIyBDT05GSUdfTkVU
UE9MTF9UUkFQIGlzIG5vdCBzZXQKQ09ORklHX05FVF9QT0xMX0NPTlRST0xMRVI9eQpDT05G
SUdfVFVOPXkKQ09ORklHX1ZFVEg9eQojIENPTkZJR19OTE1PTiBpcyBub3Qgc2V0CiMgQ09O
RklHX0FSQ05FVCBpcyBub3Qgc2V0CgojCiMgQ0FJRiB0cmFuc3BvcnQgZHJpdmVycwojCgoj
CiMgRGlzdHJpYnV0ZWQgU3dpdGNoIEFyY2hpdGVjdHVyZSBkcml2ZXJzCiMKIyBDT05GSUdf
TkVUX0RTQV9NVjg4RTZYWFggaXMgbm90IHNldAojIENPTkZJR19ORVRfRFNBX01WODhFNjA2
MCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9EU0FfTVY4OEU2WFhYX05FRURfUFBVIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX0RTQV9NVjg4RTYxMzEgaXMgbm90IHNldAojIENPTkZJR19O
RVRfRFNBX01WODhFNjEyM182MV82NSBpcyBub3Qgc2V0CkNPTkZJR19FVEhFUk5FVD15CiMg
Q09ORklHX05FVF9WRU5ET1JfM0NPTSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1Jf
QURBUFRFQyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfQUxURU9OIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9BTUQgaXMgbm90IHNldApDT05GSUdfTkVUX1ZFTkRP
Ul9BUkM9eQojIENPTkZJR19ORVRfVkVORE9SX0FUSEVST1MgaXMgbm90IHNldApDT05GSUdf
TkVUX0NBREVOQ0U9eQojIENPTkZJR19BUk1fQVQ5MV9FVEhFUiBpcyBub3Qgc2V0CiMgQ09O
RklHX01BQ0IgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0JST0FEQ09NIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9CUk9DQURFIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkVUX0NBTFhFREFfWEdNQUMgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0NIRUxT
SU8gaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0NJU0NPIGlzIG5vdCBzZXQKIyBD
T05GSUdfRE5FVCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfREVDIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9ETElOSyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9W
RU5ET1JfRU1VTEVYIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9FWEFSIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9IUCBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVO
RE9SX0lOVEVMPXkKIyBDT05GSUdfRTEwMCBpcyBub3Qgc2V0CkNPTkZJR19FMTAwMD15CkNP
TkZJR19FMTAwMEU9eQpDT05GSUdfSUdCPXkKQ09ORklHX0lHQl9IV01PTj15CkNPTkZJR19J
R0JWRj15CiMgQ09ORklHX0lYR0IgaXMgbm90IHNldAojIENPTkZJR19JWEdCRSBpcyBub3Qg
c2V0CiMgQ09ORklHX0lYR0JFVkYgaXMgbm90IHNldAojIENPTkZJR19JNDBFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSTQwRVZGIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfSTgyNVhY
PXkKIyBDT05GSUdfSVAxMDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfSk1FIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkVUX1ZFTkRPUl9NQVJWRUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZF
TkRPUl9NRUxMQU5PWCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfTUlDUkVMIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9NWVJJIGlzIG5vdCBzZXQKIyBDT05GSUdf
RkVBTE5YIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9OQVRTRU1JIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9OVklESUEgaXMgbm90IHNldAojIENPTkZJR19ORVRf
VkVORE9SX09LSSBpcyBub3Qgc2V0CiMgQ09ORklHX0VUSE9DIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX1BBQ0tFVF9FTkdJTkUgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1FM
T0dJQyBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQojIENPTkZJR184
MTM5Q1AgaXMgbm90IHNldAojIENPTkZJR184MTM5VE9PIGlzIG5vdCBzZXQKQ09ORklHX1I4
MTY5PXkKIyBDT05GSUdfU0hfRVRIIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9S
REMgaXMgbm90IHNldApDT05GSUdfTkVUX1ZFTkRPUl9TRUVRPXkKQ09ORklHX05FVF9WRU5E
T1JfU0lMQU49eQojIENPTkZJR19TQzkyMDMxIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZF
TkRPUl9TSVMgaXMgbm90IHNldAojIENPTkZJR19TRkMgaXMgbm90IHNldAojIENPTkZJR19O
RVRfVkVORE9SX1NNU0MgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1NUTUlDUk8g
aXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1NVTiBpcyBub3Qgc2V0CiMgQ09ORklH
X05FVF9WRU5ET1JfVEVIVVRJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9USSBp
cyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfVklBIGlzIG5vdCBzZXQKQ09ORklHX05F
VF9WRU5ET1JfV0laTkVUPXkKIyBDT05GSUdfV0laTkVUX1c1MTAwIGlzIG5vdCBzZXQKIyBD
T05GSUdfV0laTkVUX1c1MzAwIGlzIG5vdCBzZXQKIyBDT05GSUdfRkRESSBpcyBub3Qgc2V0
CiMgQ09ORklHX0hJUFBJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NCMTAwMCBpcyBub3Qg
c2V0CkNPTkZJR19QSFlMSUI9eQoKIwojIE1JSSBQSFkgZGV2aWNlIGRyaXZlcnMKIwojIENP
TkZJR19BVDgwM1hfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQU1EX1BIWSBpcyBub3Qgc2V0
CiMgQ09ORklHX01BUlZFTExfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfREFWSUNPTV9QSFkg
aXMgbm90IHNldAojIENPTkZJR19RU0VNSV9QSFkgaXMgbm90IHNldAojIENPTkZJR19MWFRf
UEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lDQURBX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklH
X1ZJVEVTU0VfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfU01TQ19QSFkgaXMgbm90IHNldAoj
IENPTkZJR19CUk9BRENPTV9QSFkgaXMgbm90IHNldAojIENPTkZJR19CQ004N1hYX1BIWSBp
cyBub3Qgc2V0CiMgQ09ORklHX0lDUExVU19QSFkgaXMgbm90IHNldApDT05GSUdfUkVBTFRF
S19QSFk9eQojIENPTkZJR19OQVRJT05BTF9QSFkgaXMgbm90IHNldAojIENPTkZJR19TVEUx
MFhQIGlzIG5vdCBzZXQKIyBDT05GSUdfTFNJX0VUMTAxMUNfUEhZIGlzIG5vdCBzZXQKIyBD
T05GSUdfTUlDUkVMX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZJWEVEX1BIWSBpcyBub3Qg
c2V0CiMgQ09ORklHX01ESU9fQklUQkFORyBpcyBub3Qgc2V0CiMgQ09ORklHX1BQUCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NMSVAgaXMgbm90IHNldAoKIwojIFVTQiBOZXR3b3JrIEFkYXB0
ZXJzCiMKIyBDT05GSUdfVVNCX0NBVEMgaXMgbm90IHNldAojIENPTkZJR19VU0JfS0FXRVRI
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1BFR0FTVVMgaXMgbm90IHNldAojIENPTkZJR19V
U0JfUlRMODE1MCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9SVEw4MTUyIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX1VTQk5FVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JUEhFVEggaXMg
bm90IHNldAojIENPTkZJR19XTEFOIGlzIG5vdCBzZXQKCiMKIyBFbmFibGUgV2lNQVggKE5l
dHdvcmtpbmcgb3B0aW9ucykgdG8gc2VlIHRoZSBXaU1BWCBkcml2ZXJzCiMKIyBDT05GSUdf
V0FOIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9ORVRERVZfRlJPTlRFTkQ9eQpDT05GSUdfWEVO
X05FVERFVl9CQUNLRU5EPXkKIyBDT05GSUdfVk1YTkVUMyBpcyBub3Qgc2V0CiMgQ09ORklH
X0lTRE4gaXMgbm90IHNldAoKIwojIElucHV0IGRldmljZSBzdXBwb3J0CiMKQ09ORklHX0lO
UFVUPXkKQ09ORklHX0lOUFVUX0ZGX01FTUxFU1M9eQpDT05GSUdfSU5QVVRfUE9MTERFVj15
CkNPTkZJR19JTlBVVF9TUEFSU0VLTUFQPXkKIyBDT05GSUdfSU5QVVRfTUFUUklYS01BUCBp
cyBub3Qgc2V0CgojCiMgVXNlcmxhbmQgaW50ZXJmYWNlcwojCkNPTkZJR19JTlBVVF9NT1VT
RURFVj15CiMgQ09ORklHX0lOUFVUX01PVVNFREVWX1BTQVVYIGlzIG5vdCBzZXQKQ09ORklH
X0lOUFVUX01PVVNFREVWX1NDUkVFTl9YPTEwMjQKQ09ORklHX0lOUFVUX01PVVNFREVWX1ND
UkVFTl9ZPTc2OAojIENPTkZJR19JTlBVVF9KT1lERVYgaXMgbm90IHNldApDT05GSUdfSU5Q
VVRfRVZERVY9eQojIENPTkZJR19JTlBVVF9FVkJVRyBpcyBub3Qgc2V0CgojCiMgSW5wdXQg
RGV2aWNlIERyaXZlcnMKIwpDT05GSUdfSU5QVVRfS0VZQk9BUkQ9eQojIENPTkZJR19LRVlC
T0FSRF9BRFA1NTg4IGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfQURQNTU4OSBpcyBu
b3Qgc2V0CkNPTkZJR19LRVlCT0FSRF9BVEtCRD15CiMgQ09ORklHX0tFWUJPQVJEX1FUMTA3
MCBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX1FUMjE2MCBpcyBub3Qgc2V0CiMgQ09O
RklHX0tFWUJPQVJEX0xLS0JEIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfVENBNjQx
NiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX1RDQTg0MTggaXMgbm90IHNldAojIENP
TkZJR19LRVlCT0FSRF9MTTgzMjMgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9MTTgz
MzMgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9NQVg3MzU5IGlzIG5vdCBzZXQKIyBD
T05GSUdfS0VZQk9BUkRfTUNTIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTVBSMTIx
IGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTkVXVE9OIGlzIG5vdCBzZXQKIyBDT05G
SUdfS0VZQk9BUkRfT1BFTkNPUkVTIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfU1RP
V0FXQVkgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9TVU5LQkQgaXMgbm90IHNldAoj
IENPTkZJR19LRVlCT0FSRF9YVEtCRCBpcyBub3Qgc2V0CkNPTkZJR19JTlBVVF9NT1VTRT15
CkNPTkZJR19NT1VTRV9QUzI9eQpDT05GSUdfTU9VU0VfUFMyX0FMUFM9eQpDT05GSUdfTU9V
U0VfUFMyX0xPR0lQUzJQUD15CkNPTkZJR19NT1VTRV9QUzJfU1lOQVBUSUNTPXkKQ09ORklH
X01PVVNFX1BTMl9DWVBSRVNTPXkKQ09ORklHX01PVVNFX1BTMl9MSUZFQk9PSz15CkNPTkZJ
R19NT1VTRV9QUzJfVFJBQ0tQT0lOVD15CiMgQ09ORklHX01PVVNFX1BTMl9FTEFOVEVDSCBp
cyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1BTMl9TRU5URUxJQyBpcyBub3Qgc2V0CiMgQ09O
RklHX01PVVNFX1BTMl9UT1VDSEtJVCBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1NFUklB
TCBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX0FQUExFVE9VQ0ggaXMgbm90IHNldAojIENP
TkZJR19NT1VTRV9CQ001OTc0IGlzIG5vdCBzZXQKIyBDT05GSUdfTU9VU0VfQ1lBUEEgaXMg
bm90IHNldAojIENPTkZJR19NT1VTRV9WU1hYWEFBIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9V
U0VfU1lOQVBUSUNTX0kyQyBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1NZTkFQVElDU19V
U0IgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9KT1lTVElDSyBpcyBub3Qgc2V0CkNPTkZJ
R19JTlBVVF9UQUJMRVQ9eQojIENPTkZJR19UQUJMRVRfVVNCX0FDRUNBRCBpcyBub3Qgc2V0
CiMgQ09ORklHX1RBQkxFVF9VU0JfQUlQVEVLIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFCTEVU
X1VTQl9HVENPIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFCTEVUX1VTQl9IQU5XQU5HIGlzIG5v
dCBzZXQKIyBDT05GSUdfVEFCTEVUX1VTQl9LQlRBQiBpcyBub3Qgc2V0CiMgQ09ORklHX1RB
QkxFVF9VU0JfV0FDT00gaXMgbm90IHNldApDT05GSUdfSU5QVVRfVE9VQ0hTQ1JFRU49eQoj
IENPTkZJR19UT1VDSFNDUkVFTl9BRDc4NzkgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFND
UkVFTl9BVE1FTF9NWFQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9CVTIxMDEz
IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fQ1lUVFNQX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19UT1VDSFNDUkVFTl9DWVRUU1A0X0NPUkUgaXMgbm90IHNldAojIENPTkZJ
R19UT1VDSFNDUkVFTl9EWU5BUFJPIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5f
SEFNUFNISVJFIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fRUVUSSBpcyBub3Qg
c2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0ZVSklUU1UgaXMgbm90IHNldAojIENPTkZJR19U
T1VDSFNDUkVFTl9JTEkyMTBYIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fR1VO
WkUgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9FTE8gaXMgbm90IHNldAojIENP
TkZJR19UT1VDSFNDUkVFTl9XQUNPTV9XODAwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNI
U0NSRUVOX1dBQ09NX0kyQyBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX01BWDEx
ODAxIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fTUNTNTAwMCBpcyBub3Qgc2V0
CiMgQ09ORklHX1RPVUNIU0NSRUVOX01NUzExNCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNI
U0NSRUVOX01UT1VDSCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0lORVhJTyBp
cyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX01LNzEyIGlzIG5vdCBzZXQKIyBDT05G
SUdfVE9VQ0hTQ1JFRU5fUEVOTU9VTlQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVF
Tl9FRFRfRlQ1WDA2IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fVE9VQ0hSSUdI
VCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1RPVUNIV0lOIGlzIG5vdCBzZXQK
IyBDT05GSUdfVE9VQ0hTQ1JFRU5fUElYQ0lSIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hT
Q1JFRU5fVVNCX0NPTVBPU0lURSBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1RP
VUNISVQyMTMgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UU0NfU0VSSU8gaXMg
bm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UU0MyMDA3IGlzIG5vdCBzZXQKIyBDT05G
SUdfVE9VQ0hTQ1JFRU5fU1QxMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5f
U1VSNDAgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UUFM2NTA3WCBpcyBub3Qg
c2V0CkNPTkZJR19JTlBVVF9NSVNDPXkKIyBDT05GSUdfSU5QVVRfQUQ3MTRYIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSU5QVVRfQk1BMTUwIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfUENT
UEtSIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfTU1BODQ1MCBpcyBub3Qgc2V0CiMgQ09O
RklHX0lOUFVUX01QVTMwNTAgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9BUEFORUwgaXMg
bm90IHNldAojIENPTkZJR19JTlBVVF9BVExBU19CVE5TIGlzIG5vdCBzZXQKIyBDT05GSUdf
SU5QVVRfQVRJX1JFTU9URTIgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9LRVlTUEFOX1JF
TU9URSBpcyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0tYVEo5IGlzIG5vdCBzZXQKIyBDT05G
SUdfSU5QVVRfUE9XRVJNQVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfWUVBTElOSyBp
cyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0NNMTA5IGlzIG5vdCBzZXQKIyBDT05GSUdfSU5Q
VVRfVUlOUFVUIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfUENGODU3NCBpcyBub3Qgc2V0
CiMgQ09ORklHX0lOUFVUX0FEWEwzNFggaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9JTVNf
UENVIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfQ01BMzAwMCBpcyBub3Qgc2V0CkNPTkZJ
R19JTlBVVF9YRU5fS0JEREVWX0ZST05URU5EPXkKIyBDT05GSUdfSU5QVVRfSURFQVBBRF9T
TElERUJBUiBpcyBub3Qgc2V0CgojCiMgSGFyZHdhcmUgSS9PIHBvcnRzCiMKQ09ORklHX1NF
UklPPXkKQ09ORklHX0FSQ0hfTUlHSFRfSEFWRV9QQ19TRVJJTz15CkNPTkZJR19TRVJJT19J
ODA0Mj15CkNPTkZJR19TRVJJT19TRVJQT1JUPXkKIyBDT05GSUdfU0VSSU9fQ1Q4MkM3MTAg
aXMgbm90IHNldAojIENPTkZJR19TRVJJT19QQ0lQUzIgaXMgbm90IHNldApDT05GSUdfU0VS
SU9fTElCUFMyPXkKIyBDT05GSUdfU0VSSU9fUkFXIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VS
SU9fQUxURVJBX1BTMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklPX1BTMk1VTFQgaXMgbm90
IHNldAojIENPTkZJR19TRVJJT19BUkNfUFMyIGlzIG5vdCBzZXQKIyBDT05GSUdfR0FNRVBP
UlQgaXMgbm90IHNldAoKIwojIENoYXJhY3RlciBkZXZpY2VzCiMKQ09ORklHX1RUWT15CkNP
TkZJR19WVD15CkNPTkZJR19DT05TT0xFX1RSQU5TTEFUSU9OUz15CkNPTkZJR19WVF9DT05T
T0xFPXkKQ09ORklHX1ZUX0NPTlNPTEVfU0xFRVA9eQpDT05GSUdfSFdfQ09OU09MRT15CkNP
TkZJR19WVF9IV19DT05TT0xFX0JJTkRJTkc9eQpDT05GSUdfVU5JWDk4X1BUWVM9eQojIENP
TkZJR19ERVZQVFNfTVVMVElQTEVfSU5TVEFOQ0VTIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVH
QUNZX1BUWVMgaXMgbm90IHNldApDT05GSUdfU0VSSUFMX05PTlNUQU5EQVJEPXkKIyBDT05G
SUdfUk9DS0VUUE9SVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NZQ0xBREVTIGlzIG5vdCBzZXQK
IyBDT05GSUdfTU9YQV9JTlRFTExJTyBpcyBub3Qgc2V0CiMgQ09ORklHX01PWEFfU01BUlRJ
TyBpcyBub3Qgc2V0CiMgQ09ORklHX1NZTkNMSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfU1lO
Q0xJTktNUCBpcyBub3Qgc2V0CiMgQ09ORklHX1NZTkNMSU5LX0dUIGlzIG5vdCBzZXQKIyBD
T05GSUdfTk9aT01JIGlzIG5vdCBzZXQKIyBDT05GSUdfSVNJIGlzIG5vdCBzZXQKIyBDT05G
SUdfTl9IRExDIGlzIG5vdCBzZXQKIyBDT05GSUdfTl9HU00gaXMgbm90IHNldAojIENPTkZJ
R19UUkFDRV9TSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfREVWS01FTSBpcyBub3Qgc2V0Cgoj
CiMgU2VyaWFsIGRyaXZlcnMKIwpDT05GSUdfU0VSSUFMXzgyNTA9eQpDT05GSUdfU0VSSUFM
XzgyNTBfREVQUkVDQVRFRF9PUFRJT05TPXkKQ09ORklHX1NFUklBTF84MjUwX1BOUD15CkNP
TkZJR19TRVJJQUxfODI1MF9DT05TT0xFPXkKQ09ORklHX0ZJWF9FQVJMWUNPTl9NRU09eQpD
T05GSUdfU0VSSUFMXzgyNTBfUENJPXkKQ09ORklHX1NFUklBTF84MjUwX05SX1VBUlRTPTMy
CkNPTkZJR19TRVJJQUxfODI1MF9SVU5USU1FX1VBUlRTPTQKQ09ORklHX1NFUklBTF84MjUw
X0VYVEVOREVEPXkKQ09ORklHX1NFUklBTF84MjUwX01BTllfUE9SVFM9eQpDT05GSUdfU0VS
SUFMXzgyNTBfU0hBUkVfSVJRPXkKQ09ORklHX1NFUklBTF84MjUwX0RFVEVDVF9JUlE9eQpD
T05GSUdfU0VSSUFMXzgyNTBfUlNBPXkKIyBDT05GSUdfU0VSSUFMXzgyNTBfRFcgaXMgbm90
IHNldAoKIwojIE5vbi04MjUwIHNlcmlhbCBwb3J0IHN1cHBvcnQKIwojIENPTkZJR19TRVJJ
QUxfTUZEX0hTVSBpcyBub3Qgc2V0CkNPTkZJR19TRVJJQUxfQ09SRT15CkNPTkZJR19TRVJJ
QUxfQ09SRV9DT05TT0xFPXkKIyBDT05GSUdfU0VSSUFMX0pTTSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFUklBTF9TQ0NOWFAgaXMgbm90IHNldAojIENPTkZJR19TRVJJQUxfVElNQkVSREFM
RSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF9BTFRFUkFfSlRBR1VBUlQgaXMgbm90IHNl
dAojIENPTkZJR19TRVJJQUxfQUxURVJBX1VBUlQgaXMgbm90IHNldAojIENPTkZJR19TRVJJ
QUxfUENIX1VBUlQgaXMgbm90IHNldAojIENPTkZJR19TRVJJQUxfQVJDIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VSSUFMX1JQMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF9GU0xfTFBV
QVJUIGlzIG5vdCBzZXQKQ09ORklHX0hWQ19EUklWRVI9eQpDT05GSUdfSFZDX0lSUT15CkNP
TkZJR19IVkNfWEVOPXkKQ09ORklHX0hWQ19YRU5fRlJPTlRFTkQ9eQojIENPTkZJR19JUE1J
X0hBTkRMRVIgaXMgbm90IHNldApDT05GSUdfSFdfUkFORE9NPXkKQ09ORklHX0hXX1JBTkRP
TV9USU1FUklPTUVNPXkKQ09ORklHX0hXX1JBTkRPTV9JTlRFTD15CkNPTkZJR19IV19SQU5E
T01fQU1EPXkKQ09ORklHX0hXX1JBTkRPTV9WSUE9eQojIENPTkZJR19OVlJBTSBpcyBub3Qg
c2V0CiMgQ09ORklHX1IzOTY0IGlzIG5vdCBzZXQKIyBDT05GSUdfQVBQTElDT00gaXMgbm90
IHNldAojIENPTkZJR19NV0FWRSBpcyBub3Qgc2V0CiMgQ09ORklHX1JBV19EUklWRVIgaXMg
bm90IHNldApDT05GSUdfSFBFVD15CiMgQ09ORklHX0hQRVRfTU1BUCBpcyBub3Qgc2V0CkNP
TkZJR19IQU5HQ0hFQ0tfVElNRVI9eQojIENPTkZJR19UQ0dfVFBNIGlzIG5vdCBzZXQKIyBD
T05GSUdfVEVMQ0xPQ0sgaXMgbm90IHNldApDT05GSUdfREVWUE9SVD15CkNPTkZJR19JMkM9
eQpDT05GSUdfSTJDX0JPQVJESU5GTz15CkNPTkZJR19JMkNfQ09NUEFUPXkKIyBDT05GSUdf
STJDX0NIQVJERVYgaXMgbm90IHNldApDT05GSUdfSTJDX01VWD15CgojCiMgTXVsdGlwbGV4
ZXIgSTJDIENoaXAgc3VwcG9ydAojCiMgQ09ORklHX0kyQ19NVVhfUENBOTU0MSBpcyBub3Qg
c2V0CiMgQ09ORklHX0kyQ19NVVhfUENBOTU0eCBpcyBub3Qgc2V0CkNPTkZJR19JMkNfSEVM
UEVSX0FVVE89eQpDT05GSUdfSTJDX0FMR09CSVQ9eQoKIwojIEkyQyBIYXJkd2FyZSBCdXMg
c3VwcG9ydAojCgojCiMgUEMgU01CdXMgaG9zdCBjb250cm9sbGVyIGRyaXZlcnMKIwojIENP
TkZJR19JMkNfQUxJMTUzNSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19BTEkxNTYzIGlzIG5v
dCBzZXQKIyBDT05GSUdfSTJDX0FMSTE1WDMgaXMgbm90IHNldApDT05GSUdfSTJDX0FNRDc1
Nj15CiMgQ09ORklHX0kyQ19BTUQ3NTZfUzQ4ODIgaXMgbm90IHNldApDT05GSUdfSTJDX0FN
RDgxMTE9eQpDT05GSUdfSTJDX0k4MDE9eQpDT05GSUdfSTJDX0lTQ0g9eQojIENPTkZJR19J
MkNfSVNNVCBpcyBub3Qgc2V0CkNPTkZJR19JMkNfUElJWDQ9eQojIENPTkZJR19JMkNfTkZP
UkNFMiBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TSVM1NTk1IGlzIG5vdCBzZXQKIyBDT05G
SUdfSTJDX1NJUzYzMCBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TSVM5NlggaXMgbm90IHNl
dAojIENPTkZJR19JMkNfVklBIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1ZJQVBSTyBpcyBu
b3Qgc2V0CgojCiMgQUNQSSBkcml2ZXJzCiMKQ09ORklHX0kyQ19TQ01JPXkKCiMKIyBJMkMg
c3lzdGVtIGJ1cyBkcml2ZXJzIChtb3N0bHkgZW1iZWRkZWQgLyBzeXN0ZW0tb24tY2hpcCkK
IwojIENPTkZJR19JMkNfREVTSUdOV0FSRV9QTEFURk9STSBpcyBub3Qgc2V0CiMgQ09ORklH
X0kyQ19ERVNJR05XQVJFX1BDSSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19FRzIwVCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0kyQ19PQ09SRVMgaXMgbm90IHNldAojIENPTkZJR19JMkNfUENB
X1BMQVRGT1JNIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1BYQV9QQ0kgaXMgbm90IHNldAoj
IENPTkZJR19JMkNfU0lNVEVDIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1hJTElOWCBpcyBu
b3Qgc2V0CgojCiMgRXh0ZXJuYWwgSTJDL1NNQnVzIGFkYXB0ZXIgZHJpdmVycwojCiMgQ09O
RklHX0kyQ19ESU9MQU5fVTJDIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1BBUlBPUlRfTElH
SFQgaXMgbm90IHNldAojIENPTkZJR19JMkNfUk9CT1RGVVpaX09TSUYgaXMgbm90IHNldAoj
IENPTkZJR19JMkNfVEFPU19FVk0gaXMgbm90IHNldAojIENPTkZJR19JMkNfVElOWV9VU0Ig
aXMgbm90IHNldAoKIwojIE90aGVyIEkyQy9TTUJ1cyBidXMgZHJpdmVycwojCiMgQ09ORklH
X0kyQ19TVFVCIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX0RFQlVHX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19JMkNfREVCVUdfQUxHTyBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19ERUJV
R19CVVMgaXMgbm90IHNldAojIENPTkZJR19TUEkgaXMgbm90IHNldAojIENPTkZJR19IU0kg
aXMgbm90IHNldAoKIwojIFBQUyBzdXBwb3J0CiMKQ09ORklHX1BQUz15CiMgQ09ORklHX1BQ
U19ERUJVRyBpcyBub3Qgc2V0CgojCiMgUFBTIGNsaWVudHMgc3VwcG9ydAojCiMgQ09ORklH
X1BQU19DTElFTlRfS1RJTUVSIGlzIG5vdCBzZXQKIyBDT05GSUdfUFBTX0NMSUVOVF9MRElT
QyBpcyBub3Qgc2V0CiMgQ09ORklHX1BQU19DTElFTlRfR1BJTyBpcyBub3Qgc2V0CgojCiMg
UFBTIGdlbmVyYXRvcnMgc3VwcG9ydAojCgojCiMgUFRQIGNsb2NrIHN1cHBvcnQKIwpDT05G
SUdfUFRQXzE1ODhfQ0xPQ0s9eQoKIwojIEVuYWJsZSBQSFlMSUIgYW5kIE5FVFdPUktfUEhZ
X1RJTUVTVEFNUElORyB0byBzZWUgdGhlIGFkZGl0aW9uYWwgY2xvY2tzLgojCiMgQ09ORklH
X1BUUF8xNTg4X0NMT0NLX1BDSCBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX1dBTlRfT1BUSU9O
QUxfR1BJT0xJQj15CiMgQ09ORklHX0dQSU9MSUIgaXMgbm90IHNldAojIENPTkZJR19XMSBp
cyBub3Qgc2V0CkNPTkZJR19QT1dFUl9TVVBQTFk9eQojIENPTkZJR19QT1dFUl9TVVBQTFlf
REVCVUcgaXMgbm90IHNldAojIENPTkZJR19QREFfUE9XRVIgaXMgbm90IHNldAojIENPTkZJ
R19URVNUX1BPV0VSIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9EUzI3ODAgaXMgbm90
IHNldAojIENPTkZJR19CQVRURVJZX0RTMjc4MSBpcyBub3Qgc2V0CiMgQ09ORklHX0JBVFRF
UllfRFMyNzgyIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9TQlMgaXMgbm90IHNldAoj
IENPTkZJR19CQVRURVJZX0JRMjd4MDAgaXMgbm90IHNldAojIENPTkZJR19CQVRURVJZX01B
WDE3MDQwIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9NQVgxNzA0MiBpcyBub3Qgc2V0
CiMgQ09ORklHX0NIQVJHRVJfTUFYODkwMyBpcyBub3Qgc2V0CiMgQ09ORklHX0NIQVJHRVJf
TFA4NzI3IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI0MTVYIGlzIG5vdCBzZXQK
IyBDT05GSUdfQ0hBUkdFUl9TTUIzNDcgaXMgbm90IHNldAojIENPTkZJR19QT1dFUl9SRVNF
VCBpcyBub3Qgc2V0CiMgQ09ORklHX1BPV0VSX0FWUyBpcyBub3Qgc2V0CkNPTkZJR19IV01P
Tj15CkNPTkZJR19IV01PTl9WSUQ9eQojIENPTkZJR19IV01PTl9ERUJVR19DSElQIGlzIG5v
dCBzZXQKCiMKIyBOYXRpdmUgZHJpdmVycwojCiMgQ09ORklHX1NFTlNPUlNfQUJJVFVHVVJV
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BQklUVUdVUlUzIGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19BRDc0MTQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0FENzQx
OCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyMSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURNMTAyNSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAy
NiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyOSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURNMTAzMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNOTI0
MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQxMCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURUNzQxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQ2
MiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQ3MCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURUNzQ3NSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQVNDNzYy
MSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfSzhURU1QIGlzIG5vdCBzZXQKQ09ORklH
X1NFTlNPUlNfSzEwVEVNUD15CkNPTkZJR19TRU5TT1JTX0ZBTTE1SF9QT1dFUj15CiMgQ09O
RklHX1NFTlNPUlNfQVNCMTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BVFhQMSBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfRFM2MjAgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0RTMTYyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfSTVLX0FNQiBpcyBu
b3Qgc2V0CkNPTkZJR19TRU5TT1JTX0Y3MTgwNUY9eQpDT05GSUdfU0VOU09SU19GNzE4ODJG
Rz15CkNPTkZJR19TRU5TT1JTX0Y3NTM3NVM9eQojIENPTkZJR19TRU5TT1JTX0ZTQ0hNRCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfRzc2MEEgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0c3NjIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0dMNTE4U00gaXMgbm90
IHNldAojIENPTkZJR19TRU5TT1JTX0dMNTIwU00gaXMgbm90IHNldAojIENPTkZJR19TRU5T
T1JTX0hJSDYxMzAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0hUVTIxIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19DT1JFVEVNUCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JT
X0lUODc9eQpDT05GSUdfU0VOU09SU19KQzQyPXkKIyBDT05GSUdfU0VOU09SU19MSU5FQUdF
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTYzIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0VOU09SU19MTTczIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTc1IGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19MTTc3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19M
TTc4IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTgwIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19MTTgzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTg1IGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTg3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19MTTkwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTkyIGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19MTTkzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MTUx
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjE1IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19MVEM0MjQ1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjYx
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTk1MjM0IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19MTTk1MjQxIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTk1MjQ1
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVgxNjA2NSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfTUFYMTYxOSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYMTY2
OCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYMTk3IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19NQVg2NjM5IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVg2NjQy
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVg2NjUwIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19NQVg2Njk3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQ1AzMDIx
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19OQ1Q2Nzc1IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19OVENfVEhFUk1JU1RPUiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
UEM4NzM2MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfUEM4NzQyNyBpcyBub3Qgc2V0
CiMgQ09ORklHX1NFTlNPUlNfUENGODU5MSBpcyBub3Qgc2V0CiMgQ09ORklHX1BNQlVTIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19TSFQyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NF
TlNPUlNfU0lTNTU5NSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfU01NNjY1IGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19ETUUxNzM3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VO
U09SU19FTUMxNDAzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19FTUMyMTAzIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19FTUM2VzIwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NF
TlNPUlNfU01TQzQ3TTEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NNU0M0N00xOTIg
aXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NNU0M0N0IzOTcgaXMgbm90IHNldAojIENP
TkZJR19TRU5TT1JTX1NDSDU2WFhfQ09NTU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19TQ0g1NjI3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19TQ0g1NjM2IGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19BRFMxMDE1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19BRFM3ODI4IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BTUM2ODIxIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19JTkEyMDkgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JT
X0lOQTJYWCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVEhNQzUwIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19UTVAxMDIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RN
UDQwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVE1QNDIxIGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19WSUFfQ1BVVEVNUCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
VklBNjg2QSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVlQxMjExIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19WVDgyMzEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4
Mzc4MUQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4Mzc5MUQgaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX1c4Mzc5MkQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4
Mzc5MyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzNzk1IGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19XODNMNzg1VFMgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4
M0w3ODZORyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzNjI3SEYgaXMgbm90IHNl
dAojIENPTkZJR19TRU5TT1JTX1c4MzYyN0VIRiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQVBQTEVTTUMgaXMgbm90IHNldAoKIwojIEFDUEkgZHJpdmVycwojCkNPTkZJR19TRU5T
T1JTX0FDUElfUE9XRVI9eQojIENPTkZJR19TRU5TT1JTX0FUSzAxMTAgaXMgbm90IHNldApD
T05GSUdfVEhFUk1BTD15CkNPTkZJR19USEVSTUFMX0hXTU9OPXkKQ09ORklHX1RIRVJNQUxf
REVGQVVMVF9HT1ZfU1RFUF9XSVNFPXkKIyBDT05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9G
QUlSX1NIQVJFIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9VU0VS
X1NQQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhFUk1BTF9HT1ZfRkFJUl9TSEFSRSBpcyBu
b3Qgc2V0CkNPTkZJR19USEVSTUFMX0dPVl9TVEVQX1dJU0U9eQojIENPTkZJR19USEVSTUFM
X0dPVl9VU0VSX1NQQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhFUk1BTF9FTVVMQVRJT04g
aXMgbm90IHNldAojIENPTkZJR19JTlRFTF9QT1dFUkNMQU1QIGlzIG5vdCBzZXQKIyBDT05G
SUdfWDg2X1BLR19URU1QX1RIRVJNQUwgaXMgbm90IHNldAojIENPTkZJR19BQ1BJX0lOVDM0
MDNfVEhFUk1BTCBpcyBub3Qgc2V0CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgdGhlcm1hbCBk
cml2ZXJzCiMKQ09ORklHX1dBVENIRE9HPXkKQ09ORklHX1dBVENIRE9HX0NPUkU9eQojIENP
TkZJR19XQVRDSERPR19OT1dBWU9VVCBpcyBub3Qgc2V0CgojCiMgV2F0Y2hkb2cgRGV2aWNl
IERyaXZlcnMKIwojIENPTkZJR19TT0ZUX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFdfV0FUQ0hET0cgaXMgbm90IHNldAojIENPTkZJR19BQ1FVSVJFX1dEVCBpcyBub3Qgc2V0
CiMgQ09ORklHX0FEVkFOVEVDSF9XRFQgaXMgbm90IHNldAojIENPTkZJR19BTElNMTUzNV9X
RFQgaXMgbm90IHNldAojIENPTkZJR19BTElNNzEwMV9XRFQgaXMgbm90IHNldAojIENPTkZJ
R19GNzE4MDhFX1dEVCBpcyBub3Qgc2V0CkNPTkZJR19TUDUxMDBfVENPPXkKIyBDT05GSUdf
U0M1MjBfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfU0JDX0ZJVFBDMl9XQVRDSERPRyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0VVUk9URUNIX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lCNzAw
X1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lCTUFTUiBpcyBub3Qgc2V0CiMgQ09ORklHX1dB
RkVSX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0k2MzAwRVNCX1dEVCBpcyBub3Qgc2V0CiMg
Q09ORklHX0lFNlhYX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lUQ09fV0RUIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSVQ4NzEyRl9XRFQgaXMgbm90IHNldAojIENPTkZJR19JVDg3X1dEVCBp
cyBub3Qgc2V0CiMgQ09ORklHX0hQX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfU0Mx
MjAwX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1BDODc0MTNfV0RUIGlzIG5vdCBzZXQKIyBD
T05GSUdfTlZfVENPIGlzIG5vdCBzZXQKIyBDT05GSUdfNjBYWF9XRFQgaXMgbm90IHNldAoj
IENPTkZJR19TQkM4MzYwX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVTVfV0RUIGlzIG5v
dCBzZXQKIyBDT05GSUdfU01TQ19TQ0gzMTFYX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1NN
U0MzN0I3ODdfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfVklBX1dEVCBpcyBub3Qgc2V0CiMg
Q09ORklHX1c4MzYyN0hGX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1c4MzY5N0hGX1dEVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1c4MzY5N1VHX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1c4
Mzg3N0ZfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfVzgzOTc3Rl9XRFQgaXMgbm90IHNldAoj
IENPTkZJR19NQUNIWl9XRFQgaXMgbm90IHNldAojIENPTkZJR19TQkNfRVBYX0MzX1dBVENI
RE9HIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9XRFQ9eQoKIwojIFBDSS1iYXNlZCBXYXRjaGRv
ZyBDYXJkcwojCiMgQ09ORklHX1BDSVBDV0FUQ0hET0cgaXMgbm90IHNldAojIENPTkZJR19X
RFRQQ0kgaXMgbm90IHNldAoKIwojIFVTQi1iYXNlZCBXYXRjaGRvZyBDYXJkcwojCiMgQ09O
RklHX1VTQlBDV0FUQ0hET0cgaXMgbm90IHNldApDT05GSUdfU1NCX1BPU1NJQkxFPXkKCiMK
IyBTb25pY3MgU2lsaWNvbiBCYWNrcGxhbmUKIwojIENPTkZJR19TU0IgaXMgbm90IHNldApD
T05GSUdfQkNNQV9QT1NTSUJMRT15CgojCiMgQnJvYWRjb20gc3BlY2lmaWMgQU1CQQojCiMg
Q09ORklHX0JDTUEgaXMgbm90IHNldAoKIwojIE11bHRpZnVuY3Rpb24gZGV2aWNlIGRyaXZl
cnMKIwpDT05GSUdfTUZEX0NPUkU9eQojIENPTkZJR19NRkRfQ1M1NTM1IGlzIG5vdCBzZXQK
IyBDT05GSUdfTUZEX0FTMzcxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1BNSUNfQURQNTUyMCBp
cyBub3Qgc2V0CiMgQ09ORklHX01GRF9DUk9TX0VDIGlzIG5vdCBzZXQKIyBDT05GSUdfUE1J
Q19EQTkwM1ggaXMgbm90IHNldAojIENPTkZJR19NRkRfREE5MDUyX0kyQyBpcyBub3Qgc2V0
CiMgQ09ORklHX01GRF9EQTkwNTUgaXMgbm90IHNldAojIENPTkZJR19NRkRfREE5MDYzIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUZEX01DMTNYWFhfSTJDIGlzIG5vdCBzZXQKIyBDT05GSUdf
SFRDX1BBU0lDMyBpcyBub3Qgc2V0CiMgQ09ORklHX0xQQ19JQ0ggaXMgbm90IHNldApDT05G
SUdfTFBDX1NDSD15CiMgQ09ORklHX01GRF9KQU5aX0NNT0RJTyBpcyBub3Qgc2V0CiMgQ09O
RklHX01GRF9LRU1QTEQgaXMgbm90IHNldAojIENPTkZJR19NRkRfODhQTTgwMCBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF84OFBNODA1IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEXzg4UE04
NjBYIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01BWDE0NTc3IGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX01BWDc3Njg2IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01BWDc3NjkzIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUZEX01BWDg5MDcgaXMgbm90IHNldAojIENPTkZJR19NRkRfTUFY
ODkyNSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVg4OTk3IGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX01BWDg5OTggaXMgbm90IHNldAojIENPTkZJR19NRkRfVklQRVJCT0FSRCBpcyBu
b3Qgc2V0CiMgQ09ORklHX01GRF9SRVRVIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1BDRjUw
NjMzIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1JEQzMyMVggaXMgbm90IHNldAojIENPTkZJ
R19NRkRfUlRTWF9QQ0kgaXMgbm90IHNldAojIENPTkZJR19NRkRfUkM1VDU4MyBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9TRUNfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9TSTQ3
NlhfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9TTTUwMSBpcyBub3Qgc2V0CiMgQ09O
RklHX01GRF9TTVNDIGlzIG5vdCBzZXQKIyBDT05GSUdfQUJYNTAwX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19NRkRfU1RNUEUgaXMgbm90IHNldAojIENPTkZJR19NRkRfU1lTQ09OIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUZEX1RJX0FNMzM1WF9UU0NBREMgaXMgbm90IHNldAojIENP
TkZJR19NRkRfTFAzOTQzIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0xQODc4OCBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9QQUxNQVMgaXMgbm90IHNldAojIENPTkZJR19UUFM2MTA1WCBp
cyBub3Qgc2V0CiMgQ09ORklHX1RQUzY1MDdYIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQ
UzY1MDkwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQUzY1MjE3IGlzIG5vdCBzZXQKIyBD
T05GSUdfTUZEX1RQUzY1ODZYIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQUzgwMDMxIGlz
IG5vdCBzZXQKIyBDT05GSUdfVFdMNDAzMF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfVFdM
NjA0MF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dMMTI3M19DT1JFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUZEX0xNMzUzMyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9UQzM1ODlY
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RNSU8gaXMgbm90IHNldAojIENPTkZJR19NRkRf
Vlg4NTUgaXMgbm90IHNldAojIENPTkZJR19NRkRfQVJJWk9OQV9JMkMgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfV004NDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dNODMxWF9JMkMg
aXMgbm90IHNldAojIENPTkZJR19NRkRfV004MzUwX0kyQyBpcyBub3Qgc2V0CiMgQ09ORklH
X01GRF9XTTg5OTQgaXMgbm90IHNldAojIENPTkZJR19SRUdVTEFUT1IgaXMgbm90IHNldApD
T05GSUdfTUVESUFfU1VQUE9SVD15CgojCiMgTXVsdGltZWRpYSBjb3JlIHN1cHBvcnQKIwpD
T05GSUdfTUVESUFfQ0FNRVJBX1NVUFBPUlQ9eQpDT05GSUdfTUVESUFfQU5BTE9HX1RWX1NV
UFBPUlQ9eQpDT05GSUdfTUVESUFfRElHSVRBTF9UVl9TVVBQT1JUPXkKQ09ORklHX01FRElB
X1JBRElPX1NVUFBPUlQ9eQpDT05GSUdfTUVESUFfUkNfU1VQUE9SVD15CiMgQ09ORklHX01F
RElBX0NPTlRST0xMRVIgaXMgbm90IHNldApDT05GSUdfVklERU9fREVWPXkKQ09ORklHX1ZJ
REVPX1Y0TDI9eQpDT05GSUdfVklERU9fQURWX0RFQlVHPXkKIyBDT05GSUdfVklERU9fRklY
RURfTUlOT1JfUkFOR0VTIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX1RVTkVSPXkKQ09ORklH
X1ZJREVPQlVGX0dFTj15CkNPTkZJR19WSURFT0JVRl9ETUFfU0c9eQpDT05GSUdfRFZCX0NP
UkU9eQpDT05GSUdfRFZCX05FVD15CiMgQ09ORklHX1RUUENJX0VFUFJPTSBpcyBub3Qgc2V0
CkNPTkZJR19EVkJfTUFYX0FEQVBURVJTPTgKIyBDT05GSUdfRFZCX0RZTkFNSUNfTUlOT1JT
IGlzIG5vdCBzZXQKCiMKIyBNZWRpYSBkcml2ZXJzCiMKQ09ORklHX1JDX0NPUkU9eQpDT05G
SUdfUkNfTUFQPXkKQ09ORklHX1JDX0RFQ09ERVJTPXkKQ09ORklHX0xJUkM9eQpDT05GSUdf
SVJfTElSQ19DT0RFQz15CkNPTkZJR19JUl9ORUNfREVDT0RFUj15CkNPTkZJR19JUl9SQzVf
REVDT0RFUj15CkNPTkZJR19JUl9SQzZfREVDT0RFUj15CkNPTkZJR19JUl9KVkNfREVDT0RF
Uj15CkNPTkZJR19JUl9TT05ZX0RFQ09ERVI9eQpDT05GSUdfSVJfUkM1X1NaX0RFQ09ERVI9
eQpDT05GSUdfSVJfU0FOWU9fREVDT0RFUj15CkNPTkZJR19JUl9NQ0VfS0JEX0RFQ09ERVI9
eQojIENPTkZJR19SQ19ERVZJQ0VTIGlzIG5vdCBzZXQKQ09ORklHX01FRElBX1VTQl9TVVBQ
T1JUPXkKCiMKIyBXZWJjYW0gZGV2aWNlcwojCiMgQ09ORklHX1VTQl9WSURFT19DTEFTUyBp
cyBub3Qgc2V0CiMgQ09ORklHX1VTQl9HU1BDQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9Q
V0MgaXMgbm90IHNldAojIENPTkZJR19WSURFT19DUElBMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9aUjM2NFhYIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUS1dFQkNBTSBpcyBub3Qg
c2V0CiMgQ09ORklHX1VTQl9TMjI1NSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1VTQlRW
IGlzIG5vdCBzZXQKCiMKIyBBbmFsb2cgVFYgVVNCIGRldmljZXMKIwpDT05GSUdfVklERU9f
UFZSVVNCMj15CkNPTkZJR19WSURFT19QVlJVU0IyX1NZU0ZTPXkKQ09ORklHX1ZJREVPX1BW
UlVTQjJfRFZCPXkKIyBDT05GSUdfVklERU9fUFZSVVNCMl9ERUJVR0lGQyBpcyBub3Qgc2V0
CiMgQ09ORklHX1ZJREVPX0hEUFZSIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fVExHMjMw
MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1VTQlZJU0lPTiBpcyBub3Qgc2V0CiMgQ09O
RklHX1ZJREVPX1NUSzExNjBfQ09NTU9OIGlzIG5vdCBzZXQKCiMKIyBBbmFsb2cvZGlnaXRh
bCBUViBVU0IgZGV2aWNlcwojCiMgQ09ORklHX1ZJREVPX0FVMDgyOCBpcyBub3Qgc2V0CiMg
Q09ORklHX1ZJREVPX0NYMjMxWFggaXMgbm90IHNldAojIENPTkZJR19WSURFT19UTTYwMDAg
aXMgbm90IHNldAoKIwojIERpZ2l0YWwgVFYgVVNCIGRldmljZXMKIwojIENPTkZJR19EVkJf
VVNCIGlzIG5vdCBzZXQKIyBDT05GSUdfRFZCX1VTQl9WMiBpcyBub3Qgc2V0CiMgQ09ORklH
X0RWQl9UVFVTQl9CVURHRVQgaXMgbm90IHNldAojIENPTkZJR19EVkJfVFRVU0JfREVDIGlz
IG5vdCBzZXQKIyBDT05GSUdfU01TX1VTQl9EUlYgaXMgbm90IHNldAojIENPTkZJR19EVkJf
QjJDMl9GTEVYQ09QX1VTQiBpcyBub3Qgc2V0CgojCiMgV2ViY2FtLCBUViAoYW5hbG9nL2Rp
Z2l0YWwpIFVTQiBkZXZpY2VzCiMKIyBDT05GSUdfVklERU9fRU0yOFhYIGlzIG5vdCBzZXQK
Q09ORklHX01FRElBX1BDSV9TVVBQT1JUPXkKCiMKIyBNZWRpYSBjYXB0dXJlIHN1cHBvcnQK
IwoKIwojIE1lZGlhIGNhcHR1cmUvYW5hbG9nIFRWIHN1cHBvcnQKIwojIENPTkZJR19WSURF
T19JVlRWIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fWk9SQU4gaXMgbm90IHNldAojIENP
TkZJR19WSURFT19IRVhJVU1fR0VNSU5JIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fSEVY
SVVNX09SSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fTVhCIGlzIG5vdCBzZXQKCiMK
IyBNZWRpYSBjYXB0dXJlL2FuYWxvZy9oeWJyaWQgVFYgc3VwcG9ydAojCiMgQ09ORklHX1ZJ
REVPX0NYMTggaXMgbm90IHNldAojIENPTkZJR19WSURFT19DWDIzODg1IGlzIG5vdCBzZXQK
Q09ORklHX1ZJREVPX0NYMjU4MjE9eQojIENPTkZJR19WSURFT19DWDI1ODIxX0FMU0EgaXMg
bm90IHNldAojIENPTkZJR19WSURFT19DWDg4IGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9f
QlQ4NDggaXMgbm90IHNldAojIENPTkZJR19WSURFT19TQUE3MTM0IGlzIG5vdCBzZXQKIyBD
T05GSUdfVklERU9fU0FBNzE2NCBpcyBub3Qgc2V0CgojCiMgTWVkaWEgZGlnaXRhbCBUViBQ
Q0kgQWRhcHRlcnMKIwojIENPTkZJR19EVkJfQVY3MTEwIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFZCX0JVREdFVF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfRFZCX0IyQzJfRkxFWENPUF9Q
Q0kgaXMgbm90IHNldAojIENPTkZJR19EVkJfUExVVE8yIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFZCX0RNMTEwNSBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9QVDEgaXMgbm90IHNldAojIENP
TkZJR19NQU5USVNfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9OR0VORSBpcyBub3Qg
c2V0CiMgQ09ORklHX0RWQl9EREJSSURHRSBpcyBub3Qgc2V0CiMgQ09ORklHX1Y0TF9QTEFU
Rk9STV9EUklWRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfVjRMX01FTTJNRU1fRFJJVkVSUyBp
cyBub3Qgc2V0CiMgQ09ORklHX1Y0TF9URVNUX0RSSVZFUlMgaXMgbm90IHNldAoKIwojIFN1
cHBvcnRlZCBNTUMvU0RJTyBhZGFwdGVycwojCiMgQ09ORklHX1JBRElPX0FEQVBURVJTIGlz
IG5vdCBzZXQKQ09ORklHX1ZJREVPX0NYMjM0MVg9eQpDT05GSUdfVklERU9fQlRDWD15CkNP
TkZJR19WSURFT19UVkVFUFJPTT15CiMgQ09ORklHX0NZUFJFU1NfRklSTVdBUkUgaXMgbm90
IHNldAoKIwojIE1lZGlhIGFuY2lsbGFyeSBkcml2ZXJzICh0dW5lcnMsIHNlbnNvcnMsIGky
YywgZnJvbnRlbmRzKQojCkNPTkZJR19NRURJQV9TVUJEUlZfQVVUT1NFTEVDVD15CkNPTkZJ
R19NRURJQV9BVFRBQ0g9eQpDT05GSUdfVklERU9fSVJfSTJDPXkKCiMKIyBBdWRpbyBkZWNv
ZGVycywgcHJvY2Vzc29ycyBhbmQgbWl4ZXJzCiMKQ09ORklHX1ZJREVPX01TUDM0MDA9eQpD
T05GSUdfVklERU9fQ1M1M0wzMkE9eQpDT05GSUdfVklERU9fV004Nzc1PXkKCiMKIyBSRFMg
ZGVjb2RlcnMKIwoKIwojIFZpZGVvIGRlY29kZXJzCiMKQ09ORklHX1ZJREVPX1NBQTcxMVg9
eQoKIwojIFZpZGVvIGFuZCBhdWRpbyBkZWNvZGVycwojCkNPTkZJR19WSURFT19DWDI1ODQw
PXkKCiMKIyBWaWRlbyBlbmNvZGVycwojCgojCiMgQ2FtZXJhIHNlbnNvciBkZXZpY2VzCiMK
CiMKIyBGbGFzaCBkZXZpY2VzCiMKCiMKIyBWaWRlbyBpbXByb3ZlbWVudCBjaGlwcwojCgoj
CiMgQXVkaW8vVmlkZW8gY29tcHJlc3Npb24gY2hpcHMKIwoKIwojIE1pc2NlbGxhbmVvdXMg
aGVscGVyIGNoaXBzCiMKCiMKIyBTZW5zb3JzIHVzZWQgb24gc29jX2NhbWVyYSBkcml2ZXIK
IwpDT05GSUdfTUVESUFfVFVORVI9eQpDT05GSUdfTUVESUFfVFVORVJfU0lNUExFPXkKQ09O
RklHX01FRElBX1RVTkVSX1REQTgyOTA9eQpDT05GSUdfTUVESUFfVFVORVJfVERBODI3WD15
CkNPTkZJR19NRURJQV9UVU5FUl9UREExODI3MT15CkNPTkZJR19NRURJQV9UVU5FUl9UREE5
ODg3PXkKQ09ORklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQpDT05GSUdfTUVESUFfVFVORVJf
VEVBNTc2Nz15CkNPTkZJR19NRURJQV9UVU5FUl9NVDIwWFg9eQpDT05GSUdfTUVESUFfVFVO
RVJfWEMyMDI4PXkKQ09ORklHX01FRElBX1RVTkVSX1hDNTAwMD15CkNPTkZJR19NRURJQV9U
VU5FUl9YQzQwMDA9eQpDT05GSUdfTUVESUFfVFVORVJfTUM0NFM4MDM9eQoKIwojIE11bHRp
c3RhbmRhcmQgKHNhdGVsbGl0ZSkgZnJvbnRlbmRzCiMKCiMKIyBNdWx0aXN0YW5kYXJkIChj
YWJsZSArIHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKIwoKIwojIERWQi1TIChzYXRlbGxpdGUp
IGZyb250ZW5kcwojCgojCiMgRFZCLVQgKHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKIwpDT05G
SUdfRFZCX1REQTEwMDQ4PXkKCiMKIyBEVkItQyAoY2FibGUpIGZyb250ZW5kcwojCgojCiMg
QVRTQyAoTm9ydGggQW1lcmljYW4vS29yZWFuIFRlcnJlc3RyaWFsL0NhYmxlIERUVikgZnJv
bnRlbmRzCiMKQ09ORklHX0RWQl9MR0RUMzMwWD15CkNPTkZJR19EVkJfUzVIMTQwOT15CkNP
TkZJR19EVkJfUzVIMTQxMT15CgojCiMgSVNEQi1UICh0ZXJyZXN0cmlhbCkgZnJvbnRlbmRz
CiMKCiMKIyBEaWdpdGFsIHRlcnJlc3RyaWFsIG9ubHkgdHVuZXJzL1BMTAojCgojCiMgU0VD
IGNvbnRyb2wgZGV2aWNlcyBmb3IgRFZCLVMKIwoKIwojIFRvb2xzIHRvIGRldmVsb3AgbmV3
IGZyb250ZW5kcwojCiMgQ09ORklHX0RWQl9EVU1NWV9GRSBpcyBub3Qgc2V0CgojCiMgR3Jh
cGhpY3Mgc3VwcG9ydAojCkNPTkZJR19BR1A9eQpDT05GSUdfQUdQX0FNRDY0PXkKQ09ORklH
X0FHUF9JTlRFTD15CiMgQ09ORklHX0FHUF9TSVMgaXMgbm90IHNldAojIENPTkZJR19BR1Bf
VklBIGlzIG5vdCBzZXQKQ09ORklHX0lOVEVMX0dUVD15CkNPTkZJR19WR0FfQVJCPXkKQ09O
RklHX1ZHQV9BUkJfTUFYX0dQVVM9MTYKIyBDT05GSUdfVkdBX1NXSVRDSEVST08gaXMgbm90
IHNldApDT05GSUdfRFJNPXkKQ09ORklHX0RSTV9LTVNfSEVMUEVSPXkKQ09ORklHX0RSTV9L
TVNfRkJfSEVMUEVSPXkKIyBDT05GSUdfRFJNX0xPQURfRURJRF9GSVJNV0FSRSBpcyBub3Qg
c2V0CkNPTkZJR19EUk1fVFRNPXkKCiMKIyBJMkMgZW5jb2RlciBvciBoZWxwZXIgY2hpcHMK
IwojIENPTkZJR19EUk1fSTJDX0NINzAwNiBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9JMkNf
U0lMMTY0IGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX0kyQ19OWFBfVERBOTk4WCBpcyBub3Qg
c2V0CiMgQ09ORklHX0RSTV9UREZYIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX1IxMjggaXMg
bm90IHNldApDT05GSUdfRFJNX1JBREVPTj15CiMgQ09ORklHX0RSTV9SQURFT05fVU1TIGlz
IG5vdCBzZXQKIyBDT05GSUdfRFJNX05PVVZFQVUgaXMgbm90IHNldAojIENPTkZJR19EUk1f
STgxMCBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9JOTE1IGlzIG5vdCBzZXQKIyBDT05GSUdf
RFJNX01HQSBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9TSVMgaXMgbm90IHNldAojIENPTkZJ
R19EUk1fVklBIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX1NBVkFHRSBpcyBub3Qgc2V0CiMg
Q09ORklHX0RSTV9WTVdHRlggaXMgbm90IHNldAojIENPTkZJR19EUk1fR01BNTAwIGlzIG5v
dCBzZXQKIyBDT05GSUdfRFJNX1VETCBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9BU1QgaXMg
bm90IHNldAojIENPTkZJR19EUk1fTUdBRzIwMCBpcyBub3Qgc2V0CkNPTkZJR19EUk1fQ0lS
UlVTX1FFTVU9eQpDT05GSUdfRFJNX1FYTD15CiMgQ09ORklHX0RSTV9CT0NIUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1ZHQVNUQVRFIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX09VVFBVVF9D
T05UUk9MPXkKQ09ORklHX0hETUk9eQpDT05GSUdfRkI9eQojIENPTkZJR19GSVJNV0FSRV9F
RElEIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfRERDIGlzIG5vdCBzZXQKQ09ORklHX0ZCX0JP
T1RfVkVTQV9TVVBQT1JUPXkKQ09ORklHX0ZCX0NGQl9GSUxMUkVDVD15CkNPTkZJR19GQl9D
RkJfQ09QWUFSRUE9eQpDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJVD15CiMgQ09ORklHX0ZCX0NG
Ql9SRVZfUElYRUxTX0lOX0JZVEUgaXMgbm90IHNldApDT05GSUdfRkJfU1lTX0ZJTExSRUNU
PXkKQ09ORklHX0ZCX1NZU19DT1BZQVJFQT15CkNPTkZJR19GQl9TWVNfSU1BR0VCTElUPXkK
IyBDT05GSUdfRkJfRk9SRUlHTl9FTkRJQU4gaXMgbm90IHNldApDT05GSUdfRkJfU1lTX0ZP
UFM9eQpDT05GSUdfRkJfREVGRVJSRURfSU89eQojIENPTkZJR19GQl9TVkdBTElCIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfTUFDTU9ERVMgaXMgbm90IHNldAojIENPTkZJR19GQl9CQUNL
TElHSFQgaXMgbm90IHNldApDT05GSUdfRkJfTU9ERV9IRUxQRVJTPXkKQ09ORklHX0ZCX1RJ
TEVCTElUVElORz15CgojCiMgRnJhbWUgYnVmZmVyIGhhcmR3YXJlIGRyaXZlcnMKIwojIENP
TkZJR19GQl9DSVJSVVMgaXMgbm90IHNldAojIENPTkZJR19GQl9QTTIgaXMgbm90IHNldAoj
IENPTkZJR19GQl9DWUJFUjIwMDAgaXMgbm90IHNldAojIENPTkZJR19GQl9BUkMgaXMgbm90
IHNldAojIENPTkZJR19GQl9BU0lMSUFOVCBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0lNU1RU
IGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfVkdBMTYgaXMgbm90IHNldAojIENPTkZJR19GQl9V
VkVTQSBpcyBub3Qgc2V0CkNPTkZJR19GQl9WRVNBPXkKIyBDT05GSUdfRkJfTjQxMSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0ZCX0hHQSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX09QRU5DT1JF
UyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1MxRDEzWFhYIGlzIG5vdCBzZXQKIyBDT05GSUdf
RkJfTlZJRElBIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfUklWQSBpcyBub3Qgc2V0CiMgQ09O
RklHX0ZCX0k3NDAgaXMgbm90IHNldAojIENPTkZJR19GQl9MRTgwNTc4IGlzIG5vdCBzZXQK
IyBDT05GSUdfRkJfTUFUUk9YIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfUkFERU9OIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfQVRZMTI4IGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfQVRZIGlz
IG5vdCBzZXQKIyBDT05GSUdfRkJfUzMgaXMgbm90IHNldAojIENPTkZJR19GQl9TQVZBR0Ug
aXMgbm90IHNldAojIENPTkZJR19GQl9TSVMgaXMgbm90IHNldAojIENPTkZJR19GQl9WSUEg
aXMgbm90IHNldAojIENPTkZJR19GQl9ORU9NQUdJQyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZC
X0tZUk8gaXMgbm90IHNldAojIENPTkZJR19GQl8zREZYIGlzIG5vdCBzZXQKIyBDT05GSUdf
RkJfVk9PRE9PMSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1ZUODYyMyBpcyBub3Qgc2V0CiMg
Q09ORklHX0ZCX1RSSURFTlQgaXMgbm90IHNldAojIENPTkZJR19GQl9BUksgaXMgbm90IHNl
dAojIENPTkZJR19GQl9QTTMgaXMgbm90IHNldAojIENPTkZJR19GQl9DQVJNSU5FIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfVE1JTyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1NNU0NVRlgg
aXMgbm90IHNldApDT05GSUdfRkJfVURMPXkKIyBDT05GSUdfRkJfR09MREZJU0ggaXMgbm90
IHNldAojIENPTkZJR19GQl9WSVJUVUFMIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9GQkRFVl9G
Uk9OVEVORD15CiMgQ09ORklHX0ZCX01FVFJPTk9NRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZC
X01CODYyWFggaXMgbm90IHNldAojIENPTkZJR19GQl9CUk9BRFNIRUVUIGlzIG5vdCBzZXQK
IyBDT05GSUdfRkJfQVVPX0sxOTBYIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfU0lNUExFIGlz
IG5vdCBzZXQKIyBDT05GSUdfRVhZTk9TX1ZJREVPIGlzIG5vdCBzZXQKQ09ORklHX0JBQ0tM
SUdIVF9MQ0RfU1VQUE9SVD15CiMgQ09ORklHX0xDRF9DTEFTU19ERVZJQ0UgaXMgbm90IHNl
dApDT05GSUdfQkFDS0xJR0hUX0NMQVNTX0RFVklDRT15CkNPTkZJR19CQUNLTElHSFRfR0VO
RVJJQz15CiMgQ09ORklHX0JBQ0tMSUdIVF9BUFBMRSBpcyBub3Qgc2V0CiMgQ09ORklHX0JB
Q0tMSUdIVF9TQUhBUkEgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfQURQODg2MCBp
cyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tMSUdIVF9BRFA4ODcwIGlzIG5vdCBzZXQKIyBDT05G
SUdfQkFDS0xJR0hUX0xNMzYzMEEgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfTE0z
NjM5IGlzIG5vdCBzZXQKIyBDT05GSUdfQkFDS0xJR0hUX0xQODU1WCBpcyBub3Qgc2V0CiMg
Q09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUCBpcyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tMSUdI
VF9CRDYxMDcgaXMgbm90IHNldAoKIwojIENvbnNvbGUgZGlzcGxheSBkcml2ZXIgc3VwcG9y
dAojCkNPTkZJR19WR0FfQ09OU09MRT15CkNPTkZJR19WR0FDT05fU09GVF9TQ1JPTExCQUNL
PXkKQ09ORklHX1ZHQUNPTl9TT0ZUX1NDUk9MTEJBQ0tfU0laRT02NApDT05GSUdfRFVNTVlf
Q09OU09MRT15CkNPTkZJR19GUkFNRUJVRkZFUl9DT05TT0xFPXkKQ09ORklHX0ZSQU1FQlVG
RkVSX0NPTlNPTEVfREVURUNUX1BSSU1BUlk9eQojIENPTkZJR19GUkFNRUJVRkZFUl9DT05T
T0xFX1JPVEFUSU9OIGlzIG5vdCBzZXQKQ09ORklHX0xPR089eQojIENPTkZJR19MT0dPX0xJ
TlVYX01PTk8gaXMgbm90IHNldAojIENPTkZJR19MT0dPX0xJTlVYX1ZHQTE2IGlzIG5vdCBz
ZXQKQ09ORklHX0xPR09fTElOVVhfQ0xVVDIyND15CkNPTkZJR19TT1VORD15CkNPTkZJR19T
T1VORF9PU1NfQ09SRT15CkNPTkZJR19TT1VORF9PU1NfQ09SRV9QUkVDTEFJTT15CkNPTkZJ
R19TTkQ9eQpDT05GSUdfU05EX1RJTUVSPXkKQ09ORklHX1NORF9QQ009eQpDT05GSUdfU05E
X0hXREVQPXkKQ09ORklHX1NORF9SQVdNSURJPXkKQ09ORklHX1NORF9TRVFVRU5DRVI9eQpD
T05GSUdfU05EX1NFUV9EVU1NWT15CkNPTkZJR19TTkRfT1NTRU1VTD15CkNPTkZJR19TTkRf
TUlYRVJfT1NTPXkKQ09ORklHX1NORF9QQ01fT1NTPXkKQ09ORklHX1NORF9QQ01fT1NTX1BM
VUdJTlM9eQpDT05GSUdfU05EX1NFUVVFTkNFUl9PU1M9eQpDT05GSUdfU05EX0hSVElNRVI9
eQpDT05GSUdfU05EX1NFUV9IUlRJTUVSX0RFRkFVTFQ9eQpDT05GSUdfU05EX0RZTkFNSUNf
TUlOT1JTPXkKQ09ORklHX1NORF9NQVhfQ0FSRFM9MzIKQ09ORklHX1NORF9TVVBQT1JUX09M
RF9BUEk9eQpDT05GSUdfU05EX1ZFUkJPU0VfUFJPQ0ZTPXkKIyBDT05GSUdfU05EX1ZFUkJP
U0VfUFJJTlRLIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0RFQlVHIGlzIG5vdCBzZXQKQ09O
RklHX1NORF9WTUFTVEVSPXkKQ09ORklHX1NORF9LQ1RMX0pBQ0s9eQpDT05GSUdfU05EX0RN
QV9TR0JVRj15CkNPTkZJR19TTkRfUkFXTUlESV9TRVE9eQpDT05GSUdfU05EX09QTDNfTElC
X1NFUT15CiMgQ09ORklHX1NORF9PUEw0X0xJQl9TRVEgaXMgbm90IHNldAojIENPTkZJR19T
TkRfU0JBV0VfU0VRIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VNVTEwSzFfU0VRIGlzIG5v
dCBzZXQKQ09ORklHX1NORF9NUFU0MDFfVUFSVD15CkNPTkZJR19TTkRfT1BMM19MSUI9eQpD
T05GSUdfU05EX0RSSVZFUlM9eQojIENPTkZJR19TTkRfUENTUCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NORF9EVU1NWSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9BTE9PUCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9WSVJNSURJIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX01UUEFWIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX1NFUklBTF9VMTY1NTAgaXMgbm90IHNldAojIENPTkZJ
R19TTkRfTVBVNDAxIGlzIG5vdCBzZXQKQ09ORklHX1NORF9QQ0k9eQojIENPTkZJR19TTkRf
QUQxODg5IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FMUzMwMCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NORF9BTFM0MDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FMSTU0NTEgaXMgbm90
IHNldAojIENPTkZJR19TTkRfQVNJSFBJIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FUSUlY
UCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9BVElJWFBfTU9ERU0gaXMgbm90IHNldAojIENP
TkZJR19TTkRfQVU4ODEwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FVODgyMCBpcyBub3Qg
c2V0CiMgQ09ORklHX1NORF9BVTg4MzAgaXMgbm90IHNldAojIENPTkZJR19TTkRfQVcyIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0FaVDMzMjggaXMgbm90IHNldAojIENPTkZJR19TTkRf
QlQ4N1ggaXMgbm90IHNldAojIENPTkZJR19TTkRfQ0EwMTA2IGlzIG5vdCBzZXQKQ09ORklH
X1NORF9DTUlQQ0k9eQpDT05GSUdfU05EX09YWUdFTl9MSUI9eQpDT05GSUdfU05EX09YWUdF
Tj15CiMgQ09ORklHX1NORF9DUzQyODEgaXMgbm90IHNldAojIENPTkZJR19TTkRfQ1M0NlhY
IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0NTNTUzMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NO
RF9DUzU1MzVBVURJTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9DVFhGSSBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9EQVJMQTIwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0dJTkEyMCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9MQVlMQTIwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05E
X0RBUkxBMjQgaXMgbm90IHNldAojIENPTkZJR19TTkRfR0lOQTI0IGlzIG5vdCBzZXQKIyBD
T05GSUdfU05EX0xBWUxBMjQgaXMgbm90IHNldAojIENPTkZJR19TTkRfTU9OQSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NORF9NSUEgaXMgbm90IHNldAojIENPTkZJR19TTkRfRUNITzNHIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0lORElHTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9J
TkRJR09JTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9JTkRJR09ESiBpcyBub3Qgc2V0CiMg
Q09ORklHX1NORF9JTkRJR09JT1ggaXMgbm90IHNldAojIENPTkZJR19TTkRfSU5ESUdPREpY
IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VNVTEwSzEgaXMgbm90IHNldAojIENPTkZJR19T
TkRfRU1VMTBLMVggaXMgbm90IHNldAojIENPTkZJR19TTkRfRU5TMTM3MCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9FTlMxMzcxIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VTMTkzOCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9FUzE5NjggaXMgbm90IHNldAojIENPTkZJR19TTkRf
Rk04MDEgaXMgbm90IHNldApDT05GSUdfU05EX0hEQV9JTlRFTD15CkNPTkZJR19TTkRfSERB
X1BSRUFMTE9DX1NJWkU9NjQKQ09ORklHX1NORF9IREFfSFdERVA9eQojIENPTkZJR19TTkRf
SERBX1JFQ09ORklHIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0hEQV9JTlBVVF9CRUVQIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0hEQV9JTlBVVF9KQUNLIGlzIG5vdCBzZXQKIyBDT05G
SUdfU05EX0hEQV9QQVRDSF9MT0FERVIgaXMgbm90IHNldApDT05GSUdfU05EX0hEQV9DT0RF
Q19SRUFMVEVLPXkKQ09ORklHX1NORF9IREFfQ09ERUNfQU5BTE9HPXkKQ09ORklHX1NORF9I
REFfQ09ERUNfU0lHTUFURUw9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19WSUE9eQpDT05GSUdf
U05EX0hEQV9DT0RFQ19IRE1JPXkKQ09ORklHX1NORF9IREFfQ09ERUNfQ0lSUlVTPXkKQ09O
RklHX1NORF9IREFfQ09ERUNfQ09ORVhBTlQ9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19DQTAx
MTA9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19DQTAxMzI9eQojIENPTkZJR19TTkRfSERBX0NP
REVDX0NBMDEzMl9EU1AgaXMgbm90IHNldApDT05GSUdfU05EX0hEQV9DT0RFQ19DTUVESUE9
eQpDT05GSUdfU05EX0hEQV9DT0RFQ19TSTMwNTQ9eQpDT05GSUdfU05EX0hEQV9HRU5FUklD
PXkKQ09ORklHX1NORF9IREFfUE9XRVJfU0FWRV9ERUZBVUxUPTAKIyBDT05GSUdfU05EX0hE
U1AgaXMgbm90IHNldAojIENPTkZJR19TTkRfSERTUE0gaXMgbm90IHNldAojIENPTkZJR19T
TkRfSUNFMTcxMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9JQ0UxNzI0IGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX0lOVEVMOFgwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0lOVEVMOFgw
TSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9LT1JHMTIxMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1NORF9MT0xBIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0xYNjQ2NEVTIGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX01BRVNUUk8zIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX01JWEFSVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9OTTI1NiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9Q
Q1hIUiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9SSVBUSURFIGlzIG5vdCBzZXQKIyBDT05G
SUdfU05EX1JNRTMyIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1JNRTk2IGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX1JNRTk2NTIgaXMgbm90IHNldAojIENPTkZJR19TTkRfU09OSUNWSUJF
UyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9UUklERU5UIGlzIG5vdCBzZXQKIyBDT05GSUdf
U05EX1ZJQTgyWFggaXMgbm90IHNldAojIENPTkZJR19TTkRfVklBODJYWF9NT0RFTSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NORF9WSVJUVU9TTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9W
WDIyMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9ZTUZQQ0kgaXMgbm90IHNldApDT05GSUdf
U05EX1VTQj15CkNPTkZJR19TTkRfVVNCX0FVRElPPXkKQ09ORklHX1NORF9VU0JfVUExMDE9
eQpDT05GSUdfU05EX1VTQl9VU1gyWT15CkNPTkZJR19TTkRfVVNCX0NBSUFRPXkKQ09ORklH
X1NORF9VU0JfQ0FJQVFfSU5QVVQ9eQojIENPTkZJR19TTkRfVVNCX1VTMTIyTCBpcyBub3Qg
c2V0CkNPTkZJR19TTkRfVVNCXzZGSVJFPXkKIyBDT05GSUdfU05EX1VTQl9ISUZBQ0UgaXMg
bm90IHNldAojIENPTkZJR19TTkRfU09DIGlzIG5vdCBzZXQKIyBDT05GSUdfU09VTkRfUFJJ
TUUgaXMgbm90IHNldAoKIwojIEhJRCBzdXBwb3J0CiMKQ09ORklHX0hJRD15CiMgQ09ORklH
X0hJRF9CQVRURVJZX1NUUkVOR1RIIGlzIG5vdCBzZXQKQ09ORklHX0hJRFJBVz15CiMgQ09O
RklHX1VISUQgaXMgbm90IHNldApDT05GSUdfSElEX0dFTkVSSUM9eQoKIwojIFNwZWNpYWwg
SElEIGRyaXZlcnMKIwpDT05GSUdfSElEX0E0VEVDSD15CiMgQ09ORklHX0hJRF9BQ1JVWCBp
cyBub3Qgc2V0CkNPTkZJR19ISURfQVBQTEU9eQojIENPTkZJR19ISURfQVBQTEVJUiBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9BVVJFQUwgaXMgbm90IHNldApDT05GSUdfSElEX0JFTEtJ
Tj15CkNPTkZJR19ISURfQ0hFUlJZPXkKQ09ORklHX0hJRF9DSElDT05ZPXkKIyBDT05GSUdf
SElEX1BST0RJS0VZUyBpcyBub3Qgc2V0CkNPTkZJR19ISURfQ1lQUkVTUz15CiMgQ09ORklH
X0hJRF9EUkFHT05SSVNFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0VNU19GRiBpcyBub3Qg
c2V0CiMgQ09ORklHX0hJRF9FTEVDT00gaXMgbm90IHNldAojIENPTkZJR19ISURfRUxPIGlz
IG5vdCBzZXQKQ09ORklHX0hJRF9FWktFWT15CiMgQ09ORklHX0hJRF9IT0xURUsgaXMgbm90
IHNldAojIENPTkZJR19ISURfSFVJT04gaXMgbm90IHNldAojIENPTkZJR19ISURfS0VZVE9V
Q0ggaXMgbm90IHNldAojIENPTkZJR19ISURfS1lFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElE
X1VDTE9HSUMgaXMgbm90IHNldAojIENPTkZJR19ISURfV0FMVE9QIGlzIG5vdCBzZXQKIyBD
T05GSUdfSElEX0dZUkFUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0lDQURFIGlzIG5v
dCBzZXQKIyBDT05GSUdfSElEX1RXSU5IQU4gaXMgbm90IHNldApDT05GSUdfSElEX0tFTlNJ
TkdUT049eQojIENPTkZJR19ISURfTENQT1dFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9M
RU5PVk9fVFBLQkQgaXMgbm90IHNldApDT05GSUdfSElEX0xPR0lURUNIPXkKIyBDT05GSUdf
SElEX0xPR0lURUNIX0RKIGlzIG5vdCBzZXQKIyBDT05GSUdfTE9HSVRFQ0hfRkYgaXMgbm90
IHNldAojIENPTkZJR19MT0dJUlVNQkxFUEFEMl9GRiBpcyBub3Qgc2V0CiMgQ09ORklHX0xP
R0lHOTQwX0ZGIGlzIG5vdCBzZXQKIyBDT05GSUdfTE9HSVdIRUVMU19GRiBpcyBub3Qgc2V0
CiMgQ09ORklHX0hJRF9NQUdJQ01PVVNFIGlzIG5vdCBzZXQKQ09ORklHX0hJRF9NSUNST1NP
RlQ9eQpDT05GSUdfSElEX01PTlRFUkVZPXkKIyBDT05GSUdfSElEX01VTFRJVE9VQ0ggaXMg
bm90IHNldAojIENPTkZJR19ISURfTlRSSUcgaXMgbm90IHNldAojIENPTkZJR19ISURfT1JU
RUsgaXMgbm90IHNldAojIENPTkZJR19ISURfUEFOVEhFUkxPUkQgaXMgbm90IHNldAojIENP
TkZJR19ISURfUEVUQUxZTlggaXMgbm90IHNldAojIENPTkZJR19ISURfUElDT0xDRCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9QUklNQVggaXMgbm90IHNldAojIENPTkZJR19ISURfUk9D
Q0FUIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NBSVRFSyBpcyBub3Qgc2V0CiMgQ09ORklH
X0hJRF9TQU1TVU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NPTlkgaXMgbm90IHNldAoj
IENPTkZJR19ISURfU1BFRURMSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NURUVMU0VS
SUVTIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NVTlBMVVMgaXMgbm90IHNldAojIENPTkZJ
R19ISURfR1JFRU5BU0lBIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NNQVJUSk9ZUExVUyBp
cyBub3Qgc2V0CiMgQ09ORklHX0hJRF9USVZPIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1RP
UFNFRUQgaXMgbm90IHNldAojIENPTkZJR19ISURfVEhJTkdNIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX1RIUlVTVE1BU1RFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9XQUNPTSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9XSUlNT1RFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1hJ
Tk1PIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1pFUk9QTFVTIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX1pZREFDUk9OIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NFTlNPUl9IVUIgaXMg
bm90IHNldAoKIwojIFVTQiBISUQgc3VwcG9ydAojCkNPTkZJR19VU0JfSElEPXkKQ09ORklH
X0hJRF9QSUQ9eQpDT05GSUdfVVNCX0hJRERFVj15CgojCiMgSTJDIEhJRCBzdXBwb3J0CiMK
IyBDT05GSUdfSTJDX0hJRCBpcyBub3Qgc2V0CkNPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5E
SUFOPXkKQ09ORklHX1VTQl9TVVBQT1JUPXkKQ09ORklHX1VTQl9DT01NT049eQpDT05GSUdf
VVNCX0FSQ0hfSEFTX0hDRD15CkNPTkZJR19VU0I9eQojIENPTkZJR19VU0JfREVCVUcgaXMg
bm90IHNldApDT05GSUdfVVNCX0FOTk9VTkNFX05FV19ERVZJQ0VTPXkKCiMKIyBNaXNjZWxs
YW5lb3VzIFVTQiBvcHRpb25zCiMKQ09ORklHX1VTQl9ERUZBVUxUX1BFUlNJU1Q9eQojIENP
TkZJR19VU0JfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldApDT05GSUdfVVNCX01PTj15CiMg
Q09ORklHX1VTQl9XVVNCX0NCQUYgaXMgbm90IHNldAoKIwojIFVTQiBIb3N0IENvbnRyb2xs
ZXIgRHJpdmVycwojCiMgQ09ORklHX1VTQl9DNjdYMDBfSENEIGlzIG5vdCBzZXQKQ09ORklH
X1VTQl9YSENJX0hDRD15CkNPTkZJR19VU0JfRUhDSV9IQ0Q9eQpDT05GSUdfVVNCX0VIQ0lf
Uk9PVF9IVUJfVFQ9eQpDT05GSUdfVVNCX0VIQ0lfVFRfTkVXU0NIRUQ9eQpDT05GSUdfVVNC
X0VIQ0lfUENJPXkKIyBDT05GSUdfVVNCX0VIQ0lfSENEX1BMQVRGT1JNIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX09YVTIxMEhQX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JU1Ax
MTZYX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JU1AxNzYwX0hDRCBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9JU1AxMzYyX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9GVVNC
SDIwMF9IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfRk9URzIxMF9IQ0QgaXMgbm90IHNl
dApDT05GSUdfVVNCX09IQ0lfSENEPXkKQ09ORklHX1VTQl9PSENJX0hDRF9QQ0k9eQojIENP
TkZJR19VU0JfT0hDSV9IQ0RfUExBVEZPUk0gaXMgbm90IHNldApDT05GSUdfVVNCX1VIQ0lf
SENEPXkKIyBDT05GSUdfVVNCX1NMODExX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9S
OEE2NjU5N19IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfSENEX1RFU1RfTU9ERSBpcyBu
b3Qgc2V0CgojCiMgVVNCIERldmljZSBDbGFzcyBkcml2ZXJzCiMKIyBDT05GSUdfVVNCX0FD
TSBpcyBub3Qgc2V0CkNPTkZJR19VU0JfUFJJTlRFUj15CiMgQ09ORklHX1VTQl9XRE0gaXMg
bm90IHNldAojIENPTkZJR19VU0JfVE1DIGlzIG5vdCBzZXQKCiMKIyBOT1RFOiBVU0JfU1RP
UkFHRSBkZXBlbmRzIG9uIFNDU0kgYnV0IEJMS19ERVZfU0QgbWF5CiMKCiMKIyBhbHNvIGJl
IG5lZWRlZDsgc2VlIFVTQl9TVE9SQUdFIEhlbHAgZm9yIG1vcmUgaW5mbwojCkNPTkZJR19V
U0JfU1RPUkFHRT15CiMgQ09ORklHX1VTQl9TVE9SQUdFX0RFQlVHIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX1NUT1JBR0VfUkVBTFRFSyBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TVE9S
QUdFX0RBVEFGQUIgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9GUkVFQ09NIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfSVNEMjAwIGlzIG5vdCBzZXQKIyBDT05G
SUdfVVNCX1NUT1JBR0VfVVNCQVQgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9T
RERSMDkgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9TRERSNTUgaXMgbm90IHNl
dAojIENPTkZJR19VU0JfU1RPUkFHRV9KVU1QU0hPVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TVE9SQUdFX0FMQVVEQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TVE9SQUdFX09ORVRP
VUNIIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfS0FSTUEgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfU1RPUkFHRV9DWVBSRVNTX0FUQUNCIGlzIG5vdCBzZXQKIyBDT05GSUdf
VVNCX1NUT1JBR0VfRU5FX1VCNjI1MCBpcyBub3Qgc2V0CgojCiMgVVNCIEltYWdpbmcgZGV2
aWNlcwojCiMgQ09ORklHX1VTQl9NREM4MDAgaXMgbm90IHNldAojIENPTkZJR19VU0JfTUlD
Uk9URUsgaXMgbm90IHNldAojIENPTkZJR19VU0JfTVVTQl9IRFJDIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX0RXQzMgaXMgbm90IHNldAojIENPTkZJR19VU0JfRFdDMiBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9DSElQSURFQSBpcyBub3Qgc2V0CgojCiMgVVNCIHBvcnQgZHJpdmVy
cwojCkNPTkZJR19VU0JfU0VSSUFMPXkKIyBDT05GSUdfVVNCX1NFUklBTF9DT05TT0xFIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9HRU5FUklDIGlzIG5vdCBzZXQKIyBDT05G
SUdfVVNCX1NFUklBTF9TSU1QTEUgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0FJ
UkNBQkxFIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9BUkszMTE2IGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9CRUxLSU4gaXMgbm90IHNldAojIENPTkZJR19VU0Jf
U0VSSUFMX0NIMzQxIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9XSElURUhFQVQg
aXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0RJR0lfQUNDRUxFUE9SVCBpcyBub3Qg
c2V0CkNPTkZJR19VU0JfU0VSSUFMX0NQMjEwWD15CkNPTkZJR19VU0JfU0VSSUFMX0NZUFJF
U1NfTTg9eQojIENPTkZJR19VU0JfU0VSSUFMX0VNUEVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
VVNCX1NFUklBTF9GVERJX1NJTyBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfVklT
T1IgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0lQQVEgaXMgbm90IHNldAojIENP
TkZJR19VU0JfU0VSSUFMX0lSIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9FREdF
UE9SVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfRURHRVBPUlRfVEkgaXMgbm90
IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0Y4MTIzMiBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfR0FSTUlOIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9JUFcgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0lVVSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfS0VZU1BBTl9QREEgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0tF
WVNQQU4gaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0tMU0kgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfU0VSSUFMX0tPQklMX1NDVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9T
RVJJQUxfTUNUX1UyMzIgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX01FVFJPIGlz
IG5vdCBzZXQKQ09ORklHX1VTQl9TRVJJQUxfTU9TNzcyMD15CkNPTkZJR19VU0JfU0VSSUFM
X01PUzc4NDA9eQojIENPTkZJR19VU0JfU0VSSUFMX01YVVBPUlQgaXMgbm90IHNldAojIENP
TkZJR19VU0JfU0VSSUFMX05BVk1BTiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxf
UEwyMzAzIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9PVEk2ODU4IGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9RQ0FVWCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9T
RVJJQUxfUVVBTENPTU0gaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1NQQ1A4WDUg
aXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1NBRkUgaXMgbm90IHNldAojIENPTkZJ
R19VU0JfU0VSSUFMX1NJRVJSQVdJUkVMRVNTIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NF
UklBTF9TWU1CT0wgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1RJIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9DWUJFUkpBQ0sgaXMgbm90IHNldAojIENPTkZJR19V
U0JfU0VSSUFMX1hJUkNPTSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfT1BUSU9O
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9PTU5JTkVUIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX1NFUklBTF9PUFRJQ09OIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklB
TF9YU0VOU19NVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfV0lTSEJPTkUgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1pURSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfU1NVMTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9RVDIgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0RFQlVHIGlzIG5vdCBzZXQKCiMKIyBVU0Ig
TWlzY2VsbGFuZW91cyBkcml2ZXJzCiMKIyBDT05GSUdfVVNCX0VNSTYyIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX0VNSTI2IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0FEVVRVWCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1VTQl9TRVZTRUcgaXMgbm90IHNldAojIENPTkZJR19VU0JfUklP
NTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0xFR09UT1dFUiBpcyBub3Qgc2V0CiMgQ09O
RklHX1VTQl9MQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfTEVEIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX0NZUFJFU1NfQ1k3QzYzIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0NZVEhF
Uk0gaXMgbm90IHNldAojIENPTkZJR19VU0JfSURNT1VTRSBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9GVERJX0VMQU4gaXMgbm90IHNldAojIENPTkZJR19VU0JfQVBQTEVESVNQTEFZIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NJU1VTQlZHQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9MRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9UUkFOQ0VWSUJSQVRPUiBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9JT1dBUlJJT1IgaXMgbm90IHNldAojIENPTkZJR19VU0JfVEVTVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1VTQl9FSFNFVF9URVNUX0ZJWFRVUkUgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfSVNJR0hURlcgaXMgbm90IHNldAojIENPTkZJR19VU0JfWVVSRVggaXMg
bm90IHNldAojIENPTkZJR19VU0JfRVpVU0JfRlgyIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNC
X0hTSUNfVVNCMzUwMyBpcyBub3Qgc2V0CgojCiMgVVNCIFBoeXNpY2FsIExheWVyIGRyaXZl
cnMKIwojIENPTkZJR19VU0JfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX09UR19GU00g
aXMgbm90IHNldAojIENPTkZJR19OT1BfVVNCX1hDRUlWIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0FNU1VOR19VU0IyUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfU0FNU1VOR19VU0IzUEhZIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX0lTUDEzMDEgaXMgbm90IHNldAojIENPTkZJR19VU0Jf
UkNBUl9QSFkgaXMgbm90IHNldAojIENPTkZJR19VU0JfR0FER0VUIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVdCIGlzIG5vdCBzZXQKIyBDT05GSUdfTU1DIGlzIG5vdCBzZXQKIyBDT05GSUdf
TUVNU1RJQ0sgaXMgbm90IHNldApDT05GSUdfTkVXX0xFRFM9eQpDT05GSUdfTEVEU19DTEFT
Uz15CgojCiMgTEVEIGRyaXZlcnMKIwojIENPTkZJR19MRURTX0xNMzUzMCBpcyBub3Qgc2V0
CiMgQ09ORklHX0xFRFNfTE0zNjQyIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19QQ0E5NTMy
IGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19MUDM5NDQgaXMgbm90IHNldAojIENPTkZJR19M
RURTX0xQNTUyMSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfTFA1NTIzIGlzIG5vdCBzZXQK
IyBDT05GSUdfTEVEU19MUDU1NjIgaXMgbm90IHNldAojIENPTkZJR19MRURTX0xQODUwMSBp
cyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfQ0xFVk9fTUFJTCBpcyBub3Qgc2V0CiMgQ09ORklH
X0xFRFNfUENBOTU1WCBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfUENBOTYzWCBpcyBub3Qg
c2V0CiMgQ09ORklHX0xFRFNfUENBOTY4NSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfQkQy
ODAyIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19JTlRFTF9TUzQyMDAgaXMgbm90IHNldAoj
IENPTkZJR19MRURTX1RDQTY1MDcgaXMgbm90IHNldAojIENPTkZJR19MRURTX0xNMzU1eCBp
cyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfT1QyMDAgaXMgbm90IHNldAojIENPTkZJR19MRURT
X0JMSU5LTSBpcyBub3Qgc2V0CgojCiMgTEVEIFRyaWdnZXJzCiMKIyBDT05GSUdfTEVEU19U
UklHR0VSUyBpcyBub3Qgc2V0CiMgQ09ORklHX0FDQ0VTU0lCSUxJVFkgaXMgbm90IHNldAoj
IENPTkZJR19JTkZJTklCQU5EIGlzIG5vdCBzZXQKIyBDT05GSUdfRURBQyBpcyBub3Qgc2V0
CkNPTkZJR19SVENfTElCPXkKQ09ORklHX1JUQ19DTEFTUz15CkNPTkZJR19SVENfSENUT1NZ
Uz15CkNPTkZJR19SVENfU1lTVE9IQz15CkNPTkZJR19SVENfSENUT1NZU19ERVZJQ0U9InJ0
YzAiCiMgQ09ORklHX1JUQ19ERUJVRyBpcyBub3Qgc2V0CgojCiMgUlRDIGludGVyZmFjZXMK
IwpDT05GSUdfUlRDX0lOVEZfU1lTRlM9eQpDT05GSUdfUlRDX0lOVEZfUFJPQz15CkNPTkZJ
R19SVENfSU5URl9ERVY9eQojIENPTkZJR19SVENfSU5URl9ERVZfVUlFX0VNVUwgaXMgbm90
IHNldAojIENPTkZJR19SVENfRFJWX1RFU1QgaXMgbm90IHNldAoKIwojIEkyQyBSVEMgZHJp
dmVycwojCiMgQ09ORklHX1JUQ19EUlZfRFMxMzA3IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9EUzEzNzQgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX0RTMTY3MiBpcyBub3Qg
c2V0CiMgQ09ORklHX1JUQ19EUlZfRFMzMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9NQVg2OTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SUzVDMzcyIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUlRDX0RSVl9JU0wxMjA4IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9JU0wxMjAyMiBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfSVNMMTIwNTcgaXMgbm90
IHNldAojIENPTkZJR19SVENfRFJWX1gxMjA1IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9QQ0YyMTI3IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTIzIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTYzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9QQ0Y4NTgzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9NNDFUODAgaXMgbm90IHNl
dAojIENPTkZJR19SVENfRFJWX0JRMzJLIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9T
MzUzOTBBIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9GTTMxMzAgaXMgbm90IHNldAoj
IENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfUlg4
MDI1IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9FTTMwMjcgaXMgbm90IHNldAojIENP
TkZJR19SVENfRFJWX1JWMzAyOUMyIGlzIG5vdCBzZXQKCiMKIyBTUEkgUlRDIGRyaXZlcnMK
IwoKIwojIFBsYXRmb3JtIFJUQyBkcml2ZXJzCiMKQ09ORklHX1JUQ19EUlZfQ01PUz15CiMg
Q09ORklHX1JUQ19EUlZfRFMxMjg2IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzE1
MTEgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX0RTMTU1MyBpcyBub3Qgc2V0CiMgQ09O
RklHX1JUQ19EUlZfRFMxNzQyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9TVEsxN1RB
OCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfTTQ4VDg2IGlzIG5vdCBzZXQKIyBDT05G
SUdfUlRDX0RSVl9NNDhUMzUgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX000OFQ1OSBp
cyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfTVNNNjI0MiBpcyBub3Qgc2V0CiMgQ09ORklH
X1JUQ19EUlZfQlE0ODAyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SUDVDMDEgaXMg
bm90IHNldAojIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9EUzI0MDQgaXMgbm90IHNldAoKIwojIG9uLUNQVSBSVEMgZHJpdmVycwojCiMgQ09O
RklHX1JUQ19EUlZfTU9YQVJUIGlzIG5vdCBzZXQKCiMKIyBISUQgU2Vuc29yIFJUQyBkcml2
ZXJzCiMKIyBDT05GSUdfUlRDX0RSVl9ISURfU0VOU09SX1RJTUUgaXMgbm90IHNldAojIENP
TkZJR19ETUFERVZJQ0VTIGlzIG5vdCBzZXQKIyBDT05GSUdfQVVYRElTUExBWSBpcyBub3Qg
c2V0CiMgQ09ORklHX1VJTyBpcyBub3Qgc2V0CiMgQ09ORklHX1ZGSU8gaXMgbm90IHNldAoj
IENPTkZJR19WSVJUX0RSSVZFUlMgaXMgbm90IHNldAoKIwojIFZpcnRpbyBkcml2ZXJzCiMK
IyBDT05GSUdfVklSVElPX1BDSSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJUlRJT19NTUlPIGlz
IG5vdCBzZXQKCiMKIyBNaWNyb3NvZnQgSHlwZXItViBndWVzdCBzdXBwb3J0CiMKIyBDT05G
SUdfSFlQRVJWIGlzIG5vdCBzZXQKCiMKIyBYZW4gZHJpdmVyIHN1cHBvcnQKIwpDT05GSUdf
WEVOX0JBTExPT049eQpDT05GSUdfWEVOX1NDUlVCX1BBR0VTPXkKQ09ORklHX1hFTl9ERVZf
RVZUQ0hOPXkKQ09ORklHX1hFTl9CQUNLRU5EPXkKQ09ORklHX1hFTkZTPXkKQ09ORklHX1hF
Tl9DT01QQVRfWEVORlM9eQpDT05GSUdfWEVOX1NZU19IWVBFUlZJU09SPXkKQ09ORklHX1hF
Tl9YRU5CVVNfRlJPTlRFTkQ9eQpDT05GSUdfWEVOX0dOVERFVj15CkNPTkZJR19YRU5fR1JB
TlRfREVWX0FMTE9DPXkKQ09ORklHX1NXSU9UTEJfWEVOPXkKQ09ORklHX1hFTl9QQ0lERVZf
QkFDS0VORD15CkNPTkZJR19YRU5fUFJJVkNNRD15CkNPTkZJR19YRU5fQUNQSV9QUk9DRVNT
T1I9eQojIENPTkZJR19YRU5fTUNFX0xPRyBpcyBub3Qgc2V0CkNPTkZJR19YRU5fSEFWRV9Q
Vk1NVT15CiMgQ09ORklHX1NUQUdJTkcgaXMgbm90IHNldAojIENPTkZJR19YODZfUExBVEZP
Uk1fREVWSUNFUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NIUk9NRV9QTEFURk9STVMgaXMgbm90
IHNldAoKIwojIEhhcmR3YXJlIFNwaW5sb2NrIGRyaXZlcnMKIwpDT05GSUdfQ0xLRVZUX0k4
MjUzPXkKQ09ORklHX0k4MjUzX0xPQ0s9eQpDT05GSUdfQ0xLQkxEX0k4MjUzPXkKIyBDT05G
SUdfTUFJTEJPWCBpcyBub3Qgc2V0CkNPTkZJR19JT01NVV9BUEk9eQpDT05GSUdfSU9NTVVf
U1VQUE9SVD15CkNPTkZJR19BTURfSU9NTVU9eQpDT05GSUdfQU1EX0lPTU1VX1NUQVRTPXkK
Q09ORklHX0RNQVJfVEFCTEU9eQojIENPTkZJR19JTlRFTF9JT01NVSBpcyBub3Qgc2V0CkNP
TkZJR19JUlFfUkVNQVA9eQoKIwojIFJlbW90ZXByb2MgZHJpdmVycwojCiMgQ09ORklHX1NU
RV9NT0RFTV9SUFJPQyBpcyBub3Qgc2V0CgojCiMgUnBtc2cgZHJpdmVycwojCiMgQ09ORklH
X1BNX0RFVkZSRVEgaXMgbm90IHNldAojIENPTkZJR19FWFRDT04gaXMgbm90IHNldAojIENP
TkZJR19NRU1PUlkgaXMgbm90IHNldAojIENPTkZJR19JSU8gaXMgbm90IHNldAojIENPTkZJ
R19OVEIgaXMgbm90IHNldAojIENPTkZJR19WTUVfQlVTIGlzIG5vdCBzZXQKIyBDT05GSUdf
UFdNIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBBQ0tfQlVTIGlzIG5vdCBzZXQKIyBDT05GSUdf
UkVTRVRfQ09OVFJPTExFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZNQyBpcyBub3Qgc2V0Cgoj
CiMgUEhZIFN1YnN5c3RlbQojCkNPTkZJR19HRU5FUklDX1BIWT15CiMgQ09ORklHX1BIWV9F
WFlOT1NfTUlQSV9WSURFTyBpcyBub3Qgc2V0CiMgQ09ORklHX0JDTV9LT05BX1VTQjJfUEhZ
IGlzIG5vdCBzZXQKIyBDT05GSUdfUE9XRVJDQVAgaXMgbm90IHNldAoKIwojIEZpcm13YXJl
IERyaXZlcnMKIwojIENPTkZJR19FREQgaXMgbm90IHNldApDT05GSUdfRklSTVdBUkVfTUVN
TUFQPXkKIyBDT05GSUdfREVMTF9SQlUgaXMgbm90IHNldAojIENPTkZJR19EQ0RCQVMgaXMg
bm90IHNldApDT05GSUdfRE1JSUQ9eQpDT05GSUdfRE1JX1NZU0ZTPXkKQ09ORklHX0RNSV9T
Q0FOX01BQ0hJTkVfTk9OX0VGSV9GQUxMQkFDSz15CiMgQ09ORklHX0lTQ1NJX0lCRlRfRklO
RCBpcyBub3Qgc2V0CiMgQ09ORklHX0dPT0dMRV9GSVJNV0FSRSBpcyBub3Qgc2V0CgojCiMg
RmlsZSBzeXN0ZW1zCiMKQ09ORklHX0RDQUNIRV9XT1JEX0FDQ0VTUz15CiMgQ09ORklHX0VY
VDJfRlMgaXMgbm90IHNldApDT05GSUdfRVhUM19GUz15CiMgQ09ORklHX0VYVDNfREVGQVVM
VFNfVE9fT1JERVJFRCBpcyBub3Qgc2V0CkNPTkZJR19FWFQzX0ZTX1hBVFRSPXkKQ09ORklH
X0VYVDNfRlNfUE9TSVhfQUNMPXkKQ09ORklHX0VYVDNfRlNfU0VDVVJJVFk9eQpDT05GSUdf
RVhUNF9GUz15CkNPTkZJR19FWFQ0X1VTRV9GT1JfRVhUMjM9eQojIENPTkZJR19FWFQ0X0ZT
X1BPU0lYX0FDTCBpcyBub3Qgc2V0CiMgQ09ORklHX0VYVDRfRlNfU0VDVVJJVFkgaXMgbm90
IHNldApDT05GSUdfRVhUNF9ERUJVRz15CkNPTkZJR19KQkQ9eQojIENPTkZJR19KQkRfREVC
VUcgaXMgbm90IHNldApDT05GSUdfSkJEMj15CkNPTkZJR19KQkQyX0RFQlVHPXkKQ09ORklH
X0ZTX01CQ0FDSEU9eQojIENPTkZJR19SRUlTRVJGU19GUyBpcyBub3Qgc2V0CiMgQ09ORklH
X0pGU19GUyBpcyBub3Qgc2V0CiMgQ09ORklHX1hGU19GUyBpcyBub3Qgc2V0CkNPTkZJR19H
RlMyX0ZTPXkKQ09ORklHX0JUUkZTX0ZTPXkKQ09ORklHX0JUUkZTX0ZTX1BPU0lYX0FDTD15
CiMgQ09ORklHX0JUUkZTX0ZTX0NIRUNLX0lOVEVHUklUWSBpcyBub3Qgc2V0CiMgQ09ORklH
X0JUUkZTX0ZTX1JVTl9TQU5JVFlfVEVTVFMgaXMgbm90IHNldAojIENPTkZJR19CVFJGU19E
RUJVRyBpcyBub3Qgc2V0CiMgQ09ORklHX0JUUkZTX0FTU0VSVCBpcyBub3Qgc2V0CiMgQ09O
RklHX05JTEZTMl9GUyBpcyBub3Qgc2V0CkNPTkZJR19GU19QT1NJWF9BQ0w9eQpDT05GSUdf
RklMRV9MT0NLSU5HPXkKQ09ORklHX0ZTTk9USUZZPXkKQ09ORklHX0ROT1RJRlk9eQpDT05G
SUdfSU5PVElGWV9VU0VSPXkKQ09ORklHX0ZBTk9USUZZPXkKQ09ORklHX1FVT1RBPXkKQ09O
RklHX1FVT1RBX05FVExJTktfSU5URVJGQUNFPXkKIyBDT05GSUdfUFJJTlRfUVVPVEFfV0FS
TklORyBpcyBub3Qgc2V0CiMgQ09ORklHX1FVT1RBX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklH
X1FVT1RBX1RSRUU9eQojIENPTkZJR19RRk1UX1YxIGlzIG5vdCBzZXQKQ09ORklHX1FGTVRf
VjI9eQpDT05GSUdfUVVPVEFDVEw9eQpDT05GSUdfUVVPVEFDVExfQ09NUEFUPXkKQ09ORklH
X0FVVE9GUzRfRlM9eQpDT05GSUdfRlVTRV9GUz15CiMgQ09ORklHX0NVU0UgaXMgbm90IHNl
dAoKIwojIENhY2hlcwojCkNPTkZJR19GU0NBQ0hFPXkKQ09ORklHX0ZTQ0FDSEVfU1RBVFM9
eQpDT05GSUdfRlNDQUNIRV9ISVNUT0dSQU09eQojIENPTkZJR19GU0NBQ0hFX0RFQlVHIGlz
IG5vdCBzZXQKIyBDT05GSUdfRlNDQUNIRV9PQkpFQ1RfTElTVCBpcyBub3Qgc2V0CiMgQ09O
RklHX0NBQ0hFRklMRVMgaXMgbm90IHNldAoKIwojIENELVJPTS9EVkQgRmlsZXN5c3RlbXMK
IwpDT05GSUdfSVNPOTY2MF9GUz15CkNPTkZJR19KT0xJRVQ9eQpDT05GSUdfWklTT0ZTPXkK
Q09ORklHX1VERl9GUz15CkNPTkZJR19VREZfTkxTPXkKCiMKIyBET1MvRkFUL05UIEZpbGVz
eXN0ZW1zCiMKQ09ORklHX0ZBVF9GUz15CkNPTkZJR19NU0RPU19GUz15CkNPTkZJR19WRkFU
X0ZTPXkKQ09ORklHX0ZBVF9ERUZBVUxUX0NPREVQQUdFPTQzNwpDT05GSUdfRkFUX0RFRkFV
TFRfSU9DSEFSU0VUPSJpc284ODU5LTEiCkNPTkZJR19OVEZTX0ZTPXkKIyBDT05GSUdfTlRG
U19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19OVEZTX1JXPXkKCiMKIyBQc2V1ZG8gZmlsZXN5
c3RlbXMKIwpDT05GSUdfUFJPQ19GUz15CkNPTkZJR19QUk9DX0tDT1JFPXkKQ09ORklHX1BS
T0NfVk1DT1JFPXkKQ09ORklHX1BST0NfU1lTQ1RMPXkKQ09ORklHX1BST0NfUEFHRV9NT05J
VE9SPXkKQ09ORklHX1NZU0ZTPXkKQ09ORklHX1RNUEZTPXkKQ09ORklHX1RNUEZTX1BPU0lY
X0FDTD15CkNPTkZJR19UTVBGU19YQVRUUj15CkNPTkZJR19IVUdFVExCRlM9eQpDT05GSUdf
SFVHRVRMQl9QQUdFPXkKIyBDT05GSUdfQ09ORklHRlNfRlMgaXMgbm90IHNldAojIENPTkZJ
R19NSVNDX0ZJTEVTWVNURU1TIGlzIG5vdCBzZXQKQ09ORklHX05FVFdPUktfRklMRVNZU1RF
TVM9eQojIENPTkZJR19ORlNfRlMgaXMgbm90IHNldAojIENPTkZJR19ORlNEIGlzIG5vdCBz
ZXQKQ09ORklHX0NFUEhfRlM9eQojIENPTkZJR19DRVBIX0ZTQ0FDSEUgaXMgbm90IHNldAoj
IENPTkZJR19DRVBIX0ZTX1BPU0lYX0FDTCBpcyBub3Qgc2V0CkNPTkZJR19DSUZTPXkKIyBD
T05GSUdfQ0lGU19TVEFUUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NJRlNfV0VBS19QV19IQVNI
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lGU19VUENBTEwgaXMgbm90IHNldAojIENPTkZJR19D
SUZTX1hBVFRSIGlzIG5vdCBzZXQKQ09ORklHX0NJRlNfREVCVUc9eQojIENPTkZJR19DSUZT
X0RFQlVHMiBpcyBub3Qgc2V0CiMgQ09ORklHX0NJRlNfREZTX1VQQ0FMTCBpcyBub3Qgc2V0
CiMgQ09ORklHX0NJRlNfU01CMiBpcyBub3Qgc2V0CiMgQ09ORklHX0NJRlNfRlNDQUNIRSBp
cyBub3Qgc2V0CiMgQ09ORklHX05DUF9GUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NPREFfRlMg
aXMgbm90IHNldAojIENPTkZJR19BRlNfRlMgaXMgbm90IHNldApDT05GSUdfTkxTPXkKQ09O
RklHX05MU19ERUZBVUxUPSJ1dGY4IgpDT05GSUdfTkxTX0NPREVQQUdFXzQzNz15CiMgQ09O
RklHX05MU19DT0RFUEFHRV83MzcgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0Vf
Nzc1IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1MCBpcyBub3Qgc2V0CiMg
Q09ORklHX05MU19DT0RFUEFHRV84NTIgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBB
R0VfODU1IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0
CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjAgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09E
RVBBR0VfODYxIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2MiBpcyBub3Qg
c2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjMgaXMgbm90IHNldAojIENPTkZJR19OTFNf
Q09ERVBBR0VfODY0IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2NSBpcyBu
b3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjYgaXMgbm90IHNldAojIENPTkZJR19O
TFNfQ09ERVBBR0VfODY5IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzkzNiBp
cyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV85NTAgaXMgbm90IHNldAojIENPTkZJ
R19OTFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzk0
OSBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NzQgaXMgbm90IHNldAojIENP
TkZJR19OTFNfSVNPODg1OV84IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzEy
NTAgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfMTI1MSBpcyBub3Qgc2V0CkNP
TkZJR19OTFNfQVNDSUk9eQpDT05GSUdfTkxTX0lTTzg4NTlfMT15CiMgQ09ORklHX05MU19J
U084ODU5XzIgaXMgbm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV8zIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkxTX0lTTzg4NTlfNCBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19JU084ODU5
XzUgaXMgbm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV82IGlzIG5vdCBzZXQKIyBDT05G
SUdfTkxTX0lTTzg4NTlfNyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19JU084ODU5XzkgaXMg
bm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV8xMyBpcyBub3Qgc2V0CiMgQ09ORklHX05M
U19JU084ODU5XzE0IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90
IHNldAojIENPTkZJR19OTFNfS09JOF9SIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0tPSThf
VSBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfUk9NQU4gaXMgbm90IHNldAojIENPTkZJ
R19OTFNfTUFDX0NFTFRJQyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfQ0VOVEVVUk8g
aXMgbm90IHNldAojIENPTkZJR19OTFNfTUFDX0NST0FUSUFOIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkxTX01BQ19DWVJJTExJQyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfR0FFTElD
IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX01BQ19HUkVFSyBpcyBub3Qgc2V0CiMgQ09ORklH
X05MU19NQUNfSUNFTEFORCBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfSU5VSVQgaXMg
bm90IHNldAojIENPTkZJR19OTFNfTUFDX1JPTUFOSUFOIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkxTX01BQ19UVVJLSVNIIGlzIG5vdCBzZXQKQ09ORklHX05MU19VVEY4PXkKCiMKIyBLZXJu
ZWwgaGFja2luZwojCkNPTkZJR19UUkFDRV9JUlFGTEFHU19TVVBQT1JUPXkKCiMKIyBwcmlu
dGsgYW5kIGRtZXNnIG9wdGlvbnMKIwpDT05GSUdfUFJJTlRLX1RJTUU9eQpDT05GSUdfREVG
QVVMVF9NRVNTQUdFX0xPR0xFVkVMPTcKIyBDT05GSUdfQk9PVF9QUklOVEtfREVMQVkgaXMg
bm90IHNldAojIENPTkZJR19EWU5BTUlDX0RFQlVHIGlzIG5vdCBzZXQKCiMKIyBDb21waWxl
LXRpbWUgY2hlY2tzIGFuZCBjb21waWxlciBvcHRpb25zCiMKQ09ORklHX0RFQlVHX0lORk89
eQojIENPTkZJR19ERUJVR19JTkZPX1JFRFVDRUQgaXMgbm90IHNldAojIENPTkZJR19FTkFC
TEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfRU5BQkxFX01VU1RfQ0hF
Q0sgaXMgbm90IHNldApDT05GSUdfRlJBTUVfV0FSTj0yMDQ4CiMgQ09ORklHX1NUUklQX0FT
TV9TWU1TIGlzIG5vdCBzZXQKIyBDT05GSUdfUkVBREFCTEVfQVNNIGlzIG5vdCBzZXQKIyBD
T05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldApDT05GSUdfREVCVUdfRlM9eQojIENP
TkZJR19IRUFERVJTX0NIRUNLIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfU0VDVElPTl9N
SVNNQVRDSCBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQpD
T05GSUdfRlJBTUVfUE9JTlRFUj15CiMgQ09ORklHX0RFQlVHX0ZPUkNFX1dFQUtfUEVSX0NQ
VSBpcyBub3Qgc2V0CkNPTkZJR19NQUdJQ19TWVNSUT15CkNPTkZJR19NQUdJQ19TWVNSUV9E
RUZBVUxUX0VOQUJMRT0weDEKQ09ORklHX0RFQlVHX0tFUk5FTD15CgojCiMgTWVtb3J5IERl
YnVnZ2luZwojCiMgQ09ORklHX0RFQlVHX1BBR0VBTExPQyBpcyBub3Qgc2V0CiMgQ09ORklH
X0RFQlVHX09CSkVDVFMgaXMgbm90IHNldAojIENPTkZJR19TTFVCX0RFQlVHX09OIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0xVQl9TVEFUUyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0RFQlVH
X0tNRU1MRUFLPXkKQ09ORklHX0RFQlVHX0tNRU1MRUFLPXkKQ09ORklHX0RFQlVHX0tNRU1M
RUFLX0VBUkxZX0xPR19TSVpFPTQwMAojIENPTkZJR19ERUJVR19LTUVNTEVBS19URVNUIGlz
IG5vdCBzZXQKQ09ORklHX0RFQlVHX0tNRU1MRUFLX0RFRkFVTFRfT0ZGPXkKIyBDT05GSUdf
REVCVUdfU1RBQ0tfVVNBR0UgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19WTSBpcyBub3Qg
c2V0CiMgQ09ORklHX0RFQlVHX1ZJUlRVQUwgaXMgbm90IHNldApDT05GSUdfREVCVUdfTUVN
T1JZX0lOSVQ9eQojIENPTkZJR19ERUJVR19QRVJfQ1BVX01BUFMgaXMgbm90IHNldApDT05G
SUdfSEFWRV9ERUJVR19TVEFDS09WRVJGTE9XPXkKIyBDT05GSUdfREVCVUdfU1RBQ0tPVkVS
RkxPVyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0FSQ0hfS01FTUNIRUNLPXkKQ09ORklHX0RF
QlVHX1NISVJRPXkKCiMKIyBEZWJ1ZyBMb2NrdXBzIGFuZCBIYW5ncwojCkNPTkZJR19MT0NL
VVBfREVURUNUT1I9eQpDT05GSUdfSEFSRExPQ0tVUF9ERVRFQ1RPUj15CiMgQ09ORklHX0JP
T1RQQVJBTV9IQVJETE9DS1VQX1BBTklDIGlzIG5vdCBzZXQKQ09ORklHX0JPT1RQQVJBTV9I
QVJETE9DS1VQX1BBTklDX1ZBTFVFPTAKIyBDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBf
UEFOSUMgaXMgbm90IHNldApDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBfUEFOSUNfVkFM
VUU9MApDT05GSUdfREVURUNUX0hVTkdfVEFTSz15CkNPTkZJR19ERUZBVUxUX0hVTkdfVEFT
S19USU1FT1VUPTEyMAojIENPTkZJR19CT09UUEFSQU1fSFVOR19UQVNLX1BBTklDIGlzIG5v
dCBzZXQKQ09ORklHX0JPT1RQQVJBTV9IVU5HX1RBU0tfUEFOSUNfVkFMVUU9MAojIENPTkZJ
R19QQU5JQ19PTl9PT1BTIGlzIG5vdCBzZXQKQ09ORklHX1BBTklDX09OX09PUFNfVkFMVUU9
MApDT05GSUdfUEFOSUNfVElNRU9VVD0wCiMgQ09ORklHX1NDSEVEX0RFQlVHIGlzIG5vdCBz
ZXQKQ09ORklHX1NDSEVEU1RBVFM9eQpDT05GSUdfVElNRVJfU1RBVFM9eQoKIwojIExvY2sg
RGVidWdnaW5nIChzcGlubG9ja3MsIG11dGV4ZXMsIGV0Yy4uLikKIwpDT05GSUdfREVCVUdf
UlRfTVVURVhFUz15CkNPTkZJR19ERUJVR19QSV9MSVNUPXkKIyBDT05GSUdfUlRfTVVURVhf
VEVTVEVSIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX1NQSU5MT0NLPXkKQ09ORklHX0RFQlVH
X01VVEVYRVM9eQojIENPTkZJR19ERUJVR19XV19NVVRFWF9TTE9XUEFUSCBpcyBub3Qgc2V0
CkNPTkZJR19ERUJVR19MT0NLX0FMTE9DPXkKQ09ORklHX1BST1ZFX0xPQ0tJTkc9eQpDT05G
SUdfTE9DS0RFUD15CiMgQ09ORklHX0xPQ0tfU1RBVCBpcyBub3Qgc2V0CkNPTkZJR19ERUJV
R19MT0NLREVQPXkKIyBDT05GSUdfREVCVUdfQVRPTUlDX1NMRUVQIGlzIG5vdCBzZXQKIyBD
T05GSUdfREVCVUdfTE9DS0lOR19BUElfU0VMRlRFU1RTIGlzIG5vdCBzZXQKQ09ORklHX1RS
QUNFX0lSUUZMQUdTPXkKQ09ORklHX1NUQUNLVFJBQ0U9eQojIENPTkZJR19ERUJVR19LT0JK
RUNUIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX0JVR1ZFUkJPU0U9eQpDT05GSUdfREVCVUdf
V1JJVEVDT1VOVD15CkNPTkZJR19ERUJVR19MSVNUPXkKQ09ORklHX0RFQlVHX1NHPXkKIyBD
T05GSUdfREVCVUdfTk9USUZJRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfQ1JFREVO
VElBTFMgaXMgbm90IHNldAoKIwojIFJDVSBEZWJ1Z2dpbmcKIwojIENPTkZJR19QUk9WRV9S
Q1UgaXMgbm90IHNldApDT05GSUdfU1BBUlNFX1JDVV9QT0lOVEVSPXkKIyBDT05GSUdfUkNV
X1RPUlRVUkVfVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19SQ1VfQ1BVX1NUQUxMX1RJTUVPVVQ9
NjAKQ09ORklHX1JDVV9DUFVfU1RBTExfSU5GTz15CiMgQ09ORklHX1JDVV9UUkFDRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0RFQlVHX0JMT0NLX0VYVF9ERVZUIGlzIG5vdCBzZXQKIyBDT05G
SUdfTk9USUZJRVJfRVJST1JfSU5KRUNUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfRkFVTFRf
SU5KRUNUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfTEFURU5DWVRPUCBpcyBub3Qgc2V0CkNP
TkZJR19BUkNIX0hBU19ERUJVR19TVFJJQ1RfVVNFUl9DT1BZX0NIRUNLUz15CiMgQ09ORklH
X0RFQlVHX1NUUklDVF9VU0VSX0NPUFlfQ0hFQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1VTRVJf
U1RBQ0tUUkFDRV9TVVBQT1JUPXkKQ09ORklHX05PUF9UUkFDRVI9eQpDT05GSUdfSEFWRV9G
VU5DVElPTl9UUkFDRVI9eQpDT05GSUdfSEFWRV9GVU5DVElPTl9HUkFQSF9UUkFDRVI9eQpD
T05GSUdfSEFWRV9GVU5DVElPTl9HUkFQSF9GUF9URVNUPXkKQ09ORklHX0hBVkVfRlVOQ1RJ
T05fVFJBQ0VfTUNPVU5UX1RFU1Q9eQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRT15CkNP
TkZJR19IQVZFX0RZTkFNSUNfRlRSQUNFX1dJVEhfUkVHUz15CkNPTkZJR19IQVZFX0ZUUkFD
RV9NQ09VTlRfUkVDT1JEPXkKQ09ORklHX0hBVkVfU1lTQ0FMTF9UUkFDRVBPSU5UUz15CkNP
TkZJR19IQVZFX0ZFTlRSWT15CkNPTkZJR19IQVZFX0NfUkVDT1JETUNPVU5UPXkKQ09ORklH
X1RSQUNFX0NMT0NLPXkKQ09ORklHX1JJTkdfQlVGRkVSPXkKQ09ORklHX0VWRU5UX1RSQUNJ
Tkc9eQpDT05GSUdfQ09OVEVYVF9TV0lUQ0hfVFJBQ0VSPXkKQ09ORklHX1RSQUNJTkc9eQpD
T05GSUdfR0VORVJJQ19UUkFDRVI9eQpDT05GSUdfVFJBQ0lOR19TVVBQT1JUPXkKQ09ORklH
X0ZUUkFDRT15CkNPTkZJR19GVU5DVElPTl9UUkFDRVI9eQpDT05GSUdfRlVOQ1RJT05fR1JB
UEhfVFJBQ0VSPXkKIyBDT05GSUdfSVJRU09GRl9UUkFDRVIgaXMgbm90IHNldAojIENPTkZJ
R19TQ0hFRF9UUkFDRVIgaXMgbm90IHNldAojIENPTkZJR19GVFJBQ0VfU1lTQ0FMTFMgaXMg
bm90IHNldAojIENPTkZJR19UUkFDRVJfU05BUFNIT1QgaXMgbm90IHNldApDT05GSUdfQlJB
TkNIX1BST0ZJTEVfTk9ORT15CiMgQ09ORklHX1BST0ZJTEVfQU5OT1RBVEVEX0JSQU5DSEVT
IGlzIG5vdCBzZXQKIyBDT05GSUdfUFJPRklMRV9BTExfQlJBTkNIRVMgaXMgbm90IHNldAoj
IENPTkZJR19TVEFDS19UUkFDRVIgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX0lPX1RS
QUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfVVBST0JFX0VWRU5UIGlzIG5vdCBzZXQKIyBDT05G
SUdfUFJPQkVfRVZFTlRTIGlzIG5vdCBzZXQKQ09ORklHX0RZTkFNSUNfRlRSQUNFPXkKQ09O
RklHX0RZTkFNSUNfRlRSQUNFX1dJVEhfUkVHUz15CiMgQ09ORklHX0ZVTkNUSU9OX1BST0ZJ
TEVSIGlzIG5vdCBzZXQKQ09ORklHX0ZUUkFDRV9NQ09VTlRfUkVDT1JEPXkKIyBDT05GSUdf
RlRSQUNFX1NUQVJUVVBfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX01NSU9UUkFDRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1JJTkdfQlVGRkVSX0JFTkNITUFSSyBpcyBub3Qgc2V0CiMgQ09O
RklHX1JJTkdfQlVGRkVSX1NUQVJUVVBfVEVTVCBpcyBub3Qgc2V0CgojCiMgUnVudGltZSBU
ZXN0aW5nCiMKIyBDT05GSUdfTEtEVE0gaXMgbm90IHNldAojIENPTkZJR19URVNUX0xJU1Rf
U09SVCBpcyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tUUkFDRV9TRUxGX1RFU1QgaXMgbm90IHNl
dAojIENPTkZJR19SQlRSRUVfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lOVEVSVkFMX1RS
RUVfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX1BFUkNQVV9URVNUIGlzIG5vdCBzZXQKIyBD
T05GSUdfQVRPTUlDNjRfU0VMRlRFU1QgaXMgbm90IHNldAojIENPTkZJR19URVNUX1NUUklO
R19IRUxQRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfVEVTVF9LU1RSVE9YIGlzIG5vdCBzZXQK
IyBDT05GSUdfUFJPVklERV9PSENJMTM5NF9ETUFfSU5JVCBpcyBub3Qgc2V0CkNPTkZJR19E
TUFfQVBJX0RFQlVHPXkKIyBDT05GSUdfVEVTVF9NT0RVTEUgaXMgbm90IHNldAojIENPTkZJ
R19URVNUX1VTRVJfQ09QWSBpcyBub3Qgc2V0CiMgQ09ORklHX1NBTVBMRVMgaXMgbm90IHNl
dApDT05GSUdfSEFWRV9BUkNIX0tHREI9eQojIENPTkZJR19LR0RCIGlzIG5vdCBzZXQKIyBD
T05GSUdfU1RSSUNUX0RFVk1FTSBpcyBub3Qgc2V0CkNPTkZJR19YODZfVkVSQk9TRV9CT09U
VVA9eQpDT05GSUdfRUFSTFlfUFJJTlRLPXkKIyBDT05GSUdfRUFSTFlfUFJJTlRLX0RCR1Ag
aXMgbm90IHNldAojIENPTkZJR19YODZfUFREVU1QIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVH
X1JPREFUQT15CiMgQ09ORklHX0RFQlVHX1JPREFUQV9URVNUIGlzIG5vdCBzZXQKIyBDT05G
SUdfREVCVUdfU0VUX01PRFVMRV9ST05YIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfTlhf
VEVTVCBpcyBub3Qgc2V0CkNPTkZJR19ET1VCTEVGQVVMVD15CiMgQ09ORklHX0RFQlVHX1RM
QkZMVVNIIGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX0RFQlVHPXkKIyBDT05GSUdfSU9NTVVf
U1RSRVNTIGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX0xFQUs9eQpDT05GSUdfSEFWRV9NTUlP
VFJBQ0VfU1VQUE9SVD15CkNPTkZJR19JT19ERUxBWV9UWVBFXzBYODA9MApDT05GSUdfSU9f
REVMQVlfVFlQRV8wWEVEPTEKQ09ORklHX0lPX0RFTEFZX1RZUEVfVURFTEFZPTIKQ09ORklH
X0lPX0RFTEFZX1RZUEVfTk9ORT0zCkNPTkZJR19JT19ERUxBWV8wWDgwPXkKIyBDT05GSUdf
SU9fREVMQVlfMFhFRCBpcyBub3Qgc2V0CiMgQ09ORklHX0lPX0RFTEFZX1VERUxBWSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0lPX0RFTEFZX05PTkUgaXMgbm90IHNldApDT05GSUdfREVGQVVM
VF9JT19ERUxBWV9UWVBFPTAKQ09ORklHX0RFQlVHX0JPT1RfUEFSQU1TPXkKIyBDT05GSUdf
Q1BBX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfT1BUSU1JWkVfSU5MSU5JTkcgaXMgbm90
IHNldAojIENPTkZJR19ERUJVR19OTUlfU0VMRlRFU1QgaXMgbm90IHNldAojIENPTkZJR19Y
ODZfREVCVUdfU1RBVElDX0NQVV9IQVMgaXMgbm90IHNldAoKIwojIFNlY3VyaXR5IG9wdGlv
bnMKIwpDT05GSUdfS0VZUz15CiMgQ09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90
IHNldAojIENPTkZJR19CSUdfS0VZUyBpcyBub3Qgc2V0CiMgQ09ORklHX0VOQ1JZUFRFRF9L
RVlTIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZU19ERUJVR19QUk9DX0tFWVMgaXMgbm90IHNl
dAojIENPTkZJR19TRUNVUklUWV9ETUVTR19SRVNUUklDVCBpcyBub3Qgc2V0CiMgQ09ORklH
X1NFQ1VSSVRZIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VDVVJJVFlGUyBpcyBub3Qgc2V0CkNP
TkZJR19ERUZBVUxUX1NFQ1VSSVRZX0RBQz15CkNPTkZJR19ERUZBVUxUX1NFQ1VSSVRZPSIi
CkNPTkZJR19YT1JfQkxPQ0tTPXkKQ09ORklHX0NSWVBUTz15CgojCiMgQ3J5cHRvIGNvcmUg
b3IgaGVscGVyCiMKQ09ORklHX0NSWVBUT19BTEdBUEk9eQpDT05GSUdfQ1JZUFRPX0FMR0FQ
STI9eQpDT05GSUdfQ1JZUFRPX0FFQUQ9eQpDT05GSUdfQ1JZUFRPX0FFQUQyPXkKQ09ORklH
X0NSWVBUT19CTEtDSVBIRVI9eQpDT05GSUdfQ1JZUFRPX0JMS0NJUEhFUjI9eQpDT05GSUdf
Q1JZUFRPX0hBU0g9eQpDT05GSUdfQ1JZUFRPX0hBU0gyPXkKQ09ORklHX0NSWVBUT19STkc9
eQpDT05GSUdfQ1JZUFRPX1JORzI9eQpDT05GSUdfQ1JZUFRPX1BDT01QPXkKQ09ORklHX0NS
WVBUT19QQ09NUDI9eQpDT05GSUdfQ1JZUFRPX01BTkFHRVI9eQpDT05GSUdfQ1JZUFRPX01B
TkFHRVIyPXkKIyBDT05GSUdfQ1JZUFRPX1VTRVIgaXMgbm90IHNldApDT05GSUdfQ1JZUFRP
X01BTkFHRVJfRElTQUJMRV9URVNUUz15CkNPTkZJR19DUllQVE9fR0YxMjhNVUw9eQojIENP
TkZJR19DUllQVE9fTlVMTCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19QQ1JZUFQgaXMg
bm90IHNldApDT05GSUdfQ1JZUFRPX1dPUktRVUVVRT15CkNPTkZJR19DUllQVE9fQ1JZUFRE
PXkKQ09ORklHX0NSWVBUT19BVVRIRU5DPXkKIyBDT05GSUdfQ1JZUFRPX1RFU1QgaXMgbm90
IHNldApDT05GSUdfQ1JZUFRPX0FCTEtfSEVMUEVSPXkKQ09ORklHX0NSWVBUT19HTFVFX0hF
TFBFUl9YODY9eQoKIwojIEF1dGhlbnRpY2F0ZWQgRW5jcnlwdGlvbiB3aXRoIEFzc29jaWF0
ZWQgRGF0YQojCiMgQ09ORklHX0NSWVBUT19DQ00gaXMgbm90IHNldAojIENPTkZJR19DUllQ
VE9fR0NNIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1NFUUlWIGlzIG5vdCBzZXQKCiMK
IyBCbG9jayBtb2RlcwojCkNPTkZJR19DUllQVE9fQ0JDPXkKIyBDT05GSUdfQ1JZUFRPX0NU
UiBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19DVFMgaXMgbm90IHNldApDT05GSUdfQ1JZ
UFRPX0VDQj15CkNPTkZJR19DUllQVE9fTFJXPXkKIyBDT05GSUdfQ1JZUFRPX1BDQkMgaXMg
bm90IHNldApDT05GSUdfQ1JZUFRPX1hUUz15CgojCiMgSGFzaCBtb2RlcwojCkNPTkZJR19D
UllQVE9fQ01BQz15CkNPTkZJR19DUllQVE9fSE1BQz15CiMgQ09ORklHX0NSWVBUT19YQ0JD
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1ZNQUMgaXMgbm90IHNldAoKIwojIERpZ2Vz
dAojCkNPTkZJR19DUllQVE9fQ1JDMzJDPXkKQ09ORklHX0NSWVBUT19DUkMzMkNfSU5URUw9
eQojIENPTkZJR19DUllQVE9fQ1JDMzIgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQ1JD
MzJfUENMTVVMIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19DUkNUMTBESUY9eQojIENPTkZJ
R19DUllQVE9fQ1JDVDEwRElGX1BDTE1VTCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19H
SEFTSCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fTUQ0PXkKQ09ORklHX0NSWVBUT19NRDU9
eQojIENPTkZJR19DUllQVE9fTUlDSEFFTF9NSUMgaXMgbm90IHNldAojIENPTkZJR19DUllQ
VE9fUk1EMTI4IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1JNRDE2MCBpcyBub3Qgc2V0
CiMgQ09ORklHX0NSWVBUT19STUQyNTYgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fUk1E
MzIwIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19TSEExPXkKQ09ORklHX0NSWVBUT19TSEEx
X1NTU0UzPXkKQ09ORklHX0NSWVBUT19TSEEyNTZfU1NTRTM9eQpDT05GSUdfQ1JZUFRPX1NI
QTUxMl9TU1NFMz15CkNPTkZJR19DUllQVE9fU0hBMjU2PXkKQ09ORklHX0NSWVBUT19TSEE1
MTI9eQojIENPTkZJR19DUllQVE9fVEdSMTkyIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRP
X1dQNTEyIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0dIQVNIX0NMTVVMX05JX0lOVEVM
IGlzIG5vdCBzZXQKCiMKIyBDaXBoZXJzCiMKQ09ORklHX0NSWVBUT19BRVM9eQpDT05GSUdf
Q1JZUFRPX0FFU19YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0FFU19OSV9JTlRFTD15CiMgQ09O
RklHX0NSWVBUT19BTlVCSVMgaXMgbm90IHNldApDT05GSUdfQ1JZUFRPX0FSQzQ9eQpDT05G
SUdfQ1JZUFRPX0JMT1dGSVNIPXkKQ09ORklHX0NSWVBUT19CTE9XRklTSF9DT01NT049eQpD
T05GSUdfQ1JZUFRPX0JMT1dGSVNIX1g4Nl82ND15CiMgQ09ORklHX0NSWVBUT19DQU1FTExJ
QSBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fQ0FNRUxMSUFfWDg2XzY0PXkKQ09ORklHX0NS
WVBUT19DQU1FTExJQV9BRVNOSV9BVlhfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19DQU1FTExJ
QV9BRVNOSV9BVlgyX1g4Nl82ND15CiMgQ09ORklHX0NSWVBUT19DQVNUNSBpcyBub3Qgc2V0
CiMgQ09ORklHX0NSWVBUT19DQVNUNV9BVlhfWDg2XzY0IGlzIG5vdCBzZXQKIyBDT05GSUdf
Q1JZUFRPX0NBU1Q2IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0NBU1Q2X0FWWF9YODZf
NjQgaXMgbm90IHNldApDT05GSUdfQ1JZUFRPX0RFUz15CiMgQ09ORklHX0NSWVBUT19GQ1JZ
UFQgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fS0hBWkFEIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ1JZUFRPX1NBTFNBMjAgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fU0FMU0EyMF9Y
ODZfNjQgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fU0VFRCBpcyBub3Qgc2V0CkNPTkZJ
R19DUllQVE9fU0VSUEVOVD15CkNPTkZJR19DUllQVE9fU0VSUEVOVF9TU0UyX1g4Nl82ND15
CkNPTkZJR19DUllQVE9fU0VSUEVOVF9BVlhfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19TRVJQ
RU5UX0FWWDJfWDg2XzY0PXkKIyBDT05GSUdfQ1JZUFRPX1RFQSBpcyBub3Qgc2V0CkNPTkZJ
R19DUllQVE9fVFdPRklTSD15CkNPTkZJR19DUllQVE9fVFdPRklTSF9DT01NT049eQpDT05G
SUdfQ1JZUFRPX1RXT0ZJU0hfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19UV09GSVNIX1g4Nl82
NF8zV0FZPXkKQ09ORklHX0NSWVBUT19UV09GSVNIX0FWWF9YODZfNjQ9eQoKIwojIENvbXBy
ZXNzaW9uCiMKQ09ORklHX0NSWVBUT19ERUZMQVRFPXkKQ09ORklHX0NSWVBUT19aTElCPXkK
Q09ORklHX0NSWVBUT19MWk89eQojIENPTkZJR19DUllQVE9fTFo0IGlzIG5vdCBzZXQKIyBD
T05GSUdfQ1JZUFRPX0xaNEhDIGlzIG5vdCBzZXQKCiMKIyBSYW5kb20gTnVtYmVyIEdlbmVy
YXRpb24KIwpDT05GSUdfQ1JZUFRPX0FOU0lfQ1BSTkc9eQojIENPTkZJR19DUllQVE9fVVNF
Ul9BUElfSEFTSCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9TS0NJUEhF
UiBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19IVyBpcyBub3Qgc2V0CiMgQ09ORklHX0FT
WU1NRVRSSUNfS0VZX1RZUEUgaXMgbm90IHNldApDT05GSUdfSEFWRV9LVk09eQojIENPTkZJ
R19WSVJUVUFMSVpBVElPTiBpcyBub3Qgc2V0CkNPTkZJR19CSU5BUllfUFJJTlRGPXkKCiMK
IyBMaWJyYXJ5IHJvdXRpbmVzCiMKQ09ORklHX1JBSUQ2X1BRPXkKQ09ORklHX0JJVFJFVkVS
U0U9eQpDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15CkNPTkZJR19HRU5FUklD
X1NUUk5MRU5fVVNFUj15CkNPTkZJR19HRU5FUklDX05FVF9VVElMUz15CkNPTkZJR19HRU5F
UklDX0ZJTkRfRklSU1RfQklUPXkKQ09ORklHX0dFTkVSSUNfUENJX0lPTUFQPXkKQ09ORklH
X0dFTkVSSUNfSU9NQVA9eQpDT05GSUdfR0VORVJJQ19JTz15CkNPTkZJR19BUkNIX1VTRV9D
TVBYQ0hHX0xPQ0tSRUY9eQojIENPTkZJR19DUkNfQ0NJVFQgaXMgbm90IHNldApDT05GSUdf
Q1JDMTY9eQpDT05GSUdfQ1JDX1QxMERJRj15CkNPTkZJR19DUkNfSVRVX1Q9eQpDT05GSUdf
Q1JDMzI9eQpDT05GSUdfQ1JDMzJfU0VMRlRFU1Q9eQpDT05GSUdfQ1JDMzJfU0xJQ0VCWTg9
eQojIENPTkZJR19DUkMzMl9TTElDRUJZNCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSQzMyX1NB
UldBVEUgaXMgbm90IHNldAojIENPTkZJR19DUkMzMl9CSVQgaXMgbm90IHNldAojIENPTkZJ
R19DUkM3IGlzIG5vdCBzZXQKQ09ORklHX0xJQkNSQzMyQz15CiMgQ09ORklHX0NSQzggaXMg
bm90IHNldAojIENPTkZJR19SQU5ET00zMl9TRUxGVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19a
TElCX0lORkxBVEU9eQpDT05GSUdfWkxJQl9ERUZMQVRFPXkKQ09ORklHX0xaT19DT01QUkVT
Uz15CkNPTkZJR19MWk9fREVDT01QUkVTUz15CkNPTkZJR19MWjRfREVDT01QUkVTUz15CkNP
TkZJR19YWl9ERUM9eQpDT05GSUdfWFpfREVDX1g4Nj15CkNPTkZJR19YWl9ERUNfUE9XRVJQ
Qz15CkNPTkZJR19YWl9ERUNfSUE2ND15CkNPTkZJR19YWl9ERUNfQVJNPXkKQ09ORklHX1ha
X0RFQ19BUk1USFVNQj15CkNPTkZJR19YWl9ERUNfU1BBUkM9eQpDT05GSUdfWFpfREVDX0JD
Sj15CiMgQ09ORklHX1haX0RFQ19URVNUIGlzIG5vdCBzZXQKQ09ORklHX0RFQ09NUFJFU1Nf
R1pJUD15CkNPTkZJR19ERUNPTVBSRVNTX0JaSVAyPXkKQ09ORklHX0RFQ09NUFJFU1NfTFpN
QT15CkNPTkZJR19ERUNPTVBSRVNTX1haPXkKQ09ORklHX0RFQ09NUFJFU1NfTFpPPXkKQ09O
RklHX0RFQ09NUFJFU1NfTFo0PXkKQ09ORklHX1RFWFRTRUFSQ0g9eQpDT05GSUdfVEVYVFNF
QVJDSF9LTVA9eQpDT05GSUdfVEVYVFNFQVJDSF9CTT15CkNPTkZJR19URVhUU0VBUkNIX0ZT
TT15CkNPTkZJR19BU1NPQ0lBVElWRV9BUlJBWT15CkNPTkZJR19IQVNfSU9NRU09eQpDT05G
SUdfSEFTX0lPUE9SVD15CkNPTkZJR19IQVNfRE1BPXkKQ09ORklHX0NIRUNLX1NJR05BVFVS
RT15CkNPTkZJR19DUFVfUk1BUD15CkNPTkZJR19EUUw9eQpDT05GSUdfTkxBVFRSPXkKQ09O
RklHX0FSQ0hfSEFTX0FUT01JQzY0X0RFQ19JRl9QT1NJVElWRT15CkNPTkZJR19BVkVSQUdF
PXkKIyBDT05GSUdfQ09SRElDIGlzIG5vdCBzZXQKIyBDT05GSUdfRERSIGlzIG5vdCBzZXQK
Q09ORklHX0ZPTlRfU1VQUE9SVD15CiMgQ09ORklHX0ZPTlRTIGlzIG5vdCBzZXQKQ09ORklH
X0ZPTlRfOHg4PXkKQ09ORklHX0ZPTlRfOHgxNj15Cg==
------------0000A013B0B93F0E3
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------0000A013B0B93F0E3--



From xen-devel-bounces@lists.xen.org Tue Feb 11 16:08:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:08:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFsM-0005Hm-5X; Tue, 11 Feb 2014 16:07:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WDFsK-0005Hh-Mv
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 16:07:57 +0000
Received: from [85.158.137.68:65394] by server-15.bemta-3.messagelabs.com id
	7C/55-19263-BDA4AF25; Tue, 11 Feb 2014 16:07:55 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392134874!1148410!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24769 invoked from network); 11 Feb 2014 16:07:54 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 11 Feb 2014 16:07:54 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55302 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WDFs1-0008OY-Lf; Tue, 11 Feb 2014 17:07:38 +0100
Date: Tue, 11 Feb 2014 17:07:50 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1542261541.20140211170750@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140211155650.GA23026@phenom.dumpdata.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------0000A013B0B93F0E3"
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	david.vrabel@citrix.com, msw@amazon.com,
	boris.ostrovsky@oracle.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
	to register non-static key. the code is fine but needs
	lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------0000A013B0B93F0E3
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Tuesday, February 11, 2014, 4:56:50 PM, you wrote:

> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>> Hi Konrad,
>> 
>> Today I decided to try out another kernel RC with your pull request to Jens on top of it .. and I encountered this one:

> Thank you for testing!

> Could you provide the .config file please?

Attached

> Did you see this _before_ the pull request with Jens? I presume
> not, but just double checking?

Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment).

> And lastly - what were you doing when you triggered this? Just launching
> a guest?

Nope, it triggers on guest shutdown ..
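For reference, this lockdep message usually means that flush_work() (or a locking primitive) was handed an object living in dynamically allocated memory whose lockdep key was never set up by the proper init helper, so lockdep has no static key to attach to it. A minimal sketch of the general pattern and the usual fix follows; the struct, field, and function names here are purely illustrative, not the actual xen-blkback code:

```c
/* Illustrative sketch only -- names are hypothetical.  A work_struct
 * embedded in a kmalloc'ed object must be initialised with INIT_WORK(),
 * which (with lockdep enabled) registers a static lockdep key for the
 * work item.  Merely zeroing the structure leaves the key
 * uninitialised, and a later flush_work() on it then triggers
 * "INFO: trying to register non-static key".
 */
#include <linux/workqueue.h>
#include <linux/slab.h>

struct example_blkif {
	struct work_struct purge_work;	/* flushed from the blkback kthread */
	/* ... other per-interface state ... */
};

static void example_purge_fn(struct work_struct *work)
{
	/* ... do the actual purge work ... */
}

static struct example_blkif *example_blkif_alloc(void)
{
	struct example_blkif *blkif = kzalloc(sizeof(*blkif), GFP_KERNEL);

	if (!blkif)
		return NULL;
	/* Without this line, flush_work(&blkif->purge_work) warns. */
	INIT_WORK(&blkif->purge_work, example_purge_fn);
	return blkif;
}
```

As the message itself says, the code is typically fine; only the lockdep annotation (the INIT_WORK at allocation time) is missing.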


> CC-ing Roger and other folks who were on the patches.

>> 
>> 
>> [  438.029756] INFO: trying to register non-static key.
>> [  438.029759] the code is fine but needs lockdep annotation.
>> [  438.029760] turning off the locking correctness validator.
>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>> [  438.029799] Call Trace:
>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>> 
>> Doesn't seem too serious .. but nevertheless :-)
>> 
>> --
>> 
>> Sander
>> 
>> 
>> Monday, February 10, 2014, 8:54:02 PM, you wrote:
>> 
>> > On Mon, Feb 10 2014, Konrad Rzeszutek Wilk wrote:
>> >> Hey Jens,
>> >> 
>> >> Please git pull the following branch:
>> >> 
>> >>  git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-jens-3.14
>> >> 
>> >> which is based off v3.13-rc6. If you would like me to rebase it on
>> >> a different branch/tag I would be more than happy to do so.
>> 
>> > Older is fine, it's only an issue if you are ahead of the branch you
>> > want to go into.
>> 
>> 
>> >> The patches are all bug-fixes and hopefully can go in 3.14.
>> >> 
>> >> They deal with xen-blkback shutdown issues that caused memory leaks
>> >> as well as shutdown races. They should go to the stable tree and if you
>> >> are OK with it I will ask them to backport those fixes.
>> >> 
>> >> There is also a fix to xen-blkfront to deal with unexpected state
>> >> transition. And lastly a fix to the header where it was using the
>> >> __aligned__ unnecessarily.
>> 
>> > Pulled!
>> 
>> 
>> --
>> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
>> the body of a message to majordomo@vger.kernel.org
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at  http://www.tux.org/lkml/

------------0000A013B0B93F0E3
Content-Type: application/octet-stream;
 name=".config"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename=".config"

IwojIEF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGZpbGU7IERPIE5PVCBFRElULgojIExpbnV4
L3g4Nl82NCAzLjE0LjAtcmMyLTIwMTQwMjExLXBjaXJlc2V0LW5ldC1idHJldmVydC14ZW5i
bG9jay1kbWFkZWJ1ZyBLZXJuZWwgQ29uZmlndXJhdGlvbgojCkNPTkZJR182NEJJVD15CkNP
TkZJR19YODZfNjQ9eQpDT05GSUdfWDg2PXkKQ09ORklHX0lOU1RSVUNUSU9OX0RFQ09ERVI9
eQpDT05GSUdfT1VUUFVUX0ZPUk1BVD0iZWxmNjQteDg2LTY0IgpDT05GSUdfQVJDSF9ERUZD
T05GSUc9ImFyY2gveDg2L2NvbmZpZ3MveDg2XzY0X2RlZmNvbmZpZyIKQ09ORklHX0xPQ0tE
RVBfU1VQUE9SVD15CkNPTkZJR19TVEFDS1RSQUNFX1NVUFBPUlQ9eQpDT05GSUdfSEFWRV9M
QVRFTkNZVE9QX1NVUFBPUlQ9eQpDT05GSUdfTU1VPXkKQ09ORklHX05FRURfRE1BX01BUF9T
VEFURT15CkNPTkZJR19ORUVEX1NHX0RNQV9MRU5HVEg9eQpDT05GSUdfR0VORVJJQ19JU0Ff
RE1BPXkKQ09ORklHX0dFTkVSSUNfQlVHPXkKQ09ORklHX0dFTkVSSUNfQlVHX1JFTEFUSVZF
X1BPSU5URVJTPXkKQ09ORklHX0dFTkVSSUNfSFdFSUdIVD15CkNPTkZJR19BUkNIX01BWV9I
QVZFX1BDX0ZEQz15CkNPTkZJR19SV1NFTV9YQ0hHQUREX0FMR09SSVRITT15CkNPTkZJR19H
RU5FUklDX0NBTElCUkFURV9ERUxBWT15CkNPTkZJR19BUkNIX0hBU19DUFVfUkVMQVg9eQpD
T05GSUdfQVJDSF9IQVNfQ0FDSEVfTElORV9TSVpFPXkKQ09ORklHX0FSQ0hfSEFTX0NQVV9B
VVRPUFJPQkU9eQpDT05GSUdfSEFWRV9TRVRVUF9QRVJfQ1BVX0FSRUE9eQpDT05GSUdfTkVF
RF9QRVJfQ1BVX0VNQkVEX0ZJUlNUX0NIVU5LPXkKQ09ORklHX05FRURfUEVSX0NQVV9QQUdF
X0ZJUlNUX0NIVU5LPXkKQ09ORklHX0FSQ0hfSElCRVJOQVRJT05fUE9TU0lCTEU9eQpDT05G
SUdfQVJDSF9TVVNQRU5EX1BPU1NJQkxFPXkKQ09ORklHX0FSQ0hfV0FOVF9IVUdFX1BNRF9T
SEFSRT15CkNPTkZJR19BUkNIX1dBTlRfR0VORVJBTF9IVUdFVExCPXkKQ09ORklHX1pPTkVf
RE1BMzI9eQpDT05GSUdfQVVESVRfQVJDSD15CkNPTkZJR19BUkNIX1NVUFBPUlRTX09QVElN
SVpFRF9JTkxJTklORz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX0RFQlVHX1BBR0VBTExPQz15
CkNPTkZJR19YODZfNjRfU01QPXkKQ09ORklHX1g4Nl9IVD15CkNPTkZJR19BUkNIX0hXRUlH
SFRfQ0ZMQUdTPSItZmNhbGwtc2F2ZWQtcmRpIC1mY2FsbC1zYXZlZC1yc2kgLWZjYWxsLXNh
dmVkLXJkeCAtZmNhbGwtc2F2ZWQtcmN4IC1mY2FsbC1zYXZlZC1yOCAtZmNhbGwtc2F2ZWQt
cjkgLWZjYWxsLXNhdmVkLXIxMCAtZmNhbGwtc2F2ZWQtcjExIgpDT05GSUdfQVJDSF9TVVBQ
T1JUU19VUFJPQkVTPXkKQ09ORklHX0RFRkNPTkZJR19MSVNUPSIvbGliL21vZHVsZXMvJFVO
QU1FX1JFTEVBU0UvLmNvbmZpZyIKQ09ORklHX0lSUV9XT1JLPXkKQ09ORklHX0JVSUxEVElN
RV9FWFRBQkxFX1NPUlQ9eQoKIwojIEdlbmVyYWwgc2V0dXAKIwpDT05GSUdfSU5JVF9FTlZf
QVJHX0xJTUlUPTMyCkNPTkZJR19DUk9TU19DT01QSUxFPSIiCiMgQ09ORklHX0NPTVBJTEVf
VEVTVCBpcyBub3Qgc2V0CkNPTkZJR19MT0NBTFZFUlNJT049IiIKIyBDT05GSUdfTE9DQUxW
RVJTSU9OX0FVVE8gaXMgbm90IHNldApDT05GSUdfSEFWRV9LRVJORUxfR1pJUD15CkNPTkZJ
R19IQVZFX0tFUk5FTF9CWklQMj15CkNPTkZJR19IQVZFX0tFUk5FTF9MWk1BPXkKQ09ORklH
X0hBVkVfS0VSTkVMX1haPXkKQ09ORklHX0hBVkVfS0VSTkVMX0xaTz15CkNPTkZJR19IQVZF
X0tFUk5FTF9MWjQ9eQpDT05GSUdfS0VSTkVMX0daSVA9eQojIENPTkZJR19LRVJORUxfQlpJ
UDIgaXMgbm90IHNldAojIENPTkZJR19LRVJORUxfTFpNQSBpcyBub3Qgc2V0CiMgQ09ORklH
X0tFUk5FTF9YWiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFUk5FTF9MWk8gaXMgbm90IHNldAoj
IENPTkZJR19LRVJORUxfTFo0IGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfSE9TVE5BTUU9
Iihub25lKSIKQ09ORklHX1NXQVA9eQpDT05GSUdfU1lTVklQQz15CkNPTkZJR19TWVNWSVBD
X1NZU0NUTD15CiMgQ09ORklHX1BPU0lYX01RVUVVRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZI
QU5ETEUgaXMgbm90IHNldApDT05GSUdfQVVESVQ9eQpDT05GSUdfQVVESVRTWVNDQUxMPXkK
Q09ORklHX0FVRElUX1dBVENIPXkKQ09ORklHX0FVRElUX1RSRUU9eQoKIwojIElSUSBzdWJz
eXN0ZW0KIwpDT05GSUdfR0VORVJJQ19JUlFfUFJPQkU9eQpDT05GSUdfR0VORVJJQ19JUlFf
U0hPVz15CkNPTkZJR19HRU5FUklDX1BFTkRJTkdfSVJRPXkKQ09ORklHX0lSUV9ET01BSU49
eQojIENPTkZJR19JUlFfRE9NQUlOX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklHX0lSUV9GT1JD
RURfVEhSRUFESU5HPXkKQ09ORklHX1NQQVJTRV9JUlE9eQpDT05GSUdfQ0xPQ0tTT1VSQ0Vf
V0FUQ0hET0c9eQpDT05GSUdfQVJDSF9DTE9DS1NPVVJDRV9EQVRBPXkKQ09ORklHX0dFTkVS
SUNfVElNRV9WU1lTQ0FMTD15CkNPTkZJR19HRU5FUklDX0NMT0NLRVZFTlRTPXkKQ09ORklH
X0dFTkVSSUNfQ0xPQ0tFVkVOVFNfQlVJTEQ9eQpDT05GSUdfR0VORVJJQ19DTE9DS0VWRU5U
U19CUk9BRENBU1Q9eQpDT05GSUdfR0VORVJJQ19DTE9DS0VWRU5UU19NSU5fQURKVVNUPXkK
Q09ORklHX0dFTkVSSUNfQ01PU19VUERBVEU9eQoKIwojIFRpbWVycyBzdWJzeXN0ZW0KIwpD
T05GSUdfVElDS19PTkVTSE9UPXkKQ09ORklHX05PX0haX0NPTU1PTj15CiMgQ09ORklHX0ha
X1BFUklPRElDIGlzIG5vdCBzZXQKQ09ORklHX05PX0haX0lETEU9eQojIENPTkZJR19OT19I
Wl9GVUxMIGlzIG5vdCBzZXQKQ09ORklHX05PX0haPXkKQ09ORklHX0hJR0hfUkVTX1RJTUVS
Uz15CgojCiMgQ1BVL1Rhc2sgdGltZSBhbmQgc3RhdHMgYWNjb3VudGluZwojCkNPTkZJR19U
SUNLX0NQVV9BQ0NPVU5USU5HPXkKIyBDT05GSUdfVklSVF9DUFVfQUNDT1VOVElOR19HRU4g
aXMgbm90IHNldAojIENPTkZJR19JUlFfVElNRV9BQ0NPVU5USU5HIGlzIG5vdCBzZXQKQ09O
RklHX0JTRF9QUk9DRVNTX0FDQ1Q9eQojIENPTkZJR19CU0RfUFJPQ0VTU19BQ0NUX1YzIGlz
IG5vdCBzZXQKQ09ORklHX1RBU0tTVEFUUz15CkNPTkZJR19UQVNLX0RFTEFZX0FDQ1Q9eQpD
T05GSUdfVEFTS19YQUNDVD15CkNPTkZJR19UQVNLX0lPX0FDQ09VTlRJTkc9eQoKIwojIFJD
VSBTdWJzeXN0ZW0KIwpDT05GSUdfVFJFRV9SQ1U9eQojIENPTkZJR19QUkVFTVBUX1JDVSBp
cyBub3Qgc2V0CkNPTkZJR19SQ1VfU1RBTExfQ09NTU9OPXkKIyBDT05GSUdfUkNVX1VTRVJf
UVMgaXMgbm90IHNldApDT05GSUdfUkNVX0ZBTk9VVD02NApDT05GSUdfUkNVX0ZBTk9VVF9M
RUFGPTE2CiMgQ09ORklHX1JDVV9GQU5PVVRfRVhBQ1QgaXMgbm90IHNldApDT05GSUdfUkNV
X0ZBU1RfTk9fSFo9eQojIENPTkZJR19UUkVFX1JDVV9UUkFDRSBpcyBub3Qgc2V0CiMgQ09O
RklHX1JDVV9OT0NCX0NQVSBpcyBub3Qgc2V0CkNPTkZJR19JS0NPTkZJRz15CiMgQ09ORklH
X0lLQ09ORklHX1BST0MgaXMgbm90IHNldApDT05GSUdfTE9HX0JVRl9TSElGVD0xOApDT05G
SUdfSEFWRV9VTlNUQUJMRV9TQ0hFRF9DTE9DSz15CkNPTkZJR19BUkNIX1NVUFBPUlRTX05V
TUFfQkFMQU5DSU5HPXkKQ09ORklHX0FSQ0hfU1VQUE9SVFNfSU5UMTI4PXkKQ09ORklHX0FS
Q0hfV0FOVFNfUFJPVF9OVU1BX1BST1RfTk9ORT15CiMgQ09ORklHX05VTUFfQkFMQU5DSU5H
IGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUFM9eQojIENPTkZJR19DR1JPVVBfREVCVUcgaXMg
bm90IHNldApDT05GSUdfQ0dST1VQX0ZSRUVaRVI9eQojIENPTkZJR19DR1JPVVBfREVWSUNF
IGlzIG5vdCBzZXQKQ09ORklHX0NQVVNFVFM9eQpDT05GSUdfUFJPQ19QSURfQ1BVU0VUPXkK
Q09ORklHX0NHUk9VUF9DUFVBQ0NUPXkKQ09ORklHX1JFU09VUkNFX0NPVU5URVJTPXkKIyBD
T05GSUdfTUVNQ0cgaXMgbm90IHNldAojIENPTkZJR19DR1JPVVBfSFVHRVRMQiBpcyBub3Qg
c2V0CiMgQ09ORklHX0NHUk9VUF9QRVJGIGlzIG5vdCBzZXQKQ09ORklHX0NHUk9VUF9TQ0hF
RD15CkNPTkZJR19GQUlSX0dST1VQX1NDSEVEPXkKIyBDT05GSUdfQ0ZTX0JBTkRXSURUSCBp
cyBub3Qgc2V0CiMgQ09ORklHX1JUX0dST1VQX1NDSEVEIGlzIG5vdCBzZXQKQ09ORklHX0JM
S19DR1JPVVA9eQojIENPTkZJR19ERUJVR19CTEtfQ0dST1VQIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ0hFQ0tQT0lOVF9SRVNUT1JFIGlzIG5vdCBzZXQKQ09ORklHX05BTUVTUEFDRVM9eQpD
T05GSUdfVVRTX05TPXkKQ09ORklHX0lQQ19OUz15CiMgQ09ORklHX1VTRVJfTlMgaXMgbm90
IHNldApDT05GSUdfUElEX05TPXkKQ09ORklHX05FVF9OUz15CkNPTkZJR19TQ0hFRF9BVVRP
R1JPVVA9eQojIENPTkZJR19TWVNGU19ERVBSRUNBVEVEIGlzIG5vdCBzZXQKIyBDT05GSUdf
UkVMQVkgaXMgbm90IHNldApDT05GSUdfQkxLX0RFVl9JTklUUkQ9eQpDT05GSUdfSU5JVFJB
TUZTX1NPVVJDRT0iIgpDT05GSUdfUkRfR1pJUD15CkNPTkZJR19SRF9CWklQMj15CkNPTkZJ
R19SRF9MWk1BPXkKQ09ORklHX1JEX1haPXkKQ09ORklHX1JEX0xaTz15CkNPTkZJR19SRF9M
WjQ9eQojIENPTkZJR19DQ19PUFRJTUlaRV9GT1JfU0laRSBpcyBub3Qgc2V0CkNPTkZJR19T
WVNDVEw9eQpDT05GSUdfQU5PTl9JTk9ERVM9eQpDT05GSUdfSEFWRV9VSUQxNj15CkNPTkZJ
R19TWVNDVExfRVhDRVBUSU9OX1RSQUNFPXkKQ09ORklHX0hBVkVfUENTUEtSX1BMQVRGT1JN
PXkKIyBDT05GSUdfRVhQRVJUIGlzIG5vdCBzZXQKQ09ORklHX1VJRDE2PXkKIyBDT05GSUdf
U1lTQ1RMX1NZU0NBTEwgaXMgbm90IHNldApDT05GSUdfS0FMTFNZTVM9eQpDT05GSUdfS0FM
TFNZTVNfQUxMPXkKQ09ORklHX1BSSU5USz15CkNPTkZJR19CVUc9eQpDT05GSUdfRUxGX0NP
UkU9eQpDT05GSUdfUENTUEtSX1BMQVRGT1JNPXkKQ09ORklHX0JBU0VfRlVMTD15CkNPTkZJ
R19GVVRFWD15CkNPTkZJR19FUE9MTD15CkNPTkZJR19TSUdOQUxGRD15CkNPTkZJR19USU1F
UkZEPXkKQ09ORklHX0VWRU5URkQ9eQpDT05GSUdfU0hNRU09eQpDT05GSUdfQUlPPXkKQ09O
RklHX1BDSV9RVUlSS1M9eQojIENPTkZJR19FTUJFRERFRCBpcyBub3Qgc2V0CkNPTkZJR19I
QVZFX1BFUkZfRVZFTlRTPXkKCiMKIyBLZXJuZWwgUGVyZm9ybWFuY2UgRXZlbnRzIEFuZCBD
b3VudGVycwojCkNPTkZJR19QRVJGX0VWRU5UUz15CiMgQ09ORklHX0RFQlVHX1BFUkZfVVNF
X1ZNQUxMT0MgaXMgbm90IHNldApDT05GSUdfVk1fRVZFTlRfQ09VTlRFUlM9eQpDT05GSUdf
U0xVQl9ERUJVRz15CiMgQ09ORklHX0NPTVBBVF9CUksgaXMgbm90IHNldAojIENPTkZJR19T
TEFCIGlzIG5vdCBzZXQKQ09ORklHX1NMVUI9eQpDT05GSUdfU0xVQl9DUFVfUEFSVElBTD15
CiMgQ09ORklHX1BST0ZJTElORyBpcyBub3Qgc2V0CkNPTkZJR19UUkFDRVBPSU5UUz15CkNP
TkZJR19IQVZFX09QUk9GSUxFPXkKQ09ORklHX09QUk9GSUxFX05NSV9USU1FUj15CiMgQ09O
RklHX0tQUk9CRVMgaXMgbm90IHNldApDT05GSUdfSlVNUF9MQUJFTD15CiMgQ09ORklHX0hB
VkVfNjRCSVRfQUxJR05FRF9BQ0NFU1MgaXMgbm90IHNldApDT05GSUdfSEFWRV9FRkZJQ0lF
TlRfVU5BTElHTkVEX0FDQ0VTUz15CkNPTkZJR19BUkNIX1VTRV9CVUlMVElOX0JTV0FQPXkK
Q09ORklHX0hBVkVfSU9SRU1BUF9QUk9UPXkKQ09ORklHX0hBVkVfS1BST0JFUz15CkNPTkZJ
R19IQVZFX0tSRVRQUk9CRVM9eQpDT05GSUdfSEFWRV9PUFRQUk9CRVM9eQpDT05GSUdfSEFW
RV9LUFJPQkVTX09OX0ZUUkFDRT15CkNPTkZJR19IQVZFX0FSQ0hfVFJBQ0VIT09LPXkKQ09O
RklHX0hBVkVfRE1BX0FUVFJTPXkKQ09ORklHX0dFTkVSSUNfU01QX0lETEVfVEhSRUFEPXkK
Q09ORklHX0hBVkVfUkVHU19BTkRfU1RBQ0tfQUNDRVNTX0FQST15CkNPTkZJR19IQVZFX0RN
QV9BUElfREVCVUc9eQpDT05GSUdfSEFWRV9IV19CUkVBS1BPSU5UPXkKQ09ORklHX0hBVkVf
TUlYRURfQlJFQUtQT0lOVFNfUkVHUz15CkNPTkZJR19IQVZFX1VTRVJfUkVUVVJOX05PVElG
SUVSPXkKQ09ORklHX0hBVkVfUEVSRl9FVkVOVFNfTk1JPXkKQ09ORklHX0hBVkVfUEVSRl9S
RUdTPXkKQ09ORklHX0hBVkVfUEVSRl9VU0VSX1NUQUNLX0RVTVA9eQpDT05GSUdfSEFWRV9B
UkNIX0pVTVBfTEFCRUw9eQpDT05GSUdfQVJDSF9IQVZFX05NSV9TQUZFX0NNUFhDSEc9eQpD
T05GSUdfSEFWRV9BTElHTkVEX1NUUlVDVF9QQUdFPXkKQ09ORklHX0hBVkVfQ01QWENIR19M
T0NBTD15CkNPTkZJR19IQVZFX0NNUFhDSEdfRE9VQkxFPXkKQ09ORklHX0FSQ0hfV0FOVF9D
T01QQVRfSVBDX1BBUlNFX1ZFUlNJT049eQpDT05GSUdfQVJDSF9XQU5UX09MRF9DT01QQVRf
SVBDPXkKQ09ORklHX0hBVkVfQVJDSF9TRUNDT01QX0ZJTFRFUj15CkNPTkZJR19TRUNDT01Q
X0ZJTFRFUj15CkNPTkZJR19IQVZFX0NDX1NUQUNLUFJPVEVDVE9SPXkKIyBDT05GSUdfQ0Nf
U1RBQ0tQUk9URUNUT1IgaXMgbm90IHNldApDT05GSUdfQ0NfU1RBQ0tQUk9URUNUT1JfTk9O
RT15CiMgQ09ORklHX0NDX1NUQUNLUFJPVEVDVE9SX1JFR1VMQVIgaXMgbm90IHNldAojIENP
TkZJR19DQ19TVEFDS1BST1RFQ1RPUl9TVFJPTkcgaXMgbm90IHNldApDT05GSUdfSEFWRV9D
T05URVhUX1RSQUNLSU5HPXkKQ09ORklHX0hBVkVfVklSVF9DUFVfQUNDT1VOVElOR19HRU49
eQpDT05GSUdfSEFWRV9JUlFfVElNRV9BQ0NPVU5USU5HPXkKQ09ORklHX0hBVkVfQVJDSF9U
UkFOU1BBUkVOVF9IVUdFUEFHRT15CkNPTkZJR19IQVZFX0FSQ0hfU09GVF9ESVJUWT15CkNP
TkZJR19NT0RVTEVTX1VTRV9FTEZfUkVMQT15CkNPTkZJR19IQVZFX0lSUV9FWElUX09OX0lS
UV9TVEFDSz15CkNPTkZJR19PTERfU0lHU1VTUEVORDM9eQpDT05GSUdfQ09NUEFUX09MRF9T
SUdBQ1RJT049eQoKIwojIEdDT1YtYmFzZWQga2VybmVsIHByb2ZpbGluZwojCiMgQ09ORklH
X0dDT1ZfS0VSTkVMIGlzIG5vdCBzZXQKIyBDT05GSUdfSEFWRV9HRU5FUklDX0RNQV9DT0hF
UkVOVCBpcyBub3Qgc2V0CkNPTkZJR19TTEFCSU5GTz15CkNPTkZJR19SVF9NVVRFWEVTPXkK
Q09ORklHX0JBU0VfU01BTEw9MAojIENPTkZJR19TWVNURU1fVFJVU1RFRF9LRVlSSU5HIGlz
IG5vdCBzZXQKQ09ORklHX01PRFVMRVM9eQojIENPTkZJR19NT0RVTEVfRk9SQ0VfTE9BRCBp
cyBub3Qgc2V0CkNPTkZJR19NT0RVTEVfVU5MT0FEPXkKIyBDT05GSUdfTU9EVUxFX0ZPUkNF
X1VOTE9BRCBpcyBub3Qgc2V0CiMgQ09ORklHX01PRFZFUlNJT05TIGlzIG5vdCBzZXQKIyBD
T05GSUdfTU9EVUxFX1NSQ1ZFUlNJT05fQUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9EVUxF
X1NJRyBpcyBub3Qgc2V0CkNPTkZJR19TVE9QX01BQ0hJTkU9eQpDT05GSUdfQkxPQ0s9eQpD
T05GSUdfQkxLX0RFVl9CU0c9eQojIENPTkZJR19CTEtfREVWX0JTR0xJQiBpcyBub3Qgc2V0
CkNPTkZJR19CTEtfREVWX0lOVEVHUklUWT15CiMgQ09ORklHX0JMS19ERVZfVEhST1RUTElO
RyBpcyBub3Qgc2V0CiMgQ09ORklHX0JMS19DTURMSU5FX1BBUlNFUiBpcyBub3Qgc2V0Cgoj
CiMgUGFydGl0aW9uIFR5cGVzCiMKQ09ORklHX1BBUlRJVElPTl9BRFZBTkNFRD15CiMgQ09O
RklHX0FDT1JOX1BBUlRJVElPTiBpcyBub3Qgc2V0CiMgQ09ORklHX0FJWF9QQVJUSVRJT04g
aXMgbm90IHNldApDT05GSUdfT1NGX1BBUlRJVElPTj15CkNPTkZJR19BTUlHQV9QQVJUSVRJ
T049eQojIENPTkZJR19BVEFSSV9QQVJUSVRJT04gaXMgbm90IHNldApDT05GSUdfTUFDX1BB
UlRJVElPTj15CkNPTkZJR19NU0RPU19QQVJUSVRJT049eQpDT05GSUdfQlNEX0RJU0tMQUJF
TD15CkNPTkZJR19NSU5JWF9TVUJQQVJUSVRJT049eQpDT05GSUdfU09MQVJJU19YODZfUEFS
VElUSU9OPXkKQ09ORklHX1VOSVhXQVJFX0RJU0tMQUJFTD15CiMgQ09ORklHX0xETV9QQVJU
SVRJT04gaXMgbm90IHNldApDT05GSUdfU0dJX1BBUlRJVElPTj15CiMgQ09ORklHX1VMVFJJ
WF9QQVJUSVRJT04gaXMgbm90IHNldApDT05GSUdfU1VOX1BBUlRJVElPTj15CkNPTkZJR19L
QVJNQV9QQVJUSVRJT049eQpDT05GSUdfRUZJX1BBUlRJVElPTj15CiMgQ09ORklHX1NZU1Y2
OF9QQVJUSVRJT04gaXMgbm90IHNldAojIENPTkZJR19DTURMSU5FX1BBUlRJVElPTiBpcyBu
b3Qgc2V0CkNPTkZJR19CTE9DS19DT01QQVQ9eQoKIwojIElPIFNjaGVkdWxlcnMKIwpDT05G
SUdfSU9TQ0hFRF9OT09QPXkKQ09ORklHX0lPU0NIRURfREVBRExJTkU9eQpDT05GSUdfSU9T
Q0hFRF9DRlE9eQpDT05GSUdfQ0ZRX0dST1VQX0lPU0NIRUQ9eQojIENPTkZJR19ERUZBVUxU
X0RFQURMSU5FIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfQ0ZRPXkKIyBDT05GSUdfREVG
QVVMVF9OT09QIGlzIG5vdCBzZXQKQ09ORklHX0RFRkFVTFRfSU9TQ0hFRD0iY2ZxIgpDT05G
SUdfVU5JTkxJTkVfU1BJTl9VTkxPQ0s9eQpDT05GSUdfRlJFRVpFUj15CgojCiMgUHJvY2Vz
c29yIHR5cGUgYW5kIGZlYXR1cmVzCiMKQ09ORklHX1pPTkVfRE1BPXkKQ09ORklHX1NNUD15
CkNPTkZJR19YODZfWDJBUElDPXkKIyBDT05GSUdfWDg2X01QUEFSU0UgaXMgbm90IHNldAoj
IENPTkZJR19YODZfRVhURU5ERURfUExBVEZPUk0gaXMgbm90IHNldAojIENPTkZJR19YODZf
SU5URUxfTFBTUyBpcyBub3Qgc2V0CkNPTkZJR19YODZfU1VQUE9SVFNfTUVNT1JZX0ZBSUxV
UkU9eQpDT05GSUdfU0NIRURfT01JVF9GUkFNRV9QT0lOVEVSPXkKQ09ORklHX0hZUEVSVklT
T1JfR1VFU1Q9eQpDT05GSUdfUEFSQVZJUlQ9eQpDT05GSUdfUEFSQVZJUlRfREVCVUc9eQpD
T05GSUdfUEFSQVZJUlRfU1BJTkxPQ0tTPXkKQ09ORklHX1hFTj15CkNPTkZJR19YRU5fRE9N
MD15CkNPTkZJR19YRU5fUFJJVklMRUdFRF9HVUVTVD15CkNPTkZJR19YRU5fUFZIVk09eQpD
T05GSUdfWEVOX01BWF9ET01BSU5fTUVNT1JZPTUwMApDT05GSUdfWEVOX1NBVkVfUkVTVE9S
RT15CkNPTkZJR19YRU5fREVCVUdfRlM9eQpDT05GSUdfWEVOX1BWSD15CiMgQ09ORklHX0tW
TV9HVUVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX1BBUkFWSVJUX1RJTUVfQUNDT1VOVElORyBp
cyBub3Qgc2V0CkNPTkZJR19QQVJBVklSVF9DTE9DSz15CkNPTkZJR19OT19CT09UTUVNPXkK
IyBDT05GSUdfTUVNVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX01LOCBpcyBub3Qgc2V0CiMg
Q09ORklHX01QU0MgaXMgbm90IHNldAojIENPTkZJR19NQ09SRTIgaXMgbm90IHNldAojIENP
TkZJR19NQVRPTSBpcyBub3Qgc2V0CkNPTkZJR19HRU5FUklDX0NQVT15CkNPTkZJR19YODZf
SU5URVJOT0RFX0NBQ0hFX1NISUZUPTYKQ09ORklHX1g4Nl9MMV9DQUNIRV9TSElGVD02CkNP
TkZJR19YODZfVFNDPXkKQ09ORklHX1g4Nl9DTVBYQ0hHNjQ9eQpDT05GSUdfWDg2X0NNT1Y9
eQpDT05GSUdfWDg2X01JTklNVU1fQ1BVX0ZBTUlMWT02NApDT05GSUdfWDg2X0RFQlVHQ1RM
TVNSPXkKQ09ORklHX0NQVV9TVVBfSU5URUw9eQpDT05GSUdfQ1BVX1NVUF9BTUQ9eQpDT05G
SUdfQ1BVX1NVUF9DRU5UQVVSPXkKQ09ORklHX0hQRVRfVElNRVI9eQpDT05GSUdfSFBFVF9F
TVVMQVRFX1JUQz15CkNPTkZJR19ETUk9eQpDT05GSUdfR0FSVF9JT01NVT15CiMgQ09ORklH
X0NBTEdBUllfSU9NTVUgaXMgbm90IHNldApDT05GSUdfU1dJT1RMQj15CkNPTkZJR19JT01N
VV9IRUxQRVI9eQojIENPTkZJR19NQVhTTVAgaXMgbm90IHNldApDT05GSUdfTlJfQ1BVUz04
CkNPTkZJR19TQ0hFRF9TTVQ9eQpDT05GSUdfU0NIRURfTUM9eQojIENPTkZJR19QUkVFTVBU
X05PTkUgaXMgbm90IHNldApDT05GSUdfUFJFRU1QVF9WT0xVTlRBUlk9eQojIENPTkZJR19Q
UkVFTVBUIGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9MT0NBTF9BUElDPXkKQ09ORklHX1g4Nl9J
T19BUElDPXkKQ09ORklHX1g4Nl9SRVJPVVRFX0ZPUl9CUk9LRU5fQk9PVF9JUlFTPXkKQ09O
RklHX1g4Nl9NQ0U9eQpDT05GSUdfWDg2X01DRV9JTlRFTD15CkNPTkZJR19YODZfTUNFX0FN
RD15CkNPTkZJR19YODZfTUNFX1RIUkVTSE9MRD15CiMgQ09ORklHX1g4Nl9NQ0VfSU5KRUNU
IGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9USEVSTUFMX1ZFQ1RPUj15CiMgQ09ORklHX0k4SyBp
cyBub3Qgc2V0CiMgQ09ORklHX01JQ1JPQ09ERSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JP
Q09ERV9JTlRFTF9FQVJMWSBpcyBub3Qgc2V0CiMgQ09ORklHX01JQ1JPQ09ERV9BTURfRUFS
TFkgaXMgbm90IHNldApDT05GSUdfWDg2X01TUj15CkNPTkZJR19YODZfQ1BVSUQ9eQpDT05G
SUdfQVJDSF9QSFlTX0FERFJfVF82NEJJVD15CkNPTkZJR19BUkNIX0RNQV9BRERSX1RfNjRC
SVQ9eQpDT05GSUdfRElSRUNUX0dCUEFHRVM9eQpDT05GSUdfTlVNQT15CkNPTkZJR19BTURf
TlVNQT15CkNPTkZJR19YODZfNjRfQUNQSV9OVU1BPXkKQ09ORklHX05PREVTX1NQQU5fT1RI
RVJfTk9ERVM9eQojIENPTkZJR19OVU1BX0VNVSBpcyBub3Qgc2V0CkNPTkZJR19OT0RFU19T
SElGVD04CkNPTkZJR19BUkNIX1NQQVJTRU1FTV9FTkFCTEU9eQpDT05GSUdfQVJDSF9TUEFS
U0VNRU1fREVGQVVMVD15CkNPTkZJR19BUkNIX1NFTEVDVF9NRU1PUllfTU9ERUw9eQpDT05G
SUdfQVJDSF9QUk9DX0tDT1JFX1RFWFQ9eQpDT05GSUdfSUxMRUdBTF9QT0lOVEVSX1ZBTFVF
PTB4ZGVhZDAwMDAwMDAwMDAwMApDT05GSUdfU0VMRUNUX01FTU9SWV9NT0RFTD15CkNPTkZJ
R19TUEFSU0VNRU1fTUFOVUFMPXkKQ09ORklHX1NQQVJTRU1FTT15CkNPTkZJR19ORUVEX01V
TFRJUExFX05PREVTPXkKQ09ORklHX0hBVkVfTUVNT1JZX1BSRVNFTlQ9eQpDT05GSUdfU1BB
UlNFTUVNX0VYVFJFTUU9eQpDT05GSUdfU1BBUlNFTUVNX1ZNRU1NQVBfRU5BQkxFPXkKQ09O
RklHX1NQQVJTRU1FTV9BTExPQ19NRU1fTUFQX1RPR0VUSEVSPXkKQ09ORklHX1NQQVJTRU1F
TV9WTUVNTUFQPXkKQ09ORklHX0hBVkVfTUVNQkxPQ0s9eQpDT05GSUdfSEFWRV9NRU1CTE9D
S19OT0RFX01BUD15CkNPTkZJR19BUkNIX0RJU0NBUkRfTUVNQkxPQ0s9eQojIENPTkZJR19N
T1ZBQkxFX05PREUgaXMgbm90IHNldAojIENPTkZJR19IQVZFX0JPT1RNRU1fSU5GT19OT0RF
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUVNT1JZX0hPVFBMVUcgaXMgbm90IHNldApDT05GSUdf
UEFHRUZMQUdTX0VYVEVOREVEPXkKQ09ORklHX1NQTElUX1BUTE9DS19DUFVTPTQKQ09ORklH
X0FSQ0hfRU5BQkxFX1NQTElUX1BNRF9QVExPQ0s9eQpDT05GSUdfQ09NUEFDVElPTj15CkNP
TkZJR19NSUdSQVRJT049eQpDT05GSUdfUEhZU19BRERSX1RfNjRCSVQ9eQpDT05GSUdfWk9O
RV9ETUFfRkxBRz0xCkNPTkZJR19CT1VOQ0U9eQpDT05GSUdfTkVFRF9CT1VOQ0VfUE9PTD15
CkNPTkZJR19WSVJUX1RPX0JVUz15CkNPTkZJR19NTVVfTk9USUZJRVI9eQojIENPTkZJR19L
U00gaXMgbm90IHNldApDT05GSUdfREVGQVVMVF9NTUFQX01JTl9BRERSPTQwOTYKQ09ORklH
X0FSQ0hfU1VQUE9SVFNfTUVNT1JZX0ZBSUxVUkU9eQojIENPTkZJR19NRU1PUllfRkFJTFVS
RSBpcyBub3Qgc2V0CkNPTkZJR19UUkFOU1BBUkVOVF9IVUdFUEFHRT15CkNPTkZJR19UUkFO
U1BBUkVOVF9IVUdFUEFHRV9BTFdBWVM9eQojIENPTkZJR19UUkFOU1BBUkVOVF9IVUdFUEFH
RV9NQURWSVNFIGlzIG5vdCBzZXQKQ09ORklHX0NST1NTX01FTU9SWV9BVFRBQ0g9eQojIENP
TkZJR19DTEVBTkNBQ0hFIGlzIG5vdCBzZXQKIyBDT05GSUdfRlJPTlRTV0FQIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQ01BIGlzIG5vdCBzZXQKIyBDT05GSUdfWkJVRCBpcyBub3Qgc2V0CiMg
Q09ORklHX1pTTUFMTE9DIGlzIG5vdCBzZXQKQ09ORklHX1g4Nl9DSEVDS19CSU9TX0NPUlJV
UFRJT049eQpDT05GSUdfWDg2X0JPT1RQQVJBTV9NRU1PUllfQ09SUlVQVElPTl9DSEVDSz15
CkNPTkZJR19YODZfUkVTRVJWRV9MT1c9NjQKQ09ORklHX01UUlI9eQpDT05GSUdfTVRSUl9T
QU5JVElaRVI9eQpDT05GSUdfTVRSUl9TQU5JVElaRVJfRU5BQkxFX0RFRkFVTFQ9MApDT05G
SUdfTVRSUl9TQU5JVElaRVJfU1BBUkVfUkVHX05SX0RFRkFVTFQ9MQpDT05GSUdfWDg2X1BB
VD15CkNPTkZJR19BUkNIX1VTRVNfUEdfVU5DQUNIRUQ9eQpDT05GSUdfQVJDSF9SQU5ET009
eQpDT05GSUdfWDg2X1NNQVA9eQojIENPTkZJR19FRkkgaXMgbm90IHNldApDT05GSUdfU0VD
Q09NUD15CiMgQ09ORklHX0haXzEwMCBpcyBub3Qgc2V0CiMgQ09ORklHX0haXzI1MCBpcyBu
b3Qgc2V0CkNPTkZJR19IWl8zMDA9eQojIENPTkZJR19IWl8xMDAwIGlzIG5vdCBzZXQKQ09O
RklHX0haPTMwMApDT05GSUdfU0NIRURfSFJUSUNLPXkKQ09ORklHX0tFWEVDPXkKQ09ORklH
X0NSQVNIX0RVTVA9eQpDT05GSUdfUEhZU0lDQUxfU1RBUlQ9MHgxMDAwMDAwCkNPTkZJR19S
RUxPQ0FUQUJMRT15CiMgQ09ORklHX1JBTkRPTUlaRV9CQVNFIGlzIG5vdCBzZXQKQ09ORklH
X1BIWVNJQ0FMX0FMSUdOPTB4MTAwMDAwMApDT05GSUdfSE9UUExVR19DUFU9eQojIENPTkZJ
R19CT09UUEFSQU1fSE9UUExVR19DUFUwIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfSE9U
UExVR19DUFUwIGlzIG5vdCBzZXQKIyBDT05GSUdfQ09NUEFUX1ZEU08gaXMgbm90IHNldAoj
IENPTkZJR19DTURMSU5FX0JPT0wgaXMgbm90IHNldApDT05GSUdfQVJDSF9FTkFCTEVfTUVN
T1JZX0hPVFBMVUc9eQpDT05GSUdfVVNFX1BFUkNQVV9OVU1BX05PREVfSUQ9eQoKIwojIFBv
d2VyIG1hbmFnZW1lbnQgYW5kIEFDUEkgb3B0aW9ucwojCiMgQ09ORklHX1NVU1BFTkQgaXMg
bm90IHNldApDT05GSUdfSElCRVJOQVRFX0NBTExCQUNLUz15CiMgQ09ORklHX0hJQkVSTkFU
SU9OIGlzIG5vdCBzZXQKQ09ORklHX1BNX1NMRUVQPXkKQ09ORklHX1BNX1NMRUVQX1NNUD15
CiMgQ09ORklHX1BNX0FVVE9TTEVFUCBpcyBub3Qgc2V0CiMgQ09ORklHX1BNX1dBS0VMT0NL
UyBpcyBub3Qgc2V0CiMgQ09ORklHX1BNX1JVTlRJTUUgaXMgbm90IHNldApDT05GSUdfUE09
eQpDT05GSUdfUE1fREVCVUc9eQojIENPTkZJR19QTV9BRFZBTkNFRF9ERUJVRyBpcyBub3Qg
c2V0CkNPTkZJR19QTV9TTEVFUF9ERUJVRz15CiMgQ09ORklHX1BNX1RSQUNFX1JUQyBpcyBu
b3Qgc2V0CiMgQ09ORklHX1dRX1BPV0VSX0VGRklDSUVOVF9ERUZBVUxUIGlzIG5vdCBzZXQK
Q09ORklHX0FDUEk9eQpDT05GSUdfQUNQSV9QUk9DRlM9eQojIENPTkZJR19BQ1BJX0VDX0RF
QlVHRlMgaXMgbm90IHNldApDT05GSUdfQUNQSV9BQz15CkNPTkZJR19BQ1BJX0JBVFRFUlk9
eQpDT05GSUdfQUNQSV9CVVRUT049eQpDT05GSUdfQUNQSV9WSURFTz15CkNPTkZJR19BQ1BJ
X0ZBTj15CiMgQ09ORklHX0FDUElfRE9DSyBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX1BST0NF
U1NPUj15CkNPTkZJR19BQ1BJX0hPVFBMVUdfQ1BVPXkKQ09ORklHX0FDUElfUFJPQ0VTU09S
X0FHR1JFR0FUT1I9eQpDT05GSUdfQUNQSV9USEVSTUFMPXkKQ09ORklHX0FDUElfTlVNQT15
CkNPTkZJR19BQ1BJX0NVU1RPTV9EU0RUX0ZJTEU9IiIKIyBDT05GSUdfQUNQSV9DVVNUT01f
RFNEVCBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX0lOSVRSRF9UQUJMRV9PVkVSUklERT15CiMg
Q09ORklHX0FDUElfREVCVUcgaXMgbm90IHNldApDT05GSUdfQUNQSV9QQ0lfU0xPVD15CkNP
TkZJR19YODZfUE1fVElNRVI9eQpDT05GSUdfQUNQSV9DT05UQUlORVI9eQojIENPTkZJR19B
Q1BJX1NCUyBpcyBub3Qgc2V0CkNPTkZJR19BQ1BJX0hFRD15CiMgQ09ORklHX0FDUElfQ1VT
VE9NX01FVEhPRCBpcyBub3Qgc2V0CiMgQ09ORklHX0FDUElfQVBFSSBpcyBub3Qgc2V0CiMg
Q09ORklHX0FDUElfRVhUTE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfU0ZJIGlzIG5vdCBzZXQK
CiMKIyBDUFUgRnJlcXVlbmN5IHNjYWxpbmcKIwpDT05GSUdfQ1BVX0ZSRVE9eQpDT05GSUdf
Q1BVX0ZSRVFfR09WX0NPTU1PTj15CiMgQ09ORklHX0NQVV9GUkVRX1NUQVQgaXMgbm90IHNl
dAojIENPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9QRVJGT1JNQU5DRSBpcyBub3Qgc2V0
CkNPTkZJR19DUFVfRlJFUV9ERUZBVUxUX0dPVl9VU0VSU1BBQ0U9eQojIENPTkZJR19DUFVf
RlJFUV9ERUZBVUxUX0dPVl9PTkRFTUFORCBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVV9GUkVR
X0RFRkFVTFRfR09WX0NPTlNFUlZBVElWRSBpcyBub3Qgc2V0CkNPTkZJR19DUFVfRlJFUV9H
T1ZfUEVSRk9STUFOQ0U9eQojIENPTkZJR19DUFVfRlJFUV9HT1ZfUE9XRVJTQVZFIGlzIG5v
dCBzZXQKQ09ORklHX0NQVV9GUkVRX0dPVl9VU0VSU1BBQ0U9eQpDT05GSUdfQ1BVX0ZSRVFf
R09WX09OREVNQU5EPXkKIyBDT05GSUdfQ1BVX0ZSRVFfR09WX0NPTlNFUlZBVElWRSBpcyBu
b3Qgc2V0CgojCiMgeDg2IENQVSBmcmVxdWVuY3kgc2NhbGluZyBkcml2ZXJzCiMKIyBDT05G
SUdfWDg2X0lOVEVMX1BTVEFURSBpcyBub3Qgc2V0CkNPTkZJR19YODZfUENDX0NQVUZSRVE9
eQpDT05GSUdfWDg2X0FDUElfQ1BVRlJFUT15CkNPTkZJR19YODZfQUNQSV9DUFVGUkVRX0NQ
Qj15CiMgQ09ORklHX1g4Nl9QT1dFUk5PV19LOCBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9B
TURfRlJFUV9TRU5TSVRJVklUWSBpcyBub3Qgc2V0CiMgQ09ORklHX1g4Nl9TUEVFRFNURVBf
Q0VOVFJJTk8gaXMgbm90IHNldAojIENPTkZJR19YODZfUDRfQ0xPQ0tNT0QgaXMgbm90IHNl
dAoKIwojIHNoYXJlZCBvcHRpb25zCiMKIyBDT05GSUdfWDg2X1NQRUVEU1RFUF9MSUIgaXMg
bm90IHNldAoKIwojIENQVSBJZGxlCiMKQ09ORklHX0NQVV9JRExFPXkKIyBDT05GSUdfQ1BV
X0lETEVfTVVMVElQTEVfRFJJVkVSUyBpcyBub3Qgc2V0CkNPTkZJR19DUFVfSURMRV9HT1Zf
TEFEREVSPXkKQ09ORklHX0NQVV9JRExFX0dPVl9NRU5VPXkKIyBDT05GSUdfQVJDSF9ORUVE
U19DUFVfSURMRV9DT1VQTEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5URUxfSURMRSBpcyBu
b3Qgc2V0CgojCiMgTWVtb3J5IHBvd2VyIHNhdmluZ3MKIwojIENPTkZJR19JNzMwMF9JRExF
IGlzIG5vdCBzZXQKCiMKIyBCdXMgb3B0aW9ucyAoUENJIGV0Yy4pCiMKQ09ORklHX1BDST15
CkNPTkZJR19QQ0lfRElSRUNUPXkKQ09ORklHX1BDSV9NTUNPTkZJRz15CkNPTkZJR19QQ0lf
WEVOPXkKQ09ORklHX1BDSV9ET01BSU5TPXkKQ09ORklHX1BDSUVQT1JUQlVTPXkKQ09ORklH
X0hPVFBMVUdfUENJX1BDSUU9eQpDT05GSUdfUENJRUFFUj15CkNPTkZJR19QQ0lFX0VDUkM9
eQpDT05GSUdfUENJRUFFUl9JTkpFQ1Q9eQpDT05GSUdfUENJRUFTUE09eQpDT05GSUdfUENJ
RUFTUE1fREVCVUc9eQpDT05GSUdfUENJRUFTUE1fREVGQVVMVD15CiMgQ09ORklHX1BDSUVB
U1BNX1BPV0VSU0FWRSBpcyBub3Qgc2V0CiMgQ09ORklHX1BDSUVBU1BNX1BFUkZPUk1BTkNF
IGlzIG5vdCBzZXQKQ09ORklHX1BDSV9NU0k9eQpDT05GSUdfUENJX0RFQlVHPXkKQ09ORklH
X1BDSV9SRUFMTE9DX0VOQUJMRV9BVVRPPXkKQ09ORklHX1BDSV9TVFVCPXkKQ09ORklHX1hF
Tl9QQ0lERVZfRlJPTlRFTkQ9eQpDT05GSUdfSFRfSVJRPXkKQ09ORklHX1BDSV9BVFM9eQpD
T05GSUdfUENJX0lPVj15CkNPTkZJR19QQ0lfUFJJPXkKQ09ORklHX1BDSV9QQVNJRD15CkNP
TkZJR19QQ0lfSU9BUElDPXkKQ09ORklHX1BDSV9MQUJFTD15CgojCiMgUENJIGhvc3QgY29u
dHJvbGxlciBkcml2ZXJzCiMKQ09ORklHX0lTQV9ETUFfQVBJPXkKQ09ORklHX0FNRF9OQj15
CiMgQ09ORklHX1BDQ0FSRCBpcyBub3Qgc2V0CkNPTkZJR19IT1RQTFVHX1BDST15CkNPTkZJ
R19IT1RQTFVHX1BDSV9BQ1BJPXkKQ09ORklHX0hPVFBMVUdfUENJX0FDUElfSUJNPXkKQ09O
RklHX0hPVFBMVUdfUENJX0NQQ0k9eQojIENPTkZJR19IT1RQTFVHX1BDSV9DUENJX1pUNTU1
MCBpcyBub3Qgc2V0CkNPTkZJR19IT1RQTFVHX1BDSV9DUENJX0dFTkVSSUM9eQpDT05GSUdf
SE9UUExVR19QQ0lfU0hQQz15CiMgQ09ORklHX1JBUElESU8gaXMgbm90IHNldAojIENPTkZJ
R19YODZfU1lTRkIgaXMgbm90IHNldAoKIwojIEV4ZWN1dGFibGUgZmlsZSBmb3JtYXRzIC8g
RW11bGF0aW9ucwojCkNPTkZJR19CSU5GTVRfRUxGPXkKQ09ORklHX0NPTVBBVF9CSU5GTVRf
RUxGPXkKQ09ORklHX0FSQ0hfQklORk1UX0VMRl9SQU5ET01JWkVfUElFPXkKQ09ORklHX0NP
UkVfRFVNUF9ERUZBVUxUX0VMRl9IRUFERVJTPXkKQ09ORklHX0JJTkZNVF9TQ1JJUFQ9eQoj
IENPTkZJR19IQVZFX0FPVVQgaXMgbm90IHNldApDT05GSUdfQklORk1UX01JU0M9eQpDT05G
SUdfQ09SRURVTVA9eQpDT05GSUdfSUEzMl9FTVVMQVRJT049eQojIENPTkZJR19JQTMyX0FP
VVQgaXMgbm90IHNldAojIENPTkZJR19YODZfWDMyIGlzIG5vdCBzZXQKQ09ORklHX0NPTVBB
VD15CkNPTkZJR19DT01QQVRfRk9SX1U2NF9BTElHTk1FTlQ9eQpDT05GSUdfU1lTVklQQ19D
T01QQVQ9eQpDT05GSUdfS0VZU19DT01QQVQ9eQpDT05GSUdfWDg2X0RFVl9ETUFfT1BTPXkK
Q09ORklHX05FVD15CgojCiMgTmV0d29ya2luZyBvcHRpb25zCiMKQ09ORklHX1BBQ0tFVD15
CiMgQ09ORklHX1BBQ0tFVF9ESUFHIGlzIG5vdCBzZXQKQ09ORklHX1VOSVg9eQojIENPTkZJ
R19VTklYX0RJQUcgaXMgbm90IHNldAojIENPTkZJR19YRlJNX1VTRVIgaXMgbm90IHNldAoj
IENPTkZJR19ORVRfS0VZIGlzIG5vdCBzZXQKQ09ORklHX0lORVQ9eQpDT05GSUdfSVBfTVVM
VElDQVNUPXkKQ09ORklHX0lQX0FEVkFOQ0VEX1JPVVRFUj15CiMgQ09ORklHX0lQX0ZJQl9U
UklFX1NUQVRTIGlzIG5vdCBzZXQKQ09ORklHX0lQX01VTFRJUExFX1RBQkxFUz15CkNPTkZJ
R19JUF9ST1VURV9NVUxUSVBBVEg9eQpDT05GSUdfSVBfUk9VVEVfVkVSQk9TRT15CkNPTkZJ
R19JUF9ST1VURV9DTEFTU0lEPXkKQ09ORklHX0lQX1BOUD15CkNPTkZJR19JUF9QTlBfREhD
UD15CkNPTkZJR19JUF9QTlBfQk9PVFA9eQpDT05GSUdfSVBfUE5QX1JBUlA9eQojIENPTkZJ
R19ORVRfSVBJUCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9JUEdSRV9ERU1VWCBpcyBub3Qg
c2V0CiMgQ09ORklHX05FVF9JUF9UVU5ORUwgaXMgbm90IHNldAojIENPTkZJR19JUF9NUk9V
VEUgaXMgbm90IHNldApDT05GSUdfU1lOX0NPT0tJRVM9eQojIENPTkZJR19JTkVUX0FIIGlz
IG5vdCBzZXQKIyBDT05GSUdfSU5FVF9FU1AgaXMgbm90IHNldAojIENPTkZJR19JTkVUX0lQ
Q09NUCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9UVU5ORUwgaXMgbm90IHNldAoj
IENPTkZJR19JTkVUX1RVTk5FTCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9NT0RF
X1RSQU5TUE9SVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9NT0RFX1RVTk5FTCBp
cyBub3Qgc2V0CiMgQ09ORklHX0lORVRfWEZSTV9NT0RFX0JFRVQgaXMgbm90IHNldApDT05G
SUdfSU5FVF9MUk89eQojIENPTkZJR19JTkVUX0RJQUcgaXMgbm90IHNldApDT05GSUdfVENQ
X0NPTkdfQURWQU5DRUQ9eQojIENPTkZJR19UQ1BfQ09OR19CSUMgaXMgbm90IHNldApDT05G
SUdfVENQX0NPTkdfQ1VCSUM9eQojIENPTkZJR19UQ1BfQ09OR19XRVNUV09PRCBpcyBub3Qg
c2V0CiMgQ09ORklHX1RDUF9DT05HX0hUQ1AgaXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09O
R19IU1RDUCBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX0hZQkxBIGlzIG5vdCBzZXQK
IyBDT05GSUdfVENQX0NPTkdfVkVHQVMgaXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09OR19T
Q0FMQUJMRSBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX0xQIGlzIG5vdCBzZXQKIyBD
T05GSUdfVENQX0NPTkdfVkVOTyBpcyBub3Qgc2V0CiMgQ09ORklHX1RDUF9DT05HX1lFQUgg
aXMgbm90IHNldAojIENPTkZJR19UQ1BfQ09OR19JTExJTk9JUyBpcyBub3Qgc2V0CkNPTkZJ
R19ERUZBVUxUX0NVQklDPXkKIyBDT05GSUdfREVGQVVMVF9SRU5PIGlzIG5vdCBzZXQKQ09O
RklHX0RFRkFVTFRfVENQX0NPTkc9ImN1YmljIgojIENPTkZJR19UQ1BfTUQ1U0lHIGlzIG5v
dCBzZXQKIyBDT05GSUdfSVBWNiBpcyBub3Qgc2V0CkNPTkZJR19ORVRXT1JLX1NFQ01BUks9
eQojIENPTkZJR19ORVRXT1JLX1BIWV9USU1FU1RBTVBJTkcgaXMgbm90IHNldApDT05GSUdf
TkVURklMVEVSPXkKIyBDT05GSUdfTkVURklMVEVSX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklH
X05FVEZJTFRFUl9BRFZBTkNFRD15CkNPTkZJR19CUklER0VfTkVURklMVEVSPXkKCiMKIyBD
b3JlIE5ldGZpbHRlciBDb25maWd1cmF0aW9uCiMKQ09ORklHX05FVEZJTFRFUl9ORVRMSU5L
PXkKQ09ORklHX05FVEZJTFRFUl9ORVRMSU5LX0FDQ1Q9eQpDT05GSUdfTkVURklMVEVSX05F
VExJTktfUVVFVUU9eQpDT05GSUdfTkVURklMVEVSX05FVExJTktfTE9HPXkKQ09ORklHX05G
X0NPTk5UUkFDSz15CkNPTkZJR19ORl9DT05OVFJBQ0tfTUFSSz15CkNPTkZJR19ORl9DT05O
VFJBQ0tfU0VDTUFSSz15CkNPTkZJR19ORl9DT05OVFJBQ0tfUFJPQ0ZTPXkKQ09ORklHX05G
X0NPTk5UUkFDS19FVkVOVFM9eQojIENPTkZJR19ORl9DT05OVFJBQ0tfVElNRU9VVCBpcyBu
b3Qgc2V0CkNPTkZJR19ORl9DT05OVFJBQ0tfVElNRVNUQU1QPXkKIyBDT05GSUdfTkZfQ1Rf
UFJPVE9fRENDUCBpcyBub3Qgc2V0CkNPTkZJR19ORl9DVF9QUk9UT19HUkU9eQojIENPTkZJ
R19ORl9DVF9QUk9UT19TQ1RQIGlzIG5vdCBzZXQKIyBDT05GSUdfTkZfQ1RfUFJPVE9fVURQ
TElURSBpcyBub3Qgc2V0CiMgQ09ORklHX05GX0NPTk5UUkFDS19BTUFOREEgaXMgbm90IHNl
dApDT05GSUdfTkZfQ09OTlRSQUNLX0ZUUD15CkNPTkZJR19ORl9DT05OVFJBQ0tfSDMyMz15
CkNPTkZJR19ORl9DT05OVFJBQ0tfSVJDPXkKIyBDT05GSUdfTkZfQ09OTlRSQUNLX05FVEJJ
T1NfTlMgaXMgbm90IHNldAojIENPTkZJR19ORl9DT05OVFJBQ0tfU05NUCBpcyBub3Qgc2V0
CkNPTkZJR19ORl9DT05OVFJBQ0tfUFBUUD15CiMgQ09ORklHX05GX0NPTk5UUkFDS19TQU5F
IGlzIG5vdCBzZXQKQ09ORklHX05GX0NPTk5UUkFDS19TSVA9eQojIENPTkZJR19ORl9DT05O
VFJBQ0tfVEZUUCBpcyBub3Qgc2V0CkNPTkZJR19ORl9DVF9ORVRMSU5LPXkKIyBDT05GSUdf
TkZfQ1RfTkVUTElOS19USU1FT1VUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVURklMVEVSX05F
VExJTktfUVVFVUVfQ1QgaXMgbm90IHNldApDT05GSUdfTkZfTkFUPXkKQ09ORklHX05GX05B
VF9ORUVERUQ9eQojIENPTkZJR19ORl9OQVRfQU1BTkRBIGlzIG5vdCBzZXQKQ09ORklHX05G
X05BVF9GVFA9eQpDT05GSUdfTkZfTkFUX0lSQz15CkNPTkZJR19ORl9OQVRfU0lQPXkKIyBD
T05GSUdfTkZfTkFUX1RGVFAgaXMgbm90IHNldAojIENPTkZJR19ORl9UQUJMRVMgaXMgbm90
IHNldApDT05GSUdfTkVURklMVEVSX1hUQUJMRVM9eQoKIwojIFh0YWJsZXMgY29tYmluZWQg
bW9kdWxlcwojCkNPTkZJR19ORVRGSUxURVJfWFRfTUFSSz15CkNPTkZJR19ORVRGSUxURVJf
WFRfQ09OTk1BUks9eQojIENPTkZJR19ORVRGSUxURVJfWFRfU0VUIGlzIG5vdCBzZXQKCiMK
IyBYdGFibGVzIHRhcmdldHMKIwpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9BVURJVD15
CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0NIRUNLU1VNPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfQ0xBU1NJRlk9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9DT05O
TUFSSz15CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX0NPTk5TRUNNQVJLPXkKIyBDT05G
SUdfTkVURklMVEVSX1hUX1RBUkdFVF9DVCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJf
WFRfVEFSR0VUX0RTQ1A9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ITD15CiMgQ09O
RklHX05FVEZJTFRFUl9YVF9UQVJHRVRfSE1BUksgaXMgbm90IHNldApDT05GSUdfTkVURklM
VEVSX1hUX1RBUkdFVF9JRExFVElNRVI9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9M
T0c9eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9NQVJLPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfTkVUTUFQPXkKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfTkZMT0c9
eQpDT05GSUdfTkVURklMVEVSX1hUX1RBUkdFVF9ORlFVRVVFPXkKIyBDT05GSUdfTkVURklM
VEVSX1hUX1RBUkdFVF9OT1RSQUNLIGlzIG5vdCBzZXQKQ09ORklHX05FVEZJTFRFUl9YVF9U
QVJHRVRfUkFURUVTVD15CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX1JFRElSRUNUPXkK
Q09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfVEVFPXkKIyBDT05GSUdfTkVURklMVEVSX1hU
X1RBUkdFVF9UUFJPWFkgaXMgbm90IHNldAojIENPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VU
X1RSQUNFIGlzIG5vdCBzZXQKQ09ORklHX05FVEZJTFRFUl9YVF9UQVJHRVRfU0VDTUFSSz15
CkNPTkZJR19ORVRGSUxURVJfWFRfVEFSR0VUX1RDUE1TUz15CiMgQ09ORklHX05FVEZJTFRF
Ul9YVF9UQVJHRVRfVENQT1BUU1RSSVAgaXMgbm90IHNldAoKIwojIFh0YWJsZXMgbWF0Y2hl
cwojCkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQUREUlRZUEU9eQojIENPTkZJR19ORVRG
SUxURVJfWFRfTUFUQ0hfQlBGIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVURklMVEVSX1hUX01B
VENIX0NHUk9VUCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ0xVU1RF
Uj15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09NTUVOVD15CkNPTkZJR19ORVRGSUxU
RVJfWFRfTUFUQ0hfQ09OTkJZVEVTPXkKIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0NP
Tk5MQUJFTCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfQ09OTkxJTUlU
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DT05OTUFSSz15CkNPTkZJR19ORVRGSUxU
RVJfWFRfTUFUQ0hfQ09OTlRSQUNLPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9DUFU9
eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0RDQ1A9eQpDT05GSUdfTkVURklMVEVSX1hU
X01BVENIX0RFVkdST1VQPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9EU0NQPXkKQ09O
RklHX05FVEZJTFRFUl9YVF9NQVRDSF9FQ049eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENI
X0VTUD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfSEFTSExJTUlUPXkKQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9IRUxQRVI9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0hM
PXkKIyBDT05GSUdfTkVURklMVEVSX1hUX01BVENIX0lQQ09NUCBpcyBub3Qgc2V0CkNPTkZJ
R19ORVRGSUxURVJfWFRfTUFUQ0hfSVBSQU5HRT15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFU
Q0hfSVBWUz15CiMgQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9MMlRQIGlzIG5vdCBzZXQK
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9MRU5HVEg9eQpDT05GSUdfTkVURklMVEVSX1hU
X01BVENIX0xJTUlUPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9NQUM9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX01BUks9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX01V
TFRJUE9SVD15CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfTkZBQ0NUPXkKQ09ORklHX05F
VEZJTFRFUl9YVF9NQVRDSF9PU0Y9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX09XTkVS
PXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9QSFlTREVWPXkKQ09ORklHX05FVEZJTFRF
Ul9YVF9NQVRDSF9QS1RUWVBFPXkKQ09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9RVU9UQT15
CkNPTkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfUkFURUVTVD15CkNPTkZJR19ORVRGSUxURVJf
WFRfTUFUQ0hfUkVBTE09eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1JFQ0VOVD15CiMg
Q09ORklHX05FVEZJTFRFUl9YVF9NQVRDSF9TQ1RQIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVU
RklMVEVSX1hUX01BVENIX1NPQ0tFVCBpcyBub3Qgc2V0CkNPTkZJR19ORVRGSUxURVJfWFRf
TUFUQ0hfU1RBVEU9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1NUQVRJU1RJQz15CkNP
TkZJR19ORVRGSUxURVJfWFRfTUFUQ0hfU1RSSU5HPXkKQ09ORklHX05FVEZJTFRFUl9YVF9N
QVRDSF9UQ1BNU1M9eQpDT05GSUdfTkVURklMVEVSX1hUX01BVENIX1RJTUU9eQpDT05GSUdf
TkVURklMVEVSX1hUX01BVENIX1UzMj15CkNPTkZJR19JUF9TRVQ9eQpDT05GSUdfSVBfU0VU
X01BWD0yNTYKQ09ORklHX0lQX1NFVF9CSVRNQVBfSVA9eQpDT05GSUdfSVBfU0VUX0JJVE1B
UF9JUE1BQz15CkNPTkZJR19JUF9TRVRfQklUTUFQX1BPUlQ9eQpDT05GSUdfSVBfU0VUX0hB
U0hfSVA9eQpDT05GSUdfSVBfU0VUX0hBU0hfSVBQT1JUPXkKQ09ORklHX0lQX1NFVF9IQVNI
X0lQUE9SVElQPXkKQ09ORklHX0lQX1NFVF9IQVNIX0lQUE9SVE5FVD15CkNPTkZJR19JUF9T
RVRfSEFTSF9ORVRQT1JUTkVUPXkKQ09ORklHX0lQX1NFVF9IQVNIX05FVD15CkNPTkZJR19J
UF9TRVRfSEFTSF9ORVRORVQ9eQpDT05GSUdfSVBfU0VUX0hBU0hfTkVUUE9SVD15CkNPTkZJ
R19JUF9TRVRfSEFTSF9ORVRJRkFDRT15CkNPTkZJR19JUF9TRVRfTElTVF9TRVQ9eQpDT05G
SUdfSVBfVlM9eQojIENPTkZJR19JUF9WU19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19JUF9W
U19UQUJfQklUUz0xMgoKIwojIElQVlMgdHJhbnNwb3J0IHByb3RvY29sIGxvYWQgYmFsYW5j
aW5nIHN1cHBvcnQKIwojIENPTkZJR19JUF9WU19QUk9UT19UQ1AgaXMgbm90IHNldAojIENP
TkZJR19JUF9WU19QUk9UT19VRFAgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19QUk9UT19B
SF9FU1AgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19QUk9UT19FU1AgaXMgbm90IHNldAoj
IENPTkZJR19JUF9WU19QUk9UT19BSCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX1BST1RP
X1NDVFAgaXMgbm90IHNldAoKIwojIElQVlMgc2NoZWR1bGVyCiMKIyBDT05GSUdfSVBfVlNf
UlIgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19XUlIgaXMgbm90IHNldAojIENPTkZJR19J
UF9WU19MQyBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX1ZTX1dMQyBpcyBub3Qgc2V0CiMgQ09O
RklHX0lQX1ZTX0xCTEMgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19MQkxDUiBpcyBub3Qg
c2V0CiMgQ09ORklHX0lQX1ZTX0RIIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfVlNfU0ggaXMg
bm90IHNldAojIENPTkZJR19JUF9WU19TRUQgaXMgbm90IHNldAojIENPTkZJR19JUF9WU19O
USBpcyBub3Qgc2V0CgojCiMgSVBWUyBTSCBzY2hlZHVsZXIKIwpDT05GSUdfSVBfVlNfU0hf
VEFCX0JJVFM9OAoKIwojIElQVlMgYXBwbGljYXRpb24gaGVscGVyCiMKQ09ORklHX0lQX1ZT
X05GQ1Q9eQoKIwojIElQOiBOZXRmaWx0ZXIgQ29uZmlndXJhdGlvbgojCkNPTkZJR19ORl9E
RUZSQUdfSVBWND15CkNPTkZJR19ORl9DT05OVFJBQ0tfSVBWND15CkNPTkZJR19ORl9DT05O
VFJBQ0tfUFJPQ19DT01QQVQ9eQpDT05GSUdfSVBfTkZfSVBUQUJMRVM9eQpDT05GSUdfSVBf
TkZfTUFUQ0hfQUg9eQpDT05GSUdfSVBfTkZfTUFUQ0hfRUNOPXkKIyBDT05GSUdfSVBfTkZf
TUFUQ0hfUlBGSUxURVIgaXMgbm90IHNldApDT05GSUdfSVBfTkZfTUFUQ0hfVFRMPXkKQ09O
RklHX0lQX05GX0ZJTFRFUj15CkNPTkZJR19JUF9ORl9UQVJHRVRfUkVKRUNUPXkKIyBDT05G
SUdfSVBfTkZfVEFSR0VUX1NZTlBST1hZIGlzIG5vdCBzZXQKQ09ORklHX0lQX05GX1RBUkdF
VF9VTE9HPXkKQ09ORklHX05GX05BVF9JUFY0PXkKQ09ORklHX0lQX05GX1RBUkdFVF9NQVNR
VUVSQURFPXkKQ09ORklHX0lQX05GX1RBUkdFVF9ORVRNQVA9eQpDT05GSUdfSVBfTkZfVEFS
R0VUX1JFRElSRUNUPXkKQ09ORklHX05GX05BVF9QUk9UT19HUkU9eQpDT05GSUdfTkZfTkFU
X1BQVFA9eQpDT05GSUdfTkZfTkFUX0gzMjM9eQpDT05GSUdfSVBfTkZfTUFOR0xFPXkKIyBD
T05GSUdfSVBfTkZfVEFSR0VUX0NMVVNURVJJUCBpcyBub3Qgc2V0CiMgQ09ORklHX0lQX05G
X1RBUkdFVF9FQ04gaXMgbm90IHNldAojIENPTkZJR19JUF9ORl9UQVJHRVRfVFRMIGlzIG5v
dCBzZXQKQ09ORklHX0lQX05GX1JBVz15CiMgQ09ORklHX0lQX05GX0FSUFRBQkxFUyBpcyBu
b3Qgc2V0CkNPTkZJR19CUklER0VfTkZfRUJUQUJMRVM9eQojIENPTkZJR19CUklER0VfRUJU
X0JST1VURSBpcyBub3Qgc2V0CiMgQ09ORklHX0JSSURHRV9FQlRfVF9GSUxURVIgaXMgbm90
IHNldAojIENPTkZJR19CUklER0VfRUJUX1RfTkFUIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJ
REdFX0VCVF84MDJfMyBpcyBub3Qgc2V0CiMgQ09ORklHX0JSSURHRV9FQlRfQU1PTkcgaXMg
bm90IHNldAojIENPTkZJR19CUklER0VfRUJUX0FSUCBpcyBub3Qgc2V0CiMgQ09ORklHX0JS
SURHRV9FQlRfSVAgaXMgbm90IHNldAojIENPTkZJR19CUklER0VfRUJUX0xJTUlUIGlzIG5v
dCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9NQVJLIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJ
REdFX0VCVF9QS1RUWVBFIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9TVFAgaXMg
bm90IHNldAojIENPTkZJR19CUklER0VfRUJUX1ZMQU4gaXMgbm90IHNldAojIENPTkZJR19C
UklER0VfRUJUX0FSUFJFUExZIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9ETkFU
IGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9NQVJLX1QgaXMgbm90IHNldAojIENP
TkZJR19CUklER0VfRUJUX1JFRElSRUNUIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VC
VF9TTkFUIGlzIG5vdCBzZXQKIyBDT05GSUdfQlJJREdFX0VCVF9MT0cgaXMgbm90IHNldAoj
IENPTkZJR19CUklER0VfRUJUX1VMT0cgaXMgbm90IHNldAojIENPTkZJR19CUklER0VfRUJU
X05GTE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBfRENDUCBpcyBub3Qgc2V0CiMgQ09ORklH
X0lQX1NDVFAgaXMgbm90IHNldAojIENPTkZJR19SRFMgaXMgbm90IHNldAojIENPTkZJR19U
SVBDIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRNIGlzIG5vdCBzZXQKIyBDT05GSUdfTDJUUCBp
cyBub3Qgc2V0CkNPTkZJR19TVFA9eQpDT05GSUdfQlJJREdFPXkKQ09ORklHX0JSSURHRV9J
R01QX1NOT09QSU5HPXkKQ09ORklHX0hBVkVfTkVUX0RTQT15CiMgQ09ORklHX1ZMQU5fODAy
MVEgaXMgbm90IHNldAojIENPTkZJR19ERUNORVQgaXMgbm90IHNldApDT05GSUdfTExDPXkK
IyBDT05GSUdfTExDMiBpcyBub3Qgc2V0CiMgQ09ORklHX0lQWCBpcyBub3Qgc2V0CiMgQ09O
RklHX0FUQUxLIGlzIG5vdCBzZXQKIyBDT05GSUdfWDI1IGlzIG5vdCBzZXQKIyBDT05GSUdf
TEFQQiBpcyBub3Qgc2V0CiMgQ09ORklHX1BIT05FVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lF
RUU4MDIxNTQgaXMgbm90IHNldApDT05GSUdfNkxPV1BBTl9JUEhDPXkKQ09ORklHX05FVF9T
Q0hFRD15CgojCiMgUXVldWVpbmcvU2NoZWR1bGluZwojCiMgQ09ORklHX05FVF9TQ0hfQ0JR
IGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9IVEIgaXMgbm90IHNldAojIENPTkZJR19O
RVRfU0NIX0hGU0MgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX1BSSU8gaXMgbm90IHNl
dAojIENPTkZJR19ORVRfU0NIX01VTFRJUSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hf
UkVEIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9TRkIgaXMgbm90IHNldAojIENPTkZJ
R19ORVRfU0NIX1NGUSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9TQ0hfVEVRTCBpcyBub3Qg
c2V0CiMgQ09ORklHX05FVF9TQ0hfVEJGIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9H
UkVEIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9EU01BUksgaXMgbm90IHNldAojIENP
TkZJR19ORVRfU0NIX05FVEVNIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9EUlIgaXMg
bm90IHNldAojIENPTkZJR19ORVRfU0NIX01RUFJJTyBpcyBub3Qgc2V0CiMgQ09ORklHX05F
VF9TQ0hfQ0hPS0UgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX1FGUSBpcyBub3Qgc2V0
CiMgQ09ORklHX05FVF9TQ0hfQ09ERUwgaXMgbm90IHNldAojIENPTkZJR19ORVRfU0NIX0ZR
X0NPREVMIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9GUSBpcyBub3Qgc2V0CiMgQ09O
RklHX05FVF9TQ0hfSEhGIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NDSF9QSUUgaXMgbm90
IHNldAojIENPTkZJR19ORVRfU0NIX0lOR1JFU1MgaXMgbm90IHNldAojIENPTkZJR19ORVRf
U0NIX1BMVUcgaXMgbm90IHNldAoKIwojIENsYXNzaWZpY2F0aW9uCiMKQ09ORklHX05FVF9D
TFM9eQojIENPTkZJR19ORVRfQ0xTX0JBU0lDIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NM
U19UQ0lOREVYIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19ST1VURTQgaXMgbm90IHNl
dAojIENPTkZJR19ORVRfQ0xTX0ZXIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19VMzIg
aXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xTX1JTVlAgaXMgbm90IHNldAojIENPTkZJR19O
RVRfQ0xTX1JTVlA2IGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0NMU19GTE9XIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX0NMU19DR1JPVVAgaXMgbm90IHNldAojIENPTkZJR19ORVRfQ0xT
X0JQRiBpcyBub3Qgc2V0CkNPTkZJR19ORVRfRU1BVENIPXkKQ09ORklHX05FVF9FTUFUQ0hf
U1RBQ0s9MzIKIyBDT05GSUdfTkVUX0VNQVRDSF9DTVAgaXMgbm90IHNldAojIENPTkZJR19O
RVRfRU1BVENIX05CWVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0VNQVRDSF9VMzIgaXMg
bm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX01FVEEgaXMgbm90IHNldAojIENPTkZJR19O
RVRfRU1BVENIX1RFWFQgaXMgbm90IHNldAojIENPTkZJR19ORVRfRU1BVENIX0lQU0VUIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9DTFNfQUNUPXkKIyBDT05GSUdfTkVUX0FDVF9QT0xJQ0Ug
aXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX0dBQ1QgaXMgbm90IHNldAojIENPTkZJR19O
RVRfQUNUX01JUlJFRCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9BQ1RfSVBUIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX0FDVF9OQVQgaXMgbm90IHNldAojIENPTkZJR19ORVRfQUNUX1BF
RElUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0FDVF9TSU1QIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX0FDVF9TS0JFRElUIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0FDVF9DU1VNIGlz
IG5vdCBzZXQKQ09ORklHX05FVF9TQ0hfRklGTz15CiMgQ09ORklHX0RDQiBpcyBub3Qgc2V0
CiMgQ09ORklHX0ROU19SRVNPTFZFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0JBVE1BTl9BRFYg
aXMgbm90IHNldAojIENPTkZJR19PUEVOVlNXSVRDSCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZT
T0NLRVRTIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUTElOS19NTUFQIGlzIG5vdCBzZXQKIyBD
T05GSUdfTkVUTElOS19ESUFHIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX01QTFNfR1NPIGlz
IG5vdCBzZXQKIyBDT05GSUdfSFNSIGlzIG5vdCBzZXQKQ09ORklHX1JQUz15CkNPTkZJR19S
RlNfQUNDRUw9eQpDT05GSUdfWFBTPXkKIyBDT05GSUdfQ0dST1VQX05FVF9QUklPIGlzIG5v
dCBzZXQKIyBDT05GSUdfQ0dST1VQX05FVF9DTEFTU0lEIGlzIG5vdCBzZXQKQ09ORklHX05F
VF9SWF9CVVNZX1BPTEw9eQpDT05GSUdfQlFMPXkKIyBDT05GSUdfQlBGX0pJVCBpcyBub3Qg
c2V0CkNPTkZJR19ORVRfRkxPV19MSU1JVD15CgojCiMgTmV0d29yayB0ZXN0aW5nCiMKIyBD
T05GSUdfTkVUX1BLVEdFTiBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9EUk9QX01PTklUT1Ig
aXMgbm90IHNldAojIENPTkZJR19IQU1SQURJTyBpcyBub3Qgc2V0CiMgQ09ORklHX0NBTiBp
cyBub3Qgc2V0CiMgQ09ORklHX0lSREEgaXMgbm90IHNldApDT05GSUdfQlQ9eQpDT05GSUdf
QlRfUkZDT01NPXkKQ09ORklHX0JUX1JGQ09NTV9UVFk9eQpDT05GSUdfQlRfQk5FUD15CkNP
TkZJR19CVF9CTkVQX01DX0ZJTFRFUj15CkNPTkZJR19CVF9CTkVQX1BST1RPX0ZJTFRFUj15
CkNPTkZJR19CVF9ISURQPXkKCiMKIyBCbHVldG9vdGggZGV2aWNlIGRyaXZlcnMKIwpDT05G
SUdfQlRfSENJQlRVU0I9eQpDT05GSUdfQlRfSENJVUFSVD15CkNPTkZJR19CVF9IQ0lVQVJU
X0g0PXkKQ09ORklHX0JUX0hDSVVBUlRfQkNTUD15CkNPTkZJR19CVF9IQ0lVQVJUX0FUSDNL
PXkKQ09ORklHX0JUX0hDSVVBUlRfTEw9eQpDT05GSUdfQlRfSENJVUFSVF8zV0lSRT15CkNP
TkZJR19CVF9IQ0lCQ00yMDNYPXkKQ09ORklHX0JUX0hDSUJQQTEwWD15CkNPTkZJR19CVF9I
Q0lCRlVTQj15CkNPTkZJR19CVF9IQ0lWSENJPXkKQ09ORklHX0JUX01SVkw9eQpDT05GSUdf
QlRfQVRIM0s9eQojIENPTkZJR19BRl9SWFJQQyBpcyBub3Qgc2V0CkNPTkZJR19GSUJfUlVM
RVM9eQojIENPTkZJR19XSVJFTEVTUyBpcyBub3Qgc2V0CiMgQ09ORklHX1dJTUFYIGlzIG5v
dCBzZXQKIyBDT05GSUdfUkZLSUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUXzlQIGlzIG5v
dCBzZXQKIyBDT05GSUdfQ0FJRiBpcyBub3Qgc2V0CkNPTkZJR19DRVBIX0xJQj15CiMgQ09O
RklHX0NFUEhfTElCX1BSRVRUWURFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0VQSF9MSUJf
VVNFX0ROU19SRVNPTFZFUiBpcyBub3Qgc2V0CiMgQ09ORklHX05GQyBpcyBub3Qgc2V0CkNP
TkZJR19IQVZFX0JQRl9KSVQ9eQoKIwojIERldmljZSBEcml2ZXJzCiMKCiMKIyBHZW5lcmlj
IERyaXZlciBPcHRpb25zCiMKQ09ORklHX1VFVkVOVF9IRUxQRVJfUEFUSD0iL3NiaW4vaG90
cGx1ZyIKQ09ORklHX0RFVlRNUEZTPXkKQ09ORklHX0RFVlRNUEZTX01PVU5UPXkKIyBDT05G
SUdfU1RBTkRBTE9ORSBpcyBub3Qgc2V0CiMgQ09ORklHX1BSRVZFTlRfRklSTVdBUkVfQlVJ
TEQgaXMgbm90IHNldApDT05GSUdfRldfTE9BREVSPXkKQ09ORklHX0ZJUk1XQVJFX0lOX0tF
Uk5FTD15CkNPTkZJR19FWFRSQV9GSVJNV0FSRT0iIgpDT05GSUdfRldfTE9BREVSX1VTRVJf
SEVMUEVSPXkKIyBDT05GSUdfREVCVUdfRFJJVkVSIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVH
X0RFVlJFUz15CkNPTkZJR19TWVNfSFlQRVJWSVNPUj15CiMgQ09ORklHX0dFTkVSSUNfQ1BV
X0RFVklDRVMgaXMgbm90IHNldApDT05GSUdfRE1BX1NIQVJFRF9CVUZGRVI9eQoKIwojIEJ1
cyBkZXZpY2VzCiMKQ09ORklHX0NPTk5FQ1RPUj15CkNPTkZJR19QUk9DX0VWRU5UUz15CiMg
Q09ORklHX01URCBpcyBub3Qgc2V0CiMgQ09ORklHX1BBUlBPUlQgaXMgbm90IHNldApDT05G
SUdfQVJDSF9NSUdIVF9IQVZFX1BDX1BBUlBPUlQ9eQpDT05GSUdfUE5QPXkKQ09ORklHX1BO
UF9ERUJVR19NRVNTQUdFUz15CgojCiMgUHJvdG9jb2xzCiMKQ09ORklHX1BOUEFDUEk9eQpD
T05GSUdfQkxLX0RFVj15CiMgQ09ORklHX0JMS19ERVZfTlVMTF9CTEsgaXMgbm90IHNldAoj
IENPTkZJR19CTEtfREVWX0ZEIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9QQ0lFU1NE
X01USVAzMlhYIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0NQUV9DSVNTX0RBIGlzIG5vdCBz
ZXQKIyBDT05GSUdfQkxLX0RFVl9EQUM5NjAgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVW
X1VNRU0gaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX0NPV19DT01NT04gaXMgbm90IHNl
dApDT05GSUdfQkxLX0RFVl9MT09QPXkKQ09ORklHX0JMS19ERVZfTE9PUF9NSU5fQ09VTlQ9
OAojIENPTkZJR19CTEtfREVWX0NSWVBUT0xPT1AgaXMgbm90IHNldAojIENPTkZJR19CTEtf
REVWX0RSQkQgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX05CRCBpcyBub3Qgc2V0CiMg
Q09ORklHX0JMS19ERVZfTlZNRSBpcyBub3Qgc2V0CiMgQ09ORklHX0JMS19ERVZfU0tEIGlz
IG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9TWDggaXMgbm90IHNldApDT05GSUdfQkxLX0RF
Vl9SQU09eQpDT05GSUdfQkxLX0RFVl9SQU1fQ09VTlQ9MTYKQ09ORklHX0JMS19ERVZfUkFN
X1NJWkU9MTYzODQKIyBDT05GSUdfQkxLX0RFVl9YSVAgaXMgbm90IHNldAojIENPTkZJR19D
RFJPTV9QS1RDRFZEIGlzIG5vdCBzZXQKIyBDT05GSUdfQVRBX09WRVJfRVRIIGlzIG5vdCBz
ZXQKQ09ORklHX1hFTl9CTEtERVZfRlJPTlRFTkQ9eQpDT05GSUdfWEVOX0JMS0RFVl9CQUNL
RU5EPXkKIyBDT05GSUdfQkxLX0RFVl9IRCBpcyBub3Qgc2V0CiMgQ09ORklHX0JMS19ERVZf
UkJEIGlzIG5vdCBzZXQKIyBDT05GSUdfQkxLX0RFVl9SU1hYIGlzIG5vdCBzZXQKCiMKIyBN
aXNjIGRldmljZXMKIwojIENPTkZJR19TRU5TT1JTX0xJUzNMVjAyRCBpcyBub3Qgc2V0CiMg
Q09ORklHX0FENTI1WF9EUE9UIGlzIG5vdCBzZXQKIyBDT05GSUdfRFVNTVlfSVJRIGlzIG5v
dCBzZXQKIyBDT05GSUdfSUJNX0FTTSBpcyBub3Qgc2V0CiMgQ09ORklHX1BIQU5UT00gaXMg
bm90IHNldAojIENPTkZJR19TR0lfSU9DNCBpcyBub3Qgc2V0CiMgQ09ORklHX1RJRk1fQ09S
RSBpcyBub3Qgc2V0CiMgQ09ORklHX0lDUzkzMlM0MDEgaXMgbm90IHNldAojIENPTkZJR19B
VE1FTF9TU0MgaXMgbm90IHNldAojIENPTkZJR19FTkNMT1NVUkVfU0VSVklDRVMgaXMgbm90
IHNldAojIENPTkZJR19IUF9JTE8gaXMgbm90IHNldAojIENPTkZJR19BUERTOTgwMkFMUyBp
cyBub3Qgc2V0CiMgQ09ORklHX0lTTDI5MDAzIGlzIG5vdCBzZXQKIyBDT05GSUdfSVNMMjkw
MjAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RTTDI1NTAgaXMgbm90IHNldAojIENP
TkZJR19TRU5TT1JTX0JIMTc4MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQkgxNzcw
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BUERTOTkwWCBpcyBub3Qgc2V0CiMgQ09O
RklHX0hNQzYzNTIgaXMgbm90IHNldAojIENPTkZJR19EUzE2ODIgaXMgbm90IHNldAojIENP
TkZJR19WTVdBUkVfQkFMTE9PTiBpcyBub3Qgc2V0CiMgQ09ORklHX0JNUDA4NV9JMkMgaXMg
bm90IHNldAojIENPTkZJR19QQ0hfUEhVQiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TV0lU
Q0hfRlNBOTQ4MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NSQU0gaXMgbm90IHNldAojIENPTkZJ
R19DMlBPUlQgaXMgbm90IHNldAoKIwojIEVFUFJPTSBzdXBwb3J0CiMKIyBDT05GSUdfRUVQ
Uk9NX0FUMjQgaXMgbm90IHNldAojIENPTkZJR19FRVBST01fTEVHQUNZIGlzIG5vdCBzZXQK
IyBDT05GSUdfRUVQUk9NX01BWDY4NzUgaXMgbm90IHNldAojIENPTkZJR19FRVBST01fOTND
WDYgaXMgbm90IHNldAojIENPTkZJR19DQjcxMF9DT1JFIGlzIG5vdCBzZXQKCiMKIyBUZXhh
cyBJbnN0cnVtZW50cyBzaGFyZWQgdHJhbnNwb3J0IGxpbmUgZGlzY2lwbGluZQojCiMgQ09O
RklHX1NFTlNPUlNfTElTM19JMkMgaXMgbm90IHNldAoKIwojIEFsdGVyYSBGUEdBIGZpcm13
YXJlIGRvd25sb2FkIG1vZHVsZQojCkNPTkZJR19BTFRFUkFfU1RBUEw9eQojIENPTkZJR19J
TlRFTF9NRUkgaXMgbm90IHNldAojIENPTkZJR19JTlRFTF9NRUlfTUUgaXMgbm90IHNldAoj
IENPTkZJR19WTVdBUkVfVk1DSSBpcyBub3Qgc2V0CgojCiMgSW50ZWwgTUlDIEhvc3QgRHJp
dmVyCiMKIyBDT05GSUdfSU5URUxfTUlDX0hPU1QgaXMgbm90IHNldAoKIwojIEludGVsIE1J
QyBDYXJkIERyaXZlcgojCiMgQ09ORklHX0lOVEVMX01JQ19DQVJEIGlzIG5vdCBzZXQKIyBD
T05GSUdfR0VOV1FFIGlzIG5vdCBzZXQKQ09ORklHX0hBVkVfSURFPXkKIyBDT05GSUdfSURF
IGlzIG5vdCBzZXQKCiMKIyBTQ1NJIGRldmljZSBzdXBwb3J0CiMKQ09ORklHX1NDU0lfTU9E
PXkKIyBDT05GSUdfUkFJRF9BVFRSUyBpcyBub3Qgc2V0CkNPTkZJR19TQ1NJPXkKQ09ORklH
X1NDU0lfRE1BPXkKIyBDT05GSUdfU0NTSV9UR1QgaXMgbm90IHNldAojIENPTkZJR19TQ1NJ
X05FVExJTksgaXMgbm90IHNldApDT05GSUdfU0NTSV9QUk9DX0ZTPXkKCiMKIyBTQ1NJIHN1
cHBvcnQgdHlwZSAoZGlzaywgdGFwZSwgQ0QtUk9NKQojCkNPTkZJR19CTEtfREVWX1NEPXkK
IyBDT05GSUdfQ0hSX0RFVl9TVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NIUl9ERVZfT1NTVCBp
cyBub3Qgc2V0CkNPTkZJR19CTEtfREVWX1NSPXkKQ09ORklHX0JMS19ERVZfU1JfVkVORE9S
PXkKQ09ORklHX0NIUl9ERVZfU0c9eQojIENPTkZJR19DSFJfREVWX1NDSCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NDU0lfTVVMVElfTFVOIGlzIG5vdCBzZXQKQ09ORklHX1NDU0lfQ09OU1RB
TlRTPXkKIyBDT05GSUdfU0NTSV9MT0dHSU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9T
Q0FOX0FTWU5DIGlzIG5vdCBzZXQKCiMKIyBTQ1NJIFRyYW5zcG9ydHMKIwpDT05GSUdfU0NT
SV9TUElfQVRUUlM9eQojIENPTkZJR19TQ1NJX0ZDX0FUVFJTIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0NTSV9JU0NTSV9BVFRSUyBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfU0FTX0FUVFJT
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0NTSV9TQVNfTElCU0FTIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0NTSV9TUlBfQVRUUlMgaXMgbm90IHNldAojIENPTkZJR19TQ1NJX0xPV0xFVkVMIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0NTSV9ESCBpcyBub3Qgc2V0CiMgQ09ORklHX1NDU0lfT1NE
X0lOSVRJQVRPUiBpcyBub3Qgc2V0CkNPTkZJR19BVEE9eQojIENPTkZJR19BVEFfTk9OU1RB
TkRBUkQgaXMgbm90IHNldApDT05GSUdfQVRBX1ZFUkJPU0VfRVJST1I9eQpDT05GSUdfQVRB
X0FDUEk9eQojIENPTkZJR19TQVRBX1pQT0REIGlzIG5vdCBzZXQKQ09ORklHX1NBVEFfUE1Q
PXkKCiMKIyBDb250cm9sbGVycyB3aXRoIG5vbi1TRkYgbmF0aXZlIGludGVyZmFjZQojCkNP
TkZJR19TQVRBX0FIQ0k9eQpDT05GSUdfU0FUQV9BSENJX1BMQVRGT1JNPXkKIyBDT05GSUdf
U0FUQV9JTklDMTYyWCBpcyBub3Qgc2V0CiMgQ09ORklHX1NBVEFfQUNBUkRfQUhDSSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NBVEFfU0lMMjQgaXMgbm90IHNldAojIENPTkZJR19BVEFfU0ZG
IGlzIG5vdCBzZXQKQ09ORklHX01EPXkKIyBDT05GSUdfQkxLX0RFVl9NRCBpcyBub3Qgc2V0
CkNPTkZJR19CQ0FDSEU9eQojIENPTkZJR19CQ0FDSEVfREVCVUcgaXMgbm90IHNldAojIENP
TkZJR19CQ0FDSEVfQ0xPU1VSRVNfREVCVUcgaXMgbm90IHNldApDT05GSUdfQkxLX0RFVl9E
TV9CVUlMVElOPXkKQ09ORklHX0JMS19ERVZfRE09eQpDT05GSUdfRE1fREVCVUc9eQpDT05G
SUdfRE1fQlVGSU89eQpDT05GSUdfRE1fQklPX1BSSVNPTj15CkNPTkZJR19ETV9QRVJTSVNU
RU5UX0RBVEE9eQpDT05GSUdfRE1fQ1JZUFQ9eQpDT05GSUdfRE1fU05BUFNIT1Q9eQojIENP
TkZJR19ETV9USElOX1BST1ZJU0lPTklORyBpcyBub3Qgc2V0CiMgQ09ORklHX0RNX0RFQlVH
X0JMT0NLX1NUQUNLX1RSQUNJTkcgaXMgbm90IHNldApDT05GSUdfRE1fQ0FDSEU9eQpDT05G
SUdfRE1fQ0FDSEVfTVE9eQpDT05GSUdfRE1fQ0FDSEVfQ0xFQU5FUj15CkNPTkZJR19ETV9N
SVJST1I9eQojIENPTkZJR19ETV9MT0dfVVNFUlNQQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdf
RE1fUkFJRCBpcyBub3Qgc2V0CkNPTkZJR19ETV9aRVJPPXkKIyBDT05GSUdfRE1fTVVMVElQ
QVRIIGlzIG5vdCBzZXQKIyBDT05GSUdfRE1fREVMQVkgaXMgbm90IHNldAojIENPTkZJR19E
TV9VRVZFTlQgaXMgbm90IHNldAojIENPTkZJR19ETV9GTEFLRVkgaXMgbm90IHNldAojIENP
TkZJR19ETV9WRVJJVFkgaXMgbm90IHNldAojIENPTkZJR19ETV9TV0lUQ0ggaXMgbm90IHNl
dAojIENPTkZJR19UQVJHRVRfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZVU0lPTiBpcyBu
b3Qgc2V0CgojCiMgSUVFRSAxMzk0IChGaXJlV2lyZSkgc3VwcG9ydAojCiMgQ09ORklHX0ZJ
UkVXSVJFIGlzIG5vdCBzZXQKIyBDT05GSUdfRklSRVdJUkVfTk9TWSBpcyBub3Qgc2V0CiMg
Q09ORklHX0kyTyBpcyBub3Qgc2V0CiMgQ09ORklHX01BQ0lOVE9TSF9EUklWRVJTIGlzIG5v
dCBzZXQKQ09ORklHX05FVERFVklDRVM9eQpDT05GSUdfTUlJPXkKQ09ORklHX05FVF9DT1JF
PXkKIyBDT05GSUdfQk9ORElORyBpcyBub3Qgc2V0CiMgQ09ORklHX0RVTU1ZIGlzIG5vdCBz
ZXQKIyBDT05GSUdfRVFVQUxJWkVSIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX0ZDIGlzIG5v
dCBzZXQKIyBDT05GSUdfSUZCIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1RFQU0gaXMgbm90
IHNldAojIENPTkZJR19NQUNWTEFOIGlzIG5vdCBzZXQKIyBDT05GSUdfVlhMQU4gaXMgbm90
IHNldApDT05GSUdfTkVUQ09OU09MRT15CkNPTkZJR19ORVRQT0xMPXkKIyBDT05GSUdfTkVU
UE9MTF9UUkFQIGlzIG5vdCBzZXQKQ09ORklHX05FVF9QT0xMX0NPTlRST0xMRVI9eQpDT05G
SUdfVFVOPXkKQ09ORklHX1ZFVEg9eQojIENPTkZJR19OTE1PTiBpcyBub3Qgc2V0CiMgQ09O
RklHX0FSQ05FVCBpcyBub3Qgc2V0CgojCiMgQ0FJRiB0cmFuc3BvcnQgZHJpdmVycwojCgoj
CiMgRGlzdHJpYnV0ZWQgU3dpdGNoIEFyY2hpdGVjdHVyZSBkcml2ZXJzCiMKIyBDT05GSUdf
TkVUX0RTQV9NVjg4RTZYWFggaXMgbm90IHNldAojIENPTkZJR19ORVRfRFNBX01WODhFNjA2
MCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9EU0FfTVY4OEU2WFhYX05FRURfUFBVIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX0RTQV9NVjg4RTYxMzEgaXMgbm90IHNldAojIENPTkZJR19O
RVRfRFNBX01WODhFNjEyM182MV82NSBpcyBub3Qgc2V0CkNPTkZJR19FVEhFUk5FVD15CiMg
Q09ORklHX05FVF9WRU5ET1JfM0NPTSBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1Jf
QURBUFRFQyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfQUxURU9OIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9BTUQgaXMgbm90IHNldApDT05GSUdfTkVUX1ZFTkRP
Ul9BUkM9eQojIENPTkZJR19ORVRfVkVORE9SX0FUSEVST1MgaXMgbm90IHNldApDT05GSUdf
TkVUX0NBREVOQ0U9eQojIENPTkZJR19BUk1fQVQ5MV9FVEhFUiBpcyBub3Qgc2V0CiMgQ09O
RklHX01BQ0IgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0JST0FEQ09NIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9CUk9DQURFIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkVUX0NBTFhFREFfWEdNQUMgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0NIRUxT
SU8gaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX0NJU0NPIGlzIG5vdCBzZXQKIyBD
T05GSUdfRE5FVCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfREVDIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9ETElOSyBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9W
RU5ET1JfRU1VTEVYIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9FWEFSIGlzIG5v
dCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9IUCBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVO
RE9SX0lOVEVMPXkKIyBDT05GSUdfRTEwMCBpcyBub3Qgc2V0CkNPTkZJR19FMTAwMD15CkNP
TkZJR19FMTAwMEU9eQpDT05GSUdfSUdCPXkKQ09ORklHX0lHQl9IV01PTj15CkNPTkZJR19J
R0JWRj15CiMgQ09ORklHX0lYR0IgaXMgbm90IHNldAojIENPTkZJR19JWEdCRSBpcyBub3Qg
c2V0CiMgQ09ORklHX0lYR0JFVkYgaXMgbm90IHNldAojIENPTkZJR19JNDBFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSTQwRVZGIGlzIG5vdCBzZXQKQ09ORklHX05FVF9WRU5ET1JfSTgyNVhY
PXkKIyBDT05GSUdfSVAxMDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfSk1FIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkVUX1ZFTkRPUl9NQVJWRUxMIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZF
TkRPUl9NRUxMQU5PWCBpcyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfTUlDUkVMIGlz
IG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9NWVJJIGlzIG5vdCBzZXQKIyBDT05GSUdf
RkVBTE5YIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9OQVRTRU1JIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9OVklESUEgaXMgbm90IHNldAojIENPTkZJR19ORVRf
VkVORE9SX09LSSBpcyBub3Qgc2V0CiMgQ09ORklHX0VUSE9DIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkVUX1BBQ0tFVF9FTkdJTkUgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1FM
T0dJQyBpcyBub3Qgc2V0CkNPTkZJR19ORVRfVkVORE9SX1JFQUxURUs9eQojIENPTkZJR184
MTM5Q1AgaXMgbm90IHNldAojIENPTkZJR184MTM5VE9PIGlzIG5vdCBzZXQKQ09ORklHX1I4
MTY5PXkKIyBDT05GSUdfU0hfRVRIIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9S
REMgaXMgbm90IHNldApDT05GSUdfTkVUX1ZFTkRPUl9TRUVRPXkKQ09ORklHX05FVF9WRU5E
T1JfU0lMQU49eQojIENPTkZJR19TQzkyMDMxIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZF
TkRPUl9TSVMgaXMgbm90IHNldAojIENPTkZJR19TRkMgaXMgbm90IHNldAojIENPTkZJR19O
RVRfVkVORE9SX1NNU0MgaXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1NUTUlDUk8g
aXMgbm90IHNldAojIENPTkZJR19ORVRfVkVORE9SX1NVTiBpcyBub3Qgc2V0CiMgQ09ORklH
X05FVF9WRU5ET1JfVEVIVVRJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1ZFTkRPUl9USSBp
cyBub3Qgc2V0CiMgQ09ORklHX05FVF9WRU5ET1JfVklBIGlzIG5vdCBzZXQKQ09ORklHX05F
VF9WRU5ET1JfV0laTkVUPXkKIyBDT05GSUdfV0laTkVUX1c1MTAwIGlzIG5vdCBzZXQKIyBD
T05GSUdfV0laTkVUX1c1MzAwIGlzIG5vdCBzZXQKIyBDT05GSUdfRkRESSBpcyBub3Qgc2V0
CiMgQ09ORklHX0hJUFBJIGlzIG5vdCBzZXQKIyBDT05GSUdfTkVUX1NCMTAwMCBpcyBub3Qg
c2V0CkNPTkZJR19QSFlMSUI9eQoKIwojIE1JSSBQSFkgZGV2aWNlIGRyaXZlcnMKIwojIENP
TkZJR19BVDgwM1hfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQU1EX1BIWSBpcyBub3Qgc2V0
CiMgQ09ORklHX01BUlZFTExfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfREFWSUNPTV9QSFkg
aXMgbm90IHNldAojIENPTkZJR19RU0VNSV9QSFkgaXMgbm90IHNldAojIENPTkZJR19MWFRf
UEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lDQURBX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklH
X1ZJVEVTU0VfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfU01TQ19QSFkgaXMgbm90IHNldAoj
IENPTkZJR19CUk9BRENPTV9QSFkgaXMgbm90IHNldAojIENPTkZJR19CQ004N1hYX1BIWSBp
cyBub3Qgc2V0CiMgQ09ORklHX0lDUExVU19QSFkgaXMgbm90IHNldApDT05GSUdfUkVBTFRF
S19QSFk9eQojIENPTkZJR19OQVRJT05BTF9QSFkgaXMgbm90IHNldAojIENPTkZJR19TVEUx
MFhQIGlzIG5vdCBzZXQKIyBDT05GSUdfTFNJX0VUMTAxMUNfUEhZIGlzIG5vdCBzZXQKIyBD
T05GSUdfTUlDUkVMX1BIWSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZJWEVEX1BIWSBpcyBub3Qg
c2V0CiMgQ09ORklHX01ESU9fQklUQkFORyBpcyBub3Qgc2V0CiMgQ09ORklHX1BQUCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NMSVAgaXMgbm90IHNldAoKIwojIFVTQiBOZXR3b3JrIEFkYXB0
ZXJzCiMKIyBDT05GSUdfVVNCX0NBVEMgaXMgbm90IHNldAojIENPTkZJR19VU0JfS0FXRVRI
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1BFR0FTVVMgaXMgbm90IHNldAojIENPTkZJR19V
U0JfUlRMODE1MCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9SVEw4MTUyIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX1VTQk5FVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JUEhFVEggaXMg
bm90IHNldAojIENPTkZJR19XTEFOIGlzIG5vdCBzZXQKCiMKIyBFbmFibGUgV2lNQVggKE5l
dHdvcmtpbmcgb3B0aW9ucykgdG8gc2VlIHRoZSBXaU1BWCBkcml2ZXJzCiMKIyBDT05GSUdf
V0FOIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9ORVRERVZfRlJPTlRFTkQ9eQpDT05GSUdfWEVO
X05FVERFVl9CQUNLRU5EPXkKIyBDT05GSUdfVk1YTkVUMyBpcyBub3Qgc2V0CiMgQ09ORklH
X0lTRE4gaXMgbm90IHNldAoKIwojIElucHV0IGRldmljZSBzdXBwb3J0CiMKQ09ORklHX0lO
UFVUPXkKQ09ORklHX0lOUFVUX0ZGX01FTUxFU1M9eQpDT05GSUdfSU5QVVRfUE9MTERFVj15
CkNPTkZJR19JTlBVVF9TUEFSU0VLTUFQPXkKIyBDT05GSUdfSU5QVVRfTUFUUklYS01BUCBp
cyBub3Qgc2V0CgojCiMgVXNlcmxhbmQgaW50ZXJmYWNlcwojCkNPTkZJR19JTlBVVF9NT1VT
RURFVj15CiMgQ09ORklHX0lOUFVUX01PVVNFREVWX1BTQVVYIGlzIG5vdCBzZXQKQ09ORklH
X0lOUFVUX01PVVNFREVWX1NDUkVFTl9YPTEwMjQKQ09ORklHX0lOUFVUX01PVVNFREVWX1ND
UkVFTl9ZPTc2OAojIENPTkZJR19JTlBVVF9KT1lERVYgaXMgbm90IHNldApDT05GSUdfSU5Q
VVRfRVZERVY9eQojIENPTkZJR19JTlBVVF9FVkJVRyBpcyBub3Qgc2V0CgojCiMgSW5wdXQg
RGV2aWNlIERyaXZlcnMKIwpDT05GSUdfSU5QVVRfS0VZQk9BUkQ9eQojIENPTkZJR19LRVlC
T0FSRF9BRFA1NTg4IGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfQURQNTU4OSBpcyBu
b3Qgc2V0CkNPTkZJR19LRVlCT0FSRF9BVEtCRD15CiMgQ09ORklHX0tFWUJPQVJEX1FUMTA3
MCBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX1FUMjE2MCBpcyBub3Qgc2V0CiMgQ09O
RklHX0tFWUJPQVJEX0xLS0JEIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfVENBNjQx
NiBpcyBub3Qgc2V0CiMgQ09ORklHX0tFWUJPQVJEX1RDQTg0MTggaXMgbm90IHNldAojIENP
TkZJR19LRVlCT0FSRF9MTTgzMjMgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9MTTgz
MzMgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9NQVg3MzU5IGlzIG5vdCBzZXQKIyBD
T05GSUdfS0VZQk9BUkRfTUNTIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTVBSMTIx
IGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfTkVXVE9OIGlzIG5vdCBzZXQKIyBDT05G
SUdfS0VZQk9BUkRfT1BFTkNPUkVTIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZQk9BUkRfU1RP
V0FXQVkgaXMgbm90IHNldAojIENPTkZJR19LRVlCT0FSRF9TVU5LQkQgaXMgbm90IHNldAoj
IENPTkZJR19LRVlCT0FSRF9YVEtCRCBpcyBub3Qgc2V0CkNPTkZJR19JTlBVVF9NT1VTRT15
CkNPTkZJR19NT1VTRV9QUzI9eQpDT05GSUdfTU9VU0VfUFMyX0FMUFM9eQpDT05GSUdfTU9V
U0VfUFMyX0xPR0lQUzJQUD15CkNPTkZJR19NT1VTRV9QUzJfU1lOQVBUSUNTPXkKQ09ORklH
X01PVVNFX1BTMl9DWVBSRVNTPXkKQ09ORklHX01PVVNFX1BTMl9MSUZFQk9PSz15CkNPTkZJ
R19NT1VTRV9QUzJfVFJBQ0tQT0lOVD15CiMgQ09ORklHX01PVVNFX1BTMl9FTEFOVEVDSCBp
cyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1BTMl9TRU5URUxJQyBpcyBub3Qgc2V0CiMgQ09O
RklHX01PVVNFX1BTMl9UT1VDSEtJVCBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1NFUklB
TCBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX0FQUExFVE9VQ0ggaXMgbm90IHNldAojIENP
TkZJR19NT1VTRV9CQ001OTc0IGlzIG5vdCBzZXQKIyBDT05GSUdfTU9VU0VfQ1lBUEEgaXMg
bm90IHNldAojIENPTkZJR19NT1VTRV9WU1hYWEFBIGlzIG5vdCBzZXQKIyBDT05GSUdfTU9V
U0VfU1lOQVBUSUNTX0kyQyBpcyBub3Qgc2V0CiMgQ09ORklHX01PVVNFX1NZTkFQVElDU19V
U0IgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9KT1lTVElDSyBpcyBub3Qgc2V0CkNPTkZJ
R19JTlBVVF9UQUJMRVQ9eQojIENPTkZJR19UQUJMRVRfVVNCX0FDRUNBRCBpcyBub3Qgc2V0
CiMgQ09ORklHX1RBQkxFVF9VU0JfQUlQVEVLIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFCTEVU
X1VTQl9HVENPIGlzIG5vdCBzZXQKIyBDT05GSUdfVEFCTEVUX1VTQl9IQU5XQU5HIGlzIG5v
dCBzZXQKIyBDT05GSUdfVEFCTEVUX1VTQl9LQlRBQiBpcyBub3Qgc2V0CiMgQ09ORklHX1RB
QkxFVF9VU0JfV0FDT00gaXMgbm90IHNldApDT05GSUdfSU5QVVRfVE9VQ0hTQ1JFRU49eQoj
IENPTkZJR19UT1VDSFNDUkVFTl9BRDc4NzkgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFND
UkVFTl9BVE1FTF9NWFQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9CVTIxMDEz
IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fQ1lUVFNQX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19UT1VDSFNDUkVFTl9DWVRUU1A0X0NPUkUgaXMgbm90IHNldAojIENPTkZJ
R19UT1VDSFNDUkVFTl9EWU5BUFJPIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5f
SEFNUFNISVJFIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fRUVUSSBpcyBub3Qg
c2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0ZVSklUU1UgaXMgbm90IHNldAojIENPTkZJR19U
T1VDSFNDUkVFTl9JTEkyMTBYIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fR1VO
WkUgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9FTE8gaXMgbm90IHNldAojIENP
TkZJR19UT1VDSFNDUkVFTl9XQUNPTV9XODAwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNI
U0NSRUVOX1dBQ09NX0kyQyBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX01BWDEx
ODAxIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fTUNTNTAwMCBpcyBub3Qgc2V0
CiMgQ09ORklHX1RPVUNIU0NSRUVOX01NUzExNCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNI
U0NSRUVOX01UT1VDSCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX0lORVhJTyBp
cyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX01LNzEyIGlzIG5vdCBzZXQKIyBDT05G
SUdfVE9VQ0hTQ1JFRU5fUEVOTU9VTlQgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVF
Tl9FRFRfRlQ1WDA2IGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5fVE9VQ0hSSUdI
VCBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1RPVUNIV0lOIGlzIG5vdCBzZXQK
IyBDT05GSUdfVE9VQ0hTQ1JFRU5fUElYQ0lSIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hT
Q1JFRU5fVVNCX0NPTVBPU0lURSBpcyBub3Qgc2V0CiMgQ09ORklHX1RPVUNIU0NSRUVOX1RP
VUNISVQyMTMgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UU0NfU0VSSU8gaXMg
bm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UU0MyMDA3IGlzIG5vdCBzZXQKIyBDT05G
SUdfVE9VQ0hTQ1JFRU5fU1QxMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdfVE9VQ0hTQ1JFRU5f
U1VSNDAgaXMgbm90IHNldAojIENPTkZJR19UT1VDSFNDUkVFTl9UUFM2NTA3WCBpcyBub3Qg
c2V0CkNPTkZJR19JTlBVVF9NSVNDPXkKIyBDT05GSUdfSU5QVVRfQUQ3MTRYIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSU5QVVRfQk1BMTUwIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfUENT
UEtSIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfTU1BODQ1MCBpcyBub3Qgc2V0CiMgQ09O
RklHX0lOUFVUX01QVTMwNTAgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9BUEFORUwgaXMg
bm90IHNldAojIENPTkZJR19JTlBVVF9BVExBU19CVE5TIGlzIG5vdCBzZXQKIyBDT05GSUdf
SU5QVVRfQVRJX1JFTU9URTIgaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9LRVlTUEFOX1JF
TU9URSBpcyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0tYVEo5IGlzIG5vdCBzZXQKIyBDT05G
SUdfSU5QVVRfUE9XRVJNQVRFIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfWUVBTElOSyBp
cyBub3Qgc2V0CiMgQ09ORklHX0lOUFVUX0NNMTA5IGlzIG5vdCBzZXQKIyBDT05GSUdfSU5Q
VVRfVUlOUFVUIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfUENGODU3NCBpcyBub3Qgc2V0
CiMgQ09ORklHX0lOUFVUX0FEWEwzNFggaXMgbm90IHNldAojIENPTkZJR19JTlBVVF9JTVNf
UENVIGlzIG5vdCBzZXQKIyBDT05GSUdfSU5QVVRfQ01BMzAwMCBpcyBub3Qgc2V0CkNPTkZJ
R19JTlBVVF9YRU5fS0JEREVWX0ZST05URU5EPXkKIyBDT05GSUdfSU5QVVRfSURFQVBBRF9T
TElERUJBUiBpcyBub3Qgc2V0CgojCiMgSGFyZHdhcmUgSS9PIHBvcnRzCiMKQ09ORklHX1NF
UklPPXkKQ09ORklHX0FSQ0hfTUlHSFRfSEFWRV9QQ19TRVJJTz15CkNPTkZJR19TRVJJT19J
ODA0Mj15CkNPTkZJR19TRVJJT19TRVJQT1JUPXkKIyBDT05GSUdfU0VSSU9fQ1Q4MkM3MTAg
aXMgbm90IHNldAojIENPTkZJR19TRVJJT19QQ0lQUzIgaXMgbm90IHNldApDT05GSUdfU0VS
SU9fTElCUFMyPXkKIyBDT05GSUdfU0VSSU9fUkFXIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VS
SU9fQUxURVJBX1BTMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklPX1BTMk1VTFQgaXMgbm90
IHNldAojIENPTkZJR19TRVJJT19BUkNfUFMyIGlzIG5vdCBzZXQKIyBDT05GSUdfR0FNRVBP
UlQgaXMgbm90IHNldAoKIwojIENoYXJhY3RlciBkZXZpY2VzCiMKQ09ORklHX1RUWT15CkNP
TkZJR19WVD15CkNPTkZJR19DT05TT0xFX1RSQU5TTEFUSU9OUz15CkNPTkZJR19WVF9DT05T
T0xFPXkKQ09ORklHX1ZUX0NPTlNPTEVfU0xFRVA9eQpDT05GSUdfSFdfQ09OU09MRT15CkNP
TkZJR19WVF9IV19DT05TT0xFX0JJTkRJTkc9eQpDT05GSUdfVU5JWDk4X1BUWVM9eQojIENP
TkZJR19ERVZQVFNfTVVMVElQTEVfSU5TVEFOQ0VTIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVH
QUNZX1BUWVMgaXMgbm90IHNldApDT05GSUdfU0VSSUFMX05PTlNUQU5EQVJEPXkKIyBDT05G
SUdfUk9DS0VUUE9SVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NZQ0xBREVTIGlzIG5vdCBzZXQK
IyBDT05GSUdfTU9YQV9JTlRFTExJTyBpcyBub3Qgc2V0CiMgQ09ORklHX01PWEFfU01BUlRJ
TyBpcyBub3Qgc2V0CiMgQ09ORklHX1NZTkNMSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfU1lO
Q0xJTktNUCBpcyBub3Qgc2V0CiMgQ09ORklHX1NZTkNMSU5LX0dUIGlzIG5vdCBzZXQKIyBD
T05GSUdfTk9aT01JIGlzIG5vdCBzZXQKIyBDT05GSUdfSVNJIGlzIG5vdCBzZXQKIyBDT05G
SUdfTl9IRExDIGlzIG5vdCBzZXQKIyBDT05GSUdfTl9HU00gaXMgbm90IHNldAojIENPTkZJ
R19UUkFDRV9TSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfREVWS01FTSBpcyBub3Qgc2V0Cgoj
CiMgU2VyaWFsIGRyaXZlcnMKIwpDT05GSUdfU0VSSUFMXzgyNTA9eQpDT05GSUdfU0VSSUFM
XzgyNTBfREVQUkVDQVRFRF9PUFRJT05TPXkKQ09ORklHX1NFUklBTF84MjUwX1BOUD15CkNP
TkZJR19TRVJJQUxfODI1MF9DT05TT0xFPXkKQ09ORklHX0ZJWF9FQVJMWUNPTl9NRU09eQpD
T05GSUdfU0VSSUFMXzgyNTBfUENJPXkKQ09ORklHX1NFUklBTF84MjUwX05SX1VBUlRTPTMy
CkNPTkZJR19TRVJJQUxfODI1MF9SVU5USU1FX1VBUlRTPTQKQ09ORklHX1NFUklBTF84MjUw
X0VYVEVOREVEPXkKQ09ORklHX1NFUklBTF84MjUwX01BTllfUE9SVFM9eQpDT05GSUdfU0VS
SUFMXzgyNTBfU0hBUkVfSVJRPXkKQ09ORklHX1NFUklBTF84MjUwX0RFVEVDVF9JUlE9eQpD
T05GSUdfU0VSSUFMXzgyNTBfUlNBPXkKIyBDT05GSUdfU0VSSUFMXzgyNTBfRFcgaXMgbm90
IHNldAoKIwojIE5vbi04MjUwIHNlcmlhbCBwb3J0IHN1cHBvcnQKIwojIENPTkZJR19TRVJJ
QUxfTUZEX0hTVSBpcyBub3Qgc2V0CkNPTkZJR19TRVJJQUxfQ09SRT15CkNPTkZJR19TRVJJ
QUxfQ09SRV9DT05TT0xFPXkKIyBDT05GSUdfU0VSSUFMX0pTTSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFUklBTF9TQ0NOWFAgaXMgbm90IHNldAojIENPTkZJR19TRVJJQUxfVElNQkVSREFM
RSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF9BTFRFUkFfSlRBR1VBUlQgaXMgbm90IHNl
dAojIENPTkZJR19TRVJJQUxfQUxURVJBX1VBUlQgaXMgbm90IHNldAojIENPTkZJR19TRVJJ
QUxfUENIX1VBUlQgaXMgbm90IHNldAojIENPTkZJR19TRVJJQUxfQVJDIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VSSUFMX1JQMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFUklBTF9GU0xfTFBV
QVJUIGlzIG5vdCBzZXQKQ09ORklHX0hWQ19EUklWRVI9eQpDT05GSUdfSFZDX0lSUT15CkNP
TkZJR19IVkNfWEVOPXkKQ09ORklHX0hWQ19YRU5fRlJPTlRFTkQ9eQojIENPTkZJR19JUE1J
X0hBTkRMRVIgaXMgbm90IHNldApDT05GSUdfSFdfUkFORE9NPXkKQ09ORklHX0hXX1JBTkRP
TV9USU1FUklPTUVNPXkKQ09ORklHX0hXX1JBTkRPTV9JTlRFTD15CkNPTkZJR19IV19SQU5E
T01fQU1EPXkKQ09ORklHX0hXX1JBTkRPTV9WSUE9eQojIENPTkZJR19OVlJBTSBpcyBub3Qg
c2V0CiMgQ09ORklHX1IzOTY0IGlzIG5vdCBzZXQKIyBDT05GSUdfQVBQTElDT00gaXMgbm90
IHNldAojIENPTkZJR19NV0FWRSBpcyBub3Qgc2V0CiMgQ09ORklHX1JBV19EUklWRVIgaXMg
bm90IHNldApDT05GSUdfSFBFVD15CiMgQ09ORklHX0hQRVRfTU1BUCBpcyBub3Qgc2V0CkNP
TkZJR19IQU5HQ0hFQ0tfVElNRVI9eQojIENPTkZJR19UQ0dfVFBNIGlzIG5vdCBzZXQKIyBD
T05GSUdfVEVMQ0xPQ0sgaXMgbm90IHNldApDT05GSUdfREVWUE9SVD15CkNPTkZJR19JMkM9
eQpDT05GSUdfSTJDX0JPQVJESU5GTz15CkNPTkZJR19JMkNfQ09NUEFUPXkKIyBDT05GSUdf
STJDX0NIQVJERVYgaXMgbm90IHNldApDT05GSUdfSTJDX01VWD15CgojCiMgTXVsdGlwbGV4
ZXIgSTJDIENoaXAgc3VwcG9ydAojCiMgQ09ORklHX0kyQ19NVVhfUENBOTU0MSBpcyBub3Qg
c2V0CiMgQ09ORklHX0kyQ19NVVhfUENBOTU0eCBpcyBub3Qgc2V0CkNPTkZJR19JMkNfSEVM
UEVSX0FVVE89eQpDT05GSUdfSTJDX0FMR09CSVQ9eQoKIwojIEkyQyBIYXJkd2FyZSBCdXMg
c3VwcG9ydAojCgojCiMgUEMgU01CdXMgaG9zdCBjb250cm9sbGVyIGRyaXZlcnMKIwojIENP
TkZJR19JMkNfQUxJMTUzNSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19BTEkxNTYzIGlzIG5v
dCBzZXQKIyBDT05GSUdfSTJDX0FMSTE1WDMgaXMgbm90IHNldApDT05GSUdfSTJDX0FNRDc1
Nj15CiMgQ09ORklHX0kyQ19BTUQ3NTZfUzQ4ODIgaXMgbm90IHNldApDT05GSUdfSTJDX0FN
RDgxMTE9eQpDT05GSUdfSTJDX0k4MDE9eQpDT05GSUdfSTJDX0lTQ0g9eQojIENPTkZJR19J
MkNfSVNNVCBpcyBub3Qgc2V0CkNPTkZJR19JMkNfUElJWDQ9eQojIENPTkZJR19JMkNfTkZP
UkNFMiBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TSVM1NTk1IGlzIG5vdCBzZXQKIyBDT05G
SUdfSTJDX1NJUzYzMCBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19TSVM5NlggaXMgbm90IHNl
dAojIENPTkZJR19JMkNfVklBIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1ZJQVBSTyBpcyBu
b3Qgc2V0CgojCiMgQUNQSSBkcml2ZXJzCiMKQ09ORklHX0kyQ19TQ01JPXkKCiMKIyBJMkMg
c3lzdGVtIGJ1cyBkcml2ZXJzIChtb3N0bHkgZW1iZWRkZWQgLyBzeXN0ZW0tb24tY2hpcCkK
IwojIENPTkZJR19JMkNfREVTSUdOV0FSRV9QTEFURk9STSBpcyBub3Qgc2V0CiMgQ09ORklH
X0kyQ19ERVNJR05XQVJFX1BDSSBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19FRzIwVCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0kyQ19PQ09SRVMgaXMgbm90IHNldAojIENPTkZJR19JMkNfUENB
X1BMQVRGT1JNIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1BYQV9QQ0kgaXMgbm90IHNldAoj
IENPTkZJR19JMkNfU0lNVEVDIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1hJTElOWCBpcyBu
b3Qgc2V0CgojCiMgRXh0ZXJuYWwgSTJDL1NNQnVzIGFkYXB0ZXIgZHJpdmVycwojCiMgQ09O
RklHX0kyQ19ESU9MQU5fVTJDIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX1BBUlBPUlRfTElH
SFQgaXMgbm90IHNldAojIENPTkZJR19JMkNfUk9CT1RGVVpaX09TSUYgaXMgbm90IHNldAoj
IENPTkZJR19JMkNfVEFPU19FVk0gaXMgbm90IHNldAojIENPTkZJR19JMkNfVElOWV9VU0Ig
aXMgbm90IHNldAoKIwojIE90aGVyIEkyQy9TTUJ1cyBidXMgZHJpdmVycwojCiMgQ09ORklH
X0kyQ19TVFVCIGlzIG5vdCBzZXQKIyBDT05GSUdfSTJDX0RFQlVHX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19JMkNfREVCVUdfQUxHTyBpcyBub3Qgc2V0CiMgQ09ORklHX0kyQ19ERUJV
R19CVVMgaXMgbm90IHNldAojIENPTkZJR19TUEkgaXMgbm90IHNldAojIENPTkZJR19IU0kg
aXMgbm90IHNldAoKIwojIFBQUyBzdXBwb3J0CiMKQ09ORklHX1BQUz15CiMgQ09ORklHX1BQ
U19ERUJVRyBpcyBub3Qgc2V0CgojCiMgUFBTIGNsaWVudHMgc3VwcG9ydAojCiMgQ09ORklH
X1BQU19DTElFTlRfS1RJTUVSIGlzIG5vdCBzZXQKIyBDT05GSUdfUFBTX0NMSUVOVF9MRElT
QyBpcyBub3Qgc2V0CiMgQ09ORklHX1BQU19DTElFTlRfR1BJTyBpcyBub3Qgc2V0CgojCiMg
UFBTIGdlbmVyYXRvcnMgc3VwcG9ydAojCgojCiMgUFRQIGNsb2NrIHN1cHBvcnQKIwpDT05G
SUdfUFRQXzE1ODhfQ0xPQ0s9eQoKIwojIEVuYWJsZSBQSFlMSUIgYW5kIE5FVFdPUktfUEhZ
X1RJTUVTVEFNUElORyB0byBzZWUgdGhlIGFkZGl0aW9uYWwgY2xvY2tzLgojCiMgQ09ORklH
X1BUUF8xNTg4X0NMT0NLX1BDSCBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX1dBTlRfT1BUSU9O
QUxfR1BJT0xJQj15CiMgQ09ORklHX0dQSU9MSUIgaXMgbm90IHNldAojIENPTkZJR19XMSBp
cyBub3Qgc2V0CkNPTkZJR19QT1dFUl9TVVBQTFk9eQojIENPTkZJR19QT1dFUl9TVVBQTFlf
REVCVUcgaXMgbm90IHNldAojIENPTkZJR19QREFfUE9XRVIgaXMgbm90IHNldAojIENPTkZJ
R19URVNUX1BPV0VSIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9EUzI3ODAgaXMgbm90
IHNldAojIENPTkZJR19CQVRURVJZX0RTMjc4MSBpcyBub3Qgc2V0CiMgQ09ORklHX0JBVFRF
UllfRFMyNzgyIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9TQlMgaXMgbm90IHNldAoj
IENPTkZJR19CQVRURVJZX0JRMjd4MDAgaXMgbm90IHNldAojIENPTkZJR19CQVRURVJZX01B
WDE3MDQwIGlzIG5vdCBzZXQKIyBDT05GSUdfQkFUVEVSWV9NQVgxNzA0MiBpcyBub3Qgc2V0
CiMgQ09ORklHX0NIQVJHRVJfTUFYODkwMyBpcyBub3Qgc2V0CiMgQ09ORklHX0NIQVJHRVJf
TFA4NzI3IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0hBUkdFUl9CUTI0MTVYIGlzIG5vdCBzZXQK
IyBDT05GSUdfQ0hBUkdFUl9TTUIzNDcgaXMgbm90IHNldAojIENPTkZJR19QT1dFUl9SRVNF
VCBpcyBub3Qgc2V0CiMgQ09ORklHX1BPV0VSX0FWUyBpcyBub3Qgc2V0CkNPTkZJR19IV01P
Tj15CkNPTkZJR19IV01PTl9WSUQ9eQojIENPTkZJR19IV01PTl9ERUJVR19DSElQIGlzIG5v
dCBzZXQKCiMKIyBOYXRpdmUgZHJpdmVycwojCiMgQ09ORklHX1NFTlNPUlNfQUJJVFVHVVJV
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BQklUVUdVUlUzIGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19BRDc0MTQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0FENzQx
OCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyMSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURNMTAyNSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAy
NiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNMTAyOSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURNMTAzMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURNOTI0
MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQxMCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURUNzQxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQ2
MiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQURUNzQ3MCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfQURUNzQ3NSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfQVNDNzYy
MSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfSzhURU1QIGlzIG5vdCBzZXQKQ09ORklH
X1NFTlNPUlNfSzEwVEVNUD15CkNPTkZJR19TRU5TT1JTX0ZBTTE1SF9QT1dFUj15CiMgQ09O
RklHX1NFTlNPUlNfQVNCMTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BVFhQMSBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfRFM2MjAgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0RTMTYyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfSTVLX0FNQiBpcyBu
b3Qgc2V0CkNPTkZJR19TRU5TT1JTX0Y3MTgwNUY9eQpDT05GSUdfU0VOU09SU19GNzE4ODJG
Rz15CkNPTkZJR19TRU5TT1JTX0Y3NTM3NVM9eQojIENPTkZJR19TRU5TT1JTX0ZTQ0hNRCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfRzc2MEEgaXMgbm90IHNldAojIENPTkZJR19T
RU5TT1JTX0c3NjIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0dMNTE4U00gaXMgbm90
IHNldAojIENPTkZJR19TRU5TT1JTX0dMNTIwU00gaXMgbm90IHNldAojIENPTkZJR19TRU5T
T1JTX0hJSDYxMzAgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX0hUVTIxIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19DT1JFVEVNUCBpcyBub3Qgc2V0CkNPTkZJR19TRU5TT1JT
X0lUODc9eQpDT05GSUdfU0VOU09SU19KQzQyPXkKIyBDT05GSUdfU0VOU09SU19MSU5FQUdF
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTYzIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0VOU09SU19MTTczIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTc1IGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19MTTc3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19M
TTc4IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTgwIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19MTTgzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTg1IGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTg3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19MTTkwIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTkyIGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19MTTkzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MTUx
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjE1IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19MVEM0MjQ1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MVEM0MjYx
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTk1MjM0IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19MTTk1MjQxIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19MTTk1MjQ1
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVgxNjA2NSBpcyBub3Qgc2V0CiMgQ09O
RklHX1NFTlNPUlNfTUFYMTYxOSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYMTY2
OCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfTUFYMTk3IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19NQVg2NjM5IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVg2NjQy
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQVg2NjUwIGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19NQVg2Njk3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19NQ1AzMDIx
IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19OQ1Q2Nzc1IGlzIG5vdCBzZXQKIyBDT05G
SUdfU0VOU09SU19OVENfVEhFUk1JU1RPUiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
UEM4NzM2MCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfUEM4NzQyNyBpcyBub3Qgc2V0
CiMgQ09ORklHX1NFTlNPUlNfUENGODU5MSBpcyBub3Qgc2V0CiMgQ09ORklHX1BNQlVTIGlz
IG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19TSFQyMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NF
TlNPUlNfU0lTNTU5NSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfU01NNjY1IGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19ETUUxNzM3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VO
U09SU19FTUMxNDAzIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19FTUMyMTAzIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0VOU09SU19FTUM2VzIwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NF
TlNPUlNfU01TQzQ3TTEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NNU0M0N00xOTIg
aXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1NNU0M0N0IzOTcgaXMgbm90IHNldAojIENP
TkZJR19TRU5TT1JTX1NDSDU2WFhfQ09NTU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19TQ0g1NjI3IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19TQ0g1NjM2IGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19BRFMxMDE1IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09S
U19BRFM3ODI4IGlzIG5vdCBzZXQKIyBDT05GSUdfU0VOU09SU19BTUM2ODIxIGlzIG5vdCBz
ZXQKIyBDT05GSUdfU0VOU09SU19JTkEyMDkgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JT
X0lOQTJYWCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVEhNQzUwIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19UTVAxMDIgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1RN
UDQwMSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVE1QNDIxIGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19WSUFfQ1BVVEVNUCBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNf
VklBNjg2QSBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVlQxMjExIGlzIG5vdCBzZXQK
IyBDT05GSUdfU0VOU09SU19WVDgyMzEgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4
Mzc4MUQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4Mzc5MUQgaXMgbm90IHNldAoj
IENPTkZJR19TRU5TT1JTX1c4Mzc5MkQgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4
Mzc5MyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzNzk1IGlzIG5vdCBzZXQKIyBD
T05GSUdfU0VOU09SU19XODNMNzg1VFMgaXMgbm90IHNldAojIENPTkZJR19TRU5TT1JTX1c4
M0w3ODZORyBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNPUlNfVzgzNjI3SEYgaXMgbm90IHNl
dAojIENPTkZJR19TRU5TT1JTX1c4MzYyN0VIRiBpcyBub3Qgc2V0CiMgQ09ORklHX1NFTlNP
UlNfQVBQTEVTTUMgaXMgbm90IHNldAoKIwojIEFDUEkgZHJpdmVycwojCkNPTkZJR19TRU5T
T1JTX0FDUElfUE9XRVI9eQojIENPTkZJR19TRU5TT1JTX0FUSzAxMTAgaXMgbm90IHNldApD
T05GSUdfVEhFUk1BTD15CkNPTkZJR19USEVSTUFMX0hXTU9OPXkKQ09ORklHX1RIRVJNQUxf
REVGQVVMVF9HT1ZfU1RFUF9XSVNFPXkKIyBDT05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9G
QUlSX1NIQVJFIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhFUk1BTF9ERUZBVUxUX0dPVl9VU0VS
X1NQQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhFUk1BTF9HT1ZfRkFJUl9TSEFSRSBpcyBu
b3Qgc2V0CkNPTkZJR19USEVSTUFMX0dPVl9TVEVQX1dJU0U9eQojIENPTkZJR19USEVSTUFM
X0dPVl9VU0VSX1NQQUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfVEhFUk1BTF9FTVVMQVRJT04g
aXMgbm90IHNldAojIENPTkZJR19JTlRFTF9QT1dFUkNMQU1QIGlzIG5vdCBzZXQKIyBDT05G
SUdfWDg2X1BLR19URU1QX1RIRVJNQUwgaXMgbm90IHNldAojIENPTkZJR19BQ1BJX0lOVDM0
MDNfVEhFUk1BTCBpcyBub3Qgc2V0CgojCiMgVGV4YXMgSW5zdHJ1bWVudHMgdGhlcm1hbCBk
cml2ZXJzCiMKQ09ORklHX1dBVENIRE9HPXkKQ09ORklHX1dBVENIRE9HX0NPUkU9eQojIENP
TkZJR19XQVRDSERPR19OT1dBWU9VVCBpcyBub3Qgc2V0CgojCiMgV2F0Y2hkb2cgRGV2aWNl
IERyaXZlcnMKIwojIENPTkZJR19TT0ZUX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFdfV0FUQ0hET0cgaXMgbm90IHNldAojIENPTkZJR19BQ1FVSVJFX1dEVCBpcyBub3Qgc2V0
CiMgQ09ORklHX0FEVkFOVEVDSF9XRFQgaXMgbm90IHNldAojIENPTkZJR19BTElNMTUzNV9X
RFQgaXMgbm90IHNldAojIENPTkZJR19BTElNNzEwMV9XRFQgaXMgbm90IHNldAojIENPTkZJ
R19GNzE4MDhFX1dEVCBpcyBub3Qgc2V0CkNPTkZJR19TUDUxMDBfVENPPXkKIyBDT05GSUdf
U0M1MjBfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfU0JDX0ZJVFBDMl9XQVRDSERPRyBpcyBu
b3Qgc2V0CiMgQ09ORklHX0VVUk9URUNIX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lCNzAw
X1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lCTUFTUiBpcyBub3Qgc2V0CiMgQ09ORklHX1dB
RkVSX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0k2MzAwRVNCX1dEVCBpcyBub3Qgc2V0CiMg
Q09ORklHX0lFNlhYX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lUQ09fV0RUIGlzIG5vdCBz
ZXQKIyBDT05GSUdfSVQ4NzEyRl9XRFQgaXMgbm90IHNldAojIENPTkZJR19JVDg3X1dEVCBp
cyBub3Qgc2V0CiMgQ09ORklHX0hQX1dBVENIRE9HIGlzIG5vdCBzZXQKIyBDT05GSUdfU0Mx
MjAwX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1BDODc0MTNfV0RUIGlzIG5vdCBzZXQKIyBD
T05GSUdfTlZfVENPIGlzIG5vdCBzZXQKIyBDT05GSUdfNjBYWF9XRFQgaXMgbm90IHNldAoj
IENPTkZJR19TQkM4MzYwX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX0NQVTVfV0RUIGlzIG5v
dCBzZXQKIyBDT05GSUdfU01TQ19TQ0gzMTFYX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1NN
U0MzN0I3ODdfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfVklBX1dEVCBpcyBub3Qgc2V0CiMg
Q09ORklHX1c4MzYyN0hGX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1c4MzY5N0hGX1dEVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1c4MzY5N1VHX1dEVCBpcyBub3Qgc2V0CiMgQ09ORklHX1c4
Mzg3N0ZfV0RUIGlzIG5vdCBzZXQKIyBDT05GSUdfVzgzOTc3Rl9XRFQgaXMgbm90IHNldAoj
IENPTkZJR19NQUNIWl9XRFQgaXMgbm90IHNldAojIENPTkZJR19TQkNfRVBYX0MzX1dBVENI
RE9HIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9XRFQ9eQoKIwojIFBDSS1iYXNlZCBXYXRjaGRv
ZyBDYXJkcwojCiMgQ09ORklHX1BDSVBDV0FUQ0hET0cgaXMgbm90IHNldAojIENPTkZJR19X
RFRQQ0kgaXMgbm90IHNldAoKIwojIFVTQi1iYXNlZCBXYXRjaGRvZyBDYXJkcwojCiMgQ09O
RklHX1VTQlBDV0FUQ0hET0cgaXMgbm90IHNldApDT05GSUdfU1NCX1BPU1NJQkxFPXkKCiMK
IyBTb25pY3MgU2lsaWNvbiBCYWNrcGxhbmUKIwojIENPTkZJR19TU0IgaXMgbm90IHNldApD
T05GSUdfQkNNQV9QT1NTSUJMRT15CgojCiMgQnJvYWRjb20gc3BlY2lmaWMgQU1CQQojCiMg
Q09ORklHX0JDTUEgaXMgbm90IHNldAoKIwojIE11bHRpZnVuY3Rpb24gZGV2aWNlIGRyaXZl
cnMKIwpDT05GSUdfTUZEX0NPUkU9eQojIENPTkZJR19NRkRfQ1M1NTM1IGlzIG5vdCBzZXQK
IyBDT05GSUdfTUZEX0FTMzcxMSBpcyBub3Qgc2V0CiMgQ09ORklHX1BNSUNfQURQNTUyMCBp
cyBub3Qgc2V0CiMgQ09ORklHX01GRF9DUk9TX0VDIGlzIG5vdCBzZXQKIyBDT05GSUdfUE1J
Q19EQTkwM1ggaXMgbm90IHNldAojIENPTkZJR19NRkRfREE5MDUyX0kyQyBpcyBub3Qgc2V0
CiMgQ09ORklHX01GRF9EQTkwNTUgaXMgbm90IHNldAojIENPTkZJR19NRkRfREE5MDYzIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUZEX01DMTNYWFhfSTJDIGlzIG5vdCBzZXQKIyBDT05GSUdf
SFRDX1BBU0lDMyBpcyBub3Qgc2V0CiMgQ09ORklHX0xQQ19JQ0ggaXMgbm90IHNldApDT05G
SUdfTFBDX1NDSD15CiMgQ09ORklHX01GRF9KQU5aX0NNT0RJTyBpcyBub3Qgc2V0CiMgQ09O
RklHX01GRF9LRU1QTEQgaXMgbm90IHNldAojIENPTkZJR19NRkRfODhQTTgwMCBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF84OFBNODA1IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEXzg4UE04
NjBYIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01BWDE0NTc3IGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX01BWDc3Njg2IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX01BWDc3NjkzIGlzIG5v
dCBzZXQKIyBDT05GSUdfTUZEX01BWDg5MDcgaXMgbm90IHNldAojIENPTkZJR19NRkRfTUFY
ODkyNSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9NQVg4OTk3IGlzIG5vdCBzZXQKIyBDT05G
SUdfTUZEX01BWDg5OTggaXMgbm90IHNldAojIENPTkZJR19NRkRfVklQRVJCT0FSRCBpcyBu
b3Qgc2V0CiMgQ09ORklHX01GRF9SRVRVIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1BDRjUw
NjMzIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1JEQzMyMVggaXMgbm90IHNldAojIENPTkZJ
R19NRkRfUlRTWF9QQ0kgaXMgbm90IHNldAojIENPTkZJR19NRkRfUkM1VDU4MyBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9TRUNfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9TSTQ3
NlhfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9TTTUwMSBpcyBub3Qgc2V0CiMgQ09O
RklHX01GRF9TTVNDIGlzIG5vdCBzZXQKIyBDT05GSUdfQUJYNTAwX0NPUkUgaXMgbm90IHNl
dAojIENPTkZJR19NRkRfU1RNUEUgaXMgbm90IHNldAojIENPTkZJR19NRkRfU1lTQ09OIGlz
IG5vdCBzZXQKIyBDT05GSUdfTUZEX1RJX0FNMzM1WF9UU0NBREMgaXMgbm90IHNldAojIENP
TkZJR19NRkRfTFAzOTQzIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX0xQODc4OCBpcyBub3Qg
c2V0CiMgQ09ORklHX01GRF9QQUxNQVMgaXMgbm90IHNldAojIENPTkZJR19UUFM2MTA1WCBp
cyBub3Qgc2V0CiMgQ09ORklHX1RQUzY1MDdYIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQ
UzY1MDkwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQUzY1MjE3IGlzIG5vdCBzZXQKIyBD
T05GSUdfTUZEX1RQUzY1ODZYIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RQUzgwMDMxIGlz
IG5vdCBzZXQKIyBDT05GSUdfVFdMNDAzMF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfVFdM
NjA0MF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dMMTI3M19DT1JFIGlzIG5vdCBz
ZXQKIyBDT05GSUdfTUZEX0xNMzUzMyBpcyBub3Qgc2V0CiMgQ09ORklHX01GRF9UQzM1ODlY
IGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1RNSU8gaXMgbm90IHNldAojIENPTkZJR19NRkRf
Vlg4NTUgaXMgbm90IHNldAojIENPTkZJR19NRkRfQVJJWk9OQV9JMkMgaXMgbm90IHNldAoj
IENPTkZJR19NRkRfV004NDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfTUZEX1dNODMxWF9JMkMg
aXMgbm90IHNldAojIENPTkZJR19NRkRfV004MzUwX0kyQyBpcyBub3Qgc2V0CiMgQ09ORklH
X01GRF9XTTg5OTQgaXMgbm90IHNldAojIENPTkZJR19SRUdVTEFUT1IgaXMgbm90IHNldApD
T05GSUdfTUVESUFfU1VQUE9SVD15CgojCiMgTXVsdGltZWRpYSBjb3JlIHN1cHBvcnQKIwpD
T05GSUdfTUVESUFfQ0FNRVJBX1NVUFBPUlQ9eQpDT05GSUdfTUVESUFfQU5BTE9HX1RWX1NV
UFBPUlQ9eQpDT05GSUdfTUVESUFfRElHSVRBTF9UVl9TVVBQT1JUPXkKQ09ORklHX01FRElB
X1JBRElPX1NVUFBPUlQ9eQpDT05GSUdfTUVESUFfUkNfU1VQUE9SVD15CiMgQ09ORklHX01F
RElBX0NPTlRST0xMRVIgaXMgbm90IHNldApDT05GSUdfVklERU9fREVWPXkKQ09ORklHX1ZJ
REVPX1Y0TDI9eQpDT05GSUdfVklERU9fQURWX0RFQlVHPXkKIyBDT05GSUdfVklERU9fRklY
RURfTUlOT1JfUkFOR0VTIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX1RVTkVSPXkKQ09ORklH
X1ZJREVPQlVGX0dFTj15CkNPTkZJR19WSURFT0JVRl9ETUFfU0c9eQpDT05GSUdfRFZCX0NP
UkU9eQpDT05GSUdfRFZCX05FVD15CiMgQ09ORklHX1RUUENJX0VFUFJPTSBpcyBub3Qgc2V0
CkNPTkZJR19EVkJfTUFYX0FEQVBURVJTPTgKIyBDT05GSUdfRFZCX0RZTkFNSUNfTUlOT1JT
IGlzIG5vdCBzZXQKCiMKIyBNZWRpYSBkcml2ZXJzCiMKQ09ORklHX1JDX0NPUkU9eQpDT05G
SUdfUkNfTUFQPXkKQ09ORklHX1JDX0RFQ09ERVJTPXkKQ09ORklHX0xJUkM9eQpDT05GSUdf
SVJfTElSQ19DT0RFQz15CkNPTkZJR19JUl9ORUNfREVDT0RFUj15CkNPTkZJR19JUl9SQzVf
REVDT0RFUj15CkNPTkZJR19JUl9SQzZfREVDT0RFUj15CkNPTkZJR19JUl9KVkNfREVDT0RF
Uj15CkNPTkZJR19JUl9TT05ZX0RFQ09ERVI9eQpDT05GSUdfSVJfUkM1X1NaX0RFQ09ERVI9
eQpDT05GSUdfSVJfU0FOWU9fREVDT0RFUj15CkNPTkZJR19JUl9NQ0VfS0JEX0RFQ09ERVI9
eQojIENPTkZJR19SQ19ERVZJQ0VTIGlzIG5vdCBzZXQKQ09ORklHX01FRElBX1VTQl9TVVBQ
T1JUPXkKCiMKIyBXZWJjYW0gZGV2aWNlcwojCiMgQ09ORklHX1VTQl9WSURFT19DTEFTUyBp
cyBub3Qgc2V0CiMgQ09ORklHX1VTQl9HU1BDQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9Q
V0MgaXMgbm90IHNldAojIENPTkZJR19WSURFT19DUElBMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9aUjM2NFhYIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUS1dFQkNBTSBpcyBub3Qg
c2V0CiMgQ09ORklHX1VTQl9TMjI1NSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1VTQlRW
IGlzIG5vdCBzZXQKCiMKIyBBbmFsb2cgVFYgVVNCIGRldmljZXMKIwpDT05GSUdfVklERU9f
UFZSVVNCMj15CkNPTkZJR19WSURFT19QVlJVU0IyX1NZU0ZTPXkKQ09ORklHX1ZJREVPX1BW
UlVTQjJfRFZCPXkKIyBDT05GSUdfVklERU9fUFZSVVNCMl9ERUJVR0lGQyBpcyBub3Qgc2V0
CiMgQ09ORklHX1ZJREVPX0hEUFZSIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fVExHMjMw
MCBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJREVPX1VTQlZJU0lPTiBpcyBub3Qgc2V0CiMgQ09O
RklHX1ZJREVPX1NUSzExNjBfQ09NTU9OIGlzIG5vdCBzZXQKCiMKIyBBbmFsb2cvZGlnaXRh
bCBUViBVU0IgZGV2aWNlcwojCiMgQ09ORklHX1ZJREVPX0FVMDgyOCBpcyBub3Qgc2V0CiMg
Q09ORklHX1ZJREVPX0NYMjMxWFggaXMgbm90IHNldAojIENPTkZJR19WSURFT19UTTYwMDAg
aXMgbm90IHNldAoKIwojIERpZ2l0YWwgVFYgVVNCIGRldmljZXMKIwojIENPTkZJR19EVkJf
VVNCIGlzIG5vdCBzZXQKIyBDT05GSUdfRFZCX1VTQl9WMiBpcyBub3Qgc2V0CiMgQ09ORklH
X0RWQl9UVFVTQl9CVURHRVQgaXMgbm90IHNldAojIENPTkZJR19EVkJfVFRVU0JfREVDIGlz
IG5vdCBzZXQKIyBDT05GSUdfU01TX1VTQl9EUlYgaXMgbm90IHNldAojIENPTkZJR19EVkJf
QjJDMl9GTEVYQ09QX1VTQiBpcyBub3Qgc2V0CgojCiMgV2ViY2FtLCBUViAoYW5hbG9nL2Rp
Z2l0YWwpIFVTQiBkZXZpY2VzCiMKIyBDT05GSUdfVklERU9fRU0yOFhYIGlzIG5vdCBzZXQK
Q09ORklHX01FRElBX1BDSV9TVVBQT1JUPXkKCiMKIyBNZWRpYSBjYXB0dXJlIHN1cHBvcnQK
IwoKIwojIE1lZGlhIGNhcHR1cmUvYW5hbG9nIFRWIHN1cHBvcnQKIwojIENPTkZJR19WSURF
T19JVlRWIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fWk9SQU4gaXMgbm90IHNldAojIENP
TkZJR19WSURFT19IRVhJVU1fR0VNSU5JIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fSEVY
SVVNX09SSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9fTVhCIGlzIG5vdCBzZXQKCiMK
IyBNZWRpYSBjYXB0dXJlL2FuYWxvZy9oeWJyaWQgVFYgc3VwcG9ydAojCiMgQ09ORklHX1ZJ
REVPX0NYMTggaXMgbm90IHNldAojIENPTkZJR19WSURFT19DWDIzODg1IGlzIG5vdCBzZXQK
Q09ORklHX1ZJREVPX0NYMjU4MjE9eQojIENPTkZJR19WSURFT19DWDI1ODIxX0FMU0EgaXMg
bm90IHNldAojIENPTkZJR19WSURFT19DWDg4IGlzIG5vdCBzZXQKIyBDT05GSUdfVklERU9f
QlQ4NDggaXMgbm90IHNldAojIENPTkZJR19WSURFT19TQUE3MTM0IGlzIG5vdCBzZXQKIyBD
T05GSUdfVklERU9fU0FBNzE2NCBpcyBub3Qgc2V0CgojCiMgTWVkaWEgZGlnaXRhbCBUViBQ
Q0kgQWRhcHRlcnMKIwojIENPTkZJR19EVkJfQVY3MTEwIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFZCX0JVREdFVF9DT1JFIGlzIG5vdCBzZXQKIyBDT05GSUdfRFZCX0IyQzJfRkxFWENPUF9Q
Q0kgaXMgbm90IHNldAojIENPTkZJR19EVkJfUExVVE8yIGlzIG5vdCBzZXQKIyBDT05GSUdf
RFZCX0RNMTEwNSBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9QVDEgaXMgbm90IHNldAojIENP
TkZJR19NQU5USVNfQ09SRSBpcyBub3Qgc2V0CiMgQ09ORklHX0RWQl9OR0VORSBpcyBub3Qg
c2V0CiMgQ09ORklHX0RWQl9EREJSSURHRSBpcyBub3Qgc2V0CiMgQ09ORklHX1Y0TF9QTEFU
Rk9STV9EUklWRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfVjRMX01FTTJNRU1fRFJJVkVSUyBp
cyBub3Qgc2V0CiMgQ09ORklHX1Y0TF9URVNUX0RSSVZFUlMgaXMgbm90IHNldAoKIwojIFN1
cHBvcnRlZCBNTUMvU0RJTyBhZGFwdGVycwojCiMgQ09ORklHX1JBRElPX0FEQVBURVJTIGlz
IG5vdCBzZXQKQ09ORklHX1ZJREVPX0NYMjM0MVg9eQpDT05GSUdfVklERU9fQlRDWD15CkNP
TkZJR19WSURFT19UVkVFUFJPTT15CiMgQ09ORklHX0NZUFJFU1NfRklSTVdBUkUgaXMgbm90
IHNldAoKIwojIE1lZGlhIGFuY2lsbGFyeSBkcml2ZXJzICh0dW5lcnMsIHNlbnNvcnMsIGky
YywgZnJvbnRlbmRzKQojCkNPTkZJR19NRURJQV9TVUJEUlZfQVVUT1NFTEVDVD15CkNPTkZJ
R19NRURJQV9BVFRBQ0g9eQpDT05GSUdfVklERU9fSVJfSTJDPXkKCiMKIyBBdWRpbyBkZWNv
ZGVycywgcHJvY2Vzc29ycyBhbmQgbWl4ZXJzCiMKQ09ORklHX1ZJREVPX01TUDM0MDA9eQpD
T05GSUdfVklERU9fQ1M1M0wzMkE9eQpDT05GSUdfVklERU9fV004Nzc1PXkKCiMKIyBSRFMg
ZGVjb2RlcnMKIwoKIwojIFZpZGVvIGRlY29kZXJzCiMKQ09ORklHX1ZJREVPX1NBQTcxMVg9
eQoKIwojIFZpZGVvIGFuZCBhdWRpbyBkZWNvZGVycwojCkNPTkZJR19WSURFT19DWDI1ODQw
PXkKCiMKIyBWaWRlbyBlbmNvZGVycwojCgojCiMgQ2FtZXJhIHNlbnNvciBkZXZpY2VzCiMK
CiMKIyBGbGFzaCBkZXZpY2VzCiMKCiMKIyBWaWRlbyBpbXByb3ZlbWVudCBjaGlwcwojCgoj
CiMgQXVkaW8vVmlkZW8gY29tcHJlc3Npb24gY2hpcHMKIwoKIwojIE1pc2NlbGxhbmVvdXMg
aGVscGVyIGNoaXBzCiMKCiMKIyBTZW5zb3JzIHVzZWQgb24gc29jX2NhbWVyYSBkcml2ZXIK
IwpDT05GSUdfTUVESUFfVFVORVI9eQpDT05GSUdfTUVESUFfVFVORVJfU0lNUExFPXkKQ09O
RklHX01FRElBX1RVTkVSX1REQTgyOTA9eQpDT05GSUdfTUVESUFfVFVORVJfVERBODI3WD15
CkNPTkZJR19NRURJQV9UVU5FUl9UREExODI3MT15CkNPTkZJR19NRURJQV9UVU5FUl9UREE5
ODg3PXkKQ09ORklHX01FRElBX1RVTkVSX1RFQTU3NjE9eQpDT05GSUdfTUVESUFfVFVORVJf
VEVBNTc2Nz15CkNPTkZJR19NRURJQV9UVU5FUl9NVDIwWFg9eQpDT05GSUdfTUVESUFfVFVO
RVJfWEMyMDI4PXkKQ09ORklHX01FRElBX1RVTkVSX1hDNTAwMD15CkNPTkZJR19NRURJQV9U
VU5FUl9YQzQwMDA9eQpDT05GSUdfTUVESUFfVFVORVJfTUM0NFM4MDM9eQoKIwojIE11bHRp
c3RhbmRhcmQgKHNhdGVsbGl0ZSkgZnJvbnRlbmRzCiMKCiMKIyBNdWx0aXN0YW5kYXJkIChj
YWJsZSArIHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKIwoKIwojIERWQi1TIChzYXRlbGxpdGUp
IGZyb250ZW5kcwojCgojCiMgRFZCLVQgKHRlcnJlc3RyaWFsKSBmcm9udGVuZHMKIwpDT05G
SUdfRFZCX1REQTEwMDQ4PXkKCiMKIyBEVkItQyAoY2FibGUpIGZyb250ZW5kcwojCgojCiMg
QVRTQyAoTm9ydGggQW1lcmljYW4vS29yZWFuIFRlcnJlc3RyaWFsL0NhYmxlIERUVikgZnJv
bnRlbmRzCiMKQ09ORklHX0RWQl9MR0RUMzMwWD15CkNPTkZJR19EVkJfUzVIMTQwOT15CkNP
TkZJR19EVkJfUzVIMTQxMT15CgojCiMgSVNEQi1UICh0ZXJyZXN0cmlhbCkgZnJvbnRlbmRz
CiMKCiMKIyBEaWdpdGFsIHRlcnJlc3RyaWFsIG9ubHkgdHVuZXJzL1BMTAojCgojCiMgU0VD
IGNvbnRyb2wgZGV2aWNlcyBmb3IgRFZCLVMKIwoKIwojIFRvb2xzIHRvIGRldmVsb3AgbmV3
IGZyb250ZW5kcwojCiMgQ09ORklHX0RWQl9EVU1NWV9GRSBpcyBub3Qgc2V0CgojCiMgR3Jh
cGhpY3Mgc3VwcG9ydAojCkNPTkZJR19BR1A9eQpDT05GSUdfQUdQX0FNRDY0PXkKQ09ORklH
X0FHUF9JTlRFTD15CiMgQ09ORklHX0FHUF9TSVMgaXMgbm90IHNldAojIENPTkZJR19BR1Bf
VklBIGlzIG5vdCBzZXQKQ09ORklHX0lOVEVMX0dUVD15CkNPTkZJR19WR0FfQVJCPXkKQ09O
RklHX1ZHQV9BUkJfTUFYX0dQVVM9MTYKIyBDT05GSUdfVkdBX1NXSVRDSEVST08gaXMgbm90
IHNldApDT05GSUdfRFJNPXkKQ09ORklHX0RSTV9LTVNfSEVMUEVSPXkKQ09ORklHX0RSTV9L
TVNfRkJfSEVMUEVSPXkKIyBDT05GSUdfRFJNX0xPQURfRURJRF9GSVJNV0FSRSBpcyBub3Qg
c2V0CkNPTkZJR19EUk1fVFRNPXkKCiMKIyBJMkMgZW5jb2RlciBvciBoZWxwZXIgY2hpcHMK
IwojIENPTkZJR19EUk1fSTJDX0NINzAwNiBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9JMkNf
U0lMMTY0IGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX0kyQ19OWFBfVERBOTk4WCBpcyBub3Qg
c2V0CiMgQ09ORklHX0RSTV9UREZYIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX1IxMjggaXMg
bm90IHNldApDT05GSUdfRFJNX1JBREVPTj15CiMgQ09ORklHX0RSTV9SQURFT05fVU1TIGlz
IG5vdCBzZXQKIyBDT05GSUdfRFJNX05PVVZFQVUgaXMgbm90IHNldAojIENPTkZJR19EUk1f
STgxMCBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9JOTE1IGlzIG5vdCBzZXQKIyBDT05GSUdf
RFJNX01HQSBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9TSVMgaXMgbm90IHNldAojIENPTkZJ
R19EUk1fVklBIGlzIG5vdCBzZXQKIyBDT05GSUdfRFJNX1NBVkFHRSBpcyBub3Qgc2V0CiMg
Q09ORklHX0RSTV9WTVdHRlggaXMgbm90IHNldAojIENPTkZJR19EUk1fR01BNTAwIGlzIG5v
dCBzZXQKIyBDT05GSUdfRFJNX1VETCBpcyBub3Qgc2V0CiMgQ09ORklHX0RSTV9BU1QgaXMg
bm90IHNldAojIENPTkZJR19EUk1fTUdBRzIwMCBpcyBub3Qgc2V0CkNPTkZJR19EUk1fQ0lS
UlVTX1FFTVU9eQpDT05GSUdfRFJNX1FYTD15CiMgQ09ORklHX0RSTV9CT0NIUyBpcyBub3Qg
c2V0CiMgQ09ORklHX1ZHQVNUQVRFIGlzIG5vdCBzZXQKQ09ORklHX1ZJREVPX09VVFBVVF9D
T05UUk9MPXkKQ09ORklHX0hETUk9eQpDT05GSUdfRkI9eQojIENPTkZJR19GSVJNV0FSRV9F
RElEIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfRERDIGlzIG5vdCBzZXQKQ09ORklHX0ZCX0JP
T1RfVkVTQV9TVVBQT1JUPXkKQ09ORklHX0ZCX0NGQl9GSUxMUkVDVD15CkNPTkZJR19GQl9D
RkJfQ09QWUFSRUE9eQpDT05GSUdfRkJfQ0ZCX0lNQUdFQkxJVD15CiMgQ09ORklHX0ZCX0NG
Ql9SRVZfUElYRUxTX0lOX0JZVEUgaXMgbm90IHNldApDT05GSUdfRkJfU1lTX0ZJTExSRUNU
PXkKQ09ORklHX0ZCX1NZU19DT1BZQVJFQT15CkNPTkZJR19GQl9TWVNfSU1BR0VCTElUPXkK
IyBDT05GSUdfRkJfRk9SRUlHTl9FTkRJQU4gaXMgbm90IHNldApDT05GSUdfRkJfU1lTX0ZP
UFM9eQpDT05GSUdfRkJfREVGRVJSRURfSU89eQojIENPTkZJR19GQl9TVkdBTElCIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfTUFDTU9ERVMgaXMgbm90IHNldAojIENPTkZJR19GQl9CQUNL
TElHSFQgaXMgbm90IHNldApDT05GSUdfRkJfTU9ERV9IRUxQRVJTPXkKQ09ORklHX0ZCX1RJ
TEVCTElUVElORz15CgojCiMgRnJhbWUgYnVmZmVyIGhhcmR3YXJlIGRyaXZlcnMKIwojIENP
TkZJR19GQl9DSVJSVVMgaXMgbm90IHNldAojIENPTkZJR19GQl9QTTIgaXMgbm90IHNldAoj
IENPTkZJR19GQl9DWUJFUjIwMDAgaXMgbm90IHNldAojIENPTkZJR19GQl9BUkMgaXMgbm90
IHNldAojIENPTkZJR19GQl9BU0lMSUFOVCBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX0lNU1RU
IGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfVkdBMTYgaXMgbm90IHNldAojIENPTkZJR19GQl9V
VkVTQSBpcyBub3Qgc2V0CkNPTkZJR19GQl9WRVNBPXkKIyBDT05GSUdfRkJfTjQxMSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0ZCX0hHQSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX09QRU5DT1JF
UyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1MxRDEzWFhYIGlzIG5vdCBzZXQKIyBDT05GSUdf
RkJfTlZJRElBIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfUklWQSBpcyBub3Qgc2V0CiMgQ09O
RklHX0ZCX0k3NDAgaXMgbm90IHNldAojIENPTkZJR19GQl9MRTgwNTc4IGlzIG5vdCBzZXQK
IyBDT05GSUdfRkJfTUFUUk9YIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfUkFERU9OIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfQVRZMTI4IGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfQVRZIGlz
IG5vdCBzZXQKIyBDT05GSUdfRkJfUzMgaXMgbm90IHNldAojIENPTkZJR19GQl9TQVZBR0Ug
aXMgbm90IHNldAojIENPTkZJR19GQl9TSVMgaXMgbm90IHNldAojIENPTkZJR19GQl9WSUEg
aXMgbm90IHNldAojIENPTkZJR19GQl9ORU9NQUdJQyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZC
X0tZUk8gaXMgbm90IHNldAojIENPTkZJR19GQl8zREZYIGlzIG5vdCBzZXQKIyBDT05GSUdf
RkJfVk9PRE9PMSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1ZUODYyMyBpcyBub3Qgc2V0CiMg
Q09ORklHX0ZCX1RSSURFTlQgaXMgbm90IHNldAojIENPTkZJR19GQl9BUksgaXMgbm90IHNl
dAojIENPTkZJR19GQl9QTTMgaXMgbm90IHNldAojIENPTkZJR19GQl9DQVJNSU5FIGlzIG5v
dCBzZXQKIyBDT05GSUdfRkJfVE1JTyBpcyBub3Qgc2V0CiMgQ09ORklHX0ZCX1NNU0NVRlgg
aXMgbm90IHNldApDT05GSUdfRkJfVURMPXkKIyBDT05GSUdfRkJfR09MREZJU0ggaXMgbm90
IHNldAojIENPTkZJR19GQl9WSVJUVUFMIGlzIG5vdCBzZXQKQ09ORklHX1hFTl9GQkRFVl9G
Uk9OVEVORD15CiMgQ09ORklHX0ZCX01FVFJPTk9NRSBpcyBub3Qgc2V0CiMgQ09ORklHX0ZC
X01CODYyWFggaXMgbm90IHNldAojIENPTkZJR19GQl9CUk9BRFNIRUVUIGlzIG5vdCBzZXQK
IyBDT05GSUdfRkJfQVVPX0sxOTBYIGlzIG5vdCBzZXQKIyBDT05GSUdfRkJfU0lNUExFIGlz
IG5vdCBzZXQKIyBDT05GSUdfRVhZTk9TX1ZJREVPIGlzIG5vdCBzZXQKQ09ORklHX0JBQ0tM
SUdIVF9MQ0RfU1VQUE9SVD15CiMgQ09ORklHX0xDRF9DTEFTU19ERVZJQ0UgaXMgbm90IHNl
dApDT05GSUdfQkFDS0xJR0hUX0NMQVNTX0RFVklDRT15CkNPTkZJR19CQUNLTElHSFRfR0VO
RVJJQz15CiMgQ09ORklHX0JBQ0tMSUdIVF9BUFBMRSBpcyBub3Qgc2V0CiMgQ09ORklHX0JB
Q0tMSUdIVF9TQUhBUkEgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfQURQODg2MCBp
cyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tMSUdIVF9BRFA4ODcwIGlzIG5vdCBzZXQKIyBDT05G
SUdfQkFDS0xJR0hUX0xNMzYzMEEgaXMgbm90IHNldAojIENPTkZJR19CQUNLTElHSFRfTE0z
NjM5IGlzIG5vdCBzZXQKIyBDT05GSUdfQkFDS0xJR0hUX0xQODU1WCBpcyBub3Qgc2V0CiMg
Q09ORklHX0JBQ0tMSUdIVF9MVjUyMDdMUCBpcyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tMSUdI
VF9CRDYxMDcgaXMgbm90IHNldAoKIwojIENvbnNvbGUgZGlzcGxheSBkcml2ZXIgc3VwcG9y
dAojCkNPTkZJR19WR0FfQ09OU09MRT15CkNPTkZJR19WR0FDT05fU09GVF9TQ1JPTExCQUNL
PXkKQ09ORklHX1ZHQUNPTl9TT0ZUX1NDUk9MTEJBQ0tfU0laRT02NApDT05GSUdfRFVNTVlf
Q09OU09MRT15CkNPTkZJR19GUkFNRUJVRkZFUl9DT05TT0xFPXkKQ09ORklHX0ZSQU1FQlVG
RkVSX0NPTlNPTEVfREVURUNUX1BSSU1BUlk9eQojIENPTkZJR19GUkFNRUJVRkZFUl9DT05T
T0xFX1JPVEFUSU9OIGlzIG5vdCBzZXQKQ09ORklHX0xPR089eQojIENPTkZJR19MT0dPX0xJ
TlVYX01PTk8gaXMgbm90IHNldAojIENPTkZJR19MT0dPX0xJTlVYX1ZHQTE2IGlzIG5vdCBz
ZXQKQ09ORklHX0xPR09fTElOVVhfQ0xVVDIyND15CkNPTkZJR19TT1VORD15CkNPTkZJR19T
T1VORF9PU1NfQ09SRT15CkNPTkZJR19TT1VORF9PU1NfQ09SRV9QUkVDTEFJTT15CkNPTkZJ
R19TTkQ9eQpDT05GSUdfU05EX1RJTUVSPXkKQ09ORklHX1NORF9QQ009eQpDT05GSUdfU05E
X0hXREVQPXkKQ09ORklHX1NORF9SQVdNSURJPXkKQ09ORklHX1NORF9TRVFVRU5DRVI9eQpD
T05GSUdfU05EX1NFUV9EVU1NWT15CkNPTkZJR19TTkRfT1NTRU1VTD15CkNPTkZJR19TTkRf
TUlYRVJfT1NTPXkKQ09ORklHX1NORF9QQ01fT1NTPXkKQ09ORklHX1NORF9QQ01fT1NTX1BM
VUdJTlM9eQpDT05GSUdfU05EX1NFUVVFTkNFUl9PU1M9eQpDT05GSUdfU05EX0hSVElNRVI9
eQpDT05GSUdfU05EX1NFUV9IUlRJTUVSX0RFRkFVTFQ9eQpDT05GSUdfU05EX0RZTkFNSUNf
TUlOT1JTPXkKQ09ORklHX1NORF9NQVhfQ0FSRFM9MzIKQ09ORklHX1NORF9TVVBQT1JUX09M
RF9BUEk9eQpDT05GSUdfU05EX1ZFUkJPU0VfUFJPQ0ZTPXkKIyBDT05GSUdfU05EX1ZFUkJP
U0VfUFJJTlRLIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0RFQlVHIGlzIG5vdCBzZXQKQ09O
RklHX1NORF9WTUFTVEVSPXkKQ09ORklHX1NORF9LQ1RMX0pBQ0s9eQpDT05GSUdfU05EX0RN
QV9TR0JVRj15CkNPTkZJR19TTkRfUkFXTUlESV9TRVE9eQpDT05GSUdfU05EX09QTDNfTElC
X1NFUT15CiMgQ09ORklHX1NORF9PUEw0X0xJQl9TRVEgaXMgbm90IHNldAojIENPTkZJR19T
TkRfU0JBV0VfU0VRIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VNVTEwSzFfU0VRIGlzIG5v
dCBzZXQKQ09ORklHX1NORF9NUFU0MDFfVUFSVD15CkNPTkZJR19TTkRfT1BMM19MSUI9eQpD
T05GSUdfU05EX0RSSVZFUlM9eQojIENPTkZJR19TTkRfUENTUCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NORF9EVU1NWSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9BTE9PUCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9WSVJNSURJIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX01UUEFWIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX1NFUklBTF9VMTY1NTAgaXMgbm90IHNldAojIENPTkZJ
R19TTkRfTVBVNDAxIGlzIG5vdCBzZXQKQ09ORklHX1NORF9QQ0k9eQojIENPTkZJR19TTkRf
QUQxODg5IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FMUzMwMCBpcyBub3Qgc2V0CiMgQ09O
RklHX1NORF9BTFM0MDAwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FMSTU0NTEgaXMgbm90
IHNldAojIENPTkZJR19TTkRfQVNJSFBJIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FUSUlY
UCBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9BVElJWFBfTU9ERU0gaXMgbm90IHNldAojIENP
TkZJR19TTkRfQVU4ODEwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0FVODgyMCBpcyBub3Qg
c2V0CiMgQ09ORklHX1NORF9BVTg4MzAgaXMgbm90IHNldAojIENPTkZJR19TTkRfQVcyIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0FaVDMzMjggaXMgbm90IHNldAojIENPTkZJR19TTkRf
QlQ4N1ggaXMgbm90IHNldAojIENPTkZJR19TTkRfQ0EwMTA2IGlzIG5vdCBzZXQKQ09ORklH
X1NORF9DTUlQQ0k9eQpDT05GSUdfU05EX09YWUdFTl9MSUI9eQpDT05GSUdfU05EX09YWUdF
Tj15CiMgQ09ORklHX1NORF9DUzQyODEgaXMgbm90IHNldAojIENPTkZJR19TTkRfQ1M0NlhY
IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0NTNTUzMCBpcyBub3Qgc2V0CiMgQ09ORklHX1NO
RF9DUzU1MzVBVURJTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9DVFhGSSBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9EQVJMQTIwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0dJTkEyMCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9MQVlMQTIwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05E
X0RBUkxBMjQgaXMgbm90IHNldAojIENPTkZJR19TTkRfR0lOQTI0IGlzIG5vdCBzZXQKIyBD
T05GSUdfU05EX0xBWUxBMjQgaXMgbm90IHNldAojIENPTkZJR19TTkRfTU9OQSBpcyBub3Qg
c2V0CiMgQ09ORklHX1NORF9NSUEgaXMgbm90IHNldAojIENPTkZJR19TTkRfRUNITzNHIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0lORElHTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9J
TkRJR09JTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9JTkRJR09ESiBpcyBub3Qgc2V0CiMg
Q09ORklHX1NORF9JTkRJR09JT1ggaXMgbm90IHNldAojIENPTkZJR19TTkRfSU5ESUdPREpY
IGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VNVTEwSzEgaXMgbm90IHNldAojIENPTkZJR19T
TkRfRU1VMTBLMVggaXMgbm90IHNldAojIENPTkZJR19TTkRfRU5TMTM3MCBpcyBub3Qgc2V0
CiMgQ09ORklHX1NORF9FTlMxMzcxIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0VTMTkzOCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9FUzE5NjggaXMgbm90IHNldAojIENPTkZJR19TTkRf
Rk04MDEgaXMgbm90IHNldApDT05GSUdfU05EX0hEQV9JTlRFTD15CkNPTkZJR19TTkRfSERB
X1BSRUFMTE9DX1NJWkU9NjQKQ09ORklHX1NORF9IREFfSFdERVA9eQojIENPTkZJR19TTkRf
SERBX1JFQ09ORklHIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0hEQV9JTlBVVF9CRUVQIGlz
IG5vdCBzZXQKIyBDT05GSUdfU05EX0hEQV9JTlBVVF9KQUNLIGlzIG5vdCBzZXQKIyBDT05G
SUdfU05EX0hEQV9QQVRDSF9MT0FERVIgaXMgbm90IHNldApDT05GSUdfU05EX0hEQV9DT0RF
Q19SRUFMVEVLPXkKQ09ORklHX1NORF9IREFfQ09ERUNfQU5BTE9HPXkKQ09ORklHX1NORF9I
REFfQ09ERUNfU0lHTUFURUw9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19WSUE9eQpDT05GSUdf
U05EX0hEQV9DT0RFQ19IRE1JPXkKQ09ORklHX1NORF9IREFfQ09ERUNfQ0lSUlVTPXkKQ09O
RklHX1NORF9IREFfQ09ERUNfQ09ORVhBTlQ9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19DQTAx
MTA9eQpDT05GSUdfU05EX0hEQV9DT0RFQ19DQTAxMzI9eQojIENPTkZJR19TTkRfSERBX0NP
REVDX0NBMDEzMl9EU1AgaXMgbm90IHNldApDT05GSUdfU05EX0hEQV9DT0RFQ19DTUVESUE9
eQpDT05GSUdfU05EX0hEQV9DT0RFQ19TSTMwNTQ9eQpDT05GSUdfU05EX0hEQV9HRU5FUklD
PXkKQ09ORklHX1NORF9IREFfUE9XRVJfU0FWRV9ERUZBVUxUPTAKIyBDT05GSUdfU05EX0hE
U1AgaXMgbm90IHNldAojIENPTkZJR19TTkRfSERTUE0gaXMgbm90IHNldAojIENPTkZJR19T
TkRfSUNFMTcxMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9JQ0UxNzI0IGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX0lOVEVMOFgwIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0lOVEVMOFgw
TSBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9LT1JHMTIxMiBpcyBub3Qgc2V0CiMgQ09ORklH
X1NORF9MT0xBIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX0xYNjQ2NEVTIGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX01BRVNUUk8zIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX01JWEFSVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1NORF9OTTI1NiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9Q
Q1hIUiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9SSVBUSURFIGlzIG5vdCBzZXQKIyBDT05G
SUdfU05EX1JNRTMyIGlzIG5vdCBzZXQKIyBDT05GSUdfU05EX1JNRTk2IGlzIG5vdCBzZXQK
IyBDT05GSUdfU05EX1JNRTk2NTIgaXMgbm90IHNldAojIENPTkZJR19TTkRfU09OSUNWSUJF
UyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9UUklERU5UIGlzIG5vdCBzZXQKIyBDT05GSUdf
U05EX1ZJQTgyWFggaXMgbm90IHNldAojIENPTkZJR19TTkRfVklBODJYWF9NT0RFTSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1NORF9WSVJUVU9TTyBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9W
WDIyMiBpcyBub3Qgc2V0CiMgQ09ORklHX1NORF9ZTUZQQ0kgaXMgbm90IHNldApDT05GSUdf
U05EX1VTQj15CkNPTkZJR19TTkRfVVNCX0FVRElPPXkKQ09ORklHX1NORF9VU0JfVUExMDE9
eQpDT05GSUdfU05EX1VTQl9VU1gyWT15CkNPTkZJR19TTkRfVVNCX0NBSUFRPXkKQ09ORklH
X1NORF9VU0JfQ0FJQVFfSU5QVVQ9eQojIENPTkZJR19TTkRfVVNCX1VTMTIyTCBpcyBub3Qg
c2V0CkNPTkZJR19TTkRfVVNCXzZGSVJFPXkKIyBDT05GSUdfU05EX1VTQl9ISUZBQ0UgaXMg
bm90IHNldAojIENPTkZJR19TTkRfU09DIGlzIG5vdCBzZXQKIyBDT05GSUdfU09VTkRfUFJJ
TUUgaXMgbm90IHNldAoKIwojIEhJRCBzdXBwb3J0CiMKQ09ORklHX0hJRD15CiMgQ09ORklH
X0hJRF9CQVRURVJZX1NUUkVOR1RIIGlzIG5vdCBzZXQKQ09ORklHX0hJRFJBVz15CiMgQ09O
RklHX1VISUQgaXMgbm90IHNldApDT05GSUdfSElEX0dFTkVSSUM9eQoKIwojIFNwZWNpYWwg
SElEIGRyaXZlcnMKIwpDT05GSUdfSElEX0E0VEVDSD15CiMgQ09ORklHX0hJRF9BQ1JVWCBp
cyBub3Qgc2V0CkNPTkZJR19ISURfQVBQTEU9eQojIENPTkZJR19ISURfQVBQTEVJUiBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9BVVJFQUwgaXMgbm90IHNldApDT05GSUdfSElEX0JFTEtJ
Tj15CkNPTkZJR19ISURfQ0hFUlJZPXkKQ09ORklHX0hJRF9DSElDT05ZPXkKIyBDT05GSUdf
SElEX1BST0RJS0VZUyBpcyBub3Qgc2V0CkNPTkZJR19ISURfQ1lQUkVTUz15CiMgQ09ORklH
X0hJRF9EUkFHT05SSVNFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0VNU19GRiBpcyBub3Qg
c2V0CiMgQ09ORklHX0hJRF9FTEVDT00gaXMgbm90IHNldAojIENPTkZJR19ISURfRUxPIGlz
IG5vdCBzZXQKQ09ORklHX0hJRF9FWktFWT15CiMgQ09ORklHX0hJRF9IT0xURUsgaXMgbm90
IHNldAojIENPTkZJR19ISURfSFVJT04gaXMgbm90IHNldAojIENPTkZJR19ISURfS0VZVE9V
Q0ggaXMgbm90IHNldAojIENPTkZJR19ISURfS1lFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElE
X1VDTE9HSUMgaXMgbm90IHNldAojIENPTkZJR19ISURfV0FMVE9QIGlzIG5vdCBzZXQKIyBD
T05GSUdfSElEX0dZUkFUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX0lDQURFIGlzIG5v
dCBzZXQKIyBDT05GSUdfSElEX1RXSU5IQU4gaXMgbm90IHNldApDT05GSUdfSElEX0tFTlNJ
TkdUT049eQojIENPTkZJR19ISURfTENQT1dFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9M
RU5PVk9fVFBLQkQgaXMgbm90IHNldApDT05GSUdfSElEX0xPR0lURUNIPXkKIyBDT05GSUdf
SElEX0xPR0lURUNIX0RKIGlzIG5vdCBzZXQKIyBDT05GSUdfTE9HSVRFQ0hfRkYgaXMgbm90
IHNldAojIENPTkZJR19MT0dJUlVNQkxFUEFEMl9GRiBpcyBub3Qgc2V0CiMgQ09ORklHX0xP
R0lHOTQwX0ZGIGlzIG5vdCBzZXQKIyBDT05GSUdfTE9HSVdIRUVMU19GRiBpcyBub3Qgc2V0
CiMgQ09ORklHX0hJRF9NQUdJQ01PVVNFIGlzIG5vdCBzZXQKQ09ORklHX0hJRF9NSUNST1NP
RlQ9eQpDT05GSUdfSElEX01PTlRFUkVZPXkKIyBDT05GSUdfSElEX01VTFRJVE9VQ0ggaXMg
bm90IHNldAojIENPTkZJR19ISURfTlRSSUcgaXMgbm90IHNldAojIENPTkZJR19ISURfT1JU
RUsgaXMgbm90IHNldAojIENPTkZJR19ISURfUEFOVEhFUkxPUkQgaXMgbm90IHNldAojIENP
TkZJR19ISURfUEVUQUxZTlggaXMgbm90IHNldAojIENPTkZJR19ISURfUElDT0xDRCBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9QUklNQVggaXMgbm90IHNldAojIENPTkZJR19ISURfUk9D
Q0FUIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NBSVRFSyBpcyBub3Qgc2V0CiMgQ09ORklH
X0hJRF9TQU1TVU5HIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NPTlkgaXMgbm90IHNldAoj
IENPTkZJR19ISURfU1BFRURMSU5LIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NURUVMU0VS
SUVTIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NVTlBMVVMgaXMgbm90IHNldAojIENPTkZJ
R19ISURfR1JFRU5BU0lBIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NNQVJUSk9ZUExVUyBp
cyBub3Qgc2V0CiMgQ09ORklHX0hJRF9USVZPIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1RP
UFNFRUQgaXMgbm90IHNldAojIENPTkZJR19ISURfVEhJTkdNIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX1RIUlVTVE1BU1RFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0hJRF9XQUNPTSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0hJRF9XSUlNT1RFIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1hJ
Tk1PIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1pFUk9QTFVTIGlzIG5vdCBzZXQKIyBDT05G
SUdfSElEX1pZREFDUk9OIGlzIG5vdCBzZXQKIyBDT05GSUdfSElEX1NFTlNPUl9IVUIgaXMg
bm90IHNldAoKIwojIFVTQiBISUQgc3VwcG9ydAojCkNPTkZJR19VU0JfSElEPXkKQ09ORklH
X0hJRF9QSUQ9eQpDT05GSUdfVVNCX0hJRERFVj15CgojCiMgSTJDIEhJRCBzdXBwb3J0CiMK
IyBDT05GSUdfSTJDX0hJRCBpcyBub3Qgc2V0CkNPTkZJR19VU0JfT0hDSV9MSVRUTEVfRU5E
SUFOPXkKQ09ORklHX1VTQl9TVVBQT1JUPXkKQ09ORklHX1VTQl9DT01NT049eQpDT05GSUdf
VVNCX0FSQ0hfSEFTX0hDRD15CkNPTkZJR19VU0I9eQojIENPTkZJR19VU0JfREVCVUcgaXMg
bm90IHNldApDT05GSUdfVVNCX0FOTk9VTkNFX05FV19ERVZJQ0VTPXkKCiMKIyBNaXNjZWxs
YW5lb3VzIFVTQiBvcHRpb25zCiMKQ09ORklHX1VTQl9ERUZBVUxUX1BFUlNJU1Q9eQojIENP
TkZJR19VU0JfRFlOQU1JQ19NSU5PUlMgaXMgbm90IHNldApDT05GSUdfVVNCX01PTj15CiMg
Q09ORklHX1VTQl9XVVNCX0NCQUYgaXMgbm90IHNldAoKIwojIFVTQiBIb3N0IENvbnRyb2xs
ZXIgRHJpdmVycwojCiMgQ09ORklHX1VTQl9DNjdYMDBfSENEIGlzIG5vdCBzZXQKQ09ORklH
X1VTQl9YSENJX0hDRD15CkNPTkZJR19VU0JfRUhDSV9IQ0Q9eQpDT05GSUdfVVNCX0VIQ0lf
Uk9PVF9IVUJfVFQ9eQpDT05GSUdfVVNCX0VIQ0lfVFRfTkVXU0NIRUQ9eQpDT05GSUdfVVNC
X0VIQ0lfUENJPXkKIyBDT05GSUdfVVNCX0VIQ0lfSENEX1BMQVRGT1JNIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX09YVTIxMEhQX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JU1Ax
MTZYX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9JU1AxNzYwX0hDRCBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9JU1AxMzYyX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9GVVNC
SDIwMF9IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfRk9URzIxMF9IQ0QgaXMgbm90IHNl
dApDT05GSUdfVVNCX09IQ0lfSENEPXkKQ09ORklHX1VTQl9PSENJX0hDRF9QQ0k9eQojIENP
TkZJR19VU0JfT0hDSV9IQ0RfUExBVEZPUk0gaXMgbm90IHNldApDT05GSUdfVVNCX1VIQ0lf
SENEPXkKIyBDT05GSUdfVVNCX1NMODExX0hDRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9S
OEE2NjU5N19IQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfSENEX1RFU1RfTU9ERSBpcyBu
b3Qgc2V0CgojCiMgVVNCIERldmljZSBDbGFzcyBkcml2ZXJzCiMKIyBDT05GSUdfVVNCX0FD
TSBpcyBub3Qgc2V0CkNPTkZJR19VU0JfUFJJTlRFUj15CiMgQ09ORklHX1VTQl9XRE0gaXMg
bm90IHNldAojIENPTkZJR19VU0JfVE1DIGlzIG5vdCBzZXQKCiMKIyBOT1RFOiBVU0JfU1RP
UkFHRSBkZXBlbmRzIG9uIFNDU0kgYnV0IEJMS19ERVZfU0QgbWF5CiMKCiMKIyBhbHNvIGJl
IG5lZWRlZDsgc2VlIFVTQl9TVE9SQUdFIEhlbHAgZm9yIG1vcmUgaW5mbwojCkNPTkZJR19V
U0JfU1RPUkFHRT15CiMgQ09ORklHX1VTQl9TVE9SQUdFX0RFQlVHIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX1NUT1JBR0VfUkVBTFRFSyBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TVE9S
QUdFX0RBVEFGQUIgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9GUkVFQ09NIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfSVNEMjAwIGlzIG5vdCBzZXQKIyBDT05G
SUdfVVNCX1NUT1JBR0VfVVNCQVQgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9T
RERSMDkgaXMgbm90IHNldAojIENPTkZJR19VU0JfU1RPUkFHRV9TRERSNTUgaXMgbm90IHNl
dAojIENPTkZJR19VU0JfU1RPUkFHRV9KVU1QU0hPVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TVE9SQUdFX0FMQVVEQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TVE9SQUdFX09ORVRP
VUNIIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NUT1JBR0VfS0FSTUEgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfU1RPUkFHRV9DWVBSRVNTX0FUQUNCIGlzIG5vdCBzZXQKIyBDT05GSUdf
VVNCX1NUT1JBR0VfRU5FX1VCNjI1MCBpcyBub3Qgc2V0CgojCiMgVVNCIEltYWdpbmcgZGV2
aWNlcwojCiMgQ09ORklHX1VTQl9NREM4MDAgaXMgbm90IHNldAojIENPTkZJR19VU0JfTUlD
Uk9URUsgaXMgbm90IHNldAojIENPTkZJR19VU0JfTVVTQl9IRFJDIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX0RXQzMgaXMgbm90IHNldAojIENPTkZJR19VU0JfRFdDMiBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9DSElQSURFQSBpcyBub3Qgc2V0CgojCiMgVVNCIHBvcnQgZHJpdmVy
cwojCkNPTkZJR19VU0JfU0VSSUFMPXkKIyBDT05GSUdfVVNCX1NFUklBTF9DT05TT0xFIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9HRU5FUklDIGlzIG5vdCBzZXQKIyBDT05G
SUdfVVNCX1NFUklBTF9TSU1QTEUgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0FJ
UkNBQkxFIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9BUkszMTE2IGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9CRUxLSU4gaXMgbm90IHNldAojIENPTkZJR19VU0Jf
U0VSSUFMX0NIMzQxIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9XSElURUhFQVQg
aXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0RJR0lfQUNDRUxFUE9SVCBpcyBub3Qg
c2V0CkNPTkZJR19VU0JfU0VSSUFMX0NQMjEwWD15CkNPTkZJR19VU0JfU0VSSUFMX0NZUFJF
U1NfTTg9eQojIENPTkZJR19VU0JfU0VSSUFMX0VNUEVHIGlzIG5vdCBzZXQKIyBDT05GSUdf
VVNCX1NFUklBTF9GVERJX1NJTyBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfVklT
T1IgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0lQQVEgaXMgbm90IHNldAojIENP
TkZJR19VU0JfU0VSSUFMX0lSIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9FREdF
UE9SVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfRURHRVBPUlRfVEkgaXMgbm90
IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0Y4MTIzMiBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfR0FSTUlOIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9JUFcgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0lVVSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfS0VZU1BBTl9QREEgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0tF
WVNQQU4gaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0tMU0kgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfU0VSSUFMX0tPQklMX1NDVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9T
RVJJQUxfTUNUX1UyMzIgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX01FVFJPIGlz
IG5vdCBzZXQKQ09ORklHX1VTQl9TRVJJQUxfTU9TNzcyMD15CkNPTkZJR19VU0JfU0VSSUFM
X01PUzc4NDA9eQojIENPTkZJR19VU0JfU0VSSUFMX01YVVBPUlQgaXMgbm90IHNldAojIENP
TkZJR19VU0JfU0VSSUFMX05BVk1BTiBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxf
UEwyMzAzIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9PVEk2ODU4IGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9RQ0FVWCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9T
RVJJQUxfUVVBTENPTU0gaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1NQQ1A4WDUg
aXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1NBRkUgaXMgbm90IHNldAojIENPTkZJ
R19VU0JfU0VSSUFMX1NJRVJSQVdJUkVMRVNTIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NF
UklBTF9TWU1CT0wgaXMgbm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1RJIGlzIG5vdCBz
ZXQKIyBDT05GSUdfVVNCX1NFUklBTF9DWUJFUkpBQ0sgaXMgbm90IHNldAojIENPTkZJR19V
U0JfU0VSSUFMX1hJUkNPTSBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfT1BUSU9O
IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9PTU5JTkVUIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX1NFUklBTF9PUFRJQ09OIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklB
TF9YU0VOU19NVCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9TRVJJQUxfV0lTSEJPTkUgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX1pURSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9TRVJJQUxfU1NVMTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX1NFUklBTF9RVDIgaXMg
bm90IHNldAojIENPTkZJR19VU0JfU0VSSUFMX0RFQlVHIGlzIG5vdCBzZXQKCiMKIyBVU0Ig
TWlzY2VsbGFuZW91cyBkcml2ZXJzCiMKIyBDT05GSUdfVVNCX0VNSTYyIGlzIG5vdCBzZXQK
IyBDT05GSUdfVVNCX0VNSTI2IGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0FEVVRVWCBpcyBu
b3Qgc2V0CiMgQ09ORklHX1VTQl9TRVZTRUcgaXMgbm90IHNldAojIENPTkZJR19VU0JfUklP
NTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0xFR09UT1dFUiBpcyBub3Qgc2V0CiMgQ09O
RklHX1VTQl9MQ0QgaXMgbm90IHNldAojIENPTkZJR19VU0JfTEVEIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVNCX0NZUFJFU1NfQ1k3QzYzIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX0NZVEhF
Uk0gaXMgbm90IHNldAojIENPTkZJR19VU0JfSURNT1VTRSBpcyBub3Qgc2V0CiMgQ09ORklH
X1VTQl9GVERJX0VMQU4gaXMgbm90IHNldAojIENPTkZJR19VU0JfQVBQTEVESVNQTEFZIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX1NJU1VTQlZHQSBpcyBub3Qgc2V0CiMgQ09ORklHX1VT
Ql9MRCBpcyBub3Qgc2V0CiMgQ09ORklHX1VTQl9UUkFOQ0VWSUJSQVRPUiBpcyBub3Qgc2V0
CiMgQ09ORklHX1VTQl9JT1dBUlJJT1IgaXMgbm90IHNldAojIENPTkZJR19VU0JfVEVTVCBp
cyBub3Qgc2V0CiMgQ09ORklHX1VTQl9FSFNFVF9URVNUX0ZJWFRVUkUgaXMgbm90IHNldAoj
IENPTkZJR19VU0JfSVNJR0hURlcgaXMgbm90IHNldAojIENPTkZJR19VU0JfWVVSRVggaXMg
bm90IHNldAojIENPTkZJR19VU0JfRVpVU0JfRlgyIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNC
X0hTSUNfVVNCMzUwMyBpcyBub3Qgc2V0CgojCiMgVVNCIFBoeXNpY2FsIExheWVyIGRyaXZl
cnMKIwojIENPTkZJR19VU0JfUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfVVNCX09UR19GU00g
aXMgbm90IHNldAojIENPTkZJR19OT1BfVVNCX1hDRUlWIGlzIG5vdCBzZXQKIyBDT05GSUdf
U0FNU1VOR19VU0IyUEhZIGlzIG5vdCBzZXQKIyBDT05GSUdfU0FNU1VOR19VU0IzUEhZIGlz
IG5vdCBzZXQKIyBDT05GSUdfVVNCX0lTUDEzMDEgaXMgbm90IHNldAojIENPTkZJR19VU0Jf
UkNBUl9QSFkgaXMgbm90IHNldAojIENPTkZJR19VU0JfR0FER0VUIGlzIG5vdCBzZXQKIyBD
T05GSUdfVVdCIGlzIG5vdCBzZXQKIyBDT05GSUdfTU1DIGlzIG5vdCBzZXQKIyBDT05GSUdf
TUVNU1RJQ0sgaXMgbm90IHNldApDT05GSUdfTkVXX0xFRFM9eQpDT05GSUdfTEVEU19DTEFT
Uz15CgojCiMgTEVEIGRyaXZlcnMKIwojIENPTkZJR19MRURTX0xNMzUzMCBpcyBub3Qgc2V0
CiMgQ09ORklHX0xFRFNfTE0zNjQyIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19QQ0E5NTMy
IGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19MUDM5NDQgaXMgbm90IHNldAojIENPTkZJR19M
RURTX0xQNTUyMSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfTFA1NTIzIGlzIG5vdCBzZXQK
IyBDT05GSUdfTEVEU19MUDU1NjIgaXMgbm90IHNldAojIENPTkZJR19MRURTX0xQODUwMSBp
cyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfQ0xFVk9fTUFJTCBpcyBub3Qgc2V0CiMgQ09ORklH
X0xFRFNfUENBOTU1WCBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfUENBOTYzWCBpcyBub3Qg
c2V0CiMgQ09ORklHX0xFRFNfUENBOTY4NSBpcyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfQkQy
ODAyIGlzIG5vdCBzZXQKIyBDT05GSUdfTEVEU19JTlRFTF9TUzQyMDAgaXMgbm90IHNldAoj
IENPTkZJR19MRURTX1RDQTY1MDcgaXMgbm90IHNldAojIENPTkZJR19MRURTX0xNMzU1eCBp
cyBub3Qgc2V0CiMgQ09ORklHX0xFRFNfT1QyMDAgaXMgbm90IHNldAojIENPTkZJR19MRURT
X0JMSU5LTSBpcyBub3Qgc2V0CgojCiMgTEVEIFRyaWdnZXJzCiMKIyBDT05GSUdfTEVEU19U
UklHR0VSUyBpcyBub3Qgc2V0CiMgQ09ORklHX0FDQ0VTU0lCSUxJVFkgaXMgbm90IHNldAoj
IENPTkZJR19JTkZJTklCQU5EIGlzIG5vdCBzZXQKIyBDT05GSUdfRURBQyBpcyBub3Qgc2V0
CkNPTkZJR19SVENfTElCPXkKQ09ORklHX1JUQ19DTEFTUz15CkNPTkZJR19SVENfSENUT1NZ
Uz15CkNPTkZJR19SVENfU1lTVE9IQz15CkNPTkZJR19SVENfSENUT1NZU19ERVZJQ0U9InJ0
YzAiCiMgQ09ORklHX1JUQ19ERUJVRyBpcyBub3Qgc2V0CgojCiMgUlRDIGludGVyZmFjZXMK
IwpDT05GSUdfUlRDX0lOVEZfU1lTRlM9eQpDT05GSUdfUlRDX0lOVEZfUFJPQz15CkNPTkZJ
R19SVENfSU5URl9ERVY9eQojIENPTkZJR19SVENfSU5URl9ERVZfVUlFX0VNVUwgaXMgbm90
IHNldAojIENPTkZJR19SVENfRFJWX1RFU1QgaXMgbm90IHNldAoKIwojIEkyQyBSVEMgZHJp
dmVycwojCiMgQ09ORklHX1JUQ19EUlZfRFMxMzA3IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9EUzEzNzQgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX0RTMTY3MiBpcyBub3Qg
c2V0CiMgQ09ORklHX1JUQ19EUlZfRFMzMjMyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9NQVg2OTAwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SUzVDMzcyIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUlRDX0RSVl9JU0wxMjA4IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9JU0wxMjAyMiBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfSVNMMTIwNTcgaXMgbm90
IHNldAojIENPTkZJR19SVENfRFJWX1gxMjA1IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9QQ0YyMTI3IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTIzIGlzIG5vdCBz
ZXQKIyBDT05GSUdfUlRDX0RSVl9QQ0Y4NTYzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RS
Vl9QQ0Y4NTgzIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9NNDFUODAgaXMgbm90IHNl
dAojIENPTkZJR19SVENfRFJWX0JRMzJLIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9T
MzUzOTBBIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9GTTMxMzAgaXMgbm90IHNldAoj
IENPTkZJR19SVENfRFJWX1JYODU4MSBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfUlg4
MDI1IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9FTTMwMjcgaXMgbm90IHNldAojIENP
TkZJR19SVENfRFJWX1JWMzAyOUMyIGlzIG5vdCBzZXQKCiMKIyBTUEkgUlRDIGRyaXZlcnMK
IwoKIwojIFBsYXRmb3JtIFJUQyBkcml2ZXJzCiMKQ09ORklHX1JUQ19EUlZfQ01PUz15CiMg
Q09ORklHX1JUQ19EUlZfRFMxMjg2IGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9EUzE1
MTEgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX0RTMTU1MyBpcyBub3Qgc2V0CiMgQ09O
RklHX1JUQ19EUlZfRFMxNzQyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9TVEsxN1RB
OCBpcyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfTTQ4VDg2IGlzIG5vdCBzZXQKIyBDT05G
SUdfUlRDX0RSVl9NNDhUMzUgaXMgbm90IHNldAojIENPTkZJR19SVENfRFJWX000OFQ1OSBp
cyBub3Qgc2V0CiMgQ09ORklHX1JUQ19EUlZfTVNNNjI0MiBpcyBub3Qgc2V0CiMgQ09ORklH
X1JUQ19EUlZfQlE0ODAyIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRDX0RSVl9SUDVDMDEgaXMg
bm90IHNldAojIENPTkZJR19SVENfRFJWX1YzMDIwIGlzIG5vdCBzZXQKIyBDT05GSUdfUlRD
X0RSVl9EUzI0MDQgaXMgbm90IHNldAoKIwojIG9uLUNQVSBSVEMgZHJpdmVycwojCiMgQ09O
RklHX1JUQ19EUlZfTU9YQVJUIGlzIG5vdCBzZXQKCiMKIyBISUQgU2Vuc29yIFJUQyBkcml2
ZXJzCiMKIyBDT05GSUdfUlRDX0RSVl9ISURfU0VOU09SX1RJTUUgaXMgbm90IHNldAojIENP
TkZJR19ETUFERVZJQ0VTIGlzIG5vdCBzZXQKIyBDT05GSUdfQVVYRElTUExBWSBpcyBub3Qg
c2V0CiMgQ09ORklHX1VJTyBpcyBub3Qgc2V0CiMgQ09ORklHX1ZGSU8gaXMgbm90IHNldAoj
IENPTkZJR19WSVJUX0RSSVZFUlMgaXMgbm90IHNldAoKIwojIFZpcnRpbyBkcml2ZXJzCiMK
IyBDT05GSUdfVklSVElPX1BDSSBpcyBub3Qgc2V0CiMgQ09ORklHX1ZJUlRJT19NTUlPIGlz
IG5vdCBzZXQKCiMKIyBNaWNyb3NvZnQgSHlwZXItViBndWVzdCBzdXBwb3J0CiMKIyBDT05G
SUdfSFlQRVJWIGlzIG5vdCBzZXQKCiMKIyBYZW4gZHJpdmVyIHN1cHBvcnQKIwpDT05GSUdf
WEVOX0JBTExPT049eQpDT05GSUdfWEVOX1NDUlVCX1BBR0VTPXkKQ09ORklHX1hFTl9ERVZf
RVZUQ0hOPXkKQ09ORklHX1hFTl9CQUNLRU5EPXkKQ09ORklHX1hFTkZTPXkKQ09ORklHX1hF
Tl9DT01QQVRfWEVORlM9eQpDT05GSUdfWEVOX1NZU19IWVBFUlZJU09SPXkKQ09ORklHX1hF
Tl9YRU5CVVNfRlJPTlRFTkQ9eQpDT05GSUdfWEVOX0dOVERFVj15CkNPTkZJR19YRU5fR1JB
TlRfREVWX0FMTE9DPXkKQ09ORklHX1NXSU9UTEJfWEVOPXkKQ09ORklHX1hFTl9QQ0lERVZf
QkFDS0VORD15CkNPTkZJR19YRU5fUFJJVkNNRD15CkNPTkZJR19YRU5fQUNQSV9QUk9DRVNT
T1I9eQojIENPTkZJR19YRU5fTUNFX0xPRyBpcyBub3Qgc2V0CkNPTkZJR19YRU5fSEFWRV9Q
Vk1NVT15CiMgQ09ORklHX1NUQUdJTkcgaXMgbm90IHNldAojIENPTkZJR19YODZfUExBVEZP
Uk1fREVWSUNFUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NIUk9NRV9QTEFURk9STVMgaXMgbm90
IHNldAoKIwojIEhhcmR3YXJlIFNwaW5sb2NrIGRyaXZlcnMKIwpDT05GSUdfQ0xLRVZUX0k4
MjUzPXkKQ09ORklHX0k4MjUzX0xPQ0s9eQpDT05GSUdfQ0xLQkxEX0k4MjUzPXkKIyBDT05G
SUdfTUFJTEJPWCBpcyBub3Qgc2V0CkNPTkZJR19JT01NVV9BUEk9eQpDT05GSUdfSU9NTVVf
U1VQUE9SVD15CkNPTkZJR19BTURfSU9NTVU9eQpDT05GSUdfQU1EX0lPTU1VX1NUQVRTPXkK
Q09ORklHX0RNQVJfVEFCTEU9eQojIENPTkZJR19JTlRFTF9JT01NVSBpcyBub3Qgc2V0CkNP
TkZJR19JUlFfUkVNQVA9eQoKIwojIFJlbW90ZXByb2MgZHJpdmVycwojCiMgQ09ORklHX1NU
RV9NT0RFTV9SUFJPQyBpcyBub3Qgc2V0CgojCiMgUnBtc2cgZHJpdmVycwojCiMgQ09ORklH
X1BNX0RFVkZSRVEgaXMgbm90IHNldAojIENPTkZJR19FWFRDT04gaXMgbm90IHNldAojIENP
TkZJR19NRU1PUlkgaXMgbm90IHNldAojIENPTkZJR19JSU8gaXMgbm90IHNldAojIENPTkZJ
R19OVEIgaXMgbm90IHNldAojIENPTkZJR19WTUVfQlVTIGlzIG5vdCBzZXQKIyBDT05GSUdf
UFdNIGlzIG5vdCBzZXQKIyBDT05GSUdfSVBBQ0tfQlVTIGlzIG5vdCBzZXQKIyBDT05GSUdf
UkVTRVRfQ09OVFJPTExFUiBpcyBub3Qgc2V0CiMgQ09ORklHX0ZNQyBpcyBub3Qgc2V0Cgoj
CiMgUEhZIFN1YnN5c3RlbQojCkNPTkZJR19HRU5FUklDX1BIWT15CiMgQ09ORklHX1BIWV9F
WFlOT1NfTUlQSV9WSURFTyBpcyBub3Qgc2V0CiMgQ09ORklHX0JDTV9LT05BX1VTQjJfUEhZ
IGlzIG5vdCBzZXQKIyBDT05GSUdfUE9XRVJDQVAgaXMgbm90IHNldAoKIwojIEZpcm13YXJl
IERyaXZlcnMKIwojIENPTkZJR19FREQgaXMgbm90IHNldApDT05GSUdfRklSTVdBUkVfTUVN
TUFQPXkKIyBDT05GSUdfREVMTF9SQlUgaXMgbm90IHNldAojIENPTkZJR19EQ0RCQVMgaXMg
bm90IHNldApDT05GSUdfRE1JSUQ9eQpDT05GSUdfRE1JX1NZU0ZTPXkKQ09ORklHX0RNSV9T
Q0FOX01BQ0hJTkVfTk9OX0VGSV9GQUxMQkFDSz15CiMgQ09ORklHX0lTQ1NJX0lCRlRfRklO
RCBpcyBub3Qgc2V0CiMgQ09ORklHX0dPT0dMRV9GSVJNV0FSRSBpcyBub3Qgc2V0CgojCiMg
RmlsZSBzeXN0ZW1zCiMKQ09ORklHX0RDQUNIRV9XT1JEX0FDQ0VTUz15CiMgQ09ORklHX0VY
VDJfRlMgaXMgbm90IHNldApDT05GSUdfRVhUM19GUz15CiMgQ09ORklHX0VYVDNfREVGQVVM
VFNfVE9fT1JERVJFRCBpcyBub3Qgc2V0CkNPTkZJR19FWFQzX0ZTX1hBVFRSPXkKQ09ORklH
X0VYVDNfRlNfUE9TSVhfQUNMPXkKQ09ORklHX0VYVDNfRlNfU0VDVVJJVFk9eQpDT05GSUdf
RVhUNF9GUz15CkNPTkZJR19FWFQ0X1VTRV9GT1JfRVhUMjM9eQojIENPTkZJR19FWFQ0X0ZT
X1BPU0lYX0FDTCBpcyBub3Qgc2V0CiMgQ09ORklHX0VYVDRfRlNfU0VDVVJJVFkgaXMgbm90
IHNldApDT05GSUdfRVhUNF9ERUJVRz15CkNPTkZJR19KQkQ9eQojIENPTkZJR19KQkRfREVC
VUcgaXMgbm90IHNldApDT05GSUdfSkJEMj15CkNPTkZJR19KQkQyX0RFQlVHPXkKQ09ORklH
X0ZTX01CQ0FDSEU9eQojIENPTkZJR19SRUlTRVJGU19GUyBpcyBub3Qgc2V0CiMgQ09ORklH
X0pGU19GUyBpcyBub3Qgc2V0CiMgQ09ORklHX1hGU19GUyBpcyBub3Qgc2V0CkNPTkZJR19H
RlMyX0ZTPXkKQ09ORklHX0JUUkZTX0ZTPXkKQ09ORklHX0JUUkZTX0ZTX1BPU0lYX0FDTD15
CiMgQ09ORklHX0JUUkZTX0ZTX0NIRUNLX0lOVEVHUklUWSBpcyBub3Qgc2V0CiMgQ09ORklH
X0JUUkZTX0ZTX1JVTl9TQU5JVFlfVEVTVFMgaXMgbm90IHNldAojIENPTkZJR19CVFJGU19E
RUJVRyBpcyBub3Qgc2V0CiMgQ09ORklHX0JUUkZTX0FTU0VSVCBpcyBub3Qgc2V0CiMgQ09O
RklHX05JTEZTMl9GUyBpcyBub3Qgc2V0CkNPTkZJR19GU19QT1NJWF9BQ0w9eQpDT05GSUdf
RklMRV9MT0NLSU5HPXkKQ09ORklHX0ZTTk9USUZZPXkKQ09ORklHX0ROT1RJRlk9eQpDT05G
SUdfSU5PVElGWV9VU0VSPXkKQ09ORklHX0ZBTk9USUZZPXkKQ09ORklHX1FVT1RBPXkKQ09O
RklHX1FVT1RBX05FVExJTktfSU5URVJGQUNFPXkKIyBDT05GSUdfUFJJTlRfUVVPVEFfV0FS
TklORyBpcyBub3Qgc2V0CiMgQ09ORklHX1FVT1RBX0RFQlVHIGlzIG5vdCBzZXQKQ09ORklH
X1FVT1RBX1RSRUU9eQojIENPTkZJR19RRk1UX1YxIGlzIG5vdCBzZXQKQ09ORklHX1FGTVRf
VjI9eQpDT05GSUdfUVVPVEFDVEw9eQpDT05GSUdfUVVPVEFDVExfQ09NUEFUPXkKQ09ORklH
X0FVVE9GUzRfRlM9eQpDT05GSUdfRlVTRV9GUz15CiMgQ09ORklHX0NVU0UgaXMgbm90IHNl
dAoKIwojIENhY2hlcwojCkNPTkZJR19GU0NBQ0hFPXkKQ09ORklHX0ZTQ0FDSEVfU1RBVFM9
eQpDT05GSUdfRlNDQUNIRV9ISVNUT0dSQU09eQojIENPTkZJR19GU0NBQ0hFX0RFQlVHIGlz
IG5vdCBzZXQKIyBDT05GSUdfRlNDQUNIRV9PQkpFQ1RfTElTVCBpcyBub3Qgc2V0CiMgQ09O
RklHX0NBQ0hFRklMRVMgaXMgbm90IHNldAoKIwojIENELVJPTS9EVkQgRmlsZXN5c3RlbXMK
IwpDT05GSUdfSVNPOTY2MF9GUz15CkNPTkZJR19KT0xJRVQ9eQpDT05GSUdfWklTT0ZTPXkK
Q09ORklHX1VERl9GUz15CkNPTkZJR19VREZfTkxTPXkKCiMKIyBET1MvRkFUL05UIEZpbGVz
eXN0ZW1zCiMKQ09ORklHX0ZBVF9GUz15CkNPTkZJR19NU0RPU19GUz15CkNPTkZJR19WRkFU
X0ZTPXkKQ09ORklHX0ZBVF9ERUZBVUxUX0NPREVQQUdFPTQzNwpDT05GSUdfRkFUX0RFRkFV
TFRfSU9DSEFSU0VUPSJpc284ODU5LTEiCkNPTkZJR19OVEZTX0ZTPXkKIyBDT05GSUdfTlRG
U19ERUJVRyBpcyBub3Qgc2V0CkNPTkZJR19OVEZTX1JXPXkKCiMKIyBQc2V1ZG8gZmlsZXN5
c3RlbXMKIwpDT05GSUdfUFJPQ19GUz15CkNPTkZJR19QUk9DX0tDT1JFPXkKQ09ORklHX1BS
T0NfVk1DT1JFPXkKQ09ORklHX1BST0NfU1lTQ1RMPXkKQ09ORklHX1BST0NfUEFHRV9NT05J
VE9SPXkKQ09ORklHX1NZU0ZTPXkKQ09ORklHX1RNUEZTPXkKQ09ORklHX1RNUEZTX1BPU0lY
X0FDTD15CkNPTkZJR19UTVBGU19YQVRUUj15CkNPTkZJR19IVUdFVExCRlM9eQpDT05GSUdf
SFVHRVRMQl9QQUdFPXkKIyBDT05GSUdfQ09ORklHRlNfRlMgaXMgbm90IHNldAojIENPTkZJ
R19NSVNDX0ZJTEVTWVNURU1TIGlzIG5vdCBzZXQKQ09ORklHX05FVFdPUktfRklMRVNZU1RF
TVM9eQojIENPTkZJR19ORlNfRlMgaXMgbm90IHNldAojIENPTkZJR19ORlNEIGlzIG5vdCBz
ZXQKQ09ORklHX0NFUEhfRlM9eQojIENPTkZJR19DRVBIX0ZTQ0FDSEUgaXMgbm90IHNldAoj
IENPTkZJR19DRVBIX0ZTX1BPU0lYX0FDTCBpcyBub3Qgc2V0CkNPTkZJR19DSUZTPXkKIyBD
T05GSUdfQ0lGU19TVEFUUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NJRlNfV0VBS19QV19IQVNI
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ0lGU19VUENBTEwgaXMgbm90IHNldAojIENPTkZJR19D
SUZTX1hBVFRSIGlzIG5vdCBzZXQKQ09ORklHX0NJRlNfREVCVUc9eQojIENPTkZJR19DSUZT
X0RFQlVHMiBpcyBub3Qgc2V0CiMgQ09ORklHX0NJRlNfREZTX1VQQ0FMTCBpcyBub3Qgc2V0
CiMgQ09ORklHX0NJRlNfU01CMiBpcyBub3Qgc2V0CiMgQ09ORklHX0NJRlNfRlNDQUNIRSBp
cyBub3Qgc2V0CiMgQ09ORklHX05DUF9GUyBpcyBub3Qgc2V0CiMgQ09ORklHX0NPREFfRlMg
aXMgbm90IHNldAojIENPTkZJR19BRlNfRlMgaXMgbm90IHNldApDT05GSUdfTkxTPXkKQ09O
RklHX05MU19ERUZBVUxUPSJ1dGY4IgpDT05GSUdfTkxTX0NPREVQQUdFXzQzNz15CiMgQ09O
RklHX05MU19DT0RFUEFHRV83MzcgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0Vf
Nzc1IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1MCBpcyBub3Qgc2V0CiMg
Q09ORklHX05MU19DT0RFUEFHRV84NTIgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBB
R0VfODU1IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg1NyBpcyBub3Qgc2V0
CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjAgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09E
RVBBR0VfODYxIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2MiBpcyBub3Qg
c2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjMgaXMgbm90IHNldAojIENPTkZJR19OTFNf
Q09ERVBBR0VfODY0IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzg2NSBpcyBu
b3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NjYgaXMgbm90IHNldAojIENPTkZJR19O
TFNfQ09ERVBBR0VfODY5IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzkzNiBp
cyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV85NTAgaXMgbm90IHNldAojIENPTkZJ
R19OTFNfQ09ERVBBR0VfOTMyIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzk0
OSBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19DT0RFUEFHRV84NzQgaXMgbm90IHNldAojIENP
TkZJR19OTFNfSVNPODg1OV84IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0NPREVQQUdFXzEy
NTAgaXMgbm90IHNldAojIENPTkZJR19OTFNfQ09ERVBBR0VfMTI1MSBpcyBub3Qgc2V0CkNP
TkZJR19OTFNfQVNDSUk9eQpDT05GSUdfTkxTX0lTTzg4NTlfMT15CiMgQ09ORklHX05MU19J
U084ODU5XzIgaXMgbm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV8zIGlzIG5vdCBzZXQK
IyBDT05GSUdfTkxTX0lTTzg4NTlfNCBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19JU084ODU5
XzUgaXMgbm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV82IGlzIG5vdCBzZXQKIyBDT05G
SUdfTkxTX0lTTzg4NTlfNyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19JU084ODU5XzkgaXMg
bm90IHNldAojIENPTkZJR19OTFNfSVNPODg1OV8xMyBpcyBub3Qgc2V0CiMgQ09ORklHX05M
U19JU084ODU5XzE0IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0lTTzg4NTlfMTUgaXMgbm90
IHNldAojIENPTkZJR19OTFNfS09JOF9SIGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX0tPSThf
VSBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfUk9NQU4gaXMgbm90IHNldAojIENPTkZJ
R19OTFNfTUFDX0NFTFRJQyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfQ0VOVEVVUk8g
aXMgbm90IHNldAojIENPTkZJR19OTFNfTUFDX0NST0FUSUFOIGlzIG5vdCBzZXQKIyBDT05G
SUdfTkxTX01BQ19DWVJJTExJQyBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfR0FFTElD
IGlzIG5vdCBzZXQKIyBDT05GSUdfTkxTX01BQ19HUkVFSyBpcyBub3Qgc2V0CiMgQ09ORklH
X05MU19NQUNfSUNFTEFORCBpcyBub3Qgc2V0CiMgQ09ORklHX05MU19NQUNfSU5VSVQgaXMg
bm90IHNldAojIENPTkZJR19OTFNfTUFDX1JPTUFOSUFOIGlzIG5vdCBzZXQKIyBDT05GSUdf
TkxTX01BQ19UVVJLSVNIIGlzIG5vdCBzZXQKQ09ORklHX05MU19VVEY4PXkKCiMKIyBLZXJu
ZWwgaGFja2luZwojCkNPTkZJR19UUkFDRV9JUlFGTEFHU19TVVBQT1JUPXkKCiMKIyBwcmlu
dGsgYW5kIGRtZXNnIG9wdGlvbnMKIwpDT05GSUdfUFJJTlRLX1RJTUU9eQpDT05GSUdfREVG
QVVMVF9NRVNTQUdFX0xPR0xFVkVMPTcKIyBDT05GSUdfQk9PVF9QUklOVEtfREVMQVkgaXMg
bm90IHNldAojIENPTkZJR19EWU5BTUlDX0RFQlVHIGlzIG5vdCBzZXQKCiMKIyBDb21waWxl
LXRpbWUgY2hlY2tzIGFuZCBjb21waWxlciBvcHRpb25zCiMKQ09ORklHX0RFQlVHX0lORk89
eQojIENPTkZJR19ERUJVR19JTkZPX1JFRFVDRUQgaXMgbm90IHNldAojIENPTkZJR19FTkFC
TEVfV0FSTl9ERVBSRUNBVEVEIGlzIG5vdCBzZXQKIyBDT05GSUdfRU5BQkxFX01VU1RfQ0hF
Q0sgaXMgbm90IHNldApDT05GSUdfRlJBTUVfV0FSTj0yMDQ4CiMgQ09ORklHX1NUUklQX0FT
TV9TWU1TIGlzIG5vdCBzZXQKIyBDT05GSUdfUkVBREFCTEVfQVNNIGlzIG5vdCBzZXQKIyBD
T05GSUdfVU5VU0VEX1NZTUJPTFMgaXMgbm90IHNldApDT05GSUdfREVCVUdfRlM9eQojIENP
TkZJR19IRUFERVJTX0NIRUNLIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfU0VDVElPTl9N
SVNNQVRDSCBpcyBub3Qgc2V0CkNPTkZJR19BUkNIX1dBTlRfRlJBTUVfUE9JTlRFUlM9eQpD
T05GSUdfRlJBTUVfUE9JTlRFUj15CiMgQ09ORklHX0RFQlVHX0ZPUkNFX1dFQUtfUEVSX0NQ
VSBpcyBub3Qgc2V0CkNPTkZJR19NQUdJQ19TWVNSUT15CkNPTkZJR19NQUdJQ19TWVNSUV9E
RUZBVUxUX0VOQUJMRT0weDEKQ09ORklHX0RFQlVHX0tFUk5FTD15CgojCiMgTWVtb3J5IERl
YnVnZ2luZwojCiMgQ09ORklHX0RFQlVHX1BBR0VBTExPQyBpcyBub3Qgc2V0CiMgQ09ORklH
X0RFQlVHX09CSkVDVFMgaXMgbm90IHNldAojIENPTkZJR19TTFVCX0RFQlVHX09OIGlzIG5v
dCBzZXQKIyBDT05GSUdfU0xVQl9TVEFUUyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0RFQlVH
X0tNRU1MRUFLPXkKQ09ORklHX0RFQlVHX0tNRU1MRUFLPXkKQ09ORklHX0RFQlVHX0tNRU1M
RUFLX0VBUkxZX0xPR19TSVpFPTQwMAojIENPTkZJR19ERUJVR19LTUVNTEVBS19URVNUIGlz
IG5vdCBzZXQKQ09ORklHX0RFQlVHX0tNRU1MRUFLX0RFRkFVTFRfT0ZGPXkKIyBDT05GSUdf
REVCVUdfU1RBQ0tfVVNBR0UgaXMgbm90IHNldAojIENPTkZJR19ERUJVR19WTSBpcyBub3Qg
c2V0CiMgQ09ORklHX0RFQlVHX1ZJUlRVQUwgaXMgbm90IHNldApDT05GSUdfREVCVUdfTUVN
T1JZX0lOSVQ9eQojIENPTkZJR19ERUJVR19QRVJfQ1BVX01BUFMgaXMgbm90IHNldApDT05G
SUdfSEFWRV9ERUJVR19TVEFDS09WRVJGTE9XPXkKIyBDT05GSUdfREVCVUdfU1RBQ0tPVkVS
RkxPVyBpcyBub3Qgc2V0CkNPTkZJR19IQVZFX0FSQ0hfS01FTUNIRUNLPXkKQ09ORklHX0RF
QlVHX1NISVJRPXkKCiMKIyBEZWJ1ZyBMb2NrdXBzIGFuZCBIYW5ncwojCkNPTkZJR19MT0NL
VVBfREVURUNUT1I9eQpDT05GSUdfSEFSRExPQ0tVUF9ERVRFQ1RPUj15CiMgQ09ORklHX0JP
T1RQQVJBTV9IQVJETE9DS1VQX1BBTklDIGlzIG5vdCBzZXQKQ09ORklHX0JPT1RQQVJBTV9I
QVJETE9DS1VQX1BBTklDX1ZBTFVFPTAKIyBDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBf
UEFOSUMgaXMgbm90IHNldApDT05GSUdfQk9PVFBBUkFNX1NPRlRMT0NLVVBfUEFOSUNfVkFM
VUU9MApDT05GSUdfREVURUNUX0hVTkdfVEFTSz15CkNPTkZJR19ERUZBVUxUX0hVTkdfVEFT
S19USU1FT1VUPTEyMAojIENPTkZJR19CT09UUEFSQU1fSFVOR19UQVNLX1BBTklDIGlzIG5v
dCBzZXQKQ09ORklHX0JPT1RQQVJBTV9IVU5HX1RBU0tfUEFOSUNfVkFMVUU9MAojIENPTkZJ
R19QQU5JQ19PTl9PT1BTIGlzIG5vdCBzZXQKQ09ORklHX1BBTklDX09OX09PUFNfVkFMVUU9
MApDT05GSUdfUEFOSUNfVElNRU9VVD0wCiMgQ09ORklHX1NDSEVEX0RFQlVHIGlzIG5vdCBz
ZXQKQ09ORklHX1NDSEVEU1RBVFM9eQpDT05GSUdfVElNRVJfU1RBVFM9eQoKIwojIExvY2sg
RGVidWdnaW5nIChzcGlubG9ja3MsIG11dGV4ZXMsIGV0Yy4uLikKIwpDT05GSUdfREVCVUdf
UlRfTVVURVhFUz15CkNPTkZJR19ERUJVR19QSV9MSVNUPXkKIyBDT05GSUdfUlRfTVVURVhf
VEVTVEVSIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX1NQSU5MT0NLPXkKQ09ORklHX0RFQlVH
X01VVEVYRVM9eQojIENPTkZJR19ERUJVR19XV19NVVRFWF9TTE9XUEFUSCBpcyBub3Qgc2V0
CkNPTkZJR19ERUJVR19MT0NLX0FMTE9DPXkKQ09ORklHX1BST1ZFX0xPQ0tJTkc9eQpDT05G
SUdfTE9DS0RFUD15CiMgQ09ORklHX0xPQ0tfU1RBVCBpcyBub3Qgc2V0CkNPTkZJR19ERUJV
R19MT0NLREVQPXkKIyBDT05GSUdfREVCVUdfQVRPTUlDX1NMRUVQIGlzIG5vdCBzZXQKIyBD
T05GSUdfREVCVUdfTE9DS0lOR19BUElfU0VMRlRFU1RTIGlzIG5vdCBzZXQKQ09ORklHX1RS
QUNFX0lSUUZMQUdTPXkKQ09ORklHX1NUQUNLVFJBQ0U9eQojIENPTkZJR19ERUJVR19LT0JK
RUNUIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVHX0JVR1ZFUkJPU0U9eQpDT05GSUdfREVCVUdf
V1JJVEVDT1VOVD15CkNPTkZJR19ERUJVR19MSVNUPXkKQ09ORklHX0RFQlVHX1NHPXkKIyBD
T05GSUdfREVCVUdfTk9USUZJRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfQ1JFREVO
VElBTFMgaXMgbm90IHNldAoKIwojIFJDVSBEZWJ1Z2dpbmcKIwojIENPTkZJR19QUk9WRV9S
Q1UgaXMgbm90IHNldApDT05GSUdfU1BBUlNFX1JDVV9QT0lOVEVSPXkKIyBDT05GSUdfUkNV
X1RPUlRVUkVfVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19SQ1VfQ1BVX1NUQUxMX1RJTUVPVVQ9
NjAKQ09ORklHX1JDVV9DUFVfU1RBTExfSU5GTz15CiMgQ09ORklHX1JDVV9UUkFDRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0RFQlVHX0JMT0NLX0VYVF9ERVZUIGlzIG5vdCBzZXQKIyBDT05G
SUdfTk9USUZJRVJfRVJST1JfSU5KRUNUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfRkFVTFRf
SU5KRUNUSU9OIGlzIG5vdCBzZXQKIyBDT05GSUdfTEFURU5DWVRPUCBpcyBub3Qgc2V0CkNP
TkZJR19BUkNIX0hBU19ERUJVR19TVFJJQ1RfVVNFUl9DT1BZX0NIRUNLUz15CiMgQ09ORklH
X0RFQlVHX1NUUklDVF9VU0VSX0NPUFlfQ0hFQ0tTIGlzIG5vdCBzZXQKQ09ORklHX1VTRVJf
U1RBQ0tUUkFDRV9TVVBQT1JUPXkKQ09ORklHX05PUF9UUkFDRVI9eQpDT05GSUdfSEFWRV9G
VU5DVElPTl9UUkFDRVI9eQpDT05GSUdfSEFWRV9GVU5DVElPTl9HUkFQSF9UUkFDRVI9eQpD
T05GSUdfSEFWRV9GVU5DVElPTl9HUkFQSF9GUF9URVNUPXkKQ09ORklHX0hBVkVfRlVOQ1RJ
T05fVFJBQ0VfTUNPVU5UX1RFU1Q9eQpDT05GSUdfSEFWRV9EWU5BTUlDX0ZUUkFDRT15CkNP
TkZJR19IQVZFX0RZTkFNSUNfRlRSQUNFX1dJVEhfUkVHUz15CkNPTkZJR19IQVZFX0ZUUkFD
RV9NQ09VTlRfUkVDT1JEPXkKQ09ORklHX0hBVkVfU1lTQ0FMTF9UUkFDRVBPSU5UUz15CkNP
TkZJR19IQVZFX0ZFTlRSWT15CkNPTkZJR19IQVZFX0NfUkVDT1JETUNPVU5UPXkKQ09ORklH
X1RSQUNFX0NMT0NLPXkKQ09ORklHX1JJTkdfQlVGRkVSPXkKQ09ORklHX0VWRU5UX1RSQUNJ
Tkc9eQpDT05GSUdfQ09OVEVYVF9TV0lUQ0hfVFJBQ0VSPXkKQ09ORklHX1RSQUNJTkc9eQpD
T05GSUdfR0VORVJJQ19UUkFDRVI9eQpDT05GSUdfVFJBQ0lOR19TVVBQT1JUPXkKQ09ORklH
X0ZUUkFDRT15CkNPTkZJR19GVU5DVElPTl9UUkFDRVI9eQpDT05GSUdfRlVOQ1RJT05fR1JB
UEhfVFJBQ0VSPXkKIyBDT05GSUdfSVJRU09GRl9UUkFDRVIgaXMgbm90IHNldAojIENPTkZJ
R19TQ0hFRF9UUkFDRVIgaXMgbm90IHNldAojIENPTkZJR19GVFJBQ0VfU1lTQ0FMTFMgaXMg
bm90IHNldAojIENPTkZJR19UUkFDRVJfU05BUFNIT1QgaXMgbm90IHNldApDT05GSUdfQlJB
TkNIX1BST0ZJTEVfTk9ORT15CiMgQ09ORklHX1BST0ZJTEVfQU5OT1RBVEVEX0JSQU5DSEVT
IGlzIG5vdCBzZXQKIyBDT05GSUdfUFJPRklMRV9BTExfQlJBTkNIRVMgaXMgbm90IHNldAoj
IENPTkZJR19TVEFDS19UUkFDRVIgaXMgbm90IHNldAojIENPTkZJR19CTEtfREVWX0lPX1RS
QUNFIGlzIG5vdCBzZXQKIyBDT05GSUdfVVBST0JFX0VWRU5UIGlzIG5vdCBzZXQKIyBDT05G
SUdfUFJPQkVfRVZFTlRTIGlzIG5vdCBzZXQKQ09ORklHX0RZTkFNSUNfRlRSQUNFPXkKQ09O
RklHX0RZTkFNSUNfRlRSQUNFX1dJVEhfUkVHUz15CiMgQ09ORklHX0ZVTkNUSU9OX1BST0ZJ
TEVSIGlzIG5vdCBzZXQKQ09ORklHX0ZUUkFDRV9NQ09VTlRfUkVDT1JEPXkKIyBDT05GSUdf
RlRSQUNFX1NUQVJUVVBfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX01NSU9UUkFDRSBpcyBu
b3Qgc2V0CiMgQ09ORklHX1JJTkdfQlVGRkVSX0JFTkNITUFSSyBpcyBub3Qgc2V0CiMgQ09O
RklHX1JJTkdfQlVGRkVSX1NUQVJUVVBfVEVTVCBpcyBub3Qgc2V0CgojCiMgUnVudGltZSBU
ZXN0aW5nCiMKIyBDT05GSUdfTEtEVE0gaXMgbm90IHNldAojIENPTkZJR19URVNUX0xJU1Rf
U09SVCBpcyBub3Qgc2V0CiMgQ09ORklHX0JBQ0tUUkFDRV9TRUxGX1RFU1QgaXMgbm90IHNl
dAojIENPTkZJR19SQlRSRUVfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX0lOVEVSVkFMX1RS
RUVfVEVTVCBpcyBub3Qgc2V0CiMgQ09ORklHX1BFUkNQVV9URVNUIGlzIG5vdCBzZXQKIyBD
T05GSUdfQVRPTUlDNjRfU0VMRlRFU1QgaXMgbm90IHNldAojIENPTkZJR19URVNUX1NUUklO
R19IRUxQRVJTIGlzIG5vdCBzZXQKIyBDT05GSUdfVEVTVF9LU1RSVE9YIGlzIG5vdCBzZXQK
IyBDT05GSUdfUFJPVklERV9PSENJMTM5NF9ETUFfSU5JVCBpcyBub3Qgc2V0CkNPTkZJR19E
TUFfQVBJX0RFQlVHPXkKIyBDT05GSUdfVEVTVF9NT0RVTEUgaXMgbm90IHNldAojIENPTkZJ
R19URVNUX1VTRVJfQ09QWSBpcyBub3Qgc2V0CiMgQ09ORklHX1NBTVBMRVMgaXMgbm90IHNl
dApDT05GSUdfSEFWRV9BUkNIX0tHREI9eQojIENPTkZJR19LR0RCIGlzIG5vdCBzZXQKIyBD
T05GSUdfU1RSSUNUX0RFVk1FTSBpcyBub3Qgc2V0CkNPTkZJR19YODZfVkVSQk9TRV9CT09U
VVA9eQpDT05GSUdfRUFSTFlfUFJJTlRLPXkKIyBDT05GSUdfRUFSTFlfUFJJTlRLX0RCR1Ag
aXMgbm90IHNldAojIENPTkZJR19YODZfUFREVU1QIGlzIG5vdCBzZXQKQ09ORklHX0RFQlVH
X1JPREFUQT15CiMgQ09ORklHX0RFQlVHX1JPREFUQV9URVNUIGlzIG5vdCBzZXQKIyBDT05G
SUdfREVCVUdfU0VUX01PRFVMRV9ST05YIGlzIG5vdCBzZXQKIyBDT05GSUdfREVCVUdfTlhf
VEVTVCBpcyBub3Qgc2V0CkNPTkZJR19ET1VCTEVGQVVMVD15CiMgQ09ORklHX0RFQlVHX1RM
QkZMVVNIIGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX0RFQlVHPXkKIyBDT05GSUdfSU9NTVVf
U1RSRVNTIGlzIG5vdCBzZXQKQ09ORklHX0lPTU1VX0xFQUs9eQpDT05GSUdfSEFWRV9NTUlP
VFJBQ0VfU1VQUE9SVD15CkNPTkZJR19JT19ERUxBWV9UWVBFXzBYODA9MApDT05GSUdfSU9f
REVMQVlfVFlQRV8wWEVEPTEKQ09ORklHX0lPX0RFTEFZX1RZUEVfVURFTEFZPTIKQ09ORklH
X0lPX0RFTEFZX1RZUEVfTk9ORT0zCkNPTkZJR19JT19ERUxBWV8wWDgwPXkKIyBDT05GSUdf
SU9fREVMQVlfMFhFRCBpcyBub3Qgc2V0CiMgQ09ORklHX0lPX0RFTEFZX1VERUxBWSBpcyBu
b3Qgc2V0CiMgQ09ORklHX0lPX0RFTEFZX05PTkUgaXMgbm90IHNldApDT05GSUdfREVGQVVM
VF9JT19ERUxBWV9UWVBFPTAKQ09ORklHX0RFQlVHX0JPT1RfUEFSQU1TPXkKIyBDT05GSUdf
Q1BBX0RFQlVHIGlzIG5vdCBzZXQKIyBDT05GSUdfT1BUSU1JWkVfSU5MSU5JTkcgaXMgbm90
IHNldAojIENPTkZJR19ERUJVR19OTUlfU0VMRlRFU1QgaXMgbm90IHNldAojIENPTkZJR19Y
ODZfREVCVUdfU1RBVElDX0NQVV9IQVMgaXMgbm90IHNldAoKIwojIFNlY3VyaXR5IG9wdGlv
bnMKIwpDT05GSUdfS0VZUz15CiMgQ09ORklHX1BFUlNJU1RFTlRfS0VZUklOR1MgaXMgbm90
IHNldAojIENPTkZJR19CSUdfS0VZUyBpcyBub3Qgc2V0CiMgQ09ORklHX0VOQ1JZUFRFRF9L
RVlTIGlzIG5vdCBzZXQKIyBDT05GSUdfS0VZU19ERUJVR19QUk9DX0tFWVMgaXMgbm90IHNl
dAojIENPTkZJR19TRUNVUklUWV9ETUVTR19SRVNUUklDVCBpcyBub3Qgc2V0CiMgQ09ORklH
X1NFQ1VSSVRZIGlzIG5vdCBzZXQKIyBDT05GSUdfU0VDVVJJVFlGUyBpcyBub3Qgc2V0CkNP
TkZJR19ERUZBVUxUX1NFQ1VSSVRZX0RBQz15CkNPTkZJR19ERUZBVUxUX1NFQ1VSSVRZPSIi
CkNPTkZJR19YT1JfQkxPQ0tTPXkKQ09ORklHX0NSWVBUTz15CgojCiMgQ3J5cHRvIGNvcmUg
b3IgaGVscGVyCiMKQ09ORklHX0NSWVBUT19BTEdBUEk9eQpDT05GSUdfQ1JZUFRPX0FMR0FQ
STI9eQpDT05GSUdfQ1JZUFRPX0FFQUQ9eQpDT05GSUdfQ1JZUFRPX0FFQUQyPXkKQ09ORklH
X0NSWVBUT19CTEtDSVBIRVI9eQpDT05GSUdfQ1JZUFRPX0JMS0NJUEhFUjI9eQpDT05GSUdf
Q1JZUFRPX0hBU0g9eQpDT05GSUdfQ1JZUFRPX0hBU0gyPXkKQ09ORklHX0NSWVBUT19STkc9
eQpDT05GSUdfQ1JZUFRPX1JORzI9eQpDT05GSUdfQ1JZUFRPX1BDT01QPXkKQ09ORklHX0NS
WVBUT19QQ09NUDI9eQpDT05GSUdfQ1JZUFRPX01BTkFHRVI9eQpDT05GSUdfQ1JZUFRPX01B
TkFHRVIyPXkKIyBDT05GSUdfQ1JZUFRPX1VTRVIgaXMgbm90IHNldApDT05GSUdfQ1JZUFRP
X01BTkFHRVJfRElTQUJMRV9URVNUUz15CkNPTkZJR19DUllQVE9fR0YxMjhNVUw9eQojIENP
TkZJR19DUllQVE9fTlVMTCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19QQ1JZUFQgaXMg
bm90IHNldApDT05GSUdfQ1JZUFRPX1dPUktRVUVVRT15CkNPTkZJR19DUllQVE9fQ1JZUFRE
PXkKQ09ORklHX0NSWVBUT19BVVRIRU5DPXkKIyBDT05GSUdfQ1JZUFRPX1RFU1QgaXMgbm90
IHNldApDT05GSUdfQ1JZUFRPX0FCTEtfSEVMUEVSPXkKQ09ORklHX0NSWVBUT19HTFVFX0hF
TFBFUl9YODY9eQoKIwojIEF1dGhlbnRpY2F0ZWQgRW5jcnlwdGlvbiB3aXRoIEFzc29jaWF0
ZWQgRGF0YQojCiMgQ09ORklHX0NSWVBUT19DQ00gaXMgbm90IHNldAojIENPTkZJR19DUllQ
VE9fR0NNIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1NFUUlWIGlzIG5vdCBzZXQKCiMK
IyBCbG9jayBtb2RlcwojCkNPTkZJR19DUllQVE9fQ0JDPXkKIyBDT05GSUdfQ1JZUFRPX0NU
UiBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19DVFMgaXMgbm90IHNldApDT05GSUdfQ1JZ
UFRPX0VDQj15CkNPTkZJR19DUllQVE9fTFJXPXkKIyBDT05GSUdfQ1JZUFRPX1BDQkMgaXMg
bm90IHNldApDT05GSUdfQ1JZUFRPX1hUUz15CgojCiMgSGFzaCBtb2RlcwojCkNPTkZJR19D
UllQVE9fQ01BQz15CkNPTkZJR19DUllQVE9fSE1BQz15CiMgQ09ORklHX0NSWVBUT19YQ0JD
IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1ZNQUMgaXMgbm90IHNldAoKIwojIERpZ2Vz
dAojCkNPTkZJR19DUllQVE9fQ1JDMzJDPXkKQ09ORklHX0NSWVBUT19DUkMzMkNfSU5URUw9
eQojIENPTkZJR19DUllQVE9fQ1JDMzIgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fQ1JD
MzJfUENMTVVMIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19DUkNUMTBESUY9eQojIENPTkZJ
R19DUllQVE9fQ1JDVDEwRElGX1BDTE1VTCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19H
SEFTSCBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fTUQ0PXkKQ09ORklHX0NSWVBUT19NRDU9
eQojIENPTkZJR19DUllQVE9fTUlDSEFFTF9NSUMgaXMgbm90IHNldAojIENPTkZJR19DUllQ
VE9fUk1EMTI4IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX1JNRDE2MCBpcyBub3Qgc2V0
CiMgQ09ORklHX0NSWVBUT19STUQyNTYgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fUk1E
MzIwIGlzIG5vdCBzZXQKQ09ORklHX0NSWVBUT19TSEExPXkKQ09ORklHX0NSWVBUT19TSEEx
X1NTU0UzPXkKQ09ORklHX0NSWVBUT19TSEEyNTZfU1NTRTM9eQpDT05GSUdfQ1JZUFRPX1NI
QTUxMl9TU1NFMz15CkNPTkZJR19DUllQVE9fU0hBMjU2PXkKQ09ORklHX0NSWVBUT19TSEE1
MTI9eQojIENPTkZJR19DUllQVE9fVEdSMTkyIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRP
X1dQNTEyIGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0dIQVNIX0NMTVVMX05JX0lOVEVM
IGlzIG5vdCBzZXQKCiMKIyBDaXBoZXJzCiMKQ09ORklHX0NSWVBUT19BRVM9eQpDT05GSUdf
Q1JZUFRPX0FFU19YODZfNjQ9eQpDT05GSUdfQ1JZUFRPX0FFU19OSV9JTlRFTD15CiMgQ09O
RklHX0NSWVBUT19BTlVCSVMgaXMgbm90IHNldApDT05GSUdfQ1JZUFRPX0FSQzQ9eQpDT05G
SUdfQ1JZUFRPX0JMT1dGSVNIPXkKQ09ORklHX0NSWVBUT19CTE9XRklTSF9DT01NT049eQpD
T05GSUdfQ1JZUFRPX0JMT1dGSVNIX1g4Nl82ND15CiMgQ09ORklHX0NSWVBUT19DQU1FTExJ
QSBpcyBub3Qgc2V0CkNPTkZJR19DUllQVE9fQ0FNRUxMSUFfWDg2XzY0PXkKQ09ORklHX0NS
WVBUT19DQU1FTExJQV9BRVNOSV9BVlhfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19DQU1FTExJ
QV9BRVNOSV9BVlgyX1g4Nl82ND15CiMgQ09ORklHX0NSWVBUT19DQVNUNSBpcyBub3Qgc2V0
CiMgQ09ORklHX0NSWVBUT19DQVNUNV9BVlhfWDg2XzY0IGlzIG5vdCBzZXQKIyBDT05GSUdf
Q1JZUFRPX0NBU1Q2IGlzIG5vdCBzZXQKIyBDT05GSUdfQ1JZUFRPX0NBU1Q2X0FWWF9YODZf
NjQgaXMgbm90IHNldApDT05GSUdfQ1JZUFRPX0RFUz15CiMgQ09ORklHX0NSWVBUT19GQ1JZ
UFQgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fS0hBWkFEIGlzIG5vdCBzZXQKIyBDT05G
SUdfQ1JZUFRPX1NBTFNBMjAgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fU0FMU0EyMF9Y
ODZfNjQgaXMgbm90IHNldAojIENPTkZJR19DUllQVE9fU0VFRCBpcyBub3Qgc2V0CkNPTkZJ
R19DUllQVE9fU0VSUEVOVD15CkNPTkZJR19DUllQVE9fU0VSUEVOVF9TU0UyX1g4Nl82ND15
CkNPTkZJR19DUllQVE9fU0VSUEVOVF9BVlhfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19TRVJQ
RU5UX0FWWDJfWDg2XzY0PXkKIyBDT05GSUdfQ1JZUFRPX1RFQSBpcyBub3Qgc2V0CkNPTkZJ
R19DUllQVE9fVFdPRklTSD15CkNPTkZJR19DUllQVE9fVFdPRklTSF9DT01NT049eQpDT05G
SUdfQ1JZUFRPX1RXT0ZJU0hfWDg2XzY0PXkKQ09ORklHX0NSWVBUT19UV09GSVNIX1g4Nl82
NF8zV0FZPXkKQ09ORklHX0NSWVBUT19UV09GSVNIX0FWWF9YODZfNjQ9eQoKIwojIENvbXBy
ZXNzaW9uCiMKQ09ORklHX0NSWVBUT19ERUZMQVRFPXkKQ09ORklHX0NSWVBUT19aTElCPXkK
Q09ORklHX0NSWVBUT19MWk89eQojIENPTkZJR19DUllQVE9fTFo0IGlzIG5vdCBzZXQKIyBD
T05GSUdfQ1JZUFRPX0xaNEhDIGlzIG5vdCBzZXQKCiMKIyBSYW5kb20gTnVtYmVyIEdlbmVy
YXRpb24KIwpDT05GSUdfQ1JZUFRPX0FOU0lfQ1BSTkc9eQojIENPTkZJR19DUllQVE9fVVNF
Ul9BUElfSEFTSCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19VU0VSX0FQSV9TS0NJUEhF
UiBpcyBub3Qgc2V0CiMgQ09ORklHX0NSWVBUT19IVyBpcyBub3Qgc2V0CiMgQ09ORklHX0FT
WU1NRVRSSUNfS0VZX1RZUEUgaXMgbm90IHNldApDT05GSUdfSEFWRV9LVk09eQojIENPTkZJ
R19WSVJUVUFMSVpBVElPTiBpcyBub3Qgc2V0CkNPTkZJR19CSU5BUllfUFJJTlRGPXkKCiMK
IyBMaWJyYXJ5IHJvdXRpbmVzCiMKQ09ORklHX1JBSUQ2X1BRPXkKQ09ORklHX0JJVFJFVkVS
U0U9eQpDT05GSUdfR0VORVJJQ19TVFJOQ1BZX0ZST01fVVNFUj15CkNPTkZJR19HRU5FUklD
X1NUUk5MRU5fVVNFUj15CkNPTkZJR19HRU5FUklDX05FVF9VVElMUz15CkNPTkZJR19HRU5F
UklDX0ZJTkRfRklSU1RfQklUPXkKQ09ORklHX0dFTkVSSUNfUENJX0lPTUFQPXkKQ09ORklH
X0dFTkVSSUNfSU9NQVA9eQpDT05GSUdfR0VORVJJQ19JTz15CkNPTkZJR19BUkNIX1VTRV9D
TVBYQ0hHX0xPQ0tSRUY9eQojIENPTkZJR19DUkNfQ0NJVFQgaXMgbm90IHNldApDT05GSUdf
Q1JDMTY9eQpDT05GSUdfQ1JDX1QxMERJRj15CkNPTkZJR19DUkNfSVRVX1Q9eQpDT05GSUdf
Q1JDMzI9eQpDT05GSUdfQ1JDMzJfU0VMRlRFU1Q9eQpDT05GSUdfQ1JDMzJfU0xJQ0VCWTg9
eQojIENPTkZJR19DUkMzMl9TTElDRUJZNCBpcyBub3Qgc2V0CiMgQ09ORklHX0NSQzMyX1NB
UldBVEUgaXMgbm90IHNldAojIENPTkZJR19DUkMzMl9CSVQgaXMgbm90IHNldAojIENPTkZJ
R19DUkM3IGlzIG5vdCBzZXQKQ09ORklHX0xJQkNSQzMyQz15CiMgQ09ORklHX0NSQzggaXMg
bm90IHNldAojIENPTkZJR19SQU5ET00zMl9TRUxGVEVTVCBpcyBub3Qgc2V0CkNPTkZJR19a
TElCX0lORkxBVEU9eQpDT05GSUdfWkxJQl9ERUZMQVRFPXkKQ09ORklHX0xaT19DT01QUkVT
Uz15CkNPTkZJR19MWk9fREVDT01QUkVTUz15CkNPTkZJR19MWjRfREVDT01QUkVTUz15CkNP
TkZJR19YWl9ERUM9eQpDT05GSUdfWFpfREVDX1g4Nj15CkNPTkZJR19YWl9ERUNfUE9XRVJQ
Qz15CkNPTkZJR19YWl9ERUNfSUE2ND15CkNPTkZJR19YWl9ERUNfQVJNPXkKQ09ORklHX1ha
X0RFQ19BUk1USFVNQj15CkNPTkZJR19YWl9ERUNfU1BBUkM9eQpDT05GSUdfWFpfREVDX0JD
Sj15CiMgQ09ORklHX1haX0RFQ19URVNUIGlzIG5vdCBzZXQKQ09ORklHX0RFQ09NUFJFU1Nf
R1pJUD15CkNPTkZJR19ERUNPTVBSRVNTX0JaSVAyPXkKQ09ORklHX0RFQ09NUFJFU1NfTFpN
QT15CkNPTkZJR19ERUNPTVBSRVNTX1haPXkKQ09ORklHX0RFQ09NUFJFU1NfTFpPPXkKQ09O
RklHX0RFQ09NUFJFU1NfTFo0PXkKQ09ORklHX1RFWFRTRUFSQ0g9eQpDT05GSUdfVEVYVFNF
QVJDSF9LTVA9eQpDT05GSUdfVEVYVFNFQVJDSF9CTT15CkNPTkZJR19URVhUU0VBUkNIX0ZT
TT15CkNPTkZJR19BU1NPQ0lBVElWRV9BUlJBWT15CkNPTkZJR19IQVNfSU9NRU09eQpDT05G
SUdfSEFTX0lPUE9SVD15CkNPTkZJR19IQVNfRE1BPXkKQ09ORklHX0NIRUNLX1NJR05BVFVS
RT15CkNPTkZJR19DUFVfUk1BUD15CkNPTkZJR19EUUw9eQpDT05GSUdfTkxBVFRSPXkKQ09O
RklHX0FSQ0hfSEFTX0FUT01JQzY0X0RFQ19JRl9QT1NJVElWRT15CkNPTkZJR19BVkVSQUdF
PXkKIyBDT05GSUdfQ09SRElDIGlzIG5vdCBzZXQKIyBDT05GSUdfRERSIGlzIG5vdCBzZXQK
Q09ORklHX0ZPTlRfU1VQUE9SVD15CiMgQ09ORklHX0ZPTlRTIGlzIG5vdCBzZXQKQ09ORklH
X0ZPTlRfOHg4PXkKQ09ORklHX0ZPTlRfOHgxNj15Cg==
------------0000A013B0B93F0E3
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------0000A013B0B93F0E3--



From xen-devel-bounces@lists.xen.org Tue Feb 11 16:14:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFyM-0006D5-I1; Tue, 11 Feb 2014 16:14:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WDFyK-0006Cv-82
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 16:14:08 +0000
Received: from [85.158.143.35:41316] by server-2.bemta-4.messagelabs.com id
	14/2E-10891-F4C4AF25; Tue, 11 Feb 2014 16:14:07 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392135243!4883560!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8066 invoked from network); 11 Feb 2014 16:14:04 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 16:14:04 -0000
Received: from mail-ie0-f181.google.com (mail-ie0-f181.google.com
	[209.85.223.181]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1BGE1eQ001128
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 08:14:02 -0800
Received: by mail-ie0-f181.google.com with SMTP id to1so4348478ieb.26
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 08:14:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=Ch59HAGepQML1HOqfRqQForL4KH8HSoz8/dR3bXCvTk=;
	b=JWg09wez9oXRA31x8AZsqwtzheY2e3A9DKWeFTszAX9icummNjllxqUaSI0fsuck2q
	UxP5hBsmigYMMAp+VEld/36pX/jTlTZgdeBlzJtYCL/AKtGAVpAv6y++4dzXPALN0Srf
	zKOCIUkA0GnlWg0WzLT0exfpIMCiebsBQegMvhkHCuzvocHsu55CJEGSMOH8AHlsXEhc
	j1z1IyMjECi+s7lqHlM7QVAfOatKaUZWViHau9u1vzJa5rCQNGm1LRH4ggqMeWdZEZbY
	PdgVHG92hyFTmwPRKAm7bD//3g/VIhW4IbjO3ZgVlmHeWOl1FpnSJdBGgv6v33XrLHNf
	yVNA==
X-Received: by 10.42.92.194 with SMTP id u2mr24140497icm.19.1392135240457;
	Tue, 11 Feb 2014 08:14:00 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Tue, 11 Feb 2014 08:13:10 -0800 (PST)
In-Reply-To: <52FA1069.2040709@citrix.com>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
	<52FA1069.2040709@citrix.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Tue, 11 Feb 2014 10:13:10 -0600
Message-ID: <CAP8mzPMgH7rjm0OPwAmEiCKVJMeWXsCvYybHEb2FdnVD9VXXvA@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1782634504721351375=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1782634504721351375==
Content-Type: multipart/alternative; boundary=90e6ba3fcf8f7ca20704f223be00

--90e6ba3fcf8f7ca20704f223be00
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 11, 2014 at 5:58 AM, David Vrabel <david.vrabel@citrix.com> wrote:

> On 10/02/14 20:00, Shriram Rajagopalan wrote:
> > On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com
> > <mailto:david.vrabel@citrix.com>> wrote:
> >
> >
> > It's tempting to adopt all the TCP-style madness for transferring a set of
> > structured data.  Why this endian-ness mess?  Am I missing something here?
> > I am assuming that a lion's share of Xen's deployment is on x86
> > (not including Amazon). So that leaves ARM.  Why not let these
> > processors take the hit of endian-ness conversion?
>
> I'm not sure I would characterize a spec being precise about byte
> ordering as "endianness mess".
>
> I think it would be a pretty poor specification if it didn't specify
> byte ordering -- we can't have the tools having to make assumptions
> about the ordering.
>
>
Totally agree. But as someone else put it (and you did as well), my point
was that it's sufficient to specify the byte ordering once, somewhere in
the image header, and to make sure (as you put it below) that the current
use cases don't have to go through needless endian conversion.
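An endianness marker of the sort described could be a fixed magic word in the image header; the restorer checks which byte order it reads back and only swaps when the orders differ. A minimal sketch of the idea, not draft B's actual header layout (the magic value and helper name are hypothetical):

```python
import struct

MAGIC = 0x58454E46  # hypothetical 32-bit marker written in the saver's native order

def detect_byte_order(raw_magic):
    """Return '<' (little) or '>' (big) depending on how the magic reads back."""
    if struct.unpack("<I", raw_magic)[0] == MAGIC:
        return "<"  # little-endian image: an x86 reader does no swapping
    if struct.unpack(">I", raw_magic)[0] == MAGIC:
        return ">"  # big-endian image: the reader must byte-swap fields
    raise ValueError("not a save image")
```

With this, the common x86-to-x86 case pays no per-field conversion cost at all; only a cross-endian restore triggers swapping.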



> However, I do think it can be specified in such a way that all the
> current use cases don't have to do any byte swapping (except for the
> minimal header).
>
> >         +-----------------------+-------------------------+
> >         | checksum              | (reserved)              |
> >         +-----------------------+-------------------------+
> >
> >
> > I am assuming that you the checksum field is present only
> > for debugging purposes? Otherwise, I see no reason for the
> > computational overhead, given that we are already sending data
> > over a reliable channel + IIRC we already have an image-wide checksum
> > when saving the image to disk.
>
> I'm not aware of any image wide checksum.
>

Yep. I was mistaken.


> The checksum seems like a potentially useful feature but I don't have a
> requirement for it so if no one else thinks it is useful it can be removed.
>
>
My suggestion is: when saving the image to disk, why not have a single
image-wide checksum to ensure that the image being restored from disk is
still valid?
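A single image-wide checksum along these lines could be as simple as a streaming CRC32 accumulated over every record as it is written, with the result appended as a trailer. A sketch only, assuming a hypothetical writer helper rather than the actual libxc code:

```python
import io
import zlib

def write_with_checksum(records, out):
    """Write serialized records and append a CRC32 trailer over all of them.

    `records` is an iterable of already-serialized record byte strings;
    the 4-byte little-endian CRC trailer covers the whole stream, so a
    restore can detect on-disk corruption before replaying any records.
    """
    crc = 0
    for rec in records:
        crc = zlib.crc32(rec, crc)  # chained CRC equals CRC of the concatenation
        out.write(rec)
    out.write(crc.to_bytes(4, "little"))
    return crc
```

The restorer recomputes the CRC while reading and compares it against the trailer; for the live-migration case (reliable channel) the trailer could simply be omitted.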


> >     PAGE_DATA
> >     ---------
> [...]
> >     --------------------------------------------------------------------
> >     Field       Description
> >     ----------- --------------------------------------------------------
> >     count       Number of pages described in this record.
> >
> >     pfn         An array of count PFNs. Bits 63-60 contain
> >                 the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> >
> >     page_data   page_size octets of uncompressed page contents for each
> page
> >                 set as present in the pfn array.
> >     --------------------------------------------------------------------
> >
> >
> > s/uncompressed/(compressed/uncompressed)/
> > (Remus sends compressed data)
>
> No.  I think compressed page data should have its own record type. The
> current scheme of mode flipping records seems crazy to me.
>
>
What record flipping? For page compression, Remus basically has a simple
XOR+RLE encoded sequence of bytes, preceded by a 4-byte length field.
Instead of sending the usual 4K per-page page_data, this compressed chunk
is sent. The additional code on the remote side is an additional "if"
block that uses xc_uncompress instead of memcpy to get the uncompressed
page.

It would not change the way the PAGE_DATA record would be transmitted.
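The XOR+RLE idea can be sketched roughly as follows: XOR the new page against the previously sent version, so that unchanged bytes become zero, then encode only the non-zero runs. This is an illustration of the principle with a hypothetical run format, not Remus' actual xc_compress wire format:

```python
def xor_rle_compress(prev, cur):
    """Diff `cur` against `prev` and keep only the changed runs.

    Returns a list of (offset, changed_bytes) pairs; the zero runs of the
    XOR delta (unchanged bytes) are skipped entirely.
    """
    delta = bytes(a ^ b for a, b in zip(prev, cur))
    runs, i, n = [], 0, len(delta)
    while i < n:
        if delta[i] == 0:
            i += 1
            continue
        j = i
        while j < n and delta[j] != 0:
            j += 1
        runs.append((i, cur[i:j]))  # store the new bytes with their offset
        i = j
    return runs

def xor_rle_apply(prev, runs):
    """Reconstruct the current page from the previous version plus runs."""
    page = bytearray(prev)
    for off, data in runs:
        page[off:off + len(data)] = data
    return bytes(page)
```

Since checkpoints usually touch only a small fraction of each dirty page, the runs are typically far smaller than the 4K page, which is where Remus gets its bandwidth savings.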

Though, one potentially cooler addition could be to use the option field
of the record header to indicate whether the data is compressed or not.
Given that we have 64 bits, we could even go as far as specifying the
type of compression module used (e.g., none, remus, gzip, etc.). This
might be really helpful when one wants to save/restore large images (an
8GB VM, for example) to/from disk. Is this better/worse than simply
gzipping the entire saved image? I don't know yet.
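Carrying the compression module in the record header's option bits might look like the following. The field layout and the compressor identifiers are purely illustrative, not draft B's actual header definition:

```python
import struct

# Hypothetical compressor identifiers carried in the low option byte
COMPRESS_NONE, COMPRESS_REMUS_XOR_RLE, COMPRESS_GZIP = 0, 1, 2

def pack_header(rec_type, length, compress=COMPRESS_NONE):
    """Pack a little-endian record header: 32-bit type, 32-bit body length,
    and a 64-bit options word whose low byte selects the compressor."""
    options = compress & 0xFF
    return struct.pack("<IIQ", rec_type, length, options)

def unpack_header(hdr):
    """Return (type, body length, compressor id) from a 16-byte header."""
    rec_type, length, options = struct.unpack("<IIQ", hdr)
    return rec_type, length, options & 0xFF
```

A receiver then dispatches on the compressor id per record, so uncompressed and compressed PAGE_DATA records can coexist in one stream without any mode flipping.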

However, for live migration this would be pretty helpful, especially when
migrating over long-latency networks. Remus' compression technique cannot
be used for live migration, as it requires a previous version of the
pages for XOR+RLE compression. However, gzip and other such compression
algorithms would be pretty handy in the live migration case, over a WAN
or even a clogged LAN where there are tons of VMs being moved back and
forth.

Feel free to shoot down this idea if it seems unfeasible.

 Thanks
shriram

ts, we could even</div><div>go as far as specifying the type of compression=
 module used (e.g., none, remus, gzip, etc.).</div><div>This might be reall=
y helpful when one wants to save/restore large images (a 8GB VM for example=
)<br>

</div><div>to/from disks. Is this better/worse than simply gzipping the ent=
ire saved image? I don&#39;t know yet.</div><div><br></div><div>However, fo=
r live migration, this would be pretty helpful (especially when migrating o=
ver long latency</div>

<div>networks). =A0Remus&#39; compression technique cannot be used for live=
 migration as it requires a previous</div><div>version of pages for XOR+RLE=
 compression. =A0However, gzip and other such compression algorithms</div>
<div>
would be pretty handy in the live migration case, over WAN or even a clogge=
d LAN, where there</div><div>are tons of VMs being moved back and forth.</d=
iv><div><br></div><div>Feel free to shoot down this idea if it seems unfeas=
ible.</div>

<div><br></div><div>=A0Thanks</div><div>shriram</div></div></div></div>

--90e6ba3fcf8f7ca20704f223be00--


--===============1782634504721351375==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1782634504721351375==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 16:14:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:14:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDFyM-0006D5-I1; Tue, 11 Feb 2014 16:14:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WDFyK-0006Cv-82
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 16:14:08 +0000
Received: from [85.158.143.35:41316] by server-2.bemta-4.messagelabs.com id
	14/2E-10891-F4C4AF25; Tue, 11 Feb 2014 16:14:07 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392135243!4883560!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8066 invoked from network); 11 Feb 2014 16:14:04 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 16:14:04 -0000
Received: from mail-ie0-f181.google.com (mail-ie0-f181.google.com
	[209.85.223.181]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1BGE1eQ001128
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 08:14:02 -0800
Received: by mail-ie0-f181.google.com with SMTP id to1so4348478ieb.26
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 08:14:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=Ch59HAGepQML1HOqfRqQForL4KH8HSoz8/dR3bXCvTk=;
	b=JWg09wez9oXRA31x8AZsqwtzheY2e3A9DKWeFTszAX9icummNjllxqUaSI0fsuck2q
	UxP5hBsmigYMMAp+VEld/36pX/jTlTZgdeBlzJtYCL/AKtGAVpAv6y++4dzXPALN0Srf
	zKOCIUkA0GnlWg0WzLT0exfpIMCiebsBQegMvhkHCuzvocHsu55CJEGSMOH8AHlsXEhc
	j1z1IyMjECi+s7lqHlM7QVAfOatKaUZWViHau9u1vzJa5rCQNGm1LRH4ggqMeWdZEZbY
	PdgVHG92hyFTmwPRKAm7bD//3g/VIhW4IbjO3ZgVlmHeWOl1FpnSJdBGgv6v33XrLHNf
	yVNA==
X-Received: by 10.42.92.194 with SMTP id u2mr24140497icm.19.1392135240457;
	Tue, 11 Feb 2014 08:14:00 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Tue, 11 Feb 2014 08:13:10 -0800 (PST)
In-Reply-To: <52FA1069.2040709@citrix.com>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
	<52FA1069.2040709@citrix.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Tue, 11 Feb 2014 10:13:10 -0600
Message-ID: <CAP8mzPMgH7rjm0OPwAmEiCKVJMeWXsCvYybHEb2FdnVD9VXXvA@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1782634504721351375=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1782634504721351375==
Content-Type: multipart/alternative; boundary=90e6ba3fcf8f7ca20704f223be00

--90e6ba3fcf8f7ca20704f223be00
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 11, 2014 at 5:58 AM, David Vrabel <david.vrabel@citrix.com>wrote:

> On 10/02/14 20:00, Shriram Rajagopalan wrote:
> > On Mon, Feb 10, 2014 at 9:20 AM, David Vrabel <david.vrabel@citrix.com
> > <mailto:david.vrabel@citrix.com>> wrote:
> >
> >
> > It's tempting to adopt all the TCP-style madness for transferring a set of
> > structured data.  Why this endian-ness mess?  Am I missing something
> here?
> > I am assuming that a lion's share of Xen's deployment is on x86
> > (not including Amazon). So that leaves ARM.  Why not let these
> > processors take the hit of endian-ness conversion?
>
> I'm not sure I would characterize a spec being precise about byte
> ordering as "endianness mess".
>
> I think it would be a pretty poor specification if it didn't specify
> byte ordering -- we can't have the tools having to make assumptions
> about the ordering.
>
>
Totally agree. But as someone else put it (and you did as well), my point
was that it's sufficient to specify it once, somewhere in the image header,
making sure (as you put it below) that the current use cases don't have to
go through needless endian conversion.
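
A minimal sketch of that "specify it once" idea (the marker value and the
names here are illustrative, not taken from the draft): the restorer probes
a 64-bit marker word in the image header exactly once, and only swaps the
fields of later records when the producer's endianness differs.

```c
#include <stdint.h>

/* Hypothetical marker written in the saver's native byte order; the
 * value is an arbitrary placeholder, not from the draft specification. */
#define IHDR_MARKER 0x0058454E00534156ULL

static uint64_t bswap64(uint64_t v)
{
    v = ((v & 0x00FF00FF00FF00FFULL) << 8)  | ((v >> 8)  & 0x00FF00FF00FF00FFULL);
    v = ((v & 0x0000FFFF0000FFFFULL) << 16) | ((v >> 16) & 0x0000FFFF0000FFFFULL);
    return (v << 32) | (v >> 32);
}

/* Decide once, at header-parse time: 0 = no swapping needed for any later
 * record, 1 = swap every multi-byte field, -1 = not a save image at all. */
static int image_needs_swap(uint64_t marker_as_read)
{
    if (marker_as_read == IHDR_MARKER)
        return 0;
    if (marker_as_read == bswap64(IHDR_MARKER))
        return 1;
    return -1;
}
```

On a same-endian restore the `needs_swap` answer is 0 and every record can
be used in place, which is the "no needless conversion" property above.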



> However, I do think it can be specified in such a way that all the
> current use cases don't have to do any byte swapping (except for the
> minimal header).
>
> >         +-----------------------+-------------------------+
> >         | checksum              | (reserved)              |
> >         +-----------------------+-------------------------+
> >
> >
> > I am assuming that the checksum field is present only
> > for debugging purposes? Otherwise, I see no reason for the
> > computational overhead, given that we are already sending data
> > over a reliable channel + IIRC we already have an image-wide checksum
> > when saving the image to disk.
>
> I'm not aware of any image wide checksum.
>

Yep. I was mistaken.


> The checksum seems like a potentially useful feature but I don't have a
> requirement for it so if no one else thinks it is useful it can be removed.
>
>
My suggestion: when saving the image to disk, why not have a single
image-wide checksum to ensure that the image being restored from disk is
still valid?
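
For instance (a sketch of the suggestion, not part of the proposal; CRC32
is just one convenient choice of checksum), the saver could accumulate a
single value over every byte it streams out and append it as a trailer,
and the restorer could recompute it before trusting an image read back
from disk:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (the common IEEE polynomial, reflected form).  Call it
 * on each buffer as it is written out, feeding the previous result back
 * in; the final value is the image-wide checksum to store in a trailer
 * and verify on restore. */
static uint32_t crc32_update(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int)(crc & 1));
    }
    return ~crc;
}
```

Because the pre/post complement is symmetric, the function chains: the
checksum of one long write equals the checksum of the same bytes fed in
as several smaller writes.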


> >     PAGE_DATA
> >     ---------
> [...]
> >     --------------------------------------------------------------------
> >     Field       Description
> >     ----------- --------------------------------------------------------
> >     count       Number of pages described in this record.
> >
> >     pfn         An array of count PFNs. Bits 63-60 contain
> >                 the XEN\_DOMCTL\_PFINFO_* value for that PFN.
> >
> >     page_data   page_size octets of uncompressed page contents for each
> page
> >                 set as present in the pfn array.
> >     --------------------------------------------------------------------
> >
> >
> > s/uncompressed/(compressed/uncompressed)/
> > (Remus sends compressed data)
>
> No.  I think compressed page data should have its own record type. The
> current scheme of mode flipping records seems crazy to me.
>
>
What record flipping? For page compression, Remus basically has a simple
XOR+RLE encoded sequence of bytes, preceded by a 4-byte length field.
Instead of sending the usual 4K per-page page_data, this compressed chunk
is sent.
The additional code on the remote side is an additional "if" block that
uses xc_uncompress instead of memcpy to get the uncompressed page.

It would not change the way the PAGE_DATA record would be transmitted.
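
A sketch of that receive-side branch. The decompressor below is a
stand-in trivial RLE, only there to make the control flow concrete:
Remus' real codec is XOR+RLE against the previous version of the page,
and the names here are illustrative rather than the actual libxc API.

```c
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 4096

/* Stand-in decompressor: (count, byte) run-length pairs.  Returns the
 * number of output bytes produced, or -1 on output-buffer overflow. */
static int rle_uncompress(const uint8_t *in, uint32_t in_len,
                          uint8_t *out, uint32_t out_max)
{
    uint32_t o = 0;

    for (uint32_t i = 0; i + 1 < in_len; i += 2) {
        uint32_t count = in[i];
        if (o + count > out_max)
            return -1;
        memset(out + o, in[i + 1], count);
        o += count;
    }
    return (int)o;
}

/* The extra "if" block on the remote side: compressed chunks (prefixed
 * on the wire by a 4-byte length, assumed already parsed into 'len')
 * are expanded; ordinary page_data is copied verbatim. */
static int receive_page(const uint8_t *payload, uint32_t len,
                        int compressed, uint8_t page[PAGE_SIZE])
{
    if (compressed)
        return rle_uncompress(payload, len, page, PAGE_SIZE);
    memcpy(page, payload, PAGE_SIZE);
    return PAGE_SIZE;
}
```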

Though, one potentially cooler addition could be to use the option field of
the record header to indicate whether the data is compressed or not. Given
that we have 64 bits, we could even go as far as specifying the type of
compression module used (e.g., none, remus, gzip, etc.).
This might be really helpful when one wants to save/restore large images
(an 8GB VM, for example) to/from disk. Is this better/worse than simply
gzipping the entire saved image? I don't know yet.
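
A sketch of what using the 64-bit options field could look like; the bit
layout, field width, and module IDs below are made up for illustration and
are not part of the draft.

```c
#include <stdint.h>

/* Hypothetical layout: the low 4 bits of the record header's 64-bit
 * options field name the compression module (room for 16 of them). */
enum comp_module {
    COMP_NONE         = 0,
    COMP_REMUS_XORRLE = 1,
    COMP_GZIP         = 2,
};

#define OPT_COMP_MASK 0xFULL

static uint64_t opt_set_comp(uint64_t options, enum comp_module m)
{
    return (options & ~OPT_COMP_MASK) | ((uint64_t)m & OPT_COMP_MASK);
}

static enum comp_module opt_get_comp(uint64_t options)
{
    return (enum comp_module)(options & OPT_COMP_MASK);
}
```

COMP_NONE = 0 keeps today's uncompressed PAGE_DATA records valid unchanged,
since an all-zero options field already means "no compression".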

However, for live migration, this would be pretty helpful (especially when
migrating over long latency
networks).  Remus' compression technique cannot be used for live migration
as it requires a previous
version of pages for XOR+RLE compression.  However, gzip and other such
compression algorithms
would be pretty handy in the live migration case, over WAN or even a
clogged LAN, where there
are tons of VMs being moved back and forth.

Feel free to shoot down this idea if it seems infeasible.

 Thanks
shriram

--90e6ba3fcf8f7ca20704f223be00--



--===============1782634504721351375==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 16:22:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDG6b-0006dY-Qa; Tue, 11 Feb 2014 16:22:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WDG6a-0006cj-7x
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 16:22:40 +0000
Received: from [85.158.143.35:32769] by server-2.bemta-4.messagelabs.com id
	B0/8C-10891-F4E4AF25; Tue, 11 Feb 2014 16:22:39 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392135756!4879637!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9665 invoked from network); 11 Feb 2014 16:22:38 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 16:22:38 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 11 Feb 2014 16:22:36 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="650957304"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.168])
	by fldsmtpi03.verizon.com with ESMTP; 11 Feb 2014 16:22:35 +0000
Message-ID: <52FA4E4B.2000906@terremark.com>
Date: Tue, 11 Feb 2014 11:22:35 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>, 
 xen-devel@lists.xenproject.org
References: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
Cc: Ian Jackson <ian.jackson@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: Re: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMTEvMTQgMDU6MzgsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBTaWduZWQtb2ZmLWJ5
OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KPiBDYzogRG9uIFNsdXR6
IDxkc2x1dHpAdmVyaXpvbi5jb20+Cj4gQ2M6IElhbiBDYW1wYmVsbCA8aWFuLmNhbXBiZWxsQGNp
dHJpeC5jb20+Cj4gQ2M6IElhbiBKYWNrc29uIDxpYW4uamFja3NvbkBjaXRyaXguY29tPgo+IC0t
LQo+IEkndmUgcmVtb3ZlZCBEb24ncyAiVGVzdGVkLWJ5IiBiZWNhdXNlIHRoZSBwYXRjaCBoYXMg
Y2hhbmdlZCwgY291bGQKPiB5b3UgcGxlYXNlIHJlLXRlc3QgaXQgRG9uPwo+Cj4gUGxlYXNlIHJ1
biBhdXRvZ2VuLnNoIGFmdGVyIGFwcGx5aW5nLgoKSSBoYXZlIHJldGVzdGVkIGFmdGVyIGFwcGx5
aW5nLCBhbmQgYXV0b2dlbi5zaCByYW4gY2xlYW5seSBmb3IgbWUgb24gRmVkb3JhIDE3IHN5c3Rl
bS4gIEZhaWxzIChhcyBleHBlY3RlZCBvbiBDZW50T1MgNS4xMCkuCgpTbyB5b3UgY2FuIGFkZAoK
VGVzdGVkLWJ5OiBEb24gU2x1dHogPGRzbHV0ekB2ZXJpem9uLmNvbT4KCiAgICAtRG9uIFNsdXR6
Cgo+IC0tLQo+ICAgbTQvYXhfY29tcGFyZV92ZXJzaW9uLm00IHwgIDE3OSArKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrCj4gICB0b29scy9jb25maWd1cmUuYWMg
ICAgICAgfCAgICA3ICsrCj4gICAyIGZpbGVzIGNoYW5nZWQsIDE4NiBpbnNlcnRpb25zKCspLCAw
IGRlbGV0aW9ucygtKQo+ICAgY3JlYXRlIG1vZGUgMTAwNjQ0IG00L2F4X2NvbXBhcmVfdmVyc2lv
bi5tNAo+Cj4gZGlmZiAtLWdpdCBhL200L2F4X2NvbXBhcmVfdmVyc2lvbi5tNCBiL200L2F4X2Nv
bXBhcmVfdmVyc2lvbi5tNAo+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0Cj4gaW5kZXggMDAwMDAwMC4u
MjZmNGRlYwo+IC0tLSAvZGV2L251bGwKPiArKysgYi9tNC9heF9jb21wYXJlX3ZlcnNpb24ubTQK
PiBAQCAtMCwwICsxLDE3OSBAQAo+ICsjIEZldGNoZWQgZnJvbSBodHRwOi8vZ2l0LnNhdmFubmFo
LmdudS5vcmcvZ2l0d2ViLz9wPWF1dG9jb25mLWFyY2hpdmUuZ2l0O2E9YmxvYl9wbGFpbjtmPW00
L2F4X2NvbXBhcmVfdmVyc2lvbi5tNAo+ICsjIENvbW1pdCBJRDogMjc5NDhmNDljYTMwZTQyMjJi
YjdjZGQ1NTE4MmJkNzM0MWFjNTBjNQo+ICsjID09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQo+ICsjICAgIGh0
dHA6Ly93d3cuZ251Lm9yZy9zb2Z0d2FyZS9hdXRvY29uZi1hcmNoaXZlL2F4X2NvbXBhcmVfdmVy
c2lvbi5odG1sCj4gKyMgPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Cj4gKyMKPiArIyBTWU5PUFNJUwo+ICsj
Cj4gKyMgICBBWF9DT01QQVJFX1ZFUlNJT04oVkVSU0lPTl9BLCBPUCwgVkVSU0lPTl9CLCBbQUNU
SU9OLUlGLVRSVUVdLCBbQUNUSU9OLUlGLUZBTFNFXSkKPiArIwo+ICsjIERFU0NSSVBUSU9OCj4g
KyMKPiArIyAgIFRoaXMgbWFjcm8gY29tcGFyZXMgdHdvIHZlcnNpb24gc3RyaW5ncy4gRHVlIHRv
IHRoZSB2YXJpb3VzIG51bWJlciBvZgo+ICsjICAgbWlub3ItdmVyc2lvbiBudW1iZXJzIHRoYXQg
Y2FuIGV4aXN0LCBhbmQgdGhlIGZhY3QgdGhhdCBzdHJpbmcKPiArIyAgIGNvbXBhcmlzb25zIGFy
ZSBub3QgY29tcGF0aWJsZSB3aXRoIG51bWVyaWMgY29tcGFyaXNvbnMsIHRoaXMgaXMgbm90Cj4g
KyMgICBuZWNlc3NhcmlseSB0cml2aWFsIHRvIGRvIGluIGEgYXV0b2NvbmYgc2NyaXB0LiBUaGlz
IG1hY3JvIG1ha2VzIGRvaW5nCj4gKyMgICB0aGVzZSBjb21wYXJpc29ucyBlYXN5Lgo+ICsjCj4g
KyMgICBUaGUgc2l4IGJhc2ljIGNvbXBhcmlzb25zIGFyZSBhdmFpbGFibGUsIGFzIHdlbGwgYXMg
Y2hlY2tpbmcgZXF1YWxpdHkKPiArIyAgIGxpbWl0ZWQgdG8gYSBjZXJ0YWluIG51bWJlciBvZiBt
aW5vci12ZXJzaW9uIGxldmVscy4KPiArIwo+ICsjICAgVGhlIG9wZXJhdG9yIE9QIGRldGVybWlu
ZXMgd2hhdCB0eXBlIG9mIGNvbXBhcmlzb24gdG8gZG8sIGFuZCBjYW4gYmUgb25lCj4gKyMgICBv
ZjoKPiArIwo+ICsjICAgIGVxICAtIGVxdWFsICh0ZXN0IEEgPT0gQikKPiArIyAgICBuZSAgLSBu
b3QgZXF1YWwgKHRlc3QgQSAhPSBCKQo+ICsjICAgIGxlICAtIGxlc3MgdGhhbiBvciBlcXVhbCAo
dGVzdCBBIDw9IEIpCj4gKyMgICAgZ2UgIC0gZ3JlYXRlciB0aGFuIG9yIGVxdWFsICh0ZXN0IEEg
Pj0gQikKPiArIyAgICBsdCAgLSBsZXNzIHRoYW4gKHRlc3QgQSA8IEIpCj4gKyMgICAgZ3QgIC0g
Z3JlYXRlciB0aGFuICh0ZXN0IEEgPiBCKQo+ICsjCj4gKyMgICBBZGRpdGlvbmFsbHksIHRoZSBl
cSBhbmQgbmUgb3BlcmF0b3IgY2FuIGhhdmUgYSBudW1iZXIgYWZ0ZXIgaXQgdG8gbGltaXQKPiAr
IyAgIHRoZSB0ZXN0IHRvIHRoYXQgbnVtYmVyIG9mIG1pbm9yIHZlcnNpb25zLgo+ICsjCj4gKyMg
ICAgZXEwIC0gZXF1YWwgdXAgdG8gdGhlIGxlbmd0aCBvZiB0aGUgc2hvcnRlciB2ZXJzaW9uCj4g
KyMgICAgbmUwIC0gbm90IGVxdWFsIHVwIHRvIHRoZSBsZW5ndGggb2YgdGhlIHNob3J0ZXIgdmVy
c2lvbgo+ICsjICAgIGVxTiAtIGVxdWFsIHVwIHRvIE4gc3ViLXZlcnNpb24gbGV2ZWxzCj4gKyMg
ICAgbmVOIC0gbm90IGVxdWFsIHVwIHRvIE4gc3ViLXZlcnNpb24gbGV2ZWxzCj4gKyMKPiArIyAg
IFdoZW4gdGhlIGNvbmRpdGlvbiBpcyB0cnVlLCBzaGVsbCBjb21tYW5kcyBBQ1RJT04tSUYtVFJV
RSBhcmUgcnVuLAo+ICsjICAgb3RoZXJ3aXNlIHNoZWxsIGNvbW1hbmRzIEFDVElPTi1JRi1GQUxT
RSBhcmUgcnVuLiBUaGUgZW52aXJvbm1lbnQKPiArIyAgIHZhcmlhYmxlICdheF9jb21wYXJlX3Zl
cnNpb24nIGlzIGFsd2F5cyBzZXQgdG8gZWl0aGVyICd0cnVlJyBvciAnZmFsc2UnCj4gKyMgICBh
cyB3ZWxsLgo+ICsjCj4gKyMgICBFeGFtcGxlczoKPiArIwo+ICsjICAgICBBWF9DT01QQVJFX1ZF
UlNJT04oWzMuMTUuN10sW2x0XSxbMy4xNS44XSkKPiArIyAgICAgQVhfQ09NUEFSRV9WRVJTSU9O
KFszLjE1XSxbbHRdLFszLjE1LjhdKQo+ICsjCj4gKyMgICB3b3VsZCBib3RoIGJlIHRydWUuCj4g
KyMKPiArIyAgICAgQVhfQ09NUEFSRV9WRVJTSU9OKFszLjE1LjddLFtlcV0sWzMuMTUuOF0pCj4g
KyMgICAgIEFYX0NPTVBBUkVfVkVSU0lPTihbMy4xNV0sW2d0XSxbMy4xNS44XSkKPiArIwo+ICsj
ICAgd291bGQgYm90aCBiZSBmYWxzZS4KPiArIwo+ICsjICAgICBBWF9DT01QQVJFX1ZFUlNJT04o
WzMuMTUuN10sW2VxMl0sWzMuMTUuOF0pCj4gKyMKPiArIyAgIHdvdWxkIGJlIHRydWUgYmVjYXVz
ZSBpdCBpcyBvbmx5IGNvbXBhcmluZyB0d28gbWlub3IgdmVyc2lvbnMuCj4gKyMKPiArIyAgICAg
QVhfQ09NUEFSRV9WRVJTSU9OKFszLjE1LjddLFtlcTBdLFszLjE1XSkKPiArIwo+ICsjICAgd291
bGQgYmUgdHJ1ZSBiZWNhdXNlIGl0IGlzIG9ubHkgY29tcGFyaW5nIHRoZSBsZXNzZXIgbnVtYmVy
IG9mIG1pbm9yCj4gKyMgICB2ZXJzaW9ucyBvZiB0aGUgdHdvIHZhbHVlcy4KPiArIwo+ICsjICAg
Tm90ZTogVGhlIGNoYXJhY3RlcnMgdGhhdCBzZXBhcmF0ZSB0aGUgdmVyc2lvbiBudW1iZXJzIGRv
IG5vdCBtYXR0ZXIuIEFuCj4gKyMgICBlbXB0eSBzdHJpbmcgaXMgdGhlIHNhbWUgYXMgdmVyc2lv
biAwLiBPUCBpcyBldmFsdWF0ZWQgYnkgYXV0b2NvbmYsIG5vdAo+ICsjICAgY29uZmlndXJlLCBz
byBtdXN0IGJlIGEgc3RyaW5nLCBub3QgYSB2YXJpYWJsZS4KPiArIwo+ICsjICAgVGhlIGF1dGhv
ciB3b3VsZCBsaWtlIHRvIGFja25vd2xlZGdlIEd1aWRvIERyYWhlaW0gd2hvc2UgYWR2aWNlIGFi
b3V0Cj4gKyMgICB0aGUgbTRfY2FzZSBhbmQgbTRfaWZ2YWxuIGZ1bmN0aW9ucyBtYWtlIHRoaXMg
bWFjcm8gb25seSBpbmNsdWRlIHRoZQo+ICsjICAgcG9ydGlvbnMgbmVjZXNzYXJ5IHRvIHBlcmZv
cm0gdGhlIHNwZWNpZmljIGNvbXBhcmlzb24gc3BlY2lmaWVkIGJ5IHRoZQo+ICsjICAgT1AgYXJn
dW1lbnQgaW4gdGhlIGZpbmFsIGNvbmZpZ3VyZSBzY3JpcHQuCj4gKyMKPiArIyBMSUNFTlNFCj4g
KyMKPiArIyAgIENvcHlyaWdodCAoYykgMjAwOCBUaW0gVG9vbGFuIDx0b29sYW5AZWxlLnVyaS5l
ZHU+Cj4gKyMKPiArIyAgIENvcHlpbmcgYW5kIGRpc3RyaWJ1dGlvbiBvZiB0aGlzIGZpbGUsIHdp
dGggb3Igd2l0aG91dCBtb2RpZmljYXRpb24sIGFyZQo+ICsjICAgcGVybWl0dGVkIGluIGFueSBt
ZWRpdW0gd2l0aG91dCByb3lhbHR5IHByb3ZpZGVkIHRoZSBjb3B5cmlnaHQgbm90aWNlCj4gKyMg
ICBhbmQgdGhpcyBub3RpY2UgYXJlIHByZXNlcnZlZC4gVGhpcyBmaWxlIGlzIG9mZmVyZWQgYXMt
aXMsIHdpdGhvdXQgYW55Cj4gKyMgICB3YXJyYW50eS4KPiArCj4gKyNzZXJpYWwgMTEKPiArCj4g
K2RubCAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjCj4gK0FDX0RFRlVOKFtBWF9DT01QQVJFX1ZFUlNJT05dLCBb
Cj4gKyAgQUNfUkVRVUlSRShbQUNfUFJPR19BV0tdKQo+ICsKPiArICAjIFVzZWQgdG8gaW5kaWNh
dGUgdHJ1ZSBvciBmYWxzZSBjb25kaXRpb24KPiArICBheF9jb21wYXJlX3ZlcnNpb249ZmFsc2UK
PiArCj4gKyAgIyBDb252ZXJ0IHRoZSB0d28gdmVyc2lvbiBzdHJpbmdzIHRvIGJlIGNvbXBhcmVk
IGludG8gYSBmb3JtYXQgdGhhdAo+ICsgICMgYWxsb3dzIGEgc2ltcGxlIHN0cmluZyBjb21wYXJp
c29uLiAgVGhlIGVuZCByZXN1bHQgaXMgdGhhdCBhIHZlcnNpb24KPiArICAjIHN0cmluZyBvZiB0
aGUgZm9ybSAxLjEyLjUtcjYxNyB3aWxsIGJlIGNvbnZlcnRlZCB0byB0aGUgZm9ybQo+ICsgICMg
MDAwMTAwMTIwMDA1MDYxNy4gIEluIG90aGVyIHdvcmRzLCBlYWNoIG51bWJlciBpcyB6ZXJvIHBh
ZGRlZCB0byBmb3VyCj4gKyAgIyBkaWdpdHMsIGFuZCBub24gZGlnaXRzIGFyZSByZW1vdmVkLgo+
ICsgIEFTX1ZBUl9QVVNIREVGKFtBXSxbYXhfY29tcGFyZV92ZXJzaW9uX0FdKQo+ICsgIEE9YGVj
aG8gIiQxIiB8IHNlZCAtZSAncy9cKFtbMC05XV0qXCkvWlwxWi9nJyBcCj4gKyAgICAgICAgICAg
ICAgICAgICAgIC1lICdzL1pcKFtbMC05XV1cKVovWjBcMVovZycgXAo+ICsgICAgICAgICAgICAg
ICAgICAgICAtZSAncy9aXChbWzAtOV1dW1swLTldXVwpWi9aMFwxWi9nJyBcCj4gKyAgICAgICAg
ICAgICAgICAgICAgIC1lICdzL1pcKFtbMC05XV1bWzAtOV1dW1swLTldXVwpWi9aMFwxWi9nJyBc
Cj4gKyAgICAgICAgICAgICAgICAgICAgIC1lICdzL1tbXjAtOV1dLy9nJ2AKPiArCj4gKyAgQVNf
VkFSX1BVU0hERUYoW0JdLFtheF9jb21wYXJlX3ZlcnNpb25fQl0pCj4gKyAgQj1gZWNobyAiJDMi
IHwgc2VkIC1lICdzL1woW1swLTldXSpcKS9aXDFaL2cnIFwKPiArICAgICAgICAgICAgICAgICAg
ICAgLWUgJ3MvWlwoW1swLTldXVwpWi9aMFwxWi9nJyBcCj4gKyAgICAgICAgICAgICAgICAgICAg
IC1lICdzL1pcKFtbMC05XV1bWzAtOV1dXClaL1owXDFaL2cnIFwKPiArICAgICAgICAgICAgICAg
ICAgICAgLWUgJ3MvWlwoW1swLTldXVtbMC05XV1bWzAtOV1dXClaL1owXDFaL2cnIFwKPiArICAg
ICAgICAgICAgICAgICAgICAgLWUgJ3MvW1teMC05XV0vL2cnYAo+ICsKPiArICBkbmwgIyBJbiB0
aGUgY2FzZSBvZiBsZSwgZ2UsIGx0LCBhbmQgZ3QsIHRoZSBzdHJpbmdzIGFyZSBzb3J0ZWQgYXMg
bmVjZXNzYXJ5Cj4gKyAgZG5sICMgdGhlbiB0aGUgZmlyc3QgbGluZSBpcyB1c2VkIHRvIGRldGVy
bWluZSBpZiB0aGUgY29uZGl0aW9uIGlzIHRydWUuCj4gKyAgZG5sICMgVGhlIHNlZCByaWdodCBh
ZnRlciB0aGUgZWNobyBpcyB0byByZW1vdmUgYW55IGluZGVudGVkIHdoaXRlIHNwYWNlLgo+ICsg
IG00X2Nhc2UobTRfdG9sb3dlcigkMiksCj4gKyAgW2x0XSxbCj4gKyAgICBheF9jb21wYXJlX3Zl
cnNpb249YGVjaG8gIngkQQo+ICt4JEIiIHwgc2VkICdzL14gKi8vJyB8IHNvcnQgLXIgfCBzZWQg
InMveCR7QX0vZmFsc2UvO3MveCR7Qn0vdHJ1ZS87MXEiYAo+ICsgIF0sCj4gKyAgW2d0XSxbCj4g
KyAgICBheF9jb21wYXJlX3ZlcnNpb249YGVjaG8gIngkQQo+ICt4JEIiIHwgc2VkICdzL14gKi8v
JyB8IHNvcnQgfCBzZWQgInMveCR7QX0vZmFsc2UvO3MveCR7Qn0vdHJ1ZS87MXEiYAo+ICsgIF0s
Cj4gKyAgW2xlXSxbCj4gKyAgICBheF9jb21wYXJlX3ZlcnNpb249YGVjaG8gIngkQQo+ICt4JEIi
IHwgc2VkICdzL14gKi8vJyB8IHNvcnQgfCBzZWQgInMveCR7QX0vdHJ1ZS87cy94JHtCfS9mYWxz
ZS87MXEiYAo+ICsgIF0sCj4gKyAgW2dlXSxbCj4gKyAgICBheF9jb21wYXJlX3ZlcnNpb249YGVj
aG8gIngkQQo+ICt4JEIiIHwgc2VkICdzL14gKi8vJyB8IHNvcnQgLXIgfCBzZWQgInMveCR7QX0v
dHJ1ZS87cy94JHtCfS9mYWxzZS87MXEiYAo+ICsgIF0sWwo+ICsgICAgZG5sIFNwbGl0IHRoZSBv
cGVyYXRvciBmcm9tIHRoZSBzdWJ2ZXJzaW9uIGNvdW50IGlmIHByZXNlbnQuCj4gKyAgICBtNF9i
bWF0Y2gobTRfc3Vic3RyKCQyLDIpLAo+ICsgICAgWzBdLFsKPiArICAgICAgIyBBIGNvdW50IG9m
IHplcm8gbWVhbnMgdXNlIHRoZSBsZW5ndGggb2YgdGhlIHNob3J0ZXIgdmVyc2lvbi4KPiArICAg
ICAgIyBEZXRlcm1pbmUgdGhlIG51bWJlciBvZiBjaGFyYWN0ZXJzIGluIEEgYW5kIEIuCj4gKyAg
ICAgIGF4X2NvbXBhcmVfdmVyc2lvbl9sZW5fQT1gZWNobyAiJEEiIHwgJEFXSyAne3ByaW50KGxl
bmd0aCl9J2AKPiArICAgICAgYXhfY29tcGFyZV92ZXJzaW9uX2xlbl9CPWBlY2hvICIkQiIgfCAk
QVdLICd7cHJpbnQobGVuZ3RoKX0nYAo+ICsKPiArICAgICAgIyBTZXQgQSB0byBubyBtb3JlIHRo
YW4gQidzIGxlbmd0aCBhbmQgQiB0byBubyBtb3JlIHRoYW4gQSdzIGxlbmd0aC4KPiArICAgICAg
QT1gZWNobyAiJEEiIHwgc2VkICJzL1woLlx7JGF4X2NvbXBhcmVfdmVyc2lvbl9sZW5fQlx9XCku
Ki9cMS8iYAo+ICsgICAgICBCPWBlY2hvICIkQiIgfCBzZWQgInMvXCguXHskYXhfY29tcGFyZV92
ZXJzaW9uX2xlbl9BXH1cKS4qL1wxLyJgCj4gKyAgICBdLAo+ICsgICAgW1swLTldK10sWwo+ICsg
ICAgICAjIEEgY291bnQgZ3JlYXRlciB0aGFuIHplcm8gbWVhbnMgdXNlIG9ubHkgdGhhdCBtYW55
IHN1YnZlcnNpb25zCj4gKyAgICAgIEE9YGVjaG8gIiRBIiB8IHNlZCAicy9cKFwoW1swLTldXVx7
NFx9XClce200X3N1YnN0cigkMiwyKVx9XCkuKi9cMS8iYAo+ICsgICAgICBCPWBlY2hvICIkQiIg
fCBzZWQgInMvXChcKFtbMC05XV1cezRcfVwpXHttNF9zdWJzdHIoJDIsMilcfVwpLiovXDEvImAK
PiArICAgIF0sCj4gKyAgICBbLitdLFsKPiArICAgICAgQUNfV0FSTklORygKPiArICAgICAgICBb
aWxsZWdhbCBPUCBudW1lcmljIHBhcmFtZXRlcjogJDJdKQo+ICsgICAgXSxbXSkKPiArCj4gKyAg
ICAjIFBhZCB6ZXJvcyBhdCBlbmQgb2YgbnVtYmVycyB0byBtYWtlIHNhbWUgbGVuZ3RoLgo+ICsg
ICAgYXhfY29tcGFyZV92ZXJzaW9uX3RtcF9BPSIkQWBlY2hvICRCIHwgc2VkICdzLy4vMC9nJ2Ai
Cj4gKyAgICBCPSIkQmBlY2hvICRBIHwgc2VkICdzLy4vMC9nJ2AiCj4gKyAgICBBPSIkYXhfY29t
cGFyZV92ZXJzaW9uX3RtcF9BIgo+ICsKPiArICAgICMgQ2hlY2sgZm9yIGVxdWFsaXR5IG9yIGlu
ZXF1YWxpdHkgYXMgbmVjZXNzYXJ5Lgo+ICsgICAgbTRfY2FzZShtNF90b2xvd2VyKG00X3N1YnN0
cigkMiwwLDIpKSwKPiArICAgIFtlcV0sWwo+ICsgICAgICB0ZXN0ICJ4JEEiID0gIngkQiIgJiYg
YXhfY29tcGFyZV92ZXJzaW9uPXRydWUKPiArICAgIF0sCj4gKyAgICBbbmVdLFsKPiArICAgICAg
dGVzdCAieCRBIiAhPSAieCRCIiAmJiBheF9jb21wYXJlX3ZlcnNpb249dHJ1ZQo+ICsgICAgXSxb
Cj4gKyAgICAgIEFDX1dBUk5JTkcoW2lsbGVnYWwgT1AgcGFyYW1ldGVyOiAkMl0pCj4gKyAgICBd
KQo+ICsgIF0pCj4gKwo+ICsgIEFTX1ZBUl9QT1BERUYoW0FdKWRubAo+ICsgIEFTX1ZBUl9QT1BE
RUYoW0JdKWRubAo+ICsKPiArICBkbmwgIyBFeGVjdXRlIEFDVElPTi1JRi1UUlVFIC8gQUNUSU9O
LUlGLUZBTFNFLgo+ICsgIGlmIHRlc3QgIiRheF9jb21wYXJlX3ZlcnNpb24iID0gInRydWUiIDsg
dGhlbgo+ICsgICAgbTRfaWZ2YWxuKFskNF0sWyQ0XSxbOl0pZG5sCj4gKyAgICBtNF9pZnZhbG4o
WyQ1XSxbZWxzZSAkNV0pZG5sCj4gKyAgZmkKPiArXSkgZG5sIEFYX0NPTVBBUkVfVkVSU0lPTgo+
IGRpZmYgLS1naXQgYS90b29scy9jb25maWd1cmUuYWMgYi90b29scy9jb25maWd1cmUuYWMKPiBp
bmRleCAwNzU0ZjBlLi4zNDk2YzEyIDEwMDY0NAo+IC0tLSBhL3Rvb2xzL2NvbmZpZ3VyZS5hYwo+
ICsrKyBiL3Rvb2xzL2NvbmZpZ3VyZS5hYwo+IEBAIC00Niw2ICs0Niw3IEBAIG00X2luY2x1ZGUo
Wy4uL200L3B0aHJlYWQubTRdKQo+ICAgbTRfaW5jbHVkZShbLi4vbTQvcHR5ZnVuY3MubTRdKQo+
ICAgbTRfaW5jbHVkZShbLi4vbTQvZXh0ZnMubTRdKQo+ICAgbTRfaW5jbHVkZShbLi4vbTQvZmV0
Y2hlci5tNF0pCj4gK200X2luY2x1ZGUoWy4uL200L2F4X2NvbXBhcmVfdmVyc2lvbi5tNF0pCj4g
ICAKPiAgICMgRW5hYmxlL2Rpc2FibGUgb3B0aW9ucwo+ICAgQVhfQVJHX0RFRkFVTFRfRElTQUJM
RShbZ2l0aHR0cF0sIFtEb3dubG9hZCBHSVQgcmVwb3NpdG9yaWVzIHZpYSBIVFRQXSkKPiBAQCAt
MTYxLDYgKzE2MiwxMiBAQCBBU19JRihbdGVzdCAieCRvY2FtbHRvb2xzIiA9ICJ4eSJdLCBbCj4g
ICAgICAgICAgIEFTX0lGKFt0ZXN0ICJ4JGVuYWJsZV9vY2FtbHRvb2xzIiA9ICJ4eWVzIl0sIFsK
PiAgICAgICAgICAgICAgIEFDX01TR19FUlJPUihbT2NhbWwgdG9vbHMgZW5hYmxlZCwgYnV0IHVu
YWJsZSB0byBmaW5kIE9jYW1sXSldKQo+ICAgICAgICAgICBvY2FtbHRvb2xzPSJuIgo+ICsgICAg
XSwgWwo+ICsgICAgICAgIEFYX0NPTVBBUkVfVkVSU0lPTihbJE9DQU1MVkVSU0lPTl0sIFtsdF0s
IFszLjA5LjNdLCBbCj4gKyAgICAgICAgICAgIEFTX0lGKFt0ZXN0ICJ4JGVuYWJsZV9vY2FtbHRv
b2xzIiA9ICJ4eWVzIl0sIFsKPiArICAgICAgICAgICAgICAgIEFDX01TR19FUlJPUihbWW91ciB2
ZXJzaW9uIG9mIE9DYW1sOiAkT0NBTUxWRVJTSU9OIGlzIG5vdCBzdXBwb3J0ZWRdKV0pCj4gKyAg
ICAgICAgICAgIG9jYW1sdG9vbHM9Im4iCj4gKyAgICAgICAgXSkKPiAgICAgICBdKQo+ICAgXSkK
PiAgIEFTX0lGKFt0ZXN0ICJ4JHhzbXBvbGljeSIgPSAieHkiXSwgWwoKCl9fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QK
WGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:22:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDG6b-0006dY-Qa; Tue, 11 Feb 2014 16:22:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WDG6a-0006cj-7x
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 16:22:40 +0000
Received: from [85.158.143.35:32769] by server-2.bemta-4.messagelabs.com id
	B0/8C-10891-F4E4AF25; Tue, 11 Feb 2014 16:22:39 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392135756!4879637!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9665 invoked from network); 11 Feb 2014 16:22:38 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 16:22:38 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 11 Feb 2014 16:22:36 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="650957304"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.168])
	by fldsmtpi03.verizon.com with ESMTP; 11 Feb 2014 16:22:35 +0000
Message-ID: <52FA4E4B.2000906@terremark.com>
Date: Tue, 11 Feb 2014 11:22:35 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>, 
 xen-devel@lists.xenproject.org
References: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
Cc: Ian Jackson <ian.jackson@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>
Subject: Re: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMTEvMTQgMDU6MzgsIFJvZ2VyIFBhdSBNb25uZSB3cm90ZToKPiBTaWduZWQtb2ZmLWJ5
OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KPiBDYzogRG9uIFNsdXR6
IDxkc2x1dHpAdmVyaXpvbi5jb20+Cj4gQ2M6IElhbiBDYW1wYmVsbCA8aWFuLmNhbXBiZWxsQGNp
dHJpeC5jb20+Cj4gQ2M6IElhbiBKYWNrc29uIDxpYW4uamFja3NvbkBjaXRyaXguY29tPgo+IC0t
LQo+IEkndmUgcmVtb3ZlZCBEb24ncyAiVGVzdGVkLWJ5IiBiZWNhdXNlIHRoZSBwYXRjaCBoYXMg
Y2hhbmdlZCwgY291bGQKPiB5b3UgcGxlYXNlIHJlLXRlc3QgaXQgRG9uPwo+Cj4gUGxlYXNlIHJ1
biBhdXRvZ2VuLnNoIGFmdGVyIGFwcGx5aW5nLgoKSSBoYXZlIHJldGVzdGVkIGFmdGVyIGFwcGx5
aW5nLCBhbmQgYXV0b2dlbi5zaCByYW4gY2xlYW5seSBmb3IgbWUgb24gRmVkb3JhIDE3IHN5c3Rl
bS4gIEZhaWxzIChhcyBleHBlY3RlZCBvbiBDZW50T1MgNS4xMCkuCgpTbyB5b3UgY2FuIGFkZAoK
VGVzdGVkLWJ5OiBEb24gU2x1dHogPGRzbHV0ekB2ZXJpem9uLmNvbT4KCiAgICAtRG9uIFNsdXR6
Cgo+IC0tLQo+ICAgbTQvYXhfY29tcGFyZV92ZXJzaW9uLm00IHwgIDE3OSArKysrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrCj4gICB0b29scy9jb25maWd1cmUuYWMg
ICAgICAgfCAgICA3ICsrCj4gICAyIGZpbGVzIGNoYW5nZWQsIDE4NiBpbnNlcnRpb25zKCspLCAw
IGRlbGV0aW9ucygtKQo+ICAgY3JlYXRlIG1vZGUgMTAwNjQ0IG00L2F4X2NvbXBhcmVfdmVyc2lv
bi5tNAo+Cj4gZGlmZiAtLWdpdCBhL200L2F4X2NvbXBhcmVfdmVyc2lvbi5tNCBiL200L2F4X2Nv
bXBhcmVfdmVyc2lvbi5tNAo+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0Cj4gaW5kZXggMDAwMDAwMC4u
MjZmNGRlYwo+IC0tLSAvZGV2L251bGwKPiArKysgYi9tNC9heF9jb21wYXJlX3ZlcnNpb24ubTQK
PiBAQCAtMCwwICsxLDE3OSBAQAo+ICsjIEZldGNoZWQgZnJvbSBodHRwOi8vZ2l0LnNhdmFubmFo
LmdudS5vcmcvZ2l0d2ViLz9wPWF1dG9jb25mLWFyY2hpdmUuZ2l0O2E9YmxvYl9wbGFpbjtmPW00
L2F4X2NvbXBhcmVfdmVyc2lvbi5tNAo+ICsjIENvbW1pdCBJRDogMjc5NDhmNDljYTMwZTQyMjJi
YjdjZGQ1NTE4MmJkNzM0MWFjNTBjNQo+ICsjID09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PQo+ICsjICAgIGh0
dHA6Ly93d3cuZ251Lm9yZy9zb2Z0d2FyZS9hdXRvY29uZi1hcmNoaXZlL2F4X2NvbXBhcmVfdmVy
c2lvbi5odG1sCj4gKyMgPT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09
PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09PT09Cj4gKyMKPiArIyBTWU5PUFNJUwo+ICsj
Cj4gKyMgICBBWF9DT01QQVJFX1ZFUlNJT04oVkVSU0lPTl9BLCBPUCwgVkVSU0lPTl9CLCBbQUNU
SU9OLUlGLVRSVUVdLCBbQUNUSU9OLUlGLUZBTFNFXSkKPiArIwo+ICsjIERFU0NSSVBUSU9OCj4g
KyMKPiArIyAgIFRoaXMgbWFjcm8gY29tcGFyZXMgdHdvIHZlcnNpb24gc3RyaW5ncy4gRHVlIHRv
IHRoZSB2YXJpb3VzIG51bWJlciBvZgo+ICsjICAgbWlub3ItdmVyc2lvbiBudW1iZXJzIHRoYXQg
Y2FuIGV4aXN0LCBhbmQgdGhlIGZhY3QgdGhhdCBzdHJpbmcKPiArIyAgIGNvbXBhcmlzb25zIGFy
ZSBub3QgY29tcGF0aWJsZSB3aXRoIG51bWVyaWMgY29tcGFyaXNvbnMsIHRoaXMgaXMgbm90Cj4g
KyMgICBuZWNlc3NhcmlseSB0cml2aWFsIHRvIGRvIGluIGEgYXV0b2NvbmYgc2NyaXB0LiBUaGlz
IG1hY3JvIG1ha2VzIGRvaW5nCj4gKyMgICB0aGVzZSBjb21wYXJpc29ucyBlYXN5Lgo+ICsjCj4g
KyMgICBUaGUgc2l4IGJhc2ljIGNvbXBhcmlzb25zIGFyZSBhdmFpbGFibGUsIGFzIHdlbGwgYXMg
Y2hlY2tpbmcgZXF1YWxpdHkKPiArIyAgIGxpbWl0ZWQgdG8gYSBjZXJ0YWluIG51bWJlciBvZiBt
aW5vci12ZXJzaW9uIGxldmVscy4KPiArIwo+ICsjICAgVGhlIG9wZXJhdG9yIE9QIGRldGVybWlu
ZXMgd2hhdCB0eXBlIG9mIGNvbXBhcmlzb24gdG8gZG8sIGFuZCBjYW4gYmUgb25lCj4gKyMgICBv
ZjoKPiArIwo+ICsjICAgIGVxICAtIGVxdWFsICh0ZXN0IEEgPT0gQikKPiArIyAgICBuZSAgLSBu
b3QgZXF1YWwgKHRlc3QgQSAhPSBCKQo+ICsjICAgIGxlICAtIGxlc3MgdGhhbiBvciBlcXVhbCAo
dGVzdCBBIDw9IEIpCj4gKyMgICAgZ2UgIC0gZ3JlYXRlciB0aGFuIG9yIGVxdWFsICh0ZXN0IEEg
Pj0gQikKPiArIyAgICBsdCAgLSBsZXNzIHRoYW4gKHRlc3QgQSA8IEIpCj4gKyMgICAgZ3QgIC0g
Z3JlYXRlciB0aGFuICh0ZXN0IEEgPiBCKQo+ICsjCj4gKyMgICBBZGRpdGlvbmFsbHksIHRoZSBl
cSBhbmQgbmUgb3BlcmF0b3IgY2FuIGhhdmUgYSBudW1iZXIgYWZ0ZXIgaXQgdG8gbGltaXQKPiAr
IyAgIHRoZSB0ZXN0IHRvIHRoYXQgbnVtYmVyIG9mIG1pbm9yIHZlcnNpb25zLgo+ICsjCj4gKyMg
ICAgZXEwIC0gZXF1YWwgdXAgdG8gdGhlIGxlbmd0aCBvZiB0aGUgc2hvcnRlciB2ZXJzaW9uCj4g
KyMgICAgbmUwIC0gbm90IGVxdWFsIHVwIHRvIHRoZSBsZW5ndGggb2YgdGhlIHNob3J0ZXIgdmVy
c2lvbgo+ICsjICAgIGVxTiAtIGVxdWFsIHVwIHRvIE4gc3ViLXZlcnNpb24gbGV2ZWxzCj4gKyMg
ICAgbmVOIC0gbm90IGVxdWFsIHVwIHRvIE4gc3ViLXZlcnNpb24gbGV2ZWxzCj4gKyMKPiArIyAg
IFdoZW4gdGhlIGNvbmRpdGlvbiBpcyB0cnVlLCBzaGVsbCBjb21tYW5kcyBBQ1RJT04tSUYtVFJV
RSBhcmUgcnVuLAo+ICsjICAgb3RoZXJ3aXNlIHNoZWxsIGNvbW1hbmRzIEFDVElPTi1JRi1GQUxT
RSBhcmUgcnVuLiBUaGUgZW52aXJvbm1lbnQKPiArIyAgIHZhcmlhYmxlICdheF9jb21wYXJlX3Zl
cnNpb24nIGlzIGFsd2F5cyBzZXQgdG8gZWl0aGVyICd0cnVlJyBvciAnZmFsc2UnCj4gKyMgICBh
cyB3ZWxsLgo+ICsjCj4gKyMgICBFeGFtcGxlczoKPiArIwo+ICsjICAgICBBWF9DT01QQVJFX1ZF
UlNJT04oWzMuMTUuN10sW2x0XSxbMy4xNS44XSkKPiArIyAgICAgQVhfQ09NUEFSRV9WRVJTSU9O
KFszLjE1XSxbbHRdLFszLjE1LjhdKQo+ICsjCj4gKyMgICB3b3VsZCBib3RoIGJlIHRydWUuCj4g
KyMKPiArIyAgICAgQVhfQ09NUEFSRV9WRVJTSU9OKFszLjE1LjddLFtlcV0sWzMuMTUuOF0pCj4g
KyMgICAgIEFYX0NPTVBBUkVfVkVSU0lPTihbMy4xNV0sW2d0XSxbMy4xNS44XSkKPiArIwo+ICsj
ICAgd291bGQgYm90aCBiZSBmYWxzZS4KPiArIwo+ICsjICAgICBBWF9DT01QQVJFX1ZFUlNJT04o
WzMuMTUuN10sW2VxMl0sWzMuMTUuOF0pCj4gKyMKPiArIyAgIHdvdWxkIGJlIHRydWUgYmVjYXVz
ZSBpdCBpcyBvbmx5IGNvbXBhcmluZyB0d28gbWlub3IgdmVyc2lvbnMuCj4gKyMKPiArIyAgICAg
QVhfQ09NUEFSRV9WRVJTSU9OKFszLjE1LjddLFtlcTBdLFszLjE1XSkKPiArIwo+ICsjICAgd291
bGQgYmUgdHJ1ZSBiZWNhdXNlIGl0IGlzIG9ubHkgY29tcGFyaW5nIHRoZSBsZXNzZXIgbnVtYmVy
IG9mIG1pbm9yCj4gKyMgICB2ZXJzaW9ucyBvZiB0aGUgdHdvIHZhbHVlcy4KPiArIwo+ICsjICAg
Tm90ZTogVGhlIGNoYXJhY3RlcnMgdGhhdCBzZXBhcmF0ZSB0aGUgdmVyc2lvbiBudW1iZXJzIGRv
IG5vdCBtYXR0ZXIuIEFuCj4gKyMgICBlbXB0eSBzdHJpbmcgaXMgdGhlIHNhbWUgYXMgdmVyc2lv
biAwLiBPUCBpcyBldmFsdWF0ZWQgYnkgYXV0b2NvbmYsIG5vdAo+ICsjICAgY29uZmlndXJlLCBz
byBtdXN0IGJlIGEgc3RyaW5nLCBub3QgYSB2YXJpYWJsZS4KPiArIwo+ICsjICAgVGhlIGF1dGhv
ciB3b3VsZCBsaWtlIHRvIGFja25vd2xlZGdlIEd1aWRvIERyYWhlaW0gd2hvc2UgYWR2aWNlIGFi
b3V0Cj4gKyMgICB0aGUgbTRfY2FzZSBhbmQgbTRfaWZ2YWxuIGZ1bmN0aW9ucyBtYWtlIHRoaXMg
bWFjcm8gb25seSBpbmNsdWRlIHRoZQo+ICsjICAgcG9ydGlvbnMgbmVjZXNzYXJ5IHRvIHBlcmZv
cm0gdGhlIHNwZWNpZmljIGNvbXBhcmlzb24gc3BlY2lmaWVkIGJ5IHRoZQo+ICsjICAgT1AgYXJn
dW1lbnQgaW4gdGhlIGZpbmFsIGNvbmZpZ3VyZSBzY3JpcHQuCj4gKyMKPiArIyBMSUNFTlNFCj4g
KyMKPiArIyAgIENvcHlyaWdodCAoYykgMjAwOCBUaW0gVG9vbGFuIDx0b29sYW5AZWxlLnVyaS5l
ZHU+Cj4gKyMKPiArIyAgIENvcHlpbmcgYW5kIGRpc3RyaWJ1dGlvbiBvZiB0aGlzIGZpbGUsIHdp
dGggb3Igd2l0aG91dCBtb2RpZmljYXRpb24sIGFyZQo+ICsjICAgcGVybWl0dGVkIGluIGFueSBt
ZWRpdW0gd2l0aG91dCByb3lhbHR5IHByb3ZpZGVkIHRoZSBjb3B5cmlnaHQgbm90aWNlCj4gKyMg
ICBhbmQgdGhpcyBub3RpY2UgYXJlIHByZXNlcnZlZC4gVGhpcyBmaWxlIGlzIG9mZmVyZWQgYXMt
aXMsIHdpdGhvdXQgYW55Cj4gKyMgICB3YXJyYW50eS4KPiArCj4gKyNzZXJpYWwgMTEKPiArCj4g
K2RubCAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjCj4gK0FDX0RFRlVOKFtBWF9DT01QQVJFX1ZFUlNJT05dLCBb
Cj4gKyAgQUNfUkVRVUlSRShbQUNfUFJPR19BV0tdKQo+ICsKPiArICAjIFVzZWQgdG8gaW5kaWNh
dGUgdHJ1ZSBvciBmYWxzZSBjb25kaXRpb24KPiArICBheF9jb21wYXJlX3ZlcnNpb249ZmFsc2UK
PiArCj4gKyAgIyBDb252ZXJ0IHRoZSB0d28gdmVyc2lvbiBzdHJpbmdzIHRvIGJlIGNvbXBhcmVk
IGludG8gYSBmb3JtYXQgdGhhdAo+ICsgICMgYWxsb3dzIGEgc2ltcGxlIHN0cmluZyBjb21wYXJp
c29uLiAgVGhlIGVuZCByZXN1bHQgaXMgdGhhdCBhIHZlcnNpb24KPiArICAjIHN0cmluZyBvZiB0
aGUgZm9ybSAxLjEyLjUtcjYxNyB3aWxsIGJlIGNvbnZlcnRlZCB0byB0aGUgZm9ybQo+ICsgICMg
MDAwMTAwMTIwMDA1MDYxNy4gIEluIG90aGVyIHdvcmRzLCBlYWNoIG51bWJlciBpcyB6ZXJvIHBh
ZGRlZCB0byBmb3VyCj4gKyAgIyBkaWdpdHMsIGFuZCBub24gZGlnaXRzIGFyZSByZW1vdmVkLgo+
ICsgIEFTX1ZBUl9QVVNIREVGKFtBXSxbYXhfY29tcGFyZV92ZXJzaW9uX0FdKQo+ICsgIEE9YGVj
aG8gIiQxIiB8IHNlZCAtZSAncy9cKFtbMC05XV0qXCkvWlwxWi9nJyBcCj4gKyAgICAgICAgICAg
ICAgICAgICAgIC1lICdzL1pcKFtbMC05XV1cKVovWjBcMVovZycgXAo+ICsgICAgICAgICAgICAg
ICAgICAgICAtZSAncy9aXChbWzAtOV1dW1swLTldXVwpWi9aMFwxWi9nJyBcCj4gKyAgICAgICAg
ICAgICAgICAgICAgIC1lICdzL1pcKFtbMC05XV1bWzAtOV1dW1swLTldXVwpWi9aMFwxWi9nJyBc
Cj4gKyAgICAgICAgICAgICAgICAgICAgIC1lICdzL1tbXjAtOV1dLy9nJ2AKPiArCj4gKyAgQVNf
VkFSX1BVU0hERUYoW0JdLFtheF9jb21wYXJlX3ZlcnNpb25fQl0pCj4gKyAgQj1gZWNobyAiJDMi
IHwgc2VkIC1lICdzL1woW1swLTldXSpcKS9aXDFaL2cnIFwKPiArICAgICAgICAgICAgICAgICAg
ICAgLWUgJ3MvWlwoW1swLTldXVwpWi9aMFwxWi9nJyBcCj4gKyAgICAgICAgICAgICAgICAgICAg
IC1lICdzL1pcKFtbMC05XV1bWzAtOV1dXClaL1owXDFaL2cnIFwKPiArICAgICAgICAgICAgICAg
ICAgICAgLWUgJ3MvWlwoW1swLTldXVtbMC05XV1bWzAtOV1dXClaL1owXDFaL2cnIFwKPiArICAg
ICAgICAgICAgICAgICAgICAgLWUgJ3MvW1teMC05XV0vL2cnYAo+ICsKPiArICBkbmwgIyBJbiB0
aGUgY2FzZSBvZiBsZSwgZ2UsIGx0LCBhbmQgZ3QsIHRoZSBzdHJpbmdzIGFyZSBzb3J0ZWQgYXMg
bmVjZXNzYXJ5Cj4gKyAgZG5sICMgdGhlbiB0aGUgZmlyc3QgbGluZSBpcyB1c2VkIHRvIGRldGVy
bWluZSBpZiB0aGUgY29uZGl0aW9uIGlzIHRydWUuCj4gKyAgZG5sICMgVGhlIHNlZCByaWdodCBh
ZnRlciB0aGUgZWNobyBpcyB0byByZW1vdmUgYW55IGluZGVudGVkIHdoaXRlIHNwYWNlLgo+ICsg
IG00X2Nhc2UobTRfdG9sb3dlcigkMiksCj4gKyAgW2x0XSxbCj4gKyAgICBheF9jb21wYXJlX3Zl
cnNpb249YGVjaG8gIngkQQo+ICt4JEIiIHwgc2VkICdzL14gKi8vJyB8IHNvcnQgLXIgfCBzZWQg
InMveCR7QX0vZmFsc2UvO3MveCR7Qn0vdHJ1ZS87MXEiYAo+ICsgIF0sCj4gKyAgW2d0XSxbCj4g
KyAgICBheF9jb21wYXJlX3ZlcnNpb249YGVjaG8gIngkQQo+ICt4JEIiIHwgc2VkICdzL14gKi8v
JyB8IHNvcnQgfCBzZWQgInMveCR7QX0vZmFsc2UvO3MveCR7Qn0vdHJ1ZS87MXEiYAo+ICsgIF0s
Cj4gKyAgW2xlXSxbCj4gKyAgICBheF9jb21wYXJlX3ZlcnNpb249YGVjaG8gIngkQQo+ICt4JEIi
IHwgc2VkICdzL14gKi8vJyB8IHNvcnQgfCBzZWQgInMveCR7QX0vdHJ1ZS87cy94JHtCfS9mYWxz
ZS87MXEiYAo+ICsgIF0sCj4gKyAgW2dlXSxbCj4gKyAgICBheF9jb21wYXJlX3ZlcnNpb249YGVj
aG8gIngkQQo+ICt4JEIiIHwgc2VkICdzL14gKi8vJyB8IHNvcnQgLXIgfCBzZWQgInMveCR7QX0v
dHJ1ZS87cy94JHtCfS9mYWxzZS87MXEiYAo+ICsgIF0sWwo+ICsgICAgZG5sIFNwbGl0IHRoZSBv
cGVyYXRvciBmcm9tIHRoZSBzdWJ2ZXJzaW9uIGNvdW50IGlmIHByZXNlbnQuCj4gKyAgICBtNF9i
bWF0Y2gobTRfc3Vic3RyKCQyLDIpLAo+ICsgICAgWzBdLFsKPiArICAgICAgIyBBIGNvdW50IG9m
IHplcm8gbWVhbnMgdXNlIHRoZSBsZW5ndGggb2YgdGhlIHNob3J0ZXIgdmVyc2lvbi4KPiArICAg
ICAgIyBEZXRlcm1pbmUgdGhlIG51bWJlciBvZiBjaGFyYWN0ZXJzIGluIEEgYW5kIEIuCj4gKyAg
ICAgIGF4X2NvbXBhcmVfdmVyc2lvbl9sZW5fQT1gZWNobyAiJEEiIHwgJEFXSyAne3ByaW50KGxl
bmd0aCl9J2AKPiArICAgICAgYXhfY29tcGFyZV92ZXJzaW9uX2xlbl9CPWBlY2hvICIkQiIgfCAk
QVdLICd7cHJpbnQobGVuZ3RoKX0nYAo+ICsKPiArICAgICAgIyBTZXQgQSB0byBubyBtb3JlIHRo
YW4gQidzIGxlbmd0aCBhbmQgQiB0byBubyBtb3JlIHRoYW4gQSdzIGxlbmd0aC4KPiArICAgICAg
QT1gZWNobyAiJEEiIHwgc2VkICJzL1woLlx7JGF4X2NvbXBhcmVfdmVyc2lvbl9sZW5fQlx9XCku
Ki9cMS8iYAo+ICsgICAgICBCPWBlY2hvICIkQiIgfCBzZWQgInMvXCguXHskYXhfY29tcGFyZV92
ZXJzaW9uX2xlbl9BXH1cKS4qL1wxLyJgCj4gKyAgICBdLAo+ICsgICAgW1swLTldK10sWwo+ICsg
ICAgICAjIEEgY291bnQgZ3JlYXRlciB0aGFuIHplcm8gbWVhbnMgdXNlIG9ubHkgdGhhdCBtYW55
IHN1YnZlcnNpb25zCj4gKyAgICAgIEE9YGVjaG8gIiRBIiB8IHNlZCAicy9cKFwoW1swLTldXVx7
NFx9XClce200X3N1YnN0cigkMiwyKVx9XCkuKi9cMS8iYAo+ICsgICAgICBCPWBlY2hvICIkQiIg
fCBzZWQgInMvXChcKFtbMC05XV1cezRcfVwpXHttNF9zdWJzdHIoJDIsMilcfVwpLiovXDEvImAK
PiArICAgIF0sCj4gKyAgICBbLitdLFsKPiArICAgICAgQUNfV0FSTklORygKPiArICAgICAgICBb
aWxsZWdhbCBPUCBudW1lcmljIHBhcmFtZXRlcjogJDJdKQo+ICsgICAgXSxbXSkKPiArCj4gKyAg
ICAjIFBhZCB6ZXJvcyBhdCBlbmQgb2YgbnVtYmVycyB0byBtYWtlIHNhbWUgbGVuZ3RoLgo+ICsg
ICAgYXhfY29tcGFyZV92ZXJzaW9uX3RtcF9BPSIkQWBlY2hvICRCIHwgc2VkICdzLy4vMC9nJ2Ai
Cj4gKyAgICBCPSIkQmBlY2hvICRBIHwgc2VkICdzLy4vMC9nJ2AiCj4gKyAgICBBPSIkYXhfY29t
cGFyZV92ZXJzaW9uX3RtcF9BIgo+ICsKPiArICAgICMgQ2hlY2sgZm9yIGVxdWFsaXR5IG9yIGlu
ZXF1YWxpdHkgYXMgbmVjZXNzYXJ5Lgo+ICsgICAgbTRfY2FzZShtNF90b2xvd2VyKG00X3N1YnN0
cigkMiwwLDIpKSwKPiArICAgIFtlcV0sWwo+ICsgICAgICB0ZXN0ICJ4JEEiID0gIngkQiIgJiYg
YXhfY29tcGFyZV92ZXJzaW9uPXRydWUKPiArICAgIF0sCj4gKyAgICBbbmVdLFsKPiArICAgICAg
dGVzdCAieCRBIiAhPSAieCRCIiAmJiBheF9jb21wYXJlX3ZlcnNpb249dHJ1ZQo+ICsgICAgXSxb
Cj4gKyAgICAgIEFDX1dBUk5JTkcoW2lsbGVnYWwgT1AgcGFyYW1ldGVyOiAkMl0pCj4gKyAgICBd
KQo+ICsgIF0pCj4gKwo+ICsgIEFTX1ZBUl9QT1BERUYoW0FdKWRubAo+ICsgIEFTX1ZBUl9QT1BE
RUYoW0JdKWRubAo+ICsKPiArICBkbmwgIyBFeGVjdXRlIEFDVElPTi1JRi1UUlVFIC8gQUNUSU9O
LUlGLUZBTFNFLgo+ICsgIGlmIHRlc3QgIiRheF9jb21wYXJlX3ZlcnNpb24iID0gInRydWUiIDsg
dGhlbgo+ICsgICAgbTRfaWZ2YWxuKFskNF0sWyQ0XSxbOl0pZG5sCj4gKyAgICBtNF9pZnZhbG4o
WyQ1XSxbZWxzZSAkNV0pZG5sCj4gKyAgZmkKPiArXSkgZG5sIEFYX0NPTVBBUkVfVkVSU0lPTgo+
IGRpZmYgLS1naXQgYS90b29scy9jb25maWd1cmUuYWMgYi90b29scy9jb25maWd1cmUuYWMKPiBp
bmRleCAwNzU0ZjBlLi4zNDk2YzEyIDEwMDY0NAo+IC0tLSBhL3Rvb2xzL2NvbmZpZ3VyZS5hYwo+
ICsrKyBiL3Rvb2xzL2NvbmZpZ3VyZS5hYwo+IEBAIC00Niw2ICs0Niw3IEBAIG00X2luY2x1ZGUo
Wy4uL200L3B0aHJlYWQubTRdKQo+ICAgbTRfaW5jbHVkZShbLi4vbTQvcHR5ZnVuY3MubTRdKQo+
ICAgbTRfaW5jbHVkZShbLi4vbTQvZXh0ZnMubTRdKQo+ICAgbTRfaW5jbHVkZShbLi4vbTQvZmV0
Y2hlci5tNF0pCj4gK200X2luY2x1ZGUoWy4uL200L2F4X2NvbXBhcmVfdmVyc2lvbi5tNF0pCj4g
ICAKPiAgICMgRW5hYmxlL2Rpc2FibGUgb3B0aW9ucwo+ICAgQVhfQVJHX0RFRkFVTFRfRElTQUJM
RShbZ2l0aHR0cF0sIFtEb3dubG9hZCBHSVQgcmVwb3NpdG9yaWVzIHZpYSBIVFRQXSkKPiBAQCAt
MTYxLDYgKzE2MiwxMiBAQCBBU19JRihbdGVzdCAieCRvY2FtbHRvb2xzIiA9ICJ4eSJdLCBbCj4g
ICAgICAgICAgIEFTX0lGKFt0ZXN0ICJ4JGVuYWJsZV9vY2FtbHRvb2xzIiA9ICJ4eWVzIl0sIFsK
PiAgICAgICAgICAgICAgIEFDX01TR19FUlJPUihbT2NhbWwgdG9vbHMgZW5hYmxlZCwgYnV0IHVu
YWJsZSB0byBmaW5kIE9jYW1sXSldKQo+ICAgICAgICAgICBvY2FtbHRvb2xzPSJuIgo+ICsgICAg
XSwgWwo+ICsgICAgICAgIEFYX0NPTVBBUkVfVkVSU0lPTihbJE9DQU1MVkVSU0lPTl0sIFtsdF0s
IFszLjA5LjNdLCBbCj4gKyAgICAgICAgICAgIEFTX0lGKFt0ZXN0ICJ4JGVuYWJsZV9vY2FtbHRv
b2xzIiA9ICJ4eWVzIl0sIFsKPiArICAgICAgICAgICAgICAgIEFDX01TR19FUlJPUihbWW91ciB2
ZXJzaW9uIG9mIE9DYW1sOiAkT0NBTUxWRVJTSU9OIGlzIG5vdCBzdXBwb3J0ZWRdKV0pCj4gKyAg
ICAgICAgICAgIG9jYW1sdG9vbHM9Im4iCj4gKyAgICAgICAgXSkKPiAgICAgICBdKQo+ICAgXSkK
PiAgIEFTX0lGKFt0ZXN0ICJ4JHhzbXBvbGljeSIgPSAieHkiXSwgWwoKCl9fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5nIGxpc3QK
WGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRldmVsCg==

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:23:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDG7m-0006l8-Ae; Tue, 11 Feb 2014 16:23:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WDG7k-0006kY-Pz
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 16:23:52 +0000
Received: from [85.158.143.35:10152] by server-3.bemta-4.messagelabs.com id
	54/78-11539-89E4AF25; Tue, 11 Feb 2014 16:23:52 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392135830!4881944!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1279 invoked from network); 11 Feb 2014 16:23:51 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 16:23:51 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 11 Feb 2014 16:23:49 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="650958434"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.168])
	by fldsmtpi03.verizon.com with ESMTP; 11 Feb 2014 16:23:49 +0000
Message-ID: <52FA4E94.8000802@terremark.com>
Date: Tue, 11 Feb 2014 11:23:48 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
References: <1392115104-22828-1-git-send-email-roger.pau@citrix.com>
	<1392122996.26657.107.camel@kazak.uk.xensource.com>
	<52FA1F82.10307@citrix.com>
In-Reply-To: <52FA1F82.10307@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xenproject.org,
	Don Slutz <dslutz@verizon.com>, Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] tools: require OCaml version 3.09.3 or
	greater
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMTEvMTQgMDg6MDIsIFJvZ2VyIFBhdSBNb25uw6kgd3JvdGU6Cj4gT24gMTEvMDIvMTQg
MTM6NDksIElhbiBDYW1wYmVsbCB3cm90ZToKPj4gT24gVHVlLCAyMDE0LTAyLTExIGF0IDExOjM4
ICswMTAwLCBSb2dlciBQYXUgTW9ubmUgd3JvdGU6Cj4+PiBTaWduZWQtb2ZmLWJ5OiBSb2dlciBQ
YXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KPj4+IENjOiBEb24gU2x1dHogPGRzbHV0
ekB2ZXJpem9uLmNvbT4KPj4+IENjOiBJYW4gQ2FtcGJlbGwgPGlhbi5jYW1wYmVsbEBjaXRyaXgu
Y29tPgo+Pj4gQ2M6IElhbiBKYWNrc29uIDxpYW4uamFja3NvbkBjaXRyaXguY29tPgo+Pj4gLS0t
Cj4+PiBJJ3ZlIHJlbW92ZWQgRG9uJ3MgIlRlc3RlZC1ieSIgYmVjYXVzZSB0aGUgcGF0Y2ggaGFz
IGNoYW5nZWQsIGNvdWxkCj4+PiB5b3UgcGxlYXNlIHJlLXRlc3QgaXQgRG9uPwo+Pj4KPj4+IFBs
ZWFzZSBydW4gYXV0b2dlbi5zaCBhZnRlciBhcHBseWluZy4KPj4gSXMgdGhpcyBpbnRlbmRlZCBm
b3IgNC40PyBJZiBZZXMgdGhlbiB5b3UgbmVlZCB0byBtYWtlIHRoZSBjYXNlIHRvCj4+IEdlb3Jn
ZS4KPj4KPj4gT3Igc2hhbGwgd2Ugc2ltcGx5IHJlbGVhc2Ugbm90ZSB0aGUgbWluaW11bSByZXF1
aXJlbWVudCBhbmQgZW5mb3JjZSBpdAo+PiBmb3IgNC41Pwo+Pgo+PiBJJ20gcHJvYmFibHkgdW5k
dWx5IG5lcnZvdXMgYWJvdXQgY2hhbmdpbmcgYXV0b2NvbmYgc3R1ZmYuLi4KPj4KPj4gSSd2ZSBh
cHBsaWVkIERvbidzIENBTUxyZXR1cm5UIGZpeCBCVFcuCj4gSU1ITyBJIHRoaW5rIGl0IGNhbiB3
YWl0IHVudGlsIDQuNSBhbmQgdGhlbiBiYWNrcG9ydCBpdCB0byBwcmV2aW91cwo+IHJlbGVhc2Vz
IGFzIG5lZWRlZC4KPgo+IFJvZ2VyLgo+CgpJIGhhdmUgbm8gaXNzdWUgd2l0aCB0aGlzIHdhaXRp
bmcgZm9yIDQuNS4KCiAgICAtRG9uIFNsdXR6CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlz
dHMueGVuLm9yZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:31:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGF6-0007M1-FC; Tue, 11 Feb 2014 16:31:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGF4-0007Lw-2B
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 16:31:26 +0000
Received: from [85.158.143.35:38175] by server-2.bemta-4.messagelabs.com id
	6B/EA-10891-D505AF25; Tue, 11 Feb 2014 16:31:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392136283!4870212!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27083 invoked from network); 11 Feb 2014 16:31:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 16:31:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99843749"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 16:31:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 11:31:22 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
From xen-devel-bounces@lists.xen.org Tue Feb 11 16:31:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGF6-0007M1-FC; Tue, 11 Feb 2014 16:31:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGF4-0007Lw-2B
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 16:31:26 +0000
Received: from [85.158.143.35:38175] by server-2.bemta-4.messagelabs.com id
	6B/EA-10891-D505AF25; Tue, 11 Feb 2014 16:31:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392136283!4870212!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27083 invoked from network); 11 Feb 2014 16:31:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 16:31:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99843749"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 16:31:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 11:31:22 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGF0-0000vv-6C;
	Tue, 11 Feb 2014 16:31:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGEz-00028a-Rp;
	Tue, 11 Feb 2014 16:31:21 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.20569.635505.708712@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 16:31:21 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392113651.26657.63.camel@kazak.uk.xensource.com>
References: <21227.37769.599789.521146@mariner.uk.xensource.com>
	<52EC2C9C.9090202@terremark.com>
	<21231.33273.164782.738071@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402031200260.4373@kaball.uk.xensource.com>
	<52F425AD.50209@terremark.com>
	<1391767501.2162.20.camel@kazak.uk.xensource.com>
	<52F5082E.6010207@terremark.com> <52F509F3.1000806@citrix.com>
	<1392026141.5117.10.camel@kazak.uk.xensource.com>
	<52F8B128.80800@citrix.com> <52F91F04.6030507@terremark.com>
	<52F92181.8010907@citrix.com> <52F92743.8090901@terremark.com>
	<52F92973.9080804@citrix.com> <52F96691.7000801@terremark.com>
	<1392111686.26657.56.camel@kazak.uk.xensource.com>
	<52F9F3BE.1010106@citrix.com>
	<1392112851.26657.59.camel@kazak.uk.xensource.com>
	<52F9F783.2040708@citrix.com>
	<1392113651.26657.63.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xenproject.org,
	Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] 4.4.0-rc3 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] 4.4.0-rc3 tagged"):
> On Tue, 2014-02-11 at 11:12 +0100, Roger Pau Monné wrote:
> > Yes, but I'm not sure if any of the macros we have inside of m4/ are
> > also part of autoconf-archive (at least some of those are custom-made
> > AFAIK). I also think it's best to do the latter, I'm going to add the
> > macro itself to my patch and resend.

> Right, I think that is best for now.

Definitely we should copy it into our own source code.  Relying on
what is effectively source code in another package is pretty bad, I
think.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:38:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGLs-0007mg-VU; Tue, 11 Feb 2014 16:38:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGLr-0007mK-BZ
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 16:38:27 +0000
Received: from [193.109.254.147:10693] by server-8.bemta-14.messagelabs.com id
	BE/DE-18529-2025AF25; Tue, 11 Feb 2014 16:38:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392136704!3595016!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21582 invoked from network); 11 Feb 2014 16:38:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 16:38:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99846850"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 16:38:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 11:38:21 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGLl-0000ya-63;
	Tue, 11 Feb 2014 16:38:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGLk-00028z-T4;
	Tue, 11 Feb 2014 16:38:20 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.20988.614574.86966@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 16:38:20 +0000
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52F90A71.40802@citrix.com>
References: <52F90A71.40802@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

David Vrabel writes ("Domain Save Image Format proposal (draft B)"):
> Records
> =======
> 
> A record has a record header, type specific data and a trailing
> footer.  If body_length is not a multiple of 8, the body is padded
> with zeroes to align the checksum field on an 8 octet boundary.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | type                  | body_length             |
>     +-----------+-----------+-------------------------+
>     | options   | (reserved)                          |
>     +-----------+-------------------------------------+
...
> options      Bit 0: 0 = checksum invalid, 1 = checksum valid.

There needs to be a flag saying what the receiver should do if it sees
a record it doesn't understand.  There are two possible behaviours:
ignore the record, or abandon the restore attempt.
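
To make that concrete, here is a minimal C sketch of such a header
with a must-understand bit.  The struct layout, field names and the
exact bit assignment are hypothetical, not part of David's draft:

```c
#include <stdint.h>

/* Sketch of the record header from the draft (names hypothetical):
 * 32-bit type, 32-bit body_length, 16-bit options, 48 reserved bits. */
struct rec_hdr {
    uint32_t type;
    uint32_t body_length;
    uint16_t options;
    uint16_t reserved0;
    uint32_t reserved1;
};

#define OPT_CHECKSUM_VALID  (1u << 0)
/* Hypothetical bit for the flag suggested above: set means a receiver
 * that does not recognise 'type' must abandon the restore; clear means
 * it may skip the record. */
#define OPT_MUST_UNDERSTAND (1u << 1)

/* The body is zero-padded so the trailing checksum sits on an 8-octet
 * boundary. */
static uint32_t padded_body_length(uint32_t body_length)
{
    return (body_length + 7) & ~(uint32_t)7;
}

static int must_abort_on_unknown(const struct rec_hdr *h)
{
    return (h->options & OPT_MUST_UNDERSTAND) != 0;
}
```

With one such bit, old readers can safely skip optional extensions
while still failing cleanly on records the sender marked essential.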

> VCPU_INFO
> ---------
> 
> > [ This is a combination of parts of the extended-info and
> > XC_SAVE_ID_VCPU_INFO chunks. ]
> 
> The VCPU_INFO record includes the maximum possible VCPU ID.  This will
> be followed by a VCPU_CONTEXT record for each online VCPU.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+------------------------+
>     | max_vcpu_id           | (reserved)             |
>     +-----------------------+------------------------+

For ease of extensibility, it might be worth saying that the VCPU_INFO
may contain additional data which should be ignored by receivers that
don't understand it.
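
For illustration, a receiver written in that style might look like the
following C sketch (the v1 struct and names are invented; only the
known prefix of the body is interpreted, trailing bytes are ignored):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical v1 view of the VCPU_INFO body: just max_vcpu_id plus
 * reserved padding. */
struct vcpu_info_v1 {
    uint32_t max_vcpu_id;
    uint32_t reserved;
};

/* Copy only the fields this reader knows about; any trailing bytes a
 * newer sender appended to the record are silently ignored. */
static int parse_vcpu_info(const void *body, uint32_t body_length,
                           uint32_t *max_vcpu_id)
{
    struct vcpu_info_v1 v1;

    if (body_length < sizeof(v1))
        return -1;                  /* too short even for the v1 layout */
    memcpy(&v1, body, sizeof(v1)); /* extra bytes beyond v1 are ignored */
    *max_vcpu_id = v1.max_vcpu_id;
    return 0;
}
```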

> Future Extensions
> =================
> 
> All changes to this format require the image version to be increased.

It would be better to have a more fine-grained extensibility
mechanism.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:45:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGSi-0008MI-EH; Tue, 11 Feb 2014 16:45:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGSh-0008MD-FU
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 16:45:31 +0000
Received: from [85.158.143.35:57360] by server-2.bemta-4.messagelabs.com id
	C0/E2-10891-AA35AF25; Tue, 11 Feb 2014 16:45:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392137128!4893721!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15032 invoked from network); 11 Feb 2014 16:45:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 16:45:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101659064"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 16:45:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 11:45:27 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGSd-00010P-If;
	Tue, 11 Feb 2014 16:45:27 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGSd-00029X-AR;
	Tue, 11 Feb 2014 16:45:27 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.21415.38574.210861@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 16:45:27 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <52FA043B020000780011B10C@nat28.tlf.novell.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> On 10.02.14 at 18:20, David Vrabel <david.vrabel@citrix.com> wrote:
> > Fields
> > ------
> > 
> > All the fields within the headers and records have a fixed width.
> > 
> > Fields are always aligned to their size.
> > 
> > Padding and reserved fields are set to zero on save and must be
> > ignored during restore.
> 
> Meaning it would be impossible to assign a meaning to these fields
> later. I'd rather mandate that the restore side has to check these
> fields are zero, and bail if they aren't.

I disagree.  It is precisely the fact that they are ignored which
makes it possible to assign meaning to them later.

It is easy enough to make old readers bail.  What is good about
David's spec is the room for enhancement _without_ making old readers
bail.
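
As a toy illustration of the difference between the two policies (all
names here are invented, not from the draft):

```c
#include <stdint.h>

/* Toy header with a reserved field. */
struct hdr {
    uint32_t page_shift;
    uint32_t reserved;
};

/* Old reader per the draft: reserved is deliberately not checked, so a
 * later version can assign it a meaning without breaking this reader. */
static int read_ignoring(const struct hdr *h, uint32_t *page_shift)
{
    *page_shift = h->page_shift;
    return 0;
}

/* Old reader per the stricter proposal: bails on any nonzero reserved
 * field, i.e. on every image from a newer writer that uses it. */
static int read_strict(const struct hdr *h, uint32_t *page_shift)
{
    if (h->reserved != 0)
        return -1;
    *page_shift = h->page_shift;
    return 0;
}
```

The ignoring reader keeps working on images from a newer writer; the
strict reader rejects them all.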

What I would like to see is that the version number does not normally
need to be incremented.  For example, the .deb package format, which I
designed in 1993, has stayed at version 2.0 even though it has been
heavily extended in a number of directions.

> > Integer (numeric) fields in the image header are always in big-endian
> > byte order.
> 
> Why would big endian be preferable when both currently
> supported architectures use little endian?

Because (a) that's Network Byte Order and (b) there are convenient
functions for dealing with the endian conversion.
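
For instance, a portable pair of accessors for a big-endian 32-bit
field takes only a few lines of C, equivalent in effect to
ntohl()/htonl() but independent of host endianness:

```c
#include <stdint.h>

/* Read a 32-bit big-endian (network byte order) field from a buffer;
 * shifting by octet position works on any host endianness. */
static uint32_t get_be32(const unsigned char *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Write a 32-bit value as big-endian into a buffer. */
static void put_be32(unsigned char *p, uint32_t v)
{
    p[0] = v >> 24;
    p[1] = v >> 16;
    p[2] = v >> 8;
    p[3] = v;
}
```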

> > 1. Image header
> > 2. Domain header
> > 3. X86_PV_INFO record
> > 4. At least one P2M record
> > 5. At least one PAGE_DATA record
> > 6. VCPU_INFO record
> > 6. At least one VCPU_CONTEXT record
> > 7. END record
> 
> Is it necessary to require this strict ordering? Obviously the headers
> have to be first and the END record last, but at least some of the
> other records don't seem to need to be at exactly the place you list
> them.

It's necessary to define some constraints on the ordering.  In
practice the receiving implementation may or may not work with all
orderings, so defining a particular ordering is probably easiest.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:49:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGWW-00008N-CG; Tue, 11 Feb 2014 16:49:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGWV-00008H-AW
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 16:49:27 +0000
Received: from [85.158.139.211:30300] by server-2.bemta-5.messagelabs.com id
	04/08-23037-6945AF25; Tue, 11 Feb 2014 16:49:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392137364!3229062!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1795 invoked from network); 11 Feb 2014 16:49:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 16:49:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99851298"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 16:49:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 11:49:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGWQ-00011O-Tf;
	Tue, 11 Feb 2014 16:49:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGWQ-00029s-N6;
	Tue, 11 Feb 2014 16:49:22 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.21650.571988.662930@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 16:49:22 +0000
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52F90A71.40802@citrix.com>
References: <52F90A71.40802@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

David Vrabel writes ("Domain Save Image Format proposal (draft B)"):
> Here is a draft of a proposal for a new domain save image format.  It
> does not currently cover all use cases (e.g., images for HVM guest are
> not considered).

I think this is a good start.  I've made some other comments already.

> Overview
> ========

I would like to make another perhaps controversial suggestion.  We
should explicitly specify that the receiver may advertise its
capabilities to the sender, so that backwards-migration _can_ be
supported if we choose to do so.

In practice I think that means a capability advertisement block.
Probably, one bit per version, one bit per record type, etc.

I greatly prefer handling forward-compatibility with new record type
enums etc. to version numbers.  Version numbers presuppose a
strict global order on all the implementations' capabilities, which is
of course not necessarily true in free software.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 16:49:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 16:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGWb-00009K-Q0; Tue, 11 Feb 2014 16:49:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDGWa-000099-Sr
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 16:49:33 +0000
Received: from [85.158.139.211:46986] by server-8.bemta-5.messagelabs.com id
	B0/11-05298-B945AF25; Tue, 11 Feb 2014 16:49:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392137364!3229062!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2062 invoked from network); 11 Feb 2014 16:49:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 16:49:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99851328"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 16:49:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 11:49:29 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDGWX-00011e-Cm;
	Tue, 11 Feb 2014 16:49:29 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 16:49:29 +0000
Message-ID: <1392137369-20277-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] cs-adjust-flight: fix runvar-del
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Missing $.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 cs-adjust-flight | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cs-adjust-flight b/cs-adjust-flight
index d937caa..663ca6f 100755
--- a/cs-adjust-flight
+++ b/cs-adjust-flight
@@ -228,7 +228,7 @@ sub change__runvar_del {
 
     for_runvars($jobs, $vars, sub {
         my ($job, $name) = @_;
-        runvar_rm_q->execute($dstflight, $job, $name);
+        $runvar_rm_q->execute($dstflight, $job, $name);
 	verbose "$dstflight.$job $name runvar deleted\n";
     }, 'IGNORE');
 }
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:05:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGla-0001In-6H; Tue, 11 Feb 2014 17:05:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDGlY-0001Ih-2a
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:05:00 +0000
Received: from [193.109.254.147:35481] by server-11.bemta-14.messagelabs.com
	id B6/71-24604-B385AF25; Tue, 11 Feb 2014 17:04:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392138296!3602589!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31007 invoked from network); 11 Feb 2014 17:04:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:04:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99857188"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:04:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:04:48 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDGlM-0006Fn-O3;
	Tue, 11 Feb 2014 17:04:48 +0000
Message-ID: <52FA5830.9060205@citrix.com>
Date: Tue, 11 Feb 2014 17:04:48 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <52F90A71.40802@citrix.com>
	<21242.20988.614574.86966@mariner.uk.xensource.com>
In-Reply-To: <21242.20988.614574.86966@mariner.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 16:38, Ian Jackson wrote:
> David Vrabel writes ("Domain Save Image Format proposal (draft B)"):
>> Records
>> =======
>>
>> A record has a record header, type specific data and a trailing
>> footer.  If body_length is not a multiple of 8, the body is padded
>> with zeroes to align the checksum field on an 8 octet boundary.
>>
>>      0     1     2     3     4     5     6     7 octet
>>     +-----------------------+-------------------------+
>>     | type                  | body_length             |
>>     +-----------+-----------+-------------------------+
>>     | options   | (reserved)                          |
>>     +-----------+-------------------------------------+
> ...
>> options      Bit 0: 0 = checksum invalid, 1 = checksum valid.
> There needs to be a flag saying what the receiver should do if it sees
> a record it doesn't understand.  There are two possible behaviours:
> ignore the record, and abandon the restore attempt.

No need.  Any unrecognised record can safely be ignored.  Any record
which couldn't be ignored would require bumping the main stream
version number, at which point an older reader would bail on that basis.

This would allow adding new backward-compatible features without
breaking older systems, which is how compatibility is maintained in
the current code.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:08:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:08:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGoP-0001QY-Tb; Tue, 11 Feb 2014 17:07:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGoP-0001QQ-1y
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:07:57 +0000
Received: from [193.109.254.147:4024] by server-1.bemta-14.messagelabs.com id
	A1/47-15438-CE85AF25; Tue, 11 Feb 2014 17:07:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392138473!3602158!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28541 invoked from network); 11 Feb 2014 17:07:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:07:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99858448"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:07:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 12:07:37 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGo5-00017n-Kf;
	Tue, 11 Feb 2014 17:07:37 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGo5-0002Bf-Dc;
	Tue, 11 Feb 2014 17:07:37 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.22745.213121.757784@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 17:07:37 +0000
To: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <52FA5830.9060205@citrix.com>
References: <52F90A71.40802@citrix.com>
	<21242.20988.614574.86966@mariner.uk.xensource.com>
	<52FA5830.9060205@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> No need.  Any unrecognised records can be safely ignored.  Any record
> which couldn't be ignored would be required to bump the main stream
> version number at which point the older reader would bail on that basis.

See my other comment about why I don't like version numbers for
forward compatibility in free software.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:08:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGob-0001S1-9i; Tue, 11 Feb 2014 17:08:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGoa-0001Rr-3i
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 17:08:08 +0000
Received: from [193.109.254.147:5051] by server-7.bemta-14.messagelabs.com id
	EA/98-23424-7F85AF25; Tue, 11 Feb 2014 17:08:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392138485!3610686!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24232 invoked from network); 11 Feb 2014 17:08:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:08:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101668195"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:08:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 12:08:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDGoT-00017w-Dl;
	Tue, 11 Feb 2014 17:08:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDGoT-000666-4E;
	Tue, 11 Feb 2014 17:08:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24841-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 17:08:01 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24841: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24841 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin  7 debian-install            fail REGR. vs. 12557
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                6792dfe383dd20ed270da198aa0676bac47245b4
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7014 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2372517 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:09:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:09:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGpP-0001bI-U8; Tue, 11 Feb 2014 17:08:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDGpO-0001az-O8
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:08:58 +0000
Received: from [85.158.143.35:5398] by server-2.bemta-4.messagelabs.com id
	09/37-10891-9295AF25; Tue, 11 Feb 2014 17:08:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392138536!4879654!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31696 invoked from network); 11 Feb 2014 17:08:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:08:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99858913"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:08:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 12:08:55 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGpK-00018B-Vx;
	Tue, 11 Feb 2014 17:08:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDGpK-0002CT-O4;
	Tue, 11 Feb 2014 17:08:54 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.22821.968418.709001@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 17:08:53 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392137369-20277-1-git-send-email-ian.campbell@citrix.com>
References: <1392137369-20277-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] cs-adjust-flight: fix runvar-del
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] cs-adjust-flight: fix runvar-del"):
> Missing $.

WTF

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Maybe we should have a "non-urgent pending fixes" branch, since our
push gate is going to be busy for a while.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:09:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGq3-0001lp-HF; Tue, 11 Feb 2014 17:09:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WDGq1-0001lN-P3
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:09:38 +0000
Received: from [85.158.137.68:31767] by server-2.bemta-3.messagelabs.com id
	D9/0C-06531-0595AF25; Tue, 11 Feb 2014 17:09:36 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392138574!1164871!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26006 invoked from network); 11 Feb 2014 17:09:35 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 17:09:35 -0000
Received: from mail-ie0-f169.google.com (mail-ie0-f169.google.com
	[209.85.223.169]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1BH9W6W014905
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 09:09:32 -0800
Received: by mail-ie0-f169.google.com with SMTP id to1so4802770ieb.28
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 09:09:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=Up4i1vEmBcP7pMVlkUa6urc82SDObRJAprLBF+664xw=;
	b=EukOjnCEtZwdJ1CDaCgI+zq2BfkNEbQQTWC9u373gq8jaK9/zeD635X3xgsYHUXfvX
	Y1Bz07qY/rbSglD4XZ0Fu13PgvNKaMTOq0sil4dFkgKew4XnEk4uDsVl7pbMfjpkwZaP
	wavYYqFdiNGDfi+wa5PewSdiaSssZPu3//xDjWDFFdtAMFTiDlB5Cz1aVmDeEE1sgjWG
	8dS3pvNR/NAvuhB6hHWo8QDh5mPoH2lS73P2yLLYMX7tuononKyz3GtHwDBsSZIsOzId
	1oogDY/SKHYCFvkxPlZpcwAacanxjLpGDCFBHRQ1300gLJN8Hk/nyYYjtG4Zf3LXcF6V
	7dmg==
X-Received: by 10.50.45.69 with SMTP id k5mr7178383igm.32.1392138571183; Tue,
	11 Feb 2014 09:09:31 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Tue, 11 Feb 2014 09:08:51 -0800 (PST)
In-Reply-To: <21242.21415.38574.210861@mariner.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Tue, 11 Feb 2014 11:08:51 -0600
Message-ID: <CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3300012052583883416=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3300012052583883416==
Content-Type: multipart/alternative; boundary=089e0111b7c60381f004f224858e

--089e0111b7c60381f004f224858e
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 11, 2014 at 10:45 AM, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:

> Jan Beulich writes ("Re: [Xen-devel] Domain Save Image Format proposal
> (draft B)"):
> > On 10.02.14 at 18:20, David Vrabel <david.vrabel@citrix.com> wrote:
> > > Fields
> > > ------
> > >
> > > All the fields within the headers and records have a fixed width.
> > >
> > > Fields are always aligned to their size.
> > >
> > > Padding and reserved fields are set to zero on save and must be
> > > ignored during restore.
> >
> > Meaning it would be impossible to assign a meaning to these fields
> > later. I'd rather mandate that the restore side has to check these
> > fields are zero, and bail if they aren't.
>
> I disagree.  It is precisely the fact that they are ignored which
> makes it possible to assign meaning to them later.
>
> It is easy enough to make old readers bail.  What is good about
> David's spec is the room for enhancement _without_ making old readers
> bail.
>
> What I would like to see is that the version number does not normally need
> to be incremented.  For example, the .deb package format which I
> designed in 1993 has stayed at version 2.0 even though it has been
> heavily extended in a number of directions.
>
> > > Integer (numeric) fields in the image header are always in big-endian
> > > byte order.
> >
> > Why would big endian be preferable when both currently
> > supported architectures use little endian?
>
> Because (a) that's Network Byte Order (b) there are convenient
> functions for dealing with endian-conversion.
>
>
Yes and yes. But why resort to Network Byte Order at all when we
don't support any architectures that use big endian?

Are we thinking of transferring images or migrating data between
machines that have different endianness? It's not like network
elements such as middleboxes (which could plausibly be big-endian)
are going to interpret the application-level data and make routing
decisions.

I am not that familiar with ARM, but from what I read, it's bi-endian
past v3. I don't think we have plans to support SPARC, which is also
bi-endian beyond a certain version IIRC.

That leaves legacy ARMs and SPARCs that use big-endian mode.

So am I missing something elementary here, Ian? Why the emphasis on
network byte order? I certainly agree that the byte order should be
declared once in the header, so that there is no confusion about how
to interpret it.



> > > 1. Image header
> > > 2. Domain header
> > > 3. X86_PV_INFO record
> > > 4. At least one P2M record
> > > 5. At least one PAGE_DATA record
> > > 6. VCPU_INFO record
> > > 7. At least one VCPU_CONTEXT record
> > > 8. END record
> >
> > Is it necessary to require this strict ordering? Obviously the headers
> > have to be first and the END record last, but at least some of the
> > other records don't seem to need to be at exactly the place you list
> > them.
>
> It's necessary to define some constraints on the ordering.  In
> practice the receiving implementation may or may not work with all
> orderings and defining a particular ordering is probably easiest.
>
> Ian.
>
>

 which too is bi-endian</div><div>beyond a certain version IIRC.</div>
<div>
<br></div><div>That leaves legacy ARMs and SPARC that use big endian mode.<=
/div><div><br></div><div>So am I missing something elementary here Ian? Why=
 the emphasis on</div><div>network byte order? I certainly agree that the b=
yte order should be declared</div>

<div>once in the header, so that there would be no confusion on how to inte=
rpret it.</div><div><br></div><div>=A0</div><blockquote class=3D"gmail_quot=
e" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">=
<div class=3D"">


&gt; &gt; 1. Image header<br>
&gt; &gt; 2. Domain header<br>
&gt; &gt; 3. X86_PV_INFO record<br>
&gt; &gt; 4. At least one P2M record<br>
&gt; &gt; 5. At least one PAGE_DATA record<br>
&gt; &gt; 6. VCPU_INFO record<br>
&gt; &gt; 6. At least one VCPU_CONTEXT record<br>
&gt; &gt; 7. END record<br>
&gt;<br>
&gt; Is it necessary to require this strict ordering? Obviously the headers=
<br>
&gt; have to be first and the END record last, but at least some of the<br>
&gt; other records don&#39;t seem to need to be at exactly the place you li=
st<br>
&gt; them.<br>
<br>
</div>It&#39;s necessary to define some constraints on the ordering. =A0In<=
br>
practice the receiving implementation may or may not work with all<br>
orderings and defining a particular ordering is probably easiest.<br>
<span class=3D"HOEnZb"><font color=3D"#888888"><br>
Ian.<br>
<br>
</font></span></blockquote></div><br></div></div>

--089e0111b7c60381f004f224858e--


--===============3300012052583883416==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3300012052583883416==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 17:09:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGq3-0001lp-HF; Tue, 11 Feb 2014 17:09:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rshriram@cs.ubc.ca>) id 1WDGq1-0001lN-P3
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:09:38 +0000
Received: from [85.158.137.68:31767] by server-2.bemta-3.messagelabs.com id
	D9/0C-06531-0595AF25; Tue, 11 Feb 2014 17:09:36 +0000
X-Env-Sender: rshriram@cs.ubc.ca
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392138574!1164871!1
X-Originating-IP: [142.103.6.52]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26006 invoked from network); 11 Feb 2014 17:09:35 -0000
Received: from smtp.cs.ubc.ca (HELO smtp.cs.ubc.ca) (142.103.6.52)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 11 Feb 2014 17:09:35 -0000
Received: from mail-ie0-f169.google.com (mail-ie0-f169.google.com
	[209.85.223.169]) (authenticated bits=0)
	by smtp.cs.ubc.ca (8.14.5/8.13.6) with ESMTP id s1BH9W6W014905
	(version=TLSv1/SSLv3 cipher=RC4-SHA bits=128 verify=FAIL)
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 09:09:32 -0800
Received: by mail-ie0-f169.google.com with SMTP id to1so4802770ieb.28
	for <Xen-devel@lists.xen.org>; Tue, 11 Feb 2014 09:09:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=mime-version:reply-to:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=Up4i1vEmBcP7pMVlkUa6urc82SDObRJAprLBF+664xw=;
	b=EukOjnCEtZwdJ1CDaCgI+zq2BfkNEbQQTWC9u373gq8jaK9/zeD635X3xgsYHUXfvX
	Y1Bz07qY/rbSglD4XZ0Fu13PgvNKaMTOq0sil4dFkgKew4XnEk4uDsVl7pbMfjpkwZaP
	wavYYqFdiNGDfi+wa5PewSdiaSssZPu3//xDjWDFFdtAMFTiDlB5Cz1aVmDeEE1sgjWG
	8dS3pvNR/NAvuhB6hHWo8QDh5mPoH2lS73P2yLLYMX7tuononKyz3GtHwDBsSZIsOzId
	1oogDY/SKHYCFvkxPlZpcwAacanxjLpGDCFBHRQ1300gLJN8Hk/nyYYjtG4Zf3LXcF6V
	7dmg==
X-Received: by 10.50.45.69 with SMTP id k5mr7178383igm.32.1392138571183; Tue,
	11 Feb 2014 09:09:31 -0800 (PST)
MIME-Version: 1.0
Received: by 10.42.120.147 with HTTP; Tue, 11 Feb 2014 09:08:51 -0800 (PST)
In-Reply-To: <21242.21415.38574.210861@mariner.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
From: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Date: Tue, 11 Feb 2014 11:08:51 -0600
Message-ID: <CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: rshriram@cs.ubc.ca
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3300012052583883416=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3300012052583883416==
Content-Type: multipart/alternative; boundary=089e0111b7c60381f004f224858e

--089e0111b7c60381f004f224858e
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 11, 2014 at 10:45 AM, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:

> Jan Beulich writes ("Re: [Xen-devel] Domain Save Image Format proposal
> (draft B)"):
> > On 10.02.14 at 18:20, David Vrabel <david.vrabel@citrix.com> wrote:
> > > Fields
> > > ------
> > >
> > > All the fields within the headers and records have a fixed width.
> > >
> > > Fields are always aligned to their size.
> > >
> > > Padding and reserved fields are set to zero on save and must be
> > > ignored during restore.
> >
> > Meaning it would be impossible to assign a meaning to these fields
> > later. I'd rather mandate that the restore side has to check these
> > fields are zero, and bail if they aren't.
>
> I disagree.  It is precisely the fact that they are ignored which
> makes it possible to assign meaning to them later.
>
> It is easy enough to make old readers bail.  What is good about
> David's spec is the room for enhancement _without_ making old readers
> bail.
>
> What I would like to see is that the version number does not normally need
> to be incremented.  For example, the .deb package format which I
> designed in 1993 has stayed at version 2.0 even though it has been
> heavily extended in a number of directions.
>
> > > Integer (numeric) fields in the image header are always in big-endian
> > > byte order.
> >
> > Why would big endian be preferable when both currently
> > supported architectures use little endian?
>
> Because (a) that's Network Byte Order (b) there are convenient
> functions for dealing with endian-conversion.
>
>
Yes and yes. But why resort to Network Byte Order at all when we
don't support any architectures that use big endian?

Are we thinking of transferring images or migrating data between
machines that have different endianness? It's not like network elements
such as middleboxes (which could probably be big-endian) are going to
interpret the application-level data and make routing decisions.

I am not that familiar with ARM, but from what I read, it's bi-endian
past v3. I don't think we have plans to support SPARC, which is also
bi-endian beyond a certain version, IIRC.

That leaves legacy ARMs and SPARCs that use big endian mode.

So am I missing something elementary here, Ian? Why the emphasis on
network byte order? I certainly agree that the byte order should be
declared once in the header, so that there would be no confusion about
how to interpret it.
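For what it's worth, the conversion cost Ian alludes to is trivial in
either direction; a minimal sketch in Python (the field names and layout
here are hypothetical, not taken from the draft) of writing and reading
big-endian header fields:

```python
import struct

# Hypothetical image-header layout: 8-byte marker, 4-byte ident, 4-byte
# version.  The ">" prefix selects big-endian (network) byte order
# regardless of the host's native endianness.
HEADER_FMT = ">Q4sI"

def pack_header(marker, ident, version):
    """Serialise the header fields in big-endian byte order."""
    return struct.pack(HEADER_FMT, marker, ident, version)

def unpack_header(blob):
    """Deserialise, byte-swapping to host order automatically."""
    return struct.unpack(HEADER_FMT, blob)

blob = pack_header(0xFFFFFFFFFFFFFFFF, b"Xen!", 2)
assert blob[:8] == b"\xff" * 8  # marker bytes, independent of host order
assert unpack_header(blob) == (0xFFFFFFFFFFFFFFFF, b"Xen!", 2)
```

The same one-liner works unchanged on a big- or little-endian host, which
is really the whole of point (b) above.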



> > > 1. Image header
> > > 2. Domain header
> > > 3. X86_PV_INFO record
> > > 4. At least one P2M record
> > > 5. At least one PAGE_DATA record
> > > 6. VCPU_INFO record
> > > 7. At least one VCPU_CONTEXT record
> > > 8. END record
> >
> > Is it necessary to require this strict ordering? Obviously the headers
> > have to be first and the END record last, but at least some of the
> > other records don't seem to need to be at exactly the place you list
> > them.
>
> It's necessary to define some constraints on the ordering.  In
> practice the receiving implementation may or may not work with all
> orderings and defining a particular ordering is probably easiest.
>
> Ian.
>
>
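Ian's "some constraints" could be checked mechanically on the receiving
side; a sketch (the record-type names follow the list quoted above, but
the state machine itself is my own illustration, not the draft's):

```python
# Ordering from the draft: headers first, END last, with the repeatable
# records (P2M, PAGE_DATA, VCPU_CONTEXT) allowed one or more times at
# their slot.
ORDER = ["IMAGE_HDR", "DOMAIN_HDR", "X86_PV_INFO", "P2M",
         "PAGE_DATA", "VCPU_INFO", "VCPU_CONTEXT", "END"]
REPEATABLE = {"P2M", "PAGE_DATA", "VCPU_CONTEXT"}

def valid_order(records):
    """Return True iff the record sequence matches the draft's ordering."""
    pos = 0
    for rec in records:
        if pos < len(ORDER) and rec == ORDER[pos]:
            pos += 1                      # advance to the next slot
        elif rec in REPEATABLE and pos > 0 and rec == ORDER[pos - 1]:
            continue                      # repeat of the current slot
        else:
            return False
    return pos == len(ORDER)              # every mandatory slot was seen

assert valid_order(["IMAGE_HDR", "DOMAIN_HDR", "X86_PV_INFO", "P2M",
                    "PAGE_DATA", "PAGE_DATA", "VCPU_INFO",
                    "VCPU_CONTEXT", "END"])
assert not valid_order(["DOMAIN_HDR", "IMAGE_HDR"])  # headers swapped
```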

--089e0111b7c60381f004f224858e--


--===============3300012052583883416==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3300012052583883416==--


From xen-devel-bounces@lists.xen.org Tue Feb 11 17:11:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGrd-0002IQ-HE; Tue, 11 Feb 2014 17:11:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDGrc-0002HJ-KL
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:11:16 +0000
Received: from [193.109.254.147:17714] by server-9.bemta-14.messagelabs.com id
	10/CE-24895-4B95AF25; Tue, 11 Feb 2014 17:11:16 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392138673!3624926!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25307 invoked from network); 11 Feb 2014 17:11:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:11:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101669073"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:10:44 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:10:44 -0500
Message-ID: <52FA5992.3070207@citrix.com>
Date: Tue, 11 Feb 2014 17:10:42 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <52F90A71.40802@citrix.com>
	<21242.21650.571988.662930@mariner.uk.xensource.com>
In-Reply-To: <21242.21650.571988.662930@mariner.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 16:49, Ian Jackson wrote:
> David Vrabel writes ("Domain Save Image Format proposal (draft B)"):
>> Here is a draft of a proposal for a new domain save image format.  It
>> does not currently cover all use cases (e.g., images for HVM guests are
>> not considered).
> 
> I think this is a good start.  I've made some other comments already.
> 
>> Overview
>> ========
> 
> I would like to make another perhaps controversial suggestion.  We
> should explicitly specify that the receiver may advertise its
> capabilities to the sender, so that backwards-migration _can_ be
> supported if we choose to do so.
> 
> In practice I think that means a capability advertisement block.
> Probably, one bit per version, one bit per record type, etc.
> 
> I greatly prefer doing forward-compatibility with new record type
> enums etc. rather than with version numbers.  Version numbers presuppose a
> strict global order on all the implementations' capabilities, which is
> of course not necessarily true in free software.

I think I quite like this idea.  There would still need to be a central
repository of record types.
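A capability advertisement along the lines Ian sketches could be as
simple as one bit per record type; a hypothetical illustration (the
record-type numbering here is invented, pending such a central
repository):

```python
# Hypothetical record-type enumeration; in a real implementation these
# numbers would come from the central repository of record types.
RECORD_TYPES = {"X86_PV_INFO": 0, "P2M": 1, "PAGE_DATA": 2,
                "VCPU_INFO": 3, "VCPU_CONTEXT": 4}

def advertise(supported):
    """Receiver side: encode the supported record types as a bitmap."""
    bits = 0
    for name in supported:
        bits |= 1 << RECORD_TYPES[name]
    return bits

def can_send(record, peer_bits):
    """Sender side: test a record type against the peer's advertisement."""
    return bool(peer_bits & (1 << RECORD_TYPES[record]))

old_reader = advertise(["X86_PV_INFO", "P2M", "PAGE_DATA"])
assert can_send("P2M", old_reader)
assert not can_send("VCPU_CONTEXT", old_reader)  # downgrade or fail early
```

The sender learns up front which records the (possibly older) receiver
understands, which is what makes backwards migration possible without a
strict global version order.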

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:15:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGvm-0002bP-EO; Tue, 11 Feb 2014 17:15:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDGvl-0002bH-EV
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:15:33 +0000
Received: from [193.109.254.147:23692] by server-3.bemta-14.messagelabs.com id
	AC/7E-00432-4BA5AF25; Tue, 11 Feb 2014 17:15:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392138930!3614919!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18174 invoked from network); 11 Feb 2014 17:15:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:15:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101670887"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:15:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:15:29 -0500
Message-ID: <1392138928.26657.134.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <rshriram@cs.ubc.ca>
Date: Tue, 11 Feb 2014 17:15:28 +0000
In-Reply-To: <CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 11:08 -0600, Shriram Rajagopalan wrote:

> I am not that familiar with ARM, but from what I read, its bi-endian
> past v3.

Modern ARMs can still do big endian, and are sometimes used that way
too.

I expect that this would be expressed as a new guest arch/type (ARMBE)
though.

But there's nothing wrong per se with having the endianness of an image
be explicit in the file format's header, even if it is a bit redundant.

Note that the initial header has to be in some fixed endianness, but
it's a small handful of bytes which only occurs once; I don't think the
byte swapping is an issue there.
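That "small handful of bytes" can be made concrete; a sketch (the magic
value and layout are hypothetical) where a fixed big-endian initial
header declares the endianness used by the rest of the image:

```python
import struct

LITTLE, BIG = 0, 1

def read_initial_header(blob):
    """The initial header is always big-endian; it declares the byte
    order used by everything that follows it."""
    magic, endian_flag = struct.unpack(">IB", blob[:5])
    assert magic == 0x58454E46  # hypothetical magic ("XENF")
    return "<" if endian_flag == LITTLE else ">"

def read_u64(blob, offset, byte_order):
    # Later fields are decoded with whichever order the header declared.
    return struct.unpack_from(byte_order + "Q", blob, offset)[0]

# A little-endian image: 5 fixed-order header bytes, then native fields.
hdr = struct.pack(">IB", 0x58454E46, LITTLE) + struct.pack("<Q", 42)
order = read_initial_header(hdr)
assert order == "<"
assert read_u64(hdr, 5, order) == 42
```

Only those first few bytes ever need unconditional swapping; everything
after them can be read in the image's declared order.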

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:15:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGvm-0002bP-EO; Tue, 11 Feb 2014 17:15:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDGvl-0002bH-EV
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:15:33 +0000
Received: from [193.109.254.147:23692] by server-3.bemta-14.messagelabs.com id
	AC/7E-00432-4BA5AF25; Tue, 11 Feb 2014 17:15:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392138930!3614919!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18174 invoked from network); 11 Feb 2014 17:15:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:15:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101670887"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:15:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:15:29 -0500
Message-ID: <1392138928.26657.134.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <rshriram@cs.ubc.ca>
Date: Tue, 11 Feb 2014 17:15:28 +0000
In-Reply-To: <CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 11:08 -0600, Shriram Rajagopalan wrote:

> I am not that familiar with ARM, but from what I read, it's bi-endian
> past v3.

Modern ARMs can still do big endian, and are sometimes used that way
too.

I expect that this would be expressed as a new guest arch/type (ARMBE)
though.

But there's nothing wrong per se with having the endianness of an image
be explicit in the file format's header, even if it is a bit redundant.

Note that the initial header has to be in some fixed endianness, but
it's a small handful of bytes which only occurs once, so I don't think
the byte swapping is an issue there.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:18:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:18:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDGy8-0002jU-Lh; Tue, 11 Feb 2014 17:18:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDGy6-0002jK-Qp
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:17:58 +0000
Received: from [85.158.139.211:41729] by server-3.bemta-5.messagelabs.com id
	4A/96-13671-54B5AF25; Tue, 11 Feb 2014 17:17:57 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392139075!3224130!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21525 invoked from network); 11 Feb 2014 17:17:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:17:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101671868"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:17:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:17:53 -0500
Message-ID: <1392139072.26657.135.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 17:17:52 +0000
In-Reply-To: <21242.22821.968418.709001@mariner.uk.xensource.com>
References: <1392137369-20277-1-git-send-email-ian.campbell@citrix.com>
	<21242.22821.968418.709001@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] cs-adjust-flight: fix runvar-del
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 17:08 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] cs-adjust-flight: fix runvar-del"):
> > Missing $.
> 
> WTF

I assume this isn't used much :-)

> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks.

> Maybe we should have a "non-urgent pending fixes" branch, since our
> push gate is going to be busy for a while.

Sure. Will you create it?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:29:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDH8b-0003Oc-3d; Tue, 11 Feb 2014 17:28:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDH8Z-0003OE-SA
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:28:48 +0000
Received: from [193.109.254.147:5672] by server-7.bemta-14.messagelabs.com id
	3E/41-23424-FCD5AF25; Tue, 11 Feb 2014 17:28:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392139725!3607670!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31514 invoked from network); 11 Feb 2014 17:28:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:28:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101674923"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:28:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 12:28:44 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDH8W-0001EQ-BW;
	Tue, 11 Feb 2014 17:28:44 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDH8W-0002G3-4S;
	Tue, 11 Feb 2014 17:28:44 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.24011.946144.811862@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 17:28:43 +0000
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52FA5992.3070207@citrix.com>
References: <52F90A71.40802@citrix.com>
	<21242.21650.571988.662930@mariner.uk.xensource.com>
	<52FA5992.3070207@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

David Vrabel writes ("Re: Domain Save Image Format proposal (draft B)"):
> On 11/02/14 16:49, Ian Jackson wrote:
> > I greatly prefer doing forward-compatibility with new record type
> > enums etc. than with version numbers.  Version numbers presuppose a
> > strict global order on all the implementations' capabilities, which is
> > of course not necessarily true in free software.
> 
> I think I quite like this idea.  There would still need to be a central
> repository of record types.

Right.

One thing you can do is, like PNG, have separate ranges of record
type numbers for (a) OK to ignore if not understood and (b) fatal
if not understood.

I think PNG also has a subset of the namespace for "private use".

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:31:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHAh-0003X8-Po; Tue, 11 Feb 2014 17:30:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDHAg-0003X1-60
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:30:58 +0000
Received: from [193.109.254.147:9462] by server-1.bemta-14.messagelabs.com id
	43/62-15438-15E5AF25; Tue, 11 Feb 2014 17:30:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392139854!3615141!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23047 invoked from network); 11 Feb 2014 17:30:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:30:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99866021"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:30:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 12:30:53 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDHAb-0001Ez-9L;
	Tue, 11 Feb 2014 17:30:53 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDHAb-0002GG-2S;
	Tue, 11 Feb 2014 17:30:53 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.24140.933116.633101@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 17:30:52 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392138928.26657.134.camel@kazak.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: rshriram@cs.ubc.ca, "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> But there's nothing wrong per se with having the endianness of an image
> be explicit in the file format's header, even if it is a bit redundant.

Right.

> Note that the initial header has to be in some fixed endianness, but
> it's a small handful of bytes which only occurs once, so I don't think
> the byte swapping is an issue there.

Given that it has to be some fixed endianness, and we don't want to
write code which will go wrong if we want to support a big-endian
arch in the future, the code we write now must include byteswapping
calls.

Those calls are simple if the specified endianness is big endian.
Also, making it the opposite of the native order for popular platforms
means we're more likely to notice if we miss any endian fix calls.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:31:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHBO-0003f2-Vx; Tue, 11 Feb 2014 17:31:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WDHBN-0003ef-1H
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:31:41 +0000
Received: from [85.158.143.35:25152] by server-1.bemta-4.messagelabs.com id
	C1/25-31661-B7E5AF25; Tue, 11 Feb 2014 17:31:39 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392139897!4905296!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11748 invoked from network); 11 Feb 2014 17:31:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:31:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101675639"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:31:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:31:26 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WDHB8-0006dx-8F;
	Tue, 11 Feb 2014 17:31:26 +0000
Message-ID: <1392139881.27767.6.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 11 Feb 2014 17:31:21 +0000
In-Reply-To: <1392138928.26657.134.camel@kazak.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 17:15 +0000, Ian Campbell wrote:
> On Tue, 2014-02-11 at 11:08 -0600, Shriram Rajagopalan wrote:
> 
> > I am not that familiar with ARM, but from what I read, it's bi-endian
> > past v3.
> 
> Modern ARMs can still do big endian, and are sometimes used that way
> too.
> 
> I expect that this would be expressed as a new guest arch/type (ARMBE)
> though.
> 
> But there's nothing wrong per se with having the endianness of an image
> be explicit in the file format's header, even if it is a bit redundant.
> 
> Note that the initial header has to be in some fixed endianness, but
> it's a small handful of bytes which only occurs once, so I don't think
> the byte swapping is an issue there.
> 
> Ian.
> 

I think we should just stick with the native format. It's easy to
handle.

If an x86 Xen receives an ARM image it probably cannot cope with it
anyway; it can only store it in some form. On the other hand, restoring
is much easier: x86 is always little-endian, so we know any image we
restore will be little-endian.

As for architectures which support both endiannesses, we will never
restore a little-endian image into a big-endian guest. Although we
could convert the headers and the like, we can't convert the memory
inside the guest, so the guest must have the same endianness. If a
machine can switch endianness just by setting some CPU flags then this
makes sense (though it would still keep the VM's endianness!), but why
slow things down on systems that can't handle it?

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 17:15 +0000, Ian Campbell wrote:
> On Tue, 2014-02-11 at 11:08 -0600, Shriram Rajagopalan wrote:
> 
> > I am not that familiar with ARM, but from what I read, its bi-endian
> > past v3.
> 
> Modern ARMs can still do big endian, and are sometimes used that way
> too.
> 
> I expect that this would be expressed as a new guest arch/type (ARMBE)
> though.
> 
> But there's nothing wrong per-se with having the endianness of an image
> be explicit in the file formats header, even if it is a bit redundant.
> 
> Note that the initial header has to be in some fixed endianness, but
> it's a small handful of bytes which only occurs once, I don't think the
> byte swapping is an issue there.
> 
> Ian.
> 

I think we should just stick with the native format. It's easy to
handle.

If an x86 Xen receives an ARM image it probably cannot cope with it; it
can only store it in some form. Restoring, on the other hand, is much
easier: x86 is always little-endian, so we know any image we restore
will be little-endian.

As for architectures which support both endiannesses, we will never
restore a little-endian image into a big-endian guest. Although we could
convert the headers and similar metadata, we can't convert the memory
inside the guest, so the guest must keep the same endianness. If a
machine can switch endianness just by setting some CPU flags, handling
both might make sense (though it would still preserve the VM's
endianness!), but why slow down systems that can't handle it?
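Ian's point that the initial header must be in some fixed endianness suggests a simple detection scheme. Here is a minimal sketch (not the actual Xen image format; the magic value and function names below are made up for illustration):

```c
#include <stdint.h>

/* Hypothetical fixed little-endian magic; NOT the real Xen image magic. */
#define IMG_MAGIC 0x58454E49u

/* Swap the byte order of a 32-bit value. */
static uint32_t swap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00u) |
           ((v << 8) & 0x00ff0000u) | (v << 24);
}

/*
 * Inspect the magic field read verbatim from the image header.
 * Returns 1 if the image matches the reader's byte order,
 * 0 if every multi-byte header field must be byte-swapped,
 * -1 if the field is not a valid magic at all.
 */
static int image_byte_order(uint32_t magic_field)
{
    if (magic_field == IMG_MAGIC)
        return 1;
    if (swap32(magic_field) == IMG_MAGIC)
        return 0;
    return -1;
}
```

A reader that gets 0 back would swap each multi-byte header field before use; as noted above, guest memory pages would still have to be copied verbatim, since their endianness belongs to the guest.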

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:40:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHJZ-0004E2-JX; Tue, 11 Feb 2014 17:40:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDHJX-0004DN-SD
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 17:40:08 +0000
Received: from [85.158.143.35:24234] by server-1.bemta-4.messagelabs.com id
	40/1F-31661-7706AF25; Tue, 11 Feb 2014 17:40:07 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392140405!4898269!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10436 invoked from network); 11 Feb 2014 17:40:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:40:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101678179"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 17:40:04 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:40:04 -0500
Message-ID: <52FA6072.40908@citrix.com>
Date: Tue, 11 Feb 2014 18:40:02 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Sander Eikelenboom <linux@eikelenboom.it>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
	<1542261541.20140211170750@eikelenboom.it>
In-Reply-To: <1542261541.20140211170750@eikelenboom.it>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	david.vrabel@citrix.com, msw@amazon.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 17:07, Sander Eikelenboom wrote:
> 
> Tuesday, February 11, 2014, 4:56:50 PM, you wrote:
> 
>> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>>> Hi Konrad,
>>>
>>> Today decided to try out another kernel RC and your pull request to Jens on top of it .. and I encountered this one:
> 
>> Thank you for testing!
> 
>> Could you provide the .config file please?
> 
> Attached
> 
>> Did you see this _before_ the pull request with Jens? I presume
>> not, but just double checking?
> 
> Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment)
> 
>> And lastly - what were you doing when you triggered this? Just launching
>> a guest?
> 
> Nope, it triggers on guest shutdown ..
> 
> 
>> CC-ing Roger and other folks who were on the patches.
> 
>>>
>>>
>>> [  438.029756] INFO: trying to register non-static key.
>>> [  438.029759] the code is fine but needs lockdep annotation.
>>> [  438.029760] turning off the locking correctness validator.
>>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>>> [  438.029799] Call Trace:
>>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>
>>> Doesn't seem too serious .. but nevertheless :-)

Thanks for the report!

Does the following patch solve the problem?

---
commit c1460953d081c8a18ac9e84fe90f696cdceae105
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Tue Feb 11 17:21:19 2014 +0100

    xen-blkback: init persistent_purge_work work_struct

    Do a dummy initialization of the persistent_purge_work
    work_struct on xen_blkif_alloc, so that when flush_work is called on
    shutdown the struct is initialized even if it hasn't been used.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 84973c6..3df7575 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -129,6 +129,17 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 	atomic_set(&blkif->inflight, 0);
+	/*
+	 * Init the work struct with a NULL function, this is done
+	 * so that flush_work doesn't complain when shutting down if
+	 * persistent_purge_work has not been used during the lifetime
+	 * of this blkback instance.
+	 *
+	 * NB: In purge_persistent_gnt we make sure that
+	 * persistent_purge_work is always correctly setup with a valid
+	 * function pointer before being scheduled.
+	 */
+	INIT_WORK(&blkif->persistent_purge_work, NULL);
 
 	INIT_LIST_HEAD(&blkif->pending_free);
 

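To illustrate why the dummy INIT_WORK is sufficient, here is a userspace analogue (plain C, not kernel code; `struct work_item` and both helpers are invented for illustration): a flush only needs the item's bookkeeping to be initialised, not a usable work function.

```c
#include <stddef.h>

typedef void (*work_fn)(void);

/* Userspace stand-in for a kernel work_struct. */
struct work_item {
    int initialised;   /* stands in for the lockdep key INIT_WORK sets up */
    work_fn fn;
};

/* Analogue of INIT_WORK: a NULL fn is fine, it just marks the item valid. */
static void init_work(struct work_item *w, work_fn fn)
{
    w->initialised = 1;
    w->fn = fn;
}

/*
 * Analogue of flush_work: returns 0 on success, or -1 if the item was
 * never initialised -- the counterpart of lockdep's "trying to register
 * non-static key" warning in the report above.
 */
static int flush_work_item(const struct work_item *w)
{
    if (!w->initialised)
        return -1;
    if (w->fn)
        w->fn();
    return 0;
}
```

The design choice mirrors the patch: pay for one cheap initialization at allocation time so the generic teardown path never has to guess whether the work item was ever used.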


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:52:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHUz-0005I9-Bb; Tue, 11 Feb 2014 17:51:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WDHUx-0005I4-OI
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:51:55 +0000
Received: from [193.109.254.147:48154] by server-13.bemta-14.messagelabs.com
	id A0/9E-01226-B336AF25; Tue, 11 Feb 2014 17:51:55 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392141113!3588838!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14807 invoked from network); 11 Feb 2014 17:51:53 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-15.tower-27.messagelabs.com with SMTP;
	11 Feb 2014 17:51:53 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 9735E976;
	Tue, 11 Feb 2014 17:51:52 +0000 (UTC)
To: john.stultz@linaro.org, david.vrabel@citrix.com, gregkh@linuxfoundation.org,
	konrad.wilk@oracle.com, mingo@kernel.org, prarit@redhat.com,
	richardcochran@gmail.com, sasha.levin@oracle.com,
	tglx@linutronix.de, xen-devel@lists.xen.org
From: <gregkh@linuxfoundation.org>
Date: Tue, 11 Feb 2014 09:53:08 -0800
Message-ID: <13921411883099@kroah.com>
MIME-Version: 1.0
Cc: stable@vger.kernel.org, stable-commits@vger.kernel.org
Subject: [Xen-devel] Patch "timekeeping: Fix potential lost pv notification
	of time change" has been added to the 3.12-stable tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This is a note to let you know that I've just added the patch titled

    timekeeping: Fix potential lost pv notification of time change

to the 3.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     timekeeping-fix-potential-lost-pv-notification-of-time-change.patch
and it can be found in the queue-3.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From 5258d3f25c76f6ab86e9333abf97a55a877d3870 Mon Sep 17 00:00:00 2001
From: John Stultz <john.stultz@linaro.org>
Date: Wed, 11 Dec 2013 20:07:49 -0800
Subject: timekeeping: Fix potential lost pv notification of time change

From: John Stultz <john.stultz@linaro.org>

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only used the returned value
in one location, and not in the logarithmic accumulation.

This means that if a leap second triggered during the logarithmic
accumulation (which is the most likely place for it to happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.
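The effect of the change can be modelled in a few lines (a deliberately simplified model, not the real timekeeping code; `step` and `accumulate` are invented names): each accumulation step's flag must be OR-ed into one accumulator, otherwise a notification raised mid-loop is lost.

```c
#define TK_CLOCK_WAS_SET 1u

/* One accumulation step: reports whether it crossed a leap second. */
static unsigned int step(int leap_hit)
{
    return leap_hit ? TK_CLOCK_WAS_SET : 0u;
}

/*
 * Model of the fixed update_wall_time() loop: OR every step's flag into
 * clock_set ("|=", as in the patch) instead of keeping only the value
 * of the final call, so a flag raised in any iteration survives.
 */
static unsigned int accumulate(const int *leap_hits, int n)
{
    unsigned int clock_set = 0u;
    for (int i = 0; i < n; i++)
        clock_set |= step(leap_hits[i]);
    return clock_set;
}
```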

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/time/timekeeping.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_ns
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_ns
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);


Patches currently in stable-queue which might be from john.stultz@linaro.org are

queue-3.12/timekeeping-fix-potential-lost-pv-notification-of-time-change.patch
queue-3.12/timekeeping-fix-lost-updates-to-tai-adjustment.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:52:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHUz-0005I9-Bb; Tue, 11 Feb 2014 17:51:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WDHUx-0005I4-OI
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:51:55 +0000
Received: from [193.109.254.147:48154] by server-13.bemta-14.messagelabs.com
	id A0/9E-01226-B336AF25; Tue, 11 Feb 2014 17:51:55 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392141113!3588838!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14807 invoked from network); 11 Feb 2014 17:51:53 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-15.tower-27.messagelabs.com with SMTP;
	11 Feb 2014 17:51:53 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 9735E976;
	Tue, 11 Feb 2014 17:51:52 +0000 (UTC)
To: john.stultz@linaro.org, david.vrabel@citrix.com, gregkh@linuxfoundation.org,
	konrad.wilk@oracle.com, mingo@kernel.org, prarit@redhat.com,
	richardcochran@gmail.com, sasha.levin@oracle.com,
	tglx@linutronix.de, xen-devel@lists.xen.org
From: <gregkh@linuxfoundation.org>
Date: Tue, 11 Feb 2014 09:53:08 -0800
Message-ID: <13921411883099@kroah.com>
MIME-Version: 1.0
Cc: stable@vger.kernel.org, stable-commits@vger.kernel.org
Subject: [Xen-devel] Patch "timekeeping: Fix potential lost pv notification
	of time change" has been added to the 3.12-stable tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This is a note to let you know that I've just added the patch titled

    timekeeping: Fix potential lost pv notification of time change

to the 3.12-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     timekeeping-fix-potential-lost-pv-notification-of-time-change.patch
and it can be found in the queue-3.12 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From 5258d3f25c76f6ab86e9333abf97a55a877d3870 Mon Sep 17 00:00:00 2001
From: John Stultz <john.stultz@linaro.org>
Date: Wed, 11 Dec 2013 20:07:49 -0800
Subject: timekeeping: Fix potential lost pv notification of time change

From: John Stultz <john.stultz@linaro.org>

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/time/timekeeping.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_ns
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_ns
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);


Patches currently in stable-queue which might be from john.stultz@linaro.org are

queue-3.12/timekeeping-fix-potential-lost-pv-notification-of-time-change.patch
queue-3.12/timekeeping-fix-lost-updates-to-tai-adjustment.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:52:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHVF-0005J8-6C; Tue, 11 Feb 2014 17:52:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WDHV9-0005IO-2g
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:52:10 +0000
Received: from [85.158.137.68:47374] by server-17.bemta-3.messagelabs.com id
	EF/7E-22569-6436AF25; Tue, 11 Feb 2014 17:52:06 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392141124!1178270!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6786 invoked from network); 11 Feb 2014 17:52:04 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-8.tower-31.messagelabs.com with SMTP;
	11 Feb 2014 17:52:04 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id AB3C3976;
	Tue, 11 Feb 2014 17:52:03 +0000 (UTC)
To: john.stultz@linaro.org, david.vrabel@citrix.com, gregkh@linuxfoundation.org,
	konrad.wilk@oracle.com, mingo@kernel.org, prarit@redhat.com,
	richardcochran@gmail.com, sasha.levin@oracle.com,
	tglx@linutronix.de, xen-devel@lists.xen.org
From: <gregkh@linuxfoundation.org>
Date: Tue, 11 Feb 2014 09:53:11 -0800
Message-ID: <1392141191732@kroah.com>
MIME-Version: 1.0
Cc: stable@vger.kernel.org, stable-commits@vger.kernel.org
Subject: [Xen-devel] Patch "timekeeping: Fix potential lost pv notification
	of time change" has been added to the 3.13-stable tree
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This is a note to let you know that I've just added the patch titled

    timekeeping: Fix potential lost pv notification of time change

to the 3.13-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     timekeeping-fix-potential-lost-pv-notification-of-time-change.patch
and it can be found in the queue-3.13 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From 5258d3f25c76f6ab86e9333abf97a55a877d3870 Mon Sep 17 00:00:00 2001
From: John Stultz <john.stultz@linaro.org>
Date: Wed, 11 Dec 2013 20:07:49 -0800
Subject: timekeeping: Fix potential lost pv notification of time change

From: John Stultz <john.stultz@linaro.org>

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/time/timekeeping.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_ns
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_ns
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);


Patches currently in stable-queue which might be from john.stultz@linaro.org are

queue-3.13/timekeeping-fix-potential-lost-pv-notification-of-time-change.patch
queue-3.13/timekeeping-fix-lost-updates-to-tai-adjustment.patch

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:52:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:52:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHVV-0005Lc-Ne; Tue, 11 Feb 2014 17:52:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDHVU-0005LJ-K9
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 17:52:28 +0000
Received: from [85.158.137.68:4876] by server-7.bemta-3.messagelabs.com id
	28/AB-13775-B536AF25; Tue, 11 Feb 2014 17:52:27 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392141145!1190037!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30291 invoked from network); 11 Feb 2014 17:52:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:52:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99873184"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:52:20 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:52:19 -0500
Message-ID: <52FA6352.1010403@citrix.com>
Date: Tue, 11 Feb 2014 17:52:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>	<20140210195402.GA3924@kernel.dk>	<771950784.20140211165215@eikelenboom.it>	<20140211155650.GA23026@phenom.dumpdata.com>	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com>
In-Reply-To: <52FA6072.40908@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	Sander Eikelenboom <linux@eikelenboom.it>,
	david.vrabel@citrix.com, msw@amazon.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 17:40, Roger Pau Monné wrote:
> On 11/02/14 17:07, Sander Eikelenboom wrote:
>>
>> Tuesday, February 11, 2014, 4:56:50 PM, you wrote:
>>
>>> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>>>> Hi Konrad,
>>>>
>>>> Today I decided to try out another kernel RC and your pull request to Jens on top of it .. and I encountered this one:
>>
>>> Thank you for testing!
>>
>>> Could you provide the .config file please?
>>
>> Attached
>>
>>> Did you see this _before_ the pull request with Jens? I presume
>>> not, but just double checking?
>>
>> Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment)
>>
>>> And lastly - what were you doing when you triggered this? Just launching
>>> a guest?
>>
>> Nope it triggers on guest shutdown ..
>>
>>
>>> CC-ing Roger and other folks who were on the patches.
>>
>>>>
>>>>
>>>> [  438.029756] INFO: trying to register non-static key.
>>>> [  438.029759] the code is fine but needs lockdep annotation.
>>>> [  438.029760] turning off the locking correctness validator.
>>>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>>>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>>>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>>>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>>>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>>>> [  438.029799] Call Trace:
>>>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>>>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>>>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>>>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>>>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>>>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>>>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>>>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>>>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>>>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>>>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>>>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>>>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>>>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>>>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>>
>>>> Doesn't seem too serious .. but nevertheless :-)
> 
> Thanks for the report!
> 
> Does the following patch solve the problem?
> 
> ---
> commit c1460953d081c8a18ac9e84fe90f696cdceae105
> Author: Roger Pau Monne <roger.pau@citrix.com>
> Date:   Tue Feb 11 17:21:19 2014 +0100
> 
>     xen-blkback: init persistent_purge_work work_struct
> 
>     Do a dummy initialization of the persistent_purge_work
>     work_struct on xen_blkif_alloc, so that when flush_work is called on
>     shutdown the struct is initialized even if it hasn't been used.
> 
>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index 84973c6..3df7575 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -129,6 +129,17 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>  	blkif->free_pages_num = 0;
>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>  	atomic_set(&blkif->inflight, 0);
> +	/*
> +	 * Init the work struct with a NULL function, this is done
> +	 * so that flush_work doesn't complain when shutting down if
> +	 * persistent_purge_work has not been used during the lifetime
> +	 * of this blkback instance.
> +	 *
> +	 * NB: In purge_persistent_gnt we make sure that
> +	 * persistent_purge_work is always correctly setup with a valid
> +	 * function pointer before being scheduled.
> +	 */
> +	INIT_WORK(&blkif->persistent_purge_work, NULL);

I think you should init this fully here and remove the other call to
INIT_WORK.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:52:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:52:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHVV-0005Lc-Ne; Tue, 11 Feb 2014 17:52:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDHVU-0005LJ-K9
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 17:52:28 +0000
Received: from [85.158.137.68:4876] by server-7.bemta-3.messagelabs.com id
	28/AB-13775-B536AF25; Tue, 11 Feb 2014 17:52:27 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392141145!1190037!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30291 invoked from network); 11 Feb 2014 17:52:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:52:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99873184"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:52:20 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:52:19 -0500
Message-ID: <52FA6352.1010403@citrix.com>
Date: Tue, 11 Feb 2014 17:52:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>	<20140210195402.GA3924@kernel.dk>	<771950784.20140211165215@eikelenboom.it>	<20140211155650.GA23026@phenom.dumpdata.com>	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com>
In-Reply-To: <52FA6072.40908@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	Sander Eikelenboom <linux@eikelenboom.it>,
	david.vrabel@citrix.com, msw@amazon.com, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 17:40, Roger Pau Monn=E9 wrote:
> On 11/02/14 17:07, Sander Eikelenboom wrote:
>>
>> Tuesday, February 11, 2014, 4:56:50 PM, you wrote:
>>
>>> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>>>> Hi Konrad,
>>>>
>>>> Today decided to tryout another kernel RC and your pull request to Jen=
s on top of it .. and I encoutered this one:
>>
>>> Thank you for testing!
>>
>>> Could you provide the .config file please?
>>
>> Attached
>>
>>> Did you see this _before_ the pull request with Jens? I presume
>>> not, but just double checking?
>>
>> Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment)
>>
>>> And lastly - what were you doing when you triggered this? Just launching
>>> a guest?
>>
>> Nope it triggers on guest shutdown ..
>>
>>
>>> CC-ing Roger and other folks who were on the patches.
>>
>>>>
>>>>
>>>> [  438.029756] INFO: trying to register non-static key.
>>>> [  438.029759] the code is fine but needs lockdep annotation.
>>>> [  438.029760] turning off the locking correctness validator.
>>>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>>>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>>>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>>>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>>>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>>>> [  438.029799] Call Trace:
>>>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>>>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>>>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>>>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>>>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>>>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>>>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>>>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>>>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>>>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>>>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>>>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>>>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>>>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>>>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>>
>>>> Doesn't seem too serious .. but nevertheless :-)
>
> Thanks for the report!
>
> Does the following patch solve the problem?
>
> ---
> commit c1460953d081c8a18ac9e84fe90f696cdceae105
> Author: Roger Pau Monne <roger.pau@citrix.com>
> Date:   Tue Feb 11 17:21:19 2014 +0100
>
>     xen-blkback: init persistent_purge_work work_struct
>
>     Do a dummy initialization of the persistent_purge_work
>     work_struct on xen_blkif_alloc, so that when flush_work is called on
>     shutdown the struct is initialized even if it hasn't been used.
>
>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>
> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
> index 84973c6..3df7575 100644
> --- a/drivers/block/xen-blkback/xenbus.c
> +++ b/drivers/block/xen-blkback/xenbus.c
> @@ -129,6 +129,17 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>  	blkif->free_pages_num = 0;
>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>  	atomic_set(&blkif->inflight, 0);
> +	/*
> +	 * Init the work struct with a NULL function, this is done
> +	 * so that flush_work doesn't complain when shutting down if
> +	 * persistent_purge_work has not been used during the lifetime
> +	 * of this blkback instance.
> +	 *
> +	 * NB: In purge_persistent_gnt we make sure that
> +	 * persistent_purge_work is always correctly setup with a valid
> +	 * function pointer before being scheduled.
> +	 */
> +	INIT_WORK(&blkif->persistent_purge_work, NULL);

I think you should init this fully here and remove the other call to
INIT_WORK.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 17:53:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 17:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHWv-0005Yc-SF; Tue, 11 Feb 2014 17:53:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDHWu-0005YN-22
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:53:56 +0000
Received: from [85.158.143.35:27591] by server-3.bemta-4.messagelabs.com id
	ED/F8-11539-3B36AF25; Tue, 11 Feb 2014 17:53:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392141233!4903351!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5745 invoked from network); 11 Feb 2014 17:53:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:53:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99873590"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:53:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 12:53:52 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDHWq-0001Mm-Ay;
	Tue, 11 Feb 2014 17:53:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDHWq-0002Ik-49;
	Tue, 11 Feb 2014 17:53:52 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21242.25519.844862.598762@mariner.uk.xensource.com>
Date: Tue, 11 Feb 2014 17:53:51 +0000
To: Frediano Ziglio <frediano.ziglio@citrix.com>
In-Reply-To: <1392139881.27767.6.camel@hamster.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Frediano Ziglio writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> [stuff]

I think this is in danger of becoming a bikeshed issue.  I am
unconvinced by the arguments made on the other side.  The performance
and code complexity implications of byteswapping the image header are
IMO minimal.  So I'm going to put my foot down and say this:

David is entirely correct to specify a fixed endianness for the image
header.  As a tools maintainer I would accept patches for David's
format (subject to the other comments that are being discussed).  I
would be very reluctant to accept patches for a similar format with a
variable-endianness image header.  I would certainly not accept
patches which purport to implement a nominally-fixed-endianness format
but where the implementation is not in fact portable to the other
endianness.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:00:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHck-00060Q-AC; Tue, 11 Feb 2014 17:59:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDHci-0005zd-Cs
	for Xen-devel@lists.xen.org; Tue, 11 Feb 2014 17:59:56 +0000
Received: from [85.158.139.211:35441] by server-9.bemta-5.messagelabs.com id
	7F/DC-11237-B156AF25; Tue, 11 Feb 2014 17:59:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392141593!3231367!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24637 invoked from network); 11 Feb 2014 17:59:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 17:59:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99874827"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 17:59:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 12:59:52 -0500
Message-ID: <1392141590.26657.137.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 17:59:50 +0000
In-Reply-To: <21242.25519.844862.598762@mariner.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
	<21242.25519.844862.598762@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Frediano Ziglio <frediano.ziglio@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 17:53 +0000, Ian Jackson wrote:
> Frediano Ziglio writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> David is entirely correct to specify a fixed endianness for the image
> header.

Full ack.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:15:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHrb-0006zp-C5; Tue, 11 Feb 2014 18:15:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDHrZ-0006zk-NT
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 18:15:17 +0000
Received: from [193.109.254.147:39166] by server-13.bemta-14.messagelabs.com
	id 34/F5-01226-4B86AF25; Tue, 11 Feb 2014 18:15:16 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392142514!3623505!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28294 invoked from network); 11 Feb 2014 18:15:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 18:15:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99881366"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 18:15:13 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 13:15:12 -0500
Message-ID: <52FA68AE.6070608@citrix.com>
Date: Tue, 11 Feb 2014 19:15:10 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>	<20140210195402.GA3924@kernel.dk>	<771950784.20140211165215@eikelenboom.it>	<20140211155650.GA23026@phenom.dumpdata.com>	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
In-Reply-To: <52FA6352.1010403@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	Sander Eikelenboom <linux@eikelenboom.it>, msw@amazon.com,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 18:52, David Vrabel wrote:
> On 11/02/14 17:40, Roger Pau Monné wrote:
>> On 11/02/14 17:07, Sander Eikelenboom wrote:
>>>
>>> Tuesday, February 11, 2014, 4:56:50 PM, you wrote:
>>>
>>>> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>>>>> Hi Konrad,
>>>>>
>>>>> Today I decided to try out another kernel RC and your pull request to Jens on top of it .. and I encountered this one:
>>>
>>>> Thank you for testing!
>>>
>>>> Could you provide the .config file please?
>>>
>>> Attached
>>>
>>>> Did you see this _before_ the pull request with Jens? I presume
>>>> not, but just double checking?
>>>
>>> Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment)
>>>
>>>> And lastly - what were you doing when you triggered this? Just launching
>>>> a guest?
>>>
>>> Nope it triggers on guest shutdown ..
>>>
>>>
>>>> CC-ing Roger and other folks who were on the patches.
>>>
>>>>>
>>>>>
>>>>> [  438.029756] INFO: trying to register non-static key.
>>>>> [  438.029759] the code is fine but needs lockdep annotation.
>>>>> [  438.029760] turning off the locking correctness validator.
>>>>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>>>>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>>>>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>>>>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>>>>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>>>>> [  438.029799] Call Trace:
>>>>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>>>>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>>>>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>>>>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>>>>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>>>>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>>>>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>>>>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>>>>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>>>>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>>>>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>>>>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>>>>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>>>>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>>>>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>>>
>>>>> Doesn't seem too serious .. but nevertheless :-)
>>
>> Thanks for the report!
>>
>> Does the following patch solve the problem?
>>
>> ---
>> commit c1460953d081c8a18ac9e84fe90f696cdceae105
>> Author: Roger Pau Monne <roger.pau@citrix.com>
>> Date:   Tue Feb 11 17:21:19 2014 +0100
>>
>>     xen-blkback: init persistent_purge_work work_struct
>>
>>     Do a dummy initialization of the persistent_purge_work
>>     work_struct on xen_blkif_alloc, so that when flush_work is called on
>>     shutdown the struct is initialized even if it hasn't been used.
>>
>>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
>> index 84973c6..3df7575 100644
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
>> @@ -129,6 +129,17 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>>  	blkif->free_pages_num = 0;
>>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>>  	atomic_set(&blkif->inflight, 0);
>> +	/*
>> +	 * Init the work struct with a NULL function, this is done
>> +	 * so that flush_work doesn't complain when shutting down if
>> +	 * persistent_purge_work has not been used during the lifetime
>> +	 * of this blkback instance.
>> +	 *
>> +	 * NB: In purge_persistent_gnt we make sure that
>> +	 * persistent_purge_work is always correctly setup with a valid
>> +	 * function pointer before being scheduled.
>> +	 */
>> +	INIT_WORK(&blkif->persistent_purge_work, NULL);
>
> I think you should init this fully here and remove the other call to
> INIT_WORK.

That would mean that unmap_purged_grants would no longer be static and
I should also add a prototype for it in blkback/common.h, which is kind
of ugly IMHO.

---
commit 980e72e45454b64ccb7f23b6794a769384e51038
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Tue Feb 11 19:04:06 2014 +0100

    xen-blkback: init persistent_purge_work work_struct

    Initialize persistent_purge_work work_struct on xen_blkif_alloc (and
    remove the previous initialization done in purge_persistent_gnt). This
    prevents flush_work from complaining even if purge_persistent_gnt has
    not been used.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index e612627..10cd50b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -299,7 +299,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 	BUG_ON(num != 0);
 }
 
-static void unmap_purged_grants(struct work_struct *work)
+void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 {
 	struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
@@ -420,7 +420,6 @@ finished:
 	blkif->vbd.overflow_max_grants = 0;
 
 	/* We can defer this work */
-	INIT_WORK(&blkif->persistent_purge_work, unmap_purged_grants);
 	schedule_work(&blkif->persistent_purge_work);
 	pr_debug(DRV_PFX "Purged %u/%u\n", (total - num_clean), total);
 	return;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 9eb34e2..be05277 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -385,6 +385,7 @@ int xen_blkbk_flush_diskcache(struct xenbus_transaction xbt,
 int xen_blkbk_barrier(struct xenbus_transaction xbt,
 		      struct backend_info *be, int state);
 struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be);
+void xen_blkbk_unmap_purged_grants(struct work_struct *work);
 =

 static inline void blkif_get_x86_32_req(struct blkif_request *dst,
 					struct blkif_x86_32_request *src)
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback=
/xenbus.c
index 84973c6..9a547e6 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -129,6 +129,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->free_pages_num =3D 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 	atomic_set(&blkif->inflight, 0);
+	INIT_WORK(&blkif->persistent_purge_work, xen_blkbk_unmap_purged_grants);
 =

 	INIT_LIST_HEAD(&blkif->pending_free);
 =




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:15:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHrb-0006zp-C5; Tue, 11 Feb 2014 18:15:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDHrZ-0006zk-NT
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 18:15:17 +0000
Received: from [193.109.254.147:39166] by server-13.bemta-14.messagelabs.com
	id 34/F5-01226-4B86AF25; Tue, 11 Feb 2014 18:15:16 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392142514!3623505!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28294 invoked from network); 11 Feb 2014 18:15:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 18:15:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99881366"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 18:15:13 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 13:15:12 -0500
Message-ID: <52FA68AE.6070608@citrix.com>
Date: Tue, 11 Feb 2014 19:15:10 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>	<20140210195402.GA3924@kernel.dk>	<771950784.20140211165215@eikelenboom.it>	<20140211155650.GA23026@phenom.dumpdata.com>	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
In-Reply-To: <52FA6352.1010403@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	Sander Eikelenboom <linux@eikelenboom.it>, msw@amazon.com,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 18:52, David Vrabel wrote:
> On 11/02/14 17:40, Roger Pau Monné wrote:
>> On 11/02/14 17:07, Sander Eikelenboom wrote:
>>>
>>> Tuesday, February 11, 2014, 4:56:50 PM, you wrote:
>>>
>>>> On Tue, Feb 11, 2014 at 04:52:15PM +0100, Sander Eikelenboom wrote:
>>>>> Hi Konrad,
>>>>>
>>>>> Today decided to try out another kernel RC and your pull request to Jens on top of it .. and I encountered this one:
>>>
>>>> Thank you for testing!
>>>
>>>> Could you provide the .config file please?
>>>
>>> Attached
>>>
>>>> Did you see this _before_ the pull request with Jens? I presume
>>>> not, but just double checking?
>>>
>>> Nope, not to my knowledge (though it's a bit messy with things broken on 3.14 at the moment)
>>>
>>>> And lastly - what were you doing when you triggered this? Just launching
>>>> a guest?
>>>
>>> Nope, it triggers on guest shutdown ..
>>>
>>>
>>>> CC-ing Roger and other folks who were on the patches.
>>>
>>>>>
>>>>>
>>>>> [  438.029756] INFO: trying to register non-static key.
>>>>> [  438.029759] the code is fine but needs lockdep annotation.
>>>>> [  438.029760] turning off the locking correctness validator.
>>>>> [  438.029770] CPU: 3 PID: 9593 Comm: blkback.2.xvda Tainted: G        W    3.14.0-rc2-20140211-pcireset-net-btrevert-xenblock+ #1
>>>>> [  438.029773] Hardware name: MSI MS-7640/890FXA-GD70 (MS-7640)  , BIOS V1.8B1 09/13/2010
>>>>> [  438.029784]  ffff88005224c4f0 ffff88004e5d9b68 ffffffff81b808c4 ffff88004ba2b510
>>>>> [  438.029791]  0000000000000002 ffff88004e5d9c38 ffffffff81116eab ffff88004e5d9bf8
>>>>> [  438.029798]  ffffffff81117b35 0000000000000000 0000000000000000 ffffffff82cee570
>>>>> [  438.029799] Call Trace:
>>>>> [  438.029815]  [<ffffffff81b808c4>] dump_stack+0x46/0x58
>>>>> [  438.029826]  [<ffffffff81116eab>] __lock_acquire+0x1c2b/0x2220
>>>>> [  438.029833]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>>> [  438.029841]  [<ffffffff81117b0d>] lock_acquire+0xbd/0x150
>>>>> [  438.029847]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>>> [  438.029852]  [<ffffffff810e599d>] flush_work+0x3d/0x290
>>>>> [  438.029856]  [<ffffffff810e5965>] ? flush_work+0x5/0x290
>>>>> [  438.029863]  [<ffffffff81117b35>] ? lock_acquire+0xe5/0x150
>>>>> [  438.029872]  [<ffffffff816fef01>] ? xen_blkif_schedule+0x1a1/0x8d0
>>>>> [  438.029881]  [<ffffffff81b8ae0d>] ? _raw_spin_unlock_irqrestore+0x6d/0x90
>>>>> [  438.029888]  [<ffffffff8111392b>] ? trace_hardirqs_on_caller+0xfb/0x240
>>>>> [  438.029894]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>>> [  438.029901]  [<ffffffff816fefe9>] xen_blkif_schedule+0x289/0x8d0
>>>>> [  438.029907]  [<ffffffff8110d510>] ? __init_waitqueue_head+0x60/0x60
>>>>> [  438.029913]  [<ffffffff81113a7d>] ? trace_hardirqs_on+0xd/0x10
>>>>> [  438.029919]  [<ffffffff81b8ae21>] ? _raw_spin_unlock_irqrestore+0x81/0x90
>>>>> [  438.029925]  [<ffffffff816fed60>] ? xen_blkif_be_int+0x40/0x40
>>>>> [  438.029932]  [<ffffffff810ee374>] kthread+0xe4/0x100
>>>>> [  438.029938]  [<ffffffff81b8afe0>] ? _raw_spin_unlock_irq+0x30/0x50
>>>>> [  438.029946]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>>> [  438.029951]  [<ffffffff81b8c1fc>] ret_from_fork+0x7c/0xb0
>>>>> [  438.029958]  [<ffffffff810ee290>] ? __init_kthread_worker+0x70/0x70
>>>>>
>>>>> Doesn't seem too serious .. but nevertheless :-)
>>
>> Thanks for the report!
>>
>> Does the following patch solve the problem?
>>
>> ---
>> commit c1460953d081c8a18ac9e84fe90f696cdceae105
>> Author: Roger Pau Monne <roger.pau@citrix.com>
>> Date:   Tue Feb 11 17:21:19 2014 +0100
>>
>>     xen-blkback: init persistent_purge_work work_struct
>>
>>     Do a dummy initialization of the persistent_purge_work
>>     work_struct on xen_blkif_alloc, so that when flush_work is called on
>>     shutdown the struct is initialized even if it hasn't been used.
>>
>>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>> diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
>> index 84973c6..3df7575 100644
>> --- a/drivers/block/xen-blkback/xenbus.c
>> +++ b/drivers/block/xen-blkback/xenbus.c
>> @@ -129,6 +129,17 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
>>  	blkif->free_pages_num = 0;
>>  	atomic_set(&blkif->persistent_gnt_in_use, 0);
>>  	atomic_set(&blkif->inflight, 0);
>> +	/*
>> +	 * Init the work struct with a NULL function, this is done
>> +	 * so that flush_work doesn't complain when shutting down if
>> +	 * persistent_purge_work has not been used during the lifetime
>> +	 * of this blkback instance.
>> +	 *
>> +	 * NB: In purge_persistent_gnt we make sure that
>> +	 * persistent_purge_work is always correctly setup with a valid
>> +	 * function pointer before being scheduled.
>> +	 */
>> +	INIT_WORK(&blkif->persistent_purge_work, NULL);
>
> I think you should init this fully here and remove the other call to
> INIT_WORK.

That would mean that unmap_purged_grants would no longer be static and
I should also add a prototype for it in blkback/common.h, which is kind
of ugly IMHO.

---
commit 980e72e45454b64ccb7f23b6794a769384e51038
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Tue Feb 11 19:04:06 2014 +0100

    xen-blkback: init persistent_purge_work work_struct

    Initialize persistent_purge_work work_struct on xen_blkif_alloc (and
    remove the previous initialization done in purge_persistent_gnt). This
    prevents flush_work from complaining even if purge_persistent_gnt has
    not been used.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index e612627..10cd50b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -299,7 +299,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 	BUG_ON(num != 0);
 }
 
-static void unmap_purged_grants(struct work_struct *work)
+void xen_blkbk_unmap_purged_grants(struct work_struct *work)
 {
 	struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST];
 	struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST];
@@ -420,7 +420,6 @@ finished:
 	blkif->vbd.overflow_max_grants = 0;
 
 	/* We can defer this work */
-	INIT_WORK(&blkif->persistent_purge_work, unmap_purged_grants);
 	schedule_work(&blkif->persistent_purge_work);
 	pr_debug(DRV_PFX "Purged %u/%u\n", (total - num_clean), total);
 	return;
diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
index 9eb34e2..be05277 100644
--- a/drivers/block/xen-blkback/common.h
+++ b/drivers/block/xen-blkback/common.h
@@ -385,6 +385,7 @@ int xen_blkbk_flush_diskcache(struct xenbus_transaction xbt,
 int xen_blkbk_barrier(struct xenbus_transaction xbt,
 		      struct backend_info *be, int state);
 struct xenbus_device *xen_blkbk_xenbus(struct backend_info *be);
+void xen_blkbk_unmap_purged_grants(struct work_struct *work);
 
 static inline void blkif_get_x86_32_req(struct blkif_request *dst,
 					struct blkif_x86_32_request *src)
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 84973c6..9a547e6 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -129,6 +129,7 @@ static struct xen_blkif *xen_blkif_alloc(domid_t domid)
 	blkif->free_pages_num = 0;
 	atomic_set(&blkif->persistent_gnt_in_use, 0);
 	atomic_set(&blkif->inflight, 0);
+	INIT_WORK(&blkif->persistent_purge_work, xen_blkbk_unmap_purged_grants);
 
 	INIT_LIST_HEAD(&blkif->pending_free);
 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:22:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDHy9-0007a4-2r; Tue, 11 Feb 2014 18:22:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDHy5-0007Zz-Jm
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 18:22:01 +0000
Received: from [85.158.143.35:43502] by server-3.bemta-4.messagelabs.com id
	96/03-11539-84A6AF25; Tue, 11 Feb 2014 18:22:00 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392142919!4910110!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2614 invoked from network); 11 Feb 2014 18:22:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 18:22:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101692246"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 18:21:58 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 13:21:58 -0500
Message-ID: <52FA6A44.6050003@citrix.com>
Date: Tue, 11 Feb 2014 18:21:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>	<20140210195402.GA3924@kernel.dk>	<771950784.20140211165215@eikelenboom.it>	<20140211155650.GA23026@phenom.dumpdata.com>	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
	<52FA68AE.6070608@citrix.com>
In-Reply-To: <52FA68AE.6070608@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	Sander Eikelenboom <linux@eikelenboom.it>, msw@amazon.com,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 18:15, Roger Pau Monné wrote:
> On 11/02/14 18:52, David Vrabel wrote:
>>
> That would mean that unmap_purged_grants would no longer be static and
> I should also add a prototype for it in blkback/common.h, which is kind
> of ugly IMHO.

But less ugly than initializing work with a NULL function, IMO.

> commit 980e72e45454b64ccb7f23b6794a769384e51038
> Author: Roger Pau Monne <roger.pau@citrix.com>
> Date:   Tue Feb 11 19:04:06 2014 +0100
>
>     xen-blkback: init persistent_purge_work work_struct
>
>     Initialize persistent_purge_work work_struct on xen_blkif_alloc (and
>     remove the previous initialization done in purge_persistent_gnt). This
>     prevents flush_work from complaining even if purge_persistent_gnt has
>     not been used.
>
>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

Thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:35:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIB6-0008BS-Vl; Tue, 11 Feb 2014 18:35:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <uazam@i2cinc.com>) id 1WDIB6-0008BN-2P
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 18:35:28 +0000
Received: from [85.158.143.35:6057] by server-3.bemta-4.messagelabs.com id
	45/CE-11539-F6D6AF25; Tue, 11 Feb 2014 18:35:27 +0000
X-Env-Sender: uazam@i2cinc.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392143726!4891760!1
X-Originating-IP: [199.96.216.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13491 invoked from network); 11 Feb 2014 18:35:26 -0000
Received: from mail.i2cinc.com (HELO zm1.i2cinc.com) (199.96.216.55)
	by server-8.tower-21.messagelabs.com with SMTP;
	11 Feb 2014 18:35:26 -0000
Received: from localhost (localhost [127.0.0.1])
	by zm1.i2cinc.com (Postfix) with ESMTP id 39B321F2065C
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 10:35:26 -0800 (PST)
X-Virus-Scanned: amavisd-new at i2cinc.com
Received: from zm1.i2cinc.com ([127.0.0.1])
	by localhost (zm1.i2cinc.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 6lTxImsTTQR6 for <xen-devel@lists.xen.org>;
	Tue, 11 Feb 2014 10:35:26 -0800 (PST)
Received: from [10.11.17.22] (unknown [119.63.130.34])
	by zm1.i2cinc.com (Postfix) with ESMTPSA id 69FA01F2065B
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 10:35:25 -0800 (PST)
Message-ID: <52FA6D33.3080905@i2cinc.com>
Date: Tue, 11 Feb 2014 23:34:27 +0500
From: Umair Azam <uazam@i2cinc.com>
User-Agent: Mozilla/5.0 (Windows NT 6.0;
	rv:17.0) Gecko/20130509 Thunderbird/17.0.6
MIME-Version: 1.0
To: xen-devel@lists.xen.org
References: <52FA6C2C.4050207@i2cinc.com>
In-Reply-To: <52FA6C2C.4050207@i2cinc.com>
X-Forwarded-Message-Id: <52FA6C2C.4050207@i2cinc.com>
Subject: [Xen-devel] Xen hypervisor gets stuck and reboots automatically
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I am testing XenServer 6.2 with CloudStack, but I am getting a
strange error that I don't know how to diagnose. When CloudStack
requests the hypervisor host to launch a VM, the host becomes
unresponsive for a while and then reboots automatically. Can anybody
advise how to resolve this issue?

-- 
Umair Azam




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIDi-0008IE-NW; Tue, 11 Feb 2014 18:38:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDIDV-0008HP-B5
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 18:37:57 +0000
Received: from [85.158.139.211:25571] by server-6.bemta-5.messagelabs.com id
	21/D3-14342-40E6AF25; Tue, 11 Feb 2014 18:37:56 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392143874!3208518!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9997 invoked from network); 11 Feb 2014 18:37:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 18:37:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101697232"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 18:37:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 13:37:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDIDQ-0007XB-Vn;
	Tue, 11 Feb 2014 18:37:52 +0000
Message-ID: <52FA6E00.8000602@citrix.com>
Date: Tue, 11 Feb 2014 18:37:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Umair Azam <uazam@i2cinc.com>
References: <52FA6C2C.4050207@i2cinc.com> <52FA6D33.3080905@i2cinc.com>
In-Reply-To: <52FA6D33.3080905@i2cinc.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
X-Mailman-Approved-At: Tue, 11 Feb 2014 18:38:09 +0000
Cc: "xs-devel@lists.xenserver.org" <xs-devel@lists.xenserver.org>
Subject: Re: [Xen-devel] Xen hypervisor gets stuck and reboots automatically
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 11/02/14 18:34, Umair Azam wrote:
> Hi,
>
> I am testing XenServer 6.2 with CloudStack, but I am getting a
> strange error that I don't know how to diagnose. When CloudStack
> requests the hypervisor host to launch a VM, the host becomes
> unresponsive for a while and then reboots automatically. Can anybody
> advise how to resolve this issue?
>

Moving thread from xen-devel to xs-devel.

Are there any crash logs in /var/crash/$DATE ?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:55:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIUU-0000uY-64; Tue, 11 Feb 2014 18:55:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WDIUS-0000uT-Rh
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 18:55:29 +0000
Received: from [85.158.139.211:15396] by server-10.bemta-5.messagelabs.com id
	4C/1B-08578-0227AF25; Tue, 11 Feb 2014 18:55:28 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392144925!3138786!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9313 invoked from network); 11 Feb 2014 18:55:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 18:55:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99893728"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 18:55:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 13:55:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WDIUM-0007lU-Sm;
	Tue, 11 Feb 2014 18:55:22 +0000
Date: Tue, 11 Feb 2014 18:55:06 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1402111845430.4373@kaball.uk.xensource.com>
References: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v8] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Feb 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> for blkback and the future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> - it cuts out common parts from m2p_*_override functions to
>   *_foreign_p2m_mapping functions
> 
> It also removes a stray space from page.h and changes ret to 0 in the
> XENFEAT_auto_translated_physmap case, as that is the only possible return
> value there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain the old behaviour where it is needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> v7:
> - the previous version broke the build on ARM, as there is no need for those
>   p2m changes. I've put them into arch-specific functions, which are stubs on ARM
> 
> v8:
> - give credit to Anthony Liguori who submitted a very similar patch originally:
> http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
> - create ARM stub for get_phys_to_machine
> - move definition of mfn in __gnttab_unmap_refs to the right place
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Original-by: Anthony Liguori <aliguori@amazon.com>
> ---
>  arch/arm/include/asm/xen/page.h     |   20 +++++++++-
>  arch/x86/include/asm/xen/page.h     |    7 +++-
>  arch/x86/xen/p2m.c                  |   49 ++++++++++++++++-------
>  drivers/block/xen-blkback/blkback.c |   15 +++----
>  drivers/xen/gntdev.c                |   13 +++---
>  drivers/xen/grant-table.c           |   74 +++++++++++++++++++++++++++++------
>  include/xen/grant_table.h           |    8 +++-
>  7 files changed, 140 insertions(+), 46 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> index e0965ab..d26c3d7 100644
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@ -97,13 +97,31 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
>  	return NULL;
>  }
>  
> +static inline unsigned long get_phys_to_machine(unsigned long pfn)
> +{
> +	return 0;
> +}

We can probably just rename phys_to_machine to get_phys_to_machine.
Be careful about the other callers in page.h.


> +static inline int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
> +{
> +	return 0;
> +}
> +
>  static inline int m2p_add_override(unsigned long mfn, struct page *page,
>  		struct gnttab_map_grant_ref *kmap_op)
>  {
>  	return 0;
>  }
>  
> -static inline int m2p_remove_override(struct page *page, bool clear_pte)
> +static inline int restore_foreign_p2m_mapping(struct page *page,
> +					      unsigned long mfn)
> +{
> +	return 0;
> +}
> +
> +static inline int m2p_remove_override(struct page *page,
> +				      struct gnttab_map_grant_ref *kmap_op,
> +				      unsigned long mfn)
>  {
>  	return 0;
>  }

I like set_foreign_p2m_mapping and restore_foreign_p2m_mapping but we
need to use them coherently.
I think that on ARM it makes sense to rename the current implementation of
set_phys_to_machine to set_foreign_p2m_mapping.
It also makes sense to change the call to set_phys_to_machine with
INVALID_P2M_ENTRY as a parameter into a call to restore_foreign_p2m_mapping.

Then you can implement set_phys_to_machine as

BUG_ON(pfn != mfn);

and get_phys_to_machine as

return pfn;


> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 3e276eb..0340954 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,13 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> +extern int set_foreign_p2m_mapping(struct page *page, unsigned long mfn);
>  extern int m2p_add_override(unsigned long mfn, struct page *page,
>  			    struct gnttab_map_grant_ref *kmap_op);
> +extern int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +124,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 696c694..3e7cfa9 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -881,6 +881,22 @@ static unsigned long mfn_hash(unsigned long mfn)
>  	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
>  }
>  
> +int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +
> +	WARN_ON(PagePrivate(page));
> +	SetPagePrivate(page);
> +	set_page_private(page, mfn);
> +
> +	page->index = pfn_to_mfn(pfn);
> +	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
> +
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
>  		struct gnttab_map_grant_ref *kmap_op)
> @@ -899,13 +915,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -943,20 +952,33 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
> +
> +int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +
> +	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> +		return -EINVAL;
> +
> +	set_page_private(page, INVALID_P2M_ENTRY);
> +	WARN_ON(!PagePrivate(page));
> +	ClearPagePrivate(page);
> +	set_phys_to_machine(pfn, page->index);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(restore_foreign_p2m_mapping);
> +
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
>  	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
>  	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
>  
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> @@ -970,10 +992,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 4b97b86..da18046 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 073b4a1..34a2704 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index b84e3ab..c719bc8 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -928,15 +928,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -955,10 +957,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);

I think that this should become a call to set_foreign_p2m_mapping.


>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -975,8 +979,14 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +
> +		ret = set_foreign_p2m_mapping(pages[i], mfn);
> +		if (ret)
> +			goto out;
> +
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
>  		if (ret)
>  			goto out;
>  	}
> @@ -987,15 +997,31 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -1006,17 +1032,27 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);

This should become a call to restore_foreign_p2m_mapping.


>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
> +		ret = restore_foreign_p2m_mapping(pages[i], mfn);
> +		if (ret)
> +			goto out;
> +
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -1027,8 +1063,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index a5af2a2..7ad033d 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 18:55:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 18:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIUU-0000uY-64; Tue, 11 Feb 2014 18:55:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WDIUS-0000uT-Rh
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 18:55:29 +0000
Received: from [85.158.139.211:15396] by server-10.bemta-5.messagelabs.com id
	4C/1B-08578-0227AF25; Tue, 11 Feb 2014 18:55:28 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392144925!3138786!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9313 invoked from network); 11 Feb 2014 18:55:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 18:55:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99893728"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 18:55:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 13:55:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WDIUM-0007lU-Sm;
	Tue, 11 Feb 2014 18:55:22 +0000
Date: Tue, 11 Feb 2014 18:55:06 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1402111845430.4373@kaball.uk.xensource.com>
References: <1391691376-21471-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v8] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 6 Feb 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it,
> for blkback and future netback patches it just cause a lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - based on m2p_override either they follow the original behaviour, or just set
>   the private flag and call set_phys_to_machine
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> - it cuts out common parts from m2p_*_override functions to
>   *_foreign_p2m_mapping functions
> 
> It also removes a stray space from page.h and change ret to 0 if
> XENFEAT_auto_translated_physmap, as that is the only possible return value
> there.
> 
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> v7:
> - the previous version broke build on ARM, as there is no need for those p2m
>   changes. I've put them into arch specific functions, which are stubs on arm
> 
> v8:
> - give credit to Anthony Liguori who submitted a very similar patch originally:
> http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
> - create ARM stub for get_phys_to_machine
> - move definition of mfn in __gnttab_unmap_refs to the right place
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Original-by: Anthony Liguori <aliguori@amazon.com>
> ---
>  arch/arm/include/asm/xen/page.h     |   20 +++++++++-
>  arch/x86/include/asm/xen/page.h     |    7 +++-
>  arch/x86/xen/p2m.c                  |   49 ++++++++++++++++-------
>  drivers/block/xen-blkback/blkback.c |   15 +++----
>  drivers/xen/gntdev.c                |   13 +++---
>  drivers/xen/grant-table.c           |   74 +++++++++++++++++++++++++++++------
>  include/xen/grant_table.h           |    8 +++-
>  7 files changed, 140 insertions(+), 46 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> index e0965ab..d26c3d7 100644
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@ -97,13 +97,31 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
>  	return NULL;
>  }
>  
> +static inline unsigned long get_phys_to_machine(unsigned long pfn)
> +{
> +	return 0;
> +}

We can probably just rename phys_to_machine to get_phys_to_machine.
Be careful about the other callers in page.h.


> +static inline int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
> +{
> +	return 0;
> +}
> +
>  static inline int m2p_add_override(unsigned long mfn, struct page *page,
>  		struct gnttab_map_grant_ref *kmap_op)
>  {
>  	return 0;
>  }
>  
> -static inline int m2p_remove_override(struct page *page, bool clear_pte)
> +static inline int restore_foreign_p2m_mapping(struct page *page,
> +					      unsigned long mfn)
> +{
> +	return 0;
> +}
> +
> +static inline int m2p_remove_override(struct page *page,
> +				      struct gnttab_map_grant_ref *kmap_op,
> +				      unsigned long mfn)
>  {
>  	return 0;
>  }

I like set_foreign_p2m_mapping and restore_foreign_p2m_mapping but we
need to use them coherently.
I think that on ARM it makes sense to rename the current implementation of
set_phys_to_machine to set_foreign_p2m_mapping.
It also makes sense to change the call to set_phys_to_machine with
INVALID_P2M_ENTRY as a parameter into a call to restore_foreign_p2m_mapping.

Then you can implement set_phys_to_machine as

BUG_ON(pfn != mfn);

and get_phys_to_machine as

return pfn;


> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 3e276eb..0340954 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,13 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> +extern int set_foreign_p2m_mapping(struct page *page, unsigned long mfn);
>  extern int m2p_add_override(unsigned long mfn, struct page *page,
>  			    struct gnttab_map_grant_ref *kmap_op);
> +extern int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +124,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 696c694..3e7cfa9 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -881,6 +881,22 @@ static unsigned long mfn_hash(unsigned long mfn)
>  	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
>  }
>  
> +int set_foreign_p2m_mapping(struct page *page, unsigned long mfn)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +
> +	WARN_ON(PagePrivate(page));
> +	SetPagePrivate(page);
> +	set_page_private(page, mfn);
> +
> +	page->index = pfn_to_mfn(pfn);
> +	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> +		return -ENOMEM;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
> +
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
>  		struct gnttab_map_grant_ref *kmap_op)
> @@ -899,13 +915,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -943,20 +952,33 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
> +
> +int restore_foreign_p2m_mapping(struct page *page, unsigned long mfn)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +
> +	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> +		return -EINVAL;
> +
> +	set_page_private(page, INVALID_P2M_ENTRY);
> +	WARN_ON(!PagePrivate(page));
> +	ClearPagePrivate(page);
> +	set_phys_to_machine(pfn, page->index);
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(restore_foreign_p2m_mapping);
> +
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
>  	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
>  	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
>  
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> @@ -970,10 +992,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 4b97b86..da18046 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 073b4a1..34a2704 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index b84e3ab..c719bc8 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -928,15 +928,17 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +		    struct page **pages, unsigned int count,
> +		    bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  	pte_t *pte;
>  	unsigned long mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -955,10 +957,12 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
>  					map_ops[i].dev_bus_addr >> PAGE_SHIFT);

I think that this should become a call to set_foreign_p2m_mapping.


>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
> @@ -975,8 +979,14 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  		} else {
>  			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
>  		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +
> +		ret = set_foreign_p2m_mapping(pages[i], mfn);
> +		if (ret)
> +			goto out;
> +
> +		if (m2p_override)
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
>  		if (ret)
>  			goto out;
>  	}
> @@ -987,15 +997,31 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +		      struct page **pages, unsigned int count,
> +		      bool m2p_override)
>  {
>  	int i, ret;
>  	bool lazy = false;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
> @@ -1006,17 +1032,27 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
>  					INVALID_P2M_ENTRY);

This should become a call to restore_foreign_p2m_mapping.


>  		}
> -		return ret;
> +		return 0;
>  	}
>  
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
>  		arch_enter_lazy_mmu_mode();
>  		lazy = true;
>  	}
>  
>  	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> +		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
> +		ret = restore_foreign_p2m_mapping(pages[i], mfn);
> +		if (ret)
> +			goto out;
> +
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						   &kmap_ops[i] : NULL,
> +						  mfn);
>  		if (ret)
>  			goto out;
>  	}
> @@ -1027,8 +1063,22 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
>  
>  	return ret;
>  }
> +
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
> +}
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index a5af2a2..7ad033d 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:06:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIel-0001cY-2h; Tue, 11 Feb 2014 19:06:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WDIej-0001cT-Fy
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:06:05 +0000
Received: from [193.109.254.147:6195] by server-9.bemta-14.messagelabs.com id
	6F/6E-24895-C947AF25; Tue, 11 Feb 2014 19:06:04 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392145563!3622374!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9050 invoked from network); 11 Feb 2014 19:06:03 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-8.tower-27.messagelabs.com with SMTP;
	11 Feb 2014 19:06:03 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 4D657988;
	Tue, 11 Feb 2014 19:06:02 +0000 (UTC)
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Date: Tue, 11 Feb 2014 11:05:54 -0800
Message-Id: <20140211184826.865964390@linuxfoundation.org>
X-Mailer: git-send-email 1.8.5.1.163.gd7aced9
In-Reply-To: <20140211184823.492407127@linuxfoundation.org>
References: <20140211184823.492407127@linuxfoundation.org>
User-Agent: quilt/0.61-1
MIME-Version: 1.0
Cc: Prarit Bhargava <prarit@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Richard Cochran <richardcochran@gmail.com>,
	stable@vger.kernel.org, xen-devel@lists.xen.org,
	John Stultz <john.stultz@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Sasha Levin <sasha.levin@oracle.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [PATCH 3.13 113/120] timekeeping: Fix potential lost pv
	notification of time change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

3.13-stable review patch.  If anyone has any objections, please let me know.

------------------

From: John Stultz <john.stultz@linaro.org>

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/time/timekeeping.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_ns
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_ns
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:07:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIfz-0001hG-JZ; Tue, 11 Feb 2014 19:07:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WDIfx-0001h4-Q7
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:07:22 +0000
Received: from [85.158.139.211:48194] by server-2.bemta-5.messagelabs.com id
	BA/1A-23037-9E47AF25; Tue, 11 Feb 2014 19:07:21 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392145639!3213196!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32621 invoked from network); 11 Feb 2014 19:07:19 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-9.tower-206.messagelabs.com with SMTP;
	11 Feb 2014 19:07:19 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id C83C0982;
	Tue, 11 Feb 2014 19:07:18 +0000 (UTC)
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Date: Tue, 11 Feb 2014 11:06:06 -0800
Message-Id: <20140211184751.044001475@linuxfoundation.org>
X-Mailer: git-send-email 1.8.5.1.163.gd7aced9
In-Reply-To: <20140211184748.191276235@linuxfoundation.org>
References: <20140211184748.191276235@linuxfoundation.org>
User-Agent: quilt/0.61-1
MIME-Version: 1.0
Cc: Prarit Bhargava <prarit@redhat.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Richard Cochran <richardcochran@gmail.com>,
	stable@vger.kernel.org, xen-devel@lists.xen.org,
	John Stultz <john.stultz@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Sasha Levin <sasha.levin@oracle.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [PATCH 3.12 100/107] timekeeping: Fix potential lost pv
	notification of time change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

3.12-stable review patch.  If anyone has any objections, please let me know.

------------------

From: John Stultz <john.stultz@linaro.org>

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only used the returned value
in one location, and not in the logarithmic accumulation.

This means that if a leap second fired during the logarithmic
accumulation (which is the most likely place for it to happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation() to pass down
that action flag so proper notification will occur.

This patch also renames the variable from action to clock_set,
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/time/timekeeping.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_ns
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_ns
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrd-0002fb-6n; Tue, 11 Feb 2014 19:19:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIrb-0002ee-31
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:23 +0000
Received: from [193.109.254.147:51800] by server-7.bemta-14.messagelabs.com id
	5C/10-23424-AB77AF25; Tue, 11 Feb 2014 19:19:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392146360!3634216!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28763 invoked from network); 11 Feb 2014 19:19:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101708720"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-Uo;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:12 +0000
Message-ID: <1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
	task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Cooper <andrew.cooper3@citrix.com>

In evtchn_do_upcall(), if the interrupted task was in a preemptible
hypercall, add a conditional schedule point.

This allows tasks in long-running preemptible hypercalls to be
descheduled, allowing other tasks to run, and prevents long-running
hypercalls issued via the privcmd driver from triggering soft lockups.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events/events_base.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..0b5df37 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 	irq_exit();
 	set_irq_regs(old_regs);
+
+#ifndef CONFIG_PREEMPT
+	if ( __this_cpu_read(xed_nesting_count) == 0
+	     && is_preemptible_hypercall(regs) )
+		_cond_resched();
+#endif
 }
 
 void xen_hvm_evtchn_do_upcall(void)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrb-0002es-PU; Tue, 11 Feb 2014 19:19:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIra-0002eX-T7
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:23 +0000
Received: from [193.109.254.147:63066] by server-16.bemta-14.messagelabs.com
	id 0F/95-21945-AB77AF25; Tue, 11 Feb 2014 19:19:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392146360!3631132!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8055 invoked from network); 11 Feb 2014 19:19:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99901524"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-UF;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:11 +0000
Message-ID: <1392146352-16381-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] arm/xen: add stub is_preemptible_hypercall()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

is_preemptible_hypercall() may be called from an upcall to determine
if the interrupted task was in a preemptible hypercall.

Provide a stub implementation that always returns false.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/arm/include/asm/xen/hypercall.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
index 7704e28..4b988fe 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -68,4 +68,12 @@ HYPERVISOR_multicall(void *call_list, int nr_calls)
 {
 	BUG();
 }
+
+#ifndef CONFIG_PREEMPT
+static inline bool is_preemptible_hypercall(struct pt_regs *regs)
+{
+	return false;
+}
+#endif
+
 #endif /* _ASM_ARM_XEN_HYPERCALL_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrd-0002fw-Uk; Tue, 11 Feb 2014 19:19:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIrb-0002ef-Ei
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:23 +0000
Received: from [193.109.254.147:51816] by server-13.bemta-14.messagelabs.com
	id 46/39-01226-AB77AF25; Tue, 11 Feb 2014 19:19:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392146360!3631132!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8066 invoked from network); 11 Feb 2014 19:19:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99901525"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-Rh;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:09 +0000
Message-ID: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv1 0/3]: xen: voluntary preemption for privcmd
	hypercalls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series adds a voluntary preemption point into hypercalls issued
by privcmd.  Without this, long-running hypercalls will prevent the
task from being scheduled (potentially for several seconds), which may
trigger the kernel's soft lockup detector.

I've added a stub is_preemptible_hypercall() for ARM but would
appreciate a working implementation.  Checking whether the PC is
within the privcmd_hypercall function would probably do.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrf-0002gE-GQ; Tue, 11 Feb 2014 19:19:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIrd-0002fa-5t
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:25 +0000
Received: from [193.109.254.147:51880] by server-12.bemta-14.messagelabs.com
	id 69/C1-17220-CB77AF25; Tue, 11 Feb 2014 19:19:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392146360!3631132!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8083 invoked from network); 11 Feb 2014 19:19:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99901526"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-TZ;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:10 +0000
Message-ID: <1392146352-16381-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] x86/xen: allow for privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Cooper <andrew.cooper3@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many tens of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with only voluntary preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long-running
tasks may also trigger the kernel's soft lockup detection.

There needs to be a voluntary preemption point (cond_resched()) at the
end of an upcall, but only if the interrupted task had issued a
hypercall via the privcmd driver.  Add is_preemptible_hypercall()
which may be used in an upcall to determine this.

Implement is_preemptible_hypercall() by adding a second hypercall page
(preemptible_hypercall_page, copied from hypercall_page).  Calls made
via the new page may be preempted.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/hypercall.h |   14 ++++++++++++++
 arch/x86/xen/enlighten.c             |    7 +++++++
 arch/x86/xen/xen-head.S              |   18 +++++++++++++++++-
 3 files changed, 38 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index e709884..4658a75 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -83,6 +83,16 @@
  */
 
 extern struct { char _entry[32]; } hypercall_page[];
+#ifndef CONFIG_PREEMPT
+extern struct { char _entry[32]; } preemptible_hypercall_page[];
+
+static inline bool is_preemptible_hypercall(struct pt_regs *regs)
+{
+	return !user_mode_vm(regs) &&
+		regs->ip >= (unsigned long)preemptible_hypercall_page &&
+		regs->ip < (unsigned long)preemptible_hypercall_page + PAGE_SIZE;
+}
+#endif
 
 #define __HYPERCALL		"call hypercall_page+%c[offset]"
 #define __HYPERCALL_ENTRY(x)						\
@@ -215,7 +225,11 @@ privcmd_call(unsigned call,
 
 	asm volatile("call *%[call]"
 		     : __HYPERCALL_5PARAM
+#ifndef CONFIG_PREEMPT
+		     : [call] "a" (&preemptible_hypercall_page[call])
+#else
 		     : [call] "a" (&hypercall_page[call])
+#endif
 		     : __HYPERCALL_CLOBBER5);
 
 	return (long)__res;
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..7320fa8 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -84,6 +84,9 @@
 #include "multicalls.h"
 
 EXPORT_SYMBOL_GPL(hypercall_page);
+#ifndef CONFIG_PREEMPT
+EXPORT_SYMBOL_GPL(preemptible_hypercall_page);
+#endif
 
 /*
  * Pointer to the xen_vcpu_info structure or
@@ -1517,6 +1520,10 @@ asmlinkage void __init xen_start_kernel(void)
 	xen_pvh_early_guest_init();
 	xen_setup_machphys_mapping();
 
+#ifndef CONFIG_PREEMPT
+	copy_page(preemptible_hypercall_page, hypercall_page);
+#endif
+
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 485b695..0407d48 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -50,9 +50,18 @@ ENTRY(startup_xen)
 .pushsection .text
 	.balign PAGE_SIZE
 ENTRY(hypercall_page)
+
+#ifdef CONFIG_PREEMPT
+#  define PREEMPT_HYPERCALL_ENTRY(x)
+#else
+#  define PREEMPT_HYPERCALL_ENTRY(x) \
+	.global xen_hypercall_##x ## _p ASM_NL \
+	.set preemptible_xen_hypercall_##x, xen_hypercall_##x + PAGE_SIZE ASM_NL
+#endif
 #define NEXT_HYPERCALL(x) \
 	ENTRY(xen_hypercall_##x) \
-	.skip 32
+	.skip 32 ASM_NL \
+	PREEMPT_HYPERCALL_ENTRY(x)
 
 NEXT_HYPERCALL(set_trap_table)
 NEXT_HYPERCALL(mmu_update)
@@ -103,6 +112,13 @@ NEXT_HYPERCALL(arch_4)
 NEXT_HYPERCALL(arch_5)
 NEXT_HYPERCALL(arch_6)
 	.balign PAGE_SIZE
+
+#ifndef CONFIG_PREEMPT
+ENTRY(preemptible_hypercall_page)
+	.skip PAGE_SIZE
+#endif /* CONFIG_PREEMPT */
+
+#undef NEXT_HYPERCALL
 .popsection
 
 	ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS,       .asciz "linux")
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrb-0002es-PU; Tue, 11 Feb 2014 19:19:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIra-0002eX-T7
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:23 +0000
Received: from [193.109.254.147:63066] by server-16.bemta-14.messagelabs.com
	id 0F/95-21945-AB77AF25; Tue, 11 Feb 2014 19:19:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392146360!3631132!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8055 invoked from network); 11 Feb 2014 19:19:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99901524"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-UF;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:11 +0000
Message-ID: <1392146352-16381-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] arm/xen: add stub is_preemptible_hypercall()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

is_preemptible_hypercall() may be called from an upcall to determine
if the interrupted task was in a preemptible hypercall.

Provide a stub implementation that always returns false.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/arm/include/asm/xen/hypercall.h |    8 ++++++++
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/arm/include/asm/xen/hypercall.h b/arch/arm/include/asm/xen/hypercall.h
index 7704e28..4b988fe 100644
--- a/arch/arm/include/asm/xen/hypercall.h
+++ b/arch/arm/include/asm/xen/hypercall.h
@@ -68,4 +68,12 @@ HYPERVISOR_multicall(void *call_list, int nr_calls)
 {
 	BUG();
 }
+
+#ifndef CONFIG_PREEMPT
+static inline bool is_preemptible_hypercall(struct pt_regs *regs)
+{
+	return false;
+}
+#endif
+
 #endif /* _ASM_ARM_XEN_HYPERCALL_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrd-0002fb-6n; Tue, 11 Feb 2014 19:19:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIrb-0002ee-31
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:23 +0000
Received: from [193.109.254.147:51800] by server-7.bemta-14.messagelabs.com id
	5C/10-23424-AB77AF25; Tue, 11 Feb 2014 19:19:22 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392146360!3634216!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28763 invoked from network); 11 Feb 2014 19:19:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="101708720"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-Uo;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:12 +0000
Message-ID: <1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
	task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Cooper <andrew.cooper3@citrix.com>

In evtchn_do_upcall(), if the interrupted task was in a preemptible
hypercall add a conditional schedule point.

This allows tasks in long-running preemptible hypercalls to be
descheduled, allowing other tasks to run, and prevents long-running
hypercalls issued via the privcmd driver from triggering soft lockups.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events/events_base.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..0b5df37 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
 
 	irq_exit();
 	set_irq_regs(old_regs);
+
+#ifndef CONFIG_PREEMPT
+	if (__this_cpu_read(xed_nesting_count) == 0 &&
+	    is_preemptible_hypercall(regs))
+		_cond_resched();
+#endif
 }
 
 void xen_hvm_evtchn_do_upcall(void)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 19:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 19:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDIrf-0002gE-GQ; Tue, 11 Feb 2014 19:19:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDIrd-0002fa-5t
	for xen-devel@lists.xen.org; Tue, 11 Feb 2014 19:19:25 +0000
Received: from [193.109.254.147:51880] by server-12.bemta-14.messagelabs.com
	id 69/C1-17220-CB77AF25; Tue, 11 Feb 2014 19:19:24 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392146360!3631132!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8083 invoked from network); 11 Feb 2014 19:19:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 19:19:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,826,1384300800"; d="scan'208";a="99901526"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 11 Feb 2014 19:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 11 Feb 2014 14:19:19 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDIrW-00084Y-TZ;
	Tue, 11 Feb 2014 19:19:18 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 11 Feb 2014 19:19:10 +0000
Message-ID: <1392146352-16381-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] x86/xen: allow for privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Cooper <andrew.cooper3@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with only voluntary preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long running
tasks may also trigger the kernel's soft lockup detection.

There needs to be a voluntary preemption point (cond_resched()) at the
end of an upcall, but only if the interrupted task had issued a
hypercall via the privcmd driver.  Add is_preemptible_hypercall()
which may be used in an upcall to determine this.

Implement is_preemptible_hypercall() by adding a second hypercall page
(preemptible_hypercall_page, copied from hypercall_page).  Calls made
via the new page may be preempted.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/include/asm/xen/hypercall.h |   14 ++++++++++++++
 arch/x86/xen/enlighten.c             |    7 +++++++
 arch/x86/xen/xen-head.S              |   18 +++++++++++++++++-
 3 files changed, 38 insertions(+), 1 deletions(-)

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index e709884..4658a75 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -83,6 +83,16 @@
  */
 
 extern struct { char _entry[32]; } hypercall_page[];
+#ifndef CONFIG_PREEMPT
+extern struct { char _entry[32]; } preemptible_hypercall_page[];
+
+static inline bool is_preemptible_hypercall(struct pt_regs *regs)
+{
+	return !user_mode_vm(regs) &&
+		regs->ip >= (unsigned long)preemptible_hypercall_page &&
+		regs->ip < (unsigned long)preemptible_hypercall_page + PAGE_SIZE;
+}
+#endif
 
 #define __HYPERCALL		"call hypercall_page+%c[offset]"
 #define __HYPERCALL_ENTRY(x)						\
@@ -215,7 +225,11 @@ privcmd_call(unsigned call,
 
 	asm volatile("call *%[call]"
 		     : __HYPERCALL_5PARAM
+#ifndef CONFIG_PREEMPT
+		     : [call] "a" (&preemptible_hypercall_page[call])
+#else
 		     : [call] "a" (&hypercall_page[call])
+#endif
 		     : __HYPERCALL_CLOBBER5);
 
 	return (long)__res;
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index a4d7b64..7320fa8 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -84,6 +84,9 @@
 #include "multicalls.h"
 
 EXPORT_SYMBOL_GPL(hypercall_page);
+#ifndef CONFIG_PREEMPT
+EXPORT_SYMBOL_GPL(preemptible_hypercall_page);
+#endif
 
 /*
  * Pointer to the xen_vcpu_info structure or
@@ -1517,6 +1520,10 @@ asmlinkage void __init xen_start_kernel(void)
 	xen_pvh_early_guest_init();
 	xen_setup_machphys_mapping();
 
+#ifndef CONFIG_PREEMPT
+	copy_page(preemptible_hypercall_page, hypercall_page);
+#endif
+
 	/* Install Xen paravirt ops */
 	pv_info = xen_info;
 	pv_init_ops = xen_init_ops;
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 485b695..0407d48 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -50,9 +50,18 @@ ENTRY(startup_xen)
 .pushsection .text
 	.balign PAGE_SIZE
 ENTRY(hypercall_page)
+
+#ifdef CONFIG_PREEMPT
+#  define PREEMPT_HYPERCALL_ENTRY(x)
+#else
+#  define PREEMPT_HYPERCALL_ENTRY(x) \
+	.global xen_hypercall_##x ## _p ASM_NL \
+	.set preemptible_xen_hypercall_##x, xen_hypercall_##x + PAGE_SIZE ASM_NL
+#endif
 #define NEXT_HYPERCALL(x) \
 	ENTRY(xen_hypercall_##x) \
-	.skip 32
+	.skip 32 ASM_NL \
+	PREEMPT_HYPERCALL_ENTRY(x)
 
 NEXT_HYPERCALL(set_trap_table)
 NEXT_HYPERCALL(mmu_update)
@@ -103,6 +112,13 @@ NEXT_HYPERCALL(arch_4)
 NEXT_HYPERCALL(arch_5)
 NEXT_HYPERCALL(arch_6)
 	.balign PAGE_SIZE
+
+#ifndef CONFIG_PREEMPT
+ENTRY(preemptible_hypercall_page)
+	.skip PAGE_SIZE
+#endif /* CONFIG_PREEMPT */
+
+#undef NEXT_HYPERCALL
 .popsection
 
 	ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS,       .asciz "linux")
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZo-00054F-Oz; Tue, 11 Feb 2014 20:05:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZl-00053e-QL
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:02 +0000
Received: from [85.158.137.68:43481] by server-9.bemta-3.messagelabs.com id
	1B/BC-10184-D628AF25; Tue, 11 Feb 2014 20:05:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392149100!1173749!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13922 invoked from network); 11 Feb 2014 20:05:00 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:05:00 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so3900756eaj.25
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:05:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=zoaVEeA3vPfomEn2GRAhP1umGjt+tLmVHI4GuW/njv0=;
	b=ChP38NgTebbGauK/ZemGTXaJM4l40Vip1dIae2TsSB0NXv/cc+fe02GOmnMne9i2A2
	d/B47z2AdZlpxiRyTqVBizAsujZcvph8+RCNFxZIIl1zYE23zv+sWK0KLZioL1pdpqEB
	ycv12tR2i+Nm2nCFwoMzG+eaGyBrczGsCVVJoU//Pibndxq0BsKNrf/LFfSfVkoekhUd
	FnHyNXaXp1yIubuFY16c0yANCavQMsVnqKkkqitE20K3CYvYhj/3+9UvUCYRGIZCSgdO
	DyxFWXPWrBaVSq+n/4H/RwgUiTYRVBHlMkGyxZ5C1asmEkFx/VuLUR+AQznCEKP1rEjg
	HoeA==
X-Gm-Message-State: ALoCoQmosEigp+Qwit6vhGxAQ8JHYFcRCW16URfLiEotieDLl6EMQf23wQXLI1NAf5urQzCjOzvK
X-Received: by 10.14.93.199 with SMTP id l47mr12905383eef.58.1392149100222;
	Tue, 11 Feb 2014 12:05:00 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:59 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:43 +0000
Message-Id: <1392149085-14366-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 3/5] xen/arm64: Implement
	lookup_processor_type as a dummy function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The ARM64 implementation doesn't yet have processor-specific code.
This function will be used in a later patch.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/arm64/head.S |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index c97c194..e1d3103 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -546,6 +546,13 @@ putn:   ret
 
 #endif /* !CONFIG_EARLY_PRINTK */
 
+/* This provides a C-API version of __lookup_processor_type
+ * TODO: For now, the implementation returns NULL every time
+ */
+GLOBAL(lookup_processor_type)
+        mov  x0, #0
+        ret
+
 /*
  * Local variables:
  * mode: ASM
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZn-00053j-Be; Tue, 11 Feb 2014 20:05:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZk-00053W-FR
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:00 +0000
Received: from [85.158.143.35:14709] by server-2.bemta-4.messagelabs.com id
	AF/56-10891-B628AF25; Tue, 11 Feb 2014 20:04:59 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392149098!4923519!1
X-Originating-IP: [209.85.215.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20330 invoked from network); 11 Feb 2014 20:04:59 -0000
Received: from mail-ea0-f178.google.com (HELO mail-ea0-f178.google.com)
	(209.85.215.178)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:04:59 -0000
Received: by mail-ea0-f178.google.com with SMTP id a15so3915613eae.37
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:04:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=tAJjfm/bz9rd2naidPyHgfBHsPt3EVDWtPKqjyBgXbw=;
	b=lGL14R1PPgNM6mXwpLuApAXn5k9xtIqeSF2P5vbFjF94vQ8BQtOtL0LbhtOUPY9MqR
	9v35GcrBgZepye3pr/H4t+qSnxg4YgB0LEs3wvxe9uN08mOJifhab+HcChsivTgoQMVG
	kQXwsrmTJnk+lTHKtQGrN5ARpxIg/JMfWDzPT6kawaU47Gi/uSVk2dvCpsBEgZetZuKh
	+57b5JJrt9du+Kyvz7kpzK7RsSUtrHW2TZdKHzfk6j+9z1heWnL+PTx8BTozdS3z5IR2
	QWh7IwSLTdYH1o6zI3s3on7eo0gkBP8OJgoHEaejUNYeGLeGc/mXV9ee+YQgMooj5uZb
	uzpA==
X-Gm-Message-State: ALoCoQk+mKg2AP6eAlEOijYOs3+pqIm4qQP5SH5q+EsjMJiwpaT65shzWLP2D/BXzRKdS0jozrE2
X-Received: by 10.14.88.131 with SMTP id a3mr5797754eef.64.1392149098659;
	Tue, 11 Feb 2014 12:04:58 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.57
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:58 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:42 +0000
Message-Id: <1392149085-14366-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 2/5] xen/arm32: Introduce
	lookup_processor_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Searching for a specific proc_info structure is already implemented in
assembly. Implement lookup_processor_type to avoid duplicating code between
C and assembly.

This function searches the proc_info_list for an entry matching the
processor ID. If the search fails, it returns NULL; otherwise it returns a
pointer to the proc_info structure for that processor.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/arm32/head.S |   57 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 77f5518..68fb499 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -198,26 +198,16 @@ skip_bss:
         PRINT("- Setting up control registers -\r\n")
 
         /* Get processor specific proc info into r1 */
-        mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
-        ldr   r1, = __proc_info_start
-        add   r1, r1, r10                   /* r1 := paddr of table (start) */
-        ldr   r2, = __proc_info_end
-        add   r2, r2, r10                   /* r2 := paddr of table (end) */
-1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
-        and   r4, r0, r3                    /* r4 := our cpu id with mask */
-        ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
-        teq   r4, r3
-        beq   2f                            /* Match => exit, or try next proc info */
-        add   r1, r1, #PROCINFO_sizeof
-        cmp   r1, r2
-        blo   1b
+        bl    __lookup_processor_type
+        teq   r1, #0
+        bne   1f
         mov   r4, r0
         PRINT("- Missing processor info: ")
         mov   r0, r4
         bl    putn
         PRINT(" -\r\n")
         b     fail
-2:
+1:
 
         /* Jump to cpu_init */
         ldr   r1, [r1, #PROCINFO_cpu_init]  /* r1 := vaddr(init func) */
@@ -545,6 +535,45 @@ putn:   mov   pc, lr
 
 #endif /* !CONFIG_EARLY_PRINTK */
 
+/* This provides a C-API version of __lookup_processor_type */
+GLOBAL(lookup_processor_type)
+        stmfd sp!, {r4, r10, lr}
+        mov   r10, #0                   /* r10 := offset between virt&phys */
+        bl    __lookup_processor_type
+        mov   r0, r1
+        ldmfd sp!, {r4, r10, pc}
+
+/* Read the processor ID register (CP#15, CR0) and look it up in the linker-built
+ * supported processor list. Note that we can't use the absolute addresses for
+ * the __proc_info lists since we aren't running with the MMU on (and therefore,
+ * we are not in the correct address space). We have to calculate the offset.
+ *
+ * r10: offset between virt&phys
+ *
+ * Returns:
+ * r0: CPUID
+ * r1: proc_info pointer
+ * Clobbers r2-r4
+ */
+__lookup_processor_type:
+        mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
+        ldr   r1, = __proc_info_start
+        add   r1, r1, r10                   /* r1 := paddr of table (start) */
+        ldr   r2, = __proc_info_end
+        add   r2, r2, r10                   /* r2 := paddr of table (end) */
+1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
+        and   r4, r0, r3                    /* r4 := our cpu id with mask */
+        ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
+        teq   r4, r3
+        beq   2f                            /* Match => exit, or try next proc info */
+        add   r1, r1, #PROCINFO_sizeof
+        cmp   r1, r2
+        blo   1b
+        /* We failed to find the proc_info, return NULL */
+        mov   r1, #0
+2:
+        mov   pc, lr
+
 /*
  * Local variables:
  * mode: ASM
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZi-00053G-Je; Tue, 11 Feb 2014 20:04:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZh-00052l-5e
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:04:57 +0000
Received: from [193.109.254.147:63471] by server-15.bemta-14.messagelabs.com
	id 03/56-10839-8628AF25; Tue, 11 Feb 2014 20:04:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392149095!3637339!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30353 invoked from network); 11 Feb 2014 20:04:55 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:04:55 -0000
Received: by mail-ee0-f41.google.com with SMTP id e51so3320798eek.28
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:04:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=9Tw0M7BTBNOPmTRRRwce+1IIQXQLw/eEesukKIbsylY=;
	b=AmmFU5RxSKL4ZGtYKND67T370gA1168lI/0xP5XK7gB47LRjhuKbmpvMOPxx5AZp4G
	xLO7J9nmbrobMcbtxxtR8jslvR4QD6zyPYQt/2ceWnSrZ4p1bDrPe4oxYHXwYabeNvUl
	a0D2FQU4ydQnbF5o7nriDASo5j3aIJ5/rPZbKoYgJwP4S/0omLYCZjzP7SrBoVWoe1qj
	4Q7nvKrmUdf9Qy+tRhZlnoAYyzcLwz1eqGvsIgojpGuB5PLLtmk8HwSe0KVa7ZEZD944
	gmwLzm1awm4eaMd2FGe/A1AhES5OlFzwQZurt6i2eb73SBmF5rlDxZ/BdxK+U1An9/dC
	BxHQ==
X-Gm-Message-State: ALoCoQnQgsKsjRHSjvRCO/etdbGGzGkWqapRHaS8ltNez/IKZJHCrgfdp+BlPifpQHOWPJgCXsjv
X-Received: by 10.14.225.195 with SMTP id z43mr46824493eep.19.1392149095337;
	Tue, 11 Feb 2014 12:04:55 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.53
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:54 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:40 +0000
Message-Id: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 0/5] xen/arm: Remove processor specific
	bits in code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This patch series follows up on a patch I sent a few months ago; see:
https://patches.linaro.org/19617/.

I took a new approach and introduced processor-specific callbacks, which for
now are called during VCPU initialization. In the future, we can extend the
structure to add new callbacks.
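
As a rough C illustration of what the series' lookup_processor_type does (the
arm32 assembly in patch 2 walks the __proc_info table the same way, matching
the masked MIDR against each entry), here is a minimal sketch. The
proc_info_list layout and the table contents below are simplified stand-ins,
not the exact Xen definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for Xen's proc_info_list (illustration only). */
struct proc_info_list {
    unsigned int cpu_val;   /* expected MIDR value after masking */
    unsigned int cpu_mask;  /* which MIDR bits identify the part */
};

static const struct proc_info_list proc_info[] = {
    { 0x410FC0F0, 0xFF0FFFF0 },  /* Cortex-A15 */
    { 0x410FC070, 0xFF0FFFF0 },  /* Cortex-A7  */
};

/* C equivalent of the assembly table walk: mask the CPU id and
 * compare it against each entry; return NULL if nothing matches. */
static const struct proc_info_list *lookup(unsigned int midr)
{
    size_t i;

    for ( i = 0; i < sizeof(proc_info) / sizeof(proc_info[0]); i++ )
        if ( (midr & proc_info[i].cpu_mask) == proc_info[i].cpu_val )
            return &proc_info[i];

    return NULL;
}
```

Masking before comparing is what lets a single table entry cover every
revision of a part: the revision/variant bits of the MIDR are cleared by the
mask before the comparison.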

This patch series also removes xen/include/asm-arm/processor-ca{15,7}.h;
neither header is used in Xen anymore.

Sincerely yours,

Julien Grall (5):
  xen/arm32: head.S: Remove CA15 and CA7 specific includes
  xen/arm32: Introduce lookup_processor_type
  xen/arm64: Implement lookup_processor_type as a dummy function
  xen/arm: Remove processor specific setup in vcpu_initialise
  xen/arm: Remove asm-arm/processor-ca{15,7}.h headers

 xen/arch/arm/Makefile                |    1 +
 xen/arch/arm/arm32/Makefile          |    2 +-
 xen/arch/arm/arm32/head.S            |   59 +++++++++++++++++++++++++---------
 xen/arch/arm/arm32/proc-v7-c.c       |   32 ++++++++++++++++++
 xen/arch/arm/arm32/proc-v7.S         |    3 ++
 xen/arch/arm/arm64/head.S            |    7 ++++
 xen/arch/arm/domain.c                |    8 ++---
 xen/arch/arm/processor.c             |   49 ++++++++++++++++++++++++++++
 xen/arch/arm/setup.c                 |    3 ++
 xen/include/asm-arm/processor-ca15.h |   42 ------------------------
 xen/include/asm-arm/processor-ca7.h  |   20 ------------
 xen/include/asm-arm/procinfo.h       |   17 ++++++++--
 12 files changed, 156 insertions(+), 87 deletions(-)
 create mode 100644 xen/arch/arm/arm32/proc-v7-c.c
 create mode 100644 xen/arch/arm/processor.c
 delete mode 100644 xen/include/asm-arm/processor-ca15.h
 delete mode 100644 xen/include/asm-arm/processor-ca7.h

-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZj-00053S-Vi; Tue, 11 Feb 2014 20:04:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZj-00053K-6L
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:04:59 +0000
Received: from [193.109.254.147:63587] by server-12.bemta-14.messagelabs.com
	id 90/CE-17220-A628AF25; Tue, 11 Feb 2014 20:04:58 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392149097!3625449!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17206 invoked from network); 11 Feb 2014 20:04:57 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:04:57 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so3844318eek.23
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:04:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=yr5AabJ6NVq+LsRTRLE/6b4akgIBDDQprZF7J4aJGIY=;
	b=XXpKXbxoOWC6J7wpkv6Nw0JZntJ9L2aWQb8WMreBn8apHvGcyLpqnpriuWOzCK3dfz
	m3RJZXKTIXEWn9zz0jKrd9pI6Zl7/0PbfnG3Hq2s3rrVqtWfFboh9nJRu/NXi+Erxi9B
	+0fcDCwMd2O/g6Igdp5WycZOEe4TdmiNM18tYuj78Il/hvbKvs+VUSVtqsGF9kO4w4VM
	5B2/cyWjCM5sgUP9F/9UoMQuHq9oK5QDKYrR6sqLOE108Ly6WxDY/D4fUtnUKFm6Y60N
	8S5/uKBlqyZ8zzn6nUu+lBDLigZGqKqTv3FZDHPI+lrWs+8tuzIuysAU2tbh85uz2C27
	Si9g==
X-Gm-Message-State: ALoCoQk4e8Cgc2D5jBxc6KzuE9Gst3h6vZUQDVEi4pudzov9HRx9fOiJ/CeXxGJ0hFWjXrBh8red
X-Received: by 10.14.1.198 with SMTP id 46mr9574372eed.11.1392149097305;
	Tue, 11 Feb 2014 12:04:57 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.55
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:56 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:41 +0000
Message-Id: <1392149085-14366-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 1/5] xen/arm32: head.S: Remove CA15 and
	CA7 specific includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

head.S contains only generic code, so it should not include headers for a
specific processor.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/arm32/head.S |    2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 1b1801b..77f5518 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -19,8 +19,6 @@
 
 #include <asm/config.h>
 #include <asm/page.h>
-#include <asm/processor-ca15.h>
-#include <asm/processor-ca7.h>
 #include <asm/asm_defns.h>
 #include <asm/early_printk.h>
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZo-00054F-Oz; Tue, 11 Feb 2014 20:05:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZl-00053e-QL
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:02 +0000
Received: from [85.158.137.68:43481] by server-9.bemta-3.messagelabs.com id
	1B/BC-10184-D628AF25; Tue, 11 Feb 2014 20:05:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392149100!1173749!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13922 invoked from network); 11 Feb 2014 20:05:00 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:05:00 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so3900756eaj.25
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:05:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=zoaVEeA3vPfomEn2GRAhP1umGjt+tLmVHI4GuW/njv0=;
	b=ChP38NgTebbGauK/ZemGTXaJM4l40Vip1dIae2TsSB0NXv/cc+fe02GOmnMne9i2A2
	d/B47z2AdZlpxiRyTqVBizAsujZcvph8+RCNFxZIIl1zYE23zv+sWK0KLZioL1pdpqEB
	ycv12tR2i+Nm2nCFwoMzG+eaGyBrczGsCVVJoU//Pibndxq0BsKNrf/LFfSfVkoekhUd
	FnHyNXaXp1yIubuFY16c0yANCavQMsVnqKkkqitE20K3CYvYhj/3+9UvUCYRGIZCSgdO
	DyxFWXPWrBaVSq+n/4H/RwgUiTYRVBHlMkGyxZ5C1asmEkFx/VuLUR+AQznCEKP1rEjg
	HoeA==
X-Gm-Message-State: ALoCoQmosEigp+Qwit6vhGxAQ8JHYFcRCW16URfLiEotieDLl6EMQf23wQXLI1NAf5urQzCjOzvK
X-Received: by 10.14.93.199 with SMTP id l47mr12905383eef.58.1392149100222;
	Tue, 11 Feb 2014 12:05:00 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:59 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:43 +0000
Message-Id: <1392149085-14366-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 3/5] xen/arm64: Implement
	lookup_processor_type as a dummy function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The ARM64 implementation doesn't yet have any processor-specific code.
This function will be used in a later patch.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/arm64/head.S |    7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index c97c194..e1d3103 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -546,6 +546,13 @@ putn:   ret
 
 #endif /* !CONFIG_EARLY_PRINTK */
 
+/* This provides a C-callable version of __lookup_processor_type.
+ * TODO: for now, the implementation always returns NULL.
+ */
+GLOBAL(lookup_processor_type)
+        mov  x0, #0
+        ret
+
 /*
  * Local variables:
  * mode: ASM
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZp-00055Q-Ij; Tue, 11 Feb 2014 20:05:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZo-00053x-CM
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:04 +0000
Received: from [85.158.139.211:15946] by server-11.bemta-5.messagelabs.com id
	3E/DC-23886-F628AF25; Tue, 11 Feb 2014 20:05:03 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392149102!3253511!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22992 invoked from network); 11 Feb 2014 20:05:03 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:05:03 -0000
Received: by mail-ea0-f181.google.com with SMTP id k10so1703099eaj.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:05:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=RA/Zt1ZBqrwLFJQQsrfNo8IXXEqzVnxREQGySe84uNo=;
	b=Q8GUOawRCv1wC3SRL9u185VYBg6YY7DornjprG16/syElN7C7KIYNX5N1+ZX2L9ipw
	rpIM4wGnfCWPCRUprWBlMnG8C5seT34rCPY/9ZH4lDdnbWQNkYLTyvFdo2Elxc8HJvM2
	DBgCHMyMnq8ygykqC+TzYsBb038af/mGeWYCfzTlmRIIcpMEay0+Q3VhzbSb+UFu01vs
	FIeQc0llfK1zmJWmMwzORTdHE2sylidf7zIzf+Yo8pAM3+3K6fOh/LYwdFxx2mJ61AWk
	57KcgRIG4wl1QWSVbQrngOIJyylwYKzbvfIrJKDT8pSa9gNK9D4bEhUz3irEYPOQ6HV1
	u8+Q==
X-Gm-Message-State: ALoCoQlcLr+yiW07okWybOqw137olaM5T1ZArbqU4EkkwkoT8O8eAZHbmVRrpQsc+td/6E9vt74p
X-Received: by 10.14.93.199 with SMTP id l47mr12905650eef.58.1392149102692;
	Tue, 11 Feb 2014 12:05:02 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.05.01
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:05:02 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:45 +0000
Message-Id: <1392149085-14366-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 5/5] xen/arm: Remove
	asm-arm/processor-ca{15, 7}.h headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These headers are in the wrong directory and are no longer used.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/include/asm-arm/processor-ca15.h |   42 ----------------------------------
 xen/include/asm-arm/processor-ca7.h  |   20 ----------------
 2 files changed, 62 deletions(-)
 delete mode 100644 xen/include/asm-arm/processor-ca15.h
 delete mode 100644 xen/include/asm-arm/processor-ca7.h

diff --git a/xen/include/asm-arm/processor-ca15.h b/xen/include/asm-arm/processor-ca15.h
deleted file mode 100644
index f65f40a..0000000
--- a/xen/include/asm-arm/processor-ca15.h
+++ /dev/null
@@ -1,42 +0,0 @@
-#ifndef __ASM_ARM_PROCESSOR_CA15_H
-#define __ASM_ARM_PROCESSOR_CA15_H
-
-/* ACTLR Auxiliary Control Register, Cortex A15 */
-#define ACTLR_CA15_SNOOP_DELAYED      (1<<31)
-#define ACTLR_CA15_MAIN_CLOCK         (1<<30)
-#define ACTLR_CA15_NEON_CLOCK         (1<<29)
-#define ACTLR_CA15_NONCACHE           (1<<24)
-#define ACTLR_CA15_INORDER_REQ        (1<<23)
-#define ACTLR_CA15_INORDER_LOAD       (1<<22)
-#define ACTLR_CA15_L2_TLB_PREFETCH    (1<<21)
-#define ACTLR_CA15_L2_IPA_PA_CACHE    (1<<20)
-#define ACTLR_CA15_L2_CACHE           (1<<19)
-#define ACTLR_CA15_L2_PA_CACHE        (1<<18)
-#define ACTLR_CA15_TLB                (1<<17)
-#define ACTLR_CA15_STRONGY_ORDERED    (1<<16)
-#define ACTLR_CA15_INORDER            (1<<15)
-#define ACTLR_CA15_FORCE_LIM          (1<<14)
-#define ACTLR_CA15_CP_FLUSH           (1<<13)
-#define ACTLR_CA15_CP_PUSH            (1<<12)
-#define ACTLR_CA15_LIM                (1<<11)
-#define ACTLR_CA15_SER                (1<<10)
-#define ACTLR_CA15_OPT                (1<<9)
-#define ACTLR_CA15_WFI                (1<<8)
-#define ACTLR_CA15_WFE                (1<<7)
-#define ACTLR_CA15_SMP                (1<<6)
-#define ACTLR_CA15_PLD                (1<<5)
-#define ACTLR_CA15_IP                 (1<<4)
-#define ACTLR_CA15_MICRO_BTB          (1<<3)
-#define ACTLR_CA15_LOOP_ONE           (1<<2)
-#define ACTLR_CA15_LOOP_DISABLE       (1<<1)
-#define ACTLR_CA15_BTB                (1<<0)
-
-#endif /* __ASM_ARM_PROCESSOR_CA15_H */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-arm/processor-ca7.h b/xen/include/asm-arm/processor-ca7.h
deleted file mode 100644
index 5048a95..0000000
--- a/xen/include/asm-arm/processor-ca7.h
+++ /dev/null
@@ -1,20 +0,0 @@
-#ifndef __ASM_ARM_PROCESSOR_CA7_H
-#define __ASM_ARM_PROCESSOR_CA7_H
-
-/* ACTLR Auxiliary Control Register, Cortex A7 */
-#define ACTLR_CA7_DDI                 (1<<28)
-#define ACTLR_CA7_DDVM                (1<<15)
-#define ACTLR_CA7_L1RADIS             (1<<12)
-#define ACTLR_CA7_L2RADIS             (1<<11)
-#define ACTLR_CA7_DODMBS              (1<<10)
-#define ACTLR_CA7_SMP                 (1<<6)
-
-#endif /* __ASM_ARM_PROCESSOR_CA7_H */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZp-00054X-5Z; Tue, 11 Feb 2014 20:05:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZn-00053n-UR
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:04 +0000
Received: from [85.158.139.211:9942] by server-2.bemta-5.messagelabs.com id
	4D/62-23037-F628AF25; Tue, 11 Feb 2014 20:05:03 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392149101!3263212!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26799 invoked from network); 11 Feb 2014 20:05:01 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:05:01 -0000
Received: by mail-ee0-f41.google.com with SMTP id e51so3304424eek.0
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:05:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=3jLAgXb8Foa+Cm5NG1xg6/OAILk0nAb1qxp/6ap7n20=;
	b=JGpv6ONMcJ4QCu5D5pbX54dgQ6Pgnl0K0zAfilKmQIReIdoIoT4xbwgSfVIy9gg/S+
	bcmdWuykjuzi3ee/2GEVFOSMqPA4qHuYf7RKkJfkIruYpN6ThVa0uHP3F99MPuK1WQLs
	udCnzqVuPjgx+PF6GrMpjR5jbLij5RIh+EJKpIAqhWT81SPS+UbNzNNZwuuTpYv8dPIt
	Nwaqx1lfjSsnqQ1fkcEblt6M+ERZcGu4yHEme+dgxyYbpfhZcUkIq2wX5GB1ZlIuJsMt
	0qQujOl8R75xPZQBZt0f4NXJjQquuSvr9GLI9GWwNGpub9G7DoLOoXp/LbeYj/EU742U
	qBSQ==
X-Gm-Message-State: ALoCoQmqYSA/HoABq/tUeShrihBx7zzHdi1lrmvN7JqNV9kYcbEFqFuQD13YQQ/Zje4MPUAv8Nry
X-Received: by 10.14.220.193 with SMTP id o41mr46873552eep.22.1392149101485;
	Tue, 11 Feb 2014 12:05:01 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.05.00
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:05:00 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:44 +0000
Message-Id: <1392149085-14366-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor specific
	setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the possibility of having processor-specific callbacks
that can be called in various places.

Currently the VCPU initialization code contains processor-specific setup (for
Cortex-A7 and Cortex-A15) of the ACTLR register. Other processors may use a
different layout for this register.

Move this setup into a specific callback for ARMv7 processors.
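
The dispatch this patch sets up can be sketched in plain C as follows. Note
that struct vcpu here is a hypothetical flattened stand-in (in Xen the ACTLR
value and max_vcpus live on the arch vcpu and the domain respectively), so
treat this as an illustration of the callback shape, not the exact Xen code:

```c
#include <assert.h>
#include <stddef.h>

#define ACTLR_V7_SMP (1u << 6)

/* Hypothetical flattened stand-in for Xen's vcpu/domain split. */
struct vcpu {
    unsigned int actlr;
    unsigned int max_vcpus;
};

/* Per-processor callbacks, as introduced by this patch. */
struct processor {
    void (*vcpu_initialize)(struct vcpu *v);
};

static void armv7_vcpu_initialize(struct vcpu *v)
{
    /* The SMP bit must match whether the guest can have several VCPUs. */
    if ( v->max_vcpus > 1 )
        v->actlr |= ACTLR_V7_SMP;
    else
        v->actlr &= ~ACTLR_V7_SMP;
}

static const struct processor armv7_processor = {
    .vcpu_initialize = armv7_vcpu_initialize,
};

/* Set by processor_setup() from the matched proc_info entry; it may
 * stay NULL on processors without specific callbacks. */
static const struct processor *processor = &armv7_processor;

void processor_vcpu_initialize(struct vcpu *v)
{
    if ( !processor || !processor->vcpu_initialize )
        return;

    processor->vcpu_initialize(v);
}
```

With the NULL checks in processor_vcpu_initialize, callers such as
vcpu_initialise never need to know which processor (if any) was matched
at boot.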

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/Makefile          |    1 +
 xen/arch/arm/arm32/Makefile    |    2 +-
 xen/arch/arm/arm32/proc-v7-c.c |   32 ++++++++++++++++++++++++++
 xen/arch/arm/arm32/proc-v7.S   |    3 +++
 xen/arch/arm/domain.c          |    8 ++-----
 xen/arch/arm/processor.c       |   49 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c           |    3 +++
 xen/include/asm-arm/procinfo.h |   17 ++++++++++++--
 8 files changed, 106 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/arm/arm32/proc-v7-c.c
 create mode 100644 xen/arch/arm/processor.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index d70f6d5..63e0460 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -32,6 +32,7 @@ obj-y += vuart.o
 obj-y += hvm.o
 obj-y += device.o
 obj-y += decode.o
+obj-y += processor.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
index 65ecff4..0e7c3b4 100644
--- a/xen/arch/arm/arm32/Makefile
+++ b/xen/arch/arm/arm32/Makefile
@@ -1,7 +1,7 @@
 subdir-y += lib
 
 obj-y += entry.o
-obj-y += proc-v7.o
+obj-y += proc-v7.o proc-v7-c.o
 
 obj-y += traps.o
 obj-y += domain.o
diff --git a/xen/arch/arm/arm32/proc-v7-c.c b/xen/arch/arm/arm32/proc-v7-c.c
new file mode 100644
index 0000000..a3b94a2
--- /dev/null
+++ b/xen/arch/arm/arm32/proc-v7-c.c
@@ -0,0 +1,32 @@
+/*
+ * xen/arch/arm/arm32/proc-v7-c.c
+ *
+ * arm v7 specific initializations (C part)
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <asm/procinfo.h>
+#include <asm/processor.h>
+
+static void armv7_vcpu_initialize(struct vcpu *v)
+{
+    if ( v->domain->max_vcpus > 1 )
+        v->arch.actlr |= ACTLR_V7_SMP;
+    else
+        v->arch.actlr &= ~ACTLR_V7_SMP;
+}
+
+const struct processor armv7_processor = {
+    .vcpu_initialize = armv7_vcpu_initialize,
+};
diff --git a/xen/arch/arm/arm32/proc-v7.S b/xen/arch/arm/arm32/proc-v7.S
index 2c8cb9c..d7d2318 100644
--- a/xen/arch/arm/arm32/proc-v7.S
+++ b/xen/arch/arm/arm32/proc-v7.S
@@ -33,6 +33,7 @@ __v7_ca15mp_proc_info:
         .long 0x410FC0F0             /* Cortex-A15 */
         .long 0xFF0FFFF0             /* Mask */
         .long v7_init
+        .long armv7_processor
         .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
 
         .section ".init.proc.info", #alloc, #execinstr
@@ -41,6 +42,7 @@ __v7_ca7mp_proc_info:
         .long 0x410FC070             /* Cortex-A7 */
         .long 0xFF0FFFF0             /* Mask */
         .long v7_init
+        .long armv7_processor
         .size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info
 
         .section ".init.proc.info", #alloc, #execinstr
@@ -49,6 +51,7 @@ __v7_brahma15mp_proc_info:
         .long 0x420F00F2             /* Broadcom Brahma-B15 */
         .long 0xFF0FFFFF             /* Mask */
         .long v7_init
+        .long armv7_processor
         .size __v7_brahma15mp_proc_info, . - __v7_brahma15mp_proc_info
 
 /*
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..df23147 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -29,7 +29,7 @@
 #include <asm/irq.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
-#include <asm/processor-ca15.h>
+#include <asm/procinfo.h>
 
 #include <asm/gic.h>
 #include <asm/platform.h>
@@ -487,11 +487,7 @@ int vcpu_initialise(struct vcpu *v)
 
     v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
 
-    /* XXX: Handle other than CA15 cpus */
-    if ( v->domain->max_vcpus > 1 )
-        v->arch.actlr |= ACTLR_CA15_SMP;
-    else
-        v->arch.actlr &= ~ACTLR_CA15_SMP;
+    processor_vcpu_initialize(v);
 
     if ( (rc = vcpu_vgic_init(v)) != 0 )
         return rc;
diff --git a/xen/arch/arm/processor.c b/xen/arch/arm/processor.c
new file mode 100644
index 0000000..b06fcc5
--- /dev/null
+++ b/xen/arch/arm/processor.c
@@ -0,0 +1,49 @@
+/*
+ * xen/arch/arm/processor.c
+ *
+ * Helpers to execute processor specific code.
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (C) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <asm/procinfo.h>
+
+static const struct processor *processor = NULL;
+
+void __init processor_setup(void)
+{
+    const struct proc_info_list *procinfo;
+
+    procinfo = lookup_processor_type();
+    if ( !procinfo )
+        return;
+
+    processor = procinfo->processor;
+}
+
+void processor_vcpu_initialize(struct vcpu *v)
+{
+    if ( !processor || !processor->vcpu_initialize )
+        return;
+
+    processor->vcpu_initialize(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f6d713..5529cb1 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -42,6 +42,7 @@
 #include <asm/gic.h>
 #include <asm/cpufeature.h>
 #include <asm/platform.h>
+#include <asm/procinfo.h>
 
 struct cpuinfo_arm __read_mostly boot_cpu_data;
 
@@ -148,6 +149,8 @@ static void __init processor_id(void)
     {
         printk("32-bit Execution: Unsupported\n");
     }
+
+    processor_setup();
 }
 
 static void dt_unreserved_regions(paddr_t s, paddr_t e,
diff --git a/xen/include/asm-arm/procinfo.h b/xen/include/asm-arm/procinfo.h
index 9d3feb7..9cb7a86 100644
--- a/xen/include/asm-arm/procinfo.h
+++ b/xen/include/asm-arm/procinfo.h
@@ -21,10 +21,23 @@
 #ifndef __ASM_ARM_PROCINFO_H
 #define __ASM_ARM_PROCINFO_H
 
+#include <xen/sched.h>
+
+struct processor {
+    /* Initialize processor-specific registers for the new VCPU */
+    void (*vcpu_initialize)(struct vcpu *v);
+};
+
 struct proc_info_list {
-	unsigned int		cpu_val;
-	unsigned int		cpu_mask;
+    unsigned int        cpu_val;
+    unsigned int        cpu_mask;
     void                (*cpu_init)(void);
+    struct processor    *processor;
 };
 
+const __init struct proc_info_list *lookup_processor_type(void);
+
+void __init processor_setup(void);
+void processor_vcpu_initialize(struct vcpu *v);
+
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZn-00053j-Be; Tue, 11 Feb 2014 20:05:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZk-00053W-FR
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:00 +0000
Received: from [85.158.143.35:14709] by server-2.bemta-4.messagelabs.com id
	AF/56-10891-B628AF25; Tue, 11 Feb 2014 20:04:59 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392149098!4923519!1
X-Originating-IP: [209.85.215.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20330 invoked from network); 11 Feb 2014 20:04:59 -0000
Received: from mail-ea0-f178.google.com (HELO mail-ea0-f178.google.com)
	(209.85.215.178)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:04:59 -0000
Received: by mail-ea0-f178.google.com with SMTP id a15so3915613eae.37
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:04:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=tAJjfm/bz9rd2naidPyHgfBHsPt3EVDWtPKqjyBgXbw=;
	b=lGL14R1PPgNM6mXwpLuApAXn5k9xtIqeSF2P5vbFjF94vQ8BQtOtL0LbhtOUPY9MqR
	9v35GcrBgZepye3pr/H4t+qSnxg4YgB0LEs3wvxe9uN08mOJifhab+HcChsivTgoQMVG
	kQXwsrmTJnk+lTHKtQGrN5ARpxIg/JMfWDzPT6kawaU47Gi/uSVk2dvCpsBEgZetZuKh
	+57b5JJrt9du+Kyvz7kpzK7RsSUtrHW2TZdKHzfk6j+9z1heWnL+PTx8BTozdS3z5IR2
	QWh7IwSLTdYH1o6zI3s3on7eo0gkBP8OJgoHEaejUNYeGLeGc/mXV9ee+YQgMooj5uZb
	uzpA==
X-Gm-Message-State: ALoCoQk+mKg2AP6eAlEOijYOs3+pqIm4qQP5SH5q+EsjMJiwpaT65shzWLP2D/BXzRKdS0jozrE2
X-Received: by 10.14.88.131 with SMTP id a3mr5797754eef.64.1392149098659;
	Tue, 11 Feb 2014 12:04:58 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.57
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:58 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:42 +0000
Message-Id: <1392149085-14366-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 2/5] xen/arm32: Introduce
	lookup_processor_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Looking up a specific proc_info structure is already implemented in assembly.
Implement lookup_processor_type on top of it to avoid duplicating the code
between C and assembly.

This function searches the proc_info_list table for the entry matching the
processor ID. If the search fails, it returns NULL; otherwise it returns a
pointer to the proc_info structure for this specific processor.
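For reference, the walk the assembly performs can be modelled in C roughly as
follows (a stand-alone sketch, not Xen code; the structure is trimmed to the
two fields the search actually compares):

```c
#include <stddef.h>

/* Trimmed-down proc_info entry: only the fields the lookup compares. */
struct proc_info_list {
    unsigned int cpu_val;   /* expected CPU ID value */
    unsigned int cpu_mask;  /* mask applied to the MIDR before comparing */
};

/*
 * Walk the table in [start, end) and return the first entry whose
 * masked processor ID matches, or NULL if no entry matches.
 */
static const struct proc_info_list *
lookup(const struct proc_info_list *start,
       const struct proc_info_list *end, unsigned int midr)
{
    const struct proc_info_list *p;

    for ( p = start; p < end; p++ )
        if ( (midr & p->cpu_mask) == p->cpu_val )
            return p;

    return NULL;
}
```

The real table is linker-built between __proc_info_start and __proc_info_end,
and the assembly additionally applies the virt/phys offset held in r10 because
the MMU is still off at that point.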

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/arm32/head.S |   57 ++++++++++++++++++++++++++++++++++-----------
 1 file changed, 43 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 77f5518..68fb499 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -198,26 +198,16 @@ skip_bss:
         PRINT("- Setting up control registers -\r\n")
 
         /* Get processor specific proc info into r1 */
-        mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
-        ldr   r1, = __proc_info_start
-        add   r1, r1, r10                   /* r1 := paddr of table (start) */
-        ldr   r2, = __proc_info_end
-        add   r2, r2, r10                   /* r2 := paddr of table (end) */
-1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
-        and   r4, r0, r3                    /* r4 := our cpu id with mask */
-        ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
-        teq   r4, r3
-        beq   2f                            /* Match => exit, or try next proc info */
-        add   r1, r1, #PROCINFO_sizeof
-        cmp   r1, r2
-        blo   1b
+        bl    __lookup_processor_type
+        teq   r1, #0
+        bne   1f
         mov   r4, r0
         PRINT("- Missing processor info: ")
         mov   r0, r4
         bl    putn
         PRINT(" -\r\n")
         b     fail
-2:
+1:
 
         /* Jump to cpu_init */
         ldr   r1, [r1, #PROCINFO_cpu_init]  /* r1 := vaddr(init func) */
@@ -545,6 +535,45 @@ putn:   mov   pc, lr
 
 #endif /* !CONFIG_EARLY_PRINTK */
 
+/* This provides a C-API version of __lookup_processor_type */
+GLOBAL(lookup_processor_type)
+        stmfd sp!, {r4, r10, lr}
+        mov   r10, #0                   /* r10 := offset between virt&phys */
+        bl    __lookup_processor_type
+        mov   r0, r1
+        ldmfd sp!, {r4, r10, pc}
+
+/* Read the processor ID register (CP#15, CR0) and look it up in the
+ * linker-built supported-processor list. Note that we can't use absolute
+ * addresses for the __proc_info lists since we aren't running with the MMU
+ * on (and therefore are not in the correct address space); we must apply the offset.
+ *
+ * r10: offset between virt&phys
+ *
+ * Returns:
+ * r0: CPUID
+ * r1: proc_info pointer
+ * Clobbers r2-r4
+ */
+__lookup_processor_type:
+        mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
+        ldr   r1, = __proc_info_start
+        add   r1, r1, r10                   /* r1 := paddr of table (start) */
+        ldr   r2, = __proc_info_end
+        add   r2, r2, r10                   /* r2 := paddr of table (end) */
+1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
+        and   r4, r0, r3                    /* r4 := our cpu id with mask */
+        ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
+        teq   r4, r3
+        beq   2f                            /* Match => exit, otherwise try next proc info */
+        add   r1, r1, #PROCINFO_sizeof
+        cmp   r1, r2
+        blo   1b
+        /* We failed to find the proc_info, return NULL */
+        mov   r1, #0
+2:
+        mov   pc, lr
+
 /*
  * Local variables:
  * mode: ASM
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZi-00053G-Je; Tue, 11 Feb 2014 20:04:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZh-00052l-5e
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:04:57 +0000
Received: from [193.109.254.147:63471] by server-15.bemta-14.messagelabs.com
	id 03/56-10839-8628AF25; Tue, 11 Feb 2014 20:04:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392149095!3637339!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30353 invoked from network); 11 Feb 2014 20:04:55 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:04:55 -0000
Received: by mail-ee0-f41.google.com with SMTP id e51so3320798eek.28
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:04:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=9Tw0M7BTBNOPmTRRRwce+1IIQXQLw/eEesukKIbsylY=;
	b=AmmFU5RxSKL4ZGtYKND67T370gA1168lI/0xP5XK7gB47LRjhuKbmpvMOPxx5AZp4G
	xLO7J9nmbrobMcbtxxtR8jslvR4QD6zyPYQt/2ceWnSrZ4p1bDrPe4oxYHXwYabeNvUl
	a0D2FQU4ydQnbF5o7nriDASo5j3aIJ5/rPZbKoYgJwP4S/0omLYCZjzP7SrBoVWoe1qj
	4Q7nvKrmUdf9Qy+tRhZlnoAYyzcLwz1eqGvsIgojpGuB5PLLtmk8HwSe0KVa7ZEZD944
	gmwLzm1awm4eaMd2FGe/A1AhES5OlFzwQZurt6i2eb73SBmF5rlDxZ/BdxK+U1An9/dC
	BxHQ==
X-Gm-Message-State: ALoCoQnQgsKsjRHSjvRCO/etdbGGzGkWqapRHaS8ltNez/IKZJHCrgfdp+BlPifpQHOWPJgCXsjv
X-Received: by 10.14.225.195 with SMTP id z43mr46824493eep.19.1392149095337;
	Tue, 11 Feb 2014 12:04:55 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.53
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:54 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:40 +0000
Message-Id: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 0/5] xen/arm: Remove processor specific
	bits in code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This patch series follows up on a patch I sent a few months ago; see:
https://patches.linaro.org/19617/.

I took a new approach and introduced processor-specific callbacks, which are
called, for now, during VCPU initialization. In the future, we can extend the
structure to add new callbacks.

This patch series also removes xen/include/asm-arm/processor-ca{15,7}.h, as
both headers are no longer used in Xen.

Sincerely yours,

Julien Grall (5):
  xen/arm32: head.S: Remove CA15 and CA7 specific includes
  xen/arm32: Introduce lookup_processor_type
  xen/arm64: Implement lookup_processor_type as a dummy function
  xen/arm: Remove processor specific setup in vcpu_initialise
  xen/arm: Remove asm-arm/processor-ca{15,7}.h headers

 xen/arch/arm/Makefile                |    1 +
 xen/arch/arm/arm32/Makefile          |    2 +-
 xen/arch/arm/arm32/head.S            |   59 +++++++++++++++++++++++++---------
 xen/arch/arm/arm32/proc-v7-c.c       |   32 ++++++++++++++++++
 xen/arch/arm/arm32/proc-v7.S         |    3 ++
 xen/arch/arm/arm64/head.S            |    7 ++++
 xen/arch/arm/domain.c                |    8 ++---
 xen/arch/arm/processor.c             |   49 ++++++++++++++++++++++++++++
 xen/arch/arm/setup.c                 |    3 ++
 xen/include/asm-arm/processor-ca15.h |   42 ------------------------
 xen/include/asm-arm/processor-ca7.h  |   20 ------------
 xen/include/asm-arm/procinfo.h       |   17 ++++++++--
 12 files changed, 156 insertions(+), 87 deletions(-)
 create mode 100644 xen/arch/arm/arm32/proc-v7-c.c
 create mode 100644 xen/arch/arm/processor.c
 delete mode 100644 xen/include/asm-arm/processor-ca15.h
 delete mode 100644 xen/include/asm-arm/processor-ca7.h

-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZj-00053S-Vi; Tue, 11 Feb 2014 20:04:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZj-00053K-6L
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:04:59 +0000
Received: from [193.109.254.147:63587] by server-12.bemta-14.messagelabs.com
	id 90/CE-17220-A628AF25; Tue, 11 Feb 2014 20:04:58 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392149097!3625449!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17206 invoked from network); 11 Feb 2014 20:04:57 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:04:57 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so3844318eek.23
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:04:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=yr5AabJ6NVq+LsRTRLE/6b4akgIBDDQprZF7J4aJGIY=;
	b=XXpKXbxoOWC6J7wpkv6Nw0JZntJ9L2aWQb8WMreBn8apHvGcyLpqnpriuWOzCK3dfz
	m3RJZXKTIXEWn9zz0jKrd9pI6Zl7/0PbfnG3Hq2s3rrVqtWfFboh9nJRu/NXi+Erxi9B
	+0fcDCwMd2O/g6Igdp5WycZOEe4TdmiNM18tYuj78Il/hvbKvs+VUSVtqsGF9kO4w4VM
	5B2/cyWjCM5sgUP9F/9UoMQuHq9oK5QDKYrR6sqLOE108Ly6WxDY/D4fUtnUKFm6Y60N
	8S5/uKBlqyZ8zzn6nUu+lBDLigZGqKqTv3FZDHPI+lrWs+8tuzIuysAU2tbh85uz2C27
	Si9g==
X-Gm-Message-State: ALoCoQk4e8Cgc2D5jBxc6KzuE9Gst3h6vZUQDVEi4pudzov9HRx9fOiJ/CeXxGJ0hFWjXrBh8red
X-Received: by 10.14.1.198 with SMTP id 46mr9574372eed.11.1392149097305;
	Tue, 11 Feb 2014 12:04:57 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.04.55
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:04:56 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:41 +0000
Message-Id: <1392149085-14366-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 1/5] xen/arm32: head.S: Remove CA15 and
	CA7 specific includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

head.S only contains generic code. It should not include headers for a
specific processor.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/arm32/head.S |    2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 1b1801b..77f5518 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -19,8 +19,6 @@
 
 #include <asm/config.h>
 #include <asm/page.h>
-#include <asm/processor-ca15.h>
-#include <asm/processor-ca7.h>
 #include <asm/asm_defns.h>
 #include <asm/early_printk.h>
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZp-00054X-5Z; Tue, 11 Feb 2014 20:05:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZn-00053n-UR
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:04 +0000
Received: from [85.158.139.211:9942] by server-2.bemta-5.messagelabs.com id
	4D/62-23037-F628AF25; Tue, 11 Feb 2014 20:05:03 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392149101!3263212!1
X-Originating-IP: [74.125.83.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26799 invoked from network); 11 Feb 2014 20:05:01 -0000
Received: from mail-ee0-f41.google.com (HELO mail-ee0-f41.google.com)
	(74.125.83.41)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:05:01 -0000
Received: by mail-ee0-f41.google.com with SMTP id e51so3304424eek.0
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:05:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=3jLAgXb8Foa+Cm5NG1xg6/OAILk0nAb1qxp/6ap7n20=;
	b=JGpv6ONMcJ4QCu5D5pbX54dgQ6Pgnl0K0zAfilKmQIReIdoIoT4xbwgSfVIy9gg/S+
	bcmdWuykjuzi3ee/2GEVFOSMqPA4qHuYf7RKkJfkIruYpN6ThVa0uHP3F99MPuK1WQLs
	udCnzqVuPjgx+PF6GrMpjR5jbLij5RIh+EJKpIAqhWT81SPS+UbNzNNZwuuTpYv8dPIt
	Nwaqx1lfjSsnqQ1fkcEblt6M+ERZcGu4yHEme+dgxyYbpfhZcUkIq2wX5GB1ZlIuJsMt
	0qQujOl8R75xPZQBZt0f4NXJjQquuSvr9GLI9GWwNGpub9G7DoLOoXp/LbeYj/EU742U
	qBSQ==
X-Gm-Message-State: ALoCoQmqYSA/HoABq/tUeShrihBx7zzHdi1lrmvN7JqNV9kYcbEFqFuQD13YQQ/Zje4MPUAv8Nry
X-Received: by 10.14.220.193 with SMTP id o41mr46873552eep.22.1392149101485;
	Tue, 11 Feb 2014 12:05:01 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.05.00
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:05:00 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:44 +0000
Message-Id: <1392149085-14366-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor specific
	setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces the possibility of having processor-specific callbacks
that can be called in various places.

Currently the VCPU initialization code contains processor-specific setup (for
Cortex-A7 and Cortex-A15) of the ACTLR register. It's possible to have
processors with a different layout for this register.

Move this setup into a specific callback for ARMv7 processors.
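The resulting shape of the mechanism can be sketched stand-alone as follows (a
minimal model of the NULL-guarded dispatch, not the Xen code itself; `struct
vcpu`, the demo hook, and the bit it sets are placeholders):

```c
struct vcpu {
    unsigned int actlr;  /* stand-in for the guest's ACTLR copy */
};

/* Per-processor callback table; future hooks become new members. */
struct processor {
    void (*vcpu_initialize)(struct vcpu *v);
};

/* Selected once at boot from the matching proc_info entry. */
static const struct processor *processor;

/* NULL-guarded dispatch: no table or no hook means a no-op. */
static void processor_vcpu_initialize(struct vcpu *v)
{
    if ( !processor || !processor->vcpu_initialize )
        return;
    processor->vcpu_initialize(v);
}

/* Example ARMv7-style hook: set a placeholder bit in the ACTLR copy. */
static void demo_vcpu_initialize(struct vcpu *v)
{
    v->actlr |= 1u << 6;
}

static const struct processor demo_processor = {
    .vcpu_initialize = demo_vcpu_initialize,
};
```

The guard means an unrecognised processor (no proc_info match, so `processor`
stays NULL) degrades to the pre-patch behaviour of doing nothing special,
rather than crashing.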

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/Makefile          |    1 +
 xen/arch/arm/arm32/Makefile    |    2 +-
 xen/arch/arm/arm32/proc-v7-c.c |   32 ++++++++++++++++++++++++++
 xen/arch/arm/arm32/proc-v7.S   |    3 +++
 xen/arch/arm/domain.c          |    8 ++-----
 xen/arch/arm/processor.c       |   49 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/setup.c           |    3 +++
 xen/include/asm-arm/procinfo.h |   17 ++++++++++++--
 8 files changed, 106 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/arm/arm32/proc-v7-c.c
 create mode 100644 xen/arch/arm/processor.c

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index d70f6d5..63e0460 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -32,6 +32,7 @@ obj-y += vuart.o
 obj-y += hvm.o
 obj-y += device.o
 obj-y += decode.o
+obj-y += processor.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/arm32/Makefile b/xen/arch/arm/arm32/Makefile
index 65ecff4..0e7c3b4 100644
--- a/xen/arch/arm/arm32/Makefile
+++ b/xen/arch/arm/arm32/Makefile
@@ -1,7 +1,7 @@
 subdir-y += lib
 
 obj-y += entry.o
-obj-y += proc-v7.o
+obj-y += proc-v7.o proc-v7-c.o
 
 obj-y += traps.o
 obj-y += domain.o
diff --git a/xen/arch/arm/arm32/proc-v7-c.c b/xen/arch/arm/arm32/proc-v7-c.c
new file mode 100644
index 0000000..a3b94a2
--- /dev/null
+++ b/xen/arch/arm/arm32/proc-v7-c.c
@@ -0,0 +1,32 @@
+/*
+ * xen/arch/arm/arm32/proc-v7-c.c
+ *
+ * arm v7 specific initializations (C part)
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <asm/procinfo.h>
+#include <asm/processor.h>
+
+static void armv7_vcpu_initialize(struct vcpu *v)
+{
+    if ( v->domain->max_vcpus > 1 )
+        v->arch.actlr |= ACTLR_V7_SMP;
+    else
+        v->arch.actlr &= ~ACTLR_V7_SMP;
+}
+
+const struct processor armv7_processor = {
+    .vcpu_initialize = armv7_vcpu_initialize,
+};
diff --git a/xen/arch/arm/arm32/proc-v7.S b/xen/arch/arm/arm32/proc-v7.S
index 2c8cb9c..d7d2318 100644
--- a/xen/arch/arm/arm32/proc-v7.S
+++ b/xen/arch/arm/arm32/proc-v7.S
@@ -33,6 +33,7 @@ __v7_ca15mp_proc_info:
         .long 0x410FC0F0             /* Cortex-A15 */
         .long 0xFF0FFFF0             /* Mask */
         .long v7_init
+        .long armv7_processor
         .size __v7_ca15mp_proc_info, . - __v7_ca15mp_proc_info
 
         .section ".init.proc.info", #alloc, #execinstr
@@ -41,6 +42,7 @@ __v7_ca7mp_proc_info:
         .long 0x410FC070             /* Cortex-A7 */
         .long 0xFF0FFFF0             /* Mask */
         .long v7_init
+        .long armv7_processor
         .size __v7_ca7mp_proc_info, . - __v7_ca7mp_proc_info
 
         .section ".init.proc.info", #alloc, #execinstr
@@ -49,6 +51,7 @@ __v7_brahma15mp_proc_info:
         .long 0x420F00F2             /* Broadcom Brahma-B15 */
         .long 0xFF0FFFFF             /* Mask */
         .long v7_init
+        .long armv7_processor
         .size __v7_brahma15mp_proc_info, . - __v7_brahma15mp_proc_info
 
 /*
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c279a27..df23147 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -29,7 +29,7 @@
 #include <asm/irq.h>
 #include <asm/cpufeature.h>
 #include <asm/vfp.h>
-#include <asm/processor-ca15.h>
+#include <asm/procinfo.h>
 
 #include <asm/gic.h>
 #include <asm/platform.h>
@@ -487,11 +487,7 @@ int vcpu_initialise(struct vcpu *v)
 
     v->arch.actlr = READ_SYSREG32(ACTLR_EL1);
 
-    /* XXX: Handle other than CA15 cpus */
-    if ( v->domain->max_vcpus > 1 )
-        v->arch.actlr |= ACTLR_CA15_SMP;
-    else
-        v->arch.actlr &= ~ACTLR_CA15_SMP;
+    processor_vcpu_initialize(v);
 
     if ( (rc = vcpu_vgic_init(v)) != 0 )
         return rc;
diff --git a/xen/arch/arm/processor.c b/xen/arch/arm/processor.c
new file mode 100644
index 0000000..b06fcc5
--- /dev/null
+++ b/xen/arch/arm/processor.c
@@ -0,0 +1,49 @@
+/*
+ * xen/arch/arm/processor.c
+ *
+ * Helpers to execute processor specific code.
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (C) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <asm/procinfo.h>
+
+static const struct processor *processor = NULL;
+
+void __init processor_setup(void)
+{
+    const struct proc_info_list *procinfo;
+
+    procinfo = lookup_processor_type();
+    if ( !procinfo )
+        return;
+
+    processor = procinfo->processor;
+}
+
+void processor_vcpu_initialize(struct vcpu *v)
+{
+    if ( !processor || !processor->vcpu_initialize )
+        return;
+
+    processor->vcpu_initialize(v);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 1f6d713..5529cb1 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -42,6 +42,7 @@
 #include <asm/gic.h>
 #include <asm/cpufeature.h>
 #include <asm/platform.h>
+#include <asm/procinfo.h>
 
 struct cpuinfo_arm __read_mostly boot_cpu_data;
 
@@ -148,6 +149,8 @@ static void __init processor_id(void)
     {
         printk("32-bit Execution: Unsupported\n");
     }
+
+    processor_setup();
 }
 
 static void dt_unreserved_regions(paddr_t s, paddr_t e,
diff --git a/xen/include/asm-arm/procinfo.h b/xen/include/asm-arm/procinfo.h
index 9d3feb7..9cb7a86 100644
--- a/xen/include/asm-arm/procinfo.h
+++ b/xen/include/asm-arm/procinfo.h
@@ -21,10 +21,23 @@
 #ifndef __ASM_ARM_PROCINFO_H
 #define __ASM_ARM_PROCINFO_H
 
+#include <xen/sched.h>
+
+struct processor {
+    /* Initialize processor-specific registers for the new VCPU */
+    void (*vcpu_initialize)(struct vcpu *v);
+};
+
 struct proc_info_list {
-	unsigned int		cpu_val;
-	unsigned int		cpu_mask;
+    unsigned int        cpu_val;
+    unsigned int        cpu_mask;
     void                (*cpu_init)(void);
+    struct processor    *processor;
 };
 
+const __init struct proc_info_list *lookup_processor_type(void);
+
+void __init processor_setup(void);
+void processor_vcpu_initialize(struct vcpu *v);
+
 #endif
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Tue Feb 11 20:05:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:05:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJZp-00055Q-Ij; Tue, 11 Feb 2014 20:05:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDJZo-00053x-CM
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 20:05:04 +0000
Received: from [85.158.139.211:15946] by server-11.bemta-5.messagelabs.com id
	3E/DC-23886-F628AF25; Tue, 11 Feb 2014 20:05:03 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392149102!3253511!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22992 invoked from network); 11 Feb 2014 20:05:03 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 20:05:03 -0000
Received: by mail-ea0-f181.google.com with SMTP id k10so1703099eaj.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 12:05:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=RA/Zt1ZBqrwLFJQQsrfNo8IXXEqzVnxREQGySe84uNo=;
	b=Q8GUOawRCv1wC3SRL9u185VYBg6YY7DornjprG16/syElN7C7KIYNX5N1+ZX2L9ipw
	rpIM4wGnfCWPCRUprWBlMnG8C5seT34rCPY/9ZH4lDdnbWQNkYLTyvFdo2Elxc8HJvM2
	DBgCHMyMnq8ygykqC+TzYsBb038af/mGeWYCfzTlmRIIcpMEay0+Q3VhzbSb+UFu01vs
	FIeQc0llfK1zmJWmMwzORTdHE2sylidf7zIzf+Yo8pAM3+3K6fOh/LYwdFxx2mJ61AWk
	57KcgRIG4wl1QWSVbQrngOIJyylwYKzbvfIrJKDT8pSa9gNK9D4bEhUz3irEYPOQ6HV1
	u8+Q==
X-Gm-Message-State: ALoCoQlcLr+yiW07okWybOqw137olaM5T1ZArbqU4EkkwkoT8O8eAZHbmVRrpQsc+td/6E9vt74p
X-Received: by 10.14.93.199 with SMTP id l47mr12905650eef.58.1392149102692;
	Tue, 11 Feb 2014 12:05:02 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm51090920eef.2.2014.02.11.12.05.01
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 12:05:02 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 20:04:45 +0000
Message-Id: <1392149085-14366-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [RFC for-4.5 5/5] xen/arm: Remove
	asm-arm/processor-ca{15, 7}.h headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These headers are not in the right directory and are not used anymore.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/include/asm-arm/processor-ca15.h |   42 ----------------------------------
 xen/include/asm-arm/processor-ca7.h  |   20 ----------------
 2 files changed, 62 deletions(-)
 delete mode 100644 xen/include/asm-arm/processor-ca15.h
 delete mode 100644 xen/include/asm-arm/processor-ca7.h

diff --git a/xen/include/asm-arm/processor-ca15.h b/xen/include/asm-arm/processor-ca15.h
deleted file mode 100644
index f65f40a..0000000
--- a/xen/include/asm-arm/processor-ca15.h
+++ /dev/null
@@ -1,42 +0,0 @@
-#ifndef __ASM_ARM_PROCESSOR_CA15_H
-#define __ASM_ARM_PROCESSOR_CA15_H
-
-/* ACTLR Auxiliary Control Register, Cortex A15 */
-#define ACTLR_CA15_SNOOP_DELAYED      (1<<31)
-#define ACTLR_CA15_MAIN_CLOCK         (1<<30)
-#define ACTLR_CA15_NEON_CLOCK         (1<<29)
-#define ACTLR_CA15_NONCACHE           (1<<24)
-#define ACTLR_CA15_INORDER_REQ        (1<<23)
-#define ACTLR_CA15_INORDER_LOAD       (1<<22)
-#define ACTLR_CA15_L2_TLB_PREFETCH    (1<<21)
-#define ACTLR_CA15_L2_IPA_PA_CACHE    (1<<20)
-#define ACTLR_CA15_L2_CACHE           (1<<19)
-#define ACTLR_CA15_L2_PA_CACHE        (1<<18)
-#define ACTLR_CA15_TLB                (1<<17)
-#define ACTLR_CA15_STRONGY_ORDERED    (1<<16)
-#define ACTLR_CA15_INORDER            (1<<15)
-#define ACTLR_CA15_FORCE_LIM          (1<<14)
-#define ACTLR_CA15_CP_FLUSH           (1<<13)
-#define ACTLR_CA15_CP_PUSH            (1<<12)
-#define ACTLR_CA15_LIM                (1<<11)
-#define ACTLR_CA15_SER                (1<<10)
-#define ACTLR_CA15_OPT                (1<<9)
-#define ACTLR_CA15_WFI                (1<<8)
-#define ACTLR_CA15_WFE                (1<<7)
-#define ACTLR_CA15_SMP                (1<<6)
-#define ACTLR_CA15_PLD                (1<<5)
-#define ACTLR_CA15_IP                 (1<<4)
-#define ACTLR_CA15_MICRO_BTB          (1<<3)
-#define ACTLR_CA15_LOOP_ONE           (1<<2)
-#define ACTLR_CA15_LOOP_DISABLE       (1<<1)
-#define ACTLR_CA15_BTB                (1<<0)
-
-#endif /* __ASM_ARM_PROCESSOR_CA15_H */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/include/asm-arm/processor-ca7.h b/xen/include/asm-arm/processor-ca7.h
deleted file mode 100644
index 5048a95..0000000
--- a/xen/include/asm-arm/processor-ca7.h
+++ /dev/null
@@ -1,20 +0,0 @@
-#ifndef __ASM_ARM_PROCESSOR_CA7_H
-#define __ASM_ARM_PROCESSOR_CA7_H
-
-/* ACTLR Auxiliary Control Register, Cortex A7 */
-#define ACTLR_CA7_DDI                 (1<<28)
-#define ACTLR_CA7_DDVM                (1<<15)
-#define ACTLR_CA7_L1RADIS             (1<<12)
-#define ACTLR_CA7_L2RADIS             (1<<11)
-#define ACTLR_CA7_DODMBS              (1<<10)
-#define ACTLR_CA7_SMP                 (1<<6)
-
-#endif /* __ASM_ARM_PROCESSOR_CA7_H */
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * indent-tabs-mode: nil
- * End:
- */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 20:22:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 20:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDJqG-00074Q-Gf; Tue, 11 Feb 2014 20:22:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WDJqE-00074L-7Q
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 20:22:03 +0000
Received: from [85.158.143.35:32698] by server-1.bemta-4.messagelabs.com id
	B5/32-31661-9668AF25; Tue, 11 Feb 2014 20:22:01 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392150120!4914356!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25396 invoked from network); 11 Feb 2014 20:22:01 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 11 Feb 2014 20:22:01 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:57754 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WDJpv-0003Gn-9m; Tue, 11 Feb 2014 21:21:43 +0100
Date: Tue, 11 Feb 2014 21:21:57 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1009282985.20140211212157@eikelenboom.it>
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <52FA6A44.6050003@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
	<52FA68AE.6070608@citrix.com> <52FA6A44.6050003@citrix.com>
MIME-Version: 1.0
Cc: Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xensource.com,
	linux-kernel@vger.kernel.org, mrushton@amazon.com,
	msw@amazon.com, boris.ostrovsky@oracle.com,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
	to register non-static key. the code is fine but needs
	lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Tuesday, February 11, 2014, 7:21:56 PM, you wrote:

> On 11/02/14 18:15, Roger Pau Monné wrote:
>> On 11/02/14 18:52, David Vrabel wrote:
>>>
>> That would mean that unmap_purged_grants would no longer be static and
>> I should also add a prototype for it in blkback/common.h, which is kind
>> of ugly IMHO.

> But less ugly than initializing work with a NULL function, IMO.

>> commit 980e72e45454b64ccb7f23b6794a769384e51038
>> Author: Roger Pau Monne <roger.pau@citrix.com>
>> Date:   Tue Feb 11 19:04:06 2014 +0100
>>
>>     xen-blkback: init persistent_purge_work work_struct
>>
>>     Initialize persistent_purge_work work_struct on xen_blkif_alloc (and
>>     remove the previous initialization done in purge_persistent_gnt). This
>>     prevents flush_work from complaining even if purge_persistent_gnt has
>>     not been used.
>>
>>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

> Reviewed-by: David Vrabel <david.vrabel@citrix.com>

And a Tested-by: Sander Eikelenboom <linux@eikelenboom.it>

Thanks !

> Thanks.

> David



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 21:28:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 21:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDKsC-0001c3-Le; Tue, 11 Feb 2014 21:28:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDKsB-0001bv-DU
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 21:28:07 +0000
Received: from [85.158.137.68:53143] by server-1.bemta-3.messagelabs.com id
	0B/5C-17293-6E59AF25; Tue, 11 Feb 2014 21:28:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392154085!1220449!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21765 invoked from network); 11 Feb 2014 21:28:05 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 21:28:05 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so3946397eaj.25
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 13:28:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=Mr+qhw9plTBlocTHdIEjtswBElQPKj6gU7FpMBRn2VY=;
	b=DHtroKfXcEymmMQvOV8+2LnXQRF3xcFx+STUNyM2uPV42f3GCaxS7zxHNnFr2QpX73
	sFDWy1gAIpbUpHG4e6jvcBek/o+cf25X0spn2kUBJpGOAI3rNkF6bmaDp2ltDHMWG241
	fV3h/OiCuPeBH8mjMzhPlGOwrW/PtTH+H373AQ5K5DtqJdAj+m5PJOhuLw47HQG4LHKf
	V8A0+tPJGMVpYfiCMsQD7pm0tXlGDCGXawLXLnDz2bYgMvGz7cFYdrAOIQTdZOex+W+1
	02paLRLeXyHoLRdWpbXowoobmZuNB4Cy/jNkZdCv7dhFjBewm3BZqw4Cv7AFGM6+AQ7j
	1mnA==
X-Gm-Message-State: ALoCoQnH24aOlEh5LdAI69fp4F9rsZ1mo6+ICiDwlZfgH0ACE6OfDAYapW4DnV+rQd0TWwgksF8C
X-Received: by 10.15.99.201 with SMTP id bl49mr14207930eeb.53.1392154085497;
	Tue, 11 Feb 2014 13:28:05 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id m9sm72094016eeh.3.2014.02.11.13.28.03
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 13:28:04 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 11 Feb 2014 21:27:30 +0000
Message-Id: <1392154050-21649-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH for-4.5] xen/serial: Don't leak memory mapping
	if the serial initialization has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/drivers/char/exynos4210-uart.c |    1 +
 xen/drivers/char/omap-uart.c       |    1 +
 xen/drivers/char/pl011.c           |    1 +
 xen/include/asm-arm/mm.h           |    1 +
 4 files changed, 4 insertions(+)

diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 0619575..9a2b8b9 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -344,6 +344,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     if ( res )
     {
         early_printk("exynos4210: Unable to retrieve the IRQ\n");
+        iounmap(uart->regs);
         return res;
     }
 
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index c1580ef..bfc39b4 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -337,6 +337,7 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     if ( res )
     {
         early_printk("omap-uart: Unable to retrieve the IRQ\n");
+        iounmap(uart->regs);
         return res;
     }
 
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index fd82511..f7be578 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -260,6 +260,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     if ( res )
     {
         early_printk("pl011: Unable to retrieve the IRQ\n");
+        iounmap(uart->regs);
         return res;
     }
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b8d4e7d..4211c0b 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -4,6 +4,7 @@
 #include <xen/config.h>
 #include <xen/kernel.h>
 #include <asm/page.h>
+#include <xen/vmap.h>
 #include <public/xen.h>
 
 /* Align Xen to a 2 MiB boundary. */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 21:32:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 21:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDKwA-0002Bj-QI; Tue, 11 Feb 2014 21:32:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDKw9-0002Bd-As
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 21:32:13 +0000
Received: from [193.109.254.147:39490] by server-9.bemta-14.messagelabs.com id
	8E/D8-24895-CD69AF25; Tue, 11 Feb 2014 21:32:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392154330!3664565!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23351 invoked from network); 11 Feb 2014 21:32:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 21:32:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,827,1384300800"; d="scan'208";a="101744405"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 21:32:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 16:32:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDKw4-0002Zm-Iu;
	Tue, 11 Feb 2014 21:32:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDKw4-0008U4-FO;
	Tue, 11 Feb 2014 21:32:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24846-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 21:32:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24846: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24846 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24846/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep           fail REGR. vs. 24839

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf
baseline version:
 xen                  06bbcaf48d09c18a41c482866941ddd5d2846b44

------------------------------------------------------------
People who touched revisions under test:
From xen-devel-bounces@lists.xen.org Tue Feb 11 21:32:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 21:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDKwA-0002Bj-QI; Tue, 11 Feb 2014 21:32:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDKw9-0002Bd-As
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 21:32:13 +0000
Received: from [193.109.254.147:39490] by server-9.bemta-14.messagelabs.com id
	8E/D8-24895-CD69AF25; Tue, 11 Feb 2014 21:32:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392154330!3664565!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23351 invoked from network); 11 Feb 2014 21:32:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 21:32:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,827,1384300800"; d="scan'208";a="101744405"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 11 Feb 2014 21:32:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 16:32:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDKw4-0002Zm-Iu;
	Tue, 11 Feb 2014 21:32:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDKw4-0008U4-FO;
	Tue, 11 Feb 2014 21:32:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24846-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 11 Feb 2014 21:32:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24846: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24846 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24846/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    3 host-build-prep           fail REGR. vs. 24839

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a

version targeted for testing:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf
baseline version:
 xen                  06bbcaf48d09c18a41c482866941ddd5d2846b44

------------------------------------------------------------
People who touched revisions under test:
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  David Scott <dave.scott@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3b2f92c1f8567461562fac9922fbad223dc8c6cf
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 10 17:34:46 2014 +0000

    xen/arm: Correctly boot with an initrd and no linux command line
    
    While the DOM0 device tree is being built, the initrd properties are
    only added if there is a linux command line. This results in a panic
    later:
    
    (XEN) *** LOADING DOMAIN 0 ***
    (XEN) Populate P2M 0x20000000->0x40000000 (1:1 mapping for dom0)
    (XEN) Loading kernel from boot module 2
    (XEN) Loading zImage from 0000000001000000 to 0000000027c00000-0000000027eafb48
    (XEN) Loading dom0 initrd from 0000000002000000 to 0x0000000028200000-0x0000000028c00000
    (XEN)
    (XEN) ****************************************
    (XEN) Panic on CPU 0:
    (XEN) Cannot fix up "linux,initrd-start" property
    (XEN) ****************************************
    (XEN)
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
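
    [Editorial note: a hedged sketch of the bug, not Xen's actual device-tree
    code; make_chosen() and its types are invented for illustration. The buggy
    variant only emitted the linux,initrd-* placeholders when a command line
    was present, so the later "linux,initrd-start" fixup found nothing to
    patch and panicked.]

```c
/* Toy model of the /chosen node construction described in the commit.
 * All names here are illustrative, not Xen's real API. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct chosen_node {
    bool has_bootargs;      /* "bootargs" property emitted       */
    bool has_initrd_props;  /* "linux,initrd-start/-end" emitted */
};

static void make_chosen(struct chosen_node *n, const char *cmdline,
                        bool have_initrd, bool buggy)
{
    n->has_bootargs = (cmdline != NULL);
    if (buggy)
        /* old behaviour: initrd props nested under the cmdline check */
        n->has_initrd_props = (cmdline != NULL) && have_initrd;
    else
        /* fixed behaviour: initrd props depend only on the initrd */
        n->has_initrd_props = have_initrd;
}
```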

commit 3413b6a923839446c51a7612b19c5dc33b8aa6bc
Author: Don Slutz <dslutz@verizon.com>
Date:   Fri Feb 7 16:51:51 2014 -0500

    xenlight_stubs.c: Allow it to build with ocaml 3.09.3
    
    This code was copied from:
    
    http://docs.camlcity.org/docs/godisrc/oasis-ocaml-fd-1.1.1.tar.gz/ocaml-fd-1.1.1/lib/fd_stubs.c
    
    Signed-off-by: Don Slutz <dslutz@verizon.com>
    Acked-by: David Scott <dave.scott@citrix.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 117501c67ac00ad7850eedf25f870fe36579f71c
Author: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
Date:   Fri Feb 7 18:27:16 2014 +0530

    xen: arm: arm64: Fix memory clobbering issues during VFP save/restore.
    
    This patch addresses the memory clobbering issue mentioned by Julien
    Grall in my earlier patch -
    Commit Id: 712eb2e04da2cbcd9908f74ebd47c6df60d6d12f
    
    Discussion related to this fix -
    http://www.gossamer-threads.com/lists/xen/devel/316247
    
    Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
    Signed-off-by: Anup Patel <anup.patel@linaro.org>
    Acked-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit ebe867052e0f782139147015c4e91b37aa5e68f1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 11 11:14:10 2014 +0100

    flask: check permissions first thing in flask_security_set_bool()
    
    Nothing else should be done if the caller isn't permitted to set
    boolean values.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
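
    [Editorial note: an illustrative userspace model of "check permissions
    first thing" — the function names below are made up, not the real FLASK
    hooks. The point of the fix is that on a permission failure no other
    state is read or written.]

```c
/* Model: the permission check is the very first action, so an
 * unauthorized caller produces no side effects at all. */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static bool permitted;     /* simulated security-server decision */
static int current_value;
static int work_done;      /* counts side effects performed */

static int setbool_permitted(void) { return permitted ? 0 : -EPERM; }

static int set_bool(int value)
{
    int rv = setbool_permitted();  /* first thing, as in the fix */
    if (rv)
        return rv;                 /* bail before touching anything */
    work_done++;
    current_value = value;
    return 0;
}
```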

commit 31f3620be0e3158c205a3669135f9c4bfa40b1c7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 11 11:13:22 2014 +0100

    flask: fix error propagation from flask_security_set_bool()
    
    The function should return an error when flask_security_make_bools()
    fails as well as when the input ID is out of range.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
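
    [Editorial note: a rough model of the error-propagation fix; the helper
    names are invented. Both failure modes — the helper failing and an
    out-of-range ID — must surface as errors rather than being swallowed.]

```c
/* Model: propagate the helper's error code, and treat an out-of-range
 * boolean ID as an error in its own right. */
#include <assert.h>
#include <errno.h>

#define NR_BOOLS 4
static int bools[NR_BOOLS];
static int make_bools_rc;          /* simulated helper result */

static int make_bools(void) { return make_bools_rc; }

static int set_bool(unsigned int id, int value)
{
    int rv = make_bools();
    if (rv)
        return rv;                 /* propagate helper failure */
    if (id >= NR_BOOLS)
        return -ENOENT;            /* reject out-of-range IDs */
    bools[id] = value;
    return 0;
}
```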

commit 57c9f2caf05de41913b3e4eb48c0c3ad6c18dd3f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Feb 11 11:11:48 2014 +0100

    flask: fix memory leaks
    
    Also, in security_preserve_bools(), prevent a double free when
    security_get_bools() fails.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
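
    [Editorial note: a sketch of the double-free guard with invented function
    names, not the actual FLASK code. After freeing on the error path the
    pointer is cleared, so any later cleanup free() is a harmless free(NULL).]

```c
/* Model: a helper that can fail after allocating; the caller frees and
 * NULLs the pointer on failure so subsequent cleanup cannot free twice. */
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

static int get_bools_rc;           /* simulated failure switch */

static int get_bools(char **names)
{
    *names = malloc(16);           /* allocation happens first... */
    return get_bools_rc;           /* ...then the call may still fail */
}

static int preserve_bools(char **names)
{
    int rv = get_bools(names);
    if (rv) {
        free(*names);
        *names = NULL;             /* key line: no second free later */
    }
    return rv;
}
```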
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 11 21:45:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 21:45:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDL8M-0002mr-D5; Tue, 11 Feb 2014 21:44:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <axboe@kernel.dk>) id 1WDL8K-0002mm-SJ
	for xen-devel@lists.xensource.com; Tue, 11 Feb 2014 21:44:49 +0000
Received: from [85.158.139.211:24066] by server-9.bemta-5.messagelabs.com id
	52/AC-11237-0D99AF25; Tue, 11 Feb 2014 21:44:48 +0000
X-Env-Sender: axboe@kernel.dk
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392155085!3275156!1
X-Originating-IP: [209.85.220.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31073 invoked from network); 11 Feb 2014 21:44:47 -0000
Received: from mail-pa0-f43.google.com (HELO mail-pa0-f43.google.com)
	(209.85.220.43)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 21:44:47 -0000
Received: by mail-pa0-f43.google.com with SMTP id rd3so8240789pab.30
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Feb 2014 13:44:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=RSQPzsQ+1/GrCIVwyu+mzkpHVu6DRgGzFTwKTPPPS5c=;
	b=UQWdmscXriJlukG0bIIq2Z89QLXUsES5B4LcSFnixIZDcYtn263lJCJUKDQsNrwslO
	iVv/jMMUEtm+2LR7eePSwHbi9HyOGRNAE+InR/HLKJZf91Sxpgfx10949kQzt8v12POA
	NbOx1ZEqP11yZX2rUQ04KnraL7J3folYmSfvLjMDcdhC2grbz23IDJZDMoPMw20sGFZE
	kxLFdmqlLmo3GlH5IuKXTCLBNNEcAAgE2MSlAbt224NooyMd4Ti88f7lMack1eix/XYx
	E0HHcpDryhzxwnIEGfb371xjstHQu6xJknmbB6uHLh/BdMlz4hW80CIpYYOkz61Ih8y0
	fYrQ==
X-Gm-Message-State: ALoCoQklU5hNAsExC+iEJopvO7dlAUHuaZIu4HlrLg8fyuG152zkvkcXN3sugIYIJY3cEVVNgzLj
X-Received: by 10.66.121.234 with SMTP id ln10mr34936334pab.20.1392155084971; 
	Tue, 11 Feb 2014 13:44:44 -0800 (PST)
Received: from [192.168.3.11] (66.29.187.51.static.utbb.net. [66.29.187.51])
	by mx.google.com with ESMTPSA id
	oa3sm56345130pbb.15.2014.02.11.13.44.41 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 13:44:43 -0800 (PST)
Message-ID: <52FA99CA.8070505@kernel.dk>
Date: Tue, 11 Feb 2014 14:44:42 -0700
From: Jens Axboe <axboe@kernel.dk>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Sander Eikelenboom <linux@eikelenboom.it>, 
	David Vrabel <david.vrabel@citrix.com>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
	<52FA68AE.6070608@citrix.com> <52FA6A44.6050003@citrix.com>
	<1009282985.20140211212157@eikelenboom.it>
In-Reply-To: <1009282985.20140211212157@eikelenboom.it>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	mrushton@amazon.com, msw@amazon.com, boris.ostrovsky@oracle.com,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014-02-11 13:21, Sander Eikelenboom wrote:
>
> Tuesday, February 11, 2014, 7:21:56 PM, you wrote:
>
>> On 11/02/14 18:15, Roger Pau Monné wrote:
>>> On 11/02/14 18:52, David Vrabel wrote:
>>>>
>>> That would mean that unmap_purged_grants would no longer be static and
>>> I should also add a prototype for it in blkback/common.h, which is kind
>>> of ugly IMHO.
>
>> But less ugly than initializing work with a NULL function, IMO.
>
>>> commit 980e72e45454b64ccb7f23b6794a769384e51038
>>> Author: Roger Pau Monne <roger.pau@citrix.com>
>>> Date:   Tue Feb 11 19:04:06 2014 +0100
>>>
>>>      xen-blkback: init persistent_purge_work work_struct
>>>
>>>      Initialize persistent_purge_work work_struct on xen_blkif_alloc (and
>>>      remove the previous initialization done in purge_persistent_gnt). This
>>>      prevents flush_work from complaining even if purge_persistent_gnt has
>>>      not been used.
>>>
>>>      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>
>> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
>
> And a Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
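
[Editorial note: a userspace toy model of the quoted fix — one-time
work_struct setup at allocation time, so a later flush is always legal even
if the work was never queued. init_work()/flush_work() below are stand-ins
for the kernel's INIT_WORK()/flush_work(), not the real API.]

```c
/* Model: initialize the work item once, at allocation, instead of
 * lazily inside a purge path that may never run. */
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

struct work {
    void (*fn)(struct work *);
    bool initialized;
};

struct blkif {
    struct work purge_work;        /* models persistent_purge_work */
};

static void purge_fn(struct work *w) { (void)w; }

static void init_work(struct work *w, void (*fn)(struct work *))
{
    w->fn = fn;
    w->initialized = true;
}

/* Stands in for the lockdep complaint: flushing an uninitialized work
 * item is an error. */
static int flush_work(struct work *w)
{
    return w->initialized ? 0 : -EINVAL;
}

static void blkif_alloc(struct blkif *b)
{
    init_work(&b->purge_work, purge_fn);   /* the fix: init here, once */
}
```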

Konrad, I was going to ship my current tree today, but looks like we
need this too.

-- 
Jens Axboe

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 11 21:53:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Feb 2014 21:53:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDLH4-0003Lp-QK; Tue, 11 Feb 2014 21:53:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WDLH3-0003Lk-8g
	for xen-devel@lists.xenproject.org; Tue, 11 Feb 2014 21:53:49 +0000
Received: from [85.158.139.211:57701] by server-2.bemta-5.messagelabs.com id
	C3/2B-23037-CEB9AF25; Tue, 11 Feb 2014 21:53:48 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392155626!3272961!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27820 invoked from network); 11 Feb 2014 21:53:47 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	11 Feb 2014 21:53:47 -0000
Received: by mail-la0-f43.google.com with SMTP id pv20so6538733lab.30
	for <xen-devel@lists.xenproject.org>;
	Tue, 11 Feb 2014 13:53:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=VTjuOhR7pb9awz9hnrJNn/rj6JR9h2fY1jHrInk3dJM=;
	b=cw47l//CaG7PtZNqHRmiGnR/cZNcbpXOe255zUEWZ5joPyqJBxZDAmYEgNeMQWehua
	6W/dM4lMk8vhuILvq0Dpl2NJOctHJ/LSLiTMgeBbOU9cL3whr4rQlXbPEG22zMFl2+pP
	vFB781zU2M2MAiVL+hQSg7O7+ihvNOI6Dx3bmosfibH+5JqN7n9lay2rj+jldTPc0JOn
	MIBBMZo3wpo2vNW9x2IjhUrA8ywWbsA/9xJIKacjUEd8yAeBHjAHQB9hhzz1yxtZVqwn
	Io399comcjNYK1uktPyp4v2pvBtVn/HPnNK+FGMiyPoUglUhQP3HIIUYmF5m6A4pdrGC
	98mg==
X-Received: by 10.112.211.233 with SMTP id nf9mr3086142lbc.50.1392155626268;
	Tue, 11 Feb 2014 13:53:46 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 11 Feb 2014 13:53:26 -0800 (PST)
In-Reply-To: <1392108205.22033.16.camel@dagon.hellion.org.uk>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 11 Feb 2014 13:53:26 -0800
X-Google-Sender-Auth: M7QMbo99YcECCHhx442fAMZK2kA
Message-ID: <CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
	random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Cc'ing kvm folks as they may have a shared interest on the shared
physical case with the bridge (non NAT).

On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> Although the xen-netback interfaces do not participate in the
>> link as a typical Ethernet device would, interfaces for them are
>> still required under the current architecture. IPv6 addresses do
>> not need to be created or assigned on the xen-netback interfaces,
>> however, even if the frontend devices do need them, so clear the
>> multicast flag to ensure the net core does not initiate IPv6
>> Stateless Address Autoconfiguration.
>
> How does disabling SAA flow from the absence of multicast?

See patch 1 in this series [0]; I explain the issue I see with this
in the cover letter [1]. In summary, the IPv6 RFCs make it clear
that you need multicast for Stateless Address Autoconfiguration
(SLAAC is the preferred acronym) and DAD; however, the net core has
not made this a requirement, hence the patch. The caveat I address
in the cover letter needs to be seriously considered, though.

[0] http://marc.info/?l=linux-netdev&m=139207142110535&w=2
[1] http://marc.info/?l=linux-netdev&m=139207142110536&w=2

> Surely these should be controlled logically independently even if there is some
> notional linkage.

When a node joins a network it queries the network by sending a
router solicitation multicast request for its configuration
parameters; the router can respond with router advertisements that
disable SLAAC.

Apart from that we have no other means to disable SLAAC neatly, and
as I gather that would be counter to the IPv6 RFCs anyway, which
makes sense.

> Can SAA not be disabled directly?

Nope. The ipv6 core assumes all devices want ipv6, and this is done
upon netdev registration. As I noted in my patch 1 description,
although ipv6 supports a module parameter to disable
autoconfiguration, RFC 4862 Section 5.4 makes it clear that DAD
*MUST* be performed on all unicast addresses prior to assigning them
to an interface, regardless of whether they are obtained through
SLAAC, DHCPv6, or manual configuration.
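
For what it's worth, Linux does expose per-interface sysctl knobs
that influence autoconfiguration (the interface name below is made
up for illustration); they stop addresses being formed from router
advertisements, but, per the DAD requirement discussed above, they
do not stop DAD itself:

```shell
# Stop processing router advertisements on this interface
sysctl -w net.ipv6.conf.eth0.accept_ra=0
# Do not autoconfigure addresses from advertised prefixes
sysctl -w net.ipv6.conf.eth0.autoconf=0
```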

Upon NETDEV_REGISTER the ipv6 core has two struct ipv6_devconf sets
of configurations which can get applied to devices; neither of these
disables autoconfiguration, and it is not a knob designed to let
devices opt out freely. Technically the IPv6 RFCs simply imply that
you should not use IPv6 if you cannot support multicast. Given that
the noautoconf module parameter exists, though, I think my patch can
be considered upstream after addressing the caveat I noted on
non-NBMA links (and I now think I know how to address that: we can
just make the MULTICAST flag more meaningful for the dev->type).

A nasty hack to disable IPv6 for testing purposes is to set the MTU
to something lower than IPV6_MIN_MTU (1280); in fact IPv4 can also
be disabled by setting it to 67, which disables both IPv6 and IPv4.
That's obviously just that, a nasty hack, but it is useful for
quickly testing with ipv6 disabled completely, or with both
disabled.
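
As a concrete sketch of that hack (the interface name vif1.0 is made
up for illustration, and the commands need root):

```shell
# Below IPV6_MIN_MTU (1280): the ipv6 core gives up on the device
ip link set dev vif1.0 mtu 1279

# Below the IPv4 minimum as well: disables both IPv6 and IPv4
ip link set dev vif1.0 mtu 67
```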

>>  Clearing the multicast
>> flag is required given that the net_device is using the
>> ether_setup() helper.
>>
>> There's also no good reason why the special MAC address of
>> FE:FF:FF:FF:FF:FF is being used other than to avoid issues
>> with STP,
>
> With your change there is a random probability on reboot that the bridge
> will end up with a randomly generated MAC address instead of a static
> MAC address (usually that of the physical NIC on the bridge), since the
> bridge tends to inherit the lowest MAC of any port.

I had not considered the bridge taking the lowest MAC address of any
port added! So that was one of the tricks with the fixed MAC address
of FE:FF:FF:FF:FF:FF: ensuring the bridge would never pick the
netback interface's address when it was later added as a port.
Another collateral issue if this is *not* considered: if a
xen-netback interface has a MAC address lower than the physical
interface's and you shut down that guest, thereby removing it from
the bridge, the bridge MAC address changes back to the physical
interface's once again. This causes a hiccup in accessing the device
while ARP settles things. A massive shutdown / reboot of guests
whose MAC addresses sort below the physical interface's means a
series of MAC address changes on the bridge.
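
To illustrate the bridge behaviour (the addresses and the br0 name
are made up): a Linux bridge with no explicitly configured address
adopts the numerically lowest MAC among its ports, which is why
fe:ff:ff:ff:ff:ff, sorting above any normal NIC address, never wins:

```shell
# The lowest MAC among the ports is what the bridge adopts;
# fe:ff:ff:ff:ff:ff sorts above any normal NIC address, so it never wins.
printf '%s\n' fe:ff:ff:ff:ff:ff 00:16:3e:12:34:56 | sort | head -n1
# -> 00:16:3e:12:34:56

# Pinning the bridge address avoids it flapping as ports come and go:
#   ip link set dev br0 address 00:16:3e:12:34:56
```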

FWIW kvm seems to completely randomize the MAC address of the
backend TAP interfaces (while the frontend virtio driver fully
randomizes its own). Note that in the NAT use case, where only the
TAP interfaces get added, the above is not an issue, although I
suspect that if the shared connection is used this could be a
problem; it will depend on what tools create the TAP interface and
how.

I suspect we may have a shared concern here and I wonder if kvm hit
the snags described above on the shared physical cases. Curious if kvm
folks have seen these issues?

> Since IP configuration is done on the bridge this will break DHCP,
> whether it is using static or dynamic mappings from MAC to IP address,
> and the host will randomly change IP address on reboot.

It's beyond that; as I noted, there can also be issues upon shutdown.

> So Nack for that reason.

Makes sense. Will think about this a bit more.

>>  since using this can create an issue if a user
>> decides to enable multicast on the backend interfaces
>
> Please explain what this issue is.

I explained this in the cover letter but should have elaborated more
here. The *known* and *reported* issue is that xen-backend
interfaces can end up performing SLAAC, and you'd obviously end up
in some situations where the MAC address and IP address clash,
despite IPv6's design of randomizing the timing of neighbor
solicitations and DAD. Ultimately a series of services can end up
filling your log messages with tons of warnings.

Another issue, not reported but I suspect critical, and one that can
bite both xen and kvm in the ass, is described in Appendix A of RFC
4862 [2], which considers the problem of receiving duplicates of
packets on the same link with the same link-layer address. I think
to address that we can also take dev->type into account in all the
different cases.

What drove these patches is trying to find a proper upstream
approach to Olaf's old xen ipv6-no-autoconf patch [3]. Although not
stated in the patch, I have seen some old internal reports from 2006
where, even with the static FE:FF:FF:FF:FF:FF MAC address, the
xen-netback interfaces kicked off IPv6 SLAAC and DAD. IPv6 SLAAC
should trigger once the link goes up.

My preference, rather than simply trying to disable ipv6, is
actually to see how xen-netback interfaces (and the kvm TAP
topology) can be simplified further. As I see it there is a ton of
code which could end up being exercised on these xen-netback
interfaces (and TAP for kvm) which is simply not needed for the use
case of just sending data back and forth between host and guest:
ipv6 is not needed at all, and I tried to test removing ipv4, but
ran into issues.

[2] http://tools.ietf.org/html/rfc4862#appendix-A
[3] https://gitorious.org/opensuse/kernel-source/source/8e16582178a29b03e850468004a47e7be5ed3005:patches.xen/ipv6-no-autoconf

> Also how can a user enable multicast on the b/e?

ip link set dev <devname> multicast on
ip link set dev <devname> multicast off

> AFAIK only Solaris ever
> implemented the m/c bits of the Xen PV network protocol (not that I
> wouldn't welcome attempts to add it to other platforms)

Do you mean kernel configuration multicast? Or networking?

 Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 00:54:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 00:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDO5E-0002AU-7g; Wed, 12 Feb 2014 00:53:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WDO5C-0002AP-7n
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 00:53:46 +0000
Received: from [85.158.137.68:34965] by server-2.bemta-3.messagelabs.com id
	6E/27-06531-916CAF25; Wed, 12 Feb 2014 00:53:45 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392166423!1239606!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24725 invoked from network); 12 Feb 2014 00:53:44 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-10.tower-31.messagelabs.com with SMTP;
	12 Feb 2014 00:53:44 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 11 Feb 2014 16:53:43 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,829,1384329600"; d="scan'208";a="481894608"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 11 Feb 2014 16:53:42 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 11 Feb 2014 16:53:42 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Wed, 12 Feb 2014 08:53:35 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <george.dunlap@eu.citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEA==
Date: Wed, 12 Feb 2014 00:53:34 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
In-Reply-To: <52FA480D.9040707@eu.citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote on 2014-02-11:
> On 02/11/2014 12:57 PM, Jan Beulich wrote:
>>>>> On 11.02.14 at 12:55, Tim Deegan <tim@xen.org> wrote:
>>> At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
>>>> What I'm missing here is what you think a proper solution is.
>>> 
>>> A _proper_ solution would be for the IOMMU h/w to allow restartable
>>> faults, so that we can do all the usual fault-driven virtual memory
>>> operations with DMA. :)  In the meantime...
>> 
>> Or maintaining the A/D bits for IOMMU side accesses too.
>> 
>>>>   It seems we have:
>>>> A. Share EPT/IOMMU tables, only do log-dirty tracking on the
>>>> buffer being tracked, and hope the guest doesn't DMA into video
>>>> ram; DMA causes IOMMU fault. (This really shouldn't crash the host
>>>> under normal circumstances; if it does it's a hardware bug.)
>>> 
>>> Note "hope" and "shouldn't" there. :)
>>> 
>>>> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA
>>>> into video ram.  DMA causes missed update to dirty bitmap, which
>>>> will hopefully just cause screen corruption.
>>> 
>>> Yep.  At a cost of about 0.2% in space and some extra bookkeeping
>>> (for VMs that actually have devices passed through to them).
>>> The extra bookkeeping could be expensive in some cases, but
>>> basically all of those cases are already incompatible with IOMMU.
>>> 
>>>> C. Do buffer scanning rather than dirty vram tracking (SLOW) D.
>>>> Don't allow both a virtual video card and pass-through
>>> 
>>> E. Share EPT and IOMMU tables until someone turns on log-dirty mode
>>> and then split them out.  That one
>> 
>> Wouldn't that be problematic in terms of memory being available,
>> namely when using ballooning in Dom0?
>> 
>>>> Given that most operating systems will probably *not* DMA into
>>>> video ram, and that an IOMMU fault isn't *supposed* to be able to
>>>> crash the host, 'A' seems like the most reasonable option to me.
>>> 
>>> Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
>>> seems to have most support from other people.  On that basis this
>>> patch can have my Ack.
>> 
>> I too would consider B better than A.
> 
> I think I got a bit distracted with the "A isn't really so bad" thing.
> Actually, if the overhead of not sharing tables isn't very high, then
> B isn't such a bad option.  In fact, B is what I expected Yang to
> submit when he originally described the problem.

Actually, the first solution that came to my mind was B. Then I
realized that even if we chose B, we still could not track memory
updates from DMA (even with A/D bits, it is still a problem). Also,
considering the current usage of log dirty in Xen (only vram
tracking has the problem), I thought A was better: the hypervisor
only needs to track vram changes, and if a malicious guest tries to
DMA into the vram range, it only crashes itself (which should be
reasonable).

> 
> I was going to say, from a release perspective, B is probably the
> safest option for now.  But on the other hand, if we've been testing
> sharing all this time, maybe switching back over to non-sharing whole-hog has the higher risk?

Another problem with B is that the current VT-d large-page support
relies on sharing the EPT and VT-d page tables. This means that if
we choose B, we would then need to re-enable VT-d large pages. That
would be a huge performance impact for Xen 4.4 when using VT-d.

> 
> Anyway, both are at least probably equal risk-wise.  How easy is it to
> implement?
> 
>   -George


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>> faults, so that we can do all the usual fault-driven virtual memory
>>> operations with DMA. :)  In the meantime...
>> 
>> Or maintaining the A/D bits for IOMMU side accesses too.
>> 
>>>>   It seems we have:
>>>> A. Share EPT/IOMMU tables, only do log-dirty tracking on the
>>>> buffer being tracked, and hope the guest doesn't DMA into video
>>>> ram; DMA causes IOMMU fault. (This really shouldn't crash the host
>>>> under normal circumstances; if it does it's a hardware bug.)
>>> 
>>> Note "hope" and "shouldn't" there. :)
>>> 
>>>> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA
>>>> into video ram.  DMA causes missed update to dirty bitmap, which
>>>> will hopefully just cause screen corruption.
>>> 
>>> Yep.  At a cost of about 0.2% in space and some extra bookkeeping
>>> (for VMs that actually have devices passed through to them).
>>> The extra bookkeeping could be expensive in some cases, but
>>> basically all of those cases are already incompatible with IOMMU.
>>> 
>>>> C. Do buffer scanning rather than dirty vram tracking (SLOW) D.
>>>> Don't allow both a virtual video card and pass-through
>>> 
>>> E. Share EPT and IOMMU tables until someone turns on log-dirty mode
>>> and then split them out.  That one
>> 
>> Wouldn't that be problematic in terms of memory being available,
>> namely when using ballooning in Dom0?
>> 
>>>> Given that most operating systems will probably *not* DMA into
>>>> video ram, and that an IOMMU fault isn't *supposed* to be able to
>>>> crash the host, 'A' seems like the most reasonable option to me.
>>> 
>>> Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
>>> seems to have most support from other people.  On that basis this
>>> patch can have my Ack.
>> 
>> I too would consider B better than A.
> 
> I think I got a bit distracted with the "A isn't really so bad" thing.
> Actually, if the overhead of not sharing tables isn't very high, then
> B isn't such a bad option.  In fact, B is what I expected Yang to
> submit when he originally described the problem.

Actually, the first solution that came to my mind was B. Then I realized that even if we chose B, we still could not track memory updates done by DMA (even with A/D bits, it remains a problem). Also, considering the current use case of log-dirty in Xen (only vram tracking has the problem), I thought A was better: the hypervisor only needs to track vram changes. If a malicious guest tries to DMA into the vram range, it only crashes itself (which seems reasonable).

> 
> I was going to say, from a release perspective, B is probably the
> safest option for now.  But on the other hand, if we've been testing
> sharing all this time, maybe switching back over to non-sharing whole-hog has the higher risk?

Another problem with B is that the current VT-d large-page support relies on sharing the EPT and VT-d page tables. This means that if we choose B, VT-d large pages can no longer be used unless that support is reworked for separate tables. That would be a significant performance impact for Xen 4.4 when using VT-d.

> 
> Anyway, both are at least probably equal risk-wise.  How easy is it to
> implement?
> 
>   -George


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 01:00:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 01:00:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDOBS-0002bF-1N; Wed, 12 Feb 2014 01:00:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nathan@gt.net>) id 1WDOBK-0002ZZ-U7
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 01:00:10 +0000
Received: from [193.109.254.147:52204] by server-9.bemta-14.messagelabs.com id
	3F/26-24895-697CAF25; Wed, 12 Feb 2014 01:00:06 +0000
X-Env-Sender: nathan@gt.net
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392166803!3670540!1
X-Originating-IP: [208.70.244.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5845 invoked from network); 12 Feb 2014 01:00:05 -0000
Received: from gossamer.nmsrv.com (HELO gossamer.nmsrv.com) (208.70.244.21)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 01:00:05 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=gt.net; h=message-id:date
	:from:mime-version:to:subject:content-type
	:content-transfer-encoding; s=mail; bh=cK9nU8LE4vR1wdSgfyZi5dgZz
	mE=; b=mz7VlU2rsngne81FYpVehj0aEi5AQV7uRoCvzpa6eQSqbYiyzfNrd94w8
	njErB7YotQDlBElPi771YO5UB+cSygn0BxPb/yDdLwYKMJd/s+SgGNTSCdOnM82W
	CSb+bHbvKj5Rvm0GdtkVzklwim6nX7hDBtyErDjijpqm+onkn0=
DomainKey-Signature: a=rsa-sha1; c=nofws; d=gt.net; h=message-id:date
	:from:mime-version:to:subject:content-type
	:content-transfer-encoding; q=dns; s=mail; b=iANTmTS9T8bE5dZi1hT
	+D3Ihmkqzp9RjAgUjcjA0auWGAB7iIaHxlU29NA0wfNY2qsDJuBySPf06c8UPlDb
	taOzVojzLD3URi1tS12Q13VOftSARQzXKZgH4tjj2DmpYHeAxI3PzTG9jDRu/g35
	2Y5qlKvnX0Y9mZ6Qpf2R9160=
Received: (qmail 16121 invoked from network); 12 Feb 2014 01:00:01 -0000
X-AntiVirus: Clean
Received: from int.nmsrv.com (HELO ?10.0.30.12?) (nathan@gt.net@204.187.14.194)
	by gossamer.nmsrv.com with ESMTPSA (AES128-SHA encrypted);
	12 Feb 2014 01:00:01 -0000
Message-ID: <52FAC778.4090803@gt.net>
Date: Tue, 11 Feb 2014 16:59:36 -0800
From: Nathan March <nathan@gt.net>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: xen-devel@lists.xenproject.org
Subject: [Xen-devel] Xen consulting work
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

We're looking for a Xen guru who can help us out with a couple of 
issues we're having (both with XenSource and XenServer), and I'm hoping 
someone on this list might fit that description.

If you're deeply familiar with how Xen handles storage internally and 
are interested, please drop me an email for more details.

Thanks!

- Nathan

-- 
Nathan March <nathan@gt.net>
Gossamer Threads Inc. http://www.gossamer-threads.com/
Tel: (604) 687-5804 Fax: (604) 687-5806


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 02:14:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 02:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDPKY-0000Jj-LR; Wed, 12 Feb 2014 02:13:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WDPKX-0000Jd-Mq
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 02:13:41 +0000
Received: from [85.158.139.211:4236] by server-1.bemta-5.messagelabs.com id
	BB/77-12859-4D8DAF25; Wed, 12 Feb 2014 02:13:40 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392171219!3291338!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6353 invoked from network); 12 Feb 2014 02:13:40 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-3.tower-206.messagelabs.com with SMTP;
	12 Feb 2014 02:13:40 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 11 Feb 2014 18:13:38 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,829,1384329600"; d="scan'208";a="453910031"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 11 Feb 2014 18:13:36 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 12 Feb 2014 10:08:55 +0800
Message-Id: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: Yang Zhang <yang.z.zhang@Intel.com>, chegger@amazon.de,
	eddie.dong@intel.com, xiantao.zhang@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 1/2] Nested VMX: update nested paging mode on
	vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

Since SVM and VMX use different mechanisms to emulate virtual vmentry
and virtual vmexit, it is hard to update the nested paging mode correctly in
common code, so it needs to be updated in each vendor's own code path.
SVM already updates the nested paging mode on vmexit; this patch adds the
same logic on the VMX side.

Previous discussion is here:
http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f6409d6..baf3040 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2541,6 +2541,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
     vcpu_nestedhvm(v).nv_vmswitch_in_progress = 0;
     if ( nestedhvm_vcpu_in_guestmode(v) )
     {
+        paging_update_nestedmode(v);
         if ( nvmx_n2_vmexit_handler(regs, exit_reason) )
             goto out;
     }
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 02:14:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 02:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDPKb-0000Jw-21; Wed, 12 Feb 2014 02:13:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WDPKY-0000Ji-W5
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 02:13:43 +0000
Received: from [85.158.139.211:63179] by server-13.bemta-5.messagelabs.com id
	DE/70-18801-6D8DAF25; Wed, 12 Feb 2014 02:13:42 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392171219!3291338!2
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6400 invoked from network); 12 Feb 2014 02:13:40 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-3.tower-206.messagelabs.com with SMTP;
	12 Feb 2014 02:13:40 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 11 Feb 2014 18:13:39 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,829,1384329600"; d="scan'208";a="453910040"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 11 Feb 2014 18:13:38 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 12 Feb 2014 10:08:56 +0800
Message-Id: <1392170936-31362-2-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
References: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
Cc: Yang Zhang <yang.z.zhang@Intel.com>, chegger@amazon.de,
	eddie.dong@intel.com, xiantao.zhang@intel.com, JBeulich@suse.com
Subject: [Xen-devel] [PATCH 2/2] Nested EPT: fixing issue of translate L2
	gva to L1 gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

There is no way to translate an L2 gva to an L1 gfn directly. Instead, we
translate the L2 gva to an L2 gfn first, then look up the virtual EPT to
obtain the L1 gfn.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
---
 xen/arch/x86/mm/p2m.c |   25 ++++++++++++++++++++-----
 1 files changed, 20 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 8f380ed..e92cfbe 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1605,22 +1605,37 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
         && paging_mode_hap(v->domain) 
         && nestedhvm_is_n2(v) )
     {
-        unsigned long gfn;
+        unsigned long gfn, l1gfn, exit_qual;
         struct p2m_domain *p2m;
         const struct paging_mode *mode;
-        uint32_t pfec_21 = *pfec;
         uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
+        unsigned int page_order, exit_reason;
+        int rc;
+        uint8_t p2m_acc;
+        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
 
         /* translate l2 guest va into l2 guest gfn */
         p2m = p2m_get_nestedp2m(v, np2m_base);
         mode = paging_get_nestedmode(v);
         gfn = mode->gva_to_gfn(v, p2m, va, pfec);
 
+        if ( gfn == INVALID_GFN )
+            return gfn;
+
         /* translate l2 guest gfn into l1 guest gfn */
-        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
-                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
-    }
+        rc = nept_translate_l2ga(v, gfn << 12, &page_order, 4, &l1gfn, &p2m_acc,
+                                &exit_qual, &exit_reason);
+        if ( rc == EPT_TRANSLATE_VIOLATION || rc == EPT_TRANSLATE_MISCONFIG )
+        {
+            nvmx->ept.exit_reason = exit_reason;
+            nvmx->ept.exit_qual = exit_qual;
+            vcpu_nestedhvm(current).nv_vmexit_pending = 1;
+        }
+        if ( rc == EPT_TRANSLATE_RETRY )
+            *pfec = PFEC_page_paged;
 
+        return l1gfn;
+    }
     return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
 }
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 02:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 02:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDPsw-0001Xf-MH; Wed, 12 Feb 2014 02:49:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WDPsr-0001Xa-KA
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 02:49:13 +0000
Received: from [85.158.137.68:13832] by server-15.bemta-3.messagelabs.com id
	35/84-19263-421EAF25; Wed, 12 Feb 2014 02:49:08 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392173347!1242541!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31465 invoked from network); 12 Feb 2014 02:49:07 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 02:49:07 -0000
Received: by mail-wi0-f169.google.com with SMTP id e4so820627wiv.2
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Feb 2014 18:49:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=3qVaetPbLMxoF0WD6UnSA9bSwFKhXPytGWvRWfpxG8o=;
	b=AH8xOhkWkMCew0OdVqC7cs3EncA9JCJWA6F4XOOoWdL9AQCRVmDouXDgwlpIozySpS
	B4vVe7IV6odKrQVYqRkTOiv2pkVzCC44FxKO502XxNPP6PErKNzmopV3DZin20zyq2oX
	KKx0zHOVGUvm+JOBQ3y/K2eXloH8KgTkl1KuQW1+0LUeR85/ECiqDcvfeyVrDB1VjjPz
	T0SQNFiXfuA4dgG5xSWsRcxlrDmc6ma2BOvRdCHiGlkThJyoSPMmgupOpOExAxorC40a
	rZUArYPaTg1fIHBljw82gIHFdvjBQGNeXo5OIukyQf6pssbwT0acCCt2llyOROs4QVaK
	uljA==
X-Received: by 10.180.12.14 with SMTP id u14mr625278wib.0.1392173347503; Tue,
	11 Feb 2014 18:49:07 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Tue, 11 Feb 2014 18:48:47 -0800 (PST)
In-Reply-To: <1392042223.26657.7.camel@kazak.uk.xensource.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Wed, 12 Feb 2014 02:48:47 +0000
Message-ID: <CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] Xen 4.3 xl migrate " htree_dirblock_to_tree" on
 second host
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Parsing config from test.cfg
libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x9a9f30:
create: how=(nil) callback=(nil) poller=0x9a9f90
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda,
uses script=... assuming phy backend
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
vdev=xvda, using backend phy
libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=(null) spec.backend=phy
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null),
uses script=... assuming phy backend
libxl: debug: libxl.c:2604:libxl__device_disk_local_initiate_attach:
locally attaching PHY disk drbd-remus-test
libxl: debug: libxl_bootloader.c:409:bootloader_disk_attached_cb:
Config bootloader value: pygrub
libxl: debug: libxl_bootloader.c:425:bootloader_disk_attached_cb:
Checking for bootloader in libexec path: /usr/local/lib/xen/bin/pygrub
libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x9a9f30:
inprogress: poller=0x9a9f90, flags=i
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x9aa3c8 wpath=/local/domain/35 token=3/0: register slotnum=3
libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao
0x9a9f30: progress report: ignored
libxl: debug: libxl_bootloader.c:535:bootloader_gotptys: executing
bootloader: /usr/local/lib/xen/bin/pygrub
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: /usr/local/lib/xen/bin/pygrub
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: --args=root=/dev/xvda1 rw console=hvc0 xencons=tty
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: --output=/var/run/xen/bootloader.35.out
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: --output-format=simple0
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: --output-directory=/var/run/xen/bootloader.35.d
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: drbd-remus-test
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x9aa3c8
wpath=/local/domain/35 token=3/0: event epath=/local/domain/35
libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader
failed - consult logfile /var/log/xen/bootloader.35.log
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus:
bootloader [12781] exited with error status 1
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x9aa3c8 wpath=/local/domain/35 token=3/0: deregister slotnum=3
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -3
libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x9a9f30:
complete, rc=-3
libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x9a9f30: destroy
xc: debug: hypercall buffer: total allocations:20 total releases:20
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:16 misses:2 toobig:2


This part:
libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader
arg: drbd-remus-test

does seem weird: the bootloader is handed the bare DRBD resource name
rather than a device path.

On Mon, Feb 10, 2014 at 2:23 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-06 at 16:03 +0000, Miguel Clara wrote:
>>
>> If I use kernel/ramdisk files instead of pygrub it works, with the same
>> disk syntax!
>
> Ah, I suspect that something isn't realising that drbd:resource-name
> isn't a local disk name and so tries to get pygrub to run on it
> directly, instead of creating a loopback xvd to use.
>
> Some of this has been improved since 4.3, but an "xl -vvv create ..."
> might shed some light on exactly what is going wrong.
>
> Ian.
>
>
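Given the diagnosis above, a plausible triage sequence would be the following (a sketch only: the log path and config name come from the libxl output earlier in the thread, and `drbdadm sh-dev` assumes a DRBD 8.x toolchain):

```shell
# pygrub exited with status 1; its own log says why (path taken from
# the libxl output above):
cat /var/log/xen/bootloader.35.log

# Resolve the block device behind the DRBD resource -- pygrub needs a
# device or image, not the bare resource name it was handed:
drbdadm sh-dev drbd-remus-test

# Retry with full verbosity, as suggested:
xl -vvv create test.cfg
```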

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 03:16:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 03:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDQIj-0002Vv-By; Wed, 12 Feb 2014 03:15:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WDQIh-0002Vp-JR
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 03:15:51 +0000
Received: from [85.158.143.35:38881] by server-3.bemta-4.messagelabs.com id
	2C/23-11539-667EAF25; Wed, 12 Feb 2014 03:15:50 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392174948!4972773!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10447 invoked from network); 12 Feb 2014 03:15:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 03:15:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1C3FYEc004213
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 03:15:36 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1C3FWC5010440
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 12 Feb 2014 03:15:32 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1C3FVmQ017801; Wed, 12 Feb 2014 03:15:31 GMT
Received: from android-45f18b7a850a525a.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 11 Feb 2014 19:15:30 -0800
User-Agent: K-9 Mail for Android
In-Reply-To: <52FA99CA.8070505@kernel.dk>
References: <20140210184412.GA18198@phenom.dumpdata.com>
	<20140210195402.GA3924@kernel.dk>
	<771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
	<52FA68AE.6070608@citrix.com> <52FA6A44.6050003@citrix.com>
	<1009282985.20140211212157@eikelenboom.it>
	<52FA99CA.8070505@kernel.dk>
MIME-Version: 1.0
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 11 Feb 2014 22:14:15 -0500
To: Jens Axboe <axboe@kernel.dk>, Sander Eikelenboom <linux@eikelenboom.it>,
	David Vrabel <david.vrabel@citrix.com>
Message-ID: <92e1a6a9-1993-4203-b38c-9c8da96a3ce7@email.android.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	mrushton@amazon.com, msw@amazon.com, boris.ostrovsky@oracle.com,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : INFO: trying
	to register non-static key. the code is fine but needs
	lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On February 11, 2014 4:44:42 PM EST, Jens Axboe <axboe@kernel.dk> wrote:
>On 2014-02-11 13:21, Sander Eikelenboom wrote:
>>
>> Tuesday, February 11, 2014, 7:21:56 PM, you wrote:
>>
>>> On 11/02/14 18:15, Roger Pau Monné wrote:
>>>> On 11/02/14 18:52, David Vrabel wrote:
>>>>>
>>>> That would mean that unmap_purged_grants would no longer be static
>and
>>>> I should also add a prototype for it in blkback/common.h, which is
>kind
>>>> of ugly IMHO.
>>
>>> But less ugly than initializing work with a NULL function, IMO.
>>
>>>> commit 980e72e45454b64ccb7f23b6794a769384e51038
>>>> Author: Roger Pau Monne <roger.pau@citrix.com>
>>>> Date:   Tue Feb 11 19:04:06 2014 +0100
>>>>
>>>>      xen-blkback: init persistent_purge_work work_struct
>>>>
>>>>      Initialize persistent_purge_work work_struct on
>xen_blkif_alloc (and
>>>>      remove the previous initialization done in
>purge_persistent_gnt). This
>>>>      prevents flush_work from complaining even if
>purge_persistent_gnt has
>>>>      not been used.
>>>>
>>>>      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>
>>> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
>>
>> And a Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
>
>Konrad, I was going to ship my current tree today, but looks like we
>need this too.

Yes. Are you OK taking it from this thread or would you like me to prep a GIT pull with it?

Thanks!


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 03:35:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 03:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDQb0-0003Ac-ST; Wed, 12 Feb 2014 03:34:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <axboe@kernel.dk>) id 1WDQaz-0003AS-AA
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 03:34:45 +0000
Received: from [85.158.143.35:48774] by server-2.bemta-4.messagelabs.com id
	E2/37-10891-4DBEAF25; Wed, 12 Feb 2014 03:34:44 +0000
X-Env-Sender: axboe@kernel.dk
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392176082!4979502!1
X-Originating-IP: [209.85.220.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16054 invoked from network); 12 Feb 2014 03:34:43 -0000
Received: from mail-pa0-f46.google.com (HELO mail-pa0-f46.google.com)
	(209.85.220.46)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 03:34:43 -0000
Received: by mail-pa0-f46.google.com with SMTP id rd3so8559948pab.19
	for <xen-devel@lists.xensource.com>;
	Tue, 11 Feb 2014 19:34:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition
	:content-transfer-encoding:in-reply-to;
	bh=79eHmBjqH5NRFyVIIO04XpCXJ2Tny/Vd+dqjqU8Jrsc=;
	b=UBnTYRxzZ/kCqwX7WiIVK6gcOk1PpWH4VFy/2g1FaFJw1rpjr+F8IU87grkw4B1A84
	SO551Z1ySYkzoXGkOpuPF3k2GSWcu0CUEoDw+jVzCRxY/Qo72v1nHCpaBhUUvcsv7ps4
	Ubw09OhmoA6jV6VHizEzQt4DGyZLWr1KqGCOIw1MrggLRSQz6O7yRLHNlqBniYgwgam5
	CYGqFavaVaeMC0LsyW432gQNzDStKylxeDk7qPB20zbfKJdqxZad4ZY8sPe8s7q1VLzj
	rfSeOKcTAnDGhK4QIo582Ed13bqcUi/aXe2f5qQky+0YA0t40101nCagN12je0Jhq1Cn
	c2xA==
X-Gm-Message-State: ALoCoQnn71jJZDOPfuD7muPbLc8RkjmKAq/xS69vkKqiNO3x6Axa8zzWKYF50HXJ64tiT9eSGqO4
X-Received: by 10.68.197.234 with SMTP id ix10mr48167849pbc.80.1392176081530; 
	Tue, 11 Feb 2014 19:34:41 -0800 (PST)
Received: from localhost (66.29.187.52.static.utbb.net. [66.29.187.52])
	by mx.google.com with ESMTPSA id bc4sm58246711pbb.2.2014.02.11.19.34.38
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Tue, 11 Feb 2014 19:34:39 -0800 (PST)
Date: Tue, 11 Feb 2014 20:34:37 -0700
From: Jens Axboe <axboe@kernel.dk>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140212033437.GA9739@kernel.dk>
References: <771950784.20140211165215@eikelenboom.it>
	<20140211155650.GA23026@phenom.dumpdata.com>
	<1542261541.20140211170750@eikelenboom.it>
	<52FA6072.40908@citrix.com> <52FA6352.1010403@citrix.com>
	<52FA68AE.6070608@citrix.com> <52FA6A44.6050003@citrix.com>
	<1009282985.20140211212157@eikelenboom.it>
	<52FA99CA.8070505@kernel.dk>
	<92e1a6a9-1993-4203-b38c-9c8da96a3ce7@email.android.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <92e1a6a9-1993-4203-b38c-9c8da96a3ce7@email.android.com>
Cc: xen-devel@lists.xensource.com, linux-kernel@vger.kernel.org,
	mrushton@amazon.com, Sander Eikelenboom <linux@eikelenboom.it>,
	David Vrabel <david.vrabel@citrix.com>, msw@amazon.com,
	boris.ostrovsky@oracle.com,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [GIT PULL] (xen) stable/for-jens-3.14 : NFO: trying
 to register non-static key. the code is fine but needs lockdep annotation.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11 2014, Konrad Rzeszutek Wilk wrote:
> On February 11, 2014 4:44:42 PM EST, Jens Axboe <axboe@kernel.dk> wrote:
> >On 2014-02-11 13:21, Sander Eikelenboom wrote:
> >>
> >> Tuesday, February 11, 2014, 7:21:56 PM, you wrote:
> >>
> >>> On 11/02/14 18:15, Roger Pau Monn=E9 wrote:
> >>>> On 11/02/14 18:52, David Vrabel wrote:
> >>>>>
> >>>> That would mean that unmap_purged_grants would no longer be static
> >and
> >>>> I should also add a prototype for it in blkback/common.h, which is
> >kind
> >>>> of ugly IMHO.
> >>
> >>> But less ugly than initializing work with a NULL function, IMO.
> >>
> >>>> commit 980e72e45454b64ccb7f23b6794a769384e51038
> >>>> Author: Roger Pau Monne <roger.pau@citrix.com>
> >>>> Date:   Tue Feb 11 19:04:06 2014 +0100
> >>>>
> >>>>      xen-blkback: init persistent_purge_work work_struct
> >>>>
> >>>>      Initialize persistent_purge_work work_struct on
> >xen_blkif_alloc (and
> >>>>      remove the previous initialization done in
> >purge_persistent_gnt). This
> >>>>      prevents flush_work from complaining even if
> >purge_persistent_gnt has
> >>>>      not been used.
> >>>>
> >>>>      Signed-off-by: Roger Pau Monn=E9 <roger.pau@citrix.com>
> >>
> >>> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
> >>
> >> And a Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
> >
> >Konrad, I was going to ship my current tree today, but looks like we =

> >need this too.
> =

> Yes. Are you OK taking it from this thread or would you like me to prep a=
 GIT pull with it?

I'll just grab it from here, no problem. Done.

-- =

Jens Axboe


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 04:15:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 04:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDREJ-0004H7-C7; Wed, 12 Feb 2014 04:15:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WDREI-0004H2-6M
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 04:15:22 +0000
Received: from [85.158.139.211:51009] by server-16.bemta-5.messagelabs.com id
	D3/C1-05060-955FAF25; Wed, 12 Feb 2014 04:15:21 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392178518!3267822!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19425 invoked from network); 12 Feb 2014 04:15:19 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 04:15:19 -0000
Received: by mail-pd0-f170.google.com with SMTP id p10so8468254pdj.15
	for <xen-devel@lists.xen.org>; Tue, 11 Feb 2014 20:15:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:references:mime-version:message-id
	:content-type; bh=1+faLQ678/bccKb59j1xoHg/pd5Grmm6zfRO7iHu2Ps=;
	b=o9EcCBhR4P8Ma9WcWUALOsgvgq6d2ykNbQINOFgKDKpfsAUXorj4RKCMyaO5Kt/gLP
	3GHmh+tnQ0xt8vYaoIJma0xarAKhT7D+XCB2LADhgih7PQrDNCwjd23YrQZPQ2RY2Jmv
	hMIHsnKe6V1jCUyhxDEDBoFjNsUBduq0zdFD69AKy+00RKuWwOEOcoTkfoSOpQH2QB9w
	LsrBAxVQ25OE9ylbU9C6/I3qdFH2juniw739c7lTyB2jk3NA2V0Z7g7Z6i5sZZEnAH3Y
	WkQgw+ih9bzjgFHf7TTaSioha8Q4ZqRN2nzVVvF/pSfhEHjG0jaQTWbefpMAXc7CxJUP
	Wmjg==
X-Received: by 10.68.93.161 with SMTP id cv1mr49324465pbb.122.1392178517999;
	Tue, 11 Feb 2014 20:15:17 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id
	pp5sm58498288pbb.33.2014.02.11.20.15.15 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 11 Feb 2014 20:15:17 -0800 (PST)
Date: Wed, 12 Feb 2014 12:15:14 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <2014021017472829063425@gmail.com>, 
	<1392041992.26657.6.camel@kazak.uk.xensource.com>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021212150776651942@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [XEN BUG] XEN + Qemu upstream + Qcow2 + coroutine =
	BOOM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2228157291637945671=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============2228157291637945671==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart328835548817_=----"

This is a multi-part message in MIME format.

------=_001_NextPart328835548817_=----
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: base64

VGh477yMIElhbg0KDQpJIGFtIHRyeWluZyB0byBhbmFseXplIHRoZSBjcmFzaCByZWFzb24uDQpU
aGlzIEJVRyBpcyBvbmx5IGhhcHBlbmRlZCB3aGVuIHVzaW5nIElERSBkaXNrIChoZCogbmFtZSBm
b3IgdmRpc2spLg0KSWYgdXNpbmcgU0NTSSBkaXNrIChzZCogbmFtZSkgaW4gdm0gY29uZmlnIGZp
bGUsIFFlbXUgd29ya3Mgd2VsbC4NCg0KDQoNCg0KaGVyYmVydCBjbGFuZA0KDQpGcm9tOiBJYW4g
Q2FtcGJlbGwNCkRhdGU6IDIwMTQtMDItMTAgMjI6MTkNClRvOiBoZXJiZXJ0IGNsYW5kDQpDQzog
eGVuLWRldmVsDQpTdWJqZWN0OiBSZTogW1hlbi1kZXZlbF0gW1hFTiBCVUddIFhFTiArIFFlbXUg
dXBzdHJlYW0gKyBRY293MiArIGNvcm91dGluZSA9IEJPT00NCk9uIE1vbiwgMjAxNC0wMi0xMCBh
dCAxNzo0NyArMDgwMCwgaGVyYmVydCBjbGFuZCB3cm90ZToNCj4gRGVhciBBTEwhDQo+ICANCj4g
cWNvdzIgZm9ybWF0IGRpc2sgd29ya3Mgd2VsbCB3aXRoIHhlbiBmb3IgbW9zdCBjYXNlczsNCj4g
QlVUIGlmIHVzaW5nIGNvcm91dGluZSBmdW5jdGlvbiBzdWNoIGFzIGJsb2NrLXN0cmVhbSwgdGhl
IHFlbXUgcHJvY2Vzcw0KPiBjcmFzaGVkIQ0KPiBBbmQgd2UgY2FuIGZvdW5kIGZvbGxvd2luZyBt
c2cgZnJvbSBjbWQgZG1lc2cNCj4gIA0KPiBxZW11LXN5c3RlbS1pMzhbMjA1MTRdOiBzZWdmYXVs
dCBhdCA2MCBpcCAwMDAwN2YyYjYwZGM4NzlhIHNwDQo+IDAwMDA3ZmZmZmNjMmYwMDAgZXJyb3Ig
NCBpbiBxZW11LXN5c3RlbS1pMzg2WzdmMmI2MGNkZjAwMCs1MjkwMDBdIA0KPiAgDQo+IEFueSBz
dWdnZXN0aW9uPw0KDQpQbGVhc2UgY2FuIHlvdSB0cnkgaW5zdGFsbGluZyB0aGUgbmVjZXNzYXJ5
IGRlYnVnIHN5bWJvbHMgcGFja2FnZSAoSQ0KZG9uJ3Qga25vdyBob3cgdG8gZG8gdGhpcyBvbiBj
ZW50b3MpIGFuZCB0cnkgYW5kIGNhcHR1cmUgYSBjb3JlIGR1bXANCih3aGljaCBtaWdodCByZXF1
aXJlIGFkanVzdGluZyBybGltaXRzKSBvciBydW5uaW5nIHFlbXUgdW5kZXIgZ2RiDQpzb21laG93
LCBpbiBlaXRoZXIgY2FzZSBnZXR0aW5nIGEgYmFja3RyYWNlIG91dCBvZiB0aGlzIGNyYXNoIHdp
bGwgaGVscA0KdXMgbWFrZSBwcm9ncmVzcy4NCg0KUGxlYXNlIGFsc28gdGFrZSBhIGxvb2sgYXQN
Cmh0dHA6Ly93aWtpLnhlbi5vcmcvd2lraS9SZXBvcnRpbmdfQnVnc19hZ2FpbnN0X1hlbiBhbmQg
c2VlIHdoYXQgb3RoZXINCmluZm8gbWlnaHQgYmUgdXNlZnVsLCBsaWtlIGd1ZXN0IGNvbmZpZyBm
aWxlLCB3aGljaCB0b29sc3RhY2sgeW91IGFyZQ0KdXNpbmcsIG90aGVyIHVzZWZ1bCBsb2dzIGV0
Yy4NCg0KU2luY2UgdGhpcyBpcyBhIHFlbXUgaXNzdWUgeW91IG1pZ2h0IGFsc28gd2FudCB0byBp
bmNsdWRlIHRoZSBxZW11IGxpc3QNCmluIHlvdXIgcmVwb3J0Lg0KDQpJYW4u

------=_001_NextPart328835548817_=----
Content-Type: text/html;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

=EF=BB=BF<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3Dutf-8" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =E5=BE=AE=E8=BD=AF=E9=9B=85=E9=BB=91; COL=
OR: #000000; LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Thx=EF=BC=8C Ian</DIV>
<DIV>&nbsp;</DIV>
<DIV>I am trying to analyze the crash reason.</DIV>
<DIV>This BUG is only happended when using IDE disk (hd* name for vdisk).<=
/DIV>
<DIV>If using SCSI disk (sd* name) in vm config file, Qemu works well.</DI=
V>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV>
<DIV>&nbsp;</DIV>
<DIV=20
style=3D"BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; BORDER-=
BOTTOM: medium none; PADDING-BOTTOM: 0cm; PADDING-TOP: 3pt; PADDING-LEFT: =
0cm; BORDER-LEFT: medium none; PADDING-RIGHT: 0cm">
<DIV=20
style=3D"FONT-SIZE: 12px; FONT-FAMILY: tahoma; BACKGROUND: #efefef; COLOR:=
 #000000; PADDING-BOTTOM: 8px; PADDING-TOP: 8px; PADDING-LEFT: 8px; PADDIN=
G-RIGHT: 8px">
<DIV><B>From:</B>&nbsp;<A href=3D"mailto:Ian.Campbell@citrix.com">Ian=20
Campbell</A></DIV>
<DIV><B>Date:</B>&nbsp;2014-02-10&nbsp;22:19</DIV>
<DIV><B>To:</B>&nbsp;<A href=3D"mailto:herbert.cland@gmail.com">herbert=20
cland</A></DIV>
<DIV><B>CC:</B>&nbsp;<A=20
href=3D"mailto:xen-devel@lists.xen.org">xen-devel</A></DIV>
<DIV><B>Subject:</B>&nbsp;Re: [Xen-devel] [XEN BUG] XEN + Qemu upstream + =
Qcow2=20
+ coroutine =3D BOOM</DIV></DIV></DIV>
<DIV>
<DIV>On Mon, 2014-02-10 at 17:47 +0800, herbert cland wrote:</DIV>
<DIV>&gt; Dear ALL!</DIV>
<DIV>&gt;&nbsp; </DIV>
<DIV>&gt; qcow2 format disk works well with xen for most cases;</DIV>
<DIV>&gt; BUT if using coroutine function such as block-stream, the qemu=20
process</DIV>
<DIV>&gt; crashed!</DIV>
<DIV>&gt; And we can found following msg from cmd dmesg</DIV>
<DIV>&gt;&nbsp; </DIV>
<DIV>&gt; qemu-system-i38[20514]: segfault at 60 ip 00007f2b60dc879a sp</D=
IV>
<DIV>&gt; 00007ffffcc2f000 error 4 in qemu-system-i386[7f2b60cdf000+529000=
]=20
</DIV>
<DIV>&gt;&nbsp; </DIV>
<DIV>&gt; Any suggestion?</DIV>
<DIV>&nbsp;</DIV>
<DIV>Please can you try installing the necessary debug symbols package (I<=
/DIV>
<DIV>don't know how to do this on centos) and try and capture a core dump<=
/DIV>
<DIV>(which might require adjusting rlimits) or running qemu under gdb</DI=
V>
<DIV>somehow, in either case getting a backtrace out of this crash will=20
help</DIV>
<DIV>us make progress.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Please also take a look at</DIV>
<DIV>http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen and see what=20
other</DIV>
<DIV>info might be useful, like guest config file, which toolstack you are=
</DIV>
<DIV>using, other useful logs etc.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Since this is a qemu issue you might also want to include the qemu=20
list</DIV>
<DIV>in your report.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Ian.</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV></DIV></BODY></HTML>

------=_001_NextPart328835548817_=------



--===============2228157291637945671==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2228157291637945671==--



From xen-devel-bounces@lists.xen.org Wed Feb 12 04:27:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 04:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDRPt-0004d6-VA; Wed, 12 Feb 2014 04:27:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDRPs-0004d1-F6
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 04:27:20 +0000
Received: from [193.109.254.147:2453] by server-11.bemta-14.messagelabs.com id
	D3/3D-24604-728FAF25; Wed, 12 Feb 2014 04:27:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392179236!3663725!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17136 invoked from network); 12 Feb 2014 04:27:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 04:27:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,830,1384300800"; d="scan'208";a="101807763"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 04:27:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 23:27:15 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDRPm-0004kz-Qs;
	Wed, 12 Feb 2014 04:27:14 +0000
From xen-devel-bounces@lists.xen.org Wed Feb 12 04:27:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 04:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDRPt-0004d6-VA; Wed, 12 Feb 2014 04:27:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDRPs-0004d1-F6
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 04:27:20 +0000
Received: from [193.109.254.147:2453] by server-11.bemta-14.messagelabs.com id
	D3/3D-24604-728FAF25; Wed, 12 Feb 2014 04:27:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392179236!3663725!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17136 invoked from network); 12 Feb 2014 04:27:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 04:27:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,830,1384300800"; d="scan'208";a="101807763"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 04:27:15 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 11 Feb 2014 23:27:15 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDRPm-0004kz-Qs;
	Wed, 12 Feb 2014 04:27:14 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDRPm-0006ZT-L2;
	Wed, 12 Feb 2014 04:27:14 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24851-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 04:27:14 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24851: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24851 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24851/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf
baseline version:
 xen                  06bbcaf48d09c18a41c482866941ddd5d2846b44

------------------------------------------------------------
People who touched revisions under test:
  Anup Patel <anup.patel@linaro.org>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  David Scott <dave.scott@citrix.com>
  Don Slutz <dslutz@verizon.com>
  Ian Campbell <ian.campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Pranavkumar Sawargaonkar <pranavkumar@linaro.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=3b2f92c1f8567461562fac9922fbad223dc8c6cf
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 3b2f92c1f8567461562fac9922fbad223dc8c6cf
+ branch=xen-unstable
+ revision=3b2f92c1f8567461562fac9922fbad223dc8c6cf
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 3b2f92c1f8567461562fac9922fbad223dc8c6cf:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   06bbcaf..3b2f92c  3b2f92c1f8567461562fac9922fbad223dc8c6cf -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 08:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 08:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDV8q-0002ib-3N; Wed, 12 Feb 2014 08:26:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDV8p-0002iW-JB
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 08:25:59 +0000
Received: from [85.158.139.211:64682] by server-13.bemta-5.messagelabs.com id
	2D/91-18801-6103BF25; Wed, 12 Feb 2014 08:25:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392193556!3315262!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6406 invoked from network); 12 Feb 2014 08:25:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 08:25:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="101844719"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 08:25:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 03:25:55 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WDV8l-0001rp-9A;
	Wed, 12 Feb 2014 08:25:55 +0000
Message-ID: <1392193555.22033.18.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Nathan March <nathan@gt.net>
Date: Wed, 12 Feb 2014 08:25:55 +0000
In-Reply-To: <52FAC778.4090803@gt.net>
References: <52FAC778.4090803@gt.net>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Xen consulting work
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 16:59 -0800, Nathan March wrote:
> We're looking to find a xen guru who can help us out with a couple 
> issues we're having (both with xensource and xenserver), and I'm hoping 
> someone on this list might fit that description.
> 
> If you're deeply familiar with how xen works with storage internally and 
> are interested, please drop me an email for more details.

The XenProject website contains a directory of consultants at
http://www.xenproject.org/directory/directory/consulting.html which you
might want to check out.

Conversely, any consultants reading this might want to add themselves
there (as well as responding to Nathan, of course!).

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:07:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDVmm-000408-Pk; Wed, 12 Feb 2014 09:07:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WDVml-000402-N6
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:07:15 +0000
Received: from [193.109.254.147:7160] by server-7.bemta-14.messagelabs.com id
	FE/21-23424-2C93BF25; Wed, 12 Feb 2014 09:07:14 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392196029!3739550!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28022 invoked from network); 12 Feb 2014 09:07:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:07:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="101851811"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 09:07:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 04:07:08 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WDVme-0002OQ-MD;
	Wed, 12 Feb 2014 09:07:08 +0000
Message-ID: <1392196023.24787.1.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 12 Feb 2014 09:07:03 +0000
In-Reply-To: <1392141590.26657.137.camel@kazak.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
	<21242.25519.844862.598762@mariner.uk.xensource.com>
	<1392141590.26657.137.camel@kazak.uk.xensource.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 17:59 +0000, Ian Campbell wrote:
> On Tue, 2014-02-11 at 17:53 +0000, Ian Jackson wrote:
> > Frediano Ziglio writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> > David is entirely correct to specify a fixed endianness for the image
> > header.
> 
> Full ack.
> 
> 
Me too. I was not speaking about the image header but about all the other
records. Obviously, by native endianness I mean the endianness of the VM.
An ARMBE VM would save as big endian because the VM is big endian; on the
same host, an ARMLE VM would save using little endian.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
	<21242.25519.844862.598762@mariner.uk.xensource.com>
	<1392141590.26657.137.camel@kazak.uk.xensource.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 17:59 +0000, Ian Campbell wrote:
> On Tue, 2014-02-11 at 17:53 +0000, Ian Jackson wrote:
> > Frediano Ziglio writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> > David is entirely correct to specify a fixed endianness for the image
> > header.
> 
> Full ack.
> 
> 
Me too. I was not speaking about the image header but about all the other
records. Obviously, by native endianness I mean the endianness of the VM.
If I have an ARMBE VM, this would save as big endian because the VM is big
endian. On the same host an ARMLE VM would save using little endian.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:15:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDVuE-0004QS-UX; Wed, 12 Feb 2014 09:14:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WDVuD-0004QN-8U
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:14:57 +0000
Received: from [85.158.137.68:18337] by server-3.bemta-3.messagelabs.com id
	FF/2F-14520-09B3BF25; Wed, 12 Feb 2014 09:14:56 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392196493!1302145!1
X-Originating-IP: [209.85.220.43]
X-SpamReason: No, hits=0.5 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18804 invoked from network); 12 Feb 2014 09:14:55 -0000
Received: from mail-pa0-f43.google.com (HELO mail-pa0-f43.google.com)
	(209.85.220.43)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:14:55 -0000
Received: by mail-pa0-f43.google.com with SMTP id rd3so8918996pab.16
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 01:14:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:subject:mime-version:message-id:content-type;
	bh=eAaOk60VqI0SJkBYmtqMbI1Za0ZZnfCGd+DPmKyG77M=;
	b=OYu1yDJd7EDdTOhG/X71EouvYZ13CXNKEgF/PRsmHMeaNGAYWdsEl17jyhe9/AJyhm
	ER7j+OBwwLsR3awz8FxUJTUNoGOO2l++EI5h+HQNbBgFbxEBeU6x03oneiXUZX+pUk0o
	ZsH9I2pPh1QEKBXbokEyyKrtPL+TkVQIY2RG2pFpe7a3ikq4e6hTfiiTuSN4kGQihdN5
	lA8XHc2VdZ5l5xRcJ75gJq7AlkMMCIzIyKZp7/2VrtYvzudQqJQYHd4yIoS7UUp+mnMO
	NNznV6s5szEKuN5N67PY4XVxFKDJjMWOkHdVD/0QYNosml26YfNNSvc0j3wJpjKbSIbJ
	wkFQ==
X-Received: by 10.68.180.66 with SMTP id dm2mr21460631pbc.143.1392196493464;
	Wed, 12 Feb 2014 01:14:53 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id e6sm61435614pbg.4.2014.02.12.01.14.51
	for <xen-devel@lists.xen.org>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 12 Feb 2014 01:14:52 -0800 (PST)
Date: Wed, 12 Feb 2014 17:14:50 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021217144850999948@gmail.com>
Subject: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4060232788916043603=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============4060232788916043603==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart647865828717_=----"

This is a multi-part message in MIME format.

------=_001_NextPart647865828717_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: base64

RGVhciBBTEwhDQoNCkZvbGxvd2luZyBtZXJnZSBtYXkgYmUgb3ZlcndyaXRlIHRoZSAieGVuOiBG
aXggdmNwdSBpbml0aWFsaXphdGlvbiIgcGF0Y2gNCi0tLS0tLS0tLS0tLS0tLS0tLS0tDQpNZXJn
ZSByZW1vdGUgYnJhbmNoICdvcmlnaW4vc3RhYmxlLTEuNicgaW50byB4ZW4tc3RhZ2luZy1tYXN0
ZXItOSANCi0tLS0tLS0tLS0tLS0tLS0tLS0tIA0KDQpUaGlzIG1hZGUgYSBjb25mbGljdCBmb3Ig
eGVuLWFsbC5jLg0KU28gaXQgc2VlbXMgdGhhdCB0aGUgdnBjdSBob3RwbHVnIHBhdGNoIHdhcyBv
dmVyd3JpdGVkIGJ5IHRoZSB1cHN0cmVhbSBxZW11IHZlcnNpb24uDQoNCg0KDQoNCg0KaGVyYmVy
dCBjbGFuZA==

------=_001_NextPart647865828717_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3Dgb2312" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =CE=A2=C8=ED=D1=C5=BA=DA; COLOR: #000000;=
 LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear ALL!</DIV>
<DIV>&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Following merge may be overwrite the "xen:=
 Fix=20
vcpu initialization" patch</DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Merge remote branch 'origin/stable-1.6' in=
to=20
xen-staging-master-9 </DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">This made a conflict for xen-all.c.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">So it seems that the vpcu hotplug patch wa=
s=20
overwrited by the upstream qemu version.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV></BODY></HTML>

------=_001_NextPart647865828717_=------



--===============4060232788916043603==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4060232788916043603==--



From xen-devel-bounces@lists.xen.org Wed Feb 12 09:23:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDW22-0004d3-32; Wed, 12 Feb 2014 09:23:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDW21-0004cy-2R
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:23:01 +0000
Received: from [193.109.254.147:9110] by server-2.bemta-14.messagelabs.com id
	B3/18-01236-37D3BF25; Wed, 12 Feb 2014 09:22:59 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392196976!3738058!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3048 invoked from network); 12 Feb 2014 09:22:58 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:22:58 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392196977; x=1423732977;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=RtY9WtqmheC4iQTY5pI6ZWPVe93RgJeg8cqBcf+DyEo=;
	b=Bg4qLkK49dVlqwlMcdrPJWF40hamZi5QqAUS0Gm5D1xoCWzGGbmgIvvn
	C1ZmEsW+cTecsnwt7X3F6HokQ3uQ/ArqLRIDjGEoDhPqrEDggJDxu93Cy
	2X///dXtk6JTDfaqK+PE71kEWc7YXrXtuLWjZtQpt4GnfEj2MyHLjdNE3 M=;
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="78808650"
Received: from email-inbound-relay-60001.pdx1.amazon.com ([10.232.153.146])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 09:22:13 +0000
Received: from ex10-hub-9003.ant.amazon.com (ex10-hub-9003.ant.amazon.com
	[10.185.137.132])
	by email-inbound-relay-60001.pdx1.amazon.com (8.14.7/8.14.7) with ESMTP
	id s1C9MCap029772
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 09:22:12 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9003.ant.amazon.com (10.185.137.132) with Microsoft SMTP
	Server id 14.2.342.3; Wed, 12 Feb 2014 01:22:07 -0800
Message-ID: <52FB3D3D.9070909@amazon.de>
Date: Wed, 12 Feb 2014 10:22:05 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Yang Zhang <yang.z.zhang@intel.com>, <xen-devel@lists.xen.org>
References: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: eddie.dong@intel.com, xiantao.zhang@intel.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH 1/2] Nested VMX: update nested paging mode
	on vmexit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12.02.14 03:08, Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> Since SVM and VMX use different mechanism to emulate the virtual-vmentry
> and virtual-vmexit, it's hard to update the nested paging mode correctly in
> common code. So we need to update the nested paging mode in their respective
> code path.
> SVM already updates the nested paging mode on vmexit. This patch adds the same
> logic in VMX side.
> 
> Previous discussion is here:
> http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Reviewed-by: Christoph Egger <chegger@amazon.de>

> ---
>  xen/arch/x86/hvm/vmx/vmx.c |    1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index f6409d6..baf3040 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2541,6 +2541,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>      vcpu_nestedhvm(v).nv_vmswitch_in_progress = 0;
>      if ( nestedhvm_vcpu_in_guestmode(v) )
>      {
> +        paging_update_nestedmode(v);
>          if ( nvmx_n2_vmexit_handler(regs, exit_reason) )
>              goto out;
>      }
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:26:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDW53-0004mJ-PJ; Wed, 12 Feb 2014 09:26:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WDW52-0004mD-7b
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:26:08 +0000
Received: from [85.158.143.35:13667] by server-2.bemta-4.messagelabs.com id
	A1/A9-10891-F2E3BF25; Wed, 12 Feb 2014 09:26:07 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392197165!5042463!1
X-Originating-IP: [209.85.160.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22018 invoked from network); 12 Feb 2014 09:26:06 -0000
Received: from mail-pb0-f53.google.com (HELO mail-pb0-f53.google.com)
	(209.85.160.53)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:26:06 -0000
Received: by mail-pb0-f53.google.com with SMTP id md12so8962378pbc.40
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 01:26:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:subject:mime-version:message-id:content-type;
	bh=h5FH9dhjpPaiDaijliUUQki1BL0I6223LCadXFRirCw=;
	b=pocUpIUuMGHJoO/HcdMnQXxCUpTdFzUPDJJpFoFL9BKeFf4s1lGMkrM+FFFPak3RkJ
	hfxgJ4gbvedjbm1isyWOrBACP+R2wHBYfLpO2gjJeQ3F8N4fZiQ18Ey3EvkDgfNfeKbO
	OeM9/l+2aNPcrqRNEfqOqbFE1GU8n6uCsnifmHdRJfe+DFdKA5b0Gae6ON9eFG3w9r/R
	wj4tT8lLJU2u52DoJTLuGBqFFL7Hkh3TH9T85BHtlPLdZTyatjtR/O4YTllFxInS3JG8
	jBdY8rifMdTITjwKhey0BIWTkcwJvm3W8SrgywxZQXUsJq3akNX5wQHba7KzodR/wKu6
	31Cw==
X-Received: by 10.66.145.199 with SMTP id sw7mr21309849pab.143.1392197164824; 
	Wed, 12 Feb 2014 01:26:04 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id c7sm61523615pbt.0.2014.02.12.01.25.59
	for <xen-devel@lists.xen.org>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 12 Feb 2014 01:26:03 -0800 (PST)
Date: Wed, 12 Feb 2014 17:25:58 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021217255231643849@gmail.com>
Subject: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0689713517071110766=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============0689713517071110766==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart205340081584_=----"

This is a multi-part message in MIME format.

------=_001_NextPart205340081584_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: base64

RGVhciBBTEwhDQoNCkZvbGxvd2luZyBtZXJnZSBtYXkgYmUgb3ZlcndyaXRlIHRoZSAieGVuOiBG
aXggdmNwdSBpbml0aWFsaXphdGlvbiIgcGF0Y2gNCi0tLS0tLS0tLS0tLS0tLS0tLS0tDQpNZXJn
ZSByZW1vdGUgYnJhbmNoICdvcmlnaW4vc3RhYmxlLTEuNicgaW50byB4ZW4tc3RhZ2luZy1tYXN0
ZXItOSANCi0tLS0tLS0tLS0tLS0tLS0tLS0tIA0KDQpUaGlzIG1hZGUgYSBjb25mbGljdCBmb3Ig
eGVuLWFsbC5jLg0KU28gaXQgc2VlbXMgdGhhdCB0aGUgdnBjdSBob3RwbHVnIHBhdGNoIHdhcyBv
dmVyd3JpdGVkIGJ5IHRoZSB1cHN0cmVhbSBxZW11IHZlcnNpb24uDQoNCg0KDQoNCg0KaGVyYmVy
dCBjbGFuZA==

------=_001_NextPart205340081584_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3DGB2312" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =CE=A2=C8=ED=D1=C5=BA=DA; COLOR: #000000;=
 LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear ALL!</DIV>
<DIV>&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Following merge may be overwrite the "xen:=
 Fix=20
vcpu initialization" patch</DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Merge remote branch 'origin/stable-1.6' in=
to=20
xen-staging-master-9 </DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">This made a conflict for xen-all.c.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">So it seems that the vpcu hotplug patch wa=
s=20
overwrited by the upstream qemu version.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV></BODY></HTML>

------=_001_NextPart205340081584_=------



--===============0689713517071110766==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0689713517071110766==--



From xen-devel-bounces@lists.xen.org Wed Feb 12 09:26:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDW53-0004mJ-PJ; Wed, 12 Feb 2014 09:26:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WDW52-0004mD-7b
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:26:08 +0000
Received: from [85.158.143.35:13667] by server-2.bemta-4.messagelabs.com id
	A1/A9-10891-F2E3BF25; Wed, 12 Feb 2014 09:26:07 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392197165!5042463!1
X-Originating-IP: [209.85.160.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22018 invoked from network); 12 Feb 2014 09:26:06 -0000
Received: from mail-pb0-f53.google.com (HELO mail-pb0-f53.google.com)
	(209.85.160.53)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:26:06 -0000
Received: by mail-pb0-f53.google.com with SMTP id md12so8962378pbc.40
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 01:26:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:subject:mime-version:message-id:content-type;
	bh=h5FH9dhjpPaiDaijliUUQki1BL0I6223LCadXFRirCw=;
	b=pocUpIUuMGHJoO/HcdMnQXxCUpTdFzUPDJJpFoFL9BKeFf4s1lGMkrM+FFFPak3RkJ
	hfxgJ4gbvedjbm1isyWOrBACP+R2wHBYfLpO2gjJeQ3F8N4fZiQ18Ey3EvkDgfNfeKbO
	OeM9/l+2aNPcrqRNEfqOqbFE1GU8n6uCsnifmHdRJfe+DFdKA5b0Gae6ON9eFG3w9r/R
	wj4tT8lLJU2u52DoJTLuGBqFFL7Hkh3TH9T85BHtlPLdZTyatjtR/O4YTllFxInS3JG8
	jBdY8rifMdTITjwKhey0BIWTkcwJvm3W8SrgywxZQXUsJq3akNX5wQHba7KzodR/wKu6
	31Cw==
X-Received: by 10.66.145.199 with SMTP id sw7mr21309849pab.143.1392197164824; 
	Wed, 12 Feb 2014 01:26:04 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id c7sm61523615pbt.0.2014.02.12.01.25.59
	for <xen-devel@lists.xen.org>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 12 Feb 2014 01:26:03 -0800 (PST)
Date: Wed, 12 Feb 2014 17:25:58 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021217255231643849@gmail.com>
Subject: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0689713517071110766=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============0689713517071110766==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart205340081584_=----"

This is a multi-part message in MIME format.

------=_001_NextPart205340081584_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: base64

RGVhciBBTEwhDQoNCkZvbGxvd2luZyBtZXJnZSBtYXkgYmUgb3ZlcndyaXRlIHRoZSAieGVuOiBG
aXggdmNwdSBpbml0aWFsaXphdGlvbiIgcGF0Y2gNCi0tLS0tLS0tLS0tLS0tLS0tLS0tDQpNZXJn
ZSByZW1vdGUgYnJhbmNoICdvcmlnaW4vc3RhYmxlLTEuNicgaW50byB4ZW4tc3RhZ2luZy1tYXN0
ZXItOSANCi0tLS0tLS0tLS0tLS0tLS0tLS0tIA0KDQpUaGlzIG1hZGUgYSBjb25mbGljdCBmb3Ig
eGVuLWFsbC5jLg0KU28gaXQgc2VlbXMgdGhhdCB0aGUgdnBjdSBob3RwbHVnIHBhdGNoIHdhcyBv
dmVyd3JpdGVkIGJ5IHRoZSB1cHN0cmVhbSBxZW11IHZlcnNpb24uDQoNCg0KDQoNCg0KaGVyYmVy
dCBjbGFuZA==

------=_001_NextPart205340081584_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3DGB2312" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =CE=A2=C8=ED=D1=C5=BA=DA; COLOR: #000000;=
 LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear ALL!</DIV>
<DIV>&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Following merge may be overwrite the "xen:=
 Fix=20
vcpu initialization" patch</DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Merge remote branch 'origin/stable-1.6' in=
to=20
xen-staging-master-9 </DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">This made a conflict for xen-all.c.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">So it seems that the vpcu hotplug patch wa=
s=20
overwrited by the upstream qemu version.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV></BODY></HTML>

------=_001_NextPart205340081584_=------



--===============0689713517071110766==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0689713517071110766==--



From xen-devel-bounces@lists.xen.org Wed Feb 12 09:29:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDW7f-0004yM-Bh; Wed, 12 Feb 2014 09:28:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDW7d-0004xC-OH
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:28:49 +0000
Received: from [85.158.137.68:48606] by server-13.bemta-3.messagelabs.com id
	7D/F0-26923-0DE3BF25; Wed, 12 Feb 2014 09:28:48 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392197326!1306670!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11474 invoked from network); 12 Feb 2014 09:28:47 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:28:47 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392197327; x=1423733327;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=MrzRe7VSFNy0hCoIhwdA9QOx+vpdq8BZsBCKpXzntuQ=;
	b=UJRTSsnl4JUe7O/T6yCsMkaJ+NYgflLfoXmcbvp/yv7jctnS6/ZsIRqX
	eNB87KryAGyAYbwoUPUBA1xZdodIed2BtHL+iV+sP+CLAyUH9ycj9Vkd6
	+0lgA6q0Ur9n0jOojOzoTdR0LZI0wkA7fnMi/9WFDffzqomziWEXJOqXf 0=;
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="78813296"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 09:28:44 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by smtp-in-1101.vdc.amazon.com (8.14.7/8.14.7) with ESMTP id
	s1C9SgPN023481
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 09:28:44 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.342.3; Wed, 12 Feb 2014 01:28:42 -0800
Message-ID: <52FB3EC9.5000201@amazon.de>
Date: Wed, 12 Feb 2014 10:28:41 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Yang Zhang <yang.z.zhang@intel.com>, <xen-devel@lists.xen.org>
References: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
	<1392170936-31362-2-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1392170936-31362-2-git-send-email-yang.z.zhang@intel.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: eddie.dong@intel.com, xiantao.zhang@intel.com, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH 2/2] Nested EPT: fixing issue of translate
 L2 gva to L1 gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12.02.14 03:08, Yang Zhang wrote:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> There is no way to translate L2 gva to L1 gfn directly.

Why?

> To do it, we need to get L2's gfn first. Then look up the virtual EPT to get L1's gfn.
> 
> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
> ---
>  xen/arch/x86/mm/p2m.c |   25 ++++++++++++++++++++-----
>  1 files changed, 20 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
> index 8f380ed..e92cfbe 100644
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1605,22 +1605,37 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
>          && paging_mode_hap(v->domain) 
>          && nestedhvm_is_n2(v) )
>      {
> -        unsigned long gfn;
> +        unsigned long gfn, l1gfn, exit_qual;
>          struct p2m_domain *p2m;
>          const struct paging_mode *mode;
> -        uint32_t pfec_21 = *pfec;
>          uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
> +        unsigned int page_order, exit_reason;
> +        int rc;
> +        uint8_t p2m_acc;
> +        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
>  
>          /* translate l2 guest va into l2 guest gfn */
>          p2m = p2m_get_nestedp2m(v, np2m_base);
>          mode = paging_get_nestedmode(v);
>          gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>  
> +        if ( gfn == INVALID_GFN )
> +            return gfn;
> +
>          /* translate l2 guest gfn into l1 guest gfn */
> -        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
> -                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
> -    }

I think in p2m-ept.c you should override that function pointer with an
EPT-specific implementation.

Christoph

> +        rc = nept_translate_l2ga(v, gfn << 12 , &page_order, 4, &l1gfn, &p2m_acc,
> +                                &exit_qual, &exit_reason);
> +        if ( rc == EPT_TRANSLATE_VIOLATION || rc == EPT_TRANSLATE_MISCONFIG )
> +        {
> +            nvmx->ept.exit_reason = exit_reason;
> +            nvmx->ept.exit_qual = exit_qual;
> +            vcpu_nestedhvm(current).nv_vmexit_pending = 1;
> +        }
> +        if ( rc == EPT_TRANSLATE_RETRY )
> +            *pfec = PFEC_page_paged;
>  
> +        return l1gfn;
> +    }
>      return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
>  }
>  
> 
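[Editor's sketch] A toy model of the two-step walk the patch implements. This is not Xen's actual code: the table contents and helper name are made up for illustration, and the two lookups stand in for mode->gva_to_gfn() (the L2 page tables) and nept_translate_l2ga() (the virtual EPT) respectively.

```c
#define INVALID_GFN (~0UL)

/* Made-up translation tables: an L2 page-table mapping gva pages to
 * L2 gfns, and a virtual EPT mapping L2 gfns to L1 gfns. */
static const unsigned long l2_pagetable[4] = { 7, 8, INVALID_GFN, 9 };
static const unsigned long virtual_ept[16] = { [7] = 100, [8] = 101, [9] = 102 };

static unsigned long gva_to_l1_gfn(unsigned long gva_page)
{
    unsigned long l2gfn;

    if ( gva_page >= 4 )
        return INVALID_GFN;

    l2gfn = l2_pagetable[gva_page];   /* step 1: L2 gva -> L2 gfn */
    if ( l2gfn == INVALID_GFN )
        return INVALID_GFN;           /* fault is delivered to the L2 guest */

    return virtual_ept[l2gfn];        /* step 2: L2 gfn -> L1 gfn via vEPT */
}
```

The point of the commit message is exactly this structure: there is no single table from L2 gva to L1 gfn, so a miss in either stage has to be handled separately (a page fault to L2 in step 1, a nested-EPT violation/misconfig exit in step 2).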


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:37:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWFQ-0005hG-I6; Wed, 12 Feb 2014 09:36:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDWFP-0005h8-Pz
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:36:51 +0000
Received: from [193.109.254.147:12700] by server-13.bemta-14.messagelabs.com
	id D6/42-01226-3B04BF25; Wed, 12 Feb 2014 09:36:51 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392197808!3753573!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22486 invoked from network); 12 Feb 2014 09:36:49 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:36:49 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392197809; x=1423733809;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=GiOws9+c84u26+XsLAFkVro3N1JJWOtlRrzJbutKslM=;
	b=t9TzjMotbSfptqAHAFICp0XDpDaOds4FSBusjws8iZXlU0J6b39CvpEv
	2TSVSF/Yi2OEpESa+9Rt4QV5YowN37t4o9GlcxRwr46wvSlxWG2P2rt5T
	ZQsRrm67Ge/tpKN6TP5i0DkvlW4rnGFZvHSvGoNunZTYgJ3zX+ZYPXn9d Q=;
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="78817720"
Received: from email-inbound-relay-64001.pdx4.amazon.com ([10.220.169.6])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 09:36:47 +0000
Received: from ex10-hub-31006.ant.amazon.com (ex10-hub-31006.sea31.amazon.com
	[10.185.176.13])
	by email-inbound-relay-64001.pdx4.amazon.com (8.14.7/8.14.7) with ESMTP
	id s1C9XiT5016716
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 09:36:47 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-31006.ant.amazon.com (10.185.176.13) with Microsoft SMTP
	Server id 14.2.342.3; Wed, 12 Feb 2014 01:36:28 -0800
Message-ID: <52FB409A.3010506@amazon.de>
Date: Wed, 12 Feb 2014 10:36:26 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Aravind Gopalakrishnan
	<aravind.gopalakrishnan@amd.com>
References: <52E7A17D020000780011784E@nat28.tlf.novell.com>
	<52F2A1E4.9030700@amd.com>
	<52F35EE60200007800119A84@nat28.tlf.novell.com>
In-Reply-To: <52F35EE60200007800119A84@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 2/2 V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 06.02.14 10:07, Jan Beulich wrote:
>>>> On 05.02.14 at 21:41, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> wrote:
>> On 1/28/2014 5:24 AM, Jan Beulich wrote:
>>>>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> 
>> wrote:
>>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>>> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t 
>> msr, uint64_t *val)
>>>>   
>>>>       *val = 0;
>>>>   
>>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>> +    switch ( msr & (MSR_IA32_MC0_CTL | (0x3 << 30) | 0x13))
>>> As one of the other reviewers already said - 0xC0000000 would
>>> be better recognizable here.
>>>
>>> As to the 3 -> 0x13 change - I don't think this is conceptually
>>> correct. While at present we emulate only 2 banks, this had
>>> been different in the past and may become different again.
>>> Hence introducing a dis-contiguity after bank 3 is undesirable.
>>
>> IMHO, including the '0x13' is necessary. The reason is that 0x413, 
>> 0xc0000408 and 0xc0000409
>> together form the set of MC4 thresholding registers. Not including 0x13 
>> in the mask would mean
>> that accesses to 0x413 alone would not be handled. (which would be 
>> confusing if someone new
>> were to look into the mce codebase)
> 
> No - bit 4 is part of what forms the bank number. Hence it must
> be masked out in the switch() expression.

I prefer to see a comment in the code that makes this clear.

Christoph
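
[Editor's sketch] A minimal illustration of the bank/register layout Jan is describing. The MSR numbers follow the Intel SDM convention (MSR_IA32_MC0_CTL assumed to be 0x400); the helpers are hypothetical, not Xen code. Bank i occupies four MSRs starting at 0x400 + 4*i, so the low two bits select CTL/STATUS/ADDR/MISC and every higher bit of the offset, including bit 4, encodes the bank number:

```c
#define MSR_IA32_MC0_CTL 0x400u   /* assumed value, per the Intel SDM */

/* Bank number: everything above the low two bits of the offset. */
static unsigned int mc_bank(unsigned int msr)
{
    return (msr - MSR_IA32_MC0_CTL) >> 2;
}

/* Register within the bank: 0=CTL 1=STATUS 2=ADDR 3=MISC. */
static unsigned int mc_reg(unsigned int msr)
{
    return msr & 3;
}
```

For example, 0x413 decodes to bank 4, register MISC. Masking the MSR with 0x13 instead of masking bit 4 out would fold MSRs from different banks (e.g. 0x403 and 0x413) onto the same switch() case, which is why bit 4 must be treated as part of the bank number.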


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:38:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWH1-0005rW-2L; Wed, 12 Feb 2014 09:38:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jordi.cucurull@scytl.com>) id 1WDWGz-0005rN-9F
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 09:38:29 +0000
Received: from [193.109.254.147:39047] by server-15.bemta-14.messagelabs.com
	id A8/33-10839-4114BF25; Wed, 12 Feb 2014 09:38:28 +0000
X-Env-Sender: jordi.cucurull@scytl.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392197906!3725226!1
X-Originating-IP: [217.111.179.100]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1449 invoked from network); 12 Feb 2014 09:38:27 -0000
Received: from mail3.scytl.com (HELO mail3.scytl.com) (217.111.179.100)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 09:38:27 -0000
Received: from [10.0.16.210] (217.111.178.66) by mail3.scytl.com (Axigen)
	with (DHE-RSA-AES256-SHA encrypted) ESMTPSA id 383CE8;
	Wed, 12 Feb 2014 10:38:25 +0100
Message-ID: <52FB4110.8090005@scytl.com>
Date: Wed, 12 Feb 2014 10:38:24 +0100
From: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0)
	Gecko/20130912 Thunderbird/17.0.9
MIME-Version: 1.0
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<52F9F514.8040907@scytl.com> <52FA413F.1040608@tycho.nsa.gov>
In-Reply-To: <52FA413F.1040608@tycho.nsa.gov>
X-Enigmail-Version: 1.6
Cc: xen-devel@lists.xenproject.org,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4738565846614668823=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============4738565846614668823==
Content-Type: multipart/signed; micalg=pgp-sha1;
 protocol="application/pgp-signature";
 boundary="JN7dVRjllhTlUQnvvNi2voXf5eWDtAbMQ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--JN7dVRjllhTlUQnvvNi2voXf5eWDtAbMQ
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hello Daniel,

On 02/11/2014 04:26 PM, Daniel De Graaf wrote:
> On 02/11/2014 05:01 AM, Jordi Cucurull Juan wrote:
>> Hello Daniel,
>>
>> Thanks for your thorough answer. I have a few comments below.
>>
>> On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
>>> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>>>> Dear all,
>>>>
>>>> I have recently configured a Xen 4.3 server with the vTPM enabled
>>>> and a
>>>> guest virtual machine that takes advantage of it. After playing a bit
>>>> with it, I have a few questions:
>>>>
>>>> 1. According to the documentation, to shut down the vTPM stubdom it is
>>>> only necessary to shut down the guest VM normally. Theoretically, the
>>>> vTPM stubdom automatically shuts down after this. Nevertheless, if I
>>>> shut down the guest, the vTPM stubdom remains active and, moreover, I
>>>> can start the machine again and the values of the vTPM are the last
>>>> ones that were in the previous instance of the guest. Is this normal?
>>>
>>> The documentation is in error here; while this was originally how the
>>> vTPM domain behaved, this automatic shutdown was not reliable: it was
>>> not done if the peer domain did not use the vTPM, and it was
>>> incorrectly triggered by pv-grub's use of the vTPM to record guest
>>> kernel measurements (which was the immediate reason for its removal).
>>> The solution now is to either send a shutdown request or simply
>>> destroy the vTPM upon guest shutdown.
>>>
>>> An alternative that may require less work on your part is to destroy
>>> the vTPM stub domain during a guest's construction, something like:
>>>
>>> #!/bin/sh -e
>>> xl destroy "$1-vtpm" || true
>>> xl create "$1-vtpm.cfg"
>>> xl create "$1-domu.cfg"
>>>
>>> Allowing a vTPM to remain active across a guest restart will cause the
>>> PCR values extended by pv-grub to be incorrect, as you observed in your
>>> second email. In order for the vTPM's PCRs to be useful for quotes or
>>> releasing sealed secrets, you need to ensure that a new vTPM is started
>>> if and only if it is paired with a corresponding guest.
>>
>> I see a potential threat due to this behaviour (please correct me if I
>> am wrong).
>>
>> Assume an administrator of Dom0 becomes malicious. Since the hypervisor
>> does not enforce the shutdown of the vTPM domain, the malicious
>> administrator could try the following: 1) make a copy of the peer
>> domain, 2) manipulate the copy of the peer domain and disable its
>> measurements, 3) boot the original peer domain, 4) switch it off or
>> pause it, 5) boot the manipulated copy of the peer domain.
>>
>> Then, the shown PCR values of the manipulated copy of the peer domain
>> are the ones measured by the original peer domain during the first boot.
>> But the manipulated copy is the one actually running. Hence, this could
>> not be detected by quoting either the vTPM or the pTPM.
>>
>
> A malicious dom0 has a much simpler attack vector: start the domain with
> a custom version of pv-grub that extends arbitrary measurements instead
> of the real kernel's measurements. Then, a user kernel with disabled or
> similarly false measuring capabilities can be booted. Alternatively, if
> XSM policies do not restrict it, a debugger could be attached to the
> guest so that it can be manipulated online.

This is the reason why I wanted to measure Dom0: to detect a possible
manipulation, e.g. a custom version of pv-grub. Nevertheless, the
administrator could still try to inject a manipulated copy of it into the
system after booting it. Hence, I agree with the solutions you propose
below.

>
>> Maybe one possible solution could be to enforce an XSM FLASK policy to
>> prevent any user in Dom0 from destroying, shutting down or pausing a
>> domain. Then, measure the policy when Dom0 starts into a PCR of the
>> physical TPM. Nevertheless, on one hand I do not know if this is
>> feasible and, on the other hand, this prevents the system from
>> destroying the vTPM domain when the peer domain shuts down.
>
> The solution to this problem is to disaggregate dom0 and relocate the
> domain building component to a stub domain that is completely measured
> in the pTPM (perhaps by TBOOT). The domain builder could use a static
> library of domains to build (hardware domain and TPM Manager built only
> once; vTPM and pv-grub domain pairs built on request). An XSM policy
> could then restrict vTPM communication so that only correctly built
> guests are allowed to talk to their paired vTPM. In this case, dom0
> would have permission to shut down either VM, but could not start a
> replacement.
>

I understand this cannot be done with the current implementation of Xen.
Are there any plans to do this in the future?


>>>> 2. In the documentation it is recommended to avoid accessing the
>>>> physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
>>>> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
>>>> without any apparent issue. Why is it not recommended to directly
>>>> access the physical TPM from Dom0?
>>>
>>> While most of the time it is not a problem to have two entities
>>> talking to the physical TPM, it is possible for the trousers daemon in
>>> dom0 to interfere with key handles used by the TPM Manager. There are
>>> also certain operations of the TPM that may not handle concurrency,
>>> although I do not believe that trousers uses them - SHA1Start, the DAA
>>> commands, and certain audit logs come to mind.
>>>
>>> The other reason why it is recommended to avoid pTPM access from dom0
>>> is because the ability to send unseal/unbind requests to the physical
>>> TPM makes it possible for applications running in dom0 to decrypt the
>>> TPM Manager's data (and thereby access vTPM private keys).
>>>
>>> At present, sharing the physical TPM between dom0 and the TPM Manager
>>> is the only way to get full integrity checks.
>>
>> OK, I see. Hence leaving TPM support enabled in Dom0 opens a security
>> problem for the vTPM. But if we do not enable the support, the
>> integrity of Dom0 cannot be proved using the TPM (e.g. by remote
>> attestation).
>
> Right. Since dom0 currently must be trusted (as discussed above) this is
> currently the best way to handle the dom0 attestation problem.
>
>>>
>>>> 3. If it is not recommended to directly access the physical TPM in
>>>> Dom0, what is the advisable way to check the integrity of this
>>>> domain? With solutions such as TBOOT and Intel TXT?
>>>
>>> While the TPM Manager in Xen 4.3/4.4 does not yet have this
>>> functionality, an update which I will be submitting for inclusion in
>>> Xen 4.5 has the ability to get physical TPM quotes using a virtual
>>> TPM. Combined with an early domain builder, the eventual goal is to
>>> have dom0 use a vTPM for its integrity/reporting/sealing operations,
>>> and use the physical TPM only to secure the secrets of vTPMs and for
>>> deep quotes to provide fresh proofs of the system's state.
>>
>> This sounds really good. I look forward to trying it in Xen 4.5!
>>
>>
>> Thank you for your answers!
>> Jordi.
>>
>
>


-- 
Jordi Cucurull Juan
Researcher
Scytl Secure Electronic Voting
Plaça Gal·la Placidia, 1-3, 1st floor · 08006 Barcelona
Phone: +34 934 230 324
Fax:   +34 933 251 028
jordi.cucurull@scytl.com
http://www.scytl.com



--JN7dVRjllhTlUQnvvNi2voXf5eWDtAbMQ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.14 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJS+0EQAAoJEI3Hup7P+VY6J7oIAIZHQMFxKwaVcFa2SDWcG64F
jndIsOxy9kcTMA8uNOyq11y9LCurBLHf9EqIy/8xyFsaweI/ACYoY1kkx47AIstw
Gt0DqKnfkh2L5ivrGuIIJnPisOWo+mzG7XY3e1+toYsu92qs68vFKXnqMQkHfmfq
AVxkmzD2c/zJzGTNjnYmZwtS+mBqyc7LkjOB+1fCLQFy/V9yyseRZcFOlf7asIBi
/Yteqjh1x2XpD506dA0FcrgiM44x8mOog7nX0WMYlkiDH61ofKTnY8dUHZVyS+RW
u7IiwwXzmtvBN60wYlkN9eUF/S08ATO8vmXe/owfou+2lcdZ/0FjNJ/UnIi1U2s=
=PLqe
-----END PGP SIGNATURE-----

--JN7dVRjllhTlUQnvvNi2voXf5eWDtAbMQ--


--===============4738565846614668823==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4738565846614668823==--


From xen-devel-bounces@lists.xen.org Wed Feb 12 09:38:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:38:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWHR-0005y2-PI; Wed, 12 Feb 2014 09:38:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDWHP-0005xg-Qk
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:38:55 +0000
Received: from [85.158.143.35:50151] by server-3.bemta-4.messagelabs.com id
	8F/11-11539-F214BF25; Wed, 12 Feb 2014 09:38:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392197933!5048036!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2273 invoked from network); 12 Feb 2014 09:38:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:38:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="101856818"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 09:38:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 04:38:51 -0500
Message-ID: <1392197930.13563.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 12 Feb 2014 09:38:50 +0000
In-Reply-To: <1392146352-16381-2-git-send-email-david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-2-git-send-email-david.vrabel@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] x86/xen: allow for privcmd hypercalls
 to be preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 19:19 +0000, David Vrabel wrote:
> Implement is_preemptible_hypercall() by adding a second hypercall page
> (preemptible_hypercall_page, copied from hypercall_page).  Calls made
> via the new page may be preempted.

Wouldn't a per-cpu flag variable set for the duration of privcmd_call do
the job just as well without requiring per-arch knowledge?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:56:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWYZ-0006Zt-L8; Wed, 12 Feb 2014 09:56:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDWYY-0006Zo-FK
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 09:56:38 +0000
Received: from [193.109.254.147:4088] by server-3.bemta-14.messagelabs.com id
	4E/2A-00432-5554BF25; Wed, 12 Feb 2014 09:56:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392198995!3752853!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23787 invoked from network); 12 Feb 2014 09:56:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:56:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="101860325"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 09:56:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 04:56:34 -0500
Message-ID: <1392198993.13563.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Miguel Clara <miguelmclara@gmail.com>
Date: Wed, 12 Feb 2014 09:56:33 +0000
In-Reply-To: <CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391422624.10515.27.camel@kazak.uk.xensource.com>
	<17ef2b9f-3d57-4909-be94-4c560c3dfc57@email.android.com>
	<CADGo8CVmZ_Ot5PY+7Ec8mmeeF4LCF=53yCK6KWE1jxQob0EtdQ@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Please could you not top post.

On Wed, 2014-02-12 at 02:48 +0000, Miguel Clara wrote:
> Parsing config from test.cfg
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x9a9f30: create: how=(nil) callback=(nil) poller=0x9a9f90
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda, uses script=... assuming phy backend
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=(null) spec.backend=phy
> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null), uses script=... assuming phy backend
> libxl: debug: libxl.c:2604:libxl__device_disk_local_initiate_attach: locally attaching PHY disk drbd-remus-test
> libxl: debug: libxl_bootloader.c:409:bootloader_disk_attached_cb: Config bootloader value: pygrub
> libxl: debug: libxl_bootloader.c:425:bootloader_disk_attached_cb: Checking for bootloader in libexec path: /usr/local/lib/xen/bin/pygrub
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x9a9f30: inprogress: poller=0x9a9f90, flags=i
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x9aa3c8 wpath=/local/domain/35 token=3/0: register slotnum=3
> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x9a9f30: progress report: ignored
> libxl: debug: libxl_bootloader.c:535:bootloader_gotptys: executing bootloader: /usr/local/lib/xen/bin/pygrub
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: /usr/local/lib/xen/bin/pygrub
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --args=root=/dev/xvda1 rw console=hvc0 xencons=tty
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --output=/var/run/xen/bootloader.35.out
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --output-format=simple0
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --output-directory=/var/run/xen/bootloader.35.d
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: drbd-remus-test
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x9aa3c8 wpath=/local/domain/35 token=3/0: event epath=/local/domain/35
> libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader failed - consult logfile /var/log/xen/bootloader.35.log
> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [12781] exited with error status 1
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x9aa3c8 wpath=/local/domain/35 token=3/0: deregister slotnum=3
> libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot (re-)build domain: -3
> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x9a9f30: complete, rc=-3
> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x9a9f30: destroy
> xc: debug: hypercall buffer: total allocations:20 total releases:20
> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
> xc: debug: hypercall buffer: cache current size:2
> xc: debug: hypercall buffer: cache hits:16 misses:2 toobig:2
> 
> 
> This part:
> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: drbd-remus-test
> 
> Does seem weird...

Indeed, especially when there is a preceding
libxl: debug: libxl.c:2604:libxl__device_disk_local_initiate_attach: locally attaching PHY disk drbd-remus-test
which should have resulted in an xvda device to use.

Hrm, that function handles phy devices as a straight passthrough of the
underlying device, which is the source of the error.

I wonder if we should add some special handling of disk devices which
have a non-null script. I guess that would look something like the QDISK
bit of libxl__device_disk_local_initiate_attach but gated on
disk->script rather than ->format. i.e.:
        case LIBXL_DISK_BACKEND_PHY:
            if (disk->script) {
                libxl__prepare_ao_device(ao, &dls->aodev);
                dls->aodev.callback = local_device_attach_cb;
                device_disk_add(egc, LIBXL_TOOLSTACK_DOMID, disk,
                                &dls->aodev, libxl__alloc_vdev,
                                (void *) blkdev_start);
                return;
            } else {
                dev = disk->pdev_path;
            }

Would also need some code in libxl__device_disk_local_initiate_detach.
Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 09:59:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 09:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWb4-0006jb-8E; Wed, 12 Feb 2014 09:59:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDWb2-0006jT-JA
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 09:59:12 +0000
Received: from [85.158.139.211:60844] by server-2.bemta-5.messagelabs.com id
	FC/21-23037-FE54BF25; Wed, 12 Feb 2014 09:59:11 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392199149!3367781!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13616 invoked from network); 12 Feb 2014 09:59:10 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 09:59:10 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392199150; x=1423735150;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=VQ1bZfbR5TFA878mr32+zievuatLhxYNBo0LE+k+9jY=;
	b=StrlemFDgNaL9/zy5j+ZxNRVyFxA7+1R/p0MXHkYwbYmf8r3QLgeTpqh
	93jIfjNh6Si3XvomMXfoIEzyvW2p1v+58hLOl0aDEFI+XCuU5xiUlMJQk
	fRacL6wX/OPR6moocYK+wsi05VoItEe1tT3LP8e+fz6BxCz0+yeCapSZK 0=;
X-IronPort-AV: E=Sophos;i="4.95,831,1384300800"; d="scan'208";a="78830585"
Received: from email-inbound-relay-60001.pdx1.amazon.com ([10.232.153.146])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 09:59:06 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by email-inbound-relay-60001.pdx1.amazon.com (8.14.7/8.14.7) with ESMTP
	id s1C9x6ZQ006156
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 09:59:06 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.342.3; Wed, 12 Feb 2014 01:59:00 -0800
Message-ID: <52FB45E2.8090703@amazon.de>
Date: Wed, 12 Feb 2014 10:58:58 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Aravind Gopalakrishnan
	<aravind.gopalakrishnan@amd.com>
References: <1391733176-2941-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52F4CBFD020000780011A2F1@nat28.tlf.novell.com>
	<20140207212724.GD8837@arav-dinar>
	<52F890CD020000780011A95D@nat28.tlf.novell.com>
In-Reply-To: <52F890CD020000780011A95D@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 10.02.14 08:41, Jan Beulich wrote:
>>>> On 07.02.14 at 22:27, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> wrote:
>> On Fri, Feb 07, 2014 at 11:05:17AM +0000, Jan Beulich wrote:
>>>>>> On 07.02.14 at 01:32, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> 
>> wrote:
>>>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>>>> -		v->arch.vmce.bank[1].mci_misc = val; 
>>>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>>>> -		break;
>>>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>>>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>>>> -		/* ignore write: we do not emulate link and l3 cache errors
>>>> -		 * to the guest.
>>>> -		 */
>>>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>>>> -		break;
>>>> -	default:
>>>> -		return 0;
>>>> -	}
>>>> +    /* If not present, #GP fault, else do nothing as we don't emulate */
>>>> +    if ( !amd_thresholding_reg_present(msr) )
>>>> +        return -1;
>>>
>>> The one thing I'm concerned about making this #GP in the guest is
>>> migration: With it being _newer_ CPUs implementing fewer of these
>>> MSRs, it would be impossible to migrate a guest from an older system
>>> to a newer one - a direction that (as long as the newer system
>>> provides all the hardware capabilities the older one has) is generally
>>> assumed to work. Bottom line - we're probably better off always
>>> dropping writes, and always returning zero for reads. Which will
>>> eliminate the need for amd_thresholding_reg_present().
>>>
>>
>> Before I go ahead and remove the function, few questions-
>>
>> Assuming there is a tool in the guest that accesses these MSRs,
>> wouldn't it be fair to expect that the tool keep in mind these MSRs
>> exist only in certain families?
>>
>> For example:
>> if there's a guest running on F10 that accesses 0xc000040a, that would
>> be fine. But once we migrate to a newer family, then the guest should
>> not even generate accesses to the MSR.
> 
> All correct, provided the family check and the MSR access aren't
> separated by a migration.
> 
>> Also, returning #GP to guests would mean keeping it consistent with HW
>> behavior. If we return zero for reads, (IMHO) it's not necessarily
>> correct information as the register does not even exist.. 
>>
>> Bare-metal cases will face same problems too.. but if a register doesn't
>> exist, then shouldn't OS/hypervisor just say so and let whoever
>> generated the access deal with it?
> 
> That's all valid argumentation as long as you leave migration out
> of the picture.

I agree with Jan. All the argumentation is valid from a hardware perspective.

Apart from migration, there is another perspective you miss completely:
the vmce_amd_* functions (and the corresponding Intel functions)
deal with *virtual* MSRs, i.e. with what should happen to the guest
when the guest accesses them.

This has absolutely nothing to do with what the hardware provides and
what it does not. The point is that the guest knows (or rather, assumes)
which MSRs exist from the CPU family/model information it gets via
cpuid. The question is what should happen when the guest accesses these MSRs.

To get this right, the questions are:
What should the hypervisor do for recovery?
Does it make sense to make the guest aware of it?

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 10:11:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 10:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWmS-0007Dp-Iw; Wed, 12 Feb 2014 10:11:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDWmQ-0007Dk-PO
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 10:10:58 +0000
Received: from [85.158.143.35:57728] by server-2.bemta-4.messagelabs.com id
	57/A8-10891-2B84BF25; Wed, 12 Feb 2014 10:10:58 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392199856!5041526!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12195 invoked from network); 12 Feb 2014 10:10:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 10:10:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100067384"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 10:10:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 05:10:44 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDWmC-0003Gs-Q3;
	Wed, 12 Feb 2014 10:10:44 +0000
Message-ID: <52FB48A4.9090205@citrix.com>
Date: Wed, 12 Feb 2014 10:10:44 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-2-git-send-email-david.vrabel@citrix.com>
	<1392197930.13563.7.camel@kazak.uk.xensource.com>
In-Reply-To: <1392197930.13563.7.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] x86/xen: allow for privcmd hypercalls
 to be preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/02/14 09:38, Ian Campbell wrote:
> On Tue, 2014-02-11 at 19:19 +0000, David Vrabel wrote:
>> Implement is_premptible_hypercall() by adding a second hypercall page
>> (preemptible_hypercall_page, copied from hypercall_page).  Calls made
>> via the new page may be preempted.
> Wouldn't a per-cpu flag variable set for the duration of privcmd_call do
> the job just as well without requiring per-arch knowledge?
>
> Ian.
>

Why should privcmd_call be the only preemptible call?  This code allows
anyone to voluntarily use the preemptible_hypercall_page.  I would not
be surprised if modules like gntdev could do with some preemption, but
it is the long-running toolstack hypercalls which are currently killing
any loaded XenServer system.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 10:18:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 10:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWtx-0007al-H4; Wed, 12 Feb 2014 10:18:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDWtv-0007aY-Cg
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 10:18:43 +0000
Received: from [193.109.254.147:44955] by server-4.bemta-14.messagelabs.com id
	33/06-32066-28A4BF25; Wed, 12 Feb 2014 10:18:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392200320!3739214!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22407 invoked from network); 12 Feb 2014 10:18:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 10:18:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101865700"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 10:18:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 05:18:11 -0500
Message-ID: <1392200290.13563.25.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 12 Feb 2014 10:18:10 +0000
In-Reply-To: <52FB48A4.9090205@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-2-git-send-email-david.vrabel@citrix.com>
	<1392197930.13563.7.camel@kazak.uk.xensource.com>
	<52FB48A4.9090205@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/3] x86/xen: allow for privcmd hypercalls
 to be preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 10:10 +0000, Andrew Cooper wrote:
> On 12/02/14 09:38, Ian Campbell wrote:
> > On Tue, 2014-02-11 at 19:19 +0000, David Vrabel wrote:
> >> Implement is_premptible_hypercall() by adding a second hypercall page
> >> (preemptible_hypercall_page, copied from hypercall_page).  Calls made
> >> via the new page may be preempted.
> > Wouldn't a per-cpu flag variable set for the duration of privcmd_call do
> > the job just as well without requiring per-arch knowledge?
> >
> > Ian.
> >
> 
> Why should privcmd_call be the only preemptible calls?  This code allows
> anyone to voluntarily use the preemptible_hypercall_page.  I would not
> be surprised if modules like gntdev could do with some preemption, but
> it is the long-running toolstack hypercalls which are currently killing
> any loaded XenServer system.

Other sites could equally well use the flag, you could even wrap it up
in xen_preemptible_hypercall_{start,end}.
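A minimal userspace sketch of the flag-based approach being suggested: a
per-cpu boolean set for the duration of the call, wrapped in start/end
helpers. In the kernel this would be a DEFINE_PER_CPU variable checked on
the interrupt-return path; here a thread-local models a single CPU, and
all names are assumptions drawn from the naming proposed in this thread,
not an existing API.

```c
#include <assert.h>
#include <stdbool.h>

/* One flag per CPU; a thread-local stands in for DEFINE_PER_CPU here. */
static _Thread_local bool xen_in_preemptible_hcall;

/* Bracket any hypercall that is safe to preempt. */
static inline void xen_preemptible_hypercall_start(void)
{
    xen_in_preemptible_hcall = true;
}

static inline void xen_preemptible_hypercall_end(void)
{
    xen_in_preemptible_hcall = false;
}

/* The interrupt-return path would consult this to decide whether the
 * interrupted hypercall may be rescheduled. */
static inline bool xen_is_preemptible_hypercall(void)
{
    return xen_in_preemptible_hcall;
}
```

The appeal of this over a second hypercall page is that any call site
(privcmd, gntdev, toolstack paths) can opt in by bracketing its call,
with no per-architecture page-copying knowledge required.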

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 10:18:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 10:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDWtx-0007as-TY; Wed, 12 Feb 2014 10:18:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDWtv-0007aX-E6
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 10:18:43 +0000
Received: from [85.158.137.68:54506] by server-14.bemta-3.messagelabs.com id
	ED/67-08196-28A4BF25; Wed, 12 Feb 2014 10:18:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392200320!1314286!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9149 invoked from network); 12 Feb 2014 10:18:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 10:18:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101865693"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 10:18:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 05:18:09 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDWtN-0006YU-B1;
	Wed, 12 Feb 2014 10:18:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDWtN-0004Od-9o;
	Wed, 12 Feb 2014 10:18:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24852-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 10:18:09 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24852: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24852 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24852/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-multivcpu 10 guest-saverestore           fail pass in 24851

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf
baseline version:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 10:43:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 10:43:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXHN-0008RU-Cm; Wed, 12 Feb 2014 10:42:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDXHL-0008RP-7Q
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 10:42:55 +0000
Received: from [85.158.139.211:7224] by server-11.bemta-5.messagelabs.com id
	C6/1D-23886-E205BF25; Wed, 12 Feb 2014 10:42:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392201772!3398894!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14624 invoked from network); 12 Feb 2014 10:42:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 10:42:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101870181"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 10:42:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 05:42:51 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDXHG-0006gI-VX;
	Wed, 12 Feb 2014 10:42:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 10:42:50 +0000
Message-ID: <1392201770-14927-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] Configure the Calxeda fabric on host
	boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The fabric on the Calxeda Midway boxes (marilith-* in osstest) does not learn
MAC addresses (at least not with the firmware we have, and with Calxeda having
folded this seems unlikely to get fixed). This means that guests do not get
network connectivity unless their MAC addresses are explicitly registered with
the fabric.

Registrations can be done with the bridge(8) tool from the iproute2 package,
which unfortunately is only present in Jessie onwards and not in Wheezy. So I
have done my own backport and placed it in $images/wheezy-iproute2, and
arranged for it to be installed along with the transitional iproute package
(built from the same source), which is needed to satisfy various dependencies.
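The registration loop that the patch appends to rc.local can be sketched in
shell as below. The prefix value and count are invented for illustration; real
prefixes come from the per-host gen-ether-prefix-base property, and the loop
mirrors the sprintf in setup_cxfabric.

```shell
# Sketch of the MAC pre-registration commands generated by setup_cxfabric.
# The prefix "00:16:3e:45" and count are example values, not from the patch.
gen_fdb_cmds () {
    prefix=$1; nr=$2; i=0
    while [ "$i" -lt "$nr" ]; do
        # select_ether allocates sequentially from $prefix:00:01, so count from 1
        i=$((i + 1))
        printf 'bridge fdb add %s:%02x:%02x dev eth0\n' \
               "$prefix" $((i >> 8)) $((i & 255))
    done
}

gen_fdb_cmds 00:16:3e:45 3
# first line printed: bridge fdb add 00:16:3e:45:00:01 dev eth0
```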

The registrations are ephemeral and need to be renewed on each reboot, so add
the necessary commands to rc.local during ts-xen-install.

This required leaking a certain amount of the implementation of select_ether.
Unless we want to do the bodge on every ts-guest-start, reboot, migrate, etc.,
this seems to be the best way.
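The piece of select_ether being exposed (as ether_prefix in the patch) is the
per-flight prefix derivation: the last two octets of gen-ether-prefix-base are
XORed with a flight-specific offset. A rough shell rendering, with both the
base prefix and the offset invented for the example:

```shell
# Hypothetical illustration of the ether_prefix derivation; the real values
# come from the host property and $mjobdb->gen_ether_offset().
base="5a:36:0e:11"   # imaginary gen-ether-prefix-base
offset=7             # imaginary per-flight offset
lhs=${base%:*:*}     # keep the first two octets: "5a:36"
hi=$(( 0x$(echo "$base" | cut -d: -f3) ))
lo=$(( 0x$(echo "$base" | cut -d: -f4) ))
pv=$(( ((hi << 8) | lo) ^ offset ))
printf '%s:%02x:%02x\n' "$lhs" $(( (pv >> 8) & 255 )) $(( pv & 255 ))
# prints 5a:36:0e:16
```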

In my testing I've set NRCXFabricMACs to 8. test-armhf-armhf-xl now gets past
ts-guest-start and as far as the migration tests.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 Osstest/CXFabric.pm    | 86 ++++++++++++++++++++++++++++++++++++++++++++++++++
 Osstest/TestSupport.pm | 21 ++++++++----
 ts-xen-install         |  3 ++
 3 files changed, 103 insertions(+), 7 deletions(-)
 create mode 100644 Osstest/CXFabric.pm

diff --git a/Osstest/CXFabric.pm b/Osstest/CXFabric.pm
new file mode 100644
index 0000000..690afba
--- /dev/null
+++ b/Osstest/CXFabric.pm
@@ -0,0 +1,86 @@
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+package Osstest::CXFabric;
+
+use strict;
+use warnings;
+
+use Osstest;
+use Osstest::TestSupport;
+
+BEGIN {
+    use Exporter ();
+    our ($VERSION, @ISA, @EXPORT, @EXPORT_OK, %EXPORT_TAGS);
+    $VERSION     = 1.00;
+    @ISA         = qw(Exporter);
+    @EXPORT      = qw(
+                      setup_cxfabric
+                      );
+    %EXPORT_TAGS = ( );
+
+    @EXPORT_OK   = qw();
+}
+
+sub setup_cxfabric($)
+{
+    my ($ho) = @_;
+
+    my $nr = get_host_property($ho, 'NRCXFabricMACs', 0);
+    return unless $nr;
+
+    die unless $nr < 2**16;
+
+    my $prefix = ether_prefix($ho);
+    logm("Registering $nr MAC addresses with CX fabric using prefix $prefix");
+
+    if ( $ho->{Suite} =~ m/wheezy/ )
+    {
+        # iproute2 is not in Wheezy nor wheezy-backports. Use our own backport.
+        my $images = "$c{Images}/wheezy-iproute2";
+        my $ver = '3.12.0-1~xen70+1';
+
+        target_putfile_root($ho, 10, "$images/iproute_${ver}_all.deb", "iproute_${ver}_all.deb");
+        target_putfile_root($ho, 10, "$images/iproute2_${ver}_armhf.deb", "iproute2_${ver}_armhf.deb");
+
+        target_cmd_root($ho, <<END);
+dpkg -i iproute_${ver}_all.deb iproute2_${ver}_armhf.deb
+END
+    } else {
+        target_install_packages($ho, qw(iproute2));
+    }
+
+    my $banner = '# osstest: register potential guest MACs with CX fabric';
+    my $rclocal = "$banner\n";
+    # Osstest::TestSupport::select_ether allocates sequentially from $prefix:00:01
+    my $i = 0;
+    while ( $i++ < $nr ) {
+        $rclocal .= sprintf("bridge fdb add $prefix:%02x:%02x dev eth0\n",
+                            $i >> 8, $i & 0xff);
+    }
+
+    target_editfile_root($ho, '/etc/rc.local', sub {
+        my $had_banner = 0;
+        while (<::EI>) {
+            $had_banner = 1 if m/^$banner$/;
+            print ::EO $rclocal if m/^exit 0$/ && !$had_banner;
+            print ::EO;
+        }
+    });
+
+}
+
+1;
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index a513540..98eb172 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -97,6 +97,8 @@ BEGIN {
                       await_webspace_fetch_byleaf create_webfile
                       file_link_contents get_timeout
                       setup_pxeboot setup_pxeboot_local host_pxefile
+
+                      ether_prefix
                       );
     %EXPORT_TAGS = ( );
 
@@ -1245,6 +1247,17 @@ sub target_choose_vg ($$) {
     return $bestvg;
 }
 
+sub ether_prefix($) {
+    my ($ho) = @_;
+    my $prefix = get_host_property($ho, 'gen-ether-prefix-base');
+    $prefix =~ m/^(\w+:\w+):(\w+):(\w+)$/ or die "$prefix ?";
+    my $lhs = $1;
+    my $pv = (hex($2)<<8) | (hex($3));
+    $pv ^= $mjobdb->gen_ether_offset($ho,$flight);
+    $prefix = sprintf "%s:%02x:%02x", $lhs, ($pv>>8)&0xff, $pv&0xff;
+    return $prefix;
+}
+
 sub select_ether ($$) {
     my ($ho,$vn) = @_;
     # must be run outside transaction
@@ -1252,13 +1265,7 @@ sub select_ether ($$) {
     return $ether if defined $ether;
 
     db_retry($flight,'running', $dbh_tests,[qw(flights)], sub {
-	my $prefix = get_host_property($ho, 'gen-ether-prefix-base');
-	$prefix =~ m/^(\w+:\w+):(\w+):(\w+)$/ or die "$prefix ?";
-	my $lhs = $1;
-	my $pv = (hex($2)<<8) | (hex($3));
-	$pv ^= $mjobdb->gen_ether_offset($ho,$flight);
-	$prefix = sprintf "%s:%02x:%02x", $lhs, ($pv>>8)&0xff, $pv&0xff;
-
+	my $prefix = ether_prefix($ho);
 	my $glob_ether = $mjobdb->jobdb_db_glob('*_ether');
 
         my $previous= $dbh_tests->selectrow_array(<<END, {}, $flight);
diff --git a/ts-xen-install b/ts-xen-install
index fc96516..903ed45 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -22,6 +22,7 @@ use File::Path;
 use POSIX;
 use Osstest::Debian;
 use Osstest::TestSupport;
+use Osstest::CXFabric;
 
 my $checkmode= 0;
 
@@ -122,6 +123,8 @@ sub adjustconfig () {
     });
 
     target_cmd_root($ho, 'mkdir -p /var/log/xen/console');
+
+    setup_cxfabric($ho);
 }
 
 sub setupboot () {
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 10:47:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 10:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXLX-00009D-2k; Wed, 12 Feb 2014 10:47:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDXLW-000098-29
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 10:47:14 +0000
Received: from [85.158.137.68:26309] by server-14.bemta-3.messagelabs.com id
	5B/75-08196-1315BF25; Wed, 12 Feb 2014 10:47:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392202030!1319624!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13229 invoked from network); 12 Feb 2014 10:47:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 10:47:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101870569"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 10:46:10 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 05:46:10 -0500
Message-ID: <52FB50F0.70106@citrix.com>
Date: Wed, 12 Feb 2014 11:46:08 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Miguel Clara
	<miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>	
	<1391513622.10515.75.camel@kazak.uk.xensource.com>	
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>	
	<1391528110.6497.32.camel@kazak.uk.xensource.com>	
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>	
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>	
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>	
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>	
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>	
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>	
	<1391681000.23098.29.camel@kazak.uk.xensource.com>	
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>	
	<1392042223.26657.7.camel@kazak.uk.xensource.com>	
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
In-Reply-To: <1392198993.13563.13.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMTIvMDIvMTQgMTA6NTYsIElhbiBDYW1wYmVsbCB3cm90ZToKPiBQbGVhc2UgY291bGQgeW91
IG5vdCB0b3AgcG9zdC4KPiAKPiBPbiBXZWQsIDIwMTQtMDItMTIgYXQgMDI6NDggKzAwMDAsIE1p
Z3VlbCBDbGFyYSB3cm90ZToKPj4gUGFyc2luZyBjb25maWcgZnJvbSB0ZXN0LmNmZwo+PiBsaWJ4
bDogZGVidWc6IGxpYnhsX2NyZWF0ZS5jOjEyMzA6ZG9fZG9tYWluX2NyZWF0ZTogYW8gMHg5YTlm
MzA6IGNyZWF0ZTogaG93PShuaWwpIGNhbGxiYWNrPShuaWwpIHBvbGxlcj0weDlhOWY5MAo+PiBs
aWJ4bDogZGVidWc6IGxpYnhsX2RldmljZS5jOjI1NzpsaWJ4bF9fZGV2aWNlX2Rpc2tfc2V0X2Jh
Y2tlbmQ6IERpc2sgdmRldj14dmRhIHNwZWMuYmFja2VuZD11bmtub3duCj4+IGxpYnhsOiBkZWJ1
ZzogbGlieGxfZGV2aWNlLmM6MTg4OmRpc2tfdHJ5X2JhY2tlbmQ6IERpc2sgdmRldj14dmRhLCB1
c2VzIHNjcmlwdD0uLi4gYXNzdW1pbmcgcGh5IGJhY2tlbmQKPj4gbGlieGw6IGRlYnVnOiBsaWJ4
bF9kZXZpY2UuYzoyOTY6bGlieGxfX2RldmljZV9kaXNrX3NldF9iYWNrZW5kOiBEaXNrIHZkZXY9
eHZkYSwgdXNpbmcgYmFja2VuZCBwaHkKPj4gbGlieGw6IGRlYnVnOiBsaWJ4bF9jcmVhdGUuYzo2
NzU6aW5pdGlhdGVfZG9tYWluX2NyZWF0ZTogcnVubmluZyBib290bG9hZGVyCj4+IGxpYnhsOiBk
ZWJ1ZzogbGlieGxfZGV2aWNlLmM6MjU3OmxpYnhsX19kZXZpY2VfZGlza19zZXRfYmFja2VuZDog
RGlzayB2ZGV2PShudWxsKSBzcGVjLmJhY2tlbmQ9cGh5Cj4+IGxpYnhsOiBkZWJ1ZzogbGlieGxf
ZGV2aWNlLmM6MTg4OmRpc2tfdHJ5X2JhY2tlbmQ6IERpc2sgdmRldj0obnVsbCksIHVzZXMgc2Ny
aXB0PS4uLiBhc3N1bWluZyBwaHkgYmFja2VuZAo+PiBsaWJ4bDogZGVidWc6IGxpYnhsLmM6MjYw
NDpsaWJ4bF9fZGV2aWNlX2Rpc2tfbG9jYWxfaW5pdGlhdGVfYXR0YWNoOiBsb2NhbGx5IGF0dGFj
aGluZyBQSFkgZGlzayBkcmJkLXJlbXVzLXRlc3QKPj4gbGlieGw6IGRlYnVnOiBsaWJ4bF9ib290
bG9hZGVyLmM6NDA5OmJvb3Rsb2FkZXJfZGlza19hdHRhY2hlZF9jYjogQ29uZmlnIGJvb3Rsb2Fk
ZXIgdmFsdWU6IHB5Z3J1Ygo+PiBsaWJ4bDogZGVidWc6IGxpYnhsX2Jvb3Rsb2FkZXIuYzo0MjU6
Ym9vdGxvYWRlcl9kaXNrX2F0dGFjaGVkX2NiOiBDaGVja2luZyBmb3IgYm9vdGxvYWRlciBpbiBs
aWJleGVjIHBhdGg6IC91c3IvbG9jYWwvbGliL3hlbi9iaW4vcHlncnViCj4+IGxpYnhsOiBkZWJ1
ZzogbGlieGxfY3JlYXRlLmM6MTI0Mzpkb19kb21haW5fY3JlYXRlOiBhbyAweDlhOWYzMDogaW5w
cm9ncmVzczogcG9sbGVyPTB4OWE5ZjkwLCBmbGFncz1pCj4+IGxpYnhsOiBkZWJ1ZzogbGlieGxf
ZXZlbnQuYzo1NTk6bGlieGxfX2V2X3hzd2F0Y2hfcmVnaXN0ZXI6IHdhdGNoIHc9MHg5YWEzYzgg
d3BhdGg9L2xvY2FsL2RvbWFpbi8zNSB0b2tlbj0zLzA6IHJlZ2lzdGVyIHNsb3RudW09Mwo+PiBs
aWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6MTczNzpsaWJ4bF9fYW9fcHJvZ3Jlc3NfcmVwb3J0
OiBhbyAweDlhOWYzMDogcHJvZ3Jlc3MgcmVwb3J0OiBpZ25vcmVkCj4+IGxpYnhsOiBkZWJ1Zzog
bGlieGxfYm9vdGxvYWRlci5jOjUzNTpib290bG9hZGVyX2dvdHB0eXM6IGV4ZWN1dGluZyBib290
bG9hZGVyOiAvdXNyL2xvY2FsL2xpYi94ZW4vYmluL3B5Z3J1Ygo+PiBsaWJ4bDogZGVidWc6IGxp
YnhsX2Jvb3Rsb2FkZXIuYzo1Mzk6Ym9vdGxvYWRlcl9nb3RwdHlzOiAgIGJvb3Rsb2FkZXIgYXJn
OiAvdXNyL2xvY2FsL2xpYi94ZW4vYmluL3B5Z3J1Ygo+PiBsaWJ4bDogZGVidWc6IGxpYnhsX2Jv
b3Rsb2FkZXIuYzo1Mzk6Ym9vdGxvYWRlcl9nb3RwdHlzOiAgIGJvb3Rsb2FkZXIgYXJnOiAtLWFy
Z3M9cm9vdD0vZGV2L3h2ZGExIHJ3IGNvbnNvbGU9aHZjMCB4ZW5jb25zPXR0eQo+PiBsaWJ4bDog
ZGVidWc6IGxpYnhsX2Jvb3Rsb2FkZXIuYzo1Mzk6Ym9vdGxvYWRlcl9nb3RwdHlzOiAgIGJvb3Rs
b2FkZXIgYXJnOiAtLW91dHB1dD0vdmFyL3J1bi94ZW4vYm9vdGxvYWRlci4zNS5vdXQKPj4gbGli
eGw6IGRlYnVnOiBsaWJ4bF9ib290bG9hZGVyLmM6NTM5OmJvb3Rsb2FkZXJfZ290cHR5czogICBi
b290bG9hZGVyIGFyZzogLS1vdXRwdXQtZm9ybWF0PXNpbXBsZTAKPj4gbGlieGw6IGRlYnVnOiBs
aWJ4bF9ib290bG9hZGVyLmM6NTM5OmJvb3Rsb2FkZXJfZ290cHR5czogICBib290bG9hZGVyIGFy
ZzogLS1vdXRwdXQtZGlyZWN0b3J5PS92YXIvcnVuL3hlbi9ib290bG9hZGVyLjM1LmQKPj4gbGli
eGw6IGRlYnVnOiBsaWJ4bF9ib290bG9hZGVyLmM6NTM5OmJvb3Rsb2FkZXJfZ290cHR5czogICBi
b290bG9hZGVyIGFyZzogZHJiZC1yZW11cy10ZXN0Cj4+IGxpYnhsOiBkZWJ1ZzogbGlieGxfZXZl
bnQuYzo1MDM6d2F0Y2hmZF9jYWxsYmFjazogd2F0Y2ggdz0weDlhYTNjOCB3cGF0aD0vbG9jYWwv
ZG9tYWluLzM1IHRva2VuPTMvMDogZXZlbnQgZXBhdGg9L2xvY2FsL2RvbWFpbi8zNQo+PiBsaWJ4
bDogZXJyb3I6IGxpYnhsX2Jvb3Rsb2FkZXIuYzo2Mjg6Ym9vdGxvYWRlcl9maW5pc2hlZDogYm9v
dGxvYWRlciBmYWlsZWQgLSBjb25zdWx0IGxvZ2ZpbGUgL3Zhci9sb2cveGVuL2Jvb3Rsb2FkZXIu
MzUubG9nCj4+IGxpYnhsOiBlcnJvcjogbGlieGxfZXhlYy5jOjExODpsaWJ4bF9yZXBvcnRfY2hp
bGRfZXhpdHN0YXR1czogYm9vdGxvYWRlciBbMTI3ODFdIGV4aXRlZCB3aXRoIGVycm9yIHN0YXR1
cyAxCj4+IGxpYnhsOiBkZWJ1ZzogbGlieGxfZXZlbnQuYzo1OTY6bGlieGxfX2V2X3hzd2F0Y2hf
ZGVyZWdpc3Rlcjogd2F0Y2ggdz0weDlhYTNjOCB3cGF0aD0vbG9jYWwvZG9tYWluLzM1IHRva2Vu
PTMvMDogZGVyZWdpc3RlciBzbG90bnVtPTMKPj4gbGlieGw6IGVycm9yOiBsaWJ4bF9jcmVhdGUu
Yzo5MDA6ZG9tY3JlYXRlX3JlYnVpbGRfZG9uZTogY2Fubm90IChyZS0pYnVpbGQgZG9tYWluOiAt
Mwo+PiBsaWJ4bDogZGVidWc6IGxpYnhsX2V2ZW50LmM6MTU2OTpsaWJ4bF9fYW9fY29tcGxldGU6
IGFvIDB4OWE5ZjMwOiBjb21wbGV0ZSwgcmM9LTMKPj4gbGlieGw6IGRlYnVnOiBsaWJ4bF9ldmVu
dC5jOjE1NDE6bGlieGxfX2FvX19kZXN0cm95OiBhbyAweDlhOWYzMDogZGVzdHJveQo+PiB4Yzog
ZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6IHRvdGFsIGFsbG9jYXRpb25zOjIwIHRvdGFsIHJlbGVh
c2VzOjIwCj4+IHhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY3VycmVudCBhbGxvY2F0aW9u
czowIG1heGltdW0gYWxsb2NhdGlvbnM6Mgo+PiB4YzogZGVidWc6IGh5cGVyY2FsbCBidWZmZXI6
IGNhY2hlIGN1cnJlbnQgc2l6ZToyCj4+IHhjOiBkZWJ1ZzogaHlwZXJjYWxsIGJ1ZmZlcjogY2Fj
aGUgaGl0czoxNiBtaXNzZXM6MiB0b29iaWc6Mgo+Pgo+Pgo+PiBUaGlzIHBhcnQ6Cj4+IGxpYnhs
OiBkZWJ1ZzogbGlieGxfYm9vdGxvYWRlci5jOjUzOTpib290bG9hZGVyX2dvdHB0eXM6ICAgYm9v
dGxvYWRlciBhcmc6IGRyYmQtcmVtdXMtdGVzdAo+Pgo+PiBEb2VzIHNlZW0gd2VpcmQuLi4KPiAK
PiBJbmRlZWQsIGVzcGVjaWFsbHkgd2hlbiB0aGVyZSBpcyBhIHByZWNlZGluZwo+IGxpYnhsOiBk
ZWJ1ZzogbGlieGwuYzoyNjA0OmxpYnhsX19kZXZpY2VfZGlza19sb2NhbF9pbml0aWF0ZV9hdHRh
Y2g6IGxvY2FsbHkgYXR0YWNoaW5nIFBIWSBkaXNrIGRyYmQtcmVtdXMtdGVzdAo+IHdoaWNoIHNo
b3VsZCBoYXZlIHJlc3VsdGVkIGluIGFuIHh2ZGEgZGV2aWNlIHRvIHVzZS4KPiAKPiBIcm0sIHRo
YXQgZnVuY3Rpb24gaGFuZGxlcyBwaHkgZGV2aWNlcyBhcyBhIHN0cmFpZ2h0IHBhc3N0aHJvdWdo
IG9mIHRoZQo+IHVuZGVybHlpbmcgZGV2aWNlLCB3aGljaCBpcyB0aGUgc291cmNlIG9mIHRoZSBl
cnJvci4KPiAKPiBJIHdvbmRlciBpZiB3ZSBzaG91bGQgYWRkIHNvbWUgc3BlY2lhbCBoYW5kbGlu
ZyBvZiBkaXNrIGRldmljZXMgd2hpY2gKPiBoYXZlIGEgbm9uLW51bGwgc2NyaXB0LiBJIGd1ZXNz
IHRoYXQgd291bGQgbG9vayBzb21ldGhpbmcgbGlrZSB0aGUgUURJU0sKPiBiaXQgb2YgbGlieGxf
X2RldmljZV9kaXNrX2xvY2FsX2luaXRpYXRlX2F0dGFjaCBidXQgZ2F0ZWQgb24KPiBkaXNrLT5z
Y3JpcHQgcmF0aGVyIHRoYW4gLT5mb3JtYXQuIGkuZS46Cj4gICAgICAgICBjYXNlIExJQlhMX0RJ
U0tfQkFDS0VORF9QSFk6Cj4gICAgICAgICAgICAgICAgIGlmIChkaXNrLT5zY3JpcHQpIHsKPiAg
ICAgICAgICAgICAgICAgbGlieGxfX3ByZXBhcmVfYW9fZGV2aWNlKGFvLCAmZGxzLT5hb2Rldik7
>                 dls->aodev.callback = local_device_attach_cb;
>                 device_disk_add(egc, LIBXL_TOOLSTACK_DOMID, disk,
>                                  &dls->aodev, libxl__alloc_vdev,
>                                  (void *) blkdev_start);
>                 return;
>             } else {
>                 dev = disk->pdev_path;
>             }
> 
> Would also need some code in libxl__device_disk_local_initiate_detach.
> Ian.

I thought this was already working in libxl... Could you please test the
attached patch? It is basically the chunk Ian posted above plus the
libxl__device_disk_local_initiate_detach part.

---
commit 3bcf91cbbd9a18db9ae7d594ffde7979774ed512
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Wed Feb 12 11:15:17 2014 +0100

    libxl: local attach support for PHY backends using scripts

    Allow disks using the PHY backend to locally attach if using a script.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Suggested-by: Ian Campbell <ian.campbell@citrix.com>

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 730f6e1..5cb46a1 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2630,6 +2630,16 @@ void libxl__device_disk_local_initiate_attach(libxl__egc *egc,
 
     switch (disk->backend) {
         case LIBXL_DISK_BACKEND_PHY:
+            if (disk->script != NULL) {
+                LOG(DEBUG, "trying to locally attach PHY device %s with script %s",
+                           disk->pdev_path, disk->script);
+                libxl__prepare_ao_device(ao, &dls->aodev);
+                dls->aodev.callback = local_device_attach_cb;
+                device_disk_add(egc, LIBXL_TOOLSTACK_DOMID, disk,
+                                &dls->aodev, libxl__alloc_vdev,
+                                (void *) blkdev_start);
+                return;
+            }
             LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "locally attaching PHY disk %s",
                        disk->pdev_path);
             dev = disk->pdev_path;
@@ -2709,7 +2719,7 @@ static void local_device_attach_cb(libxl__egc *egc, libxl__ao_device *aodev)
     }
 
     dev = GCSPRINTF("/dev/%s", disk->vdev);
-    LOG(DEBUG, "locally attaching qdisk %s", dev);
+    LOG(DEBUG, "locally attached disk %s", dev);
 
     rc = libxl__device_from_disk(gc, LIBXL_TOOLSTACK_DOMID, disk, &device);
     if (rc < 0)
@@ -2749,6 +2759,7 @@ void libxl__device_disk_local_initiate_detach(libxl__egc *egc,
     if (!dls->diskpath) goto out;
 
     switch (disk->backend) {
+        case LIBXL_DISK_BACKEND_PHY:
         case LIBXL_DISK_BACKEND_QDISK:
             if (disk->vdev != NULL) {
                 GCNEW(device);
@@ -2766,7 +2777,6 @@ void libxl__device_disk_local_initiate_detach(libxl__egc *egc,
             /* disk->vdev == NULL; fall through */
        default:
             /*
-             * Nothing to do for PHYSTYPE_PHY.
              * For other device types assume that the blktap2 process is
              * needed by the soon to be started domain and do nothing.
              */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 10:47:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 10:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXLX-00009D-2k; Wed, 12 Feb 2014 10:47:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDXLW-000098-29
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 10:47:14 +0000
Received: from [85.158.137.68:26309] by server-14.bemta-3.messagelabs.com id
	5B/75-08196-1315BF25; Wed, 12 Feb 2014 10:47:13 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392202030!1319624!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13229 invoked from network); 12 Feb 2014 10:47:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 10:47:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101870569"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 10:46:10 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 05:46:10 -0500
Message-ID: <52FB50F0.70106@citrix.com>
Date: Wed, 12 Feb 2014 11:46:08 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Miguel Clara
	<miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>	
	<1391513622.10515.75.camel@kazak.uk.xensource.com>	
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>	
	<1391528110.6497.32.camel@kazak.uk.xensource.com>	
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>	
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>	
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>	
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>	
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>	
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>	
	<1391681000.23098.29.camel@kazak.uk.xensource.com>	
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>	
	<1392042223.26657.7.camel@kazak.uk.xensource.com>	
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
In-Reply-To: <1392198993.13563.13.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/02/14 10:56, Ian Campbell wrote:
> Please could you not top post.
> 
> On Wed, 2014-02-12 at 02:48 +0000, Miguel Clara wrote:
>> Parsing config from test.cfg
>> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x9a9f30: create: how=(nil) callback=(nil) poller=0x9a9f90
>> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=xvda spec.backend=unknown
>> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda, uses script=... assuming phy backend
>> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk vdev=xvda, using backend phy
>> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
>> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk vdev=(null) spec.backend=phy
>> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null), uses script=... assuming phy backend
>> libxl: debug: libxl.c:2604:libxl__device_disk_local_initiate_attach: locally attaching PHY disk drbd-remus-test
>> libxl: debug: libxl_bootloader.c:409:bootloader_disk_attached_cb: Config bootloader value: pygrub
>> libxl: debug: libxl_bootloader.c:425:bootloader_disk_attached_cb: Checking for bootloader in libexec path: /usr/local/lib/xen/bin/pygrub
>> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x9a9f30: inprogress: poller=0x9a9f90, flags=i
>> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch w=0x9aa3c8 wpath=/local/domain/35 token=3/0: register slotnum=3
>> libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x9a9f30: progress report: ignored
>> libxl: debug: libxl_bootloader.c:535:bootloader_gotptys: executing bootloader: /usr/local/lib/xen/bin/pygrub
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: /usr/local/lib/xen/bin/pygrub
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --args=root=/dev/xvda1 rw console=hvc0 xencons=tty
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --output=/var/run/xen/bootloader.35.out
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --output-format=simple0
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: --output-directory=/var/run/xen/bootloader.35.d
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: drbd-remus-test
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x9aa3c8 wpath=/local/domain/35 token=3/0: event epath=/local/domain/35
>> libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader failed - consult logfile /var/log/xen/bootloader.35.log
>> libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [12781] exited with error status 1
>> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch w=0x9aa3c8 wpath=/local/domain/35 token=3/0: deregister slotnum=3
>> libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot (re-)build domain: -3
>> libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x9a9f30: complete, rc=-3
>> libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x9a9f30: destroy
>> xc: debug: hypercall buffer: total allocations:20 total releases:20
>> xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
>> xc: debug: hypercall buffer: cache current size:2
>> xc: debug: hypercall buffer: cache hits:16 misses:2 toobig:2
>>
>>
>> This part:
>> libxl: debug: libxl_bootloader.c:539:bootloader_gotptys:   bootloader arg: drbd-remus-test
>>
>> Does seem weird...
> 
> Indeed, especially when there is a preceding
> libxl: debug: libxl.c:2604:libxl__device_disk_local_initiate_attach: locally attaching PHY disk drbd-remus-test
> which should have resulted in an xvda device to use.
> 
> Hrm, that function handles phy devices as a straight passthrough of the
> underlying device, which is the source of the error.
> 
> I wonder if we should add some special handling of disk devices which
> have a non-null script. I guess that would look something like the QDISK
> bit of libxl__device_disk_local_initiate_attach but gated on
> disk->script rather than ->format. i.e.:
>         case LIBXL_DISK_BACKEND_PHY:
>                 if (disk->script) {
>                 libxl__prepare_ao_device(ao, &dls->aodev);
>                 dls->aodev.callback = local_device_attach_cb;
>                 device_disk_add(egc, LIBXL_TOOLSTACK_DOMID, disk,
>                                  &dls->aodev, libxl__alloc_vdev,
>                                  (void *) blkdev_start);
>                 return;
>             } else {
>                 dev = disk->pdev_path;
>             }
> 
> Would also need some code in libxl__device_disk_local_initiate_detach.
> Ian.

I thought this was already working in libxl... Could you please test the
attached patch? It is basically the chunk Ian posted above plus the
libxl__device_disk_local_initiate_detach part.

---
commit 3bcf91cbbd9a18db9ae7d594ffde7979774ed512
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Wed Feb 12 11:15:17 2014 +0100

    libxl: local attach support for PHY backends using scripts

    Allow disks using the PHY backend to locally attach if using a script.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Suggested-by: Ian Campbell <ian.campbell@citrix.com>

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 730f6e1..5cb46a1 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -2630,6 +2630,16 @@ void libxl__device_disk_local_initiate_attach(libxl__egc *egc,
 
     switch (disk->backend) {
         case LIBXL_DISK_BACKEND_PHY:
+            if (disk->script != NULL) {
+                LOG(DEBUG, "trying to locally attach PHY device %s with script %s",
+                           disk->pdev_path, disk->script);
+                libxl__prepare_ao_device(ao, &dls->aodev);
+                dls->aodev.callback = local_device_attach_cb;
+                device_disk_add(egc, LIBXL_TOOLSTACK_DOMID, disk,
+                                &dls->aodev, libxl__alloc_vdev,
+                                (void *) blkdev_start);
+                return;
+            }
             LIBXL__LOG(ctx, LIBXL__LOG_DEBUG, "locally attaching PHY disk %s",
                        disk->pdev_path);
             dev = disk->pdev_path;
@@ -2709,7 +2719,7 @@ static void local_device_attach_cb(libxl__egc *egc, libxl__ao_device *aodev)
     }
 
     dev = GCSPRINTF("/dev/%s", disk->vdev);
-    LOG(DEBUG, "locally attaching qdisk %s", dev);
+    LOG(DEBUG, "locally attached disk %s", dev);
 
     rc = libxl__device_from_disk(gc, LIBXL_TOOLSTACK_DOMID, disk, &device);
     if (rc < 0)
@@ -2749,6 +2759,7 @@ void libxl__device_disk_local_initiate_detach(libxl__egc *egc,
     if (!dls->diskpath) goto out;
 
     switch (disk->backend) {
+        case LIBXL_DISK_BACKEND_PHY:
         case LIBXL_DISK_BACKEND_QDISK:
             if (disk->vdev != NULL) {
                 GCNEW(device);
@@ -2766,7 +2777,6 @@ void libxl__device_disk_local_initiate_detach(libxl__egc *egc,
             /* disk->vdev == NULL; fall through */
        default:
             /*
-             * Nothing to do for PHYSTYPE_PHY.
              * For other device types assume that the blktap2 process is
              * needed by the soon to be started domain and do nothing.
              */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:11:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXiV-000135-FB; Wed, 12 Feb 2014 11:10:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDXiT-00012y-If
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:10:57 +0000
Received: from [85.158.137.68:3829] by server-7.bemta-3.messagelabs.com id
	57/A1-13775-0C65BF25; Wed, 12 Feb 2014 11:10:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392203453!320868!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11719 invoked from network); 12 Feb 2014 11:10:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:10:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101875163"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:10:51 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 06:10:51 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDXiN-0006os-1i;
	Wed, 12 Feb 2014 11:10:51 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 11:10:51 +0000
Message-ID: <1392203451-15422-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] ts-hosts-allocate-Standalone: abort if
	the host to use has changed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When a job has been run once, the selected host is stored in a runvar and
used from then on. This means that if you try to run on a different host (by
changing the config or by changing OSSTEST_HOST_HOST) you may be
surprised when things happen to the original host rather than the new one.

Abort when this is detected.

Changing host requires you to run:
    ./cs-adjust-flight -v $flight runvar-del $job host
where:
    $flight = standalone by default
    $job    = test-x-y-z or build-x etc.
    host    = the literal string "host" (not either of the host names)

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 ts-hosts-allocate-Standalone | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/ts-hosts-allocate-Standalone b/ts-hosts-allocate-Standalone
index 4ce0c0d..88a5d28 100755
--- a/ts-hosts-allocate-Standalone
+++ b/ts-hosts-allocate-Standalone
@@ -31,5 +31,13 @@ foreach my $ident (@ARGV) {
     $host ||= $c{"TestHost_$ident"};
     $host ||= $c{TestHost};
     $host || die "need host setting for $ident";
+
+    my $expected = $ENV{'OSSTEST_HOST_'.uc $ident}
+                   || $c{"TestHost_$ident"}
+                   || $c{TestHost};
+
+    die "$ident configuration mismatch $r{$ident} != $expected"
+	if $r{$ident} && $r{$ident} ne $expected;
+
     store_runvar($ident, $host);
 }
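For illustration, the mismatch check this patch adds to the Perl script could be sketched in Python like so (a hypothetical port; the function and variable names are invented, not part of osstest):

```python
import os

def resolve_host(ident, runvars, config, env=None):
    """Sketch of the ts-hosts-allocate-Standalone logic: pick the host for
    `ident` from the environment or config, but refuse to proceed if a
    runvar left over from a previous run names a different host."""
    env = env if env is not None else os.environ
    host = (env.get('OSSTEST_HOST_' + ident.upper())
            or config.get('TestHost_' + ident)
            or config.get('TestHost'))
    if not host:
        raise ValueError("need host setting for %s" % ident)
    stored = runvars.get(ident)
    if stored is not None and stored != host:
        # The abort introduced by this patch: the stored runvar disagrees
        # with what the current configuration asks for.
        raise RuntimeError("%s configuration mismatch %s != %s"
                           % (ident, stored, host))
    runvars[ident] = host  # equivalent of store_runvar
    return host
```

Running it twice with a changed configuration reproduces the abort the commit message describes: the second call fails until the stale runvar is deleted.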
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:15:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXmf-0001HG-5E; Wed, 12 Feb 2014 11:15:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDXmc-0001HA-Nk
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 11:15:15 +0000
Received: from [85.158.137.68:62354] by server-9.bemta-3.messagelabs.com id
	E0/CC-10184-2C75BF25; Wed, 12 Feb 2014 11:15:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392203711!1340685!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27651 invoked from network); 12 Feb 2014 11:15:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101875837"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:15:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:15:10 -0500
Message-ID: <1392203708.13563.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 12 Feb 2014 11:15:08 +0000
In-Reply-To: <CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 13:53 -0800, Luis R. Rodriguez wrote:
> Cc'ing kvm folks as they may have a shared interest in the shared
> physical case with the bridge (non NAT).
> 
> On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> >>
> >> Although the xen-netback interfaces do not participate in the
> >> link as a typical Ethernet device does, interfaces for them are
> >> still required under the current architecture. IPv6 addresses
> >> do not need to be created or assigned on the xen-netback interfaces
> >> however, even if the frontend devices do need them, so clear the
> >> multicast flag to ensure the net core does not initiate IPv6
> >> Stateless Address Autoconfiguration.
> >
> > How does disabling SAA flow from the absence of multicast?
> 
> See patch 1 in this series [0], but I explain the issue I see with
> this on the cover letter [1].

Oops, I felt like I'd missed some context. Thanks for pointing out that
it was right under my nose.

> In summary the RFCs on IPv6 make it
> clear you need multicast for Stateless address autoconfiguration
> (SLAAC is the preferred acronym) and DAD,

That seems reasonable, but I think it is the opposite of what I was
trying to get at.

Why is it not possible to disable SLAAC and/or DAD even if multicast is
present?

IOW -- enabling/disabling multicast seems to me to be an odd proxy for
disabling SLAAC or DAD and AIUI your patch fixes the opposite case,
which is to avoid SLAAC and DAD on interfaces which don't do multicast
(which makes sense since those protocols involve multicast).

>  however the net core has not
> made this a requirement, and hence the patch. The caveat which I
> address on the cover letter needs to be seriously considered though.
> 
> [0] http://marc.info/?l=linux-netdev&m=139207142110535&w=2
> [1] http://marc.info/?l=linux-netdev&m=139207142110536&w=2
> 
> > Surely these should be controlled logically independently even if there is some
> > notional linkage.
> 
> When a node hops on a network it will query its network by sending a
> router solicitation multicast request for its configuration
> parameters, the router can respond with router advertisements to
> disable SLAAC.

Surely it should be possible for an interface to be explicitly not ipv6
enabled, in which case it doesn't want to do any solicitation at all.
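[For reference, Linux does expose a per-interface knob for exactly this; a configuration sketch, where "vif1.0" is a placeholder interface name:]

```shell
# Disable IPv6 entirely on one interface, so no SLAAC, DAD or router
# solicitation is attempted on it (requires root; "vif1.0" is a
# hypothetical backend interface name).
echo 1 > /proc/sys/net/ipv6/conf/vif1.0/disable_ipv6

# Or for all current and future interfaces:
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
```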

> Apart from that we have no other means to disable SLAAC neatly, and as
> I gather that would be counter to the IPv6 RFCs anyway, and that makes
> sense.

In your[0] post you say:
        it should be noted that RFC 4862 Section 5.4
        makes it clear that DAD *MUST* be performed on all unicast
        addresses prior to assigning them to an interface

is that what you mean by counter to the RFCs?

In my reading this "must do DAD" requirement only comes into effect if
you are trying to assign a unicast address to an interface. It should be
possible to simply not do that for an interface.

> > Can SAA not be disabled directly?
> 
> Nope. The ipv6 core assumes all device want ipv6

IMHO it is entirely reasonable for an admin to desire that an interface
has nothing at all to do with IPv6. At which point all of the
requirements for multicast which flow from enabling IPv6 disappear.

> >>  since using this can create an issue if a user
> >> decides to enable multicast on the backend interfaces
> >
> > Please explain what this issue is.
> 
> I explained this on the cover letter but should have elaborated more
> here. The *known* and *reported* issue is that xen-backend interfaces
> can end up  SLAAC and you'd obviously end up in some situations where
> the MAC address and IP address clash, despite the architecture of IPv6
> to randomize time requests for neighbor solicitations, and DAD.
> Ultimately a series of services can end up filling your log messages
> with tons of warnings.

Right, this makes sense, but it seems like the solution should be to
stop SLAAC from happening directly and not by playing tricks with
multicast that happen to have the side effect of disabling SLAAC.

> Another not reported issue, but I suspect critical and it can bite
> both xen and kvm in the ass is described on Appendex A on RFC 4862 [2]
> which considers the issues of getting duplicates of packets on the
> same link with the same link layer address. I think to address that we
> can also consider dev->type into all the different cases.

We should never actually be generating any traffic with this address
FWIW, all the generated traffic will have the guest's actual MAC. (at
least in the bridging case, perhaps with with routing or NAT things are
different, but I think in that case the traffic would appear to come
from the hosts outgoing interface, not the vif device)

> My preference, rather than trying to simply disable ipv6 is actually
> seeing how xen-netback interfaces (and kvm TAP topology) can be
> simplified further). As I see it there is tons of code which could
> trigger being used on these xen-netback interfaces (and TAP for kvm)
> which is simply not needed for the use case of just doing sending data
> back and forth between host and guest: ipv6 is not needed at all, and
> I tried to test removing ipv4, but ran into issues.

Bridging is not the only way to provide VM network connectivity. It
should also be possible to do routing and even NAT by configuring
appropriate p2p links and routing tables in the host. For that to work I
think the tap and vif devices do need some sort of IPv[46] capability,
so you can't just nuke that stuff completely. (Maybe/likely it also
requires them to have a sensible MAC address, I'm not sure).

> [2] http://tools.ietf.org/html/rfc4862#appendix-A
> [3[ https://gitorious.org/opensuse/kernel-source/source/8e16582178a29b03e850468004a47e7be5ed3005:patches.xen/ipv6-no-autoconf
> 
> > Also how can a user enable multicast on the b/e?
> 
> ip set multicast on dev <devname>
> ip set multicast off dev <devname>
> 
> > AFAIK only Solaris ever
> > implemented the m/c bits of the Xen PV network protocol (not that I
> > wouldn't welcome attempts to add it to other platforms)
> 
> Do you mean kernel configuration multicast ? Or networking ?

I meant the PV protocol extension which allows guests (netfront) to
register to receive multicast frames across the PV ring -- i.e. for
multicast to work from the guests PoV.

(maybe that was just an optimisation though and the default is to flood
everything, it was a long time ago)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:15:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXmf-0001HG-5E; Wed, 12 Feb 2014 11:15:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDXmc-0001HA-Nk
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 11:15:15 +0000
Received: from [85.158.137.68:62354] by server-9.bemta-3.messagelabs.com id
	E0/CC-10184-2C75BF25; Wed, 12 Feb 2014 11:15:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392203711!1340685!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27651 invoked from network); 12 Feb 2014 11:15:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:15:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101875837"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:15:10 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:15:10 -0500
Message-ID: <1392203708.13563.50.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 12 Feb 2014 11:15:08 +0000
In-Reply-To: <CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 13:53 -0800, Luis R. Rodriguez wrote:
> Cc'ing kvm folks as they may have a shared interest on the shared
> physical case with the bridge (non NAT).
> 
> On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> >>
> >> Although the xen-netback interfaces do not participate in the
> >> link as a typical Ethernet device, interfaces for them are
> >> still required under the current architecture. IPv6 addresses
> >> do not need to be created or assigned on the xen-netback interfaces,
> >> however, even if the frontend devices do need them, so clear the
> >> multicast flag to ensure the net core does not initiate IPv6
> >> Stateless Address Autoconfiguration.
> >
> > How does disabling SAA flow from the absence of multicast?
> 
> See patch 1 in this series [0], but I explain the issue I see with
> this on the cover letter [1].

Oops, I felt like I'd missed some context. Thanks for pointing out that
it was right under my nose.

> In summary the RFCs on IPv6 make it
> clear you need multicast for Stateless address autoconfiguration
> (SLAAC is the preferred acronym) and DAD,

That seems reasonable, but I think it is the opposite of what I was
trying to get at.

Why is it not possible to disable SLAAC and/or DAD even if multicast is
present?

IOW -- enabling/disabling multicast seems to me to be an odd proxy for
disabling SLAAC or DAD and AIUI your patch fixes the opposite case,
which is to avoid SLAAC and DAD on interfaces which don't do multicast
(which makes sense since those protocols involve multicast).

>  however the net core has not
> made this a requirement, and hence the patch. The caveat which I
> address on the cover letter needs to be seriously considered though.
> 
> [0] http://marc.info/?l=linux-netdev&m=139207142110535&w=2
> [1] http://marc.info/?l=linux-netdev&m=139207142110536&w=2
> 
> > Surely these should be controlled logically independently even if there is some
> > notional linkage.
> 
> When a node joins a network it will query that network by sending a
> router solicitation multicast request for its configuration
> parameters; the router can respond with router advertisements to
> disable SLAAC.

Surely it should be possible for an interface to be explicitly not ipv6
enabled, in which case it doesn't want to do any solicitation at all.

> Apart from that we have no other means to disable SLAAC neatly, and as
> I gather that would be counter to the IPv6 RFCs anyway, and that makes
> sense.

In your[0] post you say:
        it should be noted that RFC 4862 Section 5.4
        makes it clear that DAD *MUST* be performed on all unicast
        addresses prior to assigning them to an interface

is that what you mean by counter to the RFCs?

In my reading this "must do DAD" requirement only comes into effect if
you are trying to assign a unicast address to an interface. It should be
possible to simply not do that for an interface.

> > Can SAA not be disabled directly?
> 
> Nope. The ipv6 core assumes all devices want ipv6.

IMHO it is entirely reasonable for an admin to desire that an interface
has nothing at all to do with IPv6. At which point all of the
requirements for multicast which flow from enabling IPv6 disappear.

> >>  since using this can create an issue if a user
> >> decides to enable multicast on the backend interfaces
> >
> > Please explain what this issue is.
> 
> I explained this on the cover letter but should have elaborated more
> here. The *known* and *reported* issue is that xen-backend interfaces
> can end up doing SLAAC, and you'd obviously end up in some situations
> where the MAC address and IP address clash, despite IPv6's design of
> randomizing the timing of neighbor solicitations and DAD.
> Ultimately a series of services can end up filling your log messages
> with tons of warnings.

Right, this makes sense, but it seems like the solution should be to
stop SLAAC from happening directly and not by playing tricks with
multicast that happen to have the side effect of disabling SLAAC.
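For reference, Linux does expose per-interface sysctls that stop SLAAC
(or IPv6 entirely) without touching the multicast flag; a sketch,
using eth0 as a stand-in for the real backend interface name:

```shell
# Stop SLAAC on one interface while leaving multicast alone.
# (eth0 is illustrative; xen vif names contain dots, so for those
# write the /proc/sys path instead, e.g.
# /proc/sys/net/ipv6/conf/vif1.0/autoconf)
sysctl -w net.ipv6.conf.eth0.autoconf=0     # no addresses from router advertisements
sysctl -w net.ipv6.conf.eth0.accept_ra=0    # ignore router advertisements entirely

# Or opt the interface out of IPv6 altogether:
sysctl -w net.ipv6.conf.eth0.disable_ipv6=1
```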

> Another not-yet-reported issue, but one I suspect is critical and can
> bite both xen and kvm in the ass, is described in Appendix A of RFC
> 4862 [2], which considers the issue of getting duplicate packets on
> the same link with the same link layer address. I think to address
> that we can also consider dev->type in all the different cases.

We should never actually be generating any traffic with this address,
FWIW; all the generated traffic will have the guest's actual MAC. (At
least in the bridging case; perhaps with routing or NAT things are
different, but I think in that case the traffic would appear to come
from the host's outgoing interface, not the vif device.)

> My preference, rather than trying to simply disable ipv6, is actually
> seeing how xen-netback interfaces (and the kvm TAP topology) can be
> simplified further. As I see it there is tons of code which could
> end up being used on these xen-netback interfaces (and TAP for kvm)
> which is simply not needed for the use case of just sending data
> back and forth between host and guest: ipv6 is not needed at all, and
> I tried to test removing ipv4, but ran into issues.

Bridging is not the only way to provide VM network connectivity. It
should also be possible to do routing and even NAT by configuring
appropriate p2p links and routing tables in the host. For that to work I
think the tap and vif devices do need some sort of IPv[46] capability,
so you can't just nuke that stuff completely. (Maybe/likely it also
requires them to have a sensible MAC address, I'm not sure).

> [2] http://tools.ietf.org/html/rfc4862#appendix-A
> [3] https://gitorious.org/opensuse/kernel-source/source/8e16582178a29b03e850468004a47e7be5ed3005:patches.xen/ipv6-no-autoconf
> 
> > Also how can a user enable multicast on the b/e?
> 
> ip link set dev <devname> multicast on
> ip link set dev <devname> multicast off
> 
> > AFAIK only Solaris ever
> > implemented the m/c bits of the Xen PV network protocol (not that I
> > wouldn't welcome attempts to add it to other platforms)
> 
> Do you mean kernel configuration multicast ? Or networking ?

I meant the PV protocol extension which allows guests (netfront) to
register to receive multicast frames across the PV ring -- i.e. for
multicast to work from the guest's PoV.

(Maybe that was just an optimisation though and the default is to flood
everything; it was a long time ago.)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:28:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDXz0-0001m5-6I; Wed, 12 Feb 2014 11:28:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WDXyy-0001m0-P9
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:28:01 +0000
Received: from [85.158.137.68:6980] by server-13.bemta-3.messagelabs.com id
	2E/CE-26923-FBA5BF25; Wed, 12 Feb 2014 11:27:59 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392204477!1342168!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24998 invoked from network); 12 Feb 2014 11:27:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:27:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101878018"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:27:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:27:56 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WDXyu-0004XR-PF;
	Wed, 12 Feb 2014 11:27:56 +0000
Message-ID: <1392204471.24787.15.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Wed, 12 Feb 2014 11:27:51 +0000
In-Reply-To: <1392196023.24787.1.camel@hamster.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
	<21242.25519.844862.598762@mariner.uk.xensource.com>
	<1392141590.26657.137.camel@kazak.uk.xensource.com>
	<1392196023.24787.1.camel@hamster.uk.xensource.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 09:07 +0000, Frediano Ziglio wrote:
> On Tue, 2014-02-11 at 17:59 +0000, Ian Campbell wrote:
> > On Tue, 2014-02-11 at 17:53 +0000, Ian Jackson wrote:
> > > Frediano Ziglio writes ("Re: [Xen-devel] Domain Save Image Format proposal (draft B)"):
> > > David is entirely correct to specify a fixed endianness for the image
> > > header.
> > 
> > Full ack.
> > 
> > 
> Me too. I was not speaking about the image header but about all the
> other records. Obviously by native endianness I mean the endianness of
> the VM. If I have an ARMBE VM this would save as big endian because
> the VM is big endian. On the same host an ARMLE VM would save using
> little endian.
> 
> Frediano
> 

More about endianness and protocols. Why did the networking world
define a network byte order?
1- you don't need to specify a flag
2- you either always convert or never convert; there is no "if"

Consider point 2. Suppose you have a function that returns a 32-bit
value in the native format:

uint32_t get_native32(uint32_t n) {
   XXX
}

Now, if you always use the same endianness, XXX is either a no-op or a
swap. The case where you support both is more complicated, something like:

  if (is_my_format)
    return n;
  else
    return swap(n);

If we inline it we grow the code; if we don't inline it we slow things
down a bit. On many platforms swapping is so cheap (often a single
instruction) that a fixed-endianness get_native32 can easily be inlined.

In the case where we handle large structures and have to call these
kinds of functions many times, perhaps the time spent on conversion and
the size of the code do not matter (we could use a template system to
optimize the two cases at the cost of extra code, or decide not to
inline). But if one protocol specification is easier to handle than
another, why not choose the proper endianness scheme after weighing the
pros and cons of all the cases?

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDY5V-000292-2N; Wed, 12 Feb 2014 11:34:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDY5U-00028v-CZ
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:34:44 +0000
Received: from [193.109.254.147:16010] by server-9.bemta-14.messagelabs.com id
	56/41-24895-35C5BF25; Wed, 12 Feb 2014 11:34:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392204882!3787412!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30018 invoked from network); 12 Feb 2014 11:34:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:34:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101879305"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:34:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:34:40 -0500
Message-ID: <1392204879.13563.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Date: Wed, 12 Feb 2014 11:34:39 +0000
In-Reply-To: <1392204471.24787.15.camel@hamster.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
	<21242.25519.844862.598762@mariner.uk.xensource.com>
	<1392141590.26657.137.camel@kazak.uk.xensource.com>
	<1392196023.24787.1.camel@hamster.uk.xensource.com>
	<1392204471.24787.15.camel@hamster.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 11:27 +0000, Frediano Ziglio wrote:
> More about endianess and protocols.

I don't think we need more on this. It has already gotten far more
discussion than needed for such a minor issue and I trust David to take
on board any useful feedback and to update the spec accordingly as
needed.

It would be more useful if people were to spend their time thinking
about the core of the proposal instead of nitpicking around the edges.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDY5V-000292-2N; Wed, 12 Feb 2014 11:34:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDY5U-00028v-CZ
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:34:44 +0000
Received: from [193.109.254.147:16010] by server-9.bemta-14.messagelabs.com id
	56/41-24895-35C5BF25; Wed, 12 Feb 2014 11:34:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392204882!3787412!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30018 invoked from network); 12 Feb 2014 11:34:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:34:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101879305"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:34:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:34:40 -0500
Message-ID: <1392204879.13563.58.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Date: Wed, 12 Feb 2014 11:34:39 +0000
In-Reply-To: <1392204471.24787.15.camel@hamster.uk.xensource.com>
References: <52F90A71.40802@citrix.com>
	<52FA043B020000780011B10C@nat28.tlf.novell.com>
	<21242.21415.38574.210861@mariner.uk.xensource.com>
	<CAP8mzPOrmzyVW7PQNCzqiL6c=Qo0cYrV24sJHPUq5ja2P7-48Q@mail.gmail.com>
	<1392138928.26657.134.camel@kazak.uk.xensource.com>
	<1392139881.27767.6.camel@hamster.uk.xensource.com>
	<21242.25519.844862.598762@mariner.uk.xensource.com>
	<1392141590.26657.137.camel@kazak.uk.xensource.com>
	<1392196023.24787.1.camel@hamster.uk.xensource.com>
	<1392204471.24787.15.camel@hamster.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan
	Beulich <JBeulich@suse.com>, rshriram@cs.ubc.ca
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 11:27 +0000, Frediano Ziglio wrote:
> More about endianness and protocols.

I don't think we need more on this. It has already gotten far more
discussion than needed for such a minor issue and I trust David to take
on board any useful feedback and to update the spec accordingly as
needed.

It would be more useful if people were to spend their time thinking
about the core of the proposal instead of nitpicking around the edges.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:46:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYGT-0002Vb-9U; Wed, 12 Feb 2014 11:46:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDYGR-0002VW-KY
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:46:03 +0000
Received: from [85.158.139.211:10273] by server-1.bemta-5.messagelabs.com id
	AC/80-12859-AFE5BF25; Wed, 12 Feb 2014 11:46:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392205560!3362354!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23211 invoked from network); 12 Feb 2014 11:46:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:46:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101881268"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:46:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 06:45:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDYGN-0006zR-FG;
	Wed, 12 Feb 2014 11:45:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDYGN-00032L-6s;
	Wed, 12 Feb 2014 11:45:59 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.24311.41722.641547@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 11:45:59 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392201770-14927-1-git-send-email-ian.campbell@citrix.com>
References: <1392201770-14927-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Configure the Calxeda fabric on
	host boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] Configure the Calxeda fabric on host boot"):
> The fabric on the Calxeda midway boxes (marilith-* in osstest) does
> not learn mac addresses (at least not with the firmware we have, and
> with Calxeda folding this seems unlikely to get fixed). This means
> that guests do not get network connectivity unless their mac
> addresses are explicitly registered with the fabric.

This looks mostly fine (well, FSVO "fine"!) to me, but:

> +    my $nr = get_host_property($ho, 'NRCXFabricMACs', 0);

This shouldn't be a host property, but a global config option (or even
hardcoded).  There should be a host flag for the bodge, probably
"need-calxeda-bridge-fixup" or something.

And you could wrap these lines, while you're at it:

> +        target_putfile_root($ho, 10, "$images/iproute_${ver}_all.deb", "iproute_${ver}_all.deb");
> +        target_putfile_root($ho, 10, "$images/iproute2_${ver}_armhf.deb", "iproute2_${ver}_armhf.deb");

Or replace this extensive code with something more condensed, e.g.:

  +        my @debs = map { "iproute2_${ver}_${_}.deb" } qw(all armhf);
  +        target_putfile_root($ho, 10, "$images/$_", $_) foreach @debs;
  +        target_cmd_root($ho, "dpkg -i @debs");

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:50:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYKW-0002o8-0w; Wed, 12 Feb 2014 11:50:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDYKU-0002o3-CQ
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:50:14 +0000
Received: from [85.158.139.211:8770] by server-9.bemta-5.messagelabs.com id
	FA/4A-11237-5FF5BF25; Wed, 12 Feb 2014 11:50:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392205811!3398618!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17398 invoked from network); 12 Feb 2014 11:50:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:50:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101881723"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 11:50:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:50:10 -0500
Message-ID: <1392205809.13563.61.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 11:50:09 +0000
In-Reply-To: <21243.24311.41722.641547@mariner.uk.xensource.com>
References: <1392201770-14927-1-git-send-email-ian.campbell@citrix.com>
	<21243.24311.41722.641547@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Configure the Calxeda fabric on
	host boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 11:45 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] Configure the Calxeda fabric on host boot"):
> > The fabric on the Calxeda midway boxes (marilith-* in osstest) does
> > not learn mac addresses (at least not with the firmware we have, and
> > with Calxeda folding this seems unlikely to get fixed). This means
> > that guests do not get network connectivity unless their mac
> > addresses are explicitly registered with the fabric.
> 
> This looks mostly fine (well, FSVO "fine"!) to me, but:
> 
> > +    my $nr = get_host_property($ho, 'NRCXFabricMACs', 0);
> 
> This shouldn't be a host property, but a global config option (or even
> hardcoded).  There should be a host flag for the bodge, probably
> "need-calxeda-bridge-fixup" or something.

equiv-marilith isn't suitable, I take it?

> And you could wrap these lines, while you're at it:
> 
> > +        target_putfile_root($ho, 10, "$images/iproute_${ver}_all.deb", "iproute_${ver}_all.deb");
> > +        target_putfile_root($ho, 10, "$images/iproute2_${ver}_armhf.deb", "iproute2_${ver}_armhf.deb");

Sure.

> Or replace this extensive code with something more condensed, e.g.:
> 
>   +        my @debs = map { "iproute2_${ver}_${_}.deb" } qw(all armhf);

This misses the iproute vs iproute2 distinction in the names, but I'll
see if I can get something like this to work.

>   +        target_putfile_root($ho, 10, "$images/$_", $_) foreach @debs;
>   +        target_cmd_root($ho, "dpkg -i @debs");
> 
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:52:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:52:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYMO-0002w9-1M; Wed, 12 Feb 2014 11:52:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDYMM-0002vz-NI
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:52:11 +0000
Received: from [85.158.137.68:30096] by server-7.bemta-3.messagelabs.com id
	AF/39-13775-9606BF25; Wed, 12 Feb 2014 11:52:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392205928!1347907!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12479 invoked from network); 12 Feb 2014 11:52:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 11:52:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Feb 2014 11:53:12 +0000
Message-Id: <52FB6E75020000780011BABB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 12 Feb 2014 11:52:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
	<1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v5 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 15:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
> 
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time
> since this will miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need
> to clean as well as invalidate to make sure that any scrubbing which has
> occurred gets committed to real RAM. To achieve this add a new
> cacheflush_page function, which is a stub on x86.
> 
> Secondly we need to flush anything which the domain builder touches, which
> we do via a new domctl.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> Cc: keir@xen.org 
> Cc: ian.jackson@eu.citrix.com 
> --
> v5: avoid get_order_from_pages and just use 1<<MAX_ORDER
> 
>     s/sync_page_to_ram/flush_page_to_ram/g
> 
>     remove hard tab, add an emacs magic block
> 
> v4: introduce a function to clean and invalidate as intended
> 
>     make the domctl take a length not an end.
> 
> v3:
>     s/cacheflush_page/sync_page_to_ram/
> 
>     xc interface takes a length instead of an end
> 
>     make the domctl range inclusive.
> 
>     make xc interface internal -- it isn't needed from libxl in the current
>     design and it is easier to expose an interface in the future than to
>     hide it.
> 
> v2:
>    Switch to cleaning at page allocation time + explicit flushing of the
>    regions which the toolstack touches.
> 
>    Add XSM for new domctl.
> 
>    New domctl restricts the amount of space it is willing to flush, to avoid
>    thinking about preemption.
> ---
>  tools/libxc/xc_dom_boot.c           |    4 ++++
>  tools/libxc/xc_dom_core.c           |    2 ++
>  tools/libxc/xc_domain.c             |   10 ++++++++++
>  tools/libxc/xc_private.c            |    2 ++
>  tools/libxc/xc_private.h            |    3 +++
>  xen/arch/arm/domctl.c               |   14 ++++++++++++++
>  xen/arch/arm/mm.c                   |   12 ++++++++++++
>  xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
>  xen/common/page_alloc.c             |    5 +++++
>  xen/include/asm-arm/arm32/page.h    |    4 ++++
>  xen/include/asm-arm/arm64/page.h    |    4 ++++
>  xen/include/asm-arm/p2m.h           |    3 +++
>  xen/include/asm-arm/page.h          |    3 +++
>  xen/include/asm-x86/page.h          |    3 +++
>  xen/include/public/domctl.h         |   13 +++++++++++++
>  xen/xsm/flask/hooks.c               |   13 +++++++++++++
>  xen/xsm/flask/policy/access_vectors |    2 ++
>  17 files changed, 122 insertions(+)
> 
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index 5a9cfc6..3d4d107 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>          return -1;
>      }
>  
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But lets be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
> +
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..b9d1015 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>          prev->next = phys->next;
>      else
>          dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
>  }
>  
>  void xc_dom_unmap_all(struct xc_dom_image *dom)
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index e1d1bec..369c3f3 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.nr_pfns = nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}
>  
>  int xc_domain_pause(xc_interface *xch,
>                      uint32_t domid)
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index 838fd21..33ed15b 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
>          return -1;
>      memcpy(vaddr, src_page, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> @@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
>          return -1;
>      memset(vaddr, 0, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 92271c9..a610f0c 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
>  /* Optionally flush file to disk and discard page cache */
>  void discard_file_cache(xc_interface *xch, int fd, int flush);
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> +
>  #define MAX_MMU_UPDATES 1024
>  struct xc_mmu {
>      mmu_update_t updates[MAX_MMU_UPDATES];
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..45974e7 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  {
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
> +            return -EINVAL;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        return p2m_cache_flush(d, s, e);
> +    }
> +
>      default:
>          return subarch_do_domctl(domctl, d, u_domctl);
>      }
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index cf4e7d4..98d054b 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -342,6 +342,18 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
>  }
>  #endif
>  
> +void flush_page_to_ram(unsigned long mfn)
> +{
> +    void *p, *v = map_domain_page(mfn);
> +
> +    dsb();           /* So the CPU issues all writes to the range */
> +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
> +    dsb();           /* So we know the flushes happen before continuing */
> +
> +    unmap_domain_page(v);
> +}
> +
>  void __init arch_init_memory(void)
>  {
>      /*
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a61edeb..d00c882 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -8,6 +8,7 @@
>  #include <asm/gic.h>
>  #include <asm/event.h>
>  #include <asm/hardirq.h>
> +#include <asm/page.h>
>  
>  /* First level P2M is 2 consecutive pages */
>  #define P2M_FIRST_ORDER 1
> @@ -228,6 +229,7 @@ enum p2m_operation {
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
> +    CACHEFLUSH,
>  };
>  
>  static int apply_p2m_changes(struct domain *d,
> @@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
>                      count++;
>                  }
>                  break;
> +
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                        break;
> +
> +                    flush_page_to_ram(pte.p2m.base);
> +                }
> +                break;
>          }
>  
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> @@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
>                                MATTR_MEM, p2m_invalid);
>  }
>  
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> +    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +
> +    return apply_p2m_changes(d, CACHEFLUSH,
> +                             pfn_to_paddr(start_mfn),
> +                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(INVALID_MFN),
> +                             MATTR_MEM, p2m_invalid);
> +}
> +
>  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>  {
>      paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..601319c 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
>          /* Initialise fields which have other uses for free pages. */
>          pg[i].u.inuse.type_info = 0;
>          page_set_owner(&pg[i], NULL);
> +
> +        /* Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(page_to_mfn(&pg[i]));
>      }
>  
>      spin_unlock(&heap_lock);
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index cf12a89..cb6add4 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
> +
>  /*
>   * Flush all hypervisor mappings from the TLB and branch predictor.
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 9551f90..baf8903 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
> +
>  /*
>   * Flush all hypervisor mappings from the TLB
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index e9c884a..3b39c45 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
>  /* Look up the MFN corresponding to a domain's PFN. */
>  paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
>  
> +/* Clean & invalidate caches corresponding to a region of guest address space */
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +
>  /* Setup p2m RAM mapping for domain d from start-end. */
>  int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
>  /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 670d4e7..5a371da 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
>              : : "r" (_p), "m" (*_p));                                   \
>  } while (0)
>  
> +/* Flush the dcache for an entire page. */
> +void flush_page_to_ram(unsigned long mfn);
> +
>  /* Print a walk of an arbitrary page table */
>  void dump_pt_walk(lpae_t *table, paddr_t addr);
>  
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 7a46af5..ccc268d 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>      return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>  }
>  
> +/* No cache maintenance required on x86 architecture. */
> +static inline void flush_page_to_ram(unsigned long mfn) {}
> +
>  /* return true if permission increased */
>  static inline bool_t
>  perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..f22fe2e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>  
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +965,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_setnodeaffinity               68
>  #define XEN_DOMCTL_getnodeaffinity               69
>  #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1024,7 @@ struct xen_domctl {
>          struct xen_domctl_set_max_evtchn    set_max_evtchn;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>          struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 50a35fc..d515702 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_set_max_evtchn:
>          return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
>  
> +    case XEN_DOMCTL_cacheflush:
> +        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> +
>      default:
>          printk("flask_domctl: Unknown op %d\n", cmd);
>          return -EPERM;
> @@ -1617,3 +1620,13 @@ static __init int flask_init(void)
>  }
>  
>  xsm_initcall(flask_init);
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1fbe241..a0ed13d 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -196,6 +196,8 @@ class domain2
>      setclaim
>  # XEN_DOMCTL_set_max_evtchn
>      set_max_evtchn
> +# XEN_DOMCTL_cacheflush
> +    cacheflush
>  }
>  
>  # Similar to class domain, but primarily contains domctls related to HVM domains
> -- 
> 1.7.10.4



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:52:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:52:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYMO-0002w9-1M; Wed, 12 Feb 2014 11:52:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDYMM-0002vz-NI
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:52:11 +0000
Received: from [85.158.137.68:30096] by server-7.bemta-3.messagelabs.com id
	AF/39-13775-9606BF25; Wed, 12 Feb 2014 11:52:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392205928!1347907!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12479 invoked from network); 12 Feb 2014 11:52:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 11:52:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Feb 2014 11:53:12 +0000
Message-Id: <52FB6E75020000780011BABB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 12 Feb 2014 11:52:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
	<1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
In-Reply-To: <1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v5 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 15:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> Guests are initially started with caches disabled and so we need to make sure
> they see consistent data in RAM (requiring a cache clean) but also that they
> do not have old stale data suddenly appear in the caches when they enable
> their caches (requiring the invalidate).
> 
> This can be split into two halves. First we must flush each page as it is
> allocated to the guest. It is not sufficient to do the flush at scrub time,
> since that would miss pages which are ballooned out by the guest (where the
> guest must scrub if it cares about not leaking the page content). We need to
> clean as well as invalidate to make sure that any scrubbing which has
> occurred gets committed to real RAM. To achieve this, add a new
> flush_page_to_ram function, which is a stub on x86.
> 
> Secondly we need to flush anything which the domain builder touches, which
> we do via a new domctl.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> Acked-by: Julien Grall <julien.grall@linaro.org>
> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> Cc: keir@xen.org 
> Cc: ian.jackson@eu.citrix.com 
> --
> v5: avoid get_order_from_pages and just use 1<<MAX_ORDER
> 
>     s/sync_page_to_ram/flush_page_to_ram/g
> 
>     remove hard tab, add an emacs magic block
> 
> v4: introduce a function to clean and invalidate as intended
> 
>     make the domctl take a length not an end.
> 
> v3:
>     s/cacheflush_page/sync_page_to_ram/
> 
>     xc interface takes a length instead of an end
> 
>     make the domctl range inclusive.
> 
>     make xc interface internal -- it isn't needed from libxl in the current
>     design and it is easier to expose an interface in the future than to
>     hide it.
> 
> v2:
>    Switch to cleaning at page allocation time + explicit flushing of the
>    regions which the toolstack touches.
> 
>    Add XSM for new domctl.
> 
>    New domctl restricts the amount of space it is willing to flush, to avoid
>    thinking about preemption.
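The range restriction mentioned in the v2 note can be sketched as a standalone check. This is a simulation for illustration only: the `MAX_ORDER` value and the helper name `cacheflush_range_ok` are assumptions, not Xen's actual constants, though the two tests mirror the ones in the `xen/arch/arm/domctl.c` hunk of the patch.

```c
#include <assert.h>
#include <stdint.h>

/* Standalone sketch of the argument validation the new domctl performs
 * before flushing: cap the amount of work per hypercall (so preemption
 * need not be considered) and reject ranges whose end wraps around.
 * MAX_ORDER and cacheflush_range_ok are illustrative assumptions. */
#define MAX_ORDER 20

typedef uint64_t xen_pfn_t;

static int cacheflush_range_ok(xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
{
    xen_pfn_t end = start_pfn + nr_pfns;

    if (nr_pfns > ((xen_pfn_t)1 << MAX_ORDER))
        return 0;               /* too large: would need preemption */
    if (end < start_pfn)
        return 0;               /* arithmetic wrapped: invalid range */
    return 1;
}
```

The hypervisor-side hunk applies the same two checks and returns -EINVAL when either fails.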
> ---
>  tools/libxc/xc_dom_boot.c           |    4 ++++
>  tools/libxc/xc_dom_core.c           |    2 ++
>  tools/libxc/xc_domain.c             |   10 ++++++++++
>  tools/libxc/xc_private.c            |    2 ++
>  tools/libxc/xc_private.h            |    3 +++
>  xen/arch/arm/domctl.c               |   14 ++++++++++++++
>  xen/arch/arm/mm.c                   |   12 ++++++++++++
>  xen/arch/arm/p2m.c                  |   25 +++++++++++++++++++++++++
>  xen/common/page_alloc.c             |    5 +++++
>  xen/include/asm-arm/arm32/page.h    |    4 ++++
>  xen/include/asm-arm/arm64/page.h    |    4 ++++
>  xen/include/asm-arm/p2m.h           |    3 +++
>  xen/include/asm-arm/page.h          |    3 +++
>  xen/include/asm-x86/page.h          |    3 +++
>  xen/include/public/domctl.h         |   13 +++++++++++++
>  xen/xsm/flask/hooks.c               |   13 +++++++++++++
>  xen/xsm/flask/policy/access_vectors |    2 ++
>  17 files changed, 122 insertions(+)
> 
> diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xc_dom_boot.c
> index 5a9cfc6..3d4d107 100644
> --- a/tools/libxc/xc_dom_boot.c
> +++ b/tools/libxc/xc_dom_boot.c
> @@ -351,6 +351,10 @@ int xc_dom_gnttab_seed(xc_interface *xch, domid_t domid,
>          return -1;
>      }
>  
> +    /* Guest shouldn't really touch its grant table until it has
> +     * enabled its caches. But let's be nice. */
> +    xc_domain_cacheflush(xch, domid, gnttab_gmfn, 1);
> +
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> index 77a4e64..b9d1015 100644
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -603,6 +603,8 @@ void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn)
>          prev->next = phys->next;
>      else
>          dom->phys_pages = phys->next;
> +
> +    xc_domain_cacheflush(dom->xch, dom->guest_domid, phys->first, phys->count);
>  }
>  
>  void xc_dom_unmap_all(struct xc_dom_image *dom)
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> index e1d1bec..369c3f3 100644
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -48,6 +48,16 @@ int xc_domain_create(xc_interface *xch,
>      return 0;
>  }
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +                         xen_pfn_t start_pfn, xen_pfn_t nr_pfns)
> +{
> +    DECLARE_DOMCTL;
> +    domctl.cmd = XEN_DOMCTL_cacheflush;
> +    domctl.domain = (domid_t)domid;
> +    domctl.u.cacheflush.start_pfn = start_pfn;
> +    domctl.u.cacheflush.nr_pfns = nr_pfns;
> +    return do_domctl(xch, &domctl);
> +}
>  
>  int xc_domain_pause(xc_interface *xch,
>                      uint32_t domid)
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index 838fd21..33ed15b 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -628,6 +628,7 @@ int xc_copy_to_domain_page(xc_interface *xch,
>          return -1;
>      memcpy(vaddr, src_page, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> @@ -641,6 +642,7 @@ int xc_clear_domain_page(xc_interface *xch,
>          return -1;
>      memset(vaddr, 0, PAGE_SIZE);
>      munmap(vaddr, PAGE_SIZE);
> +    xc_domain_cacheflush(xch, domid, dst_pfn, 1);
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
> index 92271c9..a610f0c 100644
> --- a/tools/libxc/xc_private.h
> +++ b/tools/libxc/xc_private.h
> @@ -304,6 +304,9 @@ void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
>  /* Optionally flush file to disk and discard page cache */
>  void discard_file_cache(xc_interface *xch, int fd, int flush);
>  
> +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
> +			 xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
> +
>  #define MAX_MMU_UPDATES 1024
>  struct xc_mmu {
>      mmu_update_t updates[MAX_MMU_UPDATES];
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 546e86b..45974e7 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -17,6 +17,20 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>  {
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_cacheflush:
> +    {
> +        unsigned long s = domctl->u.cacheflush.start_pfn;
> +        unsigned long e = s + domctl->u.cacheflush.nr_pfns;
> +
> +        if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
> +            return -EINVAL;
> +
> +        if ( e < s )
> +            return -EINVAL;
> +
> +        return p2m_cache_flush(d, s, e);
> +    }
> +
>      default:
>          return subarch_do_domctl(domctl, d, u_domctl);
>      }
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index cf4e7d4..98d054b 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -342,6 +342,18 @@ unsigned long domain_page_map_to_mfn(const void *ptr)
>  }
>  #endif
>  
> +void flush_page_to_ram(unsigned long mfn)
> +{
> +    void *p, *v = map_domain_page(mfn);
> +
> +    dsb();           /* So the CPU issues all writes to the range */
> +    for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
> +    dsb();           /* So we know the flushes happen before continuing */
> +
> +    unmap_domain_page(v);
> +}
> +
>  void __init arch_init_memory(void)
>  {
>      /*
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index a61edeb..d00c882 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -8,6 +8,7 @@
>  #include <asm/gic.h>
>  #include <asm/event.h>
>  #include <asm/hardirq.h>
> +#include <asm/page.h>
>  
>  /* First level P2M is 2 consecutive pages */
>  #define P2M_FIRST_ORDER 1
> @@ -228,6 +229,7 @@ enum p2m_operation {
>      ALLOCATE,
>      REMOVE,
>      RELINQUISH,
> +    CACHEFLUSH,
>  };
>  
>  static int apply_p2m_changes(struct domain *d,
> @@ -381,6 +383,15 @@ static int apply_p2m_changes(struct domain *d,
>                      count++;
>                  }
>                  break;
> +
> +            case CACHEFLUSH:
> +                {
> +                    if ( !pte.p2m.valid || !p2m_is_ram(pte.p2m.type) )
> +                        break;
> +
> +                    flush_page_to_ram(pte.p2m.base);
> +                }
> +                break;
>          }
>  
>          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> @@ -624,6 +635,20 @@ int relinquish_p2m_mapping(struct domain *d)
>                                MATTR_MEM, p2m_invalid);
>  }
>  
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn)
> +{
> +    struct p2m_domain *p2m = &d->arch.p2m;
> +
> +    start_mfn = MAX(start_mfn, p2m->lowest_mapped_gfn);
> +    end_mfn = MIN(end_mfn, p2m->max_mapped_gfn);
> +
> +    return apply_p2m_changes(d, CACHEFLUSH,
> +                             pfn_to_paddr(start_mfn),
> +                             pfn_to_paddr(end_mfn),
> +                             pfn_to_paddr(INVALID_MFN),
> +                             MATTR_MEM, p2m_invalid);
> +}
> +
>  unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
>  {
>      paddr_t p = p2m_lookup(d, pfn_to_paddr(gpfn), NULL);
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 5f484a2..601319c 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -710,6 +710,11 @@ static struct page_info *alloc_heap_pages(
>          /* Initialise fields which have other uses for free pages. */
>          pg[i].u.inuse.type_info = 0;
>          page_set_owner(&pg[i], NULL);
> +
> +        /* Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(page_to_mfn(&pg[i]));
>      }
>  
>      spin_unlock(&heap_lock);
> diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
> index cf12a89..cb6add4 100644
> --- a/xen/include/asm-arm/arm32/page.h
> +++ b/xen/include/asm-arm/arm32/page.h
> @@ -22,6 +22,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) STORE_CP32(R, DCCIMVAC)
> +
>  /*
>   * Flush all hypervisor mappings from the TLB and branch predictor.
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
> index 9551f90..baf8903 100644
> --- a/xen/include/asm-arm/arm64/page.h
> +++ b/xen/include/asm-arm/arm64/page.h
> @@ -17,6 +17,10 @@ static inline void write_pte(lpae_t *p, lpae_t pte)
>  /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
>  #define __flush_xen_dcache_one(R) "dc cvac, %" #R ";"
>  
> +/* Inline ASM to clean and invalidate dcache on register R (may be an
> + * inline asm operand) */
> +#define __clean_and_invalidate_xen_dcache_one(R) "dc  civac, %" #R ";"
> +
>  /*
>   * Flush all hypervisor mappings from the TLB
>   * This is needed after changing Xen code mappings.
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index e9c884a..3b39c45 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -78,6 +78,9 @@ void p2m_load_VTTBR(struct domain *d);
>  /* Look up the MFN corresponding to a domain's PFN. */
>  paddr_t p2m_lookup(struct domain *d, paddr_t gpfn, p2m_type_t *t);
>  
> +/* Clean & invalidate caches corresponding to a region of guest address space */
> +int p2m_cache_flush(struct domain *d, xen_pfn_t start_mfn, xen_pfn_t end_mfn);
> +
>  /* Setup p2m RAM mapping for domain d from start-end. */
>  int p2m_populate_ram(struct domain *d, paddr_t start, paddr_t end);
>  /* Map MMIO regions in the p2m: start_gaddr and end_gaddr is the range
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 670d4e7..5a371da 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -253,6 +253,9 @@ static inline void flush_xen_dcache_va_range(void *p, unsigned long size)
>              : : "r" (_p), "m" (*_p));                                   \
>  } while (0)
>  
> +/* Flush the dcache for an entire page. */
> +void flush_page_to_ram(unsigned long mfn);
> +
>  /* Print a walk of an arbitrary page table */
>  void dump_pt_walk(lpae_t *table, paddr_t addr);
>  
> diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
> index 7a46af5..ccc268d 100644
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -346,6 +346,9 @@ static inline uint32_t cacheattr_to_pte_flags(uint32_t cacheattr)
>      return ((cacheattr & 4) << 5) | ((cacheattr & 3) << 3);
>  }
>  
> +/* No cache maintenance required on x86 architecture. */
> +static inline void flush_page_to_ram(unsigned long mfn) {}
> +
>  /* return true if permission increased */
>  static inline bool_t
>  perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 91f01fa..f22fe2e 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -885,6 +885,17 @@ struct xen_domctl_set_max_evtchn {
>  typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
>  
> +/*
> + * ARM: Clean and invalidate caches associated with given region of
> + * guest memory.
> + */
> +struct xen_domctl_cacheflush {
> +    /* IN: page range to flush. */
> +    xen_pfn_t start_pfn, nr_pfns;
> +};
> +typedef struct xen_domctl_cacheflush xen_domctl_cacheflush_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_cacheflush_t);
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -954,6 +965,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_setnodeaffinity               68
>  #define XEN_DOMCTL_getnodeaffinity               69
>  #define XEN_DOMCTL_set_max_evtchn                70
> +#define XEN_DOMCTL_cacheflush                    71
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1012,6 +1024,7 @@ struct xen_domctl {
>          struct xen_domctl_set_max_evtchn    set_max_evtchn;
>          struct xen_domctl_gdbsx_memio       gdbsx_guest_memio;
>          struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
> +        struct xen_domctl_cacheflush        cacheflush;
>          struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
>          struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
>          uint8_t                             pad[128];
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 50a35fc..d515702 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -737,6 +737,9 @@ static int flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_set_max_evtchn:
>          return current_has_perm(d, SECCLASS_DOMAIN2, 
> DOMAIN2__SET_MAX_EVTCHN);
>  
> +    case XEN_DOMCTL_cacheflush:
> +        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__CACHEFLUSH);
> +
>      default:
>          printk("flask_domctl: Unknown op %d\n", cmd);
>          return -EPERM;
> @@ -1617,3 +1620,13 @@ static __init int flask_init(void)
>  }
>  
>  xsm_initcall(flask_init);
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index 1fbe241..a0ed13d 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -196,6 +196,8 @@ class domain2
>      setclaim
>  # XEN_DOMCTL_set_max_evtchn
>      set_max_evtchn
> +# XEN_DOMCTL_cacheflush
> +    cacheflush
>  }
>  
>  # Similar to class domain, but primarily contains domctls related to HVM domains
> -- 
> 1.7.10.4




From xen-devel-bounces@lists.xen.org Wed Feb 12 11:52:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYMb-0002y8-KW; Wed, 12 Feb 2014 11:52:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDYMa-0002xm-3u
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:52:24 +0000
Received: from [85.158.143.35:2806] by server-2.bemta-4.messagelabs.com id
	B8/4A-10891-6706BF25; Wed, 12 Feb 2014 11:52:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392205940!5093570!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13784 invoked from network); 12 Feb 2014 11:52:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100086577"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 11:52:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:52:16 -0500
Message-ID: <1392205936.13563.62.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Wed, 12 Feb 2014 11:52:16 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] arm32 crash on domain destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien,

I'm seeing the crash below, which I thought you had fixed last week. Did I
fail to apply a patch, or is this something new?

        (XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
        (XEN) Xen BUG at mm.c:334
        (XEN) CPU2: Unexpected Trap: Undefined Instruction
        (XEN) ----[ Xen-4.4-rc2  arm32  debug=y  Not tainted ]----
        (XEN) CPU:    2
        (XEN) PC:     0023fb54 __bug+0x28/0x44
        (XEN) CPSR:   2000015a MODE:Hypervisor
        (XEN)      R0: 002646dc R1: 00000003 R2: 3fd2bd80 R3: 00000fff
        (XEN)      R4: 00261a0c R5: 0000014e R6: 00000c00 R7: 00000000
        (XEN)      R8: 4007f080 R9: c4176000 R10:00000000 R11:7ffcfd64 R12:00000004
        (XEN) HYP: SP: 7ffcfd5c LR: 0023fb54
        (XEN) 
        (XEN)   VTCR_EL2: 80002558
        (XEN)  VTTBR_EL2: 00010002f9ffc000
        (XEN) 
        (XEN)  SCTLR_EL2: 30cd187f
        (XEN)    HCR_EL2: 0000000000282835
        (XEN)  TTBR0_EL2: 00000000be010000
        (XEN) 
        (XEN)    ESR_EL2: 00000000
        (XEN)  HPFAR_EL2: 0000000000fff110
        (XEN)      HDFAR: e0800f00
        (XEN)      HIFAR: 00000000
        (XEN) 
        (XEN) Xen stack trace from sp=7ffcfd5c:
        (XEN)    00000000 7ffcfd74 0024810c 40075000 1001b000 7ffcfd84 0020b48c 40075000
        (XEN)    4007f000 7ffcfd94 0020b4e0 4002dff0 40075000 7ffcfda4 0020c04c 4007f000
        (XEN)    00000000 7ffcfdc4 0020b334 00000000 c000e73c 4007f000 b6fb3004 7ffcff58
        (XEN)    00000005 7ffcfddc 00208290 00000001 c000e73c 4007f000 b6fb3004 7ffcfeec
        (XEN)    00206368 00020002 00000000 00020100 00000000 00000000 00000000 00000000
        (XEN)    00000000 00000000 00000000 000be076 00000000 7b1eaf48 00000002 00000001
        (XEN)    00000000 00000000 997b032a fd48d390 9406888f 40725f67 00000000 00000002
        (XEN)    00000009 00050001 00035890 becdb884 b6fbe4a9 b6f16a98 b6fcd2c8 00000005
        (XEN)    b6ec3258 becdb854 00035030 ffffffff 0003589c 00037a60 00000005 b6d63000
        (XEN)    00000000 b6d634c0 b6f39000 0005f780 00035890 becdb8c4 b6fbe4a9 00000000
        (XEN)    00000001 00000005 00000000 becdb874 b6f16a98 00000001 000355e0 0003c198
        (XEN)    00035030 00000001 0003589c 7ffcff58 c000e73c 00000ea1 ddb3be40 7ffcff58
        (XEN)    00000005 c4176000 00000000 7ffcff54 0024d280 b6d634c0 0022c8c4 0022c6b8
        (XEN)    00000002 002ae000 002b1ff0 002e7614 400218d8 00000000 00000013 fff11b00
        (XEN)    00000001 becdb5f0 ddb3be40 00305000 00000005 c4176000 b6d634c0 becdb790
        (XEN)    ddb3be40 00305000 00000005 c4176000 00000000 7ffcff58 0024fa70 b6fb3004
        (XEN)    00000001 b6de4f50 0005f338 b6d634c0 becdb790 ddb3be40 00305000 00000005
        (XEN)    c4176000 00000000 becdb77c 00000024 ffffffff b6f2c708 c000e73c 60000013
        (XEN)    00000000 becdb778 c07e1380 c0012540 c4177e9c c025b570 c07e138c c0012680
        (XEN)    c07e1398 c0012720 00000000 00000000 00000000 00000000 00000000 00000000
        (XEN) Xen call trace:
        (XEN)    [<0023fb54>] __bug+0x28/0x44 (PC)
        (XEN)    [<0023fb54>] __bug+0x28/0x44 (LR)
        (XEN)    [<0024810c>] domain_page_map_to_mfn+0x50/0xb4
        (XEN)    [<0020b48c>] unmap_guest_page+0x20/0x54
        (XEN)    [<0020b4e0>] cleanup_control_block+0x20/0x34
        (XEN)    [<0020c04c>] evtchn_fifo_destroy+0x2c/0x6c
        (XEN)    [<0020b334>] evtchn_destroy+0x1a8/0x1b0
        (XEN)    [<00208290>] domain_kill+0x60/0x128
        (XEN)    [<00206368>] do_domctl+0xa6c/0x10ec
        (XEN)    [<0024d280>] do_trap_hypervisor+0xad8/0xd78
        (XEN)    [<0024fa70>] return_from_trap+0/0x4
        


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 11:52:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 11:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYMb-0002y8-KW; Wed, 12 Feb 2014 11:52:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDYMa-0002xm-3u
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:52:24 +0000
Received: from [85.158.143.35:2806] by server-2.bemta-4.messagelabs.com id
	B8/4A-10891-6706BF25; Wed, 12 Feb 2014 11:52:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392205940!5093570!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13784 invoked from network); 12 Feb 2014 11:52:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 11:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100086577"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 11:52:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 06:52:16 -0500
Message-ID: <1392205936.13563.62.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Wed, 12 Feb 2014 11:52:16 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] arm32 crash on domain destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Julien,

I'm seeing this, which I thought you had fixed last week; did I fail to
apply a patch, or is this something new?

        (XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
        (XEN) Xen BUG at mm.c:334
        (XEN) CPU2: Unexpected Trap: Undefined Instruction
        (XEN) ----[ Xen-4.4-rc2  arm32  debug=y  Not tainted ]----
        (XEN) CPU:    2
        (XEN) PC:     0023fb54 __bug+0x28/0x44
        (XEN) CPSR:   2000015a MODE:Hypervisor
        (XEN)      R0: 002646dc R1: 00000003 R2: 3fd2bd80 R3: 00000fff
        (XEN)      R4: 00261a0c R5: 0000014e R6: 00000c00 R7: 00000000
        (XEN)      R8: 4007f080 R9: c4176000 R10:00000000 R11:7ffcfd64 R12:00000004
        (XEN) HYP: SP: 7ffcfd5c LR: 0023fb54
        (XEN) 
        (XEN)   VTCR_EL2: 80002558
        (XEN)  VTTBR_EL2: 00010002f9ffc000
        (XEN) 
        (XEN)  SCTLR_EL2: 30cd187f
        (XEN)    HCR_EL2: 0000000000282835
        (XEN)  TTBR0_EL2: 00000000be010000
        (XEN) 
        (XEN)    ESR_EL2: 00000000
        (XEN)  HPFAR_EL2: 0000000000fff110
        (XEN)      HDFAR: e0800f00
        (XEN)      HIFAR: 00000000
        (XEN) 
        (XEN) Xen stack trace from sp=7ffcfd5c:
        (XEN)    00000000 7ffcfd74 0024810c 40075000 1001b000 7ffcfd84 0020b48c 40075000
        (XEN)    4007f000 7ffcfd94 0020b4e0 4002dff0 40075000 7ffcfda4 0020c04c 4007f000
        (XEN)    00000000 7ffcfdc4 0020b334 00000000 c000e73c 4007f000 b6fb3004 7ffcff58
        (XEN)    00000005 7ffcfddc 00208290 00000001 c000e73c 4007f000 b6fb3004 7ffcfeec
        (XEN)    00206368 00020002 00000000 00020100 00000000 00000000 00000000 00000000
        (XEN)    00000000 00000000 00000000 000be076 00000000 7b1eaf48 00000002 00000001
        (XEN)    00000000 00000000 997b032a fd48d390 9406888f 40725f67 00000000 00000002
        (XEN)    00000009 00050001 00035890 becdb884 b6fbe4a9 b6f16a98 b6fcd2c8 00000005
        (XEN)    b6ec3258 becdb854 00035030 ffffffff 0003589c 00037a60 00000005 b6d63000
        (XEN)    00000000 b6d634c0 b6f39000 0005f780 00035890 becdb8c4 b6fbe4a9 00000000
        (XEN)    00000001 00000005 00000000 becdb874 b6f16a98 00000001 000355e0 0003c198
        (XEN)    00035030 00000001 0003589c 7ffcff58 c000e73c 00000ea1 ddb3be40 7ffcff58
        (XEN)    00000005 c4176000 00000000 7ffcff54 0024d280 b6d634c0 0022c8c4 0022c6b8
        (XEN)    00000002 002ae000 002b1ff0 002e7614 400218d8 00000000 00000013 fff11b00
        (XEN)    00000001 becdb5f0 ddb3be40 00305000 00000005 c4176000 b6d634c0 becdb790
        (XEN)    ddb3be40 00305000 00000005 c4176000 00000000 7ffcff58 0024fa70 b6fb3004
        (XEN)    00000001 b6de4f50 0005f338 b6d634c0 becdb790 ddb3be40 00305000 00000005
        (XEN)    c4176000 00000000 becdb77c 00000024 ffffffff b6f2c708 c000e73c 60000013
        (XEN)    00000000 becdb778 c07e1380 c0012540 c4177e9c c025b570 c07e138c c0012680
        (XEN)    c07e1398 c0012720 00000000 00000000 00000000 00000000 00000000 00000000
        (XEN) Xen call trace:
        (XEN)    [<0023fb54>] __bug+0x28/0x44 (PC)
        (XEN)    [<0023fb54>] __bug+0x28/0x44 (LR)
        (XEN)    [<0024810c>] domain_page_map_to_mfn+0x50/0xb4
        (XEN)    [<0020b48c>] unmap_guest_page+0x20/0x54
        (XEN)    [<0020b4e0>] cleanup_control_block+0x20/0x34
        (XEN)    [<0020c04c>] evtchn_fifo_destroy+0x2c/0x6c
        (XEN)    [<0020b334>] evtchn_destroy+0x1a8/0x1b0
        (XEN)    [<00208290>] domain_kill+0x60/0x128
        (XEN)    [<00206368>] do_domctl+0xa6c/0x10ec
        (XEN)    [<0024d280>] do_trap_hypervisor+0xad8/0xd78
        (XEN)    [<0024fa70>] return_from_trap+0/0x4
        


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:00:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYTn-0003JM-JN; Wed, 12 Feb 2014 11:59:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDYTl-0003JH-JF
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 11:59:49 +0000
Received: from [193.109.254.147:23295] by server-15.bemta-14.messagelabs.com
	id D1/EC-10839-4326BF25; Wed, 12 Feb 2014 11:59:48 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392206387!3802165!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4311 invoked from network); 12 Feb 2014 11:59:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 11:59:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Feb 2014 12:00:52 +0000
Message-Id: <52FB7042020000780011BAD0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 12 Feb 2014 11:59:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
 task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 11.02.14 at 20:19, David Vrabel <david.vrabel@citrix.com> wrote:
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>  
>  	irq_exit();
>  	set_irq_regs(old_regs);
> +
> +#ifndef CONFIG_PREEMPT
> +	if ( __this_cpu_read(xed_nesting_count) == 0
> +	     && is_preemptible_hypercall(regs) )
> +		_cond_resched();
> +#endif

I don't think this can be done here - a 64-bit x86 kernel would
generally be on the IRQ stack, and I don't think scheduling
should be done in this state.
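The guard the hunk adds, together with the objection above, can be modeled in a
small sketch (Python for illustration only; this is not kernel code, and every
name below is a stand-in, not a real kernel symbol):

```python
def may_cond_resched(preempt_kernel: bool, nesting_count: int,
                     in_preemptible_hypercall: bool, on_irq_stack: bool) -> bool:
    """Model of the posted guard plus the objection above.

    The hunk reschedules only on non-CONFIG_PREEMPT kernels, at upcall
    nesting depth zero, when the interrupted task was inside a preemptible
    hypercall.  The objection adds one more condition: on a 64-bit x86
    kernel the handler would generally be running on the IRQ stack, where
    scheduling must not happen.
    """
    if preempt_kernel:
        return False        # CONFIG_PREEMPT kernels reschedule elsewhere
    if on_irq_stack:
        return False        # the missing check the objection points at
    return nesting_count == 0 and in_preemptible_hypercall

# As posted, the hunk corresponds to assuming on_irq_stack is False:
assert may_cond_resched(False, 0, True, on_irq_stack=False)
assert not may_cond_resched(False, 0, True, on_irq_stack=True)
```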

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:05:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYZA-0003iW-Lj; Wed, 12 Feb 2014 12:05:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDYZ9-0003iO-La
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 12:05:23 +0000
Received: from [85.158.143.35:8519] by server-1.bemta-4.messagelabs.com id
	90/07-31661-3836BF25; Wed, 12 Feb 2014 12:05:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392206721!5080836!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23041 invoked from network); 12 Feb 2014 12:05:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 12:05:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100089438"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 12:05:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 07:05:20 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDYZ6-00075l-7z;
	Wed, 12 Feb 2014 12:05:20 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDYZ6-000347-1d;
	Wed, 12 Feb 2014 12:05:20 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.25471.891504.646291@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 12:05:19 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392205809.13563.61.camel@kazak.uk.xensource.com>
References: <1392201770-14927-1-git-send-email-ian.campbell@citrix.com>
	<21243.24311.41722.641547@mariner.uk.xensource.com>
	<1392205809.13563.61.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Configure the Calxeda fabric on
	host boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH OSSTEST] Configure the Calxeda fabric on host boot"):
> On Wed, 2014-02-12 at 11:45 +0000, Ian Jackson wrote:
> > This shouldn't be a host property, but a global config option (or even
> > hardcoded).  There should be a host flag for the bodge, probably
> > "need-calxeda-bridge-fixup" or something.
> 
> equiv-marilith isn't suitable I take it?

Wow, that's _really_ bodgy.

> > And you could wrap these lines, while you're at it:
> > 
> > > +        target_putfile_root($ho, 10, "$images/iproute_${ver}_all.deb", "iproute_${ver}_all.deb");
> > > +        target_putfile_root($ho, 10, "$images/iproute2_${ver}_armhf.deb", "iproute2_${ver}_armhf.deb");
> 
> Sure.
> 
> > Or replace this extensive code with something more condensed, e.g.:
> > 
> >   +        my @debs = map { "iproute2_${ver}_${_}.deb" } qw(all armhf);
> 
> This misses the iproute vs iproute2 distinction in the names, but I'll
> see if I can get something like this to work.

Oh so it does.  Then you need to say
   +        my @debs = ("iproute_${ver}_all.deb", "iproute2_${ver}_armhf.deb");
but you can at least spell that out only once rather than three times
as you do...
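The condensed form being discussed, building the filenames once from a
(package, architecture) table, can be sketched as follows (in Python rather
than osstest's Perl, purely for illustration; the version string is a made-up
placeholder, not taken from the thread):

```python
ver = "20120521-3+b3"  # hypothetical version, for illustration only

# (package, architecture) pairs; iproute and iproute2 differ in both fields,
# which is why a single map over architectures alone does not work here.
pairs = [("iproute", "all"), ("iproute2", "armhf")]
debs = [f"{pkg}_{ver}_{arch}.deb" for pkg, arch in pairs]

print(debs)
# ['iproute_20120521-3+b3_all.deb', 'iproute2_20120521-3+b3_armhf.deb']
```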

Ian.
(trying to turn everything into Common Lisp)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:07:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYal-0003tW-5o; Wed, 12 Feb 2014 12:07:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDYaj-0003tK-FD
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 12:07:01 +0000
Received: from [193.109.254.147:36272] by server-12.bemta-14.messagelabs.com
	id B5/68-17220-4E36BF25; Wed, 12 Feb 2014 12:07:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392206819!3802752!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13161 invoked from network); 12 Feb 2014 12:07:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 12:07:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101885113"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 12:06:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 07:06:58 -0500
Message-ID: <1392206817.13563.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 12:06:57 +0000
In-Reply-To: <21243.25471.891504.646291@mariner.uk.xensource.com>
References: <1392201770-14927-1-git-send-email-ian.campbell@citrix.com>
	<21243.24311.41722.641547@mariner.uk.xensource.com>
	<1392205809.13563.61.camel@kazak.uk.xensource.com>
	<21243.25471.891504.646291@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Configure the Calxeda fabric on
	host boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 12:05 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH OSSTEST] Configure the Calxeda fabric on host boot"):
> > On Wed, 2014-02-12 at 11:45 +0000, Ian Jackson wrote:
> > > This shouldn't be a host property, but a global config option (or even
> > > hardcoded).  There should be a host flag for the bodge, probably
> > > "need-calxeda-bridge-fixup" or something.
> > 
> > equiv-marilith isn't suitable I take it?
> 
> Wow, that's _really_ bodgy.

I'll take that as a no ;-)

> > > And you could wrap these lines, while you're at it:
> > > 
> > > > +        target_putfile_root($ho, 10, "$images/iproute_${ver}_all.deb", "iproute_${ver}_all.deb");
> > > > +        target_putfile_root($ho, 10, "$images/iproute2_${ver}_armhf.deb", "iproute2_${ver}_armhf.deb");
> > 
> > Sure.
> > 
> > > Or replace this extensive code with something more condensed, e.g.:
> > > 
> > >   +        my @debs = map { "iproute2_${ver}_${_}.deb" } qw(all armhf);
> > 
> > This misses the iproute vs iproute2 distinction in the names, but I'll
> > see if I can get something like this to work.
> 
> Oh so it does.  Then you need to say
>    +        my @debs = ("iproute_${ver}_all.deb", "iproute2_${ver}_armhf.deb");
> but you can at least spell that out only once rather than three times
> as you do...

Ack

> Ian.
> (trying to turn everything into Common Lisp)

(Ack)

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> 
> Oh so it does.  Then you need to say
>    +        my @debs = "iproute_${ver}_all.deb", "iproute2_${ver}_armhf.deb";
> but you can at least spell that out only once rather than three times
> as you do...

Ack

> Ian.
> (trying to turn everything into Common Lisp)

(Ack)

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:20:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYnA-0004Ma-H0; Wed, 12 Feb 2014 12:19:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WDYn8-0004ML-Hr
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 12:19:50 +0000
Received: from [85.158.139.211:26377] by server-6.bemta-5.messagelabs.com id
	7B/09-14342-5E66BF25; Wed, 12 Feb 2014 12:19:49 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392207587!3366972!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28430 invoked from network); 12 Feb 2014 12:19:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 12:19:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101889625"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 12:19:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 07:19:46 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WDYn4-0005JJ-3l;
	Wed, 12 Feb 2014 12:19:46 +0000
Date: Wed, 12 Feb 2014 12:19:46 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Message-ID: <20140212121946.GA31937@zion.uk.xensource.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 01:53:26PM -0800, Luis R. Rodriguez wrote:
> Cc'ing kvm folks as they may have a shared interest on the shared
> physical case with the bridge (non NAT).
> 
> On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> >>
> >> Although the xen-netback interfaces do not participate in the
> >> link as a typical Ethernet device interfaces for them are
> >> still required under the current archtitecture. IPv6 addresses
> >> do not need to be created or assigned on the xen-netback interfaces
> >> however, even if the frontend devices do need them, so clear the
> >> multicast flag to ensure the net core does not initiate IPv6
> >> Stateless Address Autoconfiguration.
> >
> > How does disabling SAA flow from the absence of multicast?
> 
> See patch 1 in this series [0], but I explain the issue I see with
> this on the cover letter [1]. In summary the RFCs on IPv6 make it
> clear you need multicast for Stateless address autoconfiguration
> (SLAAC is the preferred acronym) and DAD, however the net core has not
> made this a requirement, and hence the patch. The caveat which I
> address on the cover letter needs to be seriously considered though.
> 
> [0] http://marc.info/?l=linux-netdev&m=139207142110535&w=2
> [1] http://marc.info/?l=linux-netdev&m=139207142110536&w=2
> 
> > Surely these should be controlled logically independently even if there is some
> > notional linkage.
> 
> When a node hops on a network it will query its network by sending a
> router solicitation multicast request for its configuration
> parameters, the router can respond with router advertisements to
> disable SLAAC.
> 
> Apart from that we have no other means to disable SLAAC neatly, and as
> I gather that would be counter to the IPv6 RFCs anyway, and that makes
> sense.
> 
> > Can SAA not be disabled directly?
> 
> Nope. The ipv6 core assumes all device want ipv6 and this is done upon
> netdev registration, and as I noted on my patch 1 description --
> although ipv6 supports a module parameter to disable autoconfiguration
> RFC4682 Section 5.4 makes it clear that DAD *MUST* be performed on all

FWIW: RFC4862 :-)

You had the same typo in patch 1.

Wei.
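[Editorial sketch, for context on the SLAAC mechanics discussed above: absent router advertisements saying otherwise, the kernel forms a link-local address from the interface MAC via modified EUI-64 — flip the universal/local bit and splice in ff:fe. A Python illustration, not code from either patch; the MAC below is an arbitrary Xen-OUI example.]

```python
# Modified EUI-64 derivation used by IPv6 SLAAC for link-local addresses.
# Illustrative sketch only; the example MAC uses the Xen OUI 00:16:3e.
def eui64_link_local(mac: str) -> str:
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                        # flip the universal/local bit
    eui = b[:3] + [0xFF, 0xFE] + b[3:]  # splice ff:fe into the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("00:16:3e:12:34:56"))
```

For example, `eui64_link_local("00:16:3e:12:34:56")` gives `fe80::216:3eff:fe12:3456` — the kind of address netback would autoconfigure unless multicast (and hence SLAAC/DAD) is suppressed.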

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:31:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYyA-0004tR-Qm; Wed, 12 Feb 2014 12:31:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDYy9-0004tM-OI
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 12:31:14 +0000
Received: from [85.158.137.68:39074] by server-9.bemta-3.messagelabs.com id
	9D/68-10184-1996BF25; Wed, 12 Feb 2014 12:31:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392208270!1359363!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6983 invoked from network); 12 Feb 2014 12:31:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 12:31:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101891713"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 12:31:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 07:31:09 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDYy5-0007EM-Ab;
	Wed, 12 Feb 2014 12:31:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 12:31:09 +0000
Message-ID: <1392208269-17122-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST v2] Configure the Calxeda fabric on host
	boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The fabric on the Calxeda midway boxes (marilith-* in osstest) does not learn
mac addresses (at least not with the firmware we have, and with Calxeda having
folded this seems unlikely to get fixed). This means that guests do not get
network connectivity unless their mac addresses are explicitly registered with
the fabric.

Registrations can be done with the bridge(8) tool from the iproute2 package
which unfortunately is only present in Jessie+ and not in Wheezy. So I have
done my own backport and placed it in $images/wheezy-iproute2 and arranged for
it to be installed along with the transitional iproute package (from the same
source) which is needed to satisfy various dependencies.

The registrations are ephemeral and need to be renewed on each reboot, so add
the necessary commands to rc.local during ts-xen-install.
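[For illustration, the rc.local additions amount to one "bridge fdb add" line
per potential guest MAC. A Python sketch of the generation, not osstest code;
the prefix is an assumed example — the real one comes from ether_prefix():]

```python
# Illustrative sketch: generate the "bridge fdb add" lines appended to
# rc.local. The prefix value is an assumption for the example; osstest
# derives the real one per host/flight.
prefix = "5a:36:0e:12"
nr = 8
rclocal = "".join(
    f"bridge fdb add {prefix}:{i >> 8:02x}:{i & 0xff:02x} dev eth0\n"
    for i in range(1, nr + 1)  # select_ether allocates from <prefix>:00:01
)
print(rclocal, end="")
```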

This required leaking a certain amount of the implementation of select_ether.
Unless we want to do the bodge on every ts-guest-start, reboot, migrate etc
then this seems to be the best way.

test-armhf-armhf-xl now gets past ts-guest-start and as far as the migration
tests.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: hardcode 8 mac addresses and gate on equiv-marilith host flag
    simplify code to install iproute debs
---
 Osstest/CXFabric.pm    | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++
 Osstest/TestSupport.pm | 21 ++++++++-----
 ts-xen-install         |  3 ++
 3 files changed, 101 insertions(+), 7 deletions(-)
 create mode 100644 Osstest/CXFabric.pm

diff --git a/Osstest/CXFabric.pm b/Osstest/CXFabric.pm
new file mode 100644
index 0000000..866aefe
--- /dev/null
+++ b/Osstest/CXFabric.pm
@@ -0,0 +1,84 @@
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+package Osstest::CXFabric;
+
+use strict;
+use warnings;
+
+use Osstest;
+use Osstest::TestSupport;
+
+BEGIN {
+    use Exporter ();
+    our ($VERSION, @ISA, @EXPORT, @EXPORT_OK, %EXPORT_TAGS);
+    $VERSION     = 1.00;
+    @ISA         = qw(Exporter);
+    @EXPORT      = qw(
+                      setup_cxfabric
+                      );
+    %EXPORT_TAGS = ( );
+
+    @EXPORT_OK   = qw();
+}
+
+sub setup_cxfabric($)
+{
+    my ($ho) = @_;
+
+    # This is only needed on Calxeda boxes, which given that they have folded
+    # is unlikely to be anything other than exactly our marilith box.
+    return unless $ho->{Flags}{'equiv-marilith'};
+
+    my $nr = 8;
+
+    my $prefix = ether_prefix($ho);
+    logm("Registering $nr MAC addresses with CX fabric using prefix $prefix");
+
+    if ( $ho->{Suite} =~ m/wheezy/ )
+    {
+        # iproute2 is not in Wheezy nor wheezy-backports. Use our own backport.
+        my $images = "$c{Images}/wheezy-iproute2";
+        my $ver = '3.12.0-1~xen70+1';
+        my @debs = ("iproute_${ver}_all.deb", "iproute2_${ver}_armhf.deb");
+
+        target_putfile_root($ho, 10, "$images/$_", $_) foreach @debs;
+        target_cmd_root($ho, "dpkg -i @debs");
+    } else {
+        target_install_packages($ho, qw(iproute2));
+    }
+
+    my $banner = '# osstest: register potential guest MACs with CX fabric';
+    my $rclocal = "$banner\n";
+    # Osstest::TestSupport::select_ether allocates sequentially from $prefix:00:01
+    my $i = 0;
+    while ( $i++ < $nr ) {
+        $rclocal .= sprintf("bridge fdb add $prefix:%02x:%02x dev eth0\n",
+                            $i >> 8, $i & 0xff);
+    }
+
+    target_editfile_root($ho, '/etc/rc.local', sub {
+        my $had_banner = 0;
+        while (<::EI>) {
+            $had_banner = 1 if m/^$banner$/;
+            print ::EO $rclocal if m/^exit 0$/ && !$had_banner;
+            print ::EO;
+        }
+    });
+
+}
+
+1;
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index a513540..98eb172 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -97,6 +97,8 @@ BEGIN {
                       await_webspace_fetch_byleaf create_webfile
                       file_link_contents get_timeout
                       setup_pxeboot setup_pxeboot_local host_pxefile
+
+                      ether_prefix
                       );
     %EXPORT_TAGS = ( );
 
@@ -1245,6 +1247,17 @@ sub target_choose_vg ($$) {
     return $bestvg;
 }
 
+sub ether_prefix($) {
+    my ($ho) = @_;
+    my $prefix = get_host_property($ho, 'gen-ether-prefix-base');
+    $prefix =~ m/^(\w+:\w+):(\w+):(\w+)$/ or die "$prefix ?";
+    my $lhs = $1;
+    my $pv = (hex($2)<<8) | (hex($3));
+    $pv ^= $mjobdb->gen_ether_offset($ho,$flight);
+    $prefix = sprintf "%s:%02x:%02x", $lhs, ($pv>>8)&0xff, $pv&0xff;
+    return $prefix;
+}
+
 sub select_ether ($$) {
     my ($ho,$vn) = @_;
     # must be run outside transaction
@@ -1252,13 +1265,7 @@ sub select_ether ($$) {
     return $ether if defined $ether;
 
     db_retry($flight,'running', $dbh_tests,[qw(flights)], sub {
-	my $prefix = get_host_property($ho, 'gen-ether-prefix-base');
-	$prefix =~ m/^(\w+:\w+):(\w+):(\w+)$/ or die "$prefix ?";
-	my $lhs = $1;
-	my $pv = (hex($2)<<8) | (hex($3));
-	$pv ^= $mjobdb->gen_ether_offset($ho,$flight);
-	$prefix = sprintf "%s:%02x:%02x", $lhs, ($pv>>8)&0xff, $pv&0xff;
-
+	my $prefix = ether_prefix($ho);
 	my $glob_ether = $mjobdb->jobdb_db_glob('*_ether');
 
         my $previous= $dbh_tests->selectrow_array(<<END, {}, $flight);
diff --git a/ts-xen-install b/ts-xen-install
index fc96516..903ed45 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -22,6 +22,7 @@ use File::Path;
 use POSIX;
 use Osstest::Debian;
 use Osstest::TestSupport;
+use Osstest::CXFabric;
 
 my $checkmode= 0;
 
@@ -122,6 +123,8 @@ sub adjustconfig () {
     });
 
     target_cmd_root($ho, 'mkdir -p /var/log/xen/console');
+
+    setup_cxfabric($ho);
 }
 
 sub setupboot () {
-- 
1.8.5.2
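
[Editorial note: the ether_prefix() helper factored out of select_ether in the
patch above XORs a per-flight offset into the low 16 bits of the host's
gen-ether-prefix-base. A Python transliteration of the Perl, for illustration
only; the base prefix and offset values are assumed examples:]

```python
# Python transliteration (illustrative) of Osstest::TestSupport::ether_prefix:
# XOR a per-flight offset into the last two octets of the base prefix.
def ether_prefix(base: str, offset: int) -> str:
    parts = base.split(":")
    assert len(parts) == 4, f"{base} ?"
    lhs = ":".join(parts[:2])
    pv = (int(parts[2], 16) << 8) | int(parts[3], 16)
    pv ^= offset  # $mjobdb->gen_ether_offset($ho, $flight) in the Perl
    return f"{lhs}:{(pv >> 8) & 0xff:02x}:{pv & 0xff:02x}"

print(ether_prefix("5a:36:0e:12", 0x0101))
```

With the assumed inputs, `ether_prefix("5a:36:0e:12", 0x0101)` yields
`5a:36:0f:13`; an offset of 0 returns the base unchanged.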


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:31:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDYyA-0004tR-Qm; Wed, 12 Feb 2014 12:31:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDYy9-0004tM-OI
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 12:31:14 +0000
Received: from [85.158.137.68:39074] by server-9.bemta-3.messagelabs.com id
	9D/68-10184-1996BF25; Wed, 12 Feb 2014 12:31:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392208270!1359363!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6983 invoked from network); 12 Feb 2014 12:31:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 12:31:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101891713"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 12:31:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 07:31:09 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDYy5-0007EM-Ab;
	Wed, 12 Feb 2014 12:31:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 12:31:09 +0000
Message-ID: <1392208269-17122-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST v2] Configure the Calxeda fabric on host
	boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The fabric on the Calxeda midway boxes (marilith-* in osstest) does not learn
mac address (at least not with the firmware we have, and with Calxeda folding
this seems unlikely to get fixed). This means that guests do not get network
connectivity unless their mac addresses explicitly registered with the fabric.

Registrations can be done with the bridge(8) tool from the iproute2 package
which unfortunately is only present in Jessie+ and not in Wheezy. So I have
done my own backport and placed it in $images/wheezy-iproute2 and arranged for
it to be installed along with the transitional iproute package (from the same
source) which is needed to satisfy various dependencies.

The registrations are ephemeral and need to be renewed on each reboot, so add
the necessary commands to rc.local during ts-xen-install.

This required leaking a certain amount of the implementation of select_ether.
Unless we want to do the bodge on every ts-guest-start, reboot, migrate etc
then this seems to be the best way.

test-armhf-armhf-xl now gets past ts-guest-start and as far as the migration
tests.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2: hardcode 8 mac addresses and gate on equiv-marilith host flag
    simplify code to install iproute debs
---
 Osstest/CXFabric.pm    | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++
 Osstest/TestSupport.pm | 21 ++++++++-----
 ts-xen-install         |  3 ++
 3 files changed, 101 insertions(+), 7 deletions(-)
 create mode 100644 Osstest/CXFabric.pm

diff --git a/Osstest/CXFabric.pm b/Osstest/CXFabric.pm
new file mode 100644
index 0000000..866aefe
--- /dev/null
+++ b/Osstest/CXFabric.pm
@@ -0,0 +1,84 @@
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+package Osstest::CXFabric;
+
+use strict;
+use warnings;
+
+use Osstest;
+use Osstest::TestSupport;
+
+BEGIN {
+    use Exporter ();
+    our ($VERSION, @ISA, @EXPORT, @EXPORT_OK, %EXPORT_TAGS);
+    $VERSION     = 1.00;
+    @ISA         = qw(Exporter);
+    @EXPORT      = qw(
+                      setup_cxfabric
+                      );
+    %EXPORT_TAGS = ( );
+
+    @EXPORT_OK   = qw();
+}
+
+sub setup_cxfabric($)
+{
+    my ($ho) = @_;
+
+    # This is only needed on Calxeda boxes, which given that they have folded
+    # is unlikely to be anything other than exactly our marilith box.
+    return unless $ho->{Flags}{'equiv-marilith'};
+
+    my $nr = 8;
+
+    my $prefix = ether_prefix($ho);
+    logm("Registering $nr MAC addresses with CX fabric using prefix $prefix");
+
+    if ( $ho->{Suite} =~ m/wheezy/ )
+    {
+        # iproute2 is not in Wheezy nor wheezy-backports. Use our own backport.
+        my $images = "$c{Images}/wheezy-iproute2";
+        my $ver = '3.12.0-1~xen70+1';
+        my @debs = ("iproute_${ver}_all.deb", "iproute2_${ver}_armhf.deb");
+
+        target_putfile_root($ho, 10, "$images/$_", $_) foreach @debs;
+        target_cmd_root($ho, "dpkg -i @debs");
+    } else {
+        target_install_packages($ho, qw(iproute2));
+    }
+
+    my $banner = '# osstest: register potential guest MACs with CX fabric';
+    my $rclocal = "$banner\n";
+    # Osstest::TestSupport::select_ether allocates sequentially from $prefix:00:01
+    my $i = 0;
+    while ( $i++ < $nr ) {
+        $rclocal .= sprintf("bridge fdb add $prefix:%02x:%02x dev eth0\n",
+                            $i >> 8, $i & 0xff);
+    }
+
+    target_editfile_root($ho, '/etc/rc.local', sub {
+        my $had_banner = 0;
+        while (<::EI>) {
+            $had_banner = 1 if m/^$banner$/;
+            print ::EO $rclocal if m/^exit 0$/ && !$had_banner;
+            print ::EO;
+        }
+    });
+
+}
+
+1;
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index a513540..98eb172 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -97,6 +97,8 @@ BEGIN {
                       await_webspace_fetch_byleaf create_webfile
                       file_link_contents get_timeout
                       setup_pxeboot setup_pxeboot_local host_pxefile
+
+                      ether_prefix
                       );
     %EXPORT_TAGS = ( );
 
@@ -1245,6 +1247,17 @@ sub target_choose_vg ($$) {
     return $bestvg;
 }
 
+sub ether_prefix($) {
+    my ($ho) = @_;
+    my $prefix = get_host_property($ho, 'gen-ether-prefix-base');
+    $prefix =~ m/^(\w+:\w+):(\w+):(\w+)$/ or die "$prefix ?";
+    my $lhs = $1;
+    my $pv = (hex($2)<<8) | (hex($3));
+    $pv ^= $mjobdb->gen_ether_offset($ho,$flight);
+    $prefix = sprintf "%s:%02x:%02x", $lhs, ($pv>>8)&0xff, $pv&0xff;
+    return $prefix;
+}
+
 sub select_ether ($$) {
     my ($ho,$vn) = @_;
     # must be run outside transaction
@@ -1252,13 +1265,7 @@ sub select_ether ($$) {
     return $ether if defined $ether;
 
     db_retry($flight,'running', $dbh_tests,[qw(flights)], sub {
-	my $prefix = get_host_property($ho, 'gen-ether-prefix-base');
-	$prefix =~ m/^(\w+:\w+):(\w+):(\w+)$/ or die "$prefix ?";
-	my $lhs = $1;
-	my $pv = (hex($2)<<8) | (hex($3));
-	$pv ^= $mjobdb->gen_ether_offset($ho,$flight);
-	$prefix = sprintf "%s:%02x:%02x", $lhs, ($pv>>8)&0xff, $pv&0xff;
-
+	my $prefix = ether_prefix($ho);
 	my $glob_ether = $mjobdb->jobdb_db_glob('*_ether');
 
         my $previous= $dbh_tests->selectrow_array(<<END, {}, $flight);
diff --git a/ts-xen-install b/ts-xen-install
index fc96516..903ed45 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -22,6 +22,7 @@ use File::Path;
 use POSIX;
 use Osstest::Debian;
 use Osstest::TestSupport;
+use Osstest::CXFabric;
 
 my $checkmode= 0;
 
@@ -122,6 +123,8 @@ sub adjustconfig () {
     });
 
     target_cmd_root($ho, 'mkdir -p /var/log/xen/console');
+
+    setup_cxfabric($ho);
 }
 
 sub setupboot () {
-- 
1.8.5.2
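For reference, the arithmetic in the ether_prefix() hunk above (split the low two octets of the gen-ether-prefix-base host property into a 16-bit value, XOR in a per-flight offset, reformat) can be sketched standalone. This is a Python sketch, not the osstest code itself; the base prefix "5a:36:0e:00" and offset 0x0005 are hypothetical example values, not real osstest configuration.

```python
def ether_prefix(base, offset):
    """XOR a per-flight offset into the low 16 bits of a MAC prefix.

    Mirrors the Perl ether_prefix() in the patch above. 'base' stands in
    for the gen-ether-prefix-base host property and 'offset' for
    gen_ether_offset(); both values used below are hypothetical.
    """
    parts = base.split(":")
    assert len(parts) == 4, "%s ?" % base
    lhs = ":".join(parts[:2])                      # upper two octets, kept as-is
    pv = (int(parts[2], 16) << 8) | int(parts[3], 16)  # low two octets as 16 bits
    pv ^= offset
    return "%s:%02x:%02x" % (lhs, (pv >> 8) & 0xFF, pv & 0xFF)

prefix = ether_prefix("5a:36:0e:00", 0x0005)
print(prefix)  # 5a:36:0e:05

# setup_cxfabric() then emits one rc.local line per potential guest MAC;
# select_ether() allocates sequentially from <prefix>:00:01 upward.
for i in range(1, 9):
    print("bridge fdb add %s:%02x:%02x dev eth0" % (prefix, i >> 8, i & 0xFF))
```

The XOR (rather than addition) keeps the mapping self-inverse and confined to the low 16 bits, so distinct flights get distinct, non-overlapping prefixes under the same base.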


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 12:56:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 12:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDZLo-0005mO-P4; Wed, 12 Feb 2014 12:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDZLm-0005mJ-Ob
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 12:55:38 +0000
Received: from [85.158.143.35:44501] by server-2.bemta-4.messagelabs.com id
	3A/81-10891-A4F6BF25; Wed, 12 Feb 2014 12:55:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392209725!5109209!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8999 invoked from network); 12 Feb 2014 12:55:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 12:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101895773"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 12:54:50 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 07:54:42 -0500
Message-ID: <52FB6F10.8040307@citrix.com>
Date: Wed, 12 Feb 2014 12:54:40 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
	<52FB7042020000780011BAD0@nat28.tlf.novell.com>
In-Reply-To: <52FB7042020000780011BAD0@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
 task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/02/14 11:59, Jan Beulich wrote:
>>>> On 11.02.14 at 20:19, David Vrabel <david.vrabel@citrix.com> wrote:
>> --- a/drivers/xen/events/events_base.c
>> +++ b/drivers/xen/events/events_base.c
>> @@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>>  
>>  	irq_exit();
>>  	set_irq_regs(old_regs);
>> +
>> +#ifndef CONFIG_PREEMPT
>> +	if ( __this_cpu_read(xed_nesting_count) == 0
>> +	     && is_preemptible_hypercall(regs) )
>> +		_cond_resched();
>> +#endif
> 
> I don't think this can be done here - a 64-bit x86 kernel would
> generally be on the IRQ stack, and I don't think scheduling
> should be done in this state.

_cond_resched() doesn't look that different from preempt_schedule_irq()
which is explicitly callable from irq context.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:02:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDZSb-0006Cx-Lj; Wed, 12 Feb 2014 13:02:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WDZSZ-0006Cr-I0
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 13:02:39 +0000
Received: from [85.158.137.68:57669] by server-7.bemta-3.messagelabs.com id
	4B/FE-13775-EE07BF25; Wed, 12 Feb 2014 13:02:38 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392210156!103164!1
X-Originating-IP: [209.85.220.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=HTML_MESSAGE,
	MIME_BASE64_TEXT,MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4158 invoked from network); 12 Feb 2014 13:02:37 -0000
Received: from mail-pa0-f54.google.com (HELO mail-pa0-f54.google.com)
	(209.85.220.54)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:02:37 -0000
Received: by mail-pa0-f54.google.com with SMTP id fa1so9247594pad.27
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 05:02:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:subject:mime-version:message-id:content-type;
	bh=d8pglrDIO5EBkYeTc7Q2T8VoI7mSv7LMp+9DyoUtxxI=;
	b=bNLYAoAnPMKQoDcIWSYG0hSBsD7PhfcrbkJllWYa/TyT6msS/yDLZ7VVILQsyM1efu
	Dwgb11WsY+AD0gHrzn+JIDfMmM/n+8fCM8GN/qCG6gyI7UqpUtAG6bPxZuRkrQQTjxUM
	UO5sEWSeqfgDeJpq1foE9KcV1PJqGbPwiKG8rHKZDZmV2hTO3BJ9y2/gM49s4ldGpPsy
	Ap9JM2vBZRvgIvNi2aiMTAYoHj8G/PomdsZeb6/FXuBpy3F2MSYGi7j4vOQ1uZqIa/HV
	VygsBd3VbKW/4f8g/b1Mgq2jLuPP4tpq/N7ZRcmhEoEFlfXexsdT9xyGVnhOQms28w7y
	nOJw==
X-Received: by 10.68.212.10 with SMTP id ng10mr51494826pbc.95.1392210155554;
	Wed, 12 Feb 2014 05:02:35 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id cz3sm63446041pbc.9.2014.02.12.05.02.33
	for <xen-devel@lists.xen.org>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 12 Feb 2014 05:02:35 -0800 (PST)
Date: Wed, 12 Feb 2014 21:02:31 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021221022931163650@gmail.com>
Subject: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8780912408497029026=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============8780912408497029026==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart203128732274_=----"

This is a multi-part message in MIME format.

------=_001_NextPart203128732274_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: base64

RGVhciBBTEwhDQoNCkZvbGxvd2luZyBtZXJnZSBtYXkgYmUgb3ZlcndyaXRlIHRoZSAieGVuOiBG
aXggdmNwdSBpbml0aWFsaXphdGlvbiIgcGF0Y2gNCi0tLS0tLS0tLS0tLS0tLS0tLS0tDQpNZXJn
ZSByZW1vdGUgYnJhbmNoICdvcmlnaW4vc3RhYmxlLTEuNicgaW50byB4ZW4tc3RhZ2luZy1tYXN0
ZXItOSANCi0tLS0tLS0tLS0tLS0tLS0tLS0tIA0KDQpUaGlzIG1hZGUgYSBjb25mbGljdCBmb3Ig
eGVuLWFsbC5jLg0KU28gaXQgc2VlbXMgdGhhdCB0aGUgdnBjdSBob3RwbHVnIHBhdGNoIHdhcyBv
dmVyd3JpdGVkIGJ5IHRoZSB1cHN0cmVhbSBxZW11IHZlcnNpb24uDQoNCg0KDQoNCg0KaGVyYmVy
dCBjbGFuZA==

------=_001_NextPart203128732274_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3DGB2312" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =CE=A2=C8=ED=D1=C5=BA=DA; COLOR: #000000;=
 LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Dear ALL!</DIV>
<DIV>&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Following merge may be overwrite the "xen:=
 Fix=20
vcpu initialization" patch</DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------</DIV>
<DIV style=3D"TEXT-INDENT: 2em">Merge remote branch 'origin/stable-1.6' in=
to=20
xen-staging-master-9 </DIV>
<DIV style=3D"TEXT-INDENT: 2em">--------------------&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV style=3D"TEXT-INDENT: 2em">This made a conflict for xen-all.c.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">So it seems that the vpcu hotplug patch wa=
s=20
overwrited by the upstream qemu version.</DIV>
<DIV style=3D"TEXT-INDENT: 2em">&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV></BODY></HTML>

------=_001_NextPart203128732274_=------



--===============8780912408497029026==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8780912408497029026==--



From xen-devel-bounces@lists.xen.org Wed Feb 12 13:05:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDZVB-0006LM-DC; Wed, 12 Feb 2014 13:05:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDZV9-0006LH-VD
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 13:05:20 +0000
Received: from [85.158.139.211:64647] by server-17.bemta-5.messagelabs.com id
	21/68-31975-F817BF25; Wed, 12 Feb 2014 13:05:19 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392210318!3385994!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3526 invoked from network); 12 Feb 2014 13:05:18 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:05:18 -0000
Received: by mail-ee0-f54.google.com with SMTP id e53so4290634eek.41
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 05:05:18 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Bw1Tcf9gm1S0qOgzWU1cm+FQTSWZGGJXR6Tz/E6xPlI=;
	b=CPdnhW+npyWASW8LiVSsO2DSqICskTf/gozfKi8HHGmRnvCjCnfl0QMd2EuyvkChnp
	SD/WQb6aDulB5LanVZXm5ieT6l6JoZrUAcW09cg3KCbOhFuFqcI9iLmb1K1e7sx8i9/a
	LUpLiNyPbsPrPheXiNfUmq1RT4VXNVCrDTJkMgHJZ1hhnqTp79LOBihDHiFmGrFJQUA+
	cDw6zjC3+KURYpKKi4XEk4OBG+22jKBwO9f5lj9JLYoROhgpc48z4gCA164/0hshBJ7I
	oZQk42dlvWgSkASL+DNUPmU9VdGT893a1xJyKysTUN4dqm0UV2upuf2dJyLxABQDZSzq
	7qgQ==
X-Gm-Message-State: ALoCoQnj89gxaslgsxEFpXnaxiLzbf9KSdW09SRQsTRUNOtBdSoZtya0UCFBQGUG5mG2HpQAWqW/
X-Received: by 10.15.53.69 with SMTP id q45mr2131308eew.108.1392210318457;
	Wed, 12 Feb 2014 05:05:18 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	k41sm81806978een.19.2014.02.12.05.05.15 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 12 Feb 2014 05:05:17 -0800 (PST)
Message-ID: <52FB7188.6070004@linaro.org>
Date: Wed, 12 Feb 2014 13:05:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392205936.13563.62.camel@kazak.uk.xensource.com>
In-Reply-To: <1392205936.13563.62.camel@kazak.uk.xensource.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] arm32 crash on domain destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 11:52 AM, Ian Campbell wrote:
> Julien,

Hi Ian,

> I'm seeing this which I thought you had fixed last week, did I fail to
> apply a patch or is this something new?
> 
>         (XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
>         (XEN) Xen BUG at mm.c:334

I suspect you didn't update your tree. This line corresponds to an empty
line in master.

The commit that should resolve this issue is: 65f3c7e.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:24:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:24:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDZnq-000719-Re; Wed, 12 Feb 2014 13:24:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDZnq-000714-18
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 13:24:38 +0000
Received: from [193.109.254.147:30956] by server-13.bemta-14.messagelabs.com
	id CF/5D-01226-5167BF25; Wed, 12 Feb 2014 13:24:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392211475!3828970!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14763 invoked from network); 12 Feb 2014 13:24:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:24:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101899571"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 13:15:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 08:15:02 -0500
Message-ID: <1392210901.13563.70.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: herbert cland <herbert.cland@gmail.com>
Date: Wed, 12 Feb 2014 13:15:01 +0000
In-Reply-To: <2014021221022931163650@gmail.com>
References: <2014021221022931163650@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 21:02 +0800, herbert cland wrote:
> Dear ALL!
>  
> Following merge may be overwrite the "xen: Fix vcpu initialization"

So you've said three or four times now. There is no need to repeat
yourself like this, rest assured that your mails are in the relevant
people's inboxes and will be dealt with. Pestering in this way is just
rude.

>  patch
> --------------------
> Merge remote branch 'origin/stable-1.6' into xen-staging-master-9 
> -------------------- 
>  
> This made a conflict for xen-all.c.
> So it seems that the vpcu hotplug patch was overwrited by the upstream
> qemu version.

It's not clear to me how you have reached that conclusion. A merge
conflict could be down to numerous things, not all of which are "vcpu
hotplug patch was overwrited".

You'll have to be a lot more precise about which branch you are merging
into which other branch, and what you think the undesirable fallout has
been, I'm afraid.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:26:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDZp7-000753-AX; Wed, 12 Feb 2014 13:25:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDZp6-00074x-Al
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 13:25:56 +0000
Received: from [85.158.137.68:24802] by server-17.bemta-3.messagelabs.com id
	74/44-22569-3667BF25; Wed, 12 Feb 2014 13:25:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392211553!114355!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28838 invoked from network); 12 Feb 2014 13:25:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:25:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101900460"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 13:20:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 08:20:05 -0500
Message-ID: <1392211204.13563.71.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 12 Feb 2014 13:20:04 +0000
In-Reply-To: <52FB7188.6070004@linaro.org>
References: <1392205936.13563.62.camel@kazak.uk.xensource.com>
	<52FB7188.6070004@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] arm32 crash on domain destroy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 13:05 +0000, Julien Grall wrote:
> On 02/12/2014 11:52 AM, Ian Campbell wrote:
> > Julien,
> 
> Hi Ian,
> 
> > I'm seeing this which I thought you had fixed last week, did I fail to
> > apply a patch or is this something new?
> > 
> >         (XEN) Assertion 'slot >= 0 && slot < DOMHEAP_ENTRIES' failed, line 334, file mm.c
> >         (XEN) Xen BUG at mm.c:334
> 
> I suspect you didn't update your tree. This line correspond to an empty
> in master.
> 
> The commit that should resolve this issue is: 65f3c7e.

Ah, I thought my osstest build had picked up staging but actually it was
master.

Sorry for the noise.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDZpm-00079B-P0; Wed, 12 Feb 2014 13:26:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDZpl-000792-MW
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 13:26:37 +0000
Received: from [85.158.143.35:30734] by server-1.bemta-4.messagelabs.com id
	E8/AA-31661-D867BF25; Wed, 12 Feb 2014 13:26:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392211594!5108479!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25765 invoked from network); 12 Feb 2014 13:26:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:26:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100107641"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 13:20:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 08:20:54 -0500
Message-ID: <1392211253.13563.72.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 12 Feb 2014 13:20:53 +0000
In-Reply-To: <52FB6E75020000780011BABB@nat28.tlf.novell.com>
References: <1392127796.26657.130.camel@kazak.uk.xensource.com>
	<1392127864-2605-3-git-send-email-ian.campbell@citrix.com>
	<52FB6E75020000780011BABB@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, tim@xen.org,
	julien.grall@linaro.org, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v5 3/5] xen/arm: clean and invalidate all
 guest caches by VMID after domain build.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 11:52 +0000, Jan Beulich wrote:
> >>> On 11.02.14 at 15:11, Ian Campbell <ian.campbell@citrix.com> wrote:
> > Guests are initially started with caches disabled and so we need to make sure
> > they see consistent data in RAM (requiring a cache clean) but also that they
> > do not have old stale data suddenly appear in the caches when they enable
> > their caches (requiring the invalidate).
> > 
> > This can be split into two halves. First we must flush each page as it is
> > allocated to the guest. It is not sufficient to do the flush at scrub time
> > since this will miss pages which are ballooned out by the guest (where the
> > guest must scrub if it cares about not leaking the pagecontent). We need to
> > clean as well as invalidate to make sure that any scrubbing which has 
> > occured
> > gets committed to real RAM. To achieve this add a new cacheflush_page 
> > function,
> > which is a stub on x86.
> > 
> > Secondly we need to flush anything which the domain builder touches, which 
> > we
> > do via a new domctl.
> > 
> > Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > Acked-by: Julien Grall <julien.grall@linaro.org>
> > Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks. With that I have applied this series.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:43:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDa5P-0007tK-Cd; Wed, 12 Feb 2014 13:42:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDa5O-0007tF-15
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 13:42:46 +0000
Received: from [85.158.137.68:27381] by server-3.bemta-3.messagelabs.com id
	71/57-14520-55A7BF25; Wed, 12 Feb 2014 13:42:45 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392212562!1386551!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21026 invoked from network); 12 Feb 2014 13:42:44 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:42:44 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392212564; x=1423748564;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=piZktaRA4xpXQwUdl1Hp84HuAGqVYIvYBSYx5jbuhpg=;
	b=PdVHyG18EkIS+hR9eIr7BxoRwy4Bu7AHH3JK9w63D4fbhIwqOlbvDhQU
	zOtm6oDy/h8Qz/+FJkk8XueCBnqyjXMbY7dJaDPoxh0qOESZkwbBA2neb
	47q7M/bhUINaQEUk5t3o6zmKZp5d6jiOacCQ2oLHw5Km9ueXgRaziwMN7 U=;
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="78966562"
Received: from smtp-in-7002.iad7.amazon.com ([10.229.167.11])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 13:42:40 +0000
Received: from ex10-hub-9001.ant.amazon.com (ex10-hub-9001.ant.amazon.com
	[10.185.137.58])
	by smtp-in-7002.iad7.amazon.com (8.14.7/8.14.7) with ESMTP id
	s1CDgY4K001861
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 13:42:38 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9001.ant.amazon.com (10.185.137.58) with Microsoft SMTP Server
	id 14.2.342.3; Wed, 12 Feb 2014 05:42:33 -0800
Message-ID: <52FB7A46.7020801@amazon.de>
Date: Wed, 12 Feb 2014 14:42:30 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matthew Rushton <mrushton7@yahoo.com>, =?UTF-8?B?Um9nZXIgUGF1IE1vbm5l?=
	=?UTF-8?B?IOKAjuKAjg==?= <roger.pau@citrix.com>
References: <1391742906.53011.YahooMailNeo@web122601.mail.ne1.yahoo.com>
In-Reply-To: <1391742906.53011.YahooMailNeo@web122601.mail.ne1.yahoo.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: =?UTF-8?B?SWFuIENhbXBiZWxsIOKAjg==?= <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	=?UTF-8?B?RGF2aWQgVnJhYmVsIA==?= =?UTF-8?B?4oCO4oCO4oCO?=
	<david.vrabel@citrix.com>, "msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	=?UTF-8?B?Qm9yaXMgT3N0cm92c2t5IOKAjg==?= <boris.ostrovsky@oracle.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>
Subject: Re: [Xen-devel] [PATCH v2 2/4] xen-blkback: fix memory leaks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07.02.14 04:15, Matthew Rushton wrote:
> On 04/02/14 10:26, Roger Pau Monne wrote:
>> I've at least identified two possible memory leaks in blkback, both
>> related to the shutdown path of a VBD:
>> 
>> - blkback doesn't wait for any pending purge work to finish before
>>   cleaning the list of free_pages. The purge work will call
>>   put_free_pages and thus we might end up with pages being added to
>>   the free_pages list after we have emptied it. Fix this by making
>>   sure there's no pending purge work before exiting
>>   xen_blkif_schedule, and moving the free_page cleanup code to
>>   xen_blkif_free.
>> - blkback doesn't wait for pending requests to end before cleaning
>>   persistent grants and the list of free_pages. Again this can add
>>   pages to the free_pages list or persistent grants to the
>>   persistent_gnts red-black tree. Fixed by moving the persistent
>>   grants and free_pages cleanup code to xen_blkif_free.
>> 
>> Also, add some checks in xen_blkif_free to make sure we are cleaning
>> everything.
> 
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>

Tested-by: Christoph Egger <chegger@amazon.de>
Reviewed-by: Christoph Egger <chegger@amazon.de>

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>   cleaning the list of free_pages. The purge work will call
>>   put_free_pages and thus we might end up with pages being added to
>>   the free_pages list after we have emptied it. Fix this by making
>>   sure there's no pending purge work before exiting
>>   xen_blkif_schedule, and moving the free_page cleanup code to
>>   xen_blkif_free.
>> - blkback doesn't wait for pending requests to end before cleaning
>>   persistent grants and the list of free_pages. Again this can add
>>   pages to the free_pages list or persistent grants to the
>>   persistent_gnts red-black tree. Fixed by moving the persistent
>>   grants and free_pages cleanup code to xen_blkif_free.
>> 
>> Also, add some checks in xen_blkif_free to make sure we are cleaning
>> everything.
> 
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>

Tested-by: Christoph Egger <chegger@amazon.de>
Reviewed-by: Christoph Egger <chegger@amazon.de>

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:43:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDa6A-0007wE-RL; Wed, 12 Feb 2014 13:43:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDa69-0007w6-Dx
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 13:43:33 +0000
Received: from [85.158.139.211:7240] by server-17.bemta-5.messagelabs.com id
	A9/DE-31975-48A7BF25; Wed, 12 Feb 2014 13:43:32 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392212610!3396954!1
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2398 invoked from network); 12 Feb 2014 13:43:31 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:43:31 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392212611; x=1423748611;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=tLPT5n4ioVPs4KlSFH1RiA01sSD0zTRqWtq3Xgg++JE=;
	b=IgTzZlEIV3XIRBYSkjwB17KsVEntEI3TZBOK0BRUDZwmZJvYiOBxtHiN
	XYCDzd5fiSmKNhb8Hbw9p46N+sqw8AGbEdkR5Syt+SX6cG42NG9jt35ok
	oHn6w6B8YEp98FFZtvlae//U83UIbSuz2szLYR7LM5MEUEe7bL7Z66bPC w=;
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="78967094"
Received: from email-inbound-relay-64001.pdx4.amazon.com ([10.220.169.6])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 13:43:29 +0000
Received: from ex10-hub-9004.ant.amazon.com (ex10-hub-9004.ant.amazon.com
	[10.185.137.182])
	by email-inbound-relay-64001.pdx4.amazon.com (8.14.7/8.14.7) with ESMTP
	id s1CDhTYq004990
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 13:43:29 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9004.ant.amazon.com (10.185.137.182) with Microsoft SMTP
	Server id 14.2.342.3; Wed, 12 Feb 2014 05:42:59 -0800
Message-ID: <52FB7A60.4000907@amazon.de>
Date: Wed, 12 Feb 2014 14:42:56 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matthew Rushton <mrushton7@yahoo.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>
References: <1391743434.10989.YahooMailNeo@web122602.mail.ne1.yahoo.com>
In-Reply-To: <1391743434.10989.YahooMailNeo@web122602.mail.ne1.yahoo.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: "Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"mrushton@amazon.com" <mrushton@amazon.com>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"msw@amazon.com" <msw@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>
Subject: Re: [Xen-devel] [PATCH v2 3/4] xen-blkback: fix shutdown race
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07.02.14 04:23, Matthew Rushton wrote:
> On 04/02/14 10:26, Roger Pau Monne wrote:
>> Introduce a new variable to keep track of the number of in-flight
>> requests. We need to make sure that when xen_blkif_put is called the
>> request has already been freed and we can safely free xen_blkif, which
>> was not the case before.
> 
> Tested-by: Matt Rushton <mrushton@amazon.com>
> Reviewed-by: Matt Rushton <mrushton@amazon.com>

Tested-by: Christoph Egger <chegger@amazon.de>
Reviewed-by: Christoph Egger <chegger@amazon.de>

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:43:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:43:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDa6F-0007xJ-7z; Wed, 12 Feb 2014 13:43:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=1133ffbf4=chegger@amazon.de>)
	id 1WDa6D-0007wr-F9
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 13:43:37 +0000
Received: from [85.158.139.211:63119] by server-2.bemta-5.messagelabs.com id
	08/88-23037-88A7BF25; Wed, 12 Feb 2014 13:43:36 +0000
X-Env-Sender: prvs=1133ffbf4=chegger@amazon.de
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392212610!3396954!2
X-Originating-IP: [207.171.184.29]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2750 invoked from network); 12 Feb 2014 13:43:35 -0000
Received: from smtp-fw-9102.amazon.com (HELO smtp-fw-9102.amazon.com)
	(207.171.184.29)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 13:43:35 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
	t=1392212615; x=1423748615;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=JyQY+fhf5jnt1i32dVlo9TMXkxls5xG9PYgcx7pdd3A=;
	b=lXNCWtVfZEBVVaf/0AKt1GF+C/iOk6bJqFxahZbtGAga2xkAT/Ngnnnn
	QCt8dzcZEB0YLfBGvWEGAk7XfD+vtPYWPu7aJdT1xVWdSQHpV7avWDEjo
	ECHWZHq3ROcj6T9/6DFryRjBaNl2e/J4zaCnRMuAuzje26u3Q9P4z18hc 4=;
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="78967151"
Received: from smtp-in-1101.vdc.amazon.com ([10.146.54.37])
	by smtp-border-fw-out-9102.sea19.amazon.com with
	ESMTP/TLS/DHE-RSA-AES256-SHA; 12 Feb 2014 13:43:34 +0000
Received: from ex10-hub-9002.ant.amazon.com (ex10-hub-9002.ant.amazon.com
	[10.185.137.130])
	by smtp-in-1101.vdc.amazon.com (8.14.7/8.14.7) with ESMTP id
	s1CDhV8C020659
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=OK);
	Wed, 12 Feb 2014 13:43:32 GMT
Received: from a8206654c64f.ant.amazon.com (172.17.1.114) by
	ex10-hub-9002.ant.amazon.com (10.185.137.130) with Microsoft SMTP
	Server id 14.2.342.3; Wed, 12 Feb 2014 05:43:26 -0800
Message-ID: <52FB7A7B.2060102@amazon.de>
Date: Wed, 12 Feb 2014 14:43:23 +0100
From: "Egger, Christoph" <chegger@amazon.de>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Matthew Rushton <mrushton7@yahoo.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>
References: <1391744311.40091.YahooMailNeo@web122606.mail.ne1.yahoo.com>
In-Reply-To: <1391744311.40091.YahooMailNeo@web122606.mail.ne1.yahoo.com>
X-Enigmail-Version: 1.6
Precedence: Bulk
Cc: "msw@amazon.com" <msw@amazon.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"david.vrabel@citrix.com" <david.vrabel@citrix.com>,
	"aliguori@amazon.com" <aliguori@amazon.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2 4/4] xen-blkif: drop struct
	blkif_request_segment_aligned
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 07.02.14 04:38, Matthew Rushton wrote:
>> This was wrongly introduced in commit 402b27f9, the only difference
>> between blkif_request_segment_aligned and blkif_request_segment is
>> that the former has a named padding, while both share the same
>> memory layout.
>>
>> Also correct a few minor glitches in the description, including for it
>> to no longer assume PAGE_SIZE == 4096.
> 
> Tested-by: Matt Rushton <mrushton@amazon.com>
> 
> *Corrected subject line from last email and resent. I tested the set and
> everything looks solid. I also reviewed patch 2 and 3.
> 

Tested-by: Christoph Egger <chegger@amazon.de>
Reviewed-by: Christoph Egger <chegger@amazon.de>

Christoph


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 13:56:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 13:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDaIS-000077-R4; Wed, 12 Feb 2014 13:56:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDaIQ-000072-GS
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 13:56:14 +0000
Received: from [85.158.139.211:20700] by server-5.bemta-5.messagelabs.com id
	70/62-32749-D7D7BF25; Wed, 12 Feb 2014 13:56:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392213372!3432806!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14308 invoked from network); 12 Feb 2014 13:56:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 13:56:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Feb 2014 13:57:14 +0000
Message-Id: <52FB8B8A020000780011BB7D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 12 Feb 2014 13:56:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
	<52FB7042020000780011BAD0@nat28.tlf.novell.com>
	<52FB6F10.8040307@citrix.com>
In-Reply-To: <52FB6F10.8040307@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
 task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.02.14 at 13:54, David Vrabel <david.vrabel@citrix.com> wrote:
> On 12/02/14 11:59, Jan Beulich wrote:
>>>>> On 11.02.14 at 20:19, David Vrabel <david.vrabel@citrix.com> wrote:
>>> --- a/drivers/xen/events/events_base.c
>>> +++ b/drivers/xen/events/events_base.c
>>> @@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>>>  
>>>  	irq_exit();
>>>  	set_irq_regs(old_regs);
>>> +
>>> +#ifndef CONFIG_PREEMPT
>>> +	if ( __this_cpu_read(xed_nesting_count) == 0
>>> +	     && is_preemptible_hypercall(regs) )
>>> +		_cond_resched();
>>> +#endif
>> 
>> I don't think this can be done here - a 64-bit x86 kernel would
>> generally be on the IRQ stack, and I don't think scheduling
>> should be done in this state.
> 
> _cond_resched() doesn't look that different from preempt_schedule_irq()
> which is explicitly callable from irq context.

But IRQ context and running on the IRQ stack isn't the same. All
current callers of that function are in assembly code, where one
would hope people know what they're doing (and in particular
_when_ they do so).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:16:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDac3-0000oL-Tw; Wed, 12 Feb 2014 14:16:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDac2-0000oG-7r
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:16:30 +0000
Received: from [85.158.137.68:44732] by server-1.bemta-3.messagelabs.com id
	17/70-17293-C328BF25; Wed, 12 Feb 2014 14:16:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392214586!127602!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15271 invoked from network); 12 Feb 2014 14:16:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:16:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100126686"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 14:16:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:16:25 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDabx-0007lr-Hp;
	Wed, 12 Feb 2014 14:16:25 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 14:16:25 +0000
Message-ID: <1392214585-26602-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] Allow per-host TFTP setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I run osstest against machines in both the XenServer and XenClient
administrative domains, which therefore have different TFTP servers, each
accessible locally via a different NFS-mounted path.

Make it possible to specify the various components of the TFTP path per host via ~/.xen-osstest/config.
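
For illustration, the resulting per-host overrides might be expressed in ~/.xen-osstest/config roughly like this (the host name "bedbug", the paths, and the HostProp_<host>_<Prop> spelling of per-host properties are assumptions for the sketch, not taken from this patch):

```
TftpPath /tftpboot/xenserver/
HostProp_bedbug_TftpPath /mnt/xenclient-tftp/
HostProp_bedbug_TftpDiVersion 2014-01-14
```

Hosts with no such property would keep falling back to the global TftpPath, TftpDiBase and TftpDiVersion values, matching the third argument passed to get_host_property().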

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 Osstest/Debian.pm      |  6 +++++-
 Osstest/TestSupport.pm |  5 ++++-
 ts-host-install        | 16 ++++++++++------
 3 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 6759263..a70d35b 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -554,7 +554,11 @@ END
     foreach my $kp (keys %{ $ho->{Flags} }) {
 	$kp =~ s/need-kernel-deb-// or next;
 
-	my $d_i= $c{TftpPath}.'/'.$c{TftpDiBase}.'/'.$r{arch}.'/'.$c{TftpDiVersion}.'-'.$ho->{Suite};
+	my $tftppath = get_host_property($ho, "TftpPath", $c{TftpPath});
+	my $tftpdibase = get_host_property($ho, "TftpDiBase", $c{TftpDiBase});
+	my $tftpdiversion = get_host_property($ho, "TftpDiVersion", $c{TftpDiVersion});
+
+	my $d_i= $tftppath.'/'.$tftpdibase.'/'.$r{arch}.'/'.$tftpdiversion.'-'.$ho->{Suite};
 
 	my $kurl = create_webfile($ho, "kernel", sub {
 	    copy("$d_i/$kp.deb", $_[0]);
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index a513540..5c01ffa 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1839,8 +1839,11 @@ sub host_pxefile ($) {
 
 sub setup_pxeboot ($$) {
     my ($ho, $bootfile) = @_;
+    my $p= get_host_property($ho, "TftpPath", $c{TftpPath});
+    my $d= get_host_property($ho, "TftpPxeDir", $c{TftpPxeDir});
     my $f= host_pxefile($ho);
-    file_link_contents("$c{TftpPath}$c{TftpPxeDir}$f", $bootfile);
+
+    file_link_contents("$p$d$f", $bootfile);
 }
 
 sub setup_pxeboot_local ($) {
diff --git a/ts-host-install b/ts-host-install
index 5c0018e..2e711fe 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -122,19 +122,23 @@ END
 sub setup_pxeboot_firstboot($) {
     my ($ps_url) = @_;
     
-    my $d_i= $c{TftpDiBase}.'/'.$r{arch}.'/'.$c{TftpDiVersion}.'-'.$ho->{Suite};
+    my $tftppath = get_host_property($ho, "TftpPath", $c{TftpPath});
+    my $tftpdibase = get_host_property($ho, "TftpDiBase", $c{TftpDiBase});
+    my $tftpdiversion = get_host_property($ho, "TftpDiVersion", $c{TftpDiVersion});
+
+    my $d_i= $tftpdibase.'/'.$r{arch}.'/'.$tftpdiversion.'-'.$ho->{Suite};
     
     my @installcmdline= qw(vga=normal);
     push @installcmdline, di_installcmdline_core($ho, $ps_url, %xopts);
 
     my $src_initrd= "$d_i/initrd.gz";
-    my @initrds= "$c{TftpPath}/$src_initrd";
+    my @initrds= "$tftppath/$src_initrd";
 
     my $kernel;
 
     foreach my $fp (keys %{ $ho->{Flags} }) {
         $fp =~ s/^need-firmware-deb-// or next;
-        my $cpio= "$c{TftpPath}/$d_i/$fp.cpio.gz";
+        my $cpio= "$tftppath/$d_i/$fp.cpio.gz";
         if (stat $cpio) {
             logm("using firmware from: $cpio");
             push @initrds, $cpio;
@@ -147,7 +151,7 @@ sub setup_pxeboot_firstboot($) {
 
     foreach my $kp (keys %{ $ho->{Flags} }) {
         $kp =~ s/need-kernel-deb-// or next;
-        my $kern= "$c{TftpPath}/$d_i/linux.$kp";
+        my $kern= "$tftppath/$d_i/linux.$kp";
         if (stat $kern) {
             logm("using kernel from: $kern");
             $kernel = "/$d_i/linux.$kp";
@@ -157,7 +161,7 @@ sub setup_pxeboot_firstboot($) {
             die "$kp $kern $!";
         }
 
-        my $cpio= "$c{TftpPath}/$d_i/$kp.cpio.gz";
+        my $cpio= "$tftppath/$d_i/$kp.cpio.gz";
         if (stat $cpio) {
             logm("using kernel modules from: $cpio");
             push @initrds, $cpio;
@@ -195,7 +199,7 @@ END
 
     logm("using initrds: @initrds");
     my $initrd= "$c{TftpTmpDir}$ho->{Name}--initrd.gz";
-    system_checked("cat -- @initrds >$c{TftpPath}$initrd");
+    system_checked("cat -- @initrds >$tftppath$initrd");
     
     push @installcmdline, ("initrd=/$initrd",
                            "domain=$c{TestHostDomain}",
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Feb 12 14:16:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDacB-0000of-A4; Wed, 12 Feb 2014 14:16:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDac9-0000oX-Rd
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:16:37 +0000
Received: from [85.158.143.35:16891] by server-2.bemta-4.messagelabs.com id
	B2/77-10891-5428BF25; Wed, 12 Feb 2014 14:16:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392214595!5121439!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15781 invoked from network); 12 Feb 2014 14:16:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:16:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100126712"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 14:16:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:16:34 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDac6-0007ly-9P;
	Wed, 12 Feb 2014 14:16:34 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 14:16:34 +0000
Message-ID: <1392214594-26668-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] TestSupport: Don't use git proxy for
	non-git:// urls.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

After this I was stymied by ssh host keys and other roadblocks, and in the end
just pushed the branch to my xenbits tree, but I think this change is still
correct as far as it goes.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 Osstest/TestSupport.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 236083e..45e5ecb 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -988,7 +988,9 @@ sub file_simple_write_contents ($$) {
 sub git_massage_url ($) {
     my ($url) = @_;
 
-    if ($c{GitCacheProxy}) { $url = $c{GitCacheProxy}.$url; }
+    if ($url =~ m,^git://, && $c{GitCacheProxy}) {
+	$url = $c{GitCacheProxy}.$url;
+    }
     return $url;
 }
 
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Wed Feb 12 14:28:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDan7-0001Ia-It; Wed, 12 Feb 2014 14:27:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDan7-0001IV-0a
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:27:57 +0000
Received: from [85.158.137.68:46758] by server-11.bemta-3.messagelabs.com id
	3B/26-04255-CE48BF25; Wed, 12 Feb 2014 14:27:56 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392215271!1371043!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26898 invoked from network); 12 Feb 2014 14:27:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:27:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100129999"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 14:27:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:27:37 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDamn-0007pe-1Y;
	Wed, 12 Feb 2014 14:27:37 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 12 Feb 2014 14:27:37 +0000
Message-ID: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: george.dunlap@citrix.com, ian.jackson@eu.citrix.com,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
	platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM does not (currently) support migration, so stop offering tasty-looking
treats like "xl migrate".

Apart from the UI improvement, my intention is to use this in osstest to detect
whether to attempt the save/restore/migrate tests.

Other than the addition of the #define/#ifdefs there is a tiny bit of code
motion ("dump-core" in the command list and core_dump_domain in the
implementations), which serves to put the ifdeffable bits next to each other.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: george.dunlap@citrix.com
---
Release:
   My main motivation here is to be able to get a complete osstest run on armhf
   prior to Xen 4.4; the lack of migration support is currently blocking that
   (fine), but it is also blocking subsequent useful tests. This change will
   let me make osstest skip the unsupported functionality in a way that will
   automatically start testing it as soon as it is implemented.
---
 tools/libxl/libxl.h       | 14 ++++++++++++++
 tools/libxl/xl.h          |  4 ++++
 tools/libxl/xl_cmdimpl.c  | 20 ++++++++++++--------
 tools/libxl/xl_cmdtable.c | 15 +++++++++------
 4 files changed, 39 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 0b992d1..06bbca6 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -431,6 +431,20 @@
  */
 #define LIBXL_HAVE_SIGCHLD_SHARING 1
 
+/*
+ * LIBXL_HAVE_NO_SUSPEND_RESUME
+ *
+ * If this is defined then the platform has no support for saving,
+ * restoring or migrating a domain. In this case the related functions
+ * should be expected to return failure. That is:
+ *  - libxl_domain_suspend
+ *  - libxl_domain_resume
+ *  - libxl_domain_remus_start
+ */
+#if defined(__arm__) || defined(__aarch64__)
+#define LIBXL_HAVE_NO_SUSPEND_RESUME 1
+#endif
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..f188708 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -43,10 +43,12 @@ int main_pciattach(int argc, char **argv);
 int main_pciassignable_add(int argc, char **argv);
 int main_pciassignable_remove(int argc, char **argv);
 int main_pciassignable_list(int argc, char **argv);
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_restore(int argc, char **argv);
 int main_migrate_receive(int argc, char **argv);
 int main_save(int argc, char **argv);
 int main_migrate(int argc, char **argv);
+#endif
 int main_dump_core(int argc, char **argv);
 int main_pause(int argc, char **argv);
 int main_unpause(int argc, char **argv);
@@ -104,7 +106,9 @@ int main_cpupoolnumasplit(int argc, char **argv);
 int main_getenforce(int argc, char **argv);
 int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_remus(int argc, char **argv);
+#endif
 int main_devd(int argc, char **argv);
 
 void help(const char *command);
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..4fc46eb 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3384,6 +3384,15 @@ static void list_vm(void)
     libxl_vminfo_list_free(info, nb_vm);
 }
 
+static void core_dump_domain(uint32_t domid, const char *filename)
+{
+    int rc;
+
+    rc=libxl_domain_core_dump(ctx, domid, filename, NULL);
+    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
+}
+
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 static void save_domain_core_begin(uint32_t domid,
                                    const char *override_config_file,
                                    uint8_t **config_data_r,
@@ -3775,14 +3784,6 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
     exit(-ERROR_BADFAIL);
 }
 
-static void core_dump_domain(uint32_t domid, const char *filename)
-{
-    int rc;
-
-    rc=libxl_domain_core_dump(ctx, domid, filename, NULL);
-    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
-}
-
 static void migrate_receive(int debug, int daemonize, int monitor,
                             int send_fd, int recv_fd, int remus)
 {
@@ -4102,6 +4103,7 @@ int main_migrate(int argc, char **argv)
     migrate_domain(domid, rune, debug, config_filename);
     return 0;
 }
+#endif
 
 int main_dump_core(int argc, char **argv)
 {
@@ -7248,6 +7250,7 @@ done:
     return ret;
 }
 
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_remus(int argc, char **argv)
 {
     uint32_t domid;
@@ -7341,6 +7344,7 @@ int main_remus(int argc, char **argv)
     close(send_fd);
     return -ERROR_FAIL;
 }
+#endif
 
 int main_devd(int argc, char **argv)
 {
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..e8ab93a 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -137,6 +137,7 @@ struct cmd_spec cmd_table[] = {
       "                         -autopass\n"
       "--vncviewer-autopass     (consistency alias for --autopass)"
     },
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
     { "save",
       &main_save, 0, 1,
       "Save a domain state to restore later",
@@ -158,11 +159,6 @@ struct cmd_spec cmd_table[] = {
       "                of the domain.\n"
       "--debug         Print huge (!) amount of debug during the migration process."
     },
-    { "dump-core",
-      &main_dump_core, 0, 1,
-      "Core dump a domain",
-      "<Domain> <filename>"
-    },
     { "restore",
       &main_restore, 0, 1,
       "Restore a domain from a saved state",
@@ -179,6 +175,12 @@ struct cmd_spec cmd_table[] = {
       "Restore a domain from a saved state",
       "- for internal use only",
     },
+#endif
+    { "dump-core",
+      &main_dump_core, 0, 1,
+      "Core dump a domain",
+      "<Domain> <filename>"
+    },
     { "cd-insert",
       &main_cd_insert, 1, 1,
       "Insert a cdrom into a guest's cd drive",
@@ -474,6 +476,7 @@ struct cmd_spec cmd_table[] = {
       "Loads a new policy int the Flask Xen security module",
       "<policy file>",
     },
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
     { "remus",
       &main_remus, 0, 1,
       "Enable Remus HA for domain",
@@ -486,8 +489,8 @@ struct cmd_spec cmd_table[] = {
       "                        ssh <host> xl migrate-receive -r [-e]\n"
       "-e                      Do not wait in the background (on <host>) for the death\n"
       "                        of the domain."
-
     },
+#endif
     { "devd",
       &main_devd, 0, 1,
       "Daemon that listens for devices and launches backends",
-- 
1.8.5.2



List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM does not (currently) support migration, so stop offering tasty looking
treats like "xl migrate".

Apart from the UI improvement my intention is to use this in osstest to detect
whether to attempt the save/restore/migrate tests.

Other than the additions of the #define/#ifdef there is a tiny bit of code
motion ("dump-core" in the command list and core_dump_domain in the
implementations) which serves to put ifdeffable bits next to each other.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: goerge.dunlap@citrix.com
---
Release:
   My main motivation here is to be able to get a complete osstest run on armhf
   prior to Xen 4.4 and the lack of migration support is currently blocking
   that (fine) but is also blocking subsequent useful tests. This change will
   allow me to make osstest skip the unsupported functionality, in a way that
   will automatically start trying to test it as soon as it is implemented.
---
 tools/libxl/libxl.h       | 14 ++++++++++++++
 tools/libxl/xl.h          |  4 ++++
 tools/libxl/xl_cmdimpl.c  | 20 ++++++++++++--------
 tools/libxl/xl_cmdtable.c | 15 +++++++++------
 4 files changed, 39 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 0b992d1..06bbca6 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -431,6 +431,20 @@
  */
 #define LIBXL_HAVE_SIGCHLD_SHARING 1
 
+/*
+ * LIBXL_HAVE_NO_SUSPEND_RESUME
+ *
+ * If this is defined then the platform has no support for saving,
+ * restoring or migrating a domain. In this case the related functions
+ * should be expected to return failure. That is:
+ *  - libxl_domain_suspend
+ *  - libxl_domain_resume
+ *  - libxl_domain_remus_start
+ */
+#if defined(__arm__) || defined(__aarch64__)
+#define LIBXL_HAVE_NO_SUSPEND_RESUME 1
+#endif
+
 /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
  * called from within libxl itself. Callers outside libxl, who
  * do not #include libxl_internal.h, are fine. */
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..f188708 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -43,10 +43,12 @@ int main_pciattach(int argc, char **argv);
 int main_pciassignable_add(int argc, char **argv);
 int main_pciassignable_remove(int argc, char **argv);
 int main_pciassignable_list(int argc, char **argv);
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_restore(int argc, char **argv);
 int main_migrate_receive(int argc, char **argv);
 int main_save(int argc, char **argv);
 int main_migrate(int argc, char **argv);
+#endif
 int main_dump_core(int argc, char **argv);
 int main_pause(int argc, char **argv);
 int main_unpause(int argc, char **argv);
@@ -104,7 +106,9 @@ int main_cpupoolnumasplit(int argc, char **argv);
 int main_getenforce(int argc, char **argv);
 int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_remus(int argc, char **argv);
+#endif
 int main_devd(int argc, char **argv);
 
 void help(const char *command);
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..4fc46eb 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -3384,6 +3384,15 @@ static void list_vm(void)
     libxl_vminfo_list_free(info, nb_vm);
 }
 
+static void core_dump_domain(uint32_t domid, const char *filename)
+{
+    int rc;
+
+    rc=libxl_domain_core_dump(ctx, domid, filename, NULL);
+    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
+}
+
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 static void save_domain_core_begin(uint32_t domid,
                                    const char *override_config_file,
                                    uint8_t **config_data_r,
@@ -3775,14 +3784,6 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
     exit(-ERROR_BADFAIL);
 }
 
-static void core_dump_domain(uint32_t domid, const char *filename)
-{
-    int rc;
-
-    rc=libxl_domain_core_dump(ctx, domid, filename, NULL);
-    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
-}
-
 static void migrate_receive(int debug, int daemonize, int monitor,
                             int send_fd, int recv_fd, int remus)
 {
@@ -4102,6 +4103,7 @@ int main_migrate(int argc, char **argv)
     migrate_domain(domid, rune, debug, config_filename);
     return 0;
 }
+#endif
 
 int main_dump_core(int argc, char **argv)
 {
@@ -7248,6 +7250,7 @@ done:
     return ret;
 }
 
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_remus(int argc, char **argv)
 {
     uint32_t domid;
@@ -7341,6 +7344,7 @@ int main_remus(int argc, char **argv)
     close(send_fd);
     return -ERROR_FAIL;
 }
+#endif
 
 int main_devd(int argc, char **argv)
 {
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..e8ab93a 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -137,6 +137,7 @@ struct cmd_spec cmd_table[] = {
       "                         -autopass\n"
       "--vncviewer-autopass     (consistency alias for --autopass)"
     },
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
     { "save",
       &main_save, 0, 1,
       "Save a domain state to restore later",
@@ -158,11 +159,6 @@ struct cmd_spec cmd_table[] = {
       "                of the domain.\n"
       "--debug         Print huge (!) amount of debug during the migration process."
     },
-    { "dump-core",
-      &main_dump_core, 0, 1,
-      "Core dump a domain",
-      "<Domain> <filename>"
-    },
     { "restore",
       &main_restore, 0, 1,
       "Restore a domain from a saved state",
@@ -179,6 +175,12 @@ struct cmd_spec cmd_table[] = {
       "Restore a domain from a saved state",
       "- for internal use only",
     },
+#endif
+    { "dump-core",
+      &main_dump_core, 0, 1,
+      "Core dump a domain",
+      "<Domain> <filename>"
+    },
     { "cd-insert",
       &main_cd_insert, 1, 1,
       "Insert a cdrom into a guest's cd drive",
@@ -474,6 +476,7 @@ struct cmd_spec cmd_table[] = {
       "Loads a new policy int the Flask Xen security module",
       "<policy file>",
     },
+#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
     { "remus",
       &main_remus, 0, 1,
       "Enable Remus HA for domain",
@@ -486,8 +489,8 @@ struct cmd_spec cmd_table[] = {
       "                        ssh <host> xl migrate-receive -r [-e]\n"
       "-e                      Do not wait in the background (on <host>) for the death\n"
       "                        of the domain."
-
     },
+#endif
     { "devd",
       &main_devd, 0, 1,
       "Daemon that listens for devices and launches backends",
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:32:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDarK-0001cD-9n; Wed, 12 Feb 2014 14:32:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDarI-0001c3-Gt
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:32:16 +0000
Received: from [193.109.254.147:62159] by server-11.bemta-14.messagelabs.com
	id 2C/43-24604-FE58BF25; Wed, 12 Feb 2014 14:32:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392215533!89421!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16009 invoked from network); 12 Feb 2014 14:32:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:32:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100131578"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 14:32:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 09:32:12 -0500
Message-ID: <1392215531.13563.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 12 Feb 2014 14:32:11 +0000
In-Reply-To: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Typoed George's address...

On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
> ARM does not (currently) support migration, so stop offering tasty looking
> treats like "xl migrate".
> 
> Apart from the UI improvement my intention is to use this in osstest to detect
> whether to attempt the save/restore/migrate tests.
> 
> Other than the additions of the #define/#ifdef there is a tiny bit of code
> motion ("dump-core" in the command list and core_dump_domain in the
> implementations) which serves to put ifdeffable bits next to each other.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: goerge.dunlap@citrix.com
> ---
> Release:
>    My main motivation here is to be able to get a complete osstest run on armhf
>    prior to Xen 4.4 and the lack of migration support is currently blocking
>    that (fine) but is also blocking subsequent useful tests. This change will
>    allow me to make osstest skip the unsupported functionality, in a way that
>    will automatically start trying to test it as soon as it is implemented.
> ---
>  tools/libxl/libxl.h       | 14 ++++++++++++++
>  tools/libxl/xl.h          |  4 ++++
>  tools/libxl/xl_cmdimpl.c  | 20 ++++++++++++--------
>  tools/libxl/xl_cmdtable.c | 15 +++++++++------
>  4 files changed, 39 insertions(+), 14 deletions(-)
> 
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 0b992d1..06bbca6 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -431,6 +431,20 @@
>   */
>  #define LIBXL_HAVE_SIGCHLD_SHARING 1
>  
> +/*
> + * LIBXL_HAVE_NO_SUSPEND_RESUME
> + *
> + * If this is defined then the platform has no support for saving,
> + * restoring or migrating a domain. In this case the related functions
> + * should be expected to return failure. That is:
> + *  - libxl_domain_suspend
> + *  - libxl_domain_resume
> + *  - libxl_domain_remus_start
> + */
> +#if defined(__arm__) || defined(__aarch64__)
> +#define LIBXL_HAVE_NO_SUSPEND_RESUME 1
> +#endif
> +
>  /* Functions annotated with LIBXL_EXTERNAL_CALLERS_ONLY may not be
>   * called from within libxl itself. Callers outside libxl, who
>   * do not #include libxl_internal.h, are fine. */
> diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
> index c876a33..f188708 100644
> --- a/tools/libxl/xl.h
> +++ b/tools/libxl/xl.h
> @@ -43,10 +43,12 @@ int main_pciattach(int argc, char **argv);
>  int main_pciassignable_add(int argc, char **argv);
>  int main_pciassignable_remove(int argc, char **argv);
>  int main_pciassignable_list(int argc, char **argv);
> +#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
>  int main_restore(int argc, char **argv);
>  int main_migrate_receive(int argc, char **argv);
>  int main_save(int argc, char **argv);
>  int main_migrate(int argc, char **argv);
> +#endif
>  int main_dump_core(int argc, char **argv);
>  int main_pause(int argc, char **argv);
>  int main_unpause(int argc, char **argv);
> @@ -104,7 +106,9 @@ int main_cpupoolnumasplit(int argc, char **argv);
>  int main_getenforce(int argc, char **argv);
>  int main_setenforce(int argc, char **argv);
>  int main_loadpolicy(int argc, char **argv);
> +#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
>  int main_remus(int argc, char **argv);
> +#endif
>  int main_devd(int argc, char **argv);
>  
>  void help(const char *command);
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index aff6f90..4fc46eb 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -3384,6 +3384,15 @@ static void list_vm(void)
>      libxl_vminfo_list_free(info, nb_vm);
>  }
>  
> +static void core_dump_domain(uint32_t domid, const char *filename)
> +{
> +    int rc;
> +
> +    rc=libxl_domain_core_dump(ctx, domid, filename, NULL);
> +    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
> +}
> +
> +#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
>  static void save_domain_core_begin(uint32_t domid,
>                                     const char *override_config_file,
>                                     uint8_t **config_data_r,
> @@ -3775,14 +3784,6 @@ static void migrate_domain(uint32_t domid, const char *rune, int debug,
>      exit(-ERROR_BADFAIL);
>  }
>  
> -static void core_dump_domain(uint32_t domid, const char *filename)
> -{
> -    int rc;
> -
> -    rc=libxl_domain_core_dump(ctx, domid, filename, NULL);
> -    if (rc) { fprintf(stderr,"core dump failed (rc=%d)\n",rc);exit(-1); }
> -}
> -
>  static void migrate_receive(int debug, int daemonize, int monitor,
>                              int send_fd, int recv_fd, int remus)
>  {
> @@ -4102,6 +4103,7 @@ int main_migrate(int argc, char **argv)
>      migrate_domain(domid, rune, debug, config_filename);
>      return 0;
>  }
> +#endif
>  
>  int main_dump_core(int argc, char **argv)
>  {
> @@ -7248,6 +7250,7 @@ done:
>      return ret;
>  }
>  
> +#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
>  int main_remus(int argc, char **argv)
>  {
>      uint32_t domid;
> @@ -7341,6 +7344,7 @@ int main_remus(int argc, char **argv)
>      close(send_fd);
>      return -ERROR_FAIL;
>  }
> +#endif
>  
>  int main_devd(int argc, char **argv)
>  {
> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..e8ab93a 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -137,6 +137,7 @@ struct cmd_spec cmd_table[] = {
>        "                         -autopass\n"
>        "--vncviewer-autopass     (consistency alias for --autopass)"
>      },
> +#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
>      { "save",
>        &main_save, 0, 1,
>        "Save a domain state to restore later",
> @@ -158,11 +159,6 @@ struct cmd_spec cmd_table[] = {
>        "                of the domain.\n"
>        "--debug         Print huge (!) amount of debug during the migration process."
>      },
> -    { "dump-core",
> -      &main_dump_core, 0, 1,
> -      "Core dump a domain",
> -      "<Domain> <filename>"
> -    },
>      { "restore",
>        &main_restore, 0, 1,
>        "Restore a domain from a saved state",
> @@ -179,6 +175,12 @@ struct cmd_spec cmd_table[] = {
>        "Restore a domain from a saved state",
>        "- for internal use only",
>      },
> +#endif
> +    { "dump-core",
> +      &main_dump_core, 0, 1,
> +      "Core dump a domain",
> +      "<Domain> <filename>"
> +    },
>      { "cd-insert",
>        &main_cd_insert, 1, 1,
>        "Insert a cdrom into a guest's cd drive",
> @@ -474,6 +476,7 @@ struct cmd_spec cmd_table[] = {
>        "Loads a new policy int the Flask Xen security module",
>        "<policy file>",
>      },
> +#ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
>      { "remus",
>        &main_remus, 0, 1,
>        "Enable Remus HA for domain",
> @@ -486,8 +489,8 @@ struct cmd_spec cmd_table[] = {
>        "                        ssh <host> xl migrate-receive -r [-e]\n"
>        "-e                      Do not wait in the background (on <host>) for the death\n"
>        "                        of the domain."
> -
>      },
> +#endif
>      { "devd",
>        &main_devd, 0, 1,
>        "Daemon that listens for devices and launches backends",



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:33:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDas4-0001fV-Nz; Wed, 12 Feb 2014 14:33:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDas3-0001fK-2d
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:33:03 +0000
Received: from [85.158.143.35:11466] by server-2.bemta-4.messagelabs.com id
	5A/59-10891-E168BF25; Wed, 12 Feb 2014 14:33:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392215580!5146718!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31898 invoked from network); 12 Feb 2014 14:33:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:33:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101922183"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 14:32:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:32:59 -0500
Received: from spare.cam.xci-test.com ([10.80.2.80]
	helo=kazak.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDary-0007qv-TC;
	Wed, 12 Feb 2014 14:32:58 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 14:32:58 +0000
Message-ID: <1392215578-27239-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH OSSTEST] Do not attempt migration tests if the
	platform doesn't support it
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Doing so blocks the rest of the tests in the job, which may be able to
independently complete. So arrange for a ts-migrate-support-check test to run
and gate the remaining migration tests on that.

This relies on the xen patch "xl: suppress suspend/resume functions on
platforms which do not support it" to actually suppress migration support on
arm.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
This needs to go in after the Xen patch. Otherwise this new step will appear to
pass and then start to fail when the Xen patch is applied.
---
 sg-run-job               |  8 +++++++-
 ts-migrate-support-check | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+), 1 deletion(-)
 create mode 100755 ts-migrate-support-check

diff --git a/sg-run-job b/sg-run-job
index db62365..d894711 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -281,12 +281,18 @@ proc run-job/test-pair {} {
 #    run-ts . remus-failover ts-remus-check         src_host dst_host + debian
 }
 
-proc test-guest {g} {
+proc test-guest-migr {g} {
+    if {[catch { run-ts . = ts-migrate-support-check + host $g }]} return
+
     foreach iteration {{} .2} {
         run-ts . =$iteration ts-guest-saverestore + host $g
         run-ts . =$iteration ts-guest-localmigrate + host $g
     }
     run-ts . = ts-guest-localmigrate x10 + host $g
+}
+
+proc test-guest {g} {
+    test-guest-migr $g
     test-guest-nomigr $g
 }
 
diff --git a/ts-migrate-support-check b/ts-migrate-support-check
new file mode 100755
index 0000000..ffae1b3
--- /dev/null
+++ b/ts-migrate-support-check
@@ -0,0 +1,35 @@
+#!/usr/bin/perl -w
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+use strict qw(vars);
+use DBI;
+use Osstest;
+use Osstest::TestSupport;
+
+tsreadconfig();
+
+our $ho = selecthost($ARGV[0]);
+
+# all xend/xm platforms support migration
+exit(0) if toolstack()->{Command} eq "xm";
+
+my $help = target_cmd_output_root($ho, toolstack()->{Command}." help");
+
+my $rc = ($help =~ m/^\s*migrate/m) ? 0 : 1;
+
+logm("rc=$rc");
+exit($rc);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:41:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDazo-0002As-Ee; Wed, 12 Feb 2014 14:41:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WDazk-0002AJ-HC; Wed, 12 Feb 2014 14:41:00 +0000
Received: from [85.158.143.35:35140] by server-3.bemta-4.messagelabs.com id
	77/7D-11539-BF78BF25; Wed, 12 Feb 2014 14:40:59 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392216057!5134789!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11883 invoked from network); 12 Feb 2014 14:40:58 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-3.tower-21.messagelabs.com with AES256-SHA encrypted SMTP;
	12 Feb 2014 14:40:58 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WDazd-0005Vh-54; Wed, 12 Feb 2014 14:40:53 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WDazc-0002Lp-Nh; Wed, 12 Feb 2014 14:40:53 +0000
Date: Wed, 12 Feb 2014 14:40:52 +0000
Message-Id: <E1WDazc-0002Lp-Nh@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 88 - use-after-free in
 xc_cpupool_getinfo() under memory pressure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

                    Xen Security Advisory XSA-88
                              version 2

      use-after-free in xc_cpupool_getinfo() under memory pressure

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

If xc_cpumap_alloc() fails then xc_cpupool_getinfo() will free the result
structure and incorrectly return the then-freed pointer.

IMPACT
======

An attacker may be able to cause a multi-threaded toolstack using this
function to race against itself leading to heap corruption and a
potential DoS.

Depending on the malloc implementation, privilege escalation cannot be
ruled out.

VULNERABLE SYSTEMS
==================

The flaw is present in Xen 4.1 onwards.  Only multithreaded toolstacks
are vulnerable.  Only systems where management functions (such as
domain creation) are exposed to untrusted users are vulnerable.

xl is not multithreaded, so is not vulnerable.  However, multithreaded
toolstacks using libxl as a library are vulnerable.  xend is
vulnerable.

MITIGATION
==========

Not allowing untrusted users access to toolstack functionality will
avoid this issue.

CREDITS
=======

This issue was discovered by Coverity Scan and diagnosed by Andrew
Cooper.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa88.patch        xen-unstable, Xen 4.3.x, Xen 4.2.x, Xen 4.1.x

$ sha256sum xsa88*.patch
7a73ca9db19a9ffe6e8cd259fa71dc1299738f26fa024303f4ab38931db75f14  xsa88.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS+4fOAAoJEIP+FMlX6CvZfUUH/2wyYKHOkEaEmcjUbuyUM3CT
8V9VgW4dhq/sk9p5SqR0xGB6N+f2XytCAFXI3kNmYjrs+jGK5cQgLjxMOwMKrpwm
PsHCAZnGNzYMy48JtEUieEfwZqH/jNci7qJWNVdPoKnULOEd9X0hTri7vg1CoDI2
DUBeLvmC5mCFBej4pcDGX++XsdL90EnGa0RfrrVfIVf16EfBjgr8KzLKXd1uBueC
yWKg5z24+HoRqFp3n3+Q9T6GN+npOj/78mrlXJ7onKepONAmLqg0J6g/1hHuc4hY
pwUnbSf0452FKTFs7KUodXoJNNX1i3IuOch9pBcKlrbT6K/g/qwMZ/Pl2Ir8a20=
=vA6e
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa88.patch"
Content-Disposition: attachment; filename="xsa88.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCAyMiBKYW4gMjAxNCAxNzo0NzoyMSArMDAwMApTdWJq
ZWN0OiBsaWJ4YzogRml4IG91dC1vZi1tZW1vcnkgZXJyb3IgaGFuZGxpbmcg
aW4geGNfY3B1cG9vbF9nZXRpbmZvKCkKCkF2b2lkIGZyZWVpbmcgaW5mbyB0
aGVuIHJldHVybmluZyBpdCB0byB0aGUgY2FsbGVyLgoKVGhpcyBpcyBYU0Et
ODguCgpDb3Zlcml0eS1JRDogMTA1NjE5MgpTaWduZWQtb2ZmLWJ5OiBBbmRy
ZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdl
ZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgotLS0KIHRv
b2xzL2xpYnhjL3hjX2NwdXBvb2wuYyB8ICAgIDEgKwogMSBmaWxlIGNoYW5n
ZWQsIDEgaW5zZXJ0aW9uKCspCgpkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGMv
eGNfY3B1cG9vbC5jIGIvdG9vbHMvbGlieGMveGNfY3B1cG9vbC5jCmluZGV4
IGM4YzJhMzMuLjYzOTNjZmIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhjL3hj
X2NwdXBvb2wuYworKysgYi90b29scy9saWJ4Yy94Y19jcHVwb29sLmMKQEAg
LTEwNCw2ICsxMDQsNyBAQCB4Y19jcHVwb29saW5mb190ICp4Y19jcHVwb29s
X2dldGluZm8oeGNfaW50ZXJmYWNlICp4Y2gsCiAgICAgaW5mby0+Y3B1bWFw
ID0geGNfY3B1bWFwX2FsbG9jKHhjaCk7CiAgICAgaWYgKCFpbmZvLT5jcHVt
YXApIHsKICAgICAgICAgZnJlZShpbmZvKTsKKyAgICAgICAgaW5mbyA9IE5V
TEw7CiAgICAgICAgIGdvdG8gb3V0OwogICAgIH0KICAgICBpbmZvLT5jcHVw
b29sX2lkID0gc3lzY3RsLnUuY3B1cG9vbF9vcC5jcHVwb29sX2lkOwo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Wed Feb 12 14:50:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDb8e-0002vl-6M; Wed, 12 Feb 2014 14:50:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDb8c-0002vc-6V
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:50:10 +0000
Received: from [85.158.143.35:57301] by server-3.bemta-4.messagelabs.com id
	62/20-11539-12A8BF25; Wed, 12 Feb 2014 14:50:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392216607!5148345!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26656 invoked from network); 12 Feb 2014 14:50:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:50:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101927821"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 14:50:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:50:07 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDb8Y-0007wa-NF;
	Wed, 12 Feb 2014 14:50:06 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDb8Y-0003OX-Gd;
	Wed, 12 Feb 2014 14:50:06 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.35358.349750.484725@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 14:50:06 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392214585-26602-1-git-send-email-ian.campbell@citrix.com>
References: <1392214585-26602-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Allow per-host TFTP setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] Allow per-host TFTP setup"):
> I run osstest against machines which are in both the XenServer and
> XenClient administrative domains, and hence which have different
> TFTP servers, accessible locally via different NFS mounted paths.
> 
> Make it possible to specify various bits of TFTP path via
> ~/.xen-osstest/config

As I said in person: this would be much better if instead the host
property referred to a named TFTP scope/server.  Otherwise you have to
set a whole bunch of host properties identically.

Something like:
  * Replace references to $c{Tftp*} with a new indirection involving
    $ho.  Perhaps just $ho->{Tftp}{*}.  (Involves formulaic patch made
    with perl -i, perhaps.)
  * In selecthost, look up a TftpScope host property and then
    $c{TftpFoo_$scope} for each TftpFoo (except, I guess, when
    $scope is "default" or something).
  * In selectguest, copy a reference to the host's $ho->{Tftp}.

> +	my $tftpdiversion = get_host_property($ho, "TftpDiVersion", $c{TftpDiVersion});

This definitely shouldn't be a host property because it needs push
gate version control.


Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Feb 12 14:51:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbAL-00034b-VH; Wed, 12 Feb 2014 14:51:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDbAK-00034P-Mx
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:51:56 +0000
Received: from [85.158.139.211:11217] by server-3.bemta-5.messagelabs.com id
	7A/B6-13671-B8A8BF25; Wed, 12 Feb 2014 14:51:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392216713!3345876!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15187 invoked from network); 12 Feb 2014 14:51:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:51:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101928362"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 14:51:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:51:53 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbAG-0007x7-Ia;
	Wed, 12 Feb 2014 14:51:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbAG-0003Om-AF;
	Wed, 12 Feb 2014 14:51:52 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.35464.3490.284158@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 14:51:52 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392214594-26668-1-git-send-email-ian.campbell@citrix.com>
References: <1392214594-26668-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] TestSupport: Don't use git proxy
	for non-git:// urls.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] TestSupport: Don't use git proxy for non-git:// urls."):
> After this I was stymied by ssh host keys and other roadblocks and
> just pushed the branch to my xenbits tree but I think this is still
> correct as far as it goes.

This should probably be made to work for http:// and https://, at
least.
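
[Editor's note: a minimal sketch, names hypothetical and not
TestSupport's actual code, of the scheme check under discussion:
apply the git proxy only to URL schemes it can handle, extended per
the suggestion above to cover http:// and https://.]

```python
from urllib.parse import urlparse

# Schemes the proxy is assumed to handle (git:// plus the suggested
# http:// and https:// additions).
PROXYABLE_SCHEMES = {"git", "http", "https"}

def use_git_proxy(url):
    """Return True if the configured git proxy should be used for url."""
    return urlparse(url).scheme in PROXYABLE_SCHEMES

# ssh-style URLs bypass the proxy:
# use_git_proxy("git://xenbits.xen.org/osstest.git") -> True
# use_git_proxy("ssh://git@example.com/repo.git")    -> False
```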

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:53:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:53:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbBq-0003F0-IO; Wed, 12 Feb 2014 14:53:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDbBo-0003Ep-Td
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:53:29 +0000
Received: from [85.158.139.211:5964] by server-4.bemta-5.messagelabs.com id
	DA/08-08092-8EA8BF25; Wed, 12 Feb 2014 14:53:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392216806!3453133!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14696 invoked from network); 12 Feb 2014 14:53:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:53:27 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100138885"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 14:53:25 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 09:53:25 -0500
Message-ID: <1392216804.13563.83.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 14:53:24 +0000
In-Reply-To: <21243.35358.349750.484725@mariner.uk.xensource.com>
References: <1392214585-26602-1-git-send-email-ian.campbell@citrix.com>
	<21243.35358.349750.484725@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Allow per-host TFTP setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 14:50 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] Allow per-host TFTP setup"):
> > I run osstest against machines which are in both the XenServer and
> > XenClient administrative domains, and hence which have different
> > TFTP servers, accessible locally via different NFS mounted paths.
> > 
> > Make it possible to specify various bits of TFTP path via
> > ~/.xen-osstest/config
> 
> As I said in person: this would be much better if instead the host
> property referred to a named TFTP scope/server.  Otherwise you have to
> set a whole bunch of host properties identically.
> 
> Something like:
>   * Replace references to $c{Tftp*} with a new indirection involving
>     $ho.  Perhaps just $ho->{Tftp}{*}.  (Involves formulaic patch made
>     with perl -i, perhaps.)
>   * In selecthost, look up a TftpScope host property and then
>     $c{TftpFoo_$scope} for each TftpFoo (except, I guess, when
>     $scope is "default" or something).
>   * In selectguest, copy a reference to the host's $ho->{Tftp}.
> 
> > +	my $tftpdiversion = get_host_property($ho, "TftpDiVersion", $c{TftpDiVersion});
> 
> This definitely shouldn't be a host property because it needs push
> gate version control.
> 

Ack. I'll put this on my todo list.

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:54:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbCk-0003Lg-1w; Wed, 12 Feb 2014 14:54:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDbCi-0003LW-U9
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:54:25 +0000
Received: from [85.158.143.35:59398] by server-2.bemta-4.messagelabs.com id
	23/B3-10891-02B8BF25; Wed, 12 Feb 2014 14:54:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392216861!5156181!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 639 invoked from network); 12 Feb 2014 14:54:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:54:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101928801"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 14:54:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 09:54:07 -0500
Message-ID: <1392216846.13563.84.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 14:54:06 +0000
In-Reply-To: <21243.35464.3490.284158@mariner.uk.xensource.com>
References: <1392214594-26668-1-git-send-email-ian.campbell@citrix.com>
	<21243.35464.3490.284158@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] TestSupport: Don't use git proxy
 for non-git:// urls.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 14:51 +0000, Ian Jackson wrote:
> Ian Campbell writes ("[PATCH OSSTEST] TestSupport: Don't use git proxy for non-git:// urls."):
> > After this I was stymied by ssh host keys and other roadblocks and
> > just pushed the branch to my xenbits tree but I think this is still
> > correct as far as it goes.
> 
> This should probably be made to work for http:// and https://, at
> least.

The proxy supports those?

Trivial enough patch, I'll put it on my list.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:54:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbCr-0003NY-JQ; Wed, 12 Feb 2014 14:54:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDbCq-0003N5-1m
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:54:32 +0000
Received: from [85.158.139.211:20127] by server-12.bemta-5.messagelabs.com id
	12/4D-15415-72B8BF25; Wed, 12 Feb 2014 14:54:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392216869!3439931!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6004 invoked from network); 12 Feb 2014 14:54:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:54:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100139351"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 14:54:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:54:28 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbCm-0007yF-5J;
	Wed, 12 Feb 2014 14:54:28 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbCl-0003PF-V5;
	Wed, 12 Feb 2014 14:54:27 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.35619.819765.162321@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 14:54:27 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392215531.13563.79.camel@kazak.uk.xensource.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<1392215531.13563.79.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH] xl: suppress suspend/resume functions on platforms which do not support it."):
> On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
> > ARM does not (currently) support migration, so stop offering tasty looking
> > treats like "xl migrate".

> > Other than the additions of the #define/#ifdef there is a tiny bit of code
> > motion ("dump-core" in the command list and core_dump_domain in the
> > implementations) which serves to put ifdeffable bits next to each other.

I'm not a huge fan of #ifdef but this is tolerable, I think.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

I think this should go into 4.4.  It is essential that we start
advertising lack-of-resume in 4.4 as otherwise in 4.5 we'll have to
invent a new HAVE_HAVE_NO_SUSPEND_RESUME which tells you whether the
lack of HAVE_NO_SUSPEND_RESUME means that you can definitely
suspend/resume.
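
[Editor's note: a toy model, hypothetical names, of the versioning
argument above: the negative flag must ship in 4.4, because for a
consumer the *absence* of HAVE_NO_SUSPEND_RESUME only proves support
once every header that could have defined it actually does.]

```python
def suspend_supported(header_defines, headers_know_flag):
    """Decide whether suspend/resume can be relied on.

    header_defines:    set of macros the installed headers define
    headers_know_flag: True if the headers are new enough to define
                       the negative flag where it applies
    """
    if "HAVE_NO_SUSPEND_RESUME" in header_defines:
        return False   # platform explicitly lacks suspend/resume
    if not headers_know_flag:
        return None    # old headers: absence proves nothing
    return True        # flag-era headers and no negative flag: supported
```

If 4.4 shipped without the flag, the `None` case would persist into 4.5
and a second macro would be needed to disambiguate it, which is the
HAVE_HAVE_NO_SUSPEND_RESUME absurdity the message warns about.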

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 14:58:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 14:58:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbGY-0003jT-Lg; Wed, 12 Feb 2014 14:58:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDbGK-0003j6-I1
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 14:58:19 +0000
Received: from [193.109.254.147:32999] by server-15.bemta-14.messagelabs.com
	id B8/38-10839-FFB8BF25; Wed, 12 Feb 2014 14:58:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392217085!3847575!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28226 invoked from network); 12 Feb 2014 14:58:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 14:58:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="101930049"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 14:58:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 09:58:04 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbGF-0007z8-TO;
	Wed, 12 Feb 2014 14:58:03 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbGF-0003PU-Kl;
	Wed, 12 Feb 2014 14:58:03 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.35834.654223.986214@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 14:58:02 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1392215578-27239-1-git-send-email-ian.campbell@citrix.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<1392215578-27239-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Do not attempt migration tests if
	the platform doesn't support it
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST] Do not attempt migration tests if the platform doesn't support it"):
> Doing so blocks the rest of the tests in the job, which may be able to
> independently complete. So arrange for a ts-migrate-support-check test to run
> and gate the remaining migration tests on that.
> 
> This relies on the xen patch "xl: suppress suspend/resume functions on
> platforms which do not support it" to actually suppress migration support on
> arm.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 15:01:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 15:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbJb-00049x-Cl; Wed, 12 Feb 2014 15:01:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDbJZ-00049q-Ob
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 15:01:29 +0000
Received: from [193.109.254.147:37688] by server-8.bemta-14.messagelabs.com id
	6E/70-18529-9CC8BF25; Wed, 12 Feb 2014 15:01:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392217287!3843098!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10062 invoked from network); 12 Feb 2014 15:01:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 15:01:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100141770"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 15:01:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 10:01:26 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbJW-000806-1K;
	Wed, 12 Feb 2014 15:01:26 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDbJV-0003Qg-Q1;
	Wed, 12 Feb 2014 15:01:25 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.36037.485367.843299@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 15:01:25 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392216846.13563.84.camel@kazak.uk.xensource.com>
References: <1392214594-26668-1-git-send-email-ian.campbell@citrix.com>
	<21243.35464.3490.284158@mariner.uk.xensource.com>
	<1392216846.13563.84.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] TestSupport: Don't use git proxy
 for non-git:// urls.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [PATCH OSSTEST] TestSupport: Don't use git proxy for non-git:// urls."):
> On Wed, 2014-02-12 at 14:51 +0000, Ian Jackson wrote:
> > This should probably be made to work for http:// and https://, at
> > least.
> 
> The proxy supports those?

Yes.

> Trivial enough patch, I'll put it on my list.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 15:21:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 15:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbd4-0004k4-MT; Wed, 12 Feb 2014 15:21:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WDbd3-0004jz-5S
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 15:21:37 +0000
Received: from [85.158.139.211:55766] by server-9.bemta-5.messagelabs.com id
	7C/0C-11237-0819BF25; Wed, 12 Feb 2014 15:21:36 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392218495!3468844!1
X-Originating-IP: [81.169.146.221]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29699 invoked from network); 12 Feb 2014 15:21:35 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.221)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 15:21:35 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392218495; l=198;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=BAVWu9ffr6MQM5rcWMqG/QYCZsg=;
	b=gJV5nMhXIF0LA9G+tpFepTMc2I5LAVZiOAL6kGDR6zUBAQ6VuDNm1AYWvaWoYipOXHG
	iwaGyDZw0kISpql4w6+lkijk4TuggWXQD5k/EKCw9SXtL58NG1VQmwC+NuH2zsaodiUYz
	2zZZXyFnvCjEiHOkVY0LAyqBEgrF9emW5xI=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id Y0149dq1CFLZOgr
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Wed, 12 Feb 2014 16:21:35 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id C4A3750269; Wed, 12 Feb 2014 16:21:34 +0100 (CET)
Date: Wed, 12 Feb 2014 16:21:34 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140212152134.GA20090@aepfle.de>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: George.Dunlap@eu.citrix.com, ian.jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, Ian Campbell wrote:

>  #define LIBXL_HAVE_SIGCHLD_SHARING 1
>  
> +/*
> + * LIBXL_HAVE_NO_SUSPEND_RESUME

Think positive?
Make that HAVE_FEATURE and define it on x86.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 15:34:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 15:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbpq-0005Ey-6B; Wed, 12 Feb 2014 15:34:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDbpo-0005Et-HF
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 15:34:48 +0000
Received: from [85.158.143.35:57893] by server-3.bemta-4.messagelabs.com id
	60/A7-11539-7949BF25; Wed, 12 Feb 2014 15:34:47 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392219287!5155730!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3231 invoked from network); 12 Feb 2014 15:34:47 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 15:34:47 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDbpg-0003h7-Ki; Wed, 12 Feb 2014 15:34:40 +0000
Date: Wed, 12 Feb 2014 16:34:40 +0100
From: Tim Deegan <tim@xen.org>
To: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Message-ID: <20140212153440.GC91459@deinos.phlegethon.org>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
	<52F97E6F.2000402@citrix.com>
	<CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 22:12 -0600 on 10 Feb (1392066773), Shriram Rajagopalan wrote:
> My point here being, checksums seem like unnecessary compute overhead when
> doing live migration
> or Remus.  One can simply set this field to 0 when doing live
> migration/Remus.

Remus _compresses_ the payload, right?  CRC32 is basically free
compared to that. 

But I think the point about checksumming the whole image is sound.
That will catch _more_ classes of corruption than a per-block data
checksum, and the class of bugs caught by per-block checksums (basically,
that the header and the data don't match for some reason) is pretty
small in this design.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 15:34:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 15:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbpq-0005Ey-6B; Wed, 12 Feb 2014 15:34:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDbpo-0005Et-HF
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 15:34:48 +0000
Received: from [85.158.143.35:57893] by server-3.bemta-4.messagelabs.com id
	60/A7-11539-7949BF25; Wed, 12 Feb 2014 15:34:47 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392219287!5155730!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3231 invoked from network); 12 Feb 2014 15:34:47 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 15:34:47 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDbpg-0003h7-Ki; Wed, 12 Feb 2014 15:34:40 +0000
Date: Wed, 12 Feb 2014 16:34:40 +0100
From: Tim Deegan <tim@xen.org>
To: Shriram Rajagopalan <rshriram@cs.ubc.ca>
Message-ID: <20140212153440.GC91459@deinos.phlegethon.org>
References: <52F90A71.40802@citrix.com>
	<CAP8mzPMtM0yXtv_th_rwPBMMx-nMEBCUoRnAtwXJUjYHy7AOdA@mail.gmail.com>
	<52F97E6F.2000402@citrix.com>
	<CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAP8mzPNrDNOC=nsOTGbowfm5uku30NvEZsVdia4VwZ6eFhMe-Q@mail.gmail.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 22:12 -0600 on 10 Feb (1392066773), Shriram Rajagopalan wrote:
> My point here being, checksums seem like unnecessary compute overhead when
> doing live migration
> or Remus.  One can simply set this field to 0 when doing live
> migration/Remus.

Remus _compresses_ the payload, right?  CRC32 is basically free
compared to that. 

But I think the point about checksumming the whole image is sound.
That will catch _more_ classes of corruption than a per-block data
checksum, and the class of bugs caught by per-block checksums (basically,
that the header and the data don't match for some reason) is pretty
small in this design.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 15:41:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 15:41:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDbwU-0005Pw-70; Wed, 12 Feb 2014 15:41:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDbwS-0005Pr-97
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 15:41:40 +0000
Received: from [85.158.143.35:7992] by server-2.bemta-4.messagelabs.com id
	49/7C-10891-3369BF25; Wed, 12 Feb 2014 15:41:39 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392219698!5147206!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1304 invoked from network); 12 Feb 2014 15:41:38 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 15:41:38 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDbwO-0003nz-3M; Wed, 12 Feb 2014 15:41:36 +0000
Date: Wed, 12 Feb 2014 16:41:36 +0100
From: Tim Deegan <tim@xen.org>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140212154136.GD91459@deinos.phlegethon.org>
References: <52F90A71.40802@citrix.com>
	<1392111040.26657.50.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392111040.26657.50.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 09:30 +0000 on 11 Feb (1392107440), Ian Campbell wrote:
> > checksum     CRC-32 checksum of the record body (including any trailing
> >              padding), or 0x00000000 if the checksum field is invalid.
> 
> Which CRC-32 :-P

CRC32C please; it's got hardware acceleration in SSE4.2.  There are
also options like Adler which are supposed to be faster in normal
arithmetic (not sure if that's still the case if you include tricks
like using vector multiply to fold the payload).

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Feb 12 15:58:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 15:58:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcBt-00060e-TW; Wed, 12 Feb 2014 15:57:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDcBt-00060Z-4w
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 15:57:37 +0000
Received: from [85.158.137.68:21872] by server-16.bemta-3.messagelabs.com id
	65/3B-29917-0F99BF25; Wed, 12 Feb 2014 15:57:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392220654!1419325!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27944 invoked from network); 12 Feb 2014 15:57:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 15:57:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100163593"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 15:57:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 10:57:32 -0500
Message-ID: <1392220651.13563.86.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 12 Feb 2014 15:57:31 +0000
In-Reply-To: <20140212152134.GA20090@aepfle.de>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<20140212152134.GA20090@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George.Dunlap@eu.citrix.com, ian.jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 16:21 +0100, Olaf Hering wrote:
> On Wed, Feb 12, Ian Campbell wrote:
> 
> >  #define LIBXL_HAVE_SIGCHLD_SHARING 1
> >  
> > +/*
> > + * LIBXL_HAVE_NO_SUSPEND_RESUME
> 
> Think positive?
> Make that HAVE_FEATURE and define it on x86.

Could do -- anyone got any strong feeling one way or the other?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:03:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcHA-0006l2-QO; Wed, 12 Feb 2014 16:03:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDcH8-0006kv-L7
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 16:03:03 +0000
Received: from [85.158.137.68:33078] by server-1.bemta-3.messagelabs.com id
	91/D4-17293-53B9BF25; Wed, 12 Feb 2014 16:03:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392220978!1399273!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11548 invoked from network); 12 Feb 2014 16:03:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:03:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100165777"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 16:02:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 11:02:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDcH3-0008Jg-A9;
	Wed, 12 Feb 2014 16:02:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDcH3-0005R4-8W;
	Wed, 12 Feb 2014 16:02:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24854-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 16:02:57 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24854: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24854 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24854/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                45f7fdc2ffb9d5af4dab593843e89da70d1259e3
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7029 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2376519 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:12:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcPt-0006yZ-3x; Wed, 12 Feb 2014 16:12:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDcPr-0006yU-W5
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:12:04 +0000
Received: from [85.158.137.68:47439] by server-9.bemta-3.messagelabs.com id
	80/47-10184-35D9BF25; Wed, 12 Feb 2014 16:12:03 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392221521!161130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8402 invoked from network); 12 Feb 2014 16:12:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:12:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,832,1384300800"; d="scan'208";a="100169757"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 16:12:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 11:12:00 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDcPo-0008Mx-3q;
	Wed, 12 Feb 2014 16:12:00 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDcPn-0003VV-SA;
	Wed, 12 Feb 2014 16:11:59 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21243.40271.558124.694232@mariner.uk.xensource.com>
Date: Wed, 12 Feb 2014 16:11:59 +0000
To: <George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] [PATCH] xen: Drop N from rcN in XEN_EXTRAVERSION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Having this here means we have to wait for a push gate pass, or fart
about with explicit pushes to master, to make an RC.  The boot
messages for git builds already contain the git revision (as a
shorthash).

I will change the tarball creation checklist to seddery the -rc back
to -rcN, along with the other release-management-related changes (like
using an embedded copy of qemu).

If this patch meets with approval it should be thrown into the push
gate today, along with the patch for XSA-88, and then hopefully
nothing much else, so that we can get something suitable for making an
RC from by Friday.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks,
Ian.

diff --git a/xen/Makefile b/xen/Makefile
index 7258504..576d239 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -2,7 +2,7 @@
 # All other places this is stored (eg. compile.h) should be autogenerated.
 export XEN_VERSION       = 4
 export XEN_SUBVERSION    = 4
-export XEN_EXTRAVERSION ?= -rc2$(XEN_VENDORVERSION)
+export XEN_EXTRAVERSION ?= -rc$(XEN_VENDORVERSION)
 export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 -include xen-version
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:32:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcjs-0007ij-1h; Wed, 12 Feb 2014 16:32:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDcjq-0007ic-81
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:32:42 +0000
Received: from [85.158.139.211:31858] by server-13.bemta-5.messagelabs.com id
	41/2F-18801-922ABF25; Wed, 12 Feb 2014 16:32:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392222760!3481375!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12258 invoked from network); 12 Feb 2014 16:32:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 16:32:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Feb 2014 16:33:39 +0000
Message-Id: <52FBB035020000780011BCAB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 12 Feb 2014 16:32:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <21243.40271.558124.694232@mariner.uk.xensource.com>
In-Reply-To: <21243.40271.558124.694232@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George.Dunlap@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: Drop N from rcN in XEN_EXTRAVERSION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.02.14 at 17:11, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Having this here means we have to wait for a push gate pass, or fart
> about which explicit pushes to master, to make an RC.  The boot
> messages for git builds already contain the git revision (as a
> shorthash).
> 
> I will change the tarball creation checklist to seddery the -rc back
> to -rcN, along with the other release-management-related changes (like
> using an embedded copy of qemu).
> 
> If this patch meets with approval it should be thrown into the push
> gate today, along with the patch for XSA-88, and then hopefully
> nothing much else, so that we can get something suitable for making an
> RC from by Friday.
> 
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> diff --git a/xen/Makefile b/xen/Makefile
> index 7258504..576d239 100644
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -2,7 +2,7 @@
>  # All other places this is stored (eg. compile.h) should be autogenerated.
>  export XEN_VERSION       = 4
>  export XEN_SUBVERSION    = 4
> -export XEN_EXTRAVERSION ?= -rc2$(XEN_VENDORVERSION)
> +export XEN_EXTRAVERSION ?= -rc$(XEN_VENDORVERSION)
>  export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
>  -include xen-version
>  




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:35:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcmf-0007oB-L8; Wed, 12 Feb 2014 16:35:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDcme-0007o4-3R
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:35:36 +0000
Received: from [193.109.254.147:43077] by server-12.bemta-14.messagelabs.com
	id D6/72-17220-7D2ABF25; Wed, 12 Feb 2014 16:35:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392222933!3882926!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14012 invoked from network); 12 Feb 2014 16:35:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:35:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101968002"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 16:35:32 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:35:32 -0500
Message-ID: <52FBA2D3.7020503@citrix.com>
Date: Wed, 12 Feb 2014 16:35:31 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
	<52FB7042020000780011BAD0@nat28.tlf.novell.com>
	<52FB6F10.8040307@citrix.com>
	<52FB8B8A020000780011BB7D@nat28.tlf.novell.com>
In-Reply-To: <52FB8B8A020000780011BB7D@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
 task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/02/14 13:56, Jan Beulich wrote:
>>>> On 12.02.14 at 13:54, David Vrabel <david.vrabel@citrix.com> wrote:
>> On 12/02/14 11:59, Jan Beulich wrote:
>>>>>> On 11.02.14 at 20:19, David Vrabel <david.vrabel@citrix.com> wrote:
>>>> --- a/drivers/xen/events/events_base.c
>>>> +++ b/drivers/xen/events/events_base.c
>>>> @@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>>>>  
>>>>  	irq_exit();
>>>>  	set_irq_regs(old_regs);
>>>> +
>>>> +#ifndef CONFIG_PREEMPT
>>>> +	if ( __this_cpu_read(xed_nesting_count) == 0
>>>> +	     && is_preemptible_hypercall(regs) )
>>>> +		_cond_resched();
>>>> +#endif
>>>
>>> I don't think this can be done here - a 64-bit x86 kernel would
>>> generally be on the IRQ stack, and I don't think scheduling
>>> should be done in this state.
>>
>> _cond_resched() doesn't look that different from preempt_schedule_irq()
>> which is explicitly callable from irq context.
> 
> But IRQ context and running on the IRQ stack isn't the same. All
> current callers of that function are in assembly code, where one
> would hope people know what they're doing (and in particular
> _when_ they do so).

Ok.

I'm not sure I can claim I know what I'm doing, but I think this does
the right thing now.

8<--------------------------------------
>From 3094ed5851697b8bffe1227d32c1f1022e553191 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 11 Feb 2014 15:41:03 +0000
Subject: [PATCH] xen: allow privcmd hypercalls to be preempted

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long-running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/kernel/entry_64.S       |   23 ++++++++++++++++++++++-
 drivers/xen/Makefile             |    2 +-
 drivers/xen/events/events_base.c |    1 +
 drivers/xen/preempt.c            |   16 ++++++++++++++++
 drivers/xen/privcmd.c            |    2 ++
 include/xen/xen-ops.h            |   27 +++++++++++++++++++++++++++
 6 files changed, 69 insertions(+), 2 deletions(-)
 create mode 100644 drivers/xen/preempt.c

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..e614aaa 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1404,7 +1404,7 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
-	jmp  error_exit
+	jmp  xen_error_exit
 	CFI_ENDPROC
 END(xen_do_hypervisor_callback)

@@ -1470,6 +1470,26 @@ END(xen_failsafe_callback)
 apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
 	xen_hvm_callback_vector xen_evtchn_do_upcall

+ENTRY(xen_error_exit)
+	DEFAULT_FRAME
+	movl %ebx,%eax
+	RESTORE_REST
+	DISABLE_INTERRUPTS(CLBR_NONE)
+	TRACE_IRQS_OFF
+	GET_THREAD_INFO(%rcx)
+	testl %eax,%eax
+	je error_exit_user
+#ifndef CONFIG_PREEMPT
+	cmpb $0, PER_CPU_VAR(xen_in_preemptible_hcall)
+	je retint_kernel
+	movb $0, PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1, PER_CPU_VAR(xen_in_preemptible_hcall)
+#endif
+	jmp retint_kernel
+	CFI_ENDPROC
+END(xen_error_exit)
+
 #endif /* CONFIG_XEN */

 #if IS_ENABLED(CONFIG_HYPERV)
@@ -1629,6 +1649,7 @@ ENTRY(error_exit)
 	GET_THREAD_INFO(%rcx)
 	testl %eax,%eax
 	jne retint_kernel
+error_exit_user:
 	LOCKDEP_SYS_EXIT_IRQ
 	movl TI_flags(%rcx),%edx
 	movl $_TIF_WORK_MASK,%edi
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index d75c811..f8c7e04 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
 obj-y	+= events/
 obj-y	+= xenbus/

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..db9584a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -32,6 +32,7 @@
 #include <linux/slab.h>
 #include <linux/irqnr.h>
 #include <linux/pci.h>
+#include <linux/sched.h>

 #ifdef CONFIG_X86
 #include <asm/desc.h>
diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
new file mode 100644
index 0000000..3275ffe
--- /dev/null
+++ b/drivers/xen/preempt.c
@@ -0,0 +1,16 @@
+/*
+ * Preemptible hypercalls
+ *
+ * Copyright (C) 2014 Citrix Systems R&D ltd.
+ *
+ * This source code is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+#include <xen/xen-ops.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
+#endif
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 569a13b..59ac71c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
 	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
 		return -EFAULT;

+	xen_preemptible_hcall_begin();
 	ret = privcmd_call(hypercall.op,
 			   hypercall.arg[0], hypercall.arg[1],
 			   hypercall.arg[2], hypercall.arg[3],
 			   hypercall.arg[4]);
+	xen_preemptible_hcall_end();

 	return ret;
 }
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6d8c042 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);

 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
+
+#ifdef CONFIG_PREEMPT
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+}
+
+#else
+
+DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, true);
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, false);
+}
+
+#endif /* CONFIG_PREEMPT */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.5

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:35:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcmf-0007oB-L8; Wed, 12 Feb 2014 16:35:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDcme-0007o4-3R
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:35:36 +0000
Received: from [193.109.254.147:43077] by server-12.bemta-14.messagelabs.com
	id D6/72-17220-7D2ABF25; Wed, 12 Feb 2014 16:35:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392222933!3882926!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14012 invoked from network); 12 Feb 2014 16:35:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:35:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101968002"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 16:35:32 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:35:32 -0500
Message-ID: <52FBA2D3.7020503@citrix.com>
Date: Wed, 12 Feb 2014 16:35:31 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
	<52FB7042020000780011BAD0@nat28.tlf.novell.com>
	<52FB6F10.8040307@citrix.com>
	<52FB8B8A020000780011BB7D@nat28.tlf.novell.com>
In-Reply-To: <52FB8B8A020000780011BB7D@nat28.tlf.novell.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
 task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/02/14 13:56, Jan Beulich wrote:
>>>> On 12.02.14 at 13:54, David Vrabel <david.vrabel@citrix.com> wrote:
>> On 12/02/14 11:59, Jan Beulich wrote:
>>>>>> On 11.02.14 at 20:19, David Vrabel <david.vrabel@citrix.com> wrote:
>>>> --- a/drivers/xen/events/events_base.c
>>>> +++ b/drivers/xen/events/events_base.c
>>>> @@ -1254,6 +1254,12 @@ void xen_evtchn_do_upcall(struct pt_regs *regs)
>>>>  
>>>>  	irq_exit();
>>>>  	set_irq_regs(old_regs);
>>>> +
>>>> +#ifndef CONFIG_PREEMPT
>>>> +	if ( __this_cpu_read(xed_nesting_count) == 0
>>>> +	     && is_preemptible_hypercall(regs) )
>>>> +		_cond_resched();
>>>> +#endif
>>>
>>> I don't think this can be done here - a 64-bit x86 kernel would
>>> generally be on the IRQ stack, and I don't think scheduling
>>> should be done in this state.
>>
>> _cond_resched() doesn't look that different from preempt_schedule_irq()
>> which is explicitly callable from irq context.
> 
> But IRQ context and running on the IRQ stack isn't the same. All
> current callers of that function are in assembly code, where one
> would hope people know what they're doing (and in particular
> _when_ they do so).

Ok.

I'm not sure I can claim I know what I'm doing, but I think this does
the right thing now.

8<--------------------------------------
>From 3094ed5851697b8bffe1227d32c1f1022e553191 Mon Sep 17 00:00:00 2001
From: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 11 Feb 2014 15:41:03 +0000
Subject: [PATCH] xen: allow privcmd hypercalls to be preempted

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many tens of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long-running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/kernel/entry_64.S       |   23 ++++++++++++++++++++++-
 drivers/xen/Makefile             |    2 +-
 drivers/xen/events/events_base.c |    1 +
 drivers/xen/preempt.c            |   16 ++++++++++++++++
 drivers/xen/privcmd.c            |    2 ++
 include/xen/xen-ops.h            |   27 +++++++++++++++++++++++++++
 6 files changed, 69 insertions(+), 2 deletions(-)
 create mode 100644 drivers/xen/preempt.c

diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..e614aaa 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1404,7 +1404,7 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
-	jmp  error_exit
+	jmp  xen_error_exit
 	CFI_ENDPROC
 END(xen_do_hypervisor_callback)

@@ -1470,6 +1470,26 @@ END(xen_failsafe_callback)
 apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
 	xen_hvm_callback_vector xen_evtchn_do_upcall

+ENTRY(xen_error_exit)
+	DEFAULT_FRAME
+	movl %ebx,%eax
+	RESTORE_REST
+	DISABLE_INTERRUPTS(CLBR_NONE)
+	TRACE_IRQS_OFF
+	GET_THREAD_INFO(%rcx)
+	testl %eax,%eax
+	je error_exit_user
+#ifndef CONFIG_PREEMPT
+	cmpb $0, PER_CPU_VAR(xen_in_preemptible_hcall)
+	je retint_kernel
+	movb $0, PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1, PER_CPU_VAR(xen_in_preemptible_hcall)
+#endif
+	jmp retint_kernel
+	CFI_ENDPROC
+END(xen_error_exit)
+
 #endif /* CONFIG_XEN */

 #if IS_ENABLED(CONFIG_HYPERV)
@@ -1629,6 +1649,7 @@ ENTRY(error_exit)
 	GET_THREAD_INFO(%rcx)
 	testl %eax,%eax
 	jne retint_kernel
+error_exit_user:
 	LOCKDEP_SYS_EXIT_IRQ
 	movl TI_flags(%rcx),%edx
 	movl $_TIF_WORK_MASK,%edi
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index d75c811..f8c7e04 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
 obj-y	+= events/
 obj-y	+= xenbus/

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 4672e00..db9584a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -32,6 +32,7 @@
 #include <linux/slab.h>
 #include <linux/irqnr.h>
 #include <linux/pci.h>
+#include <linux/sched.h>

 #ifdef CONFIG_X86
 #include <asm/desc.h>
diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
new file mode 100644
index 0000000..3275ffe
--- /dev/null
+++ b/drivers/xen/preempt.c
@@ -0,0 +1,16 @@
+/*
+ * Preemptible hypercalls
+ *
+ * Copyright (C) 2014 Citrix Systems R&D ltd.
+ *
+ * This source code is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+#include <xen/xen-ops.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
+#endif
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 569a13b..59ac71c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
 	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
 		return -EFAULT;

+	xen_preemptible_hcall_begin();
 	ret = privcmd_call(hypercall.op,
 			   hypercall.arg[0], hypercall.arg[1],
 			   hypercall.arg[2], hypercall.arg[3],
 			   hypercall.arg[4]);
+	xen_preemptible_hcall_end();

 	return ret;
 }
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6d8c042 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);

 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
+
+#ifdef CONFIG_PREEMPT
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+}
+
+#else
+
+DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, true);
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, false);
+}
+
+#endif /* CONFIG_PREEMPT */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.5
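
[Editorial note: for readers skimming the patch, the per-CPU flag
discipline it implements can be sketched as a self-contained userspace
C analogy.  A plain bool stands in for the per-CPU
xen_in_preemptible_hcall, and reschedule_point() stands in for
preempt_schedule_irq(); the names here are illustrative, not kernel
API.]

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Userspace analogy of the flag handling in the patch above.  Because
 * preempt_schedule_irq() may move the task to a different CPU, the
 * flag is cleared before scheduling away and restored afterwards.
 */
static bool in_preemptible_hcall;

static void reschedule_point(void)
{
    /* The flag must never be set while the task is scheduled away. */
    assert(!in_preemptible_hcall);
}

/* What the xen_error_exit path does when returning to kernel mode. */
static void upcall_exit(void)
{
    if (in_preemptible_hcall) {
        in_preemptible_hcall = false;   /* clear across the reschedule */
        reschedule_point();
        in_preemptible_hcall = true;    /* restore for the resumed task */
    }
}

/* privcmd's bracketing around the actual hypercall. */
static void do_preemptible_hcall(void (*hcall)(void))
{
    in_preemptible_hcall = true;    /* xen_preemptible_hcall_begin() */
    hcall();
    in_preemptible_hcall = false;   /* xen_preemptible_hcall_end() */
}
```

The assert in reschedule_point() encodes the invariant the commit
message describes: the flag is never observed set while the task may
be migrating between CPUs.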

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:36:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcnW-0007tb-A4; Wed, 12 Feb 2014 16:36:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDcnV-0007tS-Ap
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:36:29 +0000
Received: from [85.158.143.35:51750] by server-2.bemta-4.messagelabs.com id
	C4/BB-10891-C03ABF25; Wed, 12 Feb 2014 16:36:28 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392222987!5180457!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26154 invoked from network); 12 Feb 2014 16:36:27 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 16:36:27 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDcnR-0004gp-Le; Wed, 12 Feb 2014 16:36:25 +0000
Date: Wed, 12 Feb 2014 17:36:25 +0100
From: Tim Deegan <tim@xen.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140212163625.GE91459@deinos.phlegethon.org>
References: <52F90A71.40802@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52F90A71.40802@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This draft has my wholehearted support.  Even without addressing any
of the points under discussion something along these lines would be a
vast improvement on the current format.

I have two general questions:

 - The existing save-format definition is spread across a number of
   places: libxc for hypervisor state, qemu for DM state, and the main
   toolstack (libxl/xend/xapi/&c) for other config runes and a general
   wrapper.  This is clearly a reworking of the libxc parts -- do
   you think there's anything currently defined elsewhere that belongs
   in this spec?

 - Have you given any thought to making this into a wire protocol
   rather than just a file format?  Would there be any benefit to
   having records individually acked by the receiver in a live
   migration, or having the receiver send instructions about
   compatibility?  Or is that again left to the toolstack to manage?

and a few nits:

At 17:20 +0000 on 10 Feb (1392049249), David Vrabel wrote:
> Records
> =======
> 
> A record has a record header, type specific data and a trailing
> footer.  If body_length is not a multiple of 8, the body is padded
> with zeroes to align the checksum field on an 8 octet boundary.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+-------------------------+
>     | type                  | body_length             |
>     +-----------+-----------+-------------------------+
>     | options   | (reserved)                          |
>     +-----------+-------------------------------------+
>     ...
>     Record body of length body_length octets followed by
>     0 to 7 octets of padding.
>     ...
>     +-----------------------+-------------------------+
>     | checksum              | (reserved)              |
>     +-----------------------+-------------------------+
> 
> --------------------------------------------------------------------
> Field        Description
> -----------  -------------------------------------------------------
> type         0x00000000: END
> 
>              0x00000001: PAGE_DATA
> 
>              0x00000002: VCPU_INFO
> 
>              0x00000003: VCPU_CONTEXT
> 
>              0x00000004: X86_PV_INFO
> 
>              0x00000005: P2M
> 
>              0x00000006 - 0xFFFFFFFF: Reserved
> 
> body_length  Length in octets of the record body.
> 
> options      Bit 0: 0 = checksum invalid, 1 = checksum valid.
> 
>              Bits 1-15: Reserved.
> 
> checksum     CRC-32 checksum of the record body (including any trailing
>              padding), or 0x00000000 if the checksum field is invalid.

Apart from any discussion of the merits of per-record vs whole-file
checksums, it would be useful for this checksum to cover the header
too.  E.g., by declaring it to be the checksum of header+data where
the checksum field is 0, or by declaring that it shall be that pattern
which causes the finished header+data to checksum to 0.
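
[Editorial note: the first variant can be sketched concretely,
assuming the 16-byte header layout drawn above and an 8-byte trailer
(checksum + reserved).  crc32_le() and record_checksum() are
illustrative names; the CRC is the IEEE polynomial as used by zlib's
crc32().]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (IEEE polynomial, compatible with zlib's crc32()). */
static uint32_t crc32_le(uint32_t crc, const uint8_t *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xedb88320u & -(crc & 1u));
    }
    return ~crc;
}

/* Record header laid out as in the draft: type, body_length, options. */
struct rec_hdr {
    uint32_t type;
    uint32_t body_length;
    uint16_t options;
    uint8_t  reserved[6];
};

/*
 * Checksum covering header + padded body + trailer, with the checksum
 * field itself taken as zero (the first variant suggested above).
 */
static uint32_t record_checksum(const struct rec_hdr *hdr,
                                const uint8_t *body, size_t padded_len)
{
    static const uint8_t zero_trailer[8]; /* checksum + reserved = 0 */
    uint32_t crc;

    crc = crc32_le(0, (const uint8_t *)hdr, sizeof(*hdr));
    crc = crc32_le(crc, body, padded_len);
    return crc32_le(crc, zero_trailer, sizeof(zero_trailer));
}
```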

> VCPU_INFO
> ---------
> 
> > [ This is a combination of parts of the extended-info and
> > XC_SAVE_ID_VCPU_INFO chunks. ]
> 
> The VCPU_INFO record includes the maximum possible VCPU ID.  This will
> be followed by a VCPU_CONTEXT record for each online VCPU.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-----------------------+------------------------+
>     | max_vcpu_id           | (reserved)             |
>     +-----------------------+------------------------+
> 
> --------------------------------------------------------------------
> Field        Description
> -----------  ---------------------------------------------------
> max_vcpu_id  Maximum possible VCPU ID.
> --------------------------------------------------------------------

If this is all that's in this record, maybe it should be called
VCPU_COUNT?

> P2M
> ---
> 
> [ This is a more flexible replacement for the old p2m_size field and
> p2m array. ]
> 
> The P2M record contains a portion of the source domain's P2M.
> Multiple P2M records may be sent if the source P2M changes during the
> stream.
> 
>      0     1     2     3     4     5     6     7 octet
>     +-------------------------------------------------+
>     | pfn_begin                                       |
>     +-------------------------------------------------+
>     | pfn_end                                         |
>     +-------------------------------------------------+
>     | mfn[0]                                          |
>     +-------------------------------------------------+
>     ...
>     +-------------------------------------------------+
>     | mfn[N-1]                                        |
>     +-------------------------------------------------+
> 
> --------------------------------------------------------------------
> Field       Description
> ----------- --------------------------------------------------------
> pfn_begin   The first PFN in this portion of the P2M.
> 
> pfn_end     One past the last PFN in this portion of the P2M.
> 
> mfn         Array of (pfn_end - pfn_begin) MFNs corresponding to
>             the set of PFNs in the range [pfn_begin, pfn_end).
> --------------------------------------------------------------------

The current save record doesn't contain the p2m itself, but rather the
p2m_frame_list, an array of the MFNs (in the save record, PFNs) that
hold the actual p2m.  Frames in that list are used to populate the p2m
as memory is allocated on the receiving side.

I'm not sure what it would mean to allow the guest to change the
location of its p2m table (as distinct from the contents) on the fly
during a migration.  We would at least have to re-send the contents of
any frames that are no longer in the p2m table, in case the receiver
has already overwritten them.  And I think it should be fine to just
send the whole list every time (or else we need to manage deltas
carefully too).
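
[Editorial note: for concreteness, the record as drawn -- whatever it
ends up holding, p2m entries or the frame list -- would be validated
and parsed along these lines, assuming the 64-bit fields shown.  The
names p2m_body and p2m_mfns are hypothetical.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* P2M record body as proposed: pfn_begin, pfn_end, then the MFN array. */
struct p2m_body {
    uint64_t pfn_begin;
    uint64_t pfn_end;   /* one past the last PFN */
};

/*
 * Validate a P2M record body against body_length and return a pointer
 * to its MFN array, or NULL if inconsistent.  (Sketch only: a real
 * reader would also guard the multiplication against overflow.)
 */
static const uint64_t *p2m_mfns(const uint8_t *body, uint32_t body_length,
                                struct p2m_body *hdr, uint64_t *nr_mfns)
{
    if (body_length < sizeof(*hdr))
        return NULL;
    memcpy(hdr, body, sizeof(*hdr));
    if (hdr->pfn_end < hdr->pfn_begin)
        return NULL;
    *nr_mfns = hdr->pfn_end - hdr->pfn_begin;
    if (body_length - sizeof(*hdr) != *nr_mfns * sizeof(uint64_t))
        return NULL;
    return (const uint64_t *)(body + sizeof(*hdr));
}
```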

Also, while I'm thinking about record names, this probably ought to be
called X86_PV_P2M or something like that.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:36:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:36:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcnw-0007xR-No; Wed, 12 Feb 2014 16:36:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDcnv-0007xC-LA
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:36:55 +0000
Received: from [85.158.139.211:42405] by server-1.bemta-5.messagelabs.com id
	99/A3-12859-623ABF25; Wed, 12 Feb 2014 16:36:54 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392223012!3501157!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4488 invoked from network); 12 Feb 2014 16:36:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:36:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="100180151"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 16:36:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:36:52 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDcns-0000Vd-1c;
	Wed, 12 Feb 2014 16:36:52 +0000
Message-ID: <52FBA31A.9060801@eu.citrix.com>
Date: Wed, 12 Feb 2014 16:36:42 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <21243.40271.558124.694232@mariner.uk.xensource.com>
In-Reply-To: <21243.40271.558124.694232@mariner.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: Drop N from rcN in XEN_EXTRAVERSION
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 04:11 PM, Ian Jackson wrote:
> Having this here means we have to wait for a push gate pass, or fart
> about with explicit pushes to master, to make an RC.  The boot
> messages for git builds already contain the git revision (as a
> shorthash).
>
> I will change the tarball creation checklist to seddery the -rc back
> to -rcN, along with the other release-management-related changes (like
> using an embedded copy of qemu).
>
> If this patch meets with approval it should be thrown into the push
> gate today, along with the patch for XSA-88, and then hopefully
> nothing much else, so that we can get something suitable for making an
> RC from by Friday.
>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

[Release-]Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

(That was two acks.)

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:37:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:37:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcoE-00081G-AL; Wed, 12 Feb 2014 16:37:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>)
	id 1WDcoC-00080o-A6; Wed, 12 Feb 2014 16:37:12 +0000
Received: from [85.158.139.211:49761] by server-14.bemta-5.messagelabs.com id
	4D/9B-27598-733ABF25; Wed, 12 Feb 2014 16:37:11 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392223029!3453073!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20536 invoked from network); 12 Feb 2014 16:37:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:37:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="100180313"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 16:37:09 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:37:08 -0500
Message-ID: <52FBA331.9040004@citrix.com>
Date: Wed, 12 Feb 2014 17:37:05 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, <lars.kurth@xen.org>, Dario
	Faggioli <dario.faggioli@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Ben Guthro <benjamin.guthro@citrix.com>, Andrew
	Cooper <Andrew.Cooper3@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>, 
	Santosh Jodh <Santosh.Jodh@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
In-Reply-To: <1391609348.6497.178.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	mirageos-devel@lists.xenproject.org,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 05/02/14 15:09, Ian Campbell wrote:
> Roger:
>       * Refactor Linux hotplug scripts
> 
>         You did some of this I think?

No, I've added a block-iscsi script, but I did not refactor the other
ones. The item is still valid; however, I'm not sure it's attractive
from a GSoC point of view. I'm going to leave it in anyway in case
someone is interested.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:47:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcyG-0000Dm-NQ; Wed, 12 Feb 2014 16:47:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDcyF-0000Dh-3j
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:47:35 +0000
Received: from [85.158.139.211:53340] by server-4.bemta-5.messagelabs.com id
	58/F0-08092-6A5ABF25; Wed, 12 Feb 2014 16:47:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392223653!3483791!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 381 invoked from network); 12 Feb 2014 16:47:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 16:47:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 12 Feb 2014 16:48:32 +0000
Message-Id: <52FBB3B2020000780011BCE1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 12 Feb 2014 16:47:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "David Vrabel" <david.vrabel@citrix.com>
References: <1392146352-16381-1-git-send-email-david.vrabel@citrix.com>
	<1392146352-16381-4-git-send-email-david.vrabel@citrix.com>
	<52FB7042020000780011BAD0@nat28.tlf.novell.com>
	<52FB6F10.8040307@citrix.com>
	<52FB8B8A020000780011BB7D@nat28.tlf.novell.com>
	<52FBA2D3.7020503@citrix.com>
In-Reply-To: <52FBA2D3.7020503@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 3/3] xen/events: schedule if the interrupted
 task is in a preemptible hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.02.14 at 17:35, David Vrabel <david.vrabel@citrix.com> wrote:
> --- a/arch/x86/kernel/entry_64.S
> +++ b/arch/x86/kernel/entry_64.S
> @@ -1404,7 +1404,7 @@ ENTRY(xen_do_hypervisor_callback)   #
> do_hypervisor_callback(struct *pt_regs)
>  	popq %rsp
>  	CFI_DEF_CFA_REGISTER rsp
>  	decl PER_CPU_VAR(irq_count)
> -	jmp  error_exit
> +	jmp  xen_error_exit

Any reason not to put all the new code right here, instead of the
jmp?

>  	CFI_ENDPROC
>  END(xen_do_hypervisor_callback)
> 
> @@ -1470,6 +1470,26 @@ END(xen_failsafe_callback)
>  apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
>  	xen_hvm_callback_vector xen_evtchn_do_upcall
> 
> +ENTRY(xen_error_exit)
> +	DEFAULT_FRAME
> +	movl %ebx,%eax
> +	RESTORE_REST
> +	DISABLE_INTERRUPTS(CLBR_NONE)
> +	TRACE_IRQS_OFF
> +	GET_THREAD_INFO(%rcx)
> +	testl %eax,%eax
> +	je error_exit_user
> +#ifndef CONFIG_PREEMPT
> +	testb $0, PER_CPU_VAR(xen_in_preemptible_hcall)
> +	je retint_kernel

This is effectively an unconditional branch now. You either want
cmpb instead of testb or $0xff instead of $0.
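Jan's observation follows from the x86 flag semantics: testb ANDs its
operands and sets ZF from the result, so `testb $0, mem` always produces
zero and the following `je` is always taken. A small C model of the two
instructions' ZF behaviour (illustrative only, not kernel code):

```c
#include <assert.h>

/* ZF after `testb $imm, m`: set when (m & imm) == 0.  With imm == 0
 * the AND is always 0, so ZF is always set and `je` always branches. */
int zf_after_testb(unsigned char m, unsigned char imm)
{
    return (unsigned char)(m & imm) == 0;
}

/* ZF after `cmpb $imm, m`: set when m == imm. */
int zf_after_cmpb(unsigned char m, unsigned char imm)
{
    return m == imm;
}
```

Either of Jan's suggested fixes restores a real condition: `cmpb $0`
sets ZF only when the flag byte is zero, and `testb $0xff` sets ZF only
when all bits of the byte are clear.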

Jan

> +	movb $0, PER_CPU_VAR(xen_in_preemptible_hcall)
> +	call preempt_schedule_irq
> +	movb $1, PER_CPU_VAR(xen_in_preemptible_hcall)
> +#endif
> +	jmp retint_kernel
> +	CFI_ENDPROC
> +END(xen_error_exit)
> +
>  #endif /* CONFIG_XEN */
> 
>  #if IS_ENABLED(CONFIG_HYPERV)
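The flag sequence in the quoted hunk (clear the per-CPU flag, call the
scheduler, set the flag again before resuming the hypercall) can be
modelled in plain C. This is a sketch with illustrative names, not the
kernel's actual implementation:

```c
#include <assert.h>

int xen_in_preemptible_hcall;   /* models the per-CPU flag */
int schedules;                  /* counts preempt_schedule_irq calls */

void preempt_schedule_irq_model(void) { schedules++; }

/* Model of the intended non-CONFIG_PREEMPT path: only when the
 * interrupted task was inside a preemptible hypercall do we drop the
 * flag, reschedule, and restore it before returning to the hypercall. */
void xen_error_exit_model(void)
{
    if (!xen_in_preemptible_hcall)
        return;                         /* je retint_kernel */
    xen_in_preemptible_hcall = 0;       /* movb $0, ... */
    preempt_schedule_irq_model();       /* call preempt_schedule_irq */
    xen_in_preemptible_hcall = 1;       /* movb $1, ... */
}
```

The clear/set pair around the call matters: it stops a nested interrupt
taken during preempt_schedule_irq from treating the task as still being
inside the preemptible hypercall.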



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:48:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDcye-0000F9-4S; Wed, 12 Feb 2014 16:48:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WDcyc-0000F2-NS
	for Xen-devel@lists.xensource.com; Wed, 12 Feb 2014 16:47:58 +0000
Received: from [85.158.137.68:56346] by server-11.bemta-3.messagelabs.com id
	1F/5E-04255-EB5ABF25; Wed, 12 Feb 2014 16:47:58 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392223677!420756!1
X-Originating-IP: [209.85.215.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4818 invoked from network); 12 Feb 2014 16:47:57 -0000
Received: from mail-ea0-f182.google.com (HELO mail-ea0-f182.google.com)
	(209.85.215.182)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:47:57 -0000
Received: by mail-ea0-f182.google.com with SMTP id r15so4565467ead.27
	for <Xen-devel@lists.xensource.com>;
	Wed, 12 Feb 2014 08:47:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=el2yNRh8Knx8vpMDbj5Ktf+unzRCQhlS9Yy4BBujzyA=;
	b=ZNjoUPTZwcfMA8+iw8mFwDmFBn+njdyVtZXrlHkl9ntUwx9d2qYT/eLiVdqxmVpSar
	U8IFUMXov1R4glHExmf4jXdJJKXItgpT8AoRjVy+NppYkplCi/IOYsCu6vIfztUpjdQ8
	B0qEvPftfDbQ4wt5J/ftnvAyeXTfrHPJs4ESemupZQn/L47H9xkynbZsKAmjWqdBPtCm
	IufukDDYfnsHM72Q69xNT9tn//WkxvVPowzmC71cpBi4B7zmZ01QVvTQ/fgjbPbbpy8V
	s46Xyv1GGKH7hl5oEeXFNRdrwFMFALCGSSluFuS6jTO7sKZ9/OI9i4bL65CA9x5rfapV
	+8pg==
X-Gm-Message-State: ALoCoQnG51Y+dGqIl+jKxHyQUbO5+1UFfd4WWhzdfZoftTBjNx95ZnLu8D1wf0jjvnEGbOb23ibX
X-Received: by 10.14.221.4 with SMTP id q4mr5015640eep.47.1392223676568;
	Wed, 12 Feb 2014 08:47:56 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id d9sm82953089eei.9.2014.02.12.08.47.55
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 12 Feb 2014 08:47:55 -0800 (PST)
Message-ID: <52FBA5BA.4020301@linaro.org>
Date: Wed, 12 Feb 2014 16:47:54 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
Cc: Xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	george.dunlap@eu.citrix.com, tim@xen.org, keir.xen@gmail.com,
	Jan Beulich <jbeulich@suse.com>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Mukesh,

On 12/17/2013 02:38 AM, Mukesh Rathor wrote:
> In preparation for the next patch, we update xsm_add_to_physmap to
> allow for checking of foreign domain. Thus, the current domain must
> have the right to update the mappings of target domain with pages from
> foreign domain.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>

While I was playing with XSM on ARM, I noticed that Daniel De Graaf
added xsm_map_gfmn_foreign a few months ago (see commit 0b201e6).

Would it be suitable to use this XSM hook instead of extending
xsm_add_to_physmap?

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:48:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDczP-0000LP-Nk; Wed, 12 Feb 2014 16:48:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDczN-0000L9-I1
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:48:45 +0000
Received: from [85.158.139.211:18785] by server-12.bemta-5.messagelabs.com id
	DA/57-15415-CE5ABF25; Wed, 12 Feb 2014 16:48:44 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392223722!3483915!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2806 invoked from network); 12 Feb 2014 16:48:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:48:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="100185116"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 16:48:42 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:48:41 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDczJ-0000i3-66;
	Wed, 12 Feb 2014 16:48:41 +0000
Message-ID: <52FBA5DF.2090206@eu.citrix.com>
Date: Wed, 12 Feb 2014 16:48:31 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, <xen-devel@lists.xen.org>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<1392215531.13563.79.camel@kazak.uk.xensource.com>
In-Reply-To: <1392215531.13563.79.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 02:32 PM, Ian Campbell wrote:
> Typoed George's address...
>
> On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
>> ARM does not (currently) support migration, so stop offering tasty looking
>> treats like "xl migrate".
>>
>> Apart from the UI improvement my intention is to use this in osstest to detect
>> whether to attempt the save/restore/migrate tests.
>>
>> Other than the additions of the #define/#ifdef there is a tiny bit of code
>> motion ("dump-core" in the command list and core_dump_domain in the
>> implementations) which serves to put ifdeffable bits next to each other.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> Cc: goerge.dunlap@citrix.com

Looks reasonable.  The risk should be pretty minimal; a compile error at 
most.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 02:32 PM, Ian Campbell wrote:
> Typoed George's address...
>
> On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
>> ARM does not (currently) support migration, so stop offering tasty looking
>> treats like "xl migrate".
>>
>> Apart from the UI improvement my intention is to use this in osstest to detect
>> whether to attempt the save/restore/migrate tests.
>>
>> Other than the additions of the #define/#ifdef there is a tiny bit of code
>> motion ("dump-core" in the command list and core_dump_domain in the
>> implementations) which serves to put ifdeffable bits next to each other.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> Cc: goerge.dunlap@citrix.com

Looks reasonable.  The risk should be pretty minimal; a compile error at 
most.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:49:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDd0X-0000Ua-75; Wed, 12 Feb 2014 16:49:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDd0W-0000UQ-Jc
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:49:56 +0000
Received: from [85.158.139.211:42233] by server-16.bemta-5.messagelabs.com id
	B4/DF-05060-336ABF25; Wed, 12 Feb 2014 16:49:55 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392223793!3480851!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4776 invoked from network); 12 Feb 2014 16:49:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:49:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101973631"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 16:49:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:49:49 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDd0P-0000iw-Lz;
	Wed, 12 Feb 2014 16:49:49 +0000
Message-ID: <52FBA623.2010809@eu.citrix.com>
Date: Wed, 12 Feb 2014 16:49:39 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Olaf Hering <olaf@aepfle.de>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>	
	<20140212152134.GA20090@aepfle.de>
	<1392220651.13563.86.camel@kazak.uk.xensource.com>
In-Reply-To: <1392220651.13563.86.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 03:57 PM, Ian Campbell wrote:
> On Wed, 2014-02-12 at 16:21 +0100, Olaf Hering wrote:
>> On Wed, Feb 12, Ian Campbell wrote:
>>
>>>   #define LIBXL_HAVE_SIGCHLD_SHARING 1
>>>   
>>> +/*
>>> + * LIBXL_HAVE_NO_SUSPEND_RESUME
>> Think positive?
>> Make that HAVE_FEATURE and define it on x86.
> Could do -- anyone got any strong feeling one way or the other?

I'm a fan of consistency, so "HAVE_SUSPEND" sounds a little bit better 
to me.  But either way is fine.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 16:55:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 16:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDd69-0000oy-KG; Wed, 12 Feb 2014 16:55:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDd67-0000od-SH
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 16:55:43 +0000
Received: from [85.158.139.211:29467] by server-16.bemta-5.messagelabs.com id
	53/DB-05060-F87ABF25; Wed, 12 Feb 2014 16:55:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392224141!3492551!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11428 invoked from network); 12 Feb 2014 16:55:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 16:55:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101975480"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 16:54:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 11:54:30 -0500
Message-ID: <1392224069.13563.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 12 Feb 2014 16:54:29 +0000
In-Reply-To: <52FBA623.2010809@eu.citrix.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<20140212152134.GA20090@aepfle.de>
	<1392220651.13563.86.camel@kazak.uk.xensource.com>
	<52FBA623.2010809@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, ian.jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 16:49 +0000, George Dunlap wrote:
> On 02/12/2014 03:57 PM, Ian Campbell wrote:
> > On Wed, 2014-02-12 at 16:21 +0100, Olaf Hering wrote:
> >> On Wed, Feb 12, Ian Campbell wrote:
> >>
> >>>   #define LIBXL_HAVE_SIGCHLD_SHARING 1
> >>>   
> >>> +/*
> >>> + * LIBXL_HAVE_NO_SUSPEND_RESUME
> >> Think positive?
> >> Make that HAVE_FEATURE and define it on x86.
> > Could do -- anyone got any strong feeling one way or the other?
> 
> I'm a fan of consistency, so "HAVE_SUSPEND" sounds a little bit better 
> to me.  But either way is fine.

Right. I think I'll flip it then; it does make more logical sense that
way.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 17:01:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 17:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDdB0-0001B3-DS; Wed, 12 Feb 2014 17:00:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDdAz-0001Ay-DY
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 17:00:45 +0000
Received: from [85.158.143.35:27276] by server-3.bemta-4.messagelabs.com id
	21/13-11539-CB8ABF25; Wed, 12 Feb 2014 17:00:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392224440!5186461!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14679 invoked from network); 12 Feb 2014 17:00:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 17:00:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="100189328"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 17:00:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 12:00:22 -0500
Message-ID: <1392224421.13563.100.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Wed, 12 Feb 2014 17:00:21 +0000
In-Reply-To: <1392224069.13563.97.camel@kazak.uk.xensource.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<20140212152134.GA20090@aepfle.de>
	<1392220651.13563.86.camel@kazak.uk.xensource.com>
	<52FBA623.2010809@eu.citrix.com>
	<1392224069.13563.97.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, ian.jackson@eu.citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 16:54 +0000, Ian Campbell wrote:
> On Wed, 2014-02-12 at 16:49 +0000, George Dunlap wrote:
> > On 02/12/2014 03:57 PM, Ian Campbell wrote:
> > > On Wed, 2014-02-12 at 16:21 +0100, Olaf Hering wrote:
> > >> On Wed, Feb 12, Ian Campbell wrote:
> > >>
> > >>>   #define LIBXL_HAVE_SIGCHLD_SHARING 1
> > >>>   
> > >>> +/*
> > >>> + * LIBXL_HAVE_NO_SUSPEND_RESUME
> > >> Think positive?
> > >> Make that HAVE_FEATURE and define it on x86.
> > > Could do -- anyone got any strong feeling one way or the other?
> > 
> > I'm a fan of consistency, so "HAVE_SUSPEND" sounds a little bit better 
> > to me.  But either way is fine.
> 
> Right. I think I'll flip it then, it does make more logical sense that
> way.

Actually, no I won't.

Anyone who uses the positive version of this will find that migration
support suddenly disappears on x86 when they build against Xen 4.3 or
earlier, unless they are careful to use
        #if defined(LIBXL_HAVE_SUSPEND_RESUME) || defined(__i386__) || defined(__x86_64__)
which is worse than a -ve feature flag IMHO.

I will stick with LIBXL_HAVE_NO_SUSPEND_RESUME then.
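For illustration, a minimal standalone sketch of the negative-flag check
being adopted here; everything below is hypothetical consumer code, not
libxl itself, and the macro is defined by hand to simulate a build where
libxl.h declares the feature absent:

```c
/* Sketch of the negative feature-flag pattern discussed above.
 * LIBXL_HAVE_NO_SUSPEND_RESUME is defined here by hand to simulate a
 * platform (e.g. ARM) whose libxl.h declares suspend/resume absent;
 * in real consumer code it would come from <libxl.h>. */

#define LIBXL_HAVE_NO_SUSPEND_RESUME 1   /* simulate an ARM build */

int suspend_resume_supported(void)
{
#if defined(LIBXL_HAVE_NO_SUSPEND_RESUME)
    return 0;  /* libxl explicitly says the feature is absent */
#else
    /* Flag absent: either a newer libxl on a capable platform, or an
     * older (<= 4.3, x86-only) libxl that always had suspend/resume.
     * Either way the feature is available, which is why the negative
     * flag stays backward compatible. */
    return 1;
#endif
}
```

With a positive flag instead, the same consumer would have to write
`#if defined(LIBXL_HAVE_SUSPEND_RESUME) || defined(__i386__) || defined(__x86_64__)`
to keep working against Xen 4.3 and earlier, which is the extra burden
described above.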

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 17:05:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 17:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDdF5-0001Lm-QB; Wed, 12 Feb 2014 17:04:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WDdF3-0001LJ-NB; Wed, 12 Feb 2014 17:04:57 +0000
Received: from [193.109.254.147:22604] by server-13.bemta-14.messagelabs.com
	id F1/77-01226-8B9ABF25; Wed, 12 Feb 2014 17:04:56 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392224695!3881502!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16699 invoked from network); 12 Feb 2014 17:04:56 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-8.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	12 Feb 2014 17:04:56 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WDdEw-0007GV-W4; Wed, 12 Feb 2014 17:04:50 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WDdEw-00070S-Fk; Wed, 12 Feb 2014 17:04:50 +0000
Date: Wed, 12 Feb 2014 17:04:50 +0000
Message-Id: <E1WDdEw-00070S-Fk@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 88 (CVE-2014-1950) -
 use-after-free in xc_cpupool_getinfo() under memory pressure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

              Xen Security Advisory CVE-2014-1950 / XSA-88
                              version 3

      use-after-free in xc_cpupool_getinfo() under memory pressure

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

If xc_cpumap_alloc() fails then xc_cpupool_getinfo() will free the result
structure and incorrectly return the now-dangling pointer to it.
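The failure path can be sketched as follows (a simplified, hypothetical
reconstruction: names, types, and logic are stand-ins rather than the
actual libxc source, and the real one-line fix in xsa88.patch may differ
in detail):

```c
/* Simplified sketch of the XSA-88 error-handling bug: on allocation
 * failure the cleanup path freed the result structure, but execution
 * still reached the normal return, handing the caller a freed pointer.
 * This is NOT the real xc_cpupool_getinfo(); everything here is a
 * stand-in. */
#include <stdlib.h>

typedef struct {
    unsigned char *cpumap;
} pool_info;

/* Fixed shape: bail out after freeing instead of falling through. */
pool_info *pool_getinfo(int simulate_alloc_failure)
{
    pool_info *info = calloc(1, sizeof(*info));
    if (!info)
        return NULL;

    /* Stand-in for xc_cpumap_alloc() failing under memory pressure. */
    info->cpumap = simulate_alloc_failure
                       ? NULL
                       : calloc(16, sizeof(unsigned char));
    if (!info->cpumap) {
        free(info);
        return NULL;  /* the buggy version omitted this return and fell
                         through to "return info" below, returning the
                         freed pointer */
    }
    return info;
}
```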

IMPACT
======

An attacker may be able to cause a multi-threaded toolstack using this
function to race against itself leading to heap corruption and a
potential DoS.

Depending on the malloc implementation, privilege escalation cannot be
ruled out.

VULNERABLE SYSTEMS
==================

The flaw is present in Xen 4.1 onwards.  Only multithreaded toolstacks
are vulnerable.  Only systems where management functions (such as
domain creation) are exposed to untrusted users are vulnerable.

xl is not multithreaded, so is not vulnerable.  However, multithreaded
toolstacks using libxl as a library are vulnerable.  xend is
vulnerable.

MITIGATION
==========

Not allowing untrusted users access to toolstack functionality will
avoid this issue.

CREDITS
=======

This issue was discovered by Coverity Scan and diagnosed by Andrew
Cooper.

RESOLUTION
==========

Applying the attached patch resolves this issue.

xsa88.patch        xen-unstable, Xen 4.3.x, Xen 4.2.x, Xen 4.1.x

$ sha256sum xsa88*.patch
7a73ca9db19a9ffe6e8cd259fa71dc1299738f26fa024303f4ab38931db75f14  xsa88.patch
$
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJS+6mbAAoJEIP+FMlX6CvZjhAH/j9PI7N93lhkTiVZiD3noh9e
czgskoQ1ge1zHSzYVXvLZvVEaEVCSMQpql37gSAeWl7rfjdFxv6xQQ3OIla2Xyqm
xfoaQhP8ZMbBX6RAWRWC99wCB8ki67VA3ZqHEqNPz72FxnaT9Y0bQ0Wg4cVcq69q
hNtidmtRfX8yD5o/ACpiuCHL0miD9GxZGjGVy1EAjMxKgfDR8fBkI2hoHe4v6V4v
XzeiXW7/xyLtXausFsTdUI/gTO+2UCWlaBPS5eobCnXFP+agmJfhTAzHU9gNQajv
AATAlka1y9WMWnLBvp+UMDqJ2w5XhwwVQAW17mAyipLi0vco6gcp1F80UTKmtVc=
=1It2
-----END PGP SIGNATURE-----
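The attached xsa88.patch is a one-line change to tools/libxc/xc_cpupool.c: on the error path, after free(info), it sets info = NULL so the function returns NULL rather than the freed pointer. A minimal, self-contained sketch of the bug class and the fix follows; the names here are illustrative stand-ins, not the real libxc API:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified model of the XSA-88 pattern; 'struct info' and the
 * function names are hypothetical, not the real libxc types. */
struct info {
    unsigned char *cpumap;
};

/* Stand-in for xc_cpumap_alloc(); 'fail' simulates memory pressure. */
static unsigned char *cpumap_alloc(int fail)
{
    return fail ? NULL : calloc(1, 16);
}

/* Fixed version: when the inner allocation fails, free the partially
 * built result AND clear the local pointer.  The bug was falling
 * through to return 'info' after free(info), handing the caller a
 * dangling pointer. */
static struct info *getinfo(int fail_cpumap)
{
    struct info *info = calloc(1, sizeof(*info));
    if (!info)
        return NULL;

    info->cpumap = cpumap_alloc(fail_cpumap);
    if (!info->cpumap) {
        free(info);
        info = NULL;   /* the one-line fix */
    }
    return info;
}
```

The same rule generalises: any error path that frees a partially constructed result must also ensure the pointer it is about to return no longer references the freed memory.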

--=separator
Content-Type: application/octet-stream; name="xsa88.patch"
Content-Disposition: attachment; filename="xsa88.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNv
bT4KRGF0ZTogV2VkLCAyMiBKYW4gMjAxNCAxNzo0NzoyMSArMDAwMApTdWJq
ZWN0OiBsaWJ4YzogRml4IG91dC1vZi1tZW1vcnkgZXJyb3IgaGFuZGxpbmcg
aW4geGNfY3B1cG9vbF9nZXRpbmZvKCkKCkF2b2lkIGZyZWVpbmcgaW5mbyB0
aGVuIHJldHVybmluZyBpdCB0byB0aGUgY2FsbGVyLgoKVGhpcyBpcyBYU0Et
ODguCgpDb3Zlcml0eS1JRDogMTA1NjE5MgpTaWduZWQtb2ZmLWJ5OiBBbmRy
ZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpSZXZpZXdl
ZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgotLS0KIHRv
b2xzL2xpYnhjL3hjX2NwdXBvb2wuYyB8ICAgIDEgKwogMSBmaWxlIGNoYW5n
ZWQsIDEgaW5zZXJ0aW9uKCspCgpkaWZmIC0tZ2l0IGEvdG9vbHMvbGlieGMv
eGNfY3B1cG9vbC5jIGIvdG9vbHMvbGlieGMveGNfY3B1cG9vbC5jCmluZGV4
IGM4YzJhMzMuLjYzOTNjZmIgMTAwNjQ0Ci0tLSBhL3Rvb2xzL2xpYnhjL3hj
X2NwdXBvb2wuYworKysgYi90b29scy9saWJ4Yy94Y19jcHVwb29sLmMKQEAg
LTEwNCw2ICsxMDQsNyBAQCB4Y19jcHVwb29saW5mb190ICp4Y19jcHVwb29s
X2dldGluZm8oeGNfaW50ZXJmYWNlICp4Y2gsCiAgICAgaW5mby0+Y3B1bWFw
ID0geGNfY3B1bWFwX2FsbG9jKHhjaCk7CiAgICAgaWYgKCFpbmZvLT5jcHVt
YXApIHsKICAgICAgICAgZnJlZShpbmZvKTsKKyAgICAgICAgaW5mbyA9IE5V
TEw7CiAgICAgICAgIGdvdG8gb3V0OwogICAgIH0KICAgICBpbmZvLT5jcHVw
b29sX2lkID0gc3lzY3RsLnUuY3B1cG9vbF9vcC5jcHVwb29sX2lkOwo=

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Wed Feb 12 17:11:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 17:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDdKV-0001y1-AZ; Wed, 12 Feb 2014 17:10:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDdKT-0001xo-AE
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 17:10:33 +0000
Received: from [85.158.139.211:31016] by server-14.bemta-5.messagelabs.com id
	27/4A-27598-80BABF25; Wed, 12 Feb 2014 17:10:32 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392225029!3452798!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11054 invoked from network); 12 Feb 2014 17:10:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 17:10:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="100192790"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 17:09:58 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 12:09:57 -0500
Message-ID: <52FBAAE4.7010602@citrix.com>
Date: Wed, 12 Feb 2014 17:09:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <52F90A71.40802@citrix.com>
	<20140212163625.GE91459@deinos.phlegethon.org>
In-Reply-To: <20140212163625.GE91459@deinos.phlegethon.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/02/14 16:36, Tim Deegan wrote:
> Hi,
> 
> This draft has my wholehearted support.  Even without addressing any
> of the points under discussion something along these lines would be a
> vast improvement on the current format.
> 
> I have two general questions:
> 
>  - The existing save-format definition is spread across a number of
>    places: libxc for hypervisor state, qemu for DM state, and the main
>    toolstack (libxl/xend/xapi/&c) for other config runes and a general
>    wrapper.  This is clearly a reworking of the libxc parts -- do
>    you think there's anything currently defined elsewhere that belongs
>    in this spec?

I was considering this format as a container for those blobs, but I
think there should be enough flexibility that additional things could be
moved into the spec in the future.

>  - Have you given any thought to making this into a wire protocol
>    rather than just a file format?  Would there be any benefit to
>    having records individually acked by the receiver in a live
>    migration, or having the receiver send instructions about
>    compatibility?  Or is that again left to the toolstack to manage?

I don't see how having the restorer send anything back to the saver
would work with image files[1], so any two-way communication must be
optional; this can be left for the future.

Ian J had some suggestions for how to handle compatibility better
without having the restorer report its capabilities.

>> checksum     CRC-32 checksum of the record body (including any trailing
>>              padding), or 0x00000000 if the checksum field is invalid.
> 
> Apart from any discussion of the merits of per-record vs whole-file
> checksums, it would be useful for this checksum to cover the header
> too.  E.g., by declaring it to be the checksum of header+data where
> the checksum field is 0, or by declaring that it shall be that pattern
> which causes the finished header+data to checksum to 0.

A single checksum for a multi-GB file doesn't seem robust enough,
which is why I made it per-record.  Per-record checksums also mean you
can discard records the restorer isn't interested in without having to
read them to calculate the checksum.

I'm not entirely convinced by the usefulness of checksums, though.  If
no one else thinks they would be useful I'll probably drop them.
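For concreteness, Tim's header-inclusive variant (checksum header+body with the checksum field treated as 0) can be sketched as below. This is a hypothetical illustration: the record-header layout and field names are made up for the example, and the CRC-32 is the plain reflected IEEE polynomial, chained zlib-style:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (reflected IEEE 802.3 polynomial 0xEDB88320),
 * chainable like zlib's crc32(): pass the previous result as 'crc'. */
static uint32_t crc32_buf(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;
    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
    }
    return ~crc;
}

/* Hypothetical record header; field names are illustrative only. */
struct rec_hdr {
    uint32_t type;
    uint32_t length;
    uint32_t checksum;   /* treated as 0 while checksumming */
};

/* Checksum header+body with the checksum field taken as zero, so the
 * checksum also covers the header, as suggested above. */
static uint32_t rec_checksum(struct rec_hdr *hdr,
                             const void *body, size_t len)
{
    uint32_t saved = hdr->checksum, crc;

    hdr->checksum = 0;
    crc = crc32_buf(0, hdr, sizeof(*hdr));
    crc = crc32_buf(crc, body, len);
    hdr->checksum = saved;
    return crc;
}
```

Either convention (field-as-zero, or storing the value that makes the whole record checksum to zero) detects corruption in the header as well as in the body.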

>> P2M
>> ---
[...]
> The current save record doesn't contain the p2m itself, but rather the
> p2m_frame_list, an array of the MFNs (in the save record, PFNs) that
> hold the actual p2m.  Frames in that list are used to populate the p2m
> as memory is allocated on the receiving side.

Er. Yes, I got confused by the code here and misunderstood it.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 17:14:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 17:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDdOc-0002M3-1x; Wed, 12 Feb 2014 17:14:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WDdOa-0002Ly-R7
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 17:14:49 +0000
Received: from [85.158.137.68:33674] by server-8.bemta-3.messagelabs.com id
	62/BC-16039-70CABF25; Wed, 12 Feb 2014 17:14:47 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392225239!426622!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30158 invoked from network); 12 Feb 2014 17:14:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 17:14:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101982883"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 17:13:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 12:13:58 -0500
Message-ID: <52FBABD3.6020007@citrix.com>
Date: Wed, 12 Feb 2014 17:13:55 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@schaman.hu>, Jeff Kirsher
	<jeffrey.t.kirsher@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>, 
	Bruce Allan <bruce.w.allan@intel.com>, Carolyn Wyborny
	<carolyn.wyborny@intel.com>, Don Skidmore <donald.c.skidmore@intel.com>,
	Greg Rose <gregory.v.rose@intel.com>, Peter P Waskiewicz Jr
	<peter.p.waskiewicz.jr@intel.com>,
	Alex Duyck <alexander.h.duyck@intel.com>, 
	John Ronciak <john.ronciak@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>, 
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>, "David S. Miller"
	<davem@davemloft.net>, <e1000-devel@lists.sourceforge.net>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, Michael Chan <mchan@broadcom.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <52EAA31B.1090606@schaman.hu>
In-Reply-To: <52EAA31B.1090606@schaman.hu>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I still haven't managed to crack this problem. I've made sure the 
below-mentioned skbs look the same as the other ones: a linear buffer 
with the header, and the rest aggregated into frags. Using the skb 
destructor I've also checked that these packets are all freed before the 
TX hang happens. So the only difference from current upstream is that 
the pages are grant-mapped into Dom0 instead of grant-copied to a local 
page.

I've also found some of my older notes about this issue, where I managed 
to reproduce it on igb, and in that particular case the TX hang could 
be resolved with ifconfig down/up. Do the "Detected Tx Unit Hang" 
messages give any hint to the igb developers?

Nov 26 04:18:34 localhost kernel: [ 7814.197868] ------------[ cut here 
]------------
Nov 26 04:18:34 localhost kernel: [ 7814.197889] WARNING: at 
net/sched/sch_generic.c:255 dev_watchdog+0x165/0x220()
Nov 26 04:18:34 localhost kernel: [ 7814.197892] NETDEV WATCHDOG: eth0 
(igb): transmit queue 7 timed out
Nov 26 04:18:34 localhost kernel: [ 7814.197894] Modules linked in: tun 
nfsv3 nfs_acl nfs fscache dm_multipath scsi_dh lockd sunrpc openvswitch 
ipt_REJECT nf_conntrack_ipv4 nf_defrag_ip
v4 xt_tcpudp xt_conntrack nf_conntrack iptable_filter ip_tables x_tables 
nls_utf8 isofs dm_mirror video backlight sbs sbshc hed acpi_ipmi 
ipmi_msghandler nvram sg psmouse serio_raw igb
i2c_algo_bit ptp pps_core hpilo tpm_tis tpm tpm_bios lpc_ich mfd_core 
ehci_pci crc32_pclmul aesni_intel ablk_helper cryptd lrw aes_i586 xts 
gf128mul dm_region_hash dm_log dm_mod shpchp
hpsa sd_mod scsi_mod uhci_hcd ohci_hcd ehci_hcd fbcon font tileblit 
bitblit softcursor [last unloaded: microcode]
Nov 26 04:18:34 localhost kernel: [ 7814.197957] CPU: 5 PID: 0 Comm: 
swapper/5 Not tainted 3.10.11-0.xs1.8.50.127.377543 #1
Nov 26 04:18:34 localhost kernel: [ 7814.197959] Hardware name: HP 
ProLiant BL420c Gen8, BIOS I30 12/14/2012
Nov 26 04:18:34 localhost kernel: [ 7814.197962]  e5cd9e10 c13e4c55 
e5cd9ddc c1278546 e5cd9e00 c1047fd3 c1643220 e5cd9e2c
Nov 26 04:18:34 localhost kernel: [ 7814.197969]  000000ff c13e4c55 
e1fa8700 00000007 000004e2 e5cd9e18 c1048093 00000009
Nov 26 04:18:34 localhost kernel: [ 7814.197975]  e5cd9e10 c1643220 
e5cd9e2c e5cd9e50 c13e4c55 c163fe6b 000000ff c1643220
Nov 26 04:18:34 localhost kernel: [ 7814.197982] Call Trace:
Nov 26 04:18:34 localhost kernel: [ 7814.197988]  [<c13e4c55>] ? 
dev_watchdog+0x165/0x220
Nov 26 04:18:34 localhost kernel: [ 7814.197994]  [<c1278546>] 
dump_stack+0x16/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198000]  [<c1047fd3>] 
warn_slowpath_common+0x63/0x80
Nov 26 04:18:34 localhost kernel: [ 7814.198003]  [<c13e4c55>] ? 
dev_watchdog+0x165/0x220
Nov 26 04:18:34 localhost kernel: [ 7814.198007]  [<c1048093>] 
warn_slowpath_fmt+0x33/0x40
Nov 26 04:18:34 localhost kernel: [ 7814.198011]  [<c13e4c55>] 
dev_watchdog+0x165/0x220
Nov 26 04:18:34 localhost kernel: [ 7814.198017]  [<c13e4af0>] ? 
dev_activate+0x110/0x110
Nov 26 04:18:34 localhost kernel: [ 7814.198020]  [<c1055c18>] 
call_timer_fn+0x58/0xe0
Nov 26 04:18:34 localhost kernel: [ 7814.198024]  [<c1056ce8>] 
run_timer_softirq+0x1a8/0x1f0
Nov 26 04:18:34 localhost kernel: [ 7814.198028]  [<c12fb61d>] ? 
info_for_irq+0xd/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198031]  [<c12fbb6c>] ? 
evtchn_from_irq+0x3c/0x50
Nov 26 04:18:34 localhost kernel: [ 7814.198034]  [<c13e4af0>] ? 
dev_activate+0x110/0x110
Nov 26 04:18:34 localhost kernel: [ 7814.198038]  [<c104fcb9>] 
__do_softirq+0xd9/0x1e0
Nov 26 04:18:34 localhost kernel: [ 7814.198041]  [<c12fc045>] ? 
__xen_evtchn_do_upcall+0x245/0x280
Nov 26 04:18:34 localhost kernel: [ 7814.198045]  [<c104fe41>] 
irq_exit+0x41/0x80
Nov 26 04:18:34 localhost kernel: [ 7814.198048]  [<c12fc0e5>] 
xen_evtchn_do_upcall+0x25/0x30
Nov 26 04:18:34 localhost kernel: [ 7814.198053]  [<c147b287>] 
xen_do_upcall+0x7/0xc
Nov 26 04:18:34 localhost kernel: [ 7814.198058]  [<c10c00d8>] ? 
rcu_process_gp_end+0x58/0x70
Nov 26 04:18:34 localhost kernel: [ 7814.198061]  [<c10013a7>] ? 
xen_hypercall_sched_op+0x7/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198066]  [<c1007ef2>] ? 
xen_safe_halt+0x12/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198070]  [<c1015be6>] 
default_idle+0x56/0xb0
Nov 26 04:18:34 localhost kernel: [ 7814.198074]  [<c10158e7>] 
arch_cpu_idle+0x17/0x30
Nov 26 04:18:34 localhost kernel: [ 7814.198078]  [<c108e2ae>] 
cpu_startup_entry+0x15e/0x1d0
Nov 26 04:18:34 localhost kernel: [ 7814.198085]  [<c1464282>] 
cpu_bringup_and_idle+0x12/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198088] ---[ end trace 
d8c0d3f5c187aa6b ]---

And the recovery:

Nov 26 21:47:54 localhost kernel: [70773.950715] ------------[ cut here 
]------------
Nov 26 21:47:54 localhost kernel: [70773.950747] WARNING: at 
net/core/dev.c:4201 net_rx_action+0xfd/0x1c0()
Nov 26 21:47:54 localhost kernel: [70773.950751] Modules linked in: tun 
nfsv3 nfs_acl nfs fscache dm_multipath scsi_dh lockd sunrpc openvswitch 
ipt_REJECT nf_conntrack_ipv4 nf_defrag_ip
v4 xt_tcpudp xt_conntrack nf_conntrack iptable_filter ip_tables x_tables 
nls_utf8 isofs dm_mirror video backlight sbs sbshc hed acpi_ipmi 
ipmi_msghandler nvram sg psmouse serio_raw igb
i2c_algo_bit ptp pps_core hpilo tpm_tis tpm tpm_bios lpc_ich mfd_core 
ehci_pci crc32_pclmul aesni_intel ablk_helper cryptd lrw aes_i586 xts 
gf128mul dm_region_hash dm_log dm_mod shpchp
hpsa sd_mod scsi_mod uhci_hcd ohci_hcd ehci_hcd fbcon font tileblit 
bitblit softcursor [last unloaded: microcode]
Nov 26 21:47:54 localhost kernel: [70773.950852] CPU: 0 PID: 0 Comm: 
swapper/0 Tainted: G        W    3.10.11-0.xs1.8.50.127.377543 #1
Nov 26 21:47:54 localhost kernel: [70773.950856] Hardware name: HP 
ProLiant BL420c Gen8, BIOS I30 12/14/2012
Nov 26 21:47:54 localhost kernel: [70773.950860]  00000000 c13ccdfd 
c167fc78 c1278546 c167fc9c c1047fd3 c15ebc78 c163f7da
Nov 26 21:47:54 localhost kernel: [70773.950873]  00001069 c13ccdfd 
dff404c8 00000040 00000000 c167fcac c1048012 00000009
Nov 26 21:47:54 localhost kernel: [70773.950884]  00000000 c167fcd8 
c13ccdfd ed383888 010cbb97 000000e2 ed383880 00000043
Nov 26 21:47:54 localhost kernel: [70773.950896] Call Trace:
Nov 26 21:47:54 localhost kernel: [70773.950905]  [<c13ccdfd>] ? 
net_rx_action+0xfd/0x1c0
Nov 26 21:47:54 localhost kernel: [70773.950915]  [<c1278546>] 
dump_stack+0x16/0x20
Nov 26 21:47:54 localhost kernel: [70773.950924]  [<c1047fd3>] 
warn_slowpath_common+0x63/0x80
Nov 26 21:47:54 localhost kernel: [70773.950930]  [<c13ccdfd>] ? 
net_rx_action+0xfd/0x1c0
Nov 26 21:47:54 localhost kernel: [70773.950937]  [<c1048012>] 
warn_slowpath_null+0x22/0x30
Nov 26 21:47:54 localhost kernel: [70773.950954]  [<c13ccdfd>] 
net_rx_action+0xfd/0x1c0
Nov 26 21:47:54 localhost kernel: [70773.950969]  [<c104fcb9>] 
__do_softirq+0xd9/0x1e0
Nov 26 21:47:54 localhost kernel: [70773.950985]  [<c12fc045>] ? 
__xen_evtchn_do_upcall+0x245/0x280
Nov 26 21:47:54 localhost kernel: [70773.951002]  [<c104fe41>] 
irq_exit+0x41/0x80
Nov 26 21:47:54 localhost kernel: [70773.951011]  [<c12fc0e5>] 
xen_evtchn_do_upcall+0x25/0x30
Nov 26 21:47:54 localhost kernel: [70773.951019]  [<c147b287>] 
xen_do_upcall+0x7/0xc
Nov 26 21:47:54 localhost kernel: [70773.951026]  [<c10013a7>] ? 
xen_hypercall_sched_op+0x7/0x20
Nov 26 21:47:54 localhost kernel: [70773.951033]  [<c1007ef2>] ? 
xen_safe_halt+0x12/0x20
Nov 26 21:47:54 localhost kernel: [70773.951041]  [<c1015be6>] 
default_idle+0x56/0xb0
Nov 26 21:47:54 localhost kernel: [70773.951046]  [<c10158e7>] 
arch_cpu_idle+0x17/0x30
Nov 26 21:47:54 localhost kernel: [70773.951054]  [<c108e2ae>] 
cpu_startup_entry+0x15e/0x1d0
Nov 26 21:47:54 localhost kernel: [70773.951064]  [<c1460362>] 
rest_init+0x62/0x70
Nov 26 21:47:54 localhost kernel: [70773.951071]  [<c16efcea>] 
start_kernel+0x39a/0x3b0
Nov 26 21:47:54 localhost kernel: [70773.951076]  [<c16ef520>] ? 
repair_env_string+0x60/0x60
Nov 26 21:47:54 localhost kernel: [70773.951082]  [<c16ef2eb>] 
i386_start_kernel+0x8b/0x90
Nov 26 21:47:54 localhost kernel: [70773.951088]  [<c16f2c2d>] 
xen_start_kernel+0x7cd/0x7f0
Nov 26 21:47:54 localhost kernel: [70773.951097] ---[ end trace 
d8c0d3f5c187aa6c ]---
Nov 26 21:47:54 localhost kernel: [70773.952034] ------------[ cut here 
]------------
Nov 26 21:47:54 localhost kernel: [70773.952067] WARNING: at 
drivers/net/ethernet/intel/igb/igb_main.c:2860 __igb_close+0x3d/0xb0 [igb]()
Nov 26 21:47:54 localhost kernel: [70773.952071] Modules linked in: tun 
nfsv3 nfs_acl nfs fscache dm_multipath scsi_dh lockd sunrpc openvswitch 
ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
xt_tcpudp xt_conntrack nf_conntrack iptable_filter ip_tables x_tables
nls_utf8 isofs dm_mirror video backlight sbs sbshc hed acpi_ipmi 
ipmi_msghandler nvram sg psmouse serio_raw igb i2c_algo_bit ptp pps_core 
hpilo tpm_tis tpm tpm_bios lpc_ich mfd_core ehci_pci crc32_pclmul 
aesni_intel ablk_helper cryptd lrw aes_i586 xts gf128mul dm_region_hash 
dm_log dm_mod shpchp hpsa sd_mod scsi_mod uhci_hcd ohci_hcd ehci_hcd 
fbcon font tileblit bitblit softcursor [last unloaded: microcode]
Nov 26 21:47:54 localhost kernel: [70773.952150] CPU: 4 PID: 3467 Comm: 
ifconfig Tainted: G        W    3.10.11-0.xs1.8.50.127.377543 #1
Nov 26 21:47:54 localhost kernel: [70773.952153] Hardware name: HP 
ProLiant BL420c Gen8, BIOS I30 12/14/2012
Nov 26 21:47:54 localhost kernel: [70773.952157]  00000000 eddcec4d 
ca701d8c c1278546 ca701db0 c1047fd3 c15ebc78 edde1b0c
Nov 26 21:47:54 localhost kernel: [70773.952169]  00000b2c eddcec4d 
00000000 e35504c0 e5f17000 ca701dc0 c1048012 00000009
Nov 26 21:47:54 localhost kernel: [70773.952180]  00000000 ca701dd4 
eddcec4d e3550000 ca701e00 ca701e00 ca701ddc eddceccf
Nov 26 21:47:54 localhost kernel: [70773.952192] Call Trace:
Nov 26 21:47:54 localhost kernel: [70773.952207]  [<eddcec4d>] ? 
__igb_close+0x3d/0xb0 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952216]  [<c1278546>] 
dump_stack+0x16/0x20
Nov 26 21:47:54 localhost kernel: [70773.952223]  [<c1047fd3>] 
warn_slowpath_common+0x63/0x80
Nov 26 21:47:54 localhost kernel: [70773.952237]  [<eddcec4d>] ? 
__igb_close+0x3d/0xb0 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952243]  [<c1048012>] 
warn_slowpath_null+0x22/0x30
Nov 26 21:47:54 localhost kernel: [70773.952255]  [<eddcec4d>] 
__igb_close+0x3d/0xb0 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952267]  [<eddceccf>] 
igb_close+0xf/0x20 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952275]  [<c13c8691>] 
__dev_close_many+0x91/0xb0
Nov 26 21:47:54 localhost kernel: [70773.952284]  [<c13df583>] ? 
netpoll_rx_disable+0x43/0x50
Nov 26 21:47:54 localhost kernel: [70773.952289]  [<c13c9163>] 
__dev_close+0x43/0x80
Nov 26 21:47:54 localhost kernel: [70773.952300]  [<c13c7c28>] 
__dev_change_flags+0xa8/0x120
Nov 26 21:47:54 localhost kernel: [70773.952308]  [<c13c85c3>] 
dev_change_flags+0x23/0x60
Nov 26 21:47:54 localhost kernel: [70773.952314]  [<c1424d9c>] 
devinet_ioctl+0x29c/0x600
Nov 26 21:47:54 localhost kernel: [70773.952323]  [<c13dbf05>] ? 
dev_ioctl+0x475/0x4d0
Nov 26 21:47:54 localhost kernel: [70773.952330]  [<c1425d6b>] 
inet_ioctl+0x5b/0x80
Nov 26 21:47:54 localhost kernel: [70773.952340]  [<c13b776e>] 
sock_ioctl+0x1fe/0x230
Nov 26 21:47:54 localhost kernel: [70773.952350]  [<c13b7570>] ? 
sock_recvmsg_nosec+0xb0/0xb0
Nov 26 21:47:54 localhost kernel: [70773.952360]  [<c1143cf6>] 
vfs_ioctl+0x26/0x40
Nov 26 21:47:54 localhost kernel: [70773.952367]  [<c11448ba>] 
do_vfs_ioctl+0x4ea/0x550
Nov 26 21:47:54 localhost kernel: [70773.952376]  [<c113de22>] ? 
final_putname+0x32/0x40
Nov 26 21:47:54 localhost kernel: [70773.952382]  [<c113de22>] ? 
final_putname+0x32/0x40
Nov 26 21:47:54 localhost kernel: [70773.952391]  [<c113de67>] ? 
putname+0x37/0x40
Nov 26 21:47:54 localhost kernel: [70773.952401]  [<c1134b64>] ? 
do_sys_open+0x194/0x1a0
Nov 26 21:47:54 localhost kernel: [70773.952408]  [<c1144983>] 
SyS_ioctl+0x63/0x90
Nov 26 21:47:54 localhost kernel: [70773.952416]  [<c147ad4d>] 
sysenter_do_call+0x12/0x28
Nov 26 21:47:54 localhost kernel: [70773.952423] ---[ end trace 
d8c0d3f5c187aa6d ]---
Nov 26 21:47:54 localhost kernel: [70773.971294] igb 0000:04:00.1 eth1: 
Reset adapter
Nov 26 21:47:54 localhost kernel: [70774.068154] igb 0000:04:00.0 eth0: 
Reset adapter
Nov 26 21:47:55 localhost kernel: [70774.357949] igb: eth1 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:00 localhost kernel: [70779.231904] igb: eth0 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:00 localhost kernel: [70779.346793] igb: eth0 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:02 localhost kernel: [70781.214844] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:02 localhost kernel: [70781.214844]   Tx Queue             <7>
Nov 26 21:48:02 localhost kernel: [70781.214844]   TDH                  <0>
Nov 26 21:48:02 localhost kernel: [70781.214844]   TDT                  <0>
Nov 26 21:48:02 localhost kernel: [70781.214844]   next_to_use          <1>
Nov 26 21:48:02 localhost kernel: [70781.214844]   next_to_clean        <0>
Nov 26 21:48:02 localhost kernel: [70781.214844] buffer_info[next_to_clean]
Nov 26 21:48:02 localhost kernel: [70781.214844]   time_stamp 
<10cc0cd>
Nov 26 21:48:02 localhost kernel: [70781.214844]   next_to_watch 
<e2d5e000>
Nov 26 21:48:02 localhost kernel: [70781.214844]   jiffies 
<10cc2ae>
Nov 26 21:48:02 localhost kernel: [70781.214844]   desc.status 
<12c000>
Nov 26 21:48:04 localhost kernel: [70783.214857] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:04 localhost kernel: [70783.214857]   Tx Queue             <7>
Nov 26 21:48:04 localhost kernel: [70783.214857]   TDH                  <0>
Nov 26 21:48:04 localhost kernel: [70783.214857]   TDT                  <0>
Nov 26 21:48:04 localhost kernel: [70783.214857]   next_to_use          <1>
Nov 26 21:48:04 localhost kernel: [70783.214857]   next_to_clean        <0>
Nov 26 21:48:04 localhost kernel: [70783.214857] buffer_info[next_to_clean]
Nov 26 21:48:04 localhost kernel: [70783.214857]   time_stamp 
<10cc0cd>
Nov 26 21:48:04 localhost kernel: [70783.214857]   next_to_watch 
<e2d5e000>
Nov 26 21:48:04 localhost kernel: [70783.214857]   jiffies 
<10cc4a2>
Nov 26 21:48:04 localhost kernel: [70783.214857]   desc.status 
<12c000>
Nov 26 21:48:06 localhost kernel: [70785.214700] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:06 localhost kernel: [70785.214700]   Tx Queue             <7>
Nov 26 21:48:06 localhost kernel: [70785.214700]   TDH                  <0>
Nov 26 21:48:06 localhost kernel: [70785.214700]   TDT                  <0>
Nov 26 21:48:06 localhost kernel: [70785.214700]   next_to_use          <1>
Nov 26 21:48:06 localhost kernel: [70785.214700]   next_to_clean        <0>
Nov 26 21:48:06 localhost kernel: [70785.214700] buffer_info[next_to_clean]
Nov 26 21:48:06 localhost kernel: [70785.214700]   time_stamp 
<10cc0cd>
Nov 26 21:48:06 localhost kernel: [70785.214700]   next_to_watch 
<e2d5e000>
Nov 26 21:48:06 localhost kernel: [70785.214700]   jiffies 
<10cc696>
Nov 26 21:48:06 localhost kernel: [70785.214700]   desc.status 
<12c000>
Nov 26 21:48:08 localhost kernel: [70787.214734] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:08 localhost kernel: [70787.214734]   Tx Queue             <7>
Nov 26 21:48:08 localhost kernel: [70787.214734]   TDH                  <0>
Nov 26 21:48:08 localhost kernel: [70787.214734]   TDT                  <0>
Nov 26 21:48:08 localhost kernel: [70787.214734]   next_to_use          <1>
Nov 26 21:48:08 localhost kernel: [70787.214734]   next_to_clean        <0>
Nov 26 21:48:08 localhost kernel: [70787.214734] buffer_info[next_to_clean]
Nov 26 21:48:08 localhost kernel: [70787.214734]   time_stamp 
<10cc0cd>
Nov 26 21:48:08 localhost kernel: [70787.214734]   next_to_watch 
<e2d5e000>
Nov 26 21:48:08 localhost kernel: [70787.214734]   jiffies 
<10cc88a>
Nov 26 21:48:08 localhost kernel: [70787.214734]   desc.status 
<12c000>
Nov 26 21:48:10 localhost kernel: [70789.214752] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:10 localhost kernel: [70789.214752]   Tx Queue             <7>
Nov 26 21:48:10 localhost kernel: [70789.214752]   TDH                  <0>
Nov 26 21:48:10 localhost kernel: [70789.214752]   TDT                  <0>
Nov 26 21:48:10 localhost kernel: [70789.214752]   next_to_use          <1>
Nov 26 21:48:10 localhost kernel: [70789.214752]   next_to_clean        <0>
Nov 26 21:48:10 localhost kernel: [70789.214752] buffer_info[next_to_clean]
Nov 26 21:48:10 localhost kernel: [70789.214752]   time_stamp 
<10cc0cd>
Nov 26 21:48:10 localhost kernel: [70789.214752]   next_to_watch 
<e2d5e000>
Nov 26 21:48:10 localhost kernel: [70789.214752]   jiffies 
<10cca7e>
Nov 26 21:48:10 localhost kernel: [70789.214752]   desc.status 
<12c000>
Nov 26 21:48:11 localhost kernel: [70790.214611] igb 0000:04:00.0 eth0: 
Reset adapter
Nov 26 21:48:11 localhost kernel: [70790.246610] igb 0000:04:00.1 eth1: 
Reset adapter
Nov 26 21:48:11 localhost kernel: [70790.250616] igb: eth1 NIC Link is Down
Nov 26 21:48:11 localhost kernel: [70790.340089] igb: eth0 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:11 localhost kernel: [70790.367984] igb: eth1 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:11 localhost kernel: [70790.598550] igb: eth1 NIC Link is Down
Nov 26 21:48:11 localhost kernel: [70790.634559] igb: eth1 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:11 localhost kernel: [70790.638593] igb: eth0 NIC Link is Down
Nov 26 21:48:11 localhost kernel: [70790.674599] igb: eth0 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX


On 30/01/14 19:08, Zoltan Kiss wrote:
> I've experienced the queue timeout problems mentioned in the subject
> with igb and bnx2 cards; I haven't seen them on other cards so far. I'm
> using XenServer with a 3.10 Dom0 kernel (although igb was already
> updated to the latest version), and Windows guests are sending data
> through these cards. I noticed these problems in XenRT test runs, and I
> know they usually indicate a lost interrupt or some other hardware
> error, but in my case they started to appear more often, and they are
> likely connected to my netback grant mapping patches. These patches
> cause skb's with huge (~64KB) linear buffers to appear more often.
> The reason for that is an old problem in the ring protocol: originally
> the maximum number of slots was tied to MAX_SKB_FRAGS, as every slot
> ended up as a frag of the skb. When this value was changed, netback had
> to cope with the situation by coalescing the packets into fewer frags.
> My patch series takes a different approach: the leftover slots (pages)
> are assigned to the frags of a new skb, and that skb is stashed on the
> frag_list of the first one. Then, before sending it off to the stack,
> netback calls skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC |
> __GFP_NOWARN), which basically creates a new skb and copies all the
> data into it. As far as I understand, it puts everything into the
> linear buffer, which can amount to 64KB at most. The original skb is
> then freed, and the new one is sent to the stack.
> I suspect this is the problem, as it only happens when guests send too
> many slots. Has anyone familiar with these drivers seen such an issue
> before (these kinds of skb's getting stuck in the queue)?
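The slot handling described in the quote above can be modelled with a small sketch. This is an illustrative model only, not netback source: `MAX_SKB_FRAGS` and `PAGE_SIZE` are the usual values for 4 KiB pages, and the function names are hypothetical.

```python
# Hypothetical model of the approach described above: the first skb takes
# up to MAX_SKB_FRAGS slots as frags, leftover slots go to a second skb
# chained on the first one's frag_list, and the subsequent copy flattens
# everything into one linear buffer of at most 64KB.
MAX_SKB_FRAGS = 17  # typical value with 4 KiB pages
PAGE_SIZE = 4096

def split_slots(nr_slots):
    """Return (frags in the first skb, frags in the frag_list skb)."""
    first = min(nr_slots, MAX_SKB_FRAGS)
    return first, nr_slots - first

def flattened_linear_size(slot_lengths):
    """After the copy, all data ends up in one linear buffer (<= 64KB)."""
    total = sum(slot_lengths)
    assert total <= 64 * 1024, "a single packet cannot exceed 64KB"
    return total
```

A guest packet spanning 20 slots would thus yield a 17-frag skb plus a 3-frag skb on its frag_list, later collapsed into a single linear buffer of up to 64KB.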


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 17:14:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 17:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDdOc-0002M3-1x; Wed, 12 Feb 2014 17:14:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WDdOa-0002Ly-R7
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 17:14:49 +0000
Received: from [85.158.137.68:33674] by server-8.bemta-3.messagelabs.com id
	62/BC-16039-70CABF25; Wed, 12 Feb 2014 17:14:47 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392225239!426622!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30158 invoked from network); 12 Feb 2014 17:14:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 17:14:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101982883"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 17:13:59 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 12:13:58 -0500
Message-ID: <52FBABD3.6020007@citrix.com>
Date: Wed, 12 Feb 2014 17:13:55 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@schaman.hu>, Jeff Kirsher
	<jeffrey.t.kirsher@intel.com>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>, 
	Bruce Allan <bruce.w.allan@intel.com>, Carolyn Wyborny
	<carolyn.wyborny@intel.com>, Don Skidmore <donald.c.skidmore@intel.com>,
	Greg Rose <gregory.v.rose@intel.com>, Peter P Waskiewicz Jr
	<peter.p.waskiewicz.jr@intel.com>,
	Alex Duyck <alexander.h.duyck@intel.com>, 
	John Ronciak <john.ronciak@intel.com>,
	Tushar Dave <tushar.n.dave@intel.com>, 
	Akeem G Abodunrin <akeem.g.abodunrin@intel.com>, "David S. Miller"
	<davem@davemloft.net>, <e1000-devel@lists.sourceforge.net>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, Michael Chan <mchan@broadcom.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <52EAA31B.1090606@schaman.hu>
In-Reply-To: <52EAA31B.1090606@schaman.hu>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Subject: Re: [Xen-devel] igb and bnx2: "NETDEV WATCHDOG: transmit queue
 timed out" when skb has huge linear buffer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I still haven't managed to crack this problem. I've made sure the skb's 
mentioned below look the same as the other ones: a linear buffer with 
the header, with the rest aggregated into frags. Using the skb 
destructor I've also checked that these packets are all freed before the 
TX hang happens. So the only difference from current upstream is that 
the pages are grant-mapped into Dom0 instead of grant-copied to a local 
page.
I've also found some of my older notes about this issue, where I managed 
to reproduce it on igb; in that particular case the TX hang could be 
fixed with ifconfig down/up. Do the "Detected Tx Unit Hang" messages 
give any hint to the igb developers?
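For readers unfamiliar with the "Detected Tx Unit Hang" dumps below, the heuristic behind them can be sketched roughly as follows. This is an assumption-laden model, not the igb driver source; the timeout value and function name are illustrative.

```python
# Rough model of the hang heuristic behind "Detected Tx Unit Hang": a
# descriptor at next_to_clean that has been pending longer than a
# watchdog timeout, with no completion writeback from the hardware
# (DD bit not set), is reported as a hang.
WATCHDOG_TIMEOUT = 250  # illustrative threshold, in jiffies

def tx_hang_suspected(jiffies, time_stamp, desc_done):
    """desc_done: hardware has written back the descriptor at
    next_to_clean."""
    pending_too_long = jiffies - time_stamp > WATCHDOG_TIMEOUT
    return pending_too_long and not desc_done

# Values from the first dump below: jiffies <10cc2ae>, time_stamp <10cc0cd>
age = 0x10cc2ae - 0x10cc0cd  # descriptor has been pending 481 jiffies
```

Note how in the dumps below `time_stamp` stays fixed while `jiffies` keeps advancing, and TDH never moves past next_to_clean: the hardware has stopped consuming descriptors entirely.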

Nov 26 04:18:34 localhost kernel: [ 7814.197868] ------------[ cut here 
]------------
Nov 26 04:18:34 localhost kernel: [ 7814.197889] WARNING: at 
net/sched/sch_generic.c:255 dev_watchdog+0x165/0x220()
Nov 26 04:18:34 localhost kernel: [ 7814.197892] NETDEV WATCHDOG: eth0 
(igb): transmit queue 7 timed out
Nov 26 04:18:34 localhost kernel: [ 7814.197894] Modules linked in: tun 
nfsv3 nfs_acl nfs fscache dm_multipath scsi_dh lockd sunrpc openvswitch 
ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
xt_tcpudp xt_conntrack nf_conntrack iptable_filter ip_tables x_tables
nls_utf8 isofs dm_mirror video backlight sbs sbshc hed acpi_ipmi 
ipmi_msghandler nvram sg psmouse serio_raw igb
i2c_algo_bit ptp pps_core hpilo tpm_tis tpm tpm_bios lpc_ich mfd_core 
ehci_pci crc32_pclmul aesni_intel ablk_helper cryptd lrw aes_i586 xts 
gf128mul dm_region_hash dm_log dm_mod shpchp
hpsa sd_mod scsi_mod uhci_hcd ohci_hcd ehci_hcd fbcon font tileblit 
bitblit softcursor [last unloaded: microcode]
Nov 26 04:18:34 localhost kernel: [ 7814.197957] CPU: 5 PID: 0 Comm: 
swapper/5 Not tainted 3.10.11-0.xs1.8.50.127.377543 #1
Nov 26 04:18:34 localhost kernel: [ 7814.197959] Hardware name: HP 
ProLiant BL420c Gen8, BIOS I30 12/14/2012
Nov 26 04:18:34 localhost kernel: [ 7814.197962]  e5cd9e10 c13e4c55 
e5cd9ddc c1278546 e5cd9e00 c1047fd3 c1643220 e5cd9e2c
Nov 26 04:18:34 localhost kernel: [ 7814.197969]  000000ff c13e4c55 
e1fa8700 00000007 000004e2 e5cd9e18 c1048093 00000009
Nov 26 04:18:34 localhost kernel: [ 7814.197975]  e5cd9e10 c1643220 
e5cd9e2c e5cd9e50 c13e4c55 c163fe6b 000000ff c1643220
Nov 26 04:18:34 localhost kernel: [ 7814.197982] Call Trace:
Nov 26 04:18:34 localhost kernel: [ 7814.197988]  [<c13e4c55>] ? 
dev_watchdog+0x165/0x220
Nov 26 04:18:34 localhost kernel: [ 7814.197994]  [<c1278546>] 
dump_stack+0x16/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198000]  [<c1047fd3>] 
warn_slowpath_common+0x63/0x80
Nov 26 04:18:34 localhost kernel: [ 7814.198003]  [<c13e4c55>] ? 
dev_watchdog+0x165/0x220
Nov 26 04:18:34 localhost kernel: [ 7814.198007]  [<c1048093>] 
warn_slowpath_fmt+0x33/0x40
Nov 26 04:18:34 localhost kernel: [ 7814.198011]  [<c13e4c55>] 
dev_watchdog+0x165/0x220
Nov 26 04:18:34 localhost kernel: [ 7814.198017]  [<c13e4af0>] ? 
dev_activate+0x110/0x110
Nov 26 04:18:34 localhost kernel: [ 7814.198020]  [<c1055c18>] 
call_timer_fn+0x58/0xe0
Nov 26 04:18:34 localhost kernel: [ 7814.198024]  [<c1056ce8>] 
run_timer_softirq+0x1a8/0x1f0
Nov 26 04:18:34 localhost kernel: [ 7814.198028]  [<c12fb61d>] ? 
info_for_irq+0xd/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198031]  [<c12fbb6c>] ? 
evtchn_from_irq+0x3c/0x50
Nov 26 04:18:34 localhost kernel: [ 7814.198034]  [<c13e4af0>] ? 
dev_activate+0x110/0x110
Nov 26 04:18:34 localhost kernel: [ 7814.198038]  [<c104fcb9>] 
__do_softirq+0xd9/0x1e0
Nov 26 04:18:34 localhost kernel: [ 7814.198041]  [<c12fc045>] ? 
__xen_evtchn_do_upcall+0x245/0x280
Nov 26 04:18:34 localhost kernel: [ 7814.198045]  [<c104fe41>] 
irq_exit+0x41/0x80
Nov 26 04:18:34 localhost kernel: [ 7814.198048]  [<c12fc0e5>] 
xen_evtchn_do_upcall+0x25/0x30
Nov 26 04:18:34 localhost kernel: [ 7814.198053]  [<c147b287>] 
xen_do_upcall+0x7/0xc
Nov 26 04:18:34 localhost kernel: [ 7814.198058]  [<c10c00d8>] ? 
rcu_process_gp_end+0x58/0x70
Nov 26 04:18:34 localhost kernel: [ 7814.198061]  [<c10013a7>] ? 
xen_hypercall_sched_op+0x7/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198066]  [<c1007ef2>] ? 
xen_safe_halt+0x12/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198070]  [<c1015be6>] 
default_idle+0x56/0xb0
Nov 26 04:18:34 localhost kernel: [ 7814.198074]  [<c10158e7>] 
arch_cpu_idle+0x17/0x30
Nov 26 04:18:34 localhost kernel: [ 7814.198078]  [<c108e2ae>] 
cpu_startup_entry+0x15e/0x1d0
Nov 26 04:18:34 localhost kernel: [ 7814.198085]  [<c1464282>] 
cpu_bringup_and_idle+0x12/0x20
Nov 26 04:18:34 localhost kernel: [ 7814.198088] ---[ end trace 
d8c0d3f5c187aa6b ]---

And the recovery:

Nov 26 21:47:54 localhost kernel: [70773.950715] ------------[ cut here 
]------------
Nov 26 21:47:54 localhost kernel: [70773.950747] WARNING: at 
net/core/dev.c:4201 net_rx_action+0xfd/0x1c0()
Nov 26 21:47:54 localhost kernel: [70773.950751] Modules linked in: tun 
nfsv3 nfs_acl nfs fscache dm_multipath scsi_dh lockd sunrpc openvswitch 
ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
xt_tcpudp xt_conntrack nf_conntrack iptable_filter ip_tables x_tables
nls_utf8 isofs dm_mirror video backlight sbs sbshc hed acpi_ipmi 
ipmi_msghandler nvram sg psmouse serio_raw igb
i2c_algo_bit ptp pps_core hpilo tpm_tis tpm tpm_bios lpc_ich mfd_core 
ehci_pci crc32_pclmul aesni_intel ablk_helper cryptd lrw aes_i586 xts 
gf128mul dm_region_hash dm_log dm_mod shpchp
hpsa sd_mod scsi_mod uhci_hcd ohci_hcd ehci_hcd fbcon font tileblit 
bitblit softcursor [last unloaded: microcode]
Nov 26 21:47:54 localhost kernel: [70773.950852] CPU: 0 PID: 0 Comm: 
swapper/0 Tainted: G        W    3.10.11-0.xs1.8.50.127.377543 #1
Nov 26 21:47:54 localhost kernel: [70773.950856] Hardware name: HP 
ProLiant BL420c Gen8, BIOS I30 12/14/2012
Nov 26 21:47:54 localhost kernel: [70773.950860]  00000000 c13ccdfd 
c167fc78 c1278546 c167fc9c c1047fd3 c15ebc78 c163f7da
Nov 26 21:47:54 localhost kernel: [70773.950873]  00001069 c13ccdfd 
dff404c8 00000040 00000000 c167fcac c1048012 00000009
Nov 26 21:47:54 localhost kernel: [70773.950884]  00000000 c167fcd8 
c13ccdfd ed383888 010cbb97 000000e2 ed383880 00000043
Nov 26 21:47:54 localhost kernel: [70773.950896] Call Trace:
Nov 26 21:47:54 localhost kernel: [70773.950905]  [<c13ccdfd>] ? 
net_rx_action+0xfd/0x1c0
Nov 26 21:47:54 localhost kernel: [70773.950915]  [<c1278546>] 
dump_stack+0x16/0x20
Nov 26 21:47:54 localhost kernel: [70773.950924]  [<c1047fd3>] 
warn_slowpath_common+0x63/0x80
Nov 26 21:47:54 localhost kernel: [70773.950930]  [<c13ccdfd>] ? 
net_rx_action+0xfd/0x1c0
Nov 26 21:47:54 localhost kernel: [70773.950937]  [<c1048012>] 
warn_slowpath_null+0x22/0x30
Nov 26 21:47:54 localhost kernel: [70773.950954]  [<c13ccdfd>] 
net_rx_action+0xfd/0x1c0
Nov 26 21:47:54 localhost kernel: [70773.950969]  [<c104fcb9>] 
__do_softirq+0xd9/0x1e0
Nov 26 21:47:54 localhost kernel: [70773.950985]  [<c12fc045>] ? 
__xen_evtchn_do_upcall+0x245/0x280
Nov 26 21:47:54 localhost kernel: [70773.951002]  [<c104fe41>] 
irq_exit+0x41/0x80
Nov 26 21:47:54 localhost kernel: [70773.951011]  [<c12fc0e5>] 
xen_evtchn_do_upcall+0x25/0x30
Nov 26 21:47:54 localhost kernel: [70773.951019]  [<c147b287>] 
xen_do_upcall+0x7/0xc
Nov 26 21:47:54 localhost kernel: [70773.951026]  [<c10013a7>] ? 
xen_hypercall_sched_op+0x7/0x20
Nov 26 21:47:54 localhost kernel: [70773.951033]  [<c1007ef2>] ? 
xen_safe_halt+0x12/0x20
Nov 26 21:47:54 localhost kernel: [70773.951041]  [<c1015be6>] 
default_idle+0x56/0xb0
Nov 26 21:47:54 localhost kernel: [70773.951046]  [<c10158e7>] 
arch_cpu_idle+0x17/0x30
Nov 26 21:47:54 localhost kernel: [70773.951054]  [<c108e2ae>] 
cpu_startup_entry+0x15e/0x1d0
Nov 26 21:47:54 localhost kernel: [70773.951064]  [<c1460362>] 
rest_init+0x62/0x70
Nov 26 21:47:54 localhost kernel: [70773.951071]  [<c16efcea>] 
start_kernel+0x39a/0x3b0
Nov 26 21:47:54 localhost kernel: [70773.951076]  [<c16ef520>] ? 
repair_env_string+0x60/0x60
Nov 26 21:47:54 localhost kernel: [70773.951082]  [<c16ef2eb>] 
i386_start_kernel+0x8b/0x90
Nov 26 21:47:54 localhost kernel: [70773.951088]  [<c16f2c2d>] 
xen_start_kernel+0x7cd/0x7f0
Nov 26 21:47:54 localhost kernel: [70773.951097] ---[ end trace 
d8c0d3f5c187aa6c ]---
Nov 26 21:47:54 localhost kernel: [70773.952034] ------------[ cut here 
]------------
Nov 26 21:47:54 localhost kernel: [70773.952067] WARNING: at 
drivers/net/ethernet/intel/igb/igb_main.c:2860 __igb_close+0x3d/0xb0 [igb]()
Nov 26 21:47:54 localhost kernel: [70773.952071] Modules linked in: tun 
nfsv3 nfs_acl nfs fscache dm_multipath scsi_dh lockd sunrpc openvswitch 
ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4
xt_tcpudp xt_conntrack nf_conntrack iptable_filter ip_tables x_tables
nls_utf8 isofs dm_mirror video backlight sbs sbshc hed acpi_ipmi 
ipmi_msghandler nvram sg psmouse serio_raw igb i2c_algo_bit ptp pps_core 
hpilo tpm_tis tpm tpm_bios lpc_ich mfd_core ehci_pci crc32_pclmul 
aesni_intel ablk_helper cryptd lrw aes_i586 xts gf128mul dm_region_hash 
dm_log dm_mod shpchp hpsa sd_mod scsi_mod uhci_hcd ohci_hcd ehci_hcd 
fbcon font tileblit bitblit softcursor [last unloaded: microcode]
Nov 26 21:47:54 localhost kernel: [70773.952150] CPU: 4 PID: 3467 Comm: 
ifconfig Tainted: G        W    3.10.11-0.xs1.8.50.127.377543 #1
Nov 26 21:47:54 localhost kernel: [70773.952153] Hardware name: HP 
ProLiant BL420c Gen8, BIOS I30 12/14/2012
Nov 26 21:47:54 localhost kernel: [70773.952157]  00000000 eddcec4d 
ca701d8c c1278546 ca701db0 c1047fd3 c15ebc78 edde1b0c
Nov 26 21:47:54 localhost kernel: [70773.952169]  00000b2c eddcec4d 
00000000 e35504c0 e5f17000 ca701dc0 c1048012 00000009
Nov 26 21:47:54 localhost kernel: [70773.952180]  00000000 ca701dd4 
eddcec4d e3550000 ca701e00 ca701e00 ca701ddc eddceccf
Nov 26 21:47:54 localhost kernel: [70773.952192] Call Trace:
Nov 26 21:47:54 localhost kernel: [70773.952207]  [<eddcec4d>] ? 
__igb_close+0x3d/0xb0 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952216]  [<c1278546>] 
dump_stack+0x16/0x20
Nov 26 21:47:54 localhost kernel: [70773.952223]  [<c1047fd3>] 
warn_slowpath_common+0x63/0x80
Nov 26 21:47:54 localhost kernel: [70773.952237]  [<eddcec4d>] ? 
__igb_close+0x3d/0xb0 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952243]  [<c1048012>] 
warn_slowpath_null+0x22/0x30
Nov 26 21:47:54 localhost kernel: [70773.952255]  [<eddcec4d>] 
__igb_close+0x3d/0xb0 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952267]  [<eddceccf>] 
igb_close+0xf/0x20 [igb]
Nov 26 21:47:54 localhost kernel: [70773.952275]  [<c13c8691>] 
__dev_close_many+0x91/0xb0
Nov 26 21:47:54 localhost kernel: [70773.952284]  [<c13df583>] ? 
netpoll_rx_disable+0x43/0x50
Nov 26 21:47:54 localhost kernel: [70773.952289]  [<c13c9163>] 
__dev_close+0x43/0x80
Nov 26 21:47:54 localhost kernel: [70773.952300]  [<c13c7c28>] 
__dev_change_flags+0xa8/0x120
Nov 26 21:47:54 localhost kernel: [70773.952308]  [<c13c85c3>] 
dev_change_flags+0x23/0x60
Nov 26 21:47:54 localhost kernel: [70773.952314]  [<c1424d9c>] 
devinet_ioctl+0x29c/0x600
Nov 26 21:47:54 localhost kernel: [70773.952323]  [<c13dbf05>] ? 
dev_ioctl+0x475/0x4d0
Nov 26 21:47:54 localhost kernel: [70773.952330]  [<c1425d6b>] 
inet_ioctl+0x5b/0x80
Nov 26 21:47:54 localhost kernel: [70773.952340]  [<c13b776e>] 
sock_ioctl+0x1fe/0x230
Nov 26 21:47:54 localhost kernel: [70773.952350]  [<c13b7570>] ? 
sock_recvmsg_nosec+0xb0/0xb0
Nov 26 21:47:54 localhost kernel: [70773.952360]  [<c1143cf6>] 
vfs_ioctl+0x26/0x40
Nov 26 21:47:54 localhost kernel: [70773.952367]  [<c11448ba>] 
do_vfs_ioctl+0x4ea/0x550
Nov 26 21:47:54 localhost kernel: [70773.952376]  [<c113de22>] ? 
final_putname+0x32/0x40
Nov 26 21:47:54 localhost kernel: [70773.952382]  [<c113de22>] ? 
final_putname+0x32/0x40
Nov 26 21:47:54 localhost kernel: [70773.952391]  [<c113de67>] ? 
putname+0x37/0x40
Nov 26 21:47:54 localhost kernel: [70773.952401]  [<c1134b64>] ? 
do_sys_open+0x194/0x1a0
Nov 26 21:47:54 localhost kernel: [70773.952408]  [<c1144983>] 
SyS_ioctl+0x63/0x90
Nov 26 21:47:54 localhost kernel: [70773.952416]  [<c147ad4d>] 
sysenter_do_call+0x12/0x28
Nov 26 21:47:54 localhost kernel: [70773.952423] ---[ end trace 
d8c0d3f5c187aa6d ]---
Nov 26 21:47:54 localhost kernel: [70773.971294] igb 0000:04:00.1 eth1: 
Reset adapter
Nov 26 21:47:54 localhost kernel: [70774.068154] igb 0000:04:00.0 eth0: 
Reset adapter
Nov 26 21:47:55 localhost kernel: [70774.357949] igb: eth1 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:00 localhost kernel: [70779.231904] igb: eth0 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:00 localhost kernel: [70779.346793] igb: eth0 NIC Link is 
Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:02 localhost kernel: [70781.214844] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:02 localhost kernel: [70781.214844]   Tx Queue             <7>
Nov 26 21:48:02 localhost kernel: [70781.214844]   TDH                  <0>
Nov 26 21:48:02 localhost kernel: [70781.214844]   TDT                  <0>
Nov 26 21:48:02 localhost kernel: [70781.214844]   next_to_use          <1>
Nov 26 21:48:02 localhost kernel: [70781.214844]   next_to_clean        <0>
Nov 26 21:48:02 localhost kernel: [70781.214844] buffer_info[next_to_clean]
Nov 26 21:48:02 localhost kernel: [70781.214844]   time_stamp 
<10cc0cd>
Nov 26 21:48:02 localhost kernel: [70781.214844]   next_to_watch 
<e2d5e000>
Nov 26 21:48:02 localhost kernel: [70781.214844]   jiffies 
<10cc2ae>
Nov 26 21:48:02 localhost kernel: [70781.214844]   desc.status 
<12c000>
Nov 26 21:48:04 localhost kernel: [70783.214857] igb 0000:04:00.0: 
Detected Tx Unit Hang
Nov 26 21:48:04 localhost kernel: [70783.214857]   Tx Queue             <7>
Nov 26 21:48:04 localhost kernel: [70783.214857]   TDH                  <0>
Nov 26 21:48:04 localhost kernel: [70783.214857]   TDT                  <0>
Nov 26 21:48:04 localhost kernel: [70783.214857]   next_to_use          <1>
Nov 26 21:48:04 localhost kernel: [70783.214857]   next_to_clean        <0>
Nov 26 21:48:04 localhost kernel: [70783.214857] buffer_info[next_to_clean]
Nov 26 21:48:04 localhost kernel: [70783.214857]   time_stamp           <10cc0cd>
Nov 26 21:48:04 localhost kernel: [70783.214857]   next_to_watch        <e2d5e000>
Nov 26 21:48:04 localhost kernel: [70783.214857]   jiffies              <10cc4a2>
Nov 26 21:48:04 localhost kernel: [70783.214857]   desc.status          <12c000>
Nov 26 21:48:06 localhost kernel: [70785.214700] igb 0000:04:00.0: Detected Tx Unit Hang
Nov 26 21:48:06 localhost kernel: [70785.214700]   Tx Queue             <7>
Nov 26 21:48:06 localhost kernel: [70785.214700]   TDH                  <0>
Nov 26 21:48:06 localhost kernel: [70785.214700]   TDT                  <0>
Nov 26 21:48:06 localhost kernel: [70785.214700]   next_to_use          <1>
Nov 26 21:48:06 localhost kernel: [70785.214700]   next_to_clean        <0>
Nov 26 21:48:06 localhost kernel: [70785.214700] buffer_info[next_to_clean]
Nov 26 21:48:06 localhost kernel: [70785.214700]   time_stamp           <10cc0cd>
Nov 26 21:48:06 localhost kernel: [70785.214700]   next_to_watch        <e2d5e000>
Nov 26 21:48:06 localhost kernel: [70785.214700]   jiffies              <10cc696>
Nov 26 21:48:06 localhost kernel: [70785.214700]   desc.status          <12c000>
Nov 26 21:48:08 localhost kernel: [70787.214734] igb 0000:04:00.0: Detected Tx Unit Hang
Nov 26 21:48:08 localhost kernel: [70787.214734]   Tx Queue             <7>
Nov 26 21:48:08 localhost kernel: [70787.214734]   TDH                  <0>
Nov 26 21:48:08 localhost kernel: [70787.214734]   TDT                  <0>
Nov 26 21:48:08 localhost kernel: [70787.214734]   next_to_use          <1>
Nov 26 21:48:08 localhost kernel: [70787.214734]   next_to_clean        <0>
Nov 26 21:48:08 localhost kernel: [70787.214734] buffer_info[next_to_clean]
Nov 26 21:48:08 localhost kernel: [70787.214734]   time_stamp           <10cc0cd>
Nov 26 21:48:08 localhost kernel: [70787.214734]   next_to_watch        <e2d5e000>
Nov 26 21:48:08 localhost kernel: [70787.214734]   jiffies              <10cc88a>
Nov 26 21:48:08 localhost kernel: [70787.214734]   desc.status          <12c000>
Nov 26 21:48:10 localhost kernel: [70789.214752] igb 0000:04:00.0: Detected Tx Unit Hang
Nov 26 21:48:10 localhost kernel: [70789.214752]   Tx Queue             <7>
Nov 26 21:48:10 localhost kernel: [70789.214752]   TDH                  <0>
Nov 26 21:48:10 localhost kernel: [70789.214752]   TDT                  <0>
Nov 26 21:48:10 localhost kernel: [70789.214752]   next_to_use          <1>
Nov 26 21:48:10 localhost kernel: [70789.214752]   next_to_clean        <0>
Nov 26 21:48:10 localhost kernel: [70789.214752] buffer_info[next_to_clean]
Nov 26 21:48:10 localhost kernel: [70789.214752]   time_stamp           <10cc0cd>
Nov 26 21:48:10 localhost kernel: [70789.214752]   next_to_watch        <e2d5e000>
Nov 26 21:48:10 localhost kernel: [70789.214752]   jiffies              <10cca7e>
Nov 26 21:48:10 localhost kernel: [70789.214752]   desc.status          <12c000>
Nov 26 21:48:11 localhost kernel: [70790.214611] igb 0000:04:00.0 eth0: Reset adapter
Nov 26 21:48:11 localhost kernel: [70790.246610] igb 0000:04:00.1 eth1: Reset adapter
Nov 26 21:48:11 localhost kernel: [70790.250616] igb: eth1 NIC Link is Down
Nov 26 21:48:11 localhost kernel: [70790.340089] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:11 localhost kernel: [70790.367984] igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:11 localhost kernel: [70790.598550] igb: eth1 NIC Link is Down
Nov 26 21:48:11 localhost kernel: [70790.634559] igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Nov 26 21:48:11 localhost kernel: [70790.638593] igb: eth0 NIC Link is Down
Nov 26 21:48:11 localhost kernel: [70790.674599] igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX


On 30/01/14 19:08, Zoltan Kiss wrote:
> I've experienced the queue timeout problems mentioned in the subject
> with igb and bnx2 cards; I haven't seen them on other cards so far. I'm
> using XenServer with a 3.10 Dom0 kernel (though igb was already updated
> to the latest version), and Windows guests are sending data through
> these cards. I noticed these problems in XenRT test runs, and I know
> they usually indicate a lost interrupt or other hardware error, but in
> my case they started to appear more often, and they are likely
> connected to my netback grant mapping patches. These patches cause
> skb's with huge (~64KB) linear buffers to appear more often.
> The reason for that is an old problem in the ring protocol: originally
> the maximum number of slots was tied to MAX_SKB_FRAGS, as every slot
> ended up as a frag of the skb. When this value changed, netback had
> to cope with the situation by coalescing the packets into fewer frags.
> My patch series takes a different approach: the leftover slots (pages)
> are assigned to the frags of a new skb, and that skb is stashed on the
> frag_list of the first one. Then, before sending it off to the stack,
> netback calls skb = skb_copy_expand(skb, 0, 0, GFP_ATOMIC | __GFP_NOWARN),
> which basically creates a new skb and copies all the data into it. As
> far as I understand, it puts everything into the linear buffer, which
> can amount to 64KB at most. The original skb is then freed and the new
> one is sent to the stack.
> I suspect that this is the problem, as it only happens when guests send
> too many slots. Has anyone familiar with these drivers seen such an
> issue before, where these kinds of skb's get stuck in the queue?
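
The flattening step Zoltan describes can be modelled in a few lines of
userspace code: every ring slot contributes one page of data, and the copy
gathers all of it into a single linear buffer of at most 64KB. The names and
sizes below are illustrative only, not the kernel implementation:

```python
# Simplified model of the effect of the skb_copy_expand() step on a
# frag_list-chained skb: all per-slot buffers end up in one linear
# buffer. Constants are illustrative, not taken from the kernel.
MAX_PACKET_BYTES = 64 * 1024  # maximum linear buffer after the copy
SLOT_SIZE = 4096              # one grant-mapped page per ring slot

def flatten(frags):
    """Copy a list of per-slot byte buffers into a single linear buffer."""
    linear = b"".join(frags)
    assert len(linear) <= MAX_PACKET_BYTES, "guest used too many slots"
    return linear

# A guest filling 16 full slots yields a 64KB linear skb after the copy.
frags = [b"\x00" * SLOT_SIZE for _ in range(16)]
print(len(flatten(frags)))  # 65536
```

This is only a sketch of the data movement; the suspected driver issue is
about how such large linear skbs behave in the NIC's Tx queue, not about the
copy itself.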


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 17:22:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 17:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDdWF-0002hY-Ce; Wed, 12 Feb 2014 17:22:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDdWD-0002hS-Sc
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 17:22:42 +0000
Received: from [85.158.139.211:36767] by server-7.bemta-5.messagelabs.com id
	8F/C0-14867-1EDABF25; Wed, 12 Feb 2014 17:22:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392225759!3490744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17061 invoked from network); 12 Feb 2014 17:22:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 17:22:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="101985327"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 12 Feb 2014 17:22:38 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 12:22:37 -0500
Message-ID: <1392225757.13563.101.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Wed, 12 Feb 2014 17:22:37 +0000
In-Reply-To: <52FA40DD.6060106@tycho.nsa.gov>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<1392111423.26657.55.camel@kazak.uk.xensource.com>
	<52FA40DD.6060106@tycho.nsa.gov>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] [PATCH] docs/vtpm: fix auto-shutdown reference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 10:25 -0500, Daniel De Graaf wrote:
> On 02/11/2014 04:37 AM, Ian Campbell wrote:
> > On Mon, 2014-02-10 at 14:40 -0500, Daniel De Graaf wrote:
> >> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
> >>> Dear all,
> >>>
> >>> I have recently configured a Xen 4.3 server with the vTPM enabled and a
> >>> guest virtual machine that takes advantage of it. After playing a bit
> >>> with it, I have a few questions:
> >>>
> >>> 1. According to the documentation, to shut down the vTPM stubdom you
> >>> only need to shut down the guest VM normally; the vTPM stubdom then
> >>> shuts down automatically. Nevertheless, if I shut down the guest, the
> >>> vTPM stubdom remains active and, moreover, I can start the machine
> >>> again and the vTPM's values are the last ones from the previous
> >>> instance of the guest. Is this normal?
> >>
> >> The documentation is in error here;
> >
> > Can you send a patch please.
> >
> > Ian.
> >
> Patch below.

Thanks.

> ------------------------->8--------------------------------------
> 
> The automatic shutdown feature of the vTPM was removed because it
> interfered with pv-grub measurement support and was also not triggered
> if the guest did not use the vTPM. Virtual TPM domains will need to be
> shut down or destroyed on guest shutdown via a script or other user
> action.
> 
> This also fixes an incorrect reference to the vTPM being PV-only.
> 
> Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

I'm holding off committing while preparations are made for another rc,
but once that is out of the way I see no reason to hold off on this.

> ---
>   docs/misc/vtpm.txt | 12 +++---------
>   1 file changed, 3 insertions(+), 9 deletions(-)
> 
> diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
> index b8979a3..df1dfae 100644
> --- a/docs/misc/vtpm.txt
> +++ b/docs/misc/vtpm.txt
> @@ -234,7 +234,7 @@ the Linux tpmfront driver. Add the following line:
>   
>   vtpm=["backend=domu-vtpm"]
>   
> -Currently only paravirtualized guests are supported.
> +Currently only Linux guests are supported (PV or HVM with PV drivers).
>   
>   Launching and shut down:
>   ------------------------
> @@ -280,14 +280,8 @@ You should also see the command being sent to the vtpm console as well
>   as the vtpm saving its state. You should see the vtpm key being
>   encrypted and stored on the vtpmmgr console.
>   
> -To shutdown the guest and its vtpm, you just have to shutdown the guest
> -normally. As soon as the guest vm disconnects, the vtpm will shut itself
> -down automatically.
> -
> -On guest:
> -# shutdown -h now
> -
> -You may wish to write a script to start your vtpm and guest together.
> +You may wish to write a script to start your vtpm and guest together and
> +to destroy the vtpm when the guest shuts down.
>   
>   ------------------------------
>   INTEGRATION WITH PV-GRUB
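
The replacement text recommends scripting the vTPM/guest lifecycle. A
minimal sketch of such a helper is below; the domain name and config paths
are hypothetical, and while `xl create`/`xl destroy` are the standard
toolstack subcommands (with `-F` keeping `xl create` in the foreground until
the domain dies), verify the flags against your toolstack version:

```python
# Hypothetical helper pairing a vTPM domain's lifetime with its guest's.
# Domain names and config paths are made up for illustration; the run
# callable is injectable so the command sequence can be inspected.
import subprocess

def run_guest_with_vtpm(vtpm_cfg, guest_cfg, vtpm_name, run=subprocess.call):
    run(["xl", "create", vtpm_cfg])
    try:
        # "xl create -F" stays in the foreground until the domain dies,
        # so returning from it means the guest has shut down.
        run(["xl", "create", "-F", guest_cfg])
    finally:
        # The vTPM no longer shuts itself down automatically, so tear
        # it down explicitly once the guest is gone.
        run(["xl", "destroy", vtpm_name])

# For illustration, record the commands instead of executing them:
issued = []
run_guest_with_vtpm("vtpm.cfg", "guest.cfg", "domu-vtpm",
                    run=lambda cmd: issued.append(cmd) or 0)
print(issued[-1])  # ['xl', 'destroy', 'domu-vtpm']
```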



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 18:17:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 18:17:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDeMT-000420-Qw; Wed, 12 Feb 2014 18:16:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WDeMR-00041v-Oy
	for Xen-devel@lists.xen.org; Wed, 12 Feb 2014 18:16:40 +0000
Received: from [85.158.143.35:61233] by server-1.bemta-4.messagelabs.com id
	85/E7-31661-68ABBF25; Wed, 12 Feb 2014 18:16:38 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392228996!5201974!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24869 invoked from network); 12 Feb 2014 18:16:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 18:16:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,833,1384300800"; d="scan'208";a="100213254"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 18:16:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 12 Feb 2014 13:16:34 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WDeMM-0001uO-KN;
	Wed, 12 Feb 2014 18:16:34 +0000
Message-ID: <1392228989.10336.11.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 12 Feb 2014 18:16:29 +0000
In-Reply-To: <52FBAAE4.7010602@citrix.com>
References: <52F90A71.40802@citrix.com>
	<20140212163625.GE91459@deinos.phlegethon.org>
	<52FBAAE4.7010602@citrix.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft B)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 17:09 +0000, David Vrabel wrote:
> On 12/02/14 16:36, Tim Deegan wrote:
> > Hi,
> > 
> > This draft has my wholehearted support.  Even without addressing any
> > of the points under discussion something along these lines would be a
> > vast improvement on the current format.
> > 
> > I have two general questions:
> > 
> >  - The existing save-format definition is spread across a number of
> >    places: libxc for hypervisor state, qemu for DM state, and the main
> >    toolstack (libxl/xend/xapi/&c) for other config runes and a general
> >    wrapper.  This is clearly a reworking of the libxc parts -- do
> >    you think there's anything currently defined elsewhere that belongs
> >    in this spec?
> 
> I was considering this format as a container for those blobs, but I
> think there should be enough flexibility that additional things could be
> moved into the spec in the future.
> 
> >  - Have you given any thought to making this into a wire protocol
> >    rather than just a file format?  Would there be any benefit to
> >    having records individually acked by the receiver in a live
> >    migration, or having the receiver send instructions about
> >    compatibility?  Or is that again left to the toolstack to manage?
> 
> I don't see how having the restorer send anything back to the saver
> would work with image files[1], so any two-way exchange must be
> optional; this can be left for the future.
> 
> Ian J had some suggestions for how to handle compatibility better
> without having the restorer report its capabilities.
> 
> >> checksum     CRC-32 checksum of the record body (including any trailing
> >>              padding), or 0x00000000 if the checksum field is invalid.
> > 
> > Apart from any discussion of the merits of per-record vs whole-file
> > checksums, it would be useful for this checksum to cover the header
> > too.  E.g., by declaring it to be the checksum of header+data where
> > the checksum field is 0, or by declaring that it shall be that pattern
> > which causes the finished header+data to checksum to 0.
> 
> A single checksum for a multi GB file doesn't seem robust enough, which
> is why I made it per-record.  Per-record checksums also mean you can
> discard records the restorer isn't interested in without having to read
> them to calculate the checksum.
> 
> I'm not entirely convinced by the usefulness of checksums, though.  If
> no one else thinks they would be useful I'll probably drop them.
> 

<paranoia mode>
I think it depends on which types of corruption you want to detect.
Images can be sent over the wire or saved to disk and then restored.
Although the network performs many checks, and disks detect data
corrupted at the physical layer (sectors have CRCs too), corruption
occurring during memory transfers or on a bus (such as SATA or PCI)
goes undetected.

A CRC could also be useful for Remus, to detect corruption and request
that the data be resent.
</paranoia mode>
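
As a concrete sketch of Tim's suggestion (compute the CRC-32 over header
plus body with the checksum field treated as zero), the 16-byte header
layout below is illustrative only, not the draft's actual wire format:

```python
# Illustrative per-record CRC-32, computed over header + padded body
# with the checksum field zeroed. The header layout (type, length,
# checksum, reserved as little-endian u32s) is a made-up example.
import struct
import zlib

def make_record(rtype, body):
    pad = b"\x00" * (-len(body) % 8)  # pad body to an 8-byte boundary
    hdr_zeroed = struct.pack("<IIII", rtype, len(body), 0, 0)
    crc = zlib.crc32(hdr_zeroed + body + pad) & 0xFFFFFFFF
    return struct.pack("<IIII", rtype, len(body), crc, 0) + body + pad

def verify_record(rec):
    rtype, length, crc, rsvd = struct.unpack_from("<IIII", rec)
    zeroed = struct.pack("<IIII", rtype, length, 0, rsvd) + rec[16:]
    return zlib.crc32(zeroed) & 0xFFFFFFFF == crc

rec = make_record(1, b"example body")
print(verify_record(rec))                 # True
print(verify_record(rec[:-1] + b"\xff"))  # False (corrupted tail)
```

This covers the header as well as the body, so a corrupted length or type
field is also caught; the draft's "0x00000000 means no checksum" convention
would need special-casing on top of this.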

> >> P2M
> >> ---
> [...]
> > The current save record doesn't contain the p2m itself, but rather the
> > p2m_frame_list, an array of the MFNs (in the save record, PFNs) that
> > hold the actual p2m.  Frames in that list are used to populate the p2m
> > as memory is allocated on the receiving side.
> 
> Er. Yes, I got confused by the code here and misunderstood it.
> 
> David
> 

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 18:25:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 18:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDeV6-0004Xe-Tn; Wed, 12 Feb 2014 18:25:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WDeV4-0004XX-Ec
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 18:25:34 +0000
Received: from [85.158.137.68:62726] by server-8.bemta-3.messagelabs.com id
	21/C6-16039-D9CBBF25; Wed, 12 Feb 2014 18:25:33 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392229531!1453257!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2479 invoked from network); 12 Feb 2014 18:25:32 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 18:25:32 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CIPTmx012967
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 18:25:30 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1CIPSKR011548
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 12 Feb 2014 18:25:28 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CIPR9v019050; Wed, 12 Feb 2014 18:25:27 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 10:25:27 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 70CB11C0972; Wed, 12 Feb 2014 13:25:26 -0500 (EST)
Date: Wed, 12 Feb 2014 13:25:26 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Message-ID: <20140212182526.GA28938@phenom.dumpdata.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
	<CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
	<20140210184707.GA18755@phenom.dumpdata.com>
	<CA+XTOOhnP4OGBeMF4m8k5JxTLN4qxZ8XhJA8bXg_HBDH1TX5kg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+XTOOhnP4OGBeMF4m8k5JxTLN4qxZ8XhJA8bXg_HBDH1TX5kg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 08:04:04AM -0500, Mike Neiderhauser wrote:
> So I went ahead and tested the setup I am trying to achieve using Xen.  This
> setup basically requires two isolated machines that can be used for network
> testing.  On the HVM mentioned above, this testing fails due to something I
> cannot wrap my head around.  I believe it is still related to the PCI
> passthrough of a device, and in particular to the libxl error
> mentioned above. Can anyone shed some light on what is going on?  Is it a
> driver issue? (Broadcom Corporation NetXtreme II BCM5716 Gigabit Ethernet)

That looks like this issue:

igb and bnx2: "NETDEV WATCHDOG: transmit queue timed out" when skb has huge linear buffer

(http://lkml.org/lkml/2014/1/30/358) ?
> 
> [79464.816085] ------------[ cut here ]------------
> [79464.816093] WARNING: at
> /build/buildd/linux-lts-raring-3.8.0/net/sched/sch_generic.c:254
> dev_watchdog+0x262/0x270()
> [79464.816094] Hardware name: HVM domU
> [79464.816096] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 1 timed out
> [79464.816096] Modules linked in: esp6(F) ah6(F) xfrm6_mode_tunnel(F)
> xfrm_user(F) xfrm4_tunnel(F) tunnel4(F) ipcomp(F) xfrm_ipcomp(F) authenc(F)
> esp4(F) ah4(F) xfrm4_mode_tunnel(F) deflate(F) zlib_deflate(F) ctr(F)
> twofish_generic(F) twofish_avx_x86_64(F) twofish_x86_64_3way(F)
> twofish_x86_64(F) twofish_common(F) camellia_generic(F)
> camellia_aesni_avx_x86_64(F) camellia_x86_64(F) serpent_avx_x86_64(F)
> serpent_sse2_x86_64(F) glue_helper(F) serpent_generic(F)
> blowfish_generic(F) blowfish_x86_64(F) blowfish_common(F)
> cast5_avx_x86_64(F) cast5_generic(F) cast_common(F) des_generic(F) xcbc(F)
> rmd160(F) crypto_null(F) af_key(F) xfrm_algo(F) 8021q(F) garp(F) stp(F)
> llc(F) ipmi_msghandler(F) autofs4(F) ghash_clmulni_intel(F) aesni_intel(F)
> ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) cirrus(F)
> ttm(F) drm_kms_helper(F) nfsd(F) nfs_acl(F) drm(F) auth_rpcgss(F)
> sysimgblt(F) nfs(F) psmouse(F) sysfillrect(F) syscopyarea(F) fscache(F)
> lockd(F) microcode(F) joydev(F) xen_kbdfront(F) i2c_piix4(F) serio_raw(F)
> lp(F) mac_hid(F) sunrpc(F) parport(F) floppy(F) bnx2(F) [last unloaded:
> ipmi_msghandler]
> [79464.816139] Pid: 0, comm: swapper/1 Tainted: GF
>  3.8.0-29-generic #42~precise1-Ubuntu
> [79464.816140] Call Trace:
> [79464.816142]  <IRQ>  [<ffffffff81059b0f>] warn_slowpath_common+0x7f/0xc0
> [79464.816149]  [<ffffffff8135b9d4>] ? timerqueue_add+0x64/0xb0
> [79464.816151]  [<ffffffff81059c06>] warn_slowpath_fmt+0x46/0x50
> [79464.816154]  [<ffffffff81076794>] ? wake_up_worker+0x24/0x30
> [79464.816157]  [<ffffffff81602062>] dev_watchdog+0x262/0x270
> [79464.816160]  [<ffffffff810771f0>] ? __queue_work+0x2d0/0x2d0
> [79464.816161]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
> [79464.816164]  [<ffffffff8106995b>] call_timer_fn+0x3b/0x150
> [79464.816167]  [<ffffffff8144f5a1>] ? add_interrupt_randomness+0x41/0x190
> [79464.816170]  [<ffffffff8106b427>] run_timer_softirq+0x267/0x2c0
> [79464.816173]  [<ffffffff810ee3c9>] ? handle_irq_event_percpu+0xa9/0x210
> [79464.816175]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
> [79464.816177]  [<ffffffff81062620>] __do_softirq+0xc0/0x240
> [79464.816180]  [<ffffffff810b67cd>] ? tick_do_update_jiffies64+0x9d/0xd0
> [79464.816184]  [<ffffffff816fdd5c>] call_softirq+0x1c/0x30
> [79464.816188]  [<ffffffff81016775>] do_softirq+0x65/0xa0
> [79464.816189]  [<ffffffff810628fe>] irq_exit+0x8e/0xb0
> [79464.816193]  [<ffffffff8140a125>] xen_evtchn_do_upcall+0x35/0x50
> [79464.816195]  [<ffffffff816fdeed>] xen_hvm_callback_vector+0x6d/0x80
> [79464.816196]  <EOI>  [<ffffffff81084008>] ? hrtimer_start+0x18/0x20
> [79464.816201]  [<ffffffff81045136>] ? native_safe_halt+0x6/0x10
> [79464.816204]  [<ffffffff8101cc33>] default_idle+0x53/0x1f0
> [79464.816206]  [<ffffffff8101dad9>] cpu_idle+0xd9/0x120
> [79464.816209]  [<ffffffff816d10fe>] start_secondary+0xc3/0xc5
> [79464.816210] ---[ end trace 48cf6b13be16e0ae ]---
> [79464.816214] bnx2 0000:00:05.0 eth1: <--- start FTQ dump --->
> [79464.816220] bnx2 0000:00:05.0 eth1: RV2P_PFTQ_CTL 00010000
> [79464.816225] bnx2 0000:00:05.0 eth1: RV2P_TFTQ_CTL 00020000
> [79464.816258] bnx2 0000:00:05.0 eth1: RV2P_MFTQ_CTL 00004000
> [79464.816263] bnx2 0000:00:05.0 eth1: TBDR_FTQ_CTL 00004000
> [79464.816268] bnx2 0000:00:05.0 eth1: TDMA_FTQ_CTL 00010000
> [79464.816272] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
> [79464.816276] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
> [79464.816280] bnx2 0000:00:05.0 eth1: TPAT_FTQ_CTL 00010000
> [79464.816285] bnx2 0000:00:05.0 eth1: RXP_CFTQ_CTL 00008000
> [79464.816289] bnx2 0000:00:05.0 eth1: RXP_FTQ_CTL 00100000
> [79464.816293] bnx2 0000:00:05.0 eth1: COM_COMXQ_FTQ_CTL 00010000
> [79464.816297] bnx2 0000:00:05.0 eth1: COM_COMTQ_FTQ_CTL 00020000
> [79464.816302] bnx2 0000:00:05.0 eth1: COM_COMQ_FTQ_CTL 00010000
> [79464.816306] bnx2 0000:00:05.0 eth1: CP_CPQ_FTQ_CTL 00004000
> [79464.816308] bnx2 0000:00:05.0 eth1: CPU states:
> [79464.816321] bnx2 0000:00:05.0 eth1: 045000 mode b84c state 80001000
> evt_mask 500 pc 8001288 pc 8001288 instr 38640001
> [79464.816335] bnx2 0000:00:05.0 eth1: 085000 mode b84c state 80005000
> evt_mask 500 pc 8000a5c pc 8000a5c instr 10400016
> [79464.816349] bnx2 0000:00:05.0 eth1: 0c5000 mode b84c state 80000000
> evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
> [79464.816362] bnx2 0000:00:05.0 eth1: 105000 mode b8cc state 80000000
> evt_mask 500 pc 8000a9c pc 8000a94 instr 8c420020
> [79464.816375] bnx2 0000:00:05.0 eth1: 145000 mode b880 state 80000000
> evt_mask 500 pc 8009c00 pc 800d948 instr 30420040
> [79464.816389] bnx2 0000:00:05.0 eth1: 185000 mode b8cc state 80008000
> evt_mask 500 pc 8000c58 pc 8000c58 instr 1092823
> [79464.816392] bnx2 0000:00:05.0 eth1: <--- end FTQ dump --->
> [79464.816394] bnx2 0000:00:05.0 eth1: <--- start TBDC dump --->
> [79464.816399] bnx2 0000:00:05.0 eth1: TBDC free cnt: 32
> [79464.816401] bnx2 0000:00:05.0 eth1: LINE     CID  BIDX   CMD  VALIDS
> [79464.816410] bnx2 0000:00:05.0 eth1: 00    001000  00e8   00    [0]
> [79464.816420] bnx2 0000:00:05.0 eth1: 01    001000  00e8   00    [0]
> [79464.816429] bnx2 0000:00:05.0 eth1: 02    000800  afc8   00    [0]
> [79464.816438] bnx2 0000:00:05.0 eth1: 03    000800  afb8   00    [0]
> [79464.816447] bnx2 0000:00:05.0 eth1: 04    000800  afd8   00    [0]
> [79464.816456] bnx2 0000:00:05.0 eth1: 05    000800  afe0   00    [0]
> [79464.816465] bnx2 0000:00:05.0 eth1: 06    000800  afe8   00    [0]
> [79464.816474] bnx2 0000:00:05.0 eth1: 07    000800  afd0   00    [0]
> [79464.816485] bnx2 0000:00:05.0 eth1: 08    001000  3510   00    [0]
> [79464.816494] bnx2 0000:00:05.0 eth1: 09    000800  aec0   00    [0]
> [79464.816504] bnx2 0000:00:05.0 eth1: 0a    001000  3530   00    [0]
> [79464.816514] bnx2 0000:00:05.0 eth1: 0b    000800  aec8   00    [0]
> [79464.816523] bnx2 0000:00:05.0 eth1: 0c    000800  aed0   00    [0]
> [79464.816559] bnx2 0000:00:05.0 eth1: 0d    001000  34f8   00    [0]
> [79464.816570] bnx2 0000:00:05.0 eth1: 0e    001000  3500   00    [0]
> [79464.816580] bnx2 0000:00:05.0 eth1: 0f    001000  3518   00    [0]
> [79464.816590] bnx2 0000:00:05.0 eth1: 10    1fbc00  2fe8   7d    [0]
> [79464.816599] bnx2 0000:00:05.0 eth1: 11    1ab780  fff8   7d    [0]
> [79464.816608] bnx2 0000:00:05.0 eth1: 12    17ff00  b908   f7    [0]
> [79464.816618] bnx2 0000:00:05.0 eth1: 13    0cb700  ff40   d7    [0]
> [79464.816627] bnx2 0000:00:05.0 eth1: 14    177a80  efe0   03    [0]
> [79464.816637] bnx2 0000:00:05.0 eth1: 15    037d80  9f88   72    [0]
> [79464.816646] bnx2 0000:00:05.0 eth1: 16    1bae00  eef8   ce    [0]
> [79464.816657] bnx2 0000:00:05.0 eth1: 17    1bbc80  a7f8   df    [0]
> [79464.816666] bnx2 0000:00:05.0 eth1: 18    17e180  6aa8   e4    [0]
> [79464.816675] bnx2 0000:00:05.0 eth1: 19    07ff80  6e50   dd    [0]
> [79464.816683] bnx2 0000:00:05.0 eth1: 1a    1fda80  f790   6e    [0]
> [79464.816694] bnx2 0000:00:05.0 eth1: 1b    151580  d7b0   fc    [0]
> [79464.816703] bnx2 0000:00:05.0 eth1: 1c    1b9f80  cef8   6b    [0]
> [79464.816712] bnx2 0000:00:05.0 eth1: 1d    1ebf00  ffa8   df    [0]
> [79464.816723] bnx2 0000:00:05.0 eth1: 1e    1e7e00  ff78   ef    [0]
> [79464.816731] bnx2 0000:00:05.0 eth1: 1f    166e80  fbd8   aa    [0]
> [79464.816733] bnx2 0000:00:05.0 eth1: <--- end TBDC dump --->
> [79464.817101] bnx2 0000:00:05.0 eth1: DEBUG: intr_sem[0] PCI_CMD[00100406]
> [79464.817239] bnx2 0000:00:05.0 eth1: DEBUG: PCI_PM[19002008]
> PCI_MISC_CFG[92000088]
> [79464.817248] bnx2 0000:00:05.0 eth1: DEBUG: EMAC_TX_STATUS[00000008]
> EMAC_RX_STATUS[00000000]
> [79464.817252] bnx2 0000:00:05.0 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
> [79464.817257] bnx2 0000:00:05.0 eth1: DEBUG:
> HC_STATS_INTERRUPT_STATUS[01fc0003]
> [79464.817261] bnx2 0000:00:05.0 eth1: DEBUG: PBA[00000003]
> [79464.817264] bnx2 0000:00:05.0 eth1: <--- start MCP states dump --->
> [79464.817270] bnx2 0000:00:05.0 eth1: DEBUG: MCP_STATE_P0[0003611e]
> MCP_STATE_P1[0003611e]
> [79464.817278] bnx2 0000:00:05.0 eth1: DEBUG: MCP mode[0000b880]
> state[80000000] evt_mask[00000500]
> [79464.817286] bnx2 0000:00:05.0 eth1: DEBUG: pc[08003ebc] pc[0800d994]
> instr[32020020]
> [79464.817288] bnx2 0000:00:05.0 eth1: DEBUG: shmem states:
> [79464.817296] bnx2 0000:00:05.0 eth1: DEBUG: drv_mb[01030027]
> fw_mb[00000027] link_status[0008506b]
> [79464.817301]  drv_pulse_mb[0000338a]
> [79464.817306] bnx2 0000:00:05.0 eth1: DEBUG: dev_info_signature[44564907]
> reset_type[01005254]
> [79464.817310]  condition[0003611e]
> [79464.817318] bnx2 0000:00:05.0 eth1: DEBUG: 000001c0: 01005254 42530083
> 0003611e 00000000
> [79464.817328] bnx2 0000:00:05.0 eth1: DEBUG: 000003cc: 00000000 00000000
> 00000000 00000000
> [79464.817357] bnx2 0000:00:05.0 eth1: DEBUG: 000003dc: 00000000 00000000
> 00000000 00000000
> [79464.817367] bnx2 0000:00:05.0 eth1: DEBUG: 000003ec: 00000000 00000000
> 00000000 00000000
> [79464.817372] bnx2 0000:00:05.0 eth1: DEBUG: 0x3fc[00000000]
> [79464.817374] bnx2 0000:00:05.0 eth1: <--- end MCP states dump --->
> [79464.872988] bnx2 0000:00:05.0 eth1: NIC Copper Link is Down
> 
> 
> 
> On Mon, Feb 10, 2014 at 1:47 PM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote:
> > > Thanks for the answers on the timeline.
> > >
> > > When I start the HVM with the Broadcom adapter, I get this message back.
> > > Parsing config from /etc/xen/ubuntu-hvm-1.cfg
> > > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > > support reset from sysfs for PCI device 0000:05:00.0
> > > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > > support reset from sysfs for PCI device 0000:05:00.1
> > >
> > > However, the devices appear in the HVM.  Is this something that I should
> > be
> > > concerned about?
> >
> > No. Xen pciback does the reset automatically.
> >
> > Actually we might want to ditch that reporting in libxl, or maybe just
> > implement a stub function in xen-pciback so that libxl will be happy.
> >
> > >
> > >
> > >
> > > On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu>
> > wrote:
> > >
> > > > On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> > > > <mikeneiderhauser@gmail.com> wrote:
> > > > > Works like a charm.  I do not have physical access to the computer
> > this
> > > > > weekend to verify that the cards are isolated, but the HVM starts and
> > > > > appears to be working well.
> > > > >
> > > > > When do you think Xen 4.4 will be released?  The article I read
> > > > mentioned it
> > > > > will be released in 2014 (hinting towards the end of February).  I
> > also
> > > > read
> > > > > 'When it is ready.'
> > > > >
> > > > > Any timeline would be great.
> > > >
> > > > I'm afraid that's about all we can give. :-)  We've locked down
> > > > development for 2 months now and are working on finding and fixing
> > > > bugs.  If there are no more blocker bugs or other unforeseen delays,
> > > > it should be out by the end of February.  But there are necessarily
> > > > significant unknowns, so we can't make any promises.
> > > >
> > > >  -George
> > > >
> >
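For reference, the libxl__device_pci_reset warning quoted above corresponds to a simple sysfs probe: libxl looks for a per-device "reset" attribute under /sys/bus/pci/devices/<BDF>/ and warns when it is absent. A minimal sketch of that check (the helper name and paths here are illustrative assumptions, not libxl's actual code):

```shell
# Hedged sketch, not libxl's actual implementation: the warning fires
# when the kernel does not expose a per-device "reset" attribute in
# sysfs. The probe is essentially an existence check.
pci_has_sysfs_reset() {
    sysfs_root=$1   # normally /sys/bus/pci/devices
    bdf=$2          # e.g. 0000:05:00.0
    [ -e "$sysfs_root/$bdf/reset" ]
}

# Illustrative usage (device address taken from the log above):
if pci_has_sysfs_reset /sys/bus/pci/devices 0000:05:00.0; then
    echo "kernel supports reset from sysfs for 0000:05:00.0"
else
    echo "no sysfs reset hook; xen-pciback resets the device itself"
fi
```

As noted above, the warning is harmless when the device is assigned to xen-pciback, which performs its own reset regardless of whether the sysfs hook exists.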

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 18:40:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 18:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDeis-0005Gd-Fe; Wed, 12 Feb 2014 18:39:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WDeir-0005GY-Gt
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 18:39:49 +0000
Received: from [85.158.143.35:18772] by server-1.bemta-4.messagelabs.com id
	8A/EB-31661-4FFBBF25; Wed, 12 Feb 2014 18:39:48 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392230386!5159377!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15152 invoked from network); 12 Feb 2014 18:39:48 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 18:39:48 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CIdhsV013595
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 18:39:44 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CIdhTS004553
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 12 Feb 2014 18:39:43 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CIdgEn026888; Wed, 12 Feb 2014 18:39:42 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 10:39:42 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id CA5441C0972; Wed, 12 Feb 2014 13:39:41 -0500 (EST)
Date: Wed, 12 Feb 2014 13:39:41 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com
Message-ID: <20140212183941.GC27449@phenom.dumpdata.com>
MIME-Version: 1.0
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [GIT PULL] (xen) stable/for-linus-3.14-rc2-tag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5466213492293462003=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5466213492293462003==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="W/nzBZO5zC0uMSeA"
Content-Disposition: inline


--W/nzBZO5zC0uMSeA
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hey Linus,

Please git pull the following tag:

git pull git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git stable/for-linus-3.14-rc2-tag

which has a healthy amount of code being removed - code which we
do not use anymore (the only user of it was ia64 Xen,
which has already been removed). The other bug-fixes
make Xen ARM able to use the new event channel mechanism
and properly export header files to user-space.

Please pull!


 drivers/xen/Makefile              |   1 -
 drivers/xen/events/events_base.c  |   2 +
 drivers/xen/xencomm.c             | 219 --------------------------------------
 include/uapi/xen/Kbuild           |   2 +
 include/{ => uapi}/xen/gntalloc.h |   0
 include/{ => uapi}/xen/gntdev.h   |   0
 include/xen/interface/xencomm.h   |  41 -------
 include/xen/xencomm.h             |  77 --------------
 8 files changed, 4 insertions(+), 338 deletions(-)

David Vrabel (2):
      xen/events: bind all new interdomain events to VCPU0
      xen: install xen/gntdev.h and xen/gntalloc.h

Paul Bolle (1):
      ia64/xen: Remove Xen support for ia64 even more


--W/nzBZO5zC0uMSeA
Content-Type: application/pgp-signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQEcBAEBAgAGBQJS+7/kAAoJEFjIrFwIi8fJsXwH/2pN7m5u0vMW/OzFIZo7Xi1F
X2NhJ3Dgo2f8SnZgRk8xqmKqw7+m+lHiKa++rp+V6WvLLz4GW33Pf6VLMzLb5KHK
JN3G/HSey4Z/8eXiNN9VaTAvXl6EdfwOiVcJLqp/iKwqystTfRsx+bzfTj2/stBN
XYtVLJiWvla88/CwNlCoha9n5QN0COD77VzJt/4nRFKefXhKYZi1abf58k7PNoS6
BgjxpRmfS3xq+m1tE7rvoIsALJ5LmzQKX7aJfTRcswlCzFBs8z8VUdsukW4jJRc+
KyCCZtXvZiA544CFtSPjPHEoCxWbBY2sx5inpXQAjmU+HroJ7rCpD2yuDLFk8EE=
=0KxK
-----END PGP SIGNATURE-----

--W/nzBZO5zC0uMSeA--


--===============5466213492293462003==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5466213492293462003==--


From xen-devel-bounces@lists.xen.org Wed Feb 12 19:09:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 19:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDfBC-00064o-Ge; Wed, 12 Feb 2014 19:09:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WDfBA-00064Q-Dc
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 19:09:04 +0000
Received: from [193.109.254.147:28636] by server-5.bemta-14.messagelabs.com id
	DB/93-16688-FC6CBF25; Wed, 12 Feb 2014 19:09:03 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392232139!3904373!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19925 invoked from network); 12 Feb 2014 19:09:00 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-8.tower-27.messagelabs.com with SMTP;
	12 Feb 2014 19:09:00 -0000
X-TM-IMSS-Message-ID: <801727bc0005a398@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 801727bc0005a398 ;
	Wed, 12 Feb 2014 14:10:09 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1CJ8tnb008058; 
	Wed, 12 Feb 2014 14:08:55 -0500
Message-ID: <52FBC68E.7040905@tycho.nsa.gov>
Date: Wed, 12 Feb 2014 14:07:58 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<52F9F514.8040907@scytl.com> <52FA413F.1040608@tycho.nsa.gov>
	<52FB4110.8090005@scytl.com>
In-Reply-To: <52FB4110.8090005@scytl.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 04:38 AM, Jordi Cucurull Juan wrote:
> Hello Daniel,
>
> On 02/11/2014 04:26 PM, Daniel De Graaf wrote:
>> On 02/11/2014 05:01 AM, Jordi Cucurull Juan wrote:
>>> Hello Daniel,
>>>
>>> Thanks for your thorough answer. I have a few comments below.
>>>
>>> On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
>>>> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>>>>> Dear all,
>>>>>
>>>>> I have recently configured a Xen 4.3 server with the vTPM enabled
>>>>> and a guest virtual machine that takes advantage of it. After
>>>>> playing a bit with it, I have a few questions:
>>>>>
>>>>> 1. According to the documentation, to shut down the vTPM stubdom it is
>>>>> only necessary to shut down the guest VM normally. Theoretically, the
>>>>> vTPM stubdom automatically shuts down after this. Nevertheless, if I
>>>>> shut down the guest, the vTPM stubdom remains active and, moreover, I
>>>>> can start the machine again and the values of the vTPM are the last
>>>>> ones from the previous instance of the guest. Is this normal?
>>>>
>>>> The documentation is in error here; while this was originally how the
>>>> vTPM domain behaved, this automatic shutdown was not reliable: it was
>>>> not done if the peer domain did not use the vTPM, and it was
>>>> incorrectly triggered by pv-grub's use of the vTPM to record guest
>>>> kernel measurements (which was the immediate reason for its removal).
>>>> The solution now is to either send a shutdown request or simply
>>>> destroy the vTPM upon guest shutdown.
>>>>
>>>> An alternative that may require less work on your part is to destroy
>>>> the vTPM stub domain during a guest's construction, something like:
>>>>
>>>> #!/bin/sh -e
>>>> xl destroy "$1-vtpm" || true
>>>> xl create $1-vtpm.cfg
>>>> xl create $1-domu.cfg
>>>>
>>>> Allowing a vTPM to remain active across a guest restart will cause the
>>>> PCR values extended by pv-grub to be incorrect, as you observed in your
>>>> second email. In order for the vTPM's PCRs to be useful for quotes or
>>>> releasing sealed secrets, you need to ensure that a new vTPM is started
>>>> if and only if it is paired with a corresponding guest.
>>>
>>> I see a potential threat due to this behaviour (please correct me if I
>>> am wrong).
>>>
>>> Assume an administrator of Dom0 becomes malicious. Since the hypervisor
>>> does not enforce the shut down of the vTPM domain, the malicious
>>> administrator could try the following: 1) make a copy of the peer
>>> domain, 2) manipulate the copy of the peer domain and disable its
>>> measurements, 3) boot the original peer domain, 4) switch it off or
>>> pause it, 5) boot the manipulated copy of the peer domain.
>>>
>>> Then, the shown PCR values of the manipulated copy of the peer domain
>>> are the ones measured by the original peer domain during the first boot.
>>> But the manipulated copy is the one actually running. Hence, this could
>>> not be detected by quoting either the vTPM or the pTPM.
>>>
>>
>> A malicious dom0 has a much simpler attack vector: start the domain with
>> a custom version of pv-grub that extends arbitrary measurements instead
>> of the real kernel's measurements. Then, a user kernel with disabled or
>> similarly false measuring capabilities can be booted. Alternatively,
>> if XSM policies do not restrict it, a debugger could be attached to the
>> guest so that it can be manipulated online.
>
> This is the reason why I wanted to measure Dom0, to detect a possible
> manipulation, e.g. of a custom version of pv-grub. Nevertheless, still
> the administrator could try to inject a manipulated copy of it into the
> system after booting it. Hence, I agree with the solutions you propose
> below.
>
>>
>>> Maybe one possible solution could be to enforce an XSM FLASK policy to
>>> prevent any user in Dom0 from destroying, shutting down or pausing a
>>> domain. Then, measure the policy into a PCR of the physical TPM when
>>> Dom0 starts. Nevertheless, on the one hand I do not know if this is
>>> feasible and, on the other hand, this prevents the system from
>>> destroying the vTPM domain when the peer domain shuts down.
>>
>> The solution to this problem is to disaggregate dom0 and relocate the
>> domain building component to a stub domain that is completely measured
>> in the pTPM (perhaps by TBOOT). The domain builder could use a static
>> library of domains to build (hardware domain and TPM Manager built only
>> once; vTPM and pv-grub domain pairs built on request). An XSM policy
>> could then restrict vTPM communication so that only correctly built
>> guests are allowed to talk to their paired vTPM. In this case, dom0
>> would have permission to shut down either VM, but could not start a
>> replacement.
>>
>
> I understand this cannot be done with the current implementation of Xen.
> Are there any plans to do this in the future?

Yes, although I am not sure how easy it will be to upstream the changes.
The hypervisor changes to make the hardware domain distinct from dom0
are mostly present in 4.4 - it just lacks the hooks to postpone IOMMU
setup for the hardware domain instead of dom0. I plan to post these for
review once 4.4 is branched.

The domain builder needed to run this setup also exists but the version
which handles build requests depends on an out-of-tree patch to the
hypervisor implementing inter-domain message sending (IVC). Since V4V is
more feature-complete than IVC, I think changing our domain builder to
use V4V instead of IVC will be necessary for it to be incorporated into
Xen. The bootstrapping version of the domain builder (which would be run
with domain ID 0) does not depend on IVC, but only builds a single list
of domains from its ramdisk.
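
As an aside, the lifecycle handling described earlier in this thread
(destroy any stale vTPM before building the vTPM/guest pair, and destroy
it again once the guest shuts down) could be sketched as the fragment
below. This is only an illustration, not code from the thread: the
"$name-vtpm.cfg"/"$name-domu.cfg" naming convention is an assumption,
and XL defaults to "echo xl" so the sketch dry-runs on hosts without
Xen (set XL=xl on a real Xen host).

```shell
#!/bin/sh
# Sketch of vTPM lifecycle management (assumed config-file naming).
# XL defaults to a dry-run echo; set XL=xl on a real Xen host.
XL="${XL:-echo xl}"

start_guest() {
    name="$1"
    $XL destroy "${name}-vtpm" 2>/dev/null || true  # remove any stale vTPM
    $XL create "${name}-vtpm.cfg"                   # fresh vTPM first
    $XL create "${name}-domu.cfg"                   # then its paired guest
}

stop_guest() {
    name="$1"
    $XL shutdown -w "${name}-domu"  # wait for a clean guest shutdown
    $XL destroy "${name}-vtpm"      # then drop the paired vTPM
}

start_guest demo
stop_guest demo
```

This keeps the invariant discussed above: a vTPM instance exists if and
only if it is paired with exactly one boot of its guest.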

>>>>> 2. In the documentation it is recommended to avoid accessing the
>>>>> physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
>>>>> Nevertheless, I currently have IMA and TrouSerS enabled in Dom0
>>>>> without any apparent issue. Why is it not recommended to directly
>>>>> access the physical TPM from Dom0?
>>>>
>>>> While most of the time it is not a problem to have two entities
>>>> talking to the physical TPM, it is possible for the trousers daemon
>>>> in dom0 to interfere with key handles used by the TPM Manager. There
>>>> are also certain operations of the TPM that may not handle
>>>> concurrency, although I do not believe that trousers uses them -
>>>> SHA1Start, the DAA commands, and certain audit logs come to mind.
>>>>
>>>> The other reason why it is recommended to avoid pTPM access from
>>>> dom0 is because the ability to send unseal/unbind requests to the
>>>> physical TPM makes it possible for applications running in dom0 to
>>>> decrypt the TPM Manager's data (and thereby access vTPM private keys).
>>>>
>>>> At present, sharing the physical TPM between dom0 and the TPM Manager
>>>> is the only way to get full integrity checks.
>>>
>>> OK, I see. Hence leaving the TPM support enabled in Dom0 opens a
>>> security problem for the vTPM. But if we do not enable the support, the
>>> integrity of Dom0 cannot be proven using the TPM (e.g. by remote
>>> attestation).
>>
>> Right. Since dom0 currently must be trusted (as discussed above) this is
>> currently the best way to handle the dom0 attestation problem.
>>
>>>>
>>>>> 3. If it is not recommended to directly access the physical TPM in
>>>>> Dom0, what is the advisable way to check the integrity of this
>>>>> domain? With solutions such as TBOOT and Intel TXT?
>>>>
>>>> While the TPM Manager in Xen 4.3/4.4 does not yet have this
>>>> functionality, an update which I will be submitting for inclusion in
>>>> Xen 4.5 has the ability to get physical TPM quotes using a virtual
>>>> TPM. Combined with an early domain builder, the eventual goal is to
>>>> have dom0 use a vTPM for its integrity/reporting/sealing operations,
>>>> and use the physical TPM only to secure the secrets of vTPMs and for
>>>> deep quotes to provide fresh proofs of the system's state.
>>>
>>> This sounds really good. I look forward to trying it in Xen 4.5!!
>>>
>>>
>>> Thank you for your answers!
>>> Jordi.
>>>
>>
>>
>
>


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 19:09:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 19:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDfBC-00064o-Ge; Wed, 12 Feb 2014 19:09:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WDfBA-00064Q-Dc
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 19:09:04 +0000
Received: from [193.109.254.147:28636] by server-5.bemta-14.messagelabs.com id
	DB/93-16688-FC6CBF25; Wed, 12 Feb 2014 19:09:03 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392232139!3904373!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19925 invoked from network); 12 Feb 2014 19:09:00 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-8.tower-27.messagelabs.com with SMTP;
	12 Feb 2014 19:09:00 -0000
X-TM-IMSS-Message-ID: <801727bc0005a398@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id 801727bc0005a398 ;
	Wed, 12 Feb 2014 14:10:09 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1CJ8tnb008058; 
	Wed, 12 Feb 2014 14:08:55 -0500
Message-ID: <52FBC68E.7040905@tycho.nsa.gov>
Date: Wed, 12 Feb 2014 14:07:58 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jordi Cucurull Juan <jordi.cucurull@scytl.com>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<52F9F514.8040907@scytl.com> <52FA413F.1040608@tycho.nsa.gov>
	<52FB4110.8090005@scytl.com>
In-Reply-To: <52FB4110.8090005@scytl.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] Questions about the usage of the vTPM implemented
 in Xen 4.3
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 04:38 AM, Jordi Cucurull Juan wrote:
> Hello Daniel,
>
> On 02/11/2014 04:26 PM, Daniel De Graaf wrote:
>> On 02/11/2014 05:01 AM, Jordi Cucurull Juan wrote:
>>> Hello Daniel,
>>>
>>> Thanks for your thorough answer. I have a few comments below.
>>>
>>> On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
>>>> On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
>>>>> Dear all,
>>>>>
>>>>> I have recently configured a Xen 4.3 server with the vTPM enabled
>>>>> and a
>>>>> guest virtual machine that takes advantage of it. After playing a bit
>>>>> with it, I have a few questions:
>>>>>
>>>>> 1.According to the documentation, to shutdown the vTPM stubdom it is
>>>>> only needed to normally shutdown the guest VM. Theoretically, the vTPM
>>>>> stubdom automatically shuts down after this. Nevertheless, if I
>>>>> shutdown
>>>>> the guest the vTPM stubdom continues active and, moreover, I can start
>>>>> the machine again and the values of the vTPM are the last ones there
>>>>> were in the previous instance of the guest. Is this normal?
>>>>
>>>> The documentation is in error here; while this was originally how the
>>>> vTPM
>>>> domain behaved, this automatic shutdown was not reliable: it was not
>>>> done
>>>> if the peer domain did not use the vTPM, and it was incorrectly
>>>> triggered
>>>> by pv-grub's use of the vTPM to record guest kernel measurements
>>>> (which was
>>>> the immediate reason for its removal). The solution now is to either
>>>> send a
>>>> shutdown request or simply destroy the vTPM upon guest shutdown.
>>>>
>>>> An alternative that may require less work on your part is to destroy
>>>> the vTPM stub domain during a guest's construction, something like:
>>>>
>>>> #!/bin/sh -e
>>>> xl destroy "$1-vtpm" || true
>>>> xl create $1-vtpm.cfg
>>>> xl create $1-domu.cfg
>>>>
>>>> Allowing a vTPM to remain active across a guest restart will cause the
>>>> PCR values extended by pv-grub to be incorrect, as you observed in your
>>>> second email. In order for the vTPM's PCRs to be useful for quotes or
>>>> releasing sealed secrets, you need to ensure that a new vTPM is started
>>>> if and only if it is paired with a corresponding guest.
>>>
>>> I see a potential threat due to this behaviour (please correct me if I
>>> am wrong).
>>>
>>> Assume an administrator of Dom0 becomes malicious. Since the hypervisor
>>> does not enforce the shut down of the vTPM domain, the malicious
>>> administrator could try the following: 1) make a copy of the peer
>>> domain, 2) manipulate the copy of the peer domain and disable its
>>> measurements, 3) boot the original peer domain, 4) switch it off or
>>> pause it, 5) boot the manipulated copy of the peer domain.
>>>
>>> Then, the shown PCR values of the manipulated copy of the peer domain
>>> are the ones measured by the original peer domain during the first boot.
>>> But the manipulated copy is the one actually running. Hence, this could
>>> not be detected nor by quoting the vTPM neither the pTPM.
>>>
>>
>> A malicious dom0 has a much simpler attack vector: start the domain with
>> a custom version of pv-grub that extends arbitrary measurements instead
>> of the real kernel's measurements. Then, a user kernel with disabled or
>> similarly false measuring capabilities can be booted. Alternatively,
>> if XSM polices do not restrict it, a debugger could be attached to the
>> guest so that it can be manipulated online.
>
> This is the reason why I wanted to measure Dom0, to detect a possible
> manipulation, e.g. of a custom version of pv-grub. Nevertheless, still
> the administrator could try to inject a manipulated copy of it into the
> system after booting it. Hence, I agree with the solutions you propose
> below.
>
>>
>>> May be, one possible solution could be to enforce an XSM FLASK policy to
>>> prevent any user in Dom0 from destroying, shutting down or pausing a
>>> domain. Then, measure the policy when Dom0 starts into a PCR of the
>>> phsyical TPM. Nevertheless, on one hand I do not know if this is
>>> feasible and, in the other hand, this prevents the system from
>>> destroying the vTPM domain when the peer domain shuts down.
>>
>> The solution to this problem is to disaggregate dom0 and relocate the
>> domain building component to a stub domain that is completely measured
>> in the pTPM (perhaps by TBOOT). The domain builder could use a static
>> library of domains to build (hardware domain and TPM Manager built only
>> once; vTPM and pv-grub domain pairs built on request). An XSM policy
>> could then restrict vTPM communication so that only correctly built
>> guests are allowed to talk to their paired vTPM. In this case, dom0
>> would have permission to shut down either VM, but could not start a
>> replacement.
>>
>
> I understand this cannot be done with the current implementation of Xen.
> Are there any plans to do this in the future?

Yes, although I am not sure how easy it will be to upstream the changes.
The hypervisor changes to make the hardware domain distinct from dom0
are mostly present in 4.4 - it just lacks the hooks to postpone IOMMU
setup for the hardware domain instead of dom0. I plan to post these for
review once 4.4 is branched.

The domain builder needed to run this setup also exists but the version
which handles build requests depends on an out-of-tree patch to the
hypervisor implementing inter-domain message sending (IVC). Since V4V is
more feature-complete than IVC, I think changing our domain builder to
use V4V instead of IVC will be necessary for it to be incorporated into
Xen. The bootstrapping version of the domain builder (which would be run
with domain ID 0) does not depend on IVC, but only builds a single list
of domains from its ramdisk.

>>>>> 2.In the documentation it is recommended to avoid accessing the
>>>>> physical
>>>>> TPM from Dom0 at the same time than the vTPM Manager stubdom.
>>>>> Nevertheless, I currently have the IMA and the Trousers enabled in
>>>>> Dom0
>>>>> without any apparent issue. Why is not recommended directly accessing
>>>>> the physical TPM of Dom0?
>>>>
>>>> While most of the time it is not a problem to have two entities
>>>> talking to
>>>> the physical TPM, it is possible for the trousers daemon in dom0 to
>>>> interfere
>>>> with key handles used by the TPM Manager. There are also certain
>>>> operations
>>>> of the TPM that may not handle concurrency, although I do not believe
>>>> that
>>>> trousers uses them - SHA1Start, the DAA commands, and certain audit
>>>> logs
>>>> come to mind.
>>>>
>>>> The other reason why it is recommended to avoid pTPM access from
>>>> dom0 is
>>>> because the ability to send unseal/unbind requests to the physical TPM
>>>> makes
>>>> it possible for applications running in dom0 to decrypt the TPM
>>>> Manager's
>>>> data (and thereby access vTPM private keys).
>>>>
>>>> At present, sharing the physical TPM between dom0 and the TPM
>>>> Manager is
>>>> the only way to get full integrity checks.
>>>
>>> OK, I see. Hence leaving the TPM support enabled in Dom0 opens a
>>> security problem to the vTPM. But if we do not enable the support, the
>>> integrity of Dom0 cannot be proved using the TPM (e.g. by remote
>>> attestation).
>>
>> Right. Since dom0 currently must be trusted (as discussed above) this is
>> currently the best way to handle the dom0 attestation problem.
>>
>>>>
>>>>> 3.If it is not recommended to directly access the physical TPM in
>>>>> Dom0, what is the advisable way to check the integrity of this
>>>>> domain?
>>>>> With solutions such as TBOOT and Intel TXT?
>>>>
>>>> While the TPM Manager in Xen 4.3/4.4 does not yet have this
>>>> functionality,
>>>> an update which I will be submitting for inclusion in Xen 4.5 has the
>>>> ability to get physical TPM quotes using a virtual TPM. Combined
>>>> with an
>>>> early domain builder, the eventual goal is to have dom0 use a vTPM for
>>>> its integrity/reporting/sealing operations, and use the physical TPM
>>>> only
>>>> to secure the secrets of vTPMs and for deep quotes to provide fresh
>>>> proofs
>>>> of the system's state.
>>>
>>> This sounds really good. I look forward to trying it in Xen 4.5!!
>>>
>>>
>>> Thank you for your answers!
>>> Jordi.
>>>
>>
>>
>
>


-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 19:50:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 19:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDfp4-00076X-G7; Wed, 12 Feb 2014 19:50:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WDfp3-00076S-Hg
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 19:50:17 +0000
Received: from [85.158.139.211:28312] by server-11.bemta-5.messagelabs.com id
	7D/B5-23886-870DBF25; Wed, 12 Feb 2014 19:50:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392234614!3510973!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23495 invoked from network); 12 Feb 2014 19:50:16 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 12 Feb 2014 19:50:16 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CJo9nB020091
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 19:50:09 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1CJo7XG003948
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 12 Feb 2014 19:50:08 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CJo7Y1000695; Wed, 12 Feb 2014 19:50:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 11:50:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9AB341C0972; Wed, 12 Feb 2014 14:50:05 -0500 (EST)
Date: Wed, 12 Feb 2014 14:50:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: xen-devel@lists.xensource.com, george.dunlap@eu.citrix.com,
	jun.nakajima@intel.com, boris.ostrovsky@oracle.com, jbeulich@suse.com, 
	andrew.cooper3@citrix.com, andrew.thomas@oracle.com, ufimtseva@gmail.com
Message-ID: <20140212195005.GB29910@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: kurt.hackel@oracle.com
Subject: [Xen-devel] _PXM, NUMA, and all that goodnesss
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

I have been looking at figuring out how we can "easily" do PCIe assignment
of devices that are on different sockets. The problem is that
on machines with many sockets (four or more) we might inadvertently assign
a PCIe device from one socket to a guest bound to a different NUMA
node. That means more QPI (inter-socket) traffic, higher latency, etc.

From a Linux kernel perspective we do seem to 'pipe' said information
from the ACPI DSDT (drivers/xen/pci.c):

                unsigned long long pxm;

                status = acpi_evaluate_integer(handle, "_PXM",
                                   NULL, &pxm);
                if (ACPI_SUCCESS(status)) {
                    add.optarr[0] = pxm;
                    add.flags |= XEN_PCI_DEV_PXM;

Which is neat, except that Xen ignores that flag altogether. I Googled
a bit but still did not find anything relevant - though I thought there
were some presentations from past Xen Summits referring to it
(I can't find them now :-()

Anyhow, what I am wondering is whether there are some prototypes out there
from the past that utilize this. And if we were to use this, how
can we expose it to 'libxl' or any other tools to say:

"Hey! You might want to use this other PCI device assigned
to pciback which is on the same node". Some form of
'numa-pci' affinity.

Interestingly enough, one can also read this from sysfs:
/sys/bus/pci/devices/<BDF>/{numa_node,local_cpus,local_cpulist}.

Except that we don't expose the NUMA topology to the initial
domain, so the 'numa_node' is all -1. And 'local_cpus' depends
on seeing _all_ of the CPUs - and of course it assumes that
vCPU == pCPU.
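As a minimal sketch (assuming the standard sysfs formatting, where the
'numa_node' attribute holds a decimal integer and "-1" means unknown), a
tool could parse the value it reads from that file like this - the helper
name is hypothetical:

```c
#include <assert.h>
#include <stdlib.h>

/* Sketch: the sysfs 'numa_node' attribute is a decimal integer in text
 * form ("-1\n" when the topology is not exposed), so parsing is trivial. */
static int parse_numa_node(const char *buf)
{
    return (int)strtol(buf, NULL, 10);
}
```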

Anyhow, if this was "tweaked" such that the initial domain
was seeing the hardware NUMA topology and parsing it (via
Elena's patches) we could potentially have at least the
'numa_node' information present and figure out if a guest
is using a PCIe device from the right socket.

So what I am wondering is:
 1) Were there any plans for the XEN_PCI_DEV_PXM in the
    hypervisor? Were there some prototypes for exporting the
    PCI device BDF and NUMA information out?

 2) Would it be better to just look at making the initial domain
    be able to figure out the NUMA topology and assign the
    correct 'numa_node' in the PCI fields?

 3) If either option is used, would taking that information into
    advisement when launching a guest with either 'cpus' or 'numa-affinity'
    or 'pci' and informing the user of a better choice be good?
    Or would it be better if there was some diagnostic tool to at
    least tell the user whether their PCI device assignment made
    sense or not? Or perhaps program the 'numa-affinity' based on
    the PCIe socket location?
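The 'numa-pci' affinity idea could be sketched as a small selection helper.
This is purely illustrative (the function name and the fallback policy are
assumptions, not anything in Xen or libxl): given the numa_node of each
pciback-owned candidate device, prefer one on the guest's node.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical 'numa-pci' helper: dev_nodes[i] is the numa_node of the
 * i-th pciback-owned candidate device (-1 = unknown). Pick a device on
 * the guest's NUMA node, falling back to the first candidate when none
 * is local. Returns a candidate index, or -1 when there are none. */
static int pick_numa_local(const int *dev_nodes, int n, int guest_node)
{
    for (int i = 0; i < n; i++)
        if (dev_nodes[i] == guest_node)
            return i;           /* NUMA-local device found */
    return n > 0 ? 0 : -1;      /* fall back, or nothing to pick */
}
```

A diagnostic tool (option 3 above) could use the same comparison the other
way around: warn when the device already assigned is not on the guest's node.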


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 19:52:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 19:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDfr6-0007BY-1i; Wed, 12 Feb 2014 19:52:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WDfr5-0007BS-0h
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 19:52:23 +0000
Received: from [85.158.143.35:21580] by server-1.bemta-4.messagelabs.com id
	8B/24-31661-6F0DBF25; Wed, 12 Feb 2014 19:52:22 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392234740!5171149!1
X-Originating-IP: [209.85.217.175]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19192 invoked from network); 12 Feb 2014 19:52:21 -0000
Received: from mail-lb0-f175.google.com (HELO mail-lb0-f175.google.com)
	(209.85.217.175)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 19:52:21 -0000
Received: by mail-lb0-f175.google.com with SMTP id p9so7501716lbv.34
	for <xen-devel@lists.xenproject.org>;
	Wed, 12 Feb 2014 11:52:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=eDN1kGxi5ojtVo79H5/1YkLizlnzQrYtwCI0xT7yINw=;
	b=CbnquKgdj9MmbcW64l8NWubu51PhVIiZOJuXo7ZJx6zl9Sw9xeEA742ZklQIwONYD/
	Y6iwo6HX9iuTIQBfQry6gfpDGcc6dY+F+GBFTec1qzJordwxV5HRp1huV2EJRWXGbrIu
	eL1wQ95TUd7iqYgaPpLJOQRuLle1Myv/OsdBjnT5/HQuQryt1LKruyQycUG3X2FAwh3M
	b/G4OR7XY2aLMqVl9Hmwr5gNhqJ2jjKQFkY6Z1HJC0ZpxWv4OtwWbEsl/+SbCK9xNdal
	Mo8Mp5/bfRU6ZqpADS9E9InCW+iuxc5Q68365x9RHoRuu+DkPuSp61ytX/yusuF7C09X
	z6xA==
X-Received: by 10.153.0.33 with SMTP id av1mr32802690lad.14.1392234740196;
	Wed, 12 Feb 2014 11:52:20 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 12 Feb 2014 11:52:00 -0800 (PST)
In-Reply-To: <20140212121739.ecb2f222.billfink@mindspring.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
	<1392203708.13563.50.camel@kazak.uk.xensource.com>
	<20140212121739.ecb2f222.billfink@mindspring.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 12 Feb 2014 11:52:00 -0800
X-Google-Sender-Auth: -q6l4rnJF3DyZuIeADJW9LyzWog
Message-ID: <CAB=NE6Wfy-+Z4-NksF9qspz3+6Cq9JNYFvzQ9i1=YoCe0KWpbQ@mail.gmail.com>
To: Bill Fink <billfink@mindspring.com>
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
	random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, 2014 at 9:17 AM, Bill Fink <billfink@mindspring.com> wrote:
> On Wed, 12 Feb 2014, Ian Campbell wrote:
>> IOW -- enabling/disabling multicast seems to me to be an odd proxy for
>> disabling SLAAC or DAD and AIUI your patch fixes the opposite case,
>> which is to avoid SLAAC and DAD on interfaces which don't do multicast
>> (which makes sense since those protocols involve multicast).
>
> Forgive me if this doesn't make sense in this context since
> I'm not a kernel developer, but I was just wondering if any of
> the sysctls:
>
>         /proc/sys/net/ipv6/conf/<ifc>/disable_ipv6
>         /proc/sys/net/ipv6/conf/<ifc>/accept_dad
>         /proc/sys/net/ipv6/conf/<ifc>/accept_ra
>         /proc/sys/net/ipv6/conf/<ifc>/autoconf
>
> would be apropos for the requirement being discussed.

These are run-time configuration options, applied post-initialization. What
we're considering is internal net_device capability fields, to even
avoid creating these knobs in the first place.
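A minimal sketch of that idea (the flag name and value are hypothetical,
not real kernel flags): gate the creation of IPv6 autoconf/DAD state on a
device capability bit, so the per-interface sysctl knobs never need to
exist for devices that cannot do multicast.

```c
#include <assert.h>

/* Hypothetical capability bit: device supports multicast, so SLAAC/DAD
 * (which rely on multicast) make sense on it. Not a real kernel flag. */
#define NETDEV_CAP_MULTICAST 0x1u

/* Would IPv6 autoconf/DAD state be created for this device at all? */
static int ipv6_autoconf_wanted(unsigned int caps)
{
    return (caps & NETDEV_CAP_MULTICAST) != 0;
}
```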

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:05:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDg3m-0007qx-SR; Wed, 12 Feb 2014 20:05:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WDg3k-0007qd-Kw
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 20:05:28 +0000
Received: from [85.158.143.35:65438] by server-3.bemta-4.messagelabs.com id
	1D/83-11539-704DBF25; Wed, 12 Feb 2014 20:05:27 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392235526!5219278!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21337 invoked from network); 12 Feb 2014 20:05:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 20:05:27 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CK5Or5007495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 20:05:24 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CK5LCU020638
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 12 Feb 2014 20:05:22 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CK5LgI024837; Wed, 12 Feb 2014 20:05:21 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-157.usdhcp.oraclecorp.com.com
	(/10.152.54.212) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 12:05:20 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: jbeulich@suse.com, keir@xen.org
Date: Wed, 12 Feb 2014 16:05:45 -0500
Message-Id: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH 0/2] A couple of SR-IOV-related patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The first patch fixes a bug in calculating the offset to the Virtual
Function's memory space. It may be worth taking it into 4.4.

The second patch removes what seems to be a redundant check in computing
the VF number.

Boris Ostrovsky (2):
  x86/pci: Store VF's memory space displacement in a 64-bit value
  x86/pci: Remove unnecessary check in VF value computation

 xen/arch/x86/msi.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:05:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDg3m-0007qq-HC; Wed, 12 Feb 2014 20:05:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WDg3k-0007qb-Da
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 20:05:28 +0000
Received: from [193.109.254.147:61814] by server-15.bemta-14.messagelabs.com
	id 3D/EE-10839-704DBF25; Wed, 12 Feb 2014 20:05:27 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392235525!3893885!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12845 invoked from network); 12 Feb 2014 20:05:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 20:05:27 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CK5NDU007483
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 20:05:24 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CK5MqI024883
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 12 Feb 2014 20:05:23 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CK5MJJ020668; Wed, 12 Feb 2014 20:05:22 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-157.usdhcp.oraclecorp.com.com
	(/10.152.54.212) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 12:05:21 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: jbeulich@suse.com, keir@xen.org
Date: Wed, 12 Feb 2014 16:05:47 -0500
Message-Id: <1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
	value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This test is already performed a couple of lines above, as part of the
"vf < 0 || (vf && vf % stride)" check.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/msi.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 1aaceeb..27e47c3 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -639,11 +639,7 @@ static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8 func, u8 bir, int vf)
         if ( vf < 0 || (vf && vf % stride) )
             return 0;
         if ( stride )
-        {
-            if ( vf % stride )
-                return 0;
             vf /= stride;
-        }
         if ( vf >= num_vf )
             return 0;
         BUILD_BUG_ON(ARRAY_SIZE(pdev->vf_rlen) != PCI_SRIOV_NUM_BARS);
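For illustration, a standalone sketch of why the removed test is dead code (hypothetical helper names, not Xen code; it assumes stride > 0, since the inner test was only reached inside "if ( stride )"): any non-zero vf that is not a multiple of stride already fails the first guard, so the inner "vf % stride" test can never fire.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the first guard in read_pci_mem_bar(). */
static bool first_guard_rejects(int vf, int stride)
{
    return vf < 0 || (vf && vf % stride);
}

/* Models the removed inner "if ( vf % stride ) return 0;": it only ran
 * after the first guard passed, and then vf % stride is always 0. */
static bool removed_test_fires(int vf, int stride)
{
    if (first_guard_rejects(vf, stride))
        return false;            /* the function already returned 0 */
    return vf % stride != 0;     /* provably never true at this point */
}
```

Sweeping vf over non-negative values and stride over positive values, removed_test_fires() never returns true, matching the rationale for the deletion.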
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:05:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDg3m-0007qx-SR; Wed, 12 Feb 2014 20:05:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WDg3k-0007qd-Kw
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 20:05:28 +0000
Received: from [85.158.143.35:65438] by server-3.bemta-4.messagelabs.com id
	1D/83-11539-704DBF25; Wed, 12 Feb 2014 20:05:27 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392235526!5219278!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21337 invoked from network); 12 Feb 2014 20:05:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 20:05:27 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CK5Or5007495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 20:05:24 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CK5LCU020638
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 12 Feb 2014 20:05:22 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1CK5LgI024837; Wed, 12 Feb 2014 20:05:21 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-157.usdhcp.oraclecorp.com.com
	(/10.152.54.212) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 12:05:20 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: jbeulich@suse.com, keir@xen.org
Date: Wed, 12 Feb 2014 16:05:45 -0500
Message-Id: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH 0/2] A couple of SR-IOV-related patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The first patch fixes a bug in calculating the offset into a Virtual
Function's memory space. It may be worth taking for 4.4.

The second patch removes what appears to be a redundant check when computing
the VF number.

Boris Ostrovsky (2):
  x86/pci: Store VF's memory space displacement in a 64-bit value
  x86/pci: Remove unnecessary check in VF value computation

 xen/arch/x86/msi.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:05:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:05:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDg3n-0007r4-7T; Wed, 12 Feb 2014 20:05:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WDg3k-0007qc-JW
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 20:05:28 +0000
Received: from [193.109.254.147:36656] by server-6.bemta-14.messagelabs.com id
	33/F7-03396-804DBF25; Wed, 12 Feb 2014 20:05:28 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392235525!3886181!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16970 invoked from network); 12 Feb 2014 20:05:27 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 12 Feb 2014 20:05:27 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1CK5M0r007474
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 12 Feb 2014 20:05:23 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1CK5Mr2016017
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 12 Feb 2014 20:05:22 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1CK5LF3015992; Wed, 12 Feb 2014 20:05:21 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-157.usdhcp.oraclecorp.com.com
	(/10.152.54.212) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 12:05:21 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: jbeulich@suse.com, keir@xen.org
Date: Wed, 12 Feb 2014 16:05:46 -0500
Message-Id: <1392239147-1547-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH 1/2] x86/pci: Store VF's memory space
	displacement in a 64-bit value
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A VF's memory space offset can exceed 4GB and therefore needs to be stored
in a 64-bit variable.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/msi.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 284042e..1aaceeb 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -610,7 +610,8 @@ static int msi_capability_init(struct pci_dev *dev,
 static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8 func, u8 bir, int vf)
 {
     u8 limit;
-    u32 addr, base = PCI_BASE_ADDRESS_0, disp = 0;
+    u32 addr, base = PCI_BASE_ADDRESS_0;
+    u64 disp = 0;
 
     if ( vf >= 0 )
     {
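As a standalone illustration of the truncation being fixed (hypothetical helper names, not the actual Xen code): the displacement of a VF within a BAR is a multiple of the per-VF size, and for large BARs that product can exceed 2^32, so accumulating it in a u32 silently wraps.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical model: byte offset of VF number "vf" given a per-VF
 * region of "vf_len" bytes, computed in 32 vs. 64 bits. */
static uint32_t disp_u32(uint32_t vf, uint64_t vf_len)
{
    return (uint32_t)(vf * vf_len);   /* wraps once vf * vf_len >= 2^32 */
}

static uint64_t disp_u64(uint32_t vf, uint64_t vf_len)
{
    return (uint64_t)vf * vf_len;     /* full 64-bit offset, as in the patch */
}
```

With a 2GB per-VF region, VF 3 sits at offset 0x180000000; the 32-bit version reports 0x80000000 instead, and VF 2's offset wraps all the way to 0.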
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:35:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDgWD-0000WL-5G; Wed, 12 Feb 2014 20:34:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDgWA-0000WG-VJ
	for xen-devel@lists.xensource.com; Wed, 12 Feb 2014 20:34:51 +0000
Received: from [85.158.143.35:2392] by server-3.bemta-4.messagelabs.com id
	60/C7-11539-AEADBF25; Wed, 12 Feb 2014 20:34:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392237288!5223856!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24364 invoked from network); 12 Feb 2014 20:34:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 20:34:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,834,1384300800"; d="scan'208";a="100252856"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 20:34:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 15:34:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDgW6-0001PX-N1;
	Wed, 12 Feb 2014 20:34:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDgW6-0001Cj-Dk;
	Wed, 12 Feb 2014 20:34:46 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24855-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 12 Feb 2014 20:34:46 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24855: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5956003194891091280=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5956003194891091280==
Content-Type: text/plain

flight 24855 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24855/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate/x10    fail REGR. vs. 24852

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  5819ec7bc0f86c9dff755d85df289332742c05c3
baseline version:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=5819ec7bc0f86c9dff755d85df289332742c05c3
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 5819ec7bc0f86c9dff755d85df289332742c05c3
+ branch=xen-unstable
+ revision=5819ec7bc0f86c9dff755d85df289332742c05c3
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 5819ec7bc0f86c9dff755d85df289332742c05c3:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   3b2f92c..5819ec7  5819ec7bc0f86c9dff755d85df289332742c05c3 -> master


--===============5956003194891091280==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5956003194891091280==--

 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  5819ec7bc0f86c9dff755d85df289332742c05c3
baseline version:
 xen                  3b2f92c1f8567461562fac9922fbad223dc8c6cf

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision:

+ branch=xen-unstable
+ revision=5819ec7bc0f86c9dff755d85df289332742c05c3
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 5819ec7bc0f86c9dff755d85df289332742c05c3
+ branch=xen-unstable
+ revision=5819ec7bc0f86c9dff755d85df289332742c05c3
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 5819ec7bc0f86c9dff755d85df289332742c05c3:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   3b2f92c..5819ec7  5819ec7bc0f86c9dff755d85df289332742c05c3 -> master


--===============5956003194891091280==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5956003194891091280==--

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:47:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:47:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDgia-0000sU-HL; Wed, 12 Feb 2014 20:47:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ikozhukhov@gmail.com>) id 1WDgiZ-0000sP-Jo
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 20:47:39 +0000
Received: from [193.109.254.147:27025] by server-14.bemta-14.messagelabs.com
	id ED/CD-29228-BEDDBF25; Wed, 12 Feb 2014 20:47:39 +0000
X-Env-Sender: ikozhukhov@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392238056!3928346!1
X-Originating-IP: [209.85.217.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31328 invoked from network); 12 Feb 2014 20:47:36 -0000
Received: from mail-lb0-f169.google.com (HELO mail-lb0-f169.google.com)
	(209.85.217.169)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 20:47:36 -0000
Received: by mail-lb0-f169.google.com with SMTP id q8so7700098lbi.28
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 12:47:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:cc:to:mime-version;
	bh=8ZlsALJ7k5vwM9LLpIiptV8s29Jz188JRAT5iN4x+5s=;
	b=MtIGQWJD8d+Ko3d3zaCzCBZlKmkFGzfz0pEM97uHdvfWko6EOsmts0NbeyZ1B4cKqO
	1NDXDjrbfqkE0I2knwS9tAPE6qYgoLFBUg8jwKi82VFWKtqj7+mGhfK5SVXC/AAzSHhy
	Tji9OYhLHNQHvUYsz8wU9VKSzvpQhegNMVYSoZMu0RnSbM8RI7+0je70ALIczQf2bPpJ
	Y+aKcIpTi4A054QV0+7tyTetQTvsJS2M3jyZTAtwA8BWKunj6QgfzRmAiYMNgOGfR+Bv
	J9lAe072d3YBpuIZw1W0yD8LcW6FYMreC0MLQWVDK38DXP33eXtnEDjnp0oAwBoqnTSQ
	YaVg==
X-Received: by 10.152.44.167 with SMTP id f7mr24210lam.86.1392238055840;
	Wed, 12 Feb 2014 12:47:35 -0800 (PST)
Received: from [172.16.90.16] ([91.204.58.44]) by mx.google.com with ESMTPSA id
	pz10sm25130286lbb.10.2014.02.12.12.47.35 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 12 Feb 2014 12:47:35 -0800 (PST)
From: Igor Kozhukhov <ikozhukhov@gmail.com>
Date: Thu, 13 Feb 2014 00:47:33 +0400
Message-Id: <AB94B381-73CD-47DC-B402-1FE10894A66A@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Apple Message framework v1283)
X-Mailer: Apple Mail (2.1283)
Cc: dilos-dev@lists.illumos.org,
	illumos-developer Developer <developer@lists.illumos.org>
Subject: [Xen-devel] xen-4.3 port to illumos based platform - progress
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello All,

I have made additional progress with the xen-4.3 port to DilOS :)

I am now able to load DilOS as a PV guest on a Debian Xen-4.3 dom0.

You can find instructions here:
https://dilos-dev.atlassian.net/wiki/display/DS/How+to+install+DilOS+PV+to+Linux-xen-dom0

and try it out.

I am interested in feedback.
HVM will be fixed/updated later.
At this moment I have fixed/updated only the PV guest.

I have tested my new debug ISO as a PV guest install on:
- dilos-xen34-dom0 - here I am able to use pygrub, because it was fixed for the later ZFS changes
- debian-xen43-dom0 - you can test it by following the instructions

Please let me know if you run into any issues.

--
Best regards,
Igor Kozhukhov





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 20:54:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDgpB-0001Dv-EB; Wed, 12 Feb 2014 20:54:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WDgpA-0001Dq-4s
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 20:54:28 +0000
Received: from [85.158.139.211:50957] by server-11.bemta-5.messagelabs.com id
	CA/33-23886-38FDBF25; Wed, 12 Feb 2014 20:54:27 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392238464!3518409!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22142 invoked from network); 12 Feb 2014 20:54:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 20:54:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,834,1384300800"; d="scan'208";a="100258215"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 20:54:23 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 15:54:22 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>
Date: Wed, 12 Feb 2014 20:54:13 +0000
Message-ID: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the original functions are renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- arch-specific *_foreign_p2m_mapping functions do everything after the
  hypercall
- the common parts of the m2p_*_override functions are moved into the
  *_foreign_p2m_mapping functions
- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with
  m2p_override set to false
- a new function, gnttab_[un]map_refs_userspace, provides the old behaviour

It also removes a stray space from page.h and changes ret to 0 in the
XENFEAT_auto_translated_physmap case, as that is the only possible return
value there.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Original-by: Anthony Liguori <aliguori@amazon.com>
---
v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

v4:
- move out the common bits from m2p* functions, and pass pfn/mfn as parameter
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail why ret = 0 now at some places

v6:
- don't pass pfn to m2p* functions, just get it locally

v7:
- the previous version broke build on ARM, as there is no need for those p2m
  changes. I've put them into arch specific functions, which are stubs on arm

v8:
- give credit to Anthony Liguori who submitted a very similar patch originally:
http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
- create ARM stub for get_phys_to_machine
- move definition of mfn in __gnttab_unmap_refs to the right place

v9:
- move everything after the hypercalls into set/clear_foreign_p2m_mapping
- m2p override functions became unnecessary on ARM therefore

 arch/arm/include/asm/xen/page.h     |   19 +++---
 arch/arm/xen/p2m.c                  |   34 ++++++++++
 arch/x86/include/asm/xen/page.h     |   13 +++-
 arch/x86/xen/p2m.c                  |  127 ++++++++++++++++++++++++++++++-----
 drivers/block/xen-blkback/blkback.c |   15 ++---
 drivers/xen/gntdev.c                |   13 ++--
 drivers/xen/grant-table.c           |  115 +++++++++++--------------------
 include/xen/grant_table.h           |    8 ++-
 8 files changed, 227 insertions(+), 117 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index e0965ab..4eaeb3f 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
 	return NULL;
 }
 
-static inline int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
-{
-	return 0;
-}
-
-static inline int m2p_remove_override(struct page *page, bool clear_pte)
-{
-	return 0;
-}
+extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+				   struct gnttab_map_grant_ref *kmap_ops,
+				   struct page **pages, unsigned int count,
+				   bool m2p_override);
+
+extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+				     struct gnttab_map_grant_ref *kmap_ops,
+				     struct page **pages, unsigned int count,
+				     bool m2p_override);
 
 bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn,
diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
index b31ee1b2..74d977c 100644
--- a/arch/arm/xen/p2m.c
+++ b/arch/arm/xen/p2m.c
@@ -146,6 +146,40 @@ unsigned long __mfn_to_pfn(unsigned long mfn)
 }
 EXPORT_SYMBOL_GPL(__mfn_to_pfn);
 
+int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+			    struct gnttab_map_grant_ref *kmap_ops,
+			    struct page **pages, unsigned int count,
+			    bool m2p_override)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		if (map_ops[i].status)
+			continue;
+		set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
+				    map_ops[i].dev_bus_addr >> PAGE_SHIFT);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
+int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count,
+			      bool m2p_override)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
+				    INVALID_P2M_ENTRY);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
+
 bool __set_phys_to_machine_multi(unsigned long pfn,
 		unsigned long mfn, unsigned long nr_pages)
 {
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 3e276eb..9edc8a8 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,19 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
+extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+				   struct gnttab_map_grant_ref *kmap_ops,
+				   struct page **pages, unsigned int count,
+				   bool m2p_override);
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
+extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+				     struct gnttab_map_grant_ref *kmap_ops,
+				     struct page **pages, unsigned int count,
+				     bool m2p_override);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +130,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..305af27 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -881,6 +881,67 @@ static unsigned long mfn_hash(unsigned long mfn)
 	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
 }
 
+int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+			    struct gnttab_map_grant_ref *kmap_ops,
+			    struct page **pages, unsigned int count,
+			    bool m2p_override)
+{
+	int i, ret = 0;
+	bool lazy = false;
+	pte_t *pte;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return 0;
+
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < count; i++) {
+		unsigned long mfn, pfn;
+
+		/* Do not add to override if the map failed. */
+		if (map_ops[i].status)
+			continue;
+
+		if (map_ops[i].flags & GNTMAP_contains_pte) {
+			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
+				(map_ops[i].host_addr & ~PAGE_MASK));
+			mfn = pte_mfn(*pte);
+		} else {
+			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+		}
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+		pages[i]->index = pfn_to_mfn(pfn);
+
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
From xen-devel-bounces@lists.xen.org Wed Feb 12 20:54:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 20:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDgpB-0001Dv-EB; Wed, 12 Feb 2014 20:54:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WDgpA-0001Dq-4s
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 20:54:28 +0000
Received: from [85.158.139.211:50957] by server-11.bemta-5.messagelabs.com id
	CA/33-23886-38FDBF25; Wed, 12 Feb 2014 20:54:27 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392238464!3518409!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22142 invoked from network); 12 Feb 2014 20:54:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 20:54:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,834,1384300800"; d="scan'208";a="100258215"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 12 Feb 2014 20:54:23 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 15:54:22 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas
	Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter
	Anvin" <hpa@zytor.com>, <x86@kernel.org>,
	<xen-devel@lists.xenproject.org>, 
	<linux-kernel@vger.kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Ian Campbell
	<Ian.Campbell@citrix.com>, <stefano.stabellini@eu.citrix.com>
Date: Wed, 12 Feb 2014 20:54:13 +0000
Message-ID: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override during
	mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this patch does the following:
- the original functions were renamed to __gnttab_[un]map_refs, with a new
  parameter m2p_override
- arch-specific functions *_foreign_p2m_mapping now do everything after the
  hypercall
- the common parts of the m2p_*_override functions were moved into the
  *_foreign_p2m_mapping functions
- gnttab_[un]map_refs are now wrappers that call __gnttab_[un]map_refs with
  m2p_override false
- the new functions gnttab_[un]map_refs_userspace provide the old behaviour

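The wrapper layout above can be sketched in userspace C (mock types and a
recording flag are hypothetical, for illustration only; the real code lives in
drivers/xen/grant-table.c and ends with a hypercall plus
set_foreign_p2m_mapping()):

```c
/* Minimal userspace sketch of the new gnttab_map_refs wrapper layout.
 * All types and the last_m2p_override flag are mock stand-ins, not the
 * real kernel definitions. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct gnttab_map_grant_ref { int status; };
struct page;

/* Records which m2p_override value the core helper saw (illustration only). */
static bool last_m2p_override;

static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
			     struct gnttab_map_grant_ref *kmap_ops,
			     struct page **pages, unsigned int count,
			     bool m2p_override)
{
	(void)map_ops; (void)pages; (void)count;
	/* The real code BUG()s here: kmap_ops without the m2p override
	 * path makes no sense. */
	assert(!(kmap_ops && !m2p_override));
	last_m2p_override = m2p_override;
	return 0; /* real code: hypercall + set_foreign_p2m_mapping() */
}

/* Kernel-only users (blkback, future netback): skip the m2p override
 * and its lock contention. */
int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
		    struct page **pages, unsigned int count)
{
	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
}

/* Userspace-facing users (gntdev): keep the old m2p_override behaviour. */
int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
			      struct gnttab_map_grant_ref *kmap_ops,
			      struct page **pages, unsigned int count)
{
	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
}
```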
It also removes a stray space from page.h and changes ret to 0 when
XENFEAT_auto_translated_physmap is enabled, as that is the only possible
return value there.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Original-by: Anthony Liguori <aliguori@amazon.com>
---
v2:
- move the storing of the old mfn in page->index to gnttab_map_refs
- move the function header update to a separate patch

v3:
- a new approach to retain old behaviour where it needed
- squash the patches into one

v4:
- move out the common bits from the m2p* functions, and pass pfn/mfn as parameters
- clear page->private before doing anything with the page, so m2p_find_override
  won't race with this

v5:
- change return value handling in __gnttab_[un]map_refs
- remove a stray space in page.h
- add detail why ret = 0 now at some places

v6:
- don't pass pfn to m2p* functions, just get it locally

v7:
- the previous version broke the build on ARM, as there is no need for those p2m
  changes there. I've put them into arch-specific functions, which are stubs on ARM

v8:
- give credit to Anthony Liguori who submitted a very similar patch originally:
http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
- create ARM stub for get_phys_to_machine
- move definition of mfn in __gnttab_unmap_refs to the right place

v9:
- move everything after the hypercalls into set/clear_foreign_p2m_mapping
- the m2p override functions therefore became unnecessary on ARM

 arch/arm/include/asm/xen/page.h     |   19 +++---
 arch/arm/xen/p2m.c                  |   34 ++++++++++
 arch/x86/include/asm/xen/page.h     |   13 +++-
 arch/x86/xen/p2m.c                  |  127 ++++++++++++++++++++++++++++++-----
 drivers/block/xen-blkback/blkback.c |   15 ++---
 drivers/xen/gntdev.c                |   13 ++--
 drivers/xen/grant-table.c           |  115 +++++++++++--------------------
 include/xen/grant_table.h           |    8 ++-
 8 files changed, 227 insertions(+), 117 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index e0965ab..4eaeb3f 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
 	return NULL;
 }
 
-static inline int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
-{
-	return 0;
-}
-
-static inline int m2p_remove_override(struct page *page, bool clear_pte)
-{
-	return 0;
-}
+extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+				   struct gnttab_map_grant_ref *kmap_ops,
+				   struct page **pages, unsigned int count,
+				   bool m2p_override);
+
+extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+				     struct gnttab_map_grant_ref *kmap_ops,
+				     struct page **pages, unsigned int count,
+				     bool m2p_override);
 
 bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn,
diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
index b31ee1b2..74d977c 100644
--- a/arch/arm/xen/p2m.c
+++ b/arch/arm/xen/p2m.c
@@ -146,6 +146,40 @@ unsigned long __mfn_to_pfn(unsigned long mfn)
 }
 EXPORT_SYMBOL_GPL(__mfn_to_pfn);
 
+int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+			    struct gnttab_map_grant_ref *kmap_ops,
+			    struct page **pages, unsigned int count,
+			    bool m2p_override)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		if (map_ops[i].status)
+			continue;
+		set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
+				    map_ops[i].dev_bus_addr >> PAGE_SHIFT);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
+int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count,
+			      bool m2p_override)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
+				    INVALID_P2M_ENTRY);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
+
 bool __set_phys_to_machine_multi(unsigned long pfn,
 		unsigned long mfn, unsigned long nr_pages)
 {
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 3e276eb..9edc8a8 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,19 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
+extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+				   struct gnttab_map_grant_ref *kmap_ops,
+				   struct page **pages, unsigned int count,
+				   bool m2p_override);
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
+extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+				     struct gnttab_map_grant_ref *kmap_ops,
+				     struct page **pages, unsigned int count,
+				     bool m2p_override);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +130,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..305af27 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -881,6 +881,67 @@ static unsigned long mfn_hash(unsigned long mfn)
 	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
 }
 
+int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+			    struct gnttab_map_grant_ref *kmap_ops,
+			    struct page **pages, unsigned int count,
+			    bool m2p_override)
+{
+	int i, ret = 0;
+	bool lazy = false;
+	pte_t *pte;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return 0;
+
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < count; i++) {
+		unsigned long mfn, pfn;
+
+		/* Do not add to override if the map failed. */
+		if (map_ops[i].status)
+			continue;
+
+		if (map_ops[i].flags & GNTMAP_contains_pte) {
+			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
+				(map_ops[i].host_addr & ~PAGE_MASK));
+			mfn = pte_mfn(*pte);
+		} else {
+			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+		}
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+		pages[i]->index = pfn_to_mfn(pfn);
+
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		if (m2p_override) {
+			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
+					       &kmap_ops[i] : NULL);
+			if (ret)
+				goto out;
+		}
+	}
+
+out:
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
@@ -899,13 +960,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -943,20 +997,66 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
+
+int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count,
+			      bool m2p_override)
+{
+	int i, ret = 0;
+	bool lazy = false;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return 0;
+
+	if (m2p_override &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < count; i++) {
+		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
+		unsigned long pfn = page_to_pfn(pages[i]);
+
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+
+		if (m2p_override)
+			ret = m2p_remove_override(pages[i],
+						  kmap_ops ?
+						  &kmap_ops[i] : NULL,
+						  mfn);
+		if (ret)
+			goto out;
+	}
+
+out:
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
+
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -970,10 +1070,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6620b73..875025f 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
 			!rb_next(&persistent_gnt->node)) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		pages[segs_to_unmap] = persistent_gnt->page;
 
 		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, pages,
-				segs_to_unmap);
+			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 			BUG_ON(ret);
 			put_free_pages(blkif, pages, segs_to_unmap);
 			segs_to_unmap = 0;
@@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
 		kfree(persistent_gnt);
 	}
 	if (segs_to_unmap > 0) {
-		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
+		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
 		BUG_ON(ret);
 		put_free_pages(blkif, pages, segs_to_unmap);
 	}
@@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
 				    GNTMAP_host_map, pages[i]->handle);
 		pages[i]->handle = BLKBACK_INVALID_HANDLE;
 		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
-			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
-			                        invcount);
+			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 			BUG_ON(ret);
 			put_free_pages(blkif, unmap_pages, invcount);
 			invcount = 0;
 		}
 	}
 	if (invcount) {
-		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
+		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
 		BUG_ON(ret);
 		put_free_pages(blkif, unmap_pages, invcount);
 	}
@@ -740,7 +737,7 @@ again:
 	}
 
 	if (segs_to_map) {
-		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
+		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
 		BUG_ON(ret);
 	}
 
diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index 073b4a1..34a2704 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
 	}
 
 	pr_debug("map %d+%d\n", map->index, map->count);
-	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
-			map->pages, map->count);
+	err = gnttab_map_refs_userspace(map->map_ops,
+					use_ptemod ? map->kmap_ops : NULL,
+					map->pages,
+					map->count);
 	if (err)
 		return err;
 
@@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
 		}
 	}
 
-	err = gnttab_unmap_refs(map->unmap_ops + offset,
-			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
-			pages);
+	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
+					  use_ptemod ? map->kmap_ops + offset : NULL,
+					  map->pages + offset,
+					  pages);
 	if (err)
 		return err;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 1ce1c40..5efacf8 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -928,15 +928,14 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
 }
 EXPORT_SYMBOL_GPL(gnttab_batch_copy);
 
-int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
-		    struct page **pages, unsigned int count)
+static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+			     struct gnttab_map_grant_ref *kmap_ops,
+			     struct page **pages, unsigned int count,
+			     bool m2p_override)
 {
 	int i, ret;
-	bool lazy = false;
-	pte_t *pte;
-	unsigned long mfn;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
 		return ret;
@@ -947,88 +946,56 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
 						&map_ops[i].status, __func__);
 
-	/* this is basically a nop on x86 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		for (i = 0; i < count; i++) {
-			if (map_ops[i].status)
-				continue;
-			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
-					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
-		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		/* Do not add to override if the map failed. */
-		if (map_ops[i].status)
-			continue;
-
-		if (map_ops[i].flags & GNTMAP_contains_pte) {
-			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
-				(map_ops[i].host_addr & ~PAGE_MASK));
-			mfn = pte_mfn(*pte);
-		} else {
-			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
-		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
-	}
-
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
+	return set_foreign_p2m_mapping(map_ops, kmap_ops, pages, count,
+				       m2p_override);
+}
 
-	return ret;
+int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
-int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kmap_ops,
-		      struct page **pages, unsigned int count)
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
 {
-	int i, ret;
-	bool lazy = false;
+	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
+
+static int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
+			       struct gnttab_map_grant_ref *kmap_ops,
+			       struct page **pages, unsigned int count,
+			       bool m2p_override)
+{
+	int ret;
 
+	BUG_ON(kmap_ops && !m2p_override);
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
 
-	/* this is basically a nop on x86 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		for (i = 0; i < count; i++) {
-			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
-					INVALID_P2M_ENTRY);
-		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
-	}
-
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
+	return clear_foreign_p2m_mapping(unmap_ops, kmap_ops, pages, count,
+					 m2p_override);
+}
 
-	return ret;
+int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
+		    struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
+				struct gnttab_map_grant_ref *kmap_ops,
+				struct page **pages, unsigned int count)
+{
+	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
+}
+EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
+
 static unsigned nr_status_frames(unsigned nr_grant_frames)
 {
 	BUG_ON(grefs_per_grant_frame == 0);
diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
index 5acb1e4..2541c96 100644
--- a/include/xen/grant_table.h
+++ b/include/xen/grant_table.h
@@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
 #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
 
 int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
-		    struct gnttab_map_grant_ref *kmap_ops,
 		    struct page **pages, unsigned int count);
+int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count);
 int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
-		      struct gnttab_map_grant_ref *kunmap_ops,
 		      struct page **pages, unsigned int count);
+int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
+				struct gnttab_map_grant_ref *kunmap_ops,
+				struct page **pages, unsigned int count);
 
 /* Perform a batch of grant map/copy operations. Retry every batch slot
  * for which the hypervisor returns GNTST_eagain. This is typically due

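The refactor above replaces the old kmap_ops parameter with two exported entry points wrapping a single internal worker. A minimal user-space C sketch of that pattern, using hypothetical stub types in place of the real grant-table structures (the names `map_op`, `do_map`, etc. are illustration only, not the kernel API):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct gnttab_map_grant_ref. */
struct map_op { int handle; };

/* Internal worker, mirroring __gnttab_map_refs(): the m2p_override flag
 * says whether the extra kernel-mapping ops (kmap_ops) are meaningful. */
static int do_map(struct map_op *ops, struct map_op *kmap_ops,
                  unsigned int count, bool m2p_override)
{
    if (kmap_ops && !m2p_override)
        return -1;          /* the kernel BUG()s on this combination */
    (void)ops;
    return (int)count;      /* pretend all 'count' mappings succeeded */
}

/* Kernel-only mappings: no kmap_ops, no m2p override. */
int map_refs(struct map_op *ops, unsigned int count)
{
    return do_map(ops, NULL, count, false);
}

/* Mappings destined for user space (e.g. gntdev): kmap_ops + override. */
int map_refs_userspace(struct map_op *ops, struct map_op *kmap_ops,
                       unsigned int count)
{
    return do_map(ops, kmap_ops, count, true);
}
```

The design choice is that callers can no longer pass an inconsistent (kmap_ops, override) combination: each public entry point fixes the flag, and the invalid pairing is unreachable from outside.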
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 22:06:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 22:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDhwc-000304-M0; Wed, 12 Feb 2014 22:06:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WDhwb-0002zz-Cm
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 22:06:13 +0000
Received: from [85.158.137.68:12219] by server-13.bemta-3.messagelabs.com id
	4D/B1-26923-450FBF25; Wed, 12 Feb 2014 22:06:12 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392242770!1483854!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24743 invoked from network); 12 Feb 2014 22:06:11 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 22:06:11 -0000
Received: by mail-lb0-f173.google.com with SMTP id s7so6009274lbd.4
	for <xen-devel@lists.xenproject.org>;
	Wed, 12 Feb 2014 14:06:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=NkD/tfgZ21Hld/ppUPNuDmZlaYOA4d9/CWzW9oGxV5c=;
	b=kaq4slHlX+xd1gL3CqslLF/++9MdhqrRFIiR18ajuQp6FOFCJvtbB3NzB7jxKaK8lm
	YoMkKKyzAfZdZ6FwV1EdViyen/gJSfHMR2Eg9sZijc6Hisecyw6gy7SQqNlwiOJyf0UL
	i91x63FDWlBvDeNwZcP7I+v2GGm70ErdaKeTzJIjqWdU5j1r0+x4DtEEH5GcwASiQbu+
	8phibUm+V1hq+qmy6ioA7EVrqtpj9yTKE9x4bacBBxCv+4BjJIH3avh0Mxghx7vwFn8i
	rootTatCXW37/2934xf56hEdn7qbDrMqrohvA1BuO6rMej/Je1B/ca4UCgBYsnykec8D
	jTJQ==
X-Received: by 10.112.211.233 with SMTP id nf9mr3312571lbc.50.1392242767085;
	Wed, 12 Feb 2014 14:06:07 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 12 Feb 2014 14:05:47 -0800 (PST)
In-Reply-To: <1392203708.13563.50.camel@kazak.uk.xensource.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
	<1392203708.13563.50.camel@kazak.uk.xensource.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 12 Feb 2014 14:05:47 -0800
X-Google-Sender-Auth: iNfaCkUbokthtIKn5jL4-iR0GmE
Message-ID: <CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
	random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, 2014 at 3:15 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-11 at 13:53 -0800, Luis R. Rodriguez wrote:
>> Cc'ing kvm folks as they may have a shared interest on the shared
>> physical case with the bridge (non NAT).
>>
>> On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>> > On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
>> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>> >>
>> >> Although the xen-netback interfaces do not participate in the
>> >> link as a typical Ethernet device interfaces for them are
>> >> still required under the current architecture. IPv6 addresses
>> >> do not need to be created or assigned on the xen-netback interfaces
>> >> however, even if the frontend devices do need them, so clear the
>> >> multicast flag to ensure the net core does not initiate IPv6
>> >> Stateless Address Autoconfiguration.
>> >
>> > How does disabling SAA flow from the absence of multicast?
>>
>> See patch 1 in this series [0], but I explain the issue I see with
>> this on the cover letter [1].
>
> Oop, I felt like I'd missed some context. Thanks for pointing out that
> it was right under my nose.
>
>> In summary the RFCs on IPv6 make it
>> clear you need multicast for Stateless address autoconfiguration
>> (SLAAC is the preferred acronym) and DAD,
>
> That seems reasonable, but I think is the opposite to what I was trying
> to get at.
>
> Why is it not possible to disable SLAAC and/or DAD even if multicast is
> present?

Even if you set your IP address manually you still need to send router
solicitations using multicast, and you still need to perform DAD.

> IOW -- enabling/disabling multicast seems to me to be an odd proxy for
> disabling SLAAC or DAD and AIUI your patch fixes the opposite case,
> which is to avoid SLAAC and DAD on interfaces which don't do multicast
> (which makes sense since those protocols involve multicast).

Agreed :)

>>  however the net core has not
>> made this a requirement, and hence the patch. The caveat which I
>> address on the cover letter needs to be seriously considered though.
>>
>> [0] http://marc.info/?l=linux-netdev&m=139207142110535&w=2
>> [1] http://marc.info/?l=linux-netdev&m=139207142110536&w=2
>>
>> > Surely these should be controlled logically independently even if there is some
>> > notional linkage.
>>
>> When a node hops on a network it will query its network by sending a
>> router solicitation multicast request for its configuration
>> parameters, the router can respond with router advertisements to
>> disable SLAAC.
>
> Surely it should be possible for an interface to be explicitly not ipv6
> enabled, in which case it doesn't want to do any solicitation at all.

There are run-time configuration options, but no net_device flags.
More on this below.
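For reference, the run-time knobs in question are the per-interface IPv6 entries under procfs; a sketch of how an admin could use them today (the interface name vif1.0 is just an example, and the writes need root):

```shell
# Disable IPv6 entirely on one backend interface (assumed name vif1.0):
echo 1 > /proc/sys/net/ipv6/conf/vif1.0/disable_ipv6

# Or keep IPv6 but suppress SLAAC and router-advertisement processing:
echo 0 > /proc/sys/net/ipv6/conf/vif1.0/autoconf
echo 0 > /proc/sys/net/ipv6/conf/vif1.0/accept_ra
```

This is per-interface state set after the device exists, which is exactly the difference from a net_device flag the driver could set at creation time.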

>> Apart from that we have no other means to disable SLAAC neatly, and as
>> I gather that would be counter to the IPv6 RFCs anyway, and that makes
>> sense.
>
> In your[0] post you say:
>         it should be noted that RFC4682 Section 5.4
>         makes it clear that DAD *MUST* be performed on all unicast
>         addresses prior to assigning them to an interface
>
> is that what you mean by counter to the RFCs?

Yeap.

> In my reading this "must do DAD" requirement only comes into effect if
> you are trying to assign a unicast address to an interface. It should be
> possible to simply not do that for an interface.

That is correct, so why enable IPv6 on those interfaces at all? We have
the loopback for local stuff.

>> > Can SAA not be disabled directly?
>>
>> Nope. The IPv6 core assumes all devices want IPv6
>
> IMHO it is entirely reasonable for an admin to desire that an interface
> has nothing at all to do with IPv6. At which point all of the
> requirements for multicast which flow from enabling IPv6 disappear.

Agreed.

>> >>  since using this can create an issue if a user
>> >> decides to enable multicast on the backend interfaces
>> >
>> > Please explain what this issue is.
>>
>> I explained this on the cover letter but should have elaborated more
>> here. The *known* and *reported* issue is that xen-backend interfaces
>> can end up doing SLAAC, and you can obviously end up in situations
>> where the MAC address and IP address clash, despite IPv6's design of
>> randomizing the timing of neighbor solicitations and DAD. Ultimately a
>> series of services can end up filling your log messages with tons of
>> warnings.
>
> Right, this makes sense, but it seems like the solution should be to
> stop SLAAC from happening directly and not by playing tricks with
> multicast that happen to have the side effect of disabling SLAAC.

Agreed. However, as I see it since yesterday, the requirement of
multicast for IPv6 should likely become a requirement for dev->type
ether. There is a module parameter to disable autoconf completely,
though, so I believe there may be some ether dev->type devices out
there doing IPv6 without multicast; while that seems counter to the
requirements in the RFCs, it is something to consider.

At this point I consider the above a separate discussion (but one I'll
follow up with an RFCv2 patch), given that it seems we are in agreement
that we should *consider* the ability to disable IPv6 altogether on a
net_device. More on this below.

>> Another issue, not yet reported but one I suspect is critical and can
>> bite both xen and kvm in the ass, is described in Appendix A of RFC
>> 4862 [2], which considers the problem of receiving duplicate packets
>> on the same link with the same link-layer address. I think to address
>> that we can also consider dev->type in all the different cases.
>
> We should never actually be generating any traffic with this address
> FWIW, all the generated traffic will have the guest's actual MAC. (at
> least in the bridging case, perhaps with routing or NAT things are
> different, but I think in that case the traffic would appear to come
> from the hosts outgoing interface, not the vif device)

Which leads me to believe that creating a regular interface for a
backend interface seems overkill. I'm evaluating the minimal
requirements on the xen-backend case for an interface and believe this
can likely be shared as an interface type with kvm. Furthermore,
bridging could then be extended to not use its MAC address for the
root port even if STP were enabled.

>> My preference, rather than trying to simply disable ipv6, is actually
>> seeing how xen-netback interfaces (and the kvm TAP topology) can be
>> simplified further. As I see it there is tons of code which could end
>> up being used on these xen-netback interfaces (and TAP for kvm) which
>> is simply not needed for the use case of just sending data back and
>> forth between host and guest: ipv6 is not needed at all, and I tried
>> to test removing ipv4, but ran into issues.
>
> Bridging is not the only way to provide VM network connectivity. It
> should also be possible to do routing and even NAT by configuring
> appropriate p2p links and routing tables in the host. For that to work I
> think the tap and vif devices do need some sort of IPv[46] capability,

We have to be careful for sure. I'll try to test all cases including
kvm, but architecturally, as I see it so far, these things are simply
exchanging data through their respective backend channels. I know the
ipv6 interfaces are unused, and I'm going to dig further to see why at
least one ipv4 interface is needed. I cannot fathom why either of
these interfaces would be required. I'll do a bit more digging.

The TAP interface requirements may be different, I haven't yet dug into that.

> so you can't just nuke that stuff completely. (Maybe/likely it also
> requires them to have a sensible MAC address, I'm not sure).

I'll dig.

>> [2] http://tools.ietf.org/html/rfc4862#appendix-A
>> [3] https://gitorious.org/opensuse/kernel-source/source/8e16582178a29b03e850468004a47e7be5ed3005:patches.xen/ipv6-no-autoconf
>>
>> > Also how can a user enable multicast on the b/e?
>>
>> ip link set dev <devname> multicast on
>> ip link set dev <devname> multicast off
>>
>> > AFAIK only Solaris ever
>> > implemented the m/c bits of the Xen PV network protocol (not that I
>> > wouldn't welcome attempts to add it to other platforms)
>>
>> Do you mean kernel configuration multicast? Or networking?
>
> I meant the PV protocol extension which allows guests (netfront) to
> register to receive multicast frames across the PV ring -- i.e. for
> multicast to work from the guests PoV.

Not quite sure I understand; ipv6 works on guests, so multicast works,
so it's unclear what you mean by multicast frames across the PV ring.
Is there any code or documents I can look at?

> (maybe that was just an optimisation though and the default is to flood
> everything, it was a long time ago)

From a networking perspective everything is being flooded as I've seen
it so far.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 23:42:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 23:42:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDjRg-0005Ko-IF; Wed, 12 Feb 2014 23:42:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WDjRe-0005Kj-So
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 23:42:23 +0000
Received: from [85.158.143.35:20999] by server-1.bemta-4.messagelabs.com id
	74/1A-31661-ED60CF25; Wed, 12 Feb 2014 23:42:22 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392248539!5233028!1
X-Originating-IP: [65.55.88.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26934 invoked from network); 12 Feb 2014 23:42:21 -0000
Received: from tx2ehsobe005.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.15)
	by server-11.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	12 Feb 2014 23:42:21 -0000
Received: from mail210-tx2-R.bigfish.com (10.9.14.234) by
	TX2EHSOBE002.bigfish.com (10.9.40.22) with Microsoft SMTP Server id
	14.1.225.22; Wed, 12 Feb 2014 23:42:19 +0000
Received: from mail210-tx2 (localhost [127.0.0.1])	by
	mail210-tx2-R.bigfish.com (Postfix) with ESMTP id 2FF348C0100;
	Wed, 12 Feb 2014 23:42:19 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 3
X-BigFish: VPS3(z37d5kzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzdchz1de098h8275bh1de097hz2dh839hd24he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h255eh1155h)
Received: from mail210-tx2 (localhost.localdomain [127.0.0.1]) by mail210-tx2
	(MessageSwitch) id 1392248537217844_6650;
	Wed, 12 Feb 2014 23:42:17 +0000 (UTC)
Received: from TX2EHSMHS010.bigfish.com (unknown [10.9.14.233])	by
	mail210-tx2.bigfish.com (Postfix) with ESMTP id 305218A0031;
	Wed, 12 Feb 2014 23:42:17 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by TX2EHSMHS010.bigfish.com
	(10.9.99.110) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 12 Feb 2014 23:42:16 +0000
X-WSS-ID: 0N0WPUF-07-DNJ-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2A7F312C006E;	Wed, 12 Feb 2014 17:42:15 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 12 Feb 2014 17:42:19 -0600
Received: from arav-xen-dinar.amd.com (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server id 14.2.328.9;
	Wed, 12 Feb 2014 18:42:00 -0500
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: <chegger@amazon.de>, <jinsong.liu@intel.com>,
	<suravee.suthikulpanit@amd.com>, <boris.ostrovsky@oracle.com>,
	<xen-devel@lists.xen.org>, <JBeulich@suse.com>
Date: Wed, 12 Feb 2014 17:26:48 -0600
Message-ID: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Subject: [Xen-devel] [PATCH V4.1] mcheck,
	vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
registers. But due to this statement:
switch ( msr & (MSR_IA32_MC0_CTL | 3) )
we were wrongly masking off the top two bits, which meant the register
accesses never made it to the vmce_amd_* functions.

Corrected this problem by modifying the mask in this patch to allow the
AMD thresholding registers to fall through to the 'default' case, which
in turn allows the vmce_amd_* functions to handle accesses to the
registers.

While at it, remove some clutter in the vmce_amd_* functions. The
current policy of returning zero for reads and ignoring writes is
retained.

Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
---
 xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
 xen/arch/x86/cpu/mcheck/vmce.c    |   14 +++++++++++--
 2 files changed, 18 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
index 61319dc..03797ab 100644
--- a/xen/arch/x86/cpu/mcheck/amd_f10.c
+++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
@@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
 /* amd specific MCA MSR */
 int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		v->arch.vmce.bank[1].mci_misc = val; 
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* ignore write: we do not emulate link and l3 cache errors
-		 * to the guest.
-		 */
-		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    /* Do nothing as we don't emulate this MC bank currently */
+    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
+    return 1;
 }
 
 int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 {
-	switch (msr) {
-	case MSR_F10_MC4_MISC1: /* DRAM error type */
-		*val = v->arch.vmce.bank[1].mci_misc;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	case MSR_F10_MC4_MISC2: /* Link error type */
-	case MSR_F10_MC4_MISC3: /* L3 cache error type */
-		/* we do not emulate link and l3 cache
-		 * errors to the guest.
-		 */
-		*val = 0;
-		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
-		break;
-	default:
-		return 0;
-	}
-
-	return 1;
+    /* Assign '0' as we don't emulate this MC bank currently */
+    *val = 0;
+    return 1;
 }
diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
index f6c35db..84843fc 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
 
     *val = 0;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    /* Allow only first 3 MC banks into switch() */
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /* stick all 1's to MCi_CTL */
@@ -148,6 +149,10 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
             ret = vmce_intel_rdmsr(v, msr, val);
             break;
         case X86_VENDOR_AMD:
+            /*
+             * The extended block of AMD thresholding registers falls
+             * into 'default'. Handle reads here.
+             */
             ret = vmce_amd_rdmsr(v, msr, val);
             break;
         default:
@@ -210,7 +215,8 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
     int ret = 1;
     unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
 
-    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
+    /* Allow only first 3 MC banks into switch() */
+    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
     {
     case MSR_IA32_MC0_CTL:
         /*
@@ -246,6 +252,10 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             ret = vmce_intel_wrmsr(v, msr, val);
             break;
         case X86_VENDOR_AMD:
+            /*
+             * The extended block of AMD thresholding registers falls
+             * into 'default'. Handle writes here.
+             */
             ret = vmce_amd_wrmsr(v, msr, val);
             break;
         default:
-- 
1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 23:42:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 23:42:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDjRm-0005L2-VE; Wed, 12 Feb 2014 23:42:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WDjRl-0005Kv-06
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 23:42:29 +0000
Received: from [85.158.139.211:31104] by server-10.bemta-5.messagelabs.com id
	AD/35-08578-4E60CF25; Wed, 12 Feb 2014 23:42:28 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392248545!3435682!1
X-Originating-IP: [207.46.163.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2422 invoked from network); 12 Feb 2014 23:42:27 -0000
Received: from co9ehsobe005.messaging.microsoft.com (HELO
	co9outboundpool.messaging.microsoft.com) (207.46.163.28)
	by server-5.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	12 Feb 2014 23:42:27 -0000
Received: from mail90-co9-R.bigfish.com (10.236.132.238) by
	CO9EHSOBE027.bigfish.com (10.236.130.90) with Microsoft SMTP Server id
	14.1.225.22; Wed, 12 Feb 2014 23:42:24 +0000
Received: from mail90-co9 (localhost [127.0.0.1])	by mail90-co9-R.bigfish.com
	(Postfix) with ESMTP id 966D5740159;
	Wed, 12 Feb 2014 23:42:24 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(z579ehzbb2dI98dI9371I1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h944hd25hd2bhf0ah1220h1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h209eh2216h22d0h2336h2438h2461h2487h24ach24d7h2516h2545h255eh1155h)
Received: from mail90-co9 (localhost.localdomain [127.0.0.1]) by mail90-co9
	(MessageSwitch) id 1392248542655993_7904;
	Wed, 12 Feb 2014 23:42:22 +0000 (UTC)
Received: from CO9EHSMHS031.bigfish.com (unknown [10.236.132.253])	by
	mail90-co9.bigfish.com (Postfix) with ESMTP id 9C02D9C0048;
	Wed, 12 Feb 2014 23:42:22 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CO9EHSMHS031.bigfish.com
	(10.236.130.41) with Microsoft SMTP Server id 14.16.227.3;
	Wed, 12 Feb 2014 23:42:18 +0000
X-WSS-ID: 0N0WPUG-07-DNK-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	248E612C006F;	Wed, 12 Feb 2014 17:42:15 -0600 (CST)
Received: from SATLEXDAG02.amd.com (10.181.40.5) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 12 Feb 2014 17:42:19 -0600
Received: from arav-dinar (10.180.168.240) by SATLEXDAG02.amd.com
	(10.181.40.5) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Wed, 12 Feb 2014 18:42:10 -0500
Date: Wed, 12 Feb 2014 17:43:13 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
To: "Egger, Christoph" <chegger@amazon.de>
Message-ID: <20140212234312.GA30754@arav-dinar>
References: <52E7A17D020000780011784E@nat28.tlf.novell.com>
	<52F2A1E4.9030700@amd.com>
	<52F35EE60200007800119A84@nat28.tlf.novell.com>
	<52FB409A.3010506@amazon.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FB409A.3010506@amazon.de>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: "jinsong.liu@intel.com" <jinsong.liu@intel.com>,
	"boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"Suthikulpanit, Suravee" <Suravee.Suthikulpanit@amd.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2 V2] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, 2014 at 03:36:26AM -0600, Egger, Christoph wrote:
> On 06.02.14 10:07, Jan Beulich wrote:
> >>>> On 05.02.14 at 21:41, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> > wrote:
> >> On 1/28/2014 5:24 AM, Jan Beulich wrote:
> >>>>>> On 27.01.14 at 23:44, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> 
> >> wrote:
> > 
> > No - bit 4 is part of what forms the bank number. Hence it must
> > be masked out in the switch() expression.
> 
> I prefer to see a comment in the code that makes this clear.
> 

Agreed, I have added very brief comments in the code.
Do let me know if they need to be more verbose.

Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 12 23:50:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 23:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDjZT-0005oA-BN; Wed, 12 Feb 2014 23:50:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mikeneiderhauser@gmail.com>) id 1WDeeX-00056D-0w
	for xen-devel@lists.xen.org; Wed, 12 Feb 2014 18:35:21 +0000
Received: from [85.158.137.68:33410] by server-9.bemta-3.messagelabs.com id
	48/78-10184-8EEBBF25; Wed, 12 Feb 2014 18:35:20 +0000
X-Env-Sender: mikeneiderhauser@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392230117!131989!1
X-Originating-IP: [209.85.212.47]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29030 invoked from network); 12 Feb 2014 18:35:18 -0000
Received: from mail-vb0-f47.google.com (HELO mail-vb0-f47.google.com)
	(209.85.212.47)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	12 Feb 2014 18:35:18 -0000
Received: by mail-vb0-f47.google.com with SMTP id p6so7220014vbe.20
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 10:35:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=4FI8bJuR7rYcDLOUmfS1kcXyufRYv3GA9wc3iUBPayE=;
	b=HWXadle/3oLweecnnEbJxH3EyvgfIIEg1GukuRjO8F5p7GEC48YeMFVNKwmVXkLjKY
	s1W/tmAbgRaNPQrAM/o3LBrq4CI65Bu4FkqUZWyMYpgmgbdLnhr1JcgJnKIZ83/VlAuM
	CqLXBw89ZYv91ibgxeJy7bor3oyop8EaF7da5/j/AYCOdh8eRg6eEiHZRU1VRLmwDIAt
	ZWCLYPO/S/29qYLtuDdbPyC7MYIV0T9xrAKM9PrycKNKCHTl04/u2G1sZ9PUtysyLp8Y
	oFNSHsD56HQLhXWYf8ezlpcvxaJkOPCJB6brdLjy5Zw6NlIIg5yT0OLcRng/yIP6DVHw
	Dvqw==
X-Received: by 10.220.159.4 with SMTP id h4mr34272409vcx.1.1392230116769; Wed,
	12 Feb 2014 10:35:16 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Wed, 12 Feb 2014 10:34:36 -0800 (PST)
In-Reply-To: <20140212182526.GA28938@phenom.dumpdata.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
	<CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
	<20140210184707.GA18755@phenom.dumpdata.com>
	<CA+XTOOhnP4OGBeMF4m8k5JxTLN4qxZ8XhJA8bXg_HBDH1TX5kg@mail.gmail.com>
	<20140212182526.GA28938@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Wed, 12 Feb 2014 13:34:36 -0500
Message-ID: <CA+XTOOiabhmukhqRAG9ibst8X4ZZGxev-O9DPmcgCXffmDxVsQ@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Wed, 12 Feb 2014 23:50:26 +0000
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1203612201343063609=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1203612201343063609==
Content-Type: multipart/alternative; boundary=001a11c2ca0c8e46fc04f239d553

--001a11c2ca0c8e46fc04f239d553
Content-Type: text/plain; charset=ISO-8859-1

It looks very similar.


On Wed, Feb 12, 2014 at 1:25 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Tue, Feb 11, 2014 at 08:04:04AM -0500, Mike Neiderhauser wrote:
> > So I went ahead to tested the setup I am trying to achieve using xen.
>  This
> > setup basically requires two isolated machines that can be used for
> network
> > testing.  On the hvm mentioned above, this testing fails due to
> something I
> > cannot wrap my head around.  I believe it is still related to the PCI
> > passthrough of a device and I believe it is related to the libxl error
> > mentioned above. Can anyone shed some light on what is going on?  Is it a
> > driver issue? (Broadcom Corporation NetXtreme II BCM5716 Gigabit
> Ethernet)
>
> That looks like this issue:
>
> igb and bnx2: "NETDEV WATCHDOG: transmit queue timed out" when skb has
> huge linear buffer
>
> (http://lkml.org/lkml/2014/1/30/358) ?
> >
> > [79464.816085] ------------[ cut here ]------------
> > [79464.816093] WARNING: at
> > /build/buildd/linux-lts-raring-3.8.0/net/sched/sch_generic.c:254
> > dev_watchdog+0x262/0x270()
> > [79464.816094] Hardware name: HVM domU
> > [79464.816096] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 1 timed out
> > [79464.816096] Modules linked in: esp6(F) ah6(F) xfrm6_mode_tunnel(F)
> > xfrm_user(F) xfrm4_tunnel(F) tunnel4(F) ipcomp(F) xfrm_ipcomp(F)
> authenc(F)
> > esp4(F) ah4(F) xfrm4_mode_tunnel(F) deflate(F) zlib_deflate(F) ctr(F)
> > twofish_generic(F) twofish_avx_x86_64(F) twofish_x86_64_3way(F)
> > twofish_x86_64(F) twofish_common(F) camellia_generic(F)
> > camellia_aesni_avx_x86_64(F) camellia_x86_64(F) serpent_avx_x86_64(F)
> > serpent_sse2_x86_64(F) glue_helper(F) serpent_generic(F)
> > blowfish_generic(F) blowfish_x86_64(F) blowfish_common(F)
> > cast5_avx_x86_64(F) cast5_generic(F) cast_common(F) des_generic(F)
> xcbc(F)
> > rmd160(F) crypto_null(F) af_key(F) xfrm_algo(F) 8021q(F) garp(F) stp(F)
> > llc(F) ipmi_msghandler(F) autofs4(F) ghash_clmulni_intel(F)
> aesni_intel(F)
> > ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F)
> cirrus(F)
> > ttm(F) drm_kms_helper(F) nfsd(F) nfs_acl(F) drm(F) auth_rpcgss(F)
> > sysimgblt(F) nfs(F) psmouse(F) sysfillrect(F) syscopyarea(F) fscache(F)
> > lockd(F) microcode(F) joydev(F) xen_kbdfront(F) i2c_piix4(F) serio_raw(F)
> > lp(F) mac_hid(F) sunrpc(F) parport(F) floppy(F) bnx2(F) [last unloaded:
> > ipmi_msghandler]
> > [79464.816139] Pid: 0, comm: swapper/1 Tainted: GF
> >  3.8.0-29-generic #42~precise1-Ubuntu
> > [79464.816140] Call Trace:
> > [79464.816142]  <IRQ>  [<ffffffff81059b0f>]
> warn_slowpath_common+0x7f/0xc0
> > [79464.816149]  [<ffffffff8135b9d4>] ? timerqueue_add+0x64/0xb0
> > [79464.816151]  [<ffffffff81059c06>] warn_slowpath_fmt+0x46/0x50
> > [79464.816154]  [<ffffffff81076794>] ? wake_up_worker+0x24/0x30
> > [79464.816157]  [<ffffffff81602062>] dev_watchdog+0x262/0x270
> > [79464.816160]  [<ffffffff810771f0>] ? __queue_work+0x2d0/0x2d0
> > [79464.816161]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
> > [79464.816164]  [<ffffffff8106995b>] call_timer_fn+0x3b/0x150
> > [79464.816167]  [<ffffffff8144f5a1>] ?
> add_interrupt_randomness+0x41/0x190
> > [79464.816170]  [<ffffffff8106b427>] run_timer_softirq+0x267/0x2c0
> > [79464.816173]  [<ffffffff810ee3c9>] ? handle_irq_event_percpu+0xa9/0x210
> > [79464.816175]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
> > [79464.816177]  [<ffffffff81062620>] __do_softirq+0xc0/0x240
> > [79464.816180]  [<ffffffff810b67cd>] ? tick_do_update_jiffies64+0x9d/0xd0
> > [79464.816184]  [<ffffffff816fdd5c>] call_softirq+0x1c/0x30
> > [79464.816188]  [<ffffffff81016775>] do_softirq+0x65/0xa0
> > [79464.816189]  [<ffffffff810628fe>] irq_exit+0x8e/0xb0
> > [79464.816193]  [<ffffffff8140a125>] xen_evtchn_do_upcall+0x35/0x50
> > [79464.816195]  [<ffffffff816fdeed>] xen_hvm_callback_vector+0x6d/0x80
> > [79464.816196]  <EOI>  [<ffffffff81084008>] ? hrtimer_start+0x18/0x20
> > [79464.816201]  [<ffffffff81045136>] ? native_safe_halt+0x6/0x10
> > [79464.816204]  [<ffffffff8101cc33>] default_idle+0x53/0x1f0
> > [79464.816206]  [<ffffffff8101dad9>] cpu_idle+0xd9/0x120
> > [79464.816209]  [<ffffffff816d10fe>] start_secondary+0xc3/0xc5
> > [79464.816210] ---[ end trace 48cf6b13be16e0ae ]---
> > [79464.816214] bnx2 0000:00:05.0 eth1: <--- start FTQ dump --->
> > [79464.816220] bnx2 0000:00:05.0 eth1: RV2P_PFTQ_CTL 00010000
> > [79464.816225] bnx2 0000:00:05.0 eth1: RV2P_TFTQ_CTL 00020000
> > [79464.816258] bnx2 0000:00:05.0 eth1: RV2P_MFTQ_CTL 00004000
> > [79464.816263] bnx2 0000:00:05.0 eth1: TBDR_FTQ_CTL 00004000
> > [79464.816268] bnx2 0000:00:05.0 eth1: TDMA_FTQ_CTL 00010000
> > [79464.816272] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
> > [79464.816276] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
> > [79464.816280] bnx2 0000:00:05.0 eth1: TPAT_FTQ_CTL 00010000
> > [79464.816285] bnx2 0000:00:05.0 eth1: RXP_CFTQ_CTL 00008000
> > [79464.816289] bnx2 0000:00:05.0 eth1: RXP_FTQ_CTL 00100000
> > [79464.816293] bnx2 0000:00:05.0 eth1: COM_COMXQ_FTQ_CTL 00010000
> > [79464.816297] bnx2 0000:00:05.0 eth1: COM_COMTQ_FTQ_CTL 00020000
> > [79464.816302] bnx2 0000:00:05.0 eth1: COM_COMQ_FTQ_CTL 00010000
> > [79464.816306] bnx2 0000:00:05.0 eth1: CP_CPQ_FTQ_CTL 00004000
> > [79464.816308] bnx2 0000:00:05.0 eth1: CPU states:
> > [79464.816321] bnx2 0000:00:05.0 eth1: 045000 mode b84c state 80001000
> > evt_mask 500 pc 8001288 pc 8001288 instr 38640001
> > [79464.816335] bnx2 0000:00:05.0 eth1: 085000 mode b84c state 80005000
> > evt_mask 500 pc 8000a5c pc 8000a5c instr 10400016
> > [79464.816349] bnx2 0000:00:05.0 eth1: 0c5000 mode b84c state 80000000
> > evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
> > [79464.816362] bnx2 0000:00:05.0 eth1: 105000 mode b8cc state 80000000
> > evt_mask 500 pc 8000a9c pc 8000a94 instr 8c420020
> > [79464.816375] bnx2 0000:00:05.0 eth1: 145000 mode b880 state 80000000
> > evt_mask 500 pc 8009c00 pc 800d948 instr 30420040
> > [79464.816389] bnx2 0000:00:05.0 eth1: 185000 mode b8cc state 80008000
> > evt_mask 500 pc 8000c58 pc 8000c58 instr 1092823
> > [79464.816392] bnx2 0000:00:05.0 eth1: <--- end FTQ dump --->
> > [79464.816394] bnx2 0000:00:05.0 eth1: <--- start TBDC dump --->
> > [79464.816399] bnx2 0000:00:05.0 eth1: TBDC free cnt: 32
> > [79464.816401] bnx2 0000:00:05.0 eth1: LINE     CID  BIDX   CMD  VALIDS
> > [79464.816410] bnx2 0000:00:05.0 eth1: 00    001000  00e8   00    [0]
> > [79464.816420] bnx2 0000:00:05.0 eth1: 01    001000  00e8   00    [0]
> > [79464.816429] bnx2 0000:00:05.0 eth1: 02    000800  afc8   00    [0]
> > [79464.816438] bnx2 0000:00:05.0 eth1: 03    000800  afb8   00    [0]
> > [79464.816447] bnx2 0000:00:05.0 eth1: 04    000800  afd8   00    [0]
> > [79464.816456] bnx2 0000:00:05.0 eth1: 05    000800  afe0   00    [0]
> > [79464.816465] bnx2 0000:00:05.0 eth1: 06    000800  afe8   00    [0]
> > [79464.816474] bnx2 0000:00:05.0 eth1: 07    000800  afd0   00    [0]
> > [79464.816485] bnx2 0000:00:05.0 eth1: 08    001000  3510   00    [0]
> > [79464.816494] bnx2 0000:00:05.0 eth1: 09    000800  aec0   00    [0]
> > [79464.816504] bnx2 0000:00:05.0 eth1: 0a    001000  3530   00    [0]
> > [79464.816514] bnx2 0000:00:05.0 eth1: 0b    000800  aec8   00    [0]
> > [79464.816523] bnx2 0000:00:05.0 eth1: 0c    000800  aed0   00    [0]
> > [79464.816559] bnx2 0000:00:05.0 eth1: 0d    001000  34f8   00    [0]
> > [79464.816570] bnx2 0000:00:05.0 eth1: 0e    001000  3500   00    [0]
> > [79464.816580] bnx2 0000:00:05.0 eth1: 0f    001000  3518   00    [0]
> > [79464.816590] bnx2 0000:00:05.0 eth1: 10    1fbc00  2fe8   7d    [0]
> > [79464.816599] bnx2 0000:00:05.0 eth1: 11    1ab780  fff8   7d    [0]
> > [79464.816608] bnx2 0000:00:05.0 eth1: 12    17ff00  b908   f7    [0]
> > [79464.816618] bnx2 0000:00:05.0 eth1: 13    0cb700  ff40   d7    [0]
> > [79464.816627] bnx2 0000:00:05.0 eth1: 14    177a80  efe0   03    [0]
> > [79464.816637] bnx2 0000:00:05.0 eth1: 15    037d80  9f88   72    [0]
> > [79464.816646] bnx2 0000:00:05.0 eth1: 16    1bae00  eef8   ce    [0]
> > [79464.816657] bnx2 0000:00:05.0 eth1: 17    1bbc80  a7f8   df    [0]
> > [79464.816666] bnx2 0000:00:05.0 eth1: 18    17e180  6aa8   e4    [0]
> > [79464.816675] bnx2 0000:00:05.0 eth1: 19    07ff80  6e50   dd    [0]
> > [79464.816683] bnx2 0000:00:05.0 eth1: 1a    1fda80  f790   6e    [0]
> > [79464.816694] bnx2 0000:00:05.0 eth1: 1b    151580  d7b0   fc    [0]
> > [79464.816703] bnx2 0000:00:05.0 eth1: 1c    1b9f80  cef8   6b    [0]
> > [79464.816712] bnx2 0000:00:05.0 eth1: 1d    1ebf00  ffa8   df    [0]
> > [79464.816723] bnx2 0000:00:05.0 eth1: 1e    1e7e00  ff78   ef    [0]
> > [79464.816731] bnx2 0000:00:05.0 eth1: 1f    166e80  fbd8   aa    [0]
> > [79464.816733] bnx2 0000:00:05.0 eth1: <--- end TBDC dump --->
> > [79464.817101] bnx2 0000:00:05.0 eth1: DEBUG: intr_sem[0] PCI_CMD[00100406]
> > [79464.817239] bnx2 0000:00:05.0 eth1: DEBUG: PCI_PM[19002008]
> > PCI_MISC_CFG[92000088]
> > [79464.817248] bnx2 0000:00:05.0 eth1: DEBUG: EMAC_TX_STATUS[00000008]
> > EMAC_RX_STATUS[00000000]
> > [79464.817252] bnx2 0000:00:05.0 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
> > [79464.817257] bnx2 0000:00:05.0 eth1: DEBUG:
> > HC_STATS_INTERRUPT_STATUS[01fc0003]
> > [79464.817261] bnx2 0000:00:05.0 eth1: DEBUG: PBA[00000003]
> > [79464.817264] bnx2 0000:00:05.0 eth1: <--- start MCP states dump --->
> > [79464.817270] bnx2 0000:00:05.0 eth1: DEBUG: MCP_STATE_P0[0003611e]
> > MCP_STATE_P1[0003611e]
> > [79464.817278] bnx2 0000:00:05.0 eth1: DEBUG: MCP mode[0000b880]
> > state[80000000] evt_mask[00000500]
> > [79464.817286] bnx2 0000:00:05.0 eth1: DEBUG: pc[08003ebc] pc[0800d994]
> > instr[32020020]
> > [79464.817288] bnx2 0000:00:05.0 eth1: DEBUG: shmem states:
> > [79464.817296] bnx2 0000:00:05.0 eth1: DEBUG: drv_mb[01030027]
> > fw_mb[00000027] link_status[0008506b]
> > [79464.817301]  drv_pulse_mb[0000338a]
> > [79464.817306] bnx2 0000:00:05.0 eth1: DEBUG: dev_info_signature[44564907] reset_type[01005254]
> > [79464.817310]  condition[0003611e]
> > [79464.817318] bnx2 0000:00:05.0 eth1: DEBUG: 000001c0: 01005254 42530083
> > 0003611e 00000000
> > [79464.817328] bnx2 0000:00:05.0 eth1: DEBUG: 000003cc: 00000000 00000000
> > 00000000 00000000
> > [79464.817357] bnx2 0000:00:05.0 eth1: DEBUG: 000003dc: 00000000 00000000
> > 00000000 00000000
> > [79464.817367] bnx2 0000:00:05.0 eth1: DEBUG: 000003ec: 00000000 00000000
> > 00000000 00000000
> > [79464.817372] bnx2 0000:00:05.0 eth1: DEBUG: 0x3fc[00000000]
> > [79464.817374] bnx2 0000:00:05.0 eth1: <--- end MCP states dump --->
> > [79464.872988] bnx2 0000:00:05.0 eth1: NIC Copper Link is Down
> >
> >
> >
> > On Mon, Feb 10, 2014 at 1:47 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote:
> > > > Thanks for the answers on the timeline.
> > > >
> > > > When I start the HVM with the Broadcom adapter, I get this message back.
> > > > Parsing config from /etc/xen/ubuntu-hvm-1.cfg
> > > > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > > > support reset from sysfs for PCI device 0000:05:00.0
> > > > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel doesn't
> > > > support reset from sysfs for PCI device 0000:05:00.1
> > > >
> > > > However, the devices appear in the HVM.  Is this something that I should be
> > > > concerned about?
> > >
> > > No. Xen pciback does the reset automatically.
> > >
> > > Actually we might want to ditch that reporting in libxl, or maybe just
> > > implement a stub function in xen-pciback so that libxl will be happy.
> > >
> > > >
> > > >
> > > >
> > > > On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu>
> > > wrote:
> > > >
> > > > > On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> > > > > <mikeneiderhauser@gmail.com> wrote:
> > > > > > Works like a charm.  I do not have physical access to the computer this
> > > > > > weekend to verify that the cards are isolated, but the HVM starts and
> > > > > > appears to be working well.
> > > > > >
> > > > > > When do you think Xen 4.4 will be released?  The article I read mentioned it
> > > > > > will be released in 2014 (hinting towards the end of February).  I also read
> > > > > > 'When it is ready.'
> > > > > >
> > > > > > Any timeline would be great.
> > > > >
> > > > > I'm afraid that's about all we can give. :-)  We've locked down
> > > > > development for 2 months now and are working on finding and fixing
> > > > > bugs.  If there are no more blocker bugs or other unforeseen delays,
> > > > > it should be out by the end of February.  But there are necessarily
> > > > > significant unknowns, so we can't make any promises.
> > > > >
> > > > >  -George
> > > > >
> > >
>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Wed Feb 12 23:50:32 2014
Date: Wed, 12 Feb 2014 12:17:39 -0500
From: Bill Fink <billfink@mindspring.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-Id: <20140212121739.ecb2f222.billfink@mindspring.com>
In-Reply-To: <1392203708.13563.50.camel@kazak.uk.xensource.com>
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address

On Wed, 12 Feb 2014, Ian Campbell wrote:

> On Tue, 2014-02-11 at 13:53 -0800, Luis R. Rodriguez wrote:
> > Cc'ing kvm folks as they may have a shared interest on the shared
> > physical case with the bridge (non NAT).
> > 
> > On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
> > >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> > >>
> > >> Although the xen-netback interfaces do not participate in the
> > >> link as a typical Ethernet device, interfaces for them are
> > >> still required under the current architecture. IPv6 addresses
> > >> do not need to be created or assigned on the xen-netback interfaces
> > >> however, even if the frontend devices do need them, so clear the
> > >> multicast flag to ensure the net core does not initiate IPv6
> > >> Stateless Address Autoconfiguration.
> > >
> > > How does disabling SAA flow from the absence of multicast?
> > 
> > See patch 1 in this series [0], but I explain the issue I see with
> > this on the cover letter [1].
> 
> Oops, I felt like I'd missed some context. Thanks for pointing out that
> it was right under my nose.
> 
> > In summary, the RFCs on IPv6 make it
> > clear you need multicast for Stateless address autoconfiguration
> > (SLAAC is the preferred acronym) and DAD,
> 
> That seems reasonable, but I think is the opposite to what I was trying
> to get at.
> 
> Why is it not possible to disable SLAAC and/or DAD even if multicast is
> present?
> 
> IOW -- enabling/disabling multicast seems to me to be an odd proxy for
> disabling SLAAC or DAD and AIUI your patch fixes the opposite case,
> which is to avoid SLAAC and DAD on interfaces which don't do multicast
> (which makes sense since those protocols involve multicast).

Forgive me if this doesn't make sense in this context since
I'm not a kernel developer, but I was just wondering if any of
the sysctls:

	/proc/sys/net/ipv6/conf/<ifc>/disable_ipv6
	/proc/sys/net/ipv6/conf/<ifc>/accept_dad
	/proc/sys/net/ipv6/conf/<ifc>/accept_ra
	/proc/sys/net/ipv6/conf/<ifc>/autoconf

would be apropos for the requirement being discussed.
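If those knobs were indeed the right lever here, suppressing autoconfiguration on a backend interface could look like the following minimal sketch. The interface name "vif1.0" is purely hypothetical (not taken from this thread), and the actual writes would require root, so the script only echoes what it would do:

```shell
# Sketch only: suppress IPv6 autoconfiguration on one hypothetical
# backend interface.  Interface names containing dots need the /proc
# path form, since sysctl's dotted syntax would misparse them.
IFACE="vif1.0"   # hypothetical name, for illustration only
for knob in autoconf accept_ra accept_dad; do
    # 0 disables SLAAC, router-advertisement processing, and DAD respectively
    echo "would write 0 to /proc/sys/net/ipv6/conf/${IFACE}/${knob}"
done
# or, more bluntly, turn IPv6 off on the interface entirely:
echo "would write 1 to /proc/sys/net/ipv6/conf/${IFACE}/disable_ipv6"
```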

					-Bill


From xen-devel-bounces@lists.xen.org Wed Feb 12 23:50:32 2014
	12 Feb 2014 10:35:16 -0800 (PST)
MIME-Version: 1.0
Received: by 10.52.233.73 with HTTP; Wed, 12 Feb 2014 10:34:36 -0800 (PST)
In-Reply-To: <20140212182526.GA28938@phenom.dumpdata.com>
References: <5aa3cfb2-f554-449a-b251-13b69ac35185@default>
	<CA+XTOOhrbLE0rn1Nr7PKUhBa3s26Kb9mAqeoK5_DqgnT4y+_Fg@mail.gmail.com>
	<CA+XTOOjxrxh1jSz0Jeavm61MTdppOQ52f2V2SWKegq8vCfsaqQ@mail.gmail.com>
	<CAFLBxZah26ChS1+1Np+bEv=+vOrTnRpnyu4SzGegLp0Sz0d3HA@mail.gmail.com>
	<CA+XTOOh3cGk8fGnhSuURKnW+B1bnxYVtjRZTB6F4DLQcrA=XmA@mail.gmail.com>
	<20140210184707.GA18755@phenom.dumpdata.com>
	<CA+XTOOhnP4OGBeMF4m8k5JxTLN4qxZ8XhJA8bXg_HBDH1TX5kg@mail.gmail.com>
	<20140212182526.GA28938@phenom.dumpdata.com>
From: Mike Neiderhauser <mikeneiderhauser@gmail.com>
Date: Wed, 12 Feb 2014 13:34:36 -0500
Message-ID: <CA+XTOOiabhmukhqRAG9ibst8X4ZZGxev-O9DPmcgCXffmDxVsQ@mail.gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Wed, 12 Feb 2014 23:50:26 +0000
Cc: George Dunlap <dunlapg@umich.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1203612201343063609=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1203612201343063609==
Content-Type: multipart/alternative; boundary=001a11c2ca0c8e46fc04f239d553

--001a11c2ca0c8e46fc04f239d553
Content-Type: text/plain; charset=ISO-8859-1

It looks very similar.


On Wed, Feb 12, 2014 at 1:25 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Tue, Feb 11, 2014 at 08:04:04AM -0500, Mike Neiderhauser wrote:
> > So I went ahead and tested the setup I am trying to achieve using xen.
>  This
> > setup basically requires two isolated machines that can be used for
> network
> > testing.  On the hvm mentioned above, this testing fails due to
> something I
> > cannot wrap my head around.  I believe it is still related to the PCI
> > passthrough of a device and I believe it is related to the libxl error
> > mentioned above. Can anyone shed some light on what is going on?  Is it a
> > driver issue? (Broadcom Corporation NetXtreme II BCM5716 Gigabit
> Ethernet)
>
> That looks like this issue:
>
> igb and bnx2: "NETDEV WATCHDOG: transmit queue timed out" when skb has
> huge linear buffer
>
> (http://lkml.org/lkml/2014/1/30/358) ?
> >
> > [79464.816085] ------------[ cut here ]------------
> > [79464.816093] WARNING: at
> > /build/buildd/linux-lts-raring-3.8.0/net/sched/sch_generic.c:254
> > dev_watchdog+0x262/0x270()
> > [79464.816094] Hardware name: HVM domU
> > [79464.816096] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 1 timed out
> > [79464.816096] Modules linked in: esp6(F) ah6(F) xfrm6_mode_tunnel(F)
> > xfrm_user(F) xfrm4_tunnel(F) tunnel4(F) ipcomp(F) xfrm_ipcomp(F)
> authenc(F)
> > esp4(F) ah4(F) xfrm4_mode_tunnel(F) deflate(F) zlib_deflate(F) ctr(F)
> > twofish_generic(F) twofish_avx_x86_64(F) twofish_x86_64_3way(F)
> > twofish_x86_64(F) twofish_common(F) camellia_generic(F)
> > camellia_aesni_avx_x86_64(F) camellia_x86_64(F) serpent_avx_x86_64(F)
> > serpent_sse2_x86_64(F) glue_helper(F) serpent_generic(F)
> > blowfish_generic(F) blowfish_x86_64(F) blowfish_common(F)
> > cast5_avx_x86_64(F) cast5_generic(F) cast_common(F) des_generic(F)
> xcbc(F)
> > rmd160(F) crypto_null(F) af_key(F) xfrm_algo(F) 8021q(F) garp(F) stp(F)
> > llc(F) ipmi_msghandler(F) autofs4(F) ghash_clmulni_intel(F)
> aesni_intel(F)
> > ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F)
> cirrus(F)
> > ttm(F) drm_kms_helper(F) nfsd(F) nfs_acl(F) drm(F) auth_rpcgss(F)
> > sysimgblt(F) nfs(F) psmouse(F) sysfillrect(F) syscopyarea(F) fscache(F)
> > lockd(F) microcode(F) joydev(F) xen_kbdfront(F) i2c_piix4(F) serio_raw(F)
> > lp(F) mac_hid(F) sunrpc(F) parport(F) floppy(F) bnx2(F) [last unloaded:
> > ipmi_msghandler]
> > [79464.816139] Pid: 0, comm: swapper/1 Tainted: GF
> >  3.8.0-29-generic #42~precise1-Ubuntu
> > [79464.816140] Call Trace:
> > [79464.816142]  <IRQ>  [<ffffffff81059b0f>]
> warn_slowpath_common+0x7f/0xc0
> > [79464.816149]  [<ffffffff8135b9d4>] ? timerqueue_add+0x64/0xb0
> > [79464.816151]  [<ffffffff81059c06>] warn_slowpath_fmt+0x46/0x50
> > [79464.816154]  [<ffffffff81076794>] ? wake_up_worker+0x24/0x30
> > [79464.816157]  [<ffffffff81602062>] dev_watchdog+0x262/0x270
> > [79464.816160]  [<ffffffff810771f0>] ? __queue_work+0x2d0/0x2d0
> > [79464.816161]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
> > [79464.816164]  [<ffffffff8106995b>] call_timer_fn+0x3b/0x150
> > [79464.816167]  [<ffffffff8144f5a1>] ?
> add_interrupt_randomness+0x41/0x190
> > [79464.816170]  [<ffffffff8106b427>] run_timer_softirq+0x267/0x2c0
> > [79464.816173]  [<ffffffff810ee3c9>] ? handle_irq_event_percpu+0xa9/0x210
> > [79464.816175]  [<ffffffff81601e00>] ? pfifo_fast_dequeue+0xe0/0xe0
> > [79464.816177]  [<ffffffff81062620>] __do_softirq+0xc0/0x240
> > [79464.816180]  [<ffffffff810b67cd>] ? tick_do_update_jiffies64+0x9d/0xd0
> > [79464.816184]  [<ffffffff816fdd5c>] call_softirq+0x1c/0x30
> > [79464.816188]  [<ffffffff81016775>] do_softirq+0x65/0xa0
> > [79464.816189]  [<ffffffff810628fe>] irq_exit+0x8e/0xb0
> > [79464.816193]  [<ffffffff8140a125>] xen_evtchn_do_upcall+0x35/0x50
> > [79464.816195]  [<ffffffff816fdeed>] xen_hvm_callback_vector+0x6d/0x80
> > [79464.816196]  <EOI>  [<ffffffff81084008>] ? hrtimer_start+0x18/0x20
> > [79464.816201]  [<ffffffff81045136>] ? native_safe_halt+0x6/0x10
> > [79464.816204]  [<ffffffff8101cc33>] default_idle+0x53/0x1f0
> > [79464.816206]  [<ffffffff8101dad9>] cpu_idle+0xd9/0x120
> > [79464.816209]  [<ffffffff816d10fe>] start_secondary+0xc3/0xc5
> > [79464.816210] ---[ end trace 48cf6b13be16e0ae ]---
> > [79464.816214] bnx2 0000:00:05.0 eth1: <--- start FTQ dump --->
> > [79464.816220] bnx2 0000:00:05.0 eth1: RV2P_PFTQ_CTL 00010000
> > [79464.816225] bnx2 0000:00:05.0 eth1: RV2P_TFTQ_CTL 00020000
> > [79464.816258] bnx2 0000:00:05.0 eth1: RV2P_MFTQ_CTL 00004000
> > [79464.816263] bnx2 0000:00:05.0 eth1: TBDR_FTQ_CTL 00004000
> > [79464.816268] bnx2 0000:00:05.0 eth1: TDMA_FTQ_CTL 00010000
> > [79464.816272] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
> > [79464.816276] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000
> > [79464.816280] bnx2 0000:00:05.0 eth1: TPAT_FTQ_CTL 00010000
> > [79464.816285] bnx2 0000:00:05.0 eth1: RXP_CFTQ_CTL 00008000
> > [79464.816289] bnx2 0000:00:05.0 eth1: RXP_FTQ_CTL 00100000
> > [79464.816293] bnx2 0000:00:05.0 eth1: COM_COMXQ_FTQ_CTL 00010000
> > [79464.816297] bnx2 0000:00:05.0 eth1: COM_COMTQ_FTQ_CTL 00020000
> > [79464.816302] bnx2 0000:00:05.0 eth1: COM_COMQ_FTQ_CTL 00010000
> > [79464.816306] bnx2 0000:00:05.0 eth1: CP_CPQ_FTQ_CTL 00004000
> > [79464.816308] bnx2 0000:00:05.0 eth1: CPU states:
> > [79464.816321] bnx2 0000:00:05.0 eth1: 045000 mode b84c state 80001000
> > evt_mask 500 pc 8001288 pc 8001288 instr 38640001
> > [79464.816335] bnx2 0000:00:05.0 eth1: 085000 mode b84c state 80005000
> > evt_mask 500 pc 8000a5c pc 8000a5c instr 10400016
> > [79464.816349] bnx2 0000:00:05.0 eth1: 0c5000 mode b84c state 80000000
> > evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003
> > [79464.816362] bnx2 0000:00:05.0 eth1: 105000 mode b8cc state 80000000
> > evt_mask 500 pc 8000a9c pc 8000a94 instr 8c420020
> > [79464.816375] bnx2 0000:00:05.0 eth1: 145000 mode b880 state 80000000
> > evt_mask 500 pc 8009c00 pc 800d948 instr 30420040
> > [79464.816389] bnx2 0000:00:05.0 eth1: 185000 mode b8cc state 80008000
> > evt_mask 500 pc 8000c58 pc 8000c58 instr 1092823
> > [79464.816392] bnx2 0000:00:05.0 eth1: <--- end FTQ dump --->
> > [79464.816394] bnx2 0000:00:05.0 eth1: <--- start TBDC dump --->
> > [79464.816399] bnx2 0000:00:05.0 eth1: TBDC free cnt: 32
> > [79464.816401] bnx2 0000:00:05.0 eth1: LINE     CID  BIDX   CMD  VALIDS
> > [79464.816410] bnx2 0000:00:05.0 eth1: 00    001000  00e8   00    [0]
> > [79464.816420] bnx2 0000:00:05.0 eth1: 01    001000  00e8   00    [0]
> > [79464.816429] bnx2 0000:00:05.0 eth1: 02    000800  afc8   00    [0]
> > [79464.816438] bnx2 0000:00:05.0 eth1: 03    000800  afb8   00    [0]
> > [79464.816447] bnx2 0000:00:05.0 eth1: 04    000800  afd8   00    [0]
> > [79464.816456] bnx2 0000:00:05.0 eth1: 05    000800  afe0   00    [0]
> > [79464.816465] bnx2 0000:00:05.0 eth1: 06    000800  afe8   00    [0]
> > [79464.816474] bnx2 0000:00:05.0 eth1: 07    000800  afd0   00    [0]
> > [79464.816485] bnx2 0000:00:05.0 eth1: 08    001000  3510   00    [0]
> > [79464.816494] bnx2 0000:00:05.0 eth1: 09    000800  aec0   00    [0]
> > [79464.816504] bnx2 0000:00:05.0 eth1: 0a    001000  3530   00    [0]
> > [79464.816514] bnx2 0000:00:05.0 eth1: 0b    000800  aec8   00    [0]
> > [79464.816523] bnx2 0000:00:05.0 eth1: 0c    000800  aed0   00    [0]
> > [79464.816559] bnx2 0000:00:05.0 eth1: 0d    001000  34f8   00    [0]
> > [79464.816570] bnx2 0000:00:05.0 eth1: 0e    001000  3500   00    [0]
> > [79464.816580] bnx2 0000:00:05.0 eth1: 0f    001000  3518   00    [0]
> > [79464.816590] bnx2 0000:00:05.0 eth1: 10    1fbc00  2fe8   7d    [0]
> > [79464.816599] bnx2 0000:00:05.0 eth1: 11    1ab780  fff8   7d    [0]
> > [79464.816608] bnx2 0000:00:05.0 eth1: 12    17ff00  b908   f7    [0]
> > [79464.816618] bnx2 0000:00:05.0 eth1: 13    0cb700  ff40   d7    [0]
> > [79464.816627] bnx2 0000:00:05.0 eth1: 14    177a80  efe0   03    [0]
> > [79464.816637] bnx2 0000:00:05.0 eth1: 15    037d80  9f88   72    [0]
> > [79464.816646] bnx2 0000:00:05.0 eth1: 16    1bae00  eef8   ce    [0]
> > [79464.816657] bnx2 0000:00:05.0 eth1: 17    1bbc80  a7f8   df    [0]
> > [79464.816666] bnx2 0000:00:05.0 eth1: 18    17e180  6aa8   e4    [0]
> > [79464.816675] bnx2 0000:00:05.0 eth1: 19    07ff80  6e50   dd    [0]
> > [79464.816683] bnx2 0000:00:05.0 eth1: 1a    1fda80  f790   6e    [0]
> > [79464.816694] bnx2 0000:00:05.0 eth1: 1b    151580  d7b0   fc    [0]
> > [79464.816703] bnx2 0000:00:05.0 eth1: 1c    1b9f80  cef8   6b    [0]
> > [79464.816712] bnx2 0000:00:05.0 eth1: 1d    1ebf00  ffa8   df    [0]
> > [79464.816723] bnx2 0000:00:05.0 eth1: 1e    1e7e00  ff78   ef    [0]
> > [79464.816731] bnx2 0000:00:05.0 eth1: 1f    166e80  fbd8   aa    [0]
> > [79464.816733] bnx2 0000:00:05.0 eth1: <--- end TBDC dump --->
> > [79464.817101] bnx2 0000:00:05.0 eth1: DEBUG: intr_sem[0]
> PCI_CMD[00100406]
> > [79464.817239] bnx2 0000:00:05.0 eth1: DEBUG: PCI_PM[19002008]
> > PCI_MISC_CFG[92000088]
> > [79464.817248] bnx2 0000:00:05.0 eth1: DEBUG: EMAC_TX_STATUS[00000008]
> > EMAC_RX_STATUS[00000000]
> > [79464.817252] bnx2 0000:00:05.0 eth1: DEBUG: RPM_MGMT_PKT_CTRL[40000088]
> > [79464.817257] bnx2 0000:00:05.0 eth1: DEBUG:
> > HC_STATS_INTERRUPT_STATUS[01fc0003]
> > [79464.817261] bnx2 0000:00:05.0 eth1: DEBUG: PBA[00000003]
> > [79464.817264] bnx2 0000:00:05.0 eth1: <--- start MCP states dump --->
> > [79464.817270] bnx2 0000:00:05.0 eth1: DEBUG: MCP_STATE_P0[0003611e]
> > MCP_STATE_P1[0003611e]
> > [79464.817278] bnx2 0000:00:05.0 eth1: DEBUG: MCP mode[0000b880]
> > state[80000000] evt_mask[00000500]
> > [79464.817286] bnx2 0000:00:05.0 eth1: DEBUG: pc[08003ebc] pc[0800d994]
> > instr[32020020]
> > [79464.817288] bnx2 0000:00:05.0 eth1: DEBUG: shmem states:
> > [79464.817296] bnx2 0000:00:05.0 eth1: DEBUG: drv_mb[01030027]
> > fw_mb[00000027] link_status[0008506b]
> > [79464.817301]  drv_pulse_mb[0000338a]
> > [79464.817306] bnx2 0000:00:05.0 eth1: DEBUG:
> dev_info_signature[44564907]
> > reset_type[01005254]
> > [79464.817310]  condition[0003611e]
> > [79464.817318] bnx2 0000:00:05.0 eth1: DEBUG: 000001c0: 01005254 42530083
> > 0003611e 00000000
> > [79464.817328] bnx2 0000:00:05.0 eth1: DEBUG: 000003cc: 00000000 00000000
> > 00000000 00000000
> > [79464.817357] bnx2 0000:00:05.0 eth1: DEBUG: 000003dc: 00000000 00000000
> > 00000000 00000000
> > [79464.817367] bnx2 0000:00:05.0 eth1: DEBUG: 000003ec: 00000000 00000000
> > 00000000 00000000
> > [79464.817372] bnx2 0000:00:05.0 eth1: DEBUG: 0x3fc[00000000]
> > [79464.817374] bnx2 0000:00:05.0 eth1: <--- end MCP states dump --->
> > [79464.872988] bnx2 0000:00:05.0 eth1: NIC Copper Link is Down
> >
> >
> >
> > On Mon, Feb 10, 2014 at 1:47 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote:
> > > > Thanks for the answers on the timeline.
> > > >
> > > > When I start the HVM with the Broadcom adapter, I get this message
> back.
> > > > Parsing config from /etc/xen/ubuntu-hvm-1.cfg
> > > > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel
> doesn't
> > > > support reset from sysfs for PCI device 0000:05:00.0
> > > > libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The kernel
> doesn't
> > > > support reset from sysfs for PCI device 0000:05:00.1
> > > >
> > > > However, the devices appear in the HVM.  Is this something that I
> should
> > > be
> > > > concerned about?
> > >
> > > No. Xen pciback does the reset automatically.
> > >
> > > Actually we might want to ditch that reporting in libxl, or maybe just
> > > implement a stub function in xen-pciback so that libxl will be happy.
> > >
> > > >
> > > >
> > > >
> > > > On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap <dunlapg@umich.edu>
> > > wrote:
> > > >
> > > > > On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser
> > > > > <mikeneiderhauser@gmail.com> wrote:
> > > > > > Works like a charm.  I do not have physical access to the
> computer
> > > this
> > > > > > weekend to verify that the cards are isolated, but the HVM
> starts and
> > > > > > appears to be working well.
> > > > > >
> > > > > > When do you think Xen 4.4 will be released?  The article I read
> > > > > mentioned it
> > > > > > will be released in 2014 (hinting towards the end of February).
>  I
> > > also
> > > > > read
> > > > > > 'When it is ready.'
> > > > > >
> > > > > > Any timeline would be great.
> > > > >
> > > > > I'm afraid that's about all we can give. :-)  We've locked down
> > > > > development for 2 months now and are working on finding and fixing
> > > > > bugs.  If there are no more blocker bugs or other unforeseen
> delays,
> > > > > it should be out by the end of February.  But there are necessarily
> > > > > significant unknowns, so we can't make any promises.
> > > > >
> > > > >  -George
> > > > >
> > >
>
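For reference, the capability libxl probes for in the exchange above is a per-device sysfs attribute, which can be checked by hand (BDF taken from the quoted log):

```shell
# libxl warns when the kernel exposes no sysfs "reset" attribute for
# the passed-through device; xen-pciback then resets it itself.
BDF="0000:05:00.0"
RESET="/sys/bus/pci/devices/${BDF}/reset"
if [ -f "$RESET" ]; then
    echo "kernel supports sysfs reset for ${BDF}"
else
    echo "no sysfs reset for ${BDF}; xen-pciback handles the reset"
fi
```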

--001a11c2ca0c8e46fc04f239d553
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">It looks very similar.</div><div class=3D"gmail_extra"><br=
><br><div class=3D"gmail_quote">On Wed, Feb 12, 2014 at 1:25 PM, Konrad Rze=
szutek Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad.wilk@oracle.com"=
 target=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span> wrote:<br>

<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex"><div class=3D"">On Tue, Feb 11, 2014 at 08:0=
4:04AM -0500, Mike Neiderhauser wrote:<br>
&gt; So I went ahead to tested the setup I am trying to achieve using xen. =
=A0This<br>
&gt; setup basically requires two isolated machines that can be used for ne=
twork<br>
&gt; testing. =A0On the hvm mentioned above, this testing fails due to some=
thing I<br>
&gt; cannot wrap my head around. =A0I believe it is still related to the PC=
I<br>
&gt; passthrough of a device and I believe it is related to the libxl error=
<br>
&gt; mentioned above. Can anyone shed some light on what is going on? =A0Is=
 it a<br>
&gt; driver issue? (Broadcom Corporation NetXtreme II BCM5716 Gigabit Ether=
net)<br>
<br>
</div>That looks like this issue:<br>
<br>
igb and bnx2: &quot;NETDEV WATCHDOG: transmit queue timed out&quot; when sk=
b has huge linear buffer<br>
<br>
(<a href=3D"http://lkml.org/lkml/2014/1/30/358" target=3D"_blank">http://lk=
ml.org/lkml/2014/1/30/358</a>) ?<br>
<div class=3D"HOEnZb"><div class=3D"h5">&gt;<br>
&gt; [79464.816085] ------------[ cut here ]------------<br>
&gt; [79464.816093] WARNING: at<br>
&gt; /build/buildd/linux-lts-raring-3.8.0/net/sched/sch_generic.c:254<br>
&gt; dev_watchdog+0x262/0x270()<br>
&gt; [79464.816094] Hardware name: HVM domU<br>
&gt; [79464.816096] NETDEV WATCHDOG: eth1 (bnx2): transmit queue 1 timed ou=
t<br>
&gt; [79464.816096] Modules linked in: esp6(F) ah6(F) xfrm6_mode_tunnel(F)<=
br>
&gt; xfrm_user(F) xfrm4_tunnel(F) tunnel4(F) ipcomp(F) xfrm_ipcomp(F) authe=
nc(F)<br>
&gt; esp4(F) ah4(F) xfrm4_mode_tunnel(F) deflate(F) zlib_deflate(F) ctr(F)<=
br>
&gt; twofish_generic(F) twofish_avx_x86_64(F) twofish_x86_64_3way(F)<br>
&gt; twofish_x86_64(F) twofish_common(F) camellia_generic(F)<br>
&gt; camellia_aesni_avx_x86_64(F) camellia_x86_64(F) serpent_avx_x86_64(F)<=
br>
&gt; serpent_sse2_x86_64(F) glue_helper(F) serpent_generic(F)<br>
&gt; blowfish_generic(F) blowfish_x86_64(F) blowfish_common(F)<br>
&gt; cast5_avx_x86_64(F) cast5_generic(F) cast_common(F) des_generic(F) xcb=
c(F)<br>
&gt; rmd160(F) crypto_null(F) af_key(F) xfrm_algo(F) 8021q(F) garp(F) stp(F=
)<br>
&gt; llc(F) ipmi_msghandler(F) autofs4(F) ghash_clmulni_intel(F) aesni_inte=
l(F)<br>
&gt; ablk_helper(F) cryptd(F) lrw(F) aes_x86_64(F) xts(F) gf128mul(F) cirru=
s(F)<br>
&gt; ttm(F) drm_kms_helper(F) nfsd(F) nfs_acl(F) drm(F) auth_rpcgss(F)<br>
&gt; sysimgblt(F) nfs(F) psmouse(F) sysfillrect(F) syscopyarea(F) fscache(F=
)<br>
&gt; lockd(F) microcode(F) joydev(F) xen_kbdfront(F) i2c_piix4(F) serio_raw=
(F)<br>
&gt; lp(F) mac_hid(F) sunrpc(F) parport(F) floppy(F) bnx2(F) [last unloaded=
:<br>
&gt; ipmi_msghandler]<br>
&gt; [79464.816139] Pid: 0, comm: swapper/1 Tainted: GF<br>
&gt; =A03.8.0-29-generic #42~precise1-Ubuntu<br>
&gt; [79464.816140] Call Trace:<br>
&gt; [79464.816142] =A0&lt;IRQ&gt; =A0[&lt;ffffffff81059b0f&gt;] warn_slowp=
ath_common+0x7f/0xc0<br>
&gt; [79464.816149] =A0[&lt;ffffffff8135b9d4&gt;] ? timerqueue_add+0x64/0xb=
0<br>
&gt; [79464.816151] =A0[&lt;ffffffff81059c06&gt;] warn_slowpath_fmt+0x46/0x=
50<br>
&gt; [79464.816154] =A0[&lt;ffffffff81076794&gt;] ? wake_up_worker+0x24/0x3=
0<br>
&gt; [79464.816157] =A0[&lt;ffffffff81602062&gt;] dev_watchdog+0x262/0x270<=
br>
&gt; [79464.816160] =A0[&lt;ffffffff810771f0&gt;] ? __queue_work+0x2d0/0x2d=
0<br>
&gt; [79464.816161] =A0[&lt;ffffffff81601e00&gt;] ? pfifo_fast_dequeue+0xe0=
/0xe0<br>
&gt; [79464.816164] =A0[&lt;ffffffff8106995b&gt;] call_timer_fn+0x3b/0x150<=
br>
&gt; [79464.816167] =A0[&lt;ffffffff8144f5a1&gt;] ? add_interrupt_randomnes=
s+0x41/0x190<br>
&gt; [79464.816170] =A0[&lt;ffffffff8106b427&gt;] run_timer_softirq+0x267/0=
x2c0<br>
&gt; [79464.816173] =A0[&lt;ffffffff810ee3c9&gt;] ? handle_irq_event_percpu=
+0xa9/0x210<br>
&gt; [79464.816175] =A0[&lt;ffffffff81601e00&gt;] ? pfifo_fast_dequeue+0xe0=
/0xe0<br>
&gt; [79464.816177] =A0[&lt;ffffffff81062620&gt;] __do_softirq+0xc0/0x240<b=
r>
&gt; [79464.816180] =A0[&lt;ffffffff810b67cd&gt;] ? tick_do_update_jiffies6=
4+0x9d/0xd0<br>
&gt; [79464.816184] =A0[&lt;ffffffff816fdd5c&gt;] call_softirq+0x1c/0x30<br=
>
&gt; [79464.816188] =A0[&lt;ffffffff81016775&gt;] do_softirq+0x65/0xa0<br>
&gt; [79464.816189] =A0[&lt;ffffffff810628fe&gt;] irq_exit+0x8e/0xb0<br>
&gt; [79464.816193] =A0[&lt;ffffffff8140a125&gt;] xen_evtchn_do_upcall+0x35=
/0x50<br>
&gt; [79464.816195] =A0[&lt;ffffffff816fdeed&gt;] xen_hvm_callback_vector+0=
x6d/0x80<br>
&gt; [79464.816196] =A0&lt;EOI&gt; =A0[&lt;ffffffff81084008&gt;] ? hrtimer_=
start+0x18/0x20<br>
&gt; [79464.816201] =A0[&lt;ffffffff81045136&gt;] ? native_safe_halt+0x6/0x=
10<br>
&gt; [79464.816204] =A0[&lt;ffffffff8101cc33&gt;] default_idle+0x53/0x1f0<b=
r>
&gt; [79464.816206] =A0[&lt;ffffffff8101dad9&gt;] cpu_idle+0xd9/0x120<br>
&gt; [79464.816209] =A0[&lt;ffffffff816d10fe&gt;] start_secondary+0xc3/0xc5=
<br>
&gt; [79464.816210] ---[ end trace 48cf6b13be16e0ae ]---<br>
&gt; [79464.816214] bnx2 0000:00:05.0 eth1: &lt;--- start FTQ dump ---&gt;<=
br>
&gt; [79464.816220] bnx2 0000:00:05.0 eth1: RV2P_PFTQ_CTL 00010000<br>
&gt; [79464.816225] bnx2 0000:00:05.0 eth1: RV2P_TFTQ_CTL 00020000<br>
&gt; [79464.816258] bnx2 0000:00:05.0 eth1: RV2P_MFTQ_CTL 00004000<br>
&gt; [79464.816263] bnx2 0000:00:05.0 eth1: TBDR_FTQ_CTL 00004000<br>
&gt; [79464.816268] bnx2 0000:00:05.0 eth1: TDMA_FTQ_CTL 00010000<br>
&gt; [79464.816272] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000<br>
&gt; [79464.816276] bnx2 0000:00:05.0 eth1: TXP_FTQ_CTL 00010000<br>
&gt; [79464.816280] bnx2 0000:00:05.0 eth1: TPAT_FTQ_CTL 00010000<br>
&gt; [79464.816285] bnx2 0000:00:05.0 eth1: RXP_CFTQ_CTL 00008000<br>
&gt; [79464.816289] bnx2 0000:00:05.0 eth1: RXP_FTQ_CTL 00100000<br>
&gt; [79464.816293] bnx2 0000:00:05.0 eth1: COM_COMXQ_FTQ_CTL 00010000<br>
&gt; [79464.816297] bnx2 0000:00:05.0 eth1: COM_COMTQ_FTQ_CTL 00020000<br>
&gt; [79464.816302] bnx2 0000:00:05.0 eth1: COM_COMQ_FTQ_CTL 00010000<br>
&gt; [79464.816306] bnx2 0000:00:05.0 eth1: CP_CPQ_FTQ_CTL 00004000<br>
&gt; [79464.816308] bnx2 0000:00:05.0 eth1: CPU states:<br>
&gt; [79464.816321] bnx2 0000:00:05.0 eth1: 045000 mode b84c state 80001000=
<br>
&gt; evt_mask 500 pc 8001288 pc 8001288 instr 38640001<br>
&gt; [79464.816335] bnx2 0000:00:05.0 eth1: 085000 mode b84c state 80005000=
<br>
&gt; evt_mask 500 pc 8000a5c pc 8000a5c instr 10400016<br>
&gt; [79464.816349] bnx2 0000:00:05.0 eth1: 0c5000 mode b84c state 80000000=
<br>
&gt; evt_mask 500 pc 8004c14 pc 8004c14 instr 32050003<br>
&gt; [79464.816362] bnx2 0000:00:05.0 eth1: 105000 mode b8cc state 80000000=
<br>
&gt; evt_mask 500 pc 8000a9c pc 8000a94 instr 8c420020<br>
&gt; [79464.816375] bnx2 0000:00:05.0 eth1: 145000 mode b880 state 80000000=
<br>
&gt; evt_mask 500 pc 8009c00 pc 800d948 instr 30420040<br>
&gt; [79464.816389] bnx2 0000:00:05.0 eth1: 185000 mode b8cc state 80008000=
<br>
&gt; evt_mask 500 pc 8000c58 pc 8000c58 instr 1092823<br>
&gt; [79464.816392] bnx2 0000:00:05.0 eth1: &lt;--- end FTQ dump ---&gt;<br=
>
&gt; [79464.816394] bnx2 0000:00:05.0 eth1: &lt;--- start TBDC dump ---&gt;=
<br>
&gt; [79464.816399] bnx2 0000:00:05.0 eth1: TBDC free cnt: 32<br>
&gt; [79464.816401] bnx2 0000:00:05.0 eth1: LINE =A0 =A0 CID =A0BIDX =A0 CM=
D =A0VALIDS<br>
&gt; [79464.816410] bnx2 0000:00:05.0 eth1: 00 =A0 =A0001000 =A000e8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816420] bnx2 0000:00:05.0 eth1: 01 =A0 =A0001000 =A000e8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816429] bnx2 0000:00:05.0 eth1: 02 =A0 =A0000800 =A0afc8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816438] bnx2 0000:00:05.0 eth1: 03 =A0 =A0000800 =A0afb8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816447] bnx2 0000:00:05.0 eth1: 04 =A0 =A0000800 =A0afd8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816456] bnx2 0000:00:05.0 eth1: 05 =A0 =A0000800 =A0afe0 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816465] bnx2 0000:00:05.0 eth1: 06 =A0 =A0000800 =A0afe8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816474] bnx2 0000:00:05.0 eth1: 07 =A0 =A0000800 =A0afd0 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816485] bnx2 0000:00:05.0 eth1: 08 =A0 =A0001000 =A03510 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816494] bnx2 0000:00:05.0 eth1: 09 =A0 =A0000800 =A0aec0 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816504] bnx2 0000:00:05.0 eth1: 0a =A0 =A0001000 =A03530 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816514] bnx2 0000:00:05.0 eth1: 0b =A0 =A0000800 =A0aec8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816523] bnx2 0000:00:05.0 eth1: 0c =A0 =A0000800 =A0aed0 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816559] bnx2 0000:00:05.0 eth1: 0d =A0 =A0001000 =A034f8 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816570] bnx2 0000:00:05.0 eth1: 0e =A0 =A0001000 =A03500 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816580] bnx2 0000:00:05.0 eth1: 0f =A0 =A0001000 =A03518 =A0 00=
 =A0 =A0[0]<br>
&gt; [79464.816590] bnx2 0000:00:05.0 eth1: 10 =A0 =A01fbc00 =A02fe8 =A0 7d=
 =A0 =A0[0]<br>
&gt; [79464.816599] bnx2 0000:00:05.0 eth1: 11 =A0 =A01ab780 =A0fff8 =A0 7d=
 =A0 =A0[0]<br>
&gt; [79464.816608] bnx2 0000:00:05.0 eth1: 12 =A0 =A017ff00 =A0b908 =A0 f7=
 =A0 =A0[0]<br>
&gt; [79464.816618] bnx2 0000:00:05.0 eth1: 13 =A0 =A00cb700 =A0ff40 =A0 d7=
 =A0 =A0[0]<br>
&gt; [79464.816627] bnx2 0000:00:05.0 eth1: 14 =A0 =A0177a80 =A0efe0 =A0 03=
 =A0 =A0[0]<br>
&gt; [79464.816637] bnx2 0000:00:05.0 eth1: 15 =A0 =A0037d80 =A09f88 =A0 72=
 =A0 =A0[0]<br>
&gt; [79464.816646] bnx2 0000:00:05.0 eth1: 16 =A0 =A01bae00 =A0eef8 =A0 ce=
 =A0 =A0[0]<br>
&gt; [79464.816657] bnx2 0000:00:05.0 eth1: 17 =A0 =A01bbc80 =A0a7f8 =A0 df=
 =A0 =A0[0]<br>
&gt; [79464.816666] bnx2 0000:00:05.0 eth1: 18 =A0 =A017e180 =A06aa8 =A0 e4=
 =A0 =A0[0]<br>
&gt; [79464.816675] bnx2 0000:00:05.0 eth1: 19 =A0 =A007ff80 =A06e50 =A0 dd=
 =A0 =A0[0]<br>
&gt; [79464.816683] bnx2 0000:00:05.0 eth1: 1a =A0 =A01fda80 =A0f790 =A0 6e=
 =A0 =A0[0]<br>
&gt; [79464.816694] bnx2 0000:00:05.0 eth1: 1b =A0 =A0151580 =A0d7b0 =A0 fc=
 =A0 =A0[0]<br>
&gt; [79464.816703] bnx2 0000:00:05.0 eth1: 1c =A0 =A01b9f80 =A0cef8 =A0 6b=
 =A0 =A0[0]<br>
&gt; [79464.816712] bnx2 0000:00:05.0 eth1: 1d =A0 =A01ebf00 =A0ffa8 =A0 df=
 =A0 =A0[0]<br>
&gt; [79464.816723] bnx2 0000:00:05.0 eth1: 1e =A0 =A01e7e00 =A0ff78 =A0 ef=
 =A0 =A0[0]<br>
&gt; [79464.816731] bnx2 0000:00:05.0 eth1: 1f =A0 =A0166e80 =A0fbd8 =A0 aa=
 =A0 =A0[0]<br>
&gt; [79464.816733] bnx2 0000:00:05.0 eth1: &lt;--- end TBDC dump ---&gt;<b=
r>
&gt; [79464.817101] bnx2 0000:00:05.0 eth1: DEBUG: intr_sem[0] PCI_CMD[0010=
0406]<br>
&gt; [79464.817239] bnx2 0000:00:05.0 eth1: DEBUG: PCI_PM[19002008]<br>
&gt; PCI_MISC_CFG[92000088]<br>
&gt; [79464.817248] bnx2 0000:00:05.0 eth1: DEBUG: EMAC_TX_STATUS[00000008]=
<br>
&gt; EMAC_RX_STATUS[00000000]<br>
&gt; [79464.817252] bnx2 0000:00:05.0 eth1: DEBUG: RPM_MGMT_PKT_CTRL[400000=
88]<br>
&gt; [79464.817257] bnx2 0000:00:05.0 eth1: DEBUG:<br>
&gt; HC_STATS_INTERRUPT_STATUS[01fc0003]<br>
&gt; [79464.817261] bnx2 0000:00:05.0 eth1: DEBUG: PBA[00000003]<br>
&gt; [79464.817264] bnx2 0000:00:05.0 eth1: &lt;--- start MCP states dump -=
--&gt;<br>
&gt; [79464.817270] bnx2 0000:00:05.0 eth1: DEBUG: MCP_STATE_P0[0003611e]<b=
r>
&gt; MCP_STATE_P1[0003611e]<br>
&gt; [79464.817278] bnx2 0000:00:05.0 eth1: DEBUG: MCP mode[0000b880]<br>
&gt; state[80000000] evt_mask[00000500]<br>
&gt; [79464.817286] bnx2 0000:00:05.0 eth1: DEBUG: pc[08003ebc] pc[0800d994=
]<br>
&gt; instr[32020020]<br>
&gt; [79464.817288] bnx2 0000:00:05.0 eth1: DEBUG: shmem states:<br>
&gt; [79464.817296] bnx2 0000:00:05.0 eth1: DEBUG: drv_mb[01030027]<br>
&gt; fw_mb[00000027] link_status[0008506b]<br>
&gt; [79464.817301] =A0drv_pulse_mb[0000338a]<br>
&gt; [79464.817306] bnx2 0000:00:05.0 eth1: DEBUG: dev_info_signature[44564=
907]<br>
&gt; reset_type[01005254]<br>
&gt; [79464.817310] =A0condition[0003611e]<br>
&gt; [79464.817318] bnx2 0000:00:05.0 eth1: DEBUG: 000001c0: 01005254 42530=
083<br>
&gt; 0003611e 00000000<br>
&gt; [79464.817328] bnx2 0000:00:05.0 eth1: DEBUG: 000003cc: 00000000 00000=
000<br>
&gt; 00000000 00000000<br>
&gt; [79464.817357] bnx2 0000:00:05.0 eth1: DEBUG: 000003dc: 00000000 00000=
000<br>
&gt; 00000000 00000000<br>
&gt; [79464.817367] bnx2 0000:00:05.0 eth1: DEBUG: 000003ec: 00000000 00000=
000<br>
&gt; 00000000 00000000<br>
&gt; [79464.817372] bnx2 0000:00:05.0 eth1: DEBUG: 0x3fc[00000000]<br>
&gt; [79464.817374] bnx2 0000:00:05.0 eth1: &lt;--- end MCP states dump ---=
&gt;<br>
&gt; [79464.872988] bnx2 0000:00:05.0 eth1: NIC Copper Link is Down<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt; On Mon, Feb 10, 2014 at 1:47 PM, Konrad Rzeszutek Wilk &lt;<br>
&gt; <a href=3D"mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&g=
t; wrote:<br>
&gt;<br>
&gt; &gt; On Mon, Feb 10, 2014 at 01:29:22PM -0500, Mike Neiderhauser wrote=
:<br>
&gt; &gt; &gt; Thanks for the answers on the timeline.<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; When I start the HVM with th broadcom adapter, I get this me=
ssage back.<br>
&gt; &gt; &gt; Parsing config from /etc/xen/ubuntu-hvm-1.cfg<br>
&gt; &gt; &gt; libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The k=
ernel doesn&#39;t<br>
&gt; &gt; &gt; support reset from sysfs for PCI device 0000:05:00.0<br>
&gt; &gt; &gt; libxl: error: libxl_pci.c:990:libxl__device_pci_reset: The k=
ernel doesn&#39;t<br>
&gt; &gt; &gt; support reset from sysfs for PCI device 0000:05:00.1<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; However, the devices appear in the HVM. =A0Is this something=
 that I should<br>
&gt; &gt; be<br>
&gt; &gt; &gt; concerned about?<br>
&gt; &gt;<br>
&gt; &gt; No. Xen pciback does the reset automatically.<br>
&gt; &gt;<br>
&gt; &gt; Actually we might want to ditch that reporting in libxl, or maybe=
 just<br>
&gt; &gt; implement a stub function in xen-pciback so that libxl will be ha=
ppy.<br>
&gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; On Mon, Feb 10, 2014 at 12:11 PM, George Dunlap &lt;<a href=
=3D"mailto:dunlapg@umich.edu">dunlapg@umich.edu</a>&gt;<br>
&gt; &gt; wrote:<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; On Sat, Feb 8, 2014 at 5:42 PM, Mike Neiderhauser<br>
&gt; &gt; &gt; &gt; &lt;<a href=3D"mailto:mikeneiderhauser@gmail.com">miken=
eiderhauser@gmail.com</a>&gt; wrote:<br>
&gt; &gt; &gt; &gt; &gt; Works like a charm. =A0I do not have physical acce=
ss to the computer<br>
&gt; &gt; this<br>
&gt; &gt; &gt; &gt; &gt; weekend to verify that the cards are isolated, but=
 the HVM starts and<br>
&gt; &gt; &gt; &gt; &gt; appears to be working well.<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; When do you think Xen 4.4 will be released. =A0The=
 article I read<br>
&gt; &gt; &gt; &gt; mentioned it<br>
&gt; &gt; &gt; &gt; &gt; will be released in 2014 (hinting towards the end =
of February). =A0I<br>
&gt; &gt; also<br>
&gt; &gt; &gt; &gt; read<br>
&gt; &gt; &gt; &gt; &gt; &#39;When it is ready.&#39;<br>
&gt; &gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; &gt; Any timeline would be great.<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; I&#39;m afraid that&#39;s about all we can give. :-) =
=A0We&#39;ve locked down<br>
&gt; &gt; &gt; &gt; development for 2 months now and are working on finding=
 and fixing<br>
&gt; &gt; &gt; &gt; bugs. =A0If there are no more blocker bugs or other unf=
oreseen delays,<br>
&gt; &gt; &gt; &gt; it should be out by the end of February. =A0But there a=
re necessarily<br>
&gt; &gt; &gt; &gt; significant unknowns, so we can&#39;t make any promises=
.<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt; &gt; &gt; =A0-George<br>
&gt; &gt; &gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div>
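[Editor's note: the libxl warning quoted above means the kernel does not expose a sysfs reset method for the passed-through device. Whether it does can be checked directly; this is a sketch, with the device address taken from the log above.]

```shell
#!/bin/sh
# Sketch: check for the sysfs "reset" attribute that libxl__device_pci_reset
# probes for. Device address taken from the log above; pass another as $1.
dev="${1:-0000:05:00.0}"
reset_node="/sys/bus/pci/devices/$dev/reset"
if [ -w "$reset_node" ]; then
    msg="$dev: kernel sysfs reset available"
else
    msg="$dev: no sysfs reset (xen-pciback will reset the device itself)"
fi
echo "$msg"
```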

--001a11c2ca0c8e46fc04f239d553--


--===============1203612201343063609==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1203612201343063609==--


From xen-devel-bounces@lists.xen.org Wed Feb 12 23:50:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Feb 2014 23:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDjZS-0005o3-UK; Wed, 12 Feb 2014 23:50:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <billfink@mindspring.com>) id 1WDdRX-0002ZY-9F
	for xen-devel@lists.xenproject.org; Wed, 12 Feb 2014 17:17:51 +0000
Received: from [193.109.254.147:5568] by server-16.bemta-14.messagelabs.com id
	34/68-21945-EBCABF25; Wed, 12 Feb 2014 17:17:50 +0000
X-Env-Sender: billfink@mindspring.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392225469!3888826!1
X-Originating-IP: [209.86.89.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMDkuODYuODkuNjEgPT4gMzY2Ng==\n,sa_preprocessor: 
	QmFkIElQOiAyMDkuODYuODkuNjEgPT4gMzY2Ng==\n,BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8968 invoked from network); 12 Feb 2014 17:17:49 -0000
Received: from elasmtp-galgo.atl.sa.earthlink.net (HELO
	elasmtp-galgo.atl.sa.earthlink.net) (209.86.89.61)
	by server-6.tower-27.messagelabs.com with SMTP;
	12 Feb 2014 17:17:49 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws;
	s=dk20050327; d=mindspring.com;
	b=QLmvyvwbnY8OK2hpbP2BROkRlpRDB/AxdI/aMKSkupS9l04lHuzPBMxebQj4EyFd;
	h=Received:Date:From:To:Cc:Subject:Message-Id:In-Reply-To:References:X-Mailer:Mime-Version:Content-Type:Content-Transfer-Encoding:X-ELNK-Trace:X-Originating-IP;
Received: from [71.179.3.200] (helo=gwiz.sci.gsfc.nasa.gov)
	by elasmtp-galgo.atl.sa.earthlink.net with esmtpa (Exim 4.67)
	(envelope-from <billfink@mindspring.com>)
	id 1WDdRT-0008Di-Co; Wed, 12 Feb 2014 12:17:47 -0500
Date: Wed, 12 Feb 2014 12:17:39 -0500
From: Bill Fink <billfink@mindspring.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-Id: <20140212121739.ecb2f222.billfink@mindspring.com>
In-Reply-To: <1392203708.13563.50.camel@kazak.uk.xensource.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
	<1392203708.13563.50.camel@kazak.uk.xensource.com>
X-Mailer: Sylpheed 2.6.0 (GTK+ 2.16.6; x86_64-redhat-linux-gnu)
Mime-Version: 1.0
X-ELNK-Trace: c598f748b88b6fd49c7f779228e2f6aeda0071232e20db4dea14c196f66adca47a54a544fb5bbc79350badd9bab72f9c350badd9bab72f9c350badd9bab72f9c
X-Originating-IP: 71.179.3.200
X-Mailman-Approved-At: Wed, 12 Feb 2014 23:50:26 +0000
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 12 Feb 2014, Ian Campbell wrote:

> On Tue, 2014-02-11 at 13:53 -0800, Luis R. Rodriguez wrote:
> > Cc'ing kvm folks as they may have a shared interest in the shared
> > physical case with the bridge (non NAT).
> > 
> > On Tue, Feb 11, 2014 at 12:43 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Mon, 2014-02-10 at 14:29 -0800, Luis R. Rodriguez wrote:
> > >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> > >>
> > >> Although the xen-netback interfaces do not participate in the
> > >> link as a typical Ethernet device, interfaces for them are
> > >> still required under the current architecture. IPv6 addresses
> > >> do not need to be created or assigned on the xen-netback interfaces
> > >> however, even if the frontend devices do need them, so clear the
> > >> multicast flag to ensure the net core does not initiate IPv6
> > >> Stateless Address Autoconfiguration.
> > >
> > > How does disabling SAA flow from the absence of multicast?
> > 
> > See patch 1 in this series [0], but I explain the issue I see with
> > this on the cover letter [1].
> 
> Oop, I felt like I'd missed some context. Thanks for pointing out that
> it was right under my nose.
> 
> > In summary the RFCs on IPv6 make it
> > clear you need multicast for Stateless address autoconfiguration
> > (SLAAC is the preferred acronym) and DAD,
> 
> That seems reasonable, but I think it is the opposite of what I was
> trying to get at.
> 
> Why is it not possible to disable SLAAC and/or DAD even if multicast is
> present?
> 
> IOW -- enabling/disabling multicast seems to me to be an odd proxy for
> disabling SLAAC or DAD and AIUI your patch fixes the opposite case,
> which is to avoid SLAAC and DAD on interfaces which don't do multicast
> (which makes sense since those protocols involve multicast).

Forgive me if this doesn't make sense in this context since
I'm not a kernel developer, but I was just wondering if any of
the sysctls:

	/proc/sys/net/ipv6/conf/<ifc>/disable_ipv6
	/proc/sys/net/ipv6/conf/<ifc>/accept_dad
	/proc/sys/net/ipv6/conf/<ifc>/accept_ra
	/proc/sys/net/ipv6/conf/<ifc>/autoconf

would be apropos for the requirement being discussed.

					-Bill
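
[Editor's note: the per-interface sysctls Bill lists can be toggled as sketched below. The interface name is an assumption; netback vifs are conventionally named vifDOMID.DEVID.]

```shell
#!/bin/sh
# Sketch (interface name vif1.0 is an assumption): turn off SLAAC, RA
# processing, and DAD on a netback vif via the sysctls listed above,
# skipping any knob that is absent or not writable.
ifc="${1:-vif1.0}"
base="/proc/sys/net/ipv6/conf/$ifc"
applied=""
for knob in autoconf accept_ra accept_dad; do
    if [ -w "$base/$knob" ]; then
        echo 0 > "$base/$knob"
        applied="$applied $knob"
    else
        echo "skipping $knob: $base/$knob not writable" >&2
    fi
done
echo "disabled:${applied:- none}"
```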

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 00:30:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 00:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDkBy-0007N2-6M; Thu, 13 Feb 2014 00:30:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <herbert.cland@gmail.com>) id 1WDkBw-0007Mq-25
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 00:30:12 +0000
Received: from [85.158.139.211:21784] by server-12.bemta-5.messagelabs.com id
	D9/A2-15415-3121CF25; Thu, 13 Feb 2014 00:30:11 +0000
X-Env-Sender: herbert.cland@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392251408!3510187!1
X-Originating-IP: [209.85.160.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	MIME_BOUND_NEXTPART,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21295 invoked from network); 13 Feb 2014 00:30:10 -0000
Received: from mail-pb0-f52.google.com (HELO mail-pb0-f52.google.com)
	(209.85.160.52)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 00:30:10 -0000
Received: by mail-pb0-f52.google.com with SMTP id jt11so9974315pbb.25
	for <xen-devel@lists.xen.org>; Wed, 12 Feb 2014 16:30:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:references:mime-version:message-id
	:content-type; bh=5MjLm4Yj9YxgjkbOj98N6VGG6Dt+nuY84XdictybYlc=;
	b=FI2f1eYbOsaHOdyYFCCEFsoVDl3/qOOY4Q6VRRxMhD+ezLkgh1lmvr1h2Ob3gDZ0fH
	EykuI0vqgxfR4Em8OWR713IvT2pGTxCnV/e0QUwNMirn/LQ8rv/GliOVwPuSyTQ4Brky
	DpDHJTAzKtJzHbfYX7GkiQrCcFddDWuanGvmMyyQ1iQp2DdXBO+lsnCiZqIL78Nj6bd6
	LRlNZCzARwBYpIqBRABEKg8Afk/OT62BFwglrqqGH/krCvOlPonougBCCqy48YxbRPYk
	xaJHbPoyAxBen/Po8uvRKFMpjckdFFTfZ6HNocWpPvcYLYjIP6CPgVKkZ+bVxB1bK05I
	RQ9Q==
X-Received: by 10.68.197.66 with SMTP id is2mr54314822pbc.96.1392251407909;
	Wed, 12 Feb 2014 16:30:07 -0800 (PST)
Received: from HP-YANBINGHENG ([111.207.123.2])
	by mx.google.com with ESMTPSA id j3sm118827pbh.38.2014.02.12.16.30.05
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 12 Feb 2014 16:30:07 -0800 (PST)
Date: Thu, 13 Feb 2014 08:30:03 +0800
From: "herbert cland" <herbert.cland@gmail.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <2014021221022931163650@gmail.com>, 
	<1392210901.13563.70.camel@kazak.uk.xensource.com>
X-Priority: 3
X-Has-Attach: no
X-Mailer: Foxmail 7, 1, 3, 52[cn]
Mime-Version: 1.0
Message-ID: <2014021308300156477455@gmail.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8401865117394408970=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============8401865117394408970==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart166467430317_=----"

This is a multi-part message in MIME format.

------=_001_NextPart166467430317_=----
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: base64

U29ycnkgSWFuIGFuZCBFdmVyeSBPbmUsICBUaGVyZSBpcyBzb21ldGhpbmcgd3Jvbmcgd2l0aCBt
eSBtYWlsYm94LCBhbmQgdGhlIG1haWwgcmVwZWF0ZWQhDQpJIGFtIFZFUlkgc29ycnkgZm9yIHdh
c3RpbmcgeW91cnMnIHRpbWUuDQpQbGVhc2UgaWdub3JlIHRoaXMgTWVzc2FnZSAgDQoNCg0KDQoN
CmhlcmJlcnQgY2xhbmQNCg0KRnJvbTogSWFuIENhbXBiZWxsDQpEYXRlOiAyMDE0LTAyLTEyIDIx
OjE1DQpUbzogaGVyYmVydCBjbGFuZA0KQ0M6IHhlbi1kZXZlbA0KU3ViamVjdDogUmU6IFtYZW4t
ZGV2ZWxdIFtxZW11LXVwc3RyZWFtLXVuc3RhYmxlLmdpdF0gaXMgaW4gQ09ORkxJQ1Qgc3RhdGUN
Ck9uIFdlZCwgMjAxNC0wMi0xMiBhdCAyMTowMiArMDgwMCwgaGVyYmVydCBjbGFuZCB3cm90ZToN
Cj4gRGVhciBBTEwhDQo+ICANCj4gRm9sbG93aW5nIG1lcmdlIG1heSBiZSBvdmVyd3JpdGUgdGhl
ICJ4ZW46IEZpeCB2Y3B1IGluaXRpYWxpemF0aW9uIg0KDQpTbyB5b3UndmUgc2FpZCB0aHJlZSBv
ciBmb3VyIHRpbWVzIG5vdy4gVGhlcmUgaXMgbm8gbmVlZCB0byByZXBlYXQNCnlvdXJzZWxmIGxp
a2UgdGhpcywgcmVzdCBhc3N1cmVkIHRoYXQgeW91ciBtYWlscyBhcmUgaW4gdGhlIHJlbGV2YW50
DQpwZW9wbGUncyBpbmJveGVzIGFuZCB3aWxsIGJlIGRlYWx0IHdpdGguIFBlc3RlcmluZyBpbiB0
aGlzIHdheSBpcyBqdXN0DQpydWRlLg0KDQo+ICBwYXRjaA0KPiAtLS0tLS0tLS0tLS0tLS0tLS0t
LQ0KPiBNZXJnZSByZW1vdGUgYnJhbmNoICdvcmlnaW4vc3RhYmxlLTEuNicgaW50byB4ZW4tc3Rh
Z2luZy1tYXN0ZXItOSANCj4gLS0tLS0tLS0tLS0tLS0tLS0tLS0gDQo+ICANCj4gVGhpcyBtYWRl
IGEgY29uZmxpY3QgZm9yIHhlbi1hbGwuYy4NCj4gU28gaXQgc2VlbXMgdGhhdCB0aGUgdnBjdSBo
b3RwbHVnIHBhdGNoIHdhcyBvdmVyd3JpdGVkIGJ5IHRoZSB1cHN0cmVhbQ0KPiBxZW11IHZlcnNp
b24uDQoNCkl0J3Mgbm90IGNsZWFyIHRvIG1lIGhvdyB5b3UgaGF2ZSByZWFjaGVkIHRoYXQgY29u
Y2x1c2lvbi4gQSBtZXJnZQ0KY29uZmxpY3QgY291bGQgYmUgZG93biB0byBudW1lcm91cyB0aGlu
Z3MsIG5vIGFsbCBvZiB3aGljaCBhcmUgInZjcHUNCmhvdHBsdWcgcGF0Y2ggd2FzIG92ZXJ3cml0
ZWQiLg0KDQpZb3UnbGwgaGF2ZSB0byBiZSBhIGxvdCBtb3JlIHByZWNpc2UgYWJvdXQgd2hhdCBi
cmFuY2ggeW91IGFyZSBtZXJnaW5nDQppbnRvIHdoYXQgb3RoZXIgYnJhbmNoIGFuZCB3aGF0IHlv
dSB0aGluayB0aGUgdW5kZXNpcmFibGUgZmFsbG91dCBoYXMNCmJlZW4gSSdtIGFmcmFpZC4NCg0K
SWFuLg==

------=_001_NextPart166467430317_=----
Content-Type: text/html;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable

=EF=BB=BF<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content=3D"text/html; charset=3Dutf-8" http-equiv=3DContent-Type>
<STYLE>
BLOCKQUOTE {
	MARGIN-BOTTOM: 0px; MARGIN-LEFT: 2em; MARGIN-TOP: 0px
}
OL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
UL {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
P {
	MARGIN-BOTTOM: 0px; MARGIN-TOP: 0px
}
BODY {
	FONT-SIZE: 10.5pt; FONT-FAMILY: =E5=BE=AE=E8=BD=AF=E9=9B=85=E9=BB=91; COL=
OR: #000000; LINE-HEIGHT: 1.5
}
</STYLE>

<META name=3DGENERATOR content=3D"MSHTML 10.00.9200.16540"></HEAD>
<BODY style=3D"MARGIN: 10px">
<DIV>Sorry Ian and Every One, &nbsp;There is something wrong with my mailb=
ox,=20
and the mail repeated!</DIV>
<DIV>I am VERY&nbsp;sorry for&nbsp;wasting yours' time.</DIV>
<DIV>Please ignore&nbsp;this&nbsp;Message&nbsp;&nbsp;</DIV>
<DIV>&nbsp;</DIV>
<HR style=3D"HEIGHT: 1px; WIDTH: 210px" align=3Dleft color=3D#b5c4df SIZE=
=3D1>

<DIV><SPAN>herbert cland</SPAN></DIV>
<DIV>&nbsp;</DIV>
<DIV=20
style=3D"BORDER-TOP: #b5c4df 1pt solid; BORDER-RIGHT: medium none; BORDER-=
BOTTOM: medium none; PADDING-BOTTOM: 0cm; PADDING-TOP: 3pt; PADDING-LEFT: =
0cm; BORDER-LEFT: medium none; PADDING-RIGHT: 0cm">
<DIV=20
style=3D"FONT-SIZE: 12px; FONT-FAMILY: tahoma; BACKGROUND: #efefef; COLOR:=
 #000000; PADDING-BOTTOM: 8px; PADDING-TOP: 8px; PADDING-LEFT: 8px; PADDIN=
G-RIGHT: 8px">
<DIV><B>From:</B>&nbsp;<A href=3D"mailto:Ian.Campbell@citrix.com">Ian=20
Campbell</A></DIV>
<DIV><B>Date:</B>&nbsp;2014-02-12&nbsp;21:15</DIV>
<DIV><B>To:</B>&nbsp;<A href=3D"mailto:herbert.cland@gmail.com">herbert=20
cland</A></DIV>
<DIV><B>CC:</B>&nbsp;<A=20
href=3D"mailto:xen-devel@lists.xen.org">xen-devel</A></DIV>
<DIV><B>Subject:</B>&nbsp;Re: [Xen-devel] [qemu-upstream-unstable.git] is =
in=20
CONFLICT state</DIV></DIV></DIV>
<DIV>
<DIV>On Wed, 2014-02-12 at 21:02 +0800, herbert cland wrote:</DIV>
<DIV>&gt; Dear ALL!</DIV>
<DIV>&gt;&nbsp; </DIV>
<DIV>&gt; Following merge may be overwrite the "xen: Fix vcpu=20
initialization"</DIV>
<DIV>&nbsp;</DIV>
<DIV>So you've said three or four times now. There is no need to repeat</D=
IV>
<DIV>yourself like this, rest assured that your mails are in the relevant<=
/DIV>
<DIV>people's inboxes and will be dealt with. Pestering in this way is=20
just</DIV>
<DIV>rude.</DIV>
<DIV>&nbsp;</DIV>
<DIV>&gt;&nbsp; patch</DIV>
<DIV>&gt; --------------------</DIV>
<DIV>&gt; Merge remote branch 'origin/stable-1.6' into xen-staging-master-=
9=20
</DIV>
<DIV>&gt; -------------------- </DIV>
<DIV>&gt;&nbsp; </DIV>
<DIV>&gt; This made a conflict for xen-all.c.</DIV>
<DIV>&gt; So it seems that the vpcu hotplug patch was overwrited by the=20
upstream</DIV>
<DIV>&gt; qemu version.</DIV>
<DIV>&nbsp;</DIV>
<DIV>It's not clear to me how you have reached that conclusion. A merge</D=
IV>
<DIV>conflict could be down to numerous things, no all of which are "vcpu<=
/DIV>
<DIV>hotplug patch was overwrited".</DIV>
<DIV>&nbsp;</DIV>
<DIV>You'll have to be a lot more precise about what branch you are=20
merging</DIV>
<DIV>into what other branch and what you think the undesirable fallout has=
</DIV>
<DIV>been I'm afraid.</DIV>
<DIV>&nbsp;</DIV>
<DIV>Ian.</DIV>
<DIV>&nbsp;</DIV>
<DIV>&nbsp;</DIV></DIV></BODY></HTML>

------=_001_NextPart166467430317_=------



--===============8401865117394408970==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8401865117394408970==--



From xen-devel-bounces@lists.xen.org Thu Feb 13 01:30:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 01:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDl8C-0004Or-G8; Thu, 13 Feb 2014 01:30:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WDl89-0004Om-VB
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 01:30:22 +0000
Received: from [193.109.254.147:53332] by server-5.bemta-14.messagelabs.com id
	D9/E4-16688-D202CF25; Thu, 13 Feb 2014 01:30:21 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392255020!3949426!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12956 invoked from network); 13 Feb 2014 01:30:20 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 01:30:20 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so7872378wib.12
	for <xen-devel@lists.xensource.com>;
	Wed, 12 Feb 2014 17:30:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=UWAz6zbaJgzVNiNNsdkJAhk8n5Wv0XGSFM5IwsgNwYQ=;
	b=DAr2gjSlcA1abWQZv9+YHNVoO07nJ9WiJGQ426qA7Xjttsl4VMS/M+7PV70k92jbM0
	i0K8qMkG2ncvn9Qhieaisc+mKZL4cnTKlPirubOjvljXdDsGcyeaRgCB8BZXej3cVMJk
	p/pi1eftFrEchLXdV4ZrqldvMsSO6IlllCbUAExzJIm1fbMEbLPTDcF4e9qQpiRx+/SK
	c0g+rs9UYDJqj8QJCh9l2gVloYUpv/0MsMKns8R6FtIRsIVPHKrgtE2+HcKkUpOv5G91
	Br5konULOLjCZE/s0X8H/BgsGpSk0RNYTQkrCzRV55XXcgVNwB+0yvWkXdDNreKoTvuZ
	JY5A==
X-Received: by 10.194.63.228 with SMTP id j4mr3757889wjs.34.1392255019734;
	Wed, 12 Feb 2014 17:30:19 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Wed, 12 Feb 2014 17:29:59 -0800 (PST)
In-Reply-To: <52FB50F0.70106@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Thu, 13 Feb 2014 01:29:59 +0000
Message-ID: <CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I tried the patch provided by Roger; I get a different error now:

Parsing config from test.cfg
libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
/local/domain/0/backend/vbd/0/51712 not ready
libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb:
failed to attach local disk for bootloader execution
libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
unable to detach locally attached disk
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -1


with -vvv

# xl -vvv create test.cfg
Parsing config from test.cfg
libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x12548c0:
create: how=(nil) callback=(nil) poller=0x1254980
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda,
uses script=... assuming phy backend
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
vdev=xvda, using backend phy
libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=(null) spec.backend=phy
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null),
uses script=... assuming phy backend
libxl: debug: libxl.c:2605:libxl__device_disk_local_initiate_attach:
trying to locally attach PHY device drbd-remus-test with script
block-drbd
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=xvdf spec.backend=phy
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvdf,
uses script=... assuming phy backend
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
register slotnum=3
libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x12548c0:
inprogress: poller=0x1254980, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
epath=/local/domain/0/backend/vbd/0/51792/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
/local/domain/0/backend/vbd/0/51792/state wanted state 2 still waiting
state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
epath=/local/domain/0/backend/vbd/0/51792/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
/local/domain/0/backend/vbd/0/51792/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x124a300: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
script: /etc/xen/scripts/block-drbd add
libxl: debug: libxl.c:2692:local_device_attach_cb: locally attached
disk /dev/xvdf
libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
/local/domain/0/backend/vbd/0/51792 not ready
libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb:
failed to attach local disk for bootloader execution
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x124a498: deregister unregistered
libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
unable to detach locally attached disk
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -1
libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x12548c0:
complete, rc=-3
libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x12548c0: destroy
xc: debug: hypercall buffer: total allocations:31 total releases:31
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:27 misses:2 toobig:2
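
For reference, the error here is libxl__wait_for_backend timing out because
the backend's xenstore state node (the /local/domain/0/backend/vbd/0/51792/state
path watched above) never reached the state it wanted after block-drbd ran.
Roughly, the wait amounts to this kind of poll loop -- a Python sketch only,
with read_state standing in for a real xenstore read; the state numbers are
the XenBus ones visible in the log (1 = Initialising, 2 = InitWait,
4 = Connected):

```python
import time

# XenBus states as they appear in the log (xen/include/public/io/xenbus.h).
XENBUS_STATES = {1: "Initialising", 2: "InitWait", 3: "Initialised",
                 4: "Connected", 5: "Closing", 6: "Closed"}

def wait_for_backend(read_state, wanted, timeout=10.0, poll=0.05):
    """Reread a state node until it equals `wanted` or the timeout expires.

    `read_state` is any zero-argument callable returning the current
    integer state; hitting the timeout corresponds to the
    "Backend ... not ready" error above.
    """
    deadline = time.monotonic() + timeout
    while True:
        if read_state() == wanted:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)

# A backend that advances Initialising -> InitWait -> Connected succeeds:
states = iter([1, 2, 2, 4])
print(wait_for_backend(lambda: next(states), wanted=4, timeout=1.0))  # True
```

The real code reads the path via xenstore rather than a callable, of course;
in the failing run above the node apparently stayed short of the wanted state
after the hotplug script completed, so the loop gave up the same way the
timeout branch does here.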

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 01:30:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 01:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDl8C-0004Or-G8; Thu, 13 Feb 2014 01:30:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WDl89-0004Om-VB
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 01:30:22 +0000
Received: from [193.109.254.147:53332] by server-5.bemta-14.messagelabs.com id
	D9/E4-16688-D202CF25; Thu, 13 Feb 2014 01:30:21 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392255020!3949426!1
X-Originating-IP: [209.85.212.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12956 invoked from network); 13 Feb 2014 01:30:20 -0000
Received: from mail-wi0-f173.google.com (HELO mail-wi0-f173.google.com)
	(209.85.212.173)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 01:30:20 -0000
Received: by mail-wi0-f173.google.com with SMTP id hn9so7872378wib.12
	for <xen-devel@lists.xensource.com>;
	Wed, 12 Feb 2014 17:30:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=UWAz6zbaJgzVNiNNsdkJAhk8n5Wv0XGSFM5IwsgNwYQ=;
	b=DAr2gjSlcA1abWQZv9+YHNVoO07nJ9WiJGQ426qA7Xjttsl4VMS/M+7PV70k92jbM0
	i0K8qMkG2ncvn9Qhieaisc+mKZL4cnTKlPirubOjvljXdDsGcyeaRgCB8BZXej3cVMJk
	p/pi1eftFrEchLXdV4ZrqldvMsSO6IlllCbUAExzJIm1fbMEbLPTDcF4e9qQpiRx+/SK
	c0g+rs9UYDJqj8QJCh9l2gVloYUpv/0MsMKns8R6FtIRsIVPHKrgtE2+HcKkUpOv5G91
	Br5konULOLjCZE/s0X8H/BgsGpSk0RNYTQkrCzRV55XXcgVNwB+0yvWkXdDNreKoTvuZ
	JY5A==
X-Received: by 10.194.63.228 with SMTP id j4mr3757889wjs.34.1392255019734;
	Wed, 12 Feb 2014 17:30:19 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Wed, 12 Feb 2014 17:29:59 -0800 (PST)
In-Reply-To: <52FB50F0.70106@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391513622.10515.75.camel@kazak.uk.xensource.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Thu, 13 Feb 2014 01:29:59 +0000
Message-ID: <CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I tried the patch provided by Roger, and I get a different error now:

Parsing config from test.cfg
libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
/local/domain/0/backend/vbd/0/51712 not ready
libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb:
failed to attach local disk for bootloader execution
libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
unable to detach locally attached disk
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -1


with -vvv

# xl -vvv create test.cfg
Parsing config from test.cfg
libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x12548c0:
create: how=(nil) callback=(nil) poller=0x1254980
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=xvda spec.backend=unknown
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda,
uses script=... assuming phy backend
libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
vdev=xvda, using backend phy
libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=(null) spec.backend=phy
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null),
uses script=... assuming phy backend
libxl: debug: libxl.c:2605:libxl__device_disk_local_initiate_attach:
trying to locally attach PHY device drbd-remus-test with script
block-drbd
libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
vdev=xvdf spec.backend=phy
libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvdf,
uses script=... assuming phy backend
libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
register slotnum=3
libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x12548c0:
inprogress: poller=0x1254980, flags=i
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
epath=/local/domain/0/backend/vbd/0/51792/state
libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
/local/domain/0/backend/vbd/0/51792/state wanted state 2 still waiting
state 1
libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
epath=/local/domain/0/backend/vbd/0/51792/state
libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
/local/domain/0/backend/vbd/0/51792/state wanted state 2 ok
libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
deregister slotnum=3
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x124a300: deregister unregistered
libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
script: /etc/xen/scripts/block-drbd add
libxl: debug: libxl.c:2692:local_device_attach_cb: locally attached
disk /dev/xvdf
libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
/local/domain/0/backend/vbd/0/51792 not ready
libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb:
failed to attach local disk for bootloader execution
libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
w=0x124a498: deregister unregistered
libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
unable to detach locally attached disk
libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
(re-)build domain: -1
libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x12548c0:
complete, rc=-3
libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x12548c0: destroy
xc: debug: hypercall buffer: total allocations:31 total releases:31
xc: debug: hypercall buffer: current allocations:0 maximum allocations:2
xc: debug: hypercall buffer: cache current size:2
xc: debug: hypercall buffer: cache hits:27 misses:2 toobig:2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 01:41:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 01:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDlJ1-0004ks-To; Thu, 13 Feb 2014 01:41:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <uazam@i2cinc.com>) id 1WDlJ0-0004kn-6U
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 01:41:34 +0000
Received: from [85.158.139.211:64566] by server-11.bemta-5.messagelabs.com id
	60/38-23886-DC22CF25; Thu, 13 Feb 2014 01:41:33 +0000
X-Env-Sender: uazam@i2cinc.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392255690!3570710!1
X-Originating-IP: [199.96.216.55]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_10_20,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27170 invoked from network); 13 Feb 2014 01:41:30 -0000
Received: from mail.i2cinc.com (HELO zm1.i2cinc.com) (199.96.216.55)
	by server-7.tower-206.messagelabs.com with SMTP;
	13 Feb 2014 01:41:30 -0000
Received: from localhost (localhost [127.0.0.1])
	by zm1.i2cinc.com (Postfix) with ESMTP id 261C91F21F9F;
	Wed, 12 Feb 2014 17:41:30 -0800 (PST)
X-Virus-Scanned: amavisd-new at i2cinc.com
Received: from zm1.i2cinc.com ([127.0.0.1])
	by localhost (zm1.i2cinc.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 6pkVynyc7RH4; Wed, 12 Feb 2014 17:41:29 -0800 (PST)
Received: from [10.11.17.22] (unknown [119.63.130.34])
	by zm1.i2cinc.com (Postfix) with ESMTPSA id 925F41F20ACD;
	Wed, 12 Feb 2014 17:41:27 -0800 (PST)
Message-ID: <52FC228C.9040207@i2cinc.com>
Date: Thu, 13 Feb 2014 06:40:28 +0500
From: Umair Azam <uazam@i2cinc.com>
User-Agent: Mozilla/5.0 (Windows NT 6.0;
	rv:17.0) Gecko/20130509 Thunderbird/17.0.6
MIME-Version: 1.0
To: "xs-devel@lists.xenserver.org" <xs-devel@lists.xenserver.org>, 
	xen-devel@lists.xen.org
Subject: [Xen-devel] problem with my xenserver host NIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4138881865152443097=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4138881865152443097==
Content-Type: multipart/alternative;
 boundary="------------060709020604090003000508"

This is a multi-part message in MIME format.
--------------060709020604090003000508
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi guys,

Can you help me understand what the problem is with my XenServer host NIC?

Feb 13 04:37:41 xenserver-Host1 kernel: [84621.965144] 
/local/domain/3/device/vif/0: Initialising
Feb 13 04:37:41 xenserver-Host1 kernel: [84621.965194] 
/local/domain/3/device/vif/0: Initialising
Feb 13 04:37:42 xenserver-Host1 kernel: [84622.459354] *device vif3.0 
entered promiscuous mode*
Feb 13 04:37:42 xenserver-Host1 kernel: [84622.609173] 
/local/domain/3/device/vif/1: Initialising
Feb 13 04:37:42 xenserver-Host1 kernel: [84622.609256] 
/local/domain/3/device/vif/1: Initialising
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.098415] device vif3.1 
entered promiscuous mode
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.238585] 
*/local/domain/3/device/vif/2: Initialising*
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.238657] 
/local/domain/3/device/vif/2: Initialising
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.737693] device vif3.2 
entered promiscuous mode
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.876452] 
/local/domain/3/device/vif/3: Initialising
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.876541] 
/local/domain/3/device/vif/3: Initialising
Feb 13 04:37:44 xenserver-Host1 kernel: [84624.367713] device vif3.3 
entered promiscuous mode
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.888799] blkback: 
event-channel 8
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.891066] blkback: ring-ref 8
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.891180] blkback: protocol 
1 (x86_32-abi)
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897358] blkback: 
event-channel 9
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897573] blkback: ring-ref 9
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897650] blkback: protocol 
1 (x86_32-abi)
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.900317] 
/local/domain/3/device/vif/0: Connected
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.902502] 
/local/domain/3/device/vif/1: Connected
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.904564] 
/local/domain/3/device/vif/2: Connected
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.909578] 
/local/domain/3/device/vif/3: Connected
Feb 13 04:38:26 xenserver-Host1 kernel: [84666.557634] *e1000e: eth0 NIC 
Link is Down*
Feb 13 04:38:39 xenserver-Host1 kernel: [84679.967459] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:38:39 xenserver-Host1 kernel: [84679.967472] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:39:12 xenserver-Host1 kernel: [84712.977493] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:39:24 xenserver-Host1 kernel: [84724.987474] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:39:36 xenserver-Host1 kernel: [84736.427442] *vif3.3: draining 
TX queue*
Feb 13 04:39:36 xenserver-Host1 kernel: [84736.987447] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:40:00 xenserver-Host1 kernel: [84760.987443] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:40:48 xenserver-Host1 kernel: [84808.987488] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:41:48 xenserver-Host1 kernel: [84868.987466] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:42:24 xenserver-Host1 kernel: [84904.987446] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:43:19 xenserver-Host1 kernel: [84959.538298] *e1000e: eth0 NIC 
Link is Up 100 Mbps Full Duplex, Flow Control: None*
Feb 13 04:43:19 xenserver-Host1 kernel: [84959.538406] e1000e 
0000:00:19.0: eth0: 10/100 speed: disabling TSO
Feb 13 04:43:24 xenserver-Host1 kernel: [84964.987445] nfs: server 
10.11.17.31 not responding, timed out
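
The timestamps already tell most of the story: every nfs timeout falls inside
the window between the e1000e link-down and link-up messages (plus one
straggler just after recovery, likely queued I/O flushing), so this looks like
the NIC losing link rather than an NFS-side problem. Quick arithmetic on the
kernel timestamps quoted above:

```python
# Kernel timestamps (seconds since boot) copied from the log above.
link_down = 84666.557634          # "e1000e: eth0 NIC Link is Down"
link_up = 84959.538298            # "e1000e: eth0 NIC Link is Up 100 Mbps ..."
first_nfs_timeout = 84679.967459  # first "nfs: server 10.11.17.31 not responding"
last_nfs_timeout = 84964.987445   # final timeout, just after link recovery

print(f"link outage: {link_up - link_down:.1f} s")                              # 293.0 s, ~5 min
print(f"first timeout after link loss: {first_nfs_timeout - link_down:.1f} s")  # 13.4 s
```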


-- 
Umair Azam


--------------060709020604090003000508--


--===============4138881865152443097==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4138881865152443097==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 01:41:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 01:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDlJ1-0004ks-To; Thu, 13 Feb 2014 01:41:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <uazam@i2cinc.com>) id 1WDlJ0-0004kn-6U
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 01:41:34 +0000
Received: from [85.158.139.211:64566] by server-11.bemta-5.messagelabs.com id
	60/38-23886-DC22CF25; Thu, 13 Feb 2014 01:41:33 +0000
X-Env-Sender: uazam@i2cinc.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392255690!3570710!1
X-Originating-IP: [199.96.216.55]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_10_20,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27170 invoked from network); 13 Feb 2014 01:41:30 -0000
Received: from mail.i2cinc.com (HELO zm1.i2cinc.com) (199.96.216.55)
	by server-7.tower-206.messagelabs.com with SMTP;
	13 Feb 2014 01:41:30 -0000
Received: from localhost (localhost [127.0.0.1])
	by zm1.i2cinc.com (Postfix) with ESMTP id 261C91F21F9F;
	Wed, 12 Feb 2014 17:41:30 -0800 (PST)
X-Virus-Scanned: amavisd-new at i2cinc.com
Received: from zm1.i2cinc.com ([127.0.0.1])
	by localhost (zm1.i2cinc.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 6pkVynyc7RH4; Wed, 12 Feb 2014 17:41:29 -0800 (PST)
Received: from [10.11.17.22] (unknown [119.63.130.34])
	by zm1.i2cinc.com (Postfix) with ESMTPSA id 925F41F20ACD;
	Wed, 12 Feb 2014 17:41:27 -0800 (PST)
Message-ID: <52FC228C.9040207@i2cinc.com>
Date: Thu, 13 Feb 2014 06:40:28 +0500
From: Umair Azam <uazam@i2cinc.com>
User-Agent: Mozilla/5.0 (Windows NT 6.0;
	rv:17.0) Gecko/20130509 Thunderbird/17.0.6
MIME-Version: 1.0
To: "xs-devel@lists.xenserver.org" <xs-devel@lists.xenserver.org>, 
	xen-devel@lists.xen.org
Subject: [Xen-devel] problem with my xenserver host NIC
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4138881865152443097=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4138881865152443097==
Content-Type: multipart/alternative;
 boundary="------------060709020604090003000508"

This is a multi-part message in MIME format.
--------------060709020604090003000508
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi guys,

Can you help me understand what the problem is with my XenServer host NIC?

Feb 13 04:37:41 xenserver-Host1 kernel: [84621.965144] 
/local/domain/3/device/vif/0: Initialising
Feb 13 04:37:41 xenserver-Host1 kernel: [84621.965194] 
/local/domain/3/device/vif/0: Initialising
Feb 13 04:37:42 xenserver-Host1 kernel: [84622.459354] *device vif3.0 
entered promiscuous mode*
Feb 13 04:37:42 xenserver-Host1 kernel: [84622.609173] 
/local/domain/3/device/vif/1: Initialising
Feb 13 04:37:42 xenserver-Host1 kernel: [84622.609256] 
/local/domain/3/device/vif/1: Initialising
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.098415] device vif3.1 
entered promiscuous mode
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.238585] 
*/local/domain/3/device/vif/2: Initialising*
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.238657] 
/local/domain/3/device/vif/2: Initialising
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.737693] device vif3.2 
entered promiscuous mode
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.876452] 
/local/domain/3/device/vif/3: Initialising
Feb 13 04:37:43 xenserver-Host1 kernel: [84623.876541] 
/local/domain/3/device/vif/3: Initialising
Feb 13 04:37:44 xenserver-Host1 kernel: [84624.367713] device vif3.3 
entered promiscuous mode
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.888799] blkback: 
event-channel 8
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.891066] blkback: ring-ref 8
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.891180] blkback: protocol 
1 (x86_32-abi)
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897358] blkback: 
event-channel 9
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897573] blkback: ring-ref 9
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897650] blkback: protocol 
1 (x86_32-abi)
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.900317] 
/local/domain/3/device/vif/0: Connected
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.902502] 
/local/domain/3/device/vif/1: Connected
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.904564] 
/local/domain/3/device/vif/2: Connected
Feb 13 04:38:08 xenserver-Host1 kernel: [84648.909578] 
/local/domain/3/device/vif/3: Connected
Feb 13 04:38:26 xenserver-Host1 kernel: [84666.557634] *e1000e: eth0 NIC 
Link is Down*
Feb 13 04:38:39 xenserver-Host1 kernel: [84679.967459] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:38:39 xenserver-Host1 kernel: [84679.967472] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:39:12 xenserver-Host1 kernel: [84712.977493] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:39:24 xenserver-Host1 kernel: [84724.987474] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:39:36 xenserver-Host1 kernel: [84736.427442] *vif3.3: draining 
TX queue*
Feb 13 04:39:36 xenserver-Host1 kernel: [84736.987447] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:40:00 xenserver-Host1 kernel: [84760.987443] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:40:48 xenserver-Host1 kernel: [84808.987488] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:41:48 xenserver-Host1 kernel: [84868.987466] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:42:24 xenserver-Host1 kernel: [84904.987446] nfs: server 
10.11.17.31 not responding, timed out
Feb 13 04:43:19 xenserver-Host1 kernel: [84959.538298] *e1000e: eth0 NIC 
Link is Up 100 Mbps Full Duplex, Flow Control: None*
Feb 13 04:43:19 xenserver-Host1 kernel: [84959.538406] e1000e 
0000:00:19.0: eth0: 10/100 speed: disabling TSO
Feb 13 04:43:24 xenserver-Host1 kernel: [84964.987445] nfs: server 
10.11.17.31 not responding, timed out


-- 
Umair Azam


--------------060709020604090003000508
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Hi guys,<br>
    <br>
    Can u help to make me understand whats the problem with my xenserver
    host NIC<br>
    <br>
    Feb 13 04:37:41 xenserver-Host1 kernel: [84621.965144]
    /local/domain/3/device/vif/0: Initialising<br>
    Feb 13 04:37:41 xenserver-Host1 kernel: [84621.965194]
    /local/domain/3/device/vif/0: Initialising<br>
    Feb 13 04:37:42 xenserver-Host1 kernel: [84622.459354] <b>device
      vif3.0 entered promiscuous mode</b><br>
    Feb 13 04:37:42 xenserver-Host1 kernel: [84622.609173]
    /local/domain/3/device/vif/1: Initialising<br>
    Feb 13 04:37:42 xenserver-Host1 kernel: [84622.609256]
    /local/domain/3/device/vif/1: Initialising<br>
    Feb 13 04:37:43 xenserver-Host1 kernel: [84623.098415] device vif3.1
    entered promiscuous mode<br>
    Feb 13 04:37:43 xenserver-Host1 kernel: [84623.238585] <b>/local/domain/3/device/vif/2:
      Initialising</b><br>
    Feb 13 04:37:43 xenserver-Host1 kernel: [84623.238657]
    /local/domain/3/device/vif/2: Initialising<br>
    Feb 13 04:37:43 xenserver-Host1 kernel: [84623.737693] device vif3.2
    entered promiscuous mode<br>
    Feb 13 04:37:43 xenserver-Host1 kernel: [84623.876452]
    /local/domain/3/device/vif/3: Initialising<br>
    Feb 13 04:37:43 xenserver-Host1 kernel: [84623.876541]
    /local/domain/3/device/vif/3: Initialising<br>
    Feb 13 04:37:44 xenserver-Host1 kernel: [84624.367713] device vif3.3
    entered promiscuous mode<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.888799] blkback:
    event-channel 8<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.891066] blkback:
    ring-ref 8<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.891180] blkback:
    protocol 1 (x86_32-abi)<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897358] blkback:
    event-channel 9<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897573] blkback:
    ring-ref 9<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.897650] blkback:
    protocol 1 (x86_32-abi)<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.900317]
    /local/domain/3/device/vif/0: Connected<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.902502]
    /local/domain/3/device/vif/1: Connected<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.904564]
    /local/domain/3/device/vif/2: Connected<br>
    Feb 13 04:38:08 xenserver-Host1 kernel: [84648.909578]
    /local/domain/3/device/vif/3: Connected<br>
    Feb 13 04:38:26 xenserver-Host1 kernel: [84666.557634] <b>e1000e:
      eth0 NIC Link is Down</b><br>
    Feb 13 04:38:39 xenserver-Host1 kernel: [84679.967459] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:38:39 xenserver-Host1 kernel: [84679.967472] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:39:12 xenserver-Host1 kernel: [84712.977493] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:39:24 xenserver-Host1 kernel: [84724.987474] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:39:36 xenserver-Host1 kernel: [84736.427442] <b>vif3.3:
      draining TX queue</b><br>
    Feb 13 04:39:36 xenserver-Host1 kernel: [84736.987447] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:40:00 xenserver-Host1 kernel: [84760.987443] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:40:48 xenserver-Host1 kernel: [84808.987488] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:41:48 xenserver-Host1 kernel: [84868.987466] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:42:24 xenserver-Host1 kernel: [84904.987446] nfs: server
    10.11.17.31 not responding, timed out<br>
    Feb 13 04:43:19 xenserver-Host1 kernel: [84959.538298] <b>e1000e:
      eth0 NIC Link is Up 100 Mbps Full Duplex, Flow Control: None</b><br>
    Feb 13 04:43:19 xenserver-Host1 kernel: [84959.538406] e1000e
    0000:00:19.0: eth0: 10/100 speed: disabling TSO<br>
    Feb 13 04:43:24 xenserver-Host1 kernel: [84964.987445] nfs: server
    10.11.17.31 not responding, timed out<br>
    <br>
    <br>
    <pre class="moz-signature" cols="72">-- 
Umair Azam
</pre>
  </body>
</html>

--------------060709020604090003000508--


--===============4138881865152443097==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4138881865152443097==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 01:59:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 01:59:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDlaI-0005E6-Kh; Thu, 13 Feb 2014 01:59:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDlaG-0005E1-Hy
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 01:59:24 +0000
Received: from [85.158.143.35:32396] by server-3.bemta-4.messagelabs.com id
	D6/C0-11539-BF62CF25; Thu, 13 Feb 2014 01:59:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392256758!5271027!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1090 invoked from network); 13 Feb 2014 01:59:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 01:59:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="100318563"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 01:59:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 20:59:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDla8-00036U-EU;
	Thu, 13 Feb 2014 01:59:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDla8-00025R-1d;
	Thu, 13 Feb 2014 01:59:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24859-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 01:59:16 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 24859: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24859 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24859/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-freebsd10-i386 17 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 17 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  e21b2fa19946806ea27873a8808bc1ace48b7c69
baseline version:
 xen                  f0d0e5efe15a8ce53eaaeee64cf568358ec197ca

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.1-testing
+ revision=e21b2fa19946806ea27873a8808bc1ace48b7c69
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.1-testing e21b2fa19946806ea27873a8808bc1ace48b7c69
+ branch=xen-4.1-testing
+ revision=e21b2fa19946806ea27873a8808bc1ace48b7c69
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.1-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.1-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.1-testing
++ : daily-cron.xen-4.1-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.1-testing.git
++ : daily-cron.xen-4.1-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.1-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.1-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.1-testing
+ xenversion=xen-4.1
+ xenversion=4.1
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git e21b2fa19946806ea27873a8808bc1ace48b7c69:stable-4.1
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   f0d0e5e..e21b2fa  e21b2fa19946806ea27873a8808bc1ace48b7c69 -> stable-4.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 02:14:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 02:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDloz-00067a-5C; Thu, 13 Feb 2014 02:14:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDlox-00067V-13
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 02:14:35 +0000
Received: from [85.158.139.211:2909] by server-16.bemta-5.messagelabs.com id
	AC/7F-05060-A8A2CF25; Thu, 13 Feb 2014 02:14:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392257671!3556258!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12099 invoked from network); 13 Feb 2014 02:14:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 02:14:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="100321163"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 02:14:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 21:14:30 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDlos-0003BL-NM;
	Thu, 13 Feb 2014 02:14:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDlos-0000E2-KG;
	Thu, 13 Feb 2014 02:14:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24858-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 02:14:30 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24858: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24858 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24858/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24760

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  2778b572abf9dde4b38e60c4dd422283cf4bbde5
baseline version:
 xen                  d7c6be61836b0a4d996f82d3e7c7e50150996701

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=2778b572abf9dde4b38e60c4dd422283cf4bbde5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 2778b572abf9dde4b38e60c4dd422283cf4bbde5
+ branch=xen-4.3-testing
+ revision=2778b572abf9dde4b38e60c4dd422283cf4bbde5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 2778b572abf9dde4b38e60c4dd422283cf4bbde5:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   d7c6be6..2778b57  2778b572abf9dde4b38e60c4dd422283cf4bbde5 -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 02:14:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 02:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDloz-00067a-5C; Thu, 13 Feb 2014 02:14:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDlox-00067V-13
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 02:14:35 +0000
Received: from [85.158.139.211:2909] by server-16.bemta-5.messagelabs.com id
	AC/7F-05060-A8A2CF25; Thu, 13 Feb 2014 02:14:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392257671!3556258!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12099 invoked from network); 13 Feb 2014 02:14:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 02:14:33 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="100321163"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 02:14:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 21:14:30 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDlos-0003BL-NM;
	Thu, 13 Feb 2014 02:14:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDlos-0000E2-KG;
	Thu, 13 Feb 2014 02:14:30 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24858-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 02:14:30 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24858: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24858 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24858/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24760

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  2778b572abf9dde4b38e60c4dd422283cf4bbde5
baseline version:
 xen                  d7c6be61836b0a4d996f82d3e7c7e50150996701

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=2778b572abf9dde4b38e60c4dd422283cf4bbde5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing 2778b572abf9dde4b38e60c4dd422283cf4bbde5
+ branch=xen-4.3-testing
+ revision=2778b572abf9dde4b38e60c4dd422283cf4bbde5
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 2778b572abf9dde4b38e60c4dd422283cf4bbde5:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   d7c6be6..2778b57  2778b572abf9dde4b38e60c4dd422283cf4bbde5 -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 02:54:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 02:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDmR9-000791-Gx; Thu, 13 Feb 2014 02:54:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WDmR7-00078w-Tr
	for Xen-devel@lists.xensource.com; Thu, 13 Feb 2014 02:54:02 +0000
Received: from [85.158.137.68:60031] by server-11.bemta-3.messagelabs.com id
	3E/9F-04255-9C33CF25; Thu, 13 Feb 2014 02:54:01 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392260038!1498336!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27901 invoked from network); 13 Feb 2014 02:54:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 02:54:00 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1D2rs9L015725
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 13 Feb 2014 02:53:55 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1D2rrde003843
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 13 Feb 2014 02:53:54 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1D2rrFo024108; Thu, 13 Feb 2014 02:53:53 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 12 Feb 2014 18:53:53 -0800
Date: Wed, 12 Feb 2014 18:53:52 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>
Message-ID: <20140212185352.4c920a54@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.2 (GTK+ 2.18.9; x86_64-unknown-linux-gnu)
Mime-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] Error ignored in xc_map_foreign_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It appears that xc_map_foreign_pages() handles the return value incorrectly:

    res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
    if (res) {
        for (i = 0; i < num; i++) {
            if (err[i]) {
                errno = -err[i];
                munmap(res, num * PAGE_SIZE);
                res = NULL;
                break;
            }
        }
    }

The batched add-to-physmap interface will actually store per-page errors
in the err array and return 0 unless it hits EFAULT or something like that;
see xenmem_add_to_physmap_batch(). In the case I'm looking at, xentrace
calls here to map a page and the mapping fails, but the return is 0 because
the error was successfully copied back by Xen. The error is then missed
above since res is 0. xentrace carries on regardless, and that causes a
Xen crash.

It appears the fix could be as simple as removing the check for res above...

    res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
    for (i = 0; i < num; i++) {
        if (err[i]) {
         .....

What do you guys think?

thanks,
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 03:54:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 03:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDnMu-0000CX-Me; Thu, 13 Feb 2014 03:53:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDnMt-0000CQ-IF
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 03:53:43 +0000
Received: from [85.158.137.68:59636] by server-13.bemta-3.messagelabs.com id
	BE/67-26923-5C14CF25; Thu, 13 Feb 2014 03:53:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392263619!1489673!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22040 invoked from network); 13 Feb 2014 03:53:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 03:53:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="100333235"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 03:53:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 22:53:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDnMn-0000S9-Rp;
	Thu, 13 Feb 2014 03:53:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDnMm-0005F8-Ei;
	Thu, 13 Feb 2014 03:53:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24857-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 03:53:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24857: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24857 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24857/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-freebsd10-amd64 3 host-install(3) broken REGR. vs. 24808

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
baseline version:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 22 17:47:21 2014 +0000

    libxc: Fix out-of-memory error handling in xc_cpupool_getinfo()
    
    Avoid freeing info then returning it to the caller.
    
    This is XSA-88.
    
    Coverity-ID: 1056192
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit d883c179a74111a6804baf8cb8224235242a88fc)
(qemu changes not included)
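As an aside, the class of bug fixed by this commit (freeing a buffer on an out-of-memory path but still handing the dangling pointer back to the caller) can be sketched as follows. This is a hedged, hypothetical illustration, not the actual xc_cpupool_getinfo() code; the `info_t` type and `get_info()` function are invented for the example:

```c
#include <stdlib.h>

/* Hypothetical stand-in for the structure being allocated. */
typedef struct {
    int *data;
    size_t len;
} info_t;

/* Returns a populated info_t*, or NULL on allocation failure.
 * The buggy pattern freed 'info' on the error path but then
 * returned the stale pointer anyway; the fix is to make sure
 * the caller only ever sees NULL after a failure. */
info_t *get_info(size_t n)
{
    info_t *info = calloc(1, sizeof(*info));
    if ( !info )
        return NULL;

    info->data = calloc(n, sizeof(*info->data));
    if ( !info->data )
    {
        free(info);     /* clean up the partial allocation ... */
        return NULL;    /* ... and return NULL, never 'info' */
    }
    info->len = n;
    return info;
}
```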

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 03:54:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 03:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDnMu-0000CX-Me; Thu, 13 Feb 2014 03:53:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDnMt-0000CQ-IF
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 03:53:43 +0000
Received: from [85.158.137.68:59636] by server-13.bemta-3.messagelabs.com id
	BE/67-26923-5C14CF25; Thu, 13 Feb 2014 03:53:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392263619!1489673!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22040 invoked from network); 13 Feb 2014 03:53:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 03:53:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="100333235"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 03:53:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 12 Feb 2014 22:53:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDnMn-0000S9-Rp;
	Thu, 13 Feb 2014 03:53:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDnMm-0005F8-Ei;
	Thu, 13 Feb 2014 03:53:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24857-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 03:53:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24857: trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24857 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24857/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-freebsd10-amd64 3 host-install(3) broken REGR. vs. 24808

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
baseline version:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        broken  
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jan 22 17:47:21 2014 +0000

    libxc: Fix out-of-memory error handling in xc_cpupool_getinfo()
    
    Avoid freeing info then returning it to the caller.
    
    This is XSA-88.
    
    Coverity-ID: 1056192
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit d883c179a74111a6804baf8cb8224235242a88fc)
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 03:56:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 03:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDnOw-0000Hk-84; Thu, 13 Feb 2014 03:55:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WDnOu-0000Hd-TA
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 03:55:49 +0000
Received: from [85.158.139.211:4916] by server-10.bemta-5.messagelabs.com id
	5D/55-08578-4424CF25; Thu, 13 Feb 2014 03:55:48 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392263746!3561946!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31196 invoked from network); 13 Feb 2014 03:55:47 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-8.tower-206.messagelabs.com with SMTP;
	13 Feb 2014 03:55:47 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 12 Feb 2014 19:51:29 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,836,1384329600"; d="scan'208";a="454636512"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 12 Feb 2014 19:55:45 -0800
Received: from fmsmsx156.amr.corp.intel.com (10.18.116.74) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 12 Feb 2014 19:55:45 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	fmsmsx156.amr.corp.intel.com (10.18.116.74) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 12 Feb 2014 19:55:44 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.253]) with mapi id
	14.03.0123.003; Thu, 13 Feb 2014 11:55:31 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Thread-Topic: Xen nested wiki
Thread-Index: Ac8oaRnW+W8g7uSZQTu9nFY+Bt3JcA==
Date: Thu, 13 Feb 2014 03:55:31 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"'Jan Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi George,

I have updated the latest Xen nested status on the Xen wiki page; please have a look.
Although it is hard to say that nested virtualization is well supported or of product quality, it is ready enough to let people know that Xen basically supports it. In particular, in the Xen-on-Xen case I have not seen any issue for more than half a year. Besides, I regularly use nested Xen to debug Xen boot issues without having to reboot my real box, and it really helps me a lot. So, if possible, I hope we can add nested support to the Xen 4.4 release to let people know the current status.

Xen nested wiki:
http://wiki.xenproject.org/wiki/Xen_nested

best regards
yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 04:25:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 04:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDnrI-0001Cf-D1; Thu, 13 Feb 2014 04:25:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WDnrC-0001Ca-Ht
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 04:25:03 +0000
Received: from [85.158.139.211:57030] by server-13.bemta-5.messagelabs.com id
	17/EB-18801-D194CF25; Thu, 13 Feb 2014 04:25:01 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392265500!3568911!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18526 invoked from network); 13 Feb 2014 04:25:00 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-206.messagelabs.com with SMTP;
	13 Feb 2014 04:25:00 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 12 Feb 2014 20:20:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,836,1384329600"; d="scan'208";a="454646930"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 12 Feb 2014 20:24:42 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 12 Feb 2014 20:24:42 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Thu, 13 Feb 2014 12:24:38 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH 2/2] Nested EPT: fixing issue of translate L2 gva to L1
	gfn
Thread-Index: AQHPJ9Td6/IQp/hGFkeyWX9+GazxJZqyj9mQ
Date: Thu, 13 Feb 2014 04:24:38 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE59@SHSMSX104.ccr.corp.intel.com>
References: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
	<1392170936-31362-2-git-send-email-yang.z.zhang@intel.com>
	<52FB3EC9.5000201@amazon.de>
In-Reply-To: <52FB3EC9.5000201@amazon.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] Nested EPT: fixing issue of translate
 L2 gva to L1 gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Egger, Christoph wrote on 2014-02-12:
> On 12.02.14 03:08, Yang Zhang wrote:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>> 
>> There is no way to translate L2 gva to L1 gfn directly.
> 
> Why?

I guess you mean p2m_ga_to_gfn() is able to do it. 

> 
>> To do it, we need to get L2's gfn first. Then look up the virtual EPT
>> to get L1's gfn.
>> 
>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>> ---
>>  xen/arch/x86/mm/p2m.c |   25 ++++++++++++++++++++-----
>>  1 files changed, 20 insertions(+), 5 deletions(-)
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index
>> 8f380ed..e92cfbe 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1605,22 +1605,37 @@ unsigned long paging_gva_to_gfn(struct vcpu *v,
>>          && paging_mode_hap(v->domain)
>>          && nestedhvm_is_n2(v) )
>>      {
>> -        unsigned long gfn;
>> +        unsigned long gfn, l1gfn, exit_qual;
>>          struct p2m_domain *p2m;
>>          const struct paging_mode *mode;
>> -        uint32_t pfec_21 = *pfec;
>>          uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
>> +        unsigned int page_order, exit_reason;
>> +        int rc;
>> +        uint8_t p2m_acc;
>> +        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
>> 
>>          /* translate l2 guest va into l2 guest gfn */
>>          p2m = p2m_get_nestedp2m(v, np2m_base);
>>          mode = paging_get_nestedmode(v);
>>          gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>> +        if ( gfn == INVALID_GFN )
>> +            return gfn;
>> +
>>          /* translate l2 guest gfn into l1 guest gfn */
>> -        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
>> -                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
>> -    }
> 
> I think in p2m-ept.c you should override that function pointer to a
> EPT specific implementation.
> 

Right. I just noticed that p2m_ga_to_gfn() is designed to do this. 

> Christoph
> 
>> +        rc = nept_translate_l2ga(v, gfn << 12, &page_order, 4,
>> +                                 &l1gfn, &p2m_acc,
>> +                                 &exit_qual, &exit_reason);
>> +        if ( rc == EPT_TRANSLATE_VIOLATION || rc == EPT_TRANSLATE_MISCONFIG )
>> +        {
>> +            nvmx->ept.exit_reason = exit_reason;
>> +            nvmx->ept.exit_qual = exit_qual;
>> +            vcpu_nestedhvm(current).nv_vmexit_pending = 1;
>> +        }
>> +        if ( rc == EPT_TRANSLATE_RETRY )
>> +            *pfec = PFEC_page_paged;
>> 
>> +        return l1gfn;
>> +    }
>>      return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
>>  }
>>
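[Editorial aside: the two-step translation the patch performs (L2 gva -> L2 gfn through the L2 guest's own paging, then L2 gfn -> L1 gfn through the virtual EPT) can be modeled with a toy sketch. This is NOT the Xen API; the flat lookup tables and function names below are invented stand-ins for mode->gva_to_gfn() and nept_translate_l2ga():]

```c
#define INVALID_GFN (~0UL)
#define NPAGES 16

/* Stage 1: L2 guest virtual address -> L2 gfn.
 * A flat array stands in for the L2 guest's page tables. */
unsigned long l2_gva_to_gfn(const unsigned long *l2_pt, unsigned long va)
{
    unsigned long vpn = va >> 12;
    return vpn < NPAGES ? l2_pt[vpn] : INVALID_GFN;
}

/* Stage 2: L2 gfn -> L1 gfn via the virtual EPT. */
unsigned long vept_l2gfn_to_l1gfn(const unsigned long *vept, unsigned long gfn)
{
    return gfn < NPAGES ? vept[gfn] : INVALID_GFN;
}

/* Composition, mirroring the patch's control flow: a failed stage-1
 * lookup is propagated to the caller before stage 2 is attempted. */
unsigned long l2_gva_to_l1_gfn(const unsigned long *l2_pt,
                               const unsigned long *vept, unsigned long va)
{
    unsigned long gfn = l2_gva_to_gfn(l2_pt, va);
    if ( gfn == INVALID_GFN )
        return gfn;
    return vept_l2gfn_to_l1gfn(vept, gfn);
}
```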


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 04:25:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 04:25:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDnrI-0001Cf-D1; Thu, 13 Feb 2014 04:25:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WDnrC-0001Ca-Ht
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 04:25:03 +0000
Received: from [85.158.139.211:57030] by server-13.bemta-5.messagelabs.com id
	17/EB-18801-D194CF25; Thu, 13 Feb 2014 04:25:01 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392265500!3568911!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18526 invoked from network); 13 Feb 2014 04:25:00 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-206.messagelabs.com with SMTP;
	13 Feb 2014 04:25:00 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 12 Feb 2014 20:20:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,836,1384329600"; d="scan'208";a="454646930"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 12 Feb 2014 20:24:42 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 12 Feb 2014 20:24:42 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.195]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Thu, 13 Feb 2014 12:24:38 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "Egger, Christoph" <chegger@amazon.de>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH 2/2] Nested EPT: fixing issue of translate L2 gva to L1
	gfn
Thread-Index: AQHPJ9Td6/IQp/hGFkeyWX9+GazxJZqyj9mQ
Date: Thu, 13 Feb 2014 04:24:38 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE59@SHSMSX104.ccr.corp.intel.com>
References: <1392170936-31362-1-git-send-email-yang.z.zhang@intel.com>
	<1392170936-31362-2-git-send-email-yang.z.zhang@intel.com>
	<52FB3EC9.5000201@amazon.de>
In-Reply-To: <52FB3EC9.5000201@amazon.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Dong, Eddie" <eddie.dong@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>, "JBeulich@suse.com" <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] Nested EPT: fixing issue of translate
 L2 gva to L1 gfn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Egger, Christoph wrote on 2014-02-12:
> On 12.02.14 03:08, Yang Zhang wrote:
>> From: Yang Zhang <yang.z.zhang@Intel.com>
>> 
>> There is no way to translate L2 gva to L1 gfn directly.
> 
> Why?

I guess you mean p2m_ga_to_gfn() is able to do it. 

> 
>> To do it, we need to get L2's gfn first. Then look up the virtual EPT
>> to get L1's gfn.
>> 
>> Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
>> ---
>>  xen/arch/x86/mm/p2m.c |   25 ++++++++++++++++++++-----
>>  1 files changed, 20 insertions(+), 5 deletions(-)
>> diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c index
>> 8f380ed..e92cfbe 100644
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1605,22 +1605,37 @@ unsigned long paging_gva_to_gfn(struct vcpu
> *v,
>>          && paging_mode_hap(v->domain)
>>          && nestedhvm_is_n2(v) )
>>      {
>> -        unsigned long gfn;
>> +        unsigned long gfn, l1gfn, exit_qual;
>>          struct p2m_domain *p2m;
>>          const struct paging_mode *mode;
>> -        uint32_t pfec_21 = *pfec;
>>          uint64_t np2m_base = nhvm_vcpu_p2m_base(v);
>> +        unsigned int page_order, exit_reason;
>> +        int rc;
>> +        uint8_t p2m_acc;
>> +        struct nestedvmx *nvmx = &vcpu_2_nvmx(v);
>> 
>>          /* translate l2 guest va into l2 guest gfn */
>>          p2m = p2m_get_nestedp2m(v, np2m_base);
>>          mode = paging_get_nestedmode(v);
>>          gfn = mode->gva_to_gfn(v, p2m, va, pfec);
>> +        if ( gfn == INVALID_GFN )
>> +            return gfn;
>> +
>>          /* translate l2 guest gfn into l1 guest gfn */
>> -        return hostmode->p2m_ga_to_gfn(v, hostp2m, np2m_base,
>> -                                       gfn << PAGE_SHIFT, &pfec_21, NULL);
>> -    }
> 
> I think in p2m-ept.c you should override that function pointer to a
> EPT specific implementation.
> 

Right. I just noticed that p2m_ga_to_gfn() is designed to do this. 
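
For readers following along, the two-step walk being discussed (L2 gva -> L2 gfn
through the nested paging mode, then L2 gfn -> L1 gfn through the virtual EPT)
can be sketched with a toy model. Everything below -- the flat lookup tables and
names such as toy_l2gva_to_l1gfn -- is illustrative only and is not Xen's actual
data layout or API; a real walk is multi-level and raises vmexits on failure:

```c
#include <stdint.h>

#define INVALID_GFN (~0UL)
#define PAGE_SHIFT  12

/* Toy single-level tables standing in for the L2 guest page tables
 * and for the virtual EPT maintained on behalf of the L1 guest. */
static uint64_t l2_pagetable[4] = { 100, 101, INVALID_GFN, 103 };
static uint64_t virtual_ept[200];   /* 0 == not mapped */

/* Step 1: L2 guest virtual address -> L2 guest frame number. */
static uint64_t toy_gva_to_gfn(uint64_t gva)
{
    uint64_t vfn = gva >> PAGE_SHIFT;
    if (vfn >= 4 || l2_pagetable[vfn] == INVALID_GFN)
        return INVALID_GFN;          /* page fault stays within L2 */
    return l2_pagetable[vfn];
}

/* Step 2: L2 gfn -> L1 gfn through the (virtual) EPT. */
static uint64_t toy_l2gfn_to_l1gfn(uint64_t l2gfn)
{
    if (l2gfn >= 200 || virtual_ept[l2gfn] == 0)
        return INVALID_GFN;          /* would inject an EPT violation */
    return virtual_ept[l2gfn];
}

uint64_t toy_l2gva_to_l1gfn(uint64_t gva)
{
    uint64_t l2gfn = toy_gva_to_gfn(gva);
    if (l2gfn == INVALID_GFN)
        return INVALID_GFN;
    return toy_l2gfn_to_l1gfn(l2gfn);
}
```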

> Christoph
> 
>> +        rc = nept_translate_l2ga(v, gfn << 12, &page_order, 4,
>> +                                 &l1gfn, &p2m_acc,
>> +                                 &exit_qual, &exit_reason);
>> +        if ( rc == EPT_TRANSLATE_VIOLATION || rc == EPT_TRANSLATE_MISCONFIG )
>> +        {
>> +            nvmx->ept.exit_reason = exit_reason;
>> +            nvmx->ept.exit_qual = exit_qual;
>> +            vcpu_nestedhvm(current).nv_vmexit_pending = 1;
>> +        }
>> +        if ( rc == EPT_TRANSLATE_RETRY )
>> +            *pfec = PFEC_page_paged;
>> 
>> +        return l1gfn;
>> +    }
>>      return hostmode->gva_to_gfn(v, hostp2m, va, pfec);
>>  }
>>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 04:28:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 04:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDnuL-0001LJ-5j; Thu, 13 Feb 2014 04:28:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WDnuJ-0001LA-C2
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 04:28:15 +0000
Received: from [85.158.137.68:42953] by server-10.bemta-3.messagelabs.com id
	03/C3-07302-ED94CF25; Thu, 13 Feb 2014 04:28:14 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392265693!1492843!1
X-Originating-IP: [209.85.217.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11381 invoked from network); 13 Feb 2014 04:28:13 -0000
Received: from mail-lb0-f178.google.com (HELO mail-lb0-f178.google.com)
	(209.85.217.178)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 04:28:13 -0000
Received: by mail-lb0-f178.google.com with SMTP id u14so7749420lbd.37
	for <xen-devel@lists.xenproject.org>;
	Wed, 12 Feb 2014 20:28:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=UqtiS6wW0Kca32+GJZnZcYetUy5KW5QPDAOlss+cYr0=;
	b=HxMmvqYpnBYuIwZTYYPQ/SK4sAtWO7GRqsEnsZy0B1+K79MQo9j5jA44WYUoNl/7gy
	9LJ4LkZLV0yW25BG59L6J9Z9AvnYv4dl4dACVJeT5dYYeQ4TN4YM2WnXJXD7eIIkEBrK
	41ugLO7vkZp/o2WnTzpFqqiSbqyOT0GBac3M96SGiFm/7csDdA182m2roqVblrqQALjw
	JnXY+ygzD0ThZ7huhZHHadZhwrPdkIHk5+aFZ/E9aLz57D63W5eSXw9v+6a/Cq1+anAv
	D3bJZq646UTJWaBgHtcAFyOz0KI8hyJhn0IAFdltfLVnmcL+REKit7X2Ct5LfpgHlxrE
	+lrg==
X-Received: by 10.152.120.37 with SMTP id kz5mr30521922lab.30.1392265692921;
	Wed, 12 Feb 2014 20:28:12 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 12 Feb 2014 20:27:52 -0800 (PST)
In-Reply-To: <CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
	<1392203708.13563.50.camel@kazak.uk.xensource.com>
	<CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 12 Feb 2014 20:27:52 -0800
X-Google-Sender-Auth: 3VvDl6nH5h4CY1rXr7wLXWdIixw
Message-ID: <CAB=NE6Vf4KHyMGiGZuuvZ8uWhw51UXbhHPt5RxqZaJbLHOU71w@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
	random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, 2014 at 2:05 PM, Luis R. Rodriguez
<mcgrof@do-not-panic.com> wrote:
> We have to be careful for sure, I'll try to test all cases including
> kvm, but architecturally as I see it so far these things are simply
> exchanging data through their respective backend channels. I know the
> ipv6 interfaces are unused, and I'm going to dig further to see why at
> least one ipv4 interface is needed. I cannot fathom why either of
> these interfaces would be required. I'll do a bit more digging.
>
> The TAP interface requirements may be different, I haven't yet dug into that.

I have a working test patch that restricts xen-netback from getting any
IPv4 and IPv6 addresses, and disables multicast. With this in place the
xen-frontend still gets IPv4 and IPv6 addresses and multicast still
works. This was tested in a shared physical environment; I'll have to
test NAT next, and also see if we can enable this as an option for KVM's
TAP 'backend' interfaces.
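
(For reference, the kind of host-side restriction described above can be
approximated from dom0 userspace with standard tooling. This is only a sketch of
the idea, not the actual patch, and `vif1.0` is a hypothetical backend interface
name -- substitute the real vifX.Y:)

```shell
# Hypothetical xen-netback backend interface name (placeholder).
VIF=vif1.0

# Prevent the kernel from auto-assigning an IPv6 link-local address.
sysctl -w "net.ipv6.conf.${VIF}.disable_ipv6=1"

# Drop the multicast flag on the backend device.
ip link set dev "${VIF}" multicast off

# Remove any IPv4 addresses already assigned to the backend.
ip -4 addr flush dev "${VIF}"
```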

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 04:36:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 04:36:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDo1j-0001in-4D; Thu, 13 Feb 2014 04:35:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WDo1h-0001ii-Kf
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 04:35:53 +0000
Received: from [85.158.143.35:12584] by server-1.bemta-4.messagelabs.com id
	18/57-31661-9AB4CF25; Thu, 13 Feb 2014 04:35:53 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392266151!5289127!1
X-Originating-IP: [209.85.217.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16660 invoked from network); 13 Feb 2014 04:35:52 -0000
Received: from mail-lb0-f171.google.com (HELO mail-lb0-f171.google.com)
	(209.85.217.171)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 04:35:52 -0000
Received: by mail-lb0-f171.google.com with SMTP id c11so8004120lbj.30
	for <xen-devel@lists.xenproject.org>;
	Wed, 12 Feb 2014 20:35:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=+7uno245m9GuL4wTFFLIo5oXIHSs1NgWxjIC7yp6lw0=;
	b=MYMqKuIRyg7subASWqAvHIVJLzeZ3V4ci8hJtos1YqQm6hAhLWnC2RzMRAw9uYTlKk
	1xHVk548eLDl+uesrbtk20A34FOZDNjkuiXFz2cS03Aviv7XTFgQCb5GUH61Q86D1RGl
	Y4AUkIcwLWHz0M2s40QNEsCVI00SmyOoAXNTm19VTOGTHKAydwEWEuDPC+sESuwzh0oq
	q0MPS7SigeCBI70RghSPy3AdGkQnotVJBC3i0t+IsEkYCALPrNmZpGn1tRa8CW+hNoEh
	DSGICT+o57vYuR+USDOsDclU5K3FzSi7hXFEY/hdLC/V9Dkzorf5KIZG2pwKghx4yFaK
	8qNg==
X-Received: by 10.112.138.233 with SMTP id qt9mr13541254lbb.34.1392266151282; 
	Wed, 12 Feb 2014 20:35:51 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 12 Feb 2014 20:35:31 -0800 (PST)
In-Reply-To: <CAB=NE6Vf4KHyMGiGZuuvZ8uWhw51UXbhHPt5RxqZaJbLHOU71w@mail.gmail.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
	<1392203708.13563.50.camel@kazak.uk.xensource.com>
	<CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
	<CAB=NE6Vf4KHyMGiGZuuvZ8uWhw51UXbhHPt5RxqZaJbLHOU71w@mail.gmail.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 12 Feb 2014 20:35:31 -0800
X-Google-Sender-Auth: DE-lUQMCyNalPjJf3BidnR7jo08
Message-ID: <CAB=NE6WtKf_imDkcyLa_TDT4ajEeBZv5x5b4Vsd+nheEmGt5sA@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
	random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, 2014 at 8:27 PM, Luis R. Rodriguez
<mcgrof@do-not-panic.com> wrote:
> I have a test patch that now works that restricts xen-netback from
> getting any IPv4 and IPv6 addresses, and disables multicast. With this
> set in place the xen-frontend still gets IPv4 and IPv6 addresses and
> Multicast still works. This was tested under a shared physical
> environment, I'll have to test NAT next, and also see if we can enable
> this as an option for KVM for their TAP 'backend' interfaces.

Also, perhaps a silly question, as I haven't yet looked carefully into
the qemu TAP usage / requirements, but has anyone considered just
having a dma-buf agent for this in userspace? That'd remove any
redundant interfaces and let qemu do DMA directly.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 05:44:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 05:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDp5o-0003eD-1w; Thu, 13 Feb 2014 05:44:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDp5m-0003e5-55
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 05:44:10 +0000
Received: from [85.158.137.68:27425] by server-8.bemta-3.messagelabs.com id
	13/1D-16039-8AB5CF25; Thu, 13 Feb 2014 05:44:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392270246!1529300!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23859 invoked from network); 13 Feb 2014 05:44:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 05:44:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="102137368"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 05:44:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 00:44:04 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDp5h-000105-1F;
	Thu, 13 Feb 2014 05:44:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDp5g-0000b8-Dk;
	Thu, 13 Feb 2014 05:44:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24860-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 05:44:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24860: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24860 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24860/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc
baseline version:
 xen                  5819ec7bc0f86c9dff755d85df289332742c05c3

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=d883c179a74111a6804baf8cb8224235242a88fc
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable d883c179a74111a6804baf8cb8224235242a88fc
+ branch=xen-unstable
+ revision=d883c179a74111a6804baf8cb8224235242a88fc
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git d883c179a74111a6804baf8cb8224235242a88fc:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   5819ec7..d883c17  d883c179a74111a6804baf8cb8224235242a88fc -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 05:44:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 05:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDp5o-0003eD-1w; Thu, 13 Feb 2014 05:44:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDp5m-0003e5-55
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 05:44:10 +0000
Received: from [85.158.137.68:27425] by server-8.bemta-3.messagelabs.com id
	13/1D-16039-8AB5CF25; Thu, 13 Feb 2014 05:44:08 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392270246!1529300!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23859 invoked from network); 13 Feb 2014 05:44:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 05:44:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,836,1384300800"; d="scan'208";a="102137368"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 05:44:05 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 00:44:04 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDp5h-000105-1F;
	Thu, 13 Feb 2014 05:44:05 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDp5g-0000b8-Dk;
	Thu, 13 Feb 2014 05:44:04 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24860-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 05:44:04 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24860: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24860 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24860/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc
baseline version:
 xen                  5819ec7bc0f86c9dff755d85df289332742c05c3

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=d883c179a74111a6804baf8cb8224235242a88fc
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable d883c179a74111a6804baf8cb8224235242a88fc
+ branch=xen-unstable
+ revision=d883c179a74111a6804baf8cb8224235242a88fc
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git d883c179a74111a6804baf8cb8224235242a88fc:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   5819ec7..d883c17  d883c179a74111a6804baf8cb8224235242a88fc -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 08:18:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 08:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDrUP-0008Gw-GC; Thu, 13 Feb 2014 08:17:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDrUO-0008Gp-7j
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 08:17:44 +0000
Received: from [85.158.137.68:60848] by server-8.bemta-3.messagelabs.com id
	41/47-16039-7AF7CF25; Thu, 13 Feb 2014 08:17:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392279460!291056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2567 invoked from network); 13 Feb 2014 08:17:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 08:17:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,837,1384300800"; d="scan'208";a="100371582"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 08:17:20 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 03:17:20 -0500
Message-ID: <52FC7F8E.7040608@citrix.com>
Date: Thu, 13 Feb 2014 09:17:18 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Miguel Clara <miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<08b36658-42d4-4dbd-8637-179a14541fa2@email.android.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
In-Reply-To: <CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 02:29, Miguel Clara wrote:
> I tried the patch provided by roger, I get a different error now:
> 
> Parsing config from test.cfg
> libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
> /local/domain/0/backend/vbd/0/51712 not ready
> libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb:
> failed to attach local disk for bootloader execution
> libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
> unable to detach locally attached disk
> libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
> (re-)build domain: -1
> 
> 
> with -vvv
> 
> # xl -vvv create test.cfg
> Parsing config from test.cfg
> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x12548c0:
> create: how=(nil) callback=(nil) poller=0x1254980
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=xvda spec.backend=unknown
> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda,
> uses script=... assuming phy backend
> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> vdev=xvda, using backend phy
> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=(null) spec.backend=phy
> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null),
> uses script=... assuming phy backend
> libxl: debug: libxl.c:2605:libxl__device_disk_local_initiate_attach:
> trying to locally attach PHY device drbd-remus-test with script
> block-drbd
> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> vdev=xvdf spec.backend=phy
> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvdf,
> uses script=... assuming phy backend
> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
> register slotnum=3
> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x12548c0:
> inprogress: poller=0x1254980, flags=i
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
> wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
> epath=/local/domain/0/backend/vbd/0/51792/state
> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> /local/domain/0/backend/vbd/0/51792/state wanted state 2 still waiting
> state 1
> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
> wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
> epath=/local/domain/0/backend/vbd/0/51792/state
> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> /local/domain/0/backend/vbd/0/51792/state wanted state 2 ok
> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
> deregister slotnum=3
> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> w=0x124a300: deregister unregistered
> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
> script: /etc/xen/scripts/block-drbd add
> libxl: debug: libxl.c:2692:local_device_attach_cb: locally attached
> disk /dev/xvdf
> libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
> /local/domain/0/backend/vbd/0/51792 not ready

So the local attach seems to DTRT, but the device never gets to state 4
(connected). Does the block-drbd script work with guests that are not
using pygrub (i.e., extracting the kernel from the DomU and pointing the
config file directly at it)?

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 13 08:28:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 08:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDreu-0000Jd-5z; Thu, 13 Feb 2014 08:28:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WDres-0000ID-Tr
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 08:28:35 +0000
Received: from [85.158.137.68:10285] by server-15.bemta-3.messagelabs.com id
	44/71-19263-2328CF25; Thu, 13 Feb 2014 08:28:34 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392280112!1553568!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1352 invoked from network); 13 Feb 2014 08:28:33 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-7.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Feb 2014 08:28:33 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WDren-0002nz-Sw
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 00:28:30 -0800
Date: Thu, 13 Feb 2014 00:28:29 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1392280109769-5721270.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] what is xen design in multriprocessor host?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

As we know, Xen is a bare-metal hypervisor. I want to know how Xen is
designed when it has access to multiple processors. I would like to ask:

Is Xen's multiprocessor model the same as Linux's SMP model, where there
is only one kernel instance and all of the host's physical CPUs
concurrently execute that single instance?

Best regards 



--
View this message in context: http://xen.1045712.n5.nabble.com/what-is-xen-design-in-multriprocessor-host-tp5721270.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 08:28:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 08:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDreu-0000Jd-5z; Thu, 13 Feb 2014 08:28:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WDres-0000ID-Tr
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 08:28:35 +0000
Received: from [85.158.137.68:10285] by server-15.bemta-3.messagelabs.com id
	44/71-19263-2328CF25; Thu, 13 Feb 2014 08:28:34 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392280112!1553568!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1352 invoked from network); 13 Feb 2014 08:28:33 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-7.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	13 Feb 2014 08:28:33 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WDren-0002nz-Sw
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 00:28:30 -0800
Date: Thu, 13 Feb 2014 00:28:29 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1392280109769-5721270.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] what is xen design in multriprocessor host?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all

As we know, Xen is a bare-metal hypervisor. I want to know how Xen is
designed when it runs on a host with multiple processors. I would like to ask:

Is Xen's multiprocessor model the same as Linux's SMP model - there is only
one kernel instance, and all of the host's physical CPUs execute this one
instance of the kernel concurrently?

Best regards 



--
View this message in context: http://xen.1045712.n5.nabble.com/what-is-xen-design-in-multriprocessor-host-tp5721270.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 08:38:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 08:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDroD-0000ZS-Fg; Thu, 13 Feb 2014 08:38:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDroC-0000ZN-Ar
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 08:38:12 +0000
Received: from [85.158.143.35:31920] by server-2.bemta-4.messagelabs.com id
	CA/30-10891-2748CF25; Thu, 13 Feb 2014 08:38:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392280689!5335341!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2084 invoked from network); 13 Feb 2014 08:38:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 08:38:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 08:38:08 +0000
Message-Id: <52FC927D020000780011BF0A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 08:38:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 00:26, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  
>      *val = 0;
>  
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    /* Allow only first 3 MC banks into switch() */
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>      {
>      case MSR_IA32_MC0_CTL:
>          /* stick all 1's to MCi_CTL */

I'm confused: you've now added a comment as if the mask included
bit 4, which it doesn't. What am I missing?

Also, please get used to mentioning (commonly at the bottom of the
commit message, after a --- separator) what changed compared to the
previous iteration.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 08:58:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 08:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDs7O-0001Ej-9Y; Thu, 13 Feb 2014 08:58:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDs7M-0001Ee-HN
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 08:58:00 +0000
Received: from [193.109.254.147:26233] by server-6.bemta-14.messagelabs.com id
	E6/9B-03396-7198CF25; Thu, 13 Feb 2014 08:57:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392281878!4027840!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28144 invoked from network); 13 Feb 2014 08:57:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 08:57:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 08:57:58 +0000
Message-Id: <52FC9726020000780011BF24@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 08:57:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part497AEA06.1__="
Cc: xiantao.zhang@intel.com
Subject: [Xen-devel] [PATCH] IOMMU: generalize and correct softirq
 processing during Dom0 device setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part497AEA06.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

c/s 21039:95f5a4ce8f24 ("VT-d: reduce default verbosity") having put a
call to process_pending_softirqs() in VT-d's domain_context_mapping()
was wrong in two ways: For one we shouldn't be doing this when setting
up a device during DomU assignment. And then - I didn't check whether
that was the case already back then - we shouldn't call that function
with the pcidevs_lock (or in fact any spin lock) held.

Move the "preemption" into generic code, at once dealing with further
actual issues (too much output elsewhere - particularly on systems with
very many host-bridge-like devices - has been observed to still cause
the watchdog to trigger when enabled) and potential ones (other IOMMU
code may also end up being too verbose).

Do the "preemption" once per device actually being set up when in
verbose mode, and once per bus otherwise.

Note that dropping pcidevs_lock around the process_pending_softirqs()
invocation is specifically not a problem here: we're in an __init
function and aren't racing with potential additions/removals of PCI
devices. Not acquiring the lock in setup_dom0_pci_devices(), on the
other hand, is not an option, as there are too many places that assert
that the lock is held.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -27,6 +27,7 @@
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/radix-tree.h>
+#include <xen/softirq.h>
 #include <xen/tasklet.h>
 #include <xsm/xsm.h>
 #include <asm/msi.h>
@@ -922,6 +923,20 @@ static int __init _setup_dom0_pci_device
                 printk(XENLOG_WARNING "Dom%d owning %04x:%02x:%02x.%u?\n",=

                        pdev->domain->domain_id, pseg->nr, bus,
                        PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+            if ( iommu_verbose )
+            {
+                spin_unlock(&pcidevs_lock);
+                process_pending_softirqs();
+                spin_lock(&pcidevs_lock);
+            }
+        }
+
+        if ( !iommu_verbose )
+        {
+            spin_unlock(&pcidevs_lock);
+            process_pending_softirqs();
+            spin_lock(&pcidevs_lock);
         }
     }
=20
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -31,7 +31,6 @@
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/keyhandler.h>
-#include <xen/softirq.h>
 #include <asm/msi.h>
 #include <asm/irq.h>
 #include <asm/hvm/vmx/vmx.h>
@@ -1484,9 +1483,6 @@ static int domain_context_mapping(
         break;
     }
=20
-    if ( iommu_verbose )
-        process_pending_softirqs();
-
     return ret;
 }
=20




--=__Part497AEA06.1__=
Content-Type: text/plain; name="IOMMU-dom0-setup-preemption.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="IOMMU-dom0-setup-preemption.patch"

IOMMU: generalize and correct softirq processing during Dom0 device =
setup=0A=0Ac/s 21039:95f5a4ce8f24 ("VT-d: reduce default verbosity") =
having put a=0Acall to process_pending_softirqs() in VT-d's domain_context_=
mapping()=0Awas wrong in two ways: For one we shouldn't be doing this when =
setting=0Aup a device during DomU assignment. And then - I didn't check =
whether=0Athat was the case already back then - we shouldn't call that =
function=0Awith the pcidevs_lock (or in fact any spin lock) held.=0A=0AMove=
 the "preemption" into generic code, at once dealing with further=0Aactual =
(too much output elsewhere - particularly on systems with very=0Amany host =
bridge like devices - having been observed to still cause the=0Awatchdog =
to trigger when enabled) and potential (other IOMMU code may=0Aalso end up =
being too verbose) issues.=0A=0ADo the "preemption" once per device =
actually being set up when in=0Averbose mode, and once per bus otherwise.=
=0A=0ANote that dropping pcidevs_lock around the process_pending_softirqs()=
=0Ainvocation is specifically not a problem here: We're in an __init=0Afunc=
tion and aren't racing with potential additions/removals of PCI=0Adevices. =
Not acquiring the lock in setup_dom0_pci_devices() otoh is not=0Aan =
option, as there are too many places that assert the lock being=0Aheld.=0A=
=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/drivers/pa=
ssthrough/pci.c=0A+++ b/xen/drivers/passthrough/pci.c=0A@@ -27,6 +27,7 =
@@=0A #include <xen/delay.h>=0A #include <xen/keyhandler.h>=0A #include =
<xen/radix-tree.h>=0A+#include <xen/softirq.h>=0A #include <xen/tasklet.h>=
=0A #include <xsm/xsm.h>=0A #include <asm/msi.h>=0A@@ -922,6 +923,20 @@ =
static int __init _setup_dom0_pci_device=0A                 printk(XENLOG_W=
ARNING "Dom%d owning %04x:%02x:%02x.%u?\n",=0A                        =
pdev->domain->domain_id, pseg->nr, bus,=0A                        =
PCI_SLOT(devfn), PCI_FUNC(devfn));=0A+=0A+            if ( iommu_verbose =
)=0A+            {=0A+                spin_unlock(&pcidevs_lock);=0A+      =
          process_pending_softirqs();=0A+                spin_lock(&pcidevs=
_lock);=0A+            }=0A+        }=0A+=0A+        if ( !iommu_verbose =
)=0A+        {=0A+            spin_unlock(&pcidevs_lock);=0A+            =
process_pending_softirqs();=0A+            spin_lock(&pcidevs_lock);=0A    =
     }=0A     }=0A =0A--- a/xen/drivers/passthrough/vtd/iommu.c=0A+++ =
b/xen/drivers/passthrough/vtd/iommu.c=0A@@ -31,7 +31,6 @@=0A #include =
<xen/pci.h>=0A #include <xen/pci_regs.h>=0A #include <xen/keyhandler.h>=0A-=
#include <xen/softirq.h>=0A #include <asm/msi.h>=0A #include <asm/irq.h>=0A=
 #include <asm/hvm/vmx/vmx.h>=0A@@ -1484,9 +1483,6 @@ static int domain_con=
text_mapping(=0A         break;=0A     }=0A =0A-    if ( iommu_verbose =
)=0A-        process_pending_softirqs();=0A-=0A     return ret;=0A }=0A =0A
--=__Part497AEA06.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part497AEA06.1__=--


From xen-devel-bounces@lists.xen.org Thu Feb 13 09:08:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 09:08:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDsHQ-0001eU-Ok; Thu, 13 Feb 2014 09:08:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDsHP-0001eP-Jk
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 09:08:23 +0000
Received: from [85.158.139.211:22996] by server-13.bemta-5.messagelabs.com id
	CA/0E-18801-68B8CF25; Thu, 13 Feb 2014 09:08:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392282500!3602661!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25737 invoked from network); 13 Feb 2014 09:08:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 09:08:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,837,1384300800"; d="scan'208";a="102166270"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 09:08:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 04:08:20 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WDsHL-00055y-QN;
	Thu, 13 Feb 2014 09:08:19 +0000
Message-ID: <1392282499.22033.48.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xennn <openbg@abv.bg>
Date: Thu, 13 Feb 2014 09:08:19 +0000
In-Reply-To: <1392280109769-5721270.post@n5.nabble.com>
References: <1392280109769-5721270.post@n5.nabble.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] what is xen design in multriprocessor host?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 00:28 -0800, xennn wrote:
> Hi all
> 
> As we know, Xen is a bare-metal hypervisor. I want to know how Xen is
> designed when it runs on a host with multiple processors. I would like to ask:
> 
> Is Xen's multiprocessor model the same as Linux's SMP model - there is only
> one kernel instance, and all of the host's physical CPUs execute this one
> instance of the kernel concurrently?

Yes.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 09:19:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 09:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDsS4-0002HJ-9D; Thu, 13 Feb 2014 09:19:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDsS2-0002HE-Cl
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 09:19:22 +0000
Received: from [85.158.143.35:6851] by server-1.bemta-4.messagelabs.com id
	3C/73-31661-91E8CF25; Thu, 13 Feb 2014 09:19:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392283159!5346666!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3292 invoked from network); 13 Feb 2014 09:19:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 09:19:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,837,1384300800"; d="scan'208";a="100382103"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 09:19:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 04:19:18 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WDsRy-0005EU-IO;
	Thu, 13 Feb 2014 09:19:18 +0000
Message-ID: <1392283158.22033.49.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 09:19:18 +0000
In-Reply-To: <21243.35619.819765.162321@mariner.uk.xensource.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<1392215531.13563.79.camel@kazak.uk.xensource.com>
	<21243.35619.819765.162321@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 14:54 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] xl: suppress suspend/resume functions on platforms which do not support it."):
> > On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
> > > ARM does not (currently) support migration, so stop offering tasty looking
> > > treats like "xl migrate".
> 
> > > Other than the additions of the #define/#ifdef there is a tiny bit of code
> > > motion ("dump-core" in the command list and core_dump_domain in the
> > > implementations) which serves to put ifdeffable bits next to each other.
> 
> I'm not a huge fan of #ifdef but this is tolerable, I think.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> I think this should go into 4.4.  It is essential that we start
> advertising lack-of-resume in 4.4 as otherwise in 4.5 we'll have to
> invent a new HAVE_HAVE_NO_SUSPEND_RESUME which tells you whether the
> lack of HAVE_NO_SUSPEND_RESUME means that you can definitely
> suspend/resume.

George has release-acked it, so I have pushed it.

I'll coordinate with you regarding the osstest push once this passes the
xen gate.

Ian.
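The #ifdef technique discussed in the patch thread, compiling command-table entries out on platforms that cannot support them, can be sketched roughly as follows. Names here are invented for illustration and are not taken from the actual xl source:

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Sketch of the approach with hypothetical names (not the real xl command
 * table): when a platform defines the guard macro, the unsupported entries
 * are compiled out entirely, so the command is neither listed nor runnable
 * instead of failing at runtime. */

struct cmd {
    const char *name;
    int (*fn)(void);
};

static int do_list(void)    { puts("list");    return 0; }
#ifndef NO_SUSPEND_RESUME                   /* hypothetical stand-in guard */
static int do_migrate(void) { puts("migrate"); return 0; }
#endif

static const struct cmd cmds[] = {
    { "list", do_list },
#ifndef NO_SUSPEND_RESUME
    { "migrate", do_migrate },              /* absent where unsupported */
#endif
};

const struct cmd *find_cmd(const char *name)
{
    for (size_t i = 0; i < sizeof(cmds) / sizeof(cmds[0]); i++)
        if (strcmp(cmds[i].name, name) == 0)
            return &cmds[i];
    return NULL;                            /* unknown or compiled-out */
}
```

Grouping the guarded entries next to each other, as the patch description mentions, keeps each #ifndef block contiguous rather than scattering guards through the table.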



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 09:48:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 09:48:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDstq-0002zy-BI; Thu, 13 Feb 2014 09:48:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDsto-0002zt-KI
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 09:48:04 +0000
Received: from [85.158.137.68:50845] by server-3.bemta-3.messagelabs.com id
	1A/DC-14520-3D49CF25; Thu, 13 Feb 2014 09:48:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392284882!1584288!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24889 invoked from network); 13 Feb 2014 09:48:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 09:48:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 09:48:02 +0000
Message-Id: <52FCA2E0020000780011BF52@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 09:48:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
In-Reply-To: <1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part92A131C0.1__="
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part92A131C0.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> This test is already performed a couple of lines above.

Except that it's the wrong code you remove:

> @@ -639,11 +639,7 @@ static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8 func, u8 bir, int vf)
>          if ( vf < 0 || (vf && vf % stride) )
>              return 0;
>          if ( stride )
> -        {
> -            if ( vf % stride )
> -                return 0;
>              vf /= stride;
> -        }

Note how this second check carefully avoids a division by zero.
From what I can tell I think that I simply forgot to remove the
right side of the earlier || after having converted it to the safer
variant inside the if(). Hence I think we instead want:

x86/MSI: don't risk division by zero

The check in question is redundant with the one in the immediately
following if(), where dividing by zero gets carefully avoided.

Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
             return 0;
         base = pos + PCI_SRIOV_BAR;
         vf -= PCI_BDF(bus, slot, func) + offset;
-        if ( vf < 0 || (vf && vf % stride) )
+        if ( vf < 0 )
             return 0;
         if ( stride )
         {
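The hazard being fixed here can be reproduced in isolation. Below is a minimal sketch of the corrected guard ordering, using a hypothetical helper name rather than the real Xen function:

```c
#include <assert.h>

/* Distilled sketch of the VF scaling logic under discussion (hypothetical
 * helper, not the actual read_pci_mem_bar() code). Returns the scaled VF
 * index, or -1 for an invalid one.
 *
 * The redundant check being removed was:
 *     if ( vf < 0 || (vf && vf % stride) )
 * where "vf % stride" is evaluated even when stride == 0, i.e. a division
 * by zero. The safe shape, as in the corrected patch, only performs the
 * modulus and division inside an "if (stride)" guard. */
int scale_vf(int vf, int stride)
{
    if (vf < 0)
        return -1;                /* invalid VF; no arithmetic on stride yet */
    if (stride) {
        if (vf % stride)          /* stride is known non-zero here */
            return -1;
        vf /= stride;
    }
    return vf;                    /* stride == 0: VF used unscaled */
}
```

The ordering is the entire fix: all uses of stride as a divisor are dominated by the non-zero test.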




--=__Part92A131C0.1__=
Content-Type: text/plain; name="x86-MSI-VF-divide-by-zero.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-MSI-VF-divide-by-zero.patch"

x86/MSI: don't risk division by zero

The check in question is redundant with the one in the immediately
following if(), where dividing by zero gets carefully avoided.

Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
             return 0;
         base = pos + PCI_SRIOV_BAR;
         vf -= PCI_BDF(bus, slot, func) + offset;
-        if ( vf < 0 || (vf && vf % stride) )
+        if ( vf < 0 )
             return 0;
         if ( stride )
         {
--=__Part92A131C0.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part92A131C0.1__=--


From xen-devel-bounces@lists.xen.org Thu Feb 13 09:55:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 09:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDt0Q-0003N5-CW; Thu, 13 Feb 2014 09:54:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDt0P-0003N0-60
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 09:54:53 +0000
Received: from [193.109.254.147:10993] by server-1.bemta-14.messagelabs.com id
	D5/F2-15438-C669CF25; Thu, 13 Feb 2014 09:54:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392285290!4008858!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32413 invoked from network); 13 Feb 2014 09:54:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 09:54:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,837,1384300800"; d="scan'208";a="102175030"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 09:54:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 04:54:49 -0500
Message-ID: <1392285288.27366.4.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Thu, 13 Feb 2014 09:54:48 +0000
In-Reply-To: <1392225757.13563.101.camel@kazak.uk.xensource.com>
References: <52F26C40.2060901@scytl.com> <52F92B4A.3010805@tycho.nsa.gov>
	<1392111423.26657.55.camel@kazak.uk.xensource.com>
	<52FA40DD.6060106@tycho.nsa.gov>
	<1392225757.13563.101.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org,
	Jordi Cucurull Juan <jordi.cucurull@scytl.com>
Subject: Re: [Xen-devel] [PATCH] docs/vtpm: fix auto-shutdown reference
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 17:22 +0000, Ian Campbell wrote:
> On Tue, 2014-02-11 at 10:25 -0500, Daniel De Graaf wrote:
> > 
> > Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> I'm holding off committing while preparations are made for another rc,
> but once that is out the way I see no reason to hold off on this.

I've applied this. It looks like you have some whitespace differences in
your baseline (everything was indented one column, some other patch in
your queue perhaps?) which I resolved as I went.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 09:57:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 09:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDt2k-0003TF-2C; Thu, 13 Feb 2014 09:57:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDt2i-0003TA-82
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 09:57:16 +0000
Received: from [85.158.143.35:52864] by server-2.bemta-4.messagelabs.com id
	B2/64-10891-BF69CF25; Thu, 13 Feb 2014 09:57:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392285435!5346620!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2250 invoked from network); 13 Feb 2014 09:57:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 09:57:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 09:57:14 +0000
Message-Id: <52FCA509020000780011BF60@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 09:57:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <20140212185352.4c920a54@mantra.us.oracle.com>
In-Reply-To: <20140212185352.4c920a54@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Error ignored in xc_map_foreign_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 03:53, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> It appears that xc_map_foreign_pages() handles the return value incorrectly:
> 
>     res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>     if (res) {
>         for (i = 0; i < num; i++) {
>             if (err[i]) {
>                 errno = -err[i];
>                 munmap(res, num * PAGE_SIZE);
>                 res = NULL;
>                 break;
>             }
>         }
>     }
> 
> The add_to_physmap batched interface will actually store errors
> in the err array, and return 0 unless EFAULT or something like that.
> See xenmem_add_to_physmap_batch(). In the case I'm looking at, xentrace
> calls here to map a page, which fails, but the return is 0 as the error is
> successfully copied by Xen. But the error is missed above since res is 0.
> xentrace then does something again, and that causes a Xen crash.
> 
> It appears the fix could be just removing the check for res above...
> 
>     res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>     for (i = 0; i < num; i++) {
>         if (err[i]) {
>          .....

Definitely not. "res" is a "void *", so it being NULL indicates that
no mapping was established at all. Only if it's non-NULL does it
make sense to inspect err[] (and to call munmap(res, ...)).
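To make the distinction concrete, here is a minimal, self-contained sketch of that handling; the stub stands in for the real xc_map_foreign_bulk() call (so this compiles on its own), and names like map_pages_checked() are illustrative, not libxc API:

```c
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Stub standing in for xc_map_foreign_bulk(): returns a non-NULL
 * "mapping" and reports per-page errors through err[].  Here page 1
 * is made to "fail" so the err[] path is exercised. */
static void *stub_map_foreign_bulk(int *err, int num)
{
    for (int i = 0; i < num; i++)
        err[i] = (i == 1) ? -EINVAL : 0;
    return malloc((size_t)num * PAGE_SIZE);
}

/* The point above: a NULL res means no mapping was established at
 * all, so err[] is only meaningful (and unmapping only valid) when
 * res is non-NULL. */
static void *map_pages_checked(int *err, int num)
{
    void *res = stub_map_foreign_bulk(err, num);

    if (res == NULL)
        return NULL;                /* nothing mapped, nothing to undo */

    for (int i = 0; i < num; i++) {
        if (err[i]) {
            errno = -err[i];
            free(res);              /* munmap(res, num * PAGE_SIZE) in real code */
            return NULL;
        }
    }
    return res;
}
```

With the stub's injected per-page failure, the caller gets NULL back and errno carries the first per-page error, while a NULL bulk-map result is still reported without touching err[].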

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 10:00:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 10:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDt5y-0003uI-PP; Thu, 13 Feb 2014 10:00:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDt5w-0003u6-M9
	for Xen-devel@lists.xensource.com; Thu, 13 Feb 2014 10:00:36 +0000
Received: from [85.158.143.35:57026] by server-1.bemta-4.messagelabs.com id
	06/F3-31661-4C79CF25; Thu, 13 Feb 2014 10:00:36 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392285634!5349978!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11488 invoked from network); 13 Feb 2014 10:00:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 10:00:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102176206"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 10:00:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 05:00:33 -0500
Message-ID: <1392285632.27366.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Date: Thu, 13 Feb 2014 10:00:32 +0000
In-Reply-To: <20140212185352.4c920a54@mantra.us.oracle.com>
References: <20140212185352.4c920a54@mantra.us.oracle.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Julien Grall <julien.grall@linaro.org>, Ian
	Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Error ignored in xc_map_foreign_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 18:53 -0800, Mukesh Rathor wrote:
> It appears that xc_map_foreign_pages() handles the return value incorrectly:

libxc is a complete mess in this regard (error handling) generally.

IIRC there is some subtlety on the Linux privcmd side wrt partial
failure of batches, particularly once paging/sharing gets involved and
you want to retry the missing subset, which might have something to do
with this. David and Andres might know if that is relevant here.
 
>     res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>     if (res) {
>         for (i = 0; i < num; i++) {
>             if (err[i]) {
>                 errno = -err[i];
>                 munmap(res, num * PAGE_SIZE);
>                 res = NULL;
>                 break;
>             }
>         }
>     }
> 
> The add_to_physmap batched interface will actually store errors
> in the err array, and return 0 unless EFAULT or something like that.
> See xenmem_add_to_physmap_batch(). In the case I'm looking at, xentrace
> calls here to map a page, which fails, but the return is 0 as the error is
> successfully copied by Xen. But the error is missed above since res is 0.
> xentrace then does something again, and that causes a Xen crash.
> 
> It appears the fix could be just removing the check for res above...
> 
>     res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>     for (i = 0; i < num; i++) {
>         if (err[i]) {
>          .....
> 
> What do you guys think?
> 
> thanks,
> Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 10:08:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 10:08:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDtDS-00048O-7P; Thu, 13 Feb 2014 10:08:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDtDQ-00048J-RH
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 10:08:21 +0000
Received: from [193.109.254.147:38414] by server-3.bemta-14.messagelabs.com id
	1B/ED-00432-4999CF25; Thu, 13 Feb 2014 10:08:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392286099!4039133!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32390 invoked from network); 13 Feb 2014 10:08:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 10:08:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 10:08:19 +0000
Message-Id: <52FCA7A0020000780011BF88@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 10:08:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <20140212195005.GB29910@phenom.dumpdata.com>
In-Reply-To: <20140212195005.GB29910@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ufimtseva@gmail.com, andrew.thomas@oracle.com, george.dunlap@eu.citrix.com,
	andrew.cooper3@citrix.com, jun.nakajima@intel.com, kurt.hackel@oracle.com,
	xen-devel <xen-devel@lists.xenproject.org>, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] _PXM, NUMA, and all that goodnesss
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 12.02.14 at 20:50, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> From a Linux kernel perspective we do seem to 'pipe' said information
> from the ACPI DSDT (drivers/xen/pci.c):
> 
>     unsigned long long pxm;
> 
>     status = acpi_evaluate_integer(handle, "_PXM",
>                                    NULL, &pxm);
>     if (ACPI_SUCCESS(status)) {
>         add.optarr[0] = pxm;
>         add.flags |= XEN_PCI_DEV_PXM;
> 
> Which is neat, except that Xen ignores that flag altogether. I Googled
> a bit but still did not find anything relevant - I thought there were
> some presentations from past Xen Summits referring to it
> (I can't find them now :-()

When adding that interface it seemed pretty clear to me that we
would want/need this information sooner or later. I'm unaware of
any (prototype or better) code utilizing it.

> Anyhow, what I am wondering is whether there are some prototypes out
> there from the past that utilize this. And if we were to use this, how
> can we expose it to 'libxl' or any other tools to say:
> 
> "Hey! You might want to use this other PCI device assigned
> to pciback which is on the same node". Some of form of
> 'numa-pci' affinity.

Right, a hint like this might be desirable. But this shouldn't be
enforced.

> Interestingly enough one can also read this from SysFS:
> /sys/bus/pci/devices/<BDF>/numa_node,local_cpu,local_cpulist.
> 
> Except that we don't expose the NUMA topology to the initial
> domain so the 'numa_node' is all -1. And the local_cpu depends
> on seeing _all_ of the CPUs - and of course it assumes that
> vCPU == pCPU.
> 
> Anyhow, if this was "tweaked" such that the initial domain
> was seeing the hardware NUMA topology and parsing it (via
> Elena's patches) we could potentially have at least the
> 'numa_node' information present and figure out if a guest
> is using a PCIe device from the right socket.
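As an illustrative aside (the helper name below is mine, not a kernel or libxc interface), reading one of those sysfs attributes boils down to:

```c
#include <stdio.h>

/* Read e.g. /sys/bus/pci/devices/<BDF>/numa_node.  Returns the node
 * number, or -1 if the file is missing or unparseable -- the same -1
 * a dom0 without host NUMA information ends up reporting. */
static int read_numa_node(const char *path)
{
    FILE *f = fopen(path, "r");
    int node = -1;

    if (f) {
        if (fscanf(f, "%d", &node) != 1)
            node = -1;
        fclose(f);
    }
    return node;
}
```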

I think you're mixing up things here. Afaict Elena's patches
are to introduce _virtual_ NUMA, i.e. it would specifically _not_
expose the host NUMA properties to the Dom0 kernel. Don't
we have interfaces to expose the host NUMA information to
the tools already?

> So what I am wondering is:
>  1) Were there any plans for the XEN_PCI_DEV_PXM in the
>     hypervisor? Were there some prototypes for exporting the
>     PCI device BDF and NUMA information out.

As said above: Intentions (I wouldn't call it plans) yes, prototypes
no.

>  2) Would it be better to just look at making the initial domain
>    be able to figure out the NUMA topology and assign the
>    correct 'numa_node' in the PCI fields?

As said above, I don't think this should be exposed to and
handled in Dom0's kernel. It's the tool stack that has the overall
view here.

>  3) If either option is used, would taking that information into
>    advisement when launching a guest with either 'cpus' or 'numa-affinity'
>    or 'pci' and informing the user of a better choice be good?
>    Or would it be better if there was some diagnostic tool to at
>    least tell the user whether their PCI device assignment made
>    sense or not? Or perhaps program the 'numa-affinity' based on
>    the PCIe socket location?

I think issuing hint messages would be nice. Automatic placement
could clearly also take assigned devices' localities into consideration,
i.e. one could expect assigned devices to result in the respective
nodes to be picked in preference (as long as CPU and memory
availability allow doing so).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 10:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 10:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDtFt-0004Sg-U3; Thu, 13 Feb 2014 10:10:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDtFr-0004Sa-SB
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 10:10:52 +0000
Received: from [85.158.139.211:22047] by server-2.bemta-5.messagelabs.com id
	FF/76-23037-B2A9CF25; Thu, 13 Feb 2014 10:10:51 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392286248!3644206!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3703 invoked from network); 13 Feb 2014 10:10:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 10:10:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102178641"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 10:10:47 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 05:10:47 -0500
Message-ID: <52FC9A24.2020703@citrix.com>
Date: Thu, 13 Feb 2014 11:10:44 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Miguel Clara <miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>	<1391528110.6497.32.camel@kazak.uk.xensource.com>	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>	<1391681000.23098.29.camel@kazak.uk.xensource.com>	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>	<1392042223.26657.7.camel@kazak.uk.xensource.com>	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>	<1392198993.13563.13.camel@kazak.uk.xensource.com>	<52FB50F0.70106@citrix.com>	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com>
In-Reply-To: <52FC7F8E.7040608@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 09:17, Roger Pau Monné wrote:
> On 13/02/14 02:29, Miguel Clara wrote:
>> I tried the patch provided by roger, I get a different error now:
>>
>> Parsing config from test.cfg
>> libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
>> /local/domain/0/backend/vbd/0/51712 not ready
>> libxl: error: libxl_bootloader.c:405:bootloader_disk_attached_cb:
>> failed to attach local disk for bootloader execution
>> libxl: error: libxl_bootloader.c:276:bootloader_local_detached_cb:
>> unable to detach locally attached disk
>> libxl: error: libxl_create.c:900:domcreate_rebuild_done: cannot
>> (re-)build domain: -1
>>
>>
>> with -vvv
>>
>> # xl -vvv create test.cfg
>> Parsing config from test.cfg
>> libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x12548c0:
>> create: how=(nil) callback=(nil) poller=0x1254980
>> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
>> vdev=xvda spec.backend=unknown
>> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvda,
>> uses script=... assuming phy backend
>> libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
>> vdev=xvda, using backend phy
>> libxl: debug: libxl_create.c:675:initiate_domain_create: running bootloader
>> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
>> vdev=(null) spec.backend=phy
>> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=(null),
>> uses script=... assuming phy backend
>> libxl: debug: libxl.c:2605:libxl__device_disk_local_initiate_attach:
>> trying to locally attach PHY device drbd-remus-test with script
>> block-drbd
>> libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
>> vdev=xvdf spec.backend=phy
>> libxl: debug: libxl_device.c:188:disk_try_backend: Disk vdev=xvdf,
>> uses script=... assuming phy backend
>> libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
>> w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
>> register slotnum=3
>> libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x12548c0:
>> inprogress: poller=0x1254980, flags=i
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
>> wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
>> epath=/local/domain/0/backend/vbd/0/51792/state
>> libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
>> /local/domain/0/backend/vbd/0/51792/state wanted state 2 still waiting
>> state 1
>> libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x124a300
>> wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0: event
>> epath=/local/domain/0/backend/vbd/0/51792/state
>> libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
>> /local/domain/0/backend/vbd/0/51792/state wanted state 2 ok
>> libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
>> w=0x124a300 wpath=/local/domain/0/backend/vbd/0/51792/state token=3/0:
>> deregister slotnum=3
>> libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
>> w=0x124a300: deregister unregistered
>> libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug
>> script: /etc/xen/scripts/block-drbd add
>> libxl: debug: libxl.c:2692:local_device_attach_cb: locally attached
>> disk /dev/xvdf
>> libxl: error: libxl_device.c:1127:libxl__wait_for_backend: Backend
>> /local/domain/0/backend/vbd/0/51792 not ready
>

> So the local attach seems to DTRT, but the device never gets to state 4
> (connected). Does the block-drbd script work with guests that are not
> using pygrub? (extract the kernel from the DomU and use it directly on
> the config file).

I've been looking into this, and found out that the block-drbd script
is outdated: it expects the type node in xenstore to be "drbd" (which
probably was what xend was setting), but libxl sets it to "phy", so the
following patch to block-drbd is needed. If it solves your problems I
will upstream it to drbd for inclusion in new releases.

(The patch is against git://git.drbd.org/drbd-9.0.git)

---
commit 4a4806421d81b30762ed6a0b111e491b77e78a08
Author: Roger Pau Monne <roger.pau@citrix.com>
Date:   Thu Feb 13 11:01:48 2014 +0100

    block-drbd: type is "phy" for drbd backends

    The type written to xenstore by libxl when attaching a drbd backend is
    "phy", not "drbd", so handle this case also.

    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

diff --git a/scripts/block-drbd b/scripts/block-drbd
index 5563ccb..975802b 100755
--- a/scripts/block-drbd
+++ b/scripts/block-drbd
@@ -250,7 +250,7 @@ case "$command" in
     fi
 
     case $t in
-      drbd)
+      drbd|phy)
        drbd_resource=$p
        drbd_role="$(drbdadm role $drbd_resource)"
        drbd_lrole="${drbd_role%%/*}"
@@ -278,7 +278,7 @@ case "$command" in
 
   remove)
     case $t in
-      drbd)
+      drbd|phy)
        p=$(xenstore_read "$XENBUS_PATH/params")
        drbd_resource=$p
        drbd_role="$(drbdadm role $drbd_resource)"

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 10:19:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 10:19:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDtO0-0004uA-5O; Thu, 13 Feb 2014 10:19:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDtNy-0004u5-DE
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 10:19:14 +0000
Received: from [85.158.137.68:65039] by server-9.bemta-3.messagelabs.com id
	CB/1D-10184-12C9CF25; Thu, 13 Feb 2014 10:19:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392286751!271743!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8182 invoked from network); 13 Feb 2014 10:19:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 10:19:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102180386"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 10:19:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 05:19:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDtNt-0002QO-SQ;
	Thu, 13 Feb 2014 10:19:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDtNt-0000d3-PB;
	Thu, 13 Feb 2014 10:19:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24861-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 10:19:09 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24861: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24861 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24861/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10       fail   like 24806

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
baseline version:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
+ branch=xen-4.2-testing
+ revision=eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git eba76e4112339b2bb9dcac66835c04fd5ba7b5d2:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   ae5d69f..eba76e4  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
baseline version:
 xen                  ae5d69f1c6d6cf5960e72d79ac0840eec1d75856

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
+ branch=xen-4.2-testing
+ revision=eba76e4112339b2bb9dcac66835c04fd5ba7b5d2
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git eba76e4112339b2bb9dcac66835c04fd5ba7b5d2:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   ae5d69f..eba76e4  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 10:30:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 10:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDtYN-0005KV-9v; Thu, 13 Feb 2014 10:29:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>)
	id 1WDtYL-0005KJ-8j; Thu, 13 Feb 2014 10:29:57 +0000
Received: from [193.109.254.147:42793] by server-13.bemta-14.messagelabs.com
	id 41/13-01226-4AE9CF25; Thu, 13 Feb 2014 10:29:56 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392287385!4020805!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDM3NDMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19765 invoked from network); 13 Feb 2014 10:29:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 10:29:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; 
	d="asc'?scan'208";a="100396634"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 10:29:45 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 05:29:44 -0500
Message-ID: <1392287382.32038.34.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Date: Thu, 13 Feb 2014 11:29:42 +0100
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>, "'Jan
	Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4298551883746762847=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4298551883746762847==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-P9gfs2oIxqIMALkrLAK7"

--=-P9gfs2oIxqIMALkrLAK7
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

[Adding the Xen publicity mailing list]

On Thu, 2014-02-13 at 03:55 +0000, Zhang, Yang Z wrote:
> Hi George,
> 
> I have updated the Latest Xen nested status in the Xen wiki page, please have a look.
>
Hey Yang,

This is something really useful to have on the wiki, thanks for doing
it.

> Although it is hard to say nested is good supported or product quality, it is ready to let people to know nested basically is supported by Xen.
>
Right. With that in mind, I think this topic would be a great one for a
blog post on the Xen Project's blog! The content you have on the wiki is
mostly fine as the core content of the blog post too; we'd just need a
couple more "colloquial" glue paragraphs here and there, both about
nested virt in general and about Xen supporting it, for instance...

> Especially, for Xen on Xen case, I didn't see any issue with it for more than half of year. Besides, I am always using nested Xen to debug Xen booting issue which doesn't need to reboot my real box. And it really helps me a lot.
>
...something like this line above... It's actually quite a good example
of what I meant! :-)

So, how do you feel about this?

> So, if possible, I hope we can add nested support into Xen 4.4 release to let people know current status.
> 
Sorry for my ignorance on the subject, but what is it that is missing to
make the above (and the content of the wiki) true for 4.4?

Anyway, whatever the answer is, I don't think it will be any less
worthwhile to have a blog post about nested virt... At most it will
affect when we want it. If it's going to be a 4.4 feature, I think the
sooner the better. If not, we can wait a little bit.

Let me know your thoughts on the idea.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-P9gfs2oIxqIMALkrLAK7
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL8npYACgkQk4XaBE3IOsSaFQCfVbJ9erMECygVJUbsvp32+Vk0
N3kAn2yv2Usyr48b8fhTg4rD4ZfEYSVM
=CnHq
-----END PGP SIGNATURE-----

--=-P9gfs2oIxqIMALkrLAK7--


--===============4298551883746762847==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4298551883746762847==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 10:36:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 10:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDteh-0005Vt-JF; Thu, 13 Feb 2014 10:36:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WDteg-0005Vn-Jz
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 10:36:30 +0000
Received: from [193.109.254.147:54383] by server-3.bemta-14.messagelabs.com id
	70/9A-00432-E20ACF25; Thu, 13 Feb 2014 10:36:30 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392287788!62216!1
X-Originating-IP: [62.142.5.109]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTA5ID0+IDk1MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1787 invoked from network); 13 Feb 2014 10:36:29 -0000
Received: from emh03.mail.saunalahti.fi (HELO emh03.mail.saunalahti.fi)
	(62.142.5.109)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 10:36:29 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh03.mail.saunalahti.fi (Postfix) with ESMTP id D2DCF188801;
	Thu, 13 Feb 2014 12:36:27 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id A2AAC36C01F; Thu, 13 Feb 2014 12:36:27 +0200 (EET)
Date: Thu, 13 Feb 2014 12:36:27 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Message-ID: <20140213103627.GF2924@reaktio.net>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"'Jan Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 13, 2014 at 03:55:31AM +0000, Zhang, Yang Z wrote:
> Hi George,
> 

Hello,

> I have updated the Latest Xen nested status in the Xen wiki page, please have a look.
> Although it is hard to say nested is good supported or product quality, it is ready to let people to know nested basically is supported by Xen. Especially, for Xen on Xen case, I didn't see any issue with it for more than half of year. Besides, I am always using nested Xen to debug Xen booting issue which doesn't need to reboot my real box. And it really helps me a lot. So, if possible, I hope we can add nested support into Xen 4.4 release to let people know current status.
> 
> Xen nested wiki:
> http://wiki.xenproject.org/wiki/Xen_nested
>

Thanks a lot for writing that! Good stuff.

How about the bugfix patches mentioned on the wiki page:
http://www.gossamer-threads.com/lists/xen/devel/316994
http://www.gossamer-threads.com/lists/xen/devel/316993

It looks like patch 2/2 needs some more work; is it likely to be fixed/merged before Xen 4.4 final?
(Is it suitable to be merged as a bugfix?)

Thanks,

-- Pasi

 
> best regards
> yang
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:03:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:03:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDu4q-0006PU-Qp; Thu, 13 Feb 2014 11:03:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDu4p-0006Ou-Ga
	for Xen-devel@lists.xensource.com; Thu, 13 Feb 2014 11:03:31 +0000
Received: from [85.158.137.68:49773] by server-2.bemta-3.messagelabs.com id
	18/C3-06531-286ACF25; Thu, 13 Feb 2014 11:03:30 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392289408!347492!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13775 invoked from network); 13 Feb 2014 11:03:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:03:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100404345"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 11:03:26 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:03:25 -0500
Message-ID: <52FCA67C.4070801@citrix.com>
Date: Thu, 13 Feb 2014 11:03:24 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <20140212185352.4c920a54@mantra.us.oracle.com>
In-Reply-To: <20140212185352.4c920a54@mantra.us.oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jan Beulich <JBeulich@suse.com>,
	"Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Julien Grall <julien.grall@linaro.org>
Subject: Re: [Xen-devel] Error ignored in xc_map_foreign_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 02:53, Mukesh Rathor wrote:
> It appears that xc_map_foreign_pages() handles the return value incorrectly:
> 
>     res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>     if (res) {
>         for (i = 0; i < num; i++) {
>             if (err[i]) {
>                 errno = -err[i];
>                 munmap(res, num * PAGE_SIZE);
>                 res = NULL;
>                 break;
>             }
>         }
>     }
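The quoted err[] scan can be sketched stand-alone; one subtle hazard worth noting is that in the original, munmap() runs after errno is assigned and can clobber it on failure, so a defensive variant saves the value first. This is only an illustrative sketch with hypothetical names, not the real libxc code:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for the err[] scan in xc_map_foreign_pages().
 * Each err[i] holds a negative errno for the corresponding page, or 0
 * on success.  In the real code, munmap() runs *after* errno is set
 * and may clobber it, so this sketch saves the value and restores it
 * after the (elided) unmap. */
static int scan_page_errors(const int *err, int num)
{
    for (int i = 0; i < num; i++) {
        if (err[i]) {
            int saved_errno = -err[i];   /* err[i] is -errno per page */
            /* ... munmap(res, num * PAGE_SIZE); res = NULL; ... */
            errno = saved_errno;         /* restore after munmap() */
            return -1;                   /* caller sees failure */
        }
    }
    return 0;
}
```

Callers would then check for a failed result and read errno as usual.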

This looks correct to me.

Which kernel are you using?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:21:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuMJ-00075e-15; Thu, 13 Feb 2014 11:21:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDuMH-00075Z-7N
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:21:33 +0000
Received: from [85.158.139.211:28636] by server-14.bemta-5.messagelabs.com id
	00/F1-27598-CBAACF25; Thu, 13 Feb 2014 11:21:32 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392290490!3557668!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30258 invoked from network); 13 Feb 2014 11:21:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:21:31 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102191810"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 11:21:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:21:29 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDuMD-00074i-Ao;
	Thu, 13 Feb 2014 11:21:29 +0000
Message-ID: <52FCAAB9.3020908@citrix.com>
Date: Thu, 13 Feb 2014 11:21:29 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <20140212195005.GB29910@phenom.dumpdata.com>
	<52FCA7A0020000780011BF88@nat28.tlf.novell.com>
In-Reply-To: <52FCA7A0020000780011BF88@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: ufimtseva@gmail.com, andrew.thomas@oracle.com, george.dunlap@eu.citrix.com,
	kurt.hackel@oracle.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] _PXM, NUMA, and all that goodnesss
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 10:08, Jan Beulich wrote:
>> Interestingly enough one can also read this from SysFS:
>> /sys/bus/pci/devices/<BDF>/numa_node,local_cpu,local_cpulist.
>>
>> Except that we don't expose the NUMA topology to the initial
>> domain so the 'numa_node' is all -1. And the local_cpu depends
>> on seeing _all_ of the CPUs - and of course it assumes that
>> vCPU == pCPU.
>>
>> Anyhow, if this was "tweaked" such that the initial domain
>> was seeing the hardware NUMA topology and parsing it (via
>> Elena's patches) we could potentially have at least the
>> 'numa_node' information present and figure out if a guest
>> is using a PCIe device from the right socket.
> I think you're mixing up things here. Afaict Elena's patches
> are to introduce _virtual_ NUMA, i.e. it would specifically _not_
> expose the host NUMA properties to the Dom0 kernel. Don't
> we have interfaces to expose the host NUMA information to
> the tools already?

I have recently looked into this when playing with xen support in hwloc.

Xen can export its vcpu_to_{socket,node,core} mappings for the toolstack
to consume, and for each node expose a count of used and free pages,
along with a square matrix of distances from the SRAT table.
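To illustrate how such an export might be consumed, here is a hedged sketch of picking the nearest remote node from a flat SRAT-style distance matrix (dist[i*n + j] being the distance from node i to node j, with 10 conventionally meaning local); the function name and layout are assumptions, not an actual libxc or hwloc interface:

```c
#include <assert.h>

/* Illustrative consumer of a square node-distance matrix as exposed
 * by the hypervisor.  dist is a flat n-by-n array in row-major order;
 * returns the index of the closest *other* node, or -1 on a
 * single-node system. */
static int nearest_node(const unsigned *dist, int n, int from)
{
    int best = -1;
    unsigned best_d = ~0u;
    for (int j = 0; j < n; j++) {
        if (j == from)
            continue;               /* skip the node itself */
        if (dist[from * n + j] < best_d) {
            best_d = dist[from * n + j];
            best = j;
        }
    }
    return best;
}
```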

The count of used pages is problematic, because it includes pages
mapping MMIO regions, which differs from the logical expectation of
it being just RAM.

>
>> So what I am wondering is:
>>  1) Were there any plans for the XEN_PCI_DEV_PXM in the
>>     hypervisor? Were there some prototypes for exporting the
>>     PCI device BDF and NUMA information out.
> As said above: Intentions (I wouldn't call it plans) yes, prototypes
> no.
>
>>  2) Would it be better to just look at making the initial domain
>>    be able to figure out the NUMA topology and assign the
>>    correct 'numa_node' in the PCI fields?
> As said above, I don't think this should be exposed to and
> handled in Dom0's kernel. It's the tool stack to have the overall
> view here.

This is where things get awkward.  Dom0 has the real ACPI tables and is
the only entity with the ability to evaluate the _PXM() attributes to
work out which PCI devices belong to which NUMA nodes.  On the other
hand, its idea of cpus and numa is stifled by being virtual and
generally not having access to all the cpus it can see as present in the
ACPI tables.

It would certainly be nice for dom0 to report the _PXM() attributes back
up to Xen, but I have no idea how easy/hard it would be.

>
>>  3). If either option is used, would taking that information in-to
>>    advisement when launching a guest with either 'cpus' or 'numa-affinity'
>>    or 'pci' and informing the user of a better choice be good?
>>    Or would it be better if there was some diagnostic tool to at
>>    least tell the user whether their PCI device assignment made
>>    sense or not? Or perhaps program the 'numa-affinity' based on
>>    the PCIe socket location?
> I think issuing hint messages would be nice. Automatic placement
> could clearly also take assigned devices' localities into consideration,
> i.e. one could expect assigned devices to result in the respective
> nodes to be picked in preference (as long as CPU and memory
> availability allow doing so).
>
> Jan
>

A diagnostic tool is arguably in the works, having been developed in my
copious free time, and rather more actively on the hwloc-devel list than
xen-devel, given the current code freeze.

http://xenbits.xen.org/gitweb/?p=people/andrewcoop/hwloc.git;a=shortlog;h=refs/heads/hwloc-xen-topology-v4
http://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/hwloc-support-experimental-v2

One vague idea I had was to see about using hwloc's placement algorithms
to help advise domain placement, but I have not yet done any
investigation into the feasibility of this.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:22:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuMy-00078A-En; Thu, 13 Feb 2014 11:22:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDuMx-00077y-IV
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 11:22:15 +0000
Received: from [193.109.254.147:26893] by server-7.bemta-14.messagelabs.com id
	EE/86-23424-6EAACF25; Thu, 13 Feb 2014 11:22:14 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392290532!4062467!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30008 invoked from network); 13 Feb 2014 11:22:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:22:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100408161"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 11:22:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:22:12 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDuMt-00075I-OF;
	Thu, 13 Feb 2014 11:22:11 +0000
Message-ID: <52FCAAD8.20206@eu.citrix.com>
Date: Thu, 13 Feb 2014 11:22:00 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	<xen-devel@lists.xensource.com>, <jun.nakajima@intel.com>,
	<boris.ostrovsky@oracle.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <andrew.thomas@oracle.com>,
	<ufimtseva@gmail.com>
References: <20140212195005.GB29910@phenom.dumpdata.com>
In-Reply-To: <20140212195005.GB29910@phenom.dumpdata.com>
X-DLP: MIA2
Cc: kurt.hackel@oracle.com
Subject: Re: [Xen-devel] _PXM, NUMA, and all that goodnesss
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 07:50 PM, Konrad Rzeszutek Wilk wrote:
> Hey,
>
> I have been looking at figuring out how we can "easily" do PCIe assignment
> of devices that are on different sockets. The problem is that
> on machines with many sockets (four or more) we might inadvertently assign
> a PCIe device from one socket to a guest bound to a different NUMA
> node. That means more QPI traffic, higher latency, etc.
>
>  From a Linux kernel perspective we do seem to 'pipe' said information
> from the ACPI DSDT (drivers/xen/pci.c):
>
>     unsigned long long pxm;
>
>     status = acpi_evaluate_integer(handle, "_PXM",
>                        NULL, &pxm);
>     if (ACPI_SUCCESS(status)) {
>         add.optarr[0] = pxm;
>         add.flags |= XEN_PCI_DEV_PXM;
>
> Which is neat except that Xen ignores that flag altogether. I googled
> a bit but still did not find anything relevant - though there were
> some presentations from past Xen Summits referring to it
> (I can't find them now :-()
>
> Anyhow, what I am wondering is whether there are some prototypes out
> there from the past that utilize this. And if we were to use this, how
> can we expose this to 'libxl' or any other tools to say:
>
> "Hey! You might want to use this other PCI device assigned
> to pciback which is on the same node". Some form of
> 'numa-pci' affinity.

A warning that the PCI device is not in the numa affinity of the guest 
might be nice.

> Interestingly enough one can also read this from SysFS:
> /sys/bus/pci/devices/<BDF>/numa_node,local_cpu,local_cpulist.
>
> Except that we don't expose the NUMA topology to the initial
> domain so the 'numa_node' is all -1. And the local_cpu depends
> on seeing _all_ of the CPUs - and of course it assumes that
> vCPU == pCPU.
>
> Anyhow, if this was "tweaked" such that the initial domain
> was seeing the hardware NUMA topology and parsing it (via
> Elena's patches) we could potentially have at least the
> 'numa_node' information present and figure out if a guest
> is using a PCIe device from the right socket.

I don't think we want to go down the path of pretending that dom0 is the 
hypervisor.  This is the same reason I objected to Boris' approach to 
perf integration last year.  I can understand the idea of wanting to use 
the same tools in the same way; but the fact is dom0 is a guest, and its 
virtual hardware (including #cpus, topology, &c) isn't (and shouldn't be 
required to be) in any way related to the host.

On the other hand... just tossing this out there, but how hard would it 
be for dom0 to report information about the *physical* topology on 
certain things in sysfs, rather than *virtual* topology?  I.e., no 
matter what dom0's virtual topology was, to report the physical 
numa_node, local_cpu, &c in sysfs?
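For concreteness, a consumer of that sysfs attribute would read it roughly as below. This is an illustrative sketch (the helper name and error handling are assumptions), and on today's dom0 it would simply return the -1 described earlier in the thread:

```c
#include <stdio.h>

/* Read /sys/bus/pci/devices/<BDF>/numa_node for a device given by its
 * BDF string (e.g. "0000:02:00.0").  The file holds a decimal node id,
 * or -1 when no affinity is known; a missing attribute or parse
 * failure is also treated as "unknown" here. */
static int pci_numa_node(const char *bdf)
{
    char path[256];
    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/numa_node", bdf);
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;                  /* no such device/attribute */
    int node = -1;
    if (fscanf(f, "%d", &node) != 1)
        node = -1;
    fclose(f);
    return node;
}
```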

I suppose this might cause problems if the scheduler then tried to run a 
process / tasklet on the node to which the device was attached, only to 
find out that no such (virtual) node existed.

If that would be a no-go, then I think we need to expose that 
information via libxl somehow so the toolstack can make reasonable 
decisions.

>
> So what I am wondering is:
>   1) Were there any plans for the XEN_PCI_DEV_PXM in the
>      hypervisor? Were there some prototypes for exporting the
>      PCI device BDF and NUMA information out.
>
>   2) Would it be better to just look at making the initial domain
>     be able to figure out the NUMA topology and assign the
>     correct 'numa_node' in the PCI fields?
>
>   3). If either option is used, would taking that information in-to
>     advisement when launching a guest with either 'cpus' or 'numa-affinity'
>     or 'pci' and informing the user of a better choice be good?
>     Or would it be better if there was some diagnostic tool to at
>     least tell the user whether their PCI device assignment made
>     sense or not? Or perhaps program the 'numa-affinity' based on
>     the PCIe socket location?

I think in general, we should:
* Do something reasonable when no NUMA topology has been specified
* Do what the user asks (but help them make good decisions) when they do 
specify topology.

A couple of things that might mean:
* Having the NUMA placement algorithm take into account the location of 
assigned PCI devices is probably a good idea.
* Having a warning when a device is outside of a VM's soft cpu affinity 
or NUMA affinity.  (I think we do something similar when the soft cpu 
affinity doesn't intersect the NUMA affinity.)
* Exposing the NUMA affinity of a device when doing xl 
pci-assignable-list might be a good idea as well, just to give people a 
hint that they should maybe be thinking about this.  Maybe have xl 
pci-assignable-add print what node a device is on as well? (Maybe only 
on NUMA boxes?)

Just as an aside, can I take it that a lot of your customers have / are 
expected to have such NUMA boxes?  The accepted wisdom (at least in some 
circles) seems to be that NUMA isn't particularly important for cloud, 
because cloud providers will generally use a larger number of smaller 
boxes and use a cloud orchestration layer to tie them all together.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:22:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuMy-00078A-En; Thu, 13 Feb 2014 11:22:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDuMx-00077y-IV
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 11:22:15 +0000
Received: from [193.109.254.147:26893] by server-7.bemta-14.messagelabs.com id
	EE/86-23424-6EAACF25; Thu, 13 Feb 2014 11:22:14 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392290532!4062467!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30008 invoked from network); 13 Feb 2014 11:22:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:22:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100408161"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 11:22:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:22:12 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDuMt-00075I-OF;
	Thu, 13 Feb 2014 11:22:11 +0000
Message-ID: <52FCAAD8.20206@eu.citrix.com>
Date: Thu, 13 Feb 2014 11:22:00 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	<xen-devel@lists.xensource.com>, <jun.nakajima@intel.com>,
	<boris.ostrovsky@oracle.com>, <jbeulich@suse.com>,
	<andrew.cooper3@citrix.com>, <andrew.thomas@oracle.com>,
	<ufimtseva@gmail.com>
References: <20140212195005.GB29910@phenom.dumpdata.com>
In-Reply-To: <20140212195005.GB29910@phenom.dumpdata.com>
X-DLP: MIA2
Cc: kurt.hackel@oracle.com
Subject: Re: [Xen-devel] _PXM, NUMA, and all that goodnesss
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 07:50 PM, Konrad Rzeszutek Wilk wrote:
> Hey,
>
> I have been looking at figuring out how we can "easily" do PCIe assignment
> of devices that are on different sockets. The problem is that
> on machines with many sockets (four or more) we might inadvertently assign
> a PCIe device from one socket to a guest bound to a different NUMA
> node. That means more QPI traffic, higher latency, etc.
>
>  From a Linux kernel perspective we do seem to 'pipe' said information
> from the ACPI DSDT (drivers/xen/pci.c):
>
>   75                 unsigned long long pxm;
>   76
>   77                 status = acpi_evaluate_integer(handle, "_PXM",
>   78                                    NULL, &pxm);
>   79                 if (ACPI_SUCCESS(status)) {
>   80                     add.optarr[0] = pxm;
>   81                     add.flags |= XEN_PCI_DEV_PXM;
>
> Which is neat except that Xen ignores that flag altogether. I Googled
> a bit but still did not find anything relevant - though there were
> some presentations from past Xen Summits referring to it
> (I can't find it now :-()
>
> Anyhow, what I am wondering is if there are some prototypes out there
> from the past that utilize this. And if we were to use this how
> can we expose this to 'libxl' or any other tools to say:
>
> "Hey! You might want to use this other PCI device assigned
> to pciback which is on the same node". Some form of
> 'numa-pci' affinity.

A warning that the PCI device is not in the NUMA affinity of the guest 
might be nice.

> Interestingly enough one can also read this from SysFS:
> /sys/bus/pci/devices/<BDF>/numa_node,local_cpu,local_cpulist.
>
> Except that we don't expose the NUMA topology to the initial
> domain so the 'numa_node' is all -1. And the local_cpu depends
> on seeing _all_ of the CPUs - and of course it assumes that
> vCPU == pCPU.
>
> Anyhow, if this was "tweaked" such that the initial domain
> was seeing the hardware NUMA topology and parsing it (via
> Elena's patches) we could potentially have at least the
> 'numa_node' information present and figure out if a guest
> is using a PCIe device from the right socket.

I don't think we want to go down the path of pretending that dom0 is the 
hypervisor.  This is the same reason I objected to Boris' approach to 
perf integration last year.  I can understand the idea of wanting to use 
the same tools in the same way; but the fact is dom0 is a guest, and its 
virtual hardware (including #cpus, topology, &c) isn't (and shouldn't be 
required to be) in any way related to the host.

On the other hand... just tossing this out there, but how hard would it 
be for dom0 to report information about the *physical* topology on 
certain things in sysfs, rather than *virtual* topology?  I.e., no 
matter what dom0's virtual topology was, to report the physical 
numa_node, local_cpu, &c in sysfs?

I suppose this might cause problems if the scheduler then tried to run a 
process / tasklet on the node to which the device was attached, only to 
find out that no such (virtual) node existed.

If that would be a no-go, then I think we need to expose that 
information via libxl somehow so the toolstack can make reasonable 
decisions.
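
To make that concrete: on bare-metal Linux the attribute in question is a
one-line file such as /sys/bus/pci/devices/0000:03:00.0/numa_node.  A
minimal sketch of a consumer (a hypothetical helper, not Xen or kernel
code) that treats a missing or unparsable attribute as -1 -- the same
"no node known" answer dom0 gives today:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Read a small integer sysfs attribute such as
 * /sys/bus/pci/devices/<BDF>/numa_node.  Returns the value, or -1 when
 * the file is missing or unparsable -- matching what dom0 currently
 * reports, since the PXM information is not passed through. */
static int read_numa_node(const char *path)
{
    FILE *f = fopen(path, "r");
    char buf[32];
    char *end;
    long v;

    if (!f)
        return -1;
    if (!fgets(buf, sizeof(buf), f)) {
        fclose(f);
        return -1;
    }
    fclose(f);
    v = strtol(buf, &end, 10);
    return end == buf ? -1 : (int)v;   /* no digits parsed => -1 */
}
```

If dom0 (or the toolstack, via libxl) started reporting physical
topology, a read like this would begin returning real node numbers
instead of -1.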

>
> So what I am wondering is:
>   1) Were there any plans for the XEN_PCI_DEV_PXM in the
>      hypervisor? Were there some prototypes for exporting the
>      PCI device BDF and NUMA information out.
>
>   2) Would it be better to just look at making the initial domain
>     be able to figure out the NUMA topology and assign the
>     correct 'numa_node' in the PCI fields?
>
>   3) If either option is used, would taking that information into
>     advisement when launching a guest with either 'cpus' or 'numa-affinity'
>     or 'pci' and informing the user of a better choice be good?
>     Or would it be better if there was some diagnostic tool to at
>     least tell the user whether their PCI device assignment made
>     sense or not? Or perhaps program the 'numa-affinity' based on
>     the PCIe socket location?

I think in general, we should:
* Do something reasonable when no NUMA topology has been specified
* Do what the user asks (but help them make good decisions) when they do 
specify topology.

A couple of things that might mean:
* Having the NUMA placement algorithm take into account the location of 
assigned PCI devices is probably a good idea.
* Having a warning when a device is outside of a VM's soft cpu affinity 
or NUMA affinity.  (I think we do something similar when the soft cpu 
affinity doesn't intersect the NUMA affinity.)
* Exposing the NUMA affinity of a device when doing xl 
pci-assignable-list might be a good idea as well, just to give people a 
hint that they should maybe be thinking about this.  Maybe have xl 
pci-assignable-add print what node a device is on as well? (Maybe only 
on NUMA boxes?)

Just as an aside, can I take it that a lot of your customers have / are 
expected to have such NUMA boxes?  The accepted wisdom (at least in some 
circles) seems to be that NUMA isn't particularly important for cloud, 
because cloud providers will generally use a larger number of smaller 
boxes and use a cloud orchestration layer to tie them all together.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:24:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:24:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuP3-0007Hz-06; Thu, 13 Feb 2014 11:24:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDuP1-0007Ho-EL
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:24:23 +0000
Received: from [85.158.139.211:57885] by server-12.bemta-5.messagelabs.com id
	E4/F5-15415-66BACF25; Thu, 13 Feb 2014 11:24:22 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392290661!3646544!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1718 invoked from network); 13 Feb 2014 11:24:21 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 11:24:21 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDuOx-000MbH-Tl; Thu, 13 Feb 2014 11:24:19 +0000
Date: Thu, 13 Feb 2014 12:24:19 +0100
From: Tim Deegan <tim@xen.org>
To: george.dunlap@citrix.com
Message-ID: <20140213112419.GB82703@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
	<20140211143346.GE10482@deinos.phlegethon.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140211143346.GE10482@deinos.phlegethon.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	keir@xen.org, Ian.Jackson@eu.citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George: ping.

At 15:33 +0100 on 11 Feb (1392129226), Tim Deegan wrote:
> At 14:24 +0000 on 11 Feb (1392125052), Julien Grall wrote:
> > If it's possible I'd like this patch to go into Xen 4.4 to fix the 
> > build with official versions of clang (up to 3.4).
> > 
> > Clang 3.5 is still under development, so I don't think it's important to 
> > have support for it in Xen 4.4.
> 
> Fair enough.  In that case it needs a release ack from George.  It:
>  - fixes a compile issue on some versions of clang;
>  - might cause a regression with other compilers, but the regression
>    is likely to be obvious (i.e. a compile-time failure).
> 
> And it needs an ack from Keir, for changing common code. 
> 
> v2 is below, removing "-iwithprefix".  I've kept your tested-by; hope
> that's OK.
> 
> Cheers,
> 
> Tim.
> 
> commit 1d62fcb9ad8d2b409ac2cf0e8a3824e19ca3313f
> Author: Tim Deegan <tim@xen.org>
> Date:   Tue Feb 11 12:44:09 2014 +0000
> 
>     xen: stop trying to use the system <stdarg.h> and <stdbool.h>
>     
>     We already have our own versions of the stdarg/stdbool definitions, for
>     systems where those headers are installed in /usr/include.
>     
>     On linux, they're typically installed in compiler-specific paths, but
>     finding them has proved unreliable.  Drop that and use our own versions
>     everywhere.
>     
>     Signed-off-by: Tim Deegan <tim@xen.org>
>     Tested-by: Julien Grall <julien.grall@linaro.org>
> 
> diff --git a/xen/Rules.mk b/xen/Rules.mk
> index df1428f..3a6cec5 100644
> --- a/xen/Rules.mk
> +++ b/xen/Rules.mk
> @@ -44,10 +44,7 @@ ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
>  CFLAGS += -fno-builtin -fno-common
>  CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
>  CFLAGS += -pipe -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
> -# Solaris puts stdarg.h &c in the system include directory.
> -ifneq ($(XEN_OS),SunOS)
> -CFLAGS += -nostdinc -iwithprefix include
> -endif
> +CFLAGS += -nostdinc
>  
>  CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
>  CFLAGS-$(FLASK_ENABLE)  += -DFLASK_ENABLE -DXSM_MAGIC=0xf97cff8c
> diff --git a/xen/include/xen/stdarg.h b/xen/include/xen/stdarg.h
> index d1b2540..0283f06 100644
> --- a/xen/include/xen/stdarg.h
> +++ b/xen/include/xen/stdarg.h
> @@ -1,23 +1,21 @@
>  #ifndef __XEN_STDARG_H__
>  #define __XEN_STDARG_H__
>  
> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
> -   typedef __builtin_va_list va_list;
> -#  ifdef __GNUC__
> -#    define __GNUC_PREREQ__(x, y)                                       \
> -        ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
> -         (__GNUC__ > (x)))
> -#  else
> -#    define __GNUC_PREREQ__(x, y)   0
> -#  endif
> -#  if !__GNUC_PREREQ__(4, 5)
> -#    define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
> -#  endif
> -#  define va_start(ap, last)    __builtin_va_start((ap), (last))
> -#  define va_end(ap)            __builtin_va_end(ap)
> -#  define va_arg                __builtin_va_arg
> +#ifdef __GNUC__
> +#  define __GNUC_PREREQ__(x, y)                                       \
> +      ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
> +       (__GNUC__ > (x)))
>  #else
> -#  include <stdarg.h>
> +#  define __GNUC_PREREQ__(x, y)   0
>  #endif
>  
> +#if !__GNUC_PREREQ__(4, 5)
> +#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
> +#endif
> +
> +typedef __builtin_va_list va_list;
> +#define va_start(ap, last)    __builtin_va_start((ap), (last))
> +#define va_end(ap)            __builtin_va_end(ap)
> +#define va_arg                __builtin_va_arg
> +
>  #endif /* __XEN_STDARG_H__ */
> diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
> index f0faedf..b0947a6 100644
> --- a/xen/include/xen/stdbool.h
> +++ b/xen/include/xen/stdbool.h
> @@ -1,13 +1,9 @@
>  #ifndef __XEN_STDBOOL_H__
>  #define __XEN_STDBOOL_H__
>  
> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
> -#  define bool _Bool
> -#  define true 1
> -#  define false 0
> -#  define __bool_true_false_are_defined   1
> -#else
> -#  include <stdbool.h>
> -#endif
> +#define bool _Bool
> +#define true 1
> +#define false 0
> +#define __bool_true_false_are_defined   1
>  
>  #endif /* __XEN_STDBOOL_H__ */
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
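
As an aside, the builtin-backed macros in the patched stdarg.h above are
enough for ordinary varargs code.  A small self-contained illustration
(the definitions are copied in here only so the snippet stands alone;
real Xen code gets them from <xen/stdarg.h>):

```c
#include <assert.h>

/* Same builtin-backed definitions the patched xen/include/xen/stdarg.h
 * now uses unconditionally on GCC and Clang. */
typedef __builtin_va_list va_list;
#define va_start(ap, last)    __builtin_va_start((ap), (last))
#define va_end(ap)            __builtin_va_end(ap)
#define va_arg                __builtin_va_arg

/* Sum 'n' ints, exercising va_start/va_arg/va_end. */
static int sum_ints(int n, ...)
{
    va_list ap;
    int total = 0;

    va_start(ap, n);
    while (n-- > 0)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}
```

This compiles without any system <stdarg.h>, which is exactly why the
patch can drop the -iwithprefix search path.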

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:35:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuZj-0007kL-FP; Thu, 13 Feb 2014 11:35:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDuZi-0007kB-9p
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:35:26 +0000
Received: from [85.158.143.35:39692] by server-1.bemta-4.messagelabs.com id
	09/E9-31661-DFDACF25; Thu, 13 Feb 2014 11:35:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392291323!5383039!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22087 invoked from network); 13 Feb 2014 11:35:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:35:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102194539"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 11:35:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:35:22 -0500
Message-ID: <1392291321.27366.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Thu, 13 Feb 2014 11:35:21 +0000
In-Reply-To: <CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
References: <1392071391-13215-1-git-send-email-mcgrof@do-not-panic.com>
	<1392071391-13215-3-git-send-email-mcgrof@do-not-panic.com>
	<1392108205.22033.16.camel@dagon.hellion.org.uk>
	<CAB=NE6Vu=khpj_3J7r-u8DFkhyC-RgLikNFtOU-WO7te_4HMCw@mail.gmail.com>
	<1392203708.13563.50.camel@kazak.uk.xensource.com>
	<CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 14:05 -0800, Luis R. Rodriguez wrote:
> > I meant the PV protocol extension which allows guests (netfront) to
> > register to receive multicast frames across the PV ring -- i.e. for
> > multicast to work from the guests PoV.
> 
> Not quite sure I understand: IPv6 works on guests, so multicast works,
> so it's unclear what you mean by multicast frames across the PV ring.
> Is there any code or documents I can look at?

xen/include/public/io/netif.h talks about 'feature-multicast-control'
and XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL}.
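
For reference, the shape of that extension can be sketched as follows.
This is a paraphrase: the struct layout and type values below are a
simplified local copy so the snippet compiles standalone, and
public/io/netif.h remains the authoritative definition.

```c
#include <stdint.h>
#include <string.h>

/* Extra-info type values as in public/io/netif.h (paraphrased). */
#define XEN_NETIF_EXTRA_TYPE_MCAST_ADD 2
#define XEN_NETIF_EXTRA_TYPE_MCAST_DEL 3

/* Simplified stand-in for the mcast arm of netif_extra_info. */
struct netif_extra_info_mcast {
    uint8_t type;      /* XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL} */
    uint8_t flags;
    uint8_t addr[6];   /* multicast MAC to (un)subscribe */
};

/* Build the extra-info slot a frontend would queue to ask the backend
 * to start forwarding frames for one multicast group. */
static void mcast_add_request(struct netif_extra_info_mcast *req,
                              const uint8_t mac[6])
{
    memset(req, 0, sizeof(*req));
    req->type = XEN_NETIF_EXTRA_TYPE_MCAST_ADD;
    memcpy(req->addr, mac, 6);
}
```

A frontend that negotiates feature-multicast-control sends one such
request per group it joins; without the feature, the backend simply
floods everything, as discussed below.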

Looking at it now, in the absence of those, flooding is the
default...

> > (maybe that was just an optimisation though and the default is to flood
> > everything, it was a long time ago)
> 
> From a networking perspective everything is being flooded as I've seen
> it so far.

... which is why it works ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:35:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuZj-0007kS-Qu; Thu, 13 Feb 2014 11:35:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDuZi-0007kC-Od
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:35:26 +0000
Received: from [85.158.143.35:29416] by server-2.bemta-4.messagelabs.com id
	23/70-10891-EFDACF25; Thu, 13 Feb 2014 11:35:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392291323!5383039!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22197 invoked from network); 13 Feb 2014 11:35:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:35:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102194540"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 11:35:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 06:35:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDuZe-0002od-T3;
	Thu, 13 Feb 2014 11:35:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDuZe-0004Xb-MQ;
	Thu, 13 Feb 2014 11:35:22 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21244.44538.522283.884578@mariner.uk.xensource.com>
Date: Thu, 13 Feb 2014 11:35:22 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
In-Reply-To: <52F8E379.4020702@citrix.com>
References: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F8C40E.7010707@citrix.com>
	<21240.50574.203262.432094@mariner.uk.xensource.com>
	<52F8E379.4020702@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monné writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
> On 10/02/14 13:26, Ian Jackson wrote:
> > Roger Pau Monné writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
> >> Thanks for the patch. I think it's missing the following chunk:
> > ...
> >> -our $freebsd_version= "10.0-BETA3";
> >> +our $freebsd_version= "10.0-RELEASE";
> >
> > Oh.  Err, why is this hardcoded in the script ?  Changing the
> > runvar(s) ought to be sufficient.
> >
> > ... (looks at the code) ...
> >
> > Oh, I see, that's just the default.  Perhaps the default should be
> > removed entirely ?  None of the other scripts have a default image
> > filename.
>
> So $freebsd_image is going to contain the absolute path to the image?
> I'm asking because ts-freebsd-install searches for the image in
> /var/images, do we have to do something like /var/images/$freebsd_image
> in order to get the absolute image path?

(Sorry for not replying sooner.)

In flights made with make-flight, the runvar $r{freebsd_image} is
always set and is used by target_put_guest_image instead of the third
"default" argument.  So both $freebsd_version and $freebsd_vm_repo are
ignored.

ts-redhat-install and ts-windows-install both use
more_prepareguest_hvm, which passes "undef" for the third argument.
So I'm suggesting that the bit of ts-freebsd-install which constructs
the default image filename be removed.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<1392203708.13563.50.camel@kazak.uk.xensource.com>
	<CAB=NE6WFDWW1faoCYJP9zh6rPPvET+dbHxjHnGkYXtRx6z2LOQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC 2/2] xen-netback: disable multicast and use a
 random hw MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-12 at 14:05 -0800, Luis R. Rodriguez wrote:
> > I meant the PV protocol extension which allows guests (netfront) to
> > register to receive multicast frames across the PV ring -- i.e. for
> > multicast to work from the guest's PoV.
> 
> Not quite sure I understand; IPv6 works on guests, so multicast works,
> so it's unclear what you mean by multicast frames across the PV ring.
> Is there any code or documents I can look at?

xen/include/public/io/netif.h talks about 'feature-multicast-control'
and XEN_NETIF_EXTRA_TYPE_MCAST_{ADD,DEL}.

Looking at it now, in the absence of those, flooding is the
default...

> > (maybe that was just an optimisation though and the default is to flood
> > everything, it was a long time ago)
> 
> From a networking perspective everything is being flooded as I've seen
> it so far.

... which is why it works ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:35:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDuZj-0007kS-Qu; Thu, 13 Feb 2014 11:35:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDuZi-0007kC-Od
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:35:26 +0000
Received: from [85.158.143.35:29416] by server-2.bemta-4.messagelabs.com id
	23/70-10891-EFDACF25; Thu, 13 Feb 2014 11:35:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392291323!5383039!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22197 invoked from network); 13 Feb 2014 11:35:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:35:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102194540"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 11:35:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 06:35:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDuZe-0002od-T3;
	Thu, 13 Feb 2014 11:35:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDuZe-0004Xb-MQ;
	Thu, 13 Feb 2014 11:35:22 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21244.44538.522283.884578@mariner.uk.xensource.com>
Date: Thu, 13 Feb 2014 11:35:22 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
In-Reply-To: <52F8E379.4020702@citrix.com>
References: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F8C40E.7010707@citrix.com>
	<21240.50574.203262.432094@mariner.uk.xensource.com>
	<52F8E379.4020702@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monné writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
> On 10/02/14 13:26, Ian Jackson wrote:
> > Roger Pau Monné writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
> >> Thanks for the patch. I think it's missing the following chunk:
> > ...
> >> -our $freebsd_version= "10.0-BETA3";
> >> +our $freebsd_version= "10.0-RELEASE";
> >
> > Oh.  Err, why is this hardcoded in the script ?  Changing the
> > runvar(s) ought to be sufficient.
> >
> > ... (looks at the code) ...
> >
> > Oh, I see, that's just the default.  Perhaps the default should be
> > removed entirely ?  None of the other scripts have a default image
> > filename.
>
> So $freebsd_image is going to contain the absolute path to the image?
> I'm asking because ts-freebsd-install searches for the image in
> /var/images, do we have to do something like /var/images/$freebsd_image
> in order to get the absolute image path?

(Sorry for not replying sooner.)

In flights made with make-flight, the runvar $r{freebsd_image} is
always set and is used by target_put_guest_image instead of the third
"default" argument.  So both $freebsd_version and $freebsd_vm_repo are
ignored.

ts-redhat-install and ts-windows-install both use
more_prepareguest_hvm, which passes "undef" for the third argument.
So I'm suggesting that the bit of ts-freebsd-install which constructs
the default image filename be removed.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:40:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDueG-0008Fe-T7; Thu, 13 Feb 2014 11:40:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDueG-0008FY-DS
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:40:08 +0000
Received: from [193.109.254.147:27021] by server-5.bemta-14.messagelabs.com id
	E8/C0-16688-71FACF25; Thu, 13 Feb 2014 11:40:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392291606!4085032!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20733 invoked from network); 13 Feb 2014 11:40:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 11:40:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 11:40:06 +0000
Message-Id: <52FCBD23020000780011C080@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 11:40:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <20140212195005.GB29910@phenom.dumpdata.com>
	<52FCA7A0020000780011BF88@nat28.tlf.novell.com>
	<52FCAAB9.3020908@citrix.com>
In-Reply-To: <52FCAAB9.3020908@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: ufimtseva@gmail.com, andrew.thomas@oracle.com, george.dunlap@eu.citrix.com,
	kurt.hackel@oracle.com, jun.nakajima@intel.com,
	xen-devel <xen-devel@lists.xenproject.org>, boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] _PXM, NUMA, and all that goodnesss
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 12:21, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 13/02/14 10:08, Jan Beulich wrote:
>>>  2) Would it be better to just look at making the initial domain
>>>    be able to figure out the NUMA topology and assign the
>>>    correct 'numa_node' in the PCI fields?
>> As said above, I don't think this should be exposed to and
>> handled in Dom0's kernel. It's the tool stack that has the overall
>> view here.
> 
> This is where things get awkward.  Dom0 has the real ACPI tables and is
> the only entity with the ability to evaluate the _PXM() attributes to
> work out which PCI devices belong to which NUMA nodes.  On the other
> hand, its idea of cpus and numa is stifled by being virtual and
> generally not having access to all the cpus it can see as present in the
> ACPI tables.
> 
> It would certainly be nice for dom0 to report the _PXM() attributes back
> up to Xen, but I have no idea how easy/hard it would be.

But that's being done already (see Konrad's original post), just
that the hypervisor doesn't really make use of the information
at present.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:47:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDukk-0008Qe-Pr; Thu, 13 Feb 2014 11:46:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDukj-0008QZ-Lo
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:46:50 +0000
Received: from [85.158.137.68:54774] by server-17.bemta-3.messagelabs.com id
	A3/C6-22569-8A0BCF25; Thu, 13 Feb 2014 11:46:48 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392292006!1622203!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2263 invoked from network); 13 Feb 2014 11:46:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:46:47 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100413088"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 11:46:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:46:45 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDuke-0007Sf-V8;
	Thu, 13 Feb 2014 11:46:44 +0000
Message-ID: <52FCB099.7000902@eu.citrix.com>
Date: Thu, 13 Feb 2014 11:46:33 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>, <george.dunlap@citrix.com>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
	<20140211143346.GE10482@deinos.phlegethon.org>
	<20140213112419.GB82703@deinos.phlegethon.org>
In-Reply-To: <20140213112419.GB82703@deinos.phlegethon.org>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	keir@xen.org, Ian.Jackson@eu.citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/2014 11:24 AM, Tim Deegan wrote:
> George: ping.
>
> At 15:33 +0100 on 11 Feb (1392129226), Tim Deegan wrote:
>> At 14:24 +0000 on 11 Feb (1392125052), Julien Grall wrote:
>>> If possible, I'd like this patch to go into Xen 4.4 to fix the build
>>> with the official versions of clang (up to 3.4).
>>>
>>> Clang 3.5 is still under development, so I don't think it's important to
>>> have support for it in Xen 4.4.
>> Fair enough.  In that case it needs a release ack from George.  It:
>>   - fixes a compile issue on some versions of clang;
>>   - might cause a regression with other compilers, but the regression
>>     is likely to be obvious (i.e. a compile-time failure).

So the main risk would be if stdarg.h contained something like the 
"__GNUC_PREREQ__(4, 5)" #ifdef-ery that we missed.  Without this patch, 
we know it doesn't compile on the latest version of clang.  I think 
that's probably worse than the risk of potentially not compiling on some 
compiler that doesn't end up being tested before the release:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


>>
>> And it needs an ack from Keir, for changing common code.
>>
>> v2 is below, removing "-iwithprefix".  I've kept your tested-by; hope
>> that's OK.
>>
>> Cheers,
>>
>> Tim.
>>
>> commit 1d62fcb9ad8d2b409ac2cf0e8a3824e19ca3313f
>> Author: Tim Deegan <tim@xen.org>
>> Date:   Tue Feb 11 12:44:09 2014 +0000
>>
>>      xen: stop trying to use the system <stdarg.h> and <stdbool.h>
>>      
>>      We already have our own versions of the stdarg/stdbool definitions, for
>>      systems where those headers are installed in /usr/include.
>>      
>>      On linux, they're typically installed in compiler-specific paths, but
>>      finding them has proved unreliable.  Drop that and use our own versions
>>      everywhere.
>>      
>>      Signed-off-by: Tim Deegan <tim@xen.org>
>>      Tested-by: Julien Grall <julien.grall@linaro.org>
>>
>> diff --git a/xen/Rules.mk b/xen/Rules.mk
>> index df1428f..3a6cec5 100644
>> --- a/xen/Rules.mk
>> +++ b/xen/Rules.mk
>> @@ -44,10 +44,7 @@ ALL_OBJS-$(x86)          += $(BASEDIR)/crypto/built_in.o
>>   CFLAGS += -fno-builtin -fno-common
>>   CFLAGS += -Werror -Wredundant-decls -Wno-pointer-arith
>>   CFLAGS += -pipe -g -D__XEN__ -include $(BASEDIR)/include/xen/config.h
>> -# Solaris puts stdarg.h &c in the system include directory.
>> -ifneq ($(XEN_OS),SunOS)
>> -CFLAGS += -nostdinc -iwithprefix include
>> -endif
>> +CFLAGS += -nostdinc
>>   
>>   CFLAGS-$(XSM_ENABLE)    += -DXSM_ENABLE
>>   CFLAGS-$(FLASK_ENABLE)  += -DFLASK_ENABLE -DXSM_MAGIC=0xf97cff8c
>> diff --git a/xen/include/xen/stdarg.h b/xen/include/xen/stdarg.h
>> index d1b2540..0283f06 100644
>> --- a/xen/include/xen/stdarg.h
>> +++ b/xen/include/xen/stdarg.h
>> @@ -1,23 +1,21 @@
>>   #ifndef __XEN_STDARG_H__
>>   #define __XEN_STDARG_H__
>>   
>> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
>> -   typedef __builtin_va_list va_list;
>> -#  ifdef __GNUC__
>> -#    define __GNUC_PREREQ__(x, y)                                       \
>> -        ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
>> -         (__GNUC__ > (x)))
>> -#  else
>> -#    define __GNUC_PREREQ__(x, y)   0
>> -#  endif
>> -#  if !__GNUC_PREREQ__(4, 5)
>> -#    define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
>> -#  endif
>> -#  define va_start(ap, last)    __builtin_va_start((ap), (last))
>> -#  define va_end(ap)            __builtin_va_end(ap)
>> -#  define va_arg                __builtin_va_arg
>> +#ifdef __GNUC__
>> +#  define __GNUC_PREREQ__(x, y)                                       \
>> +      ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
>> +       (__GNUC__ > (x)))
>>   #else
>> -#  include <stdarg.h>
>> +#  define __GNUC_PREREQ__(x, y)   0
>>   #endif
>>   
>> +#if !__GNUC_PREREQ__(4, 5)
>> +#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
>> +#endif
>> +
>> +typedef __builtin_va_list va_list;
>> +#define va_start(ap, last)    __builtin_va_start((ap), (last))
>> +#define va_end(ap)            __builtin_va_end(ap)
>> +#define va_arg                __builtin_va_arg
>> +
>>   #endif /* __XEN_STDARG_H__ */
>> diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
>> index f0faedf..b0947a6 100644
>> --- a/xen/include/xen/stdbool.h
>> +++ b/xen/include/xen/stdbool.h
>> @@ -1,13 +1,9 @@
>>   #ifndef __XEN_STDBOOL_H__
>>   #define __XEN_STDBOOL_H__
>>   
>> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
>> -#  define bool _Bool
>> -#  define true 1
>> -#  define false 0
>> -#  define __bool_true_false_are_defined   1
>> -#else
>> -#  include <stdbool.h>
>> -#endif
>> +#define bool _Bool
>> +#define true 1
>> +#define false 0
>> +#define __bool_true_false_are_defined   1
>>   
>>   #endif /* __XEN_STDBOOL_H__ */
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>> +#  define __GNUC_PREREQ__(x, y)   0
>>   #endif
>>   
>> +#if !__GNUC_PREREQ__(4, 5)
>> +#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
>> +#endif
>> +
>> +typedef __builtin_va_list va_list;
>> +#define va_start(ap, last)    __builtin_va_start((ap), (last))
>> +#define va_end(ap)            __builtin_va_end(ap)
>> +#define va_arg                __builtin_va_arg
>> +
>>   #endif /* __XEN_STDARG_H__ */
>> diff --git a/xen/include/xen/stdbool.h b/xen/include/xen/stdbool.h
>> index f0faedf..b0947a6 100644
>> --- a/xen/include/xen/stdbool.h
>> +++ b/xen/include/xen/stdbool.h
>> @@ -1,13 +1,9 @@
>>   #ifndef __XEN_STDBOOL_H__
>>   #define __XEN_STDBOOL_H__
>>   
>> -#if defined(__OpenBSD__) || defined(__NetBSD__) || defined(__FreeBSD__)
>> -#  define bool _Bool
>> -#  define true 1
>> -#  define false 0
>> -#  define __bool_true_false_are_defined   1
>> -#else
>> -#  include <stdbool.h>
>> -#endif
>> +#define bool _Bool
>> +#define true 1
>> +#define false 0
>> +#define __bool_true_false_are_defined   1
>>   
>>   #endif /* __XEN_STDBOOL_H__ */
>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 11:49:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 11:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDunZ-0000GO-DP; Thu, 13 Feb 2014 11:49:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDunX-0000Fr-H0
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 11:49:43 +0000
Received: from [85.158.143.35:60018] by server-3.bemta-4.messagelabs.com id
	35/31-11539-651BCF25; Thu, 13 Feb 2014 11:49:42 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392292181!5387667!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14885 invoked from network); 13 Feb 2014 11:49:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 11:49:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102197254"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 11:49:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 06:49:40 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDunU-0007Ue-4m;
	Thu, 13 Feb 2014 11:49:40 +0000
Message-ID: <52FCB149.4050305@eu.citrix.com>
Date: Thu, 13 Feb 2014 11:49:29 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>, <george.dunlap@citrix.com>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
	<20140211143346.GE10482@deinos.phlegethon.org>
	<20140213112419.GB82703@deinos.phlegethon.org>
	<52FCB099.7000902@eu.citrix.com>
In-Reply-To: <52FCB099.7000902@eu.citrix.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	keir@xen.org, Ian.Jackson@eu.citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use -nostdinc flags with CLANG
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/2014 11:46 AM, George Dunlap wrote:
> On 02/13/2014 11:24 AM, Tim Deegan wrote:
>> George: ping.
>>
>> At 15:33 +0100 on 11 Feb (1392129226), Tim Deegan wrote:
>>> At 14:24 +0000 on 11 Feb (1392125052), Julien Grall wrote:
>>>> If it's possible, I'd like this patch to go into Xen 4.4, to fix the build
>>>> with official versions of clang (up to 3.4).
>>>>
>>>> Clang 3.5 is still under development, so I don't think it's 
>>>> important to
>>>> have support for it in Xen 4.4.
>>> Fair enough.  In that case it needs a release ack from George.  It:
>>>   - fixes a compile issue on some versions of clang;
>>>   - might cause a regression with other compilers, but the regression
>>>     is likely to be obvious (i.e. a compile-time failure).
>
> So the main risk would be if stdarg.h contained something like the 
> "__GNUC_PREREQ__(4, 5)" #ifdef-ery that we missed.

Sorry, I realized the grammar was ambiguous here.  For the record, I 
meant that the risk would be a system whose stdarg.h contained something 
of that type that our own copy didn't.

-G

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:25:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvM9-0001Vb-CB; Thu, 13 Feb 2014 12:25:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvM7-0001VW-Sp
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:25:28 +0000
Received: from [193.109.254.147:25347] by server-14.bemta-14.messagelabs.com
	id 52/DB-29228-7B9BCF25; Thu, 13 Feb 2014 12:25:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392294325!4056680!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23854 invoked from network); 13 Feb 2014 12:25:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:25:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102205640"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 12:25:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 07:25:24 -0500
Message-ID: <1392294323.27366.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 12:25:23 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [GIT PULL OSSTEST] arm fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

Since I'm away tomorrow and Monday, please could you merge the following
after 3ac3817762d1a "xl: suppress suspend/resume functions on platforms
which do not support it." has made it through the xen.git push gate. You
have acked both parts.

If 3ac38 makes it through the push gate before 5pm I'll push this myself
(given the number of stable-branch tests going on today, this looks
unlikely).

With these I was able to run test-armhf-armhf-xl to completion locally. 

Thanks,
Ian.


The following changes since commit 4ca7b8955a472a6c7d673bb0e8fa29ce6cd6e217:

  ts-guests-nbd-mirror: set "oldstyle=true" (2014-02-12 14:22:47 +0000)

are available in the git repository at:

  git://xenbits.xen.org/people/ianc/osstest.git 2014-02-13-marilith

for you to fetch changes up to 01c2f5ff6dd2ddce55916a441e61dff0e9ec42ea:

  Do not attempt migration tests if the platform doesn't support it (2014-02-13 11:39:16 +0000)

----------------------------------------------------------------
Ian Campbell (2):
      Configure the Calxeda fabric on host boot
      Do not attempt migration tests if the platform doesn't support it

 Osstest/CXFabric.pm      | 84 ++++++++++++++++++++++++++++++++++++++++++++++++
 Osstest/TestSupport.pm   | 21 ++++++++----
 sg-run-job               |  8 ++++-
 ts-migrate-support-check | 35 ++++++++++++++++++++
 ts-xen-install           |  3 ++
 5 files changed, 143 insertions(+), 8 deletions(-)
 create mode 100644 Osstest/CXFabric.pm
 create mode 100755 ts-migrate-support-check



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:36:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:36:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvWq-0001sv-8g; Thu, 13 Feb 2014 12:36:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDvWo-0001sq-Nx
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 12:36:31 +0000
Received: from [85.158.139.211:46471] by server-1.bemta-5.messagelabs.com id
	8B/FF-12859-D4CBCF25; Thu, 13 Feb 2014 12:36:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392294987!3668992!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26519 invoked from network); 13 Feb 2014 12:36:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:36:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102208979"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 12:36:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:36:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDvWk-0003Ap-AB;
	Thu, 13 Feb 2014 12:36:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDvWi-0000U0-4D;
	Thu, 13 Feb 2014 12:36:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24862-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 12:36:24 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24862: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24862 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24862/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore           fail pass in 24860

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:37:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:37:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvXi-0001vq-Mq; Thu, 13 Feb 2014 12:37:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvXh-0001ve-BH
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:37:25 +0000
Received: from [85.158.143.35:4523] by server-2.bemta-4.messagelabs.com id
	6E/20-10891-48CBCF25; Thu, 13 Feb 2014 12:37:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392295042!5399767!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22976 invoked from network); 13 Feb 2014 12:37:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:37:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426643"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:37:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 07:37:21 -0500
Message-ID: <1392295040.31985.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:37:20 +0000
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH for-4.5 v2 0/8] xen: arm: map normal memory as
 inner shareable, reduce scope of various barriers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently Xen maps all RAM as Outer-Shareable, since that seemed like
the most conservative option early on when we didn't really know what
Inner- vs. Outer-Shareable meant. However we have long suspected that
actually Inner-Shareable would be the correct type to use.

After reading the docs many times, getting more confused each time, I
finally got a reasonable explanation from a man (and a dog) down the
pub: Inner-Shareable == the processors in an SMP system, while
Outer-Shareable == devices. (NB: Not a random man, he knew what he was
talking about...). With that in mind switch all of Xen's memory mapping,
page table walks and an appropriate subset of the barriers to be inner
shareable.

In addition I have switched barriers to use the correct read/write/any
variants for their types. Note that I have only tackled the generic
mb/rmb/wmb and smp_* barriers (mainly used by common code) here. There
are also quite a few open-coded full-system dsb's in the arch code which
I will look at another time.

v1 of this was back in June[0], and I deferred it due to the 4.3 freeze,
so given that we are now frozen for 4.4 this is clearly 4.5 material.

I've slightly forgotten what was changed since then, but apart from
rebasing the highlights are:

      * dropped all the bogus tlb flush stuff, which was wrong, or at
        best confused.
      * mfn_to_p2m_entry sets shareability based on mattr, dev mappings
        remain outer
      * some clarifications to the comments.

[0] http://lists.xen.org/archives/html/xen-devel/2013-06/msg02969.html
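The mattr-based shareability choice in the second bullet amounts to a small
rule: normal RAM mappings become Inner Shareable while device mappings stay
Outer Shareable. A minimal sketch (the MATTR_* stand-ins are placeholders,
not the values from the Xen headers):

```python
# Hedged sketch of the mattr -> shareability selection: normal memory is
# mapped Inner Shareable, device mappings remain Outer Shareable.
LPAE_SH_OUTER = 0b10   # SH encodings per the LPAE descriptor format
LPAE_SH_INNER = 0b11
MATTR_DEV, MATTR_MEM = object(), object()  # stand-ins for the real values

def p2m_shareability(mattr):
    if mattr is MATTR_MEM:
        return LPAE_SH_INNER
    if mattr is MATTR_DEV:
        return LPAE_SH_OUTER
    raise ValueError("unknown memory attribute")  # the patch BUG()s here

assert p2m_shareability(MATTR_MEM) == LPAE_SH_INNER
assert p2m_shareability(MATTR_DEV) == LPAE_SH_OUTER
```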


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYV-00022E-4m; Thu, 13 Feb 2014 12:38:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYT-00021y-Ab
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:13 +0000
Received: from [85.158.137.68:46891] by server-9.bemta-3.messagelabs.com id
	C8/3D-10184-4BCBCF25; Thu, 13 Feb 2014 12:38:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392295089!374482!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22434 invoked from network); 13 Feb 2014 12:38:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102209211"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:08 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYO-0003Bw-Mk;
	Thu, 13 Feb 2014 12:38:08 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:01 +0000
Message-ID: <1392295088-24219-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 1/8] xen: arm: map memory as inner
	shareable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The inner shareable domain contains all SMP processors, including different
clusters (e.g. big.LITTLE). Therefore this is the correct thing to use for Xen
memory mappings. The outer shareable domain is for devices on busses which
are coherent and barrier-aware (e.g. AMBA4 AXI with ACE), while the system
domain is for things behind bridges which are not.

One wrinkle is that Normal memory with attributes Inner Non-cacheable, Outer
Non-cacheable (which we call BUFFERABLE) must be mapped Outer Shareable on ARM
v7. Therefore change the prototype of mfn_to_xen_entry to take the attribute
index so we can DTRT. On ARMv8 the sharability is ignored and considered to
always be Outer Shareable.

Don't adjust the barriers, flushes, etc.; those remain as they were (which is
more than is now required). I'll change those in a later patch.

Many thanks to Leif for explaining the difference between Inner- and
Outer-Shareable in words of two or fewer syllables; I hope I've replicated
that explanation properly above!

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2:
     split dsb sy changes into a separate patch

     comment clarifications from Leif.

     mfn_to_p2m_entry sets shareability based on mattr, dev mappings remain
     outer.
---
 xen/arch/arm/arm32/head.S  |    8 ++++----
 xen/arch/arm/arm64/head.S  |    8 ++++----
 xen/arch/arm/mm.c          |   34 +++++++++++++++++++---------------
 xen/arch/arm/p2m.c         |   18 ++++++++++++++++--
 xen/include/asm-arm/page.h |   37 ++++++++++++++++++++++++++++++++++---
 5 files changed, 77 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..60d5cd6 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -26,8 +26,8 @@
 
 #define ZIMAGE_MAGIC_NUMBER 0x016f2818
 
-#define PT_PT     0xe7f /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_MEM    0xe7d /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=0 P=1 */
+#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
@@ -236,10 +236,10 @@ cpu_init_done:
         mcr   CP32(r1, HMAIR1)
 
         /* Set up the HTCR:
-         * PT walks use Outer-Shareable accesses,
+         * PT walks use Inner-Shareable accesses,
          * PT walks are write-back, write-allocate in both cache levels,
          * Full 32-bit address space goes through this table. */
-        ldr   r0, =0x80002500
+        ldr   r0, =0x80003500
         mcr   CP32(r0, HTCR)
 
         /* Set up the HSCTLR:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 31afdd0..bf9bb58 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -25,8 +25,8 @@
 #include <asm/asm_defns.h>
 #include <asm/early_printk.h>
 
-#define PT_PT     0xe7f /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_MEM    0xe7d /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=0 P=1 */
+#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
@@ -227,10 +227,10 @@ skip_bss:
         /* Set up the HTCR:
          * PASize -- 40 bits / 1TB
          * Top byte is used
-         * PT walks use Outer-Shareable accesses,
+         * PT walks use Inner-Shareable accesses,
          * PT walks are write-back, write-allocate in both cache levels,
          * Full 64-bit address space goes through this table. */
-        ldr   x0, =0x80822500
+        ldr   x0, =0x80823500
         msr   tcr_el2, x0
 
         /* Set up the SCTLR_EL2:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 308a798..f608020 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -212,9 +212,8 @@ void dump_hyp_walk(vaddr_t addr)
 /* Map a 4k page in a fixmap entry */
 void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
 {
-    lpae_t pte = mfn_to_xen_entry(mfn);
+    lpae_t pte = mfn_to_xen_entry(mfn, attributes);
     pte.pt.table = 1; /* 4k mappings always have this bit set */
-    pte.pt.ai = attributes;
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
     flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
@@ -270,7 +269,7 @@ void *map_domain_page(unsigned long mfn)
         else if ( map[slot].pt.avail == 0 )
         {
             /* Commandeer this 2MB slot */
-            pte = mfn_to_xen_entry(slot_mfn);
+            pte = mfn_to_xen_entry(slot_mfn, WRITEALLOC);
             pte.pt.avail = 1;
             write_pte(map + slot, pte);
             break;
@@ -401,7 +400,7 @@ static inline lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
     unsigned long mfn = ma >> PAGE_SHIFT;
-    return mfn_to_xen_entry(mfn);
+    return mfn_to_xen_entry(mfn, WRITEALLOC);
 }
 
 void __init remove_early_mappings(void)
@@ -422,6 +421,12 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     lpae_t pte, *p;
     int i;
 
+    /* Map the destination in the boot misc area. */
+    dest_va = BOOT_RELOC_VIRT_START;
+    pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT, WRITEALLOC);
+    write_pte(xen_second + second_table_offset(dest_va), pte);
+    flush_xen_data_tlb_range_va(dest_va, SECOND_SIZE);
+
     /* Calculate virt-to-phys offset for the new location */
     phys_offset = xen_paddr - (unsigned long) _start;
 
@@ -455,7 +460,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Initialise xen second level entries ... */
     /* ... Xen's text etc */
 
-    pte = mfn_to_xen_entry(xen_paddr>>PAGE_SHIFT);
+    pte = mfn_to_xen_entry(xen_paddr>>PAGE_SHIFT, WRITEALLOC);
     pte.pt.xn = 0;/* Contains our text mapping! */
     xen_second[second_table_offset(XEN_VIRT_START)] = pte;
 
@@ -470,7 +475,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 
     /* Map the destination in the boot misc area. */
     dest_va = BOOT_RELOC_VIRT_START;
-    pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT);
+    pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT, WRITEALLOC);
     write_pte(boot_second + second_table_offset(dest_va), pte);
     flush_xen_data_tlb_range_va(dest_va, SECOND_SIZE);
 #ifdef CONFIG_ARM_64
@@ -499,7 +504,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
         unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT);
         if ( !is_kernel(va) )
             break;
-        pte = mfn_to_xen_entry(mfn);
+        pte = mfn_to_xen_entry(mfn, WRITEALLOC);
         pte.pt.table = 1; /* 4k mappings always have this bit set */
         if ( is_kernel_text(va) || is_kernel_inittext(va) )
         {
@@ -569,7 +574,7 @@ int init_secondary_pagetables(int cpu)
      * domheap mapping pages. */
     for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
     {
-        pte = mfn_to_xen_entry(virt_to_mfn(domheap+i*LPAE_ENTRIES));
+        pte = mfn_to_xen_entry(virt_to_mfn(domheap+i*LPAE_ENTRIES), WRITEALLOC);
         pte.pt.table = 1;
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
@@ -614,7 +619,7 @@ static void __init create_32mb_mappings(lpae_t *second,
 
     count = nr_mfns / LPAE_ENTRIES;
     p = second + second_linear_offset(virt_offset);
-    pte = mfn_to_xen_entry(base_mfn);
+    pte = mfn_to_xen_entry(base_mfn, WRITEALLOC);
     pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
     for ( i = 0; i < count; i++ )
     {
@@ -686,13 +691,13 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
         else
         {
             unsigned long first_mfn = alloc_boot_pages(1, 1);
-            pte = mfn_to_xen_entry(first_mfn);
+            pte = mfn_to_xen_entry(first_mfn, WRITEALLOC);
             pte.pt.table = 1;
             write_pte(p, pte);
             first = mfn_to_virt(first_mfn);
         }
 
-        pte = mfn_to_xen_entry(base_mfn);
+        pte = mfn_to_xen_entry(base_mfn, WRITEALLOC);
         /* TODO: Set pte.pt.contig when appropriate. */
         write_pte(&first[first_table_offset(vaddr)], pte);
 
@@ -728,7 +733,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     second = mfn_to_virt(second_base);
     for ( i = 0; i < nr_second; i++ )
     {
-        pte = mfn_to_xen_entry(second_base + i);
+        pte = mfn_to_xen_entry(second_base + i, WRITEALLOC);
         pte.pt.table = 1;
         write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
     }
@@ -780,7 +785,7 @@ static int create_xen_table(lpae_t *entry)
     if ( p == NULL )
         return -ENOMEM;
     clear_page(p);
-    pte = mfn_to_xen_entry(virt_to_mfn(p));
+    pte = mfn_to_xen_entry(virt_to_mfn(p), WRITEALLOC);
     pte.pt.table = 1;
     write_pte(entry, pte);
     return 0;
@@ -826,9 +831,8 @@ static int create_xen_entries(enum xenmap_operation op,
                            addr, mfn);
                     return -EINVAL;
                 }
-                pte = mfn_to_xen_entry(mfn);
+                pte = mfn_to_xen_entry(mfn, ai);
                 pte.pt.table = 1;
-                pte.pt.ai = ai;
                 write_pte(&third[third_table_offset(addr)], pte);
                 break;
             case REMOVE:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d00c882..b9d8ca6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -145,10 +145,10 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
                                p2m_type_t t)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
-    /* xn and write bit will be defined in the switch */
+    /* sh, xn and write bit will be defined in the following switches
+     * based on mattr and t. */
     lpae_t e = (lpae_t) {
         .p2m.af = 1,
-        .p2m.sh = LPAE_SH_OUTER,
         .p2m.read = 1,
         .p2m.mattr = mattr,
         .p2m.table = 1,
@@ -158,6 +158,20 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
 
     BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
 
+    switch (mattr)
+    {
+    case MATTR_MEM:
+        e.p2m.sh = LPAE_SH_INNER;
+        break;
+
+    case MATTR_DEV:
+        e.p2m.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        BUG();
+        break;
+    }
+
     switch (t)
     {
     case p2m_ram_rw:
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index e00be9e..6dc7fa6 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -185,7 +185,7 @@ typedef union {
 /* Standard entry type that we'll use to build Xen's own pagetables.
  * We put the same permissions at every level, because they're ignored
  * by the walker in non-leaf entries. */
-static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
+static inline lpae_t mfn_to_xen_entry(unsigned long mfn, unsigned attr)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     lpae_t e = (lpae_t) {
@@ -193,10 +193,9 @@ static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
             .xn = 1,              /* No need to execute outside .text */
             .ng = 1,              /* Makes TLB flushes easier */
             .af = 1,              /* No need for access tracking */
-            .sh = LPAE_SH_OUTER,  /* Xen mappings are globally coherent */
             .ns = 1,              /* Hyp mode is in the non-secure world */
             .user = 1,            /* See below */
-            .ai = WRITEALLOC,
+            .ai = attr,
             .table = 0,           /* Set to 1 for links and 4k maps */
             .valid = 1,           /* Mappings are present */
         }};;
@@ -205,6 +204,38 @@ static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
      * pagetables un User mode it's OK.  If this changes, remember
      * to update the hard-coded values in head.S too */
 
+    switch ( attr )
+    {
+    case BUFFERABLE:
+        /*
+         * ARM ARM: Overlaying the shareability attribute (DDI
+         * 0406C.b B3-1376 to 1377)
+         *
+         * A memory region with a resultant memory type attribute of Normal,
+         * and a resultant cacheability attribute of Inner Non-cacheable,
+         * Outer Non-cacheable, must have a resultant shareability attribute
+         * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as Outer
+         * Shareable for Normal Inner Non-cacheable, Outer Non-cacheable.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    case UNCACHED:
+    case DEV_SHARED:
+        /* Shareability is ignored for non-Normal memory, Outer is as
+         * good as anything.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as Outer
+         * Shareable for any device memory type.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        e.pt.sh = LPAE_SH_INNER;  /* Xen mappings are SMP coherent */
+        break;
+    }
+
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYV-00022P-HK; Thu, 13 Feb 2014 12:38:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYT-000221-So
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:13 +0000
Received: from [85.158.137.68:19105] by server-17.bemta-3.messagelabs.com id
	A9/DB-22569-5BCBCF25; Thu, 13 Feb 2014 12:38:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392295089!374482!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22622 invoked from network); 13 Feb 2014 12:38:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:12 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102209212"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:08 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYO-0003Bw-RA;
	Thu, 13 Feb 2014 12:38:08 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:02 +0000
Message-ID: <1392295088-24219-2-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 2/8] xen: arm: Only upgrade guest
	barriers to inner shareable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/traps.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 21c7b26..72fd620 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -75,7 +75,7 @@ void __cpuinit init_traps(void)
     WRITE_SYSREG((vaddr_t)hyp_traps_vector, VBAR_EL2);
 
     /* Setup hypervisor traps */
-    WRITE_SYSREG(HCR_PTW|HCR_BSU_OUTER|HCR_AMO|HCR_IMO|HCR_VM|HCR_TWI|HCR_TSC|
+    WRITE_SYSREG(HCR_PTW|HCR_BSU_INNER|HCR_AMO|HCR_IMO|HCR_VM|HCR_TWI|HCR_TSC|
                  HCR_TAC, HCR_EL2);
     isb();
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYV-00022E-4m; Thu, 13 Feb 2014 12:38:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYT-00021y-Ab
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:13 +0000
Received: from [85.158.137.68:46891] by server-9.bemta-3.messagelabs.com id
	C8/3D-10184-4BCBCF25; Thu, 13 Feb 2014 12:38:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392295089!374482!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22434 invoked from network); 13 Feb 2014 12:38:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:11 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102209211"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:08 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYO-0003Bw-Mk;
	Thu, 13 Feb 2014 12:38:08 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:01 +0000
Message-ID: <1392295088-24219-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 1/8] xen: arm: map memory as inner
	shareable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The inner shareable domain contains all SMP processors, including different
clusters (e.g. big.LITTLE). Therefore it is the correct domain to use for Xen
memory mappings. The outer shareable domain is for devices on buses which are
coherent and barrier-aware (e.g. AMBA4 AXI with ACE), while the system domain
is for things behind bridges which are not.

One wrinkle is that Normal memory with attributes Inner Non-cacheable, Outer
Non-cacheable (which we call BUFFERABLE) must be mapped Outer Shareable on
ARMv7. Therefore change the prototype of mfn_to_xen_entry to take the
attribute index so we can do the right thing. On ARMv8 the shareability is
ignored and always treated as Outer Shareable.

Don't adjust the barriers, flushes, etc.; those remain as they were (which is
stronger than is now required).  I'll change those in a later patch.

Many thanks to Leif for explaining the difference between Inner and Outer
Shareable in words of two or fewer syllables; I hope I've replicated that
explanation properly above!

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
v2:
     split dsb sy changes into a separate patch

     comment clarifications from Leif.

     mfn_to_p2m_entry sets shareability based on mattr, dev mappings remain
     outer.
---
 xen/arch/arm/arm32/head.S  |    8 ++++----
 xen/arch/arm/arm64/head.S  |    8 ++++----
 xen/arch/arm/mm.c          |   34 +++++++++++++++++++---------------
 xen/arch/arm/p2m.c         |   18 ++++++++++++++++--
 xen/include/asm-arm/page.h |   37 ++++++++++++++++++++++++++++++++++---
 5 files changed, 77 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..60d5cd6 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -26,8 +26,8 @@
 
 #define ZIMAGE_MAGIC_NUMBER 0x016f2818
 
-#define PT_PT     0xe7f /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_MEM    0xe7d /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=0 P=1 */
+#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
@@ -236,10 +236,10 @@ cpu_init_done:
         mcr   CP32(r1, HMAIR1)
 
         /* Set up the HTCR:
-         * PT walks use Outer-Shareable accesses,
+         * PT walks use Inner-Shareable accesses,
          * PT walks are write-back, write-allocate in both cache levels,
          * Full 32-bit address space goes through this table. */
-        ldr   r0, =0x80002500
+        ldr   r0, =0x80003500
         mcr   CP32(r0, HTCR)
 
         /* Set up the HSCTLR:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 31afdd0..bf9bb58 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -25,8 +25,8 @@
 #include <asm/asm_defns.h>
 #include <asm/early_printk.h>
 
-#define PT_PT     0xe7f /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=1 P=1 */
-#define PT_MEM    0xe7d /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=111 T=0 P=1 */
+#define PT_PT     0xf7f /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=1 P=1 */
+#define PT_MEM    0xf7d /* nG=1 AF=1 SH=11 AP=01 NS=1 ATTR=111 T=0 P=1 */
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
@@ -227,10 +227,10 @@ skip_bss:
         /* Set up the HTCR:
          * PASize -- 40 bits / 1TB
          * Top byte is used
-         * PT walks use Outer-Shareable accesses,
+         * PT walks use Inner-Shareable accesses,
          * PT walks are write-back, write-allocate in both cache levels,
          * Full 64-bit address space goes through this table. */
-        ldr   x0, =0x80822500
+        ldr   x0, =0x80823500
         msr   tcr_el2, x0
 
         /* Set up the SCTLR_EL2:
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 308a798..f608020 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -212,9 +212,8 @@ void dump_hyp_walk(vaddr_t addr)
 /* Map a 4k page in a fixmap entry */
 void set_fixmap(unsigned map, unsigned long mfn, unsigned attributes)
 {
-    lpae_t pte = mfn_to_xen_entry(mfn);
+    lpae_t pte = mfn_to_xen_entry(mfn, attributes);
     pte.pt.table = 1; /* 4k mappings always have this bit set */
-    pte.pt.ai = attributes;
     pte.pt.xn = 1;
     write_pte(xen_fixmap + third_table_offset(FIXMAP_ADDR(map)), pte);
     flush_xen_data_tlb_range_va(FIXMAP_ADDR(map), PAGE_SIZE);
@@ -270,7 +269,7 @@ void *map_domain_page(unsigned long mfn)
         else if ( map[slot].pt.avail == 0 )
         {
             /* Commandeer this 2MB slot */
-            pte = mfn_to_xen_entry(slot_mfn);
+            pte = mfn_to_xen_entry(slot_mfn, WRITEALLOC);
             pte.pt.avail = 1;
             write_pte(map + slot, pte);
             break;
@@ -401,7 +400,7 @@ static inline lpae_t pte_of_xenaddr(vaddr_t va)
 {
     paddr_t ma = va + phys_offset;
     unsigned long mfn = ma >> PAGE_SHIFT;
-    return mfn_to_xen_entry(mfn);
+    return mfn_to_xen_entry(mfn, WRITEALLOC);
 }
 
 void __init remove_early_mappings(void)
@@ -422,6 +421,12 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     lpae_t pte, *p;
     int i;
 
+    /* Map the destination in the boot misc area. */
+    dest_va = BOOT_RELOC_VIRT_START;
+    pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT, WRITEALLOC);
+    write_pte(xen_second + second_table_offset(dest_va), pte);
+    flush_xen_data_tlb_range_va(dest_va, SECOND_SIZE);
+
     /* Calculate virt-to-phys offset for the new location */
     phys_offset = xen_paddr - (unsigned long) _start;
 
@@ -455,7 +460,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
     /* Initialise xen second level entries ... */
     /* ... Xen's text etc */
 
-    pte = mfn_to_xen_entry(xen_paddr>>PAGE_SHIFT);
+    pte = mfn_to_xen_entry(xen_paddr>>PAGE_SHIFT, WRITEALLOC);
     pte.pt.xn = 0;/* Contains our text mapping! */
     xen_second[second_table_offset(XEN_VIRT_START)] = pte;
 
@@ -470,7 +475,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
 
     /* Map the destination in the boot misc area. */
     dest_va = BOOT_RELOC_VIRT_START;
-    pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT);
+    pte = mfn_to_xen_entry(xen_paddr >> PAGE_SHIFT, WRITEALLOC);
     write_pte(boot_second + second_table_offset(dest_va), pte);
     flush_xen_data_tlb_range_va(dest_va, SECOND_SIZE);
 #ifdef CONFIG_ARM_64
@@ -499,7 +504,7 @@ void __init setup_pagetables(unsigned long boot_phys_offset, paddr_t xen_paddr)
         unsigned long va = XEN_VIRT_START + (i << PAGE_SHIFT);
         if ( !is_kernel(va) )
             break;
-        pte = mfn_to_xen_entry(mfn);
+        pte = mfn_to_xen_entry(mfn, WRITEALLOC);
         pte.pt.table = 1; /* 4k mappings always have this bit set */
         if ( is_kernel_text(va) || is_kernel_inittext(va) )
         {
@@ -569,7 +574,7 @@ int init_secondary_pagetables(int cpu)
      * domheap mapping pages. */
     for ( i = 0; i < DOMHEAP_SECOND_PAGES; i++ )
     {
-        pte = mfn_to_xen_entry(virt_to_mfn(domheap+i*LPAE_ENTRIES));
+        pte = mfn_to_xen_entry(virt_to_mfn(domheap+i*LPAE_ENTRIES), WRITEALLOC);
         pte.pt.table = 1;
         write_pte(&first[first_table_offset(DOMHEAP_VIRT_START+i*FIRST_SIZE)], pte);
     }
@@ -614,7 +619,7 @@ static void __init create_32mb_mappings(lpae_t *second,
 
     count = nr_mfns / LPAE_ENTRIES;
     p = second + second_linear_offset(virt_offset);
-    pte = mfn_to_xen_entry(base_mfn);
+    pte = mfn_to_xen_entry(base_mfn, WRITEALLOC);
     pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
     for ( i = 0; i < count; i++ )
     {
@@ -686,13 +691,13 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
         else
         {
             unsigned long first_mfn = alloc_boot_pages(1, 1);
-            pte = mfn_to_xen_entry(first_mfn);
+            pte = mfn_to_xen_entry(first_mfn, WRITEALLOC);
             pte.pt.table = 1;
             write_pte(p, pte);
             first = mfn_to_virt(first_mfn);
         }
 
-        pte = mfn_to_xen_entry(base_mfn);
+        pte = mfn_to_xen_entry(base_mfn, WRITEALLOC);
         /* TODO: Set pte.pt.contig when appropriate. */
         write_pte(&first[first_table_offset(vaddr)], pte);
 
@@ -728,7 +733,7 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
     second = mfn_to_virt(second_base);
     for ( i = 0; i < nr_second; i++ )
     {
-        pte = mfn_to_xen_entry(second_base + i);
+        pte = mfn_to_xen_entry(second_base + i, WRITEALLOC);
         pte.pt.table = 1;
         write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
     }
@@ -780,7 +785,7 @@ static int create_xen_table(lpae_t *entry)
     if ( p == NULL )
         return -ENOMEM;
     clear_page(p);
-    pte = mfn_to_xen_entry(virt_to_mfn(p));
+    pte = mfn_to_xen_entry(virt_to_mfn(p), WRITEALLOC);
     pte.pt.table = 1;
     write_pte(entry, pte);
     return 0;
@@ -826,9 +831,8 @@ static int create_xen_entries(enum xenmap_operation op,
                            addr, mfn);
                     return -EINVAL;
                 }
-                pte = mfn_to_xen_entry(mfn);
+                pte = mfn_to_xen_entry(mfn, ai);
                 pte.pt.table = 1;
-                pte.pt.ai = ai;
                 write_pte(&third[third_table_offset(addr)], pte);
                 break;
             case REMOVE:
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d00c882..b9d8ca6 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -145,10 +145,10 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
                                p2m_type_t t)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
-    /* xn and write bit will be defined in the switch */
+    /* sh, xn and write bit will be defined in the following switches
+     * based on mattr and t. */
     lpae_t e = (lpae_t) {
         .p2m.af = 1,
-        .p2m.sh = LPAE_SH_OUTER,
         .p2m.read = 1,
         .p2m.mattr = mattr,
         .p2m.table = 1,
@@ -158,6 +158,20 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
 
     BUILD_BUG_ON(p2m_max_real_type > (1 << 4));
 
+    switch ( mattr )
+    {
+    case MATTR_MEM:
+        e.p2m.sh = LPAE_SH_INNER;
+        break;
+
+    case MATTR_DEV:
+        e.p2m.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        BUG();
+        break;
+    }
+
     switch (t)
     {
     case p2m_ram_rw:
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index e00be9e..6dc7fa6 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -185,7 +185,7 @@ typedef union {
 /* Standard entry type that we'll use to build Xen's own pagetables.
  * We put the same permissions at every level, because they're ignored
  * by the walker in non-leaf entries. */
-static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
+static inline lpae_t mfn_to_xen_entry(unsigned long mfn, unsigned attr)
 {
     paddr_t pa = ((paddr_t) mfn) << PAGE_SHIFT;
     lpae_t e = (lpae_t) {
@@ -193,10 +193,9 @@ static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
             .xn = 1,              /* No need to execute outside .text */
             .ng = 1,              /* Makes TLB flushes easier */
             .af = 1,              /* No need for access tracking */
-            .sh = LPAE_SH_OUTER,  /* Xen mappings are globally coherent */
             .ns = 1,              /* Hyp mode is in the non-secure world */
             .user = 1,            /* See below */
-            .ai = WRITEALLOC,
+            .ai = attr,
             .table = 0,           /* Set to 1 for links and 4k maps */
             .valid = 1,           /* Mappings are present */
         }};
@@ -205,6 +204,38 @@ static inline lpae_t mfn_to_xen_entry(unsigned long mfn)
      * pagetables in User mode it's OK.  If this changes, remember
      * to update the hard-coded values in head.S too */
 
+    switch ( attr )
+    {
+    case BUFFERABLE:
+        /*
+         * ARM ARM: Overlaying the shareability attribute (DDI
+         * 0406C.b B3-1376 to 1377)
+         *
+         * A memory region with a resultant memory type attribute of Normal,
+         * and a resultant cacheability attribute of Inner Non-cacheable,
+         * Outer Non-cacheable, must have a resultant shareability attribute
+         * of Outer Shareable, otherwise shareability is UNPREDICTABLE.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as Outer
+         * Shareable for Normal Inner Non-cacheable, Outer Non-cacheable.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    case UNCACHED:
+    case DEV_SHARED:
+        /* Shareability is ignored for non-Normal memory, Outer is as
+         * good as anything.
+         *
+         * On ARMv8 shareability is ignored and explicitly treated as Outer
+         * Shareable for any device memory type.
+         */
+        e.pt.sh = LPAE_SH_OUTER;
+        break;
+    default:
+        e.pt.sh = LPAE_SH_INNER;  /* Xen mappings are SMP coherent */
+        break;
+    }
+
     ASSERT(!(pa & ~PAGE_MASK));
     ASSERT(!(pa & ~PADDR_MASK));
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYh-00026a-3w; Thu, 13 Feb 2014 12:38:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYf-00025p-Of
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:26 +0000
Received: from [85.158.143.35:21452] by server-1.bemta-4.messagelabs.com id
	08/E7-31661-1CCBCF25; Thu, 13 Feb 2014 12:38:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392295102!5414705!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19213 invoked from network); 13 Feb 2014 12:38:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426780"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYP-0003Bw-7Y;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:06 +0000
Message-ID: <1392295088-24219-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 6/8] xen: arm: add scope to dsb and
	dmb macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Everywhere currently passes "sy" (full system), so there is no functional change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/domain.c                |    2 +-
 xen/arch/arm/gic.c                   |   10 +++++-----
 xen/arch/arm/mm.c                    |    4 ++--
 xen/arch/arm/platforms/vexpress.c    |    6 +++---
 xen/arch/arm/smpboot.c               |    2 +-
 xen/arch/arm/time.c                  |    2 +-
 xen/drivers/video/arm_hdlcd.c        |    2 +-
 xen/include/asm-arm/arm32/flushtlb.h |   16 ++++++++--------
 xen/include/asm-arm/arm32/page.h     |    4 ++--
 xen/include/asm-arm/arm64/page.h     |    4 ++--
 xen/include/asm-arm/page.h           |    4 ++--
 xen/include/asm-arm/system.h         |   16 ++++++++--------
 12 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8f20fdf..b27f32f 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -47,7 +47,7 @@ void idle_loop(void)
         local_irq_disable();
         if ( cpu_is_haltable(smp_processor_id()) )
         {
-            dsb();
+            dsb(sy);
             wfi();
         }
         local_irq_enable();
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 13bbf48..1467b69 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -137,7 +137,7 @@ static void gic_irq_enable(struct irq_desc *desc)
     spin_lock_irqsave(&desc->lock, flags);
     spin_lock(&gic.lock);
     desc->status &= ~IRQ_DISABLED;
-    dsb();
+    dsb(sy);
     /* Enable routing */
     GICD[GICD_ISENABLER + irq / 32] = (1u << (irq % 32));
     spin_unlock(&gic.lock);
@@ -478,7 +478,7 @@ void send_SGI_mask(const cpumask_t *cpumask, enum gic_sgi sgi)
     cpumask_and(&online_mask, cpumask, &cpu_online_map);
     mask = gic_cpu_mask(&online_mask);
 
-    dsb();
+    dsb(sy);
 
     GICD[GICD_SGIR] = GICD_SGI_TARGET_LIST
         | (mask<<GICD_SGI_TARGET_SHIFT)
@@ -495,7 +495,7 @@ void send_SGI_self(enum gic_sgi sgi)
 {
     ASSERT(sgi < 16); /* There are only 16 SGIs */
 
-    dsb();
+    dsb(sy);
 
     GICD[GICD_SGIR] = GICD_SGI_TARGET_SELF
         | sgi;
@@ -505,7 +505,7 @@ void send_SGI_allbutself(enum gic_sgi sgi)
 {
    ASSERT(sgi < 16); /* There are only 16 SGIs */
 
-   dsb();
+   dsb(sy);
 
    GICD[GICD_SGIR] = GICD_SGI_TARGET_OTHERS
        | sgi;
@@ -589,7 +589,7 @@ static int __setup_irq(struct irq_desc *desc, unsigned int irq,
         return -EBUSY;
 
     desc->action  = new;
-    dsb();
+    dsb(sy);
 
     return 0;
 }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index ff19e39..20dbb90 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -345,10 +345,10 @@ void flush_page_to_ram(unsigned long mfn)
 {
     void *p, *v = map_domain_page(mfn);
 
-    dsb();           /* So the CPU issues all writes to the range */
+    dsb(sy);         /* So the CPU issues all writes to the range */
     for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
         asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
-    dsb();           /* So we know the flushes happen before continuing */
+    dsb(sy);         /* So we know the flushes happen before continuing */
 
     unmap_domain_page(v);
 }
diff --git a/xen/arch/arm/platforms/vexpress.c b/xen/arch/arm/platforms/vexpress.c
index 6132056..8e6a4ea 100644
--- a/xen/arch/arm/platforms/vexpress.c
+++ b/xen/arch/arm/platforms/vexpress.c
@@ -48,7 +48,7 @@ static inline int vexpress_ctrl_start(uint32_t *syscfg, int write,
     /* wait for complete flag to be set */
     do {
         stat = syscfg[V2M_SYS_CFGSTAT/4];
-        dsb();
+        dsb(sy);
     } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
 
     /* check error status and return error flag if set */
@@ -113,10 +113,10 @@ static void vexpress_reset(void)
 
     /* switch to slow mode */
     writel(0x3, sp810);
-    dsb(); isb();
+    dsb(sy); isb();
     /* writing any value to SCSYSSTAT reg will reset the system */
     writel(0x1, sp810 + 4);
-    dsb(); isb();
+    dsb(sy); isb();
 
     iounmap(sp810);
 }
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index ce68d34..7f28b68 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -341,7 +341,7 @@ void stop_cpu(void)
     local_irq_disable();
     cpu_is_dead = 1;
     /* Make sure the write happens before we sleep forever */
-    dsb();
+    dsb(sy);
     isb();
     while ( 1 )
         wfi();
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 81e3e28..93d957a 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -260,7 +260,7 @@ void udelay(unsigned long usecs)
     s_time_t deadline = get_s_time() + 1000 * (s_time_t) usecs;
     while ( get_s_time() - deadline < 0 )
         ;
-    dsb();
+    dsb(sy);
     isb();
 }
 
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
index 647f22c..e5ad18d 100644
--- a/xen/drivers/video/arm_hdlcd.c
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -78,7 +78,7 @@ void (*video_puts)(const char *) = vga_noop_puts;
 
 static void hdlcd_flush(void)
 {
-    dsb();
+    dsb(sy);
 }
 
 static int __init get_color_masks(const char* bpp, struct color_masks **masks)
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index 7183a07..bbcc82f 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -4,44 +4,44 @@
 /* Flush local TLBs, current VMID only */
 static inline void flush_tlb_local(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALL);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
 /* Flush inner shareable TLBs, current VMID only */
 static inline void flush_tlb(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALLIS);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
 static inline void flush_tlb_all_local(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
 static inline void flush_tlb_all(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index b8221ca..191a108 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -67,13 +67,13 @@ static inline void flush_xen_data_tlb(void)
 static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
 {
     unsigned long end = va + size;
-    dsb(); /* Ensure preceding are visible */
+    dsb(sy); /* Ensure preceding are visible */
     while ( va < end ) {
         asm volatile(STORE_CP32(0, TLBIMVAH)
                      : : "r" (va) : "memory");
         va += PAGE_SIZE;
     }
-    dsb(); /* Ensure completion of the TLB flush */
+    dsb(sy); /* Ensure completion of the TLB flush */
     isb();
 }
 
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 3352821..20b4c5a 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -60,13 +60,13 @@ static inline void flush_xen_data_tlb(void)
 static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
 {
     unsigned long end = va + size;
-    dsb(); /* Ensure preceding are visible */
+    dsb(sy); /* Ensure preceding are visible */
     while ( va < end ) {
         asm volatile("tlbi vae2, %0;"
                      : : "r" (va>>PAGE_SHIFT) : "memory");
         va += PAGE_SIZE;
     }
-    dsb(); /* Ensure completion of the TLB flush */
+    dsb(sy); /* Ensure completion of the TLB flush */
     isb();
 }
 
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 6dc7fa6..5e4678e 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -263,10 +263,10 @@ extern size_t cacheline_bytes;
 static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
 {
     void *end;
-    dsb();           /* So the CPU issues all writes to the range */
+    dsb(sy);           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
         asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
-    dsb();           /* So we know the flushes happen before continuing */
+    dsb(sy);           /* So we know the flushes happen before continuing */
 }
 
 /* Macro for flushing a single small item.  The predicate is always
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 89c61ef..e1f126a 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -13,16 +13,16 @@
 #define wfi()           asm volatile("wfi" : : : "memory")
 
 #define isb()           asm volatile("isb" : : : "memory")
-#define dsb()           asm volatile("dsb sy" : : : "memory")
-#define dmb()           asm volatile("dmb sy" : : : "memory")
+#define dsb(scope)      asm volatile("dsb " #scope : : : "memory")
+#define dmb(scope)      asm volatile("dmb " #scope : : : "memory")
 
-#define mb()            dsb()
-#define rmb()           dsb()
-#define wmb()           dsb()
+#define mb()            dsb(sy)
+#define rmb()           dsb(sy)
+#define wmb()           dsb(sy)
 
-#define smp_mb()        dmb()
-#define smp_rmb()       dmb()
-#define smp_wmb()       dmb()
+#define smp_mb()        dmb(sy)
+#define smp_rmb()       dmb(sy)
+#define smp_wmb()       dmb(sy)
 
 #define xchg(ptr,x) \
         ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYh-00026w-Gu; Thu, 13 Feb 2014 12:38:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYg-00025z-9X
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:26 +0000
Received: from [85.158.143.35:35488] by server-3.bemta-4.messagelabs.com id
	DD/34-11539-1CCBCF25; Thu, 13 Feb 2014 12:38:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392295102!5414705!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19353 invoked from network); 13 Feb 2014 12:38:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426775"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYO-0003Bw-V2;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:03 +0000
Message-ID: <1392295088-24219-3-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 3/8] xen: arm: consolidate barrier
	definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These are effectively identical on both 32- and 64-bit.

The only difference is that the implicit "sy" on 32-bit becomes explicit.


Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/include/asm-arm/arm32/system.h |   16 ----------------
 xen/include/asm-arm/arm64/system.h |   17 -----------------
 xen/include/asm-arm/system.h       |   16 ++++++++++++++++
 3 files changed, 16 insertions(+), 33 deletions(-)

diff --git a/xen/include/asm-arm/arm32/system.h b/xen/include/asm-arm/arm32/system.h
index 60148cb..9f233fe 100644
--- a/xen/include/asm-arm/arm32/system.h
+++ b/xen/include/asm-arm/arm32/system.h
@@ -2,22 +2,6 @@
 #ifndef __ASM_ARM32_SYSTEM_H
 #define __ASM_ARM32_SYSTEM_H
 
-#define sev() __asm__ __volatile__ ("sev" : : : "memory")
-#define wfe() __asm__ __volatile__ ("wfe" : : : "memory")
-#define wfi() __asm__ __volatile__ ("wfi" : : : "memory")
-
-#define isb() __asm__ __volatile__ ("isb" : : : "memory")
-#define dsb() __asm__ __volatile__ ("dsb" : : : "memory")
-#define dmb() __asm__ __volatile__ ("dmb" : : : "memory")
-
-#define mb()            dsb()
-#define rmb()           dsb()
-#define wmb()           mb()
-
-#define smp_mb()        mb()
-#define smp_rmb()       rmb()
-#define smp_wmb()       wmb()
-
 extern void __bad_xchg(volatile void *, int);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
diff --git a/xen/include/asm-arm/arm64/system.h b/xen/include/asm-arm/arm64/system.h
index d7e912f..570af5c 100644
--- a/xen/include/asm-arm/arm64/system.h
+++ b/xen/include/asm-arm/arm64/system.h
@@ -2,23 +2,6 @@
 #ifndef __ASM_ARM64_SYSTEM_H
 #define __ASM_ARM64_SYSTEM_H
 
-#define sev()           asm volatile("sev" : : : "memory")
-#define wfe()           asm volatile("wfe" : : : "memory")
-#define wfi()           asm volatile("wfi" : : : "memory")
-
-#define isb()           asm volatile("isb" : : : "memory")
-#define dsb()           asm volatile("dsb sy" : : : "memory")
-#define dmb()           asm volatile("dmb sy" : : : "memory")
-
-#define mb()            dsb()
-#define rmb()           dsb()
-#define wmb()           mb()
-
-#define smp_mb()        mb()
-#define smp_rmb()       rmb()
-#define smp_wmb()       wmb()
-
-
 extern void __bad_xchg(volatile void *, int);
 
 static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 290d38d..e003624 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -8,6 +8,22 @@
 #define nop() \
     asm volatile ( "nop" )
 
+#define sev()           asm volatile("sev" : : : "memory")
+#define wfe()           asm volatile("wfe" : : : "memory")
+#define wfi()           asm volatile("wfi" : : : "memory")
+
+#define isb()           asm volatile("isb" : : : "memory")
+#define dsb()           asm volatile("dsb sy" : : : "memory")
+#define dmb()           asm volatile("dmb sy" : : : "memory")
+
+#define mb()            dsb()
+#define rmb()           dsb()
+#define wmb()           mb()
+
+#define smp_mb()        mb()
+#define smp_rmb()       rmb()
+#define smp_wmb()       wmb()
+
 #define xchg(ptr,x) \
         ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYh-00026a-3w; Thu, 13 Feb 2014 12:38:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYf-00025p-Of
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:26 +0000
Received: from [85.158.143.35:21452] by server-1.bemta-4.messagelabs.com id
	08/E7-31661-1CCBCF25; Thu, 13 Feb 2014 12:38:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392295102!5414705!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19213 invoked from network); 13 Feb 2014 12:38:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426780"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYP-0003Bw-7Y;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:06 +0000
Message-ID: <1392295088-24219-6-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 6/8] xen: arm: add scope to dsb and
	dmb macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Everywhere currently passes "sy" (full system scope), so there is no functional change.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/domain.c                |    2 +-
 xen/arch/arm/gic.c                   |   10 +++++-----
 xen/arch/arm/mm.c                    |    4 ++--
 xen/arch/arm/platforms/vexpress.c    |    6 +++---
 xen/arch/arm/smpboot.c               |    2 +-
 xen/arch/arm/time.c                  |    2 +-
 xen/drivers/video/arm_hdlcd.c        |    2 +-
 xen/include/asm-arm/arm32/flushtlb.h |   16 ++++++++--------
 xen/include/asm-arm/arm32/page.h     |    4 ++--
 xen/include/asm-arm/arm64/page.h     |    4 ++--
 xen/include/asm-arm/page.h           |    4 ++--
 xen/include/asm-arm/system.h         |   16 ++++++++--------
 12 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8f20fdf..b27f32f 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -47,7 +47,7 @@ void idle_loop(void)
         local_irq_disable();
         if ( cpu_is_haltable(smp_processor_id()) )
         {
-            dsb();
+            dsb(sy);
             wfi();
         }
         local_irq_enable();
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 13bbf48..1467b69 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -137,7 +137,7 @@ static void gic_irq_enable(struct irq_desc *desc)
     spin_lock_irqsave(&desc->lock, flags);
     spin_lock(&gic.lock);
     desc->status &= ~IRQ_DISABLED;
-    dsb();
+    dsb(sy);
     /* Enable routing */
     GICD[GICD_ISENABLER + irq / 32] = (1u << (irq % 32));
     spin_unlock(&gic.lock);
@@ -478,7 +478,7 @@ void send_SGI_mask(const cpumask_t *cpumask, enum gic_sgi sgi)
     cpumask_and(&online_mask, cpumask, &cpu_online_map);
     mask = gic_cpu_mask(&online_mask);
 
-    dsb();
+    dsb(sy);
 
     GICD[GICD_SGIR] = GICD_SGI_TARGET_LIST
         | (mask<<GICD_SGI_TARGET_SHIFT)
@@ -495,7 +495,7 @@ void send_SGI_self(enum gic_sgi sgi)
 {
     ASSERT(sgi < 16); /* There are only 16 SGIs */
 
-    dsb();
+    dsb(sy);
 
     GICD[GICD_SGIR] = GICD_SGI_TARGET_SELF
         | sgi;
@@ -505,7 +505,7 @@ void send_SGI_allbutself(enum gic_sgi sgi)
 {
    ASSERT(sgi < 16); /* There are only 16 SGIs */
 
-   dsb();
+   dsb(sy);
 
    GICD[GICD_SGIR] = GICD_SGI_TARGET_OTHERS
        | sgi;
@@ -589,7 +589,7 @@ static int __setup_irq(struct irq_desc *desc, unsigned int irq,
         return -EBUSY;
 
     desc->action  = new;
-    dsb();
+    dsb(sy);
 
     return 0;
 }
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index ff19e39..20dbb90 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -345,10 +345,10 @@ void flush_page_to_ram(unsigned long mfn)
 {
     void *p, *v = map_domain_page(mfn);
 
-    dsb();           /* So the CPU issues all writes to the range */
+    dsb(sy);         /* So the CPU issues all writes to the range */
     for ( p = v; p < v + PAGE_SIZE ; p += cacheline_bytes )
         asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
-    dsb();           /* So we know the flushes happen before continuing */
+    dsb(sy);         /* So we know the flushes happen before continuing */
 
     unmap_domain_page(v);
 }
diff --git a/xen/arch/arm/platforms/vexpress.c b/xen/arch/arm/platforms/vexpress.c
index 6132056..8e6a4ea 100644
--- a/xen/arch/arm/platforms/vexpress.c
+++ b/xen/arch/arm/platforms/vexpress.c
@@ -48,7 +48,7 @@ static inline int vexpress_ctrl_start(uint32_t *syscfg, int write,
     /* wait for complete flag to be set */
     do {
         stat = syscfg[V2M_SYS_CFGSTAT/4];
-        dsb();
+        dsb(sy);
     } while ( !(stat & V2M_SYS_CFG_COMPLETE) );
 
     /* check error status and return error flag if set */
@@ -113,10 +113,10 @@ static void vexpress_reset(void)
 
     /* switch to slow mode */
     writel(0x3, sp810);
-    dsb(); isb();
+    dsb(sy); isb();
     /* writing any value to SCSYSSTAT reg will reset the system */
     writel(0x1, sp810 + 4);
-    dsb(); isb();
+    dsb(sy); isb();
 
     iounmap(sp810);
 }
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index ce68d34..7f28b68 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -341,7 +341,7 @@ void stop_cpu(void)
     local_irq_disable();
     cpu_is_dead = 1;
     /* Make sure the write happens before we sleep forever */
-    dsb();
+    dsb(sy);
     isb();
     while ( 1 )
         wfi();
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 81e3e28..93d957a 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -260,7 +260,7 @@ void udelay(unsigned long usecs)
     s_time_t deadline = get_s_time() + 1000 * (s_time_t) usecs;
     while ( get_s_time() - deadline < 0 )
         ;
-    dsb();
+    dsb(sy);
     isb();
 }
 
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
index 647f22c..e5ad18d 100644
--- a/xen/drivers/video/arm_hdlcd.c
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -78,7 +78,7 @@ void (*video_puts)(const char *) = vga_noop_puts;
 
 static void hdlcd_flush(void)
 {
-    dsb();
+    dsb(sy);
 }
 
 static int __init get_color_masks(const char* bpp, struct color_masks **masks)
diff --git a/xen/include/asm-arm/arm32/flushtlb.h b/xen/include/asm-arm/arm32/flushtlb.h
index 7183a07..bbcc82f 100644
--- a/xen/include/asm-arm/arm32/flushtlb.h
+++ b/xen/include/asm-arm/arm32/flushtlb.h
@@ -4,44 +4,44 @@
 /* Flush local TLBs, current VMID only */
 static inline void flush_tlb_local(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALL);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
 /* Flush inner shareable TLBs, current VMID only */
 static inline void flush_tlb(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALLIS);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
 /* Flush local TLBs, all VMIDs, non-hypervisor mode */
 static inline void flush_tlb_all_local(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALLNSNH);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
 /* Flush innershareable TLBs, all VMIDs, non-hypervisor mode */
 static inline void flush_tlb_all(void)
 {
-    dsb();
+    dsb(sy);
 
     WRITE_CP32((uint32_t) 0, TLBIALLNSNHIS);
 
-    dsb();
+    dsb(sy);
     isb();
 }
 
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index b8221ca..191a108 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -67,13 +67,13 @@ static inline void flush_xen_data_tlb(void)
 static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
 {
     unsigned long end = va + size;
-    dsb(); /* Ensure preceding are visible */
+    dsb(sy); /* Ensure preceding are visible */
     while ( va < end ) {
         asm volatile(STORE_CP32(0, TLBIMVAH)
                      : : "r" (va) : "memory");
         va += PAGE_SIZE;
     }
-    dsb(); /* Ensure completion of the TLB flush */
+    dsb(sy); /* Ensure completion of the TLB flush */
     isb();
 }
 
diff --git a/xen/include/asm-arm/arm64/page.h b/xen/include/asm-arm/arm64/page.h
index 3352821..20b4c5a 100644
--- a/xen/include/asm-arm/arm64/page.h
+++ b/xen/include/asm-arm/arm64/page.h
@@ -60,13 +60,13 @@ static inline void flush_xen_data_tlb(void)
 static inline void flush_xen_data_tlb_range_va(unsigned long va, unsigned long size)
 {
     unsigned long end = va + size;
-    dsb(); /* Ensure preceding are visible */
+    dsb(sy); /* Ensure preceding are visible */
     while ( va < end ) {
         asm volatile("tlbi vae2, %0;"
                      : : "r" (va>>PAGE_SHIFT) : "memory");
         va += PAGE_SIZE;
     }
-    dsb(); /* Ensure completion of the TLB flush */
+    dsb(sy); /* Ensure completion of the TLB flush */
     isb();
 }
 
diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 6dc7fa6..5e4678e 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -263,10 +263,10 @@ extern size_t cacheline_bytes;
 static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
 {
     void *end;
-    dsb();           /* So the CPU issues all writes to the range */
+    dsb(sy);           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
         asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
-    dsb();           /* So we know the flushes happen before continuing */
+    dsb(sy);           /* So we know the flushes happen before continuing */
 }
 
 /* Macro for flushing a single small item.  The predicate is always
diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 89c61ef..e1f126a 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -13,16 +13,16 @@
 #define wfi()           asm volatile("wfi" : : : "memory")
 
 #define isb()           asm volatile("isb" : : : "memory")
-#define dsb()           asm volatile("dsb sy" : : : "memory")
-#define dmb()           asm volatile("dmb sy" : : : "memory")
+#define dsb(scope)      asm volatile("dsb " #scope : : : "memory")
+#define dmb(scope)      asm volatile("dmb " #scope : : : "memory")
 
-#define mb()            dsb()
-#define rmb()           dsb()
-#define wmb()           dsb()
+#define mb()            dsb(sy)
+#define rmb()           dsb(sy)
+#define wmb()           dsb(sy)
 
-#define smp_mb()        dmb()
-#define smp_rmb()       dmb()
-#define smp_wmb()       dmb()
+#define smp_mb()        dmb(sy)
+#define smp_rmb()       dmb(sy)
+#define smp_wmb()       dmb(sy)
 
 #define xchg(ptr,x) \
         ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYi-00027q-3I; Thu, 13 Feb 2014 12:38:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYh-00026N-04
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:27 +0000
Received: from [193.109.254.147:6135] by server-13.bemta-14.messagelabs.com id
	CC/0B-01226-2CCBCF25; Thu, 13 Feb 2014 12:38:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392295104!4060571!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2992 invoked from network); 13 Feb 2014 12:38:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426779"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYP-0003Bw-Cb;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:08 +0000
Message-ID: <1392295088-24219-8-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 8/8] xen: arm: use more specific
	barriers for read and write barriers.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Note that 32-bit does not provide a load variant of the inner shareable
barrier, so that remains a full any-any barrier.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/include/asm-arm/system.h |   17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index 32ed277..7aaaf50 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -17,12 +17,21 @@
 #define dmb(scope)      asm volatile("dmb " #scope : : : "memory")
 
 #define mb()            dsb(sy)
-#define rmb()           dsb(sy)
-#define wmb()           dsb(sy)
+#ifdef CONFIG_ARM_64
+#define rmb()           dsb(ld)
+#else
+#define rmb()           dsb(sy) /* 32-bit has no ld variant. */
+#endif
+#define wmb()           dsb(st)
 
 #define smp_mb()        dmb(ish)
-#define smp_rmb()       dmb(ish)
-#define smp_wmb()       dmb(ish)
+#ifdef CONFIG_ARM_64
+#define smp_rmb()       dmb(ishld)
+#else
+#define smp_rmb()       dmb(ish) /* 32-bit has no ishld variant. */
+#endif
+
+#define smp_wmb()       dmb(ishst)
 
 #define xchg(ptr,x) \
         ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYi-00028W-Lu; Thu, 13 Feb 2014 12:38:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYh-00026U-4y
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:27 +0000
Received: from [85.158.143.35:35603] by server-2.bemta-4.messagelabs.com id
	F9/12-10891-2CCBCF25; Thu, 13 Feb 2014 12:38:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392295102!5414705!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19494 invoked from network); 13 Feb 2014 12:38:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426777"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYP-0003Bw-9z;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:07 +0000
Message-ID: <1392295088-24219-7-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 7/8] xen: arm: weaken SMP barriers to
	inner shareable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since all processors are in the inner-shareable domain and we map everything
that way, this is sufficient.

The non-SMP barriers remain full system. Although in principle they could
become outer-shareable barriers for some hardware, this would require us to
know which shareability class a given device is in. Given the small number
of device drivers in Xen itself, it's probably not worth worrying over,
although maybe someone will benchmark it at some point.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/include/asm-arm/system.h |    6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index e1f126a..32ed277 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -20,9 +20,9 @@
 #define rmb()           dsb(sy)
 #define wmb()           dsb(sy)
 
-#define smp_mb()        dmb(sy)
-#define smp_rmb()       dmb(sy)
-#define smp_wmb()       dmb(sy)
+#define smp_mb()        dmb(ish)
+#define smp_rmb()       dmb(ish)
+#define smp_wmb()       dmb(ish)
 
 #define xchg(ptr,x) \
         ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYk-0002AH-8x; Thu, 13 Feb 2014 12:38:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYi-000275-0b
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:28 +0000
Received: from [85.158.143.35:35712] by server-2.bemta-4.messagelabs.com id
	EF/12-10891-3CCBCF25; Thu, 13 Feb 2014 12:38:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392295102!5414705!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19580 invoked from network); 13 Feb 2014 12:38:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426778"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYP-0003Bw-4w;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:05 +0000
Message-ID: <1392295088-24219-5-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 5/8] xen: arm: Use dmb for smp
	barriers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The full power of dsb is not required in this context.

Also change wmb() to be dsb() directly instead of indirectly via mb(), for
clarity.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/include/asm-arm/system.h |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/include/asm-arm/system.h b/xen/include/asm-arm/system.h
index e003624..89c61ef 100644
--- a/xen/include/asm-arm/system.h
+++ b/xen/include/asm-arm/system.h
@@ -18,11 +18,11 @@
 
 #define mb()            dsb()
 #define rmb()           dsb()
-#define wmb()           mb()
+#define wmb()           dsb()
 
-#define smp_mb()        mb()
-#define smp_rmb()       rmb()
-#define smp_wmb()       wmb()
+#define smp_mb()        dmb()
+#define smp_rmb()       dmb()
+#define smp_wmb()       dmb()
 
 #define xchg(ptr,x) \
         ((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:38:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvYk-0002B6-PD; Thu, 13 Feb 2014 12:38:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvYi-00027F-5f
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:38:28 +0000
Received: from [193.109.254.147:54734] by server-8.bemta-14.messagelabs.com id
	6D/F6-18529-3CCBCF25; Thu, 13 Feb 2014 12:38:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392295104!4060571!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3101 invoked from network); 13 Feb 2014 12:38:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:38:26 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100426776"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 12:38:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 07:38:09 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WDvYP-0003Bw-2P;
	Thu, 13 Feb 2014 12:38:09 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 12:38:04 +0000
Message-ID: <1392295088-24219-4-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392295040.31985.7.camel@kazak.uk.xensource.com>
References: <1392295040.31985.7.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH for-4.5 v2 4/8] xen: arm: Use SMP barriers when
	that is all which is required.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SMP barriers can be used when all we care about is synchronising against other
processors.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
v2: Turn cpu_die mb()s into smp_mb()s.
---
 xen/arch/arm/mm.c      |    2 +-
 xen/arch/arm/smpboot.c |   10 +++++-----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index f608020..ff19e39 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -967,7 +967,7 @@ void share_xen_page_with_guest(struct page_info *page,
     page->u.inuse.type_info |= PGT_validated | 1;
 
     page_set_owner(page, d);
-    wmb(); /* install valid domain ptr before updating refcnt. */
+    smp_wmb(); /* install valid domain ptr before updating refcnt. */
     ASSERT((page->count_info & ~PGC_xen_heap) == 0);
 
     /* Only add to the allocation list if the domain isn't dying. */
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index a829957..ce68d34 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -298,12 +298,12 @@ void __cpuinit start_secondary(unsigned long boot_phys_offset,
 
     /* Run local notifiers */
     notify_cpu_starting(cpuid);
-    wmb();
+    smp_wmb();
 
     /* Now report this CPU is up */
     smp_up_cpu = MPIDR_INVALID;
     cpumask_set_cpu(cpuid, &cpu_online_map);
-    wmb();
+    smp_wmb();
 
     local_irq_enable();
     local_abort_enable();
@@ -330,7 +330,7 @@ void __cpu_disable(void)
 
     if ( cpu_disable_scheduler(cpu) )
         BUG();
-    mb();
+    smp_mb();
 
     /* Return to caller; eventually the IPI mechanism will unwind and the 
      * scheduler will drop to the idle loop, which will call stop_cpu(). */
@@ -411,10 +411,10 @@ void __cpu_die(unsigned int cpu)
         process_pending_softirqs();
         if ( (++i % 10) == 0 )
             printk(KERN_ERR "CPU %u still not dead...\n", cpu);
-        mb();
+        smp_mb();
     }
     cpu_is_dead = 0;
-    mb();
+    smp_mb();
 }
 
 /*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 12:49:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvjH-0003l3-Ir; Thu, 13 Feb 2014 12:49:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lccycc123@gmail.com>) id 1WDvjF-0003ky-Oz
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:49:22 +0000
Received: from [85.158.137.68:34329] by server-2.bemta-3.messagelabs.com id
	FE/1C-06531-05FBCF25; Thu, 13 Feb 2014 12:49:20 +0000
X-Env-Sender: lccycc123@gmail.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392295757!1634148!1
X-Originating-IP: [209.85.192.42]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15345 invoked from network); 13 Feb 2014 12:49:18 -0000
Received: from mail-qg0-f42.google.com (HELO mail-qg0-f42.google.com)
	(209.85.192.42)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:49:18 -0000
Received: by mail-qg0-f42.google.com with SMTP id q107so1134976qgd.1
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 04:49:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=kglP0WUHV3UacbaRUBPim08Ljsai4AWjGBeMu6z8Unc=;
	b=xzxoXtJ4AJvc1jlOFjoejrt8qKNMRIu+34BH/9FSTXK7b6OjvlHOtXXfaSCjTVFsQ2
	ex8w41HhAaefRiq6ZCBHCSnA38txRHfUu4JngQ35pUOx0TkfFE3V5UFhDdNH2nlTNotV
	J9/sXrlhJhUaOHgcAsZ9OuqVJ4smveIDcPdzMhDgkAtLLzXj2UHzex/t9qBgTCQ+s4EB
	usUliXk4flF9MMb1q68qdrtxj6nW7EJkxJQ/oBZeBeWcBktZXPWq8zs6D8XdvKl5T1Vz
	pZ7i5AeDsjCJtcCXmTxpTURJGtFnZuaA048Vwy/1bYfFWTzaHw7Mk0czAVjSO7tby6TE
	j+QQ==
MIME-Version: 1.0
X-Received: by 10.224.87.193 with SMTP id x1mr2112855qal.70.1392295756856;
	Thu, 13 Feb 2014 04:49:16 -0800 (PST)
Received: by 10.224.196.138 with HTTP; Thu, 13 Feb 2014 04:49:16 -0800 (PST)
In-Reply-To: <1386136035-19544-1-git-send-email-ufimtseva@gmail.com>
References: <1386136035-19544-1-git-send-email-ufimtseva@gmail.com>
Date: Thu, 13 Feb 2014 20:49:16 +0800
Message-ID: <CAP5+zHTb_1AVE3_oCNoLH0q7Queoa0ui+vfhK4=Z31+Ld6k30w@mail.gmail.com>
From: Li Yechen <lccycc123@gmail.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Matt Wilson <msw@linux.com>,
	Dario Faggioli <dario.faggioli@citrix.com>, ian.jackson@eu.citrix.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v4 0/7] vNUMA introduction
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4478325514760300068=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4478325514760300068==
Content-Type: multipart/alternative; boundary=001a11c3e24602829204f2491e03

--001a11c3e24602829204f2491e03
Content-Type: text/plain; charset=ISO-8859-1

Hi Elena,
The patch on gitorious is not available. Have you got a newer version?
I have an idea and need to modify your patch a little to manage the change
of memory.
Sorry for having been away so long :-)


On Wed, Dec 4, 2013 at 1:47 PM, Elena Ufimtseva <ufimtseva@gmail.com> wrote:

> vNUMA introduction
>
> This series of patches introduces vNUMA topology awareness and
> provides interfaces and data structures to enable vNUMA for
> PV guests. There is a plan to extend this support for dom0 and
> HVM domains.
>
> vNUMA topology awareness must be supported by the PV guest kernel;
> the corresponding patches should be applied.
>
> Introduction
> -------------
>
> vNUMA topology is exposed to the PV guest to improve performance when
> running
> workloads on NUMA machines.
> XEN vNUMA implementation provides a way to create vNUMA-enabled guests on
> NUMA/UMA
> and map the vNUMA topology to physical NUMA in an optimal way.
>
> XEN vNUMA support
>
> The current set of patches introduces a subop hypercall that is available
> for enlightened PV guests with the vNUMA patches applied.
>
> Domain structure was modified to reflect per-domain vNUMA topology for use
> in other
> vNUMA-aware subsystems (e.g. ballooning).
>
> libxc
>
> libxc provides interfaces to build PV guests with vNUMA support and, in
> the case of NUMA machines, provides initial memory allocation on physical
> NUMA nodes. This is implemented by utilizing the nodemap formed by
> automatic NUMA placement. Details are in patch #3.
>
> libxl
>
> libxl provides a way to predefine the vNUMA topology in the VM config:
> number of vnodes, memory arrangement, vcpu-to-vnode assignment, and the
> distance map.
>
> PV guest
>
> As of now, only a PV guest can take advantage of vNUMA functionality. The
> vNUMA Linux patches should be applied and NUMA support should be compiled
> into the kernel.
>
> This patchset can be pulled from
> https://git.gitorious.org/xenvnuma/xenvnuma.git:v6
> Linux patchset https://git.gitorious.org/xenvnuma/linuxvnuma.git:v6
>
> Examples of booting vNUMA enabled PV Linux guest on real NUMA machine:
>
> 1. Automatic vNUMA placement on h/w NUMA machine:
>
> VM config:
>
> memory = 16384
> vcpus = 4
> name = "rcbig"
> vnodes = 4
> vnumamem = [10,10]
> vnuma_distance = [10, 30, 10, 30]
> vcpu_to_vnode = [0, 0, 1, 1]
>
> Xen:
>
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 2569511):
> (XEN)     Node 0: 1416166
> (XEN)     Node 1: 1153345
> (XEN) Domain 5 (total: 4194304):
> (XEN)     Node 0: 2097152
> (XEN)     Node 1: 2097152
> (XEN)     Domain has 4 vnodes
> (XEN)         vnode 0 - pnode 0  (4096) MB
> (XEN)         vnode 1 - pnode 0  (4096) MB
> (XEN)         vnode 2 - pnode 1  (4096) MB
> (XEN)         vnode 3 - pnode 1  (4096) MB
> (XEN)     Domain vcpu to vnode:
> (XEN)     0 1 2 3
>
> dmesg on pv guest:
>
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
> [    0.000000]   node   0: [mem 0x00100000-0xffffffff]
> [    0.000000]   node   1: [mem 0x100000000-0x1ffffffff]
> [    0.000000]   node   2: [mem 0x200000000-0x2ffffffff]
> [    0.000000]   node   3: [mem 0x300000000-0x3ffffffff]
> [    0.000000] On node 0 totalpages: 1048479
> [    0.000000]   DMA zone: 56 pages used for memmap
> [    0.000000]   DMA zone: 21 pages reserved
> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
> [    0.000000]   DMA32 zone: 14280 pages used for memmap
> [    0.000000]   DMA32 zone: 1044480 pages, LIFO batch:31
> [    0.000000] On node 1 totalpages: 1048576
> [    0.000000]   Normal zone: 14336 pages used for memmap
> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
> [    0.000000] On node 2 totalpages: 1048576
> [    0.000000]   Normal zone: 14336 pages used for memmap
> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
> [    0.000000] On node 3 totalpages: 1048576
> [    0.000000]   Normal zone: 14336 pages used for memmap
> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] nr_irqs_gsi: 16
> [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
> [    0.000000] e820: cannot find a gap in the 32bit address range
> [    0.000000] e820: PCI devices with unassigned 32bit BARs may break!
> [    0.000000] e820: [mem 0x400100000-0x4004fffff] available for PCI
> devices
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.4-unstable (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4
> nr_node_ids:4
> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s85376
> r8192 d21120 u2097152
> [    0.000000] pcpu-alloc: s85376 r8192 d21120 u2097152 alloc=1*2097152
> [    0.000000] pcpu-alloc: [0] 0 [1] 1 [2] 2 [3] 3
>
>
> pv guest: numactl --hardware:
>
> root@heatpipe:~# numactl --hardware
> available: 4 nodes (0-3)
> node 0 cpus: 0
> node 0 size: 4031 MB
> node 0 free: 3997 MB
> node 1 cpus: 1
> node 1 size: 4039 MB
> node 1 free: 4022 MB
> node 2 cpus: 2
> node 2 size: 4039 MB
> node 2 free: 4023 MB
> node 3 cpus: 3
> node 3 size: 3975 MB
> node 3 free: 3963 MB
> node distances:
> node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10
>
> Comments:
> None of the configuration options are correct, so default values were
> used. Since the machine is a NUMA machine and no vcpu pinning is defined,
> the automatic NUMA node selection mechanism is used, and you can see how
> the vnodes were split across the physical nodes.
>
> 2. Example with e820_host = 1 (32GB real NUMA machines, two nodes).
>
> pv config:
> memory = 4000
> vcpus = 8
> # The name of the domain, change this if you want more than 1 VM.
> name = "null"
> vnodes = 4
> #vnumamem = [3000, 1000]
> vdistance = [10, 40]
> #vnuma_vcpumap = [1, 0, 3, 2]
> vnuma_vnodemap = [1, 0, 1, 0]
> #vnuma_autoplacement = 1
> e820_host = 1
>
> guest boot:
>
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Initializing cgroup subsys cpuacct
> [    0.000000] Linux version 3.12.0+ (assert@superpipe) (gcc version
> 4.7.2 (Debian 4.7.2-5) ) #111 SMP Tue Dec 3 14:54:36 EST 2013
> [    0.000000] Command line: root=/dev/xvda1 ro earlyprintk=xen debug
> loglevel=8
>  debug print_fatal_signals=1 loglvl=all guest_loglvl=all LOGLEVEL=8
> earlyprintk=
> xen sched_debug
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] Freeing ac228-fa000 pfn range: 318936 pages freed
> [    0.000000] 1-1 mapping on ac228->100000
> [    0.000000] Released 318936 pages of unused memory
> [    0.000000] Set 343512 page(s) to 1-1 mapping
> [    0.000000] Populating 100000-14ddd8 pfn range: 318936 pages added
> [    0.000000] e820: BIOS-provided physical RAM map:
> [    0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
> [    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
> [    0.000000] Xen: [mem 0x0000000000100000-0x00000000ac227fff] usable
> [    0.000000] Xen: [mem 0x00000000ac228000-0x00000000ac26bfff] reserved
> [    0.000000] Xen: [mem 0x00000000ac26c000-0x00000000ac57ffff] unusable
> [    0.000000] Xen: [mem 0x00000000ac580000-0x00000000ac5a0fff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5a1000-0x00000000ac5bbfff] unusable
> [    0.000000] Xen: [mem 0x00000000ac5bc000-0x00000000ac5bdfff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5be000-0x00000000ac5befff] unusable
> [    0.000000] Xen: [mem 0x00000000ac5bf000-0x00000000ac5cafff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5cb000-0x00000000ac5d9fff] unusable
> [    0.000000] Xen: [mem 0x00000000ac5da000-0x00000000ac5fafff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5fb000-0x00000000ac6b6fff] unusable
> [    0.000000] Xen: [mem 0x00000000ac6b7000-0x00000000ac7fafff] ACPI NVS
> [    0.000000] Xen: [mem 0x00000000ac7fb000-0x00000000ac80efff] unusable
> [    0.000000] Xen: [mem 0x00000000ac80f000-0x00000000ac80ffff] ACPI data
> [    0.000000] Xen: [mem 0x00000000ac810000-0x00000000ac810fff] unusable
> [    0.000000] Xen: [mem 0x00000000ac811000-0x00000000ac812fff] ACPI data
> [    0.000000] Xen: [mem 0x00000000ac813000-0x00000000ad7fffff] unusable
> [    0.000000] Xen: [mem 0x00000000b0000000-0x00000000b3ffffff] reserved
> [    0.000000] Xen: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
> [    0.000000] Xen: [mem 0x00000000fed50000-0x00000000fed8ffff] reserved
> [    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
> [    0.000000] Xen: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserved
> [    0.000000] Xen: [mem 0x0000000100000000-0x000000014ddd7fff] usable
> [    0.000000] bootconsole [xenboot0] enabled
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
> [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
> [    0.000000] No AGP bridge found
> [    0.000000] e820: last_pfn = 0x14ddd8 max_arch_pfn = 0x400000000
> [    0.000000] e820: last_pfn = 0xac228 max_arch_pfn = 0x400000000
> [    0.000000] Base memory trampoline at [ffff88000009a000] 9a000 size
> 24576
> [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
> [    0.000000]  [mem 0x00000000-0x000fffff] page 4k
> [    0.000000] init_memory_mapping: [mem 0x14da00000-0x14dbfffff]
> [    0.000000]  [mem 0x14da00000-0x14dbfffff] page 4k
> [    0.000000] BRK [0x019bd000, 0x019bdfff] PGTABLE
> [    0.000000] BRK [0x019be000, 0x019befff] PGTABLE
> [    0.000000] init_memory_mapping: [mem 0x14c000000-0x14d9fffff]
> [    0.000000]  [mem 0x14c000000-0x14d9fffff] page 4k
> [    0.000000] BRK [0x019bf000, 0x019bffff] PGTABLE
> [    0.000000] BRK [0x019c0000, 0x019c0fff] PGTABLE
> [    0.000000] BRK [0x019c1000, 0x019c1fff] PGTABLE
> [    0.000000] BRK [0x019c2000, 0x019c2fff] PGTABLE
> [    0.000000] init_memory_mapping: [mem 0x100000000-0x14bffffff]
> [    0.000000]  [mem 0x100000000-0x14bffffff] page 4k
> [    0.000000] init_memory_mapping: [mem 0x00100000-0xac227fff]
> [    0.000000]  [mem 0x00100000-0xac227fff] page 4k
> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
> [    0.000000] NUMA: Initialized distance table, cnt=4
> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x3e7fffff]
> [    0.000000]   NODE_DATA [mem 0x3e7d9000-0x3e7fffff]
> [    0.000000] Initmem setup node 1 [mem 0x3e800000-0x7cffffff]
> [    0.000000]   NODE_DATA [mem 0x7cfd9000-0x7cffffff]
> [    0.000000] Initmem setup node 2 [mem 0x7d000000-0x10f5dffff]
> [    0.000000]   NODE_DATA [mem 0x10f5b9000-0x10f5dffff]
> [    0.000000] Initmem setup node 3 [mem 0x10f800000-0x14ddd7fff]
> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
> [    0.000000]   node   0: [mem 0x00100000-0x3e7fffff]
> [    0.000000]   node   1: [mem 0x3e800000-0x7cffffff]
> [    0.000000]   node   2: [mem 0x7d000000-0xac227fff]
> [    0.000000]   node   2: [mem 0x100000000-0x10f5dffff]
> [    0.000000]   node   3: [mem 0x10f5e0000-0x14ddd7fff]
> [    0.000000] On node 0 totalpages: 255903
> [    0.000000]   DMA zone: 56 pages used for memmap
> [    0.000000]   DMA zone: 21 pages reserved
> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
> [    0.000000]   DMA32 zone: 3444 pages used for memmap
> [    0.000000]   DMA32 zone: 251904 pages, LIFO batch:31
> [    0.000000] On node 1 totalpages: 256000
> [    0.000000]   DMA32 zone: 3500 pages used for memmap
> [    0.000000]   DMA32 zone: 256000 pages, LIFO batch:31
> [    0.000000] On node 2 totalpages: 256008
> [    0.000000]   DMA32 zone: 2640 pages used for memmap
> [    0.000000]   DMA32 zone: 193064 pages, LIFO batch:31
> [    0.000000]   Normal zone: 861 pages used for memmap
> [    0.000000]   Normal zone: 62944 pages, LIFO batch:15
> [    0.000000] On node 3 totalpages: 255992
> [    0.000000]   Normal zone: 3500 pages used for memmap
> [    0.000000]   Normal zone: 255992 pages, LIFO batch:31
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
>
> root@heatpipe:~# numactl --ha
> available: 4 nodes (0-3)
> node 0 cpus: 0 4
> node 0 size: 977 MB
> node 0 free: 947 MB
> node 1 cpus: 1 5
> node 1 size: 985 MB
> node 1 free: 974 MB
> node 2 cpus: 2 6
> node 2 size: 985 MB
> node 2 free: 973 MB
> node 3 cpus: 3 7
> node 3 size: 969 MB
> node 3 free: 958 MB
> node distances:
> node   0   1   2   3
>   0:  10  40  40  40
>   1:  40  10  40  40
>   2:  40  40  10  40
>   3:  40  40  40  10
>
> root@heatpipe:~# numastat -m
>
> Per-node system memory usage (in MBs):
>                    Node 0    Node 1    Node 2    Node 3      Total
> MemTotal           977.14    985.50    985.44    969.91    3917.99
>
> hypervisor: xl debug-keys u
>
> (XEN) 'u' pressed -> dumping numa info (now-0x2A3:F7B8CB0F)
> (XEN) Domain 2 (total: 1024000):
> (XEN)     Node 0: 415468
> (XEN)     Node 1: 608532
> (XEN)     Domain has 4 vnodes
> (XEN)         vnode 0 - pnode 1 1000 MB, vcpus: 0 4
> (XEN)         vnode 1 - pnode 0 1000 MB, vcpus: 1 5
> (XEN)         vnode 2 - pnode 1 2341 MB, vcpus: 2 6
> (XEN)         vnode 3 - pnode 0 999 MB, vcpus: 3 7
>
> This size discrepancy is caused by the way the size is calculated from
> guest pfns: end - start. Thus the hole size, in this case ~1.3GB, is
> included in the reported size.
>
> 3. No vNUMA configuration specified for a PV domain.
> There will be at least one vnuma node even if no vnuma topology was
> specified.
>
> pv config:
>
> memory = 4000
> vcpus = 8
> # The name of the domain, change this if you want more than 1 VM.
> name = "null"
> #vnodes = 4
> vnumamem = [3000, 1000]
> vdistance = [10, 40]
> vnuma_vcpumap = [1, 0, 3, 2]
> vnuma_vnodemap = [1, 0, 1, 0]
> vnuma_autoplacement = 1
> e820_host = 1
>
> boot:
> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
> [    0.000000] NUMA: Initialized distance table, cnt=1
> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x14ddd7fff]
> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
> [    0.000000]   node   0: [mem 0x00100000-0xac227fff]
> [    0.000000]   node   0: [mem 0x100000000-0x14ddd7fff]
>
> root@heatpipe:~# numactl --ha
> maxn: 0
> available: 1 nodes (0)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 3918 MB
> node 0 free: 3853 MB
> node distances:
> node   0
>   0:  10
>
> root@heatpipe:~# numastat -m
>
> Per-node system memory usage (in MBs):
>                           Node 0           Total
>                  --------------- ---------------
> MemTotal                 3918.74         3918.74
>
> hypervisor: xl debug-keys u
>
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 6787432):
> (XEN)     Node 0: 3485706
> (XEN)     Node 1: 3301726
> (XEN) Domain 3 (total: 1024000):
> (XEN)     Node 0: 512000
> (XEN)     Node 1: 512000
> (XEN)     Domain has 1 vnodes
> (XEN)         vnode 0 - pnode any 5341 MB, vcpus: 0 1 2 3 4 5 6 7
>
>
> Notes:
>
> To enable vNUMA in a PV guest, the corresponding patch set should be
> applied - https://git.gitorious.org/xenvnuma/linuxvnuma.git:v5
> or
> https://www.gitorious.org/xenvnuma/linuxvnuma/commit/deaa014257b99f57c76fbba12a28907786cbe17d
> .
>
>
> Issues:
>
> The most important issue right now is automatic NUMA balancing in the
> Linux PV kernel, as it is corrupting user-space memory. Since v3 of this
> patch series, Linux kernel 3.13 seemed to perform correctly, but with the
> recent changes the issue is back. See
> https://lkml.org/lkml/2013/10/31/133 for the urgent patch that presumably
> had NUMA balancing working. Since 3.12 there have been multiple changes
> to automatic NUMA balancing. I am currently investigating whether
> anything should be done on the hypervisor side and will work with the
> kernel maintainers.
>
> Elena Ufimtseva (7):
>   xen: vNUMA support for PV guests
>   libxc: Plumb Xen with vNUMA topology for domain
>   xl: vnuma memory parsing and supplement functions
>   xl: vnuma distance, vcpu and pnode masks parser
>   libxc: vnuma memory domain allocation
>   libxl: vNUMA supporting interface
>   xen: adds vNUMA info debug-key u
>
>  docs/man/xl.cfg.pod.5        |   60 +++++++
>  tools/libxc/xc_dom.h         |   10 ++
>  tools/libxc/xc_dom_x86.c     |   63 +++++--
>  tools/libxc/xc_domain.c      |   64 +++++++
>  tools/libxc/xenctrl.h        |    9 +
>  tools/libxc/xg_private.h     |    1 +
>  tools/libxl/libxl.c          |   18 ++
>  tools/libxl/libxl.h          |   20 +++
>  tools/libxl/libxl_arch.h     |    6 +
>  tools/libxl/libxl_dom.c      |  158 ++++++++++++++++--
>  tools/libxl/libxl_internal.h |    6 +
>  tools/libxl/libxl_numa.c     |   49 ++++++
>  tools/libxl/libxl_types.idl  |    6 +-
>  tools/libxl/libxl_vnuma.h    |   11 ++
>  tools/libxl/libxl_x86.c      |  123 ++++++++++++++
>  tools/libxl/xl_cmdimpl.c     |  380 ++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/numa.c          |   30 +++-
>  xen/common/domain.c          |   10 ++
>  xen/common/domctl.c          |   79 +++++++++
>  xen/common/memory.c          |   96 +++++++++++
>  xen/include/public/domctl.h  |   29 ++++
>  xen/include/public/memory.h  |   17 ++
>  xen/include/public/vnuma.h   |   59 +++++++
>  xen/include/xen/domain.h     |    8 +
>  xen/include/xen/sched.h      |    1 +
>  25 files changed, 1282 insertions(+), 31 deletions(-)
>  create mode 100644 tools/libxl/libxl_vnuma.h
>  create mode 100644 xen/include/public/vnuma.h
>
> --
> 1.7.10.4
>
>


-- 

Yechen Li

Team of System Virtualization and Cloud Computing
School of Electronic Engineering  and Computer Science,
Peking University, China

Nothing is impossible because impossible itself  says: " I'm possible "
lccycc From PKU



--===============4478325514760300068==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4478325514760300068==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 12:49:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:49:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvjH-0003l3-Ir; Thu, 13 Feb 2014 12:49:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lccycc123@gmail.com>) id 1WDvjF-0003ky-Oz
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 12:49:22 +0000
Received: from [85.158.137.68:34329] by server-2.bemta-3.messagelabs.com id
	FE/1C-06531-05FBCF25; Thu, 13 Feb 2014 12:49:20 +0000
X-Env-Sender: lccycc123@gmail.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392295757!1634148!1
X-Originating-IP: [209.85.192.42]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15345 invoked from network); 13 Feb 2014 12:49:18 -0000
Received: from mail-qg0-f42.google.com (HELO mail-qg0-f42.google.com)
	(209.85.192.42)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 12:49:18 -0000
Received: by mail-qg0-f42.google.com with SMTP id q107so1134976qgd.1
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 04:49:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=kglP0WUHV3UacbaRUBPim08Ljsai4AWjGBeMu6z8Unc=;
	b=xzxoXtJ4AJvc1jlOFjoejrt8qKNMRIu+34BH/9FSTXK7b6OjvlHOtXXfaSCjTVFsQ2
	ex8w41HhAaefRiq6ZCBHCSnA38txRHfUu4JngQ35pUOx0TkfFE3V5UFhDdNH2nlTNotV
	J9/sXrlhJhUaOHgcAsZ9OuqVJ4smveIDcPdzMhDgkAtLLzXj2UHzex/t9qBgTCQ+s4EB
	usUliXk4flF9MMb1q68qdrtxj6nW7EJkxJQ/oBZeBeWcBktZXPWq8zs6D8XdvKl5T1Vz
	pZ7i5AeDsjCJtcCXmTxpTURJGtFnZuaA048Vwy/1bYfFWTzaHw7Mk0czAVjSO7tby6TE
	j+QQ==
MIME-Version: 1.0
X-Received: by 10.224.87.193 with SMTP id x1mr2112855qal.70.1392295756856;
	Thu, 13 Feb 2014 04:49:16 -0800 (PST)
Received: by 10.224.196.138 with HTTP; Thu, 13 Feb 2014 04:49:16 -0800 (PST)
In-Reply-To: <1386136035-19544-1-git-send-email-ufimtseva@gmail.com>
References: <1386136035-19544-1-git-send-email-ufimtseva@gmail.com>
Date: Thu, 13 Feb 2014 20:49:16 +0800
Message-ID: <CAP5+zHTb_1AVE3_oCNoLH0q7Queoa0ui+vfhK4=Z31+Ld6k30w@mail.gmail.com>
From: Li Yechen <lccycc123@gmail.com>
To: Elena Ufimtseva <ufimtseva@gmail.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Matt Wilson <msw@linux.com>,
	Dario Faggioli <dario.faggioli@citrix.com>, ian.jackson@eu.citrix.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v4 0/7] vNUMA introduction
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4478325514760300068=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4478325514760300068==
Content-Type: multipart/alternative; boundary=001a11c3e24602829204f2491e03

--001a11c3e24602829204f2491e03
Content-Type: text/plain; charset=ISO-8859-1

Hi Elena,
The patch on gitorious is not available. Do you have a newer version?
I have an idea, and would need to modify your patch a little to manage
changes of memory.
Sorry for having been away for so long :-)


On Wed, Dec 4, 2013 at 1:47 PM, Elena Ufimtseva <ufimtseva@gmail.com> wrote:

> vNUMA introduction
>
> This series of patches introduces vNUMA topology awareness and
> provides interfaces and data structures to enable vNUMA for
> PV guests. There is a plan to extend this support to dom0 and
> HVM domains.
>
> vNUMA topology support must be present in the PV guest kernel;
> the corresponding patches should be applied.
>
> Introduction
> -------------
>
> vNUMA topology is exposed to the PV guest to improve performance when
> running workloads on NUMA machines.
> The Xen vNUMA implementation provides a way to create vNUMA-enabled guests
> on NUMA/UMA machines and to map the vNUMA topology onto physical NUMA nodes
> in an optimal way.
>
> XEN vNUMA support
>
> The current set of patches introduces a subop hypercall that is available
> to enlightened PV guests with the vNUMA patches applied.
>
> The domain structure was modified to reflect the per-domain vNUMA topology
> for use in other vNUMA-aware subsystems (e.g. ballooning).
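
As a rough illustration (the names here are hypothetical, not the actual Xen
structures), the per-domain vNUMA record described above can be modeled and
sanity-checked like this:

```python
# Hypothetical sketch of the per-domain vNUMA record the cover letter
# describes, together with a basic consistency check. Field names are
# illustrative only; they do not mirror the real Xen struct members.
from dataclasses import dataclass

@dataclass
class VnumaTopology:
    nr_vnodes: int
    vmem_mb: list          # memory size per vnode, in MB
    vcpu_to_vnode: list    # vcpu index -> vnode index
    vnode_to_pnode: list   # vnode index -> physical NUMA node
    vdistance: list        # nr_vnodes x nr_vnodes distance matrix

    def validate(self):
        # Every per-vnode array must cover exactly nr_vnodes entries,
        # and every vcpu must map to a valid vnode.
        assert len(self.vmem_mb) == self.nr_vnodes
        assert len(self.vnode_to_pnode) == self.nr_vnodes
        assert all(0 <= v < self.nr_vnodes for v in self.vcpu_to_vnode)
        assert len(self.vdistance) == self.nr_vnodes
        assert all(len(row) == self.nr_vnodes for row in self.vdistance)
        return True

# The "Domain 5" layout from the Xen debug output in example 1 below:
# 4 vnodes of 4096 MB, vnodes 0,1 on pnode 0 and vnodes 2,3 on pnode 1.
topo = VnumaTopology(
    nr_vnodes=4,
    vmem_mb=[4096] * 4,
    vcpu_to_vnode=[0, 1, 2, 3],
    vnode_to_pnode=[0, 0, 1, 1],
    vdistance=[[10 if i == j else 20 for j in range(4)] for i in range(4)],
)
topo.validate()
```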
>
> libxc
>
> libxc provides interfaces to build PV guests with vNUMA support and, in the
> case of NUMA machines, provides initial memory allocation on physical NUMA
> nodes. This is implemented by utilizing the nodemap formed by automatic
> NUMA placement. Details are in patch #3.
>
> libxl
>
> libxl provides a way to predefine the vNUMA topology in the VM config:
> number of vnodes, memory arrangement, vcpu-to-vnode assignment, and the
> distance map.
>
> PV guest
>
> As of now, only PV guests can take advantage of vNUMA functionality. The
> vNUMA Linux patches should be applied, and NUMA support should be compiled
> into the kernel.
>
> This patchset can be pulled from
> https://git.gitorious.org/xenvnuma/xenvnuma.git:v6
> Linux patchset https://git.gitorious.org/xenvnuma/linuxvnuma.git:v6
>
> Examples of booting vNUMA enabled PV Linux guest on real NUMA machine:
>
> 1. Automatic vNUMA placement on h/w NUMA machine:
>
> VM config:
>
> memory = 16384
> vcpus = 4
> name = "rcbig"
> vnodes = 4
> vnumamem = [10,10]
> vnuma_distance = [10, 30, 10, 30]
> vcpu_to_vnode = [0, 0, 1, 1]
>
> Xen:
>
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 2569511):
> (XEN)     Node 0: 1416166
> (XEN)     Node 1: 1153345
> (XEN) Domain 5 (total: 4194304):
> (XEN)     Node 0: 2097152
> (XEN)     Node 1: 2097152
> (XEN)     Domain has 4 vnodes
> (XEN)         vnode 0 - pnode 0  (4096) MB
> (XEN)         vnode 1 - pnode 0  (4096) MB
> (XEN)         vnode 2 - pnode 1  (4096) MB
> (XEN)         vnode 3 - pnode 1  (4096) MB
> (XEN)     Domain vcpu to vnode:
> (XEN)     0 1 2 3
>
> dmesg on pv guest:
>
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
> [    0.000000]   node   0: [mem 0x00100000-0xffffffff]
> [    0.000000]   node   1: [mem 0x100000000-0x1ffffffff]
> [    0.000000]   node   2: [mem 0x200000000-0x2ffffffff]
> [    0.000000]   node   3: [mem 0x300000000-0x3ffffffff]
> [    0.000000] On node 0 totalpages: 1048479
> [    0.000000]   DMA zone: 56 pages used for memmap
> [    0.000000]   DMA zone: 21 pages reserved
> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
> [    0.000000]   DMA32 zone: 14280 pages used for memmap
> [    0.000000]   DMA32 zone: 1044480 pages, LIFO batch:31
> [    0.000000] On node 1 totalpages: 1048576
> [    0.000000]   Normal zone: 14336 pages used for memmap
> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
> [    0.000000] On node 2 totalpages: 1048576
> [    0.000000]   Normal zone: 14336 pages used for memmap
> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
> [    0.000000] On node 3 totalpages: 1048576
> [    0.000000]   Normal zone: 14336 pages used for memmap
> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
> [    0.000000] No local APIC present
> [    0.000000] APIC: disable apic facility
> [    0.000000] APIC: switched to apic NOOP
> [    0.000000] nr_irqs_gsi: 16
> [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
> [    0.000000] e820: cannot find a gap in the 32bit address range
> [    0.000000] e820: PCI devices with unassigned 32bit BARs may break!
> [    0.000000] e820: [mem 0x400100000-0x4004fffff] available for PCI
> devices
> [    0.000000] Booting paravirtualized kernel on Xen
> [    0.000000] Xen version: 4.4-unstable (preserve-AD)
> [    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4
> nr_node_ids:4
> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s85376
> r8192 d21120 u2097152
> [    0.000000] pcpu-alloc: s85376 r8192 d21120 u2097152 alloc=1*2097152
> [    0.000000] pcpu-alloc: [0] 0 [1] 1 [2] 2 [3] 3
>
>
> pv guest: numactl --hardware:
>
> root@heatpipe:~# numactl --hardware
> available: 4 nodes (0-3)
> node 0 cpus: 0
> node 0 size: 4031 MB
> node 0 free: 3997 MB
> node 1 cpus: 1
> node 1 size: 4039 MB
> node 1 free: 4022 MB
> node 2 cpus: 2
> node 2 size: 4039 MB
> node 2 free: 4023 MB
> node 3 cpus: 3
> node 3 size: 3975 MB
> node 3 free: 3963 MB
> node distances:
> node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10
>
> Comments:
> None of the configuration options were valid, so default values were
> used. Since the machine is a NUMA machine and no vcpu pinning is
> defined, the automatic NUMA node selection mechanism is used, and you
> can see above how the vnodes were split across the physical nodes.
>
> 2. Example with e820_host = 1 (32GB real NUMA machines, two nodes).
>
> pv config:
> memory = 4000
> vcpus = 8
> # The name of the domain, change this if you want more than 1 VM.
> name = "null"
> vnodes = 4
> #vnumamem = [3000, 1000]
> vdistance = [10, 40]
> #vnuma_vcpumap = [1, 0, 3, 2]
> vnuma_vnodemap = [1, 0, 1, 0]
> #vnuma_autoplacement = 1
> e820_host = 1
>
> guest boot:
>
> [    0.000000] Initializing cgroup subsys cpuset
> [    0.000000] Initializing cgroup subsys cpu
> [    0.000000] Initializing cgroup subsys cpuacct
> [    0.000000] Linux version 3.12.0+ (assert@superpipe) (gcc version
> 4.7.2 (Debi
> an 4.7.2-5) ) #111 SMP Tue Dec 3 14:54:36 EST 2013
> [    0.000000] Command line: root=/dev/xvda1 ro earlyprintk=xen debug
> loglevel=8
>  debug print_fatal_signals=1 loglvl=all guest_loglvl=all LOGLEVEL=8
> earlyprintk=
> xen sched_debug
> [    0.000000] ACPI in unprivileged domain disabled
> [    0.000000] Freeing ac228-fa000 pfn range: 318936 pages freed
> [    0.000000] 1-1 mapping on ac228->100000
> [    0.000000] Released 318936 pages of unused memory
> [    0.000000] Set 343512 page(s) to 1-1 mapping
> [    0.000000] Populating 100000-14ddd8 pfn range: 318936 pages added
> [    0.000000] e820: BIOS-provided physical RAM map:
> [    0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
> [    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
> [    0.000000] Xen: [mem 0x0000000000100000-0x00000000ac227fff] usable
> [    0.000000] Xen: [mem 0x00000000ac228000-0x00000000ac26bfff] reserved
> [    0.000000] Xen: [mem 0x00000000ac26c000-0x00000000ac57ffff] unusable
> [    0.000000] Xen: [mem 0x00000000ac580000-0x00000000ac5a0fff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5a1000-0x00000000ac5bbfff] unusable
> [    0.000000] Xen: [mem 0x00000000ac5bc000-0x00000000ac5bdfff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5be000-0x00000000ac5befff] unusable
> [    0.000000] Xen: [mem 0x00000000ac5bf000-0x00000000ac5cafff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5cb000-0x00000000ac5d9fff] unusable
> [    0.000000] Xen: [mem 0x00000000ac5da000-0x00000000ac5fafff] reserved
> [    0.000000] Xen: [mem 0x00000000ac5fb000-0x00000000ac6b6fff] unusable
> [    0.000000] Xen: [mem 0x00000000ac6b7000-0x00000000ac7fafff] ACPI NVS
> [    0.000000] Xen: [mem 0x00000000ac7fb000-0x00000000ac80efff] unusable
> [    0.000000] Xen: [mem 0x00000000ac80f000-0x00000000ac80ffff] ACPI data
> [    0.000000] Xen: [mem 0x00000000ac810000-0x00000000ac810fff] unusable
> [    0.000000] Xen: [mem 0x00000000ac811000-0x00000000ac812fff] ACPI data
> [    0.000000] Xen: [mem 0x00000000ac813000-0x00000000ad7fffff] unusable
> [    0.000000] Xen: [mem 0x00000000b0000000-0x00000000b3ffffff] reserved
> [    0.000000] Xen: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
> [    0.000000] Xen: [mem 0x00000000fed50000-0x00000000fed8ffff] reserved
> [    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
> [    0.000000] Xen: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserved
> [    0.000000] Xen: [mem 0x0000000100000000-0x000000014ddd7fff] usable
> [    0.000000] bootconsole [xenboot0] enabled
> [    0.000000] NX (Execute Disable) protection: active
> [    0.000000] DMI not present or invalid.
> [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
> [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
> [    0.000000] No AGP bridge found
> [    0.000000] e820: last_pfn = 0x14ddd8 max_arch_pfn = 0x400000000
> [    0.000000] e820: last_pfn = 0xac228 max_arch_pfn = 0x400000000
> [    0.000000] Base memory trampoline at [ffff88000009a000] 9a000 size
> 24576
> [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
> [    0.000000]  [mem 0x00000000-0x000fffff] page 4k
> [    0.000000] init_memory_mapping: [mem 0x14da00000-0x14dbfffff]
> [    0.000000]  [mem 0x14da00000-0x14dbfffff] page 4k
> [    0.000000] BRK [0x019bd000, 0x019bdfff] PGTABLE
> [    0.000000] BRK [0x019be000, 0x019befff] PGTABLE
> [    0.000000] init_memory_mapping: [mem 0x14c000000-0x14d9fffff]
> [    0.000000]  [mem 0x14c000000-0x14d9fffff] page 4k
> [    0.000000] BRK [0x019bf000, 0x019bffff] PGTABLE
> [    0.000000] BRK [0x019c0000, 0x019c0fff] PGTABLE
> [    0.000000] BRK [0x019c1000, 0x019c1fff] PGTABLE
> [    0.000000] BRK [0x019c2000, 0x019c2fff] PGTABLE
> [    0.000000] init_memory_mapping: [mem 0x100000000-0x14bffffff]
> [    0.000000]  [mem 0x100000000-0x14bffffff] page 4k
> [    0.000000] init_memory_mapping: [mem 0x00100000-0xac227fff]
> [    0.000000]  [mem 0x00100000-0xac227fff] page 4k
> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
> [    0.000000] NUMA: Initialized distance table, cnt=4
> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x3e7fffff]
> [    0.000000]   NODE_DATA [mem 0x3e7d9000-0x3e7fffff]
> [    0.000000] Initmem setup node 1 [mem 0x3e800000-0x7cffffff]
> [    0.000000]   NODE_DATA [mem 0x7cfd9000-0x7cffffff]
> [    0.000000] Initmem setup node 2 [mem 0x7d000000-0x10f5dffff]
> [    0.000000]   NODE_DATA [mem 0x10f5b9000-0x10f5dffff]
> [    0.000000] Initmem setup node 3 [mem 0x10f800000-0x14ddd7fff]
> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
> [    0.000000]   node   0: [mem 0x00100000-0x3e7fffff]
> [    0.000000]   node   1: [mem 0x3e800000-0x7cffffff]
> [    0.000000]   node   2: [mem 0x7d000000-0xac227fff]
> [    0.000000]   node   2: [mem 0x100000000-0x10f5dffff]
> [    0.000000]   node   3: [mem 0x10f5e0000-0x14ddd7fff]
> [    0.000000] On node 0 totalpages: 255903
> [    0.000000]   DMA zone: 56 pages used for memmap
> [    0.000000]   DMA zone: 21 pages reserved
> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
> [    0.000000]   DMA32 zone: 3444 pages used for memmap
> [    0.000000]   DMA32 zone: 251904 pages, LIFO batch:31
> [    0.000000] On node 1 totalpages: 256000
> [    0.000000]   DMA32 zone: 3500 pages used for memmap
> [    0.000000]   DMA32 zone: 256000 pages, LIFO batch:31
> [    0.000000] On node 2 totalpages: 256008
> [    0.000000]   DMA32 zone: 2640 pages used for memmap
> [    0.000000]   DMA32 zone: 193064 pages, LIFO batch:31
> [    0.000000]   Normal zone: 861 pages used for memmap
> [    0.000000]   Normal zone: 62944 pages, LIFO batch:15
> [    0.000000] On node 3 totalpages: 255992
> [    0.000000]   Normal zone: 3500 pages used for memmap
> [    0.000000]   Normal zone: 255992 pages, LIFO batch:31
> [    0.000000] SFI: Simple Firmware Interface v0.81
> http://simplefirmware.org
> [    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
>
> root@heatpipe:~# numactl --ha
> available: 4 nodes (0-3)
> node 0 cpus: 0 4
> node 0 size: 977 MB
> node 0 free: 947 MB
> node 1 cpus: 1 5
> node 1 size: 985 MB
> node 1 free: 974 MB
> node 2 cpus: 2 6
> node 2 size: 985 MB
> node 2 free: 973 MB
> node 3 cpus: 3 7
> node 3 size: 969 MB
> node 3 free: 958 MB
> node distances:
> node   0   1   2   3
>   0:  10  40  40  40
>   1:  40  10  40  40
>   2:  40  40  10  40
>   3:  40  40  40  10
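The two-element vdistance = [10, 40] from the config expands into the full symmetric matrix that numactl reports above. A minimal sketch of that expansion (my own illustration, not the actual toolstack parser):

```python
# Illustrative sketch (not the libxl code): expand a two-element
# vdistance = [local, remote] into the matrix "numactl --hardware" shows.
def expand_vdistance(vdistance, nr_vnodes):
    local, remote = vdistance
    return [[local if i == j else remote for j in range(nr_vnodes)]
            for i in range(nr_vnodes)]

for row in expand_vdistance([10, 40], 4):
    print(row)  # diagonal entries are 10, all others 40
```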
>
> root@heatpipe:~# numastat -m
>
> Per-node system memory usage (in MBs):
>                           Node 0          Node 1          Node 2
>  Node 3           Total
>                  --------------- --------------- ---------------
> --------------- ---------------
> MemTotal                  977.14          985.50          985.44
>  969.91         3917.99
>
> hypervisor: xl debug-keys u
>
> (XEN) 'u' pressed -> dumping numa info (now-0x2A3:F7B8CB0F)
> (XEN) Domain 2 (total: 1024000):
> (XEN)     Node 0: 415468
> (XEN)     Node 1: 608532
> (XEN)     Domain has 4 vnodes
> (XEN)         vnode 0 - pnode 1 1000 MB, vcpus: 0 4
> (XEN)         vnode 1 - pnode 0 1000 MB, vcpus: 1 5
> (XEN)         vnode 2 - pnode 1 2341 MB, vcpus: 2 6
> (XEN)         vnode 3 - pnode 0 999 MB, vcpus: 3 7
>
> This size discrepancy is caused by the way the size is calculated
> from guest pfns: end - start. Thus the hole, in this case
> ~1.3 GB, is included in the reported size.
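The arithmetic can be checked against the boot log above; a quick sketch using pfn ranges copied from that log:

```python
# Why vnode 2 is reported as 2341 MB even though it holds ~1000 MB of RAM:
# the size is computed as end - start over guest pfns, so the e820 hole
# (pfns 0xac228..0x100000, the "1-1 mapping" range in the log) is counted.
PAGE_SIZE = 4096
start_pfn, end_pfn = 0x7d000, 0x10f5e0   # vnode 2 span, from the boot log
hole_pages = 0x100000 - 0xac228          # 343512 pages, per the log

naive_mb = (end_pfn - start_pfn) * PAGE_SIZE // 2**20
real_mb = (end_pfn - start_pfn - hole_pages) * PAGE_SIZE // 2**20
print(naive_mb, real_mb)                 # 2341 1000, matching the dump
```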
>
> 3. Zero (default) vNUMA configuration for a pv domain.
> There will be at least one vNUMA node if no vNUMA topology was
> specified.
>
> pv config:
>
> memory = 4000
> vcpus = 8
> # The name of the domain, change this if you want more than 1 VM.
> name = "null"
> #vnodes = 4
> vnumamem = [3000, 1000]
> vdistance = [10, 40]
> vnuma_vcpumap = [1, 0, 3, 2]
> vnuma_vnodemap = [1, 0, 1, 0]
> vnuma_autoplacement = 1
> e820_host = 1
>
> boot:
> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
> [    0.000000] NUMA: Initialized distance table, cnt=1
> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x14ddd7fff]
> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
> [    0.000000]   node   0: [mem 0x00100000-0xac227fff]
> [    0.000000]   node   0: [mem 0x100000000-0x14ddd7fff]
>
> root@heatpipe:~# numactl --ha
> maxn: 0
> available: 1 nodes (0)
> node 0 cpus: 0 1 2 3 4 5 6 7
> node 0 size: 3918 MB
> node 0 free: 3853 MB
> node distances:
> node   0
>   0:  10
>
> root@heatpipe:~# numastat -m
>
> Per-node system memory usage (in MBs):
>                           Node 0           Total
>                  --------------- ---------------
> MemTotal                 3918.74         3918.74
>
> hypervisor: xl debug-keys u
>
> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 6787432):
> (XEN)     Node 0: 3485706
> (XEN)     Node 1: 3301726
> (XEN) Domain 3 (total: 1024000):
> (XEN)     Node 0: 512000
> (XEN)     Node 1: 512000
> (XEN)     Domain has 1 vnodes
> (XEN)         vnode 0 - pnode any 5341 MB, vcpus: 0 1 2 3 4 5 6 7
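The fallback this example demonstrates — a single vnode covering all memory and vcpus when no topology is specified — can be sketched as follows (a hypothetical helper for illustration, not the actual libxl function):

```python
# Hypothetical sketch of the described fallback: with no vNUMA topology
# in the guest config, build a single vnode spanning everything.
def default_vnuma_topology(memory_mb, nr_vcpus):
    return {
        "nr_vnodes": 1,
        "vnode_mem_mb": [memory_mb],      # all memory on vnode 0
        "vcpu_to_vnode": [0] * nr_vcpus,  # every vcpu on vnode 0
        "vdistance": [[10]],              # only the local distance
    }

topo = default_vnuma_topology(4000, 8)
print(topo["nr_vnodes"], topo["vcpu_to_vnode"])
```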
>
>
> Notes:
>
> To enable vNUMA in a pv guest, the corresponding patch set should be
> applied: https://git.gitorious.org/xenvnuma/linuxvnuma.git:v5
> or
> https://www.gitorious.org/xenvnuma/linuxvnuma/commit/deaa014257b99f57c76fbba12a28907786cbe17d
>
>
> Issues:
>
> The most important issue right now is automatic NUMA balancing for the
> Linux pv kernel, as it is corrupting user space memory. As of v3 of
> this patch series, Linux kernel 3.13 seemed to perform correctly, but
> with the recent changes the issue is back.
> See https://lkml.org/lkml/2013/10/31/133 for an urgent patch that
> presumably had NUMA balancing working. Since 3.12 there have been
> multiple changes to automatic NUMA balancing. I am currently back to
> investigating whether anything should be done from the hypervisor side
> and will work with the kernel maintainers.
>
> Elena Ufimtseva (7):
>   xen: vNUMA support for PV guests
>   libxc: Plumb Xen with vNUMA topology for domain
>   xl: vnuma memory parsing and supplement functions
>   xl: vnuma distance, vcpu and pnode masks parser
>   libxc: vnuma memory domain allocation
>   libxl: vNUMA supporting interface
>   xen: adds vNUMA info debug-key u
>
>  docs/man/xl.cfg.pod.5        |   60 +++++++
>  tools/libxc/xc_dom.h         |   10 ++
>  tools/libxc/xc_dom_x86.c     |   63 +++++--
>  tools/libxc/xc_domain.c      |   64 +++++++
>  tools/libxc/xenctrl.h        |    9 +
>  tools/libxc/xg_private.h     |    1 +
>  tools/libxl/libxl.c          |   18 ++
>  tools/libxl/libxl.h          |   20 +++
>  tools/libxl/libxl_arch.h     |    6 +
>  tools/libxl/libxl_dom.c      |  158 ++++++++++++++++--
>  tools/libxl/libxl_internal.h |    6 +
>  tools/libxl/libxl_numa.c     |   49 ++++++
>  tools/libxl/libxl_types.idl  |    6 +-
>  tools/libxl/libxl_vnuma.h    |   11 ++
>  tools/libxl/libxl_x86.c      |  123 ++++++++++++++
>  tools/libxl/xl_cmdimpl.c     |  380
> ++++++++++++++++++++++++++++++++++++++++++
>  xen/arch/x86/numa.c          |   30 +++-
>  xen/common/domain.c          |   10 ++
>  xen/common/domctl.c          |   79 +++++++++
>  xen/common/memory.c          |   96 +++++++++++
>  xen/include/public/domctl.h  |   29 ++++
>  xen/include/public/memory.h  |   17 ++
>  xen/include/public/vnuma.h   |   59 +++++++
>  xen/include/xen/domain.h     |    8 +
>  xen/include/xen/sched.h      |    1 +
>  25 files changed, 1282 insertions(+), 31 deletions(-)
>  create mode 100644 tools/libxl/libxl_vnuma.h
>  create mode 100644 xen/include/public/vnuma.h
>
> --
> 1.7.10.4
>
>


-- 

Yechen Li

Team of System Virtualization and Cloud Computing
School of Electronic Engineering and Computer Science,
Peking University, China

Nothing is impossible because impossible itself  says: " I'm possible "
lccycc From PKU

--001a11c3e24602829204f2491e03
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div><div>Hi Elena, <br></div>The patch on gitorious =
is not available. Have you got a newest version?<br></div></div>I have a id=
ea and need to modify your patch a little to manage the change of memory.<b=
r>
Sorry for being missing so long :-)<br></div><div class=3D"gmail_extra"><br=
><br><div class=3D"gmail_quote">On Wed, Dec 4, 2013 at 1:47 PM, Elena Ufimt=
seva <span dir=3D"ltr">&lt;<a href=3D"mailto:ufimtseva@gmail.com" target=3D=
"_blank">ufimtseva@gmail.com</a>&gt;</span> wrote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1p=
x #ccc solid;padding-left:1ex">vNUMA introduction<br>
<br>
This series of patches introduces vNUMA topology awareness and<br>
provides interfaces and data structures to enable vNUMA for<br>
PV guests. There is a plan to extend this support for dom0 and<br>
HVM domains.<br>
<br>
vNUMA topology support should be supported by PV guest kernel.<br>
Corresponging patches should be applied.<br>
<br>
Introduction<br>
-------------<br>
<br>
vNUMA topology is exposed to the PV guest to improve performance when runni=
ng<br>
workloads on NUMA machines.<br>
XEN vNUMA implementation provides a way to create vNUMA-enabled guests on N=
UMA/UMA<br>
and map vNUMA topology to physical NUMA in a optimal way.<br>
<br>
XEN vNUMA support<br>
<br>
Current set of patches introduces subop hypercall that is available for enl=
ightened<br>
PV guests with vNUMA patches applied.<br>
<br>
Domain structure was modified to reflect per-domain vNUMA topology for use =
in other<br>
vNUMA-aware subsystems (e.g. ballooning).<br>
<br>
libxc<br>
<br>
libxc provides interfaces to build PV guests with vNUMA support and in case=
 of NUMA<br>
machines provides initial memory allocation on physical NUMA nodes. This im=
plemented by<br>
utilizing nodemap formed by automatic NUMA placement. Details are in patch =
#3.<br>
<br>
libxl<br>
<br>
libxl provides a way to predefine in VM config vNUMA topology - number of v=
nodes,<br>
memory arrangement, vcpus to vnodes assignment, distance map.<br>
<br>
PV guest<br>
<br>
As of now, only PV guest can take advantage of vNUMA functionality. vNUMA L=
inux patches<br>
should be applied and NUMA support should be compiled in kernel.<br>
<br>
This patchset can be pulled from <a href=3D"https://git.gitorious.org/xenvn=
uma/xenvnuma.git:v6" target=3D"_blank">https://git.gitorious.org/xenvnuma/x=
envnuma.git:v6</a><br>
Linux patchset <a href=3D"https://git.gitorious.org/xenvnuma/linuxvnuma.git=
:v6" target=3D"_blank">https://git.gitorious.org/xenvnuma/linuxvnuma.git:v6=
</a><br>
<br>
Examples of booting vNUMA enabled PV Linux guest on real NUMA machine:<br>
<br>
1. Automatic vNUMA placement on h/w NUMA machine:<br>
<br>
VM config:<br>
<br>
memory =3D 16384<br>
vcpus =3D 4<br>
name =3D &quot;rcbig&quot;<br>
vnodes =3D 4<br>
vnumamem =3D [10,10]<br>
vnuma_distance =3D [10, 30, 10, 30]<br>
vcpu_to_vnode =3D [0, 0, 1, 1]<br>
<br>
Xen:<br>
<br>
(XEN) Memory location of each domain:<br>
(XEN) Domain 0 (total: 2569511):<br>
(XEN) =A0 =A0 Node 0: 1416166<br>
(XEN) =A0 =A0 Node 1: 1153345<br>
(XEN) Domain 5 (total: 4194304):<br>
(XEN) =A0 =A0 Node 0: 2097152<br>
(XEN) =A0 =A0 Node 1: 2097152<br>
(XEN) =A0 =A0 Domain has 4 vnodes<br>
(XEN) =A0 =A0 =A0 =A0 vnode 0 - pnode 0 =A0(4096) MB<br>
(XEN) =A0 =A0 =A0 =A0 vnode 1 - pnode 0 =A0(4096) MB<br>
(XEN) =A0 =A0 =A0 =A0 vnode 2 - pnode 1 =A0(4096) MB<br>
(XEN) =A0 =A0 =A0 =A0 vnode 3 - pnode 1 =A0(4096) MB<br>
(XEN) =A0 =A0 Domain vcpu to vnode:<br>
(XEN) =A0 =A0 0 1 2 3<br>
<br>
dmesg on pv guest:<br>
<br>
[ =A0 =A00.000000] Movable zone start for each node<br>
[ =A0 =A00.000000] Early memory node ranges<br>
[ =A0 =A00.000000] =A0 node =A0 0: [mem 0x00001000-0x0009ffff]<br>
[ =A0 =A00.000000] =A0 node =A0 0: [mem 0x00100000-0xffffffff]<br>
[ =A0 =A00.000000] =A0 node =A0 1: [mem 0x100000000-0x1ffffffff]<br>
[ =A0 =A00.000000] =A0 node =A0 2: [mem 0x200000000-0x2ffffffff]<br>
[ =A0 =A00.000000] =A0 node =A0 3: [mem 0x300000000-0x3ffffffff]<br>
[ =A0 =A00.000000] On node 0 totalpages: 1048479<br>
[ =A0 =A00.000000] =A0 DMA zone: 56 pages used for memmap<br>
[ =A0 =A00.000000] =A0 DMA zone: 21 pages reserved<br>
[ =A0 =A00.000000] =A0 DMA zone: 3999 pages, LIFO batch:0<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 14280 pages used for memmap<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 1044480 pages, LIFO batch:31<br>
[ =A0 =A00.000000] On node 1 totalpages: 1048576<br>
[ =A0 =A00.000000] =A0 Normal zone: 14336 pages used for memmap<br>
[ =A0 =A00.000000] =A0 Normal zone: 1048576 pages, LIFO batch:31<br>
[ =A0 =A00.000000] On node 2 totalpages: 1048576<br>
[ =A0 =A00.000000] =A0 Normal zone: 14336 pages used for memmap<br>
[ =A0 =A00.000000] =A0 Normal zone: 1048576 pages, LIFO batch:31<br>
[ =A0 =A00.000000] On node 3 totalpages: 1048576<br>
[ =A0 =A00.000000] =A0 Normal zone: 14336 pages used for memmap<br>
[ =A0 =A00.000000] =A0 Normal zone: 1048576 pages, LIFO batch:31<br>
[ =A0 =A00.000000] SFI: Simple Firmware Interface v0.81 <a href=3D"http://s=
implefirmware.org" target=3D"_blank">http://simplefirmware.org</a><br>
[ =A0 =A00.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs<br>
[ =A0 =A00.000000] No local APIC present<br>
[ =A0 =A00.000000] APIC: disable apic facility<br>
[ =A0 =A00.000000] APIC: switched to apic NOOP<br>
[ =A0 =A00.000000] nr_irqs_gsi: 16<br>
[ =A0 =A00.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff=
]<br>
[ =A0 =A00.000000] e820: cannot find a gap in the 32bit address range<br>
[ =A0 =A00.000000] e820: PCI devices with unassigned 32bit BARs may break!<=
br>
[ =A0 =A00.000000] e820: [mem 0x400100000-0x4004fffff] available for PCI de=
vices<br>
[ =A0 =A00.000000] Booting paravirtualized kernel on Xen<br>
[ =A0 =A00.000000] Xen version: 4.4-unstable (preserve-AD)<br>
[ =A0 =A00.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids=
:4 nr_node_ids:4<br>
[ =A0 =A00.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s85376 r=
8192 d21120 u2097152<br>
[ =A0 =A00.000000] pcpu-alloc: s85376 r8192 d21120 u2097152 alloc=3D1*20971=
52<br>
[ =A0 =A00.000000] pcpu-alloc: [0] 0 [1] 1 [2] 2 [3] 3<br>
<br>
<br>
pv guest: numactl --hardware:<br>
<br>
root@heatpipe:~# numactl --hardware<br>
available: 4 nodes (0-3)<br>
node 0 cpus: 0<br>
node 0 size: 4031 MB<br>
node 0 free: 3997 MB<br>
node 1 cpus: 1<br>
node 1 size: 4039 MB<br>
node 1 free: 4022 MB<br>
node 2 cpus: 2<br>
node 2 size: 4039 MB<br>
node 2 free: 4023 MB<br>
node 3 cpus: 3<br>
node 3 size: 3975 MB<br>
node 3 free: 3963 MB<br>
node distances:<br>
node =A0 0 =A0 1 =A0 2 =A0 3<br>
=A0 0: =A010 =A020 =A020 =A020<br>
=A0 1: =A020 =A010 =A020 =A020<br>
=A0 2: =A020 =A020 =A010 =A020<br>
=A0 3: =A020 =A020 =A020 =A010<br>
<br>
Comments:<br>
None of the configuration options are correct so default values were used.<=
br>
Since machine is NUMA machine and there is no vcpu pinning defines, NUMA<br=
>
automatic node selection mechanism is used and you can see how vnodes<br>
were split across physical nodes.<br>
<br>
2. Example with e820_host =3D 1 (32GB real NUMA machines, two nodes).<br>
<br>
pv config:<br>
memory =3D 4000<br>
vcpus =3D 8<br>
# The name of the domain, change this if you want more than 1 VM.<br>
name =3D &quot;null&quot;<br>
vnodes =3D 4<br>
#vnumamem =3D [3000, 1000]<br>
vdistance =3D [10, 40]<br>
#vnuma_vcpumap =3D [1, 0, 3, 2]<br>
vnuma_vnodemap =3D [1, 0, 1, 0]<br>
#vnuma_autoplacement =3D 1<br>
e820_host =3D 1<br>
<br>
guest boot:<br>
<br>
[ =A0 =A00.000000] Initializing cgroup subsys cpuset<br>
[ =A0 =A00.000000] Initializing cgroup subsys cpu<br>
[ =A0 =A00.000000] Initializing cgroup subsys cpuacct<br>
[ =A0 =A00.000000] Linux version 3.12.0+ (assert@superpipe) (gcc version 4.=
7.2 (Debi<br>
an 4.7.2-5) ) #111 SMP Tue Dec 3 14:54:36 EST 2013<br>
[ =A0 =A00.000000] Command line: root=3D/dev/xvda1 ro earlyprintk=3Dxen deb=
ug loglevel=3D8<br>
=A0debug print_fatal_signals=3D1 loglvl=3Dall guest_loglvl=3Dall LOGLEVEL=
=3D8 earlyprintk=3D<br>
xen sched_debug<br>
[ =A0 =A00.000000] ACPI in unprivileged domain disabled<br>
[ =A0 =A00.000000] Freeing ac228-fa000 pfn range: 318936 pages freed<br>
[ =A0 =A00.000000] 1-1 mapping on ac228-&gt;100000<br>
[ =A0 =A00.000000] Released 318936 pages of unused memory<br>
[ =A0 =A00.000000] Set 343512 page(s) to 1-1 mapping<br>
[ =A0 =A00.000000] Populating 100000-14ddd8 pfn range: 318936 pages added<b=
r>
[ =A0 =A00.000000] e820: BIOS-provided physical RAM map:<br>
[ =A0 =A00.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable<=
br>
[ =A0 =A00.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x0000000000100000-0x00000000ac227fff] usable<=
br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac228000-0x00000000ac26bfff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac26c000-0x00000000ac57ffff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac580000-0x00000000ac5a0fff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5a1000-0x00000000ac5bbfff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5bc000-0x00000000ac5bdfff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5be000-0x00000000ac5befff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5bf000-0x00000000ac5cafff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5cb000-0x00000000ac5d9fff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5da000-0x00000000ac5fafff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac5fb000-0x00000000ac6b6fff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac6b7000-0x00000000ac7fafff] ACPI NV=
S<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac7fb000-0x00000000ac80efff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac80f000-0x00000000ac80ffff] ACPI da=
ta<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac810000-0x00000000ac810fff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac811000-0x00000000ac812fff] ACPI da=
ta<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ac813000-0x00000000ad7fffff] unusabl=
e<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000b0000000-0x00000000b3ffffff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000fed20000-0x00000000fed3ffff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000fed50000-0x00000000fed8ffff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserve=
d<br>
[ =A0 =A00.000000] Xen: [mem 0x0000000100000000-0x000000014ddd7fff] usable<=
br>
[ =A0 =A00.000000] bootconsole [xenboot0] enabled<br>
[ =A0 =A00.000000] NX (Execute Disable) protection: active<br>
[ =A0 =A00.000000] DMI not present or invalid.<br>
[ =A0 =A00.000000] e820: update [mem 0x00000000-0x00000fff] usable =3D=3D&g=
t; reserved<br>
[ =A0 =A00.000000] e820: remove [mem 0x000a0000-0x000fffff] usable<br>
[ =A0 =A00.000000] No AGP bridge found<br>
[ =A0 =A00.000000] e820: last_pfn =3D 0x14ddd8 max_arch_pfn =3D 0x400000000=
<br>
[ =A0 =A00.000000] e820: last_pfn =3D 0xac228 max_arch_pfn =3D 0x400000000<=
br>
[ =A0 =A00.000000] Base memory trampoline at [ffff88000009a000] 9a000 size =
24576<br>
[ =A0 =A00.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]<br>
[ =A0 =A00.000000] =A0[mem 0x00000000-0x000fffff] page 4k<br>
[ =A0 =A00.000000] init_memory_mapping: [mem 0x14da00000-0x14dbfffff]<br>
[ =A0 =A00.000000] =A0[mem 0x14da00000-0x14dbfffff] page 4k<br>
[ =A0 =A00.000000] BRK [0x019bd000, 0x019bdfff] PGTABLE<br>
[ =A0 =A00.000000] BRK [0x019be000, 0x019befff] PGTABLE<br>
[ =A0 =A00.000000] init_memory_mapping: [mem 0x14c000000-0x14d9fffff]<br>
[ =A0 =A00.000000] =A0[mem 0x14c000000-0x14d9fffff] page 4k<br>
[ =A0 =A00.000000] BRK [0x019bf000, 0x019bffff] PGTABLE<br>
[ =A0 =A00.000000] BRK [0x019c0000, 0x019c0fff] PGTABLE<br>
[ =A0 =A00.000000] BRK [0x019c1000, 0x019c1fff] PGTABLE<br>
[ =A0 =A00.000000] BRK [0x019c2000, 0x019c2fff] PGTABLE<br>
[ =A0 =A00.000000] init_memory_mapping: [mem 0x100000000-0x14bffffff]<br>
[ =A0 =A00.000000] =A0[mem 0x100000000-0x14bffffff] page 4k<br>
[ =A0 =A00.000000] init_memory_mapping: [mem 0x00100000-0xac227fff]<br>
[ =A0 =A00.000000] =A0[mem 0x00100000-0xac227fff] page 4k<br>
[ =A0 =A00.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]<br>
[ =A0 =A00.000000] =A0[mem 0x14dc00000-0x14ddd7fff] page 4k<br>
[ =A0 =A00.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]<br>
[ =A0 =A00.000000] NUMA: Initialized distance table, cnt=3D4<br>
[ =A0 =A00.000000] Initmem setup node 0 [mem 0x00000000-0x3e7fffff]<br>
[ =A0 =A00.000000] =A0 NODE_DATA [mem 0x3e7d9000-0x3e7fffff]<br>
[ =A0 =A00.000000] Initmem setup node 1 [mem 0x3e800000-0x7cffffff]<br>
[ =A0 =A00.000000] =A0 NODE_DATA [mem 0x7cfd9000-0x7cffffff]<br>
[ =A0 =A00.000000] Initmem setup node 2 [mem 0x7d000000-0x10f5dffff]<br>
[ =A0 =A00.000000] =A0 NODE_DATA [mem 0x10f5b9000-0x10f5dffff]<br>
[ =A0 =A00.000000] Initmem setup node 3 [mem 0x10f800000-0x14ddd7fff]<br>
[ =A0 =A00.000000] =A0 NODE_DATA [mem 0x14ddad000-0x14ddd3fff]<br>
[ =A0 =A00.000000] Zone ranges:<br>
[ =A0 =A00.000000] =A0 DMA =A0 =A0 =A0[mem 0x00001000-0x00ffffff]<br>
[ =A0 =A00.000000] =A0 DMA32 =A0 =A0[mem 0x01000000-0xffffffff]<br>
[ =A0 =A00.000000] =A0 Normal =A0 [mem 0x100000000-0x14ddd7fff]<br>
[ =A0 =A00.000000] Movable zone start for each node<br>
[ =A0 =A00.000000] Early memory node ranges<br>
[ =A0 =A00.000000] =A0 node =A0 0: [mem 0x00001000-0x0009ffff]<br>
[ =A0 =A00.000000] =A0 node =A0 0: [mem 0x00100000-0x3e7fffff]<br>
[ =A0 =A00.000000] =A0 node =A0 1: [mem 0x3e800000-0x7cffffff]<br>
[ =A0 =A00.000000] =A0 node =A0 2: [mem 0x7d000000-0xac227fff]<br>
[ =A0 =A00.000000] =A0 node =A0 2: [mem 0x100000000-0x10f5dffff]<br>
[ =A0 =A00.000000] =A0 node =A0 3: [mem 0x10f5e0000-0x14ddd7fff]<br>
[ =A0 =A00.000000] On node 0 totalpages: 255903<br>
[ =A0 =A00.000000] =A0 DMA zone: 56 pages used for memmap<br>
[ =A0 =A00.000000] =A0 DMA zone: 21 pages reserved<br>
[ =A0 =A00.000000] =A0 DMA zone: 3999 pages, LIFO batch:0<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 3444 pages used for memmap<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 251904 pages, LIFO batch:31<br>
[ =A0 =A00.000000] On node 1 totalpages: 256000<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 3500 pages used for memmap<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 256000 pages, LIFO batch:31<br>
[ =A0 =A00.000000] On node 2 totalpages: 256008<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 2640 pages used for memmap<br>
[ =A0 =A00.000000] =A0 DMA32 zone: 193064 pages, LIFO batch:31<br>
[ =A0 =A00.000000] =A0 Normal zone: 861 pages used for memmap<br>
[ =A0 =A00.000000] =A0 Normal zone: 62944 pages, LIFO batch:15<br>
[ =A0 =A00.000000] On node 3 totalpages: 255992<br>
[ =A0 =A00.000000] =A0 Normal zone: 3500 pages used for memmap<br>
[ =A0 =A00.000000] =A0 Normal zone: 255992 pages, LIFO batch:31<br>
[ =A0 =A00.000000] SFI: Simple Firmware Interface v0.81 <a href=3D"http://s=
implefirmware.org" target=3D"_blank">http://simplefirmware.org</a><br>
[ =A0 =A00.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs<br>
<br>
root@heatpipe:~# numactl --ha<br>
available: 4 nodes (0-3)<br>
node 0 cpus: 0 4<br>
node 0 size: 977 MB<br>
node 0 free: 947 MB<br>
node 1 cpus: 1 5<br>
node 1 size: 985 MB<br>
node 1 free: 974 MB<br>
node 2 cpus: 2 6<br>
node 2 size: 985 MB<br>
node 2 free: 973 MB<br>
node 3 cpus: 3 7<br>
node 3 size: 969 MB<br>
node 3 free: 958 MB<br>
node distances:<br>
node =A0 0 =A0 1 =A0 2 =A0 3<br>
=A0 0: =A010 =A040 =A040 =A040<br>
=A0 1: =A040 =A010 =A040 =A040<br>
=A0 2: =A040 =A040 =A010 =A040<br>
=A0 3: =A040 =A040 =A040 =A010<br>
<br>
root@heatpipe:~# numastat -m<br>
<br>
Per-node system memory usage (in MBs):<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 Node 0 =A0 =A0 =A0 =A0 =
=A0Node 1 =A0 =A0 =A0 =A0 =A0Node 2 =A0 =A0 =A0 =A0 =A0Node 3 =A0 =A0 =A0 =
=A0 =A0 Total<br>
=A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0--------------- --------------- --------=
------- --------------- ---------------<br>
MemTotal =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0 =A0977.14 =A0 =A0 =A0 =A0 =A0985.5=
0 =A0 =A0 =A0 =A0 =A0985.44 =A0 =A0 =A0 =A0 =A0969.91 =A0 =A0 =A0 =A0 3917.=
99<br>
<br>
hypervisor: xl debug-keys u<br>
<br>
(XEN) &#39;u&#39; pressed -&gt; dumping numa info (now-0x2A3:F7B8CB0F)<br>
(XEN) Domain 2 (total: 1024000):<br>
(XEN) =A0 =A0 Node 0: 415468<br>
(XEN) =A0 =A0 Node 1: 608532<br>
(XEN) =A0 =A0 Domain has 4 vnodes<br>
(XEN) =A0 =A0 =A0 =A0 vnode 0 - pnode 1 1000 MB, vcpus: 0 4<br>
(XEN) =A0 =A0 =A0 =A0 vnode 1 - pnode 0 1000 MB, vcpus: 1 5<br>
(XEN) =A0 =A0 =A0 =A0 vnode 2 - pnode 1 2341 MB, vcpus: 2 6<br>
(XEN) =A0 =A0 =A0 =A0 vnode 3 - pnode 0 999 MB, vcpus: 3 7<br>
<br>
This size descrepancy caused by the way how size if calculated<br>
from guest pfns: end - start. Thus the hole size in this case of<br>
~1,3Gb is included in the size.<br>
<br>
3. zero vNUMA configuration for every pv domain.<br>
Will be at least one vnuma node if vnuma topology was not<br>
specified.<br>
<br>
pv config:

memory = 4000
vcpus = 8
# The name of the domain, change this if you want more than 1 VM.
name = "null"
#vnodes = 4
vnumamem = [3000, 1000]
vdistance = [10, 40]
vnuma_vcpumap = [1, 0, 3, 2]
vnuma_vnodemap = [1, 0, 1, 0]
vnuma_autoplacement = 1
e820_host = 1

boot:
[    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
[    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
[    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
[    0.000000] NUMA: Initialized distance table, cnt=1
[    0.000000] Initmem setup node 0 [mem 0x00000000-0x14ddd7fff]
[    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
[    0.000000]   node   0: [mem 0x00100000-0xac227fff]
[    0.000000]   node   0: [mem 0x100000000-0x14ddd7fff]
<br>
root@heatpipe:~# numactl --ha
maxn: 0
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 3918 MB
node 0 free: 3853 MB
node distances:
node   0
  0:  10

root@heatpipe:~# numastat -m

Per-node system memory usage (in MBs):
                          Node 0           Total
                 --------------- ---------------
MemTotal                 3918.74         3918.74
<br>
hypervisor: xl debug-keys u

(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 6787432):
(XEN)     Node 0: 3485706
(XEN)     Node 1: 3301726
(XEN) Domain 3 (total: 1024000):
(XEN)     Node 0: 512000
(XEN)     Node 1: 512000
(XEN)     Domain has 1 vnodes
(XEN)         vnode 0 - pnode any 5341 MB, vcpus: 0 1 2 3 4 5 6 7
<br>
<br>
Notes:

To enable vNUMA in a PV guest, the corresponding patch set should be
applied: https://git.gitorious.org/xenvnuma/linuxvnuma.git:v5
or https://www.gitorious.org/xenvnuma/linuxvnuma/commit/deaa014257b99f57c76fbba12a28907786cbe17d.

Issues:

The most important issue right now is automatic NUMA balancing for the
Linux PV kernel, as it is corrupting user-space memory. Since v3 of this
patch series, Linux kernel 3.13 seemed to perform correctly, but with
the recent changes the issue is back. See
https://lkml.org/lkml/2013/10/31/133 for an urgent patch that presumably
had NUMA balancing working. Since 3.12 there have been multiple changes
to automatic NUMA balancing. I am currently back to investigating
whether anything should be done on the hypervisor side, and will work
with the kernel maintainers.

Elena Ufimtseva (7):
  xen: vNUMA support for PV guests
  libxc: Plumb Xen with vNUMA topology for domain
  xl: vnuma memory parsing and supplement functions
  xl: vnuma distance, vcpu and pnode masks parser
  libxc: vnuma memory domain allocation
  libxl: vNUMA supporting interface
  xen: adds vNUMA info debug-key u

 docs/man/xl.cfg.pod.5        |   60 +++++++
 tools/libxc/xc_dom.h         |   10 ++
 tools/libxc/xc_dom_x86.c     |   63 +++++--
 tools/libxc/xc_domain.c      |   64 +++++++
 tools/libxc/xenctrl.h        |    9 +
 tools/libxc/xg_private.h     |    1 +
 tools/libxl/libxl.c          |   18 ++
 tools/libxl/libxl.h          |   20 +++
 tools/libxl/libxl_arch.h     |    6 +
 tools/libxl/libxl_dom.c      |  158 ++++++++++++++++--
 tools/libxl/libxl_internal.h |    6 +
 tools/libxl/libxl_numa.c     |   49 ++++++
 tools/libxl/libxl_types.idl  |    6 +-
 tools/libxl/libxl_vnuma.h    |   11 ++
 tools/libxl/libxl_x86.c      |  123 ++++++++++++++
 tools/libxl/xl_cmdimpl.c     |  380 ++++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/numa.c          |   30 +++-
 xen/common/domain.c          |   10 ++
 xen/common/domctl.c          |   79 +++++++++
 xen/common/memory.c          |   96 +++++++++++
 xen/include/public/domctl.h  |   29 ++++
 xen/include/public/memory.h  |   17 ++
 xen/include/public/vnuma.h   |   59 +++++++
 xen/include/xen/domain.h     |    8 +
 xen/include/xen/sched.h      |    1 +
 25 files changed, 1282 insertions(+), 31 deletions(-)
 create mode 100644 tools/libxl/libxl_vnuma.h
 create mode 100644 xen/include/public/vnuma.h

--
1.7.10.4
-- 
Yechen Li

Team of System Virtualization and Cloud Computing
School of Electronic Engineering and Computer Science,
Peking University, China

Nothing is impossible because impossible itself says: "I'm possible"
lccycc From PKU

--001a11c3e24602829204f2491e03--


--===============4478325514760300068==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4478325514760300068==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 12:57:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 12:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvrR-0003vy-QH; Thu, 13 Feb 2014 12:57:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDvrP-0003vt-Mu
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 12:57:47 +0000
Received: from [85.158.143.35:48626] by server-3.bemta-4.messagelabs.com id
	29/86-11539-B41CCF25; Thu, 13 Feb 2014 12:57:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392296265!5413272!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15883 invoked from network); 13 Feb 2014 12:57:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 12:57:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 12:57:45 +0000
Message-Id: <52FCCF57020000780011C114@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 12:57:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part5063F357.2__="
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools/configure: correct --enable-blktap1 help
	text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part5063F357.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Will want tools/configure to be re-generated.

--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -58,7 +58,7 @@ static int __init parse_ivrs_table(struc
 AX_ARG_DEFAULT_ENABLE([seabios], [Disable SeaBIOS])
 AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of tools])
 AX_ARG_DEFAULT_DISABLE([xend], [Enable xend toolstack])
-AX_ARG_DEFAULT_DISABLE([blktap1], [Disable blktap1 tools])
+AX_ARG_DEFAULT_DISABLE([blktap1], [Enable blktap1 tools])
 
 AC_ARG_ENABLE([qemu-traditional],
     AS_HELP_STRING([--enable-qemu-traditional],




--=__Part5063F357.2__=
Content-Type: text/plain; name="tools-configure-blktap1-help.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="tools-configure-blktap1-help.patch"

tools/configure: correct --enable-blktap1 help text

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Will want tools/configure to be re-generated.

--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -58,7 +58,7 @@ static int __init parse_ivrs_table(struc
 AX_ARG_DEFAULT_ENABLE([seabios], [Disable SeaBIOS])
 AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of tools])
 AX_ARG_DEFAULT_DISABLE([xend], [Enable xend toolstack])
-AX_ARG_DEFAULT_DISABLE([blktap1], [Disable blktap1 tools])
+AX_ARG_DEFAULT_DISABLE([blktap1], [Enable blktap1 tools])
 
 AC_ARG_ENABLE([qemu-traditional],
     AS_HELP_STRING([--enable-qemu-traditional],
--=__Part5063F357.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part5063F357.2__=--


From xen-devel-bounces@lists.xen.org Thu Feb 13 13:04:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 13:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDvxq-0004KL-N3; Thu, 13 Feb 2014 13:04:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDvxp-0004KG-Hv
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 13:04:25 +0000
Received: from [193.109.254.147:54178] by server-6.bemta-14.messagelabs.com id
	15/8B-03396-8D2CCF25; Thu, 13 Feb 2014 13:04:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392296663!4102219!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5309 invoked from network); 13 Feb 2014 13:04:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 13:04:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102215225"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 13:04:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 08:04:21 -0500
Message-ID: <1392296660.31985.8.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 13 Feb 2014 13:04:20 +0000
In-Reply-To: <52FCCF57020000780011C114@nat28.tlf.novell.com>
References: <52FCCF57020000780011C114@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/configure: correct --enable-blktap1
	help text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 12:57 +0000, Jan Beulich wrote:
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
> Will want tools/configure to be re-generated.
> 
> --- a/tools/configure.ac
> +++ b/tools/configure.ac
> @@ -58,7 +58,7 @@ static int __init parse_ivrs_table(struc
>  AX_ARG_DEFAULT_ENABLE([seabios], [Disable SeaBIOS])
>  AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of tools])
>  AX_ARG_DEFAULT_DISABLE([xend], [Enable xend toolstack])
> -AX_ARG_DEFAULT_DISABLE([blktap1], [Disable blktap1 tools])
> +AX_ARG_DEFAULT_DISABLE([blktap1], [Enable blktap1 tools])
>  
>  AC_ARG_ENABLE([qemu-traditional],
>      AS_HELP_STRING([--enable-qemu-traditional],
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 13:14:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 13:14:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDw7E-0004k2-TU; Thu, 13 Feb 2014 13:14:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDw7D-0004ju-KX
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 13:14:07 +0000
Received: from [85.158.143.35:41556] by server-1.bemta-4.messagelabs.com id
	6A/4D-31661-F15CCF25; Thu, 13 Feb 2014 13:14:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392297246!5424643!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19249 invoked from network); 13 Feb 2014 13:14:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 13:14:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 13:14:06 +0000
Message-Id: <52FCD32C020000780011C143@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 13:14:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <52FCCF57020000780011C114@nat28.tlf.novell.com>
	<1392296660.31985.8.camel@kazak.uk.xensource.com>
In-Reply-To: <1392296660.31985.8.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] tools/configure: correct --enable-blktap1
 help text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 14:04, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Thu, 2014-02-13 at 12:57 +0000, Jan Beulich wrote:
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks.

>> ---
>> Will want tools/configure to be re-generated.

Would it be possible for one of you - having the right version of
the auto tools in place - to apply this? Or should I commit it
without re-generating tools/configure for the time being?

Thanks, Jan

>> --- a/tools/configure.ac
>> +++ b/tools/configure.ac
>> @@ -58,7 +58,7 @@ static int __init parse_ivrs_table(struc
>>  AX_ARG_DEFAULT_ENABLE([seabios], [Disable SeaBIOS])
>>  AX_ARG_DEFAULT_ENABLE([debug], [Disable debug build of tools])
>>  AX_ARG_DEFAULT_DISABLE([xend], [Enable xend toolstack])
>> -AX_ARG_DEFAULT_DISABLE([blktap1], [Disable blktap1 tools])
>> +AX_ARG_DEFAULT_DISABLE([blktap1], [Enable blktap1 tools])
>>  
>>  AC_ARG_ENABLE([qemu-traditional],
>>      AS_HELP_STRING([--enable-qemu-traditional],
>> 
>> 
>> 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 13:15:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 13:15:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDw8i-0004ob-DN; Thu, 13 Feb 2014 13:15:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDw8h-0004oP-4U
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 13:15:39 +0000
Received: from [193.109.254.147:45628] by server-11.bemta-14.messagelabs.com
	id 0A/E8-24604-A75CCF25; Thu, 13 Feb 2014 13:15:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392297336!345911!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4695 invoked from network); 13 Feb 2014 13:15:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 13:15:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102218041"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 13:15:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 08:15:34 -0500
Message-ID: <1392297333.31985.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 13 Feb 2014 13:15:33 +0000
In-Reply-To: <52FCD32C020000780011C143@nat28.tlf.novell.com>
References: <52FCCF57020000780011C114@nat28.tlf.novell.com>
	<1392296660.31985.8.camel@kazak.uk.xensource.com>
	<52FCD32C020000780011C143@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/configure: correct --enable-blktap1
	help text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 13:14 +0000, Jan Beulich wrote:
> >>> On 13.02.14 at 14:04, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Thu, 2014-02-13 at 12:57 +0000, Jan Beulich wrote:
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Thanks.
> 
> >> ---
> >> Will want tools/configure to be re-generated.
> 
> Would it be possible for one of you - having the right version of
> the auto tools in place - to apply this? Or should I commit it
> without re-generating tools/configure for the time being?

I'll do it/am doing it.

Committing without regenerating will only lead to confusion down the
line (maybe not so bad in this case, but still).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 13:43:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 13:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDwZC-0005lt-SC; Thu, 13 Feb 2014 13:43:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDwZA-0005lo-DZ
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 13:43:00 +0000
Received: from [85.158.143.35:31421] by server-2.bemta-4.messagelabs.com id
	BF/16-10891-3EBCCF25; Thu, 13 Feb 2014 13:42:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392298977!5435164!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26799 invoked from network); 13 Feb 2014 13:42:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 13:42:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="102224678"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 13:42:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 08:42:56 -0500
Message-ID: <1392298976.31985.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Thu, 13 Feb 2014 13:42:56 +0000
In-Reply-To: <1392297333.31985.15.camel@kazak.uk.xensource.com>
References: <52FCCF57020000780011C114@nat28.tlf.novell.com>
	<1392296660.31985.8.camel@kazak.uk.xensource.com>
	<52FCD32C020000780011C143@nat28.tlf.novell.com>
	<1392297333.31985.15.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/configure: correct --enable-blktap1
 help text
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 13:15 +0000, Ian Campbell wrote:
> On Thu, 2014-02-13 at 13:14 +0000, Jan Beulich wrote:
> > >>> On 13.02.14 at 14:04, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > > On Thu, 2014-02-13 at 12:57 +0000, Jan Beulich wrote:
> > >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > > 
> > > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Thanks.
> > 
> > >> ---
> > >> Will want tools/configure to be re-generated.
> > 
> > Would it be possible for one of you - having the right version of
> > the auto tools in place - to apply this? Or should I commit it
> > without re-generating tools/configure for the time being?
> 
> I'll do it/am doing it.

Done.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 13:53:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 13:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDwjY-00069l-2F; Thu, 13 Feb 2014 13:53:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1WDwjW-00069e-LL
	for Xen-devel@lists.xensource.com; Thu, 13 Feb 2014 13:53:42 +0000
Received: from [193.109.254.147:61348] by server-1.bemta-14.messagelabs.com id
	DD/92-15438-66ECCF25; Thu, 13 Feb 2014 13:53:42 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392299619!4125152!1
X-Originating-IP: [209.85.220.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1431 invoked from network); 13 Feb 2014 13:53:40 -0000
Received: from mail-pa0-f42.google.com (HELO mail-pa0-f42.google.com)
	(209.85.220.42)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 13:53:40 -0000
Received: by mail-pa0-f42.google.com with SMTP id kl14so10802643pab.15
	for <Xen-devel@lists.xensource.com>;
	Thu, 13 Feb 2014 05:53:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=+gAlVeqlmKlYM2L66z/EkrpXG6KB8lxfGuQMKkQtMik=;
	b=MZgaszpSxtrRabCYuYwF8VuJ/WldzkBDk7cWxNgDT0I5+jUKbxHH3suq5RP8AzDa2p
	K/DY6PLVpXKVjsP/zCOZZwvhemi9YtoBFcAMLeYWRDQbdYajpK+j7jmmmDp5M1sSwzSK
	pBYv7HMH//lbUo7HIP+PWWXRP9BfmnPzqopVQHHOZldORatPMoilvYcWkiGzWG7elx9L
	QsZ+JRFkP/3vYck0lMJjU1A6fTekoCaqyzNpu/OlzssJMmZK/Ifq8LmSiQPdLJ+7j90R
	C1Acy3p3J6XvYt3tXYqUZw4ivJSR1RPbUgcnFrJTtXCYlPgSyK038LyT3j1W+vr9VqYx
	bg0w==
X-Gm-Message-State: ALoCoQmmWPFV4b0D14aYhv6YKDdumSYK7JrJySfb7R2RHf58r0sI0KKoZHZbeuSxSftDhZDHQwq7
X-Received: by 10.68.114.163 with SMTP id jh3mr1757243pbb.99.1392299618857;
	Thu, 13 Feb 2014 05:53:38 -0800 (PST)
Received: from [10.1.2.103] ([12.226.98.3])
	by mx.google.com with ESMTPSA id n6sm6813222pbj.22.2014.02.13.05.53.37
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 05:53:38 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <1392285632.27366.8.camel@kazak.uk.xensource.com>
Date: Thu, 13 Feb 2014 08:53:37 -0500
Message-Id: <DED90CDF-618D-4F3C-B810-3445760334B0@gridcentric.ca>
References: <20140212185352.4c920a54@mantra.us.oracle.com>
	<1392285632.27366.8.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1510)
Cc: "Xen-devel@lists.xensource.com" <Xen-devel@lists.xensource.com>,
	Andres Lagar-Cavilla <andreslc@gridcentric.ca>,
	Julien Grall <julien.grall@linaro.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Error ignored in xc_map_foreign_pages
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 13, 2014, at 5:00 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Wed, 2014-02-12 at 18:53 -0800, Mukesh Rathor wrote:
>> It appears that xc_map_foreign_pages() handles the return value incorrectly:
> 
> libxc is a complete mess in this regard (error handling) generally.
> 
> IIRC there is some subtlety on the Linux privcmd side wrt partial
> failure of batches, particularly once paging/sharing gets involved and
> you want to retry the missing subset, which might have something to do
> with this. David and Andres might know if that is relevant here.

Yes, unfortunately the semantics of error communication are rather quirky and therefore fairly error prone.

IIRC, the kernel interface returns -ENOENT in the global rc if any individual failure is ENOENT. ENOENT is the case for "hit a paged out pfn, you have to wait until it is paged back in". Libxc, however, has the retry built in to wait so that ENOENTs (individual or global) are never returned to the caller of xc_map_foreign_bulk and friends.

The other condition that sets the rc for the whole operation is an EFAULT in copy to/from user. In that (unlikely) case, the values in the err array cannot be reasonably trusted.

Finally, if you have partial or total success the rc is zero, and individual error entries may have non-zero rc, as in the case in which a map of a hole is attempted.
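The three cases described above (global negative rc, global EFAULT making err[] untrustworthy, and rc == 0 with per-page errors) suggest a checking pattern along these lines. This is only an illustrative sketch, not the real libxc code: check_map_result() is a hypothetical helper, and the negative-errno convention for err[] entries is taken from the `errno = -err[i]` line in the quoted code.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/*
 * Sketch: check both the global rc from a bulk-map call and the
 * per-page err[] array, as discussed above.
 *
 * Returns 0 on full success, -1 with errno set on any failure.
 * err[] entries are assumed to hold negative errno values (as in
 * the quoted caller, which does errno = -err[i]).
 */
static int check_map_result(int rc, const int *err, size_t num)
{
    if (rc < 0) {
        /* Global failure (e.g. EFAULT): err[] contents cannot be
         * trusted, so don't inspect them. errno is already set. */
        return -1;
    }

    /* rc == 0 can still mean partial failure: scan per-page errors. */
    for (size_t i = 0; i < num; i++) {
        if (err[i]) {
            errno = -err[i];
            return -1;
        }
    }

    return 0;
}
```

A caller would run this unconditionally after the bulk map, rather than only when the returned pointer/rc indicates failure, which is the bug identified in the quoted code.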

So I believe you've identified a bug in the code below. As for the proposed solution, I would still check globally for EFAULT and have a big hammer in that case.

Caveat: I haven't looked at the actual code when whipping up this email.

Andres 
> 
>>    res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>>    if (res) {
>>        for (i = 0; i < num; i++) {
>>            if (err[i]) {
>>                errno = -err[i];
>>                munmap(res, num * PAGE_SIZE);
>>                res = NULL;
>>                break;
>>            }
>>        }
>>    }
>> 
>> The add_to_physmap batched interface actually will store errors
>> in the err array, and return 0 unless EFAULT or something like that.
>> See xenmem_add_to_physmap_batch(). In the case I'm looking at, xentrace
>> calls here to map a page which fails, but the return is 0 as the error is
>> successfully copied by Xen. But the error is missed above since res is 0.
>> xentrace does something again, and that causes a Xen crash.
>> 
>> It appears the fix could be just removing the check for res above...
>> 
>>    res = xc_map_foreign_bulk(xch, dom, prot, arr, err, num);
>>    for (i = 0; i < num; i++) {
>>        if (err[i]) {
>>         .....
>> 
>> What do you guys think?
>> 
>> thanks,
>> Mukesh
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 14:38:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 14:38:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDxQF-0007Nd-TU; Thu, 13 Feb 2014 14:37:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WDxQE-0007NY-Bx
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 14:37:50 +0000
Received: from [85.158.143.35:28909] by server-2.bemta-4.messagelabs.com id
	64/42-10891-DB8DCF25; Thu, 13 Feb 2014 14:37:49 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392302267!5453045!1
X-Originating-IP: [209.85.128.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16140 invoked from network); 13 Feb 2014 14:37:48 -0000
Received: from mail-ve0-f169.google.com (HELO mail-ve0-f169.google.com)
	(209.85.128.169)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 14:37:48 -0000
Received: by mail-ve0-f169.google.com with SMTP id oy12so8828789veb.28
	for <xen-devel@lists.xensource.com>;
	Thu, 13 Feb 2014 06:37:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=OvW7rQGbo6rTAsYSGoaw7quIddFyfJnKLSoJhknXrUw=;
From xen-devel-bounces@lists.xen.org Thu Feb 13 14:38:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 14:38:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDxQF-0007Nd-TU; Thu, 13 Feb 2014 14:37:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <highwaystar.ru@gmail.com>) id 1WDxQE-0007NY-Bx
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 14:37:50 +0000
Received: from [85.158.143.35:28909] by server-2.bemta-4.messagelabs.com id
	64/42-10891-DB8DCF25; Thu, 13 Feb 2014 14:37:49 +0000
X-Env-Sender: highwaystar.ru@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392302267!5453045!1
X-Originating-IP: [209.85.128.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16140 invoked from network); 13 Feb 2014 14:37:48 -0000
Received: from mail-ve0-f169.google.com (HELO mail-ve0-f169.google.com)
	(209.85.128.169)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 14:37:48 -0000
Received: by mail-ve0-f169.google.com with SMTP id oy12so8828789veb.28
	for <xen-devel@lists.xensource.com>;
	Thu, 13 Feb 2014 06:37:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=OvW7rQGbo6rTAsYSGoaw7quIddFyfJnKLSoJhknXrUw=;
	b=zXjCKa9Nbk19rmS2fGWguw9H+L2XPJLxi4xJN6uOdjZB2KN3T3Ez1dKxun0ruia+Tq
	7fC9SK5PL4BkelTSJ0j9a3SiH9rA+LWIoYTorp0WapblOi1cvCWSdnqs7tYD3qjF/sl1
	ppeLRbUKiLRcukfxMYqjeK+beWSevdomCK4WIBxK4K+xv85SDM5T2NBQiWddJXac62Oi
	g1vywDHXO4iPS5KFsFvN6pWoG950lb6+p+RomtNmVM/RtZzuDcuYedmSwJ+fG/udJ84o
	ezLWeC09JhZDcfd5OHSEbUDP82ymqYtfgzXSiPKaD3L7jLmNLAfZMlKvHurXSIm3uB3R
	H3Kw==
MIME-Version: 1.0
X-Received: by 10.52.94.77 with SMTP id da13mr60652vdb.55.1392302267437; Thu,
	13 Feb 2014 06:37:47 -0800 (PST)
Received: by 10.52.80.165 with HTTP; Thu, 13 Feb 2014 06:37:47 -0800 (PST)
In-Reply-To: <52EFB9FA.7010905@oracle.com>
References: <CABPT1Lv-uA+h=iDxjHBdvh6e=vyzBkGkoXuqrXGgjV_NAtdNTg@mail.gmail.com>
	<52EF9EEF.8050301@citrix.com>
	<CABPT1Ls8zmXLDuhwxtnTSndBLVvVS2_chokQs-WGg0_THJ663Q@mail.gmail.com>
	<52EFA304.4000206@citrix.com>
	<CABPT1LscUZR_V1TOAzVmkvSms0M_Yucjp8WtUaeOADD_G-EbHA@mail.gmail.com>
	<52EFA87A.2000008@citrix.com>
	<CABPT1LvjDJVy1d-GyjZxFPzYnJ0bfuQWqvd56u2zZ0Z_EL8ozA@mail.gmail.com>
	<20140203145503.GA3864@phenom.dumpdata.com>
	<CABPT1LsBG5TDj+xmTPvCe96X7BANxN7wekzq=MfLcJim7dYUjQ@mail.gmail.com>
	<52EFB9FA.7010905@oracle.com>
Date: Thu, 13 Feb 2014 23:37:47 +0900
Message-ID: <CABPT1LvN7u4Et-jxAeUjBRAJZKh7DP1uA3eyRJbndKe28vGo-g@mail.gmail.com>
From: Vitaliy Tomin <highwaystar.ru@gmail.com>
To: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] HVM crash system on AMD APU A8-6600K
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Is there anything else to test on my system regarding this issue? I will
mute the list; if any information from my system is needed, please CC me.

On Tue, Feb 4, 2014 at 12:47 AM, Boris Ostrovsky
<boris.ostrovsky@oracle.com> wrote:
> On 02/03/2014 10:18 AM, Vitaliy Tomin wrote:
>>>
>>> You might want to add 'sync_console' on your Xen line. That should
>>
>> give you a bit more of data (I hope?)
>>
>> Here is a log captured with sync_console and an empty hvm config.
>
>
>> (XEN) SR-IOV device 0000:00:11.0 has its virtual functions already enabled
>> (01ab)
>
> 11.0 is the SATA controller on FCH and I don't believe it's an SR-IOV
> device. I don't think it even has extended config space.
>
> -boris
>
>
>>
>> On Mon, Feb 3, 2014 at 11:55 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>>>
>>> On Mon, Feb 03, 2014 at 11:36:23PM +0900, Vitaliy Tomin wrote:
>>>>>
>>>>> What about without PCI passthrough?  In all cases you appear to be
>>>>
>>>> passing the embedded graphics through to an HVM domain
>>>>
>>>> Next log captured with the following hvm domain config:
>>>>
>>>> name = 'blank'
>>>> builder = 'hvm'
>>>> memory = 1024
>>>>
>>>> Only 3 lines. It takes longer to crash, about 5-10 minutes, with OS in
>>>> hvm it takes less than a minute.
>>>
>>> You might want to add 'sync_console' on your Xen line. That should
>>> give you a bit more of data (I hope?)
>>>>
>>>> Now trying to make log with watchdog added
>>>>
>>>> On Mon, Feb 3, 2014 at 11:32 PM, Andrew Cooper
>>>> <andrew.cooper3@citrix.com> wrote:
>>>>>
>>>>> Can you please reply-to-all to keep this on the list.
>>>>>
>>>>> On 03/02/14 14:21, Vitaliy Tomin wrote:
>>>>>>>
>>>>>>> Does it start with debug=n, but without trying to passthrough the pci
>>>>>>> device (the graphics core of the apu?) to the hvm ?
>>>>>>
>>>>>> No, it crashes even with an empty HVM domain (no OS, no disk images, no
>>>>>> network)
>>>>>
>>>>> What about without PCI passthrough?  In all cases you appear to be
>>>>> passing the embedded graphics through to an HVM domain
>>>>>
>>>>>>> Can you explain "=== whole system crashed ===" a little more.
>>>>>>
>>>>>> It means the system instantly rebooted. Black screen, no messages, no
>>>>>> image on screen; the next thing I see is the POST of my real hardware.
>>>>>>
>>>>>> Log of Xen run with debug=y is attached; hvm dom ran and no crash.
>>>>>
>>>>> Ok - so something forced a system reset.  Even more curious that debug
>>>>> mode is fine while non-debug is fatal.
>>>>>
>>>>> ~Andrew
>>>>>
>>>>>
>>>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 14:43:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 14:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDxVi-0007kk-Nb; Thu, 13 Feb 2014 14:43:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WDxVh-0007ke-7T
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 14:43:29 +0000
Received: from [85.158.143.35:9825] by server-1.bemta-4.messagelabs.com id
	A6/6E-31661-01ADCF25; Thu, 13 Feb 2014 14:43:28 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392302607!5445119!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17552 invoked from network); 13 Feb 2014 14:43:27 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 14:43:27 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392302607; l=1384;
	s=domk; d=aepfle.de;
	h=Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=Toi8+5nt3p0Iy7dViX61GZLcSCc=;
	b=XS6xYQltAG3vOXmw+SLRBLi/YWMs7gfEA32Ja9kVRKWckdmBXJ1ntjJhjvMEsCXgeNd
	qYhBpM24MbQcPAhUr3WqwH/72uQ8uNlqD2FyYUenW04pF/xX3dRl6E/wyFhvVPBqG7KVm
	QUOEET+Zrigv1Qju+q3OJ8ztE/amTnNUCng=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id z07afeq1DEhRcUF
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Thu, 13 Feb 2014 15:43:27 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id A467A50269; Thu, 13 Feb 2014 15:43:26 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Date: Thu, 13 Feb 2014 15:43:24 +0100
Message-Id: <1392302604-13057-1-git-send-email-olaf@aepfle.de>
X-Mailer: git-send-email 1.8.5.2
Cc: Olaf Hering <olaf@aepfle.de>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com
Subject: [Xen-devel] [PATCH] docs: mention whitespace handling diskspec
	target= parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

disk=[ ' target=/dev/loop0 ' ] will fail to parse because
'/dev/loop ' does not exist.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/misc/xl-disk-configuration.txt | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/misc/xl-disk-configuration.txt b/docs/misc/xl-disk-configuration.txt
index c9fd9bd..b8077da 100644
--- a/docs/misc/xl-disk-configuration.txt
+++ b/docs/misc/xl-disk-configuration.txt
@@ -70,11 +70,11 @@ Special syntax:
 
    When this parameter is specified by name, ie with the "target="
    syntax in the configuration file, it consumes the whole rest of the
-   <diskspec>.  Therefore in that case it must come last.  This is
-   permissible even if an empty value for the target was already
-   specified as a positional parameter.  This is the only way to
-   specify a target string containing metacharacters such as commas
-   and (in some cases) colons, which would otherwise be
+   <diskspec> including trailing whitespaces.  Therefore in that case
+   it must come last.  This is permissible even if an empty value for
+   the target was already specified as a positional parameter.  This
+   is the only way to specify a target string containing metacharacters
+   such as commas and (in some cases) colons, which would otherwise be
    misinterpreted.
 
    Future parameter and flag names will start with an ascii letter and

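[Editor's note: the whitespace-consuming behavior the patch documents can be
illustrated with a small sketch. This is a hypothetical simplification for
illustration only, not the actual libxl diskspec parser; the function name
parse_diskspec and its structure are invented here.]

```python
def parse_diskspec(diskspec):
    """Very simplified sketch of keyed diskspec parsing.

    Once a named "target=" parameter is seen, it consumes the whole
    remainder of the <diskspec> -- commas and trailing whitespace
    included -- which is why it must come last.
    """
    params = {}
    parts = diskspec.split(',')
    for i, part in enumerate(parts):
        if part.lstrip().startswith('target='):
            # Rejoin everything from here on: target= eats the rest of
            # the diskspec verbatim, including any trailing whitespace.
            rest = ','.join(parts[i:])
            params['target'] = rest.lstrip().split('=', 1)[1]
            break
        key, _, value = part.strip().partition('=')
        params[key] = value
    return params

# disk=[ ' target=/dev/loop0 ' ] keeps the trailing space, so the
# resulting path '/dev/loop0 ' does not name an existing device:
parse_diskspec(' target=/dev/loop0 ')  # -> {'target': '/dev/loop0 '}
```

Under this sketch, a target containing metacharacters such as commas is
preserved verbatim, matching the rationale given in the documentation.
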
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 14:55:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 14:55:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDxgx-0008NU-86; Thu, 13 Feb 2014 14:55:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WDxgv-0008NJ-0N
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 14:55:05 +0000
Received: from [85.158.137.68:55690] by server-3.bemta-3.messagelabs.com id
	15/57-14520-6CCDCF25; Thu, 13 Feb 2014 14:55:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392303300!1669592!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5854 invoked from network); 13 Feb 2014 14:55:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 14:55:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100466369"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 14:55:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 09:54:59 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WDxgp-0003vr-5T;
	Thu, 13 Feb 2014 14:54:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WDxgp-000808-0p;
	Thu, 13 Feb 2014 14:54:59 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24863-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 14:54:59 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24863: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24863 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24863/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24858

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b
baseline version:
 xen                  2778b572abf9dde4b38e60c4dd422283cf4bbde5

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Eric Houby <ehouby@yahoo.com>
  Jan Beulich <jbeulich@suse.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=bf236637c2417376693cd72a748cc6208fd0202b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing bf236637c2417376693cd72a748cc6208fd0202b
+ branch=xen-4.3-testing
+ revision=bf236637c2417376693cd72a748cc6208fd0202b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git bf236637c2417376693cd72a748cc6208fd0202b:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   2778b57..bf23663  bf236637c2417376693cd72a748cc6208fd0202b -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b
baseline version:
 xen                  2778b572abf9dde4b38e60c4dd422283cf4bbde5

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Eric Houby <ehouby@yahoo.com>
  Jan Beulich <jbeulich@suse.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=bf236637c2417376693cd72a748cc6208fd0202b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing bf236637c2417376693cd72a748cc6208fd0202b
+ branch=xen-4.3-testing
+ revision=bf236637c2417376693cd72a748cc6208fd0202b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git bf236637c2417376693cd72a748cc6208fd0202b:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   2778b57..bf23663  bf236637c2417376693cd72a748cc6208fd0202b -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 14:58:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 14:58:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDxkT-0000Kt-9E; Thu, 13 Feb 2014 14:58:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDxkS-0000KN-3t
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 14:58:44 +0000
Received: from [85.158.143.35:20875] by server-3.bemta-4.messagelabs.com id
	F7/1F-11539-3ADDCF25; Thu, 13 Feb 2014 14:58:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392303521!5459759!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4551 invoked from network); 13 Feb 2014 14:58:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 14:58:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,838,1384300800"; d="scan'208";a="100467250"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 14:58:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 09:58:11 -0500
Message-ID: <1392303490.31985.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Thu, 13 Feb 2014 14:58:10 +0000
In-Reply-To: <1392302604-13057-1-git-send-email-olaf@aepfle.de>
References: <1392302604-13057-1-git-send-email-olaf@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] docs: mention whitespace handling diskspec
 target= parsing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 15:43 +0100, Olaf Hering wrote:
> disk=[ ' target=/dev/loop0 ' ] will fail to parse because
> '/dev/loop ' does not exist.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked + applied, thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 15:19:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDy4K-00019L-1A; Thu, 13 Feb 2014 15:19:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDy4I-000181-Tw
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 15:19:15 +0000
Received: from [193.109.254.147:52032] by server-14.bemta-14.messagelabs.com
	id 25/18-29228-272ECF25; Thu, 13 Feb 2014 15:19:14 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392304753!4113100!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10477 invoked from network); 13 Feb 2014 15:19:13 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 15:19:13 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDy4F-0001f9-91; Thu, 13 Feb 2014 15:19:11 +0000
Date: Thu, 13 Feb 2014 16:19:11 +0100
From: Tim Deegan <tim@xen.org>
To: xen-devel@lists.xenproject.org
Message-ID: <20140213151911.GD82703@deinos.phlegethon.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
	<20140211143346.GE10482@deinos.phlegethon.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140211143346.GE10482@deinos.phlegethon.org>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: keir@xen.org, ian.campbell@citrix.com, julien.grall@linaro.org,
	Ian.Jackson@eu.citrix.com, george.dunlap@citrix.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH] xen: Don't use __builtin_stdarg_start().
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
definition of va_start() to use __builtin_va_start() rather than
__builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
__builtin_stdarg_start() before v3.3.
    
Signed-off-by: Tim Deegan <tim@xen.org>

---

George: this should fix the build issues Roger was having with GCC 4.5
after my last stdarg patch.

diff --git a/xen/include/xen/stdarg.h b/xen/include/xen/stdarg.h
index 0283f06..216fe6d 100644
--- a/xen/include/xen/stdarg.h
+++ b/xen/include/xen/stdarg.h
@@ -1,18 +1,6 @@
 #ifndef __XEN_STDARG_H__
 #define __XEN_STDARG_H__
 
-#ifdef __GNUC__
-#  define __GNUC_PREREQ__(x, y)                                       \
-      ((__GNUC__ == (x) && __GNUC_MINOR__ >= (y)) ||                  \
-       (__GNUC__ > (x)))
-#else
-#  define __GNUC_PREREQ__(x, y)   0
-#endif
-
-#if !__GNUC_PREREQ__(4, 5)
-#  define __builtin_va_start(ap, last)    __builtin_stdarg_start((ap), (last))
-#endif
-
 typedef __builtin_va_list va_list;
 #define va_start(ap, last)    __builtin_va_start((ap), (last))
 #define va_end(ap)            __builtin_va_end(ap)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 15:24:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDy9Z-0001NJ-Ed; Thu, 13 Feb 2014 15:24:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDy9X-0001NC-FE
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 15:24:39 +0000
Received: from [85.158.137.68:63490] by server-6.bemta-3.messagelabs.com id
	C5/2F-09180-6B3ECF25; Thu, 13 Feb 2014 15:24:38 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392305076!678082!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23406 invoked from network); 13 Feb 2014 15:24:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 15:24:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100478904"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 15:24:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 10:24:35 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDy7M-0001z4-Ht;
	Thu, 13 Feb 2014 15:22:24 +0000
Message-ID: <52FCE325.9030008@eu.citrix.com>
Date: Thu, 13 Feb 2014 15:22:13 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>, <xen-devel@lists.xenproject.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
	<20140211143346.GE10482@deinos.phlegethon.org>
	<20140213151911.GD82703@deinos.phlegethon.org>
In-Reply-To: <20140213151911.GD82703@deinos.phlegethon.org>
X-DLP: MIA1
Cc: keir@xen.org, ian.campbell@citrix.com, julien.grall@linaro.org,
	Ian.Jackson@eu.citrix.com, george.dunlap@citrix.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use __builtin_stdarg_start().
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/2014 03:19 PM, Tim Deegan wrote:
> Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
> definition of va_start() to use __builtin_va_start() rather than
> __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
> __builtin_stdarg_start() before v3.3.
>      
> Signed-off-by: Tim Deegan <tim@xen.org>
>
> ---
>
> George: this should fix the build issues Roger was having with GCC 4.5
> after my last stdarg patch.

Once it has Roger's tested-by:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> definition of va_start() to use __builtin_va_start() rather than
> __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
> __builtin_stdarg_start() before v3.3.
>      
> Signed-off-by: Tim Deegan <tim@xen.org>
>
> ---
>
> George: this should fix the build issues Roger was having with GCC 4.5
> after my last stdarg patch.

Once it has Roger's tested-by:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 15:32:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyGw-0001qK-9L; Thu, 13 Feb 2014 15:32:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WDyGm-0001q5-Lz
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 15:32:16 +0000
Received: from [85.158.139.211:41575] by server-10.bemta-5.messagelabs.com id
	3E/1D-08578-775ECF25; Thu, 13 Feb 2014 15:32:07 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392305525!3747862!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19549 invoked from network); 13 Feb 2014 15:32:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 15:32:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102264078"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 15:32:05 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 10:32:05 -0500
Message-ID: <52FCE573.8000506@citrix.com>
Date: Thu, 13 Feb 2014 16:32:03 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>, <xen-devel@lists.xenproject.org>
References: <1392074974-1488-1-git-send-email-julien.grall@linaro.org>
	<20140211085317.GB92054@deinos.phlegethon.org>
	<52FA17E3.9070105@linaro.org>
	<20140211123515.GD97288@deinos.phlegethon.org>
	<52FA1945.8010400@linaro.org>
	<20140211125928.GE97288@deinos.phlegethon.org>
	<52FA23B4.5060203@linaro.org>
	<20140211135926.GB10482@deinos.phlegethon.org>
	<52FA328C.4000103@linaro.org>
	<20140211143346.GE10482@deinos.phlegethon.org>
	<20140213151911.GD82703@deinos.phlegethon.org>
In-Reply-To: <20140213151911.GD82703@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian.Jackson@eu.citrix.com, julien.grall@linaro.org, keir@xen.org,
	ian.campbell@citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen: Don't use __builtin_stdarg_start().
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 16:19, Tim Deegan wrote:
> Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
> definition of va_start() to use __builtin_va_start() rather than
> __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
> __builtin_stdarg_start() before v3.3.
>

> Signed-off-by: Tim Deegan <tim@xen.org>

Tested-by: Roger Pau Monné <roger.pau@citrix.com>
With gcc version 4.4.5 (Debian 4.4.5-8)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 15:38:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:38:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyMn-00029K-I4; Thu, 13 Feb 2014 15:38:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WDyMm-00029E-1z
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 15:38:20 +0000
Received: from [85.158.139.211:7200] by server-16.bemta-5.messagelabs.com id
	60/2B-05060-BE6ECF25; Thu, 13 Feb 2014 15:38:19 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392305898!3724106!1
X-Originating-IP: [209.85.212.176]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23776 invoked from network); 13 Feb 2014 15:38:18 -0000
Received: from mail-wi0-f176.google.com (HELO mail-wi0-f176.google.com)
	(209.85.212.176)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 15:38:18 -0000
Received: by mail-wi0-f176.google.com with SMTP id hi5so8794379wib.3
	for <xen-devel@lists.xenproject.org>;
	Thu, 13 Feb 2014 07:38:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=G5iPeKfhMq17MLAZ368siFxpsSxTmofFSyNHREcZoqY=;
	b=G6ucMkm+lu9X/T/MRYLSiOFwsyyqm8ULPoD1lJAMOR9ia+bEmg2sd5VSwSNkaI/KB+
	GelxE/udlSgEQY7VTc8RfO6rhBk64EtGmMec1gK6N4a9qeE91ckChol08PMxiJjykZbe
	M220E/5Recf/AZo20+T5KICzncJQebLaB4D8UKIRHOpOlmP1a/7tzwtH5kKiqcRWVpaB
	New9/6TkojiUxBKeGdMTyFDeIAFNDs/wuKW0yFN1xy+iAgJKyFVDQxqEna7QWVohjF9p
	JmZoYO2x+pn3TaVpqtTL6y5RkknJ+66f9ch/1BI2toedmVdMUml+IfKf2rf2rDWmriLF
	yWwA==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr1849557wjy.57.1392305897922;
	Thu, 13 Feb 2014 07:38:17 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 13 Feb 2014 07:38:17 -0800 (PST)
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D8376@SHSMSX104.ccr.corp.intel.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
	<20140207154128.GE3605@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D8376@SHSMSX104.ccr.corp.intel.com>
Date: Thu, 13 Feb 2014 15:38:17 +0000
X-Google-Sender-Auth: NIh9R2-FgxH9lhpBG6yH70z9WZU
Message-ID: <CAFLBxZbnUuv4VQgi-AVOWxqXb4t8k-4NdkNKsP1dw8Ju7Ja+aw@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Cc: Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Nakajima, Jun" <jun.nakajima@intel.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 12:17 AM, Zhang, Yang Z <yang.z.zhang@intel.com> wrote:
> Konrad Rzeszutek Wilk wrote on 2014-02-07:
>> On Fri, Feb 07, 2014 at 02:28:07AM +0000, Zhang, Yang Z wrote:
>>> Konrad Rzeszutek Wilk wrote on 2014-02-05:
>>>> On Wed, Feb 05, 2014 at 02:35:51PM +0000, George Dunlap wrote:
>>>>> On 02/04/2014 04:42 PM, Konrad Rzeszutek Wilk wrote:
>>>>>> On Tue, Feb 04, 2014 at 03:46:48PM +0000, Jan Beulich wrote:
>>>>>>>>>> On 04.02.14 at 16:32, Konrad Rzeszutek Wilk
>>>> <konrad.wilk@oracle.com> wrote:
>>>>>>>> On Tue, Feb 04, 2014 at 03:02:44PM +0000, Jan Beulich wrote:
>>>>>>>>> Wasn't it that Mukesh's patch simply was yours with the two
>>>>>>>>> get_ioreq()s folded by using a local variable?
>>>>>>>> Yes. As so
>>>>>>> Thanks. Except that ...
>>>>>>>
>>>>>>>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>>>>>>>> @@ -1394,13 +1394,13 @@ void nvmx_switch_guest(void)
>>>>>>>>      struct vcpu *v = current;
>>>>>>>>      struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>>>>>>>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>>>>>>>> -
>>>>>>>> +    ioreq_t *p = get_ioreq(v);
>>>>>>> ... you don't want to drop the blank line, and naming the new
>>>>>>> variable "ioreq" would seem preferable.
>>>>>>>
>>>>>>>>      /*
>>>>>>>>       * a pending IO emualtion may still no finished. In this case,
>>>>>>>>       * no virtual vmswith is allowed. Or else, the following IO
>>>>>>>>       * emulation will handled in a wrong VCPU context.
>>>>>>>>       */
>>>>>>>> -    if ( get_ioreq(v)->state != STATE_IOREQ_NONE )
>>>>>>>> +    if ( p && p->state != STATE_IOREQ_NONE )
>>>>>>> And, as said before, I'd think "!p ||" instead of "p &&" would be
>>>>>>> the right thing here. Yang, Jun?
>>>>>> I have two patches - one the simpler one that is pretty
>>>>>> straightfoward and the one you suggested. Either one fixes PVH
>>>>>> guests. I also did bootup tests with HVM guests to make sure they
>>>>>> worked.
>>>>>>
>>>>>> Attached and inline.
>>>>>
>>>
>>> Sorry for the late response. I just back from Chinese new year holiday.
>>>
>>>>> But they do different things -- one does "ioreq && ioreq->state..."
>>>>
>>>> Correct.
>>>>> and the other does "!ioreq || ioreq->state...".  The first one is
>>>>> incorrect, AFAICT.
>>>>
>>>> Both of them fix the hypervisor blowing up with any PVH guest.
>>>
>>> Both of fixings are right to me.
>>> The only concern is that what we want to do here:
>>> "ioreq && ioreq->state..." will only allow the VCPU that supporting IO
>> request emulation mechanism to continue nested check which current means
>> HVM VCPU.
>>> And "!ioreq || ioreq->state..." will check the VCPU that doesn't
>>> support the IO request emulation mechanism only which current means PVH
>>> VCPU.
>>>
>>> The purpose of my original patch only wants to allow the HVM VCPU that
>> doesn't has pending IO request to continue nested check. Not use it to
>> distinguish whether it is HVM or PVH. So here I prefer to only allow HVM VCPU
>> goes to here as Jan mentioned before that non-HVM domain should never call
>> nested related function at all unless it also supports nested.
>>
>> So it sounds like the #2 patch is preferable by you.
>>
>> Can I stick Acked-by on it?
>>
>
> Sure.

Konrad / Jan: Ping?

If this fix looks reasonable it would be nice to get this in before RC4.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 15:46:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyUZ-0002YG-ND; Thu, 13 Feb 2014 15:46:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDyUY-0002Y9-9v
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 15:46:22 +0000
Received: from [85.158.143.35:10510] by server-3.bemta-4.messagelabs.com id
	27/9F-11539-DC8ECF25; Thu, 13 Feb 2014 15:46:21 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392306379!5466848!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9630 invoked from network); 13 Feb 2014 15:46:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 15:46:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100487545"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 15:46:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 10:46:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDyUU-0002LB-29;
	Thu, 13 Feb 2014 15:46:18 +0000
Message-ID: <52FCE8BE.8050105@eu.citrix.com>
Date: Thu, 13 Feb 2014 15:46:06 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>, 
	Tim Deegan <tim@xen.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
X-DLP: MIA2
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
From xen-devel-bounces@lists.xen.org Thu Feb 13 15:46:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyUZ-0002YG-ND; Thu, 13 Feb 2014 15:46:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDyUY-0002Y9-9v
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 15:46:22 +0000
Received: from [85.158.143.35:10510] by server-3.bemta-4.messagelabs.com id
	27/9F-11539-DC8ECF25; Thu, 13 Feb 2014 15:46:21 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392306379!5466848!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9630 invoked from network); 13 Feb 2014 15:46:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 15:46:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100487545"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 15:46:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 10:46:18 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDyUU-0002LB-29;
	Thu, 13 Feb 2014 15:46:18 +0000
Message-ID: <52FCE8BE.8050105@eu.citrix.com>
Date: Thu, 13 Feb 2014 15:46:06 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>, 
	Tim Deegan <tim@xen.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
X-DLP: MIA2
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
> George Dunlap wrote on 2014-02-11:
>> On 02/11/2014 12:57 PM, Jan Beulich wrote:
>>>>>> On 11.02.14 at 12:55, Tim Deegan <tim@xen.org> wrote:
>>>> At 10:59 +0000 on 11 Feb (1392112778), George Dunlap wrote:
>>>>> What I'm missing here is what you think a proper solution is.
>>>> A _proper_ solution would be for the IOMMU h/w to allow restartable
>>>> faults, so that we can do all the usual fault-driven virtual memory
>>>> operations with DMA. :)  In the meantime...
>>> Or maintaining the A/D bits for IOMMU side accesses too.
>>>
>>>>>    It seems we have:
>>>>> A. Share EPT/IOMMU tables, only do log-dirty tracking on the
>>>>> buffer being tracked, and hope the guest doesn't DMA into video
>>>>> ram; DMA causes IOMMU fault. (This really shouldn't crash the host
>>>>> under normal circumstances; if it does it's a hardware bug.)
>>>> Note "hope" and "shouldn't" there. :)
>>>>
>>>>> B. Never share EPT/IOMMU tables, and hope the guest doesn't DMA
>>>>> into video ram.  DMA causes missed update to dirty bitmap, which
>>>>> will hopefully just cause screen corruption.
>>>> Yep.  At a cost of about 0.2% in space and some extra bookkeeping
>>>> (for VMs that actually have devices passed through to them).
>>>> The extra bookkeeping could be expensive in some cases, but
>>>> basically all of those cases are already incompatible with IOMMU.
>>>>
>>>>> C. Do buffer scanning rather than dirty vram tracking (SLOW) D.
>>>>> Don't allow both a virtual video card and pass-through
>>>> E. Share EPT and IOMMU tables until someone turns on log-dirty mode
>>>> and then split them out.  That one
>>> Wouldn't that be problematic in terms of memory being available,
>>> namely when using ballooning in Dom0?
>>>
>>>>> Given that most operating systems will probably *not* DMA into
>>>>> video ram, and that an IOMMU fault isn't *supposed* to be able to
>>>>> crash the host, 'A' seems like the most reasonable option to me.
>>>> Meh, OK.  I prefer 'B' but 'A' is better than nothing, I guess, and
>>>> seems to have most support from other people.  On that basis this
>>>> patch can have my Ack.
>>> I too would consider B better than A.
>> I think I got a bit distracted with the "A isn't really so bad" thing.
>> Actually, if the overhead of not sharing tables isn't very high, then
>> B isn't such a bad option.  In fact, B is what I expected Yang to
>> submit when he originally described the problem.
> Actually, the first solution that came to my mind was B. Then I realized that even if we chose B, we still could not track memory updates from DMA (even with the A/D bit it is still a problem). Also, considering the current use of log-dirty in Xen (only vram tracking has the problem), I thought A was better: the hypervisor only needs to track vram changes, and if a malicious guest tries to DMA into the vram range, it only crashes itself (which should be reasonable).
>
>> I was going to say, from a release perspective, B is probably the
>> safest option for now.  But on the other hand, if we've been testing
>> sharing all this time, maybe switching back over to non-sharing whole-hog has the higher risk?
> Another problem with B is that the current VT-d large-page support relies on sharing the EPT and VT-d page tables. This means that if we choose B, then we need to re-enable VT-d large pages separately; losing them would be a huge performance impact for Xen 4.4 when using VT-d.

OK -- if that's the case, then it definitely tips the balance back to 
A.  Unless Tim or Jan disagrees, can one of you two check it in?

Don't rush your judgement; but it would be nice to have this in before 
RC4, which would mean checking it in today preferably, or early 
tomorrow at the latest.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 15:56:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 15:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDydi-00033I-QR; Thu, 13 Feb 2014 15:55:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDydh-00033D-AG
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 15:55:49 +0000
Received: from [85.158.137.68:2505] by server-15.bemta-3.messagelabs.com id
	CD/B6-19263-20BECF25; Thu, 13 Feb 2014 15:55:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392306945!151058!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30839 invoked from network); 13 Feb 2014 15:55:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 15:55:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 15:55:45 +0000
Message-Id: <52FCF90F020000780011C29A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 15:55:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>,"Tim Deegan" <tim@xen.org>
References: <1392012840-22555-1-git-send-email-yang.z.zhang@intel.com>
	<20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
In-Reply-To: <52FCE8BE.8050105@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>> George Dunlap wrote on 2014-02-11:
>>> I think I got a bit distracted with the "A isn't really so bad" thing.
>>> Actually, if the overhead of not sharing tables isn't very high, then
>>> B isn't such a bad option.  In fact, B is what I expected Yang to
>>> submit when he originally described the problem.
>> Actually, the first solution came to my mind is B. Then I realized that even
>> chose B, we still cannot track the memory updating from DMA(even with A/D
>> bit, it still a problem). Also, considering the current usage case of log
>> dirty in Xen(only vram tracking has problem), I though A is better.:
>> Hypervisor only need to track the vram change. If a malicious guest try to
>> DMA to vram range, it only crashed himself (This should be reasonable).
>>
>>> I was going to say, from a release perspective, B is probably the
>>> safest option for now.  But on the other hand, if we've been testing
>>> sharing all this time, maybe switching back over to non-sharing whole-hog has
>>> the higher risk?
>> Another problem with B is that current VT-d large paging supporting relies on
>> the sharing EPT and VT-d page table. This means if we choose B, then we need
>> to re-enable VT-d large page. This would be a huge performance impaction for
>> Xen 4.4 on using VT-d solution.
> 
> OK -- if that's the case, then it definitely tips the balance back to 
> A.  Unless Tim or Jan disagrees, can one of you two check it in?
> 
> Don't rush your judgement; but it would be nice to have this in before 
> RC4, which would mean checking it in today preferably, or early 
> tomorrow at the latest.

That would be Tim then, as he would have to approve of it anyway.
I should also say that while I certainly understand the argumentation
above, I would still want to go this route only with the promise that
B is going to be worked on reasonably soon after the release, ideally
with the goal of backporting the changes for 4.4.1.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:01:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:01:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyiw-0003ly-Kx; Thu, 13 Feb 2014 16:01:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1WDyiv-0003ls-0l
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:01:13 +0000
Received: from [85.158.143.35:35662] by server-3.bemta-4.messagelabs.com id
	13/2C-11539-84CECF25; Thu, 13 Feb 2014 16:01:12 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392307271!5469418!1
X-Originating-IP: [74.125.82.182]
X-SpamReason: No, hits=2.2 required=7.0 tests=MISSING_SUBJECT,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18341 invoked from network); 13 Feb 2014 16:01:11 -0000
Received: from mail-we0-f182.google.com (HELO mail-we0-f182.google.com)
	(74.125.82.182)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:01:11 -0000
Received: by mail-we0-f182.google.com with SMTP id u57so7794974wes.27
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 08:01:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:mime-version:content-type
	:content-transfer-encoding;
	bh=Sxn6tRvWh+NeneynmUat5/ijtR8muXnnyvBmwVN86Ks=;
	b=fLJ1CZPHdfCV10QzglbqKc1JulVEhST4wgXrcHztOgsj1S8Wjw5EGDdkiXr936mD3e
	HK4H1f6amtj0rpRI69whr0y1AGd04yOkPD2OPS2Q8wAVJ8GTRo5imxE55pZE06wxV69l
	IywoRrFG+sCy9DqPTV+Zga96W8zQHw+a2/wxbZVPSWpZccgWSjAOauwzCRwh9mtJXUl9
	UlQYvuvREJDqNU67Fw+Xyt+UaXR/iWF9tvugEGEVUdXZLU82I0HO93iOary/u6u+wMr0
	+iOGUjKlc6UioahiUOHca5ZeRVLxnzfnQFUcXBLiLUrs/BrZ4Zuh39uuU+KAnwpeMpF1
	vW+A==
X-Received: by 10.180.221.68 with SMTP id qc4mr3229023wic.30.1392307271231;
	Thu, 13 Feb 2014 08:01:11 -0800 (PST)
Received: from localhost (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id
	ff7sm14984910wic.10.2014.02.13.08.01.08 for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 08:01:10 -0800 (PST)
Date: Thu, 13 Feb 2014 16:01:00 +0000
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <0614092.20140213160100@gmail.com>
To: xen-devel@lists.xen.org
X-Message-Tags: Strange interdependence between virtual machines
MIME-Version: 1.0
Subject: [Xen-devel] (no subject)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

I am now successfully running my little operating system inside Xen.
It is fully preemptive and working a treat, but I have just noticed
something I wasn't expecting, which will really be a problem for me if
I can't work around it.

My configuration is as follows:

1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.

2.- Xen: 4.4 (just pulled from repository)

3.- Dom0: Debian Wheezy (Kernel 3.2)

4.- 2 cpu pools:

# xl cpupool-list
Name               CPUs   Sched     Active   Domain count
Pool-0               3    credit       y          2
pv499                1  arinc653       y          1

5.- 2 domU:

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   984     3     r-----      39.7
win7x64                                      1  2046     3     -b----     143.0
pv499                                        3   128     1     -b----      61.2

6.- All VCPUs are pinned:

# xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   -b-      27.5  0
Domain-0                             0     1    1   -b-       7.2  1
Domain-0                             0     2    2   r--       5.1  2
win7x64                              1     0    0   -b-      71.6  0
win7x64                              1     1    1   -b-      37.7  1
win7x64                              1     2    2   -b-      34.5  2
pv499                                3     0    3   -b-      62.1  3

7.- pv499 is the domU that I am testing. It has no disk or vif devices
(yet). I am running a little test program in pv499, and the timing I
see varies depending on disk activity.

My test program prints the time taken in milliseconds for a million
cycles. With no disk activity I see 940 ms; with disk activity
I see 1200 ms.

I can't understand this, as disk activity should be running on cores 0,
1 and 2, but never on core 3. The only thing running on core 3 should
be my paravirtual machine and the hypervisor stub.

Any idea what's going on?


-- 
Best regards,
 Simon                          mailto:furryfuttock@gmail.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:03:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyl5-0003vI-6Q; Thu, 13 Feb 2014 16:03:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDyl4-0003vB-MU
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 16:03:26 +0000
Received: from [85.158.137.68:13920] by server-13.bemta-3.messagelabs.com id
	B0/C0-26923-DCCECF25; Thu, 13 Feb 2014 16:03:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392307403!127573!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2496 invoked from network); 13 Feb 2014 16:03:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 16:03:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 16:03:22 +0000
Message-Id: <52FCFAD9020000780011C2AB@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 16:03:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>,
	"Eddie Dong" <eddie.dong@intel.com>,
	"Jun Nakajima" <jun.nakajima@intel.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
	<20140207154128.GE3605@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D8376@SHSMSX104.ccr.corp.intel.com>
	<CAFLBxZbnUuv4VQgi-AVOWxqXb4t8k-4NdkNKsP1dw8Ju7Ja+aw@mail.gmail.com>
In-Reply-To: <CAFLBxZbnUuv4VQgi-AVOWxqXb4t8k-4NdkNKsP1dw8Ju7Ja+aw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 16:38, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> Konrad / Jan: Ping?
> 
> If this fix looks reasonable it would be nice to get this in before RC4.

It would have been committed long ago if the VMX maintainers had
finally given their okay. Even pinging them privately didn't seem to help...

Jan



From xen-devel-bounces@lists.xen.org Thu Feb 13 16:09:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyqV-00048T-0A; Thu, 13 Feb 2014 16:09:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDyqT-00048O-Vf
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 16:09:02 +0000
Received: from [193.109.254.147:8002] by server-13.bemta-14.messagelabs.com id
	63/3F-01226-D1EECF25; Thu, 13 Feb 2014 16:09:01 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392307738!4159775!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32532 invoked from network); 13 Feb 2014 16:09:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:09:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100498557"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 16:08:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 11:08:40 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDyq7-0002lb-U2;
	Thu, 13 Feb 2014 16:08:39 +0000
Message-ID: <52FCEDFC.7080103@eu.citrix.com>
Date: Thu, 13 Feb 2014 16:08:28 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Eddie Dong <eddie.dong@intel.com>, Jun
	Nakajima <jun.nakajima@intel.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
	<20140207154128.GE3605@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D8376@SHSMSX104.ccr.corp.intel.com>
	<CAFLBxZbnUuv4VQgi-AVOWxqXb4t8k-4NdkNKsP1dw8Ju7Ja+aw@mail.gmail.com>
	<52FCFAD9020000780011C2AB@nat28.tlf.novell.com>
In-Reply-To: <52FCFAD9020000780011C2AB@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/2014 04:03 PM, Jan Beulich wrote:
>>>> On 13.02.14 at 16:38, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>> Konrad / Jan: Ping?
>>
>> If this fix looks reasonable it would be nice to get this in before RC4.
> It would have long been committed if the VMX maintainers finally
> gave their okay. Even privately pinging them didn't seem to help...

Oh, right, I forgot that Yang isn't a maintainer.

Well, it looks like we have two options:
* Commit it without a maintainer's ack
* Revert the patch that caused the regression to PVH.

(Or maybe Yang can chase the maintainers internally.)

  -George


From xen-devel-bounces@lists.xen.org Thu Feb 13 16:10:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:10:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDyrw-0004Ib-Lg; Thu, 13 Feb 2014 16:10:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WDyrv-0004IT-EC
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:10:31 +0000
Received: from [85.158.137.68:6815] by server-17.bemta-3.messagelabs.com id
	A7/7F-22569-67EECF25; Thu, 13 Feb 2014 16:10:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392307828!375465!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1838 invoked from network); 13 Feb 2014 16:10:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:10:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100499299"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 16:10:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 11:10:27 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WDyrr-0002n8-5S;
	Thu, 13 Feb 2014 16:10:27 +0000
Message-ID: <52FCEE73.90100@citrix.com>
Date: Thu, 13 Feb 2014 16:10:27 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Simon Martin <furryfuttock@gmail.com>
References: <0614092.20140213160100@gmail.com>
In-Reply-To: <0614092.20140213160100@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (no subject)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 16:01, Simon Martin wrote:
> Hi all,
>
> I  am  now successfully running my little operating system inside Xen.

Congratulations!

> It is fully preemptive and working a treat, but I have just noticed
> something I wasn't expecting, and it will really be a problem for me if
> I can't work around it.
>
> My configuration is as follows:
>
> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.

Can you be more specific - this covers 4 generations of Intel CPUs.

>
> 2.- Xen: 4.4 (just pulled from repository)
>
> 3.- Dom0: Debian Wheezy (Kernel 3.2)
>
> 4.- 2 cpu pools:
>
> # xl cpupool-list
> Name               CPUs   Sched     Active   Domain count
> Pool-0               3    credit       y          2
> pv499                1  arinc653       y          1
>
> 5.- 2 domU:
>
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0   984     3     r-----      39.7
> win7x64                                      1  2046     3     -b----     143.0
> pv499                                        3   128     1     -b----      61.2
>
> 6.- All VCPUs are pinned:
>
> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   -b-      27.5  0
> Domain-0                             0     1    1   -b-       7.2  1
> Domain-0                             0     2    2   r--       5.1  2
> win7x64                              1     0    0   -b-      71.6  0
> win7x64                              1     1    1   -b-      37.7  1
> win7x64                              1     2    2   -b-      34.5  2
> pv499                                3     0    3   -b-      62.1  3
>
> 7.- pv499 is the domU that I am testing. It has no disk or vif devices
> (yet). I am running a little test program in pv499, and the timing I
> see varies depending on disk activity.
>
> My test program prints the time taken in milliseconds for a
> million cycles. With no disk activity I see 940 ms; with disk activity
> I see 1200 ms.
>
> I can't understand this, as disk activity should be running on cores 0,
> 1 and 2, but never on core 3. The only thing running on core 3 should
> be my paravirtual machine and the hypervisor stub.
>
> Any idea what's going on?

Curious. Let's try ruling some things out.

How are you measuring time in pv499?

What do your C-states and P-states look like? If you can, try
disabling turbo.

~Andrew


From xen-devel-bounces@lists.xen.org Thu Feb 13 16:12:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDytd-0004Xs-Jf; Thu, 13 Feb 2014 16:12:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <scott.dj@gmail.com>)
	id 1WDysu-0004Qn-Cn; Thu, 13 Feb 2014 16:11:32 +0000
Received: from [85.158.139.211:51277] by server-9.bemta-5.messagelabs.com id
	E7/5D-11237-3BEECF25; Thu, 13 Feb 2014 16:11:31 +0000
X-Env-Sender: scott.dj@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392307887!3764026!1
X-Originating-IP: [209.85.160.42]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_SEX,HTML_20_30,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20559 invoked from network); 13 Feb 2014 16:11:29 -0000
Received: from mail-pb0-f42.google.com (HELO mail-pb0-f42.google.com)
	(209.85.160.42)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:11:29 -0000
Received: by mail-pb0-f42.google.com with SMTP id jt11so11042543pbb.29
	for <multiple recipients>; Thu, 13 Feb 2014 08:11:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=l4q+OojFnRmbuTjY5oVmgFxE0xUQZ0oQjJWFX6YSNwk=;
	b=G889E7CdC4fVg3KEAey8o4Ym6tEwmrEiKp2JrCQjrEp75Nka/+BaCknNm/BmKLrqpj
	mfYbIP/DbTXa3CHLbVCHz6v/Ym8aoXtYexisIG7m5l7LdzLVx7X8TKM0tWV/xVooNS7S
	lTGQ9fOi8hf21QFFIOfm+qpxP+a2LlBIiOQhD6YHzZPAfp7Cp1/5aV4PyUzzy7lxKZEV
	N0Yk+i78Qo4lXAYh9An7uc768icZRYzIDnlbjAbkU5lOsJ625XpVfghsLgxBFykQbrai
	xzGspFLDTeBVwzJ9qrCIls9hRZg19Qt7jv4Nwewie5ulkLLAfzJUvZav77yGOy5ypEMd
	aO7Q==
MIME-Version: 1.0
X-Received: by 10.66.26.176 with SMTP id m16mr2705853pag.142.1392307887578;
	Thu, 13 Feb 2014 08:11:27 -0800 (PST)
Received: by 10.70.55.132 with HTTP; Thu, 13 Feb 2014 08:11:27 -0800 (PST)
In-Reply-To: <52DCE9FA.6010400@xen.org>
References: <52DCE9FA.6010400@xen.org>
Date: Thu, 13 Feb 2014 16:11:27 +0000
Message-ID: <CAG_esB0qq7G41GTX08n7g2Y+YXxtrLULftmvcTML-ueu9WP7yA@mail.gmail.com>
From: David Scott <scott.dj@gmail.com>
To: lars.kurth@xen.org
X-Mailman-Approved-At: Thu, 13 Feb 2014 16:12:17 +0000
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [MirageOS-devel] Prepping for GSOC 2014 [URGENT] -
 deadline Feb 14 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2827992764087120495=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2827992764087120495==
Content-Type: multipart/alternative; boundary=bcaec52994530ea31d04f24bf19a

--bcaec52994530ea31d04f24bf19a
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I did a tidy-up of the Mirage/XAPI projects at the bottom. I've deleted
some old projects after speaking with their technical contacts (in
particular Jon Ludlam and Jonathan Davies) and I've deleted the networking
one at the end since that work is in progress anyway.

I've had a stab at classifying them as 'GSoC'-friendly or not, mainly based
on difficulty.

Are we still planning to add a Mentor section with photos and bios?

Cheers,
Dave


On Mon, Jan 20, 2014 at 9:18 AM, Lars Kurth <lars.kurth@xen.org> wrote:

> Hi all,
>
> The GSoC application deadline is coming up: Feb 2014. If we want to have
> any chance of getting accepted this year, we ought to get our project list
> into good shape. The project list and how the project and mentors present
> themselves have a bigger impact on whether we get accepted than the actual
> application.
>
> Also, I would like to add a mentor section this year: a short bio, what
> the mentor cares about and a picture. This will help make the project list
> more real.
>
> We have *4 weeks* to do this. The bar for GSoC has been getting
> increasingly high. I know, we are tied down with Xen 4.4, but this is
> something you need to do if you want the Xen Project to participate.
>
> a) Please update http://wiki.xenproject.org/wiki/Xen_Development_Projects
> urgently (these need to be in good shape *before* the application). What I
> need you to do is:
> a.1) Remove items that are done
> a.2) Add new work items: we ought to have a few sexy topics on, say,
> Real-time, mobile and some of the other segments (assuming we can get HW)
> a.3) All project proposals need to be peer reviewed *and* clear ... The
> peer review process for projects we put in place last year worked well, by
> which we had past mentors sign off on project proposals that were in good
> enough state.
>
> b) Anyone who has some kernel/linux/bsd/distro/qemu work-items should get
> these listed on the respective other programs. And we should link to these
> from our project page.
>
> Best Regards
> Lars
> P.S.: I will also see whether we can participate as Xen Project under the
> LF GSoC program, but last year there was push-back and I don't expect this
> to change
>
>
> _______________________________________________
> MirageOS-devel mailing list
> MirageOS-devel@lists.xenproject.org
> http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel
>



-- 
Dave Scott

--bcaec52994530ea31d04f24bf19a--


--===============2827992764087120495==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2827992764087120495==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 16:20:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDz1d-0004wE-RJ; Thu, 13 Feb 2014 16:20:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WDz1b-0004w9-UQ
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:20:32 +0000
Received: from [85.158.139.211:55630] by server-14.bemta-5.messagelabs.com id
	E8/04-27598-FC0FCF25; Thu, 13 Feb 2014 16:20:31 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392308430!3744729!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13284 invoked from network); 13 Feb 2014 16:20:30 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 16:20:30 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WDz1S-0002gg-BO; Thu, 13 Feb 2014 16:20:22 +0000
Date: Thu, 13 Feb 2014 17:20:22 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140213162022.GE82703@deinos.phlegethon.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FCF90F020000780011C29A@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Yang Z Zhang <yang.z.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
> >>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> > On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
> >> George Dunlap wrote on 2014-02-11:
> >>> I think I got a bit distracted with the "A isn't really so bad" thing.
> >>> Actually, if the overhead of not sharing tables isn't very high, then
> >>> B isn't such a bad option.  In fact, B is what I expected Yang to
> >>> submit when he originally described the problem.
> >> Actually, the first solution that came to my mind was B. Then I realized that
> > even if we chose B, we still cannot track memory updates from DMA (even with
> > the A/D bit, it is still a problem). Also, considering the current use case of
> > log dirty in Xen (only vram tracking has a problem), I thought A is better:
> > the hypervisor only needs to track vram changes. If a malicious guest tries to
> > DMA to the vram range, it only crashes itself (this should be reasonable).
> >>
> >>> I was going to say, from a release perspective, B is probably the
> >>> safest option for now.  But on the other hand, if we've been testing
> >>> sharing all this time, maybe switching back over to non-sharing whole-hog has 
> > the higher risk?
> >> Another problem with B is that current VT-d large page support relies on
> > sharing the EPT and VT-d page tables. This means if we choose B, then we need
> > to re-enable VT-d large page support. This would be a huge performance impact
> > for Xen 4.4 when using the VT-d solution.
> > 
> > OK -- if that's the case, then it definitely tips the balance back to 
> > A.  Unless Tim or Jan disagrees, can one of you two check it in?
> > 
> > Don't rush your judgement; but it would be nice to have this in before 
> > RC4, which would mean checking it in today preferably, or early 
> > tomorrow at the latest.
> 
> That would be Tim then, as he would have to approve of it anyway.

Done.

> I should also say that while I certainly understand the argumentation
> above, I would still want to go this route only with the promise that
> B is going to be worked on reasonably soon after the release, ideally
> with the goal of backporting the changes for 4.4.1.

Agreed.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
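[Archive editor's note: the "option A" discussed and checked in above amounts to restricting log-dirty tracking to the vram pfn range instead of all guest memory. The sketch below is purely illustrative; the class and method names are invented for this note and do not correspond to Xen's actual paging/log-dirty interfaces.]

```python
# Toy model of vram-range-only dirty tracking ("option A"): only write
# faults inside the tracked pfn range are recorded, so DMA or writes to
# other memory are simply not seen -- which is acceptable here because
# the only in-tree user of this mode is vram refresh.

class VramDirtyTracker:
    def __init__(self, vram_start_pfn, vram_npages):
        self.start = vram_start_pfn
        self.end = vram_start_pfn + vram_npages
        # Conceptually a one-bit-per-page bitmap; a set of pfns for clarity.
        self.dirty = set()

    def on_guest_write(self, pfn):
        # Record only faults inside the tracked vram range; everything
        # else is deliberately ignored.
        if self.start <= pfn < self.end:
            self.dirty.add(pfn)

    def read_and_clear(self):
        # The toolstack periodically harvests and resets the bitmap to
        # refresh the emulated display.
        out = sorted(self.dirty)
        self.dirty.clear()
        return out
```

Usage under this toy model: writes outside the range leave the bitmap untouched, and harvesting resets it.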

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:26:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDz73-0005DL-OD; Thu, 13 Feb 2014 16:26:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WDz73-0005DG-1j
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:26:09 +0000
Received: from [85.158.139.211:8583] by server-3.bemta-5.messagelabs.com id
	F3/83-13671-022FCF25; Thu, 13 Feb 2014 16:26:08 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392308765!3721666!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23354 invoked from network); 13 Feb 2014 16:26:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:26:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102288168"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 16:26:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 11:26:04 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WDz6x-000319-Ts;
	Thu, 13 Feb 2014 16:26:03 +0000
Message-ID: <52FCF210.7090702@eu.citrix.com>
Date: Thu, 13 Feb 2014 16:25:52 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
In-Reply-To: <20140213162022.GE82703@deinos.phlegethon.org>
X-DLP: MIA2
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	xenbugs <xen@bugs.xenproject.org>, Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

title 38 Implement VT-d large pages so we can avoid sharing between EPT 
and IOMMU
owner it Yang Z Zhang <yang.z.zhang@intel.com>
thanks

On 02/13/2014 04:20 PM, Tim Deegan wrote:
> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>>>>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>>> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>>>> George Dunlap wrote on 2014-02-11:
>>>>> I think I got a bit distracted with the "A isn't really so bad" thing.
>>>>> Actually, if the overhead of not sharing tables isn't very high, then
>>>>> B isn't such a bad option.  In fact, B is what I expected Yang to
>>>>> submit when he originally described the problem.
>>>> Actually, the first solution that came to my mind was B. Then I realized that even
>>> if we chose B, we still could not track memory updates from DMA (even with the A/D
>>> bit, it is still a problem). Also, considering the current use case of log
>>> dirty in Xen (only vram tracking has a problem), I thought A was better:
>>> the hypervisor only needs to track vram changes. If a malicious guest tries to
>>> DMA to the vram range, it only crashes itself (this should be reasonable).
>>>>> I was going to say, from a release perspective, B is probably the
>>>>> safest option for now.  But on the other hand, if we've been testing
>>>>> sharing all this time, maybe switching back over to non-sharing whole-hog has
>>> the higher risk?
>>>> Another problem with B is that the current VT-d large-page support relies on
>>> sharing the EPT and VT-d page tables. This means that if we choose B, we need
>>> to re-enable VT-d large pages. This would be a huge performance impact for
>>> Xen 4.4 when using VT-d.
>>>
>>> OK -- if that's the case, then it definitely tips the balance back to
>>> A.  Unless Tim or Jan disagrees, can one of you two check it in?
>>>
>>> Don't rush your judgement; but it would be nice to have this in before
>>> RC4, which would mean checking it in today preferably, or early
>>> tomorrow at the latest.
>> That would be Tim then, as he would have to approve of it anyway.
> Done.
>
>> I should also say that while I certainly understand the argumentation
>> above, I would still want to go this route only with the promise that
>> B is going to be worked on reasonably soon after the release, ideally
>> with the goal of backporting the changes for 4.4.1.
> Agreed.

OK -- I've retitled the bug and am going to leave it open.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 13 16:26:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDz7l-0005GU-Al; Thu, 13 Feb 2014 16:26:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WDz7j-0005GH-Fi
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:26:51 +0000
Received: from [85.158.143.35:50412] by server-3.bemta-4.messagelabs.com id
	EB/0C-11539-A42FCF25; Thu, 13 Feb 2014 16:26:50 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392308808!5490536!1
X-Originating-IP: [209.85.216.46]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2175 invoked from network); 13 Feb 2014 16:26:49 -0000
Received: from mail-qa0-f46.google.com (HELO mail-qa0-f46.google.com)
	(209.85.216.46)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:26:49 -0000
Received: by mail-qa0-f46.google.com with SMTP id k15so3086369qaq.19
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 08:26:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=K/chWhhcPz8aGEK8OUkfjHSbUPx0Ii0LEAMR5zhFGsU=;
	b=PH5T8FHslU+tISl/ZBjmC6/TaUZhRWeNICyOKR9fjCrqjAa/E6bsBnsKubG399X5u0
	nm+Wq0A+qwIT81x2XDVlfMiXGBf1akzY1JvhWJCTvLOVK7NccVkC9M77MuKyIGoQX6t4
	plV2BxrpUxPj8zLoQhvgEltNOUp8bEUUZwnLOT1pau/ABwIVDk5LDVv1JChbYyhI70RC
	50mujsccarIOFqx6Ex6oVRhDoj0AO4trGyXrKtYWZAHvXWyCpwYeU0Ne+CGgMVf0T8pl
	PIADcDMLorcbZftU3WkJ9xT992dD38/wPeQsNWtgXRtxmzJXVdCXDTr+C+m6M6sG9mVz
	4iHg==
MIME-Version: 1.0
X-Received: by 10.224.119.147 with SMTP id z19mr4152461qaq.20.1392308807681;
	Thu, 13 Feb 2014 08:26:47 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Thu, 13 Feb 2014 08:26:47 -0800 (PST)
In-Reply-To: <CAP5+zHTb_1AVE3_oCNoLH0q7Queoa0ui+vfhK4=Z31+Ld6k30w@mail.gmail.com>
References: <1386136035-19544-1-git-send-email-ufimtseva@gmail.com>
	<CAP5+zHTb_1AVE3_oCNoLH0q7Queoa0ui+vfhK4=Z31+Ld6k30w@mail.gmail.com>
Date: Thu, 13 Feb 2014 11:26:47 -0500
Message-ID: <CAEr7rXjdJhJ07Q4LL=Z5MXTL74x2r2yx8n8mZt_AY_BE36MPiA@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Li Yechen <lccycc123@gmail.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Matt Wilson <msw@linux.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v4 0/7] vNUMA introduction
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 13, 2014 at 7:49 AM, Li Yechen <lccycc123@gmail.com> wrote:
> Hi Elena,
> The patch on gitorious is not available. Have you got a newer version?
> I have an idea and would need to modify your patch a little to manage
> changes to memory.
> Sorry for being away so long :-)

Hi Li
No problem, let me push a new version for you, not to gitorious as it
fails sometimes.

>
>
> On Wed, Dec 4, 2013 at 1:47 PM, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
>>
>> vNUMA introduction
>>
>> This series of patches introduces vNUMA topology awareness and
>> provides interfaces and data structures to enable vNUMA for
>> PV guests. There is a plan to extend this support for dom0 and
>> HVM domains.
>>
>> vNUMA topology must be supported by the PV guest kernel;
>> the corresponding patches should be applied.
>>
>> Introduction
>> -------------
>>
>> vNUMA topology is exposed to the PV guest to improve performance when
>> running workloads on NUMA machines.
>> The Xen vNUMA implementation provides a way to create vNUMA-enabled
>> guests on NUMA/UMA machines
>> and to map the vNUMA topology to physical NUMA in an optimal way.
>>
>> XEN vNUMA support
>>
>> The current set of patches introduces a subop hypercall that is
>> available for enlightened
>> PV guests with the vNUMA patches applied.
>>
>> The domain structure was modified to reflect the per-domain vNUMA
>> topology for use in other
>> vNUMA-aware subsystems (e.g. ballooning).
>>
>> libxc
>>
>> libxc provides interfaces to build PV guests with vNUMA support and,
>> in the case of NUMA
>> machines, performs the initial memory allocation on physical NUMA nodes.
>> This is implemented by
>> utilizing the nodemap formed by automatic NUMA placement. Details are in
>> patch #3.
>>
>> libxl
>>
>> libxl provides a way to predefine the vNUMA topology in the VM config:
>> number of vnodes,
>> memory arrangement, vcpu-to-vnode assignment, and the distance map.
>>
>> PV guest
>>
>> As of now, only PV guests can take advantage of vNUMA functionality.
>> The vNUMA Linux patches
>> should be applied and NUMA support should be compiled into the kernel.
>>
>> This patchset can be pulled from
>> https://git.gitorious.org/xenvnuma/xenvnuma.git:v6
>> Linux patchset https://git.gitorious.org/xenvnuma/linuxvnuma.git:v6
>>
>> Examples of booting vNUMA enabled PV Linux guest on real NUMA machine:
>>
>> 1. Automatic vNUMA placement on h/w NUMA machine:
>>
>> VM config:
>>
>> memory = 16384
>> vcpus = 4
>> name = "rcbig"
>> vnodes = 4
>> vnumamem = [10,10]
>> vnuma_distance = [10, 30, 10, 30]
>> vcpu_to_vnode = [0, 0, 1, 1]
>>
>> Xen:
>>
>> (XEN) Memory location of each domain:
>> (XEN) Domain 0 (total: 2569511):
>> (XEN)     Node 0: 1416166
>> (XEN)     Node 1: 1153345
>> (XEN) Domain 5 (total: 4194304):
>> (XEN)     Node 0: 2097152
>> (XEN)     Node 1: 2097152
>> (XEN)     Domain has 4 vnodes
>> (XEN)         vnode 0 - pnode 0  (4096) MB
>> (XEN)         vnode 1 - pnode 0  (4096) MB
>> (XEN)         vnode 2 - pnode 1  (4096) MB
>> (XEN)         vnode 3 - pnode 1  (4096) MB
>> (XEN)     Domain vcpu to vnode:
>> (XEN)     0 1 2 3
>>
>> dmesg on pv guest:
>>
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
>> [    0.000000]   node   0: [mem 0x00100000-0xffffffff]
>> [    0.000000]   node   1: [mem 0x100000000-0x1ffffffff]
>> [    0.000000]   node   2: [mem 0x200000000-0x2ffffffff]
>> [    0.000000]   node   3: [mem 0x300000000-0x3ffffffff]
>> [    0.000000] On node 0 totalpages: 1048479
>> [    0.000000]   DMA zone: 56 pages used for memmap
>> [    0.000000]   DMA zone: 21 pages reserved
>> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
>> [    0.000000]   DMA32 zone: 14280 pages used for memmap
>> [    0.000000]   DMA32 zone: 1044480 pages, LIFO batch:31
>> [    0.000000] On node 1 totalpages: 1048576
>> [    0.000000]   Normal zone: 14336 pages used for memmap
>> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
>> [    0.000000] On node 2 totalpages: 1048576
>> [    0.000000]   Normal zone: 14336 pages used for memmap
>> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
>> [    0.000000] On node 3 totalpages: 1048576
>> [    0.000000]   Normal zone: 14336 pages used for memmap
>> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
>> [    0.000000] SFI: Simple Firmware Interface v0.81
>> http://simplefirmware.org
>> [    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
>> [    0.000000] No local APIC present
>> [    0.000000] APIC: disable apic facility
>> [    0.000000] APIC: switched to apic NOOP
>> [    0.000000] nr_irqs_gsi: 16
>> [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
>> [    0.000000] e820: cannot find a gap in the 32bit address range
>> [    0.000000] e820: PCI devices with unassigned 32bit BARs may break!
>> [    0.000000] e820: [mem 0x400100000-0x4004fffff] available for PCI
>> devices
>> [    0.000000] Booting paravirtualized kernel on Xen
>> [    0.000000] Xen version: 4.4-unstable (preserve-AD)
>> [    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4
>> nr_node_ids:4
>> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s85376
>> r8192 d21120 u2097152
>> [    0.000000] pcpu-alloc: s85376 r8192 d21120 u2097152 alloc=1*2097152
>> [    0.000000] pcpu-alloc: [0] 0 [1] 1 [2] 2 [3] 3
>>
>>
>> pv guest: numactl --hardware:
>>
>> root@heatpipe:~# numactl --hardware
>> available: 4 nodes (0-3)
>> node 0 cpus: 0
>> node 0 size: 4031 MB
>> node 0 free: 3997 MB
>> node 1 cpus: 1
>> node 1 size: 4039 MB
>> node 1 free: 4022 MB
>> node 2 cpus: 2
>> node 2 size: 4039 MB
>> node 2 free: 4023 MB
>> node 3 cpus: 3
>> node 3 size: 3975 MB
>> node 3 free: 3963 MB
>> node distances:
>> node   0   1   2   3
>>   0:  10  20  20  20
>>   1:  20  10  20  20
>>   2:  20  20  10  20
>>   3:  20  20  20  10
>>
>> Comments:
>> None of the configuration options were valid, so default values were used.
>> Since the machine is a NUMA machine and no vcpu pinning was defined, the
>> automatic NUMA node-selection mechanism was used, and you can see how
>> vnodes were split across the physical nodes.
>>
>> 2. Example with e820_host = 1 (32GB real NUMA machines, two nodes).
>>
>> pv config:
>> memory = 4000
>> vcpus = 8
>> # The name of the domain, change this if you want more than 1 VM.
>> name = "null"
>> vnodes = 4
>> #vnumamem = [3000, 1000]
>> vdistance = [10, 40]
>> #vnuma_vcpumap = [1, 0, 3, 2]
>> vnuma_vnodemap = [1, 0, 1, 0]
>> #vnuma_autoplacement = 1
>> e820_host = 1
>>
>> guest boot:
>>
>> [    0.000000] Initializing cgroup subsys cpuset
>> [    0.000000] Initializing cgroup subsys cpu
>> [    0.000000] Initializing cgroup subsys cpuacct
>> [    0.000000] Linux version 3.12.0+ (assert@superpipe) (gcc version 4.7.2
>> (Debi
>> an 4.7.2-5) ) #111 SMP Tue Dec 3 14:54:36 EST 2013
>> [    0.000000] Command line: root=/dev/xvda1 ro earlyprintk=xen debug
>> loglevel=8
>>  debug print_fatal_signals=1 loglvl=all guest_loglvl=all LOGLEVEL=8
>> earlyprintk=
>> xen sched_debug
>> [    0.000000] ACPI in unprivileged domain disabled
>> [    0.000000] Freeing ac228-fa000 pfn range: 318936 pages freed
>> [    0.000000] 1-1 mapping on ac228->100000
>> [    0.000000] Released 318936 pages of unused memory
>> [    0.000000] Set 343512 page(s) to 1-1 mapping
>> [    0.000000] Populating 100000-14ddd8 pfn range: 318936 pages added
>> [    0.000000] e820: BIOS-provided physical RAM map:
>> [    0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
>> [    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
>> [    0.000000] Xen: [mem 0x0000000000100000-0x00000000ac227fff] usable
>> [    0.000000] Xen: [mem 0x00000000ac228000-0x00000000ac26bfff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac26c000-0x00000000ac57ffff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac580000-0x00000000ac5a0fff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5a1000-0x00000000ac5bbfff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac5bc000-0x00000000ac5bdfff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5be000-0x00000000ac5befff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac5bf000-0x00000000ac5cafff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5cb000-0x00000000ac5d9fff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac5da000-0x00000000ac5fafff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5fb000-0x00000000ac6b6fff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac6b7000-0x00000000ac7fafff] ACPI NVS
>> [    0.000000] Xen: [mem 0x00000000ac7fb000-0x00000000ac80efff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac80f000-0x00000000ac80ffff] ACPI data
>> [    0.000000] Xen: [mem 0x00000000ac810000-0x00000000ac810fff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac811000-0x00000000ac812fff] ACPI data
>> [    0.000000] Xen: [mem 0x00000000ac813000-0x00000000ad7fffff] unusable
>> [    0.000000] Xen: [mem 0x00000000b0000000-0x00000000b3ffffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fed50000-0x00000000fed8ffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
>> [    0.000000] Xen: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserved
>> [    0.000000] Xen: [mem 0x0000000100000000-0x000000014ddd7fff] usable
>> [    0.000000] bootconsole [xenboot0] enabled
>> [    0.000000] NX (Execute Disable) protection: active
>> [    0.000000] DMI not present or invalid.
>> [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==>
>> reserved
>> [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
>> [    0.000000] No AGP bridge found
>> [    0.000000] e820: last_pfn = 0x14ddd8 max_arch_pfn = 0x400000000
>> [    0.000000] e820: last_pfn = 0xac228 max_arch_pfn = 0x400000000
>> [    0.000000] Base memory trampoline at [ffff88000009a000] 9a000 size
>> 24576
>> [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
>> [    0.000000]  [mem 0x00000000-0x000fffff] page 4k
>> [    0.000000] init_memory_mapping: [mem 0x14da00000-0x14dbfffff]
>> [    0.000000]  [mem 0x14da00000-0x14dbfffff] page 4k
>> [    0.000000] BRK [0x019bd000, 0x019bdfff] PGTABLE
>> [    0.000000] BRK [0x019be000, 0x019befff] PGTABLE
>> [    0.000000] init_memory_mapping: [mem 0x14c000000-0x14d9fffff]
>> [    0.000000]  [mem 0x14c000000-0x14d9fffff] page 4k
>> [    0.000000] BRK [0x019bf000, 0x019bffff] PGTABLE
>> [    0.000000] BRK [0x019c0000, 0x019c0fff] PGTABLE
>> [    0.000000] BRK [0x019c1000, 0x019c1fff] PGTABLE
>> [    0.000000] BRK [0x019c2000, 0x019c2fff] PGTABLE
>> [    0.000000] init_memory_mapping: [mem 0x100000000-0x14bffffff]
>> [    0.000000]  [mem 0x100000000-0x14bffffff] page 4k
>> [    0.000000] init_memory_mapping: [mem 0x00100000-0xac227fff]
>> [    0.000000]  [mem 0x00100000-0xac227fff] page 4k
>> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
>> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
>> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
>> [    0.000000] NUMA: Initialized distance table, cnt=4
>> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x3e7fffff]
>> [    0.000000]   NODE_DATA [mem 0x3e7d9000-0x3e7fffff]
>> [    0.000000] Initmem setup node 1 [mem 0x3e800000-0x7cffffff]
>> [    0.000000]   NODE_DATA [mem 0x7cfd9000-0x7cffffff]
>> [    0.000000] Initmem setup node 2 [mem 0x7d000000-0x10f5dffff]
>> [    0.000000]   NODE_DATA [mem 0x10f5b9000-0x10f5dffff]
>> [    0.000000] Initmem setup node 3 [mem 0x10f800000-0x14ddd7fff]
>> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
>> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
>> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
>> [    0.000000]   node   0: [mem 0x00100000-0x3e7fffff]
>> [    0.000000]   node   1: [mem 0x3e800000-0x7cffffff]
>> [    0.000000]   node   2: [mem 0x7d000000-0xac227fff]
>> [    0.000000]   node   2: [mem 0x100000000-0x10f5dffff]
>> [    0.000000]   node   3: [mem 0x10f5e0000-0x14ddd7fff]
>> [    0.000000] On node 0 totalpages: 255903
>> [    0.000000]   DMA zone: 56 pages used for memmap
>> [    0.000000]   DMA zone: 21 pages reserved
>> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
>> [    0.000000]   DMA32 zone: 3444 pages used for memmap
>> [    0.000000]   DMA32 zone: 251904 pages, LIFO batch:31
>> [    0.000000] On node 1 totalpages: 256000
>> [    0.000000]   DMA32 zone: 3500 pages used for memmap
>> [    0.000000]   DMA32 zone: 256000 pages, LIFO batch:31
>> [    0.000000] On node 2 totalpages: 256008
>> [    0.000000]   DMA32 zone: 2640 pages used for memmap
>> [    0.000000]   DMA32 zone: 193064 pages, LIFO batch:31
>> [    0.000000]   Normal zone: 861 pages used for memmap
>> [    0.000000]   Normal zone: 62944 pages, LIFO batch:15
>> [    0.000000] On node 3 totalpages: 255992
>> [    0.000000]   Normal zone: 3500 pages used for memmap
>> [    0.000000]   Normal zone: 255992 pages, LIFO batch:31
>> [    0.000000] SFI: Simple Firmware Interface v0.81
>> http://simplefirmware.org
>> [    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
>>
>> root@heatpipe:~# numactl --ha
>> available: 4 nodes (0-3)
>> node 0 cpus: 0 4
>> node 0 size: 977 MB
>> node 0 free: 947 MB
>> node 1 cpus: 1 5
>> node 1 size: 985 MB
>> node 1 free: 974 MB
>> node 2 cpus: 2 6
>> node 2 size: 985 MB
>> node 2 free: 973 MB
>> node 3 cpus: 3 7
>> node 3 size: 969 MB
>> node 3 free: 958 MB
>> node distances:
>> node   0   1   2   3
>>   0:  10  40  40  40
>>   1:  40  10  40  40
>>   2:  40  40  10  40
>>   3:  40  40  40  10
>>
>> root@heatpipe:~# numastat -m
>>
>> Per-node system memory usage (in MBs):
>>                           Node 0          Node 1          Node 2
>> Node 3           Total
>>                  --------------- --------------- ---------------
>> --------------- ---------------
>> MemTotal                  977.14          985.50          985.44
>> 969.91         3917.99
>>
>> hypervisor: xl debug-keys u
>>
>> (XEN) 'u' pressed -> dumping numa info (now-0x2A3:F7B8CB0F)
>> (XEN) Domain 2 (total: 1024000):
>> (XEN)     Node 0: 415468
>> (XEN)     Node 1: 608532
>> (XEN)     Domain has 4 vnodes
>> (XEN)         vnode 0 - pnode 1 1000 MB, vcpus: 0 4
>> (XEN)         vnode 1 - pnode 0 1000 MB, vcpus: 1 5
>> (XEN)         vnode 2 - pnode 1 2341 MB, vcpus: 2 6
>> (XEN)         vnode 3 - pnode 0 999 MB, vcpus: 3 7
>>
>> This size discrepancy is caused by the way the size is calculated
>> from guest pfns: end - start. Thus the hole size, in this case
>> ~1.3 GB, is included in the size.
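The end - start calculation described above can be sketched as follows. This is a hypothetical illustration, not code from the patches; the pfn boundaries are taken from the e820/Initmem boot output earlier in this example:

```python
# Sketch of the size computation described above: a vnode's size is
# derived from guest pfns as (end - start), so an e820 hole that falls
# inside the range is counted as part of the vnode.
PAGE_SIZE = 4096
MB = 1024 * 1024

def span_mb(start_pfn, end_pfn):
    """Size of a pfn range in MB: end - start, holes included."""
    return (end_pfn - start_pfn) * PAGE_SIZE // MB

# vnode 2 straddles the MMIO hole: usable RAM stops at pfn 0xac228 and
# resumes at pfn 0x100000 (figures from the boot log above).
populated = span_mb(0x7d000, 0xac228) + span_mb(0x100000, 0x10f5e0)
reported = span_mb(0x7d000, 0x10f5e0)  # what end - start yields

print(reported)              # 2341, the size shown for vnode 2
print(reported - populated)  # 1342, the ~1.3 GB hole
```

The populated total (999 MB) matches the other ~1000 MB vnodes, while the reported 2341 MB includes the hole.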
>>
>> 3. Zero vNUMA configuration for every PV domain.
>> There will be at least one vnuma node if no vnuma topology was
>> specified.
>>
>> pv config:
>>
>> memory = 4000
>> vcpus = 8
>> # The name of the domain, change this if you want more than 1 VM.
>> name = "null"
>> #vnodes = 4
>> vnumamem = [3000, 1000]
>> vdistance = [10, 40]
>> vnuma_vcpumap = [1, 0, 3, 2]
>> vnuma_vnodemap = [1, 0, 1, 0]
>> vnuma_autoplacement = 1
>> e820_host = 1
>>
>> boot:
>> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
>> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
>> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
>> [    0.000000] NUMA: Initialized distance table, cnt=1
>> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x14ddd7fff]
>> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
>> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
>> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
>> [    0.000000]   node   0: [mem 0x00100000-0xac227fff]
>> [    0.000000]   node   0: [mem 0x100000000-0x14ddd7fff]
>>
>> root@heatpipe:~# numactl --ha
>> maxn: 0
>> available: 1 nodes (0)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 3918 MB
>> node 0 free: 3853 MB
>> node distances:
>> node   0
>>   0:  10
>>
>> root@heatpipe:~# numastat -m
>>
>> Per-node system memory usage (in MBs):
>>                           Node 0           Total
>>                  --------------- ---------------
>> MemTotal                 3918.74         3918.74
>>
>> hypervisor: xl debug-keys u
>>
>> (XEN) Memory location of each domain:
>> (XEN) Domain 0 (total: 6787432):
>> (XEN)     Node 0: 3485706
>> (XEN)     Node 1: 3301726
>> (XEN) Domain 3 (total: 1024000):
>> (XEN)     Node 0: 512000
>> (XEN)     Node 1: 512000
>> (XEN)     Domain has 1 vnodes
>> (XEN)         vnode 0 - pnode any 5341 MB, vcpus: 0 1 2 3 4 5 6 7
>>
>>
>> Notes:
>>
>> To enable vNUMA in a PV guest, the corresponding patch set should be
>> applied - https://git.gitorious.org/xenvnuma/linuxvnuma.git:v5
>> or
>> https://www.gitorious.org/xenvnuma/linuxvnuma/commit/deaa014257b99f57c76fbba12a28907786cbe17d.
>>
>>
>> Issues:
>>
>> The most important issue right now is automatic NUMA balancing in the
>> Linux PV kernel, as it corrupts user-space memory. Since v3 of this
>> patch series, Linux kernel 3.13 seemed to perform correctly, but with
>> the recent changes the issue is back.
>> See https://lkml.org/lkml/2013/10/31/133 for the urgent patch that
>> presumably had NUMA balancing working. Since 3.12 there have been
>> multiple changes to automatic NUMA balancing. I am currently back to
>> investigating whether anything should be done on the hypervisor side
>> and will work with the kernel maintainers.
>>
>> Elena Ufimtseva (7):
>>   xen: vNUMA support for PV guests
>>   libxc: Plumb Xen with vNUMA topology for domain
>>   xl: vnuma memory parsing and supplement functions
>>   xl: vnuma distance, vcpu and pnode masks parser
>>   libxc: vnuma memory domain allocation
>>   libxl: vNUMA supporting interface
>>   xen: adds vNUMA info debug-key u
>>
>>  docs/man/xl.cfg.pod.5        |   60 +++++++
>>  tools/libxc/xc_dom.h         |   10 ++
>>  tools/libxc/xc_dom_x86.c     |   63 +++++--
>>  tools/libxc/xc_domain.c      |   64 +++++++
>>  tools/libxc/xenctrl.h        |    9 +
>>  tools/libxc/xg_private.h     |    1 +
>>  tools/libxl/libxl.c          |   18 ++
>>  tools/libxl/libxl.h          |   20 +++
>>  tools/libxl/libxl_arch.h     |    6 +
>>  tools/libxl/libxl_dom.c      |  158 ++++++++++++++++--
>>  tools/libxl/libxl_internal.h |    6 +
>>  tools/libxl/libxl_numa.c     |   49 ++++++
>>  tools/libxl/libxl_types.idl  |    6 +-
>>  tools/libxl/libxl_vnuma.h    |   11 ++
>>  tools/libxl/libxl_x86.c      |  123 ++++++++++++++
>>  tools/libxl/xl_cmdimpl.c     |  380
>> ++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/numa.c          |   30 +++-
>>  xen/common/domain.c          |   10 ++
>>  xen/common/domctl.c          |   79 +++++++++
>>  xen/common/memory.c          |   96 +++++++++++
>>  xen/include/public/domctl.h  |   29 ++++
>>  xen/include/public/memory.h  |   17 ++
>>  xen/include/public/vnuma.h   |   59 +++++++
>>  xen/include/xen/domain.h     |    8 +
>>  xen/include/xen/sched.h      |    1 +
>>  25 files changed, 1282 insertions(+), 31 deletions(-)
>>  create mode 100644 tools/libxl/libxl_vnuma.h
>>  create mode 100644 xen/include/public/vnuma.h
>>
>> --
>> 1.7.10.4
>>
>
>
>
> --
>
> Yechen Li
>
> Team of System Virtualization and Cloud Computing
> School of Electronic Engineering  and Computer Science,
> Peking University, China
>
> Nothing is impossible because impossible itself  says: " I'm possible "
> lccycc From PKU



-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:26:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDz7l-0005GU-Al; Thu, 13 Feb 2014 16:26:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WDz7j-0005GH-Fi
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:26:51 +0000
Received: from [85.158.143.35:50412] by server-3.bemta-4.messagelabs.com id
	EB/0C-11539-A42FCF25; Thu, 13 Feb 2014 16:26:50 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392308808!5490536!1
X-Originating-IP: [209.85.216.46]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	MAILTO_TO_SPAM_ADDR,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2175 invoked from network); 13 Feb 2014 16:26:49 -0000
Received: from mail-qa0-f46.google.com (HELO mail-qa0-f46.google.com)
	(209.85.216.46)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:26:49 -0000
Received: by mail-qa0-f46.google.com with SMTP id k15so3086369qaq.19
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 08:26:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=K/chWhhcPz8aGEK8OUkfjHSbUPx0Ii0LEAMR5zhFGsU=;
	b=PH5T8FHslU+tISl/ZBjmC6/TaUZhRWeNICyOKR9fjCrqjAa/E6bsBnsKubG399X5u0
	nm+Wq0A+qwIT81x2XDVlfMiXGBf1akzY1JvhWJCTvLOVK7NccVkC9M77MuKyIGoQX6t4
	plV2BxrpUxPj8zLoQhvgEltNOUp8bEUUZwnLOT1pau/ABwIVDk5LDVv1JChbYyhI70RC
	50mujsccarIOFqx6Ex6oVRhDoj0AO4trGyXrKtYWZAHvXWyCpwYeU0Ne+CGgMVf0T8pl
	PIADcDMLorcbZftU3WkJ9xT992dD38/wPeQsNWtgXRtxmzJXVdCXDTr+C+m6M6sG9mVz
	4iHg==
MIME-Version: 1.0
X-Received: by 10.224.119.147 with SMTP id z19mr4152461qaq.20.1392308807681;
	Thu, 13 Feb 2014 08:26:47 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Thu, 13 Feb 2014 08:26:47 -0800 (PST)
In-Reply-To: <CAP5+zHTb_1AVE3_oCNoLH0q7Queoa0ui+vfhK4=Z31+Ld6k30w@mail.gmail.com>
References: <1386136035-19544-1-git-send-email-ufimtseva@gmail.com>
	<CAP5+zHTb_1AVE3_oCNoLH0q7Queoa0ui+vfhK4=Z31+Ld6k30w@mail.gmail.com>
Date: Thu, 13 Feb 2014 11:26:47 -0500
Message-ID: <CAEr7rXjdJhJ07Q4LL=Z5MXTL74x2r2yx8n8mZt_AY_BE36MPiA@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Li Yechen <lccycc123@gmail.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>, Matt Wilson <msw@linux.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH v4 0/7] vNUMA introduction
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 13, 2014 at 7:49 AM, Li Yechen <lccycc123@gmail.com> wrote:
> Hi Elena,
> The patch on gitorious is not available. Have you got a newer version?
> I have an idea and need to modify your patch a little to manage the change
> of memory.
> Sorry for being away for so long :-)

Hi Li,
No problem, let me push a new version for you, not to gitorious as it
fails sometimes.

>
>
> On Wed, Dec 4, 2013 at 1:47 PM, Elena Ufimtseva <ufimtseva@gmail.com> wrote:
>>
>> vNUMA introduction
>>
>> This series of patches introduces vNUMA topology awareness and
>> provides interfaces and data structures to enable vNUMA for
>> PV guests. There is a plan to extend this support for dom0 and
>> HVM domains.
>>
>> vNUMA topology support must be present in the PV guest kernel;
>> the corresponding patches should be applied.
>>
>> Introduction
>> -------------
>>
>> vNUMA topology is exposed to the PV guest to improve performance when
>> running workloads on NUMA machines.
>> The Xen vNUMA implementation provides a way to create vNUMA-enabled guests
>> on NUMA/UMA machines and to map the vNUMA topology to physical NUMA nodes
>> in an optimal way.
>>
>> XEN vNUMA support
>>
>> The current set of patches introduces a subop hypercall that is available
>> for enlightened PV guests with the vNUMA patches applied.
>>
>> The domain structure was modified to reflect per-domain vNUMA topology for
>> use in other vNUMA-aware subsystems (e.g. ballooning).
>>
>> libxc
>>
>> libxc provides interfaces to build PV guests with vNUMA support and, in
>> the case of NUMA machines, provides initial memory allocation on physical
>> NUMA nodes. This is implemented by utilizing the nodemap formed by
>> automatic NUMA placement. Details are in patch #3.
>>
>> libxl
>>
>> libxl provides a way to predefine the vNUMA topology in the VM config:
>> number of vnodes, memory arrangement, vcpu-to-vnode assignment, and the
>> distance map.
>>
>> PV guest
>>
>> As of now, only PV guests can take advantage of vNUMA functionality. The
>> vNUMA Linux patches should be applied and NUMA support should be compiled
>> into the kernel.
>>
>> This patchset can be pulled from
>> https://git.gitorious.org/xenvnuma/xenvnuma.git:v6
>> Linux patchset https://git.gitorious.org/xenvnuma/linuxvnuma.git:v6
>>
>> Examples of booting vNUMA enabled PV Linux guest on real NUMA machine:
>>
>> 1. Automatic vNUMA placement on h/w NUMA machine:
>>
>> VM config:
>>
>> memory = 16384
>> vcpus = 4
>> name = "rcbig"
>> vnodes = 4
>> vnumamem = [10,10]
>> vnuma_distance = [10, 30, 10, 30]
>> vcpu_to_vnode = [0, 0, 1, 1]
>>
>> Xen:
>>
>> (XEN) Memory location of each domain:
>> (XEN) Domain 0 (total: 2569511):
>> (XEN)     Node 0: 1416166
>> (XEN)     Node 1: 1153345
>> (XEN) Domain 5 (total: 4194304):
>> (XEN)     Node 0: 2097152
>> (XEN)     Node 1: 2097152
>> (XEN)     Domain has 4 vnodes
>> (XEN)         vnode 0 - pnode 0  (4096) MB
>> (XEN)         vnode 1 - pnode 0  (4096) MB
>> (XEN)         vnode 2 - pnode 1  (4096) MB
>> (XEN)         vnode 3 - pnode 1  (4096) MB
>> (XEN)     Domain vcpu to vnode:
>> (XEN)     0 1 2 3
>>
>> dmesg on pv guest:
>>
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
>> [    0.000000]   node   0: [mem 0x00100000-0xffffffff]
>> [    0.000000]   node   1: [mem 0x100000000-0x1ffffffff]
>> [    0.000000]   node   2: [mem 0x200000000-0x2ffffffff]
>> [    0.000000]   node   3: [mem 0x300000000-0x3ffffffff]
>> [    0.000000] On node 0 totalpages: 1048479
>> [    0.000000]   DMA zone: 56 pages used for memmap
>> [    0.000000]   DMA zone: 21 pages reserved
>> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
>> [    0.000000]   DMA32 zone: 14280 pages used for memmap
>> [    0.000000]   DMA32 zone: 1044480 pages, LIFO batch:31
>> [    0.000000] On node 1 totalpages: 1048576
>> [    0.000000]   Normal zone: 14336 pages used for memmap
>> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
>> [    0.000000] On node 2 totalpages: 1048576
>> [    0.000000]   Normal zone: 14336 pages used for memmap
>> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
>> [    0.000000] On node 3 totalpages: 1048576
>> [    0.000000]   Normal zone: 14336 pages used for memmap
>> [    0.000000]   Normal zone: 1048576 pages, LIFO batch:31
>> [    0.000000] SFI: Simple Firmware Interface v0.81
>> http://simplefirmware.org
>> [    0.000000] smpboot: Allowing 4 CPUs, 0 hotplug CPUs
>> [    0.000000] No local APIC present
>> [    0.000000] APIC: disable apic facility
>> [    0.000000] APIC: switched to apic NOOP
>> [    0.000000] nr_irqs_gsi: 16
>> [    0.000000] PM: Registered nosave memory: [mem 0x000a0000-0x000fffff]
>> [    0.000000] e820: cannot find a gap in the 32bit address range
>> [    0.000000] e820: PCI devices with unassigned 32bit BARs may break!
>> [    0.000000] e820: [mem 0x400100000-0x4004fffff] available for PCI
>> devices
>> [    0.000000] Booting paravirtualized kernel on Xen
>> [    0.000000] Xen version: 4.4-unstable (preserve-AD)
>> [    0.000000] setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4
>> nr_node_ids:4
>> [    0.000000] PERCPU: Embedded 28 pages/cpu @ffff8800ffc00000 s85376
>> r8192 d21120 u2097152
>> [    0.000000] pcpu-alloc: s85376 r8192 d21120 u2097152 alloc=1*2097152
>> [    0.000000] pcpu-alloc: [0] 0 [1] 1 [2] 2 [3] 3
>>
>>
>> pv guest: numactl --hardware:
>>
>> root@heatpipe:~# numactl --hardware
>> available: 4 nodes (0-3)
>> node 0 cpus: 0
>> node 0 size: 4031 MB
>> node 0 free: 3997 MB
>> node 1 cpus: 1
>> node 1 size: 4039 MB
>> node 1 free: 4022 MB
>> node 2 cpus: 2
>> node 2 size: 4039 MB
>> node 2 free: 4023 MB
>> node 3 cpus: 3
>> node 3 size: 3975 MB
>> node 3 free: 3963 MB
>> node distances:
>> node   0   1   2   3
>>   0:  10  20  20  20
>>   1:  20  10  20  20
>>   2:  20  20  10  20
>>   3:  20  20  20  10
>>
>> Comments:
>> None of the configuration options is correct, so default values were used.
>> Since the machine is a NUMA machine and no vcpu pinning is defined, the
>> automatic NUMA node selection mechanism is used, and you can see how the
>> vnodes were split across physical nodes.
>>
>> 2. Example with e820_host = 1 (32 GB real NUMA machine, two nodes).
>>
>> pv config:
>> memory = 4000
>> vcpus = 8
>> # The name of the domain, change this if you want more than 1 VM.
>> name = "null"
>> vnodes = 4
>> #vnumamem = [3000, 1000]
>> vdistance = [10, 40]
>> #vnuma_vcpumap = [1, 0, 3, 2]
>> vnuma_vnodemap = [1, 0, 1, 0]
>> #vnuma_autoplacement = 1
>> e820_host = 1
>>
>> guest boot:
>>
>> [    0.000000] Initializing cgroup subsys cpuset
>> [    0.000000] Initializing cgroup subsys cpu
>> [    0.000000] Initializing cgroup subsys cpuacct
>> [    0.000000] Linux version 3.12.0+ (assert@superpipe) (gcc version 4.7.2
>> (Debi
>> an 4.7.2-5) ) #111 SMP Tue Dec 3 14:54:36 EST 2013
>> [    0.000000] Command line: root=/dev/xvda1 ro earlyprintk=xen debug
>> loglevel=8
>>  debug print_fatal_signals=1 loglvl=all guest_loglvl=all LOGLEVEL=8
>> earlyprintk=
>> xen sched_debug
>> [    0.000000] ACPI in unprivileged domain disabled
>> [    0.000000] Freeing ac228-fa000 pfn range: 318936 pages freed
>> [    0.000000] 1-1 mapping on ac228->100000
>> [    0.000000] Released 318936 pages of unused memory
>> [    0.000000] Set 343512 page(s) to 1-1 mapping
>> [    0.000000] Populating 100000-14ddd8 pfn range: 318936 pages added
>> [    0.000000] e820: BIOS-provided physical RAM map:
>> [    0.000000] Xen: [mem 0x0000000000000000-0x000000000009ffff] usable
>> [    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
>> [    0.000000] Xen: [mem 0x0000000000100000-0x00000000ac227fff] usable
>> [    0.000000] Xen: [mem 0x00000000ac228000-0x00000000ac26bfff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac26c000-0x00000000ac57ffff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac580000-0x00000000ac5a0fff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5a1000-0x00000000ac5bbfff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac5bc000-0x00000000ac5bdfff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5be000-0x00000000ac5befff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac5bf000-0x00000000ac5cafff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5cb000-0x00000000ac5d9fff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac5da000-0x00000000ac5fafff] reserved
>> [    0.000000] Xen: [mem 0x00000000ac5fb000-0x00000000ac6b6fff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac6b7000-0x00000000ac7fafff] ACPI NVS
>> [    0.000000] Xen: [mem 0x00000000ac7fb000-0x00000000ac80efff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac80f000-0x00000000ac80ffff] ACPI data
>> [    0.000000] Xen: [mem 0x00000000ac810000-0x00000000ac810fff] unusable
>> [    0.000000] Xen: [mem 0x00000000ac811000-0x00000000ac812fff] ACPI data
>> [    0.000000] Xen: [mem 0x00000000ac813000-0x00000000ad7fffff] unusable
>> [    0.000000] Xen: [mem 0x00000000b0000000-0x00000000b3ffffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fed20000-0x00000000fed3ffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fed50000-0x00000000fed8ffff] reserved
>> [    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
>> [    0.000000] Xen: [mem 0x00000000ffa00000-0x00000000ffa3ffff] reserved
>> [    0.000000] Xen: [mem 0x0000000100000000-0x000000014ddd7fff] usable
>> [    0.000000] bootconsole [xenboot0] enabled
>> [    0.000000] NX (Execute Disable) protection: active
>> [    0.000000] DMI not present or invalid.
>> [    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==>
>> reserved
>> [    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
>> [    0.000000] No AGP bridge found
>> [    0.000000] e820: last_pfn = 0x14ddd8 max_arch_pfn = 0x400000000
>> [    0.000000] e820: last_pfn = 0xac228 max_arch_pfn = 0x400000000
>> [    0.000000] Base memory trampoline at [ffff88000009a000] 9a000 size
>> 24576
>> [    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
>> [    0.000000]  [mem 0x00000000-0x000fffff] page 4k
>> [    0.000000] init_memory_mapping: [mem 0x14da00000-0x14dbfffff]
>> [    0.000000]  [mem 0x14da00000-0x14dbfffff] page 4k
>> [    0.000000] BRK [0x019bd000, 0x019bdfff] PGTABLE
>> [    0.000000] BRK [0x019be000, 0x019befff] PGTABLE
>> [    0.000000] init_memory_mapping: [mem 0x14c000000-0x14d9fffff]
>> [    0.000000]  [mem 0x14c000000-0x14d9fffff] page 4k
>> [    0.000000] BRK [0x019bf000, 0x019bffff] PGTABLE
>> [    0.000000] BRK [0x019c0000, 0x019c0fff] PGTABLE
>> [    0.000000] BRK [0x019c1000, 0x019c1fff] PGTABLE
>> [    0.000000] BRK [0x019c2000, 0x019c2fff] PGTABLE
>> [    0.000000] init_memory_mapping: [mem 0x100000000-0x14bffffff]
>> [    0.000000]  [mem 0x100000000-0x14bffffff] page 4k
>> [    0.000000] init_memory_mapping: [mem 0x00100000-0xac227fff]
>> [    0.000000]  [mem 0x00100000-0xac227fff] page 4k
>> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
>> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
>> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
>> [    0.000000] NUMA: Initialized distance table, cnt=4
>> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x3e7fffff]
>> [    0.000000]   NODE_DATA [mem 0x3e7d9000-0x3e7fffff]
>> [    0.000000] Initmem setup node 1 [mem 0x3e800000-0x7cffffff]
>> [    0.000000]   NODE_DATA [mem 0x7cfd9000-0x7cffffff]
>> [    0.000000] Initmem setup node 2 [mem 0x7d000000-0x10f5dffff]
>> [    0.000000]   NODE_DATA [mem 0x10f5b9000-0x10f5dffff]
>> [    0.000000] Initmem setup node 3 [mem 0x10f800000-0x14ddd7fff]
>> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
>> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
>> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
>> [    0.000000]   node   0: [mem 0x00100000-0x3e7fffff]
>> [    0.000000]   node   1: [mem 0x3e800000-0x7cffffff]
>> [    0.000000]   node   2: [mem 0x7d000000-0xac227fff]
>> [    0.000000]   node   2: [mem 0x100000000-0x10f5dffff]
>> [    0.000000]   node   3: [mem 0x10f5e0000-0x14ddd7fff]
>> [    0.000000] On node 0 totalpages: 255903
>> [    0.000000]   DMA zone: 56 pages used for memmap
>> [    0.000000]   DMA zone: 21 pages reserved
>> [    0.000000]   DMA zone: 3999 pages, LIFO batch:0
>> [    0.000000]   DMA32 zone: 3444 pages used for memmap
>> [    0.000000]   DMA32 zone: 251904 pages, LIFO batch:31
>> [    0.000000] On node 1 totalpages: 256000
>> [    0.000000]   DMA32 zone: 3500 pages used for memmap
>> [    0.000000]   DMA32 zone: 256000 pages, LIFO batch:31
>> [    0.000000] On node 2 totalpages: 256008
>> [    0.000000]   DMA32 zone: 2640 pages used for memmap
>> [    0.000000]   DMA32 zone: 193064 pages, LIFO batch:31
>> [    0.000000]   Normal zone: 861 pages used for memmap
>> [    0.000000]   Normal zone: 62944 pages, LIFO batch:15
>> [    0.000000] On node 3 totalpages: 255992
>> [    0.000000]   Normal zone: 3500 pages used for memmap
>> [    0.000000]   Normal zone: 255992 pages, LIFO batch:31
>> [    0.000000] SFI: Simple Firmware Interface v0.81
>> http://simplefirmware.org
>> [    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
>>
>> root@heatpipe:~# numactl --ha
>> available: 4 nodes (0-3)
>> node 0 cpus: 0 4
>> node 0 size: 977 MB
>> node 0 free: 947 MB
>> node 1 cpus: 1 5
>> node 1 size: 985 MB
>> node 1 free: 974 MB
>> node 2 cpus: 2 6
>> node 2 size: 985 MB
>> node 2 free: 973 MB
>> node 3 cpus: 3 7
>> node 3 size: 969 MB
>> node 3 free: 958 MB
>> node distances:
>> node   0   1   2   3
>>   0:  10  40  40  40
>>   1:  40  10  40  40
>>   2:  40  40  10  40
>>   3:  40  40  40  10
>>
>> root@heatpipe:~# numastat -m
>>
>> Per-node system memory usage (in MBs):
>>                           Node 0          Node 1          Node 2
>> Node 3           Total
>>                  --------------- --------------- ---------------
>> --------------- ---------------
>> MemTotal                  977.14          985.50          985.44
>> 969.91         3917.99
>>
>> hypervisor: xl debug-keys u
>>
>> (XEN) 'u' pressed -> dumping numa info (now-0x2A3:F7B8CB0F)
>> (XEN) Domain 2 (total: 1024000):
>> (XEN)     Node 0: 415468
>> (XEN)     Node 1: 608532
>> (XEN)     Domain has 4 vnodes
>> (XEN)         vnode 0 - pnode 1 1000 MB, vcpus: 0 4
>> (XEN)         vnode 1 - pnode 0 1000 MB, vcpus: 1 5
>> (XEN)         vnode 2 - pnode 1 2341 MB, vcpus: 2 6
>> (XEN)         vnode 3 - pnode 0 999 MB, vcpus: 3 7
>>
>> This size discrepancy is caused by the way the size is calculated
>> from guest pfns: end - start. Thus the hole size, in this case
>> ~1.3 GB, is included in the size.
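A minimal sketch of that end - start calculation, using the node 2 pfn boundaries from the boot log above (illustrative Python, not code from the patch series):

```python
# pfn boundaries for node 2 from the boot log:
#   node 2: [mem 0x7d000000-0xac227fff] and [mem 0x100000000-0x10f5dffff]
# so in pfns (4 KiB pages) the vnode spans 0x7d000..0x10f5e0,
# with the e820 hole 0xac228..0x100000 in the middle.
vnode_start_pfn = 0x7d000
vnode_end_pfn = 0x10f5e0
hole_start_pfn = 0xac228
hole_end_pfn = 0x100000

# Naive size from guest pfns: end - start. The hole is counted too.
naive_pages = vnode_end_pfn - vnode_start_pfn

# Actual RAM excludes the ~1.3 GB hole (343512 pages).
actual_pages = naive_pages - (hole_end_pfn - hole_start_pfn)

print(naive_pages * 4 // 1024, "MB")  # 2341 MB, as in the debug-keys dump
print(actual_pages, "pages")          # 256008, as in 'On node 2 totalpages'
```

The 2341 MB reported for vnode 2 by `xl debug-keys u` is thus end - start including the hole, while the kernel's totalpages count reflects the actual RAM.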
>>
>> 3. No vNUMA configuration specified for the PV domain.
>> There will be at least one vnuma node if the vNUMA topology was not
>> specified.
>>
>> pv config:
>>
>> memory = 4000
>> vcpus = 8
>> # The name of the domain, change this if you want more than 1 VM.
>> name = "null"
>> #vnodes = 4
>> vnumamem = [3000, 1000]
>> vdistance = [10, 40]
>> vnuma_vcpumap = [1, 0, 3, 2]
>> vnuma_vnodemap = [1, 0, 1, 0]
>> vnuma_autoplacement = 1
>> e820_host = 1
>>
>> boot:
>> [    0.000000] init_memory_mapping: [mem 0x14dc00000-0x14ddd7fff]
>> [    0.000000]  [mem 0x14dc00000-0x14ddd7fff] page 4k
>> [    0.000000] RAMDISK: [mem 0x01dc8000-0x0346ffff]
>> [    0.000000] NUMA: Initialized distance table, cnt=1
>> [    0.000000] Initmem setup node 0 [mem 0x00000000-0x14ddd7fff]
>> [    0.000000]   NODE_DATA [mem 0x14ddad000-0x14ddd3fff]
>> [    0.000000] Zone ranges:
>> [    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
>> [    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
>> [    0.000000]   Normal   [mem 0x100000000-0x14ddd7fff]
>> [    0.000000] Movable zone start for each node
>> [    0.000000] Early memory node ranges
>> [    0.000000]   node   0: [mem 0x00001000-0x0009ffff]
>> [    0.000000]   node   0: [mem 0x00100000-0xac227fff]
>> [    0.000000]   node   0: [mem 0x100000000-0x14ddd7fff]
>>
>> root@heatpipe:~# numactl --ha
>> maxn: 0
>> available: 1 nodes (0)
>> node 0 cpus: 0 1 2 3 4 5 6 7
>> node 0 size: 3918 MB
>> node 0 free: 3853 MB
>> node distances:
>> node   0
>>   0:  10
>>
>> root@heatpipe:~# numastat -m
>>
>> Per-node system memory usage (in MBs):
>>                           Node 0           Total
>>                  --------------- ---------------
>> MemTotal                 3918.74         3918.74
>>
>> hypervisor: xl debug-keys u
>>
>> (XEN) Memory location of each domain:
>> (XEN) Domain 0 (total: 6787432):
>> (XEN)     Node 0: 3485706
>> (XEN)     Node 1: 3301726
>> (XEN) Domain 3 (total: 1024000):
>> (XEN)     Node 0: 512000
>> (XEN)     Node 1: 512000
>> (XEN)     Domain has 1 vnodes
>> (XEN)         vnode 0 - pnode any 5341 MB, vcpus: 0 1 2 3 4 5 6 7
>>
>>
>> Notes:
>>
>> To enable vNUMA in a PV guest, the corresponding patch set should be
>> applied - https://git.gitorious.org/xenvnuma/linuxvnuma.git:v5
>> or
>> https://www.gitorious.org/xenvnuma/linuxvnuma/commit/deaa014257b99f57c76fbba12a28907786cbe17d.
>>
>>
>> Issues:
>>
>> The most important issue right now is automatic NUMA balancing for the
>> Linux PV kernel, as it corrupts user-space memory. Since v3 of this patch
>> series, Linux kernel 3.13 seemed to perform correctly, but with the recent
>> changes the issue is back.
>> See https://lkml.org/lkml/2013/10/31/133 for the urgent patch that
>> presumably had NUMA balancing working. Since 3.12 there have been multiple
>> changes to automatic NUMA balancing. I am currently back to investigating
>> whether anything should be done from the hypervisor side and will work
>> with the kernel maintainers.
>>
>> Elena Ufimtseva (7):
>>   xen: vNUMA support for PV guests
>>   libxc: Plumb Xen with vNUMA topology for domain
>>   xl: vnuma memory parsing and supplement functions
>>   xl: vnuma distance, vcpu and pnode masks parser
>>   libxc: vnuma memory domain allocation
>>   libxl: vNUMA supporting interface
>>   xen: adds vNUMA info debug-key u
>>
>>  docs/man/xl.cfg.pod.5        |   60 +++++++
>>  tools/libxc/xc_dom.h         |   10 ++
>>  tools/libxc/xc_dom_x86.c     |   63 +++++--
>>  tools/libxc/xc_domain.c      |   64 +++++++
>>  tools/libxc/xenctrl.h        |    9 +
>>  tools/libxc/xg_private.h     |    1 +
>>  tools/libxl/libxl.c          |   18 ++
>>  tools/libxl/libxl.h          |   20 +++
>>  tools/libxl/libxl_arch.h     |    6 +
>>  tools/libxl/libxl_dom.c      |  158 ++++++++++++++++--
>>  tools/libxl/libxl_internal.h |    6 +
>>  tools/libxl/libxl_numa.c     |   49 ++++++
>>  tools/libxl/libxl_types.idl  |    6 +-
>>  tools/libxl/libxl_vnuma.h    |   11 ++
>>  tools/libxl/libxl_x86.c      |  123 ++++++++++++++
>>  tools/libxl/xl_cmdimpl.c     |  380
>> ++++++++++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/numa.c          |   30 +++-
>>  xen/common/domain.c          |   10 ++
>>  xen/common/domctl.c          |   79 +++++++++
>>  xen/common/memory.c          |   96 +++++++++++
>>  xen/include/public/domctl.h  |   29 ++++
>>  xen/include/public/memory.h  |   17 ++
>>  xen/include/public/vnuma.h   |   59 +++++++
>>  xen/include/xen/domain.h     |    8 +
>>  xen/include/xen/sched.h      |    1 +
>>  25 files changed, 1282 insertions(+), 31 deletions(-)
>>  create mode 100644 tools/libxl/libxl_vnuma.h
>>  create mode 100644 xen/include/public/vnuma.h
>>
>> --
>> 1.7.10.4
>>
>
>
>
> --
>
> Yechen Li
>
> Team of System Virtualization and Cloud Computing
> School of Electronic Engineering  and Computer Science,
> Peking University, China
>
> Nothing is impossible because impossible itself  says: " I'm possible "
> lccycc From PKU



-- 
Elena

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:28:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDz8v-0005OS-13; Thu, 13 Feb 2014 16:28:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bmenges@gogrid.com>) id 1WDz7L-0005EJ-1q
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:26:27 +0000
Received: from [85.158.143.35:41332] by server-3.bemta-4.messagelabs.com id
	F9/4B-11539-232FCF25; Thu, 13 Feb 2014 16:26:26 +0000
X-Env-Sender: bmenges@gogrid.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392308784!5490400!1
X-Originating-IP: [216.93.160.25]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31241 invoked from network); 13 Feb 2014 16:26:25 -0000
Received: from smtp1.servepath.com (HELO smtp1.servepath.com) (216.93.160.25)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 16:26:25 -0000
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=january; d=gogrid.com;
	h=Received:Received:From:To:CC:Subject:Thread-Topic:Thread-Index:Date:Message-ID:References:In-Reply-To:Accept-Language:Content-Language:X-MS-Has-Attach:X-MS-TNEF-Correlator:x-originating-ip:Content-Type:Content-Transfer-Encoding:MIME-Version;
	b=oUcQ/LMEz/u/wzOPoY8s+B5UsgLGiYBdl4lkdc0UEjF0t2jG7w0gHGvCfkb4oZN4qbkOBtMyRJmUu+92EIrxCZqSn5B3UMQs7Z7SgUr2LQb1Tn3wsbReLMpM7BN2apN4;
Received: from [192.168.6.220] (helo=ex-001-sfo.servepath.com)
	by smtp1.servepath.com with esmtp (Exim 4.68 (FreeBSD))
	(envelope-from <bmenges@gogrid.com>)
	id 1WDz7I-000817-8J; Thu, 13 Feb 2014 08:26:24 -0800
Received: from EX-002-SFO.servepath.com ([169.254.2.202]) by
	ex-001-sfo.servepath.com ([169.254.1.228]) with mapi id 14.03.0123.003;
	Thu, 13 Feb 2014 08:24:13 -0800
From: Brian Menges <bmenges@gogrid.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] build xen 4.3 backport for wheezy
Thread-Index: Ac8mi7yUvYMzBKP8TbC0RLZqc1DA5QAw+jyAAGII1MA=
Date: Thu, 13 Feb 2014 16:24:12 +0000
Message-ID: <F33FED1E326F7448A0623CC9BFA2D4F918AEBF@ex-002-sfo.servepath.com>
References: <F33FED1E326F7448A0623CC9BFA2D4F9188292@ex-002-sfo.servepath.com>
	<1392111325.26657.53.camel@kazak.uk.xensource.com>
In-Reply-To: <1392111325.26657.53.camel@kazak.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.3.1]
MIME-Version: 1.0
X-Mailman-Approved-At: Thu, 13 Feb 2014 16:28:03 +0000
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] build xen 4.3 backport for wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SWFuLA0KDQpUaGFuayB5b3UgZm9yIHBvaW50aW5nIG91dCBkZWJpYW4vcnVsZXMuZGVmcywgdGhh
dCB3YXMgdGhlIGhlbHAgdGhhdCBJIG5lZWRlZC4gSSBoYWQgY3JhZnRlZCBhIG1lc3NhZ2UgdG8g
dGhlIGFsaW90aCBEZWJpYW4gbGlzdCwgaG93ZXZlciBpdCBzZWVtcyB0byBoYXZlIGJvdW5jZWQg
KHR5cG8pLCBidXQgeW91ciBhZHZpY2Ugd2FzIHNwb3Qgb24uIEkgZ290IG15IHBhY2thZ2VzIGJ1
aWx0IGFuZCB3aWxsIGJlIHRlc3RpbmcuIFRoYW5rcyENCg0KLSBCcmlhbiBNZW5nZXMNClByaW5j
aXBhbCBFbmdpbmVlciwgRGV2T3BzDQpHb0dyaWQgfCBTZXJ2ZVBhdGggfCBDb2xvU2VydmUgfCBV
cFN0cmVhbSBOZXR3b3Jrcw0KDQotLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KRnJvbTogSWFu
IENhbXBiZWxsIFttYWlsdG86SWFuLkNhbXBiZWxsQGNpdHJpeC5jb21dDQpTZW50OiBUdWVzZGF5
LCBGZWJydWFyeSAxMSwgMjAxNCAwMTozNg0KVG86IEJyaWFuIE1lbmdlcw0KQ2M6IHhlbi1kZXZl
bEBsaXN0cy54ZW4ub3JnDQpTdWJqZWN0OiBSZTogW1hlbi1kZXZlbF0gYnVpbGQgeGVuIDQuMyBi
YWNrcG9ydCBmb3Igd2hlZXp5DQoNCk9uIE1vbiwgMjAxNC0wMi0xMCBhdCAxODoxMyArMDAwMCwg
QnJpYW4gTWVuZ2VzIHdyb3RlOg0KPiBJ4oCZbSBhdHRlbXB0aW5nIHRvIGJ1aWxkIDQuMyBmcm9t
IEplc3NpZSAoRGViaWFuKSBhbmQgSeKAmW0gYnVtcGluZyBpbnRvDQo+IGFuIGludGVyZXN0aW5n
IGRlcGVuZGVuY3k6DQoNCj4gUnVudGltZUVycm9yOiBDYW4ndCBmaW5kIC91c3Ivc3JjL2xpbnV4
LXN1cHBvcnQtMy4xMC0zLCBwbGVhc2UgaW5zdGFsbA0KPiB0aGUgbGludXgtc3VwcG9ydC0zLjEw
LTMgcGFja2FnZQ0KDQo+IEl0IGxvb2tzIGxpa2UgdGhlIGJ1aWxkIHNjcmlwdHMgYXJlbuKAmXQg
ZGV0ZWN0aW5nIG15IGluc3RhbGxhdGlvbiBvZg0KPiBsaW51eC1zdXBwb3J0LTMuMTItMC5icG8u
MSBhbmQgaGFzIGEgdmVyc2lvbiBsb2NrIG9uIDMuMTAtMyAod2hpY2gNCj4gaXNu4oCZdCBiYWNr
cG9ydGVkLCBhbmQgYXZhaWxhYmxlIG9ubHkgaW4gSmVzc2llIGV2ZW4gdGhvdWdoIEJQTyBoYXMg
dGhlDQo+IG5ld2VzdCAzLjEyKS4NCg0KVGhpcyBpcyBhIERlYmlhbiBwYWNrYWdpbmcgdGhpbmcs
IG5vdCBhIFhlbiB0aGluZyBhdCBhbGwsIGFuZCBjZXJ0YWlubHkgbm90IGEgeGVuLWRldmVsIHRo
aW5nLg0KDQpJSVJDIHRoZSBYZW4gcGFja2FnZXMgaW4gRGViaWFuIHVzZSBzb21lIG9mIHRoZSBw
YWNrYWdpbmcgaW5mcmFzdHJ1Y3R1cmUgcHJvdmlkZWQgYnkgTGludXggYXQgc291cmNlIHBhY2th
Z2UgcHJlcGFyYXRpb24gdGltZSAobm90IGFjdHVhbCBidWlsZA0KdGltZSkgYW5kIGVuZCB1cCB3
aXRoIGEgc3BlY2lmaWMgZGVwZW5kZW5jeS4NCg0KWW91IG1pZ2h0IGhhdmUgc29tZSBsdWNrIGZy
b2JiaW5nIHhlbi9kZWJpYW4vcnVsZXMuZGVmcywgb3IgeW91IG1pZ2h0IGZpbmQgaXQgZWFzaWVy
IHRvIGJhY2twb3J0IHRoZSByZXF1aXJlZCB2ZXJzaW9uLg0KDQpJYW4uDQoNCg0KDQpfX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fXw0KDQpUaGUgaW5mb3JtYXRpb24gY29udGFpbmVkIGlu
IHRoaXMgbWVzc2FnZSwgYW5kIGFueSBhdHRhY2htZW50cywgbWF5IGNvbnRhaW4gY29uZmlkZW50
aWFsIGFuZCBsZWdhbGx5IHByaXZpbGVnZWQgbWF0ZXJpYWwuIEl0IGlzIHNvbGVseSBmb3IgdGhl
IHVzZSBvZiB0aGUgcGVyc29uIG9yIGVudGl0eSB0byB3aGljaCBpdCBpcyBhZGRyZXNzZWQuIEFu
eSByZXZpZXcsIHJldHJhbnNtaXNzaW9uLCBkaXNzZW1pbmF0aW9uLCBvciBhY3Rpb24gdGFrZW4g
aW4gcmVsaWFuY2UgdXBvbiB0aGlzIGluZm9ybWF0aW9uIGJ5IHBlcnNvbnMgb3IgZW50aXRpZXMg
b3RoZXIgdGhhbiB0aGUgaW50ZW5kZWQgcmVjaXBpZW50IGlzIHByb2hpYml0ZWQuIElmIHlvdSBy
ZWNlaXZlIHRoaXMgaW4gZXJyb3IsIHBsZWFzZSBjb250YWN0IHRoZSBzZW5kZXIgYW5kIGRlbGV0
ZSB0aGUgbWF0ZXJpYWwgZnJvbSBhbnkgY29tcHV0ZXIuDQpfX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZl
bEBsaXN0cy54ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

X-Mailman-Approved-At: Thu, 13 Feb 2014 16:28:03 +0000
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] build xen 4.3 backport for wheezy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,

Thank you for pointing out debian/rules.defs, that was the help that I needed. I had crafted a message to the alioth Debian list, however it seems to have bounced (typo), but your advice was spot on. I got my packages built and will be testing. Thanks!

- Brian Menges
Principal Engineer, DevOps
GoGrid | ServePath | ColoServe | UpStream Networks

-----Original Message-----
From: Ian Campbell [mailto:Ian.Campbell@citrix.com]
Sent: Tuesday, February 11, 2014 01:36
To: Brian Menges
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] build xen 4.3 backport for wheezy

On Mon, 2014-02-10 at 18:13 +0000, Brian Menges wrote:
> I’m attempting to build 4.3 from Jessie (Debian) and I’m bumping into
> an interesting dependency:

> RuntimeError: Can't find /usr/src/linux-support-3.10-3, please install
> the linux-support-3.10-3 package

> It looks like the build scripts aren’t detecting my installation of
> linux-support-3.12-0.bpo.1 and has a version lock on 3.10-3 (which
> isn’t backported, and available only in Jessie even though BPO has the
> newest 3.12).

This is a Debian packaging thing, not a Xen thing at all, and certainly not a xen-devel thing.

IIRC the Xen packages in Debian use some of the packaging infrastructure provided by Linux at source package preparation time (not actual build time) and end up with a specific dependency.

You might have some luck frobbing xen/debian/rules.defs, or you might find it easier to backport the required version.

Ian.

________________________________

The information contained in this message, and any attachments, may contain confidential and legally privileged material. It is solely for the use of the person or entity to which it is addressed. Any review, retransmission, dissemination, or action taken in reliance upon this information by persons or entities other than the intended recipient is prohibited. If you receive this in error, please contact the sender and delete the material from any computer.
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:40:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:40:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzL2-0006MG-NX; Thu, 13 Feb 2014 16:40:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WDzL1-0006MB-IJ
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:40:35 +0000
Received: from [85.158.137.68:9858] by server-2.bemta-3.messagelabs.com id
	A9/6D-06531-285FCF25; Thu, 13 Feb 2014 16:40:34 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392309633!441244!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3536 invoked from network); 13 Feb 2014 16:40:34 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 13 Feb 2014 16:40:34 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WDzPK-0002lF-CO; Thu, 13 Feb 2014 16:45:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1392309902.10615@bugs.xenproject.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<52FCF210.7090702@eu.citrix.com>
In-Reply-To: <52FCF210.7090702@eu.citrix.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: George Dunlap <george.dunlap@eu.citrix.com>
Date: Thu, 13 Feb 2014 16:45:02 +0000
Subject: [Xen-devel] Processed: Re: [PATCH] Don't track all memory when
 enabling log dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> title 38 Implement VT-d large pages so we can avoid sharing between EPT 
Set title for #38 to `Implement VT-d large pages so we can avoid sharing between EPT'
> and IOMMU
Command failed: Unknown command `and'. at /srv/xen-devel-bugs/lib/emesinae/control.pl line 455, <GEN1> line 2.
Stop processing here.

Modified/created Bugs:
 - 38: http://bugs.xenproject.org/xen/bug/38

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:56:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzaU-0006b0-Du; Thu, 13 Feb 2014 16:56:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1WDzaS-0006av-LM
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:56:32 +0000
Received: from [85.158.137.68:49511] by server-17.bemta-3.messagelabs.com id
	5F/BD-22569-F39FCF25; Thu, 13 Feb 2014 16:56:31 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392310590!1704602!1
X-Originating-IP: [209.85.212.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17307 invoked from network); 13 Feb 2014 16:56:31 -0000
Received: from mail-wi0-f179.google.com (HELO mail-wi0-f179.google.com)
	(209.85.212.179)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:56:31 -0000
Received: by mail-wi0-f179.google.com with SMTP id hn9so8874907wib.0
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 08:56:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:subject:mime-version:content-type
	:content-transfer-encoding;
	bh=OqmweUSaO3dr2NNk52HWd9c06SunXB96PtoYnjCi9RQ=;
	b=B1ZEuEg0iAN5G2uANColLjz5xjr5IRg1IX1E9p78ru2xRT12/fzFbI7HaemmRJPieR
	kfFaSOwzqjAuLlEn9hui4WzBsRU9ikITVQJUlQGa46ol+pro+fnuHOjWsGP7OQzyhquA
	VFEsi3qgPno4MjfeTj5IRjrF4OE9njxVuKPcd2HxLaNbauxX7BhZF5WaJffNpc/zgubb
	WChxhBEUSK+Bi/TFIrUpB46f20Px+o2Zl4Xkq+/39JKpzrlqWybsggySWtCN5hNfF4SZ
	UvaTSSoeImR4wEN54Qu2mTJnilpJzm3GtbntiC9BEKf7V0qJ+Z1gLpoOb0CYV6hYv6n7
	p+pw==
X-Received: by 10.194.121.129 with SMTP id lk1mr1815157wjb.80.1392310576595;
	Thu, 13 Feb 2014 08:56:16 -0800 (PST)
Received: from localhost (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id uq2sm5910556wjc.5.2014.02.13.08.56.13
	for <xen-devel@lists.xen.org>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 08:56:15 -0800 (PST)
Date: Thu, 13 Feb 2014 16:56:04 +0000
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <1646915994.20140213165604@gmail.com>
To: xen-devel@lists.xen.org
MIME-Version: 1.0
Subject: [Xen-devel] Strange interdependence between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

Sorry, I just sent this without a subject. Here it is with the subject
as it should be.

I am now successfully running my little operating system inside Xen.
It is fully preemptive and working a treat, but I have just noticed
something I wasn't expecting, which will really be a problem for me if
I can't work around it.

My configuration is as follows:

1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.

2.- Xen: 4.4 (just pulled from repository)

3.- Dom0: Debian Wheezy (Kernel 3.2)

4.- 2 cpu pools:

# xl cpupool-list
Name               CPUs   Sched     Active   Domain count
Pool-0               3    credit       y          2
pv499                1  arinc653       y          1

5.- 2 domU:

# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   984     3     r-----      39.7
win7x64                                      1  2046     3     -b----     143.0
pv499                                        3   128     1     -b----      61.2

6.- All VCPUs are pinned:

# xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   -b-      27.5  0
Domain-0                             0     1    1   -b-       7.2  1
Domain-0                             0     2    2   r--       5.1  2
win7x64                              1     0    0   -b-      71.6  0
win7x64                              1     1    1   -b-      37.7  1
win7x64                              1     2    2   -b-      34.5  2
pv499                                3     0    3   -b-      62.1  3

7.- pv499 is the domU that I am testing. It has no disk or vif devices
(yet). I am running a little test program in pv499 and the timing I
see varies depending on disk activity.

My test program prints the time taken in milliseconds for a
million cycles. With no disk activity I see 940 ms; with disk activity
I see 1200 ms.

I can't understand this as disk activity should be running on cores 0,
1 and 2, but never on core 3. The only thing running on core 3 should
be my paravirtual machine and the hypervisor stub.

Any idea what's going on?


-- 
Best regards,
 Simon                          mailto:furryfuttock@gmail.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:59:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzdX-0006kc-5i; Thu, 13 Feb 2014 16:59:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDzdV-0006kW-AL
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:59:41 +0000
Received: from [85.158.139.211:52836] by server-15.bemta-5.messagelabs.com id
	A8/EB-24395-CF9FCF25; Thu, 13 Feb 2014 16:59:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392310778!3776721!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27157 invoked from network); 13 Feb 2014 16:59:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:59:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102300636"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 16:59:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 11:59:37 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDzdQ-0003UN-Pt;
	Thu, 13 Feb 2014 16:59:36 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 16:59:27 +0000
Message-ID: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv2] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
Changes in v2:
- Use per-cpu variable to mark preemptible regions
- Call preempt_schedule_irq() from the correct place in
  xen_hypervisor_callback

 arch/x86/kernel/entry_32.S |   23 +++++++++++++++++++++++
 arch/x86/kernel/entry_64.S |   19 +++++++++++++++++++
 drivers/xen/Makefile       |    2 +-
 drivers/xen/preempt.c      |   16 ++++++++++++++++
 drivers/xen/privcmd.c      |    2 ++
 include/xen/xen-ops.h      |   27 +++++++++++++++++++++++++++
 6 files changed, 88 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/preempt.c

diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index a2a4f46..b99bc9c 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -998,7 +998,30 @@ ENTRY(xen_hypervisor_callback)
 ENTRY(xen_do_upcall)
 1:	mov %esp, %eax
 	call xen_evtchn_do_upcall
+#ifdef CONFIG_PREEMPT
 	jmp  ret_from_intr
+#else
+	GET_THREAD_INFO(%ebp)
+#ifdef CONFIG_VM86
+	movl PT_EFLAGS(%esp), %eax	# mix EFLAGS and CS
+	movb PT_CS(%esp), %al
+	andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
+#else
+	movl PT_CS(%esp), %eax
+	andl $SEGMENT_RPL_MASK, %eax
+#endif
+	cmpl $USER_RPL, %eax
+	jae resume_userspace		# returning to v8086 or userspace
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	cmpl $0,TI_preempt_count(%ebp)	# non-zero preempt_count ?
+	jnz resume_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz resume_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp resume_kernel
+#endif /* CONFIG_PREEMPT */
 	CFI_ENDPROC
 ENDPROC(xen_hypervisor_callback)
 
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..d8f4fd8 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1404,7 +1404,25 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
+#ifdef CONFIG_PREEMPT
 	jmp  error_exit
+#else
+	movl %ebx, %eax
+	RESTORE_REST
+	DISABLE_INTERRUPTS(CLBR_NONE)
+	TRACE_IRQS_OFF
+	GET_THREAD_INFO(%rcx)
+	testl %eax, %eax
+	je error_exit_user
+	cmpl $0,PER_CPU_VAR(__preempt_count)
+	jnz retint_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz retint_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp retint_kernel
+#endif
 	CFI_ENDPROC
 END(xen_do_hypervisor_callback)
 
@@ -1629,6 +1647,7 @@ ENTRY(error_exit)
 	GET_THREAD_INFO(%rcx)
 	testl %eax,%eax
 	jne retint_kernel
+error_exit_user:
 	LOCKDEP_SYS_EXIT_IRQ
 	movl TI_flags(%rcx),%edx
 	movl $_TIF_WORK_MASK,%edi
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index d75c811..f8c7e04 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
 obj-y	+= events/
 obj-y	+= xenbus/
 
diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
new file mode 100644
index 0000000..3275ffe
--- /dev/null
+++ b/drivers/xen/preempt.c
@@ -0,0 +1,16 @@
+/*
+ * Preemptible hypercalls
+ *
+ * Copyright (C) 2014 Citrix Systems R&D ltd.
+ *
+ * This source code is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+#include <xen/xen-ops.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
+#endif
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 569a13b..59ac71c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
 	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
 		return -EFAULT;
 
+	xen_preemptible_hcall_begin();
 	ret = privcmd_call(hypercall.op,
 			   hypercall.arg[0], hypercall.arg[1],
 			   hypercall.arg[2], hypercall.arg[3],
 			   hypercall.arg[4]);
+	xen_preemptible_hcall_end();
 
 	return ret;
 }
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6d8c042 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);
 
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
+
+#ifdef CONFIG_PREEMPT
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+}
+
+#else
+
+DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, true);
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, false);
+}
+
+#endif /* CONFIG_PREEMPT */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 16:59:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 16:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzdX-0006kc-5i; Thu, 13 Feb 2014 16:59:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WDzdV-0006kW-AL
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 16:59:41 +0000
Received: from [85.158.139.211:52836] by server-15.bemta-5.messagelabs.com id
	A8/EB-24395-CF9FCF25; Thu, 13 Feb 2014 16:59:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392310778!3776721!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27157 invoked from network); 13 Feb 2014 16:59:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 16:59:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102300636"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 16:59:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 11:59:37 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WDzdQ-0003UN-Pt;
	Thu, 13 Feb 2014 16:59:36 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 16:59:27 +0000
Message-ID: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv2] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
Changes in v2:
- Use per-cpu variable to mark preemptible regions
- Call preempt_schedule_irq() from the correct place in
  xen_hypervisor_callback

 arch/x86/kernel/entry_32.S |   23 +++++++++++++++++++++++
 arch/x86/kernel/entry_64.S |   19 +++++++++++++++++++
 drivers/xen/Makefile       |    2 +-
 drivers/xen/preempt.c      |   16 ++++++++++++++++
 drivers/xen/privcmd.c      |    2 ++
 include/xen/xen-ops.h      |   27 +++++++++++++++++++++++++++
 6 files changed, 88 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/preempt.c

diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index a2a4f46..b99bc9c 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -998,7 +998,30 @@ ENTRY(xen_hypervisor_callback)
 ENTRY(xen_do_upcall)
 1:	mov %esp, %eax
 	call xen_evtchn_do_upcall
+#ifdef CONFIG_PREEMPT
 	jmp  ret_from_intr
+#else
+	GET_THREAD_INFO(%ebp)
+#ifdef CONFIG_VM86
+	movl PT_EFLAGS(%esp), %eax	# mix EFLAGS and CS
+	movb PT_CS(%esp), %al
+	andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
+#else
+	movl PT_CS(%esp), %eax
+	andl $SEGMENT_RPL_MASK, %eax
+#endif
+	cmpl $USER_RPL, %eax
+	jae resume_userspace		# returning to v8086 or userspace
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	cmpl $0,TI_preempt_count(%ebp)	# non-zero preempt_count ?
+	jnz resume_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz resume_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp resume_kernel
+#endif /* CONFIG_PREEMPT */
 	CFI_ENDPROC
 ENDPROC(xen_hypervisor_callback)
 
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..d8f4fd8 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1404,7 +1404,25 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
+#ifdef CONFIG_PREEMPT
 	jmp  error_exit
+#else
+	movl %ebx, %eax
+	RESTORE_REST
+	DISABLE_INTERRUPTS(CLBR_NONE)
+	TRACE_IRQS_OFF
+	GET_THREAD_INFO(%rcx)
+	testl %eax, %eax
+	je error_exit_user
+	cmpl $0,PER_CPU_VAR(__preempt_count)
+	jnz retint_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz retint_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp retint_kernel
+#endif
 	CFI_ENDPROC
 END(xen_do_hypervisor_callback)
 
@@ -1629,6 +1647,7 @@ ENTRY(error_exit)
 	GET_THREAD_INFO(%rcx)
 	testl %eax,%eax
 	jne retint_kernel
+error_exit_user:
 	LOCKDEP_SYS_EXIT_IRQ
 	movl TI_flags(%rcx),%edx
 	movl $_TIF_WORK_MASK,%edi
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index d75c811..f8c7e04 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
 obj-y	+= events/
 obj-y	+= xenbus/
 
diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
new file mode 100644
index 0000000..3275ffe
--- /dev/null
+++ b/drivers/xen/preempt.c
@@ -0,0 +1,16 @@
+/*
+ * Preemptible hypercalls
+ *
+ * Copyright (C) 2014 Citrix Systems R&D ltd.
+ *
+ * This source code is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+#include <xen/xen-ops.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
+#endif
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 569a13b..59ac71c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
 	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
 		return -EFAULT;
 
+	xen_preemptible_hcall_begin();
 	ret = privcmd_call(hypercall.op,
 			   hypercall.arg[0], hypercall.arg[1],
 			   hypercall.arg[2], hypercall.arg[3],
 			   hypercall.arg[4]);
+	xen_preemptible_hcall_end();
 
 	return ret;
 }
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6d8c042 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);
 
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
+
+#ifdef CONFIG_PREEMPT
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+}
+
+#else
+
+DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, true);
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, false);
+}
+
+#endif /* CONFIG_PREEMPT */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:01:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzel-0006zu-OA; Thu, 13 Feb 2014 17:00:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WDzek-0006zc-Tj
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 17:00:59 +0000
Received: from [85.158.143.35:36462] by server-2.bemta-4.messagelabs.com id
	A0/94-10891-A4AFCF25; Thu, 13 Feb 2014 17:00:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392310857!5492003!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1124 invoked from network); 13 Feb 2014 17:00:57 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 13 Feb 2014 17:00:57 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 13 Feb 2014 17:00:57 +0000
Message-Id: <52FD0857020000780011C32E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 13 Feb 2014 17:00:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Eddie Dong" <eddie.dong@intel.com>, "JunNakajima" <jun.nakajima@intel.com>
References: <1391447001-19100-2-git-send-email-konrad.wilk@oracle.com>
	<52F0B8F30200007800118E81@nat28.tlf.novell.com>
	<20140204144833.GE3853@phenom.dumpdata.com>
	<52F10F2402000078001190B7@nat28.tlf.novell.com>
	<20140204153258.GA6847@phenom.dumpdata.com>
	<52F119780200007800119172@nat28.tlf.novell.com>
	<20140204164258.GB7443@phenom.dumpdata.com>
	<52F24C47.5070100@eu.citrix.com>
	<20140205152649.GA5167@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D543D@SHSMSX104.ccr.corp.intel.com>
	<20140207154128.GE3605@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D8376@SHSMSX104.ccr.corp.intel.com>
	<CAFLBxZbnUuv4VQgi-AVOWxqXb4t8k-4NdkNKsP1dw8Ju7Ja+aw@mail.gmail.com>
	<52FCFAD9020000780011C2AB@nat28.tlf.novell.com>
	<52FCEDFC.7080103@eu.citrix.com>
In-Reply-To: <52FCEDFC.7080103@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	Konrad Rzeszutek Wilk <konrad@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] pvh: Fix regression caused by assumption
 that HVM paths MUST use io-backend device.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 17:08, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/13/2014 04:03 PM, Jan Beulich wrote:
>>>>> On 13.02.14 at 16:38, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
>>> Konrad / Jan: Ping?
>>>
>>> If this fix looks reasonable it would be nice to get this in before RC4.
>> It would have been committed long ago if the VMX maintainers had
>> given their okay. Even privately pinging them didn't seem to help...
> 
> Oh, right, I forgot that Yang isn't a maintainer.
> 
> Well it looks like we have two options:
> * Commit it without a maintainer's ack

Just committed it on the basis that I too can ack this. I would just
have preferred for the more specific maintainers to have given
their input/okay...

Jan

> * Revert the patch that caused the regression to PVH.
> 
> (Or maybe Yang can chase the maintainers internally.)
> 
>   -George




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:09:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzmb-0007YC-QK; Thu, 13 Feb 2014 17:09:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WDzma-0007Y7-Qa
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:09:04 +0000
Received: from [85.158.143.35:46427] by server-1.bemta-4.messagelabs.com id
	C2/39-31661-F2CFCF25; Thu, 13 Feb 2014 17:09:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392311342!5500763!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13431 invoked from network); 13 Feb 2014 17:09:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:09:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100523215"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 17:07:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:07:39 -0500
Message-ID: <1392311258.9138.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Thu, 13 Feb 2014 17:07:38 +0000
In-Reply-To: <1646915994.20140213165604@gmail.com>
References: <1646915994.20140213165604@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 16:56 +0000, Simon Martin wrote:
> I can't understand this as disk activity should be running on cores 0,
> 1  and 2, but never on core 3. The only thing running on core 3 should
> by my paravirtual machine and the hypervisor stub.
> 
> Any idea what's going on?

Is core 3 actually a hyperthread -- IOW, is it sharing processor execution
resources with e.g. core 2, or sharing L2 caches etc.?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:13:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzqj-0007r7-Rh; Thu, 13 Feb 2014 17:13:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>)
	id 1WDzqi-0007qz-FI; Thu, 13 Feb 2014 17:13:20 +0000
Received: from [85.158.137.68:52471] by server-10.bemta-3.messagelabs.com id
	DE/C3-07302-F2DFCF25; Thu, 13 Feb 2014 17:13:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392311597!1708460!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18023 invoked from network); 13 Feb 2014 17:13:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:13:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100525256"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 17:13:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 12:13:16 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDzqe-0001Sp-73;
	Thu, 13 Feb 2014 17:13:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDzqd-0004uh-Vs;
	Thu, 13 Feb 2014 17:13:16 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21244.64811.826482.435714@mariner.uk.xensource.com>
Date: Thu, 13 Feb 2014 17:13:15 +0000
To: Roger Pau Monné <roger.pau@citrix.com>
In-Reply-To: <52FBA331.9040004@citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<52FBA331.9040004@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	mirageos-devel@lists.xenproject.org, lars.kurth@xen.org,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monn=E9 writes ("Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] =
- deadline Feb 14 2014"):
> On 05/02/14 15:09, Ian Campbell wrote:
> > Roger:
> >       * Refactor Linux hotplug scripts
> > =

> >         You did some of this I think?
> =

> No, I've added a block-iscsi script, but I did not refactor the other
> ones. It is still valid, however, I'm not sure it's attractive from a
> GSoC point of view. I'm going to leave it anyway in case someone is
> interested.

Who is actually submitting our GSOC application ?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:13:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WDzqj-0007r7-Rh; Thu, 13 Feb 2014 17:13:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>)
	id 1WDzqi-0007qz-FI; Thu, 13 Feb 2014 17:13:20 +0000
Received: from [85.158.137.68:52471] by server-10.bemta-3.messagelabs.com id
	DE/C3-07302-F2DFCF25; Thu, 13 Feb 2014 17:13:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392311597!1708460!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18023 invoked from network); 13 Feb 2014 17:13:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:13:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100525256"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 17:13:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 12:13:16 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDzqe-0001Sp-73;
	Thu, 13 Feb 2014 17:13:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WDzqd-0004uh-Vs;
	Thu, 13 Feb 2014 17:13:16 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21244.64811.826482.435714@mariner.uk.xensource.com>
Date: Thu, 13 Feb 2014 17:13:15 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
In-Reply-To: <52FBA331.9040004@citrix.com>
References: <52DCE9FA.6010400@xen.org> <52E7B6AF.3050604@xen.org>
	<1391609348.6497.178.camel@kazak.uk.xensource.com>
	<52FBA331.9040004@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	mirageos-devel@lists.xenproject.org, lars.kurth@xen.org,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Roger Pau Monné writes ("Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014"):
> On 05/02/14 15:09, Ian Campbell wrote:
> > Roger:
> >       * Refactor Linux hotplug scripts
> >
> >         You did some of this I think?
>
> No, I've added a block-iscsi script, but I did not refactor the other
> ones. It is still valid, however, I'm not sure it's attractive from a
> GSoC point of view. I'm going to leave it anyway in case someone is
> interested.

Who is actually submitting our GSOC application?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:27:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE04I-0008JJ-Ko; Thu, 13 Feb 2014 17:27:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WE04H-0008JE-AY
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:27:21 +0000
Received: from [85.158.137.68:64427] by server-3.bemta-3.messagelabs.com id
	27/9F-14520-8700DF25; Thu, 13 Feb 2014 17:27:20 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392312438!1689135!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28034 invoked from network); 13 Feb 2014 17:27:19 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 17:27:19 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 13 Feb 2014 17:27:17 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="652839068"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.166])
	by fldsmtpi03.verizon.com with ESMTP; 13 Feb 2014 17:27:16 +0000
Message-ID: <52FD0074.4030302@terremark.com>
Date: Thu, 13 Feb 2014 12:27:16 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <0614092.20140213160100@gmail.com> <52FCEE73.90100@citrix.com>
In-Reply-To: <52FCEE73.90100@citrix.com>
Cc: Simon Martin <furryfuttock@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (no subject)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/14 11:10, Andrew Cooper wrote:
> On 13/02/14 16:01, Simon Martin wrote:
>> Hi all,
>>
>> I  am  now successfully running my little operating system inside Xen.
> Congratulations!
>
>> It  is  fully  preemptive and working a treat, but I have just noticed
>> something  I  wasn't expecting, and will really be a problem for me if
>> I can't work around it.
>>
>> My configuration is as follows:
>>
>> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
> Can you be more specific - this covers 4 generations of Intel CPUs.
>

I think most i3's have Intel's hyper-threading. If this is a 2 core/4 thread chip, I would expect this kind of result. I also know that for the "sandy bridge" version I am using:

Intel® Xeon® E3-1260L processors ("Sandy Bridge" microarchitecture)
(2.4/2.5/3.3 GHz, 4 cores/8 threads)

How many instructions per second a thread gets does depend on the "idleness" of other threads (no longer just the hyperthread's partner).

For example running my test code that does:


0x0000000000400ee0 <workerMain+432>: inc %eax
0x0000000000400ee2 <workerMain+434>: cmp $0xffffffffffffffff,%eax
0x0000000000400ee5 <workerMain+437>: jne 0x400ee0 <workerMain+432>

for almost 4GiI in this loop.

On the setup:


[root@dcs-xen-53 ~]# xl cpupool-list
Name               CPUs   Sched     Active   Domain count
Pool-0               8    credit       y          9
[root@dcs-xen-53 ~]# xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     7   r--    1033.2  any cpu
Domain-0                             0     1     3   -b-     255.9  any cpu
Domain-0                             0     2     2   -b-     451.7  any cpu
Domain-0                             0     3     6   -b-     231.7  any cpu
Domain-0                             0     4     3   -b-     197.0  any cpu
Domain-0                             0     5     0   -b-     115.1  any cpu
Domain-0                             0     6     0   -b-      69.9  any cpu
Domain-0                             0     7     5   -b-     214.9  any cpu
P-1-0                                2     0     0   -b-      73.6  0
P-1-2                                4     0     2   -b-      46.5  2
P-1-3                                5     0     3   -b-      44.6  3
P-1-4                                6     0     4   -b-      38.1  4
P-1-5                                7     0     5   -b-      41.3  5
P-1-6                                8     0     6   -b-      38.6  6
P-1-7                                9     0     7   -b-      40.6  7
P-1-1                               10     0     1   -b-      35.3  1

(They are HVM not PV):



xentop - 17:20:57 Xen 4.3.2-rc1
9 domains: 2 running, 7 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 33544044k total, 30265044k used, 3279000k free CPUs: 8 @ 2400MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 2629 9.4 4194304 12.5 no limit n/a 8 0 0 0 0 0 0 0 0 0 0
P-1-0 --b--- 140 6.3 3145868 9.4 3146752 9.4 1 2 61 8 0 0 0 0 0 0 0
P-1-1 --b--- 101 6.1 3145868 9.4 3146752 9.4 1 2 2 0 0 0 0 0 0 0 0
P-1-2 --b--- 113 6.3 3145868 9.4 3146752 9.4 1 2 96 10 0 0 0 0 0 0 0
P-1-3 --b--- 111 6.3 3145868 9.4 3146752 9.4 1 2 100 12 0 0 0 0 0 0 0
P-1-4 --b--- 61 2.1 3145868 9.4 3146752 9.4 1 2 8 0 0 0 0 0 0 0 0
P-1-5 --b--- 90 4.5 3145868 9.4 3146752 9.4 1 2 5 0 0 0 0 0 0 0 0
P-1-6 --b--- 162 2.7 3145868 9.4 3146752 9.4 1 2 55 20 0 0 0 0 0 0 0
P-1-7 -----r 519 100.0 3145868 9.4 3146752 9.4 1 2 46 21 0 0 0 0 0 0 0


start done
thr 0: 13 Feb 14 12:20:54.201596 13 Feb 14 12:21:22.847245
+28.645649 ~= 28.65 and 4.19 GiI/Sec

And 6&7 at the same time:

xentop - 17:21:58 Xen 4.3.2-rc1
9 domains: 3 running, 6 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 33544044k total, 30265044k used, 3279000k free CPUs: 8 @ 2400MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 2633 6.1 4194304 12.5 no limit n/a 8 0 0 0 0 0 0 0 0 0 0
P-1-0 --b--- 144 6.2 3145868 9.4 3146752 9.4 1 2 63 8 0 0 0 0 0 0 0
P-1-1 --b--- 105 6.2 3145868 9.4 3146752 9.4 1 2 2 0 0 0 0 0 0 0 0
P-1-2 --b--- 117 6.6 3145868 9.4 3146752 9.4 1 2 98 10 0 0 0 0 0 0 0
P-1-3 --b--- 115 6.5 3145868 9.4 3146752 9.4 1 2 103 12 0 0 0 0 0 0 0
P-1-4 --b--- 62 2.1 3145868 9.4 3146752 9.4 1 2 8 0 0 0 0 0 0 0 0
P-1-5 --b--- 93 4.5 3145868 9.4 3146752 9.4 1 2 5 0 0 0 0 0 0 0 0
P-1-6 -----r 168 100.0 3145868 9.4 3146752 9.4 1 2 58 20 0 0 0 0 0 0 0
P-1-7 -----r 550 100.0 3145868 9.4 3146752 9.4 1 2 49 22 0 0 0 0 0 0 0


start done
thr 0: 13 Feb 14 12:21:55.073588 13 Feb 14 12:22:50.905476
+55.831888 ~= 55.83 and 2.15 GiI/Sec

start done
thr 0: 13 Feb 14 12:21:54.847626 13 Feb 14 12:22:49.206362
+54.358736 ~= 54.36 and 2.21 GiI/Sec


(The DomU are all CentOS 5.3, which is why Dom0 is spending so much time running QEMU.)

I would expect the same from PV guests.

-Don Slutz
>> 2.- Xen: 4.4 (just pulled from repository)
>>
>> 3.- Dom0: Debian Wheezy (Kernel 3.2)
>>
>> 4.- 2 cpu pools:
>>
>> # xl cpupool-list
>> Name               CPUs   Sched     Active   Domain count
>> Pool-0               3    credit       y          2
>> pv499                1  arinc653       y          1
>>
>> 5.- 2 domU:
>>
>> # xl list
>> Name                                        ID   Mem VCPUs      State   Time(s)
>> Domain-0                                     0   984     3     r-----      39.7
>> win7x64                                      1  2046     3     -b----     143.0
>> pv499                                        3   128     1     -b----      61.2
>>
>> 6.- All VCPUs are pinned:
>>
>> # xl vcpu-list
>> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
>> Domain-0                             0     0    0   -b-      27.5  0
>> Domain-0                             0     1    1   -b-       7.2  1
>> Domain-0                             0     2    2   r--       5.1  2
>> win7x64                              1     0    0   -b-      71.6  0
>> win7x64                              1     1    1   -b-      37.7  1
>> win7x64                              1     2    2   -b-      34.5  2
>> pv499                                3     0    3   -b-      62.1  3
>>
>> 7.- pv499 is the domU that I am testing. It has no disk or vif devices
>> (yet). I am running a little test program in pv499 and the timing I
>> see varies depending on disk activity.
>>
>> My test program prints the time taken in milliseconds for a
>> million cycles. With no disk activity I see 940 ms, with disk activity
>> I see 1200 ms.
>>
>> I can't understand this as disk activity should be running on cores 0,
>> 1  and 2, but never on core 3. The only thing running on core 3 should
>> be my paravirtual machine and the hypervisor stub.
>>
>> Any idea what's going on?
> Curious.  Let's try ruling some things out.
>
> How are you measuring time in pv499?
>
> What are your C-states and P-states looking like?  If you can, try
> disabling turbo?
>
> ~Andrew
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:27:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE04Y-0008K1-5B; Thu, 13 Feb 2014 17:27:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WE04W-0008Jk-3I
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:27:36 +0000
Received: from [85.158.139.211:37790] by server-10.bemta-5.messagelabs.com id
	9C/04-08578-7800DF25; Thu, 13 Feb 2014 17:27:35 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392312452!3762154!1
X-Originating-IP: [216.32.180.31]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28138 invoked from network); 13 Feb 2014 17:27:33 -0000
Received: from va3ehsobe005.messaging.microsoft.com (HELO
	va3outboundpool.messaging.microsoft.com) (216.32.180.31)
	by server-4.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Feb 2014 17:27:33 -0000
Received: from mail36-va3-R.bigfish.com (10.7.14.254) by
	VA3EHSOBE005.bigfish.com (10.7.40.25) with Microsoft SMTP Server id
	14.1.225.22; Thu, 13 Feb 2014 17:27:31 +0000
Received: from mail36-va3 (localhost [127.0.0.1])	by mail36-va3-R.bigfish.com
	(Postfix) with ESMTP id AF2853C038A;
	Thu, 13 Feb 2014 17:27:31 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -1
X-BigFish: VPS-1(z579eh37d5kzbb2dI98dI9371I1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h2516h2545h255eh1155h)
Received: from mail36-va3 (localhost.localdomain [127.0.0.1]) by mail36-va3
	(MessageSwitch) id 1392312450255351_22976;
	Thu, 13 Feb 2014 17:27:30 +0000 (UTC)
Received: from VA3EHSMHS005.bigfish.com (unknown [10.7.14.234])	by
	mail36-va3.bigfish.com (Postfix) with ESMTP id 380C62C0047;
	Thu, 13 Feb 2014 17:27:30 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by VA3EHSMHS005.bigfish.com
	(10.7.99.15) with Microsoft SMTP Server id 14.16.227.3; Thu, 13 Feb 2014
	17:27:24 +0000
X-WSS-ID: 0N0Y35M-07-4DF-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	29364CAE61C;	Thu, 13 Feb 2014 11:27:21 -0600 (CST)
Received: from SATLEXDAG01.amd.com (10.181.40.3) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 13 Feb 2014 11:27:27 -0600
Received: from [127.0.0.1] (10.180.168.240) by SATLEXDAG01.amd.com
	(10.181.40.3) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 13 Feb 2014 12:27:22 -0500
Message-ID: <52FD0079.8050601@amd.com>
Date: Thu, 13 Feb 2014 11:27:21 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52FC927D020000780011BF0A@nat28.tlf.novell.com>
In-Reply-To: <52FC927D020000780011BF0A@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>     *val = 0;
>>   
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    /* Allow only first 3 MC banks into switch() */
>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>       {
>>       case MSR_IA32_MC0_CTL:
>>           /* stick all 1's to MCi_CTL */
> I'm confused: You now add a comment as if the mask was including
> bit 4, which it doesn't. What am I missing?

Darn. Sorry about that. Will fix..
> Also, please get used to mentioning (commonly at the bottom of the
> commit message, after a --- separator) what changed compared to
> the previous iteration.
>
>
Okay, will do.

-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:27:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE04I-0008JJ-Ko; Thu, 13 Feb 2014 17:27:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WE04H-0008JE-AY
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:27:21 +0000
Received: from [85.158.137.68:64427] by server-3.bemta-3.messagelabs.com id
	27/9F-14520-8700DF25; Thu, 13 Feb 2014 17:27:20 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392312438!1689135!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28034 invoked from network); 13 Feb 2014 17:27:19 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 17:27:19 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe04.verizon.com with ESMTP; 13 Feb 2014 17:27:17 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="652839068"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.166])
	by fldsmtpi03.verizon.com with ESMTP; 13 Feb 2014 17:27:16 +0000
Message-ID: <52FD0074.4030302@terremark.com>
Date: Thu, 13 Feb 2014 12:27:16 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <0614092.20140213160100@gmail.com> <52FCEE73.90100@citrix.com>
In-Reply-To: <52FCEE73.90100@citrix.com>
Cc: Simon Martin <furryfuttock@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] (no subject)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/14 11:10, Andrew Cooper wrote:
> On 13/02/14 16:01, Simon Martin wrote:
>> Hi all,
>>
>> I  am  now successfully running my little operating system inside Xen.
> Congratulations!
>
>> It  is  fully  preemptive and working a treat, but I have just noticed
>> something  I  wasn't expecting, and will really be a problem for me if
>> I can't work around it.
>>
>> My configuration is as follows:
>>
>> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
> Can you be more specific - this covers 4 generations of Intel CPUs.
>

I think most i3's have Intel's hyper-threading. If this is a 2 core/4 =

thread chip, I would expect this kind of result. I also know that for =

the "sandy bridge" version I am using:

Intel=AE Xeon=AE E3-1260L processors (=93Sandy Bridge=94 microarchitecture)
(2.4/2.5/3.3 GHz, 4 cores/8 threads)

How many instruction per second a thread gets does depend on the =

"idleness" of other threads (no longer just the hyperThread's parther). =

For example running my test code that does:


0x0000000000400ee0 <workerMain+432>: inc %eax
0x0000000000400ee2 <workerMain+434>: cmp $0xffffffffffffffff,%eax
0x0000000000400ee5 <workerMain+437>: jne 0x400ee0 <workerMain+432>

for almost 4GiI in this loop.

On the setup:


[root@dcs-xen-53 ~]# xl cpupool-list
Name CPUs Sched Active Domain count
Pool-0 8 credit y 9
[root@dcs-xen-53 ~]# xl vcpu-list
Name ID VCPU CPU State Time(s) CPU Affinity
Domain-0 0 0 7 r-- 1033.2 any cpu
Domain-0 0 1 3 -b- 255.9 any cpu
Domain-0 0 2 2 -b- 451.7 any cpu
Domain-0 0 3 6 -b- 231.7 any cpu
Domain-0 0 4 3 -b- 197.0 any cpu
Domain-0 0 5 0 -b- 115.1 any cpu
Domain-0 0 6 0 -b- 69.9 any cpu
Domain-0 0 7 5 -b- 214.9 any cpu
P-1-0 2 0 0 -b- 73.6 0
P-1-2 4 0 2 -b- 46.5 2
P-1-3 5 0 3 -b- 44.6 3
P-1-4 6 0 4 -b- 38.1 4
P-1-5 7 0 5 -b- 41.3 5
P-1-6 8 0 6 -b- 38.6 6
P-1-7 9 0 7 -b- 40.6 7
P-1-1 10 0 1 -b- 35.3 1

(They are HVM not PV):



xentop - 17:20:57 Xen 4.3.2-rc1
9 domains: 2 running, 7 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 33544044k total, 30265044k used, 3279000k free CPUs: 8 @ 2400MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS =

NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 2629 9.4 4194304 12.5 no limit n/a 8 0 0 0 0 0 0 0 0 0 0
P-1-0 --b--- 140 6.3 3145868 9.4 3146752 9.4 1 2 61 8 0 0 0 0 0 0 0
P-1-1 --b--- 101 6.1 3145868 9.4 3146752 9.4 1 2 2 0 0 0 0 0 0 0 0
P-1-2 --b--- 113 6.3 3145868 9.4 3146752 9.4 1 2 96 10 0 0 0 0 0 0 0
P-1-3 --b--- 111 6.3 3145868 9.4 3146752 9.4 1 2 100 12 0 0 0 0 0 0 0
P-1-4 --b--- 61 2.1 3145868 9.4 3146752 9.4 1 2 8 0 0 0 0 0 0 0 0
P-1-5 --b--- 90 4.5 3145868 9.4 3146752 9.4 1 2 5 0 0 0 0 0 0 0 0
P-1-6 --b--- 162 2.7 3145868 9.4 3146752 9.4 1 2 55 20 0 0 0 0 0 0 0
P-1-7 -----r 519 100.0 3145868 9.4 3146752 9.4 1 2 46 21 0 0 0 0 0 0 0


start done
thr 0: 13 Feb 14 12:20:54.201596 13 Feb 14 12:21:22.847245
+28.645649 ~=3D 28.65 and 4.19 GiI/Sec

And 6&7 at the same time:

xentop - 17:21:58 Xen 4.3.2-rc1
9 domains: 3 running, 6 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 33544044k total, 30265044k used, 3279000k free CPUs: 8 @ 2400MHz
NAME STATE CPU(sec) CPU(%) MEM(k) MEM(%) MAXMEM(k) MAXMEM(%) VCPUS NETS NETTX(k) NETRX(k) VBDS VBD_OO VBD_RD VBD_WR VBD_RSECT VBD_WSECT SSID
Domain-0 -----r 2633 6.1 4194304 12.5 no limit n/a 8 0 0 0 0 0 0 0 0 0 0
P-1-0 --b--- 144 6.2 3145868 9.4 3146752 9.4 1 2 63 8 0 0 0 0 0 0 0
P-1-1 --b--- 105 6.2 3145868 9.4 3146752 9.4 1 2 2 0 0 0 0 0 0 0 0
P-1-2 --b--- 117 6.6 3145868 9.4 3146752 9.4 1 2 98 10 0 0 0 0 0 0 0
P-1-3 --b--- 115 6.5 3145868 9.4 3146752 9.4 1 2 103 12 0 0 0 0 0 0 0
P-1-4 --b--- 62 2.1 3145868 9.4 3146752 9.4 1 2 8 0 0 0 0 0 0 0 0
P-1-5 --b--- 93 4.5 3145868 9.4 3146752 9.4 1 2 5 0 0 0 0 0 0 0 0
P-1-6 -----r 168 100.0 3145868 9.4 3146752 9.4 1 2 58 20 0 0 0 0 0 0 0
P-1-7 -----r 550 100.0 3145868 9.4 3146752 9.4 1 2 49 22 0 0 0 0 0 0 0


start done
thr 0: 13 Feb 14 12:21:55.073588 13 Feb 14 12:22:50.905476
+55.831888 ~= 55.83 and 2.15 GiI/Sec

start done
thr 0: 13 Feb 14 12:21:54.847626 13 Feb 14 12:22:49.206362
+54.358736 ~= 54.36 and 2.21 GiI/Sec


(The DomUs are all CentOS 5.3, which is why Dom0 is spending so much time running QEMU.)

I would expect the same from PV guests.

-Don Slutz
>> 2.- Xen: 4.4 (just pulled from repository)
>>
>> 3.- Dom0: Debian Wheezy (Kernel 3.2)
>>
>> 4.- 2 cpu pools:
>>
>> # xl cpupool-list
>> Name               CPUs   Sched     Active   Domain count
>> Pool-0               3    credit       y          2
>> pv499                1  arinc653       y          1
>>
>> 5.- 2 domU:
>>
>> # xl list
>> Name                                        ID   Mem VCPUs      State   Time(s)
>> Domain-0                                     0   984     3     r-----      39.7
>> win7x64                                      1  2046     3     -b----     143.0
>> pv499                                        3   128     1     -b----      61.2
>>
>> 6.- All VCPUs are pinned:
>>
>> # xl vcpu-list
>> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
>> Domain-0                             0     0    0   -b-      27.5  0
>> Domain-0                             0     1    1   -b-       7.2  1
>> Domain-0                             0     2    2   r--       5.1  2
>> win7x64                              1     0    0   -b-      71.6  0
>> win7x64                              1     1    1   -b-      37.7  1
>> win7x64                              1     2    2   -b-      34.5  2
>> pv499                                3     0    3   -b-      62.1  3
>>
>> 7.- pv499 is the domU that I am testing. It has no disk or vif devices
>> (yet). I am running a little test program in pv499 and the timing I
>> see varies depending on disk activity.
>>
>> My test program prints the time taken in milliseconds for a
>> million cycles. With no disk activity I see 940 ms; with disk activity
>> I see 1200 ms.
>>
>> I can't understand this as disk activity should be running on cores 0,
>> 1 and 2, but never on core 3. The only thing running on core 3 should
>> be my paravirtual machine and the hypervisor stub.
>>
>> Any idea what's going on?
> Curious.  Lets try ruling some things out.
>
> How are you measuring time in pv499?
>
> What is your Cstates and Pstates looking like?  If you can, try
> disabling turbo?
>
> ~Andrew
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:28:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:28:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE05R-0008RG-Iz; Thu, 13 Feb 2014 17:28:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1WE05Q-0008R7-V5
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:28:33 +0000
Received: from [85.158.137.68:36572] by server-15.bemta-3.messagelabs.com id
	6B/FB-19263-0C00DF25; Thu, 13 Feb 2014 17:28:32 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392312511!1719520!1
X-Originating-IP: [74.125.82.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9989 invoked from network); 13 Feb 2014 17:28:31 -0000
Received: from mail-we0-f182.google.com (HELO mail-we0-f182.google.com)
	(74.125.82.182)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:28:31 -0000
Received: by mail-we0-f182.google.com with SMTP id u57so7922509wes.27
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 09:28:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:cc:subject:in-reply-to:references
	:mime-version:content-type:content-transfer-encoding;
	bh=MlJ7/4VmkFPO7g4vaPlPgkslYavFUTGC6PWOPtN5RQ4=;
	b=hRLkktuZO0fmgikd3X2ehTWE7ksW4weHza4Swo+7+FdnQ8+VERV/9+z5kk8n3ob22X
	jobcVtCNbygR5RZiW0KxahwdE4KC1mZFld5RjGk0RvfxMNY+W7zOiG0/JO8AGmRBX8Ie
	piC+lOLp9MHwzTLIVmvXYVUwn/L6k5MXWBvubUTdRY4XkXchjH4rCdCUfjvjp2IZsajY
	EzBxJc0zOtvsCASpU/TAnI8aq0RY7ckoAl79A0q6M3HpAA5cGvekq2MpQ06St/Z+ARxK
	Xyh5TB1d7n8hVKYT/wE07Ybbm1QMjX+NJi+5Jo3jyV7ajc7wFeOenFerk1Ky9w9LqpAy
	8Z8Q==
X-Received: by 10.194.2.70 with SMTP id 6mr2354829wjs.25.1392312511328;
	Thu, 13 Feb 2014 09:28:31 -0800 (PST)
Received: from localhost (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id ev4sm6867751wib.1.2014.02.13.09.28.28
	for <multiple recipients>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 09:28:30 -0800 (PST)
Date: Thu, 13 Feb 2014 17:28:19 +0000
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <133818465.20140213172819@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392311258.9138.3.camel@kazak.uk.xensource.com>
References: <1646915994.20140213165604@gmail.com>
	<1392311258.9138.3.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Thu, 2014-02-13 at 16:56 +0000, Simon Martin wrote:
>> I can't understand this as disk activity should be running on cores 0,
>> 1 and 2, but never on core 3. The only thing running on core 3 should
>> be my paravirtual machine and the hypervisor stub.
>> 
>> Any idea what's going on?

> Is core 3 actually a hyperthread -- IOW is it sharing processor execution
> resources with e.g. core 2? Or sharing L2 caches etc.?

> Ian.

Thanks Ian. Very good point. It is a hyperthread. I will reconfigure
tomorrow, retest and let you know, but that makes sense.

Regards.


-- 
Best regards,
 Simon                            mailto:furryfuttock@gmail.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:33:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0A5-0000Vd-BG; Thu, 13 Feb 2014 17:33:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WE0A4-0000VS-7X
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:33:20 +0000
Received: from [85.158.137.68:29403] by server-7.bemta-3.messagelabs.com id
	69/38-13775-FD10DF25; Thu, 13 Feb 2014 17:33:19 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392312796!1703113!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19214 invoked from network); 13 Feb 2014 17:33:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:33:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102313089"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 17:33:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:33:06 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WE09q-0003vw-19;
	Thu, 13 Feb 2014 17:33:06 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WE09p-0003ka-OU;
	Thu, 13 Feb 2014 17:33:05 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 17:32:57 +0000
Message-ID: <1392312779-14373-2-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392312779-14373-1-git-send-email-tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC 1/3] x86/hvm/rtc: Don't run the vpt timer
	when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the guest has not asked for interrupts, don't run the vpt timer
to generate them.  This is a prerequisite for a patch to simplify how
the vpt interacts with the RTC, and also gets rid of a timer series in
Xen in a case where it's unlikely to be needed.

Instead, calculate the correct value for REG_C.PF whenever REG_C is
read or PIE is enabled.  This allows a guest to poll for the PF bit
while not asking for actual timer interrupts.  Such a guest would no
longer get the benefit of the vpt's timer modes.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c        | 52 ++++++++++++++++++++++++++++++++++---------
 xen/include/asm-x86/hvm/vpt.h |  3 ++-
 2 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cdedefe..cfc1af9 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -94,7 +94,7 @@ bool_t rtc_periodic_interrupt(void *opaque)
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
     }
     if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
         ret = 0;
@@ -103,11 +103,30 @@ bool_t rtc_periodic_interrupt(void *opaque)
     return ret;
 }
 
+/* Check whether the REG_C.PF bit should have been set by a tick since
+ * the last time we looked. This is used to track ticks when REG_B.PIE
+ * is clear; when PIE is set, PF ticks are handled by the VPT callbacks.  */
+static void check_for_pf_ticks(RTCState *s)
+{
+    s_time_t now;
+
+    if ( s->period == 0 || (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+        return;
+
+    now = NOW();
+    if ( (now - s->start_time) / s->period
+         != (s->check_ticks_since - s->start_time) / s->period )
+        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+
+    s->check_ticks_since = now;
+}
+
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
  * RTC_RATE_SELECT settings */
 static void rtc_timer_update(RTCState *s)
 {
     int period_code, period, delta;
+    s_time_t now;
     struct vcpu *v = vrtc_vcpu(s);
 
     ASSERT(spin_is_locked(&s->lock));
@@ -125,24 +144,28 @@ static void rtc_timer_update(RTCState *s)
     case RTC_REF_CLCK_4MHZ:
         if ( period_code != 0 )
         {
-            if ( period_code != s->pt_code )
+            now = NOW();
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            if ( period != s->period )
             {
-                s->pt_code = period_code;
-                period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-                period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+                s->period = period;
                 if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
                     delta = 0;
                 else
-                    delta = period - ((NOW() - s->start_time) % period);
-                create_periodic_time(v, &s->pt, delta, period,
-                                     RTC_IRQ, NULL, s);
+                    delta = period - ((now - s->start_time) % period);
+                if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+                    create_periodic_time(v, &s->pt, delta, period,
+                                         RTC_IRQ, NULL, s);
+                else
+                    s->check_ticks_since = now;
             }
             break;
         }
         /* fall through */
     default:
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
         break;
     }
 }
@@ -484,6 +507,7 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
             if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
+        check_for_pf_ticks(s);
         s->hw.cmos_data[RTC_REG_B] = data;
         /*
          * If the interrupt is already set when the interrupt becomes
@@ -492,6 +516,11 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
         rtc_update_irq(s);
         if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
             rtc_timer_update(s);
+        else if ( !(data & RTC_PIE) && (orig & RTC_PIE) )
+        {
+            destroy_periodic_time(&s->pt);
+            rtc_timer_update(s);
+        }
         if ( (data ^ orig) & RTC_SET )
             check_update_timer(s);
         if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
@@ -645,6 +674,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
             ret |= RTC_UIP;
         break;
     case RTC_REG_C:
+        check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
         if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
@@ -652,7 +682,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
-        rtc_timer_update(s);
+        s->pt_dead_ticks = 0;
         break;
     default:
         ret = s->hw.cmos_data[s->hw.cmos_index];
@@ -748,7 +778,7 @@ void rtc_reset(struct domain *d)
     RTCState *s = domain_vrtc(d);
 
     destroy_periodic_time(&s->pt);
-    s->pt_code = 0;
+    s->period = 0;
     s->pt.source = PTSRC_isa;
 }
 
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 87c3a66..9f48635 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -113,7 +113,8 @@ typedef struct RTCState {
     /* periodic timer */
     struct periodic_time pt;
     s_time_t start_time;
-    int pt_code;
+    s_time_t check_ticks_since;
+    int period;
     uint8_t pt_dead_ticks;
     uint32_t use_timer;
     spinlock_t lock;
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:33:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0A7-0000Vu-1y; Thu, 13 Feb 2014 17:33:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WE0A4-0000VU-TU
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:33:21 +0000
Received: from [85.158.137.68:9667] by server-3.bemta-3.messagelabs.com id
	7D/16-14520-0E10DF25; Thu, 13 Feb 2014 17:33:20 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392312796!1703113!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19329 invoked from network); 13 Feb 2014 17:33:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:33:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102313105"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 17:33:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:33:07 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WE09q-0003w2-N7;
	Thu, 13 Feb 2014 17:33:06 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WE09q-0003kk-FW;
	Thu, 13 Feb 2014 17:33:06 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 17:32:59 +0000
Message-ID: <1392312779-14373-4-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392312779-14373-1-git-send-email-tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC 3/3] x86/hvm/rtc: Always deassert the IRQ
	line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Even in no-ack mode, there's no reason to leave the line asserted
after an explicit ack of the interrupt.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 18a4fe8..b592547 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -674,7 +674,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
-        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
+        if ( ret & RTC_IRQF )
             hvm_isa_irq_deassert(d, RTC_IRQ);
         rtc_update_irq(s);
         check_update_timer(s);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

         if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
@@ -652,7 +682,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
-        rtc_timer_update(s);
+        s->pt_dead_ticks = 0;
         break;
     default:
         ret = s->hw.cmos_data[s->hw.cmos_index];
@@ -748,7 +778,7 @@ void rtc_reset(struct domain *d)
     RTCState *s = domain_vrtc(d);
 
     destroy_periodic_time(&s->pt);
-    s->pt_code = 0;
+    s->period = 0;
     s->pt.source = PTSRC_isa;
 }
 
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 87c3a66..9f48635 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -113,7 +113,8 @@ typedef struct RTCState {
     /* periodic timer */
     struct periodic_time pt;
     s_time_t start_time;
-    int pt_code;
+    s_time_t check_ticks_since;
+    int period;
     uint8_t pt_dead_ticks;
     uint32_t use_timer;
     spinlock_t lock;
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:33:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0A7-0000WB-HC; Thu, 13 Feb 2014 17:33:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WE0A5-0000Vg-Th
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:33:22 +0000
Received: from [85.158.137.68:18365] by server-14.bemta-3.messagelabs.com id
	D6/0F-08196-1E10DF25; Thu, 13 Feb 2014 17:33:21 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392312796!1703113!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19422 invoked from network); 13 Feb 2014 17:33:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:33:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102313111"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 17:33:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:33:06 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WE09q-0003vz-EW;
	Thu, 13 Feb 2014 17:33:06 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WE09q-0003kf-6h;
	Thu, 13 Feb 2014 17:33:06 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 17:32:58 +0000
Message-ID: <1392312779-14373-3-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392312779-14373-1-git-send-email-tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC 2/3] x86/hvm/rtc: Inject RTC periodic
	interrupts from the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Let the vpt code drive the RTC's timer interrupts directly, as it does
for other periodic time sources, and fix up the register state in a
vpt callback when the interrupt is injected.

This fixes a hang seen on Windows 2003 in no-missed-ticks mode: while
a tick was pending, the early callback from the VPT code would set
REG_C.PF again on every VMENTER, while the guest sat in its interrupt
handler reading REG_C in a loop, waiting to see it come back clear.

One drawback is that a guest that attempts to suppress RTC periodic
interrupts by failing to read REG_C will receive up to 10 spurious
interrupts, even in 'strict' mode.  However:
 - since all previous RTC models have had this property (including
   the current one, since 'no-ack' mode is hard-coded on) we're
   pretty sure that all guests can handle this; and
 - we're already playing some other interesting games with this
   interrupt in the vpt code.

One other corner case: a guest that enables the PF timer interrupt,
masks the interrupt in the APIC and then polls REG_C looking for PF
will not see PF getting set.  The more likely case of enabling the
timers and masking the interrupt with REG_B.PIE is already handled
correctly.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c        | 25 +++++++++++--------------
 xen/arch/x86/hvm/vpt.c        | 40 ----------------------------------------
 xen/include/asm-x86/hvm/vpt.h |  1 -
 3 files changed, 11 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cfc1af9..18a4fe8 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -78,29 +78,26 @@ static void rtc_update_irq(RTCState *s)
     hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
 }
 
-bool_t rtc_periodic_interrupt(void *opaque)
+/* Called by the VPT code after it's injected a PF interrupt for us.
+ * Fix up the register state to reflect what happened. */
+static void rtc_pf_callback(struct vcpu *v, void *opaque)
 {
     RTCState *s = opaque;
-    bool_t ret;
 
     spin_lock(&s->lock);
-    ret = rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF);
-    if ( rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_PF) )
-    {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
-        rtc_update_irq(s);
-    }
-    else if ( ++(s->pt_dead_ticks) >= 10 )
+
+    if ( !rtc_mode_is(s, no_ack)
+         && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF)
+         && ++(s->pt_dead_ticks) >= 10 )
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
         s->period = 0;
     }
-    if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        ret = 0;
-    spin_unlock(&s->lock);
 
-    return ret;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF|RTC_IRQF;
+
+    spin_unlock(&s->lock);
 }
 
 /* Check whether the REG_C.PF bit should have been set by a tick since
@@ -156,7 +153,7 @@ static void rtc_timer_update(RTCState *s)
                     delta = period - ((now - s->start_time) % period);
                 if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
                     create_periodic_time(v, &s->pt, delta, period,
-                                         RTC_IRQ, NULL, s);
+                                         RTC_IRQ, rtc_pf_callback, s);
                 else
                     s->check_ticks_since = now;
             }
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 1961bda..f7af688 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -231,12 +231,9 @@ int pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt;
     uint64_t max_lag;
     int irq, is_lapic;
-    void *pt_priv;
 
- rescan:
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
- rescan_locked:
     earliest_pt = NULL;
     max_lag = -1ULL;
     list_for_each_entry_safe ( pt, temp, head, list )
@@ -270,48 +267,11 @@ int pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
-    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    else if ( irq == RTC_IRQ && pt_priv )
-    {
-        if ( !rtc_periodic_interrupt(pt_priv) )
-            irq = -1;
-
-        pt_lock(earliest_pt);
-
-        if ( irq < 0 && earliest_pt->pending_intr_nr )
-        {
-            /*
-             * RTC periodic timer runs without the corresponding interrupt
-             * being enabled - need to mimic enough of pt_intr_post() to keep
-             * things going.
-             */
-            earliest_pt->pending_intr_nr = 0;
-            earliest_pt->irq_issued = 0;
-            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
-        }
-        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
-        {
-            if ( earliest_pt->on_list )
-            {
-                /* suspend timer emulation */
-                list_del(&earliest_pt->list);
-                earliest_pt->on_list = 0;
-            }
-            irq = -1;
-        }
-
-        /* Avoid dropping the lock if we can. */
-        if ( irq < 0 && v == earliest_pt->vcpu )
-            goto rescan_locked;
-        pt_unlock(earliest_pt);
-        if ( irq < 0 )
-            goto rescan;
-    }
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 9f48635..7d62653 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -184,7 +184,6 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
-bool_t rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:33:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0A7-0000Vu-1y; Thu, 13 Feb 2014 17:33:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WE0A4-0000VU-TU
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:33:21 +0000
Received: from [85.158.137.68:9667] by server-3.bemta-3.messagelabs.com id
	7D/16-14520-0E10DF25; Thu, 13 Feb 2014 17:33:20 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392312796!1703113!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19329 invoked from network); 13 Feb 2014 17:33:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:33:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="102313105"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 17:33:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:33:07 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WE09q-0003w2-N7;
	Thu, 13 Feb 2014 17:33:06 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WE09q-0003kk-FW;
	Thu, 13 Feb 2014 17:33:06 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 17:32:59 +0000
Message-ID: <1392312779-14373-4-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1392312779-14373-1-git-send-email-tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC 3/3] x86/hvm/rtc: Always deassert the IRQ
	line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Even in no-ack mode, there's no reason to leave the line asserted
after an explicit ack of the interrupt.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 18a4fe8..b592547 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -674,7 +674,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
-        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
+        if ( ret & RTC_IRQF )
             hvm_isa_irq_deassert(d, RTC_IRQ);
         rtc_update_irq(s);
         check_update_timer(s);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0As-0000hH-5r; Thu, 13 Feb 2014 17:34:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WE0Aq-0000gc-Dv
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:34:08 +0000
Received: from [193.109.254.147:44339] by server-9.bemta-14.messagelabs.com id
	D9/11-24895-F020DF25; Thu, 13 Feb 2014 17:34:07 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392312845!4187790!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11730 invoked from network); 13 Feb 2014 17:34:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:34:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100532502"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 17:33:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:33:06 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WE09p-0003vt-Mv;
	Thu, 13 Feb 2014 17:33:05 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WE09p-0003kX-Di;
	Thu, 13 Feb 2014 17:33:05 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 17:32:56 +0000
Message-ID: <1392312779-14373-1-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back into
	the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This series implements the most recent idea I was proposing about
reworking the RTC PF interrupt injection.

Patch 1 switches the !PIE case over to calculating the right answer
for REG_C.PF on demand rather than running the timers.
Patch 2 switches back to the old model of having the vpt code control
the timer interrupt injection; this is the fix for the w2k3 hang.
Patch 3 is just a minor cleanup, and not particularly necessary.

N.B. In its current state it DOES NOT WORK.  I got distracted by
other things today and didn't get a chance to finish working on it,
but I wanted to send it out for feedback on the general approach.
If it seems broadly acceptable then either I can pick it up again next
week or maybe Andrew can look at fixing it.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

other things today and didn't get a chance to finish working on it,
but I wanted to send it out for feedback on the general approach.
If it seems broadly acceptable then either I can pick it up again next
week or maybe Andrew can look at fixing it.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:37:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0Dn-00010V-Qa; Thu, 13 Feb 2014 17:37:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WE0Dl-00010K-U8
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:37:10 +0000
Received: from [193.109.254.147:51165] by server-15.bemta-14.messagelabs.com
	id A6/D3-10839-5C20DF25; Thu, 13 Feb 2014 17:37:09 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392313018!4182705!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjc5MDIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29268 invoked from network); 13 Feb 2014 17:36:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:36:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; 
	d="asc'?scan'208";a="102314686"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 17:36:57 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:36:56 -0500
Message-ID: <1392313015.32038.112.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Thu, 13 Feb 2014 18:36:55 +0100
In-Reply-To: <1646915994.20140213165604@gmail.com>
References: <1646915994.20140213165604@gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Nate Studer <nate.studer@dornerworks.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8350284421458082619=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8350284421458082619==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-nI6oTAkuTpuk+6tdgSr1"

--=-nI6oTAkuTpuk+6tdgSr1
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2014-02-13 at 16:56 +0000, Simon Martin wrote:
> Hi all,
>
Hey Simon!

First of all, as you're using ARINC, I'm adding Nate, as he's the
ARINC653 scheduler maintainer; let's see if he can help us! ;-P

> I am now successfully running my little operating system inside Xen.
> It is fully preemptive and working a treat,
>
Aha, this is great! :-)

> but I have just noticed
> something I wasn't expecting, and it will really be a problem for me if
> I can't work around it.
>
Well, let's see...

> My configuration is as follows:
>
> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
>
> 2.- Xen: 4.4 (just pulled from repository)
>
> 3.- Dom0: Debian Wheezy (Kernel 3.2)
>
> 4.- 2 cpu pools:
>
> # xl cpupool-list
> Name               CPUs   Sched     Active   Domain count
> Pool-0               3    credit       y          2
> pv499                1  arinc653       y          1
>
Ok, I think I figured this out from the other information, but it would
be useful to know which pcpus are assigned to which cpupool. I think it's
`xl cpupool-list -c'.

> 5.- 2 domU:
>
> # xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0   984     3     r-----      39.7
> win7x64                                      1  2046     3     -b----     143.0
> pv499                                        3   128     1     -b----      61.2
>
> 6.- All VCPUs are pinned:
>=20
Right, although, if you use cpupools, and if I've understood what you're
up to, you really shouldn't need pinning. I mean, the isolation
between the RT-ish domain and the rest of the world should already be in
place thanks to cpupools.

Actually, pinning can help, but maybe not in the exact way you're using
it...

> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   -b-      27.5  0
> Domain-0                             0     1    1   -b-       7.2  1
> Domain-0                             0     2    2   r--       5.1  2
> win7x64                              1     0    0   -b-      71.6  0
> win7x64                              1     1    1   -b-      37.7  1
> win7x64                              1     2    2   -b-      34.5  2
> pv499                                3     0    3   -b-      62.1  3
>
...as it can be seen here.

So, if you ask me, you're restricting things too much in Pool-0, where
dom0 and the Windows VM run. In fact, is there a specific reason why
each of their vcpus needs to be statically pinned to a single
pcpu? If not, I'd leave them a little more freedom.

What I'd try is:
 1. all dom0 and win7 vcpus free, so no pinning at all in Pool-0; or
 2. pinning as follows:
     * all vcpus of win7 --> pcpus 1,2
     * all vcpus of dom0 --> no pinning
   This way, win7 could suffer sometimes,
   if all 3 of its vcpus get busy, but I think that is acceptable, at
   least up to a certain extent. Is that the case?
   At the same time, you
   are making sure dom0 always has a chance to run, as pcpu#0 would be
   its exclusive playground, in case someone, including your pv499
   domain, needs its services.
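The second layout above can be sketched as a plan of `xl` invocations. This is only a sketch: it builds the commands, using the domain names and vcpu counts from the `xl list` output earlier in the thread, without running them, since they only make sense on the dom0 in question.

```shell
# Sketch only: build (without running) the xl commands for layout 2
# above. Domain names and the 3-vcpu counts are taken from the xl
# output earlier in the thread; review the plan, then run it on dom0.
plan=""
for v in 0 1 2; do
    # every win7x64 vcpu goes to pcpus 1,2 -> pcpu 0 stays win7-free
    plan="${plan}xl vcpu-pin win7x64 ${v} 1,2
"
done
for v in 0 1 2; do
    # 'all' clears the static pinning on dom0's vcpus
    plan="${plan}xl vcpu-pin Domain-0 ${v} all
"
done
printf '%s' "$plan"
```

The same affinity can also be made persistent with a `cpus=` line in the domain's xl config file, rather than re-pinning after each boot.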

> 7.- pv499 is the domU that I am testing. It has no disk or vif devices
> (yet). I am running a little test program in pv499 and the timing I
> see varies depending on disk activity.
>
> My test program prints the time taken in milliseconds for a
> million cycles. With no disk activity I see 940 ms, with disk activity
> I see 1200 ms.
>
Wow, it's very hard to tell. What I first thought is that your domain
may need something from dom0, and the suboptimal (IMHO) pinning
configuration you're using could be slowing that down. The bug in this
theory is that dom0 services are mostly PV drivers for disk and network,
which you say you don't have...

I still think your pinning setup is unnecessarily restrictive, so I'd give
relaxing it a try, but it's probably not the root cause of your issue.

> I can't understand this as disk activity should be running on cores 0,
> 1 and 2, but never on core 3. The only thing running on core 3 should
> be my paravirtual machine and the hypervisor stub.
>
Right. Are you familiar with tracing what happens inside Xen with
xentrace and, perhaps, xenalyze? They take a bit of time to get used to
but, once you've mastered them, they are a good means of extracting
really useful info!

There is a blog post about that here:
http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
and it should have most of the info, or the links to where to find it.

It's going to be a lot of data, but if you trace one run without disk IO
and one run with disk IO, it should be doable to compare the
differences, for instance in terms of when the vcpus of your domain are
active and when they get scheduled; from that we can hopefully
narrow down the real root cause a bit more.

Let us know if you think you need help with that.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-nI6oTAkuTpuk+6tdgSr1
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL9ArcACgkQk4XaBE3IOsS2DwCeLkeT2cw5py0UiVMZsMJ6SiqA
NlIAnRLvzWRdHyaTEWeVLiIRybr1fU5i
=oHmP
-----END PGP SIGNATURE-----

--=-nI6oTAkuTpuk+6tdgSr1--


--===============8350284421458082619==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8350284421458082619==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 17:39:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0G4-0001IY-DV; Thu, 13 Feb 2014 17:39:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WE0G2-0001I0-Qp
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:39:31 +0000
Received: from [193.109.254.147:26693] by server-8.bemta-14.messagelabs.com id
	0A/F6-18529-2530DF25; Thu, 13 Feb 2014 17:39:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392313168!4187272!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10129 invoked from network); 13 Feb 2014 17:39:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:39:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; d="scan'208";a="100535169"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 17:39:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:39:27 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WE0Fz-00041a-0c;
	Thu, 13 Feb 2014 17:39:27 +0000
Message-ID: <52FD034E.2070600@citrix.com>
Date: Thu, 13 Feb 2014 17:39:26 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
In-Reply-To: <1392312779-14373-1-git-send-email-tim@xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: george.dunlap@eu.citrix.com, keir@xen.org, Tim Deegan <tim@xen.org>,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/14 17:32, Tim Deegan wrote:
> Hi,
>
> This series implements the most recent idea I was proposing about
> reworking the RTC PF interrupt injection.
>
> Patch 1 switches handling the !PIE case to calculate the right answer
> for REG_C.PF on demand rather than running the timers.
> Patch 2 switches back to the old model of having the vpt code control
> the timer interrupt injection; this is the fix for the w2k3 hang.
> Patch 3 is just a minor cleanup, and not particularly necessary.
>
> N.B. In its current state it DOES NOT WORK.  I got distracted by
> other things today and didn't get a chance to finish working on it,
> but I wanted to send it out for feedback on the general approach.
> If it seems broadly acceptable then either I can pick it up again next
> week or maybe Andrew can look at fixing it.
>
> Cheers,
>
> Tim.
>

I should have time to look at the series tomorrow.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 17:39:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 17:39:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0GG-0001KM-QO; Thu, 13 Feb 2014 17:39:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WE0GF-0001K5-D3
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 17:39:43 +0000
Received: from [193.109.254.147:62194] by server-14.bemta-14.messagelabs.com
	id C4/86-29228-E530DF25; Thu, 13 Feb 2014 17:39:42 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392313180!4151313!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15571 invoked from network); 13 Feb 2014 17:39:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 17:39:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,839,1384300800"; 
	d="asc'?scan'208";a="102315786"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 17:39:40 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 12:39:40 -0500
Message-ID: <1392313178.32038.115.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Thu, 13 Feb 2014 18:39:38 +0100
In-Reply-To: <133818465.20140213172819@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392311258.9138.3.camel@kazak.uk.xensource.com>
	<133818465.20140213172819@gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7774622889285708163=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7774622889285708163==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-KBJ4oipN08c8CtvBuNF4"

--=-KBJ4oipN08c8CtvBuNF4
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-13 at 17:28 +0000, Simon Martin wrote:
> > On Thu, 2014-02-13 at 16:56 +0000, Simon Martin wrote:

> > Is core 3 actually a hyperthread -- IOW is it sharing processor execution
> > resources with e.g. core 2? Or shared L2 caches etc.?
>=20
> > Ian.
>=20
> Thanks Ian. Very good point. It is a hyperthread. I will reconfigure
> tomorrow, retest and let you know, but that makes sense.
>=20
It does indeed make sense (much more than what I was saying,
probably :-D)...

Make sure you leave the sibling of the thread where you pin your domain's
vcpu completely free (by pinning the other domains' vcpus everywhere but
there), and yes, let us know if that changes the results.
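That pinning scheme can be sketched as a tiny helper. This is an illustration only: it assumes adjacent threads are hyperthread siblings, i.e. pairs (0,1), (2,3), ..., which you should confirm against the real topology (e.g. with `xl info -n`) before using:

```python
def others_affinity(ncpus, reserved):
    """pCPUs the *other* domains may use, so that both `reserved` and its
    hyperthread sibling stay completely free for the pinned vcpu.

    Assumes adjacent-thread sibling pairing ((0,1), (2,3), ...) -- an
    assumption, not something read from the actual machine.
    """
    sibling = reserved ^ 1  # flip the low bit to get the adjacent thread
    return [c for c in range(ncpus) if c not in (reserved, sibling)]
```

On a 4-thread box with the vcpu pinned to pCPU 3, this leaves pCPUs 0-1 for everyone else, which with xl would be something like `xl vcpu-pin <dom> all 0-1` for each of the other domains (domain names hypothetical).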

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-KBJ4oipN08c8CtvBuNF4
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL9A1oACgkQk4XaBE3IOsSTegCfe18g8XowNpn70Je8e8HLnHd9
OdAAn2pl0AkKXxHWK80gRPZkn6rVJlY0
=dqrb
-----END PGP SIGNATURE-----

--=-KBJ4oipN08c8CtvBuNF4--


--===============7774622889285708163==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7774622889285708163==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 18:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 18:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0p2-0002t7-9F; Thu, 13 Feb 2014 18:15:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WE0p0-0002t2-Gt
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 18:15:38 +0000
Received: from [85.158.137.68:19976] by server-13.bemta-3.messagelabs.com id
	91/05-26923-9CB0DF25; Thu, 13 Feb 2014 18:15:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392315334!459517!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 558 invoked from network); 13 Feb 2014 18:15:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 18:15:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,840,1384300800"; d="scan'208";a="102329769"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 18:15:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 13:15:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WE0ov-0001lR-Li;
	Thu, 13 Feb 2014 18:15:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WE0ov-0002TL-GG;
	Thu, 13 Feb 2014 18:15:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24864-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 18:15:33 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24864: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24864 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24864/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                4675348e78fab420e70f9144b320d9c063c7cee8
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7031 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2377488 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 18:24:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 18:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE0xT-0003HY-R6; Thu, 13 Feb 2014 18:24:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WE0xR-0003HT-T8
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 18:24:22 +0000
Received: from [193.109.254.147:25538] by server-3.bemta-14.messagelabs.com id
	40/F8-00432-5DD0DF25; Thu, 13 Feb 2014 18:24:21 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392315859!487804!1
X-Originating-IP: [65.55.88.11]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2762 invoked from network); 13 Feb 2014 18:24:20 -0000
Received: from tx2ehsobe001.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.11)
	by server-12.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	13 Feb 2014 18:24:20 -0000
Received: from mail230-tx2-R.bigfish.com (10.9.14.228) by
	TX2EHSOBE003.bigfish.com (10.9.40.23) with Microsoft SMTP Server id
	14.1.225.22; Thu, 13 Feb 2014 18:24:18 +0000
Received: from mail230-tx2 (localhost [127.0.0.1])	by
	mail230-tx2-R.bigfish.com (Postfix) with ESMTP id AC037A002C3;
	Thu, 13 Feb 2014 18:24:18 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(z579eh37d5kzbb2dI98dI9371I1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24d7h2516h2545h255eh1155h)
Received: from mail230-tx2 (localhost.localdomain [127.0.0.1]) by mail230-tx2
	(MessageSwitch) id 1392315856556691_32539;
	Thu, 13 Feb 2014 18:24:16 +0000 (UTC)
Received: from TX2EHSMHS026.bigfish.com (unknown [10.9.14.229])	by
	mail230-tx2.bigfish.com (Postfix) with ESMTP id 829DE600074;
	Thu, 13 Feb 2014 18:24:16 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by TX2EHSMHS026.bigfish.com
	(10.9.99.126) with Microsoft SMTP Server id 14.16.227.3;
	Thu, 13 Feb 2014 18:24:16 +0000
X-WSS-ID: 0N0Y5SD-07-8K6-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	297FACAE61A;	Thu, 13 Feb 2014 12:24:13 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 13 Feb 2014 12:24:18 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag04.amd.com
	(10.181.40.9) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Thu, 13 Feb 2014 13:24:13 -0500
Message-ID: <52FD0DCC.1030904@amd.com>
Date: Thu, 13 Feb 2014 12:24:12 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52FC927D020000780011BF0A@nat28.tlf.novell.com>
	<52FD0079.8050601@amd.com>
In-Reply-To: <52FD0079.8050601@amd.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/13/2014 11:27 AM, Aravind Gopalakrishnan wrote:
> On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>>     *val = 0;
>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> +    /* Allow only first 3 MC banks into switch() */
>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>       {
>>>       case MSR_IA32_MC0_CTL:
>>>           /* stick all 1's to MCi_CTL */
>> I'm confused: You now add a comment as if the mask was including
>> bit 4, which it doesn't. What am I missing?
>
> Darn. Sorry about that. Will fix..

Jan,

Do let me know if the following wording is fine:

/*
  * Apply mask to allow bits[0:1] (necessary to uniquely identify MC0)
  * MC1 is handled by virtue of 'bank' value.
  */
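
What the mask computes can be sanity-checked in isolation. The sketch below assumes MSR_IA32_MC0_CTL == 0x400 and the usual four-MSRs-per-bank layout (CTL, STATUS, ADDR, MISC); it only demonstrates the masking arithmetic, not the surrounding Xen code:

```python
MSR_IA32_MC0_CTL = 0x400  # assumed value (Intel SDM / Xen msr-index.h)

def classify(msr):
    # -MSR_IA32_MC0_CTL | 3 keeps bits [0:1] plus everything at/above
    # bit 10, so every bank's CTL/STATUS/... MSRs fold onto the MC0
    # cases of the switch; the bank itself is recovered from the offset.
    key = msr & (-MSR_IA32_MC0_CTL | 3)
    bank = (msr - MSR_IA32_MC0_CTL) >> 2
    return key, bank
```

E.g. MC1_CTL (0x404) yields key == MSR_IA32_MC0_CTL with bank == 1, which is the "MC1 is handled by virtue of 'bank' value" part of the comment.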

If not, I'm open to suggestions :)

Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Thu, 13 Feb 2014 13:24:13 -0500
Message-ID: <52FD0DCC.1030904@amd.com>
Date: Thu, 13 Feb 2014 12:24:12 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52FC927D020000780011BF0A@nat28.tlf.novell.com>
	<52FD0079.8050601@amd.com>
In-Reply-To: <52FD0079.8050601@amd.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com, chegger@amazon.de,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/13/2014 11:27 AM, Aravind Gopalakrishnan wrote:
> On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>>     *val = 0;
>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> +    /* Allow only first 3 MC banks into switch() */
>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>       {
>>>       case MSR_IA32_MC0_CTL:
>>>           /* stick all 1's to MCi_CTL */
>> I'm confused: You now add a comment as if the mask was including
>> bit 4, which it doesn't. What am I missing?
>
> Darn. Sorry about that. Will fix..

Jan,

Do let me know if the following wording is fine:

/*
  * Apply mask to allow bits[0:1] (necessary to uniquely identify MC0)
  * MC1 is handled by virtue of 'bank' value.
  */

If not, I'm open to suggestions:)

Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 18:31:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 18:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE14S-0003hO-K1; Thu, 13 Feb 2014 18:31:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <congxumail@gmail.com>) id 1WE14Q-0003hJ-LA
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 18:31:34 +0000
Received: from [193.109.254.147:51931] by server-13.bemta-14.messagelabs.com
	id 15/E7-01226-68F0DF25; Thu, 13 Feb 2014 18:31:34 +0000
X-Env-Sender: congxumail@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392316291!4182880!1
X-Originating-IP: [209.85.192.177]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21103 invoked from network); 13 Feb 2014 18:31:33 -0000
Received: from mail-pd0-f177.google.com (HELO mail-pd0-f177.google.com)
	(209.85.192.177)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 18:31:33 -0000
Received: by mail-pd0-f177.google.com with SMTP id x10so10784245pdj.22
	for <xen-devel@lists.xensource.com>;
	Thu, 13 Feb 2014 10:31:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=EXyYPwX+DSkPqVd51jlED5AqshXMn5yJ9bdid0PVwSE=;
	b=VuOC3j/jI4j8S5RuxcjQeVhLRn6D5+q88uCHLGm/2OeS5+uwPut8P+hSQSINiCFADK
	6IbIgsgR2KT4XP4Ln2IWFHzkHNGKMpBEvIkuTw2fcxuoWW+oE3GvueZNBPgLa7R+CmJi
	HdrM5/T/C4OpPQnH1t7DTImqkEbZhosJu0V7lMecj/BOzrmnFFtFWffY8EkNnqL/kEVq
	2O2vxNlW1ZvBMmDo/X+kGrufWBtrEPV6YExI61nORLNf/tcBh5c3rCgWvdSBoqQxMs/0
	BcjAAkOAi92hyUwQMVTihpl/f4sam8qmJpdcGd6PDrrexFMGlJInjbkIwPBnqLQ5EyI7
	dUaQ==
MIME-Version: 1.0
X-Received: by 10.66.162.74 with SMTP id xy10mr3625289pab.4.1392316291014;
	Thu, 13 Feb 2014 10:31:31 -0800 (PST)
Received: by 10.68.190.97 with HTTP; Thu, 13 Feb 2014 10:31:30 -0800 (PST)
Date: Thu, 13 Feb 2014 13:31:30 -0500
Message-ID: <CA+hYhXsG0Tx43GuYPVps5tRX-NVYcBL+AaqmxneGT-8BoC0Z_Q@mail.gmail.com>
From: xu cong <congxumail@gmail.com>
To: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: [Xen-devel] What's the state of vCPU scheduled out by hypervisor
	from the view of guest?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4144920006310972212=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4144920006310972212==
Content-Type: multipart/alternative; boundary=047d7b86e832f0e31f04f24de5ad

--047d7b86e832f0e31f04f24de5ad
Content-Type: text/plain; charset=ISO-8859-1

Hi All,

In a virtual SMP guest, when one vCPU is scheduled out, what's the view of
guest? Is it similar to removing one cpu via hotplug temporally? How about
the timer on the vCPU scheduled out? For example, assume the guest OS is
linux, its process scheduler (e.g. CFS) need update the vruntime and other
information for each process. When one vCPU gets scheduled again after
waiting in run queue, how does the process scheduler update the information
for the processes running on it? Thanks.

Regards,
Cong

--047d7b86e832f0e31f04f24de5ad
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div><div>Hi All,<br><br></div>In a virtual SMP guest, whe=
n one vCPU is scheduled out, what&#39;s the view of guest? Is it similar to=
 removing one cpu via hotplug temporally? How about the timer on the vCPU s=
cheduled out? For example, assume the guest OS is linux, its process schedu=
ler (e.g. CFS) need update the vruntime and other information for each proc=
ess. When one vCPU gets scheduled again after waiting in run queue, how doe=
s the process scheduler update the information for the processes running on=
 it? Thanks.<br>
<br>Regards,<br></div>Cong<br></div>

--047d7b86e832f0e31f04f24de5ad--


--===============4144920006310972212==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4144920006310972212==--


From xen-devel-bounces@lists.xen.org Thu Feb 13 19:41:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 19:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE29t-0005UC-Mv; Thu, 13 Feb 2014 19:41:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WE29r-0005Tq-ND; Thu, 13 Feb 2014 19:41:15 +0000
Received: from [85.158.137.68:31752] by server-6.bemta-3.messagelabs.com id
	57/90-09180-ADF1DF25; Thu, 13 Feb 2014 19:41:14 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392320472!417220!1
X-Originating-IP: [209.85.216.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27085 invoked from network); 13 Feb 2014 19:41:14 -0000
Received: from mail-qa0-f47.google.com (HELO mail-qa0-f47.google.com)
	(209.85.216.47)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 19:41:14 -0000
Received: by mail-qa0-f47.google.com with SMTP id j5so16564661qaq.6
	for <multiple recipients>; Thu, 13 Feb 2014 11:41:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=lYAYpFKhl6joCCiGXpFeqvMXEmxZf/t9Vro3uQGMLZ4=;
	b=vXCzvU1pNOUDLc4I0sh9SDZas2q1FTiApq78sCD1HLxU6nbb29eZe5mp7xUlS1xek6
	5jFaSc2ZeDOjt/sTOdQQlAvjjfCWHe2eqOnPwvJgN/nF4wG5mZHKWB41pRhF7KEMo1V2
	iYv47AlshuhpgE4JcR7STIG4txuCANjRKv7BR5V2Ige7C/jAWG1YX7Z0pQlimr0DWcLR
	94w1jGnrnwBnyU4dareBHWVD2jSL30SlJDFNUT6vA8Gwl44pBNFkBUp7LwUfvfMYhYUj
	QT2p4mkG8tlkzD6UlHZOgCpM/EBjaKb9vm2rQLGbr3rirsfWqbPK+VRMsfcFIDQDBrbQ
	R3pA==
X-Received: by 10.140.38.168 with SMTP id t37mr5368745qgt.33.1392320472454;
	Thu, 13 Feb 2014 11:41:12 -0800 (PST)
Received: from [172.16.26.11] ([63.110.51.11])
	by mx.google.com with ESMTPSA id u4sm8144320qai.21.2014.02.13.11.41.10
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 11:41:11 -0800 (PST)
Message-ID: <52FD1FD5.9040009@xen.org>
Date: Thu, 13 Feb 2014 19:41:09 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: David Scott <scott.dj@gmail.com>
References: <52DCE9FA.6010400@xen.org>
	<CAG_esB0qq7G41GTX08n7g2Y+YXxtrLULftmvcTML-ueu9WP7yA@mail.gmail.com>
In-Reply-To: <CAG_esB0qq7G41GTX08n7g2Y+YXxtrLULftmvcTML-ueu9WP7yA@mail.gmail.com>
Cc: "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [MirageOS-devel] Prepping for GSOC 2014 [URGENT] -
 deadline Feb 14 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/2014 16:11, David Scott wrote:
> Hi,
>
> I did a tidy-up of the Mirage/XAPI projects at the bottom. I've 
> deleted some old projects after speaking with their technical contacts 
> (in particular Jon Ludlam and Jonathan Davies) and I've deleted the 
> networking one at the end since that work is in progress anyway.
Thank you!
>
> I've had a stab at classifying them as 'GSoC'-friendly or not, mainly 
> based on difficulty.
>
> Are we still planning to add a Mentor section with photos and bios?
>
Too late now.

I will take the page, clone it, and put it in the application this 
afternoon, Pacific time.

Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qE-0006bb-NU; Thu, 13 Feb 2014 20:25:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2qB-0006az-SE
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:59 +0000
Received: from [85.158.143.35:6989] by server-3.bemta-4.messagelabs.com id
	E2/B7-11539-B1A2DF25; Thu, 13 Feb 2014 20:24:59 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392323097!5511859!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13403 invoked from network); 13 Feb 2014 20:24:58 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:58 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 2AA98B9E6;
	Thu, 13 Feb 2014 15:24:57 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:37:08 -0500
Message-ID: <1827512.nD8lVLObSo@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-9-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-9-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:57 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 08/13] xen: change order of Xen intr
	init and IO APIC registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:57 PM Roger Pau Monne wrote:
> Change order of some of the services in the SI_SUB_INTR stage, so
> that it follows the order:
> 
> - System intr initialization
> - Xen intr initialization
> - IO APIC source registration

This is ok.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qG-0006c3-8U; Thu, 13 Feb 2014 20:25:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2qD-0006bB-5m
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:25:01 +0000
Received: from [85.158.143.35:61767] by server-1.bemta-4.messagelabs.com id
	6B/5C-31661-C1A2DF25; Thu, 13 Feb 2014 20:25:00 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392323098!5523859!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31823 invoked from network); 13 Feb 2014 20:24:59 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:59 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 54F03B992;
	Thu, 13 Feb 2014 15:24:58 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:36:11 -0500
Message-ID: <11166680.j6yj7PCWSG@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-8-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-8-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:58 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 07/13] xen: implement IO APIC support in
	Xen mptable parser
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:56 PM Roger Pau Monne wrote:
> Use madt_setup_io (from madt.c) on Xen apic_enumerator, in order to
> parse the interrupt sources from the IO APIC.
> 
> I would like to get opinions, but I think we should rename and move
> madt_setup_io to io_apic.c.

It wouldn't be appropriate for io_apic.c as it isn't generic to I/O
APICs but is specific to ACPI.  However, mptable.c is really not a
great name for this file in sys/x86/xen.  I wonder if it should be
xen_apic.c instead?  Also, if Xen PV has an MADT table, why do you
need a custom APIC enumerator at all?  That is, what is preventing
the code in madt.c from just working?  Do you just not have
'device acpi' in the kernel config you are using?

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2q9-0006ag-Qu; Thu, 13 Feb 2014 20:24:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2q7-0006aK-Uu
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:56 +0000
Received: from [85.158.143.35:57529] by server-3.bemta-4.messagelabs.com id
	68/A7-11539-71A2DF25; Thu, 13 Feb 2014 20:24:55 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392323093!5515931!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16279 invoked from network); 13 Feb 2014 20:24:54 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:54 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 55B71B992;
	Thu, 13 Feb 2014 15:24:53 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:57:11 -0500
Message-ID: <2534929.ApYhIx3ttm@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-12-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-12-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:53 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 11/13] pci: introduce a new event on PCI
	device detection
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:21:00 PM Roger Pau Monne wrote:
> Add a new event that will fire each time a PCI device is added to the
> system, and allows us to register the device with Xen.

It's really hackish to make this PCI specific.  OTOH, I can't think of a
good place to have a more generic new-bus callback.  You could make the
eventhandler pass the 'device_t' instead of the dinfo.  The dinfo isn't
really a public structure, and since the device_t's ivars are already set
you can use things like 'pci_get_domain()' and 'pci_get_bus()' of the
passed in device in your callback function.
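
A device_t-based variant of the callback, along the lines suggested above, might look roughly like this. This is only a sketch: it assumes the pci_add event were redeclared to pass the device_t, and it is not compilable outside the kernel tree.

```c
/*
 * Sketch: eventhandler callback taking a device_t instead of the
 * (private) struct pci_devinfo.  Assumes pci_add were redeclared as
 *     typedef void (*pci_add_fn)(void *, device_t);
 */
static void
xen_pv_pci_device_add(void *arg, device_t dev)
{
	struct physdev_pci_device_add add_pci;
	int error;

	bzero(&add_pci, sizeof(add_pci));
	/*
	 * The device_t's ivars are already set when the event fires,
	 * so the standard PCI accessors work on the passed-in device.
	 */
	add_pci.seg = pci_get_domain(dev);
	add_pci.bus = pci_get_bus(dev);
	add_pci.devfn = (pci_get_slot(dev) << 3) | pci_get_function(dev);
	error = HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add_pci);
	if (error)
		printf("unable to add device bus %u devfn %u error: %d\n",
		    add_pci.bus, add_pci.devfn, error);
}
```

This keeps struct pci_devinfo out of the public event signature while preserving the same hypercall payload.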

> ---
>  sys/dev/pci/pci.c       |    1 +
>  sys/sys/eventhandler.h  |    5 +++++
>  sys/x86/xen/pv.c        |   21 +++++++++++++++++++++
>  sys/x86/xen/xen_nexus.c |    6 ++++++
>  sys/xen/pv.h            |    1 +
>  5 files changed, 34 insertions(+), 0 deletions(-)
> 
> diff --git a/sys/dev/pci/pci.c b/sys/dev/pci/pci.c
> index 4d8837f..2ee5093 100644
> --- a/sys/dev/pci/pci.c
> +++ b/sys/dev/pci/pci.c
> @@ -3293,6 +3293,7 @@ pci_add_child(device_t bus, struct pci_devinfo *dinfo)
> resource_list_init(&dinfo->resources);
>  	pci_cfg_save(dinfo->cfg.dev, dinfo, 0);
>  	pci_cfg_restore(dinfo->cfg.dev, dinfo);
> +	EVENTHANDLER_INVOKE(pci_add, dinfo);
>  	pci_print_verbose(dinfo);
>  	pci_add_resources(bus, dinfo->cfg.dev, 0, 0);
>  }
> diff --git a/sys/sys/eventhandler.h b/sys/sys/eventhandler.h
> index 111c21b..3201848 100644
> --- a/sys/sys/eventhandler.h
> +++ b/sys/sys/eventhandler.h
> @@ -269,5 +269,10 @@ typedef void (*unregister_framebuffer_fn)(void *,
> struct fb_info *); EVENTHANDLER_DECLARE(register_framebuffer,
> register_framebuffer_fn); EVENTHANDLER_DECLARE(unregister_framebuffer,
> unregister_framebuffer_fn);
> 
> +/* PCI events */
> +struct pci_devinfo;
> +typedef void (*pci_add_fn)(void *, struct pci_devinfo *);
> +EVENTHANDLER_DECLARE(pci_add, pci_add_fn);
> +
>  #endif /* _SYS_EVENTHANDLER_H_ */
> 
> diff --git a/sys/x86/xen/pv.c b/sys/x86/xen/pv.c
> index e5ad200..a44f8ca 100644
> --- a/sys/x86/xen/pv.c
> +++ b/sys/x86/xen/pv.c
> @@ -39,6 +39,9 @@ __FBSDID("$FreeBSD$");
>  #include <sys/rwlock.h>
>  #include <sys/mutex.h>
>  #include <sys/smp.h>
> +#include <sys/reboot.h>
> +#include <sys/pciio.h>
> +#include <sys/eventhandler.h>
> 
>  #include <vm/vm.h>
>  #include <vm/vm_extern.h>
> @@ -63,6 +66,8 @@ __FBSDID("$FreeBSD$");
> 
>  #include <xen/interface/vcpu.h>
> 
> +#include <dev/pci/pcivar.h>
> +
>  /* Native initial function */
>  extern u_int64_t hammer_time(u_int64_t, u_int64_t);
>  /* Xen initial function */
> @@ -384,6 +389,22 @@ xen_pv_ioapic_register_intr(struct ioapic_intsrc *pin)
>  	xen_register_pirq(pin->io_irq, pin->io_activehi, pin->io_edgetrigger);
>  }
> 
> +void
> +xen_pv_pci_device_add(void *arg, struct pci_devinfo *dinfo)
> +{
> +	struct physdev_pci_device_add add_pci;
> +	int error;
> +
> +	bzero(&add_pci, sizeof(add_pci));
> +	add_pci.seg = dinfo->cfg.domain;
> +	add_pci.bus = dinfo->cfg.bus;
> +	add_pci.devfn = (dinfo->cfg.slot << 3) | dinfo->cfg.func;
> +	error = HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add_pci);
> +	if (error)
> +		printf("unable to add device bus %u devfn %u error: %d\n",
> +		       add_pci.bus, add_pci.devfn, error);
> +}
> +
>  static void
>  xen_pv_set_init_ops(void)
>  {
> diff --git a/sys/x86/xen/xen_nexus.c b/sys/x86/xen/xen_nexus.c
> index 823b3bc..60c6c5d 100644
> --- a/sys/x86/xen/xen_nexus.c
> +++ b/sys/x86/xen/xen_nexus.c
> @@ -34,6 +34,7 @@ __FBSDID("$FreeBSD$");
>  #include <sys/sysctl.h>
>  #include <sys/systm.h>
>  #include <sys/smp.h>
> +#include <sys/eventhandler.h>
> 
>  #include <contrib/dev/acpica/include/acpi.h>
> 
> @@ -42,6 +43,7 @@ __FBSDID("$FreeBSD$");
>  #include <machine/nexusvar.h>
> 
>  #include <xen/xen-os.h>
> +#include <xen/pv.h>
> 
>  static const char *xen_devices[] =
>  {
> @@ -87,6 +89,10 @@ nexus_xen_attach(device_t dev)
>  		/* Disable some ACPI devices that are not usable by Dom0 */
>  		setenv("debug.acpi.disabled", "cpu hpet timer");
> 
> +		/* Register PCI add hook */
> +		EVENTHANDLER_REGISTER(pci_add, xen_pv_pci_device_add, NULL,
> +		                      EVENTHANDLER_PRI_FIRST);
> +
>  		acpi_dev = BUS_ADD_CHILD(dev, 10, "acpi", 0);
>  		if (acpi_dev == NULL)
>  			panic("Unable to add ACPI bus to Xen Dom0");
> diff --git a/sys/xen/pv.h b/sys/xen/pv.h
> index a9d6eb0..ac737a7 100644
> --- a/sys/xen/pv.h
> +++ b/sys/xen/pv.h
> @@ -24,5 +24,6 @@
>  #define	__XEN_PV_H__
> 
>  int	xen_pv_start_all_aps(void);
> +void	xen_pv_pci_device_add(void *, struct pci_devinfo *);
> 
>  #endif	/* __XEN_PV_H__ */

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qA-0006ao-7b; Thu, 13 Feb 2014 20:24:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2q9-0006aU-4d
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:57 +0000
Received: from [85.158.143.35:6825] by server-3.bemta-4.messagelabs.com id
	CC/A7-11539-81A2DF25; Thu, 13 Feb 2014 20:24:56 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392323095!5511851!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13298 invoked from network); 13 Feb 2014 20:24:55 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:55 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 7A86CB9CA;
	Thu, 13 Feb 2014 15:24:54 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:50:35 -0500
Message-ID: <2410827.IqfpSAhe3T@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-11-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-11-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:54 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 10/13] xen: add ACPI bus to xen_nexus
	when running as Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:59 PM Roger Pau Monne wrote:
> Also disable a couple of ACPI devices that are not usable under Dom0.

Hmm, setting debug.acpi.disabled in this way is a bit hacky.  It might
be fine, however, if there's no way for the user to set it before booting
the kernel (as opposed to having the relevant drivers explicitly disable
themselves under Xen, which I think would be cleaner, but would also
make your patch larger).
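
The driver-side alternative mentioned here would amount to each affected driver bailing out of its own probe routine when running under Xen, roughly like the following. This is a sketch, not part of the patch; the xen_pv_domain() check is an assumption about how the driver would detect Xen.

```c
/*
 * Sketch of the "drivers disable themselves" approach: e.g. in the
 * hpet driver's probe method, return ENXIO early when running as a
 * Xen PV domain instead of relying on debug.acpi.disabled.
 */
static int
hpet_probe(device_t dev)
{
	if (xen_pv_domain())
		return (ENXIO);		/* not usable under Dom0 */
	/* ... normal probe logic ... */
	return (BUS_PROBE_DEFAULT);
}
```

The trade-off is exactly as stated: the check is explicit and per-driver, but it has to be repeated in every driver that is unusable under Dom0.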

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qG-0006c3-8U; Thu, 13 Feb 2014 20:25:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2qD-0006bB-5m
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:25:01 +0000
Received: from [85.158.143.35:61767] by server-1.bemta-4.messagelabs.com id
	6B/5C-31661-C1A2DF25; Thu, 13 Feb 2014 20:25:00 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392323098!5523859!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31823 invoked from network); 13 Feb 2014 20:24:59 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:59 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 54F03B992;
	Thu, 13 Feb 2014 15:24:58 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:36:11 -0500
Message-ID: <11166680.j6yj7PCWSG@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-8-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-8-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:58 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 07/13] xen: implement IO APIC support in
	Xen mptable parser
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:56 PM Roger Pau Monne wrote:
> Use madt_setup_io (from madt.c) on Xen apic_enumerator, in order to
> parse the interrupt sources from the IO APIC.
> 
> I would like to get opinions, but I think we should rename and move
> madt_setup_io to io_apic.c.

It wouldn't be appropriate for io_apic.c as it isn't generic to I/O
APICs but is specific to ACPI.  However, mptable.c is really not a
great name for this file in sys/x86/xen.  I wonder if it should be
xen_apic.c instead?  Also, if Xen PV has an MADT table, why do you
need a custom APIC enumerator at all?  That is, what is preventing
the code in madt.c from just working?  Do you just not have
'device acpi' in the kernel config you are using?

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qB-0006b0-KS; Thu, 13 Feb 2014 20:24:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2qA-0006am-I4
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:58 +0000
Received: from [193.109.254.147:2881] by server-15.bemta-14.messagelabs.com id
	01/17-10839-91A2DF25; Thu, 13 Feb 2014 20:24:57 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392323096!448911!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16591 invoked from network); 13 Feb 2014 20:24:57 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:57 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id E17FFB9DD;
	Thu, 13 Feb 2014 15:24:55 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:42:08 -0500
Message-ID: <1980951.95r2q2cca3@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-10-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-10-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:56 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 09/13] xen: change quality of the MADT
	ACPI enumerator
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:58 PM Roger Pau Monne wrote:
> Lower the quality of the MADT ACPI enumerator, so on Xen Dom0 we can
> force the usage of the Xen mptable enumerator even when ACPI is
> detected.

Hmm, so I think one question is why does the existing MADT parser
not work with the MADT table provided by Xen?  This may very well
be correct, but if it's only a small change to make the existing
MADT parser work with Xen's MADT table, that route might be
preferable.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qE-0006bb-NU; Thu, 13 Feb 2014 20:25:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2qB-0006az-SE
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:59 +0000
Received: from [85.158.143.35:6989] by server-3.bemta-4.messagelabs.com id
	E2/B7-11539-B1A2DF25; Thu, 13 Feb 2014 20:24:59 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392323097!5511859!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13403 invoked from network); 13 Feb 2014 20:24:58 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:58 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 2AA98B9E6;
	Thu, 13 Feb 2014 15:24:57 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:37:08 -0500
Message-ID: <1827512.nD8lVLObSo@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-9-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-9-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:57 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 08/13] xen: change order of Xen intr
	init and IO APIC registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:57 PM Roger Pau Monne wrote:
> Change order of some of the services in the SI_SUB_INTR stage, so
> that it follows the order:
> 
> - System intr initialization
> - Xen intr initialization
> - IO APIC source registration

This is ok.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qB-0006b0-KS; Thu, 13 Feb 2014 20:24:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2qA-0006am-I4
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:58 +0000
Received: from [193.109.254.147:2881] by server-15.bemta-14.messagelabs.com id
	01/17-10839-91A2DF25; Thu, 13 Feb 2014 20:24:57 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392323096!448911!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16591 invoked from network); 13 Feb 2014 20:24:57 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:57 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id E17FFB9DD;
	Thu, 13 Feb 2014 15:24:55 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:42:08 -0500
Message-ID: <1980951.95r2q2cca3@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-10-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-10-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:56 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 09/13] xen: change quality of the MADT
	ACPI enumerator
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:58 PM Roger Pau Monne wrote:
> Lower the quality of the MADT ACPI enumerator, so on Xen Dom0 we can
> force the usage of the Xen mptable enumerator even when ACPI is
> detected.

Hmm, I think one question is why the existing MADT parser does not
work with the MADT table provided by Xen.  This approach may very well
be correct, but if it would only take a small change to make the
existing MADT parser work with Xen's MADT table, that route might be
preferable.

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2q9-0006aZ-C5; Thu, 13 Feb 2014 20:24:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2q7-0006aL-VP
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:56 +0000
Received: from [85.158.143.35:57515] by server-2.bemta-4.messagelabs.com id
	2C/71-10891-71A2DF25; Thu, 13 Feb 2014 20:24:55 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392323093!5525134!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=1.8 required=7.0 tests=DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15029 invoked from network); 13 Feb 2014 20:24:54 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:54 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 11C6EB9D8;
	Thu, 13 Feb 2014 15:24:52 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 17:02:47 -0500
Message-ID: <2452208.OksOsMWhSU@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-13-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-13-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:52 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 12/13] mca: disable cmc enable on Xen PV
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:21:01 PM Roger Pau Monne wrote:
> Xen PV guests don't have a lapic, so disable the lapic call in mca
> initialization.

I think this is fine, but I wonder if it wouldn't be cleaner to have 
lapic_enable_cmc() do the check instead.  Where else do you check 
lapic_disabled?

> ---
>  sys/x86/x86/mca.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
> 
> diff --git a/sys/x86/x86/mca.c b/sys/x86/x86/mca.c
> index f1369cd..e9d2c1d 100644
> --- a/sys/x86/x86/mca.c
> +++ b/sys/x86/x86/mca.c
> @@ -897,7 +897,7 @@ _mca_init(int boot)
>  		}
> 
>  #ifdef DEV_APIC
> -		if (PCPU_GET(cmci_mask) != 0 && boot)
> +		if (PCPU_GET(cmci_mask) != 0 && boot && !lapic_disabled)
>  			lapic_enable_cmc();
>  #endif
>  	}

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:25:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE2qA-0006ao-7b; Thu, 13 Feb 2014 20:24:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WE2q9-0006aU-4d
	for xen-devel@lists.xenproject.org; Thu, 13 Feb 2014 20:24:57 +0000
Received: from [85.158.143.35:6825] by server-3.bemta-4.messagelabs.com id
	CC/A7-11539-81A2DF25; Thu, 13 Feb 2014 20:24:56 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392323095!5511851!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=2.3 required=7.0 tests=BODY_RANDOM_LONG,
	DATE_IN_PAST_96_XX
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13298 invoked from network); 13 Feb 2014 20:24:55 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 13 Feb 2014 20:24:55 -0000
Received: from ralph.baldwin.cx (pool-173-70-85-31.nwrknj.fios.verizon.net
	[173.70.85.31])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 7A86CB9CA;
	Thu, 13 Feb 2014 15:24:54 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Date: Sat, 08 Feb 2014 16:50:35 -0500
Message-ID: <2410827.IqfpSAhe3T@ralph.baldwin.cx>
User-Agent: KMail/4.10.5 (FreeBSD/10.0-STABLE; KDE/4.10.5; amd64; ; )
In-Reply-To: <1387884062-41154-11-git-send-email-roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-11-git-send-email-roger.pau@citrix.com>
MIME-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Thu, 13 Feb 2014 15:24:54 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 10/13] xen: add ACPI bus to xen_nexus
	when running as Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tuesday, December 24, 2013 12:20:59 PM Roger Pau Monne wrote:
> Also disable a couple of ACPI devices that are not usable under Dom0.

Hmm, setting debug.acpi.disabled in this way is a bit hacky.  It might
be fine, however, if there's no way for the user to set it before booting
the kernel (as opposed to having the relevant drivers explicitly disable
themselves under Xen, which I think would be cleaner, but would also
make your patch larger).

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 20:49:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 20:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE3Dg-0007rx-63; Thu, 13 Feb 2014 20:49:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Nate.Studer@dornerworks.com>) id 1WE3Df-0007rs-Dx
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 20:49:15 +0000
Received: from [193.109.254.147:29180] by server-3.bemta-14.messagelabs.com id
	33/BA-00432-ACF2DF25; Thu, 13 Feb 2014 20:49:14 +0000
X-Env-Sender: Nate.Studer@dornerworks.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392324553!4175346!1
X-Originating-IP: [12.207.209.148]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20440 invoked from network); 13 Feb 2014 20:49:13 -0000
Received: from unknown (HELO mail.dornerworks.com) (12.207.209.148)
	by server-16.tower-27.messagelabs.com with SMTP;
	13 Feb 2014 20:49:13 -0000
Received: from [172.27.12.66] (172.27.12.66) by Quimby.dw.local (172.27.1.90)
	with Microsoft SMTP Server (TLS) id 14.2.247.3;
	Thu, 13 Feb 2014 15:47:33 -0500
Message-ID: <52FD2F63.3090601@dornerworks.com>
Date: Thu, 13 Feb 2014 15:47:31 -0500
From: Nate Studer <nate.studer@dornerworks.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>, Simon Martin
	<furryfuttock@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
In-Reply-To: <1392313015.32038.112.camel@Solace>
X-Originating-IP: [172.27.12.66]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/13/2014 12:36 PM, Dario Faggioli wrote:
> On gio, 2014-02-13 at 16:56 +0000, Simon Martin wrote:
>> Hi all,
>>
> Hey Simon!
> 
> First of all, as you're using ARINC, I'm adding Nate, as he's ARINC's
> maintainer, let's see if he can help us! ;-P
> 
>> I  am  now successfully running my little operating system inside Xen.
>> It  is  fully  preemptive and working a treat, 
>>
> Aha, this is great! :-)
> 
>> but I have just noticed
>> something  I  wasn't expecting, and will really be a problem for me if
>> I can't work around it.
>>
> Well, let's see...
> 
>> My configuration is as follows:
>>
>> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
>>
>> 2.- Xen: 4.4 (just pulled from repository)
>>
>> 3.- Dom0: Debian Wheezy (Kernel 3.2)
>>
>> 4.- 2 cpu pools:
>>
>> # xl cpupool-list
>> Name               CPUs   Sched     Active   Domain count
>> Pool-0               3    credit       y          2
>> pv499                1  arinc653       y          1
>>
> Ok, I think I figured this out from the other information, but it would
> be useful to know what pcpus are assigned to what cpupool. I think it's
> `xl cpupool-list -c'.
> 
>> 5.- 2 domU:
>>
>> # xl list
>> Name                                        ID   Mem VCPUs      State   Time(s)
>> Domain-0                                     0   984     3     r-----      39.7
>> win7x64                                      1  2046     3     -b----     143.0
>> pv499                                        3   128     1     -b----      61.2
>>
>> 6.- All VCPUs are pinned:
>>
> Right, although, if you use cpupools, and if I've understood what you're
> up to, you really should not require pinning. I mean, the isolation
> between the RT-ish domain and the rest of the world should already be
> in place thanks to cpupools.
> 
> Actually, pinning can help, but maybe not in the exact way you're using
> it...
> 
>> # xl vcpu-list
>> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
>> Domain-0                             0     0    0   -b-      27.5  0
>> Domain-0                             0     1    1   -b-       7.2  1
>> Domain-0                             0     2    2   r--       5.1  2
>> win7x64                              1     0    0   -b-      71.6  0
>> win7x64                              1     1    1   -b-      37.7  1
>> win7x64                              1     2    2   -b-      34.5  2
>> pv499                                3     0    3   -b-      62.1  3
>>
> ...as it can be seen here.
> 
> So, if you ask me, you're restricting things too much in pool-0, where
> dom0 and the Windows VM run. In fact, is there a specific reason why
> you need each of their vcpus to be statically pinned to only one
> pcpu? If not, I'd leave them a little bit more freedom.
> 
> What I'd try is:
>  1. all dom0 and win7 vcpus free, so no pinning in pool0.
>  2. pinning as follows:
>      * all vcpus of win7 --> pcpus 1,2
>      * all vcpus of dom0 --> no pinning
>    this way, what you get is the following: win7 could suffer sometimes,
>    if all its 3 vcpus get busy, but that, I think, is acceptable, at
>    least up to a certain extent; is that the case?
>    At the same time, you
>    are making sure dom0 always has a chance to run, as pcpu#0 would be
>    its exclusive playground, in case someone, including your pv499
>    domain, needs its services.
> 
>> 7.- pv499 is the domU that I am testing. It has no disk or vif devices
>> (yet). I am running a little test program in pv499 and the timing I
>> see varies depending on disk activity.
>>
>> My test program prints the time taken in milliseconds for a
>> million cycles. With no disk activity I see 940 ms, with disk activity
>> I see 1200 ms.
>>
> Wow, it's very hard to tell. What I first thought is that your domain
> may need something from dom0, and the suboptimal (IMHO) pinning
> configuration you're using could be slowing that down. The bug in this
> theory is that dom0 services are mostly PV drivers for disk and network,
> which you say you don't have...

  Any shared resource between domains could cause one domain to affect the
timing of another domain:  shared cache, shared memory controller, interrupts,
shared I/O interface, domain-0, etc...

  Given the small size and nature of your test application, the cache, which
Ian mentioned, is a likely culprit, but if your application gets more complex
some of these other sources could show up.

  What kind of variation (jitter) on this measurement can your application handle?

  You can never eliminate all the jitter, but you can get rid of a lot of it by
carefully partitioning your system.  The pinning suggested by Dario looks to be
a good step towards this goal.

     Nate

> 
> I still think your pinning setup is unnecessarily restrictive, so I'd give
> it a try, but it's probably not the root cause of your issue.
> 
>> I can't understand this, as disk activity should be running on cores 0,
>> 1 and 2, but never on core 3. The only thing running on core 3 should
>> be my paravirtual machine and the hypervisor stub.
>>
> Right. Are you familiar with tracing what happens inside Xen with
> xentrace and, perhaps, xenalyze? It takes a bit of time to get used to
> it but, once you master it, it is a good means of getting really
> useful info!
> 
> There is a blog post about that here:
> http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
> and it should have most of the info, or the links to where to find them.
> 
> It's going to be a lot of data, but if you trace one run without disk IO
> and one run with disk IO, it should be doable to compare the
> differences, for instance, in terms of when the vcpus of your domain are
> active, as well as when they get scheduled, and from that we can
> hopefully narrow down the real root cause a bit more.
> 
> Let us know if you think you need help with that.
> 
> Regards,
> Dario
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 21:12:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 21:12:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE3Ze-0000Aa-UF; Thu, 13 Feb 2014 21:11:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Robert.VanVossen@dornerworks.com>)
	id 1WE3Zc-0000AV-Oc
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 21:11:56 +0000
Received: from [85.158.143.35:54895] by server-1.bemta-4.messagelabs.com id
	8D/89-31661-C153DF25; Thu, 13 Feb 2014 21:11:56 +0000
X-Env-Sender: Robert.VanVossen@dornerworks.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392325914!5540859!1
X-Originating-IP: [12.207.209.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25707 invoked from network); 13 Feb 2014 21:11:55 -0000
Received: from unknown (HELO mail.dornerworks.com) (12.207.209.148)
	by server-2.tower-21.messagelabs.com with SMTP;
	13 Feb 2014 21:11:55 -0000
Received: from [172.27.12.69] (172.27.12.69) by Quimby.dw.local (172.27.1.90)
	with Microsoft SMTP Server (TLS) id 14.2.247.3;
	Thu, 13 Feb 2014 16:10:21 -0500
Message-ID: <52FD349F.8070101@dornerworks.com>
Date: Thu, 13 Feb 2014 16:09:51 -0500
From: Robbie VanVossen <robert.vanvossen@dornerworks.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130620 Thunderbird/17.0.7
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <CAE4oM6ze_oXz=hkgbfxPvDf+LH8YVhiYF6xrAdn4TQxk00c4UA@mail.gmail.com>
	<1390230336.23576.24.camel@Solace>
	<52F2AD63.7030109@dornerworks.com>
	<1391764936.9917.58.camel@Solace>
In-Reply-To: <1391764936.9917.58.camel@Solace>
X-Originating-IP: [172.27.12.69]
Cc: Pavlo Suikov <pavlo.suikov@globallogic.com>,
	Nate Studer <nate.studer@dornerworks.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Delays on usleep calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/7/2014 4:22 AM, Dario Faggioli wrote:
> From your experiments (and from some other numbers I also have) it looks 
> like this lower bound is not terrible in Xen, which is something good to
> know... So thanks again for taking the time of running the benchmarks
> and sharing the results! :-D
> 
> That being said, especially if we compare to baremetal, I think there is
> some room for improvements (I mean, there always will be an overhead,
> but still...). Do you, by any chance, have the figures for cyclictest on
> Linux baremetal too (on the same hardware and kernel, if possible)?

Dario,

Here is an updated table:

+--------+--------+-----------+-----+-------+-----+
| Config | Domain | Scheduler |   Latency (us)    |
+--------+--------+-----------+-----+-------+-----+
|        |        |           | Min |   Max | Avg |
+--------+--------+-----------+-----+-------+-----+
| 0      | NA     | CFS       |   4 |    35 |  10 |
| 1      | 0      | Arinc653  |  20 |   163 |  68 |
| 2      | 0      | Arinc653  |  21 |   173 |  68 |
| 3      | 0      | Credit    |  23 |  1041 |  87 |
| 3      | 1      | Arinc653  |  20 |   155 |  75 |
+--------+--------+-----------+-----+-------+-----+

Configuration 0 is the same kernel as before, but running on baremetal, as
requested. As expected, these values are lower than the virtualized results. I
also added the results of running cyclictest on dom0 in configuration 3; in
that configuration, dom0 was running under the Credit scheduler in a separate
cpupool from the guest.


On another note, I attempted to get the same measurements for a Linux kernel
with the Real Time Patch applied. Here are the results:

-------------------
Configuration 0 - Bare Metal Kernel

Ubuntu 12.04.1 - Linux 3.2.24-rt38

-------------------
Configuration 1 - Only Domain-0

Xen: 		4.4-rc2 - Arinc653 Scheduler
Domain-0: 	Ubuntu 12.04.1 - Linux 3.2.24-rt38

xl list -n:
Name             ID   Mem  VCPUs   State  Time(s)  NODE Affinity
Domain-0          0  1535      1  r-----     30.9  all

xl vcpu-list:
Name             ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0          0     0    0    r--     35.5  all

-------------------
Configuration 2 - Domain-0 and Unscheduled guest

Xen: 		4.4-rc2 - Arinc653 Scheduler
Domain-0: 	Ubuntu 12.04.1 - Linux 3.2.24-rt38
dom1: 		Ubuntu 12.04.1 - Linux 3.2.24-rt38

xl list -n:
Name             ID   Mem  VCPUs   State  Time(s)  NODE Affinity
Domain-0          0  1535      1  r-----     39.7  all
dom1              1   512      1  ------      0.0  all

xl vcpu-list:
Name             ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0          0     0    0    r--     40.5  all
dom1              1     0    0    ---      0.0  all

-------------------
Command used:

cyclictest -t1 -p 1 -i 30000 -l 500 -q
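
In case it is useful for reproducing these numbers, the summary line that
cyclictest prints in quiet mode can be pulled apart with a short awk sketch.
The sample line and its values below are illustrative only, not taken from
this run:

```shell
# Parse the Min/Avg/Max latency fields from a cyclictest -q summary line.
# The sample line is a made-up example, not output from the runs above.
line='T: 0 ( 1234) P: 1 I:30000 C: 500 Min: 20 Act: 68 Avg: 68 Max: 163'
echo "$line" | awk '{ for (i = 1; i <= NF; i++)
    if ($i ~ /^(Min|Avg|Max):$/) printf "%s %s\n", $i, $(i + 1) }'
```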

Results:
+--------+--------+-----------+-----+-------+-----+
| Config | Domain | Scheduler |   Latency (us)    |
+--------+--------+-----------+-----+-------+-----+
|        |        |           | Min |   Max | Avg |
+--------+--------+-----------+-----+-------+-----+
| 0      | NA     | CFS       |   3 |     8 |   5 |
| 1      | 0      | Arinc653  |  20 |   160 |  68 |
| 2      | 0      | Arinc653  |  18 |   150 |  66 |
+--------+--------+-----------+-----+-------+-----+

I was unable to boot the guest using the kernel with the Real Time Patch
applied, which is why I didn't replicate configuration 3.

-- 
---
Robbie VanVossen
DornerWorks, Ltd.
Embedded Systems Engineering

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 21:15:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 21:15:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE3dH-0000IA-LL; Thu, 13 Feb 2014 21:15:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WE3dF-0000I5-VC
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 21:15:42 +0000
Received: from [85.158.143.35:12768] by server-3.bemta-4.messagelabs.com id
	E0/57-11539-DF53DF25; Thu, 13 Feb 2014 21:15:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392326139!5541407!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8617 invoked from network); 13 Feb 2014 21:15:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 21:15:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,840,1384300800"; d="scan'208";a="100608460"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 21:15:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 16:15:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WE3dB-0002g5-5U;
	Thu, 13 Feb 2014 21:15:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WE3dB-0004Yo-0V;
	Thu, 13 Feb 2014 21:15:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24865-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 21:15:37 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24865: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24865 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24865/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9
baseline version:
 xen                  eba76e4112339b2bb9dcac66835c04fd5ba7b5d2

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Eric Houby <ehouby@yahoo.com>
  Jan Beulich <jbeulich@suse.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=640b31535ab8fe07911d0b90ae4adbe6078026c9
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 640b31535ab8fe07911d0b90ae4adbe6078026c9
+ branch=xen-4.2-testing
+ revision=640b31535ab8fe07911d0b90ae4adbe6078026c9
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 640b31535ab8fe07911d0b90ae4adbe6078026c9:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   eba76e4..640b315  640b31535ab8fe07911d0b90ae4adbe6078026c9 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 640b31535ab8fe07911d0b90ae4adbe6078026c9:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   eba76e4..640b315  640b31535ab8fe07911d0b90ae4adbe6078026c9 -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 22:26:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 22:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE4j4-00024k-B4; Thu, 13 Feb 2014 22:25:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1WE4j3-00024e-9y
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 22:25:45 +0000
Received: from [193.109.254.147:17510] by server-16.bemta-14.messagelabs.com
	id 82/F6-21945-8664DF25; Thu, 13 Feb 2014 22:25:44 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392330343!4190498!1
X-Originating-IP: [74.125.82.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31259 invoked from network); 13 Feb 2014 22:25:43 -0000
Received: from mail-we0-f177.google.com (HELO mail-we0-f177.google.com)
	(74.125.82.177)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 22:25:43 -0000
Received: by mail-we0-f177.google.com with SMTP id t61so8036675wes.22
	for <xen-devel@lists.xen.org>; Thu, 13 Feb 2014 14:25:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:cc:subject:in-reply-to:references
	:mime-version:content-type:content-transfer-encoding;
	bh=W3ob+VK+Yrer3zrjBmkisXvSAV5LBfMSfu3Ca81ZkfY=;
	b=GQLU3/WgCdz2P+IPRUuGoeHgkthEpSBAmFG/eUtywcpgK2sW8qy6FQM2t3esQP4SB2
	avdLsH8iASY1IfTY26CS4pWRCNmbgKG4wkMBMXgbi+y9XP838cI1CTQSjKOvf/4PMXCQ
	/fMoZtU4lhtlAAQ0gFoJvSL0+wUeWKIWTApAVpFwDz5L5vnfL0iWWVseoXYAbXuaGPTd
	i5yfsgoH1O5TIWgmk9iDCqjHoU6maOsfXICcR8PKwd7LcLcwuwxpFQaAj1QmfoNK+Blw
	q9XmiDriI9tFZG1KwoV6ukdepyQpH+BaRHOSdgiqMSalWvB8EixO4XmyrE39AXq4qDth
	4brg==
X-Received: by 10.180.211.208 with SMTP id ne16mr8198596wic.21.1392330343427; 
	Thu, 13 Feb 2014 14:25:43 -0800 (PST)
Received: from localhost (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id de3sm7808394wjb.8.2014.02.13.14.25.20
	for <multiple recipients>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 14:25:42 -0800 (PST)
Date: Thu, 13 Feb 2014 22:25:07 +0000
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <295276356.20140213222507@gmail.com>
To: xen-devel@lists.xen.org
In-Reply-To: <1392313015.32038.112.camel@Solace>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>, Don Slutz <dslutz@verizon.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks for all the replies guys.

With respect to your comments/queries, I'll answer them all here.

>> 1.- Hardware: Intel i3, 4GB RAM, 64GB SSD.
Andrew> Can you be more specific - this covers 4 generations of Intel CPUs.

This is a 4th gen (Haswell) processor.

>> # xl cpupool-list
>> Name               CPUs   Sched     Active   Domain count
>> Pool-0               3    credit       y          2
>> pv499                1  arinc653       y          1
Dario> Ok, I think I figured this out from the other information, but it would
Dario> be useful to know what pcpus are assigned to what cpupool. I think it's
Dario> `xl cpupool-list -c'.

Pool-0: 0,1,2
Dom0: 3

Don> How many instructions per second a thread gets does depend on the
Don> "idleness" of other threads (no longer just the hyperthread's
Don> partner).

This seems a bit strange to me. In my case I have a time-critical PV
domain running by itself in a CPU pool, so Xen should not be rescheduling
it, and I can't see how this hypervisor thread would be affected.

>> 6.- All VCPUs are pinned:
>> 
Dario> Right, although, if you use cpupools, and if I've understood what you're
Dario> up to, you really should not require pinning. I mean, the isolation
Dario> between the RT-ish domain and the rest of the world should be already in
Dario> place thanks to cpupools.

This is what I thought; however, when looking at the vcpu-list, the
CPU affinity was "all" until I started pinning. As I wasn't sure
whether that meant "all inside this cpu pool" or simply "all", I felt
it was safer to do it explicitly.

Dario> So, if you ask me, you're restricting too much things in
Dario> pool-0, where dom0 and the Windows VM runs. In fact, is there a
Dario> specific reason why you need all their vcpus to be statically
Dario> pinned each one to only one pcpu? If not, I'd leave them a
Dario> little bit more of freedom.

I agree with you here; however, when I don't pin, the CPU affinity is "all".
Is this "all in the CPU pool"? I couldn't find that info.
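For what it's worth, a minimal way to check what the affinity column
actually reports is to pull it out of `xl vcpu-list` output. The snippet
below is only a sketch: the sample output is invented for illustration
(on a real host you would pipe `xl vcpu-list` instead), and the pin
commands in the comments follow Dario's suggested layout:

```shell
#!/bin/sh
# Sketch only: parse a captured `xl vcpu-list` snippet and check the
# CPU Affinity column. The sample output below is invented; on a live
# host, replace the printf with `xl vcpu-list`.
sample='Name       ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0    0     0     3   r--      42.0 all
win7        1     0     1   -b-      10.5 1-2
pv499       2     0     3   r--       5.0 3'

# Extract the affinity of Domain-0 vcpu 0 (last column).
dom0_affinity=$(printf '%s\n' "$sample" | awk '$1 == "Domain-0" { print $NF }')
echo "Domain-0 affinity: $dom0_affinity"

# On a live system, Dario's suggested pinning would be something like:
#   xl vcpu-pin win7 all 1-2       # all win7 vcpus -> pcpus 1,2
#   xl vcpu-pin Domain-0 all all   # dom0 unpinned
```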

Dario> What I'd try is:
Dario>  1. all dom0 and win7 vcpus free, so no pinning in pool0.
Dario>  2. pinning as follows:
Dario>      * all vcpus of win7 --> pcpus 1,2
Dario>      * all vcpus of dom0 --> no pinning
Dario>    this way, what you get is the following: win7 could suffer sometimes,
Dario>    if all its 3 vcpus gets busy, but that, I think is acceptable, at
Dario>    least up to a certain extent, is that the case?
Dario>    At the same time, you
Dario>    are making sure dom0 always has a chance to run, as pcpu#0 would be
Dario>    his exclusive playground, in case someone, including your pv499
Dario>    domain, needs its services.

This is what I had when I started :-). Thanks for the confirmation
that I was doing it right. However, if hyperthreading is the issue,
then I will only have 2 pcpus available, and I will assign them both
to dom0 and win7.

Dario> Right. Are you familiar with tracing what happens inside Xen
Dario> with xentrace and, perhaps, xenalyze? It takes a bit of time to
Dario> get used to it but, once you master it, it is a good means of
Dario> getting out really useful info!

Dario> There is a blog post about that here:
Dario> http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
Dario> and it should have most of the info, or the links to where to
Dario> find them.

Thanks for this. If this problem is more than the hyperthreading, then
I will definitely use it. It also looks like it might be useful when I
start looking at the jitter on the singleshot timer (which should be
in a couple of weeks).
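For the archives, a capture-and-summarise session along the lines of
that blog post might look like the sketch below. The event mask and
xenalyze flag are from memory, so verify them against the post and the
xentrace(8) man page; the commands themselves are commented out because
they need a live Xen host, and only the mask arithmetic actually runs:

```shell
#!/bin/sh
# Sketch of a xentrace/xenalyze session (flags from memory, verify
# before use; these commands require a live Xen host):
#   xentrace -D -e 0x0002f000 trace.bin   # scheduler-class events only
#   ... reproduce the problem, then Ctrl-C ...
#   xenalyze --summary trace.bin > summary.txt

# 0x0002f000 selects the scheduler trace class (class in the high bits,
# sub-class mask in the low bits); computed here just to show the value.
sched_mask=$(( 0x0002f000 ))
echo "scheduler event mask = $sched_mask"
```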

Andrew> How are you measuring time in pv499?

In two ways: by counting the periodic interrupts (125-microsecond
period) and from the monotonic clock. They both agree on the time.
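As a sanity check on that cross-check: at a 125 us period, the tick
count and an independent clock should track each other exactly. A quick
sketch, with invented numbers:

```shell
#!/bin/sh
# Cross-check tick counting against a monotonic clock reading:
# 8000 ticks at 125 us each should account for exactly 1 second.
ticks=8000
tick_us=125
elapsed_us=$(( ticks * tick_us ))
echo "elapsed: ${elapsed_us} us"
```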

Andrew> What are your C-states and P-states looking like?

No idea. If the hyperthreading suggestion doesn't work out, I'll look
at this.

Andrew> If you can, try disabling turbo?

If the hyperthreading suggestion doesn't work out, I'll look at this.
-- 
Best regards,
 Simon                            mailto:furryfuttock@gmail.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 23:02:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 23:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE5IF-00034i-Sn; Thu, 13 Feb 2014 23:02:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WE5IE-00034d-Oy
	for xen-devel@lists.xensource.com; Thu, 13 Feb 2014 23:02:07 +0000
Received: from [85.158.137.68:2241] by server-5.bemta-3.messagelabs.com id
	FA/94-04712-EEE4DF25; Thu, 13 Feb 2014 23:02:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392332523!497648!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30570 invoked from network); 13 Feb 2014 23:02:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 23:02:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,841,1384300800"; d="scan'208";a="102413521"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 13 Feb 2014 23:02:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 13 Feb 2014 18:02:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WE5I9-0003CX-J8;
	Thu, 13 Feb 2014 23:02:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WE5I9-0008Tf-Dc;
	Thu, 13 Feb 2014 23:02:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24867-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 13 Feb 2014 23:02:01 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24867: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24867 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24867/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24862

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 10 guest-saverestore            fail   like 24862

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  1278b09cc5a38da4efbe0de37a7f9fab9d19f913
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  1278b09cc5a38da4efbe0de37a7f9fab9d19f913
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 13 23:13:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Feb 2014 23:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE5TM-0003Pl-Ca; Thu, 13 Feb 2014 23:13:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WE5TK-0003Pg-6J
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 23:13:34 +0000
Received: from [85.158.137.68:59571] by server-14.bemta-3.messagelabs.com id
	FB/3D-08196-D915DF25; Thu, 13 Feb 2014 23:13:33 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392333201!1747426!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDQ0NDIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24602 invoked from network); 13 Feb 2014 23:13:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	13 Feb 2014 23:13:23 -0000
X-IronPort-AV: E=Sophos;i="4.95,841,1384300800"; 
	d="asc'?scan'208";a="100637703"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 13 Feb 2014 23:13:21 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 13 Feb 2014 18:13:20 -0500
Message-ID: <1392333198.32038.153.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Fri, 14 Feb 2014 00:13:18 +0100
In-Reply-To: <295276356.20140213222507@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependence between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8656291078796531798=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8656291078796531798==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-R32yVRGz99RkEPtHT6pT"

--=-R32yVRGz99RkEPtHT6pT
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2014-02-13 at 22:25 +0000, Simon Martin wrote:
> Thanks for all the replies guys.
>
:-)

> Don> How many instructions per second a thread gets does depend on the
> Don> "idleness" of other threads (no longer just the hyperthread's
> Don> partner).
>
> This seems a bit strange to me. In my case I have a time-critical PV
> domain running by itself in a CPU pool, so Xen should not be
> scheduling it, and I can't see how this hypervisor thread would be
> affected.
>
I think Don is referring to the idleness of the other _hardware_ threads
in the chip, rather than software threads of execution, either in Xen or
in Dom0/DomU. I checked his original e-mail and, AFAIUI, he seems to
confirm that the throughput you get on, say, core 3 depends on what its
sibling core (which really is its sibling hyperthread, again in the
hardware sense... gah, the terminology is just a mess! :-P) is doing. He
also seems to add that there is a similar kind of inter-dependency
between all the hardware hyperthreads, not just between siblings.

Does this make sense Don?
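
Don's point can be put into a toy model (plain Python; this is an
illustration of the idea only, not how Xen or the hardware actually
accounts for SMT, and the numbers are made up):

```python
# Toy model of hyperthread throughput sharing (illustration only).
# A core's execution resources are shared among its busy hardware
# threads, so a thread's instructions/second depends on how idle
# its sibling is, even if nothing changes in the scheduler.

def throughput(core_capacity, busy_threads):
    """Per-thread throughput when busy_threads hw threads are active."""
    if busy_threads <= 1:
        return core_capacity          # sibling idle: whole core to us
    return core_capacity / busy_threads

print(throughput(100, 1))   # sibling idle  -> 100
print(throughput(100, 2))   # sibling busy  -> 50.0
```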

> >> 6.- All VCPUs are pinned:
> >>
> Dario> Right, although, if you use cpupools, and if I've understood what
> Dario> you're up to, you really should not require pinning. I mean, the
> Dario> isolation between the RT-ish domain and the rest of the world
> Dario> should already be in place thanks to cpupools.
>
> This is what I thought; however, when looking at the vcpu-list output,
> CPU affinity was "all" until I started pinning. As I wasn't sure
> whether that meant "all inside this cpu pool" or just "all", I felt it
> was safer to do it explicitly.
>
Actually, you are right, we could present this more clearly in the
output! So, I confirm that, despite the fact that you see "all", that
"all" is relative to the cpupool the domain is assigned to.

I'll try to think about how to make this more evident... A note in the
manpage and/or the various sources of documentation is the easy (but
still necessary, I agree) part, and I'll add this to my TODO list.
Actually modifying the output is trickier, as affinity and cpupools
are orthogonal by design, and that is (IMHO) the right thing.

I guess tweaking the printf()-s in `xl vcpu-list' would not be that
hard... I'll have a look and see if I can come up with a proposal.

> Dario> So, if you ask me, you're restricting too much things in
> Dario> pool-0, where dom0 and the Windows VM runs. In fact, is there a
> Dario> specific reason why you need all their vcpus to be statically
> Dario> pinned each one to only one pcpu? If not, I'd leave them a
> Dario> little bit more of freedom.
>=20
> I agree with you here, however when I don't pin CPU affinity is "all".
> Is this "all in the CPU pool"? I couldn't find that info.
>=20
Again, yes: once a domain is in a cpupool, no matter what its affinity
says, it won't ever reach a pcpu assigned to another cpupool. The
technical reason is that each cpupool is ruled by its own (copy of a)
scheduler, even if you use, e.g., credit for both/all the pools. In
that case, what you get are two full instances of credit, completely
independent of each other, each one in charge only of a very specific
subset of pcpus (as mandated by cpupools). So, different runqueues,
different data structures, different everything.
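
The per-pool scheduler instances can be sketched in a toy model (plain
Python, not Xen code; the pool layout comes from this thread, while the
class and method names are made up):

```python
# Toy model of cpupools (illustration only, not Xen code).
# Each pool is ruled by its own scheduler instance over a disjoint
# set of pcpus, so an affinity of "all" is resolved against the
# pool, never against the whole host.

class PoolScheduler:
    def __init__(self, pcpus):
        self.pcpus = set(pcpus)       # pcpus this instance is in charge of

    def candidates(self, affinity):
        if affinity == "all":         # "all" == all pcpus of this pool
            return set(self.pcpus)
        return self.pcpus & set(affinity)

pool0 = PoolScheduler({0, 1, 2})      # dom0 and the win7 guest
rtpool = PoolScheduler({3})           # the time-critical PV domain

print(sorted(rtpool.candidates("all")))   # -> [3]
print(sorted(pool0.candidates("all")))    # -> [0, 1, 2]
```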

> Dario> What I'd try is:
> Dario>  1. all dom0 and win7 vcpus free, so no pinning in pool0.
> Dario>  2. pinning as follows:
> Dario>      * all vcpus of win7 --> pcpus 1,2
> Dario>      * all vcpus of dom0 --> no pinning
> Dario>    This way, win7 could suffer sometimes, if all its 3 vcpus get
> Dario>    busy, but I think that is acceptable, at least up to a
> Dario>    certain extent; is that the case? At the same time, you are
> Dario>    making sure dom0 always has a chance to run, as pcpu#0 would
> Dario>    be its exclusive playground, in case someone, including your
> Dario>    pv499 domain, needs its services.
>
> This is what I had when I started :-). Thanks for the confirmation
> that I was doing it right. However, if hyperthreading is the issue,
> then I will only have 2 pcpus available, and I will assign them both
> to dom0 and win7.
>
Yes, with hyperthreading in mind, that is what you should do.
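
The plan above can be written out as a small sketch (plain Python,
purely illustrative; the `xl vcpu-pin` invocation in the comment is
just one way to realise it):

```python
# Sketch of the pinning plan quoted above (illustration only; domain
# names and pcpu numbers are the ones used in this thread).
pool0_pcpus = {0, 1, 2}

pinning = {
    "dom0": pool0_pcpus,   # no pinning: free to use any pool0 pcpu
    "win7": {1, 2},        # e.g. `xl vcpu-pin win7 all 1-2`
}

# The property being aimed for: pcpu 0 stays dom0's exclusive
# playground, so dom0 always has somewhere to run.
exclusive_to_dom0 = pinning["dom0"] - pinning["win7"]
print(sorted(exclusive_to_dom0))   # -> [0]
```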

Once we have confirmed that hyperthreading is the issue, we'll see what
we can do. I mean, if, in your case, it's fine to 'waste' a cpu, then
ok, but I think we need a general solution for this... Perhaps with
slightly worse performance than just leaving one core/hyperthread
completely idle, but at the same time more resource efficient.

I wonder how tweaking sched_smt_power_savings would interact with
this...

> Dario> Right. Are you familiar with tracing what happens inside Xen
> Dario> with xentrace and, perhaps, xenalyze? It takes a bit of time to
> Dario> get used to it but, once you master it, it is a good means of
> Dario> getting out really useful info!
>
> Dario> There is a blog post about that here:
> Dario> http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
> Dario> and it should have most of the info, or the links to where to
> Dario> find them.
>
> Thanks for this. If this problem is more than hyperthreading, then
> I will definitely use it. It also looks like it might be useful when I
> start looking at the jitter on the singleshot timer (which should be
> in a couple of weeks).
>
It will prove very useful for that, I'm sure! :-)

Let us know how the re-testing goes.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-R32yVRGz99RkEPtHT6pT
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL9UY4ACgkQk4XaBE3IOsR6WgCcCh5aPVng7s4eO3IKhoLrmYX4
x0gAnikeDy2TZdlfkjncpJNCipcShxQR
=0ti7
-----END PGP SIGNATURE-----

--=-R32yVRGz99RkEPtHT6pT--


--===============8656291078796531798==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8656291078796531798==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 01:13:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 01:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE7LO-0001te-TR; Fri, 14 Feb 2014 01:13:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>)
	id 1WE7LN-0001tU-Qj; Fri, 14 Feb 2014 01:13:30 +0000
Received: from [85.158.143.35:7688] by server-3.bemta-4.messagelabs.com id
	2B/1D-11539-9BD6DF25; Fri, 14 Feb 2014 01:13:29 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392340407!5571172!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31633 invoked from network); 14 Feb 2014 01:13:28 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-15.tower-21.messagelabs.com with SMTP;
	14 Feb 2014 01:13:28 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 13 Feb 2014 17:09:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,842,1384329600"; d="scan'208";a="455223371"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga001.jf.intel.com with ESMTP; 13 Feb 2014 17:13:26 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 13 Feb 2014 17:13:26 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Fri, 14 Feb 2014 09:13:24 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Thread-Topic: [Xen-devel] Xen nested wiki
Thread-Index: Ac8oaRnW+W8g7uSZQTu9nFY+Bt3JcP//9LMA//6KNmA=
Date: Fri, 14 Feb 2014 01:13:23 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
	<1392287382.32038.34.camel@Solace>
In-Reply-To: <1392287382.32038.34.camel@Solace>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>, "'Jan
	Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli wrote on 2014-02-13:
> [Adding the Xen publicity mailing list]
> 
> On gio, 2014-02-13 at 03:55 +0000, Zhang, Yang Z wrote:
>> Hi George,
>> 
>> I have updated the Latest Xen nested status in the Xen wiki page, 
>> please have a look.
>> 
> Hey Yang,
> 
> This is something really useful to have on the wiki, thanks for doing it.
> 
>> Although it is hard to say nested is good supported or product 
>> quality, it is ready to let people to know nested basically is supported by Xen.
>> 
> Right. With that in mind, I think this topic would be a great one for 
> a blog post on the Xen Project's blog! The content you have on the 
> wiki is mostly fine as the core content of the blog post too, we'd just need a couple more "colloquial"
> glue paragraph here and there, both about nested virt in general and 
> about Xen supporting it, for instance...
> 
>> Especially, for Xen on Xen case, I didn't see any issue with it for 
>> more than half
> of year. Besides, I am always using nested Xen to debug Xen booting 
> issue which doesn't need to reboot my real box. And it really helps me a lot.
>> 
> ...something like this line above... It's actually a quite good 
> example of what I meant above! :-)
> 
> So, how do you feel about this?
> 

Sure. If you can do it, that's great.

>> So, if possible, I hope we can add nested support into Xen 4.4 
>> release to let people know current status.
>> 
> Sorry for my ignorance on the subject, what is it that is missing for 
> making the above (and the content of the wiki) true for 4.4?
> 

Sorry, I didn't explain it clearly. Most of the patches needed to run nested are already in Xen upstream. What I want is to add a statement like "nested virtualization is basically supported in Xen 4.4" to the Xen 4.4 release notes, so that people know about it.
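For context, trying nested virtualization on a Xen 4.4-era host mainly means enabling nested HVM in the guest's xl configuration. A minimal sketch (the `hap` and `nestedhvm` options are the Xen ones; the name, sizes, and disk path below are purely illustrative):

```
# Sketch of an HVM guest config for running an L1 hypervisor
# (Xen on Xen, or KVM on Xen). Values are illustrative.
builder   = "hvm"
name      = "nested-l1"        # hypothetical guest name
memory    = 4096
vcpus     = 4
hap       = 1                  # Hardware Assisted Paging
nestedhvm = 1                  # expose virtualization extensions to the guest
disk      = [ "phy:/dev/vg0/nested-l1,xvda,w" ]   # illustrative disk path
```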

> Anyway, I don't think that, whatever the answer is, it will be less 
> worth to have a blog post about nested virt... At most it will affect 
> when we want it. If it's going to be a 4.4 feature, I think the sooner 
> the better. If not, we can wait a little bit.
> 
> Let me know your thoughts on the idea.
> 
> Regards,
> Dario
>


Best regards,
Yang

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 01:21:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 01:21:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE7Sm-0002E4-7t; Fri, 14 Feb 2014 01:21:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WE7Sk-0002Dz-5t
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 01:21:06 +0000
Received: from [85.158.139.211:29920] by server-17.bemta-5.messagelabs.com id
	81/74-31975-18F6DF25; Fri, 14 Feb 2014 01:21:05 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392340863!3822033!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1503 invoked from network); 14 Feb 2014 01:21:04 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-13.tower-206.messagelabs.com with SMTP;
	14 Feb 2014 01:21:04 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 13 Feb 2014 17:21:02 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,842,1384329600"; d="scan'208";a="455226944"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga001.jf.intel.com with ESMTP; 13 Feb 2014 17:20:54 -0800
Received: from fmsmsx112.amr.corp.intel.com (10.18.116.6) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 13 Feb 2014 17:20:46 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX112.amr.corp.intel.com (10.18.116.6) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 13 Feb 2014 17:20:47 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Fri, 14 Feb 2014 09:20:45 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
Thread-Topic: [Xen-devel] Xen nested wiki
Thread-Index: Ac8oaRnW+W8g7uSZQTu9nFY+Bt3JcP//9pWA//6EzCA=
Date: Fri, 14 Feb 2014 01:20:44 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9E9BF1@SHSMSX104.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
	<20140213103627.GF2924@reaktio.net>
In-Reply-To: <20140213103627.GF2924@reaktio.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "'Jan
	Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pasi Kärkkäinen wrote on 2014-02-13:
> On Thu, Feb 13, 2014 at 03:55:31AM +0000, Zhang, Yang Z wrote:
>> Hi George,
>>
>
> Hello,
>
>> I have updated the Latest Xen nested status in the Xen wiki page,
>> please have a look. Although it is hard to say nested is good supported
>> or product quality, it is
> ready to let people to know nested basically is supported by Xen.
> Especially, for Xen on Xen case, I didn't see any issue with it for more than half of year.
> Besides, I am always using nested Xen to debug Xen booting issue which
> doesn't need to reboot my real box. And it really helps me a lot. So,
> if possible, I hope we can add nested support into Xen 4.4 release to
> let people know current status.
>>
>> Xen nested wiki:
>> http://wiki.xenproject.org/wiki/Xen_nested
>>
>
> Thanks a lot for writing that! Good stuff.
>
> How about the bugfix patches mentioned on the wiki page:
> http://www.gossamer-threads.com/lists/xen/devel/316994
> http://www.gossamer-threads.com/lists/xen/devel/316993
>
> It looks like the patch 2/2 needs some more work, is it likely to be
> fixed/merged before Xen 4.4 final?

I will try my best to do it. But right now I am working on pushing GFX pass-through to QEMU upstream in time for Xen 4.4, which has a higher priority for me and may take up a lot of my time, so I am not sure whether I will have time to rework the patch before the Xen 4.4 release.
BTW: if you only want to boot Xen on Xen or KVM on Xen, those patches are not required.


> (is it suitable to be merged as a bugfix?)

Yes. It is actually a bugfix.

>
> Thanks,
>
> -- Pasi
>
>
>> best regards
>> yang
>>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 01:36:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 01:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE7hD-0002dC-Pg; Fri, 14 Feb 2014 01:36:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WE7h9-0002cq-7n; Fri, 14 Feb 2014 01:35:59 +0000
Received: from [85.158.137.68:6440] by server-13.bemta-3.messagelabs.com id
	28/0B-26923-EF27DF25; Fri, 14 Feb 2014 01:35:58 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392341755!518393!1
X-Originating-IP: [209.85.220.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8070 invoked from network); 14 Feb 2014 01:35:57 -0000
Received: from mail-pa0-f46.google.com (HELO mail-pa0-f46.google.com)
	(209.85.220.46)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 01:35:57 -0000
Received: by mail-pa0-f46.google.com with SMTP id rd3so11521802pab.19
	for <multiple recipients>; Thu, 13 Feb 2014 17:35:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=s7tKFqavVPB1HV7+b4u2snGpZHWFX57OK5rpk3WFAaU=;
	b=c8gDqDnmdm96tLE1xu33LtEQ1jEEEOLegH8XjuCV/HQiQ+QmYgCWFPq0YUvt6fo9/h
	fTzcTNfjqn8KjXu4NZ9lFQoZl5H1+AW6CftHRNcwUGOLSoPk3GqDa9nCkQHdKaatfDRu
	6KLdqf8YeoH6ahL9EqFMLxjCUctG1gwu+8EpnWjra2qk7UxVvtxhUAW2sXsvVB/YqpV2
	QCAdPNqUgp7Ppp9PRWK5jdEJsG9f67uG9f1mTNqu82zcR5V+qIGI1JLy+I4I0KBDM8Bo
	VFoPFRkMLM2cxaqhwzpQtgloA530N2+YnsfsFfTy4d3VWAd8GjVt57nDVPhmSVruv3w/
	opVA==
X-Received: by 10.68.183.228 with SMTP id ep4mr5396873pbc.67.1392341755249;
	Thu, 13 Feb 2014 17:35:55 -0800 (PST)
Received: from [172.16.25.10] ([128.177.190.114])
	by mx.google.com with ESMTPSA id
	si6sm27460169pab.19.2014.02.13.17.35.53 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 13 Feb 2014 17:35:54 -0800 (PST)
Message-ID: <52FD72F8.107@xen.org>
Date: Fri, 14 Feb 2014 01:35:52 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: xen-devel@lists.xen.org, "xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
References: <52DCE9FA.6010400@xen.org>
	<52E7B6AF.3050604@xen.org>	<1391609348.6497.178.camel@kazak.uk.xensource.com>	<52F24D4D.7040004@citrix.com>	<1391611558.23098.2.camel@kazak.uk.xensource.com>	<52F24F5D.7020207@citrix.com>
	<1391611929.23098.6.camel@kazak.uk.xensource.com>
In-Reply-To: <1391611929.23098.6.camel@kazak.uk.xensource.com>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

I created http://wiki.xen.org/wiki/GSoc_2014 based on the project list.

Unless you guys step up, I will move all projects that have no
* level of difficulty
* skills needed
* outcomes
into http://wiki.xen.org/wiki/GSoc_2014#List_of_projects_that_need_more_work

I made a start on this and will do more over the weekend and/or on Monday. I 
will also sort the list so that easy projects come first.

I am about to submit the application in the next two hours.

Regards

On 05/02/2014 14:52, Ian Campbell wrote:
> (trimming cc, most people are presumably not interested)
>
> On Wed, 2014-02-05 at 14:49 +0000, Andrew Cooper wrote:
>> On 05/02/14 14:45, Ian Campbell wrote:
>>> On Wed, 2014-02-05 at 14:40 +0000, Andrew Cooper wrote:
>>>> On 05/02/14 14:09, Ian Campbell wrote:
>>>>> Andy:
>>>>>
>>>>>        * IOMMU control for SWIOTLB, to avoid dom0 copy of all >4K DMA
>>>>>          allocations
>>>>>
>>>>>          Sounds too hard for a GSoC to me. Would need fleshing out in any
>>>>>          case.
>>>> Malcolm made a prototype for this on the first day of the Hackathon.  It
>>>> can disappear.
>>> Removed.
>>>
>>>>>        * CPU/RAM/PCI diagram tool
>>>>>
>>>>>          Does this not already exist somewhere?
>>>> Not as far as I (or my ability to google) am aware.
>>>>
>>>> My furrowing into hwloc interacting with Xen and libxc is a start to all
>>>> of this, but it is still very much in my copious free time and there is
>>>> more than enough other work which could be done if someone were interested.
>>> OK, left in place.
>>>
>>> This could conceivably be done under another umbrella such as the Linux
>>> one too, since it seems generic.
>>>
>>> Ian.
>>>
>>>
>> For native Linux, hwloc kinda does this already - certainly the
>> CPU and PCI bits.
> That's what I meant by "does this not already exist somewhere". So it
> sounds like extending hwloc is the right answer, the blurb should
> reflect this and list the specific things which it is lacking.
>
> Can you update the description please?
>
>>    Under Xen there are quite a few areas needing
>> improvement, which will require active development work.
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 01:49:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 01:49:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE7uE-0003Ei-Ip; Fri, 14 Feb 2014 01:49:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WE7uD-0003C2-Bl
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 01:49:29 +0000
Received: from [85.158.139.211:2457] by server-1.bemta-5.messagelabs.com id
	66/62-12859-8267DF25; Fri, 14 Feb 2014 01:49:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392342566!3782690!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3316 invoked from network); 14 Feb 2014 01:49:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 01:49:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,842,1384300800"; d="scan'208";a="100661686"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 01:49:25 +0000
Received: from AMSPEX01CL01.citrite.net (10.69.46.32) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server
	(TLS) id 14.2.342.4; Thu, 13 Feb 2014 20:49:25 -0500
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL01.citrite.net
	(10.69.46.32) with Microsoft SMTP Server id 14.2.342.4; Fri, 14 Feb 2014
	02:49:23 +0100
Message-ID: <52FD7624.90202@citrix.com>
Date: Fri, 14 Feb 2014 01:49:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: John Baldwin <jhb@freebsd.org>, Roger Pau Monne <roger.pau@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>	<1387884062-41154-10-git-send-email-roger.pau@citrix.com>
	<1980951.95r2q2cca3@ralph.baldwin.cx>
In-Reply-To: <1980951.95r2q2cca3@ralph.baldwin.cx>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: MIA1
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 09/13] xen: change quality of the MADT
 ACPI enumerator
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/02/2014 21:42, John Baldwin wrote:
> On Tuesday, December 24, 2013 12:20:58 PM Roger Pau Monne wrote:
>> Lower the quality of the MADT ACPI enumerator, so on Xen Dom0 we can
>> force the usage of the Xen mptable enumerator even when ACPI is
>> detected.
> Hmm, so I think one question is why does the existing MADT parser
> not work with the MADT table provided by Xen?  This may very well
> be correct, but if it's only a small change to make the existing
> MADT parser work with Xen's MADT table, that route might be
> preferable.
>

For dom0, the MADT seen is the system MADT, which bears no relation to
dom0's topology.  For PV domU, no MADT will be found.  For HVM domU, the
MADT seen ought to represent (virtual) reality.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 02:09:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 02:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE8Db-00048w-Dq; Fri, 14 Feb 2014 02:09:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WE8DZ-00048r-LL
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 02:09:30 +0000
Received: from [193.109.254.147:29691] by server-5.bemta-14.messagelabs.com id
	5F/13-16688-9DA7DF25; Fri, 14 Feb 2014 02:09:29 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392343767!4234606!1
X-Originating-IP: [74.125.82.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8616 invoked from network); 14 Feb 2014 02:09:28 -0000
Received: from mail-we0-f179.google.com (HELO mail-we0-f179.google.com)
	(74.125.82.179)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 02:09:28 -0000
Received: by mail-we0-f179.google.com with SMTP id q58so8140235wes.24
	for <xen-devel@lists.xensource.com>;
	Thu, 13 Feb 2014 18:09:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=3rpOm/8aIn7vfkupyyyyNfXSF/0QUv/kOVBEA5CTdx4=;
	b=MH5mF8JUI+AInrQXJCuVojZ1RwxJmV2VRaYbCHKZnTtu19Q5wVjoZuX9ohcdPKwYDa
	Z54E8ALdgWaD9dtxSln8bYTgKKdUXXd7VdFjzEi//pP0KpzWIxwgoosWbzX1oAz4M7b8
	iMoX6rhFq87fZr41tM9GsGiLsTIa1igcWxVJyxK+Mxmos3uqdvn7M6ectkRotcgbzAZz
	1iOFl0Twv+GpBoUmx4442m6V0Z5u7ry9aZw3dDMyI1MDKg0IBuKw6BaBBl754lCFxETo
	jilHo5qhNTmzA0gQ4GfXH2FBRkm6ODVrGgc4gvjRBR2aI9PT9257hrT+I+JQF5/gAnNU
	jvTA==
X-Received: by 10.194.161.136 with SMTP id xs8mr3792855wjb.56.1392343767141;
	Thu, 13 Feb 2014 18:09:27 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Thu, 13 Feb 2014 18:09:06 -0800 (PST)
In-Reply-To: <52FC9A24.2020703@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Fri, 14 Feb 2014 02:09:06 +0000
Message-ID: <CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Content-Type: multipart/mixed; boundary=089e013cc30aa5701904f2544bed
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--089e013cc30aa5701904f2544bed
Content-Type: text/plain; charset=ISO-8859-1

After compiling with the patch and rebuilding/installing the module, I
rebooted; I now get a panic when drbd starts.

That was all I could capture from the Java Supermicro KVM console!

--089e013cc30aa5701904f2544bed
Content-Type: image/jpeg; name="panic_drbd.jpg"
Content-Disposition: attachment; filename="panic_drbd.jpg"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hrmth9y10

[base64-encoded JPEG attachment data omitted]
WkrlSt1EZUCnkAOyc++UP4YrSvdIt4BqFtE8v2zS/wDj6ZyPLl/eLG2wYyuGYAZzuGT8pG0gGLRW
h/ZFx5u3fF5f2T7X52T5ezbnG7HXd+7/AN/5c1a0SDR724gtbu1vt53NPPFdoqpGoLMwQxknagJx
nJxx1xQBi0VYsrOS+ulgjKqdrOzueERVLMxxzgKCeATxwCeKuxWmnt9uvyLl9NgnWKKIOqTPv3lN
zYKrhUJJAPOABzkAGVRWlcaPJHdXMMUqyCK2S6TIw8kbKrDC8/MFfcwyQArHJAzT7PSPM1CzguXx
9ogacRRnEhwGKR8jhn2rt4PEikA5xQBlUVsa1pcdhFbSC1vLGSVnVrS8bdIAu3En3V+VtxA+Xqh5
PQY9ABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8HY
rHoh7VlVq6Mlt5Op3NzaRXX2a0EkccrOF3GaNMnYyno570AZVFFFABViyvZ9PulubZlWQKy/OiuC
GUqwKsCCCCRyO9V6ltraa7uFggTdI2eMgAADJJJ4AABJJ4ABJoA0NY1g6rbabG0cSPbQNG/l28cQ
LGR242AcYK8eu445JNe8vI57DTraIMot4mEoIwGkaRiWHqdvljJ5+UDoBVjUrGy09tMeKaW6hng8
2Vl/d7iJXQhMgkDCcEjPcgfdD7y30ttEF7a295byNc+VGJrlZRIAuX6Rrgruj69d/HQ4AKM/9n/2
fafZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x0NY9a
QsLY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2ytKMuWB
TJfb91sYzznAq6Fc6faXM019NcpmCWFBBAsmfMjdCTl1xjcD3z7VoaXoFveWdpI1tfTJcZ869hcC
Cy+cr+8Gw52gBzll+Vh06nM0ewtr+S5W4uGjaO2mlijRcl2SJ3GT0CjZz35AA5JUAisJrO01u2nl
RriyhuVd1eMZkjDAkFckZI7ZI96ZBJb3F3I+pzXJ83LNPGBI4cnO4hiN2eR94dc5OMG9oulx38Vz
IbW8vpImRVtLNtshDbsyfdb5V2gH5erjkdDDe2Fta601q9w0MAVXcuu94iUDNGQMZdSSnO0bhzt5
wAWxq1nDe2aRCd7S3s5rMysgV2Evm5fZuIyvmnA3c7eozxXu57KS0stNtJ5TDHPJK1zcxeXgyBFI
KqXOAIwcgknJ445Zf6U0XiO90uzDSCG5liQuwB2ox+ZjwAABkk4AAJ4FP1KytNNbTJbeT7ZHNB50
nmAqjsJXQgAYYKdnsT1+XOAAM16aCfUwbadZ40treLzEVgGKQojYDAHGVPUCs2tLXoYINTAtoFgj
e2t5fLRmIUvCjtgsScZY9SazaACtXRntvJ1O2ubuK1+02gjjklVyu4TRvg7FY9EPasqtXRktvJ1O
5ubSK6+zWgkjjlZwu4zRpk7GU9HPegCpZXv2bfFLH51pLjzoScbsdGU/wsMnDe5BBBIM2n3dvYaj
PIDLJCYLiFDsAY74nRSRkgcsCeTjnrVezs5LyYohVERd8srnCRp3Zj6cgcZJJAAJIB0tC0mLU9Um
VGiktoI5ZALiZIDJtR2QEFs4JUbtp+UE/MOtAFLR7yPT9bsL2UM0dvcxyuEGSQrAnHvxVKtjStKG
reJorBvIhje5CyLHcIAqFwCI2ZjvPPGCxPvVe5tHn1draOKxt3OMJFdqYV+XPEjOR+bdePagBmsX
keoa3f3sQZY7i5klQOMEBmJGffmjVbyO+u45YgwVbaCIhhzlIkQ/hlTj2q74h0xbLxDJptvHbRpH
IYY2W5Vt4DlQ0jFiEY45Hygego8Q6YulGzt1jthmBJGljuVleRmjRm3BWIABYhSAMjnLdaAKV9eR
3NppsSBg1rbGJyw4JMsj8e2HH45olvI30S1sgG8yG5mlYkcEOsQGPf5D+laGr6QNL0XT2Mdm8lwp
kkmS7SRwd8i7VCuQUwgO7B+bI3dqJNIFr4US/aOzkkuJWXebtC8SARldiK/LHewYEEgAcL1IBnxX
kaaJdWRDeZNcwyqQOAEWUHPv84/WixvI7a01KJwxa6thEhUcAiWN+fbCH8cVoWGkBvDl7qjx2crK
yxxrLdopQFZCzbQ4beCg2qeoJ+Vux4f0gXsF7evHZzLbRbkhuLtIlZ96L8w3qwXDkg5A3ADJ6EAq
afdWf9n3VhevPFHNLFMJYYhIQUDjbtLLwfMJznjb0OeC8urPUNWV5XnhtFijhDrEHciONUDbdwGT
tBI3cZ6nHM2i6XHfxXMhtby+kiZFW0s22yENuzJ91vlXaAfl6uOR0MN7YW1rrTWr3DQwBVdy673i
JQM0ZAxl1JKc7RuHO3nABbvtWsx4outUsxPNBdtOZI5kETKJQ6soIZhkK/DHv/Dxg0r+7t2tILCz
Mr20EjyiWZAju7hQflBIUAIoAye5zzgS3djZWXiW+sZ5pVs7WeZA3V3CFtq5AwCxAXdjAznGBipd
V0f7P9jEFjfWtzcSNGLG6O+Y427XGFUkMWKgbeqHk9AAUry8jnsNOtogyi3iYSgjAaRpGJYep2+W
Mnn5QOgFXb3V7ecahcxJL9s1T/j6VwPLi/eLI2w5y2WUEZxtGR8xO4RajpdvZaTZ3Edz508k80M2
zBjUosZAUj733zluhI4yAGaxq+kDS9F09jHZvJcKZJJku0kcHfIu1QrkFMIDuwfmyN3agDPlvI30
S1sgG8yG5mlYkcEOsQGPf5D+lbug+JbfSxprNdanbpaSAzWtmQsd1+8Lb3O4c4IXBU5CAbhn5aVv
p2nXVnMIUvGaC282W+3gQI+wuIyhTIJYGMHfyeRn7tRaVo8d1HJLeStCrW08tuij5pTHG7Z56ICm
Ce54HRioBU0q8jsL/wC0yBiVilEZQcpIY2CMPQqxU56jGRzTLKe3tt88kXnTrjyUdQYwe7MD97HG
Fxgk88Aq1jSo7G5mitJbG8ubueURxeTdpCCWwAMNG3Oe+QOaGsLa61C9t9NuGk2ysLNHXm4TJwAe
PnIwQuBu5A5wrAE2gXEX/CTWl/qF+sSw3KXMsswdzIQ4JHyqxLHk88e9Y9avhtLafxDY2t3aRXMN
zPHAyyM67QzgFgVYHOM9cjnpWVQAVau7lZ7awjV5WMEBjYOFAUmR2wuOSMMDzzkntiorWOGW7hju
J/IheRVkl2FvLUnlsDk4HOK0Lu20+TSTfWcNzbbZ1hCXE6y+blSSVIRcbcLnr/rF6dwDKrqrfxLb
w2Rj+1amkbWL2v2CMhbdXMJTzPvfNub5yNo5YnJx83K0UAXb68jubTTYkDBrW2MTlhwSZZH49sOP
xzVi2udPn0mKxvprmDyJ5JkeCBZd+9UBBBdcY8seud3bHL7fw9NcQWTi8s0kvlzawM7b5W3sm3AU
hSWXgsQpz14bFSzsIrmIyzajZ2a7tqiYuxYjrwisQORycA9s4OAB/wBtt10m9s445V867imjDENt
RFlGCeMn94vQc4PSpbl7bV9RVjdxWcaWkEe+5VyC0cSIQNiseSpI46enSmJoN+800AjX7RDeR2LQ
lxkyvvAAPTGUIzn0qrcWcltBaSuVK3URlQKeQA7Jz75Q/higCW4s7W2mhH9pwXMbt+8a1jkJjHHO
JFTJ64Ge3JFaV1q2nw6/pmqWLXM/2X7PvjniWLPkqijBDN97YT04z3rIsrOS+ulgjKqdrOzueERV
LMxxzgKCeATxwCeKtLo5m1Cys7W/s7k3cqwpJGzgK5IGGDKGA+Yc7cHnGSCAAPubnT4NJlsbGa5n
8+eOZ3ngWLZsVwAAHbOfMPpjb3zw/WLrS7+5vNQie8N3dytKYWiVUiLNuPz7iXA5A+Vc5zxjBr3e
lNbWhuUu7a5RJFimEDMfKcgkKSQA2drcoWHy9eRkuLRI9BsbpREXmnmVnV23DaI8KykADG7IIJzv
5xigDat/EtvDZGP7VqaRtYva/YIyFt1cwlPM+9825vnI2jlicnHzZukXGlwWGoRXtxeRyXUQhAht
lkCgSRvuyZF5+QjGO+c9qJPD00cYzeWbXDWwultldi7RmMSE/d2ghckhiD8vAOVzj0AXbOxt7mIv
LqtnaMGxsmWUkj1+RGGPxzxVvSZLDSvEdtc3N400FpLHMslpCWEhVlbbhyhA6jOOo6HrWe9nImnw
3pK+XNLJEoB5BQITn2+cfrWh/wAI9Mk2orPeWdumn3Itp5ZHbG47wCoClmGUPQZ5BxgEgAhiurOz
j1WCB55o7m2WKJ3iCHIljc7lDHA+RhwT2/CLSryOxv8AzZQxjeKWFygyVEkbIWA7kbs4yM4xkdaZ
cWn9n6gIboebGNjkxPt8yNgGBUkcZUgjIyM8jtVu70jb4lvtLtX/AHdvPMgklP3Y4yxLNgc4VSTg
ZOOB2oAsXx059E0y0s7ptq3k5kkuQFIDLD85RdxVeCOrE7SR6Clc6dawW7SR6zY3DjGIoknDNz23
Rge/Jp0mizGazSzngvVu5fIheEsoaQbcp84Ug/OnOMfN14OIrywitohLDqNneLu2sIS6lSenDqpI
4PIyB3xkZADWLyPUNbv72IMsdxcySoHGCAzEjPvzWxceIvtGnCP+1dXixaJbfYI22wHbGI87t/Q4
3EbOclc/xVn63pH9l314gfEMd3JDAsh/eSIjMu/gYwCuM8ZOQM4bGVQBu6VqdjaWQiuZrx49xaWx
aFJYZj6hiwMTFcLuVSw5IPO0UtLu7e3F5BdGVYbuAQtJEgdkxIjghSQDygHUdc9sEsLRJ9O1SdhE
z28CMqs7Ky5lRSy4GDjO0gkffyM4p1ro5n09L+a/s7S3eV4VaZnJLqFJG1FY4w45xgY5xkZADT7q
z/s+6sL154o5pYphLDEJCCgcbdpZeD5hOc8behzxYuNXt5H1IKkuyexhs4SQMnyzD8zDPGRETgZw
TjJ61Xj0WYTXiXk8FktpL5EzzFmCyHdhPkDEn5H5xj5evIzN/wAI7cre3NrJc2cZtraO6lkabKBH
2dGAO4jzB0znB27jgEAr6LqP9l6ol1ulTEckfmRHDx70ZN68jld2QMjOOo61Y1XVpLia1eLVtVvJ
IGLpPdvtMZ4xsXc2CMZ3bueOBjJhOizfbYII54JY5omnS4UsEMa7t74IDYXY+RtydvAORkk0WYzW
aWc8F6t3L5ELwllDSDblPnCkH505xj5uvBwATaj4l1TU7C2tJ768eOOLy5Ve4ZhMfMZwzA9xlRzn
7o/B17q9vONQuYkl+2ap/wAfSuB5cX7xZG2HOWyygjONoyPmJ3CleWEVtEJYdRs7xd21hCXUqT04
dVJHB5GQO+MjNvV9ItdPsNPng1KC5e4iLsiCQZ/eSLuXci/L8gHJznPGMGgB39r2/wDZ39kbJf7M
8vztuB5n2vy8eZnPTd8uOmznG7mqVheR2kF+GDedPbeVC6jlCXQtz2BQOpx1DY6E1Y/sKXyf+Pu2
+1+R9o+x/P5nl7PMznbs+582N2ccdeKfb+HpriCycXlmkl8ubWBnbfK29k24CkKSy8FiFOevDYAK
kV5Hb6VLBCGFxcsUndhx5QKMqr7lgSeP4VwR8wL7C7t1tJ7C8MqW08iSmWFA7o6BgPlJAYEOwIyO
xzxglppYuLQXU99bWcLSNHG04kO9lALABFYjAZeuPvcZ5wyLSrmTW00hgsd21yLUh24V923kjPAP
pmgC2usRjVX1LymSeCKNbFc7gjxhERnPGSFUnpgsBkbcinLq9u2rrqEqS+bcwTJelQP9ZIro0iDP
PDBsZALbgNoxilp1nHcm5mnLC3tIvOlVDh3G9UCqTwCWdeT0GTg4wbEmkedd2qWb/u72Bp7dJT8/
BceXwPmYtGVXA+bK8DOAAM1C6s/7PtbCyeeWOGWWYyzRCMkuEG3aGbgeWDnPO7oMc5tXYtKuZ1sf
KCtJfSmKCLdhmOQA3PG0sSoOcZVvSn3eli3tDdQX1teQrIscjQCQbGYEqCHVSchW6Z+7zjjIBn0V
dis410qW9uCwDsYrUKfvyKUL5/2Qrexyy4yA2Lv9kW+37Hvl/tD7J9t35HlbPK83ZjGc7Od2fvfL
tx89AGLRWhBpFxcvYCN4tl5uxISdkW0kNvOPl2gBz6KQe9XfD+kC9gvb147OZbaLckNxdpErPvRf
mG9WC4ckHIG4AZPQgGFRRRQAUUUUAFFFFABRRRQAUUUUAFaujPbeTqdtc3cVr9ptBHHJKrldwmjf
B2Kx6Ie1ZVaujJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9AGVRRRQAVYsr2fT7pbm2ZVkCsvzo
rghlKsCrAgggkcjvVepba2mu7hYIE3SNnjIAAAySSeAAASSeAASaANDWNYOq22mxtHEj20DRv5dv
HECxkduNgHGCvHruOOSTXvLyOew062iDKLeJhKCMBpGkYlh6nb5YyeflA6AVY1KxstPbTHimluoZ
4PNlZf3e4iV0ITIJAwnBIz3IH3Q+8t9LbRBe2tveW8jXPlRia5WUSALl+ka4K7o+vXfx0OACjP8A
2f8A2fafZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x
0NY9aQsLY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2yt
KMuWBTJfb91sYzznAq6Fc6faXM019NcpmCWFBBAsmfMjdCTl1xjcD3z7VoaXoFveWdpI1tfTJcZ8
69hcCCy+cr+8Gw52gBzll+Vh06nM0ewtr+S5W4uGjaO2mlijRcl2SJ3GT0CjZz35AA5JUAfaXOnr
aXun3E1yltLPHNHPHArP8gdQChcAZEmfvHGMc5yGXl1Z6hqyvK88NosUcIdYg7kRxqgbbuAydoJG
7jPU45m0XS47+K5kNreX0kTIq2lm22Qht2ZPut8q7QD8vVxyOhZdW2n6bq8sFzDczwiNT5aTrHJE
7KrFGYowJUkqflHI7dKAJdS1eJfEtzq2kyy/6RJLIRc26fL5hYMhUllYbWxk9cnim6nqo1mPSoJP
IgNvEYpJBbpGgLSu2cRrnaFK8Y67sDJJN2LRtPudcsrSG3vsS2L3MlqJleXf5byIquEwdyiM/dON
2OorP1S3h067t1Oj31o4/ePDqEpPmLnjgIhA4IJB+hGKAGa9NBPqYNtOs8aW1vF5iKwDFIURsBgD
jKnqBWbWlr0MEGpgW0CwRvbW8vlozEKXhR2wWJOMsepNZtABWroz23k6nbXN3Fa/abQRxySq5XcJ
o3wdiseiHtWVWroyW3k6nc3NpFdfZrQSRxys4XcZo0ydjKejnvQBUsr37Nvilj860lx50JON2OjK
f4WGThvcgggkGbT7u3sNRnkBlkhMFxCh2AMd8TopIyQOWBPJxz1qvZ2cl5MUQqiIu+WVzhI07sx9
OQOMkkgAEkA6WhaTFqeqTKjRSW0EcsgFxMkBk2o7ICC2cEqN20/KCfmHWgClo95Hp+t2F7KGaO3u
Y5XCDJIVgTj34qlWxpWlDVvE0Vg3kQxvchZFjuEAVC4BEbMx3nnjBYn3qvc2jz6u1tHFY27nGEiu
1MK/LniRnI/NuvHtQAzWLyPUNbv72IMsdxcySoHGCAzEjPvzRqt5HfXccsQYKttBEQw5ykSIfwyp
x7Vd8Q6Ytl4hk023jto0jkMMbLcq28ByoaRixCMccj5QPQUeIdMXSjZ26x2wzAkjSx3KyvIzRozb
grEAAsQpAGRzlutAFK+vI7m002JAwa1tjE5YcEmWR+PbDj8c0S3kb6Ja2QDeZDczSsSOCHWIDHv8
h/StDV9IGl6Lp7GOzeS4UySTJdpI4O+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySXErLvN2heJAIy
uxFfljvYMCCQAOF6kAz4ryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/Pt
hD+OK0LDSA3hy91R47OVlZY41lu0UoCshZtocNvBQbVPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+
9F+Yb1YLhyQcgbgBk9CAVNPurP8As+6sL154o5pYphLDEJCCgcbdpZeD5hOc8behzwXl1Z6hqyvK
88NosUcIdYg7kRxqgbbuAydoJG7jPU45r2Vml1vaW9trSNMDfOWOSegCorMeh5xgdyMjNuPSPKu7
oXb7oLSBblzCeZY2KBNhI43eYhyRkAkkEjaQCXUr3SrvxLc3wFzLZ3cksjrJGEeJnLYIAchtpIbk
jdjBwOabNqcNnFp8WlTTs1ncvdJPNCqEO3lgDZuYYHlg5J5zjHHLJLGyhu7WWWaVLC5ga4QHmTAL
r5ZIGMl0KhsYwQxA5UWNR0i2tZrDz47zSlnlMc0N4PMkiQbf3uAqEqdzADHWM8noACLUNcfUNEtb
KSKBZIrmWVjFaxRDDKgXGwDn5Wz6/L1wMVL68jubTTYkDBrW2MTlhwSZZH49sOPxzV3UtMt4tJjv
o7K+sd8iCNLyQP8AaEZWO9DsThcLnGfvr07177RzYLKr39m9zA22e2VnDxkHBGWUKxB4O0t6jIBI
ALF5Po9zYwJHdX0JhgG22Fohj83aN7b/ADMncwzuK5AwMYUCn6R4luNPCRSrBJBFbTwxbrSJ3Uuj
4G5lzt3vkjOMEjB6UW+naddWcwhS8ZoLbzZb7eBAj7C4jKFMglgYwd/J5Gfu07+wUurHR2t7i2hu
byAhYpHYvcS+dIgAABC8BBltqn14bABm6VeR2F/9pkDErFKIyg5SQxsEYehVipz1GMjmmWU9vbb5
5IvOnXHko6gxg92YH72OMLjBJ54BVrGj2FtfyXK3Fw0bR200sUaLkuyRO4yegUbOe/IAHJK17Ky+
3b4opP8AS+PJhI/13qqn+90wv8XIBzgMAXtAuIv+EmtL/UL9YlhuUuZZZg7mQhwSPlViWPJ5496x
61fDaW0/iGxtbu0iuYbmeOBlkZ12hnALAqwOcZ65HPSsqgAq1d3Kz21hGrysYIDGwcKApMjthcck
YYHnnJPbFRWscMt3DHcT+RC8irJLsLeWpPLYHJwOcVoXdtp8mkm+s4bm22zrCEuJ1l83KkkqQi42
4XPX/WL07gGVWlFplpJEjtrunxsyglHS4yp9DiIjI9iRWbWlJo5htg8t/ZpcmITC1ZnD7CocHdt2
ZKkMBuzzj73FAEV9eR3NppsSBg1rbGJyw4JMsj8e2HH45rQ0nWY7DTzALvULKRZWlZrFtpuQQoCO
24bQu04OHx5jcerv7BS6sdHa3uLaG5vICFikdi9xL50iAAAELwEGW2qfXhsZtnYRXMRlm1Gzs13b
VExdixHXhFYgcjk4B7ZwcAG1D4js01vUL1o5/Lk1NNShAUEsUaQrG3Pyg+ZywzjHQ5qhcPZajJaW
kV6tvb2dsYkuLuNlMhMjP92MPtP7wjqRhc5GcUyLQbg/bjdXFtZrYzrb3Bnc/K538AKGLcoR8ueu
egJDBos322eCSeCKOGJZ3uGLFBG23Y+AC2G3pgbcjdyBg4AAQ2unXttKNUW4XcSZLESK8JH3WHmI
uSDzgHnBGVyDWrbanYzeJtDmEzO0N5G897cQpAXXep+cKxBIwzGQncd3P3RWbNojJqEFlHcxSSPA
Z5HwwSNMM+7OMkeUFk4GfmxjcMVXvtONnFFPHdQXVvKzIs0O8DeuCy4dVOQGU9Mc9euACxc3enx6
TLZ2JuW+0TxzuJ0UeVsVwFBBO/8A1h+bC/d6c8PuLjS28OW1nFcXhu4ZXmKtbKEJdYwV3eYTgeWc
HbznoKx60r7RzYLKr39m9zA22e2VnDxkHBGWUKxB4O0t6jIBIALH9r2/9r/a9kvl/wBm/ZMYGd/2
Tyc9em7n6du1VbbTrWe3WSTWbG3c5zFKk5Zee+2Mj34NOk0cw2weW/s0uTEJhaszh9hUODu27MlS
GA3Z5x97iprfSLWbw5c6i+pQJPFKiCEiTPKyHacIRuOwYwcYzkjigAjuNOOnrYXks7La3Mssb2qA
i4DBBjLEFB+7GG2sfm5XjBdqer297/bXlpKPt2pLdx7gOEHncHnr+8XpnoeapWdhFcxGWbUbOzXd
tUTF2LEdeEViByOTgHtnBxd0zQkuNUvLHULuKzltY59yNuJLxo5OCqsMArz6jpk0AV7y6s9Q1OGS
V54bcW0ETskQdwUhVDhdwBBZfUcH8Ku3WraefFV3qVu1y9tefaPMEkSq8fnK6nADENtD56jOMcda
zbXTjdXNwi3UCwW6l5Llt+wJuChsBd2CzKB8ueeQOcMvrFrF4v30U8U0fmRTRbtsi5KkgMAwwysO
QOnpgkA049Ws9Om0kWQnuY7C8N4WmQQlyfL+TAZsD90Oc/xdOOc+8TS44gLKe8nkLctNCsQUemAz
bieOcjGOhzw9dIuG1e00wPF5115GxsnaPNVWXPGeA4zx69addWdnJp73tgZxHBKkEonIy5YMVdcd
AdjZU528fM2eACxrmr2+tX1/dSJKJWnd7aUgbjGW4jk57A8EZIxt5G3ZXi0y0kiR213T42ZQSjpc
ZU+hxERkexIps+kXFs9+JHi2We3MgJ2S7iAuw4+bcCXHqoJ7Vrx+G45rAvFZ6g6izNydSU5tsiIy
FMbOoIMZ+f7wzj+GgChpFxpcFhqEV7cXkcl1EIQIbZZAoEkb7smRefkIxjvnPaqkt5G+iWtkA3mQ
3M0rEjgh1iAx7/If0rS0fSbLUYoY/JvpnbH2m7iO2GxBYqDIChyAF3k7lGDjjBNZFlZyX10sEZVT
tZ2dzwiKpZmOOcBQTwCeOATxQBa0+6s/7PurC9eeKOaWKYSwxCQgoHG3aWXg+YTnPG3oc8aGqalZ
rdagsG5o7rTLSCLDh9hVbdiGYY5HlsDgdew7UorTT2+3X5Fy+mwTrFFEHVJn37ym5sFVwqEkgHnA
A5yGXGjyR3VzDFKsgitkukyMPJGyqwwvPzBX3MMkAKxyQM0AN0XUf7L1RLrdKmI5I/MiOHj3oyb1
5HK7sgZGcdR1qxqupR3s1qJdQ1XUo4mJdrt9hwcZVBl9p4+9k5yPl+XllnpHmahZwXL4+0QNOIoz
iQ4DFI+Rwz7V28HiRSAc4q7e+H44ZtLMsF5pMd7ctbut/wAmMLszLnany/vOmP4DzzwAV9X1K1vL
REFxc3115gb7XdW6xSBcEFSwdjJnK8sflCADgnFW7u7e60uwjzKtzaRmDZsBRkLvJu3ZyDl8bcds
57Vd1bSI7PT/ALSdP1DTZBKsaw3z7jMCGJZfkThdoB4P316d8KgDf/tbT9/2/dc/bfsP2P7P5S+X
/qPI3eZuz0+bGzrxn+Ks2+vI7m002JAwa1tjE5YcEmWR+PbDj8c1Ysktk8PajdS2kU8wnhgjaRnH
lh0lJYBWAJyi9cj2q1oOgpeajpX2u4tlju50K20jsrzReZtYhgNo+64wWDHbwOVyAGk659k0lbH+
1NT07ZO82+xG7zdyoMMN6Y27OOudx6Y5g03xDeaXrf2+2uLwRtcrNNEbk5nAbO2RgBuJyQSR3PFQ
2ujmfT0v5r+ztLd5XhVpmckuoUkbUVjjDjnGBjnGRm3b6Gv9n6ot5JBaXNleRQvLNIcRgiYMuFyW
JZV+6CeM9MmgCvDrVxPNONUubm6juYBbSSvIZJEQOrgruPOGUHHGRkZGcgkvrKW7tUlhlksbKBoo
lbhpTl3BfB4Bd+QDkLwCSNxYNFm+2zwSTwRRwxLO9wxYoI227HwAWw29MDbkbuQMHFjUbW20htJk
ENteCW0aWT945jmJllUN8pVh8oXjggjBGcigB7+IWuNYsNbu0afU4blZbg8IsyoUKdOjcMpwAMBe
Ccks1fVftlokH9s6vqH7wPi9O1EwCOF3vknPXIxg9c8VddtobPxDqdrbpshhu5Y41yThQ5AGTz0F
Z9AF3U7yO6uVS3DLZ26mK1Vx8wj3Fhu/2iWLHtknGBgC7/a9vt+2bJf7Q+yfYtmB5WzyvK35znOz
jbj73zbsfJWLRQBtWGr29nZpp+yU2V3j+0wAN8mHO3Yc8bRhh0yxO7cuBVKxvI7a01KJwxa6thEh
UcAiWN+fbCH8cVSooAKKKKACiiigAooooAKKKKACiiigArV0Z7bydTtrm7itftNoI45JVcruE0b4
OxWPRD2rKrV0ZLbydTubm0iuvs1oJI45WcLuM0aZOxlPRz3oAyqKKKACrFlez6fdLc2zKsgVl+dF
cEMpVgVYEEEEjkd6r1LbW013cLBAm6Rs8ZAAAGSSTwAACSTwACTQBoaxrB1W202No4ke2gaN/Lt4
4gWMjtxsA4wV49dxxySa95eRz2GnW0QZRbxMJQRgNI0jEsPU7fLGTz8oHQCrGpWNlp7aY8U0t1DP
B5srL+73ESuhCZBIGE4JGe5A+6H3lvpbaIL21t7y3ka58qMTXKyiQBcv0jXBXdH167+OhwAUZ/7P
/s+0+z/aftvz/avM2+X1+TZjnp1z36Vet7jS18OXNnLcXgu5pUmCrbKUBRZAF3eYDg+YMnbxjoax
60hYWx8OS34uGe6S5iiMSrhUVllPJPVv3eeOACOSSQoBdsNW0+FtIurhrlbnS8eXDHErJNtlaUZc
sCmS+37rYxnnOBV0K50+0uZpr6a5TMEsKCCBZM+ZG6EnLrjG4Hvn2rQ0vQLe8s7SRra+mS4z517C
4EFl85X94NhztADnLL8rDp1OZo9hbX8lytxcNG0dtNLFGi5LskTuMnoFGznvyABySoA+0udPW0vd
PuJrlLaWeOaOeOBWf5A6gFC4AyJM/eOMY5zkV9VvI76/82IMI0iihQuMFhHGqBiOxO3OMnGcZPWr
ei6XHfxXMhtby+kiZFW0s22yENuzJ91vlXaAfl6uOR0Ms+kW1lrstncx3jERRSRWajEzvIEIiztO
GXecnbzsxgFhgAqX95Z3/iC5uZBOllLK3lqgG+KPkIAOmFG0bcgYXAI6h93PZSWllptpPKYY55JW
ubmLy8GQIpBVS5wBGDkEk5PHHL9RsbPTNQthPbXiRyReZLZvKFnhOWUKzFOpwr8oOGH1Jfw6db2u
m3dtazq07PI1vdTiUPGrBVOUVCAWEikdflzxkUAQ69NBPqYNtOs8aW1vF5iKwDFIURsBgDjKnqBW
bWlr0MEGpgW0CwRvbW8vlozEKXhR2wWJOMsepNZtABWroz23k6nbXN3Fa/abQRxySq5XcJo3wdis
eiHtWVWroyW3k6nc3NpFdfZrQSRxys4XcZo0ydjKejnvQBUsr37Nvilj860lx50JON2OjKf4WGTh
vcgggkGbT7u3sNRnkBlkhMFxCh2AMd8TopIyQOWBPJxz1qvZ2cl5MUQqiIu+WVzhI07sx9OQOMkk
gAEkA6WhaTFqeqTKjRSW0EcsgFxMkBk2o7ICC2cEqN20/KCfmHWgClo95Hp+t2F7KGaO3uY5XCDJ
IVgTj34qlWxpWlDVvE0Vg3kQxvchZFjuEAVC4BEbMx3nnjBYn3qvc2jz6u1tHFY27nGEiu1MK/Ln
iRnI/NuvHtQAzWLyPUNbv72IMsdxcySoHGCAzEjPvzRqt5HfXccsQYKttBEQw5ykSIfwypx7Vd8Q
6Ytl4hk023jto0jkMMbLcq28ByoaRixCMccj5QPQUeIdMXSjZ26x2wzAkjSx3KyvIzRozbgrEAAs
QpAGRzlutAFK+vI7m002JAwa1tjE5YcEmWR+PbDj8c0S3kb6Ja2QDeZDczSsSOCHWIDHv8h/StDV
9IGl6Lp7GOzeS4UySTJdpI4O+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySXErLvN2heJAIyuxFflj
vYMCCQAOF6kAz4ryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OK0
LDSA3hy91R47OVlZY41lu0UoCshZtocNvBQbVPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+9F+Yb1
YLhyQcgbgBk9CARaLqkdhFcxm6vLGSVkZbuzXdIAu7Mf3l+VtwJ+bqg4PUT3uu297q93My3P2W9t
IbaZpGDzLsWP5s8BjvjB5xuGfuk5GVZWaXW9pb22tI0wN85Y5J6AKisx6HnGB3IyM249I8q7uhdv
ugtIFuXMJ5ljYoE2Ejjd5iHJGQCSQSNpACS+spru1ilhlewtoGt0J4kwS7eYQDjIdywXOMAKSeWL
5r7TootPtIknvLS3uXuJfOUQGQP5YKYVmwMR/ezn5unHLJLGyhu7WWWaVLC5ga4QHmTALr5ZIGMl
0KhsYwQxA5UP1rS47CK2kFreWMkrOrWl426QBduJPur8rbiB8vVDyegAGXNzp8Gky2NjNcz+fPHM
7zwLFs2K4AADtnPmH0xt754frF1pd/c3moRPeG7u5WlMLRKqRFm3H59xLgcgfKuc54xgw6vYW1jH
p7Wtw1wtxbGV5Cu0FhLIh2jrt+TjPJ6kDOBd1/QUs9R1X7JcWzR2k7lraN2Z4YvM2qSxG0/eQYDF
hu5HDYAIryfR7mxgSO6voTDANtsLRDH5u0b23+Zk7mGdxXIGBjCgVYstW0uH+xbmUXgu9KUEIqKy
TkTPIFzuBQfMAWw3X7vy/MzT9BQu5uri2aQWM1ybTeyyKPIZ42zgK38DYVicHkcNiraaFLdxW3+l
20U93/x628m/fN8xQYIUqMspX5mHTJwOaADQrnT7S5mmvprlMwSwoIIFkz5kboScuuMbge+faq9v
NZ2k00qo1w6Ni2E0YCHr87rk5I4+TkEnkkDaz7TSxcWgup762s4WkaONpxId7KAWACKxGAy9cfe4
zzgTSZmu7qwc7NRhkMa25wfMYEhlVgcbsjgfxcgHO0MAWNAuIv8AhJrS/wBQv1iWG5S5llmDuZCH
BI+VWJY8nnj3rHrV8NpbT+IbG1u7SK5huZ44GWRnXaGcAsCrA5xnrkc9KyqACrV3crPbWEavKxgg
MbBwoCkyO2FxyRhgeeck9sVFaxwy3cMdxP5ELyKskuwt5ak8tgcnA5xWhd22nyaSb6zhubbbOsIS
4nWXzcqSSpCLjbhc9f8AWL07gGVWxeXWl30a3Mr3gu1to4RbrEuzKRrGG8zdnHyhiNn+zn+KsetK
TRzDbB5b+zS5MQmFqzOH2FQ4O7bsyVIYDdnnH3uKALFpq9vBqHh24ZJSmm7POAAy2J3k+Xnnhh1x
zT9J1mOw08wC71CykWVpWaxbabkEKAjtuG0LtODh8eY3HrXtNClu4rb/AEu2inu/+PW3k375vmKD
BClRllK/Mw6ZOBzTLXRzPp6X81/Z2lu8rwq0zOSXUKSNqKxxhxzjAxzjIyAWNT1e3vf7a8tJR9u1
JbuPcBwg87g89f3i9M9DzV3TNStp9eNx8pQafFALe4eONJ3SJIyCz5QAFTINwIJRRjJFZsWg3B+3
G6uLazWxnW3uDO5+Vzv4AUMW5Qj5c9c9ASKV7ZyWN00EhVjtV1dDw6MoZWGecFSDyAeeQDxQBuza
hFpfidL5LiXe8DpJ5MyStbFkaMbHTajbVKsAu0DhOMZqlqt+upTWsTavqF2qscz6gTiMNgcIGcgD
GSQSTwMcc59lZyX10sEZVTtZ2dzwiKpZmOOcBQTwCeOATxVh9KZru1trK7tr57mQRRmBmX5yQNpD
hSOo5Iwc9eDgALnTrWC3aSPWbG4cYxFEk4Zue26MD35NWNYutLv7m81CJ7w3d3K0phaJVSIs24/P
uJcDkD5VznPGMGvd6WLe0N1BfW15CsixyNAJBsZgSoIdVJyFbpn7vOOM2NQ8PTaebxGvLOeeyYi4
hhdmMa7wm7JUKRllGASw3cgYOACWPU7FdHNtJNeTgRFUs5oUZInIPzJLu3IN3zkKozgKxI+aqVpd
266Te2NwZU82SOeN40DfOiuApBIwD5n3ucY6HPFiTw9NHGM3lm1w1sLpbZXYu0ZjEhP3doIXJIYg
/LwDlcstNClu4rb/AEu2inu/+PW3k375vmKDBClRllK/Mw6ZOBzQBY0nWY7DTzALvULKRZWlZrFt
puQQoCO24bQu04OHx5jcerH1e3fxVf6lslFtdyXIxgb0SZXXOM4JAfOM84xkdaq2mli4tBdT31tZ
wtI0cbTiQ72UAsAEViMBl64+9xnnFjR9Hgv7+5tru/gtzDFM33mfeUjdshkVgVBUE88j7uTQAaPq
kelXl6Irq8gjuIjCl1Au2aMb1cNt3Dk7MEbuNx5OMFl9q9w2pxXdrqmpyzRR7Fu7iUrL3zjDEqMM
Rjcc8nvgZsqLHM6LIsiqxAdM4YeoyAcH3ANX00eQa7caXNKqtatN50iDcMRBmcqDjJwhwDjJxkjr
QBbuvFmqXOpWV29zPIto0EiQTzNIhkjVRvIyOWIJPf5jz3qpdXlnHp72VgJzHPKk8pnAyhUMFRcd
QN7ZY43cfKuOXy6bb3H2Gezk+z215O1uBdyg+S67MlnAAK4kU5wMcjHGTUk064htJbidfJ8uf7OY
5AVcvglgAR/DgbvTcvrQBoX+r295Zvp+yUWVpn+zAQN8eXG7ec87hlj1wwG3auRVezurOwtmuInn
kv5IpITG0QWJA6shbduJY7ScDC8nOSBhruv6ClnqOq/ZLi2aO0nctbRuzPDF5m1SWI2n7yDAYsN3
I4bEEGjx/wBk31zdSslzFbJcQwKOdhkjXc/oCHyo6kc8DbuAHaVe6VZTWF+4uUvLKRZDFHGHS4ZX
Lgly4KZGF4U4255JxVKK8jt9KlghDC4uWKTuw48oFGVV9ywJPH8K4I+YHYi0W3TQ7K/bR9XvElge
aae2lCRR7ZHXB/dNjAQE5PesKys5L66WCMqp2s7O54RFUszHHOAoJ4BPHAJ4oAsWF3braT2F4ZUt
p5ElMsKB3R0DAfKSAwIdgRkdjnjBsLrEY1V9S8pkngijWxXO4I8YREZzxkhVJ6YLAZG3IpkVpp7f
br8i5fTYJ1iiiDqkz795Tc2Cq4VCSQDzgAc5DLjR5I7q5hilWQRWyXSZGHkjZVYYXn5gr7mGSAFY
5IGaAG313b6jfRXUplSWf5r11QHMhY7mRcjqMHGQNxbGBjFqLV7fT7vSxZpLPbafd/aw0wEbyuSh
IwCwUYjUDk9znnANL0bzdRgt7nyibixnuERpNnlkRSFC5OAvKq/JxtKnoaryaLMZrNLOeC9W7l8i
F4SyhpBtynzhSD86c4x83Xg4AH3Nzp8Gky2NjNcz+fPHM7zwLFs2K4AADtnPmH0xt754yqu3lhFb
RCWHUbO8XdtYQl1Kk9OHVSRweRkDvjIzsapoFvZ2d3IttfQpb48m9mcGC9+cL+7GwY3Alxhm+VT1
6gAz7J7Z/D2o2st3FBMZ4Z41kVz5gRJQVBVSAcuvXA96u6Xq2lwT6LdXovPM0xlAihRSJAJmk3by
3BG8/LtOdv3huyvO1q6Mlt5Op3NzaRXX2a0EkccrOF3GaNMnYyno570AV5byN9EtbIBvMhuZpWJH
BDrEBj3+Q/pVjTrvT10m8sL43KefPDKksCK+zYsgOVJG7O8DGR654wYrTSmubQXL3dtbI8jRQidm
HmuACVBAIXG5eXKj5uvBxfv9BQ6trRhuLazsbG+a3zO7fKCz7AAAzN9zHGTznoCQANOrWc17eJKJ
0tLizhsxKqBnUReVh9m4DLeUMjdxu6nHMV9NYahJptrbTtbQW1sYWnu1OCfMkfdhAxAO4cYOCcZO
Nxls9DUyarb3MkCtBZpcRTtIVQK0kWH9SDG5IXG7nG3dgVUOizfbYII54JY5omnS4UsEMa7t74ID
YXY+RtydvAORkAbrtzDeeIdTurd98M13LJG2CMqXJBweehrPravdPttO0nTrtWtr0yXcwLxu+yRE
WIhSPlZeWcdFPOemDVfXoYINTAtoFgje2t5fLRmIUvCjtgsScZY9SaAM2iiigAooooAKKKKACiii
gAooooAKKKKACiiigArV0Z7bydTtrm7itftNoI45JVcruE0b4OxWPRD2rKrV0ZLbydTubm0iuvs1
oJI45WcLuM0aZOxlPRz3oAyqKKKACrFlez6fdLc2zKsgVl+dFcEMpVgVYEEEEjkd6r1LbW013cLB
Am6Rs8ZAAAGSSTwAACSTwACTQBoaxrB1W202No4ke2gaN/Lt44gWMjtxsA4wV49dxxySa95eRz2G
nW0QZRbxMJQRgNI0jEsPU7fLGTz8oHQCrGpWNlp7aY8U0t1DPB5srL+73ESuhCZBIGE4JGe5A+6H
3lvpbaIL21t7y3ka58qMTXKyiQBcv0jXBXdH167+OhwAUZ/7P/s+0+z/AGn7b8/2rzNvl9fk2Y56
dc9+lXre40tfDlzZy3F4LuaVJgq2ylAUWQBd3mA4PmDJ28Y6GsetIWFsfDkt+LhnukuYojEq4VFZ
ZTyT1b93njgAjkkkKAXbDVtPhbSLq4a5W50vHlwxxKyTbZWlGXLApkvt+62MZ5zgVdCudPtLmaa+
muUzBLCgggWTPmRuhJy64xuB759q0NL0C3vLO0ka2vpkuM+dewuBBZfOV/eDYc7QA5yy/Kw6dTma
PYW1/JcrcXDRtHbTSxRouS7JE7jJ6BRs578gAckqAV4I9Pa7kW4ubmO2GfLkjt1d254ypcAcf7Rx
79atvfWV/qEz30MsULQJBC8f7x4RGEVWxlQ5KptPIHzEgcAU/RdLjv4rmQ2t5fSRMiraWbbZCG3Z
k+63yrtAPy9XHI6G1/wj8ceu3Nk8F5I0VtHcJZJ8txIXCHygdp+ZQ5JO3kRnhewBm6pd29wLOC1M
rQ2kBhWSVAjPmR3JKgkDlyOp6Z74BfXMOoXlqsb+RAkEMAMgO2MhFDthc8F97cDJ3E4ya0p/DMja
xY2cVveWzXds9ybadN80YQyZUDC72IjJUYXO4D3rP1O2XTtQhDaTeWihVc29+5JkGT3Codpxjjng
8+gAa9NBPqYNtOs8aW1vF5iKwDFIURsBgDjKnqBWbWlr0MEGpgW0CwRvbW8vlozEKXhR2wWJOMse
pNZtABWroz23k6nbXN3Fa/abQRxySq5XcJo3wdiseiHtWVWroyW3k6nc3NpFdfZrQSRxys4XcZo0
ydjKejnvQBUsr37Nvilj860lx50JON2OjKf4WGThvcgggkGbT7u3sNRnkBlkhMFxCh2AMd8TopIy
QOWBPJxz1qvZ2cl5MUQqiIu+WVzhI07sx9OQOMkkgAEkA6WhaTFqeqTKjRSW0EcsgFxMkBk2o7IC
C2cEqN20/KCfmHWgClo95Hp+t2F7KGaO3uY5XCDJIVgTj34qlWxpWlDVvE0Vg3kQxvchZFjuEAVC
4BEbMx3nnjBYn3qvc2jz6u1tHFY27nGEiu1MK/LniRnI/NuvHtQAzWLyPUNbv72IMsdxcySoHGCA
zEjPvzRqt5HfXccsQYKttBEQw5ykSIfwypx7Vd8Q6Ytl4hk023jto0jkMMbLcq28ByoaRixCMccj
5QPQUeIdMXSjZ26x2wzAkjSx3KyvIzRozbgrEAAsQpAGRzlutAFK+vI7m002JAwa1tjE5YcEmWR+
PbDj8c0S3kb6Ja2QDeZDczSsSOCHWIDHv8h/StDV9IGl6Lp7GOzeS4UySTJdpI4O+RdqhXIKYQHd
g/Nkbu1EmkC18KJftHZySXErLvN2heJAIyuxFfljvYMCCQAOF6kAz4ryNNEurIhvMmuYZVIHACLK
Dn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OK0LDSA3hy91R47OVlZY41lu0UoCshZtocNvBQb
VPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+9F+Yb1YLhyQcgbgBk9CARaLqkdhFcxm6vLGSVkZbuz
XdIAu7Mf3l+VtwJ+bqg4PUT3uu297q93My3P2W9tIbaZpGDzLsWP5s8BjvjB5xuGfuk5EGi6XHfx
XMhtby+kiZFW0s22yENuzJ91vlXaAfl6uOR0MN7YW1rrTWr3DQwBVdy673iJQM0ZAxl1JKc7RuHO
3nAA+S+spru1ilhlewtoGt0J4kwS7eYQDjIdywXOMAKSeWLNQurP+z7WwsnnljhllmMs0QjJLhBt
2hm4Hlg5zzu6DHM0+jxjxRf6XFKywWstxmRhuby4gzE44BbahwOAT3HWj+yY7+bTm04tFHf3JtUj
uX3GOQbM5ZVGV/eKc4B6jHAJADV7jS57DT4rK4vJJLWIwkTWyxhgZJH3ZEjc/OBjHbOe1WtU1bS5
59aurIXnmamzAxTIoEYMyybt4bknYPl2jG77x25bPurOzk0972wM4jglSCUTkZcsGKuuOgOxsqc7
ePmbPE1xo8dpok9xNK32+G5iikgA4iDrIcMf7/yDI/h6H5shQC1Hq2l+ab2UXhuzp5sxCqKEQ/Zz
CH37ssDgEjaMbup24Z9n4i8nS7O3/tXV7L7JGyfZ7JtqTZdnyW3jYTv252tjaDz0Fe307TrqzmEK
XjNBbebLfbwIEfYXEZQpkEsDGDv5PIz92rWjeG49TgsFSz1C5a8ba91bH91aEuUxIuw5IADn5l4Y
dOpAK+k659k0lbH+1NT07ZO82+xG7zdyoMMN6Y27OOudx6Y5oJe24u7q9kjlnnaQtCtwRIMkklpC
cbyOOMYYnJ4BVrGk2NteREfYdQ1G7LMTb2TbTEg2/OT5b5BLEdsbec7hULaVG+oXtja3a3E0MrJb
7V4ugCR8pBPzHghed3IBzgMATaBcRf8ACTWl/qF+sSw3KXMsswdzIQ4JHyqxLHk88e9Y9avhtLaf
xDY2t3aRXMNzPHAyyM67QzgFgVYHOM9cjnpWVQAVau7lZ7awjV5WMEBjYOFAUmR2wuOSMMDzzknt
iorWOGW7hjuJ/IheRVkl2FvLUnlsDk4HOK0Lu20+TSTfWcNzbbZ1hCXE6y+blSSVIRcbcLnr/rF6
dwDKrYvLrS76NbmV7wXa20cIt1iXZlI1jDeZuzj5QxGz/Zz/ABVj1of2Rcebt3xeX9k+1+dk+Xs2
5xux13fu/wDf+XNAF+w1bT4W0i6uGuVudLx5cMcSsk22VpRlywKZL7futjGec4GbLeRvolrZAN5k
NzNKxI4IdYgMe/yH9KsWmhS3cVt/pdtFPd/8etvJv3zfMUGCFKjLKV+Zh0ycDmmWujmfT0v5r+zt
Ld5XhVpmckuoUkbUVjjDjnGBjnGRkAsanq9ve/215aSj7dqS3ce4DhB53B56/vF6Z6Hmorl7bV9R
VjdxWcaWkEe+5VyC0cSIQNiseSpI46enSs+6tprO7mtbhNk0MjRyLkHDA4IyOOoqxp2nHUDck3UF
tHbxebJJNvIA3qnRVY5y47UAW7Y2uj38cv2+C9jlimhkNqkmYw8ZTdiRVyRvJAzztwSM5pkF3p+l
atp95Ym5uvs06zuZ0WHdtYEKAC2Oh+bJ69Bjl8vhy8huo4GkgJLSpK4Y4heJd0qtxklFIJ2hgf4S
x4qje2aWuxor22u43yN8BYYI6gq6qw6jnGD2JwcAFu5udPg0mWxsZrmfz545neeBYtmxXAAAds58
w+mNvfPEt3q9vPqHiK4VJQmpb/JBAyuZ0k+bnjhT0zzWLWrd6FLaRXP+l20s9p/x9W8e/fD8wQ5J
UKcMwX5WPXIyOaAJf7Xt/wC1/teyXy/7N+yYwM7/ALJ5OevTdz9O3ardn4i8nS7O3/tXV7L7JGyf
Z7JtqTZdnyW3jYTv252tjaDz0GV/ZFx5u3fF5f2T7X52T5ezbnG7HXd+7/3/AJc1Yt9ItZvDlzqL
6lAk8UqIISJM8rIdpwhG47BjBxjOSOKAGW1zp8+kxWN9NcweRPJMjwQLLv3qgIILrjHlj1zu7Y5i
0+9t7PVHm8uVbZ45ocZDuiSIyZ7BiA2e2cds8FppYuLQXU99bWcLSNHG04kO9lALABFYjAZeuPvc
Z5xLFoNwftxuri2s1sZ1t7gzuflc7+AFDFuUI+XPXPQEgAqQX1xpt3JJpl7cwZyiyxsYnZM99p46
A4ya07rxReXuuy390880DtOFtpJywijlBVlQnhTtbAOMZA4OMVUGizfbZ4JJ4Io4Ylne4YsUEbbd
j4ALYbemBtyN3IGDh+s6fDp8OmCJopGmtDK8sTllkPnSKGGenyqvGARjkA5oAJbvT2+w2ANy+mwT
tLLKUVJn37A+1clVwqAAEnnJJ5wDUtX/ALWtI/tSYubfZDbeWMIluA37s5OTtO3aeTy24njGbFG0
0yRKVDOwUF2CjJ9SeAPc8Vdu9LFvaG6gvra8hWRY5GgEg2MwJUEOqk5Ct0z93nHGQDS1TVtLnn1q
6sheeZqbMDFMigRgzLJu3huSdg+XaMbvvHblooPEtx/Zt9aXKwSGazS1icWkW4BWjxufbuICIQOS
Qdp6gERah4em083iNeWc89kxFxDC7MY13hN2SoUjLKMAlhu5AwcCaCRbSTT30EbR232mSFVdnRGU
GMnjbhi0Y4YkbwSODgANHutLsLmz1CV7wXdpKsohWJWSUq24fPuBQHgH5WxjPOcCpFeR2+lSwQhh
cXLFJ3YceUCjKq+5YEnj+FcEfMDLDo5ktYZpb+ztnuFLQRTM4Mo3Fc7gpRQWVh8zL0ycDmqtnZyX
07RRFQyxSSkseMIjOfxwpx70AWLC7t1tJ7C8MqW08iSmWFA7o6BgPlJAYEOwIyOxzxg2F1iMaq+p
eUyTwRRrYrncEeMIiM54yQqk9MFgMjbkVXsLS3a0nv7wSvbQSJEYoXCO7uGI+YghQAjEnB7DHOQ6
40eSO6uYYpVkEVsl0mRh5I2VWGF5+YK+5hkgBWOSBmgC3DrFmdaGpXUU7vPbTreLGQu+WRJELITn
AO5SeOCWwMYFEerWenTaSLIT3MdheG8LTIIS5Pl/JgM2B+6HOf4unHNez0jzNQs4Ll8faIGnEUZx
IcBikfI4Z9q7eDxIpAOcVY1HQ2SawhtrK8tLu7lMK2V44MmflCvkqmFYsVGRjKHk9gDPvE0uOICy
nvJ5C3LTQrEFHpgM24njnIxjoc8aV/q2nzNq91btctc6pnzIZIlVId0qynDhiXwU2/dXOc8YwaF3
pYt7Q3UF9bXkKyLHI0AkGxmBKgh1UnIVumfu844yyKzjXSpb24LAOxitQp+/IpQvn/ZCt7HLLjID
YAKVaujPbeTqdtc3cVr9ptBHHJKrldwmjfB2Kx6Ie1S/2Rb7fse+X+0Psn23fkeVs8rzdmMZzs53
Z+98u3Hz1Vg0i4uXsBG8Wy83YkJOyLaSG3nHy7QA59FIPegCW2u9Pk0mKzvjcr9nnknQQIp83eqA
qSSNn+rHzYb73TjmXU9Xt73+2vLSUfbtSW7j3AcIPO4PPX94vTPQ80/SdIjvNPNyNP1DUpDK0bQ2
L7TCAFIZ/kfhtxA4H3G69pW0CCKLWoJLqCN7DUI7f7VMWUbP3wPyrkkkonABI+gJoAz9IvLO2j1C
G9E5ju7YQhoQCUPmxvuweoGw8d+mRnItjVrOG9s0iE72lvZzWZlZArsJfNy+zcRlfNOBu529RnjK
vbOSxumgkKsdquroeHRlDKwzzgqQeQDzyAeKr0AbF5cadLp9hptnLOqxXMsklxdIFBDiMZ2qWIA2
HI+Y8ZHXaIdemgn1MG2nWeNLa3i8xFYBikKI2AwBxlT1ArNooAKKKKACiiigAooooAKKKKACiiig
AooooAKKKKACtXRntvJ1O2ubuK1+02gjjklVyu4TRvg7FY9EPasqtXRktvJ1O5ubSK6+zWgkjjlZ
wu4zRpk7GU9HPegDKooooAKsWV7Pp90tzbMqyBWX50VwQylWBVgQQQSOR3qvUttbTXdwsECbpGzx
kAAAZJJPAAAJJPAAJNAGhrGsHVbbTY2jiR7aBo38u3jiBYyO3GwDjBXj13HHJJr3l5HPYadbRBlF
vEwlBGA0jSMSw9Tt8sZPPygdAKsalY2WntpjxTS3UM8Hmysv7vcRK6EJkEgYTgkZ7kD7ofeW+lto
gvbW3vLeRrnyoxNcrKJAFy/SNcFd0fXrv46HABRn/s/+z7T7P9p+2/P9q8zb5fX5NmOenXPfpV63
uNLXw5c2ctxeC7mlSYKtspQFFkAXd5gOD5gydvGOhrHrSFhbHw5Lfi4Z7pLmKIxKuFRWWU8k9W/d
544AI5JJCgF2w1bT4W0i6uGuVudLx5cMcSsk22VpRlywKZL7futjGec4FXQrnT7S5mmvprlMwSwo
IIFkz5kboScuuMbge+fatDS9At7yztJGtr6ZLjPnXsLgQWXzlf3g2HO0AOcsvysOnU5+l2tlcIkb
2t9f3ssjKtrZvsZVUA7uUfdnLcDGNhJzngAqQR6e13Itxc3MdsM+XJHbq7tzxlS4A4/2jj361NcX
dvqWqCW5MttbCNIh5aCV1RECJwSoY4VcnjuQO1WP7Ms7abUZp5mu7KzuRbKbZwhnLb9rBiGCriNj
nDdh33C1Y+H45NQvoGgvL8Q2cd1DFafJJKHMW3I2vghZMkAHkdSOaAK8er28WoWwVJfsVvaS2SMQ
PMKSCQM5GcZzKzBc9AF3fxGK7nspLSy020nlMMc8krXNzF5eDIEUgqpc4AjByCScnjjll5FZ2eqi
KfTNQt40X97bTXAE2SMg7jGNo5U4Kn688TX8OnW9rpt3bWs6tOzyNb3U4lDxqwVTlFQgFhIpHX5c
8ZFAEOvTQT6mDbTrPGltbxeYisAxSFEbAYA4yp6gVm1pa9DBBqYFtAsEb21vL5aMxCl4UdsFiTjL
HqTWbQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8HYrHoh7VlVq6Mlt5Op3NzaRXX2a0EkccrOF3Ga
NMnYyno570AVLK9+zb4pY/OtJcedCTjdjoyn+Fhk4b3IIIJBm0+7t7DUZ5AZZITBcQodgDHfE6KS
MkDlgTycc9ar2dnJeTFEKoiLvllc4SNO7MfTkDjJJIABJAOloWkxanqkyo0UltBHLIBcTJAZNqOy
AgtnBKjdtPygn5h1oApaPeR6frdheyhmjt7mOVwgySFYE49+KpVsaVpQ1bxNFYN5EMb3IWRY7hAF
QuARGzMd554wWJ96r3No8+rtbRxWNu5xhIrtTCvy54kZyPzbrx7UAM1i8j1DW7+9iDLHcXMkqBxg
gMxIz780areR313HLEGCrbQREMOcpEiH8Mqce1XfEOmLZeIZNNt47aNI5DDGy3KtvAcqGkYsQjHH
I+UD0FHiHTF0o2dusdsMwJI0sdysryM0aM24KxAALEKQBkc5brQBSvryO5tNNiQMGtbYxOWHBJlk
fj2w4/HNEt5G+iWtkA3mQ3M0rEjgh1iAx7/If0rQ1fSBpei6exjs3kuFMkkyXaSODvkXaoVyCmEB
3YPzZG7tRJpAtfCiX7R2cklxKy7zdoXiQCMrsRX5Y72DAgkADhepAM+K8jTRLqyIbzJrmGVSBwAi
yg59/nH60WN5HbWmpROGLXVsIkKjgESxvz7YQ/jitCw0gN4cvdUeOzlZWWONZbtFKArIWbaHDbwU
G1T1BPyt2PD+kC9gvb147OZbaLckNxdpErPvRfmG9WC4ckHIG4AZPQgFTT7qz/s+6sL154o5pYph
LDEJCCgcbdpZeD5hOc8behzwXl1Z6hqyvK88NosUcIdYg7kRxqgbbuAydoJG7jPU45r2Vml1vaW9
trSNMDfOWOSegCorMeh5xgdyMjNuPSPKu7oXb7oLSBblzCeZY2KBNhI43eYhyRkAkkEjaQCxfatZ
jxRdapZieaC7acyRzIImUSh1ZQQzDIV+GPf+HjBi/taOwm05dODSx2FybpJLlNpkkOzOVVjhf3aj
GSepzyAGSWNlDd2sss0qWFzA1wgPMmAXXyyQMZLoVDYxghiByofrWlx2EVtILW8sZJWdWtLxt0gC
7cSfdX5W3ED5eqHk9AAQ3V5Zx6e9lYCcxzypPKZwMoVDBUXHUDe2WON3Hyrjm3deJbi/0e+tbtYG
uLm5imMiWkSEhRJuJZVB3EsvPXG7nkg1NXsLaxj09rW4a4W4tjK8hXaCwlkQ7R12/Jxnk9SBnAt6
vpA0vRdPYx2byXCmSSZLtJHB3yLtUK5BTCA7sH5sjd2oAZeT6Pc2MCR3V9CYYBtthaIY/N2je2/z
MncwzuK5AwMYUCq+lXVnp00WoM87XttKJYYBEPLYrgqWfdkDPUBeQMZGci3b6dp11ZzCFLxmgtvN
lvt4ECPsLiMoUyCWBjB38nkZ+7VjS9At7yztJGtr6ZLjPnXsLgQWXzlf3g2HO0AOcsvysOnUgGVa
f2VLaCO9e5t5lkZvNghEvmKQMKQXULtIJyM53dsDMs+qW9xq2oarJbbp552mhhfDxoWYkls/e28Y
XGCeTwCrP0mxtryIj7DqGo3ZZibeybaYkG35yfLfIJYjtjbzncKhbSo31C9sbW7W4mhlZLfavF0A
SPlIJ+Y8ELzu5AOcBgCbQLiL/hJrS/1C/WJYblLmWWYO5kIcEj5VYljyeePesetXw2ltP4hsbW7t
IrmG5njgZZGddoZwCwKsDnGeuRz0rKoAKtXdys9tYRq8rGCAxsHCgKTI7YXHJGGB55yT2xUVrHDL
dwx3E/kQvIqyS7C3lqTy2BycDnFaF3bafJpJvrOG5tts6whLidZfNypJKkIuNuFz1/1i9O4BlVtf
2vb/ANnf2Rsl/szy/O24Hmfa/Lx5mc9N3y46bOcbuaxa39P0FC7m6uLZpBYzXJtN7LIo8hnjbOAr
fwNhWJweRw2ACaz8ReTpdnb/ANq6vZfZI2T7PZNtSbLs+S28bCd+3O1sbQeegx5byN9EtbIBvMhu
ZpWJHBDrEBj3+Q/pVu38PTXEFk4vLNJL5c2sDO2+Vt7JtwFIUll4LEKc9eGw6DSra48PW1493bWb
m7nieWdnO4BIiqhVDH+JznGPU8qKAG3htda1vU737fBZRzXLyxi6SQlgzE/8s1bkcZ+vGaYr22mw
6jbLdxXn2u0WNJLdXCq3nRvg71U9EPQHqPfDI9FmE14l5PBZLaS+RM8xZgsh3YT5AxJ+R+cY+Xry
MmvWUen6mLaNVUC2t2bY+8F2hRmIYEggsSeDjnjigDVuPEdnNfyyiOcRzXl/KxKjKx3MaoCBnllw
TjIB4GecjFuItOM0MVnczlWb95PdRCMKDgfcUucDkk5JOcAccs07T5tTvBawNEshjkkzK4RcIhc5
J4HCnk8epHWrEmizGazSzngvVu5fIheEsoaQbcp84Ug/OnOMfN14OABtzp1rBbtJHrNjcOMYiiSc
M3PbdGB78mtXXPEX9qxXb/2rq8n2qTf9ikbEEOW3YzvO8L0A2r2PGMHKu9LFvaG6gvra8hWRY5Gg
Eg2MwJUEOqk5Ct0z93nHGXX2jmwWVXv7N7mBts9srOHjIOCMsoViDwdpb1GQCQAWP7Xt/wCzv7I2
S/2Z5fnbcDzPtfl48zOem75cdNnON3NVbS7t10m9sbgyp5skc8bxoG+dFcBSCRgHzPvc4x0OeJf7
Cl8n/j7tvtfkfaPsfz+Z5ezzM527PufNjdnHHXiorC0SfTtUnYRM9vAjKrOysuZUUsuBg4ztIJH3
8jOKAL+k659k0lbH+1NT07ZO82+xG7zdyoMMN6Y27OOudx6Y5pS6lHLp+oQHz2kuryK4VpX3nCiU
JKgcYIDMSM+/NGq3kd9dxyxBgq20ERDDnKRIh/DKnHtV3xDpi2XiGTTbeO2jSOQwxstyrbwHKhpG
LEIxxyPlA9BR4h0xdKNnbrHbDMCSNLHcrK8jNGjNuCsQACxCkAZHOW60AUr68jubTTYkDBrW2MTl
hwSZZH49sOPxzRLeRvolrZAN5kNzNKxI4IdYgMe/yH9K0NX0gaXounsY7N5LhTJJMl2kjg75F2qF
cgphAd2D82Ru7USaQLXwol+0dnJJcSsu83aF4kAjK7EV+WO9gwIJAA4XqQDPivI00S6siG8ya5hl
UgcAIsoOff5x+tFjeR21pqUThi11bCJCo4BEsb8+2EP44rQsNIDeHL3VHjs5WVljjWW7RSgKyFm2
hw28FBtU9QT8rdjw/pAvYL29eOzmW2i3JDcXaRKz70X5hvVguHJByBuAGT0IBU0+6s/7PurC9eeK
OaWKYSwxCQgoHG3aWXg+YTnPG3oc8F5dWeoasryvPDaLFHCHWIO5EcaoG27gMnaCRu4z1OOZtF0u
O/iuZDa3l9JEyKtpZttkIbdmT7rfKu0A/L1ccjoYb2wtrXWmtXuGhgCq7l13vESgZoyBjLqSU52j
cOdvOACxqV7pV34lub4C5ls7uSWR1kjCPEzlsEAOQ20kNyRuxg4HNRXd3p4tLKxtzc3FtDPJPI8i
LC7bwgKgAuBgR/e5+9045LuxsrLxLfWM80q2drPMgbq7hC21cgYBYgLuxgZzjAxUuq6P9n+xiCxv
rW5uJGjFjdHfMcbdrjCqSGLFQNvVDyegAIrm50+DSZbGxmuZ/Pnjmd54Fi2bFcAAB2znzD6Y2988
WLrxLcX+j31rdrA1xc3MUxkS0iQkKJNxLKoO4ll5643c8kGvqOl29lpNncR3PnTyTzQzbMGNSixk
BSPvffOW6EjjIAZrGr6QNL0XT2Mdm8lwpkkmS7SRwd8i7VCuQUwgO7B+bI3dqACDxLcf2bfWlysE
hms0tYnFpFuAVo8bn27iAiEDkkHaeoBDNKvdKsprC/cXKXllIshijjDpcMrlwS5cFMjC8Kcbc8k4
q3qGi29hpyT/ANj6u8bWkMv27zQIN8kat/zy6Bmxjd2xmodH0my1GKGPyb6Z2x9pu4jthsQWKgyA
ocgBd5O5Rg44wTQBQtP7KltBHevc28yyM3mwQiXzFIGFILqF2kE5Gc7u2BmWfVLe41bUNVktt088
7TQwvh40LMSS2fvbeMLjBPJ4BVorTSxcWgup762s4WkaONpxId7KAWACKxGAy9cfe4zzhlvpzXE0
1qkqm8RsRxKQwmIyCFYHBbpgDhucHO0MAW9AuIv+EmtL/UL9YlhuUuZZZg7mQhwSPlViWPJ5496x
61fDaW0/iGxtbu0iuYbmeOBlkZ12hnALAqwOcZ65HPSsqgAq1d3Kz21hGrysYIDGwcKApMjthcck
YYHnnJPbFRWscMt3DHcT+RC8irJLsLeWpPLYHJwOcVoXdtp8mkm+s4bm22zrCEuJ1l83KkkqQi42
4XPX/WL07gGVWxb3Glr4cubOW4vBdzSpMFW2UoCiyALu8wHB8wZO3jHQ1j1sSeHpo4xm8s2uGthd
LbK7F2jMYkJ+7tBC5JDEH5eAcrkAdaavbwah4duGSUppuzzgAMtid5Pl554Ydcc1V0u7t7cXkF0Z
Vhu4BC0kSB2TEiOCFJAPKAdR1z2wZbTQpbuK2/0u2inu/wDj1t5N++b5igwQpUZZSvzMOmTgc0/S
NItdQsNQnn1KC2e3iDqjiQ4/eRrubajfL85HBznHGMmgCjZWkF1v87Ubaz24x56yHd9NiN098dat
q9tpsOo2y3cV59rtFjSS3Vwqt50b4O9VPRD0B6j3xFodol9r1hayCJklnRSkrsivk/dLKCRu6ZA4
zTNO0yTUjc7JoIUt4vOleZ9oCb1Unocn5gcDk4wMnAIA/RdR/svVEut0qYjkj8yI4ePejJvXkcru
yBkZx1HWpdZ1H7f5C/2nqeoeXuPmXxxtzjhV3Njpyd3ORwMZMU2liC7gikvrZbeeMyR3eJDGygsu
cbd4+ZWXleo9Oadr+nQaTrd3ZW10txHDK6AjdlcMRtbKjLDHOMj0NADdUu7e+NtPGZVmEEcMsbIN
q+XGqAq2ctkLk5AxnHPWnS6ZaRxO667p8jKpIREuMsfQZiAyfcgVm1sah4em083iNeWc89kxFxDC
7MY13hN2SoUjLKMAlhu5AwcAFSW8jfRLWyAbzIbmaViRwQ6xAY9/kP6UX15Hc2mmxIGDWtsYnLDg
kyyPx7YcfjmrH9hS+T/x9232vyPtH2P5/M8vZ5mc7dn3PmxuzjjrxTIdHMlrDNLf2ds9wpaCKZnB
lG4rncFKKCysPmZemTgc0APtrnT59Jisb6a5g8ieSZHggWXfvVAQQXXGPLHrnd2xzoWviS3XVp76
eCVfO1mDUSiYbaiNKWXJxk/vBj1welUtI0i11Cw1CefUoLZ7eIOqOJDj95Gu5tqN8vzkcHOccYya
qWOnG8ilnkuoLW3iZUaabeRvbJVcIrHJCsemOOvTIBYZ7bUodOtmu4rP7JaNG8lwrlWbzpHwNise
jjqB0PtlghtdOvbaUaotwu4kyWIkV4SPusPMRckHnAPOCMrkGnx6DcSXd1A1xbRrbQLcvNI5CGJi
m1hxnkSK2Mbu2N3y0w6LN9tggjngljmiadLhSwQxru3vggNhdj5G3J28A5GQCxqGrxfa7K6sZZZL
23k803stukTuwIK7lUsGIIJLsSW3c9BVvVPstp4emtUgtoJp7uGbbDfLc7tqShsFCQiguuA2W+Y/
M2OMe+042cUU8d1BdW8rMizQ7wN64LLh1U5AZT0xz164pUAbt7qdjNpTW4mvLp9qrAt1CmbUAjhZ
gxZ1ABULhV+YtgEYqpFplpJEjtrunxsyglHS4yp9DiIjI9iRRNo5jtZpor+zuXt1DTxQs5MQ3Bc7
ioRgGZR8rN1yMjmppPD00cYzeWbXDWwultldi7RmMSE/d2ghckhiD8vAOVyAAutLu7CyS9e8SSzi
aIRQxKwmHmPJ98sNhO8r91sYzznFS6TrMdhp5gF3qFlIsrSs1i203IIUBHbcNoXacHD48xuPWKw0
E3a27z30FssytKEZXZzCpIeQADbhQkh2lgTsOByM17TSxcWgup762s4WkaONpxId7KAWACKxGAy9
cfe4zzgAvwatp7eN5NauGuY7YXxvI1jiV3b95vCkFgBx3ycehrK8vT/7Q2fabn7F/wA9vs6+Z0/u
b8dePvdOfai10+a71aHTUaITSzrArbwybi23O5cgjPcZ9qfp1nHcm5mnLC3tIvOlVDh3G9UCqTwC
WdeT0GTg4wQC3q2qRt4judV0i6vImnlkmDsvkvGXZiVBVjkYOM5GcnipbrxZqlzqVldvczyLaNBI
kE8zSIZI1UbyMjliCT3+Y896ryaR513apZv+7vYGnt0lPz8Fx5fA+Zi0ZVcD5srwM4Bpekfa77SE
uX2Q6hdiHapxJs3KpcZGMEsQDzyjDtQAy6vLOPT3srATmOeVJ5TOBlCoYKi46gb2yxxu4+Vcc2L/
AFe3vLN9P2SiytM/2YCBvjy43bznncMseuGA27VyKbrFhHYxRA6LqunyOx2tey5DgdQB5Sc8jnP4
c1Uis410qW9uCwDsYrUKfvyKUL5/2Qrexyy4yA2AAgvI4dHvLUBhPPLEd6jjy1DllJ64LGM46ZQH
sKJbyNdKisrcMA7CW6LD78ilwmP9kK3scs2cgLi7/ZFvt+x75f7Q+yfbd+R5WzyvN2YxnOzndn73
y7cfPVWDSLi5ewEbxbLzdiQk7ItpIbecfLtADn0Ug96AHWt5ZyaellficRwSvPEYAMuWChkbPQHY
uGGdvPytnixb+IJrbU7zWov3erzTmSJ1UGOIPuMhAOeeQBnIwW74IfpOhSXenm/OnahqEbStCsVi
MFSoUlnbY2B8wAGOfm5G35sKgC7IdOm1IMDPbWbKCwSMSMjbRuCgsMruyBls7cZya1ZtW0seLLzV
EF5NaXa3JeNkWJ1MqSLtB3MMDePm+vy8YPO0UAaWoXVn/Z9rYWTzyxwyyzGWaIRklwg27QzcDywc
553dBjnNoooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACtXRntvJ1O2ubuK1+02gjjklVyu
4TRvg7FY9EPasqtXRktvJ1O5ubSK6+zWgkjjlZwu4zRpk7GU9HPegDKooooAKsWV7Pp90tzbMqyB
WX50VwQylWBVgQQQSOR3qvUttbTXdwsECbpGzxkAAAZJJPAAAJJPAAJNAGhrGsHVbbTY2jiR7aBo
38u3jiBYyO3GwDjBXj13HHJJr3l5HPYadbRBlFvEwlBGA0jSMSw9Tt8sZPPygdAKsalY2WntpjxT
S3UM8Hmysv7vcRK6EJkEgYTgkZ7kD7ofeW+ltogvbW3vLeRrnyoxNcrKJAFy/SNcFd0fXrv46HAB
Rn/s/wDs+0+z/aftvz/avM2+X1+TZjnp1z36Vet7jS18OXNnLcXgu5pUmCrbKUBRZAF3eYDg+YMn
bxjoax60hYWx8OS34uGe6S5iiMSrhUVllPJPVv3eeOACOSSQoBdsNW0+FtIurhrlbnS8eXDHErJN
tlaUZcsCmS+37rYxnnOBBY3GljR3tbi4vLWeSUmV4LZZRLHhSikmRSAGDHA4J2k/dGLul6Bb3lna
SNbX0yXGfOvYXAgsvnK/vBsOdoAc5ZflYdOpz9LtbK4RI3tb6/vZZGVbWzfYyqoB3co+7OW4GMbC
TnPABFbapPpv2iCzeKS2lkDYubWOTdt3BWKuGCnDHoe55NX7jW7W+1zW7icSxWup7lDxwrvjXzVk
U7AQCfkAPI6k5J6zWOgW8v8Aa3l219q/2O7SCP8As9wu9D5n7z7j8fIuMf3utZgi06PULoXUN5bw
wLhbR5B5zOCFKF9mFIyzcr/Dt6nNAFsatZw3tmkQne0t7OazMrIFdhL5uX2biMr5pwN3O3qM8V7u
eyktLLTbSeUwxzyStc3MXl4MgRSCqlzgCMHIJJyeOOZb3TbKy1O3S5+02kTwNNLbTHM0TDfiNjtG
C+1SCVGBIDggZJqunQ6Z9juDY3Ns7yMHsb9iXKrtIYkKh2tuK8AfcbB9ACvr00E+pg206zxpbW8X
mIrAMUhRGwGAOMqeoFZtaWvQwQamBbQLBG9tby+WjMQpeFHbBYk4yx6k1m0AFaujPbeTqdtc3cVr
9ptBHHJKrldwmjfB2Kx6Ie1ZVaujJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9AFSyvfs2+KWPz
rSXHnQk43Y6Mp/hYZOG9yCCCQZtPu7ew1GeQGWSEwXEKHYAx3xOikjJA5YE8nHPWq9nZyXkxRCqI
i75ZXOEjTuzH05A4ySSAASQDpaFpMWp6pMqNFJbQRyyAXEyQGTajsgILZwSo3bT8oJ+YdaAKWj3k
en63YXsoZo7e5jlcIMkhWBOPfiqVbGlaUNW8TRWDeRDG9yFkWO4QBULgERszHeeeMFifeq9zaPPq
7W0cVjbucYSK7Uwr8ueJGcj8268e1ADNYvI9Q1u/vYgyx3FzJKgcYIDMSM+/NGq3kd9dxyxBgq20
ERDDnKRIh/DKnHtV3xDpi2XiGTTbeO2jSOQwxstyrbwHKhpGLEIxxyPlA9BR4h0xdKNnbrHbDMCS
NLHcrK8jNGjNuCsQACxCkAZHOW60AUr68jubTTYkDBrW2MTlhwSZZH49sOPxzRLeRvolrZAN5kNz
NKxI4IdYgMe/yH9K0NX0gaXounsY7N5LhTJJMl2kjg75F2qFcgphAd2D82Ru7USaQLXwol+0dnJJ
cSsu83aF4kAjK7EV+WO9gwIJAA4XqQDPivI00S6siG8ya5hlUgcAIsoOff5x+tFjeR21pqUThi11
bCJCo4BEsb8+2EP44rQsNIDeHL3VHjs5WVljjWW7RSgKyFm2hw28FBtU9QT8rdjw/pAvYL29eOzm
W2i3JDcXaRKz70X5hvVguHJByBuAGT0IBU0+6s/7PurC9eeKOaWKYSwxCQgoHG3aWXg+YTnPG3oc
8F5dWeoasryvPDaLFHCHWIO5EcaoG27gMnaCRu4z1OOZtF0uO/iuZDa3l9JEyKtpZttkIbdmT7rf
Ku0A/L1ccjoYb2wtrXWmtXuGhgCq7l13vESgZoyBjLqSU52jcOdvOACxqV7pV34lub4C5ls7uSWR
1kjCPEzlsEAOQ20kNyRuxg4HNNm1OGzi0+LSpp2azuXuknmhVCHbywBs3MMDywck85xjjll3Y2Vl
4lvrGeaVbO1nmQN1dwhbauQMAsQF3YwM5xgYrQbQLc6to9tJbX1h9tuxBJa3TgzKm5B5gOxeDuYD
K9UPJ6AApahrj6holrZSRQLJFcyysYrWKIYZUC42Ac/K2fX5euBipfXkdzaabEgYNa2xicsOCTLI
/Hthx+Oat6xYR2MUQOi6rp8jsdrXsuQ4HUAeUnPI5z+HNS65oUmixGKXTtQSSOXynvZRtglPOQg2
dOODuOQM4GcAAis7rS7GNrmJ7w3bW0kJt2iXZl42jLeZuzj5iwGz/Zz/ABU7Sr3SrKawv3Fyl5ZS
LIYo4w6XDK5cEuXBTIwvCnG3PJOKlg0myudOklihvikUBd9RJxbrKI9/lFSnXJEY+fkkEDkLUWiQ
aPe3EFrd2t9vO5p54rtFVI1BZmCGMk7UBOM5OOOuKAIra50+fSYrG+muYPInkmR4IFl371QEEF1x
jyx653dsc17e4s7eaa4WBncN/o0U2HRevzOcDcRxxgAk5PA2tLa2dnHp6Xt+ZzHPK8EQgIyhUKWd
s9QN64UY3c/MuORtHkXUL3TfNVr+3laJIlGRMVJDBT/e4GBj5uQPmwrAE2gXEX/CTWl/qF+sSw3K
XMsswdzIQ4JHyqxLHk88e9Y9avhtLafxDY2t3aRXMNzPHAyyM67QzgFgVYHOM9cjnpWVQAVau7lZ
7awjV5WMEBjYOFAUmR2wuOSMMDzzkntiorWOGW7hjuJ/IheRVkl2FvLUnlsDk4HOK0Lu20+TSTfW
cNzbbZ1hCXE6y+blSSVIRcbcLnr/AKxencAyqu6reR313HLEGCrbQREMOcpEiH8Mqce1Uq2E0Ei2
kmnvoI2jtvtMkKq7OiMoMZPG3DFoxwxI3gkcHABds/EXk6XZ2/8Aaur2X2SNk+z2TbUmy7PktvGw
nftztbG0HnoMrS7u3txeQXRlWG7gELSRIHZMSI4IUkA8oB1HXPbBdDo5ktYZpb+ztnuFLQRTM4Mo
3Fc7gpRQWVh8zL0ycDms2gDS0C6s7HW7S9vXnWO2lSYCGIOXKsDt5ZcA4PPP0oiurOzj1WCB55o7
m2WKJ3iCHIljc7lDHA+RhwT2/CKx043kUs8l1Ba28TKjTTbyN7ZKrhFY5IVj0xx16Zu6dpEM7atD
JPbN5Fossdx5pEa5liG/1PyM3ykbsnG3dxQBSvryO5tNNiQMGtbYxOWHBJlkfj2w4/HNS6rcW2qa
3LdxStEt5KZZfOTAhZ2JIypYsoz97AJ/u02bSmiu4Ijd2xhnjMsd1uZYygLAt8wDcFWGNuSRwDkZ
lGhSy3dhDbXdtcx3s/2eKePeEEmVBBDKG43qc4xzxnBAAIrnTrWC3aSPWbG4cYxFEk4Zue26MD35
NM1i8j1DW7+9iDLHcXMkqBxggMxIz780+70pra0Nyl3bXKJIsUwgZj5TkEhSSAGztblCw+XryM2L
rQTZWt3LPfQGS1YRSQxq5KTFseWxIAzhZDuUsP3ZGeRkAu3HiL7Rpwj/ALV1eLFolt9gjbbAdsYj
zu39DjcRs5yVz/FVfStTsbSyEVzNePHuLS2LQpLDMfUMWBiYrhdyqWHJB52iv/YUvk/8fdt9r8j7
R9j+fzPL2eZnO3Z9z5sbs4468VSt7OS5gu5UKhbWISuGPJBdU498uPwzQBY0u7t7cXkF0ZVhu4BC
0kSB2TEiOCFJAPKAdR1z2wXafdWf9n3VhevPFHNLFMJYYhIQUDjbtLLwfMJznjb0OeC1s7OPT0vb
8zmOeV4IhARlCoUs7Z6gb1woxu5+ZcctutIuLR9RRnid9Pn8mZUJJ6su8cfdBUDJxy6jvQBauNXt
5H1IKkuyexhs4SQMnyzD8zDPGRETgZwTjJ61V0XUf7L1RLrdKmI5I/MiOHj3oyb15HK7sgZGcdR1
pklgtrqQtL24WEKoaVlUuUJUMU28fOPukHADDBIGTVi7sbKy8S31jPNKtnazzIG6u4QttXIGAWIC
7sYGc4wMUAP1W/XUprWJtX1C7VWOZ9QJxGGwOEDOQBjJIJJ4GOOa9zp1rBbtJHrNjcOMYiiScM3P
bdGB78mrGtaXHYRW0gtbyxklZ1a0vG3SALtxJ91flbcQPl6oeT0FSWzjt9KinmLC4uWDwIp48oF1
Zm9ywAHP8LZA+UkA3b/xLb3Onanbrdam6XkYENo5At7TEqPsUbjkAKVBAXAH3fm+XC1W8jvruOWI
MFW2giIYc5SJEP4ZU49qu3ukW8A1C2ieX7Zpf/H0zkeXL+8WNtgxlcMwAzncMn5SNpq/2Rcebt3x
eX9k+1+dk+Xs25xux13fu/8Af+XNAG/pUttJ4dhtZZ4lB8wTXf2mGOW1RjgoEdTI6gZfEZXd5rLy
c1kaPf21nFKst5eW5dhvSO3juYZgOgeN2UZU5IJ3dRgKRkw2NhbXOk6ldSXDC4tohJHCi8Y8yNCW
J7fPwBzkHOMDdNpNjbXkRH2HUNRuyzE29k20xINvzk+W+QSxHbG3nO4UAGm6/NpGt/bdP8+2tDcr
K1nHcMA6K2RGx/iGCRkg9TTIdauJ5pxqlzc3UdzALaSV5DJIiB1cFdx5wyg44yMjIzkUr+3jtNQu
baKdbiOGVo0mTpIASAw5PB69TVegDVkvrKW7tUlhlksbKBoolbhpTl3BfB4Bd+QDkLwCSNxNS1Zd
YuIb7UBLLfNIRdum1BLGAoXHBCtjcvTGApwTnOVRQBq3Nzp8Gky2NjNcz+fPHM7zwLFs2K4AADtn
PmH0xt754r6neR3VyqW4ZbO3UxWquPmEe4sN3+0SxY9sk4wMAUqKANr+17fb9s2S/wBofZPsWzA8
rZ5Xlb85znZxtx975t2PkosNXt7OzTT9kpsrvH9pgAb5MOduw542jDDplid25cCsWigDVtrnT59J
isb6a5g8ieSZHggWXfvVAQQXXGPLHrnd2xzU1O9/tLVry+8vy/tM7zbM527mJxnv1qrRQAUUUUAF
FFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8HYr
Hoh7VlVq6Mlt5Op3NzaRXX2a0EkccrOF3GaNMnYyno570AZVFFFABViyvZ9PulubZlWQKy/OiuCG
UqwKsCCCCRyO9V6ltraa7uFggTdI2eMgAADJJJ4AABJJ4ABJoA0NY1g6rbabG0cSPbQNG/l28cQL
GR242AcYK8eu445JNe8vI57DTraIMot4mEoIwGkaRiWHqdvljJ5+UDoBVjUrGy09tMeKaW6hng82
Vl/d7iJXQhMgkDCcEjPcgfdD7y30ttEF7a295byNc+VGJrlZRIAuX6Rrgruj69d/HQ4AKM/9n/2f
afZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x0NY9aQ
sLY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2ytKMuWBT
Jfb91sYzznAgsbjSxo72txcXlrPJKTK8FssoljwpRSTIpADBjgcE7SfujF3S9At7yztJGtr6ZLjP
nXsLgQWXzlf3g2HO0AOcsvysOnU5+l2tlcIkb2t9f3ssjKtrZvsZVUA7uUfdnLcDGNhJzngAr26a
W00yXM94kYb9zNHCrEjnhkLDBPByGOMEYOcjSi1bT21ea5la5hCWkUFpcRxK8kbxrGgk2lgASqN3
O0sCCSAamsdAt5f7W8u2vtX+x3aQR/2e4Xeh8z959x+PkXGP73Wspv7Pt9TnS5sL5YUyn2c3KrKj
jAO5jHjqDxtH6cgAP7Kh1BMPc3Vm0bK7SQiJ0YggMFDkNtJDYLDOMHA5qW7nspLSy020nlMMc8kr
XNzF5eDIEUgqpc4AjByCScnjjm0+k6fJq9jbQrcxCa08+S3eVZJC5VnSNWCgEuvl44ODJjBIxUWq
6dDpn2O4Njc2zvIwexv2Jcqu0hiQqHa24rwB9xsH0AK+vTQT6mDbTrPGltbxeYisAxSFEbAYA4yp
6gVm1pa9DBBqYFtAsEb21vL5aMxCl4UdsFiTjLHqTWbQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8
HYrHoh7VLcWWlWii0nNyt0bRLgXPmAx7niEqp5YTPO4Jnd1+bpxUVpoUt3Fbf6XbRT3f/Hrbyb98
3zFBghSoyylfmYdMnA5oAqWV79m3xSx+daS486EnG7HRlP8ACwycN7kEEEgzafd29hqM8gMskJgu
IUOwBjvidFJGSBywJ5OOetFppTXNoLl7u2tkeRooROzDzXABKggELjcvLlR83Xg4u3+ixDW9YSOe
CysLS8eBXmLsFJZ9ifKGYnCNzjHy8nJGQDP0e8j0/W7C9lDNHb3McrhBkkKwJx78VSq6+mSRarFY
STQIZGj2zu+I9jgFXJIyF2sDyMgdQDxVe6g+y3c1v5sUvlSMnmRNuR8HGVPcHsaALGsXkeoa3f3s
QZY7i5klQOMEBmJGffmjVbyO+u45YgwVbaCIhhzlIkQ/hlTj2qrEYxMhlVnjDDeqNtJHcA4OD74P
0rVvLfS20QXtrb3lvI1z5UYmuVlEgC5fpGuCu6Pr138dDgAqX15Hc2mmxIGDWtsYnLDgkyyPx7Yc
fjmiW8jfRLWyAbzIbmaViRwQ6xAY9/kP6VdvdIt4BqFtE8v2zS/+PpnI8uX94sbbBjK4ZgBnO4ZP
ykbTi0AXYryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OKt6Vo8d
1HJLeStCrW08tuij5pTHG7Z56ICmCe54HRisuk6RHeaebkafqGpSGVo2hsX2mEAKQz/I/DbiBwPu
N17AFTT7qz/s+6sL154o5pYphLDEJCCgcbdpZeD5hOc8behzwXl1Z6hqyvK88NosUcIdYg7kRxqg
bbuAydoJG7jPU45q39vHaahc20U63EcMrRpMnSQAkBhyeD16mrWi6fHqFzMjxzztHF5iW1ucSznc
o2qcNyAxY/KeEPTqACxqV7pV34lub4C5ls7uSWR1kjCPEzlsEAOQ20kNyRuxg4HNRXd3p4tLKxtz
c3FtDPJPI8iLC7bwgKgAuBgR/e5+90450E0K3/tyK0MMsJmsbiY2t3KFe3kWOXaHbCgcor8hRhhn
I5OPfacbOKKeO6gureVmRZod4G9cFlw6qcgMp6Y569cAFi5udPg0mWxsZrmfz545neeBYtmxXAAA
ds58w+mNvfPFq/1bT5m1e6t2uWudUz5kMkSqkO6VZThwxL4KbfurnOeMYOBWxcaPHaaJPcTSt9vh
uYopIAOIg6yHDH+/8gyP4eh+bIUAdb3ulWim7gFyt0bR7c23lgx7niMTP5hfPO4vjb1+XpzVKwvI
7SC/DBvOntvKhdRyhLoW57AoHU46hsdCau/2Rb7fse+X+0Psn23fkeVs8rzdmMZzs53Z+98u3Hz1
i0AaVreWcmnpZX4nEcErzxGADLlgoZGz0B2Lhhnbz8rZ4G1OOfUL3VLmFZLyaVpY49mYVdiSWIJO
QOynIOeSQCrGj2FtfyXK3Fw0bR200sUaLkuyRO4yegUbOe/IAHJKtsLS3a0nv7wSvbQSJEYoXCO7
uGI+YghQAjEnB7DHOQAWNAuIv+EmtL/UL9YlhuUuZZZg7mQhwSPlViWPJ5496x6tajZfYLwwCTzE
MccqNjBKOgdcjnBwwyMnBzyetRW1tNd3CwQJukbPGQAABkkk8AAAkk8AAk0ARVau7lZ7awjV5WME
BjYOFAUmR2wuOSMMDzzkntitC40iyS90aCK//c30amW6kXaiHznjLAHB2gLnnBPU7c4EuraH9k0l
r7+y9T07ZOkOy+O7zdyucqdiY27Oeudw6Y5AMCuvnltp/DUcLTxKq2g33a3MIklYDcsTxbfOYBts
Y+baAivjArkK0pNHMNsHlv7NLkxCYWrM4fYVDg7tuzJUhgN2ecfe4oAt6VqdjaWQiuZrx49xaWxa
FJYZj6hiwMTFcLuVSw5IPO0Z9nY29zEXl1WztGDY2TLKSR6/IjDH454q7okGj3txBa3drfbzuaee
K7RVSNQWZghjJO1ATjOTjjrisWgDYgksILa80u5vGeCSWKZbq0hLglFcbdrlDj94efVehzkQxXln
bR6rDAJzHc2ywxM4GSRLG5ZgOgOxuBnGQMnrUVjpxvIpZ5LqC1t4mVGmm3kb2yVXCKxyQrHpjjr0
zLb6ZH9puzPMsttZxCeRrd8mVCyqoUkcEl1zkZUZyuRtIBYt9Xt4ptJdklH2S0kgd1A3I7PKVkTn
kr5isORyvUda0JPEtu13obyXWp339nXxuJJ7sgu6ExHCjcduNjDBY+uecDC1GzjtjbTQFjb3cXnR
K5y6DeyFWI4JDI3I6jBwM4D9CtobzxDplrcJvhmu4o5FyRlS4BGRz0NADIryNNEurIhvMmuYZVIH
ACLKDn3+cfrXReIpba50xys8QCbDHLHcwub5xhd8kaKJFYgu+ZC20ll6tWBd6U1taG5S7trlEkWK
YQMx8pyCQpJADZ2tyhYfL15GbV7pFvANQtonl+2aX/x9M5Hly/vFjbYMZXDMAM53DJ+UjaQC3ceI
vtGnCP8AtXV4sWiW32CNtsB2xiPO7f0ONxGznJXP8VUtO8S6pplhc2kF9eJHJF5cSpcMohPmK5ZQ
O5ww4x94/jj1v6DoKXmo6V9ruLZY7udCttI7K80XmbWIYDaPuuMFgx28DlcgFWLUbe6tPs+qtcvs
nkuVkiIZ5XcKHVix4zsX5+cc5Vs8S2/iCa21O81qL93q805kidVBjiD7jIQDnnkAZyMFu+CH+H9I
F7Be3rx2cy20W5Ibi7SJWfei/MN6sFw5IOQNwAyehwqALsh06bUgwM9tZsoLBIxIyNtG4KCwyu7I
GWztxnJq7qV7pV34lub4C5ls7uSWR1kjCPEzlsEAOQ20kNyRuxg4HNUtMgtp7lluVnlO0CK3t+Hn
csFCKdrYPJPQ5246kVdvdNsrLU7dLn7TaRPA00ttMczRMN+I2O0YL7VIJUYEgOCBkgDZrrS/K0+w
V7yWyhuXmmlMSxyEP5YZVXcwyBHkEnkt0GOc+9vJL66aeQKp2qiog4RFUKqjPOAoA5JPHJJ5rQ1r
S47CK2kFreWMkrOrWl426QBduJPur8rbiB8vVDyegx6ANq91e3nGoXMSS/bNU/4+lcDy4v3iyNsO
ctllBGcbRkfMTuB/a9v/AGd/ZGyX+zPL87bgeZ9r8vHmZz03fLjps5xu5qvNo5jtZpor+zuXt1DT
xQs5MQ3Bc7ioRgGZR8rN1yMjmrH9kW+37Hvl/tD7J9t35HlbPK83ZjGc7Od2fvfLtx89ADdIuNLg
sNQivbi8jkuohCBDbLIFAkjfdkyLz8hGMd857VXtP7KltBHevc28yyM3mwQiXzFIGFILqF2kE5Gc
7u2BnPrY0DR49T1C0W8laCzmuUt96jLyOxA2oD3GQSeigjOSVVgCjqd7/aWrXl95fl/aZ3m2Zzt3
MTjPfrVWtjRdLjv4rmQ2t5fSRMiraWbbZCG3Zk+63yrtAPy9XHI6Gpq1nHYalJbxFtoVGKuctGWU
MY26fMpJU8DlTwOgAKVFWLKzkvrpYIyqnazs7nhEVSzMcc4CgngE8cAnipbi0gs5oX+1wX1uzfMb
V2Q8YyvzqCpwRg7SOeM4IABSorYvLfS20QXtrb3lvI1z5UYmuVlEgC5fpGuCu6Pr138dDjHoAKK0
pNHMNsHlv7NLkxCYWrM4fYVDg7tuzJUhgN2ecfe4osbC2udJ1K6kuGFxbRCSOFF4x5kaEsT2+fgD
nIOcYG4AzaKK0NMhspd32mO5uZmkSOG1tm2PIWzkhirDghRtxk7xjocgGfRW/Holu2t3VoqXNx5M
CypZxMBO7ts3Q52n5k3tu+X/AJZtwvarrul/2XNa/wCj3Nt9og877Pdf6yL52TBO1c52bvujhgOc
ZIBlUVd0ezj1DW7CylLLHcXMcTlDggMwBx781b1iwjsYogdF1XT5HY7WvZchwOoA8pOeRzn8OaAM
eiul1TQLezs7uRba+hS3x5N7M4MF784X92NgxuBLjDN8qnr1HNUAFFb+j6TZajFDH5N9M7Y+03cR
2w2ILFQZAUOQAu8ncowccYJqDQNHj1PULRbyVoLOa5S33qMvI7EDagPcZBJ6KCM5JVWAMeiirFna
rdTFXuYLZFXc0sxOAOnRQWJyRwAT36AkAFeitVNEY6hDAbmJ7eWB7lbiIMQ0SBy5VWCnP7twAduS
OuDmor+0t1tIL+zEqW08jxCKZw7o6BSfmAAYEOpBwO4xxkgGfRRWxcaPHaaJPcTSt9vhuYopIAOI
g6yHDH+/8gyP4eh+bIUAx6K6X+wLf+zvN+zX3l/ZPP8A7T3j7Nv8vf5eNnXd+6+/97t/DXNUAFFF
FABWroz23k6nbXN3Fa/abQRxySq5XcJo3wdiseiHtWVRQAUVu6TpEd5p5uRp+oalIZWjaGxfaYQA
pDP8j8NuIHA+43XszTtIsrnxYulTX+61+1+Qs8K5MwLhAU6gZznJOAM9TgEAxasWV7Pp90tzbMqy
BWX50VwQylWBVgQQQSOR3qvViys5L66WCMqp2s7O54RFUszHHOAoJ4BPHAJ4oAu6xrB1W202No4k
e2gaN/Lt44gWMjtxsA4wV49dxxySa95eRz2GnW0QZRbxMJQRgNI0jEsPU7fLGTz8oHQCnvp9tFd2
qPqltJayyBZLiBXbyhkbiUYK3AOemD0ByDixfWNmdHTULa2vLRWlEca3MokFwCG3MhCJwhUA9eXH
TuAUZ/7P/s+0+z/aftvz/avM2+X1+TZjnp1z36Vet7jS18OXNnLcXgu5pUmCrbKUBRZAF3eYDg+Y
Mnbxjoax60o9HM1sXiv7N7kRGY2qs5fYFLk7tuzIUFiN2eMfe4oAu2GrafC2kXVw1ytzpePLhjiV
km2ytKMuWBTJfb91sYzznAgsbjSxo72txcXlrPJKTK8FssoljwpRSTIpADBjgcE7SfujDrLSLeca
fbSvL9s1T/j1ZCPLi/eNGu8Yy2WUg4xtGD8xO0YtAF23TS2mmS5nvEjDfuZo4VYkc8MhYYJ4OQxx
gjBzkGq3kd9f+bEGEaRRQoXGCwjjVAxHYnbnGTjOMnrT7C0t2tJ7+8Er20EiRGKFwju7hiPmIIUA
IxJwewxzkFxp8Npqgtp7rbbmNJhN5ZLeW6CRflB+8QwGM4z/ABY+agC02r27eJ7u/CSizn8+JFwN
0UUiNGoC5x8isMLkD5QMgc1Fdz2UlpZabaTymGOeSVrm5i8vBkCKQVUucARg5BJOTxxzX1izj0/W
7+yiLNHb3MkSFzkkKxAz78VFYWcmoahbWURVZLiVYkLnABYgDPtzQBa16aCfUwbadZ40treLzEVg
GKQojYDAHGVPUCs2tK6s7OTT3vbAziOCVIJRORlywYq646A7Gypzt4+Zs8WL3SLeAahbRPL9s0v/
AI+mcjy5f3ixtsGMrhmAGc7hk/KRtIAXF7pV2ou5xctdC0S3Ft5YEe5IhEr+YHzxtD429fl6c1LY
atp8LaRdXDXK3Ol48uGOJWSbbK0oy5YFMl9v3WxjPOcDArastIt5xp9tK8v2zVP+PVkI8uL940a7
xjLZZSDjG0YPzE7QARW13p8mkxWd8blfs88k6CBFPm71QFSSRs/1Y+bDfe6cc6SeJI/7Q1p4rzUN
Pjv7z7Sk9qMyAAyYRl3rwfMyfm4Kjg5yMq1s7OPT0vb8zmOeV4IhARlCoUs7Z6gb1woxu5+Zcc1b
+zk0/ULmylKtJbytE5Q5BKkg49uKALFxewz62Lu4NzfQiRDJ9qlPmTquAcsMlcgep2ggZOMmpdSQ
y3c0lvB5ELyM0cW8t5ak8Lk8nA4zT7OzkvJiiFURF3yyucJGndmPpyBxkkkAAkgHVfR7MeI4rCKW
eS3e2jnTcAskpaAShAOQGZjtA+bBYfe7gGFV28vI57DTraIMot4mEoIwGkaRiWHqdvljJ5+UDoBV
vWtLjsIraQWt5YySs6taXjbpAF24k+6vytuIHy9UPJ6DHoA2r3V7ecahcxJL9s1T/j6VwPLi/eLI
2w5y2WUEZxtGR8xO4YtdLqmgW9nZ3ci219Clvjyb2ZwYL35wv7sbBjcCXGGb5VPXqIdQ0FA6G1uL
ZZDYw3ItN7NIw8hXkbOCq/xthmBwOByuQBukeJbjTwkUqwSQRW08MW60id1Lo+BuZc7d75IzjBIw
elV7a50+fSYrG+muYPInkmR4IFl371QEEF1xjyx653dsc5VbGk2NteREfYdQ1G7LMTb2TbTEg2/O
T5b5BLEdsbec7hQBR1O9/tLVry+8vy/tM7zbM527mJxnv1qbS7u3txeQXRlWG7gELSRIHZMSI4IU
kA8oB1HXPbBsf2ZZ202ozTzNd2Vnci2U2zhDOW37WDEMFXEbHOG7DvuFHUbL7BeGASeYhjjlRsYJ
R0Drkc4OGGRk4OeT1oA1YtX0+31CzGy5msrWxntN2Fjkl8wSnOMsF5lx1bgZx2qlqF1Z/wBn2thZ
PPLHDLLMZZohGSXCDbtDNwPLBznnd0GOaVtbTXdwsECbpGzxkAAAZJJPAAAJJPAAJNatxpFkl7o0
EV/+5vo1Mt1Iu1EPnPGWAODtAXPOCep25wADFrduvEtxf6PfWt2sDXFzcxTGRLSJCQok3Esqg7iW
XnrjdzyQTVtIjs9P+0nT9Q02QSrGsN8+4zAhiWX5E4XaAeD99enfCoA2v7Xt9v2zZL/aH2T7FswP
K2eV5W/Oc52cbcfe+bdj5Kxa0pNHMNsHlv7NLkxCYWrM4fYVDg7tuzJUhgN2ecfe4qxokGj3txBa
3drfbzuaeeK7RVSNQWZghjJO1ATjOTjjrigCLQrnT7S5mmvprlMwSwoIIFkz5kboScuuMbge+fam
Wt1ZpFd6fO8/2KaVJVnSIGRSm8KShbBBDtkbuCQcnGGzau2OnG8ilnkuoLW3iZUaabeRvbJVcIrH
JCsemOOvTIAzUb37feGcR+WgjjiRc5IRECLk8ZOFGTgZOeB0ptlez6fdLc2zKsgVl+dFcEMpVgVY
EEEEjkd6lSOzs72SO9VruEL8rWlwEDZwQ2SjcY7EAg9cEEVLrNrZ2s1qLRJ4jJbJLLFNKJChbJX5
gqggoUbpxux1FAD9Y1g6rbabG0cSPbQNG/l28cQLGR242AcYK8eu445JLLq6s4dPewsHnmjllSaW
WeIRnKBgqqoZuPnYkk85HAx81CKNppkiUqGdgoLsFGT6k8Ae54rX8Q6YulGzt1jthmBJGljuVleR
mjRm3BWIABYhSAMjnLdaAMWti8utLvo1uZXvBdrbRwi3WJdmUjWMN5m7OPlDEbP9nP8AFUur6QNL
0XT2Mdm8lwpkkmS7SRwd8i7VCuQUwgO7B+bI3dqwqALtheR2kF+GDedPbeVC6jlCXQtz2BQOpx1D
Y6E1SrpYtFt00Oyv20fV7xJYHmmntpQkUe2R1wf3TYwEBOT3rM0qOxuZorSWxvLm7nlEcXk3aQgl
sADDRtznvkDmgA0+6s/7PurC9eeKOaWKYSwxCQgoHG3aWXg+YTnPG3oc8FveWdtc3cCCc2F1EIHd
gPNADK28DpncgO3PTK7s/NVW/S2j1C5SykaS0WVhC79WTJ2k8DkjHYVLpkFtPcstys8p2gRW9vw8
7lgoRTtbB5J6HO3HUigA1G8juTbQwBhb2kXkxM4w7jezlmA4BLO3A6DAycZJo95Hp+t2F7KGaO3u
Y5XCDJIVgTj34q3qOjNFqFtbWsE8U08XmPa3LDzLcgsCHbCgDau/JCgKwJ4GTDq9hbWMenta3DXC
3FsZXkK7QWEsiHaOu35OM8nqQM4AA+5u9Pj0mWzsTct9onjncToo8rYrgKCCd/8ArD82F+7054lv
dXt5xqFzEkv2zVP+PpXA8uL94sjbDnLZZQRnG0ZHzE7hi1pTaOY7WaaK/s7l7dQ08ULOTENwXO4q
EYBmUfKzdcjI5oAza6LS9W0uCfRbq9F55mmMoEUKKRIBM0m7eW4I3n5dpzt+8N2Vr/2Rb7fse+X+
0Psn23fkeVs8rzdmMZzs53Z+98u3Hz1i0AXbG8jtrTUonDFrq2ESFRwCJY359sIfxxVKtjSbG2vI
iPsOoajdlmJt7JtpiQbfnJ8t8gliO2NvOdwp2naRZXPixdKmv91r9r8hZ4VyZgXCAp1AznOScAZ6
nAIBn2LWW+VL5ZRHJHtWWJdzRNkHcFJUNkArgkfez2xV6e+06S5sYGSeaytbZ7fzGUI5LNIwfYGI
yrSZC7sNs5Izxj1YsrOS+ulgjKqdrOzueERVLMxxzgKCeATxwCeKALWoXVn/AGfa2Fk88scMssxl
miEZJcINu0M3A8sHOed3QY5za0l0czahZWdrf2dybuVYUkjZwFckDDBlDAfMOduDzjJBALqzs5NP
e9sDOI4JUglE5GXLBirrjoDsbKnO3j5mzwAbF/4lt7nTtTt1utTdLyMCG0cgW9piVH2KNxyAFKgg
LgD7vzfLn/2vb7ftmyX+0Psn2LZgeVs8ryt+c5zs424+9827HyVi1q/2FL5P/H3bfa/I+0fY/n8z
y9nmZzt2fc+bG7OOOvFAGVW1oPiGfSLuyVxFJZQ3a3DI1tHI45XcUZhlSQo6EdBVvS9At7yztJGt
r6ZLjPnXsLgQWXzlf3g2HO0AOcsvysOnU81QBq2l9ay2l7aX7Swpczx3HmW0CthlDjaE3IAD5hPB
42gY54r6reR31/5sQYRpFFChcYLCONUDEdiducZOM4yetW/DulDVL99/kNHDFJKUluEi3lY2ZRyw
JUlQGI6Ak5XrUNvZRz3l/LOqx29mpmljtn3ZG9UCIxLDG51G4k4GT82MEAi0q8jsb/zZQxjeKWFy
gyVEkbIWA7kbs4yM4xkdaZerp6bFsZbmbqXknjWP6AKGbpzzu5z0GMl+o2cdsbaaAsbe7i86JXOX
Qb2QqxHBIZG5HUYOBnAr2ttNeXcNrbpvmmkWONcgZYnAGTx1NAFi8vI57DTraIMot4mEoIwGkaRi
WHqdvljJ5+UDoBVKtqfSra38PXN4l3bXji7giSWBnG0FJSylWCn+FDnGPQ8MKr3lhbQaJYXkNw00
08ssco24RCqxsAM8kjzCCemRxkDJAJry60u+jW5le8F2ttHCLdYl2ZSNYw3mbs4+UMRs/wBnP8VG
kXGlwWGoRXtxeRyXUQhAhtlkCgSRvuyZF5+QjGO+c9qx62rLSLecafbSvL9s1T/j1ZCPLi/eNGu8
Yy2WUg4xtGD8xO0AGLWrpVzp8NpeR3U1zbzy7Vjnt4FlIjwwkXBdcbsryOcAjoxBZa2dnHp6Xt+Z
zHPK8EQgIyhUKWds9QN64UY3c/MuOat/ZyafqFzZSlWkt5WicocglSQce3FADvL0/wDtDZ9pufsX
/Pb7OvmdP7m/HXj73Tn2p+o3kdybaGAMLe0i8mJnGHcb2cswHAJZ24HQYGTjJZY2LXzy/vooIoY/
Mlml3bY1yFBIUFjlmUcA9fTJD7rTja3NujXUDQXCh47ld+wpuKlsFd2AysD8ueOAeMgFWIxiZDKr
PGGG9UbaSO4BwcH3wfpWlc3OnwaTLY2M1zP588czvPAsWzYrgAAO2c+YfTG3vniLWbK3sNREFrJL
JCYIZVaUAMd8SucgZA5Y8ZOPU9az6AN+/wBW0+ZtXurdrlrnVM+ZDJEqpDulWU4cMS+Cm37q5znj
GDgVa+xbNP8AtU8nl+Z/x7x4y0uDgt7KMEbu5GADhiuh/ZFvt+x75f7Q+yfbd+R5WzyvN2YxnOzn
dn73y7cfPQAaVe6VZTWF+4uUvLKRZDFHGHS4ZXLgly4KZGF4U4255JxRoPiGfSLuyVxFJZQ3a3DI
1tHI45XcUZhlSQo6EdBT7DSA3hy91R47OVlZY41lu0UoCshZtocNvBQbVPUE/K3bMgsvtNpJJBJu
niyzwY5MYGSyn+LHO4dQOeRuKgEEsjTTPKwUM7FiEUKMn0A4A9hxV/RdQj0+5md5J4Gki8tLm3GZ
YDuU7lGV5IUqfmHDnr0ObV3TrOO5NzNOWFvaRedKqHDuN6oFUngEs68noMnBxggGrf8AiKObULG4
Vry6EFnJaTPdyfvJQ5lDENzg7ZOM52nj5gMnNv7u3a0gsLMyvbQSPKJZkCO7uFB+UEhQAigDJ7nP
OBfs9Gt59UgRPmtryxuLmBZpApQqkoAduBw8fXgEAEgZIGbfacbOKKeO6gureVmRZod4G9cFlw6q
cgMp6Y569cAFKt268S3F/o99a3awNcXNzFMZEtIkJCiTcSyqDuJZeeuN3PJBwq2LjR47TRJ7iaVv
t8NzFFJABxEHWQ4Y/wB/5Bkfw9D82QoBP/a2n7/t+65+2/Yfsf2fyl8v/UeRu8zdnp82NnXjP8VY
FdL/AGBb/wBneb9mvvL+yef/AGnvH2bf5e/y8bOu7919/wC92/hrmqACiiigAooooA1ba50+fSYr
G+muYPInkmR4IFl371QEEF1xjyx653dsc2LPVrN/GJ1u9E8Mf2z7YI4UEpJ8zfsyWXjqM/pRpOkR
3mnm5Gn6hqUhlaNobF9phACkM/yPw24gcD7jdezk0W0a01cC+tttnfRQpeyMQjRkTDIVdxbcVQ8B
iOvTJoAxblbdbhltZZZYRja8sYjY8c5UMwHOe5qxpV5HY3/myhjG8UsLlBkqJI2QsB3I3ZxkZxjI
61Fe2cljdNBIVY7VdXQ8OjKGVhnnBUg8gHnkA8U7TrL7feCAyeWgjkldsZIRELtgcZOFOBkZOOR1
oAsKdGg1CyIN5dWiyq1z5kaxF0yMqqhjzgHndzkcDGSarJY3DG4hvry4uGYBhNaJCoUDAA2yNgDA
AUAADpjGKm/smO/m05tOLRR39ybVI7l9xjkGzOWVRlf3inOAeoxwCYbqzs5NPe9sDOI4JUglE5GX
LBirrjoDsbKnO3j5mzwAZtdVb+JbeGyMf2rU0jaxe1+wRkLbq5hKeZ975tzfORtHLE5OPm5WtiDR
4/7Jvrm6lZLmK2S4hgUc7DJGu5/QEPlR1I54G3cAOstXt4Bp9zKkv2zS/wDj1VAPLl/eNIu85yuG
Yk4zuGB8pG44tdLpegW95Z2kjW19Mlxnzr2FwILL5yv7wbDnaAHOWX5WHTqeaoA0LC7t1tJ7C8Mq
W08iSmWFA7o6BgPlJAYEOwIyOxzxglxd2+paoJbky21sI0iHloJXVEQInBKhjhVyeO5A7U7SLS2u
mdZbe8vLgsFhs7Q7XcYYs2djcLtHGMndnPBrQTQ7Y3mr23mKn2ezjnVriTH2di8O9Xx1ZA7oRjJI
4XOBQBn6/dWd9rd3e2TztHcyvMRNEEKFmJ28M2QMjnj6VVsLyTT9Qtr2IK0lvKsqBxkEqQRn24p1
9YtYvF++inimj8yKaLdtkXJUkBgGGGVhyB09MEwRRSTzJFFG0kjsFREGSxPAAHc0AX7q8s49Peys
BOY55UnlM4GUKhgqLjqBvbLHG7j5VxzYvdXt5xqFzEkv2zVP+PpXA8uL94sjbDnLZZQRnG0ZHzE7
hFqOl29lpNncR3PnTyTzQzbMGNSixkBSPvffOW6EjjIAZn3Gjx2miT3E0rfb4bmKKSADiIOshwx/
v/IMj+HofmyFAMetqy1e3gGn3MqS/bNL/wCPVUA8uX940i7znK4ZiTjO4YHykbji1u6Rp2nagbe2
KXks8q757iJwsdku8rudSh3Kow5bcow2MjGaAKlreWcmnpZX4nEcErzxGADLlgoZGz0B2Lhhnbz8
rZ4q395JqGoXN7KFWS4laVwgwAWJJx7c1oaBo8ep6haLeStBZzXKW+9Rl5HYgbUB7jIJPRQRnJKq
2PQBasdRudNeVrYxfvY/LkWWFJVZchsFXBHVQenatWfxBFc67HeyRbYvsIs38qFEK5t/KZgq4BwW
YgEjgAZUYxi20cMtwqzz+TFyWfYWIAGeAOpPQcgZIyQORqtpFuvi270oPKYYJ540GR5kvl7tqA4x
ucqFHB5YcHpQBX1C6s/7PtbCyeeWOGWWYyzRCMkuEG3aGbgeWDnPO7oMc5tbGtaXHYRW0gtbyxkl
Z1a0vG3SALtxJ91flbcQPl6oeT0GPQBv3+rafM2r3Vu1y1zqmfMhkiVUh3SrKcOGJfBTb91c5zxj
BfJq2l+aL2IXguxp4szCyKUc/ZxCX37sqBkkDac7eo3YWu0Gj3GkX11Ba31u8HlrG8t2kqtIzcKV
Ean7iyHOcfL7ireoaLb2GnJP/Y+rvG1pDL9u80CDfJGrf88ugZsY3dsZoA5qtC0/sqW0Ed69zbzL
IzebBCJfMUgYUguoXaQTkZzu7YGc+t3SdIjvNPNyNP1DUpDK0bQ2L7TCAFIZ/kfhtxA4H3G69gBk
ur2+oXeqC8SWC21C7+1loQJHicFyBglQwxIwPI7HPGDn6je/b7wziPy0EccSLnJCIgRcnjJwoycD
JzwOlbtt4bjE2sxGz1DU5NPvFtlSyOwkHzQXI2Px+7H/AH11rAvkjjvZY4ree2VG2mG4fc6EcEMd
q85z2GKACyvZ9PulubZlWQKy/OiuCGUqwKsCCCCRyO9a8niUve6DdtbxF9NwzpHDHCHYTM+BsHTB
UdODuOOSTi21tNd3CwQJukbPGQAABkkk8AAAkk8AAk1q3GkWSXujQRX/AO5vo1Mt1Iu1EPnPGWAO
DtAXPOCep25wACK5udPg0mWxsZrmfz545neeBYtmxXAAAds58w+mNvfPGVW7q2kR2en/AGk6fqGm
yCVY1hvn3GYEMSy/InC7QDwfvr074VAGxeXWl30a3Mr3gu1to4RbrEuzKRrGG8zdnHyhiNn+zn+K
qlheR2kF+GDedPbeVC6jlCXQtz2BQOpx1DY6E1sf2Bb/ANneb9mvvL+yef8A2nvH2bf5e/y8bOu7
919/73b+GodB0FLzUdK+13Fssd3OhW2kdleaLzNrEMBtH3XGCwY7eByuQDArS0+6s/7PurC9eeKO
aWKYSwxCQgoHG3aWXg+YTnPG3oc8ZtbGm6ZBe6JfXEk0Fu8NzAvnzOwCoyy5G1QSxJVOgJGM8DJo
AzLn7P8AaGFr5vkjAUy43NxySBwMnJxzjOMnGTY1i8jv9YvLqEMsEkrGFGGCkecIuBwAFwABwAMC
pRos322eCSeCKOGJZ3uGLFBG23Y+AC2G3pgbcjdyBg4q3lqtrMFS5guUZdyywk4I6dGAYHIPBAPf
oQSAV6u6reR313HLEGCrbQREMOcpEiH8Mqce1V7WOGW7hjuJ/IheRVkl2FvLUnlsDk4HOK076xsz
o6ahbW15aK0ojjW5lEguAQ25kIROEKgHry46dwCpfXkdzaabEgYNa2xicsOCTLI/Hthx+OapVsXG
jx2miT3E0rfb4bmKKSADiIOshwx/v/IMj+HofmyFx6ANjR7rS7C5s9Qle8F3aSrKIViVklKtuHz7
gUB4B+VsYzznAqaVeR2F/wDaZAxKxSiMoOUkMbBGHoVYqc9RjI5rY0vQLe8s7SRra+mS4z517C4E
Fl85X94NhztADnLL8rDp1NfSdIjvNPNyNP1DUpDK0bQ2L7TCAFIZ/kfhtxA4H3G69gDCq1YtZb5U
vllEcke1ZYl3NE2QdwUlQ2QCuCR97PbFNv7eO01C5top1uI4ZWjSZOkgBIDDk8Hr1NS6dZx3JuZp
ywt7SLzpVQ4dxvVAqk8AlnXk9Bk4OMEAtz6wLa5sTpjsVsrZ7dZLiFMyB2kZt0Z3KBiUrjJ4Ge+A
zWNYOq22mxtHEj20DRv5dvHECxkduNgHGCvHruOOSS/+x45r228mVktLm2ku1Ljc8cab94OMBmHl
OB0DcE7ckCvf2lutpBf2YlS2nkeIRTOHdHQKT8wADAh1IOB3GOMkAz66q/8AEtvc6dqdut1qbpeR
gQ2jkC3tMSo+xRuOQApUEBcAfd+b5eVravdIt4BqFtE8v2zS/wDj6ZyPLl/eLG2wYyuGYAZzuGT8
pG0gB/a9vt+2bJf7Q+yfYtmB5WzyvK35znOzjbj73zbsfJWLW1/ZFvt+x75f7Q+yfbd+R5WzyvN2
YxnOzndn73y7cfPWLQBoWn9lS2gjvXubeZZGbzYIRL5ikDCkF1C7SCcjOd3bAzds9Ws38YnW70Tw
x/bPtgjhQSknzN+zJZeOoz+lVLWzs49PS9vzOY55XgiEBGUKhSztnqBvXCjG7n5lxzN/ZMdhNqLa
iWljsLkWrx2z7TJId+MMynC/u2OcE9BjkkAGZcrbrcMtrLLLCMbXljEbHjnKhmA5z3NWNKvI7G/8
2UMY3ilhcoMlRJGyFgO5G7OMjOMZHWmajZfYLwwCTzEMccqNjBKOgdcjnBwwyMnBzyetNsrOS+ul
gjKqdrOzueERVLMxxzgKCeATxwCeKALS3VnpmoWV3pbzzyW8qzb7qIRglSCq7FZuOOTu5zjAxkl1
eWcenvZWAnMc8qTymcDKFQwVFx1A3tljjdx8q45iuLSCzmhf7XBfW7N8xtXZDxjK/OoKnBGDtI54
zggW7y30ttEF7a295byNc+VGJrlZRIAuX6Rrgruj69d/HQ4AMeuluPEX2jThH/aurxYtEtvsEbbY
DtjEed2/ocbiNnOSuf4q5qigDfsNW0+FtIurhrlbnS8eXDHErJNtlaUZcsCmS+37rYxnnOBgUUUA
XdKvI7G7kllDFWtp4gFHOXidB+GWGfajTryO2NzDOGNvdxeTKyDLoN6uGUHgkMi8HqMjIzkUqKAL
uo3kdybaGAMLe0i8mJnGHcb2cswHAJZ24HQYGTjJZpl7/ZurWd95fmfZp0m2Zxu2sDjPbpVWigDY
uLrS4dEnsLJ7yaSW5imMs0SxjCLINu0M3Pzg5zznoMfMXFxpbeHLaziuLw3cMrzFWtlCEusYK7vM
JwPLODt5z0FY9FABW1ZavbwDT7mVJftml/8AHqqAeXL+8aRd5zlcMxJxncMD5SNxxaKANK1vLOTT
0sr8TiOCV54jABlywUMjZ6A7Fwwzt5+Vs8Vb+8k1DULm9lCrJcStK4QYALEk49uar0UAauhap/Zc
11/pFzbfaIPJ+0Wv+si+dXyBuXOdm37w4YnnGCzU72PUNQhaW/1C6jVVR7i6+eTGSTtXccAZ4Xcc
kE5GcDNooA1ddudPu7mGaxmuXxBFC4ngWPHlxogIw7ZztJ7Y96yqKKALsl5HdWQS6DG5hULDMoyW
QYGx/UAfdbqANvI27Lv9r2+37Zsl/tD7J9i2YHlbPK8rfnOc7ONuPvfNux8lYtFAF2K8jTRLqyIb
zJrmGVSBwAiyg59/nH60Wd5HYxGaEMb/AHYjkI4hH95fV+uD/DjIyxBWlRQAVd068jtjcwzhjb3c
Xkysgy6DerhlB4JDIvB6jIyM5FKigDdt9Ys4dSt98U72FtZz2iBSFlkDrLyeoUlpSe+0YHzEZNTU
Lqz/ALPtbCyeeWOGWWYyzRCMkuEG3aGbgeWDnPO7oMc5tFABW7deJbi/0e+tbtYGuLm5imMiWkSE
hRJuJZVB3EsvPXG7nkg4VFAG/wD2tp+/7fuuftv2H7H9n8pfL/1HkbvM3Z6fNjZ14z/FWBRRQAUU
UUAFFFaujJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9ABbXOnz6TFY301zB5E8kyPBAsu/eqAgg
uuMeWPXO7tjm0+r6fqH9sC+S5g/tC+S7RoAsnlY80kEEru/1gHUeueMHAooAu6reR31/5sQYRpFF
ChcYLCONUDEdiducZOM4yetM069+wXgnMfmIY5InXOCUdCjYPODhjg4ODjg9Kq1LbW013cLBAm6R
s8ZAAAGSSTwAACSTwACTQBp/2tHYTacunBpY7C5N0klym0ySHZnKqxwv7tRjJPU55AEN1eWcenvZ
WAnMc8qTymcDKFQwVFx1A3tljjdx8q45fqVjZae2mPFNLdQzwebKy/u9xEroQmQSBhOCRnuQPuh9
5b6W2iC9tbe8t5GufKjE1ysokAXL9I1wV3R9eu/jocAGPW7B4luP7NvrS5WCQzWaWsTi0i3AK0eN
z7dxARCBySDtPUAjCrSFhbHw5Lfi4Z7pLmKIxKuFRWWU8k9W/d544AI5JJCgF2w1bT4W0i6uGuVu
dLx5cMcSsk22VpRlywKZL7futjGec4GBXS6XoFveWdpI1tfTJcZ869hcCCy+cr+8Gw52gBzll+Vh
06mloGjx6nqFot5K0FnNcpb71GXkdiBtQHuMgk9FBGckqrAFSxOnNFLFfmeJiytHPDGJCoGQV2Fl
GDkHOcjbjHJxpDV9Plu9QFwlz9muLGGyjaMLvHlmEByCcdIt23Pfbu/iEGi6NJqUVzdC0vLyO3ZE
a3s1zIxfdg52ttUbTk4POBjnINP0aTUNUvIUtLzbaK0r2iLuuCA4XYPl+9lgCccAE4ONpAK+qXdv
cCzgtTK0NpAYVklQIz5kdySoJA5cjqeme+BUtbmazu4bq3fZNDIskbYBwwOQcHjqKuy2ltBrTwXl
veWECKWaCc5myE3Bc7BgscAHbgbgSDjmbUbO2s4rC9jtJ4lmYt9jvJNxdBtKvlQh2PuIGB/AcN6A
BqGuPqGiWtlJFAskVzLKxitYohhlQLjYBz8rZ9fl64GJbrxLcX+j31rdrA1xc3MUxkS0iQkKJNxL
KoO4ll5643c8kGpr0MEGpgW0CwRvbW8vlozEKXhR2wWJOMsepNZtABW0k+jy6Ra2st1fWrrlrhIb
RJFlk3NtYsZFPCEADGB82PvHOhceHLeKyD/YNTijNilz/aMjg25cwiTbjyx1Y+WPn6kdehzdIt9L
nsNQlvbe8kktYhMDDcrGGBkjTbgxtz85Oc9sY70AO0HxDPpF3ZK4iksobtbhka2jkccruKMwypIU
dCOgrIlkaaZ5WChnYsQihRk+gHAHsOK39K0SO70SO9Gk6rqMj3MkTCyfaIwqxkZ/dvyd59OlYt/b
x2moXNtFOtxHDK0aTJ0kAJAYcng9epoAZbLbtcKt1LLFCc7nijEjDjjCllB5x3FbV1q2nnxVd6lb
tcvbXn2jzBJEqvH5yupwAxDbQ+eozjHHWsCigDS1C6s/7PtbCyeeWOGWWYyzRCMkuEG3aGbgeWDn
PO7oMc5tauuJbW81rbW9pFDttIJHkVnLSs8KOS25iByTjAHWqk9l9mtI5J5Ns8uGSDHIjIyGY/w5
42jqRzwNpYAe95GdEhskDLILmSWUgYDjagTPqVxJ16bzjqat2d1pdjG1zE94btraSE27RLsy8bRl
vM3Zx8xYDZ/s5/iqXV9IGl6Lp7GOzeS4UySTJdpI4O+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySX
--089e013cc30aa5701904f2544bed
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--089e013cc30aa5701904f2544bed--


From xen-devel-bounces@lists.xen.org Fri Feb 14 02:09:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 02:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE8Dj-00049n-AL; Fri, 14 Feb 2014 02:09:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ctakemura@axcient.com>) id 1WE8Dh-00049c-RW
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 02:09:38 +0000
Received: from [85.158.143.35:33232] by server-2.bemta-4.messagelabs.com id
	18/BD-10891-1EA7DF25; Fri, 14 Feb 2014 02:09:37 +0000
X-Env-Sender: ctakemura@axcient.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392343775!5578264!1
X-Originating-IP: [208.65.145.78]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4LjY1LjE0NS43OCA9PiAyMzIxNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10147 invoked from network); 14 Feb 2014 02:09:36 -0000
Received: from p02c12o145.mxlogic.net (HELO p02c12o145.mxlogic.net)
	(208.65.145.78)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 02:09:36 -0000
Received: from unknown [12.250.146.126] (EHLO exchange.axcient.com)
	by p02c12o145.mxlogic.net(mxl_mta-7.2.4-1)
	with ESMTP id eda7df25.0.210191.00-371.534652.p02c12o145.mxlogic.net
	(envelope-from <ctakemura@axcient.com>); 
	Thu, 13 Feb 2014 19:09:36 -0700 (MST)
X-MXL-Hash: 52fd7ae03233948f-8cfb283fd4906dfe215b581f4feb7c88be077973
Received: from TESLA3.axcient.inc ([fe80::546:f8ef:fd9d:516e]) by
	tesla3.axcient.inc ([fe80::546:f8ef:fd9d:516e%10]) with mapi;
	Thu, 13 Feb 2014 18:09:34 -0800
From: Chris Takemura <ctakemura@axcient.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 18:09:33 -0800
Thread-Topic: Use of watch_pipe in xs_handle structure
Thread-Index: Ac8pKc3FkPbfL7hwTeS1yLb6lboPTw==
Message-ID: <CF22BADD.4AD%ctakemura@axcient.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Microsoft-MacOutlook/14.3.9.131030
acceptlanguage: en-US
MIME-Version: 1.0
X-AnalysisOut: [v=2.0 cv=Bo0qN/r5 c=1 sm=1 a=xqQDggKg7E0CnudaAZgsZA==:17 a]
X-AnalysisOut: [=LEH2DxGZ4DAA:10 a=sq_w24qPQy8A:10 a=Xq3sQncxG94A:10 a=BLc]
X-AnalysisOut: [eEmwcHowA:10 a=kj9zAlcOel0A:10 a=xqWC_Br6kY4A:10 a=kxSA2Y8]
X-AnalysisOut: [aAAAA:8 a=hmgVrc31OLoA:10 a=imiyPv79uIGjPeqrdS8A:9 a=CjuIK]
X-AnalysisOut: [1q_8ugA:10]
X-Spam: [F=0.5000000000; CM=0.500; MH=0.500(2014021319); S=0.200(2010122901)]
X-MAIL-FROM: <ctakemura@axcient.com>
X-SOURCE-IP: [12.250.146.126]
Subject: [Xen-devel] Use of watch_pipe in xs_handle structure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This message was also posted to the qemu-devel list, but I didn't get any
reply, and it occurred to me that it might make more sense here.  Sorry if
you're reading it twice.

Anyway, I'm trying to debug a problem that causes qemu-dm to lock up with
Xen HVM domains.  We're using the qemu version that came with Xen 3.4.2.
I know it's old, but we're stuck with it for a little while yet.

I think the hang is related to thread synchronization and the xenstore,
but I'm not sure how it all fits together. In particular, I don't
understand the lines in xs.c that handle the watch_pipe, e.g.:

        /* Kick users out of their select() loop. */

        if (list_empty(&h->watch_list) &&
            (h->watch_pipe[1] != -1))
            while (write(h->watch_pipe[1], body, 1) != 1)
                continue;


It looks to me like the other thread blocks while reading from the pipe,
and the write allows it to continue.  But this code seems to do the
same thing as the condvar_signal call that comes slightly after, so it
seems like I could safely #ifndef USE_PTHREAD it out.  Is this the
case?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 02:09:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 02:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE8Db-00048w-Dq; Fri, 14 Feb 2014 02:09:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WE8DZ-00048r-LL
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 02:09:30 +0000
Received: from [193.109.254.147:29691] by server-5.bemta-14.messagelabs.com id
	5F/13-16688-9DA7DF25; Fri, 14 Feb 2014 02:09:29 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392343767!4234606!1
X-Originating-IP: [74.125.82.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8616 invoked from network); 14 Feb 2014 02:09:28 -0000
Received: from mail-we0-f179.google.com (HELO mail-we0-f179.google.com)
	(74.125.82.179)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 02:09:28 -0000
Received: by mail-we0-f179.google.com with SMTP id q58so8140235wes.24
	for <xen-devel@lists.xensource.com>;
	Thu, 13 Feb 2014 18:09:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=3rpOm/8aIn7vfkupyyyyNfXSF/0QUv/kOVBEA5CTdx4=;
	b=MH5mF8JUI+AInrQXJCuVojZ1RwxJmV2VRaYbCHKZnTtu19Q5wVjoZuX9ohcdPKwYDa
	Z54E8ALdgWaD9dtxSln8bYTgKKdUXXd7VdFjzEi//pP0KpzWIxwgoosWbzX1oAz4M7b8
	iMoX6rhFq87fZr41tM9GsGiLsTIa1igcWxVJyxK+Mxmos3uqdvn7M6ectkRotcgbzAZz
	1iOFl0Twv+GpBoUmx4442m6V0Z5u7ry9aZw3dDMyI1MDKg0IBuKw6BaBBl754lCFxETo
	jilHo5qhNTmzA0gQ4GfXH2FBRkm6ODVrGgc4gvjRBR2aI9PT9257hrT+I+JQF5/gAnNU
	jvTA==
X-Received: by 10.194.161.136 with SMTP id xs8mr3792855wjb.56.1392343767141;
	Thu, 13 Feb 2014 18:09:27 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Thu, 13 Feb 2014 18:09:06 -0800 (PST)
In-Reply-To: <52FC9A24.2020703@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<1391528110.6497.32.camel@kazak.uk.xensource.com>
	<CAP8mzPPNYpG3RYrr6afzGvQDOjFutHkZT=JG4hqrBm3X-M+XTA@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Fri, 14 Feb 2014 02:09:06 +0000
Message-ID: <CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Content-Type: multipart/mixed; boundary=089e013cc30aa5701904f2544bed
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--089e013cc30aa5701904f2544bed
Content-Type: text/plain; charset=ISO-8859-1

After compiling with the patch and rebuilding/installing the module and
rebooting, I now get a panic when drbd starts.

That was all I could get from the Java Supermicro KVM console!

--089e013cc30aa5701904f2544bed
Content-Type: image/jpeg; name="panic_drbd.jpg"
Content-Disposition: attachment; filename="panic_drbd.jpg"
Content-Transfer-Encoding: base64
X-Attachment-Id: f_hrmth9y10

5iiMSrhUVllPJPVv3eeOACOSSQoBdsNW0+FtIurhrlbnS8eXDHErJNtlaUZcsCmS+37rYxnnOBV0
K50+0uZpr6a5TMEsKCCBZM+ZG6EnLrjG4Hvn2rQ0vQLe8s7SRra+mS4z517C4EFl85X94NhztADn
LL8rDp1OZo9hbX8lytxcNG0dtNLFGi5LskTuMnoFGznvyABySoA+0udPW0vdPuJrlLaWeOaOeOBW
f5A6gFC4AyJM/eOMY5zkPn12WPUpZ7HaI2torUi4hSQSJGqKCysGAJMatjnB4yetGi6NJqUVzdC0
vLyO3ZEa3s1zIxfdg52ttUbTk4POBjnINP0uO71S8ge1vC0CsyWKN/pEhDhfLB2n5lBLH5OiHgdQ
AS3fiEXGtxX/AJClBZpayIsaREgweVJjaMA8ttJBwNvGBtqpdXFtLFaabaystpFK8n2i5TaSz7Ax
KqWwoCLwNx4J7hRY1TTLfSb6zN1ZX0UNxAZmtJZAk0fzOgBcpjkoG+4ODj3pt/Dp1va6bd21rOrT
s8jW91OJQ8asFU5RUIBYSKR1+XPGRQBDr00E+pg206zxpbW8XmIrAMUhRGwGAOMqeoFZtaWvQwQa
mBbQLBG9tby+WjMQpeFHbBYk4yx6k1m0AFaujPbeTqdtc3cVr9ptBHHJKrldwmjfB2Kx6Ie1ZVau
jJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9AFSyvfs2+KWPzrSXHnQk43Y6Mp/hYZOG9yCCCQZt
Pu7ew1GeQGWSEwXEKHYAx3xOikjJA5YE8nHPWq9nZyXkxRCqIi75ZXOEjTuzH05A4ySSAASQDpaF
pMWp6pMqNFJbQRyyAXEyQGTajsgILZwSo3bT8oJ+YdaAKWj3ken63YXsoZo7e5jlcIMkhWBOPfiq
VbGlaUNW8TRWDeRDG9yFkWO4QBULgERszHeeeMFifeq9zaPPq7W0cVjbucYSK7Uwr8ueJGcj8268
e1ADNYvI9Q1u/vYgyx3FzJKgcYIDMSM+/NGq3kd9dxyxBgq20ERDDnKRIh/DKnHtV3xDpi2XiGTT
beO2jSOQwxstyrbwHKhpGLEIxxyPlA9BR4h0xdKNnbrHbDMCSNLHcrK8jNGjNuCsQACxCkAZHOW6
0AUr68jubTTYkDBrW2MTlhwSZZH49sOPxzRLeRvolrZAN5kNzNKxI4IdYgMe/wAh/StDV9IGl6Lp
7GOzeS4UySTJdpI4O+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySXErLvN2heJAIyuxFfljvYMCCQA
OF6kAz4ryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OK0LDSA3hy
91R47OVlZY41lu0UoCshZtocNvBQbVPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+9F+Yb1YLhyQcg
bgBk9CAVNPurP+z7qwvXnijmlimEsMQkIKBxt2ll4PmE5zxt6HPBeXVnqGrK8rzw2ixRwh1iDuRH
GqBtu4DJ2gkbuM9TjmbRdLjv4rmQ2t5fSRMiraWbbZCG3Zk+63yrtAPy9XHI6GG9sLa11prV7hoY
Aqu5dd7xEoGaMgYy6klOdo3DnbzgAt32rWY8UXWqWYnmgu2nMkcyCJlEodWUEMwyFfhj3/h4wa8t
9ZR/YbSOGW6sLadp3E37p5y2zcvyk7BhFAwSepzyAHz6PGPFF/pcUrLBay3GZGG5vLiDMTjgFtqH
A4BPcdaZLY2Un2G7jmltbC5naBzN+9eArs3N8oG8YdSMAHqMcAkALm50+DSZbGxmuZ/Pnjmd54Fi
2bFcAAB2znzD6Y2988at/wCJbe507U7dbrU3S8jAhtHIFvaYlR9ijccgBSoIC4A+783y5+paZbxa
THfR2V9Y75EEaXkgf7QjKx3odicLhc4z99eneXX9BSz1HVfslxbNHaTuWto3Znhi8zapLEbT95Bg
MWG7kcNgAzZbyN9EtbIBvMhuZpWJHBDrEBj3+Q/pWrZatpcP9i3MovBd6UoIRUVknImeQLncCg+Y
Athuv3fl+av/AGRb7fse+X+0Psn23fkeVs8rzdmMZzs53Z+98u3Hz1XsbC2udJ1K6kuGFxbRCSOF
F4x5kaEsT2+fgDnIOcYG4AfbXOnz6TFY301zB5E8kyPBAsu/eqAgguuMeWPXO7tjkn1S3uNW1DVZ
LbdPPO00ML4eNCzEktn723jC4wTyeAVZ+k2NteREfYdQ1G7LMTb2TbTEg2/OT5b5BLEdsbec7hUL
aVG+oXtja3a3E0MrJb7V4ugCR8pBPzHghed3IBzgMATaBcRf8JNaX+oX6xLDcpcyyzB3MhDgkfKr
EseTzx71j1q+G0tp/ENja3dpFcw3M8cDLIzrtDOAWBVgc4z1yOelZVABVq7uVntrCNXlYwQGNg4U
BSZHbC45IwwPPOSe2KitY4ZbuGO4n8iF5FWSXYW8tSeWwOTgc4rQu7bT5NJN9Zw3NttnWEJcTrL5
uVJJUhFxtwuev+sXp3AMqti8utLvo1uZXvBdrbRwi3WJdmUjWMN5m7OPlDEbP9nP8VY9bFxpFrD4
cttRTUoHnlldDCBJnhYztGUA3Dec5OMYwTzQA601e3g1Dw7cMkpTTdnnAAZbE7yfLzzww645qXSd
c+yaStj/AGpqenbJ3m32I3ebuVBhhvTG3Zx1zuPTHMFvpFrN4cudRfUoEnilRBCRJnlZDtOEI3HY
MYOMZyRxVSLSrmeziuYgrrI04ChsECJFkcnPGNreueD7ZALcOoWdxFqFteyXkcd1cpciYATyZXzB
hslNxPmElsjkdOeCx1WHSpNSNiZwJ7YQQtIqksfMjZt69NrBG+X5hhtp3DJqvc6RcWlrJcSPEUj+
z5Ck5/fRmRe3YKc+/rVe8s5LGdYpSpZoo5QVPGHRXH44YZ96AJXSyu72PyZFsY5FzJ525kibnIBU
MxU4GMjIzg5xuOlpNtZ6f4h0i5/tixnRL6EuIxKuxQ4JYl0UADHrWRZWcl9dLBGVU7Wdnc8IiqWZ
jjnAUE8AnjgE8Vq3Ghr/AGfpa2ckF3c3t5LCksMhxIAIQq4bBUhmb7wB5z0waAK9zd6fHpMtnYm5
b7RPHO4nRR5WxXAUEE7/APWH5sL93pzxX1i8j1DW7+9iDLHcXMkqBxggMxIz780+70sW9obqC+tr
yFZFjkaASDYzAlQQ6qTkK3TP3eccZz6AOluPEX2jThH/AGrq8WLRLb7BG22A7YxHndv6HG4jZzkr
n+Kiz8ReTpdnb/2rq9l9kjZPs9k21Jsuz5LbxsJ37c7WxtB56DPj8P3st5BbL5X76S2jEm75VadN
8YPfpnOAcYPtlljo5v1iVL+zS5nbbBbMzl5CTgDKqVUk8DcV9TgEEgFjT9Xt7TT4LeRJS8f23JUD
H76BY179ipz7etXW1DS77SbxbqSeJWbT418sKzgxW7ozbCRuXIx1GNyn/ZObBpH2nSba6jfYzzzp
NJIcRxRosRDHAz1kI7knaACSAa9jpxvIpZ5LqC1t4mVGmm3kb2yVXCKxyQrHpjjr0yAW21iOPxLZ
6nFEzx2bW+xHO0yCFUUE9dpbZnHOM9TjNUfItJdQ8mG98u2PS4uoimOM8qm8jnjjPbp2ml0i4ie9
UvERawJcFgTiSNygUrxnkSKcHBx1weKdbaHc3U1nFHJAGu7aS5jLybQFTzMhieAf3Te3IyRzgAmt
xb6JqFjqMWo2d+1vcxy+RCJVJCnd1eMADjHc89KqXiaXHEBZT3k8hblpoViCj0wGbcTxzkYx0OeC
+042cUU8d1BdW8rMizQ7wN64LLh1U5AZT0xz164iuLOS2gtJXKlbqIyoFPIAdk598ofwxQBtXura
XN/bVzELw3eqqSUZFVICZkkK53EuPlIDYXp935vlytVvI767jliDBVtoIiGHOUiRD+GVOPart7pF
vANQtonl+2aX/wAfTOR5cv7xY22DGVwzADOdwyflI2mr/ZFx5u3fF5f2T7X52T5ezbnG7HXd+7/3
/lzQAy+vI7m002JAwa1tjE5YcEmWR+PbDj8c1Y0fxBqGjF1trq5WF45R5Uc7Iu94ygfA4yCQf+Aj
pWhFotumh2V+2j6veJLA8009tKEij2yOuD+6bGAgJye9ZlrZ2cenpe35nMc8rwRCAjKFQpZ2z1A3
rhRjdz8y45AHw6p9rS6g1a4uZEuZI5XuR+9lV4wyrwzDcMOwxkdjnjBeusRjVX1LymSeCKNbFc7g
jxhERnPGSFUnpgsBkbciqkulXMetvpChZLtbk2oCNwz7tvBOOCfXFWI7Gylu7p4ppZLGygWWVl4a
U5RCEyOAXfgkZC8kEjaQBl1eWeoahHd3QnDzqWvGjA/1pLZdAeo+6xXjJ3AbRjFg3ulR/wBmWeLm
6sLe7a4uGkjETur+WGQKHPaPruH3u2MmKTSPOu7VLN/3d7A09ukp+fguPL4HzMWjKrgfNleBnArx
aVczrY+UFaS+lMUEW7DMcgBueNpYlQc4yrelAEuqyWNwxuIb68uLhmAYTWiQqFAwANsjYAwAFAAA
6YxiotTvI7q5VLcMtnbqYrVXHzCPcWG7/aJYse2ScYGAC8sIraISw6jZ3i7trCEupUnpw6qSODyM
gd8ZGbdxo8dpok9xNK32+G5iikgA4iDrIcMf7/yDI/h6H5shQB39r2+37Zsl/tD7J9i2YHlbPK8r
fnOc7ONuPvfNux8lFhq9vZ2aafslNld4/tMADfJhzt2HPG0YYdMsTu3LgVi0UAXbNNLkiIvZ7yCQ
Nw0MKyhh6YLLtI55yc56DHNsaxHJ4h1DVJYmRbtbvEancVMsbqBnjIBcZPp27Vj0UAFFFFABRRRQ
AUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAUUUUAFaujPbeTqdtc3cVr9ptBHHJKrldwmjfB2Kx6Ie
1ZVaujJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9AGVRRRQAVYsr2fT7pbm2ZVkCsvzorghlKsC
rAgggkcjvVepba2mu7hYIE3SNnjIAAAySSeAAASSeAASaANDWNYOq22mxtHEj20DRv5dvHECxkdu
NgHGCvHruOOSTXvLyOew062iDKLeJhKCMBpGkYlh6nb5YyeflA6AVY1KxstPbTHimluoZ4PNlZf3
e4iV0ITIJAwnBIz3IH3Q+8t9LbRBe2tveW8jXPlRia5WUSALl+ka4K7o+vXfx0OACjP/AGf/AGfa
fZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x0NY9aQs
LY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2ytKMuWBTJ
fb91sYzznAq6Fc6faXM019NcpmCWFBBAsmfMjdCTl1xjcD3z7VoaXoFveWdpI1tfTJcZ869hcCCy
+cr+8Gw52gBzll+Vh06nM0ewtr+S5W4uGjaO2mlijRcl2SJ3GT0CjZz35AA5JUAfaXOnraXun3E1
yltLPHNHPHArP8gdQChcAZEmfvHGMc5yGXl1Z6hqyvK88NosUcIdYg7kRxqgbbuAydoJG7jPU45r
2Umnx7/t1tcz5xs8i4WLHrnKNnt6VuxaDZy63qtpDaahciytlcWsMoM3m7o0kTcIyCFZ35C87c9O
aAMXUbyO5NtDAGFvaReTEzjDuN7OWYDgEs7cDoMDJxkvvrmHULy1WN/IgSCGAGQHbGQih2wueC+9
uBk7icZNWho/23X00y3sb6ykEbNJBcHzZvlUudqhUySo4XHJxzzxX1O2XTtQhDaTeWihVc29+5Jk
GT3Codpxjjng8+gAa9NBPqYNtOs8aW1vF5iKwDFIURsBgDjKnqBWbWlr0MEGpgW0CwRvbW8vlozE
KXhR2wWJOMsepNZtABWroz23k6nbXN3Fa/abQRxySq5XcJo3wdiseiHtWVWroyW3k6nc3NpFdfZr
QSRxys4XcZo0ydjKejnvQBUsr37Nvilj860lx50JON2OjKf4WGThvcgggkGbT7u3sNRnkBlkhMFx
Ch2AMd8TopIyQOWBPJxz1qvZ2cl5MUQqiIu+WVzhI07sx9OQOMkkgAEkA6WhaTFqeqTKjRSW0Ecs
gFxMkBk2o7ICC2cEqN20/KCfmHWgClo95Hp+t2F7KGaO3uY5XCDJIVgTj34qlWxpWlDVvE0Vg3kQ
xvchZFjuEAVC4BEbMx3nnjBYn3qvc2jz6u1tHFY27nGEiu1MK/LniRnI/NuvHtQAzWLyPUNbv72I
MsdxcySoHGCAzEjPvzRqt5HfXccsQYKttBEQw5ykSIfwypx7Vd8Q6Ytl4hk023jto0jkMMbLcq28
ByoaRixCMccj5QPQUeIdMXSjZ26x2wzAkjSx3KyvIzRozbgrEAAsQpAGRzlutAFK+vI7m002JAwa
1tjE5YcEmWR+PbDj8c0S3kb6Ja2QDeZDczSsSOCHWIDHv8h/StDV9IGl6Lp7GOzeS4UySTJdpI4O
+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySXErLvN2heJAIyuxFfljvYMCCQAOF6kAz4ryNNEurIhv
MmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OK0LDSA3hy91R47OVlZY41lu0Uo
CshZtocNvBQbVPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+9F+Yb1YLhyQcgbgBk9CAVNPurP+z7q
wvXnijmlimEsMQkIKBxt2ll4PmE5zxt6HPBeXVnqGrK8rzw2ixRwh1iDuRHGqBtu4DJ2gkbuM9Tj
mbRdLjv4rmQ2t5fSRMiraWbbZCG3Zk+63yrtAPy9XHI6GG9sLa11prV7hoYAqu5dd7xEoGaMgYy6
klOdo3DnbzgAt32rWY8UXWqWYnmgu2nMkcyCJlEodWUEMwyFfhj3/h4wa8t9ZR/YbSOGW6sLadp3
E37p5y2zcvyk7BhFAwSepzyAHz6PGPFF/pcUrLBay3GZGG5vLiDMTjgFtqHA4BPcdahurWzSK01C
BJ/sU0rxNA8oMilNhYBwuCCHXB28EkYOMsAPubnT4NJlsbGa5n8+eOZ3ngWLZsVwAAHbOfMPpjb3
zxd1TVtLnn1q6sheeZqbMDFMigRgzLJu3huSdg+XaMbvvHblqt5b6W2iC9tbe8t5GufKjE1ysokA
XL9I1wV3R9eu/jocOvdIt4BqFtE8v2zS/wDj6ZyPLl/eLG2wYyuGYAZzuGT8pG0gB/a9vt+2bJf7
Q+yfYtmB5WzyvK35znOzjbj73zbsfJTdIuNLgsNQivbi8jkuohCBDbLIFAkjfdkyLz8hGMd857VL
JpAtfCiX7R2cklxKy7zdoXiQCMrsRX5Y72DAgkADhepqWOjm/WJUv7NLmdtsFszOXkJOAMqpVSTw
NxX1OAQSANtP7KltBHevc28yyM3mwQiXzFIGFILqF2kE5Gc7u2BmWfVLe41bUNVktt0887TQwvh4
0LMSS2fvbeMLjBPJ4BVmaVHY3M0VpLY3lzdzyiOLybtIQS2ABho25z3yBzQ1hbXWoXtvptw0m2Vh
Zo683CZOADx85GCFwN3IHOFYAm0C4i/4Sa0v9Qv1iWG5S5llmDuZCHBI+VWJY8nnj3rHrV8NpbT+
IbG1u7SK5huZ44GWRnXaGcAsCrA5xnrkc9KyqACrV3crPbWEavKxggMbBwoCkyO2FxyRhgeeck9s
VFaxwy3cMdxP5ELyKskuwt5ak8tgcnA5xWhd22nyaSb6zhubbbOsIS4nWXzcqSSpCLjbhc9f9YvT
uAZVaH2u3l0FLKQypNBO80W1AyybxGpDHIK4EeRgNnOOMZOfWlHo5mti8V/ZvciIzG1VnL7Apcnd
t2ZCgsRuzxj73FADbS7t10m9sbgyp5skc8bxoG+dFcBSCRgHzPvc4x0OeLulatZ21glvdCdWja42
mJAwYTxLExOWGCoXcBzuPGV61Fb+HpriCycXlmkl8ubWBnbfK29k24CkKSy8FiFOevDYr2mli4tB
dT31tZwtI0cbTiQ72UAsAEViMBl64+9xnnABfm1bT71bq1uGuYbZ/svlzRxLI58iIxDKFlA3Bt33
jjGOetQX0llrGrF4rlbC3W2hjQ3e5zlI0TGY1OT8pOcAEDt0p1lpEM+k6i889tBLbXcMZuJJSyKp
WXcBs3b8lV+6D0z0yazb2zksbpoJCrHarq6Hh0ZQysM84KkHkA88gHigDQtja6Pfxy/b4L2OWKaG
Q2qSZjDxlN2JFXJG8kDPO3BIzmp01fT9P/scWKXM/wDZ98927ThY/Nz5RAABbb/qyOp9c84GBWrq
Glr/AG5BY6bHK32mO3aGOWRS26WNG2lsKOr4zgUAFzc6fBpMtjYzXM/nzxzO88CxbNiuAAA7Zz5h
9Mbe+eGS6ZaRxO667p8jKpIREuMsfQZiAyfcgU270sW9obqC+tryFZFjkaASDYzAlQQ6qTkK3TP3
eccZdfaObBZVe/s3uYG2z2ys4eMg4IyyhWIPB2lvUZAJABsWviLT4jbzyLc+dFJaTmNY1Kl7aPYq
7t2cOCSWx8uMYfrRoPiW30saazXWp26WkgM1rZkLHdfvC29zuHOCFwVOQgG4Z+XHj0czWxeK/s3u
REZjaqzl9gUuTu27MhQWI3Z4x97im2Fpb3OnapJIJfOtoEmiKuAv+tRCGGMnh8jBGMd80AWLfWI4
dEg02SJpoftMss8ROAwZYwpU87XXY2DjjOOQWUy6Tq8emxX1pFqGoWcc0qSJdWqYkITeArLvXAO/
J+Y4KjrnIwq0NDtEvtesLWQRMks6KUldkV8n7pZQSN3TIHGaALS6pby32o/ari+lhvYFga5lxLN8
rRsGKlgDkxgbd3yg9WxzdtNS0ttSsUTzxaWumXVu5mdY2kJWduDyFLbwAOcE4+bGTi6dpkmpG52T
QQpbxedK8z7QE3qpPQ5PzA4HJxgZOAZTos322CCOeCWOaJp0uFLBDGu7e+CA2F2PkbcnbwDkZAH3
d/bw2lla6ZPct9mnkuVuJEELh2CDACs2MeWDnPfoMZL9R8S6pqdhbWk99ePHHF5cqvcMwmPmM4Zg
e4yo5z90fhXv9KaxtILtbu2ubeeR443gZjkoFJyGAK/fAwQDxnGCCc+gDavdXt5xqFzEkv2zVP8A
j6VwPLi/eLI2w5y2WUEZxtGR8xO4H9r2/wDZ39kbJf7M8vztuB5n2vy8eZnPTd8uOmznG7mm6h4e
m083iNeWc89kxFxDC7MY13hN2SoUjLKMAlhu5AwcM/sKXyf+Pu2+1+R9o+x/P5nl7PMznbs+582N
2ccdeKAH6PdaXYXNnqEr3gu7SVZRCsSskpVtw+fcCgPAPytjGec4ENreWcmnpZX4nEcErzxGADLl
goZGz0B2Lhhnbz8rZ4IdHMlrDNLf2ds9wpaCKZnBlG4rncFKKCysPmZemTgc1NpGkWuoWGoTz6lB
bPbxB1RxIcfvI13NtRvl+cjg5zjjGTQAxtX8y41LUGTbqN5I+Co/dokgfzcAnOSGCjOeC3fBFfTr
yO2NzDOGNvdxeTKyDLoN6uGUHgkMi8HqMjIzkFjpxvIpZ5LqC1t4mVGmm3kb2yVXCKxyQrHpjjr0
zYj0G4ku7qBri2jW2gW5eaRyEMTFNrDjPIkVsY3dsbvloAJL6ylu7VJYZZLGygaKJW4aU5dwXweA
XfkA5C8AkjcbD+IWuNYsNbu0afU4blZbg8IsyoUKdOjcMpwAMBeCck1Dos322CCOeCWOaJp0uFLB
DGu7e+CA2F2PkbcnbwDkZivtONnFFPHdQXVvKzIs0O8DeuCy4dVOQGU9Mc9euAAvE0uOICynvJ5C
3LTQrEFHpgM24njnIxjoc8aF14luL/R761u1ga4ubmKYyJaRISFEm4llUHcSy89cbueSDlXFnJbQ
WkrlSt1EZUCnkAOyc++UP4YrSvdIt4BqFtE8v2zS/wDj6ZyPLl/eLG2wYyuGYAZzuGT8pG0gGLRW
h/ZFx5u3fF5f2T7X52T5ezbnG7HXd+7/AN/5c1a0SDR724gtbu1vt53NPPFdoqpGoLMwQxknagJx
nJxx1xQBi0VYsrOS+ulgjKqdrOzueERVLMxxzgKCeATxwCeKuxWmnt9uvyLl9NgnWKKIOqTPv3lN
zYKrhUJJAPOABzkAGVRWlcaPJHdXMMUqyCK2S6TIw8kbKrDC8/MFfcwyQArHJAzT7PSPM1CzguXx
9ogacRRnEhwGKR8jhn2rt4PEikA5xQBlUVsa1pcdhFbSC1vLGSVnVrS8bdIAu3En3V+VtxA+Xqh5
PQY9ABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8HY
rHoh7VlVq6Mlt5Op3NzaRXX2a0EkccrOF3GaNMnYyno570AZVFFFABViyvZ9PulubZlWQKy/OiuC
GUqwKsCCCCRyO9V6ltraa7uFggTdI2eMgAADJJJ4AABJJ4ABJoA0NY1g6rbabG0cSPbQNG/l28cQ
LGR242AcYK8eu445JNe8vI57DTraIMot4mEoIwGkaRiWHqdvljJ5+UDoBVjUrGy09tMeKaW6hng8
2Vl/d7iJXQhMgkDCcEjPcgfdD7y30ttEF7a295byNc+VGJrlZRIAuX6Rrgruj69d/HQ4AKM/9n/2
fafZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x0NY9a
QsLY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2ytKMuWB
TJfb91sYzznAq6Fc6faXM019NcpmCWFBBAsmfMjdCTl1xjcD3z7VoaXoFveWdpI1tfTJcZ869hcC
Cy+cr+8Gw52gBzll+Vh06nM0ewtr+S5W4uGjaO2mlijRcl2SJ3GT0CjZz35AA5JUAisJrO01u2nl
RriyhuVd1eMZkjDAkFckZI7ZI96ZBJb3F3I+pzXJ83LNPGBI4cnO4hiN2eR94dc5OMG9oulx38Vz
IbW8vpImRVtLNtshDbsyfdb5V2gH5erjkdDDe2Fta601q9w0MAVXcuu94iUDNGQMZdSSnO0bhzt5
wAWxq1nDe2aRCd7S3s5rMysgV2Evm5fZuIyvmnA3c7eozxXu57KS0stNtJ5TDHPJK1zcxeXgyBFI
KqXOAIwcgknJ445Zf6U0XiO90uzDSCG5liQuwB2ox+ZjwAABkk4AAJ4FP1KytNNbTJbeT7ZHNB50
nmAqjsJXQgAYYKdnsT1+XOAAM16aCfUwbadZ40treLzEVgGKQojYDAHGVPUCs2tLXoYINTAtoFgj
e2t5fLRmIUvCjtgsScZY9SazaACtXRntvJ1O2ubuK1+02gjjklVyu4TRvg7FY9EPasqtXRktvJ1O
5ubSK6+zWgkjjlZwu4zRpk7GU9HPegCpZXv2bfFLH51pLjzoScbsdGU/wsMnDe5BBBIM2n3dvYaj
PIDLJCYLiFDsAY74nRSRkgcsCeTjnrVezs5LyYohVERd8srnCRp3Zj6cgcZJJAAJIB0tC0mLU9Um
VGiktoI5ZALiZIDJtR2QEFs4JUbtp+UE/MOtAFLR7yPT9bsL2UM0dvcxyuEGSQrAnHvxVKtjStKG
reJorBvIhje5CyLHcIAqFwCI2ZjvPPGCxPvVe5tHn1draOKxt3OMJFdqYV+XPEjOR+bdePagBmsX
keoa3f3sQZY7i5klQOMEBmJGffmjVbyO+u45YgwVbaCIhhzlIkQ/hlTj2q74h0xbLxDJptvHbRpH
IYY2W5Vt4DlQ0jFiEY45Hygego8Q6YulGzt1jthmBJGljuVleRmjRm3BWIABYhSAMjnLdaAKV9eR
3NppsSBg1rbGJyw4JMsj8e2HH45olvI30S1sgG8yG5mlYkcEOsQGPf5D+laGr6QNL0XT2Mdm8lwp
kkmS7SRwd8i7VCuQUwgO7B+bI3dqJNIFr4US/aOzkkuJWXebtC8SARldiK/LHewYEEgAcL1IBnxX
kaaJdWRDeZNcwyqQOAEWUHPv84/WixvI7a01KJwxa6thEhUcAiWN+fbCH8cVoWGkBvDl7qjx2crK
yxxrLdopQFZCzbQ4beCg2qeoJ+Vux4f0gXsF7evHZzLbRbkhuLtIlZ96L8w3qwXDkg5A3ADJ6EAq
afdWf9n3VhevPFHNLFMJYYhIQUDjbtLLwfMJznjb0OeC8urPUNWV5XnhtFijhDrEHciONUDbdwGT
tBI3cZ6nHM2i6XHfxXMhtby+kiZFW0s22yENuzJ91vlXaAfl6uOR0MN7YW1rrTWr3DQwBVdy673i
JQM0ZAxl1JKc7RuHO3nABbvtWsx4outUsxPNBdtOZI5kETKJQ6soIZhkK/DHv/Dxg0r+7t2tILCz
Mr20EjyiWZAju7hQflBIUAIoAye5zzgS3djZWXiW+sZ5pVs7WeZA3V3CFtq5AwCxAXdjAznGBipd
V0f7P9jEFjfWtzcSNGLG6O+Y427XGFUkMWKgbeqHk9AAUry8jnsNOtogyi3iYSgjAaRpGJYep2+W
Mnn5QOgFXb3V7ecahcxJL9s1T/j6VwPLi/eLI2w5y2WUEZxtGR8xO4RajpdvZaTZ3Edz508k80M2
zBjUosZAUj733zluhI4yAGaxq+kDS9F09jHZvJcKZJJku0kcHfIu1QrkFMIDuwfmyN3agDPlvI30
S1sgG8yG5mlYkcEOsQGPf5D+lbug+JbfSxprNdanbpaSAzWtmQsd1+8Lb3O4c4IXBU5CAbhn5aVv
p2nXVnMIUvGaC282W+3gQI+wuIyhTIJYGMHfyeRn7tRaVo8d1HJLeStCrW08tuij5pTHG7Z56ICm
Ce54HRioBU0q8jsL/wC0yBiVilEZQcpIY2CMPQqxU56jGRzTLKe3tt88kXnTrjyUdQYwe7MD97HG
Fxgk88Aq1jSo7G5mitJbG8ubueURxeTdpCCWwAMNG3Oe+QOaGsLa61C9t9NuGk2ysLNHXm4TJwAe
PnIwQuBu5A5wrAE2gXEX/CTWl/qF+sSw3KXMsswdzIQ4JHyqxLHk88e9Y9avhtLafxDY2t3aRXMN
zPHAyyM67QzgFgVYHOM9cjnpWVQAVau7lZ7awjV5WMEBjYOFAUmR2wuOSMMDzzkntiorWOGW7hju
J/IheRVkl2FvLUnlsDk4HOK0Lu20+TSTfWcNzbbZ1hCXE6y+blSSVIRcbcLnr/rF6dwDKrqrfxLb
w2Rj+1amkbWL2v2CMhbdXMJTzPvfNub5yNo5YnJx83K0UAXb68jubTTYkDBrW2MTlhwSZZH49sOP
xzVi2udPn0mKxvprmDyJ5JkeCBZd+9UBBBdcY8seud3bHL7fw9NcQWTi8s0kvlzawM7b5W3sm3AU
hSWXgsQpz14bFSzsIrmIyzajZ2a7tqiYuxYjrwisQORycA9s4OAB/wBtt10m9s445V867imjDENt
RFlGCeMn94vQc4PSpbl7bV9RVjdxWcaWkEe+5VyC0cSIQNiseSpI46enSmJoN+800AjX7RDeR2LQ
lxkyvvAAPTGUIzn0qrcWcltBaSuVK3URlQKeQA7Jz75Q/higCW4s7W2mhH9pwXMbt+8a1jkJjHHO
JFTJ64Ge3JFaV1q2nw6/pmqWLXM/2X7PvjniWLPkqijBDN97YT04z3rIsrOS+ulgjKqdrOzueERV
LMxxzgKCeATxwCeKtLo5m1Cys7W/s7k3cqwpJGzgK5IGGDKGA+Yc7cHnGSCAAPubnT4NJlsbGa5n
8+eOZ3ngWLZsVwAAHbOfMPpjb3zw/WLrS7+5vNQie8N3dytKYWiVUiLNuPz7iXA5A+Vc5zxjBr3e
lNbWhuUu7a5RJFimEDMfKcgkKSQA2drcoWHy9eRkuLRI9BsbpREXmnmVnV23DaI8KykADG7IIJzv
5xigDat/EtvDZGP7VqaRtYva/YIyFt1cwlPM+9825vnI2jlicnHzZukXGlwWGoRXtxeRyXUQhAht
lkCgSRvuyZF5+QjGO+c9qJPD00cYzeWbXDWwultldi7RmMSE/d2ghckhiD8vAOVzj0AXbOxt7mIv
LqtnaMGxsmWUkj1+RGGPxzxVvSZLDSvEdtc3N400FpLHMslpCWEhVlbbhyhA6jOOo6HrWe9nImnw
3pK+XNLJEoB5BQITn2+cfrWh/wAI9Mk2orPeWdumn3Itp5ZHbG47wCoClmGUPQZ5BxgEgAhiurOz
j1WCB55o7m2WKJ3iCHIljc7lDHA+RhwT2/CLSryOxv8AzZQxjeKWFygyVEkbIWA7kbs4yM4xkdaZ
cWn9n6gIboebGNjkxPt8yNgGBUkcZUgjIyM8jtVu70jb4lvtLtX/AHdvPMgklP3Y4yxLNgc4VSTg
ZOOB2oAsXx059E0y0s7ptq3k5kkuQFIDLD85RdxVeCOrE7SR6Clc6dawW7SR6zY3DjGIoknDNz23
Rge/Jp0mizGazSzngvVu5fIheEsoaQbcp84Ug/OnOMfN14OIrywitohLDqNneLu2sIS6lSenDqpI
4PIyB3xkZADWLyPUNbv72IMsdxcySoHGCAzEjPvzWxceIvtGnCP+1dXixaJbfYI22wHbGI87t/Q4
3EbOclc/xVn63pH9l314gfEMd3JDAsh/eSIjMu/gYwCuM8ZOQM4bGVQBu6VqdjaWQiuZrx49xaWx
aFJYZj6hiwMTFcLuVSw5IPO0UtLu7e3F5BdGVYbuAQtJEgdkxIjghSQDygHUdc9sEsLRJ9O1SdhE
z28CMqs7Ky5lRSy4GDjO0gkffyM4p1ro5n09L+a/s7S3eV4VaZnJLqFJG1FY4w45xgY5xkZADT7q
z/s+6sL154o5pYphLDEJCCgcbdpZeD5hOc8behzxYuNXt5H1IKkuyexhs4SQMnyzD8zDPGRETgZw
TjJ61Xj0WYTXiXk8FktpL5EzzFmCyHdhPkDEn5H5xj5evIzN/wAI7cre3NrJc2cZtraO6lkabKBH
2dGAO4jzB0znB27jgEAr6LqP9l6ol1ulTEckfmRHDx70ZN68jld2QMjOOo61Y1XVpLia1eLVtVvJ
IGLpPdvtMZ4xsXc2CMZ3bueOBjJhOizfbYII54JY5omnS4UsEMa7t74IDYXY+RtydvAORkk0WYzW
aWc8F6t3L5ELwllDSDblPnCkH505xj5uvBwATaj4l1TU7C2tJ768eOOLy5Ve4ZhMfMZwzA9xlRzn
7o/B17q9vONQuYkl+2ap/wAfSuB5cX7xZG2HOWyygjONoyPmJ3CleWEVtEJYdRs7xd21hCXUqT04
dVJHB5GQO+MjNvV9ItdPsNPng1KC5e4iLsiCQZ/eSLuXci/L8gHJznPGMGgB39r2/wDZ39kbJf7M
8vztuB5n2vy8eZnPTd8uOmznG7mqVheR2kF+GDedPbeVC6jlCXQtz2BQOpx1DY6E1Y/sKXyf+Pu2
+1+R9o+x/P5nl7PMznbs+582N2ccdeKfb+HpriCycXlmkl8ubWBnbfK29k24CkKSy8FiFOevDYAK
kV5Hb6VLBCGFxcsUndhx5QKMqr7lgSeP4VwR8wL7C7t1tJ7C8MqW08iSmWFA7o6BgPlJAYEOwIyO
xzxglppYuLQXU99bWcLSNHG04kO9lALABFYjAZeuPvcZ5wyLSrmTW00hgsd21yLUh24V923kjPAP
pmgC2usRjVX1LymSeCKNbFc7gjxhERnPGSFUnpgsBkbcinLq9u2rrqEqS+bcwTJelQP9ZIro0iDP
PDBsZALbgNoxilp1nHcm5mnLC3tIvOlVDh3G9UCqTwCWdeT0GTg4wbEmkedd2qWb/u72Bp7dJT8/
BceXwPmYtGVXA+bK8DOAAM1C6s/7PtbCyeeWOGWWYyzRCMkuEG3aGbgeWDnPO7oMc5tXYtKuZ1sf
KCtJfSmKCLdhmOQA3PG0sSoOcZVvSn3eli3tDdQX1teQrIscjQCQbGYEqCHVSchW6Z+7zjjIBn0V
dis410qW9uCwDsYrUKfvyKUL5/2Qrexyy4yA2Lv9kW+37Hvl/tD7J9t35HlbPK83ZjGc7Od2fvfL
tx89AGLRWhBpFxcvYCN4tl5uxISdkW0kNvOPl2gBz6KQe9XfD+kC9gvb147OZbaLckNxdpErPvRf
mG9WC4ckHIG4AZPQgGFRRRQAUUUUAFFFFABRRRQAUUUUAFaujPbeTqdtc3cVr9ptBHHJKrldwmjf
B2Kx6Ie1ZVaujJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9AGVRRRQAVYsr2fT7pbm2ZVkCsvzo
rghlKsCrAgggkcjvVepba2mu7hYIE3SNnjIAAAySSeAAASSeAASaANDWNYOq22mxtHEj20DRv5dv
HECxkduNgHGCvHruOOSTXvLyOew062iDKLeJhKCMBpGkYlh6nb5YyeflA6AVY1KxstPbTHimluoZ
4PNlZf3e4iV0ITIJAwnBIz3IH3Q+8t9LbRBe2tveW8jXPlRia5WUSALl+ka4K7o+vXfx0OACjP8A
2f8A2fafZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x
0NY9aQsLY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2yt
KMuWBTJfb91sYzznAq6Fc6faXM019NcpmCWFBBAsmfMjdCTl1xjcD3z7VoaXoFveWdpI1tfTJcZ8
69hcCCy+cr+8Gw52gBzll+Vh06nM0ewtr+S5W4uGjaO2mlijRcl2SJ3GT0CjZz35AA5JUAfaXOnr
aXun3E1yltLPHNHPHArP8gdQChcAZEmfvHGMc5yGXl1Z6hqyvK88NosUcIdYg7kRxqgbbuAydoJG
7jPU45m0XS47+K5kNreX0kTIq2lm22Qht2ZPut8q7QD8vVxyOhZdW2n6bq8sFzDczwiNT5aTrHJE
7KrFGYowJUkqflHI7dKAJdS1eJfEtzq2kyy/6RJLIRc26fL5hYMhUllYbWxk9cnim6nqo1mPSoJP
IgNvEYpJBbpGgLSu2cRrnaFK8Y67sDJJN2LRtPudcsrSG3vsS2L3MlqJleXf5byIquEwdyiM/dON
2OorP1S3h067t1Oj31o4/ePDqEpPmLnjgIhA4IJB+hGKAGa9NBPqYNtOs8aW1vF5iKwDFIURsBgD
jKnqBWbWlr0MEGpgW0CwRvbW8vlozEKXhR2wWJOMsepNZtABWroz23k6nbXN3Fa/abQRxySq5XcJ
o3wdiseiHtWVWroyW3k6nc3NpFdfZrQSRxys4XcZo0ydjKejnvQBUsr37Nvilj860lx50JON2OjK
f4WGThvcgggkGbT7u3sNRnkBlkhMFxCh2AMd8TopIyQOWBPJxz1qvZ2cl5MUQqiIu+WVzhI07sx9
OQOMkkgAEkA6WhaTFqeqTKjRSW0EcsgFxMkBk2o7ICC2cEqN20/KCfmHWgClo95Hp+t2F7KGaO3u
Y5XCDJIVgTj34qlWxpWlDVvE0Vg3kQxvchZFjuEAVC4BEbMx3nnjBYn3qvc2jz6u1tHFY27nGEiu
1MK/LniRnI/NuvHtQAzWLyPUNbv72IMsdxcySoHGCAzEjPvzRqt5HfXccsQYKttBEQw5ykSIfwyp
x7Vd8Q6Ytl4hk023jto0jkMMbLcq28ByoaRixCMccj5QPQUeIdMXSjZ26x2wzAkjSx3KyvIzRozb
grEAAsQpAGRzlutAFK+vI7m002JAwa1tjE5YcEmWR+PbDj8c0S3kb6Ja2QDeZDczSsSOCHWIDHv8
h/StDV9IGl6Lp7GOzeS4UySTJdpI4O+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySXErLvN2heJAIy
uxFfljvYMCCQAOF6kAz4ryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/Pt
hD+OK0LDSA3hy91R47OVlZY41lu0UoCshZtocNvBQbVPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+
9F+Yb1YLhyQcgbgBk9CAVNPurP8As+6sL154o5pYphLDEJCCgcbdpZeD5hOc8behzwXl1Z6hqyvK
88NosUcIdYg7kRxqgbbuAydoJG7jPU45r2Vml1vaW9trSNMDfOWOSegCorMeh5xgdyMjNuPSPKu7
oXb7oLSBblzCeZY2KBNhI43eYhyRkAkkEjaQCXUr3SrvxLc3wFzLZ3cksjrJGEeJnLYIAchtpIbk
jdjBwOabNqcNnFp8WlTTs1ncvdJPNCqEO3lgDZuYYHlg5J5zjHHLJLGyhu7WWWaVLC5ga4QHmTAL
r5ZIGMl0KhsYwQxA5UWNR0i2tZrDz47zSlnlMc0N4PMkiQbf3uAqEqdzADHWM8noACLUNcfUNEtb
KSKBZIrmWVjFaxRDDKgXGwDn5Wz6/L1wMVL68jubTTYkDBrW2MTlhwSZZH49sOPxzV3UtMt4tJjv
o7K+sd8iCNLyQP8AaEZWO9DsThcLnGfvr07177RzYLKr39m9zA22e2VnDxkHBGWUKxB4O0t6jIBI
ALF5Po9zYwJHdX0JhgG22Fohj83aN7b/ADMncwzuK5AwMYUCn6R4luNPCRSrBJBFbTwxbrSJ3Uuj
4G5lzt3vkjOMEjB6UW+naddWcwhS8ZoLbzZb7eBAj7C4jKFMglgYwd/J5Gfu07+wUurHR2t7i2hu
byAhYpHYvcS+dIgAABC8BBltqn14bABm6VeR2F/9pkDErFKIyg5SQxsEYehVipz1GMjmmWU9vbb5
5IvOnXHko6gxg92YH72OMLjBJ54BVrGj2FtfyXK3Fw0bR200sUaLkuyRO4yegUbOe/IAHJK17Ky+
3b4opP8AS+PJhI/13qqn+90wv8XIBzgMAXtAuIv+EmtL/UL9YlhuUuZZZg7mQhwSPlViWPJ5496x
61fDaW0/iGxtbu0iuYbmeOBlkZ12hnALAqwOcZ65HPSsqgAq1d3Kz21hGrysYIDGwcKApMjthcck
YYHnnJPbFRWscMt3DHcT+RC8irJLsLeWpPLYHJwOcVoXdtp8mkm+s4bm22zrCEuJ1l83KkkqQi42
4XPX/WL07gGVWlFplpJEjtrunxsyglHS4yp9DiIjI9iRWbWlJo5htg8t/ZpcmITC1ZnD7CocHdt2
ZKkMBuzzj73FAEV9eR3NppsSBg1rbGJyw4JMsj8e2HH45rQ0nWY7DTzALvULKRZWlZrFtpuQQoCO
24bQu04OHx5jcerv7BS6sdHa3uLaG5vICFikdi9xL50iAAAELwEGW2qfXhsZtnYRXMRlm1Gzs13b
VExdixHXhFYgcjk4B7ZwcAG1D4js01vUL1o5/Lk1NNShAUEsUaQrG3Pyg+ZywzjHQ5qhcPZajJaW
kV6tvb2dsYkuLuNlMhMjP92MPtP7wjqRhc5GcUyLQbg/bjdXFtZrYzrb3Bnc/K538AKGLcoR8ueu
egJDBos322eCSeCKOGJZ3uGLFBG23Y+AC2G3pgbcjdyBg4AAQ2unXttKNUW4XcSZLESK8JH3WHmI
uSDzgHnBGVyDWrbanYzeJtDmEzO0N5G897cQpAXXep+cKxBIwzGQncd3P3RWbNojJqEFlHcxSSPA
Z5HwwSNMM+7OMkeUFk4GfmxjcMVXvtONnFFPHdQXVvKzIs0O8DeuCy4dVOQGU9Mc9euACxc3enx6
TLZ2JuW+0TxzuJ0UeVsVwFBBO/8A1h+bC/d6c8PuLjS28OW1nFcXhu4ZXmKtbKEJdYwV3eYTgeWc
HbznoKx60r7RzYLKr39m9zA22e2VnDxkHBGWUKxB4O0t6jIBIALH9r2/9r/a9kvl/wBm/ZMYGd/2
Tyc9em7n6du1VbbTrWe3WSTWbG3c5zFKk5Zee+2Mj34NOk0cw2weW/s0uTEJhaszh9hUODu27MlS
GA3Z5x97iprfSLWbw5c6i+pQJPFKiCEiTPKyHacIRuOwYwcYzkjigAjuNOOnrYXks7La3Mssb2qA
i4DBBjLEFB+7GG2sfm5XjBdqer297/bXlpKPt2pLdx7gOEHncHnr+8XpnoeapWdhFcxGWbUbOzXd
tUTF2LEdeEViByOTgHtnBxd0zQkuNUvLHULuKzltY59yNuJLxo5OCqsMArz6jpk0AV7y6s9Q1OGS
V54bcW0ETskQdwUhVDhdwBBZfUcH8Ku3WraefFV3qVu1y9tefaPMEkSq8fnK6nADENtD56jOMcda
zbXTjdXNwi3UCwW6l5Llt+wJuChsBd2CzKB8ueeQOcMvrFrF4v30U8U0fmRTRbtsi5KkgMAwwysO
QOnpgkA049Ws9Om0kWQnuY7C8N4WmQQlyfL+TAZsD90Oc/xdOOc+8TS44gLKe8nkLctNCsQUemAz
bieOcjGOhzw9dIuG1e00wPF5115GxsnaPNVWXPGeA4zx69addWdnJp73tgZxHBKkEonIy5YMVdcd
AdjZU528fM2eACxrmr2+tX1/dSJKJWnd7aUgbjGW4jk57A8EZIxt5G3ZXi0y0kiR213T42ZQSjpc
ZU+hxERkexIps+kXFs9+JHi2We3MgJ2S7iAuw4+bcCXHqoJ7Vrx+G45rAvFZ6g6izNydSU5tsiIy
FMbOoIMZ+f7wzj+GgChpFxpcFhqEV7cXkcl1EIQIbZZAoEkb7smRefkIxjvnPaqkt5G+iWtkA3mQ
3M0rEjgh1iAx7/If0rS0fSbLUYoY/JvpnbH2m7iO2GxBYqDIChyAF3k7lGDjjBNZFlZyX10sEZVT
tZ2dzwiKpZmOOcBQTwCeOATxQBa0+6s/7PurC9eeKOaWKYSwxCQgoHG3aWXg+YTnPG3oc8aGqalZ
rdagsG5o7rTLSCLDh9hVbdiGYY5HlsDgdew7UorTT2+3X5Fy+mwTrFFEHVJn37ym5sFVwqEkgHnA
A5yGXGjyR3VzDFKsgitkukyMPJGyqwwvPzBX3MMkAKxyQM0AN0XUf7L1RLrdKmI5I/MiOHj3oyb1
5HK7sgZGcdR1qxqupR3s1qJdQ1XUo4mJdrt9hwcZVBl9p4+9k5yPl+XllnpHmahZwXL4+0QNOIoz
iQ4DFI+Rwz7V28HiRSAc4q7e+H44ZtLMsF5pMd7ctbut/wAmMLszLnany/vOmP4DzzwAV9X1K1vL
REFxc3115gb7XdW6xSBcEFSwdjJnK8sflCADgnFW7u7e60uwjzKtzaRmDZsBRkLvJu3ZyDl8bcds
57Vd1bSI7PT/ALSdP1DTZBKsaw3z7jMCGJZfkThdoB4P316d8KgDf/tbT9/2/dc/bfsP2P7P5S+X
/qPI3eZuz0+bGzrxn+Ks2+vI7m002JAwa1tjE5YcEmWR+PbDj8c1Ysktk8PajdS2kU8wnhgjaRnH
lh0lJYBWAJyi9cj2q1oOgpeajpX2u4tlju50K20jsrzReZtYhgNo+64wWDHbwOVyAGk659k0lbH+
1NT07ZO82+xG7zdyoMMN6Y27OOudx6Y5g03xDeaXrf2+2uLwRtcrNNEbk5nAbO2RgBuJyQSR3PFQ
2ujmfT0v5r+ztLd5XhVpmckuoUkbUVjjDjnGBjnGRm3b6Gv9n6ot5JBaXNleRQvLNIcRgiYMuFyW
JZV+6CeM9MmgCvDrVxPNONUubm6juYBbSSvIZJEQOrgruPOGUHHGRkZGcgkvrKW7tUlhlksbKBoo
lbhpTl3BfB4Bd+QDkLwCSNxYNFm+2zwSTwRRwxLO9wxYoI227HwAWw29MDbkbuQMHFjUbW20htJk
ENteCW0aWT945jmJllUN8pVh8oXjggjBGcigB7+IWuNYsNbu0afU4blZbg8IsyoUKdOjcMpwAMBe
Ccks1fVftlokH9s6vqH7wPi9O1EwCOF3vknPXIxg9c8VddtobPxDqdrbpshhu5Y41yThQ5AGTz0F
Z9AF3U7yO6uVS3DLZ26mK1Vx8wj3Fhu/2iWLHtknGBgC7/a9vt+2bJf7Q+yfYtmB5WzyvK35znOz
jbj73zbsfJWLRQBtWGr29nZpp+yU2V3j+0wAN8mHO3Yc8bRhh0yxO7cuBVKxvI7a01KJwxa6thEh
UcAiWN+fbCH8cVSooAKKKKACiiigAooooAKKKKACiiigArV0Z7bydTtrm7itftNoI45JVcruE0b4
OxWPRD2rKrV0ZLbydTubm0iuvs1oJI45WcLuM0aZOxlPRz3oAyqKKKACrFlez6fdLc2zKsgVl+dF
cEMpVgVYEEEEjkd6r1LbW013cLBAm6Rs8ZAAAGSSTwAACSTwACTQBoaxrB1W202No4ke2gaN/Lt4
4gWMjtxsA4wV49dxxySa95eRz2GnW0QZRbxMJQRgNI0jEsPU7fLGTz8oHQCrGpWNlp7aY8U0t1DP
B5srL+73ESuhCZBIGE4JGe5A+6H3lvpbaIL21t7y3ka58qMTXKyiQBcv0jXBXdH167+OhwAUZ/7P
/s+0+z/aftvz/avM2+X1+TZjnp1z36Vet7jS18OXNnLcXgu5pUmCrbKUBRZAF3eYDg+YMnbxjoax
60hYWx8OS34uGe6S5iiMSrhUVllPJPVv3eeOACOSSQoBdsNW0+FtIurhrlbnS8eXDHErJNtlaUZc
sCmS+37rYxnnOBV0K50+0uZpr6a5TMEsKCCBZM+ZG6EnLrjG4Hvn2rQ0vQLe8s7SRra+mS4z517C
4EFl85X94NhztADnLL8rDp1OZo9hbX8lytxcNG0dtNLFGi5LskTuMnoFGznvyABySoA+0udPW0vd
PuJrlLaWeOaOeOBWf5A6gFC4AyJM/eOMY5zkV9VvI76/82IMI0iihQuMFhHGqBiOxO3OMnGcZPWr
ei6XHfxXMhtby+kiZFW0s22yENuzJ91vlXaAfl6uOR0Ms+kW1lrstncx3jERRSRWajEzvIEIiztO
GXecnbzsxgFhgAqX95Z3/iC5uZBOllLK3lqgG+KPkIAOmFG0bcgYXAI6h93PZSWllptpPKYY55JW
ubmLy8GQIpBVS5wBGDkEk5PHHL9RsbPTNQthPbXiRyReZLZvKFnhOWUKzFOpwr8oOGH1Jfw6db2u
m3dtazq07PI1vdTiUPGrBVOUVCAWEikdflzxkUAQ69NBPqYNtOs8aW1vF5iKwDFIURsBgDjKnqBW
bWlr0MEGpgW0CwRvbW8vlozEKXhR2wWJOMsepNZtABWroz23k6nbXN3Fa/abQRxySq5XcJo3wdis
eiHtWVWroyW3k6nc3NpFdfZrQSRxys4XcZo0ydjKejnvQBUsr37Nvilj860lx50JON2OjKf4WGTh
vcgggkGbT7u3sNRnkBlkhMFxCh2AMd8TopIyQOWBPJxz1qvZ2cl5MUQqiIu+WVzhI07sx9OQOMkk
gAEkA6WhaTFqeqTKjRSW0EcsgFxMkBk2o7ICC2cEqN20/KCfmHWgClo95Hp+t2F7KGaO3uY5XCDJ
IVgTj34qlWxpWlDVvE0Vg3kQxvchZFjuEAVC4BEbMx3nnjBYn3qvc2jz6u1tHFY27nGEiu1MK/Ln
iRnI/NuvHtQAzWLyPUNbv72IMsdxcySoHGCAzEjPvzRqt5HfXccsQYKttBEQw5ykSIfwypx7Vd8Q
6Ytl4hk023jto0jkMMbLcq28ByoaRixCMccj5QPQUeIdMXSjZ26x2wzAkjSx3KyvIzRozbgrEAAs
QpAGRzlutAFK+vI7m002JAwa1tjE5YcEmWR+PbDj8c0S3kb6Ja2QDeZDczSsSOCHWIDHv8h/StDV
9IGl6Lp7GOzeS4UySTJdpI4O+RdqhXIKYQHdg/Nkbu1EmkC18KJftHZySXErLvN2heJAIyuxFflj
vYMCCQAOF6kAz4ryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OK0
LDSA3hy91R47OVlZY41lu0UoCshZtocNvBQbVPUE/K3Y8P6QL2C9vXjs5ltotyQ3F2kSs+9F+Yb1
YLhyQcgbgBk9CARaLqkdhFcxm6vLGSVkZbuzXdIAu7Mf3l+VtwJ+bqg4PUT3uu297q93My3P2W9t
IbaZpGDzLsWP5s8BjvjB5xuGfuk5GVZWaXW9pb22tI0wN85Y5J6AKisx6HnGB3IyM249I8q7uhdv
ugtIFuXMJ5ljYoE2Ejjd5iHJGQCSQSNpACS+spru1ilhlewtoGt0J4kwS7eYQDjIdywXOMAKSeWL
5r7TootPtIknvLS3uXuJfOUQGQP5YKYVmwMR/ezn5unHLJLGyhu7WWWaVLC5ga4QHmTALr5ZIGMl
0KhsYwQxA5UP1rS47CK2kFreWMkrOrWl426QBduJPur8rbiB8vVDyegAGXNzp8Gky2NjNcz+fPHM
7zwLFs2K4AADtnPmH0xt754frF1pd/c3moRPeG7u5WlMLRKqRFm3H59xLgcgfKuc54xgw6vYW1jH
p7Wtw1wtxbGV5Cu0FhLIh2jrt+TjPJ6kDOBd1/QUs9R1X7JcWzR2k7lraN2Z4YvM2qSxG0/eQYDF
hu5HDYAIryfR7mxgSO6voTDANtsLRDH5u0b23+Zk7mGdxXIGBjCgVYstW0uH+xbmUXgu9KUEIqKy
TkTPIFzuBQfMAWw3X7vy/MzT9BQu5uri2aQWM1ybTeyyKPIZ42zgK38DYVicHkcNiraaFLdxW3+l
20U93/x628m/fN8xQYIUqMspX5mHTJwOaADQrnT7S5mmvprlMwSwoIIFkz5kboScuuMbge+faq9v
NZ2k00qo1w6Ni2E0YCHr87rk5I4+TkEnkkDaz7TSxcWgup762s4WkaONpxId7KAWACKxGAy9cfe4
zzgTSZmu7qwc7NRhkMa25wfMYEhlVgcbsjgfxcgHO0MAWNAuIv8AhJrS/wBQv1iWG5S5llmDuZCH
BI+VWJY8nnj3rHrV8NpbT+IbG1u7SK5huZ44GWRnXaGcAsCrA5xnrkc9KyqACrV3crPbWEavKxgg
MbBwoCkyO2FxyRhgeeck9sVFaxwy3cMdxP5ELyKskuwt5ak8tgcnA5xWhd22nyaSb6zhubbbOsIS
4nWXzcqSSpCLjbhc9f8AWL07gGVWxeXWl30a3Mr3gu1to4RbrEuzKRrGG8zdnHyhiNn+zn+KsetK
TRzDbB5b+zS5MQmFqzOH2FQ4O7bsyVIYDdnnH3uKALFpq9vBqHh24ZJSmm7POAAy2J3k+Xnnhh1x
zT9J1mOw08wC71CykWVpWaxbabkEKAjtuG0LtODh8eY3HrXtNClu4rb/AEu2inu/+PW3k375vmKD
BClRllK/Mw6ZOBzTLXRzPp6X81/Z2lu8rwq0zOSXUKSNqKxxhxzjAxzjIyAWNT1e3vf7a8tJR9u1
JbuPcBwg87g89f3i9M9DzV3TNStp9eNx8pQafFALe4eONJ3SJIyCz5QAFTINwIJRRjJFZsWg3B+3
G6uLazWxnW3uDO5+Vzv4AUMW5Qj5c9c9ASKV7ZyWN00EhVjtV1dDw6MoZWGecFSDyAeeQDxQBuza
hFpfidL5LiXe8DpJ5MyStbFkaMbHTajbVKsAu0DhOMZqlqt+upTWsTavqF2qscz6gTiMNgcIGcgD
GSQSTwMcc59lZyX10sEZVTtZ2dzwiKpZmOOcBQTwCeOATxVh9KZru1trK7tr57mQRRmBmX5yQNpD
hSOo5Iwc9eDgALnTrWC3aSPWbG4cYxFEk4Zue26MD35NWNYutLv7m81CJ7w3d3K0phaJVSIs24/P
uJcDkD5VznPGMGvd6WLe0N1BfW15CsixyNAJBsZgSoIdVJyFbpn7vOOM2NQ8PTaebxGvLOeeyYi4
hhdmMa7wm7JUKRllGASw3cgYOACWPU7FdHNtJNeTgRFUs5oUZInIPzJLu3IN3zkKozgKxI+aqVpd
266Te2NwZU82SOeN40DfOiuApBIwD5n3ucY6HPFiTw9NHGM3lm1w1sLpbZXYu0ZjEhP3doIXJIYg
/LwDlcstNClu4rb/AEu2inu/+PW3k375vmKDBClRllK/Mw6ZOBzQBY0nWY7DTzALvULKRZWlZrFt
puQQoCO24bQu04OHx5jcerH1e3fxVf6lslFtdyXIxgb0SZXXOM4JAfOM84xkdaq2mli4tBdT31tZ
wtI0cbTiQ72UAsAEViMBl64+9xnnFjR9Hgv7+5tru/gtzDFM33mfeUjdshkVgVBUE88j7uTQAaPq
kelXl6Irq8gjuIjCl1Au2aMb1cNt3Dk7MEbuNx5OMFl9q9w2pxXdrqmpyzRR7Fu7iUrL3zjDEqMM
Rjcc8nvgZsqLHM6LIsiqxAdM4YeoyAcH3ANX00eQa7caXNKqtatN50iDcMRBmcqDjJwhwDjJxkjr
QBbuvFmqXOpWV29zPIto0EiQTzNIhkjVRvIyOWIJPf5jz3qpdXlnHp72VgJzHPKk8pnAyhUMFRcd
QN7ZY43cfKuOXy6bb3H2Gezk+z215O1uBdyg+S67MlnAAK4kU5wMcjHGTUk064htJbidfJ8uf7OY
5AVcvglgAR/DgbvTcvrQBoX+r295Zvp+yUWVpn+zAQN8eXG7ec87hlj1wwG3auRVezurOwtmuInn
kv5IpITG0QWJA6shbduJY7ScDC8nOSBhruv6ClnqOq/ZLi2aO0nctbRuzPDF5m1SWI2n7yDAYsN3
I4bEEGjx/wBk31zdSslzFbJcQwKOdhkjXc/oCHyo6kc8DbuAHaVe6VZTWF+4uUvLKRZDFHGHS4ZX
Lgly4KZGF4U4255JxVKK8jt9KlghDC4uWKTuw48oFGVV9ywJPH8K4I+YHYi0W3TQ7K/bR9XvElge
aae2lCRR7ZHXB/dNjAQE5PesKys5L66WCMqp2s7O54RFUszHHOAoJ4BPHAJ4oAsWF3braT2F4ZUt
p5ElMsKB3R0DAfKSAwIdgRkdjnjBsLrEY1V9S8pkngijWxXO4I8YREZzxkhVJ6YLAZG3IpkVpp7f
br8i5fTYJ1iiiDqkz795Tc2Cq4VCSQDzgAc5DLjR5I7q5hilWQRWyXSZGHkjZVYYXn5gr7mGSAFY
5IGaAG313b6jfRXUplSWf5r11QHMhY7mRcjqMHGQNxbGBjFqLV7fT7vSxZpLPbafd/aw0wEbyuSh
IwCwUYjUDk9znnANL0bzdRgt7nyibixnuERpNnlkRSFC5OAvKq/JxtKnoaryaLMZrNLOeC9W7l8i
F4SyhpBtynzhSD86c4x83Xg4AH3Nzp8Gky2NjNcz+fPHM7zwLFs2K4AADtnPmH0xt754yqu3lhFb
RCWHUbO8XdtYQl1Kk9OHVSRweRkDvjIzsapoFvZ2d3IttfQpb48m9mcGC9+cL+7GwY3Alxhm+VT1
6gAz7J7Z/D2o2st3FBMZ4Z41kVz5gRJQVBVSAcuvXA96u6Xq2lwT6LdXovPM0xlAihRSJAJmk3by
3BG8/LtOdv3huyvO1q6Mlt5Op3NzaRXX2a0EkccrOF3GaNMnYyno570AV5byN9EtbIBvMhuZpWJH
BDrEBj3+Q/pVjTrvT10m8sL43KefPDKksCK+zYsgOVJG7O8DGR654wYrTSmubQXL3dtbI8jRQidm
HmuACVBAIXG5eXKj5uvBxfv9BQ6trRhuLazsbG+a3zO7fKCz7AAAzN9zHGTznoCQANOrWc17eJKJ
YrgAAO2c+YfTG3vnivfXkdzaabEgYNa2xicsOCTLI/Hthx+Oat6xYR2MUQOi6rp8jsdrXsuQ4HUA
eUnPI5z+HNS65oUmixGKXTtQSSOXynvZRtglPOQg2dOODuOQM4GcAAd/a2n7/t+65+2/Yfsf2fyl
8v8A1HkbvM3Z6fNjZ14z/FUWlXulWU1hfuLlLyykWQxRxh0uGVy4JcuCmRheFONueScVLBpNlc6d
JLFDfFIoC76iTi3WUR7/ACipTrkiMfPySCByFqLRINHvbiC1u7W+3nc088V2iqkagszBDGSdqAnG
cnHHXFAEWhXOn2lzNNfTXKZglhQQQLJnzI3Qk5dcY3A98+1V7eaztJppVRrh0bFsJowEPX53XJyR
x8nIJPJIG1regaPHqeoWi3krQWc1ylvvUZeR2IG1Ae4yCT0UEZySqtRsrL7dviik/wBL48mEj/Xe
qqf73TC/xcgHOAwBe0C4i/4Sa0v9Qv1iWG5S5llmDuZCHBI+VWJY8nnj3rHrV8NpbT+IbG1u7SK5
huZ44GWRnXaGcAsCrA5xnrkc9KyqACrV3crPbWEavKxggMbBwoCkyO2FxyRhgeeck9sVFaxwy3cM
dxP5ELyKskuwt5ak8tgcnA5xWhd22nyaSb6zhubbbOsIS4nWXzcqSSpCLjbhc9f9YvTuAZVbWoav
b6kqRTpKY4bSGO3kwN8TpEqlevMbMpOM8Z3Dqyti1q6lpH2NfOR9sHkW7qZTzJI8SOyrgc4359AM
ZOSuQCK0u7ddJvbG4MqebJHPG8aBvnRXAUgkYB8z73OMdDnhkt5G+iWtkA3mQ3M0rEjgh1iAx7/I
f0qxaaFLdxW3+l20U93/AMetvJv3zfMUGCFKjLKV+Zh0ycDmmWujmfT0v5r+ztLd5XhVpmckuoUk
bUVjjDjnGBjnGRkAsanq9ve/215aSj7dqS3ce4DhB53B56/vF6Z6HmotC1T+y5rr/SLm2+0QeT9o
tf8AWRfOr5A3LnOzb94cMTzjBItBuD9uN1cW1mtjOtvcGdz8rnfwAoYtyhHy5656AkMGizfbZ4JJ
4Io4Ylne4YsUEbbdj4ALYbemBtyN3IGDgAtnWY/7dgu5bvUL1Y4miFzdNmVSQwDou47ShYMBu6rn
K54tXPiaPztGlFxqGoSadeNctLevgyA+UQo5bYPkIxk+vfAwr6xaxeL99FPFNH5kU0W7bIuSpIDA
MMMrDkDp6YJm1m0t7PURHaiUQvBDMolcMw8yJXIJAAOCxHQUAWLi60uHRJ7Cye8mkluYpjLNEsYw
iyDbtDNz84Oc856DHzOuPFWsXOkpYyajfN+8lMjtdOfNR1QbCM8gbW6/3z+OLWxq+kWun2Gnzwal
BcvcRF2RBIM/vJF3LuRfl+QDk5znjGDQA7+17fb9s2S/2h9k+xbMDytnleVvznOdnG3H3vm3Y+Si
w1e3s7NNP2Smyu8f2mABvkw527DnjaMMOmWJ3blwKl1DQUDobW4tlkNjDci03s0jDyFeRs4Kr/G2
GYHA4HK5gt/D01xBZOLyzSS+XNrAztvlbeybcBSFJZeCxCnPXhsAFTSryOwv/tMgYlYpRGUHKSGN
gjD0KsVOeoxkc1Lp91Z/2fdWF688Uc0sUwlhiEhBQONu0svB8wnOeNvQ54baaWLi0F1PfW1nC0jR
xtOJDvZQCwARWIwGXrj73GecCaUy3d1bXt3bWL20hikM7M3zgkbQEDE9DyBgY68jIBabV7ddXbUI
kl822ghSyLAf6yNURZHGeOFLYyQG2g7hnOfp179gvBOY/MQxyROucEo6FGwecHDHBwcHHB6VYGiz
fbZ4JJ4Io4Ylne4YsUEbbdj4ALYbemBtyN3IGDh+s6fDp8OmCJopGmtDK8sTllkPnSKGGenyqvGA
RjkA5oAJbvT2+w2ANy+mwTtLLKUVJn37A+1clVwqAAEnnJJ5wDUtX/ta0j+1Ji5t9kNt5YwiW4Df
uzk5O07dp5PLbieMRaFbQ3niHTLW4TfDNdxRyLkjKlwCMjnoaLvSmtrQ3KXdtcokixTCBmPlOQSF
JIAbO1uULD5evIyAMe8jOiQ2SBlkFzJLKQMBxtQJn1K4k69N5x1NbFx4i+0acI/7V1eLFolt9gjb
bAdsYjzu39DjcRs5yVz/ABVn3ehS2kVz/pdtLPaf8fVvHv3w/MEOSVCnDMF+Vj1yMjmoo7S3k8PX
F5iUXMN3FFneNhR0kPTGcgx9c9+nFAGfRWxb+HpriCycXlmkl8ubWBnbfK29k24CkKSy8FiFOevD
Yr2mli4tBdT31tZwtI0cbTiQ72UAsAEViMBl64+9xnnABn0VqxaDcH7cbq4trNbGdbe4M7n5XO/g
BQxblCPlz1z0BIpXtnJY3TQSFWO1XV0PDoyhlYZ5wVIPIB55APFAFeitJNHkGu3GlzSqrWrTedIg
3DEQZnKg4ycIcA4ycZI60+XTbe4+wz2cn2e2vJ2twLuUHyXXZks4ABXEinOBjkY4yQDKoq1Jp1xD
aS3E6+T5c/2cxyAq5fBLAAj+HA3em5fWr1xo8dpok9xNK32+G5iikgA4iDrIcMf7/wAgyP4eh+bI
UAx6Ku2VnHLbXN3clltoF2ZQ/M0rK3lqPbKknp8qnnJAN2y0i3nGn20ry/bNU/49WQjy4v3jRrvG
MtllIOMbRg/MTtABi0Vaj064mtIriBfO8yf7OI4wWcPgFQQB/Fk7fXa3pV7StJtrrxNFpd3eqITc
iAy23z+aS4X5D0wc5yeMZPJwpAMeirunWcdybmacsLe0i86VUOHcb1QKpPAJZ15PQZODjBNRs47Y
200BY293F50Sucug3shViOCQyNyOowcDOAAUqKKKACiiigAooooAKKKKACiiigAooooAKKKKACii
igArV0Z7bydTtrm7itftNoI45JVcruE0b4OxWPRD2rKrV0ZLbydTubm0iuvs1oJI45WcLuM0aZOx
lPRz3oAyqKKKACrFlez6fdLc2zKsgVl+dFcEMpVgVYEEEEjkd6r1LbW013cLBAm6Rs8ZAAAGSSTw
AACSTwACTQBoaxrB1W202No4ke2gaN/Lt44gWMjtxsA4wV49dxxySa95eRz2GnW0QZRbxMJQRgNI
0jEsPU7fLGTz8oHQCrGpWNlp7aY8U0t1DPB5srL+73ESuhCZBIGE4JGe5A+6H3lvpbaIL21t7y3k
a58qMTXKyiQBcv0jXBXdH167+OhwAUZ/7P8A7PtPs/2n7b8/2rzNvl9fk2Y56dc9+lXre40tfDlz
Zy3F4LuaVJgq2ylAUWQBd3mA4PmDJ28Y6GsetIWFsfDkt+LhnukuYojEq4VFZZTyT1b93njgAjkk
kKAXbDVtPhbSLq4a5W50vHlwxxKyTbZWlGXLApkvt+62MZ5zgQWNxpY0d7W4uLy1nklJleC2WUSx
4UopJkUgBgxwOCdpP3Ri7pegW95Z2kjW19Mlxnzr2FwILL5yv7wbDnaAHOWX5WHTqc/S7WyuESN7
W+v72WRlW1s32MqqAd3KPuzluBjGwk5zwARW39lH7RDcvcqhkDQ3McIZwo3AqYy4A3ZU53HG3Azk
mpbq70+/1QvcG5jthBFBHJGis48tFQMUJAOQn3dwxu6nHLG0qNdQvY/tatYWkrI94q5DjJC7Rn5m
bBwufUkhQWFifSLeDXNYgZ5TZaZJIXwR5josojUA4wCSy5OOBk4ONpAHrqOl/wBoaekizy2VlbPC
sjwKXZyZHVzGW2kK8g+UsQQvPXFUr1bKe7RoNRuZnmkJnmvINm0k/eJV3LdSTxn65q1FpFvNqdkq
vL9lvIHuIY8jzW27x5WcYLM8ZQEDnKnbk7afq2krpcVjetpt5arLK6GzvySWCbDnIVDtbfjgZG08
88AFTXpoJ9TBtp1njS2t4vMRWAYpCiNgMAcZU9QKza0tehgg1MC2gWCN7a3l8tGYhS8KO2CxJxlj
1JrNoAK1dGe28nU7a5u4rX7TaCOOSVXK7hNG+DsVj0Q9qyq1dGS28nU7m5tIrr7NaCSOOVnC7jNG
mTsZT0c96AKlle/Zt8UsfnWkuPOhJxux0ZT/AAsMnDe5BBBIM2n3dvYajPIDLJCYLiFDsAY74nRS
RkgcsCeTjnrVezs5LyYohVERd8srnCRp3Zj6cgcZJJAAJIB0tC0mLU9UmVGiktoI5ZALiZIDJtR2
QEFs4JUbtp+UE/MOtAFLR7yPT9bsL2UM0dvcxyuEGSQrAnHvxVKtjStKGreJorBvIhje5CyLHcIA
qFwCI2ZjvPPGCxPvVe5tHn1draOKxt3OMJFdqYV+XPEjOR+bdePagBmsXkeoa3f3sQZY7i5klQOM
EBmJGffmjVbyO+u45YgwVbaCIhhzlIkQ/hlTj2q74h0xbLxDJptvHbRpHIYY2W5Vt4DlQ0jFiEY4
5Hygego8Q6YulGzt1jthmBJGljuVleRmjRm3BWIABYhSAMjnLdaAKV9eR3NppsSBg1rbGJyw4JMs
j8e2HH45olvI30S1sgG8yG5mlYkcEOsQGPf5D+laGr6QNL0XT2Mdm8lwpkkmS7SRwd8i7VCuQUwg
O7B+bI3dqJNIFr4US/aOzkkuJWXebtC8SARldiK/LHewYEEgAcL1IBnxXkaaJdWRDeZNcwyqQOAE
WUHPv84/WixvI7a01KJwxa6thEhUcAiWN+fbCH8cVoWGkBvDl7qjx2crKyxxrLdopQFZCzbQ4beC
g2qeoJ+Vux4f0gXsF7evHZzLbRbkhuLtIlZ96L8w3qwXDkg5A3ADJ6EAqafdWf8AZ91YXrzxRzSx
TCWGISEFA427Sy8HzCc5429DngvLqz1DVleV54bRYo4Q6xB3IjjVA23cBk7QSN3GepxzNoulx38V
zIbW8vpImRVtLNtshDbsyfdb5V2gH5erjkdDDe2Fta601q9w0MAVXcuu94iUDNGQMZdSSnO0bhzt
5wAWNSvdKu/EtzfAXMtndySyOskYR4mctggByG2khuSN2MHA5qK7u9PFpZWNubm4toZ5J5HkRYXb
eEBUAFwMCP73P3unHJd2NlZeJb6xnmlWztZ5kDdXcIW2rkDALEBd2MDOcYGK0G0C3OraPbSW19Yf
bbsQSWt04MypuQeYDsXg7mAyvVDyegAM+5udPg0mWxsZrmfz545neeBYtmxXAAAds58w+mNvfPFq
/wBW0+ZtXurdrlrnVM+ZDJEqpDulWU4cMS+Cm37q5znjGDBrFhHYxRA6LqunyOx2tey5DgdQB5Sc
8jnP4c0XGjx2miT3E0rfb4bmKKSADiIOshwx/v8AyDI/h6H5shQB1ve6VaKbuAXK3RtHtzbeWDHu
eIxM/mF887i+NvX5enNUrC8jtIL8MG86e28qF1HKEuhbnsCgdTjqGx0Jq3Bo8f8AZN9c3UrJcxWy
XEMCjnYZI13P6Ah8qOpHPA27p9H0my1GKGPyb6Z2x9pu4jthsQWKgyAocgBd5O5Rg44wTQBFoPiG
fSLuyVxFJZQ3a3DI1tHI45XcUZhlSQo6EdBVK3u4Y5pruaFZbktuij8tRCGOSWKjggcYTG0554G1
rek2NteREfYdQ1G7LMTb2TbTEg2/OT5b5BLEdsbec7hULaVG+oXtja3a3E0MrJb7V4ugCR8pBPzH
ghed3IBzgMATaBcRf8JNaX+oX6xLDcpcyyzB3MhDgkfKrEseTzx71j1q+G0tp/ENja3dpFcw3M8c
DLIzrtDOAWBVgc4z1yOelZVABVq7uVntrCNXlYwQGNg4UBSZHbC45IwwPPOSe2KitY4ZbuGO4n8i
F5FWSXYW8tSeWwOTgc4rQu7bT5NJN9Zw3NttnWEJcTrL5uVJJUhFxtwuev8ArF6dwDKra1DV7fUl
SKdJTHDaQx28mBvidIlUr15jZlJxnjO4dWVsWtCO0Q+Hri8xE0i3cUWd7B0BSQ9MbSG29c5GzpzQ
Bq2fiLydLs7f+1dXsvskbJ9nsm2pNl2fJbeNhO/bna2NoPPQY8t5G+iWtkA3mQ3M0rEjgh1iAx7/
ACH9Kt2/h6a4gsnF5ZpJfLm1gZ23ytvZNuApCksvBYhTnrw2DSNItdQsNQnn1KC2e3iDqjiQ4/eR
rubajfL85HBznHGMmgCpFeRpol1ZEN5k1zDKpA4ARZQc+/zj9a1bfxBHDqU00U95aLPp8Fobi3/1
sRjWLJUbhkExEfeHDZ9q52runaZJqRudk0EKW8XnSvM+0BN6qT0OT8wOBycYGTgEAl1O9j1DUIWl
v9Quo1VUe4uvnkxkk7V3HAGeF3HJBORnAm1ibTr+/tns7qcKYoYJGuoAgQJGke75WckHaSRjI7Zq
E6LN9tggjngljmiadLhSwQxru3vggNhdj5G3J28A5GW3Oli3+zyLfW01rPIYxcxiTYrLt3AhlDcB
lPCnrxk5FABc6dawW7SR6zY3DjGIoknDNz23Rge/Jou7u3utLsI8yrc2kZg2bAUZC7ybt2cg5fG3
HbOe1S67pdtpVzDHbX0VyJIIpGCh8qWjRsncqjBLEjGTjrg1lUAbX9r2/wDa/wBr2S+X/Zv2TGBn
f9k8nPXpu5+nbtVK+vI7m002JAwa1tjE5YcEmWR+PbDj8c1oSaQLXwol+0dnJJcSsu83aF4kAjK7
EV+WO9gwIJAA4XqYrfw9NcQWTi8s0kvlzawM7b5W3sm3AUhSWXgsQpz14bABPpOufZNJWx/tTU9O
2TvNvsRu83cqDDDemNuzjrncemOW6drMcE1+xu9Qs5LmUSreQN5s4A3ZRjuTIbcCTkZKDj0pWmli
4tBdT31tZwtI0cbTiQ72UAsAEViMBl64+9xnnFi18OXlzNdwNJBBcW9ylp5MjEmSZt4VFKgjOUYZ
JA6c0AWrrXLO+1W/llN4tve2cNs8r4lmQoIiWPKhyWixnK/ez7VVvprDUJNNtbadraC2tjC092pw
T5kj7sIGIB3DjBwTjJxuNG+sWsXi/fRTxTR+ZFNFu2yLkqSAwDDDKw5A6emCW2VnJfXSwRlVO1nZ
3PCIqlmY45wFBPAJ44BPFAGlZLZ6Nq2nah/adteJBdxyPFbJKH2q24kb0Udsde4qlFeRpol1ZEN5
k1zDKpA4ARZQc+/zj9alXRzNqFlZ2t/Z3Ju5VhSSNnAVyQMMGUMB8w524POMkECqlnI+nzXoK+XD
LHEwJ5JcORj2+Q/pQBu654i/tWK7f+1dXk+1Sb/sUjYghy27Gd53hegG1ex4xg0re40tfDlzZy3F
4LuaVJgq2ylAUWQBd3mA4PmDJ28Y6Gob7RzYLKr39m9zA22e2VnDxkHBGWUKxB4O0t6jIBIzaAOn
g1DS7SLw9eSyTtd2EXmiOIK6uVuJXCMcgxnpz83DA44+avpOufZNJWx/tTU9O2TvNvsRu83cqDDD
emNuzjrncemOaUOjmS1hmlv7O2e4UtBFMzgyjcVzuClFBZWHzMvTJwOais7CK5iMs2o2dmu7aomL
sWI68IrEDkcnAPbODgAll1KOXT9QgPntJdXkVwrSvvOFEoO5uMt+8HOOeelVbO/vNPmMtldz20hX
aXhkKEjrjI7cD8qu6Zpav4pttI1KOVd12LWZYpFDIxbbwcEcH88fjWbEYxMhlVnjDDeqNtJHcA4O
D74P0oA2rrxReXuuy390880DtOFtpJywijlBVlQnhTtbAOMZA4OMVXlu9Pb7DYA3L6bBO0sspRUm
ffsD7VyVXCoAASecknnAr6xZx6frd/ZRFmjt7mSJC5ySFYgZ9+KeukXDavaaYHi8668jY2TtHmqr
LnjPAcZ49etAEupav/a1pH9qTFzb7IbbyxhEtwG/dnJydp27TyeW3E8YsXXiW4v9HvrW7WBri5uY
pjIlpEhIUSbiWVQdxLLz1xu55INS6s7OTT3vbAziOCVIJRORlywYq646A7Gypzt4+Zs8Nn0i4tnv
xI8Wyz25kBOyXcQF2HHzbgS49VBPagBl7eRy21taWwZbaBd+HHzNKyr5jH2yoA6fKo4ySTdstXt4
Bp9zKkv2zS/+PVUA8uX940i7znK4ZiTjO4YHykbiWMGj3dpc7rW+jkt7RpZJ/taFA4AVfk8vOGkZ
Fxu43ZzwTVLTLOO6uWe4LLZ26iW6ZD8wj3BTt/2iWCjtkjOBkgAsabq/9k2kn2VM3NxvhufMGUe3
IX92MHI3HduPB4XaRzl+lXWl2PiaK9Z7xbK2uRNCBEryOFcFVb5lAJA5Izz2qG1s7OPT0vb8zmOe
V4IhARlCoUs7Z6gb1woxu5+ZcctutIuLR9RRnid9Pn8mZUJJ6su8cfdBUDJxy6jvQA63urO0ubu3
R55bC6iELyNEElUblfcF3EZDIOM8jIypORFqN5Hcm2hgDC3tIvJiZxh3G9nLMBwCWduB0GBk4ySS
wW11IWl7cLCFUNKyqXKEqGKbePnH3SDgBhgkDJrSi0a3/wCEv1HSR88UH2xIjLIE5jjkKFm4AwVB
JOB68UAYFFXb7TjZxRTx3UF1bysyLNDvA3rgsuHVTkBlPTHPXrilQAUUUUAFFFFABRRRQAUUUUAF
FFFABRRRQAUUUUAFaujPbeTqdtc3cVr9ptBHHJKrldwmjfB2Kx6Ie1ZVaujJbeTqdzc2kV19mtBJ
HHKzhdxmjTJ2Mp6Oe9AGVRRRQAVYsr2fT7pbm2ZVkCsvzorghlKsCrAgggkcjvVepba2mu7hYIE3
SNnjIAAAySSeAAASSeAASaANDWNYOq22mxtHEj20DRv5dvHECxkduNgHGCvHruOOSTXvLyOew062
iDKLeJhKCMBpGkYlh6nb5YyeflA6AVY1KxstPbTHimluoZ4PNlZf3e4iV0ITIJAwnBIz3IH3Q+8t
9LbRBe2tveW8jXPlRia5WUSALl+ka4K7o+vXfx0OACjP/Z/9n2n2f7T9t+f7V5m3y+vybMc9Oue/
Sr1vcaWvhy5s5bi8F3NKkwVbZSgKLIAu7zAcHzBk7eMdDWPWkLC2PhyW/Fwz3SXMURiVcKissp5J
6t+7zxwARySSFALthq2nwtpF1cNcrc6Xjy4Y4lZJtsrSjLlgUyX2/dbGM85wILG40saO9rcXF5az
ySkyvBbLKJY8KUUkyKQAwY4HBO0n7oxd0vQLe8s7SRra+mS4z517C4EFl85X94NhztADnLL8rDp1
OfpdrZXCJG9rfX97LIyra2b7GVVAO7lH3Zy3AxjYSc54AIrbVJ9N+0QWbxSW0sgbFzaxybtu4KxV
wwU4Y9D3PJq7d65Dea/q9y8bLZ6kzI2yNVdE8xXUgDgsCi55+bnkE7hUbSo11C9j+1q1haSsj3ir
kOMkLtGfmZsHC59SSFBYXV8Peb4k1awto7m4h0+SU+VCu6aVFkCALgYySwyccDJwcYIBQvLu3upr
OBTKllaxiBJCgMhTezsxXIGcuxC54GBk43F11cW0sVpptrKy2kUryfaLlNpLPsDEqpbCgIvA3Hgn
uFE0ejSXniE6bFaXlm20uYJ18yZQsZcgDau5iAdowM5Az3qHU7ZdO1CENpN5aKFVzb37kmQZPcKh
2nGOOeDz6ABr00E+pg206zxpbW8XmIrAMUhRGwGAOMqeoFZtaWvQwQamBbQLBG9tby+WjMQpeFHb
BYk4yx6k1m0AFaujPbeTqdtc3cVr9ptBHHJKrldwmjfB2Kx6Ie1ZVaujJbeTqdzc2kV19mtBJHHK
zhdxmjTJ2Mp6Oe9AFSyvfs2+KWPzrSXHnQk43Y6Mp/hYZOG9yCCCQZtPu7ew1GeQGWSEwXEKHYAx
3xOikjJA5YE8nHPWq9nZyXkxRCqIi75ZXOEjTuzH05A4ySSAASQDpaFpMWp6pMqNFJbQRyyAXEyQ
GTajsgILZwSo3bT8oJ+YdaAKWj3ken63YXsoZo7e5jlcIMkhWBOPfiqVbGlaUNW8TRWDeRDG9yFk
WO4QBULgERszHeeeMFifeq9zaPPq7W0cVjbucYSK7Uwr8ueJGcj8268e1ADNYvI9Q1u/vYgyx3Fz
JKgcYIDMSM+/NGq3kd9dxyxBgq20ERDDnKRIh/DKnHtV3xDpi2XiGTTbeO2jSOQwxstyrbwHKhpG
LEIxxyPlA9BR4h0xdKNnbrHbDMCSNLHcrK8jNGjNuCsQACxCkAZHOW60AUr68jubTTYkDBrW2MTl
hwSZZH49sOPxzRLeRvolrZAN5kNzNKxI4IdYgMe/yH9K0NX0gaXounsY7N5LhTJJMl2kjg75F2qF
cgphAd2D82Ru7USaQLXwol+0dnJJcSsu83aF4kAjK7EV+WO9gwIJAA4XqQDPivI00S6siG8ya5hl
UgcAIsoOff5x+tFjeR21pqUThi11bCJCo4BEsb8+2EP44rQsNIDeHL3VHjs5WVljjWW7RSgKyFm2
hw28FBtU9QT8rdjw/pAvYL29eOzmW2i3JDcXaRKz70X5hvVguHJByBuAGT0IBU0+6s/7PurC9eeK
OaWKYSwxCQgoHG3aWXg+YTnPG3oc8F5dWeoasryvPDaLFHCHWIO5EcaoG27gMnaCRu4z1OOZtF0u
O/iuZDa3l9JEyKtpZttkIbdmT7rfKu0A/L1ccjoYb2wtrXWmtXuGhgCq7l13vESgZoyBjLqSU52j
cOdvOACxqV7pV34lub4C5ls7uSWR1kjCPEzlsEAOQ20kNyRuxg4HNRXd3p4tLKxtzc3FtDPJPI8i
LC7bwgKgAuBgR/e5+9045LuxsrLxLfWM80q2drPMgbq7hC21cgYBYgLuxgZzjAxUuq6P9n+xiCxv
rW5uJGjFjdHfMcbdrjCqSGLFQNvVDyegAIrm50+DSZbGxmuZ/Pnjmd54Fi2bFcAAB2znzD6Y2988
WLrxLcX+j31rdrA1xc3MUxkS0iQkKJNxLKoO4ll5643c8kGvqOl29lpNncR3PnTyTzQzbMGNSixk
BSPvffOW6EjjIAZrGr6QNL0XT2Mdm8lwpkkmS7SRwd8i7VCuQUwgO7B+bI3dqACDxLcf2bfWlysE
hms0tYnFpFuAVo8bn27iAiEDkkHaeoBDNKvdKsprC/cXKXllIshijjDpcMrlwS5cFMjC8Kcbc8k4
q3qGi29hpyT/ANj6u8bWkMv27zQIN8kat/zy6Bmxjd2xmodH0my1GKGPyb6Z2x9pu4jthsQWKgyA
ocgBd5O5Rg44wTQBQtP7KltBHevc28yyM3mwQiXzFIGFILqF2kE5Gc7u2BmWfVLe41bUNVktt088
7TQwvh40LMSS2fvbeMLjBPJ4BVorTSxcWgup762s4WkaONpxId7KAWACKxGAy9cfe4zzhlvpzXE0
1qkqm8RsRxKQwmIyCFYHBbpgDhucHO0MAW9AuIv+EmtL/UL9YlhuUuZZZg7mQhwSPlViWPJ5496x
61fDaW0/iGxtbu0iuYbmeOBlkZ12hnALAqwOcZ65HPSsqgAq1d3Kz21hGrysYIDGwcKApMjthcck
YYHnnJPbFRWscMt3DHcT+RC8irJLsLeWpPLYHJwOcVoXdtp8mkm+s4bm22zrCEuJ1l83KkkqQi42
4XPX/WL07gGVWxb3Glr4cubOW4vBdzSpMFW2UoCiyALu8wHB8wZO3jHQ1j1sSeHpo4xm8s2uGthd
LbK7F2jMYkJ+7tBC5JDEH5eAcrkAdaavbwah4duGSUppuzzgAMtid5Pl554Ydcc1V0u7t7cXkF0Z
Vhu4BC0kSB2TEiOCFJAPKAdR1z2wZbTQpbuK2/0u2inu/wDj1t5N++b5igwQpUZZSvzMOmTgc0/S
NItdQsNQnn1KC2e3iDqjiQ4/eRrubajfL85HBznHGMmgCjZWkF1v87Ubaz24x56yHd9NiN098dat
q9tpsOo2y3cV59rtFjSS3Vwqt50b4O9VPRD0B6j3xFodol9r1hayCJklnRSkrsivk/dLKCRu6ZA4
zTNO0yTUjc7JoIUt4vOleZ9oCb1Unocn5gcDk4wMnAIA/RdR/svVEut0qYjkj8yI4ePejJvXkcru
yBkZx1HWpdZ1H7f5C/2nqeoeXuPmXxxtzjhV3Njpyd3ORwMZMU2liC7gikvrZbeeMyR3eJDGygsu
cbd4+ZWXleo9Oadr+nQaTrd3ZW10txHDK6AjdlcMRtbKjLDHOMj0NADdUu7e+NtPGZVmEEcMsbIN
q+XGqAq2ctkLk5AxnHPWnS6ZaRxO667p8jKpIREuMsfQZiAyfcgVm1sah4em083iNeWc89kxFxDC
7MY13hN2SoUjLKMAlhu5AwcAFSW8jfRLWyAbzIbmaViRwQ6xAY9/kP6UX15Hc2mmxIGDWtsYnLDg
kyyPx7YcfjmrH9hS+T/x9232vyPtH2P5/M8vZ5mc7dn3PmxuzjjrxTIdHMlrDNLf2ds9wpaCKZnB
lG4rncFKKCysPmZemTgc0APtrnT59Jisb6a5g8ieSZHggWXfvVAQQXXGPLHrnd2xzoWviS3XVp76
eCVfO1mDUSiYbaiNKWXJxk/vBj1welUtI0i11Cw1CefUoLZ7eIOqOJDj95Gu5tqN8vzkcHOccYya
qWOnG8ilnkuoLW3iZUaabeRvbJVcIrHJCsemOOvTIBYZ7bUodOtmu4rP7JaNG8lwrlWbzpHwNise
jjqB0PtlghtdOvbaUaotwu4kyWIkV4SPusPMRckHnAPOCMrkGnx6DcSXd1A1xbRrbQLcvNI5CGJi
m1hxnkSK2Mbu2N3y0w6LN9tggjngljmiadLhSwQxru3vggNhdj5G3J28A5GQCxqGrxfa7K6sZZZL
23k803stukTuwIK7lUsGIIJLsSW3c9BVvVPstp4emtUgtoJp7uGbbDfLc7tqShsFCQiguuA2W+Y/
M2OMe+042cUU8d1BdW8rMizQ7wN64LLh1U5AZT0xz164pUAbt7qdjNpTW4mvLp9qrAt1CmbUAjhZ
gxZ1ABULhV+YtgEYqpFplpJEjtrunxsyglHS4yp9DiIjI9iRRNo5jtZpor+zuXt1DTxQs5MQ3Bc7
ioRgGZR8rN1yMjmppPD00cYzeWbXDWwultldi7RmMSE/d2ghckhiD8vAOVyAAutLu7CyS9e8SSzi
aIRQxKwmHmPJ98sNhO8r91sYzznFS6TrMdhp5gF3qFlIsrSs1i203IIUBHbcNoXacHD48xuPWKw0
E3a27z30FssytKEZXZzCpIeQADbhQkh2lgTsOByM17TSxcWgup762s4WkaONpxId7KAWACKxGAy9
cfe4zzgAvwatp7eN5NauGuY7YXxvI1jiV3b95vCkFgBx3ycehrK8vT/7Q2fabn7F/wA9vs6+Z0/u
b8dePvdOfai10+a71aHTUaITSzrArbwybi23O5cgjPcZ9qfp1nHcm5mnLC3tIvOlVDh3G9UCqTwC
WdeT0GTg4wQC3q2qRt4judV0i6vImnlkmDsvkvGXZiVBVjkYOM5GcnipbrxZqlzqVldvczyLaNBI
kE8zSIZI1UbyMjliCT3+Y896ryaR513apZv+7vYGnt0lPz8Fx5fA+Zi0ZVcD5srwM4Bpekfa77SE
uX2Q6hdiHapxJs3KpcZGMEsQDzyjDtQAy6vLOPT3srATmOeVJ5TOBlCoYKi46gb2yxxu4+Vcc2L/
AFe3vLN9P2SiytM/2YCBvjy43bznncMseuGA27VyKbrFhHYxRA6LqunyOx2tey5DgdQB5Sc8jnP4
c1Uis410qW9uCwDsYrUKfvyKUL5/2Qrexyy4yA2AAgvI4dHvLUBhPPLEd6jjy1DllJ64LGM46ZQH
sKJbyNdKisrcMA7CW6LD78ilwmP9kK3scs2cgLi7/ZFvt+x75f7Q+yfbd+R5WzyvN2YxnOzndn73
y7cfPVWDSLi5ewEbxbLzdiQk7ItpIbecfLtADn0Ug96AHWt5ZyaellficRwSvPEYAMuWChkbPQHY
uGGdvPytnixb+IJrbU7zWov3erzTmSJ1UGOIPuMhAOeeQBnIwW74IfpOhSXenm/OnahqEbStCsVi
MFSoUlnbY2B8wAGOfm5G35sKgC7IdOm1IMDPbWbKCwSMSMjbRuCgsMruyBls7cZya1ZtW0seLLzV
EF5NaXa3JeNkWJ1MqSLtB3MMDePm+vy8YPO0UAaWoXVn/Z9rYWTzyxwyyzGWaIRklwg27QzcDywc
553dBjnNoooAKKKKACiiigAooooAKKKKACiiigAooooAKKKKACtXRntvJ1O2ubuK1+02gjjklVyu
4TRvg7FY9EPasqtXRktvJ1O5ubSK6+zWgkjjlZwu4zRpk7GU9HPegDKooooAKsWV7Pp90tzbMqyB
WX50VwQylWBVgQQQSOR3qvUttbTXdwsECbpGzxkAAAZJJPAAAJJPAAJNAGhrGsHVbbTY2jiR7aBo
38u3jiBYyO3GwDjBXj13HHJJr3l5HPYadbRBlFvEwlBGA0jSMSw9Tt8sZPPygdAKsalY2WntpjxT
S3UM8Hmysv7vcRK6EJkEgYTgkZ7kD7ofeW+ltogvbW3vLeRrnyoxNcrKJAFy/SNcFd0fXrv46HAB
Rn/s/wDs+0+z/aftvz/avM2+X1+TZjnp1z36Vet7jS18OXNnLcXgu5pUmCrbKUBRZAF3eYDg+YMn
bxjoax60hYWx8OS34uGe6S5iiMSrhUVllPJPVv3eeOACOSSQoBdsNW0+FtIurhrlbnS8eXDHErJN
tlaUZcsCmS+37rYxnnOBBY3GljR3tbi4vLWeSUmV4LZZRLHhSikmRSAGDHA4J2k/dGLul6Bb3lna
SNbX0yXGfOvYXAgsvnK/vBsOdoAc5ZflYdOpz9LtbK4RI3tb6/vZZGVbWzfYyqoB3co+7OW4GMbC
TnPABFbapPpv2iCzeKS2lkDYubWOTdt3BWKuGCnDHoe55NX7jW7W+1zW7icSxWup7lDxwrvjXzVk
U7AQCfkAPI6k5J6zWOgW8v8Aa3l219q/2O7SCP8As9wu9D5n7z7j8fIuMf3utZgi06PULoXUN5bw
wLhbR5B5zOCFKF9mFIyzcr/Dt6nNAFsatZw3tmkQne0t7OazMrIFdhL5uX2biMr5pwN3O3qM8V7u
eyktLLTbSeUwxzyStc3MXl4MgRSCqlzgCMHIJJyeOOZb3TbKy1O3S5+02kTwNNLbTHM0TDfiNjtG
C+1SCVGBIDggZJqunQ6Z9juDY3Ns7yMHsb9iXKrtIYkKh2tuK8AfcbB9ACvr00E+pg206zxpbW8X
mIrAMUhRGwGAOMqeoFZtaWvQwQamBbQLBG9tby+WjMQpeFHbBYk4yx6k1m0AFaujPbeTqdtc3cVr
9ptBHHJKrldwmjfB2Kx6Ie1ZVaujJbeTqdzc2kV19mtBJHHKzhdxmjTJ2Mp6Oe9AFSyvfs2+KWPz
rSXHnQk43Y6Mp/hYZOG9yCCCQZtPu7ew1GeQGWSEwXEKHYAx3xOikjJA5YE8nHPWq9nZyXkxRCqI
i75ZXOEjTuzH05A4ySSAASQDpaFpMWp6pMqNFJbQRyyAXEyQGTajsgILZwSo3bT8oJ+YdaAKWj3k
en63YXsoZo7e5jlcIMkhWBOPfiqVbGlaUNW8TRWDeRDG9yFkWO4QBULgERszHeeeMFifeq9zaPPq
7W0cVjbucYSK7Uwr8ueJGcj8268e1ADNYvI9Q1u/vYgyx3FzJKgcYIDMSM+/NGq3kd9dxyxBgq20
ERDDnKRIh/DKnHtV3xDpi2XiGTTbeO2jSOQwxstyrbwHKhpGLEIxxyPlA9BR4h0xdKNnbrHbDMCS
NLHcrK8jNGjNuCsQACxCkAZHOW60AUr68jubTTYkDBrW2MTlhwSZZH49sOPxzRLeRvolrZAN5kNz
NKxI4IdYgMe/yH9K0NX0gaXounsY7N5LhTJJMl2kjg75F2qFcgphAd2D82Ru7USaQLXwol+0dnJJ
cSsu83aF4kAjK7EV+WO9gwIJAA4XqQDPivI00S6siG8ya5hlUgcAIsoOff5x+tFjeR21pqUThi11
bCJCo4BEsb8+2EP44rQsNIDeHL3VHjs5WVljjWW7RSgKyFm2hw28FBtU9QT8rdjw/pAvYL29eOzm
W2i3JDcXaRKz70X5hvVguHJByBuAGT0IBU0+6s/7PurC9eeKOaWKYSwxCQgoHG3aWXg+YTnPG3oc
8F5dWeoasryvPDaLFHCHWIO5EcaoG27gMnaCRu4z1OOZtF0uO/iuZDa3l9JEyKtpZttkIbdmT7rf
Ku0A/L1ccjoYb2wtrXWmtXuGhgCq7l13vESgZoyBjLqSU52jcOdvOACxqV7pV34lub4C5ls7uSWR
1kjCPEzlsEAOQ20kNyRuxg4HNNm1OGzi0+LSpp2azuXuknmhVCHbywBs3MMDywck85xjjll3Y2Vl
4lvrGeaVbO1nmQN1dwhbauQMAsQF3YwM5xgYrQbQLc6to9tJbX1h9tuxBJa3TgzKm5B5gOxeDuYD
K9UPJ6AApahrj6holrZSRQLJFcyysYrWKIYZUC42Ac/K2fX5euBipfXkdzaabEgYNa2xicsOCTLI
/Hthx+Oat6xYR2MUQOi6rp8jsdrXsuQ4HUAeUnPI5z+HNS65oUmixGKXTtQSSOXynvZRtglPOQg2
dOODuOQM4GcAAis7rS7GNrmJ7w3bW0kJt2iXZl42jLeZuzj5iwGz/Zz/ABU7Sr3SrKawv3Fyl5ZS
LIYo4w6XDK5cEuXBTIwvCnG3PJOKlg0myudOklihvikUBd9RJxbrKI9/lFSnXJEY+fkkEDkLUWiQ
aPe3EFrd2t9vO5p54rtFVI1BZmCGMk7UBOM5OOOuKAIra50+fSYrG+muYPInkmR4IFl371QEEF1x
jyx653dsc17e4s7eaa4WBncN/o0U2HRevzOcDcRxxgAk5PA2tLa2dnHp6Xt+ZzHPK8EQgIyhUKWd
s9QN64UY3c/MuORtHkXUL3TfNVr+3laJIlGRMVJDBT/e4GBj5uQPmwrAE2gXEX/CTWl/qF+sSw3K
XMsswdzIQ4JHyqxLHk88e9Y9avhtLafxDY2t3aRXMNzPHAyyM67QzgFgVYHOM9cjnpWVQAVau7lZ
7awjV5WMEBjYOFAUmR2wuOSMMDzzkntiorWOGW7hjuJ/IheRVkl2FvLUnlsDk4HOK0Lu20+TSTfW
cNzbbZ1hCXE6y+blSSVIRcbcLnr/AKxencAyqu6reR313HLEGCrbQREMOcpEiH8Mqce1Uq2E0Ei2
kmnvoI2jtvtMkKq7OiMoMZPG3DFoxwxI3gkcHABds/EXk6XZ2/8Aaur2X2SNk+z2TbUmy7PktvGw
nftztbG0HnoMrS7u3txeQXRlWG7gELSRIHZMSI4IUkA8oB1HXPbBdDo5ktYZpb+ztnuFLQRTM4Mo
3Fc7gpRQWVh8zL0ycDms2gDS0C6s7HW7S9vXnWO2lSYCGIOXKsDt5ZcA4PPP0oiurOzj1WCB55o7
m2WKJ3iCHIljc7lDHA+RhwT2/CKx043kUs8l1Ba28TKjTTbyN7ZKrhFY5IVj0xx16Zu6dpEM7atD
JPbN5Fossdx5pEa5liG/1PyM3ykbsnG3dxQBSvryO5tNNiQMGtbYxOWHBJlkfj2w4/HNS6rcW2qa
3LdxStEt5KZZfOTAhZ2JIypYsoz97AJ/u02bSmiu4Ijd2xhnjMsd1uZYygLAt8wDcFWGNuSRwDkZ
lGhSy3dhDbXdtcx3s/2eKePeEEmVBBDKG43qc4xzxnBAAIrnTrWC3aSPWbG4cYxFEk4Zue26MD35
NM1i8j1DW7+9iDLHcXMkqBxggMxIz780+70pra0Nyl3bXKJIsUwgZj5TkEhSSAGztblCw+XryM2L
rQTZWt3LPfQGS1YRSQxq5KTFseWxIAzhZDuUsP3ZGeRkAu3HiL7Rpwj/ALV1eLFolt9gjbbAdsYj
zu39DjcRs5yVz/FVfStTsbSyEVzNePHuLS2LQpLDMfUMWBiYrhdyqWHJB52iv/YUvk/8fdt9r8j7
R9j+fzPL2eZnO3Z9z5sbs4468VSt7OS5gu5UKhbWISuGPJBdU498uPwzQBY0u7t7cXkF0ZVhu4BC
0kSB2TEiOCFJAPKAdR1z2wXafdWf9n3VhevPFHNLFMJYYhIQUDjbtLLwfMJznjb0OeC1s7OPT0vb
8zmOeV4IhARlCoUs7Z6gb1woxu5+ZcctutIuLR9RRnid9Pn8mZUJJ6su8cfdBUDJxy6jvQBauNXt
5H1IKkuyexhs4SQMnyzD8zDPGRETgZwTjJ61V0XUf7L1RLrdKmI5I/MiOHj3oyb15HK7sgZGcdR1
pklgtrqQtL24WEKoaVlUuUJUMU28fOPukHADDBIGTVi7sbKy8S31jPNKtnazzIG6u4QttXIGAWIC
7sYGc4wMUAP1W/XUprWJtX1C7VWOZ9QJxGGwOEDOQBjJIJJ4GOOa9zp1rBbtJHrNjcOMYiiScM3P
bdGB78mrGtaXHYRW0gtbyxklZ1a0vG3SALtxJ91flbcQPl6oeT0FSWzjt9KinmLC4uWDwIp48oF1
Zm9ywAHP8LZA+UkA3b/xLb3Onanbrdam6XkYENo5At7TEqPsUbjkAKVBAXAH3fm+XC1W8jvruOWI
MFW2giIYc5SJEP4ZU49qu3ukW8A1C2ieX7Zpf/H0zkeXL+8WNtgxlcMwAzncMn5SNpq/2Rcebt3x
eX9k+1+dk+Xs25xux13fu/8Af+XNAG/pUttJ4dhtZZ4lB8wTXf2mGOW1RjgoEdTI6gZfEZXd5rLy
c1kaPf21nFKst5eW5dhvSO3juYZgOgeN2UZU5IJ3dRgKRkw2NhbXOk6ldSXDC4tohJHCi8Y8yNCW
J7fPwBzkHOMDdNpNjbXkRH2HUNRuyzE29k20xINvzk+W+QSxHbG3nO4UAGm6/NpGt/bdP8+2tDcr
K1nHcMA6K2RGx/iGCRkg9TTIdauJ5pxqlzc3UdzALaSV5DJIiB1cFdx5wyg44yMjIzkUr+3jtNQu
baKdbiOGVo0mTpIASAw5PB69TVegDVkvrKW7tUlhlksbKBoolbhpTl3BfB4Bd+QDkLwCSNxNS1Zd
YuIb7UBLLfNIRdum1BLGAoXHBCtjcvTGApwTnOVRQBq3Nzp8Gky2NjNcz+fPHM7zwLFs2K4AADtn
PmH0xt754r6neR3VyqW4ZbO3UxWquPmEe4sN3+0SxY9sk4wMAUqKANr+17fb9s2S/wBofZPsWzA8
rZ5Xlb85znZxtx975t2PkosNXt7OzTT9kpsrvH9pgAb5MOduw542jDDplid25cCsWigDVtrnT59J
isb6a5g8ieSZHggWXfvVAQQXXGPLHrnd2xzU1O9/tLVry+8vy/tM7zbM527mJxnv1qrRQAUUUUAF
FFFABRRRQAUUUUAFFFFABRRRQAUUUUAFFFFABRRRQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8HYr
Hoh7VlVq6Mlt5Op3NzaRXX2a0EkccrOF3GaNMnYyno570AZVFFFABViyvZ9PulubZlWQKy/OiuCG
UqwKsCCCCRyO9V6ltraa7uFggTdI2eMgAADJJJ4AABJJ4ABJoA0NY1g6rbabG0cSPbQNG/l28cQL
GR242AcYK8eu445JNe8vI57DTraIMot4mEoIwGkaRiWHqdvljJ5+UDoBVjUrGy09tMeKaW6hng82
Vl/d7iJXQhMgkDCcEjPcgfdD7y30ttEF7a295byNc+VGJrlZRIAuX6Rrgruj69d/HQ4AKM/9n/2f
afZ/tP235/tXmbfL6/Jsxz06579KvW9xpa+HLmzluLwXc0qTBVtlKAosgC7vMBwfMGTt4x0NY9aQ
sLY+HJb8XDPdJcxRGJVwqKyynknq37vPHABHJJIUAu2GrafC2kXVw1ytzpePLhjiVkm2ytKMuWBT
Jfb91sYzznAgsbjSxo72txcXlrPJKTK8FssoljwpRSTIpADBjgcE7SfujF3S9At7yztJGtr6ZLjP
nXsLgQWXzlf3g2HO0AOcsvysOnU5+l2tlcIkb2t9f3ssjKtrZvsZVUA7uUfdnLcDGNhJzngAr26a
W00yXM94kYb9zNHCrEjnhkLDBPByGOMEYOcjSi1bT21ea5la5hCWkUFpcRxK8kbxrGgk2lgASqN3
O0sCCSAamsdAt5f7W8u2vtX+x3aQR/2e4Xeh8z959x+PkXGP73Wspv7Pt9TnS5sL5YUyn2c3KrKj
jAO5jHjqDxtH6cgAP7Kh1BMPc3Vm0bK7SQiJ0YggMFDkNtJDYLDOMHA5qW7nspLSy020nlMMc8kr
XNzF5eDIEUgqpc4AjByCScnjjm0+k6fJq9jbQrcxCa08+S3eVZJC5VnSNWCgEuvl44ODJjBIxUWq
6dDpn2O4Njc2zvIwexv2Jcqu0hiQqHa24rwB9xsH0AK+vTQT6mDbTrPGltbxeYisAxSFEbAYA4yp
6gVm1pa9DBBqYFtAsEb21vL5aMxCl4UdsFiTjLHqTWbQAVq6M9t5Op21zdxWv2m0EcckquV3CaN8
HYrHoh7VLcWWlWii0nNyt0bRLgXPmAx7niEqp5YTPO4Jnd1+bpxUVpoUt3Fbf6XbRT3f/Hrbyb98
3zFBghSoyylfmYdMnA5oAqWV79m3xSx+daS486EnG7HRlP8ACwycN7kEEEgzafd29hqM8gMskJgu
IUOwBjvidFJGSBywJ5OOetFppTXNoLl7u2tkeRooROzDzXABKggELjcvLlR83Xg4u3+ixDW9YSOe
CysLS8eBXmLsFJZ9ifKGYnCNzjHy8nJGQDP0e8j0/W7C9lDNHb3McrhBkkKwJx78VSq6+mSRarFY
STQIZGj2zu+I9jgFXJIyF2sDyMgdQDxVe6g+y3c1v5sUvlSMnmRNuR8HGVPcHsaALGsXkeoa3f3s
QZY7i5klQOMEBmJGffmjVbyO+u45YgwVbaCIhhzlIkQ/hlTj2qrEYxMhlVnjDDeqNtJHcA4OD74P
0rVvLfS20QXtrb3lvI1z5UYmuVlEgC5fpGuCu6Pr138dDgAqX15Hc2mmxIGDWtsYnLDgkyyPx7Yc
fjmiW8jfRLWyAbzIbmaViRwQ6xAY9/kP6VdvdIt4BqFtE8v2zS/+PpnI8uX94sbbBjK4ZgBnO4ZP
ykbTi0AXYryNNEurIhvMmuYZVIHACLKDn3+cfrRY3kdtaalE4YtdWwiQqOARLG/PthD+OKt6Vo8d
1HJLeStCrW08tuij5pTHG7Z56ICmCe54HRisuk6RHeaebkafqGpSGVo2hsX2mEAKQz/I/DbiBwPu
N17AFTT7qz/s+6sL154o5pYphLDEJCCgcbdpZeD5hOc8behzwXl1Z6hqyvK88NosUcIdYg7kRxqg
bbuAydoJG7jPU45q39vHaahc20U63EcMrRpMnSQAkBhyeD16mrWi6fHqFzMjxzztHF5iW1ucSznc
o2qcNyAxY/KeEPTqACxqV7pV34lub4C5ls7uSWR1kjCPEzlsEAOQ20kNyRuxg4HNRXd3p4tLKxtz
c3FtDPJPI8iLC7bwgKgAuBgR/e5+90450E0K3/tyK0MMsJmsbiY2t3KFe3kWOXaHbCgcor8hRhhn
I5OPfacbOKKeO6gureVmRZod4G9cFlw6qcgMp6Y569cAFi5udPg0mWxsZrmfz545neeBYtmxXAAA
ds58w+mNvfPFq/1bT5m1e6t2uWudUz5kMkSqkO6VZThwxL4KbfurnOeMYOBWxcaPHaaJPcTSt9vh
uYopIAOIg6yHDH+/8gyP4eh+bIUAdb3ulWim7gFyt0bR7c23lgx7niMTP5hfPO4vjb1+XpzVKwvI
7SC/DBvOntvKhdRyhLoW57AoHU46hsdCau/2Rb7fse+X+0Psn23fkeVs8rzdmMZzs53Z+98u3Hz1
i0AaVreWcmnpZX4nEcErzxGADLlgoZGz0B2Lhhnbz8rZ4G1OOfUL3VLmFZLyaVpY49mYVdiSWIJO
QOynIOeSQCrGj2FtfyXK3Fw0bR200sUaLkuyRO4yegUbOe/IAHJKtsLS3a0nv7wSvbQSJEYoXCO7
uGI+YghQAjEnB7DHOQAWNAuIv+EmtL/UL9YlhuUuZZZg7mQhwSPlViWPJ5496x6tajZfYLwwCTzE
MccqNjBKOgdcjnBwwyMnBzyetRW1tNd3CwQJukbPGQAABkkk8AAAkk8AAk0ARVau7lZ7awjV5WME
BjYOFAUmR2wuOSMMDzzkntitC40iyS90aCK//c30amW6kXaiHznjLAHB2gLnnBPU7c4EuraH9k0l
r7+y9T07ZOkOy+O7zdyucqdiY27Oeudw6Y5AMCuvnltp/DUcLTxKq2g33a3MIklYDcsTxbfOYBts
Y+baAivjArkK0pNHMNsHlv7NLkxCYWrM4fYVDg7tuzJUhgN2ecfe4oAt6VqdjaWQiuZrx49xaWxa
FJYZj6hiwMTFcLuVSw5IPO0Z9nY29zEXl1WztGDY2TLKSR6/IjDH454q7okGj3txBa3drfbzuaee
K7RVSNQWZghjJO1ATjOTjjrisWgDYgksILa80u5vGeCSWKZbq0hLglFcbdrlDj94efVehzkQxXln
bR6rDAJzHc2ywxM4GSRLG5ZgOgOxuBnGQMnrUVjpxvIpZ5LqC1t4mVGmm3kb2yVXCKxyQrHpjjr0
zLb6ZH9puzPMsttZxCeRrd8mVCyqoUkcEl1zkZUZyuRtIBYt9Xt4ptJdklH2S0kgd1A3I7PKVkTn
kr5isORyvUda0JPEtu13obyXWp339nXxuJJ7sgu6ExHCjcduNjDBY+uecDC1GzjtjbTQFjb3cXnR
K5y6DeyFWI4JDI3I6jBwM4D9CtobzxDplrcJvhmu4o5FyRlS4BGRz0NADIryNNEurIhvMmuYZVIH
ACLKDn3+cfrXReIpba50xys8QCbDHLHcwub5xhd8kaKJFYgu+ZC20ll6tWBd6U1taG5S7trlEkWK
YQMx8pyCQpJADZ2tyhYfL15GbV7pFvANQtonl+2aX/x9M5Hly/vFjbYMZXDMAM53DJ+UjaQC3ceI
vtGnCP8AtXV4sWiW32CNtsB2xiPO7f0ONxGznJXP8VUtO8S6pplhc2kF9eJHJF5cSpcMohPmK5ZQ
O5ww4x94/jj1v6DoKXmo6V9ruLZY7udCttI7K80XmbWIYDaPuuMFgx28DlcgFWLUbe6tPs+qtcvs
nkuVkiIZ5XcKHVix4zsX5+cc5Vs8S2/iCa21O81qL93q805kidVBjiD7jIQDnnkAZyMFu+CH+H9I
F7Be3rx2cy20W5Ibi7SJWfei/MN6sFw5IOQNwAyehwqALsh06bUgwM9tZsoLBIxIyNtG4KCwyu7I
GWztxnJq7qV7pV34lub4C5ls7uSWR1kjCPEzlsEAOQ20kNyRuxg4HNUtMgtp7lluVnlO0CK3t+Hn
csFCKdrYPJPQ5246kVdvdNsrLU7dLn7TaRPA00ttMczRMN+I2O0YL7VIJUYEgOCBkgDZrrS/K0+w
V7yWyhuXmmlMSxyEP5YZVXcwyBHkEnkt0GOc+9vJL66aeQKp2qiog4RFUKqjPOAoA5JPHJJ5rQ1r
S47CK2kFreWMkrOrWl426QBduJPur8rbiB8vVDyegx6ANq91e3nGoXMSS/bNU/4+lcDy4v3iyNsO
ctllBGcbRkfMTuB/a9v/AGd/ZGyX+zPL87bgeZ9r8vHmZz03fLjps5xu5qvNo5jtZpor+zuXt1DT
xQs5MQ3Bc7ioRgGZR8rN1yMjmrH9kW+37Hvl/tD7J9t35HlbPK83ZjGc7Od2fvfLtx89ADdIuNLg
sNQivbi8jkuohCBDbLIFAkjfdkyLz8hGMd857VXtP7KltBHevc28yyM3mwQiXzFIGFILqF2kE5Gc
7u2BnPrY0DR49T1C0W8laCzmuUt96jLyOxA2oD3GQSeigjOSVVgCjqd7/aWrXl95fl/aZ3m2Zzt3
MTjPfrVWtjRdLjv4rmQ2t5fSRMiraWbbZCG3Zk+63yrtAPy9XHI6Gpq1nHYalJbxFtoVGKuctGWU
MY26fMpJU8DlTwOgAKVFWLKzkvrpYIyqnazs7nhEVSzMcc4CgngE8cAnipbi0gs5oX+1wX1uzfMb
V2Q8YyvzqCpwRg7SOeM4IABSorYvLfS20QXtrb3lvI1z5UYmuVlEgC5fpGuCu6Pr138dDjHoAKK0
pNHMNsHlv7NLkxCYWrM4fYVDg7tuzJUhgN2ecfe4osbC2udJ1K6kuGFxbRCSOFF4x5kaEsT2+fgD
nIOcYG4AzaKK0NMhspd32mO5uZmkSOG1tm2PIWzkhirDghRtxk7xjocgGfRW/Holu2t3VoqXNx5M
CypZxMBO7ts3Q52n5k3tu+X/AJZtwvarrul/2XNa/wCj3Nt9og877Pdf6yL52TBO1c52bvujhgOc
ZIBlUVd0ezj1DW7CylLLHcXMcTlDggMwBx781b1iwjsYogdF1XT5HY7WvZchwOoA8pOeRzn8OaAM
eiul1TQLezs7uRba+hS3x5N7M4MF784X92NgxuBLjDN8qnr1HNUAFFb+j6TZajFDH5N9M7Y+03cR
2w2ILFQZAUOQAu8ncowccYJqDQNHj1PULRbyVoLOa5S33qMvI7EDagPcZBJ6KCM5JVWAMeiirFna
rdTFXuYLZFXc0sxOAOnRQWJyRwAT36AkAFeitVNEY6hDAbmJ7eWB7lbiIMQ0SBy5VWCnP7twAduS
OuDmor+0t1tIL+zEqW08jxCKZw7o6BSfmAAYEOpBwO4xxkgGfRRWxcaPHaaJPcTSt9vhuYopIAOI
g6yHDH+/8gyP4eh+bIUAx6K6X+wLf+zvN+zX3l/ZPP8A7T3j7Nv8vf5eNnXd+6+/97t/DXNUAFFF
FABWroz23k6nbXN3Fa/abQRxySq5XcJo3wdiseiHtWVRQAUVu6TpEd5p5uRp+oalIZWjaGxfaYQA
pDP8j8NuIHA+43XszTtIsrnxYulTX+61+1+Qs8K5MwLhAU6gZznJOAM9TgEAxasWV7Pp90tzbMqy
BWX50VwQylWBVgQQQSOR3qvViys5L66WCMqp2s7O54RFUszHHOAoJ4BPHAJ4oAu6xrB1W202No4k
e2gaN/Lt44gWMjtxsA4wV49dxxySa95eRz2GnW0QZRbxMJQRgNI0jEsPU7fLGTz8oHQCnvp9tFd2
qPqltJayyBZLiBXbyhkbiUYK3AOemD0ByDixfWNmdHTULa2vLRWlEca3MokFwCG3MhCJwhUA9eXH
TuAUZ/7P/s+0+z/aftvz/avM2+X1+TZjnp1z36Vet7jS18OXNnLcXgu5pUmCrbKUBRZAF3eYDg+Y
Mnbxjoax60o9HM1sXiv7N7kRGY2qs5fYFLk7tuzIUFiN2eMfe4oAu2GrafC2kXVw1ytzpePLhjiV
km2ytKMuWBTJfb91sYzznAgsbjSxo72txcXlrPJKTK8FssoljwpRSTIpADBjgcE7SfujDrLSLeca
fbSvL9s1T/j1ZCPLi/eNGu8Yy2WUg4xtGD8xO0YtAF23TS2mmS5nvEjDfuZo4VYkc8MhYYJ4OQxx
gjBzkGq3kd9f+bEGEaRRQoXGCwjjVAxHYnbnGTjOMnrT7C0t2tJ7+8Er20EiRGKFwju7hiPmIIUA
IxJwewxzkFxp8Npqgtp7rbbmNJhN5ZLeW6CRflB+8QwGM4z/ABY+agC02r27eJ7u/CSizn8+JFwN
0UUiNGoC5x8isMLkD5QMgc1Fdz2UlpZabaTymGOeSVrm5i8vBkCKQVUucARg5BJOTxxzX1izj0/W
7+yiLNHb3MkSFzkkKxAz78VFYWcmoahbWURVZLiVYkLnABYgDPtzQBa16aCfUwbadZ40treLzEVg
GKQojYDAHGVPUCs2tK6s7OTT3vbAziOCVIJRORlywYq646A7Gypzt4+Zs8WL3SLeAahbRPL9s0v/
AI+mcjy5f3ixtsGMrhmAGc7hk/KRtIAXF7pV2ou5xctdC0S3Ft5YEe5IhEr+YHzxtD429fl6c1LY
atp8LaRdXDXK3Ol48uGOJWSbbK0oy5YFMl9v3WxjPOcDArastIt5xp9tK8v2zVP+PVkI8uL940a7
xjLZZSDjG0YPzE7QARW13p8mkxWd8blfs88k6CBFPm71QFSSRs/1Y+bDfe6cc6SeJI/7Q1p4rzUN
Pjv7z7Sk9qMyAAyYRl3rwfMyfm4Kjg5yMq1s7OPT0vb8zmOeV4IhARlCoUs7Z6gb1woxu5+Zcc1b
+zk0/ULmylKtJbytE5Q5BKkg49uKALFxewz62Lu4NzfQiRDJ9qlPmTquAcsMlcgep2ggZOMmpdSQ
y3c0lvB5ELyM0cW8t5ak8Lk8nA4zT7OzkvJiiFURF3yyucJGndmPpyBxkkkAAkgHVfR7MeI4rCKW
eS3e2jnTcAskpaAShAOQGZjtA+bBYfe7gGFV28vI57DTraIMot4mEoIwGkaRiWHqdvljJ5+UDoBV
vWtLjsIraQWt5YySs6taXjbpAF24k+6vytuIHy9UPJ6DHoA2r3V7ecahcxJL9s1T/j6VwPLi/eLI
2w5y2WUEZxtGR8xO4YtdLqmgW9nZ3ci219Clvjyb2ZwYL35wv7sbBjcCXGGb5VPXqIdQ0FA6G1uL
ZZDYw3ItN7NIw8hXkbOCq/xthmBwOByuQBukeJbjTwkUqwSQRW08MW60id1Lo+BuZc7d75IzjBIw
elV7a50+fSYrG+muYPInkmR4IFl371QEEF1xjyx653dsc5VbGk2NteREfYdQ1G7LMTb2TbTEg2/O
T5b5BLEdsbec7hQBR1O9/tLVry+8vy/tM7zbM527mJxnv1qbS7u3txeQXRlWG7gELSRIHZMSI4IU
kA8oB1HXPbBsf2ZZ202ozTzNd2Vnci2U2zhDOW37WDEMFXEbHOG7DvuFHUbL7BeGASeYhjjlRsYJ
R0Drkc4OGGRk4OeT1oA1YtX0+31CzGy5msrWxntN2Fjkl8wSnOMsF5lx1bgZx2qlqF1Z/wBn2thZ
PPLHDLLMZZohGSXCDbtDNwPLBznnd0GOaVtbTXdwsECbpGzxkAAAZJJPAAAJJPAAJNatxpFkl7o0
EV/+5vo1Mt1Iu1EPnPGWAODtAXPOCep25wADFrduvEtxf6PfWt2sDXFzcxTGRLSJCQok3Esqg7iW
XnrjdzyQTVtIjs9P+0nT9Q02QSrGsN8+4zAhiWX5E4XaAeD99enfCoA2v7Xt9v2zZL/aH2T7FswP
K2eV5W/Oc52cbcfe+bdj5Kxa0pNHMNsHlv7NLkxCYWrM4fYVDg7tuzJUhgN2ecfe4qxokGj3txBa
3drfbzuaeeK7RVSNQWZghjJO1ATjOTjjrigCLQrnT7S5mmvprlMwSwoIIFkz5kboScuuMbge+fam
Wt1ZpFd6fO8/2KaVJVnSIGRSm8KShbBBDtkbuCQcnGGzau2OnG8ilnkuoLW3iZUaabeRvbJVcIrH
JCsemOOvTIAzUb37feGcR+WgjjiRc5IRECLk8ZOFGTgZOeB0ptlez6fdLc2zKsgVl+dFcEMpVgVY
EEEEjkd6lSOzs72SO9VruEL8rWlwEDZwQ2SjcY7EAg9cEEVLrNrZ2s1qLRJ4jJbJLLFNKJChbJX5
gqggoUbpxux1FAD9Y1g6rbabG0cSPbQNG/l28cQLGR242AcYK8eu445JLLq6s4dPewsHnmjllSaW
WeIRnKBgqqoZuPnYkk85HAx81CKNppkiUqGdgoLsFGT6k8Ae54rX8Q6YulGzt1jthmBJGljuVleR
mjRm3BWIABYhSAMjnLdaAMWti8utLvo1uZXvBdrbRwi3WJdmUjWMN5m7OPlDEbP9nP8AFUur6QNL
--089e013cc30aa5701904f2544bed
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--089e013cc30aa5701904f2544bed--


From xen-devel-bounces@lists.xen.org Fri Feb 14 02:09:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 02:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WE8Dj-00049n-AL; Fri, 14 Feb 2014 02:09:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ctakemura@axcient.com>) id 1WE8Dh-00049c-RW
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 02:09:38 +0000
Received: from [85.158.143.35:33232] by server-2.bemta-4.messagelabs.com id
	18/BD-10891-1EA7DF25; Fri, 14 Feb 2014 02:09:37 +0000
X-Env-Sender: ctakemura@axcient.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392343775!5578264!1
X-Originating-IP: [208.65.145.78]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA4LjY1LjE0NS43OCA9PiAyMzIxNzI=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10147 invoked from network); 14 Feb 2014 02:09:36 -0000
Received: from p02c12o145.mxlogic.net (HELO p02c12o145.mxlogic.net)
	(208.65.145.78)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 02:09:36 -0000
Received: from unknown [12.250.146.126] (EHLO exchange.axcient.com)
	by p02c12o145.mxlogic.net(mxl_mta-7.2.4-1)
	with ESMTP id eda7df25.0.210191.00-371.534652.p02c12o145.mxlogic.net
	(envelope-from <ctakemura@axcient.com>); 
	Thu, 13 Feb 2014 19:09:36 -0700 (MST)
X-MXL-Hash: 52fd7ae03233948f-8cfb283fd4906dfe215b581f4feb7c88be077973
Received: from TESLA3.axcient.inc ([fe80::546:f8ef:fd9d:516e]) by
	tesla3.axcient.inc ([fe80::546:f8ef:fd9d:516e%10]) with mapi;
	Thu, 13 Feb 2014 18:09:34 -0800
From: Chris Takemura <ctakemura@axcient.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 18:09:33 -0800
Thread-Topic: Use of watch_pipe in xs_handle structure
Thread-Index: Ac8pKc3FkPbfL7hwTeS1yLb6lboPTw==
Message-ID: <CF22BADD.4AD%ctakemura@axcient.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Microsoft-MacOutlook/14.3.9.131030
acceptlanguage: en-US
MIME-Version: 1.0
X-AnalysisOut: [v=2.0 cv=Bo0qN/r5 c=1 sm=1 a=xqQDggKg7E0CnudaAZgsZA==:17 a]
X-AnalysisOut: [=LEH2DxGZ4DAA:10 a=sq_w24qPQy8A:10 a=Xq3sQncxG94A:10 a=BLc]
X-AnalysisOut: [eEmwcHowA:10 a=kj9zAlcOel0A:10 a=xqWC_Br6kY4A:10 a=kxSA2Y8]
X-AnalysisOut: [aAAAA:8 a=hmgVrc31OLoA:10 a=imiyPv79uIGjPeqrdS8A:9 a=CjuIK]
X-AnalysisOut: [1q_8ugA:10]
X-Spam: [F=0.5000000000; CM=0.500; MH=0.500(2014021319); S=0.200(2010122901)]
X-MAIL-FROM: <ctakemura@axcient.com>
X-SOURCE-IP: [12.250.146.126]
Subject: [Xen-devel] Use of watch_pipe in xs_handle structure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This message was also posted to the qemu-devel list, but I didn't get any
reply, and it occurred to me that it might make more sense here.  Sorry if
you're reading it twice.

Anyway, I'm trying to debug a problem that causes qemu-dm to lock up with
Xen HVM domains.  We're using the qemu version that came with Xen 3.4.2.
I know it's old, but we're stuck with it for a little while yet.

I think the hang is related to thread synchronization and the xenstore,
but I'm not sure how it all fits together. In particular, I don't
understand the lines in xs.c that handle the watch_pipe, e.g.:

        /* Kick users out of their select() loop. */
        if (list_empty(&h->watch_list) &&
            (h->watch_pipe[1] != -1))
            while (write(h->watch_pipe[1], body, 1) != 1)
                continue;


It looks to me like the other thread blocks while reading from the pipe,
and the write allows it to continue.  But this code seems to do the same
thing as the condvar_signal call that comes slightly after, and therefore
it seems like I could safely #ifndef USE_PTHREAD it out.  Is this the
case?



From xen-devel-bounces@lists.xen.org Fri Feb 14 05:41:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 05:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEBVr-0000R6-LG; Fri, 14 Feb 2014 05:40:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WEBVq-0000R1-Pa
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 05:40:34 +0000
Received: from [85.158.139.211:12896] by server-3.bemta-5.messagelabs.com id
	D6/6C-13671-15CADF25; Fri, 14 Feb 2014 05:40:33 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392356429!3745593!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
From xen-devel-bounces@lists.xen.org Fri Feb 14 05:41:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 05:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEBVr-0000R6-LG; Fri, 14 Feb 2014 05:40:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WEBVq-0000R1-Pa
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 05:40:34 +0000
Received: from [85.158.139.211:12896] by server-3.bemta-5.messagelabs.com id
	D6/6C-13671-15CADF25; Fri, 14 Feb 2014 05:40:33 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392356429!3745593!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14771 invoked from network); 14 Feb 2014 05:40:32 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-5.tower-206.messagelabs.com with SMTP;
	14 Feb 2014 05:40:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,843,1384272000"; 
   d="scan'208";a="9525750"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 14 Feb 2014 13:36:36 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1E5eM5M029905;
	Fri, 14 Feb 2014 13:40:22 +0800
Received: from [10.167.226.103] ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014021413382207-1834962 ;
	Fri, 14 Feb 2014 13:38:22 +0800 
Message-ID: <52FDACDC.2010607@cn.fujitsu.com>
Date: Fri, 14 Feb 2014 13:42:52 +0800
From: Lai Jiangshan <laijs@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.2.9) Gecko/20100921 Fedora/3.1.4-1.fc14 Thunderbird/3.1.4
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>	<1392215531.13563.79.camel@kazak.uk.xensource.com>
	<21243.35619.819765.162321@mariner.uk.xensource.com>
In-Reply-To: <21243.35619.819765.162321@mariner.uk.xensource.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/14 13:38:22,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/14 13:38:24,
	Serialize complete at 2014/02/14 13:38:24
Cc: xen-devel@lists.xen.org, Ian Campbell <Ian.Campbell@citrix.com>,
	george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/12/2014 10:54 PM, Ian Jackson wrote:
> Ian Campbell writes ("Re: [PATCH] xl: suppress suspend/resume functions on platforms which do not support it."):
>> On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
>>> ARM does not (currently) support migration, so stop offering tasty looking
>>> treats like "xl migrate".
> 
>>> Other than the additions of the #define/#ifdef there is a tiny bit of code
>>> motion ("dump-core" in the command list and core_dump_domain in the
>>> implementations) which serves to put ifdeffable bits next to each other.
> 
> I'm not a huge fan of #ifdef but this is tolerable, I think.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Also

Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>

Thanks,
Lai

> 
> I think this should go into 4.4.  It is essential that we start
> advertising lack-of-resume in 4.4 as otherwise in 4.5 we'll have to
> invent a new HAVE_HAVE_NO_SUSPEND_RESUME which tells you whether the
> lack of HAVE_NO_SUSPEND_RESUME means that you can definitely
> suspend/resume.
> 
> Ian.
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 06:07:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 06:07:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEBvQ-00011o-LA; Fri, 14 Feb 2014 06:07:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEBvP-00011j-W9
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 06:07:00 +0000
Received: from [85.158.137.68:60019] by server-13.bemta-3.messagelabs.com id
	83/34-26923-282BDF25; Fri, 14 Feb 2014 06:06:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392358016!1795184!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13799 invoked from network); 14 Feb 2014 06:06:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 06:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,843,1384300800"; d="scan'208";a="100694051"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 06:06:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 01:06:54 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEBvK-0005QB-K4;
	Fri, 14 Feb 2014 06:06:54 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEBvK-00033p-AQ;
	Fri, 14 Feb 2014 06:06:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24869-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Feb 2014 06:06:54 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24869: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5821857787007705622=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5821857787007705622==
Content-Type: text/plain

flight 24869 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24869/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  4 xen-install       fail REGR. vs. 24797

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install         fail like 24786
 test-amd64-amd64-xl-win7-amd64  7 windows-install              fail like 24797
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24797
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24797

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                29b5f720990fafc302a034468455426dd662e101
baseline version:
 linux                1569265782ef26ed77ce45ebeb0676f11d4c114a

------------------------------------------------------------
People who touched revisions under test:
  "David S. Miller" <davem@davemloft.net>
  Akash Goel <akash.goel@intel.com>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Al Viro <viro@ZenIV.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Antti Palosaari <crope@iki.fi>
  Ben Skeggs <bskeggs@redhat.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Boaz Harrosh <bharrosh@panasas.com>
  Borislav Petkov <bp@suse.de>
  Brecht Machiels <brecht@mos6581.org>
  Brennan Shacklett <bpshacklett@gmail.com>
  Brian Norris <computersforpeace@gmail.com>
  Chris Ball <cjb@laptop.org>
  Chris Metcalf <cmetcalf@tilera.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Lameter <cl@linux.com>
  Chuck Anderson <chuck.anderson@oracle.com>
  Dan Duval <dan.duval@oracle.com>
  Daniel Santos <daniel.santos@pobox.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Rientjes <rientjes@google.com>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Paris <eparis@redhat.com>
  Florian Fainelli <florian@openwrt.org>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Ira Weiny <ira.weiny@intel.com>
  James Ralston <james.d.ralston@intel.com>
  Jiri Slaby <jslaby@suse.cz>
  Joe Thornber <ejt@redhat.com>
  Johannes Weiner <hannes@cmpxchg.org>
  John Stultz <john.stultz@linaro.org>
  Jonas Gorski <jogo@openwrt.org>
  Josh Triplett <josh@joshtriplett.org>
  Kamil Debski <k.debski@samsung.com>
  Len Brown <len.brown@intel.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Ludovic Desroches <ludovic.desroches@atmel.com>
  Luiz Capitulino <lcapitulino@redhat.com>
  Maarten Lankhorst <maarten.lankhorst@canonical.com>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Olšák <marek.olsak@amd.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@linaro.org>
  Mauro Carvalho Chehab <m.chehab@samsung.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Michael Grzeschik <m.grzeschik@pengutronix.de>
  Michel Dänzer <michel.daenzer@amd.com>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@suse.cz>
  Mikulas Patocka <mpatocka@redhat.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nell Hardcastle <nell@spicious.com>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Olivier Grenie <olivier.grenie@parrot.com>
  Patrick Boettcher <pboettcher@kernellabs.com>
  Paul Moore <pmoore@redhat.com>
  Pekka Enberg <penberg@kernel.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ray Jui <rjui@broadcom.com>
  Richard Guy Briggs <rgb@redhat.com>
  Roland Dreier <roland@purestorage.com>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Seth Heasley <seth.heasley@intel.com>
  Seungwon Jeon <tgih.jun@samsung.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Steven Rostedt <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Todd Previte <tprevite@gmail.com>
  Trond Myklebust <trond.myklebust@primarydata.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Wanlong Gao <gaowanlong@cn.fujitsu.com>
  Weston Andros Adamson <dros@netapp.com>
  Weston Andros Adamson <dros@primarydata.com>
  Wolfram Sang <wsa@the-dreams.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2210 lines long.)


--===============5821857787007705622==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5821857787007705622==--

From xen-devel-bounces@lists.xen.org Fri Feb 14 08:05:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 08:05:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEDlt-00046Q-V5; Fri, 14 Feb 2014 08:05:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <srivatsa.bhat@linux.vnet.ibm.com>)
	id 1WEDlr-00046K-U2
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 08:05:16 +0000
Received: from [193.109.254.147:26701] by server-6.bemta-14.messagelabs.com id
	E7/E5-03396-B3ECDF25; Fri, 14 Feb 2014 08:05:15 +0000
X-Env-Sender: srivatsa.bhat@linux.vnet.ibm.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392365112!4308637!1
X-Originating-IP: [122.248.162.5]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTIyLjI0OC4xNjIuNSA9PiAzNDgzOTQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6486 invoked from network); 14 Feb 2014 08:05:14 -0000
Received: from e28smtp05.in.ibm.com (HELO e28smtp05.in.ibm.com) (122.248.162.5)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 08:05:14 -0000
Received: from /spool/local
	by e28smtp05.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<srivatsa.bhat@linux.vnet.ibm.com>; Fri, 14 Feb 2014 13:35:11 +0530
Received: from d28dlp01.in.ibm.com (9.184.220.126)
	by e28smtp05.in.ibm.com (192.168.1.135) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Fri, 14 Feb 2014 13:35:08 +0530
Received: from d28relay01.in.ibm.com (d28relay01.in.ibm.com [9.184.220.58])
	by d28dlp01.in.ibm.com (Postfix) with ESMTP id BBFAFE0056
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 13:38:29 +0530 (IST)
Received: from d28av02.in.ibm.com (d28av02.in.ibm.com [9.184.220.64])
	by d28relay01.in.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1E850aE50397276
	for <xen-devel@lists.xenproject.org>; Fri, 14 Feb 2014 13:35:00 +0530
Received: from d28av02.in.ibm.com (localhost [127.0.0.1])
	by d28av02.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1E856vI031023
	for <xen-devel@lists.xenproject.org>; Fri, 14 Feb 2014 13:35:07 +0530
Received: from srivatsabhat.in.ibm.com ([9.78.204.74])
	by d28av02.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1E84tT3030547; Fri, 14 Feb 2014 13:35:03 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
To: paulus@samba.org, oleg@redhat.com, mingo@kernel.org, rusty@rustcorp.com.au,
	peterz@infradead.org, tglx@linutronix.de, akpm@linux-foundation.org
Date: Fri, 14 Feb 2014 13:29:36 +0530
Message-ID: <20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
References: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
User-Agent: StGIT/0.14.3
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14021408-8256-0000-0000-00000B767D8E
Cc: linux-arch@vger.kernel.org, ego@linux.vnet.ibm.com, walken@google.com,
	linux@arm.linux.org.uk, rjw@rjwysocki.net, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	"Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>,
	tj@kernel.org, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	paulmck@linux.vnet.ibm.com, Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [PATCH v2 46/52] xen,
	balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).

Interestingly, the balloon code in xen can actually prevent double
initialization and hence can use the following simplified form of callback
registration:

	register_cpu_notifier(&foobar_cpu_notifier);

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	put_online_cpus();

A hotplug operation that occurs between registering the notifier and calling
get_online_cpus() won't disrupt anything, because the code takes care to
perform the memory allocations only once.

So reorganize the balloon code in xen this way to fix the deadlock with
callback registration.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 drivers/xen/balloon.c |   35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 37d06ea..afe1a3f 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned long start_pfn,
 	}
 }
 
+static int alloc_balloon_scratch_page(int cpu)
+{
+	if (per_cpu(balloon_scratch_page, cpu) != NULL)
+		return 0;
+
+	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
+	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
+		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+
 static int balloon_cpu_notify(struct notifier_block *self,
 				    unsigned long action, void *hcpu)
 {
 	int cpu = (long)hcpu;
 	switch (action) {
 	case CPU_UP_PREPARE:
-		if (per_cpu(balloon_scratch_page, cpu) != NULL)
-			break;
-		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
-		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
-			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		if (alloc_balloon_scratch_page(cpu))
 			return NOTIFY_BAD;
-		}
 		break;
 	default:
 		break;
@@ -624,15 +634,16 @@ static int __init balloon_init(void)
 		return -ENODEV;
 
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		for_each_online_cpu(cpu)
-		{
-			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
-			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
-				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		register_cpu_notifier(&balloon_cpu_notifier);
+
+		get_online_cpus();
+		for_each_online_cpu(cpu) {
+			if (alloc_balloon_scratch_page(cpu)) {
+				put_online_cpus();
 				return -ENOMEM;
 			}
 		}
-		register_cpu_notifier(&balloon_cpu_notifier);
+		put_online_cpus();
 	}
 
 	pr_info("Initialising balloon driver\n");


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 08:16:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 08:16:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEDwo-0004QS-OQ; Fri, 14 Feb 2014 08:16:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zir_blazer@hotmail.com>) id 1WE5zv-0004AH-83
	for xen-devel@lists.xen.org; Thu, 13 Feb 2014 23:47:15 +0000
Received: from [85.158.139.211:15607] by server-13.bemta-5.messagelabs.com id
	36/55-18801-2895DF25; Thu, 13 Feb 2014 23:47:14 +0000
X-Env-Sender: zir_blazer@hotmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392335233!3811271!1
X-Originating-IP: [65.55.90.219]
X-SpamReason: No, hits=1.0 required=7.0 tests=FORGED_HOTMAIL_RCVD,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_12, ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20183 invoked from network); 13 Feb 2014 23:47:13 -0000
Received: from snt0-omc4-s16.snt0.hotmail.com (HELO
	snt0-omc4-s16.snt0.hotmail.com) (65.55.90.219)
	by server-13.tower-206.messagelabs.com with SMTP;
	13 Feb 2014 23:47:13 -0000
Received: from SNT151-W82 ([65.55.90.201]) by snt0-omc4-s16.snt0.hotmail.com
	with Microsoft SMTPSVC(6.0.3790.4675); 
	Thu, 13 Feb 2014 15:47:12 -0800
X-TMN: [mfvDHdQoNEH0SIQDBUPmZpYikSnMRaco9OFVQ1CjGTU=]
X-Originating-Email: [zir_blazer@hotmail.com]
Message-ID: <SNT151-W825E0E5AD8362D419805A0F39D0@phx.gbl>
From: Zir Blazer <zir_blazer@hotmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Date: Thu, 13 Feb 2014 20:47:12 -0300
Importance: Normal
MIME-Version: 1.0
X-OriginalArrivalTime: 13 Feb 2014 23:47:12.0964 (UTC)
	FILETIME=[EAEF5840:01CF2915]
X-Mailman-Approved-At: Fri, 14 Feb 2014 08:16:33 +0000
Subject: Re: [Xen-devel] (no subject)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0853339934329388111=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0853339934329388111==
Content-Type: multipart/alternative;
	boundary="_df9db6c2-0843-4bed-b94c-8e82ecbe69d4_"

--_df9db6c2-0843-4bed-b94c-8e82ecbe69d4_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit

>> I can't understand this as disk activity should be running on cores 0,
>> 1 and 2, but never on core 3. The only thing running on core 3 should
>> be my paravirtual machine and the hypervisor stub.
>>
>> Any idea what's going on?

Your Core i3 is a dual-core processor with Hyper-Threading. Hyper-Threading
lets each core run two threads simultaneously, split between a "physical"
core and a "virtual" (logical) core. The two share execution resources, and
the extra thread only gets the resources and execution time left unused by
the main thread. Since core 3 is the virtual sibling of physical core 2
(assuming Linux enumerates them as physical core 0, logical core 1, physical
core 2, logical core 3, and so on), you're giving that VM just a spare
virtual core with whatever resources physical core 2 leaves free. You should
try a full physical core (cores 2 and 3); otherwise whatever runs on core 2
WILL heavily impact what you see on core 3.


--_df9db6c2-0843-4bed-b94c-8e82ecbe69d4_--


--===============0853339934329388111==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0853339934329388111==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 08:29:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 08:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEE8h-0004yA-Ga; Fri, 14 Feb 2014 08:28:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEE8g-0004y5-9C
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 08:28:50 +0000
Received: from [85.158.143.35:40808] by server-1.bemta-4.messagelabs.com id
	15/BA-31661-1C3DDF25; Fri, 14 Feb 2014 08:28:49 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392366528!5618084!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17205 invoked from network); 14 Feb 2014 08:28:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 08:28:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 08:28:48 +0000
Message-Id: <52FDE1CE020000780011C575@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 08:28:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <chegger@amazon.de>,
	"Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52FC927D020000780011BF0A@nat28.tlf.novell.com>
	<52FD0079.8050601@amd.com> <52FD0DCC.1030904@amd.com>
In-Reply-To: <52FD0DCC.1030904@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: jinsong.liu@intel.com, boris.ostrovsky@oracle.com,
	suravee.suthikulpanit@amd.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 19:24, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
wrote:
> On 2/13/2014 11:27 AM, Aravind Gopalakrishnan wrote:
>> On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>>>     *val = 0;
>>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>> +    /* Allow only first 3 MC banks into switch() */
>>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>>       {
>>>>       case MSR_IA32_MC0_CTL:
>>>>           /* stick all 1's to MCi_CTL */
>>> I'm confused: You now add a comment as if the mask was including
>>> bit 4, which it doesn't. What am I missing?
>>
>> Darn. Sorry about that. Will fix..
> 
> Jan,
> 
> Do let me know if the following wording is fine:
> 
> /*
>   * Apply mask to allow bits[0:1] (necessary to uniquely identify MC0)
>   * MC1 is handled by virtue of 'bank' value.
>   */
> 
> If not, I'm open to suggestions:)

I don't particularly like this, but I also don't have a good alternative
suggestion. It was Christoph who asked for a comment in the first
place. Since I don't see a particular need for a comment here, you
two should work out what best suits both of you.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


>>> On 13.02.14 at 19:24, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
wrote:
> On 2/13/2014 11:27 AM, Aravind Gopalakrishnan wrote:
>> On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>>>     *val = 0;
>>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>> +    /* Allow only first 3 MC banks into switch() */
>>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>>       {
>>>>       case MSR_IA32_MC0_CTL:
>>>>           /* stick all 1's to MCi_CTL */
>>> I'm confused: You now add a comment as if the mask was including
>>> bit 4, which it doesn't. What am I missing?
>>
>> Darn. Sorry about that. Will fix..
> 
> Jan,
> 
> Do let me know if the following wording is fine:
> 
> /*
>   * Apply mask to allow bits[0:1] (necessary to uniquely identify MC0)
>   * MC1 is handled by virtue of 'bank' value.
>   */
> 
> If not, I'm open to suggestions:)

I don't particularly like this, but I also don't have a good alternative
suggestion. It was Christoph who asked for a comment in the first
place. Since I don't see a particular need for a comment here, you
two should work out what best suits both of you.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 09:14:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 09:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEEqC-0005v6-B5; Fri, 14 Feb 2014 09:13:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WEEqB-0005v1-Cr
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 09:13:47 +0000
Received: from [85.158.139.211:15187] by server-9.bemta-5.messagelabs.com id
	DA/ED-11237-A4EDDF25; Fri, 14 Feb 2014 09:13:46 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392369223!3784556!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24305 invoked from network); 14 Feb 2014 09:13:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 09:13:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,843,1384300800"; d="scan'208";a="102499916"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 09:13:43 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 04:13:42 -0500
Message-ID: <52FDDE44.3060001@citrix.com>
Date: Fri, 14 Feb 2014 10:13:40 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Miguel Clara <miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
In-Reply-To: <CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 03:09, Miguel Clara wrote:
> After compiling with the patch and rebuilding/installing the module and
> rebooting, I now get a panic when drbd starts.

There was no need to rebuild the module; the patch only modified the
block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13, and
everything seemed to be fine (no kernel panic, of course).

Since the patch didn't modify anything in the kernel module itself, I
find it unlikely to be the cause of a kernel panic; there's probably some
kind of problem with your kernel/module.

> That was all I could get from the Java Supermicro KVM console!

Without a proper serial console log it's impossible to tell what's going
on (at least to me).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 09:33:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 09:33:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEF9N-0006RI-I2; Fri, 14 Feb 2014 09:33:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WEF9M-0006RD-5c
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 09:33:36 +0000
Received: from [193.109.254.147:26737] by server-13.bemta-14.messagelabs.com
	id 78/58-01226-FE2EDF25; Fri, 14 Feb 2014 09:33:35 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392370414!316558!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5944 invoked from network); 14 Feb 2014 09:33:34 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 09:33:34 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:Subject:Content-Type:
	Content-Transfer-Encoding;
	b=lLhXl2OMK+KQXIGJZdgvtzpUzr0EI2W9+QZYR64z2EvD1RG3QlpBcTcQ
	g6eznvXvjDSIpmoytpcTjJRsKVHvvrWTeFET8/PMPiOR16m3MxLmofykb
	LNj8pjJLNkth74xcTc1c5IzlTdqeRqNDu6QprFrJg08Sy98pXCBNlnxj7
	ULNkl7fYTroyCHpyVREHOxjo46ymy9cKFLqKe+fWC7WOQDAiSljqQoqM/
	ycc5vCgI6yqvGlcZ9qAdrtQxJ3iYo;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392370414; x=1423906414;
	h=message-id:date:from:mime-version:to:subject:
	content-transfer-encoding;
	bh=Q+15ryZxBw+Sgv2AmyAqeySc0ZWpuboriH6MFthPtGI=;
	b=gCZzyPK4oj7srNjMVr8bAnYk3SKTtCV4+4aQL/KsO3Kbv6kTbZl6FRQg
	w6a9EehLBOEc6zAvZbQo9U3xd3Oo+lAmCJe8a5Pnztb+G9yqia7c9Kt4s
	Cob3fvalXrXUpyZ7WuePYb5YUv4FM+EX+o3GZUL7oMkwjEZoYkNSLleC1
	g/J5B4hvZlgLwkymxcdlIEZE5NPkNQnuaYhDKQiAgEqm5PFpf9ZbirWyv
	+C0lg5TU5CpJpApEDS1VTzwoyuwoX;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.95,843,1384297200"; d="scan'208";a="159003385"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 14 Feb 2014 10:33:33 +0100
X-IronPort-AV: E=Sophos;i="4.95,843,1384297200"; d="scan'208";a="31552927"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 14 Feb 2014 10:33:34 +0100
Message-ID: <52FDE2ED.4030008@ts.fujitsu.com>
Date: Fri, 14 Feb 2014 10:33:33 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

we've found a problem with debug registers in HVM domains with Xen (we are
running 4.2, but the code in the hypervisor seems to be unchanged in unstable)
on Intel processors:

Debug registers are restored on vcpu switch only if dr7 has any debug events
activated. This leads to problems in the following cases:

- dr0-3 are changed by the guest before events are set "active" in dr7. If a
   vcpu switch occurs between setting dr0-3 and dr7, the dr0-3 contents are
   lost. BTW: setting dr7 before dr0-3 is not an option, as this could trigger
   debug interrupts due to stale dr0-3 contents.

- single stepping is used and a vcpu switch occurs between the single-step
   trap and the guest's read of dr6. The dr6 contents (single-step indicator)
   are lost in this case.

Any thoughts?


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 09:36:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 09:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEFBj-0006X8-4i; Fri, 14 Feb 2014 09:36:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEFBh-0006X3-Ny
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 09:36:02 +0000
Received: from [85.158.139.211:54315] by server-8.bemta-5.messagelabs.com id
	55/05-05298-083EDF25; Fri, 14 Feb 2014 09:36:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392370551!3888706!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28769 invoked from network); 14 Feb 2014 09:35:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 09:35:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,843,1384300800"; d="scan'208";a="100729442"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 09:35:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 04:35:50 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEFBW-0006X9-2Y;
	Fri, 14 Feb 2014 09:35:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEFBV-0002tZ-Tq;
	Fri, 14 Feb 2014 09:35:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24870-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Feb 2014 09:35:49 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24870: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8401080547787498075=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8401080547787498075==
Content-Type: text/plain

flight 24870 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            3 host-build-prep           fail REGR. vs. 24862

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) is not necessarily enabled.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an io-based backend. If the PVH guest itself runs
    an HVM guest inside it, we need to do further work to support this;
    for now the check will bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When log dirty mode is enabled, all of the guest's memory is set to
    read-only. In a HAP-enabled domain, this clears the write bit in all
    EPT entries to make the memory read-only. This causes a problem if
    VT-d shares the page table with EPT: a device may issue a DMA write
    request, and the VT-d engine, seeing the target memory as read-only,
    raises a VT-d fault.
    
    Currently, two places enable log dirty mode: migration and vram
    tracking. Migration with a device assigned is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop0 ' (with the trailing whitespace) does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============8401080547787498075==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8401080547787498075==--

From xen-devel-bounces@lists.xen.org Fri Feb 14 09:36:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 09:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEFBj-0006X8-4i; Fri, 14 Feb 2014 09:36:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEFBh-0006X3-Ny
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 09:36:02 +0000
Received: from [85.158.139.211:54315] by server-8.bemta-5.messagelabs.com id
	55/05-05298-083EDF25; Fri, 14 Feb 2014 09:36:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392370551!3888706!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28769 invoked from network); 14 Feb 2014 09:35:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 09:35:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,843,1384300800"; d="scan'208";a="100729442"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 09:35:50 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 04:35:50 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEFBW-0006X9-2Y;
	Fri, 14 Feb 2014 09:35:50 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEFBV-0002tZ-Tq;
	Fri, 14 Feb 2014 09:35:50 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24870-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Feb 2014 09:35:49 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24870: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8401080547787498075=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8401080547787498075==
Content-Type: text/plain

flight 24870 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            3 host-build-prep           fail REGR. vs. 24862

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) is not necessarily enabled.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an io-based backend. If the PVH guest itself runs
    an HVM guest inside it, we need to do further work to support this;
    for now the check will bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When log dirty mode is enabled, all of the guest's memory is set to
    read-only. In a HAP-enabled domain, this clears the write bit in all
    EPT entries to make the memory read-only. This causes a problem if
    VT-d shares the page table with EPT: a device may issue a DMA write
    request, and the VT-d engine, seeing the target memory as read-only,
    raises a VT-d fault.
    
    Currently, two places enable log dirty mode: migration and vram
    tracking. Migration with a device assigned is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop0 ' (with the trailing whitespace) does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============8401080547787498075==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--===============8401080547787498075==--

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:12:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:12:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEFke-0007d6-Nf; Fri, 14 Feb 2014 10:12:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <luis.henriques@canonical.com>) id 1WEFkc-0007cw-HR
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 10:12:06 +0000
Received: from [85.158.139.211:30747] by server-17.bemta-5.messagelabs.com id
	C2/16-31975-5FBEDF25; Fri, 14 Feb 2014 10:12:05 +0000
X-Env-Sender: luis.henriques@canonical.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392372724!3895490!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19812 invoked from network); 14 Feb 2014 10:12:04 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-8.tower-206.messagelabs.com with SMTP;
	14 Feb 2014 10:12:04 -0000
Received: from bl20-147-22.dsl.telepac.pt ([2.81.147.22] helo=localhost)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.71)
	(envelope-from <luis.henriques@canonical.com>)
	id 1WEFkU-0008Nx-RN; Fri, 14 Feb 2014 10:11:59 +0000
From: Luis Henriques <luis.henriques@canonical.com>
To: John Stultz <john.stultz@linaro.org>
Date: Fri, 14 Feb 2014 10:11:57 +0000
Message-Id: <1392372717-10178-1-git-send-email-luis.henriques@canonical.com>
X-Mailer: git-send-email 1.8.3.2
X-Extended-Stable: 3.11
Cc: Prarit Bhargava <prarit@redhat.com>,
	Luis Henriques <luis.henriques@canonical.com>,
	Richard Cochran <richardcochran@gmail.com>,
	xen-devel@lists.xen.org, kernel-team@lists.ubuntu.com,
	David Vrabel <david.vrabel@citrix.com>,
	Sasha Levin <sasha.levin@oracle.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [3.11.y.z extended stable] Patch "timekeeping: Fix
	potential lost pv notification of time change" has been added
	to staging queue
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a note to let you know that I have just added a patch titled

    timekeeping: Fix potential lost pv notification of time change

to the linux-3.11.y-queue branch of the 3.11.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.11.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.11.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Luis

------

>From 41edb2f35f8f770b4e9022e515c606edf5a02060 Mon Sep 17 00:00:00 2001
From: John Stultz <john.stultz@linaro.org>
Date: Wed, 11 Dec 2013 20:07:49 -0800
Subject: timekeeping: Fix potential lost pv notification of time change

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also renames the variable from action to clock_set,
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
---
 kernel/time/timekeeping.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 22f3ae2..7b96f30 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;

 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);

 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }

 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	tk->cycle_last += interval;

 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);

 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;

 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);

 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
--
1.8.3.2



X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a note to let you know that I have just added a patch titled

    timekeeping: Fix potential lost pv notification of time change

to the linux-3.11.y-queue branch of the 3.11.y.z extended stable tree 
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.11.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.11.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Luis

------

>From 41edb2f35f8f770b4e9022e515c606edf5a02060 Mon Sep 17 00:00:00 2001
From: John Stultz <john.stultz@linaro.org>
Date: Wed, 11 Dec 2013 20:07:49 -0800
Subject: timekeeping: Fix potential lost pv notification of time change

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only uses the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation to pass down
that action flag so proper notification will occur.

This patch also renames the variable action -> clock_set
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
---
 kernel/time/timekeeping.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 22f3ae2..7b96f30 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;

 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);

 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }

 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	tk->cycle_last += interval;

 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);

 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;

 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);

 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
--
1.8.3.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEFws-0008N9-Lc; Fri, 14 Feb 2014 10:24:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>)
	id 1WEFwr-0008Mo-2J; Fri, 14 Feb 2014 10:24:45 +0000
Received: from [85.158.139.211:57633] by server-4.bemta-5.messagelabs.com id
	E8/B9-08092-CEEEDF25; Fri, 14 Feb 2014 10:24:44 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392373481!3918534!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13128 invoked from network); 14 Feb 2014 10:24:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 10:24:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100739335"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 10:24:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 05:24:41 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WEFwm-0000GF-9b;
	Fri, 14 Feb 2014 10:24:40 +0000
Message-ID: <52FDEEDB.40305@eu.citrix.com>
Date: Fri, 14 Feb 2014 10:24:27 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Dario Faggioli
	<dario.faggioli@citrix.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
	<1392287382.32038.34.camel@Solace>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
X-DLP: MIA2
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>, "'Jan
	Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/14/2014 01:13 AM, Zhang, Yang Z wrote:
> Dario Faggioli wrote on 2014-02-13:
>> [Adding the Xen publicity mailing list]
>>
>> On gio, 2014-02-13 at 03:55 +0000, Zhang, Yang Z wrote:
>>> Hi George,
>>>
>>> I have updated the Latest Xen nested status in the Xen wiki page,
>>> please have a look.
>>>
>> Hey Yang,
>>
>> This is something really useful to have on the wiki, thanks for doing it.
>>
>>> Although it is hard to say nested is well supported or of product
>>> quality, it is ready to let people know that nested is basically
>>> supported by Xen.
>>>
>> Right. With that in mind, I think this topic would be a great one for
>> a blog post on the Xen Project's blog! The content you have on the
>> wiki is mostly fine as the core content of the blog post too, we'd just need a couple more "colloquial"
>> glue paragraphs here and there, both about nested virt in general and
>> about Xen supporting it, for instance...
>>
>>> Especially, for the Xen on Xen case, I haven't seen any issue with it
>>> for more than half a year. Besides, I am always using nested Xen to
>>> debug Xen booting issues without needing to reboot my real box. And it
>>> really helps me a lot.
>> ...something like this line above... It's actually quite a good
>> example of what I meant above! :-)
>>
>> So, how do you feel about this?
>>
> Sure. If you can do it, that's great.
>
>>> So, if possible, I hope we can add nested support into Xen 4.4
>>> release to let people know current status.
>>>
>> Sorry for my ignorance on the subject, what is it that is missing for
>> making the above (and the content of the wiki) true for 4.4?
>>
> Sorry, I didn't explain it clearly. Most of the patches needed to run nested are already in Xen upstream. What I want is to add "nested is basically supported in Xen 4.4" to the Xen 4.4 release notes to let people know it.

I'm afraid "basically supported" will imply to people that they might 
consider shipping it on production systems.  But because of the issues 
with shadow-on-HAP, and the potential locking issues with the nested p2m 
table, both of which are under the control of the guest admin rather than 
the host admin, I don't think that's a recommendation we can make at this time.

But I do think making some kind of announcement about common 
functionality being complete and ready to be tested would be a good 
idea.  When we come to make the release we can brainstorm on what 
wording to use.

Actually, I wonder whether advertising Win7's XP compatibility mode as a 
separate "tech preview" feature would make sense.  The people who use 
that feature are very likely quite different from most other people who 
might think about using nested virtualization.

Also, re what's missing: We had a discussion a few weeks ago about what 
it might take to move nested virt out of "tech preview/experimental" 
mode.  I'll write those up and put them in my 4.4 planning e-mail so we 
can track the progress.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:26:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEFyK-00006v-6C; Fri, 14 Feb 2014 10:26:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WEFyJ-00006g-1Y
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 10:26:15 +0000
Received: from [85.158.139.211:40770] by server-6.bemta-5.messagelabs.com id
	44/23-14342-64FEDF25; Fri, 14 Feb 2014 10:26:14 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392373569!3891081!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11780 invoked from network); 14 Feb 2014 10:26:10 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 10:26:10 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by fldsmtpe04.verizon.com with ESMTP; 14 Feb 2014 10:26:09 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="671673366"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.202])
	by fldsmtpi01.verizon.com with ESMTP; 14 Feb 2014 10:26:08 +0000
Message-ID: <52FDEF40.8040709@terremark.com>
Date: Fri, 14 Feb 2014 05:26:08 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<1392333198.32038.153.camel@Solace>
In-Reply-To: <1392333198.32038.153.camel@Solace>
Cc: Simon Martin <furryfuttock@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/14 18:13, Dario Faggioli wrote:
> On gio, 2014-02-13 at 22:25 +0000, Simon Martin wrote:
>> Thanks for all the replies guys.
>>
> :-)
>
>> Don> How many instruction per second a thread gets does depend on the
>> Don> "idleness" of other threads (no longer just the hyperThread's
>> Don> parther).
>>
>> This seems a bit strange to me. In my case I have a time-critical PV
>> guest running by itself in a CPU pool, so Xen should not be scheduling
>> it, and I can't see how this hypervisor thread would be affected.
>>
> I think Don is referring to the idleness of the other _hardware_ threads
> in the chip, rather than software threads of execution, either in Xen or
> in Dom0/DomU. I checked his original e-mail and, AFAIUI, he seems to
> confirm that the throughput you get on, say, core 3 depends on what
> its sibling core is doing (which really is its sibling hyperthread,
> again in the hardware sense... Gah, the terminology is just a mess!
> :-P). He also seems to add that there is a similar kind of
> inter-dependency between all the hardware hyperthreads, not just
> between siblings.
>
> Does this make sense Don?
>
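The hardware sibling pairing Dario describes can be read straight from Linux
sysfs. A minimal sketch (standard sysfs topology paths; which CPUs actually
pair up depends entirely on the machine):

```python
# List hyperthread siblings per logical CPU from Linux sysfs.
# On an 8-logical-CPU part like the one discussed here, cpu3 and cpu7
# would typically share a core, but the exact pairing is machine-specific.
import glob

def sibling_map():
    siblings = {}
    paths = glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list")
    for path in sorted(paths):          # sorted only for deterministic output
        cpu = path.split("/")[5]        # e.g. "cpu3"
        with open(path) as f:
            siblings[cpu] = f.read().strip()  # e.g. "3,7"
    return siblings

if __name__ == "__main__":
    for cpu, sibs in sorted(sibling_map().items()):
        print(cpu, "->", sibs)
```

Two logical CPUs listed on the same line share one physical core, which is
exactly the case where the throughput of one depends on the idleness of the
other.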

Yes, but the results I am getting vary based on the distro (most likely 
the microcode version).


Linux (and, I think, Xen) both have a CPU scheduler that picks idle cores 
before hyperthread siblings:

top - 04:06:29 up 66 days, 15:31, 11 users,  load average: 2.43, 0.72, 0.29
Tasks: 250 total,   1 running, 249 sleeping,   0 stopped,   0 zombie
Cpu0  : 99.7%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.2%hi,  0.1%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi,  0.2%si,  0.0%st
Cpu2  : 99.9%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.1%hi,  0.0%si,  0.0%st
Cpu3  :  1.6%us,  0.1%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  : 99.9%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.1%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  1.4%us,  0.0%sy,  0.0%ni, 98.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  : 99.9%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.1%hi,  0.0%si,  0.0%st
Mem:  32940640k total, 18008576k used, 14932064k free,   285740k buffers
Swap: 10223612k total,     4696k used, 10218916k free, 16746224k cached


This is an example without Xen involved, on Fedora 17:

Linux dcs-xen-50 3.8.11-100.fc17.x86_64 #1 SMP Wed May 1 19:31:26 UTC 
2013 x86_64 x86_64 x86_64 GNU/Linux

On this machine:

Just 7:
         start                     done
thr 0:  14 Feb 14 04:11:08.944566  14 Feb 14 04:13:20.874764
        +02:11.930198 ~= 131.93 and 9.10 GiI/Sec


6 & 7:
         start                     done
thr 0:  14 Feb 14 04:14:31.010426  14 Feb 14 04:18:55.404116
        +04:24.393690 ~= 264.39 and 4.54 GiI/Sec
thr 1:  14 Feb 14 04:14:31.010426  14 Feb 14 04:18:55.415561
        +04:24.405135 ~= 264.41 and 4.54 GiI/Sec


5 & 7:
         start                     done
thr 0:  14 Feb 14 04:20:28.902831  14 Feb 14 04:22:45.563511
        +02:16.660680 ~= 136.66 and 8.78 GiI/Sec
thr 1:  14 Feb 14 04:20:28.902831  14 Feb 14 04:22:46.182159
        +02:17.279328 ~= 137.28 and 8.74 GiI/Sec


1 & 3 & 5 & 7:
         start                     done
thr 0:  14 Feb 14 04:32:24.353302  14 Feb 14 04:35:16.870558
        +02:52.517256 ~= 172.52 and 6.96 GiI/Sec
thr 1:  14 Feb 14 04:32:24.353301  14 Feb 14 04:35:17.371155
        +02:53.017854 ~= 173.02 and 6.94 GiI/Sec
thr 2:  14 Feb 14 04:32:24.353302  14 Feb 14 04:35:17.225871
        +02:52.872569 ~= 172.87 and 6.94 GiI/Sec
thr 3:  14 Feb 14 04:32:24.353302  14 Feb 14 04:35:16.655362
        +02:52.302060 ~= 172.30 and 6.96 GiI/Sec



This is from:
Feb 14 04:29:21 dcs-xen-51 kernel: [   41.921367] microcode: CPU3 updated to revision 0x28, date = 2012-04-24


On CentOS 5.10:
Linux dcs-xen-53 2.6.18-371.el5 #1 SMP Tue Oct 1 08:35:08 EDT 2013 
x86_64 x86_64 x86_64 GNU/Linux

only 7:
         start                     done
thr 0:  14 Feb 14 09:43:10.903549  14 Feb 14 09:46:04.925463
        +02:54.021914 ~= 174.02 and 6.90 GiI/Sec


6 & 7:
         start                     done
thr 0:  14 Feb 14 09:49:17.804633  14 Feb 14 09:55:02.473549
        +05:44.668916 ~= 344.67 and 3.48 GiI/Sec
thr 1:  14 Feb 14 09:49:17.804618  14 Feb 14 09:55:02.533243
        +05:44.728625 ~= 344.73 and 3.48 GiI/Sec


5 & 7:
         start                     done
thr 0:  14 Feb 14 10:01:30.566603  14 Feb 14 10:04:23.024858
        +02:52.458255 ~= 172.46 and 6.96 GiI/Sec
thr 1:  14 Feb 14 10:01:30.566603  14 Feb 14 10:04:23.069964
        +02:52.503361 ~= 172.50 and 6.96 GiI/Sec


1 & 3 & 5 & 7:
         start                     done
thr 0:  14 Feb 14 10:05:58.359646  14 Feb 14 10:08:50.984629
        +02:52.624983 ~= 172.62 and 6.95 GiI/Sec
thr 1:  14 Feb 14 10:05:58.359646  14 Feb 14 10:08:50.993064
        +02:52.633418 ~= 172.63 and 6.95 GiI/Sec
thr 2:  14 Feb 14 10:05:58.359645  14 Feb 14 10:08:50.857982
        +02:52.498337 ~= 172.50 and 6.96 GiI/Sec
thr 3:  14 Feb 14 10:05:58.359645  14 Feb 14 10:08:50.905031
        +02:52.545386 ~= 172.55 and 6.95 GiI/Sec




Feb 14 09:41:42 dcs-xen-53 kernel: microcode: CPU3 updated from revision 0x17 to 0x29, date = 06122013
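One way to sanity-check the figures above: each run appears to execute a
fixed per-thread workload, so elapsed time multiplied by the GiI/Sec rate
should come out roughly constant. The ~1200 GiI total below is inferred from
the reported numbers, not stated anywhere in the thread:

```python
# Each run should execute the same fixed per-thread workload, so
# seconds * GiI/s should be ~constant (~1200 GiI, inferred from the data).
runs = [
    ("cpu 7 alone",    131.93, 9.10),  # solo throughput
    ("siblings 6 & 7", 264.39, 4.54),  # shared core: rate roughly halves
    ("cores 5 & 7",    136.66, 8.78),  # separate cores: near-solo rate
    ("cores 1,3,5,7",  172.52, 6.96),  # all cores busy: rate drops some more
]
for label, secs, rate in runs:
    total = secs * rate
    print("%-16s %7.2f GiI" % (label, total))
    assert abs(total - 1200) < 10  # every run does about the same work
```

The constant product supports the core-vs-thread reading: only the
hyperthread-sibling run halves the per-thread rate, while spreading across
distinct cores costs far less.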


Hope this helps.
     -Don Slutz

>>>> 6.- All VCPUs are pinned:
>>>>
>> Dario> Right, although, if you use cpupools, and if I've understood what you're
>> Dario> up to, you really should not require pinning. I mean, the isolation
>> Dario> between the RT-ish domain and the rest of the world should be already in
>> Dario> place thanks to cpupools.
>>
>> This is what I thought. However, when looking at the vcpu-list, the
>> CPU affinity was "all" until I started pinning. As I wasn't sure
>> whether that was "all inside this cpu pool" or just "all", I felt it
>> was safer to do it explicitly.
>>
> Actually, you are right: we could present things in a way that is
> clearer when one observes the output! So, I confirm that, despite the
> fact that you see "all", that "all" is relative to the cpupool the
> domain is assigned to.
>
> I'll try to think on how to make this more evident... A note in the
> manpage and/or the various sources of documentation, is the easy (but
> still necessary, I agree) part, and I'll add this to my TODO list.
> Actually modifying the output is more tricky, as affinity and cpupools
> are orthogonal by design, and that is the right (IMHO) thing.
>
> I guess trying to tweak the printf()-s in `xl vcpu-list' would not be
> that hard... I'll have a look and see if I can come up with a proposal.
>
>> Dario> So, if you ask me, you're restricting too much things in
>> Dario> pool-0, where dom0 and the Windows VM runs. In fact, is there a
>> Dario> specific reason why you need all their vcpus to be statically
>> Dario> pinned each one to only one pcpu? If not, I'd leave them a
>> Dario> little bit more of freedom.
>>
>> I agree with you here, however when I don't pin CPU affinity is "all".
>> Is this "all in the CPU pool"? I couldn't find that info.
>>
> Again, yes: once a domain is in a cpupool, no matter what its affinity
> says, it won't ever reach a pcpu assigned to another cpupool. The
> technical reason is that each cpupool is ruled by its own (copy of a)
> scheduler, even if you use, e.g., credit for both/all the pools. In
> that case, what you get are two full instances of credit, completely
> independent of each other, each one in charge only of a very specific
> subset of pcpus (as mandated by cpupools). So: different runqueues,
> different data structures, different everything.
>
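Dario's per-pool scheduler point can be made concrete with a minimal
cpupool config file; the pool name and cpu numbers below are made up for
illustration (see the xlcpupool.cfg documentation for the exact syntax):

```
# rtpool.cfg -- a pool driven by its own, independent credit scheduler
name  = "rtpool"
sched = "credit"
cpus  = ["2", "3"]
```

After something like `xl cpupool-create rtpool.cfg` and
`xl cpupool-migrate <domain> rtpool`, the domain's vcpus can only ever run
on pcpus 2-3, so an affinity of "all" in `xl vcpu-list` means "all pcpus in
rtpool", not all pcpus in the host.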
>> Dario> What I'd try is:
>> Dario>  1. all dom0 and win7 vcpus free, so no pinning in pool0.
>> Dario>  2. pinning as follows:
>> Dario>      * all vcpus of win7 --> pcpus 1,2
>> Dario>      * all vcpus of dom0 --> no pinning
>> Dario>    this way, what you get is the following: win7 could suffer sometimes,
>> Dario>    if all its 3 vcpus get busy, but that, I think, is acceptable, at
>> Dario>    least up to a certain extent; is that the case?
>> Dario>    At the same time, you
>> Dario>    are making sure dom0 always has a chance to run, as pcpu#0 would be
>> Dario>    its exclusive playground, in case someone, including your pv499
>> Dario>    domain, needs its services.
>>
>> This is what I had when I started :-). Thanks for the confirmation
>> that I was doing it right. However, if the hyperthreading is the issue,
>> then I will only have 2 PCPUs available, and I will assign them both to
>> dom0 and win7.
>>
> Yes, with hyperthreading in mind, that is what you should do.
>
> Once we have confirmed that hyperthreading is the issue, we'll see
> what we can do. I mean, if, in your case, it's fine to 'waste' a cpu,
> then ok, but I think we need a general solution for this... Perhaps with
> slightly worse performance than just leaving one core/hyperthread
> completely idle, but at the same time more resource efficient.
>
> I wonder how tweaking the sched_smt_power_savings would deal with
> this...
>
>> Dario> Right. Are you familiar with tracing what happens inside Xen
>> Dario> with xentrace and, perhaps, xenalyze? It takes a bit of time to
>> Dario> get used to it but, once you master it, it is a good means of
>> Dario> getting out really useful info!
>>
>> Dario> There is a blog post about that here:
>> Dario> http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
>> Dario> and it should have most of the info, or the links to where to
>> Dario> find them.
>>
>> Thanks for this. If this problem is more than the hyperthreading then
>> I will definitely use it. Also looks like it might be useful when I
>> start looking at the jitter on the singleshot timer (which should be
>> in a couple of weeks).
>>
> It will prove very useful for that, I'm sure! :-)
>
> Let us know how the re-testing goes.
>
> Regards,
> Dario
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:26:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEFyK-00006v-6C; Fri, 14 Feb 2014 10:26:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WEFyJ-00006g-1Y
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 10:26:15 +0000
Received: from [85.158.139.211:40770] by server-6.bemta-5.messagelabs.com id
	44/23-14342-64FEDF25; Fri, 14 Feb 2014 10:26:14 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392373569!3891081!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11780 invoked from network); 14 Feb 2014 10:26:10 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 10:26:10 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by fldsmtpe04.verizon.com with ESMTP; 14 Feb 2014 10:26:09 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="671673366"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.5.202])
	by fldsmtpi01.verizon.com with ESMTP; 14 Feb 2014 10:26:08 +0000
Message-ID: <52FDEF40.8040709@terremark.com>
Date: Fri, 14 Feb 2014 05:26:08 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<1392333198.32038.153.camel@Solace>
In-Reply-To: <1392333198.32038.153.camel@Solace>
Cc: Simon Martin <furryfuttock@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/13/14 18:13, Dario Faggioli wrote:
> On gio, 2014-02-13 at 22:25 +0000, Simon Martin wrote:
>> Thanks for all the replies guys.
>>
> :-)
>
>> Don> How many instructions per second a thread gets does depend on the
>> Don> "idleness" of other threads (no longer just the hyperthread's
>> Don> partner).
>>
>> This seems a bit strange to me. In my case I have a time-critical
>> PV domain running by itself in a CPU pool, so Xen should not be
>> scheduling it, and I can't see how this hypervisor thread would be affected.
>>
> I think Don is referring to the idleness of the other _hardware_ threads
> in the chip, rather than software threads of execution, either in Xen or
> in Dom0/DomU. I checked his original e-mail and, AFAIUI, he seems to
> confirm that the throughput you get on, say, core 3 depends on what
> its sibling core (which really is its sibling hyperthread, again in the
> hardware sense... Gah, the terminology is just a mess! :-P) is doing. He
> also seems to add that there is a similar kind of inter-dependency
> between all the hardware hyperthreads, not just between siblings.
>
> Does this make sense Don?
>

Yes, but the results I am getting vary based on the distro (most likely 
the microcode version).


Linux (and, I think, Xen) both have a CPU scheduler that picks cores 
before threads:

top - 04:06:29 up 66 days, 15:31, 11 users,  load average: 2.43, 0.72, 0.29
Tasks: 250 total,   1 running, 249 sleeping,   0 stopped,   0 zombie
Cpu0  : 99.7%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.2%hi, 0.1%si,  
0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni, 99.8%id,  0.0%wa,  0.0%hi, 0.2%si,  
0.0%st
Cpu2  : 99.9%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.1%hi, 0.0%si,  
0.0%st
Cpu3  :  1.6%us,  0.1%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi, 0.0%si,  
0.0%st
Cpu4  : 99.9%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.1%hi, 0.0%si,  
0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi, 0.0%si,  
0.0%st
Cpu6  :  1.4%us,  0.0%sy,  0.0%ni, 98.6%id,  0.0%wa,  0.0%hi, 0.0%si,  
0.0%st
Cpu7  : 99.9%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.1%hi, 0.0%si,  
0.0%st
Mem:  32940640k total, 18008576k used, 14932064k free,   285740k buffers
Swap: 10223612k total,     4696k used, 10218916k free, 16746224k cached


This is an example without Xen involved, on Fedora 17:

Linux dcs-xen-50 3.8.11-100.fc17.x86_64 #1 SMP Wed May 1 19:31:26 UTC 
2013 x86_64 x86_64 x86_64 GNU/Linux

On this machine:

Just 7:
         start                     done
thr 0:  14 Feb 14 04:11:08.944566  14 Feb 14 04:13:20.874764
        +02:11.930198 ~= 131.93 and 9.10 GiI/Sec


6 & 7:
         start                     done
thr 0:  14 Feb 14 04:14:31.010426  14 Feb 14 04:18:55.404116
        +04:24.393690 ~= 264.39 and 4.54 GiI/Sec
thr 1:  14 Feb 14 04:14:31.010426  14 Feb 14 04:18:55.415561
        +04:24.405135 ~= 264.41 and 4.54 GiI/Sec


5 & 7:
         start                     done
thr 0:  14 Feb 14 04:20:28.902831  14 Feb 14 04:22:45.563511
        +02:16.660680 ~= 136.66 and 8.78 GiI/Sec
thr 1:  14 Feb 14 04:20:28.902831  14 Feb 14 04:22:46.182159
        +02:17.279328 ~= 137.28 and 8.74 GiI/Sec


1 & 3 & 5 & 7:
         start                     done
thr 0:  14 Feb 14 04:32:24.353302  14 Feb 14 04:35:16.870558
        +02:52.517256 ~= 172.52 and 6.96 GiI/Sec
thr 1:  14 Feb 14 04:32:24.353301  14 Feb 14 04:35:17.371155
        +02:53.017854 ~= 173.02 and 6.94 GiI/Sec
thr 2:  14 Feb 14 04:32:24.353302  14 Feb 14 04:35:17.225871
        +02:52.872569 ~= 172.87 and 6.94 GiI/Sec
thr 3:  14 Feb 14 04:32:24.353302  14 Feb 14 04:35:16.655362
        +02:52.302060 ~= 172.30 and 6.96 GiI/Sec



This is from:
Feb 14 04:29:21 dcs-xen-51 kernel: [   41.921367] microcode: CPU3 
updated to revision 0x28, date = 2012-04-24


On CentOS 5.10:
Linux dcs-xen-53 2.6.18-371.el5 #1 SMP Tue Oct 1 08:35:08 EDT 2013 
x86_64 x86_64 x86_64 GNU/Linux

only 7:
         start                     done
thr 0:  14 Feb 14 09:43:10.903549  14 Feb 14 09:46:04.925463
        +02:54.021914 ~= 174.02 and 6.90 GiI/Sec


6 & 7:
         start                     done
thr 0:  14 Feb 14 09:49:17.804633  14 Feb 14 09:55:02.473549
        +05:44.668916 ~= 344.67 and 3.48 GiI/Sec
thr 1:  14 Feb 14 09:49:17.804618  14 Feb 14 09:55:02.533243
        +05:44.728625 ~= 344.73 and 3.48 GiI/Sec


5 & 7:
         start                     done
thr 0:  14 Feb 14 10:01:30.566603  14 Feb 14 10:04:23.024858
        +02:52.458255 ~= 172.46 and 6.96 GiI/Sec
thr 1:  14 Feb 14 10:01:30.566603  14 Feb 14 10:04:23.069964
        +02:52.503361 ~= 172.50 and 6.96 GiI/Sec


1 & 3 & 5 & 7:
         start                     done
thr 0:  14 Feb 14 10:05:58.359646  14 Feb 14 10:08:50.984629
        +02:52.624983 ~= 172.62 and 6.95 GiI/Sec
thr 1:  14 Feb 14 10:05:58.359646  14 Feb 14 10:08:50.993064
        +02:52.633418 ~= 172.63 and 6.95 GiI/Sec
thr 2:  14 Feb 14 10:05:58.359645  14 Feb 14 10:08:50.857982
        +02:52.498337 ~= 172.50 and 6.96 GiI/Sec
thr 3:  14 Feb 14 10:05:58.359645  14 Feb 14 10:08:50.905031
        +02:52.545386 ~= 172.55 and 6.95 GiI/Sec




Feb 14 09:41:42 dcs-xen-53 kernel: microcode: CPU3 updated from revision 
0x17 to 0x29, date = 06122013
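A quick sanity check on the timings above (assuming the benchmark runs a fixed amount of work per thread; the ~1200 GiI total below is inferred from duration x rate, not stated in the results): duration times the reported rate is nearly constant across runs, and the hyperthread-sibling case runs at roughly half the solo rate, while separate-core pairs are barely affected.

```python
# Recompute total work from the Fedora 17 runs above:
# (duration in seconds, reported rate in GiI/s)
runs = {
    "only 7":        (131.93, 9.10),  # one thread, core to itself
    "6 & 7 (HT)":    (264.39, 4.54),  # two threads on sibling hyperthreads
    "5 & 7 (cores)": (136.66, 8.78),  # two threads on separate cores
}

for name, (seconds, rate) in runs.items():
    # duration * rate comes out near 1200 GiI for every run,
    # so each thread does a fixed amount of work.
    print(f"{name}: ~{seconds * rate:.0f} GiI total")

# Sibling hyperthreads share one core's execution resources,
# so the per-thread rate drops to about half the solo rate.
print(f"HT/solo rate ratio: {runs['6 & 7 (HT)'][1] / runs['only 7'][1]:.2f}")
```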


Hope this helps.
     -Don Slutz

>>>> 6.- All VCPUs are pinned:
>>>>
>> Dario> Right, although, if you use cpupools, and if I've understood what you're
>> Dario> up to, you really should not require pinning. I mean, the isolation
>> Dario> between the RT-ish domain and the rest of the world should be already in
>> Dario> place thanks to cpupools.
>>
>> This is what I thought; however, when looking at the vcpu-list,
>> CPU affinity was "all" until I started pinning. As I wasn't sure
>> whether that was "all inside this cpu pool" or "all", I felt it was
>> safer to do it explicitly.
>>
> Actually, you are right, we could present things in a way that is
> clearer when one observes the output! So, I confirm that, despite the
> fact that you see "all", that "all" is relative to the cpupool the domain
> is assigned to.
>
> I'll try to think about how to make this more evident... A note in the
> manpage and/or the various sources of documentation is the easy (but
> still necessary, I agree) part, and I'll add this to my TODO list.
> Actually modifying the output is trickier, as affinity and cpupools
> are orthogonal by design, and that is (IMHO) the right thing.
>
> I guess trying to tweak the printf()-s in `xl vcpu-list' would not be
> that hard... I'll have a look and see if I can come up with a proposal.
>
>> Dario> So, if you ask me, you're restricting things too much in
>> Dario> pool-0, where dom0 and the Windows VM run. In fact, is there a
>> Dario> specific reason why you need all their vcpus to be statically
>> Dario> pinned each one to only one pcpu? If not, I'd leave them a
>> Dario> little bit more freedom.
>>
>> I agree with you here; however, when I don't pin, CPU affinity is "all".
>> Is this "all in the CPU pool"? I couldn't find that info.
>>
> Again, yes: once a domain is in a cpupool, no matter what its affinity
> says, it won't ever reach a pcpu assigned to another cpupool. The
> technical reason is that each cpupool is ruled by its own (copy of a)
> scheduler, even if you use, e.g., credit, for both/all the pools. In
> that case, what you will get are two full instances of credit,
> completely independent of each other, each one in charge only of a
> very specific subset of pcpus (as mandated by cpupools). So, different
> runqueues, different data structures, different everything.
>
>> Dario> What I'd try is:
>> Dario>  1. all dom0 and win7 vcpus free, so no pinning in pool0.
>> Dario>  2. pinning as follows:
>> Dario>      * all vcpus of win7 --> pcpus 1,2
>> Dario>      * all vcpus of dom0 --> no pinning
>> Dario>    this way, what you get is the following: win7 could suffer sometimes,
>> Dario>    if all its 3 vcpus get busy, but that, I think, is acceptable, at
>> Dario>    least up to a certain extent; is that the case?
>> Dario>    At the same time, you
>> Dario>    are making sure dom0 always has a chance to run, as pcpu#0 would be
>> Dario>    its exclusive playground, in case someone, including your pv499
>> Dario>    domain, needs its services.
>>
>> This is what I had when I started :-). Thanks for the confirmation
>> that I was doing it right. However, if the hyperthreading is the issue,
>> then I will only have 2 PCPUs available, and I will assign them both to
>> dom0 and win7.
>>
> Yes, with hyperthreading in mind, that is what you should do.
>
> Once we have confirmed that hyperthreading is the issue, we'll see
> what we can do. I mean, if, in your case, it's fine to 'waste' a cpu,
> then ok, but I think we need a general solution for this... Perhaps with
> slightly worse performance than just leaving one core/hyperthread
> completely idle, but at the same time more resource efficient.
>
> I wonder how tweaking the sched_smt_power_savings would deal with
> this...
>
>> Dario> Right. Are you familiar with tracing what happens inside Xen
>> Dario> with xentrace and, perhaps, xenalyze? It takes a bit of time to
>> Dario> get used to it but, once you master it, it is a good means of
>> Dario> getting out really useful info!
>>
>> Dario> There is a blog post about that here:
>> Dario> http://blog.xen.org/index.php/2012/09/27/tracing-with-xentrace-and-xenalyze/
>> Dario> and it should have most of the info, or the links to where to
>> Dario> find them.
>>
>> Thanks for this. If this problem is more than the hyperthreading then
>> I will definitely use it. Also looks like it might be useful when I
>> start looking at the jitter on the singleshot timer (which should be
>> in a couple of weeks).
>>
> It will prove very useful for that, I'm sure! :-)
>
> Let us know how the re-testing goes.
>
> Regards,
> Dario
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:38:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEGA1-0000tb-RG; Fri, 14 Feb 2014 10:38:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WEGA0-0000tW-4I
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 10:38:20 +0000
Received: from [193.109.254.147:28168] by server-9.bemta-14.messagelabs.com id
	96/17-24895-B12FDF25; Fri, 14 Feb 2014 10:38:19 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392374297!4341443!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23484 invoked from network); 14 Feb 2014 10:38:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 10:38:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100742221"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 10:38:17 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 05:38:17 -0500
Message-ID: <52FDF217.3040005@citrix.com>
Date: Fri, 14 Feb 2014 11:38:15 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: John Baldwin <jhb@freebsd.org>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1387884062-41154-11-git-send-email-roger.pau@citrix.com>
	<2410827.IqfpSAhe3T@ralph.baldwin.cx>
In-Reply-To: <2410827.IqfpSAhe3T@ralph.baldwin.cx>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 10/13] xen: add ACPI bus to xen_nexus
 when running as Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 08/02/14 22:50, John Baldwin wrote:
> On Tuesday, December 24, 2013 12:20:59 PM Roger Pau Monne wrote:
>> Also disable a couple of ACPI devices that are not usable under Dom0.
> 
> Hmm, setting debug.acpi.disabled in this way is a bit hacky.  It might
> be fine however if there's no way for the user to set it before booting
> the kernel (as opposed to having the relevant drivers explicitly disable
> themselves under Xen, which I think would be cleaner, but would also
> make your patch larger)

Thanks for the review. The user can pass parameters to FreeBSD when
booted as Dom0; I just find it uncomfortable to force the user into
always setting something on the command line in order to boot.

What do you mean by "having the relevant drivers explicitly disable
themselves under Xen"? Adding a gate on every one of those devices, like
"if (xen_pv_domain()) return (ENXIO);" in the identify/probe routine,
seems even worse.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:41:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEGCa-00010P-De; Fri, 14 Feb 2014 10:41:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEGCZ-00010K-Pt
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 10:41:00 +0000
Received: from [85.158.137.68:5604] by server-11.bemta-3.messagelabs.com id
	79/57-04255-AB2FDF25; Fri, 14 Feb 2014 10:40:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392374458!547161!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10492 invoked from network); 14 Feb 2014 10:40:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 10:40:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 10:40:57 +0000
Message-Id: <52FE00C8020000780011C649@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 10:40:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Juergen Gross" <juergen.gross@ts.fujitsu.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
In-Reply-To: <52FDE2ED.4030008@ts.fujitsu.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.02.14 at 10:33, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> Debug registers are restored on vcpu switch only if db7 has any debug events
> activated. This leads to problems in the following cases:
> 
> - db0-3 are changed by the guest before events are set "active" in db7. In case
>    of a vcpu switch between setting db0-3 and db7, db0-3 are lost. BTW: setting
>    db7 before db0-3 is no option, as this could trigger debug interrupts due to
>    stale db0-3 contents.
> 
> - single stepping is used and vcpu switch occurs between the single step trap
>    and reading of db6 in the guest. db6 contents (single step indicator) are
>    lost in this case.

Not exactly, at least not looking at how things are supposed to work:
__restore_debug_registers() gets called when
- context switching in (vmx_restore_dr())
- injecting TRAP_debug
- any DRn is being accessed

So when your guest writes DR[0-3], debug registers should get
restored (from their original zero values) and the guest would be
permitted direct access to the hardware registers. Once context
switched out, vmx_save_dr() ought to be saving the values
(irrespective of DR7 contents, only depending upon
v->arch.hvm_vcpu.flag_dr_dirty). During the next context
switch in, they would get restored immediately if DR7 already has
some breakpoint enabled, or again during first DR access if not.

Hence I think that in general this ought to work. The question is
whether one of the more modern feature additions broke any of
this. Assuming that your guest isn't doing heavy accesses to the
debug registers, instrumenting the hypervisor-side code to track
the saving/restoring shouldn't cause too much log output (as
long as you suppress output from the context switch path when
no relevant state has changed).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 10:41:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 10:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEGCa-00010P-De; Fri, 14 Feb 2014 10:41:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEGCZ-00010K-Pt
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 10:41:00 +0000
Received: from [85.158.137.68:5604] by server-11.bemta-3.messagelabs.com id
	79/57-04255-AB2FDF25; Fri, 14 Feb 2014 10:40:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392374458!547161!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10492 invoked from network); 14 Feb 2014 10:40:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 10:40:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 10:40:57 +0000
Message-Id: <52FE00C8020000780011C649@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 10:40:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Juergen Gross" <juergen.gross@ts.fujitsu.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
In-Reply-To: <52FDE2ED.4030008@ts.fujitsu.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.02.14 at 10:33, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> Debug registers are restored on vcpu switch only if db7 has any debug events
> activated. This leads to problems in the following cases:
> 
> - db0-3 are changed by the guest before events are set "active" in db7. In case
>    of a vcpu switch between setting db0-3 and db7, db0-3 are lost. BTW: setting
>    db7 before db0-3 is no option, as this could trigger debug interrupts due to
>    stale db0-3 contents.
> 
> - single stepping is used and vcpu switch occurs between the single step trap
>    and reading of db6 in the guest. db6 contents (single step indicator) are
>    lost in this case.

Not exactly, at least not looking at how things are supposed to work:
__restore_debug_registers() gets called when
- context switching in (vmx_restore_dr())
- injecting TRAP_debug
- any DRn is being accessed

So when your guest writes DR[0-3], debug registers should get
restored (from their original zero values) and the guest would be
permitted direct access to the hardware registers. Once context
switched out, vmx_save_dr() ought to be saving the values
(irrespective of DR7 contents, only depending upon
v->arch.hvm_vcpu.flag_dr_dirty). During the next context
switch in, they would get restored immediately if DR7 already has
some breakpoint enabled, or again during first DR access if not.
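The lazy scheme described above can be illustrated with a small, self-contained model (a sketch only: the struct and function names below merely mirror the real Xen code such as vmx_save_dr() and __restore_debug_registers() in xen/arch/x86/hvm/vmx/vmx.c, and the types are hypothetical):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical, simplified model of Xen's lazy debug-register handling. */
struct vcpu_model {
    unsigned long dr[4];      /* guest DR0-DR3 shadow */
    unsigned long dr6, dr7;
    int flag_dr_dirty;        /* set once the guest touches any DRn */
    int hw_loaded;            /* model of "DRs currently live in hardware" */
};

/* Called on first DR access, on #DB injection, and on context switch in
 * when DR7 already has breakpoints enabled. */
static void restore_debug_registers(struct vcpu_model *v)
{
    if (v->hw_loaded)
        return;
    v->flag_dr_dirty = 1;   /* from now on, save on context switch out */
    v->hw_loaded = 1;       /* models loading DR0-DR7 into hardware */
}

/* Called on context switch out: saves once dirty, irrespective of DR7. */
static void save_dr(struct vcpu_model *v)
{
    if (!v->flag_dr_dirty)
        return;
    v->hw_loaded = 0;       /* models reading DR0-DR7 back from hardware */
}

/* Guest write to DRn traps, forcing a restore first, so values survive. */
static void guest_write_dr(struct vcpu_model *v, int n, unsigned long val)
{
    restore_debug_registers(v);
    v->dr[n] = val;
}
```

The point of the model is that once flag_dr_dirty is set, a context switch saves the registers regardless of DR7 contents, so a DR0-3 write before DR7 is armed should not be lost.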

Hence I think that in general this ought to work. The question is
whether one of the more recent feature additions broke any of
this. Assuming that your guest isn't doing heavy accesses to the
debug registers, instrumenting the hypervisor-side code to track
the saving/restoring shouldn't cause too much log output (as
long as you suppress output from the context switch path when
no relevant state changed).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHID-0001PZ-RC; Fri, 14 Feb 2014 11:50:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIC-0001PT-GX
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:52 +0000
Received: from [85.158.139.211:30212] by server-4.bemta-5.messagelabs.com id
	A3/28-08092-B130EF25; Fri, 14 Feb 2014 11:50:51 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392378649!3944970!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11831 invoked from network); 14 Feb 2014 11:50:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102528601"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:48 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHI7-0001S0-96;
	Fri, 14 Feb 2014 11:50:47 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI5-0001bS-Pg; Fri, 14 Feb 2014 11:50:45 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:19 +0000
Message-ID: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   structures as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and the code to connect
   these up.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h.

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability not only to negotiate the hash algorithm, but also to let
the frontend supply parameters for it.
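In outline, the L4-hash selection described above could look like the following (a hedged sketch only, not the driver code from this series; an in-kernel implementation would pull the 4-tuple out of the skb via the flow dissector rather than this hypothetical struct):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 4-tuple describing a TCP flow. */
struct l4_flow {
    uint32_t saddr, daddr;   /* IP source/destination address */
    uint16_t sport, dport;   /* TCP source/destination port */
};

/* Toy L4 hash: mix the 4-tuple and reduce modulo the queue count,
 * so every packet of a given flow always maps to the same queue. */
static uint16_t select_tx_queue(const struct l4_flow *f, uint16_t num_queues)
{
    uint32_t h = f->saddr ^ f->daddr;
    h ^= ((uint32_t)f->sport << 16) | f->dport;
    h ^= h >> 16;            /* fold high bits down */
    return (uint16_t)(h % num_queues);
}
```

Keeping a flow pinned to one queue preserves in-order TCP delivery per stream while spreading distinct streams across the available queues.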

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to the requested number of queues minus one. If only one queue
is requested, the driver falls back to the flat structure, writing the
ring references and event channels at the same level as the other vif
information.
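The resulting key layout can be sketched with a small helper (illustrative only; the authoritative description is the netif.h documentation added in patch 5):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build the XenStore path for a per-queue key, relative to the vif
 * directory. With more than one queue the keys live under queue-N/;
 * with a single queue the flat layout is used, so the key sits at the
 * vif's top level. Hypothetical helper, not part of the actual driver. */
static void queue_key_path(char *buf, size_t len, unsigned int num_queues,
                           unsigned int queue_id, const char *key)
{
    if (num_queues > 1)
        snprintf(buf, len, "queue-%u/%s", queue_id, key);
    else
        snprintf(buf, len, "%s", key);
}
```

So with four queues the second queue's TX ring reference would live at queue-2/tx-ring-ref, while a single-queue vif keeps plain tx-ring-ref, matching what older frontends and backends expect.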

V2:
- Rebase onto net-next
- Change queue->number to queue->id
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu
- Fix up formatting and style issues
- XenStore protocol changes documented in netif.h
- Default max. number of queues to num_online_cpus()
- Check requested number of queues does not exceed maximum

--
Andrew J. Bennieston


From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHIO-0001SG-N9; Fri, 14 Feb 2014 11:51:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIM-0001Ru-FN
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:51:03 +0000
Received: from [85.158.139.211:30907] by server-8.bemta-5.messagelabs.com id
	97/BD-05298-5230EF25; Fri, 14 Feb 2014 11:51:01 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392378649!3944970!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12240 invoked from network); 14 Feb 2014 11:50:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102528617"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:52 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:52 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHIB-0001SF-92;
	Fri, 14 Feb 2014 11:50:51 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI9-0001bf-Fb; Fri, 14 Feb 2014 11:50:49 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:22 +0000
Message-ID: <1392378624-6123-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  944 ++++++++++++++++++++++++++------------------
 1 file changed, 551 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f9daa9e..d4239b9 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1261,24 +1311,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = alloc_percpu(struct netfront_stats);
@@ -1291,38 +1332,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 		u64_stats_init(&xen_nf_stats->syncp);
 	}
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1348,10 +1359,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1409,30 +1416,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1473,100 +1485,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1575,13 +1573,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1589,21 +1587,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1614,17 +1612,77 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1632,13 +1690,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1646,34 +1763,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1733,6 +1850,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1745,6 +1865,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1765,36 +1887,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1803,14 +1929,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1883,7 +2012,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)((void *)np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1915,7 +2044,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1926,6 +2058,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1939,16 +2073,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1958,7 +2095,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1969,6 +2109,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1982,16 +2124,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -2001,7 +2146,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
 	xennet_disconnect_backend(info);
 
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
+
 	xennet_sysfs_delif(info->netdev);
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
-
 	free_percpu(info->stats);
 
 	free_netdev(info->netdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHIF-0001Pz-Oi; Fri, 14 Feb 2014 11:50:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHID-0001PY-Ip
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:53 +0000
Received: from [85.158.139.211:35665] by server-9.bemta-5.messagelabs.com id
	34/D0-11237-C130EF25; Fri, 14 Feb 2014 11:50:52 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392378649!3944970!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11959 invoked from network); 14 Feb 2014 11:50:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102528608"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:50 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHI9-0001SC-Ue;
	Fri, 14 Feb 2014 11:50:49 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI8-0001ba-CB; Fri, 14 Feb 2014 11:50:48 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:21 +0000
Message-ID: <1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 2/5] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the values written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.
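
The queue-count negotiation this adds can be sketched as plain C (a hypothetical `negotiated_num_queues()` helper; the real logic lives in `connect()` in xenbus.c, where an over-large request is reported via `xenbus_dev_fatal()` rather than a return value):

```c
#include <assert.h>

/* Sketch of the backend's queue-count negotiation: the backend has
 * advertised xenvif_max_queues in "multi-queue-max-queues"; read_ok
 * mimics whether xenbus_scanf() found the frontend's
 * "multi-queue-num-queues" key. An absent key means a legacy
 * single-queue frontend; a request above the advertised maximum is
 * rejected (0 here stands in for the xenbus_dev_fatal() path).
 */
static unsigned int negotiated_num_queues(int read_ok,
					  unsigned int requested,
					  unsigned int backend_max)
{
	if (!read_ok)
		return 1;	/* frontend lacks multi-queue support */
	if (requested > backend_max)
		return 0;	/* buggy or malicious guest */
	return requested;
}
```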

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    6 +++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 80 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 2550867..8180929 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 4cde112..4dc092c 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+						  xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 46b2f5b..aeb5ffa 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,9 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1588,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..3f97f6f 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+						 "guest requested %u queues, exceeding the maximum of %u.",
+						 requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1)
+		xspath = (char *)dev->otherend;
+	else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHIK-0001RT-7x; Fri, 14 Feb 2014 11:51:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHII-0001QT-Vi
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:59 +0000
Received: from [193.109.254.147:12624] by server-16.bemta-14.messagelabs.com
	id DD/BC-21945-2230EF25; Fri, 14 Feb 2014 11:50:58 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392378654!656165!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4019 invoked from network); 14 Feb 2014 11:50:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102528622"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:53 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHIC-0001SI-Cy;
	Fri, 14 Feb 2014 11:50:52 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHIA-0001bk-NF; Fri, 14 Feb 2014 11:50:50 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:23 +0000
Message-ID: <1392378624-6123-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.
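
The hash-to-queue mapping used here scales the 32-bit hash into the queue range with a multiply and shift rather than a modulo; a standalone sketch of the same arithmetic, using stdint types in place of the kernel's u16/u32/u64:

```c
#include <assert.h>
#include <stdint.h>

/* Treat hash / 2^32 as a fraction in [0, 1); then
 * (hash * num_queues) >> 32 yields an index in [0, num_queues),
 * spreading flow hashes evenly across queues without a division.
 */
static uint16_t select_queue(uint32_t hash, unsigned int num_queues)
{
	if (num_queues == 1)
		return 0;	/* only one queue to choose from */
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```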

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  176 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 138 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d4239b9..d584fa4 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,10 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +569,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+							   void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1)
+		queue_idx = 0;
+	else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1327,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1683,6 +1699,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * flat layout for a single queue, or under a queue-N subkey for
+	 * multiple queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 11; /* enough for "/queue-NNN" */
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels; taking into account both shared
+	 * and split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1692,10 +1790,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1711,12 +1820,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1754,49 +1864,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1846,8 +1942,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2241,6 +2338,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHII-0001QN-5f; Fri, 14 Feb 2014 11:50:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIG-0001Pw-5m
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:56 +0000
Received: from [85.158.137.68:12411] by server-7.bemta-3.messagelabs.com id
	EB/E3-13775-F130EF25; Fri, 14 Feb 2014 11:50:55 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392378651!1877463!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7149 invoked from network); 14 Feb 2014 11:50:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100756447"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:49 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHI8-0001S9-Rh;
	Fri, 14 Feb 2014 11:50:48 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI6-0001bW-V0; Fri, 14 Feb 2014 11:50:47 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:20 +0000
Message-ID: <1392378624-6123-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also adds loops over queues where appropriate, even though only one is
configured at this point, and uses alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and uses
skb_get_hash() to compute the queue index otherwise.
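
The per-queue split described above follows a common pattern: hoist the queue-specific fields into their own struct and keep an array of them on the interface. A minimal sketch with simplified names (the `demo_*` types are stand-ins for `xenvif`/`xenvif_queue`):

```c
#include <assert.h>
#include <stddef.h>

/* Per-queue state, as in struct xenvif_queue (fields simplified). */
struct demo_queue {
	unsigned int id;	/* 0-based queue index */
	unsigned long tx_bytes;	/* a per-queue statistic */
};

/* The interface keeps only shared state plus the queue array,
 * mirroring struct xenvif after this patch.
 */
struct demo_vif {
	unsigned int num_queues;
	struct demo_queue *queues;
};

/* Interface-wide statistics are now aggregated over queues. */
static unsigned long demo_total_tx(const struct demo_vif *vif)
{
	unsigned long total = 0;
	for (unsigned int i = 0; i < vif->num_queues; ++i)
		total += vif->queues[i].tx_bytes;
	return total;
}
```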

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   81 ++++--
 drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
 drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 593 insertions(+), 417 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..2550867 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,36 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +159,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +200,12 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..4cde112 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+							   void *accel_priv)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -253,7 +328,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)vif + xenvif_stats[i].offset);
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +361,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +371,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +383,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +402,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +411,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +427,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +550,52 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
+
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..46b2f5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a miniumum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		atomic_inc(&vif->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHII-0001QX-Jf; Fri, 14 Feb 2014 11:50:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIH-0001QA-2P
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:57 +0000
Received: from [85.158.137.68:12508] by server-2.bemta-3.messagelabs.com id
	BE/E3-06531-0230EF25; Fri, 14 Feb 2014 11:50:56 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392378651!1877463!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7435 invoked from network); 14 Feb 2014 11:50:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100756454"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:54 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHID-0001SL-QP;
	Fri, 14 Feb 2014 11:50:53 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHIB-0001bp-SI; Fri, 14 Feb 2014 11:50:51 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:24 +0000
Message-ID: <1392378624-6123-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.
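
The key layout this patch documents can be sketched as follows. This is an illustrative XenStore listing, not part of the patch itself; domain IDs, device numbers, and grant/port values are placeholders:

```
# Frontend requesting a single queue (layout unchanged from before):
/local/domain/1/device/vif/0/tx-ring-ref   = "<gref>"
/local/domain/1/device/vif/0/rx-ring-ref   = "<gref>"
/local/domain/1/device/vif/0/event-channel = "<port>"

# Frontend requesting two queues: the top-level ring-ref and
# event-channel keys are omitted, and per-queue sub-keys appear instead
# (split event channels are required when using multiple queues):
/local/domain/1/device/vif/0/multi-queue-num-queues   = "2"
/local/domain/1/device/vif/0/queue-0/tx-ring-ref      = "<gref>"
/local/domain/1/device/vif/0/queue-0/rx-ring-ref      = "<gref>"
/local/domain/1/device/vif/0/queue-0/event-channel-tx = "<port>"
/local/domain/1/device/vif/0/queue-0/event-channel-rx = "<port>"
/local/domain/1/device/vif/0/queue-1/tx-ring-ref      = "<gref>"
/local/domain/1/device/vif/0/queue-1/rx-ring-ref      = "<gref>"
/local/domain/1/device/vif/0/queue-1/event-channel-tx = "<port>"
/local/domain/1/device/vif/0/queue-1/event-channel-rx = "<port>"
```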

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..8868c51 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel and ring-ref keys; instead, they must write those keys
+ * under sub-keys named "queue-N", where N is the integer ID of the queue
+ * to which the keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHII-0001QN-5f; Fri, 14 Feb 2014 11:50:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIG-0001Pw-5m
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:56 +0000
Received: from [85.158.137.68:12411] by server-7.bemta-3.messagelabs.com id
	EB/E3-13775-F130EF25; Fri, 14 Feb 2014 11:50:55 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392378651!1877463!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7149 invoked from network); 14 Feb 2014 11:50:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100756447"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:49 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHI8-0001S9-Rh;
	Fri, 14 Feb 2014 11:50:48 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI6-0001bW-V0; Fri, 14 Feb 2014 11:50:47 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:20 +0000
Message-ID: <1392378624-6123-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one queue
is configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which returns 0 when there is a single queue and
otherwise uses skb_get_hash() to compute the queue index.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   81 ++++--
 drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
 drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 593 insertions(+), 417 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..2550867 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,36 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +159,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +200,12 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..4cde112 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+							   void *accel_priv)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -253,7 +328,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)vif + xenvif_stats[i].offset);
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +361,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +371,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +383,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +402,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +411,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +427,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +550,52 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	unregister_netdev(vif->dev);
+
+	/* Free the array of queues. The netdev has been unregistered,
+	 * so no data path can still be using them.
+	 */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..46b2f5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * the previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a minimum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		atomic_inc(&vif->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHIF-0001Pz-Oi; Fri, 14 Feb 2014 11:50:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHID-0001PY-Ip
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:53 +0000
Received: from [85.158.139.211:35665] by server-9.bemta-5.messagelabs.com id
	34/D0-11237-C130EF25; Fri, 14 Feb 2014 11:50:52 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392378649!3944970!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11959 invoked from network); 14 Feb 2014 11:50:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102528608"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:50 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHI9-0001SC-Ue;
	Fri, 14 Feb 2014 11:50:49 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI8-0001ba-CB; Fri, 14 Feb 2014 11:50:48 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:21 +0000
Message-ID: <1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 2/5] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

The backend writes the maximum number of queues it supports into
XenStore, and reads the value written by the frontend to determine how
many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    6 +++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 80 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 2550867..8180929 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 4cde112..4dc092c 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+						  xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 46b2f5b..aeb5ffa 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,9 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1588,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..3f97f6f 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+						 "guest requested %u queues, exceeding the maximum of %u.",
+						 requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1)
+		xspath = (char *)dev->otherend;
+	else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:22 +0000
Message-ID: <1392378624-6123-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
List-Id: Xen developer discussion <xen-devel.lists.xen.org>

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  944 ++++++++++++++++++++++++++------------------
 1 file changed, 551 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f9daa9e..d4239b9 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend sees requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1261,24 +1311,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = alloc_percpu(struct netfront_stats);
@@ -1291,38 +1332,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 		u64_stats_init(&xen_nf_stats->syncp);
 	}
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1348,10 +1359,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1409,30 +1416,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1473,100 +1485,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1575,13 +1573,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1589,21 +1587,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1614,17 +1612,77 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1632,13 +1690,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1646,34 +1763,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1733,6 +1850,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1745,6 +1865,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1765,36 +1887,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1803,14 +1929,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1883,7 +2012,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1915,7 +2044,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1926,6 +2058,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1939,16 +2073,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1958,7 +2095,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1969,6 +2109,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1982,16 +2124,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -2001,7 +2146,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
 	xennet_disconnect_backend(info);
 
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
+
 	xennet_sysfs_delif(info->netdev);
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
-
 	free_percpu(info->stats);
 
 	free_netdev(info->netdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:23 +0000
Message-ID: <1392378624-6123-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.
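For illustration, the resulting XenStore layouts would look roughly like the
following sketch (the `<frontend-path>` prefix and placeholder values are
assumptions here; the key names match those written by the patch):

```
# Single queue (flat layout, as before):
<frontend-path>/tx-ring-ref       = "<ref>"
<frontend-path>/rx-ring-ref       = "<ref>"
<frontend-path>/event-channel     = "<evtchn>"

# Multiple queues (hierarchical layout, one subtree per queue):
<frontend-path>/queue-0/tx-ring-ref      = "<ref>"
<frontend-path>/queue-0/rx-ring-ref      = "<ref>"
<frontend-path>/queue-0/event-channel-tx = "<evtchn>"
<frontend-path>/queue-0/event-channel-rx = "<evtchn>"
<frontend-path>/queue-1/...
```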

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  176 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 138 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d4239b9..d584fa4 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,10 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +569,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+							   void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1)
+		queue_idx = 0;
+	else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1327,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1683,6 +1699,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * way for a single queue, or under queue sub-keys for multiple
+	 * queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels, taking into account both the shared
+	 * and the split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1692,10 +1790,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1711,12 +1820,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1754,49 +1864,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1846,8 +1942,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2241,6 +2338,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHID-0001PZ-RC; Fri, 14 Feb 2014 11:50:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIC-0001PT-GX
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:52 +0000
Received: from [85.158.139.211:30212] by server-4.bemta-5.messagelabs.com id
	A3/28-08092-B130EF25; Fri, 14 Feb 2014 11:50:51 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392378649!3944970!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11831 invoked from network); 14 Feb 2014 11:50:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102528601"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:48 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHI7-0001S0-96;
	Fri, 14 Feb 2014 11:50:47 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHI5-0001bS-Pg; Fri, 14 Feb 2014 11:50:45 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:19 +0000
Message-ID: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability to negotiate not only the selection of hash algorithm, but
also to allow the frontend to specify parameters for it.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/... where N varies
from 0 to one less than the requested number of queues (inclusive). If
only one queue is requested, it falls back to the flat structure where
the ring references and event channels are written at the same level as
other vif information.

V2:
- Rebase onto net-next
- Change queue->number to queue->id
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu
- Fixup formatting and style issues
- XenStore protocol changes documented in netif.h
- Default max. number of queues to num_online_cpus()
- Check requested number of queues does not exceed maximum

--
Andrew J. Bennieston

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:51:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:51:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHII-0001QX-Jf; Fri, 14 Feb 2014 11:50:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEHIH-0001QA-2P
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 11:50:57 +0000
Received: from [85.158.137.68:12508] by server-2.bemta-3.messagelabs.com id
	BE/E3-06531-0230EF25; Fri, 14 Feb 2014 11:50:56 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392378651!1877463!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7435 invoked from network); 14 Feb 2014 11:50:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:50:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100756454"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 11:50:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 06:50:54 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEHID-0001SL-QP;
	Fri, 14 Feb 2014 11:50:53 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEHIB-0001bp-SI; Fri, 14 Feb 2014 11:50:51 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 11:50:24 +0000
Message-ID: <1392378624-6123-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V2 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..8868c51 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel and ring-ref keys, instead writing them under sub-keys named
+ * "queue-N", where N is the integer ID of the queue to which those
+ * keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 11:52:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 11:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHJT-0001n8-GW; Fri, 14 Feb 2014 11:52:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WEHJS-0001mf-C6
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 11:52:10 +0000
Received: from [85.158.143.35:14859] by server-2.bemta-4.messagelabs.com id
	DB/06-10891-9630EF25; Fri, 14 Feb 2014 11:52:09 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392378728!5673538!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25642 invoked from network); 14 Feb 2014 11:52:08 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 11:52:08 -0000
Received: by mail-we0-f181.google.com with SMTP id w61so8544186wes.40
	for <xen-devel@lists.xen.org>; Fri, 14 Feb 2014 03:52:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=T4FbokRu3l5erVgUoRlB7GJbAr8Vn2dJv9e2ZsSir3U=;
	b=fHcYAlewu1E0SzApommhnPmDYJb7hTpIcOuC3iDD+CLX7OxvrzKBpvRvzhL7nq/Ebm
	yaEmDje0kkGzNU/UwJ7wlqWqC/rbrQHVLZNPuSkyH3cC5m34UR2qpKBE0EeWhZPwx8CJ
	3B8PxhFrHIPeRu15Oq0LOwivEMDYyFPDNRl8QcSMV7gk3shSzInCxNJDqxGqHRTzvWIj
	mGNi7qYo8ryGtyWyaYmzlynI6KWGZeZJJ9oSlZYEUdboP8jIp5okIDc9U+u0Vb6ESaXy
	4ozUrM0fDo3Vx4cj0UH0Dok/dqxc8GVQibHSKiWbziMGYvBdIqUGEL9p6JzaGi/Ng/aN
	4Bpg==
MIME-Version: 1.0
X-Received: by 10.180.19.130 with SMTP id f2mr1911166wie.6.1392378728135; Fri,
	14 Feb 2014 03:52:08 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 14 Feb 2014 03:52:08 -0800 (PST)
In-Reply-To: <52FD034E.2070600@citrix.com>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<52FD034E.2070600@citrix.com>
Date: Fri, 14 Feb 2014 11:52:08 +0000
X-Google-Sender-Auth: kVRpuiDkezbIGA2cB-hFyNlUisM
Message-ID: <CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 13, 2014 at 5:39 PM, Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
> On 13/02/14 17:32, Tim Deegan wrote:
>> Hi,
>>
>> This series implements the most recent idea I was proposing about
>> reworking the RTC PF interrupt injection.
>>
>> Patch 1 switches handling the !PIE case to calculate the right answer
>> for REG_C.PF on demand rather than running the timers.
>> Patch 2 switches back to the old model of having the vpt code control
>> the timer interrupt injection; this is the fix for the w2k3 hang.
>> Patch 3 is just a minor cleanup, and not particularly necessary.
>>
>> N.B. In its current state it DOES NOT WORK.  I got distracted by
>> other things today and didn't get a chance to finish working on it,
>> but I wanted to send it out for feedback on the general approach.
>> If it seems broadly acceptable then either I can pick it up again next
>> week or maybe Andrew can look at fixing it.
>>
>> Cheers,
>>
>> Tim.
>>
>
> I should have time to look at the series tomorrow.

The next question to ask is this:

This is the last big disruptive bug / bugfix on my list.  We're
planning on cutting an RC Monday probably, with a test day Tuesday.
This bug was originally marked as "Not for 4.4".

So our options are:
* Delay the release, waiting for this new series to be ready
* Take the patch Andy posted last week for now, and backport Tim's fix
when it's ready
* Release without this bug being fixed

As a reminder (for those who haven't been following the thread), the
effect of this bug is that w2k3 guests sometimes hang during boot.
I'm not sure exactly how often this is, but from talking to Andy it
seems to be fairly low -- one percent maybe?  The code is very subtle
and any change may risk causing similar hangs in other situations; in
particular we would want to be able to test it pretty well.

At the moment I'm leaning towards not delaying the release for it.
That could either mean checking the patch we have to hand today (so it
can make it into the RC Monday hopefully), or just going without it.

Any thoughts?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 12:03:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 12:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHTs-0002ce-J2; Fri, 14 Feb 2014 12:02:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1WEHTr-0002cZ-2n
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 12:02:55 +0000
Received: from [85.158.139.211:31842] by server-16.bemta-5.messagelabs.com id
	46/36-05060-EE50EF25; Fri, 14 Feb 2014 12:02:54 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392379373!3921566!1
X-Originating-IP: [74.125.82.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 777 invoked from network); 14 Feb 2014 12:02:53 -0000
Received: from mail-wg0-f44.google.com (HELO mail-wg0-f44.google.com)
	(74.125.82.44)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 12:02:53 -0000
Received: by mail-wg0-f44.google.com with SMTP id k14so310611wgh.11
	for <xen-devel@lists.xen.org>; Fri, 14 Feb 2014 04:02:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:cc:subject:in-reply-to:references
	:mime-version:content-type:content-transfer-encoding;
	bh=WSishsdnBzrWxgi6szxJqmwL5ce13VjAzyEOV+h8+vY=;
	b=k6kVHCd1ITvIa4ycgTQlGil4skk3xWvr4AlGCCmbMjKXGqbiEe0Hjdfdv26vchVQHz
	PyaA/G0yuBdoYSxTSt5QuNtOGqQinawkGogdaja8r+8v1BS2Ey0Oym4RnNBICJx0sBW/
	hAvtddeEjsDxVhRG+ZP0RQdrNxOt5fcM1SL35jaqbOGESCdjnNcgyCQJNSYJP7lB2D48
	W/+EfNB4YrLykp3ddHcGxr2xCtZqsMWLsWcdGbodaqoQsKa19SAu3e2SDSo+vjcrg94+
	O6YowUWabi0x7/mceiM7koNuDIBNh2obO5jDCnxYcFKQ3BikmMrLe8dbS/uVmF6uwFHB
	Hvrw==
X-Received: by 10.180.92.169 with SMTP id cn9mr1843180wib.35.1392379373404;
	Fri, 14 Feb 2014 04:02:53 -0800 (PST)
Received: from localhost (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id
	bm8sm12199423wjc.12.2014.02.14.04.02.48 for <multiple recipients>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 04:02:52 -0800 (PST)
Date: Fri, 14 Feb 2014 12:02:38 +0000
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <6010385428.20140214120238@gmail.com>
To: xen-devel@lists.xen.org
In-Reply-To: <295276356.20140213222507@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace> 
	<295276356.20140213222507@gmail.com>
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>, Don Slutz <dslutz@verizon.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Simon,

Thanks everyone and especially Ian! It was the hyperthreading that was
causing the problem.

Here's my current configuration:

# xl cpupool-list -c
Name               CPU list
Pool-0             0,1
pv499              2,3
# xl vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    0   r--      16.6  0
Domain-0                             0     1    1   -b-       7.3  1
win7x64                              1     0    1   -b-      82.5  all
win7x64                              1     1    0   -b-      18.6  all
pv499                                2     0    3   r--     226.1  3

I have pinned dom0, as I wasn't sure whether it belongs to Pool-0 (I
assume it does; can you confirm, please?).
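
For reference, dom0 is created in Pool-0 and stays there by default. A
minimal sketch of how a layout like the one above could be set up (pool
and domain names follow the listing; exact `xl` syntax may vary between
Xen versions, so treat this as illustrative rather than authoritative):

```shell
# Free pcpus 2 and 3 from Pool-0, create a pool on them, and move the guest in
xl cpupool-cpu-remove Pool-0 2
xl cpupool-cpu-remove Pool-0 3
xl cpupool-create name=\"pv499\" cpus=\"2,3\"
xl cpupool-migrate pv499 pv499

# Pin dom0's vcpus inside Pool-0 (dom0 remains in Pool-0)
xl vcpu-pin Domain-0 0 0
xl vcpu-pin Domain-0 1 1
```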

Dario, if you are going to look at the

Looking at my timings with this configuration, I am seeing about a 1%
variation (945 milliseconds +/- 5). I think this can be attributed
to RAM contention; at the end of the day, all cores are competing for
the same bus.
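
For what it's worth, the quoted numbers work out as follows (a quick
back-of-the-envelope check, not from the original measurement):

```python
# Jitter quoted above: 945 ms +/- 5 ms
mean_ms = 945.0
jitter_ms = 5.0

# One-sided variation relative to the mean
one_sided = jitter_ms / mean_ms          # ~0.53%
# Peak-to-peak spread (min to max)
peak_to_peak = 2 * jitter_ms / mean_ms   # ~1.06%

print(f"one-sided: {one_sided:.2%}, peak-to-peak: {peak_to_peak:.2%}")
```

So "1%" is roughly the peak-to-peak spread; the one-sided deviation is
about half that.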

-- 
Best regards,
 Simon                            mailto:furryfuttock@gmail.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 12:26:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 12:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHqf-0002tH-AQ; Fri, 14 Feb 2014 12:26:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WEHqd-0002tC-UK
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 12:26:28 +0000
Received: from [85.158.137.68:42413] by server-12.bemta-3.messagelabs.com id
	65/9A-01674-4A90EF25; Fri, 14 Feb 2014 12:18:44 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392380322!324205!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26689 invoked from network); 14 Feb 2014 12:18:43 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 12:18:43 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=fGS9Iq/2YLqJ4kxtO8y6yUZx+KqoqYr5Bt7EXiJgMaq8Cauvhs8m0QjH
	0R9qxUL+Vz3q3J7a5ATuRnH8OAmrsQJCTIvChAQ7UCto0ImSun15LZE/9
	Mccgc3p4NS3sLMIh71W/0qrrhWRFD4v6NVCtpM/SXD1AXWPL0iW4kpLl7
	CPelVp1kBVTzL+mYiQY5RiRqh9ZKalqkPd11NK5nPUXhI8IzAFlR68MZB
	ACwcKbovNsfgheU1hyArxWVSdDFWw;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392380323; x=1423916323;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=lOTck6bytRxiZEcPMByzhhoEmDFfUP1foGvHMmAV6S4=;
	b=YJUcIZHL0KzDQfTsqeGd7joQs2zP0iCJrTg4GCBEfjHLk0uMdivy9Z9t
	9dXcMMHLNVkd/YAyUw8OaYURBk7TUSvRYubdFIo5v3Z53fSLi4FcmXefK
	QdqcEsxWvGfzAFx4uuZJkN9KcTTZ13MJYzgXitzBpsCqdNxoHzfzvx0zN
	b/8FQ/HUTjSPd75zP7/zRwPatJqoSOzSkfIj2H6ES7X20MZUN3x4tyB2t
	vd9fmZZngyoQkDj3/YBc/XCVBgsFV;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.95,844,1384297200"; d="scan'208";a="185621091"
Received: from unknown (HELO abgdgate60u.abg.fsc.net) ([172.25.138.90])
	by dgate10u.abg.fsc.net with ESMTP; 14 Feb 2014 13:18:42 +0100
X-IronPort-AV: E=Sophos;i="4.95,844,1384297200"; d="scan'208";a="80133907"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdgate60u.abg.fsc.net with ESMTP; 14 Feb 2014 13:18:42 +0100
Message-ID: <52FE09A2.4000909@ts.fujitsu.com>
Date: Fri, 14 Feb 2014 13:18:42 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
In-Reply-To: <52FE00C8020000780011C649@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14.02.2014 11:40, Jan Beulich wrote:
>>>> On 14.02.14 at 10:33, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> Debug registers are restored on vcpu switch only if db7 has any debug events
>> activated. This leads to problems in the following cases:
>>
>> - db0-3 are changed by the guest before events are set "active" in db7. In case
>>     of a vcpu switch between setting db0-3 and db7, db0-3 are lost. BTW: setting
>>     db7 before db0-3 is no option, as this could trigger debug interrupts due to
>>     stale db0-3 contents.
>>
>> - single stepping is used and vcpu switch occurs between the single step trap
>>     and reading of db6 in the guest. db6 contents (single step indicator) are
>>     lost in this case.
>
> Not exactly, at least not looking at how things are supposed to work:
> __restore_debug_registers() gets called when
> - context switching in (vmx_restore_dr())
> - injecting TRAP_debug

Is this the case when the guest itself uses single stepping? Initially the
debug trap shouldn't cause a VMEXIT, I think. And I'm not sure the hypervisor
will see a guest setting TF via an IRET.

I _have_ seen a debug trap in the guest after single stepping without db6
having the single step indicator set...

> - any DRn is being accessed
>
> So when your guest writes DR[0-3], debug registers should get
> restored (from their original zero values) and the guest would be
> permitted direct access to the hardware registers. Once context
> switched out, vmx_save_dr() ought to be saving the values
> (irrespective of DR7 contents, only depending upon
> v->arch.hvm_vcpu.flag_dr_dirty). During the next context
> switch in, they would get restored immediately if DR7 already has
> some breakpoint enabled, or again during first DR access if not.

Okay, I'll check that. A little test routine in my domU should be able to
verify that debug registers won't change under its feet when no events
are activated in db7.
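
A rough model of the lazy save/restore scheme Jan describes (the field
name `flag_dr_dirty` comes from his mail; everything else here is a
simplified, hypothetical sketch, not the actual Xen code):

```c
#include <stdbool.h>
#include <string.h>

/* Simplified per-vcpu debug-register state (hypothetical, not Xen's). */
struct vcpu_dr {
    unsigned long dr[4];   /* saved guest DR0-DR3 */
    unsigned long dr7;     /* guest DR7 */
    bool dirty;            /* analogue of v->arch.hvm_vcpu.flag_dr_dirty */
};

static unsigned long hw_dr[4];  /* stands in for the physical DR0-DR3 */

/* Called on first guest DR access and when injecting TRAP_debug. */
static void restore_debug_registers(struct vcpu_dr *v)
{
    memcpy(hw_dr, v->dr, sizeof(hw_dr));
    v->dirty = true;            /* guest now owns the hardware registers */
}

/* Context switch out: save iff the guest touched the registers,
 * irrespective of DR7 contents. */
static void save_dr(struct vcpu_dr *v)
{
    if (!v->dirty)
        return;
    memcpy(v->dr, hw_dr, sizeof(hw_dr));
    v->dirty = false;
}

/* Context switch in: restore eagerly only if DR7 has breakpoints
 * enabled; otherwise defer to the first DR access. */
static void restore_dr(struct vcpu_dr *v)
{
    if (v->dr7 & 0xff)          /* any L0-L3/G0-G3 enable bit set */
        restore_debug_registers(v);
}
```

On this model, values written to DR0-3 before DR7 is armed survive a
context switch, because saving depends only on the dirty flag.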


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 12:31:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 12:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEHvU-000321-6q; Fri, 14 Feb 2014 12:31:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WEHvT-00031r-4j
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 12:31:27 +0000
Received: from [193.109.254.147:51743] by server-5.bemta-14.messagelabs.com id
	95/03-16688-E9C0EF25; Fri, 14 Feb 2014 12:31:26 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392381085!368278!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7254 invoked from network); 14 Feb 2014 12:31:25 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 12:31:25 -0000
Received: by mail-wg0-f42.google.com with SMTP id k14so314997wgh.5
	for <xen-devel@lists.xensource.com>;
	Fri, 14 Feb 2014 04:31:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=user-agent:in-reply-to:references:mime-version:content-type
	:content-transfer-encoding:subject:from:date:to:cc:message-id;
	bh=/0nBmGVFQ7azJRJa7cLWnNwz290dwwFW3J6avltg110=;
	b=fUzgZE4AvaT5842JC6GPHTZLN/VKgA74kWAos/81cVyMXXzuvpwjjlpmGe6ufxPZ75
	7aLw/eGyT1JxfY7c4L6z6RnGBxEil6Ub+XRKNh6aAAGSDTUUaz9QlgyoYIUnBfZnaXEW
	OhAeJCfDceiW4bQM9kGu95JArVLNfz+KjRcUoSHBmios05hDGXdJTjM7IXFyVSDKQKvk
	aL5eED1wj2TitTbJ8U4LNUQNjL2XvHXItgRMyigkNYLPC9gETmHe0/lUGs2CMC6xoQc6
	3XacHSPbs09/bv0GNOQ4K3sfspmjDc15W3tlwNzJUQ5WeHTBPuYy0acI4OmRft4fZALG
	YCpw==
X-Received: by 10.180.102.97 with SMTP id fn1mr2031346wib.15.1392381085147;
	Fri, 14 Feb 2014 04:31:25 -0800 (PST)
Received: from [10.63.183.33] (199.59.103.87.rev.vodafone.pt. [87.103.59.199])
	by mx.google.com with ESMTPSA id f3sm4117674wiv.2.2014.02.14.04.31.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 04:31:24 -0800 (PST)
User-Agent: K-9 Mail for Android
In-Reply-To: <52FDDE44.3060001@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
MIME-Version: 1.0
From: "Mike C." <miguelmclara@gmail.com>
Date: Fri, 14 Feb 2014 12:31:15 +0000
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Message-ID: <6bee9f6a-57a0-42d1-bbd4-829d6f961c5a@email.android.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
	Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8467566205573937368=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8467566205573937368==
Content-Type: multipart/alternative; boundary="----17XILTU75KRRRBHIJJNK9UWF57HZT9"
Content-Transfer-Encoding: 8bit

------17XILTU75KRRRBHIJJNK9UWF57HZT9
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
 charset=UTF-8


I had version 8.3 only, from apt-get; I wasn't sure if only the tools changed, so I decided to build the module too.

The panic only happens on service start, so it might be something with the config; there are probably changes from 8.3 to 9. But loading the module itself doesn't cause any issue.

I'll post my config as soon as I'm at the laptop.

Sadly I can't seem to get the Supermicro to work with SOL, only iKVM, but I think I can record the console output using iKVM; I'll try this later today.

Thanks


On February 14, 2014 9:13:40 AM GMT, "Roger Pau Monné" <roger.pau@citrix.com> wrote:
>On 14/02/14 03:09, Miguel Clara wrote:
>> After compiling with the patch and rebuilding/installing the module,
>I
>> reboot, I get a panic now when drbd starts.
>
>There was no need to rebuild the module, the patch only modified the
>block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13,
>everything seemed to be fine (no kernel panic of course).
>
>Since the patch didn't modify anything in the kernel module itself I
>find it unlikely to cause a kernel panic, probably there's some kind of
>problem with your kernel/module.
>
>> That was all I could get from the JAVA supermicro  kvm console!
>
>Without a proper serial console log it's impossible to tell what's
>going
>on (at least to me).
>
>Roger.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
------17XILTU75KRRRBHIJJNK9UWF57HZT9--



--===============8467566205573937368==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8467566205573937368==--



	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
	Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8467566205573937368=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8467566205573937368==
Content-Type: multipart/alternative; boundary="----17XILTU75KRRRBHIJJNK9UWF57HZT9"
Content-Transfer-Encoding: 8bit

------17XILTU75KRRRBHIJJNK9UWF57HZT9
Content-Transfer-Encoding: 8bit
Content-Type: text/plain;
 charset=UTF-8


I had only version 8.3, from apt-get; I wasn't sure whether only the tools had changed, so I decided to build the module too.

The panic only happens on service start, so it might be something with the config; there are probably changes from 8.3 to 9. But loading the module itself doesn't cause any issue.

I'll post my config as soon as I'm at the laptop.

Sadly I can't seem to get the Supermicro to work with SOL, only iKVM, but I think I can record the console output using iKVM; I'll try that later today.

Thanks


On February 14, 2014 9:13:40 AM GMT, "Roger Pau Monné" <roger.pau@citrix.com> wrote:
>On 14/02/14 03:09, Miguel Clara wrote:
>> After compiling with the patch and rebuilding/installing the module,
>I
>> reboot, I get a panic now when drbd starts.
>
>There was no need to rebuild the module; the patch only modified the
>block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13, and
>everything seemed to be fine (no kernel panic, of course).
>
>Since the patch didn't modify anything in the kernel module itself I
>find it unlikely to cause a kernel panic; probably there's some kind of
>problem with your kernel/module.
>
>> That was all I could get from the JAVA supermicro  kvm console!
>
>Without a proper serial console log it's impossible to tell what's
>going
>on (at least to me).
>
>Roger.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
------17XILTU75KRRRBHIJJNK9UWF57HZT9--



--===============8467566205573937368==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8467566205573937368==--



From xen-devel-bounces@lists.xen.org Fri Feb 14 13:02:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEIPJ-0003JZ-IS; Fri, 14 Feb 2014 13:02:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEIPH-0003JU-H5
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 13:02:15 +0000
Received: from [85.158.137.68:27573] by server-8.bemta-3.messagelabs.com id
	E3/02-16039-6D31EF25; Fri, 14 Feb 2014 13:02:14 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392382933!1902980!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20828 invoked from network); 14 Feb 2014 13:02:14 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 13:02:14 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 13:02:13 +0000
Message-Id: <52FE21E4020000780011C6F5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 13:02:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Juergen Gross" <juergen.gross@ts.fujitsu.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
In-Reply-To: <52FE09A2.4000909@ts.fujitsu.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> On 14.02.2014 11:40, Jan Beulich wrote:
>>>>> On 14.02.14 at 10:33, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>> Debug registers are restored on vcpu switch only if db7 has any debug events
>>> activated. This leads to problems in the following cases:
>>>
>>> - db0-3 are changed by the guest before events are set "active" in db7.
>>>   In case of a vcpu switch between setting db0-3 and db7, db0-3 are lost.
>>>   BTW: setting db7 before db0-3 is no option, as this could trigger debug
>>>   interrupts due to stale db0-3 contents.
>>>
>>> - single stepping is used and vcpu switch occurs between the single step
>>>   trap and reading of db6 in the guest. db6 contents (single step
>>>   indicator) are lost in this case.
>>
>> Not exactly, at least not looking at how things are supposed to work:
>> __restore_debug_registers() gets called when
>> - context switching in (vmx_restore_dr())
>> - injecting TRAP_debug
> 
> Is this the case when the guest itself uses single stepping? Initially the
> debug trap shouldn't cause a VMEXIT, I think.

That looks like a bug, indeed - it's missing from the initially set
exception_bitmap. Could you check whether adding this in
construct_vmcs() addresses that part of the issue? (A proper fix
would likely include further adjustments to the setting of this flag,
e.g. clearing it alongside clearing the DR intercept.) But then
again all of this already depends on cpu_has_monitor_trap_flag -
if that's set on your system, maybe you could try suppressing its
detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
the optional feature set in vmx_init_vmcs_config())?

> And I'm not sure the 
> hypervisor will see a guest setting TF via an IRET.

It shouldn't need to know of this.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:19:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEIfY-0003VM-FJ; Fri, 14 Feb 2014 13:19:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEIfX-0003VH-6Y
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 13:19:03 +0000
Received: from [85.158.139.211:64419] by server-1.bemta-5.messagelabs.com id
	42/AF-12859-6C71EF25; Fri, 14 Feb 2014 13:19:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392383940!3950045!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26838 invoked from network); 14 Feb 2014 13:19:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 13:19:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102549289"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 13:19:00 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 08:18:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WEIfT-0007fP-7t;
	Fri, 14 Feb 2014 13:18:59 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WEIfR-0005q6-Jh;
	Fri, 14 Feb 2014 13:18:57 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21246.6080.514481.458181@mariner.uk.xensource.com>
Date: Fri, 14 Feb 2014 13:18:56 +0000
To: <xen-devel@lists.xenproject.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] RC4 now planned for Monday
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We didn't get the test push that would have been ideal in the run
which reported this morning, and the retry is currently scheduled to
finish around 20:00 UTC tonight.

We're therefore missing in master:
  pvh: Fix regression due to assumption that HVM paths MUST ...
  When enabling log dirty mode, it sets all guest's memory to readonly

As discussed with George we're delaying cutting RC4 until Monday, in
the hope that these changes will make it through the push gate by
then.  (So, also, please don't commit anything to staging for now.)

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:26:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEImg-0003eQ-HD; Fri, 14 Feb 2014 13:26:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEIme-0003eL-A3
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:26:24 +0000
Received: from [85.158.143.35:61413] by server-3.bemta-4.messagelabs.com id
	63/F6-11539-F791EF25; Fri, 14 Feb 2014 13:26:23 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392384381!5717591!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10233 invoked from network); 14 Feb 2014 13:26:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 13:26:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100779193"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 13:26:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 08:26:20 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WEIma-0002gh-2v;
	Fri, 14 Feb 2014 13:26:20 +0000
Message-ID: <52FE197B.7090609@citrix.com>
Date: Fri, 14 Feb 2014 13:26:19 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Simon Martin <furryfuttock@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
In-Reply-To: <6010385428.20140214120238@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 12:02, Simon Martin wrote:
> Hello Simon,
>
> Thanks everyone and especially Ian! It was the hyperthreading that was
> causing the problem.
>
> Here's my current configuration:
>
> # xl cpupool-list -c
> Name               CPU list
> Pool-0             0,1
> pv499              2,3
> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   r--      16.6  0
> Domain-0                             0     1    1   -b-       7.3  1
> win7x64                              1     0    1   -b-      82.5  all
> win7x64                              1     1    0   -b-      18.6  all
> pv499                                2     0    3   r--     226.1  3
>
> I have pinned dom0 as I wasn't sure whether it belongs to Pool-0 (I
> assume it does; can you confirm, please?).
>
> Dario, if you are going to look at the
>
> Looking at my timings with this configuration I am seeing a 1%
> variation (945 milliseconds +/- 5). I think this can be attributed
> to RAM contention; at the end of the day all cores are competing for
> the same bus.
>

There are also things such as the Xen time calibration rendezvous, which
is a synchronisation point of all online cpus, once per second.  Having
any single cpu slow to enter the rendezvous will delay all the others
which have already entered.

This will likely add a bit of jitter if one cpu in Xen is doing a
lengthy operation with interrupts disabled at the point at which the
rendezvous is triggered.

~Andrew
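
[Archive editor's note: for anyone reproducing the cpupool layout quoted above (Pool-0 on cpus 0-1 for dom0 and the HVM guest, a dedicated pool on cpus 2-3 for the PV domain), the standard xl commands look roughly like this. A sketch only: it assumes a running Xen host, and `pv499.cfg` is a hypothetical pool config file.]

```
# Free cpus 2 and 3 from the default pool (they must be idle in it first)
xl cpupool-cpu-remove Pool-0 2
xl cpupool-cpu-remove Pool-0 3

# Create the new pool from a small config file, e.g. pv499.cfg containing:
#   name = "pv499"
#   cpus = ["2", "3"]
xl cpupool-create pv499.cfg

# Move the PV domain into the new pool and pin dom0's vcpus as quoted
xl cpupool-migrate pv499 pv499
xl vcpu-pin Domain-0 0 0
xl vcpu-pin Domain-0 1 1
```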

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:26:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEImg-0003eQ-HD; Fri, 14 Feb 2014 13:26:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEIme-0003eL-A3
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:26:24 +0000
Received: from [85.158.143.35:61413] by server-3.bemta-4.messagelabs.com id
	63/F6-11539-F791EF25; Fri, 14 Feb 2014 13:26:23 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392384381!5717591!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10233 invoked from network); 14 Feb 2014 13:26:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 13:26:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="100779193"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 13:26:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 08:26:20 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WEIma-0002gh-2v;
	Fri, 14 Feb 2014 13:26:20 +0000
Message-ID: <52FE197B.7090609@citrix.com>
Date: Fri, 14 Feb 2014 13:26:19 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Simon Martin <furryfuttock@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
In-Reply-To: <6010385428.20140214120238@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 12:02, Simon Martin wrote:
> Hello Simon,
>
> Thanks everyone and especially Ian! It was the hyperthreading that was
> causing the problem.
>
> Here's my current configuration:
>
> # xl cpupool-list -c
> Name               CPU list
> Pool-0             0,1
> pv499              2,3
> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   r--      16.6  0
> Domain-0                             0     1    1   -b-       7.3  1
> win7x64                              1     0    1   -b-      82.5  all
> win7x64                              1     1    0   -b-      18.6  all
> pv499                                2     0    3   r--     226.1  3
>
> I have pinned dom0 as I wasn't sure whether it belongs to Pool-0 (I
> assume it does; can you confirm, please?).
>
> Dario, if you are going to look at the
>
> Looking at my timings with this configuration I am seeing a 1%
> variation (945 milliseconds +/- 5). I think this can be attributed
> to RAM contention; at the end of the day, all cores are competing for
> the same bus.
>
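
For reference, the pool placement and pinning shown in the quoted listing can also be expressed directly in the guest's xl configuration file; a minimal, illustrative fragment (domain and pool names taken from the listing above):

```
# Illustrative xl guest config fragment -- names from the listing above
name  = "pv499"
pool  = "pv499"   # run in cpupool pv499 (physical cpus 2,3)
cpus  = "3"       # pin the vcpu to physical cpu 3
vcpus = 1
```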

There are also things such as the Xen time calibration rendezvous, which
is a synchronisation point for all online cpus, once per second.  Any
single cpu that is slow to enter the rendezvous will delay all others
which have already entered.

This will likely add a bit of jitter if one cpu in Xen is doing a
lengthy operation with interrupts disabled at the point at which the
rendezvous is triggered.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:27:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEInr-0003jy-6p; Fri, 14 Feb 2014 13:27:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEInp-0003jo-U1
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:27:38 +0000
Received: from [85.158.137.68:16046] by server-10.bemta-3.messagelabs.com id
	C3/E5-07302-9C91EF25; Fri, 14 Feb 2014 13:27:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392384456!1919645!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25869 invoked from network); 14 Feb 2014 13:27:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 13:27:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 13:27:36 +0000
Message-Id: <52FE27D7020000780011C726@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 13:27:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<1392312779-14373-2-git-send-email-tim@xen.org>
In-Reply-To: <1392312779-14373-2-git-send-email-tim@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xen.org, keir@xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/3] x86/hvm/rtc: Don't run the vpt
 timer when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 18:32, Tim Deegan <tim@xen.org> wrote:
> If the guest has not asked for interrupts, don't run the vpt timer
> to generate them.  This is a prerequisite for a patch to simplify how
> the vpt interacts with the RTC, and also gets rid of a timer series in
> Xen in a case where it's unlikely to be needed.
> 
> Instead, calculate the correct value for REG_C.PF whenever REG_C is
> read or PIE is enabled.  This allows a guest to poll for the PF bit
> while not asking for actual timer interrupts.  Such a guest would no
> longer get the benefit of the vpt's timer modes.
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

Looks okay to me. Two minor comments below.

> @@ -125,24 +144,28 @@ static void rtc_timer_update(RTCState *s)
>      case RTC_REF_CLCK_4MHZ:
>          if ( period_code != 0 )
>          {
> -            if ( period_code != s->pt_code )
> +            now = NOW();

This is needed only inside the next if, so perhaps move it there (and
I'd prefer the variable declaration to be moved there too).

> +            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
> +            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
> +            if ( period != s->period )
>              {
> -                s->pt_code = period_code;
> -                period = 1 << (period_code - 1); /* period in 32 Khz cycles */
> -                period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
> +                s->period = period;
>                  if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
>                      delta = 0;
>                  else
> -                    delta = period - ((NOW() - s->start_time) % period);
> -                create_periodic_time(v, &s->pt, delta, period,
> -                                     RTC_IRQ, NULL, s);
> +                    delta = period - ((now - s->start_time) % period);
> +                if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
> +                    create_periodic_time(v, &s->pt, delta, period,
> +                                         RTC_IRQ, NULL, s);
> +                else
> +                    s->check_ticks_since = now;
>              }
>              break;
>          }
> @@ -492,6 +516,11 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
>          rtc_update_irq(s);
>          if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
>              rtc_timer_update(s);
> +        else if ( !(data & RTC_PIE) && (orig & RTC_PIE) )
> +        {
> +            destroy_periodic_time(&s->pt);
> +            rtc_timer_update(s);
> +        }

I think these two paths should be folded, along the lines of

        if ( (data ^ orig) & RTC_PIE )
        {
            if ( !(data & RTC_PIE) )
                destroy_periodic_time(&s->pt);
            rtc_timer_update(s);
        }

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:31:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEIrG-0003vh-Uv; Fri, 14 Feb 2014 13:31:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEIrF-0003vZ-BP
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:31:09 +0000
Received: from [193.109.254.147:30056] by server-9.bemta-14.messagelabs.com id
	2A/86-24895-C9A1EF25; Fri, 14 Feb 2014 13:31:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392384668!4389047!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12165 invoked from network); 14 Feb 2014 13:31:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 13:31:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 13:31:08 +0000
Message-Id: <52FE28AA020000780011C731@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 13:31:06 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<1392312779-14373-3-git-send-email-tim@xen.org>
In-Reply-To: <1392312779-14373-3-git-send-email-tim@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xen.org, keir@xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 2/3] x86/hvm/rtc: Inject RTC periodic
 interupts from the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 18:32, Tim Deegan <tim@xen.org> wrote:
> Let the vpt code drive the RTC's timer interrupts directly, as it does
> for other periodic time sources, and fix up the register state in a
> vpt callback when the interrupt is injected.
> 
> This fixes a hang seen on Windows 2003 in no-missed-ticks mode, where
> when a tick was pending, the early callback from the VPT code would
> always set REG_C.PF on every VMENTER; meanwhile the guest was in its
> interrupt handler reading REG_C in a loop and waiting to see it clear.
> 
> One drawback is that a guest that attempts to suppress RTC periodic
> interrupts by failing to read REG_C will receive up to 10 spurious
> interrupts, even in 'strict' mode.  However:
>  - since all previous RTC models have had this property (including
>    the current one, since 'no-ack' mode is hard-coded on) we're
>    pretty sure that all guests can handle this; and
>  - we're already playing some other interesting games with this
>    interrupt in the vpt code.
> 
> One other corner case: a guest that enables the PF timer interrupt,
> masks the interrupt in the APIC and then polls REG_C looking for PF
> will not see PF getting set.  The more likely case of enabling the
> timers and masking the interrupt with REG_B.PIE is already handled
> correctly.
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

Looks plausible too, and the revert it is effectively doing is
obviously acceptable now with patch 1 in place. Curious to know
what's still broken with it...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:32:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEIsA-00040X-Fc; Fri, 14 Feb 2014 13:32:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEIs9-00040P-OX
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:32:05 +0000
Received: from [85.158.143.35:59165] by server-3.bemta-4.messagelabs.com id
	F4/B0-11539-5DA1EF25; Fri, 14 Feb 2014 13:32:05 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392384723!5725562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28932 invoked from network); 14 Feb 2014 13:32:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 13:32:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,844,1384300800"; d="scan'208";a="102551922"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 13:32:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 08:32:02 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WEIs5-0002lC-Lo;
	Fri, 14 Feb 2014 13:32:01 +0000
Message-ID: <52FE1AD1.80704@citrix.com>
Date: Fri, 14 Feb 2014 13:32:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<52FD034E.2070600@citrix.com>
	<CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
In-Reply-To: <CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 11:52, George Dunlap wrote:
> On Thu, Feb 13, 2014 at 5:39 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 13/02/14 17:32, Tim Deegan wrote:
>>> Hi,
>>>
>>> This series implements the most recent idea I was proposing about
>>> reworking the RTC PF interrupt injection.
>>>
>>> Patch 1 switches handling the !PIE case to calculate the right answer
>>> for REG_C.PF on demand rather than running the timers.
>>> Patch 2 switches back to the old model of having the vpt code control
>>> the timer interrupt injection; this is the fix for the w2k3 hang.
>>> Patch 3 is just a minor cleanup, and not particularly necessary.
>>>
>>> N.B. In its current state it DOES NOT WORK.  I got distracted by
>>> other things today and didn't get a chance to finish working on it,
>>> but I wanted to send it out for feedback on the general approach.
>>> If it seems broadly acceptable then either I can pick it up again next
>>> week or maybe Andrew can look at fixing it.
>>>
>>> Cheers,
>>>
>>> Tim.
>>>
>> I should have time to look at the series tomorrow.
> The next question to ask is this:
>
> This is the last big disruptive bug / bugfix on my list.  We're
> planning on cutting an RC Monday probably, with a test day Tuesday.
> This bug was originally marked as "Not for 4.4".

It was originally for 4.4 when I proposed the very first fix, then not
for 4.4 when it became clear my first fix didn't work and I had to start
from scratch.  The effort to fix it now is simply because we are getting
some traction on the problem.

>
> So our options are:
> * Delay the release, waiting for this new series to be ready
> * Take the patch Andy posted last week for now, and backport Tim's fix
> when it's ready
> * Release without this bug being fixed
>
> As a reminder (for those who haven't been following the thread), the
> effect of this bug is that w2k3 guests sometimes hang during boot.
> I'm not sure exactly how often this is, but from talking to Andy it
> seems to be fairly low -- one percent maybe?  The code is very subtle
> and any change may risk causing similar hangs in other situations; in
> particular we would want to be able to test it pretty well.
>
> At the moment I'm leaning towards not delaying the release for it.
> That could either mean checking the patch we have to hand today (so it
> can make it into the RC Monday hopefully), or just going without it.
>
> Any thoughts?
>
>  -George

The bug has been present in Xen since the 4.3 dev cycle, meaning that
all Xen 4.3.x releases are susceptible.

I do not believe holding the release for a fix is sensible, especially
as the fix is not yet considered ready.

Also, I do not advise taking last week's patch as an interim fix.  As
has repeatedly been proved, this is very tricky to get right, and it is
less risky to leave it as-is.


I will attempt to get the series working, and if it is well tested and
still within an acceptable timeframe for 4.4 then it should go in.  If
not, releasing with this bug is not the end of the world.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>>
>>> Cheers,
>>>
>>> Tim.
>>>
>> I should have time to look at the series tomorrow.
> The next question to ask is this:
>
> This is the last big disruptive bug / bugfix on my list.  We're
> planning on cutting an RC Monday probably, with a test day Tuesday.
> This bug was originally marked as "Not for 4.4".

It was originally for 4.4 when I proposed the very first fix, then not
for 4.4 when it was clear my first fix didn't work and I had to start
from scratch.  The effort to fix it now is simply down to finally
getting some traction on the problem.

>
> So our options are:
> * Delay the release, waiting for this new series to be ready
> * Take the patch Andy posted last week for now, and backport Tim's fix
> when it's ready
> * Release without this bug being fixed
>
> As a reminder (for those who haven't been following the thread), the
> effect of this bug is that w2k3 guests sometimes hang during boot.
> I'm not sure exactly how often this is, but from talking to Andy it
> seems to be fairly low -- one percent maybe?  The code is very subtle
> and any change may risk causing similar hangs in other situations; in
> particular we would want to be able to test it pretty well.
>
> At the moment I'm leaning towards not delaying the release for it.
> That could either mean checking the patch we have to hand today (so it
> can make it into the RC Monday hopefully), or just going without it.
>
> Any thoughts?
>
>  -George

The bug has been present in Xen since the 4.3 dev cycle, meaning that
all Xen 4.3.x releases are susceptible.

I do not believe holding the release for a fix is sensible, especially
as the fix is still not considered ready yet.

Also, I do not advise taking last week's patch as an interim fix.  As
has been repeatedly proven, this is very tricky to get right, and it is
less risky to leave it as-is.


I will attempt to get the series working, and if it is well tested and
still within an acceptable timeframe for 4.4 then it should go in.  If
not, releasing with this bug is not the end of the world.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:46:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJ6I-0004HB-9B; Fri, 14 Feb 2014 13:46:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEJ6G-0004H6-OP
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:46:41 +0000
Received: from [85.158.139.211:35365] by server-7.bemta-5.messagelabs.com id
	16/E3-14867-04E1EF25; Fri, 14 Feb 2014 13:46:40 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392385599!3956682!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5098 invoked from network); 14 Feb 2014 13:46:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 13:46:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 13:46:39 +0000
Message-Id: <52FE2C4F020000780011C74E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 13:46:39 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<1392312779-14373-4-git-send-email-tim@xen.org>
In-Reply-To: <1392312779-14373-4-git-send-email-tim@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xen.org, keir@xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/3] x86/hvm/rtc: Always deassert the
 IRQ line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 18:32, Tim Deegan <tim@xen.org> wrote:
> Even in no-ack mode, there's no reason to leave the line asserted
> after an explicit ack of the interrupt.
> 
> Signed-off-by: Tim Deegan <tim@xen.org>
> ---
>  xen/arch/x86/hvm/rtc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
> index 18a4fe8..b592547 100644
> --- a/xen/arch/x86/hvm/rtc.c
> +++ b/xen/arch/x86/hvm/rtc.c
> @@ -674,7 +674,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
>          check_for_pf_ticks(s);
>          ret = s->hw.cmos_data[s->hw.cmos_index];
>          s->hw.cmos_data[RTC_REG_C] = 0x00;
> -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
> +        if ( ret & RTC_IRQF )
>              hvm_isa_irq_deassert(d, RTC_IRQ);
>          rtc_update_irq(s);
>          check_update_timer(s);

Wait - does one of the earlier patches remove the other de-assert?
Looking... No, they don't. Doing it in exactly one of the two places
should be sufficient, shouldn't it? All the more so since the other
de-assert is in rtc_update_irq(), which is called right afterwards.

But then again - that call seems pointless (I think I mentioned this in
response to Andrew's first attempt): Since REG_C is now clear, the
first conditional return path in that function will never be taken, and
the second one always will be.
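
(For reference, a minimal self-contained C model of the code paths
being discussed; all names, bit values and the reduced logic here are
simplified assumptions for illustration, not the actual Xen rtc.c
sources.)

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified register bits; real values live in rtc.h. */
#define RTC_IRQF 0x80  /* interrupt request flag (REG_C) */
#define RTC_PF   0x40  /* periodic interrupt flag (REG_C) */
#define RTC_PIE  0x40  /* periodic interrupt enable (REG_B) */

struct rtc_model {
    uint8_t reg_b, reg_c;
    int line_asserted;
};

/* Model of rtc_update_irq(): once REG_C has been cleared by the read,
 * the "raised & enabled" test below can never succeed, so calling this
 * right after the read is a no-op -- which is the observation above. */
static void rtc_update_irq(struct rtc_model *s)
{
    if ( !(s->reg_b & s->reg_c & RTC_PF) )
        return;                  /* always taken once REG_C == 0 */
    s->reg_c |= RTC_IRQF;
    s->line_asserted = 1;
}

/* Model of the patched REG_C read path: clear REG_C, then de-assert
 * unconditionally whenever IRQF was set (no no_ack special case). */
static uint8_t rtc_read_reg_c(struct rtc_model *s)
{
    uint8_t ret = s->reg_c;
    s->reg_c = 0x00;
    if ( ret & RTC_IRQF )
        s->line_asserted = 0;    /* stands in for hvm_isa_irq_deassert() */
    rtc_update_irq(s);           /* provably a no-op at this point */
    return ret;
}
```

In this model the rtc_update_irq() call after the read can be deleted
with no change in behaviour, and de-asserting an already de-asserted
level line is harmless.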

So, together with removing that call, and considering the intended
level-triggered nature of the IRQ, I think I agree with you after all,
provided two de-asserts with no assert in between are not a problem.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 13:51:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 13:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJAd-0004Pi-4U; Fri, 14 Feb 2014 13:51:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEJAb-0004Pc-Ql
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 13:51:10 +0000
Received: from [85.158.139.211:9455] by server-6.bemta-5.messagelabs.com id
	25/E5-14342-D4F1EF25; Fri, 14 Feb 2014 13:51:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392385868!3865136!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19883 invoked from network); 14 Feb 2014 13:51:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 13:51:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 13:51:09 +0000
Message-Id: <52FE2D5B020000780011C75A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 13:51:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <George.Dunlap@eu.citrix.com>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<52FD034E.2070600@citrix.com>
	<CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
In-Reply-To: <CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.02.14 at 12:52, George Dunlap <George.Dunlap@eu.citrix.com> wrote:
> On Thu, Feb 13, 2014 at 5:39 PM, Andrew Cooper
> <andrew.cooper3@citrix.com> wrote:
>> On 13/02/14 17:32, Tim Deegan wrote:
>>> Hi,
>>>
>>> This series implements the most recent idea I was proposing about
>>> reworking the RTC PF interrupt injection.
>>>
>>> Patch 1 switches handling the !PIE case to calculate the right answer
>>> for REG_C.PF on demand rather than running the timers.
>>> Patch 2 switches back to the old model of having the vpt code control
>>> the timer interrupt injection; this is the fix for the w2k3 hang.
>>> Patch 3 is just a minor cleanup, and not particularly necessary.
>>>
>>> N.B. In its current state it DOES NOT WORK.  I got distracted by
>>> other things today and didn't get a chance to finish working on it,
>>> but I wanted to send it out for feedback on the general approach.
>>> If it seems broadly acceptable then either I can pick it up again next
>>> week or maybe Andrew can look at fixing it.
>>>
>>> Cheers,
>>>
>>> Tim.
>>>
>>
>> I should have time to look at the series tomorrow.
> 
> The next question to ask is this:
> 
> This is the last big disruptive bug / bugfix on my list.  We're
> planning on cutting an RC Monday probably, with a test day Tuesday.
> This bug was originally marked as "Not for 4.4".
> 
> So our options are:
> * Delay the release, waiting for this new series to be ready
> * Take the patch Andy posted last week for now, and backport Tim's fix
> when it's ready
> * Release without this bug being fixed
> 
> As a reminder (for those who haven't been following the thread), the
> effect of this bug is that w2k3 guests sometimes hang during boot.
> I'm not sure exactly how often this is, but from talking to Andy it
> seems to be fairly low -- one percent maybe?  The code is very subtle
> and any change may risk causing similar hangs in other situations; in
> particular we would want to be able to test it pretty well.
> 
> At the moment I'm leaning towards not delaying the release for it.
> That could either mean checking the patch we have to hand today (so it
> can make it into the RC Monday hopefully), or just going without it.

I thought we already agreed that Andrew's patch as-is is not an
option. And with the same code being in 4.3, leaving this issue
un-fixed if no sufficiently tested patch becomes ready in time seems
like the better option. Once we have a fix, and it got some extended
exposure in -unstable, we could then still backport it to both 4.4.x
and 4.3.x.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:07:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:07:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJQ2-0004fr-38; Fri, 14 Feb 2014 14:07:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEJQ0-0004fm-3K
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:07:04 +0000
Received: from [193.109.254.147:24939] by server-6.bemta-14.messagelabs.com id
	4D/23-03396-7032EF25; Fri, 14 Feb 2014 14:07:03 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392386820!4361438!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3463 invoked from network); 14 Feb 2014 14:07:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:07:02 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100788782"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 14:06:37 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:06:36 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEJPY-0003F7-6c;
	Fri, 14 Feb 2014 14:06:36 +0000
Date: Fri, 14 Feb 2014 14:06:35 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140214140635.GA18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
> 
> This patch series implements multiple transmit and receive queues (i.e.
> multiple shared rings) for the xen virtual network interfaces.
> 
> The series is split up as follows:
>  - Patches 1 and 3 factor out the queue-specific data for netback and
>     netfront respectively, and modify the rest of the code to use these
>     as appropriate.
>  - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>    multiple shared rings and event channels, and code to connect these
>    as appropriate.
>  - Patch 5 documents the XenStore keys required for the new feature
>    in include/xen/interface/io/netif.h
> 
> All other transmit and receive processing remains unchanged, i.e. there
> is a kthread per queue and a NAPI context per queue.
> 
> The performance of these patches has been analysed in detail, with
> results available at:
> 
> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
> 
> To summarise:
>   * Using multiple queues allows a VM to transmit at line rate on a 10
>     Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>     with a single queue.
>   * For intra-host VM--VM traffic, eight queues provide 171% of the
>     throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>   * There is a corresponding increase in total CPU usage, i.e. this is a
>     scaling out over available resources, not an efficiency improvement.
>   * Results depend on the availability of sufficient CPUs, as well as the
>     distribution of interrupts and the distribution of TCP streams across
>     the queues.
> 
> Queue selection is currently achieved via an L4 hash on the packet (i.e.
> TCP src/dst port, IP src/dst address) and is not negotiated between the
> frontend and backend, since only one option exists. Future patches to
> support other frontends (particularly Windows) will need to add some
> capability to negotiate not only the hash algorithm selection, but also
> allow the frontend to specify some parameters to this.
> 

This has an impact on the protocol: if the key to select the hash
algorithm is missing, then we assume L4 is in use.

This either needs to be documented (which is missing from your patch to
netif.h) or you need to write that key explicitly in XenStore.

I also have a question about what would happen if one end advertises
one hash algorithm and then uses a different one. This can happen when
the driver is rogue or buggy. Will it cause the "good guy" to stall? At
the very least, we certainly don't want to stall the backend.

I don't see any code in this series to handle a "rogue other end". I
presume that for a simple hash algorithm like L4 this is not very
important (say, even if a packet ends up in the wrong queue we can
still safely process it), or the core driver can deal with it all by
itself (by dropping)?
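
(To make the failure mode concrete, here is a hedged sketch of L4
queue selection as described in the cover letter: hash the 4-tuple and
reduce modulo the number of queues. The mixing function and struct
layout are assumptions for the example, not the series' actual code;
Linux itself would use skb_get_hash()/flow dissection here.)

```c
#include <assert.h>
#include <stdint.h>

/* Assumed 4-tuple for an IPv4 TCP/UDP flow. */
struct l4_flow {
    uint32_t saddr, daddr;   /* IPv4 addresses */
    uint16_t sport, dport;   /* TCP/UDP ports */
};

static uint32_t l4_hash(const struct l4_flow *f)
{
    /* Simple multiplicative mix; any decent hash works, as long as
     * both ends that need to agree use the same one -- which is
     * exactly why the algorithm in use must be made explicit. */
    uint32_t h = f->saddr ^ f->daddr;
    h ^= ((uint32_t)f->sport << 16) | f->dport;
    h *= 0x9e3779b1u;
    return h;
}

static unsigned int select_queue(const struct l4_flow *f,
                                 unsigned int num_queues)
{
    return l4_hash(f) % num_queues;
}
```

If the two ends compute different hashes, a flow's packets land on a
queue the peer did not expect; with a stateless per-packet scheme like
this they are still valid packets on a valid ring, which is why mere
misdelivery should not stall anything by itself.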

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
> 
> This patch series implements multiple transmit and receive queues (i.e.
> multiple shared rings) for the xen virtual network interfaces.
> 
> The series is split up as follows:
>  - Patches 1 and 3 factor out the queue-specific data for netback and
>     netfront respectively, and modify the rest of the code to use these
>     as appropriate.
>  - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>    multiple shared rings and event channels, and code to connect these
>    as appropriate.
>  - Patch 5 documents the XenStore keys required for the new feature
>    in include/xen/interface/io/netif.h
> 
> All other transmit and receive processing remains unchanged, i.e. there
> is a kthread per queue and a NAPI context per queue.
> 
> The performance of these patches has been analysed in detail, with
> results available at:
> 
> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
> 
> To summarise:
>   * Using multiple queues allows a VM to transmit at line rate on a 10
>     Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>     with a single queue.
>   * For intra-host VM--VM traffic, eight queues provide 171% of the
>     throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>   * There is a corresponding increase in total CPU usage, i.e. this is a
>     scaling out over available resources, not an efficiency improvement.
>   * Results depend on the availability of sufficient CPUs, as well as the
>     distribution of interrupts and the distribution of TCP streams across
>     the queues.
> 
> Queue selection is currently achieved via an L4 hash on the packet (i.e.
> TCP src/dst port, IP src/dst address) and is not negotiated between the
> frontend and backend, since only one option exists. Future patches to
> support other frontends (particularly Windows) will need to add some
> capability to negotiate not only the hash algorithm selection, but also
> allow the frontend to specify some parameters to this.
> 

This has an impact on the protocol. If the key to select the hash
algorithm is missing, then we're assuming L4 is in use.

This either needs to be documented (that documentation is missing from
your patch to netif.h), or you need to write that key explicitly in
XenStore.

I also have a question: what would happen if one end advertises one
hash algorithm but then uses a different one? This can happen when the
driver is rogue or buggy. Will it cause the "good guy" to stall? We
certainly don't want to stall the backend, at the very least.

I don't see relevant code in this series to handle a "rogue other
end". I presume that for a simple hash algorithm like L4 this is not
very important (say, even if a packet ends up in the wrong queue we can
still safely process it), or that the core driver can deal with this
all by itself (by dropping)?

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:11:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:11:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJUJ-0004o6-Ut; Fri, 14 Feb 2014 14:11:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEJUI-0004o0-CW
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:11:30 +0000
Received: from [85.158.137.68:49657] by server-6.bemta-3.messagelabs.com id
	F7/8B-09180-1142EF25; Fri, 14 Feb 2014 14:11:29 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392387087!601008!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19979 invoked from network); 14 Feb 2014 14:11:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:11:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100790270"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 14:11:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:11:26 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEJUD-0003J2-Ld;
	Fri, 14 Feb 2014 14:11:25 +0000
Date: Fri, 14 Feb 2014 14:11:25 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140214141125.GB18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 11:50:21AM +0000, Andrew J. Bennieston wrote:
[...]
>  
> +extern unsigned int xenvif_max_queues;
> +
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 4cde112..4dc092c 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	char name[IFNAMSIZ] = {};
>  
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
> +	/* Allocate a netdev with the max. supported number of queues.
> +	 * When the guest selects the desired number, it will be updated
> +	 * via netif_set_real_num_tx_queues().
> +	 */
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> +						  xenvif_max_queues);

Indentation.

>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 46b2f5b..aeb5ffa 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -54,6 +54,9 @@
[...]
> @@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
>  	unsigned long credit_bytes, credit_usec;
>  	unsigned int queue_index;
>  	struct xenvif_queue *queue;
> +	unsigned int requested_num_queues;
> +
> +	/* Check whether the frontend requested multiple queues
> +	 * and read the number requested.
> +	 */
> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
> +			"multi-queue-num-queues",
> +			"%u", &requested_num_queues);
> +	if (err < 0) {
> +		requested_num_queues = 1; /* Fall back to single queue */
> +	} else if (requested_num_queues > xenvif_max_queues) {
> +		/* buggy or malicious guest */
> +		xenbus_dev_fatal(dev, err,
> +						 "guest requested %u queues, exceeding the maximum of %u.",
> +						 requested_num_queues, xenvif_max_queues);

Indentation.

> +		return;
> +	}
>  
[...]
> @@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  	unsigned long tx_ring_ref, rx_ring_ref;
>  	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> +	char *xspath = NULL;
> +	size_t xspathsize;
> +	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
> +
> +	/* If the frontend requested 1 queue, or we have fallen back
> +	 * to single queue due to lack of frontend support for multi-
> +	 * queue, expect the remaining XenStore keys in the toplevel
> +	 * directory. Otherwise, expect them in a subdirectory called
> +	 * queue-N.
> +	 */
> +	if (queue->vif->num_queues == 1)
> +		xspath = (char *)dev->otherend;

Coding style.

> +	else {

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:13:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJWM-0004wI-MJ; Fri, 14 Feb 2014 14:13:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEJWK-0004w9-Ph
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:13:37 +0000
Received: from [85.158.139.211:26649] by server-8.bemta-5.messagelabs.com id
	CD/FB-05298-0942EF25; Fri, 14 Feb 2014 14:13:36 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392387213!3871036!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9139 invoked from network); 14 Feb 2014 14:13:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:13:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102562656"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 14:13:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:13:33 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEJWG-0003LP-Gx;
	Fri, 14 Feb 2014 14:13:32 +0000
Date: Fri, 14 Feb 2014 14:13:32 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140214141332.GC18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<1392378624-6123-5-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392378624-6123-5-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 11:50:23AM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb hash result.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netfront.c |  176 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 138 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index d4239b9..d584fa4 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,10 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>  
> +/* Module parameters */
> +unsigned int xennet_max_queues;
> +module_param(xennet_max_queues, uint, 0644);
> +
>  static const struct ethtool_ops xennet_ethtool_ops;
>  
>  struct netfront_cb {
> @@ -565,10 +569,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
>  	return pages;
>  }
>  
> -static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
> +static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
> +							   void *accel_priv)

Indentation.

>  {
> -	/* Stub for later implementation of queue selection */
> -	return 0;
> +	struct netfront_info *info = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_idx;
> +
> +	/* First, check if there is only one queue */
> +	if (info->num_queues == 1)
> +		queue_idx = 0;

Coding style. Need to put braces around this single statement.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:36:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:36:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJsR-0005Qo-5H; Fri, 14 Feb 2014 14:36:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>)
	id 1WEJsQ-0005Qa-6X; Fri, 14 Feb 2014 14:36:26 +0000
Received: from [85.158.143.35:41677] by server-1.bemta-4.messagelabs.com id
	10/A8-31661-9E92EF25; Fri, 14 Feb 2014 14:36:25 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392388574!5731319!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjk2MTggKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17975 invoked from network); 14 Feb 2014 14:36:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; 
	d="asc'?scan'208";a="100797314"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 14:36:14 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:36:13 -0500
Message-ID: <1392388571.32038.258.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Date: Fri, 14 Feb 2014 15:36:11 +0100
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
	<1392287382.32038.34.camel@Solace>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>, "'Jan
	Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7284641042967755444=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7284641042967755444==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-nMuMg6AJc84BEv2XeD1I"

--=-nMuMg6AJc84BEv2XeD1I
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-02-14 at 01:13 +0000, Zhang, Yang Z wrote:
> Dario Faggioli wrote on 2014-02-13:
> > ...something like this line above... It's actually a quite good=20
> > example of what I meant above! :-)
> >=20
> > So, how do you feel about this?
> >=20
>=20
> Sure. If you can do it, that's great.
>
Cool! So, the first step is for you to go to blog.xen.org and register,
and then let me know your username, so that I can allow you to write
blog posts.

No need to rush, of course. Although, as per George's summary, nested virt
won't be advertised as production-ready for 4.4, I still think that it
may be worthwhile to wait until after the release, as Sarah was suggesting.

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-nMuMg6AJc84BEv2XeD1I
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL+KdsACgkQk4XaBE3IOsQ8ywCfe4OMiP8jNh0DHXi1Fg0ZhaSy
NfkAoJNoFABtYwXxyos4TBozHdeBx1Ny
=7Ic4
-----END PGP SIGNATURE-----

--=-nMuMg6AJc84BEv2XeD1I--


--===============7284641042967755444==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7284641042967755444==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 14:41:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEJx0-0005cJ-7W; Fri, 14 Feb 2014 14:41:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>)
	id 1WEJwy-0005c0-7p; Fri, 14 Feb 2014 14:41:08 +0000
Received: from [85.158.143.35:28781] by server-3.bemta-4.messagelabs.com id
	52/FD-11539-30B2EF25; Fri, 14 Feb 2014 14:41:07 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392388865!5729302!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21991 invoked from network); 14 Feb 2014 14:41:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:41:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; 
	d="asc'?scan'208";a="102570325"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 14:40:44 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:40:43 -0500
Message-ID: <1392388841.32038.263.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 14 Feb 2014 15:40:41 +0100
In-Reply-To: <52FDEEDB.40305@eu.citrix.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
	<1392287382.32038.34.camel@Solace>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
	<52FDEEDB.40305@eu.citrix.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Zhang, Yang Z" <yang.z.zhang@intel.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>,
	"'Jan Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5552330549095400601=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5552330549095400601==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-bAqudo5HkAOoCWA/Gsud"

--=-bAqudo5HkAOoCWA/Gsud
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-02-14 at 10:24 +0000, George Dunlap wrote:
> On 02/14/2014 01:13 AM, Zhang, Yang Z wrote:

> > Sorry. I didn't clarify it clearly. Most of the patches to run nested a=
re already in Xen upstream. What I want is to add "nested is basically supp=
orted in Xen 4.4" in the Xen 4.4 release note to let people know it.
>=20
> I'm afraid "basically supported" will imply to people that they might=20
> consider shipping it on production systems.  But because of the issues=
=20
> with shadow-on-HAP, and the potential locking issues with the nested p2m=
=20
> table, both of which are in control of the guest admin rather than the=
=20
> host admin, I don't think that's a recommendation we can make at this tim=
e.
>=20
> But I do think making some kind of announcement about common=20
> functionality being complete and ready to be tested would be a good=20
> idea.  When we come to make the release we can brainstorm on what=20
> wording to use.
>=20
I agree. Let's not sell something not entirely ready, as that could
backfire, but we should at least hint that it's there and it's improved.

> Actually, I wonder whether advertising Win7's XP compatibility mode as a=
=20
> separate "tech preview" feature would make sense.  The people who use=20
> that feature are very likely very different than most other people who=
=20
> might think about using nested virtualization.
>=20
Completely agree again. And this is something that could be very well
described in a blog post on the subject (happening after the release), I
think. :-)

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-bAqudo5HkAOoCWA/Gsud
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL+KukACgkQk4XaBE3IOsQ4tACfXLfD6/UNNwiY8tCd/BYn8rsk
Zf0An2kKVXb1Fzy9AozhEaLEXDTxoBhy
=bxmP
-----END PGP SIGNATURE-----

--=-bAqudo5HkAOoCWA/Gsud--


--===============5552330549095400601==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5552330549095400601==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 14:46:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEK22-0005oT-6D; Fri, 14 Feb 2014 14:46:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WEK20-0005oN-ON
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:46:21 +0000
Received: from [85.158.137.68:9166] by server-10.bemta-3.messagelabs.com id
	A4/AB-07302-C3C2EF25; Fri, 14 Feb 2014 14:46:20 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392389177!1939939!1
X-Originating-IP: [209.85.192.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3456 invoked from network); 14 Feb 2014 14:46:19 -0000
Received: from mail-qg0-f51.google.com (HELO mail-qg0-f51.google.com)
	(209.85.192.51)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:46:19 -0000
Received: by mail-qg0-f51.google.com with SMTP id q108so1939939qgd.10
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 06:46:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:from:date:message-id:subject:to:content-type;
	bh=BaM/dt8T6yxjUjXmduYQj8mzpz/S8fRr/E/xYXFgFQs=;
	b=YBKzdBUEszUbvBc8TdSvXafYMIRMfBBudwIDapTfj4ru91VppYdObt06+KCGeqbUdz
	Ducmrbcg73cpSSovsICSEjSkNcJLZDrZHbrKgLSXcAKZyoS3IAqsW5gyTQyHBhUGwoKV
	bbDGZJ4qiISPv7Pv5rTgxQUW7NMru4mn8coKI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:from:date:message-id:subject
	:to:content-type;
	bh=BaM/dt8T6yxjUjXmduYQj8mzpz/S8fRr/E/xYXFgFQs=;
	b=bzWsEYbtTKzrKOtxsR+uB9DVmrPgl9E1AgqAYkaf6D3kqmnXQQgDbHUKlPU7rIWaRu
	Mroj1nlfmR5OIOPtcnH8xW4NZcOdoYoHxj/Q0yOlLDjisyINKk/ETkM1dmUT4OSVprQq
	sUQhDm0bF1eHvz7RtB9C9vnI+0gaM1LqhA1678U0qUNoy5FXCLtMLrPo0tOoDuzALemn
	OA/kAq6pNsydMlwM6gFTT+swaWuTt+R8PDGGvycG2KdxmxVGOnzbNWXGFyKUg7UlLmRm
	dCD1ebJARffLn9oN7URniZ2WLdP7sP7i84YDXlK02LsvGXVCxSUD4FbEs0Am4JUYa0Zu
	s9Bg==
X-Gm-Message-State: ALoCoQlWcHcMddculFMUtSRd31oiWY/4Zgp1LKml2MqgGRDbOQHa0bW5G6SDmb2ChtdVZP2GzD3C
X-Received: by 10.224.115.11 with SMTP id g11mr13921842qaq.18.1392389177560;
	Fri, 14 Feb 2014 06:46:17 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 14 Feb 2014 06:46:02 -0800 (PST)
X-Originating-IP: [217.66.157.55]
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 14 Feb 2014 18:46:02 +0400
X-Google-Sender-Auth: 2Zg8vuYeaXjMCTGtMW1Qhdqrlx4
Message-ID: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all.
Today I compiled Xen 4.3.2 rc1 from the stable-4.3 branch.
I am also using upstream QEMU from Debian jessie (1.7.0).

The domain is created and works fine:
memory=512
maxmemory=1024

Problem:
In dom0, xl mem-set works fine: it modifies xenstore and issues a
hypercall to Xen. On the domU side, memory is ballooned up accordingly.

Now I try to balloon memory up from inside the guest (by writing to
/sys/devices/system/xen_memory/xen_memory0/target), and memory goes
up by only 1 MB; for example, 512 before and 513 after.

The main problem is that I cannot upgrade the guest kernel (I have many
VPSes running 2.6.32.26 from the old Xen kernel git tree).

How can I deal with this? What is wrong on the Xen side?
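[Editorial sketch of the arithmetic behind the guest-side write described
above. The KiB unit is an assumption flagged here, not taken from the
report: the mainline balloon driver's target_kb node takes KiB, but the
2.6.32-era tree in question may differ, so verify on that kernel.]

```python
# Sketch: compute the value a guest would write to the balloon target.
# ASSUMPTION: the sysfs node takes KiB (true of mainline's target_kb node);
# older out-of-tree kernels may expose different nodes or units.

def mib_to_target_kb(mib: int) -> int:
    """Convert a desired memory size in MiB to a KiB sysfs target value."""
    return mib * 1024

# The guest in the report would then run something like:
#   echo <value> > /sys/devices/system/xen_memory/xen_memory0/target
cmd = (f"echo {mib_to_target_kb(1024)} > "
       "/sys/devices/system/xen_memory/xen_memory0/target")
print(cmd)
```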


-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:54:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEK9M-0005z6-9k; Fri, 14 Feb 2014 14:53:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WEK9K-0005z0-9f
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:53:54 +0000
Received: from [193.109.254.147:17562] by server-11.bemta-14.messagelabs.com
	id 9A/B0-24604-10E2EF25; Fri, 14 Feb 2014 14:53:53 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392389630!4396607!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20819 invoked from network); 14 Feb 2014 14:53:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:53:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102573801"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 14:53:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:53:49 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WEK9E-00048R-Ix; Fri, 14 Feb 2014 14:53:48 +0000
Message-ID: <52FE2DFC.8050702@citrix.com>
Date: Fri, 14 Feb 2014 14:53:48 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<20140214140635.GA18398@zion.uk.xensource.com>
In-Reply-To: <20140214140635.GA18398@zion.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 14:06, Wei Liu wrote:
> On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
>>
>> This patch series implements multiple transmit and receive queues (i.e.
>> multiple shared rings) for the xen virtual network interfaces.
>>
>> The series is split up as follows:
>>   - Patches 1 and 3 factor out the queue-specific data for netback and
>>      netfront respectively, and modify the rest of the code to use these
>>      as appropriate.
>>   - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>>     multiple shared rings and event channels, and code to connect these
>>     as appropriate.
>>   - Patch 5 documents the XenStore keys required for the new feature
>>     in include/xen/interface/io/netif.h
>>
>> All other transmit and receive processing remains unchanged, i.e. there
>> is a kthread per queue and a NAPI context per queue.
>>
>> The performance of these patches has been analysed in detail, with
>> results available at:
>>
>> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
>>
>> To summarise:
>>    * Using multiple queues allows a VM to transmit at line rate on a 10
>>      Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>>      with a single queue.
>>    * For intra-host VM--VM traffic, eight queues provide 171% of the
>>      throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>>    * There is a corresponding increase in total CPU usage, i.e. this is a
>>      scaling out over available resources, not an efficiency improvement.
>>    * Results depend on the availability of sufficient CPUs, as well as the
>>      distribution of interrupts and the distribution of TCP streams across
>>      the queues.
>>
>> Queue selection is currently achieved via an L4 hash on the packet (i.e.
>> TCP src/dst port, IP src/dst address) and is not negotiated between the
>> frontend and backend, since only one option exists. Future patches to
>> support other frontends (particularly Windows) will need to add some
>> capability to negotiate not only the hash algorithm selection, but also
>> allow the frontend to specify some parameters to this.
>>
>
> This has an impact on the protocol. If the key to select the hash
> algorithm is missing, then we're assuming L4 is in use.
>
> This either needs to be documented (which is missing in your patch to
> netif.h) or you need to write that key explicitly in XenStore.
>
> I also have a question: what would happen if one end advertises one hash
> algorithm but then uses a different one? This can happen when the
> driver is rogue or buggy. Will it cause the "good guy" to stall? We
> certainly don't want to stall the backend, at the very least.

I'm not sure I understand. There is no negotiable selection of hash
algorithm here. This paragraph refers to a possible future in which we
may have to support multiple such algorithms. Those issues will
absolutely have to be addressed then, but they are irrelevant for now.

Andrew.
>
> I don't see relevant code in this series to handle a "rogue other end". I
> presume that for a simple hash algorithm like L4 this is not very important
> (say, even if a packet ends up in the wrong queue we can still safely
> process it), or that the core driver can deal with this all by itself
> (dropping)?
>
> Wei.
>
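[Editor's note] For readers following the thread, the queue-selection scheme described above (an L4 hash over the TCP/IP 4-tuple, reduced to a queue index) can be sketched as follows. This is an illustrative model, not the actual netback/netfront code: the FNV-1a mixing function, the field packing, and the function names are assumptions for the sketch; the kernel uses its own flow-hash machinery.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative 32-bit FNV-1a hash; the kernel uses a different
 * (Toeplitz/jhash-style) flow hash, but any stable mixing function
 * demonstrates the idea. */
static uint32_t fnv1a32(const uint8_t *data, size_t len)
{
    uint32_t h = 2166136261u;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 16777619u;
    }
    return h;
}

/* Pack the L4 4-tuple (src/dst address, src/dst port), hash it, and
 * reduce modulo the number of queues. All packets of one TCP stream
 * share a 4-tuple, so they always land on the same queue. */
static unsigned int select_queue(uint32_t saddr, uint32_t daddr,
                                 uint16_t sport, uint16_t dport,
                                 unsigned int num_queues)
{
    uint8_t tuple[12];

    memcpy(&tuple[0], &saddr, 4);
    memcpy(&tuple[4], &daddr, 4);
    memcpy(&tuple[8], &sport, 2);
    memcpy(&tuple[10], &dport, 2);
    return fnv1a32(tuple, sizeof(tuple)) % num_queues;
}
```

Because the mapping is deterministic per flow, a "wrong" hash on the other end can at worst place a packet on an unexpected queue, which matches Wei's observation that misplaced packets can still be processed safely.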


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:56:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKBO-00064w-RL; Fri, 14 Feb 2014 14:56:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WEKBN-00064r-KT
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:56:01 +0000
Received: from [85.158.139.211:16975] by server-16.bemta-5.messagelabs.com id
	45/14-05060-08E2EF25; Fri, 14 Feb 2014 14:56:00 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392389759!42298!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29765 invoked from network); 14 Feb 2014 14:56:00 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:56:00 -0000
Received: by mail-qc0-f171.google.com with SMTP id n7so20536351qcx.30
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 06:55:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=DZ3f/q0E+vzLrTnpbgXctVHddtfT3m0CTUPDPvUJt4c=;
	b=Q65LmxqaRzr8KQcHg0jR2tunTLoAgRc31bul643DdSa4ZA2AD+Rfz9rAN4WWRbffBu
	RjRHQzON2OoBZtCtl+DocaxPFWr8ow6OKqr8WI8gFXcJVisUB8V7EL2ggDBCxuQ8fpF9
	i+bY9G8zmkLFeXRC0Z96iK6aNcjlkzh3DS9Xc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=DZ3f/q0E+vzLrTnpbgXctVHddtfT3m0CTUPDPvUJt4c=;
	b=GLJZGCA3d+HohMJBCTzhvRNu95eFq2OR+K4upkyNI+GatjU5/NNjxU6mrwdVboF1Va
	0dLbKu8pSi8Q2a2nfSAi+m/RBWeXrtuvSG94DSpYvvTRxB8Xw71oa8KlCc91sQtrUlBU
	rLYeEOH6dPc9usGUmRow79YgarDEpnfp1oh9pKSw5DA2TKfhElG5nJhk5weEkNM+dbB3
	tAxIg0laFA38S+ilkITLTRzDnT0cKq8MBj7sIyod5tlhDyvYohUSwArlBS/wMMZLhvPb
	owJRkp9jpnRdwyjJOp585etOpB4s/OnjInwOmEFMCsqNCq8Ho7XVhfk8slrKMZBde4lc
	spDQ==
X-Gm-Message-State: ALoCoQm9BbM3BhRiAx7LpGxP2ais5/Uz0xawXEW89IXHtbiZVEQXeq0U5OJ9mmv8EJzlJtyiCEMv
X-Received: by 10.224.5.136 with SMTP id 8mr14062310qav.42.1392389758580; Fri,
	14 Feb 2014 06:55:58 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 14 Feb 2014 06:55:43 -0800 (PST)
X-Originating-IP: [217.66.157.55]
In-Reply-To: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 14 Feb 2014 18:55:43 +0400
X-Google-Sender-Auth: FnGHSisIFC1pYJKh9nKrlSU6G3M
Message-ID: <CACaajQtL_yRpR3WgJv_Xrk=y-geH=8n2SL_aHZdqa3=Xob6=TA@mail.gmail.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As a workaround I can use "xl mem-set xxx maxmemory"; after that the guest
VM can balloon memory up and down within the range 0 - maxmemory.

2014-02-14 18:46 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> Hi all.
> Today I compiled Xen 4.3.2 rc1 from the stable-4.3 branch.
> I'm also using upstream QEMU from Debian jessie (1.7.0).
>
> The domain is created and works fine, with:
> memory=512
> maxmemory=1024
>
> Problem:
> From dom0, xl mem-set works fine: it modifies XenStore and does a hypercall to Xen.
> On the domU side, memory balloons up.
>
> Now I try to balloon memory up from the guest (by writing to
> /sys/devices/system/xen_memory/xen_memory0/target) and memory
> balloons up by only 1 MB. For example, 512 before, 513 after.
>
> The main problem is that I can't upgrade the guest kernel (I have many VPSes
> running 2.6.32.26 from the old Xen kernel git tree).
>
> How can I deal with this? What's wrong on the Xen side?
>
>
> --
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru



-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
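[Editor's note] The behaviour reported above is consistent with the guest's balloon target being capped by the limit the toolstack last set, which is why pushing that limit up with "xl mem-set" unlocks the full 0..maxmemory range. The toy model below is an assumption for illustration, not the actual balloon-driver code, and it does not reproduce the odd 1 MB increment the old kernel manages; it only captures the limit aspect.

```c
/* Toy model: the guest can move its balloon target freely below the
 * toolstack-set limit, but cannot exceed it. "xl mem-set <dom> <kb>"
 * raises toolstack_limit_kb, enabling ballooning above the old cap. */
struct balloon_state {
    unsigned long current_kb;
    unsigned long toolstack_limit_kb; /* last value set via xl mem-set */
};

static unsigned long guest_set_target(struct balloon_state *b,
                                      unsigned long requested_kb)
{
    unsigned long target = requested_kb;

    if (target > b->toolstack_limit_kb)
        target = b->toolstack_limit_kb; /* clamp: cannot balloon above */
    b->current_kb = target;
    return target;
}
```

Under this model, the workaround is simply to raise the limit to maxmemory first, after which guest-initiated writes to the sysfs target take full effect.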

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:57:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKCv-0006BG-EI; Fri, 14 Feb 2014 14:57:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WEKCu-0006B8-Ed
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:57:36 +0000
Received: from [85.158.139.211:59057] by server-5.bemta-5.messagelabs.com id
	06/89-32749-FDE2EF25; Fri, 14 Feb 2014 14:57:35 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392389853!3934492!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3064 invoked from network); 14 Feb 2014 14:57:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:57:34 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100803713"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 14:57:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:57:26 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WEKCk-0004Bt-4W; Fri, 14 Feb 2014 14:57:26 +0000
Message-ID: <52FE2ED5.3020905@citrix.com>
Date: Fri, 14 Feb 2014 14:57:25 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
	<20140214141125.GB18398@zion.uk.xensource.com>
In-Reply-To: <20140214141125.GB18398@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 14:11, Wei Liu wrote:
> On Fri, Feb 14, 2014 at 11:50:21AM +0000, Andrew J. Bennieston wrote:
> [...]
>>
>> +extern unsigned int xenvif_max_queues;
>> +
>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index 4cde112..4dc092c 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	char name[IFNAMSIZ] = {};
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>> +	/* Allocate a netdev with the max. supported number of queues.
>> +	 * When the guest selects the desired number, it will be updated
>> +	 * via netif_set_real_num_tx_queues().
>> +	 */
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>> +						  xenvif_max_queues);
>
> Indentation.

How would you like this to be indented? The CodingStyle says (and I quote):
Chapter 2: Breaking long lines and strings:
	... descendants are always substantially shorter than the
	parent and placed substantially to the right...

There is no further advice on this point in CodingStyle, so please
explain how you'd prefer this.

>
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 46b2f5b..aeb5ffa 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -54,6 +54,9 @@
> [...]
>> @@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
>>   	unsigned long credit_bytes, credit_usec;
>>   	unsigned int queue_index;
>>   	struct xenvif_queue *queue;
>> +	unsigned int requested_num_queues;
>> +
>> +	/* Check whether the frontend requested multiple queues
>> +	 * and read the number requested.
>> +	 */
>> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +			"multi-queue-num-queues",
>> +			"%u", &requested_num_queues);
>> +	if (err < 0) {
>> +		requested_num_queues = 1; /* Fall back to single queue */
>> +	} else if (requested_num_queues > xenvif_max_queues) {
>> +		/* buggy or malicious guest */
>> +		xenbus_dev_fatal(dev, err,
>> +						 "guest requested %u queues, exceeding the maximum of %u.",
>> +						 requested_num_queues, xenvif_max_queues);
>
> Indentation.
Ditto.

>
>> +		return;
>> +	}
>>
> [...]
>> @@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>>   	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> +	char *xspath = NULL;
>> +	size_t xspathsize;
>> +	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
>> +
>> +	/* If the frontend requested 1 queue, or we have fallen back
>> +	 * to single queue due to lack of frontend support for multi-
>> +	 * queue, expect the remaining XenStore keys in the toplevel
>> +	 * directory. Otherwise, expect them in a subdirectory called
>> +	 * queue-N.
>> +	 */
>> +	if (queue->vif->num_queues == 1)
>> +		xspath = (char *)dev->otherend;
>
> Coding style.
>
Ok; I thought I'd caught all of those. I'll change it.

>> +	else {
>
> Wei.
>
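[Editor's note] The two pieces of negotiation logic under review, reading "multi-queue-num-queues" with a single-queue fallback and choosing the per-queue XenStore path, can be modelled in a standalone sketch. The xenbus_scanf() call is stubbed out as a plain int (negative meaning "key absent"), and the maximum of 4 is an illustrative value for the xenvif_max_queues module parameter.

```c
#include <stdio.h>

static const unsigned int xenvif_max_queues = 4; /* illustrative max */

/* Model of the backend's queue-count negotiation from the patch:
 * a missing key falls back to one queue; a request above the
 * backend's maximum is treated as fatal (returns 0 here, standing
 * in for xenbus_dev_fatal() in the real code). */
static unsigned int negotiate_num_queues(int frontend_value)
{
    if (frontend_value < 0)
        return 1; /* key missing: single-queue fallback */
    if ((unsigned int)frontend_value > xenvif_max_queues)
        return 0; /* buggy or malicious frontend */
    return (unsigned int)frontend_value;
}

/* Per the patch, with one queue the remaining keys live in the
 * frontend's top-level directory; with more they live under
 * "queue-N" subdirectories. */
static void queue_xspath(char *buf, size_t len, const char *otherend,
                         unsigned int num_queues, unsigned int index)
{
    if (num_queues == 1)
        snprintf(buf, len, "%s", otherend);
    else
        snprintf(buf, len, "%s/queue-%u", otherend, index);
}
```

This also makes Wei's earlier point concrete: because a missing key silently means one queue, that fallback is part of the protocol and needs documenting in netif.h.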


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 14:58:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 14:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKDQ-0006G1-Ji; Fri, 14 Feb 2014 14:58:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WEKDP-0006Fm-Fw
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 14:58:07 +0000
Received: from [193.109.254.147:3821] by server-10.bemta-14.messagelabs.com id
	9B/93-10711-EFE2EF25; Fri, 14 Feb 2014 14:58:06 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392389884!4396160!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24922 invoked from network); 14 Feb 2014 14:58:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 14:58:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102575126"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 14:58:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 09:58:04 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WEKDL-0004CT-Et; Fri, 14 Feb 2014 14:58:03 +0000
Message-ID: <52FE2EFB.5080107@citrix.com>
Date: Fri, 14 Feb 2014 14:58:03 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<1392378624-6123-5-git-send-email-andrew.bennieston@citrix.com>
	<20140214141332.GC18398@zion.uk.xensource.com>
In-Reply-To: <20140214141332.GC18398@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 14:13, Wei Liu wrote:
> On Fri, Feb 14, 2014 at 11:50:23AM +0000, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Build on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Check XenStore for multi-queue support, and set up the rings and event
>> channels accordingly.
>>
>> Write ring references and event channels to XenStore in a queue
>> hierarchy if appropriate, or flat when using only one queue.
>>
>> Update the xennet_select_queue() function to choose the queue on which
>> to transmit a packet based on the skb hash result.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netfront.c |  176 ++++++++++++++++++++++++++++++++++----------
>>   1 file changed, 138 insertions(+), 38 deletions(-)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index d4239b9..d584fa4 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -57,6 +57,10 @@
>>   #include <xen/interface/memory.h>
>>   #include <xen/interface/grant_table.h>
>>
>> +/* Module parameters */
>> +unsigned int xennet_max_queues;
>> +module_param(xennet_max_queues, uint, 0644);
>> +
>>   static const struct ethtool_ops xennet_ethtool_ops;
>>
>>   struct netfront_cb {
>> @@ -565,10 +569,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
>>   	return pages;
>>   }
>>
>> -static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
>> +static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
>> +							   void *accel_priv)
>
> Indentation.
>
>>   {
>> -	/* Stub for later implementation of queue selection */
>> -	return 0;
>> +	struct netfront_info *info = netdev_priv(dev);
>> +	u32 hash;
>> +	u16 queue_idx;
>> +
>> +	/* First, check if there is only one queue */
>> +	if (info->num_queues == 1)
>> +		queue_idx = 0;
>
> Coding style. Need to put braces around this single statement.
>

Good catch; thanks.
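For context, the rule being cited (CodingStyle chapter 3, as I understand it) is that when one branch of an if/else needs braces, every branch gets them. A minimal sketch of the selection logic with that applied, using a plain modulo in place of the skb hash (which is not available outside the kernel):

```c
#include <assert.h>

/* Illustrative stand-in for xennet_select_queue(): num_queues and hash
 * replace info->num_queues and the skb hash result. Not driver code. */
static unsigned short select_queue(unsigned int num_queues,
				   unsigned int hash)
{
	unsigned short queue_idx;

	if (num_queues == 1) {
		/* Only one queue: nothing to choose. */
		queue_idx = 0;
	} else {
		/* Spread flows across the available queues by hash. */
		queue_idx = (unsigned short)(hash % num_queues);
	}

	return queue_idx;
}
```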

> Wei.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:26:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKej-0006zP-8j; Fri, 14 Feb 2014 15:26:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEKei-0006zK-0Q
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 15:26:20 +0000
Received: from [85.158.137.68:56751] by server-7.bemta-3.messagelabs.com id
	71/8B-13775-B953EF25; Fri, 14 Feb 2014 15:26:19 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392391576!1939681!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5963 invoked from network); 14 Feb 2014 15:26:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:26:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102586727"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:25:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:25:40 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEKe4-0004bb-4c;
	Fri, 14 Feb 2014 15:25:40 +0000
Date: Fri, 14 Feb 2014 15:25:39 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140214152539.GD18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<20140214140635.GA18398@zion.uk.xensource.com>
	<52FE2DFC.8050702@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE2DFC.8050702@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 02:53:48PM +0000, Andrew Bennieston wrote:
> On 14/02/14 14:06, Wei Liu wrote:
> >On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
> >>
> >>This patch series implements multiple transmit and receive queues (i.e.
> >>multiple shared rings) for the xen virtual network interfaces.
> >>
> >>The series is split up as follows:
> >>  - Patches 1 and 3 factor out the queue-specific data for netback and
> >>     netfront respectively, and modify the rest of the code to use these
> >>     as appropriate.
> >>  - Patches 2 and 4 introduce new XenStore keys to negotiate and use
> >>    multiple shared rings and event channels, and code to connect these
> >>    as appropriate.
> >>  - Patch 5 documents the XenStore keys required for the new feature
> >>    in include/xen/interface/io/netif.h
> >>
> >>All other transmit and receive processing remains unchanged, i.e. there
> >>is a kthread per queue and a NAPI context per queue.
> >>
> >>The performance of these patches has been analysed in detail, with
> >>results available at:
> >>
> >>http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
> >>
> >>To summarise:
> >>   * Using multiple queues allows a VM to transmit at line rate on a 10
> >>     Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
> >>     with a single queue.
> >>   * For intra-host VM--VM traffic, eight queues provide 171% of the
> >>     throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
> >>   * There is a corresponding increase in total CPU usage, i.e. this is a
> >>     scaling out over available resources, not an efficiency improvement.
> >>   * Results depend on the availability of sufficient CPUs, as well as the
> >>     distribution of interrupts and the distribution of TCP streams across
> >>     the queues.
> >>
> >>Queue selection is currently achieved via an L4 hash on the packet (i.e.
> >>TCP src/dst port, IP src/dst address) and is not negotiated between the
> >>frontend and backend, since only one option exists. Future patches to
> >>support other frontends (particularly Windows) will need to add some
> >>capability to negotiate not only the hash algorithm selection, but also
> >>allow the frontend to specify some parameters to this.
> >>
> >
> >This has an impact on the protocol. If the key to select hash algorithm
> >is missing then we're assuming L4 is in use.
> >
> >This either needs to be documented (which is missing in your patch to
> >netif.h) or you need to write that key explicitly in XenStore.
> >

a)

> >I also have a question what would happen if one end advertises one hash
> >algorithm then use a different one. This can happen when the
> >driver is rogue or buggy. Will it cause the "good guy" to stall? We
> >certainly don't want to stall backend, at the very least.
> 

b)

> I'm not sure I understand. There is no negotiable selection of hash
> algorithm here. This paragraph refers to a possible future in which
> we may have to support multiple such. These issues will absolutely
> have to be addressed then, but it is completely irrelevant for now.
> 

There are actually two questions.

I suspect your above reply was for a). My starting point for a) is: if
I'm to write a driver, either backend or frontend, for any random OS,
will I be able to get some basic idea of the correct behaviour by
looking at netif.h alone? The current answer for multi-queue hash
algorithm selection is "no", given that 1) the document does not make
clear that L4 is the default algorithm if no key is specified, and 2)
the key to select the algorithm is not mandatory in the current protocol.

I was not very clear in my previous reply, especially about "write that
key explicitly in XenStore", sorry. What you need to do is either:
1) document that L4 will be selected if the algorithm selection key is
   missing, or
2) document that the algorithm key is mandatory and implement the
   negotiation.

For question b): say I'm writing a malicious frontend driver, and I
advertise that I want L4 but actually always select one particular
queue, or deliberately select a random queue; will that cause a problem
for the backend? If we are to use a more complex algorithm, will a
rogue frontend cause problems for the backend?
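One way a backend can stay safe regardless of what the frontend actually does is to validate anything frontend-controlled before using it as an index. A sketch of that idea (a hypothetical helper, not code from this series): the worst a lying frontend can then achieve is misplacing or losing its own packets, never an out-of-bounds access in the backend.

```c
#include <assert.h>

/* Validate a frontend-supplied queue index against the negotiated
 * queue count. Returns the index if sane, or -1 to signal "drop". */
static int validate_queue_index(unsigned int requested,
				unsigned int num_queues)
{
	if (num_queues == 0 || requested >= num_queues) {
		/* Rogue or buggy frontend: drop rather than trust it. */
		return -1;
	}

	return (int)requested;
}
```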

Wei.

> Andrew.
> >
> >I don't see relevant code in this series to handle "rogue other end". I
> >presume for a simple hash algorithm like L4 is not very important (say,
> >even a packet ends up in the wrong queue we can still safely process
> >it), or core driver can deal with this all by itself (dropping)?
> >
> >Wei.
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:36:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKoZ-0007AX-My; Fri, 14 Feb 2014 15:36:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEKoY-0007AS-5p
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 15:36:30 +0000
Received: from [193.109.254.147:2037] by server-5.bemta-14.messagelabs.com id
	48/50-16688-DF73EF25; Fri, 14 Feb 2014 15:36:29 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392392187!4423135!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12212 invoked from network); 14 Feb 2014 15:36:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:36:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102590223"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:36:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:36:21 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEKoP-0004l1-7f;
	Fri, 14 Feb 2014 15:36:21 +0000
Date: Fri, 14 Feb 2014 15:36:20 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140214153620.GE18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
	<20140214141125.GB18398@zion.uk.xensource.com>
	<52FE2ED5.3020905@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE2ED5.3020905@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 02:57:25PM +0000, Andrew Bennieston wrote:
> On 14/02/14 14:11, Wei Liu wrote:
> >On Fri, Feb 14, 2014 at 11:50:21AM +0000, Andrew J. Bennieston wrote:
> >[...]
> >>
> >>+extern unsigned int xenvif_max_queues;
> >>+
> >>  #endif /* __XEN_NETBACK__COMMON_H__ */
> >>diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> >>index 4cde112..4dc092c 100644
> >>--- a/drivers/net/xen-netback/interface.c
> >>+++ b/drivers/net/xen-netback/interface.c
> >>@@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> >>  	char name[IFNAMSIZ] = {};
> >>
> >>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> >>-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
> >>+	/* Allocate a netdev with the max. supported number of queues.
> >>+	 * When the guest selects the desired number, it will be updated
> >>+	 * via netif_set_real_num_tx_queues().
> >>+	 */
> >>+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> >>+						  xenvif_max_queues);
> >
> >Indentation.
> 
> How would you like this to be indented? The CodingStyle says (and I quote):
> Chapter 2: Breaking long lines and strings:
> 	... descendants are always substantially shorter than the
> 	parent and placed substantially to the right...
> 
> There is no further advice to this point in CodingStyle, so please
> explain how you'd prefer this.
> 

Kernel code in general uses an indentation style like

	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
			      xenvif_max_queues);

You can find lots of examples in existing kernel code.

Probably "placed substantially to the right" is just too vague. :-)

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:41:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKtR-0007J6-Kv; Fri, 14 Feb 2014 15:41:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WEKtQ-0007J1-I1
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 15:41:32 +0000
Received: from [85.158.139.211:10129] by server-11.bemta-5.messagelabs.com id
	CB/8A-23886-B293EF25; Fri, 14 Feb 2014 15:41:31 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392392489!4012408!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14352 invoked from network); 14 Feb 2014 15:41:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:41:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102591875"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:41:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:41:00 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WEKsu-0004ot-3c; Fri, 14 Feb 2014 15:41:00 +0000
Message-ID: <52FE390B.3020102@citrix.com>
Date: Fri, 14 Feb 2014 15:40:59 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<20140214140635.GA18398@zion.uk.xensource.com>
	<52FE2DFC.8050702@citrix.com>
	<20140214152539.GD18398@zion.uk.xensource.com>
In-Reply-To: <20140214152539.GD18398@zion.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 15:25, Wei Liu wrote:
> On Fri, Feb 14, 2014 at 02:53:48PM +0000, Andrew Bennieston wrote:
>> On 14/02/14 14:06, Wei Liu wrote:
>>> On Fri, Feb 14, 2014 at 11:50:19AM +0000, Andrew J. Bennieston wrote:
>>>>
>>>> This patch series implements multiple transmit and receive queues (i.e.
>>>> multiple shared rings) for the xen virtual network interfaces.
>>>>
>>>> The series is split up as follows:
>>>>   - Patches 1 and 3 factor out the queue-specific data for netback and
>>>>      netfront respectively, and modify the rest of the code to use these
>>>>      as appropriate.
>>>>   - Patches 2 and 4 introduce new XenStore keys to negotiate and use
>>>>     multiple shared rings and event channels, and code to connect these
>>>>     as appropriate.
>>>>   - Patch 5 documents the XenStore keys required for the new feature
>>>>     in include/xen/interface/io/netif.h
>>>>
>>>> All other transmit and receive processing remains unchanged, i.e. there
>>>> is a kthread per queue and a NAPI context per queue.
>>>>
>>>> The performance of these patches has been analysed in detail, with
>>>> results available at:
>>>>
>>>> http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing
>>>>
>>>> To summarise:
>>>>    * Using multiple queues allows a VM to transmit at line rate on a 10
>>>>      Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
>>>>      with a single queue.
>>>>    * For intra-host VM--VM traffic, eight queues provide 171% of the
>>>>      throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
>>>>    * There is a corresponding increase in total CPU usage, i.e. this is a
>>>>      scaling out over available resources, not an efficiency improvement.
>>>>    * Results depend on the availability of sufficient CPUs, as well as the
>>>>      distribution of interrupts and the distribution of TCP streams across
>>>>      the queues.
>>>>
>>>> Queue selection is currently achieved via an L4 hash on the packet (i.e.
>>>> TCP src/dst port, IP src/dst address) and is not negotiated between the
>>>> frontend and backend, since only one option exists. Future patches to
>>>> support other frontends (particularly Windows) will need to add some
>>>> capability to negotiate not only the hash algorithm selection, but also
>>>> allow the frontend to specify some parameters to this.
>>>>
>>>
>>> This has an impact on the protocol. If the key to select hash algorithm
>>> is missing then we're assuming L4 is in use.
>>>
>>> This either needs to be documented (which is missing in your patch to
>>> netif.h) or you need to write that key explicitly in XenStore.
>>>
>
> a)
>
>>> I also have a question what would happen if one end advertises one hash
>>> algorithm then use a different one. This can happen when the
>>> driver is rogue or buggy. Will it cause the "good guy" to stall? We
>>> certainly don't want to stall backend, at the very least.
>>
>
> b)
>
>> I'm not sure I understand. There is no negotiable selection of hash
>> algorithm here. This paragraph refers to a possible future in which
>> we may have to support multiple such. These issues will absolutely
>> have to be addressed then, but it is completely irrelevant for now.
>>
>
> There are actually two questions.
>
> I suspect your above reply was for a). My starting point of a) is, if
> I'm to write a driver, either backend or frontend, for any random OS,
> will I be able to have some basic idea what the correct behavior is by
> looking at netif.h only? The current answer for multiqueue hash
> algorithm selection is "no", given that 1) the document does not make
> it clear that L4 is the default algorithm if no key is specified, and
> 2) the key to select the algorithm is not mandatory in the current
> protocol.
>
> I was not very clear in previous reply, especially the "write that key
> explicitly in XenStore", sorry. The thing you need to do would be:
> 1) document L4 will be selected if algorithm selection is missing, or
> 2) document algorithm key is mandatory and implement negotiation.
>
> For question b). Say, if I'm writing a malicious frontend driver, I
> advertise I want L4 but actually I always select a particular queue, or
> deliberately select random queue, will that cause problem to the
> backend? If we are to use a more complex algorithm, will a rogue
> frontend cause problem to backend?
>
> Wei.

Let me attempt to clear this up. Bear with me...

Queue selection is a decision by a transmitting system about which queue 
it uses for a particular packet. A well-behaved receiving system will 
pick up packets on any queue and throw them up into its network stack as 
normal. In this manner, the details of queue selection don't matter from 
the point of view of a receiving guest (either frontend or backend). 
That is, if a "malicious" frontend sends all of its packets on a single 
queue, then it is only damaging itself - by reducing its effective 
throughput to that of a single queue. This will not cause a problem to 
the backend. The same goes for the "select a random queue" scenario, 
although here you probably shouldn't expect decent TCP performance. 
Certainly there will be no badness in terms of affecting the backend or 
other systems, beyond that which a guest could achieve with a broken TCP 
stack anyway.

In light of this, algorithm selection is (mostly) a function of the 
transmitting side. The receiving side should be prepared to receive 
packets on any of the legitimately established queues. It just happens 
that the Linux netback and Linux netfront both use skb_get_hash() to 
determine this value.

In the future, some frontends (e.g. Windows) may need to do complex 
things like pushing hash state to the backend. This will be taken care 
of with extensions to the protocol at the point these are implemented.

Andrew.

>
>> Andrew.
>>>
>>> I don't see relevant code in this series to handle "rogue other end". I
>>> presume for a simple hash algorithm like L4 is not very important (say,
>>> even a packet ends up in the wrong queue we can still safely process
>>> it), or core driver can deal with this all by itself (dropping)?
>>>
>>> Wei.
>>>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:41:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKtp-0007L3-Eo; Fri, 14 Feb 2014 15:41:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEKtl-0007Kn-N3
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:41:54 +0000
Received: from [193.109.254.147:50165] by server-2.bemta-14.messagelabs.com id
	C5/0C-01236-0493EF25; Fri, 14 Feb 2014 15:41:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392392510!719462!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17792 invoked from network); 14 Feb 2014 15:41:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:41:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102592226"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:41:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 10:41:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEKtV-0008NU-CU;
	Fri, 14 Feb 2014 15:41:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEKtU-00074l-Ud;
	Fri, 14 Feb 2014 15:41:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24878-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Feb 2014 15:41:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24878: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4265193807059029291=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4265193807059029291==
Content-Type: text/plain

flight 24878 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24878/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install              fail like 24797
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24797
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24797

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                29b5f720990fafc302a034468455426dd662e101
baseline version:
 linux                1569265782ef26ed77ce45ebeb0676f11d4c114a

------------------------------------------------------------
People who touched revisions under test:
  "David S. Miller" <davem@davemloft.net>
  Akash Goel <akash.goel@intel.com>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Al Viro <viro@ZenIV.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Antti Palosaari <crope@iki.fi>
  Ben Skeggs <bskeggs@redhat.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Boaz Harrosh <bharrosh@panasas.com>
  Borislav Petkov <bp@suse.de>
  Brecht Machiels <brecht@mos6581.org>
  Brennan Shacklett <bpshacklett@gmail.com>
  Brian Norris <computersforpeace@gmail.com>
  Chris Ball <cjb@laptop.org>
  Chris Metcalf <cmetcalf@tilera.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Lameter <cl@linux.com>
  Chuck Anderson <chuck.anderson@oracle.com>
  Dan Duval <dan.duval@oracle.com>
  Daniel Santos <daniel.santos@pobox.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Rientjes <rientjes@google.com>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Paris <eparis@redhat.com>
  Florian Fainelli <florian@openwrt.org>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Ira Weiny <ira.weiny@intel.com>
  James Ralston <james.d.ralston@intel.com>
  Jiri Slaby <jslaby@suse.cz>
  Joe Thornber <ejt@redhat.com>
  Johannes Weiner <hannes@cmpxchg.org>
  John Stultz <john.stultz@linaro.org>
  Jonas Gorski <jogo@openwrt.org>
  Josh Triplett <josh@joshtriplett.org>
  Kamil Debski <k.debski@samsung.com>
  Len Brown <len.brown@intel.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Ludovic Desroches <ludovic.desroches@atmel.com>
  Luiz Capitulino <lcapitulino@redhat.com>
  Maarten Lankhorst <maarten.lankhorst@canonical.com>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek Olšák <marek.olsak@amd.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@linaro.org>
  Mauro Carvalho Chehab <m.chehab@samsung.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Michael Grzeschik <m.grzeschik@pengutronix.de>
  Michel Dänzer <michel.daenzer@amd.com>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@suse.cz>
  Mikulas Patocka <mpatocka@redhat.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nell Hardcastle <nell@spicious.com>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Olivier Grenie <olivier.grenie@parrot.com>
  Patrick Boettcher <pboettcher@kernellabs.com>
  Paul Moore <pmoore@redhat.com>
  Pekka Enberg <penberg@kernel.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ray Jui <rjui@broadcom.com>
  Richard Guy Briggs <rgb@redhat.com>
  Roland Dreier <roland@purestorage.com>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Seth Heasley <seth.heasley@intel.com>
  Seungwon Jeon <tgih.jun@samsung.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Steven Rostedt <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Todd Previte <tprevite@gmail.com>
  Trond Myklebust <trond.myklebust@primarydata.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Wanlong Gao <gaowanlong@cn.fujitsu.com>
  Weston Andros Adamson <dros@netapp.com>
  Weston Andros Adamson <dros@primarydata.com>
  Wolfram Sang <wsa@the-dreams.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=29b5f720990fafc302a034468455426dd662e101
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 29b5f720990fafc302a034468455426dd662e101
+ branch=linux-3.10
+ revision=29b5f720990fafc302a034468455426dd662e101
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 29b5f720990fafc302a034468455426dd662e101:tested/linux-3.10
Counting objects: 651, done.
Compressing objects: 100% (95/95), done.
Writing objects: 100% (495/495), 90.72 KiB, done.
Total 495 (delta 400), reused 495 (delta 400)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   1569265..29b5f72  29b5f720990fafc302a034468455426dd662e101 -> tested/linux-3.10
+ exit 0


--===============4265193807059029291==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4265193807059029291==--

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:41:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKtp-0007L3-Eo; Fri, 14 Feb 2014 15:41:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEKtl-0007Kn-N3
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:41:54 +0000
Received: from [193.109.254.147:50165] by server-2.bemta-14.messagelabs.com id
	C5/0C-01236-0493EF25; Fri, 14 Feb 2014 15:41:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392392510!719462!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17792 invoked from network); 14 Feb 2014 15:41:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:41:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102592226"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:41:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 10:41:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEKtV-0008NU-CU;
	Fri, 14 Feb 2014 15:41:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEKtU-00074l-Ud;
	Fri, 14 Feb 2014 15:41:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24878-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 14 Feb 2014 15:41:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 24878: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4265193807059029291=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4265193807059029291==
Content-Type: text/plain

flight 24878 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24878/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-win7-amd64  7 windows-install              fail like 24797
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24797
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24797

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 linux                29b5f720990fafc302a034468455426dd662e101
baseline version:
 linux                1569265782ef26ed77ce45ebeb0676f11d4c114a

------------------------------------------------------------
People who touched revisions under test:
  "David S. Miller" <davem@davemloft.net>
  Akash Goel <akash.goel@intel.com>
  AKASHI Takahiro <takahiro.akashi@linaro.org>
  Al Viro <viro@ZenIV.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Antti Palosaari <crope@iki.fi>
  Ben Skeggs <bskeggs@redhat.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Boaz Harrosh <bharrosh@panasas.com>
  Borislav Petkov <bp@suse.de>
  Brecht Machiels <brecht@mos6581.org>
  Brennan Shacklett <bpshacklett@gmail.com>
  Brian Norris <computersforpeace@gmail.com>
  Chris Ball <cjb@laptop.org>
  Chris Metcalf <cmetcalf@tilera.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Lameter <cl@linux.com>
  Chuck Anderson <chuck.anderson@oracle.com>
  Dan Duval <dan.duval@oracle.com>
  Daniel Santos <daniel.santos@pobox.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Rientjes <rientjes@google.com>
  Dirk Brandewie <dirk.j.brandewie@intel.com>
  Eric Paris <eparis@redhat.com>
  Florian Fainelli <florian@openwrt.org>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Ira Weiny <ira.weiny@intel.com>
  James Ralston <james.d.ralston@intel.com>
  Jiri Slaby <jslaby@suse.cz>
  Joe Thornber <ejt@redhat.com>
  Johannes Weiner <hannes@cmpxchg.org>
  John Stultz <john.stultz@linaro.org>
  Jonas Gorski <jogo@openwrt.org>
  Josh Triplett <josh@joshtriplett.org>
  Kamil Debski <k.debski@samsung.com>
  Len Brown <len.brown@intel.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Ludovic Desroches <ludovic.desroches@atmel.com>
  Luiz Capitulino <lcapitulino@redhat.com>
  Maarten Lankhorst <maarten.lankhorst@canonical.com>
  Malcolm Priestley <tvboxspy@gmail.com>
  Marek OlÅ¡Ã¡k <marek.olsak@amd.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@linaro.org>
  Mauro Carvalho Chehab <m.chehab@samsung.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Michael Grzeschik <m.grzeschik@pengutronix.de>
  Michel DÃ¤nzer <michel.daenzer@amd.com>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@suse.cz>
  Mikulas Patocka <mpatocka@redhat.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nell Hardcastle <nell@spicious.com>
  Nicolas Ferre <nicolas.ferre@atmel.com>
  Olivier Grenie <olivier.grenie@parrot.com>
  Patrick Boettcher <pboettcher@kernellabs.com>
  Paul Moore <pmoore@redhat.com>
  Pekka Enberg <penberg@kernel.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ray Jui <rjui@broadcom.com>
  Richard Guy Briggs <rgb@redhat.com>
  Roland Dreier <roland@purestorage.com>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sasha.levin@oracle.com>
  Seth Heasley <seth.heasley@intel.com>
  Seungwon Jeon <tgih.jun@samsung.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Steven Rostedt <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Todd Previte <tprevite@gmail.com>
  Trond Myklebust <trond.myklebust@primarydata.com>
  Viresh Kumar <viresh.kumar@linaro.org>
  Wanlong Gao <gaowanlong@cn.fujitsu.com>
  Weston Andros Adamson <dros@netapp.com>
  Weston Andros Adamson <dros@primarydata.com>
  Wolfram Sang <wsa@the-dreams.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=29b5f720990fafc302a034468455426dd662e101
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 29b5f720990fafc302a034468455426dd662e101
+ branch=linux-3.10
+ revision=29b5f720990fafc302a034468455426dd662e101
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 29b5f720990fafc302a034468455426dd662e101:tested/linux-3.10
Counting objects: 651, done.
Compressing objects: 100% (95/95), done.
Writing objects: 100% (495/495), 90.72 KiB, done.
Total 495 (delta 400), reused 495 (delta 400)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   1569265..29b5f72  29b5f720990fafc302a034468455426dd662e101 -> tested/linux-3.10
+ exit 0


--===============4265193807059029291==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4265193807059029291==--

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:43:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEKus-0007TZ-6i; Fri, 14 Feb 2014 15:43:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WEKup-0007TG-TJ
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 15:43:00 +0000
Received: from [85.158.137.68:54546] by server-2.bemta-3.messagelabs.com id
	38/A3-06531-3893EF25; Fri, 14 Feb 2014 15:42:59 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392392576!937293!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15847 invoked from network); 14 Feb 2014 15:42:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:42:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102592659"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:42:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:42:55 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WEKul-0004qJ-43; Fri, 14 Feb 2014 15:42:55 +0000
Message-ID: <52FE397E.8050502@citrix.com>
Date: Fri, 14 Feb 2014 15:42:54 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<1392378624-6123-3-git-send-email-andrew.bennieston@citrix.com>
	<20140214141125.GB18398@zion.uk.xensource.com>
	<52FE2ED5.3020905@citrix.com>
	<20140214153620.GE18398@zion.uk.xensource.com>
In-Reply-To: <20140214153620.GE18398@zion.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 15:36, Wei Liu wrote:
> On Fri, Feb 14, 2014 at 02:57:25PM +0000, Andrew Bennieston wrote:
>> On 14/02/14 14:11, Wei Liu wrote:
>>> On Fri, Feb 14, 2014 at 11:50:21AM +0000, Andrew J. Bennieston wrote:
>>> [...]
>>>>
>>>> +extern unsigned int xenvif_max_queues;
>>>> +
>>>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>>>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>>>> index 4cde112..4dc092c 100644
>>>> --- a/drivers/net/xen-netback/interface.c
>>>> +++ b/drivers/net/xen-netback/interface.c
>>>> @@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>>>   	char name[IFNAMSIZ] = {};
>>>>
>>>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>>>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>>>> +	/* Allocate a netdev with the max. supported number of queues.
>>>> +	 * When the guest selects the desired number, it will be updated
>>>> +	 * via netif_set_real_num_tx_queues().
>>>> +	 */
>>>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>>>> +						  xenvif_max_queues);
>>>
>>> Indentation.
>>
>> How would you like this to be indented? The CodingStyle says (and I quote):
>> Chapter 2: Breaking long lines and strings:
>> 	... descendants are always substantially shorter than the
>> 	parent and placed substantially to the right...
>>
>> There is no further advice to this point in CodingStyle, so please
>> explain how you'd prefer this.
>>
>
> Kernel code in general uses an indentation style like
>
> 	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> 			      xenvif_max_queues);
>
> You can find lots of examples in existing kernel code.
>
> Probably "place substantially to the right" is just too vague. :-)

Ah, I think the issue here is that my editor was configured to have a 
tab width of 4, so the offending line _did_ look to be aligned to the 
opening ( of the line above, to me. I'll set the appropriate tab width 
and change it.

Cheers,
Andrew
>
> Wei.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:50:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL2G-0007nb-8P; Fri, 14 Feb 2014 15:50:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL2E-0007nR-Qz
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:50:39 +0000
Received: from [85.158.139.211:29252] by server-6.bemta-5.messagelabs.com id
	BC/B9-14342-E4B3EF25; Fri, 14 Feb 2014 15:50:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392393035!4005691!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29456 invoked from network); 14 Feb 2014 15:50:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:50:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823589"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:50:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:50:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL2A-0004vn-2O;
	Fri, 14 Feb 2014 15:50:34 +0000
Date: Fri, 14 Feb 2014 15:50:33 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 0/10] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series removes any need for maintenance interrupts for both
hardware and software interrupts in Xen.
It achieves the goal by using the GICH_LR_HW bit for hardware interrupts
and by checking the status of the GICH_LR registers on return to guest,
clearing the registers that are invalid and handling the lifecycle of
the corresponding interrupts in Xen data structures.


Changes in v2:
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr;
- simplify gic_clear_lrs;
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1);
- add a patch to keep track of the LR number in pending_irq;
- add a patch to set GICH_LR_PENDING to inject a second irq while the
first one is still active;
- add a patch to simplify and reduce the usage of gic.lock;
- add a patch to reduce the usage of vgic.lock;
- add a patch to use GICH_ELSR[01] to avoid reading all the GICH_LRs in
gic_clear_lrs;
- add a debug patch to print more info in gic_dump_info.


Stefano Stabellini (10):
      xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
      xen/arm: support HW interrupts in gic_set_lr
      xen/arm: do not request maintenance_interrupts
      xen/arm: set GICH_HCR_NPIE if all the LRs are in use
      xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq
      xen/arm: second irq injection while the first irq is still inflight
      xen/arm: don't protect GICH and lr_queue accesses with gic.lock
      xen/arm: avoid taking unconditionally the vgic.lock in gic_clear_lrs
      xen/arm: use GICH_ELSR[01] to avoid reading all the GICH_LRs in gic_clear_lrs
      xen/arm: print more info in gic_dump_info, keep gic_lr sync'ed

 xen/arch/arm/domain.c        |    2 +-
 xen/arch/arm/gic.c           |  216 +++++++++++++++++++++++++++++--------------------------
 xen/arch/arm/irq.c           |    2 +-
 xen/arch/arm/time.c          |    2 +-
 xen/arch/arm/vgic.c          |   34 ++++++---
 xen/arch/arm/vtimer.c        |    4 +-
 xen/include/asm-arm/domain.h |    1 +
 xen/include/asm-arm/gic.h    |    4 +-
 8 files changed, 151 insertions(+), 114 deletions(-)


git://xenbits.xen.org/people/sstabellini/xen-unstable.git no_maintenance_interrupts-v2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3T-0007s4-7u; Fri, 14 Feb 2014 15:51:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3R-0007rV-Vf
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:51:54 +0000
Received: from [193.109.254.147:31424] by server-5.bemta-14.messagelabs.com id
	1F/33-16688-99B3EF25; Fri, 14 Feb 2014 15:51:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392393110!4409914!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6793 invoked from network); 14 Feb 2014 15:51:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:51:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102595722"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-JL;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:38 +0000
Message-ID: <1392393098-7351-10-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 10/10] xen/arm: print more info in
	gic_dump_info, keep gic_lr sync'ed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For each inflight and pending irq, print the GIC_IRQ_GUEST_ENABLED,
GIC_IRQ_GUEST_PENDING and GIC_IRQ_GUEST_VISIBLE status bits.

To get consistent information from gic_dump_info, we need to take the
vgic.lock before walking the inflight and lr_pending lists.

We also need to keep v->arch.gic_lr in sync with the GICH_LR registers.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c |   19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index b00f77c..af0994a 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -642,6 +642,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
             ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
 
     GICH[GICH_LR + lr] = lr_reg;
+    v->arch.gic_lr[lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -708,11 +709,15 @@ static void _gic_clear_lr(struct vcpu *v, int i, int vgic_locked)
     {
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+        {
             GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+            v->arch.gic_lr[i] = lr | GICH_LR_PENDING;
+        }
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
         GICH[GICH_LR + i] = 0;
+        v->arch.gic_lr[i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
         if ( !vgic_locked ) spin_lock(&v->arch.vgic.lock);
@@ -986,6 +991,8 @@ void gic_dump_info(struct vcpu *v)
     struct pending_irq *p;
 
     printk("GICH_LRs (vcpu %d) mask=%"PRIx64"\n", v->vcpu_id, v->arch.lr_mask);
+
+    spin_lock(&v->arch.vgic.lock);
     if ( v == current )
     {
         for ( i = 0; i < nr_lrs; i++ )
@@ -997,14 +1004,20 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Inflight irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
     {
-        printk("Pending irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Pending irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
-
+    spin_unlock(&v->arch.vgic.lock);
 }
 
 void __cpuinit init_maintenance_interrupt(void)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3S-0007rb-Cz; Fri, 14 Feb 2014 15:51:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3Q-0007rJ-JB
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:51:52 +0000
Received: from [193.109.254.147:40190] by server-12.bemta-14.messagelabs.com
	id 5D/19-17220-79B3EF25; Fri, 14 Feb 2014 15:51:51 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392393110!4409914!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6091 invoked from network); 14 Feb 2014 15:51:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:51:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102595721"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Ir;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:37 +0000
Message-ID: <1392393098-7351-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 09/10] xen/arm: use GICH_ELSR[01] to
	avoid reading all the GICH_LRs in gic_clear_lrs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Read GICH_ELSR0 and GICH_ELSR1 to figure out which GICH_LR registers
do not contain valid interrupts. Only call _gic_clear_lr on those.

If a cpu is trying to inject an interrupt that is already inflight on
another cpu, it sets GIC_IRQ_GUEST_PENDING and sends an SGI to it.
The target cpu is going to be interrupted and _gic_clear_lr, called by
gic_clear_lrs, will take care of setting GICH_LR_PENDING if the irq is
still active.
To make sure that _gic_clear_lr is called for this irq, avoid filtering
lr_mask with GICH_ELSR[01] in this case (so that we end up calling
_gic_clear_lr on all the GICH_LRs). Use a simple per-cpu bit,
lr_clear_all, set by the sender cpu and cleared by the receiver cpu, to
decide whether we need to evaluate all GICH_LRs or can filter them.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c        |   27 ++++++++++++++++++++++-----
 xen/arch/arm/vgic.c       |    1 +
 xen/include/asm-arm/gic.h |    1 +
 3 files changed, 24 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 54be9ca..b00f77c 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -55,6 +55,7 @@ static struct {
 static irq_desc_t irq_desc[NR_IRQS];
 static DEFINE_PER_CPU(irq_desc_t[NR_LOCAL_IRQS], local_irq_desc);
 static DEFINE_PER_CPU(uint64_t, lr_mask);
+static DEFINE_PER_CPU(uint8_t, lr_clear_all);
 
 static unsigned nr_lrs;
 
@@ -67,7 +68,7 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
-static void gic_clear_lrs(struct vcpu *v);
+static void gic_clear_lrs(struct vcpu *v, bool_t all);
 
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
@@ -109,6 +110,7 @@ void gic_save_state(struct vcpu *v)
         v->arch.gic_lr[i] = GICH[GICH_LR + i];
     v->arch.lr_mask = this_cpu(lr_mask);
     v->arch.gic_apr = GICH[GICH_APR];
+    this_cpu(lr_clear_all) = 0;
     /* Disable until next VCPU scheduled */
     GICH[GICH_HCR] = 0;
     isb();
@@ -122,13 +124,14 @@ void gic_restore_state(struct vcpu *v)
         return;
 
     this_cpu(lr_mask) = v->arch.lr_mask;
+    this_cpu(lr_clear_all) = 0;
     for ( i=0; i<nr_lrs; i++)
         GICH[GICH_LR + i] = v->arch.gic_lr[i];
     GICH[GICH_APR] = v->arch.gic_apr;
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
-    gic_clear_lrs(v);
+    gic_clear_lrs(v, 1);
     gic_restore_pending_irqs(v);
 }
 
@@ -372,6 +375,7 @@ static void __cpuinit gic_hyp_init(void)
 
     GICH[GICH_MISR] = GICH_MISR_EOI;
     this_cpu(lr_mask) = 0ULL;
+    this_cpu(lr_clear_all) = 0;
 }
 
 static void __cpuinit gic_hyp_disable(void)
@@ -726,11 +730,19 @@ static void _gic_clear_lr(struct vcpu *v, int i, int vgic_locked)
     }
 }
 
-static void gic_clear_lrs(struct vcpu *v)
+static void gic_clear_lrs(struct vcpu *v, bool_t all)
 {
     int i = 0;
+    uint64_t elsr;
+
+    if ( !all )
+    {
+        elsr = GICH[GICH_ELSR0] | (((uint64_t) GICH[GICH_ELSR1]) << 32);
+        elsr &= this_cpu(lr_mask);
+    } else
+        elsr = this_cpu(lr_mask);
 
-    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
+    while ((i = find_next_bit((const long unsigned int *) &elsr,
                               nr_lrs, i)) < nr_lrs) {
 
         _gic_clear_lr(v, i, 0);
@@ -743,6 +755,11 @@ void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p)
     _gic_clear_lr(v, p->lr, 1);
 }
 
+void gic_set_clear_lrs_other(struct vcpu *v)
+{
+    set_bit(0, &per_cpu(lr_clear_all, v->processor));
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -796,7 +813,7 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    gic_clear_lrs(current);
+    gic_clear_lrs(current, test_and_clear_bit(0, &this_cpu(lr_clear_all)));
 
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 4bfab26..02ad3cd 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -716,6 +716,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
             return;
         } else {
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
+            gic_set_clear_lrs_other(v);
             goto out;
         }
     }
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6de0d9b..8d36f7c 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -185,6 +185,7 @@ extern int gic_route_irq_to_guest(struct domain *d,
                                   const struct dt_irq *irq,
                                   const char * devname);
 extern void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p);
+extern void gic_set_clear_lrs_other(struct vcpu *v);
 
 /* Accept an interrupt from the GIC and dispatch its handler */
 extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3S-0007rj-Qh; Fri, 14 Feb 2014 15:51:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3R-0007rL-7P
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:51:53 +0000
Received: from [193.109.254.147:40259] by server-7.bemta-14.messagelabs.com id
	2E/73-23424-89B3EF25; Fri, 14 Feb 2014 15:51:52 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392393110!4409914!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6638 invoked from network); 14 Feb 2014 15:51:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:51:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102595720"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Hs;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:35 +0000
Message-ID: <1392393098-7351-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 07/10] xen/arm: don't protect GICH and
	lr_queue accesses with gic.lock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

GICH is banked; protect accesses to it by disabling interrupts.
Protect lr_queue accesses with the vgic.lock only.
The gic.lock now only protects accesses to the GICD.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c  |   22 +++-------------------
 xen/arch/arm/vgic.c |   12 ++++++++++--
 2 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0955d48..6386ccb 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -667,19 +667,14 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
 
-    spin_lock(&gic.lock);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock(&gic.lock);
 }
 
 void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
-    unsigned long flags;
-
-    spin_lock_irqsave(&gic.lock, flags);
 
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
@@ -687,15 +682,11 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
             gic_set_lr(v, i, irq, state, priority);
-            goto out;
+            return;
         }
     }
 
     gic_add_to_lr_pending(v, irq, priority);
-
-out:
-    spin_unlock_irqrestore(&gic.lock, flags);
-    return;
 }
 
 static void _gic_clear_lr(struct vcpu *v, int i)
@@ -717,8 +708,6 @@ static void _gic_clear_lr(struct vcpu *v, int i)
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
-        spin_lock(&gic.lock);
-
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
@@ -732,8 +721,6 @@ static void _gic_clear_lr(struct vcpu *v, int i)
             gic_add_to_lr_pending(v, irq, p->priority);
         } else
             list_del_init(&p->inflight);
-
-        spin_unlock(&gic.lock);
     }
 }
 
@@ -767,11 +754,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if ( i >= nr_lrs ) return;
 
-        spin_lock_irqsave(&gic.lock, flags);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&gic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     }
 
 }
@@ -779,13 +766,10 @@ static void gic_restore_pending_irqs(struct vcpu *v)
 void gic_clear_pending_irqs(struct vcpu *v)
 {
     struct pending_irq *p, *t;
-    unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
     v->arch.lr_mask = 0;
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 static void gic_inject_irq_start(void)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 210ac39..4bfab26 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -365,12 +365,15 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     struct pending_irq *p;
     unsigned int irq;
     int i = 0;
+    unsigned long flags;
 
     while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_queues(v, irq);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
         if ( p->desc != NULL )
             p->desc->handler->disable(p->desc);
         i++;
@@ -391,8 +394,13 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
              vcpu_info(current, evtchn_upcall_pending) &&
              list_empty(&p->inflight) )
             vgic_vcpu_inject_irq(v, irq);
-        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+        else {
+            unsigned long flags;
+            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+                gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        }
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3T-0007s4-7u; Fri, 14 Feb 2014 15:51:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3R-0007rV-Vf
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:51:54 +0000
Received: from [193.109.254.147:31424] by server-5.bemta-14.messagelabs.com id
	1F/33-16688-99B3EF25; Fri, 14 Feb 2014 15:51:53 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392393110!4409914!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6793 invoked from network); 14 Feb 2014 15:51:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:51:52 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102595722"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-JL;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:38 +0000
Message-ID: <1392393098-7351-10-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 10/10] xen/arm: print more info in
	gic_dump_info, keep gic_lr sync'ed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For each inflight and pending irq, print the state of GIC_IRQ_GUEST_ENABLED,
GIC_IRQ_GUEST_PENDING and GIC_IRQ_GUEST_VISIBLE.

In order to get consistent information from gic_dump_info, we need to
take the vgic.lock before walking the inflight and lr_pending lists.

We also need to keep v->arch.gic_lr in sync with GICH_LR registers.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c |   19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index b00f77c..af0994a 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -642,6 +642,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
             ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
 
     GICH[GICH_LR + lr] = lr_reg;
+    v->arch.gic_lr[lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -708,11 +709,15 @@ static void _gic_clear_lr(struct vcpu *v, int i, int vgic_locked)
     {
         if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+        {
             GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+            v->arch.gic_lr[i] = lr | GICH_LR_PENDING;
+        }
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
         GICH[GICH_LR + i] = 0;
+        v->arch.gic_lr[i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
         if ( !vgic_locked ) spin_lock(&v->arch.vgic.lock);
@@ -986,6 +991,8 @@ void gic_dump_info(struct vcpu *v)
     struct pending_irq *p;
 
     printk("GICH_LRs (vcpu %d) mask=%"PRIx64"\n", v->vcpu_id, v->arch.lr_mask);
+
+    spin_lock(&v->arch.vgic.lock);
     if ( v == current )
     {
         for ( i = 0; i < nr_lrs; i++ )
@@ -997,14 +1004,20 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Inflight irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
     {
-        printk("Pending irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Pending irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
-
+    spin_unlock(&v->arch.vgic.lock);
 }
 
 void __cpuinit init_maintenance_interrupt(void)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3U-0007sw-PP; Fri, 14 Feb 2014 15:51:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3S-0007ra-LP
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:51:54 +0000
Received: from [193.109.254.147:31507] by server-12.bemta-14.messagelabs.com
	id 9F/29-17220-A9B3EF25; Fri, 14 Feb 2014 15:51:54 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392393110!4409914!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6926 invoked from network); 14 Feb 2014 15:51:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:51:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102595723"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-HP;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:34 +0000
Message-ID: <1392393098-7351-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 06/10] xen/arm: second irq injection
	while the first irq is still inflight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
while the first one is still active.
If the first irq is already pending (not active), just clear
GIC_IRQ_GUEST_PENDING because the irq has already been injected and is
already visible to the guest.
If the irq has already been EOI'ed, clear the GICH_LR right away and
move the interrupt to lr_pending so that it will be reinjected by
gic_restore_pending_irqs on return to the guest.

If the target cpu is not the current cpu, set GIC_IRQ_GUEST_PENDING and
send an SGI. The target cpu will be interrupted and will call
gic_clear_lrs, which takes the same actions.

Do not call vgic_vcpu_inject_irq from gic_inject when
evtchn_upcall_pending is set. With that call removed, we no longer need
to special case evtchn_irq in vgic_vcpu_inject_irq.
We do, however, need to force the first injection of evtchn_irq (call
vgic_vcpu_inject_irq) from vgic_enable_irqs, because
evtchn_upcall_pending is already set by common code on vcpu creation.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c        |   82 +++++++++++++++++++++++++--------------------
 xen/arch/arm/vgic.c       |   18 +++++++---
 xen/include/asm-arm/gic.h |    1 +
 3 files changed, 61 insertions(+), 40 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 5fca5be..0955d48 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -698,51 +698,64 @@ out:
     return;
 }
 
-static void gic_clear_lrs(struct vcpu *v)
+static void _gic_clear_lr(struct vcpu *v, int i)
 {
-    struct pending_irq *p;
-    int i = 0, irq;
+    int irq;
     uint32_t lr;
-    bool_t inflight;
+    struct pending_irq *p;
 
     ASSERT(!local_irq_is_enabled());
 
-    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
-                              nr_lrs, i)) < nr_lrs) {
-        lr = GICH[GICH_LR + i];
-        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+    lr = GICH[GICH_LR + i];
+    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+    p = irq_to_pending(v, irq);
+    if ( lr & GICH_LR_ACTIVE )
+    {
+        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+    } else if ( lr & GICH_LR_PENDING ) {
+        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    } else {
+        spin_lock(&gic.lock);
+
+        GICH[GICH_LR + i] = 0;
+        clear_bit(i, &this_cpu(lr_mask));
+
+        if ( p->desc != NULL )
+            p->desc->status &= ~IRQ_INPROGRESS;
+        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        p->lr = nr_lrs;
+        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
-            inflight = 0;
-            GICH[GICH_LR + i] = 0;
-            clear_bit(i, &this_cpu(lr_mask));
-
-            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
-            spin_lock(&gic.lock);
-            p = irq_to_pending(v, irq);
-            if ( p->desc != NULL )
-                p->desc->status &= ~IRQ_INPROGRESS;
-            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-            p->lr = nr_lrs;
-            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-            {
-                inflight = 1;
-                gic_add_to_lr_pending(v, irq, p->priority);
-            }
-            spin_unlock(&gic.lock);
-            if ( !inflight )
-            {
-                spin_lock(&v->arch.vgic.lock);
-                list_del_init(&p->inflight);
-                spin_unlock(&v->arch.vgic.lock);
-            }
+            gic_add_to_lr_pending(v, irq, p->priority);
+        } else
+            list_del_init(&p->inflight);
 
-        }
+        spin_unlock(&gic.lock);
+    }
+}
+
+static void gic_clear_lrs(struct vcpu *v)
+{
+    int i = 0;
 
+    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+
+        spin_lock(&v->arch.vgic.lock);
+        _gic_clear_lr(v, i);
+        spin_unlock(&v->arch.vgic.lock);
         i++;
     }
 }
 
+void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p)
+{
+    _gic_clear_lr(v, p->lr);
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -801,9 +814,6 @@ void gic_inject(void)
 {
     gic_clear_lrs(current);
 
-    if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
-
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
         gic_inject_irq_stop();
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index da15f4d..210ac39 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -387,7 +387,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        if ( irq == v->domain->arch.evtchn_irq &&
+             vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            vgic_vcpu_inject_irq(v, irq);
+        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
@@ -696,10 +700,16 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     if ( !list_empty(&n->inflight) )
     {
-        if ( (irq != current->domain->arch.evtchn_irq) ||
-             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
+        if ( v == current )
+        {
+            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
+            gic_set_clear_lr(v, n);
+            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+            return;
+        } else {
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        goto out;
+            goto out;
+        }
     }
 
     /* vcpu offline */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6fce5c2..6de0d9b 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -184,6 +184,7 @@ extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
 extern int gic_route_irq_to_guest(struct domain *d,
                                   const struct dt_irq *irq,
                                   const char * devname);
+extern void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p);
 
 /* Accept an interrupt from the GIC and dispatch its handler */
 extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:51:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3S-0007rj-Qh; Fri, 14 Feb 2014 15:51:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3R-0007rL-7P
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:51:53 +0000
Received: from [193.109.254.147:40259] by server-7.bemta-14.messagelabs.com id
	2E/73-23424-89B3EF25; Fri, 14 Feb 2014 15:51:52 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392393110!4409914!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6638 invoked from network); 14 Feb 2014 15:51:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:51:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102595720"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Hs;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:35 +0000
Message-ID: <1392393098-7351-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 07/10] xen/arm: don't protect GICH and
	lr_queue accesses with gic.lock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

GICH is banked; protect accesses to it by disabling interrupts.
Protect lr_queue accesses with the vgic.lock only.
gic.lock now only protects accesses to GICD.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c  |   22 +++-------------------
 xen/arch/arm/vgic.c |   12 ++++++++++--
 2 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0955d48..6386ccb 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -667,19 +667,14 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
 
-    spin_lock(&gic.lock);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock(&gic.lock);
 }
 
 void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
-    unsigned long flags;
-
-    spin_lock_irqsave(&gic.lock, flags);
 
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
@@ -687,15 +682,11 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
             gic_set_lr(v, i, irq, state, priority);
-            goto out;
+            return;
         }
     }
 
     gic_add_to_lr_pending(v, irq, priority);
-
-out:
-    spin_unlock_irqrestore(&gic.lock, flags);
-    return;
 }
 
 static void _gic_clear_lr(struct vcpu *v, int i)
@@ -717,8 +708,6 @@ static void _gic_clear_lr(struct vcpu *v, int i)
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
-        spin_lock(&gic.lock);
-
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
@@ -732,8 +721,6 @@ static void _gic_clear_lr(struct vcpu *v, int i)
             gic_add_to_lr_pending(v, irq, p->priority);
         } else
             list_del_init(&p->inflight);
-
-        spin_unlock(&gic.lock);
     }
 }
 
@@ -767,11 +754,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if ( i >= nr_lrs ) return;
 
-        spin_lock_irqsave(&gic.lock, flags);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&gic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     }
 
 }
@@ -779,13 +766,10 @@ static void gic_restore_pending_irqs(struct vcpu *v)
 void gic_clear_pending_irqs(struct vcpu *v)
 {
     struct pending_irq *p, *t;
-    unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
     v->arch.lr_mask = 0;
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 static void gic_inject_irq_start(void)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 210ac39..4bfab26 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -365,12 +365,15 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     struct pending_irq *p;
     unsigned int irq;
     int i = 0;
+    unsigned long flags;
 
     while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_queues(v, irq);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
         if ( p->desc != NULL )
             p->desc->handler->disable(p->desc);
         i++;
@@ -391,8 +394,13 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
              vcpu_info(current, evtchn_upcall_pending) &&
              list_empty(&p->inflight) )
             vgic_vcpu_inject_irq(v, irq);
-        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+        else {
+            unsigned long flags;
+            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+                gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        }
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3u-000814-RI; Fri, 14 Feb 2014 15:52:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3s-0007zk-8l
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:20 +0000
Received: from [85.158.137.68:58317] by server-16.bemta-3.messagelabs.com id
	41/CD-29917-3BB3EF25; Fri, 14 Feb 2014 15:52:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392393136!626381!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32347 invoked from network); 14 Feb 2014 15:52:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:18 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823980"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:45 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-CQ;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:29 +0000
Message-ID: <1392393098-7351-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 01/10] xen/arm: remove unused virtual
	parameter from vgic_vcpu_inject_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/domain.c     |    2 +-
 xen/arch/arm/gic.c        |    2 +-
 xen/arch/arm/irq.c        |    2 +-
 xen/arch/arm/time.c       |    2 +-
 xen/arch/arm/vgic.c       |    4 ++--
 xen/arch/arm/vtimer.c     |    4 ++--
 xen/include/asm-arm/gic.h |    2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..244738d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -791,7 +791,7 @@ void vcpu_mark_events_pending(struct vcpu *v)
     if ( already_pending )
         return;
 
-    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq, 1);
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
 }
 
 /*
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 50b3a38..acf7195 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -748,7 +748,7 @@ int gic_events_need_delivery(void)
 void gic_inject(void)
 {
     if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
+        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3e326b0..5daa269 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -159,7 +159,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->arch.eoi_cpu = smp_processor_id();
 
         /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
+        vgic_vcpu_inject_irq(d->vcpu[0], irq);
         goto out_no_end;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 68b939d..0548201 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -215,7 +215,7 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
     WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
-    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
+    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
 }
 
 /* Route timer's IRQ on this CPU */
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 90e9707..7d10227 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -455,7 +455,7 @@ static int vgic_to_sgi(struct vcpu *v, register_t sgir)
                      sgir, vcpu_mask);
             continue;
         }
-        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq, 1);
+        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq);
     }
     return 1;
 }
@@ -683,7 +683,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
-void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
+void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 {
     int idx = irq >> 2, byte = irq & 0x3;
     uint8_t priority;
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..87be11e 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -34,14 +34,14 @@ static void phys_timer_expired(void *data)
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_PENDING;
     if ( !(t->ctl & CNTx_CTL_MASK) )
-        vgic_vcpu_inject_irq(t->v, t->irq, 1);
+        vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 static void virt_timer_expired(void *data)
 {
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_MASK;
-    vgic_vcpu_inject_irq(t->v, t->irq, 1);
+    vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 int vcpu_domain_init(struct domain *d)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 071280b..6fce5c2 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -162,7 +162,7 @@ extern void domain_vgic_free(struct domain *d);
 
 extern int vcpu_vgic_init(struct vcpu *v);
 
-extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
+extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3v-00082D-Ei; Fri, 14 Feb 2014 15:52:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3t-000808-71
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:21 +0000
Received: from [85.158.137.68:20900] by server-17.bemta-3.messagelabs.com id
	35/F7-22569-4BB3EF25; Fri, 14 Feb 2014 15:52:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392393136!626381!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32456 invoked from network); 14 Feb 2014 15:52:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823981"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Gw;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:33 +0000
Message-ID: <1392393098-7351-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 05/10] xen/arm: keep track of the GICH_LR
	used for the irq in struct pending_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c           |    6 ++++--
 xen/include/asm-arm/domain.h |    1 +
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0928aca..5fca5be 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -641,6 +641,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    p->lr = lr;
 }
 
 static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
@@ -721,6 +722,7 @@ static void gic_clear_lrs(struct vcpu *v)
             if ( p->desc != NULL )
                 p->desc->status &= ~IRQ_INPROGRESS;
             clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+            p->lr = nr_lrs;
             if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                     test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
             {
@@ -984,12 +986,12 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d\n", p->irq);
+        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
     {
-        printk("Pending irq=%d\n", p->irq);
+        printk("Pending irq=%d lr=%u\n", p->irq, p->lr);
     }
 
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..7b636c8 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -59,6 +59,7 @@ struct pending_irq
 #define GIC_IRQ_GUEST_VISIBLE  1
 #define GIC_IRQ_GUEST_ENABLED  2
     unsigned long status;
+    uint8_t lr;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     uint8_t priority;
     /* inflight is used to append instances of pending_irq to
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3w-00082z-2F; Fri, 14 Feb 2014 15:52:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3t-00080B-VI
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:22 +0000
Received: from [85.158.137.68:63338] by server-8.bemta-3.messagelabs.com id
	FB/40-16039-5BB3EF25; Fri, 14 Feb 2014 15:52:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392393138!691212!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6608 invoked from network); 14 Feb 2014 15:52:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823982"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:45 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Cv;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:30 +0000
Message-ID: <1392393098-7351-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 02/10] xen/arm: support HW interrupts in
	gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW.

Remove the code to EOI a physical interrupt on behalf of the guest
because it has become unnecessary.

Also add a struct vcpu* parameter to gic_set_lr.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c |   52 +++++++++++++++++-----------------------------------
 1 file changed, 17 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index acf7195..64c8aa7 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -618,20 +618,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     return rc;
 }
 
-static inline void gic_set_lr(int lr, unsigned int virtual_irq,
+static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
-    struct pending_irq *p = irq_to_pending(current, virtual_irq);
+    struct pending_irq *p = irq_to_pending(v, irq);
+    uint32_t lr_reg;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
+    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
         ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
-        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+        ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_reg |= GICH_LR_HW |
+            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -666,7 +670,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock(&gic.lock);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
+void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
@@ -679,12 +683,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, virtual_irq, state, priority);
+            gic_set_lr(v, i, irq, state, priority);
             goto out;
         }
     }
 
-    gic_add_to_lr_pending(v, virtual_irq, priority);
+    gic_add_to_lr_pending(v, irq, priority);
 
 out:
     spin_unlock_irqrestore(&gic.lock, flags);
@@ -703,7 +707,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         if ( i >= nr_lrs ) return;
 
         spin_lock_irqsave(&gic.lock, flags);
-        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
+        gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
         spin_unlock_irqrestore(&gic.lock, flags);
@@ -904,15 +908,9 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
+    int i = 0, virq;
     uint32_t lr;
     struct vcpu *v = current;
     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
@@ -920,10 +918,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
     while ((i = find_next_bit((const long unsigned int *) &eisr,
                               64, i)) < 64) {
         struct pending_irq *p, *p2;
-        int cpu;
         bool_t inflight;
 
-        cpu = -1;
         inflight = 0;
 
         spin_lock_irq(&gic.lock);
@@ -933,12 +929,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
         clear_bit(i, &this_cpu(lr_mask));
 
         p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
+        if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
@@ -950,7 +942,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
 
         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
             p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2->irq, GICH_LR_PENDING, p2->priority);
+            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
             list_del_init(&p2->lr_queue);
             set_bit(i, &this_cpu(lr_mask));
         }
@@ -963,16 +955,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
             spin_unlock_irq(&v->arch.vgic.lock);
         }
 
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
         i++;
     }
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3w-00082z-2F; Fri, 14 Feb 2014 15:52:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3t-00080B-VI
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:22 +0000
Received: from [85.158.137.68:63338] by server-8.bemta-3.messagelabs.com id
	FB/40-16039-5BB3EF25; Fri, 14 Feb 2014 15:52:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392393138!691212!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6608 invoked from network); 14 Feb 2014 15:52:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823982"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:45 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Cv;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:30 +0000
Message-ID: <1392393098-7351-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 02/10] xen/arm: support HW interrupts in
	gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW.

Remove the code to EOI a physical interrupt on behalf of the guest
because it has become unnecessary.

Also add a struct vcpu* parameter to gic_set_lr.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c |   52 +++++++++++++++++-----------------------------------
 1 file changed, 17 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index acf7195..64c8aa7 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -618,20 +618,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     return rc;
 }
 
-static inline void gic_set_lr(int lr, unsigned int virtual_irq,
+static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
-    struct pending_irq *p = irq_to_pending(current, virtual_irq);
+    struct pending_irq *p = irq_to_pending(v, irq);
+    uint32_t lr_reg;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
+    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
         ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
-        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+        ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_reg |= GICH_LR_HW |
+            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -666,7 +670,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock(&gic.lock);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
+void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
@@ -679,12 +683,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, virtual_irq, state, priority);
+            gic_set_lr(v, i, irq, state, priority);
             goto out;
         }
     }
 
-    gic_add_to_lr_pending(v, virtual_irq, priority);
+    gic_add_to_lr_pending(v, irq, priority);
 
 out:
     spin_unlock_irqrestore(&gic.lock, flags);
@@ -703,7 +707,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         if ( i >= nr_lrs ) return;
 
         spin_lock_irqsave(&gic.lock, flags);
-        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
+        gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
         spin_unlock_irqrestore(&gic.lock, flags);
@@ -904,15 +908,9 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
+    int i = 0, virq;
     uint32_t lr;
     struct vcpu *v = current;
     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
@@ -920,10 +918,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
     while ((i = find_next_bit((const long unsigned int *) &eisr,
                               64, i)) < 64) {
         struct pending_irq *p, *p2;
-        int cpu;
         bool_t inflight;
 
-        cpu = -1;
         inflight = 0;
 
         spin_lock_irq(&gic.lock);
@@ -933,12 +929,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
         clear_bit(i, &this_cpu(lr_mask));
 
         p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
+        if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
@@ -950,7 +942,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
 
         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
             p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2->irq, GICH_LR_PENDING, p2->priority);
+            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
             list_del_init(&p2->lr_queue);
             set_bit(i, &this_cpu(lr_mask));
         }
@@ -963,16 +955,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
             spin_unlock_irq(&v->arch.vgic.lock);
         }
 
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
         i++;
     }
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3x-00083x-3D; Fri, 14 Feb 2014 15:52:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3u-00080E-Au
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:22 +0000
Received: from [85.158.137.68:58465] by server-11.bemta-3.messagelabs.com id
	63/48-04255-5BB3EF25; Fri, 14 Feb 2014 15:52:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392393136!626381!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32542 invoked from network); 14 Feb 2014 15:52:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823983"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-IP;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:36 +0000
Message-ID: <1392393098-7351-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 08/10] xen/arm: avoid taking
	unconditionally the vgic.lock in gic_clear_lrs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We don't actually need to call _gic_clear_lr with the vgic.lock held:
the lock is only required when the GICH_LR has to be freed.
Add a boolean vgic_locked parameter to _gic_clear_lr, so that we can
avoid taking the lock when the caller already holds it, and otherwise
take it only around the code that needs it.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6386ccb..54be9ca 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -689,7 +689,7 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
     gic_add_to_lr_pending(v, irq, priority);
 }
 
-static void _gic_clear_lr(struct vcpu *v, int i)
+static void _gic_clear_lr(struct vcpu *v, int i, int vgic_locked)
 {
     int irq;
     uint32_t lr;
@@ -711,6 +711,7 @@ static void _gic_clear_lr(struct vcpu *v, int i)
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
+        if ( !vgic_locked ) spin_lock(&v->arch.vgic.lock);
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
@@ -721,6 +722,7 @@ static void _gic_clear_lr(struct vcpu *v, int i)
             gic_add_to_lr_pending(v, irq, p->priority);
         } else
             list_del_init(&p->inflight);
+        if ( !vgic_locked ) spin_unlock(&v->arch.vgic.lock);
     }
 }
 
@@ -731,16 +733,14 @@ static void gic_clear_lrs(struct vcpu *v)
     while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
                               nr_lrs, i)) < nr_lrs) {
 
-        spin_lock(&v->arch.vgic.lock);
-        _gic_clear_lr(v, i);
-        spin_unlock(&v->arch.vgic.lock);
+        _gic_clear_lr(v, i, 0);
         i++;
     }
 }
 
 void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p)
 {
-    _gic_clear_lr(v, p->lr);
+    _gic_clear_lr(v, p->lr, 1);
 }
 
 static void gic_restore_pending_irqs(struct vcpu *v)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3x-000859-R8; Fri, 14 Feb 2014 15:52:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3u-00080i-O1
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:22 +0000
Received: from [85.158.137.68:21040] by server-14.bemta-3.messagelabs.com id
	0F/96-08196-5BB3EF25; Fri, 14 Feb 2014 15:52:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392393138!691212!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6739 invoked from network); 14 Feb 2014 15:52:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823984"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-GL;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:32 +0000
Message-ID: <1392393098-7351-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 04/10] xen/arm: set GICH_HCR_UIE if all
	the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On return to guest, if there are no free LRs and we still have more
interrupts to inject, set GICH_HCR_UIE so that we receive a maintenance
interrupt once no pending interrupts are left in the LR registers.
The maintenance interrupt handler doesn't do anything anymore, but
receiving the interrupt causes gic_inject to be called on return to
guest, which clears the old LRs and injects new interrupts.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1).

---
 xen/arch/arm/gic.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index ee383bc..0928aca 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -805,8 +805,15 @@ void gic_inject(void)
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
         gic_inject_irq_stop();
-    else
+    else {
         gic_inject_irq_start();
+    }
+
+    if ( !list_empty(&current->arch.vgic.lr_pending) &&
+         this_cpu(lr_mask) == ((1 << nr_lrs) - 1) )
+        GICH[GICH_HCR] |= GICH_HCR_UIE;
+    else
+        GICH[GICH_HCR] &= ~GICH_HCR_UIE;
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3x-000859-R8; Fri, 14 Feb 2014 15:52:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3u-00080i-O1
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:22 +0000
Received: from [85.158.137.68:21040] by server-14.bemta-3.messagelabs.com id
	0F/96-08196-5BB3EF25; Fri, 14 Feb 2014 15:52:21 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392393138!691212!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6739 invoked from network); 14 Feb 2014 15:52:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823984"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-GL;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:32 +0000
Message-ID: <1392393098-7351-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 04/10] xen/arm: set GICH_HCR_UIE if all
	the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On return to guest, if there are no free LRs and we still have more
interrupts to inject, set GICH_HCR_UIE so that we receive a
maintenance interrupt when no pending interrupts are present in the LR
registers.
The maintenance interrupt handler doesn't do anything anymore, but
receiving the interrupt causes gic_inject to be called on return to
guest, which clears the old LRs and injects new interrupts.
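The condition described above can be sketched as follows. This is a minimal
illustration, not Xen code: GICH_HCR_UIE uses the GICv2 bit position, and
update_hcr, lr_mask, nr_lrs and lr_pending_nonempty are hypothetical names
standing in for this_cpu(lr_mask), nr_lrs and the lr_pending list check.

```c
#include <assert.h>
#include <stdint.h>

/* GICv2 defines GICH_HCR.UIE (underflow interrupt enable) as bit 1. */
#define GICH_HCR_UIE (1U << 1)

/* Return the new GICH_HCR value: enable the underflow ("no pending LRs")
 * interrupt only when every list register is in use and more virtual
 * interrupts are still queued for injection. */
static uint32_t update_hcr(uint32_t hcr, uint32_t lr_mask, unsigned nr_lrs,
                           int lr_pending_nonempty)
{
    if (lr_pending_nonempty && lr_mask == ((1U << nr_lrs) - 1))
        return hcr | GICH_HCR_UIE;
    else
        return hcr & ~GICH_HCR_UIE;
}
```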

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1).

---
 xen/arch/arm/gic.c |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index ee383bc..0928aca 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -805,8 +805,15 @@ void gic_inject(void)
     gic_restore_pending_irqs(current);
     if (!gic_events_need_delivery())
         gic_inject_irq_stop();
-    else
+    else {
         gic_inject_irq_start();
+    }
+
+    if ( !list_empty(&current->arch.vgic.lr_pending) &&
+         this_cpu(lr_mask) == ((1 << nr_lrs) - 1) )
+        GICH[GICH_HCR] |= GICH_HCR_UIE;
+    else
+        GICH[GICH_HCR] &= ~GICH_HCR_UIE;
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL3z-000871-80; Fri, 14 Feb 2014 15:52:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEL3v-00080p-Bf
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 15:52:23 +0000
Received: from [85.158.137.68:63434] by server-5.bemta-3.messagelabs.com id
	A4/FC-04712-6BB3EF25; Fri, 14 Feb 2014 15:52:22 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392393136!626381!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32627 invoked from network); 14 Feb 2014 15:52:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100823986"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:51:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:51:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEL3E-0004ww-Ed;
	Fri, 14 Feb 2014 15:51:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Fri, 14 Feb 2014 15:51:31 +0000
Message-ID: <1392393098-7351-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v2 03/10] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not set GICH_LR_MAINTENANCE_IRQ for every interrupt set in the
GICH_LR registers.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding interrupts
from the inflight queue if appropriate. Add the interrupt back to
lr_pending if the GIC_IRQ_GUEST_PENDING bit is still set.
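As a rough sketch of what "invalid" means here, assuming the GICv2
list-register layout (state in bits [29:28], virtual ID in bits [9:0]); the
names below are illustrative, not Xen's:

```c
#include <assert.h>
#include <stdint.h>

/* GICv2 list-register fields: state 01 = pending, 10 = active. */
#define LR_PENDING   (1U << 28)
#define LR_ACTIVE    (1U << 29)
#define LR_VIRQ_MASK 0x3ffU

/* An LR is "invalid" (finished) when neither state bit is set: the guest
 * has EOIed the interrupt and the slot can be recycled. */
static int lr_is_invalid(uint32_t lr)
{
    return !(lr & (LR_PENDING | LR_ACTIVE));
}

/* Recover the virtual IRQ number carried by a list register. */
static unsigned lr_virq(uint32_t lr)
{
    return lr & LR_VIRQ_MASK;
}
```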

Call gic_clear_lrs from gic_restore_state and on return to guest
(gic_inject).

Remove the now unused code in maintenance_interrupts and gic_irq_eoi.

In vgic_vcpu_inject_irq, if the target is a vcpu running on another cpu,
send an SGI to it to interrupt it and force it to clear the old LRs.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- simplify gic_clear_lrs.
---
 xen/arch/arm/gic.c  |   99 ++++++++++++++++++++++++++-------------------------
 xen/arch/arm/vgic.c |    3 +-
 2 files changed, 51 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 64c8aa7..ee383bc 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,6 +67,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_clear_lrs(struct vcpu *v);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -126,6 +128,7 @@ void gic_restore_state(struct vcpu *v)
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
+    gic_clear_lrs(v);
     gic_restore_pending_irqs(v);
 }
 
@@ -628,8 +631,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
-        ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+    lr_reg = state | ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
         ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
     if ( p->desc != NULL )
         lr_reg |= GICH_LR_HW |
@@ -695,6 +697,50 @@ out:
     return;
 }
 
+static void gic_clear_lrs(struct vcpu *v)
+{
+    struct pending_irq *p;
+    int i = 0, irq;
+    uint32_t lr;
+    bool_t inflight;
+
+    ASSERT(!local_irq_is_enabled());
+
+    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        lr = GICH[GICH_LR + i];
+        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+        {
+            inflight = 0;
+            GICH[GICH_LR + i] = 0;
+            clear_bit(i, &this_cpu(lr_mask));
+
+            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+            spin_lock(&gic.lock);
+            p = irq_to_pending(v, irq);
+            if ( p->desc != NULL )
+                p->desc->status &= ~IRQ_INPROGRESS;
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+            {
+                inflight = 1;
+                gic_add_to_lr_pending(v, irq, p->priority);
+            }
+            spin_unlock(&gic.lock);
+            if ( !inflight )
+            {
+                spin_lock(&v->arch.vgic.lock);
+                list_del_init(&p->inflight);
+                spin_unlock(&v->arch.vgic.lock);
+            }
+
+        }
+
+        i++;
+    }
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -751,6 +797,8 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
+    gic_clear_lrs(current);
+
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
@@ -910,53 +958,6 @@ int gicv_setup(struct domain *d)
 
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        bool_t inflight;
-
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL )
-            p->desc->status &= ~IRQ_INPROGRESS;
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, virq, p->priority);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        i++;
-    }
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d10227..da15f4d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -699,8 +699,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL4E-0008Kc-Qk; Fri, 14 Feb 2014 15:52:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEL4D-0008JO-PA
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 15:52:41 +0000
Received: from [193.109.254.147:54607] by server-8.bemta-14.messagelabs.com id
	6F/AF-18529-9CB3EF25; Fri, 14 Feb 2014 15:52:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392393159!4416255!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11789 invoked from network); 14 Feb 2014 15:52:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100824338"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:52:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:52:37 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEL49-0004xo-CK;
	Fri, 14 Feb 2014 15:52:37 +0000
Date: Fri, 14 Feb 2014 15:52:36 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140214155236.GF18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<20140214140635.GA18398@zion.uk.xensource.com>
	<52FE2DFC.8050702@citrix.com>
	<20140214152539.GD18398@zion.uk.xensource.com>
	<52FE390B.3020102@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE390B.3020102@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 03:40:59PM +0000, Andrew Bennieston wrote:
[...]
> >Wei.
> 
> Let me attempt to clear this up. Bear with me...
> 
> Queue selection is a decision by a transmitting system about which
> queue it uses for a particular packet. A well-behaved receiving
> system will pick up packets on any queue and throw them up into its
> network stack as normal. In this manner, the details of queue
> selection don't matter from the point of view of a receiving guest
> (either frontend or backend). That is; if a "malicious" frontend
> sends all of its packets on a single queue, then it is only damaging
> itself - by reducing its effective throughput to that of a single
> queue. This will not cause a problem to the backend. The same goes
> for the "select a random queue" scenario, although here you probably
> shouldn't expect decent TCP performance. Certainly there will be no
> badness in terms of affecting the backend or other systems, beyond
> that which a guest could achieve with a broken TCP stack anyway.
> 

Cool, this makes the feature much clearer and tells me what I wanted to
know. In a word, there's no coupling whatsoever between the algorithms the
frontend / backend select, so there's nothing to fix. Thank you for being
patient and explaining it to a dumb guy. :-)

> In light of this, algorithm selection is (mostly) a function of the
> transmitting side. The receiving side should be prepared to receive
> packets on any of the legitimately established queues. It just
> happens that the Linux netback and Linux netfront both use
> skb_get_hash() to determine this value.
> 

I somehow had the impression that the two ends need to use the same
algorithm. They just happen to use the same algorithm in the current
implementation. I understand now.
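The independence described above can be sketched as below. This is a
deliberately minimal illustration, not the actual netback/netfront code:
select_queue is a hypothetical name, and the only point is that the mapping
depends solely on the transmitter's own flow hash (as Linux obtains via
skb_get_hash()) and its own queue count, never on the receiver's choices.

```c
#include <assert.h>
#include <stdint.h>

/* Reduce a per-flow hash onto the transmitter's queue count.  Packets of
 * one flow always land on one queue; the receiver just drains all queues. */
static unsigned select_queue(uint32_t flow_hash, unsigned num_queues)
{
    return flow_hash % num_queues;
}
```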

> In the future, some frontends (i.e. Windows) may need to do complex
> things like pushing hash state to the backend. This will be taken
> care of with extensions to the protocol at the point these are
> implemented.
> 

As long as this doesn't break that "no coupling" condition it is fine.

Wei.

> Andrew.
> 
> >
> >>Andrew.
> >>>
> >>>I don't see relevant code in this series to handle "rogue other end". I
> >>>presume for a simple hash algorithm like L4 is not very important (say,
> >>>even a packet ends up in the wrong queue we can still safely process
> >>>it), or core driver can deal with this all by itself (dropping)?
> >>>
> >>>Wei.
> >>>


From xen-devel-bounces@lists.xen.org Fri Feb 14 15:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 15:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEL4E-0008Kc-Qk; Fri, 14 Feb 2014 15:52:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WEL4D-0008JO-PA
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 15:52:41 +0000
Received: from [193.109.254.147:54607] by server-8.bemta-14.messagelabs.com id
	6F/AF-18529-9CB3EF25; Fri, 14 Feb 2014 15:52:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392393159!4416255!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11789 invoked from network); 14 Feb 2014 15:52:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 15:52:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100824338"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 15:52:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 10:52:37 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WEL49-0004xo-CK;
	Fri, 14 Feb 2014 15:52:37 +0000
Date: Fri, 14 Feb 2014 15:52:36 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140214155236.GF18398@zion.uk.xensource.com>
References: <1392378624-6123-1-git-send-email-andrew.bennieston@citrix.com>
	<20140214140635.GA18398@zion.uk.xensource.com>
	<52FE2DFC.8050702@citrix.com>
	<20140214152539.GD18398@zion.uk.xensource.com>
	<52FE390B.3020102@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE390B.3020102@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V2 net-next 0/5] xen-net{back,
 front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 03:40:59PM +0000, Andrew Bennieston wrote:
[...]
> >Wei.
> 
> Let me attempt to clear this up. Bear with me...
> 
> Queue selection is a decision by a transmitting system about which
> queue it uses for a particular packet. A well-behaved receiving
> system will pick up packets on any queue and throw them up into its
> network stack as normal. In this manner, the details of queue
> selection don't matter from the point of view of a receiving guest
> (either frontend or backend). That is; if a "malicious" frontend
> sends all of its packets on a single queue, then it is only damaging
> itself - by reducing its effective throughput to that of a single
> queue. This will not cause a problem to the backend. The same goes
> for the "select a random queue" scenario, although here you probably
> shouldn't expect decent TCP performance. Certainly there will be no
> badness in terms of affecting the backend or other systems, beyond
> that which a guest could achieve with a broken TCP stack anyway.
> 

Cool, this is much clearer about this feature and what I want to know.
In a word, there's no coupling what's so ever when frontend / backend
select which algorithm to use. Then there's nothing to fix. Thank you
for being patient to explain it to a dumb guy. :-)

> In light of this, algorithm selection is (mostly) a function of the
> transmitting side. The receiving side should be prepared to receive
> packets on any of the legitimately established queues. It just
> happens that the Linux netback and Linux netfront both use
> skb_get_hash() to determine this value.
> 

I somehow had the impression that two ends need to use the same
algorithm. They just happen to be using the same algorithm in the
current implementation. I understand now.

> In the future, some frontends (i.e. Windows) may need to do complex
> things like pushing hash state to the backend. This will be taken
> care of with extensions to the protocol at the point these are
> implemented.
> 

As long as this doesn't break that "no coupling" condition it is fine.

Wei.

> Andrew.
> 
> >
> >>Andrew.
> >>>
> >>>I don't see relevant code in this series to handle a "rogue other end". I
> >>>presume that for a simple hash algorithm like L4 this is not very important
> >>>(say, even if a packet ends up in the wrong queue we can still safely
> >>>process it), or the core driver can deal with this all by itself (dropping)?
> >>>
> >>>Wei.
> >>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:31:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:31:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WELfQ-00023D-Kc; Fri, 14 Feb 2014 16:31:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WELfP-000234-KD
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 16:31:07 +0000
Received: from [85.158.137.68:44704] by server-17.bemta-3.messagelabs.com id
	DA/41-22569-AC44EF25; Fri, 14 Feb 2014 16:31:06 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392395464!635448!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22132 invoked from network); 14 Feb 2014 16:31:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 16:31:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100838994"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 16:31:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 11:31:03 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WELXj-0005Rk-MB;
	Fri, 14 Feb 2014 16:23:11 +0000
Message-ID: <1392394986.11369.1.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: Christoph Egger <chegger@amazon.de>, Liu Jinsong <jinsong.liu@intel.com>
Date: Fri, 14 Feb 2014 16:23:06 +0000
In-Reply-To: <1391175434.11034.2.camel@hamster.uk.xensource.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<1391175434.11034.2.camel@hamster.uk.xensource.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ping

On Fri, 2014-01-31 at 13:37 +0000, Frediano Ziglio wrote:
> Ping
> 
> On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
> > From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
> > From: Frediano Ziglio <frediano.ziglio@citrix.com>
> > Date: Wed, 22 Jan 2014 10:48:50 +0000
> > Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
> > MIME-Version: 1.0
> > Content-Type: text/plain; charset=UTF-8
> > Content-Transfer-Encoding: 8bit
> > 
> > These lines (in mctelem_reserve)
> > 
> >         newhead = oldhead->mcte_next;
> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > 
> > are racy. After you read the newhead pointer, another flow (a thread
> > or a recursive invocation) can change the whole list yet leave the
> > head with the same value. So oldhead is still equal to *freelp, but
> > you are installing a new head that could point to any element (even
> > one already in use).
> > 
> > This patch instead uses a bit array and atomic bit operations.
> > 
> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> > ---
> >  xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
> >  1 file changed, 30 insertions(+), 51 deletions(-)
> > 
> > Changes from v1:
> > - Use bitmap to allow any number of items to be used;
> > - Use a single bitmap to simplify reserve loop;
> > - Remove HOME flags as they are not used anymore.
> > 
> > diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
> > index 895ce1a..ed8e8d2 100644
> > --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> > +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> > @@ -37,24 +37,19 @@ struct mctelem_ent {
> >  	void *mcte_data;		/* corresponding data payload */
> >  };
> >  
> > -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
> > -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
> > -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
> > -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
> > +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
> > +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
> >  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
> >  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
> >  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
> >  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
> >  
> > -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
> >  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
> >  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
> >  				MCTE_F_STATE_UNCOMMITTED | \
> >  				MCTE_F_STATE_COMMITTED | \
> >  				MCTE_F_STATE_PROCESSING)
> >  
> > -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
> > -
> >  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
> >  #define	MCTE_SET_CLASS(tep, new) do { \
> >      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
> > @@ -69,6 +64,8 @@ struct mctelem_ent {
> >  #define	MC_URGENT_NENT		10
> >  #define	MC_NONURGENT_NENT	20
> >  
> > +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
> > +
> >  #define	MC_NCLASSES		(MC_NONURGENT + 1)
> >  
> >  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> > @@ -77,11 +74,9 @@ struct mctelem_ent {
> >  static struct mc_telem_ctl {
> >  	/* Linked lists that thread the array members together.
> >  	 *
> > -	 * The free lists are singly-linked via mcte_next, and we allocate
> > -	 * from them by atomically unlinking an element from the head.
> > -	 * Consumed entries are returned to the head of the free list.
> > -	 * When an entry is reserved off the free list it is not linked
> > -	 * on any list until it is committed or dismissed.
> > +	 * The free list is a bit array where a set bit means free.
> > +	 * As the number of elements is quite small, it is easy
> > +	 * to allocate atomically this way.
> >  	 *
> >  	 * The committed list grows at the head and we do not maintain a
> >  	 * tail pointer; insertions are performed atomically.  The head
> > @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
> >  	 * we can lock it for updates.  The head of the processing list
> >  	 * always has the oldest telemetry, and we append (as above)
> >  	 * at the tail of the processing list. */
> > -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> > +	DECLARE_BITMAP(mctc_free, MC_NENT);
> >  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> > @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
> >   */
> >  static void mctelem_free(struct mctelem_ent *tep)
> >  {
> > -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
> > -	    MC_URGENT : MC_NONURGENT;
> > -
> >  	BUG_ON(tep->mcte_refcnt != 0);
> >  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
> >  
> >  	tep->mcte_prev = NULL;
> > -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> > +	tep->mcte_next = NULL;
> > +
> > +	/* set free in array */
> > +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
> >  }
> >  
> >  /* Increment the reference count of an entry that is not linked on to
> > @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
> >  	}
> >  
> >  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
> > -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
> > -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
> > -	    datasz)) == NULL) {
> > +	    MC_NENT)) == NULL ||
> > +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
> >  		if (mctctl.mctc_elems)
> >  			xfree(mctctl.mctc_elems);
> >  		printk("Allocations for MCA telemetry failed\n");
> >  		return;
> >  	}
> >  
> > -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> > -		struct mctelem_ent *tep, **tepp;
> > +	for (i = 0; i < MC_NENT; i++) {
> > +		struct mctelem_ent *tep;
> >  
> >  		tep = mctctl.mctc_elems + i;
> >  		tep->mcte_flags = MCTE_F_STATE_FREE;
> >  		tep->mcte_refcnt = 0;
> >  		tep->mcte_data = datarr + i * datasz;
> >  
> > -		if (i < MC_URGENT_NENT) {
> > -			tepp = &mctctl.mctc_free[MC_URGENT];
> > -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> > -		} else {
> > -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> > -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> > -		}
> > -
> > -		tep->mcte_next = *tepp;
> > +		__set_bit(i, mctctl.mctc_free);
> > +		tep->mcte_next = NULL;
> >  		tep->mcte_prev = NULL;
> > -		*tepp = tep;
> >  	}
> >  }
> >  
> > @@ -310,32 +296,25 @@ static int mctelem_drop_count;
> >  
> >  /* Reserve a telemetry entry, or return NULL if none available.
> >   * If we return an entry then the caller must subsequently call exactly one of
> > - * mctelem_unreserve or mctelem_commit for that entry.
> > + * mctelem_dismiss or mctelem_commit for that entry.
> >   */
> >  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
> >  {
> > -	struct mctelem_ent **freelp;
> > -	struct mctelem_ent *oldhead, *newhead;
> > -	mctelem_class_t target = (which == MC_URGENT) ?
> > -	    MC_URGENT : MC_NONURGENT;
> > +	unsigned bit;
> > +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
> >  
> > -	freelp = &mctctl.mctc_free[target];
> >  	for (;;) {
> > -		if ((oldhead = *freelp) == NULL) {
> > -			if (which == MC_URGENT && target == MC_URGENT) {
> > -				/* raid the non-urgent freelist */
> > -				target = MC_NONURGENT;
> > -				freelp = &mctctl.mctc_free[target];
> > -				continue;
> > -			} else {
> > -				mctelem_drop_count++;
> > -				return (NULL);
> > -			}
> > +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
> > +
> > +		if (bit >= MC_NENT) {
> > +			mctelem_drop_count++;
> > +			return (NULL);
> >  		}
> >  
> > -		newhead = oldhead->mcte_next;
> > -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > -			struct mctelem_ent *tep = oldhead;
> > +		/* try to allocate, atomically clear free bit */
> > +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
> > +			/* return element we got */
> > +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
> >  
> >  			mctelem_hold(tep);
> >  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:34:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WELij-00029n-CJ; Fri, 14 Feb 2014 16:34:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WELih-00029e-La
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 16:34:31 +0000
Received: from [85.158.139.211:54518] by server-3.bemta-5.messagelabs.com id
	05/21-13671-6954EF25; Fri, 14 Feb 2014 16:34:30 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392395668!3999173!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8571 invoked from network); 14 Feb 2014 16:34:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 16:34:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102612080"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 16:34:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 11:34:27 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WELic-0005hU-VE;
	Fri, 14 Feb 2014 16:34:26 +0000
Date: Fri, 14 Feb 2014 16:34:26 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140214163426.GG18398@zion.uk.xensource.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, wei.liu2@citrix.com
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 06:46:02PM +0400, Vasiliy Tolstov wrote:
> Hi all.
> Today I compiled xen 4.3.2 rc1 from the stable-4.3 branch.
> I'm also using upstream qemu from Debian jessie (1.7.0).
> 
> Domain created and works fine.
> memory=512
> maxmemory=1024
> 

Should this be "maxmem"? I could not find "maxmemory" in the documentation.
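For reference, a minimal xl config fragment with the documented key name (the memory values are the ones from the original mail; "maxmem" is the key xl actually recognizes):

```
# xl domain config: "maxmem", not "maxmemory", sets the ballooning ceiling
memory = 512      # boot-time allocation in MiB
maxmem = 1024     # maximum the guest may balloon up to, in MiB
```

With an unrecognized key like "maxmemory", the maximum silently stays at the boot-time value, which would explain a guest that cannot balloon above it.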

> Problem:
> From dom0, xl mem-set works fine: it modifies xenstore and does a
> hypercall to Xen. From the domU side, memory balloons up.
> 
> Now I'm trying to balloon up memory from the guest (writing to
> /sys/devices/system/xen_memory/xen_memory0/target), but memory
> balloons up by only 1MB. For example: before 512, after 513.
> 
> The main problem is that I can't upgrade the guest kernel (I have
> many VPSes with 2.6.32.26 from the old xen kernel git tree).
> 
> How can I deal with this? What's wrong on the Xen side?
> 

My gut feeling is that nothing is wrong with Xen; you just have a
typo in your config file. :-)

Wei.

> 
> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:34:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WELij-00029n-CJ; Fri, 14 Feb 2014 16:34:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WELih-00029e-La
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 16:34:31 +0000
Received: from [85.158.139.211:54518] by server-3.bemta-5.messagelabs.com id
	05/21-13671-6954EF25; Fri, 14 Feb 2014 16:34:30 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392395668!3999173!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8571 invoked from network); 14 Feb 2014 16:34:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 16:34:29 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102612080"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 16:34:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 11:34:27 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WELic-0005hU-VE;
	Fri, 14 Feb 2014 16:34:26 +0000
Date: Fri, 14 Feb 2014 16:34:26 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140214163426.GG18398@zion.uk.xensource.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, wei.liu2@citrix.com
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 06:46:02PM +0400, Vasiliy Tolstov wrote:
> Hi all.
> Today i'm compile xen 4.3.2 rc1 from stable-4.3 branch.
> Also i'm using upstream qemu from debian jessie (1.7.0).
> 
> Domain created and works fine.
> memory=512
> maxmemory=1024
> 

This should be "maxmem"? I could not find "maxmemory" in documents.

> Problem:
> From dom0, xl mem-set works fine: it modifies xenstore and makes a hypercall to Xen.
> From the domU side, memory is ballooned up.
> 
> Now I am trying to balloon up memory from the guest (by writing to
> /sys/devices/system/xen_memory/xen_memory0/target), but memory
> balloons up by only 1MB. For example: 512 before, 513 after.
> 
> The main problem is that I cannot upgrade the guest kernel (I have many VPSes running
> 2.6.32.26 from the old Xen kernel git tree).
> 
> How can I deal with this? What is wrong on the Xen side?
> 

My gut feeling is nothing is wrong with Xen. It's just that you have a
typo in your config file. :-)
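For reference, a minimal guest config sketch using the documented key names (values are illustrative; "maxmemory" is not a recognized key, so xl would ignore it and the ballooning ceiling would stay at the initial allocation):

```
# xl domain configuration -- sketch, illustrative values
memory = 512    # initial memory, in MiB
maxmem = 1024   # ballooning ceiling, in MiB
```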

Wei.

> 
> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:49:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WELwU-0002Ri-Ra; Fri, 14 Feb 2014 16:48:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WELwT-0002Rd-TJ
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 16:48:46 +0000
Received: from [193.109.254.147:37567] by server-7.bemta-14.messagelabs.com id
	3C/8C-23424-DE84EF25; Fri, 14 Feb 2014 16:48:45 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392396523!735103!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25088 invoked from network); 14 Feb 2014 16:48:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 16:48:44 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1EGlpW0001989
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 14 Feb 2014 16:47:52 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1EGlkJM013300
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 14 Feb 2014 16:47:46 GMT
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1EGljmq013242; Fri, 14 Feb 2014 16:47:45 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 14 Feb 2014 08:47:44 -0800
Message-ID: <52FE490B.8000908@oracle.com>
Date: Fri, 14 Feb 2014 11:49:15 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
References: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
	<20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
In-Reply-To: <20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-arch@vger.kernel.org, ego@linux.vnet.ibm.com, walken@google.com,
	linux@arm.linux.org.uk, akpm@linux-foundation.org,
	peterz@infradead.org, rusty@rustcorp.com.au, rjw@rjwysocki.net,
	oleg@redhat.com, linux-kernel@vger.kernel.org, paulus@samba.org,
	David Vrabel <david.vrabel@citrix.com>, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: Re: [Xen-devel] [PATCH v2 46/52] xen,
	balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/14/2014 02:59 AM, Srivatsa S. Bhat wrote:
> Subsystems that want to register CPU hotplug callbacks, as well as perform
> initialization for the CPUs that are already online, often do it as shown
> below:
>
> 	get_online_cpus();
>
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
>
> 	register_cpu_notifier(&foobar_cpu_notifier);
>
> 	put_online_cpus();
>
> This is wrong, since it is prone to ABBA deadlocks involving the
> cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
> with CPU hotplug operations).
>
> Interestingly, the balloon code in xen can actually prevent double
> initialization and hence can use the following simplified form of callback
> registration:
>
> 	register_cpu_notifier(&foobar_cpu_notifier);
>
> 	get_online_cpus();
>
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
>
> 	put_online_cpus();
>
> A hotplug operation that occurs between registering the notifier and calling
> get_online_cpus(), won't disrupt anything, because the code takes care to
> perform the memory allocations only once.
>
> So reorganize the balloon code in xen this way to fix the deadlock with
> callback registration.
>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> ---
>
>   drivers/xen/balloon.c |   35 +++++++++++++++++++++++------------
>   1 file changed, 23 insertions(+), 12 deletions(-)


This looks exactly like the earlier version (i.e. the notifier is still 
left registered on allocation failure, and the commit message doesn't 
reflect the change).
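A sketch of the error path being asked for here (kernel pseudocode, not the submitted patch): unwind the notifier registration when scratch-page allocation fails, so a failed init does not leave a stale callback registered.

```c
	/* balloon_init() error path, sketched */
	register_cpu_notifier(&balloon_cpu_notifier);

	get_online_cpus();
	for_each_online_cpu(cpu) {
		if (alloc_balloon_scratch_page(cpu)) {
			put_online_cpus();
			/* undo the registration before bailing out */
			unregister_cpu_notifier(&balloon_cpu_notifier);
			return -ENOMEM;
		}
	}
	put_online_cpus();
```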

-boris

>
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 37d06ea..afe1a3f 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned long start_pfn,
>   	}
>   }
>   
> +static int alloc_balloon_scratch_page(int cpu)
> +{
> +	if (per_cpu(balloon_scratch_page, cpu) != NULL)
> +		return 0;
> +
> +	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> +	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> +		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +
>   static int balloon_cpu_notify(struct notifier_block *self,
>   				    unsigned long action, void *hcpu)
>   {
>   	int cpu = (long)hcpu;
>   	switch (action) {
>   	case CPU_UP_PREPARE:
> -		if (per_cpu(balloon_scratch_page, cpu) != NULL)
> -			break;
> -		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
> +		if (alloc_balloon_scratch_page(cpu))
>   			return NOTIFY_BAD;
> -		}
>   		break;
>   	default:
>   		break;
> @@ -624,15 +634,16 @@ static int __init balloon_init(void)
>   		return -ENODEV;
>   
>   	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for_each_online_cpu(cpu)
> -		{
> -			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
> +		register_cpu_notifier(&balloon_cpu_notifier);
> +
> +		get_online_cpus();
> +		for_each_online_cpu(cpu) {
> +			if (alloc_balloon_scratch_page(cpu)) {
> +				put_online_cpus();
>   				return -ENOMEM;
>   			}
>   		}
> -		register_cpu_notifier(&balloon_cpu_notifier);
> +		put_online_cpus();
>   	}
>   
>   	pr_info("Initialising balloon driver\n");
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:54:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEM1k-0002aq-UG; Fri, 14 Feb 2014 16:54:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEM1j-0002ak-7x
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 16:54:11 +0000
Received: from [85.158.143.35:9196] by server-1.bemta-4.messagelabs.com id
	2C/23-31661-23A4EF25; Fri, 14 Feb 2014 16:54:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392396849!5778658!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29342 invoked from network); 14 Feb 2014 16:54:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 16:54:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 16:54:08 +0000
Message-Id: <52FE583F020000780011C861@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 16:54:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Christoph Egger" <chegger@amazon.de>,
	"Liu Jinsong" <jinsong.liu@intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<1391175434.11034.2.camel@hamster.uk.xensource.com>
	<1392394986.11369.1.camel@hamster.uk.xensource.com>
In-Reply-To: <1392394986.11369.1.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Donald D Dugger <donald.d.dugger@intel.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.02.14 at 17:23, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> Ping

And this is clearly not the first ping. Guys, you have had over three weeks
to respond to this. Please!

Jan

> On Fri, 2014-01-31 at 13:37 +0000, Frediano Ziglio wrote:
>> Ping
>> 
>> On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
>> > From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
>> > From: Frediano Ziglio <frediano.ziglio@citrix.com>
>> > Date: Wed, 22 Jan 2014 10:48:50 +0000
>> > Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
>> > MIME-Version: 1.0
>> > Content-Type: text/plain; charset=UTF-8
>> > Content-Transfer-Encoding: 8bit
>> > 
>> > These lines (in mctelem_reserve)
>> > 
>> >         newhead = oldhead->mcte_next;
>> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> > 
>> > are racy. After you read the newhead pointer, it can happen that another
>> > flow (thread or recursive invocation) changes the whole list but sets the head
>> > back to the same value. So oldhead is the same as *freelp, but you are setting
>> > a new head that could point to any element (even one already in use).
>> > 
>> > This patch uses a bit array and atomic bit operations instead.
>> > 
>> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>> > ---
>> >  xen/arch/x86/cpu/mcheck/mctelem.c |   81 
> ++++++++++++++-----------------------
>> >  1 file changed, 30 insertions(+), 51 deletions(-)
>> > 
>> > Changes from v1:
>> > - Use bitmap to allow any number of items to be used;
>> > - Use a single bitmap to simplify reserve loop;
>> > - Remove HOME flags, as they are no longer used.
>> > 
>> > diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c 
> b/xen/arch/x86/cpu/mcheck/mctelem.c
>> > index 895ce1a..ed8e8d2 100644
>> > --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>> > +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>> > @@ -37,24 +37,19 @@ struct mctelem_ent {
>> >  	void *mcte_data;		/* corresponding data payload */
>> >  };
>> >  
>> > -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
>> > -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
>> > -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
>> > -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
>> > +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
>> > +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>> >  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>> >  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>> >  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>> >  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>> >  
>> > -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>> >  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>> >  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>> >  				MCTE_F_STATE_UNCOMMITTED | \
>> >  				MCTE_F_STATE_COMMITTED | \
>> >  				MCTE_F_STATE_PROCESSING)
>> >  
>> > -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
>> > -
>> >  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>> >  #define	MCTE_SET_CLASS(tep, new) do { \
>> >      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
>> > @@ -69,6 +64,8 @@ struct mctelem_ent {
>> >  #define	MC_URGENT_NENT		10
>> >  #define	MC_NONURGENT_NENT	20
>> >  
>> > +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
>> > +
>> >  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>> >  
>> >  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>> > @@ -77,11 +74,9 @@ struct mctelem_ent {
>> >  static struct mc_telem_ctl {
>> >  	/* Linked lists that thread the array members together.
>> >  	 *
>> > -	 * The free lists are singly-linked via mcte_next, and we allocate
>> > -	 * from them by atomically unlinking an element from the head.
>> > -	 * Consumed entries are returned to the head of the free list.
>> > -	 * When an entry is reserved off the free list it is not linked
>> > -	 * on any list until it is committed or dismissed.
>> > +	 * The free list is a bit array where a set bit means free.
>> > +	 * Since the number of elements is quite small, it is easy
>> > +	 * to allocate atomically this way.
>> >  	 *
>> >  	 * The committed list grows at the head and we do not maintain a
>> >  	 * tail pointer; insertions are performed atomically.  The head
>> > @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>> >  	 * we can lock it for updates.  The head of the processing list
>> >  	 * always has the oldest telemetry, and we append (as above)
>> >  	 * at the tail of the processing list. */
>> > -	struct mctelem_ent *mctc_free[MC_NCLASSES];
>> > +	DECLARE_BITMAP(mctc_free, MC_NENT);
>> >  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>> >  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>> >  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
>> > @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>> >   */
>> >  static void mctelem_free(struct mctelem_ent *tep)
>> >  {
>> > -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
>> > -	    MC_URGENT : MC_NONURGENT;
>> > -
>> >  	BUG_ON(tep->mcte_refcnt != 0);
>> >  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>> >  
>> >  	tep->mcte_prev = NULL;
>> > -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
>> > +	tep->mcte_next = NULL;
>> > +
>> > +	/* set free in array */
>> > +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>> >  }
>> >  
>> >  /* Increment the reference count of an entry that is not linked on to
>> > @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>> >  	}
>> >  
>> >  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
>> > -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
>> > -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
>> > -	    datasz)) == NULL) {
>> > +	    MC_NENT)) == NULL ||
>> > +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>> >  		if (mctctl.mctc_elems)
>> >  			xfree(mctctl.mctc_elems);
>> >  		printk("Allocations for MCA telemetry failed\n");
>> >  		return;
>> >  	}
>> >  
>> > -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
>> > -		struct mctelem_ent *tep, **tepp;
>> > +	for (i = 0; i < MC_NENT; i++) {
>> > +		struct mctelem_ent *tep;
>> >  
>> >  		tep = mctctl.mctc_elems + i;
>> >  		tep->mcte_flags = MCTE_F_STATE_FREE;
>> >  		tep->mcte_refcnt = 0;
>> >  		tep->mcte_data = datarr + i * datasz;
>> >  
>> > -		if (i < MC_URGENT_NENT) {
>> > -			tepp = &mctctl.mctc_free[MC_URGENT];
>> > -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
>> > -		} else {
>> > -			tepp = &mctctl.mctc_free[MC_NONURGENT];
>> > -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
>> > -		}
>> > -
>> > -		tep->mcte_next = *tepp;
>> > +		__set_bit(i, mctctl.mctc_free);
>> > +		tep->mcte_next = NULL;
>> >  		tep->mcte_prev = NULL;
>> > -		*tepp = tep;
>> >  	}
>> >  }
>> >  
>> > @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>> >  
>> >  /* Reserve a telemetry entry, or return NULL if none available.
>> >   * If we return an entry then the caller must subsequently call exactly 
> one of
>> > - * mctelem_unreserve or mctelem_commit for that entry.
>> > + * mctelem_dismiss or mctelem_commit for that entry.
>> >   */
>> >  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>> >  {
>> > -	struct mctelem_ent **freelp;
>> > -	struct mctelem_ent *oldhead, *newhead;
>> > -	mctelem_class_t target = (which == MC_URGENT) ?
>> > -	    MC_URGENT : MC_NONURGENT;
>> > +	unsigned bit;
>> > +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>> >  
>> > -	freelp = &mctctl.mctc_free[target];
>> >  	for (;;) {
>> > -		if ((oldhead = *freelp) == NULL) {
>> > -			if (which == MC_URGENT && target == MC_URGENT) {
>> > -				/* raid the non-urgent freelist */
>> > -				target = MC_NONURGENT;
>> > -				freelp = &mctctl.mctc_free[target];
>> > -				continue;
>> > -			} else {
>> > -				mctelem_drop_count++;
>> > -				return (NULL);
>> > -			}
>> > +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
>> > +
>> > +		if (bit >= MC_NENT) {
>> > +			mctelem_drop_count++;
>> > +			return (NULL);
>> >  		}
>> >  
>> > -		newhead = oldhead->mcte_next;
>> > -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> > -			struct mctelem_ent *tep = oldhead;
>> > +		/* try to allocate, atomically clear free bit */
>> > +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
>> > +			/* return element we got */
>> > +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>> >  
>> >  			mctelem_hold(tep);
>> >  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:54:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEM1k-0002aq-UG; Fri, 14 Feb 2014 16:54:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WEM1j-0002ak-7x
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 16:54:11 +0000
Received: from [85.158.143.35:9196] by server-1.bemta-4.messagelabs.com id
	2C/23-31661-23A4EF25; Fri, 14 Feb 2014 16:54:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392396849!5778658!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29342 invoked from network); 14 Feb 2014 16:54:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 16:54:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 14 Feb 2014 16:54:08 +0000
Message-Id: <52FE583F020000780011C861@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 14 Feb 2014 16:54:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Christoph Egger" <chegger@amazon.de>,
	"Liu Jinsong" <jinsong.liu@intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<1391175434.11034.2.camel@hamster.uk.xensource.com>
	<1392394986.11369.1.camel@hamster.uk.xensource.com>
In-Reply-To: <1392394986.11369.1.camel@hamster.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Donald D Dugger <donald.d.dugger@intel.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 14.02.14 at 17:23, Frediano Ziglio <frediano.ziglio@citrix.com> wrote:
> Ping

And this is clearly not the first ping. Guys - you had over 3 weeks time
to respond to this. Please!

Jan

> On Fri, 2014-01-31 at 13:37 +0000, Frediano Ziglio wrote:
>> Ping
>> 
>> On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
>> > From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
>> > From: Frediano Ziglio <frediano.ziglio@citrix.com>
>> > Date: Wed, 22 Jan 2014 10:48:50 +0000
>> > Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
>> > MIME-Version: 1.0
>> > Content-Type: text/plain; charset=UTF-8
>> > Content-Transfer-Encoding: 8bit
>> > 
>> > These lines (in mctelem_reserve)
>> > 
>> >         newhead = oldhead->mcte_next;
>> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> > 
>> > are racy. After you read the newhead pointer, another flow (a thread
>> > or a recursive invocation) can change the whole list yet leave the head
>> > with the same value. oldhead then still matches *freelp, but the newhead
>> > you install may point to any element, even one already in use.
>> > 
>> > This patch uses a bit array and atomic bit operations instead.
>> > 
>> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>> > ---
>> >  xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
>> >  1 file changed, 30 insertions(+), 51 deletions(-)
>> > 
>> > Changes from v1:
>> > - Use bitmap to allow any number of items to be used;
>> > - Use a single bitmap to simplify reserve loop;
>> > - Remove the HOME flags, as they were no longer used.
>> > 
>> > diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
>> > index 895ce1a..ed8e8d2 100644
>> > --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>> > +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>> > @@ -37,24 +37,19 @@ struct mctelem_ent {
>> >  	void *mcte_data;		/* corresponding data payload */
>> >  };
>> >  
>> > -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
>> > -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
>> > -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
>> > -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
>> > +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
>> > +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>> >  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>> >  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>> >  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>> >  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>> >  
>> > -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>> >  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>> >  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>> >  				MCTE_F_STATE_UNCOMMITTED | \
>> >  				MCTE_F_STATE_COMMITTED | \
>> >  				MCTE_F_STATE_PROCESSING)
>> >  
>> > -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
>> > -
>> >  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>> >  #define	MCTE_SET_CLASS(tep, new) do { \
>> >      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
>> > @@ -69,6 +64,8 @@ struct mctelem_ent {
>> >  #define	MC_URGENT_NENT		10
>> >  #define	MC_NONURGENT_NENT	20
>> >  
>> > +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
>> > +
>> >  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>> >  
>> >  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>> > @@ -77,11 +74,9 @@ struct mctelem_ent {
>> >  static struct mc_telem_ctl {
>> >  	/* Linked lists that thread the array members together.
>> >  	 *
>> > -	 * The free lists are singly-linked via mcte_next, and we allocate
>> > -	 * from them by atomically unlinking an element from the head.
>> > -	 * Consumed entries are returned to the head of the free list.
>> > -	 * When an entry is reserved off the free list it is not linked
>> > -	 * on any list until it is committed or dismissed.
>> > +	 * The free list is a bitmap in which a set bit means the
>> > +	 * element is free.  The element count is small, so entries
>> > +	 * are easy to allocate atomically this way.
>> >  	 *
>> >  	 * The committed list grows at the head and we do not maintain a
>> >  	 * tail pointer; insertions are performed atomically.  The head
>> > @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>> >  	 * we can lock it for updates.  The head of the processing list
>> >  	 * always has the oldest telemetry, and we append (as above)
>> >  	 * at the tail of the processing list. */
>> > -	struct mctelem_ent *mctc_free[MC_NCLASSES];
>> > +	DECLARE_BITMAP(mctc_free, MC_NENT);
>> >  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>> >  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>> >  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
>> > @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>> >   */
>> >  static void mctelem_free(struct mctelem_ent *tep)
>> >  {
>> > -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
>> > -	    MC_URGENT : MC_NONURGENT;
>> > -
>> >  	BUG_ON(tep->mcte_refcnt != 0);
>> >  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>> >  
>> >  	tep->mcte_prev = NULL;
>> > -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
>> > +	tep->mcte_next = NULL;
>> > +
>> > +	/* set free in array */
>> > +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>> >  }
>> >  
>> >  /* Increment the reference count of an entry that is not linked on to
>> > @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>> >  	}
>> >  
>> >  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
>> > -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
>> > -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
>> > -	    datasz)) == NULL) {
>> > +	    MC_NENT)) == NULL ||
>> > +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>> >  		if (mctctl.mctc_elems)
>> >  			xfree(mctctl.mctc_elems);
>> >  		printk("Allocations for MCA telemetry failed\n");
>> >  		return;
>> >  	}
>> >  
>> > -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
>> > -		struct mctelem_ent *tep, **tepp;
>> > +	for (i = 0; i < MC_NENT; i++) {
>> > +		struct mctelem_ent *tep;
>> >  
>> >  		tep = mctctl.mctc_elems + i;
>> >  		tep->mcte_flags = MCTE_F_STATE_FREE;
>> >  		tep->mcte_refcnt = 0;
>> >  		tep->mcte_data = datarr + i * datasz;
>> >  
>> > -		if (i < MC_URGENT_NENT) {
>> > -			tepp = &mctctl.mctc_free[MC_URGENT];
>> > -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
>> > -		} else {
>> > -			tepp = &mctctl.mctc_free[MC_NONURGENT];
>> > -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
>> > -		}
>> > -
>> > -		tep->mcte_next = *tepp;
>> > +		__set_bit(i, mctctl.mctc_free);
>> > +		tep->mcte_next = NULL;
>> >  		tep->mcte_prev = NULL;
>> > -		*tepp = tep;
>> >  	}
>> >  }
>> >  
>> > @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>> >  
>> >  /* Reserve a telemetry entry, or return NULL if none available.
>> >   * If we return an entry then the caller must subsequently call exactly one of
>> > - * mctelem_unreserve or mctelem_commit for that entry.
>> > + * mctelem_dismiss or mctelem_commit for that entry.
>> >   */
>> >  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>> >  {
>> > -	struct mctelem_ent **freelp;
>> > -	struct mctelem_ent *oldhead, *newhead;
>> > -	mctelem_class_t target = (which == MC_URGENT) ?
>> > -	    MC_URGENT : MC_NONURGENT;
>> > +	unsigned bit;
>> > +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>> >  
>> > -	freelp = &mctctl.mctc_free[target];
>> >  	for (;;) {
>> > -		if ((oldhead = *freelp) == NULL) {
>> > -			if (which == MC_URGENT && target == MC_URGENT) {
>> > -				/* raid the non-urgent freelist */
>> > -				target = MC_NONURGENT;
>> > -				freelp = &mctctl.mctc_free[target];
>> > -				continue;
>> > -			} else {
>> > -				mctelem_drop_count++;
>> > -				return (NULL);
>> > -			}
>> > +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
>> > +
>> > +		if (bit >= MC_NENT) {
>> > +			mctelem_drop_count++;
>> > +			return (NULL);
>> >  		}
>> >  
>> > -		newhead = oldhead->mcte_next;
>> > -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>> > -			struct mctelem_ent *tep = oldhead;
>> > +		/* try to allocate, atomically clear free bit */
>> > +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
>> > +			/* return element we got */
>> > +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>> >  
>> >  			mctelem_hold(tep);
>> >  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 16:55:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 16:55:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEM3D-0002fq-Kf; Fri, 14 Feb 2014 16:55:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <srivatsa.bhat@linux.vnet.ibm.com>)
	id 1WEM3C-0002fe-06
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 16:55:42 +0000
Received: from [193.109.254.147:15990] by server-11.bemta-14.messagelabs.com
	id 29/46-24604-D8A4EF25; Fri, 14 Feb 2014 16:55:41 +0000
X-Env-Sender: srivatsa.bhat@linux.vnet.ibm.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392396938!4427156!1
X-Originating-IP: [122.248.162.2]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTIyLjI0OC4xNjIuMiA9PiAzNDAyNjU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1516 invoked from network); 14 Feb 2014 16:55:39 -0000
Received: from e28smtp02.in.ibm.com (HELO e28smtp02.in.ibm.com) (122.248.162.2)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 16:55:39 -0000
Received: from /spool/local
	by e28smtp02.in.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<srivatsa.bhat@linux.vnet.ibm.com>; Fri, 14 Feb 2014 22:25:37 +0530
Received: from d28dlp03.in.ibm.com (9.184.220.128)
	by e28smtp02.in.ibm.com (192.168.1.132) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Fri, 14 Feb 2014 22:25:35 +0530
Received: from d28relay01.in.ibm.com (d28relay01.in.ibm.com [9.184.220.58])
	by d28dlp03.in.ibm.com (Postfix) with ESMTP id DCFB21258054
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 22:27:30 +0530 (IST)
Received: from d28av05.in.ibm.com (d28av05.in.ibm.com [9.184.220.67])
	by d28relay01.in.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1EGtQca63832186
	for <xen-devel@lists.xenproject.org>; Fri, 14 Feb 2014 22:25:26 +0530
Received: from d28av05.in.ibm.com (localhost [127.0.0.1])
	by d28av05.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1EGtW48005809
	for <xen-devel@lists.xenproject.org>; Fri, 14 Feb 2014 22:25:34 +0530
Received: from srivatsabhat.in.ibm.com ([9.78.204.74])
	by d28av05.in.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1EGtMM3005439; Fri, 14 Feb 2014 22:25:30 +0530
Message-ID: <52FE493A.2030206@linux.vnet.ibm.com>
Date: Fri, 14 Feb 2014 22:20:02 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:15.0) Gecko/20120828 Thunderbird/15.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
	<20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
	<52FE490B.8000908@oracle.com>
In-Reply-To: <52FE490B.8000908@oracle.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14021416-5816-0000-0000-00000C476B56
Cc: linux-arch@vger.kernel.org, ego@linux.vnet.ibm.com, walken@google.com,
	linux@arm.linux.org.uk, akpm@linux-foundation.org,
	peterz@infradead.org, rusty@rustcorp.com.au, rjw@rjwysocki.net,
	oleg@redhat.com, linux-kernel@vger.kernel.org, paulus@samba.org,
	David Vrabel <david.vrabel@citrix.com>, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: Re: [Xen-devel] [PATCH v2 46/52] xen,
	balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/14/2014 10:19 PM, Boris Ostrovsky wrote:
> On 02/14/2014 02:59 AM, Srivatsa S. Bhat wrote:
>> Subsystems that want to register CPU hotplug callbacks, as well as
>> perform
>> initialization for the CPUs that are already online, often do it as shown
>> below:
>>
>>     get_online_cpus();
>>
>>     for_each_online_cpu(cpu)
>>         init_cpu(cpu);
>>
>>     register_cpu_notifier(&foobar_cpu_notifier);
>>
>>     put_online_cpus();
>>
>> This is wrong, since it is prone to ABBA deadlocks involving the
>> cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
>> with CPU hotplug operations).
>>
>> Interestingly, the balloon code in xen can actually prevent double
>> initialization and hence can use the following simplified form of
>> callback
>> registration:
>>
>>     register_cpu_notifier(&foobar_cpu_notifier);
>>
>>     get_online_cpus();
>>
>>     for_each_online_cpu(cpu)
>>         init_cpu(cpu);
>>
>>     put_online_cpus();
>>
>> A hotplug operation that occurs between registering the notifier and
>> calling
>> get_online_cpus(), won't disrupt anything, because the code takes care to
>> perform the memory allocations only once.
>>
>> So reorganize the balloon code in xen this way to fix the deadlock with
>> callback registration.
>>
>> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: David Vrabel <david.vrabel@citrix.com>
>> Cc: Ingo Molnar <mingo@kernel.org>
>> Cc: xen-devel@lists.xenproject.org
>> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
>> ---
>>
>>   drivers/xen/balloon.c |   35 +++++++++++++++++++++++------------
>>   1 file changed, 23 insertions(+), 12 deletions(-)
> 
> 
> This looks exactly like the earlier version (i.e. the notifier is still
> kept registered on allocation failure, and the commit message doesn't
> exactly reflect the change).
>

Sorry, your earlier reply somehow escaped the email threading and landed
elsewhere in my inbox, so I unfortunately forgot to take your suggestions
into account when sending out v2.

I'll send out an updated version of just this patch, as a reply.

Thank you!

Regards,
Srivatsa S. Bhat

>>
>> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
>> index 37d06ea..afe1a3f 100644
>> --- a/drivers/xen/balloon.c
>> +++ b/drivers/xen/balloon.c
>> @@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned long start_pfn,
>>       }
>>   }
>>   +static int alloc_balloon_scratch_page(int cpu)
>> +{
>> +    if (per_cpu(balloon_scratch_page, cpu) != NULL)
>> +        return 0;
>> +
>> +    per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
>> +    if (per_cpu(balloon_scratch_page, cpu) == NULL) {
>> +        pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +
>>   static int balloon_cpu_notify(struct notifier_block *self,
>>                       unsigned long action, void *hcpu)
>>   {
>>       int cpu = (long)hcpu;
>>       switch (action) {
>>       case CPU_UP_PREPARE:
>> -        if (per_cpu(balloon_scratch_page, cpu) != NULL)
>> -            break;
>> -        per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
>> -        if (per_cpu(balloon_scratch_page, cpu) == NULL) {
>> -            pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
>> +        if (alloc_balloon_scratch_page(cpu))
>>               return NOTIFY_BAD;
>> -        }
>>           break;
>>       default:
>>           break;
>> @@ -624,15 +634,16 @@ static int __init balloon_init(void)
>>           return -ENODEV;
>>         if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>> -        for_each_online_cpu(cpu)
>> -        {
>> -            per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
>> -            if (per_cpu(balloon_scratch_page, cpu) == NULL) {
>> -                pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
>> +        register_cpu_notifier(&balloon_cpu_notifier);
>> +
>> +        get_online_cpus();
>> +        for_each_online_cpu(cpu) {
>> +            if (alloc_balloon_scratch_page(cpu)) {
>> +                put_online_cpus();
>>                   return -ENOMEM;
>>               }
>>           }
>> -        register_cpu_notifier(&balloon_cpu_notifier);
>> +        put_online_cpus();
>>       }
>>         pr_info("Initialising balloon driver\n");
>>
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:04:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMBi-0002wW-An; Fri, 14 Feb 2014 17:04:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WEMBh-0002wR-15
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 17:04:29 +0000
Received: from [85.158.143.35:31763] by server-3.bemta-4.messagelabs.com id
	A1/11-11539-C9C4EF25; Fri, 14 Feb 2014 17:04:28 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392397467!5763356!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21624 invoked from network); 14 Feb 2014 17:04:27 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 17:04:27 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WEMBZ-0003A9-Mf; Fri, 14 Feb 2014 17:04:21 +0000
Date: Fri, 14 Feb 2014 18:04:21 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140214170421.GA6581@deinos.phlegethon.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<52FD034E.2070600@citrix.com>
	<CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
	<52FE1AD1.80704@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE1AD1.80704@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:32 +0000 on 14 Feb (1392381121), Andrew Cooper wrote:
> On 14/02/14 11:52, George Dunlap wrote:
> > At the moment I'm leaning towards not delaying the release for it.
> > That could either mean checking the patch we have to hand today (so it
> > can make it into the RC Monday hopefully), or just going without it.
> 
> The bug has been present in Xen since the 4.3 dev cycle, meaning that
> all Xen-4.3.x releases are susceptible.
> 
> I do not believe holding the release for a fix is sensible, especially
> as the fix is still not considered ready yet.

Agreed.  Since this isn't a 4.4 regression, we should just let the
release go ahead, and apply whatever the final fix is to the 4.4.x branch.

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:04:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMBi-0002wW-An; Fri, 14 Feb 2014 17:04:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WEMBh-0002wR-15
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 17:04:29 +0000
Received: from [85.158.143.35:31763] by server-3.bemta-4.messagelabs.com id
	A1/11-11539-C9C4EF25; Fri, 14 Feb 2014 17:04:28 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392397467!5763356!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21624 invoked from network); 14 Feb 2014 17:04:27 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 17:04:27 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WEMBZ-0003A9-Mf; Fri, 14 Feb 2014 17:04:21 +0000
Date: Fri, 14 Feb 2014 18:04:21 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140214170421.GA6581@deinos.phlegethon.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<52FD034E.2070600@citrix.com>
	<CAFLBxZZOWWpZxHnj_=9g-ya4dcC7TrwVFNVmhKBO1Dqtjeb8eQ@mail.gmail.com>
	<52FE1AD1.80704@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE1AD1.80704@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:32 +0000 on 14 Feb (1392381121), Andrew Cooper wrote:
> On 14/02/14 11:52, George Dunlap wrote:
> > At the moment I'm leaning towards not delaying the release for it.
> > That could either mean checking the patch we have to hand today (so it
> > can make it into the RC Monday hopefully), or just going without it.
> 
> The bug has been present in Xen since the 4.3 dev cycle, meaning that
> all Xen 4.3.x releases are susceptible.
> 
> I do not believe holding the release for a fix is sensible, especially
> as the fix is not yet considered ready.

Agreed.  Since this isn't a 4.4 regression, we should just let the
release go ahead, and apply whatever the final fix is to the 4.4.x branch.

Tim.



From xen-devel-bounces@lists.xen.org Fri Feb 14 17:08:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMFD-00033t-5u; Fri, 14 Feb 2014 17:08:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WEMFB-00033g-0y
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 17:08:05 +0000
Received: from [193.109.254.147:21444] by server-14.bemta-14.messagelabs.com
	id 6B/4F-29228-37D4EF25; Fri, 14 Feb 2014 17:08:03 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392397682!4433197!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3419 invoked from network); 14 Feb 2014 17:08:02 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 17:08:02 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WEMF4-0003Du-NP; Fri, 14 Feb 2014 17:07:58 +0000
Date: Fri, 14 Feb 2014 18:07:58 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140214170758.GB6581@deinos.phlegethon.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<1392312779-14373-2-git-send-email-tim@xen.org>
	<52FE27D7020000780011C726@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE27D7020000780011C726@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xen.org, keir@xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 1/3] x86/hvm/rtc: Don't run the vpt
 timer when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:27 +0000 on 14 Feb (1392380855), Jan Beulich wrote:
> >>> On 13.02.14 at 18:32, Tim Deegan <tim@xen.org> wrote:
> > If the guest has not asked for interrupts, don't run the vpt timer
> > to generate them.  This is a prerequisite for a patch to simplify how
> > the vpt interacts with the RTC, and also gets rid of a timer series in
> > Xen in a case where it's unlikely to be needed.
> > 
> > Instead, calculate the correct value for REG_C.PF whenever REG_C is
> > read or PIE is enabled.  This allows a guest to poll for the PF bit
> > while not asking for actual timer interrupts.  Such a guest would no
> > longer get the benefit of the vpt's timer modes.
> > 
> > Signed-off-by: Tim Deegan <tim@xen.org>
> 
> Looks okay to me. Two minor comments below.
> 
> > @@ -125,24 +144,28 @@ static void rtc_timer_update(RTCState *s)
> >      case RTC_REF_CLCK_4MHZ:
> >          if ( period_code != 0 )
> >          {
> > -            if ( period_code != s->pt_code )
> > +            now = NOW();
> 
> This is needed only inside the next if, so perhaps move it there (and
> I'd prefer the variable declaration to be moved there too).

Yep.  That's an oversight, left over from an earlier version that set
check_ticks_since in more places.

> > @@ -492,6 +516,11 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
> >          rtc_update_irq(s);
> >          if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
> >              rtc_timer_update(s);
> > +        else if ( !(data & RTC_PIE) && (orig & RTC_PIE) )
> > +        {
> > +            destroy_periodic_time(&s->pt);
> > +            rtc_timer_update(s);
> > +        }
> 
> I think these two paths should be folded, along the lines of
> 
>         if ( (data ^ orig) & RTC_PIE )
>         {
>             if ( !(data & RTC_PIE) )
>                 destroy_periodic_time(&s->pt);
>             rtc_timer_update(s);
>         }

Sure. 

Tim.


From xen-devel-bounces@lists.xen.org Fri Feb 14 17:10:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:10:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMHR-0003Eo-Bb; Fri, 14 Feb 2014 17:10:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WEMHQ-0003Eg-9b
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:10:24 +0000
Received: from [193.109.254.147:52251] by server-4.bemta-14.messagelabs.com id
	16/69-32066-FFD4EF25; Fri, 14 Feb 2014 17:10:23 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392397822!4444362!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10884 invoked from network); 14 Feb 2014 17:10:22 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:10:22 -0000
Received: by mail-ee0-f49.google.com with SMTP id d17so5783822eek.8
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 09:10:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=OXfwe6JGU3MvWXUHUiPYeXOdBZsBnokroTGXINm5mm0=;
	b=E5G74ao2oGinc5CN4sddtaYMXcHfX4XYQNhUCEU4wlPCUCJtHfB2oZTKXACxGSnTig
	+WBbvYERVMldlS9TlnAsnC0GQTq8auUk5b3zF4/s/G7Rs/x5mfvoEtbIJ8TFZ8Y+7JNr
	kWRX7oXysdvusSOr6nkgvmZMiCDH8B64zIhHU98BVTQSZBegmIoiEnYiEKJvtVvw4JcV
	WrlNlX1zBYUeak9A/FMdQAN9cFE4igd3+6MhZCM44/9Qmm4up1/Pu0eI3VRoGKSVubIM
	5jtZtVNf/VGOBDyJ0sCCLQ9hEgFzaHrMoHGpgFkLuysmszPKwANtAFY1OVa8LkuUV9dy
	TpKQ==
X-Gm-Message-State: ALoCoQm2WHp338EPJKmhCO+t4ZRHOnRhiDQjgD9VuxkoMCtdS1F3NN21g4uWAyIohEp++c82mf9w
X-Received: by 10.15.41.14 with SMTP id r14mr3842036eev.78.1392397822278;
	Fri, 14 Feb 2014 09:10:22 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id k6sm21936946eep.17.2014.02.14.09.10.20
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 14 Feb 2014 09:10:21 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 14 Feb 2014 17:10:09 +0000
Message-Id: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com, george.dunlap@citrix.com
Subject: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
	pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current implementation of the raw_copy_* helpers may lead to data
corruption, and sometimes a Xen crash, when the guest virtual address is not
aligned to PAGE_SIZE.

When the total length is larger than a page, the length to read is
incorrectly computed as
    min(len, (unsigned)(PAGE_SIZE - offset))

As the offset is only computed once per function, if the start address was
not aligned to PAGE_SIZE, a later iteration can end up:
    - reading across a page boundary => Xen crash
    - reading from the previous page => data corruption

This issue is resolved by computing the offset on every iteration.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch is a bug fix for Xen 4.4. Without this patch the data may be
    corrupted between Xen and the guest when the guest virtual address is
    not aligned to PAGE_SIZE. Sometimes it can also crash Xen.

    These functions are used in numerous places in Xen, so if this patch
    introduces another bug it should show up quickly, even with small
    amounts of data.
---
 xen/arch/arm/guestcopy.c |    8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index af0af6b..b3b54e9 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -9,12 +9,11 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
                                               unsigned len, int flush_dcache)
 {
     /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
-
     while ( len )
     {
         paddr_t g;
         void *p;
+        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 
         if ( gvirt_to_maddr((vaddr_t) to, &g) )
@@ -50,12 +49,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
 unsigned long raw_clear_guest(void *to, unsigned len)
 {
     /* XXX needs to handle faults */
-    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
 
     while ( len )
     {
         paddr_t g;
         void *p;
+        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
         unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
 
         if ( gvirt_to_maddr((vaddr_t) to, &g) )
@@ -76,12 +75,11 @@ unsigned long raw_clear_guest(void *to, unsigned len)
 
 unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
 {
-    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
-
     while ( len )
     {
         paddr_t g;
         void *p;
+        unsigned offset = (vaddr_t)from & ~PAGE_MASK;
         unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
 
         if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g) )
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 14 17:12:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMJf-0003NE-Tp; Fri, 14 Feb 2014 17:12:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WEMJd-0003N5-0T
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 17:12:42 +0000
Received: from [85.158.137.68:40391] by server-3.bemta-3.messagelabs.com id
	03/68-14520-88E4EF25; Fri, 14 Feb 2014 17:12:40 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392397959!957507!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP, SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17470 invoked from network); 14 Feb 2014 17:12:39 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 17:12:39 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WEMJW-0003JC-I1; Fri, 14 Feb 2014 17:12:34 +0000
Date: Fri, 14 Feb 2014 18:12:34 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140214171234.GC6581@deinos.phlegethon.org>
References: <1392312779-14373-1-git-send-email-tim@xen.org>
	<1392312779-14373-4-git-send-email-tim@xen.org>
	<52FE2C4F020000780011C74E@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <52FE2C4F020000780011C74E@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: george.dunlap@eu.citrix.com, andrew.cooper3@citrix.com,
	xen-devel@lists.xen.org, keir@xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH RFC 3/3] x86/hvm/rtc: Always deassert the
 IRQ line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:46 +0000 on 14 Feb (1392381999), Jan Beulich wrote:
> >>> On 13.02.14 at 18:32, Tim Deegan <tim@xen.org> wrote:
> > Even in no-ack mode, there's no reason to leave the line asserted
> > after an explicit ack of the interrupt.
> > 
> > Signed-off-by: Tim Deegan <tim@xen.org>
> > ---
> >  xen/arch/x86/hvm/rtc.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
> > index 18a4fe8..b592547 100644
> > --- a/xen/arch/x86/hvm/rtc.c
> > +++ b/xen/arch/x86/hvm/rtc.c
> > @@ -674,7 +674,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
> >          check_for_pf_ticks(s);
> >          ret = s->hw.cmos_data[s->hw.cmos_index];
> >          s->hw.cmos_data[RTC_REG_C] = 0x00;
> > -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
> > +        if ( ret & RTC_IRQF )
> >              hvm_isa_irq_deassert(d, RTC_IRQ);
> >          rtc_update_irq(s);
> >          check_update_timer(s);
> 
> Wait - does one of the earlier patches remove the other de-assert?
> Looking... No, they don't. Doing it in exactly one of the two places
> should be sufficient, shouldn't it?

Yes; it just seemed odd to leave it asserted in some cases when the
hardware wouldn't.  Given that we know the line is never shared and
that it should always be edge-sensitive in the IOAPIC, it doesn't
matter very much. 

> The more that the other de-assert
> is in rtc_update_irq(), which is being called right afterwards. 
> 
> But then again - that call seems pointless (I think I mentioned this in
> response to Andrew's first attempt): Since REG_C is now clear, the
> first conditional return path in that function will never be taken, and
> the second one always will be.
>
> So I think together with removing that call, and considering the
> intended level-ness of the IRQ, I think I agree with you after all,

Right; this patch could just drop that call too.  I think the original
intent was to keep all the irq frobbing in one place.  So maybe the
better choice is to make sure that rtc_update_irq() DTRT and just drop
the deassert from here altogether.

> provided two de-asserts with no assert in between are not a
> problem.

They shouldn't be -- the _isa_irq_[de]assert operations are idempotent.

Tim.


> > index 18a4fe8..b592547 100644
> > --- a/xen/arch/x86/hvm/rtc.c
> > +++ b/xen/arch/x86/hvm/rtc.c
> > @@ -674,7 +674,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
> >          check_for_pf_ticks(s);
> >          ret = s->hw.cmos_data[s->hw.cmos_index];
> >          s->hw.cmos_data[RTC_REG_C] = 0x00;
> > -        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
> > +        if ( ret & RTC_IRQF )
> >              hvm_isa_irq_deassert(d, RTC_IRQ);
> >          rtc_update_irq(s);
> >          check_update_timer(s);
> 
> Wait - does one of the earlier patches remove the other de-assert?
> Looking... No, they don't. Doing it in exactly one of the two places
> should be sufficient, shouldn't it?

Yes; it just seemed odd to leave it asserted in some cases when the
hardware wouldn't.  Given that we know the line is never shared and
that it should always be edge-sensitive in the IOAPIC, it doesn't
matter very much. 
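
The behavioural change can be sketched with a toy model (Python; this is
only a sketch of the patched condition from the diff, not Xen's actual
code, and the function name is illustrative):

```python
RTC_IRQF = 0x80  # interrupt-request flag in RTC register C

def ack_deasserts_line(reg_c, no_ack_mode, patched):
    """Does an explicit ack (a read of REG_C) lower the ISA line?"""
    if patched:
        # Patched: always deassert when IRQF was set.
        return bool(reg_c & RTC_IRQF)
    # Unpatched: the deassert is skipped in no-ack mode.
    return bool(reg_c & RTC_IRQF) and not no_ack_mode

# Only the patched path lowers the line on an ack in no-ack mode:
before = ack_deasserts_line(RTC_IRQF, no_ack_mode=True, patched=False)
after = ack_deasserts_line(RTC_IRQF, no_ack_mode=True, patched=True)
# before is False, after is True
```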

> All the more so since the other de-assert is in
> rtc_update_irq(), which is being called right afterwards.
> 
> But then again - that call seems pointless (I think I mentioned this in
> response to Andrew's first attempt): Since REG_C is now clear, the
> first conditional return path in that function will never be taken, and
> the second one always will be.
>
> So, together with removing that call, and considering the
> intended level-ness of the IRQ, I think I agree with you after all,

Right; this patch could just drop that call too.  I think the original
idea was that having all the irq frobbing in one place was a good
idea.  So maybe the better choice is to make sure that
rtc_update_irq() DTRT and just drop the deassert from here altogether.

> provided two de-asserts with no assert in between are not a
> problem.

They shouldn't be -- the _isa_irq_[de]assert operations are idempotent.
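
The idempotency being relied on can be illustrated with a toy
level-triggered line (a Python sketch under stated assumptions, not the
real hvm_isa_irq_* implementation):

```python
class ToyIrqLine:
    """Level-triggered line: only real edges reach the controller."""

    def __init__(self):
        self.asserted = False
        self.edges = []            # edges delivered onward

    def assert_line(self):
        if not self.asserted:      # a repeated assert is a no-op
            self.asserted = True
            self.edges.append("raise")

    def deassert_line(self):
        if self.asserted:          # a repeated deassert is a no-op
            self.asserted = False
            self.edges.append("lower")

line = ToyIrqLine()
line.deassert_line()
line.deassert_line()               # two de-asserts, no assert in between
line.assert_line()
line.deassert_line()
# line.edges is ["raise", "lower"]: the extra de-assert changed nothing
```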

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:22:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:22:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMSP-0003aw-08; Fri, 14 Feb 2014 17:21:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WEMSN-0003ap-KB
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 17:21:43 +0000
Received: from [85.158.137.68:4600] by server-2.bemta-3.messagelabs.com id
	BC/55-06531-5A05EF25; Fri, 14 Feb 2014 17:21:41 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392398490!701938!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTMxNzkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9516 invoked from network); 14 Feb 2014 17:21:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:21:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; 
	d="asc'?scan'208";a="102626720"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:21:09 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:21:08 -0500
Message-ID: <1392398466.32038.334.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Fri, 14 Feb 2014 18:21:06 +0100
In-Reply-To: <6010385428.20140214120238@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9039118972003151388=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9039118972003151388==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-kgnknmet+NsChhHAiKnc"

--=-kgnknmet+NsChhHAiKnc
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-02-14 at 12:02 +0000, Simon Martin wrote:
> Thanks everyone and especially Ian! It was the hyperthreading that was
> causing the problem.
>=20
Good to hear, and at the same time, sorry to hear that. :-)

I mean, I'm glad you nailed it, but at the same time I'm sorry that the
solution is to 'waste' a core! :-( I'll happily reiterate that Xen
should be doing at least a bit better in these circumstances, if we
want to properly address use cases like yours. However, there are
limits on how far we can go, and hardware design is certainly among
them!

All this is to say that it should be possible to get a bit more
isolation by tweaking the relevant Xen code paths, but if the amount
of interference that comes from two hyperthreads sharing registers,
pipeline stages, and whatever else they share is enough to disturb
your workload, then I'm afraid we'll never get much farther than the
'don't use hyperthreading' solution! :-(

Anyway, with respect to the first part of this reasoning, would you
mind (when you've got the time, of course) running one more test? If
not, I'd say configure the system as I was suggesting in my first
reply, i.e., using core #2 as well (or, in general, all the cores).
Also, make sure you add this parameter to the Xen boot command line:

 sched_smt_power_savings=3D1

(some background here:
http://lists.xen.org/archives/html/xen-devel/2009-03/msg01335.html)
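
For reference, on systems booting Xen via GRUB 2, a hypothetical way to
add the parameter (the file path and variable name follow common
GRUB/Xen packaging and are an assumption, not something stated in this
thread):

```
# /etc/default/grub -- regenerate grub.cfg afterwards (e.g. update-grub)
GRUB_CMDLINE_XEN_DEFAULT="sched_smt_power_savings=1"
```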

And then run the bench with disk activity on.

> Here's my current configuration:
>=20
> # xl cpupool-list -c
> Name               CPU list
> Pool-0             0,1
> pv499              2,3
> # xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    0   r--      16.6  0
> Domain-0                             0     1    1   -b-       7.3  1
> win7x64                              1     0    1   -b-      82.5  all
> win7x64                              1     1    0   -b-      18.6  all
> pv499                                2     0    3   r--     226.1  3
>=20
> I have pinned dom0 as I wasn't sure whether it belongs to Pool-0 (I
> assume it does; can you confirm, please?).
>=20
Actually, you are right. It looks like there is no command or command
parameter that explicitly reports which pool a domain belongs to
[BTW, adding Juergen, who knows for sure].

If that is the case, we really should add one.

BTW, if you boot the system and then create the (new) pool(s), all the
existing domains, including Dom0, will stay in the "original" pool,
while the new pool(s) will be empty. To change that, you'd have to
either migrate the existing domains into specific pools with `xl
cpupool-migrate', or create them with the proper option and the name
of the target pool specified in the config file (which is probably
what you're doing for your DomUs).

I guess that, as a workaround to confirm where a domain is, you can
(try to) migrate it around with `xl cpupool-migrate' and see what
happens.

> Dario, if you are going to look at the
>=20
Is something missing here... ?

Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-kgnknmet+NsChhHAiKnc
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL+UIIACgkQk4XaBE3IOsS56ACfZ84a4ZMjghtUojQNqenrJ0h5
lYUAoJGnf0k4M1whx1mRXdpSdOmFCWvR
=j5wn
-----END PGP SIGNATURE-----

--=-kgnknmet+NsChhHAiKnc--


--===============9039118972003151388==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9039118972003151388==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMg4-0003m3-HQ; Fri, 14 Feb 2014 17:35:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WEMg3-0003ly-IU
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:35:51 +0000
Received: from [85.158.137.68:13331] by server-11.bemta-3.messagelabs.com id
	58/B6-04255-6F35EF25; Fri, 14 Feb 2014 17:35:50 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392399348!1970330!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29009 invoked from network); 14 Feb 2014 17:35:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:35:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631329"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:35:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:35:47 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WEMfy-0006YJ-Tg;
	Fri, 14 Feb 2014 17:35:46 +0000
Date: Fri, 14 Feb 2014 17:35:46 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1402141735360.4307@kaball.uk.xensource.com>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com, george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Feb 2014, Julien Grall wrote:
> The current implementation of the raw_copy_* helpers may lead to data
> corruption, and sometimes to a Xen crash, when the guest virtual address
> is not aligned to PAGE_SIZE.
> 
> When the total length is larger than a page, the length to read is
> wrongly computed as
>     min(len, (unsigned)(PAGE_SIZE - offset))
> 
> As the offset is only computed once per function, if the start address
> was not aligned to PAGE_SIZE, we can end up, in the same iteration:
>     - reading across a page boundary => Xen crash
>     - reading the previous page => data corruption
> 
> This issue can be resolved by computing the offset on every iteration.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>


>     This patch is a bug fix for Xen 4.4. Without this patch, data may be
>     corrupted between Xen and the guest when the guest virtual address is
>     not aligned to PAGE_SIZE. Sometimes it can also crash Xen.
> 
>     These functions are used in numerous places in Xen, so if this patch
>     introduced another bug, we would see it quickly, even with small
>     amounts of data.
> ---
>  xen/arch/arm/guestcopy.c |    8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index af0af6b..b3b54e9 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -9,12 +9,11 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>                                                unsigned len, int flush_dcache)
>  {
>      /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> -
>      while ( len )
>      {
>          paddr_t g;
>          void *p;
> +        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>  
>          if ( gvirt_to_maddr((vaddr_t) to, &g) )
> @@ -50,12 +49,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
>      /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
>  
>      while ( len )
>      {
>          paddr_t g;
>          void *p;
> +        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>  
>          if ( gvirt_to_maddr((vaddr_t) to, &g) )
> @@ -76,12 +75,11 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
> -
>      while ( len )
>      {
>          paddr_t g;
>          void *p;
> +        unsigned offset = (vaddr_t)from & ~PAGE_MASK;
>          unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
>  
>          if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g) )
> -- 
> 1.7.10.4
> 
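
The stale-offset arithmetic can be checked with a small simulation
(Python sketch; PAGE_SIZE and the addresses are illustrative values,
not taken from the patch):

```python
PAGE_SIZE = 4096

def copy_chunks(addr, length, recompute_offset):
    """Return the (offset, size) pairs the copy loop would use."""
    chunks = []
    offset = addr % PAGE_SIZE             # old code: computed only once
    while length:
        if recompute_offset:
            offset = addr % PAGE_SIZE     # fix: recomputed every iteration
        size = min(length, PAGE_SIZE - offset)
        chunks.append((offset, size))
        addr += size
        length -= size
    return chunks

# Copy 8 KiB starting 16 bytes before a page boundary (addr 0x1FF0).
old = copy_chunks(0x1FF0, 0x2000, recompute_offset=False)
new = copy_chunks(0x1FF0, 0x2000, recompute_offset=True)
# Fixed loop: 16 bytes, then a whole page, then the remainder:
#   new == [(0xFF0, 16), (0, 4096), (0, 4080)]
# Old loop: every later chunk still uses offset 0xFF0, i.e. the copy
# keeps addressing bytes [0xFF0, 0x1000) of whichever page is mapped.
```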

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgd-0003o1-W6; Fri, 14 Feb 2014 17:36:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgc-0003nn-49
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:26 +0000
Received: from [85.158.137.68:15440] by server-2.bemta-3.messagelabs.com id
	D8/C3-06531-9145EF25; Fri, 14 Feb 2014 17:36:25 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392399381!1960650!1
X-Originating-IP: [66.165.176.63]
Received: (qmail 17805 invoked from network); 14 Feb 2014 17:36:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100859317"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:19 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgV-0006Z8-GC;
	Fri, 14 Feb 2014 17:36:19 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgT-000383-Jr; Fri, 14 Feb 2014 17:36:17 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:51 +0000
Message-ID: <1392399353-11973-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  944 ++++++++++++++++++++++++++------------------
 1 file changed, 551 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f9daa9e..d4239b9 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
 +		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1261,24 +1311,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = alloc_percpu(struct netfront_stats);
@@ -1291,38 +1332,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 		u64_stats_init(&xen_nf_stats->syncp);
 	}
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1348,10 +1359,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1409,30 +1416,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1473,100 +1485,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1575,13 +1573,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1589,21 +1587,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1614,17 +1612,77 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_page[i] = NULL;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1632,13 +1690,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1646,34 +1763,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1733,6 +1850,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1745,6 +1865,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1765,36 +1887,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1803,14 +1929,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1883,7 +2012,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1915,7 +2044,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1926,6 +2058,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1939,16 +2073,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1958,7 +2095,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1969,6 +2109,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1982,16 +2124,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -2001,7 +2146,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
 	xennet_disconnect_backend(info);
 
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
+
 	xennet_sysfs_delif(info->netdev);
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
-
 	free_percpu(info->stats);
 
 	free_netdev(info->netdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgd-0003o1-W6; Fri, 14 Feb 2014 17:36:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgc-0003nn-49
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:26 +0000
Received: from [85.158.137.68:15440] by server-2.bemta-3.messagelabs.com id
	D8/C3-06531-9145EF25; Fri, 14 Feb 2014 17:36:25 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392399381!1960650!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17805 invoked from network); 14 Feb 2014 17:36:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:22 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="100859317"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:19 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgV-0006Z8-GC;
	Fri, 14 Feb 2014 17:36:19 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgT-000383-Jr; Fri, 14 Feb 2014 17:36:17 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:51 +0000
Message-ID: <1392399353-11973-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one queue
is configured at this point, and use alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  944 ++++++++++++++++++++++++++------------------
 1 file changed, 551 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f9daa9e..d4239b9 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend sees requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1261,24 +1311,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = alloc_percpu(struct netfront_stats);
@@ -1291,38 +1332,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 		u64_stats_init(&xen_nf_stats->syncp);
 	}
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1348,10 +1359,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1409,30 +1416,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1473,100 +1485,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1575,13 +1573,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1589,21 +1587,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1614,17 +1612,77 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1632,13 +1690,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any queues that were
+			 * previously initialised. Setting info->num_queues to i
+			 * before jumping to destroy_ring ensures that only those
+			 * queues are torn down by xennet_disconnect_backend()
+			 * and that the queue array itself is freed.
+			 */
+			info->num_queues = i;
+			goto destroy_ring;
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error; clean up any previously
+			 * initialised queues the same way.
+			 */
+			info->num_queues = i;
+			goto destroy_ring;
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1646,34 +1763,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1733,6 +1850,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1745,6 +1865,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1765,36 +1887,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1803,14 +1929,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1883,7 +2012,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1915,7 +2044,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1926,6 +2058,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1939,16 +2073,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1958,7 +2095,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1969,6 +2109,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1982,16 +2124,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -2001,7 +2146,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
 	xennet_disconnect_backend(info);
 
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
+
 	xennet_sysfs_delif(info->netdev);
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
-
 	free_percpu(info->stats);
 
 	free_netdev(info->netdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgq-0003qS-GZ; Fri, 14 Feb 2014 17:36:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgo-0003pn-V1
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:39 +0000
Received: from [85.158.139.211:5396] by server-8.bemta-5.messagelabs.com id
	FD/37-05298-6245EF25; Fri, 14 Feb 2014 17:36:38 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392399395!4027707!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8257 invoked from network); 14 Feb 2014 17:36:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631450"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:16 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgR-0006Yz-OA;
	Fri, 14 Feb 2014 17:36:15 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgQ-00037r-7h; Fri, 14 Feb 2014 17:36:14 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:48 +0000
Message-ID: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h.

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability to negotiate not only the choice of hash algorithm, but also
to let the frontend specify parameters for it.

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/... where N varies
from 0 to one less than the requested number of queues (inclusive). If
only one queue is requested, it falls back to the flat structure where
the ring references and event channels are written at the same level as
other vif information.
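As a concrete illustration, the hierarchical layout could look like the
sketch below. The vif path prefix, domain id, and values are
placeholders; the per-queue key names match those the driver writes in
the flat (single-queue) case.

```
# Hypothetical frontend area for a two-queue vif; refs/ports are examples.
/local/domain/<domid>/device/vif/0/queue-0/tx-ring-ref      = "<ref>"
/local/domain/<domid>/device/vif/0/queue-0/rx-ring-ref      = "<ref>"
/local/domain/<domid>/device/vif/0/queue-0/event-channel-tx = "<port>"
/local/domain/<domid>/device/vif/0/queue-0/event-channel-rx = "<port>"
/local/domain/<domid>/device/vif/0/queue-1/tx-ring-ref      = "<ref>"
...
```

With a single queue, the same keys appear directly under the vif node,
with no queue-N component.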



V3:
- Further indentation and style fixups

V2:
- Rebase onto net-next
- Change queue->number to queue->id
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu
- Fixup formatting and style issues
- XenStore protocol changes documented in netif.h
- Default max. number of queues to num_online_cpus()
- Check requested number of queues does not exceed maximum

--
Andrew J. Bennieston

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org



This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
    netfront respectively, and modify the rest of the code to use these
    as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h.

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is
    scaling out across available resources, not an efficiency
    improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet
(i.e. TCP src/dst port, IP src/dst address) and is not negotiated
between the frontend and backend, since only one option exists. Future
patches to support other frontends (particularly Windows) will need to
add the capability to negotiate not only the choice of hash algorithm,
but also any parameters the frontend must supply for it.

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to the requested number of queues minus one, inclusive. If only
one queue is requested, the layout falls back to the flat structure,
in which the ring references and event channels are written at the
same level as the other vif information.
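For concreteness, the two layouts look roughly like this (the domain
id, vif handle, and values shown are placeholders):

```
# Flat layout (one queue, or a multi-queue-unaware frontend):
/local/domain/<N>/device/vif/0/tx-ring-ref
/local/domain/<N>/device/vif/0/rx-ring-ref
/local/domain/<N>/device/vif/0/event-channel-tx
/local/domain/<N>/device/vif/0/event-channel-rx

# Hierarchical layout (e.g. two queues requested):
/local/domain/<N>/device/vif/0/multi-queue-num-queues = "2"
/local/domain/<N>/device/vif/0/queue-0/tx-ring-ref
/local/domain/<N>/device/vif/0/queue-0/rx-ring-ref
/local/domain/<N>/device/vif/0/queue-0/event-channel-tx
/local/domain/<N>/device/vif/0/queue-0/event-channel-rx
/local/domain/<N>/device/vif/0/queue-1/...
```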



V3:
- Further indentation and style fixups

V2:
- Rebase onto net-next
- Change queue->number to queue->id
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu
- Fixup formatting and style issues
- XenStore protocol changes documented in netif.h
- Default max. number of queues to num_online_cpus()
- Check requested number of queues does not exceed maximum

--
Andrew J. Bennieston

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgr-0003r4-Vf; Fri, 14 Feb 2014 17:36:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgq-0003qK-Gh
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:40 +0000
Received: from [85.158.139.211:24930] by server-1.bemta-5.messagelabs.com id
	B4/71-12859-7245EF25; Fri, 14 Feb 2014 17:36:39 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392399395!4027707!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8342 invoked from network); 14 Feb 2014 17:36:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631462"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:18 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgU-0006Z5-3O;
	Fri, 14 Feb 2014 17:36:18 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgS-00037y-E0; Fri, 14 Feb 2014 17:36:16 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:50 +0000
Message-ID: <1392399353-11973-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 2/5] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the values written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    6 +++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 80 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 2550867..8180929 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index daf93f6..bc7a82d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			      xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 46b2f5b..aeb5ffa 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,9 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1588,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..d11f51e 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+			"guest requested %u queues, exceeding the maximum of %u.",
+			requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1) {
+		xspath = (char *)dev->otherend;
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgs-0003ro-II; Fri, 14 Feb 2014 17:36:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgr-0003qW-8c
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:41 +0000
Received: from [85.158.139.211:10629] by server-5.bemta-5.messagelabs.com id
	61/AB-32749-8245EF25; Fri, 14 Feb 2014 17:36:40 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392399395!4027707!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8455 invoked from network); 14 Feb 2014 17:36:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631472"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:21 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgX-0006ZE-O3;
	Fri, 14 Feb 2014 17:36:21 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgW-00038D-6D; Fri, 14 Feb 2014 17:36:20 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:53 +0000
Message-ID: <1392399353-11973-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.
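The negotiation these keys enable can be summarised as follows (the
numeric values are placeholders):

```
# Backend (netback) advertises its limit under its own nodename:
multi-queue-max-queues = "8"

# Frontend (netfront) requests a count, which must not exceed that:
multi-queue-num-queues = "2"

# Frontend then writes ring refs and event channels per queue
# (only when requesting two or more queues):
queue-0/tx-ring-ref, queue-0/rx-ring-ref,
queue-0/event-channel-tx, queue-0/event-channel-rx, ...
```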

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..8868c51 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, so the backend does not need to
+ * distinguish between a frontend that doesn't understand the multi-queue
+ * feature and one that does but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel and ring-ref keys; instead, they write these keys under
+ * sub-keys named "queue-N", where N is the integer ID of the queue to
+ * which those keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgs-0003ro-II; Fri, 14 Feb 2014 17:36:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgr-0003qW-8c
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:41 +0000
Received: from [85.158.139.211:10629] by server-5.bemta-5.messagelabs.com id
	61/AB-32749-8245EF25; Fri, 14 Feb 2014 17:36:40 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392399395!4027707!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8455 invoked from network); 14 Feb 2014 17:36:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631472"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:21 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgX-0006ZE-O3;
	Fri, 14 Feb 2014 17:36:21 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgW-00038D-6D; Fri, 14 Feb 2014 17:36:20 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:53 +0000
Message-ID: <1392399353-11973-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..8868c51 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, so the backend does not need to
+ * distinguish between a frontend that doesn't understand the multi-queue
+ * feature and one that does but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel and ring-ref keys; instead, they write these keys under
+ * sub-keys named "queue-N", where N is the integer ID of the queue to
+ * which those keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgu-0003tC-6q; Fri, 14 Feb 2014 17:36:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgs-0003r3-CK
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:42 +0000
Received: from [85.158.139.211:5708] by server-14.bemta-5.messagelabs.com id
	2B/20-27598-9245EF25; Fri, 14 Feb 2014 17:36:41 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392399395!4027707!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8625 invoked from network); 14 Feb 2014 17:36:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631468"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:21 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgW-0006ZB-Nk;
	Fri, 14 Feb 2014 17:36:20 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgV-000388-0W; Fri, 14 Feb 2014 17:36:19 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:52 +0000
Message-ID: <1392399353-11973-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  176 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 138 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index d4239b9..a72ddbc 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,10 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +569,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1) {
+		queue_idx = 0;
+	} else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1327,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1683,6 +1699,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore: in the traditional
+	 * flat layout for a single queue, or under per-queue sub-keys for
+	 * multiple queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels, covering both the shared and the split
+	 * event-channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1692,10 +1790,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1711,12 +1820,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1754,49 +1864,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1846,8 +1942,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2241,6 +2338,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgv-0003uL-Pj; Fri, 14 Feb 2014 17:36:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgt-0003rs-0G
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:43 +0000
Received: from [85.158.137.68:39298] by server-10.bemta-3.messagelabs.com id
	2D/2A-07302-A245EF25; Fri, 14 Feb 2014 17:36:42 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392399397!1969216!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16113 invoked from network); 14 Feb 2014 17:36:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631459"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:17 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgT-0006Z2-1B;
	Fri, 14 Feb 2014 17:36:17 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgR-00037u-9C; Fri, 14 Feb 2014 17:36:15 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:49 +0000
Message-ID: <1392399353-11973-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and uses
skb_get_hash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   81 ++++--
 drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
 drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 593 insertions(+), 417 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..2550867 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,36 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +159,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +200,12 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..daf93f6 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -253,7 +328,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)((void *)vif + xenvif_stats[i].offset));
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +361,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +371,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +383,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +402,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +411,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +427,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +550,52 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
+
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..46b2f5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a miniumum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		atomic_inc(&vif->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:36:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMgv-0003uL-Pj; Fri, 14 Feb 2014 17:36:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WEMgt-0003rs-0G
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:36:43 +0000
Received: from [85.158.137.68:39298] by server-10.bemta-3.messagelabs.com id
	2D/2A-07302-A245EF25; Fri, 14 Feb 2014 17:36:42 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392399397!1969216!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16113 invoked from network); 14 Feb 2014 17:36:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:36:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,845,1384300800"; d="scan'208";a="102631459"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 17:36:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 12:36:17 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WEMgT-0006Z2-1B;
	Fri, 14 Feb 2014 17:36:17 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WEMgR-00037u-9C; Fri, 14 Feb 2014 17:36:15 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 17:35:49 +0000
Message-ID: <1392399353-11973-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V3 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

This patch also adds loops over queues where appropriate, even though only
one queue is configured at this point, and switches to alloc_netdev_mq() and
the corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, it implements a trivial queue selection function suitable for
ndo_select_queue(), which simply returns 0 when only a single queue is
configured and otherwise uses skb_get_hash() to compute the queue index.
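
The hash-to-queue arithmetic described above can be sketched in plain C; the
helper name below is illustrative only and is not a symbol from the driver:

```c
#include <stdint.h>

/* Sketch of the queue selection arithmetic in this patch's
 * xenvif_select_queue(): scale a 32-bit flow hash onto [0, num_queues)
 * by taking the high 32 bits of (hash * num_queues), which avoids a
 * modulo and distributes flows roughly evenly across queues.
 */
static uint16_t hash_to_queue(uint32_t hash, unsigned int num_queues)
{
	if (num_queues == 1)
		return 0; /* single-queue / old-frontend fast path */
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```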

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   81 ++++--
 drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
 drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 593 insertions(+), 417 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..2550867 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,36 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +159,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +200,12 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..daf93f6 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -253,7 +328,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)vif + xenvif_stats[i].offset);
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +361,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +371,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +383,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +402,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +411,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +427,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +550,52 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
+
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..46b2f5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a miniumum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
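As an aside for reviewers: the two helpers above lean on `MAX_PENDING_REQS` being a power of two (mask instead of modulo) and on unsigned arithmetic surviving counter wrap. A minimal standalone sketch, with an illustrative constant rather than the driver's real one:

```c
#include <assert.h>

/* Illustrative value only; the real constant lives in the driver
 * headers and must be a power of two for the mask below to work. */
#define MAX_PENDING_REQS 256

/* i & (N-1) == i % N when N is a power of two, but cheaper. */
static unsigned int pending_index(unsigned int i)
{
	return i & (MAX_PENDING_REQS - 1);
}

/* The pending ring holds free slot indices: prod - cons of them are
 * available, so in-flight requests are capacity minus that count.
 * Unsigned subtraction keeps this correct across counter wrap. */
static unsigned int nr_pending_reqs(unsigned int pending_prod,
				    unsigned int pending_cons)
{
	return MAX_PENDING_REQS - pending_prod + pending_cons;
}
```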
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
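The re-check loop in `xenvif_rx_ring_slots_available()` is subtle enough to be worth modelling. The sketch below uses a stand-in struct (not the real `struct xen_netif_rx_sring`) and single-threaded execution, so the kernel's `mb()` is noted but omitted:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the shared rx ring: only the fields the check touches. */
struct ring_model {
	unsigned int req_prod;	/* written by the frontend */
	unsigned int req_cons;	/* advanced by the backend */
	unsigned int req_event;	/* backend asks to be notified here */
};

/* True when at least 'needed' unconsumed requests exist. On failure,
 * arm req_event at prod + 1 and re-read req_prod: this closes the race
 * where the frontend produced more requests between the first read and
 * the event being armed. Unsigned subtraction handles index wrap. */
static bool rx_ring_slots_available(struct ring_model *r,
				    unsigned int needed)
{
	unsigned int prod, cons;

	do {
		prod = r->req_prod;
		cons = r->req_cons;

		if (prod - cons >= needed)
			return true;

		r->req_event = prod + 1;
		/* the kernel issues mb() here; a single-threaded model
		 * has no reordering to guard against */
	} while (r->req_prod != prod);

	return false;
}
```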
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
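For reference, the top-up logic in `tx_add_credit()` can be modelled outside the kernel. All names here are local to the sketch; the burst floor, the 128kB jumbo cap and the wrap clamp mirror the code above:

```c
#include <assert.h>
#include <limits.h>

/* Model of the credit top-up: allow a burst of at least one full
 * window (credit_bytes) or the head request (capped at 128kB), and
 * clamp the unsigned addition so it never wraps to a small value. */
static unsigned long add_credit(unsigned long remaining,
				unsigned long credit_bytes,
				unsigned long head_req_size)
{
	unsigned long max_burst, max_credit;

	max_burst = head_req_size < 131072UL ? head_req_size : 131072UL;
	if (max_burst < credit_bytes)
		max_burst = credit_bytes;

	max_credit = remaining + credit_bytes;
	if (max_credit < remaining)	/* wrapped: clamp to ULONG_MAX */
		max_credit = ULONG_MAX;

	return max_credit < max_burst ? max_credit : max_burst;
}
```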
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		atomic_inc(&vif->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
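A simplified model of the window logic in `tx_credit_exceeded()` above, using abstract ticks instead of jiffies. It is a sketch only: the real code arms `credit_timeout` when deferring and applies the clamping from `tx_add_credit()`, both of which are omitted here:

```c
#include <assert.h>
#include <stdbool.h>

/* Credit window model: 'credit_bytes' is replenished once per
 * 'window_len' ticks; an oversized request is deferred. */
struct credit_model {
	unsigned long remaining;
	unsigned long credit_bytes;
	unsigned long window_start;
	unsigned long window_len;
};

static bool credit_exceeded(struct credit_model *c, unsigned long now,
			    unsigned long size)
{
	/* passed the point where we can replenish? */
	if (now - c->window_start >= c->window_len) {
		c->window_start = now;
		c->remaining += c->credit_bytes;
	}

	/* still too big to send right now? */
	return size > c->remaining;
}
```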
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:37:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMhw-0004JD-4m; Fri, 14 Feb 2014 17:37:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WEMhu-0004Ih-FP
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 17:37:46 +0000
Received: from [193.109.254.147:43485] by server-9.bemta-14.messagelabs.com id
	29/8A-24895-9645EF25; Fri, 14 Feb 2014 17:37:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392399465!442582!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29746 invoked from network); 14 Feb 2014 17:37:45 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:37:45 -0000
Received: by mail-ea0-f181.google.com with SMTP id k10so3719369eaj.40
	for <xen-devel@lists.xensource.com>;
	Fri, 14 Feb 2014 09:37:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=F+7dIA8oqdyUW1FDQmfI8eezYwjZ1fCcsZ7vNpIMjOQ=;
	b=Drub0UZDGpyyiAc8+2zFmKBL6XVGWDNVJU0n098B+VsIaOOwMAbDtxWeK/eoHYJOQE
	0dHKA7EIgetpHKUnDKQ672AY85HNhhfRD6hPNwgKXaV0PKm32G3OFnAbV1vPMYqGwBgL
	TH7toQA+G+MEBbMSKFylte/007myutkac5IN/CVPjSCe9xt8R6uHLdhOnTkQYadVtQDF
	w/rqgOd4KRZvRROI3lvbi+eUpXkTWmKi6n/dengOkqmkarXKEnfkR3DSVnx62ph9/AEp
	6NnC+efMr02RCYNY7qMksxR2peRBBrwlGqfNIsjO2jKlAg3O4WDKSKtV7yx0hBsPOUZM
	7dOw==
X-Gm-Message-State: ALoCoQmVQ4oP/DazwAbhwRToBY7Tk+aBjykAxy5xWDdK4RXAv5oY0hktx2efoJNzCWIodUjWvoNi
X-Received: by 10.14.211.71 with SMTP id v47mr10523193eeo.37.1392399464669;
	Fri, 14 Feb 2014 09:37:44 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	o45sm22140301eeb.18.2014.02.14.09.37.42 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 09:37:43 -0800 (PST)
Message-ID: <52FE5465.8060803@linaro.org>
Date: Fri, 14 Feb 2014 17:37:41 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
	<1392393098-7351-7-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1392393098-7351-7-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v2 07/10] xen/arm: don't protect GICH
 and lr_queue accesses with gic.lock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 02/14/2014 03:51 PM, Stefano Stabellini wrote:
> GICH is banked, protect accesses by disabling interrupts.
> Protect lr_queue accesses with the vgic.lock only.
> gic.lock only protects accesses to GICD now.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/gic.c  |   22 +++-------------------
>  xen/arch/arm/vgic.c |   12 ++++++++++--
>  2 files changed, 13 insertions(+), 21 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 0955d48..6386ccb 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -667,19 +667,14 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>  {
>      struct pending_irq *p = irq_to_pending(v, virtual_irq);
>  
> -    spin_lock(&gic.lock);
>      if ( !list_empty(&p->lr_queue) )
>          list_del_init(&p->lr_queue);
> -    spin_unlock(&gic.lock);
>  }

This patch doesn't apply cleanly on the latest master. Commit
0ddaeff replaced spin_lock with spin_lock_irqsave.

You need to rebase your patch series on top of at least that commit.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:49:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMtL-00057x-AU; Fri, 14 Feb 2014 17:49:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WEMtJ-00057o-JN
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 17:49:33 +0000
Received: from [85.158.139.211:25542] by server-11.bemta-5.messagelabs.com id
	CC/39-23886-C275EF25; Fri, 14 Feb 2014 17:49:32 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392400172!3919038!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24371 invoked from network); 14 Feb 2014 17:49:32 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 17:49:32 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so4179884eak.29
	for <xen-devel@lists.xensource.com>;
	Fri, 14 Feb 2014 09:49:32 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=DP9SJRSVjn+Sficcw6yBJpMAcgSKgJUp1pAIL7vku3o=;
	b=NcybiAmCF+iCWGM/1rPGdrMI+wmCdeUsq5bU1M39sATNIoN/rOwgZLzOHVelqATvwS
	FhlDy1pe+UNThwNt2+y/SnKp4LdzxbWgmqd0UcE/w9JLggyO9ZJFeP57e1NMXMV/7SZ/
	gN8CZnp63quTqyKBo2hyO17uONEq0mrh+cYkynylG6qTWCqRh/3PWjUv8BhAMcMNHkCf
	yOx1vgR2uJo0hKsmDVJw0KFJUBETQ/C3yxzyE2PzK7M1tvbSy6aiKoEriEYsYnzX98JQ
	tCQOCcY8JzDU4Zmi0SX7FvT9d9VnbO5Gmn8lef+qAqmR8SdNZiIXVcHu3tsRCG/nQvgR
	13/Q==
X-Gm-Message-State: ALoCoQloDpVThWjqN0dk8e46HwvmEs/fG8UnQjaa64l7sIOYJe1QNjwW6v7+QeNauZFtFBrOlwLF
X-Received: by 10.15.56.8 with SMTP id x8mr970127eew.83.1392400171951;
	Fri, 14 Feb 2014 09:49:31 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm22241740ees.4.2014.02.14.09.49.30
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 09:49:31 -0800 (PST)
Message-ID: <52FE5729.1050906@linaro.org>
Date: Fri, 14 Feb 2014 17:49:29 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
	<1392393098-7351-2-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1392393098-7351-2-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v2 02/10] xen/arm: support HW interrupts
 in gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/14/2014 03:51 PM, Stefano Stabellini wrote:
> If the irq to be injected is a hardware irq (p->desc != NULL), set
> GICH_LR_HW.
> 
> Remove the code to EOI a physical interrupt on behalf of the guest
> because it has become unnecessary.
> 
> Also add a struct vcpu* parameter to gic_set_lr.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

IRL you told me that this patch has a dependency on another one. It would
be nice to mention this dependency in the commit message, for the sake of
bisection.

> ---
> 
> Changes in v2:
> - remove the EOI code, now unnecessary;
> - do not assume physical IRQ == virtual IRQ;
> - refactor gic_set_lr.
> ---
>  xen/arch/arm/gic.c |   52 +++++++++++++++++-----------------------------------
>  1 file changed, 17 insertions(+), 35 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index acf7195..64c8aa7 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -618,20 +618,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      return rc;
>  }
>  
> -static inline void gic_set_lr(int lr, unsigned int virtual_irq,
> +static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
>          unsigned int state, unsigned int priority)
>  {
> -    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
> -    struct pending_irq *p = irq_to_pending(current, virtual_irq);
> +    struct pending_irq *p = irq_to_pending(v, irq);
> +    uint32_t lr_reg;
>  
>      BUG_ON(lr >= nr_lrs);
>      BUG_ON(lr < 0);
>      BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
>  
> -    GICH[GICH_LR + lr] = state |
> -        maintenance_int |
> +    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
>          ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
> -        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> +        ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
> +    if ( p->desc != NULL )
> +        lr_reg |= GICH_LR_HW |
> +            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
> +
> +    GICH[GICH_LR + lr] = lr_reg;
>  
>      set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
>      clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
> @@ -666,7 +670,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
>      spin_unlock(&gic.lock);
>  }
>  
> -void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
> +void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
>          unsigned int state, unsigned int priority)
>  {
>      int i;
> @@ -679,12 +683,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
>          i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
>          if (i < nr_lrs) {
>              set_bit(i, &this_cpu(lr_mask));
> -            gic_set_lr(i, virtual_irq, state, priority);
> +            gic_set_lr(v, i, irq, state, priority);
>              goto out;
>          }
>      }
>  
> -    gic_add_to_lr_pending(v, virtual_irq, priority);
> +    gic_add_to_lr_pending(v, irq, priority);
>  
>  out:
>      spin_unlock_irqrestore(&gic.lock, flags);
> @@ -703,7 +707,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
>          if ( i >= nr_lrs ) return;
>  
>          spin_lock_irqsave(&gic.lock, flags);
> -        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
> +        gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
>          list_del_init(&p->lr_queue);
>          set_bit(i, &this_cpu(lr_mask));
>          spin_unlock_irqrestore(&gic.lock, flags);
> @@ -904,15 +908,9 @@ int gicv_setup(struct domain *d)
>  
>  }
>  
> -static void gic_irq_eoi(void *info)
> -{
> -    int virq = (uintptr_t) info;
> -    GICC[GICC_DIR] = virq;
> -}
> -
>  static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
>  {
> -    int i = 0, virq, pirq = -1;
> +    int i = 0, virq;
>      uint32_t lr;
>      struct vcpu *v = current;
>      uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
> @@ -920,10 +918,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>      while ((i = find_next_bit((const long unsigned int *) &eisr,
>                                64, i)) < 64) {
>          struct pending_irq *p, *p2;
> -        int cpu;
>          bool_t inflight;
>  
> -        cpu = -1;
>          inflight = 0;
>  
>          spin_lock_irq(&gic.lock);
> @@ -933,12 +929,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>          clear_bit(i, &this_cpu(lr_mask));
>  
>          p = irq_to_pending(v, virq);
> -        if ( p->desc != NULL ) {
> +        if ( p->desc != NULL )
>              p->desc->status &= ~IRQ_INPROGRESS;
> -            /* Assume only one pcpu needs to EOI the irq */
> -            cpu = p->desc->arch.eoi_cpu;
> -            pirq = p->desc->irq;
> -        }
>          if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
>               test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
>          {
> @@ -950,7 +942,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>  
>          if ( !list_empty(&v->arch.vgic.lr_pending) ) {
>              p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
> -            gic_set_lr(i, p2->irq, GICH_LR_PENDING, p2->priority);
> +            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
>              list_del_init(&p2->lr_queue);
>              set_bit(i, &this_cpu(lr_mask));
>          }
> @@ -963,16 +955,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>              spin_unlock_irq(&v->arch.vgic.lock);
>          }
>  
> -        if ( p->desc != NULL ) {
> -            /* this is not racy because we can't receive another irq of the
> -             * same type until we EOI it.  */
> -            if ( cpu == smp_processor_id() )
> -                gic_irq_eoi((void*)(uintptr_t)pirq);
> -            else
> -                on_selected_cpus(cpumask_of(cpu),
> -                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
> -        }
> -
>          i++;
>      }
>  }
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:51:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMvO-0005GH-A7; Fri, 14 Feb 2014 17:51:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WEMvM-0005Fy-MQ
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:51:40 +0000
Received: from [85.158.139.211:27013] by server-6.bemta-5.messagelabs.com id
	06/5C-14342-CA75EF25; Fri, 14 Feb 2014 17:51:40 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392400297!3991470!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31959 invoked from network); 14 Feb 2014 17:51:39 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 17:51:39 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 57AA4B9B1;
	Fri, 14 Feb 2014 12:51:36 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau =?iso-8859-1?q?Monn=E9?= <roger.pau@citrix.com>
Date: Fri, 14 Feb 2014 12:50:06 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<2410827.IqfpSAhe3T@ralph.baldwin.cx> <52FDF217.3040005@citrix.com>
In-Reply-To: <52FDF217.3040005@citrix.com>
MIME-Version: 1.0
Message-Id: <201402141250.06829.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Fri, 14 Feb 2014 12:51:36 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 10/13] xen: add ACPI bus to xen_nexus
	when running as Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Friday, February 14, 2014 5:38:15 am Roger Pau Monné wrote:
> On 08/02/14 22:50, John Baldwin wrote:
> > On Tuesday, December 24, 2013 12:20:59 PM Roger Pau Monne wrote:
> >> Also disable a couple of ACPI devices that are not usable under Dom0.
> >
> > Hmm, setting debug.acpi.disabled in this way is a bit hacky.  It might
> > be fine however if there's no way for the user to set it before booting
> > the kernel (as opposed to having the relevant drivers explicitly disable
> > themselves under Xen which I think would be cleaner, but would also
> > make your patch larger)
>
> Thanks for the review. The user can pass parameters to FreeBSD when
> booted as Dom0; I just find it uncomfortable to force the user into
> always setting something on the command line in order to boot.

Can the user set debug.acpi.disabled?  If so, you are overriding their
setting which would be bad.

> What do you mean with "having the relevant drivers explicitly disable
> themselves under Xen"? Adding a gate on every one of those devices like
> "if (xen_pv_domain()) return (ENXIO);" in the identify/probe routine
> seems even worse.

A check like this in probe() is what I had in mind, though I agree it's
not perfect.

--
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:51:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMvN-0005GA-Ss; Fri, 14 Feb 2014 17:51:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WEMvM-0005Fw-Hy
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:51:40 +0000
Received: from [85.158.143.35:54342] by server-2.bemta-4.messagelabs.com id
	77/0B-10891-BA75EF25; Fri, 14 Feb 2014 17:51:39 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392400298!5789839!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1300 invoked from network); 14 Feb 2014 17:51:39 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 17:51:39 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 6C199B9E6;
	Fri, 14 Feb 2014 12:51:37 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 14 Feb 2014 12:51:10 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1980951.95r2q2cca3@ralph.baldwin.cx> <52FD7624.90202@citrix.com>
In-Reply-To: <52FD7624.90202@citrix.com>
MIME-Version: 1.0
Message-Id: <201402141251.10278.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Fri, 14 Feb 2014 12:51:37 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 09/13] xen: change quality of the MADT
	ACPI enumerator
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thursday, February 13, 2014 8:49:24 pm Andrew Cooper wrote:
> On 08/02/2014 21:42, John Baldwin wrote:
> > On Tuesday, December 24, 2013 12:20:58 PM Roger Pau Monne wrote:
> >> Lower the quality of the MADT ACPI enumerator, so on Xen Dom0 we can
> >> force the usage of the Xen mptable enumerator even when ACPI is
> >> detected.
> > Hmm, so I think one question is why does the existing MADT parser
> > not work with the MADT table provided by Xen?  This may very well
> > be correct, but if it's only a small change to make the existing
> > MADT parser work with Xen's MADT table, that route might be
> > preferable.
> >
> 
> For dom0, the MADT seen is the system MADT, which bears no relation
> to dom0's topology.  For PV domU, no MADT will be found.  For
> HVM domU, the MADT seen ought to represent (virtual) reality.

Hmm, the other changes suggested that you do want to use the I/O APIC
entries and interrupt overrides from the system MADT for dom0?  Just
not the CPU entries.  Is that correct?

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:51:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMvN-0005GA-Ss; Fri, 14 Feb 2014 17:51:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WEMvM-0005Fw-Hy
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:51:40 +0000
Received: from [85.158.143.35:54342] by server-2.bemta-4.messagelabs.com id
	77/0B-10891-BA75EF25; Fri, 14 Feb 2014 17:51:39 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392400298!5789839!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1300 invoked from network); 14 Feb 2014 17:51:39 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 17:51:39 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 6C199B9E6;
	Fri, 14 Feb 2014 12:51:37 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 14 Feb 2014 12:51:10 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1980951.95r2q2cca3@ralph.baldwin.cx> <52FD7624.90202@citrix.com>
In-Reply-To: <52FD7624.90202@citrix.com>
MIME-Version: 1.0
Message-Id: <201402141251.10278.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Fri, 14 Feb 2014 12:51:37 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH RFC 09/13] xen: change quality of the MADT
	ACPI enumerator
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thursday, February 13, 2014 8:49:24 pm Andrew Cooper wrote:
> On 08/02/2014 21:42, John Baldwin wrote:
> > On Tuesday, December 24, 2013 12:20:58 PM Roger Pau Monne wrote:
> >> Lower the quality of the MADT ACPI enumerator, so on Xen Dom0 we can
> >> force the usage of the Xen mptable enumerator even when ACPI is
> >> detected.
> > Hmm, so I think one question is why does the existing MADT parser
> > not work with the MADT table provided by Xen?  This may very well
> > be correct, but if it's only a small change to make the existing
> > MADT parser work with Xen's MADT table, that route might be
> > preferable.
> >
> 
> For dom0, the MADT seen is the system MADT, which does not bear any
> reality to dom0's topology.  For PV domU, no MADT will be found.  For
> HVM domU, the MADT seen ought to represent (virtual) reality.

Hmm, the other changes suggested that you do want to use the I/O APIC
entries and interrupt overrides from the system MADT for dom0?  Just
not the CPU entries.  Is that correct?

-- 
John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 17:51:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 17:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEMvO-0005GH-A7; Fri, 14 Feb 2014 17:51:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jhb@freebsd.org>) id 1WEMvM-0005Fy-MQ
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 17:51:40 +0000
Received: from [85.158.139.211:27013] by server-6.bemta-5.messagelabs.com id
	06/5C-14342-CA75EF25; Fri, 14 Feb 2014 17:51:40 +0000
X-Env-Sender: jhb@freebsd.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392400297!3991470!1
X-Originating-IP: [96.47.65.170]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31959 invoked from network); 14 Feb 2014 17:51:39 -0000
Received: from bigwig.baldwin.cx (HELO bigwig.baldwin.cx) (96.47.65.170)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 14 Feb 2014 17:51:39 -0000
Received: from jhbbsd.localnet (unknown [209.249.190.124])
	by bigwig.baldwin.cx (Postfix) with ESMTPSA id 57AA4B9B1;
	Fri, 14 Feb 2014 12:51:36 -0500 (EST)
From: John Baldwin <jhb@freebsd.org>
To: Roger Pau =?iso-8859-1?q?Monn=E9?= <roger.pau@citrix.com>
Date: Fri, 14 Feb 2014 12:50:06 -0500
User-Agent: KMail/1.13.5 (FreeBSD/8.4-CBSD-20130906; KDE/4.5.5; amd64; ; )
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<2410827.IqfpSAhe3T@ralph.baldwin.cx> <52FDF217.3040005@citrix.com>
In-Reply-To: <52FDF217.3040005@citrix.com>
MIME-Version: 1.0
Message-Id: <201402141250.06829.jhb@freebsd.org>
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.2.7
	(bigwig.baldwin.cx); Fri, 14 Feb 2014 12:51:36 -0500 (EST)
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 10/13] xen: add ACPI bus to xen_nexus
	when running as Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Friday, February 14, 2014 5:38:15 am Roger Pau Monné wrote:
> On 08/02/14 22:50, John Baldwin wrote:
> > On Tuesday, December 24, 2013 12:20:59 PM Roger Pau Monne wrote:
> >> Also disable a couple of ACPI devices that are not usable under Dom0.
> >
> > Hmm, setting debug.acpi.disabled in this way is a bit hacky.  It might
> > be fine however if there's no way for the user to set it before booting
> > the kernel (as opposed to haing the relevant drivers explicitly disable
> > themselves under Xen which I think would be cleaner, but would also
> > make your patch larger)
>
> Thanks for the review. The user can pass parameters to FreeBSD when
> booted as Dom0; I just find it uncomfortable to force the user into
> always setting something on the command line in order to boot.

Can the user set debug.acpi.disabled?  If so, you are overriding their
setting which would be bad.

> What do you mean by "having the relevant drivers explicitly disable
> themselves under Xen"? Adding a gate on every one of those devices like
> "if (xen_pv_domain()) return (ENXIO);" in the identify/probe routine
> seems even worse.

A check like this in probe() is what I had in mind, though I agree it's
not perfect.

-- 

John Baldwin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:04:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEN7c-0005m7-LB; Fri, 14 Feb 2014 18:04:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WEN7b-0005m2-GU
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 18:04:19 +0000
Received: from [85.158.143.35:7149] by server-1.bemta-4.messagelabs.com id
	6B/00-31661-2AA5EF25; Fri, 14 Feb 2014 18:04:18 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392401048!5792569!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDI0OTkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23633 invoked from network); 14 Feb 2014 18:04:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:04:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; 
	d="asc'?scan'208";a="100868625"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 18:04:07 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 13:04:06 -0500
Message-ID: <1392401045.32038.346.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Fri, 14 Feb 2014 19:04:05 +0100
In-Reply-To: <1392216804.13563.83.camel@kazak.uk.xensource.com>
References: <1392214585-26602-1-git-send-email-ian.campbell@citrix.com>
	<21243.35358.349750.484725@mariner.uk.xensource.com>
	<1392216804.13563.83.camel@kazak.uk.xensource.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Allow per-host TFTP setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4572503819344885818=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4572503819344885818==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-WUfiYELeNHiv80VOHSAS"

--=-WUfiYELeNHiv80VOHSAS
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2014-02-12 at 14:53 +0000, Ian Campbell wrote:
> On Wed, 2014-02-12 at 14:50 +0000, Ian Jackson wrote:
> > Ian Campbell writes ("[PATCH OSSTEST] Allow per-host TFTP setup"):

> > > Make it possible to specify various bits of TFTP path via
> > > ~/.xen-osstest/config
> >
> > As I said in person: this would be much better if instead the host
> > property referred to a named TFTP scope/server.  Otherwise you have to
> > set a whole bunch of host properties identically.
> >
>
> Ack. I'll put this on my todo list.
>
Also, the README file has a far from comprehensive list of host
properties. I'm unsure whether that file is the proper place, but it
would be nice to have one actual place where a list and a brief
description of all the supported host properties could live.

When I previously added or had to deal with some undocumented host
properties, I added them to the README file, so I'd say do the same. If
the README is not deemed the proper place, fine, but I would still
recommend putting the info somewhere...

It's no different from what we require in Xen, after all, where patches
must update the docs too.

What do you think?

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-WUfiYELeNHiv80VOHSAS
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL+WpUACgkQk4XaBE3IOsQROwCeILgQv2E/hk8sakC7WuCK44rk
YTkAn1c+ut1x7g7DiSm79H6g8cRlf/Ae
=8Mle
-----END PGP SIGNATURE-----

--=-WUfiYELeNHiv80VOHSAS--


--===============4572503819344885818==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4572503819344885818==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 18:05:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:05:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEN8J-0005of-75; Fri, 14 Feb 2014 18:05:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WEN8F-0005oU-Vv
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:05:00 +0000
Received: from [85.158.137.68:9155] by server-10.bemta-3.messagelabs.com id
	2D/B5-07302-BCA5EF25; Fri, 14 Feb 2014 18:04:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392401097!403875!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14898 invoked from network); 14 Feb 2014 18:04:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:04:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102640186"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 18:04:56 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 13:04:55 -0500
Message-ID: <52FE5AC6.9000300@citrix.com>
Date: Fri, 14 Feb 2014 18:04:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
	<1392399353-11973-4-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392399353-11973-4-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V3 net-next 3/5] xen-netfront:
 Factor	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 17:35, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netfront, move the
> queue-specific data from struct netfront_info to struct netfront_queue,
> and update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_etherdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0, selecting the first (and
> only) queue.
[...]
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
[...]
> @@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
[...]
> +	for (i = 0; i < info->num_queues; ++i) {
> +		queue = &info->queues[i];
> +		del_timer_sync(&queue->rx_refill_timer);
> +	}
> +
> +	if (info->num_queues) {
> +		kfree(info->queues);
> +		info->queues = NULL;
> +	}
> +
>  	xennet_sysfs_delif(info->netdev);
>  
>  	unregister_netdev(info->netdev);
>  
> -	del_timer_sync(&info->rx_refill_timer);
> -

This has moved the del_timer_sync() call to before the
unregister_netdev() call.

Can you be sure that the timer cannot be restarted after deleting it?

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:12:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENF5-000676-Cx; Fri, 14 Feb 2014 18:12:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WENF2-000671-Gi
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:12:00 +0000
Received: from [85.158.139.211:60840] by server-12.bemta-5.messagelabs.com id
	67/E1-15415-F6C5EF25; Fri, 14 Feb 2014 18:11:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392401517!4015561!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10730 invoked from network); 14 Feb 2014 18:11:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:11:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102642457"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 18:11:46 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 13:11:46 -0500
Message-ID: <52FE5C61.6000807@citrix.com>
Date: Fri, 14 Feb 2014 18:11:45 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
	<1392399353-11973-5-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392399353-11973-5-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V3 net-next 4/5] xen-netfront: Add support
 for	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 17:35, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb hash result.
[...]
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,10 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>  
> +/* Module parameters */
> +unsigned int xennet_max_queues;
> +module_param(xennet_max_queues, uint, 0644);

The module parameter should have some documentation with MODULE_PARM_DESC()
or similar.
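For reference, the kind of documentation being asked for is typically a MODULE_PARM_DESC() line next to the parameter declaration. This is a kernel-code fragment, not compilable on its own; the parameter name comes from the quoted patch, the description text is only a suggestion.

```c
/* Module parameters */
unsigned int xennet_max_queues;
module_param(xennet_max_queues, uint, 0644);
MODULE_PARM_DESC(xennet_max_queues,
		 "Maximum number of queues per virtual interface");
```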

Otherwise,

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:12:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENFL-000693-Uo; Fri, 14 Feb 2014 18:12:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1WENFK-00068i-61
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:12:18 +0000
Received: from [85.158.143.35:2392] by server-3.bemta-4.messagelabs.com id
	CC/A8-11539-18C5EF25; Fri, 14 Feb 2014 18:12:17 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392401528!5795964!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	ML_RADAR_SPEW_LINKS_8,spamassassin: ,async_handler: 
	YXN5bmNfZGVsYXk6IDcwNDY2MTkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17490 invoked from network); 14 Feb 2014 18:12:08 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:12:08 -0000
Received: by mail-ea0-f174.google.com with SMTP id z10so2649846ead.33
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 10:12:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:subject:from:reply-to:to:cc:date:in-reply-to:references
	:content-type:mime-version;
	bh=e2k7ZKHXt5ugrboRWYkH1/edjhS+J7x4dC/sq8zYJqM=;
	b=c4wnKHajq3zlAH4vX7rd3jjo+/kYp8xnWbW8Kty1iGysGBx5eBGIh9QF5s5qI+/uZq
	chS51OsrN74gP+nLl2m2MoqZje6zNj03epAxvR42kncAy7GFIjFEn9Z/Yl43iWxiWLGw
	XSQfnTlaZnxAWU0/HbXJ5tqq8pGdGOLYuLVAipzJWUrTkGn4BiP4Ql2D2WYCdQgbowJS
	6aBhIkr7YRhM1YNShrENiSnJ96z4N9vLlqsT3U+3LzIlDptkRGyyr7cviw0iHBAsben8
	IEheI/Nv1PXtQoKF4LUfOxJC0mk2YkLgsA7G6xHZ1MPs53RlSP8NREXpLr99ez+44hsd
	CmQw==
X-Received: by 10.14.5.11 with SMTP id 11mr4245241eek.57.1392401528619;
	Fri, 14 Feb 2014 10:12:08 -0800 (PST)
Received: from [192.168.0.40] (ip-183-225.sn1.eutelia.it. [62.94.183.225])
	by mx.google.com with ESMTPSA id x6sm22549844eew.20.2014.02.14.10.12.06
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 10:12:07 -0800 (PST)
Message-ID: <1392401513.32038.348.camel@Solace>
From: Dario Faggioli <raistlin.df@gmail.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Date: Fri, 14 Feb 2014 19:11:53 +0100
In-Reply-To: <1391005955.21756.7.camel@Abyss>
References: <1391005955.21756.7.camel@Abyss>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f'
 option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: raistlin@linux.it
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4795274739613851164=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============4795274739613851164==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-FQb13E1UMjHB2kmDYVku"


--=-FQb13E1UMjHB2kmDYVku
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mer, 2014-01-29 at 14:32 +0000, Dario Faggioli wrote:
> standalone-reset's usage says:
>
>   usage: ./standalone-reset [<options>] [<branch> [<xenbranch> [<buildflight>]]]
>    branch and xenbranch default, separately, to xen-unstable
>   options:
>    -f<flight>     generate flight "flight", default is "standalone"
>
> but then there is no place where '-f' is processed, and hence
> no real way to pass a specific flight name to make-flight.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
Ping?

I know it's a busy period for OSSTest, but this should be pretty
straightforward, and it only affects standalone mode.

Anyway, I can put it on hold and resubmit in a while, if that's
considered better.
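For context, processing a `-f<flight>` option with a "standalone" default is typically a small getopts loop; a hypothetical sketch (function name and structure are illustrative, not the actual standalone-reset code):

```shell
#!/bin/sh
# Hypothetical sketch of '-f<flight>' handling, defaulting to "standalone".
# Not the actual osstest standalone-reset code.
parse_flight() {
    flight=standalone
    OPTIND=1
    while getopts "f:" opt "$@"; do
        case "$opt" in
            f) flight=$OPTARG ;;
        esac
    done
    printf '%s\n' "$flight"
}
```

Called as `parse_flight -f myflight`, this prints the supplied flight name; with no option it prints the default.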

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-FQb13E1UMjHB2kmDYVku
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL+XGkACgkQk4XaBE3IOsRHOACdHKwz506FygEViQLW7ZiOGsct
KqgAoKPlHYcOr2kBrqoyCqwcG/Z2GZrn
=n4OG
-----END PGP SIGNATURE-----

--=-FQb13E1UMjHB2kmDYVku--



--===============4795274739613851164==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4795274739613851164==--



From xen-devel-bounces@lists.xen.org Fri Feb 14 18:12:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENFm-0006Dx-Cj; Fri, 14 Feb 2014 18:12:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WENFl-0006Df-Hg
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:12:46 +0000
Received: from [85.158.143.35:55159] by server-1.bemta-4.messagelabs.com id
	2E/67-31661-C9C5EF25; Fri, 14 Feb 2014 18:12:44 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392401563!5780710!1
X-Originating-IP: [209.85.216.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18662 invoked from network); 14 Feb 2014 18:12:44 -0000
Received: from mail-qa0-f52.google.com (HELO mail-qa0-f52.google.com)
	(209.85.216.52)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:12:44 -0000
Received: by mail-qa0-f52.google.com with SMTP id j15so18999858qaq.11
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 10:12:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=aukS2GopK1ivs2L/3SsuSRACWFvj7AOk4UzVGmXu4D0=;
	b=YGMcX84vIi0j5GAd/h05ru9RcbVMs8Rn46woyUXgPazenhEy9Uh/yObBpUV0A9b46D
	riEtPaslFfKrsFeR6pbreU+hs3GlmIfdGB2WLv1XVy3vJp+ri7XflbyvJM8SvJ0zmPKI
	zPELdTRAertt3Oa7s0iRzMbFl/7yM8bXmtmLo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=aukS2GopK1ivs2L/3SsuSRACWFvj7AOk4UzVGmXu4D0=;
	b=UnMM5zW8FVUc8/cEH4KJCoEZzP9IvjJZtQfpshik2mCllA++FfBJ5DRw/x8FFjKJeB
	DAHsDv8b6jHYz01vm0bTDcqF1qLf2adkSAQRTPtidba4wFdxxg25Y/YRaI1FoM0yyP2o
	F1Qk3V8AE0UO3dmZDkpdprYwnEVc7oAZek72zVr52TOISNxqGSFqnayz5cga4pHjldt4
	4p1HcotC8xb6FuSkwQ2QAHViMB7qrLUZ1uiNuIIca1F3YKPC2ZWXvWaPN99glrHnqb6n
	AnBMFsc6QKCmJNXT1lEhJWJGx6Oq5DbABtDsyWwoliOxPcJUczNZn0lwN3JoSRtz6936
	5luQ==
X-Gm-Message-State: ALoCoQlafsnROFyY2wBCjfID9Ez7HCDfDqU1wrv6V5iC0yIpy4EjHdr5ETSY01FdoU61O/f6Pn6O
X-Received: by 10.224.131.135 with SMTP id x7mr2168648qas.15.1392401562947;
	Fri, 14 Feb 2014 10:12:42 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 14 Feb 2014 10:12:27 -0800 (PST)
X-Originating-IP: [217.66.157.55]
In-Reply-To: <20140214163426.GG18398@zion.uk.xensource.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 14 Feb 2014 22:12:27 +0400
X-Google-Sender-Auth: s9_28s_c3-9YJpWUdR8UvsySd38
Message-ID: <CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-14 20:34 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> My gut feeling is nothing is wrong with Xen. It's just that you have a
> typo in your config file. :-)
>
> Wei.

No, if I have an error in the config, the domain can't start. You know that.
My config is:

name="21-10918"
kernel="/var/storage/kernel/debian/6/kernel-64"
ramdisk="/var/storage/kernel/ramdisk-64"
vif=["mac=00:16:3e:00:2c:63,ip=62.76.185.64"]
disk=["phy:/dev/disk/vbd/21-916,xvda,w"]
memory=512
maxmem=1024
vcpus=3
maxvcpus=3
cpu_cap=300
cpu_weight=256
vfb=["type=vnc,vnclisten=0.0.0.0,vncpasswd=7QOG1885y3"]
extra="root=/dev/xvda1 selinux=1 enforcing=0 iommu=off swiotlb=off earlyprintk=xen console=hvc0"
on_restart="destroy"
cpuid="host,x2apic=0,aes=0,xsave=0,avx=0"
device_model_version="qemu-xen"
device_model_override="/usr/bin/qemu-system-x86_64"

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:18:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENKm-0006ZB-8y; Fri, 14 Feb 2014 18:17:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WENKk-0006Z5-QA
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 18:17:54 +0000
Received: from [85.158.143.35:23516] by server-1.bemta-4.messagelabs.com id
	AC/FA-31661-2DD5EF25; Fri, 14 Feb 2014 18:17:54 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392401872!5768269!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10488 invoked from network); 14 Feb 2014 18:17:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:17:53 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; 
	d="asc'?scan'208";a="100872956"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 18:17:45 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 13:17:44 -0500
Message-ID: <1392401862.32038.351.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Fri, 14 Feb 2014 19:17:42 +0100
In-Reply-To: <1390905979.7753.36.camel@kazak.uk.xensource.com>
References: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
	<1390905979.7753.36.camel@kazak.uk.xensource.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] README: Add some core concepts and
 terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7513322836005699248=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7513322836005699248==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-0rF8bBjdr3A9g8s3qL82"

--=-0rF8bBjdr3A9g8s3qL82
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2014-01-28 at 10:46 +0000, Ian Campbell wrote:
> I should have said this yesterday on the docs day, but: Ping
>
Actually, is it ok for someone to ping on someone else's patch?
Well, I really think this patch would be a super-cool one to have in,
so, I'll take my chances.

Ping. :-)

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-0rF8bBjdr3A9g8s3qL82
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlL+XcYACgkQk4XaBE3IOsSynwCfUylLY3478ag6FNJH3gez+J0J
ISkAnRMdlqj9TOLcOL08irAOhHVIHX/b
=PKqB
-----END PGP SIGNATURE-----

--=-0rF8bBjdr3A9g8s3qL82--


--===============7513322836005699248==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7513322836005699248==--


From xen-devel-bounces@lists.xen.org Fri Feb 14 18:23:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:23:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENQ2-0006iW-5R; Fri, 14 Feb 2014 18:23:22 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WENQ1-0006iK-3m
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:23:21 +0000
Received: from [193.109.254.147:22619] by server-9.bemta-14.messagelabs.com id
	51/3E-24895-81F5EF25; Fri, 14 Feb 2014 18:23:20 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392402198!4455227!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13154 invoked from network); 14 Feb 2014 18:23:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:23:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102645590"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 18:23:18 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 13:23:17 -0500
Message-ID: <52FE5F14.50203@citrix.com>
Date: Fri, 14 Feb 2014 18:23:16 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
	<1392399353-11973-6-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392399353-11973-6-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V3 net-next 5/5] xen-net{back,
 front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 17:35, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Document the multi-queue feature in terms of XenStore keys to be written
> by the backend and by the frontend.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  include/xen/interface/io/netif.h |   21 +++++++++++++++++++++

The master copy of this file is in Xen. You will need to prepare and
submit a similar patch to xen-devel.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:27:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENUL-0006rj-SS; Fri, 14 Feb 2014 18:27:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WENUJ-0006rb-Qr
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 18:27:48 +0000
Received: from [85.158.143.35:5151] by server-3.bemta-4.messagelabs.com id
	0F/D3-11539-3206EF25; Fri, 14 Feb 2014 18:27:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392402464!5782765!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4790 invoked from network); 14 Feb 2014 18:27:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:27:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="100875892"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 18:27:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 13:27:24 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WENTw-0000pp-4F;
	Fri, 14 Feb 2014 18:27:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WENTu-0000Xk-7i;
	Fri, 14 Feb 2014 18:27:22 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21246.24584.705155.589266@mariner.uk.xensource.com>
Date: Fri, 14 Feb 2014 18:27:20 +0000
To: Dario Faggioli <dario.faggioli@citrix.com>
In-Reply-To: <1392401862.32038.351.camel@Solace>
References: <1390224010-25510-1-git-send-email-ian.campbell@citrix.com>
	<1390905979.7753.36.camel@kazak.uk.xensource.com>
	<1392401862.32038.351.camel@Solace>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, Ian Campbell <Ian.Campbell@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] README: Add some core concepts and
 terminology
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli writes ("Re: [Xen-devel] [PATCH OSSTEST] README: Add some core concepts and terminology"):
> On mar, 2014-01-28 at 10:46 +0000, Ian Campbell wrote:
> > I should have said this yesterday on the docs day, but: Ping
> > 
> Actually, is it ok for someone to ping on someone else's patch?
> Well, I really think this patch would be a super-cool one to have in,
> so, I'll take my chances.

That's fine.  I have made a new branch for generally-good patches
which are not related to (a) the 4.4 release and (b) my efforts to
evacuate the dying server "woking".

This branch is here:
  http://xenbits.xen.org/gitweb/?p=people/iwj/osstest.git;a=shortlog;h=refs/heads/wip.rebasing

I have pushed this patch onto it.  And I'll throw it (and anything
else this branch accumulates) into the osstest push gate when the
current crop of stuff is done.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:29:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENVZ-0006ye-BP; Fri, 14 Feb 2014 18:29:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WENVX-0006yV-Fo
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:29:03 +0000
Received: from [193.109.254.147:48742] by server-7.bemta-14.messagelabs.com id
	B1/A5-23424-E606EF25; Fri, 14 Feb 2014 18:29:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392402539!4436280!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29813 invoked from network); 14 Feb 2014 18:29:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:29:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="100876489"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 18:28:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 13:28:58 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WENVS-0000qE-Am;
	Fri, 14 Feb 2014 18:28:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WENVQ-0000YE-9b;
	Fri, 14 Feb 2014 18:28:56 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21246.24679.155484.216198@mariner.uk.xensource.com>
Date: Fri, 14 Feb 2014 18:28:55 +0000
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>, Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <21244.44538.522283.884578@mariner.uk.xensource.com>
References: <1392034392-27273-1-git-send-email-ian.jackson@eu.citrix.com>
	<52F8C40E.7010707@citrix.com>
	<21240.50574.203262.432094@mariner.uk.xensource.com>
	<52F8E379.4020702@citrix.com>
	<21244.44538.522283.884578@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Subject: Re: [Xen-devel] [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE
	(20140116-r260789)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [OSSTEST PATCH] freebsd: switch to 10.0-RELEASE (20140116-r260789)"):
> ts-redhat-install and ts-windows-install both use
> more_prepareguest_hvm which pass "undef" for the third argument.
> So I'm suggesting that the bit of ts-freebsd-install which constructs
> the default image filename be removed.

I have pushed a version of this patch containing only the make-flight
change.  I think that's sufficient.  Changing the fallback
arrangements used when the runvar is missing can wait.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 18:42:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 18:42:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WENho-0007OZ-Cd; Fri, 14 Feb 2014 18:41:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WENhn-0007OR-By
	for xen-devel@lists.xenproject.org; Fri, 14 Feb 2014 18:41:43 +0000
Received: from [85.158.139.211:62470] by server-15.bemta-5.messagelabs.com id
	D1/B3-24395-6636EF25; Fri, 14 Feb 2014 18:41:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392403300!4025914!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30172 invoked from network); 14 Feb 2014 18:41:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 18:41:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102650366"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 18:41:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 13:41:22 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WENhS-0000uu-I2;
	Fri, 14 Feb 2014 18:41:22 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WENhQ-0000fy-U1;
	Fri, 14 Feb 2014 18:41:20 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21246.25423.419772.949039@mariner.uk.xensource.com>
Date: Fri, 14 Feb 2014 18:41:19 +0000
To: <raistlin@linux.it>
In-Reply-To: <1392401513.32038.348.camel@Solace>
References: <1391005955.21756.7.camel@Abyss>
	<1392401513.32038.348.camel@Solace>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f'
 option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli writes ("Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f' option"):
> On mer, 2014-01-29 at 14:32 +0000, Dario Faggioli wrote:
> > standalone-reset's usage says:
> >     
> >   usage: ./standalone-reset [<options>] [<branch> [<xenbranch> [<buildflight>]]]
> >    branch and xenbranch default, separately, to xen-unstable
> >   options:
> >    -f<flight>     generate flight "flight", default is "standalone"
> >     
> > but then there is no place where '-f' is processed, and hence
> > no real way to pass a specific flight name to make-flight.
> >     
> > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
...
> I know it's a busy period for OSSTest, but this should be pretty
> straightforward, and it only affects standalone mode.

Right.  I don't use standalone mode much, so sorry about that.  I
looked for a comment from Ian C but didn't find one.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


> Anyway, I can put it on hold and resubmit in a while, if that's
> considered better.

No, pinging now is good.

This patch leads me to an observation: I looked at the code in
standalone-reset and it appears to me that there is not currently
anything which sets "$flight".

So the "DELETE" statements used if there's an existing db won't have
any effect.  This doesn't cause any strange effects because
Osstest/JobDB/Standalone.pm deletes them too.

I think it would be best to delete that part of standalone-reset.  Do
you agree?

In the meantime I have added your patch to my queue branch.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOo1-00081s-S5; Fri, 14 Feb 2014 19:52:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOnz-00080n-QP
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:12 +0000
Received: from [85.158.143.35:47524] by server-2.bemta-4.messagelabs.com id
	F2/2B-10891-BE37EF25; Fri, 14 Feb 2014 19:52:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392407527!5793711!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3254 invoked from network); 14 Feb 2014 19:52:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102668593"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-P8; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:52:00 +0000
Message-ID: <1392407521-19884-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, Tim Deegan <tim@xen.org>, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 2/3] x86/hvm/rtc: Inject RTC periodic
	interrupts from the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Let the vpt code drive the RTC's timer interrupts directly, as it does
for other periodic time sources, and fix up the register state in a
vpt callback when the interrupt is injected.

This fixes a hang seen on Windows 2003 in no-missed-ticks mode: when a
tick was pending, the early callback from the vpt code would always set
REG_C.PF on every VMENTER, while the guest sat in its interrupt handler
reading REG_C in a loop, waiting to see it clear.

One drawback is that a guest that attempts to suppress RTC periodic
interrupts by failing to read REG_C will receive up to 10 spurious
interrupts, even in 'strict' mode.  However:
 - since all previous RTC models have had this property (including
   the current one, since 'no-ack' mode is hard-coded on) we're
   pretty sure that all guests can handle this; and
 - we're already playing some other interesting games with this
   interrupt in the vpt code.
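
The "up to 10 spurious interrupts" bound above comes from the dead-tick
counter visible in the diff below.  As a minimal standalone sketch of that
suppression logic (the struct, the `DEAD_TICK_LIMIT` name, and both functions
are simplified stand-ins for the real RTCState handling, not Xen's API):

```c
#include <stdbool.h>

#define DEAD_TICK_LIMIT 10  /* ticks tolerated with IRQF still set */

struct rtc_model {
    bool irqf;            /* REG_C.IRQF: previous interrupt unacknowledged */
    int  dead_ticks;      /* consecutive ticks the guest has ignored */
    bool timer_running;   /* periodic timer active */
};

/* Called once per periodic tick: if the guest never acks (reads REG_C),
 * give up after DEAD_TICK_LIMIT ignored ticks and stop the timer. */
static void periodic_tick(struct rtc_model *s)
{
    if (s->irqf && ++s->dead_ticks >= DEAD_TICK_LIMIT)
        s->timer_running = false;   /* VM is ignoring its RTC */
    s->irqf = true;                 /* inject: PF and IRQF become set */
}

/* A guest read of REG_C acks the interrupt and resets the counter. */
static void guest_reads_reg_c(struct rtc_model *s)
{
    s->irqf = false;
    s->dead_ticks = 0;
}
```

A guest that polls REG_C regularly keeps `dead_ticks` at zero and the timer
alive; one that never reads it sees the timer torn down after the bound above.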

One other corner case: a guest that enables the PF timer interrupt,
masks the interrupt in the APIC and then polls REG_C looking for PF
will not see PF getting set.  The more likely case of enabling the
timers and masking the interrupt with REG_B.PIE is already handled
correctly.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c        |   25 +++++++++++--------------
 xen/arch/x86/hvm/vpt.c        |   40 ----------------------------------------
 xen/include/asm-x86/hvm/vpt.h |    1 -
 3 files changed, 11 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 1455bc6..7a37ebb 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -78,29 +78,26 @@ static void rtc_update_irq(RTCState *s)
     hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
 }
 
-bool_t rtc_periodic_interrupt(void *opaque)
+/* Called by the VPT code after it's injected a PF interrupt for us.
+ * Fix up the register state to reflect what happened. */
+static void rtc_pf_callback(struct vcpu *v, void *opaque)
 {
     RTCState *s = opaque;
-    bool_t ret;
 
     spin_lock(&s->lock);
-    ret = rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF);
-    if ( rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_PF) )
-    {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
-        rtc_update_irq(s);
-    }
-    else if ( ++(s->pt_dead_ticks) >= 10 )
+
+    if ( !rtc_mode_is(s, no_ack)
+         && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF)
+         && ++(s->pt_dead_ticks) >= 10 )
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
         s->period = 0;
     }
-    if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        ret = 0;
-    spin_unlock(&s->lock);
 
-    return ret;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF|RTC_IRQF;
+
+    spin_unlock(&s->lock);
 }
 
 /* Check whether the REG_C.PF bit should have been set by a tick since
@@ -156,7 +153,7 @@ static void rtc_timer_update(RTCState *s)
                 if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
                 {
                     create_periodic_time(v, &s->pt, delta, period,
-                                         RTC_IRQ, NULL, s);
+                                         RTC_IRQ, rtc_pf_callback, s);
                     s->period = period;
                 }
                 else
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 1961bda..f7af688 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -231,12 +231,9 @@ int pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt;
     uint64_t max_lag;
     int irq, is_lapic;
-    void *pt_priv;
 
- rescan:
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
- rescan_locked:
     earliest_pt = NULL;
     max_lag = -1ULL;
     list_for_each_entry_safe ( pt, temp, head, list )
@@ -270,48 +267,11 @@ int pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
-    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    else if ( irq == RTC_IRQ && pt_priv )
-    {
-        if ( !rtc_periodic_interrupt(pt_priv) )
-            irq = -1;
-
-        pt_lock(earliest_pt);
-
-        if ( irq < 0 && earliest_pt->pending_intr_nr )
-        {
-            /*
-             * RTC periodic timer runs without the corresponding interrupt
-             * being enabled - need to mimic enough of pt_intr_post() to keep
-             * things going.
-             */
-            earliest_pt->pending_intr_nr = 0;
-            earliest_pt->irq_issued = 0;
-            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
-        }
-        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
-        {
-            if ( earliest_pt->on_list )
-            {
-                /* suspend timer emulation */
-                list_del(&earliest_pt->list);
-                earliest_pt->on_list = 0;
-            }
-            irq = -1;
-        }
-
-        /* Avoid dropping the lock if we can. */
-        if ( irq < 0 && v == earliest_pt->vcpu )
-            goto rescan_locked;
-        pt_unlock(earliest_pt);
-        if ( irq < 0 )
-            goto rescan;
-    }
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 9f48635..7d62653 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -184,7 +184,6 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
-bool_t rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOnz-00080v-SY; Fri, 14 Feb 2014 19:52:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOny-00080Y-5m
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:10 +0000
Received: from [85.158.143.35:47439] by server-3.bemta-4.messagelabs.com id
	1B/57-11539-9E37EF25; Fri, 14 Feb 2014 19:52:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392407527!5793711!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3072 invoked from network); 14 Feb 2014 19:52:08 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102668591"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-Qr; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:52:01 +0000
Message-ID: <1392407521-19884-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, george.dunlap@eu.citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 3/3] x86/hvm/rtc: Always deassert the IRQ
	line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Even in no-ack mode, there's no reason to leave the line asserted
after an explicit ack of the interrupt.

Furthermore, rtc_update_irq() is an unconditional no-op immediately
after RTC_REG_C has been cleared.

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/rtc.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 7a37ebb..1844f2d 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -678,9 +678,8 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
-        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
+        if ( ret & RTC_IRQF )
             hvm_isa_irq_deassert(d, RTC_IRQ);
-        rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
         s->pt_dead_ticks = 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOnz-00080o-HK; Fri, 14 Feb 2014 19:52:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOny-00080Z-8c
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:10 +0000
Received: from [193.109.254.147:3529] by server-12.bemta-14.messagelabs.com id
	55/16-17220-9E37EF25; Fri, 14 Feb 2014 19:52:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392407527!4478807!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7034 invoked from network); 14 Feb 2014 19:52:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="100898343"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-LS; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:51:58 +0000
Message-ID: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	keir@xen.org, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 0/3] Move RTC interrupt injection back
	into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This series implements the most recent idea Tim was proposing about
reworking the RTC PF interrupt injection.

Patch 1 switches handling the !PIE case to calculate the right answer
for REG_C.PF on demand rather than running the timers.
Patch 2 switches back to the old model of having the vpt code control
the timer interrupt injection; this is the fix for the w2k3 hang.
Patch 3 is just a minor cleanup, and not particularly necessary.

v2 now appears to work correctly in my dev testing.  I am setting XenRT up
over the weekend to give it some thorough testing.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOo0-00081F-FR; Fri, 14 Feb 2014 19:52:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOny-00080c-Sh
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:11 +0000
Received: from [85.158.143.35:26394] by server-2.bemta-4.messagelabs.com id
	50/2B-10891-AE37EF25; Fri, 14 Feb 2014 19:52:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392407527!5793711!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3151 invoked from network); 14 Feb 2014 19:52:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102668592"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-NN; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:51:59 +0000
Message-ID: <1392407521-19884-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, george.dunlap@eu.citrix.com, Andrew
	Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 1/3] x86/hvm/rtc: Don't run the vpt timer
	when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

If the guest has not asked for interrupts, don't run the vpt timer
to generate them.  This is a prerequisite for a patch to simplify how
the vpt interacts with the RTC, and also gets rid of a series of timer
events in Xen in a case where they're unlikely to be needed.

Instead, calculate the correct value for REG_C.PF whenever REG_C is
read or PIE is enabled.  This allows a guest to poll for the PF bit
while not asking for actual timer interrupts.  Such a guest would no
longer get the benefit of the vpt's timer modes.
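
The on-demand calculation reduces to plain integer arithmetic: a PF tick has
elapsed iff `now` and the previous check fall into different period-sized
buckets measured from `start_time`.  A minimal standalone sketch of that
comparison (the function name `pf_tick_elapsed` is hypothetical; it mirrors
the bucket test in check_for_pf_ticks() from the diff below):

```c
#include <stdbool.h>
#include <stdint.h>

/* Returns true if at least one periodic-tick boundary lies between
 * last_check and now; all times are in ns since an arbitrary epoch. */
static bool pf_tick_elapsed(int64_t now, int64_t last_check,
                            int64_t start_time, int64_t period)
{
    if (period == 0)
        return false;   /* periodic timer disabled: nothing to report */
    return (now - start_time) / period
           != (last_check - start_time) / period;
}
```

Because the test only compares bucket indices, it costs two divisions per
REG_C read instead of a timer running in Xen between reads.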

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Changes in v2:
 * Reduce scope of `now` in rtc_timer_update()
 * Merge PIE logic in REG_B write
 * Tightly couple setting s->period with creating/destroying timers, so the
   timer change gets properly recreated when the guest sets REG_B.PIE
---
 xen/arch/x86/hvm/rtc.c        |   58 ++++++++++++++++++++++++++++++++---------
 xen/include/asm-x86/hvm/vpt.h |    3 ++-
 2 files changed, 48 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cdedefe..1455bc6 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -94,7 +94,7 @@ bool_t rtc_periodic_interrupt(void *opaque)
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
     }
     if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
         ret = 0;
@@ -103,6 +103,24 @@ bool_t rtc_periodic_interrupt(void *opaque)
     return ret;
 }
 
+/* Check whether the REG_C.PF bit should have been set by a tick since
+ * the last time we looked. This is used to track ticks when REG_B.PIE
+ * is clear; when PIE is set, PF ticks are handled by the VPT callbacks.  */
+static void check_for_pf_ticks(RTCState *s)
+{
+    s_time_t now;
+
+    if ( s->period == 0 || (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+        return;
+
+    now = NOW();
+    if ( (now - s->start_time) / s->period
+         != (s->check_ticks_since - s->start_time) / s->period )
+        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+
+    s->check_ticks_since = now;
+}
+
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
  * RTC_RATE_SELECT settings */
 static void rtc_timer_update(RTCState *s)
@@ -125,24 +143,31 @@ static void rtc_timer_update(RTCState *s)
     case RTC_REF_CLCK_4MHZ:
         if ( period_code != 0 )
         {
-            if ( period_code != s->pt_code )
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            if ( period != s->period )
             {
-                s->pt_code = period_code;
-                period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-                period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+                s_time_t now = NOW();
+
                 if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
                     delta = 0;
                 else
-                    delta = period - ((NOW() - s->start_time) % period);
-                create_periodic_time(v, &s->pt, delta, period,
-                                     RTC_IRQ, NULL, s);
+                    delta = period - ((now - s->start_time) % period);
+                if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+                {
+                    create_periodic_time(v, &s->pt, delta, period,
+                                         RTC_IRQ, NULL, s);
+                    s->period = period;
+                }
+                else
+                    s->check_ticks_since = now;
             }
             break;
         }
         /* fall through */
     default:
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
         break;
     }
 }
@@ -484,14 +509,22 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
             if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
+        check_for_pf_ticks(s);
         s->hw.cmos_data[RTC_REG_B] = data;
         /*
          * If the interrupt is already set when the interrupt becomes
          * enabled, raise an interrupt immediately.
          */
         rtc_update_irq(s);
-        if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
+        if ( (data ^ orig) & RTC_PIE )
+        {
+            if ( !(data & RTC_PIE) )
+            {
+                destroy_periodic_time(&s->pt);
+                s->period = 0;
+            }
             rtc_timer_update(s);
+        }
         if ( (data ^ orig) & RTC_SET )
             check_update_timer(s);
         if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
@@ -645,6 +678,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
             ret |= RTC_UIP;
         break;
     case RTC_REG_C:
+        check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
         if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
@@ -652,7 +686,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
-        rtc_timer_update(s);
+        s->pt_dead_ticks = 0;
         break;
     default:
         ret = s->hw.cmos_data[s->hw.cmos_index];
@@ -748,7 +782,7 @@ void rtc_reset(struct domain *d)
     RTCState *s = domain_vrtc(d);
 
     destroy_periodic_time(&s->pt);
-    s->pt_code = 0;
+    s->period = 0;
     s->pt.source = PTSRC_isa;
 }
 
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 87c3a66..9f48635 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -113,7 +113,8 @@ typedef struct RTCState {
     /* periodic timer */
     struct periodic_time pt;
     s_time_t start_time;
-    int pt_code;
+    s_time_t check_ticks_since;
+    int period;
     uint8_t pt_dead_ticks;
     uint32_t use_timer;
     spinlock_t lock;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOnz-00080o-HK; Fri, 14 Feb 2014 19:52:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOny-00080Z-8c
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:10 +0000
Received: from [193.109.254.147:3529] by server-12.bemta-14.messagelabs.com id
	55/16-17220-9E37EF25; Fri, 14 Feb 2014 19:52:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392407527!4478807!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7034 invoked from network); 14 Feb 2014 19:52:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:08 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="100898343"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-LS; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:51:58 +0000
Message-ID: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	keir@xen.org, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 0/3] Move RTC interrupt injection back
	into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This series implements the most recent idea Tim proposed for reworking
the RTC PF interrupt injection.

Patch 1 switches the !PIE case to calculating the correct answer for
REG_C.PF on demand rather than running the timers.
Patch 2 switches back to the old model, where the vpt code controls
timer interrupt injection; this is the fix for the w2k3 hang.
Patch 3 is a minor cleanup and not strictly necessary.

v2 now appears to work correctly, given my dev testing.  I am setting XenRT up
over the weekend to give it some thorough testing.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOo1-00081s-S5; Fri, 14 Feb 2014 19:52:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOnz-00080n-QP
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:12 +0000
Received: from [85.158.143.35:47524] by server-2.bemta-4.messagelabs.com id
	F2/2B-10891-BE37EF25; Fri, 14 Feb 2014 19:52:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392407527!5793711!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3254 invoked from network); 14 Feb 2014 19:52:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102668593"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-P8; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:52:00 +0000
Message-ID: <1392407521-19884-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, Tim Deegan <tim@xen.org>, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 2/3] x86/hvm/rtc: Inject RTC periodic
	interrupts from the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Let the vpt code drive the RTC's timer interrupts directly, as it does
for other periodic time sources, and fix up the register state in a
vpt callback when the interrupt is injected.

This fixes a hang seen on Windows 2003 in no-missed-ticks mode: when a
tick was pending, the early callback from the vpt code would set
REG_C.PF on every VMENTER, while the guest sat in its interrupt
handler reading REG_C in a loop, waiting to see it clear.

One drawback is that a guest that attempts to suppress RTC periodic
interrupts by failing to read REG_C will receive up to 10 spurious
interrupts, even in 'strict' mode.  However:
 - since all previous RTC models have had this property (including
   the current one, since 'no-ack' mode is hard-coded on) we're
   pretty sure that all guests can handle this; and
 - we're already playing some other interesting games with this
   interrupt in the vpt code.

One other corner case: a guest that enables the PF timer interrupt,
masks the interrupt in the APIC and then polls REG_C looking for PF
will not see PF getting set.  The more likely case of enabling the
timers and masking the interrupt with REG_B.PIE is already handled
correctly.
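
The "dead ticks" heuristic this patch keeps (and which the diff below
moves into rtc_pf_callback) can be sketched in isolation. This is a
hypothetical standalone model, not the Xen code itself; the struct and
function names are invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the heuristic: each injected periodic tick that finds the
 * previous interrupt still unacknowledged bumps a counter; reading
 * REG_C resets it; once it reaches 10, the emulated timer is stopped. */
struct pt_state {
    bool irqf_set;       /* REG_C.IRQF still pending from the last tick */
    uint8_t dead_ticks;  /* consecutive unacknowledged ticks */
    bool timer_running;
};

static void on_periodic_tick(struct pt_state *s)
{
    if (s->irqf_set && ++s->dead_ticks >= 10)
        s->timer_running = false;  /* guest is ignoring its RTC */
    s->irqf_set = true;            /* this tick sets REG_C.PF|IRQF */
}

static void on_reg_c_read(struct pt_state *s)
{
    s->irqf_set = false;  /* reading REG_C clears the flags... */
    s->dead_ticks = 0;    /* ...and proves the guest is listening */
}
```

A guest that reads REG_C between ticks keeps the timer alive
indefinitely; one that never reads it loses the timer after ten
unacknowledged ticks, which is where the "up to 10 spurious
interrupts" figure above comes from.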

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c        |   25 +++++++++++--------------
 xen/arch/x86/hvm/vpt.c        |   40 ----------------------------------------
 xen/include/asm-x86/hvm/vpt.h |    1 -
 3 files changed, 11 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 1455bc6..7a37ebb 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -78,29 +78,26 @@ static void rtc_update_irq(RTCState *s)
     hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
 }
 
-bool_t rtc_periodic_interrupt(void *opaque)
+/* Called by the VPT code after it's injected a PF interrupt for us.
+ * Fix up the register state to reflect what happened. */
+static void rtc_pf_callback(struct vcpu *v, void *opaque)
 {
     RTCState *s = opaque;
-    bool_t ret;
 
     spin_lock(&s->lock);
-    ret = rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF);
-    if ( rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_PF) )
-    {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
-        rtc_update_irq(s);
-    }
-    else if ( ++(s->pt_dead_ticks) >= 10 )
+
+    if ( !rtc_mode_is(s, no_ack)
+         && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF)
+         && ++(s->pt_dead_ticks) >= 10 )
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
         s->period = 0;
     }
-    if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        ret = 0;
-    spin_unlock(&s->lock);
 
-    return ret;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF|RTC_IRQF;
+
+    spin_unlock(&s->lock);
 }
 
 /* Check whether the REG_C.PF bit should have been set by a tick since
@@ -156,7 +153,7 @@ static void rtc_timer_update(RTCState *s)
                 if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
                 {
                     create_periodic_time(v, &s->pt, delta, period,
-                                         RTC_IRQ, NULL, s);
+                                         RTC_IRQ, rtc_pf_callback, s);
                     s->period = period;
                 }
                 else
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 1961bda..f7af688 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -231,12 +231,9 @@ int pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt;
     uint64_t max_lag;
     int irq, is_lapic;
-    void *pt_priv;
 
- rescan:
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
- rescan_locked:
     earliest_pt = NULL;
     max_lag = -1ULL;
     list_for_each_entry_safe ( pt, temp, head, list )
@@ -270,48 +267,11 @@ int pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
-    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    else if ( irq == RTC_IRQ && pt_priv )
-    {
-        if ( !rtc_periodic_interrupt(pt_priv) )
-            irq = -1;
-
-        pt_lock(earliest_pt);
-
-        if ( irq < 0 && earliest_pt->pending_intr_nr )
-        {
-            /*
-             * RTC periodic timer runs without the corresponding interrupt
-             * being enabled - need to mimic enough of pt_intr_post() to keep
-             * things going.
-             */
-            earliest_pt->pending_intr_nr = 0;
-            earliest_pt->irq_issued = 0;
-            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
-        }
-        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
-        {
-            if ( earliest_pt->on_list )
-            {
-                /* suspend timer emulation */
-                list_del(&earliest_pt->list);
-                earliest_pt->on_list = 0;
-            }
-            irq = -1;
-        }
-
-        /* Avoid dropping the lock if we can. */
-        if ( irq < 0 && v == earliest_pt->vcpu )
-            goto rescan_locked;
-        pt_unlock(earliest_pt);
-        if ( irq < 0 )
-            goto rescan;
-    }
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 9f48635..7d62653 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -184,7 +184,6 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
-bool_t rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 19:52:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEOo0-00081F-FR; Fri, 14 Feb 2014 19:52:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEOny-00080c-Sh
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 19:52:11 +0000
Received: from [85.158.143.35:26394] by server-2.bemta-4.messagelabs.com id
	50/2B-10891-AE37EF25; Fri, 14 Feb 2014 19:52:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392407527!5793711!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3151 invoked from network); 14 Feb 2014 19:52:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 19:52:09 -0000
X-IronPort-AV: E=Sophos;i="4.95,846,1384300800"; d="scan'208";a="102668592"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 14 Feb 2014 19:52:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 14 Feb 2014 14:52:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WEOnt-0008Hj-NN; Fri, 14 Feb 2014 19:52:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 14 Feb 2014 19:51:59 +0000
Message-ID: <1392407521-19884-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, george.dunlap@eu.citrix.com, Andrew
	Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH RFC v2 1/3] x86/hvm/rtc: Don't run the vpt timer
	when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

If the guest has not asked for interrupts, don't run the vpt timer
to generate them.  This is a prerequisite for a patch to simplify how
the vpt interacts with the RTC, and also gets rid of a series of timer
events in Xen in a case where they are unlikely to be needed.

Instead, calculate the correct value for REG_C.PF whenever REG_C is
read or PIE is enabled.  This allows a guest to poll for the PF bit
while not asking for actual timer interrupts.  Such a guest would no
longer get the benefit of the vpt's timer modes.
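
The on-demand PF calculation boils down to comparing how many whole
periods have elapsed at two instants. A minimal standalone sketch of
that arithmetic (hypothetical names, not the Xen code; times are in
ns, mirroring the integer-division comparison in check_for_pf_ticks()
below):

```c
#include <assert.h>
#include <stdint.h>

/* With a timer that started at 'start' and ticks every 'period' ns,
 * two instants fall in different ticks exactly when their counts of
 * whole elapsed periods differ, so PF should be set if at least one
 * tick boundary lies between the last check and now. */
static int tick_elapsed(int64_t start, int64_t last_check,
                        int64_t now, int64_t period)
{
    if (period == 0)  /* periodic timer disabled */
        return 0;
    return (now - start) / period != (last_check - start) / period;
}
```

This is why the patch records check_ticks_since instead of running a
timer: the same answer can be recovered lazily whenever REG_C is read
or PIE is turned on.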

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Changes in v2:
 * Reduce scope of `now` in rtc_timer_update()
 * Merge PIE logic in REG_B write
 * Tightly couple setting s->period with creating/destroying timers, so the
   timer gets properly recreated when the guest sets REG_B.PIE
---
 xen/arch/x86/hvm/rtc.c        |   58 ++++++++++++++++++++++++++++++++---------
 xen/include/asm-x86/hvm/vpt.h |    3 ++-
 2 files changed, 48 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cdedefe..1455bc6 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -94,7 +94,7 @@ bool_t rtc_periodic_interrupt(void *opaque)
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
     }
     if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
         ret = 0;
@@ -103,6 +103,24 @@ bool_t rtc_periodic_interrupt(void *opaque)
     return ret;
 }
 
+/* Check whether the REG_C.PF bit should have been set by a tick since
+ * the last time we looked. This is used to track ticks when REG_B.PIE
+ * is clear; when PIE is set, PF ticks are handled by the VPT callbacks.  */
+static void check_for_pf_ticks(RTCState *s)
+{
+    s_time_t now;
+
+    if ( s->period == 0 || (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+        return;
+
+    now = NOW();
+    if ( (now - s->start_time) / s->period
+         != (s->check_ticks_since - s->start_time) / s->period )
+        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+
+    s->check_ticks_since = now;
+}
+
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
  * RTC_RATE_SELECT settings */
 static void rtc_timer_update(RTCState *s)
@@ -125,24 +143,31 @@ static void rtc_timer_update(RTCState *s)
     case RTC_REF_CLCK_4MHZ:
         if ( period_code != 0 )
         {
-            if ( period_code != s->pt_code )
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            if ( period != s->period )
             {
-                s->pt_code = period_code;
-                period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-                period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+                s_time_t now = NOW();
+
                 if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
                     delta = 0;
                 else
-                    delta = period - ((NOW() - s->start_time) % period);
-                create_periodic_time(v, &s->pt, delta, period,
-                                     RTC_IRQ, NULL, s);
+                    delta = period - ((now - s->start_time) % period);
+                if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+                {
+                    create_periodic_time(v, &s->pt, delta, period,
+                                         RTC_IRQ, NULL, s);
+                    s->period = period;
+                }
+                else
+                    s->check_ticks_since = now;
             }
             break;
         }
         /* fall through */
     default:
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
         break;
     }
 }
@@ -484,14 +509,22 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
             if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
+        check_for_pf_ticks(s);
         s->hw.cmos_data[RTC_REG_B] = data;
         /*
          * If the interrupt is already set when the interrupt becomes
          * enabled, raise an interrupt immediately.
          */
         rtc_update_irq(s);
-        if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
+        if ( (data ^ orig) & RTC_PIE )
+        {
+            if ( !(data & RTC_PIE) )
+            {
+                destroy_periodic_time(&s->pt);
+                s->period = 0;
+            }
             rtc_timer_update(s);
+        }
         if ( (data ^ orig) & RTC_SET )
             check_update_timer(s);
         if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
@@ -645,6 +678,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
             ret |= RTC_UIP;
         break;
     case RTC_REG_C:
+        check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
         if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
@@ -652,7 +686,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
-        rtc_timer_update(s);
+        s->pt_dead_ticks = 0;
         break;
     default:
         ret = s->hw.cmos_data[s->hw.cmos_index];
@@ -748,7 +782,7 @@ void rtc_reset(struct domain *d)
     RTCState *s = domain_vrtc(d);
 
     destroy_periodic_time(&s->pt);
-    s->pt_code = 0;
+    s->period = 0;
     s->pt.source = PTSRC_isa;
 }
 
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 87c3a66..9f48635 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -113,7 +113,8 @@ typedef struct RTCState {
     /* periodic timer */
     struct periodic_time pt;
     s_time_t start_time;
-    int pt_code;
+    s_time_t check_ticks_since;
+    int period;
     uint8_t pt_dead_ticks;
     uint32_t use_timer;
     spinlock_t lock;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 20:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 20:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEP3m-0000Gh-Ux; Fri, 14 Feb 2014 20:08:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WEP3l-0000Gc-4e
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 20:08:29 +0000
Received: from [85.158.139.211:15351] by server-17.bemta-5.messagelabs.com id
	8F/BA-31975-CB77EF25; Fri, 14 Feb 2014 20:08:28 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392408507!97279!1
X-Originating-IP: [209.85.212.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26917 invoked from network); 14 Feb 2014 20:08:27 -0000
Received: from mail-wi0-f180.google.com (HELO mail-wi0-f180.google.com)
	(209.85.212.180)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 20:08:27 -0000
Received: by mail-wi0-f180.google.com with SMTP id hm4so827753wib.7
	for <xen-devel@lists.xensource.com>;
	Fri, 14 Feb 2014 12:08:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:content-transfer-encoding;
	bh=HB9zB5K1RQFb0Clet1KcoeurU6bBJtFqXg1dffZbps8=;
	b=cxi9zXxulsbZvAYu4PO4fGwKrwQyFRv6/KjMhVMsVSOyZhNJmvl7ji7LaSQcGIWUOI
	E25SN47U1w8bvZKnTAJmWEcwIwkIQu0Mvcn5dxsJ8km2M+OS8YeIEETzpZiXHV1Rjc9r
	IfB1TlYxTPW3vmdgHjocpY0mAgEh+aVxQ2LHP06aLSRcRw9t0j5NwaMGQrYfx2u3FoLL
	RB2DL5Jwck3lBQpn1xAKz2zsdEBmof0Xb9WUl58YYjCFlP1cz+25m58W1jOuGnA2TVMc
	RUrSU2c+A7HHzjfMvME7Dq0Pc/l+1dSOt4dJKGkn5MqNfm+wMVEvaDjAk0vz7fOv4s2+
	Fpxw==
X-Received: by 10.194.71.116 with SMTP id t20mr3388540wju.51.1392408507285;
	Fri, 14 Feb 2014 12:08:27 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Fri, 14 Feb 2014 12:08:07 -0800 (PST)
In-Reply-To: <52FDDE44.3060001@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Fri, 14 Feb 2014 20:08:07 +0000
Message-ID: <CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 9:13 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 14/02/14 03:09, Miguel Clara wrote:
>> After compiling with the patch and rebuilding/installing the module, I
>> reboot, I get a panic now when drbd starts.
>
> There was no need to rebuild the module; the patch only modified the
> block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13,
> everything seemed to be fine (no kernel panic of course).

I just noticed this part now, but earlier you said:
(The patch is against git://git.drbd.org/drbd-9.0.git)

So should I apply the patch to drbd-8.4.3... ?

>
> Since the patch didn't modify anything in the kernel module itself, I
> find it unlikely to cause a kernel panic; there's probably some kind of
> problem with your kernel/module.
>
>> That was all I could get from the Java Supermicro KVM console!
>
> Without a proper serial console log it's impossible to tell what's going
> on (at least to me).
>
> Roger.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 20:08:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 20:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEP3m-0000Gh-Ux; Fri, 14 Feb 2014 20:08:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WEP3l-0000Gc-4e
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 20:08:29 +0000
Received: from [85.158.139.211:15351] by server-17.bemta-5.messagelabs.com id
	8F/BA-31975-CB77EF25; Fri, 14 Feb 2014 20:08:28 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392408507!97279!1
X-Originating-IP: [209.85.212.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26917 invoked from network); 14 Feb 2014 20:08:27 -0000
Received: from mail-wi0-f180.google.com (HELO mail-wi0-f180.google.com)
	(209.85.212.180)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 20:08:27 -0000
Received: by mail-wi0-f180.google.com with SMTP id hm4so827753wib.7
	for <xen-devel@lists.xensource.com>;
	Fri, 14 Feb 2014 12:08:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:content-transfer-encoding;
	bh=HB9zB5K1RQFb0Clet1KcoeurU6bBJtFqXg1dffZbps8=;
	b=cxi9zXxulsbZvAYu4PO4fGwKrwQyFRv6/KjMhVMsVSOyZhNJmvl7ji7LaSQcGIWUOI
	E25SN47U1w8bvZKnTAJmWEcwIwkIQu0Mvcn5dxsJ8km2M+OS8YeIEETzpZiXHV1Rjc9r
	IfB1TlYxTPW3vmdgHjocpY0mAgEh+aVxQ2LHP06aLSRcRw9t0j5NwaMGQrYfx2u3FoLL
	RB2DL5Jwck3lBQpn1xAKz2zsdEBmof0Xb9WUl58YYjCFlP1cz+25m58W1jOuGnA2TVMc
	RUrSU2c+A7HHzjfMvME7Dq0Pc/l+1dSOt4dJKGkn5MqNfm+wMVEvaDjAk0vz7fOv4s2+
	Fpxw==
X-Received: by 10.194.71.116 with SMTP id t20mr3388540wju.51.1392408507285;
	Fri, 14 Feb 2014 12:08:27 -0800 (PST)
MIME-Version: 1.0
Received: by 10.194.234.42 with HTTP; Fri, 14 Feb 2014 12:08:07 -0800 (PST)
In-Reply-To: <52FDDE44.3060001@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CXoqNxkQ3CnD1DP2_7hyLAzHH1+0+Sym4ZGrOcHfhjT9A@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Fri, 14 Feb 2014 20:08:07 +0000
Message-ID: <CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, 2014 at 9:13 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 14/02/14 03:09, Miguel Clara wrote:
>> After compiling with the patch and rebuilding/reinstalling the module
>> and rebooting, I now get a panic when drbd starts.
>
> There was no need to rebuild the module; the patch only modified the
> block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13, and
> everything seemed fine (no kernel panic, of course).

I just noticed this part now, but earlier you said:
(The patch is against git://git.drbd.org/drbd-9.0.git)

So should I apply the patch to drbd-8.4.3...?

>
> Since the patch didn't modify anything in the kernel module itself, I
> find it unlikely to have caused a kernel panic; there is probably some
> kind of problem with your kernel/module.
>
>> That was all I could get from the Java Supermicro KVM console!
>
> Without a proper serial console log it's impossible to tell what's going
> on (at least to me).
>
> Roger.
>
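For reference, a serial log of the kind Roger asks for is usually captured by pointing both Xen and the dom0 kernel at the serial port. A minimal sketch of the boot parameters, assuming the console sits on the first UART (the device and baud rate are assumptions; Supermicro boards often expose serial-over-LAN on com2 instead):

```
# GRUB entry sketch -- device, baud rate and paths are assumptions
multiboot /boot/xen.gz console=com1 com1=115200,8n1 loglvl=all guest_loglvl=all
module /boot/vmlinuz root=/dev/xvda1 ro console=hvc0
```

With console=hvc0 on the dom0 command line, dom0 kernel output is forwarded through Xen to the same serial port, so one capture holds both logs.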

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 21:13:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 21:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEQ4F-0000mb-AK; Fri, 14 Feb 2014 21:13:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <francesco.gringoli@gmail.com>) id 1WEQ4D-0000mW-40
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 21:13:01 +0000
Received: from [193.109.254.147:33290] by server-9.bemta-14.messagelabs.com id
	AE/93-24895-CD68EF25; Fri, 14 Feb 2014 21:13:00 +0000
X-Env-Sender: francesco.gringoli@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392412379!4430777!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27337 invoked from network); 14 Feb 2014 21:12:59 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 21:12:59 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so4269022eak.29
	for <xen-devel@lists.xen.org>; Fri, 14 Feb 2014 13:12:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:content-type:mime-version:subject:from:date:cc
	:content-transfer-encoding:message-id:to;
	bh=YxECvVb3R0ox3aG4sg3kDV3CYtwArd8lTPuBZ5lI1IM=;
	b=coVwre+ljJCh7qKVGSv137KTErqeHVm3t7X/Plcv2NpZlxQK9OL0RkXFKN01VsT+Nt
	UwsjR+jtsLIIoW9ULZFLLAs9+uwwo2jcouHVtWizeS153pveMcxgZFElCeOYyzzGKNtR
	62ANwJ/hckae89qaHAN827tTGqV0r634uEClEMVBM4eDxJVgHkXGMrSwAoLwqPxPoMTI
	Yj3qA9LS0qSTVM0XDYFOFbSd7KCFTexnAOYPNqp7sslnrFykMrvaE3Otj+f2NfhSQXBW
	JWPqJKpERW0QyedT6IzRxf387oIWBpKOS1ZYwauJUOSfd9ywc1rUOKg7cSc1Mms/pIj9
	L+NQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ing.unibs.it; s=google;
	h=sender:content-type:mime-version:subject:from:date:cc
	:content-transfer-encoding:message-id:to;
	bh=YxECvVb3R0ox3aG4sg3kDV3CYtwArd8lTPuBZ5lI1IM=;
	b=TW60XPgNBwrY/cQQFuy6aedtiElxjoOITTRp5Lof4ifoOGwslZ1F+S4EPVvXP2vjta
	xmsI4aQ9ig+uiWie4aPrHFcKXjzx5+TjUZ19L6IYv8LzUgZfxsnOrg5TbCCRozdpZkvz
	zWYhnV32LWptk0da9j0NfM1ruwn42dIiCzZzk=
X-Received: by 10.15.36.196 with SMTP id i44mr184137eev.104.1392412379550;
	Fri, 14 Feb 2014 13:12:59 -0800 (PST)
Received: from [10.20.10.12] ([192.167.23.210])
	by mx.google.com with ESMTPSA id 46sm24116649ees.4.2014.02.14.13.12.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 13:12:58 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Francesco Gringoli <francesco.gringoli@ing.unibs.it>
Date: Fri, 14 Feb 2014 22:12:58 +0100
Message-Id: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
To: Anthony PERARD <anthony.perard@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
X-Mailer: Apple Mail (2.1510)
Cc: John Johnson <lausgans@gmail.com>, Chen Baozi <baozich@gmail.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello guys,

Building on the great work done by Anthony, I was finally able to boot Xen and dom0 on the Chromebook with display support.

Basically everything was already done by Anthony; the only problems were the dom0 .config file missing some options, and a couple of files in arch/arm/mach-exynos that were crashing the boot process (when reading the dtb). I was not able to address the problem in a clever way, but I patched it so that boot (almost) never crashes anymore.

What I get is dom0 booting and working: boot logs appear on the display just as when running Arch Linux natively. The built-in keyboard does not work, as pointed out by Anthony, but an external one does, so it is possible to log in and check that /proc/xen is populated. There are two main issues:

1) occasionally (very rarely) the boot crashes, but late, e.g., after 2 seconds;
2) after a few minutes something weird happens and it is no longer possible to cat file contents, although ls, cd, etc. keep working.

I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I cannot edit it.

Best regards,
-Francesco



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 21:18:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 21:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEQ9S-0000wY-DN; Fri, 14 Feb 2014 21:18:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEQ9R-0000wS-Iw
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 21:18:25 +0000
Received: from [85.158.139.211:61718] by server-17.bemta-5.messagelabs.com id
	55/81-31975-0288EF25; Fri, 14 Feb 2014 21:18:24 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392412703!4028245!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2711 invoked from network); 14 Feb 2014 21:18:24 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 21:18:24 -0000
X-IronPort-AV: E=Sophos;i="4.95,847,1384300800"; 
   d="scan'208";a="9570400"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP/TLS/AES128-SHA;
	14 Feb 2014 21:18:23 +0000
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL03.citrite.net
	(10.69.46.34) with Microsoft SMTP Server id 14.2.342.4; Fri, 14 Feb 2014
	22:18:23 +0100
Message-ID: <52FE8820.1070002@citrix.com>
Date: Fri, 14 Feb 2014 21:18:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Francesco Gringoli <francesco.gringoli@ing.unibs.it>, Anthony PERARD
	<anthony.perard@citrix.com>, Ian Campbell <ian.campbell@citrix.com>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
In-Reply-To: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
Cc: John Johnson <lausgans@gmail.com>, Chen Baozi <baozich@gmail.com>,
	Xen Devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] [Wiki access] Booting XEN on Samsung ARM Chromebook
 with display support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/2014 21:12, Francesco Gringoli wrote:
> Hello guys,
>
> Building on the great work done by Anthony, I was finally able to boot Xen and dom0 on the Chromebook with display support.
>
> Basically everything was already done by Anthony; the only problems were the dom0 .config file missing some options, and a couple of files in arch/arm/mach-exynos that were crashing the boot process (when reading the dtb). I was not able to address the problem in a clever way, but I patched it so that boot (almost) never crashes anymore.
>
> What I get is dom0 booting and working: boot logs appear on the display just as when running Arch Linux natively. The built-in keyboard does not work, as pointed out by Anthony, but an external one does, so it is possible to log in and check that /proc/xen is populated. There are two main issues:
>
> 1) occasionally (very rarely) the boot crashes, but late, e.g., after 2 seconds;
> 2) after a few minutes something weird happens and it is no longer possible to cat file contents, although ls, cd, etc. keep working.
>
> I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I cannot edit it.

Because of spam attacks, editing is disabled by default.

Use
http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html
to get access.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 22:30:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 22:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WERGa-0001b1-UY; Fri, 14 Feb 2014 22:29:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WERGZ-0001aw-J4
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 22:29:51 +0000
Received: from [85.158.139.211:4272] by server-5.bemta-5.messagelabs.com id
	3B/D4-32749-ED89EF25; Fri, 14 Feb 2014 22:29:50 +0000
X-Env-Sender: openbg@abv.bg
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392416988!4018366!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23736 invoked from network); 14 Feb 2014 22:29:50 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-14.tower-206.messagelabs.com with AES256-SHA encrypted SMTP;
	14 Feb 2014 22:29:50 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <openbg@abv.bg>) id 1WERGV-0001aN-Pp
	for xen-devel@lists.xensource.com; Fri, 14 Feb 2014 14:29:47 -0800
Date: Fri, 14 Feb 2014 14:29:47 -0800 (PST)
From: xennn <openbg@abv.bg>
To: xen-devel@lists.xensource.com
Message-ID: <1392416987740-5721292.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] smp guests on xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

As far as I know, Xen 3 supports SMP guests. I would like to know whether it
is possible to map several physical CPUs to the vCPUs of an SMP guest. So if
my host has 4 CPUs, can I map 2 physical CPUs to 2 vCPUs of the SMP guest?

Best Regards
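For what it's worth, this kind of vCPU-to-pCPU mapping is done with CPU pinning. A sketch of the relevant guest configuration options (the CPU numbers here are made up for illustration):

```
# guest config fragment (sketch)
vcpus = 2        # give the guest two virtual CPUs
cpus  = "2,3"    # pin those vCPUs to physical CPUs 2 and 3
```

The same can also be done at runtime with "xm vcpu-pin <domain> <vcpu> <pcpu>" (or "xl vcpu-pin" on newer toolstacks).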



--
View this message in context: http://xen.1045712.n5.nabble.com/smp-guests-on-xen-tp5721292.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 23:03:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 23:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WERmw-0001vI-No; Fri, 14 Feb 2014 23:03:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WERmv-0001vD-W0
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 23:03:18 +0000
Received: from [193.109.254.147:38599] by server-3.bemta-14.messagelabs.com id
	34/76-00432-5B0AEF25; Fri, 14 Feb 2014 23:03:17 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392418996!4448545!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18588 invoked from network); 14 Feb 2014 23:03:16 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 23:03:16 -0000
From xen-devel-bounces@lists.xen.org Fri Feb 14 23:03:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 23:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WERmw-0001vI-No; Fri, 14 Feb 2014 23:03:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WERmv-0001vD-W0
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 23:03:18 +0000
Received: from [193.109.254.147:38599] by server-3.bemta-14.messagelabs.com id
	34/76-00432-5B0AEF25; Fri, 14 Feb 2014 23:03:17 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392418996!4448545!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18588 invoked from network); 14 Feb 2014 23:03:16 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 14 Feb 2014 23:03:16 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WERmq-00091U-UM; Fri, 14 Feb 2014 23:03:12 +0000
Date: Sat, 15 Feb 2014 00:03:12 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140214230312.GA33715@deinos.phlegethon.org>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
	<1392407521-19884-2-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392407521-19884-2-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: george.dunlap@eu.citrix.com, roger.pau@citrix.com, keir@xen.org,
	JBeulich@suse.com, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC v2 1/3] x86/hvm/rtc: Don't run the vpt
 timer when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 19:51 +0000 on 14 Feb (1392403919), Andrew Cooper wrote:
> From: Tim Deegan <tim@xen.org>
> 
> If the guest has not asked for interrupts, don't run the vpt timer
> to generate them.  This is a prerequisite for a patch to simplify how
> the vpt interacts with the RTC, and also gets rid of a timer series in
> Xen in a case where it's unlikely to be needed.
> 
> Instead, calculate the correct value for REG_C.PF whenever REG_C is
> read or PIE is enabled.  This allows a guest to poll for the PF bit
> while not asking for actual timer interrupts.  Such a guest would no
> longer get the benefit of the vpt's timer modes.
> 
> Signed-off-by: Tim Deegan <tim@xen.org>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> ---
> 
> Changes in v2:
>  * Reduce scope of `now` in rtc_timer_update()
>  * Merge PIE logic in REG_B write
>  * Tightly couple setting s->period with creating/destroying timers, so the
>    timer change gets properly recreated when the guest sets REG_B.PIE

Thanks for sorting out that bug, but I think in this version the !PIE
case won't work.  check_for_pf_ticks() uses s->period to figure out
whether to set PF, so it needs to be set whenever the REG_A selector
is configured, even if the timer's not running.

It looks like always setting s->period == 0 just before the call to
rtc_timer_update in the REG_B write (i.e. not just in the !PIE case)
would DTRT, but is that what you tried earlier?

Er, that is, here:

>          rtc_update_irq(s);
> -        if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
> +        if ( (data ^ orig) & RTC_PIE )
> +        {
> +            if ( !(data & RTC_PIE) )
> +            {
> +                destroy_periodic_time(&s->pt);
> +                s->period = 0;
> +            }
>              rtc_timer_update(s);
> +        }
>          if ( (data ^ orig) & RTC_SET )

do this:

> +        if ( (data ^ orig) & RTC_PIE )
> +        {
> +            destroy_periodic_time(&s->pt);
> +            s->period = 0;
>              rtc_timer_update(s);
> +        }

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 14 23:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Feb 2014 23:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WESPi-0002FI-Qy; Fri, 14 Feb 2014 23:43:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>) id 1WESPh-0002FD-5Z
	for xen-devel@lists.xen.org; Fri, 14 Feb 2014 23:43:21 +0000
Received: from [85.158.137.68:20028] by server-4.bemta-3.messagelabs.com id
	74/BD-04858-81AAEF25; Fri, 14 Feb 2014 23:43:20 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392421398!2006664!1
X-Originating-IP: [209.85.213.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23640 invoked from network); 14 Feb 2014 23:43:19 -0000
Received: from mail-yh0-f53.google.com (HELO mail-yh0-f53.google.com)
	(209.85.213.53)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	14 Feb 2014 23:43:19 -0000
Received: by mail-yh0-f53.google.com with SMTP id v1so12429285yhn.12
	for <xen-devel@lists.xen.org>; Fri, 14 Feb 2014 15:43:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=9ucms1HT8lI12hjxOTX4bAurV6vLmWcMLafbU45Lr9o=;
	b=kUzzDOQd5ejkjmZec6Ht9FoQ384Ip5ziysbMBee4pUErdQkU9X+IBpXHk67QTB21ar
	reCiOhj821IMi03alSe1XCC/91oknt0SOFPmTKSsYnBDKeVYU38TbO69DX6eqy6rofUh
	g2ZbmzLNy54drKSyPnUoZ0h3YTRzeNGC7zkTYCLbaq+1wBt8mLauGniGtJQmd27CpZ/s
	PfNh8cqUdlAFUL8DHrb+nNDnDwMnMA3t+06gp+kNaNHRHXB0m9adZXYVaio4iotEM/2x
	+5BYKIcUyu1T4Fs/xrTIE2Yjjw9uRiEnTrk8jzU9/QRVE2xYlXeMkXzj//+zb11EO997
	VJWQ==
X-Received: by 10.236.120.147 with SMTP id p19mr6160535yhh.6.1392421397881;
	Fri, 14 Feb 2014 15:43:17 -0800 (PST)
Received: from [172.16.26.11] ([63.110.51.11])
	by mx.google.com with ESMTPSA id 48sm21957885yhq.11.2014.02.14.15.43.16
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 15:43:17 -0800 (PST)
Message-ID: <52FEAA14.6000800@xen.org>
Date: Fri, 14 Feb 2014 15:43:16 -0800
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Andres Lagar-Cavilla <andreslc@gridcentric.ca>, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <mailman.8661.1392343770.24322.xen-devel@lists.xen.org>
	<B00D7549-3E17-4A58-924B-7C640EA70755@gridcentric.ca>
In-Reply-To: <B00D7549-3E17-4A58-924B-7C640EA70755@gridcentric.ca>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
	2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/2014 07:07, Andres Lagar-Cavilla wrote:
>> Hi all,
>>
>> I created http://wiki.xen.org/wiki/GSoc_2014 based on the project list
>>
>> Unless you guys step up, I will move all projects that have no
>> * level of difficulty
>> * skills needed
>> * outcomes
>> into http://wiki.xen.org/wiki/GSoc_2014#List_of_projects_that_need_more_work
> Hi Lars,
> I updated the two projects I submitted which got moved down to the "need more work" list. I believe they are fully specified now.
>
> Sorry for last minute update -- on US west coast.
>
> Best. Thanks for shepherding this.
>
> Andres
Thank you. Don't wait for me to move stuff into the right section. If 
you feel it is ready, move it up.

Actually, we can and should make modifications in the next few weeks, 
while the GSoC mentor organization applications are reviewed. Activity 
and participation are good. That is entirely OK. I will probably go 
through the "need more work" list and make concrete comments and 
suggestions for improvement.

Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sat Feb 15 00:02:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 00:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WESi1-0002uE-2U; Sat, 15 Feb 2014 00:02:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WEShz-0002u9-BI
	for xen-devel@lists.xen.org; Sat, 15 Feb 2014 00:02:15 +0000
Received: from [85.158.143.35:46216] by server-1.bemta-4.messagelabs.com id
	11/1C-31661-68EAEF25; Sat, 15 Feb 2014 00:02:14 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392422533!5818382!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6125 invoked from network); 15 Feb 2014 00:02:13 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 00:02:13 -0000
X-IronPort-AV: E=Sophos;i="4.95,848,1384300800"; 
   d="scan'208";a="9572690"
Received: from unknown (HELO AMSPEX01CL02.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 15 Feb 2014 00:02:14 +0000
Received: from [10.68.19.43] (10.68.19.43) by AMSPEX01CL02.citrite.net
	(10.69.46.33) with Microsoft SMTP Server id 14.2.342.4; Sat, 15 Feb 2014
	01:02:12 +0100
Message-ID: <52FEAE87.9070601@citrix.com>
Date: Sat, 15 Feb 2014 00:02:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1392407521-19884-1-git-send-email-andrew.cooper3@citrix.com>
	<1392407521-19884-2-git-send-email-andrew.cooper3@citrix.com>
	<20140214230312.GA33715@deinos.phlegethon.org>
In-Reply-To: <20140214230312.GA33715@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-Originating-IP: [10.68.19.43]
X-DLP: AMS1
Cc: george.dunlap@eu.citrix.com, roger.pau@citrix.com, keir@xen.org,
	JBeulich@suse.com, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC v2 1/3] x86/hvm/rtc: Don't run the vpt
 timer when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/2014 23:03, Tim Deegan wrote:
> At 19:51 +0000 on 14 Feb (1392403919), Andrew Cooper wrote:
>> From: Tim Deegan <tim@xen.org>
>>
>> If the guest has not asked for interrupts, don't run the vpt timer
>> to generate them.  This is a prerequisite for a patch to simplify how
>> the vpt interacts with the RTC, and also gets rid of a timer series in
>> Xen in a case where it's unlikely to be needed.
>>
>> Instead, calculate the correct value for REG_C.PF whenever REG_C is
>> read or PIE is enabled.  This allows a guest to poll for the PF bit
>> while not asking for actual timer interrupts.  Such a guest would no
>> longer get the benefit of the vpt's timer modes.
>>
>> Signed-off-by: Tim Deegan <tim@xen.org>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> ---
>>
>> Changes in v2:
>>  * Reduce scope of `now` in rtc_timer_update()
>>  * Merge PIE logic in REG_B write
>>  * Tightly couple setting s->period with creating/destroying timers, so the
>>    timer change gets properly recreated when the guest sets REG_B.PIE
> Thanks for sorting out that bug, but I think in this version the !PIE
> case won't work.  check_for_pf_ticks() uses s->period to figure out
> whether to set PF, so it needs to be set whenever the REG_A selector
> is configured, even if the timer's not running.
>
> It looks like always setting s->period == 0 just before the call to
> rtc_timer_update in the REG_B write (i.e. not just in the !PIE case)
> would DTRT, but is that what you tried earlier?

I tried the "careful clobbering" which resulted in the fragment below,
and altering the position of s->period in rtc_timer_update() to match,
but that does indeed lead to issues of s->period being 0 if !PIE.

I clearly should have been less careful.

>
> Er, that is, here:
>
>>          rtc_update_irq(s);
>> -        if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
>> +        if ( (data ^ orig) & RTC_PIE )
>> +        {
>> +            if ( !(data & RTC_PIE) )
>> +            {
>> +                destroy_periodic_time(&s->pt);
>> +                s->period = 0;
>> +            }
>>              rtc_timer_update(s);
>> +        }
>>          if ( (data ^ orig) & RTC_SET )
> do this:
>
>> +        if ( (data ^ orig) & RTC_PIE )
>> +        {
>> +            destroy_periodic_time(&s->pt);
>> +            s->period = 0;
>>              rtc_timer_update(s);
>> +        }
> Tim.

... and undo the changes in rtc_timer_update().
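For readers following along, the combined effect of the agreed approach can be
modelled with a small self-contained sketch. The struct, the stub functions and
the period derivation below are illustrative stand-ins, not the real Xen
rtc.c/vpt code; only the shape of the REG_B write path follows the fragment
quoted above:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the RTC state; field names mirror the patch discussion
 * but this is NOT the real Xen structure. */
struct rtc_state {
    uint8_t reg_a;      /* rate selector in the low 4 bits */
    uint8_t reg_b;      /* RTC_PIE lives here */
    uint64_t period;    /* cached periodic-interrupt period, 0 = unset */
    int timer_running;  /* stands in for the vpt periodic timer */
};

#define RTC_PIE 0x40

/* Stand-in for destroy_periodic_time(&s->pt). */
static void destroy_periodic_time(struct rtc_state *s)
{
    s->timer_running = 0;
}

/* Stand-in for rtc_timer_update(): recompute s->period from REG_A
 * unconditionally (so check_for_pf_ticks() can still set PF when the
 * guest merely polls REG_C), but only start the timer when PIE is set.
 * The shift below is a placeholder, not the real rate formula. */
static void rtc_timer_update(struct rtc_state *s)
{
    int rate = s->reg_a & 0x0f;
    if ( rate )
        s->period = 1u << rate;
    if ( s->reg_b & RTC_PIE )
        s->timer_running = 1;
}

/* REG_B write path, following the suggestion above: on ANY PIE
 * transition, tear down the timer and clear the cached period, then
 * let rtc_timer_update() rebuild both as appropriate. */
static void rtc_write_reg_b(struct rtc_state *s, uint8_t data)
{
    uint8_t orig = s->reg_b;
    s->reg_b = data;
    if ( (data ^ orig) & RTC_PIE )
    {
        destroy_periodic_time(s);
        s->period = 0;
        rtc_timer_update(s);
    }
}
```

The point of the unconditional destroy-and-recompute is visible in the !PIE
direction: the timer stops, but the period is immediately recomputed, so a
guest that only polls REG_C.PF still sees ticks accounted for.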

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sat Feb 15 01:14:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 01:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WETpF-0007HI-U6; Sat, 15 Feb 2014 01:13:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WETpD-0007HD-UH
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 01:13:48 +0000
Received: from [85.158.139.211:9511] by server-5.bemta-5.messagelabs.com id
	CE/9E-32749-A4FBEF25; Sat, 15 Feb 2014 01:13:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392426824!4060544!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2489 invoked from network); 15 Feb 2014 01:13:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 01:13:45 -0000
X-IronPort-AV: E=Sophos;i="4.95,848,1384300800"; d="scan'208";a="100962251"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Feb 2014 01:13:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 14 Feb 2014 20:13:42 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WETp8-0002rr-5O;
	Sat, 15 Feb 2014 01:13:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WETp7-0001ZJ-VU;
	Sat, 15 Feb 2014 01:13:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24882-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 01:13:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24882: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7848682085180602417=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7848682085180602417==
Content-Type: text/plain

flight 24882 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24882/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24862
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24862
 build-i386-oldkern            3 host-build-prep  fail in 24870 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     10 guest-saverestore           fail pass in 24870

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) need not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    because we do not have an IO-based backend. If the PVH guest in
    turn runs an HVM guest inside it, further work is needed to
    support that; for now the check bails us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When log dirty mode is enabled, all of the guest's memory is set to
    read-only. In a HAP-enabled domain this is done by clearing the write
    bit in every EPT entry. That causes a problem when VT-d shares the
    page table with EPT: a device may issue a DMA write request, the VT-d
    engine finds the target memory read-only, and a VT-d fault results.

    Currently, two places enable log dirty mode: migration and vram
    tracking. Migration with a device assigned is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============7848682085180602417==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7848682085180602417==--

From xen-devel-bounces@lists.xen.org Sat Feb 15 03:00:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 03:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEVTx-0008U5-Dd; Sat, 15 Feb 2014 02:59:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WEVTw-0008Tv-IQ
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 02:59:56 +0000
Received: from [85.158.137.68:15672] by server-13.bemta-3.messagelabs.com id
	31/23-26923-B28DEF25; Sat, 15 Feb 2014 02:59:55 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392433193!477668!1
X-Originating-IP: [209.85.192.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10709 invoked from network); 15 Feb 2014 02:59:54 -0000
Received: from mail-pd0-f181.google.com (HELO mail-pd0-f181.google.com)
	(209.85.192.181)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 02:59:54 -0000
Received: by mail-pd0-f181.google.com with SMTP id y10so12557994pdj.26
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 18:59:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=cjTXJ/HEx9jM1p9IIbz4SAbL2kHCLYEpCmtqUrPJXI0=;
	b=jiHVhV+nxnuBamR1vgp+WwQudeD1u83GHOl145evomiDFZr6OEaDKEmz5FzeEbj2lp
	F7faaNxPImlTsRGBrwSJ0/BCVXq72KezcqtNgZ2dfZJOkT1rN95Oh3MFScqDNfGTeBEZ
	2aqMDvE+s91s8ngFMLDDGyP4NqgU8+n5w3E6rIkMVwUUSsjOkaEfjoP6ZsG5mJm9bnU9
	wOiofuXLvTShv7JpFsjwMNYh1NSVjK8VkhL9SN10tJ7rTzBfmKwUqrum1OL5zhNB1kvu
	BWSVDlUbN/ffUFNtfBw8I4hU5EUfgbQS4bUbMWN55ZWkxo6t8+vB22+9RGTr2GWPZj0T
	dW2g==
X-Received: by 10.68.130.234 with SMTP id oh10mr12900646pbb.136.1392433192661; 
	Fri, 14 Feb 2014 18:59:52 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	ac5sm22144330pbc.37.2014.02.14.18.59.49 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 18:59:51 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 14 Feb 2014 18:59:47 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Fri, 14 Feb 2014 18:59:37 -0800
Message-Id: <1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
Cc: kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out from
	becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

It never makes sense for some interfaces to become the root bridge.
One example is virtual backend interfaces, which rely on other
entities on the bridge for actual physical connectivity; they only
provide virtual access.

Device drivers that know they should never become part of the
root bridge have been using a trick of setting their MAC address
to a high non-broadcast address such as FE:FF:FF:FF:FF:FF; both
qemu and xen use this sort of hack for their backend interfaces.
Instead of relying on it, let interfaces annotate their intent.
This generalizes the solution for multiple drivers while letting
them use a random MAC address or one prefixed with a proper OUI.
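
The effect of the opt-out can be sketched in plain userspace C
(illustrative names; only BR_DONT_ROOT and the lowest-address rule
come from this patch): a port flagged BR_DONT_ROOT is skipped when
the bridge picks the lowest MAC for its bridge ID, so even a
numerically low address can no longer make that port the root.

```c
#include <stddef.h>
#include <string.h>

#define ETH_ALEN 6
#define BR_DONT_ROOT 0x00000080	/* per-port flag added by this patch */

struct port {
	unsigned char addr[ETH_ALEN];
	unsigned int flags;
};

/* Mimics the loop in br_stp_recalculate_bridge_id(): the lowest MAC
 * among the ports that did not opt out becomes the bridge address. */
static const unsigned char *pick_bridge_addr(const struct port *ports, int n)
{
	const unsigned char *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (ports[i].flags & BR_DONT_ROOT)
			continue;
		if (!best || memcmp(ports[i].addr, best, ETH_ALEN) < 0)
			best = ports[i].addr;
	}
	return best;
}
```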

Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: bridge@lists.linux-foundation.org
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 include/uapi/linux/if.h | 1 +
 net/bridge/br_if.c      | 2 ++
 net/bridge/br_private.h | 1 +
 net/bridge/br_stp_if.c  | 2 ++
 4 files changed, 6 insertions(+)

diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h
index d758163..8d10382 100644
--- a/include/uapi/linux/if.h
+++ b/include/uapi/linux/if.h
@@ -84,6 +84,7 @@
 #define IFF_LIVE_ADDR_CHANGE 0x100000	/* device supports hardware address
 					 * change when it's running */
 #define IFF_MACVLAN 0x200000		/* Macvlan device */
+#define IFF_BRIDGE_NON_ROOT 0x400000    /* Don't consider for root bridge */
 
 
 #define IF_GET_IFACE	0x0001		/* for querying only */
diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
index 4bf02ad..a745415 100644
--- a/net/bridge/br_if.c
+++ b/net/bridge/br_if.c
@@ -228,6 +228,8 @@ static struct net_bridge_port *new_nbp(struct net_bridge *br,
 	br_init_port(p);
 	p->state = BR_STATE_DISABLED;
 	br_stp_port_timer_init(p);
+	if (dev->priv_flags & IFF_BRIDGE_NON_ROOT)
+		p->flags |= BR_DONT_ROOT;
 	br_multicast_add_port(p);
 
 	return p;
diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
index 045d56e..a89e8ad 100644
--- a/net/bridge/br_private.h
+++ b/net/bridge/br_private.h
@@ -173,6 +173,7 @@ struct net_bridge_port
 #define BR_ADMIN_COST		0x00000010
 #define BR_LEARNING		0x00000020
 #define BR_FLOOD		0x00000040
+#define BR_DONT_ROOT		0x00000080
 
 #ifdef CONFIG_BRIDGE_IGMP_SNOOPING
 	struct bridge_mcast_query	ip4_query;
diff --git a/net/bridge/br_stp_if.c b/net/bridge/br_stp_if.c
index 656a6f3..12fd848 100644
--- a/net/bridge/br_stp_if.c
+++ b/net/bridge/br_stp_if.c
@@ -228,6 +228,8 @@ bool br_stp_recalculate_bridge_id(struct net_bridge *br)
 		return false;
 
 	list_for_each_entry(p, &br->port_list, list) {
+		if (p->flags & BR_DONT_ROOT)
+			continue;
 		if (addr == br_mac_zero ||
 		    memcmp(p->dev->dev_addr, addr, ETH_ALEN) < 0)
 			addr = p->dev->dev_addr;
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 03:00:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 03:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEVU8-000055-94; Sat, 15 Feb 2014 03:00:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WEVU1-0008UQ-PG
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 03:00:05 +0000
Received: from [193.109.254.147:29924] by server-14.bemta-14.messagelabs.com
	id D3/A9-29228-138DEF25; Sat, 15 Feb 2014 03:00:01 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392433198!4466857!1
X-Originating-IP: [209.85.160.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5005 invoked from network); 15 Feb 2014 02:59:59 -0000
Received: from mail-pb0-f49.google.com (HELO mail-pb0-f49.google.com)
	(209.85.160.49)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 02:59:59 -0000
Received: by mail-pb0-f49.google.com with SMTP id up15so12985170pbc.8
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 18:59:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=RPZzAH9NBmo76+2Gqb0J3Z476Uznh3/zDgCxK4lFFlk=;
	b=D3G5Y2HmwiabMM98v9F8TTP7uopjd2l334G6vwTOzNpG98GWfuNIcFl0oXV28UuR/w
	kNWw2gXQH83DcJGZFi5IyE7CzZu+AfYEMyETxVy+HDzgzZOB38dJ8zldMzbZ/u8e1s8x
	08e/9j0nXwsgh0Z0Y9i3yXm257oWcO5io2yPRezYWrPKjyrX6zw2IH795YMThqKpOaYo
	ttWYZAgQtfEza3EezXkCA+h7qgSNlD79YMRWN+EZaXvvUQj/Cq6FOzNgAoTh9uMKk4Wf
	7xXxqNHuEf8KfL/Gs0anWp5Sb0L0Cm8iTcxukfAyf6fD+HIxSrhO77JcYd9qwpiJgMVA
	NEvw==
X-Received: by 10.66.156.4 with SMTP id wa4mr13022842pab.49.1392433197890;
	Fri, 14 Feb 2014 18:59:57 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	iq10sm22184658pbc.14.2014.02.14.18.59.54 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 18:59:56 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 14 Feb 2014 18:59:52 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Fri, 14 Feb 2014 18:59:38 -0800
Message-Id: <1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, James Morris <jmorris@namei.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: [Xen-devel] [RFC v2 2/4] net: enable interface option to skip IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

Some interfaces do not need any IPv4 or IPv6 addresses, so
enable an option to specify this. One example is virtualization
backend interfaces, which just use the net_device constructs
to support their respective frontends.

This should reduce boot time and complexity in virtualization
environments for each backend interface, while also avoiding
triggering SLAAC and DAD, which are simply pointless for these
types of interfaces.
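
A userspace sketch of the early-bail pattern this patch adds to
inetdev_event() and addrconf_notify() (only IFF_SKIP_IP's value comes
from the patch; the struct and helper here are illustrative):

```c
#define IFF_SKIP_IP 0x800000	/* priv_flags bit added by this patch */

struct net_device {
	unsigned int priv_flags;
};

/* Both the IPv4 and IPv6 notifiers bail out before doing any work
 * when the device asked to skip IP configuration, so no in_dev /
 * inet6_dev is ever allocated and SLAAC / DAD never start. */
static int wants_ip_autoconf(const struct net_device *dev)
{
	return !(dev->priv_flags & IFF_SKIP_IP);
}
```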

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 include/uapi/linux/if.h | 1 +
 net/ipv4/devinet.c      | 3 +++
 net/ipv6/addrconf.c     | 6 ++++++
 3 files changed, 10 insertions(+)

diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h
index 8d10382..566d856 100644
--- a/include/uapi/linux/if.h
+++ b/include/uapi/linux/if.h
@@ -85,6 +85,7 @@
 					 * change when it's running */
 #define IFF_MACVLAN 0x200000		/* Macvlan device */
 #define IFF_BRIDGE_NON_ROOT 0x400000    /* Don't consider for root bridge */
+#define IFF_SKIP_IP	0x800000	/* Skip IPv4, IPv6 */
 
 
 #define IF_GET_IFACE	0x0001		/* for querying only */
diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
index a1b5bcb..8e9ef07 100644
--- a/net/ipv4/devinet.c
+++ b/net/ipv4/devinet.c
@@ -1342,6 +1342,9 @@ static int inetdev_event(struct notifier_block *this, unsigned long event,
 
 	ASSERT_RTNL();
 
+	if (dev->priv_flags & IFF_SKIP_IP)
+		goto out;
+
 	if (!in_dev) {
 		if (event == NETDEV_REGISTER) {
 			in_dev = inetdev_init(dev);
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 4b6b720..57f58e3 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -314,6 +314,9 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
 
 	ASSERT_RTNL();
 
+	if (dev->priv_flags & IFF_SKIP_IP)
+		return NULL;
+
 	if (dev->mtu < IPV6_MIN_MTU)
 		return NULL;
 
@@ -2749,6 +2752,9 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
 	int run_pending = 0;
 	int err;
 
+	if (dev->priv_flags & IFF_SKIP_IP)
+		return NOTIFY_OK;
+
 	switch (event) {
 	case NETDEV_REGISTER:
 		if (!idev && dev->mtu >= IPV6_MIN_MTU) {
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 03:00:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 03:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEVU9-00005Z-TL; Sat, 15 Feb 2014 03:00:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WEVU8-00004u-1X
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 03:00:08 +0000
Received: from [193.109.254.147:16391] by server-1.bemta-14.messagelabs.com id
	2E/A1-15438-738DEF25; Sat, 15 Feb 2014 03:00:07 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392433204!4484077!1
X-Originating-IP: [209.85.220.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10459 invoked from network); 15 Feb 2014 03:00:06 -0000
Received: from mail-pa0-f52.google.com (HELO mail-pa0-f52.google.com)
	(209.85.220.52)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 03:00:06 -0000
Received: by mail-pa0-f52.google.com with SMTP id bj1so13077043pad.11
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 19:00:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=Sk6DJe9pUk35tgfHgtk5lgke/NzZe3VJBGTXjNDLM3s=;
	b=HvmZg5an5HYyBDYCq2WfSpi7KAim69AU4wPLaOGFVRTwgCyxPXM2PVLDvsN0jwMpn7
	yPFR7AfJORMLyEvUWMJqFU/wJyw32aF9Q9m1x6H6uShsJEVD5kac3pxvgk9ZJFtBVTKl
	8dAq3CYRvHAW+wyPwT6h2AFMITufG5cO9K+YNHpiwy3cZwcN7/30RXqavZvsvhrwKbCh
	Qs5/YIp2r6Fv+9DSwjElG8wdlN+Fo3cUjRBklMtWZQl4y8H4Pi0hKbO+qqVqsff8c7MO
	r7Mcf9jg8aCCH6Wf29KM94tKD8HTuF1Vecj1f/MFTmr5FItTW+8vRqXx2lnqmhGK4ZJ2
	uWiw==
X-Received: by 10.68.92.98 with SMTP id cl2mr12905986pbb.81.1392433203977;
	Fri, 14 Feb 2014 19:00:03 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	eb5sm55049088pad.22.2014.02.14.19.00.00 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 19:00:02 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 14 Feb 2014 18:59:57 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Fri, 14 Feb 2014 18:59:39 -0800
Message-Id: <1392433180-16052-4-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, Paul Durrant <Paul.Durrant@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC v2 3/4] xen-netback: use a random MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

The purpose of using the static MAC address FE:FF:FF:FF:FF:FF
was to prevent the bridge from nominating our backend interfaces
as the root bridge. This was possible because the bridge code
uses the lowest MAC address among its ports whenever a new
interface is added to the bridge. The bridge code now has a
generic feature that lets interfaces opt out of root bridge
nomination; use that instead.
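
For reference, the trick being removed can be reproduced in two
lines: all-ones with the multicast bit cleared yields
FE:FF:FF:FF:FF:FF, the numerically largest unicast MAC, which could
never win the lowest-address election (userspace sketch, not kernel
code):

```c
#include <string.h>

#define ETH_ALEN 6

/* Rebuilds the legacy dummy address the removed code generated. */
static void legacy_dummy_mac(unsigned char addr[ETH_ALEN])
{
	memset(addr, 0xFF, ETH_ALEN);
	addr[0] &= ~0x01;	/* clear the multicast bit: FF -> FE */
}
```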

Cc: Paul Durrant <Paul.Durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 drivers/net/xen-netback/interface.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index fff8cdd..d380e3f 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -42,6 +42,8 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static const u8 xen_oui[3] = { 0x00, 0x16, 0x3e };
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -347,15 +349,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	for (i = 0; i < MAX_PENDING_REQS; i++)
 		vif->mmap_pages[i] = NULL;
 
-	/*
-	 * Initialise a dummy MAC address. We choose the numerically
-	 * largest non-broadcast address to prevent the address getting
-	 * stolen by an Ethernet bridge for STP purposes.
-	 * (FE:FF:FF:FF:FF:FF)
-	 */
-	memset(dev->dev_addr, 0xFF, ETH_ALEN);
-	dev->dev_addr[0] &= ~0x01;
-
+	eth_hw_addr_random(dev);
+	memcpy(dev->dev_addr, xen_oui, 3);
+	dev->priv_flags |= IFF_BRIDGE_NON_ROOT;
 	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
 
 	netif_carrier_off(dev);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 03:00:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 03:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEVTs-0008Tm-0r; Sat, 15 Feb 2014 02:59:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WEVTr-0008Th-DV
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 02:59:51 +0000
Received: from [85.158.143.35:62657] by server-1.bemta-4.messagelabs.com id
	2C/14-31661-628DEF25; Sat, 15 Feb 2014 02:59:50 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392433188!5833592!1
X-Originating-IP: [209.85.192.170]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29451 invoked from network); 15 Feb 2014 02:59:49 -0000
Received: from mail-pd0-f170.google.com (HELO mail-pd0-f170.google.com)
	(209.85.192.170)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 02:59:49 -0000
Received: by mail-pd0-f170.google.com with SMTP id p10so12709991pdj.15
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 18:59:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id;
	bh=tZgKo07Ro0YXzLlQ2Bkgj0vHDkAs637+ixtSCZXPkpo=;
	b=q1D3XoI9j5UsnxuMPtdTMSbu8h4rHhMdLopHIMPYbnDYbvaUyh9u4q5XaRdKR7cuXT
	2COAfeAnUwKZq7Lf1stN/3VDx7I1fus0jaA5FDNGim3uk2cbsHzQbNi7Gcx7WEJYUz/i
	ETMGRHP/cSzp0yAoKXaBB1ImTvTqhVuKIVbre+BXfzWfTSAINwKXZOa7TXvB6Y0BAS8V
	dgwRs27nRh0DgX1hZvuVCv8Kz042dZ2iLyZ5SVC93JVUuDSi2UKWxwtWq8uocfMMyJ74
	tNP+naBqRb8JbFP6IAw38CEw+r1gHTFTxP/xkzH8Q1F5p9020DZs5aYBA5VzeSduMCYa
	SVkw==
X-Received: by 10.66.161.38 with SMTP id xp6mr12795237pab.145.1392433187750;
	Fri, 14 Feb 2014 18:59:47 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	qh2sm55020254pab.13.2014.02.14.18.59.42 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 18:59:44 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 14 Feb 2014 18:59:41 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Fri, 14 Feb 2014 18:59:36 -0800
Message-Id: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
Cc: xen-devel@lists.xenproject.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Subject: [Xen-devel] [RFC v2 0/4] net: bridge / ip optimizations for virtual
	net backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

This v2 series changes the approach of my original virtualization
multicast patch series [0]: it abandons the multicast issues
entirely and instead generalizes an approach for virtualization
backends. Virtualization backends have two things in common:

  0) they should not become the root bridge
  1) they don't need IPv4 / IPv6 interfaces

Both qemu's use of TAP interfaces and the xen-netback driver
avoid having their interfaces become the root bridge by
using a high MAC address. Let's just generalize that solution
by making it a flag.

Skipping IPv4 / IPv6 interfaces is an optimization I observed
to be possible while studying xen-netback in a shared physical
bridge environment. I haven't been able to test the NAT
environment, so I'd appreciate it if someone could test these
patches for that case in case I don't get to it.

The same flags can be adopted by TAP interfaces when needed;
I tested this with a temporary patch as follows:

diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 44c4db8..19b967e 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -940,6 +940,7 @@ static void tun_net_init(struct net_device *dev)
 		ether_setup(dev);
 		dev->priv_flags &= ~IFF_TX_SKB_SHARING;
 		dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+		dev->priv_flags |= IFF_BRIDGE_NON_ROOT | IFF_SKIP_IP;
 
 		eth_hw_addr_random(dev);
 

A proper follow-up would be to specify the flags during open(),
or in any case prior to register_netdevice(). Before that is done
we'd need to evaluate all qemu use cases of TAP interfaces, both
for the xen HVM case (which tests fine for me) and for KVM's
use cases, in both the shared physical and NAT cases. That is,
test the above patch and this series for all KVM / xen use cases.

[0] http://marc.info/?l=linux-netdev&m=139207142110536&w=2

Luis R. Rodriguez (4):
  bridge: enable interfaces to opt out from becoming the root bridge
  net: enables interface option to skip IP
  xen-netback: use a random MAC address
  xen-netback: skip IPv4 and IPv6 interfaces

 drivers/net/xen-netback/interface.c | 14 +++++---------
 include/uapi/linux/if.h             |  2 ++
 net/bridge/br_if.c                  |  2 ++
 net/bridge/br_private.h             |  1 +
 net/bridge/br_stp_if.c              |  2 ++
 net/ipv4/devinet.c                  |  3 +++
 net/ipv6/addrconf.c                 |  6 ++++++
 7 files changed, 21 insertions(+), 9 deletions(-)

-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 03:00:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 03:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEVU8-000055-94; Sat, 15 Feb 2014 03:00:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WEVU1-0008UQ-PG
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 03:00:05 +0000
Received: from [193.109.254.147:29924] by server-14.bemta-14.messagelabs.com
	id D3/A9-29228-138DEF25; Sat, 15 Feb 2014 03:00:01 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392433198!4466857!1
X-Originating-IP: [209.85.160.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5005 invoked from network); 15 Feb 2014 02:59:59 -0000
Received: from mail-pb0-f49.google.com (HELO mail-pb0-f49.google.com)
	(209.85.160.49)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 02:59:59 -0000
Received: by mail-pb0-f49.google.com with SMTP id up15so12985170pbc.8
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 18:59:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=RPZzAH9NBmo76+2Gqb0J3Z476Uznh3/zDgCxK4lFFlk=;
	b=D3G5Y2HmwiabMM98v9F8TTP7uopjd2l334G6vwTOzNpG98GWfuNIcFl0oXV28UuR/w
	kNWw2gXQH83DcJGZFi5IyE7CzZu+AfYEMyETxVy+HDzgzZOB38dJ8zldMzbZ/u8e1s8x
	08e/9j0nXwsgh0Z0Y9i3yXm257oWcO5io2yPRezYWrPKjyrX6zw2IH795YMThqKpOaYo
	ttWYZAgQtfEza3EezXkCA+h7qgSNlD79YMRWN+EZaXvvUQj/Cq6FOzNgAoTh9uMKk4Wf
	7xXxqNHuEf8KfL/Gs0anWp5Sb0L0Cm8iTcxukfAyf6fD+HIxSrhO77JcYd9qwpiJgMVA
	NEvw==
X-Received: by 10.66.156.4 with SMTP id wa4mr13022842pab.49.1392433197890;
	Fri, 14 Feb 2014 18:59:57 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223]) by mx.google.com with ESMTPSA id
	iq10sm22184658pbc.14.2014.02.14.18.59.54 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 18:59:56 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 14 Feb 2014 18:59:52 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Fri, 14 Feb 2014 18:59:38 -0800
Message-Id: <1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, James Morris <jmorris@namei.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

Some interfaces do not need any IPv4 or IPv6
addresses, so enable an option to specify this. One
example is virtualization backend interfaces, which
just use the net_device constructs to support their
respective frontends.

This should reduce boot time and complexity in
virtualization environments for each backend interface,
while also avoiding triggering SLAAC and DAD, which
are simply pointless for these types of interfaces.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: James Morris <jmorris@namei.org>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 include/uapi/linux/if.h | 1 +
 net/ipv4/devinet.c      | 3 +++
 net/ipv6/addrconf.c     | 6 ++++++
 3 files changed, 10 insertions(+)

diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h
index 8d10382..566d856 100644
--- a/include/uapi/linux/if.h
+++ b/include/uapi/linux/if.h
@@ -85,6 +85,7 @@
 					 * change when it's running */
 #define IFF_MACVLAN 0x200000		/* Macvlan device */
 #define IFF_BRIDGE_NON_ROOT 0x400000    /* Don't consider for root bridge */
+#define IFF_SKIP_IP	0x800000	/* Skip IPv4, IPv6 */
 
 
 #define IF_GET_IFACE	0x0001		/* for querying only */
diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
index a1b5bcb..8e9ef07 100644
--- a/net/ipv4/devinet.c
+++ b/net/ipv4/devinet.c
@@ -1342,6 +1342,9 @@ static int inetdev_event(struct notifier_block *this, unsigned long event,
 
 	ASSERT_RTNL();
 
+	if (dev->priv_flags & IFF_SKIP_IP)
+		goto out;
+
 	if (!in_dev) {
 		if (event == NETDEV_REGISTER) {
 			in_dev = inetdev_init(dev);
diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
index 4b6b720..57f58e3 100644
--- a/net/ipv6/addrconf.c
+++ b/net/ipv6/addrconf.c
@@ -314,6 +314,9 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
 
 	ASSERT_RTNL();
 
+	if (dev->priv_flags & IFF_SKIP_IP)
+		return NULL;
+
 	if (dev->mtu < IPV6_MIN_MTU)
 		return NULL;
 
@@ -2749,6 +2752,9 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
 	int run_pending = 0;
 	int err;
 
+	if (dev->priv_flags & IFF_SKIP_IP)
+		return NOTIFY_OK;
+
 	switch (event) {
 	case NETDEV_REGISTER:
 		if (!idev && dev->mtu >= IPV6_MIN_MTU) {
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 03:00:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 03:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEVUG-000081-Qr; Sat, 15 Feb 2014 03:00:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WEVUE-00006u-DK
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 03:00:14 +0000
Received: from [85.158.139.211:28036] by server-3.bemta-5.messagelabs.com id
	D1/97-13671-D38DEF25; Sat, 15 Feb 2014 03:00:13 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392433211!4060086!1
X-Originating-IP: [209.85.192.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28326 invoked from network); 15 Feb 2014 03:00:12 -0000
Received: from mail-pd0-f175.google.com (HELO mail-pd0-f175.google.com)
	(209.85.192.175)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 03:00:12 -0000
Received: by mail-pd0-f175.google.com with SMTP id w10so12609542pde.6
	for <xen-devel@lists.xenproject.org>;
	Fri, 14 Feb 2014 19:00:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:from:to:cc:subject:date:message-id:in-reply-to:references;
	bh=zavCAOp4FffOgHCIqVFgMZMrV/Wz6v9TByhiz6Abs/8=;
	b=j6QTdEto+3XP2Txp5Rzz/6c5o0zEnbXXeP34k6GoruYjHl0sei0ieZMnQ5JsTgQVF4
	D0rlMWsi0T7WzdOPRAqGKz/yeJ8gCPQjV8107BUw7Q6JNcPiPwhqBrNL7CCfhdC5rPwE
	rXib11A00SkFvp3Dh1xzEEyQoZxf3TstTztl82QYXzNUQxi0BdBy/A0dz3cAZ/u6+fOt
	QYpxkJKFz1qieEy0Ft78ighBdG00ad2ZVbtH1R+qD4jbMDI/sg50Ymilg8kcSgg+BUSs
	ZkTqnzJz4+fSr+r/lqVJFwY2ZxBJkJFqtLypqqWSFIGAsp5ZiwF6Ajq+n+dfYo5V8aw7
	1rDQ==
X-Received: by 10.67.14.231 with SMTP id fj7mr12632399pad.115.1392433210704;
	Fri, 14 Feb 2014 19:00:10 -0800 (PST)
Received: from mcgrof@gmail.com (c-24-7-61-223.hsd1.ca.comcast.net.
	[24.7.61.223])
	by mx.google.com with ESMTPSA id e3sm22184777pbc.17.2014.02.14.19.00.06
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Fri, 14 Feb 2014 19:00:09 -0800 (PST)
Received: by mcgrof@gmail.com (sSMTP sendmail emulation);
	Fri, 14 Feb 2014 19:00:04 -0800
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
To: netdev@vger.kernel.org
Date: Fri, 14 Feb 2014 18:59:40 -0800
Message-Id: <1392433180-16052-5-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: git-send-email 1.8.5.3
In-Reply-To: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, Paul Durrant <Paul.Durrant@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: [Xen-devel] [RFC v2 4/4] xen-netback: skip IPv4 and IPv6 interfaces
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Luis R. Rodriguez" <mcgrof@suse.com>

The xen-netback driver is used only to provide a backend
interface for the frontend. The link is the only thing we
use, and it is used internally to let us know when
xen-netfront is ready, i.e. when it switches to XenbusStateConnected.

Note that only when both xen-netfront and xen-netback
are in state XenbusStateConnected will xen-netback allow
userspace on the host (backend) to bring up the interface. Enabling
and disabling the interface simply enables or disables NAPI
respectively, which is used for the IRQ communication set up with
the xen event channels.

Cc: Paul Durrant <Paul.Durrant@citrix.com>
Cc: Ian Campbell <ian.campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org
Cc: netdev@vger.kernel.org
Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
---
 drivers/net/xen-netback/interface.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index d380e3f..07e6fd2 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -351,7 +351,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	eth_hw_addr_random(dev);
 	memcpy(dev->dev_addr, xen_oui, 3);
-	dev->priv_flags |= IFF_BRIDGE_NON_ROOT;
+	dev->priv_flags |= IFF_BRIDGE_NON_ROOT | IFF_SKIP_IP;
 	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
 
 	netif_carrier_off(dev);
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 06:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 06:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEYSR-0002Af-7G; Sat, 15 Feb 2014 06:10:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEYSO-0002Aa-Ri
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 06:10:33 +0000
Received: from [85.158.137.68:7994] by server-7.bemta-3.messagelabs.com id
	F3/48-13775-7D40FF25; Sat, 15 Feb 2014 06:10:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392444629!2034779!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5424 invoked from network); 15 Feb 2014 06:10:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 06:10:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,849,1384300800"; d="scan'208";a="100994023"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Feb 2014 06:10:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 01:10:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEYSJ-0004Kt-CP;
	Sat, 15 Feb 2014 06:10:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEYSJ-0005NN-B0;
	Sat, 15 Feb 2014 06:10:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24883-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 06:10:27 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24883: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24883 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24883/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate      fail blocked in 12557
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 3 host-install(3) broken blocked in 12557
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 linux                4675348e78fab420e70f9144b320d9c063c7cee8
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7031 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2377488 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 06:11:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 06:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEYSR-0002Af-7G; Sat, 15 Feb 2014 06:10:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEYSO-0002Aa-Ri
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 06:10:33 +0000
Received: from [85.158.137.68:7994] by server-7.bemta-3.messagelabs.com id
	F3/48-13775-7D40FF25; Sat, 15 Feb 2014 06:10:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392444629!2034779!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5424 invoked from network); 15 Feb 2014 06:10:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 06:10:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,849,1384300800"; d="scan'208";a="100994023"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Feb 2014 06:10:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 01:10:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEYSJ-0004Kt-CP;
	Sat, 15 Feb 2014 06:10:27 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEYSJ-0005NN-B0;
	Sat, 15 Feb 2014 06:10:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24883-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 06:10:27 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24883: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24883 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24883/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-win7-amd64  9 guest-localmigrate      fail blocked in 12557
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 3 host-install(3) broken blocked in 12557
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass

version targeted for testing:
 linux                4675348e78fab420e70f9144b320d9c063c7cee8
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7031 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2377488 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 06:22:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 06:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEYdz-0002Ny-Ud; Sat, 15 Feb 2014 06:22:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEYdy-0002Nt-Ih
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 06:22:30 +0000
Received: from [193.109.254.147:31272] by server-16.bemta-14.messagelabs.com
	id A7/45-21945-5A70FF25; Sat, 15 Feb 2014 06:22:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392445347!4519095!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13365 invoked from network); 15 Feb 2014 06:22:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 06:22:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,849,1384300800"; d="scan'208";a="102765102"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Feb 2014 06:22:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 01:22:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEYdu-0004Oj-6l;
	Sat, 15 Feb 2014 06:22:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEYdu-0007OY-5D;
	Sat, 15 Feb 2014 06:22:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24887-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 06:22:26 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24887: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24887 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24887/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install    fail REGR. vs. 24865
 test-amd64-i386-xend-winxpsp3  7 windows-install          fail REGR. vs. 24865

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10    fail REGR. vs. 24865

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 08:03:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 08:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEaD6-0003gx-2q; Sat, 15 Feb 2014 08:02:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEaD4-0003gs-L9
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 08:02:50 +0000
Received: from [193.109.254.147:4411] by server-12.bemta-14.messagelabs.com id
	A8/7F-17220-A2F1FF25; Sat, 15 Feb 2014 08:02:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392451367!4490744!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13410 invoked from network); 15 Feb 2014 08:02:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 08:02:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,849,1384300800"; d="scan'208";a="102775687"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Feb 2014 08:02:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 03:02:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEaD0-0004ux-6h;
	Sat, 15 Feb 2014 08:02:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEaD0-0004bI-3d;
	Sat, 15 Feb 2014 08:02:46 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24888-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 08:02:46 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24888: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24888 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-win7-amd64  7 windows-install         fail REGR. vs. 24863

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 09:28:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 09:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEbXP-0004Gr-Lw; Sat, 15 Feb 2014 09:27:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WEbXO-0004Gm-Bl
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 09:27:54 +0000
Received: from [85.158.143.35:35127] by server-1.bemta-4.messagelabs.com id
	27/53-31661-9133FF25; Sat, 15 Feb 2014 09:27:53 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392456472!5852191!1
X-Originating-IP: [81.169.146.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7483 invoked from network); 15 Feb 2014 09:27:52 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.163)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Feb 2014 09:27:52 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392456472; l=466;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=V/UOegZ8L2HJzveMutjuY18TMvo=;
	b=iga6yJ3B6lG80LxnKCHjFpNFGlzGsMlOIPUpgBNHIv8fliV1Pf60JsnEvTWDvngMO3/
	p71wA2VnVeWqItYm6NqXDuDC+D36+lNab4rURWNMX3CJCrNBwGBb19YsEDijK9QKKaTdD
	pzf/KVuNIRCz8kGJp3JH3/PWvy7i5+oYtkA=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id 601b65q1F9Rqysz
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Sat, 15 Feb 2014 10:27:52 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id C72545026A; Sat, 15 Feb 2014 10:27:51 +0100 (CET)
Date: Sat, 15 Feb 2014 10:27:51 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xennn <openbg@abv.bg>
Message-ID: <20140215092751.GA14932@aepfle.de>
References: <1392416987740-5721292.post@n5.nabble.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392416987740-5721292.post@n5.nabble.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] smp guests on xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 14, xennn wrote:

> As far as I know, Xen 3 supports SMP guests. I would like to know whether
> it is possible to map physical CPUs to the vCPUs of an SMP guest. So if my
> host has 4 CPUs, can I map 2 physical CPUs to 2 vCPUs of the SMP guest?

Yes. Scroll down to "vcpu-pin" on this site:
http://xenbits.xen.org/docs/unstable/man/xl.1.html
Scroll down to "CPU Allocation" on this site:
http://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html
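For illustration, a pinning setup for the case described above could look
like this (the domain name "guest1" and the CPU numbers are made up, not
taken from this thread):

```
# xl guest config fragment (hypothetical values): give the guest 2 vcpus
# and restrict them to physical CPUs 2-3 of the 4-CPU host
vcpus = 2
cpus  = "2-3"

# Equivalent runtime pinning with xl, one vcpu at a time:
#   xl vcpu-pin guest1 0 2
#   xl vcpu-pin guest1 1 3
```

With either form, `xl vcpu-list guest1` should then show each vcpu with the
expected CPU affinity.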


Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 15:36:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 15:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEhH7-0006RO-Ob; Sat, 15 Feb 2014 15:35:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEhH6-0006RJ-8o
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 15:35:28 +0000
Received: from [85.158.139.211:17050] by server-16.bemta-5.messagelabs.com id
	0B/9D-05060-F398FF25; Sat, 15 Feb 2014 15:35:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392478524!4079569!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10664 invoked from network); 15 Feb 2014 15:35:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 15:35:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,850,1384300800"; d="scan'208";a="102824970"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Feb 2014 15:35:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 10:35:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEhH0-0007FG-8v;
	Sat, 15 Feb 2014 15:35:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEhGz-0004IF-KO;
	Sat, 15 Feb 2014 15:35:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24901-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 15:35:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24901: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0305517487030652040=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0305517487030652040==
Content-Type: text/plain

flight 24901 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24862
 build-amd64-oldkern           4 xen-build        fail in 24882 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 24882
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24882
 test-amd64-amd64-xl-sedf     10 guest-saverestore  fail in 24882 pass in 24901

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24882 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24882 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With the PVH
    enabled that is no longer the case - which means that we do not have
    to have the IO-backend device (QEMU) enabled.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an io based backend. In the case that the
    PVH guest does run an HVM guest inside it - we need to do
    further work to suport this - and for now the check will
    bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When enabling log dirty mode, it sets all guest's memory to readonly.
    And in HAP enabled domain, it modifies all EPT entries to clear write bit
    to make sure it is readonly. This will cause problem if VT-d shares page
    table with EPT: the device may issue a DMA write request, then VT-d engine
    tells it the target memory is readonly and result in VT-d fault.
    
    Currnetly, there are two places will enable log dirty mode: migration and vram
    tracking. Migration with device assigned is not allowed, so it is ok. For vram,
    it doesn't need to set all memory to readonly. Only track the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau MonnÃ© <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============0305517487030652040==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0305517487030652040==--

From xen-devel-bounces@lists.xen.org Sat Feb 15 15:36:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 15:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEhH7-0006RO-Ob; Sat, 15 Feb 2014 15:35:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEhH6-0006RJ-8o
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 15:35:28 +0000
Received: from [85.158.139.211:17050] by server-16.bemta-5.messagelabs.com id
	0B/9D-05060-F398FF25; Sat, 15 Feb 2014 15:35:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392478524!4079569!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10664 invoked from network); 15 Feb 2014 15:35:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 15:35:25 -0000
X-IronPort-AV: E=Sophos;i="4.95,850,1384300800"; d="scan'208";a="102824970"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Feb 2014 15:35:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 10:35:23 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEhH0-0007FG-8v;
	Sat, 15 Feb 2014 15:35:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEhGz-0004IF-KO;
	Sat, 15 Feb 2014 15:35:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24901-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 15:35:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24901: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0305517487030652040=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0305517487030652040==
Content-Type: text/plain

flight 24901 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24862
 build-amd64-oldkern           4 xen-build        fail in 24882 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 24882
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24882
 test-amd64-amd64-xl-sedf     10 guest-saverestore  fail in 24882 pass in 24901

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24882 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24882 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) may not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. In the case that the
    PVH guest itself runs an HVM guest, further work is needed to
    support this; for now the check will bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    Enabling log-dirty mode sets all of the guest's memory to read-only.
    In a HAP-enabled domain, it clears the write bit in every EPT entry
    to make the memory read-only. This causes a problem if VT-d shares
    page tables with EPT: a device may issue a DMA write request, the
    VT-d engine will find the target memory read-only, and the result
    is a VT-d fault.
    
    Currently, two places enable log-dirty mode: migration and vram
    tracking. Migration with an assigned device is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On Linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============0305517487030652040==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0305517487030652040==--

From xen-devel-bounces@lists.xen.org Sat Feb 15 16:57:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 16:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEiYH-0007Ih-DY; Sat, 15 Feb 2014 16:57:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <srivatsa.bhat@linux.vnet.ibm.com>)
	id 1WEiYG-0007Ic-4i
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 16:57:16 +0000
Received: from [193.109.254.147:42577] by server-8.bemta-14.messagelabs.com id
	B0/E5-18529-B6C9FF25; Sat, 15 Feb 2014 16:57:15 +0000
X-Env-Sender: srivatsa.bhat@linux.vnet.ibm.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392483430!872752!1
X-Originating-IP: [202.81.31.145]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NSA9PiAzMTMyNzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9063 invoked from network); 15 Feb 2014 16:57:14 -0000
Received: from e23smtp03.au.ibm.com (HELO e23smtp03.au.ibm.com) (202.81.31.145)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Feb 2014 16:57:14 -0000
Received: from /spool/local
	by e23smtp03.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<srivatsa.bhat@linux.vnet.ibm.com>; Sun, 16 Feb 2014 02:57:07 +1000
Received: from d23dlp03.au.ibm.com (202.81.31.214)
	by e23smtp03.au.ibm.com (202.81.31.209) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Sun, 16 Feb 2014 02:57:05 +1000
Received: from d23relay04.au.ibm.com (d23relay04.au.ibm.com [9.190.234.120])
	by d23dlp03.au.ibm.com (Postfix) with ESMTP id CEBE03578056
	for <xen-devel@lists.xenproject.org>;
	Sun, 16 Feb 2014 03:57:02 +1100 (EST)
Received: from d23av02.au.ibm.com (d23av02.au.ibm.com [9.190.235.138])
	by d23relay04.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1FGbTAM7995746
	for <xen-devel@lists.xenproject.org>; Sun, 16 Feb 2014 03:37:30 +1100
Received: from d23av02.au.ibm.com (localhost [127.0.0.1])
	by d23av02.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1FGv0pa015540
	for <xen-devel@lists.xenproject.org>; Sun, 16 Feb 2014 03:57:01 +1100
Received: from srivatsabhat.in.ibm.com ([9.79.254.206])
	by d23av02.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1FGur6V015520; Sun, 16 Feb 2014 03:56:54 +1100
Message-ID: <52FF9B14.8000308@linux.vnet.ibm.com>
Date: Sat, 15 Feb 2014 22:21:32 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:15.0) Gecko/20120828 Thunderbird/15.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
	<20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
	<52FE490B.8000908@oracle.com> <52FE493A.2030206@linux.vnet.ibm.com>
In-Reply-To: <52FE493A.2030206@linux.vnet.ibm.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14021516-6102-0000-0000-000004F39A26
Cc: linux-arch@vger.kernel.org, ego@linux.vnet.ibm.com, walken@google.com,
	linux@arm.linux.org.uk, akpm@linux-foundation.org,
	peterz@infradead.org, rusty@rustcorp.com.au, rjw@rjwysocki.net,
	oleg@redhat.com, linux-kernel@vger.kernel.org, paulus@samba.org,
	David Vrabel <david.vrabel@citrix.com>, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: [Xen-devel] [UPDATED][PATCH v2 46/52] xen,
 balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/14/2014 10:20 PM, Srivatsa S. Bhat wrote:
> On 02/14/2014 10:19 PM, Boris Ostrovsky wrote:
>> On 02/14/2014 02:59 AM, Srivatsa S. Bhat wrote:
>>> Subsystems that want to register CPU hotplug callbacks, as well as
>>> perform
From xen-devel-bounces@lists.xen.org Sat Feb 15 16:57:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 16:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEiYH-0007Ih-DY; Sat, 15 Feb 2014 16:57:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <srivatsa.bhat@linux.vnet.ibm.com>)
	id 1WEiYG-0007Ic-4i
	for xen-devel@lists.xenproject.org; Sat, 15 Feb 2014 16:57:16 +0000
Received: from [193.109.254.147:42577] by server-8.bemta-14.messagelabs.com id
	B0/E5-18529-B6C9FF25; Sat, 15 Feb 2014 16:57:15 +0000
X-Env-Sender: srivatsa.bhat@linux.vnet.ibm.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392483430!872752!1
X-Originating-IP: [202.81.31.145]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NSA9PiAzMTMyNzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9063 invoked from network); 15 Feb 2014 16:57:14 -0000
Received: from e23smtp03.au.ibm.com (HELO e23smtp03.au.ibm.com) (202.81.31.145)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Feb 2014 16:57:14 -0000
Received: from /spool/local
	by e23smtp03.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<srivatsa.bhat@linux.vnet.ibm.com>; Sun, 16 Feb 2014 02:57:07 +1000
Received: from d23dlp03.au.ibm.com (202.81.31.214)
	by e23smtp03.au.ibm.com (202.81.31.209) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Sun, 16 Feb 2014 02:57:05 +1000
Received: from d23relay04.au.ibm.com (d23relay04.au.ibm.com [9.190.234.120])
	by d23dlp03.au.ibm.com (Postfix) with ESMTP id CEBE03578056
	for <xen-devel@lists.xenproject.org>;
	Sun, 16 Feb 2014 03:57:02 +1100 (EST)
Received: from d23av02.au.ibm.com (d23av02.au.ibm.com [9.190.235.138])
	by d23relay04.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1FGbTAM7995746
	for <xen-devel@lists.xenproject.org>; Sun, 16 Feb 2014 03:37:30 +1100
Received: from d23av02.au.ibm.com (localhost [127.0.0.1])
	by d23av02.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1FGv0pa015540
	for <xen-devel@lists.xenproject.org>; Sun, 16 Feb 2014 03:57:01 +1100
Received: from srivatsabhat.in.ibm.com ([9.79.254.206])
	by d23av02.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1FGur6V015520; Sun, 16 Feb 2014 03:56:54 +1100
Message-ID: <52FF9B14.8000308@linux.vnet.ibm.com>
Date: Sat, 15 Feb 2014 22:21:32 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:15.0) Gecko/20120828 Thunderbird/15.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
	<20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
	<52FE490B.8000908@oracle.com> <52FE493A.2030206@linux.vnet.ibm.com>
In-Reply-To: <52FE493A.2030206@linux.vnet.ibm.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14021516-6102-0000-0000-000004F39A26
Cc: linux-arch@vger.kernel.org, ego@linux.vnet.ibm.com, walken@google.com,
	linux@arm.linux.org.uk, akpm@linux-foundation.org,
	peterz@infradead.org, rusty@rustcorp.com.au, rjw@rjwysocki.net,
	oleg@redhat.com, linux-kernel@vger.kernel.org, paulus@samba.org,
	David Vrabel <david.vrabel@citrix.com>, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: [Xen-devel] [UPDATED][PATCH v2 46/52] xen,
 balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/14/2014 10:20 PM, Srivatsa S. Bhat wrote:
> On 02/14/2014 10:19 PM, Boris Ostrovsky wrote:
>> On 02/14/2014 02:59 AM, Srivatsa S. Bhat wrote:
>>> Subsystems that want to register CPU hotplug callbacks, as well as
>>> perform
>>> initialization for the CPUs that are already online, often do it as shown
>>> below:
>>>
[...]
>> This looks exactly like the earlier version (i.e. the notifier is still
>> kept registered on allocation failure, and the commit message doesn't
>> exactly reflect the change).
>>
> 
> Sorry, your earlier reply (for some unknown reason) broke out of the
> email thread and landed elsewhere in my inbox, so unfortunately I forgot
> to take your suggestions into account when sending out the v2.
> 
> I'll send out an updated version of just this patch, as a reply.

Here is the updated patch. Please let me know what you think!

----------------------------------------------------------------------------

From: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Subject: [PATCH] xen, balloon: Fix CPU hotplug callback registration

Subsystems that want to register CPU hotplug callbacks, as well as perform
initialization for the CPUs that are already online, often do it as shown
below:

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	register_cpu_notifier(&foobar_cpu_notifier);

	put_online_cpus();

This is wrong, since it is prone to ABBA deadlocks involving the
cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
with CPU hotplug operations).

The xen balloon driver doesn't take get/put_online_cpus() around this code,
but that is also buggy, since it can miss CPU hotplug events in between the
initialization and callback registration:

	for_each_online_cpu(cpu)
		init_cpu(cpu);
		   ^
		   |  Race window; Can miss CPU hotplug events here.
		   v
	register_cpu_notifier(&foobar_cpu_notifier);

Interestingly, the balloon code in xen can simply be reorganized as shown
below, to have a race-free method to register hotplug callbacks, without even
taking get/put_online_cpus(). This is because the initialization performed for
already online CPUs is exactly the same as that performed for CPUs that come
online later. Moreover, the code has checks in place to avoid double
initialization.

	register_cpu_notifier(&foobar_cpu_notifier);

	get_online_cpus();

	for_each_online_cpu(cpu)
		init_cpu(cpu);

	put_online_cpus();

A hotplug operation that occurs between registering the notifier and calling
get_online_cpus() won't disrupt anything, because the code takes care to
perform the memory allocations only once.

So reorganize the balloon code in xen this way to fix the issues with CPU
hotplug callback registration.

Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
---

 drivers/xen/balloon.c |   36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 37d06ea..dd79549 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned long start_pfn,
 	}
 }
 
+static int alloc_balloon_scratch_page(int cpu)
+{
+	if (per_cpu(balloon_scratch_page, cpu) != NULL)
+		return 0;
+
+	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
+	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
+		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+
 static int balloon_cpu_notify(struct notifier_block *self,
 				    unsigned long action, void *hcpu)
 {
 	int cpu = (long)hcpu;
 	switch (action) {
 	case CPU_UP_PREPARE:
-		if (per_cpu(balloon_scratch_page, cpu) != NULL)
-			break;
-		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
-		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
-			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		if (alloc_balloon_scratch_page(cpu))
 			return NOTIFY_BAD;
-		}
 		break;
 	default:
 		break;
@@ -624,15 +634,17 @@ static int __init balloon_init(void)
 		return -ENODEV;
 
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
-		for_each_online_cpu(cpu)
-		{
-			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
-			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
-				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
+		register_cpu_notifier(&balloon_cpu_notifier);
+
+		get_online_cpus();
+		for_each_online_cpu(cpu) {
+			if (alloc_balloon_scratch_page(cpu)) {
+				put_online_cpus();
+				unregister_cpu_notifier(&balloon_cpu_notifier);
 				return -ENOMEM;
 			}
 		}
-		register_cpu_notifier(&balloon_cpu_notifier);
+		put_online_cpus();
 	}
 
 	pr_info("Initialising balloon driver\n");




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 19:00:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 19:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEkSv-00080R-0v; Sat, 15 Feb 2014 18:59:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEkSu-00080M-0H
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 18:59:52 +0000
Received: from [85.158.137.68:25865] by server-14.bemta-3.messagelabs.com id
	90/1F-08196-629BFF25; Sat, 15 Feb 2014 18:59:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392490788!842044!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29850 invoked from network); 15 Feb 2014 18:59:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 18:59:50 -0000
X-IronPort-AV: E=Sophos;i="4.95,851,1384300800"; d="scan'208";a="101074969"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Feb 2014 18:59:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 13:59:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEkSo-0008H5-RT;
	Sat, 15 Feb 2014 18:59:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEkSo-0005hS-M0;
	Sat, 15 Feb 2014 18:59:46 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24905-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 18:59:46 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24905: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24905 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24905/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 24863
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24863

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3)   broken pass in 24888
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24888 pass in 24905

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24888 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24888 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24888 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24888 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24888 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 19:39:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 19:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEl5I-0008N8-LI; Sat, 15 Feb 2014 19:39:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEl5G-0008N3-KQ
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 19:39:31 +0000
Received: from [193.109.254.147:29904] by server-3.bemta-14.messagelabs.com id
	30/7C-00432-172CFF25; Sat, 15 Feb 2014 19:39:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392493167!4571083!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6535 invoked from network); 15 Feb 2014 19:39:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 19:39:28 -0000
X-IronPort-AV: E=Sophos;i="4.95,851,1384300800"; d="scan'208";a="101079325"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 15 Feb 2014 19:39:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 14:39:26 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEl5C-0008TU-0W;
	Sat, 15 Feb 2014 19:39:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEl5B-0005FC-Bm;
	Sat, 15 Feb 2014 19:39:25 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24904-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 19:39:25 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24904: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24904 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24904/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 test-amd64-i386-xend-winxpsp3  7 windows-install          fail REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   14 guest-localmigrate/x10      fail pass in 24887
 test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install       fail pass in 24887
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24887 pass in 24904

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24887 never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 21:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 21:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEmwd-0000ei-1o; Sat, 15 Feb 2014 21:38:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jaceksburghardt@gmail.com>) id 1WEmwc-0000ed-1u
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 21:38:42 +0000
Received: from [85.158.143.35:21643] by server-1.bemta-4.messagelabs.com id
	F3/6E-31661-16EDFF25; Sat, 15 Feb 2014 21:38:41 +0000
X-Env-Sender: jaceksburghardt@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392500318!5934194!1
X-Originating-IP: [209.85.192.45]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14294 invoked from network); 15 Feb 2014 21:38:39 -0000
Received: from mail-qg0-f45.google.com (HELO mail-qg0-f45.google.com)
	(209.85.192.45)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 21:38:39 -0000
Received: by mail-qg0-f45.google.com with SMTP id j5so3044271qga.4
	for <xen-devel@lists.xensource.com>;
	Sat, 15 Feb 2014 13:38:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=uKjmMpxzSbvx4WyGOyBn2T0Byzqf2uge+yWa8vzL6Eo=;
	b=ehhCddrJ7MjL3dDTn2P/4/+ZnPgui7HTy3nMpSSxuZYntsaa0mZUGxajMv6+ezK/8c
	Zm6Dd3UDT7kx7JQdxT8BBmGypZKzKm2dXyWW/uMjFZqUabGQo1Oya5lo8a5kfIiK1y4Q
	c3Hu7oREsLVQbXTFGLJIId/rm/g5yiOAI5RM+i0kyDdStAlNLqOaTADnEl3u0WLOxdG8
	ZD8tPBzwpnAYZmGwzlspM0EHld58BaAfeDB+HDDNPG0u19VcOdGe9pGQhzWTeo5flPv5
	dAgulkrsB/iA17fB/2gHtz03aVsCcG98GAjyL4zys/ylW8luJYTK5y5x4RiVRUY5yMYQ
	DZtA==
MIME-Version: 1.0
X-Received: by 10.140.82.175 with SMTP id h44mr23088970qgd.68.1392500318472;
	Sat, 15 Feb 2014 13:38:38 -0800 (PST)
Received: by 10.140.83.180 with HTTP; Sat, 15 Feb 2014 13:38:38 -0800 (PST)
Received: by 10.140.83.180 with HTTP; Sat, 15 Feb 2014 13:38:38 -0800 (PST)
Date: Sat, 15 Feb 2014 14:38:38 -0700
Message-ID: <CAHyyzzQLLKQAnOcda4b_xgRq4iq1to9hDoHOLE8JhEsMUEW-AA@mail.gmail.com>
From: jacek burghardt <jaceksburghardt@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, pko@gmail.com, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 v2 06/10] xen/arm: second irq injection
 while the first irq is still inflight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7955566364708841979=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7955566364708841979==
Content-Type: multipart/alternative; boundary=001a11c129cad50eaf04f278be88

--001a11c129cad50eaf04f278be88
Content-Type: text/plain; charset=ISO-8859-1

On Feb 14, 2014 8:54 AM, "Stefano Stabellini" <
stefano.stabellini@eu.citrix.com> wrote:
>
> Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
> while the first one is still active.
> If the first irq is already pending (not active), just clear
> GIC_IRQ_GUEST_PENDING because the irq has already been injected and is
> already visible by the guest.
> If the irq has already been EOI'ed then just clear the GICH_LR right
> away and move the interrupt to lr_pending so that it is going to be
> reinjected by gic_restore_pending_irqs on return to guest.
>
> If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_PENDING
> and send an SGI. The target cpu is going to be interrupted and call
> gic_clear_lrs, that is going to take the same actions.
>
> Do not call vgic_vcpu_inject_irq from gic_inject if
> evtchn_upcall_pending is set. If we remove that call, we don't need to
> special case evtchn_irq in vgic_vcpu_inject_irq anymore.
> We also need to force the first injection of evtchn_irq (call
> gic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> is already set by common code on vcpu creation.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/gic.c        |   82 +++++++++++++++++++++++++--------------------
>  xen/arch/arm/vgic.c       |   18 +++++++---
>  xen/include/asm-arm/gic.h |    1 +
>  3 files changed, 61 insertions(+), 40 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 5fca5be..0955d48 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -698,51 +698,64 @@ out:
>      return;
>  }
>
> -static void gic_clear_lrs(struct vcpu *v)
> +static void _gic_clear_lr(struct vcpu *v, int i)
>  {
> -    struct pending_irq *p;
> -    int i = 0, irq;
> +    int irq;
>      uint32_t lr;
> -    bool_t inflight;
> +    struct pending_irq *p;
>
>      ASSERT(!local_irq_is_enabled());
>
> -    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> -                              nr_lrs, i)) < nr_lrs) {
> -        lr = GICH[GICH_LR + i];
> -        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> +    lr = GICH[GICH_LR + i];
> +    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> +    p = irq_to_pending(v, irq);
> +    if ( lr & GICH_LR_ACTIVE )
> +    {
> +        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
> +             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
> +            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
> +    } else if ( lr & GICH_LR_PENDING ) {
> +        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
> +    } else {
> +        spin_lock(&gic.lock);
> +
> +        GICH[GICH_LR + i] = 0;
> +        clear_bit(i, &this_cpu(lr_mask));
> +
> +        if ( p->desc != NULL )
> +            p->desc->status &= ~IRQ_INPROGRESS;
> +        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> +        p->lr = nr_lrs;
> +        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> +                test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
>          {
> -            inflight = 0;
> -            GICH[GICH_LR + i] = 0;
> -            clear_bit(i, &this_cpu(lr_mask));
> -
> -            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> -            spin_lock(&gic.lock);
> -            p = irq_to_pending(v, irq);
> -            if ( p->desc != NULL )
> -                p->desc->status &= ~IRQ_INPROGRESS;
> -            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> -            p->lr = nr_lrs;
> -            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> -                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> -            {
> -                inflight = 1;
> -                gic_add_to_lr_pending(v, irq, p->priority);
> -            }
> -            spin_unlock(&gic.lock);
> -            if ( !inflight )
> -            {
> -                spin_lock(&v->arch.vgic.lock);
> -                list_del_init(&p->inflight);
> -                spin_unlock(&v->arch.vgic.lock);
> -            }
> +            gic_add_to_lr_pending(v, irq, p->priority);
> +        } else
> +            list_del_init(&p->inflight);
>
> -        }
> +        spin_unlock(&gic.lock);
> +    }
> +}
> +
> +static void gic_clear_lrs(struct vcpu *v)
> +{
> +    int i = 0;
>
> +    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> +                              nr_lrs, i)) < nr_lrs) {
> +
> +        spin_lock(&v->arch.vgic.lock);
> +        _gic_clear_lr(v, i);
> +        spin_unlock(&v->arch.vgic.lock);
>          i++;
>      }
>  }
>
> +void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p)
> +{
> +    _gic_clear_lr(v, p->lr);
> +}
> +
>  static void gic_restore_pending_irqs(struct vcpu *v)
>  {
>      int i;
> @@ -801,9 +814,6 @@ void gic_inject(void)
>  {
>      gic_clear_lrs(current);
>
> -    if ( vcpu_info(current, evtchn_upcall_pending) )
> -        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
> -
>      gic_restore_pending_irqs(current);
>      if (!gic_events_need_delivery())
>          gic_inject_irq_stop();
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index da15f4d..210ac39 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -387,7 +387,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> +        if ( irq == v->domain->arch.evtchn_irq &&
> +             vcpu_info(current, evtchn_upcall_pending) &&
> +             list_empty(&p->inflight) )
> +            vgic_vcpu_inject_irq(v, irq);
> +        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
>              gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
>          if ( p->desc != NULL )
>              p->desc->handler->enable(p->desc);
> @@ -696,10 +700,16 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
>
>      if ( !list_empty(&n->inflight) )
>      {
> -        if ( (irq != current->domain->arch.evtchn_irq) ||
> -             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
> +        if ( v == current )
> +        {
> +            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
> +            gic_set_clear_lr(v, n);
> +            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +            return;
> +        } else {
>              set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
> -        goto out;
> +            goto out;
> +        }
>      }
>
>      /* vcpu offline */
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 6fce5c2..6de0d9b 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -184,6 +184,7 @@ extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
>  extern int gic_route_irq_to_guest(struct domain *d,
>                                    const struct dt_irq *irq,
>                                    const char * devname);
> +extern void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p);
>
>  /* Accept an interrupt from the GIC and dispatch its handler */
>  extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq);
> --
> 1.7.10.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--001a11c129cad50eaf04f278be88--


--===============7955566364708841979==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7955566364708841979==--


From xen-devel-bounces@lists.xen.org Sat Feb 15 21:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 21:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEmwd-0000ei-1o; Sat, 15 Feb 2014 21:38:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jaceksburghardt@gmail.com>) id 1WEmwc-0000ed-1u
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 21:38:42 +0000
Received: from [85.158.143.35:21643] by server-1.bemta-4.messagelabs.com id
	F3/6E-31661-16EDFF25; Sat, 15 Feb 2014 21:38:41 +0000
X-Env-Sender: jaceksburghardt@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392500318!5934194!1
X-Originating-IP: [209.85.192.45]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14294 invoked from network); 15 Feb 2014 21:38:39 -0000
Received: from mail-qg0-f45.google.com (HELO mail-qg0-f45.google.com)
	(209.85.192.45)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 21:38:39 -0000
Received: by mail-qg0-f45.google.com with SMTP id j5so3044271qga.4
	for <xen-devel@lists.xensource.com>;
	Sat, 15 Feb 2014 13:38:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=uKjmMpxzSbvx4WyGOyBn2T0Byzqf2uge+yWa8vzL6Eo=;
	b=ehhCddrJ7MjL3dDTn2P/4/+ZnPgui7HTy3nMpSSxuZYntsaa0mZUGxajMv6+ezK/8c
	Zm6Dd3UDT7kx7JQdxT8BBmGypZKzKm2dXyWW/uMjFZqUabGQo1Oya5lo8a5kfIiK1y4Q
	c3Hu7oREsLVQbXTFGLJIId/rm/g5yiOAI5RM+i0kyDdStAlNLqOaTADnEl3u0WLOxdG8
	ZD8tPBzwpnAYZmGwzlspM0EHld58BaAfeDB+HDDNPG0u19VcOdGe9pGQhzWTeo5flPv5
	dAgulkrsB/iA17fB/2gHtz03aVsCcG98GAjyL4zys/ylW8luJYTK5y5x4RiVRUY5yMYQ
	DZtA==
MIME-Version: 1.0
X-Received: by 10.140.82.175 with SMTP id h44mr23088970qgd.68.1392500318472;
	Sat, 15 Feb 2014 13:38:38 -0800 (PST)
Received: by 10.140.83.180 with HTTP; Sat, 15 Feb 2014 13:38:38 -0800 (PST)
Received: by 10.140.83.180 with HTTP; Sat, 15 Feb 2014 13:38:38 -0800 (PST)
Date: Sat, 15 Feb 2014 14:38:38 -0700
Message-ID: <CAHyyzzQLLKQAnOcda4b_xgRq4iq1to9hDoHOLE8JhEsMUEW-AA@mail.gmail.com>
From: jacek burghardt <jaceksburghardt@gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, pko@gmail.com, xen-devel@lists.xensource.com,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 v2 06/10] xen/arm: second irq injection
 while the first irq is still inflight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7955566364708841979=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7955566364708841979==
Content-Type: multipart/alternative; boundary=001a11c129cad50eaf04f278be88

--001a11c129cad50eaf04f278be88
Content-Type: text/plain; charset=ISO-8859-1

J oh
On Feb 14, 2014 8:54 AM, "Stefano Stabellini" <
stefano.stabellini@eu.citrix.com> wrote:
>
> Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
> while the first one is still active.
> If the first irq is already pending (not active), just clear
> GIC_IRQ_GUEST_PENDING because the irq has already been injected and is
> already visible to the guest.
> If the irq has already been EOI'ed then just clear the GICH_LR right
> away and move the interrupt to lr_pending so that it is going to be
> reinjected by gic_restore_pending_irqs on return to guest.
>
> If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_PENDING
> and send an SGI. The target cpu is going to be interrupted and call
> gic_clear_lrs, which will take the same actions.
>
> Do not call vgic_vcpu_inject_irq from gic_inject if
> evtchn_upcall_pending is set. If we remove that call, we don't need to
> special case evtchn_irq in vgic_vcpu_inject_irq anymore.
> We also need to force the first injection of evtchn_irq (call
> vgic_vcpu_inject_irq) from vgic_enable_irqs because evtchn_upcall_pending
> is already set by common code on vcpu creation.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> ---
>  xen/arch/arm/gic.c        |   82 +++++++++++++++++++++++++--------------------
>  xen/arch/arm/vgic.c       |   18 +++++++---
>  xen/include/asm-arm/gic.h |    1 +
>  3 files changed, 61 insertions(+), 40 deletions(-)
>
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 5fca5be..0955d48 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -698,51 +698,64 @@ out:
>      return;
>  }
>
> -static void gic_clear_lrs(struct vcpu *v)
> +static void _gic_clear_lr(struct vcpu *v, int i)
>  {
> -    struct pending_irq *p;
> -    int i = 0, irq;
> +    int irq;
>      uint32_t lr;
> -    bool_t inflight;
> +    struct pending_irq *p;
>
>      ASSERT(!local_irq_is_enabled());
>
> -    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> -                              nr_lrs, i)) < nr_lrs) {
> -        lr = GICH[GICH_LR + i];
> -        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
> +    lr = GICH[GICH_LR + i];
> +    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> +    p = irq_to_pending(v, irq);
> +    if ( lr & GICH_LR_ACTIVE )
> +    {
> +        if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
> +             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
> +            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
> +    } else if ( lr & GICH_LR_PENDING ) {
> +        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
> +    } else {
> +        spin_lock(&gic.lock);
> +
> +        GICH[GICH_LR + i] = 0;
> +        clear_bit(i, &this_cpu(lr_mask));
> +
> +        if ( p->desc != NULL )
> +            p->desc->status &= ~IRQ_INPROGRESS;
> +        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> +        p->lr = nr_lrs;
> +        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> +                test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
>          {
> -            inflight = 0;
> -            GICH[GICH_LR + i] = 0;
> -            clear_bit(i, &this_cpu(lr_mask));
> -
> -            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
> -            spin_lock(&gic.lock);
> -            p = irq_to_pending(v, irq);
> -            if ( p->desc != NULL )
> -                p->desc->status &= ~IRQ_INPROGRESS;
> -            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> -            p->lr = nr_lrs;
> -            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
> -                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
> -            {
> -                inflight = 1;
> -                gic_add_to_lr_pending(v, irq, p->priority);
> -            }
> -            spin_unlock(&gic.lock);
> -            if ( !inflight )
> -            {
> -                spin_lock(&v->arch.vgic.lock);
> -                list_del_init(&p->inflight);
> -                spin_unlock(&v->arch.vgic.lock);
> -            }
> +            gic_add_to_lr_pending(v, irq, p->priority);
> +        } else
> +            list_del_init(&p->inflight);
>
> -        }
> +        spin_unlock(&gic.lock);
> +    }
> +}
> +
> +static void gic_clear_lrs(struct vcpu *v)
> +{
> +    int i = 0;
>
> +    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
> +                              nr_lrs, i)) < nr_lrs) {
> +
> +        spin_lock(&v->arch.vgic.lock);
> +        _gic_clear_lr(v, i);
> +        spin_unlock(&v->arch.vgic.lock);
>          i++;
>      }
>  }
>
> +void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p)
> +{
> +    _gic_clear_lr(v, p->lr);
> +}
> +
>  static void gic_restore_pending_irqs(struct vcpu *v)
>  {
>      int i;
> @@ -801,9 +814,6 @@ void gic_inject(void)
>  {
>      gic_clear_lrs(current);
>
> -    if ( vcpu_info(current, evtchn_upcall_pending) )
> -        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
> -
>      gic_restore_pending_irqs(current);
>      if (!gic_events_need_delivery())
>          gic_inject_irq_stop();
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index da15f4d..210ac39 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -387,7 +387,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
>          irq = i + (32 * n);
>          p = irq_to_pending(v, irq);
>          set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> -        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
> +        if ( irq == v->domain->arch.evtchn_irq &&
> +             vcpu_info(current, evtchn_upcall_pending) &&
> +             list_empty(&p->inflight) )
> +            vgic_vcpu_inject_irq(v, irq);
> +        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
>              gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
>          if ( p->desc != NULL )
>              p->desc->handler->enable(p->desc);
> @@ -696,10 +700,16 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
>
>      if ( !list_empty(&n->inflight) )
>      {
> -        if ( (irq != current->domain->arch.evtchn_irq) ||
> -             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
> +        if ( v == current )
> +        {
> +            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
> +            gic_set_clear_lr(v, n);
> +            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
> +            return;
> +        } else {
>              set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
> -        goto out;
> +            goto out;
> +        }
>      }
>
>      /* vcpu offline */
> diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
> index 6fce5c2..6de0d9b 100644
> --- a/xen/include/asm-arm/gic.h
> +++ b/xen/include/asm-arm/gic.h
> @@ -184,6 +184,7 @@ extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
>  extern int gic_route_irq_to_guest(struct domain *d,
>                                    const struct dt_irq *irq,
>                                    const char * devname);
> +extern void gic_set_clear_lr(struct vcpu *v, struct pending_irq *p);
>
>  /* Accept an interrupt from the GIC and dispatch its handler */
>  extern void gic_interrupt(struct cpu_user_regs *regs, int is_fiq);
> --
> 1.7.10.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--001a11c129cad50eaf04f278be88
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<p dir=3D"ltr">J oh<br>
On Feb 14, 2014 8:54 AM, &quot;Stefano Stabellini&quot; &lt;<a href=3D"mail=
to:stefano.stabellini@eu.citrix.com">stefano.stabellini@eu.citrix.com</a>&g=
t; wrote:<br>
&gt; 1.7.10.4<br>
&gt;<br>
&gt;<br>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--001a11c129cad50eaf04f278be88--


--===============7955566364708841979==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7955566364708841979==--


From xen-devel-bounces@lists.xen.org Sat Feb 15 22:18:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 22:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEnYO-00011A-I7; Sat, 15 Feb 2014 22:17:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WEnYM-000113-Lh
	for xen-devel@lists.xen.org; Sat, 15 Feb 2014 22:17:43 +0000
Received: from [193.109.254.147:21180] by server-15.bemta-14.messagelabs.com
	id F0/B2-10839-687EFF25; Sat, 15 Feb 2014 22:17:42 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392502659!841315!1
X-Originating-IP: [81.169.146.162]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n,sa_preprocessor: 
	QmFkIElQOiA4MS4xNjkuMTQ2LjE2MiA9PiA1ODg3NDY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17720 invoked from network); 15 Feb 2014 22:17:39 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.162)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 15 Feb 2014 22:17:39 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392502659; l=8878;
	s=domk; d=aepfle.de;
	h=Content-Transfer-Encoding:Content-Type:MIME-Version:Subject:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=+Q9Mhmv2gv/NN5NiT5ae4QcFZeg=;
	b=foVoJAPRSfMiGj7D1STXjuPD9FR2ODiVgzQa91yB0PJgMJV7y55B6aeCoqN4M6LO0po
	6zu30MNUvjQthqAtczj7qmgfMXf7Gje0hMKYJYBRelsnpV0ol+CUWLHC6DG05EO2xrMDs
	WfCK3C5ZWmWyZFOHKbqwWpZgCZUzkMaUKFc=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfstAo5SQMel8AXawDOUvwr8pxjMon91SD6m7JJcBA==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:1160:6501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id Q063b0q1FMHc4Tk
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate) for <xen-devel@lists.xen.org>;
	Sat, 15 Feb 2014 23:17:38 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 4EB165026A; Sat, 15 Feb 2014 23:17:38 +0100 (CET)
Date: Sat, 15 Feb 2014 23:17:38 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xen.org
Message-ID: <20140215221737.GA28254@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Subject: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


I'm not sure whether libxlu_disk_l.h is a generated file, but just once I
saw the failure below in an automated build with make -j 16. This source
tree carries the discard-enable patch, which changes
tools/libxl/libxlu_disk_l.l. As a result libxlu_disk_l.c is regenerated;
see the flex call below.

How can make be made aware of the libxlu_disk_l.h dependency?

Olaf

....
[  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_helper.h >_libxl_save_msgs_helper.h.new
[  126s] python gentypes.py libxl_types.idl __libxl_types.h __libxl_types_json.h __libxl_types.c
[  126s] python gentypes.py libxl_types_internal.idl __libxl_types_internal.h __libxl_types_internal_json.h __libxl_types_internal.c
[  126s] if ! cmp -s _libxl_list.h.new _libxl_list.h; then mv -f _libxl_list.h.new _libxl_list.h; else rm -f _libxl_list.h.new; fi
[  126s] /usr/bin/flex --header-file=libxlu_disk_l.h --outfile=libxlu_disk_l.c libxlu_disk_l.l
[  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_callout.c >_libxl_save_msgs_callout.c.new
[  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_helper.c >_libxl_save_msgs_helper.c.new
[  126s] sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp >_paths.h.2.tmp
[  126s] rm -f _paths.h.tmp
--
[  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .flexarray.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control 
-I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o flexarray.o flexarray.c 
[  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl.o libxl.c 
[  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl_create.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl_create.o libxl_create.c 
[  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl_dm.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl_dm.o libxl_dm.c 
[  126s] libxlu_pci.c:3:27: fatal error: libxlu_disk_l.h: No such file or directory
[  126s]  #include "libxlu_disk_l.h"
[  126s]                            ^
[  126s] compilation terminated.
[  126s] /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/Rules.mk:89: recipe for target 'libxlu_pci.o' failed
[  126s] make[3]: *** [libxlu_pci.o] Error 1
[  126s] make[3]: *** Waiting for unfinished jobs....
[  126s] libxlu_disk.c:3:27: fatal error: libxlu_disk_l.h: No such file or directory
[  126s]  #include "libxlu_disk_l.h"
[  126s]                            ^
[  126s] compilation terminated.
[  126s] /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/Rules.mk:89: recipe for target 'libxlu_disk.o' failed
[  126s] make[3]: *** [libxlu_disk.o] Error 1
....
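
One plausible shape for a fix (a sketch only; the rule layout and the FLEX variable are assumptions, not the actual tools/libxl Makefile) is to record that the flex step also produces the header, and to give the objects that include it an explicit dependency on it, so a parallel first build cannot compile them before the header exists:

```make
# Sketch: the flex run writes both the scanner source and its header.
# Automatic -MMD dependency files cannot help here, because on a fresh
# tree they are only written after the first successful compile.
libxlu_disk_l.c: libxlu_disk_l.l
	$(FLEX) --header-file=libxlu_disk_l.h --outfile=$@ $<

# Empty-recipe rule: the header is refreshed as a side effect of
# regenerating the .c file, so making the .c also makes the header.
libxlu_disk_l.h: libxlu_disk_l.c

# Explicit first-build dependencies for the files that include it.
libxlu_disk.o libxlu_pci.o: libxlu_disk_l.h
```

With these rules, make -j serializes libxlu_disk.o and libxlu_pci.o behind the flex invocation instead of racing it.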

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 15 23:19:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 23:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEoVe-0001TK-IY; Sat, 15 Feb 2014 23:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEoVc-0001TF-KA
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 23:18:56 +0000
Received: from [85.158.137.68:18105] by server-5.bemta-3.messagelabs.com id
	2B/EC-04712-FD5FFF25; Sat, 15 Feb 2014 23:18:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392506333!2111299!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23712 invoked from network); 15 Feb 2014 23:18:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 23:18:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,852,1384300800"; d="scan'208";a="102873661"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Feb 2014 23:18:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 18:18:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEoVX-0001AP-Mt;
	Sat, 15 Feb 2014 23:18:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEoVX-0000qe-Lr;
	Sat, 15 Feb 2014 23:18:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24910-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 23:18:51 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24910: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Sat Feb 15 23:19:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Feb 2014 23:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEoVe-0001TK-IY; Sat, 15 Feb 2014 23:18:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEoVc-0001TF-KA
	for xen-devel@lists.xensource.com; Sat, 15 Feb 2014 23:18:56 +0000
Received: from [85.158.137.68:18105] by server-5.bemta-3.messagelabs.com id
	2B/EC-04712-FD5FFF25; Sat, 15 Feb 2014 23:18:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392506333!2111299!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23712 invoked from network); 15 Feb 2014 23:18:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	15 Feb 2014 23:18:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,852,1384300800"; d="scan'208";a="102873661"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 15 Feb 2014 23:18:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 18:18:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEoVX-0001AP-Mt;
	Sat, 15 Feb 2014 23:18:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEoVX-0000qe-Lr;
	Sat, 15 Feb 2014 23:18:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24910-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 15 Feb 2014 23:18:51 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 24910: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24910 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24910/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 12557
 test-amd64-i386-xend-winxpsp3  5 xen-boot                 fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 3 host-install(3) broken blocked in 12557
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install  fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                ca033390a537dacdc2127c66d62e7862ad15ffdb
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7049 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     broken  
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2381907 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 04:21:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 04:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEtE6-0007dF-Vg; Sun, 16 Feb 2014 04:21:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEtE4-0007d7-KH
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 04:21:09 +0000
Received: from [85.158.143.35:12593] by server-2.bemta-4.messagelabs.com id
	E9/D3-10891-3BC30035; Sun, 16 Feb 2014 04:21:07 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392524465!5964292!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4794 invoked from network); 16 Feb 2014 04:21:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 04:21:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,853,1384300800"; d="scan'208";a="101138432"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Feb 2014 04:21:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 23:21:03 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEtDy-0002eT-OJ;
	Sun, 16 Feb 2014 04:21:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEtDy-00086t-M3;
	Sun, 16 Feb 2014 04:21:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24916-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 04:21:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 24916: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3638906866071401600=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3638906866071401600==
Content-Type: text/plain

flight 24916 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24916/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 24862
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24862
 build-i386-oldkern            4 xen-build        fail in 24901 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-multivcpu  3 host-install(3)           broken pass in 24901
 test-amd64-i386-rhel6hvm-intel  3 host-install(3)         broken pass in 24901
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 24901
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)         broken pass in 24901
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24901 pass in 24916
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24901 pass in 24882
 test-amd64-amd64-xl-sedf     10 guest-saverestore  fail in 24882 pass in 24916

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24882 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case - which means the IO-backend
    device (QEMU) may not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. In the case that the
    PVH guest does run an HVM guest inside it - we need to do
    further work to support this - and for now the check will
    bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When enabling log dirty mode, all of the guest's memory is set to
    readonly. In a HAP-enabled domain, this clears the write bit in every
    EPT entry to make sure the memory is readonly. This causes a problem
    if VT-d shares page tables with EPT: a device may issue a DMA write
    request, the VT-d engine will find the target memory readonly, and a
    VT-d fault results.
    
    Currently, two places enable log dirty mode: migration and vram
    tracking. Migration with an assigned device is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    readonly; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop0 ' (with a trailing space) does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============3638906866071401600==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3638906866071401600==--

 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24882 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               broken  
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) may not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. If a PVH guest does run an
    HVM guest inside it, further work will be needed to support that;
    for now the check bails us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    Enabling log dirty mode sets all of the guest's memory to readonly.
    In a HAP-enabled domain, this clears the write bit in all EPT entries
    to enforce the readonly mapping. That causes a problem if VT-d shares
    page tables with EPT: a device may issue a DMA write request, the
    VT-d engine finds the target memory readonly, and the result is a
    VT-d fault.
    
    Currently, there are two places that enable log dirty mode: migration
    and vram tracking. Migration with a device assigned is not allowed,
    so that case is fine. For vram, there is no need to set all memory to
    readonly; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>
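    [Editor's note: a minimal sketch, not part of the commit, of how a
    va_start-style macro can be defined directly in terms of the GCC
    builtin, as a private stdarg header would do for GCC >= 4.5. The
    demo_* names are hypothetical.]

    ```c
    #include <assert.h>

    /* Wrap the compiler builtins the way a self-contained stdarg.h would. */
    typedef __builtin_va_list demo_va_list;
    #define demo_va_start(ap, last) __builtin_va_start(ap, last)
    #define demo_va_arg(ap, type)   __builtin_va_arg(ap, type)
    #define demo_va_end(ap)         __builtin_va_end(ap)

    /* Sum 'count' ints passed as variadic arguments. */
    static int sum_ints(int count, ...)
    {
        demo_va_list ap;
        int total = 0;

        demo_va_start(ap, count);
        for (int i = 0; i < count; i++)
            total += demo_va_arg(ap, int);
        demo_va_end(ap);
        return total;
    }

    int main(void)
    {
        assert(sum_ints(3, 1, 2, 3) == 6);
        assert(sum_ints(0) == 0);
        return 0;
    }
    ```

    The now-removed __builtin_stdarg_start() was simply an older spelling
    of __builtin_va_start(), which is why only the latter works on modern
    GCC.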

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop0 ' (with the trailing whitespace) does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
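    [Editor's note: a minimal sketch, not part of the commit, showing why
    a displacement above 4GB must live in a 64-bit variable; the value
    chosen is hypothetical.]

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* Hypothetical VF memory-space displacement larger than 4GB. */
        uint64_t offset = 0x180000000ULL;      /* 6 GB */

        /* What a 32-bit field would retain: the high bits are lost. */
        uint32_t truncated = (uint32_t)offset;

        assert(truncated == 0x80000000U);       /* silently wrapped */
        assert(offset != (uint64_t)truncated);  /* 64-bit storage keeps it */
        return 0;
    }
    ```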

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============3638906866071401600==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3638906866071401600==--

From xen-devel-bounces@lists.xen.org Sun Feb 16 04:45:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 04:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEtbY-0007sf-C2; Sun, 16 Feb 2014 04:45:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEtbR-0007sV-II
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 04:45:17 +0000
Received: from [85.158.137.68:28507] by server-5.bemta-3.messagelabs.com id
	22/40-04712-05240035; Sun, 16 Feb 2014 04:45:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392525902!2138881!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26017 invoked from network); 16 Feb 2014 04:45:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 04:45:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,853,1384300800"; d="scan'208";a="101140707"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Feb 2014 04:45:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 23:45:01 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEtbA-0002mF-QD;
	Sun, 16 Feb 2014 04:45:00 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEtb9-0004Ys-Po;
	Sun, 16 Feb 2014 04:45:00 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24973-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 04:44:59 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 24973: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24973 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24973/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 build-i386                    2 host-install(2)         broken REGR. vs. 24865
 test-amd64-i386-pv            9 guest-start      fail in 24904 REGR. vs. 24865
 test-i386-i386-pv             9 guest-start      fail in 24904 REGR. vs. 24865
 build-amd64-oldkern           4 xen-build        fail in 24904 REGR. vs. 24865
 test-amd64-i386-xend-winxpsp3  7 windows-install fail in 24904 REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      3 host-install(3)           broken pass in 24904
 test-amd64-i386-xl-credit2 14 guest-localmigrate/x10 fail in 24904 pass in 24887
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24904 pass in 24887
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24887 pass in 24973

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3 7 windows-install fail in 24904 like 24917-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24904 never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop        fail in 24904 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24904 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24904 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24904 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24904 never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop        fail in 24904 never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop            fail in 24904 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24887 never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 04:58:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 04:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WEtoK-000861-7T; Sun, 16 Feb 2014 04:58:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WEtoH-00085w-Ui
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 04:58:34 +0000
Received: from [85.158.139.211:58281] by server-5.bemta-5.messagelabs.com id
	31/CB-32749-97540035; Sun, 16 Feb 2014 04:58:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392526710!4178686!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5533 invoked from network); 16 Feb 2014 04:58:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 04:58:32 -0000
X-IronPort-AV: E=Sophos;i="4.95,853,1384300800"; d="scan'208";a="102909415"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 04:58:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sat, 15 Feb 2014 23:58:29 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WEtoD-0002qW-Cu;
	Sun, 16 Feb 2014 04:58:29 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WEtoD-0007y8-CG;
	Sun, 16 Feb 2014 04:58:29 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-24925-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 04:58:29 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 24925: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 24925 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/24925/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 24863
 build-i386                    4 xen-build        fail in 24905 REGR. vs. 24863
 build-amd64-oldkern           4 xen-build        fail in 24905 REGR. vs. 24863

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 24905
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3)   broken pass in 24888
 test-amd64-amd64-xl-win7-amd64  7 windows-install           fail pass in 24905

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24905 never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24888 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24888 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24888 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24888 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24888 never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 12:46:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 12:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF170-00033H-Sh; Sun, 16 Feb 2014 12:46:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WF16z-00033C-9u
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 12:46:21 +0000
Received: from [85.158.139.211:49558] by server-10.bemta-5.messagelabs.com id
	BA/2E-08578-C13B0035; Sun, 16 Feb 2014 12:46:20 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392554777!4233294!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29971 invoked from network); 16 Feb 2014 12:46:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 12:46:19 -0000
X-IronPort-AV: E=Sophos;i="4.95,855,1384300800"; d="scan'208";a="102957931"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 12:46:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 07:46:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WF16t-0005EA-O6;
	Sun, 16 Feb 2014 12:46:15 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WF16t-0003qw-NY;
	Sun, 16 Feb 2014 12:46:15 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25009-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 12:46:15 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25009: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9075557560751348380=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9075557560751348380==
Content-Type: text/plain

flight 25009 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25009/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 24862
 build-amd64-oldkern           4 xen-build        fail in 24916 REGR. vs. 24862
 build-i386-oldkern            4 xen-build        fail in 24901 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pv            3 host-install(3)           broken pass in 24916
 test-amd64-i386-xl-multivcpu  3 host-install(3)           broken pass in 24901
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 24916
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10      fail pass in 24916
 test-amd64-i386-freebsd10-i386  3 host-install(3)         broken pass in 24916
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 24901
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)         broken pass in 24901
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 24916 pass in 25009
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 24901 pass in 25009
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24901 pass in 24882
 test-amd64-amd64-xl-sedf     10 guest-saverestore  fail in 24882 pass in 25009

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24882 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           broken  
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) may not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. If the PVH guest does run an
    HVM guest inside it, further work is needed to support that; for
    now the check bails us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When log-dirty mode is enabled, all of the guest's memory is set to
    read-only. In a HAP-enabled domain this is done by clearing the write
    bit in every EPT entry. That causes a problem if VT-d shares the page
    table with EPT: a device may issue a DMA write request, the VT-d
    engine finds the target memory read-only, and a VT-d fault results.
    
    Currently, two places enable log-dirty mode: migration and vram
    tracking. Migration with a device assigned is not allowed, so that
    case is fine. For vram there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop0 ' (with the trailing space) does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============9075557560751348380==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9075557560751348380==--

From xen-devel-bounces@lists.xen.org Sun Feb 16 12:56:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 12:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF1Gg-0003HG-Vy; Sun, 16 Feb 2014 12:56:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WF1Gf-0003H8-Fe
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 12:56:21 +0000
Received: from [85.158.143.35:64360] by server-1.bemta-4.messagelabs.com id
	13/86-31661-475B0035; Sun, 16 Feb 2014 12:56:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392555379!6002888!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12164 invoked from network); 16 Feb 2014 12:56:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 12:56:20 -0000
X-IronPort-AV: E=Sophos;i="4.95,855,1384300800"; d="scan'208";a="102958795"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 12:56:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 16 Feb 2014 07:56:17 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WF1Gb-00063Y-0l;
	Sun, 16 Feb 2014 12:56:17 +0000
Date: Sun, 16 Feb 2014 12:56:14 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Francesco Gringoli <francesco.gringoli@ing.unibs.it>
In-Reply-To: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
Message-ID: <alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	John Johnson <lausgans@gmail.com>, Chen Baozi <baozich@gmail.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Feb 2014, Francesco Gringoli wrote:
> Hello guys,
> 
> "Building" on the great work done by Anthony, I was finally able to boot Xen and dom0 on the Chromebook with display support.
> 
> Basically everything was already done by Anthony; the only problems were the dom0 .config file missing some options, and a couple of files in arch/arm/mach-exynos which were crashing the boot process (by reading the dtb). I was not able to address the problem in a clever way, but I fixed it so that boot (almost) never crashes anymore.
> 
> What I get is dom0 booting and working; boot logs appear on the display just as when running Arch Linux natively. The built-in keyboard does not work, as pointed out by Anthony, but an external one does, so it is possible to log in and check that /proc/xen is populated. There are two main issues:
> 
> 1) sometimes (very rarely) boot crashes, but late, e.g., after 2 secs
> 2) after a few minutes something weird happens, and it is no longer possible to cat files' contents, although ls, cd, etc. keep working.
> 
> I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook), but it seems I cannot edit it.

Great job! Please do request access and update the wiki.

Anthony's branches are quite old now; many bugs have been discovered and
fixed in the upstream Xen and Linux trees in the meantime.
Although I expect that updating the Xen tree could be difficult at this
point, it is probably worth it from a stability point of view.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 13:38:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 13:38:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF1vM-0003eY-IZ; Sun, 16 Feb 2014 13:38:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WF1vL-0003eT-G4
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 13:38:23 +0000
Received: from [85.158.137.68:40202] by server-12.bemta-3.messagelabs.com id
	2E/37-01674-E4FB0035; Sun, 16 Feb 2014 13:38:22 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392557900!2149683!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18847 invoked from network); 16 Feb 2014 13:38:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 13:38:21 -0000
X-IronPort-AV: E=Sophos;i="4.95,855,1384300800"; d="scan'208";a="102964005"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 13:38:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 08:38:19 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WF1vG-0005UQ-Uk;
	Sun, 16 Feb 2014 13:38:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WF1vG-0001BG-Pl;
	Sun, 16 Feb 2014 13:38:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25013-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 13:38:18 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 25013: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25013 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25013/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 24863

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      3 host-install(3)           broken pass in 24925
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3)   broken pass in 24888
 test-amd64-amd64-pv           3 host-install(3)  broken in 24925 pass in 25013
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24925 pass in 25013

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24888 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24888 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24888 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24888 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24888 never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 14:45:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 14:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF2xd-000486-G2; Sun, 16 Feb 2014 14:44:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF2xb-000481-OY
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 14:44:48 +0000
Received: from [85.158.139.211:38666] by server-11.bemta-5.messagelabs.com id
	C7/F4-23886-EDEC0035; Sun, 16 Feb 2014 14:44:46 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392561885!4204397!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8190 invoked from network); 16 Feb 2014 14:44:45 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-206.messagelabs.com with SMTP;
	16 Feb 2014 14:44:45 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 16 Feb 2014 06:40:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; d="scan'208";a="456395850"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 16 Feb 2014 06:44:43 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:44:42 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 22:44:41 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Christoph Egger <chegger@amazon.de>
Thread-Topic: [Xen-devel] [PATCH v2] MCE: Fix race condition in mctelem_reserve
Thread-Index: AQHPKaVqumHF6v9Bb0uaDG3eKZFC5Zq394gg
Date: Sun, 16 Feb 2014 14:44:40 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F11BB@SHSMSX101.ccr.corp.intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<1391175434.11034.2.camel@hamster.uk.xensource.com>
	<1392394986.11369.1.camel@hamster.uk.xensource.com>
	<52FE583F020000780011C861@nat28.tlf.novell.com>
In-Reply-To: <52FE583F020000780011C861@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Frediano Ziglio <frediano.ziglio@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry, I just returned from the long Chinese Spring Festival vacation. I will review it ASAP.

Thanks,
Jinsong

Jan Beulich wrote:
>>>> On 14.02.14 at 17:23, Frediano Ziglio <frediano.ziglio@citrix.com>
>>>> wrote:
>>>> Ping
> 
> And this is clearly not the first ping. Guys - you had over 3 weeks
> time to respond to this. Please!
> 
> Jan
> 
>> On Fri, 2014-01-31 at 13:37 +0000, Frediano Ziglio wrote:
>>> Ping
>>> 
>>> On Wed, 2014-01-22 at 17:17 +0000, Frediano Ziglio wrote:
>>>> From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
>>>> From: Frediano Ziglio <frediano.ziglio@citrix.com>
>>>> Date: Wed, 22 Jan 2014 10:48:50 +0000
>>>> Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
>>>> MIME-Version: 1.0
>>>> Content-Type: text/plain; charset=UTF-8
>>>> Content-Transfer-Encoding: 8bit
>>>> 
>>>> These lines (in mctelem_reserve)
>>>> 
>>>>         newhead = oldhead->mcte_next;
>>>>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>>> 
>>>> are racy. After you read the newhead pointer, another flow (a
>>>> thread or a recursive invocation) can change the whole list yet
>>>> leave the head with the same value. oldhead then still equals
>>>> *freelp, so the cmpxchg succeeds, but the newhead you install may
>>>> point to any element, even one already in use (the classic ABA
>>>> problem).
>>>> 
>>>> This patch instead uses a bit array and atomic bit operations.
>>>> 
>>>> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>>>> ---
>>>>  xen/arch/x86/cpu/mcheck/mctelem.c |   81 ++++++++++++++-----------------------
>>>>  1 file changed, 30 insertions(+), 51 deletions(-)
>>>> 
>>>> Changes from v1:
>>>> - Use a bitmap so any number of items can be used;
>>>> - Use a single bitmap to simplify the reserve loop;
>>>> - Remove the HOME flags, as they are no longer used.
>>>> 
>>>> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
>>>> index 895ce1a..ed8e8d2 100644
>>>> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>>>> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>>>> @@ -37,24 +37,19 @@ struct mctelem_ent {
>>>>  	void *mcte_data;		/* corresponding data payload */
>>>>  };
>>>> 
>>>> -#define	MCTE_F_HOME_URGENT		0x0001U	/* free to urgent freelist */
>>>> -#define	MCTE_F_HOME_NONURGENT		0x0002U /* free to nonurgent freelist */
>>>> -#define	MCTE_F_CLASS_URGENT		0x0004U /* in use - urgent errors */
>>>> -#define	MCTE_F_CLASS_NONURGENT		0x0008U /* in use - nonurgent errors */
>>>> +#define	MCTE_F_CLASS_URGENT		0x0001U /* in use - urgent errors */
>>>> +#define	MCTE_F_CLASS_NONURGENT		0x0002U /* in use - nonurgent errors */
>>>>  #define	MCTE_F_STATE_FREE		0x0010U	/* on a freelist */
>>>>  #define	MCTE_F_STATE_UNCOMMITTED	0x0020U	/* reserved; on no list */
>>>>  #define	MCTE_F_STATE_COMMITTED		0x0040U	/* on a committed list */
>>>>  #define	MCTE_F_STATE_PROCESSING		0x0080U	/* on a processing list */
>>>> 
>>>> -#define	MCTE_F_MASK_HOME	(MCTE_F_HOME_URGENT | MCTE_F_HOME_NONURGENT)
>>>>  #define	MCTE_F_MASK_CLASS	(MCTE_F_CLASS_URGENT | MCTE_F_CLASS_NONURGENT)
>>>>  #define	MCTE_F_MASK_STATE	(MCTE_F_STATE_FREE | \
>>>>  				MCTE_F_STATE_UNCOMMITTED | \
>>>>  				MCTE_F_STATE_COMMITTED | \
>>>>  				MCTE_F_STATE_PROCESSING)
>>>> 
>>>> -#define	MCTE_HOME(tep) ((tep)->mcte_flags & MCTE_F_MASK_HOME)
>>>> -
>>>>  #define	MCTE_CLASS(tep) ((tep)->mcte_flags & MCTE_F_MASK_CLASS)
>>>>  #define	MCTE_SET_CLASS(tep, new) do { \
>>>>      (tep)->mcte_flags &= ~MCTE_F_MASK_CLASS; \
>>>> @@ -69,6 +64,8 @@ struct mctelem_ent {
>>>>  #define	MC_URGENT_NENT		10
>>>>  #define	MC_NONURGENT_NENT	20
>>>> 
>>>> +#define MC_NENT (MC_URGENT_NENT + MC_NONURGENT_NENT)
>>>> +
>>>>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>>>> 
>>>>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>>>> @@ -77,11 +74,9 @@ struct mctelem_ent {
>>>>  static struct mc_telem_ctl {
>>>>  	/* Linked lists that thread the array members together.
>>>>  	 *
>>>> -	 * The free lists are singly-linked via mcte_next, and we allocate
>>>> -	 * from them by atomically unlinking an element from the head.
>>>> -	 * Consumed entries are returned to the head of the free list.
>>>> -	 * When an entry is reserved off the free list it is not linked
>>>> -	 * on any list until it is committed or dismissed.
>>>> +	 * The free list is a bit array where a set bit means free.
>>>> +	 * As the number of elements is quite small, it is easy to
>>>> +	 * allocate atomically this way.
>>>>  	 *
>>>>  	 * The committed list grows at the head and we do not maintain a
>>>>  	 * tail pointer; insertions are performed atomically.  The head
>>>> @@ -101,7 +96,7 @@ static struct mc_telem_ctl {
>>>>  	 * we can lock it for updates.  The head of the processing list
>>>>  	 * always has the oldest telemetry, and we append (as above)
>>>>  	 * at the tail of the processing list. */
>>>> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
>>>> +	DECLARE_BITMAP(mctc_free, MC_NENT);
>>>>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>>>>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>>>>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
>>>> @@ -207,14 +202,14 @@ int mctelem_has_deferred(unsigned int cpu)
>>>>   */
>>>>  static void mctelem_free(struct mctelem_ent *tep)
>>>>  {
>>>> -	mctelem_class_t target = MCTE_HOME(tep) == MCTE_F_HOME_URGENT ?
>>>> -	    MC_URGENT : MC_NONURGENT;
>>>> -
>>>>  	BUG_ON(tep->mcte_refcnt != 0);
>>>>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>>>> 
>>>>  	tep->mcte_prev = NULL;
>>>> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
>>>> +	tep->mcte_next = NULL;
>>>> +
>>>> +	/* set free in array */
>>>> +	set_bit(tep - mctctl.mctc_elems, mctctl.mctc_free);
>>>>  }
>>>> 
>>>>  /* Increment the reference count of an entry that is not linked on to
>>>> @@ -274,34 +269,25 @@ void mctelem_init(int reqdatasz)
>>>>  	}
>>>> 
>>>>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
>>>> -	    MC_URGENT_NENT + MC_NONURGENT_NENT)) == NULL ||
>>>> -	    (datarr = xmalloc_bytes((MC_URGENT_NENT + MC_NONURGENT_NENT) *
>>>> -	    datasz)) == NULL) {
>>>> +	    MC_NENT)) == NULL ||
>>>> +	    (datarr = xmalloc_bytes(MC_NENT * datasz)) == NULL) {
>>>>  		if (mctctl.mctc_elems)
>>>>  			xfree(mctctl.mctc_elems);
>>>>  		printk("Allocations for MCA telemetry failed\n");
>>>>  		return;
>>>>  	}
>>>> 
>>>> -	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
>>>> -		struct mctelem_ent *tep, **tepp;
>>>> +	for (i = 0; i < MC_NENT; i++) {
>>>> +		struct mctelem_ent *tep;
>>>> 
>>>>  		tep = mctctl.mctc_elems + i;
>>>>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>>>>  		tep->mcte_refcnt = 0;
>>>>  		tep->mcte_data = datarr + i * datasz;
>>>> 
>>>> -		if (i < MC_URGENT_NENT) {
>>>> -			tepp = &mctctl.mctc_free[MC_URGENT];
>>>> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
>>>> -		} else {
>>>> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
>>>> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
>>>> -		}
>>>> -
>>>> -		tep->mcte_next = *tepp;
>>>> +		__set_bit(i, mctctl.mctc_free);
>>>> +		tep->mcte_next = NULL;
>>>>  		tep->mcte_prev = NULL;
>>>> -		*tepp = tep;
>>>>  	}
>>>>  }
>>>> 
>>>> @@ -310,32 +296,25 @@ static int mctelem_drop_count;
>>>> 
>>>>  /* Reserve a telemetry entry, or return NULL if none available.
>>>>   * If we return an entry then the caller must subsequently call exactly one of
>>>> - * mctelem_unreserve or mctelem_commit for that entry.
>>>> + * mctelem_dismiss or mctelem_commit for that entry.
>>>>   */
>>>>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>>>>  {
>>>> -	struct mctelem_ent **freelp;
>>>> -	struct mctelem_ent *oldhead, *newhead;
>>>> -	mctelem_class_t target = (which == MC_URGENT) ?
>>>> -	    MC_URGENT : MC_NONURGENT;
>>>> +	unsigned bit;
>>>> +	unsigned start_bit = (which == MC_URGENT) ? 0 : MC_URGENT_NENT;
>>>> 
>>>> -	freelp = &mctctl.mctc_free[target];
>>>>  	for (;;) {
>>>> -		if ((oldhead = *freelp) == NULL) {
>>>> -			if (which == MC_URGENT && target == MC_URGENT) {
>>>> -				/* raid the non-urgent freelist */
>>>> -				target = MC_NONURGENT;
>>>> -				freelp = &mctctl.mctc_free[target];
>>>> -				continue;
>>>> -			} else {
>>>> -				mctelem_drop_count++;
>>>> -				return (NULL);
>>>> -			}
>>>> +		bit = find_next_bit(mctctl.mctc_free, MC_NENT, start_bit);
>>>> +
>>>> +		if (bit >= MC_NENT) {
>>>> +			mctelem_drop_count++;
>>>> +			return (NULL);
>>>>  		}
>>>> 
>>>> -		newhead = oldhead->mcte_next;
>>>> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>>> -			struct mctelem_ent *tep = oldhead;
>>>> +		/* try to allocate, atomically clear free bit */
>>>> +		if (test_and_clear_bit(bit, mctctl.mctc_free)) {
>>>> +			/* return element we got */
>>>> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>>>> 
>>>>  			mctelem_hold(tep);
>>>>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
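The scheme in the quoted patch can be sketched as a standalone model. This is not the Xen code itself: `NENT` mirrors `MC_NENT` (10 urgent + 20 non-urgent entries), C11 atomics stand in for Xen's `find_next_bit()`/`test_and_clear_bit()`/`set_bit()`, and the function names are ours. The key property is that the claim is a single atomic read-modify-write on one bit, so two flows can never both win the same slot, unlike the old cmpxchg head-pop where the head pointer could be recycled between the read and the swap.

```c
#include <assert.h>
#include <stdatomic.h>

#define NENT 30                 /* MC_URGENT_NENT + MC_NONURGENT_NENT */

static atomic_ulong free_map;   /* one word is enough for 30 entries */

static void init_free_map(void)
{
    atomic_store(&free_map, (1UL << NENT) - 1); /* every entry starts free */
}

/* Reserve one entry at or above start_bit, or -1 if none are free. */
static int reserve_entry(unsigned start_bit)
{
    for (;;) {
        unsigned long map = atomic_load(&free_map);
        unsigned bit;

        for (bit = start_bit; bit < NENT; bit++)    /* find_next_bit() */
            if (map & (1UL << bit))
                break;
        if (bit >= NENT)
            return -1;                              /* drop: nothing free */

        /* test_and_clear_bit(): claim the slot only if it is still free */
        unsigned long old = atomic_fetch_and(&free_map, ~(1UL << bit));
        if (old & (1UL << bit))
            return (int)bit;                        /* the slot is ours */
        /* another flow took this bit first; rescan */
    }
}

static void free_entry(int bit)
{
    atomic_fetch_or(&free_map, 1UL << bit);         /* set_bit() */
}
```

An urgent caller would pass `start_bit` 0 (and can thus raid the non-urgent range too), a non-urgent caller would pass `MC_URGENT_NENT`, matching the `start_bit` computation in the patched `mctelem_reserve()`.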


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 14:47:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 14:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF2zz-0004Dg-6s; Sun, 16 Feb 2014 14:47:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF2zy-0004DZ-BQ
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 14:47:14 +0000
Received: from [85.158.137.68:25686] by server-11.bemta-3.messagelabs.com id
	07/DE-04255-17FC0035; Sun, 16 Feb 2014 14:47:13 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392562032!2189508!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjU3MDM5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22345 invoked from network); 16 Feb 2014 14:47:12 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-14.tower-31.messagelabs.com with SMTP;
	16 Feb 2014 14:47:12 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga102.ch.intel.com with ESMTP; 16 Feb 2014 06:47:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; d="scan'208";a="476095618"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 16 Feb 2014 06:47:10 -0800
Received: from fmsmsx155.amr.corp.intel.com (10.18.116.71) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:47:10 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX155.amr.corp.intel.com (10.18.116.71) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:47:10 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 22:47:07 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "chegger@amazon.de" <chegger@amazon.de>, 
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Thread-Topic: [PATCH V4.1] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD  thresolding MSRs
Thread-Index: AQHPKV7YrpjR5j3ySEeyNVNl8C2Pk5q3+Jgg
Date: Sun, 16 Feb 2014 14:47:06 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F11C8@SHSMSX101.ccr.corp.intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52FC927D020000780011BF0A@nat28.tlf.novell.com>
	<52FD0079.8050601@amd.com> <52FD0DCC.1030904@amd.com>
	<52FDE1CE020000780011C575@nat28.tlf.novell.com>
In-Reply-To: <52FDE1CE020000780011C575@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry, Jan and Aravind, I just returned from the long Chinese Spring Festival vacation. I will review the thread ASAP.

Thanks,
Jinsong


Jan Beulich wrote:
>>>> On 13.02.14 at 19:24, Aravind Gopalakrishnan
>>>> <aravind.gopalakrishnan@amd.com> wrote:
>> On 2/13/2014 11:27 AM, Aravind Gopalakrishnan wrote:
>>> On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>>>>     *val = 0;
>>>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>>> +    /* Allow only first 3 MC banks into switch() */
>>>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>>>       {
>>>>>       case MSR_IA32_MC0_CTL:
>>>>>           /* stick all 1's to MCi_CTL */
>>>> I'm confused: You now add a comment as if the mask was including
>>>> bit 4, which it doesn't. What am I missing?
>>> 
>>> Darn. Sorry about that. Will fix..
>> 
>> Jan,
>> 
>> Do let me know if the following wording is fine:
>> 
>> /*
>>   * Apply mask to allow bits[0:1] (necessary to uniquely identify
>> MC0) 
>>   * MC1 is handled by virtue of 'bank' value.
>>   */
>> 
>> If not, I'm open to suggestions:)
> 
> I don't particularly like this, but I also don't have a good
> alternative suggestion. It was Christoph who asked for a comment in
> the first place. Since I don't see a particular need for a comment
> here, you two should work out what best suits both of you.
> 
> Jan
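For reference on the mask under discussion: bank i's registers live at MSR 0x400 + 4*i + {0..3} (CTL/STATUS/ADDR/MISC). Masking with `(-MSR_IA32_MC0_CTL | 3)` keeps the 0x400 base and the two low bits (which register within a bank) while discarding the bank-index bits, so every bank's register folds onto the corresponding MC0_* switch case, with the bank number handled separately via 'bank'. A small sketch of the arithmetic (the helper name is ours, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_IA32_MC0_CTL 0x400u   /* architectural MCA bank 0 base MSR */

/* -0x400u | 3 == 0xfffffc03: clears the bank-index bits (bits 2..9)
 * and preserves the 0x400 base plus the 2-bit register selector. */
static uint32_t fold_to_bank0(uint32_t msr)
{
    return msr & (-MSR_IA32_MC0_CTL | 3);
}
```

So in the quoted hunk, any bank's CTL lands on the `MSR_IA32_MC0_CTL` case, any STATUS on MC0_STATUS, and so on.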


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 14:47:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 14:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF2zz-0004Dg-6s; Sun, 16 Feb 2014 14:47:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF2zy-0004DZ-BQ
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 14:47:14 +0000
Received: from [85.158.137.68:25686] by server-11.bemta-3.messagelabs.com id
	07/DE-04255-17FC0035; Sun, 16 Feb 2014 14:47:13 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392562032!2189508!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjU3MDM5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22345 invoked from network); 16 Feb 2014 14:47:12 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-14.tower-31.messagelabs.com with SMTP;
	16 Feb 2014 14:47:12 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga102.ch.intel.com with ESMTP; 16 Feb 2014 06:47:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; d="scan'208";a="476095618"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 16 Feb 2014 06:47:10 -0800
Received: from fmsmsx155.amr.corp.intel.com (10.18.116.71) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:47:10 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX155.amr.corp.intel.com (10.18.116.71) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:47:10 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 22:47:07 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "chegger@amazon.de" <chegger@amazon.de>, 
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Thread-Topic: [PATCH V4.1] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD  thresolding MSRs
Thread-Index: AQHPKV7YrpjR5j3ySEeyNVNl8C2Pk5q3+Jgg
Date: Sun, 16 Feb 2014 14:47:06 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F11C8@SHSMSX101.ccr.corp.intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<52FC927D020000780011BF0A@nat28.tlf.novell.com>
	<52FD0079.8050601@amd.com> <52FD0DCC.1030904@amd.com>
	<52FDE1CE020000780011C575@nat28.tlf.novell.com>
In-Reply-To: <52FDE1CE020000780011C575@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Sorry, Jan and Aravind, I have just returned from the long Chinese Spring Festival vacation. I will review the thread ASAP.

Thanks,
Jinsong


Jan Beulich wrote:
>>>> On 13.02.14 at 19:24, Aravind Gopalakrishnan
>>>> <aravind.gopalakrishnan@amd.com> wrote:
>> On 2/13/2014 11:27 AM, Aravind Gopalakrishnan wrote:
>>> On 2/13/2014 2:38 AM, Jan Beulich wrote:
>>>>>     *val = 0;
>>>>>   -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>>> +    /* Allow only first 3 MC banks into switch() */
>>>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>>>       {
>>>>>       case MSR_IA32_MC0_CTL:
>>>>>           /* stick all 1's to MCi_CTL */
>>>> I'm confused: You now add a comment as if the mask was including
>>>> bit 4, which it doesn't. What am I missing?
>>> 
>>> Darn. Sorry about that. Will fix..
>> 
>> Jan,
>> 
>> Do let me know if the following wording is fine:
>> 
>> /*
>>   * Apply mask to allow bits[0:1] (necessary to uniquely identify MC0)
>>   * MC1 is handled by virtue of 'bank' value.
>>   */
>> 
>> If not, I'm open to suggestions:)
> 
> I don't particularly like this, but I also don't have a good
> alternative suggestion. It was Christoph who asked for a comment in
> the first place. Since I don't see a particular need for a comment
> here, you two should work out what best suits both of you.
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 14:57:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 14:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF39U-0004Qd-B6; Sun, 16 Feb 2014 14:57:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF39S-0004QY-RZ
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 14:57:02 +0000
Received: from [85.158.143.35:15148] by server-1.bemta-4.messagelabs.com id
	44/F5-31661-EB1D0035; Sun, 16 Feb 2014 14:57:02 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392562618!6037043!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29415 invoked from network); 16 Feb 2014 14:56:59 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-16.tower-21.messagelabs.com with SMTP;
	16 Feb 2014 14:56:59 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 16 Feb 2014 06:56:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; d="scan'208";a="484269866"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 16 Feb 2014 06:56:47 -0800
Received: from fmsmsx115.amr.corp.intel.com (10.18.116.19) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:56:47 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	fmsmsx115.amr.corp.intel.com (10.18.116.19) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:56:47 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 22:56:44 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "Ian.Campbell@citrix.com"
	<Ian.Campbell@citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH v1 0/2] expose RDSEED, ADX, and PREFETCHW to guest
Thread-Index: Ac8rJ08YepYOMstaRrSBCNgNjgJ/XA==
Date: Sun, 16 Feb 2014 14:56:44 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F11FB@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "haoxudong.hao@gmail.com" <haoxudong.hao@gmail.com>
Subject: [Xen-devel] [PATCH v1 0/2] expose RDSEED, ADX,
	and PREFETCHW to guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Intel recently released several new CPU features, including RDSEED, ADX, and PREFETCHW.
This patch series exposes these new features to guests.

Patch 1/2: expose RDSEED, ADX, and PREFETCHW to PV and HVM guests.
Patch 2/2: expose RDSEED, ADX, and PREFETCHW to dom0.

These patches were written by my former colleague, Xudong.
I am now helping to push them into Xen upstream.

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 14:59:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 14:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF3Bq-0004Y2-T4; Sun, 16 Feb 2014 14:59:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF3Bo-0004Xx-Tz
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 14:59:29 +0000
Received: from [85.158.137.68:24670] by server-9.bemta-3.messagelabs.com id
	6D/BD-10184-052D0035; Sun, 16 Feb 2014 14:59:28 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392562766!920959!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 418 invoked from network); 16 Feb 2014 14:59:27 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-31.messagelabs.com with SMTP;
	16 Feb 2014 14:59:27 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 16 Feb 2014 06:59:26 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; 
	d="scan'208,223";a="484270233"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga002.jf.intel.com with ESMTP; 16 Feb 2014 06:59:25 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:59:25 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 06:59:25 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 22:59:22 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: "Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: [PATCH v1 1/2] X86: expose RDSEED, ADX, and PREFETCHW to pv/hvm
Thread-Index: Ac8rJ60ljxIQKX4WQ0qQESBcdaclTg==
Date: Sun, 16 Feb 2014 14:59:21 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F120A@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "haoxudong.hao@gmail.com" <haoxudong.hao@gmail.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v1 1/2] X86: expose RDSEED, ADX,
	and PREFETCHW to pv/hvm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>From 2fe617b6e7350ee1d635b0365914aa0d31f4fadd Mon Sep 17 00:00:00 2001
From: Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 13 Feb 2014 21:05:01 +0800
Subject: [PATCH 1/2] X86: expose RDSEED, ADX, and PREFETCHW to pv/hvm

Intel recently released several new CPU features, including RDSEED, ADX, and PREFETCHW.
This patch exposes these new features to PV and HVM guests.

Signed-off-by: Xudong Hao <xudong.hao@intel.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
---
 tools/libxc/xc_cpufeature.h |    3 +++
 tools/libxc/xc_cpuid_x86.c  |    5 +++++
 2 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/tools/libxc/xc_cpufeature.h b/tools/libxc/xc_cpufeature.h
index c464e3a..09b2c82 100644
--- a/tools/libxc/xc_cpufeature.h
+++ b/tools/libxc/xc_cpufeature.h
@@ -137,5 +137,8 @@
 #define X86_FEATURE_ERMS         9 /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID     10 /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM         11 /* Restricted Transactional Memory */
+#define X86_FEATURE_RDSEED      18 /* RDSEED instruction */
+#define X86_FEATURE_ADX         19 /* ADCX, ADOX instructions */
+
 
 #endif /* __LIBXC_CPUFEATURE_H */
diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
index bbbf9b8..9264039 100644
--- a/tools/libxc/xc_cpuid_x86.c
+++ b/tools/libxc/xc_cpuid_x86.c
@@ -197,6 +197,7 @@ static void intel_xc_cpuid_policy(
 
         /* Only a few features are advertised in Intel's 0x80000001. */
         regs[2] &= (is_64bit ? bitmaskof(X86_FEATURE_LAHF_LM) : 0) |
+                               bitmaskof(X86_FEATURE_3DNOWPREFETCH) |
                                bitmaskof(X86_FEATURE_ABM);
         regs[3] &= ((is_pae ? bitmaskof(X86_FEATURE_NX) : 0) |
                     (is_64bit ? bitmaskof(X86_FEATURE_LM) : 0) |
@@ -371,6 +372,8 @@ static void xc_cpuid_hvm_policy(
                         bitmaskof(X86_FEATURE_ERMS) |
                         bitmaskof(X86_FEATURE_INVPCID) |
                         bitmaskof(X86_FEATURE_RTM)  |
+                        bitmaskof(X86_FEATURE_RDSEED)  |
+                        bitmaskof(X86_FEATURE_ADX)  |
                         bitmaskof(X86_FEATURE_FSGSBASE));
         } else
             regs[1] = 0;
@@ -502,6 +505,8 @@ static void xc_cpuid_pv_policy(
                         bitmaskof(X86_FEATURE_BMI2) |
                         bitmaskof(X86_FEATURE_ERMS) |
                         bitmaskof(X86_FEATURE_RTM)  |
+                        bitmaskof(X86_FEATURE_RDSEED)  |
+                        bitmaskof(X86_FEATURE_ADX)  |
                         bitmaskof(X86_FEATURE_FSGSBASE));
         else
             regs[1] = 0;
-- 
1.7.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 15:01:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 15:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF3DP-0004gc-FL; Sun, 16 Feb 2014 15:01:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF3DO-0004gV-1Y
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 15:01:06 +0000
Received: from [85.158.143.35:44758] by server-3.bemta-4.messagelabs.com id
	E3/9B-11539-1B2D0035; Sun, 16 Feb 2014 15:01:05 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392562864!6013077!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3660 invoked from network); 16 Feb 2014 15:01:04 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-21.messagelabs.com with SMTP;
	16 Feb 2014 15:01:04 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 16 Feb 2014 07:01:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; 
	d="scan'208,223";a="456399890"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 16 Feb 2014 07:01:03 -0800
Received: from fmsmsx119.amr.corp.intel.com (10.19.9.28) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 07:01:02 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX119.amr.corp.intel.com (10.19.9.28) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 07:01:02 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 23:00:59 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH v1 2/2] X86: expose RDSEED, ADX, and PREFETCHW to dom0
Thread-Index: Ac8rJ+coCBaamJR0StOOcz3x6/ywWg==
Date: Sun, 16 Feb 2014 15:00:59 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F1225@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "haoxudong.hao@gmail.com" <haoxudong.hao@gmail.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v1 2/2] X86: expose RDSEED, ADX,
	and PREFETCHW to dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>From 5a9178f28b7b16c2e21966722379d630a993c437 Mon Sep 17 00:00:00 2001
From: Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 13 Feb 2014 20:37:15 +0800
Subject: [PATCH 2/2] X86: expose RDSEED, ADX, and PREFETCHW to dom0

This patch explicitly exposes Intel's new features RDSEED and ADX to dom0.
PREFETCHW does not need to be exposed explicitly.

Signed-off-by: Xudong Hao <xudong.hao@intel.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
---
 xen/arch/x86/traps.c             |    2 ++
 xen/include/asm-x86/cpufeature.h |    2 ++
 2 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0bd43b9..c736dd1 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -829,6 +829,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
                   cpufeat_mask(X86_FEATURE_BMI2) |
                   cpufeat_mask(X86_FEATURE_ERMS) |
                   cpufeat_mask(X86_FEATURE_RTM)  |
+                  cpufeat_mask(X86_FEATURE_RDSEED)  |
+                  cpufeat_mask(X86_FEATURE_ADX)  |
                   cpufeat_mask(X86_FEATURE_FSGSBASE));
         else
             b = 0;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..87d5f66 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -148,6 +148,8 @@
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
+#define X86_FEATURE_RDSEED	(7*32+18) /* RDSEED instruction */
+#define X86_FEATURE_ADX		(7*32+19) /* ADCX, ADOX instructions */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
 #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
-- 
1.7.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 15:01:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 15:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF3DP-0004gc-FL; Sun, 16 Feb 2014 15:01:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WF3DO-0004gV-1Y
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 15:01:06 +0000
Received: from [85.158.143.35:44758] by server-3.bemta-4.messagelabs.com id
	E3/9B-11539-1B2D0035; Sun, 16 Feb 2014 15:01:05 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392562864!6013077!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3660 invoked from network); 16 Feb 2014 15:01:04 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-21.messagelabs.com with SMTP;
	16 Feb 2014 15:01:04 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 16 Feb 2014 07:01:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,855,1384329600"; 
	d="scan'208,223";a="456399890"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga001.jf.intel.com with ESMTP; 16 Feb 2014 07:01:03 -0800
Received: from fmsmsx119.amr.corp.intel.com (10.19.9.28) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 07:01:02 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX119.amr.corp.intel.com (10.19.9.28) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 07:01:02 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Sun, 16 Feb 2014 23:00:59 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH v1 2/2] X86: expose RDSEED, ADX, and PREFETCHW to dom0
Thread-Index: Ac8rJ+coCBaamJR0StOOcz3x6/ywWg==
Date: Sun, 16 Feb 2014 15:00:59 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F1225@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "haoxudong.hao@gmail.com" <haoxudong.hao@gmail.com>,
	"Ian.Campbell@citrix.com" <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH v1 2/2] X86: expose RDSEED, ADX,
	and PREFETCHW to dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>From 5a9178f28b7b16c2e21966722379d630a993c437 Mon Sep 17 00:00:00 2001
From: Liu Jinsong <jinsong.liu@intel.com>
Date: Thu, 13 Feb 2014 20:37:15 +0800
Subject: [PATCH 2/2] X86: expose RDSEED, ADX, and PREFETCHW to dom0

This patch explicitly exposes the new Intel features RDSEED and ADX to dom0.
PREFETCHW does not need to be exposed explicitly.

Signed-off-by: Xudong Hao <xudong.hao@intel.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
---
 xen/arch/x86/traps.c             |    2 ++
 xen/include/asm-x86/cpufeature.h |    2 ++
 2 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0bd43b9..c736dd1 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -829,6 +829,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
                   cpufeat_mask(X86_FEATURE_BMI2) |
                   cpufeat_mask(X86_FEATURE_ERMS) |
                   cpufeat_mask(X86_FEATURE_RTM)  |
+                  cpufeat_mask(X86_FEATURE_RDSEED)  |
+                  cpufeat_mask(X86_FEATURE_ADX)  |
                   cpufeat_mask(X86_FEATURE_FSGSBASE));
         else
             b = 0;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..87d5f66 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -148,6 +148,8 @@
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
+#define X86_FEATURE_RDSEED	(7*32+18) /* RDSEED instruction */
+#define X86_FEATURE_ADX		(7*32+19) /* ADCX, ADOX instructions */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
 #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
-- 
1.7.1

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 15:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 15:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF3p7-00053t-1W; Sun, 16 Feb 2014 15:40:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WF3p5-00053o-5V
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 15:40:03 +0000
Received: from [85.158.143.35:18866] by server-2.bemta-4.messagelabs.com id
	E6/C8-10891-2DBD0035; Sun, 16 Feb 2014 15:40:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392565200!6041553!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18667 invoked from network); 16 Feb 2014 15:40:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 15:40:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,855,1384300800"; d="scan'208";a="102978914"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 15:39:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 10:39:59 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WF3p0-00065G-Vq;
	Sun, 16 Feb 2014 15:39:59 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WF3p0-0004IL-2O;
	Sun, 16 Feb 2014 15:39:58 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25011-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 15:39:58 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25011: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25011 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25011/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 test-amd64-i386-xend-winxpsp3  7 windows-install          fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build        fail in 24904 REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 24904
 test-amd64-i386-xl-multivcpu  3 host-install(3)           broken pass in 24904
 test-i386-i386-xl             3 host-install(3)           broken pass in 24904
 test-amd64-amd64-xl-sedf      3 host-install(3)           broken pass in 24904
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)     broken pass in 24904
 test-amd64-i386-xl-credit2 14 guest-localmigrate/x10 fail in 24904 pass in 25011
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24904 pass in 24887
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24887 pass in 25011

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24887 never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            broken  
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 15:51:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 15:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF40M-0005ER-4H; Sun, 16 Feb 2014 15:51:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WF40K-0005EM-Bg
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 15:51:40 +0000
Received: from [193.109.254.147:35717] by server-4.bemta-14.messagelabs.com id
	48/05-32066-B8ED0035; Sun, 16 Feb 2014 15:51:39 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392565897!929871!1
X-Originating-IP: [209.85.220.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26283 invoked from network); 16 Feb 2014 15:51:38 -0000
Received: from mail-pa0-f54.google.com (HELO mail-pa0-f54.google.com)
	(209.85.220.54)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 15:51:38 -0000
Received: by mail-pa0-f54.google.com with SMTP id fa1so14167427pad.41
	for <xen-devel@lists.xen.org>; Sun, 16 Feb 2014 07:51:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:cc:to:mime-version;
	bh=usBaYqciGiS4MUGoFUdLJERpgfmSQ9A69sL6chlaLXA=;
	b=f95ZQtDZHjGEj4wkAO8fnQavJ1AwNXRt9ZOZMwkch6Fi1qWzEKjywFK1NdVTVlLYox
	sL2W9ZgYr+woaGJnRZCoFqCRtMtg9u2+auepyxY8XVgKV/SWl8brPmQ2Tc7GFvDr91nH
	R1JOUA9LnaLeMaBhkj4mOisFEPLDloUGR5pQGNoEvq/YiRLmZJ5wz7TrkGy7kpWWbXQI
	3GgO4v3VYBc8hrkWsKyyoPuLNmmSZrUEh418VYuHDJr8NKkApbIM1J2RJk81/G9bDOAI
	PFGGp3CbROESDxwKpY9Yt5ufyX7UVl1RnJEKhYy74pZ01K4TVCVcPxb2n8wTCVRgV00B
	r63Q==
X-Received: by 10.66.231.104 with SMTP id tf8mr21558033pac.48.1392565896768;
	Sun, 16 Feb 2014 07:51:36 -0800 (PST)
Received: from [192.168.1.104] ([113.247.1.76])
	by mx.google.com with ESMTPSA id ss2sm94631433pab.8.2014.02.16.07.51.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 07:51:35 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
Date: Sun, 16 Feb 2014 23:51:29 +0800
Message-Id: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
To: List Developer Xen <xen-devel@lists.xen.org>
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

This is much later than I originally expected, but I thought it might
help to publish my work, even though it is still not finished (and
might not be finished very soon...).

I began porting Mini-OS to ARM64 last summer. Since 64-bit guest
support was not in good shape at that time, the work was stalled for
a long time until two months ago.

Though it is still at a very early stage, it can at least be built,
set up an early page table for booting, parse the DTB passed by the
hypervisor, and be debugged via printk. So I have put it on GitHub in
case someone is interested in it. Here is the URL:
https://github.com/baozich/minios-arm64

Right now there is some trouble getting the GIC to work properly,
as I didn't consider mapping the GIC's interface into the address
space and followed x86's memory layout, which makes the kernel
virtual address start at 0x0. I'll fix this as soon as possible.

Besides, there is still a lot of work to be done, so any comments
or patches are welcome.

Regards,

Baozi
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 15:51:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 15:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF40M-0005ER-4H; Sun, 16 Feb 2014 15:51:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WF40K-0005EM-Bg
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 15:51:40 +0000
Received: from [193.109.254.147:35717] by server-4.bemta-14.messagelabs.com id
	48/05-32066-B8ED0035; Sun, 16 Feb 2014 15:51:39 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392565897!929871!1
X-Originating-IP: [209.85.220.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26283 invoked from network); 16 Feb 2014 15:51:38 -0000
Received: from mail-pa0-f54.google.com (HELO mail-pa0-f54.google.com)
	(209.85.220.54)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 15:51:38 -0000
Received: by mail-pa0-f54.google.com with SMTP id fa1so14167427pad.41
	for <xen-devel@lists.xen.org>; Sun, 16 Feb 2014 07:51:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:content-transfer-encoding:subject:date:message-id
	:cc:to:mime-version;
	bh=usBaYqciGiS4MUGoFUdLJERpgfmSQ9A69sL6chlaLXA=;
	b=f95ZQtDZHjGEj4wkAO8fnQavJ1AwNXRt9ZOZMwkch6Fi1qWzEKjywFK1NdVTVlLYox
	sL2W9ZgYr+woaGJnRZCoFqCRtMtg9u2+auepyxY8XVgKV/SWl8brPmQ2Tc7GFvDr91nH
	R1JOUA9LnaLeMaBhkj4mOisFEPLDloUGR5pQGNoEvq/YiRLmZJ5wz7TrkGy7kpWWbXQI
	3GgO4v3VYBc8hrkWsKyyoPuLNmmSZrUEh418VYuHDJr8NKkApbIM1J2RJk81/G9bDOAI
	PFGGp3CbROESDxwKpY9Yt5ufyX7UVl1RnJEKhYy74pZ01K4TVCVcPxb2n8wTCVRgV00B
	r63Q==
X-Received: by 10.66.231.104 with SMTP id tf8mr21558033pac.48.1392565896768;
	Sun, 16 Feb 2014 07:51:36 -0800 (PST)
Received: from [192.168.1.104] ([113.247.1.76])
	by mx.google.com with ESMTPSA id ss2sm94631433pab.8.2014.02.16.07.51.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 07:51:35 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
Date: Sun, 16 Feb 2014 23:51:29 +0800
Message-Id: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
To: List Developer Xen <xen-devel@lists.xen.org>
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

It is much later than I used to expect. I guess it might be help
to publish my work, though it is still not finished (and might not
be finished very soon...). =


I began to try to port mini-os to ARM64 since last summer. Since
the 64-bit guest support is not quite well at that time, this
work had been stopped for a long time until two months ago.

Though it is still at very early stage, it at least can be built,
setup a early page table for booting, parse the DTB passed by the
hypervisor, and be debugged by printk at present. So I put it
on github in case someone might be interested in it. Here is the
url: https://github.com/baozich/minios-arm64

Right now, there is some trouble getting the GIC to work properly,
as I didn't consider mapping the GIC's interface into the address space
and instead followed x86's memory layout, which makes the kernel virtual
address space start at 0x0. I'll fix it as soon as possible.

Besides, there is still a lot of work to be done, so any comments
or patches are welcome.

Regards,

Baozi
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 16:00:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 16:00:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF48J-0005Nm-35; Sun, 16 Feb 2014 15:59:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WF48H-0005Nh-Ni
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 15:59:53 +0000
Received: from [85.158.139.211:18137] by server-13.bemta-5.messagelabs.com id
	CD/D0-18801-970E0035; Sun, 16 Feb 2014 15:59:53 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392566390!4238666!1
X-Originating-IP: [209.85.192.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25920 invoked from network); 16 Feb 2014 15:59:52 -0000
Received: from mail-pd0-f174.google.com (HELO mail-pd0-f174.google.com)
	(209.85.192.174)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 15:59:52 -0000
Received: by mail-pd0-f174.google.com with SMTP id z10so13958985pdj.33
	for <xen-devel@lists.xenproject.org>;
	Sun, 16 Feb 2014 07:59:50 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=bkjr76J1DCe6R/6iw+dPifjf0JR24ce25Vph9qAclH0=;
	b=S0evz0WsgIhFrDkL60dhjPHfhXq9xWqlhq+oWBFJVmOPhTmr4Ak10/5vtcZWhxu0Ko
	ucUNV+oXs5C+C2pI5I94jONlA3OTDuuXMVQF96DpR2kx+3apYHERd60VKoRTaSGgLYpx
	yrE6Ic4MQgLHhWgz0M737kt353YwTOROSa8fD47bSDZCWQ64YCJJhjv5NUmNOIbwf8uu
	7jnjae1AVy90zn5O+nP8R+g2PDWLIIa2GTvCB8S6zm5bYIFHqi2g3qANKV/ZEJXOaLZG
	a2m29kACSJcylGQ802Ni0twMbIkd98lVW+HcD4oE+6tRcOw8axjY9Nig2UIXcYkum/1T
	2oag==
X-Received: by 10.68.76.68 with SMTP id i4mr21073113pbw.73.1392566389991;
	Sun, 16 Feb 2014 07:59:49 -0800 (PST)
Received: from [192.168.1.104] ([113.247.1.76])
	by mx.google.com with ESMTPSA id bc4sm37110799pbb.2.2014.02.16.07.59.47
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 07:59:49 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <1390909022.7753.58.camel@kazak.uk.xensource.com>
Date: Sun, 16 Feb 2014 23:59:45 +0800
Message-Id: <018D9FB4-128B-443D-970B-1F14EA769E0F@gmail.com>
References: <1389651599-26562-1-git-send-email-baozich@gmail.com>
	<1390909022.7753.58.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2] xen/arm{32,
	64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Jan 28, 2014, at 19:37, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Tue, 2014-01-14 at 06:19 +0800, Chen Baozi wrote:
>>         ldr   r4, =BOOT_FDT_VIRT_START
>> -        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
>> +        mov   r4, r4, lsr #(SECOND_SHIFT)   /* Slot for BOOT_FDT_VIRT_START */
>

> Comparing the objdump before and after shows:
>        @@ -299,7 +299,7 @@
>           20041c:	e3822c0e 	orr	r2, r2, #3584	; 0xe00
>           200420:	e382207d 	orr	r2, r2, #125	; 0x7d
>           200424:	e3a04606 	mov	r4, #6291456	; 0x600000
>        -  200428:	e1a04924 	lsr	r4, r4, #18
>        +  200428:	e1a04aa4 	lsr	r4, r4, #21
>           20042c:	e18120f4 	strd	r2, [r1, r4]
>           200430:	f57ff04f 	dsb	sy
>           200434:	e28f0004 	add	r0, pc, #4
>

> which I think is unexpected/incorrect. I think you wanted #(SECOND_SHIFT
> - 3) as elsewhere.
>

> The only other change to the binary was the expected s/20/21/ in both
> arm32 and arm64.

Sorry, I got back from vacation last week and have only just noticed
this mail. I'll fix it at once.

Cheers,

Baozi
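The arithmetic behind Ian's comment can be sketched as follows (a sketch, not from the thread; it assumes the 4K-granule LPAE layout, where SECOND_SHIFT is 21 and each page-table descriptor is 8 bytes):

```python
# Why the boot code needs vaddr >> (SECOND_SHIFT - 3), not vaddr >> SECOND_SHIFT.
# With a 4K granule, a level-2 block entry maps 2MB, so SECOND_SHIFT = 21 and
# the slot index is vaddr >> 21. Each LPAE descriptor is 8 bytes, so the byte
# offset used by the strd/str register-offset addressing is index * 8,
# i.e. vaddr >> (21 - 3) = vaddr >> 18.

SECOND_SHIFT = 21   # log2(2MB block size)
ENTRY_SHIFT = 3     # log2(8-byte descriptor)

def slot_byte_offset(vaddr: int) -> int:
    """Byte offset of vaddr's level-2 slot within the boot_second table."""
    return vaddr >> (SECOND_SHIFT - ENTRY_SHIFT)

# BOOT_FDT_VIRT_START in the objdump above is 0x600000 (6MB):
vaddr = 0x600000
index = vaddr >> SECOND_SHIFT          # slot index 3
assert index == 3
assert slot_byte_offset(vaddr) == index * 8   # 24, what `lsr #18` produces
# Shifting by SECOND_SHIFT alone (what the v2 patch did) yields the slot
# index rather than the byte offset, so the store lands in the wrong place.
assert vaddr >> SECOND_SHIFT != slot_byte_offset(vaddr)
```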



From xen-devel-bounces@lists.xen.org Sun Feb 16 16:09:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 16:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF4Hs-0005ym-Nr; Sun, 16 Feb 2014 16:09:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WF4Hr-0005yh-CH
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 16:09:47 +0000
Received: from [85.158.139.211:32694] by server-13.bemta-5.messagelabs.com id
	18/D4-18801-AC2E0035; Sun, 16 Feb 2014 16:09:46 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392566983!4201361!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23242 invoked from network); 16 Feb 2014 16:09:45 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 16:09:45 -0000
Received: by mail-pb0-f45.google.com with SMTP id un15so14303003pbc.4
	for <xen-devel@lists.xenproject.org>;
	Sun, 16 Feb 2014 08:09:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=pcAYrBrH9w9QCLCAGayruK1aAyQX+Bpoaxw68g6SCCk=;
	b=OYpMae5yqYZpWcmH5L6Hrbgp8u0wuWL20vS2RN+njdK2aAyJwdr7jav8/iwipe0QCl
	SfbxFcDePGR7/KuGxRyTOlpCDpXPP4iP89mFY/MI7LPYk5GDL7a05RYeQIyfY6vJLA6I
	4K2O/owMnR9e3pIvpaV4hzcPfVHEPn+c602CmarPMXPVwF4g9uEoMrJtvTpMiqEf80dw
	YOcz/p3o+u8BmPbNXUxyDjUI6hunjVF5Qu4gc5HiUvWdFakbOyAa5DZ/MpX5indMB564
	ho0es7dT/Kg5/qG4qOr9Xx79NkhvJIAHZSiezrZ7mvywY08NJOJhfk7KnKQDH0+MPqNX
	4u7A==
X-Received: by 10.66.142.132 with SMTP id rw4mr21708666pab.6.1392566983551;
	Sun, 16 Feb 2014 08:09:43 -0800 (PST)
Received: from localhost ([113.247.1.76]) by mx.google.com with ESMTPSA id
	bz4sm37166031pbb.12.2014.02.16.08.09.39 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 08:09:42 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 17 Feb 2014 00:09:26 +0800
Message-Id: <1392566966-24840-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: Chen Baozi <baozich@gmail.com>, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v3] xen/arm{32,
	64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The section shift for a level-2 page table should be #21 rather than #20. Besides,
since {FIRST,SECOND,THIRD}_SHIFT macros are defined in asm/page.h, use
these macros instead of hard-coded shift values.

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/arm32/head.S | 20 ++++++++++----------
 xen/arch/arm/arm64/head.S | 26 +++++++++++++-------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..0110807 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -291,14 +291,14 @@ cpu_init_done:
         ldr   r4, =boot_second
         add   r4, r4, r10            /* r1 := paddr (boot_second) */
 
-        lsr   r2, r9, #20            /* Base address for 2MB mapping */
-        lsl   r2, r2, #20
+        lsr   r2, r9, #SECOND_SHIFT  /* Base address for 2MB mapping */
+        lsl   r2, r2, #SECOND_SHIFT
         orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
         orr   r2, r2, #PT_LOWER(MEM)
 
         /* ... map of vaddr(start) in boot_second */
         ldr   r1, =start
-        lsr   r1, #18                /* Slot for vaddr(start) */
+        lsr   r1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
         strd  r2, r3, [r4, r1]       /* Map vaddr(start) */
 
         /* ... map of paddr(start) in boot_second */
@@ -307,7 +307,7 @@ cpu_init_done:
                                       * then the mapping was done in
                                       * boot_pgtable above */
 
-        mov   r1, r9, lsr #18        /* Slot for paddr(start) */
+        mov   r1, r9, lsr #(SECOND_SHIFT - 3)   /* Slot for paddr(start) */
         strd  r2, r3, [r4, r1]       /* Map Xen there */
 1:
 
@@ -339,8 +339,8 @@ paging:
         /* Add UART to the fixmap table */
         ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
         mov   r3, #0
-        lsr   r2, r11, #12
-        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
+        lsr   r2, r11, #THIRD_SHIFT
+        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
         orr   r2, r2, #PT_UPPER(DEV_L3)
         orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
         strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
@@ -353,7 +353,7 @@ paging:
         orr   r2, r2, #PT_UPPER(PT)
         orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
         ldr   r4, =FIXMAP_ADDR(0)
-        mov   r4, r4, lsr #18        /* r4 := Slot for FIXMAP(0) */
+        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
         strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
 
         /* Use a virtual address to access the UART. */
@@ -365,12 +365,12 @@ paging:
 
         ldr   r1, =boot_second
         mov   r3, #0x0
-        lsr   r2, r8, #21
-        lsl   r2, r2, #21            /* r2: 2MB-aligned paddr of DTB */
+        lsr   r2, r8, #SECOND_SHIFT
+        lsl   r2, r2, #SECOND_SHIFT  /* r2: 2MB-aligned paddr of DTB */
         orr   r2, r2, #PT_UPPER(MEM)
         orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
         ldr   r4, =BOOT_FDT_VIRT_START
-        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
+        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* Slot for BOOT_FDT_VIRT_START */
         strd  r2, r3, [r1, r4]       /* Map it in the early fdt slot */
         dsb
 1:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index bebddf0..5b164e9 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -278,11 +278,11 @@ skip_bss:
         str   x2, [x4, #0]           /* Map it in slot 0 */
 
         /* ... map of paddr(start) in boot_first */
-        lsr   x2, x19, #30           /* x2 := Offset of base paddr in boot_first */
+        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
         and   x1, x2, 0x1ff          /* x1 := Slot to use */
         cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
 
-        lsl   x2, x2, #30            /* Base address for 1GB mapping */
+        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
         lsl   x1, x1, #3             /* x1 := Slot offset */
@@ -292,23 +292,23 @@ skip_bss:
         ldr   x4, =boot_second
         add   x4, x4, x20            /* x4 := paddr (boot_second) */
 
-        lsr   x2, x19, #20           /* Base address for 2MB mapping */
-        lsl   x2, x2, #20
+        lsr   x2, x19, #SECOND_SHIFT /* Base address for 2MB mapping */
+        lsl   x2, x2, #SECOND_SHIFT
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
 
         /* ... map of vaddr(start) in boot_second */
         ldr   x1, =start
-        lsr   x1, x1, #18            /* Slot for vaddr(start) */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
         str   x2, [x4, x1]           /* Map vaddr(start) */
 
         /* ... map of paddr(start) in boot_second */
-        lsr   x1, x19, #30           /* Base paddr */
+        lsr   x1, x19, #FIRST_SHIFT  /* Base paddr */
         cbnz  x1, 1f                 /* If paddr(start) is not in slot 0
                                       * then the mapping was done in
                                       * boot_pgtable or boot_first above */
 
-        lsr   x1, x19, #18           /* Slot for paddr(start) */
+        lsr   x1, x19, #(SECOND_SHIFT - 3)  /* Slot for paddr(start) */
         str   x2, [x4, x1]           /* Map Xen there */
 1:
 
@@ -340,8 +340,8 @@ paging:
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap
         add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
-        lsr   x2, x23, #12
-        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
+        lsr   x2, x23, #THIRD_SHIFT
+        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
         mov   x3, #PT_DEV_L3
         orr   x2, x2, x3             /* x2 := 4K dev map including UART */
         str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
@@ -354,7 +354,7 @@ paging:
         mov   x3, #PT_PT
         orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
         ldr   x1, =FIXMAP_ADDR(0)
-        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
         str   x2, [x4, x1]           /* Map it in the fixmap's slot */
 
         /* Use a virtual address to access the UART. */
@@ -364,12 +364,12 @@ paging:
         /* Map the DTB in the boot misc slot */
         cbnz  x22, 1f                /* Only on boot CPU */
 
-        lsr   x2, x21, #21
-        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
+        lsr   x2, x21, #SECOND_SHIFT
+        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
         mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
         orr   x2, x2, x3
         ldr   x1, =BOOT_FDT_VIRT_START
-        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x4 := Slot for BOOT_FDT_VIRT_START */
         str   x2, [x4, x1]           /* Map it in the early fdt slot */
         dsb   sy
 1:
-- 
1.8.4.3
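For readers following along, the shift values the patch substitutes for the magic numbers can be sketched as follows (a sketch assuming the standard 4K-granule LPAE layout; the names mirror the asm/page.h macros the patch uses):

```python
# Deriving {FIRST,SECOND,THIRD}_SHIFT for a 4K granule: each translation
# table holds 512 8-byte entries, so each level resolves 9 address bits.
PAGE_SHIFT = 12        # 4K pages
BITS_PER_LEVEL = 9     # log2(512 entries per table)

THIRD_SHIFT  = PAGE_SHIFT                      # 12: 4K pages
SECOND_SHIFT = THIRD_SHIFT + BITS_PER_LEVEL    # 21: 2MB blocks
FIRST_SHIFT  = SECOND_SHIFT + BITS_PER_LEVEL   # 30: 1GB blocks

assert (THIRD_SHIFT, SECOND_SHIFT, FIRST_SHIFT) == (12, 21, 30)
# The bug being fixed: the old code shifted by 20 (1MB, the pre-LPAE
# section size) where the 2MB LPAE block size (shift 21) was required.
assert 1 << SECOND_SHIFT == 2 * 1024 * 1024
```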



From xen-devel-bounces@lists.xen.org Sun Feb 16 16:09:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 16:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF4Hs-0005ym-Nr; Sun, 16 Feb 2014 16:09:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WF4Hr-0005yh-CH
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 16:09:47 +0000
Received: from [85.158.139.211:32694] by server-13.bemta-5.messagelabs.com id
	18/D4-18801-AC2E0035; Sun, 16 Feb 2014 16:09:46 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392566983!4201361!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23242 invoked from network); 16 Feb 2014 16:09:45 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 16:09:45 -0000
Received: by mail-pb0-f45.google.com with SMTP id un15so14303003pbc.4
	for <xen-devel@lists.xenproject.org>;
	Sun, 16 Feb 2014 08:09:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=pcAYrBrH9w9QCLCAGayruK1aAyQX+Bpoaxw68g6SCCk=;
	b=OYpMae5yqYZpWcmH5L6Hrbgp8u0wuWL20vS2RN+njdK2aAyJwdr7jav8/iwipe0QCl
	SfbxFcDePGR7/KuGxRyTOlpCDpXPP4iP89mFY/MI7LPYk5GDL7a05RYeQIyfY6vJLA6I
	4K2O/owMnR9e3pIvpaV4hzcPfVHEPn+c602CmarPMXPVwF4g9uEoMrJtvTpMiqEf80dw
	YOcz/p3o+u8BmPbNXUxyDjUI6hunjVF5Qu4gc5HiUvWdFakbOyAa5DZ/MpX5indMB564
	ho0es7dT/Kg5/qG4qOr9Xx79NkhvJIAHZSiezrZ7mvywY08NJOJhfk7KnKQDH0+MPqNX
	4u7A==
X-Received: by 10.66.142.132 with SMTP id rw4mr21708666pab.6.1392566983551;
	Sun, 16 Feb 2014 08:09:43 -0800 (PST)
Received: from localhost ([113.247.1.76]) by mx.google.com with ESMTPSA id
	bz4sm37166031pbb.12.2014.02.16.08.09.39 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 08:09:42 -0800 (PST)
From: Chen Baozi <baozich@gmail.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 17 Feb 2014 00:09:26 +0800
Message-Id: <1392566966-24840-1-git-send-email-baozich@gmail.com>
X-Mailer: git-send-email 1.8.4.3
Cc: Chen Baozi <baozich@gmail.com>, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v3] xen/arm{32,
	64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Section shift for level-2 page table should be #21 rather than #20. Besides,
since there are {FIRST,SECOND,THIRD}_SHIFT macros defined in asm/page.h, use
these macros instead of hard-coded shift value.

Signed-off-by: Chen Baozi <baozich@gmail.com>
---
 xen/arch/arm/arm32/head.S | 20 ++++++++++----------
 xen/arch/arm/arm64/head.S | 26 +++++++++++++-------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..0110807 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -291,14 +291,14 @@ cpu_init_done:
         ldr   r4, =boot_second
         add   r4, r4, r10            /* r1 := paddr (boot_second) */
 
-        lsr   r2, r9, #20            /* Base address for 2MB mapping */
-        lsl   r2, r2, #20
+        lsr   r2, r9, #SECOND_SHIFT  /* Base address for 2MB mapping */
+        lsl   r2, r2, #SECOND_SHIFT
         orr   r2, r2, #PT_UPPER(MEM) /* r2:r3 := section map */
         orr   r2, r2, #PT_LOWER(MEM)
 
         /* ... map of vaddr(start) in boot_second */
         ldr   r1, =start
-        lsr   r1, #18                /* Slot for vaddr(start) */
+        lsr   r1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
         strd  r2, r3, [r4, r1]       /* Map vaddr(start) */
 
         /* ... map of paddr(start) in boot_second */
@@ -307,7 +307,7 @@ cpu_init_done:
                                       * then the mapping was done in
                                       * boot_pgtable above */
 
-        mov   r1, r9, lsr #18        /* Slot for paddr(start) */
+        mov   r1, r9, lsr #(SECOND_SHIFT - 3)   /* Slot for paddr(start) */
         strd  r2, r3, [r4, r1]       /* Map Xen there */
 1:
 
@@ -339,8 +339,8 @@ paging:
         /* Add UART to the fixmap table */
         ldr   r1, =xen_fixmap        /* r1 := vaddr (xen_fixmap) */
         mov   r3, #0
-        lsr   r2, r11, #12
-        lsl   r2, r2, #12            /* 4K aligned paddr of UART */
+        lsr   r2, r11, #THIRD_SHIFT
+        lsl   r2, r2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
         orr   r2, r2, #PT_UPPER(DEV_L3)
         orr   r2, r2, #PT_LOWER(DEV_L3) /* r2:r3 := 4K dev map including UART */
         strd  r2, r3, [r1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
@@ -353,7 +353,7 @@ paging:
         orr   r2, r2, #PT_UPPER(PT)
         orr   r2, r2, #PT_LOWER(PT)  /* r2:r3 := table map of xen_fixmap */
         ldr   r4, =FIXMAP_ADDR(0)
-        mov   r4, r4, lsr #18        /* r4 := Slot for FIXMAP(0) */
+        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* r4 := Slot for FIXMAP(0) */
         strd  r2, r3, [r1, r4]       /* Map it in the fixmap's slot */
 
         /* Use a virtual address to access the UART. */
@@ -365,12 +365,12 @@ paging:
 
         ldr   r1, =boot_second
         mov   r3, #0x0
-        lsr   r2, r8, #21
-        lsl   r2, r2, #21            /* r2: 2MB-aligned paddr of DTB */
+        lsr   r2, r8, #SECOND_SHIFT
+        lsl   r2, r2, #SECOND_SHIFT  /* r2: 2MB-aligned paddr of DTB */
         orr   r2, r2, #PT_UPPER(MEM)
         orr   r2, r2, #PT_LOWER(MEM) /* r2:r3 := 2MB RAM incl. DTB */
         ldr   r4, =BOOT_FDT_VIRT_START
-        mov   r4, r4, lsr #18        /* Slot for BOOT_FDT_VIRT_START */
+        mov   r4, r4, lsr #(SECOND_SHIFT - 3)   /* Slot for BOOT_FDT_VIRT_START */
         strd  r2, r3, [r1, r4]       /* Map it in the early fdt slot */
         dsb
 1:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index bebddf0..5b164e9 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -278,11 +278,11 @@ skip_bss:
         str   x2, [x4, #0]           /* Map it in slot 0 */
 
         /* ... map of paddr(start) in boot_first */
-        lsr   x2, x19, #30           /* x2 := Offset of base paddr in boot_first */
+        lsr   x2, x19, #FIRST_SHIFT  /* x2 := Offset of base paddr in boot_first */
         and   x1, x2, 0x1ff          /* x1 := Slot to use */
         cbz   x1, 1f                 /* It's in slot 0, map in boot_second */
 
-        lsl   x2, x2, #30            /* Base address for 1GB mapping */
+        lsl   x2, x2, #FIRST_SHIFT   /* Base address for 1GB mapping */
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
         lsl   x1, x1, #3             /* x1 := Slot offset */
@@ -292,23 +292,23 @@ skip_bss:
         ldr   x4, =boot_second
         add   x4, x4, x20            /* x4 := paddr (boot_second) */
 
-        lsr   x2, x19, #20           /* Base address for 2MB mapping */
-        lsl   x2, x2, #20
+        lsr   x2, x19, #SECOND_SHIFT /* Base address for 2MB mapping */
+        lsl   x2, x2, #SECOND_SHIFT
         mov   x3, #PT_MEM            /* x2 := Section map */
         orr   x2, x2, x3
 
         /* ... map of vaddr(start) in boot_second */
         ldr   x1, =start
-        lsr   x1, x1, #18            /* Slot for vaddr(start) */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* Slot for vaddr(start) */
         str   x2, [x4, x1]           /* Map vaddr(start) */
 
         /* ... map of paddr(start) in boot_second */
-        lsr   x1, x19, #30           /* Base paddr */
+        lsr   x1, x19, #FIRST_SHIFT  /* Base paddr */
         cbnz  x1, 1f                 /* If paddr(start) is not in slot 0
                                       * then the mapping was done in
                                       * boot_pgtable or boot_first above */
 
-        lsr   x1, x19, #18           /* Slot for paddr(start) */
+        lsr   x1, x19, #(SECOND_SHIFT - 3)  /* Slot for paddr(start) */
         str   x2, [x4, x1]           /* Map Xen there */
 1:
 
@@ -340,8 +340,8 @@ paging:
         /* Add UART to the fixmap table */
         ldr   x1, =xen_fixmap
         add   x1, x1, x20            /* x1 := paddr (xen_fixmap) */
-        lsr   x2, x23, #12
-        lsl   x2, x2, #12            /* 4K aligned paddr of UART */
+        lsr   x2, x23, #THIRD_SHIFT
+        lsl   x2, x2, #THIRD_SHIFT   /* 4K aligned paddr of UART */
         mov   x3, #PT_DEV_L3
         orr   x2, x2, x3             /* x2 := 4K dev map including UART */
         str   x2, [x1, #(FIXMAP_CONSOLE*8)] /* Map it in the first fixmap's slot */
@@ -354,7 +354,7 @@ paging:
         mov   x3, #PT_PT
         orr   x2, x2, x3             /* x2 := table map of xen_fixmap */
         ldr   x1, =FIXMAP_ADDR(0)
-        lsr   x1, x1, #18            /* x1 := Slot for FIXMAP(0) */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for FIXMAP(0) */
         str   x2, [x4, x1]           /* Map it in the fixmap's slot */
 
         /* Use a virtual address to access the UART. */
@@ -364,12 +364,12 @@ paging:
         /* Map the DTB in the boot misc slot */
         cbnz  x22, 1f                /* Only on boot CPU */
 
-        lsr   x2, x21, #21
-        lsl   x2, x2, #21            /* x2 := 2MB-aligned paddr of DTB */
+        lsr   x2, x21, #SECOND_SHIFT
+        lsl   x2, x2, #SECOND_SHIFT  /* x2 := 2MB-aligned paddr of DTB */
         mov   x3, #PT_MEM            /* x2 := 2MB RAM incl. DTB */
         orr   x2, x2, x3
         ldr   x1, =BOOT_FDT_VIRT_START
-        lsr   x1, x1, #18            /* x4 := Slot for BOOT_FDT_VIRT_START */
+        lsr   x1, x1, #(SECOND_SHIFT - 3)   /* x1 := Slot for BOOT_FDT_VIRT_START */
         str   x2, [x4, x1]           /* Map it in the early fdt slot */
         dsb   sy
 1:
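The shift arithmetic in the hunks above can be checked with a small user-space sketch (illustrative only, not Xen code; the constant values assume the AArch64 4K-granule layout this file uses):

```c
#include <assert.h>
#include <stdint.h>

/* AArch64 4K-granule translation shifts, as named in the patch */
#define FIRST_SHIFT  30 /* 1GB first-level entries */
#define SECOND_SHIFT 21 /* 2MB second-level entries */
#define THIRD_SHIFT  12 /* 4K third-level entries */

/* Page-table entries are 8 bytes, so for a 2MB-aligned VA the byte
 * offset of its second-level slot is (va >> SECOND_SHIFT) * 8, which
 * the assembly folds into the single shift va >> (SECOND_SHIFT - 3). */
static uint64_t second_slot_offset(uint64_t va)
{
    assert((va & ((1ULL << SECOND_SHIFT) - 1)) == 0); /* must be 2MB aligned */
    return (va >> SECOND_SHIFT) * 8;
}

/* The lsr/lsl pair in the UART hunk is simply align-down to 4K. */
static uint64_t align_down_4k(uint64_t va)
{
    return (va >> THIRD_SHIFT) << THIRD_SHIFT;
}
```

Note the single-shift form only equals slot * 8 because the addresses involved (FIXMAP_ADDR(0), BOOT_FDT_VIRT_START, the lsr/lsl-aligned DTB paddr) are 2MB-aligned, which is why the patch can replace the magic #18 with #(SECOND_SHIFT - 3).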
-- 
1.8.4.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 16:28:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 16:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF4ZY-0006AT-Fg; Sun, 16 Feb 2014 16:28:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WF4ZV-0006AO-Jt
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 16:28:02 +0000
Received: from [193.109.254.147:59253] by server-2.bemta-14.messagelabs.com id
	03/5F-01236-017E0035; Sun, 16 Feb 2014 16:28:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392568078!933377!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15620 invoked from network); 16 Feb 2014 16:27:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 16:27:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,855,1384300800"; d="scan'208";a="101222633"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 16 Feb 2014 16:27:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 11:27:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WF4ZR-0006Jb-02;
	Sun, 16 Feb 2014 16:27:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WF4ZQ-0007sx-Se;
	Sun, 16 Feb 2014 16:27:56 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25035-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 16:27:56 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25035: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25035 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25035/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 12557
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3) broken REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                946dd683afb611721cce2cc7e76a19e74f58e067
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7049 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2382905 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 18:37:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 18:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF6aI-0006y4-Sd; Sun, 16 Feb 2014 18:36:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WF6aH-0006xz-Gu
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 18:36:57 +0000
Received: from [85.158.143.35:36328] by server-2.bemta-4.messagelabs.com id
	9A/A0-10891-84501035; Sun, 16 Feb 2014 18:36:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392575814!6044881!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28286 invoked from network); 16 Feb 2014 18:36:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 18:36:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,856,1384300800"; d="scan'208";a="103000657"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 18:36:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 16 Feb 2014 13:36:52 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WF6aA-0001lp-Nf;
	Sun, 16 Feb 2014 18:36:50 +0000
Date: Sun, 16 Feb 2014 18:36:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 12 Feb 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> for blkback and the future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - arch specific functions *_foreign_p2m_mapping do everything after the
>   hypercall
> - it cuts out common parts from m2p_*_override functions to
>   *_foreign_p2m_mapping functions
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
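[The wrapper arrangement described in the list above can be sketched as follows; this is a simplified user-space model, not the kernel code, with types and bodies stubbed out:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stub stand-ins for the kernel types (illustration only). */
struct gnttab_map_grant_ref { int status; };
struct page;

/* Core implementation: takes the explicit m2p_override parameter and,
 * after the (elided) hypercall, would call the arch-specific
 * set_foreign_p2m_mapping(). Here it just reports which path ran. */
static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                             struct gnttab_map_grant_ref *kmap_ops,
                             struct page **pages, unsigned int count,
                             bool m2p_override)
{
    (void)map_ops; (void)kmap_ops; (void)pages; (void)count;
    return m2p_override ? 1 : 0; /* 1 = old override path, 0 = fast path */
}

/* Kernel-only users (blkback, netback) get the contention-free path. */
int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
                    struct gnttab_map_grant_ref *kmap_ops,
                    struct page **pages, unsigned int count)
{
    return __gnttab_map_refs(map_ops, kmap_ops, pages, count, false);
}

/* gntdev keeps the old m2p_override behaviour via the _userspace variant. */
int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
                              struct gnttab_map_grant_ref *kmap_ops,
                              struct page **pages, unsigned int count)
{
    return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
}
```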
> It also removes a stray space from page.h and changes ret to 0 when
> XENFEAT_auto_translated_physmap is enabled, as that is the only possible
> return value there.
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Original-by: Anthony Liguori <aliguori@amazon.com>
> ---
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> v7:
> - the previous version broke build on ARM, as there is no need for those p2m
>   changes. I've put them into arch specific functions, which are stubs on arm
> 
> v8:
> - give credit to Anthony Liguori who submitted a very similar patch originally:
> http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
> - create ARM stub for get_phys_to_machine
> - move definition of mfn in __gnttab_unmap_refs to the right place
> 
> v9:
> - move everything after the hypercalls into set/clear_foreign_p2m_mapping
> - the m2p override functions therefore became unnecessary on ARM
> 
>  arch/arm/include/asm/xen/page.h     |   19 +++---
>  arch/arm/xen/p2m.c                  |   34 ++++++++++
>  arch/x86/include/asm/xen/page.h     |   13 +++-
>  arch/x86/xen/p2m.c                  |  127 ++++++++++++++++++++++++++++++-----
>  drivers/block/xen-blkback/blkback.c |   15 ++---
>  drivers/xen/gntdev.c                |   13 ++--
>  drivers/xen/grant-table.c           |  115 +++++++++++--------------------
>  include/xen/grant_table.h           |    8 ++-
>  8 files changed, 227 insertions(+), 117 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> index e0965ab..4eaeb3f 100644
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
>  	return NULL;
>  }
>  
> -static inline int m2p_add_override(unsigned long mfn, struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> -{
> -	return 0;
> -}
> -
> -static inline int m2p_remove_override(struct page *page, bool clear_pte)
> -{
> -	return 0;
> -}
> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +				   struct gnttab_map_grant_ref *kmap_ops,
> +				   struct page **pages, unsigned int count,
> +				   bool m2p_override);
> +
> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +				     struct gnttab_map_grant_ref *kmap_ops,
> +				     struct page **pages, unsigned int count,
> +				     bool m2p_override);

Much much better.
The only comment I have is about the m2p_override boolean parameter.
m2p_override is now meaningless in this context; what we really want to
let the arch-specific implementation know is whether the mapping is a
kernel-only mapping or a userspace mapping.
Testing for kmap_ops != NULL might even be enough, but it would not
improve the interface.

Is it possible to tell whether the mapping is a userspace mapping by
checking for GNTMAP_application_map in map_ops?
Otherwise I would keep the boolean and rename it to user_mapping.
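[The flag check suggested here could look roughly like the sketch below. The helper name is invented, and the GNTMAP bit positions are assumed to mirror Xen's public grant_table.h rather than taken from this patch:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Grant-map flag bits; positions assumed from Xen's public
 * grant_table.h, not defined in this patch. */
#define _GNTMAP_application_map 3
#define GNTMAP_application_map  (1 << _GNTMAP_application_map)

struct gnttab_map_grant_ref {
    uint32_t flags;  /* GNTMAP_* bits */
    int16_t  status; /* 0 on success */
};

/* Hypothetical helper: treat the batch as a userspace mapping when any
 * entry requests an application (userspace) mapping. */
static bool is_user_mapping(const struct gnttab_map_grant_ref *map_ops,
                            unsigned int count)
{
    for (unsigned int i = 0; i < count; i++)
        if (map_ops[i].flags & GNTMAP_application_map)
            return true;
    return false;
}
```

[If the flag cannot be relied on for every caller, renaming the boolean to user_mapping, as suggested, keeps the interface explicit.]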


>  bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn,
> diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
> index b31ee1b2..74d977c 100644
> --- a/arch/arm/xen/p2m.c
> +++ b/arch/arm/xen/p2m.c
> @@ -146,6 +146,40 @@ unsigned long __mfn_to_pfn(unsigned long mfn)
>  }
>  EXPORT_SYMBOL_GPL(__mfn_to_pfn);
>  
> +int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +			    struct gnttab_map_grant_ref *kmap_ops,
> +			    struct page **pages, unsigned int count,
> +			    bool m2p_override)
> +{
> +	int i;
> +
> +	for (i = 0; i < count; i++) {
> +		if (map_ops[i].status)
> +			continue;
> +		set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> +				    map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
> +
> +int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count,
> +			      bool m2p_override)
> +{
> +	int i;
> +
> +	for (i = 0; i < count; i++) {
> +		set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> +				    INVALID_P2M_ENTRY);
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
> +
>  bool __set_phys_to_machine_multi(unsigned long pfn,
>  		unsigned long mfn, unsigned long nr_pages)
>  {
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 3e276eb..9edc8a8 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,19 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +				   struct gnttab_map_grant_ref *kmap_ops,
> +				   struct page **pages, unsigned int count,
> +				   bool m2p_override);
>  extern int m2p_add_override(unsigned long mfn, struct page *page,
>  			    struct gnttab_map_grant_ref *kmap_op);
> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +				     struct gnttab_map_grant_ref *kmap_ops,
> +				     struct page **pages, unsigned int count,
> +				     bool m2p_override);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +130,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 696c694..305af27 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -881,6 +881,67 @@ static unsigned long mfn_hash(unsigned long mfn)
>  	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
>  }
>  
> +int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +			    struct gnttab_map_grant_ref *kmap_ops,
> +			    struct page **pages, unsigned int count,
> +			    bool m2p_override)
> +{
> +	int i, ret = 0;
> +	bool lazy = false;
> +	pte_t *pte;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return 0;
> +
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < count; i++) {
> +		unsigned long mfn, pfn;
> +
> +		/* Do not add to override if the map failed. */
> +		if (map_ops[i].status)
> +			continue;
> +
> +		if (map_ops[i].flags & GNTMAP_contains_pte) {
> +			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> +				(map_ops[i].host_addr & ~PAGE_MASK));
> +			mfn = pte_mfn(*pte);
> +		} else {
> +			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +		}
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +		pages[i]->index = pfn_to_mfn(pfn);
> +
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +
> +		if (m2p_override) {
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
> +			if (ret)
> +				goto out;
> +		}
> +	}
> +
> +out:
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
> +
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
>  		struct gnttab_map_grant_ref *kmap_op)
> @@ -899,13 +960,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -943,20 +997,66 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
> +
> +int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count,
> +			      bool m2p_override)
> +{
> +	int i, ret = 0;
> +	bool lazy = false;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return 0;
> +
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < count; i++) {
> +		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
> +		unsigned long pfn = page_to_pfn(pages[i]);
> +
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						  &kmap_ops[i] : NULL,
> +						  mfn);
> +		if (ret)
> +			goto out;
> +	}
> +
> +out:
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
> +
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
>  	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
>  	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
>  
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> @@ -970,10 +1070,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 073b4a1..34a2704 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 1ce1c40..5efacf8 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -928,15 +928,14 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +			     struct gnttab_map_grant_ref *kmap_ops,
> +			     struct page **pages, unsigned int count,
> +			     bool m2p_override)
>  {
>  	int i, ret;
> -	bool lazy = false;
> -	pte_t *pte;
> -	unsigned long mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -947,88 +946,56 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
>  						&map_ops[i].status, __func__);
>  
> -	/* this is basically a nop on x86 */
> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for (i = 0; i < count; i++) {
> -			if (map_ops[i].status)
> -				continue;
> -			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> -					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> -		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		/* Do not add to override if the map failed. */
> -		if (map_ops[i].status)
> -			continue;
> -
> -		if (map_ops[i].flags & GNTMAP_contains_pte) {
> -			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> -				(map_ops[i].host_addr & ~PAGE_MASK));
> -			mfn = pte_mfn(*pte);
> -		} else {
> -			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> -		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> -	}
> -
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> +	return set_foreign_p2m_mapping(map_ops, kmap_ops, pages, count,
> +				       m2p_override);
> +}
>  
> -	return ret;
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
>  {
> -	int i, ret;
> -	bool lazy = false;
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +static int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +			       struct gnttab_map_grant_ref *kmap_ops,
> +			       struct page **pages, unsigned int count,
> +			       bool m2p_override)
> +{
> +	int ret;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
>  
> -	/* this is basically a nop on x86 */
> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for (i = 0; i < count; i++) {
> -			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> -					INVALID_P2M_ENTRY);
> -		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> -	}
> -
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> +	return clear_foreign_p2m_mapping(unmap_ops, kmap_ops, pages, count,
> +					 m2p_override);
> +}
>  
> -	return ret;
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 5acb1e4..2541c96 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 18:37:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 18:37:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF6aI-0006y4-Sd; Sun, 16 Feb 2014 18:36:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WF6aH-0006xz-Gu
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 18:36:57 +0000
Received: from [85.158.143.35:36328] by server-2.bemta-4.messagelabs.com id
	9A/A0-10891-84501035; Sun, 16 Feb 2014 18:36:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392575814!6044881!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28286 invoked from network); 16 Feb 2014 18:36:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 18:36:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,856,1384300800"; d="scan'208";a="103000657"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 18:36:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Sun, 16 Feb 2014 13:36:52 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WF6aA-0001lp-Nf;
	Sun, 16 Feb 2014 18:36:50 +0000
Date: Sun, 16 Feb 2014 18:36:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
Message-ID: <alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>, stefano.stabellini@eu.citrix.com,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 12 Feb 2014, Zoltan Kiss wrote:
> The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
> for blkback and future netback patches it just causes lock contention, as
> those pages never go to userspace. Therefore this series does the following:
> - the original functions were renamed to __gnttab_[un]map_refs, with a new
>   parameter m2p_override
> - arch specific functions *_foreign_p2m_mapping do everything after the
>   hypercall
> - it cuts out common parts from m2p_*_override functions to
>   *_foreign_p2m_mapping functions
> - gnttab_[un]map_refs are now a wrapper to call __gnttab_[un]map_refs with
>   m2p_override false
> - a new function gnttab_[un]map_refs_userspace provides the old behaviour
> 
> It also removes a stray space from page.h and changes ret to 0 if
> XENFEAT_auto_translated_physmap is set, as that is the only possible return
> value there.
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> Original-by: Anthony Liguori <aliguori@amazon.com>
> ---
> v2:
> - move the storing of the old mfn in page->index to gnttab_map_refs
> - move the function header update to a separate patch
> 
> v3:
> - a new approach to retain old behaviour where it needed
> - squash the patches into one
> 
> v4:
> - move out the common bits from m2p* functions, and pass pfn/mfn as parameter
> - clear page->private before doing anything with the page, so m2p_find_override
>   won't race with this
> 
> v5:
> - change return value handling in __gnttab_[un]map_refs
> - remove a stray space in page.h
> - add detail why ret = 0 now at some places
> 
> v6:
> - don't pass pfn to m2p* functions, just get it locally
> 
> v7:
> - the previous version broke the build on ARM, as there is no need for those
>   p2m changes. I've put them into arch-specific functions, which are stubs on ARM
> 
> v8:
> - give credit to Anthony Liguori who submitted a very similar patch originally:
> http://marc.info/?i=1384307336-5328-1-git-send-email-anthony%40codemonkey.ws
> - create ARM stub for get_phys_to_machine
> - move definition of mfn in __gnttab_unmap_refs to the right place
> 
> v9:
> - move everything after the hypercalls into set/clear_foreign_p2m_mapping
> - the m2p_override functions therefore became unnecessary on ARM
> 
>  arch/arm/include/asm/xen/page.h     |   19 +++---
>  arch/arm/xen/p2m.c                  |   34 ++++++++++
>  arch/x86/include/asm/xen/page.h     |   13 +++-
>  arch/x86/xen/p2m.c                  |  127 ++++++++++++++++++++++++++++++-----
>  drivers/block/xen-blkback/blkback.c |   15 ++---
>  drivers/xen/gntdev.c                |   13 ++--
>  drivers/xen/grant-table.c           |  115 +++++++++++--------------------
>  include/xen/grant_table.h           |    8 ++-
>  8 files changed, 227 insertions(+), 117 deletions(-)
> 
> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
> index e0965ab..4eaeb3f 100644
> --- a/arch/arm/include/asm/xen/page.h
> +++ b/arch/arm/include/asm/xen/page.h
> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
>  	return NULL;
>  }
>  
> -static inline int m2p_add_override(unsigned long mfn, struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> -{
> -	return 0;
> -}
> -
> -static inline int m2p_remove_override(struct page *page, bool clear_pte)
> -{
> -	return 0;
> -}
> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +				   struct gnttab_map_grant_ref *kmap_ops,
> +				   struct page **pages, unsigned int count,
> +				   bool m2p_override);
> +
> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +				     struct gnttab_map_grant_ref *kmap_ops,
> +				     struct page **pages, unsigned int count,
> +				     bool m2p_override);

Much much better.
The only comment I have is about the m2p_override boolean parameter.
m2p_override is now meaningless in this context; what we really want to
let the arch-specific implementation know is whether the mapping is a
kernel-only mapping or a userspace mapping.
Testing for kmap_ops != NULL might even be enough, but it would not
improve the interface.

Is it possible to tell whether the mapping is a userspace mapping by
checking for GNTMAP_application_map in map_ops?
Otherwise I would keep the boolean and rename it to user_mapping.


>  bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn,
> diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
> index b31ee1b2..74d977c 100644
> --- a/arch/arm/xen/p2m.c
> +++ b/arch/arm/xen/p2m.c
> @@ -146,6 +146,40 @@ unsigned long __mfn_to_pfn(unsigned long mfn)
>  }
>  EXPORT_SYMBOL_GPL(__mfn_to_pfn);
>  
> +int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +			    struct gnttab_map_grant_ref *kmap_ops,
> +			    struct page **pages, unsigned int count,
> +			    bool m2p_override)
> +{
> +	int i;
> +
> +	for (i = 0; i < count; i++) {
> +		if (map_ops[i].status)
> +			continue;
> +		set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> +				    map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
> +
> +int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count,
> +			      bool m2p_override)
> +{
> +	int i;
> +
> +	for (i = 0; i < count; i++) {
> +		set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> +				    INVALID_P2M_ENTRY);
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
> +
>  bool __set_phys_to_machine_multi(unsigned long pfn,
>  		unsigned long mfn, unsigned long nr_pages)
>  {
> diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
> index 3e276eb..9edc8a8 100644
> --- a/arch/x86/include/asm/xen/page.h
> +++ b/arch/x86/include/asm/xen/page.h
> @@ -49,10 +49,19 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
>  extern unsigned long set_phys_range_identity(unsigned long pfn_s,
>  					     unsigned long pfn_e);
>  
> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +				   struct gnttab_map_grant_ref *kmap_ops,
> +				   struct page **pages, unsigned int count,
> +				   bool m2p_override);
>  extern int m2p_add_override(unsigned long mfn, struct page *page,
>  			    struct gnttab_map_grant_ref *kmap_op);
> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +				     struct gnttab_map_grant_ref *kmap_ops,
> +				     struct page **pages, unsigned int count,
> +				     bool m2p_override);
>  extern int m2p_remove_override(struct page *page,
> -				struct gnttab_map_grant_ref *kmap_op);
> +			       struct gnttab_map_grant_ref *kmap_op,
> +			       unsigned long mfn);
>  extern struct page *m2p_find_override(unsigned long mfn);
>  extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
>  
> @@ -121,7 +130,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
>  		pfn = m2p_find_override_pfn(mfn, ~0);
>  	}
>  
> -	/* 
> +	/*
>  	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
>  	 * entry doesn't map back to the mfn and m2p_override doesn't have a
>  	 * valid entry for it.
> diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
> index 696c694..305af27 100644
> --- a/arch/x86/xen/p2m.c
> +++ b/arch/x86/xen/p2m.c
> @@ -881,6 +881,67 @@ static unsigned long mfn_hash(unsigned long mfn)
>  	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
>  }
>  
> +int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> +			    struct gnttab_map_grant_ref *kmap_ops,
> +			    struct page **pages, unsigned int count,
> +			    bool m2p_override)
> +{
> +	int i, ret = 0;
> +	bool lazy = false;
> +	pte_t *pte;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return 0;
> +
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < count; i++) {
> +		unsigned long mfn, pfn;
> +
> +		/* Do not add to override if the map failed. */
> +		if (map_ops[i].status)
> +			continue;
> +
> +		if (map_ops[i].flags & GNTMAP_contains_pte) {
> +			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> +				(map_ops[i].host_addr & ~PAGE_MASK));
> +			mfn = pte_mfn(*pte);
> +		} else {
> +			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> +		}
> +		pfn = page_to_pfn(pages[i]);
> +
> +		WARN_ON(PagePrivate(pages[i]));
> +		SetPagePrivate(pages[i]);
> +		set_page_private(pages[i], mfn);
> +		pages[i]->index = pfn_to_mfn(pfn);
> +
> +		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
> +			ret = -ENOMEM;
> +			goto out;
> +		}
> +
> +		if (m2p_override) {
> +			ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> +					       &kmap_ops[i] : NULL);
> +			if (ret)
> +				goto out;
> +		}
> +	}
> +
> +out:
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
> +
>  /* Add an MFN override for a particular page */
>  int m2p_add_override(unsigned long mfn, struct page *page,
>  		struct gnttab_map_grant_ref *kmap_op)
> @@ -899,13 +960,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  					"m2p_add_override: pfn %lx not mapped", pfn))
>  			return -EINVAL;
>  	}
> -	WARN_ON(PagePrivate(page));
> -	SetPagePrivate(page);
> -	set_page_private(page, mfn);
> -	page->index = pfn_to_mfn(pfn);
> -
> -	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
> -		return -ENOMEM;
>  
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
> @@ -943,20 +997,66 @@ int m2p_add_override(unsigned long mfn, struct page *page,
>  	return 0;
>  }
>  EXPORT_SYMBOL_GPL(m2p_add_override);
> +
> +int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count,
> +			      bool m2p_override)
> +{
> +	int i, ret = 0;
> +	bool lazy = false;
> +
> +	if (xen_feature(XENFEAT_auto_translated_physmap))
> +		return 0;
> +
> +	if (m2p_override &&
> +	    !in_interrupt() &&
> +	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> +		arch_enter_lazy_mmu_mode();
> +		lazy = true;
> +	}
> +
> +	for (i = 0; i < count; i++) {
> +		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
> +		unsigned long pfn = page_to_pfn(pages[i]);
> +
> +		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
> +			ret = -EINVAL;
> +			goto out;
> +		}
> +
> +		set_page_private(pages[i], INVALID_P2M_ENTRY);
> +		WARN_ON(!PagePrivate(pages[i]));
> +		ClearPagePrivate(pages[i]);
> +		set_phys_to_machine(pfn, pages[i]->index);
> +
> +		if (m2p_override)
> +			ret = m2p_remove_override(pages[i],
> +						  kmap_ops ?
> +						  &kmap_ops[i] : NULL,
> +						  mfn);
> +		if (ret)
> +			goto out;
> +	}
> +
> +out:
> +	if (lazy)
> +		arch_leave_lazy_mmu_mode();
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
> +
>  int m2p_remove_override(struct page *page,
> -		struct gnttab_map_grant_ref *kmap_op)
> +			struct gnttab_map_grant_ref *kmap_op,
> +			unsigned long mfn)
>  {
>  	unsigned long flags;
> -	unsigned long mfn;
>  	unsigned long pfn;
>  	unsigned long uninitialized_var(address);
>  	unsigned level;
>  	pte_t *ptep = NULL;
>  
>  	pfn = page_to_pfn(page);
> -	mfn = get_phys_to_machine(pfn);
> -	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
> -		return -EINVAL;
>  
>  	if (!PageHighMem(page)) {
>  		address = (unsigned long)__va(pfn << PAGE_SHIFT);
> @@ -970,10 +1070,7 @@ int m2p_remove_override(struct page *page,
>  	spin_lock_irqsave(&m2p_override_lock, flags);
>  	list_del(&page->lru);
>  	spin_unlock_irqrestore(&m2p_override_lock, flags);
> -	WARN_ON(!PagePrivate(page));
> -	ClearPagePrivate(page);
>  
> -	set_phys_to_machine(pfn, page->index);
>  	if (kmap_op != NULL) {
>  		if (!PageHighMem(page)) {
>  			struct multicall_space mcs;
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6620b73..875025f 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -285,8 +285,7 @@ static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root,
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST ||
>  			!rb_next(&persistent_gnt->node)) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -321,8 +320,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		pages[segs_to_unmap] = persistent_gnt->page;
>  
>  		if (++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, pages,
> -				segs_to_unmap);
> +			ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, pages, segs_to_unmap);
>  			segs_to_unmap = 0;
> @@ -330,7 +328,7 @@ static void unmap_purged_grants(struct work_struct *work)
>  		kfree(persistent_gnt);
>  	}
>  	if (segs_to_unmap > 0) {
> -		ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap);
> +		ret = gnttab_unmap_refs(unmap, pages, segs_to_unmap);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, pages, segs_to_unmap);
>  	}
> @@ -670,15 +668,14 @@ static void xen_blkbk_unmap(struct xen_blkif *blkif,
>  				    GNTMAP_host_map, pages[i]->handle);
>  		pages[i]->handle = BLKBACK_INVALID_HANDLE;
>  		if (++invcount == BLKIF_MAX_SEGMENTS_PER_REQUEST) {
> -			ret = gnttab_unmap_refs(unmap, NULL, unmap_pages,
> -			                        invcount);
> +			ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  			BUG_ON(ret);
>  			put_free_pages(blkif, unmap_pages, invcount);
>  			invcount = 0;
>  		}
>  	}
>  	if (invcount) {
> -		ret = gnttab_unmap_refs(unmap, NULL, unmap_pages, invcount);
> +		ret = gnttab_unmap_refs(unmap, unmap_pages, invcount);
>  		BUG_ON(ret);
>  		put_free_pages(blkif, unmap_pages, invcount);
>  	}
> @@ -740,7 +737,7 @@ again:
>  	}
>  
>  	if (segs_to_map) {
> -		ret = gnttab_map_refs(map, NULL, pages_to_gnt, segs_to_map);
> +		ret = gnttab_map_refs(map, pages_to_gnt, segs_to_map);
>  		BUG_ON(ret);
>  	}
>  
> diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
> index 073b4a1..34a2704 100644
> --- a/drivers/xen/gntdev.c
> +++ b/drivers/xen/gntdev.c
> @@ -284,8 +284,10 @@ static int map_grant_pages(struct grant_map *map)
>  	}
>  
>  	pr_debug("map %d+%d\n", map->index, map->count);
> -	err = gnttab_map_refs(map->map_ops, use_ptemod ? map->kmap_ops : NULL,
> -			map->pages, map->count);
> +	err = gnttab_map_refs_userspace(map->map_ops,
> +					use_ptemod ? map->kmap_ops : NULL,
> +					map->pages,
> +					map->count);
>  	if (err)
>  		return err;
>  
> @@ -315,9 +317,10 @@ static int __unmap_grant_pages(struct grant_map *map, int offset, int pages)
>  		}
>  	}
>  
> -	err = gnttab_unmap_refs(map->unmap_ops + offset,
> -			use_ptemod ? map->kmap_ops + offset : NULL, map->pages + offset,
> -			pages);
> +	err = gnttab_unmap_refs_userspace(map->unmap_ops + offset,
> +					  use_ptemod ? map->kmap_ops + offset : NULL,
> +					  map->pages + offset,
> +					  pages);
>  	if (err)
>  		return err;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 1ce1c40..5efacf8 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -928,15 +928,14 @@ void gnttab_batch_copy(struct gnttab_copy *batch, unsigned count)
>  }
>  EXPORT_SYMBOL_GPL(gnttab_batch_copy);
>  
> -int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
> -		    struct page **pages, unsigned int count)
> +static int __gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +			     struct gnttab_map_grant_ref *kmap_ops,
> +			     struct page **pages, unsigned int count,
> +			     bool m2p_override)
>  {
>  	int i, ret;
> -	bool lazy = false;
> -	pte_t *pte;
> -	unsigned long mfn;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
>  	if (ret)
>  		return ret;
> @@ -947,88 +946,56 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
>  			gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
>  						&map_ops[i].status, __func__);
>  
> -	/* this is basically a nop on x86 */
> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for (i = 0; i < count; i++) {
> -			if (map_ops[i].status)
> -				continue;
> -			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
> -					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
> -		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		/* Do not add to override if the map failed. */
> -		if (map_ops[i].status)
> -			continue;
> -
> -		if (map_ops[i].flags & GNTMAP_contains_pte) {
> -			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
> -				(map_ops[i].host_addr & ~PAGE_MASK));
> -			mfn = pte_mfn(*pte);
> -		} else {
> -			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
> -		}
> -		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> -	}
> -
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> +	return set_foreign_p2m_mapping(map_ops, kmap_ops, pages, count,
> +				       m2p_override);
> +}
>  
> -	return ret;
> +int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_map_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_map_refs);
>  
> -int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kmap_ops,
> -		      struct page **pages, unsigned int count)
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count)
>  {
> -	int i, ret;
> -	bool lazy = false;
> +	return __gnttab_map_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_map_refs_userspace);
> +
> +static int __gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> +			       struct gnttab_map_grant_ref *kmap_ops,
> +			       struct page **pages, unsigned int count,
> +			       bool m2p_override)
> +{
> +	int ret;
>  
> +	BUG_ON(kmap_ops && !m2p_override);
>  	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
>  	if (ret)
>  		return ret;
>  
> -	/* this is basically a nop on x86 */
> -	if (xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for (i = 0; i < count; i++) {
> -			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
> -					INVALID_P2M_ENTRY);
> -		}
> -		return ret;
> -	}
> -
> -	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
> -		arch_enter_lazy_mmu_mode();
> -		lazy = true;
> -	}
> -
> -	for (i = 0; i < count; i++) {
> -		ret = m2p_remove_override(pages[i], kmap_ops ?
> -				       &kmap_ops[i] : NULL);
> -		if (ret)
> -			goto out;
> -	}
> -
> - out:
> -	if (lazy)
> -		arch_leave_lazy_mmu_mode();
> +	return clear_foreign_p2m_mapping(unmap_ops, kmap_ops, pages, count,
> +					 m2p_override);
> +}
>  
> -	return ret;
> +int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *map_ops,
> +		    struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, NULL, pages, count, false);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
>  
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *map_ops,
> +				struct gnttab_map_grant_ref *kmap_ops,
> +				struct page **pages, unsigned int count)
> +{
> +	return __gnttab_unmap_refs(map_ops, kmap_ops, pages, count, true);
> +}
> +EXPORT_SYMBOL_GPL(gnttab_unmap_refs_userspace);
> +
>  static unsigned nr_status_frames(unsigned nr_grant_frames)
>  {
>  	BUG_ON(grefs_per_grant_frame == 0);
> diff --git a/include/xen/grant_table.h b/include/xen/grant_table.h
> index 5acb1e4..2541c96 100644
> --- a/include/xen/grant_table.h
> +++ b/include/xen/grant_table.h
> @@ -191,11 +191,15 @@ void gnttab_free_auto_xlat_frames(void);
>  #define gnttab_map_vaddr(map) ((void *)(map.host_virt_addr))
>  
>  int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
> -		    struct gnttab_map_grant_ref *kmap_ops,
>  		    struct page **pages, unsigned int count);
> +int gnttab_map_refs_userspace(struct gnttab_map_grant_ref *map_ops,
> +			      struct gnttab_map_grant_ref *kmap_ops,
> +			      struct page **pages, unsigned int count);
>  int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
> -		      struct gnttab_map_grant_ref *kunmap_ops,
>  		      struct page **pages, unsigned int count);
> +int gnttab_unmap_refs_userspace(struct gnttab_unmap_grant_ref *unmap_ops,
> +				struct gnttab_map_grant_ref *kunmap_ops,
> +				struct page **pages, unsigned int count);
>  
>  /* Perform a batch of grant map/copy operations. Retry every batch slot
>   * for which the hypervisor returns GNTST_eagain. This is typically due
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 18:57:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 18:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF6to-0007BD-0M; Sun, 16 Feb 2014 18:57:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben@decadent.org.uk>) id 1WF6tl-0007B5-Ru
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 18:57:06 +0000
Received: from [85.158.137.68:11365] by server-14.bemta-3.messagelabs.com id
	14/96-08196-10A01035; Sun, 16 Feb 2014 18:57:05 +0000
X-Env-Sender: ben@decadent.org.uk
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392577023!2205251!1
X-Originating-IP: [88.96.1.126]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28980 invoked from network); 16 Feb 2014 18:57:04 -0000
Received: from shadbolt.e.decadent.org.uk (HELO shadbolt.e.decadent.org.uk)
	(88.96.1.126)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 16 Feb 2014 18:57:04 -0000
Received: from [2001:470:1f08:1539:2188:606c:cd35:323c]
	(helo=deadeye.wl.decadent.org.uk)
	by shadbolt.decadent.org.uk with esmtps
	(TLS1.2:DHE_RSA_AES_128_CBC_SHA1:128) (Exim 4.80)
	(envelope-from <ben@decadent.org.uk>)
	id 1WF6tf-0001WX-6F; Sun, 16 Feb 2014 18:56:59 +0000
Received: from ben by deadeye.wl.decadent.org.uk with local (Exim 4.82)
	(envelope-from <ben@decadent.org.uk>)
	id 1WF6tf-0005kH-7q; Sun, 16 Feb 2014 18:56:59 +0000
Message-ID: <1392577012.15615.116.camel@deadeye.wl.decadent.org.uk>
From: Ben Hutchings <ben@decadent.org.uk>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Sun, 16 Feb 2014 18:56:52 +0000
In-Reply-To: <1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: Evolution 3.8.5-2+b1 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 2001:470:1f08:1539:2188:606c:cd35:323c
X-SA-Exim-Mail-From: ben@decadent.org.uk
X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk);
	SAEximRunCond expanded to false
Cc: kvm@vger.kernel.org, netdev@vger.kernel.org,
	"Luis R. Rodriguez" <mcgrof@suse.com>,
	bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0341244123815909358=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============0341244123815909358==
Content-Type: multipart/signed; micalg="pgp-sha512";
	protocol="application/pgp-signature"; boundary="=-rsGrmwfKCYJri+Sjh8qI"


--=-rsGrmwfKCYJri+Sjh8qI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> 
> It doesn't make sense for some interfaces to become a root bridge

I think you mean 'root port'.

> at any point in time. One example is virtual backend interfaces
> which rely on other entities on the bridge for actual physical
> connectivity. They only provide virtual access.
> 
> Device drivers that know they should never become part of the
> root bridge have been using a trick of setting their MAC address
> to a high broadcast MAC address such as FE:FF:FF:FF:FF:FF. Instead
> of using these hacks lets the interfaces annotate its intent and
> generalizes a solution for multiple drivers, while letting the
> drivers use a random MAC address or one prefixed with a proper OUI.
> This sort of hack is used by both qemu and xen for their backend
> interfaces.
> 
> Cc: Stephen Hemminger <stephen@networkplumber.org>
> Cc: bridge@lists.linux-foundation.org
> Cc: netdev@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
> ---
>  include/uapi/linux/if.h | 1 +
>  net/bridge/br_if.c      | 2 ++
>  net/bridge/br_private.h | 1 +
>  net/bridge/br_stp_if.c  | 2 ++
>  4 files changed, 6 insertions(+)
> 
> diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h
> index d758163..8d10382 100644
> --- a/include/uapi/linux/if.h
> +++ b/include/uapi/linux/if.h
> @@ -84,6 +84,7 @@
>  #define IFF_LIVE_ADDR_CHANGE 0x100000	/* device supports hardware address
>  					 * change when it's running */
>  #define IFF_MACVLAN 0x200000		/* Macvlan device */
> +#define IFF_BRIDGE_NON_ROOT 0x400000    /* Don't consider for root bridge */
[...]

Does it really make sense to add a flag that says exactly which special
behaviour you want, or would it be better to define the flag as a
passive property, which other drivers/protocols then use as a condition
for special behaviour?

The fact that you also define the IFF_BRIDGE_SKIP_IP flag, and set it on
exactly the same devices, makes me think that they should actually be a
single flag.  I don't know how that flag should be named or described,
though.

Ben.

-- 
Ben Hutchings
Any sufficiently advanced bug is indistinguishable from a feature.

--=-rsGrmwfKCYJri+Sjh8qI
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUAUwEJ9Oe/yOyVhhEJAQr7ZQ//XJTbfwlF6wA9jJv96XiPN/YXk8vMVUqC
qgbGy8U4FXpg6gRIyASm6HiyviT7VZiLfskA9QGPKadgVb2x4LAPKPlQJGRWN2Pb
I6WpRq0QzqUBLTPstUUVbayfH1TuLCQ1aUBfJx60oE5mi4qPfCw8Jn+ngyXlj4dc
1FoS8Td2VKM9GOope4AYVcQ5HVPbyDgD88AZZQDtTaYEpK+ffW/7UcotEL0k3ddc
mS+voQikMFajOl2pyrZ0ztZXIVTGvJtDiHxN1wACEEu4s4pCMHvTiSDofb9vawCE
AuH8/RvRDLwaK3uUOQ+5U9g16L5hn4fOrKCSRBg+asYJ0u42YCHQfhnFFlBcVj6s
D9rSjg7bJQMGeCbhB7KOAJAzd430N598jqZmK8/eFHK6GDh2HVwwuLV14vmkhnVb
GAdqim31ZtJTvNSBbVZQx/4KHBewutCFHc+281D0HzGNfFfpprMlEEfO5q/f2Nre
jWyaccNLUK6bGlfmNeANCE2oGKit4jFUe7jgKb9xaWc/aLLEO4l4Jyq96evKpea0
KrLJgoNkscPpDzM8uPoqBEBUYHQ4kwXcQqQdCR4W2tjhCdcT0ZYQzwsi+A7Np0XY
EmTRwRC2wHVoLm1QT9+HwUxqMem2wtTKy/8I3I9EbY0FC6BDGqVrnyAGUldTGkgW
yFWN/QaYj80=
=hTJd
-----END PGP SIGNATURE-----

--=-rsGrmwfKCYJri+Sjh8qI--


--===============0341244123815909358==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0341244123815909358==--


From xen-devel-bounces@lists.xen.org Sun Feb 16 18:58:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 18:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF6ug-0007FO-OV; Sun, 16 Feb 2014 18:58:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stephen@networkplumber.org>) id 1WF6uf-0007F9-2L
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 18:58:01 +0000
Received: from [85.158.137.68:21904] by server-17.bemta-3.messagelabs.com id
	08/31-22569-83A01035; Sun, 16 Feb 2014 18:58:00 +0000
X-Env-Sender: stephen@networkplumber.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392577077!2200683!1
X-Originating-IP: [209.85.220.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26407 invoked from network); 16 Feb 2014 18:57:59 -0000
Received: from mail-pa0-f50.google.com (HELO mail-pa0-f50.google.com)
	(209.85.220.50)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 18:57:59 -0000
Received: by mail-pa0-f50.google.com with SMTP id kp14so14442263pab.9
	for <xen-devel@lists.xenproject.org>;
	Sun, 16 Feb 2014 10:57:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:in-reply-to
	:references:mime-version:content-type:content-transfer-encoding;
	bh=YKYcWVMGIuEQcBfzgh2DvpEP0KEKCCBqWftWRUfplxY=;
	b=ETJy8gnYjjxjg+/vn4JuKnYx3fkV4n4Q3nlaE4/7Nmz10YdKpm51j5NiTcHsFTsimm
	JfsDpND5IpjrJKvO/1+78e5bCuKdlBw9NAV67SOHiyNXGtrQl1o0rLeBDydfq2PwJ4fa
	B7VgqNXgUfZDRJa08MY8n75t16A5N+rwfw+aHYGWjoj7n74gqW3AGHBoV1YchRMmpZ6U
	lLTQnUABqkBPDr6MGdCJeqp/zkNcPxaCr5Yy/AkphnVig5khLRISV83PJyd8JpEfC2sI
	eodRSAtpzGGjlA5G+htMB8MLyGXCH0QcR0RIJShU5xMdWqLDmNF2g/wf87aWKiQXA10f
	zVyg==
X-Gm-Message-State: ALoCoQnqdoQHxkGzuVr5h13gXLqOH94Bb+VpGn+mccx2RqR5VPHhE2r0jvEc+HyO7nGzesiU2qtR
X-Received: by 10.68.224.195 with SMTP id re3mr21990318pbc.93.1392577077407;
	Sun, 16 Feb 2014 10:57:57 -0800 (PST)
Received: from nehalam.linuxnetplumber.net
	(static-50-53-83-51.bvtn.or.frontiernet.net. [50.53.83.51])
	by mx.google.com with ESMTPSA id
	pe3sm38171207pbc.23.2014.02.16.10.57.56 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 10:57:57 -0800 (PST)
Date: Sun, 16 Feb 2014 10:57:54 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Message-ID: <20140216105754.63738163@nehalam.linuxnetplumber.net>
In-Reply-To: <1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: kvm@vger.kernel.org, netdev@vger.kernel.org,
	"Luis R. Rodriguez" <mcgrof@suse.com>,
	bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 14 Feb 2014 18:59:37 -0800
"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:

> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> 
> It doesn't make sense for some interfaces to become a root bridge
> at any point in time. One example is virtual backend interfaces
> which rely on other entities on the bridge for actual physical
> connectivity. They only provide virtual access.
> 
> Device drivers that know they should never become part of the
> root bridge have been using a trick of setting their MAC address
> to a high broadcast MAC address such as FE:FF:FF:FF:FF:FF. Instead
> of using these hacks lets the interfaces annotate its intent and
> generalizes a solution for multiple drivers, while letting the
> drivers use a random MAC address or one prefixed with a proper OUI.
> This sort of hack is used by both qemu and xen for their backend
> interfaces.
> 
> Cc: Stephen Hemminger <stephen@networkplumber.org>
> Cc: bridge@lists.linux-foundation.org
> Cc: netdev@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>

This is already supported in a more standard way via the root
block flag.
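For reference, the root block facility Stephen points to is a per-bridge-port setting that can be toggled from userspace. A minimal sketch of both interfaces follows; the bridge and port names (br0, vif1.0) are placeholders, and the commands require root privileges and a kernel/iproute2 with bridge root blocking support:

```shell
# Mark a bridge port as ineligible to be selected as the STP root port
# (the kernel discards BPDUs that would make it root).
bridge link set dev vif1.0 root_block on

# The same per-port attribute is also exposed via sysfs:
echo 1 > /sys/class/net/br0/brif/vif1.0/root_block
```

This addresses the same use case as the proposed IFF_BRIDGE_NON_ROOT flag, but as bridge-port state configured by the administrator rather than a property asserted by the device driver.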


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 20:24:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 20:24:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF8Fe-0008AB-Ax; Sun, 16 Feb 2014 20:23:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WF8Fd-0008A6-7W
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 20:23:45 +0000
Received: from [193.109.254.147:37663] by server-2.bemta-14.messagelabs.com id
	94/9D-01236-05E11035; Sun, 16 Feb 2014 20:23:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392582223!4677169!1
X-Originating-IP: [209.85.212.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5299 invoked from network); 16 Feb 2014 20:23:43 -0000
Received: from mail-wi0-f170.google.com (HELO mail-wi0-f170.google.com)
	(209.85.212.170)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 20:23:43 -0000
Received: by mail-wi0-f170.google.com with SMTP id hi5so1797489wib.3
	for <xen-devel@lists.xensource.com>;
	Sun, 16 Feb 2014 12:23:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=PfOiT5d4OvmfzDuEHJoi/dl4CekcefBcv10vZhLqvz4=;
	b=UC8EjkVmN2KUftSTm+v3WI3DO3SKt8PWwaKpNGYXtMJfymvux+xZmoxjwsdEX2214r
	YpiYIeliYTQPSKE+ZxOMUE4CF7ZQmLy9At/ie8XdHZCN2QMRuTfVAA/ZUh9NT9BZUxVQ
	VLYbabjYBYj4er2HUlmHlqJZ2mj9BRy4Yhp/BQFtbNBYNwo5e8aJDICtl3z3LRqg/Pqj
	UPlm3ayuhz4FdNtkxVb3h6t6Xse7sCOwz/xFcJtgY6PjAuz2S38wrSSuGzAJ2m720Usz
	WuGosjzZulztq8mfpLstCMBENFdfZxvRQE+2BHPEXIvolkRIixouVqNMBYdeBNpz6G6r
	jaHQ==
X-Gm-Message-State: ALoCoQmPJFFoQdejPLpTJo2RCCAeEonzXNoSynIWlqHLXm9omwhT1q2INNIEhSl/fcFBd1hFGwXR
X-Received: by 10.180.108.199 with SMTP id hm7mr10103104wib.1.1392582223418;
	Sun, 16 Feb 2014 12:23:43 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id cm5sm25685766wid.5.2014.02.16.12.23.42
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 12:23:42 -0800 (PST)
Message-ID: <53011E4D.5050908@linaro.org>
Date: Sun, 16 Feb 2014 20:23:41 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
Cc: Julien Grall <julien.grall@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH-4.5 v2 0/10] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 14/02/14 15:50, Stefano Stabellini wrote:
> Hi all,
> this patch series removes any needs for maintenance interrupts for both
> hardware and software interrupts in Xen.
> It achieves the goal by using the GICH_LR_HW bit for hardware interrupts
> and by checking the status of the GICH_LR registers on return to guest,
> clearing the registers that are invalid and handling the lifecycle of
> the corresponding interrupts in Xen data structures.

For the record, I have tried this patch series on top of the latest 
Xen. Booting Xen and Dom0 is slower with this patch series.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 20:31:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 20:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF8Mk-0008IH-9F; Sun, 16 Feb 2014 20:31:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WF8Mi-0008IB-Or
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 20:31:05 +0000
Received: from [85.158.137.68:20471] by server-16.bemta-3.messagelabs.com id
	C8/A4-29917-80021035; Sun, 16 Feb 2014 20:31:04 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392582663!641259!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19482 invoked from network); 16 Feb 2014 20:31:03 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 20:31:03 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so9988431wes.39
	for <xen-devel@lists.xensource.com>;
	Sun, 16 Feb 2014 12:31:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=7+DyuhTEF7oZC2xzWBVtS4utqIkOYCD/6/+jan4TWeQ=;
	b=bE3mJOpA9Gk3gH+41drd7uxm8FsoO2Y+XNPx38Y3g8is6u7HPBi9sW3ya12ySuua01
	pIxfWadVXBB3fyTHD4odQaajYJYnR4b2DDXAGqVBmmPNKcwu6/mzXkw1bAnqnModRMJ/
	Cr3uWjxcyK7DcOQUmCQI4PTgpXSlRALG9wx1rjjvl3eEnO9B7JQCewFrO6xVS3Zyah2d
	jhGKrfkc4WUzuZXwtG+syNUCHlYwld+wtDMHIYbaq4bdntFbzJJz/gIQFmaXw+e2DmTj
	A8pZy5fOGa4mWl4tUgUcc+W6HVJHoYn7lOIwXwUr7gnh4zwkEIJgSqtikBZFkKu1MdP6
	hNQA==
X-Gm-Message-State: ALoCoQk1mY+id77KnDqEC4CR5q2N2akqQt1xZKFZU5hO8lpV2jmAwq59kWUfAKnUQPwImTk8yqxy
X-Received: by 10.180.19.130 with SMTP id f2mr10068376wie.6.1392582662934;
	Sun, 16 Feb 2014 12:31:02 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id
	xt1sm31050902wjb.17.2014.02.16.12.31.01 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 12:31:02 -0800 (PST)
Message-ID: <53012005.4020406@linaro.org>
Date: Sun, 16 Feb 2014 20:31:01 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
	<1392393098-7351-10-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1392393098-7351-10-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: julien.grall@citrix.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v2 10/10] xen/arm: print more info in
 gic_dump_info, keep gic_lr sync'ed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 14/02/14 15:51, Stefano Stabellini wrote:
> @@ -986,6 +991,8 @@ void gic_dump_info(struct vcpu *v)
>       struct pending_irq *p;
>
>       printk("GICH_LRs (vcpu %d) mask=%"PRIx64"\n", v->vcpu_id, v->arch.lr_mask);
> +
> +    spin_lock(&v->arch.vgic.lock);

Interrupts need to be disabled here; otherwise it's possible to receive an 
interrupt while the lock is held. Xen might deadlock if the interrupt 
is injected to this vcpu "v" (see vgic_vcpu_inject_irq).

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 20:44:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 20:44:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF8ZO-0008V8-KY; Sun, 16 Feb 2014 20:44:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WF8ZM-0008V2-Qz
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 20:44:09 +0000
Received: from [85.158.137.68:19923] by server-17.bemta-3.messagelabs.com id
	83/8F-22569-71321035; Sun, 16 Feb 2014 20:44:07 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392583446!2209547!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7404 invoked from network); 16 Feb 2014 20:44:07 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 20:44:07 -0000
Received: by mail-wg0-f49.google.com with SMTP id y10so1672886wgg.4
	for <xen-devel@lists.xen.org>; Sun, 16 Feb 2014 12:44:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=GVlhihaJ5/FMQBO4hiYStbe3um6UqjyGnX86n0XyKZU=;
	b=mwPoMTIzfYSr4Y98wlZQh1tNoUIqhxD5pLgRriqsZwyEsPQWZWgL86FFwxZtEIFEXu
	Cgnh723Er6eCzlz61PTan81t8ZTJN7G1CuSkhxJOImy8rgZwzZYjzW3EDwZX+iMhBIyy
	RZpLuU3Xs5yAyNg4eh8hPIHQc+OiLFxW9cgsOYA7MzOZrCetN0Vqd5wE6rhx1jvXiKXT
	NTtUA1PzgjpBl7zRMnmwa0RO6Ao7+bijuI9cXpjxHuGdSXaqN1Vkm7Uiq91YkhMoiL+w
	5Gd6p1+ybPSbVC4SewOVprdvD5B1dffUC62VRpBD4DG6uGe2/pZ+OOkNGi4pp65rPmhs
	a3dw==
X-Gm-Message-State: ALoCoQmrewmRCuvkMN60BPR7lhdiMZfdZcbDPW+4MKyC8tqJeutIK/edspQn/lL0fk61wziMzyGa
X-Received: by 10.194.60.103 with SMTP id g7mr14309218wjr.37.1392583446657;
	Sun, 16 Feb 2014 12:44:06 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id f3sm25840682wiv.2.2014.02.16.12.44.05
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 12:44:06 -0800 (PST)
Message-ID: <53012314.2050906@linaro.org>
Date: Sun, 16 Feb 2014 20:44:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Chen Baozi <baozich@gmail.com>, 
	List Developer Xen <xen-devel@lists.xen.org>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
In-Reply-To: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="windows-1252"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/02/14 15:51, Chen Baozi wrote:
> Hi all,

Hello Chen,

> This is much later than I expected. I guess it might be helpful
> to publish my work, though it is still not finished (and might not
> be finished very soon...).
>
> I began porting mini-os to ARM64 last summer. Since
> 64-bit guest support was not in good shape at that time, this
> work was stalled for a long time, until two months ago.
>
> Though it is still at a very early stage, it can at least be built,
> set up an early page table for booting, parse the DTB passed by the
> hypervisor, and be debugged with printk at present. So I put it
> on github in case someone might be interested in it. Here is the
> url: https://github.com/baozich/minios-arm64

Good job!

> Right now, there is some trouble making the GIC work properly,
> as I didn't consider mapping the GIC's interface into the address
> space and followed x86's memory layout, which makes the kernel
> virtual address start at 0x0. I'll fix it as soon as possible.

I think you should try to sync up with Karim (in CC). He has started to 
port mini-OS to arm32. Except for the assembly code (which should be 
fairly small), everything can be shared between the two architectures.

If I remember correctly, Karim has already written GIC support, but 
without FDT support.

> Besides, there is still lots of work to be done. So any comments
> or patches are welcome.

Regards,

-- 

Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 22:14:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 22:14:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF9yj-0000Zx-3L; Sun, 16 Feb 2014 22:14:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cienlux@gmail.com>) id 1WF9yh-0000Zs-KN
	for xen-devel@lists.xen.org; Sun, 16 Feb 2014 22:14:23 +0000
Received: from [85.158.137.68:29083] by server-10.bemta-3.messagelabs.com id
	F6/41-07302-E3831035; Sun, 16 Feb 2014 22:14:22 +0000
X-Env-Sender: cienlux@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392588859!907552!1
X-Originating-IP: [209.85.219.51]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29707 invoked from network); 16 Feb 2014 22:14:21 -0000
Received: from mail-oa0-f51.google.com (HELO mail-oa0-f51.google.com)
	(209.85.219.51)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 22:14:21 -0000
Received: by mail-oa0-f51.google.com with SMTP id h16so16812060oag.38
	for <xen-devel@lists.xen.org>; Sun, 16 Feb 2014 14:14:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:message-id:date:to:mime-version;
	bh=WMP8OGasLiZOdPlyUezcePpBMdVfvJSn+x43T78idR0=;
	b=sN8rcv0j28MFSA2viC2RYDzk/Ci+xBUcs8BrjxV5hxdh5Ni/EN+556IAU3ARM+aYlt
	9/HZ3vFRbtMH+v/mrIYcW9KZgeQgCAsnNGBSghjI0/mLxYpDdKpFVm0q1CFIJPMSca/w
	N7wntPNeduDThmH+d7X9rH5O9bdAXPfY2HtQTUIGzu3EeFvM3/9ztOGg9iACwkCfIjPZ
	Ir6ppzZlxr/UkcMbrsvRf/RX0Ehg0HQxLxp1czqVqvbYVyNKwk0aHRS5QAqFi2jnOY7+
	j9ZmQPBUdB5owlKG7ss0PHEEgcm+zQ8qPt5ijwRqB3cprH77z2ziNXtQazOuhItGSv0a
	IkeQ==
X-Received: by 10.60.15.131 with SMTP id x3mr17919214oec.15.1392588858766;
	Sun, 16 Feb 2014 14:14:18 -0800 (PST)
Received: from dhcp-10-201-67-201.tamulink.tamu.edu
	(nat-12-165-91-13-201.tamulink.tamu.edu. [165.91.13.201])
	by mx.google.com with ESMTPSA id cg5sm32146000obc.9.2014.02.16.14.14.18
	for <xen-devel@lists.xen.org>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 16 Feb 2014 14:14:18 -0800 (PST)
From: Jinchun Kim <cienlux@gmail.com>
Message-Id: <930C8A66-3F17-4F3B-8419-82B3DBF5713B@gmail.com>
Date: Sun, 16 Feb 2014 16:14:17 -0600
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Subject: [Xen-devel] Why frontswap and cleancache make copies in tmem?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9188465868276551646=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============9188465868276551646==
Content-Type: multipart/alternative; boundary="Apple-Mail=_1F0FD316-72CB-4AF4-974F-82083CF14421"


--Apple-Mail=_1F0FD316-72CB-4AF4-974F-82083CF14421
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=windows-1252

Hi, All.

While I was digging into tmem and its source code, I found something
suspicious about frontswap and cleancache.
When the guest OS wants to evict a dirty or clean page, frontswap and
cleancache store it in tmem.
The Linux kernel documentation says:

[Documentation/vm/frontswap.txt]
A “store” will copy the page to transcendent memory ….
A “load” will copy the page, if found, from transcendent memory into
kernel memory ...

[Documentation/vm/cleancache.txt]
A “put_page” will copy a (presumably about-to-be-evicted) page into
cleancache ...
A “get_page” will copy the page, if found, from cleancache into
kernel memory ...

My colleagues and I think copying the page is not necessary (especially
for cleancache), because kernel memory and tmem then hold the same
data. Why don’t we just change the pointer to the page and let it
belong to tmem? We were wondering whether there is a specific reason to
copy the pages into tmem rather than simply handing them over.

Thanks.

--Apple-Mail=_1F0FD316-72CB-4AF4-974F-82083CF14421--


--===============9188465868276551646==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9188465868276551646==--



From xen-devel-bounces@lists.xen.org Sun Feb 16 22:15:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 22:15:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WF9zK-0000bh-Fh; Sun, 16 Feb 2014 22:15:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WF9zI-0000bX-At
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 22:15:00 +0000
Received: from [85.158.143.35:4944] by server-1.bemta-4.messagelabs.com id
	09/CE-31661-36831035; Sun, 16 Feb 2014 22:14:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392588897!6067371!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12783 invoked from network); 16 Feb 2014 22:14:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 22:14:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,857,1384300800"; d="scan'208";a="103025834"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 22:14:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 17:14:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WF9zC-00083P-W2;
	Sun, 16 Feb 2014 22:14:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WF9zA-0004Fs-RP;
	Sun, 16 Feb 2014 22:14:54 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25048-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 22:14:52 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25048: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8086742880096740367=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8086742880096740367==
Content-Type: text/plain

flight 25048 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25048/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            2 host-install(2)         broken REGR. vs. 24862
 build-i386-oldkern            4 xen-build        fail in 24901 REGR. vs. 24862
 build-amd64-oldkern           4 xen-build        fail in 24916 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-multivcpu  3 host-install(3)           broken pass in 24901
 test-amd64-amd64-xl           3 host-install(3)           broken pass in 25009
 test-amd64-i386-freebsd10-i386  3 host-install(3)         broken pass in 24916
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 25009
 test-amd64-amd64-xl-win7-amd64  3 host-install(3)         broken pass in 24901
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24901 pass in 24882
 test-amd64-i386-pv            3 host-install(3)  broken in 25009 pass in 25048
 test-amd64-amd64-pv           3 host-install(3)  broken in 25009 pass in 25048
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 25009 pass in 25048
 test-amd64-i386-qemuu-rhel6hvm-intel 3 host-install(3) broken in 25009 pass in 25048
 test-amd64-i386-rhel6hvm-intel 3 host-install(3) broken in 24916 pass in 25048
 test-amd64-amd64-xl-sedf     10 guest-saverestore  fail in 24882 pass in 25048

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 25009 never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop          fail in 24882 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) need not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. If the PVH guest runs an
    HVM guest inside it, further work is needed to support this; for
    now the check bails us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    Enabling log-dirty mode sets all of the guest's memory to read-only.
    In a HAP-enabled domain, it clears the write bit in all EPT entries
    to make sure the memory is read-only. This causes a problem if VT-d
    shares page tables with EPT: a device may issue a DMA write request,
    the VT-d engine then finds the target memory read-only, and a VT-d
    fault results.

    Currently, two places enable log-dirty mode: migration and vram
    tracking. Migration with an assigned device is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop0 ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============8086742880096740367==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8086742880096740367==--

  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           broken  
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               broken  
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case - which means that we may not
    have the IO-backend device (QEMU) enabled.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. In the case that the
    PVH guest does run an HVM guest inside it, we need to do
    further work to support this - and for now the check will
    bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When log-dirty mode is enabled, all of the guest's memory is set to
    read-only. In a HAP-enabled domain, this clears the write bit in all
    EPT entries to make sure the memory is read-only. This causes a problem
    if VT-d shares the page table with EPT: a device may issue a DMA write
    request, the VT-d engine finds the target memory read-only, and a VT-d
    fault results.
    
    Currently, two places enable log-dirty mode: migration and VRAM
    tracking. Migration with a device assigned is not allowed, so that case
    is fine. VRAM tracking does not need to set all memory to read-only;
    tracking only the VRAM range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On Linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============8086742880096740367==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8086742880096740367==--

From xen-devel-bounces@lists.xen.org Sun Feb 16 22:19:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 22:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFA41-0000sg-5E; Sun, 16 Feb 2014 22:19:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WF808-00085s-F6
	for xen-devel@lists.xenproject.org; Sun, 16 Feb 2014 20:07:44 +0000
Received: from [85.158.137.68:65198] by server-12.bemta-3.messagelabs.com id
	79/4B-01674-F8A11035; Sun, 16 Feb 2014 20:07:43 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392581262!946798!1
X-Originating-IP: [213.75.39.8]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuNzUuMzkuOCA9PiA2NjcxNTE=\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuNzUuMzkuOCA9PiA2NjcxNTE=\n,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13258 invoked from network); 16 Feb 2014 20:07:42 -0000
Received: from cpsmtpb-ews05.kpnxchange.com (HELO
	cpsmtpb-ews05.kpnxchange.com) (213.75.39.8)
	by server-13.tower-31.messagelabs.com with SMTP;
	16 Feb 2014 20:07:42 -0000
Received: from cpsps-ews06.kpnxchange.com ([10.94.84.173]) by
	cpsmtpb-ews05.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Sun, 16 Feb 2014 21:07:42 +0100
Received: from CPSMTPM-TLF103.kpnxchange.com ([195.121.3.6]) by
	cpsps-ews06.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Sun, 16 Feb 2014 21:07:42 +0100
Received: from [192.168.1.104] ([82.169.24.127]) by
	CPSMTPM-TLF103.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Sun, 16 Feb 2014 21:07:42 +0100
Message-ID: <1392581262.28866.66.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>
Date: Sun, 16 Feb 2014 21:07:42 +0100
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 16 Feb 2014 20:07:42.0913 (UTC)
	FILETIME=[C0364B10:01CF2B52]
X-RcptDomain: lists.xenproject.org
X-Mailman-Approved-At: Sun, 16 Feb 2014 22:19:51 +0000
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Richard Weinberger <richard@nod.at>
Subject: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes the Kconfig symbol XEN_PRIVILEGED_GUEST which is
used nowhere in the tree. We do know grub2 has a script that greps
kernel configuration files for this symbol. It shouldn't do that. As
Linus summarized:
    This is a grub bug. It really is that simple. Treat it as one.

So there's no reason not to remove it, as we do with all unused
Kconfig symbols.

[pebolle@tiscali.nl: rewrote commit explanation.]
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
---
Tested with "git grep".
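For illustration, a minimal sketch of the kind of check involved: the
"git grep" test above confirms the symbol no longer appears in the tree,
while grub2's script greps a kernel config file for it. The file name and
config contents below are made up for the example; only the symbol name
comes from the patch.

```shell
# Simulate grub2-style detection: grep a sample kernel config for the
# removed symbol. /tmp/sample_config and its contents are hypothetical.
printf 'CONFIG_XEN_DOM0=y\nCONFIG_XEN_PVHVM=y\n' > /tmp/sample_config

if grep -q 'CONFIG_XEN_PRIVILEGED_GUEST' /tmp/sample_config; then
    echo "symbol still present"
else
    echo "symbol absent"   # prints this: the symbol is gone post-patch
fi
```

After this patch, such a check necessarily takes the "absent" branch,
which is why grub2 grepping for the symbol is the bug, not the removal.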

Michael's version can be found at https://lkml.org/lkml/2013/7/8/34 .
(This is the same patch, with a rewritten explanation, and my S-o-b
line.) The question of whether this symbol can be removed was further
discussed in https://lkml.org/lkml/2013/7/15/308 .

I don't think a bug was ever filed against grub2 regarding the way it
checks for Xen support. Should that be done first?

 arch/x86/xen/Kconfig | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 01b9026..512219d 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -19,11 +19,6 @@ config XEN_DOM0
 	depends on XEN && PCI_XEN && SWIOTLB_XEN
 	depends on X86_LOCAL_APIC && X86_IO_APIC && ACPI && PCI
 
-# Dummy symbol since people have come to rely on the PRIVILEGED_GUEST
-# name in tools.
-config XEN_PRIVILEGED_GUEST
-	def_bool XEN_DOM0
-
 config XEN_PVHVM
 	def_bool y
 	depends on XEN && PCI && X86_LOCAL_APIC
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 22:49:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 22:49:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFAWB-0001CD-Sh; Sun, 16 Feb 2014 22:48:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFAWB-0001C8-5g
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 22:48:59 +0000
Received: from [85.158.139.211:19421] by server-13.bemta-5.messagelabs.com id
	3B/67-18801-A5041035; Sun, 16 Feb 2014 22:48:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392590935!4224492!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18922 invoked from network); 16 Feb 2014 22:48:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 22:48:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,857,1384300800"; d="scan'208";a="103029992"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 22:48:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 17:48:32 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFAVk-0008DR-Cm;
	Sun, 16 Feb 2014 22:48:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFAVk-0003iN-CM;
	Sun, 16 Feb 2014 22:48:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25054-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 22:48:32 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 25054: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25054 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25054/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 24863

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 25013
 test-amd64-amd64-xl-sedf      3 host-install(3)           broken pass in 24925
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3)   broken pass in 24888
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24925 pass in 25054

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24888 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24888 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24888 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24888 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24888 never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 16 22:49:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Feb 2014 22:49:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFAWB-0001CD-Sh; Sun, 16 Feb 2014 22:48:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFAWB-0001C8-5g
	for xen-devel@lists.xensource.com; Sun, 16 Feb 2014 22:48:59 +0000
Received: from [85.158.139.211:19421] by server-13.bemta-5.messagelabs.com id
	3B/67-18801-A5041035; Sun, 16 Feb 2014 22:48:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392590935!4224492!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18922 invoked from network); 16 Feb 2014 22:48:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	16 Feb 2014 22:48:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,857,1384300800"; d="scan'208";a="103029992"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 16 Feb 2014 22:48:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 17:48:32 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFAVk-0008DR-Cm;
	Sun, 16 Feb 2014 22:48:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFAVk-0003iN-CM;
	Sun, 16 Feb 2014 22:48:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25054-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 16 Feb 2014 22:48:32 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 25054: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25054 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25054/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 24863

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pv           3 host-install(3)           broken pass in 25013
 test-amd64-amd64-xl-sedf      3 host-install(3)           broken pass in 24925
 test-amd64-amd64-xl-qemuu-win7-amd64  3 host-install(3)   broken pass in 24888
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24925 pass in 25054

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24888 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check fail in 24888 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop    fail in 24888 never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check     fail in 24888 never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop     fail in 24888 never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop fail in 24888 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24888 never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         broken  
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 00:59:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 00:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFCXf-0002Kn-W8; Mon, 17 Feb 2014 00:58:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFCXe-0002Ki-On
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 00:58:39 +0000
Received: from [85.158.139.211:62023] by server-1.bemta-5.messagelabs.com id
	78/1F-12859-DBE51035; Mon, 17 Feb 2014 00:58:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392598714!4277991!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14614 invoked from network); 17 Feb 2014 00:58:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 00:58:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,857,1384300800"; d="scan'208";a="103043906"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 00:58:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 19:58:32 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFCXY-0000QM-GJ;
	Mon, 17 Feb 2014 00:58:32 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFCXY-0007YU-DM;
	Mon, 17 Feb 2014 00:58:32 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25061-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 00:58:32 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25061: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25061 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25061/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-intel  3 host-install(3)   broken pass in 24904
 test-amd64-i386-xl-multivcpu  3 host-install(3)           broken pass in 24904
 test-amd64-i386-qemut-rhel6hvm-intel  3 host-install(3)   broken pass in 25011
 test-amd64-amd64-xl-sedf      3 host-install(3)           broken pass in 24904
 test-amd64-i386-xl-credit2    3 host-install(3)           broken pass in 25011
 test-amd64-i386-xl-winxpsp3-vcpus1  3 host-install(3)     broken pass in 24904
 test-amd64-i386-xl-credit2 14 guest-localmigrate/x10 fail in 24904 pass in 25011
 test-amd64-i386-xl-winxpsp3-vcpus1 7 windows-install fail in 24904 pass in 24887
 test-i386-i386-xl             3 host-install(3)  broken in 25011 pass in 25061
 test-amd64-amd64-xl-sedf-pin 14 guest-localmigrate/x10 fail in 24887 pass in 25061

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop      fail in 24887 never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   broken  
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         broken  
 test-amd64-i386-qemuu-rhel6hvm-intel                         broken  
 test-amd64-i386-xl-multivcpu                                 broken  
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           broken  
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 01:16:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 01:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFCp7-0006Re-Vh; Mon, 17 Feb 2014 01:16:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>)
	id 1WFCp6-0006RW-A9; Mon, 17 Feb 2014 01:16:40 +0000
Received: from [85.158.137.68:33789] by server-8.bemta-3.messagelabs.com id
	43/64-16039-7F261035; Mon, 17 Feb 2014 01:16:39 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392599797!924143!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1423 invoked from network); 17 Feb 2014 01:16:38 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-31.messagelabs.com with SMTP;
	17 Feb 2014 01:16:38 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 16 Feb 2014 17:12:05 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,857,1384329600"; d="scan'208";a="456590102"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 16 Feb 2014 17:16:23 -0800
Received: from fmsmsx112.amr.corp.intel.com (10.18.116.6) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 17:16:22 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX112.amr.corp.intel.com (10.18.116.6) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 17:16:22 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Mon, 17 Feb 2014 09:16:21 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Dario Faggioli <dario.faggioli@citrix.com>, George Dunlap
	<george.dunlap@eu.citrix.com>
Thread-Topic: [Xen-devel] Xen nested wiki
Thread-Index: Ac8oaRnW+W8g7uSZQTu9nFY+Bt3JcP//9LMA//6KNmCAAwangIAAR5eA//unM+A=
Date: Mon, 17 Feb 2014 01:16:20 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EC687@SHSMSX104.ccr.corp.intel.com>
References: <A9667DDFB95DB7438FA9D7D576C3D87E0A9DAE1E@SHSMSX104.ccr.corp.intel.com>
	<1392287382.32038.34.camel@Solace>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9E8BBF@SHSMSX104.ccr.corp.intel.com>
	<52FDEEDB.40305@eu.citrix.com> <1392388841.32038.263.camel@Solace>
In-Reply-To: <1392388841.32038.263.camel@Solace>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"publicity@lists.xenproject.org" <publicity@lists.xenproject.org>, "'Jan
	Beulich \(JBeulich@suse.com\)'" <JBeulich@suse.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: [Xen-devel] Xen nested wiki
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario Faggioli wrote on 2014-02-14:
> On ven, 2014-02-14 at 10:24 +0000, George Dunlap wrote:
>> On 02/14/2014 01:13 AM, Zhang, Yang Z wrote:
> 
>>> Sorry. I didn't clarify it clearly. Most of the patches to run nested are already
> in Xen upstream. What I want is to add "nested is basically supported in Xen
> 4.4" in the Xen 4.4 release note to let people know it.
>> 
>> I'm afraid "basically supported" will imply to people that they might
>> consider shipping it on production systems.  But because of the issues
>> with shadow-on-HAP, and the potential locking issues with the nested p2m
>> table, both of which are in control of the guest admin rather than the

Hi George,

Can you elaborate on the two issues? I'm not familiar with the background.

>> host admin, I don't think that's a recommendation we can make at this time.
>> 
>> But I do think making some kind of announcement about common
>> functionality being complete and ready to be tested would be a good
>> idea.  When we come to make the release we can brainstorm on what
>> wording to use.
>> 
> I agree. Let's not sell something not entirely ready, as that could
> backfire, but we should at least hint that it's there and it's improved.
> 

I agree too. A hint saying it is there sounds OK to me.

>> Actually, I wonder whether advertising Win7's XP compatibility mode as a
>> separate "tech preview" feature would make sense.  The people who use
>> that feature are very likely very different than most other people who
>> might think about using nested virtualization.
>> 
> Completely agree again. And this is something that could be very well
> described in a blog post on the subject (happening after the release), I
> think. :-)
> 
> Regards,
> Dario
>


Best regards,
Yang

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 01:50:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 01:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFDLC-0006j1-Ty; Mon, 17 Feb 2014 01:49:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFDLB-0006iw-Vi
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 01:49:50 +0000
Received: from [85.158.143.35:45822] by server-1.bemta-4.messagelabs.com id
	3D/28-31661-DBA61035; Mon, 17 Feb 2014 01:49:49 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392601787!6074112!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23281 invoked from network); 17 Feb 2014 01:49:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 01:49:48 -0000
X-IronPort-AV: E=Sophos;i="4.95,857,1384300800"; d="scan'208";a="101295497"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 01:49:46 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sun, 16 Feb 2014 20:49:46 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFDL7-0000fz-Ns;
	Mon, 17 Feb 2014 01:49:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFDL7-0001z7-0a;
	Mon, 17 Feb 2014 01:49:45 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25092-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 01:49:45 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 25092: trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25092 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25092/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    2 host-install(2)         broken REGR. vs. 24863
 build-amd64-oldkern           2 host-install(2)         broken REGR. vs. 24863
 build-amd64                   2 host-install(2)         broken REGR. vs. 24863

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf      1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-sedf-pin  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-amd64-pair         1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   broken  
 build-amd64-oldkern                                          broken  
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               blocked 
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              blocked 
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 blocked 
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     blocked 
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:22:59 2014 +0100

    update Xen version to 4.3.2
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 06:11:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 06:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFHQK-0000BZ-Sv; Mon, 17 Feb 2014 06:11:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WFHQI-0000BT-Sc
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 06:11:23 +0000
Received: from [85.158.143.35:49914] by server-1.bemta-4.messagelabs.com id
	2C/EC-31661-A08A1035; Mon, 17 Feb 2014 06:11:22 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392617480!6077498!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28874 invoked from network); 17 Feb 2014 06:11:21 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-5.tower-21.messagelabs.com with SMTP;
	17 Feb 2014 06:11:21 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 16 Feb 2014 22:07:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,858,1384329600"; d="scan'208";a="456679912"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by orsmga001.jf.intel.com with ESMTP; 16 Feb 2014 22:11:18 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 22:11:18 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sun, 16 Feb 2014 22:11:17 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Mon, 17 Feb 2014 14:11:16 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [PATCH v8 6/6] tools: enable Cache QoS Monitoring feature for
	libxl/libxc
Thread-Index: AQHPJkksaIXDDB2LwE6tYeMe5l1qi5quVnSQgAqpjxA=
Date: Mon, 17 Feb 2014 06:11:15 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A911947E8E@SHSMSX104.ccr.corp.intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
	<1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
	<1392027356.5117.21.camel@kazak.uk.xensource.com> 
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "keir@xen.org" <keir@xen.org>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Xu, Dongxiao
> Sent: Monday, February 10, 2014 7:19 PM
> To: Ian Campbell
> Cc: xen-devel@lists.xen.org; keir@xen.org; JBeulich@suse.com;
> Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
> andrew.cooper3@citrix.com; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov
> Subject: RE: [PATCH v8 6/6] tools: enable Cache QoS Monitoring feature for
> libxl/libxc
> 
> > > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > > index 649ce50..43c0f48 100644
> > > --- a/tools/libxl/libxl_types.idl
> > > +++ b/tools/libxl/libxl_types.idl
> > > @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
> > >                                   ])),
> > >             ("domain_create_console_available", Struct(None, [])),
> > >             ]))])
> > > +
> > > +libxl_cqminfo = Struct("cqminfo", [
> >
> > You need to also patch libxl.h to add a suitable LIBXL_HAVE_FOO define,
> > see the existing examples in that header.
> 
> OK.

Hi Ian,

Just had another look at the comments:

 * In the event that a change is required which cannot be made
 * backwards compatible in this manner a #define of the form
 * LIBXL_HAVE_<interface> will always be added in order to make it
 * possible to write applications which build against any version of
 * libxl. Such changes are expected to be exceptional and used as a
 * last resort. The barrier for backporting such a change to a stable
 * branch will be very high.

LIBXL_HAVE_<interface> addresses the backward-compatibility issue, and it is mostly used in cases like "adding a new field to an existing data structure".

In this patch, libxl_cqminfo is a newly added data structure, so there seems to be no such compatibility issue. Do we really need to add the LIBXL_HAVE_<interface> macro?

Thanks,
Dongxiao

> 
> Thanks,
> Dongxiao
> 
> >
> > Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 08:18:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 08:18:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFJPD-0001Df-Qg; Mon, 17 Feb 2014 08:18:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFJP2-0001Da-Ut
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 08:18:22 +0000
Received: from [85.158.137.68:57897] by server-7.bemta-3.messagelabs.com id
	94/38-13775-3C5C1035; Mon, 17 Feb 2014 08:18:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392625089!1040648!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13494 invoked from network); 17 Feb 2014 08:18:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 08:18:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="101351626"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 08:18:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 03:18:07 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFJOx-0002iQ-3v;
	Mon, 17 Feb 2014 08:18:07 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFJOx-0007Y7-2J;
	Mon, 17 Feb 2014 08:18:07 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25094-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 08:18:07 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25094: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1659601298255620344=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1659601298255620344==
Content-Type: text/plain

flight 25094 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25094/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24862
 build-i386-oldkern           2 host-install(2) broken in 25048 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair        16 guest-start                 fail pass in 25048
 test-amd64-i386-qemuu-rhel6hvm-intel  9 guest-start.2       fail pass in 25048
 test-amd64-i386-xl-multivcpu  3 host-install(3)  broken in 25048 pass in 25094
 test-amd64-amd64-xl           3 host-install(3)  broken in 25048 pass in 25094
 test-amd64-i386-freebsd10-i386 3 host-install(3) broken in 25048 pass in 25094
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 25048 pass in 25094
 test-amd64-amd64-xl-win7-amd64 3 host-install(3) broken in 25048 pass in 25094

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With the PVH
    enabled that is no longer the case - which means that we do not have
    to have the IO-backend device (QEMU) enabled.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. In the case that the
    PVH guest does run an HVM guest inside it, we need to do
    further work to support this; for now the check will
    bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    Enabling log-dirty mode sets all of the guest's memory to read-only.
    In a HAP-enabled domain, it clears the write bit in all EPT entries
    to make sure the memory is read-only. This causes a problem if VT-d
    shares the page table with EPT: a device may issue a DMA write
    request, the VT-d engine then finds the target memory read-only, and
    the result is a VT-d fault.

    Currently, two places enable log-dirty mode: migration and vram
    tracking. Migration with a device assigned is not allowed, so that
    case is fine. For vram, there is no need to set all memory to
    read-only; tracking only the vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============1659601298255620344==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1659601298255620344==--


Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24862
 build-i386-oldkern           2 host-install(2) broken in 25048 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair        16 guest-start                 fail pass in 25048
 test-amd64-i386-qemuu-rhel6hvm-intel  9 guest-start.2       fail pass in 25048
 test-amd64-i386-xl-multivcpu  3 host-install(3)  broken in 25048 pass in 25094
 test-amd64-amd64-xl           3 host-install(3)  broken in 25048 pass in 25094
 test-amd64-i386-freebsd10-i386 3 host-install(3) broken in 25048 pass in 25094
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 25048 pass in 25094
 test-amd64-amd64-xl-win7-amd64 3 host-install(3) broken in 25048 pass in 25094

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    The commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means the IO-backend
    device (QEMU) may not be present.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. In the case that the
    PVH guest does run an HVM guest inside it, we need to do
    further work to support this; for now the check will
    bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When enabling log dirty mode, Xen sets all of the guest's memory to read-only.
    In a HAP-enabled domain, it modifies all EPT entries to clear the write bit
    to make sure the memory is read-only. This causes a problem if VT-d shares the
    page table with EPT: a device may issue a DMA write request, and the VT-d
    engine, seeing that the target memory is read-only, raises a VT-d fault.
    
    Currently, there are two places that enable log dirty mode: migration and vram
    tracking. Migration with a device assigned is not allowed, so that case is fine.
    For vram, there is no need to set all memory to read-only; tracking only the
    vram range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============1659601298255620344==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1659601298255620344==--

From xen-devel-bounces@lists.xen.org Mon Feb 17 08:22:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 08:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFJTL-0001Kq-J2; Mon, 17 Feb 2014 08:22:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiang.liu@linux.intel.com>) id 1WFGM1-0008Qq-4g
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 05:02:53 +0000
Received: from [193.109.254.147:40818] by server-9.bemta-14.messagelabs.com id
	2B/5F-24895-CF791035; Mon, 17 Feb 2014 05:02:52 +0000
X-Env-Sender: jiang.liu@linux.intel.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392613370!4743391!1
X-Originating-IP: [143.182.124.21]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMjEgPT4gMjYzMTcz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24762 invoked from network); 17 Feb 2014 05:02:51 -0000
Received: from mga03.intel.com (HELO mga03.intel.com) (143.182.124.21)
	by server-6.tower-27.messagelabs.com with SMTP;
	17 Feb 2014 05:02:51 -0000
Received: from azsmga002.ch.intel.com ([10.2.17.35])
	by azsmga101.ch.intel.com with ESMTP; 16 Feb 2014 21:02:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.95,858,1384329600"; d="scan'208";a="341479392"
Received: from gerry-dev.bj.intel.com ([10.238.158.74])
	by AZSMGA002.ch.intel.com with ESMTP; 16 Feb 2014 21:02:34 -0800
From: Jiang Liu <jiang.liu@linux.intel.com>
To: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Lv Zheng <lv.zheng@intel.com>, Len Brown <lenb@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Date: Mon, 17 Feb 2014 13:02:51 +0800
Message-Id: <1392613373-11003-4-git-send-email-jiang.liu@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392613373-11003-1-git-send-email-jiang.liu@linux.intel.com>
References: <1392613271-10912-1-git-send-email-jiang.liu@linux.intel.com>
	<1392613373-11003-1-git-send-email-jiang.liu@linux.intel.com>
X-Mailman-Approved-At: Mon, 17 Feb 2014 08:22:38 +0000
Cc: linux-acpi@vger.kernel.org, Tony Luck <tony.luck@intel.com>,
	xen-devel@lists.xenproject.org, Jiang Liu <jiang.liu@linux.intel.com>,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [RFC Patch v1 4/6] xen,
	acpi_pad: use acpi_evaluate_ost() to replace open-coded version
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the public function acpi_evaluate_ost() to replace the open-coded
version of evaluating the ACPI _OST method.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/xen/xen-acpi-pad.c |   26 +++++++-------------------
 1 file changed, 7 insertions(+), 19 deletions(-)

diff --git a/drivers/xen/xen-acpi-pad.c b/drivers/xen/xen-acpi-pad.c
index 40c4bc0..f83b754 100644
--- a/drivers/xen/xen-acpi-pad.c
+++ b/drivers/xen/xen-acpi-pad.c
@@ -77,27 +77,14 @@ static int acpi_pad_pur(acpi_handle handle)
 	return num;
 }
 
-/* Notify firmware how many CPUs are idle */
-static void acpi_pad_ost(acpi_handle handle, int stat,
-	uint32_t idle_nums)
-{
-	union acpi_object params[3] = {
-		{.type = ACPI_TYPE_INTEGER,},
-		{.type = ACPI_TYPE_INTEGER,},
-		{.type = ACPI_TYPE_BUFFER,},
-	};
-	struct acpi_object_list arg_list = {3, params};
-
-	params[0].integer.value = ACPI_PROCESSOR_AGGREGATOR_NOTIFY;
-	params[1].integer.value =  stat;
-	params[2].buffer.length = 4;
-	params[2].buffer.pointer = (void *)&idle_nums;
-	acpi_evaluate_object(handle, "_OST", &arg_list, NULL);
-}
-
 static void acpi_pad_handle_notify(acpi_handle handle)
 {
 	int idle_nums;
+	struct acpi_buffer param = {
+		.length = 4,
+		.pointer = (void *)&idle_nums,
+	};
+
 
 	mutex_lock(&xen_cpu_lock);
 	idle_nums = acpi_pad_pur(handle);
@@ -109,7 +96,8 @@ static void acpi_pad_handle_notify(acpi_handle handle)
 	idle_nums = xen_acpi_pad_idle_cpus(idle_nums)
 		    ?: xen_acpi_pad_idle_cpus_num();
 	if (idle_nums >= 0)
-		acpi_pad_ost(handle, 0, idle_nums);
+		acpi_evaluate_ost(handle, ACPI_PROCESSOR_AGGREGATOR_NOTIFY,
+				  0, &param);
 	mutex_unlock(&xen_cpu_lock);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 08:32:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 08:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFJd0-0001Vt-NM; Mon, 17 Feb 2014 08:32:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WFJcz-0001Vo-I5
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 08:32:37 +0000
Received: from [85.158.137.68:7252] by server-8.bemta-3.messagelabs.com id
	88/3C-16039-429C1035; Mon, 17 Feb 2014 08:32:36 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392625954!2269636!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1231 invoked from network); 17 Feb 2014 08:32:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 08:32:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="103104167"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 08:32:34 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 03:32:33 -0500
Message-ID: <5301C920.4040905@citrix.com>
Date: Mon, 17 Feb 2014 09:32:32 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Miguel Clara <miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
	<CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
In-Reply-To: <CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 21:08, Miguel Clara wrote:
> On Fri, Feb 14, 2014 at 9:13 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>> On 14/02/14 03:09, Miguel Clara wrote:
>>> After compiling with the patch and rebuilding/installing the module, I
>>> reboot, I get a panic now when drbd starts.
>>
>> There was no need to rebuild the module, the patch only modified the
>> block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13,
>> everything seemed to be fine (no kernel panic of course).
>
> Just noticed this part now but before you said:
> (The patch is against git://git.drbd.org/drbd-9.0.git)
>
> So should I apply the patch to drbd-8.4.3... ?

Yes, I think this patch will apply to almost any version of drbd, since
it only modifies the block-drbd script. I would recommend applying it
against your already installed block-drbd script; this way you can be
sure there's no version mismatch.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 08:32:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 08:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFJd0-0001Vt-NM; Mon, 17 Feb 2014 08:32:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WFJcz-0001Vo-I5
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 08:32:37 +0000
Received: from [85.158.137.68:7252] by server-8.bemta-3.messagelabs.com id
	88/3C-16039-429C1035; Mon, 17 Feb 2014 08:32:36 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392625954!2269636!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1231 invoked from network); 17 Feb 2014 08:32:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 08:32:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="103104167"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 08:32:34 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 03:32:33 -0500
Message-ID: <5301C920.4040905@citrix.com>
Date: Mon, 17 Feb 2014 09:32:32 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Miguel Clara <miguelmclara@gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
	<CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
In-Reply-To: <CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 21:08, Miguel Clara wrote:
> On Fri, Feb 14, 2014 at 9:13 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>> On 14/02/14 03:09, Miguel Clara wrote:
>>> After compiling with the patch and rebuilding/installing the module, I
>>> reboot, I get a panic now when drbd starts.
>>
>> There was no need to rebuild the module, the patch only modified the
>> block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13,
>> everything seemed to be fine (no kernel panic of course).
>
> Just noticed this part now but before you said:
> (The patch is against git://git.drbd.org/drbd-9.0.git)
>
> So should I apply the patch to drbd-8.4.3... ?

Yes, I think this patch will apply to almost any version of drbd, since
it only modifies the block-drbd script. I would recommend that you apply
it against your already installed block-drbd script; this way you will
be sure there's no version mismatch.

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 08:53:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 08:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFJwu-0001jz-9c; Mon, 17 Feb 2014 08:53:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFJwt-0001ju-8R
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 08:53:11 +0000
Received: from [193.109.254.147:60644] by server-4.bemta-14.messagelabs.com id
	5D/C2-32066-6FDC1035; Mon, 17 Feb 2014 08:53:10 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392627189!796899!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18574 invoked from network); 17 Feb 2014 08:53:09 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 08:53:09 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 08:53:08 +0000
Message-Id: <5301DC00020000780011CC15@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 08:53:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part172548E0.1__="
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [Xen-devel] [PATCH] xencons: further Dom0 handling improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part172548E0.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

c/s 1242:731ff1f10c46 ("xencons: generalize use of
add_preferred_console()") still left cases where (in Dom0) the console
would get registered with index -1. Eliminate these cases by
- also calling add_preferred_console() in Dom0 when "xencons=" was
  specified
- setting the index directly when in Dom0 and "xencons=" was not given

Also do some cleanup:
- Move the declaration of console_use_vt into the respective global
  header (where it should have been placed from the beginning), and
  use a #define instead of a variable when !XEN.
- Replace the needless uses of goto in xen_console_init() with plain
  return statements.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/char/tty_io.c
+++ b/drivers/char/tty_io.c
@@ -130,7 +130,9 @@ LIST_HEAD(tty_drivers);			/* linked list
    vt.c for deeply disgusting hack reasons */
 DEFINE_MUTEX(tty_mutex);
 
+#ifndef console_use_vt
 int console_use_vt = 1;
+#endif
 
 #ifdef CONFIG_UNIX98_PTYS
 extern struct tty_driver *ptm_driver;	/* Unix98 pty masters; for /dev/ptmx */
--- a/drivers/xen/console/console.c
+++ b/drivers/xen/console/console.c
@@ -86,9 +86,8 @@ static int __init xencons_setup(char *st
 {
 	char *q;
 	int n;
-	extern int console_use_vt;
 
-	console_use_vt = 1;
+	console_use_vt = -1;
 	if (!strncmp(str, "ttyS", 4)) {
 		xc_mode = XC_SERIAL;
 		str += 4;
@@ -193,13 +192,13 @@ static struct console kcons_info = {
 static int __init xen_console_init(void)
 {
 	if (!is_running_on_xen())
-		goto out;
+		return 0;
 
 	if (is_initial_xendomain()) {
 		kcons_info.write = kcons_write_dom0;
 	} else {
 		if (!xen_start_info->console.domU.evtchn)
-			goto out;
+			return 0;
 		kcons_info.write = kcons_write;
 	}
 
@@ -229,16 +228,17 @@ static int __init xen_console_init(void)
 		break;
 
 	default:
-		goto out;
+		return 0;
 	}
 
 	wbuf = alloc_bootmem(wbuf_size);
 
-	if (!is_initial_xendomain())
+	if (console_use_vt <= 0 || !is_initial_xendomain())
 		add_preferred_console(kcons_info.name, xc_num, NULL);
+	else
+		kcons_info.index = xc_num;
 	register_console(&kcons_info);
 
- out:
 	return 0;
 }
 console_initcall(xen_console_init);
--- a/include/linux/console.h
+++ b/include/linux/console.h
@@ -63,6 +63,12 @@ extern const struct consw vga_con;	/* VG
 extern const struct consw newport_con;	/* SGI Newport console  */
 extern const struct consw prom_con;	/* SPARC PROM console */
 
+#ifdef CONFIG_XEN
+extern int console_use_vt;
+#else
+#define console_use_vt 1
+#endif
+
 int con_is_bound(const struct consw *csw);
 int register_con_driver(const struct consw *csw, int first, int last);
 int unregister_con_driver(const struct consw *csw);




--=__Part172548E0.1__=
Content-Type: text/plain; name="xen-console-index.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="xen-console-index.patch"

xencons: further Dom0 handling improvements

c/s 1242:731ff1f10c46 ("xencons: generalize use of
add_preferred_console()") still left cases where (in Dom0) the console
would get registered with index -1. Eliminate these cases by
- also calling add_preferred_console() in Dom0 when "xencons=" was
  specified
- setting the index directly when in Dom0 and "xencons=" was not given

Also do some cleanup:
- Move the declaration of console_use_vt into the respective global
  header (where it should have been placed from the beginning), and
  use a #define instead of a variable when !XEN.
- Replace the needless uses of goto in xen_console_init() with plain
  return statements.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/drivers/char/tty_io.c
+++ b/drivers/char/tty_io.c
@@ -130,7 +130,9 @@ LIST_HEAD(tty_drivers);			/* linked list
    vt.c for deeply disgusting hack reasons */
 DEFINE_MUTEX(tty_mutex);
 
+#ifndef console_use_vt
 int console_use_vt = 1;
+#endif
 
 #ifdef CONFIG_UNIX98_PTYS
 extern struct tty_driver *ptm_driver;	/* Unix98 pty masters; for /dev/ptmx */
--- a/drivers/xen/console/console.c
+++ b/drivers/xen/console/console.c
@@ -86,9 +86,8 @@ static int __init xencons_setup(char *st
 {
 	char *q;
 	int n;
-	extern int console_use_vt;
 
-	console_use_vt = 1;
+	console_use_vt = -1;
 	if (!strncmp(str, "ttyS", 4)) {
 		xc_mode = XC_SERIAL;
 		str += 4;
@@ -193,13 +192,13 @@ static struct console kcons_info = {
 static int __init xen_console_init(void)
 {
 	if (!is_running_on_xen())
-		goto out;
+		return 0;
 
 	if (is_initial_xendomain()) {
 		kcons_info.write = kcons_write_dom0;
 	} else {
 		if (!xen_start_info->console.domU.evtchn)
-			goto out;
+			return 0;
 		kcons_info.write = kcons_write;
 	}
 
@@ -229,16 +228,17 @@ static int __init xen_console_init(void)
 		break;
 
 	default:
-		goto out;
+		return 0;
 	}
 
 	wbuf = alloc_bootmem(wbuf_size);
 
-	if (!is_initial_xendomain())
+	if (console_use_vt <= 0 || !is_initial_xendomain())
 		add_preferred_console(kcons_info.name, xc_num, NULL);
+	else
+		kcons_info.index = xc_num;
 	register_console(&kcons_info);
 
- out:
 	return 0;
 }
 console_initcall(xen_console_init);
--- a/include/linux/console.h
+++ b/include/linux/console.h
@@ -63,6 +63,12 @@ extern const struct consw vga_con;	/* VG
 extern const struct consw newport_con;	/* SGI Newport console  */
 extern const struct consw prom_con;	/* SPARC PROM console */
 
+#ifdef CONFIG_XEN
+extern int console_use_vt;
+#else
+#define console_use_vt 1
+#endif
+
 int con_is_bound(const struct consw *csw);
 int register_con_driver(const struct consw *csw, int first, int last);
 int unregister_con_driver(const struct consw *csw);
--=__Part172548E0.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part172548E0.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 17 09:28:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 09:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFKUr-0001z4-Dl; Mon, 17 Feb 2014 09:28:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WFKUp-0001yz-QS
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 09:28:16 +0000
Received: from [85.158.143.35:10019] by server-2.bemta-4.messagelabs.com id
	91/49-10891-E26D1035; Mon, 17 Feb 2014 09:28:14 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392629292!6178481!1
X-Originating-IP: [209.85.216.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21088 invoked from network); 17 Feb 2014 09:28:13 -0000
Received: from mail-qa0-f44.google.com (HELO mail-qa0-f44.google.com)
	(209.85.216.44)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 09:28:13 -0000
Received: by mail-qa0-f44.google.com with SMTP id w5so21216244qac.17
	for <xen-devel@lists.xenproject.org>;
	Mon, 17 Feb 2014 01:28:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=e1UBsQaBkKMeEaVEZBlZRxvVFnpocyXs/7KqDfuMWPM=;
	b=ZeyQiFpZ6XgTyiE+wdawld24Y+GhRaQ1J4mF67H1h9QaRs+unAbXt2KI0UwJb0nka8
	KuXRYHvpsT/qZerxoyJUcRdVQXF90wNVsDJGibr4sKiYBnZ5aZqMWzR4MfxUKmQMvtLI
	YUXeqniI9Y05tCbnfBaoCQdRJNo/94TMzVLvI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=e1UBsQaBkKMeEaVEZBlZRxvVFnpocyXs/7KqDfuMWPM=;
	b=Td6q2GBvyb5ELh1X4sIbokLXZXTCc4mkpRb/JFepczX4mVxaBrMCWxc46M3b1hZhiG
	EMsQKKpQGrkAjW0X7n6D6l8vF1kfwwQ2I9b/5tToDyMo0XJggKQ+xt0QshkdBZuLnII0
	ELA65o+2GZv1JyNgURBaJRSfkMvM9l46CY7fVsvqowJb/K6ijGeLnkE5TCHyIsix1osl
	EPhDXnFSXKXmYd3ddfEW/tnZ6BBUouBLgobrGSzlJbSYHElAPLkvh9/8FAkEybZEn2N2
	NrtiFc1QLbBAD2GeRvvFfMc38zTFTZNBx0Z5n/8Ob7l/87YoKU+i7dBs5ICZAfZdhdHz
	AazA==
X-Gm-Message-State: ALoCoQnx+57nViqIrLygUOn6icm6wvKue6Rz9JBWwbGHsziikUuE2LS2nG0zidBD7EBU7+hGfEsP
X-Received: by 10.140.87.172 with SMTP id r41mr323659qgd.101.1392629292403;
	Mon, 17 Feb 2014 01:28:12 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Mon, 17 Feb 2014 01:27:57 -0800 (PST)
X-Originating-IP: [217.66.152.101]
In-Reply-To: <CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Mon, 17 Feb 2014 13:27:57 +0400
X-Google-Sender-Auth: ctWRdQT5icksXIABccuzXqSfFUo
Message-ID: <CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update info:
If I run "xl mem-set XXX maxmem" from dom0, the domU can balloon up/down fine.
What is the difference between 4.1 and 4.3 in this case?
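
[Editorial aside: the workaround described above can be sketched with xl
commands. The domain name "mydomU" is a placeholder, not taken from this
thread.]

```shell
# Hypothetical sketch: from dom0, raise the ballooning limits and target.
xl mem-max mydomU 1024    # static upper bound (maxmem) for the domain
xl mem-set mydomU 1024    # balloon target may then go up to that bound
xl list mydomU            # confirm the new memory figure
```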

2014-02-14 22:12 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> 2014-02-14 20:34 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
>> My gut feeling is nothing is wrong with Xen. It's just that you have a
>> typo in your config file. :-)
>>
>> Wei.
>
> No, if I have an error in the config the domain can't start. You know that.
> my config is:
>
> name="21-10918"
> kernel="/var/storage/kernel/debian/6/kernel-64"
> ramdisk="/var/storage/kernel/ramdisk-64"
> vif=["mac=00:16:3e:00:2c:63,ip=62.76.185.64"]
> disk=["phy:/dev/disk/vbd/21-916,xvda,w"]
> memory=512
> maxmem=1024
> vcpus=3
> maxvcpus=3
> cpu_cap=300
> cpu_weight=256
> vfb=["type=vnc,vnclisten=0.0.0.0,vncpasswd=7QOG1885y3"]
> extra="root=/dev/xvda1 selinux=1 enforcing=0 iommu=off swiotlb=off
> earlyprintk=xen console=hvc0"
> on_restart="destroy"
> cpuid="host,x2apic=0,aes=0,xsave=0,avx=0"
> device_model_version="qemu-xen"
> device_model_override="/usr/bin/qemu-system-x86_64"
>
> --
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru
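
[Editorial aside: a config typo of the kind Wei suspects can be caught
without booting the domain. The config path below is a placeholder.]

```shell
# Hypothetical check: ask xl to parse the domain config without creating
# the domain; a syntax error makes the dry run fail with a parse message.
xl create -n /etc/xen/21-10918.cfg
```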



-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 09:28:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 09:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFKUr-0001z4-Dl; Mon, 17 Feb 2014 09:28:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WFKUp-0001yz-QS
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 09:28:16 +0000
Received: from [85.158.143.35:10019] by server-2.bemta-4.messagelabs.com id
	91/49-10891-E26D1035; Mon, 17 Feb 2014 09:28:14 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392629292!6178481!1
X-Originating-IP: [209.85.216.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21088 invoked from network); 17 Feb 2014 09:28:13 -0000
Received: from mail-qa0-f44.google.com (HELO mail-qa0-f44.google.com)
	(209.85.216.44)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 09:28:13 -0000
Received: by mail-qa0-f44.google.com with SMTP id w5so21216244qac.17
	for <xen-devel@lists.xenproject.org>;
	Mon, 17 Feb 2014 01:28:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=e1UBsQaBkKMeEaVEZBlZRxvVFnpocyXs/7KqDfuMWPM=;
	b=ZeyQiFpZ6XgTyiE+wdawld24Y+GhRaQ1J4mF67H1h9QaRs+unAbXt2KI0UwJb0nka8
	KuXRYHvpsT/qZerxoyJUcRdVQXF90wNVsDJGibr4sKiYBnZ5aZqMWzR4MfxUKmQMvtLI
	YUXeqniI9Y05tCbnfBaoCQdRJNo/94TMzVLvI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=e1UBsQaBkKMeEaVEZBlZRxvVFnpocyXs/7KqDfuMWPM=;
	b=Td6q2GBvyb5ELh1X4sIbokLXZXTCc4mkpRb/JFepczX4mVxaBrMCWxc46M3b1hZhiG
	EMsQKKpQGrkAjW0X7n6D6l8vF1kfwwQ2I9b/5tToDyMo0XJggKQ+xt0QshkdBZuLnII0
	ELA65o+2GZv1JyNgURBaJRSfkMvM9l46CY7fVsvqowJb/K6ijGeLnkE5TCHyIsix1osl
	EPhDXnFSXKXmYd3ddfEW/tnZ6BBUouBLgobrGSzlJbSYHElAPLkvh9/8FAkEybZEn2N2
	NrtiFc1QLbBAD2GeRvvFfMc38zTFTZNBx0Z5n/8Ob7l/87YoKU+i7dBs5ICZAfZdhdHz
	AazA==
X-Gm-Message-State: ALoCoQnx+57nViqIrLygUOn6icm6wvKue6Rz9JBWwbGHsziikUuE2LS2nG0zidBD7EBU7+hGfEsP
X-Received: by 10.140.87.172 with SMTP id r41mr323659qgd.101.1392629292403;
	Mon, 17 Feb 2014 01:28:12 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Mon, 17 Feb 2014 01:27:57 -0800 (PST)
X-Originating-IP: [217.66.152.101]
In-Reply-To: <CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Mon, 17 Feb 2014 13:27:57 +0400
X-Google-Sender-Auth: ctWRdQT5icksXIABccuzXqSfFUo
Message-ID: <CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update info:
If i from dom0 do xl mem-set XXX maxmem, domU can balloon up/down fine.
What is the difference between 4.1 and 4.3 in this case?

2014-02-14 22:12 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> 2014-02-14 20:34 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
>> My gut feeling is nothing is wrong with Xen. It's just that you have a
>> typo in your config file. :-)
>>
>> Wei.
>
> No, if I had an error in the config the domain couldn't start. You know that.
> My config is:
>
> name="21-10918"
> kernel="/var/storage/kernel/debian/6/kernel-64"
> ramdisk="/var/storage/kernel/ramdisk-64"
> vif=["mac=00:16:3e:00:2c:63,ip=62.76.185.64"]
> disk=["phy:/dev/disk/vbd/21-916,xvda,w"]
> memory=512
> maxmem=1024
> vcpus=3
> maxvcpus=3
> cpu_cap=300
> cpu_weight=256
> vfb=["type=vnc,vnclisten=0.0.0.0,vncpasswd=7QOG1885y3"]
> extra="root=/dev/xvda1 selinux=1 enforcing=0 iommu=off swiotlb=off
> earlyprintk=xen console=hvc0"
> on_restart="destroy"
> cpuid="host,x2apic=0,aes=0,xsave=0,avx=0"
> device_model_version="qemu-xen"
> device_model_override="/usr/bin/qemu-system-x86_64"
>
> --
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru



-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 10:13:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLCn-0002Hw-IO; Mon, 17 Feb 2014 10:13:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFLCm-0002Hr-PB
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 10:13:41 +0000
Received: from [85.158.139.211:26633] by server-11.bemta-5.messagelabs.com id
	49/35-23886-4D0E1035; Mon, 17 Feb 2014 10:13:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392632016!431516!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1837 invoked from network); 17 Feb 2014 10:13:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 10:13:37 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="101375183"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 10:13:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 05:13:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFLCh-0003IY-01;
	Mon, 17 Feb 2014 10:13:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFLCg-0001Zw-SO;
	Mon, 17 Feb 2014 10:13:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25102-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 10:13:34 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25102: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25102 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25102/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemuu-freebsd10-i386  8 guest-start         fail pass in 25061
 test-amd64-i386-qemuu-rhel6hvm-intel 3 host-install(3) broken in 25061 pass in 25102
 test-amd64-i386-xl-multivcpu  3 host-install(3)  broken in 25061 pass in 25102
 test-amd64-i386-qemut-rhel6hvm-intel 3 host-install(3) broken in 25061 pass in 25102
 test-amd64-amd64-xl-sedf      3 host-install(3)  broken in 25061 pass in 25102
 test-amd64-i386-xl-credit2    3 host-install(3)  broken in 25061 pass in 25102
 test-amd64-i386-xl-winxpsp3-vcpus1 3 host-install(3) broken in 25061 pass in 25102

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 10:18:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLHX-0002PS-BE; Mon, 17 Feb 2014 10:18:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFLHV-0002PM-Rd
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 10:18:34 +0000
Received: from [85.158.143.35:51179] by server-2.bemta-4.messagelabs.com id
	DD/E3-10891-9F1E1035; Mon, 17 Feb 2014 10:18:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392632312!6167182!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10174 invoked from network); 17 Feb 2014 10:18:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 10:18:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 10:18:32 +0000
Message-Id: <5301F000020000780011CCE0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 10:18:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>,"Tim Deegan" <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
In-Reply-To: <20140213162022.GE82703@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 17:20, Tim Deegan <tim@xen.org> wrote:
> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>> >>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> > On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>> >> George Dunlap wrote on 2014-02-11:
>> >>> I think I got a bit distracted with the "A isn't really so bad" thing.
>> >>> Actually, if the overhead of not sharing tables isn't very high, then
>> >>> B isn't such a bad option.  In fact, B is what I expected Yang to
>> >>> submit when he originally described the problem.
>> >> Actually, the first solution that came to my mind was B. Then I realized
>> >> that even if we chose B, we still cannot track memory updates from DMA
>> >> (even with the A/D bit, it is still a problem). Also, considering the
>> >> current use case of log dirty in Xen (only vram tracking has a problem),
>> >> I thought A was better: the hypervisor only needs to track vram changes.
>> >> If a malicious guest tries to DMA to the vram range, it only crashes
>> >> itself (which should be reasonable).
>> >>
>> >>> I was going to say, from a release perspective, B is probably the
>> >>> safest option for now.  But on the other hand, if we've been testing
>> >>> sharing all this time, maybe switching back over to non-sharing
>> >>> whole-hog has the higher risk?
>> >> Another problem with B is that the current VT-d large page support
>> >> relies on sharing the EPT and VT-d page tables. This means that if we
>> >> choose B, then we would need to re-enable VT-d large pages separately;
>> >> otherwise it would be a huge performance impact for Xen 4.4 when
>> >> using VT-d.
>> > 
>> > OK -- if that's the case, then it definitely tips the balance back to 
>> > A.  Unless Tim or Jan disagrees, can one of you two check it in?
>> > 
>> > Don't rush your judgement; but it would be nice to have this in before 
>> > RC4, which would mean checking it in today preferrably, or early 
>> > tomorrow at the latest.
>> 
>> That would be Tim then, as he would have to approve of it anyway.
> 
> Done.

Actually I'm afraid there are two problems with this patch:

For one, is enabling "global" log dirty mode still going to work
after VRAM-only mode already got enabled? I ask because the
paging_mode_log_dirty() check which paging_log_dirty_enable()
does first thing suggests otherwise to me (i.e. the now
conditional setting of all p2m entries to p2m_ram_logdirty would
seem to never get executed). IOW I would think that we're now
lacking a control operation allowing the transition from dirty VRAM
tracking mode to full log dirty mode.
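The control flow being questioned here can be modelled in a few lines (a hypothetical Python rendering, not Xen's actual C code; the names loosely follow the functions mentioned above): once the domain is already in log-dirty mode for VRAM tracking, the first-thing paging_mode_log_dirty() check makes the global enable bail out before it can mark all p2m entries p2m_ram_logdirty.

```python
# Hypothetical model of the control flow described above (not Xen's C code).

EINVAL = 22


class Domain:
    def __init__(self):
        self.log_dirty = False         # what paging_mode_log_dirty() reports
        self.all_p2m_logdirty = False  # whether every p2m entry was marked


def enable_vram_tracking(d: Domain) -> None:
    """VRAM-only tracking: turns on log-dirty without global p2m marking."""
    d.log_dirty = True


def paging_log_dirty_enable(d: Domain) -> int:
    """Global log-dirty enable, with the first-thing mode check."""
    if d.log_dirty:                    # already in log-dirty mode: bail out
        return -EINVAL
    d.log_dirty = True
    d.all_p2m_logdirty = True          # set all p2m entries to logdirty
    return 0


d = Domain()
enable_vram_tracking(d)
rc = paging_log_dirty_enable(d)        # -EINVAL: no path to full log-dirty
```

In this model the VRAM-to-global transition can never mark the remaining p2m entries, which is exactly the missing control operation the paragraph above points at.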

And second, I have been fighting with finding both conditions
and (eventually) the root cause of a severe performance
regression (compared to 4.3.x) I'm observing on an EPT+IOMMU
system. This became _much_ worse after adding in the patch here
(while in fact I had hoped it might help with the originally observed
degradation): X startup fails due to timing out, and booting the
guest now takes about 20 minutes. I haven't found the root cause of
this yet, but meanwhile I know that
- the same isn't observable on SVM
- there's no problem when forcing the domain to use shadow
  mode
- there's no need for any device to actually be assigned to the
  guest
- the regression is very likely purely graphics related (based on
  the observation that when running something that regularly but
  not heavily updates the screen with X up, the guest consumes a
  full CPU's worth of processing power, yet when that updating
  doesn't happen, CPU consumption goes down, and it goes further
  down when shutting down X altogether - at least as long as the
  patch here doesn't get involved).
This I'm observing on a Westmere box (and I didn't notice it earlier
because that's one of those where due to a chipset erratum the
IOMMU gets turned off by default), so it's possible that this can't
be seen on more modern hardware. I'll hopefully find time today to
check this on the one newer (Sandy Bridge) box I have.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 10:18:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLHX-0002PS-BE; Mon, 17 Feb 2014 10:18:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFLHV-0002PM-Rd
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 10:18:34 +0000
Received: from [85.158.143.35:51179] by server-2.bemta-4.messagelabs.com id
	DD/E3-10891-9F1E1035; Mon, 17 Feb 2014 10:18:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392632312!6167182!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10174 invoked from network); 17 Feb 2014 10:18:32 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 10:18:32 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 10:18:32 +0000
Message-Id: <5301F000020000780011CCE0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 10:18:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>,"Tim Deegan" <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
In-Reply-To: <20140213162022.GE82703@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 17:20, Tim Deegan <tim@xen.org> wrote:
> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>> >>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> > On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>> >> George Dunlap wrote on 2014-02-11:
>> >>> I think I got a bit distracted with the "A isn't really so bad" thing.
>> >>> Actually, if the overhead of not sharing tables isn't very high, then
>> >>> B isn't such a bad option.  In fact, B is what I expected Yang to
>> >>> submit when he originally described the problem.
>> >> Actually, the first solution that came to my mind was B. Then I
>> >> realized that even if we chose B, we still could not track memory
>> >> updates from DMA (even with A/D bits, it is still a problem). Also,
>> >> considering the current use case of log dirty in Xen (only VRAM
>> >> tracking has the problem), I thought A was better: the hypervisor
>> >> only needs to track VRAM changes. If a malicious guest tries to DMA
>> >> to the VRAM range, it only crashes itself (this should be reasonable).
>> >>
>> >>> I was going to say, from a release perspective, B is probably the
>> >>> safest option for now.  But on the other hand, if we've been testing
>> >>> sharing all this time, maybe switching back over to non-sharing
>> >>> whole-hog has the higher risk?
>> >> Another problem with B is that the current VT-d large page support
>> >> relies on sharing the EPT and VT-d page tables. This means that if we
>> >> choose B, then we would need to disable VT-d large pages again. This
>> >> would be a huge performance impact for Xen 4.4 when using VT-d.
>> > 
>> > OK -- if that's the case, then it definitely tips the balance back to 
>> > A.  Unless Tim or Jan disagrees, can one of you two check it in?
>> > 
>> > Don't rush your judgement; but it would be nice to have this in before 
>> > RC4, which would mean checking it in today preferably, or early 
>> > tomorrow at the latest.
>> 
>> That would be Tim then, as he would have to approve of it anyway.
> 
> Done.

Actually I'm afraid there are two problems with this patch:

For one, is enabling "global" log dirty mode still going to work
after VRAM-only mode has already been enabled? I ask because the
paging_mode_log_dirty() check which paging_log_dirty_enable()
does first thing suggests otherwise to me (i.e. the now-conditional
setting of all p2m entries to p2m_ram_logdirty would seem never
to get executed). IOW I would think that we're now lacking a
control operation allowing the transition from dirty-VRAM
tracking mode to full log-dirty mode.

And second, I have been fighting with finding both conditions
and (eventually) the root cause of a severe performance
regression (compared to 4.3.x) I'm observing on an EPT+IOMMU
system. This became _much_ worse after adding in the patch here
(while in fact I had hoped it might help with the originally observed
degradation): X startup fails due to timing out, and booting the
guest now takes about 20 minutes. I haven't found the root cause of
this yet, but meanwhile I know that
- the same isn't observable on SVM
- there's no problem when forcing the domain to use shadow
  mode
- there's no need for any device to actually be assigned to the
  guest
- the regression is very likely purely graphics related (based on
  the observation that when running something that regularly but
  not heavily updates the screen with X up, the guest consumes a
  full CPU's worth of processing power, yet when that updating
  doesn't happen, CPU consumption goes down, and it goes further
  down when shutting down X altogether - at least as long as the
  patch here doesn't get involved).
This I'm observing on a Westmere box (and I didn't notice it earlier
because that's one of those where due to a chipset erratum the
IOMMU gets turned off by default), so it's possible that this can't
be seen on more modern hardware. I'll hopefully find time today to
check this on the one newer (Sandy Bridge) box I have.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 10:19:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:19:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLII-0002Sx-PU; Mon, 17 Feb 2014 10:19:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1WFLIH-0002Sk-E9
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 10:19:21 +0000
Received: from [85.158.143.35:47875] by server-3.bemta-4.messagelabs.com id
	B7/8A-11539-822E1035; Mon, 17 Feb 2014 10:19:20 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392632358!6177785!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24347 invoked from network); 17 Feb 2014 10:19:20 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-6.tower-21.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 17 Feb 2014 10:19:20 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S9245AbaBQKTL (ORCPT <rfc822;xen-devel@lists.xenproject.org>);
	Mon, 17 Feb 2014 11:19:11 +0100
Date: Mon, 17 Feb 2014 11:19:11 +0100
From: Daniel Kiper <dkiper@net-space.pl>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140217101911.GA13058@router-fw-old.local.net-space.pl>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
User-Agent: Mutt/1.3.28i
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 01:27:57PM +0400, Vasiliy Tolstov wrote:
> Update info:
> If I run xl mem-set XXX maxmem from dom0, the domU can balloon up/down fine.
> What is the difference between 4.1 and 4.3 in this case?

Have you moved from xm to xl? If so, that is the cause of the change. You can
find more info here: http://lists.xen.org/archives/html/xen-devel/2013-04/msg03072.html
Those patches have not been applied. However, I am going to get back to this
work one day.

Daniel


From xen-devel-bounces@lists.xen.org Mon Feb 17 10:26:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLOu-0002ia-S9; Mon, 17 Feb 2014 10:26:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WFLOt-0002iS-Ni
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 10:26:11 +0000
Received: from [85.158.143.35:7597] by server-1.bemta-4.messagelabs.com id
	EA/A1-31661-3C3E1035; Mon, 17 Feb 2014 10:26:11 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392632769!6179680!1
X-Originating-IP: [209.85.216.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23449 invoked from network); 17 Feb 2014 10:26:10 -0000
Received: from mail-qc0-f171.google.com (HELO mail-qc0-f171.google.com)
	(209.85.216.171)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 10:26:10 -0000
Received: by mail-qc0-f171.google.com with SMTP id n7so23465065qcx.2
	for <xen-devel@lists.xenproject.org>;
	Mon, 17 Feb 2014 02:26:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=s9NoMgNVQVJT6aCUdm2tfLxyny2ULzlLY+66eXPHzSQ=;
	b=dpTqla2Dgeh3cr4T9hJudh+rEBjAPBH0rQR9GtCP+Ko8AFTm0bCg4AkQJEYxn6EWkb
	7XlgicLpR+XO6bB3+NS2Pv6swDnH3yjIj+Fs23fMEQ1cIRsdzrRnN1tYOPU53Bv69AZ5
	tGJ9hZftASlh/nMHxF8BYBdwMuH0CB167C41A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=s9NoMgNVQVJT6aCUdm2tfLxyny2ULzlLY+66eXPHzSQ=;
	b=dUcU7HEBtjZMRolFjzJLUkvT7ZILf1mE3tYs2AXRxK6ftPT0ipJr2qVBSheCda7MGR
	F55xLhkaMzK0y8kfM1E5mZLcEe9mUGrHlhRmCcCFBqc6yQjTNBjLvIpo65EuwHhe48EE
	GgUss1R8kKzMikmlnM6MlM+TM0vOUUmAtmt1p/AVa+MIlcrrXdyIYe4/CK8baDavNMnu
	F2fE/PPACgJ5dpBXWk1+Xm/2nZADUIccQkTdnJ6SE5V008VSK4NZOhgrawD+HTajjFRz
	F5o1JoBAtEOJJtKko665z2iVTX1wT+PAR1cUiyRwN7sDpjaMap0kKdtPhiS6T5pU9IfQ
	+chQ==
X-Gm-Message-State: ALoCoQnVx5xV9scb+AlCjeI53ZQkesjeXy5yYFanszEksEG8rUcGmWHaEQ2L+hvvST+gthHElTt7
X-Received: by 10.229.84.198 with SMTP id k6mr842919qcl.20.1392632769022; Mon,
	17 Feb 2014 02:26:09 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Mon, 17 Feb 2014 02:25:53 -0800 (PST)
X-Originating-IP: [217.66.152.101]
In-Reply-To: <20140217101911.GA13058@router-fw-old.local.net-space.pl>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Mon, 17 Feb 2014 14:25:53 +0400
X-Google-Sender-Auth: PCc2HWLb0Ggzx3m9JoWn9o5f-j8
Message-ID: <CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
To: Daniel Kiper <dkiper@net-space.pl>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-17 14:19 GMT+04:00 Daniel Kiper <dkiper@net-space.pl>:
> Have you moved from xm to xl? If yes this is the change. You could
> find more info here: http://lists.xen.org/archives/html/xen-devel/2013-04/msg03072.html
> Those patches are not applied. However, I am going to back to this
> work one day.


Thanks! Yes, I moved from xm to xl.  Can these patches work with Xen 4.3?

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru


From xen-devel-bounces@lists.xen.org Mon Feb 17 10:27:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLQM-0002o0-Bt; Mon, 17 Feb 2014 10:27:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFLQK-0002nu-Gj
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 10:27:40 +0000
Received: from [85.158.139.211:20952] by server-12.bemta-5.messagelabs.com id
	33/3A-15415-B14E1035; Mon, 17 Feb 2014 10:27:39 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392632857!4381975!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23628 invoked from network); 17 Feb 2014 10:27:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 10:27:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="101378133"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 10:27:37 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 05:27:36 -0500
Message-ID: <5301E411.5060908@citrix.com>
Date: Mon, 17 Feb 2014 10:27:29 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
In-Reply-To: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: netdev@vger.kernel.org, "Luis R.
	Rodriguez" <mcgrof@suse.com>, linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 0/4] net: bridge / ip optimizations for
 virtual net backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/02/14 02:59, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> 
> This v2 series changes the approach from my original virtualization
> multicast patch series [0] by abandoning completely the multicast
> issues and instead generalizing an approach for virtualization
> backends. There are two things in common with virtualization
> backends:
> 
>   0) they should not become the root bridge
>   1) they don't need ipv4 / ipv6 interfaces

Why?  There's no real difference between a backend network device and a
physical device (from the point of view of the backend domain).  I do
not think these are intrinsic properties of backend devices.

I can see these being useful knobs for administrators (or management
toolstacks) to turn on, on a per-device basis.

David


From xen-devel-bounces@lists.xen.org Mon Feb 17 10:30:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLSP-0002x9-1E; Mon, 17 Feb 2014 10:29:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFLSN-0002x0-IK
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 10:29:47 +0000
Received: from [85.158.139.211:4796] by server-10.bemta-5.messagelabs.com id
	AF/CF-08578-A94E1035; Mon, 17 Feb 2014 10:29:46 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392632984!4389191!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20607 invoked from network); 17 Feb 2014 10:29:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 10:29:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="103127274"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 10:29:44 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 05:29:43 -0500
Message-ID: <5301E496.40802@citrix.com>
Date: Mon, 17 Feb 2014 10:29:42 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-4-git-send-email-mcgrof@do-not-panic.com>
In-Reply-To: <1392433180-16052-4-git-send-email-mcgrof@do-not-panic.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	kvm@vger.kernel.org, netdev@vger.kernel.org,
	"Luis R. Rodriguez" <mcgrof@suse.com>, linux-kernel@vger.kernel.org,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 3/4] xen-netback: use a random MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/02/14 02:59, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> 
> The purpose of using a static MAC address of FE:FF:FF:FF:FF:FF
> was to prevent our backend interfaces from being used by the
> bridge and nominating our interface as a root bridge. This was
> possible given that the bridge code will use the lowest MAC
> address for a port once a new interface gets added to the bridge.
> The bridge code now has a generic feature to allow interfaces
> to opt out of root bridge nominations; use that instead.
[...]
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -42,6 +42,8 @@
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
>  
> +static const u8 xen_oui[3] = { 0x00, 0x16, 0x3e };

You shouldn't use a vendor prefix with a random MAC address.  Instead,
set the locally administered bit, clear the multicast bit, and
randomize the remaining 46 bits.

(If existing VIF scripts are doing something similar, they also need to
be fixed.)

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 10:32:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:32:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLUe-00034H-KV; Mon, 17 Feb 2014 10:32:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WFLUd-000347-Pp
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 10:32:08 +0000
Received: from [85.158.137.68:44915] by server-8.bemta-3.messagelabs.com id
	C8/B7-16039-725E1035; Mon, 17 Feb 2014 10:32:07 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392633124!1078897!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14762 invoked from network); 17 Feb 2014 10:32:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 10:32:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="101379278"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 10:32:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 05:32:03 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WFLUY-00053v-T1;
	Mon, 17 Feb 2014 10:32:02 +0000
Date: Mon, 17 Feb 2014 10:32:02 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140217103202.GI18398@zion.uk.xensource.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 01:27:57PM +0400, Vasiliy Tolstov wrote:
> Update info:
> If I do xl mem-set XXX maxmem from dom0, domU can balloon up/down fine.
> What is the difference between 4.1 and 4.3 in this case?
> 

I don't quite know the difference between 4.1 and 4.3 off the top of my
head. 4.1 is too old for me.

I think I've seen this before. The root cause is that xl now always
caps maxmem to memory when building a domain but never raises it to the
real maxmem set in the config file. It's a matter of turning a parameter
from "true" to "false" in the source code, but that isn't exposed to
users at the moment.

If you're interested, have a look at libxl/xl_cmdimpl.c:2683. The
"enforce" parameter to libxl_set_memory_target is always set to 1.

Daniel posted a series of patches to fix that behavior, but it wasn't
merged. It's too late to fix that for 4.5.  We can revisit that series
during the next development cycle if we have time.

http://lists.xenproject.org/archives/html/xen-devel/2013-04/msg03072.html

Wei.

> 2014-02-14 22:12 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> > 2014-02-14 20:34 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> >> My gut feeling is nothing is wrong with Xen. It's just that you have a
> >> typo in your config file. :-)
> >>
> >> Wei.
> >
> > No, if i have error in config domain can't start. You known that.
> > my config is:
> >
> > name="21-10918"
> > kernel="/var/storage/kernel/debian/6/kernel-64"
> > ramdisk="/var/storage/kernel/ramdisk-64"
> > vif=["mac=00:16:3e:00:2c:63,ip=62.76.185.64"]
> > disk=["phy:/dev/disk/vbd/21-916,xvda,w"]
> > memory=512
> > maxmem=1024
> > vcpus=3
> > maxvcpus=3
> > cpu_cap=300
> > cpu_weight=256
> > vfb=["type=vnc,vnclisten=0.0.0.0,vncpasswd=7QOG1885y3"]
> > extra="root=/dev/xvda1 selinux=1 enforcing=0 iommu=off swiotlb=off
> > earlyprintk=xen console=hvc0"
> > on_restart="destroy"
> > cpuid="host,x2apic=0,aes=0,xsave=0,avx=0"
> > device_model_version="qemu-xen"
> > device_model_override="/usr/bin/qemu-system-x86_64"
> >
> > --
> > Vasiliy Tolstov,
> > e-mail: v.tolstov@selfip.ru
> > jabber: vase@selfip.ru
> 
> 
> 
> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 10:56:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 10:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFLsY-0003X5-Df; Mon, 17 Feb 2014 10:56:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1WFLsW-0003Wy-FE
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 10:56:48 +0000
Received: from [85.158.137.68:33769] by server-16.bemta-3.messagelabs.com id
	3C/A4-29917-FEAE1035; Mon, 17 Feb 2014 10:56:47 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392634605!2342107!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10829 invoked from network); 17 Feb 2014 10:56:46 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-11.tower-31.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 17 Feb 2014 10:56:46 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S1612380AbaBQK4l (ORCPT <rfc822;xen-devel@lists.xenproject.org>);
	Mon, 17 Feb 2014 11:56:41 +0100
Date: Mon, 17 Feb 2014 11:56:41 +0100
From: Daniel Kiper <dkiper@net-space.pl>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140217105641.GA13535@router-fw-old.local.net-space.pl>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
User-Agent: Mutt/1.3.28i
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 02:25:53PM +0400, Vasiliy Tolstov wrote:
> 2014-02-17 14:19 GMT+04:00 Daniel Kiper <dkiper@net-space.pl>:
> > Have you moved from xm to xl? If yes, this is the change. You can
> > find more info here: http://lists.xen.org/archives/html/xen-devel/2013-04/msg03072.html
> > Those patches are not applied. However, I am going to get back to
> > this work one day.
>
>
> Thanks! Yes, I moved from xm to xl.  Can these patches work with Xen 4.3?

They should. IIRC they were targeted for 4.3.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:11:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFM68-0003ou-SC; Mon, 17 Feb 2014 11:10:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WFM66-0003op-RD
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 11:10:51 +0000
Received: from [85.158.143.35:38721] by server-1.bemta-4.messagelabs.com id
	43/8D-31661-A3EE1035; Mon, 17 Feb 2014 11:10:50 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392635447!6193238!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22330 invoked from network); 17 Feb 2014 11:10:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:10:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,859,1384300800"; d="scan'208";a="103135112"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:10:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:10:24 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WFM5f-0005bu-HO; Mon, 17 Feb 2014 11:10:23 +0000
Message-ID: <5301EE1F.8080305@citrix.com>
Date: Mon, 17 Feb 2014 11:10:23 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1392399353-11973-1-git-send-email-andrew.bennieston@citrix.com>
	<1392399353-11973-4-git-send-email-andrew.bennieston@citrix.com>
	<52FE5AC6.9000300@citrix.com>
In-Reply-To: <52FE5AC6.9000300@citrix.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V3 net-next 3/5] xen-netfront:
 Factor	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 18:04, David Vrabel wrote:
> On 14/02/14 17:35, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> In preparation for multi-queue support in xen-netfront, move the
>> queue-specific data from struct netfront_info to struct netfront_queue,
>> and update the rest of the code to use this.
>>
>> Also adds loops over queues where appropriate, even though only one is
>> configured at this point, and uses alloc_etherdev_mq() and the
>> corresponding multi-queue netif wake/start/stop functions in preparation
>> for multiple active queues.
>>
>> Finally, implements a trivial queue selection function suitable for
>> ndo_select_queue, which simply returns 0, selecting the first (and
>> only) queue.
> [...]
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
> [...]
>> @@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
> [...]
>> +	for (i = 0; i < info->num_queues; ++i) {
>> +		queue = &info->queues[i];
>> +		del_timer_sync(&queue->rx_refill_timer);
>> +	}
>> +
>> +	if (info->num_queues) {
>> +		kfree(info->queues);
>> +		info->queues = NULL;
>> +	}
>> +
>>   	xennet_sysfs_delif(info->netdev);
>>
>>   	unregister_netdev(info->netdev);
>>
>> -	del_timer_sync(&info->rx_refill_timer);
>> -
>
> This has reordered the del_timer_sync() to before the
> unregister_netdev() call.
>
> Can you be sure that the timer cannot be restarted after deleting it?
>
> David
>
Looking at the code, mod_timer() is called only from
xennet_alloc_rx_buffers(). That, in turn, is called from xennet_poll(),
the registered NAPI handler, which should not run after napi_disable()
has been called in xennet_close(). xennet_close() is called to stop the
interface, which should happen before the module is removed (unless I'm
mistaken here). So this should be safe.

That said, there is no reason that the queue cleanup has to happen 
before the unregister_netdev() call. I'll move it to after that point, 
just to be safe.

-Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 18:04, David Vrabel wrote:
> On 14/02/14 17:35, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> In preparation for multi-queue support in xen-netfront, move the
>> queue-specific data from struct netfront_info to struct netfront_queue,
>> and update the rest of the code to use this.
>>
>> Also adds loops over queues where appropriate, even though only one is
>> configured at this point, and uses alloc_etherdev_mq() and the
>> corresponding multi-queue netif wake/start/stop functions in preparation
>> for multiple active queues.
>>
>> Finally, implements a trivial queue selection function suitable for
>> ndo_select_queue, which simply returns 0, selecting the first (and
>> only) queue.
> [...]
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
> [...]
>> @@ -2048,17 +2196,27 @@ static const struct xenbus_device_id netfront_ids[] = {
> [...]
>> +	for (i = 0; i < info->num_queues; ++i) {
>> +		queue = &info->queues[i];
>> +		del_timer_sync(&queue->rx_refill_timer);
>> +	}
>> +
>> +	if (info->num_queues) {
>> +		kfree(info->queues);
>> +		info->queues = NULL;
>> +	}
>> +
>>   	xennet_sysfs_delif(info->netdev);
>>
>>   	unregister_netdev(info->netdev);
>>
>> -	del_timer_sync(&info->rx_refill_timer);
>> -
>
> This has reordered the del_timer_sync() to before the
> unregister_netdev() call.
>
> Can you be sure that the timer cannot be restarted after deleting it?
>
> David
>
Looking at the code, mod_timer() is called only from 
xennet_alloc_rx_buffers(). This, in turn, is called only from 
xennet_poll(), which is the registered NAPI handler function. This 
should not be called after a napi_disable(), which is done in 
xennet_close(). xennet_close() is called to stop the interface, which 
should happen before the module is removed (unless I'm mistaken here). 
So this should be safe.

That said, there is no reason that the queue cleanup has to happen 
before the unregister_netdev() call. I'll move it to after that point, 
just to be safe.
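
A sketch of that ordering (illustrative only, based on the V3 patch 
structure; not compile-tested):

```c
static int xennet_remove(struct xenbus_device *dev)
{
	struct netfront_info *info = dev_get_drvdata(&dev->dev);
	unsigned int i;

	xennet_disconnect_backend(info);
	xennet_sysfs_delif(info->netdev);

	/* By this point the interface is closed: napi_disable() has run
	 * from xennet_close(), so xennet_poll() -> xennet_alloc_rx_buffers()
	 * can no longer re-arm rx_refill_timer once it is deleted below. */
	unregister_netdev(info->netdev);

	for (i = 0; i < info->num_queues; i++)
		del_timer_sync(&info->queues[i].rx_refill_timer);

	kfree(info->queues);
	info->queues = NULL;

	free_netdev(info->netdev);
	return 0;
}
```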

-Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:12:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:12:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFM7I-0003sx-BL; Mon, 17 Feb 2014 11:12:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lliubbo@gmail.com>) id 1WFM7E-0003sl-Kh
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 11:12:00 +0000
Received: from [85.158.137.68:47247] by server-14.bemta-3.messagelabs.com id
	23/A3-08196-F7EE1035; Mon, 17 Feb 2014 11:11:59 +0000
X-Env-Sender: lliubbo@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392635517!1090067!1
X-Originating-IP: [209.85.192.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28275 invoked from network); 17 Feb 2014 11:11:58 -0000
Received: from mail-pd0-f171.google.com (HELO mail-pd0-f171.google.com)
	(209.85.192.171)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:11:58 -0000
Received: by mail-pd0-f171.google.com with SMTP id g10so14695874pdj.16
	for <xen-devel@lists.xenproject.org>;
	Mon, 17 Feb 2014 03:11:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:to:cc:subject:date:message-id;
	bh=gmyusjuauBwIRC2+GNtcLNwsZWb2IqRztq4bZQTUlxQ=;
	b=B9H1uPgO6Yy6bSk6ghUNC1ES/dwhlrQmZCzhkFHuWJ5kcAbrSanXEwGAeufUDZEd3Q
	3/WRDJnTiXNgOSsP68DabOuEPK374xT8fnOwtGOyCcek5yYOwgXsa3OWuh2XiWAYj2zQ
	IVpx7/cU3shjmcUfaC/iUZMmS0C6hpddfsvYdaVxfIu2fQd3oub8k9VH+ezChSm7CzrF
	+6Ec3Zy60O9H9JxaE5v4PLca0/0eIhyW2D7ZO0n+jhzGwTO0t35v9U1PmuC2KfxWlza1
	/luthWi4jn+OAPxEbhWGr8fzCjV9BrmE6iTj3hDcN7BoofljQypKfYD7G0s83SNBFI0v
	AoDg==
X-Received: by 10.68.204.231 with SMTP id lb7mr11815569pbc.30.1392635516744;
	Mon, 17 Feb 2014 03:11:56 -0800 (PST)
Received: from localhost ([116.227.28.52]) by mx.google.com with ESMTPSA id
	vx10sm115076735pac.17.2014.02.17.03.11.51 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 03:11:56 -0800 (PST)
From: Bob Liu <lliubbo@gmail.com>
To: konrad.wilk@oracle.com
Date: Mon, 17 Feb 2014 19:11:39 +0800
Message-Id: <1392635499-24878-1-git-send-email-bob.liu@oracle.com>
X-Mailer: git-send-email 1.7.10.4
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: [Xen-devel] [PATCH] drivers:xen-selfballoon:reset
	'frontswap_inertia_counter' after frontswap_shrink
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When I looked at this issue https://lkml.org/lkml/2013/11/21/158, I found that
frontswap_selfshrink() sometimes doesn't work as expected: pages are
continuously added to frontswap and then reclaimed soon after. This wastes
CPU time and increases the memory pressure on the guest OS.

Take an example.
First time in frontswap_selfshrink():
1. last_frontswap_pages = cur_frontswap_pages = 0
2. cur_frontswap_pages  = frontswap_curr_pages() = 100

When 'frontswap_inertia_counter' decreased to 0:
1. last_frontswap_pages = cur_frontswap_pages = 100
2. cur_frontswap_pages = frontswap_curr_pages() = 100
3. call frontswap_shrink() and let's assume that 10 pages are reclaimed
   from frontswap.
4. now frontswap_curr_pages() is 90.

If the guest OS then runs short of memory and 9 more pages (fewer than were
reclaimed) are added to frontswap.
Now frontswap_curr_pages() is 99 and we don't expect to reclaim more pages
from frontswap, because the guest OS is under memory pressure.

But next time in frontswap_selfshrink():
1. last_frontswap_pages is set to the old value of cur_frontswap_pages
   (still 100)
2. cur_frontswap_pages (99) is still smaller than last_frontswap_pages.
3. frontswap_shrink() is called and continues to pull pages back from
   frontswap!

Signed-off-by: Bob Liu <bob.liu@oracle.com>
---
 drivers/xen/xen-selfballoon.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/xen/xen-selfballoon.c b/drivers/xen/xen-selfballoon.c
index 21e18c1..bd0b416 100644
--- a/drivers/xen/xen-selfballoon.c
+++ b/drivers/xen/xen-selfballoon.c
@@ -170,6 +170,7 @@ static void frontswap_selfshrink(void)
 		tgt_frontswap_pages = cur_frontswap_pages -
 			(cur_frontswap_pages / frontswap_hysteresis);
 	frontswap_shrink(tgt_frontswap_pages);
+	frontswap_inertia_counter = frontswap_inertia;
 }
 
 #endif /* CONFIG_FRONTSWAP */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 17 11:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMOr-0004Ks-PD; Mon, 17 Feb 2014 11:30:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFMOq-0004Ki-Ca
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 11:30:12 +0000
Received: from [85.158.137.68:10285] by server-10.bemta-3.messagelabs.com id
	CF/46-07302-3C2F1035; Mon, 17 Feb 2014 11:30:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392636608!2363068!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21892 invoked from network); 17 Feb 2014 11:30:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:30:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101390742"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 11:30:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 06:30:06 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFMOk-0003jr-7p;
	Mon, 17 Feb 2014 11:30:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFMOj-00028U-TY;
	Mon, 17 Feb 2014 11:30:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25108-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 11:30:05 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25108: trouble:
	pass/preparing/queued/running
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25108 xen-4.2-testing running [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25108/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pcipt-intel  2 hosts-allocate        running [st=running!]
 test-amd64-i386-pv              <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-amd64-amd64-pv           3 host-install(3)          running [st=running!]
 test-amd64-i386-xl-multivcpu    <none executed>              queued
 test-i386-i386-pv               <none executed>              queued
 test-i386-i386-xl               <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-amd64-amd64-xl-sedf      2 hosts-allocate           running [st=running!]
 test-amd64-amd64-xl-sedf-pin  2 hosts-allocate           running [st=running!]
 test-amd64-amd64-xl           2 hosts-allocate           running [st=running!]
 test-amd64-i386-qemuu-freebsd10-i386    <none executed>              queued
 test-amd64-i386-xend-qemut-winxpsp3    <none executed>              queued
 test-i386-i386-pair             <none executed>              queued
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1    <none executed>             queued
 test-amd64-i386-pair            <none executed>              queued
 test-i386-i386-xl-qemut-winxpsp3    <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64  2 hosts-allocate   running [st=running!]
 test-amd64-amd64-xl-winxpsp3  2 hosts-allocate           running [st=running!]
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1    <none executed>             queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64  2 hosts-allocate   running [st=running!]
 test-amd64-amd64-xl-win7-amd64  2 hosts-allocate         running [st=running!]
 test-amd64-i386-xl-credit2      <none executed>              queued
 test-amd64-amd64-pair         2 hosts-allocate           running [st=running!]
 build-amd64-oldkern           1 hosts-allocate           running [st=running!]
 test-amd64-i386-xl-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 build-i386                    1 hosts-allocate           running [st=running!]
 build-i386-pvops              1 hosts-allocate           running [st=running!]
 test-amd64-i386-rhel6hvm-intel    <none executed>              queued
 test-amd64-i386-qemuu-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-xend-winxpsp3    <none executed>              queued
 build-i386-oldkern            1 hosts-allocate           running [st=running!]
 test-amd64-amd64-xl-qemuu-winxpsp3  2 hosts-allocate     running [st=running!]
 test-i386-i386-xl-qemuu-winxpsp3    <none executed>              queued
 test-i386-i386-xl-winxpsp3      <none executed>              queued
 test-amd64-amd64-xl-qemut-winxpsp3  2 hosts-allocate     running [st=running!]
 test-amd64-i386-xl-winxpsp3-vcpus1    <none executed>              queued

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   preparing
 build-amd64-oldkern                                          preparing
 build-i386-oldkern                                           preparing
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             preparing
 test-amd64-amd64-xl                                          preparing
 test-amd64-i386-xl                                           queued  
 test-i386-i386-xl                                            queued  
 test-amd64-i386-rhel6hvm-amd                                 queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-freebsd10-amd64                        queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         preparing
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         preparing
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-win7-amd64                               preparing
 test-amd64-i386-xl-win7-amd64                                queued  
 test-amd64-i386-xl-credit2                                   queued  
 test-amd64-i386-qemuu-freebsd10-i386                         queued  
 test-amd64-amd64-xl-pcipt-intel                              preparing
 test-amd64-i386-rhel6hvm-intel                               queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-i386-xl-multivcpu                                 queued  
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         queued  
 test-i386-i386-pair                                          queued  
 test-amd64-amd64-xl-sedf-pin                                 preparing
 test-amd64-amd64-pv                                          running 
 test-amd64-i386-pv                                           queued  
 test-i386-i386-pv                                            queued  
 test-amd64-amd64-xl-sedf                                     preparing
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     queued  
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     queued  
 test-amd64-i386-xl-winxpsp3-vcpus1                           queued  
 test-amd64-i386-xend-qemut-winxpsp3                          queued  
 test-amd64-amd64-xl-qemut-winxpsp3                           preparing
 test-i386-i386-xl-qemut-winxpsp3                             queued  
 test-amd64-amd64-xl-qemuu-winxpsp3                           preparing
 test-i386-i386-xl-qemuu-winxpsp3                             queued  
 test-amd64-i386-xend-winxpsp3                                queued  
 test-amd64-amd64-xl-winxpsp3                                 preparing
 test-i386-i386-xl-winxpsp3                                   queued  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemuu-win7-amd64                         preparing
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-win7-amd64                               preparing
 test-amd64-i386-xl-win7-amd64                                queued  
 test-amd64-i386-xl-credit2                                   queued  
 test-amd64-i386-qemuu-freebsd10-i386                         queued  
 test-amd64-amd64-xl-pcipt-intel                              preparing
 test-amd64-i386-rhel6hvm-intel                               queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-i386-xl-multivcpu                                 queued  
 test-amd64-amd64-pair                                        preparing
 test-amd64-i386-pair                                         queued  
 test-i386-i386-pair                                          queued  
 test-amd64-amd64-xl-sedf-pin                                 preparing
 test-amd64-amd64-pv                                          running 
 test-amd64-i386-pv                                           queued  
 test-i386-i386-pv                                            queued  
 test-amd64-amd64-xl-sedf                                     preparing
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     queued  
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     queued  
 test-amd64-i386-xl-winxpsp3-vcpus1                           queued  
 test-amd64-i386-xend-qemut-winxpsp3                          queued  
 test-amd64-amd64-xl-qemut-winxpsp3                           preparing
 test-i386-i386-xl-qemut-winxpsp3                             queued  
 test-amd64-amd64-xl-qemuu-winxpsp3                           preparing
 test-i386-i386-xl-qemuu-winxpsp3                             queued  
 test-amd64-i386-xend-winxpsp3                                queued  
 test-amd64-amd64-xl-winxpsp3                                 preparing
 test-i386-i386-xl-winxpsp3                                   queued  


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:33:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMRe-0004Vn-LU; Mon, 17 Feb 2014 11:33:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1WFMRd-0004Va-3j
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:33:05 +0000
Received: from [85.158.139.211:47054] by server-2.bemta-5.messagelabs.com id
	51/46-23037-073F1035; Mon, 17 Feb 2014 11:33:04 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392636781!4340511!1
X-Originating-IP: [64.18.0.188]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20932 invoked from network); 17 Feb 2014 11:33:03 -0000
Received: from exprod5og109.obsmtp.com (HELO exprod5og109.obsmtp.com)
	(64.18.0.188)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 11:33:03 -0000
Received: from mail-vc0-f174.google.com ([209.85.220.174]) (using TLSv1) by
	exprod5ob109.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwHzbRoV00kBpGkNYpTZfGvborIiQ6pD@postini.com;
	Mon, 17 Feb 2014 03:33:03 PST
Received: by mail-vc0-f174.google.com with SMTP id im17so11684899vcb.33
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 03:33:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=YrijfssekfOAKqisi0ns0yKv54N+D1S8xQrgQkHVguA=;
	b=BjBoL6otxGa8r9DyyOKL7TecwwRMS4D8QD24/wpv3qckVT5Env9o1B8/ZymYIZyz2R
	nj6ZQf1U4BdxJpic4cHr7UgySNbJXhC+7vU/xvdoXcVxEk5aS6gF/gjhOW9sd+PM1Hi6
	/YNIai2z12AbCt/pjl0ORUwyubaBtN+l8dgg4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=YrijfssekfOAKqisi0ns0yKv54N+D1S8xQrgQkHVguA=;
	b=bSge/+fF+xEL5q81x2NzGG0s05qHkIrP/WsaIzfHGcCR7E5MKX3fAzIfBS/x4+x2pi
	r3/Wy86GjiyuCHzdETVVCeYJiEd+6X5chBYoAhbR4xDOmfKu4GbSkDMDnlyO6Vo/2+VT
	m1y67g2inr4cxg8FaMxSD+VfdPKjIkRS4lmh/cR5HXi9PRK2eC36OpUSUdIIeqav3Dyl
	3ywh3U158s2B0FfKprx15wZfs5LfVY1snxy0dtdrQeLA1CcSPaahEGb/NQGJLLa41pvc
	vrnXml5jB+6PqC6FBDt7qkP4Zd5WdMpQlhuWE5gbX1yKu7nhCiP+P9FMav7351tdHngN
	ZdrA==
X-Gm-Message-State: ALoCoQmY12fmMRCvF5MEI5/Nnp6znvJ8l83nCDq9oREb5SoQcEJ+QgJXeQwK+adP7PuymZZJrYlpTaLpvPeiqwFilq/1vN/cISN6LAXpby+isBYIqIkNnzMIEU9XZmCefSMMfsbpgYVazDvQ8GfcQd/v04YNxWPRWx+G3ceXknSukvQ5jpFs13I=
X-Received: by 10.52.27.9 with SMTP id p9mr10513037vdg.28.1392636780468;
	Mon, 17 Feb 2014 03:33:00 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.52.27.9 with SMTP id p9mr10513030vdg.28.1392636780290; Mon,
	17 Feb 2014 03:33:00 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Mon, 17 Feb 2014 03:33:00 -0800 (PST)
Date: Mon, 17 Feb 2014 13:33:00 +0200
Message-ID: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Can anyone clarify whether it is possible to set up a run-time memory
trap in the Xen hypervisor?

To be more specific, I see the following function:
*int handle_mmio(mmio_info_t *info)*, which is called from *static
void do_trap_data_abort_guest(struct cpu_user_regs *regs, union hsr
hsr)*.
Using these calls I can define a memory region and create a trap for it,
but in the current implementation I can do this only at compile time.
Is there any way to do something similar at runtime, i.e. calculate the
memory region value in code and add an entry to mmio_handlers[] ?
Would it be a good idea simply to modify/extend the existing code in
xen/arch/arm/io.c to make this possible ?

regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWN-0004h5-OA; Mon, 17 Feb 2014 11:37:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWM-0004gj-Gu
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:37:58 +0000
Received: from [193.109.254.147:11213] by server-13.bemta-14.messagelabs.com
	id 62/F0-01226-594F1035; Mon, 17 Feb 2014 11:37:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392637075!1152132!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4381 invoked from network); 17 Feb 2014 11:37:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101392348"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-5f; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:36 +0000
Message-ID: <1392636577-10305-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: george.dunlap@eu.citrix.com, Tim Deegan <tim@xen.org>, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 2/3] x86/hvm/rtc: Inject RTC periodic
	interrupts from the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Let the vpt code drive the RTC's timer interrupts directly, as it does
for other periodic time sources, and fix up the register state in a
vpt callback when the interrupt is injected.

This fixes a hang seen on Windows 2003 in no-missed-ticks mode:
whenever a tick was pending, the early callback from the VPT code
would set REG_C.PF on every VMENTER, while the guest sat in its
interrupt handler reading REG_C in a loop, waiting to see it clear.

One drawback is that a guest that attempts to suppress RTC periodic
interrupts by failing to read REG_C will receive up to 10 spurious
interrupts, even in 'strict' mode.  However:
 - since all previous RTC models have had this property (including
   the current one, since 'no-ack' mode is hard-coded on) we're
   pretty sure that all guests can handle this; and
 - we're already playing some other interesting games with this
   interrupt in the vpt code.

One other corner case: a guest that enables the PF timer interrupt,
masks the interrupt in the APIC and then polls REG_C looking for PF
will not see PF getting set.  The more likely case of enabling the
timers and masking the interrupt with REG_B.PIE is already handled
correctly.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c        |   25 +++++++++++--------------
 xen/arch/x86/hvm/vpt.c        |   40 ----------------------------------------
 xen/include/asm-x86/hvm/vpt.h |    1 -
 3 files changed, 11 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 1a731f7..d641d95 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -78,29 +78,26 @@ static void rtc_update_irq(RTCState *s)
     hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
 }
 
-bool_t rtc_periodic_interrupt(void *opaque)
+/* Called by the VPT code after it's injected a PF interrupt for us.
+ * Fix up the register state to reflect what happened. */
+static void rtc_pf_callback(struct vcpu *v, void *opaque)
 {
     RTCState *s = opaque;
-    bool_t ret;
 
     spin_lock(&s->lock);
-    ret = rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF);
-    if ( rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_PF) )
-    {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
-        rtc_update_irq(s);
-    }
-    else if ( ++(s->pt_dead_ticks) >= 10 )
+
+    if ( !rtc_mode_is(s, no_ack)
+         && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF)
+         && ++(s->pt_dead_ticks) >= 10 )
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
         s->period = 0;
     }
-    if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        ret = 0;
-    spin_unlock(&s->lock);
 
-    return ret;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF|RTC_IRQF;
+
+    spin_unlock(&s->lock);
 }
 
 /* Check whether the REG_C.PF bit should have been set by a tick since
@@ -156,7 +153,7 @@ static void rtc_timer_update(RTCState *s)
                     delta = period - ((now - s->start_time) % period);
                 if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
                     create_periodic_time(v, &s->pt, delta, period,
-                                         RTC_IRQ, NULL, s);
+                                         RTC_IRQ, rtc_pf_callback, s);
                 else
                     s->check_ticks_since = now;
             }
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 1961bda..f7af688 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -231,12 +231,9 @@ int pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt;
     uint64_t max_lag;
     int irq, is_lapic;
-    void *pt_priv;
 
- rescan:
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
- rescan_locked:
     earliest_pt = NULL;
     max_lag = -1ULL;
     list_for_each_entry_safe ( pt, temp, head, list )
@@ -270,48 +267,11 @@ int pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
-    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    else if ( irq == RTC_IRQ && pt_priv )
-    {
-        if ( !rtc_periodic_interrupt(pt_priv) )
-            irq = -1;
-
-        pt_lock(earliest_pt);
-
-        if ( irq < 0 && earliest_pt->pending_intr_nr )
-        {
-            /*
-             * RTC periodic timer runs without the corresponding interrupt
-             * being enabled - need to mimic enough of pt_intr_post() to keep
-             * things going.
-             */
-            earliest_pt->pending_intr_nr = 0;
-            earliest_pt->irq_issued = 0;
-            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
-        }
-        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
-        {
-            if ( earliest_pt->on_list )
-            {
-                /* suspend timer emulation */
-                list_del(&earliest_pt->list);
-                earliest_pt->on_list = 0;
-            }
-            irq = -1;
-        }
-
-        /* Avoid dropping the lock if we can. */
-        if ( irq < 0 && v == earliest_pt->vcpu )
-            goto rescan_locked;
-        pt_unlock(earliest_pt);
-        if ( irq < 0 )
-            goto rescan;
-    }
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 9f48635..7d62653 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -184,7 +184,6 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
-bool_t rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWP-0004he-HO; Mon, 17 Feb 2014 11:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWO-0004h3-3F
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:38:00 +0000
Received: from [85.158.143.35:24160] by server-3.bemta-4.messagelabs.com id
	D3/35-11539-794F1035; Mon, 17 Feb 2014 11:37:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392637075!6212570!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29463 invoked from network); 17 Feb 2014 11:37:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103140218"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:58 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-2U; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:34 +0000
Message-ID: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	keir@xen.org, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 0/3] Move RTC interrupt injection back into
	the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This series implements the most recent idea Tim was proposing about
reworking the RTC PF interrupt injection.

Patch 1 switches handling the !PIE case to calculate the right answer
for REG_C.PF on demand rather than running the timers.
Patch 2 switches back to the old model of having the vpt code control
the timer interrupt injection; this is the fix for the w2k3 hang.
Patch 3 is just a minor cleanup, and not particularly necessary.

v3 has undergone extensive testing in XenRT, confirming that the w2k3
hang has not reoccurred in 100 tests (we would normally expect to see
10-30 recurrences), and the clock drift tests are happy with the new code.

Roger:
  Would you kindly test against FreeBSD again please?

George:
  Regarding 4.4, I believe these patches are now of sufficient
  quality to be accepted (subject to any other review).

  The previous statement of risk still applies: these are changes to
  a very complicated area of code, and the worst-case scenario is that
  a VM gets no, too few or too many timer interrupts, with possible
  clock drift as a result.  The XenRT test results help alleviate
  concern regarding the worst case.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWN-0004gx-Cr; Mon, 17 Feb 2014 11:37:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWM-0004gi-1t
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:37:58 +0000
Received: from [85.158.143.35:23973] by server-1.bemta-4.messagelabs.com id
	E8/12-31661-594F1035; Mon, 17 Feb 2014 11:37:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392637075!6212570!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29288 invoked from network); 17 Feb 2014 11:37:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103140211"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:54 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-82; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:37 +0000
Message-ID: <1392636577-10305-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, george.dunlap@eu.citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 3/3] x86/hvm/rtc: Always deassert the IRQ
	line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Even in no-ack mode, there's no reason to leave the line asserted
after an explicit ack of the interrupt.

Furthermore, rtc_update_irq() is an unconditional noop having just cleared
RTC_REG_C.

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/rtc.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index d641d95..639b4c5 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -673,9 +673,8 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
-        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
+        if ( ret & RTC_IRQF )
             hvm_isa_irq_deassert(d, RTC_IRQ);
-        rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
         s->pt_dead_ticks = 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWN-0004h5-OA; Mon, 17 Feb 2014 11:37:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWM-0004gj-Gu
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:37:58 +0000
Received: from [193.109.254.147:11213] by server-13.bemta-14.messagelabs.com
	id 62/F0-01226-594F1035; Mon, 17 Feb 2014 11:37:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392637075!1152132!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4381 invoked from network); 17 Feb 2014 11:37:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101392348"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-5f; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:36 +0000
Message-ID: <1392636577-10305-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: george.dunlap@eu.citrix.com, Tim Deegan <tim@xen.org>, keir@xen.org,
	JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 2/3] x86/hvm/rtc: Inject RTC periodic
	interrupts from the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Let the vpt code drive the RTC's timer interrupts directly, as it does
for other periodic time sources, and fix up the register state in a
vpt callback when the interrupt is injected.

This fixes a hang seen on Windows 2003 in no-missed-ticks mode, where
when a tick was pending, the early callback from the VPT code would
always set REG_C.PF on every VMENTER; meanwhile the guest was in its
interrupt handler reading REG_C in a loop and waiting to see it clear.

One drawback is that a guest that attempts to suppress RTC periodic
interrupts by failing to read REG_C will receive up to 10 spurious
interrupts, even in 'strict' mode.  However:
 - since all previous RTC models have had this property (including
   the current one, since 'no-ack' mode is hard-coded on) we're
   pretty sure that all guests can handle this; and
 - we're already playing some other interesting games with this
   interrupt in the vpt code.

One other corner case: a guest that enables the PF timer interrupt,
masks the interrupt in the APIC and then polls REG_C looking for PF
will not see PF getting set.  The more likely case of enabling the
timers and masking the interrupt with REG_B.PIE is already handled
correctly.

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/rtc.c        |   25 +++++++++++--------------
 xen/arch/x86/hvm/vpt.c        |   40 ----------------------------------------
 xen/include/asm-x86/hvm/vpt.h |    1 -
 3 files changed, 11 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index 1a731f7..d641d95 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -78,29 +78,26 @@ static void rtc_update_irq(RTCState *s)
     hvm_isa_irq_assert(vrtc_domain(s), RTC_IRQ);
 }
 
-bool_t rtc_periodic_interrupt(void *opaque)
+/* Called by the VPT code after it's injected a PF interrupt for us.
+ * Fix up the register state to reflect what happened. */
+static void rtc_pf_callback(struct vcpu *v, void *opaque)
 {
     RTCState *s = opaque;
-    bool_t ret;
 
     spin_lock(&s->lock);
-    ret = rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF);
-    if ( rtc_mode_is(s, no_ack) || !(s->hw.cmos_data[RTC_REG_C] & RTC_PF) )
-    {
-        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
-        rtc_update_irq(s);
-    }
-    else if ( ++(s->pt_dead_ticks) >= 10 )
+
+    if ( !rtc_mode_is(s, no_ack)
+         && (s->hw.cmos_data[RTC_REG_C] & RTC_IRQF)
+         && ++(s->pt_dead_ticks) >= 10 )
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
         s->period = 0;
     }
-    if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
-        ret = 0;
-    spin_unlock(&s->lock);
 
-    return ret;
+    s->hw.cmos_data[RTC_REG_C] |= RTC_PF|RTC_IRQF;
+
+    spin_unlock(&s->lock);
 }
 
 /* Check whether the REG_C.PF bit should have been set by a tick since
@@ -156,7 +153,7 @@ static void rtc_timer_update(RTCState *s)
                     delta = period - ((now - s->start_time) % period);
                 if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
                     create_periodic_time(v, &s->pt, delta, period,
-                                         RTC_IRQ, NULL, s);
+                                         RTC_IRQ, rtc_pf_callback, s);
                 else
                     s->check_ticks_since = now;
             }
diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 1961bda..f7af688 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -231,12 +231,9 @@ int pt_update_irq(struct vcpu *v)
     struct periodic_time *pt, *temp, *earliest_pt;
     uint64_t max_lag;
     int irq, is_lapic;
-    void *pt_priv;
 
- rescan:
     spin_lock(&v->arch.hvm_vcpu.tm_lock);
 
- rescan_locked:
     earliest_pt = NULL;
     max_lag = -1ULL;
     list_for_each_entry_safe ( pt, temp, head, list )
@@ -270,48 +267,11 @@ int pt_update_irq(struct vcpu *v)
     earliest_pt->irq_issued = 1;
     irq = earliest_pt->irq;
     is_lapic = (earliest_pt->source == PTSRC_lapic);
-    pt_priv = earliest_pt->priv;
 
     spin_unlock(&v->arch.hvm_vcpu.tm_lock);
 
     if ( is_lapic )
         vlapic_set_irq(vcpu_vlapic(v), irq, 0);
-    else if ( irq == RTC_IRQ && pt_priv )
-    {
-        if ( !rtc_periodic_interrupt(pt_priv) )
-            irq = -1;
-
-        pt_lock(earliest_pt);
-
-        if ( irq < 0 && earliest_pt->pending_intr_nr )
-        {
-            /*
-             * RTC periodic timer runs without the corresponding interrupt
-             * being enabled - need to mimic enough of pt_intr_post() to keep
-             * things going.
-             */
-            earliest_pt->pending_intr_nr = 0;
-            earliest_pt->irq_issued = 0;
-            set_timer(&earliest_pt->timer, earliest_pt->scheduled);
-        }
-        else if ( irq >= 0 && pt_irq_masked(earliest_pt) )
-        {
-            if ( earliest_pt->on_list )
-            {
-                /* suspend timer emulation */
-                list_del(&earliest_pt->list);
-                earliest_pt->on_list = 0;
-            }
-            irq = -1;
-        }
-
-        /* Avoid dropping the lock if we can. */
-        if ( irq < 0 && v == earliest_pt->vcpu )
-            goto rescan_locked;
-        pt_unlock(earliest_pt);
-        if ( irq < 0 )
-            goto rescan;
-    }
     else
     {
         hvm_isa_irq_deassert(v->domain, irq);
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 9f48635..7d62653 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -184,7 +184,6 @@ void rtc_migrate_timers(struct vcpu *v);
 void rtc_deinit(struct domain *d);
 void rtc_reset(struct domain *d);
 void rtc_update_clock(struct domain *d);
-bool_t rtc_periodic_interrupt(void *);
 
 void pmtimer_init(struct vcpu *v);
 void pmtimer_deinit(struct domain *d);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWP-0004hR-4E; Mon, 17 Feb 2014 11:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWM-0004go-UA
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:37:59 +0000
Received: from [85.158.143.35:15024] by server-2.bemta-4.messagelabs.com id
	80/E6-10891-694F1035; Mon, 17 Feb 2014 11:37:58 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392637075!6212570!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29371 invoked from network); 17 Feb 2014 11:37:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103140212"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-3W; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:35 +0000
Message-ID: <1392636577-10305-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, george.dunlap@eu.citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 1/3] x86/hvm/rtc: Don't run the vpt timer
	when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

If the guest has not asked for interrupts, don't run the vpt timer
to generate them.  This is a prerequisite for a patch to simplify how
the vpt interacts with the RTC, and also gets rid of a timer series in
Xen in a case where it's unlikely to be needed.

Instead, calculate the correct value for REG_C.PF whenever REG_C is
read or PIE is enabled.  This allows a guest to poll for the PF bit
while not asking for actual timer interrupts.  Such a guest would no
longer get the benefit of the vpt's timer modes.

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Changes in v3:

 * Always clobber s->period when changing REG_B.PIE, to cause
   rtc_update_timer() to restart the timer when setting REG_B.PIE without
   changing the REG_A divider control.
---
 xen/arch/x86/hvm/rtc.c        |   53 +++++++++++++++++++++++++++++++----------
 xen/include/asm-x86/hvm/vpt.h |    3 ++-
 2 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cdedefe..1a731f7 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -94,7 +94,7 @@ bool_t rtc_periodic_interrupt(void *opaque)
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
     }
     if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
         ret = 0;
@@ -103,6 +103,24 @@ bool_t rtc_periodic_interrupt(void *opaque)
     return ret;
 }
 
+/* Check whether the REG_C.PF bit should have been set by a tick since
+ * the last time we looked. This is used to track ticks when REG_B.PIE
+ * is clear; when PIE is set, PF ticks are handled by the VPT callbacks.  */
+static void check_for_pf_ticks(RTCState *s)
+{
+    s_time_t now;
+
+    if ( s->period == 0 || (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+        return;
+
+    now = NOW();
+    if ( (now - s->start_time) / s->period
+         != (s->check_ticks_since - s->start_time) / s->period )
+        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+
+    s->check_ticks_since = now;
+}
+
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
  * RTC_RATE_SELECT settings */
 static void rtc_timer_update(RTCState *s)
@@ -125,24 +143,29 @@ static void rtc_timer_update(RTCState *s)
     case RTC_REF_CLCK_4MHZ:
         if ( period_code != 0 )
         {
-            if ( period_code != s->pt_code )
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            if ( period != s->period )
             {
-                s->pt_code = period_code;
-                period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-                period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+                s_time_t now = NOW();
+
+                s->period = period;
                 if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
                     delta = 0;
                 else
-                    delta = period - ((NOW() - s->start_time) % period);
-                create_periodic_time(v, &s->pt, delta, period,
-                                     RTC_IRQ, NULL, s);
+                    delta = period - ((now - s->start_time) % period);
+                if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+                    create_periodic_time(v, &s->pt, delta, period,
+                                         RTC_IRQ, NULL, s);
+                else
+                    s->check_ticks_since = now;
             }
             break;
         }
         /* fall through */
     default:
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
         break;
     }
 }
@@ -484,14 +507,19 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
             if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
+        check_for_pf_ticks(s);
         s->hw.cmos_data[RTC_REG_B] = data;
         /*
          * If the interrupt is already set when the interrupt becomes
          * enabled, raise an interrupt immediately.
          */
         rtc_update_irq(s);
-        if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
+        if ( (data ^ orig) & RTC_PIE )
+        {
+            destroy_periodic_time(&s->pt);
+            s->period = 0;
             rtc_timer_update(s);
+        }
         if ( (data ^ orig) & RTC_SET )
             check_update_timer(s);
         if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
@@ -645,6 +673,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
             ret |= RTC_UIP;
         break;
     case RTC_REG_C:
+        check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
         if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
@@ -652,7 +681,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
-        rtc_timer_update(s);
+        s->pt_dead_ticks = 0;
         break;
     default:
         ret = s->hw.cmos_data[s->hw.cmos_index];
@@ -748,7 +777,7 @@ void rtc_reset(struct domain *d)
     RTCState *s = domain_vrtc(d);
 
     destroy_periodic_time(&s->pt);
-    s->pt_code = 0;
+    s->period = 0;
     s->pt.source = PTSRC_isa;
 }
 
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 87c3a66..9f48635 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -113,7 +113,8 @@ typedef struct RTCState {
     /* periodic timer */
     struct periodic_time pt;
     s_time_t start_time;
-    int pt_code;
+    s_time_t check_ticks_since;
+    int period;
     uint8_t pt_dead_ticks;
     uint32_t use_timer;
     spinlock_t lock;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWP-0004he-HO; Mon, 17 Feb 2014 11:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWO-0004h3-3F
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:38:00 +0000
Received: from [85.158.143.35:24160] by server-3.bemta-4.messagelabs.com id
	D3/35-11539-794F1035; Mon, 17 Feb 2014 11:37:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392637075!6212570!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29463 invoked from network); 17 Feb 2014 11:37:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103140218"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:58 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-2U; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:34 +0000
Message-ID: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: george.dunlap@eu.citrix.com, Andrew Cooper <andrew.cooper3@citrix.com>,
	keir@xen.org, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 0/3] Move RTC interrupt injection back into
	the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

This series implements the most recent idea Tim was proposing about
reworking the RTC PF interrupt injection.

Patch 1 switches the !PIE case over to calculating the right answer
for REG_C.PF on demand rather than running the timers.
Patch 2 switches back to the old model of having the vpt code control
the timer interrupt injection; this is the fix for the w2k3 hang.
Patch 3 is just a minor cleanup, and not particularly necessary.

v3 has undergone extensive testing in XenRT, confirming that the w2k3
hang has not reoccurred in 100 tests (we would normally expect to see
10-30 recurrences), and the clock drift tests are happy with the new code.

Roger:
  Would you kindly test against FreeBSD again please?

George:
  Regarding 4.4, I believe these patches are now of sufficient
  quality to be accepted (subject to any other review).

  The previous statement of risk still applies; these are changes to
  a very complicated area of code, and the worst-case scenario is that
  a VM gets no/too few/too many timer interrupts, with clock drift as
  a possible consequence.  The XenRT test results help alleviate
  concern regarding the worst case.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWP-0004hR-4E; Mon, 17 Feb 2014 11:38:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWM-0004go-UA
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:37:59 +0000
Received: from [85.158.143.35:15024] by server-2.bemta-4.messagelabs.com id
	80/E6-10891-694F1035; Mon, 17 Feb 2014 11:37:58 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392637075!6212570!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29371 invoked from network); 17 Feb 2014 11:37:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103140212"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-3W; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:35 +0000
Message-ID: <1392636577-10305-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: keir@xen.org, george.dunlap@eu.citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 1/3] x86/hvm/rtc: Don't run the vpt timer
	when !REG_B.PIE.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

If the guest has not asked for interrupts, don't run the vpt timer
to generate them.  This is a prerequisite for a patch to simplify how
the vpt interacts with the RTC, and also gets rid of a timer series in
Xen in a case where it's unlikely to be needed.

Instead, calculate the correct value for REG_C.PF whenever REG_C is
read or PIE is enabled.  This allow a guest to poll for the PF bit
while not asking for actual timer interrupts.  Such a guest would no
longer get the benefit of the vpt's timer modes.

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

---

Changes in v3:

 * Always clobber s->period when changing REG_B.PIE, to cause
   rtc_update_timer() to restart the timer when setting REG_B.PIE without
   changing the REG_A divider control.
---
 xen/arch/x86/hvm/rtc.c        |   53 +++++++++++++++++++++++++++++++----------
 xen/include/asm-x86/hvm/vpt.h |    3 ++-
 2 files changed, 43 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index cdedefe..1a731f7 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -94,7 +94,7 @@ bool_t rtc_periodic_interrupt(void *opaque)
     {
         /* VM is ignoring its RTC; no point in running the timer */
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
     }
     if ( !(s->hw.cmos_data[RTC_REG_C] & RTC_IRQF) )
         ret = 0;
@@ -103,6 +103,24 @@ bool_t rtc_periodic_interrupt(void *opaque)
     return ret;
 }
 
+/* Check whether the REG_C.PF bit should have been set by a tick since
+ * the last time we looked. This is used to track ticks when REG_B.PIE
+ * is clear; when PIE is set, PF ticks are handled by the VPT callbacks.  */
+static void check_for_pf_ticks(RTCState *s)
+{
+    s_time_t now;
+
+    if ( s->period == 0 || (s->hw.cmos_data[RTC_REG_B] & RTC_PIE) )
+        return;
+
+    now = NOW();
+    if ( (now - s->start_time) / s->period
+         != (s->check_ticks_since - s->start_time) / s->period )
+        s->hw.cmos_data[RTC_REG_C] |= RTC_PF;
+
+    s->check_ticks_since = now;
+}
+
 /* Enable/configure/disable the periodic timer based on the RTC_PIE and
  * RTC_RATE_SELECT settings */
 static void rtc_timer_update(RTCState *s)
@@ -125,24 +143,29 @@ static void rtc_timer_update(RTCState *s)
     case RTC_REF_CLCK_4MHZ:
         if ( period_code != 0 )
         {
-            if ( period_code != s->pt_code )
+            period = 1 << (period_code - 1); /* period in 32 Khz cycles */
+            period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+            if ( period != s->period )
             {
-                s->pt_code = period_code;
-                period = 1 << (period_code - 1); /* period in 32 Khz cycles */
-                period = DIV_ROUND(period * 1000000000ULL, 32768); /* in ns */
+                s_time_t now = NOW();
+
+                s->period = period;
                 if ( v->domain->arch.hvm_domain.params[HVM_PARAM_VPT_ALIGN] )
                     delta = 0;
                 else
-                    delta = period - ((NOW() - s->start_time) % period);
-                create_periodic_time(v, &s->pt, delta, period,
-                                     RTC_IRQ, NULL, s);
+                    delta = period - ((now - s->start_time) % period);
+                if ( s->hw.cmos_data[RTC_REG_B] & RTC_PIE )
+                    create_periodic_time(v, &s->pt, delta, period,
+                                         RTC_IRQ, NULL, s);
+                else
+                    s->check_ticks_since = now;
             }
             break;
         }
         /* fall through */
     default:
         destroy_periodic_time(&s->pt);
-        s->pt_code = 0;
+        s->period = 0;
         break;
     }
 }
@@ -484,14 +507,19 @@ static int rtc_ioport_write(void *opaque, uint32_t addr, uint32_t data)
             if ( orig & RTC_SET )
                 rtc_set_time(s);
         }
+        check_for_pf_ticks(s);
         s->hw.cmos_data[RTC_REG_B] = data;
         /*
          * If the interrupt is already set when the interrupt becomes
          * enabled, raise an interrupt immediately.
          */
         rtc_update_irq(s);
-        if ( (data & RTC_PIE) && !(orig & RTC_PIE) )
+        if ( (data ^ orig) & RTC_PIE )
+        {
+            destroy_periodic_time(&s->pt);
+            s->period = 0;
             rtc_timer_update(s);
+        }
         if ( (data ^ orig) & RTC_SET )
             check_update_timer(s);
         if ( (data ^ orig) & (RTC_24H | RTC_DM_BINARY | RTC_SET) )
@@ -645,6 +673,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
             ret |= RTC_UIP;
         break;
     case RTC_REG_C:
+        check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
         if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
@@ -652,7 +681,7 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
-        rtc_timer_update(s);
+        s->pt_dead_ticks = 0;
         break;
     default:
         ret = s->hw.cmos_data[s->hw.cmos_index];
@@ -748,7 +777,7 @@ void rtc_reset(struct domain *d)
     RTCState *s = domain_vrtc(d);
 
     destroy_periodic_time(&s->pt);
-    s->pt_code = 0;
+    s->period = 0;
     s->pt.source = PTSRC_isa;
 }
 
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index 87c3a66..9f48635 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -113,7 +113,8 @@ typedef struct RTCState {
     /* periodic timer */
     struct periodic_time pt;
     s_time_t start_time;
-    int pt_code;
+    s_time_t check_ticks_since;
+    int period;
     uint8_t pt_dead_ticks;
     uint32_t use_timer;
     spinlock_t lock;
-- 
1.7.10.4
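[Editorial aside, not part of the patch: the period computation in the first hunk above can be sketched standalone. The function name and Python form are illustrative; only the arithmetic mirrors the patch's `period = 1 << (period_code - 1)` followed by `DIV_ROUND(period * 1000000000ULL, 32768)`.]

```python
def rtc_period_ns(period_code):
    """Period in ns for an RTC REG_A rate-select code (1..15).

    Mirrors the patch's arithmetic: 1 << (period_code - 1) cycles of
    the 32768 Hz reference clock, then DIV_ROUND(cycles * 1e9, 32768)
    to convert to nanoseconds (rounded, not truncated).
    """
    assert 1 <= period_code <= 15
    cycles = 1 << (period_code - 1)          # period in 32 kHz cycles
    # DIV_ROUND: add half the divisor before the truncating division
    return (cycles * 1_000_000_000 + 32768 // 2) // 32768
```

[For the common rate-select code 6 this gives 976563 ns, the rounded period of the standard 1024 Hz periodic tick.]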


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMWN-0004gx-Cr; Mon, 17 Feb 2014 11:37:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMWM-0004gi-1t
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:37:58 +0000
Received: from [85.158.143.35:23973] by server-1.bemta-4.messagelabs.com id
	E8/12-31661-594F1035; Mon, 17 Feb 2014 11:37:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392637075!6212570!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29288 invoked from network); 17 Feb 2014 11:37:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:37:56 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103140211"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 11:37:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:37:54 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFMOJ-00064I-82; Mon, 17 Feb 2014 11:29:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 11:29:37 +0000
Message-ID: <1392636577-10305-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: keir@xen.org, george.dunlap@eu.citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, JBeulich@suse.com, roger.pau@citrix.com
Subject: [Xen-devel] [PATCH v3 3/3] x86/hvm/rtc: Always deassert the IRQ
	line when clearing REG_C.IRQF.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

Even in no-ack mode, there's no reason to leave the line asserted
after an explicit ack of the interrupt.

Furthermore, the rtc_update_irq() call is an unconditional no-op at this
point, since RTC_REG_C has just been cleared.

Signed-off-by: Tim Deegan <tim@xen.org>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/rtc.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/rtc.c b/xen/arch/x86/hvm/rtc.c
index d641d95..639b4c5 100644
--- a/xen/arch/x86/hvm/rtc.c
+++ b/xen/arch/x86/hvm/rtc.c
@@ -673,9 +673,8 @@ static uint32_t rtc_ioport_read(RTCState *s, uint32_t addr)
         check_for_pf_ticks(s);
         ret = s->hw.cmos_data[s->hw.cmos_index];
         s->hw.cmos_data[RTC_REG_C] = 0x00;
-        if ( (ret & RTC_IRQF) && !rtc_mode_is(s, no_ack) )
+        if ( ret & RTC_IRQF )
             hvm_isa_irq_deassert(d, RTC_IRQ);
-        rtc_update_irq(s);
         check_update_timer(s);
         alarm_timer_update(s);
         s->pt_dead_ticks = 0;
-- 
1.7.10.4
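[Editorial aside, not part of the patch: a toy model of the REG_C read behaviour this change produces. The class and method names are invented for illustration and are not Xen's API; only the flag semantics follow the patch.]

```python
RTC_IRQF = 0x80  # interrupt-request flag in REG_C

class ToyRtc:
    """Minimal sketch of REG_C read semantics with this patch applied."""

    def __init__(self):
        self.reg_c = 0
        self.irq_line = False  # state of the virtual ISA IRQ 8 line

    def assert_irq(self, cause):
        """An interrupt source fires: set IRQF plus its cause bit."""
        self.reg_c |= RTC_IRQF | cause
        self.irq_line = True

    def read_reg_c(self):
        """Reading REG_C returns and clears it.  With the patch, the
        line is deasserted whenever IRQF was set -- no no-ack-mode
        exception, and no redundant rtc_update_irq() afterwards."""
        ret = self.reg_c
        self.reg_c = 0
        if ret & RTC_IRQF:
            self.irq_line = False
        return ret
```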


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:38:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMXC-0004vV-9r; Mon, 17 Feb 2014 11:38:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WFMXA-0004vA-RV
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:38:49 +0000
Received: from [85.158.139.211:12740] by server-16.bemta-5.messagelabs.com id
	48/58-05060-8C4F1035; Mon, 17 Feb 2014 11:38:48 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392637127!4359119!1
X-Originating-IP: [74.125.82.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32722 invoked from network); 17 Feb 2014 11:38:47 -0000
Received: from mail-we0-f181.google.com (HELO mail-we0-f181.google.com)
	(74.125.82.181)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:38:47 -0000
Received: by mail-we0-f181.google.com with SMTP id w61so10364614wes.40
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 03:38:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=bfXi0eHCUCVfYULsjyPCLz1d40AlsFk40s6WcX0JDGw=;
	b=TPwzdu/XKOu6lgv8b4T44IaPKER8Fso8hiYx63/O5vercfVX7qOQaWkKkV0J+OGztt
	0XgvuvtmaUqFOMnQ3ExSZAPJ7tfShWGwGxniS5QiOIhMaUacLxbANfBEL7ndVzz/BqAa
	heFxHxbxq4199e8cOS2b5v2MMdFzZZqqPbNulyidZWtcfO1x6+Dgt4QEqXP6PP7ak4HT
	yC6p0z2rLHhg5mjylLxwiCDDrNZ9O5OranXhoS1pwhZtt0axaIsNQ9rvISPY7g4L9lNK
	7sZ3d9JZND8PDLkH/deRW5ulyjz6oIufwQyiGRIF8v81lMpo8fQF99yUQw11XhweSYc1
	Cxlg==
MIME-Version: 1.0
X-Received: by 10.180.105.41 with SMTP id gj9mr12537813wib.28.1392637127270;
	Mon, 17 Feb 2014 03:38:47 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 17 Feb 2014 03:38:47 -0800 (PST)
In-Reply-To: <930C8A66-3F17-4F3B-8419-82B3DBF5713B@gmail.com>
References: <930C8A66-3F17-4F3B-8419-82B3DBF5713B@gmail.com>
Date: Mon, 17 Feb 2014 11:38:47 +0000
X-Google-Sender-Auth: BYheLUpIesViBOzcwr_7kkmBW50
Message-ID: <CAFLBxZaxvWSZzU2_TaymFqu3zYJ7S-p4yXEQh6H1Mw9GieTy=Q@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jinchun Kim <cienlux@gmail.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why frontswap and cleancache make copies in tmem?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Feb 16, 2014 at 10:14 PM, Jinchun Kim <cienlux@gmail.com> wrote:
> Hi, All.
>
> While I was digging tmem and its source code, I found suspicious things
> about frontswap and cleancache.
> When the guest OS wants to evict either a dirty or clean page, frontswap and
> cleancache will store it in tmem.
> The Linux kernel documentation says that
>
> [Documentation/vm/frontswap.txt]
> A "store" will copy the page to transcendent memory ....
> A "load" will copy the page, if found, from transcendent memory into kernel
> memory ...
>
> [Documentation/vm/cleancache.txt]
> A "put_page" will copy a page (presumably about-to-be-evicted) page into
> cleancache ...
> A "get_page" will copy the page, if found, from cleancache into kernel
> memory ...
>
> My colleagues and I think copying the page is not necessary (especially for
> cleancache) because both kernel memory and tmem have same data. Why don't we
> just change the pointer to the page and let it belong to tmem? We were
> wondering if there is any specific reason not to copy the pages to tmem.

Konrad / Boris?  (Sorry, I can't remember the other person who's been
sending tmem patches recently.)

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
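[Editorial aside on the thread above: the copy-based store/load semantics being questioned can be caricatured in a few lines. The names and the dict backend are purely illustrative, not the Linux frontswap or Xen tmem implementation.]

```python
# Toy frontswap-style backend: "store" copies the page into the pool,
# so the kernel's copy and the tmem copy coexist.  A pointer handoff
# (the alternative the thread proposes) would instead move ownership.
tmem_pool = {}

def tmem_store(index, page):
    tmem_pool[index] = bytes(page)   # deep copy, not a reference

def tmem_load(index):
    return tmem_pool.get(index)      # copied back on the kernel side

page = bytearray(b"dirty data")
tmem_store(0, page)
page[:5] = b"XXXXX"                  # mutating the kernel copy ...
# ... leaves the tmem copy untouched, which is the point of copying
```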

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:40:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMZ8-0005Fr-4L; Mon, 17 Feb 2014 11:40:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WFMZ6-0005Fd-TF
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:40:49 +0000
Received: from [85.158.137.68:46422] by server-13.bemta-3.messagelabs.com id
	0E/C4-26923-045F1035; Mon, 17 Feb 2014 11:40:48 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392637247!2355371!1
X-Originating-IP: [209.85.212.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29105 invoked from network); 17 Feb 2014 11:40:47 -0000
Received: from mail-wi0-f170.google.com (HELO mail-wi0-f170.google.com)
	(209.85.212.170)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:40:47 -0000
Received: by mail-wi0-f170.google.com with SMTP id hi5so2263678wib.3
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 03:40:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=ZTmKQmhPMTFw2zcuMuadvhPloyrzuXEbQX2WCLhMpHA=;
	b=vMX9/QoN256J9figSCqncqs9kUn+XRaYJqx+Hq5jI1x3zw1HwdxUSBfyw1PyDkKmbz
	TE/E6Jv9+sdX0wUYRlFfmWv/wMDQ5zouzeCIzYQgnhvC3VLRmtwtVxY0eBe7ZNmHSH4w
	H9kaA1m+bMQkOivJZJ98ac5Vy1p20x6TytpVfluBvGYsDmdqOMO7QISZO55aOknJSipK
	Klvqe3H4sX3HGd7YQ7STvhbKMZCdPqMjDxbpUWRvwHLglrJJR/7kvWPWA31m7F6DYlqA
	4O7JXdlIXjnSNxNhQkfMmbYGqoXqAFUoAJmZm+ntBRCny3e7JsqK0PqHTUNUChlPWrA9
	1r2g==
MIME-Version: 1.0
X-Received: by 10.180.19.130 with SMTP id f2mr12489674wie.6.1392637247117;
	Mon, 17 Feb 2014 03:40:47 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 17 Feb 2014 03:40:47 -0800 (PST)
In-Reply-To: <CAFLBxZaxvWSZzU2_TaymFqu3zYJ7S-p4yXEQh6H1Mw9GieTy=Q@mail.gmail.com>
References: <930C8A66-3F17-4F3B-8419-82B3DBF5713B@gmail.com>
	<CAFLBxZaxvWSZzU2_TaymFqu3zYJ7S-p4yXEQh6H1Mw9GieTy=Q@mail.gmail.com>
Date: Mon, 17 Feb 2014 11:40:47 +0000
X-Google-Sender-Auth: CexMa4KrvII-5TLXF2_KAUtfvzc
Message-ID: <CAFLBxZac8PTv-4qK-Z9sWv2PtbXsyMohdH=UhzRzO31n2BFu8g@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Jinchun Kim <cienlux@gmail.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why frontswap and cleancache make copies in tmem?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 11:38 AM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> On Sun, Feb 16, 2014 at 10:14 PM, Jinchun Kim <cienlux@gmail.com> wrote:
>> Hi, All.
>>
>> While I was digging tmem and its source code, I found suspicious things
>> about frontswap and cleancache.
>> When the guest OS wants to evict either a dirty or clean page, frontswap and
>> cleancache will store it in tmem.
>> The Linux kernel documentation says that
>>
>> [Documentation/vm/frontswap.txt]
>> A "store" will copy the page to transcendent memory ....
>> A "load" will copy the page, if found, from transcendent memory into kernel
>> memory ...
>>
>> [Documentation/vm/cleancache.txt]
>> A "put_page" will copy a page (presumably about-to-be-evicted) page into
>> cleancache ...
>> A "get_page" will copy the page, if found, from cleancache into kernel
>> memory ...
>>
>> My colleagues and I think copying the page is not necessary (especially for
>> cleancache) because both kernel memory and tmem have same data. Why don't we
>> just change the pointer to the page and let it belong to tmem? We were
>> wondering if there is any specific reason not to copy the pages to tmem.
>
> Konrad / Boris?  (Sorry, I can't remember the other person who's been
> sending tmem patches recently.)

Oh, it was Bob Liu. :-)

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:49:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMhj-0005by-BF; Mon, 17 Feb 2014 11:49:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WFMhh-0005bt-Ml
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 11:49:41 +0000
Received: from [85.158.137.68:13676] by server-3.bemta-3.messagelabs.com id
	05/5F-14520-457F1035; Mon, 17 Feb 2014 11:49:40 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392637778!2327468!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5380 invoked from network); 17 Feb 2014 11:49:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:49:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101394258"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 11:49:38 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:49:37 -0500
Message-ID: <5301F74E.3070107@citrix.com>
Date: Mon, 17 Feb 2014 11:49:34 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 16/02/14 18:36, Stefano Stabellini wrote:
> On Wed, 12 Feb 2014, Zoltan Kiss wrote:
>> diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
>> index e0965ab..4eaeb3f 100644
>> --- a/arch/arm/include/asm/xen/page.h
>> +++ b/arch/arm/include/asm/xen/page.h
>> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
>>   	return NULL;
>>   }
>>
>> -static inline int m2p_add_override(unsigned long mfn, struct page *page,
>> -		struct gnttab_map_grant_ref *kmap_op)
>> -{
>> -	return 0;
>> -}
>> -
>> -static inline int m2p_remove_override(struct page *page, bool clear_pte)
>> -{
>> -	return 0;
>> -}
>> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
>> +				   struct gnttab_map_grant_ref *kmap_ops,
>> +				   struct page **pages, unsigned int count,
>> +				   bool m2p_override);
>> +
>> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
>> +				     struct gnttab_map_grant_ref *kmap_ops,
>> +				     struct page **pages, unsigned int count,
>> +				     bool m2p_override);
>
> Much much better.
> The only comment I have is about this m2p_override boolean parameter.
> m2p_override is now meaningless in this context, what we really want to
> let the arch specific implementation know is whether the mapping is a
> kernel only mapping or a userspace mapping.
> Testing for kmap_ops != NULL might even be enough, but it would not
> improve the interface.
gntdev is the only user of this, the kmap_ops parameter there is:
use_ptemod ? map->kmap_ops + offset : NULL
where:
use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
So I think we can't rely on kmap_ops to decide whether we should use 
m2p_override or not.

> Is it possible to realize if the mapping is a userspace mapping by
> checking for GNTMAP_application_map in map_ops?
> Otherwise I would keep the boolean and rename it to user_mapping.
Sounds better, but as far as I can see gntdev sets that flag in 
find_grant_ptes, which is called only

if (use_ptemod) {
	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
				  vma->vm_end - vma->vm_start,
				  find_grant_ptes, map);

So if xen_feature(XENFEAT_auto_translated_physmap), we don't have 
kmap_ops, and GNTMAP_application_map is not set either, but I guess we 
still need m2p_override. Or not? I'm a bit confused, maybe because it's 
Monday ...

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:57:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMop-0005lS-9I; Mon, 17 Feb 2014 11:57:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFMoo-0005lN-1E
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:57:02 +0000
Received: from [193.109.254.147:13732] by server-8.bemta-14.messagelabs.com id
	A1/80-18529-D09F1035; Mon, 17 Feb 2014 11:57:01 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392638219!852978!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4779 invoked from network); 17 Feb 2014 11:57:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 11:57:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101395384"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 11:56:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 06:56:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFMoc-0006fy-4Z;
	Mon, 17 Feb 2014 11:56:50 +0000
Message-ID: <5301F901.6060202@citrix.com>
Date: Mon, 17 Feb 2014 11:56:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] (early) Xen-4.4-rc4 testing
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Given an available testing slot over the weekend, I ran a XenRT nightly
on current staging (c/s 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e, "pvh:
Fix regression due to assumption that HVM paths MUST use io-backend
device") which I took to be roughly Xen-4.4-rc4.

This testing included David Vrabel's "x86/xen: allow privcmd hypercalls
to be preempted" kernel patch.

I am happy to report that everything looks in good shape.  There are no
new problems compared to rc3.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 11:59:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 11:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMrY-0005rr-Rp; Mon, 17 Feb 2014 11:59:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFMrX-0005re-8A
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 11:59:51 +0000
Received: from [85.158.139.211:31160] by server-1.bemta-5.messagelabs.com id
	57/D7-12859-6B9F1035; Mon, 17 Feb 2014 11:59:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392638389!4362919!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12688 invoked from network); 17 Feb 2014 11:59:49 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 11:59:49 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 11:59:49 +0000
Message-Id: <530207C1020000780011CD88@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 11:59:45 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, Xen-devel <xen-devel@lists.xen.org>,
	keir@xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] [PATCH v3 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 12:29, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> This series implements the most recent idea Tim was proposing about
> reworking the RTC PF interrupt injection.
> 
> Patch 1 switches handling the !PIE case to calculate the right answer
> for REG_C.PF on demand rather than running the timers.
> Patch 2 switches back to the old model of having the vpt code control
> the timer interrupt injection; this is the fix for the w2k3 hang.
> Patch 3 is just a minor cleanup, and not particularly necessary.

Reviewed-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:02:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFMuP-00067q-6x; Mon, 17 Feb 2014 12:02:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFMuO-00067l-0x
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 12:02:48 +0000
Received: from [85.158.143.35:42876] by server-2.bemta-4.messagelabs.com id
	D5/26-10891-76AF1035; Mon, 17 Feb 2014 12:02:47 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392638566!6222491!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13350 invoked from network); 17 Feb 2014 12:02:46 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:02:46 -0000
Received: by mail-ea0-f177.google.com with SMTP id m10so4504568eaj.8
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 04:02:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=X07wW4/CH9+USuSIbi0G6/2+/ME0QV5Yra0yzoUtkX4=;
	b=VFh7Knf+kQl7r60aW2lyHFozDY0PlDb49GoVeA8E9KcHbY4dpHKS847gRwgfGXmE9l
	DJmzsjZAU1ZjZPV1kryJ9LeL4GTc7IY/Z8W7Bu7BIRSEMDDIgt5ymXhs3NXno1cAfTA4
	4ZYpubFiPlVObOJnAV5fLQFSsOMDDadgonIhJFFrq99/QzNGs3XnaZvDGsdGYhAB6vwY
	CgvZwBLlSiUh+RVNNYW4HsTDR1ezUaOPpHQkyl1/p4M2tyLVB1uAAIqVH4tSAYVAI0l6
	YrRwj4eok8ZA8itG+S4tEf1p0mXdxFKDnB1JpbJQTWyR6tPS2m1pa///QiNYPfyVSvBR
	v9Fw==
X-Gm-Message-State: ALoCoQnXS695OPnYLG5qGr6nqfq9NAnqqhjOM6+Nqx3Id8NDeeYPWhVa6raMYIpQumU9E8NqhJxJ
X-Received: by 10.14.8.7 with SMTP id 7mr2263793eeq.56.1392638566391;
	Mon, 17 Feb 2014 04:02:46 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm56776908eew.20.2014.02.17.04.02.42
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 04:02:45 -0800 (PST)
Message-ID: <5301FA5F.8020602@linaro.org>
Date: Mon, 17 Feb 2014 12:02:39 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
In-Reply-To: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 11:33 AM, Andrii Tseglytskyi wrote:
> Hi,

Hello Andrii,

> Can anyone clarify - is it possible to make a run time memory trap in
> Xen hypervisor?

I guess you are talking about ARM? If so, it's not possible right now.

> To be more specific - I see the following function:
> *int handle_mmio(mmio_info_t *info)*, which is called from *static
> void do_trap_data_abort_guest(struct cpu_user_regs *regs, union hsr
> hsr)*
> Using these calls I can define memory region and create a trap for it.
> But in current implementation I can do it only during compile time.
> Is there any way to do similar in runtime - i.e. calculate memory
> region value in code and add an entry to mmio_handlers[] ?
> Is it a good idea - just to modify/extend existing code in
> xen/arch/arm/io.c file to make it possible ?

I think so, yes; we might need that for a couple of upcoming Xen 
features, such as clock drivers and multiple devices sharing the same 
page (see the UARTs on the cubieboard 2).
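As a rough idea of what runtime registration could look like, here is a minimal standalone C sketch: a fixed-size handler table like the one in xen/arch/arm/io.c, plus an append function and the lookup done from the data-abort path. All names, sizes and the absence of locking are illustrative assumptions, not the actual Xen interface:

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct mmio_info;                                  /* opaque here */
typedef int (*mmio_handler_fn)(struct mmio_info *info);

/* Illustrative handler table; the real one is populated at build time. */
struct mmio_handler {
    paddr_t base;
    paddr_t size;
    mmio_handler_fn handle;
};

#define MAX_IO_HANDLERS 16
static struct mmio_handler mmio_handlers[MAX_IO_HANDLERS];
static unsigned int num_mmio_handlers;

/* Hypothetical runtime registration: compute the region in code and
 * append an entry.  A real version would need locking against
 * concurrent trap handling. */
static int register_mmio_handler(paddr_t base, paddr_t size,
                                 mmio_handler_fn handle)
{
    if (num_mmio_handlers >= MAX_IO_HANDLERS)
        return -1;                                 /* table full */
    mmio_handlers[num_mmio_handlers++] =
        (struct mmio_handler){ base, size, handle };
    return 0;
}

/* Lookup as handle_mmio() would do it: the first handler whose region
 * contains the faulting guest-physical address wins. */
static mmio_handler_fn find_mmio_handler(paddr_t gpa)
{
    for (unsigned int i = 0; i < num_mmio_handlers; i++)
        if (gpa >= mmio_handlers[i].base &&
            gpa < mmio_handlers[i].base + mmio_handlers[i].size)
            return mmio_handlers[i].handle;
    return NULL;
}

/* Dummy handler used only to demonstrate registration. */
static int test_handler(struct mmio_info *info)
{
    (void)info;
    return 1;
}
```

A clock driver could then call register_mmio_handler() with a region computed from the device tree instead of a compile-time table entry.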

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFN06-0006L6-0z; Mon, 17 Feb 2014 12:08:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFN04-0006L1-Fq
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 12:08:40 +0000
Received: from [85.158.139.211:49681] by server-6.bemta-5.messagelabs.com id
	0D/AC-14342-7CBF1035; Mon, 17 Feb 2014 12:08:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392638917!4391673!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22423 invoked from network); 17 Feb 2014 12:08:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:08:39 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103145961"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 12:08:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 07:08:37 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFN00-0003w2-M9;
	Mon, 17 Feb 2014 12:08:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFMzy-0003RJ-RJ;
	Mon, 17 Feb 2014 12:08:34 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21249.64449.582039.323772@mariner.uk.xensource.com>
Date: Mon, 17 Feb 2014 12:08:33 +0000
To: <xen-devel@lists.xensource.com>
In-Reply-To: <osstest-24870-mainreport@xen.org>
References: <osstest-24870-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[xen-unstable test] 24870: regressions - trouble: broken/fail/pass"):
> flight 24870 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/24870/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-i386-oldkern            3 host-build-prep       fail REGR. vs. 24862

This was the "usual" failure: Citrix's intercepting web proxy causes
some hg clones of linux-2.6.18.hg from xenbits to fail.  The rest of
the flight was successful.

The rest of the weekend's tests were badly affected by a disk failure
on earwig, so we didn't get a push.

I cleared out a bunch of other stuff running in the test system in an
effort to get a pass sooner, but peeking at the results the same job
has failed the same way in the currently-running flight.  So we won't
get a push in that iteration either.

We should consider doing a force push for RC4.  The risks are:
 * There is something actually wrong with xen.git which causes the
   32-bit 2.6.18 build to fail;
 * Less resistance in the future to 2.6.18 build failures.
I'll discuss these in turn.

The build-*-oldkern tests involve using the kernel-building machinery
in xen.git to clone 2.6.18 from xenbits and build it.  Firstly, I think
it's unlikely that anything in xen.git#d883c179..4e8d89bc would affect
that.  Secondly, the build-amd64-oldkern builds have passed.  So I
think we can almost entirely discount the first risk.

I think the second risk is tolerable.  We should keep an eye on it for
a bit and if it turns out that the oldkern build really does become
broken later and as a result keeps failing indefinitely, we will be
able to spot that.

So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
xen.git#master and call it RC4.  Comments welcome.
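For context, the "force push" is mechanically just a git push of the named
revision to the master ref (the ap-push trace later in this archive shows the
same form used for stable-4.3). A minimal sketch, in a scratch repository, of
the kind of fast-forward sanity check one might run first; this is
hypothetical illustration, not osstest code, and "master-copy" and "rev" are
stand-ins for xen.git#master and 4e8d89bc...:

```shell
# Sketch (not osstest code): confirm a candidate revision contains the
# current master, i.e. the "force" push is really a fast-forward.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m base
git branch -q master-copy            # stand-in for xen.git#master
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m candidate
rev=$(git rev-parse HEAD)            # stand-in for 4e8d89bc1445...
# --is-ancestor exits 0 when master-copy is contained in $rev
git merge-base --is-ancestor master-copy "$rev" && echo fast-forward
```

If the check passes, the actual push would then be
`git push <repo> "$rev":master`, as in the stable-4.3 trace below.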

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:22:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFND2-0006Wg-F6; Mon, 17 Feb 2014 12:22:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFNCz-0006WY-Qp
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 12:22:02 +0000
Received: from [85.158.143.35:39317] by server-3.bemta-4.messagelabs.com id
	F9/2D-11539-9EEF1035; Mon, 17 Feb 2014 12:22:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392639718!6228501!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18878 invoked from network); 17 Feb 2014 12:22:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:22:00 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103148859"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 12:21:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 07:21:57 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFNCv-00040e-JH;
	Mon, 17 Feb 2014 12:21:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFNCv-0006zU-Cm;
	Mon, 17 Feb 2014 12:21:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25103-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 12:21:57 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.3-testing test] 25103: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25103 xen-4.3-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25103/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-win7-amd64  7 windows-install            fail pass in 24888
 test-amd64-amd64-xl-win7-amd64  7 windows-install  fail in 24888 pass in 25103

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24863

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop           fail in 24888 never pass

version targeted for testing:
 xen                  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
baseline version:
 xen                  bf236637c2417376693cd72a748cc6208fd0202b

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.3-testing
+ revision=b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.3-testing b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
+ branch=xen-4.3-testing
+ revision=b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.3-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.3-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.3-testing
++ : daily-cron.xen-4.3-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.3-testing.git
++ : daily-cron.xen-4.3-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.3-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.3-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.3-testing
+ xenversion=xen-4.3
+ xenversion=4.3
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e:stable-4.3
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   bf23663..b7d5b37  b7d5b3789c92f2e95f99fddca86ffb02e73f2d2e -> stable-4.3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:22:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:22:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNDb-0006a9-Uw; Mon, 17 Feb 2014 12:22:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFNDa-0006Zz-Er
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 12:22:38 +0000
Received: from [85.158.139.211:16166] by server-7.bemta-5.messagelabs.com id
	D4/1C-14867-D0FF1035; Mon, 17 Feb 2014 12:22:37 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392639755!4402914!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20200 invoked from network); 17 Feb 2014 12:22:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:22:36 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101400948"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 12:22:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 07:22:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFNDV-0007GX-Fi;
	Mon, 17 Feb 2014 12:22:33 +0000
Date: Mon, 17 Feb 2014 12:22:30 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
In-Reply-To: <20140204150727.GA1529@andromeda.dapyr.net>
Message-ID: <alpine.DEB.2.02.1402171220520.27926@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<20140204150727.GA1529@andromeda.dapyr.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, arnd@arndb.de,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Olof Johansson <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v9 0/5] xen/arm/arm64: CONFIG_PARAVIRT and
 stolen ticks accounting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 4 Feb 2014, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 09, 2014 at 06:15:07PM +0000, Stefano Stabellini wrote:
> > Hi all,
> > this patch series introduces stolen ticks accounting for Xen on ARM and
> > ARM64.
> > Stolen ticks are clocksource ticks that have been "stolen" from the cpu,
> > typically because Linux is running in a virtual machine and the vcpu has
> > been descheduled.
> > To account for these ticks we introduce CONFIG_PARAVIRT and pv_time_ops
> > so that we can make use of:
> > 
> > kernel/sched/cputime.c:steal_account_process_tick
> > 
> > 
> > Changes in v9:
> > - added back missing new files from patches;
> > - fix compilation on avr32 (remove patch #5, revert to previous version
> >   of patch #2).
> > 
> > 
> > 
> > Stefano Stabellini (5):
> >       xen: move xen_setup_runstate_info and get_runstate_snapshot to drivers/xen/time.c
> >       kernel: missing include in cputime.c
> >       arm: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
> >       arm64: introduce CONFIG_PARAVIRT, PARAVIRT_TIME_ACCOUNTING and pv_time_ops
> >       xen/arm: account for stolen ticks
> > 
> >  arch/arm/Kconfig                  |   20 ++++++++
> >  arch/arm/include/asm/paravirt.h   |   20 ++++++++
> >  arch/arm/kernel/Makefile          |    2 +
> >  arch/arm/kernel/paravirt.c        |   25 ++++++++++
> >  arch/arm/xen/enlighten.c          |   21 +++++++++
> >  arch/arm64/Kconfig                |   20 ++++++++
> >  arch/arm64/include/asm/paravirt.h |   20 ++++++++
> >  arch/arm64/kernel/Makefile        |    1 +
> >  arch/arm64/kernel/paravirt.c      |   25 ++++++++++
> >  arch/ia64/xen/time.c              |   48 +++----------------
> >  arch/x86/xen/time.c               |   76 +------------------------------
> >  drivers/xen/Makefile              |    2 +-
> >  drivers/xen/time.c                |   91 +++++++++++++++++++++++++++++++++++++
> >  include/xen/xen-ops.h             |    5 ++
> >  kernel/sched/cputime.c            |    3 ++
> >  15 files changed, 261 insertions(+), 118 deletions(-)
> >  create mode 100644 arch/arm/include/asm/paravirt.h
> >  create mode 100644 arch/arm/kernel/paravirt.c
> >  create mode 100644 arch/arm64/include/asm/paravirt.h
> >  create mode 100644 arch/arm64/kernel/paravirt.c
> >  create mode 100644 drivers/xen/time.c
> > 
> > 
> > git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git lost_ticks_9
> 
> I tried to merge it on top of 3.14-rc1 + stable/for-linus-3.14 (which
> has the revert of "xen/grant-table: Avoid m2p_override during mapping").
> 
> 
> And I get:
> konrad@phenom:~/linux$ git merge stefano/lost_ticks_9
> Auto-merging drivers/xen/Makefile
> CONFLICT (content): Merge conflict in drivers/xen/Makefile
> Auto-merging arch/x86/xen/time.c
> CONFLICT (modify/delete): arch/ia64/xen/time.c deleted in HEAD and
> modified in stefano/lost_ticks_9. Version stefano/lost_ticks_9 of
> arch/ia64/xen/time.c left in tree.
> Auto-merging arch/arm64/kernel/Makefile
> CONFLICT (content): Merge conflict in arch/arm64/kernel/Makefile
> Auto-merging arch/arm64/Kconfig
> Auto-merging arch/arm/xen/enlighten.c
> CONFLICT (content): Merge conflict in arch/arm/xen/enlighten.c
> Auto-merging arch/arm/Kconfig
> CONFLICT (content): Merge conflict in arch/arm/Kconfig
> Automatic merge failed; fix conflicts and then commit the result.

Sorry for the delay, the series rebased on 3.14-rc3 is available here:

git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git lost_ticks_10



> I presume that is mostly due to David's FIFO queue patches.
> 
> Could you kindly rebase it on top of 3.14-rc1 and also tack on
> Catalin Marinas' Ack on the patches?

I will.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:23:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNEc-0006hG-Du; Mon, 17 Feb 2014 12:23:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WFNEa-0006h0-Sg
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 12:23:41 +0000
Received: from [193.109.254.147:16427] by server-14.bemta-14.messagelabs.com
	id 90/6C-29228-C4FF1035; Mon, 17 Feb 2014 12:23:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392639817!4875420!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9193 invoked from network); 17 Feb 2014 12:23:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 12:23:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HCNNkb026737
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 12:23:24 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HCNHAd018401
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 12:23:17 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HCNG0S023567; Mon, 17 Feb 2014 12:23:16 GMT
Message-Id: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
Received: from [192.168.2.114] (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 04:23:16 -0800
Date: Mon, 17 Feb 2014 07:23:13 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Paul Bolle <pebolle@tiscali.nl>
MIME-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "H. Peter Anvin" <hpa@zytor.com>, Richard Weinberger <richard@nod.at>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
>
> This patch removes the Kconfig symbol XEN_PRIVILEGED_GUEST which is
> used nowhere in the tree. We do know grub2 has a script that greps
> kernel configuration files for this symbol. It shouldn't do that. As

Please look in the grub git tree. They have fixed their code to not do this anymore. This should be reflected in the patch description.

Lastly please check which distro has this new grub version so that we know which distros won't be affected.

Thanks.

> Linus summarized:
>     This is a grub bug. It really is that simple. Treat it as one.
>
> So there's no reason to not remove it, like we do with all unused
> Kconfig symbols.
>
> [pebolle@tiscali.nl: rewrote commit explanation.]
> Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
> Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
> ---
> Tested with "git grep".
>
> Michael's version can be found at https://lkml.org/lkml/2013/7/8/34 .
> (This is the same patch, with a rewritten explanation, and my S-o-b
> line.) The question whether this symbol can be removed was further
> discussed in https://lkml.org/lkml/2013/7/15/308 .
>
> I don't think a bug was ever filed against grub2 regarding its way to
> check for Xen support. Should that be done first?

Had been done the moment I got Linus reply but instead of a bug it was on the mailing list.
>
> arch/x86/xen/Kconfig | 5 -----
> 1 file changed, 5 deletions(-)
>
> diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
> index 01b9026..512219d 100644
> --- a/arch/x86/xen/Kconfig
> +++ b/arch/x86/xen/Kconfig
> @@ -19,11 +19,6 @@ config XEN_DOM0
> depends on XEN && PCI_XEN && SWIOTLB_XEN
> depends on X86_LOCAL_APIC && X86_IO_APIC && ACPI && PCI
>
> -# Dummy symbol since people have come to rely on the PRIVILEGED_GUEST
> -# name in tools.
> -config XEN_PRIVILEGED_GUEST
> - def_bool XEN_DOM0
> -
> config XEN_PVHVM
> def_bool y
> depends on XEN && PCI && X86_LOCAL_APIC
> --
> 1.8.5.3
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:23:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNEn-0006kJ-21; Mon, 17 Feb 2014 12:23:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFNEk-0006jT-Ki
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 12:23:50 +0000
Received: from [85.158.139.211:39811] by server-15.bemta-5.messagelabs.com id
	A4/B6-24395-55FF1035; Mon, 17 Feb 2014 12:23:49 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392639827!4300474!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25309 invoked from network); 17 Feb 2014 12:23:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:23:49 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101401189"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 12:23:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 07:23:46 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFNEf-0007HG-Qe;
	Mon, 17 Feb 2014 12:23:45 +0000
Message-ID: <5301FF51.1060509@eu.citrix.com>
Date: Mon, 17 Feb 2014 12:23:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Yang Z Zhang <yang.z.zhang@intel.com>,
	Tim Deegan <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
In-Reply-To: <5301F000020000780011CCE0@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 10:18 AM, Jan Beulich wrote:
>>>> On 13.02.14 at 17:20, Tim Deegan <tim@xen.org> wrote:
>> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>>>>>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>>>> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>>>>> George Dunlap wrote on 2014-02-11:
>>>>>> I think I got a bit distracted with the "A isn't really so bad" thing.
>>>>>> Actually, if the overhead of not sharing tables isn't very high, then
>>>>>> B isn't such a bad option.  In fact, B is what I expected Yang to
>>>>>> submit when he originally described the problem.
>>>>> Actually, the first solution that came to my mind was B. Then I realized
>>>>> that even if we chose B, we still could not track memory updated by DMA
>>>>> (even with the A/D bits it is still a problem). Also, considering the
>>>>> current use case of log dirty in Xen (only vram tracking has the
>>>>> problem), I thought A was better: the hypervisor only needs to track
>>>>> vram changes. If a malicious guest tries to DMA to the vram range, it
>>>>> only crashes itself (which should be reasonable).
>>>>>> I was going to say, from a release perspective, B is probably the
>>>>>> safest option for now.  But on the other hand, if we've been testing
>>>>>> sharing all this time, maybe switching back over to non-sharing whole-hog has
>>>> the higher risk?
>>>>> Another problem with B is that the current VT-d large-page support relies
>>>>> on sharing the EPT and VT-d page tables. This means that if we choose B,
>>>>> we would need to re-enable VT-d large pages. That would be a huge
>>>>> performance impact for Xen 4.4 when using VT-d.
>>>>
>>>> OK -- if that's the case, then it definitely tips the balance back to
>>>> A.  Unless Tim or Jan disagrees, can one of you two check it in?
>>>>
>>>> Don't rush your judgement; but it would be nice to have this in before
>>>> RC4, which would mean checking it in today preferably, or early
>>>> tomorrow at the latest.
>>> That would be Tim then, as he would have to approve of it anyway.
>> Done.
> Actually I'm afraid there are two problems with this patch:
>
> For one, is enabling "global" log dirty mode still going to work
> after VRAM-only mode already got enabled? I ask because the
> paging_mode_log_dirty() check which paging_log_dirty_enable()
> does first thing suggests otherwise to me (i.e. the now
> conditional setting of all p2m entries to p2m_ram_logdirty would
> seem to never get executed). IOW I would think that we're now
> lacking a control operation allowing the transition from dirty VRAM
> tracking mode to full log dirty mode.

Hmm, yes, doing a code inspection, that would appear to be the case.  
This probably wouldn't be caught by osstest, because (as I understand 
it) we never attach to the display, so dirty vram tracking is probably 
never enabled.
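The mode-transition problem Jan describes can be sketched as a toy state machine (simplified, made-up names and fields; not Xen's actual code): once VRAM-only tracking has set the log-dirty mode bit, the early paging_mode_log_dirty() check makes a later full log-dirty enable bail out before the "mark all p2m entries log-dirty" step ever runs.

```c
#include <assert.h>
#include <stdbool.h>

#define PG_log_dirty 0x1u

/* Hypothetical stand-in for the domain's paging state. */
struct domain_sketch {
    unsigned int paging_mode;
    bool all_p2m_logdirty;      /* stand-in for the global p2m sweep */
};

static bool paging_mode_log_dirty(const struct domain_sketch *d)
{
    return (d->paging_mode & PG_log_dirty) != 0;
}

/* VRAM-only tracking also turns on PG_log_dirty... */
static void enable_vram_tracking(struct domain_sketch *d)
{
    d->paging_mode |= PG_log_dirty;
    /* ...but deliberately skips marking all of guest memory log-dirty. */
}

/* Full (migration-style) log-dirty enable bails out if the mode bit is
 * already set, mirroring the early paging_mode_log_dirty() check. */
static int paging_log_dirty_enable(struct domain_sketch *d)
{
    if (paging_mode_log_dirty(d))
        return -1;              /* -EINVAL in spirit */
    d->paging_mode |= PG_log_dirty;
    d->all_p2m_logdirty = true; /* never reached after VRAM-only mode */
    return 0;
}
```

With this shape there is indeed no control operation that upgrades from VRAM-only tracking to full log-dirty mode.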

> And second, I have been fighting with finding both conditions
> and (eventually) the root cause of a severe performance
> regression (compared to 4.3.x) I'm observing on an EPT+IOMMU
> system. This became _much_ worse after adding in the patch here
> (while in fact I had hoped it might help with the originally observed
> degradation): X startup fails due to timing out, and booting the
> guest now takes about 20 minutes. I didn't find the root cause of
> this yet, but meanwhile I know that
> - the same isn't observable on SVM
> - there's no problem when forcing the domain to use shadow
>    mode
> - there's no need for any device to actually be assigned to the
>    guest
> - the regression is very likely purely graphics related (based on
>    the observation that when running something that regularly but
>    not heavily updates the screen with X up, the guest consumes a
>    full CPU's worth of processing power, yet when that updating
>    doesn't happen, CPU consumption goes down, and it goes further
>    down when shutting down X altogether - at least as long as the
>    patch here doesn't get involved).
> This I'm observing on a Westmere box (and I didn't notice it earlier
> because that's one of those where due to a chipset erratum the
> IOMMU gets turned off by default), so it's possible that this can't
> be seen on more modern hardware. I'll hopefully find time today to
> check this on the one newer (Sandy Bridge) box I have.

So you're saying that the slowdown happens if you have EPT+IOMMU, but 
*not* if you have EPT alone (IOMMU disabled), or shadow + IOMMU?

I have an issue I haven't had time to look into where Windows installs 
are sometimes terribly slow on my Nehalem box; but it seems to be only 
with qemu-xen, not qemu-traditional.  I haven't tried with shadow.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:23:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNEn-0006kX-Fo; Mon, 17 Feb 2014 12:23:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WFNEl-0006jd-BA
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 12:23:52 +0000
Received: from [85.158.139.211:20636] by server-3.bemta-5.messagelabs.com id
	08/A4-13671-65FF1035; Mon, 17 Feb 2014 12:23:50 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392639828!4423099!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32623 invoked from network); 17 Feb 2014 12:23:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 12:23:49 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HCNhx0027007
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 12:23:43 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HCNfvr019126
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 12:23:42 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HCNf65016671; Mon, 17 Feb 2014 12:23:41 GMT
Received: from [192.168.0.100] (/116.227.152.143)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 04:23:41 -0800
Message-ID: <5301FF47.1040008@oracle.com>
Date: Mon, 17 Feb 2014 20:23:35 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>,
	Jinchun Kim <cienlux@gmail.com>
References: <930C8A66-3F17-4F3B-8419-82B3DBF5713B@gmail.com>
	<CAFLBxZaxvWSZzU2_TaymFqu3zYJ7S-p4yXEQh6H1Mw9GieTy=Q@mail.gmail.com>
	<CAFLBxZac8PTv-4qK-Z9sWv2PtbXsyMohdH=UhzRzO31n2BFu8g@mail.gmail.com>
In-Reply-To: <CAFLBxZac8PTv-4qK-Z9sWv2PtbXsyMohdH=UhzRzO31n2BFu8g@mail.gmail.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Why frontswap and cleancache make copies in tmem?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 02/17/2014 07:40 PM, George Dunlap wrote:
> On Mon, Feb 17, 2014 at 11:38 AM, George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>> On Sun, Feb 16, 2014 at 10:14 PM, Jinchun Kim <cienlux@gmail.com> wrote:
>>> Hi, All.
>>>
>>> While I was digging into tmem and its source code, I found some
>>> suspicious things about frontswap and cleancache.
>>> When the guest OS wants to evict either a dirty or a clean page,
>>> frontswap and cleancache will store it in tmem.
>>> The Linux kernel documentation says:
>>>
>>> [Documentation/vm/frontswap.txt]
>>> A "store" will copy the page to transcendent memory ....
>>> A "load" will copy the page, if found, from transcendent memory into kernel
>>> memory ...
>>>
>>> [Documentation/vm/cleancache.txt]
>>> A "put_page" will copy a (presumably about-to-be-evicted) page into
>>> cleancache ...
>>> A "get_page" will copy the page, if found, from cleancache into kernel
>>> memory ...
>>>

Jinchun, I'm glad you are interested in tmem.

>>> My colleagues and I think copying the page is not necessary (especially for
>>> cleancache) because both kernel memory and tmem have the same data. Why
>>> don't we just change the pointer to the page and let it belong to tmem? We were

No, kernel memory won't have the data any more.
For cleancache, the data exist only in tmem and on disk.

It's impossible to 'change the pointer to the page and let it belong to
tmem', because the page needs to be reclaimed by the guest OS.
One possible alternative is to allocate a new page and return it to the
guest OS, so that the original page can belong to tmem instead of copying
the data. But that way there is also some map/unmap cost, and it would
make tmem more complicated.

Any ideas about making tmem better are welcome.
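The copy semantics above can be illustrated with a tiny userspace sketch (made-up names; not the real tmem or cleancache API): "put" must copy the page's contents into tmem-owned memory precisely because the guest reclaims and reuses the page afterwards, and "get" copies them back into a fresh guest page.

```c
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* tmem-owned copy of one guest page. */
struct tmem_obj { unsigned char data[PAGE_SIZE]; };

/* "put_page": copy the evicted page's contents into tmem-owned memory,
 * so the guest page itself is free to be reclaimed. */
static struct tmem_obj *tmem_put(const unsigned char *guest_page)
{
    struct tmem_obj *obj = malloc(sizeof(*obj));
    if (obj)
        memcpy(obj->data, guest_page, PAGE_SIZE);
    return obj;
}

/* "get_page": copy the stored data back into a guest-provided page. */
static void tmem_get(const struct tmem_obj *obj, unsigned char *guest_page)
{
    memcpy(guest_page, obj->data, PAGE_SIZE);
}
```

If put merely recorded the pointer instead of copying, the data would be lost the moment the guest reused the page.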

>>> wondering if there is any specific reason to copy the pages to tmem.
>>
>> Konrad / Boris?  (Sorry, I can't remember the other person who's been
>> sending tmem patches recently.)
> 
> Oh, it was Bob Liu. :-)
> 

George, thanks for your cc.

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:25:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNFz-00071T-VG; Mon, 17 Feb 2014 12:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFNFv-00070t-3m
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 12:25:06 +0000
Received: from [85.158.137.68:28048] by server-2.bemta-3.messagelabs.com id
	25/C7-06531-E9FF1035; Mon, 17 Feb 2014 12:25:02 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392639897!1100728!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13962 invoked from network); 17 Feb 2014 12:24:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:24:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103149298"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 12:24:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 07:24:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFNFn-0007IH-Ii;
	Mon, 17 Feb 2014 12:24:55 +0000
Date: Mon, 17 Feb 2014 12:24:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <linux@arm.linux.org.uk>
In-Reply-To: <1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1402171223290.27926@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	arnd@arndb.de, marc.zyngier@arm.com, catalin.marinas@arm.com,
	nico@linaro.org, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, cov@codeaurora.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v9 3/5] arm: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: Christopher Covington <cov@codeaurora.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> CC: linux@arm.linux.org.uk
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net

Russell, are you happy with this patch (for 3.15)?



> 
> Changes in v7:
> - ifdef CONFIG_PARAVIRT the content of paravirt.h.
> 
> Changes in v3:
> - improve commit description and Kconfig help text;
> - no need to initialize pv_time_ops;
> - add PARAVIRT_TIME_ACCOUNTING.
> ---
>  arch/arm/Kconfig                |   20 ++++++++++++++++++++
>  arch/arm/include/asm/paravirt.h |   20 ++++++++++++++++++++
>  arch/arm/kernel/Makefile        |    2 ++
>  arch/arm/kernel/paravirt.c      |   25 +++++++++++++++++++++++++
>  4 files changed, 67 insertions(+)
>  create mode 100644 arch/arm/include/asm/paravirt.h
>  create mode 100644 arch/arm/kernel/paravirt.c
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..d6c3ba1 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1874,6 +1874,25 @@ config SWIOTLB
>  config IOMMU_HELPER
>  	def_bool SWIOTLB
>  
> +config PARAVIRT
> +	bool "Enable paravirtualization code"
> +	---help---
> +	  This changes the kernel so it can modify itself when it is run
> +	  under a hypervisor, potentially improving performance significantly
> +	  over full virtualization.
> +
> +config PARAVIRT_TIME_ACCOUNTING
> +	bool "Paravirtual steal time accounting"
> +	select PARAVIRT
> +	default n
> +	---help---
> +	  Select this option to enable fine granularity task steal time
> +	  accounting. Time spent executing other tasks in parallel with
> +	  the current vCPU is discounted from the vCPU power. To account for
> +	  that, there can be a small performance impact.
> +
> +	  If in doubt, say N here.
> +
>  config XEN_DOM0
>  	def_bool y
>  	depends on XEN
> @@ -1885,6 +1904,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select PARAVIRT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
> new file mode 100644
> index 0000000..8435ff59
> --- /dev/null
> +++ b/arch/arm/include/asm/paravirt.h
> @@ -0,0 +1,20 @@
> +#ifndef _ASM_ARM_PARAVIRT_H
> +#define _ASM_ARM_PARAVIRT_H
> +
> +#ifdef CONFIG_PARAVIRT
> +struct static_key;
> +extern struct static_key paravirt_steal_enabled;
> +extern struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops {
> +	unsigned long long (*steal_clock)(int cpu);
> +};
> +extern struct pv_time_ops pv_time_ops;
> +
> +static inline u64 paravirt_steal_clock(int cpu)
> +{
> +	return pv_time_ops.steal_clock(cpu);
> +}
> +#endif
> +
> +#endif
> diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
> index a30fc9b..34cf9a6 100644
> --- a/arch/arm/kernel/Makefile
> +++ b/arch/arm/kernel/Makefile
> @@ -87,6 +87,7 @@ obj-$(CONFIG_ARM_CPU_TOPOLOGY)  += topology.o
>  ifneq ($(CONFIG_ARCH_EBSA110),y)
>    obj-y		+= io.o
>  endif
> +obj-$(CONFIG_PARAVIRT)	+= paravirt.o
>  
>  head-y			:= head$(MMUEXT).o
>  obj-$(CONFIG_DEBUG_LL)	+= debug.o
> diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
> new file mode 100644
> index 0000000..53f371e
> --- /dev/null
> +++ b/arch/arm/kernel/paravirt.c
> @@ -0,0 +1,25 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2013 Citrix Systems
> + *
> + * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + */
> +
> +#include <linux/export.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <asm/paravirt.h>
> +
> +struct static_key paravirt_steal_enabled;
> +struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops pv_time_ops;
> +EXPORT_SYMBOL_GPL(pv_time_ops);
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:25:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNFz-00071T-VG; Mon, 17 Feb 2014 12:25:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFNFv-00070t-3m
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 12:25:06 +0000
Received: from [85.158.137.68:28048] by server-2.bemta-3.messagelabs.com id
	25/C7-06531-E9FF1035; Mon, 17 Feb 2014 12:25:02 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392639897!1100728!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13962 invoked from network); 17 Feb 2014 12:24:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:24:59 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103149298"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 12:24:57 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 07:24:57 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFNFn-0007IH-Ii;
	Mon, 17 Feb 2014 12:24:55 +0000
Date: Mon, 17 Feb 2014 12:24:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <linux@arm.linux.org.uk>
In-Reply-To: <1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1402171223290.27926@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<1389292336-9292-3-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	arnd@arndb.de, marc.zyngier@arm.com, catalin.marinas@arm.com,
	nico@linaro.org, will.deacon@arm.com,
	linux-kernel@vger.kernel.org, cov@codeaurora.org, olof@lixom.net,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v9 3/5] arm: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> Acked-by: Christopher Covington <cov@codeaurora.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> CC: linux@arm.linux.org.uk
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net

Russell, are you happy with this patch (for 3.15)?
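For anyone skimming the series: the whole paravirt layer added here is a
single function-pointer table plus an inline wrapper, which is why no
runtime pvops patching is involved. A self-contained userspace model of
that indirection follows; it keeps the kernel's names, but
mock_steal_clock and its return values are invented purely for
illustration (a real backend, e.g. the Xen one later in this series,
would read stolen time from the hypervisor).

```c
/* Userspace sketch of the pv_time_ops indirection from this patch.
 * Kernel names are kept; mock_steal_clock is a made-up backend that
 * pretends each CPU has accumulated 1000 ns of stolen time per CPU
 * index. */
#include <assert.h>

struct pv_time_ops {
	unsigned long long (*steal_clock)(int cpu);
};

static struct pv_time_ops pv_time_ops;

/* A hypervisor guest port fills in steal_clock at boot. */
static unsigned long long mock_steal_clock(int cpu)
{
	return 1000ULL * (unsigned long long)cpu;
}

static inline unsigned long long paravirt_steal_clock(int cpu)
{
	/* Dispatch through the ops table, as asm/paravirt.h does. */
	return pv_time_ops.steal_clock(cpu);
}

static void register_mock_backend(void)
{
	pv_time_ops.steal_clock = mock_steal_clock;
}
```

Since steal_clock is the only member, callers pay one indirect call and
nothing else; there is no patching machinery to carry around.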



> 
> Changes in v7:
> - ifdef CONFIG_PARAVIRT the content of paravirt.h.
> 
> Changes in v3:
> - improve commit description and Kconfig help text;
> - no need to initialize pv_time_ops;
> - add PARAVIRT_TIME_ACCOUNTING.
> ---
>  arch/arm/Kconfig                |   20 ++++++++++++++++++++
>  arch/arm/include/asm/paravirt.h |   20 ++++++++++++++++++++
>  arch/arm/kernel/Makefile        |    2 ++
>  arch/arm/kernel/paravirt.c      |   25 +++++++++++++++++++++++++
>  4 files changed, 67 insertions(+)
>  create mode 100644 arch/arm/include/asm/paravirt.h
>  create mode 100644 arch/arm/kernel/paravirt.c
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..d6c3ba1 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1874,6 +1874,25 @@ config SWIOTLB
>  config IOMMU_HELPER
>  	def_bool SWIOTLB
>  
> +config PARAVIRT
> +	bool "Enable paravirtualization code"
> +	---help---
> +	  This changes the kernel so it can modify itself when it is run
> +	  under a hypervisor, potentially improving performance significantly
> +	  over full virtualization.
> +
> +config PARAVIRT_TIME_ACCOUNTING
> +	bool "Paravirtual steal time accounting"
> +	select PARAVIRT
> +	default n
> +	---help---
> +	  Select this option to enable fine granularity task steal time
> +	  accounting. Time spent executing other tasks in parallel with
> +	  the current vCPU is discounted from the vCPU power. To account for
> +	  that, there can be a small performance impact.
> +
> +	  If in doubt, say N here.
> +
>  config XEN_DOM0
>  	def_bool y
>  	depends on XEN
> @@ -1885,6 +1904,7 @@ config XEN
>  	depends on !GENERIC_ATOMIC64
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
> +	select PARAVIRT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM.
>  
> diff --git a/arch/arm/include/asm/paravirt.h b/arch/arm/include/asm/paravirt.h
> new file mode 100644
> index 0000000..8435ff59
> --- /dev/null
> +++ b/arch/arm/include/asm/paravirt.h
> @@ -0,0 +1,20 @@
> +#ifndef _ASM_ARM_PARAVIRT_H
> +#define _ASM_ARM_PARAVIRT_H
> +
> +#ifdef CONFIG_PARAVIRT
> +struct static_key;
> +extern struct static_key paravirt_steal_enabled;
> +extern struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops {
> +	unsigned long long (*steal_clock)(int cpu);
> +};
> +extern struct pv_time_ops pv_time_ops;
> +
> +static inline u64 paravirt_steal_clock(int cpu)
> +{
> +	return pv_time_ops.steal_clock(cpu);
> +}
> +#endif
> +
> +#endif
> diff --git a/arch/arm/kernel/Makefile b/arch/arm/kernel/Makefile
> index a30fc9b..34cf9a6 100644
> --- a/arch/arm/kernel/Makefile
> +++ b/arch/arm/kernel/Makefile
> @@ -87,6 +87,7 @@ obj-$(CONFIG_ARM_CPU_TOPOLOGY)  += topology.o
>  ifneq ($(CONFIG_ARCH_EBSA110),y)
>    obj-y		+= io.o
>  endif
> +obj-$(CONFIG_PARAVIRT)	+= paravirt.o
>  
>  head-y			:= head$(MMUEXT).o
>  obj-$(CONFIG_DEBUG_LL)	+= debug.o
> diff --git a/arch/arm/kernel/paravirt.c b/arch/arm/kernel/paravirt.c
> new file mode 100644
> index 0000000..53f371e
> --- /dev/null
> +++ b/arch/arm/kernel/paravirt.c
> @@ -0,0 +1,25 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2013 Citrix Systems
> + *
> + * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + */
> +
> +#include <linux/export.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <asm/paravirt.h>
> +
> +struct static_key paravirt_steal_enabled;
> +struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops pv_time_ops;
> +EXPORT_SYMBOL_GPL(pv_time_ops);
> -- 
> 1.7.10.4
> 


From xen-devel-bounces@lists.xen.org Mon Feb 17 12:25:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNGn-0007AX-G3; Mon, 17 Feb 2014 12:25:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFNGl-0007A8-UL
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 12:25:56 +0000
Received: from [193.109.254.147:55322] by server-5.bemta-14.messagelabs.com id
	0C/6D-16688-3DFF1035; Mon, 17 Feb 2014 12:25:55 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392639953!4846482!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 380 invoked from network); 17 Feb 2014 12:25:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:25:54 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101401564"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 12:25:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 07:25:52 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFNGb-0007Is-TL;
	Mon, 17 Feb 2014 12:25:45 +0000
Date: Mon, 17 Feb 2014 12:25:42 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <Catalin.Marinas@arm.com>
In-Reply-To: <1389292336-9292-4-git-send-email-stefano.stabellini@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1402171225000.27926@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<1389292336-9292-4-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, linux@arm.linux.org.uk,
	Ian Campbell <Ian.Campbell@citrix.com>, arnd@arndb.de,
	marc.zyngier@arm.com, catalin.marinas@arm.com, nico@linaro.org,
	will.deacon@arm.com, linux-kernel@vger.kernel.org,
	cov@codeaurora.org, olof@lixom.net, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH v9 4/5] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 9 Jan 2014, Stefano Stabellini wrote:
> Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
> Necessary duplication of paravirt.h and paravirt.c with ARM.
> 
> The only paravirt interface supported is pv_time_ops.steal_clock, so no
> runtime pvops patching needed.
> 
> This allows us to make use of steal_account_process_tick for stolen
> ticks accounting.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> CC: will.deacon@arm.com
> CC: nico@linaro.org
> CC: marc.zyngier@arm.com
> CC: cov@codeaurora.org
> CC: arnd@arndb.de
> CC: olof@lixom.net
> CC: Catalin.Marinas@arm.com

Catalin, Will, are you happy with this patch for 3.15?
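To illustrate what the paravirt_steal_enabled key declared in
paravirt.c is for: it gates the (potentially expensive) steal-clock
read in the scheduler's tick accounting. The sketch below models that
consumer in plain C; the real kernel uses jump labels
(static_key_false()) instead of a flag, and fake_now with its values is
invented for illustration only.

```c
/* Sketch of how steal_account_process_tick() consumes
 * paravirt_steal_clock(): read the cumulative stolen-time counter,
 * charge the delta since the last read, and skip everything when the
 * enable key is off. */
#include <assert.h>

static int paravirt_steal_enabled;	/* stand-in for the static_key */
static unsigned long long fake_now;	/* pretend cumulative stolen ns */

static unsigned long long paravirt_steal_clock(int cpu)
{
	(void)cpu;
	return fake_now;
}

/* Return the stolen time accumulated since the previous call: the
 * delta the scheduler charges against the vCPU each tick. */
static unsigned long long account_steal_delta(unsigned long long *prev)
{
	unsigned long long now, delta;

	if (!paravirt_steal_enabled)
		return 0;

	now = paravirt_steal_clock(0);
	delta = now - *prev;
	*prev = now;
	return delta;
}
```

Because steal_clock returns a monotonically growing total, each caller
only has to remember its previous reading; that is what makes
steal_account_process_tick reusable across architectures.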

 
> Changes in v7:
> - ifdef CONFIG_PARAVIRT the content of paravirt.h.
> ---
>  arch/arm64/Kconfig                |   20 ++++++++++++++++++++
>  arch/arm64/include/asm/paravirt.h |   20 ++++++++++++++++++++
>  arch/arm64/kernel/Makefile        |    1 +
>  arch/arm64/kernel/paravirt.c      |   25 +++++++++++++++++++++++++
>  4 files changed, 66 insertions(+)
>  create mode 100644 arch/arm64/include/asm/paravirt.h
>  create mode 100644 arch/arm64/kernel/paravirt.c
> 
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 6d4dd22..d1003ba 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -212,6 +212,25 @@ config HAVE_ARCH_TRANSPARENT_HUGEPAGE
>  
>  source "mm/Kconfig"
>  
> +config PARAVIRT
> +	bool "Enable paravirtualization code"
> +	---help---
> +	  This changes the kernel so it can modify itself when it is run
> +	  under a hypervisor, potentially improving performance significantly
> +	  over full virtualization.
> +
> +config PARAVIRT_TIME_ACCOUNTING
> +	bool "Paravirtual steal time accounting"
> +	select PARAVIRT
> +	default n
> +	---help---
> +	  Select this option to enable fine granularity task steal time
> +	  accounting. Time spent executing other tasks in parallel with
> +	  the current vCPU is discounted from the vCPU power. To account for
> +	  that, there can be a small performance impact.
> +
> +	  If in doubt, say N here.
> +
>  config XEN_DOM0
>  	def_bool y
>  	depends on XEN
> @@ -220,6 +239,7 @@ config XEN
>  	bool "Xen guest support on ARM64 (EXPERIMENTAL)"
>  	depends on ARM64 && OF
>  	select SWIOTLB_XEN
> +	select PARAVIRT
>  	help
>  	  Say Y if you want to run Linux in a Virtual Machine on Xen on ARM64.
>  
> diff --git a/arch/arm64/include/asm/paravirt.h b/arch/arm64/include/asm/paravirt.h
> new file mode 100644
> index 0000000..fd5f428
> --- /dev/null
> +++ b/arch/arm64/include/asm/paravirt.h
> @@ -0,0 +1,20 @@
> +#ifndef _ASM_ARM64_PARAVIRT_H
> +#define _ASM_ARM64_PARAVIRT_H
> +
> +#ifdef CONFIG_PARAVIRT
> +struct static_key;
> +extern struct static_key paravirt_steal_enabled;
> +extern struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops {
> +	unsigned long long (*steal_clock)(int cpu);
> +};
> +extern struct pv_time_ops pv_time_ops;
> +
> +static inline u64 paravirt_steal_clock(int cpu)
> +{
> +	return pv_time_ops.steal_clock(cpu);
> +}
> +#endif
> +
> +#endif
> diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
> index 5ba2fd4..1dee735 100644
> --- a/arch/arm64/kernel/Makefile
> +++ b/arch/arm64/kernel/Makefile
> @@ -18,6 +18,7 @@ arm64-obj-$(CONFIG_SMP)			+= smp.o smp_spin_table.o
>  arm64-obj-$(CONFIG_HW_PERF_EVENTS)	+= perf_event.o
>  arm64-obj-$(CONFIG_HAVE_HW_BREAKPOINT)+= hw_breakpoint.o
>  arm64-obj-$(CONFIG_EARLY_PRINTK)	+= early_printk.o
> +arm64-obj-$(CONFIG_PARAVIRT)		+= paravirt.o
>  
>  obj-y					+= $(arm64-obj-y) vdso/
>  obj-m					+= $(arm64-obj-m)
> diff --git a/arch/arm64/kernel/paravirt.c b/arch/arm64/kernel/paravirt.c
> new file mode 100644
> index 0000000..53f371e
> --- /dev/null
> +++ b/arch/arm64/kernel/paravirt.c
> @@ -0,0 +1,25 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * Copyright (C) 2013 Citrix Systems
> + *
> + * Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> + */
> +
> +#include <linux/export.h>
> +#include <linux/jump_label.h>
> +#include <linux/types.h>
> +#include <asm/paravirt.h>
> +
> +struct static_key paravirt_steal_enabled;
> +struct static_key paravirt_steal_rq_enabled;
> +
> +struct pv_time_ops pv_time_ops;
> +EXPORT_SYMBOL_GPL(pv_time_ops);
> -- 
> 1.7.10.4
> 


From xen-devel-bounces@lists.xen.org Mon Feb 17 12:34:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNP7-0007YT-II; Mon, 17 Feb 2014 12:34:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFNP6-0007YO-2k
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 12:34:32 +0000
Received: from [85.158.137.68:48014] by server-9.bemta-3.messagelabs.com id
	45/3B-10184-7D102035; Mon, 17 Feb 2014 12:34:31 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392640444!2372635!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29222 invoked from network); 17 Feb 2014 12:34:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:34:30 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101402992"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 12:34:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 07:34:03 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFNOc-0007PF-BW;
	Mon, 17 Feb 2014 12:34:02 +0000
Date: Mon, 17 Feb 2014 12:33:59 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <linux@arm.linux.org.uk>
In-Reply-To: <20140121180750.GO30706@mudshark.cambridge.arm.com>
Message-ID: <alpine.DEB.2.02.1402171232220.27926@kaball.uk.xensource.com>
References: <1390311864-19119-1-git-send-email-stefano.stabellini@eu.citrix.com>
	<20140121180750.GO30706@mudshark.cambridge.arm.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	"arnd@arndb.de" <arnd@arndb.de>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Catalin Marinas <Catalin.Marinas@arm.com>,
	"jaccon.bastiaansen@gmail.com" <jaccon.bastiaansen@gmail.com>,
	Will Deacon <will.deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v4] arm: remove !CPU_V6 and
 !GENERIC_ATOMIC64 build dependencies for XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 21 Jan 2014, Will Deacon wrote:
> On Tue, Jan 21, 2014 at 01:44:24PM +0000, Stefano Stabellini wrote:
> > Remove !GENERIC_ATOMIC64 build dependency:
> > - introduce xen_atomic64_xchg
> > - use it to implement xchg_xen_ulong
> > 
> > Remove !CPU_V6 build dependency:
> > - introduce __cmpxchg8 and __cmpxchg16, compiled even ifdef
> >   CONFIG_CPU_V6
> > - implement sync_cmpxchg using __cmpxchg8 and __cmpxchg16
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > CC: arnd@arndb.de
> > CC: linux@arm.linux.org.uk
> > CC: will.deacon@arm.com
> > CC: catalin.marinas@arm.com
> > CC: linux-arm-kernel@lists.infradead.org
> > CC: linux-kernel@vger.kernel.org
> > CC: xen-devel@lists.xenproject.org
> 
>   Reviewed-by: Will Deacon <will.deacon@arm.com>

Russell, are you OK with this patch for 3.15?
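The two helpers in the patch mirror the existing word-sized __cmpxchg
loop at byte and halfword width, using ldrexb/strexbeq (ldrexh/strexheq)
exclusives and retrying on a failed store. Their observable semantics
can be sketched in portable C with the GCC/Clang __atomic builtins;
cmpxchg8_model below is an invented name for illustration, not kernel
code.

```c
/* Model of __cmpxchg8's semantics: store 'new' only if *ptr currently
 * equals 'old', and in every case return the value that was observed
 * in memory (the asm helpers' oldval). */
#include <assert.h>
#include <stdint.h>

static uint8_t cmpxchg8_model(volatile uint8_t *ptr, uint8_t old,
			      uint8_t new)
{
	uint8_t expected = old;

	/* On success *ptr becomes 'new'; on failure 'expected' is
	 * overwritten with the current contents of *ptr.  Either way
	 * 'expected' ends up holding the observed value. */
	__atomic_compare_exchange_n(ptr, &expected, new, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return expected;
}
```

A caller can therefore detect success by comparing the return value
with 'old', which is exactly how sync_cmpxchg users consume these
helpers.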


> Changes in v4:
> - avoid moving and renaming atomic64_xchg
> - introduce xen_atomic64_xchg
> - fix asm comment in __cmpxchg8 and __cmpxchg16.
> 
> ---
>  arch/arm/Kconfig                   |    3 +-
>  arch/arm/include/asm/cmpxchg.h     |   60 ++++++++++++++++++++++++------------
>  arch/arm/include/asm/sync_bitops.h |   24 ++++++++++++++-
>  arch/arm/include/asm/xen/events.h  |   32 ++++++++++++++++++-
>  4 files changed, 95 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index c1f1a7e..ae54ae0 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -1881,8 +1881,7 @@ config XEN_DOM0
>  config XEN
>  	bool "Xen guest support on ARM (EXPERIMENTAL)"
>  	depends on ARM && AEABI && OF
> -	depends on CPU_V7 && !CPU_V6
> -	depends on !GENERIC_ATOMIC64
> +	depends on CPU_V7
>  	select ARM_PSCI
>  	select SWIOTLB_XEN
>  	help
> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
> index df2fbba..a17cff1 100644
> --- a/arch/arm/include/asm/cmpxchg.h
> +++ b/arch/arm/include/asm/cmpxchg.h
> @@ -133,6 +133,44 @@ extern void __bad_cmpxchg(volatile void *ptr, int size);
>   * cmpxchg only support 32-bits operands on ARMv6.
>   */
>  
> +static inline unsigned long __cmpxchg8(volatile void *ptr, unsigned long old,
> +				      unsigned long new)
> +{
> +	unsigned long oldval, res;
> +
> +	do {
> +		asm volatile("@ __cmpxchg8\n"
> +		"	ldrexb	%1, [%2]\n"
> +		"	mov	%0, #0\n"
> +		"	teq	%1, %3\n"
> +		"	strexbeq %0, %4, [%2]\n"
> +			: "=&r" (res), "=&r" (oldval)
> +			: "r" (ptr), "Ir" (old), "r" (new)
> +			: "memory", "cc");
> +	} while (res);
> +
> +	return oldval;
> +}
> +
> +static inline unsigned long __cmpxchg16(volatile void *ptr, unsigned long old,
> +				      unsigned long new)
> +{
> +	unsigned long oldval, res;
> +
> +	do {
> +		asm volatile("@ __cmpxchg16\n"
> +		"	ldrexh	%1, [%2]\n"
> +		"	mov	%0, #0\n"
> +		"	teq	%1, %3\n"
> +		"	strexheq %0, %4, [%2]\n"
> +			: "=&r" (res), "=&r" (oldval)
> +			: "r" (ptr), "Ir" (old), "r" (new)
> +			: "memory", "cc");
> +	} while (res);
> +
> +	return oldval;
> +}
> +
>  static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
>  				      unsigned long new, int size)
>  {
> @@ -141,28 +179,10 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
>  	switch (size) {
>  #ifndef CONFIG_CPU_V6	/* min ARCH >= ARMv6K */
>  	case 1:
> -		do {
> -			asm volatile("@ __cmpxchg1\n"
> -			"	ldrexb	%1, [%2]\n"
> -			"	mov	%0, #0\n"
> -			"	teq	%1, %3\n"
> -			"	strexbeq %0, %4, [%2]\n"
> -				: "=&r" (res), "=&r" (oldval)
> -				: "r" (ptr), "Ir" (old), "r" (new)
> -				: "memory", "cc");
> -		} while (res);
> +		oldval = __cmpxchg8(ptr, old, new);
>  		break;
>  	case 2:
> -		do {
> -			asm volatile("@ __cmpxchg1\n"
> -			"	ldrexh	%1, [%2]\n"
> -			"	mov	%0, #0\n"
> -			"	teq	%1, %3\n"
> -			"	strexheq %0, %4, [%2]\n"
> -				: "=&r" (res), "=&r" (oldval)
> -				: "r" (ptr), "Ir" (old), "r" (new)
> -				: "memory", "cc");
> -		} while (res);
> +		oldval = __cmpxchg16(ptr, old, new);
>  		break;
>  #endif
>  	case 4:
> diff --git a/arch/arm/include/asm/sync_bitops.h b/arch/arm/include/asm/sync_bitops.h
> index 63479ee..942659a 100644
> --- a/arch/arm/include/asm/sync_bitops.h
> +++ b/arch/arm/include/asm/sync_bitops.h
> @@ -21,7 +21,29 @@
>  #define sync_test_and_clear_bit(nr, p)	_test_and_clear_bit(nr, p)
>  #define sync_test_and_change_bit(nr, p)	_test_and_change_bit(nr, p)
>  #define sync_test_bit(nr, addr)		test_bit(nr, addr)
> -#define sync_cmpxchg			cmpxchg
>  
> +static inline unsigned long sync_cmpxchg(volatile void *ptr,
> +										 unsigned long old,
> +										 unsigned long new)
> +{
> +	unsigned long oldval;
> +	int size = sizeof(*(ptr));
> +
> +	smp_mb();
> +	switch (size) {
> +	case 1:
> +		oldval = __cmpxchg8(ptr, old, new);
> +		break;
> +	case 2:
> +		oldval = __cmpxchg16(ptr, old, new);
> +		break;
> +	default:
> +		oldval = __cmpxchg(ptr, old, new, size);
> +		break;
> +	}
> +	smp_mb();
> +
> +	return oldval;
> +}
>  
>  #endif
> diff --git a/arch/arm/include/asm/xen/events.h b/arch/arm/include/asm/xen/events.h
> index 8b1f37b..2032ee6 100644
> --- a/arch/arm/include/asm/xen/events.h
> +++ b/arch/arm/include/asm/xen/events.h
> @@ -16,7 +16,37 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
>  	return raw_irqs_disabled_flags(regs->ARM_cpsr);
>  }
>  
> -#define xchg_xen_ulong(ptr, val) atomic64_xchg(container_of((ptr),	\
> +#ifdef CONFIG_GENERIC_ATOMIC64
> +/* if CONFIG_GENERIC_ATOMIC64 is defined we cannot use the generic
> + * atomic64_xchg function because it is implemented using spin locks.
> + * Here we need proper atomic instructions to read and write memory
> + * shared with the hypervisor.
> + */
> +static inline u64 xen_atomic64_xchg(atomic64_t *ptr, u64 new)
> +{
> +	u64 result;
> +	unsigned long tmp;
> +
> +	smp_mb();
> +
> +	__asm__ __volatile__("@ xen_atomic64_xchg\n"
> +"1:	ldrexd	%0, %H0, [%3]\n"
> +"	strexd	%1, %4, %H4, [%3]\n"
> +"	teq	%1, #0\n"
> +"	bne	1b"
> +	: "=&r" (result), "=&r" (tmp), "+Qo" (ptr->counter)
> +	: "r" (&ptr->counter), "r" (new)
> +	: "cc");
> +
> +	smp_mb();
> +
> +	return result;
> +}
> +#else
> +#define xen_atomic64_xchg atomic64_xchg
> +#endif
> +
> +#define xchg_xen_ulong(ptr, val) xen_atomic64_xchg(container_of((ptr),	\
>  							    atomic64_t,	\
>  							    counter), (val))
>  
> -- 
> 1.7.10.4
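[Editorial note: the barrier discipline the patch relies on can be mirrored in portable C. The sketch below is a hypothetical user-space analogue, not the kernel code: the function names cas8, cas16 and xchg64 are invented for illustration, and the GCC/Clang __atomic builtins with sequentially consistent ordering stand in for the open-coded ldrex/strex loops plus the smp_mb() before and after that sync_cmpxchg and xen_atomic64_xchg use.]

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical analogue of the patch's __cmpxchg8: a byte-wide
 * compare-and-swap that returns the previous value.  SEQ_CST gives
 * full ordering on both sides, mirroring smp_mb(); CAS; smp_mb(). */
static inline uint8_t cas8(volatile uint8_t *p, uint8_t old, uint8_t new)
{
	uint8_t expected = old;

	/* On success, expected keeps the old value; on failure it is
	 * updated to the current value -- either way it is what the
	 * kernel's __cmpxchg8() would return as oldval. */
	__atomic_compare_exchange_n(p, &expected, new, 0 /* strong */,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return expected;
}

/* Hypothetical analogue of __cmpxchg16 (halfword-wide). */
static inline uint16_t cas16(volatile uint16_t *p, uint16_t old, uint16_t new)
{
	uint16_t expected = old;

	__atomic_compare_exchange_n(p, &expected, new, 0,
				    __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
	return expected;
}

/* Hypothetical analogue of xen_atomic64_xchg(): an unconditional
 * 64-bit exchange with full barriers.  On ARMv7 a compiler lowers
 * this to an ldrexd/strexd retry loop much like the one the patch
 * open-codes, rather than the spinlock-based GENERIC_ATOMIC64 path. */
static inline uint64_t xchg64(volatile uint64_t *p, uint64_t new)
{
	return __atomic_exchange_n(p, new, __ATOMIC_SEQ_CST);
}
```

The point of the sized helpers is visible in the CAS semantics: a matching old value swaps in the new one, a mismatch leaves memory untouched, and in both cases the caller gets the prior contents back.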

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:37:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNSL-0007gM-BF; Mon, 17 Feb 2014 12:37:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFNSJ-0007gH-AI
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 12:37:51 +0000
Received: from [85.158.143.35:55156] by server-1.bemta-4.messagelabs.com id
	3F/B1-31661-E9202035; Mon, 17 Feb 2014 12:37:50 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392640668!6221355!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27733 invoked from network); 17 Feb 2014 12:37:48 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 12:37:48 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 12:37:48 +0000
Message-Id: <530210A8020000780011CDD5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 12:37:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>,"Tim Deegan" <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<5301FF51.1060509@eu.citrix.com>
In-Reply-To: <5301FF51.1060509@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 13:23, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/17/2014 10:18 AM, Jan Beulich wrote:
>> And second, I have been fighting with finding both conditions
>> and (eventually) the root cause of a severe performance
>> regression (compared to 4.3.x) I'm observing on an EPT+IOMMU
>> system. This became _much_ worse after adding in the patch here
>> (while in fact I had hoped it might help with the originally observed
>> degradation): X startup fails due to timing out, and booting the
>> guest now takes about 20 minutes. I didn't find the root cause of
>> this yet, but meanwhile I know that
>> - the same isn't observable on SVM
>> - there's no problem when forcing the domain to use shadow
>>    mode
>> - there's no need for any device to actually be assigned to the
>>    guest
>> - the regression is very likely purely graphics related (based on
>>    the observation that when running something that regularly but
>>    not heavily updates the screen with X up, the guest consumes a
>>    full CPU's worth of processing power, yet when that updating
>>    doesn't happen, CPU consumption goes down, and it goes further
>>    down when shutting down X altogether - at least as long as the
>>    patch here doesn't get involved).
>> This I'm observing on a Westmere box (and I didn't notice it earlier
>> because that's one of those where due to a chipset erratum the
>> IOMMU gets turned off by default), so it's possible that this can't
>> be seen on more modern hardware. I'll hopefully find time today to
>> check this on the one newer (Sandy Bridge) box I have.
> 
> So you're saying that the slowdown happens if you have EPT+IOMMU, but 
> *not* if you have EPT alone (IOMMU disabled), or shadow + IOMMU?
> 
> I have an issue I haven't had time to look into where windows installs 
> are sometimes terribly slow on my Nehalem box; but it seems to be only 
> with qemu-xen, not qemu-traditional.  I haven't tried with shadow.

Sorry I forgot to mention this - I too suspected the qemu version
update to be one possible reason, but I'm seeing the same behavior
with qemu-trad.

Jan



From xen-devel-bounces@lists.xen.org Mon Feb 17 12:43:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNXN-0007s3-TA; Mon, 17 Feb 2014 12:43:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFNXH-0007ry-W5
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 12:43:00 +0000
Received: from [193.109.254.147:4952] by server-9.bemta-14.messagelabs.com id
	0D/25-24895-3D302035; Mon, 17 Feb 2014 12:42:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392640978!4830339!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17843 invoked from network); 17 Feb 2014 12:42:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 12:42:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 12:42:58 +0000
Message-Id: <530211DE020000780011CDEA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 12:42:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
In-Reply-To: <21249.64449.582039.323772@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 13:08, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> xen.org writes ("[xen-unstable test] 24870: regressions - trouble: 
> broken/fail/pass"):
>> flight 24870 xen-unstable real [real]
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24870/ 
>> 
>> Regressions :-(
>> 
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  build-i386-oldkern            3 host-build-prep       fail REGR. vs. 24862
> 
> This was the "usual" failure: Citrix's intercepting web proxy causes
> some hg clones of linux-2.6.18.hg from xenbits to fail.  The rest of
> the flight was successful.
> 
> The rest of the weekend's tests were badly affected by a disk failure
> on earwig.  So as a result we didn't get a push.
> 
> I cleared out a bunch of other stuff running in the test system in an
> effort to get a pass sooner, but peeking at the results the same job
> has failed the same way in the currently-running flight.  So we won't
> get a push in that iteration either.
> 
> We should consider doing a force push for RC4.  The risks are:
>  * There is something actually wrong with xen.git which causes the
>    32-bit 2.6.18 build to fail;
>  * Less resistance in the future to 2.6.18 build failures.
> I'll discuss these in turn.
> 
> The build-*-oldkern tests involve using the kernel-building machinery
> in xen.git to clone 2.6.18 from xenbits and build it.  Firstly, I think
> it's unlikely that anything in xen.git#d883c179..4e8d89bc would affect
> that.  Secondly, the build-amd64-oldkern builds have passed.  So I
> think we can almost entirely discount the first risk.
> 
> I think the second risk is tolerable.  We should keep an eye on it for
> a bit and if it turns out that the oldkern build really does become
> broken later and as a result keeps failing indefinitely, we will be
> able to spot that.
> 
> So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> xen.git#master and call it RC4.  Comments welcome.

On the basis of the almost-push mentioned above, I agree,
irrespective of the apparent regression I'm facing.

Jan



X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 13:08, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> xen.org writes ("[xen-unstable test] 24870: regressions - trouble: 
> broken/fail/pass"):
>> flight 24870 xen-unstable real [real]
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24870/ 
>> 
>> Regressions :-(
>> 
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  build-i386-oldkern            3 host-build-prep       fail REGR. vs. 24862
> 
> This was the "usual" failure: Citrix's intercepting web proxy causes
> some hg clones of linux-2.6.18.hg from xenbits to fail.  The rest of
> the flight was successful.
> 
> The rest of the weekend's tests were badly affected by a disk failure
> on earwig.  As a result we didn't get a push.
> 
> I cleared out a bunch of other stuff running in the test system in an
> effort to get a pass sooner, but peeking at the results the same job
> has failed the same way in the currently-running flight.  So we won't
> get a push in that iteration either.
> 
> We should consider doing a force push for RC4.  The risks are:
>  * There is something actually wrong with xen.git which causes the
>    32-bit 2.6.18 build to fail;
>  * Less resistance in the future to 2.6.18 build failures.
> I'll discuss these in turn.
> 
> The build-*-oldkern tests involve using the kernel-building machinery
> in xen.git to clone 2.6.18 from xenbits and build it.  Firstly, I think
> it's unlikely that anything in xen.git#d883c179..4e8d89bc would affect
> that.  Secondly, the build-amd64-oldkern builds have passed.  So I
> think we can almost entirely discount the first risk.
> 
> I think the second risk is tolerable.  We should keep an eye on it for
> a bit and if it turns out that the oldkern build really does become
> broken later and as a result keeps failing indefinitely, we will be
> able to spot that.
> 
> So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> xen.git#master and call it RC4.  Comments welcome.

On the basis of the almost-push mentioned above, I agree,
irrespective of the apparent regression I'm facing.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 12:46:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 12:46:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFNae-0007z7-GY; Mon, 17 Feb 2014 12:46:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <furryfuttock@gmail.com>) id 1WFNac-0007z1-Tl
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 12:46:27 +0000
Received: from [85.158.143.35:19630] by server-3.bemta-4.messagelabs.com id
	A6/1A-11539-2A402035; Mon, 17 Feb 2014 12:46:26 +0000
X-Env-Sender: furryfuttock@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392641185!6234383!1
X-Originating-IP: [74.125.82.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9925 invoked from network); 17 Feb 2014 12:46:25 -0000
Received: from mail-we0-f175.google.com (HELO mail-we0-f175.google.com)
	(74.125.82.175)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 12:46:25 -0000
Received: by mail-we0-f175.google.com with SMTP id q59so10698464wes.34
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 04:46:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:message-id:to:cc:subject:in-reply-to:references
	:mime-version:content-type:content-transfer-encoding;
	bh=WCdiNEOAGkp5CilTJlcCtXGE8jJRTI+o1HxakvZ8Cqk=;
	b=C98Dj2ETqQ300/N6HyfJ1kJ+WVtl1xVurMRZ8ec0hwhq2nzjWEr5+/JPhB34Nn3Cos
	GxZk2icrQEYYcAFcvHpwWRv40gmCvTZF5fr1def+3bc3QBEAlQ8Vr0Tii++WrobHTx+4
	fr+hPdgCg67xzdoIxEkTvndwICSOrWJov9jsSojr8l+UpGBmmZj90pVcQ7Gw9gw/D7oL
	ob0d0n8EaTHhSWTa05TK4bF86iWEJfSDxE+jNOPr2kZaDpDIT6rylnJfs+knxdL+EZl2
	0iw/kjT/HzdimOeZ0m3/3loSoP7nMHPriEAjmfHe6OAM+eUCwWHojmD7rosH8+jGFxjt
	Y8pQ==
X-Received: by 10.194.93.193 with SMTP id cw1mr1470920wjb.72.1392641184912;
	Mon, 17 Feb 2014 04:46:24 -0800 (PST)
Received: from [127.0.0.1] (wombat.nemo.cl. [208.111.35.80])
	by mx.google.com with ESMTPSA id ev4sm32120934wib.1.2014.02.17.04.46.21
	for <multiple recipients>
	(version=TLSv1.1 cipher=RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 04:46:24 -0800 (PST)
Date: Mon, 17 Feb 2014 12:46:16 +0000
From: Simon Martin <furryfuttock@gmail.com>
X-Priority: 3 (Normal)
Message-ID: <752791084.20140217124616@gmail.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
In-Reply-To: <1392398466.32038.334.camel@Solace>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace> 
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
MIME-Version: 1.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Dario,

> All this to say: it should be possible to get a bit more isolation
> by tweaking the proper Xen code path appropriately, but if the amount
> of interference that comes from two hyperthreads sharing registers,
> pipeline stages, and whatever else they share is enough to disturb
> your workload, then I'm afraid we'll never get much farther than the
> 'don't use hyperthreading' solution! :-(

Hyperthreading is just a way to improve CPU resource utilization. Even
when you are doing a CPU-intensive operation, many of the processor's
circuits are actually idle, so adding two pipelines to feed one
execution core is a good way to improve total throughput, but it does
have its caveats. I had totally forgotten this.

Given the way this works there isn't much that Xen can do. It is a
physical restriction.

The only thing I can think of would be to add an option to run
hardware interaction only on a subset of the available hyperthreads,
but that would require hacking the Dom0 kernel to lock device drivers
onto a given set of CPUs, and might end up worse than where we started.
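
On Linux, which logical CPUs are hyperthread siblings of each other
can be read from sysfs; a quick sketch (the sysfs path is standard on
Linux kernels, including dom0 kernels):

```shell
# Print each logical CPU together with the set of hyperthread
# siblings it shares a physical core with.
for c in /sys/devices/system/cpu/cpu[0-9]*; do
    echo "$(basename "$c"): $(cat "$c"/topology/thread_siblings_list)"
done
```

On a non-SMT machine each CPU lists only itself as a sibling.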

Another way might be to have overlapping CPU pools: that way I could
have Dom0 running on only one core while the DomUs are spread over the
available cores. AFAIU it is the hardware interaction that is causing
the interdependency.

> Anyway, with respect to the first part of this reasoning, would you
> mind (when you've got the time, of course) one more test? If not, I'd
> say: configure the system as I was suggesting in my first reply, i.e.
> using core #2 as well (or, in general, all the cores). Also, make sure
> you add this parameter to the Xen boot command line:

>  sched_smt_power_savings=1

> (some background here:
> http://lists.xen.org/archives/html/xen-devel/2009-03/msg01335.html)

> And then run the bench with disk activity on.

OK. This is my current configuration:

Dom0     PCPU 0,1,2   no pinning
win7x64  PCPU   1,2   pinned
pv499    PCPU       3 pinned

And I get the same interdependence.

root@smartin-xen:~# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1253     3     r-----      37.1
win7x64                                      4  2046     2     ------     106.8
pv499                                        5   128     1     r-----      66.8
root@smartin-xen:~# xl vcpu-list 
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0    1   -b-      21.2  all
Domain-0                             0     1    2   r--       8.8  all
Domain-0                             0     2    0   -b-       8.2  all
win7x64                              4     0    1   -b-      59.8  1
win7x64                              4     1    2   -b-      49.0  2
pv499                                5     0    3   r--      70.2  3
root@smartin-xen:~# xl cpupool-list -c
Name               CPU list
Pool-0             0,1,2
pv499              3
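
For reference, a configuration like the one shown in these listings
can be recreated with xl roughly as follows (a sketch only; pool,
domain names and CPU numbers are taken from the listings above, and
the exact cpupool-create syntax varies between xl versions):

```shell
# Free physical CPU 3 from the default pool, create a pool that
# owns it, and move the pv499 domain into that pool.
xl cpupool-cpu-remove Pool-0 3
xl cpupool-create name=\"pv499\" cpus=\"3\"
xl cpupool-migrate pv499 pv499

# Pin the two win7x64 vCPUs to physical CPUs 1 and 2.
xl vcpu-pin win7x64 0 1
xl vcpu-pin win7x64 1 2
```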

I have gone back to my working settings.


-- 
Best regards,
 Simon                            mailto:furryfuttock@gmail.com


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 13:19:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 13:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFO6J-0008LU-JP; Mon, 17 Feb 2014 13:19:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WFO6I-0008LP-0X
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 13:19:10 +0000
Received: from [193.109.254.147:37051] by server-1.bemta-14.messagelabs.com id
	DD/AF-15438-D4C02035; Mon, 17 Feb 2014 13:19:09 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392643148!4871866!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26086 invoked from network); 17 Feb 2014 13:19:08 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 13:19:08 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=Rp/em3bSkNw/rx+dv2d08G8Gb3cSBXZFVbCJqRxTedlmJ4vipsXuqMJm
	E6IK+y91DNeoGzVmYGt/OTAiJ9kxMHzrm5QQhQ1PmnOm7MvOV8apRn6qH
	QBuKv0Kx6F2+BNDv2+ooMgjGOGfU/skTMZwqpidzKpjgUnifOZpJkp/cI
	j3sX4Mb2w3wLhpRckShEp0ywMNFRZ5zo1VbJl7X/JEFJjWC82+hL2JAmj
	JQtjBE+KqZcTx/KVx2jWlh8RRgwGG;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392643148; x=1424179148;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=jKQvMei/sAyTC6y3ItwYAhxJDq7XTWkckmPuE7SCldo=;
	b=V24P5eTmC2og0soeFCiMmWVWsAsI5jOKkqF+usb2c/6UhNNkR7xbFNZC
	0PgahX4ZuFGMD9J9Zf/Zprcv64RSAJp71Q2KnPSmIq/T7BpnNuqxOvJd3
	4Xm+MCd46IIlKzRPgqrs43SvF+tCh2Orx5JRRbEYrHKben7TnhhyYPydo
	UI3qSH4XHVS73jP5sspT0tX2/2yG1/lpV3wRTHH40zSfZlmbz1BxhW/KD
	cA7UXcpkvNYHyhoN/fctoeF5UUza1;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.95,860,1384297200"; d="scan'208";a="185798419"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate10u.abg.fsc.net with ESMTP; 17 Feb 2014 14:19:07 +0100
X-IronPort-AV: E=Sophos;i="4.95,860,1384297200"; d="scan'208";a="31718146"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 17 Feb 2014 14:19:08 +0100
Message-ID: <53020C4B.6000509@ts.fujitsu.com>
Date: Mon, 17 Feb 2014 14:19:07 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
In-Reply-To: <1392398466.32038.334.camel@Solace>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>,
	Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14.02.2014 18:21, Dario Faggioli wrote:
>
> Actually, you are right. It looks like there is no command or command
> parameter explicitly telling to which pool a domain belongs [BTW, adding
> Juergen, who knows that for sure].

You didn't add me, but I just stumbled over this message. :-)

When I added cpupools, the information could be obtained via "xm list -l".
At the moment I haven't got a xen-unstable system up, and on my 4.2.3
machine "xl list -l" isn't giving any information at all.

With "xenstore-ls /vm" the information can be retrieved: it is listed
under <uuid>/pool_name (with <uuid> being the UUID of the domain in
question).
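
Based on that xenstore layout, a small sketch that maps each running
VM to its pool (this assumes the usual /vm/<uuid>/name node exists
alongside pool_name, which is the case on typical xl systems; it must
be run on the Xen host as root):

```shell
# List each VM's name together with the cpupool it belongs to,
# using the /vm/<uuid>/pool_name node described above.
for uuid in $(xenstore-list /vm); do
    name=$(xenstore-read "/vm/$uuid/name" 2>/dev/null)
    pool=$(xenstore-read "/vm/$uuid/pool_name" 2>/dev/null)
    echo "${name:-$uuid}: ${pool:-unknown}"
done
```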

> If that is the case, we really should add one.

Indeed. I think "xl cpupool-list" should have another option to show the
domains in the cpupool. I'll prepare a patch.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 13:19:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 13:19:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFO6r-0008Nj-0b; Mon, 17 Feb 2014 13:19:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WFO6p-0008NX-UO
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 13:19:44 +0000
Received: from [85.158.139.211:21800] by server-3.bemta-5.messagelabs.com id
	6B/71-13671-F6C02035; Mon, 17 Feb 2014 13:19:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392643180!4370239!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10443 invoked from network); 17 Feb 2014 13:19:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 13:19:42 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="101414063"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 13:19:40 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 08:19:40 -0500
Message-ID: <53020C69.2020505@citrix.com>
Date: Mon, 17 Feb 2014 14:19:37 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xen.org>
References: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1392636577-10305-1-git-send-email-andrew.cooper3@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: george.dunlap@eu.citrix.com, keir@xen.org, JBeulich@suse.com
Subject: Re: [Xen-devel] [PATCH v3 0/3] Move RTC interrupt injection back
 into the vpt code.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 12:29, Andrew Cooper wrote:
> Hi,
>
> This series implements the most recent idea Tim was proposing about
> reworking the RTC PF interrupt injection.
>
> Patch 1 switches handling the !PIE case to calculate the right answer
> for REG_C.PF on demand rather than running the timers.
> Patch 2 switches back to the old model of having the vpt code control
> the timer interrupt injection; this is the fix for the w2k3 hang.
> Patch 3 is just a minor cleanup, and not particularly necessary.
>
> v3 has undergone extensive testing in XenRT, confirming that the w2k3
> hang has not reoccurred in 100 tests (we would normally expect to see
> 10-30 recurrences), and the clock drift tests are happy with the new code.
>
> Roger:
>   Would you kindly test against FreeBSD again please?

Tested-by: Roger Pau Monné <roger.pau@citrix.com>
On FreeBSD 10.0, 9.2 and 8.4.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 13:24:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 13:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFOB2-00008q-Oj; Mon, 17 Feb 2014 13:24:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1WFOB1-00008k-JY
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 13:24:03 +0000
Received: from [85.158.139.211:22419] by server-8.bemta-5.messagelabs.com id
	AD/B0-05298-27D02035; Mon, 17 Feb 2014 13:24:02 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392643440!4389669!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17213 invoked from network); 17 Feb 2014 13:24:02 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 13:24:02 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HDNndl026225
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 13:23:49 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HDNlx2017542
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 13:23:47 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HDNlpD017533; Mon, 17 Feb 2014 13:23:47 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 05:23:46 -0800
Date: Mon, 17 Feb 2014 14:23:31 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: xen-devel@lists.xenproject.org, virtio@lists.oasis-open.org
Message-ID: <20140217132331.GA3441@olila.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, rusty@au1.ibm.com,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com
Subject: [Xen-devel] VIRTIO - compatibility with different virtualization
	solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Below is a summary of work on VIRTIO compatibility with different
virtualization solutions. It was done mainly from a Xen point of view,
but the results are quite generic and can be applied to a wide spectrum
of virtualization platforms.

VIRTIO devices were designed as a set of generic devices for virtual environments.
They work without major issues on many currently existing virtualization solutions.
However, there is one VIRTIO specification and implementation issue which could hinder
VIRTIO device/driver implementation on new or even existing platforms (e.g. Xen).

The problem is that the specification uses guest physical addresses as pointers
to virtqueues, buffers and other structures. This means that the VIRTIO device
controller (hypervisor/host or a special device domain/process) knows the guest
physical memory layout and simply maps the required regions as needed. However,
this crude mapping mechanism usually assumes that the guest does not impose any
access restrictions on its memory. That situation is not desirable, because
guests often want to restrict access to their memory as a whole and grant access
only to the specific regions needed for device operation. Fortunately, many
hypervisors have more or less advanced memory sharing mechanisms with the
relevant access control built in. However, those mechanisms do not use guest
physical addresses as the shared memory region address/reference, but a unique
identifier, which could be called a "handle" here (or anything else which
clearly describes the idea). This means that the specification should use the
term "handle" instead of guest physical address (in a particular case a handle
can still be a guest physical address). This way any virtualization environment
could choose the best way to access guest memory without compromising security
if needed.

The above-mentioned changes in the specification require some changes in the
VIRTIO device and driver implementations.

From an implementation perspective, the transition of the Linux VIRTIO drivers
from the old model to the new one should not be very difficult. The Linux
kernel itself provides the DMA API, which should ease the work on the drivers;
hence they should use this API instead of bypassing it. Additionally, new IOMMU
drivers should be created. Those IOMMU drivers should expose handles to VIRTIO
and hide hypervisor-specific details. This way VIRTIO would not depend so
strongly on specific hypervisor behavior. Another part of VIRTIO is the
devices, which usually do not have access to the DMA API available in the Linux
kernel, and this may present some challenges in the transition to the new
implementation. Similar problems may also appear when implementing drivers on
systems which do not have an equivalent of the Linux kernel DMA API. However,
even in that situation it should not be a very big issue, nor prevent the
transition to handles.

The author does not know FreeBSD and Windows well enough to say how to retool
the VIRTIO drivers there to use some mechanism for obtaining hypervisor
handles, but surely there must be some reasonably easy API to plumb this
through.

As can be seen from the above description, the current VIRTIO specification
could create implementation challenges in some virtual environments. However,
this issue could be solved quite easily by migrating from guest physical
addresses, which are used as pointers to virtqueues, buffers and other
structures, to handles. This change should not be too difficult to implement.
Additionally, it makes VIRTIO less tightly linked to a specific virtual
environment. This in turn helps fulfil the "Standard" assumption made in the
VIRTIO spec introduction (Virtio makes no assumptions about the environment in
which it operates, beyond supporting the bus attaching the device. Virtio
devices are implemented over PCI and other buses, and earlier drafts have been
implemented on other buses not included in this spec).

Acknowledgments for comments and suggestions: Wei Liu, Ian Pratt, Konrad Rzeszutek Wilk

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:01:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFOl3-0000bQ-GJ; Mon, 17 Feb 2014 14:01:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFOl1-0000bL-Uu
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 14:01:16 +0000
Received: from [85.158.139.211:52392] by server-1.bemta-5.messagelabs.com id
	1B/25-12859-B2612035; Mon, 17 Feb 2014 14:01:15 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392645672!4401595!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29208 invoked from network); 17 Feb 2014 14:01:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 14:01:14 -0000
X-IronPort-AV: E=Sophos;i="4.95,860,1384300800"; d="scan'208";a="103169624"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 14:00:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 09:00:45 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFOkW-0000Ls-6o;
	Mon, 17 Feb 2014 14:00:44 +0000
Message-ID: <5302160B.70601@eu.citrix.com>
Date: Mon, 17 Feb 2014 14:00:43 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
In-Reply-To: <21249.64449.582039.323772@mariner.uk.xensource.com>
X-DLP: MIA2
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 12:08 PM, Ian Jackson wrote:
> xen.org writes ("[xen-unstable test] 24870: regressions - trouble: broken/fail/pass"):
>> flight 24870 xen-unstable real [real]
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/24870/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>   build-i386-oldkern            3 host-build-prep       fail REGR. vs. 24862
> This was the "usual" failure: Citrix's intercepting web proxy causes
> some hg clones of linux-2.6.18.hg from xenbits to fail.  The rest of
> the flight was successful.
>
> The rest of the weekend's tests were badly affected by a disk failure
> on earwig.  So as a result we didn't get a push.
>
> I cleared out a bunch of other stuff running in the test system in an
> effort to get a pass sooner, but peeking at the results the same job
> has failed the same way in the currently-running flight.  So we won't
> get a push in that iteration either.
>
> We should consider doing a force push for RC4.  The risks are:
>   * There is something actually wrong with xen.git which causes the
>     32-bit 2.6.18 build to fail;
>   * Less resistance in the future to 2.6.18 build failures.
> I'll discuss these in turn.
>
> The build-*-oldkern tests involve using the kernel-building machinery
> in xen.git to clone 2.6.18 from xenbits and build it.  Firstly, I think
> it's unlikely that anything in xen.git#d883c179..4e8d89bc would affect
> that.  Secondly, the build-amd64-oldkern builds have passed.  So I
> think we can almost entirely discount the first risk.
>
> I think the second risk is tolerable.  We should keep an eye on it for
> a bit and if it turns out that the oldkern build really does become
> broken later and as a result keeps failing indefinitely, we will be
> able to spot that.
>
> So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> xen.git#master and call it RC4.  Comments welcome.

Thanks for the analysis.  This seems like a good plan.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:06:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:06:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFOq2-0000is-En; Mon, 17 Feb 2014 14:06:26 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1WFOq0-0000in-Kn
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 14:06:24 +0000
Received: from [85.158.143.35:24672] by server-2.bemta-4.messagelabs.com id
	48/97-10891-06712035; Mon, 17 Feb 2014 14:06:24 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392645981!6259568!1
X-Originating-IP: [64.18.0.160]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16693 invoked from network); 17 Feb 2014 14:06:22 -0000
Received: from exprod5og118.obsmtp.com (HELO exprod5og118.obsmtp.com)
	(64.18.0.160)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 14:06:22 -0000
Received: from mail-vc0-f178.google.com ([209.85.220.178]) (using TLSv1) by
	exprod5ob118.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwIXXFrl5MuByH5fwtywRXZe08rhjAq+@postini.com;
	Mon, 17 Feb 2014 06:06:22 PST
Received: by mail-vc0-f178.google.com with SMTP id ik5so11545249vcb.23
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 06:06:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=OruWmXVImi1KD32jYqDbDNlHSnEniSFxCZwRB7Dc1z8=;
	b=lBuxayuHhXtrhNz0G6AEDwoIpCVUaWihMDQQB31XbI1kHO58VqNkIFpJLBGkgdHFv1
	a5Lzj19NAqT4RifLTq/KH1NK8F+l/qxg0NZym7DHW/dK3UL6RXB0fBFH1KU6BKWu3Dez
	f+AeZ/01pPxMRMSuhIdz7dQi0RHfym21hzLCo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=OruWmXVImi1KD32jYqDbDNlHSnEniSFxCZwRB7Dc1z8=;
	b=LPHc320wPW+R2KbwLRbErG1abC0BdDTkQIzzAOFG7lshBHbKfIHUCeOmuflOxCTFjU
	sAYpQCFamwPFN2zjJl1RgbqU0FegfngVhoQyLBC0E8SlQ7Lk4JrzACMfEpoQjL1O/etH
	+Nx1R7ER+0C1mkpJE1eZdygy7lOqhiLJShYchIurW79Es7gQYLt/DtESBn1ALUeoaWDM
	j+0Oczxa+ivGuexj3FNXmUKWazQ9H5OC2vPEX7tRHksyvnse5t3MPwE0EXa9XkhSe6H7
	PpVhV13rp0c+uoA1dNjQfcXpJ5n1eXpGQ0p70limEhDox+qk5qDoJRUdR8+SlfvZTXcA
	DyPQ==
X-Gm-Message-State: ALoCoQlhQhOJvpxx51bZqRrRaibM2nMyIAKy8ijcxqeaguMjlPorjl2lo7PDdysHpcX+LruN4XwlU0HSIgzJY4mKfd5qcFvLc1bZSnwX9Mbi9pBK3yt3UNSg2zpXdQMIToGdDGtbC70RAHXP9KahJOGGKPW+BMjr9vpN/pLMFNXRbA4K1k4C3mc=
X-Received: by 10.220.178.73 with SMTP id bl9mr879534vcb.42.1392645980148;
	Mon, 17 Feb 2014 06:06:20 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.178.73 with SMTP id bl9mr879520vcb.42.1392645980025;
	Mon, 17 Feb 2014 06:06:20 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Mon, 17 Feb 2014 06:06:19 -0800 (PST)
In-Reply-To: <5301FA5F.8020602@linaro.org>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
Date: Mon, 17 Feb 2014 16:06:19 +0200
Message-ID: <CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1168300491050226916=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1168300491050226916==
Content-Type: multipart/alternative; boundary=047d7b673e56f00fcb04f29aa82a

--047d7b673e56f00fcb04f29aa82a
Content-Type: text/plain; charset=ISO-8859-1

Hi Julien,


>
> > Can anyone clarify - is it possible to make a run time memory trap in
> > Xen hypervisor?
>
> I guess you are talking about ARM? If so, it's not possible right now.
>
>
Does that mean it is possible on x86?


>
> I think yes, we might need that for a couple of new features in Xen such
> as: clock drivers and multiple devices in the same page (see UARTs on
> cubieboard 2).
>
>
This sounds good. For now I need this or a similar solution for my
development. I'll try to make it generic and post it to this list as soon
as it is ready.

Thank you.

Regards,
Andrii



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

--047d7b673e56f00fcb04f29aa82a--


--===============1168300491050226916==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1168300491050226916==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 14:15:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFOyj-0000w3-KQ; Mon, 17 Feb 2014 14:15:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Nate.Studer@dornerworks.com>) id 1WFOyg-0000vy-Q3
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 14:15:23 +0000
Received: from [85.158.139.211:33750] by server-17.bemta-5.messagelabs.com id
	FA/31-31975-97912035; Mon, 17 Feb 2014 14:15:21 +0000
X-Env-Sender: Nate.Studer@dornerworks.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392646520!4428678!1
X-Originating-IP: [12.207.209.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20997 invoked from network); 17 Feb 2014 14:15:20 -0000
Received: from unknown (HELO mail.dornerworks.com) (12.207.209.148)
	by server-8.tower-206.messagelabs.com with SMTP;
	17 Feb 2014 14:15:20 -0000
Received: from [172.27.12.66] (172.27.12.66) by Quimby.dw.local (172.27.1.90)
	with Microsoft SMTP Server (TLS) id 14.2.247.3;
	Mon, 17 Feb 2014 09:13:42 -0500
Message-ID: <5302191A.3070400@dornerworks.com>
Date: Mon, 17 Feb 2014 09:13:46 -0500
From: Nate Studer <nate.studer@dornerworks.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>, Simon Martin
	<furryfuttock@gmail.com>
References: <1646915994.20140213165604@gmail.com>	
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>	
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
In-Reply-To: <1392398466.32038.334.camel@Solace>
X-Originating-IP: [172.27.12.66]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/14/2014 12:21 PM, Dario Faggioli wrote:
> On ven, 2014-02-14 at 12:02 +0000, Simon Martin wrote:
>> Thanks everyone and especially Ian! It was the hyperthreading that was
>> causing the problem.
>>
> Good to hear, and at the same time, sorry to hear that. :-)
> 
> I mean, I'm glad you nailed it, but at the same time, I'm sorry that the
> solution is to 'waste' a core! :-( I reiterate and restate here, without
> any problem doing so, the fact that Xen should be doing at least a bit
> better in these circumstances, if we want to properly address use cases
> like yours. However, there are limits on how far we can go, and hardware
> design is certainly among them!
> 
> All this to say that, it should be possible to get a bit more of
> isolation, by tweaking the proper Xen code path appropriately, but if
> the amount of interference that comes from two hyperthreads sharing
> registers, pipeline stages, and whatever it is that they share, is
> enough to disturb your workload, then I'm afraid we will never get much
> farther than the 'don't use hyperthreading' solution! :-(

Which, as you say, unfortunately is the solution unless there is some way to
configure the hardware to eliminate this interference.  If it's any consolation,
the only multi-core ARINC653 implementations I know of have enacted these two
restrictions:
1.  # of cores enabled = # of memory controllers.
2.  Each enabled core must be configured to not share a memory controller,
cache, registers, etc...

It is practically an AMP system at that point, but without these restrictions
you can get some unpredictable behavior unless you have some specialized or
exotic hardware to make things more deterministic.

-- 
Nathan Studer
DornerWorks, Ltd.
Embedded Systems Engineering

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:16:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:16:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFOzy-0000zn-JM; Mon, 17 Feb 2014 14:16:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFOzx-0000zh-EH
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 14:16:41 +0000
Received: from [85.158.143.35:38572] by server-3.bemta-4.messagelabs.com id
	A5/A7-11539-8C912035; Mon, 17 Feb 2014 14:16:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392646600!6253392!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14578 invoked from network); 17 Feb 2014 14:16:40 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 14:16:40 -0000
Received: by mail-ee0-f42.google.com with SMTP id b15so7141623eek.1
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 06:16:40 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=c8zM0BYemLXJybg26kM6cfDGonTb5spDnS0i8xOazNM=;
	b=l66V5qpYmD5/kbrpG1GNPK50hV8Kf/yrGdmc+VkW2z6BneIYG8oez6RPuV/nAk1kG/
	qlPOfmANyJbbRWY4FtkKEo/jhXXgXc4gIJ6f7etUmZUxnkCot4FA4+OgTj27noeRSajz
	IpVxWFXxmZLTVM82zQ/0IFziwQa+/hDDx3Q8CiPvtygNp0UI0yfv9xubU4TelwEqQEr3
	9tYhUvFyXeIbEU3B3Ozg/QPYnVsoNVN+sFDWzte0amiOklCUUj1jHDt9e8N0UMavRQp6
	/PmM7ZO6cKzYirSxU0owYePLKuv4tDSyh1krHtyIjD4mpgjOtg+foFqUxMc/FI9tIVeW
	P9jQ==
X-Gm-Message-State: ALoCoQkwA3nv5SSuDUFPBx4XaqvDNVrGp8rPULs1zKFlpd7ut22zQlr5zwel9ZRWqKpzSo47dzMh
X-Received: by 10.14.204.9 with SMTP id g9mr2195982eeo.82.1392646600077;
	Mon, 17 Feb 2014 06:16:40 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id u6sm57903828eep.11.2014.02.17.06.16.38
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 06:16:38 -0800 (PST)
Message-ID: <530219C4.3050304@linaro.org>
Date: Mon, 17 Feb 2014 14:16:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
In-Reply-To: <CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 02:06 PM, Andrii Tseglytskyi wrote:
> Hi Julien,

Hi Andrii,

> 
>     > Can anyone clarify - is it possible to make a run time memory trap in
>     > Xen hypervisor?
> 
>     I guess you are talking about ARM? If so, it's not possible right now.
> 
> 
> Does that mean it is possible on x86?

Yes, you can look at register_io_handler in xen/arch/x86/hvm/intercept.c

It uses a static array, but I don't think that is the right solution for
ARM: we don't know in advance the maximum number of MMIO regions to handle.
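
To illustrate the kind of dynamic registration ARM would need, here is a
minimal sketch (the names and structure are illustrative only, not actual
Xen code): handlers live on a linked list that can grow at run time, and
the list is walked on each trapped access to find the owner of the
faulting address.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative sketch: a run-time-extensible list of MMIO handlers,
 * in contrast to the fixed-size array used on the x86 side. */

typedef int (*mmio_read_t)(uint64_t addr, uint32_t *val);

struct mmio_handler {
    uint64_t base, size;          /* guest-physical range covered */
    mmio_read_t read;             /* callback invoked on a trapped read */
    struct mmio_handler *next;
};

static struct mmio_handler *mmio_handlers;

/* Register a handler at run time; no fixed upper bound on the count. */
int register_mmio_handler(uint64_t base, uint64_t size, mmio_read_t read)
{
    struct mmio_handler *h = malloc(sizeof(*h));

    if (!h)
        return -1;
    h->base = base;
    h->size = size;
    h->read = read;
    h->next = mmio_handlers;      /* prepend to the list */
    mmio_handlers = h;
    return 0;
}

/* On a trapped access, walk the list to find the owning handler. */
struct mmio_handler *find_mmio_handler(uint64_t addr)
{
    struct mmio_handler *h;

    for (h = mmio_handlers; h; h = h->next)
        if (addr >= h->base && addr < h->base + h->size)
            return h;
    return NULL;                  /* nobody claims this address */
}
```

In real hypervisor code the malloc/list would of course need locking and
the lookup would run in the data-abort path, but the shape is the point:
registration can happen at any time, with no compile-time maximum.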

> 
>     I think yes, we might need that for a couple of new features in Xen such
> as: clock drivers and multiple devices in the same page (see UARTs on
>     cubieboard 2).
> 
> 
> This sounds good. For now I need this or a similar solution for my
> development. I'll try to make it generic and post it to this list as
> soon as it is ready.

Great, thank you.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
In-Reply-To: <CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 02:06 PM, Andrii Tseglytskyi wrote:
> Hi Julien,

Hi Andrii,

> 
>     > Can anyone clarify - is it possible to make a run time memory trap in
>     > Xen hypervisor?
> 
>     I guess you are talking about ARM? If so, it's not possible right now.
> 
> 
> Does it mean that it is possible on x86?

Yes, you can look at register_io_handler in xen/arch/x86/hvm/intercept.c.

It uses a static array, but I don't think this is the solution for
ARM. We don't know in advance the maximum number of MMIO regions to handle.

> 
>     I think yes, we might need that for a couple of new features in Xen such
>     as: clock drivers and multiple devices in the same page (see the UARTs on
>     the Cubieboard 2).
> 
> 
> This sounds good. For now I need this or a similar solution for my
> development. I'll try to make it generic and post it to this list as
> soon as it is ready.

Great, thank you.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPHU-0001Ge-2M; Mon, 17 Feb 2014 14:34:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dirk.winning@junidas.de>) id 1WFLRe-0002vJ-Rr
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 10:29:03 +0000
Received: from [85.158.143.35:4876] by server-3.bemta-4.messagelabs.com id
	7F/60-11539-E64E1035; Mon, 17 Feb 2014 10:29:02 +0000
X-Env-Sender: dirk.winning@junidas.de
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392632939!6179975!1
X-Originating-IP: [92.79.175.226]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15873 invoked from network); 17 Feb 2014 10:28:59 -0000
Received: from mail.junidas.de (HELO mail.junidas.de) (92.79.175.226)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 10:28:59 -0000
Received: from [192.168.22.30] (helo=exchange.stgt.junidas.de)
	by mail.junidas.de with esmtp (Exim 4.80.1)
	(envelope-from <dirk.winning@junidas.de>) id 1WFLRb-0004p9-2k
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 11:28:59 +0100
Received: from localhost (localhost [127.0.0.1])
	by exchange.stgt.junidas.de (Postfix) with ESMTP id 113084A015E
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:59 +0100 (CET)
Received: from exchange.stgt.junidas.de ([127.0.0.1])
	by localhost (exchange.stgt.junidas.de [127.0.0.1]) (amavisd-new,
	port 10032)
	with ESMTP id BEMKyIT156Eg for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by exchange.stgt.junidas.de (Postfix) with ESMTP id 8DCE64A018C
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
X-Virus-Scanned: amavisd-new at exchange.stgt.junidas.de
Received: from exchange.stgt.junidas.de ([127.0.0.1])
	by localhost (exchange.stgt.junidas.de [127.0.0.1]) (amavisd-new,
	port 10026)
	with ESMTP id k0nKQtJb9p4N for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
Received: from prado-mac.stgt.junidas.de (prado-mac.stgt.junidas.de
	[192.168.210.91])
	by exchange.stgt.junidas.de (Postfix) with ESMTPSA id 57F1F4A015E
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
From: "Dirk Winning, junidas GmbH" <dirk.winning@junidas.de>
Message-Id: <E5A6B4A8-4185-4002-B7A9-3098ABED8A0E@junidas.de>
Date: Mon, 17 Feb 2014 11:28:55 +0100
To: xen-devel@lists.xensource.com
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
X-Scanner: 1WFLRb-0004p9-2k has been scanned for malware on mail.junidas.de
X-Mailman-Approved-At: Mon, 17 Feb 2014 14:34:46 +0000
Subject: [Xen-devel] bug?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5980729225842240703=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5980729225842240703==
Content-Type: multipart/signed; boundary="Apple-Mail=_68542F20-3279-489D-B376-708BB99BBC54"; protocol="application/pkcs7-signature"; micalg=sha1


--Apple-Mail=_68542F20-3279-489D-B376-708BB99BBC54
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_400050B4-F199-4C0B-88CE-7DD1823850C5"


--Apple-Mail=_400050B4-F199-4C0B-88CE-7DD1823850C5
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=iso-8859-1

Hi,
we have Xen 4.0.1 on Debian 6.0.6,

and I tried to activate a USB modem for a domU running OS X; the modem is
already recognized and looks right, but it will not dial out or take a
call. I tried a memory stick as well, and the same happened: it appears
on the USB bus but will not mount.

It may not be obvious where exactly the problem lies, but one question
arises: is it necessary to do all the steps listed further down, and what
about the error message?

After all, the USB devices already appear correctly, but are unusable in
the end.

Thanks for any hints.

Greetings


Started domain fax (id=89)
root@graff:~# xm usb-list 89
root@graff:~# xm usb-hc-create fax 89 1
root@graff:~# xm usb-list 89
Idx BE  state usb-ver  BE-path
0   0   1     USB2.0  /local/domain/0/backend/vusb/89/0
port 1:
root@graff:~# xm usb-add 89 host:05ac:1401
root@graff:~# xm usb-list 89
Idx BE  state usb-ver  BE-path
0   0   1     USB2.0  /local/domain/0/backend/vusb/89/0
port 1:
root@graff:~# xm usb-attach 89 0 1 2-1
Unexpected error: <class 'xen.util.vusb_util.UsbDeviceParseError'>

Please report to xen-devel@lists.xensource.com
Traceback (most recent call last):
  File "/usr/lib/xen-4.0/bin/xm", line 8, in <module>
    main.main(sys.argv)
  File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 3620, in main
    _, rc = _run_cmd(cmd, cmd_name, args)
  File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 3644, in _run_cmd
    return True, cmd(args)
  File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 2868, in xm_usb_attach
    if vusb_util.bus_is_assigned(bus):
  File "/usr/lib/xen-4.0/lib/python/xen/util/vusb_util.py", line 275, in bus_is_assigned
    raise UsbDeviceParseError("Can't get assignment status: (%s)." % bus)
xen.util.vusb_util.UsbDeviceParseError: vusb: Error parsing USB device info: Can't get assignment status: (2-1).
root@graff:~# xm usb-list-assignable-devices
1-6          : ID 14dd:0002 Peppercon AG Multidevice
2-1          : ID 05ac:1401 Motorola, Inc. Apple USB Modem
2-2          : ID 06da:0003 OMRON USB UPS



Dipl. Ing. Dirk Winning, Systemberater
dirk.winning@junidas.de

junidas GmbH, Aixheimer Str. 12, 70619 Stuttgart
Tel. +49 (711) 4599799-12
Geschäftsführer: Dr. Markus Stoll, Matthias Zepf
Amtsgericht Stuttgart, HRB 21939


--Apple-Mail=_400050B4-F199-4C0B-88CE-7DD1823850C5--

--Apple-Mail=_68542F20-3279-489D-B376-708BB99BBC54--


--===============5980729225842240703==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5980729225842240703==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 14:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPHG-0001GE-L0; Mon, 17 Feb 2014 14:34:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WFOp0-0000i8-Oi
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 14:05:22 +0000
Received: from [85.158.139.211:6332] by server-11.bemta-5.messagelabs.com id
	6B/DB-23886-22712035; Mon, 17 Feb 2014 14:05:22 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392642198!4425821!1
X-Originating-IP: [213.75.39.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjc1LjM5LjE1ID0+IDY1OTY2NA==\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4567 invoked from network); 17 Feb 2014 13:03:18 -0000
Received: from cpsmtpb-ews10.kpnxchange.com (HELO
	cpsmtpb-ews10.kpnxchange.com) (213.75.39.15)
	by server-12.tower-206.messagelabs.com with SMTP;
	17 Feb 2014 13:03:18 -0000
Received: from cpsps-ews28.kpnxchange.com ([10.94.84.194]) by
	cpsmtpb-ews10.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Mon, 17 Feb 2014 14:03:18 +0100
Received: from CPSMTPM-TLF101.kpnxchange.com ([195.121.3.4]) by
	cpsps-ews28.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Mon, 17 Feb 2014 14:03:18 +0100
Received: from [192.168.1.104] ([82.169.24.127]) by
	CPSMTPM-TLF101.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Mon, 17 Feb 2014 14:03:18 +0100
Message-ID: <1392642197.13000.20.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 17 Feb 2014 14:03:17 +0100
In-Reply-To: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 17 Feb 2014 13:03:18.0306 (UTC)
	FILETIME=[A0896020:01CF2BE0]
X-RcptDomain: lists.xenproject.org
X-Mailman-Approved-At: Mon, 17 Feb 2014 14:34:33 +0000
Cc: "H. Peter Anvin" <hpa@zytor.com>, Richard Weinberger <richard@nod.at>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
> On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
> Please look in the grub git tree. They have fixed their code to not do
> this anymore. This should be reflected in the patch description.

Thanks, I didn't know that. That turned out to be grub commit
ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
use it to implement generating of config"), see
http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059

> Lastly please check which distro has this new grub version so that we
> know which distros won't be affected.

No distro should be affected. See, the test that grub2 used to do was
(edited for clarity):
    grep -qx "CONFIG_XEN_DOM0=y" "${config}" || grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y" "${config}"

But the Kconfig entry for XEN_PRIVILEGED_GUEST reads:
    config XEN_PRIVILEGED_GUEST
            def_bool XEN_DOM0

I.e., XEN_PRIVILEGED_GUEST is equal to XEN_DOM0 by definition, so the
second part of that test is superfluous. (We discussed this last year.
If lkml.org weren't down I'd provide a link.) Or am I misreading this
Kconfig entry?

I hope to send a v2, with an updated commit explanation, in a few days.


Paul Bolle


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPHU-0001Ge-2M; Mon, 17 Feb 2014 14:34:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dirk.winning@junidas.de>) id 1WFLRe-0002vJ-Rr
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 10:29:03 +0000
Received: from [85.158.143.35:4876] by server-3.bemta-4.messagelabs.com id
	7F/60-11539-E64E1035; Mon, 17 Feb 2014 10:29:02 +0000
X-Env-Sender: dirk.winning@junidas.de
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392632939!6179975!1
X-Originating-IP: [92.79.175.226]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15873 invoked from network); 17 Feb 2014 10:28:59 -0000
Received: from mail.junidas.de (HELO mail.junidas.de) (92.79.175.226)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 10:28:59 -0000
Received: from [192.168.22.30] (helo=exchange.stgt.junidas.de)
	by mail.junidas.de with esmtp (Exim 4.80.1)
	(envelope-from <dirk.winning@junidas.de>) id 1WFLRb-0004p9-2k
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 11:28:59 +0100
Received: from localhost (localhost [127.0.0.1])
	by exchange.stgt.junidas.de (Postfix) with ESMTP id 113084A015E
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:59 +0100 (CET)
Received: from exchange.stgt.junidas.de ([127.0.0.1])
	by localhost (exchange.stgt.junidas.de [127.0.0.1]) (amavisd-new,
	port 10032)
	with ESMTP id BEMKyIT156Eg for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by exchange.stgt.junidas.de (Postfix) with ESMTP id 8DCE64A018C
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
X-Virus-Scanned: amavisd-new at exchange.stgt.junidas.de
Received: from exchange.stgt.junidas.de ([127.0.0.1])
	by localhost (exchange.stgt.junidas.de [127.0.0.1]) (amavisd-new,
	port 10026)
	with ESMTP id k0nKQtJb9p4N for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
Received: from prado-mac.stgt.junidas.de (prado-mac.stgt.junidas.de
	[192.168.210.91])
	by exchange.stgt.junidas.de (Postfix) with ESMTPSA id 57F1F4A015E
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:28:56 +0100 (CET)
From: "Dirk Winning, junidas GmbH" <dirk.winning@junidas.de>
Message-Id: <E5A6B4A8-4185-4002-B7A9-3098ABED8A0E@junidas.de>
Date: Mon, 17 Feb 2014 11:28:55 +0100
To: xen-devel@lists.xensource.com
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
X-Scanner: 1WFLRb-0004p9-2k has been scanned for malware on mail.junidas.de
X-Mailman-Approved-At: Mon, 17 Feb 2014 14:34:46 +0000
Subject: [Xen-devel] bug?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5980729225842240703=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============5980729225842240703==
Content-Type: multipart/signed; boundary="Apple-Mail=_68542F20-3279-489D-B376-708BB99BBC54"; protocol="application/pkcs7-signature"; micalg=sha1


--Apple-Mail=_68542F20-3279-489D-B376-708BB99BBC54
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_400050B4-F199-4C0B-88CE-7DD1823850C5"


--Apple-Mail=_400050B4-F199-4C0B-88CE-7DD1823850C5
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=iso-8859-1

Hi,
we have Xen 4.0.1 on Debian 6.0.6,

and I tried to activate a USB modem for a domU running OS X; the modem is
already recognized and looks right, but it will not dial out or take a
call.
I also tried a memory stick, with the same result: it appears on USB but
will not mount.

It may not be obvious where exactly the problem is, but one question
arises:

is it necessary to do all the steps listed further down, and what about
the error message?

After all, the USB devices do already appear correctly, but are unusable
in the end.

Thanks for any hints.

Greetings


Started domain fax (id=89)
root@graff:~# xm usb-list 89
root@graff:~# xm usb-hc-create fax 89 1
root@graff:~# xm usb-list 89
Idx BE  state usb-ver  BE-path
0   0   1     USB2.0  /local/domain/0/backend/vusb/89/0
port 1:
root@graff:~# xm usb-add 89 host:05ac:1401
root@graff:~# xm usb-list 89
Idx BE  state usb-ver  BE-path
0   0   1     USB2.0  /local/domain/0/backend/vusb/89/0
port 1:
root@graff:~# xm usb-attach 89 0 1 2-1
Unexpected error: <class 'xen.util.vusb_util.UsbDeviceParseError'>

Please report to xen-devel@lists.xensource.com
Traceback (most recent call last):
  File "/usr/lib/xen-4.0/bin/xm", line 8, in <module>
    main.main(sys.argv)
  File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 3620, in main
    _, rc = _run_cmd(cmd, cmd_name, args)
  File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 3644, in _run_cmd
    return True, cmd(args)
  File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 2868, in xm_usb_attach
    if vusb_util.bus_is_assigned(bus):
  File "/usr/lib/xen-4.0/lib/python/xen/util/vusb_util.py", line 275, in bus_is_assigned
    raise UsbDeviceParseError("Can't get assignment status: (%s)." % bus)
xen.util.vusb_util.UsbDeviceParseError: vusb: Error parsing USB device info: Can't get assignment status: (2-1).
root@graff:~# xm usb-list-assignable-devices
1-6          : ID 14dd:0002 Peppercon AG Multidevice
2-1          : ID 05ac:1401 Motorola, Inc. Apple USB Modem
2-2          : ID 06da:0003 OMRON USB UPS
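For context, the failing check is bus_is_assigned() in xen/util/vusb_util.py, which raises UsbDeviceParseError when it cannot determine whether the device is already bound to a backend driver. The sketch below is an illustration of that kind of check, not the actual Xen 4.0 code; the sysfs path, function names, and exception type are assumptions:

```python
import os
import re

# Linux USB device names look like "bus-port[.port...]", e.g. "2-1" or "1-4.2".
BUSID_RE = re.compile(r"^\d+-\d+(\.\d+)*$")

def is_valid_busid(busid):
    """Return True if busid looks like a Linux USB device name."""
    return bool(BUSID_RE.match(busid))

def bus_is_assigned(busid, usbback_dir="/sys/bus/usb/drivers/usbback"):
    """Check whether the device is already bound to the usbback driver.

    Raises ValueError (the real code raises UsbDeviceParseError) when the
    assignment status cannot be determined, e.g. because the usbback
    driver directory is absent on this dom0 kernel.
    """
    if not is_valid_busid(busid):
        raise ValueError("Can't parse bus id: (%s)." % busid)
    if not os.path.isdir(usbback_dir):
        # This mirrors the report: the error fires because the status
        # cannot be read, not because "2-1" itself is malformed.
        raise ValueError("Can't get assignment status: (%s)." % busid)
    return os.path.exists(os.path.join(usbback_dir, busid))
```

Under this reading, "2-1" is a well-formed bus id, and the exception points at the dom0 side (no readable assignment state for the device) rather than at the device name the user typed.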



Dipl. Ing. Dirk Winning, Systemberater
dirk.winning@junidas.de

junidas GmbH, Aixheimer Str. 12, 70619 Stuttgart
Tel. +49 (711) 4599799-12
Geschäftsführer: Dr. Markus Stoll, Matthias Zepf
Amtsgericht Stuttgart, HRB 21939


--Apple-Mail=_400050B4-F199-4C0B-88CE-7DD1823850C5--

--Apple-Mail=_68542F20-3279-489D-B376-708BB99BBC54--


--===============5980729225842240703==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5980729225842240703==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 14:34:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:34:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPHG-0001GE-L0; Mon, 17 Feb 2014 14:34:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WFOp0-0000i8-Oi
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 14:05:22 +0000
Received: from [85.158.139.211:6332] by server-11.bemta-5.messagelabs.com id
	6B/DB-23886-22712035; Mon, 17 Feb 2014 14:05:22 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392642198!4425821!1
X-Originating-IP: [213.75.39.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjc1LjM5LjE1ID0+IDY1OTY2NA==\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4567 invoked from network); 17 Feb 2014 13:03:18 -0000
Received: from cpsmtpb-ews10.kpnxchange.com (HELO
	cpsmtpb-ews10.kpnxchange.com) (213.75.39.15)
	by server-12.tower-206.messagelabs.com with SMTP;
	17 Feb 2014 13:03:18 -0000
Received: from cpsps-ews28.kpnxchange.com ([10.94.84.194]) by
	cpsmtpb-ews10.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Mon, 17 Feb 2014 14:03:18 +0100
Received: from CPSMTPM-TLF101.kpnxchange.com ([195.121.3.4]) by
	cpsps-ews28.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Mon, 17 Feb 2014 14:03:18 +0100
Received: from [192.168.1.104] ([82.169.24.127]) by
	CPSMTPM-TLF101.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Mon, 17 Feb 2014 14:03:18 +0100
Message-ID: <1392642197.13000.20.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Mon, 17 Feb 2014 14:03:17 +0100
In-Reply-To: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 17 Feb 2014 13:03:18.0306 (UTC)
	FILETIME=[A0896020:01CF2BE0]
X-RcptDomain: lists.xenproject.org
X-Mailman-Approved-At: Mon, 17 Feb 2014 14:34:33 +0000
Cc: "H. Peter Anvin" <hpa@zytor.com>, Richard Weinberger <richard@nod.at>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
> On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
> Please look in the grub git tree. They have fixed their code to not do
> this anymore. This should be reflected in the patch description.

Thanks, I didn't know that. That turned out to be grub commit
ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
use it to implement generating of config"), see
http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059

> Lastly please check which distro has this new grub version so that we
> know which distros won't be affected.

No distro should be affected. See, the test that grub2 used to do was
(edited for clarity):
    grep -qx "CONFIG_XEN_DOM0=y" "${config}" || grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y" "${config}"

But the Kconfig entry for XEN_PRIVILEGED_GUEST reads:
    config XEN_PRIVILEGED_GUEST
            def_bool XEN_DOM0

I.e., XEN_PRIVILEGED_GUEST is equal to XEN_DOM0 by definition, so the
second part of that test is superfluous. (We discussed this last year.
If lkml.org weren't down, I'd provide a link.) Or am I misreading this
Kconfig entry?
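To make the redundancy concrete, the old grub2 test can be reimplemented and checked against both config shapes that the Kconfig allows. This is a sketch for illustration only; the .config fragments below are made up, not taken from any distro:

```python
def grub_would_emit_dom0_entry(config_text):
    """Reimplementation of grub2's old 20_linux_xen.in test:
    grep -qx "CONFIG_XEN_DOM0=y" || grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y"
    """
    lines = config_text.splitlines()
    return ("CONFIG_XEN_DOM0=y" in lines
            or "CONFIG_XEN_PRIVILEGED_GUEST=y" in lines)

# Because XEN_PRIVILEGED_GUEST is "def_bool XEN_DOM0", a generated
# .config either sets both symbols or neither, so the second grep can
# never change the outcome:
dom0_config = "CONFIG_XEN=y\nCONFIG_XEN_DOM0=y\nCONFIG_XEN_PRIVILEGED_GUEST=y\n"
domu_config = "CONFIG_XEN=y\n"
```

Dropping the second disjunct gives the same result on every config a Kconfig run can actually produce, which is why no distro should notice the symbol's removal.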

I hope to send a v2, with an updated commit explanation, in a few days.


Paul Bolle


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:37:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPKG-0001UT-Rh; Mon, 17 Feb 2014 14:37:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WFPKG-0001UI-C6
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 14:37:40 +0000
Received: from [193.109.254.147:62055] by server-11.bemta-14.messagelabs.com
	id B8/77-24604-3BE12035; Mon, 17 Feb 2014 14:37:39 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392647854!4901965!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19142 invoked from network); 17 Feb 2014 14:37:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 14:37:35 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101441937"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 14:36:57 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 09:36:57 -0500
Message-ID: <53021E87.6020607@citrix.com>
Date: Mon, 17 Feb 2014 14:36:55 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>, <netdev@vger.kernel.org>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-5-git-send-email-mcgrof@do-not-panic.com>
In-Reply-To: <1392433180-16052-5-git-send-email-mcgrof@do-not-panic.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, Paul Durrant <Paul.Durrant@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 4/4] xen-netback: skip IPv4 and IPv6
	interfaces
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

There is a valid scenario to put IP addresses on the backend VIFs:

http://wiki.xen.org/wiki/Xen_Networking#Routing

Also, the backend is not necessarily Dom0; you can connect two guests
with backend/frontend pairs.

Zoli

On 15/02/14 02:59, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>
> The xen-netback driver is used only to provide a backend
> interface for the frontend. The link is the only thing we
> use, and that is used internally for letting us know when the
> xen-netfront is ready, when it switches to XenbusStateConnected.
>
> Note that only when both the xen-netfront and xen-netback
> are in state XenbusStateConnected will xen-netback allow
> userspace on the host (backend) to bring up the interface. Enabling
> and disabling the interface will simply enable or disable NAPI
> respectively, and that's used for IRQ communication set up with
> the xen event channels.
>
> Cc: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: xen-devel@lists.xenproject.org
> Cc: netdev@vger.kernel.org
> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
> ---
>   drivers/net/xen-netback/interface.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index d380e3f..07e6fd2 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -351,7 +351,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>
>   	eth_hw_addr_random(dev);
>   	memcpy(dev->dev_addr, xen_oui, 3);
> -	dev->priv_flags |= IFF_BRIDGE_NON_ROOT;
> +	dev->priv_flags |= IFF_BRIDGE_NON_ROOT | IFF_SKIP_IP;
>   	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
>
>   	netif_carrier_off(dev);
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:43:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPPw-0001ia-Qp; Mon, 17 Feb 2014 14:43:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WFPPv-0001iV-St
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 14:43:32 +0000
Received: from [193.109.254.147:18957] by server-7.bemta-14.messagelabs.com id
	08/F6-23424-31022035; Mon, 17 Feb 2014 14:43:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392648208!4865096!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10779 invoked from network); 17 Feb 2014 14:43:30 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 14:43:30 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HEhEAr014895
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 14:43:15 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HEhCSh017858
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 14:43:12 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HEhBJL005594; Mon, 17 Feb 2014 14:43:11 GMT
Received: from localhost.localdomain (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 06:43:10 -0800
Date: Mon, 17 Feb 2014 09:43:08 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Paul Bolle <pebolle@tiscali.nl>
Message-ID: <20140217144307.GB28658@localhost.localdomain>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392642197.13000.20.camel@x220>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "H. Peter Anvin" <hpa@zytor.com>, Richard Weinberger <richard@nod.at>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 02:03:17PM +0100, Paul Bolle wrote:
> On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
> > On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
> > Please look in the grub git tree. They have fixed their code to not do
> > this anymore. This should be reflected in the patch description.
> 
> Thanks, I didn't know that. That turned out to be grub commit
> ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
> use it to implement generating of config"), see
> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059
> 
> > Lastly please check which distro has this new grub version so that we
> > know which distros won't be affected.
> 
> No distro should be affected. See, the test that grub2 used to do was
> (edited for clarity):
>     grep -qx "CONFIG_XEN_DOM0=y" "${config}" || grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y" "${config}"
> 
> But the Kconfig entry for XEN_PRIVILEGED_GUEST reads:
>     config XEN_PRIVILEGED_GUEST
>             def_bool XEN_DOM0
> 
> Ie, XEN_PRIVILEGED_GUEST is equal to XEN_DOM0 by definition, so the
> second part of that test is superfluous. (We discussed this last year.
> If lkml.org weren't down I'd provide a link.) Or am I misreading this
> Kconfig entry?
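The redundancy described above can be demonstrated with a quick shell sketch
against a hypothetical kernel config (the file contents are illustrative; in a
real build, `def_bool XEN_DOM0` guarantees the two lines appear together):

```shell
# Hypothetical .config for illustration: whenever CONFIG_XEN_DOM0=y is
# set, Kconfig also emits CONFIG_XEN_PRIVILEGED_GUEST=y (def_bool XEN_DOM0),
# so the two lines always appear as a pair.
config=$(mktemp)
printf 'CONFIG_XEN_DOM0=y\nCONFIG_XEN_PRIVILEGED_GUEST=y\n' > "$config"

# grub2's old test: the second grep can only match when the first one
# already has, so dropping it never changes the outcome.
if grep -qx "CONFIG_XEN_DOM0=y" "$config" || \
   grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y" "$config"; then
    detected=yes
else
    detected=no
fi
echo "dom0 kernel: $detected"
rm -f "$config"
```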

Ah, forgot about the second test for 'XEN_DOM0'. Yes that should work.

Thanks!
> 
> I hope to send a v2, with an updated commit explanation, in a few days.
> 
> 
> Paul Bolle
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:50:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPWO-0001t2-AL; Mon, 17 Feb 2014 14:50:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFPWN-0001sx-6C
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 14:50:11 +0000
Received: from [193.109.254.147:43768] by server-14.bemta-14.messagelabs.com
	id 22/3D-29228-2A122035; Mon, 17 Feb 2014 14:50:10 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392648608!4867105!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32269 invoked from network); 17 Feb 2014 14:50:09 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 14:50:09 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HEnLqf022186
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 14:49:22 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HEnEO6001695
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 14:49:15 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HEnEQl002461; Mon, 17 Feb 2014 14:49:14 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 06:49:14 -0800
Message-ID: <530221C9.60103@oracle.com>
Date: Mon, 17 Feb 2014 09:50:49 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: "Srivatsa S. Bhat" <srivatsa.bhat@linux.vnet.ibm.com>
References: <20140214074750.22701.47330.stgit@srivatsabhat.in.ibm.com>
	<20140214075935.22701.71000.stgit@srivatsabhat.in.ibm.com>
	<52FE490B.8000908@oracle.com> <52FE493A.2030206@linux.vnet.ibm.com>
	<52FF9B14.8000308@linux.vnet.ibm.com>
In-Reply-To: <52FF9B14.8000308@linux.vnet.ibm.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-arch@vger.kernel.org, ego@linux.vnet.ibm.com, walken@google.com,
	linux@arm.linux.org.uk, akpm@linux-foundation.org,
	peterz@infradead.org, rusty@rustcorp.com.au, rjw@rjwysocki.net,
	oleg@redhat.com, linux-kernel@vger.kernel.org, paulus@samba.org,
	David Vrabel <david.vrabel@citrix.com>, tj@kernel.org,
	xen-devel@lists.xenproject.org, tglx@linutronix.de,
	paulmck@linux.vnet.ibm.com, mingo@kernel.org
Subject: Re: [Xen-devel] [UPDATED][PATCH v2 46/52] xen,
 balloon: Fix CPU hotplug callback registration
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/15/2014 11:51 AM, Srivatsa S. Bhat wrote:
> On 02/14/2014 10:20 PM, Srivatsa S. Bhat wrote:
>> On 02/14/2014 10:19 PM, Boris Ostrovsky wrote:
>>> On 02/14/2014 02:59 AM, Srivatsa S. Bhat wrote:
>>>> Subsystems that want to register CPU hotplug callbacks, as well as
>>>> perform
>>>> initialization for the CPUs that are already online, often do it as shown
>>>> below:
>>>>
> [...]
>>> This looks exactly like the earlier version (i.e. the notifier is still
>>> kept registered on allocation failure and commit message doesn't exactly
>>> reflect the change).
>>>
>> Sorry, your earlier reply (for some unknown reason) missed the email-threading
>> and landed elsewhere in my inbox, and hence unfortunately I forgot to take
>> your suggestions into account while sending out the v2.
>>
>> I'll send out an updated version of just this patch, as a reply.
> Here is the updated patch. Please let me know what you think!

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

-boris


>
> ----------------------------------------------------------------------------
>
> From: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> Subject: [PATCH] xen, balloon: Fix CPU hotplug callback registration
>
> Subsystems that want to register CPU hotplug callbacks, as well as perform
> initialization for the CPUs that are already online, often do it as shown
> below:
>
> 	get_online_cpus();
>
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
>
> 	register_cpu_notifier(&foobar_cpu_notifier);
>
> 	put_online_cpus();
>
> This is wrong, since it is prone to ABBA deadlocks involving the
> cpu_add_remove_lock and the cpu_hotplug.lock (when running concurrently
> with CPU hotplug operations).
>
> The xen balloon driver doesn't take get/put_online_cpus() around this code,
> but that is also buggy, since it can miss CPU hotplug events in between the
> initialization and callback registration:
>
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
> 		   ^
> 		   |  Race window; Can miss CPU hotplug events here.
> 		   v
> 	register_cpu_notifier(&foobar_cpu_notifier);
>
> Interestingly, the balloon code in xen can simply be reorganized as shown
> below, to have a race-free method to register hotplug callbacks, without even
> taking get/put_online_cpus(). This is because the initialization performed for
> already online CPUs is exactly the same as that performed for CPUs that come
> online later. Moreover, the code has checks in place to avoid double
> initialization.
>
> 	register_cpu_notifier(&foobar_cpu_notifier);
>
> 	get_online_cpus();
>
> 	for_each_online_cpu(cpu)
> 		init_cpu(cpu);
>
> 	put_online_cpus();
>
> A hotplug operation that occurs between registering the notifier and calling
> get_online_cpus(), won't disrupt anything, because the code takes care to
> perform the memory allocations only once.
>
> So reorganize the balloon code in xen this way to fix the issues with CPU
> hotplug callback registration.
>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: David Vrabel <david.vrabel@citrix.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
> ---
>
>   drivers/xen/balloon.c |   36 ++++++++++++++++++++++++------------
>   1 file changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 37d06ea..dd79549 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -592,19 +592,29 @@ static void __init balloon_add_region(unsigned long start_pfn,
>   	}
>   }
>   
> +static int alloc_balloon_scratch_page(int cpu)
> +{
> +	if (per_cpu(balloon_scratch_page, cpu) != NULL)
> +		return 0;
> +
> +	per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> +	if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> +		pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
> +		return -ENOMEM;
> +	}
> +
> +	return 0;
> +}
> +
> +
>   static int balloon_cpu_notify(struct notifier_block *self,
>   				    unsigned long action, void *hcpu)
>   {
>   	int cpu = (long)hcpu;
>   	switch (action) {
>   	case CPU_UP_PREPARE:
> -		if (per_cpu(balloon_scratch_page, cpu) != NULL)
> -			break;
> -		per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -		if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -			pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
> +		if (alloc_balloon_scratch_page(cpu))
>   			return NOTIFY_BAD;
> -		}
>   		break;
>   	default:
>   		break;
> @@ -624,15 +634,17 @@ static int __init balloon_init(void)
>   		return -ENODEV;
>   
>   	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> -		for_each_online_cpu(cpu)
> -		{
> -			per_cpu(balloon_scratch_page, cpu) = alloc_page(GFP_KERNEL);
> -			if (per_cpu(balloon_scratch_page, cpu) == NULL) {
> -				pr_warn("Failed to allocate balloon_scratch_page for cpu %d\n", cpu);
> +		register_cpu_notifier(&balloon_cpu_notifier);
> +
> +		get_online_cpus();
> +		for_each_online_cpu(cpu) {
> +			if (alloc_balloon_scratch_page(cpu)) {
> +				put_online_cpus();
> +				unregister_cpu_notifier(&balloon_cpu_notifier);
>   				return -ENOMEM;
>   			}
>   		}
> -		register_cpu_notifier(&balloon_cpu_notifier);
> +		put_online_cpus();
>   	}
>   
>   	pr_info("Initialising balloon driver\n");
>
>
>
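The registration ordering the commit message above argues for can be sketched
in userspace as follows; the names and the notifier mechanics here are purely
illustrative stand-ins, not the kernel API:

```python
import threading

# Userspace sketch of the race-free pattern: register the callback
# first, then initialize items that already exist, with a guard
# against double initialization (as the balloon code has).
lock = threading.Lock()
scratch = {}          # per-"cpu" resource, absent until allocated
online = {0, 1}       # "cpus" already online when init runs
notifiers = []

def init_cpu(cpu):
    # Idempotent: safe to run from both the notifier and the init loop.
    with lock:
        if scratch.get(cpu) is None:
            scratch[cpu] = object()

def register_notifier(fn):
    notifiers.append(fn)

def cpu_comes_online(cpu):
    # Simulated hotplug event: mark online and fire callbacks.
    online.add(cpu)
    for fn in notifiers:
        fn(cpu)

# Register first, then walk the already-online set. A hotplug event
# landing in between (cpu 2 here) is caught by the notifier, and the
# double-init guard makes the overlap harmless.
register_notifier(init_cpu)
cpu_comes_online(2)
for cpu in set(online):
    init_cpu(cpu)
```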


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 14:52:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPY7-0001yu-UC; Mon, 17 Feb 2014 14:51:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WFPY6-0001yn-ED
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 14:51:58 +0000
Received: from [85.158.143.35:42154] by server-1.bemta-4.messagelabs.com id
	6C/A7-31661-D0222035; Mon, 17 Feb 2014 14:51:57 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392648716!6228596!1
X-Originating-IP: [62.142.5.109]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTA5ID0+IDk1MjIz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16040 invoked from network); 17 Feb 2014 14:51:57 -0000
Received: from emh03.mail.saunalahti.fi (HELO emh03.mail.saunalahti.fi)
	(62.142.5.109)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 14:51:57 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh03.mail.saunalahti.fi (Postfix) with ESMTP id 480DC1887D5;
	Mon, 17 Feb 2014 16:51:54 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 9E11C36C01F; Mon, 17 Feb 2014 16:51:54 +0200 (EET)
Date: Mon, 17 Feb 2014 16:51:54 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Dirk Winning, junidas GmbH" <dirk.winning@junidas.de>
Message-ID: <20140217145154.GL2924@reaktio.net>
References: <E5A6B4A8-4185-4002-B7A9-3098ABED8A0E@junidas.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <E5A6B4A8-4185-4002-B7A9-3098ABED8A0E@junidas.de>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] bug?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 11:28:55AM +0100, Dirk Winning, junidas GmbH wrote:
>    Hi,

Hi,

>    we have Xen 4.0.1 on Debian 6.0.6

First of all, you should upgrade Xen.

Xen 4.0.1 is very old and no longer supported upstream.

>    and I tried to activate a USB modem for a DomU running OSX; the modem
>    is already recognized and looks right, but it will not dial out or
>    take a call.
>    I tried a memory stick as well, and it is the same: it appears in USB
>    but will not mount.
>    Obviously it may not be clear where the problem exactly is, but one
>    question arises:
>    is it necessary to do all the steps listed further down, and what
>    about the error message?
>    After all, the USB devices already appear correctly, but are unusable
>    in the end.
>    Thanks for any hints
>    greets
>    Started domain fax (id=89)
>    root@graff:~# xm usb-list 89
>    root@graff:~# xm usb-hc-create fax 89 1
>    root@graff:~# xm usb-list 89
>    Idx BE  state usb-ver  BE-path
>    0   0   1     USB2.0  /local/domain/0/backend/vusb/89/0
>    port 1:
>    root@graff:~# xm usb-add 89 host:05ac:1401
>    root@graff:~# xm usb-list 89
>    Idx BE  state usb-ver  BE-path
>    0   0   1     USB2.0  /local/domain/0/backend/vusb/89/0
>    port 1:
>    root@graff:~# xm usb-attach 89 0 1 2-1
>

Hmm, "usb-attach" is a command for Xen PVUSB.


Do you really have the Xen PVUSB backend driver (usbback) in the dom0 kernel,
and, especially, the PVUSB frontend driver loaded in the *OSX* VM?



-- Pasi


>    Unexpected error: <class 'xen.util.vusb_util.UsbDeviceParseError'>
>    Please report to [1]xen-devel@lists.xensource.com
>    Traceback (most recent call last):
>      File "/usr/lib/xen-4.0/bin/xm", line 8, in <module>
>        main.main(sys.argv)
>      File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 3620, in main
>        _, rc = _run_cmd(cmd, cmd_name, args)
>      File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 3644, in
>    _run_cmd
>        return True, cmd(args)
>      File "/usr/lib/xen-4.0/lib/python/xen/xm/main.py", line 2868, in
>    xm_usb_attach
>        if vusb_util.bus_is_assigned(bus):
>      File "/usr/lib/xen-4.0/lib/python/xen/util/vusb_util.py", line 275, in
>    bus_is_assigned
>        raise UsbDeviceParseError("Can't get assignment status: (%s)." % bus)
>    xen.util.vusb_util.UsbDeviceParseError: vusb: Error parsing USB device
>    info: Can't get assignment status: (2-1).
>    root@graff:~# xm usb-list-assignable-devices
>    1-6          : ID 14dd:0002 Peppercon AG Multidevice
>    2-1          : ID 05ac:1401 Motorola, Inc. Apple USB Modem
>    2-2          : ID 06da:0003 OMRON USB UPS
>    Dipl. Ing. Dirk Winning, Systemberater
>    [2]dirk.winning@junidas.de
>

>    junidas GmbH, Aixheimer Str. 12, 70619 Stuttgart
>    Tel. +49 (711) 4599799-12
>    Geschäftsführer: Dr. Markus Stoll, Matthias Zepf
>    Amtsgericht Stuttgart, HRB 21939
>
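[Editorial note: for reference, the "2-1" argument that made `xm usb-attach`
raise `UsbDeviceParseError` above is a Linux USB hotplug bus-port ID. A
minimal, hypothetical sketch (not the actual `xen.util.vusb_util` code) of
how a toolstack might split such an ID before looking the device up under
/sys/bus/usb/devices:]

```python
def parse_usb_busid(busid: str) -> tuple[int, int]:
    """Split a Linux USB ID like '2-1' into (bus, root port).

    Illustrative only: the real xm/vusb_util parsing differs. IDs such
    as '2-1.4' mean bus 2, root port 1, then port 4 behind a hub; this
    sketch keeps only the first hop.
    """
    bus, _, port = busid.partition("-")
    root_port = port.split(".")[0]
    if not bus.isdigit() or not root_port.isdigit():
        raise ValueError("malformed USB bus ID: %r" % busid)
    return int(bus), int(root_port)
```

So "2-1" is the Apple USB Modem on bus 2, root port 1, matching the
`xm usb-list-assignable-devices` output quoted above.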




From xen-devel-bounces@lists.xen.org Mon Feb 17 14:52:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 14:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPYA-0001zY-9H; Mon, 17 Feb 2014 14:52:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFPY8-0001z2-Pa
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 14:52:01 +0000
Received: from [85.158.139.211:40898] by server-14.bemta-5.messagelabs.com id
	3B/92-27598-01222035; Mon, 17 Feb 2014 14:52:00 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392648716!4413708!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30879 invoked from network); 17 Feb 2014 14:51:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 14:51:58 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="103183796"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 14:51:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 09:51:54 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFPY1-000156-Rq;
	Mon, 17 Feb 2014 14:51:53 +0000
Message-ID: <53022209.1060005@eu.citrix.com>
Date: Mon, 17 Feb 2014 14:51:53 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Yang Z Zhang <yang.z.zhang@intel.com>,
	Tim Deegan <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
In-Reply-To: <5301F000020000780011CCE0@nat28.tlf.novell.com>
X-DLP: MIA2
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 10:18 AM, Jan Beulich wrote:
>>>> On 13.02.14 at 17:20, Tim Deegan <tim@xen.org> wrote:
>> At 15:55 +0000 on 13 Feb (1392303343), Jan Beulich wrote:
>>>>>> On 13.02.14 at 16:46, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>>>> On 02/12/2014 12:53 AM, Zhang, Yang Z wrote:
>>>>> George Dunlap wrote on 2014-02-11:
>>>>>> I think I got a bit distracted with the "A isn't really so bad" thing.
>>>>>> Actually, if the overhead of not sharing tables isn't very high, then
>>>>>> B isn't such a bad option.  In fact, B is what I expected Yang to
>>>>>> submit when he originally described the problem.
>>>>> Actually, the first solution that came to my mind was B. Then I realized
>>>>> that even if we chose B, we still could not track memory updates from DMA
>>>>> (even with the A/D bit, it is still a problem). Also, considering the
>>>>> current use case of log dirty in Xen (only VRAM tracking has the problem),
>>>>> I thought A was better: the hypervisor only needs to track VRAM changes.
>>>>> If a malicious guest tries to DMA into the VRAM range, it only crashes
>>>>> itself (this should be reasonable).
>>>>>> I was going to say, from a release perspective, B is probably the
>>>>>> safest option for now.  But on the other hand, if we've been testing
>>>>>> sharing all this time, maybe switching back over to non-sharing whole-hog has
>>>> the higher risk?
>>>>> Another problem with B is that the current VT-d large-page support relies
>>>>> on sharing the EPT and VT-d page tables. This means that if we choose B,
>>>>> we would need to re-enable VT-d large pages. That would be a huge
>>>>> performance impact for Xen 4.4 when using the VT-d solution.
>>>>
>>>> OK -- if that's the case, then it definitely tips the balance back to
>>>> A.  Unless Tim or Jan disagrees, can one of you two check it in?
>>>>
>>>> Don't rush your judgement; but it would be nice to have this in before
>>>> RC4, which would mean checking it in today preferably, or early
>>>> tomorrow at the latest.
>>> That would be Tim then, as he would have to approve of it anyway.
>> Done.
> Actually I'm afraid there are two problems with this patch:
>
> For one, is enabling "global" log dirty mode still going to work
> after VRAM-only mode already got enabled? I ask because the
> paging_mode_log_dirty() check which paging_log_dirty_enable()
> does first thing suggests otherwise to me (i.e. the now
> conditional setting of all p2m entries to p2m_ram_logdirty would
> seem to never get executed). IOW I would think that we're now
> lacking a control operation allowing the transition from dirty VRAM
> tracking mode to full log dirty mode.
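[Editorial note: Jan's first concern can be sketched as a toy model. This
is illustrative Python only; the real logic is C in Xen's paging code, and
the function names, fields, and the -EINVAL return value are assumptions:]

```python
# Toy model of the control flow Jan describes: once dirty-VRAM tracking
# has set the domain's log-dirty flag, the early paging_mode_log_dirty()
# check makes a later "enable global log dirty" request bail out instead
# of widening the tracked range to all p2m entries.
class Domain:
    def __init__(self):
        self.log_dirty = False     # models paging_mode_log_dirty(d)
        self.tracked = None        # None, "vram", or "all"

def enable_vram_tracking(d):
    d.log_dirty = True
    d.tracked = "vram"

def log_dirty_enable(d):
    if d.log_dirty:                # the guard in question
        return -22                 # -EINVAL: already "enabled"
    d.log_dirty = True
    d.tracked = "all"              # all p2m entries -> p2m_ram_logdirty
    return 0

d = Domain()
enable_vram_tracking(d)
rc = log_dirty_enable(d)           # fails; tracking stays VRAM-only
```

Under this model there is indeed no path from VRAM-only tracking to full
log-dirty mode, which is the missing control operation Jan points out.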

Hrm, well, so far playing with this I've been unable to get a localhost 
migrate to fail with vncviewer attached.  Which seems a bit strange...

  -George



From xen-devel-bounces@lists.xen.org Mon Feb 17 15:01:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPh0-0002OY-P3; Mon, 17 Feb 2014 15:01:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFPgy-0002OQ-O2
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:01:08 +0000
Received: from [85.158.139.211:15516] by server-14.bemta-5.messagelabs.com id
	24/36-27598-F2422035; Mon, 17 Feb 2014 15:01:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392649262!4434860!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7608 invoked from network); 17 Feb 2014 15:01:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 15:01:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 15:01:02 +0000
Message-Id: <53023239020000780011CED9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 15:00:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Xiantao Zhang" <xiantao.zhang@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
In-Reply-To: <5301F000020000780011CCE0@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
> And second, I have been fighting with finding both conditions
> and (eventually) the root cause of a severe performance
> regression (compared to 4.3.x) I'm observing on an EPT+IOMMU
> system. This became _much_ worse after adding in the patch here
> (while in fact I had hoped it might help with the originally observed
> degradation): X startup fails due to timing out, and booting the
> guest now takes about 20 minutes. I didn't find the root cause of
> this yet, but meanwhile I know that
> - the same isn't observable on SVM
> - there's no problem when forcing the domain to use shadow
>   mode
> - there's no need for any device to actually be assigned to the
>   guest
> - the regression is very likely purely graphics related (based on
>   the observation that when running something that regularly but
>   not heavily updates the screen with X up, the guest consumes a
>   full CPU's worth of processing power, yet when that updating
>   doesn't happen, CPU consumption goes down, and it goes further
>   down when shutting down X altogether - at least as long as the
>   patch here doesn't get involved).
> This I'm observing on a Westmere box (and I didn't notice it earlier
> because that's one of those where due to a chipset erratum the
> IOMMU gets turned off by default), so it's possible that this can't
> be seen on more modern hardware. I'll hopefully find time today to
> check this on the one newer (Sandy Bridge) box I have.

Just got done with trying this: By default, things work fine there.
As soon as I use "iommu=no-snoop", things go bad (even worse
than on the older box - the guest is consuming about 2.5 CPUs
worth of processing power _without_ the patch here in use, so I
don't even want to think about trying it there); I guessed that to
be another of the potential sources of the problem since on that
older box the respective hardware feature is unavailable.

While I'll try to look into this further, I guess I have to defer to our
VT-d specialists at Intel at this point...

Jan



From xen-devel-bounces@lists.xen.org Mon Feb 17 15:01:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPh9-0002PW-Dd; Mon, 17 Feb 2014 15:01:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1WFPh8-0002PN-8x
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:01:18 +0000
Received: from [85.158.137.68:16935] by server-13.bemta-3.messagelabs.com id
	2E/32-26923-D3422035; Mon, 17 Feb 2014 15:01:17 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392649273!2382296!1
X-Originating-IP: [64.18.0.246]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6235 invoked from network); 17 Feb 2014 15:01:16 -0000
Received: from exprod5og115.obsmtp.com (HELO exprod5og115.obsmtp.com)
	(64.18.0.246)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 15:01:16 -0000
Received: from mail-ve0-f172.google.com ([209.85.128.172]) (using TLSv1) by
	exprod5ob115.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwIkOdroHTrJeQXted5W8UYkgtI35koQ@postini.com;
	Mon, 17 Feb 2014 07:01:15 PST
Received: by mail-ve0-f172.google.com with SMTP id c14so12161425vea.17
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 07:01:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=LG23+JLd2DlkVBnSkzqq0jBVKilZbvHbo1VZsCCpkTM=;
	b=ZMx0s9masFF1eTJxZQJEi5WR4jydUXCaPCmjYKD8zmxMK13jjTU9cMfqSLxILGnoU/
	79zsZ8UgbSz4gLhAeQw6Qs5sPf+NQ0ouCzUSLTKp/+alGYAiXZy9MmrmqG0j01adDHHr
	bXWO7G1pQZNp8tjapH71mM8+Z3/3W8+mVVs7A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=LG23+JLd2DlkVBnSkzqq0jBVKilZbvHbo1VZsCCpkTM=;
	b=MiDHiz3PFWwT0LYDzvKlnrRAX3ihFKaph1Jz2s2lNRpfGnXO51aUiXfwHnYlcP77bZ
	qMVYJmwhqPTKhkzgDCnne3JCKe91qW66sVpIbISrdGLYAgfZlljClOwOO84olULuGvpS
	LpadJTYVu9yXBBQQe3osPeto6P6zKr07qOTAugMvUhRAU5oiz1q9umzhGWCtGS28CmjL
	eti71/f6Myn4ncwy6YgJDUdqaKDTEheVRqCl0xDTKUJsVCGPXUwDtHHLp947GEH2BQWw
	EvsoV8IpuMDRoXRdX+/3ajgylnVKp/fwB+tcnKcixB5ADw/E5kOwyq3ZDvJjbKsX5+jS
	nM3A==
X-Gm-Message-State: ALoCoQkKUcT+WLawaBj0cBiSFLb1Bx68LYwnivnjuuT8Whhf1N2aE2hhi8OU8ZGT80nRS6g1I3zvU3ndO1W8T3OE5WG63q7L5n0Y/qU/LgSorHLvRhLhx6XMSiOiCDiOP69YRfsGh3y/f2T97z+gPeRPBuOkhkN2y/+OL79UmzYnVPHU8vo6zuo=
X-Received: by 10.220.67.18 with SMTP id p18mr17437093vci.14.1392649272938;
	Mon, 17 Feb 2014 07:01:12 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.67.18 with SMTP id p18mr17437084vci.14.1392649272836;
	Mon, 17 Feb 2014 07:01:12 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Mon, 17 Feb 2014 07:01:12 -0800 (PST)
In-Reply-To: <530219C4.3050304@linaro.org>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
Date: Mon, 17 Feb 2014 17:01:12 +0200
Message-ID: <CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8337405805101902146=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8337405805101902146==
Content-Type: multipart/alternative; boundary=047d7b343f9a344d9704f29b6db8

--047d7b343f9a344d9704f29b6db8
Content-Type: text/plain; charset=ISO-8859-1

Hi Julien,


> >
> >     > Can anyone clarify - is it possible to make a run time memory trap
> in
> >     > Xen hypervisor?
> >
> >     I guess you are talking about ARM? If so, it's not possible right
> now.
> >
> >
> > Does it mean, that it is possible on x86 ?
>
> Yes, you can look at register_io_handler in xen/arch/x86/hvm/intercept.c
>
> It's used a static array, but I don't think this is the solution for
> ARM. We don't know in advance the maximum number of MMIO region to handle.
>
>
What I'm thinking about for ARM is to use a linked list for MMIO handlers,
plus an API to register / unregister handlers.

xen/arch/arm/io.c:

static const struct mmio_handler *const mmio_handlers[] =
{
    &vgic_distr_mmio_handler,
    &vuart_mmio_handler,
};

This can be changed to a list. A new API will add / remove entries. VGIC and
VUART will call something like
mmio_register_handler(&vgic_distr_mmio_handler) during their corresponding
initcalls.

Then the only change required to the existing int
handle_mmio(mmio_info_t *info) function is to enumerate a list instead of an
array:
int handle_mmio(mmio_info_t *info)
{
    struct vcpu *v = current;
    int i;

    for ( i = 0; i < MMIO_HANDLER_NR; i++ )  /* --> list_for_each */
        if ( mmio_handlers[i]->check_handler(v, info->gpa) )
            return info->dabt.write ?
                mmio_handlers[i]->write_handler(v, info) :
                mmio_handlers[i]->read_handler(v, info);

    return 0;
}

Something like this.

Regards,
Andrii



> --
> Julien Grall
>



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

--047d7b343f9a344d9704f29b6db8--


--===============8337405805101902146==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8337405805101902146==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 15:01:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPh0-0002OY-P3; Mon, 17 Feb 2014 15:01:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFPgy-0002OQ-O2
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:01:08 +0000
Received: from [85.158.139.211:15516] by server-14.bemta-5.messagelabs.com id
	24/36-27598-F2422035; Mon, 17 Feb 2014 15:01:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392649262!4434860!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7608 invoked from network); 17 Feb 2014 15:01:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 15:01:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 15:01:02 +0000
Message-Id: <53023239020000780011CED9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 15:00:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Xiantao Zhang" <xiantao.zhang@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
In-Reply-To: <5301F000020000780011CCE0@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
> And second, I have been fighting with finding both conditions
> and (eventually) the root cause of a severe performance
> regression (compared to 4.3.x) I'm observing on an EPT+IOMMU
> system. This became _much_ worse after adding in the patch here
> (while in fact I had hoped it might help with the originally observed
> degradation): X startup fails due to timing out, and booting the
> guest now takes about 20 minutes. I didn't find the root cause of
> this yet, but meanwhile I know that
> - the same isn't observable on SVM
> - there's no problem when forcing the domain to use shadow
>   mode
> - there's no need for any device to actually be assigned to the
>   guest
> - the regression is very likely purely graphics related (based on
>   the observation that when running something that regularly but
>   not heavily updates the screen with X up, the guest consumes a
>   full CPU's worth of processing power, yet when that updating
>   doesn't happen, CPU consumption goes down, and it goes further
>   down when shutting down X altogether - at least as long as the
>   patch here doesn't get involved).
> This I'm observing on a Westmere box (and I didn't notice it earlier
> because that's one of those where due to a chipset erratum the
> IOMMU gets turned off by default), so it's possible that this can't
> be seen on more modern hardware. I'll hopefully find time today to
> check this on the one newer (Sandy Bridge) box I have.

Just got done with trying this: By default, things work fine there.
As soon as I use "iommu=no-snoop", things go bad (even worse
than on the older box - the guest is consuming about 2.5 CPUs
worth of processing power _without_ the patch here in use, so I
don't even want to think about trying it there); I guessed that to
be another of the potential sources of the problem since on that
older box the respective hardware feature is unavailable.

While I'll try to look into this further, I guess I have to defer to our
VT-d specialists at Intel at this point...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:02:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPib-0002ZU-UD; Mon, 17 Feb 2014 15:02:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFPia-0002ZF-2d
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:02:48 +0000
Received: from [85.158.137.68:4328] by server-9.bemta-3.messagelabs.com id
	DE/6E-10184-79422035; Mon, 17 Feb 2014 15:02:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392649364!2423359!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21180 invoked from network); 17 Feb 2014 15:02:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:02:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101448680"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 15:02:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 10:02:28 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFPiF-0001EC-Gf;
	Mon, 17 Feb 2014 15:02:27 +0000
Date: Mon, 17 Feb 2014 15:02:24 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402171502110.27926@kaball.uk.xensource.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1018423047-1392649344=:27926"
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1018423047-1392649344=:27926
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Mon, 17 Feb 2014, Andrii Tseglytskyi wrote:
> Hi Julien,
>=20
>       >
>       > =C2=A0 =C2=A0 > Can anyone clarify - is it possible to make a run=
 time memory trap in
>       > =C2=A0 =C2=A0 > Xen hypervisor?
>       >
>       > =C2=A0 =C2=A0 I guess you are talking about ARM? If so, it's not =
possible right now.
>       >
>       >
>       > Does it mean, that it is possible on x86 ?
>=20
> Yes, you can look at register_io_handler in xen/arch/x86/hvm/intercept.c
>=20
> It's used a static array, but I don't think this is the solution for
> ARM. We don't know in advance the maximum number of MMIO region to handle=
=2E
>=20
>=20
> What I'm thinking about for ARM - is to use linked list for MMIO handlers=
 + API to register / unregister handler.
>=20
> xen/arch/arm/io.c:
>=20
> =C2=A025 static const struct mmio_handler *const mmio_handlers[] =3D
> =C2=A026 { =C2=A0=C2=A0
> =C2=A027 =C2=A0 =C2=A0 &vgic_distr_mmio_handler,
> =C2=A028 =C2=A0 =C2=A0 &vuart_mmio_handler,
> =C2=A029 }; =C2=A0
>=20
> This can be changed to list. New API will add / remove entries. VGIC and =
VUART will call something like
> mmio_register_handler(&vgic_distr_mmio_handle) during corresponding initc=
all.
>=20
> Than the only change which is required for existing=C2=A0int handle_mmio(=
mmio_info_t *info) function =C2=A0- is to enumerate list, instead of array
> =C2=A032 int handle_mmio(mmio_info_t *info)
> =C2=A033 {
> =C2=A034 =C2=A0 =C2=A0 struct vcpu *v =3D current;
> =C2=A035 =C2=A0 =C2=A0 int i;
> =C2=A036=C2=A0
> =C2=A037 =C2=A0 =C2=A0 for ( i =3D 0; i < MMIO_HANDLER_NR; i++ ) --> *lis=
t_for_each*
> =C2=A038 =C2=A0 =C2=A0 =C2=A0 =C2=A0 if ( mmio_handlers[i]->check_handler=
(v, info->gpa) )
> =C2=A039 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 return info->dabt.writ=
e ?
> =C2=A040 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 mmio_han=
dlers[i]->write_handler(v, info) :
> =C2=A041 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 mmio_han=
dlers[i]->read_handler(v, info);
> =C2=A042=C2=A0
> =C2=A043 =C2=A0 =C2=A0 return 0;
> =C2=A044 }
>=20
> Something like this.

Sounds good in theory
--1342847746-1018423047-1392649344=:27926
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1018423047-1392649344=:27926--


From xen-devel-bounces@lists.xen.org Mon Feb 17 15:02:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPib-0002ZU-UD; Mon, 17 Feb 2014 15:02:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFPia-0002ZF-2d
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:02:48 +0000
Received: from [85.158.137.68:4328] by server-9.bemta-3.messagelabs.com id
	DE/6E-10184-79422035; Mon, 17 Feb 2014 15:02:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392649364!2423359!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21180 invoked from network); 17 Feb 2014 15:02:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:02:46 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101448680"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 15:02:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 10:02:28 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFPiF-0001EC-Gf;
	Mon, 17 Feb 2014 15:02:27 +0000
Date: Mon, 17 Feb 2014 15:02:24 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
In-Reply-To: <CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402171502110.27926@kaball.uk.xensource.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1018423047-1392649344=:27926"
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1018423047-1392649344=:27926
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Mon, 17 Feb 2014, Andrii Tseglytskyi wrote:
> Hi Julien,
>
>       >
>       >     > Can anyone clarify - is it possible to make a run time memory trap in
>       >     > Xen hypervisor?
>       >
>       >     I guess you are talking about ARM? If so, it's not possible right now.
>       >
>       >
>       > Does it mean that it is possible on x86?
>
> Yes, you can look at register_io_handler in xen/arch/x86/hvm/intercept.c
>
> It uses a static array, but I don't think this is the solution for
> ARM. We don't know in advance the maximum number of MMIO regions to handle.
>
>
> What I'm thinking about for ARM is to use a linked list for MMIO handlers,
> plus an API to register / unregister handlers.
>
> xen/arch/arm/io.c:
>
> static const struct mmio_handler *const mmio_handlers[] =
> {
>     &vgic_distr_mmio_handler,
>     &vuart_mmio_handler,
> };
>
> This can be changed to a list. A new API will add / remove entries. VGIC and
> VUART will call something like mmio_register_handler(&vgic_distr_mmio_handler)
> during the corresponding initcall.
>
> Then the only change required to the existing int handle_mmio(mmio_info_t *info)
> function is to enumerate the list instead of the array:
>
> int handle_mmio(mmio_info_t *info)
> {
>     struct vcpu *v = current;
>     int i;
>
>     for ( i = 0; i < MMIO_HANDLER_NR; i++ )  /* --> list_for_each */
>         if ( mmio_handlers[i]->check_handler(v, info->gpa) )
>             return info->dabt.write ?
>                        mmio_handlers[i]->write_handler(v, info) :
>                        mmio_handlers[i]->read_handler(v, info);
>
>     return 0;
> }
>
> Something like this.

Sounds good in theory
--1342847746-1018423047-1392649344=:27926
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1018423047-1392649344=:27926--


From xen-devel-bounces@lists.xen.org Mon Feb 17 15:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPkr-0002kN-GB; Mon, 17 Feb 2014 15:05:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFPkq-0002kE-2D
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:05:08 +0000
Received: from [85.158.139.211:27992] by server-10.bemta-5.messagelabs.com id
	E5/C4-08578-32522035; Mon, 17 Feb 2014 15:05:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392649506!4468684!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26819 invoked from network); 17 Feb 2014 15:05:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 15:05:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 15:05:05 +0000
Message-Id: <5302332D020000780011CEF1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 15:05:01 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>,"Tim Deegan" <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
In-Reply-To: <53022209.1060005@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 15:51, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/17/2014 10:18 AM, Jan Beulich wrote:
>> Actually I'm afraid there are two problems with this patch:
>>
>> For one, is enabling "global" log dirty mode still going to work
>> after VRAM-only mode already got enabled? I ask because the
>> paging_mode_log_dirty() check which paging_log_dirty_enable()
>> does first thing suggests otherwise to me (i.e. the now
>> conditional setting of all p2m entries to p2m_ram_logdirty would
>> seem to never get executed). IOW I would think that we're now
>> lacking a control operation allowing the transition from dirty VRAM
>> tracking mode to full log dirty mode.
> 
> Hrm, well, so far playing with this I've been unable to get a localhost 
> migrate to fail with the vncviewer attached.  Which seems a bit strange...

Not necessarily - it may depend on how the tools actually do this:
they might temporarily disable log dirty mode altogether, just to
re-enable full mode again right away. But this specific usage of the
hypervisor interface wouldn't (to me) rule out other tool stacks
doing this differently.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPoI-00030L-6k; Mon, 17 Feb 2014 15:08:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFPoH-00030A-IK
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:08:41 +0000
Received: from [193.109.254.147:62482] by server-16.bemta-14.messagelabs.com
	id 80/30-21945-7F522035; Mon, 17 Feb 2014 15:08:39 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392649717!4912063!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5315 invoked from network); 17 Feb 2014 15:08:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:08:38 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; 
	d="asc'?scan'208";a="103188513"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 15:08:37 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 10:08:36 -0500
Message-ID: <1392649714.32038.427.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Mon, 17 Feb 2014 16:08:34 +0100
In-Reply-To: <53020C4B.6000509@ts.fujitsu.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<53020C4B.6000509@ts.fujitsu.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>, Nate
	Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7429617389850219579=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7429617389850219579==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-9jl9DFOogM1YeyoZa17z"

--=-9jl9DFOogM1YeyoZa17z
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-02-17 at 14:19 +0100, Juergen Gross wrote:
> On 14.02.2014 18:21, Dario Faggioli wrote:
> >
> > Actually, you are right. It looks like there is no command or command
> > parameter telling explicitly to which pool a domain belongs [BTW, adding
> > Juergen, who knows that for sure].
>
> You didn't add me, but I just stumbled over this message. :-)
>
Oh, sorry... I could have sworn I did! :-P

Glad you've noticed the conversation anyway... and sorry.

> When I added cpupools the information could be obtained via "xm list -l".
> At the moment I haven't got a xen-unstable system up, and on my 4.2.3
> machine "xl list -l" isn't giving any information at all.
>
Yeah, I remember something about a discussion on this. On my -unstable
test box, I still get no output for dom0, while, if you have a domU, I
do see something, and the poolid is there.

root@Zhaman:~# xl list -l | grep -i pool
                "poolid": 0,

> With "xenstore-ls /vm" the information can be retrieved: it is listed
> under <uuid>/pool_name (with <uuid> being the UUID of the domain in
> question).
>
Funny:

root@Zhaman:~# xenstore-ls /vm | grep -i pool
root@Zhaman:~#

root@Zhaman:~# xenstore-ls | grep -i pool
root@Zhaman:~#

:-O

Is that because I didn't really add any pool (i.e., I'm running the
above with only Pool-0)?

> > If that is the case, we really should add one.
>
> Indeed. I think "xl cpupool-list" should have another option to show the
> domains in the cpupool. I'll prepare a patch.
>
Cool!

Thanks,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-9jl9DFOogM1YeyoZa17z
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMCJfIACgkQk4XaBE3IOsRxpACfXI+Ktguxpg2kAR3kbwWhmhEa
oL4AnixFdpoamJc4P364glWnmyiDo/I2
=7PwF
-----END PGP SIGNATURE-----

--=-9jl9DFOogM1YeyoZa17z--


--===============7429617389850219579==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7429617389850219579==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 15:16:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPvP-0003Bd-4o; Mon, 17 Feb 2014 15:16:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WFPvO-0003BQ-4d; Mon, 17 Feb 2014 15:16:02 +0000
Received: from [193.109.254.147:7688] by server-9.bemta-14.messagelabs.com id
	D3/0C-24895-1B722035; Mon, 17 Feb 2014 15:16:01 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392650160!4914222!1
X-Originating-IP: [209.85.212.175]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 437 invoked from network); 17 Feb 2014 15:16:00 -0000
Received: from mail-wi0-f175.google.com (HELO mail-wi0-f175.google.com)
	(209.85.212.175)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:16:00 -0000
Received: by mail-wi0-f175.google.com with SMTP id hm4so2510106wib.14
	for <multiple recipients>; Mon, 17 Feb 2014 07:16:00 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Pjmmfni50TU1RJp2sPwWZVgmYyOg26wWIPfZ/xJ6rqk=;
	b=oTQsde3pQ/mopyJqBTlx29xIneuYKZIgD7pOhJnPGL48FEOk7Dd6gVwFKPDezFQHfB
	KxB/7NZ0CReqNFTvHVkW8o8Qi56s4i9i+8sH+znhdUaJ2utdMb5Hg2VoivLkrp49p+Bi
	GfPRe3q59+nD/zAtuhXo27+NwFkmZLKfoCkZ5HQOq+nGa8klQXF7F4cgFht8VjLhWoqG
	8AQ6Ef/vePfcO+GnBYCqmmOunLPs97DAnNhHkxseNtyvoVqxOhsMQjTRoux/U7UYrh92
	mCnRsPbXX75R7qkuTQ2VL24L1ReLlblC9bOGsG3EFR9phV97aNN5tjR4glUcASGoBQBI
	SQmg==
X-Received: by 10.194.82.105 with SMTP id h9mr2586871wjy.52.1392650160432;
	Mon, 17 Feb 2014 07:16:00 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id j9sm37622567wjz.13.2014.02.17.07.15.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 07:15:59 -0800 (PST)
Message-ID: <530227AE.2090609@xen.org>
Date: Mon, 17 Feb 2014 15:15:58 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, =?ISO-8859-1?Q?Roger_Pau_?=
	=?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
References: <52DCE9FA.6010400@xen.org>	<52E7B6AF.3050604@xen.org>	<1391609348.6497.178.camel@kazak.uk.xensource.com>	<52FBA331.9040004@citrix.com>
	<21244.64811.826482.435714@mariner.uk.xensource.com>
In-Reply-To: <21244.64811.826482.435714@mariner.uk.xensource.com>
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	mirageos-devel@lists.xenproject.org,
	Paul Durrant <paul.durrant@citrix.com>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 13/02/2014 17:13, Ian Jackson wrote:
> Roger Pau Monné writes ("Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014"):
>> On 05/02/14 15:09, Ian Campbell wrote:
>>> Roger:
>>>        * Refactor Linux hotplug scripts
>>>
>>>          You did some of this I think?
>> No, I've added a block-iscsi script, but I did not refactor the other
>> ones. It is still valid; however, I'm not sure it's attractive from a
>> GSoC point of view. I'm going to leave it anyway in case someone is
>> interested.
> Who is actually submitting our GSOC application?
>
> Ian.
I have submitted it. Please continue working on and refining 
http://wiki.xen.org/wiki/GSoc_2014. If you feel that a project should be 
moved into http://wiki.xen.org/wiki/GSoc_2014#List_of_peer_reviewed_Projects, 
please do so.

Regards
Lars

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:19:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFPyb-0003KV-Vg; Mon, 17 Feb 2014 15:19:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFPyZ-0003KP-VZ
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:19:20 +0000
Received: from [85.158.143.35:20071] by server-3.bemta-4.messagelabs.com id
	30/53-11539-77822035; Mon, 17 Feb 2014 15:19:19 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392650358!6263654!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28570 invoked from network); 17 Feb 2014 15:19:18 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:19:18 -0000
Received: by mail-ea0-f177.google.com with SMTP id m10so4686308eaj.36
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 07:19:18 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=JuJW/aPWWFL/i7z4LGN+uCmSDqnV2n7RYH93n7CfgKo=;
	b=msrM8ZahLh3pJwvTCk+VJduF0Xztxy/w1s2rxiCpAYBQOBwExMSiOGYvqSKa68SFz/
	et6fbkJUWDoo4PqfZybgA6Q8Zl3nkOK+62QGm0vtIjVLHMcVTV3xRlZ7r8g5JDgBYaeN
	Cuw+pXMUXmosL0+RlEqhSCrsBPgc2T9Wc9YHhUdnmkSxZbLSCDvNIdiidrLxBvGHGBQb
	AD+p432+U6Rn1CO6WWgl6Ryx+Jaw1S6Pe/4Mop2Li3MDjFtRc8rMhr8v74JJGRaknHyi
	eyfNdguCdYRVsIkCXa/t9XmWHtCOEo0gTY8ZyhZfzL52WNRK679NWgG8h9MpyjMyEFXO
	e+qA==
X-Gm-Message-State: ALoCoQlsOtWf1PH2RsAUQrOlDelqLygUT0pFCd1dp3TxUN/18dJQ8ljls2VgEKyDWcW8F1dRRb2i
X-Received: by 10.14.205.3 with SMTP id i3mr27578849eeo.23.1392650358297;
	Mon, 17 Feb 2014 07:19:18 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	o45sm58643182eeb.18.2014.02.17.07.19.16 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 07:19:17 -0800 (PST)
Message-ID: <53022872.80209@linaro.org>
Date: Mon, 17 Feb 2014 15:19:14 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
In-Reply-To: <CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 03:01 PM, Andrii Tseglytskyi wrote:
> Hi Julien,
> 
> 
>     >
>     >     > Can anyone clarify - is it possible to make a run time
>     memory trap in
>     >     > Xen hypervisor?
>     >
>     >     I guess you are talking about ARM? If so, it's not possible
>     right now.
>     >
>     >
>     > Does it mean that it is possible on x86?
> 
>     Yes, you can look at register_io_handler in xen/arch/x86/hvm/intercept.c
> 
>     It uses a static array, but I don't think this is the solution for
>     ARM. We don't know in advance the maximum number of MMIO regions to
>     handle.
> 
> 
> What I'm thinking about for ARM is to use a linked list for MMIO
> handlers, plus an API to register/unregister handlers.
> 
> xen/arch/arm/io.c:
> 
> static const struct mmio_handler *const mmio_handlers[] =
> {
>     &vgic_distr_mmio_handler,
>     &vuart_mmio_handler,
> };
> 
> This can be changed to a list. A new API will add/remove entries. VGIC and
> VUART will call something like
> mmio_register_handler(&vgic_distr_mmio_handler) during the corresponding
> initcall.
> 
> Then the only change required to the existing int
> handle_mmio(mmio_info_t *info) function is to enumerate the list instead
> of the array:
> int handle_mmio(mmio_info_t *info)
> {
>     struct vcpu *v = current;
>     int i;
>
>     for ( i = 0; i < MMIO_HANDLER_NR; i++ ) --> *list_for_each*
>         if ( mmio_handlers[i]->check_handler(v, info->gpa) )
>             return info->dabt.write ?
>                 mmio_handlers[i]->write_handler(v, info) :
>                 mmio_handlers[i]->read_handler(v, info);
>
>     return 0;
> }
> 
> Something like this.

This solution sounds good.

If I remember correctly, you are writing a driver for an IPU/GPU MMU, right?
In this case, I think per-domain MMIO handlers would be better. Most of the
handlers will be used by a specific guest (except the VGIC handler).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:26:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:26:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQ4v-0003Wz-1H; Mon, 17 Feb 2014 15:25:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFQ4u-0003Wp-8H
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 15:25:52 +0000
Received: from [85.158.139.211:2633] by server-8.bemta-5.messagelabs.com id
	5C/72-05298-FF922035; Mon, 17 Feb 2014 15:25:51 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392650740!4414421!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDUwMjMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8729 invoked from network); 17 Feb 2014 15:25:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:25:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; 
	d="asc'?scan'208";a="101456937"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 15:25:39 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 10:25:38 -0500
Message-ID: <1392650737.32038.437.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 16:25:37 +0100
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk
Subject: [Xen-devel] TOMORROW (Feb 18) is Xen Project Test Day for 4.4 RC4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8883754478322253280=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8883754478322253280==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-Yvp28eZB8qtsnKUqDNv8"

--=-Yvp28eZB8qtsnKUqDNv8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Tomorrow, February 18, is the Test Day for Xen 4.4 Release Candidate 4.

General Information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC4_test_instructions

XEN 4.4 FEATURE DEVELOPERS:

If you have a new feature which is cooked and ready for testing in
RC4, we need to know about it and how to test it. Please edit the
instructions page this week to reflect suggested configuration and
testing instructions.

EVERYONE:

Please join us on Tuesday, February 18, to flush out any potential bugs
before the next release.

Thank you!
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-Yvp28eZB8qtsnKUqDNv8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMCKfEACgkQk4XaBE3IOsTF9ACbB4g3w6y0xmkyj4Trzo0TbWDm
WYEAoJUCTloh3J/NPfxoq4jN5tbqGfFg
=yr7Q
-----END PGP SIGNATURE-----

--=-Yvp28eZB8qtsnKUqDNv8--


--===============8883754478322253280==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8883754478322253280==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 15:31:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQ9z-0003oC-F2; Mon, 17 Feb 2014 15:31:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony.perard@citrix.com>) id 1WFQ9x-0003o2-Qp
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:31:05 +0000
Received: from [85.158.143.35:24932] by server-1.bemta-4.messagelabs.com id
	A4/D0-31661-93B22035; Mon, 17 Feb 2014 15:31:05 +0000
X-Env-Sender: anthony.perard@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392651062!6289459!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14586 invoked from network); 17 Feb 2014 15:31:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:31:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101458277"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 15:31:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 10:31:01 -0500
Received: from perard.uk.xensource.com ([10.80.2.84])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<anthony.perard@citrix.com>)	id 1WFQ9s-0001qz-J9;
	Mon, 17 Feb 2014 15:31:00 +0000
Date: Mon, 17 Feb 2014 15:31:00 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: herbert cland <herbert.cland@gmail.com>
Message-ID: <20140217153100.GA26280@perard.uk.xensource.com>
References: <2014021217255231643849@gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <2014021217255231643849@gmail.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [qemu-upstream-unstable.git] is in CONFLICT state
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 12, 2014 at 05:25:58PM +0800, herbert cland wrote:
> Dear ALL!
> 
> The following merge may have overwritten the "xen: Fix vcpu initialization" patch:
> --------------------
> Merge remote branch 'origin/stable-1.6' into xen-staging-master-9
> --------------------
> 
> This caused a conflict in xen-all.c.
> So it seems that the vcpu hotplug patch was overwritten by the upstream QEMU version.

Thanks, I will look into this issue.

Regards,

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> Dear ALL!
> 
> The following merge may have overwritten the "xen: Fix vcpu initialization" patch:
> --------------------
> Merge remote branch 'origin/stable-1.6' into xen-staging-master-9 
> -------------------- 
> 
> This caused a conflict in xen-all.c.
> So it seems that the vcpu hotplug patch was overwritten by the upstream qemu version.

Thanks, I will look into this issue.

Regards,

-- 
Anthony PERARD

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:39:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQHb-00045k-Er; Mon, 17 Feb 2014 15:38:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1WFQHa-00045f-BT
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:38:58 +0000
Received: from [193.109.254.147:14420] by server-10.bemta-14.messagelabs.com
	id 29/0C-10711-11D22035; Mon, 17 Feb 2014 15:38:57 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392651535!4931011!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5220 invoked from network); 17 Feb 2014 15:38:56 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 15:38:56 -0000
Received: from mail-vc0-f181.google.com ([209.85.220.181]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwItDsr6Vb6Eiltuxep2o/5t+FnAn07z@postini.com;
	Mon, 17 Feb 2014 07:38:56 PST
Received: by mail-vc0-f181.google.com with SMTP id ie18so11596441vcb.26
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 07:38:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=DAQMEOPBsU0AoSaljesAa20QatNMVPrxumyEKgWqhAs=;
	b=VGAwvLHdgB3hg5P95mwuZqPFo5J73/Lliu4YK6ebiWYZMNwZcDTBEPQPogGtIHsasj
	n+7oLCoNTS7Ow9E488rmCwfBfO8AomO4CdZlFdKfSP7nK5/sZBu8Fut+QCnFl0YcJa7T
	pnBvMzAekCw1kQOWqgMR1mVMbJOVfBSKtyz3I=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=DAQMEOPBsU0AoSaljesAa20QatNMVPrxumyEKgWqhAs=;
	b=EmL2/c0TMvL6CXZMl65sb+9iG7AtamsuYHnlMdtya+PEuuJaOXAhUOKYRazfeKlkgu
	16Qbj7wMlPhg9FX5RAszsJR0q2Gk2oZDxXDXImSFDBD+rzGdk8wDdOwo415qzxk8vZJl
	qexI2D7sXGfmE/Tfli3urrk76nX+wbQumME10jWXEoW2XwkGpGD0GhX9m8XVg8jkQL8C
	XtzX3Y/0tSxvUkqwbj4LcC/6WP/orb0MZjqL7JpKvj0yb/P3YfIYqpzpIlnfnVd/EXrt
	gT8dkbzWGaoEeJ7l7s/y8Nap5TkFbyXGDGNZO1Jigoo2ib+X0l2MZaYT+lHs76njvLkN
	l4xA==
X-Gm-Message-State: ALoCoQm8lEY0KX8PWP1MUqffUw5JKQbY5ZqTV0qTczOo4knGIwFN1nDYyORnGfl225KpUTTFdkB0vsUGOCVUaCEwAjOMWiDeQAbtAukRWJqxYVPuzUNwvhEgiVuKY1IIaJv2IxDLoOauwckACv4gpo657dJrsTVL9Zyyrgc5ggeT+rX1jkUcJVg=
X-Received: by 10.220.71.20 with SMTP id f20mr65124vcj.70.1392651534271;
	Mon, 17 Feb 2014 07:38:54 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.71.20 with SMTP id f20mr65117vcj.70.1392651534203; Mon,
	17 Feb 2014 07:38:54 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Mon, 17 Feb 2014 07:38:54 -0800 (PST)
In-Reply-To: <53022872.80209@linaro.org>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
	<53022872.80209@linaro.org>
Date: Mon, 17 Feb 2014 17:38:54 +0200
Message-ID: <CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Julien Grall <julien.grall@linaro.org>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi

>
> > Something like this.
>
> This solution sounds good.
>
> If I remembered correctly, you are writing a driver for IPU/GPU MMU, right?


Right. For now I have started developing something like a shadow page
table algorithm. I need to create a trap for a real pagetable, which
is placed somewhere in domain heap memory. That's why I was thinking
of a generic algorithm that can create a runtime trap for memory
whose address is not known at compile time.


>
> In this case, I think per-domain MMU handlers would be better. Most of the
> handlers will be used for a specific guest (except the VGIC handler).
>

Do you mean that existing mmio_handlers[] are global and new runtime
memory handlers should be per domain? This sounds quite reasonable.
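To make the idea concrete, here is a minimal C sketch of what a per-domain runtime trap table could look like. All names and types below (paddr_t aside, mmio_handler, domain_io, register_mmio_handler, find_mmio_handler, MAX_IO_HANDLER) are illustrative assumptions for this discussion, not Xen's actual definitions:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

typedef uint64_t paddr_t;

/* One trapped region with its emulation callbacks. */
typedef struct {
    paddr_t base;                             /* start of trapped region */
    paddr_t size;                             /* length of trapped region */
    int (*read)(paddr_t off);                 /* read emulation callback */
    int (*write)(paddr_t off, uint32_t val);  /* write emulation callback */
} mmio_handler;

#define MAX_IO_HANDLER 16

/* Per-domain table, instead of a single global mmio_handlers[]. */
typedef struct {
    int num_entries;
    mmio_handler handlers[MAX_IO_HANDLER];
} domain_io;

/* Register a trap at runtime: the guest physical address need not be
 * known at compile time, which is what a shadow page table driver
 * tracking a pagetable in domain heap memory needs. */
static inline bool register_mmio_handler(domain_io *d, paddr_t base,
                                         paddr_t size,
                                         int (*read)(paddr_t),
                                         int (*write)(paddr_t, uint32_t))
{
    if (d->num_entries >= MAX_IO_HANDLER)
        return false;
    d->handlers[d->num_entries++] =
        (mmio_handler){ base, size, read, write };
    return true;
}

/* On a data abort, find the handler covering the faulting address;
 * NULL means the fault takes the normal (non-emulated) path. */
static inline mmio_handler *find_mmio_handler(domain_io *d, paddr_t gpa)
{
    for (int i = 0; i < d->num_entries; i++) {
        mmio_handler *h = &d->handlers[i];
        if (gpa >= h->base && gpa < h->base + h->size)
            return h;
    }
    return NULL;
}
```

Because the table lives in the (hypothetical) domain_io structure rather than in a global array, each guest only pays for the traps registered against it.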

regards,
Andrii



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:39:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQIG-00048V-T7; Mon, 17 Feb 2014 15:39:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WFQIF-000487-4W; Mon, 17 Feb 2014 15:39:39 +0000
Received: from [85.158.139.211:64815] by server-12.bemta-5.messagelabs.com id
	1B/8A-15415-A3D22035; Mon, 17 Feb 2014 15:39:38 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392651577!4458569!1
X-Originating-IP: [209.85.212.181]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17023 invoked from network); 17 Feb 2014 15:39:37 -0000
Received: from mail-wi0-f181.google.com (HELO mail-wi0-f181.google.com)
	(209.85.212.181)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:39:37 -0000
Received: by mail-wi0-f181.google.com with SMTP id hi5so2536599wib.14
	for <multiple recipients>; Mon, 17 Feb 2014 07:39:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=2+96d+0AA/vEERDIWnR/W4Bk/Ndo9AZDzFVMz9We0o4=;
	b=MCTeM5zrEpNJD2Q6iQ7588fM+ePm/b3bBnu7JwmMKhW0972AbJrnxqnrsvEwmamTdX
	L1ZB1gXucHtt8iWn6W9kPXPvDyRGSKEyS8A1S/cF0oPZ46vq80ahcbCmFxBGfvpK0fp5
	FRgj/WC2TEwb6lxICi5zN8vkp5F/ZOlzc1hJ+BdaGVqH0aH1BHf4v7pg3zO6eVqI5bl8
	+ddq0NaWPIqllORDhvg+NQW7dgULA8fQuGxKDRR5hbBnnTEh2A8uu7+FZTR22BjNkBKG
	oKFb0Io9sc/AfnFvSVsK8GIjLHRLttoOJbHUZu082WSAzhciuvPokrWXo75GmYDjhFMT
	mvxg==
X-Received: by 10.194.123.201 with SMTP id mc9mr13897417wjb.43.1392651577179; 
	Mon, 17 Feb 2014 07:39:37 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75]) by mx.google.com with ESMTPSA id
	q15sm37788675wjw.18.2014.02.17.07.39.35 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 07:39:36 -0800 (PST)
Message-ID: <53022D35.8050704@xen.org>
Date: Mon, 17 Feb 2014 15:39:33 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>, =?ISO-8859-1?Q?Roger_Pau_?=
	=?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
References: <52DCE9FA.6010400@xen.org>	<52E7B6AF.3050604@xen.org>	<1391609348.6497.178.camel@kazak.uk.xensource.com>	<52FBA331.9040004@citrix.com>
	<21244.64811.826482.435714@mariner.uk.xensource.com>
In-Reply-To: <21244.64811.826482.435714@mariner.uk.xensource.com>
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	mirageos-devel@lists.xenproject.org,
	Paul Durrant <paul.durrant@citrix.com>,
	Santosh Jodh <Santosh.Jodh@citrix.com>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14
 2014
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

I just combed through the list of projects, and to be honest most can be improved.

Please put yourself into the shoes of someone who
a) is a student and
b) has not worked on Xen before.
* If you use acronyms, please point to the definitions.
* If you refer to specs, please link to the specs.
* If you refer to a location in the code, why not link to it.
c) Breaking down the project into more manageable chunks will also help.

Anyway, Google is reviewing applications this week. Changes to the project page that are made this week will still make an impact. Accepted orgs are announced on the 24th.

Regards
Lars

On 13/02/2014 17:13, Ian Jackson wrote:
> Roger Pau Monné writes ("Re: [Xen-devel] Prepping for GSOC 2014 [URGENT] - deadline Feb 14 2014"):
>> On 05/02/14 15:09, Ian Campbell wrote:
>>> Roger:
>>>        * Refactor Linux hotplug scripts
>>>
>>>          You did some of this I think?
>> No, I've added a block-iscsi script, but I did not refactor the other
>> ones. It is still valid, however, I'm not sure it's attractive from a
>> GSoC point of view. I'm going to leave it anyway in case someone is
>> interested.
> Who is actually submitting our GSOC application ?
>
> Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:57:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQZA-0004W0-Vj; Mon, 17 Feb 2014 15:57:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFQZ3-0004Vs-W6
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:57:07 +0000
Received: from [85.158.143.35:41001] by server-3.bemta-4.messagelabs.com id
	1C/78-11539-D4132035; Mon, 17 Feb 2014 15:57:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392652620!6285283!1
X-Originating-IP: [209.85.215.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6433 invoked from network); 17 Feb 2014 15:57:00 -0000
Received: from mail-ea0-f178.google.com (HELO mail-ea0-f178.google.com)
	(209.85.215.178)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:57:00 -0000
Received: by mail-ea0-f178.google.com with SMTP id a15so7246692eae.37
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 07:57:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8Jb2fjuEFWAjNIJLw7Or5j+We2ahlbyL5ER4GIkdnTU=;
	b=AFXQWQ7N/FmRveheUJxjwB7e2kDlQiKNfJcEUcJ3Bqk4AJ8fIjdSt0yEpLebTRKDpO
	rXu9rsNwQXYjnzVZWkVEvT19lfGcHhZ/sCwI6ptVWkN2MRR6TzlRK450JzLg92A0TuUu
	XoyyETAx2sxHwM4YBqudGq0Ne0IjFhLixr+6JODODVt1rHZbs+xhGl2zMRgHrDjQkUrc
	K0HudnM5kMPWtp/PT089Y22l/+sMdoh2YFoe9haohJrUiFC/2E9EAGwA3MElgagKU53g
	M2BoylwC7oS0XiLaPC/sqMgL6y4aZ5tDo/XboX0fSuRmIM3snKIF1QORSDqOF9zCFI9R
	yjRA==
X-Gm-Message-State: ALoCoQk67giXrbMnbbe3u4G2UlLP9+/v9+MxPkSh4gzl6LS7dcQjOXaMlq3YkgYUd6LlQvfVX3O2
X-Received: by 10.14.198.132 with SMTP id v4mr27759932een.43.1392652620581;
	Mon, 17 Feb 2014 07:57:00 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id u6sm58866881eep.11.2014.02.17.07.56.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 07:56:59 -0800 (PST)
Message-ID: <53023144.80101@linaro.org>
Date: Mon, 17 Feb 2014 15:56:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
	<53022872.80209@linaro.org>
	<CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
In-Reply-To: <CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On 02/17/2014 03:38 PM, Andrii Tseglytskyi wrote:
> Do you mean that existing mmio_handlers[] are global and new runtime
> memory handlers should be per domain? This sounds quite reasonable.

Yes. For now, the check_handler callback checks whether these handlers
can be invoked for the current domain.
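For readers following along, a minimal sketch of the scheme being described: a single global handler table whose entries carry a per-entry predicate deciding whether they apply to the faulting domain. All names and types here (struct domain's layout, mmio_check_handler, check_any, check_dom1, find_handler, the example addresses) are illustrative assumptions, not Xen's actual code:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <assert.h>

typedef uint64_t paddr_t;

struct domain { int domain_id; };

typedef struct {
    paddr_t base, size;
    /* Predicate: does this handler apply to this domain? */
    bool (*check_handler)(const struct domain *d);
} mmio_check_handler;

/* A VGIC-style handler applies to every guest... */
static bool check_any(const struct domain *d) { (void)d; return true; }
/* ...while a device-specific handler applies to one guest only. */
static bool check_dom1(const struct domain *d) { return d->domain_id == 1; }

/* Global table shared by all domains (illustrative addresses). */
static mmio_check_handler mmio_handlers[] = {
    { 0x2c000000, 0x1000, check_any },   /* e.g. interrupt controller */
    { 0x48000000, 0x1000, check_dom1 },  /* e.g. IPU MMU, dom1 only   */
};

/* Walk the global table; skip entries whose predicate rejects
 * the current domain. */
static inline mmio_check_handler *find_handler(const struct domain *d,
                                               paddr_t gpa)
{
    for (size_t i = 0;
         i < sizeof(mmio_handlers) / sizeof(mmio_handlers[0]); i++) {
        mmio_check_handler *h = &mmio_handlers[i];
        if (gpa >= h->base && gpa < h->base + h->size &&
            h->check_handler(d))
            return h;
    }
    return NULL;
}
```

The per-domain table proposed earlier in the thread removes the need for this predicate: entries are only ever visible to the domain they were registered for.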

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 15:57:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 15:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQZA-0004W0-Vj; Mon, 17 Feb 2014 15:57:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFQZ3-0004Vs-W6
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 15:57:07 +0000
Received: from [85.158.143.35:41001] by server-3.bemta-4.messagelabs.com id
	1C/78-11539-D4132035; Mon, 17 Feb 2014 15:57:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392652620!6285283!1
X-Originating-IP: [209.85.215.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6433 invoked from network); 17 Feb 2014 15:57:00 -0000
Received: from mail-ea0-f178.google.com (HELO mail-ea0-f178.google.com)
	(209.85.215.178)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 15:57:00 -0000
Received: by mail-ea0-f178.google.com with SMTP id a15so7246692eae.37
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 07:57:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=8Jb2fjuEFWAjNIJLw7Or5j+We2ahlbyL5ER4GIkdnTU=;
	b=AFXQWQ7N/FmRveheUJxjwB7e2kDlQiKNfJcEUcJ3Bqk4AJ8fIjdSt0yEpLebTRKDpO
	rXu9rsNwQXYjnzVZWkVEvT19lfGcHhZ/sCwI6ptVWkN2MRR6TzlRK450JzLg92A0TuUu
	XoyyETAx2sxHwM4YBqudGq0Ne0IjFhLixr+6JODODVt1rHZbs+xhGl2zMRgHrDjQkUrc
	K0HudnM5kMPWtp/PT089Y22l/+sMdoh2YFoe9haohJrUiFC/2E9EAGwA3MElgagKU53g
	M2BoylwC7oS0XiLaPC/sqMgL6y4aZ5tDo/XboX0fSuRmIM3snKIF1QORSDqOF9zCFI9R
	yjRA==
X-Gm-Message-State: ALoCoQk67giXrbMnbbe3u4G2UlLP9+/v9+MxPkSh4gzl6LS7dcQjOXaMlq3YkgYUd6LlQvfVX3O2
X-Received: by 10.14.198.132 with SMTP id v4mr27759932een.43.1392652620581;
	Mon, 17 Feb 2014 07:57:00 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id u6sm58866881eep.11.2014.02.17.07.56.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 07:56:59 -0800 (PST)
Message-ID: <53023144.80101@linaro.org>
Date: Mon, 17 Feb 2014 15:56:52 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
	<53022872.80209@linaro.org>
	<CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
In-Reply-To: <CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On 02/17/2014 03:38 PM, Andrii Tseglytskyi wrote:
> Do you mean that existing mmio_handlers[] are global and new runtime
> memory handlers should be per domain? This sounds quite reasonable.

Yes. For now, the check_handler callback checks whether these handlers
can be invoked for the current domain.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 16:01:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 16:01:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQda-00054u-My; Mon, 17 Feb 2014 16:01:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WFQdZ-00054p-GA
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 16:01:41 +0000
Received: from [85.158.143.35:27199] by server-2.bemta-4.messagelabs.com id
	59/FA-10891-46232035; Mon, 17 Feb 2014 16:01:40 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392652899!6298244!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5701 invoked from network); 17 Feb 2014 16:01:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 16:01:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101466418"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 16:01:38 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 11:01:37 -0500
Message-ID: <53023260.1070109@citrix.com>
Date: Mon, 17 Feb 2014 17:01:36 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: John Baldwin <jhb@freebsd.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <1387884062-41154-1-git-send-email-roger.pau@citrix.com>
	<1980951.95r2q2cca3@ralph.baldwin.cx> <52FD7624.90202@citrix.com>
	<201402141251.10278.jhb@freebsd.org>
In-Reply-To: <201402141251.10278.jhb@freebsd.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: julien.grall@citrix.com, freebsd-xen@freebsd.org,
	freebsd-current@freebsd.org, kib@freebsd.org,
	xen-devel@lists.xenproject.org, gibbs@freebsd.org
Subject: Re: [Xen-devel] [PATCH RFC 09/13] xen: change quality of the MADT
 ACPI enumerator
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 14/02/14 18:51, John Baldwin wrote:
> On Thursday, February 13, 2014 8:49:24 pm Andrew Cooper wrote:
>> On 08/02/2014 21:42, John Baldwin wrote:
>>> On Tuesday, December 24, 2013 12:20:58 PM Roger Pau Monne wrote:
>>>> Lower the quality of the MADT ACPI enumerator, so on Xen Dom0 we can
>>>> force the usage of the Xen mptable enumerator even when ACPI is
>>>> detected.
>>> Hmm, so I think one question is why does the existing MADT parser
>>> not work with the MADT table provided by Xen?  This may very well
>>> be correct, but if it's only a small change to make the existing
>>> MADT parser work with Xen's MADT table, that route might be
>>> preferable.
>>>
>>
>> For dom0, the MADT seen is the system MADT, which does not bear any
>> reality to dom0's topology.  For PV domU, no MADT will be found.  For
>> HVM domU, the MADT seen ought to represent (virtual) reality.
> 
> Hmm, the other changes suggested that you do want to use the I/O APIC
> entries and interrupt overrides from the system MADT for dom0?  Just
> not the CPU entries.  Is that correct?

Yes, we need the interrupt entries in order to interact with the
underlying hardware, but not the CPU entries/topology.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 16:24:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 16:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQyu-0005J9-Ok; Mon, 17 Feb 2014 16:23:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFQyt-0005J2-0E
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 16:23:43 +0000
Received: from [193.109.254.147:7556] by server-14.bemta-14.messagelabs.com id
	B7/E2-29228-E8732035; Mon, 17 Feb 2014 16:23:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392654219!4917544!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29262 invoked from network); 17 Feb 2014 16:23:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 16:23:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101472911"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 16:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 11:22:02 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFQxF-0002gg-Cq;
	Mon, 17 Feb 2014 16:22:01 +0000
Message-ID: <53023729.7020009@citrix.com>
Date: Mon, 17 Feb 2014 16:22:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Jun Nakajima <jun.nakajima@intel.com>, Tim Deegan <tim@xen.org>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>, Boris
	Ostrovsky <boris.ostrovsky@oracle.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Here is a design proposal to improve VM feature levelling support in Xen
and libxc.

PDF can be found here:
http://xenbits.xen.org/people/andrewcoop/feature-levelling/feature-levelling-C.pdf

And markdown source inline:

Introduction
============

Revision History
----------------

------------------------------------------------------------------------------
Version  Date         Changes
-------  -----------  --------------------------------------------------------
Draft A  07 Feb 2014  Initial draft

Draft B  13 Feb 2014  More detail for proposed new implementation

Draft C  17 Feb 2014  Even more details for proposed new implementation
------------------------------------------------------------------------------

Background
----------

_CPU feature masking_ is a term used to mean altering the visible
feature-set of a processor.  For single systems, this could be to hide
certain features for which operating system support is buggy.

In the world of virtualisation, it is common to have non-identical
hardware in
a cluster but still want to migrate a virtual machine safely.  On regular
hardware, the kernel can safely assume that the feature-set as detected on
boot will remain the same.  Live migration invalidates this assumption when
moving between two non-identical pieces of hardware.

To migrate virtual machines in this fashion, orchestration software must
ensure that the available feature set remains consistent anywhere the
virtual
machine might end up.

The feature-set of a particular CPU can be obtained using the `CPUID`
instruction.  It was introduced as a forward compatible way of
advertising new
features which were detectable at runtime.  Information available includes
processor branding, available features, topology information and cache
details.

The `CPUID` instruction is an unprivileged instruction, usable from
user-mode
without interception from the kernel.  This makes it impossible to
paravirtualise using the standard trap-and-emulate method.

Purpose
-------

This project originally started to improve the way in which XenServer
performed heterogeneous pool levelling.  In the process of investigation, it
was discovered that the current implementations in Xen and libxc are in
need of improvement, particularly in relation to PV guests.

This document describes:

* What properties are needed from a VM point of view
* What hardware features are available to aid with levelling
* What abilities are exposed by Xen and libxc for levelling
* How XenServer currently does pool levelling (and why it is in need of
improvements)

This document also proposes a new mechanism for VM feature levelling, taking
into account the information needed by orchestration software.


What a Virtual Machine cares about
==================================

On native hardware, a kernel, as well as certain userspace libraries
will use
the set of available features to tune themselves to run more efficiently.
Over a migrate, it is critical that features a VM is using do not disappear.
(In some cases it might be possible to trap-and-emulate missing
features, but this would be an unacceptably high overhead so is not
considered.  It is also not applicable in the general case.)

When a VM is liable to migrate between hardware of differing
feature-sets, it
is important to ensure that the VM is strictly only using the common
subset of
features available on any potential destination.

This can be done either by hiding features outside of the common subset,
or in
some cases specifically instructing the kernel not to use a feature which it
can see.

Hardware features to aid levelling
==================================

HVM guests (using `Intel VT-x` or `AMD SVM`) will exit to Xen on each
`CPUID`
instruction, allowing Xen full and complete control over all leaves.

PV guests are harder.  By default, `CPUID` instructions executed in a PV
guest
will not trap, leaving Xen no direct ability to control the information
returned.

On newer Intel hardware, a feature known as _CPUID Faulting_ allows
Xen to cause `CPUID` instructions executed in PV guests to trap, which
gives Xen full and complete control over all leaves (exactly like an
HVM guest).

Xen-aware PV guest kernels and userspace can make use of the 'Forced
Emulation
Prefix'

> `ud2a; .byte 'x'; .byte 'e'; .byte 'n'; cpuid`

which Xen recognises as a deliberate attempt to get the fully-controlled
`CPUID` information rather than the hardware-reported information.  This
only
works with cooperative guests and guest userspace, so cannot be directly
relied upon.
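The prefix sequence above has a fixed byte encoding (`ud2a` is `0F 0B`
and `cpuid` is `0F A2`), which is how Xen's emulator recognises it.  As
a purely illustrative sketch, the following constructs that byte
sequence in a buffer; the helper name is made up for this example:

```c
#include <stddef.h>
#include <string.h>

/* Byte-for-byte encoding of the forced emulation prefix sequence:
 * ud2a (0F 0B), the literal bytes 'x' 'e' 'n', then cpuid (0F A2). */
static size_t emit_fep_cpuid(unsigned char *buf)
{
    static const unsigned char seq[] = {
        0x0F, 0x0B,       /* ud2a */
        'x', 'e', 'n',    /* marker bytes Xen looks for */
        0x0F, 0xA2,       /* cpuid */
    };
    memcpy(buf, seq, sizeof(seq));
    return sizeof(seq);
}
```

Executing these bytes on bare hardware simply faults at the `ud2a`,
which is why only Xen-aware guests emit them deliberately.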

Most hardware available these days has some number of `CPUID` Feature
Mask MSRs, which apply a simple AND-mask to all `CPUID` instructions
requesting specific feature bitmap sets.  The exact MSRs, and which
feature bitmap sets they affect, are hardware specific.
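Because the MSRs are plain AND-masks, levelling a pool reduces to bit
arithmetic: intersect every host's feature word and program the result
into each host's mask MSR.  A minimal sketch (function names are
illustrative, not from any real API):

```c
#include <stdint.h>

/* A masking MSR is a plain AND-mask: of whatever the hardware would
 * report, only bits also set in the mask remain visible. */
static uint32_t apply_feature_mask(uint32_t hw_features, uint32_t msr_mask)
{
    return hw_features & msr_mask;
}

/* The common subset across a pool is the AND of every host's feature
 * word; programming that value as the mask levels the pool. */
static uint32_t common_subset(const uint32_t *host_features, int nr_hosts)
{
    uint32_t mask = ~0u;
    for (int i = 0; i < nr_hosts; i++)
        mask &= host_features[i];
    return mask;
}
```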

Having said that, for PV guests particularly, there are features which might
be visible, but which they cannot possibly use.  As a result, Xen can
get away
with hiding fewer features where it knows the guest could not use the
feature.


How Xen currently uses and exposes levelling support
====================================================

Libxc has a `CPUID` Policy API which can be set by the toolstack for a
domain.
Libxc performs some information gathering, and uses the `DOMCTL_set_cpuid`
hypercall to specify what information should be returned by Xen when the
domain requests specific `CPUID` leaves.

The user of the libxc `CPUID` Policy API may specify, for any leaf
whatsoever, whether particular bits should be forced high, forced low,
default (as chosen by libxc), specifically the same as hardware, or
specifically the same as hardware and maintained consistently across
migration.

The default `CPUID` Policy involves libxc trying to work out which features
should be set or cleared in the policy.  It does this with a mixture of
native
`CPUID` instructions, some switch statements choosing to enable/disable
certain features and hypercalls querying certain Xen state.

When Xen is servicing a `CPUID` instruction on behalf of a guest and ends up
using the policy provided by libxc, it subsequently edits certain fields,
particularly in the feature sets.

Support for the feature masking MSRs is available via the five command line
parameters `cpuid_mask_({,extd_}{ecx,edx}|xsave_eax)`, which get applied at
boot and reduce the visible feature set to every subsequent `CPUID`
instruction.

Support for _CPUID Faulting_ exists, but only insofar as having the same
effect as the masking MSRs would provide.

How XenServer currently does levelling
======================================

The _Heterogeneous Pool Levelling_ support in XenServer appears to
predate the
libxc CPUID policy API, so does not currently use it.  The toolstack has a
table of CPU model numbers identifying whether levelling is supported.  It
then uses native `CPUID` instructions to look at the first four feature
masks,
and identifies the subset of features across the pool.
`cpuid_mask_{,extd_}{ecx,edx}` is then set on Xen's command line for
each host
in the pool, and all hosts rebooted.

This has several limitations:

* Xen and dom0 have a reduced feature set despite not needing to migrate
* There is only a single level for all VMs in the pool
* The toolstack only understands 4 of the 5 possible masking MSRs, and there
  are now feature maps in further `CPUID` leaves which have no masking MSRs


Proposal for new implementation
===============================

Experimentally, the masking MSRs can be context switched.  There is no
need to
force all PV guests to the same level, and no need to prevent dom0 or
Xen from
using certain features.

The toolstack needs to know how much control Xen has over VM features. 
In the
case that there are insufficient masking MSRs, and no faulting support is
present, a PV VM can still potentially be made safe to migrate by explicitly
disabling features on the kernel command line.  As a result, there
should be a
new mechanism which reports the levelling controls Xen has available.

The features available to each type of guest are really only known to
Xen.  Having libxc try to divine them is bogus (especially as libxc is
subject to the toolstack domain's cpuid policy itself).  Therefore on
boot, Xen should work out the maximal feature set available to each
type of guest and make this information available to the toolstack.

`struct sysctl_physinfo.levelling_caps`
---------------------------------------

A bitmap field.  This is to inform a toolstack what Xen is capable of in
terms
of levelling.  Bits reported include:

* `faulting`
* `mask_ecx`
* `mask_edx`
* `mask_extd_ecx`
* `mask_extd_edx`
* `mask_xsave_eax`

_It is probably better to extend sysctl_physinfo in preference to
introducing a new hypercall to return a word with a few bits set._
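As a concrete illustration of consuming such a bitmap, the following
assigns hypothetical bit positions to the capabilities listed above
(the draft names the bits but does not fix their positions, so both
the positions and the predicate are assumptions for this sketch):

```c
#include <stdint.h>

/* Hypothetical bit assignments for the proposed levelling_caps field;
 * the draft names the capabilities but does not define positions. */
#define LCAP_FAULTING        (1u << 0)
#define LCAP_MASK_ECX        (1u << 1)
#define LCAP_MASK_EDX        (1u << 2)
#define LCAP_MASK_EXTD_ECX   (1u << 3)
#define LCAP_MASK_EXTD_EDX   (1u << 4)
#define LCAP_MASK_XSAVE_EAX  (1u << 5)

/* Simplified toolstack-side check: a PV guest's leaves can be fully
 * controlled if faulting exists, or (less completely) if all four
 * legacy masking MSRs are present. */
static int can_level_pv(uint32_t caps)
{
    uint32_t all_masks = LCAP_MASK_ECX | LCAP_MASK_EDX |
                         LCAP_MASK_EXTD_ECX | LCAP_MASK_EXTD_EDX;
    return (caps & LCAP_FAULTING) || ((caps & all_masks) == all_masks);
}
```

When neither condition holds, the toolstack would fall back to the
kernel-command-line approach described above.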

Improvements to `XEN_DOMCTL_set_cpuid`
--------------------------------------

The `XEN_DOMCTL_set_cpuid` hypercall is too lax at validating its
input, which results in further validation being scattered over the
Xen code.  In particular it should not be possible to set feature bits
which are blatantly untrue.

* Feature bitmaps should be strictly checked against Xen's maximal set for a
  domain.
* Leaves should be checked against `max{,_extd}_eax`.  `libxc` currently
sets
  the leaves in a suitable order for this restriction to be enforced.
* Xen should calculate a domain's feature masking MSRs from uploaded
  leaves, which prevents the toolstack from needing to special-case
  `CPUID` masking vs faulting based on host support.
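The first check above amounts to refusing any uploaded feature word
that claims a bit outside Xen's maximal set, rather than silently
fixing it up at `CPUID` service time.  A sketch, with an illustrative
function name:

```c
#include <stdint.h>

/* Strict validation of one uploaded CPUID feature word: accept it only
 * if every set bit is also present in Xen's maximal set for this guest
 * type.  Returns 1 if acceptable, 0 if the upload must be rejected. */
static int validate_feature_word(uint32_t uploaded, uint32_t xen_maximal)
{
    return (uploaded & ~xen_maximal) == 0;
}
```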

Lazy context switching of VCPU masking MSRs
-------------------------------------------

Domains having different sets of features is an important flexibility. This
requires tracking and properly context switching the MSRs on vcpu context
switches, in the case that _CPUID faulting_ is not available.

At boot, Xen shall determine which masking MSRs are available as part of
calculating `sysctl_physinfo.levelling_caps`.  All domain masks
(including the
idle domain) default to `~0`, and for PV guests (when _faulting_ is not
available) can be reduced by setting the policy.  Updates to a domain's
masks
must never be able to exceed the equivalent mask in the idle domain.

The context switch code shall lazily update the masking MSRs when context
switching between VCPUs.
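The lazy scheme can be modelled in a few lines: track what the
hardware MSR currently holds and only issue the (expensive) write when
the incoming vcpu's mask differs.  The `wrmsr` stub below stands in
for the real instruction purely so the behaviour is observable:

```c
#include <stdint.h>

static uint64_t current_msr_value;  /* what the hardware MSR holds */
static int wrmsr_count;             /* how often we touched hardware */

/* Stub standing in for a real wrmsr, so writes can be counted. */
static void wrmsr_stub(uint64_t val)
{
    current_msr_value = val;
    wrmsr_count++;
}

/* Lazy update on vcpu context switch: skip the MSR write entirely
 * when the next vcpu wants the mask already in place. */
static void ctxt_switch_mask(uint64_t next_mask)
{
    if (current_msr_value != next_mask)
        wrmsr_stub(next_mask);
}
```

Since most switches are between the idle domain and unmasked guests
(all using `~0`), the common case performs no MSR write at all.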

Deprecation of `cpuid_mask_*` command line parameters
-----------------------------------------------------

The presence of these masking MSRs is already intermittent, and they
are starting to disappear in more modern hardware.  With feature
levelling being properly configurable via the improvements presented
here, there is no real justification for using these command line
parameters.  Features which need to be hidden from Xen or dom0 should
be hidden using the appropriate command line parameters.

Attempted use of these command line parameters should emit a deprecation
warning, but continue to work as a host-wide lowering of features.  It shall
continue to work by lowering the idle domain's masks.

`XEN_SYSCTL_get_domain_cpuid_policy`
------------------------------------

Get the Xen-calculated default CPUID policy for PV and HVM domains.  This is
needed by toolstacks to calculate how to level the VM features for safe
migration.

_This is a SYSCTL rather than a DOMCTL as it is system-specific
information referring to types of domains, rather than per-domain
information.  On the other hand, it could probably just be another set
of hw_caps and forgo introducing a new hypercall - I am open to
suggestions as to the best method of reporting this information._


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 16:24:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 16:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFQyu-0005J9-Ok; Mon, 17 Feb 2014 16:23:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFQyt-0005J2-0E
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 16:23:43 +0000
Received: from [193.109.254.147:7556] by server-14.bemta-14.messagelabs.com id
	B7/E2-29228-E8732035; Mon, 17 Feb 2014 16:23:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392654219!4917544!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29262 invoked from network); 17 Feb 2014 16:23:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 16:23:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101472911"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 16:23:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 11:22:02 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFQxF-0002gg-Cq;
	Mon, 17 Feb 2014 16:22:01 +0000
Message-ID: <53023729.7020009@citrix.com>
Date: Mon, 17 Feb 2014 16:22:01 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Jun Nakajima <jun.nakajima@intel.com>, Tim Deegan <tim@xen.org>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>, Boris
	Ostrovsky <boris.ostrovsky@oracle.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

Here is a design proposal to improve VM feature levelling support in Xen
and libxc.

PDF can be found here:
http://xenbits.xen.org/people/andrewcoop/feature-levelling/feature-levelling-C.pdf

And markdown source inline:

Introduction
============

Revision History
----------------

------------------------------------------------------------------------------
Version  Date         Changes
-------  ----------- 
--------------------------------------------------------
Draft A  07 Feb 2014  Initial draft

Draft B  13 Feb 2014  More detail for proposed new implementation

Draft C  17 Feb 2014  Even more details for proposed new implementation
------------------------------------------------------------------------------

Background
----------

_CPU feature masking_ is a term used to mean altering the visible
feature-set
of a processor.  For single systems, this could be to hide certain features
from operating system software, for which support is buggy.

In the world of virtualisation, it is common to have non-identical
hardware in
a cluster but still want to migrate a virtual machine safely.  On regular
hardware, the kernel can safely assume that the feature-set as detected on
boot will remain the same.  Live migration invalidates this assumption when
moving between two non-identical pieces of hardware.

To migrate virtual machines in this fashion, orchestration software must
ensure that the available feature set remains consistent anywhere the
virtual
machine might end up.

The feature-set of a particular CPU can be obtained using the `CPUID`
instruction.  It was introduced as a forward compatible way of
advertising new
features which were detectable at runtime.  Information available includes
processor branding, available features, topology information and cache
details.

The `CPUID` instruction is an unprivileged instruction, usable from
user-mode
without interception from the kernel.  This makes it impossible to
paravirtualise using the standard trap-and-emulate method.

Purpose
-------

This project originally started to improve the way in which XenServer
performed heterogeneous pool levelling.  In the process of investigation, it
was discovered that the current implementation in Xen and libxc are in
need of
improvement, particularly in relation to PV guests.

This document describes:

* What properties are needed from a VM point of view
* What hardware features are available to aid with levelling
* What abilities are exposed by Xen and libxc for levelling
* How XenServer currently does pool levelling (and why it is in need of
improvements)

This document also proposes a new mechanism for VM feature levelling, taking
into account the information needed by orchestration software.


What a Virtual Machine cares about
==================================

On native hardware, a kernel, as well as certain userspace libraries, will
use the set of available features to tune themselves to run more
efficiently.  Across a migration, it is critical that features a VM is using
do not disappear.  (In some cases it might be possible to trap-and-emulate
missing features, but this would be an unacceptably high overhead, so is not
considered.  It is also not applicable in the general case.)

When a VM is liable to migrate between hardware of differing
feature-sets, it
is important to ensure that the VM is strictly only using the common
subset of
features available on any potential destination.

This can be done either by hiding features outside of the common subset,
or, in some cases, by specifically instructing the kernel not to use a
feature which it can see.
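The "common subset" here is simply the bitwise intersection of each host's
feature bitmaps.  A minimal toolstack-side sketch (the helper name and the
number of feature words are hypothetical, chosen only for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Number of 32-bit feature words tracked per host (hypothetical width). */
#define FEATURE_WORDS 4

/*
 * Compute the feature set common to every host in a pool: a bitwise AND
 * across each host's feature words.  A VM restricted to this subset can
 * safely migrate to any host in the pool.
 */
static void common_feature_set(uint32_t hosts[][FEATURE_WORDS],
                               size_t nr_hosts,
                               uint32_t common[FEATURE_WORDS])
{
    for ( size_t w = 0; w < FEATURE_WORDS; ++w )
    {
        common[w] = ~0u;               /* Start from "everything". */
        for ( size_t h = 0; h < nr_hosts; ++h )
            common[w] &= hosts[h][w];  /* Keep only bits every host has. */
    }
}
```

Adding a host to the pool can only ever clear bits in the result, which is
why orchestration software must recompute the level whenever pool
membership changes.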

Hardware features to aid levelling
==================================

HVM guests (using `Intel VT-x` or `AMD SVM`) will exit to Xen on each
`CPUID`
instruction, allowing Xen full and complete control over all leaves.

PV guests are harder.  By default, `CPUID` instructions executed in a PV
guest
will not trap, leaving Xen no direct ability to control the information
returned.

On newer Intel hardware, a feature known as _CPUID Faulting_ can allow Xen
to cause `CPUID` instructions executed in PV guests to trap, which allows
Xen full and complete control over all leaves (exactly like an HVM guest).

Xen-aware PV guest kernels and userspace can make use of the 'Forced
Emulation
Prefix'

> `ud2a; .byte 'x'; .byte 'e'; .byte 'n'; cpuid`

which Xen recognises as a deliberate attempt to get the fully-controlled
`CPUID` information rather than the hardware-reported information.  This
only
works with cooperative guests and guest userspace, so cannot be directly
relied upon.

Most hardware available these days has some number of `CPUID` Feature Mask
MSRs, which apply a simple AND-mask to all `CPUID` instructions requesting
specific feature bitmap sets.  The exact MSRs, and which feature bitmap
sets they affect, are hardware specific.
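The effect of such a mask MSR can be modelled as a plain AND over the
affected registers.  A sketch (the struct and function names are
hypothetical; real hardware applies the mask itself):

```c
#include <stdint.h>

/* Hypothetical model of the feature bitmaps returned by one CPUID leaf. */
struct feature_leaf {
    uint32_t ecx, edx;
};

/*
 * A CPUID feature-mask MSR ANDs its configured value into what CPUID
 * reports: a set mask bit passes the hardware feature through, while a
 * clear bit hides it from every subsequent CPUID execution.
 */
static struct feature_leaf apply_mask(struct feature_leaf hw,
                                      uint32_t mask_ecx, uint32_t mask_edx)
{
    struct feature_leaf seen = {
        .ecx = hw.ecx & mask_ecx,
        .edx = hw.edx & mask_edx,
    };
    return seen;
}
```

Note that a mask can only hide features; it can never advertise a feature
the hardware lacks, which is exactly the property levelling needs.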

Having said that, for PV guests particularly, there are features which
might be visible but which they cannot possibly use.  As a result, Xen can
get away with hiding fewer features where it knows the guest could not use
them anyway.


How Xen currently uses and exposes levelling support
====================================================

Libxc has a `CPUID` Policy API which can be set by the toolstack for a
domain.
Libxc performs some information gathering, and uses the `DOMCTL_set_cpuid`
hypercall to specify what information should be returned by Xen when the
domain requests specific `CPUID` leaves.
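Conceptually, the uploaded policy is a per-domain table mapping requested
leaves to the values Xen should report.  A much-simplified sketch (the
structure layout, names and fixed table size are hypothetical, not the real
`DOMCTL_set_cpuid` interface):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define MAX_POLICY_LEAVES 8

/* One uploaded leaf: the EAX input and the values to report for it. */
struct cpuid_leaf {
    uint32_t input;
    uint32_t eax, ebx, ecx, edx;
};

/* Hypothetical, simplified per-domain CPUID policy. */
struct cpuid_policy {
    struct cpuid_leaf leaves[MAX_POLICY_LEAVES];
    size_t nr;
};

/* Record what a domain should see for one leaf (toolstack side). */
static bool policy_set_leaf(struct cpuid_policy *p, struct cpuid_leaf l)
{
    if ( p->nr >= MAX_POLICY_LEAVES )
        return false;
    p->leaves[p->nr++] = l;
    return true;
}

/* Service a guest CPUID: return the policy entry for a leaf, if any. */
static bool policy_lookup(const struct cpuid_policy *p, uint32_t input,
                          struct cpuid_leaf *out)
{
    for ( size_t i = 0; i < p->nr; ++i )
        if ( p->leaves[i].input == input )
        {
            *out = p->leaves[i];
            return true;
        }
    return false;
}
```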

The user of the libxc `CPUID` Policy API may specify, for any leaf
whatsoever, whether particular bits should be forced high, forced low,
default (as chosen by libxc), specifically the same as hardware, or
specifically the same as hardware and maintained consistently across
migration.

The default `CPUID` Policy involves libxc trying to work out which features
should be set or cleared in the policy.  It does this with a mixture of
native
`CPUID` instructions, some switch statements choosing to enable/disable
certain features and hypercalls querying certain Xen state.

When Xen is servicing a `CPUID` instruction on behalf of a guest and ends up
using the policy provided by libxc, it subsequently edits certain fields,
particularly in the feature sets.

Support for the feature masking MSRs is available via the five command line
parameters `cpuid_mask_({,extd_}{ecx,edx}|xsave_eax)`, which get applied at
boot and reduce the feature set visible to every subsequent `CPUID`
instruction.

Support for _CPUID Faulting_ exists, but only insofar as having the same
effect as the masking MSRs would provide.

How XenServer currently does levelling
======================================

The _Heterogeneous Pool Levelling_ support in XenServer appears to
predate the
libxc CPUID policy API, so does not currently use it.  The toolstack has a
table of CPU model numbers identifying whether levelling is supported.  It
then uses native `CPUID` instructions to look at the first four feature
masks,
and identifies the subset of features across the pool.
`cpuid_mask_{,extd_}{ecx,edx}` is then set on Xen's command line for
each host
in the pool, and all hosts rebooted.

This has several limitations:

* Xen and dom0 have a reduced feature set despite not needing to migrate
* There is only a single level for all VMs in the pool
* The toolstack only understands 4 of the 5 possible masking MSRs, and there
  are now feature maps in further `CPUID` leaves which have no masking MSRs


Proposal for new implementation
===============================

Experimentally, the masking MSRs can be context switched.  There is no
need to
force all PV guests to the same level, and no need to prevent dom0 or
Xen from
using certain features.

The toolstack needs to know how much control Xen has over VM features. 
In the
case that there are insufficient masking MSRs, and no faulting support is
present, a PV VM can still potentially be made safe to migrate by explicitly
disabling features on the kernel command line.  As a result, there
should be a
new mechanism which reports the levelling controls Xen has available.

The features available to each type of guest are really only known to Xen.
Having libxc try to divine them is bogus (especially as libxc is itself
subject to the toolstack domain's CPUID policy).  Therefore, on boot, Xen
should work out the maximal feature set available to each type of guest and
make this information available to the toolstack.

`struct sysctl_physinfo.levelling_caps`
---------------------------------------

A bitmap field.  This is to inform a toolstack what Xen is capable of in
terms
of levelling.  Bits reported include:

* `faulting`
* `mask_ecx`
* `mask_edx`
* `mask_extd_ecx`
* `mask_extd_edx`
* `mask_xsave_eax`

_It is probably better to extend `sysctl_physinfo` in preference to
introducing a new hypercall to return a word with a few bits set._
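One possible encoding of this bitmap, with bit positions chosen purely for
illustration (the `LCAP_*` names and assignments are hypothetical, not part
of any existing interface):

```c
#include <stdint.h>

/* Hypothetical bit assignments for sysctl_physinfo.levelling_caps,
 * mirroring the capability list above. */
#define LCAP_faulting        (1u << 0)
#define LCAP_mask_ecx        (1u << 1)
#define LCAP_mask_edx        (1u << 2)
#define LCAP_mask_extd_ecx   (1u << 3)
#define LCAP_mask_extd_edx   (1u << 4)
#define LCAP_mask_xsave_eax  (1u << 5)

/*
 * A toolstack deciding how to level a PV guest: faulting gives full
 * control over every leaf; otherwise it must fall back to whatever mask
 * MSRs exist (and possibly to kernel command line options for features
 * with no masking MSR at all).
 */
static int pv_fully_controllable(uint32_t caps)
{
    return (caps & LCAP_faulting) != 0;
}
```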

Improvements to `XEN_DOMCTL_set_cpuid`
--------------------------------------

The `XEN_DOMCTL_set_cpuid` hypercall is too lax at validating its input,
which results in further validation being needed, scattered across the Xen
code.  In particular, it should not be possible to set feature bits which
are blatantly untrue.

* Feature bitmaps should be strictly checked against Xen's maximal set for a
  domain.
* Leaves should be checked against `max{,_extd}_eax`.  `libxc` currently
sets
  the leaves in a suitable order for this restriction to be enforced.
* Xen should calculate a domain's feature masking MSRs from uploaded leaves,
  which prevents the toolstack from needing to special-case `CPUID` masking
  vs faulting based on host support.
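The first check above is a simple subset test: every feature bit a
toolstack uploads must already be present in Xen's maximal set for that
guest type.  A sketch of that predicate (the function name is
hypothetical):

```c
#include <stdint.h>
#include <stdbool.h>

/*
 * Stricter DOMCTL_set_cpuid-style validation: a toolstack-supplied
 * feature bitmap is acceptable only if it is a subset of the maximal
 * set Xen calculated for this kind of guest.
 */
static bool features_valid(uint32_t requested, uint32_t xen_maximal)
{
    /* Any bit set in 'requested' but clear in 'xen_maximal' is bogus. */
    return (requested & ~xen_maximal) == 0;
}
```

Rejecting bogus bits at upload time is what removes the need for the
scattered checks at `CPUID`-servicing time.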

Lazy context switching of VCPU masking MSRs
-------------------------------------------

Domains having different sets of features is an important flexibility. This
requires tracking and properly context switching the MSRs on vcpu context
switches, in the case that _CPUID faulting_ is not available.

At boot, Xen shall determine which masking MSRs are available as part of
calculating `sysctl_physinfo.levelling_caps`.  All domain masks
(including the
idle domain) default to `~0`, and for PV guests (when _faulting_ is not
available) can be reduced by setting the policy.  Updates to a domain's
masks
must never be able to exceed the equivalent mask in the idle domain.

The context switch code shall lazily update the masking MSRs when context
switching between VCPUs.
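"Lazily" here means the (expensive) `wrmsr` is only issued when the
incoming vcpu's mask differs from what the hardware currently holds, so
switching between vcpus with identical masks (e.g. all at `~0`) costs no
MSR writes at all.  A sketch under those assumptions (the structure and
function names are hypothetical, and the MSR write is simulated):

```c
#include <stdint.h>

/* Hypothetical per-pCPU cache of one masking MSR's current value. */
struct mask_msr_state {
    uint32_t current;     /* Value last written to the hardware MSR. */
    unsigned int writes;  /* Count of real MSR writes, for illustration. */
};

/* Stand-in for wrmsr(); a real implementation would touch hardware. */
static void write_mask_msr(struct mask_msr_state *s, uint32_t val)
{
    s->current = val;
    s->writes++;
}

/*
 * Lazy update on vcpu context switch: skip the MSR write entirely when
 * the next vcpu's mask matches what the hardware already holds.
 */
static void lazy_switch_mask(struct mask_msr_state *s, uint32_t next_mask)
{
    if ( s->current != next_mask )
        write_mask_msr(s, next_mask);
}
```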

Deprecation of `cpuid_mask_*` command line parameters
-----------------------------------------------------

The presence of these masking MSRs is already intermittent, and they are
starting to disappear from more modern hardware.  With feature levelling
being properly configurable via the improvements presented here, there is
no real justification for using the command line parameters.  Features
which need hiding from Xen or dom0 should be hidden using the appropriate
feature-specific command line parameters.

Attempted use of these command line parameters should emit a deprecation
warning, but continue to work as a host-wide lowering of features.  It shall
continue to work by lowering the idle domain's masks.

`XEN_SYSCTL_get_domain_cpuid_policy`
------------------------------------

Get the Xen-calculated default CPUID policy for PV and HVM domains.  This is
needed by toolstacks to calculate how to level the VM features for safe
migration.

_This is a SYSCTL rather than a DOMCTL as it is system-specific information
referring to types of domains, rather than per-domain information.  On the
other hand, it could probably just be another set of hw_caps and forgo
introducing a new hypercall - I am open to suggestions as to the best
method of reporting this information._


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 16:47:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 16:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFRLE-0005oh-T8; Mon, 17 Feb 2014 16:46:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFRLD-0005oc-7V
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 16:46:47 +0000
Received: from [85.158.143.35:20406] by server-1.bemta-4.messagelabs.com id
	68/BF-31661-6FC32035; Mon, 17 Feb 2014 16:46:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392655605!6309218!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 592 invoked from network); 17 Feb 2014 16:46:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 17 Feb 2014 16:46:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 17 Feb 2014 16:46:45 +0000
Message-Id: <53024B01020000780011CF81@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 17 Feb 2014 16:46:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <53023729.7020009@citrix.com>
In-Reply-To: <53023729.7020009@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	Tim Deegan <tim@xen.org>, Xen-devel List <xen-devel@lists.xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 17:22, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> How XenServer currently does levelling
> ======================================
> 
> The _Heterogeneous Pool Levelling_ support in XenServer appears to
> predate the
> libxc CPUID policy API, so does not currently use it.  The toolstack has a
> table of CPU model numbers identifying whether levelling is supported.  It
> then uses native `CPUID` instructions to look at the first four feature
> masks,
> and identifies the subset of features across the pool.
> `cpuid_mask_{,extd_}{ecx,edx}` is then set on Xen's command line for
> each host
> in the pool, and all hosts rebooted.
> 
> This has several limitations:
> 
> * Xen and dom0 have a reduced feature set despite not needing to migrate

Xen, at least for most features, doesn't, as it retrieves the feature
flags before applying the mask. Dom0 indeed is being limited without
need.

> * There is only a single level for all VMs in the pool
> * The toolstack only understands 4 of the 5 possible masking MSRs, and there
>   are now feature maps in further `CPUID` leaves which have no masking MSRs
> 
> 
> Proposal for new implementation
> ===============================

Sounds reasonable, but is of course in need of some details when
getting closer to actually implementing this. I'm in particular not
in favor of an approach where three more MSR writes would be
added to the (PV) context switch path (mostly) unconditionally.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 16:49:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 16:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFRO6-0005wj-Fu; Mon, 17 Feb 2014 16:49:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFRO4-0005wc-Kl
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 16:49:44 +0000
Received: from [85.158.139.211:44636] by server-16.bemta-5.messagelabs.com id
	BC/6F-05060-7AD32035; Mon, 17 Feb 2014 16:49:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392655781!4477343!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16498 invoked from network); 17 Feb 2014 16:49:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 16:49:43 -0000
X-IronPort-AV: E=Sophos;i="4.95,861,1384300800"; d="scan'208";a="101479243"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 16:49:41 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 11:49:40 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFRO0-0005OK-Jq;
	Mon, 17 Feb 2014 16:49:40 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFRNy-0004nD-RQ;
	Mon, 17 Feb 2014 16:49:38 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21250.15776.972814.117850@mariner.uk.xensource.com>
Date: Mon, 17 Feb 2014 16:49:36 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <5302160B.70601@eu.citrix.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: Proposed force push of staging to master"):
> On 02/17/2014 12:08 PM, Ian Jackson wrote:
> > So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> > xen.git#master and call it RC4.  Comments welcome.
> 
> Thanks for the analysis.  This seems like a good plan.

I have done this (RC4 is tagged, tarballs are in production).

I also had to force push the change below to xen.git#master.

Can I request that we don't change this back to say "master" until we
are done with 4.4.0 ?  Either way we have to update Config.mk with new
qemu upstream versions, but if we set this to "master" in between RCs,
I end up having to do it as a force push in the middle of the RC
production which is out-of-course, error-prone, and suboptimal.

It is IMO better to put a commit hash in QEMU_UPSTREAM_REVISION in
Config.mk (updated when the qemu-upstream tree has passed its push
gate).

That is I think the best workflow is:
  * make a change to staging/qemu-upstream-unstable.git
  * wait for push gate to put it in qemu-upstream-unstable.git
  * make change to xen.git#staging to update QEMU_UPSTREAM_REVISION
    to new commit hash
  * whatever is in xen.git#master is what gets called rcN
    (ie we tag xen.git and */qemu-upstream-unstable.git with the rcN
     tags, but we don't use the actual tag name in Config.mk)

Thanks,
Ian.

>From b7319350278d0220febc8a7dc8be8e8d41b0abd2 Mon Sep 17 00:00:00 2001
From: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 16:33:48 +0000
Subject: [PATCH] Update QEMU_UPSTREAM_REVISION for 4.4.0-rc4

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Config.mk |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index 1e034f7..a6cd2e3 100644
--- a/Config.mk
+++ b/Config.mk
@@ -234,7 +234,7 @@ QEMU_UPSTREAM_URL ?= git://xenbits.xen.org/qemu-upstream-unstable.git
 SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
 endif
 OVMF_UPSTREAM_REVISION ?= 447d264115c476142f884af0be287622cd244423
-QEMU_UPSTREAM_REVISION ?= master
+QEMU_UPSTREAM_REVISION ?= qemu-xen-4.4.0-rc4
 SEABIOS_UPSTREAM_TAG ?= rel-1.7.3.1
 # Fri Aug 2 14:12:09 2013 -0400
 # Fix bug in CBFS file walking with compressed files.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:37:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFS7T-0006H7-EC; Mon, 17 Feb 2014 17:36:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1WFS7R-0006H2-Dh
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:36:37 +0000
Received: from [85.158.137.68:7192] by server-9.bemta-3.messagelabs.com id
	EC/3F-10184-4A842035; Mon, 17 Feb 2014 17:36:36 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392658595!2429050!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28177 invoked from network); 17 Feb 2014 17:36:35 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:36:35 -0000
Received: by mail-ea0-f169.google.com with SMTP id h10so7427398eak.14
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 09:36:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:content-type:mime-version; 
	bh=sA+m7LUcCQtQGG1HZtju3bP69rd/kzfiusnIXGkUciY=;
	b=QYEljQ6EHCkWD0hWjJfBDmFcXLEIBtlEcvIUtDML32TbEFvCeIq0T2VLpUkXBue4tx
	S7PqOwDSbjKCqrVrSiMzG0HaS6fXFUdV3PvJ8qRFAaY3LfQNwU9Y8iiiWofK/p/2kRvW
	/qzvltZvdjwri0t5QFYT3Sipy5/jfkFsprBCAiywssjmdehf0C9C6ajwepTDRQZspSyc
	Ha2eiIQk6MaR/2avwrnIfeT26t1IbyDBFJY/mIxHN0EeM1a1oEmoexkdjkOL8fEh1+T/
	9aFdpo0Ni90q9Zi0BWQkgQUzvEUfGiuBQsOv9mOdklM/i/e9eCAtneF+AxFSjYz4b2Lt
	OcvQ==
X-Received: by 10.14.93.199 with SMTP id l47mr3713049eef.58.1392658595598;
	Mon, 17 Feb 2014 09:36:35 -0800 (PST)
Received: from [192.168.0.40] (ip-183-225.sn1.eutelia.it. [62.94.183.225])
	by mx.google.com with ESMTPSA id
	j41sm59905418eey.15.2014.02.17.09.36.27 for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Mon, 17 Feb 2014 09:36:29 -0800 (PST)
Message-ID: <1392658580.32038.446.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: M A Young <m.a.young@durham.ac.uk>
Date: Mon, 17 Feb 2014 18:36:20 +0100
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20)
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.4 RC4 out... TestDay tomorrow...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6768201977641995376=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6768201977641995376==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-6RO7/TwIhfJdvKZyqdG2"


--=-6RO7/TwIhfJdvKZyqdG2
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hey Michael,

As usual, I'm asking whether you'd be up for preparing a temporary
build, to facilitate using Fedora as a platform for Xen 4.4 RC4 test
day.

Instructions for the Test Day are here:
http://wiki.xen.org/wiki/Xen_4.4_RC4_test_instructions

Tarball is here:
http://bits.xensource.com/oss-xen/release/4.4.0-rc4/xen-4.4.0-rc4.tar.gz
http://bits.xensource.com/oss-xen/release/4.4.0-rc4/xen-4.4.0-rc4.tar.gz.sig

As usual, sorry for the short notice... But there's really little we
could do about it; RC4 has just been tagged! :-D

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-6RO7/TwIhfJdvKZyqdG2
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMCSJQACgkQk4XaBE3IOsRtmwCeJFnIBOyhnTdZb94D7Ib6DxMP
klAAn3pQyIjhbG5coj221tHF/B3hmr1V
=/mHY
-----END PGP SIGNATURE-----

--=-6RO7/TwIhfJdvKZyqdG2--



--===============6768201977641995376==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6768201977641995376==--



From xen-devel-bounces@lists.xen.org Mon Feb 17 17:51:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSM4-0006Sj-1p; Mon, 17 Feb 2014 17:51:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSM2-0006SX-SU
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:51:42 +0000
Received: from [85.158.139.211:2241] by server-6.bemta-5.messagelabs.com id
	ED/46-14342-D2C42035; Mon, 17 Feb 2014 17:51:41 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392659499!4447694!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15734 invoked from network); 17 Feb 2014 17:51:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:51:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103236086"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 17:51:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:51:10 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WFSFx-00045p-TA;
	Mon, 17 Feb 2014 17:45:25 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 17:45:15 +0000
Message-ID: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv1 0/3] xen/events: remove some
	unused/unnecessary code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove some unused and unnecessary event channel code.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:51:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:51:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSM5-0006Sq-Dr; Mon, 17 Feb 2014 17:51:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSM3-0006Sc-Op
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:51:43 +0000
Received: from [85.158.139.211:36244] by server-1.bemta-5.messagelabs.com id
	FB/31-12859-E2C42035; Mon, 17 Feb 2014 17:51:42 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392659499!4447694!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15783 invoked from network); 17 Feb 2014 17:51:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:51:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103236093"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 17:51:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:51:12 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WFSFx-00045p-VP;
	Mon, 17 Feb 2014 17:45:25 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 17:45:17 +0000
Message-ID: <1392659118-32593-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
References: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 2/3] xen/events: remove unnecessary call to
	bind_evtchn_to_cpu()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Since bind_evtchn_to_cpu() is always called after an event channel is
bound, there is no need to call it after closing an event channel.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events/events_base.c |    4 ----
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index dca101a..72898c7 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -469,9 +469,6 @@ static void xen_evtchn_close(unsigned int port)
 	close.port = port;
 	if (HYPERVISOR_event_channel_op(EVTCHNOP_close, &close) != 0)
 		BUG();
-
-	/* Closed ports are implicitly re-bound to VCPU0. */
-	bind_evtchn_to_cpu(port, 0);
 }
 
 static void pirq_query_unmask(int irq)
@@ -1003,7 +1000,6 @@ int bind_virq_to_irq(unsigned int virq, unsigned int cpu)
 			irq = ret;
 			goto out;
 		}
-
 		bind_evtchn_to_cpu(evtchn, cpu);
 	} else {
 		struct irq_info *info = info_for_irq(irq);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSMU-0006WU-71; Mon, 17 Feb 2014 17:52:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFSMS-0006Vw-IL
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:52:08 +0000
Received: from [85.158.143.35:21639] by server-3.bemta-4.messagelabs.com id
	6F/40-11539-74C42035; Mon, 17 Feb 2014 17:52:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392659525!6329023!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31168 invoked from network); 17 Feb 2014 17:52:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:52:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101498584"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 17:52:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:52:04 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFS8w-0003yP-8o;
	Mon, 17 Feb 2014 17:38:10 +0000
Message-ID: <53024901.2000000@citrix.com>
Date: Mon, 17 Feb 2014 17:38:09 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <53023729.7020009@citrix.com>
	<53024B01020000780011CF81@nat28.tlf.novell.com>
In-Reply-To: <53024B01020000780011CF81@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	Tim Deegan <tim@xen.org>, Xen-devel List <xen-devel@lists.xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 16:46, Jan Beulich wrote:
>>>> On 17.02.14 at 17:22, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> How XenServer currently does levelling
>> ======================================
>>
>> The _Heterogeneous Pool Levelling_ support in XenServer appears to
>> predate the
>> libxc CPUID policy API, so does not currently use it.  The toolstack has a
>> table of CPU model numbers identifying whether levelling is supported.  It
>> then uses native `CPUID` instructions to look at the first four feature
>> masks,
>> and identifies the subset of features across the pool.
>> `cpuid_mask_{,extd_}{ecx,edx}` is then set on Xen's command line for
>> each host
>> in the pool, and all hosts rebooted.
>>
>> This has several limitations:
>>
>> * Xen and dom0 have a reduced feature set despite not needing to migrate
> Xen, at least for most features, doesn't, as it retrieves the feature
> flags before applying the mask. Dom0 indeed is being limited without
> need.

I should have worded this better.  In XenServer there are further
restrictions to Xen, mainly in the form of default command line options,
to work around PV guest bugs.  This is purely because of a lack of
per-VM feature levelling, and I am hoping to throw all of it away as
soon as a better implementation exists.

Logic such as "To boot the Ubuntu 12.04 installer on an AMD
Piledriver/Bulldozer system, XSAVE and FMA4 must be hidden until the
guest admin has updated to the latest kernel and glibc" can then be
moved into the toolstack, rather than having to be blindly applied to
the entire system.  (This doesn't actually matter yet: the latest
release of XenServer is still a Xen 4.1-based system, which pre-dates
XSAVE support working correctly in Xen, but it is quite important to fix
before our next release.)


>
>> * There is only a single level for all VMs in the pool
>> * The toolstack only understands 4 of the 5 possible masking MSRs, and there
>>   are now feature maps in further `CPUID` leaves which have no masking MSRs
>>
>>
>> Proposal for new implementation
>> ===============================
> Sounds reasonable, but is of course in need of some details when
> getting closer to actually implementing this. I'm in particular not
> in favor of an approach where three more MSR writes would be
> added to the (PV) context switch path (mostly) unconditionally.
>
> Jan
>

If there are no particular objections to the proposed design, I shall
work on a patch series which implements it, and documents its expected use.

I am also fairly loath to put more into the context switch codepath, but
I can see no other way of doing per-VM feature levelling for PV guests.
In the hopefully common case that no masking is needed, the MSRs will be
written once on the first switch and never again, at which point the
overhead is just a few failed conditions.

It is obviously in the toolstack's best interest not to set different
feature masks for each PV domain, and having dom0, the idle domain and
all HVM domains share the same mask will reduce the switching somewhat,
but correctness in this area, to aid safe migration, is crucial.

I am open to alternate suggestions, which is why this is just a proposal
at this stage.  However, as I said, I can't see another way of doing
per-VM feature levelling.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSMU-0006X4-Oy; Mon, 17 Feb 2014 17:52:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSMS-0006Vx-NR
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:52:08 +0000
Received: from [85.158.139.211:43836] by server-14.bemta-5.messagelabs.com id
	20/84-27598-74C42035; Mon, 17 Feb 2014 17:52:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392659525!4484146!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24755 invoked from network); 17 Feb 2014 17:52:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:52:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101498096"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 17:51:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:51:07 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WFSFx-00045p-Ux;
	Mon, 17 Feb 2014 17:45:25 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 17:45:16 +0000
Message-ID: <1392659118-32593-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
References: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 1/3] xen/events: remove the unused
	resend_irq_on_evtchn()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

resend_irq_on_evtchn() was only used by ia64 (which no longer has Xen
support).

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/events/events_base.c |   33 ++++++++++++---------------------
 include/xen/events.h             |    1 -
 2 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index f4a9e33..dca101a 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1344,26 +1344,6 @@ static int set_affinity_irq(struct irq_data *data, const struct cpumask *dest,
 	return rebind_irq_to_cpu(data->irq, tcpu);
 }
 
-static int retrigger_evtchn(int evtchn)
-{
-	int masked;
-
-	if (!VALID_EVTCHN(evtchn))
-		return 0;
-
-	masked = test_and_set_mask(evtchn);
-	set_evtchn(evtchn);
-	if (!masked)
-		unmask_evtchn(evtchn);
-
-	return 1;
-}
-
-int resend_irq_on_evtchn(unsigned int irq)
-{
-	return retrigger_evtchn(evtchn_from_irq(irq));
-}
-
 static void enable_dynirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -1398,7 +1378,18 @@ static void mask_ack_dynirq(struct irq_data *data)
 
 static int retrigger_dynirq(struct irq_data *data)
 {
-	return retrigger_evtchn(evtchn_from_irq(data->irq));
+	unsigned int evtchn = evtchn_from_irq(data->irq);
+	int masked;
+
+	if (!VALID_EVTCHN(evtchn))
+		return 0;
+
+	masked = test_and_set_mask(evtchn);
+	set_evtchn(evtchn);
+	if (!masked)
+		unmask_evtchn(evtchn);
+
+	return 1;
 }
 
 static void restore_pirqs(void)
diff --git a/include/xen/events.h b/include/xen/events.h
index c9c85cf..a6d9237 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -52,7 +52,6 @@ int evtchn_get(unsigned int evtchn);
 void evtchn_put(unsigned int evtchn);
 
 void xen_send_IPI_one(unsigned int cpu, enum ipi_vector vector);
-int resend_irq_on_evtchn(unsigned int irq);
 void rebind_evtchn_irq(int evtchn, int irq);
 
 static inline void notify_remote_via_evtchn(int port)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSMT-0006WG-RW; Mon, 17 Feb 2014 17:52:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSMR-0006Vf-T7
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:52:08 +0000
Received: from [85.158.143.35:31176] by server-1.bemta-4.messagelabs.com id
	92/10-31661-74C42035; Mon, 17 Feb 2014 17:52:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392659525!6329023!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31083 invoked from network); 17 Feb 2014 17:52:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:52:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101498081"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 17:51:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:51:05 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WFSFx-00045p-Vr;
	Mon, 17 Feb 2014 17:45:25 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 17 Feb 2014 17:45:18 +0000
Message-ID: <1392659118-32593-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
References: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCH 3/3] xen/xenbus: remove unused
	xenbus_bind_evtchn()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

xenbus_bind_evtchn() has no callers so remove it.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 drivers/xen/xenbus/xenbus_client.c |   27 ---------------------------
 include/xen/xenbus.h               |    1 -
 2 files changed, 0 insertions(+), 28 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 01d59e6..439c9dc 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -401,33 +401,6 @@ EXPORT_SYMBOL_GPL(xenbus_alloc_evtchn);
 
 
 /**
- * Bind to an existing interdomain event channel in another domain. Returns 0
- * on success and stores the local port in *port. On error, returns -errno,
- * switches the device to XenbusStateClosing, and saves the error in XenStore.
- */
-int xenbus_bind_evtchn(struct xenbus_device *dev, int remote_port, int *port)
-{
-	struct evtchn_bind_interdomain bind_interdomain;
-	int err;
-
-	bind_interdomain.remote_dom = dev->otherend_id;
-	bind_interdomain.remote_port = remote_port;
-
-	err = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
-					  &bind_interdomain);
-	if (err)
-		xenbus_dev_fatal(dev, err,
-				 "binding to event channel %d from domain %d",
-				 remote_port, dev->otherend_id);
-	else
-		*port = bind_interdomain.local_port;
-
-	return err;
-}
-EXPORT_SYMBOL_GPL(xenbus_bind_evtchn);
-
-
-/**
  * Free an existing event channel. Returns 0 on success or -errno on error.
  */
 int xenbus_free_evtchn(struct xenbus_device *dev, int port)
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 569c07f..0324c6d 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -207,7 +207,6 @@ int xenbus_unmap_ring(struct xenbus_device *dev,
 		      grant_handle_t handle, void *vaddr);
 
 int xenbus_alloc_evtchn(struct xenbus_device *dev, int *port);
-int xenbus_bind_evtchn(struct xenbus_device *dev, int remote_port, int *port);
 int xenbus_free_evtchn(struct xenbus_device *dev, int port);
 
 enum xenbus_state xenbus_read_driver_state(const char *path);
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:52:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSMU-0006WU-71; Mon, 17 Feb 2014 17:52:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFSMS-0006Vw-IL
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:52:08 +0000
Received: from [85.158.143.35:21639] by server-3.bemta-4.messagelabs.com id
	6F/40-11539-74C42035; Mon, 17 Feb 2014 17:52:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392659525!6329023!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31168 invoked from network); 17 Feb 2014 17:52:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:52:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101498584"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 17:52:05 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:52:04 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFS8w-0003yP-8o;
	Mon, 17 Feb 2014 17:38:10 +0000
Message-ID: <53024901.2000000@citrix.com>
Date: Mon, 17 Feb 2014 17:38:09 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <53023729.7020009@citrix.com>
	<53024B01020000780011CF81@nat28.tlf.novell.com>
In-Reply-To: <53024B01020000780011CF81@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	Tim Deegan <tim@xen.org>, Xen-devel List <xen-devel@lists.xen.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	BorisOstrovsky <boris.ostrovsky@oracle.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 16:46, Jan Beulich wrote:
>>>> On 17.02.14 at 17:22, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> How XenServer currently does levelling
>> ======================================
>>
>> The _Heterogeneous Pool Levelling_ support in XenServer appears to
>> predate the
>> libxc CPUID policy API, so does not currently use it.  The toolstack has a
>> table of CPU model numbers identifying whether levelling is supported.  It
>> then uses native `CPUID` instructions to look at the first four feature
>> masks,
>> and identifies the subset of features across the pool.
>> `cpuid_mask_{,extd_}{ecx,edx}` is then set on Xen's command line for
>> each host
>> in the pool, and all hosts rebooted.
>>
>> This has several limitations:
>>
>> * Xen and dom0 have a reduced feature set despite not needing to migrate
> Xen, at least for most features, doesn't, as it retrieves the feature
> flags before applying the mask. Dom0 indeed is being limited without
> need.

I should have worded this better.  In XenServer there are further
restrictions to Xen, mainly in the form of default command line options,
to work around PV guest bugs.  This is purely because of a lack of
per-VM feature levelling, and I am hoping to throw all of it away as
soon as a better implementation exists.

Logic such as "To boot the Ubuntu 12.04 installer on an AMD
Piledriver/Bulldozer system, XSAVE and FMA4 must be hidden until the
guest admin has updated to the latest kernel and glibc" can then be
moved into the toolstack, rather than being blindly applied to the
entire system.  (This doesn't actually matter yet, as the latest release
of XenServer is still a Xen 4.1 based system which pre-dates XSAVE
support working correctly in Xen, but it is quite important to fix
before our next release.)


>
>> * There is only a single level for all VMs in the pool
>> * The toolstack only understands 4 of the 5 possible masking MSRs, and there
>>   are now feature maps in further `CPUID` leaves which have no masking MSRs
>>
>>
>> Proposal for new implementation
>> ===============================
> Sounds reasonable, but is of course in need of some details when
> getting closer to actually implementing this. I'm in particular not
> in favor of an approach where three more MSR writes would be
> added to the (PV) context switch path (mostly) unconditionally.
>
> Jan
>

If there are no particular objections to the proposed design, I shall
work on a patch series which implements it, and documents its expected use.

I am also fairly loath to put more into the context switch codepath, but
I can see no other way of doing per-VM feature levelling for PV guests.
In the hopefully common case that no masking is needed, then the MSRs
will be written once on the first switch, then never again, at which
point the overhead is a few failed conditions.

It is obviously in the toolstack's best interest not to set different
feature masks for each PV domain, and having dom0, the idle domain and
all HVM domains share the same mask will reduce the switching somewhat,
but correctness in this area, to aid safe migration, is crucial.

I am open to alternate suggestions, which is why this is just a proposal
at this stage.  However, as I said, I can't see another way of doing
per-VM feature levelling.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:52:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSNC-0006lG-7S; Mon, 17 Feb 2014 17:52:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WFSNA-0006kc-7U
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 17:52:52 +0000
Received: from [85.158.139.211:9358] by server-16.bemta-5.messagelabs.com id
	61/79-05060-37C42035; Mon, 17 Feb 2014 17:52:51 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392659569!4489166!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23706 invoked from network); 17 Feb 2014 17:52:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 17:52:51 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101498693"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 17:52:26 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 12:52:25 -0500
Message-ID: <53024C58.4010900@citrix.com>
Date: Mon, 17 Feb 2014 17:52:24 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>, <netdev@vger.kernel.org>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
In-Reply-To: <1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	bridge@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/02/14 02:59, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>
> It doesn't make sense for some interfaces to become a root bridge
> at any point in time. One example is virtual backend interfaces
> which rely on other entities on the bridge for actual physical
> connectivity. They only provide virtual access.

It is possible that a guest bridges together two VIFs, either from the 
same Dom0 bridge or from different ones. In that case using STP on VIFs 
sounds sensible to me.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 15/02/14 02:59, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>
> It doesn't make sense for some interfaces to become a root bridge
> at any point in time. One example is virtual backend interfaces
> which rely on other entities on the bridge for actual physical
> connectivity. They only provide virtual access.

It is possible that a guest bridges together two VIFs, either from the same 
Dom0 bridge or from different ones. In that case using STP on VIFs sounds 
sensible to me.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP2-00076S-OP; Mon, 17 Feb 2014 17:54:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP1-00075o-4c
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:47 +0000
Received: from [85.158.137.68:38374] by server-6.bemta-3.messagelabs.com id
	1F/0C-09180-6EC42035; Mon, 17 Feb 2014 17:54:46 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392659684!2448845!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31433 invoked from network); 17 Feb 2014 17:54:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsddi003422
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHscM2027568
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:39 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsc68027553; Mon, 17 Feb 2014 17:54:38 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:38 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:47 -0500
Message-Id: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 00/17] x86/PMU: Xen PMU PV(H) support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the fifth version of PV(H) PMU patches.


Changes in v5:

* Dropped patch number 2 ("Stop AMD counters when called from vpmu_save_force()")
  as no longer needed
* Added patch number 2 that marks context as loaded before PMU registers are
  loaded. This prevents situation where a PMU interrupt may occur while context
  is still viewed as not loaded. (This is really a bug fix for existing VPMU
  code)
* Renamed xenpmu.h files to pmu.h
* More careful use of is_pv_domain(), is_hvm_domain(), is_pvh_domain() and
  has_hvm_container_domain(). Also explicitly disabled support for PVH until
  patch 16 to make distinction between usage of the above macros more clear.
* Added support for disabling VPMU support during runtime.
* Disable VPMUs for non-privileged domains when switching to privileged
  profiling mode
* Added ARM stub for xen_arch_pmu_t
* Separated vpmu_mode from vpmu_features
* Moved CS register query to make sure we use appropriate query mechanism for
  various guest types.
* LVTPC is now set from value in shared area, not copied from dom0
* Various code and comments cleanup as suggested by Jan.

Changes in v4:

* Added support for PVH guests:
  o changes in pvpmu_init() to accommodate both PV and PVH guests, still in patch 10
  o more careful use of is_hvm_domain
  o Additional patch (16)
* Moved HVM interrupt handling out of vpmu_do_interrupt() for NMI-safe handling
* Fixed dom0's VCPU selection in privileged mode
* Added a cast in register copy for 32-bit PV guests cpu_user_regs_t in vpmu_do_interrupt.
  (don't want to expose compat_cpu_user_regs in a public header)
* Renamed public structures by prefixing them with "xen_"
* Added an entry for xenpf_symdata in xlat.lst
* Fixed pv_cpuid check for vpmu-specific cpuid adjustments
* Various code style fixes
* Eliminated anonymous unions
* Added more verbiage to NMI patch description


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC apic handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol per hypercall is returned, performance
appears to be acceptable: reading the whole file from dom0 userland takes on average
about twice as long as reading /proc/kallsyms
* More cleanup of Intel VPMU code to simplify publicly exported structures
* There are architecture-independent and x86-specific public include files (ARM
has a stub)
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done on ARM in schedule_tail as well (making
changes to common/schedule.c architecture-independent). Note that this is not
tested since I don't have access to ARM hardware.
* PCPU ID of interrupted processor is now passed to PV guest


The following patch series adds PMU support in Xen for PV(H)
guests. There is a companion patchset for the Linux kernel. In addition,
another set of changes will be provided (later) for userland perf
code.

This version has following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector and
this needs to be fixed separately

A few notes that may help reviewing:

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor
CPU is used for passing registers' values as well as PMU state at the time of
PMU interrupt.
* PMU interrupts are taken by hypervisor either as NMIs or regular vector
interrupts for both HVM and PV(H). The interrupts are sent as NMIs to HVM guests
and as virtual interrupts to PV(H) guests
* PV guest's interrupt handler does not read/write PMU MSRs directly. Instead, it
accesses xenpmu_data_t and flushes it to HW before returning.
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0 only profiling. dom0 collects samples for everyone. Sampling
    in guests is suspended.
* /proc/xen/xensyms file exports hypervisor's symbols to dom0 (similar to
/proc/kallsyms)
* VPMU infrastructure is now used for HVM, PV and PVH and therefore has been moved
up from hvm subtree




Boris Ostrovsky (17):
  common/symbols: Export hypervisor symbols to privileged guest
  VPMU: Mark context LOADED before registers are loaded
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Support for PVH guests
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  15 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/hvm.c                   |   3 +-
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/svm.c               |   6 +-
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
 xen/arch/x86/hvm/vmx/vmx.c               |   6 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  35 +-
 xen/arch/x86/vpmu.c                      | 720 +++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 509 +++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 940 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               |  99 ++++
 xen/include/public/arch-arm.h            |   3 +
 xen/include/public/arch-x86/pmu.h        |  63 +++
 xen/include/public/platform.h            |  16 +
 xen/include/public/pmu.h                 |  99 ++++
 xen/include/public/xen.h                 |   2 +
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   6 +-
 xen/include/xlat.lst                     |   1 +
 40 files changed, 2661 insertions(+), 1875 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/pmu.h
 create mode 100644 xen/include/public/pmu.h

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP3-00076m-LY; Mon, 17 Feb 2014 17:54:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP1-000763-T0
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:48 +0000
Received: from [85.158.143.35:47828] by server-3.bemta-4.messagelabs.com id
	03/E2-11539-7EC42035; Mon, 17 Feb 2014 17:54:47 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392659685!6302935!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4438 invoked from network); 17 Feb 2014 17:54:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:46 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsfJm003451
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsexm001548
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:41 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHsdWp014312; Mon, 17 Feb 2014 17:54:39 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:39 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:48 -0500
Message-Id: <1392659764-22183-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 01/17] common/symbols: Export hypervisor
	symbols to privileged guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export Xen's symbols as {<address><type><name>} triplets via the new
XENPF_get_symbol hypercall

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 16 ++++++++++
 xen/include/xen/symbols.h                |  6 ++--
 xen/include/xlat.lst                     |  1 +
 7 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..0a93037 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, strlen(name)) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..bc83f76 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++*symnum;
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..c5ae187 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -275,7 +275,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 4341f54..dc09b55 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,21 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    XEN_GUEST_HANDLE(char) name;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +568,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..3017449 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,7 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +10,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+int xensyms_read(uint32_t *symnum, uint32_t *type,
+                 uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 8caede6..6e3994d 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -85,6 +85,7 @@
 ?	processor_px			platform.h
 !	psd_package			platform.h
 ?	xenpf_enter_acpi_sleep		platform.h
+!	xenpf_symdata			platform.h
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 !	sched_poll			sched.h
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP4-000777-2x; Mon, 17 Feb 2014 17:54:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP2-00076K-PG
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:49 +0000
Received: from [193.109.254.147:34305] by server-4.bemta-14.messagelabs.com id
	C4/20-32066-8EC42035; Mon, 17 Feb 2014 17:54:48 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392659686!4941967!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23492 invoked from network); 17 Feb 2014 17:54:47 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsg1F003468
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:43 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsf72027683
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsecY029003; Mon, 17 Feb 2014 17:54:40 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:40 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:49 -0500
Message-Id: <1392659764-22183-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 02/17] VPMU: Mark context LOADED before
	registers are loaded
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because a PMU interrupt may be generated as soon as PMU registers are loaded (or,
more precisely, as soon as the HW PMU is "armed"), we don't want to delay marking
the context as LOADED until after the registers are loaded. Otherwise, during
interrupt handling, VPMU_CONTEXT_LOADED may not be set and this could be confusing.

(Technically, only SVM needs this change right now since VMX will "arm" the PMU
later, during VMRUN, when the global control register is loaded from the VMCS.
However, both AMD and Intel code will require this patch when we introduce the
PV VPMU.)

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 2 ++
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 2 ++
 xen/arch/x86/hvm/vpmu.c           | 3 +--
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..3ac7d53 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -203,6 +203,8 @@ static void amd_vpmu_load(struct vcpu *v)
         return;
     }
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     context_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..8aa7cb2 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -369,6 +369,8 @@ static void core2_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     __core2_vpmu_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..63765fa 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -211,10 +211,9 @@ void vpmu_load(struct vcpu *v)
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
     {
         apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
         vpmu->arch_vpmu_ops->arch_vpmu_load(v);
     }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
 }
 
 void vpmu_initialise(struct vcpu *v)
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP2-00076S-OP; Mon, 17 Feb 2014 17:54:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP1-00075o-4c
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:47 +0000
Received: from [85.158.137.68:38374] by server-6.bemta-3.messagelabs.com id
	1F/0C-09180-6EC42035; Mon, 17 Feb 2014 17:54:46 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392659684!2448845!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31433 invoked from network); 17 Feb 2014 17:54:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:45 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsddi003422
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHscM2027568
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:39 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsc68027553; Mon, 17 Feb 2014 17:54:38 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:38 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:47 -0500
Message-Id: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 00/17] x86/PMU: Xen PMU PV(H) support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is the fifth version of PV(H) PMU patches.


Changes in v5:

* Dropped patch number 2 ("Stop AMD counters when called from vpmu_save_force()")
  as no longer needed
* Added patch number 2 that marks context as loaded before PMU registers are
  loaded. This prevents a situation where a PMU interrupt may occur while the
  context is still viewed as not loaded. (This is really a bug fix for existing
  VPMU code)
* Renamed xenpmu.h files to pmu.h
* More careful use of is_pv_domain(), is_hvm_domain(), is_pvh_domain() and
  has_hvm_container_domain(). Also explicitly disabled support for PVH until
  patch 16 to make the distinction between usages of the above macros clearer.
* Added support for disabling VPMU support during runtime.
* Disable VPMUs for non-privileged domains when switching to privileged
  profiling mode
* Added ARM stub for xen_arch_pmu_t
* Separated vpmu_mode from vpmu_features
* Moved CS register query to make sure we use the appropriate query mechanism for
  various guest types.
* LVTPC is now set from value in shared area, not copied from dom0
* Various code and comments cleanup as suggested by Jan.

Changes in v4:

* Added support for PVH guests:
  o changes in pvpmu_init() to accommodate both PV and PVH guests, still in patch 10
  o more careful use of is_hvm_domain
  o Additional patch (16)
* Moved HVM interrupt handling out of vpmu_do_interrupt() for NMI-safe handling
* Fixed dom0's VCPU selection in privileged mode
* Added a cast to cpu_user_regs_t in the register copy for 32-bit PV guests in
  vpmu_do_interrupt. (We don't want to expose compat_cpu_user_regs in a public header.)
* Renamed public structures by prefixing them with "xen_"
* Added an entry for xenpf_symdata in xlat.lst
* Fixed pv_cpuid check for vpmu-specific cpuid adjustments
* Various code style fixes
* Eliminated anonymous unions
* Added more verbiage to NMI patch description


Changes in v3:

* Moved PMU MSR banks out from architectural context data structures to allow
for future expansion without protocol changes
* PMU interrupts can be either NMIs or regular vector interrupts (the latter
is the default)
* Context is now marked as PMU_CACHED by the hypervisor code to avoid certain
race conditions with the guest
* Fixed races with PV guest in MSR access handlers
* More Intel VPMU cleanup
* Moved NMI-unsafe code from NMI handler
* Dropped changes to vcpu->is_running
* Added LVTPC apic handling (cached for PV guests)
* Separated privileged profiling mode into a standalone patch
* Separated NMI handling into a standalone patch


Changes in v2:

* Xen symbols are exported as a data structure (as opposed to a set of formatted
strings in v1). Even though one symbol per hypercall is returned, performance
appears to be acceptable: reading the whole file from dom0 userland takes on
average about twice as long as reading /proc/kallsyms
* More cleanup of Intel VPMU code to simplify publicly exported structures
* There are architecture-independent and x86-specific public include files (ARM
has a stub)
* General cleanup of public include files to make them more presentable (and
to make auto doc generation better)
* Setting of vcpu->is_running is now done on ARM in schedule_tail as well (making
changes to common/schedule.c architecture-independent). Note that this is not
tested since I don't have access to ARM hardware.
* PCPU ID of interrupted processor is now passed to PV guest


The following patch series adds PMU support in Xen for PV(H)
guests. There is a companion patchset for the Linux kernel. In addition,
another set of changes will be provided (later) for the userland perf
code.

This version has following limitations:
* For accurate profiling of dom0/Xen, dom0 VCPUs should be pinned.
* Hypervisor code is only profiled on processors that have running dom0 VCPUs
on them.
* No backtrace support.
* Will fail to load under XSM: we ran out of bits in the permissions vector and
this needs to be fixed separately

A few notes that may help reviewing:

* A shared data structure (xenpmu_data_t) between each PV VCPU and hypervisor
CPU is used for passing register values as well as PMU state at the time of the
PMU interrupt.
* PMU interrupts are taken by the hypervisor either as NMIs or as regular vector
interrupts for both HVM and PV(H). The interrupts are sent as NMIs to HVM guests
and as virtual interrupts to PV(H) guests
* A PV guest's interrupt handler does not read/write PMU MSRs directly. Instead, it
accesses xenpmu_data_t and flushes it to HW before returning.
* PMU mode is controlled at runtime via /sys/hypervisor/pmu/pmu/{pmu_mode,pmu_flags}
in addition to the 'vpmu' boot option (which is preserved for backward compatibility).
The following modes are provided:
  * disable: VPMU is off
  * enable: VPMU is on. Guests can profile themselves, dom0 profiles itself and Xen
  * priv_enable: dom0 only profiling. dom0 collects samples for everyone. Sampling
    in guests is suspended.
* /proc/xen/xensyms file exports hypervisor's symbols to dom0 (similar to
/proc/kallsyms)
* VPMU infrastructure is now used for HVM, PV and PVH and therefore has been moved
up from the hvm subtree




Boris Ostrovsky (17):
  common/symbols: Export hypervisor symbols to privileged guest
  VPMU: Mark context LOADED before registers are loaded
  x86/VPMU: Minor VPMU cleanup
  intel/VPMU: Clean up Intel VPMU code
  x86/VPMU: Handle APIC_LVTPC accesses
  intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL should be initialized to zero
  x86/VPMU: Add public xenpmu.h
  x86/VPMU: Make vpmu not HVM-specific
  x86/VPMU: Interface for setting PMU mode and flags
  x86/VPMU: Initialize PMU for PV guests
  x86/VPMU: Add support for PMU register handling on PV guests
  x86/VPMU: Handle PMU interrupts for PV guests
  x86/VPMU: Add privileged PMU mode
  x86/VPMU: Save VPMU state for PV guests during context switch
  x86/VPMU: NMI-based VPMU support
  x86/VPMU: Support for PVH guests
  x86/VPMU: Move VPMU files up from hvm/ directory

 xen/arch/x86/Makefile                    |   1 +
 xen/arch/x86/domain.c                    |  15 +-
 xen/arch/x86/hvm/Makefile                |   1 -
 xen/arch/x86/hvm/hvm.c                   |   3 +-
 xen/arch/x86/hvm/svm/Makefile            |   1 -
 xen/arch/x86/hvm/svm/svm.c               |   6 +-
 xen/arch/x86/hvm/svm/vpmu.c              | 494 ----------------
 xen/arch/x86/hvm/vlapic.c                |   5 +-
 xen/arch/x86/hvm/vmx/Makefile            |   1 -
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++
 xen/arch/x86/hvm/vmx/vmx.c               |   6 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 931 ------------------------------
 xen/arch/x86/hvm/vpmu.c                  | 266 ---------
 xen/arch/x86/oprofile/op_model_ppro.c    |   8 +-
 xen/arch/x86/platform_hypercall.c        |  18 +
 xen/arch/x86/traps.c                     |  35 +-
 xen/arch/x86/vpmu.c                      | 720 +++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c                  | 509 +++++++++++++++++
 xen/arch/x86/vpmu_intel.c                | 940 +++++++++++++++++++++++++++++++
 xen/arch/x86/x86_64/compat/entry.S       |   4 +
 xen/arch/x86/x86_64/entry.S              |   4 +
 xen/arch/x86/x86_64/platform_hypercall.c |   2 +
 xen/common/event_channel.c               |   1 +
 xen/common/symbols.c                     |  50 +-
 xen/common/vsprintf.c                    |   2 +-
 xen/include/asm-x86/domain.h             |   2 +
 xen/include/asm-x86/hvm/vcpu.h           |   3 -
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   4 +-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  51 --
 xen/include/asm-x86/hvm/vpmu.h           | 104 ----
 xen/include/asm-x86/vpmu.h               |  99 ++++
 xen/include/public/arch-arm.h            |   3 +
 xen/include/public/arch-x86/pmu.h        |  63 +++
 xen/include/public/platform.h            |  16 +
 xen/include/public/pmu.h                 |  99 ++++
 xen/include/public/xen.h                 |   2 +
 xen/include/xen/hypercall.h              |   4 +
 xen/include/xen/softirq.h                |   1 +
 xen/include/xen/symbols.h                |   6 +-
 xen/include/xlat.lst                     |   1 +
 40 files changed, 2661 insertions(+), 1875 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h
 create mode 100644 xen/include/public/arch-x86/pmu.h
 create mode 100644 xen/include/public/pmu.h

-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP3-00076m-LY; Mon, 17 Feb 2014 17:54:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP1-000763-T0
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:48 +0000
Received: from [85.158.143.35:47828] by server-3.bemta-4.messagelabs.com id
	03/E2-11539-7EC42035; Mon, 17 Feb 2014 17:54:47 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392659685!6302935!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4438 invoked from network); 17 Feb 2014 17:54:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:46 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsfJm003451
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsexm001548
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:41 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHsdWp014312; Mon, 17 Feb 2014 17:54:39 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:39 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:48 -0500
Message-Id: <1392659764-22183-2-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 01/17] common/symbols: Export hypervisor
	symbols to privileged guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Export Xen's symbols as {<address><type><name>} triplets via the new
XENPF_get_symbol hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/platform_hypercall.c        | 18 ++++++++++++
 xen/arch/x86/x86_64/platform_hypercall.c |  2 ++
 xen/common/symbols.c                     | 50 +++++++++++++++++++++++++++++++-
 xen/common/vsprintf.c                    |  2 +-
 xen/include/public/platform.h            | 16 ++++++++++
 xen/include/xen/symbols.h                |  6 ++--
 xen/include/xlat.lst                     |  1 +
 7 files changed, 91 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 2162811..0a93037 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -23,6 +23,7 @@
 #include <xen/cpu.h>
 #include <xen/pmstat.h>
 #include <xen/irq.h>
+#include <xen/symbols.h>
 #include <asm/current.h>
 #include <public/platform.h>
 #include <acpi/cpufreq/processor_perf.h>
@@ -601,6 +602,23 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     }
     break;
 
+    case XENPF_get_symbol:
+    {
+        char name[XEN_KSYM_NAME_LEN + 1];
+        XEN_GUEST_HANDLE(char) nameh;
+
+        guest_from_compat_handle(nameh, op->u.symdata.name);
+
+        ret = xensyms_read(&op->u.symdata.symnum, &op->u.symdata.type,
+                           &op->u.symdata.address, name);
+
+        if ( !ret && copy_to_guest(nameh, name, strlen(name)) )
+            ret = -EFAULT;
+        if ( !ret && __copy_field_to_guest(u_xenpf_op, op, u.symdata) )
+            ret = -EFAULT;
+    }
+    break;
+ 
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/arch/x86/x86_64/platform_hypercall.c b/xen/arch/x86/x86_64/platform_hypercall.c
index b6f380e..795837f 100644
--- a/xen/arch/x86/x86_64/platform_hypercall.c
+++ b/xen/arch/x86/x86_64/platform_hypercall.c
@@ -32,6 +32,8 @@ CHECK_pf_pcpu_version;
 CHECK_pf_enter_acpi_sleep;
 #undef xen_pf_enter_acpi_sleep
 
+#define xenpf_symdata   compat_pf_symdata
+
 #define COMPAT
 #define _XEN_GUEST_HANDLE(t) XEN_GUEST_HANDLE(t)
 #define _XEN_GUEST_HANDLE_PARAM(t) XEN_GUEST_HANDLE_PARAM(t)
diff --git a/xen/common/symbols.c b/xen/common/symbols.c
index 45941e1..bc83f76 100644
--- a/xen/common/symbols.c
+++ b/xen/common/symbols.c
@@ -17,6 +17,8 @@
 #include <xen/lib.h>
 #include <xen/string.h>
 #include <xen/spinlock.h>
+#include <public/platform.h>
+#include <xen/guest_access.h>
 
 #ifdef SYMBOLS_ORIGIN
 extern const unsigned int symbols_offsets[1];
@@ -107,7 +109,7 @@ const char *symbols_lookup(unsigned long addr,
     unsigned long i, low, high, mid;
     unsigned long symbol_end = 0;
 
-    namebuf[KSYM_NAME_LEN] = 0;
+    namebuf[XEN_KSYM_NAME_LEN] = 0;
     namebuf[0] = 0;
 
     if (!is_active_kernel_text(addr))
@@ -148,3 +150,49 @@ const char *symbols_lookup(unsigned long addr,
     *offset = addr - symbols_address(low);
     return namebuf;
 }
+
+/*
+ * Get symbol type information. This is encoded as a single char at the
+ * beginning of the symbol name.
+ */
+static char symbols_get_symbol_type(unsigned int off)
+{
+    /*
+     * Get just the first code, look it up in the token table,
+     * and return the first char from this token.
+     */
+    return symbols_token_table[symbols_token_index[symbols_names[off + 1]]];
+}
+
+/*
+ * Symbols are most likely accessed sequentially so we remember position from
+ * previous read. This can help us avoid the extra call to get_symbol_offset().
+ */
+static uint64_t next_symbol, next_offset;
+static DEFINE_SPINLOCK(symbols_mutex);
+
+int xensyms_read(uint32_t *symnum, uint32_t *type, uint64_t *address, char *name)
+{
+    if ( *symnum > symbols_num_syms )
+        return -ERANGE;
+    if ( *symnum == symbols_num_syms )
+        return 0;
+
+    spin_lock(&symbols_mutex);
+
+    if ( *symnum == 0 )
+        next_offset = next_symbol = 0;
+    if ( next_symbol != *symnum )
+        /* Non-sequential access */
+        next_offset = get_symbol_offset(*symnum);
+
+    *type = symbols_get_symbol_type(next_offset);
+    next_offset = symbols_expand_symbol(next_offset, name);
+    *address = symbols_offsets[*symnum] + SYMBOLS_ORIGIN;
+
+    next_symbol = ++*symnum;
+
+    spin_unlock(&symbols_mutex);
+
+    return 0;
+}
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..c5ae187 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -275,7 +275,7 @@ static char *pointer(char *str, char *end, const char **fmt_ptr,
     case 'S': /* Symbol name unconditionally with offset and size */
     {
         unsigned long sym_size, sym_offset;
-        char namebuf[KSYM_NAME_LEN+1];
+        char namebuf[XEN_KSYM_NAME_LEN+1];
 
         /* Advance parents fmt string, as we have consumed 's' or 'S' */
         ++*fmt_ptr;
diff --git a/xen/include/public/platform.h b/xen/include/public/platform.h
index 4341f54..dc09b55 100644
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -527,6 +527,21 @@ struct xenpf_core_parking {
 typedef struct xenpf_core_parking xenpf_core_parking_t;
 DEFINE_XEN_GUEST_HANDLE(xenpf_core_parking_t);
 
+#define XENPF_get_symbol   61
+#define XEN_KSYM_NAME_LEN 127
+struct xenpf_symdata {
+    /* IN variables */
+    uint32_t symnum;
+
+    /* OUT variables */
+    uint32_t type;
+    uint64_t address;
+
+    XEN_GUEST_HANDLE(char) name;
+};
+typedef struct xenpf_symdata xenpf_symdata_t;
+DEFINE_XEN_GUEST_HANDLE(xenpf_symdata_t);
+
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_platform_op(const struct xen_platform_op*);
@@ -553,6 +568,7 @@ struct xen_platform_op {
         struct xenpf_cpu_hotadd        cpu_add;
         struct xenpf_mem_hotadd        mem_add;
         struct xenpf_core_parking      core_parking;
+        struct xenpf_symdata           symdata;
         uint8_t                        pad[128];
     } u;
 };
diff --git a/xen/include/xen/symbols.h b/xen/include/xen/symbols.h
index 87cd77d..3017449 100644
--- a/xen/include/xen/symbols.h
+++ b/xen/include/xen/symbols.h
@@ -2,8 +2,7 @@
 #define _XEN_SYMBOLS_H
 
 #include <xen/types.h>
-
-#define KSYM_NAME_LEN 127
+#include <public/platform.h>
 
 /* Lookup an address. */
 const char *symbols_lookup(unsigned long addr,
@@ -11,4 +10,7 @@ const char *symbols_lookup(unsigned long addr,
                            unsigned long *offset,
                            char *namebuf);
 
+int xensyms_read(uint32_t *symnum, uint32_t *type,
+                 uint64_t *address, char *name);
+
 #endif /*_XEN_SYMBOLS_H*/
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 8caede6..6e3994d 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -85,6 +85,7 @@
 ?	processor_px			platform.h
 !	psd_package			platform.h
 ?	xenpf_enter_acpi_sleep		platform.h
+!	xenpf_symdata			platform.h
 ?	xenpf_pcpuinfo			platform.h
 ?	xenpf_pcpu_version		platform.h
 !	sched_poll			sched.h
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP4-000777-2x; Mon, 17 Feb 2014 17:54:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP2-00076K-PG
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:49 +0000
Received: from [193.109.254.147:34305] by server-4.bemta-14.messagelabs.com id
	C4/20-32066-8EC42035; Mon, 17 Feb 2014 17:54:48 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392659686!4941967!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23492 invoked from network); 17 Feb 2014 17:54:47 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:47 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsg1F003468
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:43 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsf72027683
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsecY029003; Mon, 17 Feb 2014 17:54:40 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:40 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:49 -0500
Message-Id: <1392659764-22183-3-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 02/17] VPMU: Mark context LOADED before
	registers are loaded
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Because a PMU interrupt may be generated as soon as PMU registers are loaded (or,
more precisely, as soon as the HW PMU is "armed"), we don't want to delay marking
the context as LOADED until after the registers are loaded. Otherwise, during
interrupt handling, VPMU_CONTEXT_LOADED may not be set and this could be confusing.

(Technically, only SVM needs this change right now since VMX will "arm" the PMU
later, during VMRUN, when the global control register is loaded from the VMCS.
However, both AMD and Intel code will require this patch when we introduce the
PV VPMU.)

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 2 ++
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 2 ++
 xen/arch/x86/hvm/vpmu.c           | 3 +--
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 66a3815..3ac7d53 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -203,6 +203,8 @@ static void amd_vpmu_load(struct vcpu *v)
         return;
     }
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     context_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index ee26362..8aa7cb2 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -369,6 +369,8 @@ static void core2_vpmu_load(struct vcpu *v)
     if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
         return;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
     __core2_vpmu_load(v);
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 21fbaba..63765fa 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -211,10 +211,9 @@ void vpmu_load(struct vcpu *v)
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
     {
         apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
         vpmu->arch_vpmu_ops->arch_vpmu_load(v);
     }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
 }
 
 void vpmu_initialise(struct vcpu *v)
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP8-0007A4-Kx; Mon, 17 Feb 2014 17:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP4-00077O-Ud
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:51 +0000
Received: from [85.158.143.35:47936] by server-2.bemta-4.messagelabs.com id
	F6/95-10891-AEC42035; Mon, 17 Feb 2014 17:54:50 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392659686!6302941!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4556 invoked from network); 17 Feb 2014 17:54:48 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:48 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsgkK003470
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:43 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsgFp027700
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsgfP027697; Mon, 17 Feb 2014 17:54:42 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:51 -0500
Message-Id: <1392659764-22183-5-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 04/17] intel/VPMU: Clean up Intel VPMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.

Properly clean up when core2_vpmu_alloc_resource() fails and add routines
to remove MSRs from VMCS.


Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
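
Note (not part of the commit message): the removal routines added below work by linear
search followed by array compaction. A standalone sketch of that algorithm, using
hypothetical names (`rm_msr`, `struct msr_entry`) since the real code operates on the
VMCS MSR load/store area via `__vmwrite()`:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of an entry in the VMCS MSR load/store area. */
struct msr_entry {
    uint32_t index;
    uint64_t data;
};

/*
 * Remove 'msr' from 'area' by sliding later entries down one slot,
 * mirroring vmx_rm_guest_msr(): linear search, compaction, and zeroing
 * of the vacated tail slot. Returns the new entry count (unchanged if
 * the MSR was not present).
 */
static unsigned int rm_msr(struct msr_entry *area, unsigned int count,
                           uint32_t msr)
{
    unsigned int idx;

    for ( idx = 0; idx < count; idx++ )
        if ( area[idx].index == msr )
            break;

    if ( idx == count )        /* not present: nothing to do */
        return count;

    for ( ; idx < count - 1; idx++ )
        area[idx] = area[idx + 1];

    area[count - 1].index = 0;
    area[count - 1].data = 0;

    return count - 1;
}
```

In the hypervisor the returned count is additionally written back to the VMCS
(`VM_EXIT_MSR_STORE_COUNT` / `VM_ENTRY_MSR_LOAD_COUNT`), which this sketch omits.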
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 310 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 199 insertions(+), 187 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 44f33cb..5f86b17 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1205,6 +1205,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1235,6 +1263,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 1e32ff3..513eca4 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,26 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +108,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +117,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +129,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +142,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID.EDX[0xa].EDX[0..4]
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 eax;
 
-    return arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +185,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +195,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +213,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +221,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +236,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +257,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +289,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +302,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -343,20 +329,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -375,56 +363,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -435,10 +406,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -454,7 +423,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    unsigned pmu_enable = 0;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -499,6 +468,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -511,27 +481,25 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
+        for ( i = 0; i < arch_pmc_cnt; i++ )
         {
             rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
             global_ctrl >>= 1;
         }
 
         rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
         global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
@@ -540,27 +508,27 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= 4;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
     if ( pmu_enable )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
@@ -579,7 +547,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -595,7 +562,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -680,7 +647,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -698,27 +665,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -736,7 +701,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -799,18 +764,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ebaba5c..ed81cfb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -473,7 +473,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP8-0007A4-Kx; Mon, 17 Feb 2014 17:54:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP4-00077O-Ud
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:51 +0000
Received: from [85.158.143.35:47936] by server-2.bemta-4.messagelabs.com id
	F6/95-10891-AEC42035; Mon, 17 Feb 2014 17:54:50 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392659686!6302941!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4556 invoked from network); 17 Feb 2014 17:54:48 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:48 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsgkK003470
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:43 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsgFp027700
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsgfP027697; Mon, 17 Feb 2014 17:54:42 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:51 -0500
Message-Id: <1392659764-22183-5-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 04/17] intel/VPMU: Clean up Intel VPMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove struct pmumsr and core2_pmu_enable. Replace static MSR structures with
fields in core2_vpmu_context.

Call core2_get_pmc_count() once, during initialization.

Properly clean up when core2_vpmu_alloc_resource() fails and add routines
to remove MSRs from VMCS.


Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c              |  55 ++++++
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 310 ++++++++++++++-----------------
 xen/include/asm-x86/hvm/vmx/vmcs.h       |   2 +
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h |  19 --
 4 files changed, 199 insertions(+), 187 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 44f33cb..5f86b17 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1205,6 +1205,34 @@ int vmx_add_guest_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_guest_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_STORE_COUNT, msr_count);
+    __vmwrite(VM_ENTRY_MSR_LOAD_COUNT, msr_count);
+}
+
 int vmx_add_host_load_msr(u32 msr)
 {
     struct vcpu *curr = current;
@@ -1235,6 +1263,33 @@ int vmx_add_host_load_msr(u32 msr)
     return 0;
 }
 
+void vmx_rm_host_load_msr(u32 msr)
+{
+    struct vcpu *curr = current;
+    unsigned int idx, msr_count = curr->arch.hvm_vmx.host_msr_count;
+    struct vmx_msr_entry *msr_area = curr->arch.hvm_vmx.host_msr_area;
+
+    if ( msr_area == NULL )
+        return;
+
+    for ( idx = 0; idx < msr_count; idx++ )
+        if ( msr_area[idx].index == msr )
+            break;
+
+    if ( idx == msr_count )
+        return;
+
+    for ( ; idx < msr_count - 1; idx++ )
+    {
+        msr_area[idx].index = msr_area[idx + 1].index;
+        msr_area[idx].data = msr_area[idx + 1].data;
+    }
+    msr_area[msr_count - 1].index = 0;
+
+    curr->arch.hvm_vmx.host_msr_count = --msr_count;
+    __vmwrite(VM_EXIT_MSR_LOAD_COUNT, msr_count);
+}
+
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector)
 {
     if ( !test_and_set_bit(vector, v->arch.hvm_vmx.eoi_exit_bitmap) )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 1e32ff3..513eca4 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -69,6 +69,26 @@
 static bool_t __read_mostly full_width_write;
 
 /*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+#define VPMU_CORE2_MAX_FIXED_PMCS     4
+struct core2_vpmu_context {
+    u64 fixed_ctrl;
+    u64 ds_area;
+    u64 pebs_enable;
+    u64 global_ovf_status;
+    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
+    struct arch_msr_pair arch_msr_pair[1];
+};
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
  * QUIRK to workaround an issue on various family 6 cpus.
  * The issue leads to endless PMC interrupt loops on the processor.
  * If the interrupt handler is running and a pmc reaches the value 0, this
@@ -88,11 +108,8 @@ static void check_pmc_quirk(void)
         is_pmc_quirk = 0;    
 }
 
-static int core2_get_pmc_count(void);
 static void handle_pmc_quirk(u64 msr_content)
 {
-    int num_gen_pmc = core2_get_pmc_count();
-    int num_fix_pmc  = 3;
     int i;
     u64 val;
 
@@ -100,7 +117,7 @@ static void handle_pmc_quirk(u64 msr_content)
         return;
 
     val = msr_content;
-    for ( i = 0; i < num_gen_pmc; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -112,7 +129,7 @@ static void handle_pmc_quirk(u64 msr_content)
         val >>= 1;
     }
     val = msr_content >> 32;
-    for ( i = 0; i < num_fix_pmc; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         if ( val & 0x1 )
         {
@@ -125,75 +142,42 @@ static void handle_pmc_quirk(u64 msr_content)
     }
 }
 
-static const u32 core2_fix_counters_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR0,
-    MSR_CORE_PERF_FIXED_CTR1,
-    MSR_CORE_PERF_FIXED_CTR2
-};
-
 /*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
+ * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
  */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* The index into the core2_ctrls_msr[] of this MSR used in core2_vpmu_dump() */
-#define MSR_CORE_PERF_FIXED_CTR_CTRL_IDX 0
-
-/* Core 2 Non-architectual Performance Control MSRs. */
-static const u32 core2_ctrls_msr[] = {
-    MSR_CORE_PERF_FIXED_CTR_CTRL,
-    MSR_IA32_PEBS_ENABLE,
-    MSR_IA32_DS_AREA
-};
-
-struct pmumsr {
-    unsigned int num;
-    const u32 *msr;
-};
-
-static const struct pmumsr core2_fix_counters = {
-    VPMU_CORE2_NUM_FIXED,
-    core2_fix_counters_msr
-};
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
 
-static const struct pmumsr core2_ctrls = {
-    VPMU_CORE2_NUM_CTRLS,
-    core2_ctrls_msr
-};
-static int arch_pmc_cnt;
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
 
 /*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
+ * Read the number of fixed counters via CPUID.EAX[0xa].EDX[0..4]
  */
-static int core2_get_pmc_count(void)
+static int core2_get_fixed_pmc_count(void)
 {
-    u32 eax, ebx, ecx, edx;
-
-    if ( arch_pmc_cnt == 0 )
-    {
-        cpuid(0xa, &eax, &ebx, &ecx, &edx);
-        arch_pmc_cnt = (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
-    }
+    u32 edx;
 
-    return arch_pmc_cnt;
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
 static u64 core2_calc_intial_glb_ctrl_msr(void)
 {
-    int arch_pmc_bits = (1 << core2_get_pmc_count()) - 1;
-    u64 fix_pmc_bits  = (1 << 3) - 1;
-    return ((fix_pmc_bits << 32) | arch_pmc_bits);
+    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
+    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
+    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
 }
 
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
-    u32 eax, ebx, ecx, edx;
+    u32 edx;
 
-    cpuid(0xa, &eax, &ebx, &ecx, &edx);
-    return ((edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT);
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
 }
 
 static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
@@ -201,9 +185,9 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     int i;
     u32 msr_index_pmc;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_fix_counters.msr[i] == msr_index )
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
         {
             *type = MSR_TYPE_COUNTER;
             *index = i;
@@ -211,14 +195,12 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
+         (msr_index == MSR_IA32_DS_AREA) ||
+         (msr_index == MSR_IA32_PEBS_ENABLE) )
     {
-        if ( core2_ctrls.msr[i] == msr_index )
-        {
-            *type = MSR_TYPE_CTRL;
-            *index = i;
-            return 1;
-        }
+        *type = MSR_TYPE_CTRL;
+        return 1;
     }
 
     if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
@@ -231,7 +213,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
 
     msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
     if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + core2_get_pmc_count())) )
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_COUNTER;
         *index = msr_index_pmc - MSR_IA32_PERFCTR0;
@@ -239,7 +221,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     }
 
     if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + core2_get_pmc_count())) )
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
     {
         *type = MSR_TYPE_ARCH_CTRL;
         *index = msr_index - MSR_P6_EVNTSEL0;
@@ -254,13 +236,13 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     int i;
 
     /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                   msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
         clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
@@ -275,26 +257,28 @@ static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
     }
 
     /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        clear_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]), msr_bitmap);
-        set_bit(msraddr_to_bitpos(core2_fix_counters.msr[i]),
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
     }
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
                 msr_bitmap + 0x800/BYTES_PER_LONG);
 
         if ( full_width_write )
@@ -305,10 +289,12 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
         }
     }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        set_bit(msraddr_to_bitpos(core2_ctrls.msr[i]), msr_bitmap);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0+i), msr_bitmap);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
 }
 
 static inline void __core2_vpmu_save(struct vcpu *v)
@@ -316,10 +302,10 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     int i;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        rdmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        rdmsrl(MSR_IA32_PERFCTR0+i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -343,20 +329,22 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     unsigned int i, pmc_start;
     struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        wrmsrl(core2_fix_counters.msr[i], core2_vpmu_cxt->fix_counters[i]);
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
     else
         pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
         wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    }
 
-    for ( i = 0; i < core2_ctrls.num; i++ )
-        wrmsrl(core2_ctrls.msr[i], core2_vpmu_cxt->ctrls[i]);
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        wrmsrl(MSR_P6_EVNTSEL0+i, core2_vpmu_cxt->arch_msr_pair[i].control);
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -375,56 +363,39 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct core2_vpmu_context *core2_vpmu_cxt;
-    struct core2_pmu_enable *pmu_enable;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
 
     wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
     if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        return 0;
+        goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
                  core2_calc_intial_glb_ctrl_msr());
 
-    pmu_enable = xzalloc_bytes(sizeof(struct core2_pmu_enable) +
-                               core2_get_pmc_count() - 1);
-    if ( !pmu_enable )
-        goto out1;
-
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (core2_get_pmc_count()-1)*sizeof(struct arch_msr_pair));
+                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
     if ( !core2_vpmu_cxt )
-        goto out2;
-    core2_vpmu_cxt->pmu_enable = pmu_enable;
+        goto out_err;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
     return 1;
- out2:
-    xfree(pmu_enable);
- out1:
-    gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, PMU feature is "
-             "unavailable on domain %d vcpu %d.\n",
-             v->vcpu_id, v->domain->domain_id);
-    return 0;
-}
 
-static void core2_vpmu_save_msr_context(struct vcpu *v, int type,
-                                       int index, u64 msr_data)
-{
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
 
-    switch ( type )
-    {
-    case MSR_TYPE_CTRL:
-        core2_vpmu_cxt->ctrls[index] = msr_data;
-        break;
-    case MSR_TYPE_ARCH_CTRL:
-        core2_vpmu_cxt->arch_msr_pair[index].control = msr_data;
-        break;
-    }
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
 }
 
 static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
@@ -435,10 +406,8 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
         return 0;
 
     if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-	 (vpmu->context != NULL ||
-	  !core2_vpmu_alloc_resource(current)) )
+         !core2_vpmu_alloc_resource(current) )
         return 0;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
 
     /* Do the lazy load staff. */
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
@@ -454,7 +423,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
-    char pmu_enable = 0;
+    unsigned pmu_enable = 0;
     int i, tmp;
     int type = -1, index = -1;
     struct vcpu *v = current;
@@ -499,6 +468,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( msr_content & 1 )
             gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
                      "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
         return 1;
     case MSR_IA32_DS_AREA:
         if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
@@ -511,27 +481,25 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
                 return 1;
             }
-            core2_vpmu_cxt->pmu_enable->ds_area_enable = msr_content ? 1 : 0;
+            core2_vpmu_cxt->ds_area = msr_content;
             break;
         }
         gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
         return 1;
     case MSR_CORE_PERF_GLOBAL_CTRL:
         global_ctrl = msr_content;
-        for ( i = 0; i < core2_get_pmc_count(); i++ )
+        for ( i = 0; i < arch_pmc_cnt; i++ )
         {
             rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] =
-                    global_ctrl & (non_global_ctrl >> 22) & 1;
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
             global_ctrl >>= 1;
         }
 
         rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
         global_ctrl = msr_content >> 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
@@ -540,27 +508,27 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         non_global_ctrl = msr_content;
         vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         global_ctrl >>= 32;
-        for ( i = 0; i < core2_fix_counters.num; i++ )
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
-            core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] =
-                (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1: 0);
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
             non_global_ctrl >>= 4;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
         break;
     default:
         tmp = msr - MSR_P6_EVNTSEL0;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        if ( tmp >= 0 && tmp < core2_get_pmc_count() )
-            core2_vpmu_cxt->pmu_enable->arch_pmc_enable[tmp] =
-                (global_ctrl >> tmp) & (msr_content >> 22) & 1;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+        }
     }
 
-    for ( i = 0; i < core2_fix_counters.num; i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i];
-    for ( i = 0; i < core2_get_pmc_count(); i++ )
-        pmu_enable |= core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i];
-    pmu_enable |= core2_vpmu_cxt->pmu_enable->ds_area_enable;
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
     if ( pmu_enable )
         vpmu_set(vpmu, VPMU_RUNNING);
     else
@@ -579,7 +547,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
     }
 
-    core2_vpmu_save_msr_context(v, type, index, msr_content);
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -595,7 +562,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             if  ( msr == MSR_IA32_DS_AREA )
                 break;
             /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (VPMU_CORE2_NUM_FIXED * FIXED_CTR_CTRL_BITS)) - 1);
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
             if (msr_content & mask)
                 inject_gp = 1;
             break;
@@ -680,7 +647,7 @@ static void core2_vpmu_do_cpuid(unsigned int input,
 static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i, num;
+    int i;
     const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
     u64 val;
 
@@ -698,27 +665,25 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     printk("    vPMU running\n");
     core2_vpmu_cxt = vpmu->context;
-    num = core2_get_pmc_count();
+
     /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < num; i++ )
+    for ( i = 0; i < arch_pmc_cnt; i++ )
     {
         const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
 
-        if ( core2_vpmu_cxt->pmu_enable->arch_pmc_enable[i] )
-            printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-                   i, msr_pair[i].counter, msr_pair[i].control);
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+               i, msr_pair[i].counter, msr_pair[i].control);
     }
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
      */
-    val = core2_vpmu_cxt->ctrls[MSR_CORE_PERF_FIXED_CTR_CTRL_IDX];
-    for ( i = 0; i < core2_fix_counters.num; i++ )
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
-        if ( core2_vpmu_cxt->pmu_enable->fixed_ctr_enable[i] )
-            printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-                   i, core2_vpmu_cxt->fix_counters[i],
-                   val & FIXED_CTR_CTRL_MASK);
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, core2_vpmu_cxt->fix_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
 }
@@ -736,7 +701,7 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
         core2_vpmu_cxt->global_ovf_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << core2_get_pmc_count()) - 1);
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
     else
@@ -799,18 +764,27 @@ static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         }
     }
 func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
+    {
+        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
+        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
+               fixed_pmc_cnt);
+    }
     check_pmc_quirk();
+
     return 0;
 }
 
 static void core2_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
-    xfree(core2_vpmu_cxt->pmu_enable);
+
     xfree(vpmu->context);
     if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ebaba5c..ed81cfb 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -473,7 +473,9 @@ void vmx_enable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
 int vmx_read_guest_msr(u32 msr, u64 *val);
 int vmx_write_guest_msr(u32 msr, u64 val);
 int vmx_add_guest_msr(u32 msr);
+void vmx_rm_guest_msr(u32 msr);
 int vmx_add_host_load_msr(u32 msr);
+void vmx_rm_host_load_msr(u32 msr);
 void vmx_vmcs_switch(struct vmcs_struct *from, struct vmcs_struct *to);
 void vmx_set_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
index 60b05fd..410372d 100644
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
@@ -23,29 +23,10 @@
 #ifndef __ASM_X86_HVM_VPMU_CORE_H_
 #define __ASM_X86_HVM_VPMU_CORE_H_
 
-/* Currently only 3 fixed counters are supported. */
-#define VPMU_CORE2_NUM_FIXED 3
-/* Currently only 3 Non-architectual Performance Control MSRs */
-#define VPMU_CORE2_NUM_CTRLS 3
-
 struct arch_msr_pair {
     u64 counter;
     u64 control;
 };
 
-struct core2_pmu_enable {
-    char ds_area_enable;
-    char fixed_ctr_enable[VPMU_CORE2_NUM_FIXED];
-    char arch_pmc_enable[1];
-};
-
-struct core2_vpmu_context {
-    struct core2_pmu_enable *pmu_enable;
-    u64 fix_counters[VPMU_CORE2_NUM_FIXED];
-    u64 ctrls[VPMU_CORE2_NUM_CTRLS];
-    u64 global_ovf_status;
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 #endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSP9-0007Ad-4L; Mon, 17 Feb 2014 17:54:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP5-00077W-AY
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:51 +0000
Received: from [85.158.137.68:53088] by server-11.bemta-3.messagelabs.com id
	45/51-04255-AEC42035; Mon, 17 Feb 2014 17:54:50 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392659688!2453024!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22159 invoked from network); 17 Feb 2014 17:54:49 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:49 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsiND003493
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:45 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1HHsh7N014460
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:44 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHshdD027735; Mon, 17 Feb 2014 17:54:43 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:43 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:53 -0500
Message-Id: <1392659764-22183-7-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 06/17] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL
	should be initialized to zero
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The MSR_CORE_PERF_GLOBAL_CTRL register should be set to zero initially. It is
up to the guest to program it so that the counters it wants are enabled.
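Before this change, core2_vpmu_alloc_resource() seeded the guest's copy of
the register with an all-counters-enabled mask. A small sketch of the value
the removed helper computed (re-derived here for illustration; not Xen's
exact code) shows what is being replaced by a plain zero:

```c
#include <assert.h>
#include <stdint.h>

/*
 * IA32_PERF_GLOBAL_CTRL carries one enable bit per general-purpose
 * counter in the low word and one per fixed counter starting at bit 32.
 * This computes the "everything enabled" mask the removed
 * core2_calc_intial_glb_ctrl_msr() helper used to produce.
 */
static uint64_t glb_ctrl_all_enabled(unsigned int arch_pmc_cnt,
                                     unsigned int fixed_pmc_cnt)
{
    uint64_t arch_bits  = ((uint64_t)1 << arch_pmc_cnt) - 1;
    uint64_t fixed_bits = ((uint64_t)1 << fixed_pmc_cnt) - 1;

    return (fixed_bits << 32) | arch_bits;
}
```

With this patch the register instead starts out as 0, so no counter counts
until the guest itself sets the corresponding enable bits.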

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index c16ae10..c66289a 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -164,13 +164,6 @@ static int core2_get_fixed_pmc_count(void)
     return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
-static u64 core2_calc_intial_glb_ctrl_msr(void)
-{
-    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
-    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
-    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
-}
-
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
@@ -373,8 +366,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
         goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                 core2_calc_intial_glb_ctrl_msr());
+    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
                     (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-- 
1.8.1.4



	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 06/17] intel/VPMU: MSR_CORE_PERF_GLOBAL_CTRL
	should be initialized to zero
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The MSR_CORE_PERF_GLOBAL_CTRL register should initially be set to zero; it is
up to the guest to program it so that its counters are enabled.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 10 +---------
 1 file changed, 1 insertion(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index c16ae10..c66289a 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -164,13 +164,6 @@ static int core2_get_fixed_pmc_count(void)
     return ( (eax & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
 }
 
-static u64 core2_calc_intial_glb_ctrl_msr(void)
-{
-    int arch_pmc_bits = (1 << arch_pmc_cnt) - 1;
-    u64 fix_pmc_bits  = (1 << fixed_pmc_cnt) - 1;
-    return ( (fix_pmc_bits << 32) | arch_pmc_bits );
-}
-
 /* edx bits 5-12: Bit width of fixed-function performance counters  */
 static int core2_get_bitwidth_fix_count(void)
 {
@@ -373,8 +366,7 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
 
     if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
         goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL,
-                 core2_calc_intial_glb_ctrl_msr());
+    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
                     (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPA-0007CL-38; Mon, 17 Feb 2014 17:54:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP8-00078v-5f
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:54 +0000
Received: from [193.109.254.147:62002] by server-1.bemta-14.messagelabs.com id
	7D/BF-15438-DEC42035; Mon, 17 Feb 2014 17:54:53 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392659691!4907911!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26581 invoked from network); 17 Feb 2014 17:54:52 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:52 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHslWg024867
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:48 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1HHsjXk014511
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:47 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsj20027784; Mon, 17 Feb 2014 17:54:45 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:44 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:55 -0500
Message-Id: <1392659764-22183-9-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 08/17] x86/VPMU: Make vpmu not HVM-specific
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The vpmu structure will be used for both HVM and PV guests, so move it from
hvm_vcpu to arch_vcpu.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/include/asm-x86/domain.h   | 2 ++
 xen/include/asm-x86/hvm/vcpu.h | 3 ---
 xen/include/asm-x86/hvm/vpmu.h | 5 ++---
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..f38298c 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -400,6 +400,8 @@ struct arch_vcpu
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
 
+    struct vpmu_struct vpmu;
+
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv_vcpu;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 122ab0d..9beeaa9 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -152,9 +152,6 @@ struct hvm_vcpu {
     u32                 msr_tsc_aux;
     u64                 msr_tsc_adjust;
 
-    /* VPMU */
-    struct vpmu_struct  vpmu;
-
     union {
         struct arch_vmx_struct vmx;
         struct arch_svm_struct svm;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 87a72ce..edc67f6 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -31,9 +31,8 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-#define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
-#define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
-                                          arch.hvm_vcpu.vpmu))
+#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
+#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPA-0007D2-NZ; Mon, 17 Feb 2014 17:54:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP7-00078s-H9
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:53 +0000
Received: from [85.158.139.211:7900] by server-5.bemta-5.messagelabs.com id
	EE/AB-32749-CEC42035; Mon, 17 Feb 2014 17:54:52 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392659689!4494411!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14814 invoked from network); 17 Feb 2014 17:54:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:50 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsiNX024846
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:45 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsh0m001647
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:44 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsghJ029076; Mon, 17 Feb 2014 17:54:42 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:42 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:52 -0500
Message-Id: <1392659764-22183-6-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 05/17] x86/VPMU: Handle APIC_LVTPC accesses
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update the APIC_LVTPC vector when an HVM guest writes to it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |  4 ----
 xen/arch/x86/hvm/vlapic.c         |  5 ++++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 17 -----------------
 xen/arch/x86/hvm/vpmu.c           | 14 +++++++++++---
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 5 files changed, 16 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3666915..2fbe2c1 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -298,8 +298,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
         if ( is_hvm_domain(v->domain) &&
              !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
@@ -310,8 +308,6 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
-        apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( is_hvm_domain(v->domain) &&
              ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index bc06010..d954f4f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,6 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
+#include <asm/hvm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
@@ -732,8 +733,10 @@ static int vlapic_reg_write(struct vcpu *v,
             vlapic_adjust_i8259_target(v->domain);
             pt_may_unmask_irq(v->domain, NULL);
         }
-        if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
+        else if ( (offset == APIC_LVTT) && !(val & APIC_LVT_MASKED) )
             pt_may_unmask_irq(NULL, &vlapic->pt);
+        else if ( offset == APIC_LVTPC )
+            vpmu_lvtpc_update(val);
         break;
 
     case APIC_TMICT:
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 513eca4..c16ae10 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -534,19 +534,6 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     else
         vpmu_reset(vpmu, VPMU_RUNNING);
 
-    /* Setup LVTPC in local apic */
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) &&
-         is_vlapic_lvtpc_enabled(vcpu_vlapic(v)) )
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
-    }
-    else
-    {
-        apic_write_around(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
-    }
-
     if ( type != MSR_TYPE_GLOBAL )
     {
         u64 mask;
@@ -712,10 +699,6 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
             return 0;
     }
 
-    /* HW sets the MASK bit when performance counter interrupt occurs*/
-    vpmu->hw_lapic_lvtpc = apic_read(APIC_LVTPC) & ~APIC_LVT_MASKED;
-    apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-
     return 1;
 }
 
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a48dae2..979bd33 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -64,6 +64,14 @@ static void __init parse_vpmu_param(char *s)
     }
 }
 
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -230,18 +238,18 @@ void vpmu_initialise(struct vcpu *v)
     case X86_VENDOR_AMD:
         if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     case X86_VENDOR_INTEL:
         if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
             opt_vpmu_enabled = 0;
-        break;
+        return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
         opt_vpmu_enabled = 0;
-        break;
+        return;
     }
 }
 
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 2a713be..7ee0f01 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -87,6 +87,7 @@ struct vpmu_struct {
 #define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
 #define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
+void vpmu_lvtpc_update(uint32_t val);
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
 int vpmu_do_interrupt(struct cpu_user_regs *regs);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPB-0007E6-A2; Mon, 17 Feb 2014 17:54:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP6-00078Z-3l
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:53 +0000
Received: from [85.158.143.35:55851] by server-2.bemta-4.messagelabs.com id
	2A/95-10891-AEC42035; Mon, 17 Feb 2014 17:54:50 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392659688!6312440!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23934 invoked from network); 17 Feb 2014 17:54:50 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:50 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsgTY024833
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:43 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsfAQ027696
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:42 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHsfAs014358; Mon, 17 Feb 2014 17:54:41 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:41 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:50 -0500
Message-Id: <1392659764-22183-4-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 03/17] x86/VPMU: Minor VPMU cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Update macros that modify VPMU flags to allow changing multiple bits at once.

Make sure that we only touch the MSR bitmap on HVM guests (both VMX and SVM).
This is needed by subsequent PMU patches.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       | 14 +++++++++-----
 xen/arch/x86/hvm/vmx/vpmu_core2.c |  9 +++------
 xen/arch/x86/hvm/vpmu.c           |  3 +--
 xen/include/asm-x86/hvm/vpmu.h    |  9 +++++----
 4 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 3ac7d53..3666915 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -244,7 +244,8 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) &&
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     return 1;
@@ -284,7 +285,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -300,7 +301,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR;
 
-        if ( !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -311,7 +313,8 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | APIC_LVT_MASKED;
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+        if ( is_hvm_domain(v->domain) &&
+             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -403,7 +406,8 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+    if ( is_hvm_domain(v->domain) &&
+         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 8aa7cb2..1e32ff3 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -326,10 +326,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        return 0;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) ) 
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
@@ -448,7 +445,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap )
+        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -815,7 +812,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
         return;
     xfree(core2_vpmu_cxt->pmu_enable);
     xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap )
+    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
     release_pmu_ownship(PMU_OWNER_HVM);
     vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 63765fa..a48dae2 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -143,8 +143,7 @@ void vpmu_save(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int pcpu = smp_processor_id();
 
-    if ( !(vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) &&
-           vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)) )
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
        return;
 
     vpmu->last_pcpu = pcpu;
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 40f63fb..2a713be 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -81,10 +81,11 @@ struct vpmu_struct {
 #define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
 
 
-#define vpmu_set(_vpmu, _x)    ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)  ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x) ((_vpmu)->flags & (_x))
-#define vpmu_clear(_vpmu)      ((_vpmu)->flags = 0)
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
 int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPC-0007FP-64; Mon, 17 Feb 2014 17:54:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP8-00078u-0Y
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:54 +0000
Received: from [85.158.143.35:48221] by server-3.bemta-4.messagelabs.com id
	1D/F2-11539-DEC42035; Mon, 17 Feb 2014 17:54:53 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392659691!6311758!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11064 invoked from network); 17 Feb 2014 17:54:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:52 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHslH6003554
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:47 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1HHskMT014552
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:46 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHsjst014529; Mon, 17 Feb 2014 17:54:46 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:45 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:56 -0500
Message-Id: <1392659764-22183-10-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 09/17] x86/VPMU: Interface for setting PMU
	mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add runtime interface for setting PMU mode and flags. Three main modes are
provided:
* PMU off
* PMU on: Guests can access PMU MSRs and receive PMU interrupts. dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via HYPERVISOR_xenpmu_op hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c              |   4 +-
 xen/arch/x86/hvm/svm/vpmu.c        |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  10 +--
 xen/arch/x86/hvm/vpmu.c            | 121 ++++++++++++++++++++++++++++++++++---
 xen/arch/x86/x86_64/compat/entry.S |   4 ++
 xen/arch/x86/x86_64/entry.S        |   4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  14 ++---
 xen/include/public/pmu.h           |  48 +++++++++++++++
 xen/include/public/xen.h           |   1 +
 xen/include/xen/hypercall.h        |   4 ++
 10 files changed, 188 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..b615c07 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1465,7 +1465,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
     if ( is_hvm_vcpu(prev) )
     {
-        if (prev != next)
+        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
             vpmu_save(prev);
 
         if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
@@ -1508,7 +1508,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            (next->domain->domain_id != 0));
     }
 
-    if (is_hvm_vcpu(next) && (prev != next) )
+    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index d199f08..97cd6e5 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -472,14 +472,14 @@ struct arch_vpmu_ops amd_vpmu_ops = {
     .arch_vpmu_dump = amd_vpmu_dump
 };
 
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int svm_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_mode == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 856281d..efbebe2 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -703,13 +703,13 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     return 1;
 }
 
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+static int core2_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -820,7 +820,7 @@ struct arch_vpmu_ops core2_no_vpmu_ops = {
     .do_cpuid = core2_no_vpmu_do_cpuid,
 };
 
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int vmx_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
@@ -828,7 +828,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_mode == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
@@ -867,7 +867,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         case 0x3f:
         case 0x45:
         case 0x46:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
+            ret = core2_vpmu_initialise(v);
             if ( !ret )
                 vpmu->arch_vpmu_ops = &core2_vpmu_ops;
             return ret;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a6e933a..50f6bb8 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,8 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+uint64_t __read_mostly vpmu_features = 0;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +54,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +62,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode = XENPMU_MODE_ON;
         break;
     }
 }
@@ -77,6 +79,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
         return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
     return 0;
@@ -86,6 +91,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
         return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
     return 0;
@@ -237,19 +245,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         return;
     }
 }
@@ -271,3 +279,100 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+/* Unload VPMU contexts */
+static void vpmu_unload_all(void)
+{
+    struct domain *d;
+    struct vcpu *v;
+    struct vpmu_struct *vpmu;
+
+    for_each_domain(d)
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( v != current )
+                vcpu_pause(v);
+            vpmu = vcpu_vpmu(v);
+
+            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            {
+                if ( v != current )
+                    vcpu_unpause(v);
+                continue;
+            }
+
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+            if ( v != current )
+                vcpu_unpause(v);
+        }
+    }
+}
+
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode = pmu_params.d.val;
+        if ( vpmu_mode == XENPMU_MODE_OFF )
+            /*
+             * After this, the VPMU context will never be loaded during a
+             * context switch. PMU MSR accesses (which can load the context)
+             * are also prevented while the VPMU is disabled.
+             */
+            vpmu_unload_all();
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_features = pmu_params.d.val;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_features;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 594b0b9..07c736d 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index edc67f6..b945e8f 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/pmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
 #define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
 
@@ -58,8 +51,8 @@ struct arch_vpmu_ops {
     void (*arch_vpmu_dump)(const struct vcpu *);
 };
 
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+int vmx_vpmu_initialise(struct vcpu *);
+int svm_vpmu_initialise(struct vcpu *);
 
 struct vpmu_struct {
     u32 flags;
@@ -98,5 +91,8 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint64_t vpmu_mode;
+extern uint64_t vpmu_features;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 3ffd2cf..f91d935 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -13,6 +13,54 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xen_pmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } v;
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } d;
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xen_pmu_params xen_pmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a00ab21 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..cf34547 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/pmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPC-0007FP-64; Mon, 17 Feb 2014 17:54:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP8-00078u-0Y
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:54 +0000
Received: from [85.158.143.35:48221] by server-3.bemta-4.messagelabs.com id
	1D/F2-11539-DEC42035; Mon, 17 Feb 2014 17:54:53 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392659691!6311758!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11064 invoked from network); 17 Feb 2014 17:54:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:52 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHslH6003554
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:47 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1HHskMT014552
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:46 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHsjst014529; Mon, 17 Feb 2014 17:54:46 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:45 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:56 -0500
Message-Id: <1392659764-22183-10-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 09/17] x86/VPMU: Interface for setting PMU
	mode and flags
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add runtime interface for setting PMU mode and flags. Three main modes are
provided:
* PMU off
* PMU on: Guests can access PMU MSRs and receive PMU interrupts. dom0
  profiles itself and the hypervisor.
* dom0-only PMU: dom0 collects samples for both itself and guests.

For feature flags, only Intel's BTS is currently supported.

Mode and flags are set via the HYPERVISOR_xenpmu_op hypercall.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c              |   4 +-
 xen/arch/x86/hvm/svm/vpmu.c        |   4 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c  |  10 +--
 xen/arch/x86/hvm/vpmu.c            | 121 ++++++++++++++++++++++++++++++++++---
 xen/arch/x86/x86_64/compat/entry.S |   4 ++
 xen/arch/x86/x86_64/entry.S        |   4 ++
 xen/include/asm-x86/hvm/vpmu.h     |  14 ++---
 xen/include/public/pmu.h           |  48 +++++++++++++++
 xen/include/public/xen.h           |   1 +
 xen/include/xen/hypercall.h        |   4 ++
 10 files changed, 188 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..b615c07 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1465,7 +1465,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 
     if ( is_hvm_vcpu(prev) )
     {
-        if (prev != next)
+        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
             vpmu_save(prev);
 
         if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
@@ -1508,7 +1508,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            (next->domain->domain_id != 0));
     }
 
-    if (is_hvm_vcpu(next) && (prev != next) )
+    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index d199f08..97cd6e5 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -472,14 +472,14 @@ struct arch_vpmu_ops amd_vpmu_ops = {
     .arch_vpmu_dump = amd_vpmu_dump
 };
 
-int svm_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int svm_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
     int ret = 0;
 
     /* vpmu enabled? */
-    if ( !vpmu_flags )
+    if ( vpmu_mode == XENPMU_MODE_OFF )
         return 0;
 
     switch ( family )
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 856281d..efbebe2 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -703,13 +703,13 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     return 1;
 }
 
-static int core2_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+static int core2_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     u64 msr_content;
     struct cpuinfo_x86 *c = &current_cpu_data;
 
-    if ( !(vpmu_flags & VPMU_BOOT_BTS) )
+    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
         goto func_out;
     /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
     if ( cpu_has(c, X86_FEATURE_DS) )
@@ -820,7 +820,7 @@ struct arch_vpmu_ops core2_no_vpmu_ops = {
     .do_cpuid = core2_no_vpmu_do_cpuid,
 };
 
-int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
+int vmx_vpmu_initialise(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
@@ -828,7 +828,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
     int ret = 0;
 
     vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( !vpmu_flags )
+    if ( vpmu_mode == XENPMU_MODE_OFF )
         return 0;
 
     if ( family == 6 )
@@ -867,7 +867,7 @@ int vmx_vpmu_initialise(struct vcpu *v, unsigned int vpmu_flags)
         case 0x3f:
         case 0x45:
         case 0x46:
-            ret = core2_vpmu_initialise(v, vpmu_flags);
+            ret = core2_vpmu_initialise(v);
             if ( !ret )
                 vpmu->arch_vpmu_ops = &core2_vpmu_ops;
             return ret;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index a6e933a..50f6bb8 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,6 +21,7 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
@@ -38,7 +39,8 @@
  * "vpmu=off" : vpmu generally disabled
  * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
  */
-static unsigned int __read_mostly opt_vpmu_enabled;
+uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+uint64_t __read_mostly vpmu_features = 0;
 static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
@@ -52,7 +54,7 @@ static void __init parse_vpmu_param(char *s)
         break;
     default:
         if ( !strcmp(s, "bts") )
-            opt_vpmu_enabled |= VPMU_BOOT_BTS;
+            vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
         else if ( *s )
         {
             printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
@@ -60,7 +62,7 @@ static void __init parse_vpmu_param(char *s)
         }
         /* fall through */
     case 1:
-        opt_vpmu_enabled |= VPMU_BOOT_ENABLED;
+        vpmu_mode = XENPMU_MODE_ON;
         break;
     }
 }
@@ -77,6 +79,9 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
         return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
     return 0;
@@ -86,6 +91,9 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
+    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+        return 0;
+
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
         return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
     return 0;
@@ -237,19 +245,19 @@ void vpmu_initialise(struct vcpu *v)
     switch ( vendor )
     {
     case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( svm_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v, opt_vpmu_enabled) != 0 )
-            opt_vpmu_enabled = 0;
+        if ( vmx_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
         return;
 
     default:
         printk("VPMU: Initialization failed. "
                "Unknown CPU vendor %d\n", vendor);
-        opt_vpmu_enabled = 0;
+        vpmu_mode = XENPMU_MODE_OFF;
         return;
     }
 }
@@ -271,3 +279,100 @@ void vpmu_dump(struct vcpu *v)
         vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
 }
 
+/* Unload VPMU contexts */
+static void vpmu_unload_all(void)
+{
+    struct domain *d;
+    struct vcpu *v;
+    struct vpmu_struct *vpmu;
+
+    for_each_domain(d)
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( v != current )
+                vcpu_pause(v);
+            vpmu = vcpu_vpmu(v);
+
+            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            {
+                if ( v != current )
+                    vcpu_unpause(v);
+                continue;
+            }
+
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+            if ( v != current )
+                vcpu_unpause(v);
+        }
+    }
+}
+
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_MODE_ON )
+            return -EINVAL;
+
+        vpmu_mode = pmu_params.d.val;
+        if ( vpmu_mode == XENPMU_MODE_OFF )
+            /*
+             * After this, the VPMU context will never be loaded during a
+             * context switch. PMU MSR accesses (which can load the context)
+             * are also prevented while the VPMU is disabled.
+             */
+            vpmu_unload_all();
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_features = pmu_params.d.val;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_features;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 594b0b9..07c736d 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -416,6 +416,8 @@ ENTRY(compat_hypercall_table)
         .quad do_domctl
         .quad compat_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall           /* reserved for XenClient */
+        .quad do_xenpmu_op              /* 40 */
         .rept __HYPERVISOR_arch_0-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -464,6 +466,8 @@ ENTRY(compat_hypercall_args_table)
         .byte 1 /* do_domctl                */
         .byte 2 /* compat_kexec_op          */
         .byte 1 /* do_tmem_op               */
+        .byte 0 /* reserved for XenClient   */
+        .byte 2 /* do_xenpmu_op             */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 3ea4683..c36ffce 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -757,6 +757,8 @@ ENTRY(hypercall_table)
         .quad do_domctl
         .quad do_kexec_op
         .quad do_tmem_op
+        .quad do_ni_hypercall       /* reserved for XenClient */
+        .quad do_xenpmu_op          /* 40 */
         .rept __HYPERVISOR_arch_0-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -805,6 +807,8 @@ ENTRY(hypercall_args_table)
         .byte 1 /* do_domctl            */
         .byte 2 /* do_kexec             */
         .byte 1 /* do_tmem_op           */
+        .byte 0 /* reserved for XenClient */
+        .byte 2 /* do_xenpmu_op         */  /* 40 */
         .rept __HYPERVISOR_arch_0-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index edc67f6..b945e8f 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -24,13 +24,6 @@
 
 #include <public/pmu.h>
 
-/*
- * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
- * See arch/x86/hvm/vpmu.c.
- */
-#define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
-#define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
-
 #define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
 #define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
 
@@ -58,8 +51,8 @@ struct arch_vpmu_ops {
     void (*arch_vpmu_dump)(const struct vcpu *);
 };
 
-int vmx_vpmu_initialise(struct vcpu *, unsigned int flags);
-int svm_vpmu_initialise(struct vcpu *, unsigned int flags);
+int vmx_vpmu_initialise(struct vcpu *);
+int svm_vpmu_initialise(struct vcpu *);
 
 struct vpmu_struct {
     u32 flags;
@@ -98,5 +91,8 @@ void vpmu_dump(struct vcpu *v);
 extern int acquire_pmu_ownership(int pmu_ownership);
 extern void release_pmu_ownership(int pmu_ownership);
 
+extern uint64_t vpmu_mode;
+extern uint64_t vpmu_features;
+
 #endif /* __ASM_X86_HVM_VPMU_H_*/
 
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 3ffd2cf..f91d935 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -13,6 +13,54 @@
 #define XENPMU_VER_MAJ    0
 #define XENPMU_VER_MIN    0
 
+/*
+ * ` enum neg_errnoval
+ * ` HYPERVISOR_xenpmu_op(enum xenpmu_op cmd, struct xenpmu_params *args);
+ *
+ * @cmd  == XENPMU_* (PMU operation)
+ * @args == struct xenpmu_params
+ */
+/* ` enum xenpmu_op { */
+#define XENPMU_mode_get        0 /* Also used for getting PMU version */
+#define XENPMU_mode_set        1
+#define XENPMU_feature_get     2
+#define XENPMU_feature_set     3
+/* ` } */
+
+/* Parameters structure for HYPERVISOR_xenpmu_op call */
+struct xen_pmu_params {
+    /* IN/OUT parameters */
+    union {
+        struct version {
+            uint8_t maj;
+            uint8_t min;
+        } version;
+        uint64_t pad;
+    } v;
+    union {
+        uint64_t val;
+        XEN_GUEST_HANDLE(void) valp;
+    } d;
+
+    /* IN parameters */
+    uint64_t vcpu;
+};
+typedef struct xen_pmu_params xen_pmu_params_t;
+DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
+
+/* PMU modes:
+ * - XENPMU_MODE_OFF:   No PMU virtualization
+ * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
+ *                      itself and Xen
+ */
+#define XENPMU_MODE_OFF           0
+#define XENPMU_MODE_ON            (1<<0)
+
+/*
+ * PMU features:
+ * - XENPMU_FEATURE_INTEL_BTS: Intel BTS support (ignored on AMD)
+ */
+#define XENPMU_FEATURE_INTEL_BTS  1
 
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 8c5697e..a00ab21 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -101,6 +101,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_kexec_op             37
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
+#define __HYPERVISOR_xenpmu_op            40
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index a9e5229..cf34547 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -14,6 +14,7 @@
 #include <public/event_channel.h>
 #include <public/tmem.h>
 #include <public/version.h>
+#include <public/pmu.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -139,6 +140,9 @@ do_tmem_op(
 extern long
 do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 
+extern long
+do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
+
 #ifdef CONFIG_COMPAT
 
 extern int
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPD-0007Gt-5R; Mon, 17 Feb 2014 17:54:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP7-00078t-Si
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:54 +0000
Received: from [85.158.139.211:7920] by server-2.bemta-5.messagelabs.com id
	F5/48-23037-DEC42035; Mon, 17 Feb 2014 17:54:53 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392659690!4480807!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2681 invoked from network); 17 Feb 2014 17:54:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:52 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsjKZ003518
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:46 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsjrh029138
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:45 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsikI029122; Mon, 17 Feb 2014 17:54:44 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:44 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:54 -0500
Message-Id: <1392659764-22183-8-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 07/17] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add pmu.h header files and move to them various macros and structures that
will be shared between the hypervisor and PV guests.

Move the MSR banks out of the architectural PMU structures to allow for larger
sizes in the future. The banks are allocated immediately after the context,
and the PMU structures store offsets to them.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              | 71 ++++++++++++++------------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 87 +++++++++++++++++---------------
 xen/arch/x86/hvm/vpmu.c                  |  1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |  6 ++-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h | 32 ------------
 xen/include/asm-x86/hvm/vpmu.h           | 13 ++---
 xen/include/public/arch-arm.h            |  3 ++
 xen/include/public/arch-x86/pmu.h        | 63 +++++++++++++++++++++++
 xen/include/public/pmu.h                 | 38 ++++++++++++++
 9 files changed, 199 insertions(+), 115 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/pmu.h
 create mode 100644 xen/include/public/pmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 2fbe2c1..d199f08 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/pmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,13 +84,6 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
-
 static inline int get_pmu_reg_type(u32 addr)
 {
     if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
@@ -142,7 +136,7 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -157,7 +151,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -177,19 +171,22 @@ static inline void context_load(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     vpmu_reset(vpmu, VPMU_FROZEN);
 
@@ -198,7 +195,7 @@ static void amd_vpmu_load(struct vcpu *v)
         unsigned int i;
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -212,17 +209,18 @@ static inline void context_save(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
     unsigned int i;
 
     /*
@@ -256,7 +254,9 @@ static void context_update(unsigned int msr, u64 msr_content)
     unsigned int i;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -268,12 +268,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -300,7 +300,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu_set(vpmu, VPMU_RUNNING);
 
         if ( is_hvm_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -310,7 +310,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( is_hvm_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -351,7 +351,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 
 static int amd_vpmu_initialise(struct vcpu *v)
 {
-    struct amd_vpmu_context *ctxt;
+    struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
 
@@ -381,7 +381,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                         sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -390,6 +392,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
@@ -403,7 +408,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
         return;
 
     if ( is_hvm_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
@@ -420,7 +425,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
 static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -450,8 +457,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index c66289a..856281d 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/pmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,16 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -224,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -293,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -320,10 +318,13 @@ static int core2_vpmu_save(struct vcpu *v)
 static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -331,8 +332,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -355,7 +356,7 @@ static void core2_vpmu_load(struct vcpu *v)
 static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
@@ -368,11 +369,16 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
+    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                   sizeof(uint64_t) * fixed_pmc_cnt +
+                                   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
     if ( !core2_vpmu_cxt )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+                                    sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
@@ -420,7 +426,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -449,7 +455,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -512,11 +518,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
-                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
         }
     }
 
@@ -567,7 +576,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -578,7 +587,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -627,8 +636,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -647,12 +659,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+               i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -661,7 +670,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -672,14 +681,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vcpu *v = current;
     u64 msr_content;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
 
     rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
     if ( msr_content )
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -742,12 +751,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 979bd33..a6e933a 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/pmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..87a72ce 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/pmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -42,6 +41,9 @@
 #define MSR_TYPE_ARCH_COUNTER       3
 #define MSR_TYPE_ARCH_CTRL          4
 
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)(ctxt) + \
+                                                 (uintptr_t)(ctxt)->offset))
 
 /* Arch specific operations shared by all vpmus */
 struct arch_vpmu_ops {
@@ -76,11 +78,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 7496556..e982b53 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -388,6 +388,9 @@ typedef uint64_t xen_callback_t;
 
 #endif
 
+/* Stub definition of PMU structure */
+typedef struct xen_arch_pmu {} xen_arch_pmu_t;
+
 #endif /*  __XEN_PUBLIC_ARCH_ARM_H__ */
 
 /*
diff --git a/xen/include/public/arch-x86/pmu.h b/xen/include/public/arch-x86/pmu.h
new file mode 100644
index 0000000..fc022e2
--- /dev/null
+++ b/xen/include/public/arch-x86/pmu.h
@@ -0,0 +1,63 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+
+/* AMD PMU registers and structures */
+struct xen_pmu_amd_ctxt {
+    uint32_t counters;       /* Offset to counter MSRs */
+    uint32_t ctrls;          /* Offset to control MSRs */
+    uint32_t msr_bitmap_set; /* Used by HVM only */
+};
+
+/* Intel PMU registers and structures */
+struct xen_pmu_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+
+struct xen_pmu_intel_ctxt {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
+                                    sizeof(struct xen_pmu_intel_ctxt) ? \
+                                     sizeof(struct xen_pmu_amd_ctxt) : \
+                                     sizeof(struct xen_pmu_intel_ctxt))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct xen_arch_pmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } r;
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } l;
+    union {
+        struct xen_pmu_amd_ctxt amd;
+        struct xen_pmu_intel_ctxt intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } c;
+};
+typedef struct xen_arch_pmu xen_arch_pmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
new file mode 100644
index 0000000..3ffd2cf
--- /dev/null
+++ b/xen/include/public/pmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_PMU_H__
+#define __XEN_PUBLIC_PMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/pmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xen_pmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    xen_arch_pmu_t pmu;
+};
+typedef struct xen_pmu_data xen_pmu_data_t;
+
+#endif /* __XEN_PUBLIC_PMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:54:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:54:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPD-0007Gt-5R; Mon, 17 Feb 2014 17:54:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP7-00078t-Si
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:54 +0000
Received: from [85.158.139.211:7920] by server-2.bemta-5.messagelabs.com id
	F5/48-23037-DEC42035; Mon, 17 Feb 2014 17:54:53 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392659690!4480807!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2681 invoked from network); 17 Feb 2014 17:54:52 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:52 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsjKZ003518
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:46 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsjrh029138
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:45 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsikI029122; Mon, 17 Feb 2014 17:54:44 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:44 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:54 -0500
Message-Id: <1392659764-22183-8-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 07/17] x86/VPMU: Add public xenpmu.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add pmu.h header files and move into them the macros and structures that
will be shared between the hypervisor and PV guests.

Move MSR banks out of architectural PMU structures to allow for larger sizes
in the future. The banks are allocated immediately after the context and
PMU structures store offsets to them.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c              | 71 ++++++++++++++------------
 xen/arch/x86/hvm/vmx/vpmu_core2.c        | 87 +++++++++++++++++---------------
 xen/arch/x86/hvm/vpmu.c                  |  1 +
 xen/arch/x86/oprofile/op_model_ppro.c    |  6 ++-
 xen/include/asm-x86/hvm/vmx/vpmu_core2.h | 32 ------------
 xen/include/asm-x86/hvm/vpmu.h           | 13 ++---
 xen/include/public/arch-arm.h            |  3 ++
 xen/include/public/arch-x86/pmu.h        | 63 +++++++++++++++++++++++
 xen/include/public/pmu.h                 | 38 ++++++++++++++
 9 files changed, 199 insertions(+), 115 deletions(-)
 delete mode 100644 xen/include/asm-x86/hvm/vmx/vpmu_core2.h
 create mode 100644 xen/include/public/arch-x86/pmu.h
 create mode 100644 xen/include/public/pmu.h

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 2fbe2c1..d199f08 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -30,10 +30,7 @@
 #include <asm/apic.h>
 #include <asm/hvm/vlapic.h>
 #include <asm/hvm/vpmu.h>
-
-#define F10H_NUM_COUNTERS 4
-#define F15H_NUM_COUNTERS 6
-#define MAX_NUM_COUNTERS F15H_NUM_COUNTERS
+#include <public/pmu.h>
 
 #define MSR_F10H_EVNTSEL_GO_SHIFT   40
 #define MSR_F10H_EVNTSEL_EN_SHIFT   22
@@ -49,6 +46,10 @@ static const u32 __read_mostly *counters;
 static const u32 __read_mostly *ctrls;
 static bool_t __read_mostly k7_counters_mirrored;
 
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
 /* PMU Counter MSRs. */
 static const u32 AMD_F10H_COUNTERS[] = {
     MSR_K7_PERFCTR0,
@@ -83,13 +84,6 @@ static const u32 AMD_F15H_CTRLS[] = {
     MSR_AMD_FAM15H_EVNTSEL5
 };
 
-/* storage for context switching */
-struct amd_vpmu_context {
-    u64 counters[MAX_NUM_COUNTERS];
-    u64 ctrls[MAX_NUM_COUNTERS];
-    bool_t msr_bitmap_set;
-};
-
 static inline int get_pmu_reg_type(u32 addr)
 {
     if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
@@ -142,7 +136,7 @@ static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -157,7 +151,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
 
     for ( i = 0; i < num_counters; i++ )
     {
@@ -177,19 +171,22 @@ static inline void context_load(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     for ( i = 0; i < num_counters; i++ )
     {
-        wrmsrl(counters[i], ctxt->counters[i]);
-        wrmsrl(ctrls[i], ctxt->ctrls[i]);
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
     }
 }
 
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     vpmu_reset(vpmu, VPMU_FROZEN);
 
@@ -198,7 +195,7 @@ static void amd_vpmu_load(struct vcpu *v)
         unsigned int i;
 
         for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctxt->ctrls[i]);
+            wrmsrl(ctrls[i], ctrl_regs[i]);
 
         return;
     }
@@ -212,17 +209,18 @@ static inline void context_save(struct vcpu *v)
 {
     unsigned int i;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
 
     /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
     for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], ctxt->counters[i]);
+        rdmsrl(counters[i], counter_regs[i]);
 }
 
 static int amd_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctx = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
     unsigned int i;
 
     /*
@@ -256,7 +254,9 @@ static void context_update(unsigned int msr, u64 msr_content)
     unsigned int i;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct amd_vpmu_context *ctxt = vpmu->context;
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
 
     if ( k7_counters_mirrored &&
         ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
@@ -268,12 +268,12 @@ static void context_update(unsigned int msr, u64 msr_content)
     {
        if ( msr == ctrls[i] )
        {
-           ctxt->ctrls[i] = msr_content;
+           ctrl_regs[i] = msr_content;
            return;
        }
         else if (msr == counters[i] )
         {
-            ctxt->counters[i] = msr_content;
+            counter_regs[i] = msr_content;
             return;
         }
     }
@@ -300,7 +300,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         vpmu_set(vpmu, VPMU_RUNNING);
 
         if ( is_hvm_domain(v->domain) &&
-             !((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -310,7 +310,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
         if ( is_hvm_domain(v->domain) &&
-             ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
     }
@@ -351,7 +351,7 @@ static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 
 static int amd_vpmu_initialise(struct vcpu *v)
 {
-    struct amd_vpmu_context *ctxt;
+    struct xen_pmu_amd_ctxt *ctxt;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     uint8_t family = current_cpu_data.x86;
 
@@ -381,7 +381,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc(struct amd_vpmu_context);
+    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
     if ( !ctxt )
     {
         gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
@@ -390,6 +392,9 @@ static int amd_vpmu_initialise(struct vcpu *v)
         return -ENOMEM;
     }
 
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
     vpmu->context = ctxt;
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
     return 0;
@@ -403,7 +408,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
         return;
 
     if ( is_hvm_domain(v->domain) &&
-         ((struct amd_vpmu_context *)vpmu->context)->msr_bitmap_set )
+         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
@@ -420,7 +425,9 @@ static void amd_vpmu_destroy(struct vcpu *v)
 static void amd_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct amd_vpmu_context *ctxt = vpmu->context;
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
     unsigned int i;
 
     printk("    VPMU state: 0x%x ", vpmu->flags);
@@ -450,8 +457,8 @@ static void amd_vpmu_dump(const struct vcpu *v)
         rdmsrl(ctrls[i], ctrl);
         rdmsrl(counters[i], cntr);
         printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctxt->ctrls[i], ctrl,
-               counters[i], ctxt->counters[i], cntr);
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
     }
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index c66289a..856281d 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -35,8 +35,8 @@
 #include <asm/hvm/vmx/vmcs.h>
 #include <public/sched.h>
 #include <public/hvm/save.h>
+#include <public/pmu.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 /*
  * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
@@ -68,6 +68,10 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
 static bool_t __read_mostly full_width_write;
 
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -75,16 +79,6 @@ static bool_t __read_mostly full_width_write;
 #define FIXED_CTR_CTRL_BITS 4
 #define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
 
-#define VPMU_CORE2_MAX_FIXED_PMCS     4
-struct core2_vpmu_context {
-    u64 fixed_ctrl;
-    u64 ds_area;
-    u64 pebs_enable;
-    u64 global_ovf_status;
-    u64 fix_counters[VPMU_CORE2_MAX_FIXED_PMCS];
-    struct arch_msr_pair arch_msr_pair[1];
-};
-
 /* Number of general-purpose and fixed performance counters */
 static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
 
@@ -224,6 +218,7 @@ static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
     return 0;
 }
 
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
 {
     int i;
@@ -293,12 +288,15 @@ static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
 static inline void __core2_vpmu_save(struct vcpu *v)
 {
     int i;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
@@ -320,10 +318,13 @@ static int core2_vpmu_save(struct vcpu *v)
 static inline void __core2_vpmu_load(struct vcpu *v)
 {
     unsigned int i, pmc_start;
-    struct core2_vpmu_context *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, core2_vpmu_cxt->fix_counters[i]);
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
 
     if ( full_width_write )
         pmc_start = MSR_IA32_A_PERFCTR0;
@@ -331,8 +332,8 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         pmc_start = MSR_IA32_PERFCTR0;
     for ( i = 0; i < arch_pmc_cnt; i++ )
     {
-        wrmsrl(pmc_start + i, core2_vpmu_cxt->arch_msr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, core2_vpmu_cxt->arch_msr_pair[i].control);
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
     }
 
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
@@ -355,7 +356,7 @@ static void core2_vpmu_load(struct vcpu *v)
 static int core2_vpmu_alloc_resource(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
@@ -368,11 +369,16 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
         goto out_err;
     vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct core2_vpmu_context) +
-                    (arch_pmc_cnt-1)*sizeof(struct arch_msr_pair));
+    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+				   sizeof(uint64_t) * fixed_pmc_cnt +
+				   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
     if ( !core2_vpmu_cxt )
         goto out_err;
 
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
     vpmu->context = (void *)core2_vpmu_cxt;
 
     vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
@@ -420,7 +426,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -449,7 +455,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     switch ( msr )
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_ovf_status &= ~msr_content;
+        core2_vpmu_cxt->global_status &= ~msr_content;
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
@@ -512,11 +518,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         tmp = msr - MSR_P6_EVNTSEL0;
         if ( tmp >= 0 && tmp < arch_pmc_cnt )
         {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            core2_vpmu_cxt->arch_msr_pair[tmp].control = msr_content;
+            xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
-                    (core2_vpmu_cxt->arch_msr_pair[i].control >> 22) & 1;
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
         }
     }
 
@@ -567,7 +576,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     int type = -1, index = -1;
     struct vcpu *v = current;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
 
     if ( core2_vpmu_msr_common_check(msr, &type, &index) )
     {
@@ -578,7 +587,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = 0;
             break;
         case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_ovf_status;
+            *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -627,8 +636,11 @@ static void core2_vpmu_dump(const struct vcpu *v)
 {
     const struct vpmu_struct *vpmu = vcpu_vpmu(v);
     int i;
-    const struct core2_vpmu_context *core2_vpmu_cxt = NULL;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
     u64 val;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
          return;
@@ -647,12 +659,9 @@ static void core2_vpmu_dump(const struct vcpu *v)
 
     /* Print the contents of the counter and its configuration msr. */
     for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        const struct arch_msr_pair *msr_pair = core2_vpmu_cxt->arch_msr_pair;
-
         printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-               i, msr_pair[i].counter, msr_pair[i].control);
-    }
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
     /*
      * The configuration of the fixed counter is 4 bits each in the
      * MSR_CORE_PERF_FIXED_CTR_CTRL.
@@ -661,7 +670,7 @@ static void core2_vpmu_dump(const struct vcpu *v)
     for ( i = 0; i < fixed_pmc_cnt; i++ )
     {
         printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, core2_vpmu_cxt->fix_counters[i],
+               i, fixed_counters[i],
                val & FIXED_CTR_CTRL_MASK);
         val >>= FIXED_CTR_CTRL_BITS;
     }
@@ -672,14 +681,14 @@ static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vcpu *v = current;
     u64 msr_content;
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct core2_vpmu_context *core2_vpmu_cxt = vpmu->context;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
 
     rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
     if ( msr_content )
     {
         if ( is_pmc_quirk )
             handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_ovf_status |= msr_content;
+        core2_vpmu_cxt->global_status |= msr_content;
         msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
     }
@@ -742,12 +751,6 @@ func_out:
 
     arch_pmc_cnt = core2_get_arch_pmc_count();
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    if ( fixed_pmc_cnt > VPMU_CORE2_MAX_FIXED_PMCS )
-    {
-        fixed_pmc_cnt = VPMU_CORE2_MAX_FIXED_PMCS;
-        printk(XENLOG_G_WARNING "Limiting number of fixed counters to %d\n",
-               fixed_pmc_cnt);
-    }
     check_pmc_quirk();
 
     return 0;
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 979bd33..a6e933a 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -31,6 +31,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <public/pmu.h>
 
 /*
  * "vpmu" :     vpmu generally enabled
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 3225937..5aae2e7 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -20,11 +20,15 @@
 #include <asm/regs.h>
 #include <asm/current.h>
 #include <asm/hvm/vpmu.h>
-#include <asm/hvm/vmx/vpmu_core2.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
 
+struct arch_msr_pair {
+    u64 counter;
+    u64 control;
+};
+
 /*
  * Intel "Architectural Performance Monitoring" CPUID
  * detection/enumeration details:
diff --git a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h b/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
deleted file mode 100644
index 410372d..0000000
--- a/xen/include/asm-x86/hvm/vmx/vpmu_core2.h
+++ /dev/null
@@ -1,32 +0,0 @@
-
-/*
- * vpmu_core2.h: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_CORE_H_
-#define __ASM_X86_HVM_VPMU_CORE_H_
-
-struct arch_msr_pair {
-    u64 counter;
-    u64 control;
-};
-
-#endif /* __ASM_X86_HVM_VPMU_CORE_H_ */
-
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index 7ee0f01..87a72ce 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -22,6 +22,8 @@
 #ifndef __ASM_X86_HVM_VPMU_H_
 #define __ASM_X86_HVM_VPMU_H_
 
+#include <public/pmu.h>
+
 /*
  * Flag bits given as a string on the hypervisor boot parameter 'vpmu'.
  * See arch/x86/hvm/vpmu.c.
@@ -29,12 +31,9 @@
 #define VPMU_BOOT_ENABLED 0x1    /* vpmu generally enabled. */
 #define VPMU_BOOT_BTS     0x2    /* Intel BTS feature wanted. */
 
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
 #define vcpu_vpmu(vcpu)   (&((vcpu)->arch.hvm_vcpu.vpmu))
 #define vpmu_vcpu(vpmu)   (container_of((vpmu), struct vcpu, \
                                           arch.hvm_vcpu.vpmu))
-#define vpmu_domain(vpmu) (vpmu_vcpu(vpmu)->domain)
 
 #define MSR_TYPE_COUNTER            0
 #define MSR_TYPE_CTRL               1
@@ -42,6 +41,9 @@
 #define MSR_TYPE_ARCH_COUNTER       3
 #define MSR_TYPE_ARCH_CTRL          4
 
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)(ctxt) + \
+                                                 (uintptr_t)(ctxt)->offset))
 
 /* Arch specific operations shared by all vpmus */
 struct arch_vpmu_ops {
@@ -76,11 +78,6 @@ struct vpmu_struct {
 #define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
 #define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
 
-/* VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-
 #define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
 #define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
 #define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 7496556..e982b53 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -388,6 +388,9 @@ typedef uint64_t xen_callback_t;
 
 #endif
 
+/* Stub definition of PMU structure */
+typedef struct xen_arch_pmu {} xen_arch_pmu_t;
+
 #endif /*  __XEN_PUBLIC_ARCH_ARM_H__ */
 
 /*
diff --git a/xen/include/public/arch-x86/pmu.h b/xen/include/public/arch-x86/pmu.h
new file mode 100644
index 0000000..fc022e2
--- /dev/null
+++ b/xen/include/public/arch-x86/pmu.h
@@ -0,0 +1,63 @@
+#ifndef __XEN_PUBLIC_ARCH_X86_PMU_H__
+#define __XEN_PUBLIC_ARCH_X86_PMU_H__
+
+/* x86-specific PMU definitions */
+
+/* AMD PMU registers and structures */
+struct xen_pmu_amd_ctxt {
+    uint32_t counters;       /* Offset to counter MSRs */
+    uint32_t ctrls;          /* Offset to control MSRs */
+    uint32_t msr_bitmap_set; /* Used by HVM only */
+};
+
+/* Intel PMU registers and structures */
+struct xen_pmu_cntr_pair {
+    uint64_t counter;
+    uint64_t control;
+};
+
+struct xen_pmu_intel_ctxt {
+    uint64_t global_ctrl;
+    uint64_t global_ovf_ctrl;
+    uint64_t global_status;
+    uint64_t fixed_ctrl;
+    uint64_t ds_area;
+    uint64_t pebs_enable;
+    uint64_t debugctl;
+    uint32_t fixed_counters;  /* Offset to fixed counter MSRs */
+    uint32_t arch_counters;   /* Offset to architectural counter MSRs */
+};
+
+#define XENPMU_MAX_CTXT_SZ        (sizeof(struct xen_pmu_amd_ctxt) > \
+                                    sizeof(struct xen_pmu_intel_ctxt) ? \
+                                     sizeof(struct xen_pmu_amd_ctxt) : \
+                                     sizeof(struct xen_pmu_intel_ctxt))
+#define XENPMU_CTXT_PAD_SZ        (((XENPMU_MAX_CTXT_SZ + 64) & ~63) + 128)
+struct xen_arch_pmu {
+    union {
+        struct cpu_user_regs regs;
+        uint8_t pad1[256];
+    } r;
+    union {
+        uint32_t lapic_lvtpc;
+        uint64_t pad2;
+    } l;
+    union {
+        struct xen_pmu_amd_ctxt amd;
+        struct xen_pmu_intel_ctxt intel;
+        uint8_t pad3[XENPMU_CTXT_PAD_SZ];
+    } c;
+};
+typedef struct xen_arch_pmu xen_arch_pmu_t;
+
+#endif /* __XEN_PUBLIC_ARCH_X86_PMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
new file mode 100644
index 0000000..3ffd2cf
--- /dev/null
+++ b/xen/include/public/pmu.h
@@ -0,0 +1,38 @@
+#ifndef __XEN_PUBLIC_PMU_H__
+#define __XEN_PUBLIC_PMU_H__
+
+#include "xen.h"
+#if defined(__i386__) || defined(__x86_64__)
+#include "arch-x86/pmu.h"
+#elif defined (__arm__) || defined (__aarch64__)
+#include "arch-arm.h"
+#else
+#error "Unsupported architecture"
+#endif
+
+#define XENPMU_VER_MAJ    0
+#define XENPMU_VER_MIN    0
+
+
+/* Shared between hypervisor and PV domain */
+struct xen_pmu_data {
+    uint32_t domain_id;
+    uint32_t vcpu_id;
+    uint32_t pcpu_id;
+    uint32_t pmu_flags;
+
+    xen_arch_pmu_t pmu;
+};
+typedef struct xen_pmu_data xen_pmu_data_t;
+
+#endif /* __XEN_PUBLIC_PMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPD-0007Ic-Uz; Mon, 17 Feb 2014 17:54:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSP9-0007AN-Bs
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:55 +0000
Received: from [85.158.143.35:48296] by server-1.bemta-4.messagelabs.com id
	AC/A2-31661-EEC42035; Mon, 17 Feb 2014 17:54:54 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392659692!6324943!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18488 invoked from network); 17 Feb 2014 17:54:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:53 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHslqD003568
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:48 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHskhS029180
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:47 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHskt7001716; Mon, 17 Feb 2014 17:54:46 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:46 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:57 -0500
Message-Id: <1392659764-22183-11-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 10/17] x86/VPMU: Initialize PMU for PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Code for initializing/tearing down PMU for PV guests

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/svm.c        |  6 ++-
 xen/arch/x86/hvm/svm/vpmu.c       | 40 +++++++++---------
 xen/arch/x86/hvm/vmx/vmx.c        |  6 ++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 51 +++++++++++++++--------
 xen/arch/x86/hvm/vpmu.c           | 85 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/pmu.h          |  2 +
 xen/include/public/xen.h          |  1 +
 xen/include/xen/softirq.h         |  1 +
 10 files changed, 154 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..9ee0c1f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1045,7 +1045,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     svm_guest_osvw_init(v);
 
@@ -1054,7 +1055,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
 
 static void svm_vcpu_destroy(struct vcpu *v)
 {
-    vpmu_destroy(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_destroy(v);
     svm_destroy_vmcb(v);
     passive_domain_destroy(v);
 }
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 97cd6e5..346aa51 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -381,16 +381,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( !is_pv_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     " PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->vcpu_id, v->domain->domain_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
 
     ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -407,18 +412,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) &&
-         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( is_hvm_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        release_pmu_ownship(PMU_OWNER_HVM);
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+    release_pmu_ownship(PMU_OWNER_HVM);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f6409d6..76c1bc8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -112,7 +112,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     vmx_install_vlapic_mapping(v);
 
@@ -126,7 +127,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 static void vmx_vcpu_destroy(struct vcpu *v)
 {
     vmx_destroy_vmcs(v);
-    vpmu_destroy(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_destroy(v);
     passive_domain_destroy(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index efbebe2..de1e09d 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -358,22 +358,30 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
 
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-				   sizeof(uint64_t) * fixed_pmc_cnt +
-				   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-    if ( !core2_vpmu_cxt )
-        goto out_err;
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
 
     core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
@@ -753,6 +761,10 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
     return 0;
 }
 
@@ -763,11 +775,16 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
     release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 50f6bb8..68897f6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,10 +21,14 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
+#include <asm/p2m.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vmx/vmcs.h>
@@ -267,7 +271,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -312,6 +322,67 @@ static void vpmu_unload_all(void)
     }
 }
 
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gfn = params->d.val;
+
+    if ( !is_pv_domain(d) )
+        return -EINVAL;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page_and_type(page);
+        return -EINVAL;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
 
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
 {
@@ -372,7 +443,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index db952af..f0aaa63 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index b945e8f..0f3de14 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -60,6 +60,7 @@ struct vpmu_struct {
     u32 hw_lapic_lvtpc;
     void *context;
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index f91d935..814e061 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a00ab21..2eb5fd7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:00 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:57 -0500
Message-Id: <1392659764-22183-11-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 10/17] x86/VPMU: Initialize PMU for PV guests

Add code for initializing and tearing down the PMU for PV guests.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/svm.c        |  6 ++-
 xen/arch/x86/hvm/svm/vpmu.c       | 40 +++++++++---------
 xen/arch/x86/hvm/vmx/vmx.c        |  6 ++-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 51 +++++++++++++++--------
 xen/arch/x86/hvm/vpmu.c           | 85 ++++++++++++++++++++++++++++++++++++++-
 xen/common/event_channel.c        |  1 +
 xen/include/asm-x86/hvm/vpmu.h    |  1 +
 xen/include/public/pmu.h          |  2 +
 xen/include/public/xen.h          |  1 +
 xen/include/xen/softirq.h         |  1 +
 10 files changed, 154 insertions(+), 40 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..9ee0c1f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1045,7 +1045,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     svm_guest_osvw_init(v);
 
@@ -1054,7 +1055,8 @@ static int svm_vcpu_initialise(struct vcpu *v)
 
 static void svm_vcpu_destroy(struct vcpu *v)
 {
-    vpmu_destroy(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_destroy(v);
     svm_destroy_vmcb(v);
     passive_domain_destroy(v);
 }
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 97cd6e5..346aa51 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -381,16 +381,21 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-			 sizeof(uint64_t) * AMD_MAX_COUNTERS);
-    if ( !ctxt )
+    if ( !is_pv_domain(v->domain) )
     {
-        gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-            " PMU feature is unavailable on domain %d vcpu %d.\n",
-            v->vcpu_id, v->domain->domain_id);
-        return -ENOMEM;
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
+                     " PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->vcpu_id, v->domain->domain_id);
+            return -ENOMEM;
+        }
     }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
 
     ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
     ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
@@ -407,18 +412,17 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) &&
-         ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    xfree(vpmu->context);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( is_hvm_domain(v->domain) )
     {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        release_pmu_ownship(PMU_OWNER_HVM);
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
     }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+    release_pmu_ownship(PMU_OWNER_HVM);
 }
 
 /* VPMU part of the 'q' keyhandler */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f6409d6..76c1bc8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -112,7 +112,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
         return rc;
     }
 
-    vpmu_initialise(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_initialise(v);
 
     vmx_install_vlapic_mapping(v);
 
@@ -126,7 +127,8 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 static void vmx_vcpu_destroy(struct vcpu *v)
 {
     vmx_destroy_vmcs(v);
-    vpmu_destroy(v);
+    if ( is_hvm_domain(v->domain) )
+        vpmu_destroy(v);
     passive_domain_destroy(v);
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index efbebe2..de1e09d 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -358,22 +358,30 @@ static int core2_vpmu_alloc_resource(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
     struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
 
-    if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-        return 0;
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
 
-    wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-    if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
 
-    if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-        goto out_err;
-    vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
-    core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-				   sizeof(uint64_t) * fixed_pmc_cnt +
-				   sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-    if ( !core2_vpmu_cxt )
-        goto out_err;
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
 
     core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
     core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
@@ -753,6 +761,10 @@ func_out:
     fixed_pmc_cnt = core2_get_fixed_pmc_count();
     check_pmc_quirk();
 
+    /* PV domains can allocate resources immediately */
+    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
     return 0;
 }
 
@@ -763,11 +775,16 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    xfree(vpmu->context);
-    if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    if ( is_hvm_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
     release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu_reset(vpmu, VPMU_CONTEXT_ALLOCATED);
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
 }
 
 struct arch_vpmu_ops core2_vpmu_ops = {
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 50f6bb8..68897f6 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -21,10 +21,14 @@
 #include <xen/config.h>
 #include <xen/sched.h>
 #include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/msr.h>
+#include <asm/p2m.h>
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/vmx/vmcs.h>
@@ -267,7 +271,13 @@ void vpmu_destroy(struct vcpu *v)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
         vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
 }
 
 /* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
@@ -312,6 +322,67 @@ static void vpmu_unload_all(void)
     }
 }
 
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gfn = params->d.val;
+
+    if ( !is_pv_domain(d) )
+        return -EINVAL;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page_and_type(page);
+        return -EINVAL;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
 
 long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
 {
@@ -372,7 +443,19 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         ret = 0;
         break;
-     }
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index db952af..f0aaa63 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -108,6 +108,7 @@ static int virq_is_global(uint32_t virq)
     case VIRQ_TIMER:
     case VIRQ_DEBUG:
     case VIRQ_XENOPROF:
+    case VIRQ_XENPMU:
         rc = 0;
         break;
     case VIRQ_ARCH_0 ... VIRQ_ARCH_7:
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
index b945e8f..0f3de14 100644
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ b/xen/include/asm-x86/hvm/vpmu.h
@@ -60,6 +60,7 @@ struct vpmu_struct {
     u32 hw_lapic_lvtpc;
     void *context;
     struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
 };
 
 /* VPMU states */
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index f91d935..814e061 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -25,6 +25,8 @@
 #define XENPMU_mode_set        1
 #define XENPMU_feature_get     2
 #define XENPMU_feature_set     3
+#define XENPMU_init            4
+#define XENPMU_finish          5
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index a00ab21..2eb5fd7 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -161,6 +161,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occured           */
 #define VIRQ_XC_RESERVED 11 /* G. Reserved for XenClient                     */
 #define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
+#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 0c0d481..5829fa4 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -8,6 +8,7 @@ enum {
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
     RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
+    PMU_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
 
-- 
1.8.1.4
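The new XENPMU_init/XENPMU_finish cases in the patch above extend the do_xenpmu_op() switch: copy the guest argument first, then route on the op code. A minimal standalone sketch of that dispatch shape is below; the op values match the public header added by this series, but `toy_pmu_params`, `toy_copy_from_guest`, and the stubbed-out bodies are simplified stand-ins, not Xen's implementation.

```c
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Op codes as in xen/include/public/pmu.h (this patch adds 4 and 5). */
#define XENPMU_mode_get    0
#define XENPMU_mode_set    1
#define XENPMU_feature_get 2
#define XENPMU_feature_set 3
#define XENPMU_init        4
#define XENPMU_finish      5

struct toy_pmu_params { uint64_t val; uint32_t vcpu; };

/* copy_from_guest() stand-in: a NULL guest pointer models a faulting copy. */
static int toy_copy_from_guest(struct toy_pmu_params *dst,
                               const struct toy_pmu_params *src)
{
    if ( src == NULL )
        return 1;
    memcpy(dst, src, sizeof(*dst));
    return 0;
}

/* Dispatch shape of do_xenpmu_op(): -EFAULT on a failed copy, -EINVAL for
 * ops this sketch does not model, 0 from the (stubbed) init/finish paths. */
static long toy_do_xenpmu_op(int op, const struct toy_pmu_params *arg)
{
    struct toy_pmu_params pmu_params;

    switch ( op )
    {
    case XENPMU_init:
    case XENPMU_finish:
        if ( toy_copy_from_guest(&pmu_params, arg) )
            return -EFAULT;
        return 0;              /* pvpmu_init()/pvpmu_finish() stubbed out */
    default:
        return -EINVAL;
    }
}
```

The copy-then-dispatch order matters: the argument is validated in hypervisor-owned memory, so a guest cannot change it between the check and the use.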



From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:03 2014
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:58 -0500
Message-Id: <1392659764-22183-12-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 11/17] x86/VPMU: Add support for PMU register
	handling on PV guests

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 60 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vpmu.c           |  7 +++++
 xen/arch/x86/traps.c              | 31 +++++++++++++++++++-
 xen/include/public/pmu.h          |  1 +
 5 files changed, 89 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b615c07..b389abc 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1995,8 +1995,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index de1e09d..9eac418 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,19 +298,26 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -339,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -426,6 +441,14 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
@@ -452,7 +475,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -464,11 +487,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -484,7 +508,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -509,10 +533,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
         global_ctrl >>= 32;
         for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
@@ -529,7 +557,10 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
             xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
@@ -568,13 +599,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+        if ( !is_pv_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -598,7 +635,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 68897f6..789eb2a 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -455,6 +455,13 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0bd43b9..442d3fb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -865,8 +866,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
         __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
         break;
 
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        break;
+
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -882,6 +885,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
     }
 
  out:
+    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);
+
     regs->eax = a;
     regs->ebx = b;
     regs->ecx = c;
@@ -2497,6 +2502,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( wrmsr_safe(regs->ecx, msr_content) != 0 )
                 goto fail;
             break;
+        case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0 ... MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL ... MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0 ... MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2584,6 +2597,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             regs->eax = (uint32_t)msr_content;
             regs->edx = (uint32_t)(msr_content >> 32);
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0 ... MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL ... MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0 ... MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
             {
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 814e061..81783de 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPF-0007Li-EU; Mon, 17 Feb 2014 17:55:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPB-0007Cr-2G
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:57 +0000
Received: from [85.158.137.68:53388] by server-10.bemta-3.messagelabs.com id
	11/41-07302-0FC42035; Mon, 17 Feb 2014 17:54:56 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392659693!2462161!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9765 invoked from network); 17 Feb 2014 17:54:55 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:55 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsneJ024887
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:50 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsmct027919
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:49 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHslJS014581; Mon, 17 Feb 2014 17:54:47 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:47 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:58 -0500
Message-Id: <1392659764-22183-12-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 11/17] x86/VPMU: Add support for PMU register
	handling on PV guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Intercept accesses to PMU MSRs and process them in the VPMU module.

Dump VPMU state for all domains (HVM and PV) when requested.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c             |  3 +-
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 60 ++++++++++++++++++++++++++++++++-------
 xen/arch/x86/hvm/vpmu.c           |  7 +++++
 xen/arch/x86/traps.c              | 31 +++++++++++++++++++-
 xen/include/public/pmu.h          |  1 +
 5 files changed, 89 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b615c07..b389abc 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1995,8 +1995,7 @@ void arch_dump_vcpu_info(struct vcpu *v)
 {
     paging_dump_vcpu_info(v);
 
-    if ( is_hvm_vcpu(v) )
-        vpmu_dump(v);
+    vpmu_dump(v);
 }
 
 void domain_cpuid(
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index de1e09d..9eac418 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -27,6 +27,7 @@
 #include <asm/regs.h>
 #include <asm/types.h>
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/msr.h>
 #include <asm/msr-index.h>
 #include <asm/hvm/support.h>
@@ -297,19 +298,26 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( is_pv_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
 static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
+    if ( is_pv_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
         return 0;
 
     __core2_vpmu_save(v);
 
     /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap &&
+         !is_pv_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -339,6 +347,13 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( is_pv_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
 }
 
 static void core2_vpmu_load(struct vcpu *v)
@@ -426,6 +441,14 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     return 1;
 }
 
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( !is_pv_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
 static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     u64 global_ctrl, non_global_ctrl;
@@ -452,7 +475,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
                     return 1;
                 gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 0;
             }
         }
@@ -464,11 +487,12 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     {
     case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
         return 1;
     case MSR_CORE_PERF_GLOBAL_STATUS:
         gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
                  "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        hvm_inject_hw_exception(TRAP_gp_fault, 0);
+        inject_trap(v, TRAP_gp_fault);
         return 1;
     case MSR_IA32_PEBS_ENABLE:
         if ( msr_content & 1 )
@@ -484,7 +508,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 gdprintk(XENLOG_WARNING,
                          "Illegal address for IA32_DS_AREA: %#" PRIx64 "x\n",
                          msr_content);
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
+                inject_trap(v, TRAP_gp_fault);
                 return 1;
             }
             core2_vpmu_cxt->ds_area = msr_content;
@@ -509,10 +533,14 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
             global_ctrl >>= 1;
         }
+        core2_vpmu_cxt->global_ctrl = msr_content;
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        if ( !is_pv_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
         global_ctrl >>= 32;
         for ( i = 0; i < fixed_pmc_cnt; i++ )
         {
@@ -529,7 +557,10 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
             xen_pmu_cntr_pair[tmp].control = msr_content;
             for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
                 pmu_enable += (global_ctrl >> i) &
@@ -568,13 +599,19 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
                 inject_gp = 1;
             break;
         }
-        if (inject_gp)
-            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
         else
             wrmsrl(msr, msr_content);
     }
     else
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    {
+        if ( !is_pv_domain(v->domain) )
+            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+        else
+            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
 
     return 1;
 }
@@ -598,7 +635,10 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            if ( !is_pv_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
             break;
         default:
             rdmsrl(msr, *msr_content);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 68897f6..789eb2a 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -455,6 +455,13 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
             return -EFAULT;
         pvpmu_finish(current->domain, &pmu_params);
         break;
+
+    case XENPMU_lvtpc_set:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0bd43b9..442d3fb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,6 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
+#include <asm/hvm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
@@ -865,8 +866,10 @@ void pv_cpuid(struct cpu_user_regs *regs)
         __clear_bit(X86_FEATURE_TOPOEXT % 32, &c);
         break;
 
+    case 0x0000000a: /* Architectural Performance Monitor Features (Intel) */
+        break;
+
     case 0x00000005: /* MONITOR/MWAIT */
-    case 0x0000000a: /* Architectural Performance Monitor Features */
     case 0x0000000b: /* Extended Topology Enumeration */
     case 0x8000000a: /* SVM revision and features */
     case 0x8000001b: /* Instruction Based Sampling */
@@ -882,6 +885,8 @@ void pv_cpuid(struct cpu_user_regs *regs)
     }
 
  out:
+    vpmu_do_cpuid(regs->eax, &a, &b, &c, &d);
+
     regs->eax = a;
     regs->ebx = b;
     regs->ecx = c;
@@ -2497,6 +2502,14 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             if ( wrmsr_safe(regs->ecx, msr_content) != 0 )
                 goto fail;
             break;
+        case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0 ... MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL ... MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0 ... MSR_AMD_FAM15H_PERFCTR5:
+            if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
+                goto invalid;
+            break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
                 break;
@@ -2584,6 +2597,22 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
             regs->eax = (uint32_t)msr_content;
             regs->edx = (uint32_t)(msr_content >> 32);
             break;
+        case MSR_IA32_PERF_CAPABILITIES:
+            /* No extra capabilities are supported */
+            regs->eax = regs->edx = 0;
+            break;
+        case MSR_P6_PERFCTR0 ... MSR_P6_PERFCTR1:
+        case MSR_P6_EVNTSEL0 ... MSR_P6_EVNTSEL1:
+        case MSR_CORE_PERF_FIXED_CTR0 ... MSR_CORE_PERF_FIXED_CTR2:
+        case MSR_CORE_PERF_FIXED_CTR_CTRL ... MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        case MSR_AMD_FAM15H_EVNTSEL0 ... MSR_AMD_FAM15H_PERFCTR5:
+            if ( vpmu_do_rdmsr(regs->ecx, &msr_content) )
+            {
+                regs->eax = (uint32_t)msr_content;
+                regs->edx = (uint32_t)(msr_content >> 32);
+                break;
+            }
+            goto rdmsr_normal;
         default:
             if ( rdmsr_hypervisor_regs(regs->ecx, &val) )
             {
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 814e061..81783de 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -27,6 +27,7 @@
 #define XENPMU_feature_set     3
 #define XENPMU_init            4
 #define XENPMU_finish          5
+#define XENPMU_lvtpc_set       6
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPH-0007OS-HE; Mon, 17 Feb 2014 17:55:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPB-0007DA-7l
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:57 +0000
Received: from [85.158.143.35:48414] by server-1.bemta-4.messagelabs.com id
	F2/B2-31661-0FC42035; Mon, 17 Feb 2014 17:54:56 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392659694!6304462!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28709 invoked from network); 17 Feb 2014 17:54:55 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:55 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsn08024886
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:49 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1HHsnmY014616
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:49 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HHsmvk014598; Mon, 17 Feb 2014 17:54:48 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:48 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:59 -0500
Message-Id: <1392659764-22183-13-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 12/17] x86/VPMU: Handle PMU interrupts for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for handling PMU interrupts for PV guests.

The VPMU for the interrupted VCPU is unloaded until the guest issues the
XENPMU_flush hypercall. This allows the guest to access PMU MSR values that
are stored in the VPMU context, which is shared between the hypervisor and
the domain, thus avoiding traps to the hypervisor.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c  | 110 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/pmu.h |   7 +++
 2 files changed, 112 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 789eb2a..abc4c1f 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -76,7 +76,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -87,7 +92,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
@@ -99,14 +120,86 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        const struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+             * and therefore we treat it the same way as a non-privileged
+             * PV 32-bit domain.
+             */
+            struct compat_cpu_user_regs *cmp;
+
+            gregs = guest_cpu_user_regs();
+
+            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            XLAT_cpu_user_regs(cmp, gregs);
+        }
+        else if ( !is_control_domain(current->domain) &&
+                 !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
 
     if ( vpmu->arch_vpmu_ops )
     {
@@ -225,7 +318,8 @@ void vpmu_load(struct vcpu *v)
     local_irq_enable();
 
     /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+        (is_pv_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -462,6 +556,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        vpmu_load(current);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 81783de..50f6d6d 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
@@ -65,6 +66,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  */
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
     uint32_t domain_id;
-- 
1.8.1.4



From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:55:59 -0500
Message-Id: <1392659764-22183-13-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 12/17] x86/VPMU: Handle PMU interrupts for PV
	guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for handling PMU interrupts for PV guests.

The VPMU of the interrupted VCPU stays unloaded until the guest issues the
XENPMU_flush hypercall. This allows the guest to access PMU MSR values stored
in the VPMU context, which is shared between the hypervisor and the domain,
thus avoiding traps to the hypervisor.
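The caching handshake described above can be sketched as a small standalone
model (the struct layout and helper names below are illustrative, not the
hypervisor's actual ones; only the PMU_CACHED flag value comes from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define PMU_CACHED 1   /* value from xen/include/public/pmu.h in this patch */

/* Stand-in for the page shared between hypervisor and guest
 * (struct xen_pmu_data in the real interface). */
struct shared_pmu_page {
    uint32_t pmu_flags;
    uint64_t saved_counter;   /* models the PMU MSR state saved by vpmu_save */
};

/* Hypervisor side: on a PMU interrupt for a PV guest, save context into
 * the shared page, mark it cached, and (not modeled here) send VIRQ_XENPMU. */
static void hv_pmu_interrupt(struct shared_pmu_page *p, uint64_t hw_counter)
{
    p->saved_counter = hw_counter;
    p->pmu_flags |= PMU_CACHED;
}

/* Guest side: while PMU_CACHED is set, read MSR values from the shared
 * page instead of trapping to the hypervisor. */
static uint64_t guest_read_counter(const struct shared_pmu_page *p)
{
    assert(p->pmu_flags & PMU_CACHED);  /* this model only covers the cached phase */
    return p->saved_counter;
}

/* Guest side: XENPMU_flush ends the cached phase; in the hypervisor this
 * also re-enables the LVTPC and reloads the VPMU context (vpmu_load). */
static void guest_flush(struct shared_pmu_page *p)
{
    p->pmu_flags &= ~PMU_CACHED;
}
```

The point of the flag is purely to gate where reads come from: between the
interrupt and the flush, no MSR access needs to leave the guest.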

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c  | 110 ++++++++++++++++++++++++++++++++++++++++++++---
 xen/include/public/pmu.h |   7 +++
 2 files changed, 112 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index 789eb2a..abc4c1f 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -76,7 +76,12 @@ void vpmu_lvtpc_update(uint32_t val)
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
     vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
-    apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !is_pv_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
@@ -87,7 +92,23 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-        return vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
@@ -99,14 +120,86 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-        return vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !is_hvm_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
     return 0;
 }
 
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !is_hvm_domain(v->domain) )
+    {
+        /* PV guest or dom0 is doing system profiling */
+        const struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        /* Store appropriate registers in xenpmu_data */
+        if ( is_pv_32bit_domain(current->domain) )
+        {
+            /*
+             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
+             * and therefore we treat it the same way as a non-privileged
+             * PV 32-bit domain.
+             */
+            struct compat_cpu_user_regs *cmp;
+
+            gregs = guest_cpu_user_regs();
+
+            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            XLAT_cpu_user_regs(cmp, gregs);
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   cmp, sizeof(struct compat_cpu_user_regs));
+        }
+        else if ( !is_control_domain(current->domain) &&
+                 !is_idle_vcpu(current) )
+        {
+            /* PV guest */
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+        }
+        else
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   regs, sizeof(struct cpu_user_regs));
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
 
     if ( vpmu->arch_vpmu_ops )
     {
@@ -225,7 +318,8 @@ void vpmu_load(struct vcpu *v)
     local_irq_enable();
 
     /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+        (is_pv_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -462,6 +556,12 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
         ret = 0;
         break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        vpmu_load(current);
+        ret = 0;
+        break;
     }
 
     return ret;
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 81783de..50f6d6d 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -28,6 +28,7 @@
 #define XENPMU_init            4
 #define XENPMU_finish          5
 #define XENPMU_lvtpc_set       6
+#define XENPMU_flush           7 /* Write cached MSR values to HW     */
 /* ` } */
 
 /* Parameters structure for HYPERVISOR_xenpmu_op call */
@@ -65,6 +66,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  */
 #define XENPMU_FEATURE_INTEL_BTS  1
 
+/*
+ * PMU MSRs are cached in the context so the PV guest doesn't need to trap to
+ * the hypervisor
+ */
+#define PMU_CACHED 1
+
 /* Shared between hypervisor and PV domain */
 struct xen_pmu_data {
     uint32_t domain_id;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPI-0007QH-KT; Mon, 17 Feb 2014 17:55:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPC-0007Ex-8i
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:58 +0000
Received: from [85.158.137.68:38947] by server-12.bemta-3.messagelabs.com id
	60/D4-01674-1FC42035; Mon, 17 Feb 2014 17:54:57 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392659695!2452304!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19417 invoked from network); 17 Feb 2014 17:54:56 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:56 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHspIU003618
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:51 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1HHso5c014647
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:50 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsoSJ001784; Mon, 17 Feb 2014 17:54:50 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:49 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:01 -0500
Message-Id: <1392659764-22183-15-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 14/17] x86/VPMU: Save VPMU state for PV
	guests during context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Save VPMU state during context switch for both HVM and PV guests, unless we
are in PMU privileged mode (i.e. dom0 does all the profiling).
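The effect of the context_switch() change is easiest to see as a predicate;
this sketch (the helper names and flattened arguments are ours, not Xen's)
contrasts the old HVM-only save condition with the new one:

```c
#include <assert.h>
#include <stdbool.h>

#define XENPMU_MODE_ON (1 << 0)   /* from xen/include/public/pmu.h */

/* Before this patch: only HVM vcpus had their VPMU state saved. */
static bool save_vpmu_old(bool is_hvm, bool switching, unsigned int mode)
{
    return is_hvm && switching && (mode & XENPMU_MODE_ON);
}

/* After this patch: PV vcpus are treated the same way; only an actual
 * switch with the PMU on matters. */
static bool save_vpmu_new(bool is_hvm, bool switching, unsigned int mode)
{
    (void)is_hvm;   /* no longer part of the decision */
    return switching && (mode & XENPMU_MODE_ON);
}
```

The same relaxation is applied symmetrically to the vpmu_load() call on the
incoming vcpu.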

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/domain.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b389abc..87c3a02 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1461,16 +1461,14 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     }
 
     if ( prev != next )
-        _update_runstate_area(prev);
-
-    if ( is_hvm_vcpu(prev) )
     {
-        if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
+        _update_runstate_area(prev);
+        if ( vpmu_mode & XENPMU_MODE_ON )
             vpmu_save(prev);
+    }
 
-        if ( !list_empty(&prev->arch.hvm_vcpu.tm_list) )
+    if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
             pt_save_timer(prev);
-    }
 
     local_irq_disable();
 
@@ -1508,7 +1506,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
                            (next->domain->domain_id != 0));
     }
 
-    if ( is_hvm_vcpu(next) && (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
+    if ( (prev != next) && (vpmu_mode & XENPMU_MODE_ON) )
         /* Must be done with interrupts enabled */
         vpmu_load(next);
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPJ-0007Si-Ms; Mon, 17 Feb 2014 17:55:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPC-0007FK-IS
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:58 +0000
Received: from [193.109.254.147:30040] by server-10.bemta-14.messagelabs.com
	id B7/A4-10711-1FC42035; Mon, 17 Feb 2014 17:54:57 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392659695!4950406!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20375 invoked from network); 17 Feb 2014 17:54:56 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:56 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsoNh024903
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:51 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsnbt029279
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:50 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsnvq029267; Mon, 17 Feb 2014 17:54:49 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:48 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:00 -0500
Message-Id: <1392659764-22183-14-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for a privileged PMU mode, which allows the privileged domain
(dom0) to profile both itself (and the hypervisor) and the guests. While this
mode is on, profiling in the guests is disabled.
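The validation of the new mode bit in do_xenpmu_op() can be captured as a
small helper (the function name is ours; the hypervisor performs this check
inline): the ON and PRIV bits are mutually exclusive, and no other bits may
be set.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mode values from xen/include/public/pmu.h after this patch. */
#define XENPMU_MODE_OFF  0
#define XENPMU_MODE_ON   (1 << 0)
#define XENPMU_MODE_PRIV (1 << 1)

static bool xenpmu_mode_valid(uint64_t val)
{
    /* Reject any bit outside the two defined mode bits. */
    if (val & ~(uint64_t)(XENPMU_MODE_ON | XENPMU_MODE_PRIV))
        return false;
    /* ON (guests profile themselves) and PRIV (dom0 profiles everyone)
     * are mutually exclusive. */
    if ((val & XENPMU_MODE_ON) && (val & XENPMU_MODE_PRIV))
        return false;
    return true;
}
```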

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c  | 97 +++++++++++++++++++++++++++++++++---------------
 xen/arch/x86/traps.c     |  6 ++-
 xen/include/public/pmu.h |  3 ++
 3 files changed, 76 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index abc4c1f..dc416f9 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -88,7 +88,8 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
@@ -116,7 +117,8 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
@@ -141,14 +143,18 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vpmu_struct *vpmu;
 
     /* dom0 will handle this interrupt */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
         v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
-        const struct cpu_user_regs *gregs;
+        struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
@@ -159,34 +165,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
         vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
-        /* Store appropriate registers in xenpmu_data */
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs *cmp;
-
-            gregs = guest_cpu_user_regs();
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           cmp, sizeof(struct compat_cpu_user_regs));
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
 
-            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            XLAT_cpu_user_regs(cmp, gregs);
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   &cmp, sizeof(struct compat_cpu_user_regs));
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
         }
-        else if ( !is_control_domain(current->domain) &&
-                 !is_idle_vcpu(current) )
+        else
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
+
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -492,15 +526,20 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         if ( copy_from_guest(&pmu_params, arg, 1) )
             return -EFAULT;
 
-        if ( pmu_params.d.val & ~XENPMU_MODE_ON )
+        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((pmu_params.d.val & XENPMU_MODE_ON) &&
+              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode = pmu_params.d.val;
-        if ( vpmu_mode == XENPMU_MODE_OFF )
+
+        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
             /*
              * After this VPMU context will never be loaded during context
-             * switch. We also prevent PMU MSR accesses (which can load
-             * context) when VPMU is disabled.
+             * switch. Because PMU MSR accesses load VPMU context we don't
+             * allow them when VPMU is off and, for non-privileged domains,
+             * when we are in privileged mode. (We do want these accesses to
+             * load VPMU context for the control domain in this mode.)
              */
             vpmu_unload_all();
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 442d3fb..83ea479 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2508,7 +2508,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
-                goto invalid;
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                      is_control_domain(v->domain) )
+                    goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 50f6d6d..e3352a2 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -56,9 +56,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPJ-0007Si-Ms; Mon, 17 Feb 2014 17:55:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPC-0007FK-IS
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:54:58 +0000
Received: from [193.109.254.147:30040] by server-10.bemta-14.messagelabs.com
	id B7/A4-10711-1FC42035; Mon, 17 Feb 2014 17:54:57 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392659695!4950406!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20375 invoked from network); 17 Feb 2014 17:54:56 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:56 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsoNh024903
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:51 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsnbt029279
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:50 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsnvq029267; Mon, 17 Feb 2014 17:54:49 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:48 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:00 -0500
Message-Id: <1392659764-22183-14-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 13/17] x86/VPMU: Add privileged PMU mode
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for a privileged PMU mode, which allows the privileged domain
(dom0) to profile both itself (and the hypervisor) and the guests. While this
mode is on, profiling in guests is disabled.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/vpmu.c  | 97 +++++++++++++++++++++++++++++++++---------------
 xen/arch/x86/traps.c     |  6 ++-
 xen/include/public/pmu.h |  3 ++
 3 files changed, 76 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index abc4c1f..dc416f9 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -88,7 +88,8 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
@@ -116,7 +117,8 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    if ( !(vpmu_mode & XENPMU_MODE_ON) )
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
         return 0;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
@@ -141,14 +143,18 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
     struct vpmu_struct *vpmu;
 
     /* dom0 will handle this interrupt */
-    if ( v->domain->domain_id >= DOMID_FIRST_RESERVED )
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
         v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
 
     vpmu = vcpu_vpmu(v);
-    if ( !is_hvm_domain(v->domain) )
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
         /* PV guest or dom0 is doing system profiling */
-        const struct cpu_user_regs *gregs;
+        struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
@@ -159,34 +165,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
         vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
 
-        /* Store appropriate registers in xenpmu_data */
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
         {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-priviledged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs *cmp;
-
-            gregs = guest_cpu_user_regs();
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else 
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           &cmp, sizeof(struct compat_cpu_user_regs));
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
 
-            cmp = (void *)&v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            XLAT_cpu_user_regs(cmp, gregs);
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   &cmp, sizeof(struct compat_cpu_user_regs));
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
         }
-        else if ( !is_control_domain(current->domain) &&
-                 !is_idle_vcpu(current) )
+        else
         {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
             gregs = guest_cpu_user_regs();
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
+
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs.attr.fields.dpl;
         }
-        else
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   regs, sizeof(struct cpu_user_regs));
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
@@ -492,15 +526,20 @@ long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
         if ( copy_from_guest(&pmu_params, arg, 1) )
             return -EFAULT;
 
-        if ( pmu_params.d.val & ~XENPMU_MODE_ON )
+        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((pmu_params.d.val & XENPMU_MODE_ON) &&
+              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
             return -EINVAL;
 
         vpmu_mode = pmu_params.d.val;
-        if ( vpmu_mode == XENPMU_MODE_OFF )
+
+        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
             /*
              * After this VPMU context will never be loaded during context
-             * switch. We also prevent PMU MSR accesses (which can load
-             * context) when VPMU is disabled.
+             * switch. Because PMU MSR accesses load VPMU context, we don't
+             * allow them when VPMU is off and, for non-privileged domains,
+             * when we are in privileged mode. (We do want these accesses to
+             * load VPMU context for the control domain in this mode.)
              */
             vpmu_unload_all();
 
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 442d3fb..83ea479 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2508,7 +2508,11 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
         case MSR_CORE_PERF_FIXED_CTR_CTRL...MSR_CORE_PERF_GLOBAL_OVF_CTRL:
         case MSR_AMD_FAM15H_EVNTSEL0...MSR_AMD_FAM15H_PERFCTR5:
             if ( !vpmu_do_wrmsr(regs->ecx, msr_content) )
-                goto invalid;
+            {
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                      is_control_domain(v->domain) )
+                    goto invalid;
+            }
             break;
         default:
             if ( wrmsr_hypervisor_regs(regs->ecx, msr_content) == 1 )
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index 50f6d6d..e3352a2 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -56,9 +56,12 @@ DEFINE_XEN_GUEST_HANDLE(xen_pmu_params_t);
  * - XENPMU_MODE_OFF:   No PMU virtualization
  * - XENPMU_MODE_ON:    Guests can profile themselves, dom0 profiles
  *                      itself and Xen
+ * - XENPMU_MODE_PRIV:  Only dom0 has access to VPMU and it profiles
+ *                      everyone: itself, the hypervisor and the guests.
  */
 #define XENPMU_MODE_OFF           0
 #define XENPMU_MODE_ON            (1<<0)
+#define XENPMU_MODE_PRIV          (1<<1)
 
 /*
  * PMU features:
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPL-0007W3-Ar; Mon, 17 Feb 2014 17:55:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPD-0007Hf-R4
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:55:00 +0000
Received: from [193.109.254.147:30128] by server-9.bemta-14.messagelabs.com id
	82/77-24895-3FC42035; Mon, 17 Feb 2014 17:54:59 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392659696!4961885!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29490 invoked from network); 17 Feb 2014 17:54:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsq6s024947
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHspG3029328
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:52 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHspAT028025; Mon, 17 Feb 2014 17:54:51 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:51 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:03 -0500
Message-Id: <1392659764-22183-17-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 16/17] x86/VPMU: Support for PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for PVH guests. Most operations are performed as in an HVM guest.
However, interrupt management is done in a PV-like manner.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/hvm.c            |  3 ++-
 xen/arch/x86/hvm/svm/vpmu.c       | 13 +++++++------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 24 ++++++++++++------------
 xen/arch/x86/hvm/vpmu.c           | 34 ++++++++++++++++++++++------------
 4 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..1e50c35 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3451,7 +3451,8 @@ static hvm_hypercall_t *const pvh_hypercall64_table[NR_hypercalls] = {
     [ __HYPERVISOR_physdev_op ]      = (hvm_hypercall_t *)hvm_physdev_op,
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(domctl)
+    HYPERCALL(domctl),
+    HYPERCALL(xenpmu_op)
 };
 
 int hvm_do_hypercall(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 04d3b91..0e5dac4 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -162,6 +162,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
     ctxt->msr_bitmap_set = 0;
 }
 
+/* Must be NMI-safe */
 static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     return 1;
@@ -243,7 +244,7 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( is_hvm_domain(v->domain) &&
+    if ( has_hvm_container_domain(v->domain) &&
         !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
@@ -286,7 +287,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -300,7 +301,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
 
-        if ( is_hvm_domain(v->domain) &&
+        if ( has_hvm_container_domain(v->domain) &&
              !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
@@ -310,7 +311,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_domain(v->domain) &&
+        if ( has_hvm_container_domain(v->domain) &&
              ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
@@ -382,7 +383,7 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    if ( !is_pv_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
                              sizeof(uint64_t) * AMD_MAX_COUNTERS + 
@@ -413,7 +414,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index e214f01..5a07817 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -299,7 +299,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
@@ -308,7 +308,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
@@ -318,7 +318,7 @@ static int core2_vpmu_save(struct vcpu *v)
 
     /* Unset PMU MSR bitmap to trap lazy load. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && !is_pv_domain(v->domain) )
+        && has_hvm_container_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -349,7 +349,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
     {
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
         core2_vpmu_cxt->global_ovf_ctrl = 0;
@@ -436,7 +436,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
+        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -444,7 +444,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 
 static void inject_trap(struct vcpu *v, unsigned int trapno)
 {
-    if ( !is_pv_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
         hvm_inject_hw_exception(trapno, 0);
     else
         send_guest_trap(v->domain, v->vcpu_id, trapno);
@@ -538,7 +538,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        if ( !is_pv_domain(v->domain) )
+        if ( has_hvm_container_domain(v->domain) )
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         else
             rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
@@ -558,7 +558,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            if ( !is_pv_domain(v->domain) )
+            if ( has_hvm_container_domain(v->domain) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
@@ -608,7 +608,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     }
     else
     {
-       if ( !is_pv_domain(v->domain) )
+       if ( has_hvm_container_domain(v->domain) )
            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
        else
            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -636,7 +636,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( !is_pv_domain(v->domain) )
+            if ( has_hvm_container_domain(v->domain) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
@@ -803,7 +803,7 @@ func_out:
     check_pmc_quirk();
 
     /* PV domains can allocate resources immediately */
-    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
             return 1;
 
     return 0;
@@ -816,7 +816,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         xfree(vpmu->context);
         if ( cpu_has_vmx_msr_bitmap )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index cbe8cfd..3645e4c 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -103,7 +103,7 @@ void vpmu_lvtpc_update(uint32_t val)
     vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !is_pv_domain(current->domain) ||
+    if ( !has_hvm_container_domain(current->domain) ||
          !(current->arch.vpmu.xenpmu_data &&
            current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
@@ -147,7 +147,7 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
          * and since do_wrmsr may load VPMU context we should save
          * (and unload) it again.
          */
-        if ( !is_hvm_domain(current->domain) &&
+        if ( !has_hvm_container_domain(current->domain) &&
             (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         {
             vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -171,7 +171,7 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     {
         int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
 
-        if ( !is_hvm_domain(current->domain) &&
+        if ( !has_hvm_container_domain(current->domain) &&
             (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         {
             vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -200,13 +200,17 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
 
     if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
-        /* PV guest or dom0 is doing system profiling */
+        /* PV(H) guest or dom0 is doing system profiling */
         struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
             return 1;
 
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
+             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
         /* PV guest will be reading PMU MSRs from xenpmu_data */
         vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
@@ -243,7 +247,7 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             else if ( !is_control_domain(current->domain) &&
                       !is_idle_vcpu(current) )
             {
-                /* PV guest */
+                /* PV(H) guest */
                 gregs = guest_cpu_user_regs();
                 memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                        gregs, sizeof(struct cpu_user_regs));
@@ -253,7 +257,15 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
                        regs, sizeof(struct cpu_user_regs));
 
             gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
         }
         else
         {
@@ -277,7 +289,8 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
         v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
 
-        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
@@ -404,7 +417,7 @@ void vpmu_load(struct vcpu *v)
 
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-        (is_pv_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        (!has_hvm_container_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -521,7 +534,7 @@ static void pmu_softnmi(void)
     }
 
     regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( !is_pv_domain(sampled->domain) )
+    if ( has_hvm_container_domain(sampled->domain) )
     {
         struct segment_register cs;
 
@@ -544,9 +557,6 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
     uint64_t gfn = params->d.val;
     static bool_t __read_mostly pvpmu_initted = 0;
 
-    if ( !is_pv_domain(d) )
-        return -EINVAL;
-
     if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
         return -EINVAL;
 
-- 
1.8.1.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPL-0007W3-Ar; Mon, 17 Feb 2014 17:55:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPD-0007Hf-R4
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:55:00 +0000
Received: from [193.109.254.147:30128] by server-9.bemta-14.messagelabs.com id
	82/77-24895-3FC42035; Mon, 17 Feb 2014 17:54:59 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392659696!4961885!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29490 invoked from network); 17 Feb 2014 17:54:58 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:58 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsq6s024947
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHspG3029328
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:52 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHspAT028025; Mon, 17 Feb 2014 17:54:51 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:51 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:03 -0500
Message-Id: <1392659764-22183-17-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 16/17] x86/VPMU: Support for PVH guests
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for PVH guests. Most operations are performed as they are in an
HVM guest; interrupt management, however, is done in a PV-like manner.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/hvm.c            |  3 ++-
 xen/arch/x86/hvm/svm/vpmu.c       | 13 +++++++------
 xen/arch/x86/hvm/vmx/vpmu_core2.c | 24 ++++++++++++------------
 xen/arch/x86/hvm/vpmu.c           | 34 ++++++++++++++++++++++------------
 4 files changed, 43 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..1e50c35 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3451,7 +3451,8 @@ static hvm_hypercall_t *const pvh_hypercall64_table[NR_hypercalls] = {
     [ __HYPERVISOR_physdev_op ]      = (hvm_hypercall_t *)hvm_physdev_op,
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(domctl)
+    HYPERCALL(domctl),
+    HYPERCALL(xenpmu_op)
 };
 
 int hvm_do_hypercall(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 04d3b91..0e5dac4 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -162,6 +162,7 @@ static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
     ctxt->msr_bitmap_set = 0;
 }
 
+/* Must be NMI-safe */
 static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     return 1;
@@ -243,7 +244,7 @@ static int amd_vpmu_save(struct vcpu *v)
 
     context_save(v);
 
-    if ( is_hvm_domain(v->domain) &&
+    if ( has_hvm_container_domain(v->domain) &&
         !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
         amd_vpmu_unset_msr_bitmap(v);
 
@@ -286,7 +287,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
     /* For all counters, enable guest only mode for HVM guest */
-    if ( is_hvm_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
         !(is_guest_mode(msr_content)) )
     {
         set_guest_mode(msr_content);
@@ -300,7 +301,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             return 1;
         vpmu_set(vpmu, VPMU_RUNNING);
 
-        if ( is_hvm_domain(v->domain) &&
+        if ( has_hvm_container_domain(v->domain) &&
              !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_set_msr_bitmap(v);
     }
@@ -310,7 +311,7 @@ static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_domain(v->domain) &&
+        if ( has_hvm_container_domain(v->domain) &&
              ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownship(PMU_OWNER_HVM);
@@ -382,7 +383,7 @@ static int amd_vpmu_initialise(struct vcpu *v)
 	 }
     }
 
-    if ( !is_pv_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
                              sizeof(uint64_t) * AMD_MAX_COUNTERS + 
@@ -413,7 +414,7 @@ static void amd_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
             amd_vpmu_unset_msr_bitmap(v);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index e214f01..5a07817 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -299,7 +299,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     for ( i = 0; i < arch_pmc_cnt; i++ )
         rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
@@ -308,7 +308,7 @@ static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
 
     if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
@@ -318,7 +318,7 @@ static int core2_vpmu_save(struct vcpu *v)
 
     /* Unset PMU MSR bitmap to trap lazy load. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && !is_pv_domain(v->domain) )
+        && has_hvm_container_domain(v->domain) )
         core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
 
     return 1;
@@ -349,7 +349,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
     wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
     wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
 
-    if ( is_pv_domain(v->domain) )
+    if ( !has_hvm_container_domain(v->domain) )
     {
         wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
         core2_vpmu_cxt->global_ovf_ctrl = 0;
@@ -436,7 +436,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
     {
         __core2_vpmu_load(current);
         vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && is_hvm_domain(current->domain) )
+        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
             core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
     }
     return 1;
@@ -444,7 +444,7 @@ static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
 
 static void inject_trap(struct vcpu *v, unsigned int trapno)
 {
-    if ( !is_pv_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
         hvm_inject_hw_exception(trapno, 0);
     else
         send_guest_trap(v->domain, v->vcpu_id, trapno);
@@ -538,7 +538,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         break;
     case MSR_CORE_PERF_FIXED_CTR_CTRL:
         non_global_ctrl = msr_content;
-        if ( !is_pv_domain(v->domain) )
+        if ( has_hvm_container_domain(v->domain) )
             vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
         else
             rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
@@ -558,7 +558,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
                 vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
 
-            if ( !is_pv_domain(v->domain) )
+            if ( has_hvm_container_domain(v->domain) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
@@ -608,7 +608,7 @@ static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
     }
     else
     {
-       if ( !is_pv_domain(v->domain) )
+       if ( has_hvm_container_domain(v->domain) )
            vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
        else
            wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -636,7 +636,7 @@ static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             *msr_content = core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( !is_pv_domain(v->domain) )
+            if ( has_hvm_container_domain(v->domain) )
                 vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
@@ -803,7 +803,7 @@ func_out:
     check_pmc_quirk();
 
     /* PV domains can allocate resources immediately */
-    if ( is_pv_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
             return 1;
 
     return 0;
@@ -816,7 +816,7 @@ static void core2_vpmu_destroy(struct vcpu *v)
     if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
         return;
 
-    if ( is_hvm_domain(v->domain) )
+    if ( has_hvm_container_domain(v->domain) )
     {
         xfree(vpmu->context);
         if ( cpu_has_vmx_msr_bitmap )
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index cbe8cfd..3645e4c 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -103,7 +103,7 @@ void vpmu_lvtpc_update(uint32_t val)
     vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !is_pv_domain(current->domain) ||
+    if ( !has_hvm_container_domain(current->domain) ||
          !(current->arch.vpmu.xenpmu_data &&
            current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
@@ -147,7 +147,7 @@ int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
          * and since do_wrmsr may load VPMU context we should save
          * (and unload) it again.
          */
-        if ( !is_hvm_domain(current->domain) &&
+        if ( !has_hvm_container_domain(current->domain) &&
             (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         {
             vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -171,7 +171,7 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     {
         int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
 
-        if ( !is_hvm_domain(current->domain) &&
+        if ( !has_hvm_container_domain(current->domain) &&
             (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
         {
             vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
@@ -200,13 +200,17 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
 
     if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
     {
-        /* PV guest or dom0 is doing system profiling */
+        /* PV(H) guest or dom0 is doing system profiling */
         struct cpu_user_regs *gregs;
         int err;
 
         if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
             return 1;
 
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
+             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
         /* PV guest will be reading PMU MSRs from xenpmu_data */
         vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
         err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
@@ -243,7 +247,7 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             else if ( !is_control_domain(current->domain) &&
                       !is_idle_vcpu(current) )
             {
-                /* PV guest */
+                /* PV(H) guest */
                 gregs = guest_cpu_user_regs();
                 memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                        gregs, sizeof(struct cpu_user_regs));
@@ -253,7 +257,15 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
                        regs, sizeof(struct cpu_user_regs));
 
             gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
         }
         else
         {
@@ -277,7 +289,8 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
         v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
 
-        v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
@@ -404,7 +417,7 @@ void vpmu_load(struct vcpu *v)
 
     /* Only when PMU is counting, we load PMU context immediately. */
     if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-        (is_pv_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
+        (!has_hvm_container_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
         return;
 
     if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
@@ -521,7 +534,7 @@ static void pmu_softnmi(void)
     }
 
     regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( !is_pv_domain(sampled->domain) )
+    if ( has_hvm_container_domain(sampled->domain) )
     {
         struct segment_register cs;
 
@@ -544,9 +557,6 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
     uint64_t gfn = params->d.val;
     static bool_t __read_mostly pvpmu_initted = 0;
 
-    if ( !is_pv_domain(d) )
-        return -EINVAL;
-
     if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
         return -EINVAL;
 
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPM-0007Xu-At; Mon, 17 Feb 2014 17:55:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPF-0007L4-NS
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:55:02 +0000
Received: from [85.158.139.211:12192] by server-7.bemta-5.messagelabs.com id
	81/13-14867-4FC42035; Mon, 17 Feb 2014 17:55:00 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392659698!4489529!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31082 invoked from network); 17 Feb 2014 17:54:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsrSC003639
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsqfB028098
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsqVL028088; Mon, 17 Feb 2014 17:54:52 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:52 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:04 -0500
Message-Id: <1392659764-22183-18-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 17/17] x86/VPMU: Move VPMU files up from hvm/
	directory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the VPMU is no longer HVM-specific, we can move the VPMU-related files
up out of the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 509 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 940 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 720 --------------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 720 ++++++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 509 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 940 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        |  99 ----
 xen/include/asm-x86/vpmu.h            |  99 ++++
 16 files changed, 2273 insertions(+), 2275 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index 0e5dac4..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,509 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/pmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    ctxt->msr_bitmap_set = 1;
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    ctxt->msr_bitmap_set = 0;
-}
-
-/* Must be NMI-safe */
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-/* Must be NMI-safe */
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
-    unsigned int i;
-
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-    {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        return 0;
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    context_save(v);
-
-    if ( has_hvm_container_domain(v->domain) &&
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( has_hvm_container_domain(v->domain) &&
-             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( has_hvm_container_domain(v->domain) &&
-             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct xen_pmu_amd_ctxt *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-        switch ( family )
-        {
-        case 0x15:
-            num_counters = F15H_NUM_COUNTERS;
-            counters = AMD_F15H_COUNTERS;
-            ctrls = AMD_F15H_CTRLS;
-            k7_counters_mirrored = 1;
-            break;
-        case 0x10:
-        case 0x12:
-        case 0x14:
-        case 0x16:
-        default:
-            num_counters = F10H_NUM_COUNTERS;
-            counters = AMD_F10H_COUNTERS;
-            ctrls = AMD_F10H_CTRLS;
-            k7_counters_mirrored = 0;
-            break;
-        }
-    }
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     " PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->domain->domain_id, v->vcpu_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
-
-    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-    release_pmu_ownship(PMU_OWNER_HVM);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d is not "
-           "supported\n", family);
-    return -EINVAL;
-}
-
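For readers skimming the deleted AMD file, the family dispatch in `amd_vpmu_initialise()` above is the core of the counter-set selection: family 0x15 gets the extended counter set, everything else falls back to the legacy four-counter layout. A standalone sketch of that dispatch (the counter-count constants here are illustrative placeholders, not the values defined elsewhere in this file):

```c
#include <assert.h>

/* Placeholder values for F15H_NUM_COUNTERS / F10H_NUM_COUNTERS. */
#define F15H_NUM_COUNTERS 6
#define F10H_NUM_COUNTERS 4

/* Mirrors the switch in amd_vpmu_initialise(): family 0x15 uses the
 * extended counter set; families 0x10/0x12/0x14/0x16 and anything
 * unrecognised fall through to the legacy K7-style counters. */
static unsigned int amd_num_counters_for(unsigned char family)
{
    switch ( family )
    {
    case 0x15:
        return F15H_NUM_COUNTERS;
    case 0x10:
    case 0x12:
    case 0x14:
    case 0x16:
    default:
        return F10H_NUM_COUNTERS;
    }
}
```

Note the deliberate fall-through: an unknown future family is treated like family 0x10 here, while `svm_vpmu_initialise()` rejects unknown families outright.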
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d954f4f..d49ed3a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index 5a07817..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,940 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/pmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to workaround an issue on various family 6 cpus.
- * The issue leads to endless PMC interrupt loops on the processor.
- * If the interrupt handler is running and a pmc reaches the value 0, this
- * value remains forever and it triggers immediately a new interrupt after
- * finishing the handler.
- * A workaround is to read all flagged counters and if the value is 0 write
- * 1 (or another value != 0) into it.
- * There exist no errata and the real cause of this behaviour is unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    if ( current_cpu_data.x86 == 6 )
-        is_pmc_quirk = 1;
-    else
-        is_pmc_quirk = 0;    
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general counters via CPUID.EAX[0xa].EAX[8..15]
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters via CPUID.0xA:EDX[4..0]
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* edx bits 5-12: Bit width of fixed-function performance counters  */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
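The three CPUID helpers above are all shift-and-mask extractions from leaf 0xA, using the `PMU_*` masks defined at the top of the file; the general-counter count lives in EAX[15:8] and the fixed-counter count in EDX[4:0]. A self-contained sketch of the two count extractions (register values in the test are made-up examples):

```c
#include <assert.h>
#include <stdint.h>

/* Same field definitions as the PMU_* macros in the deleted file. */
#define PMU_GENERAL_NR_SHIFT 8
#define PMU_GENERAL_NR_BITS  8
#define PMU_GENERAL_NR_MASK  (((1u << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)

#define PMU_FIXED_NR_SHIFT   0
#define PMU_FIXED_NR_BITS    5
#define PMU_FIXED_NR_MASK    (((1u << PMU_FIXED_NR_BITS) - 1) << PMU_FIXED_NR_SHIFT)

/* Number of general-purpose counters: CPUID.0xA:EAX[15:8]. */
static unsigned int pmu_general_nr(uint32_t eax)
{
    return (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
}

/* Number of fixed-function counters: CPUID.0xA:EDX[4:0]. */
static unsigned int pmu_fixed_nr(uint32_t edx)
{
    return (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT;
}
```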
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
-        (msr_index == MSR_IA32_DS_AREA) ||
-        (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow Read/Write PMU Counters MSR Directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow Read PMU Non-global Controls Directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
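The `msraddr_to_bitpos()` macro used by the two bitmap routines above encodes the VMX MSR-bitmap layout: low MSRs (0x0 to 0x1fff) map directly to bit positions 0 to 0x1fff, while high MSRs (0xc0000000 to 0xc0001fff) have bit 31 set, so `(x >> 31) * 0x2000` shifts them into the second 0x2000-bit region. A standalone restatement of the macro as a function:

```c
#include <assert.h>

/* Equivalent of the msraddr_to_bitpos(x) macro in the deleted file:
 * (((x) & 0xffff) + ((x) >> 31) * 0x2000). Low MSRs keep their index;
 * high (0xc0000000-based) MSRs land at offset 0x2000. */
static unsigned int msraddr_to_bitpos(unsigned int msr)
{
    return (msr & 0xffff) + (msr >> 31) * 0x2000;
}
```

Clearing a bit in the read half (offset 0) or the write half (offset `0x800/BYTES_PER_LONG` longs, i.e. 0x4000 bits in) lets the guest access that MSR without a VM exit; setting it re-enables interception, which is what makes the lazy save/restore in `core2_vpmu_save()` work.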
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
-
-    if ( !has_hvm_container_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-/* Must be NMI-safe */
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !has_hvm_container_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    __core2_vpmu_save(v);
-
-    /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
-        && has_hvm_container_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( !has_hvm_container_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 0;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-
-        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
-        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-      sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->domain->domain_id, v->vcpu_id);
-
-    return 0;
-}
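The offset arithmetic at the end of `core2_vpmu_alloc_resource()` is easy to miss: the context header stores byte offsets of the variable-length register arrays placed immediately after it, which is what the `vpmu_reg_pointer()` accessor resolves. A hypothetical miniature of that layout (struct and field names here are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for struct xen_pmu_intel_ctxt: the real header is
 * larger, but the offset scheme is the same. */
struct mini_intel_ctxt {
    uint32_t fixed_counters;   /* byte offset of uint64_t[fixed_cnt] */
    uint32_t arch_counters;    /* byte offset of the cntr-pair array */
};

/* Fixed counters start right after the header... */
static uint32_t mini_fixed_offset(void)
{
    return sizeof(struct mini_intel_ctxt);
}

/* ...and the arch counter pairs follow the fixed-counter array. */
static uint32_t mini_arch_offset(unsigned int fixed_cnt)
{
    return mini_fixed_offset() + sizeof(uint64_t) * fixed_cnt;
}
```

Storing offsets rather than pointers keeps the structure position-independent, which matters because the PV path shares it with the guest via `xenpmu_data`.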
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Do the lazy load stuff. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( has_hvm_container_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    u64 global_ctrl, non_global_ctrl;
-    unsigned pmu_enable = 0;
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Can not write readonly MSR: "
-                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DTS is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
-        for ( i = 0; i < arch_pmc_cnt; i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
-        if ( has_hvm_container_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= 4;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( has_hvm_container_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-            xen_pmu_cntr_pair[tmp].control = msr_content;
-            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
-                pmu_enable += (global_ctrl >> i) &
-                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
-        }
-    }
-
-    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
-    if ( pmu_enable )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if  ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 bits per counter, currently 3 fixed counters implemented. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if (msr_content & mask)
-                inject_gp = 1;
-            break;
-        }
-
-        if (inject_gp) 
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( has_hvm_container_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
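The `#GP` injection block near the end of `core2_vpmu_do_wrmsr()` boils down to per-type reserved-bit masks: event-select MSRs define only their low 32 bits, and `IA32_FIXED_CTR_CTRL` defines `FIXED_CTR_CTRL_BITS` (4) bits per implemented fixed counter. A standalone sketch of those two checks, separate from the patch's code:

```c
#include <assert.h>
#include <stdint.h>

#define FIXED_CTR_CTRL_BITS 4

/* MSR_P6_EVNTSELx: bits 63:32 are reserved; writing any of them
 * should raise #GP in the guest. */
static int evntsel_reserved(uint64_t val)
{
    return (val & ~((1ull << 32) - 1)) != 0;
}

/* IA32_FIXED_CTR_CTRL: only the low fixed_cnt * 4 bits are defined
 * when fixed_cnt fixed counters are implemented. */
static int fixed_ctrl_reserved(uint64_t val, unsigned int fixed_cnt)
{
    uint64_t mask = ~((1ull << (fixed_cnt * FIXED_CTR_CTRL_BITS)) - 1);
    return (val & mask) != 0;
}
```

The counter case in the original additionally masks by the CPUID-reported fixed-counter bit width, since writes beyond the implemented width are likewise reserved.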
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( has_hvm_container_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if (input == 0x1)
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-    u64 val;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-         return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    core2_vpmu_cxt = vpmu->context;
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
-
-    /*
-     * The configuration of the fixed counter is 4 bits each in the
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
-        return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        xfree(vpmu->context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    }
-
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * The vpmu is not enabled in this case, so clear the architectural
-     * performance monitoring bits of CPUID leaf 0xa.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If it's a vpmu MSR, report a read value of 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These handlers are used when the vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-            ret = core2_vpmu_initialise(v);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d is not "
-           "supported\n", family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index 3645e4c..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,720 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/p2m.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <public/pmu.h>
-
-/*
- * "vpmu" :     vpmu generally enabled
- * "vpmu=off" : vpmu generally disabled
- * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
- */
-uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-uint64_t __read_mostly vpmu_features = 0;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if ( *s == '\0' )
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch ( parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_interrupt_type = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !has_hvm_container_domain(current->domain) ||
-         !(current->arch.vpmu.xenpmu_data &&
-           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-static void vpmu_send_nmi(struct vcpu *v)
-{
-    struct vlapic *vlapic;
-    u32 vlapic_lvtpc;
-    unsigned char int_vec;
-
-    ASSERT( is_hvm_vcpu(v) );
-
-    vlapic = vcpu_vlapic(v);
-    if ( !is_vlapic_lvtpc_enabled(vlapic) )
-        return;
-
-    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-    else
-        v->nmi_pending = 1;
-}
-
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
-         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
-
-        /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !has_hvm_container_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
-         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !has_hvm_container_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-/* This routine may be called in NMI context */
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /* dom0 will handle this interrupt */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV(H) guest or dom0 is doing system profiling */
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-            return 1;
-
-        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
-             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        if ( !is_hvm_domain(current->domain) )
-        {
-            /* Store appropriate registers in xenpmu_data */
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                gregs = guest_cpu_user_regs();
-
-                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
-                     !is_pv_32bit_domain(v->domain) )
-                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                           gregs, sizeof(struct cpu_user_regs));
-                else 
-                {
-                    /*
-                     * 32-bit dom0 cannot process Xen's addresses (which are
-                     * 64-bit) and therefore we treat it the same way as a
-                     * non-privileged PV 32-bit domain.
-                     */
-
-                    struct compat_cpu_user_regs *cmp;
-
-                    cmp = (struct compat_cpu_user_regs *)
-                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                    XLAT_cpu_user_regs(cmp, gregs);
-                    /* XLAT_cpu_user_regs() translates gregs directly into
-                     * xenpmu_data, so no further copy is needed. */
-                }
-            }
-            else if ( !is_control_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV(H) guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       regs, sizeof(struct cpu_user_regs));
-
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            if ( !is_pvh_domain(current->domain) )
-                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
-            {
-                struct segment_register seg_cs;
-
-                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
-                gregs->cs = seg_cs.attr.fields.dpl;
-            }
-        }
-        else
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   gregs, sizeof(struct cpu_user_regs));
-
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( !(vpmu_interrupt_type & APIC_DM_NMI ) )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                gregs->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_interrupt_type & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-
-    if ( vpmu->arch_vpmu_ops )
-    {
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( vpmu_interrupt_type & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            vpmu_send_nmi(v);
-
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from the last pcpu that we ran on. Note that if
-         * another VCPU is running there it must have saved this VCPU's
-         * context before starting to run (see below).
-         * There should be no race since the remote pcpu will disable
-         * interrupts before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-        (!has_hvm_container_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        /* Arch code needs to set VPMU_CONTEXT_LOADED */
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        return;
-    }
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu information to the console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Unload VPMU contexts */
-static void vpmu_unload_all(void)
-{
-    struct domain *d;
-    struct vcpu *v;
-    struct vpmu_struct *vpmu;
-
-    for_each_domain(d)
-    {
-        for_each_vcpu ( d, v )
-        {
-            if ( v != current )
-                vcpu_pause(v);
-            vpmu = vcpu_vpmu(v);
-
-            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            {
-                if ( v != current )
-                    vcpu_unpause(v);
-                continue;
-            }
-
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-            if ( v != current )
-                vcpu_unpause(v);
-        }
-    }
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-    else
-    {
-        if ( is_hvm_domain(sampled->domain) )
-        {
-            vpmu_send_nmi(sampled);
-            return;
-        }
-        v = sampled;
-    }
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( has_hvm_container_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    struct page_info *page;
-    uint64_t gfn = params->d.val;
-    static bool_t __read_mostly pvpmu_initted = 0;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
-    if ( !v->arch.vpmu.xenpmu_data )
-    {
-        put_page_and_type(page);
-        return -EINVAL;
-    }
-
-    if ( !pvpmu_initted )
-    {
-        if ( reserve_lapic_nmi() == 0 )
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            put_page(page);
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    vpmu_initialise(v);
-
-    return 0;
-}
-
-static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if ( v != current )
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page_and_type(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if ( v != current )
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xen_pmu_params_t pmu_params;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((pmu_params.d.val & XENPMU_MODE_ON) &&
-              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode = pmu_params.d.val;
-
-        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            /*
-             * After this, VPMU context will never be loaded during a context
-             * switch. Because PMU MSR accesses load VPMU context, we don't
-             * allow them when VPMU is off and, for non-privileged domains,
-             * when we are in privileged mode. (We do want these accesses to
-             * load VPMU context for the control domain in this mode.)
-             */
-            vpmu_unload_all();
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.d.val = vpmu_mode;
-        pmu_params.v.version.maj = XENPMU_VER_MAJ;
-        pmu_params.v.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_features = pmu_params.d.val;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.d.val = vpmu_features;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( current->arch.vpmu.xenpmu_data == NULL )
-            return -EINVAL;
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        vpmu_load(current);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 83ea479..7fb1d30 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..8c2723b
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,720 @@
+/*
+ * vpmu.c: PMU virtualization for HVM and PV domains.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/p2m.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <public/pmu.h>
+
+/*
+ * "vpmu" :     vpmu generally enabled
+ * "vpmu=off" : vpmu generally disabled
+ * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
+ */
+uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+uint64_t __read_mostly vpmu_features = 0;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if ( *s == '\0' )
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch ( parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_interrupt_type = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !has_hvm_container_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic;
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    ASSERT( is_hvm_vcpu(v) );
+
+    vlapic = vcpu_vlapic(v);
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and since do_wrmsr may load VPMU context we should save
+         * (and unload) it again.
+         */
+        if ( !has_hvm_container_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !has_hvm_container_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+/* This routine may be called in NMI context */
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV(H) guest or dom0 is doing system profiling */
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
+             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        if ( !is_hvm_domain(current->domain) )
+        {
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64-bit) and therefore we treat it the same way as a
+                     * non-privileged 32-bit PV domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV(H) guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+
+            /* This is unsafe in NMI context; we'll do it in the softirq handler */
+            if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+
+    if ( vpmu->arch_vpmu_ops )
+    {
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            vpmu_send_nmi(v);
+
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from last pcpu that we ran on. Note that if another
+         * VCPU is running there it must have saved this VCPU's context before
+         * starting to run (see below).
+         * There should be no race since the remote pcpu will disable interrupts
+         * before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+        }
+    }
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+        vpmu_save_force(prev);
+        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /* Only when the PMU is counting do we load its context immediately. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!has_hvm_container_domain(v->domain) &&
+          (vpmu->xenpmu_data->pmu_flags & PMU_CACHED)) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+}
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        return;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump some vpmu information to the console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Unload VPMU contexts */
+static void vpmu_unload_all(void)
+{
+    struct domain *d;
+    struct vcpu *v;
+    struct vpmu_struct *vpmu;
+
+    for_each_domain(d)
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( v != current )
+                vcpu_pause(v);
+            vpmu = vcpu_vpmu(v);
+
+            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            {
+                if ( v != current )
+                    vcpu_unpause(v);
+                continue;
+            }
+
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+            if ( v != current )
+                vcpu_unpause(v);
+        }
+    }
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( has_hvm_container_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gfn = params->d.val;
+    static bool_t __read_mostly pvpmu_initted = 0;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page_and_type(page);
+        return -EINVAL;
+    }
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((pmu_params.d.val & XENPMU_MODE_ON) &&
+              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode = pmu_params.d.val;
+
+        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            /*
+             * After this, VPMU context will never be loaded during a context
+             * switch. Because PMU MSR accesses load VPMU context we don't
+             * allow them when VPMU is off and, for non-privileged domains,
+             * when we are in privileged mode. (We do want these accesses to
+             * load VPMU context for the control domain in this mode.)
+             */
+            vpmu_unload_all();
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_features = pmu_params.d.val;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_features;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        ret = 0;
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
+    case XENPMU_flush:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        vpmu_load(current);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..32d2882
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,509 @@
+/*
+ * vpmu_amd.c: AMD specific PMU virtualization.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/pmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
+static unsigned int __read_mostly num_counters;
+static const u32 *__read_mostly counters;
+static const u32 *__read_mostly ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    ctxt->msr_bitmap_set = 1;
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    ctxt->msr_bitmap_set = 0;
+}
+
+/* Must be NMI-safe */
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+/* Must be NMI-safe */
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
+    unsigned int i;
+
+    /*
+     * Stop the counters. If we came here via vpmu_save_force (i.e.
+     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    {
+        vpmu_set(vpmu, VPMU_FROZEN);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        return 0;
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    context_save(v);
+
+    if ( has_hvm_container_domain(v->domain) &&
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest only mode for HVM guest */
+    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        !(is_guest_mode(msr_content)) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* check if the first counter is enabled */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( has_hvm_container_domain(v->domain) &&
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* stop saving & restore if guest stops first counter */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( has_hvm_container_domain(v->domain) &&
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct xen_pmu_amd_ctxt *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU; "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
+
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+    release_pmu_ownship(PMU_OWNER_HVM);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+         printk("\n");
+         return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed. "
+           "AMD processor family %d is not supported\n", family);
+    return -EINVAL;
+}
+
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..195511e
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,940 @@
+/*
+ * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/pmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue on various family 6 CPUs which leads to
+ * endless PMC interrupt loops on the processor. If a PMC reaches the
+ * value 0 while the interrupt handler is running, it stays at 0 and
+ * immediately re-triggers the interrupt once the handler finishes.
+ * The workaround is to read all flagged counters and, if a value is 0,
+ * write a non-zero value (e.g. 1) into it.
+ * No erratum covers this, and the real cause of the behaviour is unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    is_pmc_quirk = (current_cpu_data.x86 == 6);
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
+
+/*
+ * Read the number of general counters via CPUID.0xa:EAX[8..15]
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Read the number of fixed counters via CPUID.0xa:EDX[0..4]
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* edx bits 5-12: Bit width of fixed-function performance counters  */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
+         (msr_index == MSR_IA32_DS_AREA) ||
+         (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
+
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow Read/Write PMU Counters MSR Directly. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow Read PMU Non-global Controls Directly. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                    msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( !has_hvm_container_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+/* Must be NMI-safe */
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !has_hvm_container_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap
+        && has_hvm_container_domain(v->domain) )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !has_hvm_container_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownship(PMU_OWNER_HVM);
+
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
+}
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Do the lazy load stuff. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( has_hvm_container_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    u64 global_ctrl, non_global_ctrl;
+    unsigned pmu_enable = 0;
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
+                 "MSR_PERF_GLOBAL_STATUS(0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        global_ctrl = msr_content;
+        for ( i = 0; i < arch_pmc_cnt; i++ )
+        {
+            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
+            global_ctrl >>= 1;
+        }
+
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
+        global_ctrl = msr_content >> 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        non_global_ctrl = msr_content;
+        if ( has_hvm_container_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+        global_ctrl >>= 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
+            non_global_ctrl >>= 4;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( has_hvm_container_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+            xen_pmu_cntr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
+        }
+    }
+
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
+    if ( pmu_enable )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits per counter; fixed_pmc_cnt counters are implemented. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if (msr_content & mask)
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+       if ( has_hvm_container_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( has_hvm_container_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+    uint64_t *fixed_counters;
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair;
+    u64 val;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+    /* Only fetch the context (and derive the bank pointers) once it is
+     * known to be allocated; the pointers must not be computed from a
+     * NULL context at declaration time. */
+    core2_vpmu_cxt = vpmu->context;
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    xen_pmu_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of the counter and its configuration msr. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
+    /*
+     * The configuration of the fixed counter is 4 bits each in the
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
+        msr_content = 0xC000000700000000ULL | ((1 << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
+        goto func_out;
+    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set, reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately. */
+    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
+    release_pmu_ownship(PMU_OWNER_HVM);
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * The vPMU is not enabled in this case, so clear the architectural
+     * performance monitoring bits in CPUID leaf 0xa.
+     */
+    if ( input == 0xa )
+    {
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it is a vPMU MSR, return 0 as its value.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These functions are used when the vPMU is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+        full_width_write = (caps >> 13) & 1;
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+            ret = core2_vpmu_initialise(v);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d is not supported\n",
+           family, cpu_model);
+    return -EINVAL;
+}
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ed81cfb..d27df39 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index 0f3de14..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/pmu.h>
-
-#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
-#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-/* Start of PMU register bank */
-#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
-                                                 (uintptr_t)ctxt->offset))
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *);
-int svm_vpmu_initialise(struct vcpu *);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xen_pmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint64_t vpmu_mode;
-extern uint64_t vpmu_features;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..0f3de14
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,99 @@
+/*
+ * vpmu.h: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_HVM_VPMU_H_
+#define __ASM_X86_HVM_VPMU_H_
+
+#include <public/pmu.h>
+
+#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
+#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *);
+int svm_vpmu_initialise(struct vcpu *);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint64_t vpmu_mode;
+extern uint64_t vpmu_features;
+
+#endif /* __ASM_X86_HVM_VPMU_H_*/
+
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:55:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSPM-0007Xu-At; Mon, 17 Feb 2014 17:55:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSPF-0007L4-NS
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:55:02 +0000
Received: from [85.158.139.211:12192] by server-7.bemta-5.messagelabs.com id
	81/13-14867-4FC42035; Mon, 17 Feb 2014 17:55:00 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392659698!4489529!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31082 invoked from network); 17 Feb 2014 17:54:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:59 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsrSC003639
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsqfB028098
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsqVL028088; Mon, 17 Feb 2014 17:54:52 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:52 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:04 -0500
Message-Id: <1392659764-22183-18-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 17/17] x86/VPMU: Move VPMU files up from hvm/
	directory
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Since the PMU code is no longer HVM-specific, we can move the
VPMU-related files up out of the arch/x86/hvm/ directory.

Specifically:
    arch/x86/hvm/vpmu.c -> arch/x86/vpmu.c
    arch/x86/hvm/svm/vpmu.c -> arch/x86/vpmu_amd.c
    arch/x86/hvm/vmx/vpmu_core2.c -> arch/x86/vpmu_intel.c
    include/asm-x86/hvm/vpmu.h -> include/asm-x86/vpmu.h

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/Makefile                 |   1 +
 xen/arch/x86/hvm/Makefile             |   1 -
 xen/arch/x86/hvm/svm/Makefile         |   1 -
 xen/arch/x86/hvm/svm/vpmu.c           | 509 ------------------
 xen/arch/x86/hvm/vlapic.c             |   2 +-
 xen/arch/x86/hvm/vmx/Makefile         |   1 -
 xen/arch/x86/hvm/vmx/vpmu_core2.c     | 940 ----------------------------------
 xen/arch/x86/hvm/vpmu.c               | 720 --------------------------
 xen/arch/x86/oprofile/op_model_ppro.c |   2 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/vpmu.c                   | 720 ++++++++++++++++++++++++++
 xen/arch/x86/vpmu_amd.c               | 509 ++++++++++++++++++
 xen/arch/x86/vpmu_intel.c             | 940 ++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h    |   2 +-
 xen/include/asm-x86/hvm/vpmu.h        |  99 ----
 xen/include/asm-x86/vpmu.h            |  99 ++++
 16 files changed, 2273 insertions(+), 2275 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/svm/vpmu.c
 delete mode 100644 xen/arch/x86/hvm/vmx/vpmu_core2.c
 delete mode 100644 xen/arch/x86/hvm/vpmu.c
 create mode 100644 xen/arch/x86/vpmu.c
 create mode 100644 xen/arch/x86/vpmu_amd.c
 create mode 100644 xen/arch/x86/vpmu_intel.c
 delete mode 100644 xen/include/asm-x86/hvm/vpmu.h
 create mode 100644 xen/include/asm-x86/vpmu.h

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..cf85dda 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += vpmu.o vpmu_amd.o vpmu_intel.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index eea5555..742b83b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -22,4 +22,3 @@ obj-y += vlapic.o
 obj-y += vmsi.o
 obj-y += vpic.o
 obj-y += vpt.o
-obj-y += vpmu.o
\ No newline at end of file
diff --git a/xen/arch/x86/hvm/svm/Makefile b/xen/arch/x86/hvm/svm/Makefile
index a10a55e..760d295 100644
--- a/xen/arch/x86/hvm/svm/Makefile
+++ b/xen/arch/x86/hvm/svm/Makefile
@@ -6,4 +6,3 @@ obj-y += nestedsvm.o
 obj-y += svm.o
 obj-y += svmdebug.o
 obj-y += vmcb.o
-obj-y += vpmu.o
diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
deleted file mode 100644
index 0e5dac4..0000000
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ /dev/null
@@ -1,509 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2010, Advanced Micro Devices, Inc.
- * Parts of this code are Copyright (c) 2007, Intel Corporation
- *
- * Author: Wei Wang <wei.wang2@amd.com>
- * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- */
-
-#include <xen/config.h>
-#include <xen/xenoprof.h>
-#include <xen/hvm/save.h>
-#include <xen/sched.h>
-#include <xen/irq.h>
-#include <asm/apic.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vpmu.h>
-#include <public/pmu.h>
-
-#define MSR_F10H_EVNTSEL_GO_SHIFT   40
-#define MSR_F10H_EVNTSEL_EN_SHIFT   22
-#define MSR_F10H_COUNTER_LENGTH     48
-
-#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
-#define set_guest_mode(msr) (msr |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
-#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
-
-static unsigned int __read_mostly num_counters;
-static const u32 __read_mostly *counters;
-static const u32 __read_mostly *ctrls;
-static bool_t __read_mostly k7_counters_mirrored;
-
-#define F10H_NUM_COUNTERS   4
-#define F15H_NUM_COUNTERS   6
-#define AMD_MAX_COUNTERS    6
-
-/* PMU Counter MSRs. */
-static const u32 AMD_F10H_COUNTERS[] = {
-    MSR_K7_PERFCTR0,
-    MSR_K7_PERFCTR1,
-    MSR_K7_PERFCTR2,
-    MSR_K7_PERFCTR3
-};
-
-/* PMU Control MSRs. */
-static const u32 AMD_F10H_CTRLS[] = {
-    MSR_K7_EVNTSEL0,
-    MSR_K7_EVNTSEL1,
-    MSR_K7_EVNTSEL2,
-    MSR_K7_EVNTSEL3
-};
-
-static const u32 AMD_F15H_COUNTERS[] = {
-    MSR_AMD_FAM15H_PERFCTR0,
-    MSR_AMD_FAM15H_PERFCTR1,
-    MSR_AMD_FAM15H_PERFCTR2,
-    MSR_AMD_FAM15H_PERFCTR3,
-    MSR_AMD_FAM15H_PERFCTR4,
-    MSR_AMD_FAM15H_PERFCTR5
-};
-
-static const u32 AMD_F15H_CTRLS[] = {
-    MSR_AMD_FAM15H_EVNTSEL0,
-    MSR_AMD_FAM15H_EVNTSEL1,
-    MSR_AMD_FAM15H_EVNTSEL2,
-    MSR_AMD_FAM15H_EVNTSEL3,
-    MSR_AMD_FAM15H_EVNTSEL4,
-    MSR_AMD_FAM15H_EVNTSEL5
-};
-
-static inline int get_pmu_reg_type(u32 addr)
-{
-    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
-        return MSR_TYPE_CTRL;
-
-    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
-        return MSR_TYPE_COUNTER;
-
-    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
-         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
-    {
-        if (addr & 1)
-            return MSR_TYPE_COUNTER;
-        else
-            return MSR_TYPE_CTRL;
-    }
-
-    /* unsupported registers */
-    return -1;
-}
-
-static inline u32 get_fam15h_addr(u32 addr)
-{
-    switch ( addr )
-    {
-    case MSR_K7_PERFCTR0:
-        return MSR_AMD_FAM15H_PERFCTR0;
-    case MSR_K7_PERFCTR1:
-        return MSR_AMD_FAM15H_PERFCTR1;
-    case MSR_K7_PERFCTR2:
-        return MSR_AMD_FAM15H_PERFCTR2;
-    case MSR_K7_PERFCTR3:
-        return MSR_AMD_FAM15H_PERFCTR3;
-    case MSR_K7_EVNTSEL0:
-        return MSR_AMD_FAM15H_EVNTSEL0;
-    case MSR_K7_EVNTSEL1:
-        return MSR_AMD_FAM15H_EVNTSEL1;
-    case MSR_K7_EVNTSEL2:
-        return MSR_AMD_FAM15H_EVNTSEL2;
-    case MSR_K7_EVNTSEL3:
-        return MSR_AMD_FAM15H_EVNTSEL3;
-    default:
-        break;
-    }
-
-    return addr;
-}
-
-static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
-    }
-
-    ctxt->msr_bitmap_set = 1;
-}
-
-static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
-        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
-    }
-
-    ctxt->msr_bitmap_set = 0;
-}
-
-/* Must be NMI-safe */
-static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    return 1;
-}
-
-static inline void context_load(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        wrmsrl(counters[i], counter_regs[i]);
-        wrmsrl(ctrls[i], ctrl_regs[i]);
-    }
-}
-
-/* Must be NMI-safe */
-static void amd_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    vpmu_reset(vpmu, VPMU_FROZEN);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        unsigned int i;
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], ctrl_regs[i]);
-
-        return;
-    }
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-
-    context_load(v);
-}
-
-static inline void context_save(struct vcpu *v)
-{
-    unsigned int i;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-
-    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
-    for ( i = 0; i < num_counters; i++ )
-        rdmsrl(counters[i], counter_regs[i]);
-}
-
-static int amd_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
-    unsigned int i;
-
-    /*
-     * Stop the counters. If we came here via vpmu_save_force (i.e.
-     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
-     */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-    {
-        vpmu_set(vpmu, VPMU_FROZEN);
-
-        for ( i = 0; i < num_counters; i++ )
-            wrmsrl(ctrls[i], 0);
-
-        return 0;
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    context_save(v);
-
-    if ( has_hvm_container_domain(v->domain) &&
-        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
-        amd_vpmu_unset_msr_bitmap(v);
-
-    return 1;
-}
-
-static void context_update(unsigned int msr, u64 msr_content)
-{
-    unsigned int i;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-
-    if ( k7_counters_mirrored &&
-        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
-    {
-        msr = get_fam15h_addr(msr);
-    }
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-       if ( msr == ctrls[i] )
-       {
-           ctrl_regs[i] = msr_content;
-           return;
-       }
-        else if (msr == counters[i] )
-        {
-            counter_regs[i] = msr_content;
-            return;
-        }
-    }
-}
-
-static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    /* For all counters, enable guest only mode for HVM guest */
-    if ( has_hvm_container_domain(v->domain) && (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        !(is_guest_mode(msr_content)) )
-    {
-        set_guest_mode(msr_content);
-    }
-
-    /* check if the first counter is enabled */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 1;
-        vpmu_set(vpmu, VPMU_RUNNING);
-
-        if ( has_hvm_container_domain(v->domain) &&
-             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_set_msr_bitmap(v);
-    }
-
-    /* stop saving & restore if guest stops first counter */
-    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
-        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( has_hvm_container_domain(v->domain) &&
-             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-        release_pmu_ownship(PMU_OWNER_HVM);
-    }
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    /* Update vpmu context immediately */
-    context_update(msr, msr_content);
-
-    /* Write to hw counters */
-    wrmsrl(msr, msr_content);
-    return 1;
-}
-
-static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
-        || vpmu_is_set(vpmu, VPMU_FROZEN) )
-    {
-        context_load(v);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        vpmu_reset(vpmu, VPMU_FROZEN);
-    }
-
-    rdmsrl(msr, *msr_content);
-
-    return 1;
-}
-
-static int amd_vpmu_initialise(struct vcpu *v)
-{
-    struct xen_pmu_amd_ctxt *ctxt;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( counters == NULL )
-    {
-         switch ( family )
-	 {
-	 case 0x15:
-	     num_counters = F15H_NUM_COUNTERS;
-	     counters = AMD_F15H_COUNTERS;
-	     ctrls = AMD_F15H_CTRLS;
-	     k7_counters_mirrored = 1;
-	     break;
-	 case 0x10:
-	 case 0x12:
-	 case 0x14:
-	 case 0x16:
-	 default:
-	     num_counters = F10H_NUM_COUNTERS;
-	     counters = AMD_F10H_COUNTERS;
-	     ctrls = AMD_F10H_CTRLS;
-	     k7_counters_mirrored = 0;
-	     break;
-	 }
-    }
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS + 
-                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
-        if ( !ctxt )
-        {
-            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU, "
-                     " PMU feature is unavailable on domain %d vcpu %d.\n",
-                     v->vcpu_id, v->domain->domain_id);
-            return -ENOMEM;
-        }
-    }
-    else
-        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
-
-    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
-    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
-
-    vpmu->context = ctxt;
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    return 0;
-}
-
-static void amd_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
-            amd_vpmu_unset_msr_bitmap(v);
-
-        xfree(vpmu->context);
-    }
-
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-    release_pmu_ownship(PMU_OWNER_HVM);
-}
-
-/* VPMU part of the 'q' keyhandler */
-static void amd_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
-    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
-    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
-    unsigned int i;
-
-    printk("    VPMU state: 0x%x ", vpmu->flags);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-    {
-         printk("\n");
-         return;
-    }
-
-    printk("(");
-    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
-        printk("PASSIVE_DOMAIN_ALLOCATED, ");
-    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
-        printk("FROZEN, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
-        printk("SAVE, ");
-    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
-        printk("RUNNING, ");
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        printk("LOADED, ");
-    printk("ALLOCATED)\n");
-
-    for ( i = 0; i < num_counters; i++ )
-    {
-        uint64_t ctrl, cntr;
-
-        rdmsrl(ctrls[i], ctrl);
-        rdmsrl(counters[i], cntr);
-        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
-               ctrls[i], ctrl_regs[i], ctrl,
-               counters[i], counter_regs[i], cntr);
-    }
-}
-
-struct arch_vpmu_ops amd_vpmu_ops = {
-    .do_wrmsr = amd_vpmu_do_wrmsr,
-    .do_rdmsr = amd_vpmu_do_rdmsr,
-    .do_interrupt = amd_vpmu_do_interrupt,
-    .arch_vpmu_destroy = amd_vpmu_destroy,
-    .arch_vpmu_save = amd_vpmu_save,
-    .arch_vpmu_load = amd_vpmu_load,
-    .arch_vpmu_dump = amd_vpmu_dump
-};
-
-int svm_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    int ret = 0;
-
-    /* vpmu enabled? */
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        return 0;
-
-    switch ( family )
-    {
-    case 0x10:
-    case 0x12:
-    case 0x14:
-    case 0x15:
-    case 0x16:
-        ret = amd_vpmu_initialise(v);
-        if ( !ret )
-            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
-        return ret;
-    }
-
-    printk("VPMU: Initialization failed. "
-           "AMD processor family %d has not "
-           "been supported\n", family);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index d954f4f..d49ed3a 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -38,7 +38,7 @@
 #include <asm/hvm/support.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/hvm/nestedhvm.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
diff --git a/xen/arch/x86/hvm/vmx/Makefile b/xen/arch/x86/hvm/vmx/Makefile
index 373b3d9..04a29ce 100644
--- a/xen/arch/x86/hvm/vmx/Makefile
+++ b/xen/arch/x86/hvm/vmx/Makefile
@@ -3,5 +3,4 @@ obj-y += intr.o
 obj-y += realmode.o
 obj-y += vmcs.o
 obj-y += vmx.o
-obj-y += vpmu_core2.o
 obj-y += vvmx.o
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
deleted file mode 100644
index 5a07817..0000000
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ /dev/null
@@ -1,940 +0,0 @@
-/*
- * vpmu_core2.c: CORE 2 specific PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/irq.h>
-#include <asm/system.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/apic.h>
-#include <asm/traps.h>
-#include <asm/msr.h>
-#include <asm/msr-index.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vlapic.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <public/sched.h>
-#include <public/hvm/save.h>
-#include <public/pmu.h>
-#include <asm/hvm/vpmu.h>
-
-/*
- * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
- * instruction.
- * cpuid 0xa - Architectural Performance Monitoring Leaf
- * Register eax
- */
-#define PMU_VERSION_SHIFT        0  /* Version ID */
-#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
-#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
-
-#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
-#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
-#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
-
-#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
-#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
-#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
-/* Register edx */
-#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
-#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
-#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
-
-#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
-#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
-#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
-
-/* Alias registers (0x4c1) for full-width writes to PMCs */
-#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
-static bool_t __read_mostly full_width_write;
-
-/* Intel-specific VPMU features */
-#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
-#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
-
-/*
- * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
- * counters. 4 bits for every counter.
- */
-#define FIXED_CTR_CTRL_BITS 4
-#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
-
-/* Number of general-purpose and fixed performance counters */
-static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
-
-/*
- * QUIRK to work around an issue on various family 6 CPUs.
- * The issue leads to endless PMC interrupt loops on the processor.
- * If the interrupt handler is running and a PMC reaches the value 0, the
- * counter stays at 0 and immediately triggers a new interrupt once the
- * handler finishes.
- * The workaround is to read all flagged counters and, if a counter's value
- * is 0, to write 1 (or any other non-zero value) into it.
- * No erratum has been published, and the real cause of this behaviour is
- * unknown.
- */
-bool_t __read_mostly is_pmc_quirk;
-
-static void check_pmc_quirk(void)
-{
-    if ( current_cpu_data.x86 == 6 )
-        is_pmc_quirk = 1;
-    else
-        is_pmc_quirk = 0;
-}
-
-static void handle_pmc_quirk(u64 msr_content)
-{
-    int i;
-    u64 val;
-
-    if ( !is_pmc_quirk )
-        return;
-
-    val = msr_content;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-    val = msr_content >> 32;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( val & 0x1 )
-        {
-            u64 cnt;
-            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
-            if ( cnt == 0 )
-                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
-        }
-        val >>= 1;
-    }
-}
-
-/*
- * Read the number of general-purpose counters from CPUID.0xA:EAX[8..15]
- */
-static int core2_get_arch_pmc_count(void)
-{
-    u32 eax;
-
-    eax = cpuid_eax(0xa);
-    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
-}
-
-/*
- * Read the number of fixed counters from CPUID.0xA:EDX[0..4]
- */
-static int core2_get_fixed_pmc_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
-}
-
-/* edx bits 5-12: bit width of fixed-function performance counters */
-static int core2_get_bitwidth_fix_count(void)
-{
-    u32 edx;
-
-    edx = cpuid_edx(0xa);
-    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
-}
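As an aside (not part of the patch), the leaf-0xA decoding performed by the three helpers above can be sketched as standalone C over raw register values; the masks mirror the `#define`s removed earlier in this file, and the sample `eax`/`edx` values in any usage are hypothetical:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout of CPUID leaf 0xA (same layout as the #defines above). */
#define PMU_VERSION_MASK        0x000000ffu /* eax[0..7]  */
#define PMU_GENERAL_NR_SHIFT    8           /* eax[8..15] */
#define PMU_GENERAL_NR_MASK     0x0000ff00u
#define PMU_FIXED_NR_MASK       0x0000001fu /* edx[0..4]  */
#define PMU_FIXED_WIDTH_SHIFT   5           /* edx[5..12] */
#define PMU_FIXED_WIDTH_MASK    0x00001fe0u

/* Architectural PMU version ID. */
static unsigned int pmu_version(uint32_t eax)
{
    return eax & PMU_VERSION_MASK;
}

/* Number of general-purpose counters. */
static unsigned int arch_pmc_count(uint32_t eax)
{
    return (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT;
}

/* Number of fixed-function counters. */
static unsigned int fixed_pmc_count(uint32_t edx)
{
    return edx & PMU_FIXED_NR_MASK;
}

/* Bit width of the fixed-function counters. */
static unsigned int fixed_pmc_width(uint32_t edx)
{
    return (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT;
}
```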
-
-static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
-{
-    int i;
-    u32 msr_index_pmc;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
-        {
-            *type = MSR_TYPE_COUNTER;
-            *index = i;
-            return 1;
-        }
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL) ||
-         (msr_index == MSR_IA32_DS_AREA) ||
-         (msr_index == MSR_IA32_PEBS_ENABLE) )
-    {
-        *type = MSR_TYPE_CTRL;
-        return 1;
-    }
-
-    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
-         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
-    {
-        *type = MSR_TYPE_GLOBAL;
-        return 1;
-    }
-
-    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
-    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
-         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_COUNTER;
-        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
-        return 1;
-    }
-
-    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
-         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
-    {
-        *type = MSR_TYPE_ARCH_CTRL;
-        *index = msr_index - MSR_P6_EVNTSEL0;
-        return 1;
-    }
-
-    return 0;
-}
-
-#define msraddr_to_bitpos(x) (((x) & 0xffff) + ((x) >> 31) * 0x2000)
-static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    /* Allow the guest to read/write the PMU counter MSRs directly. */
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
-        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
-                  msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                      msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    /* Allow the guest to read the PMU non-global control MSRs directly. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
-
-static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
-{
-    int i;
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-    }
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
-        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
-                msr_bitmap + 0x800/BYTES_PER_LONG);
-
-        if ( full_width_write )
-        {
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
-            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
-                    msr_bitmap + 0x800/BYTES_PER_LONG);
-        }
-    }
-
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
-
-    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
-    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
-}
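The `msraddr_to_bitpos()` macro used by the two bitmap routines above maps an MSR index into a bit offset within the 4 KiB VMX MSR bitmap: low MSRs (0x00000000..0x00001fff) land in the first 0x2000 bits, high MSRs (0xc0000000..0xc0001fff, bit 31 set) in the second. A standalone re-statement of that mapping (a sketch, not part of the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Low MSRs occupy bits 0..0x1fff of the VMX MSR read bitmap; high MSRs
 * (bit 31 of the index set) occupy bits 0x2000..0x3fff.  Mirrors the
 * patch's msraddr_to_bitpos() macro.
 */
static unsigned int msraddr_to_bitpos(uint32_t msr)
{
    return (msr & 0xffff) + (msr >> 31) * 0x2000;
}
```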
-
-static inline void __core2_vpmu_save(struct vcpu *v)
-{
-    int i;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
-
-    if ( !has_hvm_container_domain(v->domain) )
-        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
-}
-
-/* Must be NMI-safe */
-static int core2_vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !has_hvm_container_domain(v->domain) )
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
-        return 0;
-
-    __core2_vpmu_save(v);
-
-    /* Unset PMU MSR bitmap to trap lazy load. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap &&
-         has_hvm_container_domain(v->domain) )
-        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-
-    return 1;
-}
-
-static inline void __core2_vpmu_load(struct vcpu *v)
-{
-    unsigned int i, pmc_start;
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
-
-    if ( full_width_write )
-        pmc_start = MSR_IA32_A_PERFCTR0;
-    else
-        pmc_start = MSR_IA32_PERFCTR0;
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-    {
-        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
-        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
-    }
-
-    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
-    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
-    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
-
-    if ( !has_hvm_container_domain(v->domain) )
-    {
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
-        core2_vpmu_cxt->global_ovf_ctrl = 0;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
-    }
-}
-
-static void core2_vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-
-    __core2_vpmu_load(v);
-}
-
-static int core2_vpmu_alloc_resource(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
-
-    if ( !is_pv_domain(v->domain) )
-    {
-        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
-            return 0;
-
-        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-
-        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
-            goto out_err;
-        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
-
-        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
-                                       sizeof(uint64_t) * fixed_pmc_cnt +
-                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
-        if ( !core2_vpmu_cxt )
-            goto out_err;
-    }
-    else
-    {
-        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
-        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-    }
-
-    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
-    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
-      sizeof(uint64_t) * fixed_pmc_cnt;
-
-    vpmu->context = (void *)core2_vpmu_cxt;
-
-    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
-
-    return 1;
-
-out_err:
-    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
-    release_pmu_ownship(PMU_OWNER_HVM);
-
-    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
-           v->domain->domain_id, v->vcpu_id);
-
-    return 0;
-}
-
-static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( !is_core2_vpmu_msr(msr_index, type, index) )
-        return 0;
-
-    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
-         !core2_vpmu_alloc_resource(current) )
-        return 0;
-
-    /* Perform the lazy context load. */
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-    {
-        __core2_vpmu_load(current);
-        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
-        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
-            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
-    }
-    return 1;
-}
-
-static void inject_trap(struct vcpu *v, unsigned int trapno)
-{
-    if ( has_hvm_container_domain(v->domain) )
-        hvm_inject_hw_exception(trapno, 0);
-    else
-        send_guest_trap(v->domain, v->vcpu_id, trapno);
-}
-
-static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    u64 global_ctrl, non_global_ctrl;
-    unsigned pmu_enable = 0;
-    int i, tmp;
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        /* Special handling for BTS */
-        if ( msr == MSR_IA32_DEBUGCTLMSR )
-        {
-            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
-                                 IA32_DEBUGCTLMSR_BTINT;
-
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
-                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
-            if ( msr_content & supported )
-            {
-                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                    return 1;
-                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
-                inject_trap(v, TRAP_gp_fault);
-                return 0;
-            }
-        }
-        return 0;
-    }
-
-    core2_vpmu_cxt = vpmu->context;
-    switch ( msr )
-    {
-    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-        core2_vpmu_cxt->global_status &= ~msr_content;
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_STATUS:
-        gdprintk(XENLOG_INFO, "Cannot write read-only MSR: "
-                 "MSR_CORE_PERF_GLOBAL_STATUS(0x38E)!\n");
-        inject_trap(v, TRAP_gp_fault);
-        return 1;
-    case MSR_IA32_PEBS_ENABLE:
-        if ( msr_content & 1 )
-            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
-                     "which is not supported.\n");
-        core2_vpmu_cxt->pebs_enable = msr_content;
-        return 1;
-    case MSR_IA32_DS_AREA:
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            if ( !is_canonical_address(msr_content) )
-            {
-                gdprintk(XENLOG_WARNING,
-                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
-                         msr_content);
-                inject_trap(v, TRAP_gp_fault);
-                return 1;
-            }
-            core2_vpmu_cxt->ds_area = msr_content;
-            break;
-        }
-        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
-        return 1;
-    case MSR_CORE_PERF_GLOBAL_CTRL:
-        global_ctrl = msr_content;
-        for ( i = 0; i < arch_pmc_cnt; i++ )
-        {
-            rdmsrl(MSR_P6_EVNTSEL0+i, non_global_ctrl);
-            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
-            global_ctrl >>= 1;
-        }
-
-        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
-        global_ctrl = msr_content >> 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->global_ctrl = msr_content;
-        break;
-    case MSR_CORE_PERF_FIXED_CTR_CTRL:
-        non_global_ctrl = msr_content;
-        if ( has_hvm_container_domain(v->domain) )
-            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-        else
-            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-        global_ctrl >>= 32;
-        for ( i = 0; i < fixed_pmc_cnt; i++ )
-        {
-            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3)? 1 : 0);
-            non_global_ctrl >>= 4;
-            global_ctrl >>= 1;
-        }
-        core2_vpmu_cxt->fixed_ctrl = msr_content;
-        break;
-    default:
-        tmp = msr - MSR_P6_EVNTSEL0;
-        if ( tmp >= 0 && tmp < arch_pmc_cnt )
-        {
-            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-            if ( has_hvm_container_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
-            xen_pmu_cntr_pair[tmp].control = msr_content;
-            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
-                pmu_enable += (global_ctrl >> i) &
-                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
-        }
-    }
-
-    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
-    if ( pmu_enable )
-        vpmu_set(vpmu, VPMU_RUNNING);
-    else
-        vpmu_reset(vpmu, VPMU_RUNNING);
-
-    if ( type != MSR_TYPE_GLOBAL )
-    {
-        u64 mask;
-        int inject_gp = 0;
-        switch ( type )
-        {
-        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
-            mask = ~((1ull << 32) - 1);
-            if ( msr_content & mask )
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
-            if  ( msr == MSR_IA32_DS_AREA )
-                break;
-            /* 4 bits per counter; the mask covers all fixed_pmc_cnt counters. */
-            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
-            if ( msr_content & mask )
-                inject_gp = 1;
-            break;
-        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
-            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
-            if ( msr_content & mask )
-                inject_gp = 1;
-            break;
-        }
-
-        if ( inject_gp )
-            inject_trap(v, TRAP_gp_fault);
-        else
-            wrmsrl(msr, msr_content);
-    }
-    else
-    {
-       if ( has_hvm_container_domain(v->domain) )
-           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-       else
-           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-
-    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
-    {
-        core2_vpmu_cxt = vpmu->context;
-        switch ( msr )
-        {
-        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
-            *msr_content = 0;
-            break;
-        case MSR_CORE_PERF_GLOBAL_STATUS:
-            *msr_content = core2_vpmu_cxt->global_status;
-            break;
-        case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( has_hvm_container_domain(v->domain) )
-                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
-            else
-                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
-            break;
-        default:
-            rdmsrl(msr, *msr_content);
-        }
-    }
-    else
-    {
-        /* Extension for BTS */
-        if ( msr == MSR_IA32_MISC_ENABLE )
-        {
-            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
-                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
-        }
-        else
-            return 0;
-    }
-
-    return 1;
-}
-
-static void core2_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    if (input == 0x1)
-    {
-        struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
-        {
-            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
-            *edx |= cpufeat_mask(X86_FEATURE_DS);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
-            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
-                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
-        }
-    }
-}
-
-/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
-static void core2_vpmu_dump(const struct vcpu *v)
-{
-    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int i;
-    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
-    u64 val;
-    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
-    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
-        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-         return;
-
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
-    {
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            printk("    vPMU loaded\n");
-        else
-            printk("    vPMU allocated\n");
-        return;
-    }
-
-    printk("    vPMU running\n");
-    core2_vpmu_cxt = vpmu->context;
-
-    /* Print the contents of the counter and its configuration msr. */
-    for ( i = 0; i < arch_pmc_cnt; i++ )
-        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
-            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
-
-    /*
-     * The configuration of the fixed counter is 4 bits each in the
-     * MSR_CORE_PERF_FIXED_CTR_CTRL.
-     */
-    val = core2_vpmu_cxt->fixed_ctrl;
-    for ( i = 0; i < fixed_pmc_cnt; i++ )
-    {
-        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
-               i, fixed_counters[i],
-               val & FIXED_CTR_CTRL_MASK);
-        val >>= FIXED_CTR_CTRL_BITS;
-    }
-}
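The 4-bits-per-counter unpacking of MSR_CORE_PERF_FIXED_CTR_CTRL done by the dump loop above can be restated as a standalone helper (a sketch, not part of the patch; the sample control word in any usage is made up):

```c
#include <assert.h>
#include <stdint.h>

#define FIXED_CTR_CTRL_BITS 4
#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)

/* Extract the 4-bit enable/configuration field of fixed counter i. */
static unsigned int fixed_ctrl_field(uint64_t fixed_ctrl, unsigned int i)
{
    return (fixed_ctrl >> (i * FIXED_CTR_CTRL_BITS)) & FIXED_CTR_CTRL_MASK;
}
```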
-
-static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    u64 msr_content;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
-
-    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
-    if ( msr_content )
-    {
-        if ( is_pmc_quirk )
-            handle_pmc_quirk(msr_content);
-        core2_vpmu_cxt->global_status |= msr_content;
-        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
-        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
-    }
-    else
-    {
-        /* No PMC overflow but perhaps a Trace Message interrupt. */
-        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
-        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
-            return 0;
-    }
-
-    return 1;
-}
-
-static int core2_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    u64 msr_content;
-    struct cpuinfo_x86 *c = &current_cpu_data;
-
-    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
-        goto func_out;
-    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
-    if ( cpu_has(c, X86_FEATURE_DS) )
-    {
-        if ( !cpu_has(c, X86_FEATURE_DTES64) )
-        {
-            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-            goto func_out;
-        }
-        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
-        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
-        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
-        {
-            /* If BTS_UNAVAIL is set reset the DS feature. */
-            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
-            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
-        }
-        else
-        {
-            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
-            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
-                printk(XENLOG_G_INFO
-                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
-            printk("******************************************************\n");
-            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
-            printk("** Using this processor feature in a virtualized    **\n");
-            printk("** environment is not 100%% safe.                    **\n");
-            printk("** Setting the DS buffer address with wrong values  **\n");
-            printk("** may lead to hypervisor hangs or crashes.         **\n");
-            printk("** It is NOT recommended for production use!        **\n");
-            printk("******************************************************\n");
-        }
-    }
-func_out:
-
-    arch_pmc_cnt = core2_get_arch_pmc_count();
-    fixed_pmc_cnt = core2_get_fixed_pmc_count();
-    check_pmc_quirk();
-
-    /* PV domains can allocate resources immediately */
-    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
-            return 1;
-
-    return 0;
-}
-
-static void core2_vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    if ( has_hvm_container_domain(v->domain) )
-    {
-        xfree(vpmu->context);
-        if ( cpu_has_vmx_msr_bitmap )
-            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
-    }
-
-    release_pmu_ownship(PMU_OWNER_HVM);
-    vpmu->context = NULL;
-    vpmu_clear(vpmu);
-}
-
-struct arch_vpmu_ops core2_vpmu_ops = {
-    .do_wrmsr = core2_vpmu_do_wrmsr,
-    .do_rdmsr = core2_vpmu_do_rdmsr,
-    .do_interrupt = core2_vpmu_do_interrupt,
-    .do_cpuid = core2_vpmu_do_cpuid,
-    .arch_vpmu_destroy = core2_vpmu_destroy,
-    .arch_vpmu_save = core2_vpmu_save,
-    .arch_vpmu_load = core2_vpmu_load,
-    .arch_vpmu_dump = core2_vpmu_dump
-};
-
-static void core2_no_vpmu_do_cpuid(unsigned int input,
-                                unsigned int *eax, unsigned int *ebx,
-                                unsigned int *ecx, unsigned int *edx)
-{
-    /*
-     * As the vpmu is not enabled in this case, reset the relevant bits
-     * in the architectural performance-monitoring leaf.
-     */
-    if ( input == 0xa )
-    {
-        *eax &= ~PMU_VERSION_MASK;
-        *eax &= ~PMU_GENERAL_NR_MASK;
-        *eax &= ~PMU_GENERAL_WIDTH_MASK;
-
-        *edx &= ~PMU_FIXED_NR_MASK;
-        *edx &= ~PMU_FIXED_WIDTH_MASK;
-    }
-}
-
-/*
- * If it's a vpmu MSR, set it to 0.
- */
-static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    int type = -1, index = -1;
-    if ( !is_core2_vpmu_msr(msr, &type, &index) )
-        return 0;
-    *msr_content = 0;
-    return 1;
-}
-
-/*
- * These functions are used when the vpmu is not enabled.
- */
-struct arch_vpmu_ops core2_no_vpmu_ops = {
-    .do_rdmsr = core2_no_vpmu_do_rdmsr,
-    .do_cpuid = core2_no_vpmu_do_cpuid,
-};
-
-int vmx_vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t family = current_cpu_data.x86;
-    uint8_t cpu_model = current_cpu_data.x86_model;
-    int ret = 0;
-
-    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
-    if ( vpmu_mode == XENPMU_MODE_OFF )
-        return 0;
-
-    if ( family == 6 )
-    {
-        u64 caps;
-
-        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
-        full_width_write = (caps >> 13) & 1;
-
-        switch ( cpu_model )
-        {
-        /* Core2: */
-        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
-        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
-        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
-        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
-
-        case 0x2a: /* SandyBridge */
-        case 0x2d: /* SandyBridge, "Romley-EP" */
-
-        /* Nehalem: */
-        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
-        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
-        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
-
-        /* Westmere: */
-        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
-        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
-        case 0x27: /* 32 nm Westmere-EX */
-
-        case 0x3a: /* IvyBridge */
-        case 0x3e: /* IvyBridge EP */
-
-        /* Haswell: */
-        case 0x3c:
-        case 0x3f:
-        case 0x45:
-        case 0x46:
-            ret = core2_vpmu_initialise(v);
-            if ( !ret )
-                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
-            return ret;
-        }
-    }
-
-    printk("VPMU: Initialization failed. "
-           "Intel processor family %d model %d is not supported\n",
-           family, cpu_model);
-    return -EINVAL;
-}
-
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
deleted file mode 100644
index 3645e4c..0000000
--- a/xen/arch/x86/hvm/vpmu.c
+++ /dev/null
@@ -1,720 +0,0 @@
-/*
- * vpmu.c: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-#include <xen/config.h>
-#include <xen/sched.h>
-#include <xen/xenoprof.h>
-#include <xen/event.h>
-#include <xen/softirq.h>
-#include <xen/hypercall.h>
-#include <xen/guest_access.h>
-#include <asm/regs.h>
-#include <asm/types.h>
-#include <asm/msr.h>
-#include <asm/p2m.h>
-#include <asm/hvm/support.h>
-#include <asm/hvm/vmx/vmx.h>
-#include <asm/hvm/vmx/vmcs.h>
-#include <asm/hvm/vpmu.h>
-#include <asm/hvm/svm/svm.h>
-#include <asm/hvm/svm/vmcb.h>
-#include <asm/apic.h>
-#include <asm/nmi.h>
-#include <public/pmu.h>
-
-/*
- * Boot parameter "vpmu":
- * "vpmu"      : vpmu generally enabled
- * "vpmu=off"  : vpmu generally disabled
- * "vpmu=bts"  : vpmu enabled and Intel BTS feature switched on
- * "vpmu=nmi"  : vpmu enabled, PMU interrupts delivered as NMIs
- * "vpmu=priv" : vpmu enabled in privileged mode (dom0 profiles the system)
- */
-uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
-uint64_t __read_mostly vpmu_features = 0;
-static void parse_vpmu_param(char *s);
-custom_param("vpmu", parse_vpmu_param);
-
-static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
-static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
-
-static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
-
-static void __init parse_vpmu_param(char *s)
-{
-    char *ss;
-
-    vpmu_mode = XENPMU_MODE_ON;
-    if ( *s == '\0' )
-        return;
-
-    do {
-        ss = strchr(s, ',');
-        if ( ss )
-            *ss = '\0';
-
-        switch  ( parse_bool(s) )
-        {
-        case 0:
-            vpmu_mode = XENPMU_MODE_OFF;
-            return;
-        case -1:
-            if ( !strcmp(s, "nmi") )
-                vpmu_interrupt_type = APIC_DM_NMI;
-            else if ( !strcmp(s, "bts") )
-                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
-            else if ( !strcmp(s, "priv") )
-            {
-                vpmu_mode &= ~XENPMU_MODE_ON;
-                vpmu_mode |= XENPMU_MODE_PRIV;
-            }
-            else
-            {
-                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
-                vpmu_mode = XENPMU_MODE_OFF;
-                return;
-            }
-        default:
-            break;
-        }
-
-        s = ss + 1;
-    } while ( ss );
-}
-
-void vpmu_lvtpc_update(uint32_t val)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
-
-    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
-    if ( !has_hvm_container_domain(current->domain) ||
-         !(current->arch.vpmu.xenpmu_data &&
-           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-}
-
-static void vpmu_send_nmi(struct vcpu *v)
-{
-    struct vlapic *vlapic;
-    u32 vlapic_lvtpc;
-    unsigned char int_vec;
-
-    ASSERT( is_hvm_vcpu(v) );
-
-    vlapic = vcpu_vlapic(v);
-    if ( !is_vlapic_lvtpc_enabled(vlapic) )
-        return;
-
-    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
-    else
-        v->nmi_pending = 1;
-}
-
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
-         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
-
-        /*
-         * We may have received a PMU interrupt during WRMSR handling
-         * and since do_wrmsr may load VPMU context we should save
-         * (and unload) it again.
-         */
-        if ( !has_hvm_container_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
-         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
-        return 0;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
-    {
-        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
-
-        if ( !has_hvm_container_domain(current->domain) &&
-            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
-            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        }
-        return ret;
-    }
-    return 0;
-}
-
-/* This routine may be called in NMI context */
-int vpmu_do_interrupt(struct cpu_user_regs *regs)
-{
-    struct vcpu *v = current;
-    struct vpmu_struct *vpmu;
-
-    /* dom0 will handle this interrupt */
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-
-    vpmu = vcpu_vpmu(v);
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return 0;
-
-    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-    {
-        /* PV(H) guest or dom0 is doing system profiling */
-        struct cpu_user_regs *gregs;
-        int err;
-
-        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
-            return 1;
-
-        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
-             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        /* PV guest will be reading PMU MSRs from xenpmu_data */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
-
-        if ( !is_hvm_domain(current->domain) )
-        {
-            /* Store appropriate registers in xenpmu_data */
-            if ( is_pv_32bit_domain(current->domain) )
-            {
-                gregs = guest_cpu_user_regs();
-
-                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
-                     !is_pv_32bit_domain(v->domain) )
-                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                           gregs, sizeof(struct cpu_user_regs));
-                else 
-                {
-                    /*
-                     * 32-bit dom0 cannot process Xen's addresses (which are
-                     * 64 bit) and therefore we treat it the same way as a
-                     * non-priviledged PV 32-bit domain.
-                     */
-
-                    struct compat_cpu_user_regs *cmp;
-
-                    cmp = (struct compat_cpu_user_regs *)
-                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                    XLAT_cpu_user_regs(cmp, gregs);
-                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                           &cmp, sizeof(struct compat_cpu_user_regs));
-                }
-            }
-            else if ( !is_control_domain(current->domain) &&
-                      !is_idle_vcpu(current) )
-            {
-                /* PV(H) guest */
-                gregs = guest_cpu_user_regs();
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       gregs, sizeof(struct cpu_user_regs));
-            }
-            else
-                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                       regs, sizeof(struct cpu_user_regs));
-
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            if ( !is_pvh_domain(current->domain) )
-                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
-            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
-            {
-                struct segment_register seg_cs;
-
-                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
-                gregs->cs = seg_cs.attr.fields.dpl;
-            }
-        }
-        else
-        {
-            /* HVM guest */
-            struct segment_register cs;
-
-            gregs = guest_cpu_user_regs();
-            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
-                   gregs, sizeof(struct cpu_user_regs));
-
-            /* This is unsafe in NMI context, we'll do it in softint handler */
-            if ( !(vpmu_interrupt_type & APIC_DM_NMI ) )
-            {
-                hvm_get_segment_register(current, x86_seg_cs, &cs);
-                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-                gregs->cs = cs.attr.fields.dpl;
-            }
-        }
-
-        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
-        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
-        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
-
-        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
-        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
-        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
-
-        if ( vpmu_interrupt_type & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            send_guest_vcpu_virq(v, VIRQ_XENPMU);
-
-        return 1;
-    }
-
-    if ( vpmu->arch_vpmu_ops )
-    {
-        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
-            return 0;
-
-        if ( vpmu_interrupt_type & APIC_DM_NMI )
-        {
-            per_cpu(sampled_vcpu, smp_processor_id()) = current;
-            raise_softirq(PMU_SOFTIRQ);
-        }
-        else
-            vpmu_send_nmi(v);
-
-        return 1;
-    }
-
-    return 0;
-}
-
-void vpmu_do_cpuid(unsigned int input,
-                   unsigned int *eax, unsigned int *ebx,
-                   unsigned int *ecx, unsigned int *edx)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(current);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
-        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
-}
-
-static void vpmu_save_force(void *arg)
-{
-    struct vcpu *v = (struct vcpu *)arg;
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops )
-        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
-
-    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
-
-    per_cpu(last_vcpu, smp_processor_id()) = NULL;
-}
-
-void vpmu_save(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-
-    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
-       return;
-
-    vpmu->last_pcpu = pcpu;
-    per_cpu(last_vcpu, pcpu) = v;
-
-    if ( vpmu->arch_vpmu_ops )
-        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
-}
-
-void vpmu_load(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    int pcpu = smp_processor_id();
-    struct vcpu *prev = NULL;
-
-    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        return;
-
-    /* First time this VCPU is running here */
-    if ( vpmu->last_pcpu != pcpu )
-    {
-        /*
-         * Get the context from last pcpu that we ran on. Note that if another
-         * VCPU is running there it must have saved this VPCU's context before
-         * startig to run (see below).
-         * There should be no race since remote pcpu will disable interrupts
-         * before saving the context.
-         */
-        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-        {
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-        }
-    } 
-
-    /* Prevent forced context save from remote CPU */
-    local_irq_disable();
-
-    prev = per_cpu(last_vcpu, pcpu);
-
-    if ( prev != v && prev )
-    {
-        vpmu = vcpu_vpmu(prev);
-
-        /* Someone ran here before us */
-        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-        vpmu_save_force(prev);
-        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-        vpmu = vcpu_vpmu(v);
-    }
-
-    local_irq_enable();
-
-    /* Only when PMU is counting, we load PMU context immediately. */
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
-        (!has_hvm_container_domain(v->domain) && vpmu->xenpmu_data->pmu_flags & PMU_CACHED) )
-        return;
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
-    {
-        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
-        /* Arch code needs to set VPMU_CONTEXT_LOADED */
-        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
-    }
-}
-
-void vpmu_initialise(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-    uint8_t vendor = current_cpu_data.x86_vendor;
-
-    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
-        vpmu_destroy(v);
-    vpmu_clear(vpmu);
-    vpmu->context = NULL;
-
-    switch ( vendor )
-    {
-    case X86_VENDOR_AMD:
-        if ( svm_vpmu_initialise(v) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    case X86_VENDOR_INTEL:
-        if ( vmx_vpmu_initialise(v) != 0 )
-            vpmu_mode = XENPMU_MODE_OFF;
-        return;
-
-    default:
-        printk("VPMU: Initialization failed. "
-               "Unknown CPU vendor %d\n", vendor);
-        vpmu_mode = XENPMU_MODE_OFF;
-        return;
-    }
-}
-
-void vpmu_destroy(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
-    {
-        /* Unload VPMU first. This will stop counters */
-        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
-                         vpmu_save_force, (void *)v, 1);
-
-        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
-    }
-}
-
-/* Dump some vpmu informations on console. Used in keyhandler dump_domains(). */
-void vpmu_dump(struct vcpu *v)
-{
-    struct vpmu_struct *vpmu = vcpu_vpmu(v);
-
-    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
-        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
-}
-
-/* Unload VPMU contexts */
-static void vpmu_unload_all(void)
-{
-    struct domain *d;
-    struct vcpu *v;
-    struct vpmu_struct *vpmu;
-
-    for_each_domain(d)
-    {
-        for_each_vcpu ( d, v )
-        {
-            if ( v != current )
-                vcpu_pause(v);
-            vpmu = vcpu_vpmu(v);
-
-            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
-            {
-                if ( v != current )
-                    vcpu_unpause(v);
-                continue;
-            }
-
-            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
-            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
-                             vpmu_save_force, (void *)v, 1);
-            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
-
-            if ( v != current )
-                vcpu_unpause(v);
-        }
-    }
-}
-
-/* Process the softirq set by PMU NMI handler */
-static void pmu_softnmi(void)
-{
-    struct cpu_user_regs *regs;
-    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
-
-    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
-         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
-        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
-    else
-    {
-        if ( is_hvm_domain(sampled->domain) )
-        {
-            vpmu_send_nmi(sampled);
-            return;
-        }
-        v = sampled;
-    }
-
-    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-    if ( has_hvm_container_domain(sampled->domain) )
-    {
-        struct segment_register cs;
-
-        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
-        regs->cs = cs.attr.fields.dpl;
-    }
-
-    send_guest_vcpu_virq(v, VIRQ_XENPMU);
-}
-
-int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
-{
-    return vpmu_do_interrupt(regs);
-}
-
-static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    struct page_info *page;
-    uint64_t gfn = params->d.val;
-    static bool_t __read_mostly pvpmu_initted = 0;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return -EINVAL;
-
-    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
-    if ( !page )
-        return -EINVAL;
-
-    if ( !get_page_type(page, PGT_writable_page) )
-    {
-        put_page(page);
-        return -EINVAL;
-    }
-
-    v = d->vcpu[params->vcpu];
-    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
-    if ( !v->arch.vpmu.xenpmu_data )
-    {
-        put_page_and_type(page);
-        return -EINVAL;
-    }
-
-    if ( !pvpmu_initted )
-    {
-        if (reserve_lapic_nmi() == 0)
-            set_nmi_callback(pmu_nmi_interrupt);
-        else
-        {
-            printk("Failed to reserve PMU NMI\n");
-            put_page(page);
-            return -EBUSY;
-        }
-        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
-
-        pvpmu_initted = 1;
-    }
-
-    vpmu_initialise(v);
-
-    return 0;
-}
-
-static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
-{
-    struct vcpu *v;
-    uint64_t mfn;
-
-    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
-        return;
-
-    v = d->vcpu[params->vcpu];
-    if (v != current)
-        vcpu_pause(v);
-
-    if ( v->arch.vpmu.xenpmu_data )
-    {
-        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
-        if ( mfn_valid(mfn) )
-        {
-            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
-            put_page_and_type(mfn_to_page(mfn));
-        }
-    }
-    vpmu_destroy(v);
-
-    if (v != current)
-        vcpu_unpause(v);
-}
-
-long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
-{
-    int ret = -EINVAL;
-    xen_pmu_params_t pmu_params;
-
-    switch ( op )
-    {
-    case XENPMU_mode_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
-             ((pmu_params.d.val & XENPMU_MODE_ON) &&
-              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
-            return -EINVAL;
-
-        vpmu_mode = pmu_params.d.val;
-
-        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
-            /*
-             * After this VPMU context will never be loaded during context
-             * switch. Because PMU MSR accesses load VPMU context we don't
-             * allow them when VPMU is off and, for non-provileged domains,
-             * when we are in privileged mode. (We do want these accesses to
-             * load VPMU context for control domain in this mode)
-             */
-            vpmu_unload_all();
-
-        ret = 0;
-        break;
-
-    case XENPMU_mode_get:
-        pmu_params.d.val = vpmu_mode;
-        pmu_params.v.version.maj = XENPMU_VER_MAJ;
-        pmu_params.v.version.min = XENPMU_VER_MIN;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_feature_set:
-        if ( !is_control_domain(current->domain) )
-            return -EPERM;
-
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-
-        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
-            return -EINVAL;
-
-        vpmu_features = pmu_params.d.val;
-
-        ret = 0;
-        break;
-
-    case XENPMU_feature_get:
-        pmu_params.d.val = vpmu_mode;
-        if ( copy_to_guest(arg, &pmu_params, 1) )
-            return -EFAULT;
-        ret = 0;
-        break;
-
-    case XENPMU_init:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        ret = pvpmu_init(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_finish:
-        if ( copy_from_guest(&pmu_params, arg, 1) )
-            return -EFAULT;
-        pvpmu_finish(current->domain, &pmu_params);
-        break;
-
-    case XENPMU_lvtpc_set:
-        if ( current->arch.vpmu.xenpmu_data == NULL )
-            return -EINVAL;
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        ret = 0;
-        break;
-    case XENPMU_flush:
-        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
-        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
-        vpmu_load(current);
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
diff --git a/xen/arch/x86/oprofile/op_model_ppro.c b/xen/arch/x86/oprofile/op_model_ppro.c
index 5aae2e7..bf5d9a5 100644
--- a/xen/arch/x86/oprofile/op_model_ppro.c
+++ b/xen/arch/x86/oprofile/op_model_ppro.c
@@ -19,7 +19,7 @@
 #include <asm/processor.h>
 #include <asm/regs.h>
 #include <asm/current.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 
 #include "op_x86_model.h"
 #include "op_counter.h"
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 83ea479..7fb1d30 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -72,7 +72,7 @@
 #include <asm/apic.h>
 #include <asm/mc146818rtc.h>
 #include <asm/hpet.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <public/arch-x86/cpuid.h>
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/vpmu.c b/xen/arch/x86/vpmu.c
new file mode 100644
index 0000000..8c2723b
--- /dev/null
+++ b/xen/arch/x86/vpmu.c
@@ -0,0 +1,720 @@
+/*
+ * vpmu.c: PMU virtualization for HVM domains.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/msr.h>
+#include <asm/p2m.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <asm/vpmu.h>
+#include <asm/hvm/svm/svm.h>
+#include <asm/hvm/svm/vmcb.h>
+#include <asm/apic.h>
+#include <asm/nmi.h>
+#include <public/pmu.h>
+
+/*
+ * "vpmu" :     vpmu generally enabled
+ * "vpmu=off" : vpmu generally disabled
+ * "vpmu=bts" : vpmu enabled and Intel BTS feature switched on.
+ * "vpmu=nmi" : vpmu enabled, PMU interrupt delivered as an NMI.
+ * "vpmu=priv": vpmu enabled in privileged mode (samples are delivered
+ *              to the control domain for system-wide profiling).
+ */
+uint64_t __read_mostly vpmu_mode = XENPMU_MODE_OFF;
+uint64_t __read_mostly vpmu_features = 0;
+static void parse_vpmu_param(char *s);
+custom_param("vpmu", parse_vpmu_param);
+
+static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
+
+static void __init parse_vpmu_param(char *s)
+{
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if ( *s == '\0' )
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch ( parse_bool(s) )
+        {
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_interrupt_type = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
+            break;
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+void vpmu_lvtpc_update(uint32_t val)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
+
+    /* Postpone APIC updates for PV guests if PMU interrupt is pending */
+    if ( !has_hvm_container_domain(current->domain) ||
+         !(current->arch.vpmu.xenpmu_data &&
+           current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+}
+
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic;
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    ASSERT( is_hvm_vcpu(v) );
+
+    vlapic = vcpu_vlapic(v);
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_wrmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_wrmsr(msr, msr_content);
+
+        /*
+         * We may have received a PMU interrupt during WRMSR handling
+         * and, since do_wrmsr may load the VPMU context, we should save
+         * (and unload) it again.
+         */
+        if ( !has_hvm_container_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( (vpmu_mode == XENPMU_MODE_OFF) ||
+         ((vpmu_mode & XENPMU_MODE_PRIV) && !is_control_domain(current->domain)) )
+        return 0;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_rdmsr )
+    {
+        int ret = vpmu->arch_vpmu_ops->do_rdmsr(msr, msr_content);
+
+        if ( !has_hvm_container_domain(current->domain) &&
+            (current->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            vpmu->arch_vpmu_ops->arch_vpmu_save(current);
+            vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        }
+        return ret;
+    }
+    return 0;
+}
+
+/* This routine may be called in NMI context */
+int vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu;
+
+    /* dom0 will handle this interrupt */
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (v->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+
+    vpmu = vcpu_vpmu(v);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( !is_hvm_domain(v->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+    {
+        /* PV(H) guest or dom0 is doing system profiling */
+        struct cpu_user_regs *gregs;
+        int err;
+
+        if ( v->arch.vpmu.xenpmu_data->pmu_flags & PMU_CACHED )
+            return 1;
+
+        if ( is_pvh_domain(current->domain) && !(vpmu_mode & XENPMU_MODE_PRIV) &&
+             !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        /* PV guest will be reading PMU MSRs from xenpmu_data */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+        err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+        vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
+
+        if ( !is_hvm_domain(current->domain) )
+        {
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64-bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    /* XLAT writes straight into xenpmu_data via cmp */
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV(H) guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            if ( !is_pvh_domain(current->domain) )
+                gregs->cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
+            else if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                struct segment_register seg_cs;
+
+                hvm_get_segment_register(current, x86_seg_cs, &seg_cs);
+                gregs->cs = seg_cs.attr.fields.dpl;
+            }
+        }
+        else
+        {
+            /* HVM guest */
+            struct segment_register cs;
+
+            gregs = guest_cpu_user_regs();
+            memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                   gregs, sizeof(struct cpu_user_regs));
+
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( !(vpmu_interrupt_type & APIC_DM_NMI) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
+        }
+
+        v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
+        v->arch.vpmu.xenpmu_data->vcpu_id = current->vcpu_id;
+        v->arch.vpmu.xenpmu_data->pcpu_id = smp_processor_id();
+
+        if ( !is_pvh_domain(current->domain) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            v->arch.vpmu.xenpmu_data->pmu_flags |= PMU_CACHED;
+        apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
+        vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
+
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
+
+        return 1;
+    }
+
+    if ( vpmu->arch_vpmu_ops )
+    {
+        if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
+            return 0;
+
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            vpmu_send_nmi(v);
+
+        return 1;
+    }
+
+    return 0;
+}
+
+void vpmu_do_cpuid(unsigned int input,
+                   unsigned int *eax, unsigned int *ebx,
+                   unsigned int *ecx, unsigned int *edx)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->do_cpuid )
+        vpmu->arch_vpmu_ops->do_cpuid(input, eax, ebx, ecx, edx);
+}
+
+static void vpmu_save_force(void *arg)
+{
+    struct vcpu *v = (struct vcpu *)arg;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops )
+        (void)vpmu->arch_vpmu_ops->arch_vpmu_save(v);
+
+    vpmu_reset(vpmu, VPMU_CONTEXT_SAVE);
+
+    per_cpu(last_vcpu, smp_processor_id()) = NULL;
+}
+
+void vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_ALLOCATED | VPMU_CONTEXT_LOADED) )
+       return;
+
+    vpmu->last_pcpu = pcpu;
+    per_cpu(last_vcpu, pcpu) = v;
+
+    if ( vpmu->arch_vpmu_ops )
+        if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
+}
+
+void vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int pcpu = smp_processor_id();
+    struct vcpu *prev = NULL;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    /* First time this VCPU is running here */
+    if ( vpmu->last_pcpu != pcpu )
+    {
+        /*
+         * Get the context from the last pcpu that we ran on. Note that if
+         * another VCPU is running there it must have saved this VCPU's
+         * context before starting to run (see below).
+         * There should be no race since the remote pcpu will disable
+         * interrupts before saving the context.
+         */
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        {
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+        }
+    }
+
+    /* Prevent forced context save from remote CPU */
+    local_irq_disable();
+
+    prev = per_cpu(last_vcpu, pcpu);
+
+    if ( prev != v && prev )
+    {
+        vpmu = vcpu_vpmu(prev);
+
+        /* Someone ran here before us */
+        vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+        vpmu_save_force(prev);
+        vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+        vpmu = vcpu_vpmu(v);
+    }
+
+    local_irq_enable();
+
+    /* Load the PMU context immediately only when the PMU is counting */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) ||
+         (!has_hvm_container_domain(v->domain) &&
+          (vpmu->xenpmu_data->pmu_flags & PMU_CACHED)) )
+        return;
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_load )
+    {
+        apic_write_around(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
+        /* Arch code needs to set VPMU_CONTEXT_LOADED */
+        vpmu->arch_vpmu_ops->arch_vpmu_load(v);
+    }
+}
+
+void vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t vendor = current_cpu_data.x86_vendor;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        vpmu_destroy(v);
+    vpmu_clear(vpmu);
+    vpmu->context = NULL;
+
+    switch ( vendor )
+    {
+    case X86_VENDOR_AMD:
+        if ( svm_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    case X86_VENDOR_INTEL:
+        if ( vmx_vpmu_initialise(v) != 0 )
+            vpmu_mode = XENPMU_MODE_OFF;
+        return;
+
+    default:
+        printk("VPMU: Initialization failed. "
+               "Unknown CPU vendor %d\n", vendor);
+        vpmu_mode = XENPMU_MODE_OFF;
+        return;
+    }
+}
+
+void vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_destroy )
+    {
+        /* Unload VPMU first. This will stop counters */
+        on_selected_cpus(cpumask_of(vcpu_vpmu(v)->last_pcpu),
+                         vpmu_save_force, (void *)v, 1);
+
+        vpmu->arch_vpmu_ops->arch_vpmu_destroy(v);
+    }
+}
+
+/* Dump some vpmu information on the console. Used in keyhandler dump_domains(). */
+void vpmu_dump(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu->arch_vpmu_ops && vpmu->arch_vpmu_ops->arch_vpmu_dump )
+        vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
+}
+
+/* Unload VPMU contexts */
+static void vpmu_unload_all(void)
+{
+    struct domain *d;
+    struct vcpu *v;
+    struct vpmu_struct *vpmu;
+
+    for_each_domain(d)
+    {
+        for_each_vcpu ( d, v )
+        {
+            if ( v != current )
+                vcpu_pause(v);
+            vpmu = vcpu_vpmu(v);
+
+            if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            {
+                if ( v != current )
+                    vcpu_unpause(v);
+                continue;
+            }
+
+            vpmu_set(vpmu, VPMU_CONTEXT_SAVE);
+            on_selected_cpus(cpumask_of(vpmu->last_pcpu),
+                             vpmu_save_force, (void *)v, 1);
+            vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
+
+            if ( v != current )
+                vcpu_unpause(v);
+        }
+    }
+}
+
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( has_hvm_container_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
+static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    struct page_info *page;
+    uint64_t gfn = params->d.val;
+    static bool_t __read_mostly pvpmu_initted = 0;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    page = get_page_from_gfn(d, gfn, NULL, P2M_ALLOC);
+    if ( !page )
+        return -EINVAL;
+
+    if ( !get_page_type(page, PGT_writable_page) )
+    {
+        put_page(page);
+        return -EINVAL;
+    }
+
+    v = d->vcpu[params->vcpu];
+    v->arch.vpmu.xenpmu_data = __map_domain_page_global(page);
+    if ( !v->arch.vpmu.xenpmu_data )
+    {
+        put_page_and_type(page);
+        return -EINVAL;
+    }
+
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
+    vpmu_initialise(v);
+
+    return 0;
+}
+
+static void pvpmu_finish(struct domain *d, xen_pmu_params_t *params)
+{
+    struct vcpu *v;
+    uint64_t mfn;
+
+    if ( params->vcpu < 0 || params->vcpu >= d->max_vcpus )
+        return;
+
+    v = d->vcpu[params->vcpu];
+    if ( v != current )
+        vcpu_pause(v);
+
+    if ( v->arch.vpmu.xenpmu_data )
+    {
+        mfn = domain_page_map_to_mfn(v->arch.vpmu.xenpmu_data);
+        if ( mfn_valid(mfn) )
+        {
+            unmap_domain_page_global(v->arch.vpmu.xenpmu_data);
+            put_page_and_type(mfn_to_page(mfn));
+        }
+    }
+    vpmu_destroy(v);
+
+    if ( v != current )
+        vcpu_unpause(v);
+}
+
+long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
+{
+    int ret = -EINVAL;
+    xen_pmu_params_t pmu_params;
+
+    switch ( op )
+    {
+    case XENPMU_mode_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( (pmu_params.d.val & ~(XENPMU_MODE_ON | XENPMU_MODE_PRIV)) ||
+             ((pmu_params.d.val & XENPMU_MODE_ON) &&
+              (pmu_params.d.val & XENPMU_MODE_PRIV)) )
+            return -EINVAL;
+
+        vpmu_mode = pmu_params.d.val;
+
+        if ( (vpmu_mode == XENPMU_MODE_OFF) || (vpmu_mode & XENPMU_MODE_PRIV) )
+            /*
+             * After this, VPMU context will never be loaded during a
+             * context switch. Because PMU MSR accesses load VPMU context,
+             * we don't allow them when VPMU is off and, for non-privileged
+             * domains, when we are in privileged mode. (We do want these
+             * accesses to load VPMU context for the control domain in this
+             * mode.)
+             */
+            vpmu_unload_all();
+
+        ret = 0;
+        break;
+
+    case XENPMU_mode_get:
+        pmu_params.d.val = vpmu_mode;
+        pmu_params.v.version.maj = XENPMU_VER_MAJ;
+        pmu_params.v.version.min = XENPMU_VER_MIN;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_feature_set:
+        if ( !is_control_domain(current->domain) )
+            return -EPERM;
+
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+
+        if ( pmu_params.d.val & ~XENPMU_FEATURE_INTEL_BTS )
+            return -EINVAL;
+
+        vpmu_features = pmu_params.d.val;
+
+        ret = 0;
+        break;
+
+    case XENPMU_feature_get:
+        pmu_params.d.val = vpmu_features;
+        if ( copy_to_guest(arg, &pmu_params, 1) )
+            return -EFAULT;
+        ret = 0;
+        break;
+
+    case XENPMU_init:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        ret = pvpmu_init(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_finish:
+        if ( copy_from_guest(&pmu_params, arg, 1) )
+            return -EFAULT;
+        pvpmu_finish(current->domain, &pmu_params);
+        break;
+
+    case XENPMU_lvtpc_set:
+        if ( current->arch.vpmu.xenpmu_data == NULL )
+            return -EINVAL;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        ret = 0;
+        break;
+    case XENPMU_flush:
+        current->arch.vpmu.xenpmu_data->pmu_flags &= ~PMU_CACHED;
+        vpmu_lvtpc_update(current->arch.vpmu.xenpmu_data->pmu.l.lapic_lvtpc);
+        vpmu_load(current);
+        ret = 0;
+        break;
+    }
+
+    return ret;
+}
diff --git a/xen/arch/x86/vpmu_amd.c b/xen/arch/x86/vpmu_amd.c
new file mode 100644
index 0000000..32d2882
--- /dev/null
+++ b/xen/arch/x86/vpmu_amd.c
@@ -0,0 +1,509 @@
+/*
+ * vpmu_amd.c: AMD specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2010, Advanced Micro Devices, Inc.
+ * Parts of this code are Copyright (c) 2007, Intel Corporation
+ *
+ * Author: Wei Wang <wei.wang2@amd.com>
+ * Tested by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ */
+
+#include <xen/config.h>
+#include <xen/xenoprof.h>
+#include <xen/hvm/save.h>
+#include <xen/sched.h>
+#include <xen/irq.h>
+#include <asm/apic.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/vpmu.h>
+#include <public/pmu.h>
+
+#define MSR_F10H_EVNTSEL_GO_SHIFT   40
+#define MSR_F10H_EVNTSEL_EN_SHIFT   22
+#define MSR_F10H_COUNTER_LENGTH     48
+
+#define is_guest_mode(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
+#define set_guest_mode(msr) ((msr) |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
+#define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH-1))))
+
+static unsigned int __read_mostly num_counters;
+static const u32 __read_mostly *counters;
+static const u32 __read_mostly *ctrls;
+static bool_t __read_mostly k7_counters_mirrored;
+
+#define F10H_NUM_COUNTERS   4
+#define F15H_NUM_COUNTERS   6
+#define AMD_MAX_COUNTERS    6
+
+/* PMU Counter MSRs. */
+static const u32 AMD_F10H_COUNTERS[] = {
+    MSR_K7_PERFCTR0,
+    MSR_K7_PERFCTR1,
+    MSR_K7_PERFCTR2,
+    MSR_K7_PERFCTR3
+};
+
+/* PMU Control MSRs. */
+static const u32 AMD_F10H_CTRLS[] = {
+    MSR_K7_EVNTSEL0,
+    MSR_K7_EVNTSEL1,
+    MSR_K7_EVNTSEL2,
+    MSR_K7_EVNTSEL3
+};
+
+static const u32 AMD_F15H_COUNTERS[] = {
+    MSR_AMD_FAM15H_PERFCTR0,
+    MSR_AMD_FAM15H_PERFCTR1,
+    MSR_AMD_FAM15H_PERFCTR2,
+    MSR_AMD_FAM15H_PERFCTR3,
+    MSR_AMD_FAM15H_PERFCTR4,
+    MSR_AMD_FAM15H_PERFCTR5
+};
+
+static const u32 AMD_F15H_CTRLS[] = {
+    MSR_AMD_FAM15H_EVNTSEL0,
+    MSR_AMD_FAM15H_EVNTSEL1,
+    MSR_AMD_FAM15H_EVNTSEL2,
+    MSR_AMD_FAM15H_EVNTSEL3,
+    MSR_AMD_FAM15H_EVNTSEL4,
+    MSR_AMD_FAM15H_EVNTSEL5
+};
+
+static inline int get_pmu_reg_type(u32 addr)
+{
+    if ( (addr >= MSR_K7_EVNTSEL0) && (addr <= MSR_K7_EVNTSEL3) )
+        return MSR_TYPE_CTRL;
+
+    if ( (addr >= MSR_K7_PERFCTR0) && (addr <= MSR_K7_PERFCTR3) )
+        return MSR_TYPE_COUNTER;
+
+    if ( (addr >= MSR_AMD_FAM15H_EVNTSEL0) &&
+         (addr <= MSR_AMD_FAM15H_PERFCTR5 ) )
+    {
+        if ( addr & 1 )
+            return MSR_TYPE_COUNTER;
+        else
+            return MSR_TYPE_CTRL;
+    }
+
+    /* unsupported registers */
+    return -1;
+}
+
+static inline u32 get_fam15h_addr(u32 addr)
+{
+    switch ( addr )
+    {
+    case MSR_K7_PERFCTR0:
+        return MSR_AMD_FAM15H_PERFCTR0;
+    case MSR_K7_PERFCTR1:
+        return MSR_AMD_FAM15H_PERFCTR1;
+    case MSR_K7_PERFCTR2:
+        return MSR_AMD_FAM15H_PERFCTR2;
+    case MSR_K7_PERFCTR3:
+        return MSR_AMD_FAM15H_PERFCTR3;
+    case MSR_K7_EVNTSEL0:
+        return MSR_AMD_FAM15H_EVNTSEL0;
+    case MSR_K7_EVNTSEL1:
+        return MSR_AMD_FAM15H_EVNTSEL1;
+    case MSR_K7_EVNTSEL2:
+        return MSR_AMD_FAM15H_EVNTSEL2;
+    case MSR_K7_EVNTSEL3:
+        return MSR_AMD_FAM15H_EVNTSEL3;
+    default:
+        break;
+    }
+
+    return addr;
+}
+
+static void amd_vpmu_set_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_NONE);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_WRITE);
+    }
+
+    ctxt->msr_bitmap_set = 1;
+}
+
+static void amd_vpmu_unset_msr_bitmap(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        svm_intercept_msr(v, counters[i], MSR_INTERCEPT_RW);
+        svm_intercept_msr(v, ctrls[i], MSR_INTERCEPT_RW);
+    }
+
+    ctxt->msr_bitmap_set = 0;
+}
+
+/* Must be NMI-safe */
+static int amd_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    return 1;
+}
+
+static inline void context_load(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        wrmsrl(counters[i], counter_regs[i]);
+        wrmsrl(ctrls[i], ctrl_regs[i]);
+    }
+}
+
+/* Must be NMI-safe */
+static void amd_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    vpmu_reset(vpmu, VPMU_FROZEN);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        unsigned int i;
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], ctrl_regs[i]);
+
+        return;
+    }
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
+    context_load(v);
+}
+
+static inline void context_save(struct vcpu *v)
+{
+    unsigned int i;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+
+    /* No need to save controls -- they are saved in amd_vpmu_do_wrmsr */
+    for ( i = 0; i < num_counters; i++ )
+        rdmsrl(counters[i], counter_regs[i]);
+}
+
+static int amd_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctx = vpmu->context;
+    unsigned int i;
+
+    /*
+     * Stop the counters. If we came here via vpmu_save_force (i.e.
+     * when VPMU_CONTEXT_SAVE is set) counters are already stopped.
+     */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+    {
+        vpmu_set(vpmu, VPMU_FROZEN);
+
+        for ( i = 0; i < num_counters; i++ )
+            wrmsrl(ctrls[i], 0);
+
+        return 0;
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    context_save(v);
+
+    if ( has_hvm_container_domain(v->domain) &&
+        !vpmu_is_set(vpmu, VPMU_RUNNING) && ctx->msr_bitmap_set )
+        amd_vpmu_unset_msr_bitmap(v);
+
+    return 1;
+}
+
+static void context_update(unsigned int msr, u64 msr_content)
+{
+    unsigned int i;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+
+    if ( k7_counters_mirrored &&
+        ((msr >= MSR_K7_EVNTSEL0) && (msr <= MSR_K7_PERFCTR3)) )
+    {
+        msr = get_fam15h_addr(msr);
+    }
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        if ( msr == ctrls[i] )
+        {
+            ctrl_regs[i] = msr_content;
+            return;
+        }
+        else if ( msr == counters[i] )
+        {
+            counter_regs[i] = msr_content;
+            return;
+        }
+    }
+}
+
+static int amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    /* For all counters, enable guest only mode for HVM guest */
+    if ( has_hvm_container_domain(v->domain) &&
+         (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+         !is_guest_mode(msr_content) )
+    {
+        set_guest_mode(msr_content);
+    }
+
+    /* check if the first counter is enabled */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        is_pmu_enabled(msr_content) && !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 1;
+        vpmu_set(vpmu, VPMU_RUNNING);
+
+        if ( has_hvm_container_domain(v->domain) &&
+             !((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_set_msr_bitmap(v);
+    }
+
+    /* stop saving & restore if guest stops first counter */
+    if ( (get_pmu_reg_type(msr) == MSR_TYPE_CTRL) &&
+        (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        vpmu_reset(vpmu, VPMU_RUNNING);
+        if ( has_hvm_container_domain(v->domain) &&
+             ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+        release_pmu_ownship(PMU_OWNER_HVM);
+    }
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    /* Update vpmu context immediately */
+    context_update(msr, msr_content);
+
+    /* Write to hw counters */
+    wrmsrl(msr, msr_content);
+    return 1;
+}
+
+static int amd_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED)
+        || vpmu_is_set(vpmu, VPMU_FROZEN) )
+    {
+        context_load(v);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        vpmu_reset(vpmu, VPMU_FROZEN);
+    }
+
+    rdmsrl(msr, *msr_content);
+
+    return 1;
+}
+
+static int amd_vpmu_initialise(struct vcpu *v)
+{
+    struct xen_pmu_amd_ctxt *ctxt;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return 0;
+
+    if ( counters == NULL )
+    {
+        switch ( family )
+        {
+        case 0x15:
+            num_counters = F15H_NUM_COUNTERS;
+            counters = AMD_F15H_COUNTERS;
+            ctrls = AMD_F15H_CTRLS;
+            k7_counters_mirrored = 1;
+            break;
+        case 0x10:
+        case 0x12:
+        case 0x14:
+        case 0x16:
+        default:
+            num_counters = F10H_NUM_COUNTERS;
+            counters = AMD_F10H_COUNTERS;
+            ctrls = AMD_F10H_CTRLS;
+            k7_counters_mirrored = 0;
+            break;
+        }
+    }
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        ctxt = xzalloc_bytes(sizeof(struct xen_pmu_amd_ctxt) +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS +
+                             sizeof(uint64_t) * AMD_MAX_COUNTERS);
+        if ( !ctxt )
+        {
+            gdprintk(XENLOG_WARNING, "Insufficient memory for PMU; "
+                     "PMU feature is unavailable on domain %d vcpu %d.\n",
+                     v->domain->domain_id, v->vcpu_id);
+            return -ENOMEM;
+        }
+    }
+    else
+        ctxt = &v->arch.vpmu.xenpmu_data->pmu.c.amd;
+
+    ctxt->counters = sizeof(struct xen_pmu_amd_ctxt);
+    ctxt->ctrls = ctxt->counters + sizeof(uint64_t) * AMD_MAX_COUNTERS;
+
+    vpmu->context = ctxt;
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    return 0;
+}
+
+static void amd_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        if ( ((struct xen_pmu_amd_ctxt *)vpmu->context)->msr_bitmap_set )
+            amd_vpmu_unset_msr_bitmap(v);
+
+        xfree(vpmu->context);
+    }
+
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+    release_pmu_ownship(PMU_OWNER_HVM);
+}
+
+/* VPMU part of the 'q' keyhandler */
+static void amd_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    const struct xen_pmu_amd_ctxt *ctxt = vpmu->context;
+    uint64_t *counter_regs = vpmu_reg_pointer(ctxt, counters);
+    uint64_t *ctrl_regs = vpmu_reg_pointer(ctxt, ctrls);
+    unsigned int i;
+
+    printk("    VPMU state: 0x%x ", vpmu->flags);
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+    {
+        printk("\n");
+        return;
+    }
+
+    printk("(");
+    if ( vpmu_is_set(vpmu, VPMU_PASSIVE_DOMAIN_ALLOCATED) )
+        printk("PASSIVE_DOMAIN_ALLOCATED, ");
+    if ( vpmu_is_set(vpmu, VPMU_FROZEN) )
+        printk("FROZEN, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_SAVE) )
+        printk("SAVE, ");
+    if ( vpmu_is_set(vpmu, VPMU_RUNNING) )
+        printk("RUNNING, ");
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        printk("LOADED, ");
+    printk("ALLOCATED)\n");
+
+    for ( i = 0; i < num_counters; i++ )
+    {
+        uint64_t ctrl, cntr;
+
+        rdmsrl(ctrls[i], ctrl);
+        rdmsrl(counters[i], cntr);
+        printk("      %#x: %#lx (%#lx in HW)    %#x: %#lx (%#lx in HW)\n",
+               ctrls[i], ctrl_regs[i], ctrl,
+               counters[i], counter_regs[i], cntr);
+    }
+}
+
+struct arch_vpmu_ops amd_vpmu_ops = {
+    .do_wrmsr = amd_vpmu_do_wrmsr,
+    .do_rdmsr = amd_vpmu_do_rdmsr,
+    .do_interrupt = amd_vpmu_do_interrupt,
+    .arch_vpmu_destroy = amd_vpmu_destroy,
+    .arch_vpmu_save = amd_vpmu_save,
+    .arch_vpmu_load = amd_vpmu_load,
+    .arch_vpmu_dump = amd_vpmu_dump
+};
+
+int svm_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    int ret = 0;
+
+    /* vpmu enabled? */
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        return 0;
+
+    switch ( family )
+    {
+    case 0x10:
+    case 0x12:
+    case 0x14:
+    case 0x15:
+    case 0x16:
+        ret = amd_vpmu_initialise(v);
+        if ( !ret )
+            vpmu->arch_vpmu_ops = &amd_vpmu_ops;
+        return ret;
+    }
+
+    printk("VPMU: Initialization failed. "
+           "AMD processor family %d is not supported\n", family);
+    return -EINVAL;
+}
+
diff --git a/xen/arch/x86/vpmu_intel.c b/xen/arch/x86/vpmu_intel.c
new file mode 100644
index 0000000..195511e
--- /dev/null
+++ b/xen/arch/x86/vpmu_intel.c
@@ -0,0 +1,940 @@
+/*
+ * vpmu_intel.c: CORE 2 specific PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#include <xen/config.h>
+#include <xen/sched.h>
+#include <xen/xenoprof.h>
+#include <xen/irq.h>
+#include <asm/system.h>
+#include <asm/regs.h>
+#include <asm/types.h>
+#include <asm/apic.h>
+#include <asm/traps.h>
+#include <asm/msr.h>
+#include <asm/msr-index.h>
+#include <asm/hvm/support.h>
+#include <asm/hvm/vlapic.h>
+#include <asm/hvm/vmx/vmx.h>
+#include <asm/hvm/vmx/vmcs.h>
+#include <public/sched.h>
+#include <public/hvm/save.h>
+#include <public/pmu.h>
+#include <asm/vpmu.h>
+
+/*
+ * See Intel SDM Vol 2a Instruction Set Reference chapter 3 for CPUID
+ * instruction.
+ * cpuid 0xa - Architectural Performance Monitoring Leaf
+ * Register eax
+ */
+#define PMU_VERSION_SHIFT        0  /* Version ID */
+#define PMU_VERSION_BITS         8  /* 8 bits 0..7 */
+#define PMU_VERSION_MASK         (((1 << PMU_VERSION_BITS) - 1) << PMU_VERSION_SHIFT)
+
+#define PMU_GENERAL_NR_SHIFT     8  /* Number of general pmu registers */
+#define PMU_GENERAL_NR_BITS      8  /* 8 bits 8..15 */
+#define PMU_GENERAL_NR_MASK      (((1 << PMU_GENERAL_NR_BITS) - 1) << PMU_GENERAL_NR_SHIFT)
+
+#define PMU_GENERAL_WIDTH_SHIFT 16  /* Width of general pmu registers */
+#define PMU_GENERAL_WIDTH_BITS   8  /* 8 bits 16..23 */
+#define PMU_GENERAL_WIDTH_MASK  (((1 << PMU_GENERAL_WIDTH_BITS) - 1) << PMU_GENERAL_WIDTH_SHIFT)
+/* Register edx */
+#define PMU_FIXED_NR_SHIFT       0  /* Number of fixed pmu registers */
+#define PMU_FIXED_NR_BITS        5  /* 5 bits 0..4 */
+#define PMU_FIXED_NR_MASK        (((1 << PMU_FIXED_NR_BITS) -1) << PMU_FIXED_NR_SHIFT)
+
+#define PMU_FIXED_WIDTH_SHIFT    5  /* Width of fixed pmu registers */
+#define PMU_FIXED_WIDTH_BITS     8  /* 8 bits 5..12 */
+#define PMU_FIXED_WIDTH_MASK     (((1 << PMU_FIXED_WIDTH_BITS) -1) << PMU_FIXED_WIDTH_SHIFT)
+
+/* Alias registers (0x4c1) for full-width writes to PMCs */
+#define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
+static bool_t __read_mostly full_width_write;
+
+/* Intel-specific VPMU features */
+#define VPMU_CPU_HAS_DS                     0x100 /* Has Debug Store */
+#define VPMU_CPU_HAS_BTS                    0x200 /* Has Branch Trace Store */
+
+/*
+ * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
+ * counters. 4 bits for every counter.
+ */
+#define FIXED_CTR_CTRL_BITS 4
+#define FIXED_CTR_CTRL_MASK ((1 << FIXED_CTR_CTRL_BITS) - 1)
+
+/* Number of general-purpose and fixed performance counters */
+static unsigned int __read_mostly arch_pmc_cnt, fixed_pmc_cnt;
+
+/*
+ * QUIRK to work around an issue on various family 6 CPUs.
+ * The issue leads to endless PMC interrupt loops on the processor.
+ * If the interrupt handler is running and a PMC reaches the value 0, this
+ * value remains forever and immediately triggers a new interrupt after the
+ * handler finishes.
+ * A workaround is to read all flagged counters and, if the value is 0, to
+ * write 1 (or any other value != 0) into it.
+ * No erratum exists for this and the real cause of the behaviour is unknown.
+ */
+bool_t __read_mostly is_pmc_quirk;
+
+static void check_pmc_quirk(void)
+{
+    if ( current_cpu_data.x86 == 6 )
+        is_pmc_quirk = 1;
+    else
+        is_pmc_quirk = 0;
+}
+
+static void handle_pmc_quirk(u64 msr_content)
+{
+    int i;
+    u64 val;
+
+    if ( !is_pmc_quirk )
+        return;
+
+    val = msr_content;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_P6_PERFCTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_P6_PERFCTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+    val = msr_content >> 32;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( val & 0x1 )
+        {
+            u64 cnt;
+            rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, cnt);
+            if ( cnt == 0 )
+                wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, 1);
+        }
+        val >>= 1;
+    }
+}
+
+/*
+ * Read the number of general counters via CPUID.0xa:EAX[8..15]
+ */
+static int core2_get_arch_pmc_count(void)
+{
+    u32 eax;
+
+    eax = cpuid_eax(0xa);
+    return ( (eax & PMU_GENERAL_NR_MASK) >> PMU_GENERAL_NR_SHIFT );
+}
+
+/*
+ * Read the number of fixed counters via CPUID.0xa:EDX[0..4]
+ */
+static int core2_get_fixed_pmc_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_NR_MASK) >> PMU_FIXED_NR_SHIFT );
+}
+
+/* edx bits 5-12: Bit width of fixed-function performance counters  */
+static int core2_get_bitwidth_fix_count(void)
+{
+    u32 edx;
+
+    edx = cpuid_edx(0xa);
+    return ( (edx & PMU_FIXED_WIDTH_MASK) >> PMU_FIXED_WIDTH_SHIFT );
+}
+
+static int is_core2_vpmu_msr(u32 msr_index, int *type, int *index)
+{
+    int i;
+    u32 msr_index_pmc;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        if ( msr_index == MSR_CORE_PERF_FIXED_CTR0 + i )
+        {
+            *type = MSR_TYPE_COUNTER;
+            *index = i;
+            return 1;
+        }
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_FIXED_CTR_CTRL ) ||
+        (msr_index == MSR_IA32_DS_AREA) ||
+        (msr_index == MSR_IA32_PEBS_ENABLE) )
+    {
+        *type = MSR_TYPE_CTRL;
+        return 1;
+    }
+
+    if ( (msr_index == MSR_CORE_PERF_GLOBAL_CTRL) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_STATUS) ||
+         (msr_index == MSR_CORE_PERF_GLOBAL_OVF_CTRL) )
+    {
+        *type = MSR_TYPE_GLOBAL;
+        return 1;
+    }
+
+    msr_index_pmc = msr_index & MSR_PMC_ALIAS_MASK;
+    if ( (msr_index_pmc >= MSR_IA32_PERFCTR0) &&
+         (msr_index_pmc < (MSR_IA32_PERFCTR0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_COUNTER;
+        *index = msr_index_pmc - MSR_IA32_PERFCTR0;
+        return 1;
+    }
+
+    if ( (msr_index >= MSR_P6_EVNTSEL0) &&
+         (msr_index < (MSR_P6_EVNTSEL0 + arch_pmc_cnt)) )
+    {
+        *type = MSR_TYPE_ARCH_CTRL;
+        *index = msr_index - MSR_P6_EVNTSEL0;
+        return 1;
+    }
+
+    return 0;
+}
+
+#define msraddr_to_bitpos(x) (((x)&0xffff) + ((x)>>31)*0x2000)
+static void core2_vpmu_set_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    /* Allow Read/Write PMU Counters MSR Directly. */
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i), msr_bitmap);
+        clear_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0+i),
+                  msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            clear_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                      msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    /* Allow Read PMU Non-global Controls Directly. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+         clear_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    clear_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    clear_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static void core2_vpmu_unset_msr_bitmap(unsigned long *msr_bitmap)
+{
+    int i;
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+    }
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i), msr_bitmap);
+        set_bit(msraddr_to_bitpos(MSR_IA32_PERFCTR0 + i),
+                msr_bitmap + 0x800/BYTES_PER_LONG);
+
+        if ( full_width_write )
+        {
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i), msr_bitmap);
+            set_bit(msraddr_to_bitpos(MSR_IA32_A_PERFCTR0 + i),
+                    msr_bitmap + 0x800/BYTES_PER_LONG);
+        }
+    }
+
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        set_bit(msraddr_to_bitpos(MSR_P6_EVNTSEL0 + i), msr_bitmap);
+
+    set_bit(msraddr_to_bitpos(MSR_CORE_PERF_FIXED_CTR_CTRL), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_PEBS_ENABLE), msr_bitmap);
+    set_bit(msraddr_to_bitpos(MSR_IA32_DS_AREA), msr_bitmap);
+}
+
+static inline void __core2_vpmu_save(struct vcpu *v)
+{
+    int i;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        rdmsrl(MSR_IA32_PERFCTR0 + i, xen_pmu_cntr_pair[i].counter);
+
+    if ( !has_hvm_container_domain(v->domain) )
+        rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
+}
+
+/* Must be NMI-safe */
+static int core2_vpmu_save(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !has_hvm_container_domain(v->domain) )
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+    if ( !vpmu_is_set_all(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED) )
+        return 0;
+
+    __core2_vpmu_save(v);
+
+    /* Unset PMU MSR bitmap to trap lazy load. */
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && cpu_has_vmx_msr_bitmap &&
+         has_hvm_container_domain(v->domain) )
+        core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+
+    return 1;
+}
+
+static inline void __core2_vpmu_load(struct vcpu *v)
+{
+    unsigned int i, pmc_start;
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vcpu_vpmu(v)->context;
+    uint64_t *fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+        vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+        wrmsrl(MSR_CORE_PERF_FIXED_CTR0 + i, fixed_counters[i]);
+
+    if ( full_width_write )
+        pmc_start = MSR_IA32_A_PERFCTR0;
+    else
+        pmc_start = MSR_IA32_PERFCTR0;
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+    {
+        wrmsrl(pmc_start + i, xen_pmu_cntr_pair[i].counter);
+        wrmsrl(MSR_P6_EVNTSEL0 + i, xen_pmu_cntr_pair[i].control);
+    }
+
+    wrmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, core2_vpmu_cxt->fixed_ctrl);
+    wrmsrl(MSR_IA32_DS_AREA, core2_vpmu_cxt->ds_area);
+    wrmsrl(MSR_IA32_PEBS_ENABLE, core2_vpmu_cxt->pebs_enable);
+
+    if ( !has_hvm_container_domain(v->domain) )
+    {
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, core2_vpmu_cxt->global_ovf_ctrl);
+        core2_vpmu_cxt->global_ovf_ctrl = 0;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
+    }
+}
+
+static void core2_vpmu_load(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+        return;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+
+    __core2_vpmu_load(v);
+}
+
+static int core2_vpmu_alloc_resource(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+
+    if ( !is_pv_domain(v->domain) )
+    {
+        if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
+            return 0;
+
+        wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+        if ( vmx_add_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+
+        if ( vmx_add_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL) )
+            goto out_err;
+        vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, 0);
+
+        core2_vpmu_cxt = xzalloc_bytes(sizeof(struct xen_pmu_intel_ctxt) +
+                                       sizeof(uint64_t) * fixed_pmc_cnt +
+                                       sizeof(struct xen_pmu_cntr_pair) * arch_pmc_cnt);
+        if ( !core2_vpmu_cxt )
+            goto out_err;
+    }
+    else
+    {
+        core2_vpmu_cxt = &v->arch.vpmu.xenpmu_data->pmu.c.intel;
+        vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+    }
+
+    core2_vpmu_cxt->fixed_counters = sizeof(struct xen_pmu_intel_ctxt);
+    core2_vpmu_cxt->arch_counters = core2_vpmu_cxt->fixed_counters +
+      sizeof(uint64_t) * fixed_pmc_cnt;
+
+    vpmu->context = (void *)core2_vpmu_cxt;
+
+    vpmu_set(vpmu, VPMU_CONTEXT_ALLOCATED);
+
+    return 1;
+
+out_err:
+    vmx_rm_host_load_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    vmx_rm_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL);
+    release_pmu_ownership(PMU_OWNER_HVM);
+
+    printk("Failed to allocate VPMU resources for domain %u vcpu %u\n",
+           v->domain->domain_id, v->vcpu_id);
+
+    return 0;
+}
+
+static int core2_vpmu_msr_common_check(u32 msr_index, int *type, int *index)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+    if ( !is_core2_vpmu_msr(msr_index, type, index) )
+        return 0;
+
+    if ( unlikely(!vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED)) &&
+         !core2_vpmu_alloc_resource(current) )
+        return 0;
+
+    /* Lazily load the PMU context on first access. */
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+    {
+        __core2_vpmu_load(current);
+        vpmu_set(vpmu, VPMU_CONTEXT_LOADED);
+        if ( cpu_has_vmx_msr_bitmap && has_hvm_container_domain(current->domain) )
+            core2_vpmu_set_msr_bitmap(current->arch.hvm_vmx.msr_bitmap);
+    }
+    return 1;
+}
+
+static void inject_trap(struct vcpu *v, unsigned int trapno)
+{
+    if ( has_hvm_container_domain(v->domain) )
+        hvm_inject_hw_exception(trapno, 0);
+    else
+        send_guest_trap(v->domain, v->vcpu_id, trapno);
+}
+
+static int core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
+{
+    u64 global_ctrl, non_global_ctrl;
+    unsigned pmu_enable = 0;
+    int i, tmp;
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( !core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        /* Special handling for BTS */
+        if ( msr == MSR_IA32_DEBUGCTLMSR )
+        {
+            uint64_t supported = IA32_DEBUGCTLMSR_TR | IA32_DEBUGCTLMSR_BTS |
+                                 IA32_DEBUGCTLMSR_BTINT;
+
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                supported |= IA32_DEBUGCTLMSR_BTS_OFF_OS |
+                             IA32_DEBUGCTLMSR_BTS_OFF_USR;
+            if ( msr_content & supported )
+            {
+                if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                    return 1;
+                gdprintk(XENLOG_WARNING, "Debug Store is not supported on this cpu\n");
+                inject_trap(v, TRAP_gp_fault);
+                return 0;
+            }
+        }
+        return 0;
+    }
+
+    core2_vpmu_cxt = vpmu->context;
+    switch ( msr )
+    {
+    case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+        core2_vpmu_cxt->global_status &= ~msr_content;
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_STATUS:
+        gdprintk(XENLOG_INFO, "Cannot write read-only MSR "
+                 "MSR_CORE_PERF_GLOBAL_STATUS (0x38E)!\n");
+        inject_trap(v, TRAP_gp_fault);
+        return 1;
+    case MSR_IA32_PEBS_ENABLE:
+        if ( msr_content & 1 )
+            gdprintk(XENLOG_WARNING, "Guest is trying to enable PEBS, "
+                     "which is not supported.\n");
+        core2_vpmu_cxt->pebs_enable = msr_content;
+        return 1;
+    case MSR_IA32_DS_AREA:
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            if ( !is_canonical_address(msr_content) )
+            {
+                gdprintk(XENLOG_WARNING,
+                         "Illegal address for IA32_DS_AREA: %#" PRIx64 "\n",
+                         msr_content);
+                inject_trap(v, TRAP_gp_fault);
+                return 1;
+            }
+            core2_vpmu_cxt->ds_area = msr_content;
+            break;
+        }
+        gdprintk(XENLOG_WARNING, "Guest setting of DS area is ignored.\n");
+        return 1;
+    case MSR_CORE_PERF_GLOBAL_CTRL:
+        global_ctrl = msr_content;
+        for ( i = 0; i < arch_pmc_cnt; i++ )
+        {
+            rdmsrl(MSR_P6_EVNTSEL0 + i, non_global_ctrl);
+            pmu_enable += global_ctrl & (non_global_ctrl >> 22) & 1;
+            global_ctrl >>= 1;
+        }
+
+        rdmsrl(MSR_CORE_PERF_FIXED_CTR_CTRL, non_global_ctrl);
+        global_ctrl = msr_content >> 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3) ? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->global_ctrl = msr_content;
+        break;
+    case MSR_CORE_PERF_FIXED_CTR_CTRL:
+        non_global_ctrl = msr_content;
+        if ( has_hvm_container_domain(v->domain) )
+            vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+        else
+            rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+        global_ctrl >>= 32;
+        for ( i = 0; i < fixed_pmc_cnt; i++ )
+        {
+            pmu_enable += (global_ctrl & 1) & ((non_global_ctrl & 0x3) ? 1 : 0);
+            non_global_ctrl >>= FIXED_CTR_CTRL_BITS;
+            global_ctrl >>= 1;
+        }
+        core2_vpmu_cxt->fixed_ctrl = msr_content;
+        break;
+    default:
+        tmp = msr - MSR_P6_EVNTSEL0;
+        if ( tmp >= 0 && tmp < arch_pmc_cnt )
+        {
+            struct xen_pmu_cntr_pair *xen_pmu_cntr_pair =
+                vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+            if ( has_hvm_container_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, &global_ctrl);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, global_ctrl);
+            xen_pmu_cntr_pair[tmp].control = msr_content;
+            for ( i = 0; i < arch_pmc_cnt && !pmu_enable; i++ )
+                pmu_enable += (global_ctrl >> i) &
+                    (xen_pmu_cntr_pair[i].control >> 22) & 1;
+        }
+    }
+
+    pmu_enable += (core2_vpmu_cxt->ds_area != 0);
+    if ( pmu_enable )
+        vpmu_set(vpmu, VPMU_RUNNING);
+    else
+        vpmu_reset(vpmu, VPMU_RUNNING);
+
+    if ( type != MSR_TYPE_GLOBAL )
+    {
+        u64 mask;
+        int inject_gp = 0;
+        switch ( type )
+        {
+        case MSR_TYPE_ARCH_CTRL:      /* MSR_P6_EVNTSEL[0,...] */
+            mask = ~((1ull << 32) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_CTRL:           /* IA32_FIXED_CTR_CTRL */
+            if  ( msr == MSR_IA32_DS_AREA )
+                break;
+            /* 4 bits of control per implemented fixed counter. */
+            mask = ~((1ull << (fixed_pmc_cnt * FIXED_CTR_CTRL_BITS)) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        case MSR_TYPE_COUNTER:        /* IA32_FIXED_CTR[0-2] */
+            mask = ~((1ull << core2_get_bitwidth_fix_count()) - 1);
+            if ( msr_content & mask )
+                inject_gp = 1;
+            break;
+        }
+
+        if ( inject_gp )
+            inject_trap(v, TRAP_gp_fault);
+        else
+            wrmsrl(msr, msr_content);
+    }
+    else
+    {
+       if ( has_hvm_container_domain(v->domain) )
+           vmx_write_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+       else
+           wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    struct vcpu *v = current;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = NULL;
+
+    if ( core2_vpmu_msr_common_check(msr, &type, &index) )
+    {
+        core2_vpmu_cxt = vpmu->context;
+        switch ( msr )
+        {
+        case MSR_CORE_PERF_GLOBAL_OVF_CTRL:
+            *msr_content = 0;
+            break;
+        case MSR_CORE_PERF_GLOBAL_STATUS:
+            *msr_content = core2_vpmu_cxt->global_status;
+            break;
+        case MSR_CORE_PERF_GLOBAL_CTRL:
+            if ( has_hvm_container_domain(v->domain) )
+                vmx_read_guest_msr(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
+            else
+                rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
+            break;
+        default:
+            rdmsrl(msr, *msr_content);
+        }
+    }
+    else
+    {
+        /* Extension for BTS */
+        if ( msr == MSR_IA32_MISC_ENABLE )
+        {
+            if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_BTS) )
+                *msr_content &= ~MSR_IA32_MISC_ENABLE_BTS_UNAVAIL;
+        }
+        else
+            return 0;
+    }
+
+    return 1;
+}
+
+static void core2_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    if ( input == 0x1 )
+    {
+        struct vpmu_struct *vpmu = vcpu_vpmu(current);
+
+        if ( vpmu_is_set(vpmu, VPMU_CPU_HAS_DS) )
+        {
+            /* Switch on the 'Debug Store' feature in CPUID.EAX[1]:EDX[21] */
+            *edx |= cpufeat_mask(X86_FEATURE_DS);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DTES64) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DTES64);
+            if ( cpu_has(&current_cpu_data, X86_FEATURE_DSCPL) )
+                *ecx |= cpufeat_mask(X86_FEATURE_DSCPL);
+        }
+    }
+}
+
+/* Dump vpmu info on console, called in the context of keyhandler 'q'. */
+static void core2_vpmu_dump(const struct vcpu *v)
+{
+    const struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    int i;
+    const struct xen_pmu_intel_ctxt *core2_vpmu_cxt;
+    u64 val;
+    uint64_t *fixed_counters;
+    struct xen_pmu_cntr_pair *xen_pmu_cntr_pair;
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) )
+    {
+        if ( vpmu_is_set(vpmu, VPMU_CONTEXT_LOADED) )
+            printk("    vPMU loaded\n");
+        else
+            printk("    vPMU allocated\n");
+        return;
+    }
+
+    printk("    vPMU running\n");
+
+    /* Only dereference the context after the ALLOCATED check above. */
+    core2_vpmu_cxt = vpmu->context;
+    fixed_counters = vpmu_reg_pointer(core2_vpmu_cxt, fixed_counters);
+    xen_pmu_cntr_pair = vpmu_reg_pointer(core2_vpmu_cxt, arch_counters);
+
+    /* Print the contents of the counter and its configuration msr. */
+    for ( i = 0; i < arch_pmc_cnt; i++ )
+        printk("      general_%d: 0x%016lx ctrl: 0x%016lx\n",
+            i, xen_pmu_cntr_pair[i].counter, xen_pmu_cntr_pair[i].control);
+
+    /*
+     * The configuration of the fixed counter is 4 bits each in the
+     * MSR_CORE_PERF_FIXED_CTR_CTRL.
+     */
+    val = core2_vpmu_cxt->fixed_ctrl;
+    for ( i = 0; i < fixed_pmc_cnt; i++ )
+    {
+        printk("      fixed_%d:   0x%016lx ctrl: %#lx\n",
+               i, fixed_counters[i],
+               val & FIXED_CTR_CTRL_MASK);
+        val >>= FIXED_CTR_CTRL_BITS;
+    }
+}
+
+static int core2_vpmu_do_interrupt(struct cpu_user_regs *regs)
+{
+    struct vcpu *v = current;
+    u64 msr_content;
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    struct xen_pmu_intel_ctxt *core2_vpmu_cxt = vpmu->context;
+
+    rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, msr_content);
+    if ( msr_content )
+    {
+        if ( is_pmc_quirk )
+            handle_pmc_quirk(msr_content);
+        core2_vpmu_cxt->global_status |= msr_content;
+        /* Ack overflow: fixed-counter bits 32-34 plus status bits 62-63. */
+        msr_content = 0xC000000700000000 | ((1 << arch_pmc_cnt) - 1);
+        wrmsrl(MSR_CORE_PERF_GLOBAL_OVF_CTRL, msr_content);
+    }
+    else
+    {
+        /* No PMC overflow but perhaps a Trace Message interrupt. */
+        __vmread(GUEST_IA32_DEBUGCTL, &msr_content);
+        if ( !(msr_content & IA32_DEBUGCTLMSR_TR) )
+            return 0;
+    }
+
+    return 1;
+}
+
+static int core2_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    u64 msr_content;
+    struct cpuinfo_x86 *c = &current_cpu_data;
+
+    if ( !(vpmu_features & XENPMU_FEATURE_INTEL_BTS) )
+        goto func_out;
+    /* Check the 'Debug Store' feature in the CPUID.EAX[1]:EDX[21] */
+    if ( cpu_has(c, X86_FEATURE_DS) )
+    {
+        if ( !cpu_has(c, X86_FEATURE_DTES64) )
+        {
+            printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+            goto func_out;
+        }
+        vpmu_set(vpmu, VPMU_CPU_HAS_DS);
+        rdmsrl(MSR_IA32_MISC_ENABLE, msr_content);
+        if ( msr_content & MSR_IA32_MISC_ENABLE_BTS_UNAVAIL )
+        {
+            /* If BTS_UNAVAIL is set reset the DS feature. */
+            vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
+            printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
+                   " - Debug Store disabled for d%d:v%d\n",
+                   v->domain->domain_id, v->vcpu_id);
+        }
+        else
+        {
+            vpmu_set(vpmu, VPMU_CPU_HAS_BTS);
+            if ( !cpu_has(c, X86_FEATURE_DSCPL) )
+                printk(XENLOG_G_INFO
+                       "vpmu: CPU doesn't support CPL-Qualified BTS\n");
+            printk("******************************************************\n");
+            printk("** WARNING: Emulation of BTS Feature is switched on **\n");
+            printk("** Using this processor feature in a virtualized    **\n");
+            printk("** environment is not 100%% safe.                    **\n");
+            printk("** Setting the DS buffer address with wrong values  **\n");
+            printk("** may lead to hypervisor hangs or crashes.         **\n");
+            printk("** It is NOT recommended for production use!        **\n");
+            printk("******************************************************\n");
+        }
+    }
+func_out:
+
+    arch_pmc_cnt = core2_get_arch_pmc_count();
+    fixed_pmc_cnt = core2_get_fixed_pmc_count();
+    check_pmc_quirk();
+
+    /* PV domains can allocate resources immediately */
+    if ( !has_hvm_container_domain(v->domain) && !core2_vpmu_alloc_resource(v) )
+        return 1;
+
+    return 0;
+}
+
+static void core2_vpmu_destroy(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+
+    if ( !vpmu_is_set(vpmu, VPMU_CONTEXT_ALLOCATED) )
+        return;
+
+    if ( has_hvm_container_domain(v->domain) )
+    {
+        xfree(vpmu->context);
+        if ( cpu_has_vmx_msr_bitmap )
+            core2_vpmu_unset_msr_bitmap(v->arch.hvm_vmx.msr_bitmap);
+    }
+
+    release_pmu_ownership(PMU_OWNER_HVM);
+    vpmu->context = NULL;
+    vpmu_clear(vpmu);
+}
+
+struct arch_vpmu_ops core2_vpmu_ops = {
+    .do_wrmsr = core2_vpmu_do_wrmsr,
+    .do_rdmsr = core2_vpmu_do_rdmsr,
+    .do_interrupt = core2_vpmu_do_interrupt,
+    .do_cpuid = core2_vpmu_do_cpuid,
+    .arch_vpmu_destroy = core2_vpmu_destroy,
+    .arch_vpmu_save = core2_vpmu_save,
+    .arch_vpmu_load = core2_vpmu_load,
+    .arch_vpmu_dump = core2_vpmu_dump
+};
+
+static void core2_no_vpmu_do_cpuid(unsigned int input,
+                                unsigned int *eax, unsigned int *ebx,
+                                unsigned int *ecx, unsigned int *edx)
+{
+    /*
+     * The vPMU is not enabled in this case, so clear the architectural
+     * performance monitoring bits reported by CPUID leaf 0xa.
+     */
+    if ( input == 0xa )
+    {
+        *eax &= ~PMU_VERSION_MASK;
+        *eax &= ~PMU_GENERAL_NR_MASK;
+        *eax &= ~PMU_GENERAL_WIDTH_MASK;
+
+        *edx &= ~PMU_FIXED_NR_MASK;
+        *edx &= ~PMU_FIXED_WIDTH_MASK;
+    }
+}
+
+/*
+ * If it is a vPMU MSR, report its value as 0.
+ */
+static int core2_no_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
+{
+    int type = -1, index = -1;
+    if ( !is_core2_vpmu_msr(msr, &type, &index) )
+        return 0;
+    *msr_content = 0;
+    return 1;
+}
+
+/*
+ * These functions are used in case vpmu is not enabled.
+ */
+struct arch_vpmu_ops core2_no_vpmu_ops = {
+    .do_rdmsr = core2_no_vpmu_do_rdmsr,
+    .do_cpuid = core2_no_vpmu_do_cpuid,
+};
+
+int vmx_vpmu_initialise(struct vcpu *v)
+{
+    struct vpmu_struct *vpmu = vcpu_vpmu(v);
+    uint8_t family = current_cpu_data.x86;
+    uint8_t cpu_model = current_cpu_data.x86_model;
+    int ret = 0;
+
+    vpmu->arch_vpmu_ops = &core2_no_vpmu_ops;
+    if ( vpmu_mode == XENPMU_MODE_OFF )
+        return 0;
+
+    if ( family == 6 )
+    {
+        u64 caps;
+
+        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
+        full_width_write = (caps >> 13) & 1; /* FW_WRITE capability bit */
+
+        switch ( cpu_model )
+        {
+        /* Core2: */
+        case 0x0f: /* original 65 nm celeron/pentium/core2/xeon, "Merom"/"Conroe" */
+        case 0x16: /* single-core 65 nm celeron/core2solo "Merom-L"/"Conroe-L" */
+        case 0x17: /* 45 nm celeron/core2/xeon "Penryn"/"Wolfdale" */
+        case 0x1d: /* six-core 45 nm xeon "Dunnington" */
+
+        case 0x2a: /* SandyBridge */
+        case 0x2d: /* SandyBridge, "Romley-EP" */
+
+        /* Nehalem: */
+        case 0x1a: /* 45 nm nehalem, "Bloomfield" */
+        case 0x1e: /* 45 nm nehalem, "Lynnfield", "Clarksfield", "Jasper Forest" */
+        case 0x2e: /* 45 nm nehalem-ex, "Beckton" */
+
+        /* Westmere: */
+        case 0x25: /* 32 nm nehalem, "Clarkdale", "Arrandale" */
+        case 0x2c: /* 32 nm nehalem, "Gulftown", "Westmere-EP" */
+        case 0x27: /* 32 nm Westmere-EX */
+
+        case 0x3a: /* IvyBridge */
+        case 0x3e: /* IvyBridge EP */
+
+        /* Haswell: */
+        case 0x3c:
+        case 0x3f:
+        case 0x45:
+        case 0x46:
+            ret = core2_vpmu_initialise(v);
+            if ( !ret )
+                vpmu->arch_vpmu_ops = &core2_vpmu_ops;
+            return ret;
+        }
+    }
+
+    printk("VPMU: Initialization failed. "
+           "Intel processor family %d model %d is not supported\n",
+           family, cpu_model);
+    return -EINVAL;
+}
+
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index ed81cfb..d27df39 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -20,7 +20,7 @@
 #define __ASM_X86_HVM_VMX_VMCS_H__
 
 #include <asm/hvm/io.h>
-#include <asm/hvm/vpmu.h>
+#include <asm/vpmu.h>
 #include <irq_vectors.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vpmu.h b/xen/include/asm-x86/hvm/vpmu.h
deleted file mode 100644
index 0f3de14..0000000
--- a/xen/include/asm-x86/hvm/vpmu.h
+++ /dev/null
@@ -1,99 +0,0 @@
-/*
- * vpmu.h: PMU virtualization for HVM domain.
- *
- * Copyright (c) 2007, Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
- * Place - Suite 330, Boston, MA 02111-1307 USA.
- *
- * Author: Haitao Shan <haitao.shan@intel.com>
- */
-
-#ifndef __ASM_X86_HVM_VPMU_H_
-#define __ASM_X86_HVM_VPMU_H_
-
-#include <public/pmu.h>
-
-#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
-#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
-
-#define MSR_TYPE_COUNTER            0
-#define MSR_TYPE_CTRL               1
-#define MSR_TYPE_GLOBAL             2
-#define MSR_TYPE_ARCH_COUNTER       3
-#define MSR_TYPE_ARCH_CTRL          4
-
-/* Start of PMU register bank */
-#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
-                                                 (uintptr_t)ctxt->offset))
-
-/* Arch specific operations shared by all vpmus */
-struct arch_vpmu_ops {
-    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
-    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
-    int (*do_interrupt)(struct cpu_user_regs *regs);
-    void (*do_cpuid)(unsigned int input,
-                     unsigned int *eax, unsigned int *ebx,
-                     unsigned int *ecx, unsigned int *edx);
-    void (*arch_vpmu_destroy)(struct vcpu *v);
-    int (*arch_vpmu_save)(struct vcpu *v);
-    void (*arch_vpmu_load)(struct vcpu *v);
-    void (*arch_vpmu_dump)(const struct vcpu *);
-};
-
-int vmx_vpmu_initialise(struct vcpu *);
-int svm_vpmu_initialise(struct vcpu *);
-
-struct vpmu_struct {
-    u32 flags;
-    u32 last_pcpu;
-    u32 hw_lapic_lvtpc;
-    void *context;
-    struct arch_vpmu_ops *arch_vpmu_ops;
-    xen_pmu_data_t *xenpmu_data;
-};
-
-/* VPMU states */
-#define VPMU_CONTEXT_ALLOCATED              0x1
-#define VPMU_CONTEXT_LOADED                 0x2
-#define VPMU_RUNNING                        0x4
-#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
-#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
-#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
-
-#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
-#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
-#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
-#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
-#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
-
-void vpmu_lvtpc_update(uint32_t val);
-int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
-int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
-int vpmu_do_interrupt(struct cpu_user_regs *regs);
-void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
-                                       unsigned int *ecx, unsigned int *edx);
-void vpmu_initialise(struct vcpu *v);
-void vpmu_destroy(struct vcpu *v);
-void vpmu_save(struct vcpu *v);
-void vpmu_load(struct vcpu *v);
-void vpmu_dump(struct vcpu *v);
-
-extern int acquire_pmu_ownership(int pmu_ownership);
-extern void release_pmu_ownership(int pmu_ownership);
-
-extern uint64_t vpmu_mode;
-extern uint64_t vpmu_features;
-
-#endif /* __ASM_X86_HVM_VPMU_H_*/
-
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
new file mode 100644
index 0000000..0f3de14
--- /dev/null
+++ b/xen/include/asm-x86/vpmu.h
@@ -0,0 +1,99 @@
+/*
+ * vpmu.h: PMU virtualization for HVM domain.
+ *
+ * Copyright (c) 2007, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ *
+ * Author: Haitao Shan <haitao.shan@intel.com>
+ */
+
+#ifndef __ASM_X86_HVM_VPMU_H_
+#define __ASM_X86_HVM_VPMU_H_
+
+#include <public/pmu.h>
+
+#define vcpu_vpmu(vcpu)   (&(vcpu)->arch.vpmu)
+#define vpmu_vcpu(vpmu)   container_of(vpmu, struct vcpu, arch.vpmu)
+
+#define MSR_TYPE_COUNTER            0
+#define MSR_TYPE_CTRL               1
+#define MSR_TYPE_GLOBAL             2
+#define MSR_TYPE_ARCH_COUNTER       3
+#define MSR_TYPE_ARCH_CTRL          4
+
+/* Start of PMU register bank */
+#define vpmu_reg_pointer(ctxt, offset) ((void *)((uintptr_t)ctxt + \
+                                                 (uintptr_t)ctxt->offset))
+
+/* Arch specific operations shared by all vpmus */
+struct arch_vpmu_ops {
+    int (*do_wrmsr)(unsigned int msr, uint64_t msr_content);
+    int (*do_rdmsr)(unsigned int msr, uint64_t *msr_content);
+    int (*do_interrupt)(struct cpu_user_regs *regs);
+    void (*do_cpuid)(unsigned int input,
+                     unsigned int *eax, unsigned int *ebx,
+                     unsigned int *ecx, unsigned int *edx);
+    void (*arch_vpmu_destroy)(struct vcpu *v);
+    int (*arch_vpmu_save)(struct vcpu *v);
+    void (*arch_vpmu_load)(struct vcpu *v);
+    void (*arch_vpmu_dump)(const struct vcpu *);
+};
+
+int vmx_vpmu_initialise(struct vcpu *);
+int svm_vpmu_initialise(struct vcpu *);
+
+struct vpmu_struct {
+    u32 flags;
+    u32 last_pcpu;
+    u32 hw_lapic_lvtpc;
+    void *context;
+    struct arch_vpmu_ops *arch_vpmu_ops;
+    xen_pmu_data_t *xenpmu_data;
+};
+
+/* VPMU states */
+#define VPMU_CONTEXT_ALLOCATED              0x1
+#define VPMU_CONTEXT_LOADED                 0x2
+#define VPMU_RUNNING                        0x4
+#define VPMU_CONTEXT_SAVE                   0x8   /* Force context save */
+#define VPMU_FROZEN                         0x10  /* Stop counters while VCPU is not running */
+#define VPMU_PASSIVE_DOMAIN_ALLOCATED       0x20
+
+#define vpmu_set(_vpmu, _x)         ((_vpmu)->flags |= (_x))
+#define vpmu_reset(_vpmu, _x)       ((_vpmu)->flags &= ~(_x))
+#define vpmu_is_set(_vpmu, _x)      ((_vpmu)->flags & (_x))
+#define vpmu_is_set_all(_vpmu, _x)  (((_vpmu)->flags & (_x)) == (_x))
+#define vpmu_clear(_vpmu)           ((_vpmu)->flags = 0)
+
+void vpmu_lvtpc_update(uint32_t val);
+int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content);
+int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content);
+int vpmu_do_interrupt(struct cpu_user_regs *regs);
+void vpmu_do_cpuid(unsigned int input, unsigned int *eax, unsigned int *ebx,
+                                       unsigned int *ecx, unsigned int *edx);
+void vpmu_initialise(struct vcpu *v);
+void vpmu_destroy(struct vcpu *v);
+void vpmu_save(struct vcpu *v);
+void vpmu_load(struct vcpu *v);
+void vpmu_dump(struct vcpu *v);
+
+extern int acquire_pmu_ownership(int pmu_ownership);
+extern void release_pmu_ownership(int pmu_ownership);
+
+extern uint64_t vpmu_mode;
+extern uint64_t vpmu_features;
+
+#endif /* __ASM_X86_HVM_VPMU_H_*/
+
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:56:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSQI-00009U-HJ; Mon, 17 Feb 2014 17:56:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <catalin.marinas@arm.com>) id 1WFSQH-00007u-14
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 17:56:05 +0000
Received: from [85.158.137.68:11153] by server-15.bemta-3.messagelabs.com id
	25/32-19263-43D42035; Mon, 17 Feb 2014 17:56:04 +0000
X-Env-Sender: catalin.marinas@arm.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392659762!2422237!1
X-Originating-IP: [217.140.110.23]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9402 invoked from network); 17 Feb 2014 17:56:02 -0000
Received: from fw-tnat.austin.arm.com (HELO collaborate-mta1.arm.com)
	(217.140.110.23) by server-9.tower-31.messagelabs.com with SMTP;
	17 Feb 2014 17:56:02 -0000
Received: from arm.com (e102109-lin.cambridge.arm.com [10.1.203.182])
	by collaborate-mta1.arm.com (Postfix) with ESMTPS id 256E013F6EA;
	Mon, 17 Feb 2014 11:55:59 -0600 (CST)
Date: Mon, 17 Feb 2014 17:55:52 +0000
From: Catalin Marinas <catalin.marinas@arm.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140217175552.GB8361@arm.com>
References: <alpine.DEB.2.02.1401091755390.21510@kaball.uk.xensource.com>
	<1389292336-9292-4-git-send-email-stefano.stabellini@eu.citrix.com>
	<alpine.DEB.2.02.1402171225000.27926@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1402171225000.27926@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"linux@arm.linux.org.uk" <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, "arnd@arndb.de" <arnd@arndb.de>,
	Marc Zyngier <Marc.Zyngier@arm.com>, "nico@linaro.org" <nico@linaro.org>,
	Will Deacon <Will.Deacon@arm.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"cov@codeaurora.org" <cov@codeaurora.org>,
	"olof@lixom.net" <olof@lixom.net>, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [PATCH v9 4/5] arm64: introduce CONFIG_PARAVIRT,
 PARAVIRT_TIME_ACCOUNTING and pv_time_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 12:25:42PM +0000, Stefano Stabellini wrote:
> On Thu, 9 Jan 2014, Stefano Stabellini wrote:
> > Introduce CONFIG_PARAVIRT and PARAVIRT_TIME_ACCOUNTING on ARM64.
> > Necessary duplication of paravirt.h and paravirt.c with ARM.
> > 
> > The only paravirt interface supported is pv_time_ops.steal_clock, so no
> > runtime pvops patching needed.
> > 
> > This allows us to make use of steal_account_process_tick for stolen
> > ticks accounting.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > CC: will.deacon@arm.com
> > CC: nico@linaro.org
> > CC: marc.zyngier@arm.com
> > CC: cov@codeaurora.org
> > CC: arnd@arndb.de
> > CC: olof@lixom.net
> > CC: Catalin.Marinas@arm.com
> 
> Catalin, Will, are you happy with this patch for 3.15?

It's pretty small and looks fine to me. However, I would like someone
with more virtualisation experience than me to ack it (e.g. Marc Z).

Some nitpicks:

> > +config PARAVIRT
> > +	bool "Enable paravirtualization code"
> > +	---help---
> > +	  This changes the kernel so it can modify itself when it is run
> > +	  under a hypervisor, potentially improving performance significantly
> > +	  over full virtualization.
> > +
> > +config PARAVIRT_TIME_ACCOUNTING
> > +	bool "Paravirtual steal time accounting"
> > +	select PARAVIRT
> > +	default n
> > +	---help---

For consistency with this file, just use "help" rather than
"---help---".
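
For illustration, the entries with the nitpick applied would look like this (a hypothetical sketch of the suggested style, not the literal file contents):

```
config PARAVIRT
	bool "Enable paravirtualization code"
	help
	  This changes the kernel so it can modify itself when it is run
	  under a hypervisor, potentially improving performance significantly
	  over full virtualization.

config PARAVIRT_TIME_ACCOUNTING
	bool "Paravirtual steal time accounting"
	select PARAVIRT
	default n
	help
	  Select this option to enable fine granularity task steal time
	  accounting.
```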

> > --- /dev/null
> > +++ b/arch/arm64/include/asm/paravirt.h
> > @@ -0,0 +1,20 @@
> > +#ifndef _ASM_ARM64_PARAVIRT_H
> > +#define _ASM_ARM64_PARAVIRT_H

__ASM_PARAVIRT_H for consistency.

-- 
Catalin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 17:56:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 17:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSQl-0000Uk-W4; Mon, 17 Feb 2014 17:56:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFSQk-0000TF-66
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 17:56:34 +0000
Received: from [85.158.143.35:4491] by server-2.bemta-4.messagelabs.com id
	A1/17-10891-15D42035; Mon, 17 Feb 2014 17:56:33 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392659696!6324517!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24335 invoked from network); 17 Feb 2014 17:54:57 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 17:54:57 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HHsqS9003629
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 17:54:53 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsp90001811
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 17 Feb 2014 17:54:52 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HHsofV029304; Mon, 17 Feb 2014 17:54:51 GMT
Received: from
	dhcp-burlington7-2nd-B-east-10-152-55-89.usdhcp.oraclecorp.com.com
	(/10.152.54.238) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 09:54:50 -0800
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: JBeulich@suse.com
Date: Mon, 17 Feb 2014 12:56:02 -0500
Message-Id: <1392659764-22183-16-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.1.4
In-Reply-To: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
References: <1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: keir@xen.org, jun.nakajima@intel.com, andrew.cooper3@citrix.com,
	eddie.dong@intel.com, dietmar.hahn@ts.fujitsu.com,
	xen-devel@lists.xen.org, suravee.suthikulpanit@amd.com,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH v5 15/17] x86/VPMU: NMI-based VPMU support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for using NMIs as PMU interrupts.

Most of the processing is still performed by vpmu_do_interrupt(). However, since
certain operations are not NMI-safe we defer them to a softirq that
vpmu_do_interrupt() will schedule:
* For PV guests that would be send_guest_vcpu_virq()
* For HVM guests it's VLAPIC accesses and hvm_get_segment_register() (the latter
can be called in privileged profiling mode when the interrupted guest is an HVM one).

With send_guest_vcpu_virq() and hvm_get_segment_register() for PV(H) and VLAPIC
accesses for HVM moved to the softirq handler, the only routines/macros that
vpmu_do_interrupt() calls in NMI mode are:
* memcpy()
* querying domain type (is_XX_domain())
* guest_cpu_user_regs()
* XLAT_cpu_user_regs()
* raise_softirq()
* vcpu_vpmu()
* vpmu_ops->arch_vpmu_save()
* vpmu_ops->do_interrupt() (in the future for PVH support)

The latter two only access PMU MSRs with {rd,wr}msrl() (not the _safe versions
which would not be NMI-safe).

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 xen/arch/x86/hvm/svm/vpmu.c       |   1 +
 xen/arch/x86/hvm/vmx/vpmu_core2.c |   1 +
 xen/arch/x86/hvm/vpmu.c           | 169 ++++++++++++++++++++++++++++++--------
 3 files changed, 138 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/vpmu.c b/xen/arch/x86/hvm/svm/vpmu.c
index 346aa51..04d3b91 100644
--- a/xen/arch/x86/hvm/svm/vpmu.c
+++ b/xen/arch/x86/hvm/svm/vpmu.c
@@ -182,6 +182,7 @@ static inline void context_load(struct vcpu *v)
     }
 }
 
+/* Must be NMI-safe */
 static void amd_vpmu_load(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
diff --git a/xen/arch/x86/hvm/vmx/vpmu_core2.c b/xen/arch/x86/hvm/vmx/vpmu_core2.c
index 9eac418..e214f01 100644
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -303,6 +303,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_status);
 }
 
+/* Must be NMI-safe */
 static int core2_vpmu_save(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
diff --git a/xen/arch/x86/hvm/vpmu.c b/xen/arch/x86/hvm/vpmu.c
index dc416f9..cbe8cfd 100644
--- a/xen/arch/x86/hvm/vpmu.c
+++ b/xen/arch/x86/hvm/vpmu.c
@@ -36,6 +36,7 @@
 #include <asm/hvm/svm/svm.h>
 #include <asm/hvm/svm/vmcb.h>
 #include <asm/apic.h>
+#include <asm/nmi.h>
 #include <public/pmu.h>
 
 /*
@@ -49,33 +50,57 @@ static void parse_vpmu_param(char *s);
 custom_param("vpmu", parse_vpmu_param);
 
 static DEFINE_PER_CPU(struct vcpu *, last_vcpu);
+static DEFINE_PER_CPU(struct vcpu *, sampled_vcpu);
+
+static uint32_t __read_mostly vpmu_interrupt_type = PMU_APIC_VECTOR;
 
 static void __init parse_vpmu_param(char *s)
 {
-    switch ( parse_bool(s) )
-    {
-    case 0:
-        break;
-    default:
-        if ( !strcmp(s, "bts") )
-            vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
-        else if ( *s )
+    char *ss;
+
+    vpmu_mode = XENPMU_MODE_ON;
+    if (*s == '\0')
+        return;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        switch  ( parse_bool(s) )
         {
-            printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+        case 0:
+            vpmu_mode = XENPMU_MODE_OFF;
+            return;
+        case -1:
+            if ( !strcmp(s, "nmi") )
+                vpmu_interrupt_type = APIC_DM_NMI;
+            else if ( !strcmp(s, "bts") )
+                vpmu_features |= XENPMU_FEATURE_INTEL_BTS;
+            else if ( !strcmp(s, "priv") )
+            {
+                vpmu_mode &= ~XENPMU_MODE_ON;
+                vpmu_mode |= XENPMU_MODE_PRIV;
+            }
+            else
+            {
+                printk("VPMU: unknown flag: %s - vpmu disabled!\n", s);
+                vpmu_mode = XENPMU_MODE_OFF;
+                return;
+            }
+        default:
             break;
         }
-        /* fall through */
-    case 1:
-        vpmu_mode = XENPMU_MODE_ON;
-        break;
-    }
+
+        s = ss + 1;
+    } while ( ss );
 }
 
 void vpmu_lvtpc_update(uint32_t val)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
 
-    vpmu->hw_lapic_lvtpc = PMU_APIC_VECTOR | (val & APIC_LVT_MASKED);
+    vpmu->hw_lapic_lvtpc = vpmu_interrupt_type | (val & APIC_LVT_MASKED);
 
     /* Postpone APIC updates for PV guests if PMU interrupt is pending */
     if ( !is_pv_domain(current->domain) ||
@@ -84,6 +109,27 @@ void vpmu_lvtpc_update(uint32_t val)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc);
 }
 
+static void vpmu_send_nmi(struct vcpu *v)
+{
+    struct vlapic *vlapic;
+    u32 vlapic_lvtpc;
+    unsigned char int_vec;
+
+    ASSERT( is_hvm_vcpu(v) );
+
+    vlapic = vcpu_vlapic(v);
+    if ( !is_vlapic_lvtpc_enabled(vlapic) )
+        return;
+
+    vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
+    int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
+
+    if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
+        vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+    else
+        v->nmi_pending = 1;
+}
+
 int vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(current);
@@ -137,6 +183,7 @@ int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
     return 0;
 }
 
+/* This routine may be called in NMI context */
 int vpmu_do_interrupt(struct cpu_user_regs *regs)
 {
     struct vcpu *v = current;
@@ -217,9 +264,13 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
             memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                    gregs, sizeof(struct cpu_user_regs));
 
-            hvm_get_segment_register(current, x86_seg_cs, &cs);
-            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            gregs->cs = cs.attr.fields.dpl;
+            /* This is unsafe in NMI context, we'll do it in softint handler */
+            if ( !(vpmu_interrupt_type & APIC_DM_NMI ) )
+            {
+                hvm_get_segment_register(current, x86_seg_cs, &cs);
+                gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                gregs->cs = cs.attr.fields.dpl;
+            }
         }
 
         v->arch.vpmu.xenpmu_data->domain_id = current->domain->domain_id;
@@ -230,30 +281,30 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
         apic_write(APIC_LVTPC, vpmu->hw_lapic_lvtpc | APIC_LVT_MASKED);
         vpmu->hw_lapic_lvtpc |= APIC_LVT_MASKED;
 
-        send_guest_vcpu_virq(v, VIRQ_XENPMU);
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
+        else
+            send_guest_vcpu_virq(v, VIRQ_XENPMU);
 
         return 1;
     }
 
     if ( vpmu->arch_vpmu_ops )
     {
-        struct vlapic *vlapic = vcpu_vlapic(v);
-        u32 vlapic_lvtpc;
-        unsigned char int_vec;
-
         if ( !vpmu->arch_vpmu_ops->do_interrupt(regs) )
             return 0;
 
-        if ( !is_vlapic_lvtpc_enabled(vlapic) )
-            return 1;
-
-        vlapic_lvtpc = vlapic_get_reg(vlapic, APIC_LVTPC);
-        int_vec = vlapic_lvtpc & APIC_VECTOR_MASK;
-
-        if ( GET_APIC_DELIVERY_MODE(vlapic_lvtpc) == APIC_MODE_FIXED )
-            vlapic_set_irq(vcpu_vlapic(v), int_vec, 0);
+        if ( vpmu_interrupt_type & APIC_DM_NMI )
+        {
+            per_cpu(sampled_vcpu, smp_processor_id()) = current;
+            raise_softirq(PMU_SOFTIRQ);
+        }
         else
-            v->nmi_pending = 1;
+            vpmu_send_nmi(v);
+
         return 1;
     }
 
@@ -301,7 +352,7 @@ void vpmu_save(struct vcpu *v)
         if ( vpmu->arch_vpmu_ops->arch_vpmu_save(v) )
             vpmu_reset(vpmu, VPMU_CONTEXT_LOADED);
 
-    apic_write(APIC_LVTPC, PMU_APIC_VECTOR | APIC_LVT_MASKED);
+    apic_write(APIC_LVTPC, vpmu_interrupt_type | APIC_LVT_MASKED);
 }
 
 void vpmu_load(struct vcpu *v)
@@ -450,11 +501,48 @@ static void vpmu_unload_all(void)
     }
 }
 
+/* Process the softirq set by PMU NMI handler */
+static void pmu_softnmi(void)
+{
+    struct cpu_user_regs *regs;
+    struct vcpu *v, *sampled = per_cpu(sampled_vcpu, smp_processor_id());
+
+    if ( (vpmu_mode & XENPMU_MODE_PRIV) ||
+         (sampled->domain->domain_id >= DOMID_FIRST_RESERVED) )
+        v = dom0->vcpu[smp_processor_id() % dom0->max_vcpus];
+    else
+    {
+        if ( is_hvm_domain(sampled->domain) )
+        {
+            vpmu_send_nmi(sampled);
+            return;
+        }
+        v = sampled;
+    }
+
+    regs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+    if ( !is_pv_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
 static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
     struct page_info *page;
     uint64_t gfn = params->d.val;
+    static bool_t __read_mostly pvpmu_initted = 0;
 
     if ( !is_pv_domain(d) )
         return -EINVAL;
@@ -480,6 +568,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
         return -EINVAL;
     }
 
+    if ( !pvpmu_initted )
+    {
+        if (reserve_lapic_nmi() == 0)
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
     vpmu_initialise(v);
 
     return 0;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+    if ( !is_pv_domain(sampled->domain) )
+    {
+        struct segment_register cs;
+
+        hvm_get_segment_register(sampled, x86_seg_cs, &cs);
+        regs->cs = cs.attr.fields.dpl;
+    }
+
+    send_guest_vcpu_virq(v, VIRQ_XENPMU);
+}
+
+int pmu_nmi_interrupt(struct cpu_user_regs *regs, int cpu)
+{
+    return vpmu_do_interrupt(regs);
+}
+
 static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
 {
     struct vcpu *v;
     struct page_info *page;
     uint64_t gfn = params->d.val;
+    static bool_t __read_mostly pvpmu_initted = 0;
 
     if ( !is_pv_domain(d) )
         return -EINVAL;
@@ -480,6 +568,21 @@ static int pvpmu_init(struct domain *d, xen_pmu_params_t *params)
         return -EINVAL;
     }
 
+    if ( !pvpmu_initted )
+    {
+        if ( reserve_lapic_nmi() == 0 )
+            set_nmi_callback(pmu_nmi_interrupt);
+        else
+        {
+            printk("Failed to reserve PMU NMI\n");
+            put_page(page);
+            return -EBUSY;
+        }
+        open_softirq(PMU_SOFTIRQ, pmu_softnmi);
+
+        pvpmu_initted = 1;
+    }
+
     vpmu_initialise(v);
 
     return 0;
-- 
1.8.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:58:00 +0000
Message-ID: <1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.
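As a purely illustrative sketch of the layout this patch documents (paths and values are hypothetical), a frontend that negotiated two queues would end up with XenStore keys along these lines:

```
frontend/multi-queue-num-queues   = "2"
frontend/queue-0/tx-ring-ref      = "<grant ref>"
frontend/queue-0/rx-ring-ref      = "<grant ref>"
frontend/queue-0/event-channel-tx = "<port>"
frontend/queue-0/event-channel-rx = "<port>"
frontend/queue-1/tx-ring-ref      = "<grant ref>"
frontend/queue-1/rx-ring-ref      = "<grant ref>"
frontend/queue-1/event-channel-tx = "<port>"
frontend/queue-1/event-channel-rx = "<port>"
```

The backend advertises "multi-queue-max-queues" in its own directory; a frontend requesting a single queue writes the ring-ref and event-channel keys at the toplevel as before.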

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..8868c51 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before. This simplifies the backend, which
+ * need not distinguish between a frontend that does not understand the
+ * multi-queue feature and one that does but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the toplevel
+ * event-channel and ring-ref keys. Instead, they must write these keys
+ * under sub-keys named "queue-N", where N is the integer ID of the queue
+ * to which the keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 18:01:50 +0000
Message-ID: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com, ian.campbell@citrix.com,
	david.vrabel@citrix.com
Subject: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
	front} multi-queue feature

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 xen/include/public/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index d7fb771..90be2fc 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -69,6 +69,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before. This simplifies the backend, which
+ * need not distinguish between a frontend that does not understand the
+ * multi-queue feature and one that does but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the toplevel
+ * event-channel and ring-ref keys. Instead, they must write these keys
+ * under sub-keys named "queue-N", where N is the integer ID of the queue
+ * to which the keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:57 +0000
Message-ID: <1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support for
	multiple queues

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the values written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    8 ++++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 82 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 2550867..8180929 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index daf93f6..bc7a82d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			      xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 46b2f5b..64d66a1 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,11 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+MODULE_PARM_DESC(xenvif_max_queues,
+		"Maximum number of queues per virtual interface");
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1590,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..d11f51e 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+			"guest requested %u queues, exceeding the maximum of %u.",
+			requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1) {
+		xspath = (char *)dev->otherend;
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:58 +0000
Message-ID: <1392659880-2538-4-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also adds loops over queues where appropriate, even though only one is
configured at this point, and uses alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  945 ++++++++++++++++++++++++++------------------
 1 file changed, 552 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 2b62d79..238c2cb 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(dev_queue);
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub: always use queue 0 until queue selection logic is added */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1257,66 +1307,27 @@ static const struct net_device_ops xennet_netdev_ops = {
 
 static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 {
-	int i, err;
+	int err;
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->stats == NULL)
 		goto exit;
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1342,10 +1353,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1403,30 +1410,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1467,100 +1479,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1569,13 +1567,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1583,21 +1581,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1608,17 +1606,78 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_page[i] = NULL;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1626,13 +1685,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1640,34 +1758,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1727,6 +1845,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1739,6 +1860,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1759,36 +1882,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1797,14 +1924,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1877,7 +2007,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1909,7 +2039,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1920,6 +2053,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1933,16 +2068,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1952,7 +2090,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1963,6 +2104,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1976,16 +2119,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1995,7 +2141,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2042,6 +2191,8 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
@@ -2051,7 +2202,15 @@ static int xennet_remove(struct xenbus_device *dev)
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
 
 	free_percpu(info->stats);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSWB-0001ZK-7X; Mon, 17 Feb 2014 18:02:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSW9-0001Xt-Hg
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:09 +0000
Received: from [85.158.139.211:13576] by server-10.bemta-5.messagelabs.com id
	21/D5-08578-0AE42035; Mon, 17 Feb 2014 18:02:08 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392660124!4449122!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23514 invoked from network); 17 Feb 2014 18:02:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:07 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101501664"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:07 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW6-0004Md-6l;
	Mon, 17 Feb 2014 18:02:06 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSW4-0000qZ-Ah; Mon, 17 Feb 2014 18:02:04 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:58:00 +0000
Message-ID: <1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..8868c51 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel and ring-ref keys; instead, they write these under sub-keys
+ * named "queue-N", where N is the integer ID of the queue to which the
+ * keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSW7-0001XF-63; Mon, 17 Feb 2014 18:02:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSW5-0001X5-Cp
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:05 +0000
Received: from [85.158.137.68:41647] by server-2.bemta-3.messagelabs.com id
	FE/73-06531-C9E42035; Mon, 17 Feb 2014 18:02:04 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392660122!2450081!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21202 invoked from network); 17 Feb 2014 18:02:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:03 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103239371"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:01 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW0-0004ML-Lo;
	Mon, 17 Feb 2014 18:02:00 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSVy-0000q8-S8; Mon, 17 Feb 2014 18:01:58 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:55 +0000
Message-ID: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com, david.vrabel@citrix.com
Subject: [Xen-devel] [PATCH V4 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
    netfront respectively, and modify the rest of the code to use these
    as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability not only to negotiate the choice of hash algorithm, but also
to let the frontend supply parameters to it.

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to one less than the requested number of queues. If only one
queue is requested, the protocol falls back to the flat structure, where
the ring references and event channels are written at the same level as
the other vif information.

V4:
- Add MODULE_PARM_DESC() for the multi-queue parameters for netback
  and netfront modules.
- Move del_timer_sync() in netfront to after unregister_netdev, which
  restores the order in which these functions were called before applying
  these patches.

V3:
- Further indentation and style fixups.

V2:
- Rebase onto net-next.
- Change queue->number to queue->id.
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu.
- Fixup formatting and style issues.
- XenStore protocol changes documented in netif.h.
- Default max. number of queues to num_online_cpus().
- Check requested number of queues does not exceed maximum.

--
Andrew J. Bennieston

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSWD-0001ae-Bq; Mon, 17 Feb 2014 18:02:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSWB-0001ZX-Vb
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:12 +0000
Received: from [85.158.139.211:13744] by server-9.bemta-5.messagelabs.com id
	E0/32-11237-3AE42035; Mon, 17 Feb 2014 18:02:11 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392660129!4495664!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20047 invoked from network); 17 Feb 2014 18:02:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:10 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103239427"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:07 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW7-0004Mg-8y;
	Mon, 17 Feb 2014 18:02:07 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSW5-0000qb-LV; Mon, 17 Feb 2014 18:02:05 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 18:01:50 +0000
Message-ID: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J.
	Bennieston" <andrew.bennieston@citrix.com>, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, david.vrabel@citrix.com
Subject: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
	front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 xen/include/public/io/netif.h |   21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index d7fb771..90be2fc 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -69,6 +69,27 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" is required when using multiple queues.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel and ring-ref keys, instead writing them under sub-keys having
+ * the name "queue-N", where N is the integer ID of the queue to which those
+ * keys belong. Queues are indexed from zero.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSWC-0001Zz-Lq; Mon, 17 Feb 2014 18:02:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSWB-0001Yg-10
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:11 +0000
Received: from [193.109.254.147:37136] by server-3.bemta-14.messagelabs.com id
	C0/A0-00432-2AE42035; Mon, 17 Feb 2014 18:02:10 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392660123!4912013!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29530 invoked from network); 17 Feb 2014 18:02:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103239401"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:05 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW3-0004MX-QH;
	Mon, 17 Feb 2014 18:02:03 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSW2-0000qP-4D; Mon, 17 Feb 2014 18:02:02 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:58 +0000
Message-ID: <1392659880-2538-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue(), which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  945 ++++++++++++++++++++++++++------------------
 1 file changed, 552 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 2b62d79..238c2cb 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1257,66 +1307,27 @@ static const struct net_device_ops xennet_netdev_ops = {
 
 static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 {
-	int i, err;
+	int err;
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->stats == NULL)
 		goto exit;
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1342,10 +1353,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1403,30 +1410,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1467,100 +1479,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1569,13 +1567,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1583,21 +1581,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1608,17 +1606,78 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_page[i] = NULL;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1626,13 +1685,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. Set info->num_queues to i (the number of queues
+			 * set up so far), then goto destroy_ring, which calls
+			 * xennet_disconnect_backend() on those queues and frees
+			 * the queue array.
+			 */
+			info->num_queues = i;
+			goto destroy_ring;
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			info->num_queues = i;
+			goto destroy_ring;
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1640,34 +1758,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1727,6 +1845,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1739,6 +1860,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1759,36 +1882,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1797,14 +1924,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1877,7 +2007,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1909,7 +2039,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1920,6 +2053,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1933,16 +2068,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1952,7 +2090,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1963,6 +2104,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1976,16 +2119,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1995,7 +2141,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2042,6 +2191,8 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
@@ -2051,7 +2202,15 @@ static int xennet_remove(struct xenbus_device *dev)
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
 
 	free_percpu(info->stats);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:55 +0000
Message-ID: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com, david.vrabel@citrix.com
Subject: [Xen-devel] [PATCH V4 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add some
capability to negotiate not only the hash algorithm selection, but also
allow the frontend to specify some parameters to this.
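
The fixed-point scaling described above (hash the L4 tuple, then take the
high bits of hash * num_queues) can be sketched in plain C; pick_queue is
an illustrative name, not a function from these patches:

```c
#include <stdint.h>

/* Map a 32-bit L4 flow hash onto [0, num_queues) without a modulo,
 * by keeping the high 32 bits of hash * num_queues. This mirrors the
 * computation used by the series' ndo_select_queue hook.
 */
static uint16_t pick_queue(uint32_t hash, unsigned int num_queues)
{
	if (num_queues == 1)
		return 0;	/* single-queue / old-frontend fast path */
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```

Because the multiply scales the full 32-bit hash range onto the queue
count, the result is always strictly less than num_queues, with no
division on the fast path.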

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to one less than the requested number of queues. If only one
queue is requested, the frontend falls back to the flat structure, where
the ring references and event channels are written at the same level as
the other vif information.
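
The two layouts can be illustrated by a helper that builds the key for a
given queue (queue_key is a hypothetical illustration; the key names
tx-ring-ref and event-channel follow the existing flat netif protocol):

```c
#include <stdio.h>

/* Build the XenStore key for one per-queue entry. With more than one
 * queue the keys nest under "queue-N/"; with a single queue the series
 * falls back to the flat layout, so no prefix is added.
 */
static int queue_key(char *buf, size_t len, unsigned int num_queues,
		     unsigned int queue, const char *key)
{
	if (num_queues > 1)
		return snprintf(buf, len, "queue-%u/%s", queue, key);
	return snprintf(buf, len, "%s", key);
}
```

So with four queues, queue 2's transmit ring reference would live at
.../queue-2/tx-ring-ref, while a single-queue frontend would keep
writing .../tx-ring-ref as before.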

V4:
- Add MODULE_PARM_DESC() for the multi-queue parameters for netback
  and netfront modules.
- Move del_timer_sync() in netfront to after unregister_netdev, which
  restores the order in which these functions were called before applying
  these patches.

V3:
- Further indentation and style fixups.

V2:
- Rebase onto net-next.
- Change queue->number to queue->id.
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu.
- Fixup formatting and style issues.
- XenStore protocol changes documented in netif.h.
- Default max. number of queues to num_online_cpus().
- Check requested number of queues does not exceed maximum.

--
Andrew J. Bennieston


From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:56 +0000
Message-ID: <1392659880-2538-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and uses
skb_get_hash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   81 ++++--
 drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
 drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 593 insertions(+), 417 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..2550867 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,36 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +159,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +200,12 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..daf93f6 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -253,7 +328,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(vif + xenvif_stats[i].offset));
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +361,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +371,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +383,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +402,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +411,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +427,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +550,52 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..46b2f5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a minimum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		atomic_inc(&vif->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSW8-0001XX-IZ; Mon, 17 Feb 2014 18:02:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSW7-0001XE-Ep
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:07 +0000
Received: from [85.158.139.211:17909] by server-12.bemta-5.messagelabs.com id
	B4/A6-15415-E9E42035; Mon, 17 Feb 2014 18:02:06 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392660124!4449122!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23372 invoked from network); 17 Feb 2014 18:02:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:05 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101501637"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:03 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW2-0004MR-MB;
	Mon, 17 Feb 2014 18:02:02 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSW1-0000qJ-1L; Mon, 17 Feb 2014 18:02:01 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:57 +0000
Message-ID: <1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

The backend writes the maximum supported number of queues into XenStore,
and reads the value written by the frontend to determine how many queues
to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    8 ++++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 82 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 2550867..8180929 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index daf93f6..bc7a82d 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			      xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index 46b2f5b..64d66a1 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,11 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+MODULE_PARM_DESC(xenvif_max_queues,
+		"Maximum number of queues per virtual interface");
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1590,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..d11f51e 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+			"guest requested %u queues, exceeding the maximum of %u.",
+			requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath = NULL;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1) {
+		xspath = (char *)dev->otherend;
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:59 +0000
Message-ID: <1392659880-2538-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.
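
The negotiation reduces to a clamp against the frontend's own limit, with a fallback to one queue for backends that predate multi-queue. A minimal standalone sketch (helper name hypothetical, not kernel code):

```c
#include <assert.h>

/* Mirrors the negotiation described above: the frontend reads the
 * backend's "multi-queue-max-queues" key and clamps it to its own
 * module-parameter limit. A xenbus_scanf failure (err < 0) means the
 * backend predates multi-queue, so fall back to a single queue. */
unsigned int negotiate_num_queues(int scanf_err,
				  unsigned int backend_max,
				  unsigned int frontend_max)
{
	if (scanf_err < 0)
		return 1;		/* old backend: single queue */
	if (backend_max > frontend_max)
		return frontend_max;	/* clamp to frontend limit */
	return backend_max;
}
```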

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.
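
The flat-versus-hierarchical key layout can be sketched as a path choice: with one queue the keys live directly under the device node, with several they go under a "queue-N" subkey. `queue_key_path` is a hypothetical standalone helper, simplified relative to the patch:

```c
#include <stdio.h>
#include <string.h>
#include <assert.h>

/* Build the XenStore path under which a queue's keys are written:
 * flat under the device node for a single queue, or under a
 * per-queue "queue-N" subkey when multiple queues are in use. */
void queue_key_path(char *buf, size_t len, const char *nodename,
		    unsigned int num_queues, unsigned int queue_id)
{
	if (num_queues == 1)
		snprintf(buf, len, "%s", nodename);
	else
		snprintf(buf, len, "%s/queue-%u", nodename, queue_id);
}
```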

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.
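
The hash-to-queue mapping is a fixed-point scale rather than a modulo: (hash * n) >> 32 lands in [0, n) for any 32-bit hash and avoids a division. A standalone sketch of the same arithmetic (helper name hypothetical):

```c
#include <stdint.h>
#include <assert.h>

/* Map a 32-bit skb hash onto [0, num_queues) by multiplying and
 * taking the top 32 bits, as in the select_queue implementation. */
uint16_t hash_to_queue(uint32_t hash, unsigned int num_queues)
{
	if (num_queues == 1)
		return 0;
	return (uint16_t)(((uint64_t)hash * num_queues) >> 32);
}
```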

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 140 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 238c2cb..0798b0d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+MODULE_PARM_DESC(xennet_max_queues,
+		"Maximum number of queues per virtual interface");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +571,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1) {
+		queue_idx = 0;
+	} else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1329,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1678,6 +1696,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * way for a single queue, or in per-queue subkeys for multiple
+	 * queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels; taking into account both shared
+	 * and split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1687,10 +1787,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1706,12 +1817,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1749,49 +1861,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1841,8 +1939,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2236,6 +2335,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSWA-0001YO-5S; Mon, 17 Feb 2014 18:02:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSW7-0001XN-VD
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:08 +0000
Received: from [85.158.137.68:4060] by server-16.bemta-3.messagelabs.com id
	E1/1D-29917-F9E42035; Mon, 17 Feb 2014 18:02:07 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392660122!2450081!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21285 invoked from network); 17 Feb 2014 18:02:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:04 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103239382"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:02 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW1-0004MO-HG;
	Mon, 17 Feb 2014 18:02:01 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSW0-0000qB-3v; Mon, 17 Feb 2014 18:02:00 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:56 +0000
Message-ID: <1392659880-2538-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.
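
With stats moved into the per-queue struct, the device totals are recomputed by summing each queue's counters, as xenvif_get_stats does below. A simplified standalone sketch of that aggregation (types and names hypothetical):

```c
#include <assert.h>

/* Cut-down analogue of struct xenvif_stats: only the counters
 * needed to show the aggregation pattern. */
struct q_stats {
	unsigned long rx_bytes;
	unsigned long tx_bytes;
};

/* Fold per-queue counters into a single device-wide total. */
void sum_stats(const struct q_stats *queues, unsigned int n,
	       struct q_stats *total)
{
	unsigned int i;

	total->rx_bytes = 0;
	total->tx_bytes = 0;
	for (i = 0; i < n; ++i) {
		total->rx_bytes += queues[i].rx_bytes;
		total->tx_bytes += queues[i].tx_bytes;
	}
}
```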

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and uses
skb_get_hash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   81 ++++--
 drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
 drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 593 insertions(+), 417 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..2550867 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,36 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +159,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +200,12 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..daf93f6 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -253,7 +328,7 @@ static void xenvif_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)((char *)vif + xenvif_stats[i].offset));
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +361,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +371,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +383,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +402,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +411,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +427,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
-
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
+	queue->task = task;
 
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
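For reference, the error paths in xenvif_connect() above follow the usual kernel goto-unwind idiom: resources acquired in order are released in reverse, with each failure jumping to the label that frees only what was already set up. A minimal user-space sketch of the same shape (the two-resource setup is hypothetical, not the driver's):

```c
#include <stdlib.h>

/* Hypothetical two-step setup illustrating the goto-unwind idiom used
 * by xenvif_connect(): a failure at step N jumps to a label that
 * releases steps N-1..1, in reverse order of acquisition. */
static int setup_pair(int **a, int **b)
{
	*a = malloc(sizeof(int));
	if (!*a)
		goto err;

	*b = malloc(sizeof(int));
	if (!*b)
		goto err_free_a;

	return 0;

err_free_a:
	free(*a);
	*a = NULL;
err:
	return -1;
}
```

The benefit, as in the patch, is a single forward flow for the success case and no duplicated cleanup code on the failure paths.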
@@ -470,34 +550,52 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
-	unregister_netdev(vif->dev);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
 
-	vfree(vif->grant_copy_op);
+	/* Free the array of queues */
+	vfree(vif->queues);
+	vif->num_queues = 0;
+	vif->queues = NULL;
+
+	unregister_netdev(vif->dev);
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..46b2f5b 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a minimum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
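The pending-ring arithmetic in the hunk above relies on MAX_PENDING_REQS being a power of two: `i & (MAX_PENDING_REQS-1)` is then a cheap modulo, and the producer/consumer counters can increment and wrap freely. A standalone sketch of the same arithmetic (the constant's value here is illustrative):

```c
#define MAX_PENDING_REQS 256	/* must be a power of two */

typedef unsigned int pending_ring_idx_t;

/* Cheap "i % MAX_PENDING_REQS" for a power-of-two ring size. */
static inline pending_ring_idx_t pending_index(unsigned i)
{
	return i & (MAX_PENDING_REQS - 1);
}

/* Number of in-flight requests. The queue starts with
 * pending_prod = MAX_PENDING_REQS and pending_cons = 0, so this
 * reads zero initially (all slots free). */
static inline pending_ring_idx_t nr_pending_reqs(pending_ring_idx_t prod,
						 pending_ring_idx_t cons)
{
	return MAX_PENDING_REQS - prod + cons;
}
```

This matches xenvif_init_queue() earlier in the patch, which sets pending_prod to MAX_PENDING_REQS before any request is consumed.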
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
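The clamp in tx_add_credit() above is an unsigned-overflow guard: if `remaining_credit + credit_bytes` wraps past zero, the sum compares smaller than one of its addends, and the code clamps to ULONG_MAX instead of handing out a tiny credit. The check in isolation looks like this:

```c
#include <limits.h>

/* Saturating add, as used when topping up TX credit: unsigned
 * overflow is detected by the sum coming out smaller than an
 * addend, and the result is clamped rather than wrapped. */
static unsigned long credit_add_clamped(unsigned long remaining,
					unsigned long chunk)
{
	unsigned long sum = remaining + chunk;

	if (sum < remaining)		/* wrapped past zero */
		sum = ULONG_MAX;	/* clamp to maximum credit */
	return sum;
}
```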
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		atomic_inc(&vif->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
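The switch from `vif->rx_gso_checksum_fixup++` to `atomic_inc()` above matters because the stat is now a per-vif counter updated from several queues running concurrently; a plain increment is a read-modify-write that can lose updates. A user-space analogue with C11 atomics (the kernel uses its own atomic_t API, but the race and the fix are the same):

```c
#include <stdatomic.h>

/* One statistic shared by all queues: a plain ++ on a shared
 * counter races, an atomic fetch-add serialises the
 * read-modify-write. */
static atomic_ulong rx_gso_checksum_fixup;

static void note_checksum_fixup(void)
{
	atomic_fetch_add(&rx_gso_checksum_fixup, 1);
}

static unsigned long read_checksum_fixups(void)
{
	return atomic_load(&rx_gso_checksum_fixup);
}
```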
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
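tx_credit_exceeded() above does its window comparison on 64-bit jiffies, where time_after_eq64(a, b) is effectively a signed comparison of the difference, so it stays correct across counter wrap. A sketch of the replenish-or-defer decision under those semantics (function and parameter names here are illustrative, not the driver's):

```c
#include <stdint.h>
#include <stdbool.h>

/* Wrap-safe "a >= b" for free-running 64-bit tick counters,
 * mirroring the kernel's time_after_eq64(). */
static bool ticks_after_eq(uint64_t a, uint64_t b)
{
	return (int64_t)(a - b) >= 0;
}

/* Decide whether a packet of 'size' bytes may be sent: once the
 * credit window has passed, refill remaining credit; the packet is
 * deferred only if it still exceeds what is available. */
static bool credit_exceeded(uint64_t now, uint64_t next_credit,
			    uint64_t *remaining, uint64_t credit_bytes,
			    uint64_t size)
{
	if (ticks_after_eq(now, next_credit))
		*remaining = credit_bytes;	/* window rolled over */
	return size > *remaining;
}
```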
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue->vif, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:02:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSWA-0001Yf-Jx; Mon, 17 Feb 2014 18:02:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WFSW8-0001XV-Md
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:02:09 +0000
Received: from [85.158.139.211:40839] by server-6.bemta-5.messagelabs.com id
	BC/E0-14342-F9E42035; Mon, 17 Feb 2014 18:02:07 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392660124!4449122!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23468 invoked from network); 17 Feb 2014 18:02:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:02:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101501661"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 18:02:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:02:06 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WFSW5-0004Ma-3u;
	Mon, 17 Feb 2014 18:02:05 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WFSW3-0000qU-86; Mon, 17 Feb 2014 18:02:03 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 17 Feb 2014 17:57:59 +0000
Message-ID: <1392659880-2538-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org,
	paul.durrant@citrix.com, david.vrabel@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: [Xen-devel] [PATCH V4 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a per-queue
hierarchy when multiple queues are in use, or flat when using only one
queue.

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 140 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 238c2cb..0798b0d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+MODULE_PARM_DESC(xennet_max_queues,
+		"Maximum number of queues per virtual interface");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +571,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1) {
+		queue_idx = 0;
+	} else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1329,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1678,6 +1696,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * way for a single queue, or in per-queue subkeys for multiple
+	 * queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels; taking into account both shared
+	 * and split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1687,10 +1787,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1706,12 +1817,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1749,49 +1861,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1841,8 +1939,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2236,6 +2335,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:04:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSY4-0002KP-US; Mon, 17 Feb 2014 18:04:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSY3-0002K2-Bj
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:04:07 +0000
Received: from [193.109.254.147:43645] by server-6.bemta-14.messagelabs.com id
	C6/8B-03396-61F42035; Mon, 17 Feb 2014 18:04:06 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392660245!4952075!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10192 invoked from network); 17 Feb 2014 18:04:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:04:06 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101502265"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 18:04:04 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:04:04 -0500
Message-ID: <53024F12.4030209@citrix.com>
Date: Mon, 17 Feb 2014 18:04:02 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-4-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-4-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V4 net-next 3/5] xen-netfront: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 17:57, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netfront, move the
> queue-specific data from struct netfront_info to struct netfront_queue,
> and update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_etherdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0, selecting the first (and
> only) queue.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 17 18:04:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSYh-0002Rs-Cd; Mon, 17 Feb 2014 18:04:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSYf-0002Rf-9Q
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:04:45 +0000
Received: from [85.158.143.35:32487] by server-3.bemta-4.messagelabs.com id
	6F/5D-11539-C3F42035; Mon, 17 Feb 2014 18:04:44 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392660283!6314264!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31626 invoked from network); 17 Feb 2014 18:04:44 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:04:44 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="101502380"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 18:04:42 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:04:42 -0500
Message-ID: <53024F38.6010205@citrix.com>
Date: Mon, 17 Feb 2014 18:04:40 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-5-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-5-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V4 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 17:57, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb hash result.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:06:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSaa-0002iB-41; Mon, 17 Feb 2014 18:06:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSaZ-0002hw-2e
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:06:43 +0000
Received: from [85.158.143.35:56725] by server-1.bemta-4.messagelabs.com id
	42/5E-31661-2BF42035; Mon, 17 Feb 2014 18:06:42 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392660400!6323373!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25248 invoked from network); 17 Feb 2014 18:06:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:06:41 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103240600"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:06:19 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:06:18 -0500
Message-ID: <53024F98.4040503@citrix.com>
Date: Mon, 17 Feb 2014 18:06:16 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V4 net-next 5/5] xen-net{back,
 front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 17:58, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Document the multi-queue feature in terms of XenStore keys to be written
> by the backend and by the frontend.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:07:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFSau-0002lw-HR; Mon, 17 Feb 2014 18:07:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFSat-0002lX-1V
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 18:07:03 +0000
Received: from [85.158.137.68:33603] by server-14.bemta-3.messagelabs.com id
	DA/A6-08196-6CF42035; Mon, 17 Feb 2014 18:07:02 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392660420!876247!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24943 invoked from network); 17 Feb 2014 18:07:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:07:01 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103240917"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:06:59 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 17 Feb 2014 13:06:59 -0500
Message-ID: <53024FC2.4020302@citrix.com>
Date: Mon, 17 Feb 2014 18:06:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
	front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17/02/14 18:01, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Document the multi-queue feature in terms of XenStore keys to be written
> by the backend and by the frontend.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 17 18:39:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 18:39:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFT5Y-0003XX-0o; Mon, 17 Feb 2014 18:38:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFT5W-0003XO-Mq
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 18:38:43 +0000
Received: from [85.158.137.68:53119] by server-8.bemta-3.messagelabs.com id
	3A/35-16039-13752035; Mon, 17 Feb 2014 18:38:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392662319!2458775!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30897 invoked from network); 17 Feb 2014 18:38:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 18:38:40 -0000
X-IronPort-AV: E=Sophos;i="4.95,862,1384300800"; d="scan'208";a="103247224"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 17 Feb 2014 18:38:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 13:38:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFT5R-0005xF-AU;
	Mon, 17 Feb 2014 18:38:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFT5R-0000RX-2a;
	Mon, 17 Feb 2014 18:38:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25107-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 18:38:37 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25107: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3669771176739914145=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3669771176739914145==
Content-Type: text/plain

flight 25107 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25107/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24862
 build-amd64-oldkern           4 xen-build        fail in 25094 REGR. vs. 24862

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3 12 guest-localmigrate/x10 fail pass in 25094
 test-amd64-amd64-pair        16 guest-start        fail in 25094 pass in 25107
 test-amd64-i386-qemuu-rhel6hvm-intel 9 guest-start.2 fail in 25094 pass in 25107

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop      fail in 25094 never pass

version targeted for testing:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
baseline version:
 xen                  d883c179a74111a6804baf8cb8224235242a88fc

------------------------------------------------------------
People who touched revisions under test:
  "Zhang, Yang Z" <yang.z.zhang@intel.com>
  Boris Ostrovsky <boris.ostrovsky@oracle.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  George Dunlap <george.dunlap@eu.citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Mukesh Rathor <mukesh.rathor@oracle.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e
Author: Mukesh Rathor <mukesh.rathor@oracle.com>
Date:   Thu Feb 13 17:56:39 2014 +0100

    pvh: Fix regression due to assumption that HVM paths MUST use io-backend device
    
    Commit 09bb434748af9bfe3f7fca4b6eef721a7d5042a4
    "Nested VMX: prohibit virtual vmentry/vmexit during IO emulation"
    assumes that the HVM paths are only taken by HVM guests. With PVH
    enabled that is no longer the case, which means we do not have to
    have the IO-backend device (QEMU) enabled.
    
    As such, that patch can crash the hypervisor:
    
    Xen call trace:
        [<ffff82d0801ddd9a>] nvmx_switch_guest+0x4d/0x903
        [<ffff82d0801de95b>] vmx_asm_vmexit_handler+0x4b/0xc0
    
    Pagetable walk from 000000000000001e:
      L4[0x000] = 0000000000000000 ffffffffffffffff
    
    ****************************************
    Panic on CPU 7:
    FATAL PAGE FAULT
    [error_code=0000]
    Faulting linear address: 000000000000001e
    ****************************************
    
    as we do not have an IO-based backend. In the case that the
    PVH guest does run an HVM guest inside it, we need to do
    further work to support this; for now the check will
    bail us out.
    
    We also fix spelling mistakes and the sentence structure.
    
    Suggested-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 077fc1c04d70ef1748ac2daa6622b3320a1a004c
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Thu Feb 13 15:50:22 2014 +0000

    When enabling log-dirty mode, Xen sets all of the guest's memory to
    read-only. In a HAP-enabled domain, it clears the write bit in all
    EPT entries to make sure the memory is read-only. This causes a
    problem if VT-d shares page tables with EPT: a device may issue a
    DMA write request, and the VT-d engine will report the target memory
    as read-only, resulting in a VT-d fault.

    Currently, two places enable log-dirty mode: migration and VRAM
    tracking. Migration with a device assigned is not allowed, so that
    case is fine. VRAM tracking does not need to set all memory to
    read-only; tracking only the VRAM range is enough.
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 0e251a8371574b905d37d7650d1d625caf0f1181
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 15:13:07 2014 +0000

    xen: Don't use __builtin_stdarg_start().
    
    Cset fca49a00 ("netbsd: build fix with gcc 4.5") changed the
    definition of va_start() to use __builtin_va_start() rather than
    __builtin_stdarg_start() for GCCs >= 4.5, but in fact GCC dropped
    __builtin_stdarg_start() before v3.3.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Roger Pau Monné <roger.pau@citrix.com>

commit d2985386925fab3abe075852db46df29b56c95bb
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Feb 13 15:43:24 2014 +0100

    docs: mention whitespace handling in diskspec target= parsing
    
    disk=[ ' target=/dev/loop0 ' ] will fail to parse because
    '/dev/loop ' does not exist.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 0873829a70daa3c23d03b9841ccd529f05889f21
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 13 12:13:58 2014 +0000

    xen: stop trying to use the system <stdarg.h> and <stdbool.h>
    
    We already have our own versions of the stdarg/stdbool definitions, for
    systems where those headers are installed in /usr/include.
    
    On Linux, they're typically installed in compiler-specific paths, but
    finding them has proved unreliable.  Drop that and use our own versions
    everywhere.
    
    Signed-off-by: Tim Deegan <tim@xen.org>
    Tested-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Keir Fraser <keir@xen.org>

commit 42788ddd24a06bf05f0f2b5da1880ed89736bd7b
Author: Jan Beulich <JBeulich@suse.com>
Date:   Thu Feb 13 12:57:43 2014 +0000

    tools/configure: correct --enable-blktap1 help text
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 1278b09cc5a38da4efbe0de37a7f9fab9d19f913
Author: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date:   Tue Feb 11 10:25:17 2014 -0500

    docs/vtpm: fix auto-shutdown reference
    
    The automatic shutdown feature of the vTPM was removed because it
    interfered with pv-grub measurement support and was also not triggered
    if the guest did not use the vTPM. Virtual TPM domains will need to be
    shut down or destroyed on guest shutdown via a script or other user
    action.
    
    This also fixes an incorrect reference to the vTPM being PV-only.
    
    Signed-off-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit 001bdcee7bc19be3e047d227b4d940c04972eb02
Author: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date:   Thu Feb 13 10:49:55 2014 +0100

    x86/pci: Store VF's memory space displacement in a 64-bit value
    
    VF's memory space offset can be greater than 4GB and therefore needs
    to be stored in a 64-bit variable.
    
    Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 3ac3817762d1a8b39fa45998ec8c40cabfcfc802
Author: Ian Campbell <ian.campbell@citrix.com>
Date:   Wed Feb 12 14:27:37 2014 +0000

    xl: suppress suspend/resume functions on platforms which do not support it.
    
    ARM does not (currently) support migration, so stop offering tasty looking
    treats like "xl migrate".
    
    Apart from the UI improvement my intention is to use this in osstest to detect
    whether to attempt the save/restore/migrate tests.
    
    Other than the additions of the #define/#ifdef there is a tiny bit of code
    motion ("dump-core" in the command list and core_dump_domain in the
    implementations) which serves to put ifdeffable bits next to each other.
    
    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


--===============3669771176739914145==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3669771176739914145==--

From xen-devel-bounces@lists.xen.org Mon Feb 17 19:18:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 19:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFThq-00044F-Kp; Mon, 17 Feb 2014 19:18:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WFTho-00044A-Ak
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 19:18:16 +0000
Received: from [85.158.143.35:46707] by server-3.bemta-4.messagelabs.com id
	69/09-11539-77062035; Mon, 17 Feb 2014 19:18:15 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392664694!6313339!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29520 invoked from network); 17 Feb 2014 19:18:14 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 19:18:14 -0000
Received: by mail-wg0-f47.google.com with SMTP id k14so2541443wgh.2
	for <xen-devel@lists.xensource.com>;
	Mon, 17 Feb 2014 11:18:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type:content-transfer-encoding;
	bh=6MnTJ0j47gKMxzquUik0/qTwvMIez542TnM2ogRyqoM=;
	b=ir+f8j7STq7K2Kh3n31a+QJMkfnPQNWDs93HjCb4tfKSNTB9VA+0vPL/LYYlhcFLDZ
	+ydifma5HGHhcF5go6lG7HXxqIgn56WIQWgWCJ4uX+xl9/uA1MMJBFw3eFmBB1prebyP
	uQ/LdwaBrD+1oXTwelBLqJwZKCnLn9KiUCHAGND3qfHiSAx8Yj+j8zwJe62ibejproof
	l7jfk7t0+LUMCpFuUZ4LGVZaaPZ6i3qiZSqxtgOKE4kKn4iMns7rFIvY+3ggHWgEigdG
	UClHnPildcW4lbQsl37nXMntrwKDzPUTBtRm0vorVTRW0A9Sd+Qd9wvNHOhkYtfF8uRF
	OHiA==
X-Received: by 10.194.192.233 with SMTP id hj9mr3312328wjc.78.1392664694083;
	Mon, 17 Feb 2014 11:18:14 -0800 (PST)
MIME-Version: 1.0
Received: by 10.217.55.201 with HTTP; Mon, 17 Feb 2014 11:17:54 -0800 (PST)
In-Reply-To: <5301C920.4040905@citrix.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
	<CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
	<5301C920.4040905@citrix.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Mon, 17 Feb 2014 19:17:54 +0000
Message-ID: <CADGo8CXyeYWV33CuY4JdNLPyVNXGYvT2O03Br2fATTf+YyBG-g@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I see; from the previous message I assumed it had to be against drbd-9.
The apt-get repos offer only 8.3, but 8.4 is available as a manual .deb
download, so I'll go for the same version as you, just to be sure it's
the same thing.

On Mon, Feb 17, 2014 at 8:32 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> On 14/02/14 21:08, Miguel Clara wrote:
>> On Fri, Feb 14, 2014 at 9:13 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>> On 14/02/14 03:09, Miguel Clara wrote:
>>>> After compiling with the patch and rebuilding/installing the module, I
>>>> reboot, I get a panic now when drbd starts.
>>>
>>> There was no need to rebuild the module, the patch only modified the
>>> block-drbd script. I've tested it with drbd-8.4.3 and Linux 3.13,
>>> everything seemed to be fine (no kernel panic of course).
>>
>> Just noticed this part now but before you said:
>> (The patch is against git://git.drbd.org/drbd-9.0.git)
>>
>> So should I apply the patch to drbd-8.4.3... ?
>
> Yes, I think this patch will apply to almost any version of drbd since
> it only modifies the block-drbd script. I would recommend that you apply
> it against your already installed block-drbd script, this way you will
> be sure there's no version mismatch.
>
> Roger.
>


From xen-devel-bounces@lists.xen.org Mon Feb 17 19:26:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 19:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFTpb-0004Dp-Km; Mon, 17 Feb 2014 19:26:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFTpa-0004Dk-9W
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 19:26:18 +0000
Received: from [85.158.143.35:32217] by server-1.bemta-4.messagelabs.com id
	E4/1C-31661-85262035; Mon, 17 Feb 2014 19:26:16 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392665174!6310701!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18817 invoked from network); 17 Feb 2014 19:26:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 19:26:15 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1HJQB15010546
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 19:26:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1HJQB8W002940
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 19:26:11 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1HJQAN1018129; Mon, 17 Feb 2014 19:26:10 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 17 Feb 2014 11:26:10 -0800
Message-ID: <530262B3.5000603@oracle.com>
Date: Mon, 17 Feb 2014 14:27:47 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1392659118-32593-1-git-send-email-david.vrabel@citrix.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv1 0/3] xen/events: remove some
	unused/unnecessary code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 12:45 PM, David Vrabel wrote:
> Remove some unused and unnecessary event channel code.
>
> David
>

I'd probably leave the blank line in bind_virq_to_irq() (second patch). 
Other than that

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xen.org Mon Feb 17 20:27:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 20:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFUlz-0004Zb-Np; Mon, 17 Feb 2014 20:26:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WFUlx-0004ZW-Qt
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 20:26:37 +0000
Received: from [85.158.143.35:22299] by server-1.bemta-4.messagelabs.com id
	57/72-31661-D7072035; Mon, 17 Feb 2014 20:26:37 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392668796!6346500!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32099 invoked from network); 17 Feb 2014 20:26:36 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 17 Feb 2014 20:26:36 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s1HKQOwl005266;
	Mon, 17 Feb 2014 20:26:29 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s1HKQIA7010329
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 17 Feb 2014 20:26:18 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s1HKQHI0032728;
	Mon, 17 Feb 2014 20:26:17 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s1HKQHO5032724; Mon, 17 Feb 2014 20:26:17 GMT
Date: Mon, 17 Feb 2014 20:26:17 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Dario Faggioli <raistlin@linux.it>
In-Reply-To: <1392658580.32038.446.camel@Solace>
Message-ID: <alpine.DEB.2.00.1402172025390.29650@procyon.dur.ac.uk>
References: <1392658580.32038.446.camel@Solace>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s1HKQOwl005266
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 RC4 out... TestDay tomorrow...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Feb 2014, Dario Faggioli wrote:

> Hey Michael,
>
> As usual, I'm asking whether you'd be up for preparing a temporary
> build, to facilitate using Fedora as a platform for Xen 4.4 RC4 test
> day.

There is a temporary build at 
http://koji.fedoraproject.org/koji/taskinfo?taskID=6540303

 	Michael Young


From xen-devel-bounces@lists.xen.org Mon Feb 17 22:32:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 22:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFWjf-000555-Lk; Mon, 17 Feb 2014 22:32:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xumengpanda@gmail.com>) id 1WFWje-000550-IW
	for xen-devel@lists.xen.org; Mon, 17 Feb 2014 22:32:22 +0000
Received: from [85.158.139.211:2057] by server-16.bemta-5.messagelabs.com id
	38/57-05060-5FD82035; Mon, 17 Feb 2014 22:32:21 +0000
X-Env-Sender: xumengpanda@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392676337!4512849!1
X-Originating-IP: [209.85.214.174]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26302 invoked from network); 17 Feb 2014 22:32:18 -0000
Received: from mail-ob0-f174.google.com (HELO mail-ob0-f174.google.com)
	(209.85.214.174)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 22:32:18 -0000
Received: by mail-ob0-f174.google.com with SMTP id uy5so17558049obc.5
	for <xen-devel@lists.xen.org>; Mon, 17 Feb 2014 14:32:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=09aK8NeGvi7blzBT/z5eg5SLajVSUsnqO1t+PYHnRBk=;
	b=AJOqJ87ko0uIRbISpDdJjLLi4FCl314NAr/wKiqGwZXGOdgqYWEt4dUsZtHtwLxwC+
	g+6AipX8dzC0XsN+sM8x6F3W4FThIB/J2wtpN8Iq2JHpBAuyBmilu4NFZhKwKw0uEtew
	hJBlB1tQVY2tgpiurm6J3xxDUu4Jl7Gf2Z9lXDjg9jdsuy7SP5MMZRPxmUJDoJNa0Ou8
	HuIbWDGIM/Gkb8swi5lmCMVQxFQzMX1PCZ2DalkIDWve2jaBpEjbVs7wsWq6DWaav/1f
	htQvJrkNG8foQVBsf0uaCTP7wTaPnSx4RbqA2frlfjWn56s2ax4GVS9TPqNuTkL3S74J
	lhng==
MIME-Version: 1.0
X-Received: by 10.60.44.42 with SMTP id b10mr93603oem.70.1392676337189; Mon,
	17 Feb 2014 14:32:17 -0800 (PST)
Received: by 10.76.173.98 with HTTP; Mon, 17 Feb 2014 14:32:17 -0800 (PST)
Date: Mon, 17 Feb 2014 17:32:17 -0500
Message-ID: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
From: Meng Xu <xumengpanda@gmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: "mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>
Subject: [Xen-devel] Question about running a program(Intel PCM) in ring 0
	on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6408963643605319677=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6408963643605319677==
Content-Type: multipart/alternative; boundary=001a11c30a045d8c7d04f2a1ba3e

--001a11c30a045d8c7d04f2a1ba3e
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I'm a PhD student working on real-time systems.

*[My goal]*
I want to measure the cache hit/miss rate of each guest domain in Xen. I
may also want to measure some other events, e.g. the memory access rate,
for each program in each guest domain.

My machine's CPU uses the Intel Ivy Bridge architecture.

*[The problem I'm encountering]*
I tried Intel's Performance Counter Monitor (PCM) in Linux on a bare-metal
machine to get the cache access rate for each level of cache, and it works
very well.

However, when I try to use PCM in dom0 on Xen, it does not work. I think
PCM needs to run in ring 0 to read/write the MSRs; because dom0 runs in
ring 1, PCM running in dom0 cannot do so.

*So my question is:*
How can I run a program (say PCM) in ring 0 on Xen?

*What's in my mind is:*
Write a hypercall that invokes PCM in Xen's hypervisor context, so that PCM
runs in ring 0? My concern is that some of PCM's calls, e.g. printf(), may
not work in that context.

Do you have any suggestions on running PCM, or another
performance-monitoring program, in ring 0 on Xen?

*What I tried before:*
I wrote a hypercall to read and write the MSRs and record the cache
hit/miss events for each level of cache, using Intel's performance
counters. It worked on my machine, but it is not portable to other
machines, since the event numbers may differ. That's why I think running
PCM or another existing performance-monitoring program on Xen would be a
better idea.

Thank you very much for your time and help with this question!

Best,

Meng

--001a11c30a045d8c7d04f2a1ba3e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div class=3D"gmail_default" style=3D"font-size:small">Hi,=
</div><div class=3D"gmail_default" style=3D"font-size:small"><br></div><div=
 class=3D"gmail_default" style=3D"font-size:small">I&#39;m a PhD student, w=
orking on real time system.=A0</div>
<div class=3D"gmail_default" style=3D"font-size:small"><br></div><div class=
=3D"gmail_default" style=3D"font-size:small"><b>[My goal]</b></div><div cla=
ss=3D"gmail_default" style=3D"font-size:small">I want to measure the cache =
hit/miss rate of each guest domain in Xen. I may also want to measure some =
other events, say memory access rate, for each program in each guest domain=
 in Xen.</div>
<div class=3D"gmail_default" style=3D"font-size:small"><br></div><div class=
=3D"gmail_default" style=3D"font-size:small">My machine&#39;s CPU uses inte=
l IvyBridge architecture.=A0</div><div class=3D"gmail_default" style=3D"fon=
t-size:small">
<br></div><div class=3D"gmail_default" style=3D"font-size:small"><b>[The pr=
oblem I&#39;m encountering]</b></div><div class=3D"gmail_default" style=3D"=
font-size:small">I tried intel&#39;s Performance Counter Monitor (PCM) in L=
inux on bare machine to get the machine&#39;s cache access rate for each le=
vel of cache, it works very well.=A0</div>
<div class=3D"gmail_default" style=3D"font-size:small"><br></div><div class=
=3D"gmail_default" style=3D"font-size:small">However, when I want to use th=
e PCM in Xen and run it in dom0, it cannot work. I think the PCM needs to r=
un in ring 0 to read/write the MSR. Because dom0 is running in ring 1, so P=
CM running in dom0 cannot work.=A0</div>
<div class=3D"gmail_default" style=3D"font-size:small"><br></div><div class=
=3D"gmail_default" style=3D"font-size:small"><b>So my question is:</b></div=
><div class=3D"gmail_default" style=3D"font-size:small">How can I run a pro=
gram (say PCM) in ring 0 on Xen?=A0</div>
<div class=3D"gmail_default" style=3D"font-size:small"><br></div><div class=
=3D"gmail_default" style=3D"font-size:small"><b>What&#39;s in my mind is:</=
b></div><div class=3D"gmail_default" style=3D"font-size:small">Writing a hy=
percall to call the PCM in Xen&#39;s kernel space, then the PCM will run in=
 ring 0?=A0</div>
<div class=3D"gmail_default" style=3D"font-size:small">But the problem I&#3=
9;m concerned is that some of the PCM&#39;s instruction, say printf(), may =
not be able to run in kernel space?=A0</div><div><div dir=3D"ltr"><br></div=
><div dir=3D"ltr">
<div class=3D"gmail_default" style=3D"font-size:small">Do you have any sugg=
estion on running PCM or other performance monitor program in ring 0 on Xen=
?=A0</div><div class=3D"gmail_default" style=3D"font-size:small"><br></div>=
<div class=3D"gmail_default" style=3D"font-size:small">
<b>What I tried before:</b></div><div class=3D"gmail_default" style=3D"font=
-size:small">I wrote a hypercall to read and write the MSR and record the c=
ache hit/miss event for each level of cache, using Intel&#39;s performance =
counter.=A0</div>
<div class=3D"gmail_default" style=3D"font-size:small">It worked on my mach=
ine. But it&#39;s not portable to other machines since the event number may=
 be different. That&#39;s why I think running PCM or other existing perform=
ance monitor program on Xen will be a better idea.</div>
<div class=3D"gmail_default" style=3D"font-size:small"><br></div><div class=
=3D"gmail_default" style=3D"font-size:small">Thank you very much for your t=
ime and help in this question!</div><div class=3D"gmail_default" style=3D"f=
ont-size:small">
<br></div><div class=3D"gmail_default" style=3D"font-size:small">Best,</div=
><div class=3D"gmail_default" style=3D"font-size:small"><br></div><div clas=
s=3D"gmail_default" style=3D"font-size:small">Meng</div></div><div dir=3D"l=
tr"><br><br>
</div></div>
</div>

--001a11c30a045d8c7d04f2a1ba3e--


--===============6408963643605319677==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6408963643605319677==--


From xen-devel-bounces@lists.xen.org Mon Feb 17 23:33:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Feb 2014 23:33:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFXgJ-0005Mo-Ka; Mon, 17 Feb 2014 23:32:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFXgH-0005Mj-Vs
	for xen-devel@lists.xensource.com; Mon, 17 Feb 2014 23:32:58 +0000
Received: from [85.158.143.35:63567] by server-3.bemta-4.messagelabs.com id
	4A/66-11539-92C92035; Mon, 17 Feb 2014 23:32:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392679974!6365799!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24313 invoked from network); 17 Feb 2014 23:32:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	17 Feb 2014 23:32:55 -0000
X-IronPort-AV: E=Sophos;i="4.95,863,1384300800"; d="scan'208";a="101570782"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 17 Feb 2014 23:32:54 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 18:32:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFXgD-0007OP-AY;
	Mon, 17 Feb 2014 23:32:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFXgD-0001UF-3z;
	Mon, 17 Feb 2014 23:32:53 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25109-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 17 Feb 2014 23:32:53 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25109: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25109 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25109/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64  5 xen-boot               fail like 12557
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install  fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                6d0abeca3242a88cab8232e4acd7e2bf088f3bc2
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7050 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2383214 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 01:55:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 01:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFZtf-0001g1-VL; Tue, 18 Feb 2014 01:54:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFZtd-0001fw-M6
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 01:54:53 +0000
Received: from [193.109.254.147:37508] by server-13.bemta-14.messagelabs.com
	id E8/C4-01226-C6DB2035; Tue, 18 Feb 2014 01:54:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392688490!4989451!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16230 invoked from network); 18 Feb 2014 01:54:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 01:54:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,864,1389744000"; d="scan'208";a="101592871"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 01:54:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 20:54:49 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFZtZ-00086h-58;
	Tue, 18 Feb 2014 01:54:49 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFZsE-0007Y5-63;
	Tue, 18 Feb 2014 01:54:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25112-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 01:53:26 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25112: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25112 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25112/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 13 guest-localmigrate.2        fail pass in 25102
 test-amd64-i386-xl-credit2    9 guest-start                 fail pass in 25102
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 25102
 test-amd64-i386-qemuu-freebsd10-i386 8 guest-start fail in 25102 pass in 25112

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop     fail in 25102 never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 03:15:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 03:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFb96-0002MN-Vl; Tue, 18 Feb 2014 03:14:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFb95-0002MI-DJ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 03:14:55 +0000
Received: from [85.158.143.35:41124] by server-3.bemta-4.messagelabs.com id
	12/B2-11539-E20D2035; Tue, 18 Feb 2014 03:14:54 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392693293!6386710!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6475 invoked from network); 18 Feb 2014 03:14:53 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 03:14:53 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 17 Feb 2014 19:14:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,499,1389772800"; d="scan'208";a="483185974"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 17 Feb 2014 19:14:50 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 19:14:49 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 19:14:49 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.202]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 11:14:48 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEIAIfYa+///GNICAAAOsgIABNL/w
Date: Tue, 18 Feb 2014 03:14:47 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
In-Reply-To: <5302332D020000780011CEF1@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-17:
>>>> On 17.02.14 at 15:51, George Dunlap <george.dunlap@eu.citrix.com>
> wrote:
>> On 02/17/2014 10:18 AM, Jan Beulich wrote:
>>> Actually I'm afraid there are two problems with this patch:
>>> 
>>> For one, is enabling "global" log dirty mode still going to work
>>> after VRAM-only mode already got enabled? I ask because the
>>> paging_mode_log_dirty() check which paging_log_dirty_enable() does
>>> first thing suggests otherwise to me (i.e. the now conditional
>>> setting of all p2m entries to p2m_ram_logdirty would seem to never
>>> get executed). IOW I would think that we're now lacking a control
>>> operation allowing the transition from dirty VRAM tracking mode to
>>> full log dirty mode.
>> 
>> Hrm, well, so far playing with this I've been unable to get a
>> localhost migrate to fail with the vncviewer attached.  Which seems a bit strange...
> 
> Not necessarily - it may depend on how the tools actually do this:
> They might temporarily disable log dirty mode altogether, just to
> re-enable full mode again right away. But this specific usage of the
> hypervisor interface wouldn't (to me) mean that other tool stacks
> might not be doing this differently.

You are right. Before migration, libxc disables log dirty mode if it is already enabled, and then re-enables it. So when I was developing this patch, I concluded that migration would work correctly.

If other tool stacks really do use this interface as well (is that true?), then perhaps my original patch is better: it rejects the call only when paging_mode_log_dirty(d) && !log_global:

diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ab5eacb..368c975 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -168,7 +168,7 @@ int paging_log_dirty_enable(struct domain *d, bool_t log_global)
 {
     int ret;
 
-    if ( paging_mode_log_dirty(d) )
+    if ( paging_mode_log_dirty(d) && !log_global )
         return -EINVAL;
 
     domain_pause(d);


> 
> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 03:15:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 03:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFb96-0002MN-Vl; Tue, 18 Feb 2014 03:14:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFb95-0002MI-DJ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 03:14:55 +0000
Received: from [85.158.143.35:41124] by server-3.bemta-4.messagelabs.com id
	12/B2-11539-E20D2035; Tue, 18 Feb 2014 03:14:54 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392693293!6386710!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6475 invoked from network); 18 Feb 2014 03:14:53 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-4.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 03:14:53 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 17 Feb 2014 19:14:51 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,499,1389772800"; d="scan'208";a="483185974"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by fmsmga002.fm.intel.com with ESMTP; 17 Feb 2014 19:14:50 -0800
Received: from fmsmsx101.amr.corp.intel.com (10.19.9.52) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 19:14:49 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX101.amr.corp.intel.com (10.19.9.52) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 19:14:49 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.202]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 11:14:48 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEIAIfYa+///GNICAAAOsgIABNL/w
Date: Tue, 18 Feb 2014 03:14:47 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
In-Reply-To: <5302332D020000780011CEF1@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-17:
>>>> On 17.02.14 at 15:51, George Dunlap <george.dunlap@eu.citrix.com>
> wrote:
>> On 02/17/2014 10:18 AM, Jan Beulich wrote:
>>> Actually I'm afraid there are two problems with this patch:
>>> 
>>> For one, is enabling "global" log dirty mode still going to work
>>> after VRAM-only mode already got enabled? I ask because the
>>> paging_mode_log_dirty() check which paging_log_dirty_enable() does
>>> first thing suggests otherwise to me (i.e. the now conditional
>>> setting of all p2m entries to p2m_ram_logdirty would seem to never
>>> get executed). IOW I would think that we're now lacking a control
>>> operation allowing the transition from dirty VRAM tracking mode to
>>> full log dirty mode.
>> 
>> Hrm, well, so far playing with this I've been unable to get a
>> localhost migrate to fail with the vncviewer attached.  Which seems a bit strange...
> 
> Not necessarily - it may depend on how the tools actually do this:
> They might temporarily disable log dirty mode altogether, just to
> re-enable full mode again right away. But this specific usage of the
> hypervisor interface wouldn't (to me) mean that other tool stacks
> might not be doing this differently.

You are right. Before migration, libxc disables log dirty mode if it was already enabled, and then re-enables it. So while developing this patch, I thought it was fine for migration.

If other tool stacks really do use this interface as well (is that true?), perhaps my original patch is better, since it checks paging_mode_log_dirty(d) && !log_global:

diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index ab5eacb..368c975 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -168,7 +168,7 @@ int paging_log_dirty_enable(struct domain *d, bool_t log_global)
 {
     int ret;
 
-    if ( paging_mode_log_dirty(d) )
+    if ( paging_mode_log_dirty(d) && !log_global )
         return -EINVAL;
 
     domain_pause(d);


> 
> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 03:25:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 03:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFbJQ-0002VX-57; Tue, 18 Feb 2014 03:25:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFbJO-0002VS-Ho
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 03:25:34 +0000
Received: from [193.109.254.147:7560] by server-6.bemta-14.messagelabs.com id
	03/DB-03396-AA2D2035; Tue, 18 Feb 2014 03:25:30 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392693928!4997605!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26402 invoked from network); 18 Feb 2014 03:25:29 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-8.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 03:25:29 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga101.jf.intel.com with ESMTP; 17 Feb 2014 19:25:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,499,1389772800"; d="scan'208";a="457160170"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga001.jf.intel.com with ESMTP; 17 Feb 2014 19:25:27 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 19:25:26 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 19:25:25 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 11:25:21 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, "Zhang, Xiantao" <xiantao.zhang@intel.com>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEIAIfYa+///IvYCAAVNtYA==
Date: Tue, 18 Feb 2014 03:25:21 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
In-Reply-To: <53023239020000780011CED9@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, "Dugger, Donald D" <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-17:
>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>> And second, I have been fighting with finding both conditions and
>> (eventually) the root cause of a severe performance regression
>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>> became _much_ worse after adding in the patch here (while in fact I
>> had hoped it might help with the originally observed
>> degradation): X startup fails due to timing out, and booting the
>> guest now takes about 20 minutes). I didn't find the root cause of
>> this yet, but meanwhile I know that
>> - the same isn't observable on SVM
>> - there's no problem when forcing the domain to use shadow
>>   mode - there's no need for any device to actually be assigned to the
>>   guest - the regression is very likely purely graphics related (based
>>   on the observation that when running something that regularly but not
>>   heavily updates the screen with X up, the guest consumes a full CPU's
>>   worth of processing power, yet when that updating doesn't happen, CPU
>>   consumption goes down, and it goes further down when shutting down X
>>   altogether - at least as long as the patch here doesn't get involved).
>> This I'm observing on a Westmere box (and I didn't notice it earlier
>> because that's one of those where due to a chipset erratum the IOMMU
>> gets turned off by default), so it's possible that this can't be
>> seen on more modern hardware. I'll hopefully find time today to
>> check this on the one newer (Sandy Bridge) box I have.
> 
> Just got done with trying this: By default, things work fine there.
> As soon as I use "iommu=no-snoop", things go bad (even worse than on
> the older box - the guest is consuming about 2.5 CPUs worth of
> processing power _without_ the patch here in use, so I don't even want
> to think about trying it there); I guessed that to be another of the
> potential sources of the problem since on that older box the respective hardware feature is unavailable.
> 
> While I'll try to look into this further, I guess I have to defer to
> our VT-d specialists at Intel at this point...
> 

Hi, Jan,

I tried to reproduce it, but unfortunately I cannot reproduce it on my box (Sandy Bridge EP) with the latest Xen (including my patch). I guess my configuration or steps may be wrong; here is what I did:

1. Add iommu=1,no-snoop to the Xen command line; the boot log then shows:
(XEN) Intel VT-d Snoop Control not enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables enabled.

2. Boot a RHEL 6u4 guest.

3. After the guest boots up, run startx inside the guest.

4. After a few seconds, the X window appears without any error. Also, the CPU utilization is about 1.7%.

Anything wrong?
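For concreteness, step 1 would look something like the following in a GRUB (legacy) menu.lst entry; the file paths and kernel/initrd names below are illustrative placeholders, and only the iommu=1,no-snoop option itself comes from the steps above:

```shell
# Illustrative GRUB (legacy) menu.lst entry; paths and image names are
# placeholders, not taken from the test box described above.
title Xen (VT-d no-snoop test)
    root (hd0,0)
    kernel /boot/xen.gz iommu=1,no-snoop console=vga
    module /boot/vmlinuz-dom0 root=/dev/sda1 ro
    module /boot/initrd-dom0.img
```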

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 03:46:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 03:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFbdB-0002gK-3E; Tue, 18 Feb 2014 03:46:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFbd8-0002gF-5a
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 03:45:59 +0000
Received: from [85.158.143.35:34001] by server-3.bemta-4.messagelabs.com id
	0B/A6-11539-477D2035; Tue, 18 Feb 2014 03:45:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392695153!6394994!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23446 invoked from network); 18 Feb 2014 03:45:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 03:45:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,499,1389744000"; d="scan'208";a="101610576"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 03:45:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 17 Feb 2014 22:45:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFbd1-0000CD-Ht;
	Tue, 18 Feb 2014 03:45:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFbd0-00089H-TX;
	Tue, 18 Feb 2014 03:45:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25114-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 03:45:50 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable baseline test] 25114: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Old" tested version had not actually been tested; therefore in this
flight we test it, rather than a new candidate.  The baseline, if
any, is the most recent actually tested revision.

flight 25114 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25114/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-winxpsp3  4 xen-install         fail REGR. vs. 25107
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 25107

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2
baseline version:
 xen                  4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.

------------------------------------------------------------
commit b7319350278d0220febc8a7dc8be8e8d41b0abd2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 17 16:33:48 2014 +0000

    Update QEMU_UPSTREAM_REVISION for 4.4.0-rc4
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.

------------------------------------------------------------
commit b7319350278d0220febc8a7dc8be8e8d41b0abd2
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 17 16:33:48 2014 +0000

    Update QEMU_UPSTREAM_REVISION for 4.4.0-rc4
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 04:26:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 04:26:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFcFM-0002x2-GZ; Tue, 18 Feb 2014 04:25:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFcFK-0002wx-FT
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 04:25:26 +0000
Received: from [85.158.139.211:32643] by server-3.bemta-5.messagelabs.com id
	BD/CA-13671-5B0E2035; Tue, 18 Feb 2014 04:25:25 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392697523!4519628!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11260 invoked from network); 18 Feb 2014 04:25:24 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-14.tower-206.messagelabs.com with SMTP;
	18 Feb 2014 04:25:24 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 17 Feb 2014 20:21:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,499,1389772800"; d="scan'208";a="484989437"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga002.jf.intel.com with ESMTP; 17 Feb 2014 20:25:21 -0800
Received: from fmsmsx111.amr.corp.intel.com (10.18.116.5) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 20:25:20 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	fmsmsx111.amr.corp.intel.com (10.18.116.5) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 17 Feb 2014 20:25:20 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 12:25:18 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"chegger@amazon.de" <chegger@amazon.de>, "suravee.suthikulpanit@amd.com"
	<suravee.suthikulpanit@amd.com>, "boris.ostrovsky@oracle.com"
	<boris.ostrovsky@oracle.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>, "JBeulich@suse.com" <JBeulich@suse.com>
Thread-Topic: [PATCH V4.1] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD  thresolding MSRs
Thread-Index: AQHPKEwerpjR5j3ySEeyNVNl8C2Pk5q6b1vQ
Date: Tue, 18 Feb 2014 04:25:17 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For Intel, this doesn't disturb the Intel vmce logic, so it's OK for me.

For AMD, c000_040x is bank4 (MC4_MISCj), while vmce currently only supports bank 0/1. Even if AMD adds MC0/1_MISCj in the future, those registers wouldn't need emulation (say, reads return 0 and writes are ignored). So how about simply filtering out the AMD MCx_MISCj registers in mce_vendor_bank_msr()?

Thanks,
Jinsong

Aravind Gopalakrishnan wrote:
> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
> registers. But due to this statement here:
> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> we are wrongly masking off top two bits which meant the register
> accesses never made it to vmce_amd_* functions.
> 
> Corrected this problem by modifying the mask in this patch to allow
> AMD thresholding registers to fall to 'default' case which in turn
> allows vmce_amd_* functions to handle access to the registers.
> 
> While at it, remove some clutter in the vmce_amd* functions. Retained
> current policy of returning zero for reads and ignoring writes.
> 
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> ---
>  xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
>  xen/arch/x86/cpu/mcheck/vmce.c    |   14 +++++++++++--
>  2 files changed, 18 insertions(+), 37 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
> index 61319dc..03797ab 100644
> --- a/xen/arch/x86/cpu/mcheck/amd_f10.c
> +++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
> @@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
>  /* amd specific MCA MSR */
>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		v->arch.vmce.bank[1].mci_misc = val;
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* ignore write: we do not emulate link and l3 cache errors
> -		 * to the guest.
> -		 */
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	default:
> -		return 0;
> -	}
> -
> -	return 1;
> +    /* Do nothing as we don't emulate this MC bank currently */
> +    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> +    return 1;
>  }
> 
>  int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		*val = v->arch.vmce.bank[1].mci_misc;
> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* we do not emulate link and l3 cache
> -		 * errors to the guest.
> -		 */
> -		*val = 0;
> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
> -		break;
> -	default:
> -		return 0;
> -	}
> -
> -	return 1;
> +    /* Assign '0' as we don't emulate this MC bank currently */
> +    *val = 0;
> +    return 1;
>  }
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
> index f6c35db..84843fc 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> 
>      *val = 0;
> 
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    /* Allow only first 3 MC banks into switch() */
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>      {
>      case MSR_IA32_MC0_CTL:
>          /* stick all 1's to MCi_CTL */
> @@ -148,6 +149,10 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>              ret = vmce_intel_rdmsr(v, msr, val);
>              break;
>          case X86_VENDOR_AMD:
> +            /*
> +             * Extended block of AMD thresholding registers fall into default.
> +             * Handle reads here.
> +             */
>              ret = vmce_amd_rdmsr(v, msr, val);
>              break;
>          default:
> @@ -210,7 +215,8 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>      int ret = 1;
>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
> 
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    /* Allow only first 3 MC banks into switch() */
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>      {
>      case MSR_IA32_MC0_CTL:
>          /*
> @@ -246,6 +252,10 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>              ret = vmce_intel_wrmsr(v, msr, val);
>              break;
>          case X86_VENDOR_AMD:
> +            /*
> +             * Extended block of AMD thresholding registers fall into default.
> +             * Handle writes here.
> +             */
>              ret = vmce_amd_wrmsr(v, msr, val);
>              break;
>          default:


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 05:32:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 05:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFdHX-0003PK-2J; Tue, 18 Feb 2014 05:31:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WFdHV-0003PF-7Z
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 05:31:45 +0000
Received: from [85.158.139.211:50085] by server-7.bemta-5.messagelabs.com id
	38/59-14867-040F2035; Tue, 18 Feb 2014 05:31:44 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392701502!4557881!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18591 invoked from network); 18 Feb 2014 05:31:43 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 05:31:43 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=W1tGqmwN9L4eS1jSkh6I4dEKOYIt/EhQAMEbiLDPwS2ZoyIP6tSSSV+n
	0WoV7kTbZ8pvmSyTaxK/6Zmvk365x4sndg9HrnQl4uMcnmFZ9BttwObYY
	Uvypg2zH0R7j93UQIl02dXWJW9q/Jv2yeaVzd07Qn5xVz8KXBCrNNXqhv
	2SAmZdMQWGG6+nNM3Iza+pKP7V3/FYEZ7dBNcHLekcR/bSQPdhaWpsWYa
	/pmTRyHC/FPoTAgDGWVgLTtBuoJKM;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392701503; x=1424237503;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=TBGyjoSnwB7DFWV8A/ywfYEEYwAsiPbHJ4YMg51tXls=;
	b=T/RspPQQC77WnPA6u8UAWXGCeJMMQQ/t+SNknzX2CLooR+tEddK0UY3R
	PgrgBnqo0abzVDwsi0OrzmdfUWUyBJDbaAUgfL69wygcWeuwdVuQJMRGO
	eGslZsK4yn2wkUsrH1PoUgpMQKghWmY9ABemJZb8f/ddstEBKhD1oGvN5
	DayQUUfoDrTdrDELuFDwNqCZieCbEijU2mEDlbPV4oT4EJZpsyvmmHkHZ
	JE9riA9Uit65lgirovD2/el2/6nQW;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,500,1389740400"; d="scan'208";a="185852355"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate10u.abg.fsc.net with ESMTP; 18 Feb 2014 06:31:42 +0100
X-IronPort-AV: E=Sophos;i="4.97,500,1389740400"; d="scan'208";a="31755757"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 18 Feb 2014 06:31:42 +0100
Message-ID: <5302F03D.3020407@ts.fujitsu.com>
Date: Tue, 18 Feb 2014 06:31:41 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<53020C4B.6000509@ts.fujitsu.com>
	<1392649714.32038.427.camel@Solace>
In-Reply-To: <1392649714.32038.427.camel@Solace>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>,
	Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 17.02.2014 16:08, Dario Faggioli wrote:
> On lun, 2014-02-17 at 14:19 +0100, Juergen Gross wrote:

>> With "xenstore-ls /vm" the information can be retrieved: it is listed
>> under <uuid>/pool_name (with <uuid> being the UUID of the domain in
>> question).
>>
> Funny:
>
> root@Zhaman:~# xenstore-ls /vm | grep -i pool
> root@Zhaman:~#
>
> root@Zhaman:~# xenstore-ls | grep -i pool
> root@Zhaman:~#
>
> :-O
>
> Is that because I didn't really add any pool (i.e., I'm running the
> above with only Pool-0) ?

Hmm. Could be another difference between xm and xl (on my 4.2 box I created
the domU via xm):

root@nehalem1 # xenstore-ls /vm | grep -i pool
  pool_name = "Pool-0"
  pool_name = "bs2_pool"


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 07:39:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 07:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFfGy-00044m-2j; Tue, 18 Feb 2014 07:39:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WFfGv-00044c-Nf
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 07:39:18 +0000
Received: from [85.158.137.68:41697] by server-15.bemta-3.messagelabs.com id
	F0/A8-19263-42E03035; Tue, 18 Feb 2014 07:39:16 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392709155!1530653!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26094 invoked from network); 18 Feb 2014 07:39:15 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 07:39:15 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:From:To:Cc:Subject:Date:Message-Id:X-Mailer;
	b=LfPDd+s1gDhTsR3dcNbN1wfsKlXAUJ+1eE2fo6bp4Helk1Ns83zl+2ie
	3FStzIPZQEZFDCBaWc5Lh+ZIDBRefz1kw0x9M7imaYdqWURgj4EBcEBx3
	nJvDc/9eG47DmJrtPXtFfNvxSNGwk24qnel/NMInwqVoQnTcR/HVhJ69f
	UrHUGxjklvUVSkoo/jwnkEB3QPVYXVx2o6/CqSHoLKF2EMtJ04fCW9cKG
	zV7LwhmmOZF6c2Zb+XnSn13ULyMGb;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392709156; x=1424245156;
	h=from:to:cc:subject:date:message-id;
	bh=VZ21ro0jEnIBa7pfRIPDJSAgN3kl8leEroi+02TVCOc=;
	b=Qb1m8fP3fqNa2t5kMtMnHLUip3RlEAGgDYeXGsABcIw6moeCg7MMB5MO
	0JefltUfGPUQHU1fFjcKHFcV/JwoGtL4Dw7m+TEt9Dlc70LhE9HRjY65A
	usUaTYFyYw4bl0Dh3OsECmSGKHTmf8gnMx7zen/ySua1Mgg9w7Lw9WNiu
	Ae827WOP59udCkoSSRIssGgI3hmntgRz9ICXRr/2agjVyslU1OSvHddwm
	usUsDpemRkieygi5aBOvvFKfvIj34;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,500,1389740400"; d="scan'208";a="159229271"
Received: from unknown (HELO abgdgate60u.abg.fsc.net) ([172.25.138.90])
	by dgate20u.abg.fsc.net with ESMTP; 18 Feb 2014 08:39:15 +0100
X-IronPort-AV: E=Sophos;i="4.97,500,1389740400"; d="scan'208";a="80306562"
Received: from mchverdon.mch.fsc.net (HELO verdon.mch.fsc.net)
	([10.172.102.158])
	by abgdgate60u.abg.fsc.net with ESMTP; 18 Feb 2014 08:39:15 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
To: xen-devel@lists.xen.org,
	Ian.Jackson@eu.citrix.com
Date: Tue, 18 Feb 2014 08:38:37 +0100
Message-Id: <1392709117-30019-1-git-send-email-juergen.gross@ts.fujitsu.com>
X-Mailer: git-send-email 1.7.10.4
Cc: Juergen Gross <juergen.gross@ts.fujitsu.com>
Subject: [Xen-devel] [PATCH] xl cpupool-list: add option to list domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is rather complicated to obtain the cpupool a domain lives in. Add an
option -d (or --domains) to list all domains running in a cpupool.

Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>
---
 docs/man/xl.pod.1         |    5 ++++-
 tools/libxl/xl_cmdimpl.c  |   47 ++++++++++++++++++++++++++++++++++++++-------
 tools/libxl/xl_cmdtable.c |    5 +++--
 3 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..547af6d 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -1019,10 +1019,13 @@ Use the given configuration file.
 
 =back
 
-=item B<cpupool-list> [I<-c|--cpus>] [I<cpu-pool>]
+=item B<cpupool-list> [I<-c|--cpus>] [I<-d|--domains>] [I<cpu-pool>]
 
 List CPU pools on the host.
 If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>.
+If I<-d> is specified, B<xl> prints a list of domains in I<cpu-pool> instead
+of the domain count.
+I<-c> and I<-d> are mutually exclusive.
 
 =item B<cpupool-destroy> I<cpu-pool>
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index aff6f90..c7b9fce 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -6754,23 +6754,32 @@ int main_cpupoollist(int argc, char **argv)
     int opt;
     static struct option opts[] = {
         {"cpus", 0, 0, 'c'},
+        {"domains", 0, 0, 'd'},
         COMMON_LONG_OPTS,
         {0, 0, 0, 0}
     };
-    int opt_cpus = 0;
+    int opt_cpus = 0, opt_domains = 0;
     const char *pool = NULL;
     libxl_cpupoolinfo *poolinfo;
-    int n_pools, p, c, n;
+    libxl_dominfo *dominfo = NULL;
+    int n_pools, n_domains, p, c, n;
     uint32_t poolid;
     char *name;
     int ret = 0;
 
-    SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 0) {
+    SWITCH_FOREACH_OPT(opt, "hcd", opts, "cpupool-list", 0) {
     case 'c':
         opt_cpus = 1;
         break;
+    case 'd':
+        opt_domains = 1;
+        break;
     }
 
+    if (opt_cpus && opt_domains) {
+        fprintf(stderr, "specifying both cpu- and domain-list not allowed\n");
+        return -ERROR_FAIL;
+    }
     if (optind < argc) {
         pool = argv[optind];
         if (libxl_name_to_cpupoolid(ctx, pool, &poolid)) {
@@ -6784,12 +6793,21 @@ int main_cpupoollist(int argc, char **argv)
         fprintf(stderr, "error getting cpupool info\n");
         return -ERROR_NOMEM;
     }
+    if (opt_domains) {
+        dominfo = libxl_list_domain(ctx, &n_domains);
+        if (!dominfo) {
+            fprintf(stderr, "error getting domain info\n");
+            ret = -ERROR_NOMEM;
+            goto out;
+        }
+    }
 
     printf("%-19s", "Name");
     if (opt_cpus)
         printf("CPU list\n");
     else
-        printf("CPUs   Sched     Active   Domain count\n");
+        printf("CPUs   Sched     Active   Domain %s\n",
+               opt_domains ? "list" : "count");
 
     for (p = 0; p < n_pools; p++) {
         if (!ret && (!pool || (poolinfo[p].poolid == poolid))) {
@@ -6808,15 +6826,30 @@ int main_cpupoollist(int argc, char **argv)
                         n++;
                     }
                 if (!opt_cpus) {
-                    printf("%3d %9s       y       %4d", n,
-                           libxl_scheduler_to_string(poolinfo[p].sched),
-                           poolinfo[p].n_dom);
+                    printf("%3d %9s       y     ", n,
+                           libxl_scheduler_to_string(poolinfo[p].sched));
+                    if (opt_domains) {
+                        c = 0;
+                        for (n = 0; n < n_domains; n++) {
+                            if (poolinfo[p].poolid == dominfo[n].cpupool) {
+                                name = libxl_domid_to_name(ctx, dominfo[n].domid);
+                                printf("%s%s", c ? ", " : "", name);
+                                free(name);
+                                c++;
+                            }
+                        }
+                    }
+                    else
+                        printf("  %4d", poolinfo[p].n_dom);
                 }
                 printf("\n");
             }
         }
     }
 
+    if (dominfo)
+        libxl_dominfo_list_free(dominfo, n_domains);
+out:
     libxl_cpupoolinfo_list_free(poolinfo, n_pools);
 
     return ret;
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..8a52d26 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -426,8 +426,9 @@ struct cmd_spec cmd_table[] = {
     { "cpupool-list",
       &main_cpupoollist, 0, 0,
       "List CPU pools on host",
-      "[-c|--cpus] [<CPU Pool>]",
-      "-c, --cpus                     Output list of CPUs used by a pool"
+      "[-c|--cpus] [-d|--domains] [<CPU Pool>]",
+      "-c, --cpus                     Output list of CPUs used by a pool\n"
+      "-d, --domains                  Output list of domains running in a pool"
     },
     { "cpupool-destroy",
       &main_cpupooldestroy, 0, 1,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 08:31:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 08:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFg5B-0004m6-5g; Tue, 18 Feb 2014 08:31:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFg52-0004ly-AS
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 08:31:10 +0000
Received: from [85.158.137.68:14488] by server-14.bemta-3.messagelabs.com id
	10/05-08196-74A13035; Tue, 18 Feb 2014 08:31:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392712262!2551238!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11996 invoked from network); 18 Feb 2014 08:31:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 08:31:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 08:31:01 +0000
Message-Id: <53032850020000780011D1FD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 08:30:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
In-Reply-To: <53023239020000780011CED9@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 16:00, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>> This I'm observing on a Westmere box (and I didn't notice it earlier
>> because that's one of those where due to a chipset erratum the
>> IOMMU gets turned off by default), so it's possible that this can't
>> be seen on more modern hardware. I'll hopefully find time today to
>> check this on the one newer (Sandy Bridge) box I have.
> 
> Just got done with trying this: By default, things work fine there.
> As soon as I use "iommu=no-snoop", things go bad (even worse
> than one the older box - the guest is consuming about 2.5 CPUs
> worth of processing power _without_ the patch here in use, so I
> don't even want to think about trying it there); I guessed that to
> be another of the potential sources of the problem since on that
> older box the respective hardware feature is unavailable.

That wasn't a fair comparison: The guest here had an SR-IOV NIC
assigned. Once that was removed, badness went back to about the
level I observe on the Westmere box. I'm therefore relatively certain
that this "extra" badness can be attributed to the excessive use of
wbinvd. Which of course calls into question what good assigning
a device does on a system without snoop control, if overall
performance gets worse rather than better.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 08:43:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 08:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFgGx-0004vw-4f; Tue, 18 Feb 2014 08:43:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFgGe-0004vo-Ce
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 08:43:21 +0000
Received: from [193.109.254.147:41266] by server-1.bemta-14.messagelabs.com id
	84/D6-15438-71D13035; Tue, 18 Feb 2014 08:43:03 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392712981!5068358!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8558 invoked from network); 18 Feb 2014 08:43:02 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 08:43:02 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 18 Feb 2014 00:43:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,500,1389772800"; d="scan'208";a="476912445"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga001.fm.intel.com with ESMTP; 18 Feb 2014 00:43:00 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.18.116.10) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 00:43:00 -0800
Received: from shsmsx104.ccr.corp.intel.com (10.239.4.70) by
	fmsmsx110.amr.corp.intel.com (10.18.116.10) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 00:43:00 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX104.ccr.corp.intel.com ([169.254.5.227]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 16:42:52 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>, Christoph Egger
	<chegger@amazon.de>
Thread-Topic: [PATCH] MCE: Fix race condition in mctelem_reserve
Thread-Index: AQHPF1/VgMI0KUqfrUe4gcMRCMo09Jq62xZQ
Date: Tue, 18 Feb 2014 08:42:52 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
In-Reply-To: <1390387834.32296.1.camel@hamster.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This logic (mctelem) is related to the dom0 mcelog logic. Have you tested whether mcelog still works correctly with your patch?

Thanks,
Jinsong

Frediano Ziglio wrote:
> These lines (in mctelem_reserve)
> 
> 
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> 
> are racy. After you read the newhead pointer, another flow (thread
> or recursive invocation) can change the whole list and yet put the
> head back to the same value (the classic ABA problem). So oldhead
> equals *freelp, but you are installing a new head that may point to
> any element, even one already in use.
> 
> This patch instead uses a bit array and atomic bit operations.
> 
> It uses an unsigned long rather than a bitmap type, as testing for
> all zeroes is easier.
> 
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> ---
>  xen/arch/x86/cpu/mcheck/mctelem.c |   52
>  ++++++++++++++++++++++--------------- 1 file changed, 31
> insertions(+), 21 deletions(-) 
> 
> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c
> b/xen/arch/x86/cpu/mcheck/mctelem.c index 895ce1a..e56b6fb 100644
> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> @@ -69,6 +69,11 @@ struct mctelem_ent {
>  #define	MC_URGENT_NENT		10
>  #define	MC_NONURGENT_NENT	20
> 
> +/* Check if we can fit enough bits in the free bit array */
> +#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
> +#error Too much elements
> +#endif
> +
>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
> 
>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> @@ -77,11 +82,9 @@ struct mctelem_ent {
>  static struct mc_telem_ctl {
>  	/* Linked lists that thread the array members together.
>  	 *
> -	 * The free lists are singly-linked via mcte_next, and we allocate
> -	 * from them by atomically unlinking an element from the head.
> -	 * Consumed entries are returned to the head of the free list.
> -	 * When an entry is reserved off the free list it is not linked
> -	 * on any list until it is committed or dismissed.
> +	 * The free lists is a bit array where bit 1 means free.
> +	 * This as element number is quite small and is easy to
> +	 * atomically allocate that way.
>  	 *
>  	 * The committed list grows at the head and we do not maintain a
>  	 * tail pointer; insertions are performed atomically.  The head
> @@ -101,7 +104,7 @@ static struct mc_telem_ctl {
>  	 * we can lock it for updates.  The head of the processing list
>  	 * always has the oldest telemetry, and we append (as above)
>  	 * at the tail of the processing list. */
> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> +	unsigned long mctc_free[MC_NCLASSES];
>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> @@ -214,7 +217,10 @@ static void mctelem_free(struct mctelem_ent *tep)
>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
> 
>  	tep->mcte_prev = NULL;
> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> +	tep->mcte_next = NULL;
> +
> +	/* set free in array */
> +	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
>  }
> 
>  /* Increment the reference count of an entry that is not linked on to
> @@ -284,7 +290,7 @@ void mctelem_init(int reqdatasz)
>  	}
> 
>  	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> -		struct mctelem_ent *tep, **tepp;
> +		struct mctelem_ent *tep;
> 
>  		tep = mctctl.mctc_elems + i;
>  		tep->mcte_flags = MCTE_F_STATE_FREE;
> @@ -292,16 +298,15 @@ void mctelem_init(int reqdatasz)
>  		tep->mcte_data = datarr + i * datasz;
> 
>  		if (i < MC_URGENT_NENT) {
> -			tepp = &mctctl.mctc_free[MC_URGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> +			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
> +			tep->mcte_flags = MCTE_F_HOME_URGENT;
>  		} else {
> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> +			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
> +			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
>  		}
> 
> -		tep->mcte_next = *tepp;
> +		tep->mcte_next = NULL;
>  		tep->mcte_prev = NULL;
> -		*tepp = tep;
>  	}
>  }
> 
> @@ -310,18 +315,21 @@ static int mctelem_drop_count;
> 
>  /* Reserve a telemetry entry, or return NULL if none available.
>   * If we return an entry then the caller must subsequently call
> exactly one of - * mctelem_unreserve or mctelem_commit for that entry.
> + * mctelem_dismiss or mctelem_commit for that entry.
>   */
>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>  {
> -	struct mctelem_ent **freelp;
> -	struct mctelem_ent *oldhead, *newhead;
> +	unsigned long *freelp;
> +	unsigned long oldfree;
> +	unsigned bit;
>  	mctelem_class_t target = (which == MC_URGENT) ?
>  	    MC_URGENT : MC_NONURGENT;
> 
>  	freelp = &mctctl.mctc_free[target];
>  	for (;;) {
> -		if ((oldhead = *freelp) == NULL) {
> +		oldfree = *freelp;
> +
> +		if (oldfree == 0) {
>  			if (which == MC_URGENT && target == MC_URGENT) {
>  				/* raid the non-urgent freelist */
>  				target = MC_NONURGENT;
> @@ -333,9 +341,11 @@ mctelem_cookie_t mctelem_reserve(mctelem_class_t
>  			which) }
>  		}
> 
> -		newhead = oldhead->mcte_next;
> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> -			struct mctelem_ent *tep = oldhead;
> +		/* try to allocate, atomically clear free bit */
> +		bit = find_first_set_bit(oldfree);
> +		if (test_and_clear_bit(bit, freelp)) {
> +			/* return element we got */
> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
> 
>  			mctelem_hold(tep);
>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 08:45:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 08:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFgJ9-00050Z-Mj; Tue, 18 Feb 2014 08:45:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFgJ7-00050S-Sd
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 08:45:38 +0000
Received: from [85.158.139.211:64566] by server-10.bemta-5.messagelabs.com id
	A8/34-08578-1BD13035; Tue, 18 Feb 2014 08:45:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392713136!4585178!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30871 invoked from network); 18 Feb 2014 08:45:36 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 08:45:36 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 08:45:35 +0000
Message-Id: <53032BBB020000780011D214@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 08:45:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Xiantao Zhang" <xiantao.zhang@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, DonaldD Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> I tried to reproduce it, but unfortunately I cannot reproduce it on my box
> (Sandy Bridge EP) with the latest Xen (including my patch). I guess my
> configuration or steps may be wrong; here is my setup:
> 
> 1. add iommu=1,no-snoop to the Xen command line:
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables enabled.
> 
> 2. boot a rhel6u4 guest.
> 
> 3. after the guest boots up, run startx inside the guest.
> 
> 4. after a few seconds, the X window shows up and I don't see any error. Also
> the CPU utilization is about 1.7%.
> 
> Anything wrong?

Nothing I can see. The main difference might be that I'm using a
SLES11 SP3 guest instead of a RHEL one.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 09:01:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 09:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFgY5-0005Hj-79; Tue, 18 Feb 2014 09:01:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1WFgY1-0005He-Ll
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 09:01:03 +0000
Received: from [85.158.143.35:11971] by server-2.bemta-4.messagelabs.com id
	70/D3-10891-D4123035; Tue, 18 Feb 2014 09:01:01 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392714051!6426415!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNDk2NjUgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25942 invoked from network); 18 Feb 2014 09:00:51 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 09:00:51 -0000
Received: by mail-ee0-f49.google.com with SMTP id d17so7553819eek.8
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 01:00:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:in-reply-to:references
	:content-type:mime-version;
	bh=79+OLK3Bri6dViwuLM3qKwO4+bUMJZ3Ho7rD+fl33PI=;
	b=no4e3RkaYItfFHyoRVd9daEBEe+l381EUk3OsMz71SfXpziua9VaFsCzA+36SBDZAu
	AWPEFJU6g1Lz40Smm61Z34+maOfj1rA1BLE+GkVcZup8R7M9Jv6+oFa7nq+kY6GloBzO
	3peLQNm5WotPARdAh+yM9F8/f2++kd+/MqKsGvkZojLimXbRPoYAQSJYAbBBwPjVek0z
	bFypqWnvA+t0OdEQtfxOe1dAnu7NdPV/DEzS/a52PF6ClKNLCjGnATt0PblYfYf6+ki9
	HPlDHJkMJFvKYZ4PlW2KU9SETlHpiENI8GCkc83ldHTbWcE9o7MjRXcZwgskmMegg5u+
	JlVg==
X-Received: by 10.15.81.196 with SMTP id x44mr31985366eey.31.1392714051046;
	Tue, 18 Feb 2014 01:00:51 -0800 (PST)
Received: from [192.168.0.40] (ip-183-225.sn1.eutelia.it. [62.94.183.225])
	by mx.google.com with ESMTPSA id u6sm67605105eep.11.2014.02.18.01.00.49
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 01:00:49 -0800 (PST)
Message-ID: <1392714041.32038.455.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: M A Young <m.a.young@durham.ac.uk>
Date: Tue, 18 Feb 2014 10:00:41 +0100
In-Reply-To: <alpine.DEB.2.00.1402172025390.29650@procyon.dur.ac.uk>
References: <1392658580.32038.446.camel@Solace>
	<alpine.DEB.2.00.1402172025390.29650@procyon.dur.ac.uk>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20)
Mime-Version: 1.0
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 RC4 out... TestDay tomorrow...
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3110739188307333677=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============3110739188307333677==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-CfvVH4+SxzUCpXDvs6+E"


--=-CfvVH4+SxzUCpXDvs6+E
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-02-17 at 20:26 +0000, M A Young wrote:
> On Mon, 17 Feb 2014, Dario Faggioli wrote:
> 
> > Hey Michael,
> >
> > As usual, I'm asking whether you'd be up for preparing a temporary
> > build, to facilitate using Fedora as a platform for Xen 4.4 RC4 test
> > day.
> 
> There is a temporary build at
> http://koji.fedoraproject.org/koji/taskinfo?taskID=6540303
> 
As awesome as usual... Thanks! :-)

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-CfvVH4+SxzUCpXDvs6+E
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDITkACgkQk4XaBE3IOsQhtwCeKmW4nD2TX5VsyGvzjkSumnJz
yfIAmwUs72UsuL0fj9g7w4TtMK0Yq7F1
=74YJ
-----END PGP SIGNATURE-----

--=-CfvVH4+SxzUCpXDvs6+E--



--===============3110739188307333677==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3110739188307333677==--



	boundary="=-CfvVH4+SxzUCpXDvs6+E"


--=-CfvVH4+SxzUCpXDvs6+E
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-02-17 at 20:26 +0000, M A Young wrote:
> On Mon, 17 Feb 2014, Dario Faggioli wrote:
>
> > Hey Michael,
> >
> > As usual, I'm asking whether you'd be up for preparing a temporary
> > build, to facilitate using Fedora as a platform for Xen 4.4 RC4 test
> > day.
>
> There is a temporary build at
> http://koji.fedoraproject.org/koji/taskinfo?taskID=6540303
>
As awesome as usual... Thanks! :-)

Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-CfvVH4+SxzUCpXDvs6+E
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDITkACgkQk4XaBE3IOsQhtwCeKmW4nD2TX5VsyGvzjkSumnJz
yfIAmwUs72UsuL0fj9g7w4TtMK0Yq7F1
=74YJ
-----END PGP SIGNATURE-----

--=-CfvVH4+SxzUCpXDvs6+E--



--===============3110739188307333677==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3110739188307333677==--



From xen-devel-bounces@lists.xen.org Tue Feb 18 09:15:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 09:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFglg-0005Sy-Tp; Tue, 18 Feb 2014 09:15:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFglf-0005Sq-PJ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 09:15:07 +0000
Received: from [85.158.143.35:20831] by server-1.bemta-4.messagelabs.com id
	73/B0-31661-B9423035; Tue, 18 Feb 2014 09:15:07 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392714895!6440194!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTQ5MzMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10029 invoked from network); 18 Feb 2014 09:14:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 09:14:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,500,1389744000"; 
	d="asc'?scan'208";a="103404389"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 09:14:54 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 04:14:54 -0500
Message-ID: <1392714890.32038.463.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Meng Xu <xumengpanda@gmail.com>
Date: Tue, 18 Feb 2014 10:14:50 +0100
In-Reply-To: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3379800297325698129=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3379800297325698129==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-cFFrSb3FSWgb3Ns4iQJG"

--=-cFFrSb3FSWgb3Ns4iQJG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
> Hi,
>
Hi,

> I'm a PhD student, working on real-time systems.
>
Cool. There really seems to be a lot of interest in Real-Time
virtualization these days. :-D

> [My goal]
> I want to measure the cache hit/miss rate of each guest domain in Xen.
> I may also want to measure some other events, say memory access rate,
> for each program in each guest domain in Xen.
>
Ok. Can I, out of curiosity, ask you to detail a bit more what your
*final* goal is? (I mean, you're interested in these measurements for a
reason, not just for the sake of having them, right?)

> [The problem I'm encountering]
> I tried Intel's Performance Counter Monitor (PCM) in Linux on a bare
> machine to get the machine's cache access rate for each level of
> cache, and it works very well.
>
>
> However, when I try to use PCM in Xen and run it in dom0, it
> cannot work. I think PCM needs to run in ring 0 to read/write the
> MSRs, and because dom0 runs in ring 1, PCM running in dom0 cannot
> work.
>
Indeed.

> So my question is:
> How can I run a program (say PCM) in ring 0 on Xen?
>
Running "a program" in there is going to be terribly difficult. What I
think you're better off doing is trying to access the counters from
dom0, and/or (para)virtualizing them.

In fact, there is already work going on in this area, although I don't
have all the details of its current status.

> What's in my mind is:
> Writing a hypercall to call the PCM in Xen's kernel space, so that the
> PCM will run in ring 0?
> But the problem I'm concerned about is that some of PCM's calls,
> say printf(), may not be able to run in kernel space?
>
Well, Xen can print, e.g., on a serial console, but again, that's not
what you want. I'm adding links to a few conversations about virtual
PMU. These are just the very first Google results, so there may well be
more:

http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
https://lwn.net/Articles/566159/

Boris (whom I'm Cc-ing) gave a presentation about this at the latest
Xen Developers Summit:
http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-cFFrSb3FSWgb3Ns4iQJG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDJIsACgkQk4XaBE3IOsQhZACgitLZKlyuMjkHe4uGGo+FkEaT
S24An3Mxb1aGQf2MT4LTDOfeYVrTKd4Z
=v/ld
-----END PGP SIGNATURE-----

--=-cFFrSb3FSWgb3Ns4iQJG--


--===============3379800297325698129==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3379800297325698129==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 09:18:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 09:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFgpJ-0005cj-Hx; Tue, 18 Feb 2014 09:18:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raistlin.df@gmail.com>) id 1WFgpI-0005cL-DU
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 09:18:52 +0000
Received: from [85.158.139.211:49989] by server-8.bemta-5.messagelabs.com id
	16/93-05298-B7523035; Tue, 18 Feb 2014 09:18:51 +0000
X-Env-Sender: raistlin.df@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392715119!4593741!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=2.2 required=7.0 tests=ML_RADAR_SPEW_LINKS_8,
	SUSPICIOUS_RECIPS,spamassassin: ,async_handler: 
	YXN5bmNfZGVsYXk6IDcwNjYzNjkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7736 invoked from network); 18 Feb 2014 09:18:39 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 09:18:39 -0000
Received: by mail-ee0-f47.google.com with SMTP id d49so7522155eek.6
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 01:18:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:subject:from:to:cc:date:content-type:mime-version; 
	bh=/Q/8w3+qC2JFxhKbdEeSUiQt5BZ3FJT1pSkYuKum4M8=;
	b=aBevmY8sl93bogr62PtsJIGetzidl8O1sXOQbFtvKxGAv6XzAc6dQwKwfQcpXKLB8G
	vd3XtC/1uchTy2clGIMaREZ97cCcN7RQdYgw+Y7GSNDeHZY4OtfLkF46ztDrj64nwsT7
	QLVJ9i5zmnp4MDXWlw5JPIs4zaCA+3xoV8zbJMDaZR7hoR0eG/ahZciV/emZ+voq3G34
	YSBfbvwLKreGYgKe4IObRXINA3X+RXw3yzfQuDMRUUVb0oPQ+DjVei05LQvFVHZMgwmE
	wHABAIid21oScbo816IMHOpfXGAXsApJtpx1JXe/7jAk4l/zvvu2/IpsSVojhbxMo8Df
	JTqw==
X-Received: by 10.14.175.193 with SMTP id z41mr22992eel.108.1392715119256;
	Tue, 18 Feb 2014 01:18:39 -0800 (PST)
Received: from [192.168.0.40] (ip-183-225.sn1.eutelia.it. [62.94.183.225])
	by mx.google.com with ESMTPSA id 46sm67642616ees.4.2014.02.18.01.18.32
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 01:18:38 -0800 (PST)
Message-ID: <1392715101.32038.466.camel@Solace>
From: Dario Faggioli <raistlin@linux.it>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Tue, 18 Feb 2014 10:18:21 +0100
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20)
Mime-Version: 1.0
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen <xen@lists.fedoraproject.org>, cl-mirage <cl-mirage@lists.cam.ac.uk>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: [Xen-devel] Today is Xen Project Test Day for 4.4 RC4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1725909490069777014=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============1725909490069777014==
Content-Type: multipart/signed; micalg="pgp-sha1"; protocol="application/pgp-signature";
	boundary="=-ozW96/qKXM/3ULIOimJ8"


--=-ozW96/qKXM/3ULIOimJ8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

This is a reminder that today is the Xen Project Test Day for Xen 4.4
RC4.

General information about Test Days can be found here:
http://wiki.xenproject.org/wiki/Xen_Test_Days

and specific instructions for this Test Day are located here:
http://wiki.xenproject.org/wiki/Xen_4.4_RC4_test_instructions

Developers: please consider monitoring the Freenode IRC channel
#xentest today to make sure that people are able to build and test the
code.

Hope to see you today on #xentest!

Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-ozW96/qKXM/3ULIOimJ8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDJV0ACgkQk4XaBE3IOsTpTgCePiO87TXF1CTTBmHT4AR7tner
/jwAn2beaBxh+skppx2cFj/ANIxQJhQo
=aLdi
-----END PGP SIGNATURE-----

--=-ozW96/qKXM/3ULIOimJ8--



--===============1725909490069777014==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1725909490069777014==--



From xen-devel-bounces@lists.xen.org Tue Feb 18 09:42:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 09:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhBF-0006BH-Sn; Tue, 18 Feb 2014 09:41:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFhBE-0006BA-Ez
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 09:41:32 +0000
Received: from [85.158.139.211:12863] by server-13.bemta-5.messagelabs.com id
	F5/15-18801-BCA23035; Tue, 18 Feb 2014 09:41:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392716490!4562206!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2726 invoked from network); 18 Feb 2014 09:41:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 09:41:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 09:41:30 +0000
Message-Id: <530338D5020000780011D258@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 09:41:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 05:25, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> For Intel, this doesn't disturb Intel's vmce logic, so it's OK with me.
> 
> For AMD, c000_040x is bank 4 (MC4_MISCj), while vmce currently only 
> supports banks 0/1. Even if AMD adds MC0/1_MISCj in the future, it won't 
> need emulation (say, reads return 0 and writes are ignored). So how about 
> simply filtering out the AMD MCx_MISCj registers in mce_vendor_bank_msr()?

mce_vendor_bank_msr() already has

    case X86_VENDOR_AMD:
        switch (msr) {
        case MSR_F10_MC4_MISC1:
        case MSR_F10_MC4_MISC2:
        case MSR_F10_MC4_MISC3:
            return 1;
        }

Jan

> Aravind Gopalakrishnan wrote:
>> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
>> registers. But due to this statement here:
>> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> we are wrongly masking off top two bits which meant the register
>> accesses never made it to vmce_amd_* functions.
>> 
>> Corrected this problem by modifying the mask in this patch to allow
>> AMD thresholding registers to fall through to the 'default' case, which
>> in turn allows the vmce_amd_* functions to handle accesses to them.
>> 
>> While at it, remove some clutter in the vmce_amd* functions. Retained
>> current policy of returning zero for reads and ignoring writes.
>> 
>> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
>> ---
>>  xen/arch/x86/cpu/mcheck/amd_f10.c |   41
>>  ++++++------------------------------- xen/arch/x86/cpu/mcheck/vmce.c
>>  |   14 +++++++++++-- 2 files changed, 18 insertions(+), 37
>> deletions(-) 
>> 
>> diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c
>> b/xen/arch/x86/cpu/mcheck/amd_f10.c 
>> index 61319dc..03797ab 100644
>> --- a/xen/arch/x86/cpu/mcheck/amd_f10.c
>> +++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
>> @@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct
>>  cpuinfo_x86 *c) /* amd specific MCA MSR */
>>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>  {
>> -	switch (msr) {
>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>> -		v->arch.vmce.bank[1].mci_misc = val;
>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> -		break;
>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>> -		/* ignore write: we do not emulate link and l3 cache errors
>> -		 * to the guest.
>> -		 */
>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> -		break;
>> -	default:
>> -		return 0;
>> -	}
>> -
>> -	return 1;
>> +    /* Do nothing as we don't emulate this MC bank currently */
>> +    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> +    return 1;
>>  }
>> 
>>  int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>  {
>> -	switch (msr) {
>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>> -		*val = v->arch.vmce.bank[1].mci_misc;
>> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
>> -		break;
>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>> -		/* we do not emulate link and l3 cache
>> -		 * errors to the guest.
>> -		 */
>> -		*val = 0;
>> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
>> -		break;
>> -	default:
>> -		return 0;
>> -	}
>> -
>> -	return 1;
>> +    /* Assign '0' as we don't emulate this MC bank currently */
>> +    *val = 0;
>> +    return 1;
>>  }
>> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
>> b/xen/arch/x86/cpu/mcheck/vmce.c 
>> index f6c35db..84843fc 100644
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v,
>> uint32_t msr, uint64_t *val) 
>> 
>>      *val = 0;
>> 
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    /* Allow only first 3 MC banks into switch() */
>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>      {
>>      case MSR_IA32_MC0_CTL:
>>          /* stick all 1's to MCi_CTL */
>> @@ -148,6 +149,10 @@ static int bank_mce_rdmsr(const struct vcpu *v,
>>              uint32_t msr, uint64_t *val) ret = vmce_intel_rdmsr(v,
>>              msr, val); break;
>>          case X86_VENDOR_AMD:
>> +            /*
>> +             * Extended block of AMD thresholding registers fall
>> into default. +             * Handle reads here.
>> +             */
>>              ret = vmce_amd_rdmsr(v, msr, val);
>>              break;
>>          default:
>> @@ -210,7 +215,8 @@ static int bank_mce_wrmsr(struct vcpu *v,
>>      uint32_t msr, uint64_t val) int ret = 1;
>>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
>> 
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    /* Allow only first 3 MC banks into switch() */
>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>      {
>>      case MSR_IA32_MC0_CTL:
>>          /*
>> @@ -246,6 +252,10 @@ static int bank_mce_wrmsr(struct vcpu *v,
>>              uint32_t msr, uint64_t val) ret = vmce_intel_wrmsr(v,
>>              msr, val); break;
>>          case X86_VENDOR_AMD:
>> +            /*
>> +             * Extended block of AMD thresholding registers fall
>> into default. +             * Handle writes here.
>> +             */
>>              ret = vmce_amd_wrmsr(v, msr, val);
>>              break;
>>          default:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 09:42:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 09:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhBF-0006BH-Sn; Tue, 18 Feb 2014 09:41:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFhBE-0006BA-Ez
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 09:41:32 +0000
Received: from [85.158.139.211:12863] by server-13.bemta-5.messagelabs.com id
	F5/15-18801-BCA23035; Tue, 18 Feb 2014 09:41:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392716490!4562206!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2726 invoked from network); 18 Feb 2014 09:41:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 09:41:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 09:41:30 +0000
Message-Id: <530338D5020000780011D258@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 09:41:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 05:25, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> For Intel, it doesn't disturb Intel's vmce logic, so it's OK for me.
> 
> For AMD, c000_040x is bank4 (MC4_MISCj), while vmce currently only supports 
> bank0/1. Even if in the future AMD adds MC0/1_MISCj, it wouldn't 
> need emulation (say, reads return 0 and writes are ignored). So how about simply 
> filtering out AMD MCx_MISCj in mce_vendor_bank_msr()?

mce_vendor_bank_msr() already has

    case X86_VENDOR_AMD:
        switch (msr) {
        case MSR_F10_MC4_MISC1:
        case MSR_F10_MC4_MISC2:
        case MSR_F10_MC4_MISC3:
            return 1;
        }

Jan

> Aravind Gopalakrishnan wrote:
>> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
>> registers. But due to this statement:
>> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> we were wrongly masking off the top two bits, which meant the register
>> accesses never made it to the vmce_amd_* functions.
>> 
>> Correct this by modifying the mask so that the AMD thresholding
>> registers fall through to the 'default' case, which in turn
>> allows the vmce_amd_* functions to handle accesses to those registers.
>> 
>> While at it, remove some clutter in the vmce_amd_* functions. Retain the
>> current policy of returning zero for reads and ignoring writes.
>> 
>> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
>> ---
>>  xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
>>  xen/arch/x86/cpu/mcheck/vmce.c    |   14 +++++++++++--
>>  2 files changed, 18 insertions(+), 37 deletions(-)
>> 
>> diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
>> index 61319dc..03797ab 100644
>> --- a/xen/arch/x86/cpu/mcheck/amd_f10.c
>> +++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
>> @@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
>>  /* amd specific MCA MSR */
>>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>  {
>> -	switch (msr) {
>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>> -		v->arch.vmce.bank[1].mci_misc = val;
>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> -		break;
>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>> -		/* ignore write: we do not emulate link and l3 cache errors
>> -		 * to the guest.
>> -		 */
>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> -		break;
>> -	default:
>> -		return 0;
>> -	}
>> -
>> -	return 1;
>> +    /* Do nothing as we don't emulate this MC bank currently */
>> +    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>> +    return 1;
>>  }
>> 
>>  int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>  {
>> -	switch (msr) {
>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>> -		*val = v->arch.vmce.bank[1].mci_misc;
>> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
>> -		break;
>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>> -		/* we do not emulate link and l3 cache
>> -		 * errors to the guest.
>> -		 */
>> -		*val = 0;
>> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
>> -		break;
>> -	default:
>> -		return 0;
>> -	}
>> -
>> -	return 1;
>> +    /* Assign '0' as we don't emulate this MC bank currently */
>> +    *val = 0;
>> +    return 1;
>>  }
>> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
>> index f6c35db..84843fc 100644
>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>> 
>>      *val = 0;
>> 
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    /* Allow only first 3 MC banks into switch() */
>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>      {
>>      case MSR_IA32_MC0_CTL:
>>          /* stick all 1's to MCi_CTL */
>> @@ -148,6 +149,10 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>              ret = vmce_intel_rdmsr(v, msr, val);
>>              break;
>>          case X86_VENDOR_AMD:
>> +            /*
>> +             * Extended block of AMD thresholding registers fall into default.
>> +             * Handle reads here.
>> +             */
>>              ret = vmce_amd_rdmsr(v, msr, val);
>>              break;
>>          default:
>> @@ -210,7 +215,8 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>      int ret = 1;
>>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
>> 
>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>> +    /* Allow only first 3 MC banks into switch() */
>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>      {
>>      case MSR_IA32_MC0_CTL:
>>          /*
>> @@ -246,6 +252,10 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>              ret = vmce_intel_wrmsr(v, msr, val);
>>              break;
>>          case X86_VENDOR_AMD:
>> +            /*
>> +             * Extended block of AMD thresholding registers fall into default.
>> +             * Handle writes here.
>> +             */
>>              ret = vmce_amd_wrmsr(v, msr, val);
>>              break;
>>          default:



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 09:57:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 09:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhQp-0006N8-JP; Tue, 18 Feb 2014 09:57:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFhQo-0006N3-86
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 09:57:38 +0000
Received: from [85.158.143.35:21831] by server-3.bemta-4.messagelabs.com id
	D3/35-11539-19E23035; Tue, 18 Feb 2014 09:57:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392717455!6468062!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25449 invoked from network); 18 Feb 2014 09:57:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 09:57:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101684677"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 09:57:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 04:57:35 -0500
Message-ID: <1392717453.11080.2.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Lai Jiangshan <laijs@cn.fujitsu.com>
Date: Tue, 18 Feb 2014 09:57:33 +0000
In-Reply-To: <52FDACDC.2010607@cn.fujitsu.com>
References: <1392215257-26993-1-git-send-email-ian.campbell@citrix.com>
	<1392215531.13563.79.camel@kazak.uk.xensource.com>
	<21243.35619.819765.162321@mariner.uk.xensource.com>
	<52FDACDC.2010607@cn.fujitsu.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, george.dunlap@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xl: suppress suspend/resume functions on
 platforms which do not support it.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 13:42 +0800, Lai Jiangshan wrote:
> On 02/12/2014 10:54 PM, Ian Jackson wrote:
> > Ian Campbell writes ("Re: [PATCH] xl: suppress suspend/resume functions on platforms which do not support it."):
> >> On Wed, 2014-02-12 at 14:27 +0000, Ian Campbell wrote:
> >>> ARM does not (currently) support migration, so stop offering tasty looking
> >>> treats like "xl migrate".
> > 
> >>> Other than the additions of the #define/#ifdef there is a tiny bit of code
> >>> motion ("dump-core" in the command list and core_dump_domain in the
> >>> implementations) which serves to put ifdeffable bits next to each other.
> > 
> > I'm not a huge fan of #ifdef but this is tolerable, I think.
> > 
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Also
> 
> Acked-by: Lai Jiangshan <laijs@cn.fujitsu.com>

Thanks. This patch was already committed.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:06:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:06:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhZH-0006b8-L2; Tue, 18 Feb 2014 10:06:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFhZG-0006b3-9l
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:06:22 +0000
Received: from [193.109.254.147:5599] by server-5.bemta-14.messagelabs.com id
	76/5A-16688-D9033035; Tue, 18 Feb 2014 10:06:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392717979!5048123!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17713 invoked from network); 18 Feb 2014 10:06:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:06:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103414657"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 10:06:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:06:18 -0500
Message-ID: <1392717977.11080.4.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Date: Tue, 18 Feb 2014 10:06:17 +0000
In-Reply-To: <1392401045.32038.346.camel@Solace>
References: <1392214585-26602-1-git-send-email-ian.campbell@citrix.com>
	<21243.35358.349750.484725@mariner.uk.xensource.com>
	<1392216804.13563.83.camel@kazak.uk.xensource.com>
	<1392401045.32038.346.camel@Solace>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH OSSTEST] Allow per-host TFTP setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 19:04 +0100, Dario Faggioli wrote:
> On mer, 2014-02-12 at 14:53 +0000, Ian Campbell wrote:
> > On Wed, 2014-02-12 at 14:50 +0000, Ian Jackson wrote:
> > > Ian Campbell writes ("[PATCH OSSTEST] Allow per-host TFTP setup"):
> 
> > > > Make it possible to specify various bits of TFTP path via
> > > > ~/.xen-osstest/config
> > > 
> > > As I said in person: this would be much better if instead the host
> > > property referred to a named TFTP scope/server.  Otherwise you have to
> > > set a whole bunch of host properties identically.
> > > 
> > 
> > Ack. I'll put this on my todo list.
> > 
> Also, the README file has a far from comprehensive list of host
> properties. I'm unsure whether that file is the proper place, but it
> would be nice to have one actual place where a list and a brief
> description of all the supported host properties could live.

Yes, I'll do this.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:06:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhZR-0006bv-24; Tue, 18 Feb 2014 10:06:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WFUiK-0004Yw-AF
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 20:22:52 +0000
From xen-devel-bounces@lists.xen.org Tue Feb 18 10:06:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhZR-0006bv-24; Tue, 18 Feb 2014 10:06:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WFUiK-0004Yw-AF
	for xen-devel@lists.xenproject.org; Mon, 17 Feb 2014 20:22:52 +0000
Received: from [193.109.254.147:50321] by server-6.bemta-14.messagelabs.com id
	53/2B-03396-B9F62035; Mon, 17 Feb 2014 20:22:51 +0000
X-Env-Sender: dcbw@redhat.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392668570!4956604!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12694 invoked from network); 17 Feb 2014 20:22:50 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-6.tower-27.messagelabs.com with SMTP;
	17 Feb 2014 20:22:50 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1HKMh0B023232
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 17 Feb 2014 15:22:43 -0500
Received: from [10.3.235.49] (vpn-235-49.phx2.redhat.com [10.3.235.49])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s1HKMfRW012749; Mon, 17 Feb 2014 15:22:41 -0500
Message-ID: <1392668638.21106.5.camel@dcbw.local>
From: Dan Williams <dcbw@redhat.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Mon, 17 Feb 2014 14:23:58 -0600
In-Reply-To: <1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
X-Mailman-Approved-At: Tue, 18 Feb 2014 10:06:32 +0000
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	netdev@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, James Morris <jmorris@namei.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> 
> Some interfaces do not need to have any IPv4 or IPv6
> addresses, so enable an option to specify this. One
> example where this is observed is virtualization
> backend interfaces, which just use the net_device
> constructs to help with their respective frontends.
> 
> This should optimize boot time and reduce complexity in
> virtualization environments for each backend interface,
> while also avoiding triggering SLAAC and DAD, which are
> simply pointless for these types of interfaces.

Would it not be better/cleaner to use disable_ipv6, add a matching
disable_ipv4 sysctl, and then use those for this interface?  IFF_SKIP_IP
seems to duplicate at least part of what disable_ipv6 is
already doing.
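For reference, the sysctl-based alternative can be sketched as follows. This is only an illustration: disable_ipv6 is a real per-interface knob, but disable_ipv4 is hypothetical (no such sysctl existed in mainline at the time of this thread), and "vif1.0" is just an example backend interface name.

```shell
# Sketch of the sysctl-based alternative (illustration only).
# disable_ipv6 is a real per-interface knob; disable_ipv4 is
# hypothetical here, and "vif1.0" is an example interface name.
iface="vif1.0"
for knob in "net.ipv6.conf.$iface.disable_ipv6" \
            "net.ipv4.conf.$iface.disable_ipv4"; do
	# A real invocation (requires root) would be: sysctl -w "$knob=1"
	echo "$knob=1"
done
```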

Dan

> Cc: "David S. Miller" <davem@davemloft.net>
> Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
> Cc: James Morris <jmorris@namei.org>
> Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
> Cc: Patrick McHardy <kaber@trash.net>
> Cc: netdev@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
> ---
>  include/uapi/linux/if.h | 1 +
>  net/ipv4/devinet.c      | 3 +++
>  net/ipv6/addrconf.c     | 6 ++++++
>  3 files changed, 10 insertions(+)
> 
> diff --git a/include/uapi/linux/if.h b/include/uapi/linux/if.h
> index 8d10382..566d856 100644
> --- a/include/uapi/linux/if.h
> +++ b/include/uapi/linux/if.h
> @@ -85,6 +85,7 @@
>  					 * change when it's running */
>  #define IFF_MACVLAN 0x200000		/* Macvlan device */
>  #define IFF_BRIDGE_NON_ROOT 0x400000    /* Don't consider for root bridge */
> +#define IFF_SKIP_IP	0x800000	/* Skip IPv4, IPv6 */
>  
> 
>  #define IF_GET_IFACE	0x0001		/* for querying only */
> diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
> index a1b5bcb..8e9ef07 100644
> --- a/net/ipv4/devinet.c
> +++ b/net/ipv4/devinet.c
> @@ -1342,6 +1342,9 @@ static int inetdev_event(struct notifier_block *this, unsigned long event,
>  
>  	ASSERT_RTNL();
>  
> +	if (dev->priv_flags & IFF_SKIP_IP)
> +		goto out;
> +
>  	if (!in_dev) {
>  		if (event == NETDEV_REGISTER) {
>  			in_dev = inetdev_init(dev);
> diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
> index 4b6b720..57f58e3 100644
> --- a/net/ipv6/addrconf.c
> +++ b/net/ipv6/addrconf.c
> @@ -314,6 +314,9 @@ static struct inet6_dev *ipv6_add_dev(struct net_device *dev)
>  
>  	ASSERT_RTNL();
>  
> +	if (dev->priv_flags & IFF_SKIP_IP)
> +		return NULL;
> +
>  	if (dev->mtu < IPV6_MIN_MTU)
>  		return NULL;
>  
> @@ -2749,6 +2752,9 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
>  	int run_pending = 0;
>  	int err;
>  
> +	if (dev->priv_flags & IFF_SKIP_IP)
> +		return NOTIFY_OK;
> +
>  	switch (event) {
>  	case NETDEV_REGISTER:
>  		if (!idev && dev->mtu >= IPV6_MIN_MTU) {



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:11:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFheP-0006sG-8S; Tue, 18 Feb 2014 10:11:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFheN-0006s7-5r
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 10:11:39 +0000
Received: from [85.158.137.68:65291] by server-8.bemta-3.messagelabs.com id
	4A/3D-16039-AD133035; Tue, 18 Feb 2014 10:11:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392718296!2580460!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14805 invoked from network); 18 Feb 2014 10:11:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:11:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101687621"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:11:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:11:35 -0500
Message-ID: <1392718294.11080.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 10:11:34 +0000
In-Reply-To: <21246.25423.419772.949039@mariner.uk.xensource.com>
References: <1391005955.21756.7.camel@Abyss>
	<1392401513.32038.348.camel@Solace>
	<21246.25423.419772.949039@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, raistlin@linux.it
Subject: Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f'
 option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 18:41 +0000, Ian Jackson wrote:
> Dario Faggioli writes ("Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f' option"):
> > On mer, 2014-01-29 at 14:32 +0000, Dario Faggioli wrote:
> > > standalone-reset's usage says:
> > >     
> > >   usage: ./standalone-reset [<options>] [<branch> [<xenbranch> [<buildflight>]]]
> > >    branch and xenbranch default, separately, to xen-unstable
> > >   options:
> > >    -f<flight>     generate flight "flight", default is "standalone"
> > >     
> > > but then there is no place where '-f' is processed, and hence
> > > no real way to pass a specific flight name to make-flight.
> > >     
> > > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ...
> > I know it's a busy period for OSSTest, but this should be pretty
> > straightforward, and it only affects standalone mode.
> 
> Right.  I don't use standalone mode much, so sorry about that.  I
> looked for a comment from Ian C but didn't find one.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> 
> > Anyway, I can put it on hold and resubmit in a while, if that's
> > considered better.
> 
> No, pinging now is good.
> 
> This patch leads me to an observation: I looked at the code in
> standalone-reset and it appears to me that there is not currently
> anything which sets "$flight".

This patch from Dario does it I think.

> So the "DELETE" statements used if there's an existing db won't have
> any effect.  This doesn't cause any strange effects because
> Osstest/JobDB/Standalone.pm deletes them too.
> 
> I think it would be best to delete that part of standalone-reset.  Do
> you agree ?

FWIW I've been carrying the following with the intention of using it
from my standalone helper script. Since, as you've just pointed out,
Standalone.pm also does it, it seems like I could drop the
forget-flight bit.

8<-----------------

>From 98f57473e4787620c6cad443ee15cc38b330065d Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 12 Feb 2014 11:19:45 +0000
Subject: [PATCH] standalone: refactor out some useful bits of standalone-reset

I sometimes want just these bits.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 standalone-forget-flight | 39 +++++++++++++++++++++++++++++++++
 standalone-init-db       | 56 ++++++++++++++++++++++++++++++++++++++++++++++++
 standalone-reset         | 30 ++------------------------
 3 files changed, 97 insertions(+), 28 deletions(-)
 create mode 100755 standalone-forget-flight
 create mode 100755 standalone-init-db

diff --git a/standalone-forget-flight b/standalone-forget-flight
new file mode 100755
index 0000000..6dd6a84
--- /dev/null
+++ b/standalone-forget-flight
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -e
+
+usage(){
+	cat <<END
+usage: ./standalone-forget-flight [database] [flight]
+END
+}
+
+if [ $# -ne 2 ] ; then
+	usage >&2
+	exit 1
+fi
+
+db="$1"
+flight="$2"
+
+sqlite3 "$db" <<END
+	DELETE FROM runvars WHERE flight='$flight';
+	DELETE FROM jobs    WHERE flight='$flight';
+	DELETE FROM flights WHERE flight='$flight';
+END
diff --git a/standalone-init-db b/standalone-init-db
new file mode 100755
index 0000000..34a47c4
--- /dev/null
+++ b/standalone-init-db
@@ -0,0 +1,56 @@
+#!/bin/bash
+
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -e
+
+usage(){
+	cat <<END
+usage: ./standalone-init-db [database]
+END
+}
+
+if [ $# -ne 1 ] ; then
+	usage >&2
+	exit 1
+fi
+
+db="$1"
+
+sqlite3 "$db" <<END
+	CREATE TABLE flights (
+		flight TEXT PRIMARY KEY,
+		blessing TEXT,
+		intended TEXT,
+		branch TEXT
+		);
+	CREATE TABLE jobs (
+		flight TEXT NOT NULL,
+		job TEXT NOT NULL,
+		recipe TEXT NOT NULL,
+		status TEXT NOT NULL,
+		PRIMARY KEY(flight,job)
+		);
+	CREATE TABLE runvars (
+		flight TEXT NOT NULL,
+		job TEXT NOT NULL,
+		name TEXT NOT NULL,
+		val TEXT NOT NULL,
+		synth BOOLEAN NOT NULL,
+		PRIMARY KEY(flight,job,name)
+		);
+END
diff --git a/standalone-reset b/standalone-reset
index 83a6606..ea9d027 100755
--- a/standalone-reset
+++ b/standalone-reset
@@ -122,35 +122,9 @@ case $# in
 esac
 
 if test -f standalone.db; then
-	sqlite3 standalone.db <<END
-		DELETE FROM runvars WHERE flight='$flight';
-		DELETE FROM jobs    WHERE flight='$flight';
-		DELETE FROM flights WHERE flight='$flight';
-END
+	./standalone-forget-flight standalone.db "$flight"
 else
-	sqlite3 standalone.db <<END
-		CREATE TABLE flights (
-			flight TEXT PRIMARY KEY,
-			blessing TEXT,
-			intended TEXT,
-			branch TEXT
-			);
-		CREATE TABLE jobs (
-			flight TEXT NOT NULL,
-			job TEXT NOT NULL,
-			recipe TEXT NOT NULL,
-			status TEXT NOT NULL,
-			PRIMARY KEY(flight,job)
-			);
-		CREATE TABLE runvars (
-			flight TEXT NOT NULL,
-			job TEXT NOT NULL,
-			name TEXT NOT NULL,
-			val TEXT NOT NULL,
-			synth BOOLEAN NOT NULL,
-			PRIMARY KEY(flight,job,name)
-			);
-END
+	./standalone-init-db standalone.db
 fi
 
 : ${BUILD_LVEXTEND_MAX:=50}
-- 
1.8.5.2
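As a usage note, the argument handling in the new standalone-forget-flight helper boils down to plain positional parameters: the database path is taken from "$1" and the flight name from "$2". A minimal sketch (the file and flight names below are examples, not part of the patch):

```shell
# Sketch of standalone-forget-flight's argument handling: the database
# path comes from $1 and the flight name from $2.  Names are examples.
set -- standalone.db myflight  # simulates: ./standalone-forget-flight standalone.db myflight
db="$1"
flight="$2"
echo "deleting flight '$flight' from '$db'"
```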




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:11:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFheG-0006rm-RZ; Tue, 18 Feb 2014 10:11:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFheE-0006rd-Ty
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 10:11:31 +0000
Received: from [85.158.143.35:64680] by server-2.bemta-4.messagelabs.com id
	1C/A3-10891-2D133035; Tue, 18 Feb 2014 10:11:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392718289!6460028!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20930 invoked from network); 18 Feb 2014 10:11:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 10:11:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 10:11:28 +0000
Message-Id: <53033FDD020000780011D2A8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 10:11:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
	<21250.15776.972814.117850@mariner.uk.xensource.com>
In-Reply-To: <21250.15776.972814.117850@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 17:49, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> George Dunlap writes ("Re: Proposed force push of staging to master"):
>> On 02/17/2014 12:08 PM, Ian Jackson wrote:
>> > So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
>> > xen.git#master and call it RC4.  Comments welcome.
>> 
>> Thanks for the analysis.  This seems like a good plan.
> 
> I have done this (RC4 is tagged, tarballs are in production).
> 
> I also had to force push the change below to xen.git#master.
> 
> Can I request that we don't change this back to say "master" until we
> are done with 4.4.0 ?  Either way we have to update Config.mk with new
> qemu upstream versions, but if we set this to "master" in between RCs,
> I end up having to do it as a force push in the middle of the RC
> production which is out-of-course, error-prone, and suboptimal.
> 
> It is IMO better to put a commit hash in QEMU_UPSTREAM_REVISION in
> Config.mk (updated when the qemu-upstream tree has passed its push
> gate).
> 
> That is I think the best workflow is:
>   * make a change to staging/qemu-upstream-unstable.git
>   * wait for push gate to put it in qemu-upstream-unstable.git
>   * make change to xen.git#staging to update QEMU_UPSTREAM_REVISION
>     to new commit hash
>   * whatever is in xen.git#master is what gets called rcN
>     (ie we tag xen.git and */qemu-upstream-unstable.git with the rcN
>      tags, but we don't use the actual tag name in Config.mk)

Do you propose this just for the RC phase, or do you propose
reverting the decision to have this set to master altogether? In either
case, it was only pretty recently that we decided to use master here,
and I don't think you objected, so I'm a little puzzled by the proposal.
I personally think that using master plus an explicit tag for RCs is
just as appropriate as properly naming the RC in
xen/Makefile:XEN_EXTRAVERSION, and that the respective adjustments
could - if one wanted to - likely be fully automated (although when I do
the same on the stable branches, I haven't bothered scripting it so far,
as it simply doesn't happen frequently enough to warrant it).

Jan
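The "update QEMU_UPSTREAM_REVISION to a commit hash" step from the workflow quoted above can be sketched as below. This is purely illustrative and not the actual release tooling: the Config.mk contents are made up, and the hash is simply the RC4 commit mentioned earlier in the thread.

```shell
# Illustrative sketch: pin QEMU_UPSTREAM_REVISION in Config.mk to an
# explicit commit hash instead of a branch name.  File contents are
# made up; the hash is the RC4 commit mentioned in this thread.
workdir=$(mktemp -d)
cat > "$workdir/Config.mk" <<'EOF'
QEMU_UPSTREAM_REVISION ?= master
EOF
rev=4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e  # e.g. from: git rev-parse HEAD
sed -i "s/^QEMU_UPSTREAM_REVISION.*/QEMU_UPSTREAM_REVISION ?= $rev/" \
    "$workdir/Config.mk"
pinned=$(cat "$workdir/Config.mk")
echo "$pinned"
rm -r "$workdir"
```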


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:11:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFheP-0006sG-8S; Tue, 18 Feb 2014 10:11:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFheN-0006s7-5r
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 10:11:39 +0000
Received: from [85.158.137.68:65291] by server-8.bemta-3.messagelabs.com id
	4A/3D-16039-AD133035; Tue, 18 Feb 2014 10:11:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392718296!2580460!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14805 invoked from network); 18 Feb 2014 10:11:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:11:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101687621"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:11:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:11:35 -0500
Message-ID: <1392718294.11080.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 10:11:34 +0000
In-Reply-To: <21246.25423.419772.949039@mariner.uk.xensource.com>
References: <1391005955.21756.7.camel@Abyss>
	<1392401513.32038.348.camel@Solace>
	<21246.25423.419772.949039@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, raistlin@linux.it
Subject: Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f'
 option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 18:41 +0000, Ian Jackson wrote:
> Dario Faggioli writes ("Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f' option"):
> > On mer, 2014-01-29 at 14:32 +0000, Dario Faggioli wrote:
> > > standalone-reset's usage says:
> > >     
> > >   usage: ./standalone-reset [<options>] [<branch> [<xenbranch> [<buildflight>]]]
> > >    branch and xenbranch default, separately, to xen-unstable
> > >   options:
> > >    -f<flight>     generate flight "flight", default is "standalone"
> > >     
> > > but then there is no place where '-f' is processed, and hence
> > > no real way to pass a specific flight name to make-flight.
> > >     
> > > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
> ...
> > I know it's a busy period for OSSTest, but this should be pretty
> > straightforward, and it only affects standalone mode.
> 
> Right.  I don't use standalone mode much, so sorry about that.  I
> looked for a comment from Ian C but didn't find one.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> 
> > Anyway, I can put it on hold and resubmit in a while, if that's
> > considered better.
> 
> No, pinging now is good.
> 
> This patch leads me to an observation: I looked at the code in
> standalone-reset and it appears to me that there is not currently
> anything which sets "$flight".

This patch from Dario does it I think.

> So the "DELETE" statements used if there's an existing db won't have
> any effect.  This doesn't cause any strange effects because
> Osstest/JobDB/Standalone.pm deletes them too.
> 
> I think it would be best to delete that part of standalone-reset.  Do
> you agree ?

FWIW I've been carrying the following with the intention of using it
from my standalone helper script, since as you've just point out
Standalone.pm also does it then it seems like I could drop the
forget-flight bit.

8<-----------------

>From 98f57473e4787620c6cad443ee15cc38b330065d Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 12 Feb 2014 11:19:45 +0000
Subject: [PATCH] standalone: refactor out some useful bits of standalone-reset

I sometimes want just these bits.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 standalone-forget-flight | 39 +++++++++++++++++++++++++++++++++
 standalone-init-db       | 56 ++++++++++++++++++++++++++++++++++++++++++++++++
 standalone-reset         | 30 ++------------------------
 3 files changed, 97 insertions(+), 28 deletions(-)
 create mode 100755 standalone-forget-flight
 create mode 100755 standalone-init-db

diff --git a/standalone-forget-flight b/standalone-forget-flight
new file mode 100755
index 0000000..6dd6a84
--- /dev/null
+++ b/standalone-forget-flight
@@ -0,0 +1,39 @@
+#!/bin/bash
+
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -e
+
+usage(){
+	cat <<END
+usage: ./standalone-forget-flight [database] [flight]
+END
+}
+
+if [ $# -ne 2 ] ; then
+	usage >&2
+	exit 1
+fi
+
+db="$1"
+flight="$1"
+
+sqlite3 "$db" <<END
+	DELETE FROM runvars WHERE flight='$flight';
+	DELETE FROM jobs    WHERE flight='$flight';
+	DELETE FROM flights WHERE flight='$flight';
+END
diff --git a/standalone-init-db b/standalone-init-db
new file mode 100755
index 0000000..34a47c4
--- /dev/null
+++ b/standalone-init-db
@@ -0,0 +1,56 @@
+#!/bin/bash
+
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2014 Citrix Inc.
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+#
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+set -e
+
+usage(){
+	cat <<END
+usage: ./standalone-init-db <database>
+END
+}
+
+if [ $# -ne 1 ] ; then
+	usage >&2
+	exit 1
+fi
+
+db="$1"
+
+sqlite3 "$db" <<END
+	CREATE TABLE flights (
+		flight TEXT PRIMARY KEY,
+		blessing TEXT,
+		intended TEXT,
+		branch TEXT
+		);
+	CREATE TABLE jobs (
+		flight TEXT NOT NULL,
+		job TEXT NOT NULL,
+		recipe TEXT NOT NULL,
+		status TEXT NOT NULL,
+		PRIMARY KEY(flight,job)
+		);
+	CREATE TABLE runvars (
+		flight TEXT NOT NULL,
+		job TEXT NOT NULL,
+		name TEXT NOT NULL,
+		val TEXT NOT NULL,
+		synth BOOLEAN NOT NULL,
+		PRIMARY KEY(flight,job,name)
+		);
+END
diff --git a/standalone-reset b/standalone-reset
index 83a6606..ea9d027 100755
--- a/standalone-reset
+++ b/standalone-reset
@@ -122,35 +122,9 @@ case $# in
 esac
 
 if test -f standalone.db; then
-	sqlite3 standalone.db <<END
-		DELETE FROM runvars WHERE flight='$flight';
-		DELETE FROM jobs    WHERE flight='$flight';
-		DELETE FROM flights WHERE flight='$flight';
-END
+	./standalone-forget-flight standalone.db "$flight"
 else
-	sqlite3 standalone.db <<END
-		CREATE TABLE flights (
-			flight TEXT PRIMARY KEY,
-			blessing TEXT,
-			intended TEXT,
-			branch TEXT
-			);
-		CREATE TABLE jobs (
-			flight TEXT NOT NULL,
-			job TEXT NOT NULL,
-			recipe TEXT NOT NULL,
-			status TEXT NOT NULL,
-			PRIMARY KEY(flight,job)
-			);
-		CREATE TABLE runvars (
-			flight TEXT NOT NULL,
-			job TEXT NOT NULL,
-			name TEXT NOT NULL,
-			val TEXT NOT NULL,
-			synth BOOLEAN NOT NULL,
-			PRIMARY KEY(flight,job,name)
-			);
-END
+	./standalone-init-db standalone.db
 fi
 
 : ${BUILD_LVEXTEND_MAX:=50}
-- 
1.8.5.2
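Both new scripts share the same argument-handling shape (usage function, `$#` check, positional assignment). A self-contained sketch of that pattern, with an illustrative script name and values:

```shell
# Sketch of the argument-handling pattern the new scripts use.
# "example-script" and the flight value "100" are illustrative only.
usage() {
    cat <<END
usage: ./example-script <database> <flight>
END
}

check_args() {
    if [ "$#" -ne 2 ]; then
        usage >&2
        return 1
    fi
    db="$1"
    flight="$2"
    echo "db=$db flight=$flight"
}

check_args standalone.db 100   # prints: db=standalone.db flight=100
```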




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:11:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFheG-0006rm-RZ; Tue, 18 Feb 2014 10:11:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFheE-0006rd-Ty
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 10:11:31 +0000
Received: from [85.158.143.35:64680] by server-2.bemta-4.messagelabs.com id
	1C/A3-10891-2D133035; Tue, 18 Feb 2014 10:11:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392718289!6460028!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20930 invoked from network); 18 Feb 2014 10:11:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 10:11:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 10:11:28 +0000
Message-Id: <53033FDD020000780011D2A8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 10:11:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
	<21250.15776.972814.117850@mariner.uk.xensource.com>
In-Reply-To: <21250.15776.972814.117850@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 17.02.14 at 17:49, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> George Dunlap writes ("Re: Proposed force push of staging to master"):
>> On 02/17/2014 12:08 PM, Ian Jackson wrote:
>> > So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
>> > xen.git#master and call it RC4.  Comments welcome.
>> 
>> Thanks for the analysis.  This seems like a good plan.
> 
> I have done this (RC4 is tagged, tarballs are in production).
> 
> I also had to force push the change below to xen.git#master.
> 
> Can I request that we don't change this back to say "master" until we
> are done with 4.4.0 ?  Either way we have to update Config.mk with new
> qemu upstream versions, but if we set this to "master" in between RCs,
> I end up having to do it as a force push in the middle of the RC
> production which is out-of-course, error-prone, and suboptimal.
> 
> It is IMO better to put a commit hash in QEMU_UPSTREAM_REVISION in
> Config.mk (updated when the qemu-upstream tree has passed its push
> gate).
> 
> That is, I think the best workflow is:
>   * make a change to staging/qemu-upstream-unstable.git
>   * wait for push gate to put it in qemu-upstream-unstable.git
>   * make change to xen.git#staging to update QEMU_UPSTREAM_REVISION
>     to new commit hash
>   * whatever is in xen.git#master is what gets called rcN
>     (ie we tag xen.git and */qemu-upstream-unstable.git with the rcN
>      tags, but we don't use the actual tag name in Config.mk)

Do you propose this just for the RC phase, or do you propose
reverting the decision to have this set to master altogether? In either
case, it was only pretty recently that we decided to use master here,
and I don't think you objected, so I'm a little puzzled by the proposal.
I personally think that using master plus an explicit tag for RCs is just
as appropriate as properly naming the RC in
xen/Makefile:XEN_EXTRAVERSION, and that the respective adjustments
could, if one wanted, likely be fully automated (although, doing the
same on the stable branches myself, I haven't bothered scripting it so
far, since it simply doesn't happen often enough to warrant that).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:17:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:17:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhjV-0007Ab-FP; Tue, 18 Feb 2014 10:16:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFhjU-0007AT-0p
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 10:16:56 +0000
Received: from [193.109.254.147:16710] by server-13.bemta-14.messagelabs.com
	id D3/C8-01226-71333035; Tue, 18 Feb 2014 10:16:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392718614!1332638!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12760 invoked from network); 18 Feb 2014 10:16:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 10:16:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 10:16:53 +0000
Message-Id: <53034122020000780011D2CC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 10:16:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
	<52FCA2E0020000780011BF52@nat28.tlf.novell.com>
In-Reply-To: <52FCA2E0020000780011BF52@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 13.02.14 at 10:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> This test is already performed a couple of lines above.
> 
> Except that it's the wrong code you remove:

No opinion on this alternative at all?

Jan

>> @@ -639,11 +639,7 @@ static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8 
> func, u8 bir, int vf)
>>          if ( vf < 0 || (vf && vf % stride) )
>>              return 0;
>>          if ( stride )
>> -        {
>> -            if ( vf % stride )
>> -                return 0;
>>              vf /= stride;
>> -        }
> 
> Note how this second check carefully avoids a division by zero.
> From what I can tell I think that I simply forgot to remove the
> right side of the earlier || after having converted it to the safer
> variant inside the if(). Hence I think we instead want:
> 
> x86/MSI: don't risk division by zero
> 
> The check in question is redundant with the one in the immediately
> following if(), where dividing by zero gets carefully avoided.
> 
> Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/msi.c
> +++ b/xen/arch/x86/msi.c
> @@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8 
>              return 0;
>          base = pos + PCI_SRIOV_BAR;
>          vf -= PCI_BDF(bus, slot, func) + offset;
> -        if ( vf < 0 || (vf && vf % stride) )
> +        if ( vf < 0 )
>              return 0;
>          if ( stride )
>          {




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:18:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhlP-0007II-W3; Tue, 18 Feb 2014 10:18:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFhlO-0007IA-A7
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:18:54 +0000
Received: from [85.158.139.211:28704] by server-15.bemta-5.messagelabs.com id
	27/99-24395-D8333035; Tue, 18 Feb 2014 10:18:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392718731!4615632!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14931 invoked from network); 18 Feb 2014 10:18:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:18:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101689671"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:18:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:18:50 -0500
Message-ID: <1392718729.11080.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 18 Feb 2014 10:18:49 +0000
In-Reply-To: <20140215221737.GA28254@aepfle.de>
References: <20140215221737.GA28254@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-15 at 23:17 +0100, Olaf Hering wrote:

Please remember to CC the maintainers of the code you are discussing.
Ian J added.

> I'm not sure if libxlu_disk_l.h is a generated file.

It is, but we also check in the generated version, IIRC due to a bug in
some versions of flex, but also partially for convenience to avoid the
need for flex on all development systems.

>  But just once I saw
> this failure below with automated build and make -j 16. This source tree
> has the discard-enable patch, which changes tools/libxl/libxlu_disk_l.l.
> As a result libxlu_disk_l.c is regenerated, see the flex call below.

It might be a good idea to either also patch the generated files or to
have the patch remove them, to avoid any possible confusion due to skew.

> How should make become aware of the libxlu_disk_l.h dependency?

I think it would probably need explicitly specifying like we do for the
IDL generated files.

What deleted that file though?
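The explicit dependency Ian suggests could be a one-line rule in tools/libxl/Makefile, analogous to how the IDL-generated headers are handled. A sketch only; the object names are taken from the error log above and the exact rule placement is assumed:

```make
# Sketch: declare the flex-generated header as an explicit prerequisite
# of the objects that include it, so make regenerates it before compiling.
libxlu_pci.o libxlu_disk.o: libxlu_disk_l.h
```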

> Olaf
> 
> ....
> [  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_helper.h >_libxl_save_msgs_helper.h.new
> [  126s] python gentypes.py libxl_types.idl __libxl_types.h __libxl_types_json.h __libxl_types.c
> [  126s] python gentypes.py libxl_types_internal.idl __libxl_types_internal.h __libxl_types_internal_json.h __libxl_types_internal.c
> [  126s] if ! cmp -s _libxl_list.h.new _libxl_list.h; then mv -f _libxl_list.h.new _libxl_list.h; else rm -f _libxl_list.h.new; fi
> [  126s] /usr/bin/flex --header-file=libxlu_disk_l.h --outfile=libxlu_disk_l.c libxlu_disk_l.l
> [  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_callout.c >_libxl_save_msgs_callout.c.new
> [  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_helper.c >_libxl_save_msgs_helper.c.new
> [  126s] sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp >_paths.h.2.tmp
> [  126s] rm -f _paths.h.tmp
> --
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .flexarray.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control 
-I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o flexarray.o flexarray.c 
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl.o libxl.c 
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl_create.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl_create.o libxl_create.c 
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl_dm.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl_dm.o libxl_dm.c 
> [  126s] libxlu_pci.c:3:27: fatal error: libxlu_disk_l.h: No such file or directory
> [  126s]  #include "libxlu_disk_l.h"
> [  126s]                            ^
> [  126s] compilation terminated.
> [  126s] /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/Rules.mk:89: recipe for target 'libxlu_pci.o' failed
> [  126s] make[3]: *** [libxlu_pci.o] Error 1
> [  126s] make[3]: *** Waiting for unfinished jobs....
> [  126s] libxlu_disk.c:3:27: fatal error: libxlu_disk_l.h: No such file or directory
> [  126s]  #include "libxlu_disk_l.h"
> [  126s]                            ^
> [  126s] compilation terminated.
> [  126s] /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/Rules.mk:89: recipe for target 'libxlu_disk.o' failed
> [  126s] make[3]: *** [libxlu_disk.o] Error 1
> ....
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:18:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhlP-0007II-W3; Tue, 18 Feb 2014 10:18:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFhlO-0007IA-A7
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:18:54 +0000
Received: from [85.158.139.211:28704] by server-15.bemta-5.messagelabs.com id
	27/99-24395-D8333035; Tue, 18 Feb 2014 10:18:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392718731!4615632!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14931 invoked from network); 18 Feb 2014 10:18:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:18:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101689671"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:18:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:18:50 -0500
Message-ID: <1392718729.11080.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Tue, 18 Feb 2014 10:18:49 +0000
In-Reply-To: <20140215221737.GA28254@aepfle.de>
References: <20140215221737.GA28254@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-15 at 23:17 +0100, Olaf Hering wrote:

Please remember to CC the maintainers of the code you are discussing.
Ian J added.

> I'm not sure if libxlu_disk_l.h is a generated file.

It is, but we also check in the generated version, IIRC due to a bug in
some versions of flex, but also partially for convenience to avoid the
need for flex on all development systems.

>  But just once I saw
> the failure below with an automated build and make -j 16. This source tree
> has the discard-enable patch, which changes tools/libxl/libxlu_disk_l.l.
> As a result libxlu_disk_l.c is regenerated; see the flex call below.

It might be a good idea to either also patch the generated files or to
have the patch remove them, to avoid any possible confusion due to skew.

> How should make become aware of the libxlu_disk_l.h dependency?

I think it would probably need to be specified explicitly, like we do
for the IDL-generated files.
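For illustration only, an explicit dependency of that kind might look like
the sketch below. The object names and the flex invocation are taken from
the build log quoted further down, but this is a sketch, not the actual
tools/libxl/Makefile:

```make
# Objects whose sources #include the flex-generated header.
LIBXLU_OBJS = libxlu_pci.o libxlu_disk.o

# Explicit dependency so that, under make -j, these objects are
# never compiled before flex has produced libxlu_disk_l.h.
$(LIBXLU_OBJS): libxlu_disk_l.h

# flex emits the header and the scanner source in a single run.
libxlu_disk_l.h libxlu_disk_l.c: libxlu_disk_l.l
	flex --header-file=libxlu_disk_l.h --outfile=libxlu_disk_l.c $<
```

(Note that a plain two-target rule like the last one can fire its recipe
twice under parallel make; GNU make 4.3's grouped targets, "&:", express
"one recipe, both outputs" precisely.)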

What deleted that file though?

> Olaf
> 
> ....
> [  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_helper.h >_libxl_save_msgs_helper.h.new
> [  126s] python gentypes.py libxl_types.idl __libxl_types.h __libxl_types_json.h __libxl_types.c
> [  126s] python gentypes.py libxl_types_internal.idl __libxl_types_internal.h __libxl_types_internal_json.h __libxl_types_internal.c
> [  126s] if ! cmp -s _libxl_list.h.new _libxl_list.h; then mv -f _libxl_list.h.new _libxl_list.h; else rm -f _libxl_list.h.new; fi
> [  126s] /usr/bin/flex --header-file=libxlu_disk_l.h --outfile=libxlu_disk_l.c libxlu_disk_l.l
> [  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_callout.c >_libxl_save_msgs_callout.c.new
> [  126s] /usr/bin/perl -w libxl_save_msgs_gen.pl _libxl_save_msgs_helper.c >_libxl_save_msgs_helper.c.new
> [  126s] sed -e "s/\([^=]*\)=\(.*\)/#define \1 \2/g" _paths.h.tmp >_paths.h.2.tmp
> [  126s] rm -f _paths.h.tmp
> --
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .flexarray.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control 
-I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o flexarray.o flexarray.c 
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl.o libxl.c 
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl_create.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl_create.o libxl_create.c 
> [  126s] gcc  -O1 -fno-omit-frame-pointer -m64 -g -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes -Wdeclaration-after-statement -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -D__XEN_TOOLS__ -MMD -MF .libxl_dm.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE -fno-optimize-sibling-calls -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind-tables -Werror -Wno-format-zero-length -Wmissing-declarations -Wno-declaration-after-statement -Wformat-nonliteral -I. -fPIC -pthread -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/libxc -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/xenstore -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/control -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/blktap2/include -I/home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/include  -Wshadow -include /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/config.h  -c -o libxl_dm.o libxl_dm.c 
> [  126s] libxlu_pci.c:3:27: fatal error: libxlu_disk_l.h: No such file or directory
> [  126s]  #include "libxlu_disk_l.h"
> [  126s]                            ^
> [  126s] compilation terminated.
> [  126s] /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/Rules.mk:89: recipe for target 'libxlu_pci.o' failed
> [  126s] make[3]: *** [libxlu_pci.o] Error 1
> [  126s] make[3]: *** Waiting for unfinished jobs....
> [  126s] libxlu_disk.c:3:27: fatal error: libxlu_disk_l.h: No such file or directory
> [  126s]  #include "libxlu_disk_l.h"
> [  126s]                            ^
> [  126s] compilation terminated.
> [  126s] /home/abuild/rpmbuild/BUILD/xen-4.4.0-testing/tools/libxl/../../tools/Rules.mk:89: recipe for target 'libxlu_disk.o' failed
> [  126s] make[3]: *** [libxlu_disk.o] Error 1
> ....
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:20:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhmj-0007OR-Fl; Tue, 18 Feb 2014 10:20:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFhmi-0007OK-7o
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:20:16 +0000
Received: from [85.158.139.211:62010] by server-16.bemta-5.messagelabs.com id
	37/5A-05060-FD333035; Tue, 18 Feb 2014 10:20:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392718811!664838!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14694 invoked from network); 18 Feb 2014 10:20:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:20:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101689904"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:20:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:20:07 -0500
Message-ID: <1392718806.11080.14.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 Feb 2014 10:20:06 +0000
In-Reply-To: <52FE8820.1070002@citrix.com>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
	<52FE8820.1070002@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	John Johnson <lausgans@gmail.com>,
	Francesco Gringoli <francesco.gringoli@ing.unibs.it>,
	Chen Baozi <baozich@gmail.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [Wiki access] Booting XEN on Samsung ARM Chromebook
 with display support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 21:18 +0000, Andrew Cooper wrote:
> On 14/02/2014 21:12, Francesco Gringoli wrote:
> > Hello guys,
> >
> > "building" on the great work done by Anthony I was finally able to boot xen and dom0 on the Chromebook with display support.
> >
> > Basically everything was already done by Anthony; the only problems were the .config file of the dom0 missing some options and a couple of files in arch/arm/mach-exynos which were crashing the boot process (while reading the dtb). I was not able to address the problem in a clever way, but I patched it so that boot (almost) never crashes anymore.
> >
> > What I get is dom0 booting and working; boot logs appear on the display as when running archlinux natively. The built-in keyboard does not work, as pointed out by Anthony, but an external one does, so it is possible to log in and check that /proc/xen is populated. There are two main issues:
> >
> > 1) sometimes (though rarely) the boot crashes, but late, e.g. after 2 seconds
> > 2) after a few minutes something weird happens and it is no longer possible to cat files' contents, although ls, cd, etc. keep working.
> >
> > I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I can not edit it.
> 
> Because of spam attacks, editing is disabled by default.
> 
> Use
> http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html
> to get access.

FYI I saw the result of this and added write perms for Francesco on
Saturday.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:21:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFho9-0007Vv-Vb; Tue, 18 Feb 2014 10:21:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFho8-0007Vd-Nd
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:21:44 +0000
Received: from [193.109.254.147:47950] by server-14.bemta-14.messagelabs.com
	id D0/BD-29228-73433035; Tue, 18 Feb 2014 10:21:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392718900!5108902!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10530 invoked from network); 18 Feb 2014 10:21:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:21:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101690230"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:21:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:21:39 -0500
Message-ID: <1392718898.11080.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 18 Feb 2014 10:21:38 +0000
In-Reply-To: <alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
	<alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	John Johnson <lausgans@gmail.com>,
	Francesco Gringoli <francesco.gringoli@ing.unibs.it>,
	Chen Baozi <baozich@gmail.com>, Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-16 at 12:56 +0000, Stefano Stabellini wrote:
> On Fri, 14 Feb 2014, Francesco Gringoli wrote:
> > Hello guys,
> > 
> > "building" on the great work done by Anthony I was finally able to boot xen and dom0 on the Chromebook with display support.
> > 
> > Basically everything was already done by Anthony; the only problems were the .config file of the dom0 missing some options and a couple of files in arch/arm/mach-exynos which were crashing the boot process (while reading the dtb). I was not able to address the problem in a clever way, but I patched it so that boot (almost) never crashes anymore.
> > 
> > What I get is dom0 booting and working; boot logs appear on the display as when running archlinux natively. The built-in keyboard does not work, as pointed out by Anthony, but an external one does, so it is possible to log in and check that /proc/xen is populated. There are two main issues:
> > 
> > 1) sometimes (though rarely) the boot crashes, but late, e.g. after 2 seconds
> > 2) after a few minutes something weird happens and it is no longer possible to cat files' contents, although ls, cd, etc. keep working.
> > 
> > I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I can not edit it.
> 
> Great job!

Yes, very nice.

>  Please do request access and update the wiki.
> 
> Anthony's branches are quite old now, many bugs have been discovered and
> fixed in the upstream Xen and Linux trees in the meantime.
> Although I expect that updating the Xen tree could be difficult at this
> point, it is probably worth it from the stability point of view.

I agree, ideally the tree would be rebased and whatever needed to be
(and could be) would be upstreamed. I've no idea what the divergence is
like, but at least in principle supporting a "new" platform with the
modern mainline code ought to be loads easier than it was back when
Anthony started work on this stuff.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:26:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:26:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhsA-0007io-Ll; Tue, 18 Feb 2014 10:25:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WFhhD-00077I-Ss
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 10:14:36 +0000
Received: from [85.158.143.35:57302] by server-2.bemta-4.messagelabs.com id
	A6/7A-10891-B8233035; Tue, 18 Feb 2014 10:14:35 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392718467!6469106!1
X-Originating-IP: [213.75.39.6]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuNzUuMzkuNiA9PiA1MTA0Njk=\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuNzUuMzkuNiA9PiA1MTA0Njk=\n,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31974 invoked from network); 18 Feb 2014 10:14:28 -0000
Received: from cpsmtpb-ews03.kpnxchange.com (HELO
	cpsmtpb-ews03.kpnxchange.com) (213.75.39.6)
	by server-3.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 10:14:28 -0000
Received: from cpsps-ews12.kpnxchange.com ([10.94.84.179]) by
	cpsmtpb-ews03.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Tue, 18 Feb 2014 11:14:27 +0100
Received: from CPSMTPM-TLF101.kpnxchange.com ([195.121.3.4]) by
	cpsps-ews12.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Tue, 18 Feb 2014 11:14:27 +0100
Received: from [192.168.1.104] ([82.169.24.127]) by
	CPSMTPM-TLF101.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Tue, 18 Feb 2014 11:14:27 +0100
Message-ID: <1392718467.30073.12.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 18 Feb 2014 11:14:27 +0100
In-Reply-To: <20140217144307.GB28658@localhost.localdomain>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 18 Feb 2014 10:14:27.0593 (UTC)
	FILETIME=[3492FF90:01CF2C92]
X-RcptDomain: lists.xenproject.org
X-Mailman-Approved-At: Tue, 18 Feb 2014 10:25:53 +0000
Cc: "H. Peter Anvin" <hpa@zytor.com>, Richard Weinberger <richard@nod.at>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, David Vrabel <david.vrabel@citrix.com>,
	Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 09:43 -0500, Konrad Rzeszutek Wilk wrote:
> On Mon, Feb 17, 2014 at 02:03:17PM +0100, Paul Bolle wrote:
> > On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
> > > On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
> > > Please look in the grub git tree. They have fixed their code to not do
> > > this anymore. This should be reflected in the patch description.
> > 
> > Thanks, I didn't know that. That turned out to be grub commit
> > ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
> > use it to implement generating of config"), see
> > http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059

And that commit was reverted a week later in grub commit
faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 ("Revert grub-file usage in
grub-mkconfig."), see
http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 .

That commit has no explanation (other than its one-line summary). So
we're left guessing why this was done. Luckily, it doesn't matter here,
because the test for CONFIG_XEN_PRIVILEGED_GUEST is superfluous.

Anyhow, I hope to submit a second version of this patch later today.


Paul Bolle


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:26:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFhsa-0007lO-2g; Tue, 18 Feb 2014 10:26:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFhsY-0007l8-M0
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:26:18 +0000
Received: from [193.109.254.147:63550] by server-11.bemta-14.messagelabs.com
	id 2E/A7-24604-A4533035; Tue, 18 Feb 2014 10:26:18 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392719175!5049190!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14388 invoked from network); 18 Feb 2014 10:26:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:26:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101691543"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:26:15 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:26:15 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFhsU-0000I9-9F;
	Tue, 18 Feb 2014 10:26:14 +0000
Message-ID: <53033544.2000409@eu.citrix.com>
Date: Tue, 18 Feb 2014 10:26:12 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>, Jan Beulich <JBeulich@suse.com>, 
	Tim Deegan <tim@xen.org>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
X-DLP: MIA1
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 03:14 AM, Zhang, Yang Z wrote:
> Jan Beulich wrote on 2014-02-17:
>>>>> On 17.02.14 at 15:51, George Dunlap <george.dunlap@eu.citrix.com>
>> wrote:
>>> On 02/17/2014 10:18 AM, Jan Beulich wrote:
>>>> Actually I'm afraid there are two problems with this patch:
>>>>
>>>> For one, is enabling "global" log dirty mode still going to work
>>>> after VRAM-only mode already got enabled? I ask because the
>>>> paging_mode_log_dirty() check which paging_log_dirty_enable() does
>>>> first thing suggests otherwise to me (i.e. the now conditional
>>>> setting of all p2m entries to p2m_ram_logdirty would seem to never
>>>> get executed). IOW I would think that we're now lacking a control
>>>> operation allowing the transition from dirty VRAM tracking mode to
>>>> full log dirty mode.
>>> Hrm, well, so far playing with this I've been unable to get a
>>> localhost migrate to fail with the vncviewer attached.  Which seems a bit strange...
>> Not necessarily - it may depend on how the tools actually do this:
>> They might temporarily disable log dirty mode altogether, just to
>> re-enable full mode again right away. But this specific usage of the
>> hypervisor interface wouldn't (to me) mean that other tool stacks
>> might not be doing this differently.
> You are right. Before migration, libxc will disable log dirty mode if it is already enabled, and then re-enable it. So when I was developing this patch, I thought it was OK for migration.
>
> If other tool stacks really will also use this interface (is that true?), perhaps my original patch, which checks paging_mode_log_dirty(d) && log_global, is better:

It turns out that the reason I couldn't get a crash was because libxc 
was actually paying attention to the -EINVAL return value, and disabling 
and then re-enabling logdirty.  That's what would happen before your 
dirty vram patch, and that's what happens after.  And arguably, that's 
the correct behavior for any toolstack, given that the interface returns 
an error.

This patch would actually change the interface; if we check this in, 
then if you enable logdirty when dirty vram tracking is enabled, you 
*won't* get an error, and thus *won't* disable and re-enable logdirty 
mode.  So actually, this patch would be more disruptive.
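To make that concrete, here is a minimal sketch of the fallback behaviour 
described above, with made-up function names (the real interface is the 
XEN_DOMCTL shadow ops driven through libxc, not these helpers): the 
toolstack asks for global log-dirty mode, and on -EINVAL disables whatever 
tracking is active and retries.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-ins for the hypervisor interface; real code would
 * issue XEN_DOMCTL_SHADOW_OP_* operations through libxc instead. */
static int tracking_active;  /* e.g. VRAM-only dirty tracking is on */

static int log_dirty_enable(int log_global)
{
    /* Mirrors the check at the top of paging_log_dirty_enable():
     * enabling log-dirty while any log-dirty mode is active fails. */
    if (tracking_active)
        return -EINVAL;
    tracking_active = log_global;  /* greatly simplified */
    return 0;
}

static int log_dirty_disable(void)
{
    tracking_active = 0;
    return 0;
}

/* The toolstack-side fallback described above: on -EINVAL, disable the
 * active tracking mode and retry with full (global) log-dirty mode. */
static int enable_global_log_dirty(void)
{
    int rc = log_dirty_enable(1);
    if (rc == -EINVAL) {
        log_dirty_disable();
        rc = log_dirty_enable(1);
    }
    return rc;
}
```

With the proposed patch, the first call would succeed instead of 
returning -EINVAL, so this disable/re-enable path would never run.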

Thanks for the patch, though. :-)

  -George

>
> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
> index ab5eacb..368c975 100644
> --- a/xen/arch/x86/mm/paging.c
> +++ b/xen/arch/x86/mm/paging.c
> @@ -168,7 +168,7 @@ int paging_log_dirty_enable(struct domain *d, bool_t log_global)
>   {
>       int ret;
>   
> -    if ( paging_mode_log_dirty(d) )
> +    if ( paging_mode_log_dirty(d) && !log_global )
>           return -EINVAL;
>   
>       domain_pause(d);
>
>
>> Jan
>
> Best regards,
> Yang
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:45:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiB1-00087Q-5H; Tue, 18 Feb 2014 10:45:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFiAz-00087L-Lf
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:45:21 +0000
Received: from [85.158.143.35:32890] by server-2.bemta-4.messagelabs.com id
	D3/5B-10891-0C933035; Tue, 18 Feb 2014 10:45:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392720313!6479463!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8792 invoked from network); 18 Feb 2014 10:45:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:45:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101695376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:45:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:45:07 -0500
Message-ID: <1392720305.11080.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Date: Tue, 18 Feb 2014 10:45:05 +0000
In-Reply-To: <40776A41FC278F40B59438AD47D147A911947E8E@SHSMSX104.ccr.corp.intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
	<1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
	<1392027356.5117.21.camel@kazak.uk.xensource.com>
	<40776A41FC278F40B59438AD47D147A911947E8E@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "keir@xen.org" <keir@xen.org>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 06:11 +0000, Xu, Dongxiao wrote:
> > -----Original Message-----
> > From: Xu, Dongxiao
> > Sent: Monday, February 10, 2014 7:19 PM
> > To: Ian Campbell
> > Cc: xen-devel@lists.xen.org; keir@xen.org; JBeulich@suse.com;
> > Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
> > andrew.cooper3@citrix.com; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov
> > Subject: RE: [PATCH v8 6/6] tools: enable Cache QoS Monitoring feature for
> > libxl/libxc
> > 
> > > > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > > > index 649ce50..43c0f48 100644
> > > > --- a/tools/libxl/libxl_types.idl
> > > > +++ b/tools/libxl/libxl_types.idl
> > > > @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
> > > >                                   ])),
> > > >             ("domain_create_console_available", Struct(None, [])),
> > > >             ]))])
> > > > +
> > > > +libxl_cqminfo = Struct("cqminfo", [
> > >
> > > You need to also patch libxl.h to add a suitable LIBXL_HAVE_FOO define,
> > > see the existing examples in that header.
> > 
> > OK.
> 
> Hi Ian,
> 
> Just had another look at the comments:
> 
> * In the event that a change is required which cannot be made
>  * backwards compatible in this manner a #define of the form
>  * LIBXL_HAVE_<interface> will always be added in order to make it
>  * possible to write applications which build against any version of
>  * libxl. Such changes are expected to be exceptional and used as a
>  * last resort. The barrier for backporting such a change to a stable
>  * branch will be very high.
> 
> LIBXL_HAVE_<interface> is meant to address backward-compatibility
> issues, and it is mostly used in cases like adding a new field to an
> existing data structure.
> 
> In this patch, libxl_cqminfo is a newly added data structure, so there
> seems to be no such compatibility issue. Do we really need to add the
> LIBXL_HAVE_<interface> macro?

I think so -- so that consumers of the libxl API know that it is
available to be used. The fact that it is an entire struct rather than a
new field doesn't make a difference here I think.
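For illustration, a consumer would then probe for the feature at compile 
time roughly like this (LIBXL_HAVE_CQM is a placeholder name here, standing 
in for whatever LIBXL_HAVE_<interface> define the final patch actually adds):

```c
#include <assert.h>
#include <string.h>

/* Placeholder: in real code this define would come from <libxl.h>.
 * LIBXL_HAVE_CQM is a hypothetical name for the macro the patch would
 * introduce alongside the new libxl_cqminfo struct. */
#define LIBXL_HAVE_CQM 1

static const char *cqm_support(void)
{
#ifdef LIBXL_HAVE_CQM
    /* New enough libxl: the cqminfo API may be used on this path. */
    return "cqm supported";
#else
    /* Older libxl: the type does not exist; compile a fallback path. */
    return "cqm not supported";
#endif
}
```

Without the define, an application has no portable way to know at build 
time whether the struct exists, which is the point of the convention.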

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:45:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiB1-00087Q-5H; Tue, 18 Feb 2014 10:45:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFiAz-00087L-Lf
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:45:21 +0000
Received: from [85.158.143.35:32890] by server-2.bemta-4.messagelabs.com id
	D3/5B-10891-0C933035; Tue, 18 Feb 2014 10:45:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392720313!6479463!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8792 invoked from network); 18 Feb 2014 10:45:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:45:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101695376"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:45:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:45:07 -0500
Message-ID: <1392720305.11080.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Xu, Dongxiao" <dongxiao.xu@intel.com>
Date: Tue, 18 Feb 2014 10:45:05 +0000
In-Reply-To: <40776A41FC278F40B59438AD47D147A911947E8E@SHSMSX104.ccr.corp.intel.com>
References: <1391836058-81430-1-git-send-email-dongxiao.xu@intel.com>
	<1391836058-81430-7-git-send-email-dongxiao.xu@intel.com>
	<1392027356.5117.21.camel@kazak.uk.xensource.com>
	<40776A41FC278F40B59438AD47D147A911947E8E@SHSMSX104.ccr.corp.intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "keir@xen.org" <keir@xen.org>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"Ian.Jackson@eu.citrix.com" <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"JBeulich@suse.com" <JBeulich@suse.com>,
	"dgdegra@tycho.nsa.gov" <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH v8 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 06:11 +0000, Xu, Dongxiao wrote:
> > -----Original Message-----
> > From: Xu, Dongxiao
> > Sent: Monday, February 10, 2014 7:19 PM
> > To: Ian Campbell
> > Cc: xen-devel@lists.xen.org; keir@xen.org; JBeulich@suse.com;
> > Ian.Jackson@eu.citrix.com; stefano.stabellini@eu.citrix.com;
> > andrew.cooper3@citrix.com; konrad.wilk@oracle.com; dgdegra@tycho.nsa.gov
> > Subject: RE: [PATCH v8 6/6] tools: enable Cache QoS Monitoring feature for
> > libxl/libxc
> > 
> > > > diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> > > > index 649ce50..43c0f48 100644
> > > > --- a/tools/libxl/libxl_types.idl
> > > > +++ b/tools/libxl/libxl_types.idl
> > > > @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
> > > >                                   ])),
> > > >             ("domain_create_console_available", Struct(None, [])),
> > > >             ]))])
> > > > +
> > > > +libxl_cqminfo = Struct("cqminfo", [
> > >
> > > You need to also patch libxl.h to add a suitable LIBXL_HAVE_FOO define,
> > > see the existing examples in that header.
> > 
> > OK.
> 
> Hi Ian,
> 
> Just had another look at the comments:
> 
> * In the event that a change is required which cannot be made
>  * backwards compatible in this manner a #define of the form
>  * LIBXL_HAVE_<interface> will always be added in order to make it
>  * possible to write applciations which build against any version of
>  * libxl. Such changes are expected to be exceptional and used as a
>  * last resort. The barrier for backporting such a change to a stable
>  * branch will be very high.
> 
> LIBXL_HAVE_<interface> is meant to address backward compatibility,
> and it is mostly used in cases like "adding a new field to an
> existing data structure".
> 
> In this patch, libxl_cqminfo is a newly added data structure, and it
> seems there is no such compatibility issue. Do we really need to add
> the LIBXL_HAVE_<interface> macro?

I think so -- so that consumers of the libxl API know that it is
available to be used. The fact that it is an entire struct rather than a
new field doesn't make a difference here, I think.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:49:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiEf-0008F1-RW; Tue, 18 Feb 2014 10:49:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFiEd-0008Es-Nu
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:49:07 +0000
Received: from [85.158.137.68:4864] by server-15.bemta-3.messagelabs.com id
	C7/C4-19263-2AA33035; Tue, 18 Feb 2014 10:49:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392720545!1018501!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15151 invoked from network); 18 Feb 2014 10:49:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:49:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101696218"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:49:04 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 05:49:04 -0500
Received: from cosworth.uk.xensource.com ([10.80.16.52]
	helo=cosworth.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WFiEZ-0002Kj-W6;
	Tue, 18 Feb 2014 10:49:04 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Tue, 18 Feb 2014 10:49:03 +0000
Message-ID: <1392720543-17784-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: fix typo in comment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 tools/libxl/libxl.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 06bbca6..e29a810 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -51,7 +51,7 @@
  * In the event that a change is required which cannot be made
  * backwards compatible in this manner a #define of the form
  * LIBXL_HAVE_<interface> will always be added in order to make it
- * possible to write applciations which build against any version of
+ * possible to write applications which build against any version of
  * libxl. Such changes are expected to be exceptional and used as a
  * last resort. The barrier for backporting such a change to a stable
  * branch will be very high.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:53:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiIT-0008Oo-HI; Tue, 18 Feb 2014 10:53:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFiIS-0008Oi-L3
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 10:53:04 +0000
Received: from [85.158.137.68:9234] by server-17.bemta-3.messagelabs.com id
	34/A5-22569-F8B33035; Tue, 18 Feb 2014 10:53:03 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392720781!2598441!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5985 invoked from network); 18 Feb 2014 10:53:02 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-11.tower-31.messagelabs.com with SMTP;
	18 Feb 2014 10:53:02 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 02:48:41 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="485122154"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 18 Feb 2014 02:53:00 -0800
Received: from fmsmsx151.amr.corp.intel.com (10.19.17.220) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 02:52:59 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX151.amr.corp.intel.com (10.19.17.220) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 02:52:59 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 18:52:57 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, Aravind Gopalakrishnan
	<aravind.gopalakrishnan@amd.com>
Thread-Topic: [PATCH V4.1] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD thresholding MSRs
Thread-Index: AQHPLI2WrpjR5j3ySEeyNVNl8C2Pk5q6zMFA
Date: Tue, 18 Feb 2014 10:52:57 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
	<530338D5020000780011D258@nat28.tlf.novell.com>
In-Reply-To: <530338D5020000780011D258@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 18.02.14 at 05:25, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> For Intel it doesn't disturb Intel's vmce logic, so it's OK for me.
>> 
>> For AMD, c000_040x is bank 4 (MC4_MISCj), while vmce currently only
>> supports banks 0/1. Even if AMD adds MC0/1_MISCj in the future, they
>> wouldn't need emulation (say, reads return 0 and writes are ignored).
>> So how about simply filtering out AMD MCx_MISCj in
>> mce_vendor_bank_msr()?
> 
> mce_vendor_bank_msr() already has
> 
>     case X86_VENDOR_AMD:
>         switch (msr) {
>         case MSR_F10_MC4_MISC1:
>         case MSR_F10_MC4_MISC2:
>         case MSR_F10_MC4_MISC3:
>             return 1;
>         }
> 
> Jan

OK.

> 
>> Aravind Gopalakrishnan wrote:
>>> vmce_amd_[rd|wr]msr functions can handle accesses to AMD
>>> thresholding registers. But due to this statement here:
>>> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> we are wrongly masking off top two bits which meant the register
>>> accesses never made it to vmce_amd_* functions.
>>> 
>>> Corrected this problem by modifying the mask in this patch to allow
>>> AMD thresholding registers to fall to 'default' case which in turn
>>> allows vmce_amd_* functions to handle access to the registers.
>>> 
>>> While at it, remove some clutter in the vmce_amd* functions.
>>> Retained current policy of returning zero for reads and ignoring
>>> writes. 
>>> 
>>> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
>>> ---
>>>  xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
>>>  xen/arch/x86/cpu/mcheck/vmce.c    |   14 +++++++++++--
>>>  2 files changed, 18 insertions(+), 37 deletions(-)
>>> 
>>> diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c
>>> b/xen/arch/x86/cpu/mcheck/amd_f10.c
>>> index 61319dc..03797ab 100644
>>> --- a/xen/arch/x86/cpu/mcheck/amd_f10.c
>>> +++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
>>> @@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
>>>  /* amd specific MCA MSR */
>>>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>  {
>>> -	switch (msr) {
>>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>>> -		v->arch.vmce.bank[1].mci_misc = val;
>>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>>> -		break;
>>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>>> -		/* ignore write: we do not emulate link and l3 cache errors
>>> -		 * to the guest.
>>> -		 */
>>> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>>> -		break;
>>> -	default:
>>> -		return 0;
>>> -	}
>>> -
>>> -	return 1;
>>> +    /* Do nothing as we don't emulate this MC bank currently */
>>> +    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
>>> +    return 1;
>>>  }
>>> 
>>>  int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>>  {
>>> -	switch (msr) {
>>> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
>>> -		*val = v->arch.vmce.bank[1].mci_misc;
>>> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
>>> -		break;
>>> -	case MSR_F10_MC4_MISC2: /* Link error type */
>>> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
>>> -		/* we do not emulate link and l3 cache
>>> -		 * errors to the guest.
>>> -		 */
>>> -		*val = 0;
>>> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
>>> -		break;
>>> -	default:
>>> -		return 0;
>>> -	}
>>> -
>>> -	return 1;
>>> +    /* Assign '0' as we don't emulate this MC bank currently */
>>> +    *val = 0;
>>> +    return 1;
>>>  }
>>> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
>>> b/xen/arch/x86/cpu/mcheck/vmce.c
>>> index f6c35db..84843fc 100644
>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v,
>>> uint32_t msr, uint64_t *val) 
>>> 
>>>      *val = 0;
>>> 
>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> +    /* Allow only first 3 MC banks into switch() */

I don't think this comment is good here. It would be better to remove it.

Thanks,
Jinsong


>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>      {
>>>      case MSR_IA32_MC0_CTL:
>>>          /* stick all 1's to MCi_CTL */
>>> @@ -148,6 +149,10 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>>>              ret = vmce_intel_rdmsr(v, msr, val);
>>>              break;
>>>          case X86_VENDOR_AMD:
>>> +            /*
>>> +             * Extended block of AMD thresholding registers fall
>>> +             * into default. Handle reads here.
>>> +             */
>>>              ret = vmce_amd_rdmsr(v, msr, val);
>>>              break;
>>>          default:
>>> @@ -210,7 +215,8 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>      int ret = 1;
>>>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
>>> 
>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>> +    /* Allow only first 3 MC banks into switch() */
>>> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>>>      {
>>>      case MSR_IA32_MC0_CTL:
>>>          /*
>>> @@ -246,6 +252,10 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>>              ret = vmce_intel_wrmsr(v, msr, val);
>>>              break;
>>>          case X86_VENDOR_AMD:
>>> +            /*
>>> +             * Extended block of AMD thresholding registers fall
>>> +             * into default. Handle writes here.
>>> +             */
>>>              ret = vmce_amd_wrmsr(v, msr, val);
>>>              break;
>>>          default:


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 10:57:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 10:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiMq-00005q-9k; Tue, 18 Feb 2014 10:57:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFiMp-00005l-7U
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 10:57:35 +0000
Received: from [85.158.139.211:11113] by server-9.bemta-5.messagelabs.com id
	62/66-11237-E9C33035; Tue, 18 Feb 2014 10:57:34 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392721052!4626066!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24474 invoked from network); 18 Feb 2014 10:57:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 10:57:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101697869"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 10:57:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 05:57:31 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFiMk-0000na-DR;
	Tue, 18 Feb 2014 10:57:30 +0000
Date: Tue, 18 Feb 2014 10:57:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
In-Reply-To: <21250.15776.972814.117850@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402181055100.27926@kaball.uk.xensource.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
	<21250.15776.972814.117850@mariner.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Feb 2014, Ian Jackson wrote:
> George Dunlap writes ("Re: Proposed force push of staging to master"):
> > On 02/17/2014 12:08 PM, Ian Jackson wrote:
> > > So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> > > xen.git#master and call it RC4.  Comments welcome.
> > 
> > Thanks for the analysis.  This seems like a good plan.
> 
> I have done this (RC4 is tagged, tarballs are in production).
> 
> I also had to force push the change below to xen.git#master.
> 
> Can I request that we don't change this back to say "master" until we
> are done with 4.4.0?  Either way we have to update Config.mk with new
> qemu upstream versions, but if we set this to "master" in between RCs,
> I end up having to do it as a force push in the middle of the RC
> production, which is out-of-course, error-prone, and suboptimal.
> 
> It is IMO better to put a commit hash in QEMU_UPSTREAM_REVISION in
> Config.mk (updated when the qemu-upstream tree has passed its push
> gate).
> 
> That is, I think the best workflow is:
>   * make a change to staging/qemu-upstream-unstable.git
>   * wait for push gate to put it in qemu-upstream-unstable.git

Does this work because the test infrastructure doesn't obey Config.mk?
Otherwise how could the new changes be tested if QEMU_UPSTREAM_REVISION
in Config.mk is unchanged?

In fact, even if the test infrastructure does test the new changes by
manually setting QEMU_UPSTREAM_REVISION, following your proposed
workflow we would still miss all the possible bug reports from the
community between RCs.
That doesn't seem very community-friendly to me.


>   * make change to xen.git#staging to update QEMU_UPSTREAM_REVISION
>     to new commit hash
>   * whatever is in xen.git#master is what gets called rcN
>     (ie we tag xen.git and */qemu-upstream-unstable.git with the rcN
>      tags, but we don't use the actual tag name in Config.mk)
> 
> Thanks,
> Ian.
> 
> >From b7319350278d0220febc8a7dc8be8e8d41b0abd2 Mon Sep 17 00:00:00 2001
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> Date: Mon, 17 Feb 2014 16:33:48 +0000
> Subject: [PATCH] Update QEMU_UPSTREAM_REVISION for 4.4.0-rc4
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> ---
>  Config.mk |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Config.mk b/Config.mk
> index 1e034f7..a6cd2e3 100644
> --- a/Config.mk
> +++ b/Config.mk
> @@ -234,7 +234,7 @@ QEMU_UPSTREAM_URL ?= git://xenbits.xen.org/qemu-upstream-unstable.git
>  SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
>  endif
>  OVMF_UPSTREAM_REVISION ?= 447d264115c476142f884af0be287622cd244423
> -QEMU_UPSTREAM_REVISION ?= master
> +QEMU_UPSTREAM_REVISION ?= qemu-xen-4.4.0-rc4
>  SEABIOS_UPSTREAM_TAG ?= rel-1.7.3.1
>  # Fri Aug 2 14:12:09 2013 -0400
>  # Fix bug in CBFS file walking with compressed files.
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:02:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiRM-0000IN-2E; Tue, 18 Feb 2014 11:02:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFiRK-0000II-PX
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:02:14 +0000
Received: from [85.158.139.211:8330] by server-12.bemta-5.messagelabs.com id
	F2/FC-15415-5BD33035; Tue, 18 Feb 2014 11:02:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392721333!76574!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15764 invoked from network); 18 Feb 2014 11:02:13 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 11:02:13 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 11:02:12 +0000
Message-Id: <53034BC1020000780011D371@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 11:02:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
	<530338D5020000780011D258@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 11:52, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu *v,
>>>> uint32_t msr, uint64_t *val) 
>>>> 
>>>>      *val = 0;
>>>> 
>>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>> +    /* Allow only first 3 MC banks into switch() */
> 
> I don't think this comment is good here. Removing it would be better.

I had asked for this to be removed again too. I'm really thinking
that V3 is what we should go with.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:05:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiUm-0000RT-W1; Tue, 18 Feb 2014 11:05:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFiUl-0000RO-Gq
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:05:47 +0000
Received: from [193.109.254.147:9993] by server-6.bemta-14.messagelabs.com id
	85/DC-03396-A8E33035; Tue, 18 Feb 2014 11:05:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392721545!5091590!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29450 invoked from network); 18 Feb 2014 11:05:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:05:46 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101699885"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:05:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:05:44 -0500
Message-ID: <1392721543.11080.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
Date: Tue, 18 Feb 2014 11:05:43 +0000
In-Reply-To: <CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
	<53022872.80209@linaro.org>
	<CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@linaro.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 17:38 +0200, Andrii Tseglytskyi wrote:
> Hi
> 
> >
> > > Something like this.
> >
> > This solution sounds good.
> >
> > If I remember correctly, you are writing a driver for IPU/GPU MMU, right?
> 
> 
> Right. For now I have started developing something like a shadow page
> table algorithm. I need to create a trap for a real pagetable, which
> is placed somewhere in domain heap memory. That's why I was thinking
> about a generic algorithm that can create a runtime trap for memory
> whose address is not known at compile time.

I think you should handle this case by making the page r/o in the p2m
and using the p2m fault handler rather than the MMIO trap
infrastructure.

See e.g. the live migration patches by Samsung.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:11:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiZk-0000eA-05; Tue, 18 Feb 2014 11:10:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WFiZi-0000e5-5E
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:10:54 +0000
Received: from [85.158.143.35:15216] by server-2.bemta-4.messagelabs.com id
	F5/11-10891-DBF33035; Tue, 18 Feb 2014 11:10:53 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392721851!6480898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21396 invoked from network); 18 Feb 2014 11:10:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:10:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101700948"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:10:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:10:50 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WFiZd-00019j-NQ;
	Tue, 18 Feb 2014 11:10:49 +0000
Message-ID: <1392721844.27274.2.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Date: Tue, 18 Feb 2014 11:10:44 +0000
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Christoph Egger <chegger@amazon.de>, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yes,
  all these patches came from customer reports. We managed to reproduce
the problem with xen-mceinj and hit these kinds of races. These patches
(the other one is already applied) fix the race issues we found.

I tested with CentOS 6.4 running mcelog.

Frediano

On Tue, 2014-02-18 at 08:42 +0000, Liu, Jinsong wrote:
> This logic (mctelem) is related to dom0 mcelog logic. Have you tested if mcelog works fine with your patch?
> 
> Thanks,
> Jinsong
> 
> Frediano Ziglio wrote:
> > These lines (in mctelem_reserve)
> > 
> > 
> >         newhead = oldhead->mcte_next;
> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > 
> > are racy. After you read the newhead pointer, another flow (a thread
> > or a recursive invocation) can change the whole list but leave the
> > head with the same value. So oldhead is the same as *freelp, but you
> > end up setting a new head that could point to any element (even one
> > already in use).
> > 
> > This patch instead uses a bit array and atomic bit operations.
> > 
> > It actually uses an unsigned long instead of a bitmap type, as
> > testing for all zeroes is easier.
> > 
> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> > ---
> >  xen/arch/x86/cpu/mcheck/mctelem.c |   52
> >  ++++++++++++++++++++++--------------- 1 file changed, 31
> > insertions(+), 21 deletions(-) 
> > 
> > diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c
> > b/xen/arch/x86/cpu/mcheck/mctelem.c index 895ce1a..e56b6fb 100644
> > --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> > +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> > @@ -69,6 +69,11 @@ struct mctelem_ent {
> >  #define	MC_URGENT_NENT		10
> >  #define	MC_NONURGENT_NENT	20
> > 
> > +/* Check if we can fit enough bits in the free bit array */
> > +#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
> > +#error Too many elements
> > +#endif
> > +
> >  #define	MC_NCLASSES		(MC_NONURGENT + 1)
> > 
> >  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> > @@ -77,11 +82,9 @@ struct mctelem_ent {
> >  static struct mc_telem_ctl {
> >  	/* Linked lists that thread the array members together.
> >  	 *
> > -	 * The free lists are singly-linked via mcte_next, and we allocate
> > -	 * from them by atomically unlinking an element from the head.
> > -	 * Consumed entries are returned to the head of the free list.
> > -	 * When an entry is reserved off the free list it is not linked
> > -	 * on any list until it is committed or dismissed.
> > +	 * The free lists is a bit array where bit 1 means free.
> > +	 * This as element number is quite small and is easy to
> > +	 * atomically allocate that way.
> >  	 *
> >  	 * The committed list grows at the head and we do not maintain a
> >  	 * tail pointer; insertions are performed atomically.  The head
> > @@ -101,7 +104,7 @@ static struct mc_telem_ctl {
> >  	 * we can lock it for updates.  The head of the processing list
> >  	 * always has the oldest telemetry, and we append (as above)
> >  	 * at the tail of the processing list. */
> > -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> > +	unsigned long mctc_free[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> > @@ -214,7 +217,10 @@ static void mctelem_free(struct mctelem_ent *tep)
> >  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
> > 
> >  	tep->mcte_prev = NULL;
> > -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> > +	tep->mcte_next = NULL;
> > +
> > +	/* set free in array */
> > +	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
> >  }
> > 
> >  /* Increment the reference count of an entry that is not linked on to
> > @@ -284,7 +290,7 @@ void mctelem_init(int reqdatasz)
> >  	}
> > 
> >  	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> > -		struct mctelem_ent *tep, **tepp;
> > +		struct mctelem_ent *tep;
> > 
> >  		tep = mctctl.mctc_elems + i;
> >  		tep->mcte_flags = MCTE_F_STATE_FREE;
> > @@ -292,16 +298,15 @@ void mctelem_init(int reqdatasz)
> >  		tep->mcte_data = datarr + i * datasz;
> > 
> >  		if (i < MC_URGENT_NENT) {
> > -			tepp = &mctctl.mctc_free[MC_URGENT];
> > -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> > +			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
> > +			tep->mcte_flags = MCTE_F_HOME_URGENT;
> >  		} else {
> > -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> > -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> > +			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
> > +			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
> >  		}
> > 
> > -		tep->mcte_next = *tepp;
> > +		tep->mcte_next = NULL;
> >  		tep->mcte_prev = NULL;
> > -		*tepp = tep;
> >  	}
> >  }
> > 
> > @@ -310,18 +315,21 @@ static int mctelem_drop_count;
> > 
> >  /* Reserve a telemetry entry, or return NULL if none available.
> >   * If we return an entry then the caller must subsequently call
> > exactly one of - * mctelem_unreserve or mctelem_commit for that entry.
> > + * mctelem_dismiss or mctelem_commit for that entry.
> >   */
> >  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
> >  {
> > -	struct mctelem_ent **freelp;
> > -	struct mctelem_ent *oldhead, *newhead;
> > +	unsigned long *freelp;
> > +	unsigned long oldfree;
> > +	unsigned bit;
> >  	mctelem_class_t target = (which == MC_URGENT) ?
> >  	    MC_URGENT : MC_NONURGENT;
> > 
> >  	freelp = &mctctl.mctc_free[target];
> >  	for (;;) {
> > -		if ((oldhead = *freelp) == NULL) {
> > +		oldfree = *freelp;
> > +
> > +		if (oldfree == 0) {
> >  			if (which == MC_URGENT && target == MC_URGENT) {
> >  				/* raid the non-urgent freelist */
> >  				target = MC_NONURGENT;
> > @@ -333,9 +341,11 @@ mctelem_cookie_t mctelem_reserve(mctelem_class_t
> >  			which) }
> >  		}
> > 
> > -		newhead = oldhead->mcte_next;
> > -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > -			struct mctelem_ent *tep = oldhead;
> > +		/* try to allocate, atomically clear free bit */
> > +		bit = find_first_set_bit(oldfree);
> > +		if (test_and_clear_bit(bit, freelp)) {
> > +			/* return element we got */
> > +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
> > 
> >  			mctelem_hold(tep);
> >  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:11:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiZk-0000eA-05; Tue, 18 Feb 2014 11:10:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WFiZi-0000e5-5E
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:10:54 +0000
Received: from [85.158.143.35:15216] by server-2.bemta-4.messagelabs.com id
	F5/11-10891-DBF33035; Tue, 18 Feb 2014 11:10:53 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392721851!6480898!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21396 invoked from network); 18 Feb 2014 11:10:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:10:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101700948"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:10:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:10:50 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WFiZd-00019j-NQ;
	Tue, 18 Feb 2014 11:10:49 +0000
Message-ID: <1392721844.27274.2.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: "Liu, Jinsong" <jinsong.liu@intel.com>
Date: Tue, 18 Feb 2014 11:10:44 +0000
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Christoph Egger <chegger@amazon.de>, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Yes,
  all these patches came from some customer reports. We managed to
reproduce the problem with xen-mceinj and hit these kinds of races. These
patches (the other one is already applied) fix the race issues we found.

I tested on a CentOS 6.4 installation running mcelog.

Frediano

On Tue, 2014-02-18 at 08:42 +0000, Liu, Jinsong wrote:
> This logic (mctelem) is related to dom0 mcelog logic. Have you tested if mcelog works fine with your patch?
> 
> Thanks,
> Jinsong
> 
> Frediano Ziglio wrote:
> > These lines (in mctelem_reserve)
> > 
> > 
> >         newhead = oldhead->mcte_next;
> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > 
> > are racy. After you read the newhead pointer, another flow (a thread
> > or a recursive invocation) can change the whole list yet leave the head
> > with the same value. So oldhead is the same as *freelp, but the new
> > head you install may point to any element, even one already in use.
> > 
> > This patch instead uses a bit array and atomic bit operations.
> > 
> > It uses a plain unsigned long rather than a bitmap type, since
> > testing for all zeroes is easier.
> > 
> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> > ---
> >  xen/arch/x86/cpu/mcheck/mctelem.c |   52
> >  ++++++++++++++++++++++--------------- 1 file changed, 31
> > insertions(+), 21 deletions(-) 
> > 
> > diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c
> > b/xen/arch/x86/cpu/mcheck/mctelem.c index 895ce1a..e56b6fb 100644
> > --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> > +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> > @@ -69,6 +69,11 @@ struct mctelem_ent {
> >  #define	MC_URGENT_NENT		10
> >  #define	MC_NONURGENT_NENT	20
> > 
> > +/* Check if we can fit enough bits in the free bit array */
> > +#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
> > +#error Too much elements
> > +#endif
> > +
> >  #define	MC_NCLASSES		(MC_NONURGENT + 1)
> > 
> >  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
> > @@ -77,11 +82,9 @@ struct mctelem_ent {
> >  static struct mc_telem_ctl {
> >  	/* Linked lists that thread the array members together.
> >  	 *
> > -	 * The free lists are singly-linked via mcte_next, and we allocate
> > -	 * from them by atomically unlinking an element from the head.
> > -	 * Consumed entries are returned to the head of the free list.
> > -	 * When an entry is reserved off the free list it is not linked
> > -	 * on any list until it is committed or dismissed.
> > +	 * The free lists is a bit array where bit 1 means free.
> > +	 * This as element number is quite small and is easy to
> > +	 * atomically allocate that way.
> >  	 *
> >  	 * The committed list grows at the head and we do not maintain a
> >  	 * tail pointer; insertions are performed atomically.  The head
> > @@ -101,7 +104,7 @@ static struct mc_telem_ctl {
> >  	 * we can lock it for updates.  The head of the processing list
> >  	 * always has the oldest telemetry, and we append (as above)
> >  	 * at the tail of the processing list. */
> > -	struct mctelem_ent *mctc_free[MC_NCLASSES];
> > +	unsigned long mctc_free[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
> >  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
> > @@ -214,7 +217,10 @@ static void mctelem_free(struct mctelem_ent *tep)
> >  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
> > 
> >  	tep->mcte_prev = NULL;
> > -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
> > +	tep->mcte_next = NULL;
> > +
> > +	/* set free in array */
> > +	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
> >  }
> > 
> >  /* Increment the reference count of an entry that is not linked on to
> > @@ -284,7 +290,7 @@ void mctelem_init(int reqdatasz)
> >  	}
> > 
> >  	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
> > -		struct mctelem_ent *tep, **tepp;
> > +		struct mctelem_ent *tep;
> > 
> >  		tep = mctctl.mctc_elems + i;
> >  		tep->mcte_flags = MCTE_F_STATE_FREE;
> > @@ -292,16 +298,15 @@ void mctelem_init(int reqdatasz)
> >  		tep->mcte_data = datarr + i * datasz;
> > 
> >  		if (i < MC_URGENT_NENT) {
> > -			tepp = &mctctl.mctc_free[MC_URGENT];
> > -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
> > +			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
> > +			tep->mcte_flags = MCTE_F_HOME_URGENT;
> >  		} else {
> > -			tepp = &mctctl.mctc_free[MC_NONURGENT];
> > -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
> > +			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
> > +			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
> >  		}
> > 
> > -		tep->mcte_next = *tepp;
> > +		tep->mcte_next = NULL;
> >  		tep->mcte_prev = NULL;
> > -		*tepp = tep;
> >  	}
> >  }
> > 
> > @@ -310,18 +315,21 @@ static int mctelem_drop_count;
> > 
> >  /* Reserve a telemetry entry, or return NULL if none available.
> >   * If we return an entry then the caller must subsequently call
> > exactly one of - * mctelem_unreserve or mctelem_commit for that entry.
> > + * mctelem_dismiss or mctelem_commit for that entry.
> >   */
> >  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
> >  {
> > -	struct mctelem_ent **freelp;
> > -	struct mctelem_ent *oldhead, *newhead;
> > +	unsigned long *freelp;
> > +	unsigned long oldfree;
> > +	unsigned bit;
> >  	mctelem_class_t target = (which == MC_URGENT) ?
> >  	    MC_URGENT : MC_NONURGENT;
> > 
> >  	freelp = &mctctl.mctc_free[target];
> >  	for (;;) {
> > -		if ((oldhead = *freelp) == NULL) {
> > +		oldfree = *freelp;
> > +
> > +		if (oldfree == 0) {
> >  			if (which == MC_URGENT && target == MC_URGENT) {
> >  				/* raid the non-urgent freelist */
> >  				target = MC_NONURGENT;
> > @@ -333,9 +341,11 @@ mctelem_cookie_t mctelem_reserve(mctelem_class_t
> >  			which) }
> >  		}
> > 
> > -		newhead = oldhead->mcte_next;
> > -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> > -			struct mctelem_ent *tep = oldhead;
> > +		/* try to allocate, atomically clear free bit */
> > +		bit = find_first_set_bit(oldfree);
> > +		if (test_and_clear_bit(bit, freelp)) {
> > +			/* return element we got */
> > +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
> > 
> >  			mctelem_hold(tep);
> >  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
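
The interleaving described in the patch above is the classic ABA problem.
Below is a minimal single-threaded C sketch that replays that interleaving
deterministically; all names (ent, push, pop, demonstrate_aba) are
illustrative, not taken from the Xen source.

```c
#include <assert.h>
#include <stddef.h>

struct ent { struct ent *next; };

static struct ent *head;

static void push(struct ent *e) { e->next = head; head = e; }

/* Non-atomic model of the freelist pop: the compare stands in for
 * cmpxchgptr(freelp, oldhead, newhead). */
static struct ent *pop(void)
{
    struct ent *old = head;
    if (old == NULL)
        return NULL;
    struct ent *new = old->next;
    if (head == old) {        /* "cmpxchg" succeeds */
        head = new;
        return old;
    }
    return NULL;
}

/* Replay the race: thread 1 snapshots head and next, is "preempted",
 * thread 2 pops a, pops b (b is now in use) and pushes a back, then
 * thread 1 resumes and its compare still succeeds. */
struct ent *demonstrate_aba(struct ent *a, struct ent *b, struct ent *c)
{
    head = NULL;
    push(c); push(b); push(a);            /* list: a -> b -> c */

    struct ent *old = head;               /* thread 1: old == a */
    struct ent *new = old->next;          /* thread 1: new == b (stale) */

    pop(); pop(); push(a);                /* thread 2: list is now a -> c */

    if (head == old)                      /* head == a again, so... */
        head = new;                       /* ...b, an in-use element, */
    return head;                          /* becomes the freelist head */
}
```

Running demonstrate_aba shows the freelist head ending up pointing at the
in-use element b, which is exactly the corruption the bit-array patch
avoids.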

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:23:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFilS-0000qZ-E0; Tue, 18 Feb 2014 11:23:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFilQ-0000qU-Q5
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 11:23:01 +0000
Received: from [85.158.139.211:5695] by server-10.bemta-5.messagelabs.com id
	4E/87-08578-49243035; Tue, 18 Feb 2014 11:23:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392722577!4635351!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12701 invoked from network); 18 Feb 2014 11:22:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:22:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103430837"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 11:22:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:22:56 -0500
Message-ID: <1392722575.11080.28.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Tue, 18 Feb 2014 11:22:55 +0000
In-Reply-To: <5301E496.40802@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-4-git-send-email-mcgrof@do-not-panic.com>
	<5301E496.40802@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	netdev@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@suse.com>,
	linux-kernel@vger.kernel.org, Paul Durrant <Paul.Durrant@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 3/4] xen-netback: use a random MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 10:29 +0000, David Vrabel wrote:
> On 15/02/14 02:59, Luis R. Rodriguez wrote:
> > From: "Luis R. Rodriguez" <mcgrof@suse.com>
> > 
> > The purpose of using a static MAC address of FE:FF:FF:FF:FF:FF
> > was to prevent our backend interfaces from being used by the
> > bridge and nominating our interface as a root bridge. This was
> > possible given that the bridge code will use the lowest MAC
> > address for a port once a new interface gets added to the bridge.
> > The bridge code now has a generic feature that lets interfaces
> > opt out of root bridge nomination; use that instead.
> [...]
> > --- a/drivers/net/xen-netback/interface.c
> > +++ b/drivers/net/xen-netback/interface.c
> > @@ -42,6 +42,8 @@
> >  #define XENVIF_QUEUE_LENGTH 32
> >  #define XENVIF_NAPI_WEIGHT  64
> >  
> > +static const u8 xen_oui[3] = { 0x00, 0x16, 0x3e };
> 
> You shouldn't use a vendor prefix with a random MAC address.  You should
> set the locally administered bit and clear the multicast/unicast bit and
> randomize the remaining 46 bits.

I'd have thought that eth_hw_addr_random would get this right, *checks*
yes it does. And then this patch tramples over the top three bytes.

Might there be any requirement to have a specific MAC on the vif device?
IOW, do we need to figure out a way to plumb this through the Xen tools
(perhaps having the vif script sort it out)?

Speaking of which -- don't the Xen tools overwrite this random MAC in
xen-network-common.sh:_setup_bridge_port? What is the plan to change
that (in a forwards/backwards compatible manner)?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
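
For reference, the invariants David describes live in the two low-order
bits of the first address byte. A rough sketch of what a compliant random
MAC generator does; random_mac is an illustrative name, not the kernel's
eth_hw_addr_random():

```c
#include <stdint.h>
#include <stdlib.h>

/* Fill addr with a random MAC that is valid for a virtual interface:
 * bit 0 of byte 0 cleared (unicast, not multicast) and bit 1 set
 * (locally administered), with the remaining 46 bits random. */
static void random_mac(uint8_t addr[6])
{
    for (int i = 0; i < 6; i++)
        addr[i] = (uint8_t)(rand() & 0xff);
    addr[0] &= 0xfe;   /* clear multicast bit */
    addr[0] |= 0x02;   /* set locally administered bit */
}
```

Overwriting byte 0 with a fixed OUI such as 00:16:3e, as the RFC patch
does, discards both of these bits, which is the objection raised above.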

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:25:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:25:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFioA-0000x0-0M; Tue, 18 Feb 2014 11:25:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFio9-0000wv-6L
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:25:49 +0000
Received: from [193.109.254.147:53866] by server-11.bemta-14.messagelabs.com
	id D8/AB-24604-C3343035; Tue, 18 Feb 2014 11:25:48 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392722746!5069064!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2319 invoked from network); 18 Feb 2014 11:25:47 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 11:25:47 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 18 Feb 2014 03:25:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="483360103"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga002.fm.intel.com with ESMTP; 18 Feb 2014 03:25:46 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 03:25:45 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.202]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 19:25:43 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Thread-Topic: [PATCH] MCE: Fix race condition in mctelem_reserve
Thread-Index: AQHPLJoRgMI0KUqfrUe4gcMRCMo09Jq63Y3w
Date: Tue, 18 Feb 2014 11:25:43 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F79BC@SHSMSX101.ccr.corp.intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
	<1392721844.27274.2.camel@hamster.uk.xensource.com>
In-Reply-To: <1392721844.27274.2.camel@hamster.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Christoph Egger <chegger@amazon.de>, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks!

Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
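
The bit-array allocation scheme under review can be modelled by the
following simplified single-threaded C sketch; __builtin_ctzl() stands in
for Xen's find_first_set_bit(), and plain bit operations stand in for the
atomic test_and_clear_bit()/set_bit() used by the real patch.

```c
#include <limits.h>

/* Bit i set means element i is free. */
static unsigned long freemap;

/* Reserve the lowest-numbered free element, or -1 if none is free. */
static int reserve(void)
{
    if (freemap == 0)
        return -1;                        /* freelist exhausted */
    int bit = __builtin_ctzl(freemap);    /* find_first_set_bit() analogue */
    freemap &= ~(1UL << bit);             /* test_and_clear_bit() analogue */
    return bit;
}

/* Return an element to the free set. */
static void release(int bit)
{
    freemap |= 1UL << bit;                /* set_bit() analogue */
}
```

Because the claim is a single atomic bit clear in the real code, there is
no window in which a stale next-pointer snapshot can be installed, which
is how the ABA race in the old cmpxchgptr loop is eliminated.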


Frediano Ziglio wrote:
> Yes,
>   all these patches came from some customer reports. We managed to
> reproduce the problem with xen-mceinj and hit these kinds of races.
> These patches (the other one is already applied) fix the race issues
> we found.
> 
> I tested on a CentOS 6.4 installation running mcelog.
> 
> Frediano
> 
> On Tue, 2014-02-18 at 08:42 +0000, Liu, Jinsong wrote:
>> This logic (mctelem) is related to dom0 mcelog logic. Have you
>> tested if mcelog works fine with your patch? 
>> 
>> Thanks,
>> Jinsong
>> 
>> Frediano Ziglio wrote:
>>> These lines (in mctelem_reserve)
>>> 
>>> 
>>>         newhead = oldhead->mcte_next;
>>>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>> 
>>> are racy. After you read the newhead pointer, another flow (a thread
>>> or a recursive invocation) can change the whole list yet leave the
>>> head with the same value. So oldhead is the same as *freelp, but the
>>> new head you install may point to any element, even one already in
>>> use.
>>> 
>>> This patch instead uses a bit array and atomic bit operations.
>>> 
>>> It uses a plain unsigned long rather than a bitmap type, since
>>> testing for all zeroes is easier.
>>> 
>>> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com> ---
>>>  xen/arch/x86/cpu/mcheck/mctelem.c |   52
>>>  ++++++++++++++++++++++--------------- 1 file changed, 31
>>> insertions(+), 21 deletions(-)
>>> 
>>> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c
>>> b/xen/arch/x86/cpu/mcheck/mctelem.c index 895ce1a..e56b6fb 100644
>>> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>>> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>>> @@ -69,6 +69,11 @@ struct mctelem_ent {
>>>  #define	MC_URGENT_NENT		10
>>>  #define	MC_NONURGENT_NENT	20
>>> 
>>> +/* Check if we can fit enough bits in the free bit array */
>>> +#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG +#error Too
>>> much elements +#endif
>>> +
>>>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>>> 
>>>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>>> @@ -77,11 +82,9 @@ struct mctelem_ent {
>>>  static struct mc_telem_ctl {


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:25:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:25:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFioA-0000x0-0M; Tue, 18 Feb 2014 11:25:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFio9-0000wv-6L
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:25:49 +0000
Received: from [193.109.254.147:53866] by server-11.bemta-14.messagelabs.com
	id D8/AB-24604-C3343035; Tue, 18 Feb 2014 11:25:48 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392722746!5069064!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2319 invoked from network); 18 Feb 2014 11:25:47 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 11:25:47 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 18 Feb 2014 03:25:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="483360103"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga002.fm.intel.com with ESMTP; 18 Feb 2014 03:25:46 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.4.69) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 03:25:45 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.202]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 19:25:43 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Thread-Topic: [PATCH] MCE: Fix race condition in mctelem_reserve
Thread-Index: AQHPLJoRgMI0KUqfrUe4gcMRCMo09Jq63Y3w
Date: Tue, 18 Feb 2014 11:25:43 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F79BC@SHSMSX101.ccr.corp.intel.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7153@SHSMSX101.ccr.corp.intel.com>
	<1392721844.27274.2.camel@hamster.uk.xensource.com>
In-Reply-To: <1392721844.27274.2.camel@hamster.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Christoph Egger <chegger@amazon.de>, David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] MCE: Fix race condition in mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks!

Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>


Frediano Ziglio wrote:
> Yes,
>   all these patches came from customer reports. We managed to
> reproduce the problem with xen-mceinj and hit these types of races.
> These patches (the other one is already applied) fix the race issues
> we found.
> 
> I tested with CentOS 6.4 running mcelog.
> 
> Frediano
> 
> On Tue, 2014-02-18 at 08:42 +0000, Liu, Jinsong wrote:
>> This logic (mctelem) is related to the dom0 mcelog logic. Have you
>> tested whether mcelog works fine with your patch? 
>> 
>> Thanks,
>> Jinsong
>> 
>> Frediano Ziglio wrote:
>>> These lines (in mctelem_reserve)
>>> 
>>> 
>>>         newhead = oldhead->mcte_next;
>>>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>> 
>>> are racy. After you read the newhead pointer, another flow (a thread
>>> or a recursive invocation) can change the entire list yet leave the
>>> head with the same value -- the classic ABA problem. So oldhead still
>>> equals *freelp, but you end up installing a new head that could point
>>> to any element (even one already in use).
>>> 
>>> This patch uses a bit array and atomic bit operations instead.
>>> 
>>> It uses a plain unsigned long rather than a bitmap type, as testing
>>> for all zeroes is easier.
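
[Editor's note] The interleaving described above can be replayed deterministically. The sketch below uses user-space C11 atomics and a hypothetical minimal `struct ent` (not Xen's types); it performs the interfering pops and push between the head read and the compare-exchange, and the stale successor still gets installed:

```c
#include <stdatomic.h>
#include <stddef.h>

struct ent { struct ent *next; };

struct ent A, B, C;
_Atomic(struct ent *) head;

/* Returns the list head after the racy pop completes; under this ABA
 * interleaving it is &B, even though B was concurrently taken. */
struct ent *demonstrate_aba(void)
{
    /* Initial free list: A -> B -> C */
    A.next = &B; B.next = &C; C.next = NULL;
    atomic_store(&head, &A);

    struct ent *oldhead = atomic_load(&head);   /* flow 1 reads &A */
    struct ent *newhead = oldhead->next;        /* and successor &B */

    /* Interfering flow: pops A, pops B (B is now in use), frees A back.
     * The head pointer value is &A again, but A->next changed to &C. */
    atomic_store(&head, &C);
    A.next = &C;
    atomic_store(&head, &A);

    /* Flow 1's compare-exchange succeeds, because only the head pointer
     * value is compared, and installs the stale newhead: the in-use
     * entry B is handed out as the new free-list head. */
    struct ent *expected = oldhead;
    if (atomic_compare_exchange_strong(&head, &expected, newhead))
        return atomic_load(&head);
    return NULL;  /* not reached in this deterministic replay */
}
```

The compare succeeds on the pointer value alone, which is exactly why the patch switches to per-entry bits.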
>>> 
>>> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>>> ---
>>>  xen/arch/x86/cpu/mcheck/mctelem.c |   52 ++++++++++++++++++++++---------------
>>>  1 file changed, 31 insertions(+), 21 deletions(-)
>>> 
>>> diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
>>> index 895ce1a..e56b6fb 100644
>>> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>>> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>>> @@ -69,6 +69,11 @@ struct mctelem_ent {
>>>  #define	MC_URGENT_NENT		10
>>>  #define	MC_NONURGENT_NENT	20
>>> 
>>> +/* Check if we can fit enough bits in the free bit array */
>>> +#if MC_URGENT_NENT + MC_NONURGENT_NENT > BITS_PER_LONG
>>> +#error Too many elements
>>> +#endif
>>> +
>>>  #define	MC_NCLASSES		(MC_NONURGENT + 1)
>>> 
>>>  #define	COOKIE2MCTE(c)		((struct mctelem_ent *)(c))
>>> @@ -77,11 +82,9 @@ struct mctelem_ent {
>>>  static struct mc_telem_ctl {
>>>  	/* Linked lists that thread the array members together.
>>>  	 *
>>> -	 * The free lists are singly-linked via mcte_next, and we allocate
>>> -	 * from them by atomically unlinking an element from the head.
>>> -	 * Consumed entries are returned to the head of the free list.
>>> -	 * When an entry is reserved off the free list it is not linked
>>> -	 * on any list until it is committed or dismissed.
>>> +	 * The free lists are bit arrays where a set bit means free.
>>> +	 * Since the number of elements is quite small, it is easy to
>>> +	 * allocate atomically that way.
>>>  	 *
>>>  	 * The committed list grows at the head and we do not maintain a
>>>  	 * tail pointer; insertions are performed atomically.  The head
>>> @@ -101,7 +104,7 @@ static struct mc_telem_ctl {
>>>  	 * we can lock it for updates.  The head of the processing list
>>>  	 * always has the oldest telemetry, and we append (as above)
>>>  	 * at the tail of the processing list. */
>>> -	struct mctelem_ent *mctc_free[MC_NCLASSES];
>>> +	unsigned long mctc_free[MC_NCLASSES];
>>>  	struct mctelem_ent *mctc_committed[MC_NCLASSES];
>>>  	struct mctelem_ent *mctc_processing_head[MC_NCLASSES];
>>>  	struct mctelem_ent *mctc_processing_tail[MC_NCLASSES];
>>> @@ -214,7 +217,10 @@ static void mctelem_free(struct mctelem_ent *tep)
>>>  	BUG_ON(MCTE_STATE(tep) != MCTE_F_STATE_FREE);
>>> 
>>>  	tep->mcte_prev = NULL;
>>> -	mctelem_xchg_head(&mctctl.mctc_free[target], &tep->mcte_next, tep);
>>> +	tep->mcte_next = NULL;
>>> +
>>> +	/* set free in array */
>>> +	set_bit(tep - mctctl.mctc_elems, &mctctl.mctc_free[target]);
>>>  }
>>> 
>>>  /* Increment the reference count of an entry that is not linked on to
>>> @@ -284,7 +290,7 @@ void mctelem_init(int reqdatasz)
>>>  	}
>>> 
>>>  	for (i = 0; i < MC_URGENT_NENT + MC_NONURGENT_NENT; i++) {
>>> -		struct mctelem_ent *tep, **tepp;
>>> +		struct mctelem_ent *tep;
>>> 
>>>  		tep = mctctl.mctc_elems + i;
>>>  		tep->mcte_flags = MCTE_F_STATE_FREE;
>>> @@ -292,16 +298,15 @@ void mctelem_init(int reqdatasz)
>>>  		tep->mcte_data = datarr + i * datasz;
>>> 
>>>  		if (i < MC_URGENT_NENT) {
>>> -			tepp = &mctctl.mctc_free[MC_URGENT];
>>> -			tep->mcte_flags |= MCTE_F_HOME_URGENT;
>>> +			__set_bit(i, &mctctl.mctc_free[MC_URGENT]);
>>> +			tep->mcte_flags = MCTE_F_HOME_URGENT;
>>>  		} else {
>>> -			tepp = &mctctl.mctc_free[MC_NONURGENT];
>>> -			tep->mcte_flags |= MCTE_F_HOME_NONURGENT;
>>> +			__set_bit(i, &mctctl.mctc_free[MC_NONURGENT]);
>>> +			tep->mcte_flags = MCTE_F_HOME_NONURGENT;
>>>  		}
>>> 
>>> -		tep->mcte_next = *tepp;
>>> +		tep->mcte_next = NULL;
>>>  		tep->mcte_prev = NULL;
>>> -		*tepp = tep;
>>>  	}
>>>  }
>>> 
>>> @@ -310,18 +315,21 @@ static int mctelem_drop_count;
>>> 
>>>  /* Reserve a telemetry entry, or return NULL if none available.
>>>   * If we return an entry then the caller must subsequently call exactly one of
>>> - * mctelem_unreserve or mctelem_commit for that entry.
>>> + * mctelem_dismiss or mctelem_commit for that entry.
>>>   */
>>>  mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>>>  {
>>> -	struct mctelem_ent **freelp;
>>> -	struct mctelem_ent *oldhead, *newhead;
>>> +	unsigned long *freelp;
>>> +	unsigned long oldfree;
>>> +	unsigned bit;
>>>  	mctelem_class_t target = (which == MC_URGENT) ?
>>>  	    MC_URGENT : MC_NONURGENT;
>>> 
>>>  	freelp = &mctctl.mctc_free[target];
>>>  	for (;;) {
>>> -		if ((oldhead = *freelp) == NULL) {
>>> +		oldfree = *freelp;
>>> +
>>> +		if (oldfree == 0) {
>>>  			if (which == MC_URGENT && target == MC_URGENT) {
>>>  				/* raid the non-urgent freelist */
>>>  				target = MC_NONURGENT;
>>> @@ -333,9 +341,11 @@ mctelem_cookie_t mctelem_reserve(mctelem_class_t which)
>>>  			}
>>>  		}
>>> 
>>> -		newhead = oldhead->mcte_next;
>>> -		if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>> -			struct mctelem_ent *tep = oldhead;
>>> +		/* try to allocate, atomically clear free bit */
>>> +		bit = find_first_set_bit(oldfree);
>>> +		if (test_and_clear_bit(bit, freelp)) {
>>> +			/* return element we got */
>>> +			struct mctelem_ent *tep = mctctl.mctc_elems + bit;
>>> 
>>>  			mctelem_hold(tep);
>>>  			MCTE_TRANSITION_STATE(tep, FREE, UNCOMMITTED);
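
[Editor's note] The fixed reserve path quoted above can be sketched in user space along the same lines. This is a sketch under assumptions -- a single-word mask, C11 compare-exchange standing in for Xen's find_first_set_bit/test_and_clear_bit retry loop, GCC's __builtin_ctzl, and invented names -- not the hypervisor code:

```c
#include <stdatomic.h>

#define NENT 30  /* MC_URGENT_NENT + MC_NONURGENT_NENT */

_Atomic unsigned long free_mask = (1UL << NENT) - 1;  /* all entries free */

/* Atomically claim the lowest free entry; -1 if none. Unlike the old
 * list pop, success proves the bit was still set when we cleared it,
 * so a stale pointer can never be handed out. */
int reserve_entry(void)
{
    unsigned long old = atomic_load(&free_mask);

    while (old != 0) {
        int bit = __builtin_ctzl(old);          /* lowest set bit */

        if (atomic_compare_exchange_weak(&free_mask, &old,
                                         old & ~(1UL << bit)))
            return bit;                         /* bit cleared: entry is ours */
        /* CAS failed: old was reloaded with the current mask, retry */
    }
    return -1;                                  /* pool exhausted */
}

/* Return an entry to the free pool. */
void release_entry(int bit)
{
    atomic_fetch_or(&free_mask, 1UL << bit);
}
```

The whole allocation state lives in one word, so there is no successor pointer to go stale between the read and the atomic update.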


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:29:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFirO-000170-E4; Tue, 18 Feb 2014 11:29:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WFirL-000165-Rx
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 11:29:08 +0000
Received: from [85.158.143.35:48744] by server-2.bemta-4.messagelabs.com id
	25/E3-10891-30443035; Tue, 18 Feb 2014 11:29:07 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392722946!6476675!1
X-Originating-IP: [74.125.82.179]
X-SpamReason: No, hits=3.0 required=7.0 tests=BODY_RANDOM_LONG,
	RCVD_BY_IP,SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4378 invoked from network); 18 Feb 2014 11:29:06 -0000
Received: from mail-we0-f179.google.com (HELO mail-we0-f179.google.com)
	(74.125.82.179)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:29:06 -0000
Received: by mail-we0-f179.google.com with SMTP id q58so11295586wes.24
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 03:29:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=i9H2TKAjA0H28GzQkyygvj2vJjkC1dSEfKKk9/10vWs=;
	b=oM8IjXZ1e679G90KdqYlt1GLvhcjGHjCXYZUhvT1tTPcEBKcNShqzuRYBrhEcgnxF1
	n81nKx5UNJVVpaaf2te7QNzcSx5JDl9+RLrbFDK52B4RBgjNZWBoQSJINERipbYoxgoU
	+57lQPiwa/Mm/xc3QvZX46Gh6lxje/K/RXd9PDJ+ZnJ5c1Rjwy5QSRRD7YXYGSMf4UEM
	FukDV4Ot0ArbQBozC+WPAbt6v26GZPrt1GUDn7RpVkE1vUfJdBhRd1zaCRJKjQFiVin2
	P5tZrqE0iYXljZYmUzd3RdcC1M7M/vcCCcakvbRIVpM6f8MvLTpN8HuHHdB2kN9MWLkF
	TWng==
MIME-Version: 1.0
X-Received: by 10.180.185.197 with SMTP id fe5mr17364810wic.56.1392722945886; 
	Tue, 18 Feb 2014 03:29:05 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 18 Feb 2014 03:29:05 -0800 (PST)
In-Reply-To: <1392715101.32038.466.camel@Solace>
References: <1392715101.32038.466.camel@Solace>
Date: Tue, 18 Feb 2014 11:29:05 +0000
X-Google-Sender-Auth: pyBui7PODgNun46e3YTlQnYg7R8
Message-ID: <CAFLBxZY2-837vd2AcJ9fGOnHvTMyewfAxaAuVptYfeq-Y9i_rw@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Dario Faggioli <raistlin@linux.it>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	xen <xen@lists.fedoraproject.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	cl-mirage <cl-mirage@lists.cam.ac.uk>,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>
Subject: Re: [Xen-devel] Today is Xen Project Test Day for 4.4 RC4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 9:18 AM, Dario Faggioli <raistlin@linux.it> wrote:
> This is a reminder that today is the Xen Project Test Day for Xen 4.4
> RC4.
>
> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
>
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC4_test_instructions

From a release management perspective, there are a couple of big
changes since RC3 that need testing:

* PVH mode has been fixed.  This involved a change to the nested virt code.

* Building with the latest release of Clang has been fixed.  This may
affect builds on other compilers -- please test different versions of
clang and gcc.

* A big patch series fixing races in libvirt/libxl was checked in.
Testing with libvirt or xl would be helpful.

* A patch fixing guest floating point support on ARM systems was
checked in.  Please run some programs that use the floating point
unit.

* A patch fixing an issue with device pass-through for HVM guests when
a VNC client is attached was checked in.  Please test migration, in
particular with and without a VNC client attached; and device
pass-through, with and without a VNC client attached.  (Migration with
a device passed through is not supported.)

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:29:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFirj-0001AQ-S8; Tue, 18 Feb 2014 11:29:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFiri-00019l-IF
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:29:30 +0000
Received: from [85.158.137.68:46778] by server-5.bemta-3.messagelabs.com id
	B1/A2-04712-91443035; Tue, 18 Feb 2014 11:29:29 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392722967!2608549!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32594 invoked from network); 18 Feb 2014 11:29:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:29:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; 
	d="asc'?scan'208";a="101705548"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:29:27 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:29:25 -0500
Message-ID: <1392722963.32038.494.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Meng Xu <xumengpanda@gmail.com>
Date: Tue, 18 Feb 2014 12:29:23 +0100
In-Reply-To: <1392714890.32038.463.camel@Solace>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
	<1392714890.32038.463.camel@Solace>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7593149654821441216=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7593149654821441216==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-qqk6Uz7K8QMmP2DS28IE"

--=-qqk6Uz7K8QMmP2DS28IE
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2014-02-18 at 10:14 +0100, Dario Faggioli wrote:
> Boris (whom I'm Cc-ing) gave a presentation about this at the latest Xen
> Developers Summit:
> http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>
And there appears to be a new version of this work, released just
yesterday! :-)

Have a look here:
http://bugs.xenproject.org/xen/mid/%3C1392659764-22183-1-git-send-email-boris.ostrovsky@oracle.com%3E

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-qqk6Uz7K8QMmP2DS28IE
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDRBMACgkQk4XaBE3IOsS+ZQCgjlKqWoYSQrrLipymzBmoaZFH
zjwAn2Na7HspCc0B6Vkul8R18MfksvcG
=pV/d
-----END PGP SIGNATURE-----

--=-qqk6Uz7K8QMmP2DS28IE--


--===============7593149654821441216==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7593149654821441216==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 11:30:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:30:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFisl-0001Op-KT; Tue, 18 Feb 2014 11:30:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFisj-0001ON-E3
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 11:30:33 +0000
Received: from [85.158.143.35:51413] by server-2.bemta-4.messagelabs.com id
	31/C6-10891-85443035; Tue, 18 Feb 2014 11:30:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392723030!6468686!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1115 invoked from network); 18 Feb 2014 11:30:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:30:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101705706"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:30:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 06:30:18 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFisT-0002XV-Lp;
	Tue, 18 Feb 2014 11:30:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFisR-0005gn-O6;
	Tue, 18 Feb 2014 11:30:15 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21251.17478.82364.547362@mariner.uk.xensource.com>
Date: Tue, 18 Feb 2014 11:30:14 +0000
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Jan Beulich
	<JBeulich@suse.com>
In-Reply-To: <53033FDD020000780011D2A8@nat28.tlf.novell.com>,
	<alpine.DEB.2.02.1402181055100.27926@kaball.uk.xensource.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
	<21250.15776.972814.117850@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402181055100.27926@kaball.uk.xensource.com>
	<53033FDD020000780011D2A8@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master [and 1
	more messages]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: Proposed force push of staging to master"):
> Do you propose this just for the RC phase, or do you propose
> reverting the decision to have this set to master altogether?

Just for the RC phase.

> In either case, it was only pretty recently that we decided to use
> master here, and I don't think you objected, so I'm a little puzzled
> by the proposal.

We decided to use master here "most of the time".

> I personally think that using master and an explicit tag for RCs is just
> as appropriate as properly naming the RC in
> xen/Makefile:XEN_EXTRAVERSION, and that doing the respective
> adjustments could - if one wanted to - likely be fully automated (albeit
> when I'm doing the same on the stable branches, I didn't bother
> scripting this so far as it's just not happening frequently enough to
> warrant this).

Yes, I agree that it could be automated.  But there's a problem with
doing this on a routine basis.  As you can see from the test report
just sent, a force push requires osstest to construct a new baseline
test.  If that baseline test fails for some transient reason (eg,
infrastructure problems, or the git servers being down at the wrong
moment), then osstest would be unable to spot any actual regressions
in the affected tests.

An alternative would be to make these part-of-the-release changes on a
separate git branch, but then people who want to test an RC would have
to do something other than just "git clone xen.git".

It would be possible to have osstest spot the force push and try to
find a tested ancestor of the current master to use as a baseline.
But that would make it impossible to use a forced push to deliberately
introduce a known but tolerable regression.
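
[The "find a tested ancestor" idea can be sketched in a few lines of
shell.  This is purely illustrative, not osstest code: the function
name and the newest-first tested-revisions file are invented for this
sketch.]

```shell
# Illustrative only -- not osstest code.  Given a file listing
# previously tested commit hashes (newest first) and a tip revision,
# print the first tested commit that is an ancestor of the tip, for
# use as a comparison baseline.
find_baseline() {
    tip=$1
    tested_list=$2
    while read -r rev; do
        if git merge-base --is-ancestor "$rev" "$tip" 2>/dev/null; then
            printf '%s\n' "$rev"
            return 0
        fi
    done <"$tested_list"
    return 1  # no tested ancestor: a new baseline test is unavoidable
}
```

[As noted above, any such automation would still need an escape hatch,
since a force push is sometimes used precisely to move past a known
regression.]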

Stefano Stabellini writes ("Re: Proposed force push of staging to master"):
> On Mon, 17 Feb 2014, Ian Jackson wrote:
> > That is I think the best workflow is:
> >   * make a change to staging/qemu-upstream-unstable.git
> >   * wait for push gate to put it in qemu-upstream-unstable.git
> 
> Does this work because the test infrastructure doesn't obey Config.mk?

Yes.

> In fact, even if the test infrastructure does test the new changes by
> manually setting QEMU_UPSTREAM_REVISION, following your proposed
> workflow we would still miss all the possible bug reports from the
> community between RCs.

I'm suggesting that whenever the qemu-upstream tree is updated, the
hash in Config.mk should be updated too, the way it's done for qemu
trad.
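
[That bookkeeping is easy to script.  As a hedged sketch: the
QEMU_UPSTREAM_REVISION variable does exist in xen.git's Config.mk, but
this helper function is invented for illustration, not an existing
xen.git script.]

```shell
# Illustrative helper, not an existing xen.git script: rewrite the
# QEMU_UPSTREAM_REVISION line in a Config.mk-style file to pin a
# specific commit hash, the way qemu-traditional's hash is maintained.
pin_qemu_upstream() {
    rev=$1
    config=$2
    sed "s/^QEMU_UPSTREAM_REVISION ?=.*/QEMU_UPSTREAM_REVISION ?= $rev/" \
        "$config" > "$config.tmp" && mv "$config.tmp" "$config"
}
```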

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:35:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFixG-0001vm-SM; Tue, 18 Feb 2014 11:35:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFixE-0001vZ-WF
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:35:13 +0000
Received: from [85.158.139.211:22487] by server-4.bemta-5.messagelabs.com id
	BF/13-08092-07543035; Tue, 18 Feb 2014 11:35:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392723309!4639112!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16556 invoked from network); 18 Feb 2014 11:35:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:35:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103434161"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 11:35:08 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 06:35:08 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFixA-0002Yl-60;
	Tue, 18 Feb 2014 11:35:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFix8-0005hQ-GY;
	Tue, 18 Feb 2014 11:35:06 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21251.17769.656791.139815@mariner.uk.xensource.com>
Date: Tue, 18 Feb 2014 11:35:05 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392718729.11080.13.camel@kazak.uk.xensource.com>
References: <20140215221737.GA28254@aepfle.de>
	<1392718729.11080.13.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] missing dependency on libxlu_disk_l.h"):
> On Sat, 2014-02-15 at 23:17 +0100, Olaf Hering wrote:
> > I'm not sure if libxlu_disk_l.h is a generated file.
> 
> It is, but we also check in the generated version, IIRC due to a bug in
> some versions of flex, but also partially for convenience to avoid the
> need for flex on all development systems.

Right.

> >  But just once I saw
> > this failure below with automated build and make -j 16. This source tree
> > has the discard-enable patch, which changes tools/libxl/libxlu_disk_l.l.
> > As a result libxlu_disk_l.c is regenerated, see the flex call below.
> 
> It might be a good idea to either also patch the generated files or to
> have the patch remove them, to avoid any possible confusion due to skew.

This ought to be taken care of by the build system, provided that, in
the final patch, you don't commit only the change to the .l file
without the corresponding changes to the generated .[ch] files.

> > How should make become aware of the libxlu_disk_l.h dependency?
> 
> I think it would probably need explicitly specifying like we do for the
> IDL generated files.

There is already a place to put this.  See diff below.  Sorry for not
doing this at the time.

> What deleted that file though?

make might have done.

Olaf, can you test whether this diff makes the problem go away for you?

Thanks,
Ian.

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index dab2929..755b666 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -98,7 +98,7 @@ TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
 $(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
-	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
+	libxlu_disk_l.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
 AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
 AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
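
[For reference, the general shape of the fix -- making objects depend
explicitly on the flex-generated header so a parallel make cannot race
past it -- looks like the following.  This is a simplified sketch, not
the exact tools/libxl/Makefile rules; a GNU make pattern rule with two
targets runs its recipe once for both, so flex is not invoked twice
under "make -j".]

```make
# Sketch only: flex produces both the scanner and its header in one
# (grouped) pattern rule, and every object that includes the header
# depends on it explicitly, so parallel make orders the steps correctly.
%_l.c %_l.h: %_l.l
	$(FLEX) --header-file=$*_l.h -o$*_l.c $<

$(LIBXLU_OBJS): libxlu_disk_l.h
```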

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] missing dependency on libxlu_disk_l.h"):
> On Sat, 2014-02-15 at 23:17 +0100, Olaf Hering wrote:
> > I'm not sure if libxlu_disk_l.h is a generated file.
> 
> It is, but we also check in the generated version, IIRC due to a bug in
> some versions of flex, but also partially for convenience to avoid the
> need for flex on all development systems.

Right.

> >  But just once I saw
> > this failure below with automated build and make -j 16. This source tree
> > has the discard-enable patch, which changes tools/libxl/libxlu_disk_l.l.
> > As a result libxlu_disk_l.c is regenerated, see the flex call below.
> 
> It might be a good idea to either also patch the generated files or to
> have the patch remove them, to avoid any possible confusion due to skew.

This ought to be taken care of by the build system, provided that, in
the final patch, you don't git commit only the change to .l without the
corresponding change to .[ch].

> > How should make become aware of the libxlu_disk_l.h dependency?
> 
> I think it would probably need explicitly specifying like we do for the
> IDL generated files.

There is already a place to put this.  See diff below.  Sorry for not
doing this at the time.

> What deleted that file though?

make might have done.

Olaf, can you test whether this diff makes the problem go away for you ?

Thanks,
Ian.

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index dab2929..755b666 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -98,7 +98,7 @@ TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
 $(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
-	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
+	libxlu_disk_l.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
 AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
 AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:36:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFiyU-00020u-9e; Tue, 18 Feb 2014 11:36:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFiyS-00020k-Gh
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 11:36:28 +0000
Received: from [193.109.254.147:30523] by server-16.bemta-14.messagelabs.com
	id 65/FD-21945-BB543035; Tue, 18 Feb 2014 11:36:27 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392723385!5106643!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23321 invoked from network); 18 Feb 2014 11:36:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:36:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; 
	d="asc'?scan'208";a="103434882"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 11:36:25 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:36:24 -0500
Message-ID: <1392723382.32038.495.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 12:36:22 +0100
In-Reply-To: <21246.25423.419772.949039@mariner.uk.xensource.com>
References: <1391005955.21756.7.camel@Abyss>
	<1392401513.32038.348.camel@Solace>
	<21246.25423.419772.949039@mariner.uk.xensource.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f'
 option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4086905968455837331=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4086905968455837331==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-86OFeTQ+cmJ+ui9gc8rp"

--=-86OFeTQ+cmJ+ui9gc8rp
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-02-14 at 18:41 +0000, Ian Jackson wrote:
> Dario Faggioli writes ("Re: [Xen-devel] [OSSTest] standalone-reset: actually honour '-f' option"):
> > On mer, 2014-01-29 at 14:32 +0000, Dario Faggioli wrote:
> > > standalone-reset's usage says:
> > >
> > >   usage: ./standalone-reset [<options>] [<branch> [<xenbranch> [<buildflight>]]]
> > >    branch and xenbranch default, separately, to xen-unstable
> > >   options:
> > >    -f<flight>     generate flight "flight", default is "standalone"
> > >
> > > but then there is no place where '-f' is processed, and hence
> > > no real way to pass a specific flight name to make-flight.
> > >
> > > Signed-off-by: Dario Faggioli <dario.faggioli@citrix.com>
>
> Right.  I don't use standalone mode much, so sorry about that.
>
I know... That's fine. :-)

> This patch leads me to an observation: I looked at the code in
> standalone-reset and it appears to me that there is not currently
> anything which sets "$flight".
>
Indeed, that's what this does.

> So the "DELETE" statements used if there's an existing db won't have
> any effect.  This doesn't cause any strange effects because
> Osstest/JobDB/Standalone.pm deletes them too.
>
> I think it would be best to delete that part of standalone-reset.  Do
> you agree ?
>
Well, if it's either never invoked (right now, without this patch) or
duplicate (with this patch), I certainly think it can be removed.

I'll send a patch to that effect.

> In the meantime I have added your patch to my queue branch.
>
Ok, thanks.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-86OFeTQ+cmJ+ui9gc8rp
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDRbYACgkQk4XaBE3IOsS67ACfdrNordUkbwIVQenepXowQ6dO
tmgAoJC7005Nh4q9LmGLIwroqyxifFgC
=38gV
-----END PGP SIGNATURE-----

--=-86OFeTQ+cmJ+ui9gc8rp--


--===============4086905968455837331==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4086905968455837331==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 11:42:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFj4K-0002F8-JL; Tue, 18 Feb 2014 11:42:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFj4J-0002F2-CL
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:42:31 +0000
Received: from [193.109.254.147:39778] by server-9.bemta-14.messagelabs.com id
	45/AE-24895-62743035; Tue, 18 Feb 2014 11:42:30 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392723749!1360592!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31685 invoked from network); 18 Feb 2014 11:42:30 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-4.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 11:42:30 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 18 Feb 2014 03:42:28 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="485143073"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by orsmga002.jf.intel.com with ESMTP; 18 Feb 2014 03:42:28 -0800
Received: from fmsmsx112.amr.corp.intel.com (10.18.116.6) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 03:42:27 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX112.amr.corp.intel.com (10.18.116.6) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 03:42:28 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 19:42:24 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH V4.1] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD  thresolding MSRs
Thread-Index: AQHPLJjerpjR5j3ySEeyNVNl8C2Pk5q640rw
Date: Tue, 18 Feb 2014 11:42:23 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F7A4D@SHSMSX101.ccr.corp.intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
	<530338D5020000780011D258@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
	<53034BC1020000780011D371@nat28.tlf.novell.com>
In-Reply-To: <53034BC1020000780011D371@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 18.02.14 at 11:52, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>>>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu
>>>>> *v, uint32_t msr, uint64_t *val) 
>>>>> 
>>>>>      *val = 0;
>>>>> 
>>>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>>> +    /* Allow only first 3 MC banks into switch() */
>> 
>> I don't think this comment is good here.  Removing it would be better.
> 
> I had asked for this to be removed again too. I'm really thinking
> that V3 is what we should go with.
> 
> Jan

V3 is fine, except that adding a comment for '-MSR_IA32_MC0_CTL' would be slightly better.

Thanks,
Jinsong
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:45:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:45:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFj6j-0002N1-8D; Tue, 18 Feb 2014 11:45:01 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WFj6h-0002Mr-6A
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:44:59 +0000
Received: from [85.158.139.211:6999] by server-1.bemta-5.messagelabs.com id
	3D/00-12859-AB743035; Tue, 18 Feb 2014 11:44:58 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392723895!4645596!1
X-Originating-IP: [64.18.0.182]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8137 invoked from network); 18 Feb 2014 11:44:57 -0000
Received: from exprod5og106.obsmtp.com (HELO exprod5og106.obsmtp.com)
	(64.18.0.182)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 11:44:57 -0000
Received: from mail-wg0-f50.google.com ([74.125.82.50]) (using TLSv1) by
	exprod5ob106.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwNHto6Uml9rty25UZiaucqS+TUmHJRs@postini.com;
	Tue, 18 Feb 2014 03:44:57 PST
Received: by mail-wg0-f50.google.com with SMTP id z12so3141163wgg.17
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 03:44:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=LaxH0r0zBX3FT7JKtwMQROibsOWVRItlR5F7KSptoNY=;
	b=NLKtY+e4roZ8vFOrE9GK1Vben9OrVlrLpyXexCGmtgbdN7HZUPwpudpp1kMLCV8D99
	dSiNbOtOoL9wLfKBpUgxwyvlw/KEc6SY7GJnYL6o3/LRmD9uSoS5JeG58Qk3i/Tmh38J
	VfP/WTNMbEGZTzhqj0YiaY1YbWGrbxDWT3+Q0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=LaxH0r0zBX3FT7JKtwMQROibsOWVRItlR5F7KSptoNY=;
	b=GEgIrnUEAhMB7FMKPuWazAXVkhIiPfuF8VPY1QPAjm6bSFCZB2VfbM5eyDyhsUUAe8
	/Z+pH+ANSZJTVAXQUT2j6K/+yAIvRW1p91egUTOO5WGo/4LM422GN0GMiCFmSrl3y31G
	uD2IgjJr2M+sqhcLULpuGAdYfP5H7FCPHJlSwujdvSUHAeQ2GXFAWhLjcdhDo7qZ9cs4
	e4yewm8wxPcwJBsQ9iBXqexZAI11E77FKNA4jkspNJBVzh8B7xDocqD1tYGW8OhT9y2i
	GfQ7TnyC6dykVaxm4qpg2bLBlxUCy2kaatDem/ZDHiLPQYHOtQHLEXHRVUEYt2QMEmPZ
	GFPg==
X-Gm-Message-State: ALoCoQk28o++0xf3TlxdnnfWOXMkA9pkZShMGHhtsaOI3t30jv7rqDuGm6pxgcTMU1Z1hkj9/pjFD2JduT2pr3/uXeBp7jJ28WckL/Rf0reps6CE44tUI/2Fn2fLfuvvjQWSGibpZul3SbMqeB1+qXrlCnwi2FuJcpJAxKLdWZNLLnxLka4/99g=
X-Received: by 10.180.93.74 with SMTP id cs10mr17665243wib.15.1392723893849;
	Tue, 18 Feb 2014 03:44:53 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.180.93.74 with SMTP id cs10mr17665233wib.15.1392723893727;
	Tue, 18 Feb 2014 03:44:53 -0800 (PST)
Received: by 10.216.31.67 with HTTP; Tue, 18 Feb 2014 03:44:53 -0800 (PST)
Date: Tue, 18 Feb 2014 13:44:53 +0200
Message-ID: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel]  Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi, all.
I am working with DRA7 (OMAP5) platform.

Occasionally I see the hypervisor hang while bringing up CPU1.
It is not a deadlock, since the Xen console still works, but the system is unusable.

The situation is as follows.
CPU0 cannot exit the busy loop in the __cpu_up() function (smpboot.c):
it is waiting for CPU1 to become online, but CPU1 gets stuck during its
bring-up sequence. The last print I see is "- Turning on paging -".
It seems to happen while the MMU/cache is being enabled, or immediately
afterwards. In any case, the instruction "mov pc, r1" does not result in
a transition to the "paging" label.

(XEN) Latest ChangeSet: Mon Feb 17 13:43:16 2014 +0200 git:c30f01f
(XEN) Processor: 412fc0f2: "ARM Limited", variant: 0x2, part 0xc0f, rev 0x2
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00001131:00011011
(XEN)     Instruction Sets: AArch32 Thumb Thumb-2 ThumbEE Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 02010555
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10201105 20000000 01240000 02102211
(XEN)  ISA Features: 02101110 13112111 21232041 11112131 10011142 00000000
(XEN) Platform: TI DRA7
(XEN) /psci method must be smc, but is: "hvc"
(XEN) Set AuxCoreBoot1 to 00000000dec0004c (0020004c)
(XEN) Set AuxCoreBoot0 to 0x20
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27
(XEN) Using generic timer at 6144 KHz
(XEN) GIC initialization:
(XEN)         gic_dist_addr=0000000048211000
(XEN)         gic_cpu_addr=0000000048212000
(XEN)         gic_hyp_addr=0000000048214000
(XEN)         gic_vcpu_addr=0000000048216000
(XEN)         gic_maintenance_irq=25
(XEN) GIC: 192 lines, 2 cpus, secure (IID 0000043b).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) VFP implementer 0x41 architecture 4 part 0x30 variant 0xf rev 0x0
(XEN) Bringing up CPU1
- CPU 00000001 booting -
- NOT HYP, setting it ... -
- Xen starting in Hyp mode -
- Setting up control registers -
- Turning on paging -

Currently I am using Xen 4.4 RC3, but this issue has been reproducible
since Xen 4.3. We do not have any patches related to the bring-up
sequence on top of 4.4 RC3, except one: as long as our bootloader does
not enter Xen in HYP mode, we need to keep this hack in the hypervisor.

diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..6fa1e3d 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -174,6 +174,15 @@ common_start:
         teq   r0, #0x1a              /* Hyp Mode? */
         beq   hyp

+        PRINT("- NOT HYP, setting it ... -\r\n")
+        bl    enter_hyp_mode
+
+        /* Check that setting HYP mode was successful */
+        mrs   r0, cpsr
+        and   r0, r0, #0x1f          /* Mode is in the low 5 bits of CPSR */
+        teq   r0, #0x1a              /* Hyp Mode? */
+        beq   hyp
+
         /* OK, we're boned. */
         PRINT("- Xen must be entered in NS Hyp mode -\r\n" \
               "- Please update the bootloader -\r\n")
@@ -547,6 +556,24 @@ putn:   mov   pc, lr

 #endif /* !EARLY_PRINTK */

+GLOBAL(enter_hyp_mode)
+enter_hyp_mode:
+        adr   r0, save
+        stmea r0, {r4-r13,lr}
+        ldr   r12, =0x102
+        adr   r0, hyp_return
+        dsb
+        isb
+        dmb
+        smc   #0
+hyp_return:
+        adr   r0, save
+        ldmfd r0, {r4-r13,pc}
+save:
+        .rept 11
+        .word 0
+        .endr
+
 /*
  * Local variables:
  * mode: ASM


Could anyone give me advice on this issue? Also, what do you think of a
solution that exits the busy loop after a timeout and restarts the CPU1
bring-up sequence in that case?

Thank you.

-- 

Oleksandr Tyshchenko | Embedded Developer
GlobalLogic
www.globallogic.com

http://www.globallogic.com/email_disclaimer.txt

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:46:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFj8M-0002X4-O7; Tue, 18 Feb 2014 11:46:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFj8L-0002We-Rg
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:46:42 +0000
Received: from [85.158.143.35:7471] by server-3.bemta-4.messagelabs.com id
	09/F1-11539-E1843035; Tue, 18 Feb 2014 11:46:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392723997!6505079!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21629 invoked from network); 18 Feb 2014 11:46:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 11:46:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 11:46:36 +0000
Message-Id: <53035628020000780011D3EE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 11:46:32 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Dongxiao Xu" <dongxiao.xu@intel.com>,
	"Xiantao Zhang" <xiantao.zhang@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, DonaldD Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2014-02-17:
>>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>>> And second, I have been fighting with finding both conditions and
>>> (eventually) the root cause of a severe performance regression
>>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>>> became _much_ worse after adding in the patch here (while in fact I
>>> had hoped it might help with the originally observed
>>> degradation): X startup fails due to timing out, and booting the
>>> guest now takes about 20 minutes. I didn't find the root cause of
>>> this yet, but meanwhile I know that
>>> - the same isn't observable on SVM
>>> - there's no problem when forcing the domain to use shadow mode
>>> - there's no need for any device to actually be assigned to the guest
>>> - the regression is very likely purely graphics related (based on the
>>>   observation that when running something that regularly but not
>>>   heavily updates the screen with X up, the guest consumes a full
>>>   CPU's worth of processing power, yet when that updating doesn't
>>>   happen, CPU consumption goes down, and it goes further down when
>>>   shutting down X altogether - at least as long as the patch here
>>>   doesn't get involved).
>>> This I'm observing on a Westmere box (and I didn't notice it earlier
>>> because that's one of those where due to a chipset erratum the IOMMU
>>> gets turned off by default), so it's possible that this can't be
>>> seen on more modern hardware. I'll hopefully find time today to
>>> check this on the one newer (Sandy Bridge) box I have.
>> 
>> Just got done with trying this: By default, things work fine there.
>> As soon as I use "iommu=no-snoop", things go bad (even worse than on
>> the older box - the guest is consuming about 2.5 CPUs' worth of
>> processing power _without_ the patch here in use, so I don't even want
>> to think about trying it there); I guessed that to be another of the
>> potential sources of the problem, since on that older box the
>> respective hardware feature is unavailable.
>> 
>> While I'll try to look into this further, I guess I have to defer to
>> our VT-d specialists at Intel at this point...
>> 
> 
> Hi, Jan,
> 
> I tried to reproduce it. But unfortunately, I cannot reproduce it on my
> box (Sandy Bridge EP) with the latest Xen (including my patch). I guess
> my configuration or steps may be wrong; here is what I did:
> 
> 1. Add iommu=1,no-snoop to the Xen command line:
> (XEN) Intel VT-d Snoop Control not enabled.
> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
> (XEN) Intel VT-d Queued Invalidation enabled.
> (XEN) Intel VT-d Interrupt Remapping enabled.
> (XEN) Intel VT-d Shared EPT tables enabled.
> 
> 2. Boot a RHEL 6u4 guest.
> 
> 3. After the guest boots up, run startx inside the guest.
> 
> 4. After a few seconds, the X window shows up and I don't see any
> error. Also, the CPU utilization is about 1.7%.
> 
> Anything wrong?

Nothing at all, as it turns out. The regression is due to Dongxiao's

http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html

which I have in my tree as part of various things pending for 4.5.
And which at the first, second, and third glance looks pretty
innocent (IOW I still have to find out _why_ it is wrong).

In any case - I'm very sorry for the false alarm.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:47:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFj99-0002cD-5x; Tue, 18 Feb 2014 11:47:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFj97-0002bw-RK
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:47:30 +0000
Received: from [193.109.254.147:37228] by server-6.bemta-14.messagelabs.com id
	CE/A4-03396-15843035; Tue, 18 Feb 2014 11:47:29 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392724047!5075213!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14804 invoked from network); 18 Feb 2014 11:47:27 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 11:47:27 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 03:43:07 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="457349810"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 03:47:19 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 03:47:18 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 19:47:16 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"chegger@amazon.de" <chegger@amazon.de>, "suravee.suthikulpanit@amd.com"
	<suravee.suthikulpanit@amd.com>, "boris.ostrovsky@oracle.com"
	<boris.ostrovsky@oracle.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>, "JBeulich@suse.com" <JBeulich@suse.com>
Thread-Topic: [PATCH V3] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD thresholding MSRs
Thread-Index: AQHPJoJ9NfUZGv3dHE2Bic7fvT9USpq67/qw
Date: Tue, 18 Feb 2014 11:47:16 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F7A70@SHSMSX101.ccr.corp.intel.com>
References: <1392051041-3372-1-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1392051041-3372-1-git-send-email-aravind.gopalakrishnan@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH V3] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Aravind Gopalakrishnan wrote:
> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
> registers. But due to this statement here:
> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> we are wrongly masking off top two bits which meant the register
> accesses never made it to vmce_amd_* functions.
> 
> Corrected this problem by modifying the mask in this patch to allow
> AMD thresholding registers to fall to 'default' case which in turn
> allows vmce_amd_* functions to handle access to the registers.
> 
> While at it, remove some clutter in the vmce_amd* functions. Retained
> current policy of returning zero for reads and ignoring writes.
> 
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> ---
>  xen/arch/x86/cpu/mcheck/amd_f10.c |   41
>  ++++++------------------------------- xen/arch/x86/cpu/mcheck/vmce.c
>  |    4 ++-- 2 files changed, 8 insertions(+), 37 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c
> b/xen/arch/x86/cpu/mcheck/amd_f10.c 
> index 61319dc..03797ab 100644
> --- a/xen/arch/x86/cpu/mcheck/amd_f10.c
> +++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
> @@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct
>  cpuinfo_x86 *c) /* amd specific MCA MSR */
>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		v->arch.vmce.bank[1].mci_misc = val;
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* ignore write: we do not emulate link and l3 cache errors
> -		 * to the guest.
> -		 */
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	default:
> -		return 0;
> -	}
> -
> -	return 1;
> +    /* Do nothing as we don't emulate this MC bank currently */
> +    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> +    return 1;
>  }
> 
>  int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		*val = v->arch.vmce.bank[1].mci_misc;
> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* we do not emulate link and l3 cache
> -		 * errors to the guest.
> -		 */
> -		*val = 0;
> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
> -		break;
> -	default:
> -		return 0;
> -	}
> -
> -	return 1;
> +    /* Assign '0' as we don't emulate this MC bank currently */
> +    *val = 0;
> +    return 1;
>  }
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c
> b/xen/arch/x86/cpu/mcheck/vmce.c 
> index f6c35db..be9bb5e 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> 
>      *val = 0;
> 
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )

Please add a comment for '-MSR_IA32_MC0_CTL' explaining how the one mask works for all cases, so that later readers know the history:

Intel: MCi_CTL/STATUS/ADDR/MISC   0x400~0x403
       MCi_CTL2                   0x280 ...

AMD:   MCi_CTL/STATUS/ADDR/MISC   0x400~0x403
       MCi_MISCj                  0xC000_040x

Thanks,
Jinsong

>      {
>      case MSR_IA32_MC0_CTL:
>          /* stick all 1's to MCi_CTL */
> @@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>      int ret = 1;
>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
> 
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>      {
>      case MSR_IA32_MC0_CTL:
>          /*


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:47:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFj99-0002cD-5x; Tue, 18 Feb 2014 11:47:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFj97-0002bw-RK
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:47:30 +0000
Received: from [193.109.254.147:37228] by server-6.bemta-14.messagelabs.com id
	CE/A4-03396-15843035; Tue, 18 Feb 2014 11:47:29 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392724047!5075213!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14804 invoked from network); 18 Feb 2014 11:47:27 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-16.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 11:47:27 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 03:43:07 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="457349810"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 03:47:19 -0800
Received: from shsmsx152.ccr.corp.intel.com (10.239.6.52) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 03:47:18 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX152.ccr.corp.intel.com ([169.254.6.185]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 19:47:16 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"chegger@amazon.de" <chegger@amazon.de>, "suravee.suthikulpanit@amd.com"
	<suravee.suthikulpanit@amd.com>, "boris.ostrovsky@oracle.com"
	<boris.ostrovsky@oracle.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>, "JBeulich@suse.com" <JBeulich@suse.com>
Thread-Topic: [PATCH V3] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD thresolding MSRs
Thread-Index: AQHPJoJ9NfUZGv3dHE2Bic7fvT9USpq67/qw
Date: Tue, 18 Feb 2014 11:47:16 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F7A70@SHSMSX101.ccr.corp.intel.com>
References: <1392051041-3372-1-git-send-email-aravind.gopalakrishnan@amd.com>
In-Reply-To: <1392051041-3372-1-git-send-email-aravind.gopalakrishnan@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Subject: Re: [Xen-devel] [PATCH V3] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Aravind Gopalakrishnan wrote:
> vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
> registers. But due to this statement here:
> switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> we are wrongly masking off top two bits which meant the register
> accesses never made it to vmce_amd_* functions.
> 
> Corrected this problem by modifying the mask in this patch to allow
> AMD thresholding registers to fall to 'default' case which in turn
> allows vmce_amd_* functions to handle access to the registers.
> 
> While at it, remove some clutter in the vmce_amd* functions. Retained
> current policy of returning zero for reads and ignoring writes.
> 
> Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
> ---
>  xen/arch/x86/cpu/mcheck/amd_f10.c |   41 ++++++-------------------------------
>  xen/arch/x86/cpu/mcheck/vmce.c    |    4 ++--
>  2 files changed, 8 insertions(+), 37 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mcheck/amd_f10.c b/xen/arch/x86/cpu/mcheck/amd_f10.c
> index 61319dc..03797ab 100644
> --- a/xen/arch/x86/cpu/mcheck/amd_f10.c
> +++ b/xen/arch/x86/cpu/mcheck/amd_f10.c
> @@ -105,43 +105,14 @@ enum mcheck_type amd_f10_mcheck_init(struct cpuinfo_x86 *c)
>  /* amd specific MCA MSR */
>  int vmce_amd_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		v->arch.vmce.bank[1].mci_misc = val;
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* ignore write: we do not emulate link and l3 cache errors
> -		 * to the guest.
> -		 */
> -		mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> -		break;
> -	default:
> -		return 0;
> -	}
> -
> -	return 1;
> +    /* Do nothing as we don't emulate this MC bank currently */
> +    mce_printk(MCE_VERBOSE, "MCE: wr msr %#"PRIx64"\n", val);
> +    return 1;
>  }
> 
>  int vmce_amd_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
>  {
> -	switch (msr) {
> -	case MSR_F10_MC4_MISC1: /* DRAM error type */
> -		*val = v->arch.vmce.bank[1].mci_misc;
> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
> -		break;
> -	case MSR_F10_MC4_MISC2: /* Link error type */
> -	case MSR_F10_MC4_MISC3: /* L3 cache error type */
> -		/* we do not emulate link and l3 cache
> -		 * errors to the guest.
> -		 */
> -		*val = 0;
> -		mce_printk(MCE_VERBOSE, "MCE: rd msr %#"PRIx64"\n", *val);
> -		break;
> -	default:
> -		return 0;
> -	}
> -
> -	return 1;
> +    /* Assign '0' as we don't emulate this MC bank currently */
> +    *val = 0;
> +    return 1;
>  }
> diff --git a/xen/arch/x86/cpu/mcheck/vmce.c b/xen/arch/x86/cpu/mcheck/vmce.c
> index f6c35db..be9bb5e 100644
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -107,7 +107,7 @@ static int bank_mce_rdmsr(const struct vcpu *v, uint32_t msr, uint64_t *val)
> 
>      *val = 0;
> 
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )

Please add a comment for '-MSR_IA32_MC0_CTL' explaining how the one mask works for all cases, so that later readers know the history:

Intel: MCi_CTL/STATUS/ADDR/MISC   0x400~0x403
       MCi_CTL2                   0x280 ...

AMD:   MCi_CTL/STATUS/ADDR/MISC   0x400~0x403
       MCi_MISCj                  0xC000_040x

Thanks,
Jinsong
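
The effect of the mask change can be sketched outside the hypervisor (a minimal Python illustration, not Xen code; the MSR addresses below are assumptions based on the patch context and the table above):

```python
# Sketch of the mask change in bank_mce_{rd,wr}msr() (assumed addresses).
MSR_IA32_MC0_CTL  = 0x400        # banked MSRs: 0x400 + bank*4 + {0..3}
MSR_F10_MC4_MISC1 = 0xC0000408   # assumed AMD thresholding MSR (MCi_MISCj range)

MASK32 = 0xFFFFFFFF              # MSR indices are 32-bit
old_mask = (MSR_IA32_MC0_CTL | 3) & MASK32    # 0x00000403
new_mask = (-MSR_IA32_MC0_CTL | 3) & MASK32   # 0xFFFFFC03

# The banked 0x400..0x403 MSRs still reach their switch cases:
assert MSR_IA32_MC0_CTL & new_mask == MSR_IA32_MC0_CTL

# Old mask: the AMD thresholding MSR aliased onto MSR_IA32_MC0_CTL,
# so the access never fell through to vmce_amd_{rd,wr}msr():
assert MSR_F10_MC4_MISC1 & old_mask == MSR_IA32_MC0_CTL

# New mask keeps the high bits, so a 0xC000_040x MSR no longer matches
# any MCi case and lands in 'default', reaching the vmce_amd_* handlers:
assert MSR_F10_MC4_MISC1 & new_mask != MSR_IA32_MC0_CTL
```

Negating MSR_IA32_MC0_CTL (a power of two) yields a mask that preserves every bit at or above bit 10, so only MSRs genuinely inside the 0x400-based banked range can match the MCi_CTL/STATUS/ADDR/MISC cases.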

>      {
>      case MSR_IA32_MC0_CTL:
>          /* stick all 1's to MCi_CTL */
> @@ -210,7 +210,7 @@ static int bank_mce_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>      int ret = 1;
>      unsigned int bank = (msr - MSR_IA32_MC0_CTL) / 4;
> 
> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
> +    switch ( msr & (-MSR_IA32_MC0_CTL | 3) )
>      {
>      case MSR_IA32_MC0_CTL:
>          /*


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:48:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:48:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjA2-0002jl-Kl; Tue, 18 Feb 2014 11:48:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <francesco.gringoli@gmail.com>) id 1WFjA1-0002jO-6F
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 11:48:25 +0000
Received: from [85.158.139.211:6683] by server-15.bemta-5.messagelabs.com id
	57/DA-24395-88843035; Tue, 18 Feb 2014 11:48:24 +0000
X-Env-Sender: francesco.gringoli@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392724103!4633502!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24310 invoked from network); 18 Feb 2014 11:48:23 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:48:23 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so6060004eak.1
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 03:48:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=2mJ/0z5UiCf1uMBeRCgEk+RkFYVKKjLd956M3hwE2Qg=;
	b=0FyZRfDJwP4RNyZUOb1VwU6oZheFqFa1peTr+Vfp7uds2PWPTXD1sBiHeAFTDC/Uh/
	xV6Jt8/oGX7fGB3Ue0xat2Z84LqkIJNCDvdLlmbGtfEUbAHmaq3QwtdjxjahKlS7fSGu
	/GXVmh0rjQS7bXArH9R1G0t07uyVYGQ6f1lo8GdKZO2gkRUCx6DDqHydz3IuJn7k52CU
	9pn94rDI5Nc8m4zZxuFRotln8ANruqqzq0O5E0qgx+wDPWbQu0AjZjLZt7dAtGxgFbJ1
	8oxdV3lxY8WLmQJTk9uGxrDHELIyxanwgHI9tmS8uPDAWEJWMdihPHDo8FervPMt+DKC
	EzkA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ing.unibs.it; s=google;
	h=sender:subject:mime-version:content-type:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=2mJ/0z5UiCf1uMBeRCgEk+RkFYVKKjLd956M3hwE2Qg=;
	b=AragNqzdtkVNIdduIw/SqjCFf8wLZd4Ac6/nqVZCxjqpd2LzpydtPVvXNoam984DSb
	9qiFjDhRDxTUmraNNVUOn3C2qkygr9nA2BecGlgsPciY5mVMz+SJ07Wt1W+6aRl2nwZE
	cHrvNvTplWCsoFCnq21csCK6UGvglK93JJ750=
X-Received: by 10.14.103.67 with SMTP id e43mr2936498eeg.94.1392724103510;
	Tue, 18 Feb 2014 03:48:23 -0800 (PST)
Received: from [10.20.10.12] ([192.167.23.210])
	by mx.google.com with ESMTPSA id x6sm69543288eew.20.2014.02.18.03.48.12
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 03:48:21 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Francesco Gringoli <francesco.gringoli@ing.unibs.it>
In-Reply-To: <1392718898.11080.16.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 12:48:09 +0100
Message-Id: <A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
	<alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
	<1392718898.11080.16.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1510)
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	John Johnson <lausgans@gmail.com>, Chen Baozi <baozich@gmail.com>,
	Xen Devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Feb 18, 2014, at 11:21 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Sun, 2014-02-16 at 12:56 +0000, Stefano Stabellini wrote:
>> On Fri, 14 Feb 2014, Francesco Gringoli wrote:
>>> Hello guys,
>>> 
>>> "building" on the great work done by Anthony I was finally able to boot xen and dom0 on the Chromebook with display support.
>>> 
>>> Basically everything was already done by Anthony, the only problem was the .config file of the dom0 missing some options and a couple of files in arch/arm/mach-exynos which were crashing the boot process (by reading the dtb). Actually I was not able to address the problem in a clever way, but I fixed it so boot does not crash (almost) anymore.
>>> 
>>> What I get is dom0 booting and working, boot logs appear on the display like when running archlinux natively. The keyboard does not work as pointed out by Anthony but external does, so it is possible to log in and check /proc/xen is populated. There are two main issues
>>> 
>>> 1) sometimes (very few) boot crashes, but late, e.g., after 2 secs
>>> 2) after minutes something weird happens, and it is not anymore possible to cat files' content, although ls + cd etc keep working.
>>> 
>>> I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I can not edit it.
>> 
>> Great job!
> 
> Yes, very nice.
> 
>> Please do request access and update the wiki.
>> 
>> Anthony's branches are quite old now, many bugs have been discovered and
>> fixed in the upstream Xen and Linux trees in the meantime.
>> Although I expect that updating the Xen tree could be difficult at this
>> point, it is probably worth it from the stability point of view.
> 
> I agree, ideally the tree would be rebased and whatever needed to be
> (and could be) would be upstreamed. I've no idea what the divergence is
> like, but at least in principle supporting a "new" platform with the
> modern mainline code ought to be loads easier than it was back when
> Anthony started work on this stuff.
I updated the page. For now I'm working on the latest Xen only, to make it boot the old Linux tree described on the page.

Keep you posted.
-Francesco
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:54:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjFN-00035B-Fo; Tue, 18 Feb 2014 11:53:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFjFM-000356-0k
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 11:53:56 +0000
Received: from [85.158.137.68:5224] by server-17.bemta-3.messagelabs.com id
	8F/E2-22569-3D943035; Tue, 18 Feb 2014 11:53:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392724432!1290291!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11596 invoked from network); 18 Feb 2014 11:53:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:53:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101712390"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:53:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 06:53:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFjFH-0002g1-Ci;
	Tue, 18 Feb 2014 11:53:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFjDM-0005uC-6x;
	Tue, 18 Feb 2014 11:52:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25116-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 11:51:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25116: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25116 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25116/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386                    4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:54:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjFN-00035B-Fo; Tue, 18 Feb 2014 11:53:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFjFM-000356-0k
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 11:53:56 +0000
Received: from [85.158.137.68:5224] by server-17.bemta-3.messagelabs.com id
	8F/E2-22569-3D943035; Tue, 18 Feb 2014 11:53:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392724432!1290291!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11596 invoked from network); 18 Feb 2014 11:53:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:53:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101712390"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:53:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 06:53:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFjFH-0002g1-Ci;
	Tue, 18 Feb 2014 11:53:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFjDM-0005uC-6x;
	Tue, 18 Feb 2014 11:52:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25116-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 11:51:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25116: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25116 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25116/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386                    4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-i386-i386-pv             1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl             1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-freebsd10-i386  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-i386-i386-pair           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-i386-i386-xl-qemut-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-freebsd10-amd64  1 xen-build-check(1)        blocked n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-i386-i386-xl-winxpsp3    1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-i386-i386-xl                                            blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-freebsd10-amd64                        blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-qemuu-freebsd10-i386                         blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-i386-i386-pair                                          blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           blocked 
 test-i386-i386-pv                                            blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             blocked 
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             blocked 
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   blocked 


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:58:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:58:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjJZ-0003DP-C2; Tue, 18 Feb 2014 11:58:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFjJY-0003DK-GJ
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 11:58:16 +0000
Received: from [85.158.137.68:12024] by server-9.bemta-3.messagelabs.com id
	3C/EE-10184-7DA43035; Tue, 18 Feb 2014 11:58:15 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392724693!2617272!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11050 invoked from network); 18 Feb 2014 11:58:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:58:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101713361"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 11:58:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:58:12 -0500
Message-ID: <1392724691.11080.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 18 Feb 2014 11:58:11 +0000
In-Reply-To: <alpine.DEB.2.02.1402181055100.27926@kaball.uk.xensource.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
	<21250.15776.972814.117850@mariner.uk.xensource.com>
	<alpine.DEB.2.02.1402181055100.27926@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, Stefano
	Stabellini <Stefano.Stabellini@citrix.com>, xen-devel@lists.xensource.com,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 10:57 +0000, Stefano Stabellini wrote:
> On Mon, 17 Feb 2014, Ian Jackson wrote:
> > George Dunlap writes ("Re: Proposed force push of staging to master"):
> > > On 02/17/2014 12:08 PM, Ian Jackson wrote:
> > > > So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> > > > xen.git#master and call it RC4.  Comments welcome.
> > > 
> > > Thanks for the analysis.  This seems like a good plan.
> > 
> > I have done this (RC4 is tagged, tarballs are in production).
> > 
> > I also had to force push the change below to xen.git#master.
> > 
> > Can I request that we don't change this back to say "master" until we
> > are done with 4.4.0 ?  Either way we have to update Config.mk with new
> > qemu upstream versions, but if we set this to "master" in between RCs,
> > I end up having to do it as a force push in the middle of the RC
> > production which is out-of-course, error-prone, and suboptimal.
> > 
> > It is IMO better to put a commit hash in QEMU_UPSTREAM_REVISION in
> > Config.mk (updated when the qemu-upstream tree has passed its push
> > gate).
> > 
> > That is I think the best workflow is:
> >   * make a change to staging/qemu-upstream-unstable.git
> >   * wait for push gate to put it in qemu-upstream-unstable.git
> 
> Does this work because the test infrastructure doesn't obey Config.mk?

For the test flights which target testing of new qemu bits osstest
overrides the version to pick up the new stuff.

For test flights which target testing of Xen itself Config.mk is obeyed.
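
(For reference, a rough sketch of the knob being discussed, as it appears in xen.git:Config.mk -- the pinned hash below is purely illustrative, not a real pin:)

```make
# xen.git:Config.mk (illustrative fragment)
# Between release candidates the tree may track the branch tip:
QEMU_UPSTREAM_REVISION ?= master
# ...whereas during RC production Ian's proposal is to pin a hash that
# has passed the qemu-upstream push gate, e.g.:
# QEMU_UPSTREAM_REVISION ?= 6d83fd0a975731653efb0c470a4eeed6a44b99cd
```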

> Otherwise how could the new changes be tested if QEMU_UPSTREAM_REVISION
> in Config.mk is unchanged?

By the qemu specific flights.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 11:59:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 11:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjKz-0003OH-19; Tue, 18 Feb 2014 11:59:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFjKx-0003OA-IU
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 11:59:43 +0000
Received: from [85.158.143.35:25263] by server-2.bemta-4.messagelabs.com id
	68/10-10891-E2B43035; Tue, 18 Feb 2014 11:59:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392724781!6495562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26987 invoked from network); 18 Feb 2014 11:59:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 11:59:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103440412"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 11:59:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 06:59:40 -0500
Message-ID: <1392724779.11080.40.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 11:59:39 +0000
In-Reply-To: <21250.15776.972814.117850@mariner.uk.xensource.com>
References: <osstest-24870-mainreport@xen.org>
	<21249.64449.582039.323772@mariner.uk.xensource.com>
	<5302160B.70601@eu.citrix.com>
	<21250.15776.972814.117850@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Proposed force push of staging to master
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 16:49 +0000, Ian Jackson wrote:
> George Dunlap writes ("Re: Proposed force push of staging to master"):
> > On 02/17/2014 12:08 PM, Ian Jackson wrote:
> > > So, we propose to push 4e8d89bc1445f91c4c6c7bf0ad8d51b0c809841e to
> > > xen.git#master and call it RC4.  Comments welcome.
> > 
> > Thanks for the analysis.  This seems like a good plan.
> 
> I have done this (RC4 is tagged, tarballs are in production).
> 
> I also had to force push the change below to xen.git#master.
> 
> Can I request that we don't change this back to say "master" until we
> are done with 4.4.0 ?  Either way we have to update Config.mk with new
> qemu upstream versions, but if we set this to "master" in between RCs,
> I end up having to do it as a force push in the middle of the RC
> production which is out-of-course, error-prone, and suboptimal.
> 
> It is IMO better to put a commit hash in QEMU_UPSTREAM_REVISION in
> Config.mk (updated when the qemu-upstream tree has passed its push
> gate).
> 
> That is I think the best workflow is:
>   * make a change to staging/qemu-upstream-unstable.git
>   * wait for push gate to put it in qemu-upstream-unstable.git
>   * make change to xen.git#staging to update QEMU_UPSTREAM_REVISION
>     to new commit hash

This seems to be prone to being forgotten, can we get osstest (or
something else) to send us^WStefano^Wxen-devel a persistent reminder
when there is a mismatch between xen.git:Config.mk and
qemu-upstream.git#master?
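
(A rough sketch of what such a reminder check might look like -- hypothetical, no such osstest hook exists; the two revisions are passed in as arguments so the sketch runs without any checkout or network access:)

```shell
#!/bin/sh
# Hypothetical mismatch reminder. In a real hook, "pinned" would be
# parsed out of xen.git:Config.mk (QEMU_UPSTREAM_REVISION) and "tip"
# obtained with something like:
#   git ls-remote <qemu-upstream-unstable.git> refs/heads/master
check_qemu_pin() {
    pinned="$1"   # revision pinned in Config.mk
    tip="$2"      # current tip of qemu-upstream master
    if [ "$pinned" != "$tip" ]; then
        echo "REMINDER: Config.mk pins $pinned but master is at $tip"
        return 1
    fi
    echo "Config.mk pin matches master"
}
```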

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:14:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjZ9-0003yd-HP; Tue, 18 Feb 2014 12:14:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFjZ7-0003yV-RP
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:14:22 +0000
Received: from [85.158.139.211:15752] by server-16.bemta-5.messagelabs.com id
	C8/43-05060-D9E43035; Tue, 18 Feb 2014 12:14:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392725658!4647777!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16638 invoked from network); 18 Feb 2014 12:14:20 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:14:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103443907"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 12:14:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 07:14:07 -0500
Message-ID: <1392725646.11080.47.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Date: Tue, 18 Feb 2014 12:14:06 +0000
In-Reply-To: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 13:44 +0200, Oleksandr Tyshchenko wrote:

> +GLOBAL(enter_hyp_mode)
> +enter_hyp_mode:
> +        adr   r0, save
> +        stmea r0, {r4-r13,lr}
> +        ldr   r12, =0x102
> +        adr   r0, hyp_return
> +        dsb
> +        isb
> +        dmb
> +        smc   #0

Who/what implements this handler?

> +hyp_return:
> +        adr   r0, save
> +        ldmfd r0, {r4-r13,pc}
> +save:
> +        .rept 11
> +        .word 0
> +        .endr
> +
>  /*
>   * Local variables:
>   * mode: ASM
> 
> 
> Please, could anyone give me advice about this issue?

Do you have any hardware debugging tools which could give some insight?

Usually these things are down to either missing cache flushes or
barriers, but tracking them down has historically been a total pain.

>  And what do you
> think about a solution that exits the busy loop on a timeout and
> restarts the CPU1 bring-up sequence in that case?

A timeout isn't a bad idea, although I would not be inclined to try
again with the CPU in an indeterminate state.

Either we should carry on without it or we should panic (which is more
obvious than a hang). I think carrying on would be surprising (you'd
only get half the processing power you expected).

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:17:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjbm-00048o-67; Tue, 18 Feb 2014 12:17:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFjbk-00048f-6r
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 12:17:04 +0000
Received: from [85.158.139.211:5048] by server-10.bemta-5.messagelabs.com id
	E5/12-08578-F3F43035; Tue, 18 Feb 2014 12:17:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392725821!4626418!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5471 invoked from network); 18 Feb 2014 12:17:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:17:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103444795"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 12:17:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 07:17:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFjbf-000277-Ju;
	Tue, 18 Feb 2014 12:16:59 +0000
Date: Tue, 18 Feb 2014 12:16:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Peter Maydell <peter.maydell@linaro.org>
In-Reply-To: <CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	"Michael S. Tsirkin" <mst@redhat.com>, armbru@redhat.com,
	QEMU Developers <qemu-devel@nongnu.org>,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [Qemu-devel] [PULL 00/20] acpi, pc,
	pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 13 Feb 2014, Peter Maydell wrote:
> On 10 February 2014 16:47, Michael S. Tsirkin <mst@redhat.com> wrote:
> > The following changes since commit 2b2449f7e467957778ca006904471b231dc0ac8e:
> >
> >   Merge remote-tracking branch 'remotes/borntraeger/tags/kvm-s390-20140131' into staging (2014-02-04 18:46:33 +0000)
> >
> > are available in the git repository at:
> >
> >
> >   git://git.kernel.org/pub/scm/virt/kvm/mst/qemu.git tags/for_upstream
> >
> > for you to fetch changes up to 417c45ab2f847c0a47b1232f611aa886df6a97d5:
> >
> >   ACPI: Remove commented-out code from HPET._CRS (2014-02-10 11:09:33 +0200)
> 
> Applied, thanks.

It looks like this series breaks disk unplug
(hw/ide/piix.c:pci_piix3_xen_ide_unplug).

I bisected it and the problem is caused by:

commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
Author: Igor Mammedov <imammedo@redhat.com>
Date:   Wed Feb 5 16:36:52 2014 +0100

    hw/pci: switch to a generic hotplug handling for PCIDevice
    
    make qdev_unplug()/device_set_realized() to call hotplug handler's
    plug/unplug methods if available and remove not needed anymore
    hot(un)plug handling from PCIDevice.
    
    In case if hotplug handler is not available, revert to the legacy
    hotplug method for compatibility with not yet converted buses.
    
    Signed-off-by: Igor Mammedov <imammedo@redhat.com>
    Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:18:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjcw-0004ED-LQ; Tue, 18 Feb 2014 12:18:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFjcv-0004E2-L7
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:18:17 +0000
Received: from [85.158.143.35:45802] by server-3.bemta-4.messagelabs.com id
	31/0F-11539-88F43035; Tue, 18 Feb 2014 12:18:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392725895!6518701!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4092 invoked from network); 18 Feb 2014 12:18:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:18:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101718417"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 12:18:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 07:18:14 -0500
Message-ID: <1392725892.11080.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Francesco Gringoli <francesco.gringoli@ing.unibs.it>
Date: Tue, 18 Feb 2014 12:18:12 +0000
In-Reply-To: <A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
	<alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
	<1392718898.11080.16.camel@kazak.uk.xensource.com>
	<A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	John Johnson <lausgans@gmail.com>, Chen Baozi <baozich@gmail.com>,
	Xen Devel <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
 support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 12:48 +0100, Francesco Gringoli wrote:
> I updated the page. I'm starting to work on the latest Xen only (at the
> moment) to make it boot the old Linux tree described in the page.

Awesome, thank you!
Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:27:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:27:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjm1-0004bd-W0; Tue, 18 Feb 2014 12:27:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1WFjm0-0004bY-6f
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 12:27:40 +0000
Received: from [85.158.137.68:40534] by server-16.bemta-3.messagelabs.com id
	04/CC-29917-BB153035; Tue, 18 Feb 2014 12:27:39 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392726457!1351684!1
X-Originating-IP: [209.85.216.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4067 invoked from network); 18 Feb 2014 12:27:38 -0000
Received: from mail-qa0-f42.google.com (HELO mail-qa0-f42.google.com)
	(209.85.216.42)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:27:38 -0000
Received: by mail-qa0-f42.google.com with SMTP id k4so23772211qaq.29
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Feb 2014 04:27:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=LqB3dB9WC7/yMuZhdJgIdLvd2IDqEqbaJD7i4Fom2Mk=;
	b=QDkW/xzqyOi+WaQV1wGL61OJkktvJpUUQ/R9pLX90UUhkun/jiXUxAMzHr4IdsB6wF
	6E259sdb3CXDIPJwyplt1AxwvvRkoFDc7LAfcTULUgUrlOogyoXvdv/GlTqdRn9lgISu
	+42q+lhnwelBrDgQBZh0vKvsn/0+VYEpXEod3vZxPO2HjhMD+f8uAxEGu6k6zJScZW2I
	BMpJ0k5arXDSRibrOuOAptuPzb+FP4hHmnGREcERWS1yW1cCrZfCVIQLTCgpebSiyrfL
	gQvSQh3L2Su63x+EpmZ89poPz9lA+L+e9+EZ6zmPGnL4K0cE4Ro7X9+oEqgDcqe/nQVN
	1H4w==
X-Received: by 10.140.42.138 with SMTP id c10mr39917622qga.24.1392726457201;
	Tue, 18 Feb 2014 04:27:37 -0800 (PST)
Received: from yakj.usersys.redhat.com
	(net-37-117-154-249.cust.vodafonedsl.it. [37.117.154.249])
	by mx.google.com with ESMTPSA id m14sm54836932qax.9.2014.02.18.04.27.34
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 04:27:36 -0800 (PST)
Message-ID: <530351B4.1010402@redhat.com>
Date: Tue, 18 Feb 2014 13:27:32 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Peter Maydell <peter.maydell@linaro.org>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
Cc: xen-devel@lists.xensource.com, "Michael S. Tsirkin" <mst@redhat.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/2014 13:16, Stefano Stabellini wrote:
> It looks like this series breaks disk unplug
> (hw/ide/piix.c:pci_piix3_xen_ide_unplug).
>
> I bisected it and the problem is caused by:
>
> commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
> Author: Igor Mammedov <imammedo@redhat.com>
> Date:   Wed Feb 5 16:36:52 2014 +0100
>
>     hw/pci: switch to a generic hotplug handling for PCIDevice
>
>     make qdev_unplug()/device_set_realized() to call hotplug handler's
>     plug/unplug methods if available and remove not needed anymore
>     hot(un)plug handling from PCIDevice.
>
>     In case if hotplug handler is not available, revert to the legacy
>     hotplug method for compatibility with not yet converted buses.
>
>     Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
>

What exactly breaks?

Paolo


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/2014 13:16, Stefano Stabellini wrote:
> It looks like this series breaks disk unplug
> (hw/ide/piix.c:pci_piix3_xen_ide_unplug).
>
> I bisected it and the problem is caused by:
>
> commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
> Author: Igor Mammedov <imammedo@redhat.com>
> Date:   Wed Feb 5 16:36:52 2014 +0100
>
>     hw/pci: switch to a generic hotplug handling for PCIDevice
>
>     make qdev_unplug()/device_set_realized() to call hotplug handler's
>     plug/unplug methods if available and remove not needed anymore
>     hot(un)plug handling from PCIDevice.
>
>     In case if hotplug handler is not available, revert to the legacy
>     hotplug method for compatibility with not yet converted buses.
>
>     Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
>     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
>
>

What exactly breaks?

Paolo
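
As context for readers following the thread: the dispatch described in the
quoted commit message (prefer a bus-provided hotplug handler, fall back to
the legacy per-device path when none is registered) can be sketched roughly
as below. The names are illustrative, not QEMU's actual structures.

```c
#include <stddef.h>

/* A rough sketch of the dispatch described in the quoted commit message:
 * prefer a bus-provided hotplug handler and fall back to the legacy
 * per-device callback when none is registered.  Illustrative names only,
 * not QEMU's actual API. */
typedef struct Device Device;
typedef void (*unplug_fn)(Device *dev);

struct Device {
    unplug_fn hotplug_unplug; /* generic handler; NULL if the bus is
                                 not yet converted */
    unplug_fn legacy_unplug;  /* pre-conversion fallback */
};

static const char *last_path;

static void generic_unplug(Device *d) { (void)d; last_path = "generic"; }
static void old_unplug(Device *d)     { (void)d; last_path = "legacy"; }

static void device_unplug(Device *d)
{
    if (d->hotplug_unplug)
        d->hotplug_unplug(d); /* converted bus: generic path */
    else
        d->legacy_unplug(d);  /* not yet converted: legacy path */
}
```

A regression of the kind reported here would come from a device that relied
on the legacy path being routed through the generic one (or vice versa).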



From xen-devel-bounces@lists.xen.org Tue Feb 18 12:36:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjuM-0004oU-G3; Tue, 18 Feb 2014 12:36:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFjuL-0004oP-9Q
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:36:17 +0000
Received: from [85.158.139.211:57099] by server-12.bemta-5.messagelabs.com id
	32/53-15415-0C353035; Tue, 18 Feb 2014 12:36:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392726975!4668185!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17735 invoked from network); 18 Feb 2014 12:36:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 12:36:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 12:36:15 +0000
Message-Id: <530361CB020000780011D475@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 12:36:11 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
	<530338D5020000780011D258@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
	<53034BC1020000780011D371@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7A4D@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F7A4D@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 12:42, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Jan Beulich wrote:
>>>>> On 18.02.14 at 11:52, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>>>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>>>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>>>>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu
>>>>>> *v, uint32_t msr, uint64_t *val) 
>>>>>> 
>>>>>>      *val = 0;
>>>>>> 
>>>>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>>>> +    /* Allow only first 3 MC banks into switch() */
>>> 
>>> I don't think this comment is good here. Removing it would be better.
>> 
>> I had asked for this to be removed again too. I'm really thinking
>> that V3 is what we should go with.
> 
> V3 is fine, except that adding a comment for '-MSR_IA32_MC0_CTL' would be
> slightly better.

Can I read this as an ack then (I already explained elsewhere
why I think a comment there is rather pointless)?

Jan
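
[For readers not steeped in the mcheck code: the effect of the
'-MSR_IA32_MC0_CTL' mask being debated can be checked standalone. The sketch
below assumes the standard x86 layout (MC0_CTL at 0x400, four MSRs per bank)
and is purely illustrative; it is not the Xen patch itself.]

```c
#include <stdint.h>

/* Assumed standard x86 layout: MC0_CTL at 0x400, four MSRs per bank
 * (CTL/STATUS/ADDR/MISC). */
#define MSR_IA32_MC0_CTL 0x400u

/* Collapse any MCi_* MSR onto the matching MC0_* value: OR-ing 3 into the
 * negated base keeps the two register-select bits while clearing the
 * bank-index bits in between.  Unlike (MSR_IA32_MC0_CTL | 3), the negated
 * form preserves the high bits, so unrelated MSRs above the MC range do
 * not alias onto the MC case labels. */
static uint32_t mc_switch_key(uint32_t msr)
{
    return msr & (-MSR_IA32_MC0_CTL | 3);
}
```

For example, MC2_STATUS (0x409) maps onto the MC0_STATUS label (0x401),
while an unrelated MSR such as 0x1401 passes through unchanged.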




From xen-devel-bounces@lists.xen.org Tue Feb 18 12:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjv8-0004rZ-U7; Tue, 18 Feb 2014 12:37:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.anisov@globallogic.com>) id 1WFjv4-0004rK-MF
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:37:05 +0000
Received: from [85.158.139.211:65487] by server-3.bemta-5.messagelabs.com id
	AB/0F-13671-DE353035; Tue, 18 Feb 2014 12:37:01 +0000
X-Env-Sender: andrii.anisov@globallogic.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392727017!104409!1
X-Originating-IP: [64.18.0.188]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29605 invoked from network); 18 Feb 2014 12:36:59 -0000
Received: from exprod5og109.obsmtp.com (HELO exprod5og109.obsmtp.com)
	(64.18.0.188)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 12:36:59 -0000
Received: from mail-la0-f41.google.com ([209.85.215.41]) (using TLSv1) by
	exprod5ob109.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwNT6ObgfQMR6trjTN7bnwEOU9KLMegv@postini.com;
	Tue, 18 Feb 2014 04:36:59 PST
Received: by mail-la0-f41.google.com with SMTP id mc6so12197920lab.14
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 04:36:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=PQ+MD7kHMACgCECKQ10/GAx5CQUr2eRDshD753phAUA=;
	b=V27V3uB3Yi+v7k8LkgjgXpKLLKyWAnaMzyavU6tiJu6Fsxb5W+2SgHvffFOnhb9Ipx
	KM0dYrJAgaLSBOK1i59i/DsaIECFI+k08LczIm9cCUkOuYFUh6WXNIc0Lv/lat+muT0M
	7jQnOVXZxMKIrgA3ZJl2XBFHik4XV+jb+fU/A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=PQ+MD7kHMACgCECKQ10/GAx5CQUr2eRDshD753phAUA=;
	b=cWgiPxPnOQ4Vjd9ngvHf0pc4fce4Wh/bF9Oz/Gh38L7P1IHOp1nHJlLK6qgC0FNKab
	CGxI22yUQS8Z4GUN5OwPDuSWb+N9c6MMTmNHni36tAi8bRyE/tbbxO1Qcy+4q0JSn9V6
	U3LFx1QziVuJs3c36pupJlcet69UJgRkRi9ZNfo7dnjtOxBujND77DhiNyX3yd3R5VN2
	gCr6qN68EgBdUw3BEZ4mBBFTTXj/uQ5ohMEABTsnwavn2zefYgqMRi6r3p+YXdJVfPT1
	A03AX+lJ59kxz7iV8PnQxJuN7aujowEQ5FjQCDYQk+jvCx0d6fjTlxxW5WwfYjiTX/HO
	zUvg==
X-Gm-Message-State: ALoCoQkO2Ejs5H5pS6S5LqA93H9d79B37vnpQzIibjbp54+gIv/JqDE5SxFFFO0kzVBqoQJWWe8AViRq4oc1ABuQXxqlJ4Gu8yZwkfWL7rvrL3tXHR2RI3QOjKRGShMvYW4s5KMIFoco5HYBdoRyMwYeiT8FJYhsMcLRBhrzjSRKbwhXg3n65Mc=
X-Received: by 10.152.170.232 with SMTP id ap8mr1714109lac.40.1392727015428;
	Tue, 18 Feb 2014 04:36:55 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.152.170.232 with SMTP id ap8mr1714103lac.40.1392727015274;
	Tue, 18 Feb 2014 04:36:55 -0800 (PST)
Received: by 10.114.18.193 with HTTP; Tue, 18 Feb 2014 04:36:55 -0800 (PST)
In-Reply-To: <1392725646.11080.47.camel@kazak.uk.xensource.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 14:36:55 +0200
Message-ID: <CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
From: Andrii Anisov <andrii.anisov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4060277682547822024=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4060277682547822024==
Content-Type: multipart/alternative; boundary=089e0117769103cf0404f2ad8789

--089e0117769103cf0404f2ad8789
Content-Type: text/plain; charset=ISO-8859-1

>
> > +GLOBAL(enter_hyp_mode)
> > +enter_hyp_mode:
> > +        adr   r0, save
> > +        stmea r0, {r4-r13,lr}
> > +        ldr   r12, =0x102
> > +        adr   r0, hyp_return
> > +        dsb
> > +        isb
> > +        dmb
> > +        smc   #0
>
> Who/what implements this handler?
>

Ian, this handler is implemented by ROM code, and this is the common OMAP
sequence for switching to HYP mode. On our side we decided to leave the
switch to HYP mode in Xen for now.

> Do you have any hardware debugging tools which could give some insight?

Yep, we have one (TI's Code Composer Studio with STM560v2 JTAG), but it has
no proper HYP mode debug support yet; TI says it will have it in 6 months or
so :( So the only thing we can do with it is stop the CPU at some moment and
inspect some registers; no breakpoints or stepping.

What we have discovered so far is that the last instruction executed by CPU1
before the hang is

mcr   CP32(r0, HSCTLR)       /* now paging is enabled */

After this, PC contains 0x00000004 and CPSR.M is 0b11010, which is HYP mode,
not abort mode.
It looks like MMU translation is broken.

> Usually these things are down to either missing cache flushes or barriers,
> but tracking them down has historically been a total pain.
>
I suspected missing flushes during CPU1's MMU table preparation, but that
code looks correct; I do not see any issues there.

Andrii Anisov | Software Engineer
GlobalLogic
Kyiv, 03038, Protasov Business Park, M.Grinchenka, 2/1
P +38.044.492.9695x3664  M +380505738852  S andriyanisov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt


On Tue, Feb 18, 2014 at 2:14 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Tue, 2014-02-18 at 13:44 +0200, Oleksandr Tyshchenko wrote:
>
> > +GLOBAL(enter_hyp_mode)
> > +enter_hyp_mode:
> > +        adr   r0, save
> > +        stmea r0, {r4-r13,lr}
> > +        ldr   r12, =0x102
> > +        adr   r0, hyp_return
> > +        dsb
> > +        isb
> > +        dmb
> > +        smc   #0
>
> Who/what implements this handler?
>
> > +hyp_return:
> > +        adr   r0, save
> > +        ldmfd r0, {r4-r13,pc}
> > +save:
> > +        .rept 11
> > +        .word 0
> > +        .endr
> > +
> >  /*
> >   * Local variables:
> >   * mode: ASM
> >
> >
> > Please, could anyone give me advice about this issue?
>
> Do you have any hardware debugging tools which could give some insight?
>
> Usually these things are down to either missing cache flushes or
> barriers, but tracking them down has historically been a total pain.
>
> >  And what do you
> > think about solution to exit from busy loop by timeout and restart
> > CPU1 bringing up sequence in this case?
>
> A timeout isn't a bad idea, although I would not be inclined to try
> again with the CPU in an indeterminate state.
>
> Either we should carry on without it or we should panic (which is more
> obvious than a hang). I think carrying on would be surprising (you'd
> only get half the processing power you expected).
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
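
[The timeout approach discussed above, bounding the busy loop and failing
loudly instead of retrying with the CPU in an indeterminate state, can be
sketched as follows. Names are illustrative, not Xen's actual SMP bring-up
code.]

```c
#include <stdbool.h>

/* Sketch of the timeout idea: poll a "CPU is online" flag for a bounded
 * number of iterations instead of spinning forever; on timeout the caller
 * can panic (more obvious than a silent hang) rather than retry with the
 * CPU in an indeterminate state.  Illustrative names only. */
static volatile bool cpu_online;

static bool wait_for_cpu_online(unsigned long max_iters)
{
    while (max_iters--) {
        if (cpu_online)
            return true;
        /* on real hardware a cpu_relax()/wfe and a barrier belong here */
    }
    return false;
}
```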

--089e0117769103cf0404f2ad8789--



--===============4060277682547822024==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 12:37:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFjv8-0004rZ-U7; Tue, 18 Feb 2014 12:37:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.anisov@globallogic.com>) id 1WFjv4-0004rK-MF
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:37:05 +0000
Received: from [85.158.139.211:65487] by server-3.bemta-5.messagelabs.com id
	AB/0F-13671-DE353035; Tue, 18 Feb 2014 12:37:01 +0000
X-Env-Sender: andrii.anisov@globallogic.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392727017!104409!1
X-Originating-IP: [64.18.0.188]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_60_70,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29605 invoked from network); 18 Feb 2014 12:36:59 -0000
Received: from exprod5og109.obsmtp.com (HELO exprod5og109.obsmtp.com)
	(64.18.0.188)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 12:36:59 -0000
Received: from mail-la0-f41.google.com ([209.85.215.41]) (using TLSv1) by
	exprod5ob109.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwNT6ObgfQMR6trjTN7bnwEOU9KLMegv@postini.com;
	Tue, 18 Feb 2014 04:36:59 PST
Received: by mail-la0-f41.google.com with SMTP id mc6so12197920lab.14
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 04:36:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=PQ+MD7kHMACgCECKQ10/GAx5CQUr2eRDshD753phAUA=;
	b=V27V3uB3Yi+v7k8LkgjgXpKLLKyWAnaMzyavU6tiJu6Fsxb5W+2SgHvffFOnhb9Ipx
	KM0dYrJAgaLSBOK1i59i/DsaIECFI+k08LczIm9cCUkOuYFUh6WXNIc0Lv/lat+muT0M
	7jQnOVXZxMKIrgA3ZJl2XBFHik4XV+jb+fU/A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=PQ+MD7kHMACgCECKQ10/GAx5CQUr2eRDshD753phAUA=;
	b=cWgiPxPnOQ4Vjd9ngvHf0pc4fce4Wh/bF9Oz/Gh38L7P1IHOp1nHJlLK6qgC0FNKab
	CGxI22yUQS8Z4GUN5OwPDuSWb+N9c6MMTmNHni36tAi8bRyE/tbbxO1Qcy+4q0JSn9V6
	U3LFx1QziVuJs3c36pupJlcet69UJgRkRi9ZNfo7dnjtOxBujND77DhiNyX3yd3R5VN2
	gCr6qN68EgBdUw3BEZ4mBBFTTXj/uQ5ohMEABTsnwavn2zefYgqMRi6r3p+YXdJVfPT1
	A03AX+lJ59kxz7iV8PnQxJuN7aujowEQ5FjQCDYQk+jvCx0d6fjTlxxW5WwfYjiTX/HO
	zUvg==
X-Gm-Message-State: ALoCoQkO2Ejs5H5pS6S5LqA93H9d79B37vnpQzIibjbp54+gIv/JqDE5SxFFFO0kzVBqoQJWWe8AViRq4oc1ABuQXxqlJ4Gu8yZwkfWL7rvrL3tXHR2RI3QOjKRGShMvYW4s5KMIFoco5HYBdoRyMwYeiT8FJYhsMcLRBhrzjSRKbwhXg3n65Mc=
X-Received: by 10.152.170.232 with SMTP id ap8mr1714109lac.40.1392727015428;
	Tue, 18 Feb 2014 04:36:55 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.152.170.232 with SMTP id ap8mr1714103lac.40.1392727015274;
	Tue, 18 Feb 2014 04:36:55 -0800 (PST)
Received: by 10.114.18.193 with HTTP; Tue, 18 Feb 2014 04:36:55 -0800 (PST)
In-Reply-To: <1392725646.11080.47.camel@kazak.uk.xensource.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 14:36:55 +0200
Message-ID: <CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
From: Andrii Anisov <andrii.anisov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4060277682547822024=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4060277682547822024==
Content-Type: multipart/alternative; boundary=089e0117769103cf0404f2ad8789

--089e0117769103cf0404f2ad8789
Content-Type: text/plain; charset=ISO-8859-1

>
> > +GLOBAL(enter_hyp_mode)
> > +enter_hyp_mode:
> > +        adr   r0, save
> > +        stmea r0, {r4-r13,lr}
> > +        ldr   r12, =0x102
> > +        adr   r0, hyp_return
> > +        dsb
> > +        isb
> > +        dmb
> > +        smc   #0
>
> Who/what implements this handler?
>

Ian, this handler is implemented by ROM code, and this is the common OMAP
sequence for switching to HYP mode. On our side we decided to leave the
switch to hyp mode in Xen for now.

> Do you have any hardware debugging tools which could give some insight?
>

Yep, we have one (TI's Code Composer Studio with an STM560v2 JTAG), but it
has no proper HYP mode debug support yet; TI says it will in 6 months or
so :( So the only thing we can do with it is stop the CPU at some moment and
look at some registers, with no breakpoints or stepping.

What we have discovered so far is that the last instruction executed by CPU1
before the hang is

mcr   CP32(r0, HSCTLR)       /* now paging is enabled */

After this, PC contains 0x00000004 and CPSR.M is 0b11010, which is HYP mode,
not abort mode.
It looks like we have broken MMU translation.

> Usually these things are down to either missing cache flushes or barriers,
> but tracking them down has historically been a total pain.
>
I suspected missing flushes during the preparation of CPU1's MMU tables, but
that code looks correct; I do not see any issues there.
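For reference, if a flush does turn out to be missing, the usual ARMv7 pattern is to clean the freshly written translation tables to the point of coherency before the secondary CPU enables its MMU. A sketch only (the registers, label, and 64-byte line size are assumptions, not the actual Xen code):

```asm
        /* r0 = start of CPU1's translation tables, r1 = end */
1:      mcr   p15, 0, r0, c7, c10, 1    /* DCCMVAC: clean D-cache line by MVA to PoC */
        add   r0, r0, #64               /* assumed cache line size */
        cmp   r0, r1
        blo   1b
        dsb                             /* complete the cleans before ... */
        isb                             /* ... the HSCTLR write that enables the MMU */
```

Whether this is needed at all depends on whether the CPU's table walker reads through the data caches, which varies by implementation.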

Andrii Anisov | Software Engineer
GlobalLogic
Kyiv, 03038, Protasov Business Park, M.Grinchenka, 2/1
P +38.044.492.9695x3664  M +380505738852  S andriyanisov
www.globallogic.com
http://www.globallogic.com/email_disclaimer.txt


On Tue, Feb 18, 2014 at 2:14 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Tue, 2014-02-18 at 13:44 +0200, Oleksandr Tyshchenko wrote:
>
> > +GLOBAL(enter_hyp_mode)
> > +enter_hyp_mode:
> > +        adr   r0, save
> > +        stmea r0, {r4-r13,lr}
> > +        ldr   r12, =0x102
> > +        adr   r0, hyp_return
> > +        dsb
> > +        isb
> > +        dmb
> > +        smc   #0
>
> Who/what implements this handler?
>
> > +hyp_return:
> > +        adr   r0, save
> > +        ldmfd r0, {r4-r13,pc}
> > +save:
> > +        .rept 11
> > +        .word 0
> > +        .endr
> > +
> >  /*
> >   * Local variables:
> >   * mode: ASM
> >
> >
> > Please, could anyone give me advice about this issue?
>
> Do you have any hardware debugging tools which could give some insight?
>
> Usually these things are down to either missing cache flushes or
> barriers, but tracking them down has historically been a total pain.
>
> >  And what do you
> > think about solution to exit from busy loop by timeout and restart
> > CPU1 bringing up sequence in this case?
>
> A timeout isn't a bad idea, although I would not be inclined to try
> again with the CPU in an indeterminate state.
>
> Either we should carry on without it or we should panic (which is more
> obvious than a hang). I think carrying on would be surprising (you'd
> only get half the processing power you expected).
>
> Ian.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--089e0117769103cf0404f2ad8789--


--===============4060277682547822024==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4060277682547822024==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 12:42:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:42:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk0Y-0005Ah-Ti; Tue, 18 Feb 2014 12:42:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WFk0Y-0005Aa-0V
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:42:42 +0000
Received: from [85.158.143.35:7764] by server-2.bemta-4.messagelabs.com id
	23/2F-10891-14553035; Tue, 18 Feb 2014 12:42:41 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392727359!6519871!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29307 invoked from network); 18 Feb 2014 12:42:40 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-3.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 12:42:40 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 18 Feb 2014 04:42:39 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,501,1389772800"; d="scan'208";a="477011360"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga001.fm.intel.com with ESMTP; 18 Feb 2014 04:42:38 -0800
Received: from fmsmsx156.amr.corp.intel.com (10.18.116.74) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 04:42:38 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx156.amr.corp.intel.com (10.18.116.74) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 04:42:38 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Tue, 18 Feb 2014 20:42:35 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [PATCH V4.1] mcheck,vmce: Allow vmce_amd_* functions to handle
	AMD  thresolding MSRs
Thread-Index: AQHPLKYArpjR5j3ySEeyNVNl8C2Pk5q69E7g
Date: Tue, 18 Feb 2014 12:42:35 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335014F8CC1@SHSMSX101.ccr.corp.intel.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
	<530338D5020000780011D258@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
	<53034BC1020000780011D371@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7A4D@SHSMSX101.ccr.corp.intel.com>
	<530361CB020000780011D475@nat28.tlf.novell.com>
In-Reply-To: <530361CB020000780011D475@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote:
>>>> On 18.02.14 at 12:42, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>> Jan Beulich wrote:
>>>>>> On 18.02.14 at 11:52, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>> wrote: 
>>>>>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>>>>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>>>>>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu
>>>>>>> *v, uint32_t msr, uint64_t *val)
>>>>>>> 
>>>>>>>      *val = 0;
>>>>>>> 
>>>>>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>>>>> +    /* Allow only first 3 MC banks into switch() */
>>>> 
>>>> I don't think this comments is good here. Remove it is better.
>>> 
>>> I had asked for this to be removed again too. I'm really thinking
>>> that V3 is what we should go with.
>> 
>> V3 is fine, except adding comments for '-MSR_IA32_MC0_CTL' is
>> slightly better.
> 
> Can I read this as an ack then (I already explained elsewhere
> why I think a comment there is rather pointless)?
> 
> Jan

Yes, please.

Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:48:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk5a-0005Ix-LA; Tue, 18 Feb 2014 12:47:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WFk5Y-0005Ip-Vb
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:47:53 +0000
Received: from [85.158.139.211:57921] by server-14.bemta-5.messagelabs.com id
	89/86-27598-87653035; Tue, 18 Feb 2014 12:47:52 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392727671!4657244!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21572 invoked from network); 18 Feb 2014 12:47:51 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:47:51 -0000
Received: by mail-wg0-f47.google.com with SMTP id k14so3275858wgh.14
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 04:47:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=6bUKzoOEp6qtY0nYO88ck0Xes53LKqH4O9S8HxvnqbE=;
	b=EsKZQdrY4R0Lq6sDDH4K2xRbh65Fi8Sk0YW8jguocxpn1ajGuHkXVPt2af1NUoaq6F
	Am+cMS8PKy2J/Jr/xoNzUGa32fk7VlF2ysc83YKCWIHLr/40QXU775G/dheQRCGlMDOl
	KIgLTicHGsHbVzMQ6o2zpXmgpqeHuek5gZ390vAz7EyCEm4NlDRAToAgXU8wNSyG2ok4
	BpZ2912K82Jx4P4pcjzHBztTcUQqrHBfAX/AfBzpHcpPyuyeoBePhbf+iKHnl47/ZP2F
	bGmCbre2W++UVyX+c8BiYkrU+PUWdxOER5Xltf34sf3DPhkv2Cjq72Ah05GcgeW46n2k
	ILXA==
MIME-Version: 1.0
X-Received: by 10.194.109.68 with SMTP id hq4mr22789949wjb.12.1392727671192;
	Tue, 18 Feb 2014 04:47:51 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 18 Feb 2014 04:47:51 -0800 (PST)
In-Reply-To: <1390411039.32296.8.camel@hamster.uk.xensource.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
Date: Tue, 18 Feb 2014 12:47:51 +0000
X-Google-Sender-Auth: T5ey_bQR4V-s3tOTCkrYI_TjDeg
Message-ID: <CAFLBxZY4KfTvpUBVF9-9Xu9dyjirCWBtppZVHMofEyNk-h_c1w@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
	mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 5:17 PM, Frediano Ziglio
<frediano.ziglio@citrix.com> wrote:
> From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
> From: Frediano Ziglio <frediano.ziglio@citrix.com>
> Date: Wed, 22 Jan 2014 10:48:50 +0000
> Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> These lines (in mctelem_reserve)
>
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>
> are racy. After you read the newhead pointer it can happen that another
> flow (thread or recursive invocation) change all the list but set head
> with same value. So oldhead is the same as *freelp but you are setting
> a new head that could point to whatever element (even already used).
>
> This patch use instead a bit array and atomic bit operations.
>
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>

What is this like from a release perspective?  When is this code run,
and how often is the bug triggered?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:48:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk5a-0005Ix-LA; Tue, 18 Feb 2014 12:47:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WFk5Y-0005Ip-Vb
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:47:53 +0000
Received: from [85.158.139.211:57921] by server-14.bemta-5.messagelabs.com id
	89/86-27598-87653035; Tue, 18 Feb 2014 12:47:52 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392727671!4657244!1
X-Originating-IP: [74.125.82.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21572 invoked from network); 18 Feb 2014 12:47:51 -0000
Received: from mail-wg0-f47.google.com (HELO mail-wg0-f47.google.com)
	(74.125.82.47)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:47:51 -0000
Received: by mail-wg0-f47.google.com with SMTP id k14so3275858wgh.14
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 04:47:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=6bUKzoOEp6qtY0nYO88ck0Xes53LKqH4O9S8HxvnqbE=;
	b=EsKZQdrY4R0Lq6sDDH4K2xRbh65Fi8Sk0YW8jguocxpn1ajGuHkXVPt2af1NUoaq6F
	Am+cMS8PKy2J/Jr/xoNzUGa32fk7VlF2ysc83YKCWIHLr/40QXU775G/dheQRCGlMDOl
	KIgLTicHGsHbVzMQ6o2zpXmgpqeHuek5gZ390vAz7EyCEm4NlDRAToAgXU8wNSyG2ok4
	BpZ2912K82Jx4P4pcjzHBztTcUQqrHBfAX/AfBzpHcpPyuyeoBePhbf+iKHnl47/ZP2F
	bGmCbre2W++UVyX+c8BiYkrU+PUWdxOER5Xltf34sf3DPhkv2Cjq72Ah05GcgeW46n2k
	ILXA==
MIME-Version: 1.0
X-Received: by 10.194.109.68 with SMTP id hq4mr22789949wjb.12.1392727671192;
	Tue, 18 Feb 2014 04:47:51 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Tue, 18 Feb 2014 04:47:51 -0800 (PST)
In-Reply-To: <1390411039.32296.8.camel@hamster.uk.xensource.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
Date: Tue, 18 Feb 2014 12:47:51 +0000
X-Google-Sender-Auth: T5ey_bQR4V-s3tOTCkrYI_TjDeg
Message-ID: <CAFLBxZY4KfTvpUBVF9-9Xu9dyjirCWBtppZVHMofEyNk-h_c1w@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Frediano Ziglio <frediano.ziglio@citrix.com>
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
	mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Jan 22, 2014 at 5:17 PM, Frediano Ziglio
<frediano.ziglio@citrix.com> wrote:
> From 49b37906afef0981f318064f4cb53a3602bca50a Mon Sep 17 00:00:00 2001
> From: Frediano Ziglio <frediano.ziglio@citrix.com>
> Date: Wed, 22 Jan 2014 10:48:50 +0000
> Subject: [PATCH] MCE: Fix race condition in mctelem_reserve
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> These lines (in mctelem_reserve)
>
>         newhead = oldhead->mcte_next;
>         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>
> are racy. After you read the newhead pointer, another flow (a thread or
> a recursive invocation) can change the whole list yet leave the head
> with the same value. So oldhead still equals *freelp, but you are
> setting a new head that could point to any element (even one already in
> use). This is the classic ABA problem.
>
> This patch uses a bit array and atomic bit operations instead.
>
> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>

What is this like from a release perspective?  When is this code run,
and how often is the bug triggered?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:48:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:48:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk5v-0005Kd-1j; Tue, 18 Feb 2014 12:48:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WFk5s-0005KA-Ug
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 12:48:13 +0000
Received: from [193.109.254.147:20472] by server-2.bemta-14.messagelabs.com id
	A6/29-01236-C8653035; Tue, 18 Feb 2014 12:48:12 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392727690!5122312!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15287 invoked from network); 18 Feb 2014 12:48:10 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 12:48:10 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type;
	b=IWlIM03SQxF7oLeMfaigdQJZu7ij1hCN/+kGe2BYqNWgGRTTiUqvppUo
	idcimvCWXZhDRUuk7d11Z0PbAOKcETK9Yzyeq3C15qCnE9cJ7Sp056GFF
	ORJ39schtKGR0fSZvwF59GkhDadekc0LJxnk8cubgRcQxSWlm4LT57Gm+
	mzPv+MZfLl1iAjLUljrrIxT4jh7lVZbGv1ZjOmaXinBFCkAQ9RlfW3MED
	RcqUGynFJhrEKPyh79+v1OQ9GnJEx;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392727691; x=1424263691;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to;
	bh=L0VrBfecXK4l5478C+6EDlG2TTA7D7/HwnqoWDifXkI=;
	b=douz63Qupx4Y5MKa96KD7cxKvKv7XYU3cXVg0WxinmavPLo8LdvNrib4
	xYrJ0pxOS8YsFAiIKGk4gYSPQRxAofF0GHGp01WtU9HUTsTk0pnYJB+GF
	ayl6c2AyeY7fHuZ6i5IKrUWFHMzgM1CjG5jePUNjFSPWS7OWQnnCCj9HQ
	WWB5bNpyVcybw48zrxWxI/g3kGAXCY9RO+4Yy+vYQbsSDcxGxD130oqFk
	oz0GB+0fcjKu4uo0J/+A9CkcXPkP/;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,501,1389740400"; d="scan'208";a="159275133"
Received: from unknown (HELO abgdgate60u.abg.fsc.net) ([172.25.138.90])
	by dgate20u.abg.fsc.net with ESMTP; 18 Feb 2014 13:48:10 +0100
X-IronPort-AV: E=Sophos;i="4.97,501,1389740400"; d="scan'208";a="80334566"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdgate60u.abg.fsc.net with ESMTP; 18 Feb 2014 13:48:08 +0100
Message-ID: <53035689.4000602@ts.fujitsu.com>
Date: Tue, 18 Feb 2014 13:48:09 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
	<52FE21E4020000780011C6F5@nat28.tlf.novell.com>
In-Reply-To: <52FE21E4020000780011C6F5@nat28.tlf.novell.com>
Content-Type: multipart/mixed; boundary="------------050407090402060806020009"
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------050407090402060806020009
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 14.02.2014 14:02, Jan Beulich wrote:
>>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> On 14.02.2014 11:40, Jan Beulich wrote:
>>>>>> On 14.02.14 at 10:33, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>>> Debug registers are restored on vcpu switch only if db7 has any debug events
>>>> activated. This leads to problems in the following cases:
>>>>
>>>> - db0-3 are changed by the guest before events are set "active" in db7.
>>>>      In case of a vcpu switch between setting db0-3 and db7, db0-3 are
>>>>      lost. BTW: setting db7 before db0-3 is no option, as this could
>>>>      trigger debug interrupts due to stale db0-3 contents.
>>>>
>>>> - single stepping is used and vcpu switch occurs between the single
>>>>      step trap and reading of db6 in the guest. db6 contents (single
>>>>      step indicator) are lost in this case.
>>>
>>> Not exactly, at least not looking at how things are supposed to work:
>>> __restore_debug_registers() gets called when
>>> - context switching in (vmx_restore_dr())
>>> - injecting TRAP_debug

Okay, db0-3 seem to be preserved. I did a test modifying the registers without
activating any debug traps. Even under heavy vcpu scheduling load everything
was fine.

>>
>> Is this the case when the guest itself uses single stepping? Initially the
>> debug trap shouldn't cause a VMEXIT, I think.
>
> That looks like a bug, indeed - it's missing from the initially set
> exception_bitmap. Could you check whether adding this in
> construct_vmcs() addresses that part of the issue? (A proper fix
> would likely include further adjustments to the setting of this flag,
> e.g. clearing it alongside clearing the DR intercept.) But then
> again all of this already depends on cpu_has_monitor_trap_flag -
> if that's set on your system, maybe you could try suppressing its
> detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
> the optional feature set in vmx_init_vmcs_config())?

I currently have a test running with the attached patch (the bug was hit about
once every 3 hours; the test has now been running for about 4 hours without a
problem). The test machine is running the Xen 4.2.3 hypervisor from SLES11 SP3.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

--------------050407090402060806020009
Content-Type: text/x-patch;
 name="single-step.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="single-step.patch"

--- xen-4.2.3-testing.orig/xen/include/asm-x86/hvm/hvm.h	2014-02-14 19:05:59.000000000 +0100
+++ xen-4.2.3-testing/xen/include/asm-x86/hvm/hvm.h	2014-02-17 07:43:05.000000000 +0100
@@ -374,7 +374,8 @@ static inline int hvm_do_pmu_interrupt(s
         (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))))
 
 /* These exceptions must always be intercepted. */
-#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op))
+#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op) |\
+	(1 << TRAP_debug))
 
 /*
  * x86 event types. This enumeration is valid for:
--- xen-4.2.3-testing.orig/xen/arch/x86/hvm/vmx/vmcs.c	2014-02-17 07:48:43.000000000 +0100
+++ xen-4.2.3-testing/xen/arch/x86/hvm/vmx/vmcs.c	2014-02-17 10:16:25.000000000 +0100
@@ -168,7 +168,7 @@ static int vmx_init_vmcs_config(void)
            CPU_BASED_RDTSC_EXITING);
     opt = (CPU_BASED_ACTIVATE_MSR_BITMAP |
            CPU_BASED_TPR_SHADOW |
-           CPU_BASED_MONITOR_TRAP_FLAG |
+           /* CPU_BASED_MONITOR_TRAP_FLAG | */
            CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
     _vmx_cpu_based_exec_control = adjust_vmx_controls(
         "CPU-Based Exec Control", min, opt,
--- xen-4.2.3-testing.orig/xen/arch/x86/hvm/vmx/vmx.c	2014-02-18 08:04:23.000000000 +0100
+++ xen-4.2.3-testing/xen/arch/x86/hvm/vmx/vmx.c	2014-02-18 10:45:42.000000000 +0100
@@ -2646,7 +2646,11 @@ void vmx_vmexit_handler(struct cpu_user_
             HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
             write_debugreg(6, exit_qualification | 0xffff0ff0);
             if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
-                goto exit_and_crash;
+            {
+                __restore_debug_registers(v);
+                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
+                break;
+            }
             domain_pause_for_debugger();
             break;
         case TRAP_int3: 

--------------050407090402060806020009
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------050407090402060806020009--


From xen-devel-bounces@lists.xen.org Tue Feb 18 12:49:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk7G-0005Uz-Ia; Tue, 18 Feb 2014 12:49:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lausgans@gmail.com>) id 1WFk7F-0005Up-AQ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:49:37 +0000
Received: from [193.109.254.147:37311] by server-6.bemta-14.messagelabs.com id
	87/E9-03396-0E653035; Tue, 18 Feb 2014 12:49:36 +0000
X-Env-Sender: lausgans@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392727774!1380210!1
X-Originating-IP: [209.85.192.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27265 invoked from network); 18 Feb 2014 12:49:35 -0000
Received: from mail-pd0-f180.google.com (HELO mail-pd0-f180.google.com)
	(209.85.192.180)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:49:35 -0000
Received: by mail-pd0-f180.google.com with SMTP id x10so16262280pdj.11
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 04:49:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:content-transfer-encoding;
	bh=pRo77zBDPObZZe2HyGKaJSuHSzYFI7CY99VWLntFmJA=;
	b=vq9a62hqtY5Mk7AuUJ4XReLMHaf/40cEjQHK7Yd+osRImbdG/7i0YxPO4QA0J6yBg7
	sJLnSDTVFSMmKXsbgj8T3souuwbpiDNhwbCpbar+4b0i8NHXcsQqKnpz6s5N4abZhriV
	5YFcavzom6Nn0n4zhkoXsZucf+LPJTXRJyRlXEVhn/4RYcAI0oNAtqL8xeFP11B0v3f5
	0qiG6ijf6ksfB0q4YRwEaGI4H0oZFCgvmr93QsEDMrbDeh4APk9QzfXhS+Eh3f1olKNv
	TiTt24+s0l5vydVJx9PtxpahOTmQa+jLdimSlraId9XbcQzweToi7R+r7JY7qfx1RkSP
	nRdg==
MIME-Version: 1.0
X-Received: by 10.68.136.162 with SMTP id qb2mr33184431pbb.88.1392727773688;
	Tue, 18 Feb 2014 04:49:33 -0800 (PST)
Received: by 10.68.220.97 with HTTP; Tue, 18 Feb 2014 04:49:33 -0800 (PST)
In-Reply-To: <A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
	<alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
	<1392718898.11080.16.camel@kazak.uk.xensource.com>
	<A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
Date: Tue, 18 Feb 2014 16:49:33 +0400
Message-ID: <CANoehN9RHr8P2VLZVmyFUJrBzKqD55H4v4OJ01QwX3xKWV7ooA@mail.gmail.com>
From: Held Bier <lausgans@gmail.com>
To: Francesco Gringoli <francesco.gringoli@ing.unibs.it>
Cc: Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-18 15:48 GMT+04:00 Francesco Gringoli <francesco.gringoli@ing.unibs.it>:
> On Feb 18, 2014, at 11:21 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>
>> On Sun, 2014-02-16 at 12:56 +0000, Stefano Stabellini wrote:
>>> On Fri, 14 Feb 2014, Francesco Gringoli wrote:
>>>> Hello guys,
>>>>
>>>> "building" on the great work done by Anthony I was finally able to boot xen and dom0 on the Chromebook with display support.
>>>>
>>>> Basically everything was already done by Anthony; the only problems were the dom0 .config file missing some options and a couple of files in arch/arm/mach-exynos which were crashing the boot process (while reading the dtb). Actually I was not able to address the problem in a clever way, but I patched it so the boot does not crash (almost) anymore.
>>>>
>>>> What I get is dom0 booting and working; boot logs appear on the display as when running archlinux natively. The built-in keyboard does not work, as pointed out by Anthony, but an external one does, so it is possible to log in and check that /proc/xen is populated. There are two main issues:
>>>>
>>>> 1) sometimes (rarely) the boot crashes, but late, e.g., after 2 secs
>>>> 2) after a few minutes something weird happens, and it is no longer possible to cat files' contents, although ls + cd etc. keep working.
>>>>
>>>> I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I can not edit it.
>>>
>>> Great job!
>>
>> Yes, very nice.
>>
>>> Please do request access and update the wiki.
>>>
>>> Anthony's branches are quite old now, many bugs have been discovered and
>>> fixed in the upstream Xen and Linux trees in the meantime.
>>> Although I expect that updating the Xen tree could be difficult at this
>>> point, it is probably worth it from the stability point of view.
>>
>> I agree, ideally the tree would be rebased and whatever needed to be
>> (and could be) would be upstreamed. I've no idea what the divergence is
>> like, but at least in principle supporting a "new" platform with the
>> modern mainline code ought to be loads easier than it was back when
>> Anthony started work on this stuff.

> I updated the page. For the moment I'm working on the latest Xen only, to make it boot the old Linux tree described on the page.
>
> Keep you posted.

Big kudos for your work, and thanks for keeping at it!

> -Francesco

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:49:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk7G-0005Uz-Ia; Tue, 18 Feb 2014 12:49:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lausgans@gmail.com>) id 1WFk7F-0005Up-AQ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:49:37 +0000
Received: from [193.109.254.147:37311] by server-6.bemta-14.messagelabs.com id
	87/E9-03396-0E653035; Tue, 18 Feb 2014 12:49:36 +0000
X-Env-Sender: lausgans@gmail.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392727774!1380210!1
X-Originating-IP: [209.85.192.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27265 invoked from network); 18 Feb 2014 12:49:35 -0000
Received: from mail-pd0-f180.google.com (HELO mail-pd0-f180.google.com)
	(209.85.192.180)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:49:35 -0000
Received: by mail-pd0-f180.google.com with SMTP id x10so16262280pdj.11
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 04:49:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type:content-transfer-encoding;
	bh=pRo77zBDPObZZe2HyGKaJSuHSzYFI7CY99VWLntFmJA=;
	b=vq9a62hqtY5Mk7AuUJ4XReLMHaf/40cEjQHK7Yd+osRImbdG/7i0YxPO4QA0J6yBg7
	sJLnSDTVFSMmKXsbgj8T3souuwbpiDNhwbCpbar+4b0i8NHXcsQqKnpz6s5N4abZhriV
	5YFcavzom6Nn0n4zhkoXsZucf+LPJTXRJyRlXEVhn/4RYcAI0oNAtqL8xeFP11B0v3f5
	0qiG6ijf6ksfB0q4YRwEaGI4H0oZFCgvmr93QsEDMrbDeh4APk9QzfXhS+Eh3f1olKNv
	TiTt24+s0l5vydVJx9PtxpahOTmQa+jLdimSlraId9XbcQzweToi7R+r7JY7qfx1RkSP
	nRdg==
MIME-Version: 1.0
X-Received: by 10.68.136.162 with SMTP id qb2mr33184431pbb.88.1392727773688;
	Tue, 18 Feb 2014 04:49:33 -0800 (PST)
Received: by 10.68.220.97 with HTTP; Tue, 18 Feb 2014 04:49:33 -0800 (PST)
In-Reply-To: <A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
References: <CD9258C7-8ACA-4654-B830-99CE2B899142@ing.unibs.it>
	<alpine.DEB.2.02.1402161253340.4307@kaball.uk.xensource.com>
	<1392718898.11080.16.camel@kazak.uk.xensource.com>
	<A950DC8F-A2A3-485C-B86C-F9CCB146EA65@ing.unibs.it>
Date: Tue, 18 Feb 2014 16:49:33 +0400
Message-ID: <CANoehN9RHr8P2VLZVmyFUJrBzKqD55H4v4OJ01QwX3xKWV7ooA@mail.gmail.com>
From: Held Bier <lausgans@gmail.com>
To: Francesco Gringoli <francesco.gringoli@ing.unibs.it>
Cc: Xen Devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Booting XEN on Samsung ARM Chromebook with display
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-18 15:48 GMT+04:00 Francesco Gringoli <francesco.gringoli@ing.unibs.it>:
> On Feb 18, 2014, at 11:21 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
>
>> On Sun, 2014-02-16 at 12:56 +0000, Stefano Stabellini wrote:
>>> On Fri, 14 Feb 2014, Francesco Gringoli wrote:
>>>> Hello guys,
>>>>
>>>> "building" on the great work done by Anthony I was finally able to boot xen and dom0 on the Chromebook with display support.
>>>>
>>>> Basically everything was already done by Anthony, the only problem was the .config file of the dom0 missing some options and a couple of files in arch/arm/mach-exynos which were crashing the boot process (by reading the dtb). Actually I was not able to address the problem in a clever way, but I fixed it so boot does not crash (almost) anymore.
>>>>
>>>> What I get is dom0 booting and working, boot logs appear on the display like when running archlinux natively. The keyboard does not work as pointed out by Anthony but external does, so it is possible to log in and check /proc/xen is populated. There are two main issues
>>>>
>>>> 1) sometimes (very few) boot crashes, but late, e.g., after 2 secs
>>>> 2) after minutes something weird happens, and it is not anymore possible to cat files' content, although ls + cd etc keep working.
>>>>
>>>> I would like to add some files and info to the old wiki page (http://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Chromebook) but it seems I can not edit it.
>>>
>>> Great job!
>>
>> Yes, very nice.
>>
>>> Please do request access and update the wiki.
>>>
>>> Anthony's branches are quite old now, many bugs have been discovered and
>>> fixed in the upstream Xen and Linux trees in the meantime.
>>> Although I expect that updating the Xen tree could be difficult at this
>>> point, it is probably worth it from the stability point of view.
>>
>> I agree, ideally the tree would be rebased and whatever needed to be
>> (and could be) would be upstreamed. I've no idea what the divergence is
>> like, but at least in principal supporting a "new" platform with the
>> modern mainline code ought to be loads easier than it was back when
>> Anthony started work on this stuff.

> I updated the page. I'm starting to work on the latest Xen only (at the moment), to make it boot the old Linux tree described on the page.
>
> I'll keep you posted.

Big kudos for your work, and thanks for keeping at it!

> -Francesco

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:51:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFk9O-0005g4-4C; Tue, 18 Feb 2014 12:51:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFk9M-0005ft-Ap
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 12:51:48 +0000
Received: from [193.109.254.147:20028] by server-6.bemta-14.messagelabs.com id
	22/1D-03396-36753035; Tue, 18 Feb 2014 12:51:47 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392727535!5120983!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20103 invoked from network); 18 Feb 2014 12:45:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:45:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101725442"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 12:45:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 07:45:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFk3J-0002WU-8h;
	Tue, 18 Feb 2014 12:45:33 +0000
Date: Tue, 18 Feb 2014 12:45:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <530351B4.1010402@redhat.com>
Message-ID: <alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> On 18/02/2014 13:16, Stefano Stabellini wrote:
> > It looks like this series breaks disk unplug
> > (hw/ide/piix.c:pci_piix3_xen_ide_unplug).
> > 
> > I bisected it and the problem is caused by:
> > 
> > commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
> > Author: Igor Mammedov <imammedo@redhat.com>
> > Date:   Wed Feb 5 16:36:52 2014 +0100
> > 
> >     hw/pci: switch to a generic hotplug handling for PCIDevice
> > 
> >     make qdev_unplug()/device_set_realized() to call hotplug handler's
> >     plug/unplug methods if available and remove not needed anymore
> >     hot(un)plug handling from PCIDevice.
> > 
> >     In case if hotplug handler is not available, revert to the legacy
> >     hotplug method for compatibility with not yet converted buses.
> > 
> >     Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> >     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> >     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > 
> > 
> 
> What exactly breaks?

Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
of the email :-P).
It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
response to the guest writing to a magic ioport specifically to unplug
the emulated disk.
With this patch, after the guest boots, I can still access both xvda and
sda for the same disk, leading to fs corruption.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:53:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkAm-0005nT-Jv; Tue, 18 Feb 2014 12:53:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFkAk-0005nF-Er
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 12:53:14 +0000
Received: from [85.158.139.211:57997] by server-15.bemta-5.messagelabs.com id
	99/C9-24395-9B753035; Tue, 18 Feb 2014 12:53:13 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392727991!108553!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29866 invoked from network); 18 Feb 2014 12:53:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:53:12 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101726806"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 12:53:11 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 07:53:10 -0500
Message-ID: <1392727989.11080.61.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrii Anisov <andrii.anisov@globallogic.com>
Date: Tue, 18 Feb 2014 12:53:09 +0000
In-Reply-To: <CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 14:36 +0200, Andrii Anisov wrote:
>         > +GLOBAL(enter_hyp_mode)
>         > +enter_hyp_mode:
>         > +        adr   r0, save
>         > +        stmea r0, {r4-r13,lr}
>         > +        ldr   r12, =0x102
>         > +        adr   r0, hyp_return
>         > +        dsb
>         > +        isb
>         > +        dmb
>         > +        smc   #0
>         
>         
>         Who/what implements this handler?
> 
> 
> Ian, this handler is implemented by ROM code, and this is the common
> OMAP sequence for switching to HYP mode. On our side we decided to leave
> the switch to HYP in Xen for now.

OK, fair enough. I was wondering if maybe it left some cache lines dirty
or left the caches enabled or something. It might be worth adding a full
cache flush (i.e. the loop over set/way stuff).

>         Do you have any hardware debugging tools which could give some
>         insight?
>  
> Yep, we have one (TI's Code Composer Studio with STM560v2 JTAG), but it
> has no proper HYP mode debug support yet; TI says it will in 6
> months or so :( So the only thing we can do with it is stop the CPU at
> some moment and look at some registers, with no breakpoints or stepping.

Hrm, I guess that might be sufficient to gain some insight.

> What we have discovered so far is that the last instruction executed by CPU1
> before the hang is
>         mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
> After this, PC contains 0x00000004 and CPSR.M is b11010, which is HYP mode,
> not abort mode.

I think this is correct for a trap taken from HYP mode -- you jump to
the corresponding vector but there is no actual mode change (since ABT
mode is PL1 and HYP mode is PL2 that would mean you dropped a privilege
level).

The patch at
http://lists.xen.org/archives/html/xen-devel/2013-09/msg00886.html might
help confirm this.

> It looks like we have broken MMU translation.

What do the fault status and fault address registers say?

Offset 0x4 is undefined instruction, not prefetch abort, which suggests
that there is at least some mapping present, but apparently not the
expected one, so the fetched instruction is invalid.

Is the debugger able to tell you what bytes it read instead of the real
instruction?

>         Usually these things are down to either missing cache flushes
>         or barriers, but tracking them down has historically been a
>         total pain.
> I suspected missing flushes during the preparation of CPU1's MMU tables,
> but that code looks correct; I do not see any issues there.

Right, that's why I was wondering about firmware leaving dirty cache
lines around.

On the Cortex-A15 we (actually, firmware) need to set a bit in the ACTLR
to enable cache coherency etc -- I suppose it is worth checking that the
OMAP doesn't have anything similar but I expect that this would be
handled in the firmware already.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:56:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:56:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkDV-0005zz-Oi; Tue, 18 Feb 2014 12:56:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFkDQ-0005zm-8S
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 12:56:00 +0000
Received: from [85.158.139.211:8515] by server-9.bemta-5.messagelabs.com id
	9C/90-11237-F5853035; Tue, 18 Feb 2014 12:55:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392728157!109171!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14692 invoked from network); 18 Feb 2014 12:55:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 12:55:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="103453844"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 12:55:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 07:55:56 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFkDL-00030J-R5;
	Tue, 18 Feb 2014 12:55:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFkDL-0004ks-NW;
	Tue, 18 Feb 2014 12:55:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25117-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 12:55:55 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25117: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25117 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25117/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2
baseline version:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25117 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25117/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2
baseline version:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 12:56:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 12:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkE0-000650-7J; Tue, 18 Feb 2014 12:56:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFkDy-00064e-87
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 12:56:34 +0000
Received: from [85.158.143.35:15034] by server-2.bemta-4.messagelabs.com id
	E7/27-10891-18853035; Tue, 18 Feb 2014 12:56:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392728188!6504292!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30698 invoked from network); 18 Feb 2014 12:56:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 12:56:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 12:56:28 +0000
Message-Id: <53036688020000780011D4B5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 12:56:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Juergen Gross" <juergen.gross@ts.fujitsu.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
	<52FE21E4020000780011C6F5@nat28.tlf.novell.com>
	<53035689.4000602@ts.fujitsu.com>
In-Reply-To: <53035689.4000602@ts.fujitsu.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 13:48, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> On 14.02.2014 14:02, Jan Beulich wrote:
>>>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>> Is this the case when the guest itself uses single stepping? Initially the
>>> debug trap shouldn't cause a VMEXIT, I think.
>>
>> That looks like a bug, indeed - it's missing from the initially set
>> exception_bitmap. Could you check whether adding this in
>> construct_vmcs() addresses that part of the issue? (A proper fix
>> would likely include further adjustments to the setting of this flag,
>> e.g. clearing it alongside clearing the DR intercept.) But then
>> again all of this already depends on cpu_has_monitor_trap_flag -
>> if that's set on your system, maybe you could try suppressing its
>> detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
>> the optional feature set in vmx_init_vmcs_config())?
> 
> I've currently a test running with the attached patch (the bug was hit about
> once every 3 hours, test is running now for about 4 hours without problem).
> Test machine is running with Xen 4.2.3 hypervisor from SLES11 SP3.

Which, if it continues running fine, would confirm the theory.
I'd like to defer to the VMX folks though for putting together a
proper fix then - I'd likely overlook some corner case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:08:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkPA-0006Z5-Eh; Tue, 18 Feb 2014 13:08:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WFkP9-0006Z0-2I
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 13:08:07 +0000
Received: from [85.158.143.35:34304] by server-2.bemta-4.messagelabs.com id
	CA/2C-10891-F1B53035; Tue, 18 Feb 2014 13:07:43 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392728862!6500811!1
X-Originating-IP: [213.75.39.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjc1LjM5LjE1ID0+IDY1OTY2NA==\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15208 invoked from network); 18 Feb 2014 13:07:42 -0000
Received: from cpsmtpb-ews10.kpnxchange.com (HELO
	cpsmtpb-ews10.kpnxchange.com) (213.75.39.15)
	by server-6.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 13:07:42 -0000
Received: from cpsps-ews27.kpnxchange.com ([10.94.84.193]) by
	cpsmtpb-ews10.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Tue, 18 Feb 2014 14:07:42 +0100
Received: from CPSMTPM-TLF103.kpnxchange.com ([195.121.3.6]) by
	cpsps-ews27.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Tue, 18 Feb 2014 14:07:42 +0100
Received: from [192.168.1.104] ([82.169.24.127]) by
	CPSMTPM-TLF103.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Tue, 18 Feb 2014 14:07:42 +0100
Message-ID: <1392728861.5144.10.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>
Date: Tue, 18 Feb 2014 14:07:41 +0100
In-Reply-To: <1392718467.30073.12.camel@x220>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 18 Feb 2014 13:07:42.0356 (UTC)
	FILETIME=[6855F540:01CF2CAA]
X-RcptDomain: lists.xenproject.org
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Richard Weinberger <richard@nod.at>
Subject: [Xen-devel] [PATCH v2] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch removes the Kconfig symbol XEN_PRIVILEGED_GUEST which is
used nowhere in the tree.

We do know grub2 has a script that greps kernel configuration files for
its macro. It shouldn't do that. As Linus summarized:
    This is a grub bug. It really is that simple. Treat it as one.

Besides, grub2's grepping for that macro is actually superfluous. See,
that script currently contains this test (simplified):
    grep -x CONFIG_XEN_DOM0=y $config || grep -x CONFIG_XEN_PRIVILEGED_GUEST=y $config

But since XEN_DOM0 and XEN_PRIVILEGED_GUEST are by definition equal,
removing XEN_PRIVILEGED_GUEST cannot influence this test.
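
The argument can be checked directly: `||` short-circuits, so when the first grep matches (and on a dom0-capable kernel CONFIG_XEN_DOM0=y is present exactly when CONFIG_XEN_PRIVILEGED_GUEST=y was), the second grep never runs at all. A minimal demonstration, using a throwaway config file rather than grub2's actual script:

```shell
# Build a config fragment the way a dom0 kernel would have it.
config=$(mktemp)
printf 'CONFIG_XEN_DOM0=y\n' > "$config"

# grub2's test (simplified): the first grep succeeds, so the
# PRIVILEGED_GUEST grep after '||' is never executed.
grep -qx 'CONFIG_XEN_DOM0=y' "$config" || \
    grep -qx 'CONFIG_XEN_PRIVILEGED_GUEST=y' "$config"
echo "exit status: $?"

rm -f "$config"
```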

So there's no reason to not remove this symbol, like we do with all
unused Kconfig symbols.

[pebolle@tiscali.nl: rewrote commit explanation.]
Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
---
v2: added a few lines to the commit explanation to show grub2's test for
this symbol is superfluous anyway. Still only git "grep tested".

 arch/x86/xen/Kconfig | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 01b9026..512219d 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -19,11 +19,6 @@ config XEN_DOM0
 	depends on XEN && PCI_XEN && SWIOTLB_XEN
 	depends on X86_LOCAL_APIC && X86_IO_APIC && ACPI && PCI
 
-# Dummy symbol since people have come to rely on the PRIVILEGED_GUEST
-# name in tools.
-config XEN_PRIVILEGED_GUEST
-	def_bool XEN_DOM0
-
 config XEN_PVHVM
 	def_bool y
 	depends on XEN && PCI && X86_LOCAL_APIC
-- 
1.8.5.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:09:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:09:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkQM-0006e7-Vr; Tue, 18 Feb 2014 13:09:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1WFkQL-0006de-4G
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 13:09:22 +0000
Received: from [85.158.143.35:48305] by server-2.bemta-4.messagelabs.com id
	61/BE-10891-67B53035; Tue, 18 Feb 2014 13:09:10 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392728948!6508546!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20732 invoked from network); 18 Feb 2014 13:09:09 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-7.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 13:09:09 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1ID921h004648
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 08:09:02 -0500
Received: from nial.usersys.redhat.com (dhcp-1-126.brq.redhat.com
	[10.34.1.126])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1ID8xY9026315; Tue, 18 Feb 2014 08:09:00 -0500
Date: Tue, 18 Feb 2014 14:08:58 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140218140858.4bcc511b@nial.usersys.redhat.com>
In-Reply-To: <alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S.
	Tsirkin" <mst@redhat.com>, QEMU Developers <qemu-devel@nongnu.org>,
	armbru@redhat.com, Anthony Liguori <aliguori@amazon.com>,
	Anthony.Perard@citrix.com, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PULL 00/20] acpi, pc,
	pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:09:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:09:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkQM-0006e7-Vr; Tue, 18 Feb 2014 13:09:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <imammedo@redhat.com>) id 1WFkQL-0006de-4G
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 13:09:22 +0000
Received: from [85.158.143.35:48305] by server-2.bemta-4.messagelabs.com id
	61/BE-10891-67B53035; Tue, 18 Feb 2014 13:09:10 +0000
X-Env-Sender: imammedo@redhat.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392728948!6508546!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20732 invoked from network); 18 Feb 2014 13:09:09 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-7.tower-21.messagelabs.com with SMTP;
	18 Feb 2014 13:09:09 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1ID921h004648
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 08:09:02 -0500
Received: from nial.usersys.redhat.com (dhcp-1-126.brq.redhat.com
	[10.34.1.126])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1ID8xY9026315; Tue, 18 Feb 2014 08:09:00 -0500
Date: Tue, 18 Feb 2014 14:08:58 +0100
From: Igor Mammedov <imammedo@redhat.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140218140858.4bcc511b@nial.usersys.redhat.com>
In-Reply-To: <alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S.
	Tsirkin" <mst@redhat.com>, QEMU Developers <qemu-devel@nongnu.org>,
	armbru@redhat.com, Anthony Liguori <aliguori@amazon.com>,
	Anthony.Perard@citrix.com, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PULL 00/20] acpi, pc,
	pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014 12:45:29 +0000
Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:

> On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > Il 18/02/2014 13:16, Stefano Stabellini ha scritto:
> > > It looks like that this series breaks disk unplug
> > > (hw/ide/piix.c:pci_piix3_xen_ide_unplug).
> > > 
> > > I bisected it and the problem is caused by:
> > > 
> > > commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
> > > Author: Igor Mammedov <imammedo@redhat.com>
> > > Date:   Wed Feb 5 16:36:52 2014 +0100
> > > 
> > >     hw/pci: switch to a generic hotplug handling for PCIDevice
> > > 
> > >     make qdev_unplug()/device_set_realized() to call hotplug handler's
> > >     plug/unplug methods if available and remove not needed anymore
> > >     hot(un)plug handling from PCIDevice.
> > > 
> > >     In case if hotplug handler is not available, revert to the legacy
> > >     hotplug method for compatibility with not yet converted buses.
> > > 
> > >     Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > >     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > >     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > 
> > > 
> > 
> > What exactly breaks?
> 
> Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> of the email :-P).
> It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> response to the guest writing to a magic ioport specifically to unplug
> the emulated disk.
> With this patch after the guest boots I can still access both xvda and
> sda for the same disk, leading to fs corruptions.
> 
Could you try the following debug patch?

diff --git a/hw/core/qdev.c b/hw/core/qdev.c
index 64b66e0..84aa8be 100644
--- a/hw/core/qdev.c
+++ b/hw/core/qdev.c
@@ -214,6 +214,7 @@ void qdev_unplug(DeviceState *dev, Error **errp)
         return;
     }
 
+    fprintf(stderr, "dc->hotpluggable %d\n", dc->hotpluggable);
     if (!dc->hotpluggable) {
         error_set(errp, QERR_DEVICE_NO_HOTPLUG,
                   object_get_typename(OBJECT(dev)));
@@ -223,8 +224,12 @@ void qdev_unplug(DeviceState *dev, Error **errp)
     qdev_hot_removed = true;
 
     if (dev->parent_bus && dev->parent_bus->hotplug_handler) {
+        fprintf(stderr, "bus name: %s, hotplug_handler: %s\n",
+                dev->parent_bus->name,
+                object_get_typename(OBJECT(dev->parent_bus->hotplug_handler)));
         hotplug_handler_unplug(dev->parent_bus->hotplug_handler, dev, errp);
     } else {
+        fprintf(stderr, "legacy unplug: %p\n", dc->unplug);
         assert(dc->unplug != NULL);
         if (dc->unplug(dev) < 0) { /* legacy handler */
             error_set(errp, QERR_UNDEFINED_ERROR);
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 1acd2b2..74b0cac 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -238,6 +238,7 @@ static void pc_init1(QEMUMachineInitArgs *args,
     if (pci_enabled && acpi_enabled) {
         i2c_bus *smbus;
 
+       fprintf(stderr, "create piix4_pm_init\n");
         smi_irq = qemu_allocate_irqs(pc_acpi_smi_interrupt, first_cpu, 1);
         /* TODO: Populate SPD eeprom data.  */
         smbus = piix4_pm_init(pci_bus, piix3_devfn + 3, 0xb100,


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:11:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkSB-0006pQ-Hz; Tue, 18 Feb 2014 13:11:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WFkS9-0006ot-CG
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 13:11:13 +0000
Received: from [193.109.254.147:18757] by server-2.bemta-14.messagelabs.com id
	E3/D6-01236-3EB53035; Tue, 18 Feb 2014 13:10:59 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392729058!5105155!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5159 invoked from network); 18 Feb 2014 13:10:59 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-2.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 13:10:59 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1IDArp7024949
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 08:10:53 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-51.ams2.redhat.com
	[10.36.112.51])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1IDAnWH014126; Tue, 18 Feb 2014 08:10:50 -0500
Message-ID: <53035BD8.6050805@redhat.com>
Date: Tue, 18 Feb 2014 14:10:48 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> of the email :-P).
> It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> response to the guest writing to a magic ioport specifically to unplug
> the emulated disk.
> With this patch after the guest boots I can still access both xvda and
> sda for the same disk, leading to fs corruptions.

Ok, the last paragraph is what I was missing.

So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a 
hotplug handler, dc->unplug is not called anymore.

But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't 
free the device, it just drops the disks underneath.  I think the 
simplest solution is to _not_ make it a dc->unplug callback at all, and 
call pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug. 
qdev_unplug means "ask guest to start unplug", which is not what Xen 
wants to do here.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:14:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkUr-00072O-8r; Tue, 18 Feb 2014 13:14:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WFkUq-00072H-Cl
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 13:14:00 +0000
Received: from [193.109.254.147:19995] by server-10.bemta-14.messagelabs.com
	id 5F/2D-10711-79C53035; Tue, 18 Feb 2014 13:13:59 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392729238!5101114!1
X-Originating-IP: [213.75.39.4]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTMuNzUuMzkuNCA9PiA1MjMxMjI=\n,sa_preprocessor: 
	QmFkIElQOiAyMTMuNzUuMzkuNCA9PiA1MjMxMjI=\n,ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30219 invoked from network); 18 Feb 2014 13:13:58 -0000
Received: from cpsmtpb-ews01.kpnxchange.com (HELO
	cpsmtpb-ews01.kpnxchange.com) (213.75.39.4)
	by server-16.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 13:13:58 -0000
Received: from cpsps-ews18.kpnxchange.com ([10.94.84.184]) by
	cpsmtpb-ews01.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Tue, 18 Feb 2014 14:13:58 +0100
Received: from CPSMTPM-TLF102.kpnxchange.com ([195.121.3.5]) by
	cpsps-ews18.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Tue, 18 Feb 2014 14:13:58 +0100
Received: from [192.168.1.104] ([82.169.24.127]) by
	CPSMTPM-TLF102.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Tue, 18 Feb 2014 14:13:58 +0100
Message-ID: <1392729238.5144.14.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, David Vrabel <david.vrabel@citrix.com>,
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H.
	Peter Anvin" <hpa@zytor.com>
Date: Tue, 18 Feb 2014 14:13:58 +0100
In-Reply-To: <1392728861.5144.10.camel@x220>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220> <1392728861.5144.10.camel@x220>
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 18 Feb 2014 13:13:58.0607 (UTC)
	FILETIME=[48994DF0:01CF2CAB]
X-RcptDomain: lists.xenproject.org
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Richard Weinberger <richard@nod.at>
Subject: Re: [Xen-devel] [PATCH v2] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here I should have added:

From:  Michael Opdenacker <michael.opdenacker@free-electrons.com>

in order for Michael to show up as author of the patch. 

On Tue, 2014-02-18 at 14:07 +0100, Paul Bolle wrote:
> This patch removes the Kconfig symbol XEN_PRIVILEGED_GUEST which is
> used nowhere in the tree.
> 
> We do know grub2 has a script that greps kernel configuration files for
> its macro. It shouldn't do that. As Linus summarized:
>     This is a grub bug. It really is that simple. Treat it as one.
> 
> Besides, grub2's grepping for that macro is actually superfluous. See,
> that script currently contains this test (simplified):
>     grep -x CONFIG_XEN_DOM0=y $config || grep -x CONFIG_XEN_PRIVILEGED_GUEST=y $config
> 
> But since XEN_DOM0 and XEN_PRIVILEGED_GUEST are by definition equal,
> removing XEN_PRIVILEGED_GUEST cannot influence this test.
> 
> So there's no reason to not remove this symbol, like we do with all
> unused Kconfig symbols.
> 
> [pebolle@tiscali.nl: rewrote commit explanation.]
> Signed-off-by: Michael Opdenacker <michael.opdenacker@free-electrons.com>
> Signed-off-by: Paul Bolle <pebolle@tiscali.nl>


Paul Bolle


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:24:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkeF-0007Ld-ME; Tue, 18 Feb 2014 13:23:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.anisov@globallogic.com>) id 1WFkeD-0007LY-K7
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 13:23:42 +0000
Received: from [85.158.139.211:59912] by server-11.bemta-5.messagelabs.com id
	2F/87-23886-CDE53035; Tue, 18 Feb 2014 13:23:40 +0000
X-Env-Sender: andrii.anisov@globallogic.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392729816!115685!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29599 invoked from network); 18 Feb 2014 13:23:38 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 13:23:38 -0000
Received: from mail-lb0-f177.google.com ([209.85.217.177]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwNe1xXgAT1jrrUfqoDkmey8P01dyIxS@postini.com;
	Tue, 18 Feb 2014 05:23:38 PST
Received: by mail-lb0-f177.google.com with SMTP id 10so10532401lbg.8
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 05:23:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=UYB/8ydLNJeYBwgiQz8P2m2QSxHdsBwg9JUveChAInA=;
	b=Wr8loFCJ6g3OjZfCZPZilvSr6CjdFN74p7chXWCQAKb0YUMYUKhLq3vNcHZcOsof3Z
	uIphDjGygKdUS9axB4uv3MJY09tPpidhxKlciZmirt2rZwKR2r33YI2Shiz06UAJDkdc
	qxNVOprvFYu02DKzsj86qYPplDtNzZLC1pukI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=UYB/8ydLNJeYBwgiQz8P2m2QSxHdsBwg9JUveChAInA=;
	b=jjwu6mpR6SI3gwUXFJ6W+IXYPL/5jcMd1RKJk18mxxt9V8yDaEN/o3ikOSFLyz0SFm
	P03ml9O4S0j1xULmH1zXXsGl7Gl9KB8if9ovDaLWXqXNJKk4/Jc+LNnY1ucODkmm60Mt
	Yo/eTruJtwzk4ay8HGD/bvb3XesL8ttUAz9xtXM0HdIduH/JLd8+94VY+AzspID5FCop
	08wgmgROFFfD7KD9ar9AebwcivWJlNluyTRtd+lLu5bWoTjElmDQxyn2GDd2zUFrP51q
	vfCYqY4PlDkB+89d8Y5HQYOBV5HFEf+wBkb3JcnuUaPmAOwWZVCcGaSvoAxhF0m0pPvh
	9dyw==
X-Gm-Message-State: ALoCoQlg7D3ndtezvnljajOshq0w4ljj0LaCNFz18nCN4BF++IT1vUxLyv6HBc+6gZlxqZSHgJA733NlzA5rMpT/xGpzWe5/SGriutI0W91YuVTxmSi1qfu6XIDrl5/GmIpSZFUJlxYSVCRqFDC0HeW++HLY+EFIS1UTFCmImupyhhxTkZugJ0U=
X-Received: by 10.152.164.199 with SMTP id ys7mr22101464lab.31.1392729814503; 
	Tue, 18 Feb 2014 05:23:34 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.152.164.199 with SMTP id ys7mr22101453lab.31.1392729814344; 
	Tue, 18 Feb 2014 05:23:34 -0800 (PST)
Received: by 10.114.18.193 with HTTP; Tue, 18 Feb 2014 05:23:34 -0800 (PST)
In-Reply-To: <1392727989.11080.61.camel@kazak.uk.xensource.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 15:23:34 +0200
Message-ID: <CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
From: Andrii Anisov <andrii.anisov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0471014000499440765=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0471014000499440765==
Content-Type: multipart/alternative; boundary=001a11349626da394304f2ae2dc0

--001a11349626da394304f2ae2dc0
Content-Type: text/plain; charset=ISO-8859-1

Ian,

First of all, unfortunately none of our team is very strong in ARM processor
low-level details (yet).
We will check your suggestions with the full cache flush and the early trap
debug patch.

I've just checked: the PC is 0xc (prefetch abort).

Andrii Anisov | Software Engineer
GlobalLogic
Kyiv, 03038, Protasov Business Park, M.Grinchenka, 2/1
P +38.044.492.9695x3664  M +380505738852  S andriyanisov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt


On Tue, Feb 18, 2014 at 2:53 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Tue, 2014-02-18 at 14:36 +0200, Andrii Anisov wrote:
> >         > +GLOBAL(enter_hyp_mode)
> >         > +enter_hyp_mode:
> >         > +        adr   r0, save
> >         > +        stmea r0, {r4-r13,lr}
> >         > +        ldr   r12, =0x102
> >         > +        adr   r0, hyp_return
> >         > +        dsb
> >         > +        isb
> >         > +        dmb
> >         > +        smc   #0
> >
> >
> >         Who/what implements this handler?
> >
> >
> > Ian, this handler is implemented by ROM code, and this is the common
> > OMAP sequence to switch to HYP mode. On our side we decided to leave
> > switch to hyp in XEN for now.
>
> OK, fair enough. I was wondering if maybe it left some cache lines dirty
> or left the caches enabled or something? It might be worth adding a full
> cache flush (i.e. the loop over set/way stuff).
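
For reference, the set/way flush mentioned here walks every cache level, set and
way and issues a clean+invalidate by set/way for each. A rough sketch of the
operand layout that instruction takes (the geometry values below are an example,
4-way with 64-byte lines, not read from a real cache-size register):

```python
import math

# Sketch of the ARMv7 DCCISW (D-cache clean+invalidate by set/way) operand:
# way index in the top bits, set index above the line-offset bits, cache
# level in bits [3:1]. Example geometry only, not queried from hardware.
def dccisw_operand(level, way, set_idx, num_ways, line_bytes):
    line_bits = int(math.log2(line_bytes))             # set index starts above the line offset
    way_bits = max(1, math.ceil(math.log2(num_ways)))  # bits needed to hold the way index
    return (way << (32 - way_bits)) | (set_idx << line_bits) | (level << 1)

# Level-0 D-cache, way 3, set 5 of a 4-way cache with 64-byte lines:
assert dccisw_operand(0, 3, 5, 4, 64) == 0xC0000140
```

The full flush loop simply iterates this over all levels, sets and ways reported
by the cache identification registers.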
>
> >         Do you have any hardware debugging tools which could give some
> >         insight?
> >
> > Yep, we have one (TI's Code Composer Studio with STM560v2 JTAG), but it
> > has no proper HYP mode debug support yet; TI says it will in 6
> > months or so :( So the only thing we can do with it is stop the CPU at
> > some moment and inspect some registers; no breakpoints or stepping.
>
> Hrm, I guess that might be sufficient to gain some insight.
>
> > What we have discovered so far is that the last instruction executed by
> > CPU1 before the hang is
> >         mcr   CP32(r0, HSCTLR)       /* now paging is enabled */
> > After this the PC contains 0x00000004 and CPSR.M is b11010, which is HYP
> > mode, not abort.
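
For reference, the b11010 value reported here can be checked against the
ARMv7-A CPSR.M[4:0] mode encodings; a small sanity-check sketch:

```python
# ARMv7-A CPSR.M[4:0] mode encodings (the subset relevant to this thread).
ARM_MODES = {
    0b10000: "USR", 0b10001: "FIQ", 0b10010: "IRQ", 0b10011: "SVC",
    0b10110: "MON", 0b10111: "ABT", 0b11010: "HYP", 0b11011: "UND",
    0b11111: "SYS",
}

# b11010 is the value observed on CPU1 after the hang: still HYP (PL2),
# not ABT (PL1), consistent with a trap taken to the Hyp vectors.
assert ARM_MODES[0b11010] == "HYP"
assert ARM_MODES[0b10111] == "ABT"
```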
>
> I think this is correct for a trap taken from HYP mode -- you jump to
> the corresponding vector but there is no actual mode change (since ABT
> mode is PL1 and HYP mode is PL2 that would mean you dropped a privilege
> level).
>
> The patch at
> http://lists.xen.org/archives/html/xen-devel/2013-09/msg00886.html might
> help confirm this.
>
> > It looks like we have broken MMU translation.
>
> What do the fault status and fault address registers say?
>
> Offset 0x4 is undefined instruction, not prefetch abort, which suggests
> that there is at least some mapping present, but apparently not the
> expected one, so the instruction fetched is invalid.
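
The offsets being discussed are the standard ARMv7 exception vector layout; the
Hyp vector table uses the same offsets relative to HVBAR. For reference, the two
PC values seen in this thread map as follows:

```python
# ARMv7 exception vector offsets (in Hyp mode, taken relative to HVBAR).
VECTOR_OFFSETS = {
    0x00: "reset / not used in Hyp",
    0x04: "undefined instruction",
    0x08: "SVC / HVC",
    0x0C: "prefetch abort",
    0x10: "data abort",
    0x14: "Hyp trap / Hyp mode entry",
    0x18: "IRQ",
    0x1C: "FIQ",
}

assert VECTOR_OFFSETS[0x04] == "undefined instruction"  # first report: PC == 0x4
assert VECTOR_OFFSETS[0x0C] == "prefetch abort"         # later corrected: PC == 0xc
```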
>
> Is the debugger able to tell you what bytes it read instead of the real
> instruction?
>
> >         Usually these things are down to either missing cache flushes
> >         or barriers, but tracking them down has historically been a
> >         total pain.
> > I suspected missing flushes during the CPU1 MMU table preparation, but
> > that code looks correct; I do not see any issues there.
>
> Right, that's why I was wondering about firmware leaving dirty cache
> lines around.
>
> On the Cortex-A15 we (actually, firmware) need to set a bit in the ACTLR
> to enable cache coherency etc -- I suppose it is worth checking that the
> OMAP doesn't have anything similar but I expect that this would be
> handled in the firmware already.
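
For the Cortex-A15 case mentioned, the bit in question is ACTLR.SMP (bit 6 per
the A15 TRM); a trivial check helper, assuming that layout:

```python
# Cortex-A15 ACTLR.SMP is bit 6 (per the A15 TRM); it must be set before
# the caches/MMU are enabled or coherent requests are not honoured.
# ACTLR is typically only writable from secure firmware, hence the point
# above about this normally being handled in firmware.
ACTLR_SMP = 1 << 6

def smp_coherency_enabled(actlr):
    return bool(actlr & ACTLR_SMP)

assert smp_coherency_enabled(0x40)       # SMP bit set
assert not smp_coherency_enabled(0x00)   # coherency would be off
```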
>
> Ian.
>
>
>

--001a11349626da394304f2ae2dc0--


--===============0471014000499440765==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0471014000499440765==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 13:35:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFkpB-0007ZL-3Y; Tue, 18 Feb 2014 13:35:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFkp8-0007ZG-UB
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 13:34:59 +0000
Received: from [193.109.254.147:42445] by server-15.bemta-14.messagelabs.com
	id 67/25-10839-28163035; Tue, 18 Feb 2014 13:34:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392730496!5107438!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12822 invoked from network); 18 Feb 2014 13:34:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 13:34:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101742099"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 13:34:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 08:34:55 -0500
Message-ID: <1392730494.11080.62.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrii Anisov <andrii.anisov@globallogic.com>
Date: Tue, 18 Feb 2014 13:34:54 +0000
In-Reply-To: <CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 15:23 +0200, Andrii Anisov wrote:
> Ian,
> 
> 
> First of all, unfortunately none of our team is very strong in ARM
> processor low-level details (yet).
> We will check your suggestions with the full cache flush and the early
> trap debug patch.

Thanks.

> I've just checked: the PC is 0xc (prefetch abort).

Good, that makes more sense!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 15:23 +0200, Andrii Anisov wrote:
> Ian,
> 
> 
> First of all, unfortunately none of our team are so strong in ARM
> processor low levels (yet).
> We will check you suggestions with full cache flush and early trap
> debug patch.

Thanks.

> I've checked right now, the PC is 0xc (prefetch abort).

Good, that makes more sense!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:37:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:37:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFks1-0007fQ-1m; Tue, 18 Feb 2014 13:37:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.anisov@globallogic.com>) id 1WFkrz-0007fK-Ox
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 13:37:56 +0000
Received: from [85.158.143.35:61936] by server-1.bemta-4.messagelabs.com id
	46/40-31661-23263035; Tue, 18 Feb 2014 13:37:54 +0000
X-Env-Sender: andrii.anisov@globallogic.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392730670!6534924!1
X-Originating-IP: [64.18.0.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25012 invoked from network); 18 Feb 2014 13:37:53 -0000
Received: from exprod5og104.obsmtp.com (HELO exprod5og104.obsmtp.com)
	(64.18.0.178)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 13:37:53 -0000
Received: from mail-la0-f41.google.com ([209.85.215.41]) (using TLSv1) by
	exprod5ob104.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwNiLqf9pkfAWq0QWZzQRsaspEr2adIc@postini.com;
	Tue, 18 Feb 2014 05:37:53 PST
Received: by mail-la0-f41.google.com with SMTP id mc6so12481578lab.0
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 05:37:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ICL983r7Otcy59eCojLLWPKSbUKmA/dvDdAHzSBNumg=;
	b=P/w7L2UOyVJzXT/wuKO/pDGYIuUX9WRz5fKFNrR3VT6I5Hwqu6vCIWjWq9/6+mc9pO
	LGSas+xiWnkkKA2UJLhV4qbDxymWTXIvOqr2lA1Tqi2fWFX5MtHcHU67RGy+UU8Oghf2
	+RplZ4ZKWc32tZb/M14g6ZmbQyjK/ko765/VE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ICL983r7Otcy59eCojLLWPKSbUKmA/dvDdAHzSBNumg=;
	b=RAVD+J9ZrMNHTO8ndduU4gQoiYCY2e/owt74nOVjTU1JEgUpYeJaykXS5Phl7DURnM
	8p0ER/Vkz/F0BdZvsIKMuvWQgSTmWsYXcO5HsJ0UxxfR2zH0c1BYiqB4sz61demowQQJ
	B4z1nIXeNs5eovz991xCFSqTMyCaG0zWE51mKtKS4dsAmcvqsKfzoGolkJ9gxrndcDfi
	oDp+x3KmcSpT+rrG3tvqG7R8+Ma4nh49Pz01a+IW9KfV2CWrsCimxXkA89h8UQ6vZEvi
	cDJXOVkgznqwhPiQfmPN0FWuqHQquCWVG3LIiJW0VyKr19Wd3yYmlXMJIh9ipGFdhQ/U
	wYPA==
X-Gm-Message-State: ALoCoQkacVbile1x15S1JMAAKuNDXzWAYQPlVwChJvf+q0vxGMNpj2BnZSHesPEwWctI0SMjf/m1zVHT2oPEG1M8LgUdgXzXuXqnw4b5Zf9CgVSPy0Qeq6dNAd3mi5Bcouz4moG2qNMLOFYk8cMGVrqa2z5vhpOVIIs4C7VBMkxLcuE5h+xo90Q=
X-Received: by 10.152.236.72 with SMTP id us8mr22030570lac.11.1392730663816;
	Tue, 18 Feb 2014 05:37:43 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.152.236.72 with SMTP id us8mr22030561lac.11.1392730663710;
	Tue, 18 Feb 2014 05:37:43 -0800 (PST)
Received: by 10.114.18.193 with HTTP; Tue, 18 Feb 2014 05:37:43 -0800 (PST)
In-Reply-To: <1392730494.11080.62.camel@kazak.uk.xensource.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 15:37:43 +0200
Message-ID: <CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
From: Andrii Anisov <andrii.anisov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0709589024700314380=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0709589024700314380==
Content-Type: multipart/alternative; boundary=001a1136c0ba7a855c04f2ae6090

--001a1136c0ba7a855c04f2ae6090
Content-Type: text/plain; charset=ISO-8859-1

>
> > I've checked right now, the PC is 0xc (prefetch abort).
>
> Good, that makes more sense!
>

Any additional suggestions, by any chance?
As I understand your early trap debug patch would not give a lot of help
here.

Andrii Anisov | Software Engineer
GlobalLogic
Kyiv, 03038, Protasov Business Park, M.Grinchenka, 2/1
P +38.044.492.9695x3664  M +380505738852  S andriyanisov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--001a1136c0ba7a855c04f2ae6090
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><blockquote class=3D"gmail_quote" style=3D"margin:0px 0px =
0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-l=
eft-style:solid;padding-left:1ex"><div class=3D"">&gt; I&#39;ve checked rig=
ht now, the PC is 0xc (prefetch abort).<br>
<br></div>Good, that makes more sense!<br></blockquote><div><br></div><div>=
Any additional suggestions, by any chance?</div><div>As I understand your e=
arly trap debug patch would not give a lot of help here.</div><div class=3D=
"gmail_extra">
<div><font size=3D"-1"><br><span style=3D"vertical-align:baseline;font-vari=
ant:normal;font-style:normal;font-size:12px;background-color:transparent;te=
xt-decoration:none;font-family:Arial;font-weight:bold">Andrii Anisov | Soft=
ware Engineer</span><br>
<span style=3D"vertical-align:baseline;font-variant:normal;font-style:norma=
l;font-size:12px;background-color:transparent;text-decoration:none;font-fam=
ily:Arial;font-weight:normal">GlobalLogic</span></font><div><span style=3D"=
color:rgb(34,34,34);font-size:13px;background-color:rgb(255,255,255)"><font=
 face=3D"arial, helvetica, sans-serif">Kyiv, 03038, Protasov Business Park,=
 M.Grinchenka, 2/1</font></span><font face=3D"Arial" style=3D"color:rgb(34,=
34,34);font-size:13px;background-color:rgb(255,255,255)"><span style=3D"fon=
t-size:12px"><br>
</span></font><span style=3D"font-size:12px;vertical-align:baseline;font-va=
riant:normal;font-style:normal;background-color:transparent;text-decoration=
:none;font-family:Arial;font-weight:normal">P +38.044.492.9695x3664=A0 M +3=
80505738852 =A0S andriyanisov</span><br>
<a href=3D"http://www.globallogic.com/" style=3D"font-size:small" target=3D=
"_blank"><span style=3D"font-size:12px;font-family:Arial;color:rgb(17,85,20=
4);background-color:transparent;font-weight:normal;font-style:normal;font-v=
ariant:normal;text-decoration:underline;vertical-align:baseline">www.global=
logic.com</span></a><span style=3D"vertical-align:baseline;font-variant:nor=
mal;font-style:normal;font-size:12px;text-decoration:none;font-family:Arial=
;font-weight:normal;background-color:transparent"></span><br>
<a href=3D"http://www.globallogic.com/" style=3D"font-size:small" target=3D=
"_blank"><span style=3D"font-size:12px;font-family:Arial;color:rgb(17,85,20=
4);background-color:transparent;font-weight:normal;font-style:normal;font-v=
ariant:normal;text-decoration:underline;vertical-align:baseline"></span></a=
><br>
<a href=3D"http://www.globallogic.com/email_disclaimer.txt" style=3D"font-s=
ize:small" target=3D"_blank"><span style=3D"font-size:11px;font-family:Aria=
l;color:rgb(17,85,204);background-color:transparent;font-weight:normal;font=
-style:normal;font-variant:normal;text-decoration:underline;vertical-align:=
baseline">http://www.globallogic.com/email_disclaimer.txt</span></a><span s=
tyle=3D"vertical-align:baseline;font-variant:normal;font-style:normal;font-=
size:11px;text-decoration:none;font-family:Arial;font-weight:normal;backgro=
und-color:transparent"></span></div>
</div>
<br></div></div>

--001a1136c0ba7a855c04f2ae6090--


--===============0709589024700314380==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0709589024700314380==--



From xen-devel-bounces@lists.xen.org Tue Feb 18 13:57:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlAD-0007xy-VZ; Tue, 18 Feb 2014 13:56:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WFlAC-0007xt-39
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 13:56:44 +0000
Received: from [85.158.137.68:60060] by server-17.bemta-3.messagelabs.com id
	1B/BE-22569-B9663035; Tue, 18 Feb 2014 13:56:43 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392731802!2648175!1
X-Originating-IP: [81.169.146.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7791 invoked from network); 18 Feb 2014 13:56:42 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.216)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 13:56:42 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392731802; l=1482;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=V+XnxekrTX4JyFSt4uzZe7+t2rE=;
	b=pGquoxxKBLc9dxhxx0QYzSNuw7FjeB/V3rc4Z4NnZnti+E3D6n6LEKO959sM1+2RTJk
	z1WeZ5CzQOnsMqG4lLonjFN8htqfsYfNcnLoBifbdfSQZH47V3oWTiA4EP+E9b9BYstGX
	hAkl3YtkqMiPDNQgnfLAdqkaV2T5e5FHupw=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id L030d4q1IDugb4e
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 18 Feb 2014 14:56:42 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 1774A5026A; Tue, 18 Feb 2014 14:56:42 +0100 (CET)
Date: Tue, 18 Feb 2014 14:56:41 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20140218135641.GB2804@aepfle.de>
References: <20140215221737.GA28254@aepfle.de>
	<1392718729.11080.13.camel@kazak.uk.xensource.com>
	<21251.17769.656791.139815@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <21251.17769.656791.139815@mariner.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, Ian Jackson wrote:

> Ian Campbell writes ("Re: [Xen-devel] missing dependency on libxlu_disk_l.h"):
> > It might be a good idea to either also patch the generated files or to
> > have the patch remove them, to avoid any possible confusion due to skew.
> This ought to be taken care of by the build system, provided you don't
> actually git commit only the change to .l and not the change to .[ch].
> In the final patch.

In my case the patch changes only the .l file. I expect that all
dependencies are written into the Makefile.

> Olaf, can you test whether this diff makes the problem go away for you ?

Ian, xen.rpm is rebuilt often, but the failure happened exactly once.
I will see if I can force it to fail without the change below.

Olaf

> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index dab2929..755b666 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -98,7 +98,7 @@ TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
>  $(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
>  
>  AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
> -	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
> +	libxlu_disk_l.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
>  AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
>  AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
>  LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 13:58:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 13:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlBw-000856-LI; Tue, 18 Feb 2014 13:58:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFlBv-00084u-Lk
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 13:58:32 +0000
Received: from [193.109.254.147:5944] by server-13.bemta-14.messagelabs.com id
	54/6B-01226-60763035; Tue, 18 Feb 2014 13:58:30 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392731910!5107214!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19666 invoked from network); 18 Feb 2014 13:58:30 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 13:58:30 -0000
Received: by mail-ea0-f177.google.com with SMTP id m10so5236794eaj.22
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 05:58:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=7laLm8+MS2DG05fNF0uv1cAgdzi0L9uRGgX8jYA34xY=;
	b=LnzYCJ5WVbS6OregJkK241XmRwsVlxFiV6KArx5AmuCkj4LInQ05DTJe5eGsTIdKHG
	qzVtF8/cQQYDmrN+8Yw4nXpg6Tu6syV5y0qgeiQFKfq0ceLqeeSUBN86p19veSyY33hE
	E0ZVhATMqfqabqU1svPd7tGCauK+01+j3qEIxhpy2VW/yLytqIs3m2p6AqOIKATlM9TS
	Hqf4B38trO9ImrYRJzHHS9jA+I/4CRyBBla/f4wZOTcRPVuH1thbCcQ7xGD43GxahvT/
	kcsmv6DJtw2BVWnUagFhpuRZgrq/m6NpbweYZhn9VfpqiAaVKEK5HY4gS+sw2jA3Y5nx
	gWGg==
X-Gm-Message-State: ALoCoQkII+WDvkdFYa3CQhFn/R2dahWRXDh2/sH/jPLNry0cMS0vR8qadsqIjmDhGceCD9GCBGU4
X-Received: by 10.14.194.2 with SMTP id l2mr33989527een.39.1392731909837;
	Tue, 18 Feb 2014 05:58:29 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	y47sm70546300eel.14.2014.02.18.05.58.28 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 18 Feb 2014 05:58:28 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 18 Feb 2014 13:58:21 +0000
Message-Id: <1392731901-20233-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH] xen/arm: Save/restore GICH_VMCR on domain
	context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The GICH_VMCR register contains aliases of important bits of the GICV
interface, such as:
    - the CPU's priority mask
    - EOImode
    - ...

We have been safe so far because Linux guests always use the same value
for these bits. Once a guest starts handling priorities or changing the
EOI mode, VCPU interrupt management will end up in a wrong state.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: George Dunlap <george.dunlap@citrix.com>

---
    This is a bug fix for Xen 4.4. Without this patch we can't support
    guests that handle the GICC interface differently from Linux, which
    never modifies these bits.
---
 xen/arch/arm/gic.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 62294ac..51e5990 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -107,6 +107,7 @@ void gic_save_state(struct vcpu *v)
         v->arch.gic_lr[i] = GICH[GICH_LR + i];
     v->arch.lr_mask = this_cpu(lr_mask);
     v->arch.gic_apr = GICH[GICH_APR];
+    v->arch.gic_vmcr = GICH[GICH_VMCR];
     /* Disable until next VCPU scheduled */
     GICH[GICH_HCR] = 0;
     isb();
@@ -123,6 +124,7 @@ void gic_restore_state(struct vcpu *v)
     for ( i=0; i<nr_lrs; i++)
         GICH[GICH_LR + i] = v->arch.gic_lr[i];
     GICH[GICH_APR] = v->arch.gic_apr;
+    GICH[GICH_VMCR] = v->arch.gic_vmcr;
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:07:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlKd-0008NI-MD; Tue, 18 Feb 2014 14:07:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFlKb-0008ND-OZ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 14:07:29 +0000
Received: from [85.158.137.68:4640] by server-17.bemta-3.messagelabs.com id
	71/F3-22569-02963035; Tue, 18 Feb 2014 14:07:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392732444!2644460!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17352 invoked from network); 18 Feb 2014 14:07:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:07:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,501,1389744000"; d="scan'208";a="101757034"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 14:07:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:07:23 -0500
Message-ID: <1392732442.11080.66.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chris Takemura <ctakemura@axcient.com>
Date: Tue, 18 Feb 2014 14:07:22 +0000
In-Reply-To: <CF22BADD.4AD%ctakemura@axcient.com>
References: <CF22BADD.4AD%ctakemura@axcient.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Use of watch_pipe in xs_handle structure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 18:09 -0800, Chris Takemura wrote:
> Hi,
> 
> This message was also posted to the qemu-devel list, but I didn't get any
> reply, and it occurred to me that it might make more sense here.  Sorry if
> you're reading it twice.
> 
> Anyway, I'm trying to debug a problem that causes qemu-dm to lock up with
> Xen HVM domains.  We're using the qemu version that came with Xen 3.4.2.
> I know it's old, but we're stuck with it for a little while yet.

I'm afraid you are unlikely to get much specific interest in a 3.4.x
issue.

At the very least I would suggest an upgrade to 3.4.4.

Otherwise I'd suggest you at least skim the commit logs (at a minimum
for qemu-dm and libxenstore) for any interesting looking related fixes
which you are missing.

> I think the hang is related to thread synchronization and the xenstore,
> but I'm not sure how it all fits together. In particular, I don't
> understand the lines in xs.c that handle the watch_pipe, e.g.:
> 
>         /* Kick users out of their select() loop. */
> 
>         if (list_empty(&h->watch_list) &&
>             (h->watch_pipe[1] != -1))
>             while (write(h->watch_pipe[1], body, 1) != 1)
>                 continue;
> 
> 
> It looks to me like the other thread blocks while reading from the pipe,
> and the write allows it to continue.  But this code seems like it does the
> same thing as the condvar_signal call that comes slightly after, and
> therefore it seems like I could safely #ifndef USE_PTHREAD it out.  Is
> this the case?

I don't think so. The cond var is for libxenstore's internal
synchronisation, while the pipe is there to allow the calling
application to include xenstore in its select or poll based event loop.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:22:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlZ8-0000Ah-5W; Tue, 18 Feb 2014 14:22:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WFlZ6-0000Ac-JX
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 14:22:28 +0000
Received: from [193.109.254.147:19221] by server-3.bemta-14.messagelabs.com id
	F9/A9-00432-3AC63035; Tue, 18 Feb 2014 14:22:27 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392733341!5125609!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9673 invoked from network); 18 Feb 2014 14:22:25 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 14:22:25 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392733341; l=342;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=97cHbdaBzUHVO8o6h2mAWvXHvj0=;
	b=NR+bkuoY9TzLuGChseY6XWf4v8EJnhLy3HD2wsqoSVe38s/OGlyJUtlApVuU873vSTq
	T5kXd+I+xQpNXQgJXcAw2hZVwo3S/9bzfs2bEIq86JoEi1v66MkrGt9S3tsZMUABP2jxk
	KirNzEmxmz9z4yLZMCkPS6fqdw09FeTqtF8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id 6027afq1IEMKbws
	(using TLSv1 with cipher DHE-RSA-AES256-SHA (256/256 bits))
	(Client did not present a certificate);
	Tue, 18 Feb 2014 15:22:20 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 314325026A; Tue, 18 Feb 2014 15:21:54 +0100 (CET)
Date: Tue, 18 Feb 2014 15:21:54 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Message-ID: <20140218142154.GA9326@aepfle.de>
References: <20140215221737.GA28254@aepfle.de>
	<1392718729.11080.13.camel@kazak.uk.xensource.com>
	<21251.17769.656791.139815@mariner.uk.xensource.com>
	<20140218135641.GB2804@aepfle.de>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140218135641.GB2804@aepfle.de>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, Olaf Hering wrote:

> I will see if I can force it to fail without the change below.

With "env FLEX=$PWD/my_flex.sh ./configure ..." the build fails.

#!/bin/bash
set -ex
# Remove the flex-generated header so the build has to regenerate it.
rm -fv /src/dir/tools/*/libxlu_disk_l.h
# Widen the race window, then hand off to the real flex.
sleep 12
exec /usr/bin/flex "$@"


And with the patch applied the build works fine. Thanks!

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:26:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlcb-0000HH-QZ; Tue, 18 Feb 2014 14:26:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFlcX-0000H8-Vc
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 14:26:04 +0000
Received: from [85.158.143.35:47589] by server-1.bemta-4.messagelabs.com id
	CC/6B-31661-97D63035; Tue, 18 Feb 2014 14:26:01 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392733558!6557097!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14905 invoked from network); 18 Feb 2014 14:26:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:26:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103488757"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 14:25:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:25:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFlcS-00040D-Gt;
	Tue, 18 Feb 2014 14:25:56 +0000
Date: Tue, 18 Feb 2014 14:25:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <53035BD8.6050805@redhat.com>
Message-ID: <alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > of the email :-P).
> > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > response to the guest writing to a magic ioport specifically to unplug
> > the emulated disk.
> > With this patch after the guest boots I can still access both xvda and
> > sda for the same disk, leading to fs corruptions.
> 
> Ok, the last paragraph is what I was missing.
> 
> So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
> hotplug handler, dc->unplug is not called anymore.
> 
> But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't free
> the device, it just drops the disks underneath.  I think the simplest solution
> is to _not_ make it a dc->unplug callback at all, and call
> pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug.
> qdev_unplug means "ask guest to start unplug", which is not what Xen wants to
> do here.

Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
Calling it directly from unplug_disks fixes the issue:


---

Call pci_piix3_xen_ide_unplug from unplug_disks

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 0eda301..40757eb 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
     return 0;
 }
 
-static int pci_piix3_xen_ide_unplug(DeviceState *dev)
+int pci_piix3_xen_ide_unplug(DeviceState *dev)
 {
     PCIIDEState *pci_ide;
     DriveInfo *di;
@@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
     k->class_id = PCI_CLASS_STORAGE_IDE;
     set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
-    dc->unplug = pci_piix3_xen_ide_unplug;
 }
 
 static const TypeInfo piix3_ide_xen_info = {
diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
index 70875e4..1d9d0e9 100644
--- a/hw/xen/xen_platform.c
+++ b/hw/xen/xen_platform.c
@@ -27,6 +27,7 @@
 
 #include "hw/hw.h"
 #include "hw/i386/pc.h"
+#include "hw/ide.h"
 #include "hw/pci/pci.h"
 #include "hw/irq.h"
 #include "hw/xen/xen_common.h"
@@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
     if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
             PCI_CLASS_STORAGE_IDE
             && strcmp(d->name, "xen-pci-passthrough") != 0) {
-        qdev_unplug(DEVICE(d), NULL);
+        pci_piix3_xen_ide_unplug(DEVICE(d));
     }
 }
 
diff --git a/include/hw/ide.h b/include/hw/ide.h
index 507e6d3..bc8bd32 100644
--- a/include/hw/ide.h
+++ b/include/hw/ide.h
@@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
 PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
+int pci_piix3_xen_ide_unplug(DeviceState *dev);
 void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 
 /* ide-mmio.c */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:26:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlcb-0000HH-QZ; Tue, 18 Feb 2014 14:26:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFlcX-0000H8-Vc
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 14:26:04 +0000
Received: from [85.158.143.35:47589] by server-1.bemta-4.messagelabs.com id
	CC/6B-31661-97D63035; Tue, 18 Feb 2014 14:26:01 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392733558!6557097!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14905 invoked from network); 18 Feb 2014 14:26:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:26:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103488757"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 14:25:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:25:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFlcS-00040D-Gt;
	Tue, 18 Feb 2014 14:25:56 +0000
Date: Tue, 18 Feb 2014 14:25:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <53035BD8.6050805@redhat.com>
Message-ID: <alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > of the email :-P).
> > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > response to the guest writing to a magic ioport specifically to unplug
> > the emulated disk.
> > With this patch after the guest boots I can still access both xvda and
> > sda for the same disk, leading to fs corruptions.
> 
> Ok, the last paragraph is what I was missing.
> 
> So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
> hotplug handler, dc->unplug is not called anymore.
> 
> But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't free
> the device, it just drops the disks underneath.  I think the simplest solution
> is to _not_ make it a dc->unplug callback at all, and call
> pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug.
> qdev_unplug means "ask guest to start unplug", which is not what Xen wants to
> do here.

Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
Calling it directly from unplug_disks fixes the issue:


---

Call pci_piix3_xen_ide_unplug from unplug_disks

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 0eda301..40757eb 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
     return 0;
 }
 
-static int pci_piix3_xen_ide_unplug(DeviceState *dev)
+int pci_piix3_xen_ide_unplug(DeviceState *dev)
 {
     PCIIDEState *pci_ide;
     DriveInfo *di;
@@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
     k->class_id = PCI_CLASS_STORAGE_IDE;
     set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
-    dc->unplug = pci_piix3_xen_ide_unplug;
 }
 
 static const TypeInfo piix3_ide_xen_info = {
diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
index 70875e4..1d9d0e9 100644
--- a/hw/xen/xen_platform.c
+++ b/hw/xen/xen_platform.c
@@ -27,6 +27,7 @@
 
 #include "hw/hw.h"
 #include "hw/i386/pc.h"
+#include "hw/ide.h"
 #include "hw/pci/pci.h"
 #include "hw/irq.h"
 #include "hw/xen/xen_common.h"
@@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
     if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
             PCI_CLASS_STORAGE_IDE
             && strcmp(d->name, "xen-pci-passthrough") != 0) {
-        qdev_unplug(DEVICE(d), NULL);
+        pci_piix3_xen_ide_unplug(DEVICE(d));
     }
 }
 
diff --git a/include/hw/ide.h b/include/hw/ide.h
index 507e6d3..bc8bd32 100644
--- a/include/hw/ide.h
+++ b/include/hw/ide.h
@@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
 PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
+int pci_piix3_xen_ide_unplug(DeviceState *dev);
 void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 
 /* ide-mmio.c */
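
[Editorial note: to make the failure mode discussed above concrete, here is a minimal, hypothetical C model of the dispatch logic. The names mirror qdev_unplug / pci_piix3_xen_ide_unplug from the thread, but this is a standalone sketch, not QEMU code: once the bus has a hotplug handler registered, the legacy dc->unplug callback is never reached, which is why the patch calls the Xen unplug function directly.]

```c
#include <assert.h>

/* Hypothetical stand-ins for QEMU's qdev structures; names are
 * illustrative only, this is not the real QEMU code. */
typedef struct BusState {
    void *hotplug_handler;                    /* non-NULL once PIIX4_PM registers */
} BusState;

typedef struct DeviceState {
    BusState *parent_bus;
    int (*unplug)(struct DeviceState *dev);   /* the legacy dc->unplug slot */
    int disks_dropped;
} DeviceState;

/* Models pci_piix3_xen_ide_unplug: drops the emulated disks but does
 * not free the device itself. */
int xen_ide_unplug(DeviceState *dev)
{
    dev->disks_dropped = 1;
    return 0;
}

/* Models the qdev_unplug dispatch after the generic-hotplug change:
 * a bus-level hotplug handler takes precedence, so the legacy
 * callback is silently skipped. */
void qdev_unplug_model(DeviceState *dev)
{
    if (dev->parent_bus && dev->parent_bus->hotplug_handler) {
        /* ACPI "ask the guest to unplug" path; xen_ide_unplug never runs */
    } else if (dev->unplug) {
        dev->unplug(dev);
    }
}

/* Before the patch: unplug goes through qdev_unplug and the emulated
 * disks survive alongside the PV disk. */
int broken_path(void)
{
    BusState pci0 = { .hotplug_handler = (void *)1 };   /* PIIX4_PM present */
    DeviceState ide = { .parent_bus = &pci0, .unplug = xen_ide_unplug };
    qdev_unplug_model(&ide);
    return ide.disks_dropped;   /* stays 0: the bug */
}

/* After the patch: unplug_disks invokes the Xen unplug function directly. */
int fixed_path(void)
{
    BusState pci0 = { .hotplug_handler = (void *)1 };
    DeviceState ide = { .parent_bus = &pci0, .unplug = xen_ide_unplug };
    xen_ide_unplug(&ide);
    return ide.disks_dropped;   /* 1: disks dropped */
}
```

Calling broken_path() returns 0 and fixed_path() returns 1 in this model, matching the before/after behaviour described in the thread.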

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:27:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFldb-0000LO-95; Tue, 18 Feb 2014 14:27:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WFldZ-0000LH-Gm
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 14:27:05 +0000
Received: from [193.109.254.147:38546] by server-3.bemta-14.messagelabs.com id
	BD/FF-00432-8BD63035; Tue, 18 Feb 2014 14:27:04 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392733623!5122540!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30686 invoked from network); 18 Feb 2014 14:27:03 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-27.messagelabs.com with SMTP;
	18 Feb 2014 14:27:03 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1IEQsdJ014695
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 09:26:54 -0500
Received: from yakj.usersys.redhat.com (ovpn-112-51.ams2.redhat.com
	[10.36.112.51])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s1IEQmU9004614; Tue, 18 Feb 2014 09:26:50 -0500
Message-ID: <53036DA8.3080804@redhat.com>
Date: Tue, 18 Feb 2014 15:26:48 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
	<alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 18/02/2014 15:25, Stefano Stabellini ha scritto:
> On Tue, 18 Feb 2014, Paolo Bonzini wrote:
>> Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
>>> Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
>>> of the email :-P).
>>> It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
>>> response to the guest writing to a magic ioport specifically to unplug
>>> the emulated disk.
>>> With this patch after the guest boots I can still access both xvda and
>>> sda for the same disk, leading to fs corruptions.
>>
>> Ok, the last paragraph is what I was missing.
>>
>> So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
>> hotplug handler, dc->unplug is not called anymore.
>>
>> But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't free
>> the device, it just drops the disks underneath.  I think the simplest solution
>> is to _not_ make it a dc->unplug callback at all, and call
>> pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug.
>> qdev_unplug means "ask guest to start unplug", which is not what Xen wants to
>> do here.
>
> Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
> Calling it directly from unplug_disks fixes the issue:
>
>
> ---
>
> Call pci_piix3_xen_ide_unplug from unplug_disks
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> diff --git a/hw/ide/piix.c b/hw/ide/piix.c
> index 0eda301..40757eb 100644
> --- a/hw/ide/piix.c
> +++ b/hw/ide/piix.c
> @@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
>      return 0;
>  }
>
> -static int pci_piix3_xen_ide_unplug(DeviceState *dev)
> +int pci_piix3_xen_ide_unplug(DeviceState *dev)
>  {
>      PCIIDEState *pci_ide;
>      DriveInfo *di;
> @@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
>      k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
>      k->class_id = PCI_CLASS_STORAGE_IDE;
>      set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> -    dc->unplug = pci_piix3_xen_ide_unplug;
>  }
>
>  static const TypeInfo piix3_ide_xen_info = {
> diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
> index 70875e4..1d9d0e9 100644
> --- a/hw/xen/xen_platform.c
> +++ b/hw/xen/xen_platform.c
> @@ -27,6 +27,7 @@
>
>  #include "hw/hw.h"
>  #include "hw/i386/pc.h"
> +#include "hw/ide.h"
>  #include "hw/pci/pci.h"
>  #include "hw/irq.h"
>  #include "hw/xen/xen_common.h"
> @@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
>      if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
>              PCI_CLASS_STORAGE_IDE
>              && strcmp(d->name, "xen-pci-passthrough") != 0) {
> -        qdev_unplug(DEVICE(d), NULL);
> +        pci_piix3_xen_ide_unplug(DEVICE(d));
>      }
>  }
>
> diff --git a/include/hw/ide.h b/include/hw/ide.h
> index 507e6d3..bc8bd32 100644
> --- a/include/hw/ide.h
> +++ b/include/hw/ide.h
> @@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
>  PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
>  PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
>  PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> +int pci_piix3_xen_ide_unplug(DeviceState *dev);
>  void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
>
>  /* ide-mmio.c */
>

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:28:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFleT-0000QN-O3; Tue, 18 Feb 2014 14:28:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFleS-0000QB-Cv
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 14:28:00 +0000
Received: from [85.158.143.35:45269] by server-3.bemta-4.messagelabs.com id
	53/25-11539-EED63035; Tue, 18 Feb 2014 14:27:58 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392733676!6548496!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3559 invoked from network); 18 Feb 2014 14:27:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:27:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103489599"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 14:27:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:27:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFleK-00041X-4I;
	Tue, 18 Feb 2014 14:27:52 +0000
Date: Tue, 18 Feb 2014 14:27:47 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Igor Mammedov <imammedo@redhat.com>
In-Reply-To: <20140218140858.4bcc511b@nial.usersys.redhat.com>
Message-ID: <alpine.DEB.2.02.1402181426390.27926@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<20140218140858.4bcc511b@nial.usersys.redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PULL 00/20] acpi, pc,
	pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014, Igor Mammedov wrote:
> On Tue, 18 Feb 2014 12:45:29 +0000
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > Il 18/02/2014 13:16, Stefano Stabellini ha scritto:
> > > > It looks like that this series breaks disk unplug
> > > > (hw/ide/piix.c:pci_piix3_xen_ide_unplug).
> > > > 
> > > > I bisected it and the problem is caused by:
> > > > 
> > > > commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
> > > > Author: Igor Mammedov <imammedo@redhat.com>
> > > > Date:   Wed Feb 5 16:36:52 2014 +0100
> > > > 
> > > >     hw/pci: switch to a generic hotplug handling for PCIDevice
> > > > 
> > > >     make qdev_unplug()/device_set_realized() to call hotplug handler's
> > > >     plug/unplug methods if available and remove not needed anymore
> > > >     hot(un)plug handling from PCIDevice.
> > > > 
> > > >     In case if hotplug handler is not available, revert to the legacy
> > > >     hotplug method for compatibility with not yet converted buses.
> > > > 
> > > >     Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > > >     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > > >     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > 
> > > > 
> > > 
> > > What exactly breaks?
> > 
> > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > of the email :-P).
> > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > response to the guest writing to a magic ioport specifically to unplug
> > the emulated disk.
> > With this patch after the guest boots I can still access both xvda and
> > sda for the same disk, leading to fs corruptions.
> > 
> Could you try with following debug patch?
> 
> diff --git a/hw/core/qdev.c b/hw/core/qdev.c
> index 64b66e0..84aa8be 100644
> --- a/hw/core/qdev.c
> +++ b/hw/core/qdev.c
> @@ -214,6 +214,7 @@ void qdev_unplug(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> +    fprintf(stderr, "dc->hotpluggable %d\n", dc->hotpluggable);
>      if (!dc->hotpluggable) {
>          error_set(errp, QERR_DEVICE_NO_HOTPLUG,
>                    object_get_typename(OBJECT(dev)));
> @@ -223,8 +224,12 @@ void qdev_unplug(DeviceState *dev, Error **errp)
>      qdev_hot_removed = true;
>  
>      if (dev->parent_bus && dev->parent_bus->hotplug_handler) {
> +        fprintf(stderr, "bus name: %s, hotplug_handler: %s\n",
> +                dev->parent_bus->name,
> +                object_get_typename(OBJECT(dev->parent_bus->hotplug_handler)));
>          hotplug_handler_unplug(dev->parent_bus->hotplug_handler, dev, errp);
>      } else {
> +        fprintf(stderr, "legacy unplug: %p\n", dc->unplug);
>          assert(dc->unplug != NULL);
>          if (dc->unplug(dev) < 0) { /* legacy handler */
>              error_set(errp, QERR_UNDEFINED_ERROR);
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 1acd2b2..74b0cac 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -238,6 +238,7 @@ static void pc_init1(QEMUMachineInitArgs *args,
>      if (pci_enabled && acpi_enabled) {
>          i2c_bus *smbus;
>  
> +       fprintf(stderr, "create piix4_pm_init\n");
>          smi_irq = qemu_allocate_irqs(pc_acpi_smi_interrupt, first_cpu, 1);
>          /* TODO: Populate SPD eeprom data.  */
>          smbus = piix4_pm_init(pci_bus, piix3_devfn + 3, 0xb100,

Sure, this is the output:

dc->hotpluggable 1
bus name: pci.0, hotplug_handler: PIIX4_PM
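
[Editorial note: the output above confirms that the PCI bus's hotplug handler (PIIX4_PM) intercepts qdev_unplug. Separately, the unplug_disks walk that the guest's magic-ioport write triggers can be sketched as follows; this is a simplified, hypothetical model of the filter quoted earlier (IDE-class devices are unplugged unless they are passed-through physical hardware), not the actual QEMU code.]

```c
#include <assert.h>
#include <string.h>

#define PCI_CLASS_STORAGE_IDE 0x0101   /* real PCI class code for IDE */

/* Hypothetical, simplified stand-in for a PCI device as seen by
 * unplug_disks; name and fields are illustrative only. */
typedef struct {
    const char *name;
    unsigned short class_code;
    int unplugged;
} FakePCIDevice;

/* Models the predicate inside unplug_disks: only IDE-class devices
 * that are not xen-pci-passthrough get unplugged. */
int should_unplug(const FakePCIDevice *d)
{
    return d->class_code == PCI_CLASS_STORAGE_IDE
        && strcmp(d->name, "xen-pci-passthrough") != 0;
}

/* Models the bus walk triggered by the guest writing to the magic
 * ioport (platform_fixed_ioport_writew in the thread). */
void unplug_disks_model(FakePCIDevice *devs, int n)
{
    for (int i = 0; i < n; i++) {
        if (should_unplug(&devs[i])) {
            devs[i].unplugged = 1;   /* would call pci_piix3_xen_ide_unplug */
        }
    }
}
```

With this model, an emulated IDE controller is unplugged, while a passed-through IDE device and a non-storage device are left alone.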

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [Qemu-devel] [PULL 00/20] acpi, pc,
	pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014, Igor Mammedov wrote:
> On Tue, 18 Feb 2014 12:45:29 +0000
> Stefano Stabellini <stefano.stabellini@eu.citrix.com> wrote:
> 
> > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > On 18/02/2014 13:16, Stefano Stabellini wrote:
> > > > It looks like that this series breaks disk unplug
> > > > (hw/ide/piix.c:pci_piix3_xen_ide_unplug).
> > > > 
> > > > I bisected it and the problem is caused by:
> > > > 
> > > > commit 5e95494380ecf83c97d28f72134ab45e0cace8f9
> > > > Author: Igor Mammedov <imammedo@redhat.com>
> > > > Date:   Wed Feb 5 16:36:52 2014 +0100
> > > > 
> > > >     hw/pci: switch to a generic hotplug handling for PCIDevice
> > > > 
> > > >     make qdev_unplug()/device_set_realized() to call hotplug handler's
> > > >     plug/unplug methods if available and remove not needed anymore
> > > >     hot(un)plug handling from PCIDevice.
> > > > 
> > > >     In case if hotplug handler is not available, revert to the legacy
> > > >     hotplug method for compatibility with not yet converted buses.
> > > > 
> > > >     Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> > > >     Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
> > > >     Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> > > > 
> > > > 
> > > 
> > > What exactly breaks?
> > 
> > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > of the email :-P).
> > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > response to the guest writing to a magic ioport specifically to unplug
> > the emulated disk.
> > With this patch, after the guest boots I can still access the same disk
> > as both xvda and sda, leading to fs corruption.
> > 
> Could you try with the following debug patch?
> 
> diff --git a/hw/core/qdev.c b/hw/core/qdev.c
> index 64b66e0..84aa8be 100644
> --- a/hw/core/qdev.c
> +++ b/hw/core/qdev.c
> @@ -214,6 +214,7 @@ void qdev_unplug(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> +    fprintf(stderr, "dc->hotpluggable %d\n", dc->hotpluggable);
>      if (!dc->hotpluggable) {
>          error_set(errp, QERR_DEVICE_NO_HOTPLUG,
>                    object_get_typename(OBJECT(dev)));
> @@ -223,8 +224,12 @@ void qdev_unplug(DeviceState *dev, Error **errp)
>      qdev_hot_removed = true;
>  
>      if (dev->parent_bus && dev->parent_bus->hotplug_handler) {
> +        fprintf(stderr, "bus name: %s, hotplug_handler: %s\n",
> +                dev->parent_bus->name,
> +                object_get_typename(OBJECT(dev->parent_bus->hotplug_handler)));
>          hotplug_handler_unplug(dev->parent_bus->hotplug_handler, dev, errp);
>      } else {
> +        fprintf(stderr, "legacy unplug: %p\n", dc->unplug);
>          assert(dc->unplug != NULL);
>          if (dc->unplug(dev) < 0) { /* legacy handler */
>              error_set(errp, QERR_UNDEFINED_ERROR);
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 1acd2b2..74b0cac 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -238,6 +238,7 @@ static void pc_init1(QEMUMachineInitArgs *args,
>      if (pci_enabled && acpi_enabled) {
>          i2c_bus *smbus;
>  
> +       fprintf(stderr, "create piix4_pm_init\n");
>          smi_irq = qemu_allocate_irqs(pc_acpi_smi_interrupt, first_cpu, 1);
>          /* TODO: Populate SPD eeprom data.  */
>          smbus = piix4_pm_init(pci_bus, piix3_devfn + 3, 0xb100,

Sure, this is the output:

dc->hotpluggable 1
bus name: pci.0, hotplug_handler: PIIX4_PM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:41:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:41:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFlrR-0000sV-3k; Tue, 18 Feb 2014 14:41:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFlrO-0000sQ-Lg
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 14:41:22 +0000
Received: from [85.158.137.68:35241] by server-3.bemta-3.messagelabs.com id
	79/3D-14520-11173035; Tue, 18 Feb 2014 14:41:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392734479!2632109!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2718 invoked from network); 18 Feb 2014 14:41:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:41:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103496927"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 14:41:19 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:41:18 -0500
Message-ID: <1392734477.11080.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xu cong <congxumail@gmail.com>
Date: Tue, 18 Feb 2014 14:41:17 +0000
In-Reply-To: <CA+hYhXsG0Tx43GuYPVps5tRX-NVYcBL+AaqmxneGT-8BoC0Z_Q@mail.gmail.com>
References: <CA+hYhXsG0Tx43GuYPVps5tRX-NVYcBL+AaqmxneGT-8BoC0Z_Q@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] What's the state of vCPU scheduled out by
 hypervisor from the view of guest?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-13 at 13:31 -0500, xu cong wrote:
> Hi All,
> 
> 
> In a virtual SMP guest, when one vCPU is scheduled out, what does the
> guest see? Is it similar to temporarily removing one CPU via hotplug?

No.

The time is "stolen" from the guest, which can be seen in the runstate
info. When it is running again it could look at the current time and
observe that there must have been a gap.

It's a bit like what a process under a regular OS sees when it is not
running.

CPU hotplug is a much more heavyweight operation.

>  How about the timer on the vCPU scheduled out?

The timer firing will cause the vcpu to be woken up and become
schedulable again.

>  For example, assume the guest OS is Linux; its process scheduler
> (e.g. CFS) needs to update the vruntime and other information for each
> process. When one vCPU gets scheduled again after waiting in the run
> queue, how does the process scheduler update the information for the
> processes running on it?

I believe that the Linux scheduler has visibility into the "stolen" time
and does the right thing -- which is to not account it against the
process which appeared to be running but was actually preempted because
the VCPU wasn't running. That would be a bit unfair to that process!

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:51:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFm1M-00014N-Ba; Tue, 18 Feb 2014 14:51:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFm1L-00014I-KH
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 14:51:39 +0000
Received: from [85.158.137.68:32075] by server-14.bemta-3.messagelabs.com id
	00/F0-08196-A7373035; Tue, 18 Feb 2014 14:51:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392735096!1391861!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5382 invoked from network); 18 Feb 2014 14:51:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:51:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103501609"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 14:51:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:51:35 -0500
Message-ID: <1392735093.11080.81.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 18 Feb 2014 14:51:33 +0000
In-Reply-To: <1392731901-20233-1-git-send-email-julien.grall@linaro.org>
References: <1392731901-20233-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Save/restore GICH_VMCR on domain
	context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 13:58 +0000, Julien Grall wrote:
> The GICH_VMCR register contains aliases of important bits of the GICV
> interface, such as:
>     - the priority mask of the CPU
>     - EOImode
>     - ...
> 
> We were safe so far because Linux guests always use the same value for
> these bits. When new guests handle priorities or change the EOI mode,
> VCPU interrupt management will be left in a wrong state.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: George Dunlap <george.dunlap@citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>     This is a bug fix for Xen 4.4. Without this patch we can't support
>     guests that don't handle the GICC interface the same way Linux does:
>     Linux never modifies these bits.

I'd say we pretty much have to take this -- otherwise some guest can
break things for everyone else by writing to GICC registers.

I've had a look at the GICH register list and I think we correctly
switch everything else.

Ian.

> ---
>  xen/arch/arm/gic.c |    2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 62294ac..51e5990 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -107,6 +107,7 @@ void gic_save_state(struct vcpu *v)
>          v->arch.gic_lr[i] = GICH[GICH_LR + i];
>      v->arch.lr_mask = this_cpu(lr_mask);
>      v->arch.gic_apr = GICH[GICH_APR];
> +    v->arch.gic_vmcr = GICH[GICH_VMCR];
>      /* Disable until next VCPU scheduled */
>      GICH[GICH_HCR] = 0;
>      isb();
> @@ -123,6 +124,7 @@ void gic_restore_state(struct vcpu *v)
>      for ( i=0; i<nr_lrs; i++)
>          GICH[GICH_LR + i] = v->arch.gic_lr[i];
>      GICH[GICH_APR] = v->arch.gic_apr;
> +    GICH[GICH_VMCR] = v->arch.gic_vmcr;
>      GICH[GICH_HCR] = GICH_HCR_EN;
>      isb();
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:53:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFm3N-00019G-Sz; Tue, 18 Feb 2014 14:53:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFm3M-000199-6B
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 14:53:44 +0000
Received: from [85.158.143.35:8711] by server-2.bemta-4.messagelabs.com id
	9A/1E-10891-7F373035; Tue, 18 Feb 2014 14:53:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392735221!6565056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20976 invoked from network); 18 Feb 2014 14:53:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:53:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101778104"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 14:53:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 09:53:39 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFm3H-0003be-Mx;
	Tue, 18 Feb 2014 14:53:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFm3F-0005y8-Ts;
	Tue, 18 Feb 2014 14:53:37 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21251.29680.814469.673789@mariner.uk.xensource.com>
Date: Tue, 18 Feb 2014 14:53:36 +0000
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140218135641.GB2804@aepfle.de>
References: <20140215221737.GA28254@aepfle.de>
	<1392718729.11080.13.camel@kazak.uk.xensource.com>
	<21251.17769.656791.139815@mariner.uk.xensource.com>
	<20140218135641.GB2804@aepfle.de>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [Xen-devel] missing dependency on libxlu_disk_l.h"):
> On Tue, Feb 18, Ian Jackson wrote:
> > This ought to be taken care of by the build system, provided you don't
> > actually git commit only the change to the .l and not the change to the
> > .[ch] in the final patch.
> 
> In my case the patch changes only the .l file. I expect that all
> dependencies are written into the Makefile.

Yes, and they are.  But if you edit the .l file, commit it, and then
do various gitish things, you might end up with the .l and the
generated .[ch] having "wrong" timestamps which persuade make not to
rebuild it.

> > Olaf, can you test whether this diff makes the problem go away for you ?
> 
> Ian, xen.rpm is rebuilt often, but the failure happened exactly once.
> I will see if I can force it to fail without the change below.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:53:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFm3N-00019G-Sz; Tue, 18 Feb 2014 14:53:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFm3M-000199-6B
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 14:53:44 +0000
Received: from [85.158.143.35:8711] by server-2.bemta-4.messagelabs.com id
	9A/1E-10891-7F373035; Tue, 18 Feb 2014 14:53:43 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392735221!6565056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20976 invoked from network); 18 Feb 2014 14:53:42 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:53:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101778104"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 14:53:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 09:53:39 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFm3H-0003be-Mx;
	Tue, 18 Feb 2014 14:53:39 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFm3F-0005y8-Ts;
	Tue, 18 Feb 2014 14:53:37 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21251.29680.814469.673789@mariner.uk.xensource.com>
Date: Tue, 18 Feb 2014 14:53:36 +0000
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140218135641.GB2804@aepfle.de>
References: <20140215221737.GA28254@aepfle.de>
	<1392718729.11080.13.camel@kazak.uk.xensource.com>
	<21251.17769.656791.139815@mariner.uk.xensource.com>
	<20140218135641.GB2804@aepfle.de>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] missing dependency on libxlu_disk_l.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Olaf Hering writes ("Re: [Xen-devel] missing dependency on libxlu_disk_l.h"):
> On Tue, Feb 18, Ian Jackson wrote:
> > This ought to be taken care of by the build system, provided you don't
> > actually git commit only the change to the .l and not the change to the
> > .[ch] in the final patch.
> 
> In my case the patch changes only the .l file. I expect that all
> dependencies are written into the Makefile.

Yes, and they are.  But if you edit the .l file, commit it, and then
do various gitish things, you might end up with the .l and the
generated .[ch] having "wrong" timestamps which persuade make not to
rebuild them.

> > Olaf, can you test whether this diff makes the problem go away for you ?
> 
> Ian, xen.rpm is rebuilt often, but the failure happened exactly once.
> I will see if I can force it to fail without the change below.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:54:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFm4K-0001EW-Bo; Tue, 18 Feb 2014 14:54:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFm4J-0001EL-24
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 14:54:43 +0000
Received: from [193.109.254.147:13164] by server-12.bemta-14.messagelabs.com
	id EF/82-17220-23473035; Tue, 18 Feb 2014 14:54:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392735279!5135409!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23469 invoked from network); 18 Feb 2014 14:54:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:54:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101778767"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 14:54:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:54:29 -0500
Message-ID: <1392735268.11080.83.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 18 Feb 2014 14:54:28 +0000
In-Reply-To: <1392566966-24840-1-git-send-email-baozich@gmail.com>
References: <1392566966-24840-1-git-send-email-baozich@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v3] xen/arm{32,
 64}: fix section shift when mapping 2MB block in boot page table
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 00:09 +0800, Chen Baozi wrote:
> Section shift for level-2 page table should be #21 rather than #20. Besides,
> since there are {FIRST,SECOND,THIRD}_SHIFT macros defined in asm/page.h, use
> these macros instead of hard-coded shift values.
> 
> Signed-off-by: Chen Baozi <baozich@gmail.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Where I was wavering about 4.4 inclusion with v2, I think that window has
now clearly passed, so I will put this aside until 4.5 development opens
up.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 14:59:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 14:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFm98-0001W9-5A; Tue, 18 Feb 2014 14:59:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFm97-0001W4-BQ
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 14:59:41 +0000
Received: from [85.158.137.68:5744] by server-5.bemta-3.messagelabs.com id
	6A/80-04712-C5573035; Tue, 18 Feb 2014 14:59:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392735578!2659438!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24531 invoked from network); 18 Feb 2014 14:59:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 14:59:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103506596"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 14:59:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 09:59:37 -0500
Message-ID: <1392735576.11080.87.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 18 Feb 2014 14:59:36 +0000
In-Reply-To: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 17:10 +0000, Julien Grall wrote:
> The current implementation of raw_copy_* helpers may lead to data corruption
> and sometimes Xen crash when the guest virtual address is not aligned to
> PAGE_SIZE.

Isn't a non-aligned address the vast majority of the cases (hypercall
arguments on the guest stack)? How have we managed to get away with this
for so long?

> When the total length is higher than a page, the length to read is badly
> computed with
>     min(len, (unsigned)(PAGE_SIZE - offset))
> 
> As the offset is only computed one time per function,

We set offset = 0 at the end of the first iteration.

Which I think is correct: On the second iteration things should now be
aligned to a page boundary.

Have you observed offset != 0 for the second and subsequent iterations?

>  if the start address was
> not aligned to PAGE_SIZE, we can end up in the same iteration:
>     - reading across a page boundary => xen crash
>     - reading the previous page => data corruption
> 
> This issue can be resolved by computing the offset on every iteration.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
>     This patch is a bug fix for Xen 4.4. Without this patch the data may be
>     corrupted between Xen and the guest when the guest virtual address is
>     not aligned to PAGE_SIZE. Sometimes it can also crash Xen.
> 
> These functions are used in numerous places in Xen. If this patch
> introduces another bug, we will see it quickly even with a small amount of data.
> ---
>  xen/arch/arm/guestcopy.c |    8 +++-----
>  1 file changed, 3 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index af0af6b..b3b54e9 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -9,12 +9,11 @@ static unsigned long raw_copy_to_guest_helper(void *to, const void *from,
>                                                unsigned len, int flush_dcache)
>  {
>      /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
> -
>      while ( len )
>      {
>          paddr_t g;
>          void *p;
> +        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>  
>          if ( gvirt_to_maddr((vaddr_t) to, &g) )
> @@ -50,12 +49,12 @@ unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
>  unsigned long raw_clear_guest(void *to, unsigned len)
>  {
>      /* XXX needs to handle faults */
> -    unsigned offset = (vaddr_t)to & ~PAGE_MASK;
>  
>      while ( len )
>      {
>          paddr_t g;
>          void *p;
> +        unsigned offset = (vaddr_t)to & ~PAGE_MASK;
>          unsigned size = min(len, (unsigned)PAGE_SIZE - offset);
>  
>          if ( gvirt_to_maddr((vaddr_t) to, &g) )
> @@ -76,12 +75,11 @@ unsigned long raw_clear_guest(void *to, unsigned len)
>  
>  unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
>  {
> -    unsigned offset = (vaddr_t)from & ~PAGE_MASK;
> -
>      while ( len )
>      {
>          paddr_t g;
>          void *p;
> +        unsigned offset = (vaddr_t)from & ~PAGE_MASK;
>          unsigned size = min(len, (unsigned)(PAGE_SIZE - offset));
>  
>          if ( gvirt_to_maddr((vaddr_t) from & PAGE_MASK, &g) )



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:01:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmAX-0001bt-Kv; Tue, 18 Feb 2014 15:01:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFmAW-0001bn-NF
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 15:01:08 +0000
Received: from [85.158.137.68:34285] by server-15.bemta-3.messagelabs.com id
	6E/78-19263-2B573035; Tue, 18 Feb 2014 15:01:06 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392735662!1090044!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22139 invoked from network); 18 Feb 2014 15:01:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:01:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101782977"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 15:01:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 10:01:00 -0500
Message-ID: <1392735659.11080.88.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 18 Feb 2014 15:00:59 +0000
In-Reply-To: <1392735576.11080.87.camel@kazak.uk.xensource.com>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
	<1392735576.11080.87.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 14:59 +0000, Ian Campbell wrote:

> > As the offset is only computed one time per function,
> 
> We set offset = 0 at the end of the first iteration.

Ah, we do in raw_copy_to_guest_helper and raw_clear_guest but not
raw_copy_from_guest -- which I think is the actual bug here.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:06:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmFK-0001oA-Dg; Tue, 18 Feb 2014 15:06:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFmFI-0001o1-TP
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 15:06:05 +0000
Received: from [85.158.137.68:39143] by server-15.bemta-3.messagelabs.com id
	7D/F3-19263-CD673035; Tue, 18 Feb 2014 15:06:04 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392735963!2671281!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22841 invoked from network); 18 Feb 2014 15:06:03 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:06:03 -0000
Received: by mail-ea0-f169.google.com with SMTP id h10so8018537eak.14
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 07:06:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=pCsk0FxdLw0vH6rQ84ArOtiBekIVsH4oFemAACoPcJs=;
	b=i9SaAzKJmq9BMKaVhQ5xG4KJCHKDHN95wwbflVieathlVQcG2nHSNllC4STYgCnwRy
	X6RNcDG6nnAJt95IMIJAGVTxwcuKaKAzbXpdPaL2MqX0fgpOkl/QbuIfnoiF6gmOlLUT
	rUhG4l+ISO/jcbQ4c5vGygUJ1L5lwoYk/gGa3udWlWzUiv5O6QppsMRsHMtjCLSezxXu
	ZCOl7lDTqOxSIEzkN8xpnyzck8OmaUZ/s9gEzoVg4vAFRfNvkz7axXsLw7pRQ8MSjdjF
	t+niPb1Jk5sSj1llof4OoTJf1C8Pj2eBRe62tbVjWbCK7n5HAGgALRc8LUMGxM0fw97X
	mmMA==
X-Gm-Message-State: ALoCoQk8huv0nQQm1M/h+/1/giQaSRo9+U5FSSny4KbC4No0qdjdS0BRZ/8273PwKqnNJfeBUUSL
X-Received: by 10.14.39.3 with SMTP id c3mr34477914eeb.42.1392735962880;
	Tue, 18 Feb 2014 07:06:02 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id k6sm71350433eep.17.2014.02.18.07.06.00
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 07:06:02 -0800 (PST)
Message-ID: <530376D6.9000704@linaro.org>
Date: Tue, 18 Feb 2014 15:05:58 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
	<1392735576.11080.87.camel@kazak.uk.xensource.com>
In-Reply-To: <1392735576.11080.87.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 02:59 PM, Ian Campbell wrote:
> On Fri, 2014-02-14 at 17:10 +0000, Julien Grall wrote:
>> The current implementation of raw_copy_* helpers may lead to data corruption
>> and sometimes Xen crash when the guest virtual address is not aligned to
>> PAGE_SIZE.
> 
> Isn't a non-aligned address the vast majority of the cases (hypercall
> arguments on the guest stack)? How have we managed to get away with this
> for so long?

Because most of the time the size is smaller than 1 page. It's not the
case with the flask policy.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:09:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmIB-0001xX-21; Tue, 18 Feb 2014 15:09:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFmI9-0001xR-HZ
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 15:09:01 +0000
Received: from [85.158.137.68:25122] by server-14.bemta-3.messagelabs.com id
	A7/B4-08196-C8773035; Tue, 18 Feb 2014 15:09:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392736138!2651688!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25273 invoked from network); 18 Feb 2014 15:08:59 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:08:59 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so7792228eek.19
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 07:08:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=QNsB+hSgRxWo7desmVY3I0np3PFmTVt0VBEC2b5yPAI=;
	b=VVbPlmRPjyS8leNuzH6OoEQiCBv3PE9kAqWw0cdRCtSmy9D/BiXm0L6225I1NOZgGP
	Wri7cq3S1eG9LkVGO4gqy6ik6/PEAF91+DUPH67T/A3JCtMOFJPm7ZeU5gunxkWYCzM8
	L/iScFv2hnzv9MXvbepTTw/i8zZ0Y2QwC4ZsZmLGJDkybEU1fLQRN46PUECZwwlh8oiI
	1KSLNbDpx/zysorGOKNsvCCaOq1c/U7gVv5YM326P5nKntzPKG+oNq9Rd0rvAeeY2H75
	8wx1IlULt7oBkzkWU0hgjLoGvZ9hc2mThM1kHLETX788iGnVKpdqkdJiKXlRtd6B9kKi
	W0Qg==
X-Gm-Message-State: ALoCoQmm6U4t9EnXuAAcsqLWXjXWfFS3wZyyFQJFD+R9Yvx6ZClbzkQ7MbISOgTn+lGyiyD5+IKD
X-Received: by 10.15.33.193 with SMTP id c41mr4546924eev.79.1392736138607;
	Tue, 18 Feb 2014 07:08:58 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	i43sm71210608eeu.13.2014.02.18.07.08.56 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 07:08:57 -0800 (PST)
Message-ID: <53037788.4010702@linaro.org>
Date: Tue, 18 Feb 2014 15:08:56 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
	<1392735576.11080.87.camel@kazak.uk.xensource.com>
	<1392735659.11080.88.camel@kazak.uk.xensource.com>
In-Reply-To: <1392735659.11080.88.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 03:00 PM, Ian Campbell wrote:
> On Tue, 2014-02-18 at 14:59 +0000, Ian Campbell wrote:
> 
>>> As the offset is only computed one time per function,
>>
>> We set offset = 0 at the end of the first iteration.
> 
> Ah, we do in raw_copy_to_guest_helper and raw_clear_guest but not
> raw_copy_from_guest -- which I think is the actual bug here.

I didn't notice the offset = 0 at the end of raw_copy_to_guest.

I can send a patch that only sets offset to 0 in raw_copy_from_guest,
but I think it's less clear than this patch. What do you think?

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:24:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:24:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmWv-0002RI-FT; Tue, 18 Feb 2014 15:24:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xumengpanda@gmail.com>) id 1WFmWt-0002RD-JL
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 15:24:15 +0000
Received: from [85.158.139.211:34125] by server-10.bemta-5.messagelabs.com id
	FE/D4-08578-E1B73035; Tue, 18 Feb 2014 15:24:14 +0000
X-Env-Sender: xumengpanda@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392737051!4704393!1
X-Originating-IP: [209.85.214.179]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12811 invoked from network); 18 Feb 2014 15:24:12 -0000
Received: from mail-ob0-f179.google.com (HELO mail-ob0-f179.google.com)
	(209.85.214.179)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:24:12 -0000
Received: by mail-ob0-f179.google.com with SMTP id wo20so18504311obc.24
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 07:24:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=mZpuBXdtKh7dtEcU04h7sE8jomaGxLeK52ksm1F3lrU=;
	b=nhWFmyBjA9DTZJ8JefE674c5LDaDuTtrajepwYow8NqtBKDz5xprzNeDU6UI7jW3B6
	GoTP02oT7I4+t34Ct3f31aZxT4rwrz8ncm1hi5x1Mmzb1/Q5K6uZUo1PGfoLnPQxN/7i
	tx/KwOIE56WLlXO78Ul4zMYWHHI0j6/6FM9a6h0Zr88mAeu2K61hmtXd8hGpoPiY9HWZ
	RtfoWSDLQvNhe1R0iIZfM7Ys4QLAzgAf+tJTM01Fmx/olpgMrWjYNBpIL8jY0+X8MlDz
	k8sPkkUyNzAmib5mB9HgYmlKpa4Om2pC2ETSNZ+ieABuwai/92gJmYpW8UGoMKgnU0ym
	nU7g==
MIME-Version: 1.0
X-Received: by 10.60.98.240 with SMTP id el16mr2621070oeb.50.1392737051144;
	Tue, 18 Feb 2014 07:24:11 -0800 (PST)
Received: by 10.76.173.98 with HTTP; Tue, 18 Feb 2014 07:24:11 -0800 (PST)
In-Reply-To: <1392714890.32038.463.camel@Solace>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
	<1392714890.32038.463.camel@Solace>
Date: Tue, 18 Feb 2014 10:24:11 -0500
Message-ID: <CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
From: Meng Xu <xumengpanda@gmail.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5513351237416863832=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5513351237416863832==
Content-Type: multipart/alternative; boundary=089e011832a632fac404f2afdd56

--089e011832a632fac404f2afdd56
Content-Type: text/plain; charset=ISO-8859-1

Hi Dario,

Thank you so much for your detailed reply! It is really helpful! I'm
looking at the vPMU and perf on Xen, and will try it. :-)

The reason I want this information from the hardware performance
counters is that I want to measure the interference among domains while
they are running.

In addition, when we measure the latency of accessing a large array, the
result is not what we expected. We increase the size of an array from 1KB
to 12MB, which covers the L1 (32KB), L2 (256KB) and L3 (12MB) cache
sizes. We expect the latency of accessing the whole array to show clear
cutoffs at around 32KB, 256KB and 12MB, because the latencies of L1, L2
and L3 differ by several times.

However, we saw that the latency does not increase much when the array
size grows past the size of L1, L2, and L3. This is odd, because running
the same task in Linux on a bare-metal machine gives the expected result.

We are not sure whether this is due to virtualization overhead or cache
misses; that's why we want to know the cache access rate of each domain.

It would be really appreciated if you could share some of your insight on this. :-)

Thank you very much for your time!

Best,

Meng


2014-02-18 4:14 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:

> On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
> > Hi,
> >
> Hi,
>
> > I'm a PhD student, working on real time system.
> >
> Cool. There really seems to be a lot of interest in Real-Time
> virtualization these days. :-D
>
> > [My goal]
> > I want to measure the cache hit/miss rate of each guest domain in Xen.
> > I may also want to measure some other events, say memory access rate,
> > for each program in each guest domain in Xen.
> >
> Ok. Can I, out of curiosity, ask you to detail a bit more what your
> *final* goal is (I mean, you're interested in these measurements for a
> reason, not just for the sake of having them, right?).
>
> > [The problem I'm encountering]
> > I tried intel's Performance Counter Monitor (PCM) in Linux on bare
> > machine to get the machine's cache access rate for each level of
> > cache, it works very well.
> >
> >
> > However, when I want to use the PCM in Xen and run it in dom0, it
> > cannot work. I think the PCM needs to run in ring 0 to read/write the
> > MSR. Because dom0 is running in ring 1, so PCM running in dom0 cannot
> > work.
> >
> Indeed.
>
> > So my question is:
> > How can I run a program (say PCM) in ring 0 on Xen?
> >
> Running "a program" in there is going to be terribly difficult. What I
> think you're better off doing is trying to access the counters from
> dom0, and/or (para)virtualizing them.
>
> In fact, there is work going on already on this, although I don't have
> all the details about what's the current status.
>
> > What's in my mind is:
> > Writing a hypercall to call the PCM in Xen's kernel space, then the
> > PCM will run in ring 0?
> > But the problem I'm concerned is that some of the PCM's instruction,
> > say printf(), may not be able to run in kernel space?
> >
> Well, Xen can print, e.g., on a serial console, but again, that's not
> what you want. I'm adding links to a few conversations about virtual
> PMU. These are just the very first Google results, so there may well be
> more:
>
>
> http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
> https://lwn.net/Articles/566159/
>
> Boris (whom I'm Cc-ing) gave a presentation about this at the latest Xen
> Developers Summit:
> http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>
> Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>

--089e011832a632fac404f2afdd56--


--===============5513351237416863832==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5513351237416863832==--


Content-Type: multipart/mixed; boundary="===============5513351237416863832=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5513351237416863832==
Content-Type: multipart/alternative; boundary=089e011832a632fac404f2afdd56

--089e011832a632fac404f2afdd56
Content-Type: text/plain; charset=ISO-8859-1

Hi Dario,

Thank you so much for your detailed reply! It is really helpful! I'm
looking at the vPMU and perf on Xen, and will try it. :-)

The reason I want this information from the hardware performance
counters is that I want to measure the interference among domains while
they are running.

In addition, when we measure the latency of accessing a large array, the
result is not what we expected. We increase the size of the array from 1KB
to 12MB, which spans the L1 (32KB), L2 (256KB) and L3 (12MB) cache sizes. We
expected the latency of accessing the whole array to show clear steps
at around 32KB, 256KB and 12MB, because the L1, L2 and L3 latencies
differ by several times.

However, we saw that the latency does not increase much when the array
size grows past L1, L2 and L3. This is odd, because when we run the
same task in Linux on the bare machine we do see the expected steps.

We are not sure whether this is caused by virtualization overhead or by
cache misses; that's why we want to know the cache access rate of each domain.

It would be really appreciated if you could share some of your insight on this. :-)

Thank you very much for your time!

Best,

Meng


2014-02-18 4:14 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:

> On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
> > Hi,
> >
> Hi,
>
> > I'm a PhD student, working on real-time systems.
> >
> Cool. There really seems to be a lot of interest in Real-Time
> virtualization these days. :-D
>
> > [My goal]
> > I want to measure the cache hit/miss rate of each guest domain in Xen.
> > I may also want to measure some other events, say memory access rate,
> > for each program in each guest domain in Xen.
> >
> Ok. Can I, out of curiosity, ask you to detail a bit more what your
> *final* goal is (I mean, you're interested in these measurements for a
> reason, not just for the sake of having them, right?).
>
> > [The problem I'm encountering]
> > I tried Intel's Performance Counter Monitor (PCM) in Linux on the bare
> > machine to get the machine's cache access rate for each level of
> > cache, and it works very well.
> >
> >
> > However, when I try to use PCM in Xen and run it in dom0, it
> > does not work. I think PCM needs to run in ring 0 to read/write the
> > MSRs. Because dom0 runs in ring 1, PCM running in dom0 cannot
> > work.
> >
> Indeed.
>
> > So my question is:
> > How can I run a program (say PCM) in ring 0 on Xen?
> >
> Running "a program" in there is going to be terribly difficult. What I
> think you're better off is trying to access, from dom0 and/or
> (para)virtualize the counters.
>
> In fact, there is already work going on on this, although I don't have
> all the details about its current status.
>
> > What I have in mind is:
> > writing a hypercall to call PCM in Xen's kernel space, so that
> > PCM will run in ring 0?
> > But my concern is that some of PCM's functions,
> > say printf(), may not be able to run in kernel space?
> >
> Well, Xen can print, e.g., on a serial console, but again, that's not
> what you want. I'm adding links to a few conversations about virtual
> PMU. These are just the very first Google results, so there may well be
> more:
>
>
> http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
> https://lwn.net/Articles/566159/
>
> Boris (whom I'm Cc-ing) gave a presentation about this at the latest
> Xen Developers Summit:
> http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>
> Regards,
> Dario
>
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>

--089e011832a632fac404f2afdd56--


--===============5513351237416863832==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5513351237416863832==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 15:25:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:25:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmXu-0002V7-UU; Tue, 18 Feb 2014 15:25:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFmXt-0002Uw-RZ
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 15:25:18 +0000
Received: from [85.158.137.68:3421] by server-2.bemta-3.messagelabs.com id
	36/9D-06531-C5B73035; Tue, 18 Feb 2014 15:25:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392737114!2666688!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29551 invoked from network); 18 Feb 2014 15:25:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:25:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101796887"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 15:25:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 10:25:13 -0500
Message-ID: <1392737112.11080.102.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 18 Feb 2014 15:25:12 +0000
In-Reply-To: <53037788.4010702@linaro.org>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
	<1392735576.11080.87.camel@kazak.uk.xensource.com>
	<1392735659.11080.88.camel@kazak.uk.xensource.com>
	<53037788.4010702@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 15:08 +0000, Julien Grall wrote:
> On 02/18/2014 03:00 PM, Ian Campbell wrote:
> > On Tue, 2014-02-18 at 14:59 +0000, Ian Campbell wrote:
> > 
> >>> As the offset is only computed one time per function,
> >>
> >> We set offset = 0 at the end of the first iteration.
> > 
> > Ah, we do in raw_copy_to_guest_helper and raw_clear_guest but not
> > raw_copy_from_guest -- which I think is the actual bug here.
> 
> I didn't notice the offset = 0 at the end of raw_copy_to_guest.
> 
> I can send a patch that only sets offset to 0 in raw_copy_from_guest. But I
> think it's less clear than this patch. What do you think?

I think the approach currently used by the (non-buggy) functions is
better -- it makes it obvious that after the first iteration things
*have* to now be aligned.

I also wouldn't be surprised if the compiler had trouble proving this
and so ended up needlessly recalculating offset instead of optimising it
out.

If you find the code unclear please feel free to add comments etc.

Ian.
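
The pattern under discussion (the first iteration copies the unaligned head of the buffer, every later iteration starts on a page boundary, so the offset is zeroed at the end of the first pass) can be sketched as follows. This is a simplified hypothetical illustration, not the actual Xen raw_copy_* code; the function name and plain memcpy are assumptions.

```c
/* Hypothetical sketch of the page-chunked copy loop discussed above.
 * Only the first chunk can start at a sub-page offset; afterwards
 * every chunk begins on a page boundary. */
#include <string.h>

#define PAGE_SIZE 4096u

static void copy_like_raw_copy(char *dst, const char *src,
                               unsigned long addr, unsigned long len)
{
    unsigned long offset = addr & (PAGE_SIZE - 1);  /* sub-page start */

    while ( len )
    {
        unsigned long size = PAGE_SIZE - offset;    /* room left in this page */

        if ( size > len )
            size = len;
        memcpy(dst, src, size);
        dst += size;
        src += size;
        len -= size;
        offset = 0;  /* after the first chunk everything is page-aligned */
    }
}
```

Dropping the `offset = 0` makes every later iteration re-apply the initial sub-page offset, which is the kind of bug identified above in raw_copy_from_guest.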


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:28:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:28:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmbB-0002lr-SS; Tue, 18 Feb 2014 15:28:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFmbA-0002kt-8K
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 15:28:40 +0000
Received: from [193.109.254.147:18837] by server-12.bemta-14.messagelabs.com
	id 37/40-17220-72C73035; Tue, 18 Feb 2014 15:28:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392737318!5202835!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3926 invoked from network); 18 Feb 2014 15:28:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 15:28:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 15:28:38 +0000
Message-Id: <53038A33020000780011D5FD@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 15:28:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>,
	"Xiantao Zhang" <xiantao.zhang@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
	<53035628020000780011D3EE@nat28.tlf.novell.com>
In-Reply-To: <53035628020000780011D3EE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part77452A33.3__="
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, DonaldD Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part77452A33.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 18.02.14 at 12:46, "Jan Beulich" <JBeulich@suse.com> wrote:
> Nothing at all, as it turns out. The regression is due to Dongxiao's
>
> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
>
> which I have in my tree as part of various things pending for 4.5.
> And which at the first, second, and third glance looks pretty
> innocent (IOW I still have to find out _why_ it is wrong).

And here's a fixed version of the patch - we simply can't drop
the bogus HVM_PARAM_IDENT_PT check entirely yet.

In the course of fixing this I also found two other shortcomings:
- EPT EMT field should be updated upon guest MTRR writes (the
  lack thereof is the reason for needing to retain the bogus check)
- epte_get_entry_emt() either needs "order" passed, or its callers
  must call it more than once for big/huge pages

Jan

x86/hvm: refine the judgment on IDENT_PT for EMT

When trying to get the EPT EMT type, the judgment on
HVM_PARAM_IDENT_PT is not correct: it always returns WB type if
the parameter is not set. Remove the related code.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

We can't fully drop the dependency yet, but we should certainly avoid
overriding cases already properly handled. The reason for this is that
the guest sets up its MTRRs _after_ the EPT tables have already been
constructed, and no code is in place to propagate this to the
EPT code. Without this check we're forcing the guest to run with all of
its memory uncachable until something happens to re-write every single
EPT entry. But of course this has to be just a temporary solution.

In the same spirit we should defer the "very early" (when the guest is
still being constructed and has no vCPU yet) override to the last
possible point.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -689,13 +689,8 @@ uint8_t epte_get_entry_emt(struct domain
 
     *ipat = 0;
 
-    if ( (current->domain != d) &&
-         ((d->vcpu == NULL) || ((v = d->vcpu[0]) == NULL)) )
-        return MTRR_TYPE_WRBACK;
-
-    if ( !is_pvh_vcpu(v) &&
-         !v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] )
-        return MTRR_TYPE_WRBACK;
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
 
     if ( !mfn_valid(mfn_x(mfn)) )
         return MTRR_TYPE_UNCACHABLE;
@@ -718,7 +713,8 @@ uint8_t epte_get_entry_emt(struct domain
         return MTRR_TYPE_WRBACK;
     }
 
-    gmtrr_mtype = is_hvm_vcpu(v) ?
+    gmtrr_mtype = is_hvm_domain(d) && v &&
+                  d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] ?
                   get_mtrr_type(&v->arch.hvm_vcpu.mtrr, (gfn << PAGE_SHIFT)) :
                   MTRR_TYPE_WRBACK;
 






--=__Part77452A33.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part77452A33.3__=--


	<53035628020000780011D3EE@nat28.tlf.novell.com>
In-Reply-To: <53035628020000780011D3EE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part77452A33.3__="
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, DonaldD Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part77452A33.3__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 18.02.14 at 12:46, "Jan Beulich" <JBeulich@suse.com> wrote:
> Nothing at all, as it turns out. The regression is due to Dongxiao's
>
> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
>
> which I have in my tree as part of various things pending for 4.5.
> And which at the first, second, and third glance looks pretty
> innocent (IOW I still have to find out _why_ it is wrong).

And here's a fixed version of the patch - we simply can't drop
the bogus HVM_PARAM_IDENT_PT check entirely yet.

In the course of fixing this I also found two other shortcomings:
- EPT EMT field should be updated upon guest MTRR writes (the
  lack thereof is the reason for needing to retain the bogus check)
- epte_get_entry_emt() either needs "order" passed, or its callers
  must call it more than once for big/huge pages

Jan

x86/hvm: refine the judgment on IDENT_PT for EMT

When trying to get the EPT EMT type, the judgment on
HVM_PARAM_IDENT_PT is not correct: it always returns WB type if
the parameter is not set. Remove the related code.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

We can't fully drop the dependency yet, but we should certainly avoid
overriding cases already properly handled. The reason for this is that
the guest sets up its MTRRs _after_ the EPT tables have already been
constructed, and no code is in place to propagate this to the EPT
code. Without this check we're forcing the guest to run with all of
its memory uncachable until something happens to re-write every single
EPT entry. But of course this has to be just a temporary solution.

In the same spirit we should defer the "very early" (when the guest is
still being constructed and has no vCPU yet) override to the last
possible point.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -689,13 +689,8 @@ uint8_t epte_get_entry_emt(struct domain
 
     *ipat = 0;
 
-    if ( (current->domain != d) &&
-         ((d->vcpu == NULL) || ((v = d->vcpu[0]) == NULL)) )
-        return MTRR_TYPE_WRBACK;
-
-    if ( !is_pvh_vcpu(v) &&
-         !v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] )
-        return MTRR_TYPE_WRBACK;
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
 
     if ( !mfn_valid(mfn_x(mfn)) )
         return MTRR_TYPE_UNCACHABLE;
@@ -718,7 +713,8 @@ uint8_t epte_get_entry_emt(struct domain
         return MTRR_TYPE_WRBACK;
     }
 
-    gmtrr_mtype = is_hvm_vcpu(v) ?
+    gmtrr_mtype = is_hvm_domain(d) && v &&
+                  d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] ?
                   get_mtrr_type(&v->arch.hvm_vcpu.mtrr, (gfn << PAGE_SHIFT)) :
                   MTRR_TYPE_WRBACK;
 





--=__Part77452A33.3__=
Content-Type: application/octet-stream; name="EPT-ignore-IDENT_PT"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename="EPT-ignore-IDENT_PT"

eDg2L2h2bTogcmVmaW5lIHRoZSBqdWRnbWVudCBvbiBJREVOVF9QVCBmb3IgRU1UCgpXaGVuIHRy
eWluZyB0byBnZXQgdGhlIEVQVCBFTVQgdHlwZSwgdGhlIGp1ZGdtZW50IG9uCkhWTV9QQVJBTV9J
REVOVF9QVCBpcyBub3QgY29ycmVjdCB3aGljaCBhbHdheXMgcmV0dXJucyBXQiB0eXBlIGlmCnRo
ZSBwYXJhbWV0ZXIgaXMgbm90IHNldC4gUmVtb3ZlIHRoZSByZWxhdGVkIGNvZGUuCgpTaWduZWQt
b2ZmLWJ5OiBEb25neGlhbyBYdSA8ZG9uZ3hpYW8ueHVAaW50ZWwuY29tPgoKV2UgY2FuJ3QgZnVs
bHkgZHJvcCB0aGUgZGVwZW5kZW5jeSB5ZXQsIGJ1dCB3ZSBzaG91bGQgY2VydGFpbmx5IGF2b2lk
Cm92ZXJyaWRpbmcgY2FzZXMgYWxyZWFkeSBwcm9wZXJseSBoYW5kbGVkLiBUaGUgcmVhc29uIGZv
ciB0aGlzIGlzIHRoYXQKdGhlIGd1ZXN0IHNldHRpbmcgdXAgaXRzIE1UUlJzIGhhcHBlbnMgX2Fm
dGVyXyB0aGUgRVBUIHRhYmxlcyBnb3QKYWxyZWFkeSBjb25zdHJ1Y3RlZCwgYW5kIG5vIGNvZGUg
aXMgaW4gcGxhY2UgdG8gcHJvcGFnYXRlIHRoaXMgdG8gdGhlCkVQVCBjb2RlLiBXaXRob3V0IHRo
aXMgY2hlY2sgd2UncmUgZm9yY2luZyB0aGUgZ3Vlc3QgdG8gcnVuIHdpdGggYWxsIG9mCml0cyBt
ZW1vcnkgdW5jYWNoYWJsZSB1bnRpbCBzb21ldGhpbmcgaGFwcGVucyB0byByZS13cml0ZSBldmVy
eSBzaW5nbGUKRVBUIGVudHJ5LiBCdXQgb2YgY291cnNlIHRoaXMgaGFzIHRvIGJlIGp1c3QgYSB0
ZW1wb3Jhcnkgc29sdXRpb24uCgpJbiB0aGUgc2FtZSBzcGlyaXQgd2Ugc2hvdWxkIGRlZmVyIHRo
ZSAidmVyeSBlYXJseSIgKHdoZW4gdGhlIGd1ZXN0IGlzCnN0aWxsIGJlaW5nIGNvbnN0cnVjdGVk
IGFuZCBoYXMgbm8gdkNQVSB5ZXQpIG92ZXJyaWRlIHRvIHRoZSBsYXN0CnBvc3NpYmxlIHBvaW50
LgoKU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEv
eGVuL2FyY2gveDg2L2h2bS9tdHJyLmMKKysrIGIveGVuL2FyY2gveDg2L2h2bS9tdHJyLmMKQEAg
LTY4OSwxMyArNjg5LDggQEAgdWludDhfdCBlcHRlX2dldF9lbnRyeV9lbXQoc3RydWN0IGRvbWFp
bgogCiAgICAgKmlwYXQgPSAwOwogCi0gICAgaWYgKCAoY3VycmVudC0+ZG9tYWluICE9IGQpICYm
Ci0gICAgICAgICAoKGQtPnZjcHUgPT0gTlVMTCkgfHwgKCh2ID0gZC0+dmNwdVswXSkgPT0gTlVM
TCkpICkKLSAgICAgICAgcmV0dXJuIE1UUlJfVFlQRV9XUkJBQ0s7Ci0KLSAgICBpZiAoICFpc19w
dmhfdmNwdSh2KSAmJgotICAgICAgICAgIXYtPmRvbWFpbi0+YXJjaC5odm1fZG9tYWluLnBhcmFt
c1tIVk1fUEFSQU1fSURFTlRfUFRdICkKLSAgICAgICAgcmV0dXJuIE1UUlJfVFlQRV9XUkJBQ0s7
CisgICAgaWYgKCB2LT5kb21haW4gIT0gZCApCisgICAgICAgIHYgPSBkLT52Y3B1ID8gZC0+dmNw
dVswXSA6IE5VTEw7CiAKICAgICBpZiAoICFtZm5fdmFsaWQobWZuX3gobWZuKSkgKQogICAgICAg
ICByZXR1cm4gTVRSUl9UWVBFX1VOQ0FDSEFCTEU7CkBAIC03MTgsNyArNzEzLDggQEAgdWludDhf
dCBlcHRlX2dldF9lbnRyeV9lbXQoc3RydWN0IGRvbWFpbgogICAgICAgICByZXR1cm4gTVRSUl9U
WVBFX1dSQkFDSzsKICAgICB9CiAKLSAgICBnbXRycl9tdHlwZSA9IGlzX2h2bV92Y3B1KHYpID8K
KyAgICBnbXRycl9tdHlwZSA9IGlzX2h2bV9kb21haW4oZCkgJiYgdiAmJgorICAgICAgICAgICAg
ICAgICAgZC0+YXJjaC5odm1fZG9tYWluLnBhcmFtc1tIVk1fUEFSQU1fSURFTlRfUFRdID8KICAg
ICAgICAgICAgICAgICAgIGdldF9tdHJyX3R5cGUoJnYtPmFyY2guaHZtX3ZjcHUubXRyciwgKGdm
biA8PCBQQUdFX1NISUZUKSkgOgogICAgICAgICAgICAgICAgICAgTVRSUl9UWVBFX1dSQkFDSzsK
IAo=

--=__Part77452A33.3__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part77452A33.3__=--


From xen-devel-bounces@lists.xen.org Tue Feb 18 15:29:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:29:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmc0-0002qD-BU; Tue, 18 Feb 2014 15:29:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFmby-0002q3-Hx
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 15:29:30 +0000
Received: from [85.158.139.211:25442] by server-14.bemta-5.messagelabs.com id
	B0/ED-27598-95C73035; Tue, 18 Feb 2014 15:29:29 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392737369!4669000!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 600 invoked from network); 18 Feb 2014 15:29:29 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:29:29 -0000
Received: by mail-ea0-f181.google.com with SMTP id k10so5772650eaj.12
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 07:29:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=DcVsHWUk341g6JNgcxqCYYZVXzTlwmwh/cUVpadPhWU=;
	b=GK0W5EaAGAnaiNta7YExVogNTFIwZpIkz5AQ/rvJ8jHESiFuOvF7C9SaBW5wrMuD6C
	cgc17LzF0ayvSz4t86LiBlpg7V8OVWJhHqbGNLELA5pKo4yDY5e58N3ASKqmS0YtvyCe
	0Rg3WPLqsU8I+g7SAggEUfXcY8AnGL3Xw6NlJ8g+7bUlfeexjXQOTSAKNSmvZcIJn2AE
	ylItxjYkeH3bwrEIhiccAOxuQgy8uQNliKntCBMqGI8IjI3ATDn8tp50LGIA+ZzK+4El
	z3oUc02cO3x6PXSuVtAEpjaoBVDqxR0PdpZCmokbQFVrl9Xf9PlMkz/4h3cWbUD4s11m
	oMNg==
X-Gm-Message-State: ALoCoQmO4jBDqDUOB5kamEdmZGk+0W7+ULtcqABHz2KZmj/8ECKUpIWpO6dW/gFBSTWLhXxZClzT
X-Received: by 10.14.205.3 with SMTP id i3mr33899264eeo.23.1392737368663;
	Tue, 18 Feb 2014 07:29:28 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	y47sm71470356eel.14.2014.02.18.07.29.27 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 07:29:27 -0800 (PST)
Message-ID: <53037C56.3080804@linaro.org>
Date: Tue, 18 Feb 2014 15:29:26 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392397809-13255-1-git-send-email-julien.grall@linaro.org>
	<1392735576.11080.87.camel@kazak.uk.xensource.com>
	<1392735659.11080.88.camel@kazak.uk.xensource.com>
	<53037788.4010702@linaro.org>
	<1392737112.11080.102.camel@kazak.uk.xensource.com>
In-Reply-To: <1392737112.11080.102.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, george.dunlap@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Correctly handle non-page aligned
 pointer in raw_copy_*
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 03:25 PM, Ian Campbell wrote:
> On Tue, 2014-02-18 at 15:08 +0000, Julien Grall wrote:
>> On 02/18/2014 03:00 PM, Ian Campbell wrote:
>>> On Tue, 2014-02-18 at 14:59 +0000, Ian Campbell wrote:
>>>
>>>>> As the offset is only computed one time per function,
>>>>
>>>> We set offset = 0 at the end of the first iteration.
>>>
>>> Ah, we do in raw_copy_to_guest_helper and raw_clear_guest but not
>>> raw_copy_from_guest -- which I think is the actual bug here.
>>
>> I didn't notice the offset = 0 at the end of raw_copy_to_guest.
>>
>> I can send a patch to only set offset to 0 in raw_copy_from_guest. But I
>> think it's less clear than this patch. What do you think?
> 
> I think the approach currently used by the (non-buggy) functions is
> better -- it makes it obvious that after the first iteration things
> *have* to now be aligned.

Ok. I will resend the patch.

> I also wouldn't be surprised if the compiler had trouble proving this
> and so ended up needlessly recalculating offset instead of optimising it
> out.
> 
> If you find the code unclear please feel free to add comments etc.

I will add a comment.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:33:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmgA-00036g-3e; Tue, 18 Feb 2014 15:33:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>)
	id 1WFmg8-00036X-3B; Tue, 18 Feb 2014 15:33:48 +0000
Received: from [85.158.137.68:7380] by server-12.bemta-3.messagelabs.com id
	94/B4-01674-A5D73035; Tue, 18 Feb 2014 15:33:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392737625!1665958!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9346 invoked from network); 18 Feb 2014 15:33:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 15:33:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 15:33:45 +0000
Message-Id: <53038B64020000780011D614@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 15:33:40 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-announce@lists.xenproject.org
Subject: [Xen-devel] Xen 4.3.2 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All,

I am pleased to announce the release of Xen 4.3.2. This is
available immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.3 
(tag RELEASE-4.3.2) or from the XenProject download page
http://www.xenproject.org/downloads/xen-archives/supported-xen-43-series/xen-432.html 

This fixes the following critical vulnerabilities:
 * CVE-2013-2212 / XSA-60
    Excessive time to disable caching with HVM guests with PCI passthrough
 * CVE-2013-4494 / XSA-73
    Lock order reversal between page allocation and grant table locks
 * CVE-2013-4553 / XSA-74
    Lock order reversal between page_alloc_lock and mm_rwlock
 * CVE-2013-4551 / XSA-75
    Host crash due to guest VMX instruction execution
 * CVE-2013-4554 / XSA-76
    Hypercalls exposed to privilege rings 1 and 2 of HVM guests
 * CVE-2013-6375 / XSA-78
    Insufficient TLB flushing in VT-d (iommu) code
 * CVE-2013-6400 / XSA-80
    IOMMU TLB flushing may be inadvertently suppressed
 * CVE-2013-6885 / XSA-82
    Guest triggerable AMD CPU erratum may cause host hang
 * CVE-2014-1642 / XSA-83
    Out-of-memory condition yielding memory corruption during IRQ setup
 * CVE-2014-1891 / XSA-84
    Integer overflow in several XSM/Flask hypercalls
 * CVE-2014-1895 / XSA-85
    Off-by-one error in FLASK_AVC_CACHESTAT hypercall
 * CVE-2014-1896 / XSA-86
    libvchan failure handling malicious ring indexes
 * CVE-2014-1666 / XSA-87
    PHYSDEVOP_{prepare,release}_msix exposed to unprivileged guests
 * CVE-2014-1950 / XSA-88
    Use-after-free in xc_cpupool_getinfo() under memory pressure

Apart from those there are many further bug fixes and improvements.

We recommend that all users of the 4.3 stable series update to this
latest point release.

Regards,
Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:34:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmgR-00038Q-H4; Tue, 18 Feb 2014 15:34:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFmgQ-00038F-Df
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 15:34:06 +0000
Received: from [85.158.139.211:13512] by server-1.bemta-5.messagelabs.com id
	22/F5-12859-D6D73035; Tue, 18 Feb 2014 15:34:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392737642!4682210!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9853 invoked from network); 18 Feb 2014 15:34:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:34:04 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101801452"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 15:33:54 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 10:33:53 -0500
Message-ID: <1392737632.23084.4.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrii Anisov <andrii.anisov@globallogic.com>
Date: Tue, 18 Feb 2014 15:33:52 +0000
In-Reply-To: <CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 15:37 +0200, Andrii Anisov wrote:
>         > I've checked right now, the PC is 0xc (prefetch abort).
>         
>         
>         Good, that makes more sense!
> 
> 
> Any additional suggestions, by any chance?

I'm afraid nothing comes to mind. :-(

> As I understand your early trap debug patch would not give a lot of
> help here.

Not terribly -- but I suppose it would allow you to add logging of the
fault address/status registers etc., although if your debugger can
read those then it adds little extra value.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:34:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmge-0003BF-Ud; Tue, 18 Feb 2014 15:34:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>)
	id 1WFmgd-0003Ab-3M; Tue, 18 Feb 2014 15:34:19 +0000
Received: from [85.158.137.68:14740] by server-15.bemta-3.messagelabs.com id
	47/47-19263-A7D73035; Tue, 18 Feb 2014 15:34:18 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392737657!1666106!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13011 invoked from network); 18 Feb 2014 15:34:17 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 15:34:17 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 15:34:17 +0000
Message-Id: <53038B84020000780011D617@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 15:34:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-announce@lists.xenproject.org
Subject: [Xen-devel] Xen 4.2.4 released
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All,

I am pleased to announce the release of Xen 4.2.4. This is
available immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.2 
(tag RELEASE-4.2.4) or from the XenProject download page
http://www.xenproject.org/downloads/xen-archives/supported-xen-42-series/xen-424.html 

This fixes the following critical vulnerabilities:
 * CVE-2013-2212 / XSA-60
    Excessive time to disable caching with HVM guests with PCI passthrough
 * CVE-2013-1442 / XSA-62
    Information leak on AVX and/or LWP capable CPUs
 * CVE-2013-4355 / XSA-63
    Information leaks through I/O instruction emulation
 * CVE-2013-4361 / XSA-66
    Information leak through fbld instruction emulation
 * CVE-2013-4368 / XSA-67
    Information leak through outs instruction emulation
 * CVE-2013-4369 / XSA-68
    possible null dereference when parsing vif ratelimiting info
 * CVE-2013-4370 / XSA-69
    misplaced free in ocaml xc_vcpu_getaffinity stub
 * CVE-2013-4371 / XSA-70
    use-after-free in libxl_list_cpupool under memory pressure
 * CVE-2013-4375 / XSA-71
    qemu disk backend (qdisk) resource leak
 * CVE-2013-4416 / XSA-72
    ocaml xenstored mishandles oversized message replies
 * CVE-2013-4494 / XSA-73
    Lock order reversal between page allocation and grant table locks
 * CVE-2013-4553 / XSA-74
    Lock order reversal between page_alloc_lock and mm_rwlock
 * CVE-2013-4551 / XSA-75
    Host crash due to guest VMX instruction execution
 * CVE-2013-4554 / XSA-76
    Hypercalls exposed to privilege rings 1 and 2 of HVM guests
 * CVE-2013-6375 / XSA-78
    Insufficient TLB flushing in VT-d (iommu) code
 * CVE-2013-6400 / XSA-80
    IOMMU TLB flushing may be inadvertently suppressed
 * CVE-2013-6885 / XSA-82
    Guest triggerable AMD CPU erratum may cause host hang
 * CVE-2014-1642 / XSA-83
    Out-of-memory condition yielding memory corruption during IRQ setup
 * CVE-2014-1891 / XSA-84
    integer overflow in several XSM/Flask hypercalls
 * CVE-2014-1895 / XSA-85
    Off-by-one error in FLASK_AVC_CACHESTAT hypercall
 * CVE-2014-1896 / XSA-86
    libvchan failure handling malicious ring indexes
 * CVE-2014-1666 / XSA-87
    PHYSDEVOP_{prepare,release}_msix exposed to unprivileged guests
 * CVE-2014-1950 / XSA-88
    use-after-free in xc_cpupool_getinfo() under memory pressure

Apart from those there are many further bug fixes and improvements.

We recommend that all users of the 4.2 stable series update to this
latest point release.

Regards,
Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:47:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFmtR-0003cb-W3; Tue, 18 Feb 2014 15:47:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFmtQ-0003cP-Q6
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 15:47:32 +0000
Received: from [85.158.143.35:16613] by server-2.bemta-4.messagelabs.com id
	C1/4F-10891-49083035; Tue, 18 Feb 2014 15:47:32 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392738450!6560329!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17151 invoked from network); 18 Feb 2014 15:47:31 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 15:47:31 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IFlOxP015937
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 15:47:25 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IFlNil016465
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 18 Feb 2014 15:47:23 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IFlMrT016438; Tue, 18 Feb 2014 15:47:22 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 07:47:22 -0800
Message-ID: <530380ED.4050708@oracle.com>
Date: Tue, 18 Feb 2014 10:49:01 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
	<52FCA2E0020000780011BF52@nat28.tlf.novell.com>
	<53034122020000780011D2CC@nat28.tlf.novell.com>
In-Reply-To: <53034122020000780011D2CC@nat28.tlf.novell.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 05:16 AM, Jan Beulich wrote:
>>>> On 13.02.14 at 10:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> This test is already performed a couple of lines above.
>> Except that it's the wrong code you remove:
> No opinion on this alternative at all?

Sorry Jan, I didn't realize you were waiting for me on this.

Yes, your version is fine, although to be honest I don't see how the
original patch had any issue with division by zero, since we'd still be
inside the 'if (stride)' clause.

But as I said, either version is OK with me so you can add

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

-boris


>
> Jan
>
>>> @@ -639,11 +639,7 @@ static u64 read_pci_mem_bar(u16 seg, u8 bus, u8 slot, u8
>> func, u8 bir, int vf)
>>>           if ( vf < 0 || (vf && vf % stride) )
>>>               return 0;
>>>           if ( stride )
>>> -        {
>>> -            if ( vf % stride )
>>> -                return 0;
>>>               vf /= stride;
>>> -        }
>> Note how this second check carefully avoids a division by zero.
>>  From what I can tell I think that I simply forgot to remove the
>> right side of the earlier || after having converted it to the safer
>> variant inside the if(). Hence I think we instead want:
>>
>> x86/MSI: don't risk division by zero
>>
>> The check in question is redundant with the one in the immediately
>> following if(), where dividing by zero gets carefully avoided.
>>
>> Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/msi.c
>> +++ b/xen/arch/x86/msi.c
>> @@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
>>               return 0;
>>           base = pos + PCI_SRIOV_BAR;
>>           vf -= PCI_BDF(bus, slot, func) + offset;
>> -        if ( vf < 0 || (vf && vf % stride) )
>> +        if ( vf < 0 )
>>               return 0;
>>           if ( stride )
>>           {
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:55:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFn0n-0003yM-2b; Tue, 18 Feb 2014 15:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFn0d-0003yF-Us
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 15:55:00 +0000
Received: from [85.158.139.211:47954] by server-3.bemta-5.messagelabs.com id
	93/0A-13671-35283035; Tue, 18 Feb 2014 15:54:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392738896!159564!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5662 invoked from network); 18 Feb 2014 15:54:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:54:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101812386"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 15:54:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 10:54:55 -0500
Message-ID: <1392738894.23084.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 18 Feb 2014 15:54:54 +0000
In-Reply-To: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-16 at 23:51 +0800, Chen Baozi wrote:
> Hi all,
> 
> It is much later than I used to expect. I guess it might be help
> to publish my work, though it is still not finished (and might not
> be finished very soon...). 
> 
> I began to try to port mini-os to ARM64 since last summer. Since
> the 64-bit guest support is not quite well at that time, this
> work had been stopped for a long time until two months ago.
> 
> Though it is still at very early stage, it at least can be built,
> setup a early page table for booting, parse the DTB passed by the
> hypervisor, and be debugged by printk at present. So I put it
> on github in case someone might be interested in it. Here is the
> url: https://github.com/baozich/minios-arm64

Cool. Thank you very much for sharing.

> Right now, there are some troubles to make GIC work properly,
> as I didn’t consider mapping GIC’s interface in address space and
> follows x86’s memory layout which make the kernel virtual address
> starts at 0x0. I’ll fix it as soon as possible.

Actually, having virtual memory start at 0x0 seems quite reasonable to
me, what is the problem?

Someone somewhere was thinking of making minios run without the MMU
enabled on ARM -- to save on the overhead IIRC. But it occurs to me here
that this would be problematic if we were to move the guest memory map
around -- which we are planning to do for 4.5. I think this means that
minios must use the MMU, at least by default.

I wouldn't necessarily object to the presence of an option to build an
MMU-less variant for specific use cases, so long as it was clear to
those enabling it that their VMs might only work on a single version of
Xen.

> Besides, there is still lots of work to be done. So any comments
> or patches are welcome.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:55:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFn0n-0003yM-2b; Tue, 18 Feb 2014 15:55:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFn0d-0003yF-Us
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 15:55:00 +0000
Received: from [85.158.139.211:47954] by server-3.bemta-5.messagelabs.com id
	93/0A-13671-35283035; Tue, 18 Feb 2014 15:54:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392738896!159564!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5662 invoked from network); 18 Feb 2014 15:54:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:54:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101812386"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 15:54:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 10:54:55 -0500
Message-ID: <1392738894.23084.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Tue, 18 Feb 2014 15:54:54 +0000
In-Reply-To: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-16 at 23:51 +0800, Chen Baozi wrote:
> Hi all,
> 
> It is much later than I expected, but I guess it might be helpful
> to publish my work, though it is still not finished (and might not
> be finished very soon...).
> 
> I began trying to port Mini-OS to ARM64 last summer. Since the
> 64-bit guest support was not quite mature at that time, this work
> had been stopped for a long time until two months ago.
> 
> Though it is still at a very early stage, it can at least be built,
> set up an early page table for booting, parse the DTB passed by the
> hypervisor, and be debugged via printk at present. So I put it on
> GitHub in case someone might be interested in it. Here is the
> url: https://github.com/baozich/minios-arm64

Cool. Thank you very much for sharing.

> Right now, there is some trouble making the GIC work properly, as I
> didn't consider mapping the GIC's interface into the address space
> and followed x86's memory layout, which makes the kernel virtual
> address start at 0x0. I'll fix it as soon as possible.

Actually, having virtual memory start at 0x0 seems quite reasonable to
me; what is the problem?

Someone somewhere was thinking of making Mini-OS run without the MMU
enabled on ARM -- to save on the overhead, IIRC. But it occurs to me here
that this would be problematic if we were to move the guest memory map
around -- which we are planning to do for 4.5. I think this means that
Mini-OS must use the MMU, at least by default.

I wouldn't necessarily object to the presence of an option to build an
MMU-less variant for specific use cases, so long as it was clear to
those enabling it that their VMs might only work on a single version of
Xen.

> Besides, there is still lots of work to be done. So any comments
> or patches are welcome.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 15:59:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 15:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFn4k-0004Cm-O9; Tue, 18 Feb 2014 15:59:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFn4j-0004Ch-RP
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 15:59:14 +0000
Received: from [85.158.139.211:14672] by server-2.bemta-5.messagelabs.com id
	25/53-23037-15383035; Tue, 18 Feb 2014 15:59:13 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392739150!4715563!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29324 invoked from network); 18 Feb 2014 15:59:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 15:59:12 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101814778"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 15:59:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 10:59:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WFn4c-0005Yw-NY; Tue, 18 Feb 2014 15:59:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 18 Feb 2014 15:59:05 +0000
Message-ID: <1392739145-24664-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools/libxl: Don't read off the end of tinfo[]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is very common for BIOSes to advertise more cpus than are actually present
on the system, and mark some of them as offline.  This is what Xen does to
allow for later CPU hotplug, and what BIOSes shared across multiple different
systems do to avoid fully rewriting the MADT in memory.

An excerpt from `xl info` might look like:

...
nr_cpus                : 2
max_cpu_id             : 3
...

This shows 4 CPUs in the MADT, but only 2 online (as this particular box is
the dual-core rather than the quad-core SKU of its product line).

Because of the way Xen exposes this information, a libxl_cputopology array is
bounded by 'nr_cpus', while cpu bitmaps are bounded by 'max_cpu_id + 1'.

The current libxl code has two places which erroneously assume that a
libxl_cputopology array is as long as the number of bits found in a cpu
bitmap, and valgrind complains:

==14961== Invalid read of size 4
==14961==    at 0x407AB7F: libxl__get_numa_candidate (libxl_numa.c:230)
==14961==    by 0x407030B: libxl__build_pre (libxl_dom.c:167)
==14961==    by 0x406246F: libxl__domain_build (libxl_create.c:371)
...
==14961==  Address 0x4324788 is 8 bytes after a block of size 24 alloc'd
==14961==    at 0x402669D: calloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==14961==    by 0x4075BB9: libxl__zalloc (libxl_internal.c:83)
==14961==    by 0x4052F87: libxl_get_cpu_topology (libxl.c:4408)
==14961==    by 0x407A899: libxl__get_numa_candidate (libxl_numa.c:342)
...

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Dario Faggioli <dario.faggioli@citrix.com>
---
 tools/libxl/libxl_numa.c  |    5 ++++-
 tools/libxl/libxl_utils.c |    5 ++++-
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_numa.c b/tools/libxl/libxl_numa.c
index 20c99ac..4fac664 100644
--- a/tools/libxl/libxl_numa.c
+++ b/tools/libxl/libxl_numa.c
@@ -180,6 +180,7 @@ static int nodemap_to_nr_vcpus(libxl__gc *gc, int vcpus_on_node[],
 /* Number of vcpus able to run on the cpus of the various nodes
  * (reported by filling the array vcpus_on_node[]). */
 static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
+                             size_t tinfo_elements,
                              const libxl_bitmap *suitable_cpumap,
                              int vcpus_on_node[])
 {
@@ -222,6 +223,8 @@ static int nr_vcpus_on_nodes(libxl__gc *gc, libxl_cputopology *tinfo,
              */
             libxl_bitmap_set_none(&nodes_counted);
             libxl_for_each_set_bit(k, vinfo[j].cpumap) {
+                if (k >= tinfo_elements)
+                    break;
                 int node = tinfo[k].node;
 
                 if (libxl_bitmap_test(suitable_cpumap, k) &&
@@ -364,7 +367,7 @@ int libxl__get_numa_candidate(libxl__gc *gc,
      * all we have to do later is summing up the right elements of the
      * vcpus_on_node array.
      */
-    rc = nr_vcpus_on_nodes(gc, tinfo, suitable_cpumap, vcpus_on_node);
+    rc = nr_vcpus_on_nodes(gc, tinfo, nr_cpus, suitable_cpumap, vcpus_on_node);
     if (rc)
         goto out;
 
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index c9cef66..1f334f2 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -762,8 +762,11 @@ int libxl_cpumap_to_nodemap(libxl_ctx *ctx,
     }
 
     libxl_bitmap_set_none(nodemap);
-    libxl_for_each_set_bit(i, *cpumap)
+    libxl_for_each_set_bit(i, *cpumap) {
+        if (i >= nr_cpus)
+            break;
         libxl_bitmap_set(nodemap, tinfo[i].node);
+    }
  out:
     libxl_cputopology_list_free(tinfo, nr_cpus);
     return rc;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:15:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnKB-0005AG-F3; Tue, 18 Feb 2014 16:15:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFnK2-0005A7-33
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:15:05 +0000
Received: from [85.158.139.211:40866] by server-14.bemta-5.messagelabs.com id
	B6/6F-27598-50783035; Tue, 18 Feb 2014 16:15:01 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392740098!4693062!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11035 invoked from network); 18 Feb 2014 16:15:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 16:15:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IGEVgE013337
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 16:14:32 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGEUGx026615
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 18 Feb 2014 16:14:31 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGET8i017014; Tue, 18 Feb 2014 16:14:29 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 08:14:29 -0800
Message-ID: <53038748.7070002@oracle.com>
Date: Tue, 18 Feb 2014 11:16:08 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Meng Xu <xumengpanda@gmail.com>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
	<1392714890.32038.463.camel@Solace>
	<CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
In-Reply-To: <CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3210133911722153420=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============3210133911722153420==
Content-Type: multipart/alternative;
 boundary="------------080806060508050501030701"

This is a multi-part message in MIME format.
--------------080806060508050501030701
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 02/18/2014 10:24 AM, Meng Xu wrote:
> Hi Dario,
>
> Thank you so much for your detailed reply! It is really helpful! I'm 
> looking at the vPMU and perf on Xen, and will try it. :-)

You will need the Xen patches that Dario pointed you to (thanks Dario) 
plus Linux kernel and toolstack changes that I can send you in a 
separate email (they still need some cleanup but should be usable).

BTW, you mentioned in an earlier email that you wrote some code to 
directly access PMU registers and didn't think the code was particularly 
useful because of portability concerns. I believe basic counters (such 
as those for cache misses) and controls are common across pretty much 
all recent Intel processors.

>
> The reason why I want to know this information from hardware 
> performance counter is because I want to know the interference among 
> each domains when they are running.
>
> In addition, when we measure the latency of accessing a large array, 
> the result is out of our expectation. We increase the size of an array 
> from 1KB to 12MB, which covers the L1(32KB), L2(256KB) and L3(12MB) 
> cache size. We expect that the latency of accessing the whole array 
> should have clear cut at around 32KB, 256KB and 12MB because the 
> latency of L1 L2 and L3 are several times different.
>
> However, we saw the latency does not increase much when the array size 
> is larger than the size of L1, L2, and L3. It's weird because if we 
> run the same task in Linux on bare machine, it is the expected result.

Although most likely your vcpus are not migrating you should still make 
sure that they are pinned (and not oversubscribed to physical processors).

And (as with any performance measurements) disable power management and 
turbo mode. These things often mess up your timing.

-boris

>
> We are not sure if this is because of the virt. overhead or cache 
> miss, that's why we want to know the cache access rate of each domain.
>
> It's really appreciated  if you can share some of your insight on 
> this. :-)
>
> Thank you very much for your time!
>
> Best,
>
> Meng
>
>
> 2014-02-18 4:14 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com 
> <mailto:dario.faggioli@citrix.com>>:
>
>     On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
>     > Hi,
>     >
>     Hi,
>
>     > I'm a PhD student, working on real time system.
>     >
>     Cool. There really seems to be a lot of interest in Real-Time
>     virtualization these days. :-D
>
>     > [My goal]
>     > I want to measure the cache hit/miss rate of each guest domain
>     in Xen.
>     > I may also want to measure some other events, say memory access
>     rate,
>     > for each program in each guest domain in Xen.
>     >
>     Ok. Can I, out of curiosity, as you to detail a bit more what your
>     *final* goal is (I mean, you're interested in these measurements for a
>     reason, not just for the sake of having them, right?).
>
>     > [The problem I'm encountering]
>     > I tried intel's Performance Counter Monitor (PCM) in Linux on bare
>     > machine to get the machine's cache access rate for each level of
>     > cache, it works very well.
>     >
>     >
>     > However, when I want to use the PCM in Xen and run it in dom0, it
>     > cannot work. I think the PCM needs to run in ring 0 to
>     read/write the
>     > MSR. Because dom0 is running in ring 1, so PCM running in dom0
>     cannot
>     > work.
>     >
>     Indeed.
>
>     > So my question is:
>     > How can I run a program (say PCM) in ring 0 on Xen?
>     >
>     Running "a program" in there is going to be terribly difficult. What I
>     think you're better off doing is trying to access the counters from
>     dom0, and/or (para)virtualize them.
>
>     In fact, there is work going on already on this, although I don't have
>     all the details about what's the current status.
>
>     > What's in my mind is:
>     > Writing a hypercall to call the PCM in Xen's kernel space, then the
>     > PCM will run in ring 0?
>     > But the problem I'm concerned is that some of the PCM's instruction,
>     > say printf(), may not be able to run in kernel space?
>     >
>     Well, Xen can print, e.g., on a serial console, but again, that's not
>     what you want. I'm adding the link to a few conversation about virtual
>     PMU. These are just the very first google's result, so there may
>     well be
>     more:
>
>     http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
>     https://lwn.net/Articles/566159/
>
>     Boris (which I'm Cc-ing), gave a presentation about this at latest Xen
>     Developers Summit:
>     http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>
>     Regards,
>     Dario
>
>     --
>     <<This happens because I choose it to happen!>> (Raistlin Majere)
>     -----------------------------------------------------------------
>     Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>     Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>


--------------080806060508050501030701
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 02/18/2014 10:24 AM, Meng Xu wrote:<br>
    </div>
    <blockquote
cite="mid:CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_default" style="font-size:small">Hi Dario,</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">Thank you so
          much for your detailed reply! It is really helpful! I'm
          looking at the vPMU and perf on Xen, and will try it. :-)</div>
      </div>
    </blockquote>
    <br>
    You will need the Xen patches that Dario pointed you to (thanks
    Dario) plus Linux kernel and toolstack changes that I can send you
    in a separate email (they still need some cleanup but should be
    usable).<br>
    <br>
    BTW, you mentioned in the earlier email that you you wrote some code
    to directly access PMU registers and didn't think the code is
    particularly useful because of portability concerns. I believe basic
    counters (such as those for cache misses) and controls are common&nbsp;
    across pretty much all recent Intel processors.<br>
    <br>
    <blockquote
cite="mid:CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">The reason
          why I want to know this information from hardware performance
          counter is because I want to know the interference among each
          domains when they are running.&nbsp;</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">In addition,
          when we measure the latency of accessing a large array, the
          result is out of our expectation. We increase the size of an
          array from 1KB to 12MB, which covers the L1(32KB), L2(256KB)
          and L3(12MB) cache size. We expect that the latency of
          accessing the whole array should have clear cut at around
          32KB, 256KB and 12MB because the latency of L1 L2 and L3 are
          several times different.&nbsp;</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">However, we
          saw the latency does not increase much when the array size is
          larger than the size of L1, L2, and L3. It's weird because if
          we run the same task in Linux on bare machine, it is the
          expected result.</div>
      </div>
    </blockquote>
    <br>
    Although your vcpus are most likely not migrating, you should still
    make sure that they are pinned (and that physical processors are not
    oversubscribed).<br>
    <br>
    And (as with any performance measurement) disable power management
    and turbo mode. These things often mess up your timing.<br>
    <br>
    -boris<br>
    <br>
    <blockquote
cite="mid:CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">We are not
          sure if this is because of the virt. overhead or cache miss,
          that's why we want to know the cache access rate of each
          domain.&nbsp;</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">It's really
          appreciated &nbsp;if you can share some of your insight on this.
          :-)</div>
        <div class="gmail_default" style="font-size:small">
          <br>
        </div>
        <div class="gmail_default" style="font-size:small">Thank you
          very much for your time!</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">Best,</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">Meng</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">
            2014-02-18 4:14 GMT-05:00 Dario Faggioli <span dir="ltr">&lt;<a
                moz-do-not-send="true"
                href="mailto:dario.faggioli@citrix.com" target="_blank">dario.faggioli@citrix.com</a>&gt;</span>:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">
              On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:<br>
              &gt; Hi,<br>
              <div class="">&gt;<br>
                Hi,<br>
                <br>
                &gt; I'm a PhD student working on real-time systems.<br>
                &gt;<br>
              </div>
              Cool. There really seems to be a lot of interest in
              Real-Time<br>
              virtualization these days. :-D<br>
              <div class=""><br>
                &gt; [My goal]<br>
                &gt; I want to measure the cache hit/miss rate of each
                guest domain in Xen.<br>
                &gt; I may also want to measure some other events, say
                memory access rate,<br>
                &gt; for each program in each guest domain in Xen.<br>
                &gt;<br>
              </div>
              Ok. Can I, out of curiosity, ask you to detail a bit more
              what your<br>
              *final* goal is (I mean, you're interested in these
              measurements for a<br>
              reason, not just for the sake of having them, right?).<br>
              <div class=""><br>
                &gt; [The problem I'm encountering]<br>
                &gt; I tried intel's Performance Counter Monitor (PCM)
                in Linux on bare<br>
                &gt; machine to get the machine's cache access rate for
                each level of<br>
                &gt; cache, it works very well.<br>
                &gt;<br>
                &gt;<br>
                &gt; However, when I want to use the PCM in Xen and run
                it in dom0, it<br>
                &gt; cannot work. I think the PCM needs to run in ring 0
                to read/write the<br>
                &gt; MSRs. Because dom0 runs in ring 1, PCM running in
                dom0 cannot<br>
                &gt; work.<br>
                &gt;<br>
              </div>
              Indeed.<br>
              <div class=""><br>
                &gt; So my question is:<br>
                &gt; How can I run a program (say PCM) in ring 0 on Xen?<br>
                &gt;<br>
              </div>
              Running "a program" in there is going to be terribly
              difficult. What I<br>
              think you're better off is trying to access, from dom0
              and/or<br>
              (para)virtualize the counters.think<br>
              <br>
              In fact, there is work going on already on this, although
              I don't have<br>
              all the details about what's the current status.<br>
              <div class=""><br>
                &gt; What's in my mind is:<br>
                &gt; Writing a hypercall to call the PCM in Xen's kernel
                space, then the<br>
                &gt; PCM will run in ring 0?<br>
                &gt; But the problem I'm concerned about is that some of
                PCM's functions,<br>
                &gt; say printf(), may not be able to run in kernel
                space?<br>
                &gt;<br>
              </div>
              Well, Xen can print, e.g., on a serial console, but again,
              that's not<br>
              what you want. I'm adding links to a few conversations
              about virtual<br>
              PMU. These are just the first Google results, so there may
              well be<br>
              more:<br>
              <br>
              <a moz-do-not-send="true"
href="http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html"
                target="_blank">http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html</a><br>
              <a moz-do-not-send="true"
                href="https://lwn.net/Articles/566159/" target="_blank">https://lwn.net/Articles/566159/</a><br>
              <br>
              Boris (whom I'm Cc-ing) gave a presentation about this at
              the latest Xen<br>
              Developers Summit:<br>
              <a moz-do-not-send="true"
                href="http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013"
                target="_blank">http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013</a><br>
              <br>
              Regards,<br>
              Dario<br>
              <span class="HOEnZb"><font color="#888888"><br>
                  --<br>
                  &lt;&lt;This happens because I choose it to
                  happen!&gt;&gt; (Raistlin Majere)<br>
-----------------------------------------------------------------<br>
                  Dario Faggioli, Ph.D, <a moz-do-not-send="true"
                    href="http://about.me/dario.faggioli"
                    target="_blank">http://about.me/dario.faggioli</a><br>
                  Senior Software Engineer, Citrix Systems R&amp;D Ltd.,
                  Cambridge (UK)<br>
                  <br>
                </font></span></blockquote>
          </div>
          <br>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>

--------------080806060508050501030701--


--===============3210133911722153420==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3210133911722153420==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:15:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:15:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnKB-0005AG-F3; Tue, 18 Feb 2014 16:15:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFnK2-0005A7-33
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:15:05 +0000
Received: from [85.158.139.211:40866] by server-14.bemta-5.messagelabs.com id
	B6/6F-27598-50783035; Tue, 18 Feb 2014 16:15:01 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392740098!4693062!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11035 invoked from network); 18 Feb 2014 16:15:00 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 16:15:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IGEVgE013337
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 16:14:32 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGEUGx026615
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 18 Feb 2014 16:14:31 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGET8i017014; Tue, 18 Feb 2014 16:14:29 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 08:14:29 -0800
Message-ID: <53038748.7070002@oracle.com>
Date: Tue, 18 Feb 2014 11:16:08 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Meng Xu <xumengpanda@gmail.com>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
	<1392714890.32038.463.camel@Solace>
	<CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
In-Reply-To: <CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3210133911722153420=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============3210133911722153420==
Content-Type: multipart/alternative;
 boundary="------------080806060508050501030701"

This is a multi-part message in MIME format.
--------------080806060508050501030701
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 02/18/2014 10:24 AM, Meng Xu wrote:
> Hi Dario,
>
> Thank you so much for your detailed reply! It is really helpful! I'm 
> looking at the vPMU and perf on Xen, and will try it. :-)

You will need the Xen patches that Dario pointed you to (thanks Dario) 
plus Linux kernel and toolstack changes that I can send you in a 
separate email (they still need some cleanup but should be usable).

BTW, you mentioned in the earlier email that you wrote some code to 
directly access PMU registers and didn't think the code was particularly 
useful because of portability concerns. I believe basic counters (such 
as those for cache misses) and controls are common across pretty much 
all recent Intel processors.

>
> The reason I want to get this information from the hardware 
> performance counters is that I want to measure the interference among 
> domains while they are running.
>
> In addition, when we measure the latency of accessing a large array, 
> the result does not match our expectation. We increase the size of the 
> array from 1KB to 12MB, which covers the L1 (32KB), L2 (256KB) and L3 
> (12MB) cache sizes. We expect the latency of accessing the whole array 
> to show clear jumps at around 32KB, 256KB and 12MB, because the 
> latencies of L1, L2 and L3 differ by several times.
>
> However, we see that the latency does not increase much when the array 
> size grows beyond the sizes of L1, L2, and L3. This is strange, because 
> when we run the same task in Linux on a bare-metal machine, we get the 
> expected result.

Although your vcpus are most likely not migrating, you should still make 
sure that they are pinned (and that physical processors are not 
oversubscribed).

And (as with any performance measurement) disable power management and 
turbo mode. These things often mess up your timing.

-boris

>
> We are not sure whether this is due to virtualization overhead or 
> cache misses; that's why we want to know the cache access rate of each 
> domain.
>
> It would be really appreciated if you could share some of your insight 
> on this. :-)
>
> Thank you very much for your time!
>
> Best,
>
> Meng
>
>
> 2014-02-18 4:14 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com 
> <mailto:dario.faggioli@citrix.com>>:
>
>     On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
>     > Hi,
>     >
>     Hi,
>
>     > I'm a PhD student working on real-time systems.
>     >
>     Cool. There really seems to be a lot of interest in Real-Time
>     virtualization these days. :-D
>
>     > [My goal]
>     > I want to measure the cache hit/miss rate of each guest domain
>     in Xen.
>     > I may also want to measure some other events, say memory access
>     rate,
>     > for each program in each guest domain in Xen.
>     >
>     Ok. Can I, out of curiosity, ask you to detail a bit more what your
>     *final* goal is (I mean, you're interested in these measurements for a
>     reason, not just for the sake of having them, right?).
>
>     > [The problem I'm encountering]
>     > I tried intel's Performance Counter Monitor (PCM) in Linux on bare
>     > machine to get the machine's cache access rate for each level of
>     > cache, it works very well.
>     >
>     >
>     > However, when I want to use the PCM in Xen and run it in dom0, it
>     > cannot work. I think the PCM needs to run in ring 0 to
>     > read/write the MSRs. Because dom0 runs in ring 1, PCM running in
>     > dom0 cannot work.
>     >
>     Indeed.
>
>     > So my question is:
>     > How can I run a program (say PCM) in ring 0 on Xen?
>     >
>     Running "a program" in there is going to be terribly difficult. What I
>     think you're better off doing is trying to access the counters from
>     dom0, and/or (para)virtualizing them.
>
>     In fact, there is work going on already on this, although I don't have
>     all the details about what's the current status.
>
>     > What's in my mind is:
>     > Writing a hypercall to call the PCM in Xen's kernel space, then the
>     > PCM will run in ring 0?
>     > But the problem I'm concerned about is that some of PCM's functions,
>     > say printf(), may not be able to run in kernel space?
>     >
>     Well, Xen can print, e.g., on a serial console, but again, that's not
>     what you want. I'm adding links to a few conversations about virtual
>     PMU. These are just the first Google results, so there may well be
>     more:
>
>     http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
>     https://lwn.net/Articles/566159/
>
>     Boris (whom I'm Cc-ing) gave a presentation about this at the latest
>     Xen Developers Summit:
>     http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>
>     Regards,
>     Dario
>
>     --
>     <<This happens because I choose it to happen!>> (Raistlin Majere)
>     -----------------------------------------------------------------
>     Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>     Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>
>


--------------080806060508050501030701
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body bgcolor="#FFFFFF" text="#000000">
    <div class="moz-cite-prefix">On 02/18/2014 10:24 AM, Meng Xu wrote:<br>
    </div>
    <blockquote
cite="mid:CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_default" style="font-size:small">Hi Dario,</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">Thank you so
          much for your detailed reply! It is really helpful! I'm
          looking at the vPMU and perf on Xen, and will try it. :-)</div>
      </div>
    </blockquote>
    <br>
    You will need the Xen patches that Dario pointed you to (thanks
    Dario) plus Linux kernel and toolstack changes that I can send you
    in a separate email (they still need some cleanup but should be
    usable).<br>
    <br>
    BTW, you mentioned in the earlier email that you wrote some code
    to directly access PMU registers and didn't think the code was
    particularly useful because of portability concerns. I believe basic
    counters (such as those for cache misses) and controls are common
    across pretty much all recent Intel processors.<br>
    <br>
    <blockquote
cite="mid:CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">The reason
          why I want to know this information from hardware performance
          counter is because I want to know the interference among each
          domains when they are running.&nbsp;</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">In addition,
          when we measure the latency of accessing a large array, the
          result is out of our expectation. We increase the size of an
          array from 1KB to 12MB, which covers the L1(32KB), L2(256KB)
          and L3(12MB) cache size. We expect that the latency of
          accessing the whole array should have clear cut at around
          32KB, 256KB and 12MB because the latency of L1 L2 and L3 are
          several times different.&nbsp;</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">However, we
          saw the latency does not increase much when the array size is
          larger than the size of L1, L2, and L3. It's weird because if
          we run the same task in Linux on bare machine, it is the
          expected result.</div>
      </div>
    </blockquote>
    <br>
    Although your vcpus are most likely not migrating, you should still
    make sure that they are pinned (and that physical processors are not
    oversubscribed).<br>
    <br>
    And (as with any performance measurement) disable power management
    and turbo mode. These things often mess up your timing.<br>
    <br>
    -boris<br>
    <br>
    <blockquote
cite="mid:CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com"
      type="cite">
      <div dir="ltr">
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">We are not
          sure if this is because of the virt. overhead or cache miss,
          that's why we want to know the cache access rate of each
          domain.&nbsp;</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">It's really
          appreciated &nbsp;if you can share some of your insight on this.
          :-)</div>
        <div class="gmail_default" style="font-size:small">
          <br>
        </div>
        <div class="gmail_default" style="font-size:small">Thank you
          very much for your time!</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">Best,</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_default" style="font-size:small">Meng</div>
        <div class="gmail_default" style="font-size:small"><br>
        </div>
        <div class="gmail_extra"><br>
          <div class="gmail_quote">
            2014-02-18 4:14 GMT-05:00 Dario Faggioli <span dir="ltr">&lt;<a
                moz-do-not-send="true"
                href="mailto:dario.faggioli@citrix.com" target="_blank">dario.faggioli@citrix.com</a>&gt;</span>:<br>
            <blockquote class="gmail_quote" style="margin:0 0 0
              .8ex;border-left:1px #ccc solid;padding-left:1ex">
              On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:<br>
              &gt; Hi,<br>
              <div class="">&gt;<br>
                Hi,<br>
                <br>
                &gt; I'm a PhD student working on real-time systems.<br>
                &gt;<br>
              </div>
              Cool. There really seems to be a lot of interest in
              Real-Time<br>
              virtualization these days. :-D<br>
              <div class=""><br>
                &gt; [My goal]<br>
                &gt; I want to measure the cache hit/miss rate of each
                guest domain in Xen.<br>
                &gt; I may also want to measure some other events, say
                memory access rate,<br>
                &gt; for each program in each guest domain in Xen.<br>
                &gt;<br>
              </div>
              Ok. Can I, out of curiosity, ask you to detail a bit more
              what your<br>
              *final* goal is (I mean, you're interested in these
              measurements for a<br>
              reason, not just for the sake of having them, right?).<br>
              <div class=""><br>
                &gt; [The problem I'm encountering]<br>
                &gt; I tried intel's Performance Counter Monitor (PCM)
                in Linux on bare<br>
                &gt; machine to get the machine's cache access rate for
                each level of<br>
                &gt; cache, it works very well.<br>
                &gt;<br>
                &gt;<br>
                &gt; However, when I want to use the PCM in Xen and run
                it in dom0, it<br>
                &gt; cannot work. I think the PCM needs to run in ring 0
                to read/write the<br>
                &gt; MSRs. Because dom0 runs in ring 1, PCM running in
                dom0 cannot<br>
                &gt; work.<br>
                &gt;<br>
              </div>
              Indeed.<br>
              <div class=""><br>
                &gt; So my question is:<br>
                &gt; How can I run a program (say PCM) in ring 0 on Xen?<br>
                &gt;<br>
              </div>
              Running "a program" in there is going to be terribly
              difficult. What I<br>
              think you're better off is trying to access, from dom0
              and/or<br>
              (para)virtualize the counters.think<br>
              <br>
              In fact, there is work going on already on this, although
              I don't have<br>
              all the details about what's the current status.<br>
              <div class=""><br>
                &gt; What's in my mind is:<br>
                &gt; Writing a hypercall to call the PCM in Xen's kernel
                space, then the<br>
                &gt; PCM will run in ring 0?<br>
                &gt; But the problem I'm concerned about is that some of
                PCM's functions,<br>
                &gt; say printf(), may not be able to run in kernel
                space?<br>
                &gt;<br>
              </div>
              Well, Xen can print, e.g., on a serial console, but again,
              that's not<br>
              what you want. I'm adding links to a few conversations
              about virtual<br>
              PMU. These are just the first Google results, so there may
              well be<br>
              more:<br>
              <br>
              <a moz-do-not-send="true"
href="http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html"
                target="_blank">http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html</a><br>
              <a moz-do-not-send="true"
                href="https://lwn.net/Articles/566159/" target="_blank">https://lwn.net/Articles/566159/</a><br>
              <br>
              Boris (whom I'm Cc-ing) gave a presentation about this at
              the latest Xen<br>
              Developers Summit:<br>
              <a moz-do-not-send="true"
                href="http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013"
                target="_blank">http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013</a><br>
              <br>
              Regards,<br>
              Dario<br>
              <span class="HOEnZb"><font color="#888888"><br>
                  --<br>
                  &lt;&lt;This happens because I choose it to
                  happen!&gt;&gt; (Raistlin Majere)<br>
-----------------------------------------------------------------<br>
                  Dario Faggioli, Ph.D, <a moz-do-not-send="true"
                    href="http://about.me/dario.faggioli"
                    target="_blank">http://about.me/dario.faggioli</a><br>
                  Senior Software Engineer, Citrix Systems R&amp;D Ltd.,
                  Cambridge (UK)<br>
                  <br>
                </font></span></blockquote>
          </div>
          <br>
        </div>
      </div>
    </blockquote>
    <br>
  </body>
</html>

--------------080806060508050501030701--


--===============3210133911722153420==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3210133911722153420==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:24:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:24:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnT1-0005Qt-Pf; Tue, 18 Feb 2014 16:24:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WFnT0-0005Qo-Qt
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:24:19 +0000
Received: from [85.158.143.35:11805] by server-1.bemta-4.messagelabs.com id
	C5/80-31661-23983035; Tue, 18 Feb 2014 16:24:18 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392740657!6571170!1
X-Originating-IP: [213.199.154.206]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3507 invoked from network); 18 Feb 2014 16:24:17 -0000
Received: from am1ehsobe003.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.206)
	by server-11.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	18 Feb 2014 16:24:17 -0000
Received: from mail29-am1-R.bigfish.com (10.3.201.229) by
	AM1EHSOBE017.bigfish.com (10.3.207.139) with Microsoft SMTP Server id
	14.1.225.22; Tue, 18 Feb 2014 16:24:17 +0000
Received: from mail29-am1 (localhost [127.0.0.1])	by mail29-am1-R.bigfish.com
	(Postfix) with ESMTP id DE56D2C04D8;
	Tue, 18 Feb 2014 16:24:16 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -2
X-BigFish: VPS-2(z579eh37d5kzbb2dI98dI9371I1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24ach24d7h2516h2545h255eh1155h)
Received: from mail29-am1 (localhost.localdomain [127.0.0.1]) by mail29-am1
	(MessageSwitch) id 1392740650575153_27926;
	Tue, 18 Feb 2014 16:24:10 +0000 (UTC)
Received: from AM1EHSMHS005.bigfish.com (unknown [10.3.201.235])	by
	mail29-am1.bigfish.com (Postfix) with ESMTP id 88366300084;
	Tue, 18 Feb 2014 16:24:10 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by AM1EHSMHS005.bigfish.com
	(10.3.207.105) with Microsoft SMTP Server id 14.16.227.3;
	Tue, 18 Feb 2014 16:24:00 +0000
X-WSS-ID: 0N179JX-08-D32-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	209D5D16026;	Tue, 18 Feb 2014 10:23:56 -0600 (CST)
Received: from SATLEXDAG05.amd.com (10.181.40.11) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Tue, 18 Feb 2014 10:24:08 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag05.amd.com
	(10.181.40.11) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Tue, 18 Feb 2014 11:22:23 -0500
Message-ID: <5303891D.5070707@amd.com>
Date: Tue, 18 Feb 2014 10:23:57 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "Liu, Jinsong" <jinsong.liu@intel.com>, Jan Beulich <JBeulich@suse.com>
References: <1392247608-6960-1-git-send-email-aravind.gopalakrishnan@amd.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F667B@SHSMSX101.ccr.corp.intel.com>
	<530338D5020000780011D258@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F786F@SHSMSX101.ccr.corp.intel.com>
	<53034BC1020000780011D371@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F7A4D@SHSMSX101.ccr.corp.intel.com>
	<530361CB020000780011D475@nat28.tlf.novell.com>
	<DE8DF0795D48FD4CA783C40EC8292335014F8CC1@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335014F8CC1@SHSMSX101.ccr.corp.intel.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
	"chegger@amazon.de" <chegger@amazon.de>,
	"suravee.suthikulpanit@amd.com" <suravee.suthikulpanit@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH V4.1] mcheck,
 vmce: Allow vmce_amd_* functions to handle AMD  thresolding MSRs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/18/2014 6:42 AM, Liu, Jinsong wrote:
> Jan Beulich wrote:
>>>>> On 18.02.14 at 12:42, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
>>> Jan Beulich wrote:
>>>>>>> On 18.02.14 at 11:52, "Liu, Jinsong" <jinsong.liu@intel.com>
>>>>>>> wrote:
>>>>>>>> --- a/xen/arch/x86/cpu/mcheck/vmce.c
>>>>>>>> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
>>>>>>>> @@ -107,7 +107,8 @@ static int bank_mce_rdmsr(const struct vcpu
>>>>>>>> *v, uint32_t msr, uint64_t *val)
>>>>>>>>
>>>>>>>>       *val = 0;
>>>>>>>>
>>>>>>>> -    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
>>>>>>>> +    /* Allow only first 3 MC banks into switch() */
>>>>> I don't think this comments is good here. Remove it is better.
>>>> I had asked for this to be removed again too. I'm really thinking
>>>> that V3 is what we should go with.
>>> V3 is fine, except adding comments for '-MSR_IA32_MC0_CTL' is
>>> slightly better.
>> Can I read this as an ack then (I already explained elsewhere
>> why I think a comment there is rather pointless)?
>>
>> Jan

Ok, guess V3 is good enough..

> Yes, please.
>
> Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:30:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnYZ-0005ek-Ki; Tue, 18 Feb 2014 16:30:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFnYX-0005ef-TQ
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 16:30:02 +0000
Received: from [85.158.137.68:22320] by server-11.bemta-3.messagelabs.com id
	8E/F1-04255-98A83035; Tue, 18 Feb 2014 16:30:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392740998!1365431!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8073 invoked from network); 18 Feb 2014 16:30:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:30:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103556227"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 16:29:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:29:58 -0500
Message-ID: <1392740996.23084.23.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Tue, 18 Feb 2014 16:29:56 +0000
In-Reply-To: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402141541520.4307@kaball.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH-4.5 v2 0/10] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-14 at 15:50 +0000, Stefano Stabellini wrote:
> Hi all,
> this patch series removes any needs for maintenance interrupts for both
> hardware and software interrupts in Xen.

I tried this on Xgene and it fails to boot, apparently it's not seeing
any SATA interrupts (the logs are uninteresting I think, all looks like
SATA failures).

bisecting fingers:

8916c25c9f4fc0d71dc3ac5dcb28b68bf4effb4e is the first bad commit
commit 8916c25c9f4fc0d71dc3ac5dcb28b68bf4effb4e
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Feb 12 17:48:10 2014 +0000

    xen/arm: support HW interrupts in gic_set_lr
    
    If the irq to be injected is an hardware irq (p->desc != NULL), set
    GICH_LR_HW.
    
    Remove the code to EOI a physical interrupt on behalf of the guest
    because it has become unnecessary.
    
    Also add a struct vcpu* parameter to gic_set_lr.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    ---
    
    Changes in v2:
    - remove the EOI code, now unnecessary;
    - do not assume physical IRQ == virtual IRQ;
    - refactor gic_set_lr.

:040000 040000 9647c168e6e7d81fd57f70c8519d1c4bbee7d33c 93344e9ddb4fa2da3c0e578bc3ebfbcf153ad5a1 M	xen


$ git bisect log
git bisect start
# good: [feee1ace547cf6247a358d082dd64fa762be2488] Merge branch 'master' into staging
git bisect good feee1ace547cf6247a358d082dd64fa762be2488
# bad: [739a2ff8910dac953c3adebddebfe621b537fab4] Merge branch 'no_maintenance_interrupts-v2' of git://xenbits.xen.org/people/sstabellini/xen-unstable into no-maint-irq
git bisect bad 739a2ff8910dac953c3adebddebfe621b537fab4
# bad: [c94edf5af6962aba8840fe03529717d359781ae7] xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq
git bisect bad c94edf5af6962aba8840fe03529717d359781ae7
# bad: [8916c25c9f4fc0d71dc3ac5dcb28b68bf4effb4e] xen/arm: support HW interrupts in gic_set_lr
git bisect bad 8916c25c9f4fc0d71dc3ac5dcb28b68bf4effb4e
# good: [38b0d97d407d33d9eeb26e310daf25119867a943] xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
git bisect good 38b0d97d407d33d9eeb26e310daf25119867a943



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:34:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:34:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnd2-0005m9-KN; Tue, 18 Feb 2014 16:34:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFnd1-0005m2-Mt
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:34:39 +0000
Received: from [85.158.143.35:46556] by server-3.bemta-4.messagelabs.com id
	6C/10-11539-E9B83035; Tue, 18 Feb 2014 16:34:38 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392741276!6583307!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25568 invoked from network); 18 Feb 2014 16:34:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:34:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="103558536"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 16:33:43 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:33:39 -0500
Message-ID: <1392741217.32038.563.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 Feb 2014 17:33:37 +0100
In-Reply-To: <1392739145-24664-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392739145-24664-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools/libxl: Don't read off the end of
	tinfo[]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6954548410674264870=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6954548410674264870==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-GXn1lkf+nJ+y9TSqzfza"

--=-GXn1lkf+nJ+y9TSqzfza
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2014-02-18 at 15:59 +0000, Andrew Cooper wrote:
> It is very common for BIOSes to advertise more cpus than are actually pre=
sent
> on the system, and mark some of them as offline.  This is what Xen does t=
o
> allow for later CPU hotplug, and what BIOSes common to multiple different
> systems do to to save fully rewriting the MADT in memory.
>=20
> An excerpt from `xl info` might look like:
>=20
> ...
> nr_cpus                : 2
> max_cpu_id             : 3
> ...
>=20
> Which shows 4 CPUs in the MADT, but only 2 online (as this particular box=
 is
> the dual-core rather than the quad-core SKU of its particular brand)
>=20
> Because of the way Xen exposes this information, a libxl_cputopology arra=
y is
> bounded by 'nr_cpus', while cpu bitmaps are bounded by 'max_cpu_id + 1'.
>=20
> The current libxl code has two places which erroneously assume that a
> libxl_cputopology array is as long as the number of bits found in a cpu
> bitmap, and valgrind complains:
>=20
> =3D=3D14961=3D=3D Invalid read of size 4
> =3D=3D14961=3D=3D    at 0x407AB7F: libxl__get_numa_candidate (libxl_numa.=
c:230)
> =3D=3D14961=3D=3D    by 0x407030B: libxl__build_pre (libxl_dom.c:167)
> =3D=3D14961=3D=3D    by 0x406246F: libxl__domain_build (libxl_create.c:37=
1)
> ...
> =3D=3D14961=3D=3D  Address 0x4324788 is 8 bytes after a block of size 24 =
alloc'd
> =3D=3D14961=3D=3D    at 0x402669D: calloc (in/usr/lib/valgrind/vgpreload_=
memcheck-x86-linux.so)
> =3D=3D14961=3D=3D    by 0x4075BB9: libxl__zalloc (libxl_internal.c:83)
> =3D=3D14961=3D=3D    by 0x4052F87: libxl_get_cpu_topology (libxl.c:4408)
> =3D=3D14961=3D=3D    by 0x407A899: libxl__get_numa_candidate (libxl_numa.=
c:342)
> ...
>=20
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: Dario Faggioli <dario.faggioli@citrix.com>
>
Reviewed-by: Dario Faggioli <dario.faggioli@citrix.com>

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-GXn1lkf+nJ+y9TSqzfza
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDi2EACgkQk4XaBE3IOsQomwCcDSOkaL8oRmUmQ8vevlVI2Nx1
zEYAn1gsZo8rkW/h7TTvowNRpPPVeePe
=7Axl
-----END PGP SIGNATURE-----

--=-GXn1lkf+nJ+y9TSqzfza--


--===============6954548410674264870==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6954548410674264870==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:39:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnhY-0005z6-BZ; Tue, 18 Feb 2014 16:39:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFnhW-0005z1-F3
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:39:18 +0000
Received: from [193.109.254.147:33985] by server-4.bemta-14.messagelabs.com id
	53/37-32066-5BC83035; Tue, 18 Feb 2014 16:39:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392741555!5181654!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29626 invoked from network); 18 Feb 2014 16:39:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:39:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103561843"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 16:39:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:39:06 -0500
Message-ID: <1392741544.23084.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 Feb 2014 16:39:04 +0000
In-Reply-To: <1392739145-24664-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392739145-24664-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools/libxl: Don't read off the end of
	tinfo[]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 15:59 +0000, Andrew Cooper wrote:
> It is very common for BIOSes to advertise more cpus than are actually present
> on the system, and mark some of them as offline.  This is what Xen does to
> allow for later CPU hotplug, and what BIOSes common to multiple different
> systems do to save fully rewriting the MADT in memory.
> 
> An excerpt from `xl info` might look like:
> 
> ...
> nr_cpus                : 2
> max_cpu_id             : 3
> ...
> 
> This shows 4 CPUs in the MADT, but only 2 online (as this particular box is
> the dual-core rather than the quad-core SKU of its particular brand).
> 
> Because of the way Xen exposes this information, a libxl_cputopology array is
> bounded by 'nr_cpus', while cpu bitmaps are bounded by 'max_cpu_id + 1'.
> 
> The current libxl code has two places which erroneously assume that a
> libxl_cputopology array is as long as the number of bits found in a cpu
> bitmap, and valgrind complains:
> 
> ==14961== Invalid read of size 4
> ==14961==    at 0x407AB7F: libxl__get_numa_candidate (libxl_numa.c:230)
> ==14961==    by 0x407030B: libxl__build_pre (libxl_dom.c:167)
> ==14961==    by 0x406246F: libxl__domain_build (libxl_create.c:371)
> ...
> ==14961==  Address 0x4324788 is 8 bytes after a block of size 24 alloc'd
> ==14961==    at 0x402669D: calloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
> ==14961==    by 0x4075BB9: libxl__zalloc (libxl_internal.c:83)
> ==14961==    by 0x4052F87: libxl_get_cpu_topology (libxl.c:4408)
> ==14961==    by 0x407A899: libxl__get_numa_candidate (libxl_numa.c:342)
> ...
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

Unless someone argues otherwise this is going into my 4.5 pile.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:41:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnjH-00063m-Rz; Tue, 18 Feb 2014 16:41:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFnjG-00063Y-FH
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:41:06 +0000
Received: from [85.158.137.68:18957] by server-1.bemta-3.messagelabs.com id
	25/C2-17293-12D83035; Tue, 18 Feb 2014 16:41:05 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392741663!2695290!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9879 invoked from network); 18 Feb 2014 16:41:04 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 16:41:04 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IGev7A030007
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 16:40:58 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1IGeuhE012641
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Feb 2014 16:40:56 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGeuF2011652; Tue, 18 Feb 2014 16:40:56 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 08:40:55 -0800
Message-ID: <53038D7A.8030807@oracle.com>
Date: Tue, 18 Feb 2014 11:42:34 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <53023729.7020009@citrix.com>
In-Reply-To: <53023729.7020009@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Jun Nakajima <jun.nakajima@intel.com>, Tim Deegan <tim@xen.org>,
	Xen-devel List <xen-devel@lists.xen.org>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/17/2014 11:22 AM, Andrew Cooper wrote:
> Hello,
>
> Here is a design proposal to improve VM feature levelling support in Xen
> and libxc.

In case you haven't seen this you might also find this useful:
http://developer.amd.com/wordpress/media/2012/10/CrossVendorMigration.pdf

It's a slightly different subject but perhaps some of the experiences 
that are listed there could influence your design.

-boris



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:42:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnk7-00068M-AM; Tue, 18 Feb 2014 16:41:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFnk6-000684-3a
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 16:41:58 +0000
Received: from [85.158.137.68:62151] by server-9.bemta-3.messagelabs.com id
	C5/86-10184-55D83035; Tue, 18 Feb 2014 16:41:57 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392741716!1144086!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29707 invoked from network); 18 Feb 2014 16:41:56 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 16:41:56 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 16:41:55 +0000
Message-Id: <53039B61020000780011D68B@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 16:41:52 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
	<52FCA2E0020000780011BF52@nat28.tlf.novell.com>
	<53034122020000780011D2CC@nat28.tlf.novell.com>
	<530380ED.4050708@oracle.com>
In-Reply-To: <530380ED.4050708@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 16:49, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/18/2014 05:16 AM, Jan Beulich wrote:
>>>>> On 13.02.14 at 10:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>> This test is already performed a couple of lines above.
>>> Except that it's the wrong code you remove:
>> No opinion on this alternative at all?
> 
> Sorry Jan, I didn't realize you were waiting for me on this.
> 
> Yes, your version is fine although to be honest I don't see how the 
> original patch had any issues with division by zero since we'd still be 
> inside the 'if (stride)' clause.

It's the very division that this patch removes:

>>> --- a/xen/arch/x86/msi.c
>>> +++ b/xen/arch/x86/msi.c
>>> @@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
>>>               return 0;
>>>           base = pos + PCI_SRIOV_BAR;
>>>           vf -= PCI_BDF(bus, slot, func) + offset;
>>> -        if ( vf < 0 || (vf && vf % stride) )
>>> +        if ( vf < 0 )
>>>               return 0;
>>>           if ( stride )
>>>           {

Which isn't inside the if(stride).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:44:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnmP-0006KQ-Sc; Tue, 18 Feb 2014 16:44:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFnmO-0006K4-PQ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:44:20 +0000
Received: from [85.158.139.211:62091] by server-9.bemta-5.messagelabs.com id
	06/F1-11237-4ED83035; Tue, 18 Feb 2014 16:44:20 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392741856!4681960!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21353 invoked from network); 18 Feb 2014 16:44:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:44:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="101838541"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 16:44:08 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:44:07 -0500
Message-ID: <1392741845.32038.569.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 18 Feb 2014 17:44:05 +0100
In-Reply-To: <1389968194.6697.108.camel@kazak.uk.xensource.com>
References: <1389964977.2099.3.camel@64bitDom0>
	<1389968194.6697.108.camel@kazak.uk.xensource.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ferdinand Brasser <ferdinand.brasser@trust.cased.de>,
	mihai.bucicoiu@trust.cased.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [HotSwap] Live Update for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3275991284384673761=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3275991284384673761==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-M6cm2ic09lJ/+E04T+Oc"

--=-M6cm2ic09lJ/+E04T+Oc
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hey,

[I always wanted to reply to this thread, but then it slipped over and
over, up to now!]

On ven, 2014-01-17 at 14:16 +0000, Ian Campbell wrote:
> On Fri, 2014-01-17 at 14:22 +0100, Ferdinand Brasser wrote:
> > Dear all,
> > 
> > My name is Ferdinand Brasser, research assistant at CASED/TU Darmstadt.
> > 
> > Here at CASED, we have developed a live updating mechanism for Xen,
> > which we call HotSwap. Currently we have a prototype for Xen 4.2 and
> > would like to know if there is any interest from the community to
> > integrate our approach into Xen. If so, some advice on how to proceed is
> > welcomed.
> > 
> > Our approach to updating Xen is - at a very high level - to load a complete new
> > version of Xen at runtime and then transfer the state of the old version
> > to the new one. Afterwards the execution is continued by the new
> > version. We make use of Xen functions to disable all but one CPU and
> > interrupts during the update process to keep the state consistent while
> > transferring. We have evaluated our prototype with the result that the
> > update process takes about 45ms on our test system.
> 
Wow, 45ms is certainly something bearable for this kind of
operation! :-P

> This sounds pretty cool. I think everyone would be interested in hearing
> a bit more about it and in seeing the code.
> 
I agree... this would be a really great feature to have!

So, any news? Any update? Any plan on following Ian's suggestions to
--at least try to-- upstream it?

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-M6cm2ic09lJ/+E04T+Oc
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDjdUACgkQk4XaBE3IOsQ1YQCfa5vVeBKWLNwgmIwQEylApbAn
Kj0AoKp/BvB4X/+yTEdgo7o5aazjtXtp
=7DRj
-----END PGP SIGNATURE-----

--=-M6cm2ic09lJ/+E04T+Oc--


--===============3275991284384673761==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3275991284384673761==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:44:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnmP-0006KQ-Sc; Tue, 18 Feb 2014 16:44:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFnmO-0006K4-PQ
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:44:20 +0000
Received: from [85.158.139.211:62091] by server-9.bemta-5.messagelabs.com id
	06/F1-11237-4ED83035; Tue, 18 Feb 2014 16:44:20 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392741856!4681960!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21353 invoked from network); 18 Feb 2014 16:44:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:44:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="101838541"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 16:44:08 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:44:07 -0500
Message-ID: <1392741845.32038.569.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Date: Tue, 18 Feb 2014 17:44:05 +0100
In-Reply-To: <1389968194.6697.108.camel@kazak.uk.xensource.com>
References: <1389964977.2099.3.camel@64bitDom0>
	<1389968194.6697.108.camel@kazak.uk.xensource.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ferdinand Brasser <ferdinand.brasser@trust.cased.de>,
	mihai.bucicoiu@trust.cased.de, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [HotSwap] Live Update for Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3275991284384673761=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3275991284384673761==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-M6cm2ic09lJ/+E04T+Oc"

--=-M6cm2ic09lJ/+E04T+Oc
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hey,

[I kept meaning to reply to this thread, but it slipped over and
over, up to now!]

On ven, 2014-01-17 at 14:16 +0000, Ian Campbell wrote:
> On Fri, 2014-01-17 at 14:22 +0100, Ferdinand Brasser wrote:
> > Dear all,
> >=20
> > My name is Ferdinand Brasser, research assistant at CASED/TU Darmstadt.=
=20
> >=20
> > Here at CASED, we have developed a live updating mechanism for Xen,
> > which we call HotSwap. Currently we have a prototype for Xen 4.2 and
> > would like to know if there is any interest from the community to
> > integrate our approach into Xen. If so, some advice on how to proceed i=
s
> > welcomed.
> >=20
> > Our approach to update Xen is - very high level - to load a complete ne=
w
> > version of Xen at runtime and then transfer the state of the old versio=
n
> > to the new one. Afterwards the execution is continued by the new
> > version. We make use of Xen functions to disable all but one CPU and
> > interrupts during the update process to keep the state consistent while
> > transferring. We have evaluated our prototype, with the result that the
> > update process takes about 45ms on our test system.=20
>=20
Wow, 45ms is certainly something bearable for this kind of
operation! :-P
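
The update flow described above (quiesce down to one CPU with interrupts
disabled, transfer the state wholesale, resume on the new image) can be
sketched as a toy simulation. All names below are invented for
illustration; this is not the actual HotSwap code:

```python
class Hypervisor:
    """Stand-in for a loaded hypervisor image (hypothetical)."""
    def __init__(self, version, state=None):
        self.version = version
        self.state = dict(state or {})

def live_update(old, new_version):
    # 1. Quiesce: the real design parks all but one CPU and masks
    #    interrupts so the state cannot change during the transfer.
    # 2. Load the new image and copy the old state across wholesale.
    new = Hypervisor(new_version, state=old.state)
    # 3. Resume: execution continues on the new version from here on.
    return new

old = Hypervisor("4.2", {"domains": ["dom0", "guest1"]})
new = live_update(old, "4.2-hotswap")
assert new.state == old.state
assert new.version == "4.2-hotswap"
```

The real mechanism operates on a freshly loaded Xen image rather than a
Python object, but the quiesce/transfer/resume shape is the same.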

> This sounds pretty cool. I think everyone would be interested in hearing
> a bit more about it and in seeing the code.
>=20
I agree... this would be a really great feature to have!

So, any news? Any update? Any plan to follow Ian's suggestion and
--at least try to-- upstream it?

Thanks and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-M6cm2ic09lJ/+E04T+Oc
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDjdUACgkQk4XaBE3IOsQ1YQCfa5vVeBKWLNwgmIwQEylApbAn
Kj0AoKp/BvB4X/+yTEdgo7o5aazjtXtp
=7DRj
-----END PGP SIGNATURE-----

--=-M6cm2ic09lJ/+E04T+Oc--


--===============3275991284384673761==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3275991284384673761==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:47:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnpC-0006Wa-Nc; Tue, 18 Feb 2014 16:47:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFnpB-0006WT-7Y
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:47:13 +0000
Received: from [85.158.139.211:50237] by server-11.bemta-5.messagelabs.com id
	B6/42-23886-09E83035; Tue, 18 Feb 2014 16:47:12 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392742026!172082!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29603 invoked from network); 18 Feb 2014 16:47:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:47:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="103565721"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 16:47:05 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:47:04 -0500
Message-ID: <1392742022.32038.572.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Nate Studer <nate.studer@dornerworks.com>
Date: Tue, 18 Feb 2014 17:47:02 +0100
In-Reply-To: <5302191A.3070400@dornerworks.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<5302191A.3070400@dornerworks.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Simon Martin <furryfuttock@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3128400476845483721=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3128400476845483721==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-Xw9E/NZwkp4KRihc8L6G"

--=-Xw9E/NZwkp4KRihc8L6G
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-02-17 at 09:13 -0500, Nate Studer wrote:
> On 2/14/2014 12:21 PM, Dario Faggioli wrote:

> > All this to say that, it should be possible to get a bit more of
> > isolation, by tweaking the proper Xen code path appropriately, but if
> > the amount of interference that comes from two hyperthreads sharing
> > registers, pipeline stages, and whatever it is that they share, is
> > enough to disturb your workload, then I'm afraid we'll never get much
> > farther than the 'don't use hyperthreads' solution! :-(
>=20
> Which, as you say, unfortunately is the solution unless there is some way=
 to
> configure the hardware to eliminate this interference. =20
>
Yeah, I know!

> If it's any consolation,
> the only multi-core ARINC653 implementations I know of have enacted these=
 two
> restrictions:
> 1.  # of cores enabled =3D # of memory controllers.
> 2.  Each enabled core must be configured to not share a memory controller=
,
> cache, registers, etc...
>=20
Nice. :-)
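
The two restrictions above amount to a simple topology property: every
enabled core gets a private memory controller and a private cache. A toy
check, purely illustrative (the 'memctrl' and 'cache' field names are
invented, not taken from any real ARINC653 tooling):

```python
def satisfies_restrictions(enabled_cores):
    # enabled_cores: one dict per enabled core, naming the memory
    # controller and cache it is attached to (hypothetical schema).
    memctrls = [c["memctrl"] for c in enabled_cores]
    caches = [c["cache"] for c in enabled_cores]
    # Restrictions 1 and 2: as many memory controllers as enabled cores,
    # and no two enabled cores sharing a memory controller or a cache.
    return (len(set(memctrls)) == len(enabled_cores)
            and len(set(caches)) == len(enabled_cores))

# Two cores with fully private resources: acceptable.
assert satisfies_restrictions(
    [{"memctrl": 0, "cache": 0}, {"memctrl": 1, "cache": 1}])
# Two cores sharing one memory controller: rejected.
assert not satisfies_restrictions(
    [{"memctrl": 0, "cache": 0}, {"memctrl": 0, "cache": 1}])
```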

> It is practically an AMP system at that point, but without these restrict=
ions
> you can get some unpredictable behavior unless you have some specialized =
or
> exotic hardware to make things more deterministic.
>=20
Sure, when you really need to be serious about isolation, the threats
are there well before software (whether it's just an OS, virt, or
whatever) comes into play!

Thanks for sharing this and Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-Xw9E/NZwkp4KRihc8L6G
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDjoYACgkQk4XaBE3IOsTR0gCeKfGBCaBCwRunbEjqU9kalmZc
8sIAn2OeA1lp29zRvWIzSS8nuVlWhMS6
=f4Dm
-----END PGP SIGNATURE-----

--=-Xw9E/NZwkp4KRihc8L6G--


--===============3128400476845483721==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3128400476845483721==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:48:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnqZ-0006d0-7d; Tue, 18 Feb 2014 16:48:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WFnqW-0006cm-Ms
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:48:37 +0000
Received: from [85.158.143.35:8523] by server-1.bemta-4.messagelabs.com id
	82/95-31661-4EE83035; Tue, 18 Feb 2014 16:48:36 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392742112!6577659!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 506 invoked from network); 18 Feb 2014 16:48:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:48:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101841242"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 16:48:04 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:48:04 -0500
Message-ID: <53038EC2.8030403@citrix.com>
Date: Tue, 18 Feb 2014 17:48:02 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, xen-devel
	<xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: [Xen-devel] PVH Dom0 with latest Linux kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH 
Dom0 Xen tree (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
h=dom0pvh-v7), and got the following crash. Do you have any new Xen 
Dom0 series that I could use to test PVH Dom0?

 __  __            _  _   _  _                      _        _     _
 \ \/ /___ _ __   | || | | || |     _   _ _ __  ___| |_ __ _| |__ | | ___
  \  // _ \ '_ \  | || |_| || |_ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|__   _|__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_) |_|     \__,_|_| |_|___/\__\__,_|_.__/|_|\___|

(XEN) Xen version 4.4-unstable (root@) (FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610) debug=y Tue Feb 18 15:41:18 CET 2014
(XEN) Latest ChangeSet: Tue Feb 18 15:37:28 2014 +0100 git:f574c06-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: PXELINUX 4.02 debian-20101014
(XEN) Command line: dom0pvh=1 sync_console=true dom0_mem=1024M com1=115200,8n1 guest_loglvl=all loglvl=all console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN)  EDID info not retrieved because of reasons unknown
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000092400 (usable)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000dfdf9c00 (usable)
(XEN)  00000000dfdf9c00 - 00000000dfe4bc00 (ACPI NVS)
(XEN)  00000000dfe4bc00 - 00000000dfe4dc00 (ACPI data)
(XEN)  00000000dfe4dc00 - 00000000e0000000 (reserved)
(XEN)  00000000f8000000 - 00000000fd000000 (reserved)
(XEN)  00000000fe000000 - 00000000fed00400 (reserved)
(XEN)  00000000fee00000 - 00000000fef00000 (reserved)
(XEN)  00000000ffb00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 00000001a0000000 (usable)
(XEN) ACPI: RSDP 000FEC30, 0024 (r2 DELL  )
(XEN) ACPI: XSDT 000FCCC7, 007C (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: FACP 000FCDB7, 00F4 (r3 DELL    B10K          15 ASL        61)
(XEN) ACPI: DSDT FFE9E951, 4A74 (r1   DELL    dt_ex     1000 INTL 20050624)
(XEN) ACPI: FACS DFDF9C00, 0040
(XEN) ACPI: SSDT FFEA34D6, 009C (r1   DELL    st_ex     1000 INTL 20050624)
(XEN) ACPI: APIC 000FCEAB, 015E (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: BOOT 000FD009, 0028 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: ASF! 000FD031, 0096 (r32 DELL    B10K          15 ASL        61)
(XEN) ACPI: MCFG 000FD0C7, 003C (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: HPET 000FD103, 0038 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: TCPA 000FD35F, 0032 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: DMAR 000FD391, 00C8 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: SLIC 000FD13B, 0176 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: SSDT DFE4DC00, 15C4 (r1  INTEL PPM RCM  80000001 INTL 20061109)
(XEN) System RAM: 6141MB (6288940kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-00000001a0000000
(XEN) Domain heap initialised
(XEN) DMI 2.5 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[dfdf9c0c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x11] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x12] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x13] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x14] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x15] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x16] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x17] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x18] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x19] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x20] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
(XEN) ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: IOAPIC (id[0x09] address[0xfec80000] gsi_base[24])
(XEN) IOAPIC[1]: apic_id 9, version 32, address 0xfec80000, GSI 24-47
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 2 I/O APICs
(XEN) ACPI: HPET id: 0x8086a301 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 32 CPUs (24 hotplug CPUs)
(XEN) IRQ limits: 48 GSI, 1504 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3066.865 MHz processor.
(XEN) Initing memory sharing.
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) mwait-idle: MWAIT substates: 0x1120
(XEN) mwait-idle: v0.4 model 0x1a
(XEN) mwait-idle: lapic_timer_reliable_states 0x2
(XEN) HPET: 0 timers usable for broadcast (4 total)
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
(XEN) Brought up 8 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xb2b000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc4110
(XEN) elf_parse_binary: phdr: paddr=0x1cc5000 memsz=0x14d40
(XEN) elf_parse_binary: phdr: paddr=0x1cda000 memsz=0xdd0000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x2aaa000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81cda1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
(XEN) elf_xen_parse_note: SUPPORTED_FEATURES = 0x90d
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff82aaa000
(XEN)     virt_entry       = 0xffffffff81cda1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2aaa000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000194000000->0000000198000000 (244199 pages to be allocated)
(XEN)  Init. ramdisk: 000000019f9e7000->000000019ffff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff82aaa000
(XEN)  Init. ramdisk: ffffffff82aaa000->ffffffff830c2200
(XEN)  Phys-Mach map: ffffffff830c3000->ffffffff832c3000
(XEN)  Start info:    ffffffff832c3000->ffffffff832c44b4
(XEN)  Page tables:   ffffffff832c5000->ffffffff832e2000
(XEN)  Boot stack:    ffffffff832e2000->ffffffff832e3000
(XEN)  TOTAL:         ffffffff80000000->ffffffff83400000
(XEN)  ENTRY ADDRESS: ffffffff81cda1e0
(XEN) Dom0 has maximum 8 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2b000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc4110
(XEN) elf_load_binary: phdr 2 at 0xffffffff81cc5000 -> 0xffffffff81cd9d40
(XEN) elf_load_binary: phdr 3 at 0xffffffff81cda000 -> 0xffffffff81db3000
(XEN) Scrubbing Free RAM: ..................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) **********************************************
(XEN) ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) ******* This option is intended to aid debugging of Xen by ensuring
(XEN) ******* that all output is synchronously delivered on the serial line.
(XEN) ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) ******* timekeeping. It is NOT recommended for production use!
(XEN) **********************************************
(XEN) 3... 2... 1...
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 240kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.14.0-rc3 (root@loki) (gcc version 4.4.5 (Debian 4.4.5-8) ) #0 SMP Wed Jan 8 11:20:24 CET 2014
[    0.000000] Command line: root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug
[    0.000000] Released 110 pages of unused memory
[    0.000000] Set 131701 page(s) to 1-1 mapping
[    0.000000] Populating 40000-4006e pfn range: 110 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000091fff] usable
[    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x00000000dfdf8fff] usable
[    0.000000] Xen: [mem 0x00000000dfdf9c00-0x00000000dfe4bbff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000dfe4bc00-0x00000000dfe4dbff] ACPI data
[    0.000000] Xen: [mem 0x00000000dfe4dc00-0x00000000dfffffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fcffffff] reserved
[    0.000000] Xen: [mem 0x00000000fe000000-0x00000000fed003ff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ffb00000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000019fffffff] usable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.5 present.
[    0.000000] DMI: Dell Inc. Precision WorkStation T3500  /09KPNV, BIOS A15 03/28/2012
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x1a0000 max_arch_pfn = 0x400000000
[    0.000000] e820: last_pfn = 0xdfdf9 max_arch_pfn = 0x400000000
[    0.000000] Base memory trampoline at [ffff88000008c000] 8c000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x3fe00000-0x3fffffff]
[    0.000000]  [mem 0x3fe00000-0x3fffffff] page 4k
[    0.000000] BRK [0x02688000, 0x02688fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x3c000000-0x3fdfffff]
[    0.000000]  [mem 0x3c000000-0x3fdfffff] page 4k
[    0.000000] BRK [0x02689000, 0x02689fff] PGTABLE
[    0.000000] BRK [0x0268a000, 0x0268afff] PGTABLE
[    0.000000] BRK [0x0268b000, 0x0268bfff] PGTABLE
[    0.000000] BRK [0x0268c000, 0x0268cfff] PGTABLE
[    0.000000] BRK [0x0268d000, 0x0268dfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x3bffffff]
[    0.000000]  [mem 0x00100000-0x3bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x40000000-0xdfdf8fff]
[    0.000000]  [mem 0x40000000-0xdfdf8fff] page 4k
[    0.000000] init_memory_mapping: [mem 0x100000000-0x19fffffff]
[    0.000000]  [mem 0x100000000-0x19fffffff] page 4k
[    0.000000] RAMDISK: [mem 0x02aaa000-0x030c2fff]
[    0.000000] ACPI: RSDP 00000000000fec30 000024 (v02 DELL  )
[    0.000000] ACPI: XSDT 00000000000fccc7 00007C (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: FACP 00000000000fcdb7 0000F4 (v03 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI BIOS Warning (bug): 32/64X length mismatch in FADT/Gpe0Block: 128/64 (20131218/tbfadt-603)
[    0.000000] ACPI: DSDT 00000000ffe9e951 004A74 (v01   DELL    dt_ex 00001000 INTL 20050624)
[    0.000000] ACPI: FACS 00000000dfdf9c00 000040
[    0.000000] ACPI: SSDT 00000000ffea34d6 00009C (v01   DELL    st_ex 00001000 INTL 20050624)
[    0.000000] ACPI: APIC 00000000000fceab 00015E (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: BOOT 00000000000fd009 000028 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: ASF! 00000000000fd031 000096 (v32 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: MCFG 00000000000fd0c7 00003C (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: HPET 00000000000fd103 000038 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: TCPA 00000000000fd35f 000032 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: DMAR 00000000000fd391 0000C8 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: SLIC 00000000000fd13b 000176 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: SSDT 00000000dfe4dc00 0015C4 (v01  INTEL PPM RCM  80000001 INTL 20061109)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x19fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00091fff]
[    0.000000]   node   0: [mem 0x00100000-0xdfdf8fff]
[    0.000000]   node   0: [mem 0x100000000-0x19fffffff]
[    0.000000] On node 0 totalpages: 1572234
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3985 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 12481 pages used for memmap
[    0.000000]   DMA32 zone: 912889 pages, LIFO batch:31
[    0.000000]   Normal zone: 8960 pages used for memmap
[    0.000000]   Normal zone: 655360 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x11] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x12] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x13] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x14] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x15] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x16] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x17] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x18] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x19] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x20] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: IOAPIC (id[0x09] address[0xfec80000] gsi_base[24])
[    0.000000] IOAPIC[1]: apic_id 9, version 32, address 0xfec80000, GSI 24-47
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000
[    0.000000] smpboot: 32 Processors exceeds NR_CPUS limit of 8
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 64
[    0.000000] e820: [mem 0xe0000000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel with PVH extensions on Xen
[    0.000000] Xen version: 4.4-unstable
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88003f000000 s85312 r8192 d21184 u262144
[    0.000000] pcpu-alloc: s85312 r8192 d21184 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 1550716
[    0.000000] Kernel command line: root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[    0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.000000] software IO TLB [mem 0x34c00000-0x38c00000] (64MB) mapped at [ffff880034c00000-ffff880038bfffff]
[    0.000000] Memory: 838552K/6288936K available (6268K kernel code, 782K rwdata, 3244K rodata, 928K init, 9040K bss, 5450384K reserved)
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	CONFIG_RCU_FANOUT set to non-default value of 32
[    0.000000] 	RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:4352 nr_irqs:1152 16
[    0.000000] xen:events: Using FIFO-based ABI
[    0.000000] xen: sci override: global_irq=9 trigger=0 polarity=0
[    0.000000] xen: registering gsi 9 triggering 0 polarity 0
[    0.000000] xen: --> pirq=9 -> irq=9 (gsi=9)
[    0.000000] xen: acpi sci 9
[    0.000000] xen: --> pirq=1 -> irq=1 (gsi=1)
[    0.000000] xen: --> pirq=2 -> irq=2 (gsi=2)
[    0.000000] xen: --> pirq=3 -> irq=3 (gsi=3)
[    0.000000] xen: --> pirq=4 -> irq=4 (gsi=4)
[    0.000000] xen: --> pirq=5 -> irq=5 (gsi=5)
[    0.000000] xen: --> pirq=6 -> irq=6 (gsi=6)
[    0.000000] xen: --> pirq=7 -> irq=7 (gsi=7)
[    0.000000] xen: --> pirq=8 -> irq=8 (gsi=8)
[    0.000000] xen: --> pirq=10 -> irq=10 (gsi=10)
[    0.000000] xen: --> pirq=11 -> irq=11 (gsi=11)
[    0.000000] xen: --> pirq=12 -> irq=12 (gsi=12)
[    0.000000] xen: --> pirq=13 -> irq=13 (gsi=13)
[    0.000000] xen: --> pirq=14 -> irq=14 (gsi=14)
[    0.000000] xen: --> pirq=15 -> irq=15 (gsi=15)
(XEN) irq.c:375: Dom0 callback via changed to Direct Vector 0xf3
[    0.000000] xen:events: Xen HVM callback vector for event delivery is enabled
[    0.000000] ACPI: Core revision 20131218
(XEN) ----[ Xen-4.4-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d0801b43c0>] hvm_do_resume+0x50/0x150
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: 000000000018a852   rbx: 000000000018a852   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff8300dfb0e000
(XEN) rbp: ffff82d0802c7dd8   rsp: ffff82d0802c7dc8   r8:  ffff82d0803087d0
(XEN) r9:  0000000000000005   r10: ffff82d080304890   r11: 0000000000000000
(XEN) r12: 0000000182a5ea8d   r13: ffff82d0803087d0   r14: ffff8300dfb0e000
(XEN) r15: ffff8300dfb0e000   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000019f96e000   cr2: 000000000018a870
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802c7dc8:
(XEN)    ffff82d0803093e0 ffff830199a1e000 ffff82d0802c7e08 ffff82d0801d63c8
(XEN)    0000000000000000 ffff8300dfb0e000 ffff8300dfb0e000 0000000001c9c380
(XEN)    ffff82d0802c7e18 ffff82d08015fbaa ffff82d0802c7ec8 ffff82d0801254fb
(XEN)    ffff82d080308880 000000003fd870ec 0000000182a5ea8d ffff8300dfb0e000
(XEN)    ffff82d0803087b8 0000000000000001 ffff82d0802c7e88 ffff82d080158ea9
(XEN)    0000000000000000 ffff82d080308900 ffff82d080308880 0000000182e2fa0e
(XEN)    ffff8300dfb0e000 0000000001c9c380 0000000000000000 ffff82d0802dfe00
(XEN)    0000000000000002 ffff82d0802dfe00 ffffffffffffffff 0000000000000001
(XEN)    ffff82d0802c7f08 ffff82d080126726 0000000000000001 ffff8300dfb0e000
(XEN)    ffffffff81d6a900 ffffffff81d730a0 0000000000000059 0000000000000000
(XEN)    ffffffff81c01e78 ffff82d0801e18da 0000000000000000 0000000000000059
(XEN)    ffffffff81d730a0 ffffffff81d6a900 ffffffff81c01e78 ffff88003480f400
(XEN)    0000000000000000 0000000000000000 000000000000001d 0000000000000000
(XEN)    ffffc900000020e1 ffffc90000000951 ffffffff81c13490 ffffc900000053c5
(XEN)    ffffc90000000951 000000fa0000beef ffffffff812b8380 000000bf0000beef
(XEN)    0000000000000006 ffffffff81c01e78 000000000000beef 000000000000beef
(XEN)    000000000000beef 000000000000beef 000000000000beef 0000000000000000
(XEN)    ffff8300dfb0e000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d0801b43c0>] hvm_do_resume+0x50/0x150
(XEN)    [<ffff82d0801d63c8>] vmx_do_resume+0x118/0x150
(XEN)    [<ffff82d08015fbaa>] continue_running+0xa/0x10
(XEN)    [<ffff82d0801254fb>] schedule+0x22b/0x310
(XEN)    [<ffff82d080126726>] __do_softirq+0x46/0xa0
(XEN)    [<ffff82d0801e18da>] vmx_asm_do_vmentry+0x2a/0x50
(XEN)
(XEN) Pagetable walk from 000000000018a870:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 000000000018a870
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:48:40 2014
Message-ID: <53038EC2.8030403@citrix.com>
Date: Tue, 18 Feb 2014 17:48:02 +0100
From: Roger Pau Monné <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>, xen-devel
	<xen-devel@lists.xen.org>
Subject: [Xen-devel] PVH Dom0 with latest Linux kernels

Hello,

I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH Dom0 Xen tree (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;h=dom0pvh-v7), and got the following crash. Do you have a newer PVH Dom0 series that I could use for testing?
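For reference, the boot setup in the log below (PXELINUX 4.02 loading Xen via a multiboot module, per the "Bootloader:" line) would look roughly like the following pxelinux.cfg sketch. The file names (xen.gz, vmlinuz-3.14.0-rc3, initrd.img) are placeholders I've assumed; the Xen and Dom0 command lines are copied verbatim from the log:

```
# Hypothetical pxelinux.cfg entry -- file names are assumptions,
# command lines taken from the boot log. mboot.c32 chain-loads the
# hypervisor, kernel, and ramdisk, separated by "---".
LABEL xen-pvh
  KERNEL mboot.c32
  APPEND xen.gz dom0pvh=1 sync_console=true dom0_mem=1024M com1=115200,8n1 guest_loglvl=all loglvl=all console=com1 --- vmlinuz-3.14.0-rc3 root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug --- initrd.img
```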

 __  __            _  _   _  _                      _        _     _
 \ \/ /___ _ __   | || | | || |     _   _ _ __  ___| |_ __ _| |__ | | ___
  \  // _ \ '_ \  | || |_| || |_ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|__   _|__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_) |_|     \__,_|_| |_|___/\__\__,_|_.__/|_|\___|

(XEN) Xen version 4.4-unstable (root@) (FreeBSD clang version 3.3 (tags/RELEASE_33/final 183502) 20130610) debug=y Tue Feb 18 15:41:18 CET 2014
(XEN) Latest ChangeSet: Tue Feb 18 15:37:28 2014 +0100 git:f574c06-dirty
(XEN) Console output is synchronous.
(XEN) Bootloader: PXELINUX 4.02 debian-20101014
(XEN) Command line: dom0pvh=1 sync_console=true dom0_mem=1024M com1=115200,8n1 guest_loglvl=all loglvl=all console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN)  EDID info not retrieved because of reasons unknown
(XEN) Disc information:
(XEN)  Found 2 MBR signatures
(XEN)  Found 2 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 0000000000092400 (usable)
(XEN)  00000000000f0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 00000000dfdf9c00 (usable)
(XEN)  00000000dfdf9c00 - 00000000dfe4bc00 (ACPI NVS)
(XEN)  00000000dfe4bc00 - 00000000dfe4dc00 (ACPI data)
(XEN)  00000000dfe4dc00 - 00000000e0000000 (reserved)
(XEN)  00000000f8000000 - 00000000fd000000 (reserved)
(XEN)  00000000fe000000 - 00000000fed00400 (reserved)
(XEN)  00000000fee00000 - 00000000fef00000 (reserved)
(XEN)  00000000ffb00000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 00000001a0000000 (usable)
(XEN) ACPI: RSDP 000FEC30, 0024 (r2 DELL  )
(XEN) ACPI: XSDT 000FCCC7, 007C (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: FACP 000FCDB7, 00F4 (r3 DELL    B10K          15 ASL        61)
(XEN) ACPI: DSDT FFE9E951, 4A74 (r1   DELL    dt_ex     1000 INTL 20050624)
(XEN) ACPI: FACS DFDF9C00, 0040
(XEN) ACPI: SSDT FFEA34D6, 009C (r1   DELL    st_ex     1000 INTL 20050624)
(XEN) ACPI: APIC 000FCEAB, 015E (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: BOOT 000FD009, 0028 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: ASF! 000FD031, 0096 (r32 DELL    B10K          15 ASL        61)
(XEN) ACPI: MCFG 000FD0C7, 003C (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: HPET 000FD103, 0038 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: TCPA 000FD35F, 0032 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: DMAR 000FD391, 00C8 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: SLIC 000FD13B, 0176 (r1 DELL    B10K          15 ASL        61)
(XEN) ACPI: SSDT DFE4DC00, 15C4 (r1  INTEL PPM RCM  80000001 INTL 20061109)
(XEN) System RAM: 6141MB (6288940kB)
(XEN) No NUMA configuration found
(XEN) Faking a node at 0000000000000000-00000001a0000000
(XEN) Domain heap initialised
(XEN) DMI 2.5 present.
(XEN) Using APIC driver default
(XEN) ACPI: PM-Timer IO Port: 0x808
(XEN) ACPI: SLEEP INFO: pm1x_cnt[804,0], pm1x_evt[800,0]
(XEN) ACPI:             wakeup_vec[dfdf9c0c], vec_size[20]
(XEN) ACPI: Local APIC address 0xfee00000
(XEN) ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
(XEN) Processor #0 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
(XEN) Processor #2 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
(XEN) Processor #4 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
(XEN) Processor #6 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
(XEN) Processor #1 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
(XEN) Processor #3 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
(XEN) Processor #5 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
(XEN) Processor #7 7:10 APIC version 21
(XEN) ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x10] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x11] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x12] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x13] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x14] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x15] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x16] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x17] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x18] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x19] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC (acpi_id[0x20] lapic_id[0x00] disabled)
(XEN) ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
(XEN) ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
(XEN) IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
(XEN) ACPI: IOAPIC (id[0x09] address[0xfec80000] gsi_base[24])
(XEN) IOAPIC[1]: apic_id 9, version 32, address 0xfec80000, GSI 24-47
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
(XEN) ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
(XEN) ACPI: IRQ0 used by override.
(XEN) ACPI: IRQ2 used by override.
(XEN) ACPI: IRQ9 used by override.
(XEN) Enabling APIC mode:  Flat.  Using 2 I/O APICs
(XEN) ACPI: HPET id: 0x8086a301 base: 0xfed00000
(XEN) ERST table was not found
(XEN) Using ACPI (MADT) for SMP configuration information
(XEN) SMP: Allowing 32 CPUs (24 hotplug CPUs)
(XEN) IRQ limits: 48 GSI, 1504 MSI/MSI-X
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 3066.865 MHz processor.
(XEN) Initing memory sharing.
(XEN) mce_intel.c:717: MCA Capability: BCAST 1 SER 0 CMCI 1 firstbank 0 extended MCE MSR 0
(XEN) Intel machine check reporting enabled
(XEN) PCI: MCFG configuration 0: base f8000000 segment 0000 buses 00 - 3f
(XEN) PCI: MCFG area at f8000000 reserved in E820
(XEN) PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) Intel VT-d iommu 0 supported page sizes: 4kB.
(XEN) Intel VT-d Snoop Control enabled.
(XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
(XEN) Intel VT-d Queued Invalidation enabled.
(XEN) Intel VT-d Interrupt Remapping enabled.
(XEN) Intel VT-d Shared EPT tables not enabled.
(XEN) I/O virtualisation enabled
(XEN)  - Dom0 mode: Relaxed
(XEN) Interrupt remapping enabled
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) Platform timer is 14.318MHz HPET
(XEN) Allocated console ring of 64 KiB.
(XEN) mwait-idle: MWAIT substates: 0x1120
(XEN) mwait-idle: v0.4 model 0x1a
(XEN) mwait-idle: lapic_timer_reliable_states 0x2
(XEN) HPET: 0 timers usable for broadcast (4 total)
(XEN) VMX: Supported advanced features:
(XEN)  - APIC MMIO access virtualisation
(XEN)  - APIC TPR shadow
(XEN)  - Extended Page Tables (EPT)
(XEN)  - Virtual-Processor Identifiers (VPID)
(XEN)  - Virtual NMI
(XEN)  - MSR direct-access bitmap
(XEN) HVM: ASIDs enabled.
(XEN) HVM: VMX enabled
(XEN) HVM: Hardware Assisted Paging (HAP) detected
(XEN) HVM: HAP page sizes: 4kB, 2MB
(XEN) Brought up 8 CPUs
(XEN) ACPI sleep modes: S3
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) *** LOADING DOMAIN 0 ***
(XEN) elf_parse_binary: phdr: paddr=0x1000000 memsz=0xb2b000
(XEN) elf_parse_binary: phdr: paddr=0x1c00000 memsz=0xc4110
(XEN) elf_parse_binary: phdr: paddr=0x1cc5000 memsz=0x14d40
(XEN) elf_parse_binary: phdr: paddr=0x1cda000 memsz=0xdd0000
(XEN) elf_parse_binary: memory: 0x1000000 -> 0x2aaa000
(XEN) elf_xen_parse_note: GUEST_OS = "linux"
(XEN) elf_xen_parse_note: GUEST_VERSION = "2.6"
(XEN) elf_xen_parse_note: XEN_VERSION = "xen-3.0"
(XEN) elf_xen_parse_note: VIRT_BASE = 0xffffffff80000000
(XEN) elf_xen_parse_note: ENTRY = 0xffffffff81cda1e0
(XEN) elf_xen_parse_note: HYPERCALL_PAGE = 0xffffffff81001000
(XEN) elf_xen_parse_note: FEATURES = "!writable_page_tables|pae_pgdir_above_4gb|writable_descriptor_tables|auto_translated_physmap|supervisor_mode_kernel"
(XEN) elf_xen_parse_note: SUPPORTED_FEATURES = 0x90d
(XEN) elf_xen_parse_note: PAE_MODE = "yes"
(XEN) elf_xen_parse_note: LOADER = "generic"
(XEN) elf_xen_parse_note: unknown xen elf note (0xd)
(XEN) elf_xen_parse_note: SUSPEND_CANCEL = 0x1
(XEN) elf_xen_parse_note: HV_START_LOW = 0xffff800000000000
(XEN) elf_xen_parse_note: PADDR_OFFSET = 0x0
(XEN) elf_xen_addr_calc_check: addresses:
(XEN)     virt_base        = 0xffffffff80000000
(XEN)     elf_paddr_offset = 0x0
(XEN)     virt_offset      = 0xffffffff80000000
(XEN)     virt_kstart      = 0xffffffff81000000
(XEN)     virt_kend        = 0xffffffff82aaa000
(XEN)     virt_entry       = 0xffffffff81cda1e0
(XEN)     p2m_base         = 0xffffffffffffffff
(XEN)  Xen  kernel: 64-bit, lsb, compat32
(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x2aaa000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.:   0000000194000000->0000000198000000 (244199 pages to be allocated)
(XEN)  Init. ramdisk: 000000019f9e7000->000000019ffff200
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: ffffffff81000000->ffffffff82aaa000
(XEN)  Init. ramdisk: ffffffff82aaa000->ffffffff830c2200
(XEN)  Phys-Mach map: ffffffff830c3000->ffffffff832c3000
(XEN)  Start info:    ffffffff832c3000->ffffffff832c44b4
(XEN)  Page tables:   ffffffff832c5000->ffffffff832e2000
(XEN)  Boot stack:    ffffffff832e2000->ffffffff832e3000
(XEN)  TOTAL:         ffffffff80000000->ffffffff83400000
(XEN)  ENTRY ADDRESS: ffffffff81cda1e0
(XEN) Dom0 has maximum 8 VCPUs
(XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2b000
(XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81cc4110
(XEN) elf_load_binary: phdr 2 at 0xffffffff81cc5000 -> 0xffffffff81cd9d40
(XEN) elf_load_binary: phdr 3 at 0xffffffff81cda000 -> 0xffffffff81db3000
(XEN) Scrubbing Free RAM: ..................................................done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) **********************************************
(XEN) ******* WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) ******* This option is intended to aid debugging of Xen by ensuring
(XEN) ******* that all output is synchronously delivered on the serial line.
(XEN) ******* However it can introduce SIGNIFICANT latencies and affect
(XEN) ******* timekeeping. It is NOT recommended for production use!
(XEN) **********************************************
(XEN) 3... 2... 1...
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 240kB init memory.
mapping kernel into physical memory
about to get started...
[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Initializing cgroup subsys cpuacct
[    0.000000] Linux version 3.14.0-rc3 (root@loki) (gcc version 4.4.5 (Debian 4.4.5-8) ) #0 SMP Wed Jan 8 11:20:24 CET 2014
[    0.000000] Command line: root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug
[    0.000000] Released 110 pages of unused memory
[    0.000000] Set 131701 page(s) to 1-1 mapping
[    0.000000] Populating 40000-4006e pfn range: 110 pages added
[    0.000000] e820: BIOS-provided physical RAM map:
[    0.000000] Xen: [mem 0x0000000000000000-0x0000000000091fff] usable
[    0.000000] Xen: [mem 0x00000000000a0000-0x00000000000fffff] reserved
[    0.000000] Xen: [mem 0x0000000000100000-0x00000000dfdf8fff] usable
[    0.000000] Xen: [mem 0x00000000dfdf9c00-0x00000000dfe4bbff] ACPI NVS
[    0.000000] Xen: [mem 0x00000000dfe4bc00-0x00000000dfe4dbff] ACPI data
[    0.000000] Xen: [mem 0x00000000dfe4dc00-0x00000000dfffffff] reserved
[    0.000000] Xen: [mem 0x00000000f8000000-0x00000000fcffffff] reserved
[    0.000000] Xen: [mem 0x00000000fe000000-0x00000000fed003ff] reserved
[    0.000000] Xen: [mem 0x00000000fee00000-0x00000000feefffff] reserved
[    0.000000] Xen: [mem 0x00000000ffb00000-0x00000000ffffffff] reserved
[    0.000000] Xen: [mem 0x0000000100000000-0x000000019fffffff] usable
[    0.000000] bootconsole [xenboot0] enabled
[    0.000000] NX (Execute Disable) protection: active
[    0.000000] SMBIOS 2.5 present.
[    0.000000] DMI: Dell Inc. Precision WorkStation T3500  /09KPNV, BIOS A15 03/28/2012
[    0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
[    0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
[    0.000000] No AGP bridge found
[    0.000000] e820: last_pfn = 0x1a0000 max_arch_pfn = 0x400000000
[    0.000000] e820: last_pfn = 0xdfdf9 max_arch_pfn = 0x400000000
[    0.000000] Base memory trampoline at [ffff88000008c000] 8c000 size 24576
[    0.000000] init_memory_mapping: [mem 0x00000000-0x000fffff]
[    0.000000]  [mem 0x00000000-0x000fffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x3fe00000-0x3fffffff]
[    0.000000]  [mem 0x3fe00000-0x3fffffff] page 4k
[    0.000000] BRK [0x02688000, 0x02688fff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x3c000000-0x3fdfffff]
[    0.000000]  [mem 0x3c000000-0x3fdfffff] page 4k
[    0.000000] BRK [0x02689000, 0x02689fff] PGTABLE
[    0.000000] BRK [0x0268a000, 0x0268afff] PGTABLE
[    0.000000] BRK [0x0268b000, 0x0268bfff] PGTABLE
[    0.000000] BRK [0x0268c000, 0x0268cfff] PGTABLE
[    0.000000] BRK [0x0268d000, 0x0268dfff] PGTABLE
[    0.000000] init_memory_mapping: [mem 0x00100000-0x3bffffff]
[    0.000000]  [mem 0x00100000-0x3bffffff] page 4k
[    0.000000] init_memory_mapping: [mem 0x40000000-0xdfdf8fff]
[    0.000000]  [mem 0x40000000-0xdfdf8fff] page 4k
[    0.000000] init_memory_mapping: [mem 0x100000000-0x19fffffff]
[    0.000000]  [mem 0x100000000-0x19fffffff] page 4k
[    0.000000] RAMDISK: [mem 0x02aaa000-0x030c2fff]
[    0.000000] ACPI: RSDP 00000000000fec30 000024 (v02 DELL  )
[    0.000000] ACPI: XSDT 00000000000fccc7 00007C (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: FACP 00000000000fcdb7 0000F4 (v03 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI BIOS Warning (bug): 32/64X length mismatch in FADT/Gpe0Block: 128/64 (20131218/tbfadt-603)
[    0.000000] ACPI: DSDT 00000000ffe9e951 004A74 (v01   DELL    dt_ex 00001000 INTL 20050624)
[    0.000000] ACPI: FACS 00000000dfdf9c00 000040
[    0.000000] ACPI: SSDT 00000000ffea34d6 00009C (v01   DELL    st_ex 00001000 INTL 20050624)
[    0.000000] ACPI: APIC 00000000000fceab 00015E (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: BOOT 00000000000fd009 000028 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: ASF! 00000000000fd031 000096 (v32 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: MCFG 00000000000fd0c7 00003C (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: HPET 00000000000fd103 000038 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: TCPA 00000000000fd35f 000032 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: DMAR 00000000000fd391 0000C8 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: SLIC 00000000000fd13b 000176 (v01 DELL    B10K    00000015 ASL  00000061)
[    0.000000] ACPI: SSDT 00000000dfe4dc00 0015C4 (v01  INTEL PPM RCM  80000001 INTL 20061109)
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] Zone ranges:
[    0.000000]   DMA      [mem 0x00001000-0x00ffffff]
[    0.000000]   DMA32    [mem 0x01000000-0xffffffff]
[    0.000000]   Normal   [mem 0x100000000-0x19fffffff]
[    0.000000] Movable zone start for each node
[    0.000000] Early memory node ranges
[    0.000000]   node   0: [mem 0x00001000-0x00091fff]
[    0.000000]   node   0: [mem 0x00100000-0xdfdf8fff]
[    0.000000]   node   0: [mem 0x100000000-0x19fffffff]
[    0.000000] On node 0 totalpages: 1572234
[    0.000000]   DMA zone: 56 pages used for memmap
[    0.000000]   DMA zone: 21 pages reserved
[    0.000000]   DMA zone: 3985 pages, LIFO batch:0
[    0.000000]   DMA32 zone: 12481 pages used for memmap
[    0.000000]   DMA32 zone: 912889 pages, LIFO batch:31
[    0.000000]   Normal zone: 8960 pages used for memmap
[    0.000000]   Normal zone: 655360 pages, LIFO batch:31
[    0.000000] ACPI: PM-Timer IO Port: 0x808
[    0.000000] ACPI: Local APIC address 0xfee00000
[    0.000000] ACPI: LAPIC (acpi_id[0x01] lapic_id[0x00] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x03] lapic_id[0x04] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x04] lapic_id[0x06] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x05] lapic_id[0x01] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x06] lapic_id[0x03] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x07] lapic_id[0x05] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x08] lapic_id[0x07] enabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x09] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0e] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x0f] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x10] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x11] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x12] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x13] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x14] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x15] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x16] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x17] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x18] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x19] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1a] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1b] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1c] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1d] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1e] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x1f] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC (acpi_id[0x20] lapic_id[0x00] disabled)
[    0.000000] ACPI: LAPIC_NMI (acpi_id[0xff] high level lint[0x1])
[    0.000000] ACPI: IOAPIC (id[0x08] address[0xfec00000] gsi_base[0])
[    0.000000] IOAPIC[0]: apic_id 8, version 32, address 0xfec00000, GSI 0-23
[    0.000000] ACPI: IOAPIC (id[0x09] address[0xfec80000] gsi_base[24])
[    0.000000] IOAPIC[1]: apic_id 9, version 32, address 0xfec80000, GSI 24-47
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
[    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
[    0.000000] ACPI: IRQ0 used by override.
[    0.000000] ACPI: IRQ2 used by override.
[    0.000000] ACPI: IRQ9 used by override.
[    0.000000] Using ACPI (MADT) for SMP configuration information
[    0.000000] ACPI: HPET id: 0x8086a301 base: 0xfed00000
[    0.000000] smpboot: 32 Processors exceeds NR_CPUS limit of 8
[    0.000000] smpboot: Allowing 8 CPUs, 0 hotplug CPUs
[    0.000000] nr_irqs_gsi: 64
[    0.000000] e820: [mem 0xe0000000-0xf7ffffff] available for PCI devices
[    0.000000] Booting paravirtualized kernel with PVH extensions on Xen
[    0.000000] Xen version: 4.4-unstable
[    0.000000] setup_percpu: NR_CPUS:8 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
[    0.000000] PERCPU: Embedded 28 pages/cpu @ffff88003f000000 s85312 r8192 d21184 u262144
[    0.000000] pcpu-alloc: s85312 r8192 d21184 u262144 alloc=1*2097152
[    0.000000] pcpu-alloc: [0] 0 1 2 3 4 5 6 7
[    0.000000] Built 1 zonelists in Zone order, mobility grouping on.  Total pages: 1550716
[    0.000000] Kernel command line: root=/dev/sda1 ro ramdisk_size=1024000 earlyprintk=xenboot loglevel=9 console=hvc0 debug
[    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
[    0.000000] Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
[    0.000000] Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
[    0.000000] software IO TLB [mem 0x34c00000-0x38c00000] (64MB) mapped at [ffff880034c00000-ffff880038bfffff]
[    0.000000] Memory: 838552K/6288936K available (6268K kernel code, 782K rwdata, 3244K rodata, 928K init, 9040K bss, 5450384K reserved)
[    0.000000] Hierarchical RCU implementation.
[    0.000000] 	CONFIG_RCU_FANOUT set to non-default value of 32
[    0.000000] 	RCU dyntick-idle grace-period acceleration is enabled.
[    0.000000] NR_IRQS:4352 nr_irqs:1152 16
[    0.000000] xen:events: Using FIFO-based ABI
[    0.000000] xen: sci override: global_irq=9 trigger=0 polarity=0
[    0.000000] xen: registering gsi 9 triggering 0 polarity 0
[    0.000000] xen: --> pirq=9 -> irq=9 (gsi=9)
[    0.000000] xen: acpi sci 9
[    0.000000] xen: --> pirq=1 -> irq=1 (gsi=1)
[    0.000000] xen: --> pirq=2 -> irq=2 (gsi=2)
[    0.000000] xen: --> pirq=3 -> irq=3 (gsi=3)
[    0.000000] xen: --> pirq=4 -> irq=4 (gsi=4)
[    0.000000] xen: --> pirq=5 -> irq=5 (gsi=5)
[    0.000000] xen: --> pirq=6 -> irq=6 (gsi=6)
[    0.000000] xen: --> pirq=7 -> irq=7 (gsi=7)
[    0.000000] xen: --> pirq=8 -> irq=8 (gsi=8)
[    0.000000] xen: --> pirq=10 -> irq=10 (gsi=10)
[    0.000000] xen: --> pirq=11 -> irq=11 (gsi=11)
[    0.000000] xen: --> pirq=12 -> irq=12 (gsi=12)
[    0.000000] xen: --> pirq=13 -> irq=13 (gsi=13)
[    0.000000] xen: --> pirq=14 -> irq=14 (gsi=14)
[    0.000000] xen: --> pirq=15 -> irq=15 (gsi=15)
(XEN) irq.c:375: Dom0 callback via changed to Direct Vector 0xf3
[    0.000000] xen:events: Xen HVM callback vector for event delivery is enabled
[    0.000000] ACPI: Core revision 20131218
(XEN) ----[ Xen-4.4-unstable  x86_64  debug=y  Tainted:    C ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d0801b43c0>] hvm_do_resume+0x50/0x150
(XEN) RFLAGS: 0000000000010202   CONTEXT: hypervisor
(XEN) rax: 000000000018a852   rbx: 000000000018a852   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: ffff8300dfb0e000
(XEN) rbp: ffff82d0802c7dd8   rsp: ffff82d0802c7dc8   r8:  ffff82d0803087d0
(XEN) r9:  0000000000000005   r10: ffff82d080304890   r11: 0000000000000000
(XEN) r12: 0000000182a5ea8d   r13: ffff82d0803087d0   r14: ffff8300dfb0e000
(XEN) r15: ffff8300dfb0e000   cr0: 0000000080050033   cr4: 00000000000026f0
(XEN) cr3: 000000019f96e000   cr2: 000000000018a870
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen stack trace from rsp=ffff82d0802c7dc8:
(XEN)    ffff82d0803093e0 ffff830199a1e000 ffff82d0802c7e08 ffff82d0801d63c8
(XEN)    0000000000000000 ffff8300dfb0e000 ffff8300dfb0e000 0000000001c9c380
(XEN)    ffff82d0802c7e18 ffff82d08015fbaa ffff82d0802c7ec8 ffff82d0801254fb
(XEN)    ffff82d080308880 000000003fd870ec 0000000182a5ea8d ffff8300dfb0e000
(XEN)    ffff82d0803087b8 0000000000000001 ffff82d0802c7e88 ffff82d080158ea9
(XEN)    0000000000000000 ffff82d080308900 ffff82d080308880 0000000182e2fa0e
(XEN)    ffff8300dfb0e000 0000000001c9c380 0000000000000000 ffff82d0802dfe00
(XEN)    0000000000000002 ffff82d0802dfe00 ffffffffffffffff 0000000000000001
(XEN)    ffff82d0802c7f08 ffff82d080126726 0000000000000001 ffff8300dfb0e000
(XEN)    ffffffff81d6a900 ffffffff81d730a0 0000000000000059 0000000000000000
(XEN)    ffffffff81c01e78 ffff82d0801e18da 0000000000000000 0000000000000059
(XEN)    ffffffff81d730a0 ffffffff81d6a900 ffffffff81c01e78 ffff88003480f400
(XEN)    0000000000000000 0000000000000000 000000000000001d 0000000000000000
(XEN)    ffffc900000020e1 ffffc90000000951 ffffffff81c13490 ffffc900000053c5
(XEN)    ffffc90000000951 000000fa0000beef ffffffff812b8380 000000bf0000beef
(XEN)    0000000000000006 ffffffff81c01e78 000000000000beef 000000000000beef
(XEN)    000000000000beef 000000000000beef 000000000000beef 0000000000000000
(XEN)    ffff8300dfb0e000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d0801b43c0>] hvm_do_resume+0x50/0x150
(XEN)    [<ffff82d0801d63c8>] vmx_do_resume+0x118/0x150
(XEN)    [<ffff82d08015fbaa>] continue_running+0xa/0x10
(XEN)    [<ffff82d0801254fb>] schedule+0x22b/0x310
(XEN)    [<ffff82d080126726>] __do_softirq+0x46/0xa0
(XEN)    [<ffff82d0801e18da>] vmx_asm_do_vmentry+0x2a/0x50
(XEN)
(XEN) Pagetable walk from 000000000018a870:
(XEN)  L4[0x000] = 0000000000000000 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0000]
(XEN) Faulting linear address: 000000000018a870
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:55:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:55:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnx7-0006v7-Ae; Tue, 18 Feb 2014 16:55:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFnx5-0006v2-PC
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 16:55:23 +0000
Received: from [85.158.139.211:11929] by server-11.bemta-5.messagelabs.com id
	A6/50-23886-B7093035; Tue, 18 Feb 2014 16:55:23 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392742520!4724340!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14702 invoked from network); 18 Feb 2014 16:55:22 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 16:55:22 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IGtEZH007169
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 16:55:15 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGtBRv026576
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Tue, 18 Feb 2014 16:55:12 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IGtBM1010943; Tue, 18 Feb 2014 16:55:11 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 08:55:11 -0800
Message-ID: <530390D2.6020000@oracle.com>
Date: Tue, 18 Feb 2014 11:56:50 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
	<52FCA2E0020000780011BF52@nat28.tlf.novell.com>
	<53034122020000780011D2CC@nat28.tlf.novell.com>
	<530380ED.4050708@oracle.com>
	<53039B61020000780011D68B@nat28.tlf.novell.com>
In-Reply-To: <53039B61020000780011D68B@nat28.tlf.novell.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 11:41 AM, Jan Beulich wrote:
>>>> On 18.02.14 at 16:49, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 02/18/2014 05:16 AM, Jan Beulich wrote:
>>>>>> On 13.02.14 at 10:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>>> This test is already performed a couple of lines above.
>>>> Except that it's the wrong code you remove:
>>> No opinion on this alternative at all?
>> Sorry Jan, I didn't realize you were waiting for me on this.
>>
>> Yes, your version is fine although to be honest I don't see how the
>> original patch had any issues with division by zero since we'd still be
>> inside the 'if (stride)' clause.
> It's the very division that this patch removes:
>
>>>> --- a/xen/arch/x86/msi.c
>>>> +++ b/xen/arch/x86/msi.c
>>>> @@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
>>>>                return 0;
>>>>            base = pos + PCI_SRIOV_BAR;
>>>>            vf -= PCI_BDF(bus, slot, func) + offset;
>>>> -        if ( vf < 0 || (vf && vf % stride) )
>>>> +        if ( vf < 0 )
>>>>                return 0;
>>>>            if ( stride )
>>>>            {
> Which isn't inside the if(stride).


Yes, I see it now. I was staring at a wrong line.

This actually now looks like a bug. You do check above for '(num_vf > 1 
&& !stride) ' but presumably if things are really messed up num_vf can 
be 1 but vf is 0. And then if stride is zero too then we are not doing 
particularly well.

So probably this should go into 4.4 as well?

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 16:56:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnxg-0006xN-P2; Tue, 18 Feb 2014 16:56:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFnxf-0006x7-4F
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:55:59 +0000
Received: from [85.158.137.68:50505] by server-1.bemta-3.messagelabs.com id
	53/27-17293-E9093035; Tue, 18 Feb 2014 16:55:58 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392742555!2694792!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31702 invoked from network); 18 Feb 2014 16:55:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:55:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="101845607"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 16:55:52 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:55:51 -0500
Message-ID: <1392742549.32038.580.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Tue, 18 Feb 2014 17:55:49 +0100
In-Reply-To: <752791084.20140217124616@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4571835881833689328=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4571835881833689328==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-h+Z7qroTDMvlfRFoksZV"

--=-h+Z7qroTDMvlfRFoksZV
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-02-17 at 12:46 +0000, Simon Martin wrote:
> Hyperthreading is just a way to improve CPU resource utilization. Even
> if you are doing a CPU intensive operation, a lot of the processor
> circuits are actually idle, so adding 2 pipelines to feed one
> processor is a good way to improve total throughput, but it does have
> its caveats. I totally forgot this.
>
> Given the way that this works there isn't much that Xen can do. It is
> a physical restriction.
>
I know, and my point was not that we should try to "fix" in Xen what's
impossible to fix, because it's just how the hardware works... and that
is by design!

I was only saying that, in cases where you still need isolation but can
afford a bit more uncertainty, there is the possibility for Xen to at
least try to do the right thing automagically.

Basically, I'm ok with people looking for hard real-time response times
and low jitter having to pick the proper hw platform as a first step,
and properly fine tune it. For more _soft_ real-time workloads, I wish
we were (and think we should be) able to do better, perhaps still with
some user-required tweaks, but nothing equally intrusive, that's it. :-)

> OK. This is my current configuration:
>
> Dom0     PCPU 0,1,2   no pinning
> win7x64  PCPU   1,2   pinned
> pv499    PCPU       3 pinned
>
> And I get the same interdependence.
>
> root@smartin-xen:~# xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  1253     3     r-----      37.1
> win7x64                                      4  2046     2     ------     106.8
> pv499                                        5   128     1     r-----      66.8
> root@smartin-xen:~# xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    1   -b-      21.2  all
> Domain-0                             0     1    2   r--       8.8  all
> Domain-0                             0     2    0   -b-       8.2  all
> win7x64                              4     0    1   -b-      59.8  1
> win7x64                              4     1    2   -b-      49.0  2
> pv499                                5     0    3   r--      70.2  3
> root@smartin-xen:~# xl cpupool-list -c
> Name               CPU list
> Pool-0             0,1,2
> pv499              3
>
> I have gone back to my working settings.
>
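[Archive editor's note] For readers who want to reproduce a pinning layout like the one quoted above, a sketch using `xl vcpu-pin`. This assumes a running Xen host with domains named as in the quoted output (win7x64, pv499); the cpupool setup for pv499 is omitted.

```shell
# Sketch only: recreate the quoted vcpu pinning.
# Domain names are taken from the quoted "xl" output above.
xl vcpu-pin win7x64 0 1   # win7x64 vcpu 0 -> pcpu 1
xl vcpu-pin win7x64 1 2   # win7x64 vcpu 1 -> pcpu 2
xl vcpu-pin pv499   0 3   # pv499   vcpu 0 -> pcpu 3
xl vcpu-list              # verify the resulting affinities
```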
Ok, thanks a lot for trying. I was hoping that tweak, although developed
for completely different reasons, would be helpful in this case, but it
appears it is not.

Probably, I wouldn't have pinned the two win7 vcpus either, but anyway,
I don't want to eat any more of your time... Glad you found a
configuration that is working out! :-)

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-h+Z7qroTDMvlfRFoksZV
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDkJUACgkQk4XaBE3IOsQFGACeKYce6icMxBcX2x00VvYScNAA
d8gAn3fk18FzppSVgGOeYBd5uv+7kYLY
=hX3O
-----END PGP SIGNATURE-----

--=-h+Z7qroTDMvlfRFoksZV--


--===============4571835881833689328==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4571835881833689328==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:56:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFnxg-0006xN-P2; Tue, 18 Feb 2014 16:56:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFnxf-0006x7-4F
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 16:55:59 +0000
Received: from [85.158.137.68:50505] by server-1.bemta-3.messagelabs.com id
	53/27-17293-E9093035; Tue, 18 Feb 2014 16:55:58 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392742555!2694792!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31702 invoked from network); 18 Feb 2014 16:55:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:55:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="101845607"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 16:55:52 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 11:55:51 -0500
Message-ID: <1392742549.32038.580.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Simon Martin <furryfuttock@gmail.com>
Date: Tue, 18 Feb 2014 17:55:49 +0100
In-Reply-To: <752791084.20140217124616@gmail.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4571835881833689328=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4571835881833689328==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-h+Z7qroTDMvlfRFoksZV"

--=-h+Z7qroTDMvlfRFoksZV
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On lun, 2014-02-17 at 12:46 +0000, Simon Martin wrote:
> Hyperthreading is just a way to improve CPU resource utilization. Even
> if you are doing a CPU-intensive operation, many of the processor's
> circuits are actually idle, so adding two pipelines to feed one
> processor is a good way to improve total throughput, but it does have
> its caveats. I had totally forgotten this.
>
> Given the way this works there isn't much that Xen can do. It is
> a physical restriction.
>
I know, and my point was not that we should try to "fix" in Xen what's
impossible to fix, because it's just how the hardware works... and that is
by design!

I was only saying that, in cases where you still need isolation but can
afford a bit more uncertainty, there is the possibility for Xen to at
least try to do the right thing automagically.

Basically, I'm fine with people looking for hard real-time response times
and jitter having to pick the proper hw platform, as a first step,
and properly fine tune it. For more _soft_ real-time workloads, I wish
we were (and think we should be) able to do better, perhaps still with
some user-required tweaks, but nothing as intrusive, that's it. :-)

> OK. This is my current configuration:
>
> Dom0     PCPU 0,1,2   no pinning
> win7x64  PCPU   1,2   pinned
> pv499    PCPU       3 pinned
>
> And I get the same interdependence.
>
> root@smartin-xen:~# xl list
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0  1253     3     r-----      37.1
> win7x64                                      4  2046     2     ------     106.8
> pv499                                        5   128     1     r-----      66.8
> root@smartin-xen:~# xl vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0    1   -b-      21.2  all
> Domain-0                             0     1    2   r--       8.8  all
> Domain-0                             0     2    0   -b-       8.2  all
> win7x64                              4     0    1   -b-      59.8  1
> win7x64                              4     1    2   -b-      49.0  2
> pv499                                5     0    3   r--      70.2  3
> root@smartin-xen:~# xl cpupool-list -c
> Name               CPU list
> Pool-0             0,1,2
> pv499              3
>
> I have gone back to my working settings.
>
Ok, thanks a lot for trying. I was hoping that tweak, although developed
for completely different reasons, would be helpful in this case, but it
appears it is not.

I probably wouldn't have pinned the two win7 vcpus either, but anyway,
I don't want to eat up any more of your time... Glad you found a
configuration that is working out! :-)

Thanks again and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-h+Z7qroTDMvlfRFoksZV
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDkJUACgkQk4XaBE3IOsQFGACeKYce6icMxBcX2x00VvYScNAA
d8gAn3fk18FzppSVgGOeYBd5uv+7kYLY
=hX3O
-----END PGP SIGNATURE-----

--=-h+Z7qroTDMvlfRFoksZV--


--===============4571835881833689328==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4571835881833689328==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 16:56:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 16:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFny5-00071F-92; Tue, 18 Feb 2014 16:56:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFny4-00070w-9B
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 16:56:24 +0000
Received: from [193.109.254.147:47499] by server-11.bemta-14.messagelabs.com
	id FE/44-24604-7B093035; Tue, 18 Feb 2014 16:56:23 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392742582!1445861!1
X-Originating-IP: [209.85.215.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6514 invoked from network); 18 Feb 2014 16:56:22 -0000
Received: from mail-ea0-f178.google.com (HELO mail-ea0-f178.google.com)
	(209.85.215.178)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 16:56:22 -0000
Received: by mail-ea0-f178.google.com with SMTP id a15so7967062eae.37
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 08:56:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=gZ4ErtuExugE1iEmQjtztMV/EpPPf2I+La4ENVvMOoc=;
	b=Gp/iqzlI3HkvSf53ckDE2wsLP3bP1Wuh/yztLiMh5No8nuuG6FVNZxstKdldQjyi1l
	EXB7OO+QPbu69FoVpFjxlH6gRPE+UJIL/Ktp+xfB2W/PjU125ae3bc4aLq+mceINTLfI
	FjAmqKZxGE7iqeIuRmIm6qXcC34R+WyZAFlHzS2NJd1Y/ROZbxv/IIu+SBu8t+/c88pn
	Kg/kwITfMbVH4jbB71VKk1s9miq0IXbt2ud4EVy1GHe0/Oz7N8vZ6MGZz5ktzey7moVh
	3UkKUTRfdX2aispyVzl0ohP2nq1uKDZ/8DMdkz271N4yabpMz7WiD3O/x1ZJWyZjiuuw
	vPpA==
X-Gm-Message-State: ALoCoQkUbbkVMIkLRYesVQHecQzeKKaUBjys6dX8p/SJLs4ECVMEuq0mRpaNd94uVtRdcVaR6dxI
X-Received: by 10.14.202.136 with SMTP id d8mr35093133eeo.46.1392742582141;
	Tue, 18 Feb 2014 08:56:22 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id d9sm72189448eei.9.2014.02.18.08.56.20
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Tue, 18 Feb 2014 08:56:21 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Tue, 18 Feb 2014 16:56:17 +0000
Message-Id: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	George Dunlap <george.dunlap@citrix.com>
Subject: [Xen-devel] [PATCH v2] xen/arm: Correctly handle non-page aligned
	pointer in raw_copy_from_guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The current implementation of the raw_copy_from_guest helper may lead to data
corruption, and sometimes a Xen crash, when the guest virtual address is not
aligned to PAGE_SIZE.

When the total length is greater than a page, the length to read is
computed as
    min(len, (unsigned)(PAGE_SIZE - offset))

As offset is only computed once per call, if the start address was not
aligned to PAGE_SIZE we can, in a later iteration, end up:
    - reading across a page boundary => Xen crash
    - reading the previous page => data corruption

This issue can be resolved by setting offset to 0 at the end of the first
iteration. Indeed, after that, the guest virtual address is always aligned
to PAGE_SIZE.
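
The loop being fixed can be modelled in plain C. This is only a hedged
sketch (tiny PAGE_SIZE, a flat "guest" buffer, and an assert standing in
for the page-mapping boundary), not the actual guestcopy.c code:

```c
#include <assert.h>
#include <string.h>

#define PAGE_SIZE 16u  /* tiny page size, for illustration only */

/* Simplified model of raw_copy_from_guest(): copy `len` bytes starting
 * at guest offset `from`, one page at a time. `offset` describes the
 * first page only and must be reset before the following iterations. */
static unsigned long model_copy(char *to, const char *guest_mem,
                                unsigned long from, unsigned long len)
{
    unsigned offset = from & (PAGE_SIZE - 1);

    while (len) {
        /* model of map_domain_page(): only one page is accessible */
        const char *page = guest_mem + (from & ~(unsigned long)(PAGE_SIZE - 1));
        const char *p = page + (from & (PAGE_SIZE - 1));
        unsigned size = len < PAGE_SIZE - offset ? (unsigned)len
                                                 : PAGE_SIZE - offset;

        /* without the reset below, a later iteration reuses the stale
         * first-page offset and this read crosses the page boundary */
        assert(p + size <= page + PAGE_SIZE);
        memcpy(to, p, size);
        len -= size;
        from += size;
        to += size;
        /* the fix: from is page-aligned after the first iteration */
        offset = 0;
    }
    return 0;
}
```

Removing the `offset = 0;` line makes the assert trip on the third
iteration of an unaligned multi-page copy, which is the modelled
equivalent of the crash described above.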

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: George Dunlap <george.dunlap@citrix.com>

---
    This patch is a bug fix for Xen 4.4. Without it, data may be
    corrupted when Xen copies data from the guest and the guest virtual
    address is not aligned to PAGE_SIZE. Sometimes it can also crash Xen.

    This function is used in numerous places in Xen, so if the patch
    introduces another bug we will notice quickly, even with a small
    amount of data.

    Changes in v2:
        - Only raw_copy_from_guest is buggy; the other raw_copy_*
          helpers were safe because of the "offset = 0" at the end of
          their loops
        - Updated commit message and title
---
 xen/arch/arm/guestcopy.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index af0af6b..715bb4e 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -96,6 +96,11 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
         len -= size;
         from += size;
         to += size;
+        /*
+         * After the first iteration, guest virtual address is correctly
+         * aligned to PAGE_SIZE.
+         */
+        offset = 0;
     }
     return 0;
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:04:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFo5X-0007NE-9d; Tue, 18 Feb 2014 17:04:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WFo5V-0007N9-EA
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:04:05 +0000
Received: from [85.158.143.35:45336] by server-3.bemta-4.messagelabs.com id
	B7/0D-11539-48293035; Tue, 18 Feb 2014 17:04:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392743044!6599024!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3153 invoked from network); 18 Feb 2014 17:04:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 17:04:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 18 Feb 2014 17:04:03 +0000
Message-Id: <5303A090020000780011D6C6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 18 Feb 2014 17:04:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Boris Ostrovsky" <boris.ostrovsky@oracle.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
	<52FCA2E0020000780011BF52@nat28.tlf.novell.com>
	<53034122020000780011D2CC@nat28.tlf.novell.com>
	<530380ED.4050708@oracle.com>
	<53039B61020000780011D68B@nat28.tlf.novell.com>
	<530390D2.6020000@oracle.com>
In-Reply-To: <530390D2.6020000@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 18.02.14 at 17:56, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
> On 02/18/2014 11:41 AM, Jan Beulich wrote:
>>>>> On 18.02.14 at 16:49, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>> On 02/18/2014 05:16 AM, Jan Beulich wrote:
>>>>>>> On 13.02.14 at 10:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>>>> This test is already performed a couple of lines above.
>>>>> Except that it's the wrong code you remove:
>>>> No opinion on this alternative at all?
>>> Sorry Jan, I didn't realize you were waiting for me on this.
>>>
>>> Yes, your version is fine although to be honest I don't see how the
>>> original patch had any issues with division by zero since we'd still be
>>> inside the 'if (stride)' clause.
>> It's the very division that this patch removes:
>>
>>>>> --- a/xen/arch/x86/msi.c
>>>>> +++ b/xen/arch/x86/msi.c
>>>>> @@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
>>>>>                return 0;
>>>>>            base = pos + PCI_SRIOV_BAR;
>>>>>            vf -= PCI_BDF(bus, slot, func) + offset;
>>>>> -        if ( vf < 0 || (vf && vf % stride) )
>>>>> +        if ( vf < 0 )
>>>>>                return 0;
>>>>>            if ( stride )
>>>>>            {
>> Which isn't inside the if(stride).
> 
> 
> Yes, I see it now. I was staring at the wrong line.
> 
> This actually now looks like a bug.

You mean the old code looks wrong or the new one?

> You do check above for '(num_vf > 1 
> && !stride) ' but presumably if things are really messed up num_vf can 
> be 1 but vf is 0. And then if stride is zero too then we are not doing 
> particularly well.
> 
> So probably this should go into 4.4 as well?

We've done quite fine with this unfixed so far, so it's generally
okay to leave it as is until 4.4.1 (read: it's not a regression). I
personally wouldn't mind pushing it in, but only if other similar,
not-too-high-priority bug fixes also go in.
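
To make the hazard concrete, here is a minimal userspace sketch of the
fixed check. The helper name and shape are hypothetical, not the actual
read_pci_mem_bar() code; the point is that the modulo must stay behind
the stride != 0 guard, because `vf % stride` with stride == 0 is a
division by zero:

```c
#include <assert.h>

/* Hypothetical model of the VF validity check: reject a negative VF
 * index, and reject one that is not a multiple of the VF stride, but
 * only test the modulo when the stride is non-zero. */
static int vf_slot_valid(int vf, unsigned stride)
{
    if (vf < 0)
        return 0;
    if (stride && vf % stride)  /* modulo guarded behind stride != 0 */
        return 0;
    return 1;
}
```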

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:07:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFo8M-0007U5-U1; Tue, 18 Feb 2014 17:07:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFo8J-0007Tx-OT
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:07:01 +0000
Received: from [85.158.137.68:15020] by server-9.bemta-3.messagelabs.com id
	59/1D-10184-33393035; Tue, 18 Feb 2014 17:06:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392743216!2664868!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2414 invoked from network); 18 Feb 2014 17:06:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103578607"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:06:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:06:55 -0500
Message-ID: <1392743214.23084.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 18 Feb 2014 17:06:54 +0000
In-Reply-To: <1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> This patch contains the new definitions necessary for grant mapping.

Is this just adding a bunch of (currently) unused functions? That's a
slightly odd way to structure a series. They don't seem to be "generic
helpers" or anything so it would be more normal to introduce these as
they get used -- it's a bit hard to review them out of context.

> v2:

This sort of intraversion changelog should go after the S-o-b and a
"---" marker. That way it is not included in the final commit
message.

[...]
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---

v2: Blah blah

v3: Etc etc


> @@ -226,6 +248,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
>  
>  void xenvif_stop_queue(struct xenvif *vif);
>  
> +/* Callback from stack when TX packet can be released */
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
> +
> +/* Unmap a pending page, usually has to be called before xenvif_idx_release */

"usually" or always? How does one determine when it is or isn't
appropriate to call it later?

> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
> +
>  extern bool separate_tx_rx_irq;
>  
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 7669d49..f0f0c3d 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -38,6 +38,7 @@
>  
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> +#include <xen/balloon.h>

What is this for?
 
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index bb241d0..195602f 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -773,6 +773,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	return page;
>  }
>  
> +static inline void xenvif_tx_create_gop(struct xenvif *vif,
> +					u16 pending_idx,
> +					struct xen_netif_tx_request *txp,
> +					struct gnttab_map_grant_ref *gop)
> +{
> +	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
> +			  GNTMAP_host_map | GNTMAP_readonly,
> +			  txp->gref, vif->domid);
> +
> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
> +	       sizeof(*txp));

Can this not go in xenvif_tx_build_gops? Or conversely should the
non-mapping code there be factored out?

Given the presence of both kinds of gop the name of this function needs
to be more specific I think.

> +}
> +
>  static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
> @@ -1612,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  	return work_done;
>  }
>  
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
> +{
> +	unsigned long flags;
> +	pending_ring_idx_t index;
> +	u16 pending_idx = ubuf->desc;
> +	struct pending_tx_info *temp =
> +		container_of(ubuf, struct pending_tx_info, callback_struct);
> +	struct xenvif *vif = container_of(temp - pending_idx,

This is subtracting a u16 from a pointer?

> +					  struct xenvif,
> +					  pending_tx_info[0]);
> +
> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
> +	do {
> +		pending_idx = ubuf->desc;
> +		ubuf = (struct ubuf_info *) ubuf->ctx;
> +		index = pending_index(vif->dealloc_prod);
> +		vif->dealloc_ring[index] = pending_idx;
> +		/* Sync with xenvif_tx_dealloc_action:
> +		 * insert idx then incr producer.
> +		 */
> +		smp_wmb();

Is this really needed given that there is a lock held?

Or what is dealloc_lock protecting against?

> +		vif->dealloc_prod++;

What happens if the dealloc ring becomes full, will this wrap and cause
havoc?

> +	} while (ubuf);
> +	wake_up(&vif->dealloc_wq);
> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
> +}
> +
> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
> +{
> +	struct gnttab_unmap_grant_ref *gop;
> +	pending_ring_idx_t dc, dp;
> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
> +	unsigned int i = 0;
> +
> +	dc = vif->dealloc_cons;
> +	gop = vif->tx_unmap_ops;
> +
> +	/* Free up any grants we have finished using */
> +	do {
> +		dp = vif->dealloc_prod;
> +
> +		/* Ensure we see all indices enqueued by all
> +		 * xenvif_zerocopy_callback().
> +		 */
> +		smp_rmb();
> +
> +		while (dc != dp) {
> +			pending_idx =
> +				vif->dealloc_ring[pending_index(dc++)];
> +
> +			/* Already unmapped? */
> +			if (vif->grant_tx_handle[pending_idx] ==
> +				NETBACK_INVALID_HANDLE) {
> +				netdev_err(vif->dev,
> +					   "Trying to unmap invalid handle! "
> +					   "pending_idx: %x\n", pending_idx);
> +				BUG();
> +			}
> +
> +			pending_idx_release[gop-vif->tx_unmap_ops] =
> +				pending_idx;
> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
> +				vif->mmap_pages[pending_idx];
> +			gnttab_set_unmap_op(gop,
> +					    idx_to_kaddr(vif, pending_idx),
> +					    GNTMAP_host_map,
> +					    vif->grant_tx_handle[pending_idx]);
> +			vif->grant_tx_handle[pending_idx] =
> +				NETBACK_INVALID_HANDLE;
> +			++gop;

Can we run out of space in the gop array?

> +		}
> +
> +	} while (dp != vif->dealloc_prod);
> +
> +	vif->dealloc_cons = dc;

No barrier here?

> +	if (gop - vif->tx_unmap_ops > 0) {
> +		int ret;
> +		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
> +					vif->pages_to_unmap,
> +					gop - vif->tx_unmap_ops);
> +		if (ret) {
> +			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
> +				   gop - vif->tx_unmap_ops, ret);
> +			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {

This seems liable to be a lot of spew on failure. Perhaps only log the
ones where gop[i].status != success.

Have you considered whether or not the frontend can force this error to
occur?

> +				netdev_err(vif->dev,
> +					   " host_addr: %llx handle: %x status: %d\n",
> +					   gop[i].host_addr,
> +					   gop[i].handle,
> +					   gop[i].status);
> +			}
> +			BUG();
> +		}
> +	}
> +
> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
> +		xenvif_idx_release(vif, pending_idx_release[i],
> +				   XEN_NETIF_RSP_OKAY);
> +}
> +
> +
>  /* Called after netfront has transmitted */
>  int xenvif_tx_action(struct xenvif *vif, int budget)
>  {
> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  	vif->mmap_pages[pending_idx] = NULL;
>  }
>  
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

This is a single shot version of the batched xenvif_tx_dealloc_action
version? Why not just enqueue the idx to be unmapped later?

> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +			   "Trying to unmap invalid handle! pending_idx: %x\n",
> +			   pending_idx);
> +		BUG();
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			    idx_to_kaddr(vif, pending_idx),
> +			    GNTMAP_host_map,
> +			    vif->grant_tx_handle[pending_idx]);
> +	ret = gnttab_unmap_refs(&tx_unmap_op, &vif->mmap_pages[pending_idx], 1);
> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}
>  
>  static void make_tx_response(struct xenvif *vif,
>  			     struct xen_netif_tx_request *txp,
> @@ -1740,6 +1874,11 @@ static inline int tx_work_todo(struct xenvif *vif)
>  	return 0;
>  }
>  
> +static inline bool tx_dealloc_work_todo(struct xenvif *vif)
> +{
> +	return vif->dealloc_cons != vif->dealloc_prod
> +}
> +
>  void xenvif_unmap_frontend_rings(struct xenvif *vif)
>  {
>  	if (vif->tx.sring)
> @@ -1826,6 +1965,28 @@ int xenvif_kthread(void *data)
>  	return 0;
>  }
>  
> +int xenvif_dealloc_kthread(void *data)

Is this going to be a thread per vif?

> +{
> +	struct xenvif *vif = data;
> +
> +	while (!kthread_should_stop()) {
> +		wait_event_interruptible(vif->dealloc_wq,
> +					 tx_dealloc_work_todo(vif) ||
> +					 kthread_should_stop());
> +		if (kthread_should_stop())
> +			break;
> +
> +		xenvif_tx_dealloc_action(vif);
> +		cond_resched();
> +	}
> +
> +	/* Unmap anything remaining*/
> +	if (tx_dealloc_work_todo(vif))
> +		xenvif_tx_dealloc_action(vif);
> +
> +	return 0;
> +}
> +
>  static int __init netback_init(void)
>  {
>  	int rc = 0;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:07:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFo8M-0007U5-U1; Tue, 18 Feb 2014 17:07:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFo8J-0007Tx-OT
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:07:01 +0000
Received: from [85.158.137.68:15020] by server-9.bemta-3.messagelabs.com id
	59/1D-10184-33393035; Tue, 18 Feb 2014 17:06:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392743216!2664868!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2414 invoked from network); 18 Feb 2014 17:06:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:06:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103578607"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:06:56 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:06:55 -0500
Message-ID: <1392743214.23084.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 18 Feb 2014 17:06:54 +0000
In-Reply-To: <1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> This patch contains the new definitions necessary for grant mapping.

Is this just adding a bunch of (currently) unused functions? That's a
slightly odd way to structure a series. They don't seem to be "generic
helpers" or anything so it would be more normal to introduce these as
they get used -- it's a bit hard to review them out of context.

> v2:

This sort of intra-version changelog should go after the S-o-b and a
"---" marker. This way they are not included in the final commit
message.

[...]
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---

v2: Blah blah

v3: Etc etc


> @@ -226,6 +248,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
>  
>  void xenvif_stop_queue(struct xenvif *vif);
>  
> +/* Callback from stack when TX packet can be released */
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
> +
> +/* Unmap a pending page, usually has to be called before xenvif_idx_release */

"usually" or always? How does one determine when it is or isn't
appropriate to call it later?

> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx);
> +
>  extern bool separate_tx_rx_irq;
>  
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 7669d49..f0f0c3d 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -38,6 +38,7 @@
>  
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> +#include <xen/balloon.h>

What is this for?
 
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index bb241d0..195602f 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -773,6 +773,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	return page;
>  }
>  
> +static inline void xenvif_tx_create_gop(struct xenvif *vif,
> +					u16 pending_idx,
> +					struct xen_netif_tx_request *txp,
> +					struct gnttab_map_grant_ref *gop)
> +{
> +	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
> +			  GNTMAP_host_map | GNTMAP_readonly,
> +			  txp->gref, vif->domid);
> +
> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
> +	       sizeof(*txp));

Can this not go in xenvif_tx_build_gops? Or conversely should the
non-mapping code there be factored out?

Given the presence of both kinds of gop the name of this function needs
to be more specific I think.

> +}
> +
>  static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
> @@ -1612,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  	return work_done;
>  }
>  
> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
> +{
> +	unsigned long flags;
> +	pending_ring_idx_t index;
> +	u16 pending_idx = ubuf->desc;
> +	struct pending_tx_info *temp =
> +		container_of(ubuf, struct pending_tx_info, callback_struct);
> +	struct xenvif *vif = container_of(temp - pending_idx,

This is subtracting a u16 from a pointer?

> +					  struct xenvif,
> +					  pending_tx_info[0]);
> +
> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
> +	do {
> +		pending_idx = ubuf->desc;
> +		ubuf = (struct ubuf_info *) ubuf->ctx;
> +		index = pending_index(vif->dealloc_prod);
> +		vif->dealloc_ring[index] = pending_idx;
> +		/* Sync with xenvif_tx_dealloc_action:
> +		 * insert idx then incr producer.
> +		 */
> +		smp_wmb();

Is this really needed given that there is a lock held?

Or what is dealloc_lock protecting against?

> +		vif->dealloc_prod++;

What happens if the dealloc ring becomes full, will this wrap and cause
havoc?

> +	} while (ubuf);
> +	wake_up(&vif->dealloc_wq);
> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
> +}
> +
> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
> +{
> +	struct gnttab_unmap_grant_ref *gop;
> +	pending_ring_idx_t dc, dp;
> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
> +	unsigned int i = 0;
> +
> +	dc = vif->dealloc_cons;
> +	gop = vif->tx_unmap_ops;
> +
> +	/* Free up any grants we have finished using */
> +	do {
> +		dp = vif->dealloc_prod;
> +
> +		/* Ensure we see all indices enqueued by all
> +		 * xenvif_zerocopy_callback().
> +		 */
> +		smp_rmb();
> +
> +		while (dc != dp) {
> +			pending_idx =
> +				vif->dealloc_ring[pending_index(dc++)];
> +
> +			/* Already unmapped? */
> +			if (vif->grant_tx_handle[pending_idx] ==
> +				NETBACK_INVALID_HANDLE) {
> +				netdev_err(vif->dev,
> +					   "Trying to unmap invalid handle! "
> +					   "pending_idx: %x\n", pending_idx);
> +				BUG();
> +			}
> +
> +			pending_idx_release[gop-vif->tx_unmap_ops] =
> +				pending_idx;
> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
> +				vif->mmap_pages[pending_idx];
> +			gnttab_set_unmap_op(gop,
> +					    idx_to_kaddr(vif, pending_idx),
> +					    GNTMAP_host_map,
> +					    vif->grant_tx_handle[pending_idx]);
> +			vif->grant_tx_handle[pending_idx] =
> +				NETBACK_INVALID_HANDLE;
> +			++gop;

Can we run out of space in the gop array?

> +		}
> +
> +	} while (dp != vif->dealloc_prod);
> +
> +	vif->dealloc_cons = dc;

No barrier here?

> +	if (gop - vif->tx_unmap_ops > 0) {
> +		int ret;
> +		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
> +					vif->pages_to_unmap,
> +					gop - vif->tx_unmap_ops);
> +		if (ret) {
> +			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
> +				   gop - vif->tx_unmap_ops, ret);
> +			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {

This seems liable to be a lot of spew on failure. Perhaps only log the
ones where gop[i].status != success.

Have you considered whether or not the frontend can force this error to
occur?

> +				netdev_err(vif->dev,
> +					   " host_addr: %llx handle: %x status: %d\n",
> +					   gop[i].host_addr,
> +					   gop[i].handle,
> +					   gop[i].status);
> +			}
> +			BUG();
> +		}
> +	}
> +
> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
> +		xenvif_idx_release(vif, pending_idx_release[i],
> +				   XEN_NETIF_RSP_OKAY);
> +}
> +
> +
>  /* Called after netfront has transmitted */
>  int xenvif_tx_action(struct xenvif *vif, int budget)
>  {
> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  	vif->mmap_pages[pending_idx] = NULL;
>  }
>  
> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)

This is a single shot version of the batched xenvif_tx_dealloc_action
version? Why not just enqueue the idx to be unmapped later?

> +{
> +	int ret;
> +	struct gnttab_unmap_grant_ref tx_unmap_op;
> +
> +	if (vif->grant_tx_handle[pending_idx] == NETBACK_INVALID_HANDLE) {
> +		netdev_err(vif->dev,
> +			   "Trying to unmap invalid handle! pending_idx: %x\n",
> +			   pending_idx);
> +		BUG();
> +	}
> +	gnttab_set_unmap_op(&tx_unmap_op,
> +			    idx_to_kaddr(vif, pending_idx),
> +			    GNTMAP_host_map,
> +			    vif->grant_tx_handle[pending_idx]);
> +	ret = gnttab_unmap_refs(&tx_unmap_op, &vif->mmap_pages[pending_idx], 1);
> +	BUG_ON(ret);
> +	vif->grant_tx_handle[pending_idx] = NETBACK_INVALID_HANDLE;
> +}
>  
>  static void make_tx_response(struct xenvif *vif,
>  			     struct xen_netif_tx_request *txp,
> @@ -1740,6 +1874,11 @@ static inline int tx_work_todo(struct xenvif *vif)
>  	return 0;
>  }
>  
> +static inline bool tx_dealloc_work_todo(struct xenvif *vif)
> +{
> +	return vif->dealloc_cons != vif->dealloc_prod;
> +}
> +
>  void xenvif_unmap_frontend_rings(struct xenvif *vif)
>  {
>  	if (vif->tx.sring)
> @@ -1826,6 +1965,28 @@ int xenvif_kthread(void *data)
>  	return 0;
>  }
>  
> +int xenvif_dealloc_kthread(void *data)

Is this going to be a thread per vif?

> +{
> +	struct xenvif *vif = data;
> +
> +	while (!kthread_should_stop()) {
> +		wait_event_interruptible(vif->dealloc_wq,
> +					 tx_dealloc_work_todo(vif) ||
> +					 kthread_should_stop());
> +		if (kthread_should_stop())
> +			break;
> +
> +		xenvif_tx_dealloc_action(vif);
> +		cond_resched();
> +	}
> +
> +	/* Unmap anything remaining */
> +	if (tx_dealloc_work_todo(vif))
> +		xenvif_tx_dealloc_action(vif);
> +
> +	return 0;
> +}
> +
>  static int __init netback_init(void)
>  {
>  	int rc = 0;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:08:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:08:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoA2-0007ed-L4; Tue, 18 Feb 2014 17:08:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFoA0-0007eT-VB
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:08:45 +0000
Received: from [193.109.254.147:25707] by server-10.bemta-14.messagelabs.com
	id F3/BC-10711-C9393035; Tue, 18 Feb 2014 17:08:44 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392743322!5165660!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28163 invoked from network); 18 Feb 2014 17:08:43 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 17:08:43 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IH8cvw004137
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 17:08:39 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IH8cNQ020518
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Feb 2014 17:08:38 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IH8cWa006577; Tue, 18 Feb 2014 17:08:38 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 09:08:37 -0800
Message-ID: <530393F8.2040409@oracle.com>
Date: Tue, 18 Feb 2014 12:10:16 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1392239147-1547-1-git-send-email-boris.ostrovsky@oracle.com>
	<1392239147-1547-3-git-send-email-boris.ostrovsky@oracle.com>
	<52FCA2E0020000780011BF52@nat28.tlf.novell.com>
	<53034122020000780011D2CC@nat28.tlf.novell.com>
	<530380ED.4050708@oracle.com>
	<53039B61020000780011D68B@nat28.tlf.novell.com>
	<530390D2.6020000@oracle.com>
	<5303A090020000780011D6C6@nat28.tlf.novell.com>
In-Reply-To: <5303A090020000780011D6C6@nat28.tlf.novell.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: george.dunlap@eu.citrix.com, xen-devel@lists.xenproject.org, keir@xen.org
Subject: Re: [Xen-devel] [PATCH 2/2] x86/pci: Remove unnecessary check in VF
 value computation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 12:04 PM, Jan Beulich wrote:
>>>> On 18.02.14 at 17:56, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>> On 02/18/2014 11:41 AM, Jan Beulich wrote:
>>>>>> On 18.02.14 at 16:49, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>> On 02/18/2014 05:16 AM, Jan Beulich wrote:
>>>>>>>> On 13.02.14 at 10:48, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>>>>>> On 12.02.14 at 22:05, Boris Ostrovsky <boris.ostrovsky@oracle.com> wrote:
>>>>>>> This test is already performed a couple of lines above.
>>>>>> Except that it's the wrong code you remove:
>>>>> No opinion on this alternative at all?
>>>> Sorry Jan, I didn't realize you were waiting for me on this.
>>>>
>>>> Yes, your version is fine although to be honest I don't see how the
>>>> original patch had any issues with division by zero since we'd still be
>>>> inside the 'if (stride)' clause.
>>> It's the very division that this patch removes:
>>>
>>>>>> --- a/xen/arch/x86/msi.c
>>>>>> +++ b/xen/arch/x86/msi.c
>>>>>> @@ -635,7 +635,7 @@ static u64 read_pci_mem_bar(u16 seg, u8
>>>>>>                 return 0;
>>>>>>             base = pos + PCI_SRIOV_BAR;
>>>>>>             vf -= PCI_BDF(bus, slot, func) + offset;
>>>>>> -        if ( vf < 0 || (vf && vf % stride) )
>>>>>> +        if ( vf < 0 )
>>>>>>                 return 0;
>>>>>>             if ( stride )
>>>>>>             {
>>> Which isn't inside the if(stride).
>>
>> Yes, I see it now. I was staring at a wrong line.
>>
>> This actually now looks like a bug.
> You mean the old code looks wrong or the new one?


The old one. Your patch fixes it.


>
>> You do check above for '(num_vf > 1
>> && !stride) ' but presumably if things are really messed up num_vf can
>> be 1 but vf is 0. And then if stride is zero too then we are not doing
>> particularly well.
>>
>> So probably this should go into 4.4 as well?
> We've done with this unfixed quite fine so far, so it's generally
> okay to leave as is until 4.4.1 (read: not a regression). I
> personally wouldn't mind pushing it in, but only if other similar
> not too high priority bug fixes would also go in.
>

OK.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:10:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoBQ-0007rc-4x; Tue, 18 Feb 2014 17:10:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WFoBO-0007rP-Cy
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 17:10:10 +0000
Received: from [85.158.137.68:62222] by server-17.bemta-3.messagelabs.com id
	43/DE-22569-1F393035; Tue, 18 Feb 2014 17:10:09 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392743407!2681210!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14499 invoked from network); 18 Feb 2014 17:10:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:10:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101855592"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 17:10:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:10:06 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WFoBJ-0006bk-7P;
	Tue, 18 Feb 2014 17:10:05 +0000
Date: Tue, 18 Feb 2014 17:10:00 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Paolo Bonzini <pbonzini@redhat.com>
In-Reply-To: <53036DA8.3080804@redhat.com>
Message-ID: <alpine.DEB.2.02.1402181709350.15812@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
	<alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
	<53036DA8.3080804@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> Il 18/02/2014 15:25, Stefano Stabellini ha scritto:
> > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> > > > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > > > of the email :-P).
> > > > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > > > response to the guest writing to a magic ioport specifically to unplug
> > > > the emulated disk.
> > > > With this patch, after the guest boots, I can still access both xvda
> > > > and sda for the same disk, leading to fs corruption.
> > > 
> > > Ok, the last paragraph is what I was missing.
> > > 
> > > So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
> > > hotplug handler, dc->unplug is not called anymore.
> > > 
> > > But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't
> > > free the device, it just drops the disks underneath.  I think the
> > > simplest solution is to _not_ make it a dc->unplug callback at all, and
> > > call pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug.
> > > qdev_unplug means "ask guest to start unplug", which is not what Xen
> > > wants to do here.
> > 
> > Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
> > Calling it directly from unplug_disks fixes the issue:
> > 
> > 
> > ---
> > 
> > Call pci_piix3_xen_ide_unplug from unplug_disks
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > diff --git a/hw/ide/piix.c b/hw/ide/piix.c
> > index 0eda301..40757eb 100644
> > --- a/hw/ide/piix.c
> > +++ b/hw/ide/piix.c
> > @@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
> >      return 0;
> >  }
> > 
> > -static int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > +int pci_piix3_xen_ide_unplug(DeviceState *dev)
> >  {
> >      PCIIDEState *pci_ide;
> >      DriveInfo *di;
> > @@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
> >      k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
> >      k->class_id = PCI_CLASS_STORAGE_IDE;
> >      set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > -    dc->unplug = pci_piix3_xen_ide_unplug;
> >  }
> > 
> >  static const TypeInfo piix3_ide_xen_info = {
> > diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
> > index 70875e4..1d9d0e9 100644
> > --- a/hw/xen/xen_platform.c
> > +++ b/hw/xen/xen_platform.c
> > @@ -27,6 +27,7 @@
> > 
> >  #include "hw/hw.h"
> >  #include "hw/i386/pc.h"
> > +#include "hw/ide.h"
> >  #include "hw/pci/pci.h"
> >  #include "hw/irq.h"
> >  #include "hw/xen/xen_common.h"
> > @@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
> >      if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> >              PCI_CLASS_STORAGE_IDE
> >              && strcmp(d->name, "xen-pci-passthrough") != 0) {
> > -        qdev_unplug(DEVICE(d), NULL);
> > +        pci_piix3_xen_ide_unplug(DEVICE(d));
> >      }
> >  }
> > 
> > diff --git a/include/hw/ide.h b/include/hw/ide.h
> > index 507e6d3..bc8bd32 100644
> > --- a/include/hw/ide.h
> > +++ b/include/hw/ide.h
> > @@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
> >  PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> >  PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> >  PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > +int pci_piix3_xen_ide_unplug(DeviceState *dev);
> >  void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > 
> >  /* ide-mmio.c */
> > 
> 
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Thanks. Should I send it to Peter via the xen tree, or does anybody else
want to pick this up?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:10:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:10:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoBZ-0007tB-Pe; Tue, 18 Feb 2014 17:10:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFoBY-0007ss-4I
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:10:20 +0000
Received: from [85.158.139.211:39806] by server-13.bemta-5.messagelabs.com id
	CC/2E-18801-BF393035; Tue, 18 Feb 2014 17:10:19 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392743416!4728055!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4046 invoked from network); 18 Feb 2014 17:10:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:10:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103580097"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:10:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:10:15 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFoBT-0006bn-5c;
	Tue, 18 Feb 2014 17:10:15 +0000
Message-ID: <530393F5.2010501@eu.citrix.com>
Date: Tue, 18 Feb 2014 17:10:13 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, <xen-devel@lists.xenproject.org>
References: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
X-DLP: MIA1
Cc: stefano.stabellini@citrix.com, tim@xen.org, ian.campbell@citrix.com,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: Correctly handle non-page
 aligned pointer in raw_copy_from_guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 04:56 PM, Julien Grall wrote:
> The current implementation of the raw_copy_from_guest helper may lead to data
> corruption, and sometimes a Xen crash, when the guest virtual address is not
> aligned to PAGE_SIZE.
>
> When the total length is larger than a page, the length to read is wrongly
> computed as
>      min(len, (unsigned)(PAGE_SIZE - offset))
>
> As the offset is only computed once per call, if the start address was not
> aligned to PAGE_SIZE, a later iteration can end up:
>      - reading across a page boundary => Xen crash
>      - reading from the previous page => data corruption
>
> This issue is resolved by setting offset to 0 at the end of the first
> iteration: after that, the guest virtual address is always aligned to
> PAGE_SIZE.
>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: George Dunlap <george.dunlap@citrix.com>
>
> ---
>      This patch is a bug fix for Xen 4.4. Without it, data may be corrupted
>      when Xen copies data from the guest if the guest virtual address is not
>      aligned to PAGE_SIZE. Sometimes it can also crash Xen.
>
>      This function is used in numerous places in Xen. If it introduces
>      another bug, we will notice quickly, even with small amounts of data.

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
>      Changes in v2:
>          - Only raw_copy_from_guest is buggy; the other raw_copy_*
>          helpers were safe because of the "offset = 0" at the end of the loop
>          - Update commit message and title
> ---
>   xen/arch/arm/guestcopy.c |    5 +++++
>   1 file changed, 5 insertions(+)
>
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index af0af6b..715bb4e 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -96,6 +96,11 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>           len -= size;
>           from += size;
>           to += size;
> +        /*
> +         * After the first iteration, guest virtual address is correctly
> +         * aligned to PAGE_SIZE.
> +         */
> +        offset = 0;
>       }
>       return 0;
>   }


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:10:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:10:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoC6-0007zP-5g; Tue, 18 Feb 2014 17:10:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFoC4-0007yu-PL
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:10:52 +0000
Received: from [193.109.254.147:11996] by server-2.bemta-14.messagelabs.com id
	24/41-01236-C1493035; Tue, 18 Feb 2014 17:10:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392743449!5166199!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10693 invoked from network); 18 Feb 2014 17:10:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:10:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103580420"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:10:42 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:10:13 -0500
Message-ID: <1392743412.23084.41.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 18 Feb 2014 17:10:12 +0000
In-Reply-To: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
References: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: Correctly handle non-page
 aligned pointer in raw_copy_from_guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 16:56 +0000, Julien Grall wrote:
> The current implementation of the raw_copy_from_guest helper may lead to data
> corruption, and sometimes a Xen crash, when the guest virtual address is not
> aligned to PAGE_SIZE.
> 
> When the total length is larger than a page, the length to read is wrongly
> computed as
>     min(len, (unsigned)(PAGE_SIZE - offset))
> 
> As the offset is only computed once per call, if the start address was not
> aligned to PAGE_SIZE, a later iteration can end up:
>     - reading across a page boundary => Xen crash
>     - reading from the previous page => data corruption
> 
> This issue is resolved by setting offset to 0 at the end of the first
> iteration: after that, the guest virtual address is always aligned to
> PAGE_SIZE.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>


> Cc: George Dunlap <george.dunlap@citrix.com>
> 
> ---
>     This patch is a bug fix for Xen 4.4. Without it, data may be corrupted
>     when Xen copies data from the guest if the guest virtual address is not
>     aligned to PAGE_SIZE. Sometimes it can also crash Xen.

The crash is what makes this undoubtedly necessary for 4.4 IMHO.

>     This function is used in numerous places in Xen. If it introduces
>     another bug, we will notice quickly, even with small amounts of data.
> 
>     Changes in v2:
>         - Only raw_copy_from_guest is buggy; the other raw_copy_*
>         helpers were safe because of the "offset = 0" at the end of the loop
>         - Update commit message and title
> ---
>  xen/arch/arm/guestcopy.c |    5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index af0af6b..715bb4e 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -96,6 +96,11 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len
>          len -= size;
>          from += size;
>          to += size;
> +        /*
> +         * After the first iteration, guest virtual address is correctly
> +         * aligned to PAGE_SIZE.
> +         */

I'd like to duplicate this comment in the other two places too -- if you
are OK with it I will do that as part of committing.

> +        offset = 0;
>      }
>      return 0;
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 16:56 +0000, Julien Grall wrote:
> The current implementation of the raw_copy_from_guest helper may lead to data
> corruption, and sometimes a Xen crash, when the guest virtual address is not
> aligned to PAGE_SIZE.
> 
> When the total length is larger than a page, the length to read is wrongly
> computed as
>     min(len, (unsigned)(PAGE_SIZE - offset))
> 
> As the offset is only computed once per call, if the start address was not
> aligned to PAGE_SIZE, a later iteration can end up:
>     - reading across a page boundary => Xen crash
>     - reading from the previous page => data corruption
> 
> This issue can be resolved by setting offset to 0 at the end of the first
> iteration. Indeed, after it, the guest virtual address is always aligned
> to PAGE_SIZE.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>


> Cc: George Dunlap <george.dunlap@citrix.com>
> 
> ---
>     This patch is a bug fix for Xen 4.4. Without this patch the data may be
>     corrupted when Xen copies data from the guest, if the guest virtual
>     address is not aligned to PAGE_SIZE. Sometimes it can also crash Xen.

The crash is what makes this undoubtedly necessary for 4.4 IMHO.

>     This function is used in numerous places in Xen. If it introduces another
>     bug, we will see it quickly even with a small amount of data.
> 
>     Changes in v2:
>         - Only raw_copy_from_guest is buggy, the other raw_copy_*
>         helpers were safe because of the "offset = 0" at the end of the loop
>         - Update commit message and title
> ---
>  xen/arch/arm/guestcopy.c |    5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index af0af6b..715bb4e 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -96,6 +96,11 @@ unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned le
>          len -= size;
>          from += size;
>          to += size;
> +        /*
> +         * After the first iteration, guest virtual address is correctly
> +         * aligned to PAGE_SIZE.
> +         */

I'd like to duplicate this comment in the other two places too -- if you
are OK with it I will do that as part of committing.

> +        offset = 0;
>      }
>      return 0;
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:14:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoFB-0008GS-RH; Tue, 18 Feb 2014 17:14:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFoF8-0008GL-BV
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:14:04 +0000
Received: from [85.158.137.68:31200] by server-7.bemta-3.messagelabs.com id
	8C/42-13775-9D493035; Tue, 18 Feb 2014 17:14:01 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392743639!1425033!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4057 invoked from network); 18 Feb 2014 17:14:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:14:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103582194"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:13:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:13:55 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFoF0-0006fn-IW;
	Tue, 18 Feb 2014 17:13:54 +0000
Message-ID: <530394D0.1080706@eu.citrix.com>
Date: Tue, 18 Feb 2014 17:13:52 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Julien Grall
	<julien.grall@linaro.org>
References: <1392731901-20233-1-git-send-email-julien.grall@linaro.org>
	<1392735093.11080.81.camel@kazak.uk.xensource.com>
In-Reply-To: <1392735093.11080.81.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH] xen/arm: Save/restore GICH_VMCR on domain
	context switch
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 02:51 PM, Ian Campbell wrote:
> On Tue, 2014-02-18 at 13:58 +0000, Julien Grall wrote:
>> The GICH_VMCR register contains aliases of important bits of the GICV interface,
>> such as:
>>      - priority mask of the CPU
>>      - EOImode
>>      - ...
>>
>> We were safe so far because the Linux guest always uses the same value for
>> these bits. When new guests handle priority or change the EOI mode, VCPU
>> interrupt management will end up in a wrong state.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: George Dunlap <george.dunlap@citrix.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
>> ---
>>      This is a bug fix for Xen 4.4. Without this patch we can't support guests
>>      that don't handle the GICC interface the same way Linux does; Linux never
>>      modifies these bits.
> I'd say we pretty much have to take this -- otherwise some guest can
> break things for everyone else by writing to GICC registers.
>
> I've had a look at the GICH register list and I think we correctly
> switch everything else.

It looks that way to me as well:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:15:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoGq-0008NN-BW; Tue, 18 Feb 2014 17:15:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFoGo-0008NE-RH
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 17:15:47 +0000
Received: from [85.158.143.35:40206] by server-3.bemta-4.messagelabs.com id
	B8/9D-11539-24593035; Tue, 18 Feb 2014 17:15:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392743744!6601250!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19776 invoked from network); 18 Feb 2014 17:15:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:15:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101859127"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 17:15:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 12:15:43 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFoGg-0004O7-1k;
	Tue, 18 Feb 2014 17:15:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WFoGe-0006Br-9A;
	Tue, 18 Feb 2014 17:15:36 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Tue, 18 Feb 2014 17:15:32 +0000
Message-ID: <1392743732-23759-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] libxl: Properly declare libxlu_disk_l.h in
	AUTOINCS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is necessary so that make doesn't do things which depend on this
file until flex has finished producing it.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
Tested-by: Olaf Hering <olaf@aepfle.de>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/libxl/Makefile |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index dab2929..755b666 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -98,7 +98,7 @@ TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
 $(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
 
 AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
-	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
+	libxlu_disk_l.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
 AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
 AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:25:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoPc-0000HF-UD; Tue, 18 Feb 2014 17:24:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFoPb-0000HA-Fp
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:24:51 +0000
Received: from [85.158.143.35:46999] by server-3.bemta-4.messagelabs.com id
	86/99-11539-26793035; Tue, 18 Feb 2014 17:24:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392744289!6597364!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15012 invoked from network); 18 Feb 2014 17:24:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:24:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103587428"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:24:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:24:47 -0500
Message-ID: <1392744286.23084.52.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 18 Feb 2014 17:24:46 +0000
In-Reply-To: <1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> 
> +       spinlock_t dealloc_lock;
> +       spinlock_t response_lock; 

Please add comments to both of these describing which bits of the
data structure they are locking.

You might find it is clearer to group the locks and the things they
protect together rather than grouping the locks together.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:28:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoTD-0000Oi-L0; Tue, 18 Feb 2014 17:28:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFoTB-0000OE-TD
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 17:28:34 +0000
Received: from [85.158.137.68:2017] by server-10.bemta-3.messagelabs.com id
	4E/06-07302-14893035; Tue, 18 Feb 2014 17:28:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392744510!2699697!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7135 invoked from network); 18 Feb 2014 17:28:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:28:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103588767"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:28:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:28:19 -0500
Message-ID: <1392744498.23084.54.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 17:28:18 +0000
In-Reply-To: <1392743732-23759-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392743732-23759-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [PATCH] libxl: Properly declare libxlu_disk_l.h in
	AUTOINCS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 17:15 +0000, Ian Jackson wrote:
> This is necessary so that make doesn't do things which depend on this
> file until flex has finished producing it.
> 
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> Tested-by: Olaf Hering <olaf@aepfle.de>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  tools/libxl/Makefile |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index dab2929..755b666 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -98,7 +98,7 @@ TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
>  $(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
>  
>  AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
> -	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
> +	libxlu_disk_l.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
>  AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
>  AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
>  LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:41:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:41:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoex-0000ka-4r; Tue, 18 Feb 2014 17:40:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFoev-0000jf-Ht
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:40:41 +0000
Received: from [193.109.254.147:9090] by server-3.bemta-14.messagelabs.com id
	92/53-00432-81B93035; Tue, 18 Feb 2014 17:40:40 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392745238!5221859!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29707 invoked from network); 18 Feb 2014 17:40:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:40:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101871127"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 17:40:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:40:37 -0500
Message-ID: <1392745235.23084.60.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 18 Feb 2014 17:40:35 +0000
In-Reply-To: <1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> This patch changes the grant copy on the TX patch to grant mapping

Both this and the previous patch had a single-sentence commit message (I
count them together since they are split weirdly and are a single
logical change to my eyes).

Really a change of this magnitude deserves a commit message to match,
e.g. explaining the approach which is taken by the code at a high level,
what it is doing, how it is doing it, the rationale for using a kthread
etc etc.

> 
> v2:
> - delete branch for handling fragmented packets fit PKT_PROT_LEN sized first
>   request
> - mark the effect of using ballooned pages in a comment
> - place setting of skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY right
>   before netif_receive_skb, and mark the importance of it
> - grab dealloc_lock before __napi_complete to avoid contention with the
>   callback's napi_schedule
> - handle fragmented packets where first request < PKT_PROT_LEN
> - fix up error path when checksum_setup failed
> - check before teardown for pending grants, and start complain if they are
>   there after 10 second
> 
> v3:
> - delete a surplus checking from tx_action
> - remove stray line
> - squash xenvif_idx_unmap changes into the first patch
> - init spinlocks
> - call map hypercall directly instead of gnttab_map_refs()
> - fix unmapping timeout in xenvif_free()
> 
> v4:
> - fix indentations and comments
> - handle errors of set_phys_to_machine
> - go back to gnttab_map_refs instead of direct hypercall. Now we rely on the
>   modified API
> 
> v5:
> - BUG_ON(vif->dealloc_task) in xenvif_connect
> - use 'task' in xenvif_connect for thread creation
> - proper return value if alloc_xenballooned_pages fails
> - BUG in xenvif_tx_check_gop if stale handle found
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/interface.c |   63 ++++++++-
>  drivers/net/xen-netback/netback.c   |  254 ++++++++++++++---------------------
>  2 files changed, 160 insertions(+), 157 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index f0f0c3d..b3daae2 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	BUG_ON(skb->dev != dev);
>  
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (vif->task == NULL ||
> +	    vif->dealloc_task == NULL ||

Under what conditions could this be true? Would it not represent a
rather serious failure?

> +	    !xenvif_schedulable(vif))
>  		goto drop;
>  
>  	/* At best we'll need one slot for the header and one for each
> @@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	vif->pending_prod = MAX_PENDING_REQS;
>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>  		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> +	spin_lock_init(&vif->dealloc_lock);
> +	spin_lock_init(&vif->response_lock);
> +	/* If ballooning is disabled, this will consume real memory, so you
> +	 * better enable it.

Almost no one who would be affected by this is going to read this
comment. And it doesn't just require enabling ballooning, but actually
booting with some maxmem "slack" to leave space.

Classic-xen kernels used to add 8M of slop to the physical address space
to leave a suitable pool for exactly this sort of thing. I never liked
that but perhaps it should be reconsidered (or at least raised as a
possibility with the core-Xen Linux guys).

>  The long term solution would be to use just a
> +	 * bunch of valid page descriptors, without dependency on ballooning

Where would these come from? Do you have a cunning plan here?

> +	 */
> +	err = alloc_xenballooned_pages(MAX_PENDING_REQS,
> +				       vif->mmap_pages,
> +				       false);
> +	if (err) {
> +		netdev_err(dev, "Could not reserve mmap_pages\n");
> +		return ERR_PTR(-ENOMEM);
> +	}
> +	for (i = 0; i < MAX_PENDING_REQS; i++) {
> +		vif->pending_tx_info[i].callback_struct = (struct ubuf_info)
> +			{ .callback = xenvif_zerocopy_callback,
> +			  .ctx = NULL,
> +			  .desc = i };
> +		vif->grant_tx_handle[i] = NETBACK_INVALID_HANDLE;
> +	}
>  
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
> @@ -383,12 +403,14 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  
>  	BUG_ON(vif->tx_irq);
>  	BUG_ON(vif->task);
> +	BUG_ON(vif->dealloc_task);
>  
>  	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
>  	if (err < 0)
>  		goto err;
>  
>  	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&vif->dealloc_wq);
>  
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
> @@ -432,6 +454,18 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>  
>  	vif->task = task;
>  
> +	task = kthread_create(xenvif_dealloc_kthread,
> +					   (void *)vif,
> +					   "%s-dealloc",
> +					   vif->dev->name);

This is separate to the existing kthread that handles rx stuff. If they
cannot or should not be combined then I think the existing one needs
renaming, both the function and the thread itself in a precursor patch.

> @@ -494,6 +534,23 @@ void xenvif_disconnect(struct xenvif *vif)
>  
>  void xenvif_free(struct xenvif *vif)
>  {
> +	int i, unmap_timeout = 0;
> +
> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
> +		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
> +			unmap_timeout++;
> +			schedule_timeout(msecs_to_jiffies(1000));

What are we waiting for here? Have we taken any action to ensure that it
is going to happen, like kicking something?

> +			if (unmap_timeout > 9 &&

Why 9? Why not rely on net_ratelimit to DTRT? Or is it normal for this
to fail at least once?

> +			    net_ratelimit())
> +				netdev_err(vif->dev,

I thought there was a ratelimited netdev printk which combined the
limiting and the printing in one function call. Maybe I am mistaken.

> +					   "Page still granted! Index: %x\n",
> +					   i);
> +			i = -1;
> +		}
> +	}
> +
> +	free_xenballooned_pages(MAX_PENDING_REQS, vif->mmap_pages);
> +
>  	netif_napi_del(&vif->napi);
>  
>  	unregister_netdev(vif->dev);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 195602f..747b428 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -646,9 +646,12 @@ static void xenvif_tx_err(struct xenvif *vif,
>  			  struct xen_netif_tx_request *txp, RING_IDX end)
>  {
>  	RING_IDX cons = vif->tx.req_cons;
> +	unsigned long flags;
>  
>  	do {
> +		spin_lock_irqsave(&vif->response_lock, flags);

Looking at the callers you have added, it would seem more natural to
handle the locking within make_tx_response itself.

What are you locking against here? Is this different to the dealloc
lock? If the concern is the rx action stuff and the dealloc stuff
conflicting perhaps a single vif lock would make sense?
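A userspace sketch of what pushing the lock into make_tx_response itself might look like, so no caller has to remember to take response_lock around it. A C11 atomic_flag spinlock stands in for the irq-safe spinlock, and the vif/ring types are mocked; all names here are illustrative, not the real driver API.

```c
#include <assert.h>
#include <stdatomic.h>

/* Mocked vif: just the lock and a producer counter standing in for
 * vif->tx.rsp_prod_pvt. */
struct vif_sketch {
	atomic_flag response_lock; /* stands in for spinlock_t */
	unsigned int rsp_prod;
};

/* The lock is taken and dropped inside the function, mirroring what
 * spin_lock_irqsave/spin_unlock_irqrestore would do in the kernel. */
static void make_tx_response_sketch(struct vif_sketch *vif, int status)
{
	while (atomic_flag_test_and_set(&vif->response_lock))
		; /* spin */
	/* ...fill in a xen_netif_tx_response with 'status' here... */
	(void)status;
	vif->rsp_prod++;
	atomic_flag_clear(&vif->response_lock);
}
```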

>  		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
> +		spin_unlock_irqrestore(&vif->response_lock, flags);
>  		if (cons == end)
>  			break;
>  		txp = RING_GET_REQUEST(&vif->tx, cons++);
> @@ -787,10 +790,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
>  	       sizeof(*txp));
>  }
>  
> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
> -					       struct sk_buff *skb,
> -					       struct xen_netif_tx_request *txp,
> -					       struct gnttab_copy *gop)
> +static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
> +							struct sk_buff *skb,
> +							struct xen_netif_tx_request *txp,
> +							struct gnttab_map_grant_ref *gop)
>  {
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	skb_frag_t *frags = shinfo->frags;
> @@ -909,9 +841,9 @@ err:
>  
>  static int xenvif_tx_check_gop(struct xenvif *vif,
>  			       struct sk_buff *skb,
> -			       struct gnttab_copy **gopp)
> +			       struct gnttab_map_grant_ref **gopp)
>  {
> -	struct gnttab_copy *gop = *gopp;
> +	struct gnttab_map_grant_ref *gop = *gopp;
>  	u16 pending_idx = *((u16 *)skb->data);
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	struct pending_tx_info *tx_info;
> @@ -923,6 +855,17 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	err = gop->status;
>  	if (unlikely(err))
>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +	else {
> +		if (vif->grant_tx_handle[pending_idx] !=
> +		    NETBACK_INVALID_HANDLE) {
> +			netdev_err(vif->dev,
> +				   "Stale mapped handle! pending_idx %x handle %x\n",
> +				   pending_idx,
> +				   vif->grant_tx_handle[pending_idx]);
> +			BUG();
> +		}
> +		vif->grant_tx_handle[pending_idx] = gop->handle;
> +	}
>  
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		head = tx_info->head;
>  
>  		/* Check error status: if okay then remember grant handle. */
> -		do {
>  			newerr = (++gop)->status;
> -			if (newerr)
> -				break;
> -			peek = vif->pending_ring[pending_index(++head)];
> -		} while (!pending_tx_is_head(vif, peek));
>  
>  		if (likely(!newerr)) {
> +			if (vif->grant_tx_handle[pending_idx] !=
> +			    NETBACK_INVALID_HANDLE) {
> +				netdev_err(vif->dev,
> +					   "Stale mapped handle! pending_idx %x handle %x\n",
> +					   pending_idx,
> +					   vif->grant_tx_handle[pending_idx]);
> +				BUG();
> +			}

You had the same thing earlier. Perhaps a helper function would be
useful?
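One possible shape for such a helper, as a hedged userspace sketch: the duplicated "stale mapped handle" check collapses into one function. The name, the plain-array argument, and the abort() (standing in for BUG()) are all illustrative, not from the patch.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define NETBACK_INVALID_HANDLE_DEMO ((uint16_t)~0)

/* Check-and-set in one place: BUG (abort here) on a stale handle,
 * otherwise record the freshly mapped one. */
static void grant_tx_handle_set(uint16_t *handles, uint16_t pending_idx,
                                uint16_t new_handle)
{
	if (handles[pending_idx] != NETBACK_INVALID_HANDLE_DEMO) {
		fprintf(stderr,
		        "Stale mapped handle! pending_idx %x handle %x\n",
		        pending_idx, handles[pending_idx]);
		abort();
	}
	handles[pending_idx] = new_handle;
}
```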

> +			vif->grant_tx_handle[pending_idx] = gop->handle;
>  			/* Had a previous error? Invalidate this fragment. */
> -			if (unlikely(err))
> +			if (unlikely(err)) {
> +				xenvif_idx_unmap(vif, pending_idx);
>  				xenvif_idx_release(vif, pending_idx,
>  						   XEN_NETIF_RSP_OKAY);

Would it make sense to unmap and release in a single function? (I
haven't looked to see if you ever do one without the other, but the next
page of diff had two more occurrences of them together.)
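A stub sketch of pairing the two operations in one helper: the real xenvif_idx_unmap and xenvif_idx_release touch the grant table and the pending ring, here they just count calls, and every name is illustrative. The value of the combined helper is that no call site can get the order wrong or forget one half.

```c
#include <assert.h>
#include <stdint.h>

static int unmap_calls, release_calls;

static void idx_unmap_stub(uint16_t pending_idx)
{
	(void)pending_idx;
	unmap_calls++;	/* would unmap the grant here */
}

static void idx_release_stub(uint16_t pending_idx)
{
	(void)pending_idx;
	release_calls++;	/* would return the pending slot here */
}

/* One helper, one convention: always unmap before releasing the slot. */
static void idx_unmap_and_release(uint16_t pending_idx)
{
	idx_unmap_stub(pending_idx);
	idx_release_stub(pending_idx);
}
```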

> +			}
>  			continue;
>  		}
>  
> @@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  
>  		/* First error: invalidate header and preceding fragments. */
>  		pending_idx = *((u16 *)skb->data);
> +		xenvif_idx_unmap(vif, pending_idx);
>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
>  		for (j = start; j < i; j++) {
>  			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
> +			xenvif_idx_unmap(vif, pending_idx);
>  			xenvif_idx_release(vif, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}

>  	}
> +	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
> +	 * overlaps with "index", and "mapping" is not set. I think mapping
> +	 * should be set. If delivered to local stack, it would drop this
> +	 * skb in sk_filter unless the socket has the right to use it.

What is the plan to fix this?

Is this dropping not a significant issue? (TBH I'm not sure what "has the
right to use it" would entail.)

> +	 */
> +	skb->pfmemalloc	= false;
>  }
>  
>  static int xenvif_get_extras(struct xenvif *vif,
> @@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)

> @@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
>  
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(vif,
> +				  skb,
> +				  skb_shinfo(skb)->destructor_arg ?
> +				  pending_idx :
> +				  INVALID_PENDING_IDX

Couldn't xenvif_fill_frags calculate the 3rd argument itself, given that
it has the skb in hand?
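A minimal mock of the idea, assuming only the two sk_buff fields the call site above uses (the real ones live in <linux/skbuff.h>): the head pending_idx is derived from the skb itself, which is what xenvif_fill_frags could do internally instead of taking it as a third argument. Struct and function names are hypothetical.

```c
#include <assert.h>
#include <stdint.h>

#define INVALID_PENDING_IDX_DEMO ((uint16_t)~0)

/* Mock of the bits of sk_buff the suggestion relies on. */
struct skb_mock {
	uint16_t data[1];     /* pending_idx is stashed at skb->data */
	void *destructor_arg; /* skb_shinfo(skb)->destructor_arg */
};

/* Same expression the caller computes today, moved behind the skb. */
static uint16_t head_pending_idx(const struct skb_mock *skb)
{
	return skb->destructor_arg ? skb->data[0]
	                           : INVALID_PENDING_IDX_DEMO;
}
```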

> );
>  
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
> @@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		if (checksum_setup(vif, skb)) {
>  			netdev_dbg(vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
> +			/* We have to set this flag so the dealloc thread can
> +			 * send the slots back

Wouldn't it be more accurate to say that we need it so that the callback
happens (which we then use to trigger the dealloc thread)?

> +			 */
> +			if (skb_shinfo(skb)->destructor_arg)
> +				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>  			kfree_skb(skb);
>  			continue;
>  		}
> @@ -1620,6 +1583,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  
>  		work_done++;
>  
> +		/* Set this flag right before netif_receive_skb, otherwise
> +		 * someone might think this packet already left netback, and
> +		 * do a skb_copy_ubufs while we are still in control of the
> +		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.

Hrm, subtle.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>  		if (cons == end)
>  			break;
>  		txp = RING_GET_REQUEST(&vif->tx, cons++);
> @@ -787,10 +790,10 @@ static inline void xenvif_tx_create_gop(struct xenvif *vif,
>  	       sizeof(*txp));
>  }
>  
> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
> -					       struct sk_buff *skb,
> -					       struct xen_netif_tx_request *txp,
> -					       struct gnttab_copy *gop)
> +static struct gnttab_map_grant_ref *xenvif_get_requests(struct xenvif *vif,
> +							struct sk_buff *skb,
> +							struct xen_netif_tx_request *txp,
> +							struct gnttab_map_grant_ref *gop)
>  {
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	skb_frag_t *frags = shinfo->frags;
> @@ -909,9 +841,9 @@ err:
>  
>  static int xenvif_tx_check_gop(struct xenvif *vif,
>  			       struct sk_buff *skb,
> -			       struct gnttab_copy **gopp)
> +			       struct gnttab_map_grant_ref **gopp)
>  {
> -	struct gnttab_copy *gop = *gopp;
> +	struct gnttab_map_grant_ref *gop = *gopp;
>  	u16 pending_idx = *((u16 *)skb->data);
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	struct pending_tx_info *tx_info;
> @@ -923,6 +855,17 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	err = gop->status;
>  	if (unlikely(err))
>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +	else {
> +		if (vif->grant_tx_handle[pending_idx] !=
> +		    NETBACK_INVALID_HANDLE) {
> +			netdev_err(vif->dev,
> +				   "Stale mapped handle! pending_idx %x handle %x\n",
> +				   pending_idx,
> +				   vif->grant_tx_handle[pending_idx]);
> +			BUG();
> +		}
> +		vif->grant_tx_handle[pending_idx] = gop->handle;
> +	}
>  
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		head = tx_info->head;
>  
>  		/* Check error status: if okay then remember grant handle. */
> -		do {
>  			newerr = (++gop)->status;
> -			if (newerr)
> -				break;
> -			peek = vif->pending_ring[pending_index(++head)];
> -		} while (!pending_tx_is_head(vif, peek));
>  
>  		if (likely(!newerr)) {
> +			if (vif->grant_tx_handle[pending_idx] !=
> +			    NETBACK_INVALID_HANDLE) {
> +				netdev_err(vif->dev,
> +					   "Stale mapped handle! pending_idx %x handle %x\n",
> +					   pending_idx,
> +					   vif->grant_tx_handle[pending_idx]);
> +				BUG();
> +			}

You had the same thing earlier. Perhaps a helper function would be
useful?

> +			vif->grant_tx_handle[pending_idx] = gop->handle;
>  			/* Had a previous error? Invalidate this fragment. */
> -			if (unlikely(err))
> +			if (unlikely(err)) {
> +				xenvif_idx_unmap(vif, pending_idx);
>  				xenvif_idx_release(vif, pending_idx,
>  						   XEN_NETIF_RSP_OKAY);

Would it make sense to unmap and release in a single function? (I
haven't looked to see if you ever do one without the other, but the next
page of diff had two more occurrences of them together)

> +			}
>  			continue;
>  		}
>  
> @@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  
>  		/* First error: invalidate header and preceding fragments. */
>  		pending_idx = *((u16 *)skb->data);
> +		xenvif_idx_unmap(vif, pending_idx);
>  		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
>  		for (j = start; j < i; j++) {
>  			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
> +			xenvif_idx_unmap(vif, pending_idx);
>  			xenvif_idx_release(vif, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}

>  	}
> +	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
> +	 * overlaps with "index", and "mapping" is not set. I think mapping
> +	 * should be set. If delivered to local stack, it would drop this
> +	 * skb in sk_filter unless the socket has the right to use it.

What is the plan to fix this?

Is this dropping not a significant issue? (TBH I'm not sure what "has the
right to use it" would entail.)

> +	 */
> +	skb->pfmemalloc	= false;
>  }
>  
>  static int xenvif_get_extras(struct xenvif *vif,
> @@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)

> @@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
>  
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(vif,
> +				  skb,
> +				  skb_shinfo(skb)->destructor_arg ?
> +				  pending_idx :
> +				  INVALID_PENDING_IDX

Couldn't xenvif_fill_frags calculate the 3rd argument itself, given that
it has the skb in hand?

> );
>  
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
> @@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		if (checksum_setup(vif, skb)) {
>  			netdev_dbg(vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
> +			/* We have to set this flag so the dealloc thread can
> +			 * send the slots back

Wouldn't it be more accurate to say that we need it so that the callback
happens (which we then use to trigger the dealloc thread)?

> +			 */
> +			if (skb_shinfo(skb)->destructor_arg)
> +				skb_shinfo(skb)->tx_flags |= SKBTX_DEV_ZEROCOPY;
>  			kfree_skb(skb);
>  			continue;
>  		}
> @@ -1620,6 +1583,14 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  
>  		work_done++;
>  
> +		/* Set this flag right before netif_receive_skb, otherwise
> +		 * someone might think this packet already left netback, and
> +		 * do a skb_copy_ubufs while we are still in control of the
> +		 * skb. E.g. the __pskb_pull_tail earlier can do such thing.

Hrm, subtle.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:45:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:45:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFojw-0000rh-0C; Tue, 18 Feb 2014 17:45:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFoju-0000rc-Tl
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:45:51 +0000
Received: from [85.158.137.68:2749] by server-11.bemta-3.messagelabs.com id
	E1/0C-04255-E4C93035; Tue, 18 Feb 2014 17:45:50 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392745547!2687624!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11221 invoked from network); 18 Feb 2014 17:45:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:45:49 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101873378"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 17:45:34 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:45:34 -0500
Message-ID: <1392745532.23084.65.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Tue, 18 Feb 2014 17:45:32 +0000
In-Reply-To: <1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:

Re the Subject: change how? Perhaps "handle foreign mapped pages on the
guest RX path" would be clearer.

> The RX path needs to know if the SKB fragments are stored on pages from another
> domain.

Does this not need to be done either before the mapping change or at the
same time? -- otherwise you have a window of a couple of commits where
things are broken, breaking bisectability.

> 
> v4:
> - indentation fixes
> 
> Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
> ---
>  drivers/net/xen-netback/netback.c |   46 +++++++++++++++++++++++++++++++++----
>  1 file changed, 41 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index f74fa92..d43444d 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -226,7 +226,9 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>  static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  				 struct netrx_pending_operations *npo,
>  				 struct page *page, unsigned long size,
> -				 unsigned long offset, int *head)
> +				 unsigned long offset, int *head,
> +				 struct xenvif *foreign_vif,
> +				 grant_ref_t foreign_gref)
>  {
>  	struct gnttab_copy *copy_gop;
>  	struct xenvif_rx_meta *meta;
> @@ -268,8 +270,15 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		copy_gop->flags = GNTCOPY_dest_gref;
>  		copy_gop->len = bytes;
>  
> -		copy_gop->source.domid = DOMID_SELF;
> -		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
> +		if (foreign_vif) {
> +			copy_gop->source.domid = foreign_vif->domid;
> +			copy_gop->source.u.ref = foreign_gref;
> +			copy_gop->flags |= GNTCOPY_source_gref;
> +		} else {
> +			copy_gop->source.domid = DOMID_SELF;
> +			copy_gop->source.u.gmfn =
> +				virt_to_mfn(page_address(page));
> +		}
>  		copy_gop->source.offset = offset;
>  
>  		copy_gop->dest.domid = vif->domid;
> @@ -330,6 +339,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  	int old_meta_prod;
>  	int gso_type;
>  	int gso_size;
> +	struct ubuf_info *ubuf = skb_shinfo(skb)->destructor_arg;
> +	grant_ref_t foreign_grefs[MAX_SKB_FRAGS];
> +	struct xenvif *foreign_vif = NULL;
>  
>  	old_meta_prod = npo->meta_prod;
>  
> @@ -370,6 +382,26 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  	npo->copy_off = 0;
>  	npo->copy_gref = req->gref;
>  
> +	if ((skb_shinfo(skb)->tx_flags & SKBTX_DEV_ZEROCOPY) &&
> +		 (ubuf->callback == &xenvif_zerocopy_callback)) {
> +		u16 pending_idx = ubuf->desc;
> +		int i = 0;
> +		struct pending_tx_info *temp =
> +			container_of(ubuf,
> +				     struct pending_tx_info,
> +				     callback_struct);
> +		foreign_vif =
> +			container_of(temp - pending_idx,
> +				     struct xenvif,
> +				     pending_tx_info[0]);
> +		do {
> +			pending_idx = ubuf->desc;
> +			foreign_grefs[i++] =
> +				foreign_vif->pending_tx_info[pending_idx].req.gref;
> +			ubuf = (struct ubuf_info *) ubuf->ctx;
> +		} while (ubuf);
> +	}
> +
>  	data = skb->data;
>  	while (data < skb_tail_pointer(skb)) {
>  		unsigned int offset = offset_in_page(data);
> @@ -379,7 +411,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  			len = skb_tail_pointer(skb) - data;
>  
>  		xenvif_gop_frag_copy(vif, skb, npo,
> -				     virt_to_page(data), len, offset, &head);
> +				     virt_to_page(data), len, offset, &head,
> +				     NULL,
> +				     0);
>  		data += len;
>  	}
>  
> @@ -388,7 +422,9 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>  				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>  				     skb_shinfo(skb)->frags[i].page_offset,
> -				     &head);
> +				     &head,
> +				     foreign_vif,
> +				     foreign_grefs[i]);
>  	}
>  
>  	return npo->meta_prod - old_meta_prod;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:46:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFokN-0000tj-DF; Tue, 18 Feb 2014 17:46:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WFokL-0000tT-Gh
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:46:17 +0000
Received: from [85.158.139.211:55527] by server-3.bemta-5.messagelabs.com id
	F0/45-13671-86C93035; Tue, 18 Feb 2014 17:46:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392745574!4737706!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24521 invoked from network); 18 Feb 2014 17:46:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:46:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103596514"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 17:46:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:46:13 -0500
Message-ID: <1392745572.23084.66.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Tue, 18 Feb 2014 17:46:12 +0000
In-Reply-To: <1392743412.23084.41.camel@kazak.uk.xensource.com>
References: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
	<1392743412.23084.41.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: Correctly handle non-page
 aligned pointer in raw_copy_from_guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 17:10 +0000, Ian Campbell wrote:
> On Tue, 2014-02-18 at 16:56 +0000, Julien Grall wrote:
> > The current implementation of raw_copy_guest helper may lead to data corruption
> > and sometimes Xen crash when the guest virtual address is not aligned to
> > PAGE_SIZE.
> > 
> > When the total length is higher than a page, the length to read is badly
> > computed with
> >     min(len, (unsigned)(PAGE_SIZE - offset))
> > 
> > As the offset is only computed once per function call, if the start address
> > was not aligned to PAGE_SIZE, we can end up in the same iteration:
> >     - reading across a page boundary => Xen crash
> >     - reading the previous page => data corruption
> > 
> > This issue can be resolved by setting offset to 0 at the end of the first
> > iteration. Indeed, after the first iteration the guest virtual address is
> > always aligned to PAGE_SIZE.
> > 
> > Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

and applied.

> > +        /*
> > +         * After the first iteration, guest virtual address is correctly
> > +         * aligned to PAGE_SIZE.
> > +         */
> 
> I'd like to duplicate this comment in the other two places too -- if you
> are OK with it I will do that as part of committing.

I did this.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:48:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFomM-00015m-LB; Tue, 18 Feb 2014 17:48:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WFomK-00015X-US
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 17:48:21 +0000
Received: from [85.158.139.211:25869] by server-8.bemta-5.messagelabs.com id
	64/EF-05298-4EC93035; Tue, 18 Feb 2014 17:48:20 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392745699!4698788!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2239 invoked from network); 18 Feb 2014 17:48:19 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:48:19 -0000
Received: by mail-ea0-f177.google.com with SMTP id m10so5383922eaj.22
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 09:48:19 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=kHIp3ESoryL1l5i49U/OpYiU5w4pjEUOodsLEyQQ1hg=;
	b=NSHyb0bDfWcedfbkt9ENF0FejVfAqimT7d4ScCz1ZmAXMWiOxi/LaGHxvxWNOSeD8U
	79kTxSpFy+nkx52GifGhMq4LYWdMm3yAMPOW2xO9WcVNQ1LKeZXlxSSMydE/o7MGepmR
	c08Qw8f7KbmK11Zte7w3k9D7PD5PJsm0xSNbmJcjs5JQkFoqVet9FHFSOrzYNXWfe+Jm
	dwFeriJ1sQaj+C4jw4/iBd0GZZ4r8pYjahWVnSLLluWAzd5EKlBuwnhIStA4VzooGxa4
	44/+1zXEBmoC0+F+B+qLD7nzyFXCaSCfaBgMyqtKBe0mKX0WCraVQgqohWCXjo12YOw3
	hOqg==
X-Gm-Message-State: ALoCoQnY3qU7pCVKTktwczedWOKAgN06MLiRYvDVjVMFbdLVnse8UJeSmVp4/h7QQzqVZ6fZVBKy
X-Received: by 10.14.104.201 with SMTP id i49mr9932146eeg.61.1392745699071;
	Tue, 18 Feb 2014 09:48:19 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	k41sm73052110een.19.2014.02.18.09.48.14 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 09:48:18 -0800 (PST)
Message-ID: <53039CDC.6050209@linaro.org>
Date: Tue, 18 Feb 2014 17:48:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392742577-3052-1-git-send-email-julien.grall@linaro.org>
	<1392743412.23084.41.camel@kazak.uk.xensource.com>
	<1392745572.23084.66.camel@kazak.uk.xensource.com>
In-Reply-To: <1392745572.23084.66.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	George Dunlap <george.dunlap@citrix.com>, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2] xen/arm: Correctly handle non-page
 aligned pointer in raw_copy_from_guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 05:46 PM, Ian Campbell wrote:
> On Tue, 2014-02-18 at 17:10 +0000, Ian Campbell wrote:
>> On Tue, 2014-02-18 at 16:56 +0000, Julien Grall wrote:
>>> The current implementation of the raw_copy_guest helper may lead to data
>>> corruption, and sometimes a Xen crash, when the guest virtual address is not
>>> aligned to PAGE_SIZE.
>>>
>>> When the total length is larger than a page, the length to read is badly
>>> computed with
>>>     min(len, (unsigned)(PAGE_SIZE - offset))
>>>
>>> As the offset is only computed once per function call, if the start address
>>> was not aligned to PAGE_SIZE, we can, in the same iteration, end up:
>>>     - reading across a page boundary => Xen crash
>>>     - reading the previous page => data corruption
>>>
>>> This issue can be resolved by setting offset to 0 at the end of the first
>>> iteration. Indeed, after it, the guest virtual address is always aligned
>>> to PAGE_SIZE.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> and applied.
> 
>>> +        /*
>>> +         * After the first iteration, guest virtual address is correctly
>>> +         * aligned to PAGE_SIZE.
>>> +         */
>>
>> I'd like to duplicate this comment in the other two places too -- if you
>> are OK with it I will do that as part of committing.
> 
> I did this.

Thanks! I didn't see this part in the previous message.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:52:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFoqf-0001O2-Ep; Tue, 18 Feb 2014 17:52:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.tseglytskyi@globallogic.com>)
	id 1WFoqe-0001Nx-8h
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 17:52:48 +0000
Received: from [85.158.143.35:54285] by server-3.bemta-4.messagelabs.com id
	D0/D7-11539-FED93035; Tue, 18 Feb 2014 17:52:47 +0000
X-Env-Sender: andrii.tseglytskyi@globallogic.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392745963!6563153!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31007 invoked from network); 18 Feb 2014 17:52:45 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 17:52:45 -0000
Received: from mail-vc0-f171.google.com ([209.85.220.171]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwOd6/4rSN90E1XXOtbM1SnPz7fHb15c@postini.com;
	Tue, 18 Feb 2014 09:52:45 PST
Received: by mail-vc0-f171.google.com with SMTP id le5so13689021vcb.30
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 09:52:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=hLuBjoWYTlfnO6a34dCZAEL++/JM0a62t6aeBNlQT1I=;
	b=c9BcLcOxCQHFtIXoMJOLlsVc3maFLxM9lKvx4/pXDog9inTvKkOcgkCUBVn0ztNlJ7
	SvkohiVnNp9TrZ3BMAelKY4hlDvugHBwz6yVidi/rCeUJxzd0lsdqJZK5KHlpDf4HByF
	tos2Rc3Y6rNwPbtIqxWg9ZlQdxS5ZLZxFWhjY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=hLuBjoWYTlfnO6a34dCZAEL++/JM0a62t6aeBNlQT1I=;
	b=Dk2lkk8kLqDLY6vvkGdwDXiiBQynLQ/NhQUvbhj59a7ouTm7ENmw3RdKLpUybom9Kr
	xzAsNpBR96jgYCh6euGEpcDw+sY0FNSXW0NcN2Qg9iZvZzBcbtyY+4Wvx2HUlxlRAOl/
	PgOVLiuowvlihqnnYxzZZ7zJwnf/Na4H9hNaIZ42ET/EmcoayjwncFV7uzF19OOtW7/F
	MgoE7anDV3Ui/b/Pc2fhf0yYsrkipEOfiXOD/ymqXpjr8kOrbhFD9Q7Bvne73WEZzZZ4
	gzkZqpnhCVC+qO0rhTrF1USQh/4x7JhChCdgJETuRFMhiwXXH6Bifj7kHDMAoCd7vi29
	CiuQ==
X-Gm-Message-State: ALoCoQn7I3Q3kPe0avj+u6zR6fqsPAUnD/bOrvKRa6LgM0dhu0lE76DJtSyM6CQTAB974x2tQwYRzd9ESsohv0JpWgO9G2F5CvL+UJ/eJh2hLZQWO0wu6PKpupycgTUYKEp6qJ6AcixTOBkNqilBpOfDM61EhbBXQgdGu4xr0nduhLTJTIezUOM=
X-Received: by 10.220.50.18 with SMTP id x18mr11232vcf.66.1392745962803;
	Tue, 18 Feb 2014 09:52:42 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.220.50.18 with SMTP id x18mr11224vcf.66.1392745962688; Tue,
	18 Feb 2014 09:52:42 -0800 (PST)
Received: by 10.220.99.195 with HTTP; Tue, 18 Feb 2014 09:52:42 -0800 (PST)
In-Reply-To: <1392721543.11080.22.camel@kazak.uk.xensource.com>
References: <CAH_mUMMtiujzBtvhSDos_SzjzuGeRpbZ6QkNn3PfPN3YjpnN5A@mail.gmail.com>
	<5301FA5F.8020602@linaro.org>
	<CAH_mUMP4kebPOn7xdSCgD4PqoxdkHYHHwghQukyAk+QLKO95SA@mail.gmail.com>
	<530219C4.3050304@linaro.org>
	<CAH_mUMP=G96gJG_vPvngMsPsktUDJOxFmmMQRBXfZ42iu3kAKQ@mail.gmail.com>
	<53022872.80209@linaro.org>
	<CAH_mUMO5HZ6NN35DXNL=9SUpUmBoE03eAQR5xKbDN4MMdmdP0A@mail.gmail.com>
	<1392721543.11080.22.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 19:52:42 +0200
Message-ID: <CAH_mUMPoiS8NF86pbRY2psY7OwBqrr5wpZ=Acn15ELfpkmFf+Q@mail.gmail.com>
From: Andrii Tseglytskyi <andrii.tseglytskyi@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] run time memory trap question
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,


On Tue, Feb 18, 2014 at 1:05 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-02-17 at 17:38 +0200, Andrii Tseglytskyi wrote:
>> Hi
>>
>> >
>> > > Something like this.
>> >
>> > This solution sounds good.
>> >
>> > If I remembered correctly, you are writing a driver for IPU/GPU MMU, right?
>>
>>
>> Right. And for now I started developing something like shadow page
>> table algorithm. I need to create a trap for a real pagetable, which
>> is placed somewhere in domain heap memory. That's why I thought about
>> generic algorithm which can create a runtime trap for memory, which
>> address is not defined during compile time.
>
> I think you should handle this case by making the page r/o in the p2m
> and using the p2m fault handler rather than the MMIO trap
> infrastructure.
>
> See e.g. the live migration patches by Samsung.
>

I hadn't thought in this direction. I will investigate this
possibility. Thank you for the suggestion. For now I have implemented the
solution discussed in the previous mails, and it works fine for me, but
I'll try what you are suggesting - it may be a better way.

Thank you,

regards,
Andrii

-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:57:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFovU-0001XH-A9; Tue, 18 Feb 2014 17:57:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFovS-0001XB-Do
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 17:57:46 +0000
Received: from [85.158.139.211:25881] by server-2.bemta-5.messagelabs.com id
	F2/06-23037-91F93035; Tue, 18 Feb 2014 17:57:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392746262!185709!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22447 invoked from network); 18 Feb 2014 17:57:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:57:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101877695"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 17:57:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:57:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFovG-0007KJ-Fo;
	Tue, 18 Feb 2014 17:57:34 +0000
Message-ID: <53039F0E.8010302@citrix.com>
Date: Tue, 18 Feb 2014 17:57:34 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <53023729.7020009@citrix.com> <53038D7A.8030807@oracle.com>
In-Reply-To: <53038D7A.8030807@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
	Tim Deegan <tim@xen.org>, Xen-devel List <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 16:42, Boris Ostrovsky wrote:
> On 02/17/2014 11:22 AM, Andrew Cooper wrote:
>> Hello,
>>
>> Here is a design proposal to improve VM feature levelling support in Xen
>> and libxc.
>
> In case you haven't seen this you might also find this useful:
> http://developer.amd.com/wordpress/media/2012/10/CrossVendorMigration.pdf
>
> It's a slightly different subject but perhaps some of the experiences
> that are listed there could influence your design.
>
> -boris

I had read that as part of preparing this.  It appears to be HVM
specific, which simplifies the feature reporting, as Xen can report any
information it likes in all circumstances.

From the point of view of a domain configuration, I absolutely still
want things like

cpuid = [ '0:eax=0x3,ebx=0x0,ecx=0x0,edx=0x0',
          '1:eax=0x0f61,
             ecx=xxxxxxxx0xx00xxxxxxxxx0xxxxxxxxx,
             edx=xxx0xxxxxxxxxxxxxxxxxxxxxxxxxxxx',
          '0x80000000:eax=0x80000004,ebx=0x0,ecx=0x0,edx=0x0',
          '0x80000001:eax=0x0f61,
                      ecx=xxxxxxxxxxxxxxxxxx0000000000000x,
                      edx=00xx000xx0xxx0xxxxxxxxxxxxxxxxxx']

to work, albeit with rather more reality enforced.  The main point of
this proposal is to also make it work for PV guests, at least as far as
the feature leaves are concerned.
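
As a rough sketch of the semantics of a 32-character register mask string in
the cpuid= syntax above ('0' forces a bit clear, '1' forces it set, 'x' keeps
the host's value), the following hypothetical helper could apply one; it is
for illustration only and is not libxc's actual parser:

```c
#include <stdint.h>

/*
 * Hypothetical helper: apply a 32-character CPUID mask string to a host
 * register value.  mask[0] corresponds to bit 31 (most significant bit).
 * '0' forces the bit clear, '1' forces it set, 'x' passes the host's
 * bit through unchanged.
 */
static uint32_t apply_cpuid_mask(uint32_t host, const char *mask)
{
    uint32_t out = 0;

    for (int i = 0; i < 32; i++) {
        uint32_t bit = 1u << (31 - i);

        switch (mask[i]) {
        case '1':
            out |= bit;          /* force set */
            break;
        case 'x':
            out |= host & bit;   /* keep host value */
            break;
        default:
            break;               /* '0': force clear */
        }
    }

    return out;
}
```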


On that note, I do have a question for anyone from AMD who might know
the answer.

AMD has the CPUID override MSRs 0xc001100{4,5} which cover the basic and
extended feature leaves.  Are there any MSRs to cover
CPUID.0000_000D[ecx=1].eax, which contains the 'XSAVEOPT' bit, or
CPUID.0000_0007[ecx=0].ebx, which is the "structured extended" feature
map?  I can't find any reference to new override MSRs in the manuals (or
via Google), or to CPUID faulting support like Intel CPUs have.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:57:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFovU-0001XH-A9; Tue, 18 Feb 2014 17:57:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFovS-0001XB-Do
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 17:57:46 +0000
Received: from [85.158.139.211:25881] by server-2.bemta-5.messagelabs.com id
	F2/06-23037-91F93035; Tue, 18 Feb 2014 17:57:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392746262!185709!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22447 invoked from network); 18 Feb 2014 17:57:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 17:57:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101877695"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 17:57:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 12:57:35 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFovG-0007KJ-Fo;
	Tue, 18 Feb 2014 17:57:34 +0000
Message-ID: <53039F0E.8010302@citrix.com>
Date: Tue, 18 Feb 2014 17:57:34 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <53023729.7020009@citrix.com> <53038D7A.8030807@oracle.com>
In-Reply-To: <53038D7A.8030807@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
	Tim Deegan <tim@xen.org>, Xen-devel List <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 16:42, Boris Ostrovsky wrote:
> On 02/17/2014 11:22 AM, Andrew Cooper wrote:
>> Hello,
>>
>> Here is a design proposal to improve VM feature levelling support in Xen
>> and libxc.
>
> In case you haven't seen this you might also find this useful:
> http://developer.amd.com/wordpress/media/2012/10/CrossVendorMigration.pdf
>
> It's a slightly different subject but perhaps some of the experiences
> that are listed there could influence your design.
>
> -boris

I had read that as part of preparing this.  It appears to be HVM
specific, which makes the feature reporting straightforward, as Xen can
report any information it likes in all circumstances.

>From the point of view of a domain configuration, I absolutely still
want things like

cpuid = [ '0:eax=0x3,ebx=0x0,ecx=0x0,edx=0x0',
          '1:eax=0x0f61,
             ecx=xxxxxxxx0xx00xxxxxxxxx0xxxxxxxxx,
             edx=xxx0xxxxxxxxxxxxxxxxxxxxxxxxxxxx',
          '0x80000000:eax=0x80000004,ebx=0x0,ecx=0x0,edx=0x0',
          '0x80000001:eax=0x0f61,
                      ecx=xxxxxxxxxxxxxxxxxx0000000000000x,
                      edx=00xx000xx0xxx0xxxxxxxxxxxxxxxxxx']

to work, albeit with rather more reality enforced.  The main point of
this proposal is to also make it work for PV guests, at least as far as
the feature leaves are concerned.
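For illustration, the per-bit mask syntax in the config fragment above could be handled along these lines.  This is a hypothetical Python sketch of the idea only; `parse_reg_policy` and `apply_policy` are made-up names and this is not libxc's actual parser:

```python
def parse_reg_policy(s):
    """Parse a 32-character per-bit register policy (MSB first) into a
    (mask, value) pair: '0'/'1' force the bit, 'x' leaves the host bit
    unchanged."""
    if len(s) != 32:
        raise ValueError("policy must be exactly 32 characters")
    mask = value = 0
    for i, c in enumerate(s):
        bit = 1 << (31 - i)  # leftmost character describes bit 31
        if c in "01":
            mask |= bit
            if c == "1":
                value |= bit
        elif c != "x":
            raise ValueError("unexpected character %r" % c)
    return mask, value

def apply_policy(host_reg, mask, value):
    # Forced bits come from 'value'; 'x' bits pass through from the host.
    return (host_reg & ~mask) | (value & mask)
```

A toolstack applying such a policy would read the host CPUID leaf, run each register through `apply_policy`, and present the result to the guest.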


On that note, I do have a question for anyone from AMD who might know
the answer.

AMD has the CPUID override MSRs 0xc001100{4,5} which cover the basic and
extended feature leaves.  Are there any MSRs to cover
CPUID.0000_000D[ecx=1].eax which contains the 'XSAVEOPT' bit, or
CPUID.0000_0007[ecx=1].ebx which is the "structured extended" feature
map?  I can't find any reference to new override MSRs in the manuals (or
with Google), or to having cpuid faulting support like Intel cpus.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 17:58:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 17:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFowW-0001hY-OX; Tue, 18 Feb 2014 17:58:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WFowV-0001hR-6J
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 17:58:51 +0000
Received: from [85.158.137.68:9351] by server-17.bemta-3.messagelabs.com id
	55/D5-22569-A5F93035; Tue, 18 Feb 2014 17:58:50 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392746328!1441462!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 465 invoked from network); 18 Feb 2014 17:58:49 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 17:58:49 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 18 Feb 2014 17:58:47 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="656125823"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.4.222])
	by fldsmtpi03.verizon.com with ESMTP; 18 Feb 2014 17:58:45 +0000
Message-ID: <53039F55.3030901@terremark.com>
Date: Tue, 18 Feb 2014 12:58:45 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>, 
	Simon Martin <furryfuttock@gmail.com>
References: <1646915994.20140213165604@gmail.com>	
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>	
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>	
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
In-Reply-To: <1392742549.32038.580.camel@Solace>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Nate Studer <nate.studer@dornerworks.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/14 11:55, Dario Faggioli wrote:
> On lun, 2014-02-17 at 12:46 +0000, Simon Martin wrote:
>> Hyperthreading is just a way to improve CPU resource utilization. Even
>> if you are doing a CPU intensive operation, a lot of the processor
>> circuits are actually idle, so adding 2 pipelines to feed one
>> processor is a good way to improve total throughput, but it does
>> have its caveats. I totally forgot this.
>>
>> Given the way that this works there isn't much that Xen can do. It is
>> a physical restriction.
>>
> I know, and my point was not that we should try to "fix" in Xen, what's
> impossible to fix, because it's just how hardware works... and that is
> by design!
>
> I was only saying that there are cases when you still need isolation
> but can afford a bit more uncertainty, and there is the possibility
> for Xen to at least try to do the right thing automagically.
>
> Basically, I'm ok with people looking for hard real-time response time
> and jitter to have to pick up the proper hw platform, as a first step,
> and properly fine tune it. For more _soft_ real-time workloads, I wish
> we were (think we should be) able to do better, perhaps still with some
> user required tweaks, but nothing equally intrusive, that's it. :-)

I would think that a warning message from the xl command(s) dealing with cpupools would help here.  See below.

>> OK. This is my current configuration:
>>
>> Dom0     PCPU 0,1,2   no pinning
>> win7x64  PCPU   1,2   pinned
>> pv499    PCPU       3 pinned
>>
>> And I get the same interdependence.
>>
>> root@smartin-xen:~# xl list
>> Name                                        ID   Mem VCPUs      State   Time(s)
>> Domain-0                                     0  1253     3     r-----      37.1
>> win7x64                                      4  2046     2     ------     106.8
>> pv499                                        5   128     1     r-----      66.8
>> root@smartin-xen:~# xl vcpu-list
>> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
>> Domain-0                             0     0    1   -b-      21.2  all
>> Domain-0                             0     1    2   r--       8.8  all
>> Domain-0                             0     2    0   -b-       8.2  all
>> win7x64                              4     0    1   -b-      59.8  1
>> win7x64                              4     1    2   -b-      49.0  2
>> pv499                                5     0    3   r--      70.2  3
>> root@smartin-xen:~# xl cpupool-list -c
>> Name               CPU list
>> Pool-0             0,1,2

Change to something like:


Pool-0             0,1,2 (and part of 3)




>> pv499              3

Or add:

Warning: cpupools share hyperthreaded cpus.

    -Don Slutz

>> I have gone back to my working settings.
>>
> Ok, thanks a lot for trying. I was hoping for that tweak, although
> developed for completely different reasons, to be helpful in this case,
> but it appears it is not.
>
> Probably, I wouldn't have pinned the two win7 vcpus either, but anyway,
> I don't want to eat any more of your time... Glad you find a
> configuration that is working out! :-)
>
> Thanks again and Regards,
> Dario
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:06:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:06:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFp3s-0001xo-OA; Tue, 18 Feb 2014 18:06:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WFp3r-0001xj-OR
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 18:06:27 +0000
Received: from [85.158.143.35:27404] by server-2.bemta-4.messagelabs.com id
	E3/DF-10891-321A3035; Tue, 18 Feb 2014 18:06:27 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392746784!6593409!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7305 invoked from network); 18 Feb 2014 18:06:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 18:06:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; 
	d="asc'?scan'208";a="103605065"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 18:06:23 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 13:06:23 -0500
Message-ID: <1392746781.32038.594.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Don Slutz <dslutz@verizon.com>
Date: Tue, 18 Feb 2014 19:06:21 +0100
In-Reply-To: <53039F55.3030901@terremark.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
	<53039F55.3030901@terremark.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	xen-devel@lists.xen.org, Simon Martin <furryfuttock@gmail.com>, Nate
	Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5820883154592364310=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5820883154592364310==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-T64TgQmhJCHLzI1eqwbx"

--=-T64TgQmhJCHLzI1eqwbx
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mar, 2014-02-18 at 12:58 -0500, Don Slutz wrote:
> >> root@smartin-xen:~# xl cpupool-list -c
> >> Name               CPU list
> >> Pool-0             0,1,2
>=20
> Change to something like:
>=20
>=20
> Pool-0             0,1,2 (and part of 3)
>=20
This would be cool, and I personally would be all for it... but it
perhaps would not be that clear at pointing the user toward
hyperthreading. :-P

> Or add:
>=20
> Warning: cpupools share hyperthreaded cpus.
>=20
While this one, although a bit more "boring" than the above, would
probably be something quite valuable to have!

I can only think of rather expensive ways of implementing it, involving
going through all the cpupools and, for each cpupool, through all its
cpus and check the topology relationships, but perhaps there are others
(I'll think harder).

Also, we are certainly not talking about hot paths.
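The walk described above (over every cpupool, then its cpus, checking sibling relationships) could look something like this.  A hypothetical Python sketch only; the pool/sibling data structures are assumptions for illustration, not xl or Xen data types:

```python
def pools_sharing_siblings(pools, siblings):
    """Detect cpupool pairs whose cpus are hyperthread siblings.

    pools:    dict mapping pool name -> set of pcpu ids in that pool.
    siblings: dict mapping pcpu id -> set of its HT siblings
              (symmetric, including the cpu itself).
    Returns the set of (pool_a, pool_b) name pairs, a < b, that
    contain cpus sharing a physical core.
    """
    shared = set()
    names = sorted(pools)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for cpu in pools[a]:
                # Any sibling of one of a's cpus living in pool b
                # means the two pools share a hyperthreaded core.
                if siblings.get(cpu, {cpu}) & pools[b]:
                    shared.add((a, b))
                    break
    return shared
```

With the configuration from the thread (Pool-0 on cpus 0-2, pv499 on cpu 3, and cpus 2/3 being siblings), this would flag the Pool-0/pv499 pair and could drive the warning message.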

Juergen?

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-T64TgQmhJCHLzI1eqwbx
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMDoR0ACgkQk4XaBE3IOsSlkACgprHdqyFHLhSfsHDL37MnXfS7
X6QAnjGX94AeRrNw86U7lmCJriZczbug
=vXsa
-----END PGP SIGNATURE-----

--=-T64TgQmhJCHLzI1eqwbx--


--===============5820883154592364310==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5820883154592364310==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 18:14:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFpBo-0002Bv-NQ; Tue, 18 Feb 2014 18:14:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WFpBn-0002Bq-7n
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 18:14:39 +0000
Received: from [85.158.137.68:15336] by server-11.bemta-3.messagelabs.com id
	03/B9-04255-E03A3035; Tue, 18 Feb 2014 18:14:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392747275!1131988!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4782 invoked from network); 18 Feb 2014 18:14:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 18:14:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="103608554"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 18 Feb 2014 18:14:34 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 13:14:34 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WFpBh-0007YT-Ex;
	Tue, 18 Feb 2014 18:14:33 +0000
Message-ID: <5303A309.3070900@citrix.com>
Date: Tue, 18 Feb 2014 18:14:33 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392739145-24664-1-git-send-email-andrew.cooper3@citrix.com>
	<1392741544.23084.24.camel@kazak.uk.xensource.com>
In-Reply-To: <1392741544.23084.24.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools/libxl: Don't read off the end of
	tinfo[]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 16:39, Ian Campbell wrote:
> On Tue, 2014-02-18 at 15:59 +0000, Andrew Cooper wrote:
>> It is very common for BIOSes to advertise more cpus than are actually present
>> on the system, and mark some of them as offline.  This is what Xen does to
>> allow for later CPU hotplug, and what BIOSes common to multiple different
>> systems do to save fully rewriting the MADT in memory.
>>
>> An excerpt from `xl info` might look like:
>>
>> ...
>> nr_cpus                : 2
>> max_cpu_id             : 3
>> ...
>>
>> Which shows 4 CPUs in the MADT, but only 2 online (as this particular box is
>> the dual-core rather than the quad-core SKU of its particular brand).
>>
>> Because of the way Xen exposes this information, a libxl_cputopology array is
>> bounded by 'nr_cpus', while cpu bitmaps are bounded by 'max_cpu_id + 1'.
>>
>> The current libxl code has two places which erroneously assume that a
>> libxl_cputopology array is as long as the number of bits found in a cpu
>> bitmap, and valgrind complains:
>>
>> ==14961== Invalid read of size 4
>> ==14961==    at 0x407AB7F: libxl__get_numa_candidate (libxl_numa.c:230)
>> ==14961==    by 0x407030B: libxl__build_pre (libxl_dom.c:167)
>> ==14961==    by 0x406246F: libxl__domain_build (libxl_create.c:371)
>> ...
>> ==14961==  Address 0x4324788 is 8 bytes after a block of size 24 alloc'd
>> ==14961==    at 0x402669D: calloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
>> ==14961==    by 0x4075BB9: libxl__zalloc (libxl_internal.c:83)
>> ==14961==    by 0x4052F87: libxl_get_cpu_topology (libxl.c:4408)
>> ==14961==    by 0x407A899: libxl__get_numa_candidate (libxl_numa.c:342)
>> ...
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
>
> Unless someone argues otherwise this is going into my 4.5 pile.
>
>

If 4.4 gets delayed, and patches such as the RTC series are re-up for
consideration, then this should also be considered.

If not, then 4.5 is fine, along with a backport to 4.4.x and 4.3.x.
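The off-by-end pattern the commit message describes (a topology array of
length nr_cpus indexed by bit positions from a bitmap of width
max_cpu_id + 1) can be illustrated with this hypothetical sketch; it is
not the actual libxl code, and the names are made up:

```python
def cpus_in_nodemask(topology, cpumap, nodes):
    """Count online cpus from a bitmap that sit on the given NUMA nodes.

    topology: per-cpu node ids, length nr_cpus (online cpus only).
    cpumap:   iterable of cpu indices set in a bitmap whose width is
              max_cpu_id + 1 and may therefore exceed nr_cpus.
    nodes:    set of acceptable node ids.
    """
    count = 0
    for cpu in cpumap:
        if cpu >= len(topology):
            # Offline/hotpluggable cpu: present in the bitmap but has
            # no topology entry, so it must be skipped, not indexed.
            continue
        if topology[cpu] in nodes:
            count += 1
    return count
```

Dropping the bounds check is exactly the invalid read valgrind reports:
the loop would index `topology[]` 8 bytes past the end of a 2-entry
allocation on the dual-core box from the example.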

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] tools/libxl: Don't read off the end of
	tinfo[]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 16:39, Ian Campbell wrote:
> On Tue, 2014-02-18 at 15:59 +0000, Andrew Cooper wrote:
>> It is very common for BIOSes to advertise more cpus than are actually present
>> on the system, and mark some of them as offline.  This is what Xen does to
>> allow for later CPU hotplug, and what BIOSes common to multiple different
>> systems do to save fully rewriting the MADT in memory.
>>
>> An excerpt from `xl info` might look like:
>>
>> ...
>> nr_cpus                : 2
>> max_cpu_id             : 3
>> ...
>>
>> Which shows 4 CPUs in the MADT, but only 2 online (as this particular box is
>> the dual-core rather than the quad-core SKU of its particular brand).
>>
>> Because of the way Xen exposes this information, a libxl_cputopology array is
>> bounded by 'nr_cpus', while cpu bitmaps are bounded by 'max_cpu_id + 1'.
>>
>> The current libxl code has two places which erroneously assume that a
>> libxl_cputopology array is as long as the number of bits found in a cpu
>> bitmap, and valgrind complains:
>>
>> ==14961== Invalid read of size 4
>> ==14961==    at 0x407AB7F: libxl__get_numa_candidate (libxl_numa.c:230)
>> ==14961==    by 0x407030B: libxl__build_pre (libxl_dom.c:167)
>> ==14961==    by 0x406246F: libxl__domain_build (libxl_create.c:371)
>> ...
>> ==14961==  Address 0x4324788 is 8 bytes after a block of size 24 alloc'd
>> ==14961==    at 0x402669D: calloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
>> ==14961==    by 0x4075BB9: libxl__zalloc (libxl_internal.c:83)
>> ==14961==    by 0x4052F87: libxl_get_cpu_topology (libxl.c:4408)
>> ==14961==    by 0x407A899: libxl__get_numa_candidate (libxl_numa.c:342)
>> ...
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
>
> Unless someone argues otherwise this is going into my 4.5 pile.
>
>

If 4.4 gets delayed, and patches such as the RTC series are re-up for
consideration, then this should also be considered.

If not, then 4.5 is fine, along with a backport to 4.4.x and 4.3.x.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:17:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFpEg-0002KM-GU; Tue, 18 Feb 2014 18:17:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WFpEe-0002KG-81
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 18:17:36 +0000
Received: from [85.158.139.211:32894] by server-14.bemta-5.messagelabs.com id
	AE/8D-27598-FB3A3035; Tue, 18 Feb 2014 18:17:35 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392747452!4738989!1
X-Originating-IP: [64.18.0.186]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10428 invoked from network); 18 Feb 2014 18:17:34 -0000
Received: from exprod5og108.obsmtp.com (HELO exprod5og108.obsmtp.com)
	(64.18.0.186)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 18 Feb 2014 18:17:34 -0000
Received: from mail-wi0-f175.google.com ([209.85.212.175]) (using TLSv1) by
	exprod5ob108.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwOjvObcrGtyZaBQEgkZiCvEWAPetOs1@postini.com;
	Tue, 18 Feb 2014 10:17:34 PST
Received: by mail-wi0-f175.google.com with SMTP id hm4so3816542wib.8
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 10:17:31 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=W/OPLo2iY/acDGtE921d+FAOH6VxwhuCY5hT1Sio3Yg=;
	b=jiCJB60mVHhwoYe59tqUyexrt6jq/F6aEnYBzRvNqV88q/ntXDsVdV12167nFBfpCg
	CAdh5XbDR1ga7UiqqVGCAQ/zbyaxEvPa+wu4Kl6TO+igIqtpdHH2AuE5QjqzVR/m99du
	jEqSiFNUHI6qSm5xBV+7+onkKT7NpY58OpxDk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=W/OPLo2iY/acDGtE921d+FAOH6VxwhuCY5hT1Sio3Yg=;
	b=C9exQRHy6ZC4IRuvqzwER165RKh+Sppmn1zZwfLq3Y1zMNkYn0i9713juktvdYcTX+
	4nifk6iaVkHjDD3uEP+w6zsnaUjqCSgQXHzJO8CXAR9yOuz1IzSOHhNxgS1Lm63J213+
	nk1uR+vBhPOPDQ2kX5oomOu1SMZ072NkS8/FZeCcLY0nPb/HgVATqobxoj8oSsQw5L1n
	06XLhyPIq7TcC+r1LgybaEtcIOQ5ZmFplJ6O3TMbFhtmQcxwEgdUyzwUlRXLeQwFxqfc
	bL0PgNqw/APq6k3Fa+peg6rmWiRW+W/8HfdHVRjAZQt49VO2uqVgo017aYYCbVtP/ds6
	X9Mw==
X-Gm-Message-State: ALoCoQklng3sqLuhwmSU4boj5p6qoio71VHe/nddb3/3nK6jj4Vg66UxVNFvooLuyr0tsBuayekF5xNctnoXdAJlvDwGQZ2OijN9KYLe7tm30idZ7ydZHb3+N//CF28fGyzB5kSg8v/S8RseV2NR4wRbwbm2/Q2Vm9POOYtT8yHf2npY/LM19Uo=
X-Received: by 10.194.161.136 with SMTP id xs8mr3667239wjb.56.1392747451024;
	Tue, 18 Feb 2014 10:17:31 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.194.161.136 with SMTP id xs8mr3667231wjb.56.1392747450950;
	Tue, 18 Feb 2014 10:17:30 -0800 (PST)
Received: by 10.216.31.67 with HTTP; Tue, 18 Feb 2014 10:17:30 -0800 (PST)
In-Reply-To: <1392737632.23084.4.camel@kazak.uk.xensource.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
Date: Tue, 18 Feb 2014 20:17:30 +0200
Message-ID: <CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Andrii Anisov <andrii.anisov@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian,
I have checked your suggestion with a full cache flush.
For this purpose I used an ARMv7-specific function from our U-Boot.
This function performs a clean and invalidate of the entire data cache
at all levels.
I call it in __cpu_up(), before the call to arch_cpu_up().

It seems the issue no longer reproduces.
Now we need to find the missing cache flushes. :)

Thank you.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:27:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFpOG-0002au-NF; Tue, 18 Feb 2014 18:27:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WFpOF-0002ap-VA
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 18:27:32 +0000
Received: from [85.158.139.211:31372] by server-5.bemta-5.messagelabs.com id
	86/13-32749-316A3035; Tue, 18 Feb 2014 18:27:31 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392748046!4741551!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15233 invoked from network); 18 Feb 2014 18:27:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 18:27:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101891723"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 18:27:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 13:27:23 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WFpO6-0007kd-Eb;
	Tue, 18 Feb 2014 18:27:22 +0000
Message-ID: <5303A608.6050704@eu.citrix.com>
Date: Tue, 18 Feb 2014 18:27:20 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <1392743732-23759-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1392743732-23759-1-git-send-email-ian.jackson@eu.citrix.com>
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] libxl: Properly declare libxlu_disk_l.h in
	AUTOINCS
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 05:15 PM, Ian Jackson wrote:
> This is necessary so that make doesn't do things which depend on this
> file until flex has finished producing it.
>
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> Tested-by: Olaf Hering <olaf@aepfle.de>
> CC: George Dunlap <george.dunlap@eu.citrix.com>

This is a pretty obvious simple fix:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

> ---
>   tools/libxl/Makefile |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index dab2929..755b666 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -98,7 +98,7 @@ TEST_PROGS += $(foreach t, $(LIBXL_TESTS),test_$t)
>   $(LIBXL_OBJS) $(LIBXL_TEST_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
>   
>   AUTOINCS= libxlu_cfg_y.h libxlu_cfg_l.h _libxl_list.h _paths.h \
> -	_libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
> +	libxlu_disk_l.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
>   AUTOSRCS= libxlu_cfg_y.c libxlu_cfg_l.c
>   AUTOSRCS += _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
>   LIBXLU_OBJS = libxlu_cfg_y.o libxlu_cfg_l.o libxlu_cfg.o \


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:34:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:34:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFpUd-0002m3-Jj; Tue, 18 Feb 2014 18:34:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1WFpUb-0002ly-Pj
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 18:34:05 +0000
Received: from [85.158.143.35:45174] by server-1.bemta-4.messagelabs.com id
	05/29-31661-D97A3035; Tue, 18 Feb 2014 18:34:05 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392748443!6570141!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2372 invoked from network); 18 Feb 2014 18:34:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 18:34:04 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101894730"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 18:33:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 13:33:49 -0500
Received: from chilopoda.uk.xensource.com ([10.80.2.139])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<julien.grall@citrix.com>)	id 1WFpUL-0007qZ-7y;
	Tue, 18 Feb 2014 18:33:49 +0000
Message-ID: <5303A78C.6090709@citrix.com>
Date: Tue, 18 Feb 2014 18:33:48 +0000
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
	<CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
In-Reply-To: <CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrii Anisov <andrii.anisov@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 06:17 PM, Oleksandr Tyshchenko wrote:
> Ian,

Hello Oleksandr,

> I have checked your suggestion with full cache flush.
> For this purposes I have used ARMV7 specific function from our U-Boot.
> This function performs clean and invalidation of the entire data cache
> at all levels.

Did you try only cleaning the cache? When the page tables for the secondary
CPU are created, Xen only cleans the cache for the specific range.
I suspect that is not enough and we also need to invalidate.

It should be easy to try with this small patch:

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index e00be9e..5a8aba2 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -234,7 +234,7 @@ static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
+        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 06:17 PM, Oleksandr Tyshchenko wrote:
> Ian,

Hello Oleksandr,

> I have checked your suggestion with a full cache flush.
> For this purpose I have used an ARMv7-specific function from our U-Boot.
> This function performs a clean and invalidation of the entire data cache
> at all levels.

Did you try to only clean the cache? When the page table for the secondary
CPU is created, Xen only cleans the cache for that specific range.
I suspect that's not enough and we also need to invalidate.

It should be easy to try with this small patch:

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index e00be9e..5a8aba2 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -234,7 +234,7 @@ static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
     void *end;
     dsb();           /* So the CPU issues all writes to the range */
     for ( end = p + size; p < end; p += cacheline_bytes )
-        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
+        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
     dsb();           /* So we know the flushes happen before continuing */
 }

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:41:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFpbB-000336-GD; Tue, 18 Feb 2014 18:40:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <oleksandr.tyshchenko@globallogic.com>)
	id 1WFpb9-000331-VD
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 18:40:52 +0000
Received: from [85.158.139.211:14192] by server-5.bemta-5.messagelabs.com id
	EF/C0-32749-239A3035; Tue, 18 Feb 2014 18:40:50 +0000
X-Env-Sender: oleksandr.tyshchenko@globallogic.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392748847!4718068!1
X-Originating-IP: [64.18.0.24]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18032 invoked from network); 18 Feb 2014 18:40:49 -0000
Received: from exprod5og112.obsmtp.com (HELO exprod5og112.obsmtp.com)
	(64.18.0.24)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 18:40:49 -0000
Received: from mail-wg0-f52.google.com ([74.125.82.52]) (using TLSv1) by
	exprod5ob112.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwOpLgKabjnsPoj4YGi1Xbb/9n5qstQi@postini.com;
	Tue, 18 Feb 2014 10:40:49 PST
Received: by mail-wg0-f52.google.com with SMTP id b13so3625731wgh.19
	for <xen-devel@lists.xen.org>; Tue, 18 Feb 2014 10:40:45 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=5QX4lV1sjuwryFPF4KeCGDfUQzQKzyy0C04i7ocFmh8=;
	b=S6SPCzSSevvxoHUYw5XYsRLGgENNxn+ijPZdqFYhYUaSBKWqHmpa/0ODURNK4fDc9a
	3fQ8zrhR9CbN4gjl6aw44Jtw33mTBElonBeuK1umkET1x0p9o3sK3OFa1+/42IxjhcQJ
	atI4sHzlcygzqhVkCR443N5ylyLblFzxQn2Mk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=5QX4lV1sjuwryFPF4KeCGDfUQzQKzyy0C04i7ocFmh8=;
	b=DiJwY71FICHflzLmToc+gOr1h5c+xgTQXvlXbaRnvX3dN6mohCtR0qkdkeRtGbTLFE
	PnWP7yN7qLHAzdDPtoBAv5qGB2iKKDUihy1dI9FdYu2Cerz/oFbU5n9wurPImVdwZUpZ
	53koJYr4O2S9yy/RwhpgIzxGSYZhmCCVJVLCl17/EoIry59OYBijThmWjP5Y6nuGmQZE
	mTTas890SRvuxXTnyEosNcLJcNZHlhxfHg5mOgyNKKor7WYTdZehi4d/c0b2xtoQhFdO
	pB1Ckq+6AlMf9t5TCOU1dkftZMZBWKbfPKszbX/gYvrDs8Xh7rkh0JNNjgiHkBobR3mP
	DW9w==
X-Gm-Message-State: ALoCoQnECvdUa6l8jmhZBSmAyvHhvxPLXZsvNXByniILDqHtjpAaXXrkA182setzu3s+b1seSDOzF2gM2FQe+cE8bHw+Bg/Qk5Vy7eJGEmM2L5eroHC6WH3N6dE+x3YajO65Y7G2OY1SZppRiF/xgiyC6ZdSsXAiP3C5OO2EjLJ/1yd2eQ7ysE4=
X-Received: by 10.195.13.164 with SMTP id ez4mr24228829wjd.11.1392748845850;
	Tue, 18 Feb 2014 10:40:45 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.195.13.164 with SMTP id ez4mr24228826wjd.11.1392748845777;
	Tue, 18 Feb 2014 10:40:45 -0800 (PST)
Received: by 10.216.31.67 with HTTP; Tue, 18 Feb 2014 10:40:45 -0800 (PST)
In-Reply-To: <5303A78C.6090709@citrix.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
	<CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
	<5303A78C.6090709@citrix.com>
Date: Tue, 18 Feb 2014 20:40:45 +0200
Message-ID: <CAJEb2DEH=iFF5-tu87wiXfHZqj0=ouQCbJTXR6WbrFpbCpUQ-Q@mail.gmail.com>
From: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
To: Julien Grall <julien.grall@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrii Anisov <andrii.anisov@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 8:33 PM, Julien Grall <julien.grall@citrix.com> wrote:
> On 02/18/2014 06:17 PM, Oleksandr Tyshchenko wrote:
>> Ian,
>
> Hello Oleksandr,
>
>> I have checked your suggestion with a full cache flush.
>> For this purpose I have used an ARMv7-specific function from our U-Boot.
>> This function performs a clean and invalidation of the entire data cache
>> at all levels.
>
> Did you try to only clean the cache? When the page table for the secondary
> CPU is created, Xen only cleans the cache for that specific range.
> I suspect that's not enough and we also need to invalidate.
I tried to invalidate too, because this function performs a clean &
invalidate.
>
> It should be easy to try with this small patch:
>
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index e00be9e..5a8aba2 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -234,7 +234,7 @@ static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
>      void *end;
>      dsb();           /* So the CPU issues all writes to the range */
>      for ( end = p + size; p < end; p += cacheline_bytes )
> -        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
>      dsb();           /* So we know the flushes happen before continuing */
>  }
I will try.
>
> Regards,
>
> --
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:47:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFph0-0003Bp-G7; Tue, 18 Feb 2014 18:46:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WFpgy-0003Bk-Us
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 18:46:53 +0000
Received: from [85.158.137.68:20840] by server-6.bemta-3.messagelabs.com id
	A5/C1-09180-C9AA3035; Tue, 18 Feb 2014 18:46:52 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392749210!2715035!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13097 invoked from network); 18 Feb 2014 18:46:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 18:46:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101899742"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 18:46:49 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 13:46:48 -0500
Message-ID: <5303AA97.3010202@citrix.com>
Date: Tue, 18 Feb 2014 18:46:47 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	<1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
	<1392745235.23084.60.camel@kazak.uk.xensource.com>
In-Reply-To: <1392745235.23084.60.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 17:40, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>> 
>> @@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>  	vif->pending_prod = MAX_PENDING_REQS;
>>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>>  		vif->pending_ring[i] = i;
>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>> -		vif->mmap_pages[i] = NULL;
>> +	spin_lock_init(&vif->dealloc_lock);
>> +	spin_lock_init(&vif->response_lock);
>> +	/* If ballooning is disabled, this will consume real memory, so you
>> +	 * better enable it.
> 
> Almost no one who would be affected by this is going to read this
> comment. And it doesn't just require enabling ballooning, but actually
> booting with some maxmem "slack" to leave space.
> 
> Classic-xen kernels used to add 8M of slop to the physical address space
> to leave a suitable pool for exactly this sort of thing. I never liked
> that but perhaps it should be reconsidered (or at least raised as a
> possibility with the core-Xen Linux guys).

I plan to fix the balloon memory hotplug stuff to do the right thing
(it's almost there -- it just tries to overlap the new memory with
existing stuff).

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 18:56:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 18:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFppq-0003Pp-AG; Tue, 18 Feb 2014 18:56:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WFppp-0003Pk-9g
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 18:56:01 +0000
Received: from [85.158.139.211:5920] by server-3.bemta-5.messagelabs.com id
	42/5D-13671-0CCA3035; Tue, 18 Feb 2014 18:56:00 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392749758!4746097!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9761 invoked from network); 18 Feb 2014 18:55:59 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 18 Feb 2014 18:55:59 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1IItrCu012597
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 18 Feb 2014 18:55:54 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1IItq96013701
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 18 Feb 2014 18:55:53 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1IItpOE024695; Tue, 18 Feb 2014 18:55:52 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 10:55:51 -0800
Message-ID: <5303AD1A.9010100@oracle.com>
Date: Tue, 18 Feb 2014 13:57:30 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <53023729.7020009@citrix.com> <53038D7A.8030807@oracle.com>
	<53039F0E.8010302@citrix.com>
In-Reply-To: <53039F0E.8010302@citrix.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
	Tim Deegan <tim@xen.org>, Xen-devel List <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	SuraveeSuthikulpanit <suravee.suthikulpanit@amd.com>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] VM Feature levelling improvements proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/2014 12:57 PM, Andrew Cooper wrote:
> AMD has the CPUID override MSRs 0xc001100{4,5} which cover the basic and
> extended feature leaves.  Are there any MSRs to cover
> CPUID.0000_000D[ecx=1].eax which contains the 'XSAVEOPT' bit, or
> CPUID.0000_0007[ecx=1].ebx which is the "structured extended" feature
> map?  I can't find any reference to new override MSRs in the manuals (or
> with Google), or to having CPUID faulting support like Intel CPUs.


Re: XSAVEOPT --- there is a bit for XSAVE (MSRC001_1004[58]), and since
you can't use XSAVEOPT without XSAVE (you use the latter to initialize
the save area), I think using this bit would be sufficient.


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On 02/18/2014 12:57 PM, Andrew Cooper wrote:
> AMD has the CPUID override MSRs 0xc001100{4,5} which cover the basic and
> extended feature leaves.  Are there any MSRs to cover
> CPUID.0000_000D[ecx=1].eax which contains the 'XSAVEOPT' bit, or
> CPUID.0000_0007[ecx=1].ebx which is the "structured extended" feature
> map?  I cant find any reference to new override MSRs in the manuals (or
> with google), or to having cpuid faulting support like Intel cpus.


Re: XSAVEOPT --- there is a bit for XSAVE (MSRC001_1004[58]), and since 
you can't use XSAVEOPT without XSAVE (you use the latter to initialize 
the save area), I think using this bit would be sufficient.
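[A sketch of why the single XSAVE override bit suffices. Assumptions not stated in the thread: XSAVE is CPUID.01H:ECX bit 26, which maps to bit 58 of MSR C001_1004 (the high half covers ECX); XSAVEOPT is CPUID.(EAX=0DH,ECX=1):EAX bit 0; the helper name and structure are hypothetical.]

```python
# Model of feature levelling via the AMD CPUID override MSR:
# if the MSR's XSAVE bit is clear, the toolstack hides XSAVE and,
# because XSAVEOPT is unusable without XSAVE, hides XSAVEOPT too.

XSAVE_ECX_BIT = 26          # CPUID.01H:ECX[26] = XSAVE
XSAVEOPT_EAX_BIT = 0        # CPUID.(EAX=0DH,ECX=1):EAX[0] = XSAVEOPT
MSR_OVERRIDE_XSAVE_BIT = 58  # MSR C001_1004 bit 58 mirrors ECX[26]

def level_features(leaf1_ecx, leafd1_eax, override_msr):
    """Return the (leaf1_ecx, leafd1_eax) values a guest would see
    after applying the override MSR's XSAVE bit."""
    if not (override_msr >> MSR_OVERRIDE_XSAVE_BIT) & 1:
        leaf1_ecx &= ~(1 << XSAVE_ECX_BIT)       # hide XSAVE
        leafd1_eax &= ~(1 << XSAVEOPT_EAX_BIT)   # no XSAVE => no XSAVEOPT
    return leaf1_ecx, leafd1_eax
```

With the override bit clear, both features disappear together; with it set, both pass through unchanged, which is why no separate XSAVEOPT override MSR is needed.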


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 19:01:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 19:01:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFpuk-0003fC-A6; Tue, 18 Feb 2014 19:01:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <miguelmclara@gmail.com>) id 1WFpui-0003f6-P1
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 19:01:04 +0000
Received: from [85.158.143.35:32967] by server-1.bemta-4.messagelabs.com id
	19/5C-31661-0FDA3035; Tue, 18 Feb 2014 19:01:04 +0000
X-Env-Sender: miguelmclara@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392750063!6574383!1
X-Originating-IP: [74.125.82.182]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5045 invoked from network); 18 Feb 2014 19:01:03 -0000
Received: from mail-we0-f182.google.com (HELO mail-we0-f182.google.com)
	(74.125.82.182)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 19:01:03 -0000
Received: by mail-we0-f182.google.com with SMTP id u57so12161927wes.27
	for <xen-devel@lists.xensource.com>;
	Tue, 18 Feb 2014 11:01:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=34j5pkWqDYAedN0ru++WDif7Yd9LW3cyhDcrZ7fSOp4=;
	b=OrEG+S4nxwpY0/ydXk9DSzXrjZ/03O/Oouqp7NdcuSU/fiwRMc0SyZesT9NKkyuB6y
	adhlzBv166Vh1UdWke7seMyaMAnR1U3l0d4Bza2+eK/C0xELE1QCkq1u0OrmMPuuOfFV
	+sy/Y72xGqKJg92Bn3cFIlwwB43CO8OinitEZlQm9A334oVZLpsqHzQvEbAU+KS0he8L
	XbfKzq/eoRjsaKc/GTlLcvOxL1Du1Ee8fX0wxrg1aXdP7mWHiCDh5uedy0P/OyPezpEM
	a0CsWmIheyX8nTMYMuHHldKSRFoqW8Wl2MjtFZVV/ypIpH5hyZ7Sr45ifmeLaBAi3bYa
	eyqA==
X-Received: by 10.194.71.47 with SMTP id r15mr24513610wju.19.1392750063072;
	Tue, 18 Feb 2014 11:01:03 -0800 (PST)
MIME-Version: 1.0
Received: by 10.217.55.201 with HTTP; Tue, 18 Feb 2014 11:00:41 -0800 (PST)
In-Reply-To: <CADGo8CXyeYWV33CuY4JdNLPyVNXGYvT2O03Br2fATTf+YyBG-g@mail.gmail.com>
References: <CADGo8CXG7FN3kco7jZWTHTVXqx6AMAR7zLrrUeHJ-8J7Wik=tw@mail.gmail.com>
	<CADGo8CVMDdgTBkYaM7fnqT-uOvA52fqugQ134AhPO0zcD-H3=g@mail.gmail.com>
	<CAP8mzPOXs2DMocb_znJN5AzHH2nSw0RkbcTVXTPc21V8qqD2tw@mail.gmail.com>
	<5375d8bf-aac3-446f-af5b-a341c0b37979@email.android.com>
	<CADGo8CVOu4a51nU30dW9Wd=jEh7CG1fAVe-7yZ_qciRuvxw50g@mail.gmail.com>
	<1391681000.23098.29.camel@kazak.uk.xensource.com>
	<CADGo8CWPNLHjV8GxWZp6+LegN3air8y_T0_hfq3bqefQ5TAugg@mail.gmail.com>
	<1392042223.26657.7.camel@kazak.uk.xensource.com>
	<CADGo8CUvBHk_MBEjBv7EvpsmnCtiQ8kc=9H+QBVhC5qUrio3qA@mail.gmail.com>
	<1392198993.13563.13.camel@kazak.uk.xensource.com>
	<52FB50F0.70106@citrix.com>
	<CADGo8CXbn9ZwckEt0_orN=vERoCPH4RthrjX1cJCnn5sM+Xc-A@mail.gmail.com>
	<52FC7F8E.7040608@citrix.com> <52FC9A24.2020703@citrix.com>
	<CADGo8CXe+VQ_huijJ9NEgtiJumyLkT5_Bv2ZEnZF_XacaN9DTA@mail.gmail.com>
	<52FDDE44.3060001@citrix.com>
	<CADGo8CXKha9UJEQ5rWMF7aHH9nVOKFuenZn2Byhpv1fr3UjnpQ@mail.gmail.com>
	<5301C920.4040905@citrix.com>
	<CADGo8CXyeYWV33CuY4JdNLPyVNXGYvT2O03Br2fATTf+YyBG-g@mail.gmail.com>
From: Miguel Clara <miguelmclara@gmail.com>
Date: Tue, 18 Feb 2014 19:00:41 +0000
Message-ID: <CADGo8CV-8bvf5mL+mbmDQTWw5eBB19NDovvmC-CYwNr2SOxE7Q@mail.gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] handling local attach of phy disks for pygrub (Was:
 Xen 4.3 xl migrate " htree_dirblock_to_tree" on second host)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Actually, after giving it more thought, it's probably best to stick
with the same version that comes from apt-get.

I applied the patch and just re-installed the tools!

I tried again and all seems to work fine, at least with the test Ubuntu DomU.

xl migrate also worked as expected!

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 19:24:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 19:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFqHA-00040U-B9; Tue, 18 Feb 2014 19:24:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WFqH6-0003zu-V5; Tue, 18 Feb 2014 19:24:13 +0000
Received: from [85.158.137.68:3309] by server-4.bemta-3.messagelabs.com id
	BC/9A-04858-B53B3035; Tue, 18 Feb 2014 19:24:11 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392751450!2712851!1
X-Originating-IP: [74.125.82.41]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32228 invoked from network); 18 Feb 2014 19:24:10 -0000
Received: from mail-wg0-f41.google.com (HELO mail-wg0-f41.google.com)
	(74.125.82.41)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 19:24:10 -0000
Received: by mail-wg0-f41.google.com with SMTP id l18so3378064wgh.0
	for <multiple recipients>; Tue, 18 Feb 2014 11:24:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:content-type;
	bh=zb+75VTz8KNu8ns3sKUK6FqunaGbFyTC2osIf/AKci8=;
	b=LOC/hL0AZg+c/jQ+S6KOC2nAur7ayPuPw2KHXKxt3XsM/9PHsXcKJEVCE+L0saABSf
	Q2cT5QFpC4Mm7aVkX+M2D7jTZctuvlfKIP6nCNON/tqpi84FrWStltczLh7qcEPRk81T
	NitHEbrWGxE53ZqO0+xAlPtwvcpUtaK6UBilKlCM24nRA2ThUzVWpSkwEjQPgNVBbGZR
	MjHHpYDm+1IgM+e5pugPdEf/r9gZePO6h7Id2q9AqsCeDtEtpQ4z2mvq0ZvdPywNxrgM
	GAOXm6kuvUHRVh/+RGxHh1Ju8//6dK4lfjOVVMxGK2wr144khRX76wofC34xeOlB0dAs
	3Kcg==
X-Received: by 10.180.79.7 with SMTP id f7mr19692981wix.20.1392751450307;
	Tue, 18 Feb 2014 11:24:10 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id j9sm47703189wjz.13.2014.02.18.11.24.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 11:24:09 -0800 (PST)
Message-ID: <5303B34E.5000702@xen.org>
Date: Tue, 18 Feb 2014 19:23:58 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>, 
	Russell Pavlicek <russell.pavlicek@citrix.com>, 
	Tim Mackey <Timothy.Mackey@citrix.com>
Cc: xen-users@lists.xenproject.org
Subject: [Xen-devel] [Vote] Proposal: Moving XCP binaries to XenServer.org
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7681988072679845431=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============7681988072679845431==
Content-Type: multipart/alternative;
 boundary="------------060701020901020602030902"

This is a multi-part message in MIME format.
--------------060701020901020602030902
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

I wanted to propose to move the legacy XCP binaries from XenProject.org 
to XenServer.org. With XenServer being fully open source and XCP 
basically being a variant of XenServer, it would make a lot more sense 
to keep all these binaries with XenServer.org. The fact that we have XCP 
and XenServer.org in two different places has led to:

* fragmentation of the XCP user community
* constant confusion in the user community

In a nutshell, many people don't know whether they should go to 
XenServer.org to ask XCP-related questions or whether to ask them on 
XenProject.org. As a result, many questions remain unanswered. Russell 
and I spend a lot of our time pointing people to the right place 
and/or cross-posting. I was hoping things would get better over time, 
but they have not improved.

When the Xen Project was created, there was no real alternative but to 
keep XCP as part of the Xen Project. With XenServer being fully open 
source, and being established, there is no reason why we can't clean up 
some of the confusion. In my opinion we really should do this.

This proposal does *not* affect the XAPI project: the XAPI project 
would continue to develop the XAPI toolstack as part of the Xen Project 
(and deliver source "releases"). In fact, I would also propose to make 
the xapi mailing list a developer mailing list. This fits much better 
with how the Hypervisor and MirageOS projects are run and creates an 
overall cleaner and easier to understand model for the Xen Project.

I have in principle agreement from:
* The Xen Project Advisory Board and the Linux Foundation (which is 
needed as I am proposing to move assets out of XenProject.org)
* Citrix to take on XCP as part of XenServer.org
* Citrix to provide resources to migrate content and redirect URLs from 
xxx.XenProject.org to XenServer.org so that people won't be impacted. 
This part is quite important: people who come to download or find 
information about XCP are, in effect, encouraged to ask XCP-related 
questions on XenProject.org. If they are redirected to the right place 
on XenServer.org, they are also redirected to the site where they 
should ask their questions.
* I may be able to get some resources to have the wiki cleaned up and 
some redirects added there too (another source of ongoing confusion)

== Who and how to vote? ==

As this is not an entirely project local decision, I propose that 
according to http://xenproject.org/governance.html
- Members of all developer mailing lists (including the user lists) on 
Xenproject.org can review the proposal and voice an opinion
- Maintainers of *all mature* projects and the Xenproject.org community 
manager are allowed to vote: these are the maintainers of xen-devel and xen-api

You would vote by replying "+1"
If you don't care vote "0"
If you object, vote "-1", which must include an alternative proposal or 
a detailed explanation of the reasons for the negative vote.

Please vote by Feb 25th

Best Regards
Lars

--------------060701020901020602030902
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Hi all,<br>
    <br>
    I wanted to propose to move the legacy XCP binaries from
    XenProject.org to XenServer.org. With XenServer being fully open
    source and XCP basically being a variant of XenServer, it would make
    a lot more sense to keep all these binaries with XenServer.org. The
    fact that we have XCP and XenServer.org in two different places has
    led to:<br>
    <br>
    * fragmentation of the XCP user community <br>
    * it is also a constant source of confusion in the user community<br>
    <br>
    In a nutshell many people don't know whether they should go to
    XenServer.org to ask XCP related questions or whether to ask them on
    XenProject.org. As a result many questions remain unanswered.
    Russell and I spend a lot of our time pointing people to the right
    place and/or cross-posting. I was hoping things would get better
    over time, but they have not improved.<br>
    <br>
    When the Xen Project was created, there was no real alternative but
    to keep XCP as part of the Xen Project. With XenServer being fully
    open source, and being established, there is no reason why we can't
    clean up some of the confusion. In my opinion we really should do
    this.<br>
    <br>
    This proposal does *not* affect the XAPI project: the XAPI project
    would continue to develop the XAPI toolstack as part of the Xen
    Project (and deliver source "releases"). In fact, I would also
    propose to make the xapi mailing list a developer mailing list. This
    fits much better with how the Hypervisor and MirageOS projects are
    run and creates an overall cleaner and easier to understand model
    for the Xen Project. <br>
    <br>
    I have in principle agreement from:<br>
    * The Xen Project Advisory Board and the Linux Foundation (which is
    needed as I am proposing to move assets out of XenProject.org)<br>
    * Citrix to take on XCP as part of XenServer.org<br>
    * Citrix to provide resources to migrate content and redirect URLs
    from xxx.XenProject.org to XenServer.org so that people won't be
    impacted. This part is quite important. People who would come to
    download or find information about XCP, are basically encouraged to
    ask XCP related questions on XenProject.org. If they are redirected
    to the right place in XenServer.org, that does mean that they are
    redirected to the site where they should ask questions. <br>
    * I may be able to get some resources to have the wiki cleaned up
    too and do some redirects there too (another source of ongoing
    confusion)<br>
    <br>
    == Who and how to vote? ==<br>
    <br>
    As this is not an entirely project local decision, I propose that
    according to <a class="moz-txt-link-freetext" href="http://xenproject.org/governance.html">http://xenproject.org/governance.html</a><br>
    - Members of all developer mailing lists (including the user lists)
    on Xenproject.org can review the proposal and voice an opinion<br>
    <meta http-equiv="Content-Type" content="text/html;
      charset=ISO-8859-1">
    - Maintainers of<span class="Apple-converted-space">&nbsp;</span><b
      style="font-weight: bold; vertical-align: middle;">all mature</b><span
      class="Apple-converted-space">&nbsp;</span>projects and the
      Xenproject.org community manager are allowed to vote: these are
    maintainers of xen-devel and xen-api<br>
    <br>
    You would vote by replying "+1"<br>
    If you don't care vote "0"<br>
    If you object, vote "-1", which must include an alternative proposal
    or a detailed explanation of the reasons for the negative vote.<br>
    <br>
    Please vote by Feb 25th <br>
    <br>
    Best Regards<br>
    Lars<br class="Apple-interchange-newline">
  </body>
</html>

--------------060701020901020602030902--


--===============7681988072679845431==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7681988072679845431==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 19:24:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 19:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFqHA-00040U-B9; Tue, 18 Feb 2014 19:24:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WFqH6-0003zu-V5; Tue, 18 Feb 2014 19:24:13 +0000
Received: from [85.158.137.68:3309] by server-4.bemta-3.messagelabs.com id
	BC/9A-04858-B53B3035; Tue, 18 Feb 2014 19:24:11 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392751450!2712851!1
X-Originating-IP: [74.125.82.41]
X-SpamReason: No, hits=1.1 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32228 invoked from network); 18 Feb 2014 19:24:10 -0000
Received: from mail-wg0-f41.google.com (HELO mail-wg0-f41.google.com)
	(74.125.82.41)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 19:24:10 -0000
Received: by mail-wg0-f41.google.com with SMTP id l18so3378064wgh.0
	for <multiple recipients>; Tue, 18 Feb 2014 11:24:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:content-type;
	bh=zb+75VTz8KNu8ns3sKUK6FqunaGbFyTC2osIf/AKci8=;
	b=LOC/hL0AZg+c/jQ+S6KOC2nAur7ayPuPw2KHXKxt3XsM/9PHsXcKJEVCE+L0saABSf
	Q2cT5QFpC4Mm7aVkX+M2D7jTZctuvlfKIP6nCNON/tqpi84FrWStltczLh7qcEPRk81T
	NitHEbrWGxE53ZqO0+xAlPtwvcpUtaK6UBilKlCM24nRA2ThUzVWpSkwEjQPgNVBbGZR
	MjHHpYDm+1IgM+e5pugPdEf/r9gZePO6h7Id2q9AqsCeDtEtpQ4z2mvq0ZvdPywNxrgM
	GAOXm6kuvUHRVh/+RGxHh1Ju8//6dK4lfjOVVMxGK2wr144khRX76wofC34xeOlB0dAs
	3Kcg==
X-Received: by 10.180.79.7 with SMTP id f7mr19692981wix.20.1392751450307;
	Tue, 18 Feb 2014 11:24:10 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id j9sm47703189wjz.13.2014.02.18.11.24.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 11:24:09 -0800 (PST)
Message-ID: <5303B34E.5000702@xen.org>
Date: Tue, 18 Feb 2014 19:23:58 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>, 
	Russell Pavlicek <russell.pavlicek@citrix.com>, 
	Tim Mackey <Timothy.Mackey@citrix.com>
Cc: xen-users@lists.xenproject.org
Subject: [Xen-devel] [Vote] Proposal: Moving XCP binaries to XenServer.org
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7681988072679845431=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============7681988072679845431==
Content-Type: multipart/alternative;
 boundary="------------060701020901020602030902"

This is a multi-part message in MIME format.
--------------060701020901020602030902
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

I wanted to propose to move the legacy XCP binaries from XenProject.org 
to XenServer.org. With XenServer being fully open source and XCP 
basically being a variant of XenServer, it would make a lot more sense 
to keep all these binaries with XenServer.org. The fact that we have XCP 
and XenServer.org in two different places has led to:

* fragmentation of the XCP user community
* it is also a constant source of confusion in the user community

In a nutshell many people don't know whether they should go to 
XenServer.org to ask XCP related questions or whether to ask them on 
XenProject.org. As a result many questions remain unanswered. Russell 
and me spend a lot of our time, pointing people to the right place 
and/or cross-posting. I was hoping things would get better over time, 
but they have not improved.

When the Xen Project was created, there was no real alternative but to 
keep XCP as part of the Xen Project. With XenServer being fully open 
source, and being established, there is no reason why we can't clean up 
some of the confusion. In my opinion we really should do this.

This proposal does *not* affect the XAPI project : the XAPI project 
would continue to develop the XAPI toolstack as part of the Xen Project 
(and deliver source "releases"). In fact, I would also propose to make 
the xapi mailing list a developer mailing list. This fits much better 
with how the Hypervisor and MirageOS projects are run and creates an 
overall cleaner and easier to understand model for the Xen Project.

I have in principle agreement from:
* The Xen Project Advisory Board and the Linux Foundation (which is 
needed as I am proposing to move assets out of XenProject.org)
* Citrix to take on XCP as part of XenProject.org
* Citrix to provide resources to migrate content and redirect URLs from 
xxx.XenProject.org to XenServer.org such that people wont be impacted. 
This part is quite important. People who would come to download or find 
information about XCP, are basically encouraged to ask XCP related 
questions on XenProject.org. If they are redirected to the right place 
in XenServer.org, that does mean that they are redirected to the site 
where they should ask questions.
* I may be able to get some resources to have the wiki cleaned up too 
and do some redirects there too (another source of ongoing confusion)

== Who and how to vote? ==

As this is not an entirely project local decision, I propose that 
according to http://xenproject.org/governance.html
- Members of all developer mailing lists (including the user lists) on 
Xenproject.org can review the proposal and voice an opinion
- Maintainers of*all mature*projects and the Xenproject.org community 
manager are allowed to vote : these are maintainers of xen-devel and xen-api

You would vote by replying "+1"
If you don't care vote "0"
If you object, vote "-1", which must include an alternative proposal or 
a detailed explanation of the reasons for the negative vote.

Please vote by Feb 25th

Best Regards
Lars

--------------060701020901020602030902--


--===============7681988072679845431==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7681988072679845431==--


From xen-devel-bounces@lists.xen.org Tue Feb 18 19:41:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 19:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFqXB-0004ex-Bg; Tue, 18 Feb 2014 19:40:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFqXA-0004es-E9
	for xen-devel@lists.xensource.com; Tue, 18 Feb 2014 19:40:48 +0000
Received: from [85.158.137.68:6568] by server-1.bemta-3.messagelabs.com id
	60/8F-17293-F37B3035; Tue, 18 Feb 2014 19:40:47 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392752445!2722021!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31040 invoked from network); 18 Feb 2014 19:40:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 19:40:46 -0000
X-IronPort-AV: E=Sophos;i="4.97,502,1389744000"; d="scan'208";a="101919954"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 19:40:44 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 14:40:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFqX5-00056k-Kz;
	Tue, 18 Feb 2014 19:40:43 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFqX4-0000Mv-Sy;
	Tue, 18 Feb 2014 19:40:43 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25122-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 18 Feb 2014 19:40:42 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25122: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25122 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25122/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-intel  5 xen-boot               fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass

version targeted for testing:
 linux                60f76eab19e3903e810bdc3ec846c158efcd2e21
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7050 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2383712 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 19:44:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 19:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFqaf-0004my-0e; Tue, 18 Feb 2014 19:44:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFqad-0004mo-Iz
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 19:44:23 +0000
Received: from [85.158.139.211:14427] by server-12.bemta-5.messagelabs.com id
	D7/F8-15415-618B3035; Tue, 18 Feb 2014 19:44:22 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392752661!4754033!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28545 invoked from network); 18 Feb 2014 19:44:21 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 19:44:21 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so12371811lan.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 11:44:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=3uYcZv86wR70eetn6MKInPLwGlY6NpN4k0u4GiS0NQI=;
	b=dMmXhR03XQOx6/llHLXubO+/5usSlK/Nv+2svq0w/0j3b2ldYvPu0KOD2bfRDfj4Mg
	lFVkp9vj4dbmquK0vJodluAabsuffEU6sy9Tg7lILbdhkCPvwqzz2x/Is2ixQQcVBJro
	xZUVLWmhc4JuuLxItlSF6zafsv9A0iNMBc1ICeIZl52jmRAen5AJXClbTFkyG+8Qs/oG
	TAqdVSAxCknVIHGKDR64Z4gdKu/uPvCwNLMxfd/bQ1cMifera4JxXWR4JiIzEnP275Uu
	6v7jfSRtP0NIMGu7NxBaca0L4kgoOd5rBS0JZkVVHIffIkgKmYuUIy8eI7K+dAKFSyS3
	5uPA==
X-Received: by 10.112.17.65 with SMTP id m1mr3532179lbd.46.1392752660955; Tue,
	18 Feb 2014 11:44:20 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 11:43:59 -0800 (PST)
In-Reply-To: <5301E411.5060908@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<5301E411.5060908@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 11:43:59 -0800
X-Google-Sender-Auth: CfQ0a7M_apZAVSil2meB-j0R34M
Message-ID: <CAB=NE6VxNByeWGk6_Ow7WgxA3HCwGBjrjL9MNVRGsEfFyeKTdw@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 0/4] net: bridge / ip optimizations for
 virtual net backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 2:27 AM, David Vrabel <david.vrabel@citrix.com> wrote:
> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> This v2 series changes the approach from my original virtualization
>> multicast patch series [0] by abandoning completely the multicast
>> issues and instead generalizing an approach for virtualization
>> backends. There are two things in common with virtualization
>> backends:
>>
>>   0) they should not become the root bridge
>>   1) they don't need ipv4 / ipv6 interfaces
>
> Why?  There's no real difference between a backend network device and a
> physical device (from the point of view of the backend domain).  I do
> not think these are intrinsic properties of backend devices.

Let me clarify the original motivation as that can likely help explain
how I ended up with this patch series.

SUSE has had reports of Xen backend interfaces ending up with 
duplicate address notifications filling up logs on systems with a 
series of guests; these reports go back to 2006. This was root caused 
to DAD on IPv6 interfaces, and a workaround was implemented to disable 
DAD [0] on multicast links. Even though this workaround should no 
longer be applicable -- since the xen-netback upstreaming in 2.6.39, 
ether_setup is used and that enables the multicast flag -- we should 
try to ensure the issue doesn't creep up anymore. As per the IPv6 RFCs 
and the Linux IPv6 implementation, DAD should be triggered even in the 
case of manual IP configuration and when the link goes up, and as such 
SLAAC will always take place on IPv6 interfaces. Although not 
documented, upon my review I determined the original issue could also 
be attributed to the corner case documented in Appendix A of RFC 4862 
[1], and this could be more prevalent for xen-netback given that we 
stuck to the same MAC address for all xen-netback interfaces. I first 
tried to generalize the workaround and address the multicast case 
requirement for IPv6 [2], explicitly disabling multicast on 
xen-netback. Although this approach could likely be generalized 
further to account for NBMA links by checking dev->type, I determined 
we didn't need IPv6 interfaces at all on the xen-netback interfaces. 
This led me to further review whether we even needed IPv4 interfaces 
as well, and it turns out we do not.

New motivation: removing IPv4 and IPv6 from the backend interfaces 
saves a lot of boilerplate runtime code, prevents triggers from ever 
taking place, and simplifies the backend interfaces. If there is no 
use for IPv4 and IPv6 interfaces, why do we have them? Note: I have 
yet to test the NAT case.

> I can see these being useful knobs for administrators (or management
> toolstacks) to turn on, on a per-device basis.

Agreed, but these knobs don't even exist for drivers yet, let alone 
for system administrators. I can certainly shoot for another series to 
let administrators configure this as a preference, but -- if we know a 
driver won't need IPv4 and IPv6 interfaces, why not just allow drivers 
to disable them altogether? Consider the simplification of the 
interfaces on the host.
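
To make the existing-knobs point concrete, here is a minimal sketch of 
how the per-interface IPv6 bits could be scripted with today's /proc 
interface (the emit_sysctls helper and the vif* naming match are my 
illustrative assumptions, not something this series adds):

```shell
# Print the per-interface writes that would disable DAD and the IPv6
# stack on a Xen backend vif, leaving every other interface untouched.
emit_sysctls() {
    case "$1" in
        vif*)
            # accept_dad=0: never perform Duplicate Address Detection
            echo "echo 0 > /proc/sys/net/ipv6/conf/$1/accept_dad"
            # disable_ipv6=1: drop the IPv6 stack from the interface
            echo "echo 1 > /proc/sys/net/ipv6/conf/$1/disable_ipv6"
            ;;
        *)
            : # physical and frontend interfaces keep IPv6 as-is
            ;;
    esac
}

# prints two writes per vif, nothing for eth0
for i in eth0 vif1.0 vif2.0; do
    emit_sysctls "$i"
done
```

This is per-interface and reversible, which is roughly the shape an 
administrator-facing knob would take; the series instead makes the 
choice in the driver.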

[0] https://gitorious.org/opensuse/kernel-source/source/8e16582178a29b03e850468004a47e7be5ed3005:patches.xen/ipv6-no-autoconf
[1] http://tools.ietf.org/html/rfc4862
[2] http://marc.info/?l=linux-netdev&m=139207142110536&w=2

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 19:44:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 19:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFqaf-0004my-0e; Tue, 18 Feb 2014 19:44:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFqad-0004mo-Iz
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 19:44:23 +0000
Received: from [85.158.139.211:14427] by server-12.bemta-5.messagelabs.com id
	D7/F8-15415-618B3035; Tue, 18 Feb 2014 19:44:22 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392752661!4754033!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28545 invoked from network); 18 Feb 2014 19:44:21 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 19:44:21 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so12371811lan.26
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 11:44:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=3uYcZv86wR70eetn6MKInPLwGlY6NpN4k0u4GiS0NQI=;
	b=dMmXhR03XQOx6/llHLXubO+/5usSlK/Nv+2svq0w/0j3b2ldYvPu0KOD2bfRDfj4Mg
	lFVkp9vj4dbmquK0vJodluAabsuffEU6sy9Tg7lILbdhkCPvwqzz2x/Is2ixQQcVBJro
	xZUVLWmhc4JuuLxItlSF6zafsv9A0iNMBc1ICeIZl52jmRAen5AJXClbTFkyG+8Qs/oG
	TAqdVSAxCknVIHGKDR64Z4gdKu/uPvCwNLMxfd/bQ1cMifera4JxXWR4JiIzEnP275Uu
	6v7jfSRtP0NIMGu7NxBaca0L4kgoOd5rBS0JZkVVHIffIkgKmYuUIy8eI7K+dAKFSyS3
	5uPA==
X-Received: by 10.112.17.65 with SMTP id m1mr3532179lbd.46.1392752660955; Tue,
	18 Feb 2014 11:44:20 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 11:43:59 -0800 (PST)
In-Reply-To: <5301E411.5060908@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<5301E411.5060908@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 11:43:59 -0800
X-Google-Sender-Auth: CfQ0a7M_apZAVSil2meB-j0R34M
Message-ID: <CAB=NE6VxNByeWGk6_Ow7WgxA3HCwGBjrjL9MNVRGsEfFyeKTdw@mail.gmail.com>
To: David Vrabel <david.vrabel@citrix.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 0/4] net: bridge / ip optimizations for
 virtual net backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 2:27 AM, David Vrabel <david.vrabel@citrix.com> wrote:
> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> This v2 series changes the approach from my original virtualization
>> multicast patch series [0] by abandoning completely the multicast
>> issues and instead generalizing an approach for virtualization
>> backends. There are two things in common with virtualization
>> backends:
>>
>>   0) they should not become the root bridge
>>   1) they don't need ipv4 / ipv6 interfaces
>
> Why?  There's no real difference between a backend network device and a
> physical device (from the point of view of the backend domain).  I do
> not think these are intrinsic properties of backend devices.

Let me clarify the original motivation, as that should help explain
how I ended up with this patch series.

SUSE has had reports, going back to 2006, of Xen backend interfaces
ending up with duplicate-address notifications filling up logs on
systems with a series of guests. This was root caused to DAD on IPv6
interfaces, and a workaround was implemented to disable DAD [0] on
multicast links. Even though this workaround should no longer be
applicable (since the xen-netback upstreaming in 2.6.39, ether_setup()
is used, which enables the multicast flag), we should try to ensure
the issue doesn't creep up again. As per the IPv6 RFCs and the Linux
IPv6 implementation, DAD should be triggered even in the case of
manual IP configuration and when the link goes up, and as such SLAAC
will always take place on IPv6 interfaces. Although not documented,
upon my review I determined the original issue could also be
attributed to the corner case documented in Appendix A of RFC 4862
[1], and this could be more prevalent for xen-netback given that we
stuck to the same MAC address for all xen-netback interfaces. I first
tried to generalize the workaround and address the multicast case
requirement for IPv6 [2] by explicitly disabling multicast on
xen-netback. Although this approach could likely be generalized
further to account for NBMA links by checking dev->type, I determined
we didn't need IPv6 interfaces at all on the xen-netback interfaces.
This led me to further review whether we even needed IPv4 interfaces,
and it turns out we do not.

New motivation: removing IPv4 and IPv6 from the backend interfaces can
save a lot of boilerplate run-time code, prevent triggers from ever
taking place, and simplify the backend interfaces. If there is no use
for IPv4 and IPv6 interfaces, why do we have them? Note: I have yet to
test the NAT case.

> I can see these being useful knobs for administrators (or management
> toolstacks) to turn on, on a per-device basis.

Agreed, but these knobs don't even exist for drivers yet, let alone
for system administrators. I can certainly shoot for another series to
let administrators configure this as a preference, but if we know a
driver won't need IPv4 and IPv6 interfaces, why not just allow drivers
to disable them altogether? Consider the simplification of the
interfaces on the host.

[0] https://gitorious.org/opensuse/kernel-source/source/8e16582178a29b03e850468004a47e7be5ed3005:patches.xen/ipv6-no-autoconf
[1] http://tools.ietf.org/html/rfc4862
[2] http://marc.info/?l=linux-netdev&m=139207142110536&w=2

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 20:17:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 20:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFr6C-0005F6-SN; Tue, 18 Feb 2014 20:17:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFr6A-0005F1-OG
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 20:16:59 +0000
Received: from [85.158.137.68:18953] by server-8.bemta-3.messagelabs.com id
	55/6C-16039-9BFB3035; Tue, 18 Feb 2014 20:16:57 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392754616!2723945!1
X-Originating-IP: [209.85.217.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7499 invoked from network); 18 Feb 2014 20:16:57 -0000
Received: from mail-lb0-f176.google.com (HELO mail-lb0-f176.google.com)
	(209.85.217.176)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 20:16:57 -0000
Received: by mail-lb0-f176.google.com with SMTP id w7so12467731lbi.21
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 12:16:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=/uQjsXtf7THdTthusiOttpinmJwVNYJk/1ij6Jw2jlA=;
	b=D3PKQFbjFv7I44/MeO6Uz40WEYMiSxcoHxsvaIlS2ghuT73JRnuioNFjikbaireqj1
	dB+QJpFKn5YFeRCXONt8QLmtlo6krpg7sGIIkmC1Pa7iTqUE0dpkyT9lejJAUg8Sl0V4
	GOJZXkNrIh0IDiL9hsCWuHjV9ZZ0FeUr5vEsl9wW7LzIAOB7U+TtCUT5Ci9QSgTTXXKM
	MU6HkNquKiNhKAcSbp7EuSzGDz4AfF7XAKrbi26WsMdGCyGwwGGq58ZrWKxO0DRlYueQ
	O9nYKst5LjJ3i3SdTNc8c+YtQUSYax1x3fFTqCU4sjklLmOrDvUg59yiUmqDNJaLAxX2
	RE5g==
X-Received: by 10.112.64.7 with SMTP id k7mr6998370lbs.42.1392754616499; Tue,
	18 Feb 2014 12:16:56 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 12:16:36 -0800 (PST)
In-Reply-To: <53021E87.6020607@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-5-git-send-email-mcgrof@do-not-panic.com>
	<53021E87.6020607@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 12:16:36 -0800
X-Google-Sender-Auth: vWXjrbkz0t0puuVs9Pzl26PMphg
Message-ID: <CAB=NE6WwPfPi-8Yudxs4-OA2LPOjo8-XxUiVbtu6=BQ8FhEEOA@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <ian.campbell@citrix.com>,
	kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 4/4] xen-netback: skip IPv4 and IPv6
	interfaces
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 6:36 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> There is a valid scenario to put IP addresses on the backend VIFs:
>
> http://wiki.xen.org/wiki/Xen_Networking#Routing

This is useful, thanks!

> Also, the backend is not necessarily Dom0, you can connect two guests with
> backend/frontend pairs.

Can you elaborate a bit more on this type of setup?

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 20:36:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 20:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFrPB-0005PB-Mt; Tue, 18 Feb 2014 20:36:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WFrP9-0005P6-WF
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 20:36:36 +0000
Received: from [85.158.139.211:62834] by server-12.bemta-5.messagelabs.com id
	24/F0-15415-354C3035; Tue, 18 Feb 2014 20:36:35 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392755792!4758960!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8953 invoked from network); 18 Feb 2014 20:36:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 20:36:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,503,1389744000"; d="scan'208";a="101941380"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 20:36:31 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 15:36:30 -0500
Message-ID: <5303C44D.4070500@citrix.com>
Date: Tue, 18 Feb 2014 20:36:29 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392743214.23084.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1392743214.23084.38.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 17:06, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>> This patch contains the new definitions necessary for grant mapping.
>
> Is this just adding a bunch of (currently) unused functions? That's a
> slightly odd way to structure a series. They don't seem to be "generic
> helpers" or anything so it would be more normal to introduce these as
> they get used -- it's a bit hard to review them out of context.
I've created two patches because they are quite huge even now,
separately; together they would be a ~500 line change. That was the
best split I could figure out while keeping in mind that bisect should
still work. But as I wrote in the first email, I welcome other
suggestions. If you and Wei prefer these two patches as one big one,
I'll merge them in the next version.

>> v2:
>
> This sort of intraversion changelog should go after the S-o-b and a
> "---" marker. This way they are not included in the final commit
> message.
Ok, I'll do that.

>> @@ -226,6 +248,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
>>
>>   void xenvif_stop_queue(struct xenvif *vif);
>>
>> +/* Callback from stack when TX packet can be released */
>> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
>> +
>> +/* Unmap a pending page, usually has to be called before xenvif_idx_release */
>
> "usually" or always? How does one determine when it is or isn't
> appropriate to call it later?
If you haven't unmapped it before, then you have to call it. I'll
clarify the comment.


>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index 7669d49..f0f0c3d 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -38,6 +38,7 @@
>>
>>   #include <xen/events.h>
>>   #include <asm/xen/hypercall.h>
>> +#include <xen/balloon.h>
>
> What is this for?
For alloc/free_xenballooned_pages

>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index bb241d0..195602f 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -773,6 +773,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>>   	return page;
>>   }
>>
>> +static inline void xenvif_tx_create_gop(struct xenvif *vif,
>> +					u16 pending_idx,
>> +					struct xen_netif_tx_request *txp,
>> +					struct gnttab_map_grant_ref *gop)
>> +{
>> +	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
>> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
>> +			  GNTMAP_host_map | GNTMAP_readonly,
>> +			  txp->gref, vif->domid);
>> +
>> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
>> +	       sizeof(*txp));
>
> Can this not go in xenvif_tx_build_gops? Or conversely should the
> non-mapping code there be factored out?
>
> Given the presence of both kinds of gop the name of this function needs
> to be more specific I think.
It is called from tx_build_gop and get_requests, and the non-mapping
code will go away. I have a patch on top of this series which does
grant copy for the header part, but it doesn't create a separate
function for the single copy operation, and you'll still call this
function from build_gops to handle the rest of the first slot (if
any). So TX will have only one kind of gop.

>
>> +}
>> +
>>   static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   					       struct sk_buff *skb,
>>   					       struct xen_netif_tx_request *txp,
>> @@ -1612,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   	return work_done;
>>   }
>>
>> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
>> +{
>> +	unsigned long flags;
>> +	pending_ring_idx_t index;
>> +	u16 pending_idx = ubuf->desc;
>> +	struct pending_tx_info *temp =
>> +		container_of(ubuf, struct pending_tx_info, callback_struct);
>> +	struct xenvif *vif = container_of(temp - pending_idx,
>
> This is subtracting a u16 from a pointer?
Yes. I moved this into an ubuf_to_vif helper for the next version of
the patch series.

>
>> +					  struct xenvif,
>> +					  pending_tx_info[0]);
>> +
>> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
>> +	do {
>> +		pending_idx = ubuf->desc;
>> +		ubuf = (struct ubuf_info *) ubuf->ctx;
>> +		index = pending_index(vif->dealloc_prod);
>> +		vif->dealloc_ring[index] = pending_idx;
>> +		/* Sync with xenvif_tx_dealloc_action:
>> +		 * insert idx then incr producer.
>> +		 */
>> +		smp_wmb();
>
> Is this really needed given that there is a lock held?
Yes, as the comment right above explains. This actually comes from the
classic kernel's netif_idx_release.
>
> Or what is dealloc_lock protecting against?
The callbacks from each other. So it is checked only in this function.
>
>> +		vif->dealloc_prod++;
>
> What happens if the dealloc ring becomes full, will this wrap and cause
> havoc?
Nope; if the dealloc ring is full, the value of the last increment
won't be used to index the dealloc ring again until some space is made
available. Of course, if something broke and we had more pending slots
than TX ring or dealloc slots, then it could happen. Do you suggest a
BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= MAX_PENDING_REQS)?

>
>> +	} while (ubuf);
>> +	wake_up(&vif->dealloc_wq);
>> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
>> +}
>> +
>> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>> +{
>> +	struct gnttab_unmap_grant_ref *gop;
>> +	pending_ring_idx_t dc, dp;
>> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
>> +	unsigned int i = 0;
>> +
>> +	dc = vif->dealloc_cons;
>> +	gop = vif->tx_unmap_ops;
>> +
>> +	/* Free up any grants we have finished using */
>> +	do {
>> +		dp = vif->dealloc_prod;
>> +
>> +		/* Ensure we see all indices enqueued by all
>> +		 * xenvif_zerocopy_callback().
>> +		 */
>> +		smp_rmb();
>> +
>> +		while (dc != dp) {
>> +			pending_idx =
>> +				vif->dealloc_ring[pending_index(dc++)];
>> +
>> +			/* Already unmapped? */
>> +			if (vif->grant_tx_handle[pending_idx] ==
>> +				NETBACK_INVALID_HANDLE) {
>> +				netdev_err(vif->dev,
>> +					   "Trying to unmap invalid handle! "
>> +					   "pending_idx: %x\n", pending_idx);
>> +				BUG();
>> +			}
>> +
>> +			pending_idx_release[gop-vif->tx_unmap_ops] =
>> +				pending_idx;
>> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
>> +				vif->mmap_pages[pending_idx];
>> +			gnttab_set_unmap_op(gop,
>> +					    idx_to_kaddr(vif, pending_idx),
>> +					    GNTMAP_host_map,
>> +					    vif->grant_tx_handle[pending_idx]);
>> +			vif->grant_tx_handle[pending_idx] =
>> +				NETBACK_INVALID_HANDLE;
>> +			++gop;
>
> Can we run out of space in the gop array?
No, unless the same thing happens as in my previous answer. BUG_ON()
here as well?
>
>> +		}
>> +
>> +	} while (dp != vif->dealloc_prod);
>> +
>> +	vif->dealloc_cons = dc;
>
> No barrier here?
dealloc_cons is only used in the dealloc thread. dealloc_prod is used
by the callback as well as the thread, which is why we need the
barrier above. Btw. this function comes from classic's
net_tx_action_dealloc.

>
>> +	if (gop - vif->tx_unmap_ops > 0) {
>> +		int ret;
>> +		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
>> +					vif->pages_to_unmap,
>> +					gop - vif->tx_unmap_ops);
>> +		if (ret) {
>> +			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
>> +				   gop - vif->tx_unmap_ops, ret);
>> +			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
>
> This seems liable to be a lot of spew on failure. Perhaps only log the
> ones where gop[i].status != success.
Ok, I'll change that.
>
> Have you considered whether or not the frontend can force this error to
> occur?
Not yet, good point. I guess if we successfully mapped the page, then
there is no way for a frontend to prevent unmapping. But it's worth
checking further.
>
>> +				netdev_err(vif->dev,
>> +					   " host_addr: %llx handle: %x status: %d\n",
>> +					   gop[i].host_addr,
>> +					   gop[i].handle,
>> +					   gop[i].status);
>> +			}
>> +			BUG();
>> +		}
>> +	}
>> +
>> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
>> +		xenvif_idx_release(vif, pending_idx_release[i],
>> +				   XEN_NETIF_RSP_OKAY);
>> +}
>> +
>> +
>>   /* Called after netfront has transmitted */
>>   int xenvif_tx_action(struct xenvif *vif, int budget)
>>   {
>> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>>   	vif->mmap_pages[pending_idx] = NULL;
>>   }
>>
>> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
>
> This is a single shot version of the batched xenvif_tx_dealloc_action
> version? Why not just enqueue the idx to be unmapped later?
This is called only from the NAPI instance. Using the dealloc ring
would require synchronization with the callback, which can increase
lock contention. On the other hand, if the guest sends small packets
(<PAGE_SIZE), the TLB flushing can cause a performance penalty. The
above-mentioned upcoming patch, which grant-copies the header, can
prevent that (together with Malcolm's Xen-side patch, which avoids the
TLB flush if the page was not touched in Dom0).

>> @@ -1826,6 +1965,28 @@ int xenvif_kthread(void *data)
>>   	return 0;
>>   }
>>
>> +int xenvif_dealloc_kthread(void *data)
>
> Is this going to be a thread per vif?
Yes. In the first versions I put the dealloc in the NAPI instance
(similarly to classic, where it happened in tx_action), but that had
an unexpected performance penalty: the callback has to notify whoever
does the dealloc that there is something to do. If that is the NAPI
instance, it has to call napi_schedule. But if the packet was
delivered to another guest, the callback is called from thread
context, and according to Eric Dumazet, napi_schedule from thread
context can significantly delay softirq handling. So the NAPI instance
was delayed by milliseconds, which caused terrible performance.
Moving this to the RX thread didn't seem like a wise decision, so I
made a new thread.
Actually, in the next version of the patches I'll reintroduce
__napi_schedule in the callback, because if the NAPI instance still
has unconsumed requests but not enough pending slots, it deschedules
itself, and the callback has to schedule it again, if:
- the unconsumed requests in the ring < XEN_NETBK_LEGACY_SLOTS_MAX
- there are enough free pending slots to handle them
- and the NAPI instance is not scheduled yet
This should really only happen if netback is faster than the target
devices, but then it isn't a bottleneck.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 20:36:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 20:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFrPB-0005PB-Mt; Tue, 18 Feb 2014 20:36:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WFrP9-0005P6-WF
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 20:36:36 +0000
Received: from [85.158.139.211:62834] by server-12.bemta-5.messagelabs.com id
	24/F0-15415-354C3035; Tue, 18 Feb 2014 20:36:35 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392755792!4758960!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8953 invoked from network); 18 Feb 2014 20:36:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 20:36:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,503,1389744000"; d="scan'208";a="101941380"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 18 Feb 2014 20:36:31 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 18 Feb 2014 15:36:30 -0500
Message-ID: <5303C44D.4070500@citrix.com>
Date: Tue, 18 Feb 2014 20:36:29 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392743214.23084.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1392743214.23084.38.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 17:06, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>> This patch contains the new definitions necessary for grant mapping.
>
> Is this just adding a bunch of (currently) unused functions? That's a
> slightly odd way to structure a series. They don't seem to be "generic
> helpers" or anything so it would be more normal to introduce these as
> they get used -- it's a bit hard to review them out of context.
I've created two patches because they are quite huge even now, 
separately. Together they would be a ~500 line change. That was the best 
I could figure out keeping in mind that bisect should work. But as I 
wrote in the first email, I welcome other suggestions. If you and Wei 
prefer this two patch in one big one, I merge them in the next version.

>> v2:
>
> This sort of intraversion changelog should go after the S-o-b and a
> "---" marker. This way they are not included in the final commit
> message.
Ok, I'll do that.

>> @@ -226,6 +248,12 @@ bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
>>
>>   void xenvif_stop_queue(struct xenvif *vif);
>>
>> +/* Callback from stack when TX packet can be released */
>> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success);
>> +
>> +/* Unmap a pending page, usually has to be called before xenvif_idx_release */
>
> "usually" or always? How does one determine when it is or isn't
> appropriate to call it later?
If you haven't unmapped it before, then you have to call it. I'll 
clarify the comment


>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index 7669d49..f0f0c3d 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -38,6 +38,7 @@
>>
>>   #include <xen/events.h>
>>   #include <asm/xen/hypercall.h>
>> +#include <xen/balloon.h>
>
> What is this for?
For alloc/free_xenballooned_pages

>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index bb241d0..195602f 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -773,6 +773,20 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>>   	return page;
>>   }
>>
>> +static inline void xenvif_tx_create_gop(struct xenvif *vif,
>> +					u16 pending_idx,
>> +					struct xen_netif_tx_request *txp,
>> +					struct gnttab_map_grant_ref *gop)
>> +{
>> +	vif->pages_to_map[gop-vif->tx_map_ops] = vif->mmap_pages[pending_idx];
>> +	gnttab_set_map_op(gop, idx_to_kaddr(vif, pending_idx),
>> +			  GNTMAP_host_map | GNTMAP_readonly,
>> +			  txp->gref, vif->domid);
>> +
>> +	memcpy(&vif->pending_tx_info[pending_idx].req, txp,
>> +	       sizeof(*txp));
>
> Can this not go in xenvif_tx_build_gops? Or conversely should the
> non-mapping code there be factored out?
>
> Given the presence of both kinds of gop the name of this function needs
> to be more specific I think.
It is called from tx_build_gop and get_requests, and the non-mapping 
code will go away. I have a patch on top of this series which does grant 
copy for the header part; it doesn't create a separate function for 
the single copy operation, and you'll still call this function from 
build_gops to handle the rest of the first slot (if any). So TX will 
have only one kind of gop.

>
>> +}
>> +
>>   static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>>   					       struct sk_buff *skb,
>>   					       struct xen_netif_tx_request *txp,
>> @@ -1612,6 +1626,107 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   	return work_done;
>>   }
>>
>> +void xenvif_zerocopy_callback(struct ubuf_info *ubuf, bool zerocopy_success)
>> +{
>> +	unsigned long flags;
>> +	pending_ring_idx_t index;
>> +	u16 pending_idx = ubuf->desc;
>> +	struct pending_tx_info *temp =
>> +		container_of(ubuf, struct pending_tx_info, callback_struct);
>> +	struct xenvif *vif = container_of(temp - pending_idx,
>
> This is subtracting a u16 from a pointer?
Yes. I moved this to a ubuf_to_vif helper for the next version of the 
patch series.

>
>> +					  struct xenvif,
>> +					  pending_tx_info[0]);
>> +
>> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
>> +	do {
>> +		pending_idx = ubuf->desc;
>> +		ubuf = (struct ubuf_info *) ubuf->ctx;
>> +		index = pending_index(vif->dealloc_prod);
>> +		vif->dealloc_ring[index] = pending_idx;
>> +		/* Sync with xenvif_tx_dealloc_action:
>> +		 * insert idx then incr producer.
>> +		 */
>> +		smp_wmb();
>
> Is this really needed given that there is a lock held?
Yes, as the comment right above explains. This actually comes from the 
classic kernel's netif_idx_release.
>
> Or what is dealloc_lock protecting against?
It protects the callbacks from each other, so it is only taken in this function.
>
>> +		vif->dealloc_prod++;
>
> What happens if the dealloc ring becomes full, will this wrap and cause
> havoc?
Nope, if the dealloc ring is full, the value of the last increment won't 
be used to index the dealloc ring again until some space is made available. 
Of course, if something broke and we have more pending slots than tx ring 
or dealloc slots, then it can happen. Do you suggest a 
BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= MAX_PENDING_REQS)?

>
>> +	} while (ubuf);
>> +	wake_up(&vif->dealloc_wq);
>> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
>> +}
>> +
>> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>> +{
>> +	struct gnttab_unmap_grant_ref *gop;
>> +	pending_ring_idx_t dc, dp;
>> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
>> +	unsigned int i = 0;
>> +
>> +	dc = vif->dealloc_cons;
>> +	gop = vif->tx_unmap_ops;
>> +
>> +	/* Free up any grants we have finished using */
>> +	do {
>> +		dp = vif->dealloc_prod;
>> +
>> +		/* Ensure we see all indices enqueued by all
>> +		 * xenvif_zerocopy_callback().
>> +		 */
>> +		smp_rmb();
>> +
>> +		while (dc != dp) {
>> +			pending_idx =
>> +				vif->dealloc_ring[pending_index(dc++)];
>> +
>> +			/* Already unmapped? */
>> +			if (vif->grant_tx_handle[pending_idx] ==
>> +				NETBACK_INVALID_HANDLE) {
>> +				netdev_err(vif->dev,
>> +					   "Trying to unmap invalid handle! "
>> +					   "pending_idx: %x\n", pending_idx);
>> +				BUG();
>> +			}
>> +
>> +			pending_idx_release[gop-vif->tx_unmap_ops] =
>> +				pending_idx;
>> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
>> +				vif->mmap_pages[pending_idx];
>> +			gnttab_set_unmap_op(gop,
>> +					    idx_to_kaddr(vif, pending_idx),
>> +					    GNTMAP_host_map,
>> +					    vif->grant_tx_handle[pending_idx]);
>> +			vif->grant_tx_handle[pending_idx] =
>> +				NETBACK_INVALID_HANDLE;
>> +			++gop;
>
> Can we run out of space in the gop array?
No, unless the same thing happens as in my previous answer. BUG_ON() here 
as well?
>
>> +		}
>> +
>> +	} while (dp != vif->dealloc_prod);
>> +
>> +	vif->dealloc_cons = dc;
>
> No barrier here?
dealloc_cons is only used in the dealloc thread. dealloc_prod is used by 
both the callback and the thread, that's why we need the mb() above. 
Btw. this function comes from classic's net_tx_action_dealloc.

>
>> +	if (gop - vif->tx_unmap_ops > 0) {
>> +		int ret;
>> +		ret = gnttab_unmap_refs(vif->tx_unmap_ops,
>> +					vif->pages_to_unmap,
>> +					gop - vif->tx_unmap_ops);
>> +		if (ret) {
>> +			netdev_err(vif->dev, "Unmap fail: nr_ops %x ret %d\n",
>> +				   gop - vif->tx_unmap_ops, ret);
>> +			for (i = 0; i < gop - vif->tx_unmap_ops; ++i) {
>
> This seems liable to be a lot of spew on failure. Perhaps only log the
> ones where gop[i].status != success.
Ok, I'll change that.
>
> Have you considered whether or not the frontend can force this error to
> occur?
Not yet, good point. I guess if we successfully mapped the page, then 
there is no way for a frontend to prevent unmapping. But it's worth further 
checking.
>
>> +				netdev_err(vif->dev,
>> +					   " host_addr: %llx handle: %x status: %d\n",
>> +					   gop[i].host_addr,
>> +					   gop[i].handle,
>> +					   gop[i].status);
>> +			}
>> +			BUG();
>> +		}
>> +	}
>> +
>> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
>> +		xenvif_idx_release(vif, pending_idx_release[i],
>> +				   XEN_NETIF_RSP_OKAY);
>> +}
>> +
>> +
>>   /* Called after netfront has transmitted */
>>   int xenvif_tx_action(struct xenvif *vif, int budget)
>>   {
>> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>>   	vif->mmap_pages[pending_idx] = NULL;
>>   }
>>
>> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
>
> This is a single shot version of the batched xenvif_tx_dealloc_action
> version? Why not just enqueue the idx to be unmapped later?
This is called only from the NAPI instance. Using the dealloc ring would 
require synchronization with the callback, which can increase lock 
contention. On the other hand, if the guest sends small packets 
(<PAGE_SIZE), the TLB flushing can cause a performance penalty. The 
above-mentioned upcoming patch, which grant copies the header, can prevent 
that (together with Malcolm's Xen-side patch, which avoids the TLB flush 
if the page was not touched in Dom0).

>> @@ -1826,6 +1965,28 @@ int xenvif_kthread(void *data)
>>   	return 0;
>>   }
>>
>> +int xenvif_dealloc_kthread(void *data)
>
> Is this going to be a thread per vif?
Yes. In the first versions I put the dealloc in the NAPI instance 
(similarly to classic, where it happened in tx_action), but that had 
an unexpected performance penalty: the callback has to notify whoever 
does the dealloc that there is something to do. If it is the NAPI 
instance, it has to call napi_schedule. But if the packet was delivered 
to another guest, the callback is called from thread context, and 
according to Eric Dumazet, napi_schedule from thread context can 
significantly delay softirq handling. So the NAPI instance was delayed by 
milliseconds, and it caused terrible performance.
Moving this to the RX thread didn't seem like a wise decision, so I 
made a new thread.
Actually, in the next version of the patches I'll reintroduce 
__napi_schedule in the callback, because if the NAPI instance 
still has unconsumed requests but not enough pending slots, it 
deschedules itself, and the callback has to schedule it again, if:
- unconsumed requests in the ring < XEN_NETBK_LEGACY_SLOTS_MAX
- there are enough free pending slots to handle them
- and the NAPI instance is not scheduled yet
This should only really happen if netback is faster than the target 
devices, and in that case it doesn't mean a bottleneck.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 21:03:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 21:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFrob-0005dt-B8; Tue, 18 Feb 2014 21:02:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFroZ-0005do-VC
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 21:02:52 +0000
Received: from [85.158.137.68:31615] by server-6.bemta-3.messagelabs.com id
	C2/49-09180-B7AC3035; Tue, 18 Feb 2014 21:02:51 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392757369!1406193!1
X-Originating-IP: [209.85.217.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11969 invoked from network); 18 Feb 2014 21:02:50 -0000
Received: from mail-lb0-f178.google.com (HELO mail-lb0-f178.google.com)
	(209.85.217.178)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 21:02:50 -0000
Received: by mail-lb0-f178.google.com with SMTP id u14so12548680lbd.37
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 13:02:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=GPnzzdol1rrvJRWEwDYxfK5UBNwQXVlpiSdqb3RxKoM=;
	b=tGMkyT7Rpy+AlXWX1/eInegzE9SvIP2X7vdy09b2M4G8XfOphxaco9K2bmLIDx+JKH
	4gFbgTS33UBuatqnKZ0zFDxs02GOoC4LYAdzppqgy72iy95ruV1wikF+EAretCt9FMyX
	ExKbln1x0GAgzXfzh5NUtGvtwvBsmzOTpHvqqvaLopfZAF6CHDvNzRcrTaly9icMqyvB
	qPSyeg7Bh+E3rgSNtbWmWlB6Ww/d18m4DlcaR4mZoZLoiqIZiA0epMLnM64p5CLrKeQI
	AJc03MPNiAIS27JTm3Wxcc2wKWO4FzznSZRkrc7xVeFz0XWNSig8cRCE8tvpY/KPOL/Z
	MLrA==
X-Received: by 10.152.229.225 with SMTP id st1mr23487768lac.2.1392757369472;
	Tue, 18 Feb 2014 13:02:49 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 13:02:29 -0800 (PST)
In-Reply-To: <20140216105754.63738163@nehalam.linuxnetplumber.net>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 13:02:29 -0800
X-Google-Sender-Auth: EoTTYOaRpliNJyZ8eBAObqGE8Jg
Message-ID: <CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	kvm@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Feb 16, 2014 at 10:57 AM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> On Fri, 14 Feb 2014 18:59:37 -0800
> "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> It doesn't make sense for some interfaces to become a root bridge
>> at any point in time. One example is virtual backend interfaces
>> which rely on other entities on the bridge for actual physical
>> connectivity. They only provide virtual access.
>>
>> Device drivers that know they should never become part of the
>> root bridge have been using a trick of setting their MAC address
>> to a high broadcast MAC address such as FE:FF:FF:FF:FF:FF. Instead
>> of using these hacks lets the interfaces annotate its intent and
>> generalizes a solution for multiple drivers, while letting the
>> drivers use a random MAC address or one prefixed with a proper OUI.
>> This sort of hack is used by both qemu and xen for their backend
>> interfaces.
>>
>> Cc: Stephen Hemminger <stephen@networkplumber.org>
>> Cc: bridge@lists.linux-foundation.org
>> Cc: netdev@vger.kernel.org
>> Cc: linux-kernel@vger.kernel.org
>> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
>
> This is already supported in a more standard way via the root
> block flag.

Great! For documentation purposes, the root_block flag is a sysfs
attribute, added in 3.8 through commit 1007dd1a. The corresponding
netlink flag is IFLA_BRPORT_PROTECT, and it can be set via the iproute2
bridge utility or through sysfs:

mcgrof@garbanzo ~/linux (git::master)$ find /sys/ -name root_block
/sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/net/eth0/brport/root_block
/sys/devices/vif-3-0/net/vif3.0/brport/root_block
/sys/devices/virtual/net/vif3.0-emu/brport/root_block

mcgrof@garbanzo ~/devel/iproute2 (git::master)$ cat /sys/devices/vif-3-0/net/vif3.0/brport/root_block
0
mcgrof@garbanzo ~/devel/iproute2 (git::master)$ sudo bridge link set dev vif3.0 root_block on
mcgrof@garbanzo ~/devel/iproute2 (git::master)$ cat /sys/devices/vif-3-0/net/vif3.0/brport/root_block
1

So if we want to avoid the MAC address hack for keeping an interface
from becoming the root port, userspace would need to be updated to
simply set this attribute after adding the device to the bridge. Based
on Zoltan's feedback there seem to be use cases for not always enabling
this on all xen-netback interfaces, so we can just punt this to
userspace for the topologies that require it.

The original motivation for this series was to avoid the IPv6
duplicate address incurred by the MAC address hack for avoiding the
root bridge. Given that Zoltan also noted a use case whereby IPv4 and
IPv6 addresses can be assigned to the backend interfaces, we should be
able to avoid the duplicate address situation for IPv6 by using a
proper random MAC address *once* userspace has also been updated to
use IFLA_BRPORT_PROTECT. New userspace can't and won't need to set
this flag for older kernels (older than 3.8), as root_block is not
implemented on those kernels and the MAC address hack would still be
used there. This strategy, however, does put a requirement on new
kernels to use new userspace, as otherwise the MAC address workaround
would not be in place and root_block would not take effect.

  Luis



From xen-devel-bounces@lists.xen.org Tue Feb 18 21:20:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 21:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFs4r-0005o3-1n; Tue, 18 Feb 2014 21:19:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFs4o-0005ny-NS
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 21:19:38 +0000
Received: from [193.109.254.147:39047] by server-5.bemta-14.messagelabs.com id
	79/95-16688-A6EC3035; Tue, 18 Feb 2014 21:19:38 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392758376!1549260!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27532 invoked from network); 18 Feb 2014 21:19:37 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 21:19:37 -0000
Received: by mail-lb0-f173.google.com with SMTP id s7so11047254lbd.4
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 13:19:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=diwOVDBlhqAgViZexhkZXi/0w++NNXpseXtewVF8ApI=;
	b=ebpEHtYGRDuWjyovxhK5cDbgjJsGYPz2eCeZbcWf6kvCM72JpfespJJnO/ZmxPIeLg
	T0tVqxFsHQm8imgmAinpWx0ne//XeHxfBAfwANPH5nXAaMBQyfJXMWLFg1gCIL4QnIvL
	VBKdaJOo9gzIhyhPDiw3g3dzPO4mSljxKKpTiugq8E+t4pASuSE09cndiE/pjNNczpo8
	7xRsIXdMDYovmpyo83GzIngwWzdnS2lfP40NjcEDQO7kxyyt2Bf5TFM29K7I43kEK7nh
	8niAkSnOK6FTtzE30carLF8/8LdNZsELFD0xaRmrbCjKAOnyKq06pWm4Mw5YLWDksRm6
	vIng==
X-Received: by 10.152.229.225 with SMTP id st1mr23522479lac.2.1392758376135;
	Tue, 18 Feb 2014 13:19:36 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 13:19:15 -0800 (PST)
In-Reply-To: <1392668638.21106.5.camel@dcbw.local>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 13:19:15 -0800
X-Google-Sender-Auth: yXXs7OngIMoZKTo-gE-dCn-y_2c
Message-ID: <CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
To: Dan Williams <dcbw@redhat.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, xen-devel@lists.xenproject.org,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams <dcbw@redhat.com> wrote:
> On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> Some interfaces do not need to have any IPv4 or IPv6
>> addresses, so enable an option to specify this. One
>> example where this is observed are virtualization
>> backend interfaces which just use the net_device
>> constructs to help with their respective frontends.
>>
>> This should optimize boot time and complexity on
>> virtualization environments for each backend interface
>> while also avoiding triggering SLAAC and DAD, which is
>> simply pointless for these type of interfaces.
>
> Would it not be better/cleaner to use disable_ipv6 and then add a
> disable_ipv4 sysctl, then use those with that interface?

Sure, but note that both the disable_ipv6 and accept_dad sysctl
parameters are global. ipv4 and ipv6 interfaces are created upon
NETDEV_REGISTER, which gets triggered when a driver calls
register_netdev(). The goal of this patch was to enable an early
optimization for drivers that never need ipv4 or ipv6 interfaces.

Zoltan has noted some use cases for IPv4 or IPv6 addresses on
backends, though, so this is no longer applicable as a requirement.
The ipv4 sysctl still seems like a reasonable approach for enabling
network optimizations in topologies where it's known we won't need the
addresses, but we'd need to consider a much more granular solution,
not just a global one as it is now for disable_ipv6, and we'd also
have to figure out a clean way to do this without incurring the cost
of early address interface addition upon register_netdev().

Given that we have a use case for ipv4 and ipv6 addresses on
xen-netback, we no longer have an immediate use case for such early
optimization primitives, so I'll drop this.

> The IFF_SKIP_IP seems to duplicate at least part of what disable_ipv6 is
> already doing.

disable_ipv6 is global; the goal was to make this granular and skip
the cost at early boot, but it's been clarified that we don't need this.

   Luis


From xen-devel-bounces@lists.xen.org Tue Feb 18 21:20:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 21:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFs4r-0005o3-1n; Tue, 18 Feb 2014 21:19:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFs4o-0005ny-NS
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 21:19:38 +0000
Received: from [193.109.254.147:39047] by server-5.bemta-14.messagelabs.com id
	79/95-16688-A6EC3035; Tue, 18 Feb 2014 21:19:38 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392758376!1549260!1
X-Originating-IP: [209.85.217.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27532 invoked from network); 18 Feb 2014 21:19:37 -0000
Received: from mail-lb0-f173.google.com (HELO mail-lb0-f173.google.com)
	(209.85.217.173)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 21:19:37 -0000
Received: by mail-lb0-f173.google.com with SMTP id s7so11047254lbd.4
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 13:19:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=diwOVDBlhqAgViZexhkZXi/0w++NNXpseXtewVF8ApI=;
	b=ebpEHtYGRDuWjyovxhK5cDbgjJsGYPz2eCeZbcWf6kvCM72JpfespJJnO/ZmxPIeLg
	T0tVqxFsHQm8imgmAinpWx0ne//XeHxfBAfwANPH5nXAaMBQyfJXMWLFg1gCIL4QnIvL
	VBKdaJOo9gzIhyhPDiw3g3dzPO4mSljxKKpTiugq8E+t4pASuSE09cndiE/pjNNczpo8
	7xRsIXdMDYovmpyo83GzIngwWzdnS2lfP40NjcEDQO7kxyyt2Bf5TFM29K7I43kEK7nh
	8niAkSnOK6FTtzE30carLF8/8LdNZsELFD0xaRmrbCjKAOnyKq06pWm4Mw5YLWDksRm6
	vIng==
X-Received: by 10.152.229.225 with SMTP id st1mr23522479lac.2.1392758376135;
	Tue, 18 Feb 2014 13:19:36 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 13:19:15 -0800 (PST)
In-Reply-To: <1392668638.21106.5.camel@dcbw.local>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 13:19:15 -0800
X-Google-Sender-Auth: yXXs7OngIMoZKTo-gE-dCn-y_2c
Message-ID: <CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
To: Dan Williams <dcbw@redhat.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, xen-devel@lists.xenproject.org,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams <dcbw@redhat.com> wrote:
> On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> Some interfaces do not need to have any IPv4 or IPv6
>> addresses, so enable an option to specify this. One
>> example where this is observed is virtualization
>> backend interfaces which just use the net_device
>> constructs to help with their respective frontends.
>>
>> This should optimize boot time and complexity on
>> virtualization environments for each backend interface
>> while also avoiding triggering SLAAC and DAD, which is
>> simply pointless for these types of interfaces.
>
> Would it not be better/cleaner to use disable_ipv6 and then add a
> disable_ipv4 sysctl, then use those with that interface?

Sure, but note that both the disable_ipv6 and accept_dad sysctl
parameters are global. IPv4 and IPv6 interfaces are created upon
NETDEV_REGISTER, which is triggered when a driver calls
register_netdev(). The goal of this patch was to enable an early
optimization for drivers that never need IPv4 or IPv6
interfaces.

Zoltan has noted some use cases of IPv4 or IPv6 addresses on
backends, so this is no longer applicable as a requirement. A
disable_ipv4 sysctl still seems like a reasonable approach to
optimizing the network in topologies where we know we won't need
it, but we'd need to consider a much more granular solution, not
just a global one as disable_ipv6 is now, and we'd also have to
figure out a clean way to avoid the cost of early address interface
addition upon register_netdev().

Given that we have a use case for IPv4 and IPv6 addresses on
xen-netback, we no longer have an immediate need for such early
optimization primitives, so I'll drop this.

> The IFF_SKIP_IP seems to duplicate at least part of what disable_ipv6 is
> already doing.

disable_ipv6 is global; the goal was to make this granular and skip
the cost during early boot, but it's been clarified we don't need this.

   Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 21:25:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 21:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFsAI-0005w3-TV; Tue, 18 Feb 2014 21:25:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WFsAH-0005vy-7p
	for xen-devel@lists.xen.org; Tue, 18 Feb 2014 21:25:17 +0000
Received: from [85.158.137.68:33426] by server-15.bemta-3.messagelabs.com id
	DB/ED-19263-CBFC3035; Tue, 18 Feb 2014 21:25:16 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392758715!2732065!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25194 invoked from network); 18 Feb 2014 21:25:15 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 18 Feb 2014 21:25:15 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:50767 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WFs9L-0008Nn-CS; Tue, 18 Feb 2014 22:24:19 +0100
Date: Tue, 18 Feb 2014 22:25:13 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1772884781.20140218222513@eikelenboom.it>
To: Wei Liu <wei.liu2@citrix.com>, ANNIE LI <annie.li@oracle.com>, 
	Paul Durrant <Paul.Durrant@citrix.com>, 
	Zoltan Kiss <zoltan.kiss@citrix.com>
MIME-Version: 1.0
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

I'm currently having some network troubles with Xen and recent Linux kernels.

- When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
  I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html

  In the guest:
  [57539.859584] net eth0: rx->offset: 0, size: 4294967295
  [57539.859599] net eth0: rx->offset: 0, size: 4294967295
  [57539.859605] net eth0: rx->offset: 0, size: 4294967295
  [57539.859610] net eth0: Need more slots
  [58157.675939] net eth0: Need more slots
  [58725.344712] net eth0: Need more slots
  [61815.849180] net eth0: rx->offset: 0, size: 4294967295
  [61815.849205] net eth0: rx->offset: 0, size: 4294967295
  [61815.849216] net eth0: rx->offset: 0, size: 4294967295
  [61815.849225] net eth0: Need more slots

  Xen reports:
  (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
  (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
  (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
  (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
  (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
  (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
  (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
  (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
  (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
  (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
  (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
  (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
  (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
  (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
  (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
  (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
  (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
  (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
  (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
  (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
  (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377



Another issue with networking is when running both dom0 and domUs with a 3.14-rc3 kernel:
  - I can ping the guests from dom0
  - I can ping dom0 from the guests
  - But I can't ssh into the guests or reach them over HTTP
  - I don't see any relevant error messages ...
  - This is with the same system and kernel config as the 3.14-rc3 and 3.13 combination above
    (which previously worked fine)

--

Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 21:30:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 21:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFsFJ-000653-Mn; Tue, 18 Feb 2014 21:30:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WFsFH-00064y-To
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 21:30:28 +0000
Received: from [85.158.137.68:62314] by server-3.bemta-3.messagelabs.com id
	73/84-14520-3F0D3035; Tue, 18 Feb 2014 21:30:27 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392759025!1408988!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6033 invoked from network); 18 Feb 2014 21:30:26 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 21:30:26 -0000
Received: by mail-la0-f43.google.com with SMTP id pv20so12915497lab.30
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 13:30:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=yBKmjNLGAJ7lmUH2uNeSMxfOarTyydnJ7x/AhgAvjtQ=;
	b=G9V6tIPMMlWPgsiYFieba7q4fpKfmewAb+MJp745HzEL7TtwQpTtIoJm3IzGXP7PGX
	LLvWdCNj5HLtuZ2CZCdB5mdc5kF0Z9Lh5UGzncgfiO07yIq+X2JbJt7phyaq0f4Rmrpp
	lahluAWUQSdVXCg5/LNYgK9z1GZiwIRRfBbB6LUiCwPNT4OTLoZmATzyB43tRdvIKSEP
	A8cayM9nRfyg4hWQgfQJgz/gWjXiieZbJb4oa973qi2OYS9kgRVhPnu1lx/8PyOG0RLB
	GuO8hf9KggBo5BLPgxNRcLQJTiHGg9Utb34S2jTOoIftHZtnKjbWKR/M+yyTe802HGNJ
	2TQw==
X-Received: by 10.112.139.232 with SMTP id rb8mr2940247lbb.53.1392759025311;
	Tue, 18 Feb 2014 13:30:25 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Tue, 18 Feb 2014 13:30:05 -0800 (PST)
In-Reply-To: <1392722575.11080.28.camel@kazak.uk.xensource.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-4-git-send-email-mcgrof@do-not-panic.com>
	<5301E496.40802@citrix.com>
	<1392722575.11080.28.camel@kazak.uk.xensource.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Tue, 18 Feb 2014 13:30:05 -0800
X-Google-Sender-Auth: v7NLvHQv01woS3ng10-rJ9yZQ2A
Message-ID: <CAB=NE6WrdLNWtHB7cLqjyR9pWWyvfAZMpWORAH19zLEd3XT91Q@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 3/4] xen-netback: use a random MAC address
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 3:22 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Mon, 2014-02-17 at 10:29 +0000, David Vrabel wrote:
>> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>> > From: "Luis R. Rodriguez" <mcgrof@suse.com>
>> >
>> > The purpose of using a static MAC address of FE:FF:FF:FF:FF:FF
>> > was to prevent our backend interfaces from being used by the
>> > bridge and nominating our interface as a root bridge. This was
>> > possible given that the bridge code will use the lowest MAC
>> > address for a port once a new interface gets added to the bridge.
>> > The bridge code has a generic feature now to allow interfaces
>> > to opt out from root bridge nominations, use that instead.
>> [...]
>> > --- a/drivers/net/xen-netback/interface.c
>> > +++ b/drivers/net/xen-netback/interface.c
>> > @@ -42,6 +42,8 @@
>> >  #define XENVIF_QUEUE_LENGTH 32
>> >  #define XENVIF_NAPI_WEIGHT  64
>> >
>> > +static const u8 xen_oui[3] = { 0x00, 0x16, 0x3e };
>>
>> You shouldn't use a vendor prefix with a random MAC address.  You should
>> set the locally administered bit and clear the multicast/unicast bit and
>> randomize the remaining 46 bits.
>
> I'd have thought that eth_hw_addr_random would get this right, *checks*
> yes it does. And then this patch tramples over the top three bytes.
>
> Might there be any requirement to have a specific MAC on the vif device?
> IOW do we need to figure out a way to plumb this through the Xen tools
> (perhaps having the vif script sort it out).

Based on Stephen's feedback we should be setting IFLA_BRPORT_PROTECT
on the xen-netback and TAP interfaces in topologies where it makes
sense, prior to adding them to the bridge. Userspace can surely deal
with the MAC address, but I believe removing the static MAC address
would be good once we get userspace to use the IFLA_BRPORT_PROTECT
flag, to avoid the IPv6 duplication issue incurred by the current
static MAC address. The MAC address consideration remains given that,
as per Zoltan, there are topologies where the xen-netback interfaces
can make use of either an IPv4 or IPv6 address.

> Speaking of which -- do the Xen tools not overwrite this random mac from
> xen-network-common.sh:_setup_bridge_port. What is the plan to change
> that (in a forwards/backwards compatible manner).

I'm not seeing that happen now?

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 18 21:43:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Feb 2014 21:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFsRX-0006EY-1d; Tue, 18 Feb 2014 21:43:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stephen@networkplumber.org>) id 1WFsRU-0006ET-Np
	for xen-devel@lists.xenproject.org; Tue, 18 Feb 2014 21:43:04 +0000
Received: from [85.158.137.68:44044] by server-2.bemta-3.messagelabs.com id
	D5/75-06531-7E3D3035; Tue, 18 Feb 2014 21:43:03 +0000
X-Env-Sender: stephen@networkplumber.org
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392759781!1187480!1
X-Originating-IP: [209.85.160.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4920 invoked from network); 18 Feb 2014 21:43:03 -0000
Received: from mail-pb0-f42.google.com (HELO mail-pb0-f42.google.com)
	(209.85.160.42)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	18 Feb 2014 21:43:03 -0000
Received: by mail-pb0-f42.google.com with SMTP id jt11so17380958pbb.29
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 13:43:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:in-reply-to
	:references:mime-version:content-type:content-transfer-encoding;
	bh=ZMhe5vMXuTqkgYIZRW/9cVz9bSO2F1uVNrVblCBN2pA=;
	b=Y2S/G/FfCNmKG3BuOULl504Go11IHHoQbCmZHMG4ajjLFKyl415/aavkRSqn2mXWen
	LU3v4bJR3AtubjYzOxAtSFnu2ZO9K0j9ojVku4UhFjNaxAw1iJcmyhPC5qyVcOyXXMCt
	1llmIOsWJGGXDrOR5QLRc8x3GSGUp40FPgp54NQCsBF++GaSuoOvaiCr615UlDaNQEnt
	TcH9fd7TA+kiFBwAIs4KmEhmmWVbPoj3rdcAB274JLrxEmCqC/dphqj3J4sKLLbkkL/L
	9+yAhO6R5pC3zeqUXezJewo/7Ck3U5WPLd8LBj+1oISNt598cGkNm7muEVAB/OHq4FVz
	kHAw==
X-Gm-Message-State: ALoCoQmHk/DAMzjr3SpY+sW2z9hGhZaoeQDLsdxxVhQX1WAOJm1DGtJLwFbh+4p2G8uo8b3Rd0/8
X-Received: by 10.68.133.229 with SMTP id pf5mr35717383pbb.115.1392759781235; 
	Tue, 18 Feb 2014 13:43:01 -0800 (PST)
Received: from nehalam.linuxnetplumber.net
	(static-50-53-83-51.bvtn.or.frontiernet.net. [50.53.83.51])
	by mx.google.com with ESMTPSA id
	bz4sm59225235pbb.12.2014.02.18.13.43.00 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 13:43:00 -0800 (PST)
Date: Tue, 18 Feb 2014 13:42:57 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Message-ID: <20140218134257.667efe23@nehalam.linuxnetplumber.net>
In-Reply-To: <CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Dan Williams <dcbw@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014 13:19:15 -0800
"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:

> Sure, but note that both the disable_ipv6 and accept_dad sysctl
> parameters are global. IPv4 and IPv6 interfaces are created upon
> NETDEV_REGISTER, which is triggered when a driver calls
> register_netdev(). The goal of this patch was to enable an early
> optimization for drivers that never need IPv4 or IPv6
> interfaces.

The trick with ipv6 is to register the device, then have userspace
do the ipv6 sysctl before bringing the device up.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Received: from nehalam.linuxnetplumber.net
	(static-50-53-83-51.bvtn.or.frontiernet.net. [50.53.83.51])
	by mx.google.com with ESMTPSA id
	bz4sm59225235pbb.12.2014.02.18.13.43.00 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Tue, 18 Feb 2014 13:43:00 -0800 (PST)
Date: Tue, 18 Feb 2014 13:42:57 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Message-ID: <20140218134257.667efe23@nehalam.linuxnetplumber.net>
In-Reply-To: <CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Dan Williams <dcbw@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014 13:19:15 -0800
"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:

> Sure, but note that both the disable_ipv6 and accept_dad sysctl
> parameters are global. ipv4 and ipv6 interfaces are created upon
> NETDEV_REGISTER, which will get triggered when a driver calls
> register_netdev(). The goal of this patch was to enable an early
> optimization for drivers that never need ipv4 or ipv6
> interfaces.

The trick with ipv6 is to register the device, then have userspace
do the ipv6 sysctl before bringing the device up.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 00:24:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 00:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFuxA-00074q-RF; Wed, 19 Feb 2014 00:23:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linus971@gmail.com>) id 1WFux8-00074l-OJ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 00:23:55 +0000
Received: from [85.158.137.68:24559] by server-1.bemta-3.messagelabs.com id
	FE/A5-17293-999F3035; Wed, 19 Feb 2014 00:23:53 +0000
X-Env-Sender: linus971@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392769432!2748183!1
X-Originating-IP: [209.85.220.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17410 invoked from network); 19 Feb 2014 00:23:53 -0000
Received: from mail-vc0-f178.google.com (HELO mail-vc0-f178.google.com)
	(209.85.220.178)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 00:23:53 -0000
Received: by mail-vc0-f178.google.com with SMTP id ik5so13784526vcb.37
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 16:23:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=W49TZ4kT77moYZ1xupPhbZKWjaWbRJgDyexMU5lqlrA=;
	b=qSc+CB8EXUAxfp7IICI2ftCKxhW+B8VtuD/aBTaAg7PF2k0nv6MXjfUs/lgrjzGv5j
	ozJReXcp1ceeSktjDi3wB9Nfo/TonO+gpsl218g96ONnx2EJdOEAkoIc/NwHOFszy6ID
	LVer8DyQT/+ltbbC6f3V0OvsWzWqmOFsTmPn7xs84wxprtyS34qo2hX15zehBbOZ6sqc
	e5w1ZUpTT/nYfQ3dQFfS8NdrVRYt1Da4YS0m7zfBlWmK2jO5lUCO0yk8VewLFAWQfeaw
	2Wukvxa0HXGsjTymoe+5Di/9uauiQ5nnbkkBs5Aykr1rLdp8AtbDUiHXX8I45mKfqPSk
	TUUQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=linux-foundation.org; s=google;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=W49TZ4kT77moYZ1xupPhbZKWjaWbRJgDyexMU5lqlrA=;
	b=FFAgiwPOPZnBKN9hB8y7sDaC7DreKzl2aoyAyq4qxvPGG0An+VFfawfg8mQLpL+C4t
	JRbVYdoKzxfYfKNEdL26xp9rp1LLjfwhRwN/twIhk6pmw+4xskl6BZLWRQBVb+yPqcnx
	Og+N/xPuu1XfhywlK4N5TUlOYZAxiU7P8DRmQ=
MIME-Version: 1.0
X-Received: by 10.52.107.35 with SMTP id gz3mr19516475vdb.8.1392769431682;
	Tue, 18 Feb 2014 16:23:51 -0800 (PST)
Received: by 10.220.13.2 with HTTP; Tue, 18 Feb 2014 16:23:51 -0800 (PST)
In-Reply-To: <20140219000352.GP21483@n2100.arm.linux.org.uk>
References: <20140217234644.GA5171@rmk-PC.arm.linux.org.uk>
	<CA+55aFy7ApiQRudxPAd3v5k_apppxRnePHb1HZPH13erqhmX=g@mail.gmail.com>
	<20140219000352.GP21483@n2100.arm.linux.org.uk>
Date: Tue, 18 Feb 2014 16:23:51 -0800
X-Google-Sender-Auth: yTpTt7mTpJuaYbabsp8VenQnkRQ
Message-ID: <CA+55aFxZz9ubkj72KHy8PhpsjZb6D7LD+v6jHJTtsD4N6AWPzw@mail.gmail.com>
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Russell King - ARM Linux <linux@arm.linux.org.uk>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Linux SCSI List <linux-scsi@vger.kernel.org>,
	James Bottomley <James.Bottomley@parallels.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-mm <linux-mm@kvack.org>, ARM SoC <arm@kernel.org>,
	xen-devel@lists.xenproject.org, Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [Xen-devel] [GIT PULL] ARM fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 4:03 PM, Russell King - ARM Linux
<linux@arm.linux.org.uk> wrote:
>
> Almost, but not quite.  If we're going to avoid u64, then dma_addr_t
> would be the right type here because we're talking about DMA addresses.

Well, phys_addr_t had better be as big as dma_addr_t, because that's
what the resource management handles.

> We could also switch to keeping this as PFNs - block internally converts
> it to a PFN anyway:

Yeah, that definitely sounds like it would be a good idea.

> Maybe blk_queue_bounce_pfn_limit() so we ensure all users get caught?
>
>> That said, it's admittedly a disgusting name, and I wonder if we
>> should introduce a nicer-named "pfn_to_phys()" that matches the other
>> "xyz_to_abc()" functions we have (including "pfn_to_virt()")
>
> We have these on ARM:
>
> arch/arm/include/asm/memory.h:#define   __pfn_to_phys(pfn)      ((phys_addr_t)(pfn) << PAGE_SHIFT)
> arch/arm/include/asm/memory.h:#define   __phys_to_pfn(paddr)    ((unsigned long)((paddr) >> PAGE_SHIFT))
>
> it probably makes sense to pick those right out, maybe losing the
> __ prefix on them.

Yup.

>>   __va(PFN_PHYS(page_to_pfn(page)));
>
> Wow.  Two things spring to mind there... highmem pages, and don't we
> already have page_address() for that?

Well, that code clearly cannot handle highmem anyway, but yes, it
really smells like xen should use page_address().

Adding Xen people who I didn't add the last time around.

             Linus

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 01:18:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 01:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFvnT-0002kG-7d; Wed, 19 Feb 2014 01:17:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFvnR-0002kB-Uw
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 01:17:58 +0000
Received: from [85.158.139.211:36552] by server-14.bemta-5.messagelabs.com id
	E6/7C-27598-54604035; Wed, 19 Feb 2014 01:17:57 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392772674!4759363!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6487 invoked from network); 19 Feb 2014 01:17:56 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-206.messagelabs.com with SMTP;
	19 Feb 2014 01:17:56 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 17:13:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,503,1389772800"; d="scan'208";a="457769894"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 17:17:13 -0800
Received: from fmsmsx118.amr.corp.intel.com (10.18.116.18) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:17:12 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx118.amr.corp.intel.com (10.18.116.18) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:17:12 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 19 Feb 2014 09:17:09 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, "Xu, Dongxiao" <dongxiao.xu@intel.com>, 
	"Zhang, Xiantao" <xiantao.zhang@intel.com>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEIAIfYa+///IvYCAAVNtYIAACJYAgAFmyJA=
Date: Wed, 19 Feb 2014 01:17:08 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEB9B@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
	<53035628020000780011D3EE@nat28.tlf.novell.com>
In-Reply-To: <53035628020000780011D3EE@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, "Dugger, Donald D" <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-18:
>>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Jan Beulich wrote on 2014-02-17:
>>>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> And second, I have been fighting with finding both conditions and
>>>> (eventually) the root cause of a severe performance regression
>>>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>>>> became _much_ worse after adding in the patch here (while in fact
>>>> I had hoped it might help with the originally observed
>>>> degradation): X startup fails due to timing out, and booting the
>>>> guest now takes about 20 minutes). I didn't find the root cause of
>>>> this yet, but meanwhile I know that
>>>> - the same isn't observable on SVM
>>>> - there's no problem when forcing the domain to use shadow mode
>>>> - there's no need for any device to actually be assigned to the guest
>>>> - the regression is very likely purely graphics related (based on the
>>>>   observation that when running something that regularly but not
>>>>   heavily updates the screen with X up, the guest consumes a full
>>>>   CPU's worth of processing power, yet when that updating doesn't
>>>>   happen, CPU consumption goes down, and it goes further down when
>>>>   shutting down X altogether - at least as long as the patch here
>>>>   doesn't get involved).
>>>> This I'm observing on a Westmere box (and I didn't notice it
>>>> earlier because that's one of those where due to a chipset erratum
>>>> the IOMMU gets turned off by default), so it's possible that this
>>>> can't be seen on more modern hardware. I'll hopefully find time
>>>> today to check this on the one newer (Sandy Bridge) box I have.
>>> 
>>> Just got done with trying this: By default, things work fine there. As
>>> soon as I use "iommu=no-snoop", things go bad (even worse than one the
>>> older box - the guest is consuming about 2.5 CPUs worth of processing
>>> power _without_ the patch here in use, so I don't even want to think
>>> about trying it there); I guessed that to be another of the potential
>>> sources of the problem since on that older box the respective hardware
>>> feature is unavailable.
>>> 
>>> While I'll try to look into this further, I guess I have to defer
>>> to our VT-d specialists at Intel at this point...
>>> 
>> 
>> Hi, Jan,
>> 
>> I tried to reproduce it. But unfortunately, I cannot reproduce it in
>> my box (Sandy Bridge EP) with the latest Xen (including my patch). I
>> guess my configuration or steps may be wrong; here is mine:
>> 
>> 1. add iommu=1,no-snoop to the Xen cmd line:
>> (XEN) Intel VT-d Snoop Control not enabled.
>> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
>> (XEN) Intel VT-d Queued Invalidation enabled.
>> (XEN) Intel VT-d Interrupt Remapping enabled.
>> (XEN) Intel VT-d Shared EPT tables enabled.
>> 
>> 2. boot a rhel6u4 guest.
>> 
>> 3. after the guest boots up, run startx inside the guest.
>> 
>> 4. after a few seconds, the X window shows and I didn't see any error.
>> Also the CPU utilization is about 1.7%.
>> 
>> Anything wrong?
> 
> Nothing at all, as it turns out. The regression is due to Dongxiao's
> 
> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
> 
> which I have in my tree as part of various things pending for 4.5.
> And which at the first, second, and third glance looks pretty innocent
> (IOW I still have to find out _why_ it is wrong).
> 
> In any case - I'm very sorry for the false alarm.
> 

It doesn't matter. On the contrary, we need to thank you for helping us fix this issue. :)

BTW, I still cannot reproduce it in my box, even when I use SLES 11 SP3 as the guest.

> Jan


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 01:18:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 01:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFvnT-0002kG-7d; Wed, 19 Feb 2014 01:17:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFvnR-0002kB-Uw
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 01:17:58 +0000
Received: from [85.158.139.211:36552] by server-14.bemta-5.messagelabs.com id
	E6/7C-27598-54604035; Wed, 19 Feb 2014 01:17:57 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392772674!4759363!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6487 invoked from network); 19 Feb 2014 01:17:56 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-6.tower-206.messagelabs.com with SMTP;
	19 Feb 2014 01:17:56 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 17:13:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,503,1389772800"; d="scan'208";a="457769894"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 17:17:13 -0800
Received: from fmsmsx118.amr.corp.intel.com (10.18.116.18) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:17:12 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx118.amr.corp.intel.com (10.18.116.18) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:17:12 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 19 Feb 2014 09:17:09 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, George Dunlap
	<george.dunlap@eu.citrix.com>, "Xu, Dongxiao" <dongxiao.xu@intel.com>, 
	"Zhang, Xiantao" <xiantao.zhang@intel.com>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEIAIfYa+///IvYCAAVNtYIAACJYAgAFmyJA=
Date: Wed, 19 Feb 2014 01:17:08 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEB9B@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
	<53035628020000780011D3EE@nat28.tlf.novell.com>
In-Reply-To: <53035628020000780011D3EE@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, "Dugger, Donald D" <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-18:
>>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> Jan Beulich wrote on 2014-02-17:
>>>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>> And second, I have been fighting with finding both conditions and
>>>> (eventually) the root cause of a severe performance regression
>>>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>>>> became _much_ worse after adding in the patch here (while in fact
>>>> I had hoped it might help with the originally observed
>>>> degradation): X startup fails due to timing out, and booting the
>>>> guest now takes about 20 minutes). I didn't find the root cause of
>>>> this yet, but meanwhile I know that
>>>> - the same isn't observable on SVM
>>>> - there's no problem when forcing the domain to use shadow
>>>>   mode - there's no need for any device to actually be assigned to the
>>>>   guest - the regression is very likely purely graphics related (based
>>>>   on the observation that when running something that regularly but not
>>>>   heavily updates the screen with X up, the guest consumes a full CPU's
>>>>   worth of processing power, yet when that updating doesn't
>>>> happen,
> CPU
>>>>   consumption goes down, and it goes further down when shutting
>>>> down
> X
>>>>   altogether - at least as log as the patch here doesn't get involved).
>>>> This I'm observing on a Westmere box (and I didn't notice it
>>>> earlier because that's one of those where due to a chipset erratum
>>>> the IOMMU gets turned off by default), so it's possible that this
>>>> can't be seen on more modern hardware. I'll hopefully find time
>>>> today to check this on the one newer (Sandy Bridge) box I have.
>>> 
>>> Just got done with trying this: By default, things work fine there. As
>>> soon as I use "iommu=no-snoop", things go bad (even worse than one the
>>> older box - the guest is consuming about 2.5 CPUs worth of processing
>>> power _without_ the patch here in use, so I don't even want to think
>>> about trying it there); I guessed that to be another of the potential
>>> sources of the problem since on that older box the respective hardware
>>> feature is unavailable.
>>> 
>>> While I'll try to look into this further, I guess I have to defer
>>> to our VT-d specialists at Intel at this point...
>>> 
>> 
>> Hi, Jan,
>> 
>> I tried to reproduce it, but unfortunately I cannot reproduce it on
>> my box (Sandy Bridge EP) with the latest Xen (including my patch). I
>> guess my configuration or steps may be wrong; here is mine:
>> 
>> 1. add iommu=1,no-snoop to the Xen command line:
>> (XEN) Intel VT-d Snoop Control not enabled.
>> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
>> (XEN) Intel VT-d Queued Invalidation enabled.
>> (XEN) Intel VT-d Interrupt Remapping enabled.
>> (XEN) Intel VT-d Shared EPT tables enabled.
>> 
>> 2. boot a RHEL6u4 guest.
>> 
>> 3. after the guest boots up, run startx inside the guest.
>> 
>> 4. after a few seconds, the X window shows up and I didn't see any
>> errors. Also, the CPU utilization is about 1.7%.
>> 
>> Anything wrong?
> 
> Nothing at all, as it turns out. The regression is due to Dongxiao's
> 
> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
> 
> which I have in my tree as part of various things pending for 4.5.
> And which at the first, second, and third glance looks pretty innocent
> (IOW I still have to find out _why_ it is wrong).
> 
> In any case - I'm very sorry for the false alarm.
> 

No problem at all - on the contrary, we need to thank you for helping us find this issue. :)

BTW, I still cannot reproduce it on my box, even when I use SLES 11 SP3 as the guest.

> Jan


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 01:29:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 01:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFvyF-0002sy-H7; Wed, 19 Feb 2014 01:29:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFvyC-0002st-5v
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 01:29:06 +0000
Received: from [85.158.143.35:49985] by server-2.bemta-4.messagelabs.com id
	64/02-10891-FD804035; Wed, 19 Feb 2014 01:29:03 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392773342!6644694!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22819 invoked from network); 19 Feb 2014 01:29:02 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-2.tower-21.messagelabs.com with SMTP;
	19 Feb 2014 01:29:02 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga102.fm.intel.com with ESMTP; 18 Feb 2014 17:29:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,503,1389772800"; d="scan'208";a="483770159"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga002.fm.intel.com with ESMTP; 18 Feb 2014 17:29:01 -0800
Received: from fmsmsx116.amr.corp.intel.com (10.18.116.20) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:29:00 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	fmsmsx116.amr.corp.intel.com (10.18.116.20) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:29:00 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Wed, 19 Feb 2014 09:28:59 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <george.dunlap@eu.citrix.com>, Jan Beulich
	<JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPJjaWGU8BoIe0JUmrwANmd1LxUZquITGwgAIV/CWAAJImEIAIfYa+///GNICAAAOsgIABNL/wgAAPrwCAAX+DcA==
Date: Wed, 19 Feb 2014 01:28:59 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBC9@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
	<53033544.2000409@eu.citrix.com>
In-Reply-To: <53033544.2000409@eu.citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Zhang,
	Xiantao" <xiantao.zhang@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote on 2014-02-18:
> On 02/18/2014 03:14 AM, Zhang, Yang Z wrote:
>> Jan Beulich wrote on 2014-02-17:
>>>>>> On 17.02.14 at 15:51, George Dunlap
>>>>>> <george.dunlap@eu.citrix.com>
>>> wrote:
>>>> On 02/17/2014 10:18 AM, Jan Beulich wrote:
>>>>> Actually I'm afraid there are two problems with this patch:
>>>>> 
>>>>> For one, is enabling "global" log dirty mode still going to work
>>>>> after VRAM-only mode already got enabled? I ask because the
>>>>> paging_mode_log_dirty() check which paging_log_dirty_enable()
>>>>> does first thing suggests otherwise to me (i.e. the now
>>>>> conditional setting of all p2m entries to p2m_ram_logdirty would
>>>>> seem to never get executed). IOW I would think that we're now
>>>>> lacking a control operation allowing the transition from dirty
>>>>> VRAM tracking mode to full log dirty mode.
>>>> Hrm, well, so far playing with this I've been unable to get a
>>>> localhost migrate to fail with the vncviewer attached.  Which
>>>> seems a bit strange...
>>> Not necessarily - it may depend on how the tools actually do this:
>>> They might temporarily disable log dirty mode altogether, just to
>>> re-enable full mode again right away. But this specific usage of
>>> the hypervisor interface wouldn't (to me) mean that other tool
>>> stacks might not be doing this differently.
>> You are right. Before migration, libxc will disable log dirty mode
>> if it was already enabled, and then re-enable it. So when I was
>> developing this patch, I thought it was OK for migration.
>> 
>> If there really are other tool stacks that will also use this interface
>> (is that true?), perhaps my original patch, which checks
>> paging_mode_log_dirty(d) && log_global, is better:
> 
> It turns out that the reason I couldn't get a crash was that libxc
> was actually paying attention to the -EINVAL return value, and
> disabling and then re-enabling logdirty.  That's what would happen
> before your dirty vram patch, and that's what happens after.  And
> arguably, that's the correct behavior for any toolstack, given that
> the interface returns an error.

Agree.
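
The toolstack behaviour described above - honour the -EINVAL from the enable call by cycling log-dirty mode off and then on again - can be sketched roughly as follows. This is a toy model, not the actual libxc or Xen code; all names (model_log_dirty_enable etc.) are made up for illustration:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Models paging_mode_log_dirty(d): whether any log-dirty tracking
 * (VRAM-only or global) is currently active for the domain. */
static bool log_dirty_enabled;

/* Models the current paging_log_dirty_enable() interface: it fails
 * with -EINVAL whenever log-dirty mode is already enabled, even if
 * the caller now asks for global tracking. */
static int model_log_dirty_enable(bool log_global)
{
    (void)log_global;
    if (log_dirty_enabled)
        return -EINVAL;
    log_dirty_enabled = true;
    return 0;
}

static void model_log_dirty_disable(void)
{
    log_dirty_enabled = false;
}

/* What libxc effectively does before migration: if enabling global
 * log-dirty fails because tracking is already on (e.g. VRAM-only),
 * disable it and re-enable it in full mode. */
static int toolstack_enable_global(void)
{
    int rc = model_log_dirty_enable(true);
    if (rc == -EINVAL) {
        model_log_dirty_disable();
        rc = model_log_dirty_enable(true);
    }
    return rc;
}
```

Under this model, the proposed `&& !log_global` change would make the first enable call succeed silently instead of returning -EINVAL, so the disable/re-enable path would never run - which is the interface change George points out.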

> 
> This patch would actually change the interface; if we check this in,
> then if you enable logdirty when dirty vram tracking is enabled, you
> *won't* get an error, and thus *won't* disable and re-enable logdirty mode.
> So actually, this patch would be more disruptive.
> 

Jan, do you have any comment? 

> Thanks for the patch, though. :-)
> 
>   -George
>> 
>> diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
>> index
>> ab5eacb..368c975 100644
>> --- a/xen/arch/x86/mm/paging.c
>> +++ b/xen/arch/x86/mm/paging.c
>> @@ -168,7 +168,7 @@ int paging_log_dirty_enable(struct domain *d,
> bool_t log_global)
>>   {
>>       int ret;
>> -    if ( paging_mode_log_dirty(d) )
>> +    if ( paging_mode_log_dirty(d) && !log_global )
>>           return -EINVAL;
>>       domain_pause(d);
>> 
>>> Jan
>> 
>> Best regards,
>> Yang
>> 
>>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 01:37:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 01:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFw61-000318-GU; Wed, 19 Feb 2014 01:37:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WFw60-00030v-3o
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 01:37:08 +0000
Received: from [85.158.137.68:10853] by server-1.bemta-3.messagelabs.com id
	D9/0B-17293-3CA04035; Wed, 19 Feb 2014 01:37:07 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392773825!2757930!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12506 invoked from network); 19 Feb 2014 01:37:06 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-14.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 01:37:06 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 18 Feb 2014 17:37:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,503,1389772800"; d="scan'208";a="477394688"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga001.fm.intel.com with ESMTP; 18 Feb 2014 17:36:50 -0800
Received: from fmsmsx154.amr.corp.intel.com (10.18.116.70) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:36:42 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.110.14) by
	FMSMSX154.amr.corp.intel.com (10.18.116.70) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 17:36:41 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.202]) with mapi id
	14.03.0123.003; Wed, 19 Feb 2014 09:36:35 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: George Dunlap <dunlapg@umich.edu>, Dario Faggioli <raistlin@linux.it>
Thread-Topic: [Xen-devel] Today is Xen Project Test Day for 4.4 RC4
Thread-Index: AQHPLIrHxlmdLswJMkCEB5VnXiiQ2Jq6WhCAgAFxB8A=
Date: Wed, 19 Feb 2014 01:36:35 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBEC@SHSMSX104.ccr.corp.intel.com>
References: <1392715101.32038.466.camel@Solace>
	<CAFLBxZY2-837vd2AcJ9fGOnHvTMyewfAxaAuVptYfeq-Y9i_rw@mail.gmail.com>
In-Reply-To: <CAFLBxZY2-837vd2AcJ9fGOnHvTMyewfAxaAuVptYfeq-Y9i_rw@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen <xen@lists.fedoraproject.org>,
	"xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Pang, LongtaoX" <longtaox.pang@intel.com>, "Liu,
	SongtaoX" <songtaox.liu@intel.com>, cl-mirage <cl-mirage@lists.cam.ac.uk>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] Today is Xen Project Test Day for 4.4 RC4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote on 2014-02-18:
> On Tue, Feb 18, 2014 at 9:18 AM, Dario Faggioli <raistlin@linux.it> wrote:
>> This is a reminder that today is the Xen Project Test Day for Xen
>> 4.4 RC4.
>> 
>> General Information about Test Days can be found here:
>> http://wiki.xenproject.org/wiki/Xen_Test_Days
>> 
>> and specific instructions for this Test Day are located here:
>> http://wiki.xenproject.org/wiki/Xen_4.4_RC4_test_instructions
> 

Unfortunately, we saw a device pass-through failure with a RHEL6u5 guest (only RHEL6u5 is affected). But it seems to exist only on one Sandy Bridge EP machine. We are evaluating it now and will let you know if it turns out to be a bug.

> From a release management perspective, there are a couple of big
> changes since RC3 that need testing:
> 
> * PVH mode has been fixed.  This involved a change on the nested virt code.
> 
> * Building with the latest release of Clang has been fixed.  This may
> affect builds on other compilers -- please test different versions of clang and gcc.
> 
> * A big patch series fixing races in libvirt/libxl was checked in.
> Testing with libvirt or xl would be helpful.
> 
> * A patch fixing guest floating point support on ARM systems was checked in.
> Please run some programs that use the floating point unit.
> 
> * A patch fixing an issue with device-passthru for HVM guests when the
> VNC client is attached was checked in.  Please test migration, in
> particular with and
> without the VNC client attached; and device pass-through, with and
> without the VNC client attached.  (Migration with a device passed
> through is not supported.)
> 
>  -George
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 01:47:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 01:47:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFwFl-0003A2-TR; Wed, 19 Feb 2014 01:47:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WFwFk-00039x-7I
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 01:47:12 +0000
Received: from [85.158.143.35:11507] by server-3.bemta-4.messagelabs.com id
	D3/65-11539-F1D04035; Wed, 19 Feb 2014 01:47:11 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392774429!6660944!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17356 invoked from network); 19 Feb 2014 01:47:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 01:47:10 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1J1l7dV028126
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 01:47:08 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1J1l6PF017499
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 01:47:07 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1J1l66d015604; Wed, 19 Feb 2014 01:47:06 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 17:47:05 -0800
Date: Tue, 18 Feb 2014 17:47:04 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Roger Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Message-ID: <20140218174704.08fa2fce@mantra.us.oracle.com>
In-Reply-To: <53038EC2.8030403@citrix.com>
References: <53038EC2.8030403@citrix.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] PVH Dom0 with latest Linux kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014 17:48:02 +0100
Roger Pau Monné <roger.pau@citrix.com> wrote:

> Hello,
> 
> I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH 
> Dom0 Xen tree
> (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> h=dom0pvh-v7), and got the following crash. Do you have any new Xen
> Dom0 series that I could use to test PVH Dom0?
> 

It won't work. The final linux patches were changed a bit, and it makes
an hcall that will cause mem corruption in xen. So, just hang in there
a bit, new patches coming up in few days.

thanks
Mukesh



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 02:09:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 02:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFwaZ-0003es-TM; Wed, 19 Feb 2014 02:08:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WFwaX-0003en-Dk
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 02:08:41 +0000
Received: from [85.158.139.211:11465] by server-14.bemta-5.messagelabs.com id
	22/48-27598-82214035; Wed, 19 Feb 2014 02:08:40 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392775718!235977!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23179 invoked from network); 19 Feb 2014 02:08:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 02:08:39 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1J28aGt013716
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 02:08:37 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1J28XB2022538
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 02:08:36 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1J28XoP004077; Wed, 19 Feb 2014 02:08:33 GMT
Message-Id: <201402190208.s1J28XoP004077@aserz7022.oracle.com>
Received: from [192.168.2.114] (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 18:08:33 -0800
Date: Tue, 18 Feb 2014 21:08:30 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
MIME-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] PVH Dom0 with latest Linux kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 18, 2014 8:47 PM, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
>
> On Tue, 18 Feb 2014 17:48:02 +0100
> Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> > Hello,
> > 
> > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's PVH 
> > Dom0 Xen tree
> > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > h=dom0pvh-v7), and got the following crash. Do you have any new Xen
> > Dom0 series that I could use to test PVH Dom0?
> >
>
> It won't work. The final linux patches were changed a bit, and it makes
> an hcall that will cause mem corruption in xen. So, just hang in there
> a bit, new patches coming up in few days.

Do they also crash/corrupt Xen as DomU?
>
> thanks
> Mukesh
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 02:15:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 02:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFwh7-0003nL-4F; Wed, 19 Feb 2014 02:15:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WFwh4-0003nG-5J
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 02:15:26 +0000
Received: from [193.109.254.147:60504] by server-3.bemta-14.messagelabs.com id
	3C/ED-00432-DB314035; Wed, 19 Feb 2014 02:15:25 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392776123!1516547!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8941 invoked from network); 19 Feb 2014 02:15:24 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 02:15:24 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1J2FKnl029615
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 02:15:21 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1J2FJOZ017915
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 02:15:20 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1J2FJ9r002441; Wed, 19 Feb 2014 02:15:19 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 18 Feb 2014 18:15:18 -0800
Date: Tue, 18 Feb 2014 18:15:17 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140218181517.36a503cc@mantra.us.oracle.com>
In-Reply-To: <201402190208.s1J28lw5002932@mantra.us.oracle.com>
References: <201402190208.s1J28lw5002932@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xen.org, roger.pau@citrix.com
Subject: Re: [Xen-devel] PVH Dom0 with latest Linux kernels
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 18 Feb 2014 21:08:30 -0500
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:

> 
> On Feb 18, 2014 8:47 PM, Mukesh Rathor <mukesh.rathor@oracle.com>
> wrote:
> >
> > On Tue, 18 Feb 2014 17:48:02 +0100
> > Roger Pau Monné <roger.pau@citrix.com> wrote:
> >
> > > Hello,
> > > 
> > > I've tried to boot a PVH Dom0 using Linux 3.14.0-rc3 and Mukesh's
> > > PVH Dom0 Xen tree
> > > (https://oss.oracle.com/git/?p=mrathor/xen.git;a=shortlog;
> > > h=dom0pvh-v7), and got the following crash. Do you have any new
> > > Xen Dom0 series that I could use to test PVH Dom0?
> > >
> >
> > It won't work. The final linux patches were changed a bit, and it
> > makes an hcall that will cause mem corruption in xen. So, just hang
> > in there a bit, new patches coming up in few days.
> 
> Do they also crash/corrupt Xen as DomU?

Nope, I'd have mentioned on the list asap otherwise :)...

mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 04:17:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 04:17:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFyak-0004S9-D5; Wed, 19 Feb 2014 04:17:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFyai-0004S4-ES
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 04:17:00 +0000
Received: from [193.109.254.147:21267] by server-10.bemta-14.messagelabs.com
	id 00/5A-10711-B3034035; Wed, 19 Feb 2014 04:16:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392783417!5290698!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13430 invoked from network); 19 Feb 2014 04:16:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 04:16:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,503,1389744000"; d="scan'208";a="103769009"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 04:16:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 23:16:56 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFyad-0007g7-Ro;
	Wed, 19 Feb 2014 04:16:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFyad-000110-NG;
	Wed, 19 Feb 2014 04:16:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25126-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 04:16:55 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25126: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25126 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25126/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

From xen-devel-bounces@lists.xen.org Wed Feb 19 04:17:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 04:17:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFyak-0004S9-D5; Wed, 19 Feb 2014 04:17:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFyai-0004S4-ES
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 04:17:00 +0000
Received: from [193.109.254.147:21267] by server-10.bemta-14.messagelabs.com
	id 00/5A-10711-B3034035; Wed, 19 Feb 2014 04:16:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392783417!5290698!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13430 invoked from network); 19 Feb 2014 04:16:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 04:16:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,503,1389744000"; d="scan'208";a="103769009"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 04:16:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 18 Feb 2014 23:16:56 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFyad-0007g7-Ro;
	Wed, 19 Feb 2014 04:16:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFyad-000110-NG;
	Wed, 19 Feb 2014 04:16:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25126-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 04:16:55 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25126: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25126 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25126/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 04:42:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 04:42:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFyzY-0004cP-Q5; Wed, 19 Feb 2014 04:42:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1WFyzY-0004cK-0D
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 04:42:40 +0000
Received: from [193.109.254.147:39055] by server-4.bemta-14.messagelabs.com id
	09/E4-32066-F3634035; Wed, 19 Feb 2014 04:42:39 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392784957!5307748!1
X-Originating-IP: [209.85.216.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4131 invoked from network); 19 Feb 2014 04:42:38 -0000
Received: from mail-qc0-f174.google.com (HELO mail-qc0-f174.google.com)
	(209.85.216.174)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 04:42:38 -0000
Received: by mail-qc0-f174.google.com with SMTP id x13so27030450qcv.33
	for <xen-devel@lists.xenproject.org>;
	Tue, 18 Feb 2014 20:42:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=iKmBhCTvAy+M1thGJ6D8VehulJILbo1YddOrFodgnqg=;
	b=BE4zK9z9vP02+n2lgGvujubVUuA9mEPdc+nsnBPaTwz0J4fgWJng+97p4vfeAjXtVZ
	VGdyhdjRstKL/awd0K7sjskM5TaavjchZluX4bbizgVPrz7ELIn9tG4jShBFQdneskaz
	UV3A5NGlnku8GvHx4gkQGp/93zKMxt0paBeK6DFkKcYuErbOey53I97GOiC+8R6KV1r6
	yAVENx59rBzwD3YEGsEiR53E6kn4i+nj8MQoYobbfrQ4ilBJyoBF+gaXKWxHWmrtKkHV
	pDdYwoa4lNfOGrbhw2c6UDnRq4fWRkPomEpOArD5Jn4UuwBFmjNnHmh7y8L1e6TWXo/s
	Mptg==
X-Gm-Message-State: ALoCoQn/M9SC4KstIgO7ZKZsVMdsPwzKR+EvS5wP90nc3mbYbY1gFVSAXoxibqjNS5eyf+rboUxZ
MIME-Version: 1.0
X-Received: by 10.224.14.70 with SMTP id f6mr48383524qaa.31.1392784957102;
	Tue, 18 Feb 2014 20:42:37 -0800 (PST)
Received: by 10.140.25.111 with HTTP; Tue, 18 Feb 2014 20:42:37 -0800 (PST)
In-Reply-To: <87vbwcaqxe.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
Date: Tue, 18 Feb 2014 20:42:37 -0800
Message-ID: <CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Rusty Russell <rusty@au1.ibm.com>
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> Daniel Kiper <daniel.kiper@oracle.com> writes:
>> Hi,
>>
>> Below you can find a summary of work regarding VIRTIO compatibility with
>> different virtualization solutions. It was done mainly from the Xen point of
>> view, but the results are quite generic and can be applied to a wide
>> spectrum of virtualization platforms.
>
> Hi Daniel,
>
>         Sorry for the delayed response, I was pondering...  CC changed
> to virtio-dev.
>
> From a standards POV: It's possible to abstract out where we use
> 'physical address' for 'address handle'.  It's also possible to define
> this per-platform (i.e. Xen-PV vs everyone else).  This is sane, since
> Xen-PV is a distinct platform from x86.

I'll go even further and say that "address handle" doesn't make sense either.

Just using grant table references is not enough to make virtio work
well under Xen.  You really need to use bounce buffers a la persistent
grants.

I think what you ultimately want is virtio using a DMA API (I know
benh has scoffed at this but I don't buy his argument at face value)
and a DMA layer that bounces requests to a pool of persistent grants.

> For platforms using EPT, I don't think you want anything but guest
> addresses, do you?
>
> From an implementation POV:
>
> On IOMMU, start here for previous Linux discussion:
>         http://thread.gmane.org/gmane.linux.kernel.virtualization/14410/focus=14650
>
> And this is the real problem.  We don't want to use the PCI IOMMU for
> PCI devices.  So it's not just a matter of using existing Linux APIs.

Is there any data to back up that claim?

Just because POWER currently does hypercalls for anything that uses
the PCI IOMMU layer doesn't mean this cannot be changed.  It's pretty
hacky that virtio-pci just happens to work well by accident on POWER
today.  Not all architectures have this limitation.

Regards,

Anthony Liguori

> Cheers,
> Rusty.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 05:21:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 05:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WFzb5-0004zS-3x; Wed, 19 Feb 2014 05:21:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WFzb3-0004zN-6V
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 05:21:25 +0000
Received: from [193.109.254.147:19093] by server-16.bemta-14.messagelabs.com
	id 74/1F-21945-45F34035; Wed, 19 Feb 2014 05:21:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392787282!5279210!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11988 invoked from network); 19 Feb 2014 05:21:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 05:21:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,503,1389744000"; d="scan'208";a="102063221"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 05:21:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 00:21:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WFzay-00080P-RX;
	Wed, 19 Feb 2014 05:21:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WFzay-0006uy-8A;
	Wed, 19 Feb 2014 05:21:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25127-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 05:21:20 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 25127: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25127 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25127/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24737
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24737

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass

version targeted for testing:
 qemuu                65fc9b78ba3d868a26952db0d8e51cecf01d47b4
baseline version:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:43:12 2013 +0000

    xen: Enable cpu-hotplug on xenfv machine.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    Conflicts:
    	hw/i386/pc_piix.c

commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:41:48 2013 +0000

    xen: Fix vcpu initialization.
    
    Each vcpu need a evtchn binded in qemu, even those that are
    offline at QEMU initialisation.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


version targeted for testing:
 qemuu                65fc9b78ba3d868a26952db0d8e51cecf01d47b4
baseline version:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:43:12 2013 +0000

    xen: Enable cpu-hotplug on xenfv machine.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    Conflicts:
    	hw/i386/pc_piix.c

commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:41:48 2013 +0000

    xen: Fix vcpu initialization.
    
    Each vcpu needs an event channel bound in QEMU, even those that are
    offline at QEMU initialisation.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:02:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:02:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0Ed-0005Eq-Hi; Wed, 19 Feb 2014 06:02:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jiang.liu@linux.intel.com>) id 1WG0Eb-0005El-At
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 06:02:17 +0000
Received: from [85.158.143.35:23566] by server-3.bemta-4.messagelabs.com id
	FD/29-11539-8E844035; Wed, 19 Feb 2014 06:02:16 +0000
X-Env-Sender: jiang.liu@linux.intel.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392789734!6673730!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13517 invoked from network); 19 Feb 2014 06:02:15 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-2.tower-21.messagelabs.com with SMTP;
	19 Feb 2014 06:02:15 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 18 Feb 2014 22:02:04 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="483861762"
Received: from gerry-dev.bj.intel.com ([10.238.158.74])
	by fmsmga002.fm.intel.com with ESMTP; 18 Feb 2014 22:01:36 -0800
From: Jiang Liu <jiang.liu@linux.intel.com>
To: "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Lv Zheng <lv.zheng@intel.com>, Len Brown <lenb@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Date: Wed, 19 Feb 2014 14:02:17 +0800
Message-Id: <1392789739-32317-3-git-send-email-jiang.liu@linux.intel.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392789739-32317-1-git-send-email-jiang.liu@linux.intel.com>
References: <1392789739-32317-1-git-send-email-jiang.liu@linux.intel.com>
Cc: linux-acpi@vger.kernel.org, Tony Luck <tony.luck@intel.com>,
	xen-devel@lists.xenproject.org, Jiang Liu <jiang.liu@linux.intel.com>,
	linux-kernel@vger.kernel.org
Subject: [Xen-devel] [Patch v2 3/5] xen,
	acpi_pad: use acpi_evaluate_ost() to replace open-coded version
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Use the public function acpi_evaluate_ost() to replace the open-coded
evaluation of the ACPI _OST method.

Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
---
 drivers/xen/xen-acpi-pad.c |   26 +++++++-------------------
 1 file changed, 7 insertions(+), 19 deletions(-)

diff --git a/drivers/xen/xen-acpi-pad.c b/drivers/xen/xen-acpi-pad.c
index 40c4bc0..f83b754 100644
--- a/drivers/xen/xen-acpi-pad.c
+++ b/drivers/xen/xen-acpi-pad.c
@@ -77,27 +77,14 @@ static int acpi_pad_pur(acpi_handle handle)
 	return num;
 }
 
-/* Notify firmware how many CPUs are idle */
-static void acpi_pad_ost(acpi_handle handle, int stat,
-	uint32_t idle_nums)
-{
-	union acpi_object params[3] = {
-		{.type = ACPI_TYPE_INTEGER,},
-		{.type = ACPI_TYPE_INTEGER,},
-		{.type = ACPI_TYPE_BUFFER,},
-	};
-	struct acpi_object_list arg_list = {3, params};
-
-	params[0].integer.value = ACPI_PROCESSOR_AGGREGATOR_NOTIFY;
-	params[1].integer.value =  stat;
-	params[2].buffer.length = 4;
-	params[2].buffer.pointer = (void *)&idle_nums;
-	acpi_evaluate_object(handle, "_OST", &arg_list, NULL);
-}
-
 static void acpi_pad_handle_notify(acpi_handle handle)
 {
 	int idle_nums;
+	struct acpi_buffer param = {
+		.length = 4,
+		.pointer = (void *)&idle_nums,
+	};
+
 
 	mutex_lock(&xen_cpu_lock);
 	idle_nums = acpi_pad_pur(handle);
@@ -109,7 +96,8 @@ static void acpi_pad_handle_notify(acpi_handle handle)
 	idle_nums = xen_acpi_pad_idle_cpus(idle_nums)
 		    ?: xen_acpi_pad_idle_cpus_num();
 	if (idle_nums >= 0)
-		acpi_pad_ost(handle, 0, idle_nums);
+		acpi_evaluate_ost(handle, ACPI_PROCESSOR_AGGREGATOR_NOTIFY,
+				  0, &param);
 	mutex_unlock(&xen_cpu_lock);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lX-0005Qo-0I; Wed, 19 Feb 2014 06:36:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lU-0005QH-VN
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:17 +0000
Received: from [85.158.137.68:5648] by server-2.bemta-3.messagelabs.com id
	D2/09-06531-0E054035; Wed, 19 Feb 2014 06:36:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!3
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12685 invoked from network); 19 Feb 2014 06:36:15 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:15 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="457889571"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 22:36:12 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:41 +0800
Message-Id: <1392791564-37170-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 3/6] x86: collect CQM information from all
	sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect CQM information (L3 cache occupancy) from all sockets.
An upper-layer application can parse the returned data structure to
obtain a guest's L3 cache occupancy on particular sockets.

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/pqos.c             |   43 +++++++++++++++++++++++++++++
 xen/arch/x86/sysctl.c           |   58 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/msr-index.h |    4 +++
 xen/include/asm-x86/pqos.h      |    3 ++
 xen/include/public/sysctl.h     |   11 ++++++++
 5 files changed, 119 insertions(+)

diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index eb469ac..2cde56e 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -15,6 +15,7 @@
  * more details.
  */
 #include <asm/processor.h>
+#include <asm/msr.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/spinlock.h>
@@ -205,6 +206,48 @@ out:
     spin_unlock(&cqm->cqm_lock);
 }
 
+static void read_cqm_data(void *arg)
+{
+    uint64_t cqm_data;
+    unsigned int rmid;
+    int socket = cpu_to_socket(smp_processor_id());
+    unsigned long i;
+
+    ASSERT(system_supports_cqm());
+
+    if ( socket < 0 )
+        return;
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
+            continue;
+
+        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
+        rdmsrl(MSR_IA32_QMC, cqm_data);
+
+        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
+        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
+            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;
+    }
+}
+
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
+{
+    unsigned int nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+    unsigned int nr_rmids = cqm->max_rmid + 1;
+
+    /* Read CQM data in current CPU */
+    read_cqm_data(NULL);
+    /* Issue IPI to other CPUs to read CQM data */
+    on_selected_cpus(cpu_cqmdata_map, read_cqm_data, NULL, 1);
+
+    /* Copy the rmid_to_dom info to the buffer */
+    memcpy(cqm->buffer + nr_sockets * nr_rmids, cqm->rmid_to_dom,
+           sizeof(domid_t) * (cqm->max_rmid + 1));
+
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..7b0acc9 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -28,6 +28,7 @@
 #include <xen/nodemask.h>
 #include <xen/cpu.h>
 #include <xsm/xsm.h>
+#include <asm/pqos.h>
 
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 
@@ -66,6 +67,30 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+/* Select one random CPU for each socket. Current CPU's socket is excluded */
+static void select_socket_cpu(cpumask_t *cpu_bitmap)
+{
+    int i;
+    unsigned int cpu;
+    int socket, socket_curr = cpu_to_socket(smp_processor_id());
+    DECLARE_BITMAP(sockets, NR_CPUS);
+
+    bitmap_zero(sockets, NR_CPUS);
+    if (socket_curr >= 0)
+        set_bit(socket_curr, sockets);
+
+    cpumask_clear(cpu_bitmap);
+    for ( i = 0; i < NR_CPUS; i++ )
+    {
+        socket = cpu_to_socket(i);
+        if ( socket < 0 || test_and_set_bit(socket, sockets) )
+            continue;
+        cpu = cpumask_any(per_cpu(cpu_core_mask, i));
+        if ( cpu < nr_cpu_ids )
+            cpumask_set_cpu(cpu, cpu_bitmap);
+    }
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +126,39 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_getcqminfo:
+    {
+        cpumask_var_t cpu_cqmdata_map;
+
+        if ( !system_supports_cqm() )
+        {
+            ret = -ENODEV;
+            break;
+        }
+
+        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
+        {
+            ret = -ENOMEM;
+            break;
+        }
+
+        memset(cqm->buffer, 0, cqm->buffer_size);
+
+        select_socket_cpu(cpu_cqmdata_map);
+        get_cqm_info(cpu_cqmdata_map);
+
+        sysctl->u.getcqminfo.buffer_mfn = virt_to_mfn(cqm->buffer);
+        sysctl->u.getcqminfo.size = cqm->buffer_size;
+        sysctl->u.getcqminfo.nr_rmids = cqm->max_rmid + 1;
+        sysctl->u.getcqminfo.nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+
+        if ( __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+
+        free_cpumask_var(cpu_cqmdata_map);
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e3ff10c 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -489,4 +489,8 @@
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
 
+/* Platform QoS register */
+#define MSR_IA32_QOSEVTSEL             0x00000c8d
+#define MSR_IA32_QMC                   0x00000c8e
+
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index f25037d..4372af6 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -17,6 +17,8 @@
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
 #include <xen/sched.h>
+#include <xen/cpumask.h>
+#include <public/domctl.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -51,5 +53,6 @@ void init_platform_qos(void);
 
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
 
 #endif
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..335b1d9 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -632,6 +632,15 @@ struct xen_sysctl_coverage_op {
 typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
+struct xen_sysctl_getcqminfo {
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xen_sysctl_getcqminfo xen_sysctl_getcqminfo_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_getcqminfo_t);
+
 
 struct xen_sysctl {
     uint32_t cmd;
@@ -654,6 +663,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_getcqminfo                    21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +685,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_getcqminfo        getcqminfo;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0le-0005TH-F3; Wed, 19 Feb 2014 06:36:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lc-0005Sb-TE
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:25 +0000
Received: from [85.158.137.68:47346] by server-13.bemta-3.messagelabs.com id
	DA/9E-26923-8E054035; Wed, 19 Feb 2014 06:36:24 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!6
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13016 invoked from network); 19 Feb 2014 06:36:22 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:22 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:32:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="485636219"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga002.jf.intel.com with ESMTP; 18 Feb 2014 22:36:19 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:44 +0800
Message-Id: <1392791564-37170-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 6/6] tools: enable Cache QoS Monitoring
	feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Introduce two new xl commands to attach/detach the CQM service for a guest
$ xl pqos-attach cqm domid
$ xl pqos-detach cqm domid

Introduce one new xl command to retrieve guest CQM information
$ xl pqos-list cqm

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/man/xl.pod.1           |   23 ++++++++
 tools/libxc/xc_domain.c     |   36 ++++++++++++
 tools/libxc/xenctrl.h       |   12 ++++
 tools/libxl/Makefile        |    3 +-
 tools/libxl/libxl.h         |    4 ++
 tools/libxl/libxl_pqos.c    |  132 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl |    7 +++
 tools/libxl/xl.h            |    3 +
 tools/libxl/xl_cmdimpl.c    |  111 ++++++++++++++++++++++++++++++++++++
 tools/libxl/xl_cmdtable.c   |   15 +++++
 10 files changed, 345 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_pqos.c

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..c1b1acd 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -1334,6 +1334,29 @@ Load FLASK policy from the given policy file. The initial policy is provided to
 the hypervisor as a multiboot module; this command allows runtime updates to the
 policy. Loading new security policy will reset runtime changes to device labels.
 
+=head1 PLATFORM QOS
+
+New Intel processors may offer a monitoring capability in each logical processor to
+measure a specific quality-of-service metric, for example, Cache QoS Monitoring to
+get L3 cache occupancy.
+
+=over 4 
+
+=item B<pqos-attach> [I<qos-type>] [I<domain-id>]
+
+Attach a platform QoS service to a domain.
+The currently supported I<qos-type> is: "cqm".
+
+=item B<pqos-detach> [I<qos-type>] [I<domain-id>]
+
+Detach a platform QoS service from a domain.
+The currently supported I<qos-type> is: "cqm".
+
+=item B<pqos-list> [I<qos-type>]
+
+List platform QoS information for QoS-attached domains.
+The currently supported I<qos-type> is: "cqm".
+
 =back
 
 =head1 TO BE DOCUMENTED
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..67b41e7 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1776,6 +1776,42 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_attach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_detach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
+{
+    int ret;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_getcqminfo;
+    ret = xc_sysctl(xch, &sysctl);
+    if ( ret >= 0 )
+    {
+        info->buffer_mfn = sysctl.u.getcqminfo.buffer_mfn;
+        info->size = sysctl.u.getcqminfo.size;
+        info->nr_rmids = sysctl.u.getcqminfo.nr_rmids;
+        info->nr_sockets = sysctl.u.getcqminfo.nr_sockets;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..f4eb198 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2427,4 +2427,16 @@ int xc_kexec_load(xc_interface *xch, uint8_t type, uint16_t arch,
  */
 int xc_kexec_unload(xc_interface *xch, int type);
 
+struct xc_cqminfo
+{
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xc_cqminfo xc_cqminfo_t;
+
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info);
 #endif /* XENCTRL_H */
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..8beb7f8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -76,7 +76,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
 			libxl_json.o libxl_aoutils.o libxl_numa.o \
 			libxl_save_callout.o _libxl_save_msgs_callout.o \
-			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
+			libxl_qmp.o libxl_event.o libxl_fork.o libxl_pqos.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
 $(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f3d2202 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
 int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
 int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
 
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);
+
 /* misc */
 
 /* Each of these sets or clears the flag according to whether the
diff --git a/tools/libxl/libxl_pqos.c b/tools/libxl/libxl_pqos.c
new file mode 100644
index 0000000..664a891
--- /dev/null
+++ b/tools/libxl/libxl_pqos.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2014      Intel Corporation
+ * Author Jiongxi Li <jiongxi.li@intel.com>
+ * Author Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only, with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+
+static const char * const msg[] = {
+    [EINVAL] = "invalid QoS resource type! Supported types: \"cqm\"",
+    [ENODEV] = "CQM is not supported in this system.",
+    [EEXIST] = "CQM is already attached to this domain.",
+    [ENOENT] = "CQM is not attached to this domain.",
+    [EUSERS] = "there is no free CQM RMID available.",
+    [ESRCH]  = "is this Domain ID valid?",
+};
+
+static void libxl_pqos_err_msg(libxl_ctx *ctx, int err)
+{
+    GC_INIT(ctx);
+
+    switch (err) {
+    case EINVAL:
+    case ENODEV:
+    case EEXIST:
+    case EUSERS:
+    case ESRCH:
+    case ENOENT:
+        LOGE(ERROR, "%s", msg[err]);
+        break;
+    default:
+        LOGE(ERROR, "errno: %d", err);
+    }
+
+    GC_FREE;
+}
+
+static int libxl_pqos_type2flags(const char * qos_type, uint64_t *flags)
+{
+    int rc = 0;
+
+    if (!strcmp(qos_type, "cqm"))
+        *flags |= XEN_DOMCTL_pqos_cqm;
+    else
+        rc = -1;
+
+    return rc;
+}
+
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_attach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_detach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo)
+{
+    int ret;
+    xc_cqminfo_t xcinfo;
+    GC_INIT(ctx);
+
+    ret = xc_domain_getcqminfo(ctx->xch, &xcinfo);
+    if (ret < 0) {
+        LOGE(ERROR, "getting domain cqm info");
+        return;
+    }
+
+    xlinfo->buffer_virt = (uint64_t)xc_map_foreign_range(ctx->xch, DOMID_XEN,
+                              xcinfo.size, PROT_READ, xcinfo.buffer_mfn);
+    if (xlinfo->buffer_virt == 0) {
+        LOGE(ERROR, "Failed to map cqm buffers");
+        return;
+    }
+    xlinfo->size = xcinfo.size;
+    xlinfo->nr_rmids = xcinfo.nr_rmids;
+    xlinfo->nr_sockets = xcinfo.nr_sockets;
+
+    GC_FREE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..43c0f48 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -596,3 +596,10 @@ libxl_event = Struct("event",[
                                  ])),
            ("domain_create_console_available", Struct(None, [])),
            ]))])
+
+libxl_cqminfo = Struct("cqminfo", [
+    ("buffer_virt",    uint64),
+    ("size",           uint32),
+    ("nr_rmids",       uint32),
+    ("nr_sockets",     uint32),
+    ])
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..4362b96 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -106,6 +106,9 @@ int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 int main_devd(int argc, char **argv);
+int main_pqosattach(int argc, char **argv);
+int main_pqosdetach(int argc, char **argv);
+int main_pqoslist(int argc, char **argv);
 
 void help(const char *command);
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..2e0b822 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7364,6 +7364,117 @@ out:
     return ret;
 }
 
+int main_pqosattach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-attach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_attach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+int main_pqosdetach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-detach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_detach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+static void print_cqm_info(const libxl_cqminfo *info)
+{
+    unsigned int i, j, k;
+    char *domname;
+    int print_header;
+    int cqm_domains = 0;
+    uint16_t *rmid_to_dom;
+    uint64_t *l3c_data;
+    uint32_t first_domain = 0;
+    unsigned int num_domains = 1024;
+
+    if (info->nr_rmids == 0) {
+        printf("System doesn't support CQM.\n");
+        return;
+    }
+
+    print_header = 1;
+    l3c_data = (uint64_t *)(info->buffer_virt);
+    rmid_to_dom = (uint16_t *)(info->buffer_virt +
+                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
+    for (i = first_domain; i < (first_domain + num_domains); i++) {
+        for (j = 0; j < info->nr_rmids; j++) {
+            if (rmid_to_dom[j] != i)
+                continue;
+
+            if (print_header) {
+                printf("Name                                        ID");
+                for (k = 0; k < info->nr_sockets; k++)
+                    printf("\tSocketID\tL3C_Usage");
+                print_header = 0;
+            }
+
+            domname = libxl_domid_to_name(ctx, i);
+            printf("\n%-40s %5d", domname, i);
+            free(domname);
+            cqm_domains++;
+
+            for (k = 0; k < info->nr_sockets; k++)
+                printf("%10u %16lu     ",
+                        k, l3c_data[info->nr_rmids * k + j]);
+        }
+    }
+    if (!cqm_domains)
+        printf("No RMID is assigned to domains.\n");
+    else
+        printf("\n");
+
+    printf("\nRMID count %5d\tRMID available %5d\n",
+           info->nr_rmids, info->nr_rmids - cqm_domains - 1);
+}
+
+int main_pqoslist(int argc, char **argv)
+{
+    int opt;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-list", 1) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+
+    if (!strcmp(qos_type, "cqm")) {
+        libxl_cqminfo info;
+        libxl_map_cqm_buffer(ctx, &info);
+        print_cqm_info(&info);
+    } else {
+        fprintf(stderr, "Supported QoS resource type: cqm.\n");
+        help("pqos-list");
+        return 2;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..d4af4a9 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
       "[options]",
       "-F                      Run in the foreground",
     },
+    { "pqos-attach",
+      &main_pqosattach, 0, 1,
+      "Allocate and map QoS resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-detach",
+      &main_pqosdetach, 0, 1,
+      "Relinquish QoS resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-list",
+      &main_pqoslist, 0, 0,
+      "List QoS information for all domains",
+      "<Resource>",
+    },
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lc-0005SJ-1X; Wed, 19 Feb 2014 06:36:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0la-0005RJ-ST
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:23 +0000
Received: from [85.158.137.68:49813] by server-14.bemta-3.messagelabs.com id
	EF/3F-08196-5E054035; Wed, 19 Feb 2014 06:36:21 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!5
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12874 invoked from network); 19 Feb 2014 06:36:20 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:20 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:32:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="477493561"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 18 Feb 2014 22:36:17 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:43 +0800
Message-Id: <1392791564-37170-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 5/6] xsm: add platform QoS related xsm
	policies
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add XSM policies for the pqos attach/detach and get-CQM-info
hypercalls.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 ++++-
 xen/xsm/flask/hooks.c                        |    8 ++++++++
 xen/xsm/flask/policy/access_vectors          |   17 ++++++++++++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index dedc035..1f683af 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			getscheduler getvcpuinfo getvcpuextstate getaddrsize
 			getaffinity setaffinity };
-	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim  set_max_evtchn };
+	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn pqos_op };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index bb59fe8..115fcfe 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -64,6 +64,9 @@ allow dom0_t xen_t:xen {
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op tmem_op
 	tmem_control getscheduler setscheduler
 };
+allow dom0_t xen_t:xen2 {
+	pqos_op
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
@@ -76,7 +79,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn
+	set_cpuid gettsc settsc setscheduler set_max_evtchn pqos_op
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..6ee7771 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -730,6 +730,10 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_attach_pqos:
+    case XEN_DOMCTL_detach_pqos:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__PQOS_OP);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -785,6 +789,10 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_numainfo:
         return domain_has_xen(current->domain, XEN__PHYSINFO);
 
+    case XEN_SYSCTL_getcqminfo:
+        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
+                                    XEN2__PQOS_OP, NULL);
+
     default:
         printk("flask_sysctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..91af8b2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -3,9 +3,9 @@
 #
 # class class_name { permission_name ... }
 
-# Class xen consists of dom0-only operations dealing with the hypervisor itself.
-# Unless otherwise specified, the source is the domain executing the hypercall,
-# and the target is the xen initial sid (type xen_t).
+# Classes xen and xen2 consist of dom0-only operations dealing with the
+# hypervisor itself. Unless otherwise specified, the source is the domain
+# executing the hypercall, and the target is the xen initial sid (type xen_t).
 class xen
 {
 # XENPF_settime
@@ -75,6 +75,14 @@ class xen
     setscheduler
 }
 
+# This is a continuation of class xen, since only 32 permissions can be
+# defined per class
+class xen2
+{
+# XEN_SYSCTL_getcqminfo
+    pqos_op
+}
+
 # Classes domain and domain2 consist of operations that a domain performs on
 # another domain or on itself.  Unless otherwise specified, the source is the
 # domain executing the hypercall, and the target is the domain being operated on
@@ -196,6 +204,9 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_attach_pqos
+# XEN_DOMCTL_detach_pqos
+    pqos_op
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lU-0005QJ-Ip; Wed, 19 Feb 2014 06:36:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lS-0005Q4-Q5
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:15 +0000
Received: from [85.158.137.68:5552] by server-17.bemta-3.messagelabs.com id
	C1/3F-22569-ED054035; Wed, 19 Feb 2014 06:36:14 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392791772!2784655!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15788 invoked from network); 19 Feb 2014 06:36:12 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-3.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:12 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 18 Feb 2014 22:36:11 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="483875196"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga002.fm.intel.com with ESMTP; 18 Feb 2014 22:36:09 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:40 +0800
Message-Id: <1392791564-37170-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 2/6] x86: dynamically attach/detach CQM
	service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add hypervisor-side support for dynamically attaching and detaching the
CQM service for a given guest.

When the CQM service is attached to a guest, the system allocates an
RMID for it. When the service is detached or the guest is shut down,
the RMID is reclaimed for future use.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c        |    3 +++
 xen/arch/x86/domctl.c        |   28 ++++++++++++++++++++
 xen/arch/x86/pqos.c          |   60 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |    2 ++
 xen/include/asm-x86/pqos.h   |   12 +++++++++
 xen/include/public/domctl.h  |   11 ++++++++
 6 files changed, 116 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 16f2b50..2656204 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -60,6 +60,7 @@
 #include <xen/numa.h>
 #include <xen/iommu.h>
 #include <compat/vcpu.h>
+#include <asm/pqos.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 DEFINE_PER_CPU(unsigned long, cr4);
@@ -612,6 +613,8 @@ void arch_domain_destroy(struct domain *d)
 
     free_xenheap_page(d->shared_info);
     cleanup_domain_irq_mapping(d);
+
+    free_cqm_rmid(d);
 }
 
 unsigned long pv_guest_cr4_fixup(const struct vcpu *v, unsigned long guest_cr4)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..7219011 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,6 +35,7 @@
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
+#include <asm/pqos.h>
 
 static int gdbsx_guest_mem_io(
     domid_t domid, struct xen_domctl_gdbsx_memio *iop)
@@ -1245,6 +1246,33 @@ long arch_do_domctl(
     }
     break;
 
+    case XEN_DOMCTL_attach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else
+            ret = alloc_cqm_rmid(d);
+    }
+    break;
+
+    case XEN_DOMCTL_detach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else if ( d->arch.pqos_cqm_rmid > 0 )
+        {
+            free_cqm_rmid(d);
+            ret = 0;
+        }
+        else
+            ret = -ENOENT;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index ba0de37..eb469ac 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <xen/init.h>
 #include <xen/mm.h>
+#include <xen/spinlock.h>
 #include <asm/pqos.h>
 
 static bool_t __initdata opt_pqos = 1;
@@ -145,6 +146,65 @@ void __init init_platform_qos(void)
     init_qos_monitor();
 }
 
+int alloc_cqm_rmid(struct domain *d)
+{
+    int rc = 0;
+    unsigned int rmid;
+
+    ASSERT(system_supports_cqm());
+
+    spin_lock(&cqm->cqm_lock);
+
+    if ( d->arch.pqos_cqm_rmid > 0 )
+    {
+        rc = -EEXIST;
+        goto out;
+    }
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
+            continue;
+
+        cqm->rmid_to_dom[rmid] = d->domain_id;
+        break;
+    }
+
+    /* No CQM RMID available, assign RMID=0 by default */
+    if ( rmid > cqm->max_rmid )
+    {
+        rmid = 0;
+        rc = -EUSERS;
+    }
+    else
+        cqm->used_rmid++;
+
+    d->arch.pqos_cqm_rmid = rmid;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+
+    return rc;
+}
+
+void free_cqm_rmid(struct domain *d)
+{
+    unsigned int rmid;
+
+    spin_lock(&cqm->cqm_lock);
+    rmid = d->arch.pqos_cqm_rmid;
+    /* We do not free system reserved "RMID=0" */
+    if ( rmid == 0 )
+        goto out;
+
+    cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+    d->arch.pqos_cqm_rmid = 0;
+    cqm->used_rmid--;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..662714d 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -313,6 +313,8 @@ struct arch_domain
     spinlock_t e820_lock;
     struct e820entry *e820;
     unsigned int nr_e820;
+
+    unsigned int pqos_cqm_rmid;       /* CQM RMID assigned to the domain */
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 0a8065c..f25037d 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -16,6 +16,7 @@
  */
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
+#include <xen/sched.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -38,6 +39,17 @@ struct pqos_cqm {
 };
 extern struct pqos_cqm *cqm;
 
+static inline bool_t system_supports_cqm(void)
+{
+    return !!cqm;
+}
+
+/* IA32_QM_CTR */
+#define IA32_QM_CTR_ERROR_MASK         (0x3ul << 62)
+
 void init_platform_qos(void);
 
+int alloc_cqm_rmid(struct domain *d);
+void free_cqm_rmid(struct domain *d);
+
 #endif
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f8d9293 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,14 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_qos_type {
+#define _XEN_DOMCTL_pqos_cqm      0
+#define XEN_DOMCTL_pqos_cqm       (1U<<_XEN_DOMCTL_pqos_cqm)
+    uint64_t flags;
+};
+typedef struct xen_domctl_qos_type xen_domctl_qos_type_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_qos_type_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +962,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_attach_pqos                   71
+#define XEN_DOMCTL_detach_pqos                   72
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1014,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
+        struct xen_domctl_qos_type          qos_type;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lZ-0005RI-Da; Wed, 19 Feb 2014 06:36:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lX-0005Qm-1q
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:19 +0000
Received: from [85.158.137.68:47046] by server-12.bemta-3.messagelabs.com id
	17/C1-01674-2E054035; Wed, 19 Feb 2014 06:36:18 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!4
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12789 invoked from network); 19 Feb 2014 06:36:17 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:17 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:57 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="485636195"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga002.jf.intel.com with ESMTP; 18 Feb 2014 22:36:14 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:42 +0800
Message-Id: <1392791564-37170-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 4/6] x86: enable CQM monitoring for each
	domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the CQM service is attached to a domain, its RMID is programmed into
hardware for monitoring when one of the domain's vcpus is scheduled in.
When the vcpu is scheduled out, RMID 0 (reserved for the system) is
programmed instead.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c           |    5 +++++
 xen/arch/x86/pqos.c             |   14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |    1 +
 xen/include/asm-x86/pqos.h      |    1 +
 4 files changed, 21 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
 
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
 custom_param("pqos", parse_pqos_param);
 
 struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
 
 static void __init init_cqm(void)
 {
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
 
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 4372af6..87820d5 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -54,5 +54,6 @@ void init_platform_qos(void);
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
 void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
+void cqm_assoc_rmid(unsigned int rmid);
 
 #endif
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lS-0005Py-4d; Wed, 19 Feb 2014 06:36:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lQ-0005Pc-6D
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:12 +0000
Received: from [85.158.137.68:43409] by server-10.bemta-3.messagelabs.com id
	93/FD-07302-BD054035; Wed, 19 Feb 2014 06:36:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12419 invoked from network); 19 Feb 2014 06:36:10 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:10 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="457889552"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 22:36:07 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:39 +0800
Message-Id: <1392791564-37170-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 1/6] x86: detect and initialize Cache QoS
	Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Detect the platform QoS feature and enumerate its resource types, one of
which monitors L3 cache occupancy.

Also introduce a Xen boot command line parameter to control the QoS
feature status.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/misc/xen-command-line.markdown |    7 ++
 xen/arch/x86/Makefile               |    1 +
 xen/arch/x86/pqos.c                 |  156 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                |    3 +
 xen/include/asm-x86/cpufeature.h    |    1 +
 xen/include/asm-x86/pqos.h          |   43 ++++++++++
 6 files changed, 211 insertions(+)
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 15aa404..7751ffe 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -770,6 +770,13 @@ This option can be specified more than once (up to 8 times at present).
 ### ple\_window
 > `= <integer>`
 
+### pqos (Intel)
+> `= List of ( <boolean> | cqm:<boolean> | cqm_max_rmid:<integer> )`
+
+> Default: `pqos=1,cqm:1,cqm_max_rmid:255`
+
+Configure platform QoS services.
+
 ### reboot
 > `= t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..54962e0 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += pqos.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
new file mode 100644
index 0000000..ba0de37
--- /dev/null
+++ b/xen/arch/x86/pqos.c
@@ -0,0 +1,156 @@
+/*
+ * pqos.c: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#include <asm/processor.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <asm/pqos.h>
+
+static bool_t __initdata opt_pqos = 1;
+static bool_t __initdata opt_cqm = 1;
+static unsigned int __initdata opt_cqm_max_rmid = 255;
+
+static void __init parse_pqos_param(char *s)
+{
+    char *ss, *val_str;
+    int val;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        val = parse_bool(s);
+        if ( val >= 0 )
+            opt_pqos = val;
+        else
+        {
+            val = !!strncmp(s, "no-", 3);
+            if ( !val )
+                s += 3;
+
+            val_str = strchr(s, ':');
+            if ( val_str )
+                *val_str++ = '\0';
+
+            if ( val_str && !strcmp(s, "cqm") &&
+                 (val = parse_bool(val_str)) >= 0 )
+                opt_cqm = val;
+            else if ( val_str && !strcmp(s, "cqm_max_rmid") )
+                opt_cqm_max_rmid = simple_strtoul(val_str, NULL, 0);
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+custom_param("pqos", parse_pqos_param);
+
+struct pqos_cqm __read_mostly *cqm = NULL;
+
+static void __init init_cqm(void)
+{
+    unsigned int rmid;
+    unsigned int eax, edx;
+    unsigned int cqm_pages;
+    unsigned int i;
+
+    if ( !opt_cqm_max_rmid )
+        return;
+
+    cqm = xzalloc(struct pqos_cqm);
+    if ( !cqm )
+        return;
+
+    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
+    if ( !(edx & QOS_MONITOR_EVTID_L3) )
+        goto out;
+
+    cqm->min_rmid = 1;
+    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
+
+    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
+    if ( !cqm->rmid_to_dom )
+        goto out;
+
+    /* Reserve RMID 0 for all domains not being monitored */
+    cqm->rmid_to_dom[0] = DOMID_XEN;
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+
+    /* Allocate CQM buffer size in initialization stage */
+    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
+                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
+                PAGE_SIZE + 1;
+    cqm->buffer_size = cqm_pages * PAGE_SIZE;
+
+    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
+    if ( !cqm->buffer )
+    {
+        xfree(cqm->rmid_to_dom);
+        goto out;
+    }
+    memset(cqm->buffer, 0, cqm->buffer_size);
+
+    for ( i = 0; i < cqm_pages; i++ )
+        share_xen_page_with_privileged_guests(
+            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),
+            XENSHARE_readonly);
+
+    spin_lock_init(&cqm->cqm_lock);
+
+    cqm->used_rmid = 0;
+
+    printk(XENLOG_INFO "Cache QoS Monitoring Enabled.\n");
+
+    return;
+
+out:
+    xfree(cqm);
+    cqm = NULL;
+}
+
+static void __init init_qos_monitor(void)
+{
+    unsigned int qm_features;
+    unsigned int eax, ebx, ecx;
+
+    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
+        return;
+
+    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
+
+    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
+        init_cqm();
+}
+
+void __init init_platform_qos(void)
+{
+    if ( !opt_pqos )
+        return;
+
+    init_qos_monitor();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..639528f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,6 +48,7 @@
 #include <asm/setup.h>
 #include <xen/cpu.h>
 #include <asm/nmi.h>
+#include <asm/pqos.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
@@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     domain_unpause_by_systemcontroller(dom0);
 
+    init_platform_qos();
+
     reset_stack_and_jump(init_done);
 }
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..ca59668 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -147,6 +147,7 @@
 #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
+#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
new file mode 100644
index 0000000..0a8065c
--- /dev/null
+++ b/xen/include/asm-x86/pqos.h
@@ -0,0 +1,43 @@
+/*
+ * pqos.h: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef ASM_PQOS_H
+#define ASM_PQOS_H
+
+#include <public/xen.h>
+#include <xen/spinlock.h>
+
+/* QoS Resource Type Enumeration */
+#define QOS_MONITOR_TYPE_L3            0x2
+
+/* QoS Monitoring Event ID */
+#define QOS_MONITOR_EVTID_L3           0x1
+
+struct pqos_cqm {
+    spinlock_t cqm_lock;
+    uint64_t *buffer;
+    unsigned int min_rmid;
+    unsigned int max_rmid;
+    unsigned int used_rmid;
+    unsigned int upscaling_factor;
+    unsigned int buffer_size;
+    domid_t *rmid_to_dom;
+};
+extern struct pqos_cqm *cqm;
+
+void init_platform_qos(void);
+
+#endif
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lQ-0005Pd-Hz; Wed, 19 Feb 2014 06:36:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lO-0005PX-Rs
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:11 +0000
Received: from [85.158.137.68:46626] by server-2.bemta-3.messagelabs.com id
	2F/E8-06531-9D054035; Wed, 19 Feb 2014 06:36:09 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12331 invoked from network); 19 Feb 2014 06:36:08 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:08 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="485636115"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga002.jf.intel.com with ESMTP; 18 Feb 2014 22:36:04 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:38 +0800
Message-Id: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 0/6] enable Cache QoS Monitoring (CQM) feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v8:
 - Address comments from Ian Campbell, including:
   * Modify the return handling for xc_sysctl().
   * Add man page items for platform QoS related commands.
   * Fix typo in commit message.

Changes from v7:
 - Address comments from Andrew Cooper, including:
   * Check CQM capability before allocating cpumask memory.
   * Move one function declaration into the correct patch.

Changes from v6:
 - Address comments from Jan Beulich, including:
   * Remove the unnecessary CPUID feature check.
   * Remove the unnecessary socket_cpu_map.
   * Spin_lock related changes, avoid spin_lock_irqsave().
   * Use readonly mapping to pass cqm data between Xen/Userspace,
     to avoid data copying.
   * Optimize RDMSR/WRMSR logic to avoid unnecessary calls.
   * Misc fixes including __read_mostly prefix, return value, etc.

Changes from v5:
 - Address comments from Dario Faggioli, including:
   * Define a new libxl_cqminfo structure to avoid reference of xc
     structure in libxl functions.
   * Use LOGE() instead of the LIBXL__LOG() functions.

Changes from v4:
 - When comparing the xl cqm parameter, use strcmp instead of strncmp;
   otherwise "xl pqos-attach cqmabcd domid" would be accepted as a valid
   command line.
 - Address comments from Andrew Cooper, including:
   * Adjust the pqos parameter parsing function.
   * Modify the pqos related documentation.
   * Add a check for opt_cqm_max_rmid in initialization code.
   * Do not IPI CPUs that are on the same socket as the current CPU.
 - Address comments from Dario Faggioli, including:
   * Fix a typo in exported symbols.
   * Return correct libxl error code for qos related functions.
   * Abstract the error printing logic into a function.
 - Address comments from Daniel De Graaf, including:
   * Add a return value for the pqos-related check.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Modify the GPLv2 related file header, remove the address.

Changes from v3:
 - Use structure to better organize CQM related global variables.
 - Address comments from Andrew Cooper, including:
   * Remove the domain creation flag for CQM RMID allocation.
   * Adjust the boot parameter format, use custom_param().
   * Add documentation for the new added boot parameter.
   * Change QoS type flag to be uint64_t.
   * Initialize the per socket cpu bitmap in system boot time.
   * Remove get_cqm_avail() function.
   * Misc of format changes.
 - Address comment from Daniel De Graaf, including:
   * Use avc_current_has_perm() for XEN2__PQOS_OP that belongs to SECCLASS_XEN2.

Changes from v2:
 - Address comments from Andrew Cooper, including:
   * Merging tools stack changes into one patch.
   * Reduce the IPI number to one per socket.
   * Change structures for CQM data exchange between tools and Xen.
   * Misc of format/variable/function name changes.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Simplify the error printing logic.
   * Add xsm check for the new added hypercalls.

Changes from v1:
 - Address comments from Andrew Cooper, including:
   * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
   * Change some structure element order to save packing cost.
   * Correct some function's return value.
   * Some programming styles change.
   * ...

Future generations of Intel Xeon processors may offer a monitoring capability
in each logical processor to measure specific quality-of-service metrics,
for example Cache QoS Monitoring to report L3 cache occupancy.
For detailed information, please refer to Intel SDM chapter 17.14.

Cache QoS Monitoring provides a layer of abstraction between applications and
logical processors through the use of Resource Monitoring IDs (RMIDs).
In the Xen design, each guest in the system can be assigned an RMID
independently, while RMID 0 is reserved for domains that do not enable the CQM
service. When any of a domain's vcpus is scheduled on a logical processor, the
domain's RMID is activated by programming the value into a specific MSR, and
when the vcpu is scheduled out, RMID 0 is programmed into that MSR instead.
The Cache QoS hardware tracks cache utilization of memory accesses according to
the RMIDs and reports monitored data via a counter register. With this
solution, we can learn how much L3 cache a certain guest uses.

To attach the CQM service to a certain guest, two approaches are provided:
1) Create the guest with "pqos_cqm=1" set in its configuration file.
2) Use "xl pqos-attach cqm domid" for a running guest.

To detach the CQM service from a guest, users can:
1) Use "xl pqos-detach cqm domid" for a running guest.
2) Destroy the guest, which also detaches the CQM service.

To get the L3 cache usage, users can run:
$ xl pqos-list cqm

The data below is an example showing how the CQM-related data is exposed to
the end user.

[root@localhost]# xl pqos-list cqm
Name               ID  SocketID        L3C_Usage       SocketID        L3C_Usage
Domain-0            0         0         20127744              1         25231360
ExampleHVMDomain    1         0          3211264              1         10551296

RMID count    56        RMID available    53

Dongxiao Xu (6):
  x86: detect and initialize Cache QoS Monitoring feature
  x86: dynamically attach/detach CQM service for a guest
  x86: collect CQM information from all sockets
  x86: enable CQM monitoring for each domain RMID
  xsm: add platform QoS related xsm policies
  tools: enable Cache QoS Monitoring feature for libxl/libxc

 docs/man/xl.pod.1                            |   23 +++
 docs/misc/xen-command-line.markdown          |    7 +
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 +-
 tools/libxc/xc_domain.c                      |   36 ++++
 tools/libxc/xenctrl.h                        |   12 ++
 tools/libxl/Makefile                         |    3 +-
 tools/libxl/libxl.h                          |    4 +
 tools/libxl/libxl_pqos.c                     |  132 +++++++++++++
 tools/libxl/libxl_types.idl                  |    7 +
 tools/libxl/xl.h                             |    3 +
 tools/libxl/xl_cmdimpl.c                     |  111 +++++++++++
 tools/libxl/xl_cmdtable.c                    |   15 ++
 xen/arch/x86/Makefile                        |    1 +
 xen/arch/x86/domain.c                        |    8 +
 xen/arch/x86/domctl.c                        |   28 +++
 xen/arch/x86/pqos.c                          |  273 ++++++++++++++++++++++++++
 xen/arch/x86/setup.c                         |    3 +
 xen/arch/x86/sysctl.c                        |   58 ++++++
 xen/include/asm-x86/cpufeature.h             |    1 +
 xen/include/asm-x86/domain.h                 |    2 +
 xen/include/asm-x86/msr-index.h              |    5 +
 xen/include/asm-x86/pqos.h                   |   59 ++++++
 xen/include/public/domctl.h                  |   11 ++
 xen/include/public/sysctl.h                  |   11 ++
 xen/xsm/flask/hooks.c                        |    8 +
 xen/xsm/flask/policy/access_vectors          |   17 +-
 27 files changed, 839 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxl/libxl_pqos.c
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lX-0005Qo-0I; Wed, 19 Feb 2014 06:36:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lU-0005QH-VN
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:17 +0000
Received: from [85.158.137.68:5648] by server-2.bemta-3.messagelabs.com id
	D2/09-06531-0E054035; Wed, 19 Feb 2014 06:36:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!3
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12685 invoked from network); 19 Feb 2014 06:36:15 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:15 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:55 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="457889571"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 22:36:12 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:41 +0800
Message-Id: <1392791564-37170-4-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 3/6] x86: collect CQM information from all
	sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Collect CQM information (L3 cache occupancy) from all sockets.
An upper-layer application can parse the data structure to obtain a
guest's L3 cache occupancy on particular sockets.

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/pqos.c             |   43 +++++++++++++++++++++++++++++
 xen/arch/x86/sysctl.c           |   58 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/msr-index.h |    4 +++
 xen/include/asm-x86/pqos.h      |    3 ++
 xen/include/public/sysctl.h     |   11 ++++++++
 5 files changed, 119 insertions(+)

diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index eb469ac..2cde56e 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -15,6 +15,7 @@
  * more details.
  */
 #include <asm/processor.h>
+#include <asm/msr.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/spinlock.h>
@@ -205,6 +206,48 @@ out:
     spin_unlock(&cqm->cqm_lock);
 }
 
+static void read_cqm_data(void *arg)
+{
+    uint64_t cqm_data;
+    unsigned int rmid;
+    int socket = cpu_to_socket(smp_processor_id());
+    unsigned long i;
+
+    ASSERT(system_supports_cqm());
+
+    if ( socket < 0 )
+        return;
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
+            continue;
+
+        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
+        rdmsrl(MSR_IA32_QMC, cqm_data);
+
+        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
+        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
+            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;
+    }
+}
+
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
+{
+    unsigned int nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+    unsigned int nr_rmids = cqm->max_rmid + 1;
+
+    /* Read CQM data in current CPU */
+    read_cqm_data(NULL);
+    /* Issue IPI to other CPUs to read CQM data */
+    on_selected_cpus(cpu_cqmdata_map, read_cqm_data, NULL, 1);
+
+    /* Copy the rmid_to_dom info to the buffer */
+    memcpy(cqm->buffer + nr_sockets * nr_rmids, cqm->rmid_to_dom,
+           sizeof(domid_t) * (cqm->max_rmid + 1));
+
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..7b0acc9 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -28,6 +28,7 @@
 #include <xen/nodemask.h>
 #include <xen/cpu.h>
 #include <xsm/xsm.h>
+#include <asm/pqos.h>
 
 #define get_xen_guest_handle(val, hnd)  do { val = (hnd).p; } while (0)
 
@@ -66,6 +67,30 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+/* Select one random CPU for each socket. Current CPU's socket is excluded */
+static void select_socket_cpu(cpumask_t *cpu_bitmap)
+{
+    int i;
+    unsigned int cpu;
+    int socket, socket_curr = cpu_to_socket(smp_processor_id());
+    DECLARE_BITMAP(sockets, NR_CPUS);
+
+    bitmap_zero(sockets, NR_CPUS);
+    if (socket_curr >= 0)
+        set_bit(socket_curr, sockets);
+
+    cpumask_clear(cpu_bitmap);
+    for ( i = 0; i < NR_CPUS; i++ )
+    {
+        socket = cpu_to_socket(i);
+        if ( socket < 0 || test_and_set_bit(socket, sockets) )
+            continue;
+        cpu = cpumask_any(per_cpu(cpu_core_mask, i));
+        if ( cpu < nr_cpu_ids )
+            cpumask_set_cpu(cpu, cpu_bitmap);
+    }
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +126,39 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_getcqminfo:
+    {
+        cpumask_var_t cpu_cqmdata_map;
+
+        if ( !system_supports_cqm() )
+        {
+            ret = -ENODEV;
+            break;
+        }
+
+        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
+        {
+            ret = -ENOMEM;
+            break;
+        }
+
+        memset(cqm->buffer, 0, cqm->buffer_size);
+
+        select_socket_cpu(cpu_cqmdata_map);
+        get_cqm_info(cpu_cqmdata_map);
+
+        sysctl->u.getcqminfo.buffer_mfn = virt_to_mfn(cqm->buffer);
+        sysctl->u.getcqminfo.size = cqm->buffer_size;
+        sysctl->u.getcqminfo.nr_rmids = cqm->max_rmid + 1;
+        sysctl->u.getcqminfo.nr_sockets = cpumask_weight(cpu_cqmdata_map) + 1;
+
+        if ( __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+
+        free_cpumask_var(cpu_cqmdata_map);
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index fc9fbc6..e3ff10c 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -489,4 +489,8 @@
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
 
+/* Platform QoS register */
+#define MSR_IA32_QOSEVTSEL             0x00000c8d
+#define MSR_IA32_QMC                   0x00000c8e
+
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index f25037d..4372af6 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -17,6 +17,8 @@
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
 #include <xen/sched.h>
+#include <xen/cpumask.h>
+#include <public/domctl.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -51,5 +53,6 @@ void init_platform_qos(void);
 
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
+void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
 
 #endif
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..335b1d9 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -632,6 +632,15 @@ struct xen_sysctl_coverage_op {
 typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
+struct xen_sysctl_getcqminfo {
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xen_sysctl_getcqminfo xen_sysctl_getcqminfo_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_getcqminfo_t);
+
 
 struct xen_sysctl {
     uint32_t cmd;
@@ -654,6 +663,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_getcqminfo                    21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +685,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_getcqminfo        getcqminfo;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:44 +0800
Message-Id: <1392791564-37170-7-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 6/6] tools: enable Cache QoS Monitoring
	feature for libxl/libxc

Introduce two new xl commands to attach/detach the CQM service for a guest:
$ xl pqos-attach cqm domid
$ xl pqos-detach cqm domid

Introduce one new xl command to retrieve guest CQM information
$ xl pqos-list cqm
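
For reference, the shared-buffer layout that the CQM listing walks — per-socket
L3 occupancy counters followed by an RMID-to-domain map — can be sketched in
standalone C. The helper names below (l3c_for, dom_for) are illustrative only
and not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the CQM buffer layout: first nr_sockets * nr_rmids uint64_t
 * L3 occupancy counters, then nr_rmids uint16_t RMID-to-domain entries.
 * l3c_for()/dom_for() are illustrative helpers, not part of the patch.
 */
static uint64_t l3c_for(const void *buf, uint32_t nr_rmids,
                        uint32_t socket, uint32_t rmid)
{
    const uint64_t *l3c = buf;
    return l3c[(uint64_t)nr_rmids * socket + rmid];
}

static uint16_t dom_for(const void *buf, uint32_t nr_sockets,
                        uint32_t nr_rmids, uint32_t rmid)
{
    const uint16_t *map = (const uint16_t *)((const uint8_t *)buf +
        (uint64_t)nr_sockets * nr_rmids * sizeof(uint64_t));
    return map[rmid];
}
```

The same index arithmetic appears in print_cqm_info() further down in this
patch when it prints one L3C_Usage column per socket.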

Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/man/xl.pod.1           |   23 ++++++++
 tools/libxc/xc_domain.c     |   36 ++++++++++++
 tools/libxc/xenctrl.h       |   12 ++++
 tools/libxl/Makefile        |    3 +-
 tools/libxl/libxl.h         |    4 ++
 tools/libxl/libxl_pqos.c    |  132 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl |    7 +++
 tools/libxl/xl.h            |    3 +
 tools/libxl/xl_cmdimpl.c    |  111 ++++++++++++++++++++++++++++++++++++
 tools/libxl/xl_cmdtable.c   |   15 +++++
 10 files changed, 345 insertions(+), 1 deletion(-)
 create mode 100644 tools/libxl/libxl_pqos.c

diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
index e7b9de2..c1b1acd 100644
--- a/docs/man/xl.pod.1
+++ b/docs/man/xl.pod.1
@@ -1334,6 +1334,29 @@ Load FLASK policy from the given policy file. The initial policy is provided to
 the hypervisor as a multiboot module; this command allows runtime updates to the
 policy. Loading new security policy will reset runtime changes to device labels.
 
+=head1 PLATFORM QOS
+
+Newer Intel processors may offer a monitoring capability in each logical
+processor to measure a specific quality-of-service metric, for example Cache
+QoS Monitoring to report L3 cache occupancy.
+
+=over 4
+
+=item B<pqos-attach> [I<qos-type>] [I<domain-id>]
+
+Attach a platform QoS service to a domain.
+The currently supported I<qos-type> is: "cqm".
+
+=item B<pqos-detach> [I<qos-type>] [I<domain-id>]
+
+Detach a platform QoS service from a domain.
+The currently supported I<qos-type> is: "cqm".
+
+=item B<pqos-list> [I<qos-type>]
+
+List platform QoS information for domains with an attached QoS service.
+The currently supported I<qos-type> is: "cqm".
+
 =back
 
 =head1 TO BE DOCUMENTED
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index c2fdd74..67b41e7 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1776,6 +1776,42 @@ int xc_domain_set_max_evtchn(xc_interface *xch, uint32_t domid,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_attach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags)
+{
+    DECLARE_DOMCTL;
+    domctl.cmd = XEN_DOMCTL_detach_pqos;
+    domctl.domain = (domid_t)domid;
+    domctl.u.qos_type.flags = flags;
+    return do_domctl(xch, &domctl);
+}
+
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info)
+{
+    int ret;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_getcqminfo;
+    ret = xc_sysctl(xch, &sysctl);
+    if ( ret >= 0 )
+    {
+        info->buffer_mfn = sysctl.u.getcqminfo.buffer_mfn;
+        info->size = sysctl.u.getcqminfo.size;
+        info->nr_rmids = sysctl.u.getcqminfo.nr_rmids;
+        info->nr_sockets = sysctl.u.getcqminfo.nr_sockets;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..f4eb198 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -2427,4 +2427,16 @@ int xc_kexec_load(xc_interface *xch, uint8_t type, uint16_t arch,
  */
 int xc_kexec_unload(xc_interface *xch, int type);
 
+struct xc_cqminfo
+{
+    uint64_aligned_t buffer_mfn;
+    uint32_t size;
+    uint32_t nr_rmids;
+    uint32_t nr_sockets;
+};
+typedef struct xc_cqminfo xc_cqminfo_t;
+
+int xc_domain_pqos_attach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_pqos_detach(xc_interface *xch, uint32_t domid, uint64_t flags);
+int xc_domain_getcqminfo(xc_interface *xch, xc_cqminfo_t *info);
 #endif /* XENCTRL_H */
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index d8495bb..8beb7f8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -76,7 +76,8 @@ LIBXL_OBJS = flexarray.o libxl.o libxl_create.o libxl_dm.o libxl_pci.o \
 			libxl_internal.o libxl_utils.o libxl_uuid.o \
 			libxl_json.o libxl_aoutils.o libxl_numa.o \
 			libxl_save_callout.o _libxl_save_msgs_callout.o \
-			libxl_qmp.o libxl_event.o libxl_fork.o $(LIBXL_OBJS-y)
+			libxl_qmp.o libxl_event.o libxl_fork.o libxl_pqos.o \
+			$(LIBXL_OBJS-y)
 LIBXL_OBJS += _libxl_types.o libxl_flask.o _libxl_types_internal.o
 
 $(LIBXL_OBJS): CFLAGS += $(CFLAGS_LIBXL) -include $(XEN_ROOT)/tools/config.h
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 12d6c31..f3d2202 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
 int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
 int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
 
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);
+
 /* misc */
 
 /* Each of these sets or clears the flag according to whether the
diff --git a/tools/libxl/libxl_pqos.c b/tools/libxl/libxl_pqos.c
new file mode 100644
index 0000000..664a891
--- /dev/null
+++ b/tools/libxl/libxl_pqos.c
@@ -0,0 +1,132 @@
+/*
+ * Copyright (C) 2014      Intel Corporation
+ * Author Jiongxi Li <jiongxi.li@intel.com>
+ * Author Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+#include "libxl_internal.h"
+
+static const char * const msg[] = {
+    [EINVAL] = "invalid QoS resource type! Supported types: \"cqm\"",
+    [ENODEV] = "CQM is not supported in this system.",
+    [EEXIST] = "CQM is already attached to this domain.",
+    [ENOENT] = "CQM is not attached to this domain.",
+    [EUSERS] = "there is no free CQM RMID available.",
+    [ESRCH]  = "is this Domain ID valid?",
+};
+
+static void libxl_pqos_err_msg(libxl_ctx *ctx, int err)
+{
+    GC_INIT(ctx);
+
+    switch (err) {
+    case EINVAL:
+    case ENODEV:
+    case EEXIST:
+    case EUSERS:
+    case ESRCH:
+    case ENOENT:
+        LOGE(ERROR, "%s", msg[err]);
+        break;
+    default:
+        LOGE(ERROR, "errno: %d", err);
+    }
+
+    GC_FREE;
+}
+
+static int libxl_pqos_type2flags(const char * qos_type, uint64_t *flags)
+{
+    int rc = 0;
+
+    if (!strcmp(qos_type, "cqm"))
+        *flags |= XEN_DOMCTL_pqos_cqm;
+    else
+        rc = -1;
+
+    return rc;
+}
+
+int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_attach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type)
+{
+    int rc;
+    uint64_t flags = 0;
+
+    rc = libxl_pqos_type2flags(qos_type, &flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, EINVAL);
+        return ERROR_FAIL;
+    }
+
+    rc = xc_domain_pqos_detach(ctx->xch, domid, flags);
+    if (rc < 0) {
+        libxl_pqos_err_msg(ctx, errno);
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
+void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo)
+{
+    int ret;
+    xc_cqminfo_t xcinfo;
+    GC_INIT(ctx);
+
+    ret = xc_domain_getcqminfo(ctx->xch, &xcinfo);
+    if (ret < 0) {
+        LOGE(ERROR, "getting domain cqm info");
+        return;
+    }
+
+    xlinfo->buffer_virt = (uint64_t)xc_map_foreign_range(ctx->xch, DOMID_XEN,
+                              xcinfo.size, PROT_READ, xcinfo.buffer_mfn);
+    if (xlinfo->buffer_virt == 0) {
+        LOGE(ERROR, "Failed to map cqm buffers");
+        return;
+    }
+    xlinfo->size = xcinfo.size;
+    xlinfo->nr_rmids = xcinfo.nr_rmids;
+    xlinfo->nr_sockets = xcinfo.nr_sockets;
+
+    GC_FREE;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..43c0f48 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -596,3 +596,10 @@ libxl_event = Struct("event",[
                                  ])),
            ("domain_create_console_available", Struct(None, [])),
            ]))])
+
+libxl_cqminfo = Struct("cqminfo", [
+    ("buffer_virt",    uint64),
+    ("size",           uint32),
+    ("nr_rmids",       uint32),
+    ("nr_sockets",     uint32),
+    ])
diff --git a/tools/libxl/xl.h b/tools/libxl/xl.h
index c876a33..4362b96 100644
--- a/tools/libxl/xl.h
+++ b/tools/libxl/xl.h
@@ -106,6 +106,9 @@ int main_setenforce(int argc, char **argv);
 int main_loadpolicy(int argc, char **argv);
 int main_remus(int argc, char **argv);
 int main_devd(int argc, char **argv);
+int main_pqosattach(int argc, char **argv);
+int main_pqosdetach(int argc, char **argv);
+int main_pqoslist(int argc, char **argv);
 
 void help(const char *command);
 
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index d93e01b..2e0b822 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -7364,6 +7364,117 @@ out:
     return ret;
 }
 
+int main_pqosattach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-attach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_attach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+int main_pqosdetach(int argc, char **argv)
+{
+    uint32_t domid;
+    int opt, rc;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-detach", 2) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+    domid = find_domain(argv[optind + 1]);
+
+    rc = libxl_pqos_detach(ctx, domid, qos_type);
+
+    return rc;
+}
+
+static void print_cqm_info(const libxl_cqminfo *info)
+{
+    unsigned int i, j, k;
+    char *domname;
+    int print_header;
+    int cqm_domains = 0;
+    uint16_t *rmid_to_dom;
+    uint64_t *l3c_data;
+    uint32_t first_domain = 0;
+    unsigned int num_domains = 1024;
+
+    if (info->nr_rmids == 0) {
+        printf("System doesn't support CQM.\n");
+        return;
+    }
+
+    print_header = 1;
+    l3c_data = (uint64_t *)(info->buffer_virt);
+    rmid_to_dom = (uint16_t *)(info->buffer_virt +
+                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
+    for (i = first_domain; i < (first_domain + num_domains); i++) {
+        for (j = 0; j < info->nr_rmids; j++) {
+            if (rmid_to_dom[j] != i)
+                continue;
+
+            if (print_header) {
+                printf("Name                                        ID");
+                for (k = 0; k < info->nr_sockets; k++)
+                    printf("\tSocketID\tL3C_Usage");
+                print_header = 0;
+            }
+
+            domname = libxl_domid_to_name(ctx, i);
+            printf("\n%-40s %5d", domname, i);
+            free(domname);
+            cqm_domains++;
+
+            for (k = 0; k < info->nr_sockets; k++)
+                printf("%10u %16lu     ",
+                        k, l3c_data[info->nr_rmids * k + j]);
+        }
+    }
+    if (!cqm_domains)
+        printf("No RMID is assigned to domains.\n");
+    else
+        printf("\n");
+
+    printf("\nRMID count %5d\tRMID available %5d\n",
+           info->nr_rmids, info->nr_rmids - cqm_domains - 1);
+}
+
+int main_pqoslist(int argc, char **argv)
+{
+    int opt;
+    const char *qos_type = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "", NULL, "pqos-list", 1) {
+        /* No options */
+    }
+
+    qos_type = argv[optind];
+
+    if (!strcmp(qos_type, "cqm")) {
+        libxl_cqminfo info;
+        libxl_map_cqm_buffer(ctx, &info);
+        print_cqm_info(&info);
+    } else {
+        fprintf(stderr, "Supported QoS resource type is: cqm.\n");
+        help("pqos-list");
+        return 2;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
index ebe0220..d4af4a9 100644
--- a/tools/libxl/xl_cmdtable.c
+++ b/tools/libxl/xl_cmdtable.c
@@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
       "[options]",
       "-F                      Run in the foreground",
     },
+    { "pqos-attach",
+      &main_pqosattach, 0, 1,
+      "Allocate and map qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-detach",
+      &main_pqosdetach, 0, 1,
+      "Relinquish qos resource",
+      "<Resource> <Domain>",
+    },
+    { "pqos-list",
+      &main_pqoslist, 0, 0,
+      "List qos information for all domains",
+      "<Resource>",
+    },
 };
 
 int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:40 +0800
Message-Id: <1392791564-37170-3-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 2/6] x86: dynamically attach/detach CQM
	service for a guest

Add hypervisor-side support for dynamically attaching and detaching the
CQM service for a given guest.

When the CQM service is attached to a guest, the system allocates an RMID
for it. When the service is detached or the guest is shut down, the RMID
is reclaimed for future use.
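
The allocation scheme above (RMID 0 reserved, free RMIDs marked by an
invalid-domain sentinel) can be sketched as a standalone user-space model.
MAX_RMID and the helper names are made up for the example; the real
alloc_cqm_rmid()/free_cqm_rmid() additionally hold cqm_lock and maintain a
used_rmid count:

```c
#include <assert.h>
#include <stdint.h>

#define DOMID_INVALID 0x7FF4u  /* Xen's invalid-domain sentinel */
#define MAX_RMID      4        /* tiny table for illustration; the real
                                  maximum comes from CPUID leaf 0xf */

static uint16_t rmid_to_dom[MAX_RMID + 1];

/* Mark every RMID free; RMID 0 stays reserved for the system. */
static void init_rmids(void)
{
    for ( unsigned int rmid = 0; rmid <= MAX_RMID; rmid++ )
        rmid_to_dom[rmid] = DOMID_INVALID;
}

/* Return a free RMID bound to domid, or 0 (the reserved RMID) if none left. */
static unsigned int alloc_rmid(uint16_t domid)
{
    for ( unsigned int rmid = 1; rmid <= MAX_RMID; rmid++ )
    {
        if ( rmid_to_dom[rmid] != DOMID_INVALID )
            continue;
        rmid_to_dom[rmid] = domid;
        return rmid;
    }
    return 0;
}

static void free_rmid(unsigned int rmid)
{
    if ( rmid != 0 )  /* never free the reserved RMID 0 */
        rmid_to_dom[rmid] = DOMID_INVALID;
}
```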

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c        |    3 +++
 xen/arch/x86/domctl.c        |   28 ++++++++++++++++++++
 xen/arch/x86/pqos.c          |   60 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/domain.h |    2 ++
 xen/include/asm-x86/pqos.h   |   12 +++++++++
 xen/include/public/domctl.h  |   11 ++++++++
 6 files changed, 116 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 16f2b50..2656204 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -60,6 +60,7 @@
 #include <xen/numa.h>
 #include <xen/iommu.h>
 #include <compat/vcpu.h>
+#include <asm/pqos.h>
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 DEFINE_PER_CPU(unsigned long, cr4);
@@ -612,6 +613,8 @@ void arch_domain_destroy(struct domain *d)
 
     free_xenheap_page(d->shared_info);
     cleanup_domain_irq_mapping(d);
+
+    free_cqm_rmid(d);
 }
 
 unsigned long pv_guest_cr4_fixup(const struct vcpu *v, unsigned long guest_cr4)
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index ef6c140..7219011 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -35,6 +35,7 @@
 #include <asm/mem_sharing.h>
 #include <asm/xstate.h>
 #include <asm/debugger.h>
+#include <asm/pqos.h>
 
 static int gdbsx_guest_mem_io(
     domid_t domid, struct xen_domctl_gdbsx_memio *iop)
@@ -1245,6 +1246,33 @@ long arch_do_domctl(
     }
     break;
 
+    case XEN_DOMCTL_attach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else
+            ret = alloc_cqm_rmid(d);
+    }
+    break;
+
+    case XEN_DOMCTL_detach_pqos:
+    {
+        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
+            ret = -EINVAL;
+        else if ( !system_supports_cqm() )
+            ret = -ENODEV;
+        else if ( d->arch.pqos_cqm_rmid > 0 )
+        {
+            free_cqm_rmid(d);
+            ret = 0;
+        }
+        else
+            ret = -ENOENT;
+    }
+    break;
+
     default:
         ret = iommu_do_domctl(domctl, d, u_domctl);
         break;
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index ba0de37..eb469ac 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -17,6 +17,7 @@
 #include <asm/processor.h>
 #include <xen/init.h>
 #include <xen/mm.h>
+#include <xen/spinlock.h>
 #include <asm/pqos.h>
 
 static bool_t __initdata opt_pqos = 1;
@@ -145,6 +146,65 @@ void __init init_platform_qos(void)
     init_qos_monitor();
 }
 
+int alloc_cqm_rmid(struct domain *d)
+{
+    int rc = 0;
+    unsigned int rmid;
+
+    ASSERT(system_supports_cqm());
+
+    spin_lock(&cqm->cqm_lock);
+
+    if ( d->arch.pqos_cqm_rmid > 0 )
+    {
+        rc = -EEXIST;
+        goto out;
+    }
+
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+    {
+        if ( cqm->rmid_to_dom[rmid] != DOMID_INVALID)
+            continue;
+
+        cqm->rmid_to_dom[rmid] = d->domain_id;
+        break;
+    }
+
+    /* No CQM RMID available, assign RMID=0 by default */
+    if ( rmid > cqm->max_rmid )
+    {
+        rmid = 0;
+        rc = -EUSERS;
+    }
+    else
+        cqm->used_rmid++;
+
+    d->arch.pqos_cqm_rmid = rmid;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+
+    return rc;
+}
+
+void free_cqm_rmid(struct domain *d)
+{
+    unsigned int rmid;
+
+    spin_lock(&cqm->cqm_lock);
+    rmid = d->arch.pqos_cqm_rmid;
+    /* We do not free system reserved "RMID=0" */
+    if ( rmid == 0 )
+        goto out;
+
+    cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+    d->arch.pqos_cqm_rmid = 0;
+    cqm->used_rmid--;
+
+out:
+    spin_unlock(&cqm->cqm_lock);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index ea72db2..662714d 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -313,6 +313,8 @@ struct arch_domain
     spinlock_t e820_lock;
     struct e820entry *e820;
     unsigned int nr_e820;
+
+    unsigned int pqos_cqm_rmid;       /* CQM RMID assigned to the domain */
 } __cacheline_aligned;
 
 #define has_arch_pdevs(d)    (!list_empty(&(d)->arch.pdev_list))
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 0a8065c..f25037d 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -16,6 +16,7 @@
  */
 #ifndef ASM_PQOS_H
 #define ASM_PQOS_H
+#include <xen/sched.h>
 
 #include <public/xen.h>
 #include <xen/spinlock.h>
@@ -38,6 +39,17 @@ struct pqos_cqm {
 };
 extern struct pqos_cqm *cqm;
 
+static inline bool_t system_supports_cqm(void)
+{
+    return !!cqm;
+}
+
+/* IA32_QM_CTR */
+#define IA32_QM_CTR_ERROR_MASK         (0x3ul << 62)
+
 void init_platform_qos(void);
 
+int alloc_cqm_rmid(struct domain *d);
+void free_cqm_rmid(struct domain *d);
+
 #endif
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 91f01fa..f8d9293 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -885,6 +885,14 @@ struct xen_domctl_set_max_evtchn {
 typedef struct xen_domctl_set_max_evtchn xen_domctl_set_max_evtchn_t;
 DEFINE_XEN_GUEST_HANDLE(xen_domctl_set_max_evtchn_t);
 
+struct xen_domctl_qos_type {
+#define _XEN_DOMCTL_pqos_cqm      0
+#define XEN_DOMCTL_pqos_cqm       (1U<<_XEN_DOMCTL_pqos_cqm)
+    uint64_t flags;
+};
+typedef struct xen_domctl_qos_type xen_domctl_qos_type_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_qos_type_t);
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -954,6 +962,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_setnodeaffinity               68
 #define XEN_DOMCTL_getnodeaffinity               69
 #define XEN_DOMCTL_set_max_evtchn                70
+#define XEN_DOMCTL_attach_pqos                   71
+#define XEN_DOMCTL_detach_pqos                   72
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1014,6 +1024,7 @@ struct xen_domctl {
         struct xen_domctl_set_broken_page_p2m set_broken_page_p2m;
         struct xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
         struct xen_domctl_gdbsx_domstatus   gdbsx_domstatus;
+        struct xen_domctl_qos_type          qos_type;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.9.5



From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:42 +0800
Message-Id: <1392791564-37170-5-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 4/6] x86: enable CQM monitoring for each
	domain RMID

If the CQM service is attached to a domain, its associated RMID is
programmed into hardware for monitoring when the domain's vcpu is
scheduled in. When the domain's vcpu is scheduled out, RMID 0 (system
reserved) is programmed instead.
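
The context-switch hook in this patch rewrites only the low RMID bits of
IA32_PQR_ASSOC, deriving the field width from CPUID leaf 0xf (EBX = maximum
RMID). The bit arithmetic can be sketched in standalone C; the helper names
are illustrative, mirroring get_count_order() and cqm_assoc_rmid():

```c
#include <assert.h>
#include <stdint.h>

/* Smallest n with (1ull << n) >= x, mirroring get_count_order(). */
static unsigned int count_order(uint64_t x)
{
    unsigned int n = 0;
    while ( (1ull << n) < x )
        n++;
    return n;
}

/* rmid_mask = ~(~0ull << get_count_order(ebx)), as in init_qos_monitor(). */
static uint64_t make_rmid_mask(uint64_t max_rmid)
{
    return ~(~0ull << count_order(max_rmid));
}

/* Replace only the RMID field of a PQR_ASSOC value, keeping upper bits,
 * as cqm_assoc_rmid() does before the conditional wrmsrl(). */
static uint64_t assoc_rmid(uint64_t pqr, uint64_t rmid_mask, unsigned int rmid)
{
    return (pqr & ~rmid_mask) | (rmid & rmid_mask);
}
```

Skipping the wrmsrl() when the computed value equals the current one avoids a
redundant MSR write on every context switch.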

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 xen/arch/x86/domain.c           |    5 +++++
 xen/arch/x86/pqos.c             |   14 ++++++++++++++
 xen/include/asm-x86/msr-index.h |    1 +
 xen/include/asm-x86/pqos.h      |    1 +
 4 files changed, 21 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 2656204..9eeedf0 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1372,6 +1372,8 @@ static void __context_switch(void)
     {
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
+        if ( system_supports_cqm() && cqm->used_rmid )
+            cqm_assoc_rmid(0);
         p->arch.ctxt_switch_from(p);
     }
 
@@ -1396,6 +1398,9 @@ static void __context_switch(void)
         }
         vcpu_restore_fpu_eager(n);
         n->arch.ctxt_switch_to(n);
+
+        if ( system_supports_cqm() && n->domain->arch.pqos_cqm_rmid > 0 )
+            cqm_assoc_rmid(n->domain->arch.pqos_cqm_rmid);
     }
 
     gdt = !is_pv_32on64_vcpu(n) ? per_cpu(gdt_table, cpu) :
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
index 2cde56e..7369e10 100644
--- a/xen/arch/x86/pqos.c
+++ b/xen/arch/x86/pqos.c
@@ -62,6 +62,7 @@ static void __init parse_pqos_param(char *s)
 custom_param("pqos", parse_pqos_param);
 
 struct pqos_cqm __read_mostly *cqm = NULL;
+static uint64_t __read_mostly rmid_mask;
 
 static void __init init_cqm(void)
 {
@@ -135,6 +136,8 @@ static void __init init_qos_monitor(void)
 
     cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
 
+    rmid_mask = ~(~0ull << get_count_order(ebx));
+
     if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
         init_cqm();
 }
@@ -248,6 +251,17 @@ void get_cqm_info(const cpumask_t *cpu_cqmdata_map)
 
 }
 
+void cqm_assoc_rmid(unsigned int rmid)
+{
+    uint64_t val;
+    uint64_t new_val;
+
+    rdmsrl(MSR_IA32_PQR_ASSOC, val);
+    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
+    if ( val != new_val )
+        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index e3ff10c..13800e6 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -492,5 +492,6 @@
 /* Platform QoS register */
 #define MSR_IA32_QOSEVTSEL             0x00000c8d
 #define MSR_IA32_QMC                   0x00000c8e
+#define MSR_IA32_PQR_ASSOC             0x00000c8f
 
 #endif /* __ASM_MSR_INDEX_H */
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
index 4372af6..87820d5 100644
--- a/xen/include/asm-x86/pqos.h
+++ b/xen/include/asm-x86/pqos.h
@@ -54,5 +54,6 @@ void init_platform_qos(void);
 int alloc_cqm_rmid(struct domain *d);
 void free_cqm_rmid(struct domain *d);
 void get_cqm_info(const cpumask_t *cpu_cqmdata_map);
+void cqm_assoc_rmid(unsigned int rmid);
 
 #endif
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lc-0005SJ-1X; Wed, 19 Feb 2014 06:36:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0la-0005RJ-ST
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:23 +0000
Received: from [85.158.137.68:49813] by server-14.bemta-3.messagelabs.com id
	EF/3F-08196-5E054035; Wed, 19 Feb 2014 06:36:21 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!5
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12874 invoked from network); 19 Feb 2014 06:36:20 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:20 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:32:00 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="477493561"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by fmsmga001.fm.intel.com with ESMTP; 18 Feb 2014 22:36:17 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:43 +0800
Message-Id: <1392791564-37170-6-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 5/6] xsm: add platform QoS related xsm
	policies
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add XSM policies for the pqos attach/detach hypercalls and for the
hypercall that retrieves CQM information.

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 ++++-
 xen/xsm/flask/hooks.c                        |    8 ++++++++
 xen/xsm/flask/policy/access_vectors          |   17 ++++++++++++++---
 4 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/tools/flask/policy/policy/modules/xen/xen.if b/tools/flask/policy/policy/modules/xen/xen.if
index dedc035..1f683af 100644
--- a/tools/flask/policy/policy/modules/xen/xen.if
+++ b/tools/flask/policy/policy/modules/xen/xen.if
@@ -49,7 +49,7 @@ define(`create_domain_common', `
 			getdomaininfo hypercall setvcpucontext setextvcpucontext
 			getscheduler getvcpuinfo getvcpuextstate getaddrsize
 			getaffinity setaffinity };
-	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim  set_max_evtchn };
+	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim set_max_evtchn pqos_op };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op };
diff --git a/tools/flask/policy/policy/modules/xen/xen.te b/tools/flask/policy/policy/modules/xen/xen.te
index bb59fe8..115fcfe 100644
--- a/tools/flask/policy/policy/modules/xen/xen.te
+++ b/tools/flask/policy/policy/modules/xen/xen.te
@@ -64,6 +64,9 @@ allow dom0_t xen_t:xen {
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op tmem_op
 	tmem_control getscheduler setscheduler
 };
+allow dom0_t xen_t:xen2 {
+	pqos_op
+};
 allow dom0_t xen_t:mmu memorymap;
 
 # Allow dom0 to use these domctls on itself. For domctls acting on other
@@ -76,7 +79,7 @@ allow dom0_t dom0_t:domain {
 	getpodtarget setpodtarget set_misc_info set_virq_handler
 };
 allow dom0_t dom0_t:domain2 {
-	set_cpuid gettsc settsc setscheduler set_max_evtchn
+	set_cpuid gettsc settsc setscheduler set_max_evtchn pqos_op
 };
 allow dom0_t dom0_t:resource { add remove };
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7cdef04..6ee7771 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -730,6 +730,10 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_set_max_evtchn:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_MAX_EVTCHN);
 
+    case XEN_DOMCTL_attach_pqos:
+    case XEN_DOMCTL_detach_pqos:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__PQOS_OP);
+
     default:
         printk("flask_domctl: Unknown op %d\n", cmd);
         return -EPERM;
@@ -785,6 +789,10 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_numainfo:
         return domain_has_xen(current->domain, XEN__PHYSINFO);
 
+    case XEN_SYSCTL_getcqminfo:
+        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
+                                    XEN2__PQOS_OP, NULL);
+
     default:
         printk("flask_sysctl: Unknown op %d\n", cmd);
         return -EPERM;
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1fbe241..91af8b2 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -3,9 +3,9 @@
 #
 # class class_name { permission_name ... }
 
-# Class xen consists of dom0-only operations dealing with the hypervisor itself.
-# Unless otherwise specified, the source is the domain executing the hypercall,
-# and the target is the xen initial sid (type xen_t).
+# Classes xen and xen2 consist of dom0-only operations dealing with the
+# hypervisor itself. Unless otherwise specified, the source is the domain
+# executing the hypercall, and the target is the xen initial sid (type xen_t).
 class xen
 {
 # XENPF_settime
@@ -75,6 +75,14 @@ class xen
     setscheduler
 }
 
+# This is a continuation of class xen, since only 32 permissions can be
+# defined per class
+class xen2
+{
+# XEN_SYSCTL_getcqminfo
+    pqos_op
+}
+
 # Classes domain and domain2 consist of operations that a domain performs on
 # another domain or on itself.  Unless otherwise specified, the source is the
 # domain executing the hypercall, and the target is the domain being operated on
@@ -196,6 +204,9 @@ class domain2
     setclaim
 # XEN_DOMCTL_set_max_evtchn
     set_max_evtchn
+# XEN_DOMCTL_attach_pqos
+# XEN_DOMCTL_detach_pqos
+    pqos_op
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lQ-0005Pd-Hz; Wed, 19 Feb 2014 06:36:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lO-0005PX-Rs
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:11 +0000
Received: from [85.158.137.68:46626] by server-2.bemta-3.messagelabs.com id
	2F/E8-06531-9D054035; Wed, 19 Feb 2014 06:36:09 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12331 invoked from network); 19 Feb 2014 06:36:08 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:08 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:47 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="485636115"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga002.jf.intel.com with ESMTP; 18 Feb 2014 22:36:04 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:38 +0800
Message-Id: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 0/6] enable Cache QoS Monitoring (CQM) feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Changes from v8:
 - Address comments from Ian Campbell, including:
   * Modify the return handling for xc_sysctl().
   * Add man page items for platform QoS related commands.
   * Fix typo in commit message.

Changes from v7:
 - Address comments from Andrew Cooper, including:
   * Check CQM capability before allocating cpumask memory.
   * Move one function declaration into the correct patch.

Changes from v6:
 - Address comments from Jan Beulich, including:
   * Remove the unnecessary CPUID feature check.
   * Remove the unnecessary socket_cpu_map.
   * Spinlock related changes; avoid spin_lock_irqsave().
   * Use a read-only mapping to pass CQM data between Xen and userspace,
     to avoid data copying.
   * Optimize RDMSR/WRMSR logic to avoid unnecessary calls.
   * Misc fixes including __read_mostly prefix, return value, etc.

Changes from v5:
 - Address comments from Dario Faggioli, including:
   * Define a new libxl_cqminfo structure to avoid reference of xc
     structure in libxl functions.
   * Use LOGE() instead of the LIBXL__LOG() functions.

Changes from v4:
 - When comparing the xl cqm parameter, use strcmp instead of strncmp;
   otherwise "xl pqos-attach cqmabcd domid" would be accepted as a valid
   command line.
 - Address comments from Andrew Cooper, including:
   * Adjust the pqos parameter parsing function.
   * Modify the pqos related documentation.
   * Add a check for opt_cqm_max_rmid in initialization code.
   * Do not IPI a CPU that is in the same socket as the current CPU.
 - Address comments from Dario Faggioli, including:
   * Fix a typo in exported symbols.
   * Return correct libxl error code for qos related functions.
   * Abstract the error printing logic into a function.
 - Address comment from Daniel De Graaf, including:
   * Add a return value for the pqos related check.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Modify the GPLv2 related file header, remove the address.

Changes from v3:
 - Use a structure to better organize CQM related global variables.
 - Address comments from Andrew Cooper, including:
   * Remove the domain creation flag for CQM RMID allocation.
   * Adjust the boot parameter format, use custom_param().
   * Add documentation for the new added boot parameter.
   * Change QoS type flag to be uint64_t.
   * Initialize the per socket cpu bitmap at system boot time.
   * Remove get_cqm_avail() function.
   * Misc format changes.
 - Address comment from Daniel De Graaf, including:
   * Use avc_current_has_perm() for XEN2__PQOS_OP that belongs to SECCLASS_XEN2.

Changes from v2:
 - Address comments from Andrew Cooper, including:
   * Merge tools stack changes into one patch.
   * Reduce the IPI number to one per socket.
   * Change structures for CQM data exchange between tools and Xen.
   * Misc format/variable/function name changes.
 - Address comments from Konrad Rzeszutek Wilk, including:
   * Simplify the error printing logic.
   * Add xsm check for the new added hypercalls.

Changes from v1:
 - Address comments from Andrew Cooper, including:
   * Change function names, e.g., alloc_cqm_rmid(), system_supports_cqm(), etc.
   * Change some structure element order to save packing cost.
   * Correct some function's return value.
   * Some programming styles change.
   * ...

Future generations of Intel Xeon processors may offer a monitoring
capability in each logical processor to measure specific quality-of-service
metrics, for example Cache QoS Monitoring (CQM) to report L3 cache
occupancy. For detailed information, please refer to Intel SDM chapter
17.14.

Cache QoS Monitoring provides a layer of abstraction between applications
and logical processors through the use of Resource Monitoring IDs (RMIDs).
In the Xen design, each guest in the system can be assigned an RMID
independently, while RMID=0 is reserved for domains that do not have the
CQM service enabled. When any of a domain's vcpus is scheduled on a logical
processor, the domain's RMID is activated by programming it into a specific
MSR; when the vcpu is scheduled out, RMID=0 is programmed into that MSR
instead. The Cache QoS hardware tracks cache utilization of memory accesses
according to the RMIDs and reports the monitored data via a counter
register. With this solution, we can learn how much L3 cache a certain
guest uses.

To attach the CQM service to a guest, two approaches are provided:
1) Create the guest with "pqos_cqm=1" set in its configuration file.
2) Use "xl pqos-attach cqm domid" on a running guest.

To detach the CQM service from a guest, users can:
1) Use "xl pqos-detach cqm domid" on a running guest.
2) Destroy the guest; this detaches the CQM service as well.

To get the L3 cache usage, users can run:
$ xl pqos-list cqm

The data below is an example of how the CQM related data is exposed to the
end user.

[root@localhost]# xl pqos-list cqm
Name               ID  SocketID        L3C_Usage       SocketID        L3C_Usage
Domain-0            0         0         20127744              1         25231360
ExampleHVMDomain    1         0          3211264              1         10551296

RMID count    56        RMID available    53

Dongxiao Xu (6):
  x86: detect and initialize Cache QoS Monitoring feature
  x86: dynamically attach/detach CQM service for a guest
  x86: collect CQM information from all sockets
  x86: enable CQM monitoring for each domain RMID
  xsm: add platform QoS related xsm policies
  tools: enable Cache QoS Monitoring feature for libxl/libxc

 docs/man/xl.pod.1                            |   23 +++
 docs/misc/xen-command-line.markdown          |    7 +
 tools/flask/policy/policy/modules/xen/xen.if |    2 +-
 tools/flask/policy/policy/modules/xen/xen.te |    5 +-
 tools/libxc/xc_domain.c                      |   36 ++++
 tools/libxc/xenctrl.h                        |   12 ++
 tools/libxl/Makefile                         |    3 +-
 tools/libxl/libxl.h                          |    4 +
 tools/libxl/libxl_pqos.c                     |  132 +++++++++++++
 tools/libxl/libxl_types.idl                  |    7 +
 tools/libxl/xl.h                             |    3 +
 tools/libxl/xl_cmdimpl.c                     |  111 +++++++++++
 tools/libxl/xl_cmdtable.c                    |   15 ++
 xen/arch/x86/Makefile                        |    1 +
 xen/arch/x86/domain.c                        |    8 +
 xen/arch/x86/domctl.c                        |   28 +++
 xen/arch/x86/pqos.c                          |  273 ++++++++++++++++++++++++++
 xen/arch/x86/setup.c                         |    3 +
 xen/arch/x86/sysctl.c                        |   58 ++++++
 xen/include/asm-x86/cpufeature.h             |    1 +
 xen/include/asm-x86/domain.h                 |    2 +
 xen/include/asm-x86/msr-index.h              |    5 +
 xen/include/asm-x86/pqos.h                   |   59 ++++++
 xen/include/public/domctl.h                  |   11 ++
 xen/include/public/sysctl.h                  |   11 ++
 xen/xsm/flask/hooks.c                        |    8 +
 xen/xsm/flask/policy/access_vectors          |   17 +-
 27 files changed, 839 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxl/libxl_pqos.c
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:36:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0lS-0005Py-4d; Wed, 19 Feb 2014 06:36:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0lQ-0005Pc-6D
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:36:12 +0000
Received: from [85.158.137.68:43409] by server-10.bemta-3.messagelabs.com id
	93/FD-07302-BD054035; Wed, 19 Feb 2014 06:36:11 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392791767!1511015!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12419 invoked from network); 19 Feb 2014 06:36:10 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-13.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 06:36:10 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 18 Feb 2014 22:31:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="457889552"
Received: from friday.sh.intel.com (HELO localhost) ([10.239.48.8])
	by orsmga001.jf.intel.com with ESMTP; 18 Feb 2014 22:36:07 -0800
From: Dongxiao Xu <dongxiao.xu@intel.com>
To: xen-devel@lists.xen.org
Date: Wed, 19 Feb 2014 14:32:39 +0800
Message-Id: <1392791564-37170-2-git-send-email-dongxiao.xu@intel.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	andrew.cooper3@citrix.com, Ian.Jackson@eu.citrix.com,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH v9 1/6] x86: detect and initialize Cache QoS
	Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Detect the platform QoS feature status and enumerate the resource types,
one of which is monitoring of L3 cache occupancy.

Also introduce a Xen command line parameter to control the QoS feature
status.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jiongxi Li <jiongxi.li@intel.com>
Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
---
 docs/misc/xen-command-line.markdown |    7 ++
 xen/arch/x86/Makefile               |    1 +
 xen/arch/x86/pqos.c                 |  156 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/setup.c                |    3 +
 xen/include/asm-x86/cpufeature.h    |    1 +
 xen/include/asm-x86/pqos.h          |   43 ++++++++++
 6 files changed, 211 insertions(+)
 create mode 100644 xen/arch/x86/pqos.c
 create mode 100644 xen/include/asm-x86/pqos.h

diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
index 15aa404..7751ffe 100644
--- a/docs/misc/xen-command-line.markdown
+++ b/docs/misc/xen-command-line.markdown
@@ -770,6 +770,13 @@ This option can be specified more than once (up to 8 times at present).
 ### ple\_window
 > `= <integer>`
 
+### pqos (Intel)
+> `= List of ( <boolean> | cqm:<boolean> | cqm_max_rmid:<integer> )`
+
+> Default: `pqos=1,cqm:1,cqm_max_rmid:255`
+
+Configure platform QoS services.
+
 ### reboot
 > `= t[riple] | k[bd] | n[o] [, [w]arm | [c]old]`
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d502bdf..54962e0 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -58,6 +58,7 @@ obj-y += crash.o
 obj-y += tboot.o
 obj-y += hpet.o
 obj-y += xstate.o
+obj-y += pqos.o
 
 obj-$(crash_debug) += gdbstub.o
 
diff --git a/xen/arch/x86/pqos.c b/xen/arch/x86/pqos.c
new file mode 100644
index 0000000..ba0de37
--- /dev/null
+++ b/xen/arch/x86/pqos.c
@@ -0,0 +1,156 @@
+/*
+ * pqos.c: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#include <asm/processor.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+#include <asm/pqos.h>
+
+static bool_t __initdata opt_pqos = 1;
+static bool_t __initdata opt_cqm = 1;
+static unsigned int __initdata opt_cqm_max_rmid = 255;
+
+static void __init parse_pqos_param(char *s)
+{
+    char *ss, *val_str;
+    int val;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        val = parse_bool(s);
+        if ( val >= 0 )
+            opt_pqos = val;
+        else
+        {
+            val = !!strncmp(s, "no-", 3);
+            if ( !val )
+                s += 3;
+
+            val_str = strchr(s, ':');
+            if ( val_str )
+                *val_str++ = '\0';
+
+            if ( val_str && !strcmp(s, "cqm") &&
+                 (val = parse_bool(val_str)) >= 0 )
+                opt_cqm = val;
+            else if ( val_str && !strcmp(s, "cqm_max_rmid") )
+                opt_cqm_max_rmid = simple_strtoul(val_str, NULL, 0);
+        }
+
+        s = ss + 1;
+    } while ( ss );
+}
+
+custom_param("pqos", parse_pqos_param);
+
+struct pqos_cqm __read_mostly *cqm = NULL;
+
+static void __init init_cqm(void)
+{
+    unsigned int rmid;
+    unsigned int eax, edx;
+    unsigned int cqm_pages;
+    unsigned int i;
+
+    if ( !opt_cqm_max_rmid )
+        return;
+
+    cqm = xzalloc(struct pqos_cqm);
+    if ( !cqm )
+        return;
+
+    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
+    if ( !(edx & QOS_MONITOR_EVTID_L3) )
+        goto out;
+
+    cqm->min_rmid = 1;
+    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
+
+    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
+    if ( !cqm->rmid_to_dom )
+        goto out;
+
+    /* Reserve RMID 0 for all domains not being monitored */
+    cqm->rmid_to_dom[0] = DOMID_XEN;
+    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
+        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
+
+    /* Allocate CQM buffer size in initialization stage */
+    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
+                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/
+                PAGE_SIZE + 1;
+    cqm->buffer_size = cqm_pages * PAGE_SIZE;
+
+    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);
+    if ( !cqm->buffer )
+    {
+        xfree(cqm->rmid_to_dom);
+        goto out;
+    }
+    memset(cqm->buffer, 0, cqm->buffer_size);
+
+    for ( i = 0; i < cqm_pages; i++ )
+        share_xen_page_with_privileged_guests(
+            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),
+            XENSHARE_readonly);
+
+    spin_lock_init(&cqm->cqm_lock);
+
+    cqm->used_rmid = 0;
+
+    printk(XENLOG_INFO "Cache QoS Monitoring Enabled.\n");
+
+    return;
+
+out:
+    xfree(cqm);
+    cqm = NULL;
+}
+
+static void __init init_qos_monitor(void)
+{
+    unsigned int qm_features;
+    unsigned int eax, ebx, ecx;
+
+    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )
+        return;
+
+    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
+
+    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
+        init_cqm();
+}
+
+void __init init_platform_qos(void)
+{
+    if ( !opt_pqos )
+        return;
+
+    init_qos_monitor();
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..639528f 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,6 +48,7 @@
 #include <asm/setup.h>
 #include <xen/cpu.h>
 #include <asm/nmi.h>
+#include <asm/pqos.h>
 
 /* opt_nosmp: If true, secondary processors are ignored. */
 static bool_t __initdata opt_nosmp;
@@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     domain_unpause_by_systemcontroller(dom0);
 
+    init_platform_qos();
+
     reset_stack_and_jump(init_done);
 }
 
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 1cfaf94..ca59668 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -147,6 +147,7 @@
 #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
+#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
new file mode 100644
index 0000000..0a8065c
--- /dev/null
+++ b/xen/include/asm-x86/pqos.h
@@ -0,0 +1,43 @@
+/*
+ * pqos.h: Platform QoS related service for guest.
+ *
+ * Copyright (c) 2014, Intel Corporation
+ * Author: Jiongxi Li  <jiongxi.li@intel.com>
+ * Author: Dongxiao Xu <dongxiao.xu@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ */
+#ifndef ASM_PQOS_H
+#define ASM_PQOS_H
+
+#include <public/xen.h>
+#include <xen/spinlock.h>
+
+/* QoS Resource Type Enumeration */
+#define QOS_MONITOR_TYPE_L3            0x2
+
+/* QoS Monitoring Event ID */
+#define QOS_MONITOR_EVTID_L3           0x1
+
+struct pqos_cqm {
+    spinlock_t cqm_lock;
+    uint64_t *buffer;
+    unsigned int min_rmid;
+    unsigned int max_rmid;
+    unsigned int used_rmid;
+    unsigned int upscaling_factor;
+    unsigned int buffer_size;
+    domid_t *rmid_to_dom;
+};
+extern struct pqos_cqm *cqm;
+
+void init_platform_qos(void);
+
+#endif
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 06:41:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 06:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG0qN-0006Iw-7N; Wed, 19 Feb 2014 06:41:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dongxiao.xu@intel.com>) id 1WG0qL-0006In-Hv
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 06:41:17 +0000
Received: from [193.109.254.147:21792] by server-11.bemta-14.messagelabs.com
	id 8A/AA-24604-C0254035; Wed, 19 Feb 2014 06:41:16 +0000
X-Env-Sender: dongxiao.xu@intel.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392792075!5253540!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25599 invoked from network); 19 Feb 2014 06:41:15 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-15.tower-27.messagelabs.com with SMTP;
	19 Feb 2014 06:41:15 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 18 Feb 2014 22:41:14 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,504,1389772800"; d="scan'208";a="477494859"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 18 Feb 2014 22:41:14 -0800
Received: from fmsmsx155.amr.corp.intel.com (10.18.116.71) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 22:41:14 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	FMSMSX155.amr.corp.intel.com (10.18.116.71) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 18 Feb 2014 22:41:13 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Wed, 19 Feb 2014 14:40:54 +0800
From: "Xu, Dongxiao" <dongxiao.xu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "Zhang, Xiantao"
	<xiantao.zhang@intel.com>, "Zhang, Yang Z" <yang.z.zhang@intel.com>
Thread-Topic: [Xen-devel] [PATCH] Don't track all memory when enabling log
	dirty to track vram
Thread-Index: AQHPLL4cMElNCsabiEC2WKJuayr3Epq8IVzA
Date: Wed, 19 Feb 2014 06:40:54 +0000
Message-ID: <40776A41FC278F40B59438AD47D147A91194BB6B@SHSMSX104.ccr.corp.intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
	<53035628020000780011D3EE@nat28.tlf.novell.com>
	<53038A33020000780011D5FD@nat28.tlf.novell.com>
In-Reply-To: <53038A33020000780011D5FD@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, "Dugger, Donald D" <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Tuesday, February 18, 2014 11:29 PM
> To: Xu, Dongxiao; Zhang, Xiantao; Zhang, Yang Z
> Cc: andrew.cooper3@citrix.com; George Dunlap; Dugger, Donald D;
> xen-devel@lists.xen.org; Tim Deegan
> Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log dirty
> to track vram
> 
> >>> On 18.02.14 at 12:46, "Jan Beulich" <JBeulich@suse.com> wrote:
> > Nothing at all, as it turns out. The regression is due to Dongxiao's
> >
> > http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
> >
> > which I have in my tree as part of various things pending for 4.5.
> > And which at the first, second, and third glance looks pretty
> > innocent (IOW I still have to find out _why_ it is wrong).
> 
> And here's a fixed version of the patch - we simply can't drop
> the bogus HVM_PARAM_IDENT_PT check entirely yet.
> 
> In the course of fixing this I also found two other shortcomings:
> - EPT EMT field should be updated upon guest MTRR writes (the
>   lack thereof is the reason for needing to retain the bogus check)
> - epte_get_entry_emt() either needs "order" passed, or its callers
>   must call it more than once for big/huge pages
> 

The patch below looks okay to me. Thanks!

Thanks,
Dongxiao

> Jan
> 
> x86/hvm: refine the judgment on IDENT_PT for EMT
> 
> When trying to get the EPT EMT type, the judgment on
> HVM_PARAM_IDENT_PT is not correct: it unconditionally returns WB type
> when the parameter is not set. Remove the related code.
> 
> Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
> 
> We can't fully drop the dependency yet, but we should certainly avoid
> overriding cases already properly handled. The reason for this is that
> the guest setting up its MTRRs happens _after_ the EPT tables got
> already constructed, and no code is in place to propagate this to the
> EPT code. Without this check we're forcing the guest to run with all of
> its memory uncachable until something happens to re-write every single
> EPT entry. But of course this has to be just a temporary solution.
> 
> In the same spirit we should defer the "very early" (when the guest is
> still being constructed and has no vCPU yet) override to the last
> possible point.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -689,13 +689,8 @@ uint8_t epte_get_entry_emt(struct domain
> 
>      *ipat = 0;
> 
> -    if ( (current->domain != d) &&
> -         ((d->vcpu == NULL) || ((v = d->vcpu[0]) == NULL)) )
> -        return MTRR_TYPE_WRBACK;
> -
> -    if ( !is_pvh_vcpu(v) &&
> -         !v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] )
> -        return MTRR_TYPE_WRBACK;
> +    if ( v->domain != d )
> +        v = d->vcpu ? d->vcpu[0] : NULL;
> 
>      if ( !mfn_valid(mfn_x(mfn)) )
>          return MTRR_TYPE_UNCACHABLE;
> @@ -718,7 +713,8 @@ uint8_t epte_get_entry_emt(struct domain
>          return MTRR_TYPE_WRBACK;
>      }
> 
> -    gmtrr_mtype = is_hvm_vcpu(v) ?
> +    gmtrr_mtype = is_hvm_domain(d) && v &&
> +                  d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] ?
>                    get_mtrr_type(&v->arch.hvm_vcpu.mtrr, (gfn <<
> PAGE_SHIFT)) :
>                    MTRR_TYPE_WRBACK;
> 
> 
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 08:16:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 08:16:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG2Jw-0007PF-QG; Wed, 19 Feb 2014 08:15:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG2Ju-0007PA-TA
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 08:15:55 +0000
Received: from [193.109.254.147:31364] by server-10.bemta-14.messagelabs.com
	id C2/F9-10711-A3864035; Wed, 19 Feb 2014 08:15:54 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392797751!1631332!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26486 invoked from network); 19 Feb 2014 08:15:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 08:15:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102102054"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 08:15:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 03:15:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WG2Ja-0000Zi-57;
	Wed, 19 Feb 2014 08:15:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WG2Ja-00081Z-45;
	Wed, 19 Feb 2014 08:15:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25128-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 08:15:34 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25128: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25128 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25128/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25117
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install    fail REGR. vs. 25117

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  910b590601970440bdb135ed83fe28d8a755173e
baseline version:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Julien Grall <julien.grall@linaro.org>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 910b590601970440bdb135ed83fe28d8a755173e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Feb 18 13:58:21 2014 +0000

    xen/arm: Save/restore GICH_VMCR on domain context switch
    
    The GICH_VMCR register contains aliases of important bits of the GICV
    interface, such as:
        - priority mask of the CPU
        - EOImode
        - ...
    
    We were safe because Linux guests always use the same value for these bits.
    When new guests handle priority or change the EOI mode, VCPU interrupt
    management will end up in a wrong state.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: George Dunlap <george.dunlap@citrix.com>

commit 4959e0eacf56456a4b16d59e98cec58f7c2d66be
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Feb 18 16:56:17 2014 +0000

    xen/arm: Correctly handle non-page aligned pointer in raw_copy_from_guest
    
    The current implementation of the raw_copy_guest helpers may lead to data
    corruption, and sometimes a Xen crash, when the guest virtual address is
    not aligned to PAGE_SIZE.
    
    When the total length is larger than a page, the length to read is wrongly
    computed with
        min(len, (unsigned)(PAGE_SIZE - offset))
    
    As the offset is only computed once per function, if the start address was
    not aligned to PAGE_SIZE, a later iteration can end up:
        - reading across a page boundary => Xen crash
        - reading from the previous page => data corruption
    
    This issue can be resolved by setting offset to 0 at the end of the first
    iteration. Indeed, after that, the guest virtual address is always aligned
    to PAGE_SIZE.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: George Dunlap <george.dunlap@citrix.com>
    [ ijc -- duplicated the comment in the other two functions with this behaviour ]
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 910b590601970440bdb135ed83fe28d8a755173e
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Feb 18 13:58:21 2014 +0000

    xen/arm: Save/restore GICH_VMCR on domain context switch
    
    The GICH_VMCR register contains aliases of important bits of the GICV interface, such as:
        - priority mask of the CPU
        - EOImode
        - ...
    
    We were safe so far because Linux guests always use the same value for
    these bits. When new guests handle priorities or change the EOI mode,
    VCPU interrupt management will end up in a wrong state.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: George Dunlap <george.dunlap@citrix.com>
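
    The save/restore described above can be sketched as follows. This is a
    minimal stand-alone model, not Xen's actual code: `struct vgic_state`,
    `gich_read_vmcr`/`gich_write_vmcr`, and the plain variable standing in
    for the hardware register are all hypothetical.

```c
/* Illustrative per-VCPU state; only the saved VMCR copy matters here. */
struct vgic_state {
    unsigned int vmcr;   /* saved GICH_VMCR: priority mask, EOImode, ... */
};

/* Stand-in for the single hardware GICH_VMCR register. */
static unsigned int hw_gich_vmcr;

static unsigned int gich_read_vmcr(void)    { return hw_gich_vmcr; }
static void gich_write_vmcr(unsigned int v) { hw_gich_vmcr = v; }

/* On domain context switch, park the outgoing VCPU's register view... */
static void vgic_save(struct vgic_state *prev)
{
    prev->vmcr = gich_read_vmcr();
}

/* ...and install the incoming VCPU's saved value. */
static void vgic_restore(const struct vgic_state *next)
{
    gich_write_vmcr(next->vmcr);
}
```

    With this in place, one guest's priority mask or EOImode setting can no
    longer leak into another guest across a context switch.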

commit 4959e0eacf56456a4b16d59e98cec58f7c2d66be
Author: Julien Grall <julien.grall@linaro.org>
Date:   Tue Feb 18 16:56:17 2014 +0000

    xen/arm: Correctly handle non-page aligned pointer in raw_copy_from_guest
    
    The current implementation of the raw_copy_from_guest helper may lead to
    data corruption, and sometimes a Xen crash, when the guest virtual address
    is not aligned to PAGE_SIZE.
    
    When the total length is larger than a page, the length to read is
    computed incorrectly as
        min(len, (unsigned)(PAGE_SIZE - offset))
    
    As the offset is only computed once per function call, if the start
    address was not aligned to PAGE_SIZE, we can end up, in subsequent
    iterations:
        - reading across a page boundary => Xen crash
        - reading the previous page => data corruption
    
    This issue can be resolved by setting offset to 0 at the end of the first
    iteration. Indeed, after the first iteration, the guest virtual address
    is always aligned to PAGE_SIZE.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Cc: George Dunlap <george.dunlap@citrix.com>
    [ ijc -- duplicated the comment in the other two functions with this behaviour ]
(qemu changes not included)
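
The page-chunked copy described in the second commit can be sketched as
follows. This is a simplified, hypothetical model: `copy_from_paged`, the
flat `mem` buffer standing in for guest memory, and the pointer-arithmetic
"mapping" are illustrative, not Xen's raw_copy_from_guest.

```c
#include <string.h>

#define PAGE_SIZE 4096UL

/* Copy len bytes starting at guest virtual address vaddr, one page-sized
 * chunk at a time, "mapping" each page individually as the real helper does. */
static void copy_from_paged(char *dst, const char *mem, unsigned long vaddr,
                            unsigned long len)
{
    unsigned long offset = vaddr & (PAGE_SIZE - 1);

    while (len) {
        const char *page = mem + (vaddr & ~(PAGE_SIZE - 1));
        unsigned long size = len < PAGE_SIZE - offset ? len
                                                      : PAGE_SIZE - offset;

        memcpy(dst, page + offset, size);
        dst += size;
        vaddr += size;
        len -= size;
        /* The fix: after the first chunk, vaddr is page-aligned, so a stale
         * non-zero offset would make later chunks start mid-page (data
         * corruption) or, in the real helper, read past the mapped page. */
        offset = 0;
    }
}
```

Without the final `offset = 0`, the second iteration would copy from the
stale intra-page offset of the newly "mapped" page, which is exactly the
corruption the commit describes.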

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 08:44:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 08:44:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG2lA-0007aB-Ry; Wed, 19 Feb 2014 08:44:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WFymG-0004bN-4Q
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 04:28:56 +0000
Received: from [193.109.254.147:17438] by server-11.bemta-14.messagelabs.com
	id 20/B6-24604-70334035; Wed, 19 Feb 2014 04:28:55 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392784130!5291604!1
X-Originating-IP: [202.81.31.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0OCA9PiAzNDExMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14172 invoked from network); 19 Feb 2014 04:28:54 -0000
Received: from e23smtp06.au.ibm.com (HELO e23smtp06.au.ibm.com) (202.81.31.148)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 04:28:54 -0000
Received: from /spool/local
	by e23smtp06.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Wed, 19 Feb 2014 14:28:49 +1000
Received: from d23dlp02.au.ibm.com (202.81.31.213)
	by e23smtp06.au.ibm.com (202.81.31.212) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Wed, 19 Feb 2014 14:28:48 +1000
Received: from d23relay04.au.ibm.com (d23relay04.au.ibm.com [9.190.234.120])
	by d23dlp02.au.ibm.com (Postfix) with ESMTP id 7F4342BB0055
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 15:28:47 +1100 (EST)
Received: from d23av01.au.ibm.com (d23av01.au.ibm.com [9.190.234.96])
	by d23relay04.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1J49Agr3801462
	for <xen-devel@lists.xenproject.org>; Wed, 19 Feb 2014 15:09:11 +1100
Received: from d23av01.au.ibm.com (localhost [127.0.0.1])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1J4SjAM008908
	for <xen-devel@lists.xenproject.org>; Wed, 19 Feb 2014 15:28:46 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1J4Sj4Y008905; Wed, 19 Feb 2014 15:28:45 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id 74F3AA039D; Wed, 19 Feb 2014 15:28:45 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Daniel Kiper <daniel.kiper@oracle.com>, xen-devel@lists.xenproject.org,
	virtio-dev@lists.oasis-open.org
In-Reply-To: <20140217132331.GA3441@olila.local.net-space.pl>
References: <20140217132331.GA3441@olila.local.net-space.pl>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Wed, 19 Feb 2014 10:56:21 +1030
Message-ID: <87vbwcaqxe.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14021904-7014-0000-0000-0000045AC6D6
X-Mailman-Approved-At: Wed, 19 Feb 2014 08:44:02 +0000
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel Kiper <daniel.kiper@oracle.com> writes:
> Hi,
>
> Below you can find a summary of the work regarding VIRTIO compatibility with
> different virtualization solutions. It was done mainly from a Xen point of
> view, but the results are quite generic and can be applied to a wide
> spectrum of virtualization platforms.

Hi Daniel,

        Sorry for the delayed response, I was pondering...  CC changed
to virtio-dev.

>From a standard POV: It's possible to abstract out the places where we use
'physical address' into an 'address handle'.  It's also possible to define
this per-platform (i.e. Xen-PV vs everyone else).  This is sane, since
Xen-PV is a distinct platform from x86.

For platforms using EPT, I don't think you want anything but guest
addresses, do you?

>From an implementation POV:

On IOMMU, start here for previous Linux discussion:
        http://thread.gmane.org/gmane.linux.kernel.virtualization/14410/focus=14650

And this is the real problem.  We don't want to use the PCI IOMMU for
PCI devices.  So it's not just a matter of using existing Linux APIs.

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 08:51:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 08:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG2rz-0007ig-Ru; Wed, 19 Feb 2014 08:51:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WG2ru-0007ib-Tf
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 08:51:07 +0000
Received: from [85.158.137.68:7356] by server-15.bemta-3.messagelabs.com id
	BB/1A-19263-67074035; Wed, 19 Feb 2014 08:51:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392799861!1553232!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22694 invoked from network); 19 Feb 2014 08:51:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 08:51:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Feb 2014 08:51:00 +0000
Message-Id: <53047E7F020000780011D8E1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 19 Feb 2014 08:50:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
	<53035628020000780011D3EE@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9EEB9B@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEB9B@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, DonaldD Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 02:17, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2014-02-18:
>>>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> Jan Beulich wrote on 2014-02-17:
>>>>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> And second, I have been fighting with finding both conditions and
>>>>> (eventually) the root cause of a severe performance regression
>>>>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>>>>> became _much_ worse after adding in the patch here (while in fact
>>>>> I had hoped it might help with the originally observed
>>>>> degradation): X startup fails due to timing out, and booting the
>>>>> guest now takes about 20 minutes. I didn't find the root cause of
>>>>> this yet, but meanwhile I know that
>>>>> - the same isn't observable on SVM
>>>>> - there's no problem when forcing the domain to use shadow mode
>>>>> - there's no need for any device to actually be assigned to the guest
>>>>> - the regression is very likely purely graphics related (based on the
>>>>>   observation that when running something that regularly but not
>>>>>   heavily updates the screen with X up, the guest consumes a full CPU's
>>>>>   worth of processing power, yet when that updating doesn't happen, CPU
>>>>>   consumption goes down, and it goes further down when shutting down X
>>>>>   altogether - at least as long as the patch here doesn't get involved).
>>>>> This I'm observing on a Westmere box (and I didn't notice it
>>>>> earlier because that's one of those where due to a chipset erratum
>>>>> the IOMMU gets turned off by default), so it's possible that this
>>>>> can't be seen on more modern hardware. I'll hopefully find time
>>>>> today to check this on the one newer (Sandy Bridge) box I have.
>>>> 
>>>> Just got done with trying this: By default, things work fine there. As
>>>> soon as I use "iommu=no-snoop", things go bad (even worse than on the
>>>> older box - the guest is consuming about 2.5 CPUs worth of processing
>>>> power _without_ the patch here in use, so I don't even want to think
>>>> about trying it there); I guessed that to be another of the potential
>>>> sources of the problem since on that older box the respective hardware
>>>> feature is unavailable.
>>>> 
>>>> While I'll try to look into this further, I guess I have to defer
>>>> to our VT-d specialists at Intel at this point...
>>>> 
>>> 
>>> Hi, Jan,
>>> 
>>> I tried to reproduce it. But unfortunately, I cannot reproduce it in
>>> my box (Sandy Bridge EP) with the latest Xen (including my patch). I guess
>>> my configuration or steps may be wrong; here is mine:
>>> 
>>> 1. add iommu=1,no-snoop to the xen cmd line:
>>> (XEN) Intel VT-d Snoop Control not enabled.
>>> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
>>> (XEN) Intel VT-d Queued Invalidation enabled.
>>> (XEN) Intel VT-d Interrupt Remapping enabled.
>>> (XEN) Intel VT-d Shared EPT tables enabled.
>>> 
>>> 2. boot a rhel6u4 guest.
>>> 
>>> 3. after the guest boots up, run startx inside the guest.
>>> 
>>> 4. after a few seconds, the X window shows and I didn't see any error.
>>> Also the CPU utilization is about 1.7%.
>>> 
>>> Anything wrong?
>> 
>> Nothing at all, as it turns out. The regression is due to Dongxiao's
>> 
>> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.html
>> 
>> which I have in my tree as part of various things pending for 4.5.
>> And which at the first, second, and third glance looks pretty innocent
>> (IOW I still have to find out _why_ it is wrong).
>> 
>> In any case - I'm very sorry for the false alarm.
>> 
> 
> It doesn't matter. On the contrary, we need to thank you for helping us fix 
> this issue. :)
> 
> BTW, I still cannot reproduce it in my box, even when I use SLES 11 SP3 as 
> the guest.

I assume you didn't pull in the broken patch - I'm sure you would
see the problem if you did.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 08:51:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 08:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG2rz-0007ig-Ru; Wed, 19 Feb 2014 08:51:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WG2ru-0007ib-Tf
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 08:51:07 +0000
Received: from [85.158.137.68:7356] by server-15.bemta-3.messagelabs.com id
	BB/1A-19263-67074035; Wed, 19 Feb 2014 08:51:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392799861!1553232!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22694 invoked from network); 19 Feb 2014 08:51:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 08:51:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Feb 2014 08:51:00 +0000
Message-Id: <53047E7F020000780011D8E1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 19 Feb 2014 08:50:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53023239020000780011CED9@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8D3@SHSMSX104.ccr.corp.intel.com>
	<53035628020000780011D3EE@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9EEB9B@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEB9B@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, DonaldD Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 02:17, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2014-02-18:
>>>>> On 18.02.14 at 04:25, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> Jan Beulich wrote on 2014-02-17:
>>>>>>> On 17.02.14 at 11:18, "Jan Beulich" <JBeulich@suse.com> wrote:
>>>>> And second, I have been fighting with finding both conditions and
>>>>> (eventually) the root cause of a severe performance regression
>>>>> (compared to 4.3.x) I'm observing on an EPT+IOMMU system. This
>>>>> became _much_ worse after adding in the patch here (while in fact
>>>>> I had hoped it might help with the originally observed
>>>>> degradation): X startup fails due to timing out, and booting the
>>>>> guest now takes about 20 minutes). I didn't find the root cause of
>>>>> this yet, but meanwhile I know that
>>>>> - the same isn't observable on SVM
>>>>> - there's no problem when forcing the domain to use shadow
>>>>>   mode - there's no need for any device to actually be assigned to the
>>>>>   guest - the regression is very likely purely graphics related (based
>>>>>   on the observation that when running something that regularly but not
>>>>>   heavily updates the screen with X up, the guest consumes a full CPU's
>>>>>   worth of processing power, yet when that updating doesn't
>>>>> happen,
>> CPU
>>>>>   consumption goes down, and it goes further down when shutting
>>>>> down
>> X
>>>>>   altogether - at least as log as the patch here doesn't get involved).
>>>>> This I'm observing on a Westmere box (and I didn't notice it
>>>>> earlier because that's one of those where due to a chipset erratum
>>>>> the IOMMU gets turned off by default), so it's possible that this
>>>>> can't be seen on more modern hardware. I'll hopefully find time
>>>>> today to check this on the one newer (Sandy Bridge) box I have.
>>>> 
>>>> Just got done with trying this: By default, things work fine there. As
>>>> soon as I use "iommu=no-snoop", things go bad (even worse than one the
>>>> older box - the guest is consuming about 2.5 CPUs worth of processing
>>>> power _without_ the patch here in use, so I don't even want to think
>>>> about trying it there); I guessed that to be another of the potential
>>>> sources of the problem since on that older box the respective hardware
>>>> feature is unavailable.
>>>> 
>>>> While I'll try to look into this further, I guess I have to defer
>>>> to our VT-d specialists at Intel at this point...
>>>> 
>>> 
>>> Hi, Jan,
>>> 
>>> I tried to reproduce it. But unfortunately, I cannot reproduce it in
>>> my box (sandy bridge EP)with latest Xen(include my patch). I guess
>>> my configuration or steps may wrong, here is mine:
>>> 
>>> 1. add iommu=1,no-snoop in by xen cmd line:
>>> (XEN) Intel VT-d Snoop Control not enabled.
>>> (XEN) Intel VT-d Dom0 DMA Passthrough not enabled.
>>> (XEN) Intel VT-d Queued Invalidation enabled.
>>> (XEN) Intel VT-d Interrupt Remapping enabled.
>>> (XEN) Intel VT-d Shared EPT tables enabled.
>>> 
>>> 2. boot a rhel6u4 guest.
>>> 
>>> 3. after guest boot up, run startx inside guest.
>>> 
>>> 4. a few second, the X windows shows and didn't see any error. Also
>>> the CPU utilization is about 1.7%.
>>> 
>>> Any thing wrong?
>> 
>> Nothing at all, as it turns out. The regression is due to Dongxiao's
>> 
>> http://lists.xenproject.org/archives/html/xen-devel/2013-12/msg00367.h 
>> tml
>> 
>> which I have in my tree as part of various things pending for 4.5.
>> And which at the first, second, and third glance looks pretty innocent
>> (IOW I still have to find out _why_ it is wrong).
>> 
>> In any case - I'm very sorry for the false alarm.
>> 
> 
> It doesn't matter. On the contrary, we need to thank you for helping us to
> fix this issue. :)
> 
> BTW, I still cannot reproduce it on my box, even when I use SLES 11 SP3 as
> the guest.

I assume you didn't pull in the broken patch - I'm sure you would
see the problem if you did.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 08:55:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 08:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG2vu-0007qB-Ol; Wed, 19 Feb 2014 08:55:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WG2vp-0007q6-Pc
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 08:55:08 +0000
Received: from [85.158.137.68:12952] by server-4.bemta-3.messagelabs.com id
	3E/5E-04858-96174035; Wed, 19 Feb 2014 08:55:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392800104!2819333!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16694 invoked from network); 19 Feb 2014 08:55:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 08:55:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Feb 2014 08:55:03 +0000
Message-Id: <53047F74020000780011D8E4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 19 Feb 2014 08:55:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
	<53033544.2000409@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBC9@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBC9@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 02:28, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> George Dunlap wrote on 2014-02-18:
>> On 02/18/2014 03:14 AM, Zhang, Yang Z wrote:
>> perhaps my original patch, which will check
>> paging_mode_log_dirty(d) && log_global, is better:
>> 
>> It turns out that the reason I couldn't get a crash was because libxc
>> was actually paying attention to the -EINVAL return value, and
>> disabling and then re-enabling logdirty.  That's what would happen
>> before your dirty vram patch, and that's what happens after.  And
>> arguably, that's the correct behavior for any toolstack, given that the
>> interface returns an error.
> 
> Agree.
> 
>> 
>> This patch would actually change the interface; if we check this in,
>> then if you enable logdirty when dirty vram tracking is enabled, you
>> *won't* get an error, and thus *won't* disable and re-enable logdirty mode.
>> So actually, this patch would be more disruptive.
>> 
> 
> Jan, do you have any comment? 

This simplistic variant is just calling for problems. As was already
said elsewhere on this thread, we should simply do the mode change
properly: Track that a partial log-dirty mode is in use, and allow
switching to global log-dirty mode (converting all entries to R/O).

Jan
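
(The mode change described above -- a domain already in partial, VRAM-only log-dirty mode being switched to global log-dirty by making every p2m entry read-only instead of returning -EINVAL -- can be sketched with a toy model. Names such as `enable_partial` and `enable_global`, and the flat `readonly[]` array standing in for p2m entry permissions, are illustrative only, not Xen's actual interfaces.)

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: track which log-dirty mode a domain is in, and allow a
 * clean partial -> global transition instead of failing with -EINVAL. */
enum logdirty_mode { LD_OFF, LD_PARTIAL, LD_GLOBAL };

#define NR_ENTRIES 8            /* tiny stand-in for a domain's p2m */

struct domain_model {
    enum logdirty_mode mode;
    bool readonly[NR_ENTRIES];  /* stand-in for p2m entry permissions */
};

/* Enable log-dirty on a sub-range only (dirty-VRAM style tracking). */
static void enable_partial(struct domain_model *d, size_t start, size_t end)
{
    for (size_t i = start; i < end; i++)
        d->readonly[i] = true;
    d->mode = LD_PARTIAL;
}

/* Switch to global log-dirty: permitted even from partial mode; every
 * entry becomes R/O so all writes are trapped and logged.  Returning 0
 * means the toolstack no longer needs to disable and re-enable. */
static int enable_global(struct domain_model *d)
{
    for (size_t i = 0; i < NR_ENTRIES; i++)
        d->readonly[i] = true;
    d->mode = LD_GLOBAL;
    return 0;
}
```
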



From xen-devel-bounces@lists.xen.org Wed Feb 19 09:02:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:02:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG338-00083R-BW; Wed, 19 Feb 2014 09:02:38 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WG336-00083M-1I
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:02:36 +0000
Received: from [85.158.143.35:21378] by server-2.bemta-4.messagelabs.com id
	BC/E1-10891-B2374035; Wed, 19 Feb 2014 09:02:35 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392800554!6693994!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27403 invoked from network); 19 Feb 2014 09:02:34 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 09:02:34 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Feb 2014 09:02:33 +0000
Message-Id: <53048136020000780011D8FF@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 19 Feb 2014 09:02:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ian.jackson@eu.citrix.com>
References: <osstest-25126-mainreport@xen.org>
In-Reply-To: <osstest-25126-mainreport@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.2-testing test] 25126: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 05:16, xen.org <ian.jackson@eu.citrix.com> wrote:
> flight 25126 xen-4.2-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/25126/ 
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
>  test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
>  test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865

These have been pretty persistent over the last couple of days, and
this

Feb 18 13:36:04.556216   File "/usr/sbin/xend", line 36, in <module>
Feb 18 13:36:04.564155     from xen.xend.server import SrvDaemon

looks very much like something broken in the infrastructure or the
way things get built.

Any chance of getting this fixed?

Thanks, Jan

>  build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865
>  build-i386-oldkern            4 xen-build                 fail REGR. vs. 24865
> 
> Regressions which are regarded as allowable (not blocking):
>  test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
>  test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect
> 
> Tests which did not succeed, but are not blocking:
>  test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
>  test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
>  test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
>  test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
>  test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
>  test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
>  test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
>  test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
>  test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
>  test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
>  test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
>  test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
>  test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
>  test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
>  test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
>  test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
> 
> version targeted for testing:
>  xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
> baseline version:
>  xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9
> 
> ------------------------------------------------------------
> People who touched revisions under test:
> ------------------------------------------------------------
> 
> jobs:
>  build-amd64                                                  pass    
>  build-i386                                                   pass    
>  build-amd64-oldkern                                          fail    
>  build-i386-oldkern                                           fail    
>  build-amd64-pvops                                            pass    
>  build-i386-pvops                                             pass    
>  test-amd64-amd64-xl                                          pass    
>  test-amd64-i386-xl                                           pass    
>  test-i386-i386-xl                                            pass    
>  test-amd64-i386-rhel6hvm-amd                                 pass    
>  test-amd64-i386-qemut-rhel6hvm-amd                           pass    
>  test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
>  test-amd64-i386-qemuu-freebsd10-amd64                        pass    
>  test-amd64-amd64-xl-qemut-win7-amd64                         fail    
>  test-amd64-i386-xl-qemut-win7-amd64                          fail    
>  test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
>  test-amd64-i386-xl-qemuu-win7-amd64                          fail    
>  test-amd64-amd64-xl-win7-amd64                               fail    
>  test-amd64-i386-xl-win7-amd64                                fail    
>  test-amd64-i386-xl-credit2                                   pass    
>  test-amd64-i386-qemuu-freebsd10-i386                         pass    
>  test-amd64-amd64-xl-pcipt-intel                              fail    
>  test-amd64-i386-rhel6hvm-intel                               pass    
>  test-amd64-i386-qemut-rhel6hvm-intel                         pass    
>  test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
>  test-amd64-i386-xl-multivcpu                                 pass    
>  test-amd64-amd64-pair                                        pass    
>  test-amd64-i386-pair                                         pass    
>  test-i386-i386-pair                                          pass    
>  test-amd64-amd64-xl-sedf-pin                                 pass    
>  test-amd64-amd64-pv                                          fail    
>  test-amd64-i386-pv                                           fail    
>  test-i386-i386-pv                                            fail    
>  test-amd64-amd64-xl-sedf                                     pass    
>  test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
>  test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
>  test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
>  test-amd64-i386-xend-qemut-winxpsp3                          fail    
>  test-amd64-amd64-xl-qemut-winxpsp3                           fail    
>  test-i386-i386-xl-qemut-winxpsp3                             fail    
>  test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
>  test-i386-i386-xl-qemuu-winxpsp3                             fail    
>  test-amd64-i386-xend-winxpsp3                                fail    
>  test-amd64-amd64-xl-winxpsp3                                 fail    
>  test-i386-i386-xl-winxpsp3                                   fail    
> 
> 
> ------------------------------------------------------------
> sg-report-flight on woking.cam.xci-test.com
> logs: /home/xc_osstest/logs
> images: /home/xc_osstest/images
> 
> Logs, config files, etc. are available at
>     http://www.chiark.greenend.org.uk/~xensrcts/logs 
> 
> Test harness code can be found at
>     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary 
> 
> 
> Not pushing.
> 
> ------------------------------------------------------------
> commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
> Author: Jan Beulich <jbeulich@suse.com>
> Date:   Fri Feb 14 16:24:39 2014 +0100
> 
>     update Xen version to 4.2.4
> (qemu changes not included)
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




From xen-devel-bounces@lists.xen.org Wed Feb 19 09:03:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG347-00087y-VL; Wed, 19 Feb 2014 09:03:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mst@redhat.com>) id 1WG346-00087r-HC
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 09:03:38 +0000
Received: from [85.158.143.35:24285] by server-1.bemta-4.messagelabs.com id
	C2/37-31661-96374035; Wed, 19 Feb 2014 09:03:37 +0000
X-Env-Sender: mst@redhat.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392800616!6741896!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25790 invoked from network); 19 Feb 2014 09:03:36 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	19 Feb 2014 09:03:36 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1J93PXw017446
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 04:03:25 -0500
Received: from redhat.com (vpn1-6-34.ams2.redhat.com [10.36.6.34])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with SMTP
	id s1J93LrK023165; Wed, 19 Feb 2014 04:03:22 -0500
Date: Wed, 19 Feb 2014 11:08:25 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140219090825.GA10646@redhat.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
	<alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
	<53036DA8.3080804@redhat.com>
	<alpine.DEB.2.02.1402181709350.15812@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1402181709350.15812@kaball.uk.xensource.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 05:10:00PM +0000, Stefano Stabellini wrote:
> On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > Il 18/02/2014 15:25, Stefano Stabellini ha scritto:
> > > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > > Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> > > > > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > > > > of the email :-P).
> > > > > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > > > > response to the guest writing to a magic ioport specifically to unplug
> > > > > the emulated disk.
> > > > > With this patch after the guest boots I can still access both xvda and
> > > > > sda for the same disk, leading to fs corruptions.
> > > > 
> > > > Ok, the last paragraph is what I was missing.
> > > > 
> > > > So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
> > > > hotplug handler, dc->unplug is not called anymore.
> > > > 
> > > > But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't
> > > > free
> > > > the device, it just drops the disks underneath.  I think the simplest
> > > > solution
> > > > is to _not_ make it a dc->unplug callback at all, and call
> > > > pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug.
> > > > qdev_unplug means "ask guest to start unplug", which is not what Xen wants
> > > > to
> > > > do here.
> > > 
> > > Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
> > > Calling it directly from unplug_disks fixes the issue:
> > > 
> > > 
> > > ---
> > > 
> > > Call pci_piix3_xen_ide_unplug from unplug_disks
> > > 
> > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > 
> > > diff --git a/hw/ide/piix.c b/hw/ide/piix.c
> > > index 0eda301..40757eb 100644
> > > --- a/hw/ide/piix.c
> > > +++ b/hw/ide/piix.c
> > > @@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
> > >      return 0;
> > >  }
> > > 
> > > -static int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > +int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > >  {
> > >      PCIIDEState *pci_ide;
> > >      DriveInfo *di;
> > > @@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass,
> > > void *data)
> > >      k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
> > >      k->class_id = PCI_CLASS_STORAGE_IDE;
> > >      set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > > -    dc->unplug = pci_piix3_xen_ide_unplug;
> > >  }
> > > 
> > >  static const TypeInfo piix3_ide_xen_info = {
> > > diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
> > > index 70875e4..1d9d0e9 100644
> > > --- a/hw/xen/xen_platform.c
> > > +++ b/hw/xen/xen_platform.c
> > > @@ -27,6 +27,7 @@
> > > 
> > >  #include "hw/hw.h"
> > >  #include "hw/i386/pc.h"
> > > +#include "hw/ide.h"
> > >  #include "hw/pci/pci.h"
> > >  #include "hw/irq.h"
> > >  #include "hw/xen/xen_common.h"
> > > @@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void
> > > *o)
> > >      if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> > >              PCI_CLASS_STORAGE_IDE
> > >              && strcmp(d->name, "xen-pci-passthrough") != 0) {
> > > -        qdev_unplug(DEVICE(d), NULL);
> > > +        pci_piix3_xen_ide_unplug(DEVICE(d));
> > >      }
> > >  }
> > > 
> > > diff --git a/include/hw/ide.h b/include/hw/ide.h
> > > index 507e6d3..bc8bd32 100644
> > > --- a/include/hw/ide.h
> > > +++ b/include/hw/ide.h
> > > @@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo
> > > **hd_table,
> > >  PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > devfn);
> > >  PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > devfn);
> > >  PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > devfn);
> > > +int pci_piix3_xen_ide_unplug(DeviceState *dev);
> > >  void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > 
> > >  /* ide-mmio.c */
> > > 
> > 
> > Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> Thanks. Should I send it to Peter via the xen tree, or does anybody else
> want to pick this up?

I'll rebase my tree on top of this, to avoid bisect failures.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:04:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:04:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG34r-0008D2-DT; Wed, 19 Feb 2014 09:04:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG34q-0008Cu-Bi
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 09:04:24 +0000
Received: from [193.109.254.147:48939] by server-6.bemta-14.messagelabs.com id
	BD/10-03396-79374035; Wed, 19 Feb 2014 09:04:23 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392800661!5362838!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9137 invoked from network); 19 Feb 2014 09:04:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:04:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102112580"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 09:04:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:04:20 -0500
Message-ID: <1392800659.23084.69.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Date: Wed, 19 Feb 2014 09:04:19 +0000
In-Reply-To: <CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
	<CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Andrii Anisov <andrii.anisov@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 20:17 +0200, Oleksandr Tyshchenko wrote:
> Ian,
> I have checked your suggestion with a full cache flush.
> For this purpose I have used an ARMv7-specific function from our U-Boot.
> This function performs a clean and invalidation of the entire data cache
> at all levels.
> I call it in __cpu_up() before calling arch_cpu_up().

> It seems the issue is no longer reproduced.

OK good, thanks for doing the experiment.

> Now we need to find the missing cache flushes.

One approach would be to move the flush earlier and earlier, essentially
bisecting the boot sequence.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3G4-0008Vt-LR; Wed, 19 Feb 2014 09:16:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3G3-0008Vo-Bn
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 09:15:59 +0000
Received: from [85.158.137.68:22352] by server-11.bemta-3.messagelabs.com id
	55/E4-04255-E4674035; Wed, 19 Feb 2014 09:15:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392801356!1561630!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19330 invoked from network); 19 Feb 2014 09:15:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:15:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103827819"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 09:15:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:15:55 -0500
Message-ID: <1392801354.23084.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Wed, 19 Feb 2014 09:15:54 +0000
In-Reply-To: <5303A78C.6090709@citrix.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
	<CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
	<5303A78C.6090709@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>, Andrii
	Anisov <andrii.anisov@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 18:33 +0000, Julien Grall wrote:
> On 02/18/2014 06:17 PM, Oleksandr Tyshchenko wrote:
> > Ian,
> 
> Hello Oleksandr,
> 
> > I have checked your suggestion with a full cache flush.
> > For this purpose I have used an ARMv7-specific function from our U-Boot.
> > This function performs a clean and invalidation of the entire data cache
> > at all levels.
> 
> Did you try to only clean the cache? When the page table for the secondary
> CPU is created, Xen only cleans the cache for the specific range.
> I suspect it's not enough and we need to invalidate.

I think the error is occurring when paging is first enabled with HTTBR
set to boot_pgtable, which the secondary CPU is setting up itself during
head.S. The switch to the secondary's own PTs, which are set up by the
boot processor for it, happens a bit later. Other than the boot_pgtable
setup that the CPUs do themselves I don't think anything is touching the
boot pts with caches turned on.

So long as the data is not modified between the clean and enabling
MMU/caches the invalidate shouldn't really matter -- any data which
magically appears when you enable the caches should be in sync with the
underlying RAM I think.

BTW, it would be worth just confirming that caches are not enabled
either upon entry to Xen or after the magic smc #0.

> It should be easy to try with this small patch:
> 
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index e00be9e..5a8aba2 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -234,7 +234,7 @@ static inline void clean_xen_dcache_va_range(void
> *p, unsigned long size)
>      void *end;
>      dsb();           /* So the CPU issues all writes to the range */
>      for ( end = p + size; p < end; p += cacheline_bytes )
> -        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r"
> (p));
>      dsb();           /* So we know the flushes happen before continuing */
>  }
> 
> Regards,
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3G4-0008Vt-LR; Wed, 19 Feb 2014 09:16:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3G3-0008Vo-Bn
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 09:15:59 +0000
Received: from [85.158.137.68:22352] by server-11.bemta-3.messagelabs.com id
	55/E4-04255-E4674035; Wed, 19 Feb 2014 09:15:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392801356!1561630!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19330 invoked from network); 19 Feb 2014 09:15:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:15:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103827819"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 09:15:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:15:55 -0500
Message-ID: <1392801354.23084.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Wed, 19 Feb 2014 09:15:54 +0000
In-Reply-To: <5303A78C.6090709@citrix.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
	<CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
	<5303A78C.6090709@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>, Andrii
	Anisov <andrii.anisov@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 18:33 +0000, Julien Grall wrote:
> On 02/18/2014 06:17 PM, Oleksandr Tyshchenko wrote:
> > Ian,
> 
> Hello Oleksandr,
> 
> > I have checked your suggestion with a full cache flush.
> > For this purpose I used the ARMv7-specific function from our U-Boot.
> > This function performs a clean and invalidate of the entire data
> > cache at all levels.
> 
> Did you try to only clean the cache? When the page tables for the
> secondary CPU are created, Xen only cleans the cache for that specific
> range. I suspect that is not enough and that we also need to invalidate.

I think the error is occurring when paging is first enabled with HTTBR
set to boot_pgtable, which the secondary CPU sets up itself in head.S.
The switch to the secondary's own page tables, which are set up for it
by the boot processor, happens a bit later. Other than the boot_pgtable
setup that the CPUs do themselves, I don't think anything touches the
boot page tables with caches turned on.

So long as the data is not modified between the clean and enabling the
MMU/caches, the invalidate shouldn't really matter -- any data which
magically appears when you enable the caches should be in sync with the
underlying RAM, I think.

BTW, it would be worth just confirming that caches are not enabled
either upon entry to Xen or after the magic smc #0.

> It should be easy to try with this small patch:
> 
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index e00be9e..5a8aba2 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -234,7 +234,7 @@ static inline void clean_xen_dcache_va_range(void *p, unsigned long size)
>      void *end;
>      dsb();           /* So the CPU issues all writes to the range */
>      for ( end = p + size; p < end; p += cacheline_bytes )
> -        asm volatile (__clean_xen_dcache_one(0) : : "r" (p));
> +        asm volatile (__clean_and_invalidate_xen_dcache_one(0) : : "r" (p));
>      dsb();           /* So we know the flushes happen before continuing */
>  }
> 
> Regards,
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:24:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:24:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3O4-0000E9-Pa; Wed, 19 Feb 2014 09:24:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mst@redhat.com>) id 1WG3O2-0000E4-Ir
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 09:24:14 +0000
Received: from [85.158.143.35:5164] by server-2.bemta-4.messagelabs.com id
	B1/0D-10891-D3874035; Wed, 19 Feb 2014 09:24:13 +0000
X-Env-Sender: mst@redhat.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392801852!6746530!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19147 invoked from network); 19 Feb 2014 09:24:12 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-9.tower-21.messagelabs.com with SMTP;
	19 Feb 2014 09:24:12 -0000
Received: from int-mx09.intmail.prod.int.phx2.redhat.com
	(int-mx09.intmail.prod.int.phx2.redhat.com [10.5.11.22])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1J9O4ow001883
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 04:24:04 -0500
Received: from redhat.com (vpn1-6-34.ams2.redhat.com [10.36.6.34])
	by int-mx09.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with SMTP
	id s1J9O0IL032049; Wed, 19 Feb 2014 04:24:00 -0500
Date: Wed, 19 Feb 2014 11:29:04 +0200
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140219092904.GA11116@redhat.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
	<alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
	<53036DA8.3080804@redhat.com>
	<alpine.DEB.2.02.1402181709350.15812@kaball.uk.xensource.com>
	<20140219090825.GA10646@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140219090825.GA10646@redhat.com>
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.22
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 11:08:25AM +0200, Michael S. Tsirkin wrote:
> On Tue, Feb 18, 2014 at 05:10:00PM +0000, Stefano Stabellini wrote:
> > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > Il 18/02/2014 15:25, Stefano Stabellini ha scritto:
> > > > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > > > Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> > > > > > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > > > > > of the email :-P).
> > > > > > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > > > > > response to the guest writing to a magic ioport specifically to unplug
> > > > > > the emulated disk.
> > > > > > With this patch after the guest boots I can still access both xvda and
> > > > > > sda for the same disk, leading to fs corruptions.
> > > > > 
> > > > > Ok, the last paragraph is what I was missing.
> > > > > 
> > > > > So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
> > > > > hotplug handler, dc->unplug is not called anymore.
> > > > > 
> > > > > But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug
> > > > > doesn't free the device, it just drops the disks underneath.  I
> > > > > think the simplest solution is to _not_ make it a dc->unplug
> > > > > callback at all, and call pci_piix3_xen_ide_unplug from
> > > > > unplug_disks instead of qdev_unplug.  qdev_unplug means "ask
> > > > > guest to start unplug", which is not what Xen wants to do here.
> > > > 
> > > > Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
> > > > Calling it directly from unplug_disks fixes the issue:
> > > > 
> > > > 
> > > > ---
> > > > 
> > > > Call pci_piix3_xen_ide_unplug from unplug_disks
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > 
> > > > diff --git a/hw/ide/piix.c b/hw/ide/piix.c
> > > > index 0eda301..40757eb 100644
> > > > --- a/hw/ide/piix.c
> > > > +++ b/hw/ide/piix.c
> > > > @@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
> > > >      return 0;
> > > >  }
> > > > 
> > > > -static int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > > +int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > >  {
> > > >      PCIIDEState *pci_ide;
> > > >      DriveInfo *di;
> > > > @@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
> > > >      k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
> > > >      k->class_id = PCI_CLASS_STORAGE_IDE;
> > > >      set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > > > -    dc->unplug = pci_piix3_xen_ide_unplug;
> > > >  }
> > > > 
> > > >  static const TypeInfo piix3_ide_xen_info = {
> > > > diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
> > > > index 70875e4..1d9d0e9 100644
> > > > --- a/hw/xen/xen_platform.c
> > > > +++ b/hw/xen/xen_platform.c
> > > > @@ -27,6 +27,7 @@
> > > > 
> > > >  #include "hw/hw.h"
> > > >  #include "hw/i386/pc.h"
> > > > +#include "hw/ide.h"
> > > >  #include "hw/pci/pci.h"
> > > >  #include "hw/irq.h"
> > > >  #include "hw/xen/xen_common.h"
> > > > @@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
> > > >      if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> > > >              PCI_CLASS_STORAGE_IDE
> > > >              && strcmp(d->name, "xen-pci-passthrough") != 0) {
> > > > -        qdev_unplug(DEVICE(d), NULL);
> > > > +        pci_piix3_xen_ide_unplug(DEVICE(d));
> > > >      }
> > > >  }
> > > > 
> > > > diff --git a/include/hw/ide.h b/include/hw/ide.h
> > > > index 507e6d3..bc8bd32 100644
> > > > --- a/include/hw/ide.h
> > > > +++ b/include/hw/ide.h
> > > > @@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
> > > >  PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > >  PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > >  PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > > +int pci_piix3_xen_ide_unplug(DeviceState *dev);
> > > >  void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > > 
> > > >  /* ide-mmio.c */
> > > > 
> > > 
> > > Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> > 
> > Thanks. Should I send it to Peter via the xen tree or anybody else wants
> > to pick this up?
> 
> I'll rebase my tree on top of this, to avoid bisect failures.

Oh sorry, I see now that the commit which broke this has already been
merged. Please merge the fix through the xen tree then; that makes more
sense.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:39:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3ct-0000P0-Cb; Wed, 19 Feb 2014 09:39:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3cr-0000Ov-TK
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 09:39:34 +0000
Received: from [85.158.143.35:28616] by server-1.bemta-4.messagelabs.com id
	2E/AF-31661-4DB74035; Wed, 19 Feb 2014 09:39:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392802770!6734829!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13993 invoked from network); 19 Feb 2014 09:39:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:39:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102119533"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 09:39:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:39:29 -0500
Message-ID: <1392802767.23084.93.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Wed, 19 Feb 2014 09:39:27 +0000
In-Reply-To: <1392791564-37170-7-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-7-git-send-email-dongxiao.xu@intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 14:32 +0800, Dongxiao Xu wrote:
> +=item B<pqos-attach> [I<qos-type>] [I<domain-id>]
> +
> +Attach certain platform QoS service for a domain.
> +Current supported I<qos-type> is: "cqm".

> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 12d6c31..f3d2202 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
>  int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
>  int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
>  
> +int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
> +int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);

I have a feeling that qos_type should actually be an enum in the IDL.
The xl functions can probably use the autogenerated
libxl_BLAH_from_string() functions to help with parsing.

What other qos types are you envisaging? Is it valid to enable or
disable multiple such things independently?

> +void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);

So each qos type is going to come with its own map function?

I don't see the LIBXL_HAVE #define which we discussed last time anywhere
here.

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..43c0f48 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
>                                   ])),
>             ("domain_create_console_available", Struct(None, [])),
>             ]))])
> +
> +libxl_cqminfo = Struct("cqminfo", [
> +    ("buffer_virt",    uint64),

An opaque void * masquerading as an integer is not a suitable interface.

This should be a (pointer to a) struct of the appropriate type, or an
Array of such types etc (or possibly several such arrays depending on
what you are returning).

I haven't looked in detail into what is actually in this buffer, but
please try and have libxl bake it into a more consumable form -- e.g. an
array of per-domain properties or something rather than a raw list.

> +    ("size",           uint32),
> +    ("nr_rmids",       uint32),
> +    ("nr_sockets",     uint32),
> +    ])
> [...]
> +static void print_cqm_info(const libxl_cqminfo *info)
> +{
> +    unsigned int i, j, k;
> +    char *domname;
> +    int print_header;
> +    int cqm_domains = 0;
> +    uint16_t *rmid_to_dom;
> +    uint64_t *l3c_data;
> +    uint32_t first_domain = 0;
> +    unsigned int num_domains = 1024;
> +
> +    if (info->nr_rmids == 0) {
> +        printf("System doesn't support CQM.\n");
> +        return;
> +    }
> +
> +    print_header = 1;
> +    l3c_data = (uint64_t *)(info->buffer_virt);
> +    rmid_to_dom = (uint16_t *)(info->buffer_virt +
> +                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
> +    for (i = first_domain; i < (first_domain + num_domains); i++) {
> +        for (j = 0; j < info->nr_rmids; j++) {
> +            if (rmid_to_dom[j] != i)
> +                continue;
> +
> +            if (print_header) {
> +                printf("Name                                        ID");
> +                for (k = 0; k < info->nr_sockets; k++)
> +                    printf("\tSocketID\tL3C_Usage");
> +                print_header = 0;
> +            }
> +
> +            domname = libxl_domid_to_name(ctx, i);
> +            printf("\n%-40s %5d", domname, i);
> +            free(domname);
> +            cqm_domains++;
> +
> +            for (k = 0; k < info->nr_sockets; k++)
> +                printf("%10u %16lu     ",
> +                        k, l3c_data[info->nr_rmids * k + j]);
> +        }

This should be transformed into a sensible interface within libxl so
that it can be consumed in a straightforward manner by the users of
libxl, rather than asking them all to reimplement this.

Is the buffer format considered a frozen ABI?

> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..d4af4a9 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
>        "[options]",
>        "-F                      Run in the foreground",
>      },
> +    { "pqos-attach",
> +      &main_pqosattach, 0, 1,
> +      "Allocate and map qos resource",
> +      "<Resource> <Domain>",
> +    },
> +    { "pqos-detach",
> +      &main_pqosdetach, 0, 1,
> +      "Reliquish qos resource",

"Relinquish"

and perhaps "resources" (in both cases)

> +      "<Resource> <Domain>",
> +    },
> +    { "pqos-list",
> +      &main_pqoslist, 0, 0,
> +      "List qos information for all domains",
> +      "<Resource>",
> +    },
>  };
>  
>  int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:39:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3ct-0000P0-Cb; Wed, 19 Feb 2014 09:39:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3cr-0000Ov-TK
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 09:39:34 +0000
Received: from [85.158.143.35:28616] by server-1.bemta-4.messagelabs.com id
	2E/AF-31661-4DB74035; Wed, 19 Feb 2014 09:39:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392802770!6734829!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13993 invoked from network); 19 Feb 2014 09:39:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:39:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102119533"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 09:39:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:39:29 -0500
Message-ID: <1392802767.23084.93.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Dongxiao Xu <dongxiao.xu@intel.com>
Date: Wed, 19 Feb 2014 09:39:27 +0000
In-Reply-To: <1392791564-37170-7-git-send-email-dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-7-git-send-email-dongxiao.xu@intel.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: keir@xen.org, stefano.stabellini@eu.citrix.com, andrew.cooper3@citrix.com,
	Ian.Jackson@eu.citrix.com, xen-devel@lists.xen.org,
	JBeulich@suse.com, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 6/6] tools: enable Cache QoS Monitoring
 feature for libxl/libxc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 14:32 +0800, Dongxiao Xu wrote:
> +=item B<pqos-attach> [I<qos-type>] [I<domain-id>]
> +
> +Attach certain platform QoS service for a domain.
> +Current supported I<qos-type> is: "cqm".

> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 12d6c31..f3d2202 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -1105,6 +1105,10 @@ int libxl_flask_getenforce(libxl_ctx *ctx);
>  int libxl_flask_setenforce(libxl_ctx *ctx, int mode);
>  int libxl_flask_loadpolicy(libxl_ctx *ctx, void *policy, uint32_t size);
>  
> +int libxl_pqos_attach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);
> +int libxl_pqos_detach(libxl_ctx *ctx, uint32_t domid, const char * qos_type);

I have a feeling that qos_type should actually be an enum in the IDL.
The xl functions can probably use the autogenerate
libxl_BLAH_from_string() functions to help with parsing.

What other qos types are you envisaging? Is it valid to enable or
disable multiple such things independently?

> +void libxl_map_cqm_buffer(libxl_ctx *ctx, libxl_cqminfo *xlinfo);

So each qos type is going to come with its own map function?

I don't see the LIBXL_HAVE #define which we discussed last time anywhere
here.

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..43c0f48 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -596,3 +596,10 @@ libxl_event = Struct("event",[
>                                   ])),
>             ("domain_create_console_available", Struct(None, [])),
>             ]))])
> +
> +libxl_cqminfo = Struct("cqminfo", [
> +    ("buffer_virt",    uint64),

An opaque void * masquerading as an integer is not a suitable interface.

This should be a (pointer to a) struct of the appropriate type, or an
Array of such types etc (or possibly several such arrays depending on
what you are returning).

I haven't looked in detail into what is actually in this buffer, but
please try and have libxl bake it into a more consumable form -- e.g. an
array of per-domain properties or something rather than a raw list.

> +    ("size",           uint32),
> +    ("nr_rmids",       uint32),
> +    ("nr_sockets",     uint32),
> +    ])
> [...]
> +static void print_cqm_info(const libxl_cqminfo *info)
> +{
> +    unsigned int i, j, k;
> +    char *domname;
> +    int print_header;
> +    int cqm_domains = 0;
> +    uint16_t *rmid_to_dom;
> +    uint64_t *l3c_data;
> +    uint32_t first_domain = 0;
> +    unsigned int num_domains = 1024;
> +
> +    if (info->nr_rmids == 0) {
> +        printf("System doesn't support CQM.\n");
> +        return;
> +    }
> +
> +    print_header = 1;
> +    l3c_data = (uint64_t *)(info->buffer_virt);
> +    rmid_to_dom = (uint16_t *)(info->buffer_virt +
> +                  info->nr_sockets * info->nr_rmids * sizeof(uint64_t));
> +    for (i = first_domain; i < (first_domain + num_domains); i++) {
> +        for (j = 0; j < info->nr_rmids; j++) {
> +            if (rmid_to_dom[j] != i)
> +                continue;
> +
> +            if (print_header) {
> +                printf("Name                                        ID");
> +                for (k = 0; k < info->nr_sockets; k++)
> +                    printf("\tSocketID\tL3C_Usage");
> +                print_header = 0;
> +            }
> +
> +            domname = libxl_domid_to_name(ctx, i);
> +            printf("\n%-40s %5d", domname, i);
> +            free(domname);
> +            cqm_domains++;
> +
> +            for (k = 0; k < info->nr_sockets; k++)
> +                printf("%10u %16lu     ",
> +                        k, l3c_data[info->nr_rmids * k + j]);
> +        }

This should be transformed into a sensible interface within libxl so
that it can be consumed in a straightforward manner by the users of
libxl, rather than asking them all to reimplement this.

Is the buffer format considered a frozen ABI?

> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..d4af4a9 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -494,6 +494,21 @@ struct cmd_spec cmd_table[] = {
>        "[options]",
>        "-F                      Run in the foreground",
>      },
> +    { "pqos-attach",
> +      &main_pqosattach, 0, 1,
> +      "Allocate and map qos resource",
> +      "<Resource> <Domain>",
> +    },
> +    { "pqos-detach",
> +      &main_pqosdetach, 0, 1,
> +      "Reliquish qos resource",

"Relinquish"

and perhaps "resources" (in both cases)

> +      "<Resource> <Domain>",
> +    },
> +    { "pqos-list",
> +      &main_pqoslist, 0, 0,
> +      "List qos information for all domains",
> +      "<Resource>",
> +    },
>  };
>  
>  int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:47:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:47:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3kW-0000ZD-Ga; Wed, 19 Feb 2014 09:47:28 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3kU-0000Z8-Jf
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:47:26 +0000
Received: from [85.158.137.68:28412] by server-13.bemta-3.messagelabs.com id
	E1/0E-26923-DAD74035; Wed, 19 Feb 2014 09:47:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392803243!2831753!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29707 invoked from network); 19 Feb 2014 09:47:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:47:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103832789"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 09:47:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:47:22 -0500
Message-ID: <1392803241.23084.94.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:47:21 +0000
In-Reply-To: <CAB=NE6WwPfPi-8Yudxs4-OA2LPOjo8-XxUiVbtu6=BQ8FhEEOA@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-5-git-send-email-mcgrof@do-not-panic.com>
	<53021E87.6020607@citrix.com>
	<CAB=NE6WwPfPi-8Yudxs4-OA2LPOjo8-XxUiVbtu6=BQ8FhEEOA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 4/4] xen-netback: skip IPv4 and IPv6
 interfaces
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 12:16 -0800, Luis R. Rodriguez wrote:
> On Mon, Feb 17, 2014 at 6:36 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> > Also, the backend is not necessarily Dom0, you can connect two guests with
> > backend/frontend pairs.
> 
> Can you elaborate a bit more on this type of setup?

The domain providing backend networking services is not necessarily
dom0, it might be a driver domain:
http://wiki.xen.org/wiki/Driver_Domain

I think from your PoV here it probably doesn't matter whether the driver
domain is dom0 or some other domain.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:48:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:48:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3lW-0000dG-1G; Wed, 19 Feb 2014 09:48:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3lV-0000d9-0I
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:48:29 +0000
Received: from [85.158.143.35:11846] by server-2.bemta-4.messagelabs.com id
	13/10-10891-CED74035; Wed, 19 Feb 2014 09:48:28 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392803306!6753860!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31526 invoked from network); 19 Feb 2014 09:48:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:48:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102120835"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 09:48:26 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:48:25 -0500
Message-ID: <1392803304.23084.95.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:48:24 +0000
In-Reply-To: <CAB=NE6VxNByeWGk6_Ow7WgxA3HCwGBjrjL9MNVRGsEfFyeKTdw@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<5301E411.5060908@citrix.com>
	<CAB=NE6VxNByeWGk6_Ow7WgxA3HCwGBjrjL9MNVRGsEfFyeKTdw@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	xen-devel@lists.xenproject.org,
	David Vrabel <david.vrabel@citrix.com>, kvm@vger.kernel.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [RFC v2 0/4] net: bridge / ip optimizations for
 virtual net backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 11:43 -0800, Luis R. Rodriguez wrote:
> 
> New motivation: removing IPv4 and IPv6 from the backend interfaces can
> save up a lot of boiler plate run time code, triggers from ever taking
> place, and simplifying the backend interaces. If there is no use for
> IPv4 and IPv6 interfaces why do we have them? Note: I have yet to test
> the NAT case.

I think you need to do that test before you can unequivocally state
that there is no use for IPv4/6 interfaces here.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:50:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:50:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3nc-0000nO-J1; Wed, 19 Feb 2014 09:50:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3nb-0000nI-LQ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:50:39 +0000
Received: from [85.158.139.211:57857] by server-7.bemta-5.messagelabs.com id
	19/5D-14867-E6E74035; Wed, 19 Feb 2014 09:50:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392803436!4844450!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19073 invoked from network); 19 Feb 2014 09:50:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:50:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103833500"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 09:50:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:50:35 -0500
Message-ID: <1392803434.23084.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Wed, 19 Feb 2014 09:50:34 +0000
In-Reply-To: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> A long known problem of the upstream netback implementation that on the TX
> path (from guest to Dom0) it copies the whole packet from guest memory into
> Dom0. That simply became a bottleneck with 10Gb NICs, and generally it's a
> huge perfomance penalty. The classic kernel version of netback used grant
> mapping, and to get notified when the page can be unmapped, it used page
> destructors. Unfortunately that destructor is not an upstreamable solution.
> Ian Campbell's skb fragment destructor patch series [1] tried to solve this
> problem, however it seems to be very invasive on the network stack's code,
> and therefore haven't progressed very well.
> This patch series use SKBTX_DEV_ZEROCOPY flags to tell the stack it needs to
> know when the skb is freed up. That is the way KVM solved the same problem,
> and based on my initial tests it can do the same for us. Avoiding the extra
> copy boosted up TX throughput from 6.8 Gbps to 7.9 (I used a slower
> Interlagos box, both Dom0 and guest on upstream kernel, on the same NUMA node,
> running iperf 2.0.5, and the remote end was a bare metal box on the same 10Gb
> switch)
> Based on my investigations the packet get only copied if it is delivered to
> Dom0 stack,

This is not quite complete/accurate since you previously told me that it
is copied in the NAT/routed rather than bridged network topologies.

Please can you cover that aspect here too.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:52:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3pf-0000vm-3y; Wed, 19 Feb 2014 09:52:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3pd-0000vb-63
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:52:45 +0000
Received: from [85.158.137.68:47307] by server-9.bemta-3.messagelabs.com id
	50/E9-10184-CEE74035; Wed, 19 Feb 2014 09:52:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392803562!26495!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27549 invoked from network); 19 Feb 2014 09:52:43 -0000
From xen-devel-bounces@lists.xen.org Wed Feb 19 09:52:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3pf-0000vm-3y; Wed, 19 Feb 2014 09:52:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3pd-0000vb-63
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:52:45 +0000
Received: from [85.158.137.68:47307] by server-9.bemta-3.messagelabs.com id
	50/E9-10184-CEE74035; Wed, 19 Feb 2014 09:52:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392803562!26495!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27549 invoked from network); 19 Feb 2014 09:52:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:52:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102122040"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 09:52:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:52:41 -0500
Message-ID: <1392803559.23084.99.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:52:39 +0000
In-Reply-To: <CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 13:02 -0800, Luis R. Rodriguez wrote:
> On Sun, Feb 16, 2014 at 10:57 AM, Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> > On Fri, 14 Feb 2014 18:59:37 -0800
> > "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
> >
> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> >>
> >> It doesn't make sense for some interfaces to become a root bridge
> >> at any point in time. One example is virtual backend interfaces
> >> which rely on other entities on the bridge for actual physical
> >> connectivity. They only provide virtual access.
> >>
> >> Device drivers that know they should never become part of the
> >> root bridge have been using a trick of setting their MAC address
> >> to a high broadcast MAC address such as FE:FF:FF:FF:FF:FF. Instead
> >> of using these hacks, let the interface annotate its intent and
> >> generalize a solution for multiple drivers, while letting the
> >> drivers use a random MAC address or one prefixed with a proper OUI.
> >> This sort of hack is used by both qemu and xen for their backend
> >> interfaces.
> >>
> >> Cc: Stephen Hemminger <stephen@networkplumber.org>
> >> Cc: bridge@lists.linux-foundation.org
> >> Cc: netdev@vger.kernel.org
> >> Cc: linux-kernel@vger.kernel.org
> >> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
> >
> > This is already supported in a more standard way via the root
> > block flag.
> 
> Great! For documentation purposes the root_block flag is a sysfs
> attribute, added via 3.8 through commit 1007dd1a. The respective
> interface flag is IFLA_BRPORT_PROTECT and can be set via the iproute2
> bridge utility or through sysfs:
> 
> mcgrof@garbanzo ~/linux (git::master)$ find /sys/ -name root_block
> /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/net/eth0/brport/root_block
> /sys/devices/vif-3-0/net/vif3.0/brport/root_block
> /sys/devices/virtual/net/vif3.0-emu/brport/root_block
> 
> mcgrof@garbanzo ~/devel/iproute2 (git::master)$ cat
> /sys/devices/vif-3-0/net/vif3.0/brport/root_block
> 0
> mcgrof@garbanzo ~/devel/iproute2 (git::master)$ sudo bridge link set
> dev vif3.0 root_block on
> mcgrof@garbanzo ~/devel/iproute2 (git::master)$ cat
> /sys/devices/vif-3-0/net/vif3.0/brport/root_block
> 1
> 
> So if we want to avoid the MAC address hack for skipping a root port,
> userspace would need to be updated to simply set this attribute after
> adding the device to the bridge. Based on Zoltan's feedback there seem
> to be use cases for not enabling this unconditionally on all
> xen-netback interfaces, though, so we can just punt this to userspace
> for the topologies that require it.
> 
> The original motivation for this series was to avoid the IPv6
> duplicate address incurred by the MAC address hack for avoiding the
> root bridge. Given that Zoltan also noted a use case whereby IPv4 and
> IPv6 addresses can be assigned to the backend interfaces, we should be
> able to avoid the duplicate address situation for IPv6 by using a
> proper random MAC address *once* userspace has also been updated to
> use IFLA_BRPORT_PROTECT. New userspace cannot and need not set this
> flag on older kernels (older than 3.8), as root_block is not
> implemented there and the MAC address hack would still be used. This
> strategy, however, does require new kernels to run new userspace, as
> otherwise the MAC address workaround would not be in place and
> root_block would not take effect.

Can't we arrange things in the Xen hotplug scripts such that, if the
root_block attribute isn't available or doesn't work, we fall back to
the existing fe:ff:ff:ff:ff:ff usage?

That would avoid the forward/backward compatibility concerns, I think.
It wouldn't solve the issue you are targeting on old systems, but it
also doesn't regress them any further.
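A minimal sketch of that fallback, as it might look in a hotplug script
(the function name and the parameterised sysfs root are inventions of
this sketch, not taken from the real Xen scripts):

```shell
# Hypothetical hotplug-script fallback: prefer the root_block sysfs
# attribute where the kernel (>= 3.8) exposes it, and only keep the
# magic-MAC trick on older kernels.
set_backend_port() {
    vif="$1"
    sysfs="${2:-/sys}"          # sysfs root, parameterised for testing
    rb="$sysfs/class/net/$vif/brport/root_block"
    if [ -w "$rb" ]; then
        # New kernel: block this port from becoming root directly.
        echo 1 > "$rb"
    else
        # Old kernel: fall back to the existing MAC address workaround.
        ip link set dev "$vif" address fe:ff:ff:ff:ff:ff
    fi
}
```

Run after the vif is added to the bridge, e.g. `set_backend_port vif3.0`.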

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:53:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:53:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3qX-00011X-J6; Wed, 19 Feb 2014 09:53:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <frediano.ziglio@citrix.com>) id 1WG3qV-00011I-P2
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 09:53:40 +0000
Received: from [193.109.254.147:65046] by server-5.bemta-14.messagelabs.com id
	64/A7-16688-22F74035; Wed, 19 Feb 2014 09:53:38 +0000
X-Env-Sender: frediano.ziglio@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392803616!5358980!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19500 invoked from network); 19 Feb 2014 09:53:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:53:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102122131"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 09:53:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:53:36 -0500
Received: from [10.80.3.57]	by ukmail1.uk.xensource.com with esmtp (Exim 4.69)
	(envelope-from <frediano.ziglio@citrix.com>)	id 1WG3qQ-00039i-Uk;
	Wed, 19 Feb 2014 09:53:34 +0000
Message-ID: <1392803609.8843.3.camel@hamster.uk.xensource.com>
From: Frediano Ziglio <frediano.ziglio@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Wed, 19 Feb 2014 09:53:29 +0000
In-Reply-To: <CAFLBxZY4KfTvpUBVF9-9Xu9dyjirCWBtppZVHMofEyNk-h_c1w@mail.gmail.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>
	<1390411039.32296.8.camel@hamster.uk.xensource.com>
	<CAFLBxZY4KfTvpUBVF9-9Xu9dyjirCWBtppZVHMofEyNk-h_c1w@mail.gmail.com>
X-Mailer: Evolution 3.8.4-0ubuntu1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
 mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 12:47 +0000, George Dunlap wrote:
> On Wed, Jan 22, 2014 at 5:17 PM, Frediano Ziglio
> <frediano.ziglio@citrix.com> wrote:
> >
> > These lines (in mctelem_reserve)
> >
> >         newhead = oldhead->mcte_next;
> >         if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
> >
> > are racy. After you read the newhead pointer it can happen that another
> > flow (thread or recursive invocation) changes the whole list but sets
> > the head back to the same value. So oldhead is the same as *freelp, but
> > you are installing a new head that could point to any element (even one
> > already in use).
> >
> > This patch instead uses a bit array and atomic bit operations.
> >
> > Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
> 
> What is this like from a release perspective?  When is this code run,
> and how often is the bug triggered?
> 
>  -George

The code handles MCE situations, so if your hardware is good it is not
a big deal. If your hardware starts to have problems, then in some
situations the CPU can raise an MCE quite often, causing the race to
happen.

I think the probability is not that high. The fix was carefully tested
(not that easy to do, even now) and solves a real bug.

Frediano



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 09:54:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 09:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG3rR-00017V-22; Wed, 19 Feb 2014 09:54:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG3rP-00017B-Fj
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 09:54:35 +0000
Received: from [193.109.254.147:5788] by server-16.bemta-14.messagelabs.com id
	53/BD-21945-A5F74035; Wed, 19 Feb 2014 09:54:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392803672!1663072!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16133 invoked from network); 19 Feb 2014 09:54:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 09:54:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103834096"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 09:54:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 04:54:31 -0500
Message-ID: <1392803670.23084.100.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: David Vrabel <david.vrabel@citrix.com>
Date: Wed, 19 Feb 2014 09:54:30 +0000
In-Reply-To: <5303AA97.3010202@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
	<1392745235.23084.60.camel@kazak.uk.xensource.com>
	<5303AA97.3010202@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 18:46 +0000, David Vrabel wrote:
> On 18/02/14 17:40, Ian Campbell wrote:
> > On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> >> 
> >> @@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> >>  	vif->pending_prod = MAX_PENDING_REQS;
> >>  	for (i = 0; i < MAX_PENDING_REQS; i++)
> >>  		vif->pending_ring[i] = i;
> >> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> >> -		vif->mmap_pages[i] = NULL;
> >> +	spin_lock_init(&vif->dealloc_lock);
> >> +	spin_lock_init(&vif->response_lock);
> >> +	/* If ballooning is disabled, this will consume real memory, so you
> >> +	 * better enable it.
> > 
> > Almost no one who would be affected by this is going to read this
> > comment. And it doesn't just require enabling ballooning, but actually
> > booting with some maxmem "slack" to leave space.
> > 
> > Classic-xen kernels used to add 8M of slop to the physical address space
> > to leave a suitable pool for exactly this sort of thing. I never liked
> > that but perhaps it should be reconsidered (or at least raised as a
> > possibility with the core-Xen Linux guys).
> 
> I plan to fix the balloon memory hotplug stuff to do the right thing

Which is for alloc_xenballoon_pages to hotplug a new empty region,
rather than inflating the balloon if it doesn't have enough pages to
satisfy the allocation? Or something else?

> (it's almost there -- it just tries to overlap the new memory with
> existing stuff).
> 
> David



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:05:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG423-0001V8-FI; Wed, 19 Feb 2014 10:05:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG421-0001V3-6Z
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 10:05:33 +0000
Received: from [85.158.143.35:31035] by server-2.bemta-4.messagelabs.com id
	F5/85-10891-CE184035; Wed, 19 Feb 2014 10:05:32 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392804330!6740006!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10882 invoked from network); 19 Feb 2014 10:05:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:05:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103836319"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 10:05:21 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:05:20 -0500
Message-ID: <1392804319.23084.109.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Wed, 19 Feb 2014 10:05:19 +0000
In-Reply-To: <5303C44D.4070500@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392743214.23084.38.camel@kazak.uk.xensource.com>
	<5303C44D.4070500@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 20:36 +0000, Zoltan Kiss wrote:
> On 18/02/14 17:06, Ian Campbell wrote:
> > On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> >> This patch contains the new definitions necessary for grant mapping.
> >
> > Is this just adding a bunch of (currently) unused functions? That's a
> > slightly odd way to structure a series. They don't seem to be "generic
> > helpers" or anything so it would be more normal to introduce these as
> > they get used -- it's a bit hard to review them out of context.
> I've created two patches because they are quite huge even now, 
> separately. Together they would be a ~500 line change. That was the best 
> I could figure out keeping in mind that bisect should work. But as I 
> wrote in the first email, I welcome other suggestions. If you and Wei 
> prefer these two patches in one big one, I'll merge them in the next version.

I suppose it is hard to split a change like this up in a sensible way,
but it is also rather hard to review something which has been split into
two parts like this.

Is the combined patch too large to fit on the lists?

> >> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> >> index 7669d49..f0f0c3d 100644
> >> --- a/drivers/net/xen-netback/interface.c
> >> +++ b/drivers/net/xen-netback/interface.c
> >> @@ -38,6 +38,7 @@
> >>
> >>   #include <xen/events.h>
> >>   #include <asm/xen/hypercall.h>
> >> +#include <xen/balloon.h>
> >
> > What is this for?
> For alloc/free_xenballooned_pages

I think I was confused because those changes aren't in this patch.

> >
> >> +					  struct xenvif,
> >> +					  pending_tx_info[0]);
> >> +
> >> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
> >> +	do {
> >> +		pending_idx = ubuf->desc;
> >> +		ubuf = (struct ubuf_info *) ubuf->ctx;
> >> +		index = pending_index(vif->dealloc_prod);
> >> +		vif->dealloc_ring[index] = pending_idx;
> >> +		/* Sync with xenvif_tx_dealloc_action:
> >> +		 * insert idx then incr producer.
> >> +		 */
> >> +		smp_wmb();
> >
> > Is this really needed given that there is a lock held?
> Yes, as the comment right above explains.

My question is why you need this sync if you are holding a lock; the
comment doesn't tell me that. I suppose xenvif_tx_dealloc_action doesn't
hold the dealloc_lock, but that is non-obvious from the names.

I think I asked in a subsequent patch for an improved description of the
locking going on here.
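
The ordering in question can be illustrated with a small userspace
sketch (hypothetical names; the kernel's smp_wmb() is approximated by a
C11 release fence, and this is not the driver code itself). The point is
that dealloc_lock only serializes producers against each other; the
consumer reads dealloc_prod without taking the lock, so the slot store
must be ordered before the index update:

```c
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 256                 /* power of two, like MAX_PENDING_REQS */
#define ring_index(i) ((i) & (RING_SIZE - 1))

static uint16_t dealloc_ring[RING_SIZE];
static _Atomic uint32_t dealloc_prod; /* read lock-free by the consumer */

/* Producer side (analogue of the zerocopy callback; the caller holds a
 * lock that serializes producers against each other).  Store the slot
 * first, then publish it by bumping the producer index.  The release
 * fence plays the role of smp_wmb(): without it the lock-free consumer
 * could observe the new index before the new slot contents. */
void enqueue_idx(uint16_t pending_idx)
{
	uint32_t prod = atomic_load_explicit(&dealloc_prod,
					     memory_order_relaxed);
	dealloc_ring[ring_index(prod)] = pending_idx;
	atomic_thread_fence(memory_order_release);   /* ~ smp_wmb() */
	atomic_store_explicit(&dealloc_prod, prod + 1,
			      memory_order_relaxed);
}
```

The lock therefore answers the producer/producer race, while the fence
answers the producer/consumer one; both are needed.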

>  This actually comes from 
> classic kernel's netif_idx_release
> >
> > Or what is dealloc_lock protecting against?
> The callbacks from each other. So it is checked only in this function.
> >
> >> +		vif->dealloc_prod++;
> >
> > What happens if the dealloc ring becomes full, will this wrap and cause
> > havoc?
> Nope, if the dealloc ring is full, the value of the last increment won't 
> be used to index the dealloc ring again until some space is made available. 

I don't follow -- what makes this the case?

> Of course if something broke and we have more pending slots than tx ring 
> or dealloc slots then it can happen. Do you suggest a 
> BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= MAX_PENDING_REQS)?

A
         BUG_ON(space in dealloc ring < number of slots needed to dealloc this skb)
would seem to be the right thing, if that really is the invariant the
code is supposed to be implementing.
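
If such a check were added, the free-space computation is a one-liner.
A hedged sketch with illustrative names (not the patch's actual code),
relying on unsigned subtraction to handle index wrap-around:

```c
#include <stdint.h>

#define MAX_PENDING_REQS 256

/* Free space left in the dealloc ring: ring size minus the number of
 * outstanding entries (prod - cons).  The unsigned subtraction gives
 * the right answer even after the 32-bit indices wrap. */
static inline uint32_t dealloc_ring_space(uint32_t prod, uint32_t cons)
{
	return MAX_PENDING_REQS - (prod - cons);
}
```

The suggested assertion would then read roughly
BUG_ON(dealloc_ring_space(vif->dealloc_prod, vif->dealloc_cons) < nr_slots)
for the number of slots this skb needs.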

> >> +	} while (ubuf);
> >> +	wake_up(&vif->dealloc_wq);
> >> +	spin_unlock_irqrestore(&vif->dealloc_lock, flags);
> >> +}
> >> +
> >> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
> >> +{
> >> +	struct gnttab_unmap_grant_ref *gop;
> >> +	pending_ring_idx_t dc, dp;
> >> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
> >> +	unsigned int i = 0;
> >> +
> >> +	dc = vif->dealloc_cons;
> >> +	gop = vif->tx_unmap_ops;
> >> +
> >> +	/* Free up any grants we have finished using */
> >> +	do {
> >> +		dp = vif->dealloc_prod;
> >> +
> >> +		/* Ensure we see all indices enqueued by all
> >> +		 * xenvif_zerocopy_callback().
> >> +		 */
> >> +		smp_rmb();
> >> +
> >> +		while (dc != dp) {
> >> +			pending_idx =
> >> +				vif->dealloc_ring[pending_index(dc++)];
> >> +
> >> +			/* Already unmapped? */
> >> +			if (vif->grant_tx_handle[pending_idx] ==
> >> +				NETBACK_INVALID_HANDLE) {
> >> +				netdev_err(vif->dev,
> >> +					   "Trying to unmap invalid handle! "
> >> +					   "pending_idx: %x\n", pending_idx);
> >> +				BUG();
> >> +			}
> >> +
> >> +			pending_idx_release[gop-vif->tx_unmap_ops] =
> >> +				pending_idx;
> >> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
> >> +				vif->mmap_pages[pending_idx];
> >> +			gnttab_set_unmap_op(gop,
> >> +					    idx_to_kaddr(vif, pending_idx),
> >> +					    GNTMAP_host_map,
> >> +					    vif->grant_tx_handle[pending_idx]);
> >> +			vif->grant_tx_handle[pending_idx] =
> >> +				NETBACK_INVALID_HANDLE;
> >> +			++gop;
> >
> > Can we run out of space in the gop array?
> No, unless the same thing happens as in my previous answer. BUG_ON() here 
> as well?

Yes, or at the very least a comment explaining how/why gop is bounded
elsewhere.

> >
> >> +		}
> >> +
> >> +	} while (dp != vif->dealloc_prod);
> >> +
> >> +	vif->dealloc_cons = dc;
> >
> > No barrier here?
> dealloc_cons is only used in the dealloc_thread. dealloc_prod is used by 
> the callback and the thread as well, that's why we need the mb() above. 
> Btw. this function comes from classic's net_tx_action_dealloc

Is this code close enough to that code architecturally that you can
infer correctness from it, though?

So long as you have considered the barrier semantics in the context of
the current code and you think it is correct to not have one here then
I'm ok. But if you have just assumed it is OK because some older code
didn't have it then I'll have to ask you to consider it again...
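
The consumer-side pairing under discussion can be sketched the same way
(an illustrative userspace analogue, not the driver code; an acquire
fence stands in for smp_rmb()). The consumer snapshots the producer
index, issues the fence, and consumes up to that index; the consumer
index needs no barrier of its own if only this thread ever writes it:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 256
#define ring_index(i) ((i) & (RING_SIZE - 1))

static uint16_t dealloc_ring[RING_SIZE];
static _Atomic uint32_t dealloc_prod;  /* published by the callback side */
static uint32_t dealloc_cons;          /* only this thread writes it */

/* Consumer side (analogue of xenvif_tx_dealloc_action): snapshot the
 * producer index, then the acquire fence (~ smp_rmb()) ensures the slot
 * contents stored before that index were published are visible here.
 * Copies up to 'max' pending indices into 'out', returns the count. */
size_t drain_ring(uint16_t *out, size_t max)
{
	size_t n = 0;
	uint32_t dp = atomic_load_explicit(&dealloc_prod,
					   memory_order_relaxed);

	atomic_thread_fence(memory_order_acquire);   /* ~ smp_rmb() */
	while (dealloc_cons != dp && n < max)
		out[n++] = dealloc_ring[ring_index(dealloc_cons++)];
	return n;
}
```

Under this reading, no barrier is needed when storing dealloc_cons back
precisely because no other thread reads it for ordering purposes, which
is the claim Ian is asking to have justified against the current code
rather than the classic-xen ancestor.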

> >> +				netdev_err(vif->dev,
> >> +					   " host_addr: %llx handle: %x status: %d\n",
> >> +					   gop[i].host_addr,
> >> +					   gop[i].handle,
> >> +					   gop[i].status);
> >> +			}
> >> +			BUG();
> >> +		}
> >> +	}
> >> +
> >> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
> >> +		xenvif_idx_release(vif, pending_idx_release[i],
> >> +				   XEN_NETIF_RSP_OKAY);
> >> +}
> >> +
> >> +
> >>   /* Called after netfront has transmitted */
> >>   int xenvif_tx_action(struct xenvif *vif, int budget)
> >>   {
> >> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> >>   	vif->mmap_pages[pending_idx] = NULL;
> >>   }
> >>
> >> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> >
> > This is a single shot version of the batched xenvif_tx_dealloc_action
> > version? Why not just enqueue the idx to be unmapped later?
> This is called only from the NAPI instance. Using the dealloc ring 
> require synchronization with the callback which can increase lock 
> contention. On the other hand, if the guest sends small packets 
> (<PAGE_SIZE), the TLB flushing can cause performance penalty.

Right. When/How often is this called from the NAPI instance?

Is the locking contention from this case so severe that it outweighs
the benefits of batching the unmaps? That would surprise me. After all
the locking contention is there for the zerocopy_callback case too.

>  The above-mentioned upcoming patch which gntcopies the header can 
> prevent that 

So this is only called when doing the pull-up to the linear area?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:09:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG45w-0001gN-5n; Wed, 19 Feb 2014 10:09:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG45u-0001gH-9N
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 10:09:34 +0000
Received: from [193.109.254.147:9674] by server-12.bemta-14.messagelabs.com id
	03/20-17220-DD284035; Wed, 19 Feb 2014 10:09:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392804571!5327482!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3989 invoked from network); 19 Feb 2014 10:09:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:09:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102125588"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 10:09:31 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:09:30 -0500
Message-ID: <1392804568.23084.112.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rusty Russell <rusty@au1.ibm.com>
Date: Wed, 19 Feb 2014 10:09:28 +0000
In-Reply-To: <87vbwcaqxe.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Daniel Kiper <daniel.kiper@oracle.com>,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
> For platforms using EPT, I don't think you want anything but guest
> addresses, do you?

No, the arguments for preventing unfettered access by backends to
frontend RAM apply to EPT as well.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:12:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG489-0001no-QN; Wed, 19 Feb 2014 10:11:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG488-0001nh-RN
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 10:11:52 +0000
Received: from [85.158.137.68:61794] by server-15.bemta-3.messagelabs.com id
	EC/A2-19263-86384035; Wed, 19 Feb 2014 10:11:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392804709!1295525!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28626 invoked from network); 19 Feb 2014 10:11:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:11:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103837530"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 10:11:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:11:48 -0500
Message-ID: <1392804706.23084.113.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rusty Russell <rusty@au1.ibm.com>
Date: Wed, 19 Feb 2014 10:11:46 +0000
In-Reply-To: <87vbwcaqxe.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Daniel Kiper <daniel.kiper@oracle.com>,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>         Sorry for the delayed response, I was pondering...  CC changed
> to virtio-dev.

Which apparently is subscribers only + discard as opposed to moderate,
so my previous post won't show up there.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>         Sorry for the delayed response, I was pondering...  CC changed
> to virtio-dev.

Which is apparently subscribers-only + discard (as opposed to moderate),
so my previous post won't show up there.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:36:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:36:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4V7-00021e-0w; Wed, 19 Feb 2014 10:35:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WG4V5-00021Z-Cu
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 10:35:35 +0000
Received: from [85.158.143.35:35131] by server-2.bemta-4.messagelabs.com id
	20/08-10891-6F884035; Wed, 19 Feb 2014 10:35:34 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392806132!6771243!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22213 invoked from network); 19 Feb 2014 10:35:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:35:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102131111"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 10:35:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:35:31 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WG4V0-0003kk-78;
	Wed, 19 Feb 2014 10:35:30 +0000
Message-ID: <530488EE.7050204@eu.citrix.com>
Date: Wed, 19 Feb 2014 10:35:26 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Frediano Ziglio <frediano.ziglio@citrix.com>
References: <1390387834.32296.1.camel@hamster.uk.xensource.com>	
	<52DFC5BC0200007800115C92@nat28.tlf.novell.com>	
	<1390411039.32296.8.camel@hamster.uk.xensource.com>	
	<CAFLBxZY4KfTvpUBVF9-9Xu9dyjirCWBtppZVHMofEyNk-h_c1w@mail.gmail.com>
	<1392803609.8843.3.camel@hamster.uk.xensource.com>
In-Reply-To: <1392803609.8843.3.camel@hamster.uk.xensource.com>
X-DLP: MIA1
Cc: Liu Jinsong <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] MCE: Fix race condition in
	mctelem_reserve
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 09:53 AM, Frediano Ziglio wrote:
> On Tue, 2014-02-18 at 12:47 +0000, George Dunlap wrote:
>> On Wed, Jan 22, 2014 at 5:17 PM, Frediano Ziglio
>> <frediano.ziglio@citrix.com> wrote:
>>> These lines (in mctelem_reserve)
>>>
>>>          newhead = oldhead->mcte_next;
>>>          if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
>>>
>>> are racy. After you read the newhead pointer it can happen that another
>>> flow (thread or recursive invocation) changes the whole list but leaves
>>> the head with the same value. So oldhead is the same as *freelp, but you
>>> are setting a new head that could point to any element (even one already
>>> in use).
>>>
>>> This patch uses a bit array and atomic bit operations instead.
>>>
>>> Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
>> What is this like from a release perspective?  When is this code run,
>> and how often is the bug triggered?
>>
>>   -George
> The code handles the MCE path, so if your hardware is good it is not a
> big deal. If your hardware starts to have problems, in some situations
> the CPU may raise an MCE quite often, causing the race to happen.
>
> I think the probability is not that high. The fix was carefully tested
> (not that easy to do, even now) and solves a real bug.

OK thanks -- at this point then, I think I'd just as soon hold this off 
until 4.4.1, unless we get some other blocking bugs, just so that we can 
minimize the changes.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
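[The race described in the patch above is an instance of the classic ABA problem: a compare-and-swap on the free-list head succeeds even though the list behind the head changed. A minimal, hypothetical C11 sketch of the bit-array approach the patch takes — the names MC_NENT, entry_used, reserve_entry and release_entry are illustrative stand-ins, not the actual mctelem code:]

```c
/* Hypothetical sketch of the bit-array reservation scheme.  Instead of
 * popping from a lock-free linked list (where the head pointer can be
 * reused and the CAS succeeds on stale state -- the ABA problem), each
 * entry is claimed with an atomic exchange on its own flag, so there is
 * no pointer chain that can go stale. */
#include <stdatomic.h>

#define MC_NENT 4                       /* illustrative pool size */

static atomic_int entry_used[MC_NENT];  /* zero-initialized: all free */

/* Claim a free entry; returns its index, or -1 if the pool is full. */
static int reserve_entry(void)
{
    for (int i = 0; i < MC_NENT; i++)
        /* atomic_exchange returns the previous value: 0 means we won
         * the claim, 1 means somebody else already holds this entry. */
        if (atomic_exchange(&entry_used[i], 1) == 0)
            return i;
    return -1;
}

/* Return an entry to the pool. */
static void release_entry(int i)
{
    atomic_store(&entry_used[i], 0);
}
```

[A concurrent flow that releases and re-reserves entries can no longer corrupt the structure, because each claim is a single atomic operation on an independent flag rather than a two-step read-then-CAS on shared list state.]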

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:37:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4Wp-00026O-In; Wed, 19 Feb 2014 10:37:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG4Wo-00026E-4V
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 10:37:22 +0000
Received: from [85.158.143.35:54547] by server-2.bemta-4.messagelabs.com id
	54/BB-10891-16984035; Wed, 19 Feb 2014 10:37:21 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392806239!6762595!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27634 invoked from network); 19 Feb 2014 10:37:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:37:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="102131453"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 10:37:18 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:37:18 -0500
Message-ID: <1392806237.23084.124.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 19 Feb 2014 10:37:17 +0000
In-Reply-To: <53048136020000780011D8FF@nat28.tlf.novell.com>
References: <osstest-25126-mainreport@xen.org>
	<53048136020000780011D8FF@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, ian.jackson@eu.citrix.com
Subject: Re: [Xen-devel] [xen-4.2-testing test] 25126: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 09:02 +0000, Jan Beulich wrote:
> >>> On 19.02.14 at 05:16, xen.org <ian.jackson@eu.citrix.com> wrote:
> > flight 25126 xen-4.2-testing real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/25126/ 
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
> >  test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
> >  test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
> 
> These have been pretty persistent over the last couple of days,

Looks to me to be specific to xen-4.2-testing as well?

Previous successful run was against
640b31535ab8fe07911d0b90ae4adbe6078026c9

This is 6d83fd0a975731653efb0c470a4eeed6a44b99cd, the only thing in the
middle is "update Xen version to 4.2.4" which hardly seems likely to
have caused this.

> and this
> 
> Feb 18 13:36:04.556216   File "/usr/sbin/xend", line 36, in <module>
> Feb 18 13:36:04.564155     from xen.xend.server import SrvDaemon
> 
> looks very much like something broken in the infrastructure or the
> way how things get built.

I can see usr/lib/python2.7/site-packages/xen/xm and xend etc in the
dist.tar.gz built by the build job.

I wonder if we need to apply:
http://wiki.xen.org/wiki/Compiling_Xen_From_Source#Python_Prefix_and_Module_Layout

That might explain why only 4.2 (and I suppose earlier, if we were to
run such tests) are affected.

Ian, ts-xen-build has:

               ($ho->{Suite} =~ m/squeeze/ ? <<END : '').
	echo >>.config PYTHON_PREFIX_ARG=
END

I suspect this needs to actually be based upon the Xen version being
tested, not the host distro (or not only the host distro).

Newer versions of Xen (>=4.3) install to /usr/local by default and avoid
this issue, older versions of Xen need it on Wheezy as much as Squeeze. 

I don't think this workaround would be harmful on Wheezy + Xen 4.3, but it
would be better to make it more targeted. That would imply a runvar set by
make-flight, I think. What do you think?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:41:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4az-0002Jw-5d; Wed, 19 Feb 2014 10:41:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WG4aw-0002JW-O8; Wed, 19 Feb 2014 10:41:38 +0000
Received: from [193.109.254.147:43137] by server-4.bemta-14.messagelabs.com id
	F6/A3-32066-16A84035; Wed, 19 Feb 2014 10:41:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392806488!5338100!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24193 invoked from network); 19 Feb 2014 10:41:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:41:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103843415"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 10:41:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:41:27 -0500
Message-ID: <1392806485.23084.125.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <lars.kurth@xen.org>
Date: Wed, 19 Feb 2014 10:41:25 +0000
In-Reply-To: <5303B34E.5000702@xen.org>
References: <5303B34E.5000702@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Tim Mackey <Timothy.Mackey@citrix.com>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>, xen-users@lists.xenproject.org,
	Russell Pavlicek <russell.pavlicek@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] [Vote] Proposal: Moving XCP binaries to
 XenServer.org
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 19:23 +0000, Lars Kurth wrote:
> Hi all,
> 
> I wanted to propose to move the legacy XCP binaries from
> XenProject.org to XenServer.org. With XenServer being fully open
> source and XCP basically being a variant of XenServer, it would make a
> lot more sense to keep all these binaries with XenServer.org. The fact
> that we have XCP and XenServer.org in two different places has led to:
> 
> * fragmentation of the XCP user community 
> * it is also a constant source of confusion in the user community
> 
> In a nutshell many people don't know whether they should go to
> XenServer.org to ask XCP related questions or whether to ask them on
> XenProject.org. As a result many questions remain unanswered. Russell
> and I spend a lot of our time pointing people to the right place
> and/or cross-posting.

As do I, from xen-users@ -> xen-api@xenproject/xs-devel@xenserver.

>  I was hoping things would get better over time, but they have not
> improved.
> 
> When the Xen Project was created, there was no real alternative but to
> keep XCP as part of the Xen Project. With XenServer being fully open
> source, and being established, there is no reason why we can't clean
> up some of the confusion. In my opinion we really should do this.
> 
> This proposal does *not* affect the XAPI project: the XAPI project
> would continue to develop the XAPI toolstack as part of the Xen
> Project (and deliver source "releases"). In fact, I would also propose
> to make the xapi mailing list a developer mailing list. This fits much
> better with how the Hypervisor and MirageOS projects are run and
> creates an overall cleaner and easier to understand model for the Xen
> Project. 

If xen-api@ becomes a devel focused list then what would be the
appropriate place to redirect user requests to? xs-devel@xenserver.org?
What about for xapi users who are not based on xenserver?

> == Who and how to vote? ==
> 
> As this is not an entirely project local decision, I propose that
> according to http://xenproject.org/governance.html
> - Members of all developer mailing lists (including the user lists) on
> Xenproject.org can review the proposal and voice an opinion
> - Maintainers of all mature projects and the Xenproject.org community
> manager are allowed to vote: these are maintainers of xen-devel and
> xen-api
> 
> You would vote by replying "+1"

+1 

> If you don't care vote "0"
> If you object, vote "-1", which must include an alternative proposal
> or a detailed explanation of the reasons for the negative vote.
> 
> Please vote by Feb 25th 
> 
> Best Regards
> Lars
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:41:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4az-0002Jw-5d; Wed, 19 Feb 2014 10:41:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>)
	id 1WG4aw-0002JW-O8; Wed, 19 Feb 2014 10:41:38 +0000
Received: from [193.109.254.147:43137] by server-4.bemta-14.messagelabs.com id
	F6/A3-32066-16A84035; Wed, 19 Feb 2014 10:41:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392806488!5338100!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24193 invoked from network); 19 Feb 2014 10:41:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:41:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103843415"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 10:41:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:41:27 -0500
Message-ID: <1392806485.23084.125.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <lars.kurth@xen.org>
Date: Wed, 19 Feb 2014 10:41:25 +0000
In-Reply-To: <5303B34E.5000702@xen.org>
References: <5303B34E.5000702@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Tim Mackey <Timothy.Mackey@citrix.com>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>, xen-users@lists.xenproject.org,
	Russell Pavlicek <russell.pavlicek@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] [Vote] Proposal: Moving XCP binaries to
 XenServer.org
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 19:23 +0000, Lars Kurth wrote:
> Hi all,
> 
> I wanted to propose to move the legacy XCP binaries from
> XenProject.org to XenServer.org. With XenServer being fully open
> source and XCP basically being a variant of XenServer, it would make a
> lot more sense to keep all these binaries with XenServer.org. The fact
> that we have XCP and XenServer.org in two different places has led to:
> 
> * fragmentation of the XCP user community 
> * it is also a constant source of confusion in the user community
> 
> In a nutshell many people don't know whether they should go to
> XenServer.org to ask XCP related questions or whether to ask them on
> XenProject.org. As a result many questions remain unanswered. Russell
> and I spend a lot of our time pointing people to the right place
> and/or cross-posting.

As do I, from xen-users@ -> xen-api@xenproject/xs-devel@xenserver.

>  I was hoping things would get better over time, but they have not
> improved.
> 
> When the Xen Project was created, there was no real alternative but to
> keep XCP as part of the Xen Project. With XenServer being fully open
> source, and being established, there is no reason why we can't clean
> up some of the confusion. In my opinion we really should do this.
> 
> This proposal does *not* affect the XAPI project: the XAPI project
> would continue to develop the XAPI toolstack as part of the Xen
> Project (and deliver source "releases"). In fact, I would also propose
> to make the xapi mailing list a developer mailing list. This fits much
> better with how the Hypervisor and MirageOS projects are run and
> creates an overall cleaner and easier to understand model for the Xen
> Project. 

If xen-api@ becomes a devel focused list then what would be the
appropriate place to redirect user requests to? xs-devel@xenserver.org?
What about for xapi users who are not based on xenserver?

> == Who and how to vote? ==
> 
> As this is not an entirely project local decision, I propose that
> according to http://xenproject.org/governance.html
> - Members of all developer mailing lists (including the user lists) on
> Xenproject.org can review the proposal and voice an opinion
> - Maintainers of all mature projects and the Xenproject.org community
> manager are allowed to vote: these are maintainers of xen-devel and
> xen-api
> 
> You would vote by replying "+1"

+1 

> If you don't care vote "0"
> If you object, vote "-1", which must include an alternative proposal
> or a detailed explanation of the reasons for the negative vote.
> 
> Please vote by Feb 25th 
> 
> Best Regards
> Lars
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 10:59:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 10:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4sH-0002rc-Uj; Wed, 19 Feb 2014 10:59:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG4sG-0002rX-B9
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 10:59:32 +0000
Received: from [85.158.143.35:54383] by server-1.bemta-4.messagelabs.com id
	37/5A-31661-39E84035; Wed, 19 Feb 2014 10:59:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392807569!6763055!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23755 invoked from network); 19 Feb 2014 10:59:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 10:59:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,504,1389744000"; d="scan'208";a="103846279"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 10:59:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 05:59:28 -0500
Message-ID: <1392807567.23084.127.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 19 Feb 2014 10:59:27 +0000
In-Reply-To: <1392806237.23084.124.camel@kazak.uk.xensource.com>
References: <osstest-25126-mainreport@xen.org>
	<53048136020000780011D8FF@nat28.tlf.novell.com>
	<1392806237.23084.124.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, ian.jackson@eu.citrix.com
Subject: [Xen-devel] [PATCH OSSTEST] Apply PYTHON_PREFIX_ARG workaround to
 Xen 4.2 and earlier(Was: Re: [xen-4.2-testing test] 25126: regressions -
 FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 10:37 +0000, Ian Campbell wrote:
> I wonder if we need to apply:
> http://wiki.xen.org/wiki/Compiling_Xen_From_Source#Python_Prefix_and_Module_Layout

Like this, tested only the make-flight bit and ran perl -c on
ts-xen-build to check syntax.

8<-------------------

>From 58f45b8855c35297958fef0f0213878856f9b3bf Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Wed, 19 Feb 2014 10:54:54 +0000
Subject: [PATCH] Apply PYTHON_PREFIX_ARG workaround to Xen 4.2 and earlier

...rather than applying only if the host is Squeeze, since this also happens on
Wheezy. Xen 4.3 onwards are not affected because they install to /usr/local
instead of /usr which avoids this particular issue.

The workaround is described in
http://wiki.xen.org/wiki/Compiling_Xen_From_Source#Python_Prefix_and_Module_Layout

We set PYTHON_PREFIX_ARG='' since that is what we did on Squeeze although that
is a specific quirk of Ubuntu and we could use --install-layout=deb.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 mfi-common   | 14 ++++++++++++--
 ts-xen-build |  4 ++--
 2 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/mfi-common b/mfi-common
index 8f56092..f443783 100644
--- a/mfi-common
+++ b/mfi-common
@@ -120,6 +120,16 @@ create_build_jobs () {
     *) enable_ovmf=true;
     esac
 
+    # http://wiki.xen.org/wiki/Compiling_Xen_From_Source#Python_Prefix_and_Module_Layout
+    # applies to Xen 4.2 and earlier.
+    case "$xenbranch" in
+    xen-3.*-testing) python_runvars=python_prefix_arg=;;
+    xen-4.0-testing) python_runvars=python_prefix_arg=;;
+    xen-4.1-testing) python_runvars=python_prefix_arg=;;
+    xen-4.2-testing) python_runvars=python_prefix_arg=;;
+    *)               python_runvars=;;
+    esac
+
     eval "
         arch_runvars=\"\$ARCH_RUNVARS_$arch\"
     "
@@ -132,7 +142,7 @@ create_build_jobs () {
         tree_qemuu=$TREE_QEMU_UPSTREAM                                       \
         tree_xen=$TREE_XEN                                                   \
                 $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-                $suite_runvars                                               \
+                $suite_runvars $python_runvars                               \
                 host_hostflags=$build_hostflags                              \
                 revision_xen=$REVISION_XEN                                   \
                 revision_qemu=$REVISION_QEMU                                 \
@@ -145,7 +155,7 @@ create_build_jobs () {
         tree_qemuu=$TREE_QEMU_UPSTREAM                                       \
         tree_xen=$TREE_XEN                                                   \
                 $RUNVARS $BUILD_RUNVARS $BUILD_XEN_RUNVARS $arch_runvars     \
-                $suite_runvars                                               \
+                $suite_runvars $python_runvars                               \
                 host_hostflags=$build_hostflags                              \
                 revision_xen=$REVISION_XEN                                   \
                 revision_qemu=$REVISION_QEMU                                 \
diff --git a/ts-xen-build b/ts-xen-build
index 74d17f0..d228fb7 100755
--- a/ts-xen-build
+++ b/ts-xen-build
@@ -88,8 +88,8 @@ END
                (nonempty($r{revision_linux}) ? <<END : '').
 	echo >>.config export $linux_rev_envvar='$r{revision_linux}'
 END
-               ($ho->{Suite} =~ m/squeeze/ ? <<END : '').
-	echo >>.config PYTHON_PREFIX_ARG=
+               (defined($r{python_prefix_arg}) ? <<END : '').
+	echo >>.config PYTHON_PREFIX_ARG=$r{python_prefix_arg}
 END
                (nonempty($kerns) ? <<END : <<END)
 	echo >>.config KERNELS='$kerns'
-- 
1.8.5.2




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:04:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4wZ-00031i-Lf; Wed, 19 Feb 2014 11:03:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WG4wY-00031c-9b
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 11:03:58 +0000
Received: from [193.109.254.147:6644] by server-5.bemta-14.messagelabs.com id
	ED/64-16688-D9F84035; Wed, 19 Feb 2014 11:03:57 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392807835!5393855!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32766 invoked from network); 19 Feb 2014 11:03:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:03:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103847181"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 11:03:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:03:54 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WG4wT-00049H-MP;
	Wed, 19 Feb 2014 11:03:53 +0000
Message-ID: <53048F96.7010503@eu.citrix.com>
Date: Wed, 19 Feb 2014 11:03:50 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Yang Z Zhang <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
	<53033544.2000409@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBC9@SHSMSX104.ccr.corp.intel.com>
	<53047F74020000780011D8E4@nat28.tlf.novell.com>
In-Reply-To: <53047F74020000780011D8E4@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Xiantao Zhang <xiantao.zhang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 08:55 AM, Jan Beulich wrote:
>>>> On 19.02.14 at 02:28, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> George Dunlap wrote on 2014-02-18:
>>> On 02/18/2014 03:14 AM, Zhang, Yang Z wrote:
>>> perhaps my original patch is better which will check
>>> paging_mode_log_dirty(d) && log_global:
>>>
>>> It turns out that the reason I couldn't get a crash was because libxc
>>> was actually paying attention to the -EINVAL return value, and
>>> disabling and then re-enabling logdirty.  That's what would happen
>>> before your dirty vram patch, and that's what happens after.  And
>>> arguably, that's the correct behavior for any toolstack, given that the
>> interface returns an error.
>>
>> Agree.
>>
>>> This patch would actually change the interface; if we check this in,
>>> then if you enable logdirty when dirty vram tracking is enabled, you
>>> *won't* get an error, and thus *won't* disable and re-enable logdirty mode.
>>> So actually, this patch would be more disruptive.
>>>
>> Jan, do you have any comment?
> This simplistic variant is just calling for problems. As was already
> said elsewhere on this thread, we should simply do the mode change
> properly: Track that a partial log-dirty mode is in use, and allow
> switching to global log-dirty mode (converting all entries to R/O).

I think Yang was asking you for your opinion on my suggestion that 
nothing actually needed to be done.  Enabling full logdirty mode for 
migration when dirty vram tracking was enabled has *always* returned an 
error (or at least for a long time now), and *always* resulted in the 
toolstack disabling and re-enabling logdirty mode; Yang's patch doesn't 
change that at all.

If you think that's an interface we need to improve in the future, we 
can put it on the list of improvements.  But at this point it seems to 
me more like a nice-to-have.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:06:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:06:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG4z1-0003Ac-Bq; Wed, 19 Feb 2014 11:06:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG4z0-0003AW-RZ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:06:31 +0000
Received: from [193.109.254.147:51647] by server-8.bemta-14.messagelabs.com id
	98/9C-18529-53094035; Wed, 19 Feb 2014 11:06:29 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392807988!1685750!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4180 invoked from network); 19 Feb 2014 11:06:29 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:06:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102137061"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:06:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:06:27 -0500
Message-ID: <1392807985.23084.132.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Date: Wed, 19 Feb 2014 11:06:25 +0000
In-Reply-To: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-17 at 18:01 +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Document the multi-queue feature in terms of XenStore keys to be written
> by the backend and by the frontend.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  xen/include/public/io/netif.h |   21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> index d7fb771..90be2fc 100644
> --- a/xen/include/public/io/netif.h
> +++ b/xen/include/public/io/netif.h
> @@ -69,6 +69,27 @@
>   */
>  
>  /*
> + * Multiple transmit and receive queues:
> + * If supported, the backend will write "multi-queue-max-queues" and set its
> + * value to the maximum supported number of queues.
> + * Frontends that are aware of this feature and wish to use it can write the
> + * key "multi-queue-num-queues", set to the number they wish to use.
> + *
> + * Queues replicate the shared rings and event channels, and
> + * "feature-split-event-channels" is required when using multiple queues.
> + *
> + * For frontends requesting just one queue, the usual event-channel and
> + * ring-ref keys are written as before, simplifying the backend processing
> + * to avoid distinguishing between a frontend that doesn't understand the
> + * multi-queue feature, and one that does, but requested only one queue.
> + *
> + * Frontends requesting two or more queues must not write the toplevel
> + * event-channel and ring-ref keys, instead writing them under sub-keys having
> + * the name "queue-N" where N is the integer ID of the queue to which those
> + * keys belong. Queues are indexed from zero.

If "feature-split-event-channels" is required then I think what should
be written is queue-N/event-channel-{tx,rx} and
queue-N/{tx,rx}-ring-ref, rather than queue-N/{event-channel,ring-ref}
as the final paragraph sort of implies?

(what a shame we have event-channel-DIR and DIR-ring-ref, oh well!)

Is it required to have the same number of RX and TX queues?

Are there any other properties/behaviours which should be documented,
e.g. relating to the selection of which queue to use for a given frame
(on either TX or RX)? If not and it is up to the relevant end to do what
it wants then I think it would be useful to say so.
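
For concreteness, a hypothetical XenStore layout for a frontend that requested
two queues, using the queue-N/event-channel-{tx,rx} and queue-N/{tx,rx}-ring-ref
naming suggested above, might look like this (the path prefix, grant references
and event channel ports are illustrative only, not taken from the patch):

```
/local/domain/<frontend-domid>/device/vif/0/
    multi-queue-num-queues   = "2"
    queue-0/tx-ring-ref      = "<grant-ref>"
    queue-0/rx-ring-ref      = "<grant-ref>"
    queue-0/event-channel-tx = "<port>"
    queue-0/event-channel-rx = "<port>"
    queue-1/tx-ring-ref      = "<grant-ref>"
    queue-1/rx-ring-ref      = "<grant-ref>"
    queue-1/event-channel-tx = "<port>"
    queue-1/event-channel-rx = "<port>"
```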

> + */
> +
> +/*
>   * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
>   * offload off or on. If it is missing then the feature is assumed to be on.
>   * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:13:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG55V-0003NP-AM; Wed, 19 Feb 2014 11:13:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WG55U-0003NK-DR
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 11:13:12 +0000
Received: from [193.109.254.147:17850] by server-5.bemta-14.messagelabs.com id
	26/F3-16688-7C194035; Wed, 19 Feb 2014 11:13:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392808390!5346081!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2083 invoked from network); 19 Feb 2014 11:13:11 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 11:13:11 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Feb 2014 11:13:10 +0000
Message-Id: <53049FD3020000780011DA37@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 19 Feb 2014 11:13:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D72C4@SHSMSX104.ccr.corp.intel.com>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
	<53033544.2000409@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBC9@SHSMSX104.ccr.corp.intel.com>
	<53047F74020000780011D8E4@nat28.tlf.novell.com>
	<53048F96.7010503@eu.citrix.com>
In-Reply-To: <53048F96.7010503@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Xiantao Zhang <xiantao.zhang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 12:03, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/19/2014 08:55 AM, Jan Beulich wrote:
>>>>> On 19.02.14 at 02:28, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> George Dunlap wrote on 2014-02-18:
>>>> On 02/18/2014 03:14 AM, Zhang, Yang Z wrote:
>>>> perhaps my original patch is better which will check
>>>> paging_mode_log_dirty(d) && log_global:
>>>>
>>>> It turns out that the reason I couldn't get a crash was because libxc
>>>> was actually paying attention to the -EINVAL return value, and
>>>> disabling and then re-enabling logdirty.  That's what would happen
>>>> before your dirty vram patch, and that's what happens after.  And
>>>> arguably, that's the correct behavior for any toolstack, given that the
>>> interface returns an error.
>>>
>>> Agree.
>>>
>>>> This patch would actually change the interface; if we check this in,
>>>> then if you enable logdirty when dirty vram tracking is enabled, you
>>>> *won't* get an error, and thus *won't* disable and re-enable logdirty mode.
>>>> So actually, this patch would be more disruptive.
>>>>
>>> Jan, do you have any comment?
>> This simplistic variant is just calling for problems. As was already
>> said elsewhere on this thread, we should simply do the mode change
>> properly: Track that a partial log-dirty mode is in use, and allow
>> switching to global log-dirty mode (converting all entries to R/O).
> 
> I think Yang was asking you for your opinion on my suggestion that 
> nothing actually needed to be done.  Enabling full logdirty mode for 
> migration when dirty vram tracking was enabled has *always* returned an 
> error (or at least for a long time now), and *always* resulted in the 
> toolstack disabling and re-enabling logdirty mode; Yang's patch doesn't 
> change that at all.
> 
> If you think that's an interface we need to improve in the future, we 
> can put it on the list of improvements.  But at this point it seems to 
> me more like a nice-to-have.

I agree - for 4.4.0 we shouldn't need any further adjustments. And
I hoped to imply that I don't see a need for this incremental change
to go in by having said "This simplistic variant is just calling for
problems".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:13:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG55e-0003Nz-ML; Wed, 19 Feb 2014 11:13:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG55c-0003Nl-Vs
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:13:21 +0000
Received: from [85.158.137.68:2069] by server-17.bemta-3.messagelabs.com id
	97/30-22569-0D194035; Wed, 19 Feb 2014 11:13:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392808398!1587464!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3449 invoked from network); 19 Feb 2014 11:13:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:13:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103849358"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 11:13:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:13:16 -0500
Message-ID: <1392808395.23084.133.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:13:15 +0000
In-Reply-To: <1388957191-10337-2-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-2-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 1/6] xen/arm: earlyprintk: move early_flush in
	early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> The early_puts function will be exported to be used in the console code. To
> avoid losing characters (see why in commit cafdceb "xen/arm: avoid lost
> characters with early_printk"), early_flush needs to be called in this
> function.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/early_printk.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
> index 41938bb..7143f9e 100644
> --- a/xen/arch/arm/early_printk.c
> +++ b/xen/arch/arm/early_printk.c
> @@ -29,12 +29,6 @@ static void __init early_puts(const char *s)
>          early_putch(*s);
>          s++;
>      }
> -}
> -
> -static void __init early_vprintk(const char *fmt, va_list args)
> -{
> -    vsnprintf(buf, sizeof(buf), fmt, args);
> -    early_puts(buf);
>  
>      /*
>       * Wait the UART has finished to transfer all characters before
> @@ -43,6 +37,12 @@ static void __init early_vprintk(const char *fmt, va_list args)
>      early_flush();
>  }
>  
> +static void __init early_vprintk(const char *fmt, va_list args)
> +{
> +    vsnprintf(buf, sizeof(buf), fmt, args);
> +    early_puts(buf);
> +}
> +
>  void __init early_printk(const char *fmt, ...)
>  {
>      va_list args;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:14:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG56b-0003Vt-6w; Wed, 19 Feb 2014 11:14:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG56Z-0003Vl-Ek
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:14:19 +0000
Received: from [85.158.137.68:48066] by server-12.bemta-3.messagelabs.com id
	9E/10-01674-A0294035; Wed, 19 Feb 2014 11:14:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392808456!2829230!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12074 invoked from network); 19 Feb 2014 11:14:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:14:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102138604"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:14:16 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:14:15 -0500
Message-ID: <1392808454.23084.134.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:14:14 +0000
In-Reply-To: <1388957191-10337-3-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-3-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 2/6] xen/arm: earlyprintk: export early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> [..]
> +static inline void early_puts(const char *)
> +{}

We'd normally put these at the end of the line with the prototype (I see
this is wrong for early_printk too)

> +
>  static inline  __attribute__((format (printf, 1, 2))) void
>  early_printk(const char *fmt, ...)
>  {}



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG58c-0003gn-OF; Wed, 19 Feb 2014 11:16:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG58b-0003gX-C4
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:16:25 +0000
Received: from [85.158.139.211:58073] by server-14.bemta-5.messagelabs.com id
	2F/A1-27598-88294035; Wed, 19 Feb 2014 11:16:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392808581!348431!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12376 invoked from network); 19 Feb 2014 11:16:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:16:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103849878"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 11:16:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:16:06 -0500
Message-ID: <1392808565.23084.135.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:16:05 +0000
In-Reply-To: <1388957191-10337-4-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-4-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 3/6] xen/arm: Rename EARLY_PRINTK compile
 option to CONFIG_EARLY_PRINTK
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> Most common compile options start with CONFIG_. Rename the EARLY_PRINTK
> option to CONFIG_EARLY_PRINTK to be consistent.
> 
> This option will be used in common code (e.g. the console) later.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/Rules.mk              |  2 +-
>  xen/arch/arm/arm32/head.S          | 18 +++++++++---------
>  xen/arch/arm/arm64/head.S          | 18 +++++++++---------
>  xen/include/asm-arm/early_printk.h |  6 +++---
>  4 files changed, 22 insertions(+), 22 deletions(-)
> 
> diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
> index aaa203e..57f2eb1 100644
> --- a/xen/arch/arm/Rules.mk
> +++ b/xen/arch/arm/Rules.mk
> @@ -93,7 +93,7 @@ ifneq ($(EARLY_PRINTK_INC),)
>  EARLY_PRINTK := y
>  endif
>  
> -CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK
> +CFLAGS-$(EARLY_PRINTK) += -DCONFIG_EARLY_PRINTK
>  CFLAGS-$(EARLY_PRINTK_INIT_UART) += -DEARLY_PRINTK_INIT_UART
>  CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_INC=\"debug-$(EARLY_PRINTK_INC).inc\"
>  CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_BAUD=$(EARLY_PRINTK_BAUD)
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 96230ac..1b1801b 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -34,7 +34,7 @@
>  #define PT_UPPER(x) (PT_##x & 0xf00)
>  #define PT_LOWER(x) (PT_##x & 0x0ff)
>  
> -#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
> +#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
>  #include EARLY_PRINTK_INC
>  #endif
>  
> @@ -59,7 +59,7 @@
>   */
>  /* Macro to print a string to the UART, if there is one.
>   * Clobbers r0-r3. */
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>  #define PRINT(_s)       \
>          adr   r0, 98f ; \
>          bl    puts    ; \
> @@ -67,9 +67,9 @@
>  98:     .asciz _s     ; \
>          .align 2      ; \
>  99:
> -#else /* EARLY_PRINTK */
> +#else /* CONFIG_EARLY_PRINTK */
>  #define PRINT(s)
> -#endif /* !EARLY_PRINTK */
> +#endif /* !CONFIG_EARLY_PRINTK */
>  
>          .arm
>  
> @@ -149,7 +149,7 @@ common_start:
>          b     2b
>  1:
>  
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>          ldr   r11, =EARLY_UART_BASE_ADDRESS  /* r11 := UART base address */
>          teq   r12, #0                /* Boot CPU sets up the UART too */
>          bleq  init_uart
> @@ -330,7 +330,7 @@ paging:
>          /* Now we can install the fixmap and dtb mappings, since we
>           * don't need the 1:1 map any more */
>          dsb
> -#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
> +#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Non-boot CPUs don't need to rebuild the fixmap itself, just
>  	 * the mapping from boot_second to xen_fixmap */
>          teq   r12, #0
> @@ -492,7 +492,7 @@ ENTRY(relocate_xen)
>  
>          mov pc, lr
>  
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>  /* Bring up the UART.
>   * r11: Early UART base address
>   * Clobbers r0-r2 */
> @@ -537,7 +537,7 @@ putn:
>  hex:    .ascii "0123456789abcdef"
>          .align 2
>  
> -#else  /* EARLY_PRINTK */
> +#else  /* CONFIG_EARLY_PRINTK */
>  
>  init_uart:
>  .global early_puts
> @@ -545,7 +545,7 @@ early_puts:
>  puts:
>  putn:   mov   pc, lr
>  
> -#endif /* !EARLY_PRINTK */
> +#endif /* !CONFIG_EARLY_PRINTK */
>  
>  /*
>   * Local variables:
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index 31afdd0..c97c194 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -30,7 +30,7 @@
>  #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
>  #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
>  
> -#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
> +#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
>  #include EARLY_PRINTK_INC
>  #endif
>  
> @@ -71,7 +71,7 @@
>  
>  /* Macro to print a string to the UART, if there is one.
>   * Clobbers x0-x3. */
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>  #define PRINT(_s)       \
>          adr   x0, 98f ; \
>          bl    puts    ; \
> @@ -79,9 +79,9 @@
>  98:     .asciz _s     ; \
>          .align 2      ; \
>  99:
> -#else /* EARLY_PRINTK */
> +#else /* CONFIG_EARLY_PRINTK */
>  #define PRINT(s)
> -#endif /* !EARLY_PRINTK */
> +#endif /* !CONFIG_EARLY_PRINTK */
>  
>          /*.aarch64*/
>  
> @@ -174,7 +174,7 @@ common_start:
>          b     2b
>  1:
>  
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>          ldr   x23, =EARLY_UART_BASE_ADDRESS /* x23 := UART base address */
>          cbnz  x22, 1f
>          bl    init_uart                 /* Boot CPU sets up the UART too */
> @@ -343,7 +343,7 @@ paging:
>          /* Now we can install the fixmap and dtb mappings, since we
>           * don't need the 1:1 map any more */
>          dsb   sy
> -#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
> +#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
>          /* Non-boot CPUs don't need to rebuild the fixmap itself, just
>  	 * the mapping from boot_second to xen_fixmap */
>          cbnz  x22, 1f
> @@ -489,7 +489,7 @@ ENTRY(relocate_xen)
>  
>          ret
>  
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>  /* Bring up the UART.
>   * x23: Early UART base address
>   * Clobbers x0-x1 */
> @@ -536,7 +536,7 @@ putn:
>  hex:    .ascii "0123456789abcdef"
>          .align 2
>  
> -#else  /* EARLY_PRINTK */
> +#else  /* CONFIG_EARLY_PRINTK */
>  
>  init_uart:
>  .global early_puts
> @@ -544,7 +544,7 @@ early_puts:
>  puts:
>  putn:   ret
>  
> -#endif /* EARLY_PRINTK */
> +#endif /* !CONFIG_EARLY_PRINTK */
>  
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
> index 31024b5..a58e3e7 100644
> --- a/xen/include/asm-arm/early_printk.h
> +++ b/xen/include/asm-arm/early_printk.h
> @@ -12,7 +12,7 @@
>  
>  #include <xen/config.h>
>  
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>  
>  /* need to add the uart address offset in page to the fixmap address */
>  #define EARLY_UART_VIRTUAL_ADDRESS \
> @@ -22,7 +22,7 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -#ifdef EARLY_PRINTK
> +#ifdef CONFIG_EARLY_PRINTK
>  
>  void early_puts(const char *s);
>  void early_printk(const char *fmt, ...)
> @@ -43,7 +43,7 @@ static inline void  __attribute__((noreturn))
>  __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
>  {while(1);}
>  
> -#endif
> +#endif /* !CONFIG_EARLY_PRINTK */
>  
>  #endif	/* __ASSEMBLY__ */
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:17:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5A1-0003or-7z; Wed, 19 Feb 2014 11:17:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WG59z-0003oi-Qy
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 11:17:52 +0000
Received: from [85.158.137.68:44236] by server-4.bemta-3.messagelabs.com id
	5E/24-04858-ED294035; Wed, 19 Feb 2014 11:17:50 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392808668!1588967!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6252 invoked from network); 19 Feb 2014 11:17:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:17:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102139276"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:17:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:17:48 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WG59u-0004Mu-Tc;
	Wed, 19 Feb 2014 11:17:46 +0000
Message-ID: <530492D7.6050503@eu.citrix.com>
Date: Wed, 19 Feb 2014 11:17:43 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Yang Z Zhang <yang.z.zhang@intel.com>
References: <20140210080314.GA758@deinos.phlegethon.org>
	<20140211090202.GC92054@deinos.phlegethon.org>
	<CAFLBxZaS_Sa03VxmPqDBB0wWEdjwqEa__RbtYvwuMd=QDk9ghQ@mail.gmail.com>
	<20140211115553.GB97288@deinos.phlegethon.org>
	<52FA2C63020000780011B201@nat28.tlf.novell.com>
	<52FA480D.9040707@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9D98D6@SHSMSX104.ccr.corp.intel.com>
	<52FCE8BE.8050105@eu.citrix.com>
	<52FCF90F020000780011C29A@nat28.tlf.novell.com>
	<20140213162022.GE82703@deinos.phlegethon.org>
	<5301F000020000780011CCE0@nat28.tlf.novell.com>
	<53022209.1060005@eu.citrix.com>
	<5302332D020000780011CEF1@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9ED8B0@SHSMSX104.ccr.corp.intel.com>
	<53033544.2000409@eu.citrix.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9EEBC9@SHSMSX104.ccr.corp.intel.com>
	<53047F74020000780011D8E4@nat28.tlf.novell.com>
	<53048F96.7010503@eu.citrix.com>
	<53049FD3020000780011DA37@nat28.tlf.novell.com>
In-Reply-To: <53049FD3020000780011DA37@nat28.tlf.novell.com>
X-DLP: MIA2
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>, Xiantao Zhang <xiantao.zhang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Don't track all memory when enabling log
 dirty to track vram
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 11:13 AM, Jan Beulich wrote:
>>>> On 19.02.14 at 12:03, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> On 02/19/2014 08:55 AM, Jan Beulich wrote:
>>>>>> On 19.02.14 at 02:28, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>>> George Dunlap wrote on 2014-02-18:
>>>>> On 02/18/2014 03:14 AM, Zhang, Yang Z wrote:
>>>>> perhaps my original patch is better which will check
>>>>> paging_mode_log_dirty(d) && log_global:
>>>>>
>>>>> It turns out that the reason I couldn't get a crash was because libxc
>>>>> was actually paying attention to the -EINVAL return value, and
>>>>> disabling and then re-enabling logdirty.  That's what would happen
>>>>> before your dirty vram patch, and that's what happens after.  And
>>>>> arguably, that's the correct behavior for any toolstack, given that the
>>>> interface returns an error.
>>>>
>>>> Agree.
>>>>
>>>>> This patch would actually change the interface; if we check this in,
>>>>> then if you enable logdirty when dirty vram tracking is enabled, you
>>>>> *won't* get an error, and thus *won't* disable and re-enable logdirty mode.
>>>>> So actually, this patch would be more disruptive.
>>>>>
>>>> Jan, do you have any comment?
>>> This simplistic variant is just calling for problems. As was already
>>> said elsewhere on this thread, we should simply do the mode change
>>> properly: Track that a partial log-dirty mode is in use, and allow
>>> switching to global log-dirty mode (converting all entries to R/O).
>> I think Yang was asking you for your opinion on my suggestion that
>> nothing actually needed to be done.  Enabling full logdirty mode for
>> migration when dirty vram tracking was enabled has *always* returned an
>> error (or at least for a long time now), and *always* resulted in the
>> toolstack disabling and re-enabling logdirty mode; Yang's patch doesn't
>> change that at all.
>>
>> If you think that's an interface we need to improve in the future, we
>> can put it on the list of improvements.  But at this point it seems to
>> me more like a nice-to-have.
> I agree - for 4.4.0 we shouldn't need any further adjustments. And
> I hoped to imply that I don't see a need for this incremental change
> to go in by having said "This simplistic variant is just calling for
> problems".

No, but "we should simply do the mode change properly" could be 
interpreted as saying, "this needs to be done as a follow-up to the 
dirty vram tracking patch"; someone might even interpret it as, "you 
need to do this as a follow-up".  That's what I was trying to clarify / 
express an opinion on. :-)
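[Editorial note] The toolstack behaviour described above — libxc seeing -EINVAL when trying to enable global log-dirty while dirty-vram tracking is active, then disabling and re-enabling log-dirty mode — can be sketched as below. This is a minimal model with made-up names (hv_enable_global_logdirty, toolstack_enable_logdirty), not the actual libxc or hypervisor code:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static bool vram_tracking;   /* partial (vram-only) log-dirty in use */
static bool global_logdirty; /* full log-dirty enabled */

/* Models the hypervisor side: refuses to go global while partial
 * log-dirty (dirty-vram tracking) is active. */
static int hv_enable_global_logdirty(void)
{
    if (vram_tracking)
        return -EINVAL;
    global_logdirty = true;
    return 0;
}

/* Models disabling log-dirty mode entirely. */
static void hv_disable_logdirty(void)
{
    vram_tracking = false;
    global_logdirty = false;
}

/* Models the toolstack: on -EINVAL, disable and then re-enable
 * log-dirty mode, as libxc does per the discussion above. */
static int toolstack_enable_logdirty(void)
{
    int rc = hv_enable_global_logdirty();
    if (rc == -EINVAL) {
        hv_disable_logdirty();
        rc = hv_enable_global_logdirty();
    }
    return rc;
}
```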

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:19:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5BJ-00040E-TZ; Wed, 19 Feb 2014 11:19:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5BI-000401-K7
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:19:12 +0000
Received: from [85.158.143.35:49204] by server-2.bemta-4.messagelabs.com id
	47/F2-10891-F2394035; Wed, 19 Feb 2014 11:19:11 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392808749!6786917!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6381 invoked from network); 19 Feb 2014 11:19:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:19:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103850348"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 11:19:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:19:08 -0500
Message-ID: <1392808746.23084.137.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:19:06 +0000
In-Reply-To: <1388957191-10337-5-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-5-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>,
	stefano.stabellini@eu.citrix.com, tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 4/6] xen/console: Add support for early printk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> On ARM, a function (early_printk) was introduced to output messages before
> the serial port is initialized.
> 
> This solution is fragile because the developer needs to know whether the
> serial port is initialized in order to choose between early_printk and
> printk. Moreover, some functions (mainly in common code) only use printk,
> which can sometimes result in lost messages.
> 
> Directly call early_printk in console code when the serial port is not yet
> initialized. For this purpose use serial_steal_fn.

This relies on nothing stealing the console over the period where the
console is initialised. Perhaps that is already not advisable/possible?

> 
> Cc: Keir Fraser <keir@xen.org>
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/drivers/char/console.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 532c426..f83c92e 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -28,6 +28,9 @@
>  #include <asm/debugger.h>
>  #include <asm/div64.h>
>  #include <xen/hypercall.h> /* for do_console_io */
> +#ifdef CONFIG_EARLY_PRINTK
> +#include <asm/early_printk.h>
> +#endif
>  
>  /* console: comma-separated list of console outputs. */
>  static char __initdata opt_console[30] = OPT_CONSOLE_STR;
> @@ -245,7 +248,12 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
>  static char serial_rx_ring[SERIAL_RX_SIZE];
>  static unsigned int serial_rx_cons, serial_rx_prod;
>  
> -static void (*serial_steal_fn)(const char *);
> +#ifndef CONFIG_EARLY_PRINTK
> +static inline void early_puts(const char *str)
> +{}

This duplicates bits of asm-arm/early_printk.h. I think if the feature
is going to be used from common code then the common bits of the asm
header should be moved to xen/early_printk.h. If any per-arch stuff
remains then xen/e_p.h can include asm/e_p.h.

> +#endif
> +
> +static void (*serial_steal_fn)(const char *) = early_puts;
>  
>  int console_steal(int handle, void (*fn)(const char *))
>  {
> @@ -652,7 +660,10 @@ void __init console_init_preirq(void)
>          else if ( !strncmp(p, "none", 4) )
>              continue;
>          else if ( (sh = serial_parse_handle(p)) >= 0 )
> +        {
>              sercon_handle = sh;
> +            serial_steal_fn = NULL;
> +        }
>          else
>          {
>              char *q = strchr(p, ',');


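[Editorial note] The mechanism the patch relies on — serial_steal_fn defaulting to early_puts until console_init_preirq() parses a real serial handle — can be sketched as below. Buffers stand in for the early-printk path and the real serial driver; the names console_puts and console_init are illustrative, not the actual Xen functions:

```c
#include <assert.h>
#include <string.h>

static char early_buf[128];
static char serial_buf[128];

/* Stands in for the arch early-printk path (CONFIG_EARLY_PRINTK). */
static void early_puts(const char *s)
{
    strncat(early_buf, s, sizeof(early_buf) - strlen(early_buf) - 1);
}

/* Stands in for the real serial driver output. */
static void serial_puts(const char *s)
{
    strncat(serial_buf, s, sizeof(serial_buf) - strlen(serial_buf) - 1);
}

/* As in the patch: the steal function defaults to early_puts... */
static void (*serial_steal_fn)(const char *) = early_puts;

/* ...and console output prefers the steal function while it is set. */
static void console_puts(const char *s)
{
    if (serial_steal_fn)
        serial_steal_fn(s);
    else
        serial_puts(s);
}

/* Models console_init_preirq() finding a real serial handle and
 * handing output over to the real driver. */
static void console_init(void)
{
    serial_steal_fn = NULL;
}
```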

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:21:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5DG-0004BK-FX; Wed, 19 Feb 2014 11:21:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5DE-0004BE-Ui
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:21:13 +0000
Received: from [85.158.137.68:36988] by server-14.bemta-3.messagelabs.com id
	9A/19-08196-8A394035; Wed, 19 Feb 2014 11:21:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392808869!2868660!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9133 invoked from network); 19 Feb 2014 11:21:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:21:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102139990"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:20:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:20:48 -0500
Message-ID: <1392808847.23084.138.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:20:47 +0000
In-Reply-To: <1388957191-10337-7-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> Now that the console supports early printk, we can get rid of the direct
> early_printk calls.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Now all we need is a way to make it a runtime option :-)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:23:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5FK-0004KD-0c; Wed, 19 Feb 2014 11:23:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5FI-0004K6-I6
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:23:20 +0000
Received: from [85.158.137.68:14844] by server-17.bemta-3.messagelabs.com id
	B1/52-22569-72494035; Wed, 19 Feb 2014 11:23:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392808997!2843788!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13946 invoked from network); 19 Feb 2014 11:23:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:23:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102140385"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:23:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:23:16 -0500
Message-ID: <1392808995.23084.139.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:23:15 +0000
In-Reply-To: <1390581822-32624-2-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-2-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 1/8] xen/arm: irq: move gic {,
 un}lock in gic_set_irq_properties
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> The function gic_set_irq_properties is only called in two places:
>     - gic_route_irq: the gic.lock is only taken around the call to
>     gic_set_irq_properties.
>     - gic_route_irq_to_guest: the gic.lock is held for the duration of
>     the function, but it is only needed for the call to
>     gic_set_irq_properties.
> 
> So we can safely move the lock into gic_set_irq_properties and restrict
> the critical section for the gic.lock in gic_route_irq_to_guest.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Although ISTR Stefano saying he had got rid of the lock altogether. I'll
let you two battle that one out ;-)
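[Editorial note] The refactoring described in the commit message — narrowing a lock held across a whole caller by moving it into the callee that actually needs it — can be sketched as below. The lock here is a simple flag with a no-double-lock check; names mirror the patch but the bodies are made up:

```c
#include <assert.h>

static int gic_lock_held;
static int irq_properties;
static int unrelated_work;

/* A trivial stand-in for spin_lock(&gic.lock) that also checks we
 * never take the lock recursively. */
static void gic_lock(void)   { assert(!gic_lock_held); gic_lock_held = 1; }
static void gic_unlock(void) { gic_lock_held = 0; }

/* After the change: the callee takes the lock around exactly the
 * state it protects... */
static void gic_set_irq_properties(int props)
{
    gic_lock();
    irq_properties = props;
    gic_unlock();
}

/* ...so callers like gic_route_irq_to_guest no longer hold it across
 * work that does not need it. */
static void gic_route_irq_to_guest(int props)
{
    unrelated_work++;              /* now outside the critical section */
    gic_set_irq_properties(props);
}
```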



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> The function gic_set_irq_properties is only called in two places:
>     - gic_route_irq: the gic.lock is only taken for the call to
>     gic_set_irq_properties.
>     - gic_route_irq_to_guest: the gic.lock is taken for the duration of
>     the function, but it is only useful for the call to gic_set_irq_properties.
> 
> So we can safely move the lock into gic_set_irq_properties and restrict the
> critical section for the gic.lock in gic_route_irq_to_guest.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Although ISTR Stefano saying he had got rid of the lock altogether. I'll
let you two battle that one out ;-)



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:24:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5G7-0004Pu-Eb; Wed, 19 Feb 2014 11:24:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5G6-0004Pi-7c
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:24:10 +0000
Received: from [85.158.139.211:24334] by server-15.bemta-5.messagelabs.com id
	72/01-24395-95494035; Wed, 19 Feb 2014 11:24:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392809047!345608!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28386 invoked from network); 19 Feb 2014 11:24:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:24:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102140563"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:24:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:24:06 -0500
Message-ID: <1392809045.23084.140.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:24:05 +0000
In-Reply-To: <1390581822-32624-3-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-3-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, patches@linaro.org,
	stefano.stabellini@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH for-4.5 2/8] xen/arm: setup_dt_irq: don't
 enable the IRQ if the creation has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> For now, __setup_dt_irq can only fail if the action is already set. If the
> function is updated in the future, we don't want to enable the IRQ when it
> has failed.
> 
> Assuming the function can fail with action = NULL, when Xen receives the
> IRQ it will crash because do_IRQ doesn't check whether action is NULL.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>
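The error-handling rule in this patch, never enable an IRQ whose setup failed, so a later interrupt can never be dispatched to a NULL action, can be sketched like this. The struct and function names are illustrative, not the real Xen ones:

```c
#include <stddef.h>

/* Hypothetical sketch: enable the IRQ only after setup has succeeded. */
struct irqaction { void (*handler)(int irq); };
struct irq_desc  { struct irqaction *action; int enabled; };

static int setup_irq_action(struct irq_desc *desc, struct irqaction *a)
{
    if (desc->action != NULL)
        return -1;            /* action already set: fail */
    desc->action = a;
    return 0;
}

static int setup_and_enable(struct irq_desc *desc, struct irqaction *a)
{
    int rc = setup_irq_action(desc, a);
    if (rc != 0)
        return rc;            /* setup failed: do NOT enable the IRQ */
    desc->enabled = 1;
    return 0;
}
```

With the early return, an IRQ whose action could not be installed stays disabled, so a dispatcher that dereferences desc->action without a NULL check is never reached for it.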



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:29:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:29:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5Kq-0004fZ-9A; Wed, 19 Feb 2014 11:29:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WG5Kp-0004fQ-A7
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 11:29:03 +0000
Received: from [85.158.143.35:35364] by server-1.bemta-4.messagelabs.com id
	9C/C7-31661-E7594035; Wed, 19 Feb 2014 11:29:02 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392809340!6773357!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24516 invoked from network); 19 Feb 2014 11:29:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:29:01 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102141305"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:29:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:28:59 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WG5Kl-0004WI-0u;
	Wed, 19 Feb 2014 11:28:59 +0000
Date: Wed, 19 Feb 2014 11:28:58 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140219112858.GN18398@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1772884781.20140218222513@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: ANNIE LI <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 10:25:13PM +0100, Sander Eikelenboom wrote:
> Hi All,
> 
> I'm currently having some network troubles with Xen and recent linux kernels.
> 
> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>   I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
> 

I *think* that issue should've been fixed -- not with that patch but with
a different one. *sigh*

>   In the guest:
>   [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>   [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>   [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>   [57539.859610] net eth0: Need more slots
>   [58157.675939] net eth0: Need more slots
>   [58725.344712] net eth0: Need more slots
>   [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>   [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>   [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>   [61815.849225] net eth0: Need more slots
> 
>   Xen reports:
>   (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>   (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>   (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>   (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>   (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>   (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>   (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>   (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>   (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>   (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>   (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>   (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>   (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>   (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>   (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>   (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>   (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>   (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>   (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>   (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>   (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
> 

Judging from the log above I presume it happens after DomU has been
running for quite a while? Could you elaborate on the exact steps to
reproduce?

Does it happen when you use 3.13 as backend?
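A side note on the log format quoted above (an observation, not part of the original thread, and not a claim about the root cause): `size: 4294967295` is what a 32-bit response status of -1, i.e. an error response, looks like once printed as an unsigned size:

```c
#include <stdint.h>

/* Illustrative helper: a negative 32-bit status, reinterpreted as an
 * unsigned 32-bit size, wraps around; -1 becomes 4294967295. */
static uint32_t as_logged_size(int32_t status)
{
    return (uint32_t)status;
}
```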

> 
> 
> Another issue with networking is when running both dom0 and the domUs with a 3.14-rc3 kernel:
>   - I can ping the guests from dom0
>   - I can ping dom0 from the guests
>   - But I can't ssh in or access things over HTTP
>   - I don't see any relevant error messages ...
>   - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>     (which previously worked fine)

I think I am able to reproduce this. I'm looking into it now. Luckily
there aren't that many changesets between 3.13 and 3.14.


Wei.

> 
> --
> 
> Sander

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:34:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:34:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5Pg-0004om-0n; Wed, 19 Feb 2014 11:34:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WG5Pe-0004og-I0
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 11:34:02 +0000
Received: from [85.158.143.35:29651] by server-3.bemta-4.messagelabs.com id
	2D/F2-11539-9A694035; Wed, 19 Feb 2014 11:34:01 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392809640!6783947!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2603 invoked from network); 19 Feb 2014 11:34:01 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 19 Feb 2014 11:34:01 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:51870 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WG5Of-0002gU-GF; Wed, 19 Feb 2014 12:33:01 +0100
Date: Wed, 19 Feb 2014 12:33:59 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <14710449418.20140219123359@eikelenboom.it>
To: Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <20140219112858.GN18398@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<20140219112858.GN18398@zion.uk.xensource.com>
MIME-Version: 1.0
Cc: ANNIE LI <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, February 19, 2014, 12:28:58 PM, you wrote:

> On Tue, Feb 18, 2014 at 10:25:13PM +0100, Sander Eikelenboom wrote:
>> Hi All,
>> 
>> I'm currently having some network troubles with Xen and recent linux kernels.
>> 
>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>   I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>> 

> I *think* that issue should've been fixed -- not with that patch but with
> a different one. *sigh*

>>   In the guest:
>>   [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>   [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>   [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>   [57539.859610] net eth0: Need more slots
>>   [58157.675939] net eth0: Need more slots
>>   [58725.344712] net eth0: Need more slots
>>   [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>   [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>   [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>   [61815.849225] net eth0: Need more slots
>> 
>>   Xen reports:
>>   (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>   (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>   (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>   (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>   (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>   (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>   (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>   (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>   (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>   (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>   (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>   (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>   (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>   (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>   (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>   (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>   (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>   (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>   (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>   (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>   (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>> 

> Judging from the log above I presume it happens after DomU has been
> running for quite a while? Could you elaborate on the exact steps to
> reproduce?

> Does it happen when you use 3.13 as backend?

Yes, it happens when it runs for a while (could be due to the fragment packet
problem, which seems to be a nasty one that recurs from time to time).
But I just noticed that in the specific VM that caused this I had TSO switched
off (to see if it would affect the case below, which it didn't), so it could be
due to that. I will see if I can figure that out and report back.


>> 
>> 
>> Another issue with networking is when running both dom0 and the domUs with a 3.14-rc3 kernel:
>>   - I can ping the guests from dom0
>>   - I can ping dom0 from the guests
>>   - But I can't ssh in or access things over HTTP
>>   - I don't see any relevant error messages ...
>>   - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>>     (which previously worked fine)

> I think I am able to reproduce this. I'm looking into it now. Luckily
> there isn't that many changesets between 3.13 and 3.14.

>From what I can remember it's somewhere early in the merge window, but I didn't get to report it due to quite a few other bugs
in different subsystems this merge window.

> Wei.

>> 
>> --
>> 
>> Sander



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:36:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:36:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5RN-0004uk-Iw; Wed, 19 Feb 2014 11:35:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5RL-0004uc-Uh
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:35:48 +0000
Received: from [85.158.137.68:29236] by server-9.bemta-3.messagelabs.com id
	F0/2B-10184-31794035; Wed, 19 Feb 2014 11:35:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392809745!1295178!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1473 invoked from network); 19 Feb 2014 11:35:46 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:35:46 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103853104"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 11:35:27 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:35:26 -0500
Message-ID: <1392809725.29739.5.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:35:25 +0000
In-Reply-To: <1390581822-32624-4-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-4-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 3/8] xen/arm: IRQ: Protect IRQ to be
 shared between domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> The current dt_route_irq_to_guest implementation set IRQ_GUEST no matter if the
> IRQ is correctly setup.
> 
> As IRQ can be shared between devices, if the devices are not assigned to the
> same domain or Xen, this could result to IRQ route to the domain instead of
> Xen ...
> 
> Also avoid to rely on wrong behaviour when Xen is routing an IRQ to DOM0.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
>     Hopefully, none of the supported platforms have UARTs (the only device

                                                     ^shared?

Other than wondering if EBUSY might be more natural than EADDRINUSE and
some grammar nits (below) I think this patch looks good.

>     currently used by Xen). It would be nice to have this patch for Xen 4.4 to
>     avoid waste of time for developer.
> 
>     The downside of this patch is if someone wants to support a such platform
>     (eg IRQ shared between device assigned to different domain/XEN), it will
>     end up to a error message and a panic.
> ---
>  xen/arch/arm/domain_build.c |    8 ++++++--
>  xen/arch/arm/gic.c          |   40 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 45 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 47b781b..1fc359a 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -712,8 +712,12 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
>          }
>  
>          DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
> -        /* Don't check return because the IRQ can be use by multiple device */
> -        gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
> +        res = gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
> +        if ( res )
> +        {
> +            printk(XENLOG_ERR "Unable to route the IRQ %u to dom0\n", irq.irq);

"Unable to route IRQ %u..." and I think you want to use d->domain_id
rather than hardcoding 0.
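The suggestion above can be sketched standalone. This is a minimal illustration with a stub `struct domain` and a hypothetical `format_route_error()` helper (not actual Xen code): the error message takes the target domain's id from `d` instead of hardcoding dom0.

```c
#include <stdio.h>

/* Stub stand-in for Xen's struct domain -- hypothetical, for illustration only. */
struct domain {
    unsigned int domain_id;
};

/* Format the routing-failure message as the review suggests: report the
 * actual target domain (d->domain_id) rather than assuming dom0. */
int format_route_error(char *buf, size_t len,
                       const struct domain *d, unsigned int irq)
{
    return snprintf(buf, len, "Unable to route IRQ %u to domain %u\n",
                    irq, d->domain_id);
}
```

With a domain other than dom0 the message now names the right target, which matters once `map_device()` is reused beyond dom0 building.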

> +            return res;
> +        }
>      }
>  
>      /* Map the address ranges */
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 55e7622..d68bde3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -605,6 +605,21 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      desc = irq_to_desc(irq->irq);
>  
>      spin_lock_irqsave(&desc->lock, flags);
> +
> +    if ( desc->status & IRQ_GUEST )
> +    {
> +        struct domain *d;
> +
> +        ASSERT(desc->action != NULL);
> +
> +        d = desc->action->dev_id;
> +
> +        spin_unlock_irqrestore(&desc->lock, flags);
> +        printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",

"is already in use by domain ..."



> +               irq->irq, d->domain_id);
> +        return -EADDRINUSE;
> +    }
> +
>      rc = __setup_irq(desc, irq->irq, new);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  
> @@ -759,7 +774,7 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>      struct irqaction *action;
>      struct irq_desc *desc = irq_to_desc(irq->irq);
>      unsigned long flags;
> -    int retval;
> +    int retval = 0;
>      bool_t level;
>      struct pending_irq *p;
>  
> @@ -773,6 +788,29 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>  
>      spin_lock_irqsave(&desc->lock, flags);
>  
> +    /* If the IRQ is already used by someone
> +     *  - If it's the same domain -> Xen doesn't need to update the IRQ desc
> +     *  - Otherwise -> For now, don't allow the IRQ to be shared between
> +     *  Xen and domains.
> +     */
> +    if ( desc->action != NULL )
> +    {
> +        if ( (desc->status & IRQ_GUEST) && d == desc->action->dev_id )
> +            goto out;
> +
> +        if ( desc->status & IRQ_GUEST )
> +        {
> +            d = desc->action->dev_id;
> +            printk(XENLOG_ERR "ERROR: IRQ %u is already used by the domain %u\n",
> +                   irq->irq, d->domain_id);

s/the //

> +        }
> +        else
> +            printk(XENLOG_ERR "ERROR: IRQ %u is already used by Xen\n",
> +                   irq->irq);
> +        retval = -EADDRINUSE;
> +        goto out;
> +    }
> +
>      desc->handler = &gic_guest_irq_type;
>      desc->status |= IRQ_GUEST;
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:45:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5aj-0005CP-4v; Wed, 19 Feb 2014 11:45:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5aZ-0005CK-9Q
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:45:24 +0000
Received: from [85.158.143.35:24228] by server-2.bemta-4.messagelabs.com id
	88/45-10891-E4994035; Wed, 19 Feb 2014 11:45:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392810315!6794627!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23553 invoked from network); 19 Feb 2014 11:45:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:45:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102144630"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:45:14 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:45:14 -0500
Message-ID: <1392810312.29739.11.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:45:12 +0000
In-Reply-To: <1390581822-32624-5-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-5-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 4/8] xen/arm: irq: Don't need to
 have a specific function to route IRQ to Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> Actually, when the IRQ is handling by Xen, the setup is done in 2 steps:

s/Actually, //

I'd also go with a title like "remove need to have specific..." or
"remove function to route...".

>     - Route the IRQ to the current CPU and set priorities
>     - Set up the handler
> 
> For PPIs, these step are called on every cpus. For SPIs, it's called only

                     ^s                    cpu             they are only called

> on the boot CPU.
> 
> Divided the setup in two step complicated the code when a new driver is

Dividing           into two steps complicates

> added by Xen (for instance a SMMU driver). Xen can safely route the IRQ

       to Xen

> when the driver setup the interrupt handler.

                 sets up

Although for this final para I'm not sure why a new driver is needed --
either it is already complicated or not.

> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/gic.c         |   67 +++++++++++++++-----------------------------
>  xen/arch/arm/setup.c       |    3 --
>  xen/arch/arm/smpboot.c     |    2 --
>  xen/arch/arm/time.c        |   11 --------
>  xen/include/asm-arm/gic.h  |    7 -----
>  xen/include/asm-arm/time.h |    3 --
>  6 files changed, 23 insertions(+), 70 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index d68bde3..58bcba3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -254,43 +254,25 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
>      spin_unlock(&gic.lock);
>  }
>  
> -/* Program the GIC to route an interrupt */
> +/* Program the GIC to route an interrupt to the host (eg Xen)
> + * - needs to be called with desc.lock held

This suggests that the caller must have desc in its hand, but it then
passes irq here so we can look it up again. It may as well pass desc I
think.
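The point about passing desc can be sketched with stub types (hypothetical names, not the actual Xen structures): when the caller already holds the descriptor (and its lock), handing the callee the `struct irq_desc *` directly avoids repeating the `irq_to_desc()` lookup inside the callee.

```c
/* Minimal stand-ins for Xen's irq_desc machinery -- hypothetical. */
#define NR_IRQS 16

struct irq_desc {
    unsigned int irq;
    unsigned int status;
};

static struct irq_desc irq_table[NR_IRQS];

/* The lookup the caller has already done once. */
struct irq_desc *irq_to_desc(unsigned int irq)
{
    return &irq_table[irq];
}

/* Sketch of the suggested shape: take desc directly (caller holds
 * desc->lock), so no second irq-number-to-desc lookup is needed here. */
void gic_route_irq_desc(struct irq_desc *desc, unsigned int status_bits)
{
    desc->status |= status_bits;
}
```

The caller then does `gic_route_irq_desc(desc, ...)` inside the region where it already holds `desc`, instead of passing the raw irq number back down.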

>  void __init release_irq(unsigned int irq)
>  {
>      struct irq_desc *desc;
> @@ -601,6 +561,7 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      int rc;
>      unsigned long flags;
>      struct irq_desc *desc;
> +    bool_t disabled = 0;
>  
>      desc = irq_to_desc(irq->irq);
>  
> @@ -620,6 +581,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>          return -EADDRINUSE;
>      }
>  
> +    disabled = (desc->action == NULL);
> +
> +    /* First time the IRQ is setup */
> +    if ( disabled )
> +    {
> +        bool_t level;
> +
> +        level = dt_irq_is_level_triggered(irq);
> +        /* It's fine to use smp_processor_id() because:
> +         * For PPI: irq_desc is banked
> +         * For SGI: we don't care for now which CPU will receive the
> +         * interrupt
> +         * TODO: Handle case where SGI is setup on different CPU than
> +         * the targeted CPU and the priority.

Do you mean s/SGI/SPI/ here? SGIs are inherently per CPU, like PPIs.

> +         */
> +        gic_route_irq(irq->irq, level, cpumask_of(smp_processor_id()), 0xa0);
> +    }
> +
>      rc = __setup_irq(desc, irq->irq, new);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>      spin_unlock(&gic.lock);
>  }
>  
> -/* Program the GIC to route an interrupt */
> +/* Program the GIC to route an interrupt to the host (eg Xen)
> + * - needs to be called with desc.lock held

This suggests that the caller must have desc in its hand, but it then
passes irq here so we can look it up again. It may as well pass desc I
think.
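For illustration, the change being suggested here would look something like the following prototype (hypothetical: the function name and parameter list are a sketch based on the quoted hunk, not taken from the actual patch):

```c
/* Hypothetical reworked prototype: the caller already holds
 * desc->lock, so pass the desc it has in hand rather than
 * passing irq and looking the desc up again inside. */
static void gic_route_irq_to_xen(struct irq_desc *desc, bool_t level,
                                 const cpumask_t *cpu_mask,
                                 unsigned int priority);
```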

>  void __init release_irq(unsigned int irq)
>  {
>      struct irq_desc *desc;
> @@ -601,6 +561,7 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      int rc;
>      unsigned long flags;
>      struct irq_desc *desc;
> +    bool_t disabled = 0;
>  
>      desc = irq_to_desc(irq->irq);
>  
> @@ -620,6 +581,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>          return -EADDRINUSE;
>      }
>  
> +    disabled = (desc->action == NULL);
> +
> +    /* First time the IRQ is setup */
> +    if ( disabled )
> +    {
> +        bool_t level;
> +
> +        level = dt_irq_is_level_triggered(irq);
> +        /* It's fine to use smp_processor_id() because:
> +         * For PPI: irq_desc is banked
> +         * For SGI: we don't care for now which CPU will receive the
> +         * interrupt
> +         * TODO: Handle case where SGI is setup on different CPU than
> +         * the targeted CPU and the priority.

Do you mean s/SGI/SPI/ here? SGIs are inherently per CPU, like PPIs.

> +         */
> +        gic_route_irq(irq->irq, level, cpumask_of(smp_processor_id()), 0xa0);
> +    }
> +
>      rc = __setup_irq(desc, irq->irq, new);
>      spin_unlock_irqrestore(&desc->lock, flags);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:47:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5ch-0005Gv-QC; Wed, 19 Feb 2014 11:47:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WG5ce-0005Gg-Og
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:47:29 +0000
Received: from [85.158.139.211:13951] by server-5.bemta-5.messagelabs.com id
	E4/90-32749-0D994035; Wed, 19 Feb 2014 11:47:28 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392810445!4919098!1
X-Originating-IP: [209.85.219.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30734 invoked from network); 19 Feb 2014 11:47:27 -0000
Received: from mail-oa0-f43.google.com (HELO mail-oa0-f43.google.com)
	(209.85.219.43)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:47:27 -0000
Received: by mail-oa0-f43.google.com with SMTP id h16so263218oag.30
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 03:47:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=WMQoWckB4KnKY613ecGjhwndzggz0nO6DoWn4EnsPxQ=;
	b=BpAnZJEF/xx92/rqXTIsmu663AhmxVVwf0v3GBs7ehZrqcVvdgaKpz7Dqh2yKuqgzx
	U8WXyR0eR5bh2G6uVy3iBe0P8PyupdZS3+MrKfgNkk+zUzI/Cc0+iPVysjwANLODA264
	4cYeSNjJCjNjeFpeQFe+7oAiSWEKH/vKl+vKI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=WMQoWckB4KnKY613ecGjhwndzggz0nO6DoWn4EnsPxQ=;
	b=S+e1reVoUfVE0zpEuLmenNIWLPlvTXds+xTACN9O5z0czAa5dPkdJoaJS49wyY4jee
	MMR07AxW5/SOx3uXx/Ho5UaxdHJf9Y5YgqU38O/QYDApiFPkiFmnRf3d4T+ADAh+7lCM
	5HsZumIyj7mjI5cTAmuAu7pBvWw/CNGMJKuZ17lwBK2D8zYOppZdZAke1mO5kEBrLf8/
	fp0DQ2+M12OTcoKN5iXnUf3UeLB2xLqnAc6UaCILSFIe1aZ0gdow144UQnyEsknmFFtv
	o/YNnxjS6rvWe3GVuaOI8CD2qzcEmxZx0HqmuhJKuFBcSpYwIv8/Xg2nihhJKw2oa7VI
	8rdg==
X-Gm-Message-State: ALoCoQmqwFWlUKCeRNeOt5TOLSq1W7PaubLCszdDRRMOnA+DblneLXwfa9Orakd2azJ4wOKxpIlF
X-Received: by 10.182.102.7 with SMTP id fk7mr30621840obb.28.1392810445577;
	Wed, 19 Feb 2014 03:47:25 -0800 (PST)
MIME-Version: 1.0
Received: by 10.76.160.104 with HTTP; Wed, 19 Feb 2014 03:47:10 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <20140217105641.GA13535@router-fw-old.local.net-space.pl>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Wed, 19 Feb 2014 15:47:10 +0400
X-Google-Sender-Auth: apwbD7ZOzeWf07C63WGA1O_EIhY
Message-ID: <CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
To: Daniel Kiper <dkiper@net-space.pl>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-17 14:56 GMT+04:00 Daniel Kiper <dkiper@net-space.pl>:
> They should. IIRC they were targeted for 4.3.


Daniel, can you check this:
https://gist.github.com/vtolstov/9090413/raw/dc457631fe0b6df793552494d059eac62fd962e0/gistfile1.txt
This is a patch that merges the 4 patches from the email thread (because I
can't get the attachments, and after copy/paste I get errors).
As far as I can see, nothing has changed. I'm using enforce=0 in xl.conf and
starting the domain with memory=512 and maxmem=1024.


-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:47:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:47:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5cr-0005IY-R2; Wed, 19 Feb 2014 11:47:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WG5cq-0005IF-A0
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:47:40 +0000
Received: from [85.158.139.211:18221] by server-17.bemta-5.messagelabs.com id
	BE/5F-31975-BD994035; Wed, 19 Feb 2014 11:47:39 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392810457!958292!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32008 invoked from network); 19 Feb 2014 11:47:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:47:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102144869"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:47:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:47:36 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WG5cl-0004lk-E2; Wed, 19 Feb 2014 11:47:35 +0000
Message-ID: <530499D7.70902@citrix.com>
Date: Wed, 19 Feb 2014 11:47:35 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
	<1392807985.23084.132.camel@kazak.uk.xensource.com>
In-Reply-To: <1392807985.23084.132.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, --cc=paul.durrant@citrix.com,
	wei.liu2@citrix.com, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
	front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 11:06, Ian Campbell wrote:
> On Mon, 2014-02-17 at 18:01 +0000, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Document the multi-queue feature in terms of XenStore keys to be written
>> by the backend and by the frontend.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   xen/include/public/io/netif.h |   21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
>> index d7fb771..90be2fc 100644
>> --- a/xen/include/public/io/netif.h
>> +++ b/xen/include/public/io/netif.h
>> @@ -69,6 +69,27 @@
>>    */
>>
>>   /*
>> + * Multiple transmit and receive queues:
>> + * If supported, the backend will write "multi-queue-max-queues" and set its
>> + * value to the maximum supported number of queues.
>> + * Frontends that are aware of this feature and wish to use it can write the
>> + * key "multi-queue-num-queues", set to the number they wish to use.
>> + *
>> + * Queues replicate the shared rings and event channels, and
>> + * "feature-split-event-channels" is required when using multiple queues.
>> + *
>> + * For frontends requesting just one queue, the usual event-channel and
>> + * ring-ref keys are written as before, simplifying the backend processing
>> + * to avoid distinguishing between a frontend that doesn't understand the
>> + * multi-queue feature, and one that does, but requested only one queue.
>> + *
>> + * Frontends requesting two or more queues must not write the toplevel
>> + * event-channel and ring-ref keys, instead writing them under sub-keys having
>> + * the name "queue-N" where N is the integer ID of the queue for which those
>> + * keys belong. Queues are indexed from zero.
>
> If "feature-split-event-channels" is required then I think what should
> be written is queue-N/event-channel-{tx,rx} and
> queue-N/{tx,rx}-ring-ref, rather than queue-N/{event-channel,ring-ref}
> as the final paragraph sort of implies?
I can change the wording to make this more clear.

>
> (what a shame we have event-channel-DIR and DIR-ring-ref, oh well!)
>
> Is it required to have the same number of RX and TX queues?
Strictly speaking, no. But the implementation assumes this to be the
case. Since the code already sets up one pair, simply looping over this
a sufficient number of times to create N pairs was a fairly sane approach.
In practice, if you have an asymmetry between RX and TX queues, you will
end up hitting a bottleneck sooner in one direction than in the other,
which seems impractical.

>
> Are there any other properties/behaviours which should be documented,
> e.g. relating to the selection of which queue to use for a given frame
> (on either TX or RX)? If not and it is up to the relevant end to do what
> it wants then I think it would be useful to say so.
There are no other requirements. The current implementation will 
transmit anything it cannot hash sensibly on queue 0, but this is an 
arbitrary choice (albeit a sensible one, since queue 0 should always 
exist). I'll document this.
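For concreteness, the XenStore layout under discussion would look something like the sketch below. This is a hedged illustration only: the key names follow the queue-N/event-channel-{tx,rx} and queue-N/{tx,rx}-ring-ref form Ian suggests above, and the paths, domain ID, and values are made up.

```
# Hypothetical frontend area for a vif requesting 2 queues
# (assumes feature-split-event-channels is in use):
/local/domain/1/device/vif/0/multi-queue-num-queues = "2"
/local/domain/1/device/vif/0/queue-0/tx-ring-ref = "8"
/local/domain/1/device/vif/0/queue-0/rx-ring-ref = "9"
/local/domain/1/device/vif/0/queue-0/event-channel-tx = "15"
/local/domain/1/device/vif/0/queue-0/event-channel-rx = "16"
/local/domain/1/device/vif/0/queue-1/tx-ring-ref = "10"
/local/domain/1/device/vif/0/queue-1/rx-ring-ref = "11"
/local/domain/1/device/vif/0/queue-1/event-channel-tx = "17"
/local/domain/1/device/vif/0/queue-1/event-channel-rx = "18"
```

Note that a frontend requesting only one queue would instead write the usual toplevel event-channel and ring-ref keys, as described in the quoted header comment.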

>
>> + */
>> +
>> +/*
>>    * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
>>    * offload off or on. If it is missing then the feature is assumed to be on.
>>    * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:48:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5dK-0005Mf-DH; Wed, 19 Feb 2014 11:48:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5dA-0005MN-Rn
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:48:01 +0000
Received: from [193.109.254.147:14796] by server-13.bemta-14.messagelabs.com
	id F1/2D-01226-FE994035; Wed, 19 Feb 2014 11:47:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392810478!100989!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5906 invoked from network); 19 Feb 2014 11:47:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:47:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102144914"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:47:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:47:57 -0500
Message-ID: <1392810475.29739.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:47:55 +0000
In-Reply-To: <1390581822-32624-6-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-6-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 5/8] xen/arm: IRQ: rename
 release_irq in release_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:

Subject: s/in/to/

> Rename the function and make the prototype consistent with request_dt_irq.
> 
> The new parameter (dev_id) will be used in a later patch to release the right
> action when the support for multiple action will be added.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/gic.c        |    4 ++--
>  xen/include/asm-arm/irq.h |    1 +
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 58bcba3..2643b46 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -520,13 +520,13 @@ void gic_disable_cpu(void)
>      spin_unlock(&gic.lock);
>  }
>  
> -void __init release_irq(unsigned int irq)
> +void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
>  {
>      struct irq_desc *desc;
>      unsigned long flags;
>     struct irqaction *action;
>  
> -    desc = irq_to_desc(irq);
> +    desc = irq_to_desc(irq->irq);
>  
>      desc->handler->shutdown(desc);
>  
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index 7c20703..bd8aac1 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -44,6 +44,7 @@ int __init request_dt_irq(const struct dt_irq *irq,
>                            void (*handler)(int, void *, struct cpu_user_regs *),
>                            const char *devname, void *dev_id);
>  int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new);

This patch implies that things can now be dynamically registered and
unregistered -- does this therefore need to become non-__init?

> +void release_dt_irq(const struct dt_irq *irq, const void *dev_id);
>  
>  #endif /* _ASM_HW_IRQ_H */
>  /*



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:48:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5dK-0005Mf-DH; Wed, 19 Feb 2014 11:48:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5dA-0005MN-Rn
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:48:01 +0000
Received: from [193.109.254.147:14796] by server-13.bemta-14.messagelabs.com
	id F1/2D-01226-FE994035; Wed, 19 Feb 2014 11:47:59 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392810478!100989!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5906 invoked from network); 19 Feb 2014 11:47:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:47:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102144914"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:47:57 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:47:57 -0500
Message-ID: <1392810475.29739.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:47:55 +0000
In-Reply-To: <1390581822-32624-6-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-6-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 5/8] xen/arm: IRQ: rename
 release_irq in release_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:

Subject: s/in/to/

> Rename the function and make the prototype consistent with request_dt_irq.
> 
> The new parameter (dev_id) will be used in a later patch to release the right
> action when the support for multiple action will be added.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/gic.c        |    4 ++--
>  xen/include/asm-arm/irq.h |    1 +
>  2 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 58bcba3..2643b46 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -520,13 +520,13 @@ void gic_disable_cpu(void)
>      spin_unlock(&gic.lock);
>  }
>  
> -void __init release_irq(unsigned int irq)
> +void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
>  {
>      struct irq_desc *desc;
>      unsigned long flags;
>     struct irqaction *action;
>  
> -    desc = irq_to_desc(irq);
> +    desc = irq_to_desc(irq->irq);
>  
>      desc->handler->shutdown(desc);
>  
> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
> index 7c20703..bd8aac1 100644
> --- a/xen/include/asm-arm/irq.h
> +++ b/xen/include/asm-arm/irq.h
> @@ -44,6 +44,7 @@ int __init request_dt_irq(const struct dt_irq *irq,
>                            void (*handler)(int, void *, struct cpu_user_regs *),
>                            const char *devname, void *dev_id);
>  int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new);

This patch implies that things can now be dynamically registered and
unregistered -- does this therefore need to become non-__init?

> +void release_dt_irq(const struct dt_irq *irq, const void *dev_id);
>  
>  #endif /* _ASM_HW_IRQ_H */
>  /*



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:51:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5gU-0005ff-KD; Wed, 19 Feb 2014 11:51:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5gT-0005fW-8N
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:51:25 +0000
Received: from [193.109.254.147:34825] by server-5.bemta-14.messagelabs.com id
	B4/BB-16688-CBA94035; Wed, 19 Feb 2014 11:51:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392810682!5408450!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11338 invoked from network); 19 Feb 2014 11:51:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:51:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102145797"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:51:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:51:17 -0500
Message-ID: <1392810675.29739.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:51:15 +0000
In-Reply-To: <1390581822-32624-7-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
 contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> When multiple action will be supported, gic_irq_{startup,shutdown} will have
> to be called in the same critical zone has setup/release.

s/has/as/?

> Otherwise it could have a race condition if at the same time CPU A is calling
> release_dt_irq and CPU B is calling setup_dt_irq.
> 
> This could end up to the IRQ is not enabled.

"This could end up with the IRQ not being enabled."

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> ---
>  xen/arch/arm/gic.c |   40 +++++++++++++++++++++++++---------------
>  1 file changed, 25 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index 2643b46..ebd2b5f 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -129,43 +129,53 @@ void gic_restore_state(struct vcpu *v)
>      gic_restore_pending_irqs(v);
>  }
>  
> -static void gic_irq_enable(struct irq_desc *desc)
> +static unsigned int gic_irq_startup(struct irq_desc *desc)

unsigned? What are the error codes here going to be?

>  {
>      int irq = desc->irq;
> -    unsigned long flags;
>  
> -    spin_lock_irqsave(&desc->lock, flags);
> +    ASSERT(spin_is_locked(&desc->lock));
> +    ASSERT(!local_irq_is_enabled());
> +
>      spin_lock(&gic.lock);
>      desc->status &= ~IRQ_DISABLED;
>      dsb();
>      /* Enable routing */
>      GICD[GICD_ISENABLER + irq / 32] = (1u << (irq % 32));
>      spin_unlock(&gic.lock);
> -    spin_unlock_irqrestore(&desc->lock, flags);
> +
> +    return 0;
>  }
>  
> -static void gic_irq_disable(struct irq_desc *desc)
> +static void gic_irq_shutdown(struct irq_desc *desc)
>  {
>      int irq = desc->irq;
>  
> -    spin_lock(&desc->lock);
> +    ASSERT(spin_is_locked(&desc->lock));
> +    ASSERT(!local_irq_is_enabled());
> +
>      spin_lock(&gic.lock);
>      /* Disable routing */
>      GICD[GICD_ICENABLER + irq / 32] = (1u << (irq % 32));
>      desc->status |= IRQ_DISABLED;
>      spin_unlock(&gic.lock);
> -    spin_unlock(&desc->lock);
>  }
>  
> -static unsigned int gic_irq_startup(struct irq_desc *desc)
> +static void gic_irq_enable(struct irq_desc *desc)
>  {
> -    gic_irq_enable(desc);
> -    return 0;
> +    unsigned long flags;
> +
> +    spin_lock_irqsave(&desc->lock, flags);
> +    gic_irq_startup(desc);
> +    spin_unlock_irqrestore(&desc->lock, flags);
>  }
>  
> -static void gic_irq_shutdown(struct irq_desc *desc)
> +static void gic_irq_disable(struct irq_desc *desc)
>  {
> -    gic_irq_disable(desc);
> +    unsigned long flags;
> +
> +    spin_lock_irqsave(&desc->lock, flags);
> +    gic_irq_shutdown(desc);
> +    spin_unlock_irqrestore(&desc->lock, flags);
>  }
>  
>  static void gic_irq_ack(struct irq_desc *desc)
> @@ -528,9 +538,8 @@ void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
>  
>      desc = irq_to_desc(irq->irq);
>  
> -    desc->handler->shutdown(desc);
> -
>      spin_lock_irqsave(&desc->lock,flags);
> +    desc->handler->shutdown(desc);
>      action = desc->action;
>      desc->action  = NULL;
>      desc->status &= ~IRQ_GUEST;
> @@ -600,11 +609,12 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>      }
>  
>      rc = __setup_irq(desc, irq->irq, new);
> -    spin_unlock_irqrestore(&desc->lock, flags);
>  
>      if ( !rc )
>          desc->handler->startup(desc);
>  
> +    spin_unlock_irqrestore(&desc->lock, flags);
> +
>      return rc;
>  }
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:53:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5iW-0005oU-Dv; Wed, 19 Feb 2014 11:53:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WG5iT-0005oC-TD
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 11:53:30 +0000
Received: from [85.158.143.35:14721] by server-3.bemta-4.messagelabs.com id
	D1/C9-11539-93B94035; Wed, 19 Feb 2014 11:53:29 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392810806!6795124!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29448 invoked from network); 19 Feb 2014 11:53:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:53:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103855865"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 11:53:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:53:23 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WG5iL-0004rZ-Ra;
	Wed, 19 Feb 2014 11:53:21 +0000
Date: Wed, 19 Feb 2014 11:53:16 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Michael S. Tsirkin" <mst@redhat.com>
In-Reply-To: <20140219092904.GA11116@redhat.com>
Message-ID: <alpine.DEB.2.02.1402191153090.15812@kaball.uk.xensource.com>
References: <1392050814-31814-1-git-send-email-mst@redhat.com>
	<CAFEAcA8SGApDRa06BJSqU_uxrQsG35KAoPUU+P_Liy0cr5hO+g@mail.gmail.com>
	<alpine.DEB.2.02.1402181208470.27926@kaball.uk.xensource.com>
	<530351B4.1010402@redhat.com>
	<alpine.DEB.2.02.1402181231090.27926@kaball.uk.xensource.com>
	<53035BD8.6050805@redhat.com>
	<alpine.DEB.2.02.1402181421310.27926@kaball.uk.xensource.com>
	<53036DA8.3080804@redhat.com>
	<alpine.DEB.2.02.1402181709350.15812@kaball.uk.xensource.com>
	<20140219090825.GA10646@redhat.com>
	<20140219092904.GA11116@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	QEMU Developers <qemu-devel@nongnu.org>, armbru@redhat.com,
	Anthony Liguori <aliguori@amazon.com>, Anthony.Perard@citrix.com,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [PULL 00/20] acpi,pc,pci fixes and enhancements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Feb 2014, Michael S. Tsirkin wrote:
> On Wed, Feb 19, 2014 at 11:08:25AM +0200, Michael S. Tsirkin wrote:
> > On Tue, Feb 18, 2014 at 05:10:00PM +0000, Stefano Stabellini wrote:
> > > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > > Il 18/02/2014 15:25, Stefano Stabellini ha scritto:
> > > > > On Tue, 18 Feb 2014, Paolo Bonzini wrote:
> > > > > > Il 18/02/2014 13:45, Stefano Stabellini ha scritto:
> > > > > > > Disk unplug: hw/ide/piix.c:pci_piix3_xen_ide_unplug (see the beginning
> > > > > > > of the email :-P).
> > > > > > > It is called by hw/xen/xen_platform.c:platform_fixed_ioport_writew, in
> > > > > > > response to the guest writing to a magic ioport specifically to unplug
> > > > > > > the emulated disk.
> > > > > > > With this patch after the guest boots I can still access both xvda and
> > > > > > > sda for the same disk, leading to fs corruptions.
> > > > > > 
> > > > > > Ok, the last paragraph is what I was missing.
> > > > > > 
> > > > > > So this is dc->unplug for the PIIX3 IDE device.  Because PCI declares a
> > > > > > hotplug handler, dc->unplug is not called anymore.
> > > > > > 
> > > > > > But unlike other dc->unplug callbacks, pci_piix3_xen_ide_unplug doesn't
> > > > > > free
> > > > > > the device, it just drops the disks underneath.  I think the simplest
> > > > > > solution
> > > > > > is to _not_ make it a dc->unplug callback at all, and call
> > > > > > pci_piix3_xen_ide_unplug from unplug_disks instead of qdev_unplug.
> > > > > > qdev_unplug means "ask guest to start unplug", which is not what Xen wants
> > > > > > to
> > > > > > do here.
> > > > > 
> > > > > Yes, you are right, pci_piix3_xen_ide_unplug is not called anymore.
> > > > > Calling it directly from unplug_disks fixes the issue:
> > > > > 
> > > > > 
> > > > > ---
> > > > > 
> > > > > Call pci_piix3_xen_ide_unplug from unplug_disks
> > > > > 
> > > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > > > > 
> > > > > diff --git a/hw/ide/piix.c b/hw/ide/piix.c
> > > > > index 0eda301..40757eb 100644
> > > > > --- a/hw/ide/piix.c
> > > > > +++ b/hw/ide/piix.c
> > > > > @@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
> > > > >      return 0;
> > > > >  }
> > > > > 
> > > > > -static int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > > > +int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > > >  {
> > > > >      PCIIDEState *pci_ide;
> > > > >      DriveInfo *di;
> > > > > @@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass,
> > > > > void *data)
> > > > >      k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
> > > > >      k->class_id = PCI_CLASS_STORAGE_IDE;
> > > > >      set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > > > > -    dc->unplug = pci_piix3_xen_ide_unplug;
> > > > >  }
> > > > > 
> > > > >  static const TypeInfo piix3_ide_xen_info = {
> > > > > diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
> > > > > index 70875e4..1d9d0e9 100644
> > > > > --- a/hw/xen/xen_platform.c
> > > > > +++ b/hw/xen/xen_platform.c
> > > > > @@ -27,6 +27,7 @@
> > > > > 
> > > > >  #include "hw/hw.h"
> > > > >  #include "hw/i386/pc.h"
> > > > > +#include "hw/ide.h"
> > > > >  #include "hw/pci/pci.h"
> > > > >  #include "hw/irq.h"
> > > > >  #include "hw/xen/xen_common.h"
> > > > > @@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void
> > > > > *o)
> > > > >      if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> > > > >              PCI_CLASS_STORAGE_IDE
> > > > >              && strcmp(d->name, "xen-pci-passthrough") != 0) {
> > > > > -        qdev_unplug(DEVICE(d), NULL);
> > > > > +        pci_piix3_xen_ide_unplug(DEVICE(d));
> > > > >      }
> > > > >  }
> > > > > 
> > > > > diff --git a/include/hw/ide.h b/include/hw/ide.h
> > > > > index 507e6d3..bc8bd32 100644
> > > > > --- a/include/hw/ide.h
> > > > > +++ b/include/hw/ide.h
> > > > > @@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo
> > > > > **hd_table,
> > > > >  PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > > > devfn);
> > > > >  PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > > > devfn);
> > > > >  PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > > > devfn);
> > > > > +int pci_piix3_xen_ide_unplug(DeviceState *dev);
> > > > >  void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > > > 
> > > > >  /* ide-mmio.c */
> > > > > 
> > > > 
> > > > Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> > > 
> > > Thanks. Should I send it to Peter via the xen tree or anybody else wants
> > > to pick this up?
> > 
> > I'll rebase my tree on top of this, to avoid bisect failures.
> 
> Oh sorry, I noticed what broke this is - is merged already.
> Pls merge fix through xen tree then, makes more sense.

All right, thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > > > > index 0eda301..40757eb 100644
> > > > > --- a/hw/ide/piix.c
> > > > > +++ b/hw/ide/piix.c
> > > > > @@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
> > > > >      return 0;
> > > > >  }
> > > > > 
> > > > > -static int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > > > +int pci_piix3_xen_ide_unplug(DeviceState *dev)
> > > > >  {
> > > > >      PCIIDEState *pci_ide;
> > > > >      DriveInfo *di;
> > > > > @@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass,
> > > > > void *data)
> > > > >      k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
> > > > >      k->class_id = PCI_CLASS_STORAGE_IDE;
> > > > >      set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
> > > > > -    dc->unplug = pci_piix3_xen_ide_unplug;
> > > > >  }
> > > > > 
> > > > >  static const TypeInfo piix3_ide_xen_info = {
> > > > > diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
> > > > > index 70875e4..1d9d0e9 100644
> > > > > --- a/hw/xen/xen_platform.c
> > > > > +++ b/hw/xen/xen_platform.c
> > > > > @@ -27,6 +27,7 @@
> > > > > 
> > > > >  #include "hw/hw.h"
> > > > >  #include "hw/i386/pc.h"
> > > > > +#include "hw/ide.h"
> > > > >  #include "hw/pci/pci.h"
> > > > >  #include "hw/irq.h"
> > > > >  #include "hw/xen/xen_common.h"
> > > > > @@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void
> > > > > *o)
> > > > >      if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
> > > > >              PCI_CLASS_STORAGE_IDE
> > > > >              && strcmp(d->name, "xen-pci-passthrough") != 0) {
> > > > > -        qdev_unplug(DEVICE(d), NULL);
> > > > > +        pci_piix3_xen_ide_unplug(DEVICE(d));
> > > > >      }
> > > > >  }
> > > > > 
> > > > > diff --git a/include/hw/ide.h b/include/hw/ide.h
> > > > > index 507e6d3..bc8bd32 100644
> > > > > --- a/include/hw/ide.h
> > > > > +++ b/include/hw/ide.h
> > > > > @@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo
> > > > > **hd_table,
> > > > >  PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > > > devfn);
> > > > >  PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > > > devfn);
> > > > >  PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int
> > > > > devfn);
> > > > > +int pci_piix3_xen_ide_unplug(DeviceState *dev);
> > > > >  void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
> > > > > 
> > > > >  /* ide-mmio.c */
> > > > > 
> > > > 
> > > > Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> > > 
> > > Thanks. Should I send it to Peter via the xen tree or anybody else wants
> > > to pick this up?
> > 
> > I'll rebase my tree on top of this, to avoid bisect failures.
> 
> Oh sorry, I noticed that the change which broke this is already merged.
> Please merge the fix through the xen tree then; that makes more sense.

All right, thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:55:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5k8-0005zc-3L; Wed, 19 Feb 2014 11:55:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5k7-0005zP-64
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:55:11 +0000
Received: from [85.158.139.211:7963] by server-14.bemta-5.messagelabs.com id
	83/C7-27598-E9B94035; Wed, 19 Feb 2014 11:55:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392810908!4910307!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6676 invoked from network); 19 Feb 2014 11:55:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:55:09 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102146479"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:55:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:55:07 -0500
Message-ID: <1392810905.29739.19.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:55:05 +0000
In-Reply-To: <1390581822-32624-8-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, Keir Fraser <keir@xen.org>,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> On ARM, it may be necessary (e.g. for the ARM SMMU) to set up multiple
> handlers for the same interrupt.

Mention here that you are therefore creating a linked list of actions
for each interrupt.

If you use xen/list.h for this then you get a load of helpers and
iterators which would save you open coding them.

Some discussion of the behaviour wrt acking the physical interrupt might
also be interesting, especially in the case where the shared IRQ is
routed to the guest and to the hypervisor or to multiple guests.

> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> CC: Keir Fraser <keir@xen.org>
> ---
>  xen/arch/arm/gic.c    |   48 ++++++++++++++++++++++++++++++++++++++++--------
>  xen/arch/arm/irq.c    |    6 +++++-
>  xen/include/xen/irq.h |    1 +
>  3 files changed, 46 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index ebd2b5f..8ba1de3 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -534,32 +534,64 @@ void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
>  {
>      struct irq_desc *desc;
>      unsigned long flags;
> -   struct irqaction *action;
> +    struct irqaction *action, **action_ptr;
>  
>      desc = irq_to_desc(irq->irq);
>  
>      spin_lock_irqsave(&desc->lock,flags);
>      desc->handler->shutdown(desc);
>      action = desc->action;
> -    desc->action  = NULL;
> -    desc->status &= ~IRQ_GUEST;
> +
> +    action_ptr = &desc->action;
> +    for ( ;; )
> +    {
> +        action = *action_ptr;
> +
> +        if ( !action )
> +        {
> +            printk(XENLOG_WARNING "Trying to free already-free IRQ %u\n",
> +                   irq->irq);
> +            return;
> +        }
> +
> +        if ( action->dev_id == dev_id )
> +            break;
> +
> +        action_ptr = &action->next;
> +    }
> +
> +    /* Found it - remove it from the action list */
> +    *action_ptr = action->next;
> +
> +    /* If this was the last action, shut down the IRQ */
> +    if ( !desc->action )
> +    {
> +        desc->handler->shutdown(desc);
> +        desc->status &= ~IRQ_GUEST;
> +    }
>  
>      spin_unlock_irqrestore(&desc->lock,flags);
>  
>      /* Wait to make sure it's not being used on another CPU */
>      do { smp_mb(); } while ( desc->status & IRQ_INPROGRESS );
>  
> -    if (action && action->free_on_release)
> +    if ( action && action->free_on_release )
>          xfree(action);
>  }
>  
>  static int __setup_irq(struct irq_desc *desc, unsigned int irq,
>                         struct irqaction *new)
>  {
> -    if ( desc->action != NULL )
> -        return -EBUSY;
> +    struct irqaction *action = desc->action;
> +
> +    ASSERT(new != NULL);
> +
> +    /* Check that dev_id is correctly filled if we have multiple action */
> +    if ( action != NULL && ( action->dev_id == NULL || new->dev_id == NULL ) )
> +        return -EINVAL;
>  
> -    desc->action  = new;
> +    new->next = desc->action;
> +    desc->action = new;
>      dsb();
>  
>      return 0;
> @@ -610,7 +642,7 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>  
>      rc = __setup_irq(desc, irq->irq, new);
>  
> -    if ( !rc )
> +    if ( !rc && disabled )
>          desc->handler->startup(desc);
>  
>      spin_unlock_irqrestore(&desc->lock, flags);
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index 3e326b0..edf0404 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -179,7 +179,11 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
>      {
>          desc->status &= ~IRQ_PENDING;
>          spin_unlock_irq(&desc->lock);
> -        action->handler(irq, action->dev_id, regs);
> +        do
> +        {
> +            action->handler(irq, action->dev_id, regs);
> +            action = action->next;
> +        } while ( action );
>          spin_lock_irq(&desc->lock);
>      }
>  
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index f2e6215..54314b8 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -11,6 +11,7 @@
>  
>  struct irqaction {
>      void (*handler)(int, void *, struct cpu_user_regs *);
> +    struct irqaction *next;
>      const char *name;
>      void *dev_id;
>      bool_t free_on_release;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:56:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5lF-00066x-NT; Wed, 19 Feb 2014 11:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5lD-00066o-Is
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:56:19 +0000
Received: from [85.158.139.211:23280] by server-9.bemta-5.messagelabs.com id
	36/58-11237-2EB94035; Wed, 19 Feb 2014 11:56:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392810975!4902711!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15219 invoked from network); 19 Feb 2014 11:56:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:56:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102146557"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:55:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:55:44 -0500
Message-ID: <1392810942.29739.20.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:55:42 +0000
In-Reply-To: <1390581822-32624-9-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-9-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>, tim@xen.org,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 8/8] xen/serial: remove serial_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> This function was only used for IRQ routing which has been removed in an
> earlier patch.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> CC: Keir Fraser <keir@xen.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/drivers/char/exynos4210-uart.c |    8 --------
>  xen/drivers/char/ns16550.c         |   11 -----------
>  xen/drivers/char/omap-uart.c       |    8 --------
>  xen/drivers/char/pl011.c           |    8 --------
>  xen/drivers/char/serial.c          |    9 ---------
>  xen/include/xen/serial.h           |    5 -----
>  6 files changed, 49 deletions(-)
> 
> diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
> index 74ac33d..9179138 100644
> --- a/xen/drivers/char/exynos4210-uart.c
> +++ b/xen/drivers/char/exynos4210-uart.c
> @@ -275,13 +275,6 @@ static int __init exynos4210_uart_irq(struct serial_port *port)
>      return uart->irq.irq;
>  }
>  
> -static const struct dt_irq __init *exynos4210_uart_dt_irq(struct serial_port *port)
> -{
> -    struct exynos4210_uart *uart = port->uart;
> -
> -    return &uart->irq;
> -}
> -
>  static const struct vuart_info *exynos4210_vuart_info(struct serial_port *port)
>  {
>      struct exynos4210_uart *uart = port->uart;
> @@ -299,7 +292,6 @@ static struct uart_driver __read_mostly exynos4210_uart_driver = {
>      .putc         = exynos4210_uart_putc,
>      .getc         = exynos4210_uart_getc,
>      .irq          = exynos4210_uart_irq,
> -    .dt_irq_get   = exynos4210_uart_dt_irq,
>      .vuart_info   = exynos4210_vuart_info,
>  };
>  
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index e7cb0ba..ca16d48 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -446,14 +446,6 @@ static int __init ns16550_irq(struct serial_port *port)
>      return ((uart->irq > 0) ? uart->irq : -1);
>  }
>  
> -#ifdef HAS_DEVICE_TREE
> -static const struct dt_irq __init *ns16550_dt_irq(struct serial_port *port)
> -{
> -    struct ns16550 *uart = port->uart;
> -    return &uart->dt_irq;
> -}
> -#endif
> -
>  #ifdef CONFIG_ARM
>  static const struct vuart_info *ns16550_vuart_info(struct serial_port *port)
>  {
> @@ -473,9 +465,6 @@ static struct uart_driver __read_mostly ns16550_driver = {
>      .putc         = ns16550_putc,
>      .getc         = ns16550_getc,
>      .irq          = ns16550_irq,
> -#ifdef HAS_DEVICE_TREE
> -    .dt_irq_get   = ns16550_dt_irq,
> -#endif
>  #ifdef CONFIG_ARM
>      .vuart_info   = ns16550_vuart_info,
>  #endif
> diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
> index 7f21f1f..6d882a3 100644
> --- a/xen/drivers/char/omap-uart.c
> +++ b/xen/drivers/char/omap-uart.c
> @@ -262,13 +262,6 @@ static int __init omap_uart_irq(struct serial_port *port)
>      return ((uart->irq.irq > 0) ? uart->irq.irq : -1);
>  }
>  
> -static const struct dt_irq __init *omap_uart_dt_irq(struct serial_port *port)
> -{
> -    struct omap_uart *uart = port->uart;
> -
> -    return &uart->irq;
> -}
> -
>  static const struct vuart_info *omap_vuart_info(struct serial_port *port)
>  {
>      struct omap_uart *uart = port->uart;
> @@ -286,7 +279,6 @@ static struct uart_driver __read_mostly omap_uart_driver = {
>      .putc = omap_uart_putc,
>      .getc = omap_uart_getc,
>      .irq = omap_uart_irq,
> -    .dt_irq_get = omap_uart_dt_irq,
>      .vuart_info = omap_vuart_info,
>  };
>  
> diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
> index 9c2870a..ac9c4f8 100644
> --- a/xen/drivers/char/pl011.c
> +++ b/xen/drivers/char/pl011.c
> @@ -189,13 +189,6 @@ static int __init pl011_irq(struct serial_port *port)
>      return ((uart->irq.irq > 0) ? uart->irq.irq : -1);
>  }
>  
> -static const struct dt_irq __init *pl011_dt_irq(struct serial_port *port)
> -{
> -    struct pl011 *uart = port->uart;
> -
> -    return &uart->irq;
> -}
> -
>  static const struct vuart_info *pl011_vuart(struct serial_port *port)
>  {
>      struct pl011 *uart = port->uart;
> @@ -213,7 +206,6 @@ static struct uart_driver __read_mostly pl011_driver = {
>      .putc         = pl011_putc,
>      .getc         = pl011_getc,
>      .irq          = pl011_irq,
> -    .dt_irq_get   = pl011_dt_irq,
>      .vuart_info   = pl011_vuart,
>  };
>  
> diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
> index 9b006f2..44026b1 100644
> --- a/xen/drivers/char/serial.c
> +++ b/xen/drivers/char/serial.c
> @@ -500,15 +500,6 @@ int __init serial_irq(int idx)
>      return -1;
>  }
>  
> -const struct dt_irq __init *serial_dt_irq(int idx)
> -{
> -    if ( (idx >= 0) && (idx < ARRAY_SIZE(com)) &&
> -         com[idx].driver && com[idx].driver->dt_irq_get )
> -        return com[idx].driver->dt_irq_get(&com[idx]);
> -
> -    return NULL;
> -}
> -
>  const struct vuart_info *serial_vuart_info(int idx)
>  {
>      if ( (idx >= 0) && (idx < ARRAY_SIZE(com)) &&
> diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
> index f38c9b7..9f4451b 100644
> --- a/xen/include/xen/serial.h
> +++ b/xen/include/xen/serial.h
> @@ -81,8 +81,6 @@ struct uart_driver {
>      int  (*getc)(struct serial_port *, char *);
>      /* Get IRQ number for this port's serial line: returns -1 if none. */
>      int  (*irq)(struct serial_port *);
> -    /* Get IRQ device node for this port's serial line: returns NULL if none. */
> -    const struct dt_irq *(*dt_irq_get)(struct serial_port *);
>      /* Get serial information */
>      const struct vuart_info *(*vuart_info)(struct serial_port *);
>  };
> @@ -135,9 +133,6 @@ void serial_end_log_everything(int handle);
>  /* Return irq number for specified serial port (identified by index). */
>  int serial_irq(int idx);
>  
> -/* Return irq device node for specified serial port (identified by index). */
> -const struct dt_irq *serial_dt_irq(int idx);
> -
>  /* Retrieve basic UART information to emulate it (base address, size...) */
>  const struct vuart_info* serial_vuart_info(int idx);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:56:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5lF-00066x-NT; Wed, 19 Feb 2014 11:56:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5lD-00066o-Is
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 11:56:19 +0000
Received: from [85.158.139.211:23280] by server-9.bemta-5.messagelabs.com id
	36/58-11237-2EB94035; Wed, 19 Feb 2014 11:56:18 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392810975!4902711!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15219 invoked from network); 19 Feb 2014 11:56:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:56:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102146557"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:55:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:55:44 -0500
Message-ID: <1392810942.29739.20.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 11:55:42 +0000
In-Reply-To: <1390581822-32624-9-git-send-email-julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-9-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>, tim@xen.org,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 8/8] xen/serial: remove serial_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> This function was only used for IRQ routing, which was removed in an
> earlier patch.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> CC: Keir Fraser <keir@xen.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/drivers/char/exynos4210-uart.c |    8 --------
>  xen/drivers/char/ns16550.c         |   11 -----------
>  xen/drivers/char/omap-uart.c       |    8 --------
>  xen/drivers/char/pl011.c           |    8 --------
>  xen/drivers/char/serial.c          |    9 ---------
>  xen/include/xen/serial.h           |    5 -----
>  6 files changed, 49 deletions(-)
> 
> diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
> index 74ac33d..9179138 100644
> --- a/xen/drivers/char/exynos4210-uart.c
> +++ b/xen/drivers/char/exynos4210-uart.c
> @@ -275,13 +275,6 @@ static int __init exynos4210_uart_irq(struct serial_port *port)
>      return uart->irq.irq;
>  }
>  
> -static const struct dt_irq __init *exynos4210_uart_dt_irq(struct serial_port *port)
> -{
> -    struct exynos4210_uart *uart = port->uart;
> -
> -    return &uart->irq;
> -}
> -
>  static const struct vuart_info *exynos4210_vuart_info(struct serial_port *port)
>  {
>      struct exynos4210_uart *uart = port->uart;
> @@ -299,7 +292,6 @@ static struct uart_driver __read_mostly exynos4210_uart_driver = {
>      .putc         = exynos4210_uart_putc,
>      .getc         = exynos4210_uart_getc,
>      .irq          = exynos4210_uart_irq,
> -    .dt_irq_get   = exynos4210_uart_dt_irq,
>      .vuart_info   = exynos4210_vuart_info,
>  };
>  
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index e7cb0ba..ca16d48 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -446,14 +446,6 @@ static int __init ns16550_irq(struct serial_port *port)
>      return ((uart->irq > 0) ? uart->irq : -1);
>  }
>  
> -#ifdef HAS_DEVICE_TREE
> -static const struct dt_irq __init *ns16550_dt_irq(struct serial_port *port)
> -{
> -    struct ns16550 *uart = port->uart;
> -    return &uart->dt_irq;
> -}
> -#endif
> -
>  #ifdef CONFIG_ARM
>  static const struct vuart_info *ns16550_vuart_info(struct serial_port *port)
>  {
> @@ -473,9 +465,6 @@ static struct uart_driver __read_mostly ns16550_driver = {
>      .putc         = ns16550_putc,
>      .getc         = ns16550_getc,
>      .irq          = ns16550_irq,
> -#ifdef HAS_DEVICE_TREE
> -    .dt_irq_get   = ns16550_dt_irq,
> -#endif
>  #ifdef CONFIG_ARM
>      .vuart_info   = ns16550_vuart_info,
>  #endif
> diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
> index 7f21f1f..6d882a3 100644
> --- a/xen/drivers/char/omap-uart.c
> +++ b/xen/drivers/char/omap-uart.c
> @@ -262,13 +262,6 @@ static int __init omap_uart_irq(struct serial_port *port)
>      return ((uart->irq.irq > 0) ? uart->irq.irq : -1);
>  }
>  
> -static const struct dt_irq __init *omap_uart_dt_irq(struct serial_port *port)
> -{
> -    struct omap_uart *uart = port->uart;
> -
> -    return &uart->irq;
> -}
> -
>  static const struct vuart_info *omap_vuart_info(struct serial_port *port)
>  {
>      struct omap_uart *uart = port->uart;
> @@ -286,7 +279,6 @@ static struct uart_driver __read_mostly omap_uart_driver = {
>      .putc = omap_uart_putc,
>      .getc = omap_uart_getc,
>      .irq = omap_uart_irq,
> -    .dt_irq_get = omap_uart_dt_irq,
>      .vuart_info = omap_vuart_info,
>  };
>  
> diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
> index 9c2870a..ac9c4f8 100644
> --- a/xen/drivers/char/pl011.c
> +++ b/xen/drivers/char/pl011.c
> @@ -189,13 +189,6 @@ static int __init pl011_irq(struct serial_port *port)
>      return ((uart->irq.irq > 0) ? uart->irq.irq : -1);
>  }
>  
> -static const struct dt_irq __init *pl011_dt_irq(struct serial_port *port)
> -{
> -    struct pl011 *uart = port->uart;
> -
> -    return &uart->irq;
> -}
> -
>  static const struct vuart_info *pl011_vuart(struct serial_port *port)
>  {
>      struct pl011 *uart = port->uart;
> @@ -213,7 +206,6 @@ static struct uart_driver __read_mostly pl011_driver = {
>      .putc         = pl011_putc,
>      .getc         = pl011_getc,
>      .irq          = pl011_irq,
> -    .dt_irq_get   = pl011_dt_irq,
>      .vuart_info   = pl011_vuart,
>  };
>  
> diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
> index 9b006f2..44026b1 100644
> --- a/xen/drivers/char/serial.c
> +++ b/xen/drivers/char/serial.c
> @@ -500,15 +500,6 @@ int __init serial_irq(int idx)
>      return -1;
>  }
>  
> -const struct dt_irq __init *serial_dt_irq(int idx)
> -{
> -    if ( (idx >= 0) && (idx < ARRAY_SIZE(com)) &&
> -         com[idx].driver && com[idx].driver->dt_irq_get )
> -        return com[idx].driver->dt_irq_get(&com[idx]);
> -
> -    return NULL;
> -}
> -
>  const struct vuart_info *serial_vuart_info(int idx)
>  {
>      if ( (idx >= 0) && (idx < ARRAY_SIZE(com)) &&
> diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
> index f38c9b7..9f4451b 100644
> --- a/xen/include/xen/serial.h
> +++ b/xen/include/xen/serial.h
> @@ -81,8 +81,6 @@ struct uart_driver {
>      int  (*getc)(struct serial_port *, char *);
>      /* Get IRQ number for this port's serial line: returns -1 if none. */
>      int  (*irq)(struct serial_port *);
> -    /* Get IRQ device node for this port's serial line: returns NULL if none. */
> -    const struct dt_irq *(*dt_irq_get)(struct serial_port *);
>      /* Get serial information */
>      const struct vuart_info *(*vuart_info)(struct serial_port *);
>  };
> @@ -135,9 +133,6 @@ void serial_end_log_everything(int handle);
>  /* Return irq number for specified serial port (identified by index). */
>  int serial_irq(int idx);
>  
> -/* Return irq device node for specified serial port (identified by index). */
> -const struct dt_irq *serial_dt_irq(int idx);
> -
>  /* Retrieve basic UART information to emulate it (base address, size...) */
>  const struct vuart_info* serial_vuart_info(int idx);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 11:58:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 11:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5mn-0006Gk-Fp; Wed, 19 Feb 2014 11:57:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5mm-0006Ga-7L
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 11:57:56 +0000
Received: from [85.158.139.211:53312] by server-14.bemta-5.messagelabs.com id
	C9/9D-27598-34C94035; Wed, 19 Feb 2014 11:57:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392811073!4909928!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32181 invoked from network); 19 Feb 2014 11:57:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 11:57:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102146938"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 11:57:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 06:57:52 -0500
Message-ID: <1392811071.29739.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 19 Feb 2014 11:57:51 +0000
In-Reply-To: <1390923481-3452-1-git-send-email-wei.liu2@citrix.com>
References: <1390923481-3452-1-git-send-email-wei.liu2@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Olaf Hering <olaf@aepfle.de>, Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH for 4.5] xl: honor more top level vfb options
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-01-28 at 15:38 +0000, Wei Liu wrote:
> The SDL and keymap options for a VFB can now also be specified as top
> level options. The documentation is updated accordingly.
> 
> This fixes bug #31 and other potential problems.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Olaf Hering <olaf@aepfle.de>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
>  docs/man/xl.cfg.pod.5    |    4 ++--
>  tools/libxl/xl_cmdimpl.c |   17 ++++++++++++++---
>  2 files changed, 16 insertions(+), 5 deletions(-)
> 
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index 9941395..26991c0 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -389,8 +389,8 @@ This options does not control the emulated graphics card presented to
>  an HVM guest. See L<Emulated VGA Graphics Device> below for how to
>  configure the emulated device. If L<Emulated VGA Graphics Device> options
>  are used in a PV guest configuration, xl will pick up B<vnc>, B<vnclisten>,
> -B<vncpasswd>, B<vncdisplay> and B<vncunused> to construct paravirtual
> -framebuffer device for the guest.
> +B<vncpasswd>, B<vncdisplay>, B<vncunused>, B<sdl>, B<opengl> and
> +B<keymap> to construct paravirtual framebuffer device for the guest.
>  
>  Each B<VFB_SPEC_STRING> is a comma-separated list of C<KEY=VALUE>
>  settings, from the following list:
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index d93e01b..23d85f8 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -721,6 +721,15 @@ static void parse_top_level_vnc_options(XLU_Config *config,
>      xlu_cfg_get_defbool(config, "vncunused", &vnc->findunused, 0);
>  }
>  
> +static void parse_top_level_sdl_options(XLU_Config *config,
> +                                        libxl_sdl_info *sdl)
> +{
> +    xlu_cfg_get_defbool(config, "sdl", &sdl->enable, 0);
> +    xlu_cfg_get_defbool(config, "opengl", &sdl->opengl, 0);
> +    xlu_cfg_replace_string (config, "display", &sdl->display, 0);
> +    xlu_cfg_replace_string (config, "xauthority", &sdl->xauthority, 0);
> +}
> +
>  static void parse_config_data(const char *config_source,
>                                const char *config_data,
>                                int config_len,
> @@ -1657,9 +1666,13 @@ skip_vfb:
>                                      libxl_device_vkb_init);
>  
>              parse_top_level_vnc_options(config, &vfb->vnc);
> +            parse_top_level_sdl_options(config, &vfb->sdl);
> +            xlu_cfg_replace_string (config, "keymap", &vfb->keymap, 0);
>          }
> -    } else
> +    } else {
>          parse_top_level_vnc_options(config, &b_info->u.hvm.vnc);
> +        parse_top_level_sdl_options(config, &b_info->u.hvm.sdl);
> +    }
>  
>      if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
>          if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
> @@ -1676,8 +1689,6 @@ skip_vfb:
>                                           LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
>  
>          xlu_cfg_replace_string (config, "keymap", &b_info->u.hvm.keymap, 0);
> -        xlu_cfg_get_defbool(config, "sdl", &b_info->u.hvm.sdl.enable, 0);
> -        xlu_cfg_get_defbool(config, "opengl", &b_info->u.hvm.sdl.opengl, 0);
>          xlu_cfg_get_defbool (config, "spice", &b_info->u.hvm.spice.enable, 0);
>          if (!xlu_cfg_get_long (config, "spiceport", &l, 0))
>              b_info->u.hvm.spice.port = l;
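
With these hunks applied, a PV guest configuration can carry the SDL and keymap settings at the top level alongside the VNC ones. A hypothetical xl.cfg fragment (the option names are those listed in the patch; the values are illustrative only):

```
# Illustrative PV guest fragment: top-level options that xl now folds
# into the paravirtual framebuffer device.
vnc       = 1
vnclisten = "0.0.0.0"
vncunused = 1
sdl       = 0
opengl    = 0
keymap    = "en-gb"
```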



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:02:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5rA-0006dn-2u; Wed, 19 Feb 2014 12:02:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5r8-0006df-8y
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:02:26 +0000
Received: from [85.158.143.35:28311] by server-3.bemta-4.messagelabs.com id
	DE/2D-11539-15D94035; Wed, 19 Feb 2014 12:02:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392811343!6783656!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27985 invoked from network); 19 Feb 2014 12:02:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:02:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103857555"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:02:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:02:22 -0500
Message-ID: <1392811341.29739.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Date: Wed, 19 Feb 2014 12:02:21 +0000
In-Reply-To: <530499D7.70902@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
	<1392807985.23084.132.camel@kazak.uk.xensource.com>
	<530499D7.70902@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, --cc=paul.durrant@citrix.com,
	wei.liu2@citrix.com, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 11:47 +0000, Andrew Bennieston wrote:
> On 19/02/14 11:06, Ian Campbell wrote:
> > On Mon, 2014-02-17 at 18:01 +0000, Andrew J. Bennieston wrote:
> >> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> >>
> >> Document the multi-queue feature in terms of XenStore keys to be written
> >> by the backend and by the frontend.
> >>
> >> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> >> ---
> >>   xen/include/public/io/netif.h |   21 +++++++++++++++++++++
> >>   1 file changed, 21 insertions(+)
> >>
> >> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> >> index d7fb771..90be2fc 100644
> >> --- a/xen/include/public/io/netif.h
> >> +++ b/xen/include/public/io/netif.h
> >> @@ -69,6 +69,27 @@
> >>    */
> >>
> >>   /*
> >> + * Multiple transmit and receive queues:
> >> + * If supported, the backend will write "multi-queue-max-queues" and set its
> >> + * value to the maximum supported number of queues.
> >> + * Frontends that are aware of this feature and wish to use it can write the
> >> + * key "multi-queue-num-queues", set to the number they wish to use.
> >> + *
> >> + * Queues replicate the shared rings and event channels, and
> >> + * "feature-split-event-channels" is required when using multiple queues.
> >> + *
> >> + * For frontends requesting just one queue, the usual event-channel and
> >> + * ring-ref keys are written as before, simplifying the backend processing
> >> + * to avoid distinguishing between a frontend that doesn't understand the
> >> + * multi-queue feature, and one that does, but requested only one queue.
> >> + *
> >> + * Frontends requesting two or more queues must not write the toplevel
> >> + * event-channel and ring-ref keys, instead writing them under sub-keys having
> >> + * the name "queue-N" where N is the integer ID of the queue for which those
> >> + * keys belong. Queues are indexed from zero.
> >
> > If "feature-split-event-channels" is required then I think what should
> > be written is queue-N/event-channel-{tx,rx} and
> > queue-N/{tx,rx}-ring-ref, rather than queue-N/{event-channel,ring-ref}
> > as the final paragraph sort of implies?
> I can change the wording to make this more clear.

Thanks.

> >
> > (what a shame we have event-channel-DIR and DIR-ring-ref, oh well!)
> >
> > Is it required to have the same number of RX and TX queues?
> Strictly speaking, no. But the implementation assumes this to be the
> case. Since the code already sets up one pair, simply looping over this
> a sufficient number of times to create N pairs was a fairly sane
> approach. In practice, if you have an asymmetry between RX and TX
> queues, you will hit a bottleneck sooner in one direction than the
> other, which seems impractical.

OK. So we should either mandate that there will always be the same
number, if that is going to be the case, or we should design the
xenstore layout to be extensible if not.

It sounds reasonable to me to mandate they be the same, but I don't
really know.
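
For concreteness, a frontend requesting two queues might produce something like the sketch below. The queue-N key names follow the split-event-channel naming discussed above; the surrounding xenstore paths are illustrative assumptions, not taken from the patch:

```
# Hypothetical xenstore view for a 2-queue vif (paths are assumed):
backend/.../multi-queue-max-queues    = "4"    # written by backend
frontend/.../multi-queue-num-queues   = "2"    # written by frontend
frontend/.../queue-0/tx-ring-ref      = "<grant-ref>"
frontend/.../queue-0/rx-ring-ref      = "<grant-ref>"
frontend/.../queue-0/event-channel-tx = "<evtchn-port>"
frontend/.../queue-0/event-channel-rx = "<evtchn-port>"
frontend/.../queue-1/...              # same four keys for queue 1
```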

> > Are there any other properties/behaviours which should be documented,
> > e.g. relating to the selection of which queue to use for a given frame
> > (on either TX or RX)? If not and it is up to the relevant end to do what
> > it wants then I think it would be useful to say so.
> There are no other requirements. The current implementation will 
> transmit anything it cannot hash sensibly on queue 0, but this is an 
> arbitrary choice (albeit a sensible one, since queue 0 should always 
> exist). I'll document this.

What is this undocumented/unnegotiated "hash" you speak of ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:02:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5rA-0006dn-2u; Wed, 19 Feb 2014 12:02:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5r8-0006df-8y
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:02:26 +0000
Received: from [85.158.143.35:28311] by server-3.bemta-4.messagelabs.com id
	DE/2D-11539-15D94035; Wed, 19 Feb 2014 12:02:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392811343!6783656!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27985 invoked from network); 19 Feb 2014 12:02:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:02:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103857555"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:02:23 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:02:22 -0500
Message-ID: <1392811341.29739.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Date: Wed, 19 Feb 2014 12:02:21 +0000
In-Reply-To: <530499D7.70902@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
	<1392807985.23084.132.camel@kazak.uk.xensource.com>
	<530499D7.70902@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, --cc=paul.durrant@citrix.com,
	wei.liu2@citrix.com, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 11:47 +0000, Andrew Bennieston wrote:
> On 19/02/14 11:06, Ian Campbell wrote:
> > On Mon, 2014-02-17 at 18:01 +0000, Andrew J. Bennieston wrote:
> >> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> >>
> >> Document the multi-queue feature in terms of XenStore keys to be written
> >> by the backend and by the frontend.
> >>
> >> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> >> ---
> >>   xen/include/public/io/netif.h |   21 +++++++++++++++++++++
> >>   1 file changed, 21 insertions(+)
> >>
> >> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> >> index d7fb771..90be2fc 100644
> >> --- a/xen/include/public/io/netif.h
> >> +++ b/xen/include/public/io/netif.h
> >> @@ -69,6 +69,27 @@
> >>    */
> >>
> >>   /*
> >> + * Multiple transmit and receive queues:
> >> + * If supported, the backend will write "multi-queue-max-queues" and set its
> >> + * value to the maximum supported number of queues.
> >> + * Frontends that are aware of this feature and wish to use it can write the
> >> + * key "multi-queue-num-queues", set to the number they wish to use.
> >> + *
> >> + * Queues replicate the shared rings and event channels, and
> >> + * "feature-split-event-channels" is required when using multiple queues.
> >> + *
> >> + * For frontends requesting just one queue, the usual event-channel and
> >> + * ring-ref keys are written as before, simplifying the backend processing
> >> + * to avoid distinguishing between a frontend that doesn't understand the
> >> + * multi-queue feature, and one that does, but requested only one queue.
> >> + *
> >> + * Frontends requesting two or more queues must not write the toplevel
> >> + * event-channel and ring-ref keys, instead writing them under sub-keys having
> >> + * the name "queue-N" where N is the integer ID of the queue for which those
> >> + * keys belong. Queues are indexed from zero.
> >
> > If "feature-split-event-channels" is required then I think what should
> > be written is queue-N/event-channel-{tx,rx} and
> > queue-N/{tx,rx}-ring-ref, rather than queue-N/{event-channel,ring-ref}
> > as the final paragraph sort of implies?
> I can change the wording to make this more clear.

Thanks.

> >
> > (what a shame we have event-channel-DIR and DIR-ring-ref, oh well!)
> >
> > Is it required to have the same number of RX and TX queues?
> Strictly speaking, no. But the implementation assumes this to be the 
> case. Since the code already sets up one pair, simply looping over this 
> a sufficient number of times to create N pairs was a fairly sane 
> approach. In practice, if you have an asymmetry between RX and TX 
> queues, you will end up hitting a bottleneck sooner in one direction 
> than the other, which seems impractical.

OK. So we should either mandate that there will always be the same
number, if that is going to be the case, or we should design the
xenstore layout to be extensible if not.

It sounds reasonable to me to mandate they be the same, but I don't
really know.

> > Are there any other properties/behaviours which should be documented,
> > e.g. relating to the selection of which queue to use for a given frame
> > (on either TX or RX)? If not and it is up to the relevant end to do what
> > it wants then I think it would be useful to say so.
> There are no other requirements. The current implementation will 
> transmit anything it cannot hash sensibly on queue 0, but this is an 
> arbitrary choice (albeit a sensible one, since queue 0 should always 
> exist). I'll document this.

What is this undocumented/unnegotiated "hash" you speak of ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:05:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5tk-0006lj-LP; Wed, 19 Feb 2014 12:05:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5ti-0006lb-Ol
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:05:06 +0000
Received: from [85.158.137.68:9419] by server-3.bemta-3.messagelabs.com id
	90/B4-14520-1FD94035; Wed, 19 Feb 2014 12:05:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392811503!1622897!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17586 invoked from network); 19 Feb 2014 12:05:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:05:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103858356"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:05:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:05:02 -0500
Message-ID: <1392811501.29739.25.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:05:01 +0000
In-Reply-To: <1392149085-14366-2-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-2-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 1/5] xen/arm32: head.S: Remove CA15
 and CA7 specific includes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
> head.S only contains generic code. It should not include headers for a
> specific processor.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:10:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5z6-0006x3-GJ; Wed, 19 Feb 2014 12:10:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5z5-0006wy-7t
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:10:39 +0000
Received: from [85.158.139.211:62966] by server-9.bemta-5.messagelabs.com id
	5B/5A-11237-E3F94035; Wed, 19 Feb 2014 12:10:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392811836!4859015!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8259 invoked from network); 19 Feb 2014 12:10:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:10:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102150700"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:10:35 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:10:35 -0500
Message-ID: <1392811834.29739.26.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:10:34 +0000
In-Reply-To: <1392149085-14366-3-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-3-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 2/5] xen/arm32: Introduce
	lookup_processor_type
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
> Looking for a specific proc_info structure is already implemented in assembly.
> Implement lookup_processor_type to avoid duplicate code between C and
> assembly.
> 
> This function searches the proc_info_list table for an entry matching the
> processor ID. If the search fails, it returns NULL; otherwise, it returns a
> pointer to the structure for the specific processor.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/arm32/head.S |   57 ++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 43 insertions(+), 14 deletions(-)
> 
> diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
> index 77f5518..68fb499 100644
> --- a/xen/arch/arm/arm32/head.S
> +++ b/xen/arch/arm/arm32/head.S
> @@ -198,26 +198,16 @@ skip_bss:
>          PRINT("- Setting up control registers -\r\n")
>  
>          /* Get processor specific proc info into r1 */
> -        mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
> -        ldr   r1, = __proc_info_start
> -        add   r1, r1, r10                   /* r1 := paddr of table (start) */
> -        ldr   r2, = __proc_info_end
> -        add   r2, r2, r10                   /* r2 := paddr of table (end) */
> -1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
> -        and   r4, r0, r3                    /* r4 := our cpu id with mask */
> -        ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
> -        teq   r4, r3
> -        beq   2f                            /* Match => exit, or try next proc info */
> -        add   r1, r1, #PROCINFO_sizeof
> -        cmp   r1, r2
> -        blo   1b
> +        bl    __lookup_processor_type
> +        teq   r1, #0
> +        bne   1f
>          mov   r4, r0
>          PRINT("- Missing processor info: ")
>          mov   r0, r4
>          bl    putn
>          PRINT(" -\r\n")
>          b     fail
> -2:
> +1:
>  
>          /* Jump to cpu_init */
>          ldr   r1, [r1, #PROCINFO_cpu_init]  /* r1 := vaddr(init func) */
> @@ -545,6 +535,45 @@ putn:   mov   pc, lr
>  
>  #endif /* !CONFIG_EARLY_PRINTK */
>  
> +/* This provides a C-API version of __lookup_processor_type */
> +GLOBAL(lookup_processor_type)
> +        stmfd sp!, {r4, r10, lr}
> +        mov   r10, #0                   /* r10 := offset between virt&phys */
> +        bl    __lookup_processor_type
> +        mov r0, r1
> +        ldmfd sp!, {r4, r10, pc}
> +
> +/* Read processor ID register (CP#15, CR0), and Look up in the linker-built
> + * supported processor list. Note that we can't use the absolute addresses for
> + * the __proc_info lists since we aren't running with the MMU on (and therefore,
> + * we are not in correct address space). We have to calculate the offset.
> + *
> + * r10: offset between virt&phys
> + *
> + * Returns:
> + * r0: CPUID
> + * r1: proc_info pointer
> + * Clobbers r2-r4
> + */
> +__lookup_processor_type:
> +        mrc   CP32(r0, MIDR)                /* r0 := our cpu id */
> +        ldr   r1, = __proc_info_start
> +        add   r1, r1, r10                   /* r1 := paddr of table (start) */
> +        ldr   r2, = __proc_info_end
> +        add   r2, r2, r10                   /* r2 := paddr of table (end) */
> +1:      ldr   r3, [r1, #PROCINFO_cpu_mask]
> +        and   r4, r0, r3                    /* r4 := our cpu id with mask */
> +        ldr   r3, [r1, #PROCINFO_cpu_val]   /* r3 := cpu val in current proc info */
> +        teq   r4, r3
> +        beq   2f                            /* Match => exit, or try next proc info */
> +        add   r1, r1, #PROCINFO_sizeof
> +        cmp   r1, r2
> +        blo   1b
> +        /* We failed to find the proc_info, return NULL */
> +        mov   r1, #0
> +2:
> +        mov   pc, lr
> +
>  /*
>   * Local variables:
>   * mode: ASM



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:11:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG5zb-0006yp-Tc; Wed, 19 Feb 2014 12:11:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG5za-0006yb-Hi
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:11:10 +0000
Received: from [85.158.143.35:32984] by server-2.bemta-4.messagelabs.com id
	9A/C8-10891-D5F94035; Wed, 19 Feb 2014 12:11:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392811868!6794060!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2600 invoked from network); 19 Feb 2014 12:11:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:11:09 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102150793"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:11:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:11:07 -0500
Message-ID: <1392811866.29739.27.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:11:06 +0000
In-Reply-To: <1392149085-14366-4-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-4-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 3/5] xen/arm64: Implement
 lookup_processor_type as a dummy function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
> The ARM64 implementation doesn't yet have processor-specific code.
> This function will be used in a later patch.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/arch/arm/arm64/head.S |    7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
> index c97c194..e1d3103 100644
> --- a/xen/arch/arm/arm64/head.S
> +++ b/xen/arch/arm/arm64/head.S
> @@ -546,6 +546,13 @@ putn:   ret
>  
>  #endif /* !CONFIG_EARLY_PRINTK */
>  
> +/* This provides a C-API version of __lookup_processor_type
> + * TODO: For now, the implementation returns NULL every time
> + */
> +GLOBAL(lookup_processor_type)
> +        mov  x0, #0
> +        ret
> +
>  /*
>   * Local variables:
>   * mode: ASM



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:19:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG66u-0007Mv-DI; Wed, 19 Feb 2014 12:18:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG66t-0007Mq-8m
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:18:43 +0000
Received: from [85.158.139.211:35689] by server-3.bemta-5.messagelabs.com id
	4F/83-13671-221A4035; Wed, 19 Feb 2014 12:18:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392812320!4916848!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26613 invoked from network); 19 Feb 2014 12:18:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:18:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102152690"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:18:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:18:39 -0500
Message-ID: <1392812318.29739.31.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:18:38 +0000
In-Reply-To: <1392149085-14366-5-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-5-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor
 specific setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
> diff --git a/xen/arch/arm/arm32/proc-v7-c.c b/xen/arch/arm/arm32/proc-v7-c.c
> new file mode 100644
> index 0000000..a3b94a2
> --- /dev/null
> +++ b/xen/arch/arm/arm32/proc-v7-c.c
> @@ -0,0 +1,32 @@
> +/*
> + * xen/arch/arm/arm32/proc-v7-c.c
> + *
> + * arm v7 specific initializations (C part)

I think strictly speaking this is actually cortex a{7,15} specific.

Calling this file "proc-v7-ca15.c" or something (core-cortex.c?) would
be nicer than the ugly -c suffix...

> + *
> + * Julien Grall <julien.grall@linaro.org>
> + * Copyright (c) 2014 Linaro Limited.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +#include <asm/procinfo.h>
> +#include <asm/processor.h>
> +
> +static void armv7_vcpu_initialize(struct vcpu *v)
> +{
> +    if ( v->domain->max_vcpus > 1 )
> +        v->arch.actlr |= ACTLR_V7_SMP;
> +    else
> +        v->arch.actlr &= ~ACTLR_V7_SMP;
> +}
> +
> +const struct processor armv7_processor = {

__rodata? (or whatever it is called)

> +    .vcpu_initialize = armv7_vcpu_initialize,
> +};
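The pattern Ian's "__rodata?" remark points at can be sketched as below: a const-qualified, file-scope table of function pointers is normally emitted into `.rodata` by the toolchain. The struct layouts and the ACTLR bit position here are simplified stand-ins, not the real Xen definitions.

```c
#include <stdint.h>

#define ACTLR_V7_SMP (1u << 6)     /* bit position is illustrative only */

struct vcpu_sketch {
    unsigned int max_vcpus;        /* stand-in for v->domain->max_vcpus */
    uint32_t actlr;
};

static void armv7_vcpu_initialize(struct vcpu_sketch *v)
{
    /* Set the SMP bit only when the guest can have more than one VCPU. */
    if ( v->max_vcpus > 1 )
        v->actlr |= ACTLR_V7_SMP;
    else
        v->actlr &= ~ACTLR_V7_SMP;
}

struct processor_sketch {
    void (*vcpu_initialize)(struct vcpu_sketch *v);
};

/* const at file scope: the table lands in .rodata, so the function
 * pointers cannot be overwritten at runtime. */
const struct processor_sketch armv7_processor = {
    .vcpu_initialize = armv7_vcpu_initialize,
};
```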



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:20:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG688-0007Pk-S6; Wed, 19 Feb 2014 12:20:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG687-0007Pb-Pz
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:19:59 +0000
Received: from [85.158.137.68:59113] by server-5.bemta-3.messagelabs.com id
	A9/63-04712-E61A4035; Wed, 19 Feb 2014 12:19:58 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392812395!2847994!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20061 invoked from network); 19 Feb 2014 12:19:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:19:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102153055"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:19:55 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:19:54 -0500
Message-ID: <1392812393.29739.32.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:19:53 +0000
In-Reply-To: <1392149085-14366-6-git-send-email-julien.grall@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-6-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 5/5] xen/arm: Remove
 asm-arm/processor-ca{15, 7}.h headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
> These headers are not in the right directory and are no longer used.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Although perhaps proc-v7-c.c ought to be using some of these? e.g.
ACTLR_CA15_SMP instead of ACTLR_V7_SMP, which now looks to be badly
named to me...



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:21:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG69J-0007WM-Ay; Wed, 19 Feb 2014 12:21:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG69H-0007WB-H6
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:21:11 +0000
Received: from [85.158.139.211:54023] by server-8.bemta-5.messagelabs.com id
	6C/FF-05298-6B1A4035; Wed, 19 Feb 2014 12:21:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392812468!4918348!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3022 invoked from network); 19 Feb 2014 12:21:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:21:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103862985"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:21:08 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:21:07 -0500
Message-ID: <1392812466.29739.33.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:21:06 +0000
In-Reply-To: <1392154050-21649-1-git-send-email-julien.grall@linaro.org>
References: <1392154050-21649-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH for-4.5] xen/serial: Don't leak memory
 mapping if the serial initialization has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-11 at 21:27 +0000, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

Although you could also consider moving the dt_device_get_irq before the
ioremap_attr in most cases.
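The ordering Ian suggests can be sketched like this: do the failure-prone but resource-free lookup (the IRQ) before taking the mapping, so an early error return cannot leak it. All names and signatures below are hypothetical stand-ins, not the real Xen `dt_device_get_irq` / `ioremap_attr` prototypes.

```c
#include <stddef.h>

static int fake_get_irq(int *irq)            /* stand-in for the IRQ lookup */
{
    *irq = 42;
    return 0;                                /* 0 = success */
}

static int mappings_live;                    /* counts unreleased mappings */

static void *fake_ioremap(void)              /* stand-in for the mapping */
{
    mappings_live++;
    return &mappings_live;
}

static int serial_init_sketch(void)
{
    int irq;

    /* Lookup first: failing here holds no resources, so a plain
     * return leaks nothing. */
    if ( fake_get_irq(&irq) != 0 )
        return -1;
    (void)irq;

    /* Only now take the mapping that would otherwise need unwinding
     * on every later error path. */
    if ( fake_ioremap() == NULL )
        return -1;

    return 0;
}
```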



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:26:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Du-0007uN-44; Wed, 19 Feb 2014 12:25:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6Ds-0007uH-HI
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:25:56 +0000
Received: from [85.158.139.211:28635] by server-7.bemta-5.messagelabs.com id
	F6/9C-14867-3D2A4035; Wed, 19 Feb 2014 12:25:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392812752!4918277!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32598 invoked from network); 19 Feb 2014 12:25:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:25:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103863876"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:25:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:25:51 -0500
Message-ID: <1392812749.29739.35.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:25:49 +0000
In-Reply-To: <1391794991-5919-2-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-2-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>, tim@xen.org,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 01/12] xen/common: grant-table: only
 call IOMMU if paging mode translate is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> From Xen's point of view, ARM guests are PV guests with paging auto-translate
> enabled.
> 
> When IOMMU support is added for ARM, mapping a grant ref will always crash
> Xen due to the BUG_ON in __gnttab_map_grant_ref.
> 
> On x86:
>     - PV guests always have paging mode translate disabled
>     - PVH and HVM guests always have paging mode translate enabled
> 
> This means we can safely replace the check that the domain is a PV guest
> with a check on whether the guest has paging mode translate enabled.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Keir Fraser <keir@xen.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>
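The check change described in the quoted commit message can be sketched as below. The enum and helpers are simplified stand-ins, not Xen's real `is_pv_domain()` / `paging_mode_translate()`; per the patch subject, the IOMMU path is taken only when paging mode translate is disabled.

```c
enum guest_kind { PV, PVH, HVM, ARM_GUEST };

static int mode_translate(enum guest_kind k)
{
    /* Per the commit message: only x86 PV has translate disabled;
     * PVH, HVM and ARM guests all run with it enabled. */
    return k != PV;
}

/* Old check: call the IOMMU for PV guests.
 * New check: call the IOMMU when translate is disabled.
 * The two agree on x86, and the new form does the right thing for
 * ARM guests as well. */
static int needs_iommu_mapping(enum guest_kind k)
{
    return !mode_translate(k);
}
```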



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:26:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:26:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6ES-0007y5-Mm; Wed, 19 Feb 2014 12:26:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6ER-0007xu-MP
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:26:31 +0000
Received: from [85.158.139.211:16430] by server-10.bemta-5.messagelabs.com id
	93/FC-08578-7F2A4035; Wed, 19 Feb 2014 12:26:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392812789!364232!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6728 invoked from network); 19 Feb 2014 12:26:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:26:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102154862"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:26:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:26:28 -0500
Message-ID: <1392812786.29739.36.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:26:26 +0000
In-Reply-To: <1391794991-5919-3-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-3-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	Xiantao Zhang <xiantao.zhang@intel.com>,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 02/12] xen/passthrough: vtd: Don't
 export iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> iommu_domain_teardown is only used internally in
> xen/drivers/passthrough/vtd/iommu.c
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>

Not really my remit but:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index 5f10034..a8d33fc 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
>      return ret;
>  }
>  
> -void iommu_domain_teardown(struct domain *d)
> +static void iommu_domain_teardown(struct domain *d)
>  {
>      struct hvm_iommu *hd = domain_hvm_iommu(d);
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:27:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6FU-00086C-5Z; Wed, 19 Feb 2014 12:27:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6FS-00085y-UL
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:27:35 +0000
Received: from [193.109.254.147:28248] by server-8.bemta-14.messagelabs.com id
	AD/CB-18529-533A4035; Wed, 19 Feb 2014 12:27:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392812851!117918!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6662 invoked from network); 19 Feb 2014 12:27:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:27:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103864105"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:27:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:27:30 -0500
Message-ID: <1392812848.29739.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:27:28 +0000
In-Reply-To: <1391794991-5919-4-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-4-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 03/12] xen/passthrough: vtd: Don't
 export iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> iommu_set_pgd is only used internally in
> xen/drivers/passthrough/vtd/iommu.c
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

(could collapse with the previous patch into a "don't export unnecessary
stuff" patch)

> ---
>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>  xen/include/xen/iommu.h             |    1 -
>  2 files changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
> index a8d33fc..d5ce5b7 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
>  /*
>   * set VT-d page table directory to EPT table if allowed
>   */
> -void iommu_set_pgd(struct domain *d)
> +static void iommu_set_pgd(struct domain *d)
>  {
>      struct hvm_iommu *hd  = domain_hvm_iommu(d);
>      mfn_t pgd_mfn;
> diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
> index 8bb0a1d..fcbc432 100644
> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
>                     unsigned int flags);
>  int iommu_unmap_page(struct domain *d, unsigned long gfn);
>  void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
> -void iommu_set_pgd(struct domain *d);
>  void iommu_domain_teardown(struct domain *d);
>  
>  void pt_pci_init(void);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:28:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:28:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Fx-0008BA-JV; Wed, 19 Feb 2014 12:28:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WG6Fv-0008Ak-GF
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:28:03 +0000
Received: from [85.158.137.68:8122] by server-15.bemta-3.messagelabs.com id
	AC/C7-19263-253A4035; Wed, 19 Feb 2014 12:28:02 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392812880!1629286!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13148 invoked from network); 19 Feb 2014 12:28:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:28:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102155025"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:28:00 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:27:59 -0500
Message-ID: <5304A34E.1090503@citrix.com>
Date: Wed, 19 Feb 2014 12:27:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>		<1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>	
	<1392745235.23084.60.camel@kazak.uk.xensource.com>	
	<5303AA97.3010202@citrix.com>
	<1392803670.23084.100.camel@kazak.uk.xensource.com>
In-Reply-To: <1392803670.23084.100.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, Zoltan Kiss <zoltan.kiss@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 09:54, Ian Campbell wrote:
> On Tue, 2014-02-18 at 18:46 +0000, David Vrabel wrote:
>> On 18/02/14 17:40, Ian Campbell wrote:
>>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>>>
>>>> @@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>>>  	vif->pending_prod = MAX_PENDING_REQS;
>>>>  	for (i = 0; i < MAX_PENDING_REQS; i++)
>>>>  		vif->pending_ring[i] = i;
>>>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>>>> -		vif->mmap_pages[i] = NULL;
>>>> +	spin_lock_init(&vif->dealloc_lock);
>>>> +	spin_lock_init(&vif->response_lock);
>>>> +	/* If ballooning is disabled, this will consume real memory, so you
>>>> +	 * better enable it.
>>>
>>> Almost no one who would be affected by this is going to read this
>>> comment. And it doesn't just require enabling ballooning, but actually
>>> booting with some maxmem "slack" to leave space.
>>>
>>> Classic-xen kernels used to add 8M of slop to the physical address space
>>> to leave a suitable pool for exactly this sort of thing. I never liked
>>> that but perhaps it should be reconsidered (or at least raised as a
>>> possibility with the core-Xen Linux guys).
>>
>> I plan to fix the balloon memory hotplug stuff to do the right thing
> 
> Which is for alloc_xenballoon_pages to hotplug a new empty region,
> rather than inflating the balloon if it doesn't have enough pages to
> satisfy the allocation?

Yes.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:28:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Gh-0008LG-1W; Wed, 19 Feb 2014 12:28:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6Gg-0008Jm-3j
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:28:50 +0000
Received: from [85.158.137.68:24559] by server-9.bemta-3.messagelabs.com id
	35/CB-10184-183A4035; Wed, 19 Feb 2014 12:28:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1392812927!1608943!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31123 invoked from network); 19 Feb 2014 12:28:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:28:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102155158"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:28:46 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:28:46 -0500
Message-ID: <1392812924.29739.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:28:44 +0000
In-Reply-To: <1391794991-5919-5-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-5-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 04/12] xen/dts: Add
	dt_property_read_bool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> The function check if property exists in a specific node.

                       ^a 
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> The function check if property exists in a specific node.

                       ^a 
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:33:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6LF-0000HF-SN; Wed, 19 Feb 2014 12:33:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6LE-0000HA-BH
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:33:32 +0000
Received: from [193.109.254.147:7671] by server-15.bemta-14.messagelabs.com id
	69/A9-10839-B94A4035; Wed, 19 Feb 2014 12:33:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392813209!115600!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13588 invoked from network); 19 Feb 2014 12:33:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:33:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102156020"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:33:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:33:29 -0500
Message-ID: <1392813207.29739.42.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:33:27 +0000
In-Reply-To: <1388957191-10337-7-git-send-email-julien.grall@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> 
> -/* Some device tree functions may be called both before and after the
> -   console is initialized. */
> -#define dt_printk(fmt, ...)                         \
> -    do                                              \
> -    {                                               \
> -        if ( system_state == SYS_STATE_early_boot ) \
> -            early_printk(fmt, ## __VA_ARGS__);      \
> -        else                                        \
> -            printk(fmt, ## __VA_ARGS__);            \
> -    } while (0)
> +#define dt_printk(fmt, ...) printk(fmt, ## __VA_ARGS__); 

Since dt_printk basically existed solely as a hack to work around the
distinction between early_printk and printk, I think you could just
switch everything to printk directly.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:34:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6MR-0000MQ-Bw; Wed, 19 Feb 2014 12:34:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6MP-0000MG-A3
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:34:45 +0000
Received: from [85.158.139.211:52827] by server-11.bemta-5.messagelabs.com id
	06/04-23886-4E4A4035; Wed, 19 Feb 2014 12:34:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392813282!366588!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10986 invoked from network); 19 Feb 2014 12:34:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:34:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102156265"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:34:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:34:41 -0500
Message-ID: <1392813280.29739.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:34:40 +0000
In-Reply-To: <1391794991-5919-6-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-6-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
 dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> Code adapted from linux drivers/of/base.c (commit ef42c58).

On that basis I only took a cursory glance through that monster
function.

> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
> index 7c075d9..d429e60 100644
> --- a/xen/include/xen/device_tree.h
> +++ b/xen/include/xen/device_tree.h
> @@ -112,6 +112,13 @@ struct dt_device_node {
>  
>  };
>  
> +#define MAX_PHANDLE_ARGS 16
> +struct dt_phandle_args {
> +    struct dt_device_node *np;
> +    int args_count;
> +    uint32_t args[MAX_PHANDLE_ARGS];
> +};
> +
>  /**
>   * IRQ line type.
>   *
> @@ -621,6 +628,53 @@ void dt_set_range(__be32 **cellp, const struct dt_device_node *np,
>  void dt_get_range(const __be32 **cellp, const struct dt_device_node *np,
>                    u64 *address, u64 *size);
>  
> +/**
> + * dt_parse_phandle - Resolve a phandle property to a device_node pointer
> + * @np: Pointer to device node holding phandle property
> + * @phandle_name: Name of property holding a phandle value
> + * @index: For properties holding a table of phandles, this is the index into
> + *         the table

Otherwise it is -1 or something else?

> + *
> + * Returns the device_node pointer.
> + */
> +struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
> +				                        const char *phandle_name,

Stray hard tabs?

> +                                        int index);
> +
> +/**
> + * dt_parse_phandle_with_args() - Find a node pointed by phandle in a list
> + * @np:	pointer to a device tree node containing a list
> + * @list_name: property name that contains a list
> + * @cells_name: property name that specifies phandles' arguments count
> + * @index: index of a phandle to parse out
> + * @out_args: optional pointer to output arguments structure (will be filled)
> + *
> + * This function is useful to parse lists of phandles and their arguments.
> + * Returns 0 on success and fills out_args, on error returns appropriate
> + * errno value.
> + *
> + * Example:
> + *
> + * phandle1: node1 {
> + * 	#list-cells = <2>;
> + * }
> + *
> + * phandle2: node2 {
> + * 	#list-cells = <1>;
> + * }
> + *
> + * node3 {
> + * 	list = <&phandle1 1 2 &phandle2 3>;
> + * }
> + *
> + * To get a device_node of the `node2' node you may call this:
> + * dt_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);

Wow, what an exciting function!

How do I decide the correct size for out_args?

> + */
> +int dt_parse_phandle_with_args(const struct dt_device_node *np,
> +                               const char *list_name,
> +                               const char *cells_name, int index,
> +                               struct dt_phandle_args *out_args);
> +
>  #endif /* __XEN_DEVICE_TREE_H */
>  
>  /*



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:35:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6N8-0000RR-QJ; Wed, 19 Feb 2014 12:35:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrii.anisov@globallogic.com>) id 1WG6N7-0000RF-3S
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 12:35:29 +0000
Received: from [85.158.137.68:51561] by server-6.bemta-3.messagelabs.com id
	A2/54-09180-015A4035; Wed, 19 Feb 2014 12:35:28 +0000
X-Env-Sender: andrii.anisov@globallogic.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392813325!2887413!1
X-Originating-IP: [64.18.0.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28001 invoked from network); 19 Feb 2014 12:35:26 -0000
Received: from exprod5og105.obsmtp.com (HELO exprod5og105.obsmtp.com)
	(64.18.0.180)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 12:35:26 -0000
Received: from mail-oa0-f49.google.com ([209.85.219.49]) (using TLSv1) by
	exprod5ob105.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwSlDOxN4GvfsKs92+Y/D8wMyHOIItZI@postini.com;
	Wed, 19 Feb 2014 04:35:26 PST
Received: by mail-oa0-f49.google.com with SMTP id i7so315027oag.8
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 04:35:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=raq8UbWeXq2Gd6tnd6BIoyy/8fNB4eg+Q/iKVtKHGBU=;
	b=hdOTaMIp0BKh8iUPYgtNfgARcAXV38Hf37p9uhiQXTkVSbkae7sjEpBXH+qXuaZFpp
	XUCYYlcrAjYWifo0Wa99dq0WiCAYHruUcrtjcQiMJbqdOrm+mxDyT3e7M7KCAF2ijLH5
	MmE6WQ5r962wprrmd+p3APwaWpTAzEoijqwyk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=raq8UbWeXq2Gd6tnd6BIoyy/8fNB4eg+Q/iKVtKHGBU=;
	b=S1SZEm57e5nvgIh/MsvdoKxrRvK4SSk7+gv20wMQfTg1pHE5XT+8WGKvLt/TzFP7CP
	wL1H+4wMMG7Jwe3x28Ag5UqTfUhk1s1NAgNz05uSpesb6xtGcC0oRGkpvaMxvx29JyoS
	n5pFDVzCs0QGnDpwnlsqDIcpStVeHIrT1z0eDWqOciZz7Te2ktFdNYusMEUwJxuQW6UK
	bwujo3R+vUOPuwTS35+Q6hdvjt1Tj+bmCzEUDiD7BBJaBhMTbNvoavZF1GCZ+ARsRQ8l
	V5EfX0u6njHx/5geQymrhIBoCRsJoPDkfaTCkUXwNeN+S68gTOu0ofXFhzO+r+thF/ht
	lQmg==
X-Gm-Message-State: ALoCoQnyao+7uGSCeNE9V/zDezfANxtci2rp7of6n6HLTTyttNe0eDbHPMM3/fwqUrDuBgMv+mD2DgilkREEr6IknAHjNggESuhM4hYG4NaeXubRQWBI7c/vTDC8YL1odR+MPrje237qLb4KQJec0hiGhTsl15KWeqGy6qRS4te2As0tGn+QwGI=
X-Received: by 10.182.233.228 with SMTP id tz4mr1708920obc.56.1392813324339;
	Wed, 19 Feb 2014 04:35:24 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.182.233.228 with SMTP id tz4mr1708914obc.56.1392813324236;
	Wed, 19 Feb 2014 04:35:24 -0800 (PST)
Received: by 10.76.112.17 with HTTP; Wed, 19 Feb 2014 04:35:24 -0800 (PST)
In-Reply-To: <1392801354.23084.79.camel@kazak.uk.xensource.com>
References: <CAJEb2DEjUdFGGt4Y7GC8HO2uRLTO4o561LRvUCLJSBB+z0e9Zw@mail.gmail.com>
	<1392725646.11080.47.camel@kazak.uk.xensource.com>
	<CAGQvs6gQo05q1N=_ukikiwNsf5d5RBaaRUdJ8rBxc3+Tmm5hkg@mail.gmail.com>
	<1392727989.11080.61.camel@kazak.uk.xensource.com>
	<CAGQvs6j0RQC7Y9DzCCNAvk28YtnATbjn+W+PzxdMdcVfTADxsA@mail.gmail.com>
	<1392730494.11080.62.camel@kazak.uk.xensource.com>
	<CAGQvs6hRH1fc=PTwG=Af_EWw+S3qtj46P41SPpBh=_iSuVK=Zg@mail.gmail.com>
	<1392737632.23084.4.camel@kazak.uk.xensource.com>
	<CAJEb2DFsX81pQzRXoWuanHqj=jFqGdyHsGA3zFGgE0BScdQRsg@mail.gmail.com>
	<5303A78C.6090709@citrix.com>
	<1392801354.23084.79.camel@kazak.uk.xensource.com>
Date: Wed, 19 Feb 2014 14:35:24 +0200
Message-ID: <CAGQvs6hwsWkknOCprz67i6rPCycBuBf7hW+C9_jS2fvB1n5HiA@mail.gmail.com>
From: Andrii Anisov <andrii.anisov@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: Julien Grall <julien.grall@citrix.com>,
	Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Bringing up sequence for non-boot CPU fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7399996995075131341=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7399996995075131341==
Content-Type: multipart/alternative; boundary=001a11c306006e096d04f2c19f05

--001a11c306006e096d04f2c19f05
Content-Type: text/plain; charset=ISO-8859-1

Hello Ian,

The latest update is: Oleksandr tried the solution suggested by Julien,
and the issue no longer seems to be reproducible.

P.S. He is on vacation for the next 1.5 weeks and will get back to this
issue afterwards.

Andrii Anisov | Software Engineer
GlobalLogic
Kyiv, 03038, Protasov Business Park, M.Grinchenka, 2/1
P +38.044.492.9695x3664  M +380505738852  S andriyanisov
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--001a11c306006e096d04f2c19f05--


--===============7399996995075131341==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7399996995075131341==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 12:37:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Oq-0000dX-He; Wed, 19 Feb 2014 12:37:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6Op-0000dJ-AV
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:37:15 +0000
Received: from [193.109.254.147:6263] by server-5.bemta-14.messagelabs.com id
	D8/5F-16688-A75A4035; Wed, 19 Feb 2014 12:37:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392813432!5365280!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28053 invoked from network); 19 Feb 2014 12:37:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:37:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102156874"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:37:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:37:12 -0500
Message-ID: <1392813430.29739.44.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:37:10 +0000
In-Reply-To: <1391794991-5919-8-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-8-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 07/12] xen/passthrough: iommu: Don't
 need to map dom0 page when the PT is shared
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> Currently iommu_dom0_init browses the page list and calls the map_page
> callback on each page.
> 
> In both the AMD and VT-d drivers, the callback returns immediately if the
> page table is shared with the processor, so Xen can safely skip walking
> the page list.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  xen/drivers/passthrough/iommu.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 26a5d91..0a26956 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -157,7 +157,7 @@ void __init iommu_dom0_init(struct domain *d)
>  
>      register_keyhandler('o', &iommu_p2m_table);
>      d->need_iommu = !!iommu_dom0_strict;
> -    if ( need_iommu(d) )
> +    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>      {
>          struct page_info *page;
>          unsigned int i = 0;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:39:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Qm-0000qQ-2W; Wed, 19 Feb 2014 12:39:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6Qk-0000qF-TQ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:39:15 +0000
Received: from [193.109.254.147:39527] by server-9.bemta-14.messagelabs.com id
	82/8E-24895-2F5A4035; Wed, 19 Feb 2014 12:39:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392813552!1714408!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24814 invoked from network); 19 Feb 2014 12:39:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:39:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102157320"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:39:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:39:11 -0500
Message-ID: <1392813550.29739.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:39:10 +0000
In-Reply-To: <1391794991-5919-9-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-9-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
> functions specific to x86 and PCI.
> 
> Split the framework into 3 distinct files:
>     - iommu.c: contains generic functions shared between x86 and ARM
>                (once ARM is supported)
>     - iommu_pci.c: contains functions specific to PCI passthrough
>     - iommu_x86.c: contains functions specific to x86
> 
> iommu_pci.c will only be compiled when the architecture supports PCI
> (i.e. HAS_PCI is defined).
> 
> This patch is mostly code movement into new files.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> ---
>  xen/drivers/passthrough/Makefile    |    6 +-
>  xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
>  xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++
>  xen/drivers/passthrough/iommu_x86.c |   65 +++++
>  xen/drivers/passthrough/vtd/iommu.c |   42 ++--
>  xen/include/asm-x86/iommu.h         |   46 ++++
>  xen/include/xen/hvm/iommu.h         |    1 +
>  xen/include/xen/iommu.h             |   42 ++--
>  8 files changed, 625 insertions(+), 518 deletions(-)
>  create mode 100644 xen/drivers/passthrough/iommu_pci.c
>  create mode 100644 xen/drivers/passthrough/iommu_x86.c
>  create mode 100644 xen/include/asm-x86/iommu.h
> 
> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
> index 7c40fa5..51e0a0d 100644
> --- a/xen/drivers/passthrough/Makefile
> +++ b/xen/drivers/passthrough/Makefile
> @@ -3,5 +3,7 @@ subdir-$(x86) += amd
>  subdir-$(x86_64) += x86
>  
>  obj-y += iommu.o
> -obj-y += io.o
> -obj-y += pci.o
> +obj-$(x86) += iommu_x86.o
> +obj-$(HAS_PCI) += iommu_pci.o
> +obj-$(x86) += io.o

io.[co] not mentioned in the description?




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:40:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Rt-0000yH-J7; Wed, 19 Feb 2014 12:40:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6Rr-0000xz-GW
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:40:23 +0000
Received: from [193.109.254.147:49701] by server-1.bemta-14.messagelabs.com id
	1D/EC-15438-636A4035; Wed, 19 Feb 2014 12:40:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392813620!5386222!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23177 invoked from network); 19 Feb 2014 12:40:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:40:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102157654"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:40:20 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:40:19 -0500
Message-ID: <1392813617.29739.47.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:40:17 +0000
In-Reply-To: <1391794991-5919-10-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-10-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Suravee
	Suthikulpanit <suravee.suthikulpanit@amd.com>, patches@linaro.org,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 09/12] xen/passthrough: iommu:
 Introduce arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> Currently the hvm_iommu structure (xen/include/xen/hvm/iommu.h) contains
> x86-specific fields.
> 
> This patch creates:
>     - an arch_hvm_iommu structure, which will contain the
>     architecture-dependent fields
>     - arch_iommu_domain_{init,destroy} functions to execute arch-specific
>     code during domain creation/destruction
> 
> Also move iommu_use_hap_pt and domain_hvm_iommu into asm-x86/iommu.h.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Joseph Cihula <joseph.cihula@intel.com>
> Cc: Gang Wei <gang.wei@intel.com>
> Cc: Shane Wang <shane.wang@intel.com>
> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>

Mostly mechanical I guess?

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Wed Feb 19 12:46:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6Xw-0001F5-HK; Wed, 19 Feb 2014 12:46:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG6Xu-0001F0-UY
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 12:46:39 +0000
Received: from [85.158.137.68:56842] by server-10.bemta-3.messagelabs.com id
	3F/C8-07302-EA7A4035; Wed, 19 Feb 2014 12:46:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392813995!2866676!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28814 invoked from network); 19 Feb 2014 12:46:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:46:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103868069"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:46:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 07:46:34 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WG6Xq-0001zB-J3;
	Wed, 19 Feb 2014 12:46:34 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WG6Xq-00056z-Fa;
	Wed, 19 Feb 2014 12:46:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25131-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 12:46:34 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25131: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25131 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25131/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 13 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   13 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:49:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6aN-0001Ov-3A; Wed, 19 Feb 2014 12:49:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6aL-0001Oh-KU
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:49:09 +0000
Received: from [85.158.139.211:11629] by server-10.bemta-5.messagelabs.com id
	27/0E-08578-448A4035; Wed, 19 Feb 2014 12:49:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392814146!4921059!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12116 invoked from network); 19 Feb 2014 12:49:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:49:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102159166"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:49:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:49:05 -0500
Message-ID: <1392814144.29739.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:49:04 +0000
In-Reply-To: <1391794991-5919-11-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-11-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 10/12] xen/passthrough: Introduce
 IOMMU ARM architure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> This patch contains the architecture to use IOMMU on ARM. There is no

s/IOMMU/IOMMUs/

> IOMMU drivers on this patch.
> 
> The code will run through the device tree and will initialize every IOMMUs.

s/IOMMUs/IOMMU/

> It's possible to have multiple IOMMUs on the same platform, but they should

s/should/must/ I think for now?

> be handled with the same drivers.

s/drivers/driver/

> For now, there is no support for using multiple iommu drivers at runtime.

But this is in principle possible in the future, right? (I shudder to
think what the designers of an SoC with multiple different SMMUs on it
would have to be smoking...)

You should mention somewhere that on ARM these are called SMMUs not
IOMMUs.

> Each new IOMMU drivers should contain:
> 
> static const char * const myiommu_dt_compat[] __initcontst =

"__initconst".

> {
>     /* list of device compatible with the drivers. Will be matched with
>      * the "compatible" property on the device tree
>      */
>     NULL,
> };
> 
> DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
>         .compatible = myiommu_compatible,
>         .init = myiommu_init,
> DT_DEVICE_END

This is the same as for any other driver, right?

> @@ -568,6 +571,10 @@ fail:
>  
>  void arch_domain_destroy(struct domain *d)
>  {
> +    /* IOMMU page table is shared with P2M, always call
> +     * iommu_domain_destroy() before p2m_teardown().

It would be worth adding some commentary on the design we are using
(decisions such as whether to share PTs with the stage 2 MMU) in the
commit message as well.

I suppose this requirement puts constraints on the SMMU hardware we can
support. I think it is fine to live with that until someone shows up
with an SMMU with pagetables that are incompatible with the stage 2
paging ones.

> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f6d713..5a687d1 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -725,6 +725,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>      local_irq_enable();
>      local_abort_enable();
>  
> +    iommu_setup(); /* setup iommu if available */

Comment is a bit redundant ;-)

> diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
> new file mode 100644
> index 0000000..7cf36cd
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/iommu.c
> @@ -0,0 +1,65 @@
> +/*
> + * xen/drivers/passthrough/arm/iommu.c
> + *
> + * Generic IOMMU framework via the device tree
> + *
> + * Julien Grall <julien.grall@linaro.org>
> + * Copyright (c) 2013 Linaro Limited.

It's not 2013 any more...

> diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
> new file mode 100644
> index 0000000..461c8cf
> --- /dev/null
> +++ b/xen/include/asm-arm/hvm/iommu.h
> @@ -0,0 +1,10 @@
> +#ifndef __ASM_ARM_HVM_IOMMU_H_
> +#define __ASM_ARM_HVM_IOMMU_H_
> +
> +struct arch_hvm_iommu
> +{
> +    /* Private information for the IOMMU drivers */
> +    void *priv;
> +};
> +
> +#endif /* __ASM_ARM_HVM_IOMMU_H_ */

Emacs magic block please.

That was a surprisingly small and uncontroversial patch. Well done.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:49:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6aN-0001Ov-3A; Wed, 19 Feb 2014 12:49:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6aL-0001Oh-KU
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:49:09 +0000
Received: from [85.158.139.211:11629] by server-10.bemta-5.messagelabs.com id
	27/0E-08578-448A4035; Wed, 19 Feb 2014 12:49:08 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392814146!4921059!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12116 invoked from network); 19 Feb 2014 12:49:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:49:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102159166"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:49:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:49:05 -0500
Message-ID: <1392814144.29739.55.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:49:04 +0000
In-Reply-To: <1391794991-5919-11-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-11-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 10/12] xen/passthrough: Introduce
 IOMMU ARM architecture
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> This patch contains the architecture to use IOMMU on ARM. There is no

s/IOMMU/IOMMUs/

> IOMMU drivers on this patch.
> 
> The code will run through the device tree and will initialize every IOMMUs.

s/IOMMUs/IOMMU/

> It's possible to have multiple IOMMUs on the same platform, but they should

s/should/must/ I think for now?

> be handled with the same drivers.

s/drivers/driver/

> For now, there is no support for using multiple iommu drivers at runtime.

But this is in principle possible in the future, right? (I shudder to
think what the designers of an SoC with multiple different SMMUs on it
would have to be smoking...)

You should mention somewhere that on ARM these are called SMMUs not
IOMMUs.

> Each new IOMMU drivers should contain:
> 
> static const char * const myiommu_dt_compat[] __initcontst =

"__initconst".

> {
>     /* list of device compatible with the drivers. Will be matched with
>      * the "compatible" property on the device tree
>      */
>     NULL,
> };
> 
> DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
>         .compatible = myiommu_compatible,
>         .init = myiommu_init,
> DT_DEVICE_END

This is the same as for any other driver, right?

> @@ -568,6 +571,10 @@ fail:
>  
>  void arch_domain_destroy(struct domain *d)
>  {
> +    /* IOMMU page table is shared with P2M, always call
> +     * iommu_domain_destroy() before p2m_teardown().

It would be worth adding some commentary on the design we are using
(decisions such as whether to share PTs with the stage 2 MMU) in the
commit message as well.

I suppose this requirement puts constraints on the SMMU hardware we can
support. I think it is fine to live with that until someone shows up
with an SMMU with pagetables that are incompatible with the stage 2
paging ones.

> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 1f6d713..5a687d1 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -725,6 +725,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>      local_irq_enable();
>      local_abort_enable();
>  
> +    iommu_setup(); /* setup iommu if available */

Comment is a bit redundant ;-)

> diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
> new file mode 100644
> index 0000000..7cf36cd
> --- /dev/null
> +++ b/xen/drivers/passthrough/arm/iommu.c
> @@ -0,0 +1,65 @@
> +/*
> + * xen/drivers/passthrough/arm/iommu.c
> + *
> + * Generic IOMMU framework via the device tree
> + *
> + * Julien Grall <julien.grall@linaro.org>
> + * Copyright (c) 2013 Linaro Limited.

It's not 2013 any more...

> diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
> new file mode 100644
> index 0000000..461c8cf
> --- /dev/null
> +++ b/xen/include/asm-arm/hvm/iommu.h
> @@ -0,0 +1,10 @@
> +#ifndef __ASM_ARM_HVM_IOMMU_H_
> +#define __ASM_ARM_HVM_IOMMU_H_
> +
> +struct arch_hvm_iommu
> +{
> +    /* Private information for the IOMMU drivers */
> +    void *priv;
> +};
> +
> +#endif /* __ASM_ARM_HVM_IOMMU_H_ */

Emacs magic block please.

That was a surprisingly small and uncontroversial patch. Well done.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:49:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:49:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6ad-0001S0-G8; Wed, 19 Feb 2014 12:49:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG6ac-0001Rk-8J
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 12:49:26 +0000
Received: from [85.158.137.68:3317] by server-11.bemta-3.messagelabs.com id
	A0/0F-04255-558A4035; Wed, 19 Feb 2014 12:49:25 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392814163!2867165!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9754 invoked from network); 19 Feb 2014 12:49:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:49:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103868548"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 12:49:22 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 07:49:22 -0500
Message-ID: <1392814160.29739.56.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 12:49:20 +0000
In-Reply-To: <1391794991-5919-12-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-12-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>, tim@xen.org,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 11/12] MAINTAINERS: Add
	drivers/passthrough/arm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> Add the ARM IOMMU directory to "ARM ARCHITECTURE" part
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Keir Fraser <keir@xen.org>

Acked-by: Ian Campbell <ian.campbell@citrix.com>

> ---
>  MAINTAINERS |    1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 7757cdd..ad6c8a9 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -130,6 +130,7 @@ S:	Supported
>  L:	xen-devel@lists.xen.org
>  F:	xen/arch/arm/
>  F:	xen/include/asm-arm/
> +F:	xen/drivers/passthrough/arm
>  
>  CPU POOLS
>  M:	Juergen Gross <juergen.gross@ts.fujitsu.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 12:50:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 12:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG6bx-0001az-0Z; Wed, 19 Feb 2014 12:50:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG6bv-0001ak-5F
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 12:50:47 +0000
Received: from [85.158.139.211:40938] by server-5.bemta-5.messagelabs.com id
	0E/C2-32749-6A8A4035; Wed, 19 Feb 2014 12:50:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392814244!4937196!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7996 invoked from network); 19 Feb 2014 12:50:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 12:50:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102159554"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 12:50:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 07:50:43 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WG6bq-00020J-UL;
	Wed, 19 Feb 2014 12:50:42 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WG6bq-0006C8-Ro;
	Wed, 19 Feb 2014 12:50:42 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25132-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 12:50:42 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 25132: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25132 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25132/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24737

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 16 leak-check/check        fail never pass

version targeted for testing:
 qemuu                65fc9b78ba3d868a26952db0d8e51cecf01d47b4
baseline version:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:43:12 2013 +0000

    xen: Enable cpu-hotplug on xenfv machine.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    Conflicts:
    	hw/i386/pc_piix.c

commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:41:48 2013 +0000

    xen: Fix vcpu initialization.
    
    Each vcpu needs an evtchn bound in QEMU, even those that are
    offline at QEMU initialisation.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 13:39:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 13:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7MK-0002LG-S5; Wed, 19 Feb 2014 13:38:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG7MJ-0002LA-BN
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 13:38:43 +0000
Received: from [193.109.254.147:57893] by server-11.bemta-14.messagelabs.com
	id 9F/93-24604-2E3B4035; Wed, 19 Feb 2014 13:38:42 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392817121!1664652!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 897 invoked from network); 19 Feb 2014 13:38:41 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 13:38:41 -0000
Received: by mail-ea0-f174.google.com with SMTP id m10so126381eaj.5
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 05:38:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=B6n2d4/SCcGS+RVi5LnAxagbWFvhMfqGgaUzDulpBXo=;
	b=ke4I85URNY4R4JmJGKk0S0ns0kXHBRu4c1nA3HQ3kWEAYdmvhgQBjabM4FZE1GU53R
	g1IUMl05HP2ZQdaQkojQs6EkIFpdJ9GiBz3mA4yENwfq4UzULuZvDQaaHEiJvwDCgZcT
	bpSrVDbJpt2fh7EXv+NcDBeRXmx0WgmGYmYHoRVTsWirlarDqQrWdIso59llcrEAKthi
	he5dloOLDrCYJE+xRMdGlLCQJqTqTjTSdEidEi8fEeLPUPUBhAWgzrdcFGPSwFTCjvVn
	IWDHZRNA4eeVrlIt+9vsz46RDv2vPYirkGAnevRcUfVgFhqbW1XjI6u3q0IjkDBTwbFQ
	sKBQ==
X-Gm-Message-State: ALoCoQkR2GGLCdC5QN/7pmvr6qdEiWlp+B9tJxfd//or8YSRGhDziQKp329lbVgxNaIr+1mdnHJJ
X-Received: by 10.15.22.142 with SMTP id f14mr1257301eeu.113.1392817121326;
	Wed, 19 Feb 2014 05:38:41 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm896134eef.2.2014.02.19.05.38.39
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 05:38:40 -0800 (PST)
Message-ID: <5304B3DE.8030209@linaro.org>
Date: Wed, 19 Feb 2014 13:38:38 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-2-git-send-email-julien.grall@linaro.org>
	<1392808995.23084.139.camel@kazak.uk.xensource.com>
In-Reply-To: <1392808995.23084.139.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 1/8] xen/arm: irq: move gic {,
	un}lock in gic_set_irq_properties
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 11:23 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> The function gic_set_irq_properties is only called in two places:
>>     - gic_route_irq: the gic.lock is only taken for the call to the
>>     former function.
>>     - gic_route_irq_to_guest: the gic.lock is taken for the duration of
>>     the function. But the lock is only useful when gic_set_irq_properties.
>>
>> So we can safely move the lock in gic_set_irq_properties and restrict the
>> critical section for the gic.lock in gic_route_irq_to_guest.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks for the ack.

> Although ISTR Stefano saying he had got rid of the lock altogether. I'll
> let you two battle that one out ;-)

I can't find any patch on the mailing list which removes the lock in these
functions. I will keep it for now.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Wed, 19 Feb 2014 05:38:40 -0800 (PST)
Message-ID: <5304B3DE.8030209@linaro.org>
Date: Wed, 19 Feb 2014 13:38:38 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-2-git-send-email-julien.grall@linaro.org>
	<1392808995.23084.139.camel@kazak.uk.xensource.com>
In-Reply-To: <1392808995.23084.139.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 1/8] xen/arm: irq: move gic {,
	un}lock in gic_set_irq_properties
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 11:23 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> The function gic_set_irq_properties is only called in two places:
>>     - gic_route_irq: the gic.lock is only taken for the call to
>>     gic_set_irq_properties.
>>     - gic_route_irq_to_guest: the gic.lock is taken for the duration of
>>     the function, but it is only needed for the call to
>>     gic_set_irq_properties.
>>
>> So we can safely move the lock into gic_set_irq_properties and restrict
>> the critical section for the gic.lock in gic_route_irq_to_guest.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks for the ack.

> Although ISTR Stefano saying he had got rid of the lock altogether. I'll
> let you two battle that one out ;-)

I can't find any patch on the mailing list which removes the lock in these
functions. I will keep it for now.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 13:43:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 13:43:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7R0-0002Ta-JM; Wed, 19 Feb 2014 13:43:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG7Qz-0002TV-7x
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 13:43:33 +0000
Received: from [85.158.137.68:38667] by server-2.bemta-3.messagelabs.com id
	B7/15-06531-405B4035; Wed, 19 Feb 2014 13:43:32 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392817411!2908097!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17815 invoked from network); 19 Feb 2014 13:43:31 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 13:43:31 -0000
Received: by mail-ee0-f43.google.com with SMTP id e51so127345eek.2
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 05:43:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=4yT3A3/9JcvszwZ/Hjyjp1vhVr4i9ChDqYEUwF5le0s=;
	b=CPfclzkazs2nfKVTybvsVI/IRHunpDSWu5/tEXiUNr/eCflOoGlXP4hhFipTVmbXB6
	vDbVb0aknjqfPdYeB+77T0k/YQ5dKNbcXchvipOvoNkKXK5XjYZjKcH9yhM6sI2/CpM5
	A7D9IoDuczkIyrQsRlzrs7qCK4z5km+cFpzyIG9dhYJAmIDIk6CbcTI7F6q4i/0UM80Y
	EejjosMU6m3vkPwgAbd1XbCEwx3OB4jJqube53ikWa7f40wrKlTecwZNPEgsMxWmJy8C
	B3lduYs2sKfSRz361eETwPO5+I8dSLlsAKtJCNnD/cb8QvA9A//XKhtP/IhmxZFfdFlJ
	1tpg==
X-Gm-Message-State: ALoCoQkfxRitcobffBSQG6WDhYdxDs4hyqKX3HZMmdzfiis9tQ55ZARkIMnW46kC+X2bYXXbSu+g
X-Received: by 10.14.194.193 with SMTP id m41mr3467881een.76.1392817411381;
	Wed, 19 Feb 2014 05:43:31 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x2sm921600eeo.8.2014.02.19.05.43.30
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 05:43:30 -0800 (PST)
Message-ID: <5304B501.2010207@linaro.org>
Date: Wed, 19 Feb 2014 13:43:29 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,

Ping? It would be nice to have this patch for Xen 4.4, as the IPI priority
patch won't be pushed before the release.

The patch is a minor change and won't impact normal use. When dom0 is
built, Xen always does it on CPU 0.

Regards,

On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
> gic_route_irq_to_guest routes all IRQs to
> cpumask_of(smp_processor_id()), but it is actually always called on cpu0.
> To avoid confusion and possible issues in case someone modifies the code
> and reassigns a particular irq to a cpu other than cpu0, hardcode
> cpumask_of(0).
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> 
> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> index e6257a7..8854800 100644
> --- a/xen/arch/arm/gic.c
> +++ b/xen/arch/arm/gic.c
> @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
>  
>      level = dt_irq_is_level_triggered(irq);
>  
> -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> -                           0xa0);
> +    /* TODO: handle routing irqs to cpus != cpu0 */
> +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
>  
>      retval = __setup_irq(desc, irq->irq, action);
>      if (retval) {
> 




-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 13:54:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 13:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7bA-0002k1-Ox; Wed, 19 Feb 2014 13:54:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG7b9-0002jw-90
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 13:54:03 +0000
Received: from [85.158.143.35:4331] by server-3.bemta-4.messagelabs.com id
	73/D2-11539-A77B4035; Wed, 19 Feb 2014 13:54:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392818040!6834220!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8622 invoked from network); 19 Feb 2014 13:54:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 13:54:01 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103885211"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 13:53:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 08:53:59 -0500
Message-ID: <1392818038.29739.74.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 13:53:58 +0000
In-Reply-To: <5304B501.2010207@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
	<5304B501.2010207@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org, George Dunlap <george.dunlap@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
> Hi all,
> 
> Ping?

No one made a case for a release exception so I put it in my 4.5 pile.

>  It would be nice to have this patch for Xen 4.4, as the IPI priority
> patch won't be pushed before the release.
> 
> The patch is a minor change and won't impact normal use. When dom0 is
> built, Xen always does it on CPU 0.

Right, so whoever is doing otherwise already has a big pile of patches I
presume?

It's rather late to be making such changes IMHO, but I'll defer to
George.

> 
> Regards,
> 
> On 02/04/2014 04:20 PM, Stefano Stabellini wrote:
> > gic_route_irq_to_guest routes all IRQs to
> > cpumask_of(smp_processor_id()), but it is actually always called on cpu0.
> > To avoid confusion and possible issues in case someone modified the code
> > and reassigned a particular irq to a cpu other than cpu0, hardcode
> > cpumask_of(0).
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
> > 
> > diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
> > index e6257a7..8854800 100644
> > --- a/xen/arch/arm/gic.c
> > +++ b/xen/arch/arm/gic.c
> > @@ -776,8 +776,8 @@ int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
> >  
> >      level = dt_irq_is_level_triggered(irq);
> >  
> > -    gic_set_irq_properties(irq->irq, level, cpumask_of(smp_processor_id()),
> > -                           0xa0);
> > +    /* TODO: handle routing irqs to cpus != cpu0 */
> > +    gic_set_irq_properties(irq->irq, level, cpumask_of(0), 0xa0);
> >  
> >      retval = __setup_irq(desc, irq->irq, action);
> >      if (retval) {
> > 
> 
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 13:59:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 13:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7g8-0002tv-HN; Wed, 19 Feb 2014 13:59:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG7g6-0002to-PX
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 13:59:10 +0000
Received: from [85.158.139.211:64938] by server-2.bemta-5.messagelabs.com id
	9F/62-23037-EA8B4035; Wed, 19 Feb 2014 13:59:10 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392818349!394478!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26445 invoked from network); 19 Feb 2014 13:59:09 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 13:59:09 -0000
Received: by mail-ea0-f177.google.com with SMTP id h14so366775eaj.8
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 05:59:09 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=36xYybSB0NGSgi0EE2Qlb9KXi/EZ5PM0d5F/ziJMIrg=;
	b=Zdix2oJiEZHP9Mms8rEmMO8JJOGfEPe0ejRczYzWEs2LWw3JQ61gfFhss8uY90IrKy
	8ZU0t2UUlzNTwK7yxwyqhUKzLvCChyAJlcHJ6AzNhrTQyAhI5Z9OG/gBfiso3hAsF+Gw
	VW+1SJJDV01GePoN5r+lz1h0+VACeSMrY0Ik3nGBhjp7TEMUHiVgoCoqYX477a7O4d7+
	sRepks1GzhGDmQfPrL0NRWqpZslvJGzlyWeL1TTNQuYlI80D2sQUBrAEW4p6B4lTWgOL
	U4GDYAoB0Grb+uCC+uE96c541quHXkIoLvZaGD1D5xAY1/rIZE/c9CttpdejyqPEfA6N
	n4sA==
X-Gm-Message-State: ALoCoQlxIJKfn9GkE1RSG+u6CjEavYgGYsO7FhM41ZHYt5vVrQKr4BIv6ND7DAci+53Mg12rFgfx
X-Received: by 10.14.176.66 with SMTP id a42mr2380686eem.101.1392818348887;
	Wed, 19 Feb 2014 05:59:08 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm1095458eef.2.2014.02.19.05.59.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 05:59:08 -0800 (PST)
Message-ID: <5304B8A8.3000202@linaro.org>
Date: Wed, 19 Feb 2014 13:59:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-4-git-send-email-julien.grall@linaro.org>
	<1392809725.29739.5.camel@kazak.uk.xensource.com>
In-Reply-To: <1392809725.29739.5.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 3/8] xen/arm: IRQ: Protect IRQ to be
 shared between domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:35 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> The current dt_route_irq_to_guest implementation sets IRQ_GUEST no matter
>> whether the IRQ is correctly set up.
>>
>> As an IRQ can be shared between devices, if the devices are not assigned to
>> the same domain or to Xen, this could result in the IRQ being routed to the
>> domain instead of Xen ...
>>
>> Also avoid relying on wrong behaviour when Xen is routing an IRQ to DOM0.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> ---
>>     Hopefully, none of the supported platforms have UARTs (the only device
> 
>                                                      ^shared?

Hmmm ... I don't remember what I was trying to say here :/.

Anyway, this part was an argument to push it for Xen 4.4. It doesn't make
sense anymore. I will remove it.

> 
> Other than wondering if EBUSY might be more natural than EADDRINUSE and
> some grammar nits (below) I think this patch looks good.

Right, I will use EBUSY for the next version.

> 
>>     currently used by Xen). It would be nice to have this patch for Xen 4.4 to
>>     avoid wasting developers' time.
>>
>>     The downside of this patch is that if someone wants to support such a
>>     platform (eg an IRQ shared between devices assigned to different
>>     domains/Xen), it will end up with an error message and a panic.
>> ---
>>  xen/arch/arm/domain_build.c |    8 ++++++--
>>  xen/arch/arm/gic.c          |   40 +++++++++++++++++++++++++++++++++++++++-
>>  2 files changed, 45 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 47b781b..1fc359a 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -712,8 +712,12 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
>>          }
>>  
>>          DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
>> -        /* Don't check return because the IRQ can be use by multiple device */
>> -        gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
>> +        res = gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
>> +        if ( res )
>> +        {
>> +            printk(XENLOG_ERR "Unable to route the IRQ %u to dom0\n", irq.irq);
> 
> "Unable to route IRQ %u..." and I think you want to use d->domain_id
> rather than hardcoding 0.

I will fix it. At the same time, the error message when Xen is unable to
map the range also use "dom0". I will send a separate patch for that.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 13:59:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 13:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7g8-0002tv-HN; Wed, 19 Feb 2014 13:59:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG7g6-0002to-PX
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 13:59:10 +0000
Received: from [85.158.139.211:64938] by server-2.bemta-5.messagelabs.com id
	9F/62-23037-EA8B4035; Wed, 19 Feb 2014 13:59:10 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392818349!394478!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26445 invoked from network); 19 Feb 2014 13:59:09 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 13:59:09 -0000
Received: by mail-ea0-f177.google.com with SMTP id h14so366775eaj.8
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 05:59:09 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=36xYybSB0NGSgi0EE2Qlb9KXi/EZ5PM0d5F/ziJMIrg=;
	b=Zdix2oJiEZHP9Mms8rEmMO8JJOGfEPe0ejRczYzWEs2LWw3JQ61gfFhss8uY90IrKy
	8ZU0t2UUlzNTwK7yxwyqhUKzLvCChyAJlcHJ6AzNhrTQyAhI5Z9OG/gBfiso3hAsF+Gw
	VW+1SJJDV01GePoN5r+lz1h0+VACeSMrY0Ik3nGBhjp7TEMUHiVgoCoqYX477a7O4d7+
	sRepks1GzhGDmQfPrL0NRWqpZslvJGzlyWeL1TTNQuYlI80D2sQUBrAEW4p6B4lTWgOL
	U4GDYAoB0Grb+uCC+uE96c541quHXkIoLvZaGD1D5xAY1/rIZE/c9CttpdejyqPEfA6N
	n4sA==
X-Gm-Message-State: ALoCoQlxIJKfn9GkE1RSG+u6CjEavYgGYsO7FhM41ZHYt5vVrQKr4BIv6ND7DAci+53Mg12rFgfx
X-Received: by 10.14.176.66 with SMTP id a42mr2380686eem.101.1392818348887;
	Wed, 19 Feb 2014 05:59:08 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm1095458eef.2.2014.02.19.05.59.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 05:59:08 -0800 (PST)
Message-ID: <5304B8A8.3000202@linaro.org>
Date: Wed, 19 Feb 2014 13:59:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-4-git-send-email-julien.grall@linaro.org>
	<1392809725.29739.5.camel@kazak.uk.xensource.com>
In-Reply-To: <1392809725.29739.5.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 3/8] xen/arm: IRQ: Protect IRQ to be
 shared between domains and XEN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:35 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> The current dt_route_irq_to_guest implementation set IRQ_GUEST no matter if the
>> IRQ is correctly setup.
>>
>> As IRQ can be shared between devices, if the devices are not assigned to the
>> same domain or Xen, this could result to IRQ route to the domain instead of
>> Xen ...
>>
>> Also avoid to rely on wrong behaviour when Xen is routing an IRQ to DOM0.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> ---
>>     Hopefully, none of the supported platforms have UARTs (the only device
> 
>                                                      ^shared?

Hmmm ... I don't remember what I was trying to say here :/.

Anyway, this part was arguing for pushing the patch into Xen 4.4. It
doesn't make sense anymore, so I will remove it.

> 
> Other than wondering if EBUSY might be more natural than EADDRINUSE and
> some grammar nits (below) I think this patch looks good.

Right, I will use EBUSY for the next version.

> 
>>     currently used by Xen). It would be nice to have this patch for Xen 4.4 to
>>     avoid waste of time for developer.
>>
>>     The downside of this patch is if someone wants to support a such platform
>>     (eg IRQ shared between device assigned to different domain/XEN), it will
>>     end up to a error message and a panic.
>> ---
>>  xen/arch/arm/domain_build.c |    8 ++++++--
>>  xen/arch/arm/gic.c          |   40 +++++++++++++++++++++++++++++++++++++++-
>>  2 files changed, 45 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 47b781b..1fc359a 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -712,8 +712,12 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
>>          }
>>  
>>          DPRINT("irq %u = %u type = 0x%x\n", i, irq.irq, irq.type);
>> -        /* Don't check return because the IRQ can be use by multiple device */
>> -        gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
>> +        res = gic_route_irq_to_guest(d, &irq, dt_node_name(dev));
>> +        if ( res )
>> +        {
>> +            printk(XENLOG_ERR "Unable to route the IRQ %u to dom0\n", irq.irq);
> 
> "Unable to route IRQ %u..." and I think you want to use d->domain_id
> rather than hardcoding 0.

I will fix it. At the same time, the error message when Xen is unable to
map the range also uses "dom0". I will send a separate patch for that.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:01:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7hr-00034w-8X; Wed, 19 Feb 2014 14:00:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <luokain@gmail.com>) id 1WG7gb-0002xy-T0
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 13:59:42 +0000
Received: from [85.158.139.211:19936] by server-10.bemta-5.messagelabs.com id
	9E/5C-08578-DC8B4035; Wed, 19 Feb 2014 13:59:41 +0000
X-Env-Sender: luokain@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392818379!4937171!1
X-Originating-IP: [209.85.216.51]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16120 invoked from network); 19 Feb 2014 13:59:40 -0000
Received: from mail-qa0-f51.google.com (HELO mail-qa0-f51.google.com)
	(209.85.216.51)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 13:59:40 -0000
Received: by mail-qa0-f51.google.com with SMTP id f11so392875qae.38
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 05:59:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=+7wvqd39e8FQsgqq7FrfEq+hvfHAXDnJfeMO6BxWgIs=;
	b=s1VFW9+K652War1c7BohsHbCPD5eFhbs3VFsOJiDAr2yqfv7C/aAlc4F2zAJHkrN/J
	yG8AIy1NxwYF6vd9RqdxtSXDxvIWay1SMF8C23KyHy/oiIb4vro9lBA+cZNo6JLVPl4n
	xm6Km3H5hj5tK139u+EEPktHJy9hDTkxOeKYJ/aFQWGH0Z8bMPAXAoQHxkAw2f4UiiC1
	V75U06MxedMWgv8x//X6+iUH/BY2V1Kpu/ocmJrm/VIoP1OADdasEcnmJm1ojkysbzqz
	eqDXYN76imVgUxa3nN9ZFt05VIIK8hOnHN0dpET1cVytLMUd6xBnT98pirZUMG7mpYQf
	Yz/Q==
MIME-Version: 1.0
X-Received: by 10.224.68.10 with SMTP id t10mr1108838qai.87.1392818378724;
	Wed, 19 Feb 2014 05:59:38 -0800 (PST)
Received: by 10.140.86.9 with HTTP; Wed, 19 Feb 2014 05:59:38 -0800 (PST)
Date: Wed, 19 Feb 2014 21:59:38 +0800
Message-ID: <CAN71wULW2S7izfJBJ5mpQ-0txw_StT7_YEk4MdBGhEwuSexo+Q@mail.gmail.com>
From: Kai Luo <luokain@gmail.com>
To: xen-devel@lists.xen.org
X-Mailman-Approved-At: Wed, 19 Feb 2014 14:00:57 +0000
Subject: [Xen-devel] Problem when launching an instance in OpenStack when we
 use Xen as the virtualization layer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8500726370466343815=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8500726370466343815==
Content-Type: multipart/alternative; boundary=001a11c2e2dcb359b704f2c2cc15

--001a11c2e2dcb359b704f2c2cc15
Content-Type: text/plain; charset=ISO-8859-1

Hello all:
      Recently we have been trying to deploy OpenStack with Xen 4.3.0 as
the virtualization layer; however, an error occurred when we launched an
instance from an image. We have confirmed that the OpenStack services work
fine, so the problem may lie in the Xen layer. We checked the xend log and
found the following exception:

*TapdiskException: ('create',
'-aqcow2:arbva/instances4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
(32512  )*
[2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:3078)
XendDomainInfo.destroy: domid=21
[2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2408) No device model
[2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2410) Releasing devices
[2014-02-18 21:13:43 11395] ERROR (SrvBase:88) Request start failed.
Traceback (most recent call last):
  File "/usrb/xen-4.3/bin/..b/python/xen/web/SrvBase.py", line 85, in
perform
    return op_method(op, req)
  File "/usrb/xen-4.3/bin/..b/python/xen/xendrver/SrvDomain.py", line 77,
in op_start
    return self.xd.domain_start(self.dom.getName(), paused)
  File "/usrb/xen-4.3/bin/..b/python/xen/xend/XendDomain.py", line 1070, in
domain_start
    dominfo.start(is_managed = True)
  File "/usrb/xen-4.3/bin/..b/python/xen/xend/XendDomainInfo.py", line 474,
in start
    XendTask.log_progress(31, 60, self._initDomain)
  File "/usrb/xen-4.3/bin/..b/python/xen/xend/XendTask.py", line 209, in
log_progress
    retval = func(*args, **kwds)
  File "/usrb/xen-4.3/bin/..b/python/xen/xend/XendDomainInfo.py", line
2845, in _initDomain
    self._configureBootloader()
  File "/usrb/xen-4.3/bin/..b/python/xen/xend/XendDomainInfo.py", line
3286, in _configureBootloader
    mounted_vbd_uuid = dom0.create_vbd(vbd, disk);
  File "/usrb/xen-4.3/bin/..b/python/xen/xend/XendDomainInfo.py", line
3979, in create_vbd
    devid = dev_control.createDevice(config)
  File "/usrb/xen-4.3/bin/..b/python/xen/xendrver/BlktapController.py",
line 172, in createDevice
    device = TapdiskController.create(params, file)
  File "/usrb/xen-4.3/bin/..b/python/xen/xendrver/BlktapController.py",
line 284, in create
    return TapdiskController.exc('create', '-a%s:%s' % (dtype, image))
  File "/usrb/xen-4.3/bin/..b/python/xen/xendrver/BlktapController.py",
line 231, in exc
    (args, rc, out, err))
TapdiskException: ('create',
'-aqcow2:arbva/instances4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
(32512  )
[2014-02-18 21:13:44 11395] INFO (XendDomain:1126) Domain instance-00000035
(db4b07c5-7443-418b-81c4-a1f55fb264d3) deleted.

     We tried images in both RAW and QCOW2 format and switched the Xen
toolstack from xm to xl, but it still did not work. When we switched the
toolstack to xl and stopped the xend service, virt-manager did not work
either. We don't know what caused this problem. We know XCP may be a
better choice, but we have to use Xen as the OpenStack virtualization
layer because we have made some modifications in Xen. Could you give us
any suggestions?

Jone

--001a11c2e2dcb359b704f2c2cc15--


--===============8500726370466343815==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8500726370466343815==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 14:03:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7ke-0003Dw-TN; Wed, 19 Feb 2014 14:03:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG7kd-0003Dn-KK
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 14:03:51 +0000
Received: from [85.158.143.35:56374] by server-3.bemta-4.messagelabs.com id
	4E/66-11539-6C9B4035; Wed, 19 Feb 2014 14:03:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392818628!6792831!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24002 invoked from network); 19 Feb 2014 14:03:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:03:49 -0000
From xen-devel-bounces@lists.xen.org Wed Feb 19 14:03:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7ke-0003Dw-TN; Wed, 19 Feb 2014 14:03:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG7kd-0003Dn-KK
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 14:03:51 +0000
Received: from [85.158.143.35:56374] by server-3.bemta-4.messagelabs.com id
	4E/66-11539-6C9B4035; Wed, 19 Feb 2014 14:03:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392818628!6792831!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24002 invoked from network); 19 Feb 2014 14:03:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:03:49 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102183064"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 14:03:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 09:03:47 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG7kT-0002Mk-Tf;
	Wed, 19 Feb 2014 14:03:41 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG7kS-0007od-9Y;
	Wed, 19 Feb 2014 14:03:40 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 19 Feb 2014 14:03:29 +0000
Message-ID: <1392818610-30007-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <21252.47381.304145.583259@mariner.uk.xensource.com>
References: <21252.47381.304145.583259@mariner.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, coverity@xenproject.org
Subject: [Xen-devel] [PATCH 1/2] libxl: Fix error path in
	libxl_device_events_handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_device_events_handler would fail to call AO_ABORT on its error
path; instead it would simply return rc.  (This potentially leaves the
egc etc. from the now-abolished stack frame live, and leaves the ctx
locked.)

In xl this is of no consequence, because xl will immediately exit in
this situation.  The same is very likely true of any other callers
(though we know of none).

Coverity-ID: 1181840
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: coverity@xenproject.org
---
 tools/libxl/libxl.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index 730f6e1..2d29ad2 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -3881,7 +3881,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 
 out:
     GC_FREE;
-    return rc ? : AO_INPROGRESS;
+    if (rc) return AO_ABORT(rc);
+    return AO_INPROGRESS;
 }
 
 /******************************************************************************/
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:03:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7kj-0003FA-FZ; Wed, 19 Feb 2014 14:03:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG7kh-0003Ee-TT
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 14:03:56 +0000
Received: from [85.158.143.35:57004] by server-1.bemta-4.messagelabs.com id
	89/25-31661-BC9B4035; Wed, 19 Feb 2014 14:03:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392818633!6838525!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6264 invoked from network); 19 Feb 2014 14:03:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:03:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103888063"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 14:03:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 09:03:52 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG7kZ-0002Mo-1V;
	Wed, 19 Feb 2014 14:03:47 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG7kX-0007oi-O2;
	Wed, 19 Feb 2014 14:03:45 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 19 Feb 2014 14:03:30 +0000
Message-ID: <1392818610-30007-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392818610-30007-1-git-send-email-ian.jackson@eu.citrix.com>
References: <21252.47381.304145.583259@mariner.uk.xensource.com>
	<1392818610-30007-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, coverity@xenproject.org
Subject: [Xen-devel] [PATCH 2/2] xl: Comment error handling in dolog
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Coverity-ID: 1087116
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: coverity@xenproject.org
---
 tools/libxl/xl_cmdimpl.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4fc46eb..1a7fd09 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -352,6 +352,8 @@ static void dolog(const char *file, int line, const char *func, char *fmt, ...)
     rc = vasprintf(&s, fmt, ap);
     va_end(ap);
     if (rc >= 0)
+        /* we ignore write errors since we have no way to report them;
+         * the alternative would be to abort the whole program */
         libxl_write_exactly(NULL, logfile, s, rc, NULL, NULL);
     free(s);
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:09:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7pW-0003ca-9j; Wed, 19 Feb 2014 14:08:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG7pT-0003cL-V5
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 14:08:52 +0000
Received: from [85.158.139.211:62390] by server-16.bemta-5.messagelabs.com id
	DD/5E-05060-3FAB4035; Wed, 19 Feb 2014 14:08:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392818928!4960490!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 699 invoked from network); 19 Feb 2014 14:08:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:08:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103889924"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 14:08:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 09:08:38 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG7pF-0002P0-Ue;
	Wed, 19 Feb 2014 14:08:38 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG7pE-0007pQ-96;
	Wed, 19 Feb 2014 14:08:36 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21252.47842.950252.77709@mariner.uk.xensource.com>
Date: Wed, 19 Feb 2014 14:08:34 +0000
To: <xen-devel@lists.xensource.com>, Ian Campbell <ian.campbell@citrix.com>,
	<coverity@xenproject.org>
In-Reply-To: <1392818610-30007-1-git-send-email-ian.jackson@eu.citrix.com>
References: <21252.47381.304145.583259@mariner.uk.xensource.com>
	<1392818610-30007-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/2] libxl: Fix error path in
	libxl_device_events_handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("[PATCH 1/2] libxl: Fix error path in libxl_device_events_handler"):
> libxl_device_events_handler would fail to call AO_ABORT on its error
> path; instead it would simply return rc.  (This potentially leaves the
> egc etc. from the now-abolished stack frame live, and leaves the ctx
> locked.)
> 
> In xl this is of no consequence, because xl will immediately exit in
> this situation.  The same is very likely true of any other callers
> (though we know of none).

FAOD, I see no need for this to be in 4.4, although it should go in
iff we are currently accepting trivial bugfixes.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:12:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7sl-0003jH-U0; Wed, 19 Feb 2014 14:12:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WG7sk-0003jB-K3
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 14:12:15 +0000
Received: from [193.109.254.147:17897] by server-7.bemta-14.messagelabs.com id
	50/FB-23424-DBBB4035; Wed, 19 Feb 2014 14:12:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392819132!5451428!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_23,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12512 invoked from network); 19 Feb 2014 14:12:13 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 14:12:13 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392819132; l=1520;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=vEzfDBJBY7gZCJh5V73anc9XVVk=;
	b=M2rZcJdFHysWAIOSU5pC6J7eUq1HAOgdPhKjQAKvEkfF88mIxa4CciBOglmyaDEVo61
	Ze58CRChYH3CC2yEfWOO6CA22Kmd175T923dgUKFkHr795X6JQi8lvZDn8cq5f9Ea2vRU
	szUlOftDCZK6d1c7ccYGkh37yuyBVz1k0I4=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id 9015beq1JEC7rOG
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate);
	Wed, 19 Feb 2014 15:12:07 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 863CC5026A; Wed, 19 Feb 2014 15:12:07 +0100 (CET)
Date: Wed, 19 Feb 2014 15:12:07 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140219141207.GA9631@aepfle.de>
References: <20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, Adel Amani wrote:

> Hi
> I know that, because there is a lot of output in xen-4.1, there is no
> logger in xc_save.c.  I changed the code again according to
> "http://xen.1045712.n5.nabble.com/xen-unstable-tools-xc-restore-logging-in-xc-save-td5714324.html".
> But I don't know the purpose of Mr "patchbot"'s
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> I tested Mr "patchbot"'s code again without consideration of
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> and the result was again no message :-( ....


All of that is very imprecise, so we cannot help.
Also, read what I wrote: don't drop xen-devel@lists.xen.org.

Olaf



> On Saturday, February 8, 2014 1:52 AM, Olaf Hering <olaf@aepfle.de> wrote:
> Please keep xen-devel@lists.xen.org in CC list.
> 
> On Fri, Feb 07, Adel Amani wrote:
> 
> > Yes, to print the data, the function print_stats() in
> > xc_domain_save.c should run and work.  I read through the function
> > and checked... but I really don't know why this doesn't work!!! :-|
> > I tested 'fprintf(stderr,"STDERR\n"); fprintf(stdout,"STDOUT\n");'
> > but again no answer :'(.....
> 
> Please make sure the self-compiled binary is actually used. Try this to
> verify: grep STDERR /usr/lib/xen/bin/xc_save (assuming the fprintf above
> is actually in the compiled code.)
> 
> > How do I boot the domU with 'initcall_debug'?! Does it affect the total time?!
> 
> This is a kernel cmdline option. Please check the documentation about
> how to pass additional kernel parameters to a domU.
> 
> 
> Olaf
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:12:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7sl-0003jH-U0; Wed, 19 Feb 2014 14:12:15 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WG7sk-0003jB-K3
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 14:12:15 +0000
Received: from [193.109.254.147:17897] by server-7.bemta-14.messagelabs.com id
	50/FB-23424-DBBB4035; Wed, 19 Feb 2014 14:12:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392819132!5451428!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_23,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12512 invoked from network); 19 Feb 2014 14:12:13 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 14:12:13 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392819132; l=1520;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=vEzfDBJBY7gZCJh5V73anc9XVVk=;
	b=M2rZcJdFHysWAIOSU5pC6J7eUq1HAOgdPhKjQAKvEkfF88mIxa4CciBOglmyaDEVo61
	Ze58CRChYH3CC2yEfWOO6CA22Kmd175T923dgUKFkHr795X6JQi8lvZDn8cq5f9Ea2vRU
	szUlOftDCZK6d1c7ccYGkh37yuyBVz1k0I4=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.26 AUTH) with ESMTPSA id 9015beq1JEC7rOG
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate);
	Wed, 19 Feb 2014 15:12:07 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 863CC5026A; Wed, 19 Feb 2014 15:12:07 +0100 (CET)
Date: Wed, 19 Feb 2014 15:12:07 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140219141207.GA9631@aepfle.de>
References: <20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, Adel Amani wrote:

> Hi
> I know that xen-4.1 produces a lot of output but has no logger in xc_save.c.
> I changed the code again according to "http://xen.1045712.n5.nabble.com/
> xen-unstable-tools-xc-restore-logging-in-xc-save-td5714324.html"
> But I don't understand the purpose of "patchbot"'s hunk
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> I also tested "patchbot"'s code again without that hunk
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> and again no message appears :-( ....


All that is very imprecise, so we cannot help.
Also, read what I wrote: don't drop xen-devel@lists.xen.org from Cc.

Olaf



> On Saturday, February 8, 2014 1:52 AM, Olaf Hering <olaf@aepfle.de> wrote:
> Please keep xen-devel@lists.xen.org in CC list.
> 
> On Fri, Feb 07, Adel Amani wrote:
> 
> > Yes, to print the data, the function print_stats() in xc_domain_save.c
> > should run and work. I read the function and checked... But I really don't
> > know why this doesn't work!!! :-|... I tested 'fprintf(stderr,"STDERR\n");
> > fprintf(stdout,"STDOUT\n");' but again no answer :'(.....
> 
> Please make sure the self-compiled binary is actually used. Try this to
> verify: grep STDERR /usr/lib/xen/bin/xc_save (assuming the fprintf above
> is actually in the compiled code.)
> 
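The verification step quoted above can be made concrete. A small sketch: the real check is a grep on the installed binary (the path /usr/lib/xen/bin/xc_save comes from the quoted advice and may differ per distribution); the demonstration below applies the same technique to a stand-in file so it is reproducible anywhere:

```shell
# The real check, per the advice above (path may vary per distro/build):
#   grep STDERR /usr/lib/xen/bin/xc_save
#
# Demonstration of the same technique on a stand-in file:
printf 'ELF...junk...STDERR...junk' > /tmp/fake_xc_save

if grep -q STDERR /tmp/fake_xc_save; then
    echo "debug string present: this binary contains the added fprintf"
else
    echo "debug string absent: this is not the self-compiled binary"
fi
```

If the string is absent from the installed xc_save, the toolstack is running a binary other than the self-compiled one, which would explain the missing output.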
> > How do I boot the domU with 'initcall_debug'? Does it affect total time?
> 
> This is a kernel cmdline option. Please check the documentation about
> how to pass additional kernel parameters to a domU.
> 
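For reference, a kernel parameter such as 'initcall_debug' is typically passed to a PV domU through the guest config's extra= line; a sketch in Xen 4.1-era xm/xend syntax:

```
# domU config fragment: text appended to the guest kernel's command line
extra = "initcall_debug"
```

As far as I know, the option mainly adds per-initcall log output during boot and around suspend/resume callbacks, which is what makes it useful when debugging save/restore.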
> 
> Olaf
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:12:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:12:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7tP-0003ml-BU; Wed, 19 Feb 2014 14:12:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xumengpanda@gmail.com>) id 1WG7tN-0003ma-RH
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 14:12:54 +0000
Received: from [85.158.143.35:42573] by server-2.bemta-4.messagelabs.com id
	E2/86-10891-4EBB4035; Wed, 19 Feb 2014 14:12:52 +0000
X-Env-Sender: xumengpanda@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392819169!6827781!1
X-Originating-IP: [209.85.219.48]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8565 invoked from network); 19 Feb 2014 14:12:50 -0000
Received: from mail-oa0-f48.google.com (HELO mail-oa0-f48.google.com)
	(209.85.219.48)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:12:50 -0000
Received: by mail-oa0-f48.google.com with SMTP id l6so452206oag.21
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 06:12:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=4RDx2kW4n+EHyBbnkVebIO9Xf/rTkz2D5obc+eFQuS4=;
	b=T2CYNaRtqmt2+tiOpE+UGlSrSvxYi5reIDtXZEkok57nENSUd2Hg+un06kHUuZ+sPD
	oYF1r8WgfnTgR/wbuC536dYItHui3o6y1p0QB1d+zQTeu2D6e0laL5UI7ER/cc5p43/P
	GbmOsPp1bMSma4Rpw1R/ql0DYVvaVS9WpJ7sQhbhbVJMnZygylycOMjHVrcc2JWXF+SE
	crdbTO2IkOolDOI/1M6fnNvoEotBRRBSzGesN5xJEfyPRPTbOsmf07zmUzZv6cBEuaMC
	nTnPEuArJWfD+3JrloOD8aRQpCDFxcLoHHG7iq5x2Jgc02jXlmJR1elpUYLzGeW++SbO
	Pd0A==
MIME-Version: 1.0
X-Received: by 10.60.55.227 with SMTP id v3mr1339141oep.67.1392819169331; Wed,
	19 Feb 2014 06:12:49 -0800 (PST)
Received: by 10.76.173.98 with HTTP; Wed, 19 Feb 2014 06:12:48 -0800 (PST)
In-Reply-To: <53038748.7070002@oracle.com>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
	<1392714890.32038.463.camel@Solace>
	<CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
	<53038748.7070002@oracle.com>
Date: Wed, 19 Feb 2014 09:12:48 -0500
Message-ID: <CAENZ-+k1d9mfJnL+6hgZ5wrkeFUMb9Qb6-YrPcETSitpb5wu6A@mail.gmail.com>
From: Meng Xu <xumengpanda@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8592929094453760270=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8592929094453760270==
Content-Type: multipart/alternative; boundary=089e0111bbd6d30f0404f2c2fb0f

--089e0111bbd6d30f0404f2c2fb0f
Content-Type: text/plain; charset=ISO-8859-1

Hi Boris,

2014-02-18 11:16 GMT-05:00 Boris Ostrovsky <boris.ostrovsky@oracle.com>:

>  On 02/18/2014 10:24 AM, Meng Xu wrote:
>
>  Hi Dario,
>
>  Thank you so much for your detailed reply! It is really helpful! I'm
> looking at the vPMU and perf on Xen, and will try it. :-)
>
>
> You will need the Xen patches that Dario pointed you to (thanks Dario)
> plus Linux kernel and toolstack changes that I can send you in a separate
> email (they still need some cleanup but should be usable).
>

Thank you so much for pointing this out! :)



>
> BTW, you mentioned in the earlier email that you wrote some code to
> directly access PMU registers and didn't think the code is particularly
> useful because of portability concerns. I believe basic counters (such as
> those for cache misses) and controls are common  across pretty much all
> recent Intel processors.
>

Yes, the counters are there. But when I looked at the event and umask
numbers, they differ slightly among the 2nd, 3rd and 4th generations of
Intel CPUs. Some events are not there on earlier CPUs. (If I coded those
differences into the Xen tool I wrote, it would be like writing part of
Intel's PCM; that's why I hope to use the existing work to run in Xen.
:-) )


>
>  The reason why I want to know this information from hardware performance
> counter is that I want to know the interference among domains when
> they are running.
>
>  In addition, when we measure the latency of accessing a large array, the
> result is not what we expected. We increase the size of an array from 1KB
> to 12MB, which covers the L1 (32KB), L2 (256KB) and L3 (12MB) cache sizes.
> We expected the latency of accessing the whole array to show clear jumps
> at around 32KB, 256KB and 12MB, because the latencies of L1, L2 and L3
> differ by several times.
>
>  However, we saw that the latency does not increase much when the array
> size is larger than L1, L2 or L3. It's weird, because if we run the
> same task in Linux on a bare machine, we get the expected result.
>
>
> Although most likely your vcpus are not migrating you should still make
> sure that they are pinned (and not oversubscribed to physical processors).
>
Thanks for pointing this out!
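Boris's pinning suggestion can be expressed in the guest config; a sketch in xm/xend-style syntax (the vcpu and pcpu numbers are arbitrary):

```
# domU config fragment: 2 vcpus, restricted to physical cpus 2-3
vcpus = 2
cpus  = "2-3"
```

Pinning can also be changed at runtime, e.g. `xm vcpu-pin <domain> <vcpu> <pcpu>` (or `xl vcpu-pin` with the newer toolstack).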



> And (as with any performance measurements) disable power management and
> turbo mode. These things often mess up your timing.
>

Sure! 

Thank you very much for your help!

best,

Meng



>
> -boris
>
>
>  We are not sure if this is because of the virt. overhead or cache miss,
> that's why we want to know the cache access rate of each domain.
>
>  It would be really appreciated if you could share some of your insight on
> this.
> :-)
>
>  Thank you very much for your time!
>
>  Best,
>
>  Meng
>
>
>  2014-02-18 4:14 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
>
>> On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
>> > Hi,
>> >
>> Hi,
>>
>> > I'm a PhD student, working on real time system.
>> >
>>  Cool. There really seems to be a lot of interest in Real-Time
>> virtualization these days. :-D
>>
>> > [My goal]
>> > I want to measure the cache hit/miss rate of each guest domain in Xen.
>> > I may also want to measure some other events, say memory access rate,
>> > for each program in each guest domain in Xen.
>> >
>>  Ok. Can I, out of curiosity, ask you to detail a bit more what your
>> *final* goal is (I mean, you're interested in these measurements for a
>> reason, not just for the sake of having them, right?).
>>
>> > [The problem I'm encountering]
>> > I tried intel's Performance Counter Monitor (PCM) in Linux on bare
>> > machine to get the machine's cache access rate for each level of
>> > cache, it works very well.
>> >
>> >
>> > However, when I want to use the PCM in Xen and run it in dom0, it
>> > cannot work. I think PCM needs to run in ring 0 to read/write the
>> > MSRs. Because dom0 runs in ring 1, PCM running in dom0 cannot
>> > work.
>> >
>>  Indeed.
>>
>> > So my question is:
>> > How can I run a program (say PCM) in ring 0 on Xen?
>> >
>>  Running "a program" in there is going to be terribly difficult. What I
>> think you're better off doing is trying to access the counters from dom0
>> and/or (para)virtualize them.
>>
>>
>> In fact, there is work going on already on this, although I don't have
>> all the details about what's the current status.
>>
>> > What's in my mind is:
>> > Writing a hypercall to call the PCM in Xen's kernel space, then the
>> > PCM will run in ring 0?
>> > But the problem I'm concerned is that some of the PCM's instruction,
>> > say printf(), may not be able to run in kernel space?
>> >
>>  Well, Xen can print, e.g., on a serial console, but again, that's not
>> what you want. I'm adding the link to a few conversation about virtual
>> PMU. These are just the very first google's result, so there may well be
>> more:
>>
>>
>> http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
>> https://lwn.net/Articles/566159/
>>
>> Boris (whom I'm Cc-ing) gave a presentation about this at the latest Xen
>> Developers Summit:
>> http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>>
>> Regards,
>> Dario
>>
>> --
>> <<This happens because I choose it to happen!>> (Raistlin Majere)
>> -----------------------------------------------------------------
>> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>>
>>
>
>

--089e0111bbd6d30f0404f2c2fb0f--


--===============8592929094453760270==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8592929094453760270==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 14:12:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:12:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7tP-0003ml-BU; Wed, 19 Feb 2014 14:12:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xumengpanda@gmail.com>) id 1WG7tN-0003ma-RH
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 14:12:54 +0000
Received: from [85.158.143.35:42573] by server-2.bemta-4.messagelabs.com id
	E2/86-10891-4EBB4035; Wed, 19 Feb 2014 14:12:52 +0000
X-Env-Sender: xumengpanda@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392819169!6827781!1
X-Originating-IP: [209.85.219.48]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14, ML_RADAR_SPEW_LINKS_8, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8565 invoked from network); 19 Feb 2014 14:12:50 -0000
Received: from mail-oa0-f48.google.com (HELO mail-oa0-f48.google.com)
	(209.85.219.48)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:12:50 -0000
Received: by mail-oa0-f48.google.com with SMTP id l6so452206oag.21
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 06:12:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=4RDx2kW4n+EHyBbnkVebIO9Xf/rTkz2D5obc+eFQuS4=;
	b=T2CYNaRtqmt2+tiOpE+UGlSrSvxYi5reIDtXZEkok57nENSUd2Hg+un06kHUuZ+sPD
	oYF1r8WgfnTgR/wbuC536dYItHui3o6y1p0QB1d+zQTeu2D6e0laL5UI7ER/cc5p43/P
	GbmOsPp1bMSma4Rpw1R/ql0DYVvaVS9WpJ7sQhbhbVJMnZygylycOMjHVrcc2JWXF+SE
	crdbTO2IkOolDOI/1M6fnNvoEotBRRBSzGesN5xJEfyPRPTbOsmf07zmUzZv6cBEuaMC
	nTnPEuArJWfD+3JrloOD8aRQpCDFxcLoHHG7iq5x2Jgc02jXlmJR1elpUYLzGeW++SbO
	Pd0A==
MIME-Version: 1.0
X-Received: by 10.60.55.227 with SMTP id v3mr1339141oep.67.1392819169331; Wed,
	19 Feb 2014 06:12:49 -0800 (PST)
Received: by 10.76.173.98 with HTTP; Wed, 19 Feb 2014 06:12:48 -0800 (PST)
In-Reply-To: <53038748.7070002@oracle.com>
References: <CAENZ-+k37zggHyp1aXMYF2jmoBThY5fMdKh_++NMr4LG0Ubujw@mail.gmail.com>
	<1392714890.32038.463.camel@Solace>
	<CAENZ-+mm2VWF_0L2ZWEfga3UtbUDkHv-X95ovNPcm61Pyk-uQw@mail.gmail.com>
	<53038748.7070002@oracle.com>
Date: Wed, 19 Feb 2014 09:12:48 -0500
Message-ID: <CAENZ-+k1d9mfJnL+6hgZ5wrkeFUMb9Qb6-YrPcETSitpb5wu6A@mail.gmail.com>
From: Meng Xu <xumengpanda@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"mengxu@cis.upenn.edu" <mengxu@cis.upenn.edu>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Question about running a program(Intel PCM) in ring
 0 on Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8592929094453760270=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8592929094453760270==
Content-Type: multipart/alternative; boundary=089e0111bbd6d30f0404f2c2fb0f

--089e0111bbd6d30f0404f2c2fb0f
Content-Type: text/plain; charset=ISO-8859-1

Hi Boris,

2014-02-18 11:16 GMT-05:00 Boris Ostrovsky <boris.ostrovsky@oracle.com>:

>  On 02/18/2014 10:24 AM, Meng Xu wrote:
>
>  Hi Dario,
>
>  Thank you so much for your detailed reply! It is really helpful! I'm
> looking at the vPMU and perf on Xen, and will try it. :-)
>
>
> You will need the Xen patches that Dario pointed you to (thanks Dario)
> plus Linux kernel and toolstack changes that I can send you in a separate
> email (they still need some cleanup but should be usable).
>

Thank you so much for pointing this out! :)



>
> BTW, you mentioned in the earlier email that you you wrote some code to
> directly access PMU registers and didn't think the code is particularly
> useful because of portability concerns. I believe basic counters (such as
> those for cache misses) and controls are common  across pretty much all
> recent Intel processors.
>

Yes, the counters are there. But when I looked at the events and umask
number, they have slightly difference among the 2nd, 3rd and 4th generation
of Intel's cpu. Some events are not there in earlier version of CPU. (If I
code those difference in the xen tool I wrote, it will be like writing part
of intel's PMC. that's why I hope to use the existing work to run in Xen.
:-) )


>
>  The reason why I want to know this information from hardware performance
> counter is because I want to know the interference among each domains when
> they are running.
>
>  In addition, when we measure the latency of accessing a large array, the
> result is out of our expectation. We increase the size of an array from 1KB
> to 12MB, which covers the L1(32KB), L2(256KB) and L3(12MB) cache size. We
> expect that the latency of accessing the whole array should have clear cut
> at around 32KB, 256KB and 12MB because the latency of L1 L2 and L3 are
> several times different.
>
>  However, we saw the latency does not increase much when the array size
> is larger than the size of L1, L2, and L3. It's weird because if we run the
> same task in Linux on bare machine, it is the expected result.
>
>
> Although most likely your vcpus are not migrating you should still make
> sure that they are pinned (and not oversubscribed to physical processors).
>
> Thanks for pointing this out! 



> And (as with any performance measurements) disable power management and
> turbo mode. These things often mess up your timing.
>

Sure! 

Thank you very much for your help!

best,

Meng



>
> -boris
>
>
>  We are not sure if this is because of the virt. overhead or cache miss,
> that's why we want to know the cache access rate of each domain.
>
>  It's really appreciated  if you can share some of your insight on this.
> :-)
>
>  Thank you very much for your time!
>
>  Best,
>
>  Meng
>
>
>  2014-02-18 4:14 GMT-05:00 Dario Faggioli <dario.faggioli@citrix.com>:
>
>> On lun, 2014-02-17 at 17:32 -0500, Meng Xu wrote:
>> > Hi,
>> >
>> Hi,
>>
>> > I'm a PhD student, working on real time system.
>> >
>>  Cool. There really seems to be a lot of interest in Real-Time
>> virtualization these days. :-D
>>
>> > [My goal]
>> > I want to measure the cache hit/miss rate of each guest domain in Xen.
>> > I may also want to measure some other events, say memory access rate,
>> > for each program in each guest domain in Xen.
>> >
>>  Ok. Can I, out of curiosity, as you to detail a bit more what your
>> *final* goal is (I mean, you're interested in these measurements for a
>> reason, not just for the sake of having them, right?).
>>
>> > [The problem I'm encountering]
>> > I tried intel's Performance Counter Monitor (PCM) in Linux on bare
>> > machine to get the machine's cache access rate for each level of
>> > cache, it works very well.
>> >
>> >
>> > However, when I want to use the PCM in Xen and run it in dom0, it
>> > cannot work. I think the PCM needs to run in ring 0 to read/write the
>> > MSR. Because dom0 is running in ring 1, so PCM running in dom0 cannot
>> > work.
>> >
>>  Indeed.
>>
>> > So my question is:
>> > How can I run a program (say PCM) in ring 0 on Xen?
>> >
>>  Running "a program" in there is going to be terribly difficult. What I
>> think you're better off doing is trying to access the counters from dom0
>> and/or (para)virtualize them.
>>
>>
>> In fact, there is work going on already on this, although I don't have
>> all the details about what's the current status.
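For context on what "accessing the counters from dom0" looks like in practice: on bare-metal Linux, userspace tools such as PCM read the performance MSRs through the kernel's msr driver, which exposes each MSR of CPU N as an 8-byte read from /dev/cpu/N/msr at the offset equal to the MSR number. A minimal sketch of that path (the MSR 0x38F, IA32_PERF_GLOBAL_CTRL, is just an illustrative example); as discussed above, in a PV dom0 the underlying rdmsr traps to Xen, which is why this path stops working there:

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* The msr driver maps MSR numbers to file offsets, so reading one MSR
 * is an 8-byte pread() at offset == MSR number. */
static int read_msr_fd(int fd, uint32_t reg, uint64_t *out)
{
    return pread(fd, out, sizeof(*out), reg) == (ssize_t)sizeof(*out) ? 0 : -1;
}

/* Read MSR `reg` on CPU `cpu`; requires the msr module and root. */
static int read_msr(int cpu, uint32_t reg, uint64_t *out)
{
    char path[64];
    int fd, rc;

    snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;      /* msr module not loaded, or no permission */
    rc = read_msr_fd(fd, reg, out);
    close(fd);
    return rc;
}
```

On bare metal this returns the raw counter state; under a PV dom0 the hypervisor intercepts the MSR access, which matches the failure mode described in this thread and is what the vPMU work aims to address.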
>>
>> > What's in my mind is:
>> > Writing a hypercall to call the PCM in Xen's kernel space, then the
>> > PCM will run in ring 0?
>> > But the problem I'm concerned is that some of the PCM's instruction,
>> > say printf(), may not be able to run in kernel space?
>> >
>>  Well, Xen can print, e.g., on a serial console, but again, that's not
>> what you want. I'm adding the link to a few conversation about virtual
>> PMU. These are just the very first google's result, so there may well be
>> more:
>>
>>
>> http://xen.1045712.n5.nabble.com/Virtualization-of-the-CPU-Performance-Monitoring-Unit-td5623065.html
>> https://lwn.net/Articles/566159/
>>
>> Boris (whom I'm Cc-ing) gave a presentation about this at the latest Xen
>> Developers Summit:
>> http://www.slideshare.net/xen_com_mgr/xen-pmu-xensummit2013
>>
>> Regards,
>> Dario
>>
>> --
>> <<This happens because I choose it to happen!>> (Raistlin Majere)
>> -----------------------------------------------------------------
>> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
>> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
>>
>>
>
>

--089e0111bbd6d30f0404f2c2fb0f--


--===============8592929094453760270==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8592929094453760270==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 14:15:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7vq-0003zP-32; Wed, 19 Feb 2014 14:15:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WG7vo-0003yz-05
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 14:15:24 +0000
Received: from [193.109.254.147:34669] by server-1.bemta-14.messagelabs.com id
	16/B5-15438-B7CB4035; Wed, 19 Feb 2014 14:15:23 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392819321!1742677!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1710 invoked from network); 19 Feb 2014 14:15:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:15:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102187095"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 14:15:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 09:15:19 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WG7vj-0006wt-6M;
	Wed, 19 Feb 2014 14:15:19 +0000
Message-ID: <5304BC73.5090803@eu.citrix.com>
Date: Wed, 19 Feb 2014 14:15:15 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, Julien Grall
	<julien.grall@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>	
	<52E69CBC.3090207@linaro.org>	
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>	
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>	
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>	
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>	
	<5304B501.2010207@linaro.org>
	<1392818038.29739.74.camel@kazak.uk.xensource.com>
In-Reply-To: <1392818038.29739.74.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	xen-devel@lists.xen.org, George Dunlap <george.dunlap@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 01:53 PM, Ian Campbell wrote:
> On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
>> Hi all,
>>
>> Ping?
> No one made a case for a release exception so I put it in my 4.5 pile.
>
>>   It would be nice to have this patch for Xen 4.4 as IPI priority
>> patch won't be pushed before the release.
>>
>> The patch is a minor change and won't impact normal use. When dom0 is
>> built, Xen always does it on CPU 0.
> Right, so whoever is doing otherwise already has a big pile of patches I
> presume?
>
> It's rather late to be making such changes IMHO, but I'll defer to
> George.

I can't figure out from the description what the advantage of having 
it in 4.4 would be.

  -George



From xen-devel-bounces@lists.xen.org Wed Feb 19 14:15:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7vx-00040G-HD; Wed, 19 Feb 2014 14:15:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG7vw-0003zy-7h
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:15:32 +0000
Received: from [85.158.143.35:11262] by server-3.bemta-4.messagelabs.com id
	BD/0E-11539-38CB4035; Wed, 19 Feb 2014 14:15:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392819329!6828732!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30417 invoked from network); 19 Feb 2014 14:15:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:15:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102187147"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 14:15:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 09:15:28 -0500
Message-ID: <1392819327.29739.85.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 14:15:27 +0000
In-Reply-To: <1391794991-5919-13-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> This patch adds support for the ARM architected SMMU driver. It's based on
> the Linux driver (drivers/iommu/arm-smmu) at commit 89ac23cd.

Aha, here comes all the hard stuff ;-)

Could you briefly enumerate the areas you had to change, please?

Some comments on, e.g., which translation context type we are using and
how we are configuring things might also be helpful here in
understanding what is going on.

Also, could you give details of the test setup you used: was it just
booting dom0 on Midway with these patches? Was the DTB complete, etc.?

> + * This driver currently supports:
> + *  - SMMUv1 and v2 implementations (didn't try v2 SMMU)

I guess this is Will's original comment; I thought the SMMU-400, which
you've tried, was v2?

> + *  - Stream-matching and stream-indexing
> + *  - v7/v8 long-descriptor format
> + *  - Non-secure access to the SMMU
> + *  - 4k pages, p2m shared with the processor
> + *  - Up to 40-bit addressing
> + *  - Context fault reporting
> + */
> +
> +#include <xen/config.h>
> +#include <xen/delay.h>
> +#include <xen/errno.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/list.h>
> +#include <xen/mm.h>
> +#include <xen/rbtree.h>
> +#include <xen/sched.h>
> +#include <asm/atomic.h>
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <asm/platform.h>
> +
> +#define SZ_4K                               (1 << 12)
> +#define SZ_64K                              (1 << 16)
> +
> +/* Driver options */
> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)

Is this just retained to reduce the deviation from the Linux driver?
It's of no use to us, I think. (I suppose that goes for a bunch of other
stuff, e.g. the PGSZ_4K stuff, which I will avoid commenting on
further.)

> +
> +void arm_smmu_iommu_dom0_init(struct domain *d)
> +{
> +    struct arm_smmu_device *smmu;
> +    struct rb_node *node;
> +
> +    printk(XENLOG_DEBUG "arm-smmu: Initialize dom0\n");
> +
> +    list_for_each_entry( smmu, &arm_smmu_devices, list )
> +    {
> +        for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
> +        {
> +            struct arm_smmu_master *master;
> +
> +            master = container_of(node, struct arm_smmu_master, node);
> +
> +            if ( dt_device_used_by(master->dt_node) == DOMID_XEN ||
> +                 platform_device_is_blacklisted(master->dt_node) )
> +                continue;
> +
> +            arm_smmu_attach_dev(d, master->dt_node);

Should this not be driven from the same loop as the MMIO mapping setup
in dom0 build? Otherwise isn't there a danger that they won't
correspond?

A "master" here is a "bus master" i.e. a device, right?

> +    /* Even if the device can't be initialized, we don't want to give to
> +     * dom0 the smmu device

"give the smmu device to dom0"


I obviously haven't reviewed all this code in detail, but I have skimmed
it and nothing leapt out.

Ian.



From xen-devel-bounces@lists.xen.org Wed Feb 19 14:15:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7vx-00040G-HD; Wed, 19 Feb 2014 14:15:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG7vw-0003zy-7h
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:15:32 +0000
Received: from [85.158.143.35:11262] by server-3.bemta-4.messagelabs.com id
	BD/0E-11539-38CB4035; Wed, 19 Feb 2014 14:15:31 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392819329!6828732!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30417 invoked from network); 19 Feb 2014 14:15:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:15:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102187147"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 14:15:29 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 09:15:28 -0500
Message-ID: <1392819327.29739.85.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 14:15:27 +0000
In-Reply-To: <1391794991-5919-13-git-send-email-julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
> This patch adds support for the ARM architected SMMU driver. It's based on
> the Linux driver (drivers/iommu/arm-smmu), commit 89ac23cd.

Aha, here comes all the hard stuff ;-)

Could you please try to briefly enumerate the areas you had to change?

Some comments on e.g. which translation context type we are using and
how we are configuring things etc might also be helpful here in
understanding what is going on.

Also, could you give details of the test setup you used? Was it just
booting dom0 on Midway with these patches? Was the DTB complete, etc.?

> + * This driver currently supports:
> + *  - SMMUv1 and v2 implementations (didn't try v2 SMMU)

I guess this is Will's original comment, I thought SMMU-400, which
you've tried, was v2?

> + *  - Stream-matching and stream-indexing
> + *  - v7/v8 long-descriptor format
> + *  - Non-secure access to the SMMU
> + *  - 4k pages, p2m shared with the processor
> + *  - Up to 40-bit addressing
> + *  - Context fault reporting
> + */
> +
> +#include <xen/config.h>
> +#include <xen/delay.h>
> +#include <xen/errno.h>
> +#include <xen/irq.h>
> +#include <xen/lib.h>
> +#include <xen/list.h>
> +#include <xen/mm.h>
> +#include <xen/rbtree.h>
> +#include <xen/sched.h>
> +#include <asm/atomic.h>
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <asm/platform.h>
> +
> +#define SZ_4K                               (1 << 12)
> +#define SZ_64K                              (1 << 16)
> +
> +/* Driver options */
> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)

Is this just retained to reduce the deviation from the Linux driver?
It's no use to us I think. (I suppose that goes for a bunch of other
stuff, e.g. the PGSZ_4K stuff, which I will avoid commenting on
further).

> +
> +void arm_smmu_iommu_dom0_init(struct domain *d)
> +{
> +    struct arm_smmu_device *smmu;
> +    struct rb_node *node;
> +
> +    printk(XENLOG_DEBUG "arm-smmu: Initialize dom0\n");
> +
> +    list_for_each_entry( smmu, &arm_smmu_devices, list )
> +    {
> +        for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
> +        {
> +            struct arm_smmu_master *master;
> +
> +            master = container_of(node, struct arm_smmu_master, node);
> +
> +            if ( dt_device_used_by(master->dt_node) == DOMID_XEN ||
> +                 platform_device_is_blacklisted(master->dt_node) )
> +                continue;
> +
> +            arm_smmu_attach_dev(d, master->dt_node);

Should this not be driven from the same loop as the MMIO mapping setup
in dom0 build? Otherwise isn't there a danger that they won't
correspond?

A "master" here is a "bus master" i.e. a device, right?

> +    /* Even if the device can't be initialized, we don't want to give to
> +     * dom0 the smmu device

"give the smmu device to dom0"


I obviously haven't reviewed all this code in detail, but I have skimmed
it and nothing leapt out.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:16:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG7x5-00049N-3h; Wed, 19 Feb 2014 14:16:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG7x3-00049B-QO
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:16:42 +0000
Received: from [193.109.254.147:31763] by server-9.bemta-14.messagelabs.com id
	7D/DB-24895-9CCB4035; Wed, 19 Feb 2014 14:16:41 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392819400!1440112!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10982 invoked from network); 19 Feb 2014 14:16:40 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:16:40 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so395154eak.29
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 06:16:40 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=6oaIui/jq7bIBw+0OLQ33aFuijxr8oFFuQBzXtROUQY=;
	b=g0NvSrZ7W6MO3lTwbrQXXtHWTUl1MULl0B44e2NuMDos8uXLWtfY/NUVeXecCQb/Qg
	8NVLneTnfwvdYbZJf44vvNRpQy73t9F3QwslUOawzQgDuMV4DmIVF719YndAjgMyaeW5
	26LqnUEXmnVyDd9H9YAYG3/vsP5fFJ+3CTG52z/SkV0L9JRAD5ccw0BMe8uU7rDg2mwE
	vaK/SjzcCj8FlIj3yOUlVCDw89DJP8JuHMSRVN4J3mpNKlVEb3xz78QZnUiuwzMaYHkL
	FCZJ35etApOhi5gMi0WKoGbfNfSHIMSDcAwbjk7skTnT10AqG8/Bn1yPNrccXEyzEuBE
	3iYA==
X-Gm-Message-State: ALoCoQlYpqPCtmj7I50ARfYbnY35fXNePoMMAmkacwm/nilXMSlCYnmrgCncv2T+5AwlxaAz5Saw
X-Received: by 10.15.67.197 with SMTP id u45mr40142131eex.24.1392819399744;
	Wed, 19 Feb 2014 06:16:39 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id o45sm1143097eeb.18.2014.02.19.06.16.38
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:16:39 -0800 (PST)
Message-ID: <5304BCC5.7060905@linaro.org>
Date: Wed, 19 Feb 2014 14:16:37 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-5-git-send-email-julien.grall@linaro.org>
	<1392810312.29739.11.camel@kazak.uk.xensource.com>
In-Reply-To: <1392810312.29739.11.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 4/8] xen/arm: irq: Don't need to
 have a specific function to route IRQ to Xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Ian,

On 02/19/2014 11:45 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> Actually, when the IRQ is handling by Xen, the setup is done in 2 steps:
> 
> s/Actually, //
> 
> I'd also go with a title like "remove need to have specific..." or
> "remove function to route...".
> 
>>     - Route the IRQ to the current CPU and set priorities
>>     - Set up the handler
>>
>> For PPIs, these step are called on every cpus. For SPIs, it's called only
> 
>                      ^s                    cpu             they are only called
> 
>> on the boot CPU.
>>
>> Divided the setup in two step complicated the code when a new driver is
> 
> Dividing           into two steps complicates
> 
>> added by Xen (for instance a SMMU driver). Xen can safely route the IRQ
> 
>        to Xen
> 
>> when the driver setup the interrupt handler.
> 
>                  sets up

Thanks for looking over my grammar nits :).

> Although for this final para I'm not sure why a new driver is needed --
> either it is already complicated or not.

I will remove this para then.

>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> ---
>>  xen/arch/arm/gic.c         |   67 +++++++++++++++-----------------------------
>>  xen/arch/arm/setup.c       |    3 --
>>  xen/arch/arm/smpboot.c     |    2 --
>>  xen/arch/arm/time.c        |   11 --------
>>  xen/include/asm-arm/gic.h  |    7 -----
>>  xen/include/asm-arm/time.h |    3 --
>>  6 files changed, 23 insertions(+), 70 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index d68bde3..58bcba3 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -254,43 +254,25 @@ static void gic_set_irq_properties(unsigned int irq, bool_t level,
>>      spin_unlock(&gic.lock);
>>  }
>>  
>> -/* Program the GIC to route an interrupt */
>> +/* Program the GIC to route an interrupt to the host (eg Xen)
>> + * - needs to be called with desc.lock held
> 
> This suggests that the caller must have desc in its hand, but it then
> passes irq here so we can look it up again. It may as well pass desc I
> think.

Right, I will update release_irq to take an irq_desc as a parameter
instead of the IRQ number.

> 
>>  void __init release_irq(unsigned int irq)
>>  {
>>      struct irq_desc *desc;
>> @@ -601,6 +561,7 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>>      int rc;
>>      unsigned long flags;
>>      struct irq_desc *desc;
>> +    bool_t disabled = 0;
>>  
>>      desc = irq_to_desc(irq->irq);
>>  
>> @@ -620,6 +581,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
>>          return -EADDRINUSE;
>>      }
>>  
>> +    disabled = (desc->action == NULL);
>> +
>> +    /* First time the IRQ is setup */
>> +    if ( disabled )
>> +    {
>> +        bool_t level;
>> +
>> +        level = dt_irq_is_level_triggered(irq);
>> +        /* It's fine to use smp_processor_id() because:
>> +         * For PPI: irq_desc is banked
>> +         * For SGI: we don't care for now which CPU will receive the
>> +         * interrupt
>> +         * TODO: Handle case where SGI is setup on different CPU than
>> +         * the targeted CPU and the priority.
> 
> Do you mean s/SGI/SPI/ here? SGIs are inherently per CPU, like PPIs.

Yes, SPI.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:24:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG83v-0004WP-19; Wed, 19 Feb 2014 14:23:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG83t-0004WK-GZ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:23:45 +0000
Received: from [85.158.139.211:17738] by server-6.bemta-5.messagelabs.com id
	A6/1F-14342-07EB4035; Wed, 19 Feb 2014 14:23:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392819823!4953133!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2326 invoked from network); 19 Feb 2014 14:23:44 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:23:44 -0000
Received: by mail-ea0-f176.google.com with SMTP id b10so392765eae.21
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 06:23:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=6PwzclUqUjlMgM2e4PlFGgbpIMpsHYKM5pLMCEnTc9A=;
	b=UtqimczSXT+5qmqzVvrknj9xDSlexM6p0itn6ka7eiqyUM9PgJgSTpYpNGAry38+qy
	UbCHGIF6GJpbjQKyp5atW0eHDfxn1j4yUOhrRbdE5+DOC33Gthx+ZV9n+0V3iX/5zPhb
	dPcABikYp0jrakTxvPjdLZLVZzIde2REgR2aWV+tK8C72OHXXcVKeLvDsKfjO0CkPMQ3
	6EmETEP7UX29XfeYDfsmijGuqZ3GIEXLyY57oSXdKLu8UUTrAwYy1gwFM3tGPPiGma8l
	IHBZnK38H1Rl3nBFQ/7tMuwcUuaWrlRHi6S/6KmkPMkvqbZCuF7ZQoAjpUb8ZzRuz69u
	LEAw==
X-Gm-Message-State: ALoCoQnD7S+nzBB2mvwuc51TwtaLwX9dPQEq05nuI4KEXH0NPIeqnpJtql/RlQAXMqyGtLICCFzq
X-Received: by 10.14.203.71 with SMTP id e47mr2487929eeo.99.1392819823655;
	Wed, 19 Feb 2014 06:23:43 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id d9sm1304494eei.9.2014.02.19.06.23.42
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:23:42 -0800 (PST)
Message-ID: <5304BE6D.5020401@linaro.org>
Date: Wed, 19 Feb 2014 14:23:41 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-6-git-send-email-julien.grall@linaro.org>
	<1392810475.29739.13.camel@kazak.uk.xensource.com>
In-Reply-To: <1392810475.29739.13.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 5/8] xen/arm: IRQ: rename
	release_irq in release_dt_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:47 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> 
> Subject: s/in/to/
> 
>> Rename the function and make the prototype consistent with request_dt_irq.
>>
>> The new parameter (dev_id) will be used in a later patch to release the right
>> action when the support for multiple action will be added.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks.

>> ---
>>  xen/arch/arm/gic.c        |    4 ++--
>>  xen/include/asm-arm/irq.h |    1 +
>>  2 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index 58bcba3..2643b46 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -520,13 +520,13 @@ void gic_disable_cpu(void)
>>      spin_unlock(&gic.lock);
>>  }
>>  
>> -void __init release_irq(unsigned int irq)
>> +void release_dt_irq(const struct dt_irq *irq, const void *dev_id)
>>  {
>>      struct irq_desc *desc;
>>      unsigned long flags;
>>     struct irqaction *action;
>>  
>> -    desc = irq_to_desc(irq);
>> +    desc = irq_to_desc(irq->irq);
>>  
>>      desc->handler->shutdown(desc);
>>  
>> diff --git a/xen/include/asm-arm/irq.h b/xen/include/asm-arm/irq.h
>> index 7c20703..bd8aac1 100644
>> --- a/xen/include/asm-arm/irq.h
>> +++ b/xen/include/asm-arm/irq.h
>> @@ -44,6 +44,7 @@ int __init request_dt_irq(const struct dt_irq *irq,
>>                            void (*handler)(int, void *, struct cpu_user_regs *),
>>                            const char *devname, void *dev_id);
>>  int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new);
> 
> This patch implies that things can now be dynamically registered and
> unregistered -- does this therefore need to become non-__init?

Yes, I noticed that after I sent this patch series. I have a patch for
that which I will add in the next version.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:36:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8FX-0004jf-Bm; Wed, 19 Feb 2014 14:35:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG8FW-0004ja-5y
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:35:46 +0000
Received: from [193.109.254.147:24971] by server-7.bemta-14.messagelabs.com id
	09/FE-23424-141C4035; Wed, 19 Feb 2014 14:35:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392820544!5409817!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30590 invoked from network); 19 Feb 2014 14:35:44 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:35:44 -0000
Received: by mail-ee0-f42.google.com with SMTP id b15so272654eek.1
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 06:35:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ZOAiU+bnB4BamCUp3bbwMDv63guehU7Es1JuT2rZAFI=;
	b=Y2okC/q+M2P/nkjtCdHGtKrT8WC0dYMBWcGQPQF9/KZoe3i2MV9fUMOJ8VovDQwpsO
	qdlK2pWA8m7VHC5hhwKibfuXWOGTrxsqUznJRk404NJWpPe4WkyHg4qIkuJNT1ZvuY76
	oviXOY5AkVCVIskOfey0o1f51sV397qXhlMGwmm39/iFmITAwZt4d2lhHJMwv+vZOrw2
	83Jdk1p2/fsLF/E9hO1PdY0cLJlzkrKw3cBBhBsN1cMls1CN4o04sigNrA7+loSj2zXC
	zI21pbroe/5StvKu1v86LTUBf6kj0pqxnzhWT5fYHpehD53B1kL/aHW2OpxaY5X4Wg6r
	eQmg==
X-Gm-Message-State: ALoCoQkMuUDS0ZVBnOu+A+J9QcuMo1EKedAiQBqbVvZZkSZJyu52IPmzBLUsMBWmGmBfT1yWL3Yq
X-Received: by 10.15.34.71 with SMTP id d47mr2600181eev.107.1392820544258;
	Wed, 19 Feb 2014 06:35:44 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id j41sm1333626eey.15.2014.02.19.06.35.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:35:43 -0800 (PST)
Message-ID: <5304C13C.5060505@linaro.org>
Date: Wed, 19 Feb 2014 14:35:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
In-Reply-To: <1392810675.29739.15.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
	contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:51 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> When multiple action will be supported, gic_irq_{startup,shutdown} will have
>> to be called in the same critical zone has setup/release.
> 
> s/has/as/?

Yes.

>> Otherwise it could have a race condition if at the same time CPU A is calling
>> release_dt_irq and CPU B is calling setup_dt_irq.
>>
>> This could end up to the IRQ is not enabled.
> 
> "This could end up with the IRQ not being enabled."
> 
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> ---
>>  xen/arch/arm/gic.c |   40 +++++++++++++++++++++++++---------------
>>  1 file changed, 25 insertions(+), 15 deletions(-)
>>
>> diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
>> index 2643b46..ebd2b5f 100644
>> --- a/xen/arch/arm/gic.c
>> +++ b/xen/arch/arm/gic.c
>> @@ -129,43 +129,53 @@ void gic_restore_state(struct vcpu *v)
>>      gic_restore_pending_irqs(v);
>>  }
>>  
>> -static void gic_irq_enable(struct irq_desc *desc)
>> +static unsigned int gic_irq_startup(struct irq_desc *desc)
> 
> unsigned? What are the error codes here going to be?

This is the return type requested by hw_interrupt_type.startup.

It seems that the return is never checked (even in x86 code). Maybe we
should change the prototype of hw_interrupt_type.startup.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:36:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8Fa-0004jx-Ny; Wed, 19 Feb 2014 14:35:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WG8FZ-0004jm-7b
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:35:49 +0000
Received: from [85.158.137.68:46258] by server-8.bemta-3.messagelabs.com id
	5A/C3-16039-441C4035; Wed, 19 Feb 2014 14:35:48 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392820546!1370592!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24214 invoked from network); 19 Feb 2014 14:35:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:35:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="102194121"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 14:35:45 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 09:35:45 -0500
Message-ID: <5304C13F.3030802@citrix.com>
Date: Wed, 19 Feb 2014 14:35:43 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, "Luis R. Rodriguez"
	<mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>	<20140216105754.63738163@nehalam.linuxnetplumber.net>	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
In-Reply-To: <1392803559.23084.99.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 09:52, Ian Campbell wrote:
> On Tue, 2014-02-18 at 13:02 -0800, Luis R. Rodriguez wrote:
>> On Sun, Feb 16, 2014 at 10:57 AM, Stephen Hemminger
>> <stephen@networkplumber.org> wrote:
>>> On Fri, 14 Feb 2014 18:59:37 -0800
>>> "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>>>
>>>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>>>
>>>> It doesn't make sense for some interfaces to become a root bridge
>>>> at any point in time. One example is virtual backend interfaces
>>>> which rely on other entities on the bridge for actual physical
>>>> connectivity. They only provide virtual access.
>>>>
>>>> Device drivers that know they should never become part of the
>>>> root bridge have been using a trick of setting their MAC address
>>>> to a high broadcast MAC address such as FE:FF:FF:FF:FF:FF. Instead
>>>> of using these hacks lets the interfaces annotate its intent and
>>>> generalizes a solution for multiple drivers, while letting the
>>>> drivers use a random MAC address or one prefixed with a proper OUI.
>>>> This sort of hack is used by both qemu and xen for their backend
>>>> interfaces.
>>>>
>>>> Cc: Stephen Hemminger <stephen@networkplumber.org>
>>>> Cc: bridge@lists.linux-foundation.org
>>>> Cc: netdev@vger.kernel.org
>>>> Cc: linux-kernel@vger.kernel.org
>>>> Signed-off-by: Luis R. Rodriguez <mcgrof@suse.com>
>>>
>>> This is already supported in a more standard way via the root
>>> block flag.
>>
>> Great! For documentation purposes, the root_block flag is a sysfs
>> attribute, added in 3.8 through commit 1007dd1a. The respective
>> interface flag is IFLA_BRPORT_PROTECT and can be set via the iproute2
>> bridge utility or through sysfs:
>>
>> mcgrof@garbanzo ~/linux (git::master)$ find /sys/ -name root_block
>> /sys/devices/pci0000:00/0000:00:04.0/0000:02:00.0/net/eth0/brport/root_block
>> /sys/devices/vif-3-0/net/vif3.0/brport/root_block
>> /sys/devices/virtual/net/vif3.0-emu/brport/root_block
>>
>> mcgrof@garbanzo ~/devel/iproute2 (git::master)$ cat
>> /sys/devices/vif-3-0/net/vif3.0/brport/root_block
>> 0
>> mcgrof@garbanzo ~/devel/iproute2 (git::master)$ sudo bridge link set
>> dev vif3.0 root_block on
>> mcgrof@garbanzo ~/devel/iproute2 (git::master)$ cat
>> /sys/devices/vif-3-0/net/vif3.0/brport/root_block
>> 1
>>
>> So if we want to avoid the MAC address hack for skipping a root port,
>> userspace would need to be updated to simply set this attribute after
>> adding the device to the bridge. Based on Zoltan's feedback there seem
>> to be use cases for not always enabling this for all xen-netback
>> interfaces, so we can just punt this to userspace for the topologies
>> that require it.
>>
>> The original motivation for this series was to avoid the IPv6
>> duplicate address incurred by the MAC address hack for avoiding the
>> root bridge. Given that Zoltan also noted a use case whereby IPv4 and
>> IPv6 addresses can be assigned to the backend interfaces we should be
>> able to avoid the duplicate address situation for IPv6 by using a
>> proper random MAC address *once* userspace has been updated also to
>> use IFLA_BRPORT_PROTECT. New userspace can't and won't need to set
>> this flag for older kernels (older than 3.8) as root_block is not
>> implemented on those kernels and the MAC address hack would still be
>> used there. This strategy however does put a requirement on new
>> kernels to use new userspace as otherwise the MAC address workaround
>> would not be in place and root_block would not take effect.
>
> Can't we arrange things in the Xen hotplug scripts such that if the
> root_block stuff isn't available/doesn't work we fallback to the
> existing fe:ff:ff:ff:ff:ff usage?
>
> That would avoid concerns about forward/backwards compat I think. It
> wouldn't solve the issue you are targeting on old systems, but it also
> doesn't regress them any further.

I agree; I think this problem could be better handled from userspace: if 
it can set root_block, change the default MAC to a random one; if it 
can't, stay with the default one. Or, if someone doesn't care about 
STP but DAD is still important, userspace could have a force_random_mac 
option somewhere to switch to a random MAC regardless of root_block 
presence.
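
Something like the following hotplug-script fragment could capture that 
fallback. This is a sketch only: the decide_mode/setup_vif names and the 
sysfs path handling are illustrative assumptions, not the actual Xen 
hotplug scripts.

```shell
#!/bin/sh
# Sketch of the fallback discussed above (function names and the exact
# integration into the Xen hotplug scripts are illustrative assumptions).

# decide_mode: print "root_block" when the kernel (>= 3.8) exposes a
# writable brport root_block attribute for interface $1, else "mac-hack".
decide_mode() {
    rb="/sys/class/net/$1/brport/root_block"
    if [ -w "$rb" ]; then
        echo "root_block"
    else
        echo "mac-hack"
    fi
}

# setup_vif: apply whichever mechanism is available for interface $1.
setup_vif() {
    vif="$1"
    case "$(decide_mode "$vif")" in
        root_block)
            # Kernel supports root_block: block root-bridge election
            # explicitly and keep a random/OUI-prefixed MAC address.
            echo 1 > "/sys/class/net/$vif/brport/root_block"
            ;;
        mac-hack)
            # Older kernel: fall back to the traditional high broadcast
            # MAC so this port can never win root-bridge election.
            ip link set dev "$vif" address fe:ff:ff:ff:ff:ff
            ;;
    esac
}
```

On a kernel without root_block this degrades to exactly the current 
behaviour, so old and new kernels keep working without having to 
coordinate kernel and userspace upgrades.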

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8Ik-0004yU-KF; Wed, 19 Feb 2014 14:39:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG8Ii-0004yL-MZ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:39:04 +0000
Received: from [85.158.143.35:39414] by server-3.bemta-4.messagelabs.com id
	45/EA-11539-802C4035; Wed, 19 Feb 2014 14:39:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392820740!6848737!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26312 invoked from network); 19 Feb 2014 14:39:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:39:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103900275"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 14:38:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 09:38:57 -0500
Message-ID: <1392820736.29739.86.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 14:38:56 +0000
In-Reply-To: <5304C13C.5060505@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
 contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 14:35 +0000, Julien Grall wrote:

> >> -static void gic_irq_enable(struct irq_desc *desc)
> >> +static unsigned int gic_irq_startup(struct irq_desc *desc)
> > 
> > unsigned? What are the error codes here going to be?
> 
> This is the return type requested by hw_interrupt_type.startup.
> 
> It seems that the return is never checked (even in x86 code). Maybe we
> should change the prototype of hw_interrupt_type.startup.

Worth investigating. I wonder if someone thought this might return the
resulting interrupt number (those are normally unsigned int, I think), or
if it actually used to at some point.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8Iu-0004zj-4c; Wed, 19 Feb 2014 14:39:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WG8Ir-0004yu-QG; Wed, 19 Feb 2014 14:39:14 +0000
Received: from [85.158.137.68:50841] by server-6.bemta-3.messagelabs.com id
	17/E5-09180-012C4035; Wed, 19 Feb 2014 14:39:12 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392820752!2898343!1
X-Originating-IP: [209.85.215.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12648 invoked from network); 19 Feb 2014 14:39:12 -0000
Received: from mail-ea0-f182.google.com (HELO mail-ea0-f182.google.com)
	(209.85.215.182)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:39:12 -0000
Received: by mail-ea0-f182.google.com with SMTP id r15so409707ead.27
	for <multiple recipients>; Wed, 19 Feb 2014 06:39:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=jyBcNstFs7/WXjku9BmwT6oQenNzNO0F8mtIRscQ2GE=;
	b=v/qOBULxVvw+ETh+Yo0teoOOEvJWdjcuachF8o6YpuwZ0PMuQh97ySHGv/PkaQOsVD
	nEN7KsAdMPQNrRJuYq/hafto9K48B3EuLsLq5QiaLWszV9pgvoO7xu0di6ykVEujAZn6
	5ky5MYO520mZ9zfjvGoJiB7oL6g1kZgwhj6RtlF3CyFe6avDU6gkKEK3DRQM7LfyDG0y
	cWcyi5LZ5ZeWMEpoFDgrPjipMM2oxXJAOJEj9S43VToHzp3R00BFhVHPrshtQOs6QntR
	9eEzkUHC9TDXoL8m8zKuvovxRpKdVeDhXNT5w1W9uJZ1Upd4FE9GG9KCjFytV4VPQluX
	bdXA==
X-Received: by 10.14.194.193 with SMTP id m41mr3750007een.76.1392820751899;
	Wed, 19 Feb 2014 06:39:11 -0800 (PST)
Received: from [172.16.25.10] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x2sm1429317eeo.8.2014.02.19.06.39.10
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:39:11 -0800 (PST)
Message-ID: <5304C20D.3070509@xen.org>
Date: Wed, 19 Feb 2014 14:39:09 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <5303B34E.5000702@xen.org>
	<1392806485.23084.125.camel@kazak.uk.xensource.com>
In-Reply-To: <1392806485.23084.125.camel@kazak.uk.xensource.com>
Cc: Tim Mackey <Timothy.Mackey@citrix.com>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>, xen-users@lists.xenproject.org,
	Russell Pavlicek <russell.pavlicek@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] [Vote] Proposal: Moving XCP binaries to
	XenServer.org
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/2014 10:41, Ian Campbell wrote:
> On Tue, 2014-02-18 at 19:23 +0000, Lars Kurth wrote:
>> Hi all,
>> ...
>> This proposal does *not* affect the XAPI project : the XAPI project
>> would continue to develop the XAPI toolstack as part of the Xen
>> Project (and deliver source "releases"). In fact, I would also propose
>> to make the xapi mailing list a developer mailing list. This fits much
>> better with how the Hypervisor and MirageOS projects are run and
>> creates an overall cleaner and easier to understand model for the Xen
>> Project.
> If xen-api@ becomes a devel focused list then what would be the
> appropriate place to redirect user requests to? xs-devel@xenserver.org?
> What about for xapi users who are not based on xenserver?
Good point. This was just a side topic that is not really part of this 
proposal.

Lars


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8Ik-0004yU-KF; Wed, 19 Feb 2014 14:39:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WG8Ii-0004yL-MZ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:39:04 +0000
Received: from [85.158.143.35:39414] by server-3.bemta-4.messagelabs.com id
	45/EA-11539-802C4035; Wed, 19 Feb 2014 14:39:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392820740!6848737!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26312 invoked from network); 19 Feb 2014 14:39:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:39:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,505,1389744000"; d="scan'208";a="103900275"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 14:38:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 09:38:57 -0500
Message-ID: <1392820736.29739.86.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 14:38:56 +0000
In-Reply-To: <5304C13C.5060505@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
 contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 14:35 +0000, Julien Grall wrote:

> >> -static void gic_irq_enable(struct irq_desc *desc)
> >> +static unsigned int gic_irq_startup(struct irq_desc *desc)
> > 
> > unsigned? What are the error codes here going to be?
> 
> This is the return type requested by hw_interrupt_type.startup.
> 
> It seems that the return is never checked (even in x86 code). Maybe we
> should change the prototype of hw_interrupt_type.startup.

Worth investigating. I wonder if someone thought this might return the
resulting interrupt number (those are normally unsigned int I think) or
if it actually used to at some point, etc.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:42:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:42:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8LT-0005OY-5r; Wed, 19 Feb 2014 14:41:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG8LR-0005OD-Oc
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:41:54 +0000
Received: from [85.158.137.68:37568] by server-13.bemta-3.messagelabs.com id
	63/F4-26923-0B2C4035; Wed, 19 Feb 2014 14:41:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392820911!2923794!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29488 invoked from network); 19 Feb 2014 14:41:51 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:41:51 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so410895eaj.11
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 06:41:51 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=VzYTjmDML2a9WiVUPUodw+vNtJMXz/umlLLd3Ig+48c=;
	b=AEyMWqR3oHTa3ZXYlb/CGJKDhagpOUq/07lNiQ/AFLfFcCmuhJS5DCzUYZmpC05l+R
	hEzMOzIizKLOEcc8bf535Sib9/fWPeOd0nfu0e5EcpLyTpxMuUSGUcwiF9hIB6mIBqnK
	0Y88mCEgKDVmBDgOWT0ugGR/DdVPrUPxbmKhglyPCL46+EXNEdKKv5+kAtZ7fqJtTup2
	awyApMhysDbdBb5bsGnr9+dGjuzWCI/hTCJh8RrgdofY6j7gNXS86+A8EP+s3mQBaMuO
	22ZFpO7n+UjWjV961Y7hhhN7rF4IsTT1XBBBbEWckNN761Ui56RYmzQmE1deW9v+n5s2
	Rjeg==
X-Gm-Message-State: ALoCoQkLAaJJ+vteCVS6aXbzykjf2IGC4z+1+H1r7FgtKYyphyeXBjx17GtLKABo5QCJm3HozxP8
X-Received: by 10.15.75.66 with SMTP id k42mr2471750eey.2.1392820911093;
	Wed, 19 Feb 2014 06:41:51 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id v6sm1428366eef.2.2014.02.19.06.41.49
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:41:50 -0800 (PST)
Message-ID: <5304C2AC.5080800@linaro.org>
Date: Wed, 19 Feb 2014 14:41:48 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1392810905.29739.19.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, Keir Fraser <keir@xen.org>,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:55 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> On ARM, it may happen (eg ARM SMMU) to setup multiple handler for the same
>> interrupt.
> 
> Mention here that you are therefore creating a linked list of actions
> for each interrupt.
> 
> If you use xen/list.h for this then you get a load of helpers and
> iterators which would save you open coding them.

I will use it on the next version.

> Some discussion of the behaviour wrt acking the physical interrupt might
> also be interesting, especially in the case where the shared IRQ is
> routed to the guest and to the hypervisor or to multiple guests.

For now, it's forbidden by patch #3 of this series. Sharing an IRQ
between Xen and a guest will need a bit of rework, mainly once
Stefano's patch series to remove the maintenance interrupt is applied.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:51:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8UZ-0005u3-FI; Wed, 19 Feb 2014 14:51:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG8UX-0005tv-OH
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:51:17 +0000
Received: from [193.109.254.147:38375] by server-13.bemta-14.messagelabs.com
	id 42/E2-01226-5E4C4035; Wed, 19 Feb 2014 14:51:17 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392821476!159088!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8966 invoked from network); 19 Feb 2014 14:51:16 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:51:16 -0000
Received: by mail-ea0-f181.google.com with SMTP id k10so416756eaj.12
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 06:51:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=7lS9Ex4YdN8ady+CrTZ2+W3w8xu64YVVbyKmVcKlbLA=;
	b=hGRDQS/pc4ivnJf+2DCtf80CsqrZ6tiRvdkHz+52kuv6hqCk8eg1LOJeECls3vej9A
	IrxV7fMaPScc4u6LMFBkbRRdhVTQ1SdGWzmrLvge/Z3cvc4a7HbWtpui3KPp8Cj7mr7D
	FlDGQH7DYknjJ7zoJSmA2HzwrtM0MmpYwAqWH0N5H7bG/Sbj88P0Byj9Kij/HPj08Uze
	sD0hnj2wAmHTp8BFRjfQEQ1MlNd2JSksucEXeNcyWfXrYPR0kZU5NmumrgSW+iKKCOKa
	qG/Tv88LJ7Yy/3CiGyp6N3K1OY+DQHiEPCl12QlGtbgJW9KueYC71bS3mitByshSUa6o
	0Wnw==
X-Gm-Message-State: ALoCoQmCvAoQ+vV6/2vpQrqdS62lG/4+V6BhlQCo2fP0kJTmE32QXSJcjHlIstPbB8/L7faLUpss
X-Received: by 10.15.90.203 with SMTP id q51mr41052930eez.6.1392821475864;
	Wed, 19 Feb 2014 06:51:15 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m9sm1593285eeh.3.2014.02.19.06.51.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:51:15 -0800 (PST)
Message-ID: <5304C4E1.2070901@linaro.org>
Date: Wed, 19 Feb 2014 14:51:13 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
	<1392820736.29739.86.camel@kazak.uk.xensource.com>
In-Reply-To: <1392820736.29739.86.camel@kazak.uk.xensource.com>
Cc: "Keir \(Xen.org\)" <keir@xen.org>, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
	contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding Keir and Jan.

On 02/19/2014 02:38 PM, Ian Campbell wrote:
> On Wed, 2014-02-19 at 14:35 +0000, Julien Grall wrote:
> 
>>>> -static void gic_irq_enable(struct irq_desc *desc)
>>>> +static unsigned int gic_irq_startup(struct irq_desc *desc)
>>>
>>> unsigned? What are the error codes here going to be?
>>
>> This is the return type requested by hw_interrupt_type.startup.
>>
>> It seems that the return is never checked (even in x86 code). Maybe we
>> should change the prototype of hw_interrupt_type.startup.
> 
> Worth investigating. I wonder if someone thought this might return the
> resulting interrupt number (those are normally unsigned int I think) or
> if it actually did used to etc.

I think it was copied from Linux, which also has unsigned int. I took a
quick look at the code: this callback is only used in two places, both
of which always return 0.

Surprisingly, the wrapper irq_startup (kernel/irq/manage.c) returns
an int...

I can create a patch to return void instead of unsigned int if everyone
is happy with this solution.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 14:58:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 14:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8as-00064h-4R; Wed, 19 Feb 2014 14:57:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG8aq-00064a-T9
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 14:57:49 +0000
Received: from [85.158.143.35:38544] by server-1.bemta-4.messagelabs.com id
	5D/C7-31661-C66C4035; Wed, 19 Feb 2014 14:57:48 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392821867!6839191!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25371 invoked from network); 19 Feb 2014 14:57:47 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 14:57:47 -0000
Received: by mail-ee0-f42.google.com with SMTP id b15so283809eek.15
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 06:57:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=bCVvQPih+kpkKepkzy+TrthwP2xruacFichcIglHpeU=;
	b=JZLaTHAfgCaqniSSV4PORz4QAZ6STVQGPtjFIiutGnI9/sTQB6NBDh1rf0w2smFeia
	D6cq94FS1l0wZYR2aEb59mrVC5yZgFqkgzYuKfZO0bH+wAgxa80yvskrGBy96DXyX+aW
	31+0tIuhFXFBHRqKEkUHSxiSEkgQiBZWHH3q+Iccg94N1JtJ8McN/8EzaEXSF1Mb5J+a
	ec/FAqR0lbd+/uj9BWU1VHBwnbdkP6/XVhSWsyNo6cWovj7ZAW1OD0VUcimQFq5AAyF5
	AfKHeLhUmv98W7/r3ITBAINnd7JkTMpoqq7faTlEnnxbOgRV+NNNuxGa+p38hZtLfAf1
	wTaw==
X-Gm-Message-State: ALoCoQnPoB20sujw0fGQyUBv+bWAGJGkP3HimcA294ogunmgvuKBu2ds0VuRmJrEZL8L4BxYJp9K
X-Received: by 10.14.106.193 with SMTP id m41mr15578216eeg.62.1392821867301;
	Wed, 19 Feb 2014 06:57:47 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id o43sm1616998eef.12.2014.02.19.06.57.46
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 06:57:46 -0800 (PST)
Message-ID: <5304C669.9080607@linaro.org>
Date: Wed, 19 Feb 2014 14:57:45 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-5-git-send-email-julien.grall@linaro.org>
	<1392812924.29739.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1392812924.29739.38.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 04/12] xen/dts: Add
	dt_property_read_bool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:28 PM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
>> The function check if property exists in a specific node.
> 
>                        ^a 

I will fix it.
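
For context, the helper under review presumably mirrors Linux's
of_property_read_bool: a device-tree boolean is encoded by the mere
presence of the property. This is a minimal sketch with stand-in
structures; the real dt_device_node, dt_property and dt_find_property
live in Xen's device-tree code and differ in detail:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-ins for the sketch only. */
struct dt_property {
    const char *name;
    const struct dt_property *next;
};

struct dt_device_node {
    const struct dt_property *properties;
};

/* Walk the node's property list looking for a name match. */
static const struct dt_property *
dt_find_property(const struct dt_device_node *np, const char *name)
{
    const struct dt_property *pp;

    for ( pp = np->properties; pp != NULL; pp = pp->next )
        if ( !strcmp(pp->name, name) )
            return pp;

    return NULL;
}

/* True iff the property exists in the node. */
static bool dt_property_read_bool(const struct dt_device_node *np,
                                  const char *name)
{
    return dt_find_property(np, name) != NULL;
}
```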

>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 15:07:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 15:07:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8kG-0006Ky-MI; Wed, 19 Feb 2014 15:07:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WG8kF-0006Kt-1e
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 15:07:31 +0000
Received: from [85.158.137.68:15926] by server-5.bemta-3.messagelabs.com id
	B7/22-04712-2B8C4035; Wed, 19 Feb 2014 15:07:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392822449!2906515!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16618 invoked from network); 19 Feb 2014 15:07:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 15:07:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 19 Feb 2014 15:07:28 +0000
Message-Id: <5304D6BC020000780011DC4F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 19 Feb 2014 15:07:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Julien Grall" <julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
	<1392820736.29739.86.camel@kazak.uk.xensource.com>
	<5304C4E1.2070901@linaro.org>
In-Reply-To: <5304C4E1.2070901@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, tim@xen.org,
	"Keir \(Xen.org\)" <keir@xen.org>, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
 constraint for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 15:51, Julien Grall <julien.grall@linaro.org> wrote:
> Adding Keir and Jan.
> 
> On 02/19/2014 02:38 PM, Ian Campbell wrote:
>> On Wed, 2014-02-19 at 14:35 +0000, Julien Grall wrote:
>> 
>>>>> -static void gic_irq_enable(struct irq_desc *desc)
>>>>> +static unsigned int gic_irq_startup(struct irq_desc *desc)
>>>>
>>>> unsigned? What are the error codes here going to be?
>>>
>>> This is the return type requested by hw_interrupt_type.startup.
>>>
>>> It seems that the return is never checked (even in x86 code). Maybe we
>>> should change the prototype of hw_interrupt_type.startup.
>> 
>> Worth investigating. I wonder if someone thought this might return the
>> resulting interrupt number (those are normally unsigned int I think) or
>> if it actually used to, etc.
> 
> I think it was copied from Linux, which also uses unsigned int. I took a
> quick look at the code: this callback is only used in two places, both of
> which always return 0.
> 
> Surprisingly, the wrapper irq_startup (kernel/irq/manage.c) returns an
> int...
> 
> I can create a patch to make it return void instead of unsigned int, if
> everyone is happy with this solution.

I'd be fine with such a change; I'd like to ask though that if you
do this, you at the same time do the resulting possible cleanup:
As an example, xen/arch/x86/msi.c:startup_msi_irq() becomes
unnecessary then. It will in fact be interesting to see how many
distinct startup routines actually remain.
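
One possible shape of that cleanup, sketched under assumed names (the
struct, the NULL-fallback behaviour, and msi_enable are illustrative,
not Xen's actual code): once startup returns void, a wrapper that only
called the enable path and returned 0 can be dropped entirely if the
generic dispatcher falls back to ->enable when ->startup is NULL.

```c
#include <assert.h>
#include <stddef.h>

struct irq_desc;

/* Hypothetical trimmed-down handler with a void startup, as proposed. */
struct hw_interrupt_type_v2 {
    void (*startup)(struct irq_desc *desc);   /* may be NULL */
    void (*enable)(struct irq_desc *desc);
};

static int enabled;                           /* test instrumentation */
static void msi_enable(struct irq_desc *desc) { (void)desc; enabled = 1; }

/* Generic dispatch: with no trivial return value to synthesize, a
 * missing startup hook simply degrades to the enable hook, making
 * wrappers like startup_msi_irq unnecessary. */
static void irq_do_startup(const struct hw_interrupt_type_v2 *h,
                           struct irq_desc *desc)
{
    if ( h->startup )
        h->startup(desc);
    else
        h->enable(desc);
}
```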

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 15:08:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 15:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8lU-0006OS-9q; Wed, 19 Feb 2014 15:08:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Nate.Studer@dornerworks.com>) id 1WG8lT-0006OK-2A
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 15:08:47 +0000
Received: from [85.158.139.211:19505] by server-15.bemta-5.messagelabs.com id
	DB/E5-24395-EF8C4035; Wed, 19 Feb 2014 15:08:46 +0000
X-Env-Sender: Nate.Studer@dornerworks.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392822524!4910095!1
X-Originating-IP: [12.207.209.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7560 invoked from network); 19 Feb 2014 15:08:45 -0000
Received: from unknown (HELO mail.dornerworks.com) (12.207.209.148)
	by server-14.tower-206.messagelabs.com with SMTP;
	19 Feb 2014 15:08:45 -0000
Received: from [172.27.12.66] (172.27.12.66) by Quimby.dw.local (172.27.1.90)
	with Microsoft SMTP Server (TLS) id 14.2.247.3;
	Wed, 19 Feb 2014 10:07:05 -0500
Message-ID: <5304C89E.9010400@dornerworks.com>
Date: Wed, 19 Feb 2014 10:07:10 -0500
From: Nate Studer <nate.studer@dornerworks.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Juergen Gross <juergen.gross@ts.fujitsu.com>, <xen-devel@lists.xen.org>,
	<Ian.Jackson@eu.citrix.com>
References: <1392709117-30019-1-git-send-email-juergen.gross@ts.fujitsu.com>
In-Reply-To: <1392709117-30019-1-git-send-email-juergen.gross@ts.fujitsu.com>
X-Originating-IP: [172.27.12.66]
Subject: Re: [Xen-devel] [PATCH] xl cpupool-list: add option to list domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/18/2014 2:38 AM, Juergen Gross wrote:
> It is rather complicated to obtain the cpupool a domain lives in. Add an
> option -d (or --domains) to list all domains running in a cpupool.
> 
> Signed-off-by: Juergen Gross <juergen.gross@ts.fujitsu.com>

FWIW, this patch works as advertised.

arlx@arlx-dw-47:~$ sudo xl cpupool-list -d
Name               CPUs   Sched     Active   Domain list
Pool-0               1    credit       y     Domain-0
test                 1  arinc653       y     dom1, dom2
arlx@arlx-dw-47:~$ sudo xl cpupool-list -c
Name               CPU list
Pool-0             0
test               1
arlx@arlx-dw-47:~$ sudo xl cpupool-list -c -d
specifying both cpu- and domain-list not allowed
arlx@arlx-dw-47:~$ sudo xl cpupool-list -c test
Name               CPU list
test               1
arlx@arlx-dw-47:~$ sudo xl cpupool-list -d test
Name               CPUs   Sched     Active   Domain list
test                 1  arinc653       y     dom1, dom2
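
The grouping that produces the "Domain list" column above can be
distilled from the quoted diff into a standalone sketch. The struct
names and the helper are illustrative, not libxl's types:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Minimal stand-ins for libxl_cpupoolinfo / libxl_dominfo. */
struct pool   { int poolid; const char *name; };
struct domain { int cpupool; const char *name; };

/* For one pool, build the comma-separated names of all domains whose
 * cpupool field matches its poolid -- the same loop the patch adds. */
static void format_domain_list(const struct pool *p,
                               const struct domain *doms, int n_doms,
                               char *buf, size_t len)
{
    int n, c = 0;

    buf[0] = '\0';
    for (n = 0; n < n_doms; n++)
        if (doms[n].cpupool == p->poolid)
            snprintf(buf + strlen(buf), len - strlen(buf), "%s%s",
                     c++ ? ", " : "", doms[n].name);
}
```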

-- 
Nathan Studer

> ---
>  docs/man/xl.pod.1         |    5 ++++-
>  tools/libxl/xl_cmdimpl.c  |   47 ++++++++++++++++++++++++++++++++++++++-------
>  tools/libxl/xl_cmdtable.c |    5 +++--
>  3 files changed, 47 insertions(+), 10 deletions(-)
> 
> diff --git a/docs/man/xl.pod.1 b/docs/man/xl.pod.1
> index e7b9de2..547af6d 100644
> --- a/docs/man/xl.pod.1
> +++ b/docs/man/xl.pod.1
> @@ -1019,10 +1019,13 @@ Use the given configuration file.
>  
>  =back
>  
> -=item B<cpupool-list> [I<-c|--cpus>] [I<cpu-pool>]
> +=item B<cpupool-list> [I<-c|--cpus>] [I<-d|--domains>] [I<cpu-pool>]
>  
>  List CPU pools on the host.
>  If I<-c> is specified, B<xl> prints a list of CPUs used by I<cpu-pool>.
> +If I<-d> is specified, B<xl> prints a list of domains in I<cpu-pool> instead
> +of the domain count.
> +I<-c> and I<-d> are mutually exclusive.
>  
>  =item B<cpupool-destroy> I<cpu-pool>
>  
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index aff6f90..c7b9fce 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -6754,23 +6754,32 @@ int main_cpupoollist(int argc, char **argv)
>      int opt;
>      static struct option opts[] = {
>          {"cpus", 0, 0, 'c'},
> +        {"domains", 0, 0, 'd'},
>          COMMON_LONG_OPTS,
>          {0, 0, 0, 0}
>      };
> -    int opt_cpus = 0;
> +    int opt_cpus = 0, opt_domains = 0;
>      const char *pool = NULL;
>      libxl_cpupoolinfo *poolinfo;
> -    int n_pools, p, c, n;
> +    libxl_dominfo *dominfo = NULL;
> +    int n_pools, n_domains, p, c, n;
>      uint32_t poolid;
>      char *name;
>      int ret = 0;
>  
> -    SWITCH_FOREACH_OPT(opt, "hc", opts, "cpupool-list", 0) {
> +    SWITCH_FOREACH_OPT(opt, "hcd", opts, "cpupool-list", 0) {
>      case 'c':
>          opt_cpus = 1;
>          break;
> +    case 'd':
> +        opt_domains = 1;
> +        break;
>      }
>  
> +    if (opt_cpus && opt_domains) {
> +        fprintf(stderr, "specifying both cpu- and domain-list not allowed\n");
> +        return -ERROR_FAIL;
> +    }
>      if (optind < argc) {
>          pool = argv[optind];
>          if (libxl_name_to_cpupoolid(ctx, pool, &poolid)) {
> @@ -6784,12 +6793,21 @@ int main_cpupoollist(int argc, char **argv)
>          fprintf(stderr, "error getting cpupool info\n");
>          return -ERROR_NOMEM;
>      }
> +    if (opt_domains) {
> +        dominfo = libxl_list_domain(ctx, &n_domains);
> +        if (!dominfo) {
> +            fprintf(stderr, "error getting domain info\n");
> +            ret = -ERROR_NOMEM;
> +            goto out;
> +        }
> +    }
>  
>      printf("%-19s", "Name");
>      if (opt_cpus)
>          printf("CPU list\n");
>      else
> -        printf("CPUs   Sched     Active   Domain count\n");
> +        printf("CPUs   Sched     Active   Domain %s\n",
> +               opt_domains ? "list" : "count");
>  
>      for (p = 0; p < n_pools; p++) {
>          if (!ret && (!pool || (poolinfo[p].poolid == poolid))) {
> @@ -6808,15 +6826,30 @@ int main_cpupoollist(int argc, char **argv)
>                          n++;
>                      }
>                  if (!opt_cpus) {
> -                    printf("%3d %9s       y       %4d", n,
> -                           libxl_scheduler_to_string(poolinfo[p].sched),
> -                           poolinfo[p].n_dom);
> +                    printf("%3d %9s       y     ", n,
> +                           libxl_scheduler_to_string(poolinfo[p].sched));
> +                    if (opt_domains) {
> +                        c = 0;
> +                        for (n = 0; n < n_domains; n++) {
> +                            if (poolinfo[p].poolid == dominfo[n].cpupool) {
> +                                name = libxl_domid_to_name(ctx, dominfo[n].domid);
> +                                printf("%s%s", c ? ", " : "", name);
> +                                free(name);
> +                                c++;
> +                            }
> +                        }
> +                    }
> +                    else
> +                        printf("  %4d", poolinfo[p].n_dom);
>                  }
>                  printf("\n");
>              }
>          }
>      }
>  
> +    if (dominfo)
> +        libxl_dominfo_list_free(dominfo, n_domains);
> +out:
>      libxl_cpupoolinfo_list_free(poolinfo, n_pools);
>  
>      return ret;
> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..8a52d26 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -426,8 +426,9 @@ struct cmd_spec cmd_table[] = {
>      { "cpupool-list",
>        &main_cpupoollist, 0, 0,
>        "List CPU pools on host",
> -      "[-c|--cpus] [<CPU Pool>]",
> -      "-c, --cpus                     Output list of CPUs used by a pool"
> +      "[-c|--cpus] [-d|--domains] [<CPU Pool>]",
> +      "-c, --cpus                     Output list of CPUs used by a pool\n"
> +      "-d, --domains                  Output list of domains running in a pool"
>      },
>      { "cpupool-destroy",
>        &main_cpupooldestroy, 0, 1,
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> +                            if (poolinfo[p].poolid == dominfo[n].cpupool) {
> +                                name = libxl_domid_to_name(ctx, dominfo[n].domid);
> +                                printf("%s%s", c ? ", " : "", name);
> +                                free(name);
> +                                c++;
> +                            }
> +                        }
> +                    }
> +                    else
> +                        printf("  %4d", poolinfo[p].n_dom);
>                  }
>                  printf("\n");
>              }
>          }
>      }
>  
> +    if (dominfo)
> +        libxl_dominfo_list_free(dominfo, n_domains);
> +out:
>      libxl_cpupoolinfo_list_free(poolinfo, n_pools);
>  
>      return ret;
> diff --git a/tools/libxl/xl_cmdtable.c b/tools/libxl/xl_cmdtable.c
> index ebe0220..8a52d26 100644
> --- a/tools/libxl/xl_cmdtable.c
> +++ b/tools/libxl/xl_cmdtable.c
> @@ -426,8 +426,9 @@ struct cmd_spec cmd_table[] = {
>      { "cpupool-list",
>        &main_cpupoollist, 0, 0,
>        "List CPU pools on host",
> -      "[-c|--cpus] [<CPU Pool>]",
> -      "-c, --cpus                     Output list of CPUs used by a pool"
> +      "[-c|--cpus] [-d|--domains] [<CPU Pool>]",
> +      "-c, --cpus                     Output list of CPUs used by a pool\n"
> +      "-d, --domains                  Output list of domains running in a pool"
>      },
>      { "cpupool-destroy",
>        &main_cpupooldestroy, 0, 1,
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 15:14:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 15:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG8r0-0006hn-Dc; Wed, 19 Feb 2014 15:14:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WG8qy-0006hi-JY
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 15:14:28 +0000
Received: from [85.158.137.68:56314] by server-8.bemta-3.messagelabs.com id
	E1/65-16039-35AC4035; Wed, 19 Feb 2014 15:14:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392822860!1604786!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6179 invoked from network); 19 Feb 2014 15:14:27 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 15:14:27 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 19 Feb 2014 15:14:14 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="656829741"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.112])
	by fldsmtpi03.verizon.com with ESMTP; 19 Feb 2014 15:14:13 +0000
Message-ID: <5304CA45.7050406@terremark.com>
Date: Wed, 19 Feb 2014 10:14:13 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Questions on vpic, vlapic,
	vioapic and line 0 (aka timer)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For some reason yet to be determined (very rarely), the routine timer_irq_works() in Linux reports that the timer IRQ does not work:

..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
..MP-BIOS bug: 8254 timer not connected to IO-APIC
...trying to set up timer (IRQ0) through the 8259A ...
..... (found apic 0 pin 2) ...
....... failed.
...trying to set up timer as Virtual Wire IRQ...
..... failed.
...trying to set up timer as ExtINT IRQ...

The guest then hangs, and Xen's console spews:

vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
...



I have determined that the following lines (in Linux), which KVM has, are missing for Xen:

#ifdef CONFIG_X86_IO_APIC
         no_timer_check = 1;
#endif

The simple workaround is to specify "no_timer_check" in the kernel command-line arguments.

The reason for this email is that I have found the routine __vlapic_accept_pic_intr():

     /* We deliver 8259 interrupts to the appropriate CPU as follows. */
     return ((/* IOAPIC pin0 is unmasked and routing to this LAPIC? */
              ((redir0.fields.delivery_mode == dest_ExtINT) &&
               !redir0.fields.mask &&
               redir0.fields.dest_id == VLAPIC_ID(vlapic) &&
               !vlapic_disabled(vlapic)) ||
              /* LAPIC has LVT0 unmasked for ExtInts? */
              ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_EXTINT) ||
              /* LAPIC is fully disabled? */
              vlapic_hw_disabled(vlapic)));
}


This looks to imply that the vioapic is supposed to support "delivery mode 7" (dest_ExtINT), but handling for this case is missing (hence the message logged above).

Also, Linux and Xen both set LVT0 ("LAPIC has LVT0") to APIC_DM_FIXED for "Virtual Wire IRQ" usage, but this code only allows for APIC_DM_EXTINT.  I have been able to get "Virtual Wire IRQ" usage to work by adding:

              /* LAPIC has LVT0 unmasked for Fixed? */
              ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_FIXED) ||

It is not clear to me whether this check should be added alongside the existing one or should replace it.
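To make the suggestion concrete, here is a minimal, self-contained sketch of the LVT0 test with the Fixed case added.  The constants mirror the standard local-APIC register encoding, but lvt0_accepts_pic_intr() is a hypothetical helper written for illustration, not a function in the Xen tree:

```c
#include <assert.h>

/* Standard local-APIC LVT encoding (delivery-mode field and mask bit). */
#define APIC_MODE_MASK   0x700u
#define APIC_LVT_MASKED  0x10000u
#define APIC_DM_FIXED    0x000u
#define APIC_DM_EXTINT   0x700u

/*
 * Hypothetical helper: the LVT0 part of __vlapic_accept_pic_intr(),
 * extended to also accept Fixed delivery ("Virtual Wire IRQ").
 */
static int lvt0_accepts_pic_intr(unsigned int lvt0)
{
    unsigned int mode = lvt0 & (APIC_MODE_MASK | APIC_LVT_MASKED);

    /* LVT0 unmasked and programmed for ExtINT (the existing check)... */
    if ( mode == APIC_DM_EXTINT )
        return 1;
    /* ...or unmasked and programmed for Fixed (the proposed addition). */
    if ( mode == APIC_DM_FIXED )
        return 1;
    return 0;
}
```

A masked LVT0 (APIC_LVT_MASKED set) is rejected by both branches, which matches the intent of the original (APIC_MODE_MASK|APIC_LVT_MASKED) comparison.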

This code looks to state that:

...trying to set up timer (IRQ0) through the 8259A ...

is expected to fail.  Is this by design?  (QEMU does allow this case.)

    -Don Slutz


From xen-devel-bounces@lists.xen.org Wed Feb 19 15:43:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 15:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG9IX-00075o-8s; Wed, 19 Feb 2014 15:42:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WG9IW-00075j-Fz
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 15:42:56 +0000
Received: from [193.109.254.147:29932] by server-13.bemta-14.messagelabs.com
	id 55/11-01226-FF0D4035; Wed, 19 Feb 2014 15:42:55 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392824573!1766278!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9015 invoked from network); 19 Feb 2014 15:42:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 15:42:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="102224068"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 15:42:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 10:42:52 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WG9IR-00007V-QF; Wed, 19 Feb 2014 15:42:51 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Wed, 19 Feb 2014 15:41:27 +0000
Message-ID: <1392824487-8876-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Coverity Team <coverity@xenproject.org>
Subject: [Xen-devel] [Patch v6] coverity: Store the modelling file in the
	source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Coverity Team <coverity@xenproject.org>

---
Changes since v5:
 * Teach Coverity about errx() and libxl_ctx_{,un}lock()
 * Move to misc/coverity/model.c
---
 misc/coverity/model.c |  131 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 131 insertions(+)
 create mode 100644 misc/coverity/model.c

diff --git a/misc/coverity/model.c b/misc/coverity/model.c
new file mode 100644
index 0000000..cae5a25
--- /dev/null
+++ b/misc/coverity/model.c
@@ -0,0 +1,131 @@
+/* Coverity Scan model
+ *
+ * This is a modelling file for Coverity Scan. Modelling helps to avoid false
+ * positives.
+ *
+ * - A model file can't import any header files.
+ * - Therefore only some built-in primitives like int, char and void are
+ *   available but not NULL etc.
+ * - Modelling doesn't need full structs and typedefs. Rudimentary structs
+ *   and similar types are sufficient.
+ * - An uninitialised local pointer is not an error. It signifies that the
+ *   variable could be either NULL or have some data.
+ *
+ * Coverity Scan doesn't pick up modifications automatically. The model file
+ * must be uploaded by an admin in the analysis.
+ *
+ * The Xen Coverity Scan modelling file used the cpython modelling file as a
+ * reference to get started (suggested by Coverity Scan themselves as a good
+ * example), but all content is Xen specific.
+ *
+ * Copyright (c) 2013-2014 Citrix Systems Ltd; All Rights Reserved
+ *
+ * Based on:
+ *     http://hg.python.org/cpython/file/tip/Misc/coverity_model.c
+ * Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
+ * 2011, 2012, 2013 Python Software Foundation; All Rights Reserved
+ *
+ */
+
+/*
+ * Useful references:
+ *   https://scan.coverity.com/models
+ */
+
+/* Definitions */
+#define NULL (void *)0
+#define PAGE_SIZE 4096UL
+#define PAGE_MASK (~(PAGE_SIZE-1))
+
+#define assert(cond) /* empty */
+
+struct page_info {};
+struct pthread_mutex_t {};
+
+struct libxl__ctx
+{
+    struct pthread_mutex_t lock;
+};
+typedef struct libxl__ctx libxl_ctx;
+
+/*
+ * Xen malloc.  Behaves exactly like regular malloc(), except it also contains
+ * an alignment parameter.
+ *
+ * TODO: work out how to correctly model bad alignments as errors.
+ */
+void *_xmalloc(unsigned long size, unsigned long align)
+{
+    int has_memory;
+
+    __coverity_negative_sink__(size);
+    __coverity_negative_sink__(align);
+
+    if ( has_memory )
+        return __coverity_alloc__(size);
+    else
+        return NULL;
+}
+
+/*
+ * Xen free.  Frees a pointer allocated by _xmalloc().
+ */
+void xfree(void *va)
+{
+    __coverity_free__(va);
+}
+
+
+/*
+ * map_domain_page() takes an existing domain page and possibly maps it into
+ * the Xen pagetables, to allow for direct access.  Model this as a memory
+ * allocation of exactly 1 page.
+ *
+ * map_domain_page() never fails. (It will BUG() before returning NULL)
+ *
+ * TODO: work out how to correctly model the behaviour that this function will
+ * only ever return page aligned pointers.
+ */
+void *map_domain_page(unsigned long mfn)
+{
+    return __coverity_alloc__(PAGE_SIZE);
+}
+
+/*
+ * unmap_domain_page() will unmap a page.  Model it as a free().
+ */
+void unmap_domain_page(const void *va)
+{
+    __coverity_free__(va);
+}
+
+/*
+ * Coverity appears not to understand that errx() unconditionally exits.
+ */
+void errx(int, const char*, ...)
+{
+    __coverity_panic__();
+}
+
+/*
+ * Coverity doesn't appear to be certain that the libxl ctx->lock is recursive.
+ */
+void libxl__ctx_lock(libxl_ctx *ctx)
+{
+    __coverity_exclusive_lock_acquire__(&ctx->lock);
+}
+
+void libxl__ctx_unlock(libxl_ctx *ctx)
+{
+    __coverity_exclusive_lock_release__(&ctx->lock);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4
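As an illustration of the kind of false positive the errx() model is meant to suppress, consider this hypothetical fragment (not from the Xen tree).  Without the model, Coverity cannot tell that errx() never returns, so it believes the dereference below is reachable with a NULL pointer:

```c
#include <err.h>
#include <stdlib.h>

/*
 * Hypothetical example.  With the __coverity_panic__() model for errx(),
 * Coverity learns that execution cannot continue past the errx() call,
 * so the p[0] access is correctly seen as NULL-safe.
 */
static char first_byte(void)
{
    char *p = malloc(16);

    if ( p == NULL )
        errx(1, "allocation failed");   /* never returns */

    p[0] = 'x';                         /* only reachable when p != NULL */
    char c = p[0];
    free(p);
    return c;
}
```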




From xen-devel-bounces@lists.xen.org Wed Feb 19 15:50:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 15:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG9QA-0007II-Aq; Wed, 19 Feb 2014 15:50:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WG9Q9-0007ID-2l
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 15:50:49 +0000
Received: from [85.158.139.211:61963] by server-10.bemta-5.messagelabs.com id
	39/FE-08578-8D2D4035; Wed, 19 Feb 2014 15:50:48 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392825046!4977278!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29080 invoked from network); 19 Feb 2014 15:50:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 15:50:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="103931472"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 15:50:45 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 10:50:45 -0500
Message-ID: <5304D2D3.2050400@citrix.com>
Date: Wed, 19 Feb 2014 15:50:43 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "dev@openvswitch.org" <dev@openvswitch.org>, Ben Pfaff <blp@nicira.com>,
	Jesse Gross <jesse@nicira.com>, <pshelar@nicira.com>, Thomas Graf
	<tgraf@redhat.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] OVS Netlink zerocopy vs Xen netback zerocopy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

Currently I'm working on a patch series which reintroduces grant mapping 
into netback. We used it before the Linux Xen bits were upstreamed, but 
we had to change to grant copy because the original solution was 
fundamentally not upstreamable. The advantage would be huge, though: we 
could replace having Xen copy guest pages with simply mapping the guest 
pages into Dom0.
In parallel I'm working on a grant-mapping optimization which makes it 
possible to avoid m2p_override for grant-mapped pages. m2p_override 
causes lock contention, and we don't need it if the pages don't go to 
userspace. This should be a safe assumption: those pages stay in kernel 
space while switched by OVS, and if they end up on the local port and 
are delivered to the Dom0 IP stack, deliver_skb will call 
skb_orphan_frags, which swaps out those foreign (i.e. grant-mapped from 
the guest) pages for local copies and notifies netback through a 
callback that it can give the pages back to the guest.

After that somewhat long introduction, here comes the main question: 
OVS recently introduced Netlink zerocopy, which as I understand it 
means that Netlink messages from the kernel are not copied but mapped 
into userspace. Such a message can contain a whole packet if it hasn't 
matched any flow in the kernel, or if the flow action says so. As far 
as I saw, skb_zerocopy clones the frags from the real packet skb to 
the Netlink skb. Note that the linear buffer is local memory in the 
netback case as well: we copy the beginning of the packet (at most 128 
bytes) there, and only the pages on the frags are foreign.
I don't know the Netlink internals well enough to say how a packet is 
forwarded up in this case, but it concerns me: if the pages on the 
skb_shinfo(skb)->frags array are still the foreign ones and userspace 
wants to touch that data, we are in trouble.
If this is the scenario, I think the best fix would be to call 
skb_orphan_frags before skb_zerocopy in queue_userspace_packet, so the 
frags become local. Fortunately this is a corner case, as it shouldn't 
happen very often that the kernel sends packets bigger than 128 bytes 
up to userspace.

What do you think about the solution in the last paragraph? Or do we 
need it at all?

Regards,

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:18:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG9qE-0007yJ-Pb; Wed, 19 Feb 2014 16:17:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG9qD-0007yE-KL
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:17:45 +0000
Received: from [85.158.143.35:31346] by server-3.bemta-4.messagelabs.com id
	D5/03-11539-829D4035; Wed, 19 Feb 2014 16:17:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392826663!6873227!1
X-Originating-IP: [209.85.215.177]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9413 invoked from network); 19 Feb 2014 16:17:43 -0000
Received: from mail-ea0-f177.google.com (HELO mail-ea0-f177.google.com)
	(209.85.215.177)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:17:43 -0000
Received: by mail-ea0-f177.google.com with SMTP id h14so457913eaj.22
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:17:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=1v4MXtOeWmWjfK/aU+pI8mJ96nYkHNCkhj1bjEcb6sM=;
	b=TvzDDELDPWtWo+6/i5demOZhibjjPRkNjcnP4azwXNXzdAW9h5vINorM0BWaKV5qpo
	tmSc6TNhdyeDR7SHT3GO/MZpALkLPuCtF9CIakpGVkN8e0qAfyFHHa0Lb96aMVmkruBg
	TnpcjLl66gT1ij5kXIdMMhES2p+8/vNia9FTvBaUCT42Ur/daaycKfgZkcVrqFj93Dqg
	DseuTavV1kKbZQOvj5zrcqLmzRjuov07RZOYGUI65wIdlS7L7ByKVa7sAyYVs8UFmk1D
	q+1HKrng9ZLnZc6bJYMU5hsrmMtYoQdfWBpfhI29euYvw45drZ1bUSPAKNzdm4xCGLLU
	VUxw==
X-Gm-Message-State: ALoCoQnlIpgHuL3YfPDbk+zZW1P0gWY5HbNZVqzFwWj75l53Ck1OqQgXXG028naeWDmHjNOBhQU5
X-Received: by 10.14.39.3 with SMTP id c3mr41404810eeb.42.1392826663319;
	Wed, 19 Feb 2014 08:17:43 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id y47sm2369691eel.14.2014.02.19.08.17.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 08:17:42 -0800 (PST)
Message-ID: <5304D924.4090504@linaro.org>
Date: Wed, 19 Feb 2014 16:17:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-6-git-send-email-julien.grall@linaro.org>
	<1392813280.29739.43.camel@kazak.uk.xensource.com>
In-Reply-To: <1392813280.29739.43.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
 dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Ian,

On 02/19/2014 12:34 PM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
>> Code adapted from linux drivers/of/base.c (commit ef42c58).
> 
> On that basis I only took a cursory glance through that monster
> function.
> 
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
>> index 7c075d9..d429e60 100644
>> --- a/xen/include/xen/device_tree.h
>> +++ b/xen/include/xen/device_tree.h
>> @@ -112,6 +112,13 @@ struct dt_device_node {
>>  
>>  };
>>  
>> +#define MAX_PHANDLE_ARGS 16
>> +struct dt_phandle_args {
>> +    struct dt_device_node *np;
>> +    int args_count;
>> +    uint32_t args[MAX_PHANDLE_ARGS];
>> +};
>> +
>>  /**
>>   * IRQ line type.
>>   *
>> @@ -621,6 +628,53 @@ void dt_set_range(__be32 **cellp, const struct dt_device_node *np,
>>  void dt_get_range(const __be32 **cellp, const struct dt_device_node *np,
>>                    u64 *address, u64 *size);
>>  
>> +/**
>> + * dt_parse_phandle - Resolve a phandle property to a device_node pointer
>> + * @np: Pointer to device node holding phandle property
>> + * @phandle_name: Name of property holding a phandle value
>> + * @index: For properties holding a table of phandles, this is the index into
>> + *         the table
> 
> Otherwise it is -1 or something else?

Even with one element, the property contains a table of phandles. So,
the first index is always 0.

> 
>> + *
>> + * Returns the device_node pointer.
>> + */
>> +struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
>> +				                        const char *phandle_name,
> 
> Stray hard tabs?

Oops, I forgot to check the hard tabs on this function. I will fix it.

>> +                                        int index);
>> +
>> +/**
>> + * dt_parse_phandle_with_args() - Find a node pointed by phandle in a list
>> + * @np:	pointer to a device tree node containing a list
>> + * @list_name: property name that contains a list
>> + * @cells_name: property name that specifies phandles' arguments count
>> + * @index: index of a phandle to parse out
>> + * @out_args: optional pointer to output arguments structure (will be filled)
>> + *
>> + * This function is useful to parse lists of phandles and their arguments.
>> + * Returns 0 on success and fills out_args, on error returns appropriate
>> + * errno value.
>> + *
>> + * Example:
>> + *
>> + * phandle1: node1 {
>> + * 	#list-cells = <2>;
>> + * }
>> + *
>> + * phandle2: node2 {
>> + * 	#list-cells = <1>;
>> + * }
>> + *
>> + * node3 {
>> + * 	list = <&phandle1 1 2 &phandle2 3>;
>> + * }
>> + *
>> + * To get a device_node of the `node2' node you may call this:
>> + * dt_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);
> 
> Wow, what an exciting function!
> 
> How do I decide the correct size for out_args?

out_args is a struct dt_phandle_args, which contains the node and an
array of arguments.

The function is designed to parse one phandle at a time, so out_args is
either NULL (when the caller doesn't want the info) or a pointer to the
structure to fill.

The description of out_args seems clear to me. Is something missing?

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:23:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG9vk-0008G3-7n; Wed, 19 Feb 2014 16:23:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG9vi-0008Fw-MN
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:23:26 +0000
Received: from [85.158.139.211:55526] by server-2.bemta-5.messagelabs.com id
	0F/44-23037-D7AD4035; Wed, 19 Feb 2014 16:23:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392827004!4982228!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6177 invoked from network); 19 Feb 2014 16:23:25 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:23:25 -0000
Received: by mail-ee0-f54.google.com with SMTP id e53so346259eek.13
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:23:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=SCtU7dopfmedMj8027xfw+3Uy43Wtu3V1XfKmHAU/OA=;
	b=Ga9e9FM9nKYpRc50Wpxwb7KngzBXk8MIjyi59AQGFC0zxTdmTguPmKrLJOMWM20bkW
	7zcQ941Wkig5L16k8LkU0gJabcvcH0MrRhJO83OLv4kBPEP0H2tadJZE4K3bYXPDdyWS
	uN07DFqYnb0/Vhabpyn2CLO8Tz97/51jjj67QNQt4HE7oKIWaUpcqMn7tHNIDHXMGtFc
	3M/OEPxP5Mvp8+AIPv9GVpsZQ+d84Y4fhr6Gw5qh72RkVgKMPj2V/7au24TVAnrtEvKk
	CRITI+R7QzX+9uRL69J0DmgWMM6TEqlOrnSUvzHAS0dMVuv1u+XP0mI+Z/aAkdcl7AMS
	npPA==
X-Gm-Message-State: ALoCoQmbA5Ouo85o4xqXV/BMAh4USpj4YoFCtG0BaHGdcjBzZyrobSRJSR1R8sk/LcVM+S+c+0+d
X-Received: by 10.14.214.137 with SMTP id c9mr2647753eep.111.1392827004542;
	Wed, 19 Feb 2014 08:23:24 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm2366456eew.20.2014.02.19.08.23.22
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 08:23:23 -0800 (PST)
Message-ID: <5304DA7A.2040500@linaro.org>
Date: Wed, 19 Feb 2014 16:23:22 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-9-git-send-email-julien.grall@linaro.org>
	<1392813550.29739.46.camel@kazak.uk.xensource.com>
In-Reply-To: <1392813550.29739.46.camel@kazak.uk.xensource.com>
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 08/12] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:39 PM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
>> The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
>> functions specific to x86 and PCI.
>>
>> Split the framework into 3 distinct files:
>>     - iommu.c: contains generic functions shared between x86 and ARM
>>                (when it will be supported)
>>     - iommu_pci.c: contains specific functions for PCI passthrough
>>     - iommu_x86.c: contains specific functions for x86
>>
>> iommu_pci.c will be only compiled when PCI is supported by the architecture
>> (eg. HAS_PCI is defined).
>>
>> This patch is mostly code movement in new files.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> ---
>>  xen/drivers/passthrough/Makefile    |    6 +-
>>  xen/drivers/passthrough/iommu.c     |  473 +----------------------------------
>>  xen/drivers/passthrough/iommu_pci.c |  468 ++++++++++++++++++++++++++++++++++
>>  xen/drivers/passthrough/iommu_x86.c |   65 +++++
>>  xen/drivers/passthrough/vtd/iommu.c |   42 ++--
>>  xen/include/asm-x86/iommu.h         |   46 ++++
>>  xen/include/xen/hvm/iommu.h         |    1 +
>>  xen/include/xen/iommu.h             |   42 ++--
>>  8 files changed, 625 insertions(+), 518 deletions(-)
>>  create mode 100644 xen/drivers/passthrough/iommu_pci.c
>>  create mode 100644 xen/drivers/passthrough/iommu_x86.c
>>  create mode 100644 xen/include/asm-x86/iommu.h
>>
>> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
>> index 7c40fa5..51e0a0d 100644
>> --- a/xen/drivers/passthrough/Makefile
>> +++ b/xen/drivers/passthrough/Makefile
>> @@ -3,5 +3,7 @@ subdir-$(x86) += amd
>>  subdir-$(x86_64) += x86
>>  
>>  obj-y += iommu.o
>> -obj-y += io.o
>> -obj-y += pci.o
>> +obj-$(x86) += iommu_x86.o
>> +obj-$(HAS_PCI) += iommu_pci.o
>> +obj-$(x86) += io.o
> 
> io.[co] not mentioned in the description?

Right, I will update the commit message.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>
>> diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
>> index 7c40fa5..51e0a0d 100644
>> --- a/xen/drivers/passthrough/Makefile
>> +++ b/xen/drivers/passthrough/Makefile
>> @@ -3,5 +3,7 @@ subdir-$(x86) += amd
>>  subdir-$(x86_64) += x86
>>  
>>  obj-y += iommu.o
>> -obj-y += io.o
>> -obj-y += pci.o
>> +obj-$(x86) += iommu_x86.o
>> +obj-$(HAS_PCI) += iommu_pci.o
>> +obj-$(x86) += io.o
> 
> io.[co] not mentioned in the description?

Right, I will update the commit message.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:25:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG9xb-0008OV-Oy; Wed, 19 Feb 2014 16:25:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WG9xY-0008OI-Gh
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:25:20 +0000
Received: from [85.158.143.35:37512] by server-2.bemta-4.messagelabs.com id
	52/C3-10891-FEAD4035; Wed, 19 Feb 2014 16:25:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392827117!6866368!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25327 invoked from network); 19 Feb 2014 16:25:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:25:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="103945535"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 16:23:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 11:23:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG9vm-00034N-3F;
	Wed, 19 Feb 2014 16:23:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WG9vk-0007zl-M2;
	Wed, 19 Feb 2014 16:23:28 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21252.55935.708306.649168@mariner.uk.xensource.com>
Date: Wed, 19 Feb 2014 16:23:27 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392807567.23084.127.camel@kazak.uk.xensource.com>
References: <osstest-25126-mainreport@xen.org>
	<53048136020000780011D8FF@nat28.tlf.novell.com>
	<1392806237.23084.124.camel@kazak.uk.xensource.com>
	<1392807567.23084.127.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH OSSTEST] Apply PYTHON_PREFIX_ARG workaround to
 Xen 4.2 and earlier(Was: Re: [xen-4.2-testing test] 25126: regressions -
 FAIL)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("[PATCH OSSTEST]  Apply PYTHON_PREFIX_ARG workaround to Xen 4.2 and earlier(Was: Re: [Xen-devel] [xen-4.2-testing test] 25126: regressions - FAIL)"):
> Like this, tested only the make-flight bit and ran perl -c on
> ts-xen-build to check syntax.

After discussion, I have pushed this instead.

>From 76eeba138e8e22119f8fe0bff0e354daeb4506aa Mon Sep 17 00:00:00 2001
From: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 15:58:20 +0000
Subject: [OSSTEST PATCH] ts-xen-build: Apply python workaround in wheezy too

Debian #693721 is not yet fixed.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-xen-build |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-xen-build b/ts-xen-build
index 74d17f0..c8b27b7 100755
--- a/ts-xen-build
+++ b/ts-xen-build
@@ -88,7 +88,7 @@ END
                (nonempty($r{revision_linux}) ? <<END : '').
 	echo >>.config export $linux_rev_envvar='$r{revision_linux}'
 END
-               ($ho->{Suite} =~ m/squeeze/ ? <<END : '').
+               ($ho->{Suite} =~ m/squeeze|wheezy/ ? <<END : ''). #Debian #693721
 	echo >>.config PYTHON_PREFIX_ARG=
 END
                (nonempty($kerns) ? <<END : <<END)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:26:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WG9yF-0008TX-TY; Wed, 19 Feb 2014 16:26:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WG9yD-0008TF-Ph
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:26:02 +0000
Received: from [85.158.137.68:62098] by server-14.bemta-3.messagelabs.com id
	5D/5F-08196-81BD4035; Wed, 19 Feb 2014 16:26:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-31.messagelabs.com!1392827159!1399904!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29923 invoked from network); 19 Feb 2014 16:25:59 -0000
Received: from mail-ea0-f173.google.com (HELO mail-ea0-f173.google.com)
	(209.85.215.173)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:25:59 -0000
Received: by mail-ea0-f173.google.com with SMTP id d10so478854eaj.4
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:25:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=N64TT4OtS9IaMLtM5aOxIPquWuvDjxIkrJu24o+OT7Y=;
	b=cto6uuuB1GcYWyPT44Chs58xopcfYMbgHNzVJ8losK4oM95mTEJwsFYMsUscmwx0bQ
	7Z/T4WJ574fublc9BJhxE79zqRHz8u8CjK3rwz9Idl9vx9dDB4hu1JgLoT9W7LlgVkqM
	7/BPm+cfSqKu97yuBuMtGb6Rk/o7IgMz4lbZNC7uFCGUKNw0/eG51ZtCnLO+pia0Z4pW
	aqiVrtrfQrD9Fn5u4LGg7puDBkpRKJvX62ZPKcVOnUXGOuFNcg+mtEzcz3oPGnBRGn9O
	1HHi0DN4fEoteeGlARw6oI4cgsa59q30L9BwUX2Pj9XjL0bG8TBkG1hxeq5m4FLKhj4V
	nPWg==
X-Gm-Message-State: ALoCoQlz7uvKifHOvyly7wNuBl7eZ3Dvo8nECDKi4XaJjBuu5c1N66BCGeZhvsrDj//YRY/wJ0U+
X-Received: by 10.14.179.129 with SMTP id h1mr41709008eem.26.1392827159076;
	Wed, 19 Feb 2014 08:25:59 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id s46sm2507128eeb.0.2014.02.19.08.25.56
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 08:25:56 -0800 (PST)
Message-ID: <5304DB13.3070605@linaro.org>
Date: Wed, 19 Feb 2014 16:25:55 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-10-git-send-email-julien.grall@linaro.org>
	<1392813617.29739.47.camel@kazak.uk.xensource.com>
In-Reply-To: <1392813617.29739.47.camel@kazak.uk.xensource.com>
Cc: Keir Fraser <keir@xen.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	patches@linaro.org, Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 09/12] xen/passthrough: iommu:
 Introduce arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:40 PM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
>> Currently the structure hvm_iommu (xen/include/xen/hvm/iommu.h) contains
>> x86 specific fields.
>>
>> This patch creates:
>>     - arch_hvm_iommu structure which will contain architecture depend
>>     fields
>>     - arch_iommu_domain_{init,destroy} function to execute arch
>>     specific during domain creation/destruction
>>
>> Also move iommu_use_hap_pt and domain_hvm_iommu in asm-x86/iommu.h.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Cc: Keir Fraser <keir@xen.org>
>> Cc: Jan Beulich <jbeulich@suse.com>
>> Cc: Joseph Cihula <joseph.cihula@intel.com>
>> Cc: Gang Wei <gang.wei@intel.com>
>> Cc: Shane Wang <shane.wang@intel.com>
>> Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> 
> Mostly mechanical I guess?

Yes. Mostly replacing hd->field with hd->arch.field.

> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:28:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGA0D-0000EC-LF; Wed, 19 Feb 2014 16:28:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGA0A-0000E0-9p
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:28:04 +0000
Received: from [193.109.254.147:38074] by server-12.bemta-14.messagelabs.com
	id FD/EA-17220-19BD4035; Wed, 19 Feb 2014 16:28:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392827278!5487561!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5593 invoked from network); 19 Feb 2014 16:27:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:27:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="102242944"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 16:26:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 11:26:27 -0500
Message-ID: <1392827186.29739.92.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 16:26:26 +0000
In-Reply-To: <5304D924.4090504@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-6-git-send-email-julien.grall@linaro.org>
	<1392813280.29739.43.camel@kazak.uk.xensource.com>
	<5304D924.4090504@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
 dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 16:17 +0000, Julien Grall wrote:
> >> +/**
> >> + * dt_parse_phandle - Resolve a phandle property to a device_node pointer
> >> + * @np: Pointer to device node holding phandle property
> >> + * @phandle_name: Name of property holding a phandle value
> >> + * @index: For properties holding a table of phandles, this is the index into
> >> + *         the table
> > 
> > Otherwise it is -1 or something else?
> 
> Even with one element, the property contains a table of phandles. So,
> the first index is always 0.

OK, so the "For properties holding a table..." bit is misleading
(that's the "otherwise" I was referring to, e.g. properties which do
not hold a table), since this is always the index, even if the table
has a single element.

> > How do I decide the correct size for out_args?
> 
> out_args is a structure dt_phandle_args which contains the node and an
> array of arguments.
> 
> The function is designed to parse one phandle by one phandle. So
> out_args is either NULL (we don't want to get the info) or contains a
> pointer to the structure to fill.
> 
> The description of out_args seems clear to me. Does it miss something?

No, I didn't realise that the array was within dt_phandle_args rather
than out_args itself being a pointer to an array (the plural name is a
bit confusing in that regard).

I see the name comes from Linux, so no need to change.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:28:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGA0D-0000EC-LF; Wed, 19 Feb 2014 16:28:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGA0A-0000E0-9p
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:28:04 +0000
Received: from [193.109.254.147:38074] by server-12.bemta-14.messagelabs.com
	id FD/EA-17220-19BD4035; Wed, 19 Feb 2014 16:28:01 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1392827278!5487561!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5593 invoked from network); 19 Feb 2014 16:27:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:27:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="102242944"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 16:26:28 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 11:26:27 -0500
Message-ID: <1392827186.29739.92.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 16:26:26 +0000
In-Reply-To: <5304D924.4090504@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-6-git-send-email-julien.grall@linaro.org>
	<1392813280.29739.43.camel@kazak.uk.xensource.com>
	<5304D924.4090504@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
 dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 16:17 +0000, Julien Grall wrote:
> >> +/**
> >> + * dt_parse_phandle - Resolve a phandle property to a device_node pointer
> >> + * @np: Pointer to device node holding phandle property
> >> + * @phandle_name: Name of property holding a phandle value
> >> + * @index: For properties holding a table of phandles, this is the index into
> >> + *         the table
> > 
> > Otherwise it is -1 or something else?
> 
> Even with one element, the property contains a table of phandles. So,
> the first index is always 0.

OK, so the "For properties holding a table..." bit is misleading
(that's the "otherwise" I was referring to, e.g. properties which do
not hold a table), since this is always the index, even if the table
has a single element.

> > How do I decide the correct size for out_args?
> 
> out_args is a structure dt_phandle_args which contains the node and an
> array of arguments.
> 
> The function is designed to parse one phandle by one phandle. So
> out_args is either NULL (we don't want to get the info) or contains a
> pointer to the structure to fill.
> 
> The description of out_args seems clear to me. Is it missing something?

No, I didn't realise that the array was within dt_phandle_args rather
than out_args itself being a pointer to an array (the plural name is a
bit confusing in that regard).

I see the name comes from Linux, so no need to change.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:42:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAE9-0000jL-0o; Wed, 19 Feb 2014 16:42:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WGAE7-0000jG-OR
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:42:28 +0000
Received: from [85.158.139.211:63686] by server-6.bemta-5.messagelabs.com id
	7C/CB-14342-3FED4035; Wed, 19 Feb 2014 16:42:27 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392828145!4934731!1
X-Originating-IP: [199.249.25.210]
X-SpamReason: No, hits=2.2 required=7.0 tests=SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4879 invoked from network); 19 Feb 2014 16:42:26 -0000
Received: from omzsmtpe01.verizonbusiness.com (HELO
	omzsmtpe01.verizonbusiness.com) (199.249.25.210)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 16:42:26 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe01.verizonbusiness.com with ESMTP; 19 Feb 2014 16:42:24 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="655842739"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by fldsmtpi02.verizon.com with ESMTP; 19 Feb 2014 16:42:23 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Wed, 19 Feb 2014 11:40:37 -0500
Message-ID: <5304DE84.6090604@terremark.com>
Date: Wed, 19 Feb 2014 11:40:36 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <raistlin@linux.it>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1392715101.32038.466.camel@Solace>
In-Reply-To: <1392715101.32038.466.camel@Solace>
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	cl-mirage <cl-mirage@lists.cam.ac.uk>, xen <xen@lists.fedoraproject.org>
Subject: Re: [Xen-devel] Today is Xen Project Test Day for 4.4 RC4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/18/14 04:18, Dario Faggioli wrote:
> This is a reminder that today is the Xen Project Test Day for Xen 4.4
> RC4.

Did a few simple tests on both Fedora 17 and CentOS 5.10 and found no new
issues (all issues found are existing bugs at
http://bugs.xenproject.org/xen/).

Looking good.

    -Don Slutz

> General Information about Test Days can be found here:
> http://wiki.xenproject.org/wiki/Xen_Test_Days
>
> and specific instructions for this Test Day are located here:
> http://wiki.xenproject.org/wiki/Xen_4.4_RC4_test_instructions
>
> Developers: please consider monitoring the Freenode IRC channel
> #xentest today to make sure that people are able to build and test the
> code.
>
> Hope to see you today on #xentest!
>
> Dario
>   
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:45:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:45:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAHE-0000rR-S9; Wed, 19 Feb 2014 16:45:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WGAHD-0000rK-GC
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:45:39 +0000
Received: from [193.109.254.147:34649] by server-15.bemta-14.messagelabs.com
	id 07/55-10839-2BFD4035; Wed, 19 Feb 2014 16:45:38 +0000
X-Env-Sender: dcbw@redhat.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392828337!1716309!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10961 invoked from network); 19 Feb 2014 16:45:37 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-4.tower-27.messagelabs.com with SMTP;
	19 Feb 2014 16:45:37 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1JGjV1v017847
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 11:45:32 -0500
Received: from [10.3.236.116] (vpn-236-116.phx2.redhat.com [10.3.236.116])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1JGi3ZX007076; Wed, 19 Feb 2014 11:44:04 -0500
Message-ID: <1392828325.21976.6.camel@dcbw.local>
From: Dan Williams <dcbw@redhat.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 10:45:25 -0600
In-Reply-To: <CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, xen-devel@lists.xenproject.org,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, "David S.
	Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-18 at 13:19 -0800, Luis R. Rodriguez wrote:
> On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams <dcbw@redhat.com> wrote:
> > On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> >>
> >> Some interfaces do not need to have any IPv4 or IPv6
> >> addresses, so enable an option to specify this. One
> >> example where this is observed are virtualization
> >> backend interfaces which just use the net_device
> >> constructs to help with their respective frontends.
> >>
> >> This should optimize boot time and complexity on
> >> virtualization environments for each backend interface
> >> while also avoiding triggering SLAAC and DAD, which is
> >> simply pointless for these type of interfaces.
> >
> > Would it not be better/cleaner to use disable_ipv6 and then add a
> > disable_ipv4 sysctl, then use those with that interface?
> 
> Sure, but note that both the disable_ipv6 and accept_dad sysctl
> parameters are global. ipv4 and ipv6 interfaces are created upon
> NETDEVICE_REGISTER, which is triggered when a driver calls
> register_netdev(). The goal of this patch was to enable an early
> optimization for drivers that never need ipv4 or ipv6
> interfaces.

Each interface gets override sysctls too though, eg:

/proc/sys/net/ipv6/conf/enp0s25/disable_ipv6

which is the one I meant; you're obviously right that the global ones
aren't what you want here.  But the specific ones should be suitable?
If you set that on a per-interface basis, then you'll get EPERM or
something whenever you try to add IPv6 addresses or do IPv6 routing.
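
For reference, the per-interface knob looks like this as an admin
fragment (the interface name "vif1.0" is only illustrative, and the
write must happen before the interface is brought up):

```shell
# Per-interface IPv6 disable; the global equivalents live under
# .../conf/all and .../conf/default.  Using the /proc path avoids
# sysctl-name ambiguity for interface names containing dots.
echo 1 > /proc/sys/net/ipv6/conf/vif1.0/disable_ipv6
```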

> Zoltan has noted some use cases for IPv4 or IPv6 addresses on
> backends, though, so this is no longer applicable as a
> requirement. The ipv4 sysctl, however, still seems like a reasonable
> approach to enable network optimizations in topologies where
> it's known we won't need them, but we'd need to consider a much more
> granular solution, not just global as it is now for disable_ipv6, and
> we'd also have to figure out a clean way to do this so as not to incur
> the cost of early interface address addition upon register_netdev().
> 
> Given that we have a use case for ipv4 and ipv6 addresses on
> xen-netback we no longer have an immediate use case for such early
> optimization primitives though, so I'll drop this.
> 
> > The IFF_SKIP_IP seems to duplicate at least part of what disable_ipv6 is
> > already doing.
> 
> disable_ipv6 is global, the goal was to make this granular and skip
> the cost upon early boot, but it's been clarified we don't need this.

Like Stephen says, you need to make sure you set them before IFF_UP, but
beyond that, wouldn't the interface-specific sysctls work?

Dan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:46:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:46:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAHt-0000uW-BT; Wed, 19 Feb 2014 16:46:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAHr-0000uC-Cq
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:46:19 +0000
Received: from [85.158.143.35:36000] by server-2.bemta-4.messagelabs.com id
	99/F2-10891-9DFD4035; Wed, 19 Feb 2014 16:46:17 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392828376!6872001!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22440 invoked from network); 19 Feb 2014 16:46:16 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:46:16 -0000
Received: by mail-lb0-f177.google.com with SMTP id 10so472119lbg.36
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:46:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=bf84TwENRT73uNR5foytN/JzPTBCQoX5paVvcSIZpHs=;
	b=QNdKiTJ7l8+5BaDx/EfLz4V4ClYRjlxa6Prfl2J0ZIObEpeNDWgdLVgEvFN1ThVbxB
	DXlTNu0/cDDbqWUJAgIyb1dey+xHXznjOZaCP0pJns2+sRVStNEE+Zbv6dxKD5d/+M0u
	64LDKmrf3tSgxV0lk8l/L+WsX354iuCHB9tOXIYFFR0eqh+zwC9LpX0W8UUzCf1YCbVT
	hH3yli9cBLk3js1KfeD0dwl5osV8zNfTNAfp5uOrdPfbhUCTOQu5v5kPzVJYNg2Q4An7
	D/pdRjSBxV+3Wztm3M9w0VvSwB8W6/nGMLAU6DVRSs35otdv7wsxSSmO3W28X1HqhOgv
	HCCg==
X-Received: by 10.152.27.193 with SMTP id v1mr26899612lag.4.1392828375668;
	Wed, 19 Feb 2014 08:46:15 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 08:45:55 -0800 (PST)
In-Reply-To: <53024C58.4010900@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<53024C58.4010900@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 08:45:55 -0800
X-Google-Sender-Auth: nVz7ZkgNI_FPFikfUmhYK_ZNeNk
Message-ID: <CAB=NE6XYjOd2vRpQCZOG-S5ZW4xjam+FOPAYzribNQpb50Q5pg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 9:52 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>>
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> It doesn't make sense for some interfaces to become a root bridge
>> at any point in time. One example is virtual backend interfaces
>> which rely on other entities on the bridge for actual physical
>> connectivity. They only provide virtual access.
>
> It is possible that a guest bridges together two VIFs, either from the
> same Dom0 bridge or from different ones. In that case using STP on
> VIFs sounds sensible to me.

You seem to describe a case whereby it can make sense for xen-netback
interfaces to end up becoming the root port of a bridge. Can you
elaborate a little more on that, as the use case was unclear?

Additionally, if such cases exist then under the current upstream
implementation one would simply need to change the MAC address in
order to enable a vif to become the root port.  Stephen noted there is
a way to avoid nominating an interface for root port through the
root block flag. We should use that instead of the MAC address hacks.
Let's keep in mind that part of the motivation for this series is to
avoid a duplicate IPv6 address left in place by use cases whereby the
MAC address of the backend vif was left static. The use case you are
explaining likely describes the more prevalent case where address
conflicts can occur, perhaps when administrators forgot to change the
backend MAC address. If we embrace a random MAC address we'd avoid
that issue, but we'd need to update userspace to use the root
block flag on topologies where desired.
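
For the record, the root block flag is exposed by the kernel per bridge
port; a sketch of how userspace might set it (the bridge and port names
"xenbr0"/"vif1.0" are illustrative):

```shell
# Mark the port ineligible for root-port selection via sysfs:
echo 1 > /sys/class/net/xenbr0/brif/vif1.0/root_block
# Recent iproute2 exposes the same knob directly:
bridge link set dev vif1.0 root_block on
```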

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:46:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:46:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAHt-0000uW-BT; Wed, 19 Feb 2014 16:46:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAHr-0000uC-Cq
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:46:19 +0000
Received: from [85.158.143.35:36000] by server-2.bemta-4.messagelabs.com id
	99/F2-10891-9DFD4035; Wed, 19 Feb 2014 16:46:17 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392828376!6872001!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22440 invoked from network); 19 Feb 2014 16:46:16 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:46:16 -0000
Received: by mail-lb0-f177.google.com with SMTP id 10so472119lbg.36
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:46:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=bf84TwENRT73uNR5foytN/JzPTBCQoX5paVvcSIZpHs=;
	b=QNdKiTJ7l8+5BaDx/EfLz4V4ClYRjlxa6Prfl2J0ZIObEpeNDWgdLVgEvFN1ThVbxB
	DXlTNu0/cDDbqWUJAgIyb1dey+xHXznjOZaCP0pJns2+sRVStNEE+Zbv6dxKD5d/+M0u
	64LDKmrf3tSgxV0lk8l/L+WsX354iuCHB9tOXIYFFR0eqh+zwC9LpX0W8UUzCf1YCbVT
	hH3yli9cBLk3js1KfeD0dwl5osV8zNfTNAfp5uOrdPfbhUCTOQu5v5kPzVJYNg2Q4An7
	D/pdRjSBxV+3Wztm3M9w0VvSwB8W6/nGMLAU6DVRSs35otdv7wsxSSmO3W28X1HqhOgv
	HCCg==
X-Received: by 10.152.27.193 with SMTP id v1mr26899612lag.4.1392828375668;
	Wed, 19 Feb 2014 08:46:15 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 08:45:55 -0800 (PST)
In-Reply-To: <53024C58.4010900@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<53024C58.4010900@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 08:45:55 -0800
X-Google-Sender-Auth: nVz7ZkgNI_FPFikfUmhYK_ZNeNk
Message-ID: <CAB=NE6XYjOd2vRpQCZOG-S5ZW4xjam+FOPAYzribNQpb50Q5pg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 17, 2014 at 9:52 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>>
>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>
>> It doesn't make sense for some interfaces to become a root bridge
>> at any point in time. One example is virtual backend interfaces
>> which rely on other entities on the bridge for actual physical
>> connectivity. They only provide virtual access.
>
> It is possible that a guest bridge together to VIF, either from the same
> Dom0 bridge or from different ones. In that case using STP on VIFs sound
> sensible to me.

You seem to describe a case whereby it can make sense for xen-netback
interfaces to end up becoming the root port of a bridge. Can you
elaborate a little more on that, as the use case was unclear to me.

Additionally, if such cases exist then under the current upstream
implementation one would simply need to change the MAC address in
order to enable a vif to become the root port.  Stephen noted there is
a way to prevent an interface from being nominated as root port through
the root block flag. We should use that instead of the MAC address hacks.
Let's keep in mind that part of the motivation for this series is to
avoid a duplicate IPv6 address left in place by use cases whereby the
MAC address of the backend vif was left static. The use case you are
explaining likely describes the more prevalent one where address
conflicts can occur, perhaps when administrators forgot to change the
backend MAC address. If we embrace a random MAC address we'd avoid
that issue, but we'd need to update userspace to use the root block
flag on topologies where desired.
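For reference, the root block flag can be toggled per bridge port either
with iproute2's bridge tool or via the sysfs knob the bridge module
exposes; a sketch, where "vif1.0" and "xenbr0" are hypothetical interface
names:

```shell
# Prevent vif1.0 from being nominated as root port of its bridge:
bridge link set dev vif1.0 root_block on

# Equivalent sysfs knob (bridge name and port name are hypothetical):
echo 1 > /sys/class/net/xenbr0/brif/vif1.0/root_block
```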

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:50:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:50:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGALe-0001Cx-Fm; Wed, 19 Feb 2014 16:50:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGALd-0001Cr-1D
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:50:13 +0000
Received: from [85.158.143.35:7182] by server-1.bemta-4.messagelabs.com id
	0C/E6-31661-4C0E4035; Wed, 19 Feb 2014 16:50:12 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392828610!6866697!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12839 invoked from network); 19 Feb 2014 16:50:10 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:50:10 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so498099eak.1
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:50:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=AeM2iXX4qLXIpFvjuyx1tzx6Q1JLu31Np96WTTHDmHA=;
	b=jQMkoH+p2w0PfJHzYlp4lKXjuwp+AENXKALl6Tga9myZ7quulcIOSMVZxPggyxfI5S
	ynuvSI8ai3D/rHU6h85s9X31iOISappgPalUuctDoSHKJpfym+mOJ86RajhXeJudU/xq
	11V0Z6KrEhRBVgpk2AklrM5noBFkIBUWazBOe64EYC92HQWdVh+uC+SCVXcSRpdddxMK
	3X4FzBA5Z2k3fSJOcnPbjqXRA4sN0A0+fuitpc9TT7TLyM/6r2mUFmxItFP0U0VhhMYr
	C3WMZTq+qou+fXgrrC80gIV70cW2QPyCjx97sAo9NCmJ3tssH/smsRFiFvaYdV7DjDho
	vekw==
X-Gm-Message-State: ALoCoQnReqg6FouqkhzBoH3ob+gORDOeTVZL8VpG9c+VG1iIDctoajysK5RuglcC1Domtt/0+eL+
X-Received: by 10.15.99.201 with SMTP id bl49mr16573263eeb.53.1392828610560;
	Wed, 19 Feb 2014 08:50:10 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id y47sm2619835eel.14.2014.02.19.08.50.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 08:50:09 -0800 (PST)
Message-ID: <5304E0BF.7010300@linaro.org>
Date: Wed, 19 Feb 2014 16:50:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-11-git-send-email-julien.grall@linaro.org>
	<1392814144.29739.55.camel@kazak.uk.xensource.com>
In-Reply-To: <1392814144.29739.55.camel@kazak.uk.xensource.com>
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 10/12] xen/passthrough: Introduce
	IOMMU ARM architure
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:49 PM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
>> This patch contains the architecture to use IOMMU on ARM. There is no
> 
> s/IOMMU/IOMMUs/
> 
>> IOMMU drivers on this patch.
>>
>> The code will run through the device tree and will initialize every IOMMUs.
> 
> s/IOMMUs/IOMMU/
> 
>> It's possible to have multiple IOMMUs on the same platform, but they should
> 
> s/should/must/ I think for now?

Right.

>> be handled with the same drivers.
> 
> s/drivers/driver/
> 
>> For now, there is no support for using multiple iommu drivers at runtime.
> 
> But this is in principle possible in the future, right? (I shudder to
> think what the designers of an SoC with multiple different SMMUs on it
> would have to be smoking...)

In theory, yes. I will wait until someone wants to support such a
platform on Xen before implementing it :).

> 
> You should mention somewhere that on ARM these are called SMMUs not
> IOMMUs.

I think SMMU is the name used for ARM's IOMMU. Samsung and TI don't use
the term SMMU.

> 
>> Each new IOMMU drivers should contain:
>>
>> static const char * const myiommu_dt_compat[] __initcontst =
> 
> "__initconst".

Oops. Will fix it.

>> {
>>     /* list of device compatible with the drivers. Will be matched with
>>      * the "compatible" property on the device tree
>>      */
>>     NULL,
>> };
>>
>> DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
>>         .compatible = myiommu_compatible,
>>         .init = myiommu_init,
>> DT_DEVICE_END
> 
> This is the same as for any other driver, right?

Yes, the only change is we need to specify DEVICE_IOMMU in DT_DEVICE_START.
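To make the registration pattern above concrete (with the __initconst
typo fixed and the identifier names made consistent), here is a
self-contained sketch. The real DT_DEVICE_START/DT_DEVICE_END macros emit
an entry into a dedicated linker section; the struct, the DEVICE_IOMMU
constant, and the "myiommu" driver name below are stand-ins for
illustration, not Xen's actual definitions:

```c
#include <string.h>

/* Stand-in for the descriptor that Xen's DT_DEVICE_START/DT_DEVICE_END
 * macros would generate for a device tree driver. */
struct dt_device_desc {
    const char *name;
    int class_type;                  /* e.g. DEVICE_IOMMU */
    const char *const *compatible;   /* NULL-terminated list */
};

#define DEVICE_IOMMU 1

/* List of "compatible" strings this driver claims; matched against the
 * "compatible" property of device tree nodes. Driver name hypothetical. */
static const char *const myiommu_dt_compat[] = {
    "vendor,my-iommu",
    NULL,
};

const struct dt_device_desc myiommu_desc = {
    .name = "MY IOMMU",
    .class_type = DEVICE_IOMMU,
    .compatible = myiommu_dt_compat,
};

/* Minimal matcher: does the driver claim the given compatible string?
 * This mirrors what the framework does when walking the device tree. */
int desc_matches(const struct dt_device_desc *d, const char *compat)
{
    const char *const *c;
    for ( c = d->compatible; *c != NULL; c++ )
        if ( strcmp(*c, compat) == 0 )
            return 1;
    return 0;
}
```

The only IOMMU-specific part is the DEVICE_IOMMU class in the descriptor;
the compatible-list matching is the same as for any other device tree
driver.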

> 
>> @@ -568,6 +571,10 @@ fail:
>>  
>>  void arch_domain_destroy(struct domain *d)
>>  {
>> +    /* IOMMU page table is shared with P2M, always call
>> +     * iommu_domain_destroy() before p2m_teardown().
> 
> It would be worth adding some commentary on the design we are using
> (decisions such as whether to share PTs with the stage 2 MMU) in the
> commit message as well.

I will do.

> I suppose this requirement puts constraints on the SMMU hardware we can
> support. I think it is fine to live with that until someone shows up
> with an SMMU with pagetables that are incompatible with the stage 2
> paging ones.

Adding support for such hardware should not require too much code.

>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 1f6d713..5a687d1 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -725,6 +725,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>      local_irq_enable();
>>      local_abort_enable();
>>  
>> +    iommu_setup(); /* setup iommu if available */
> 
> Comment is a bit redundant ;-)

Copied from x86/setup.c. I will remove it.

> 
>> diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
>> new file mode 100644
>> index 0000000..7cf36cd
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/arm/iommu.c
>> @@ -0,0 +1,65 @@
>> +/*
>> + * xen/drivers/passthrough/arm/iommu.c
>> + *
>> + * Generic IOMMU framework via the device tree
>> + *
>> + * Julien Grall <julien.grall@linaro.org>
>> + * Copyright (c) 2013 Linaro Limited.
> 
> It's not 2013 any more...

I started the implementation in 2013 and forgot to update it.

> 
>> diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
>> new file mode 100644
>> index 0000000..461c8cf
>> --- /dev/null
>> +++ b/xen/include/asm-arm/hvm/iommu.h
>> @@ -0,0 +1,10 @@
>> +#ifndef __ASM_ARM_HVM_IOMMU_H_
>> +#define __ASM_ARM_HVM_IOMMU_H_
>> +
>> +struct arch_hvm_iommu
>> +{
>> +    /* Private information for the IOMMU drivers */
>> +    void *priv;
>> +};
>> +
>> +#endif /* __ASM_ARM_HVM_IOMMU_H_ */
> 
> Emacs magic block please.

I will do.

> That was a surprisingly small and uncontroversial patch. Well done.

The IOMMU framework in Xen was well designed.

cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:52:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGANx-0001LR-Ei; Wed, 19 Feb 2014 16:52:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGANv-0001LI-Im
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:52:35 +0000
Received: from [193.109.254.147:7127] by server-3.bemta-14.messagelabs.com id
	D8/5B-00432-251E4035; Wed, 19 Feb 2014 16:52:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392828754!5457271!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20295 invoked from network); 19 Feb 2014 16:52:34 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:52:34 -0000
Received: by mail-ea0-f179.google.com with SMTP id q10so495651ead.10
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 08:52:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=q9XLRlkWAqKer16Wk9tFNXBER7F0mGOK8dgxth/ofRY=;
	b=QjT/al48bKVDU7Yk/OTKygMr1LamjMcwBFMGOtQVVksGuPdwTBWmC2RHgrOTng9Cqz
	mTtL0XniBgSwu27g9aOjMXMQ4wBT5EPfDaLDON93aR9WEdE6VNmoD1iitNhFxeM9/1aI
	sGtKfNaOY1anqehxGYtsEOoPPKmybr6p6pslhwJdEnBqkpJxTYAIpJ60FyOzR4g6hQrf
	G8qQIL+fmHU+RTTtZjWcLfBNTRjEZTqrQOfmBSUpZYxtu3P4pqQ3NP2SlS4ZT5dz1/Pf
	GdTmPmiloo96Marktnj+xtfHV3hTQV9H0tqMFs+6Ynm+RLDiDpGXXyIYD9r76fJLDz4J
	6mPg==
X-Gm-Message-State: ALoCoQnTZ/zjeUwdgMafv4l9aSkjdvTuSce9/6hpjn9qrxnKbIYlmgb0aOqL9P/c0zOxuSwBbcrQ
X-Received: by 10.15.90.203 with SMTP id q51mr41650825eez.6.1392828753623;
	Wed, 19 Feb 2014 08:52:33 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 43sm920220eeh.13.2014.02.19.08.52.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 08:52:32 -0800 (PST)
Message-ID: <5304E14F.5090507@linaro.org>
Date: Wed, 19 Feb 2014 16:52:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-6-git-send-email-julien.grall@linaro.org>
	<1392813280.29739.43.camel@kazak.uk.xensource.com>
	<5304D924.4090504@linaro.org>
	<1392827186.29739.92.camel@kazak.uk.xensource.com>
In-Reply-To: <1392827186.29739.92.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
 dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 04:26 PM, Ian Campbell wrote:
> On Wed, 2014-02-19 at 16:17 +0000, Julien Grall wrote:
>>>> +/**
>>>> + * dt_parse_phandle - Resolve a phandle property to a device_node pointer
>>>> + * @np: Pointer to device node holding phandle property
>>>> + * @phandle_name: Name of property holding a phandle value
>>>> + * @index: For properties holding a table of phandles, this is the index into
>>>> + *         the table
>>>
>>> Otherwise it is -1 or something else?
>>
>> Even with one element, the property contains a table of phandles. So,
>> the first index is always 0.
> 
> OK, so the "For properties holding a table..." bit is misleading
> (that's the "otherwise" I was referring to, e.g. properties which do
> not hold a table), since this is always the index, even if the table
> has a single element.

It was copied from Linux. Shall I update the comment?
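The indexing point above can be sketched as follows. This is a stand-in
for dt_parse_phandle, not Xen's actual API (the real function resolves
the cell to a struct dt_device_node pointer): the property is always a
flat array of 32-bit phandle cells, so index 0 is valid even when it
references only a single node.

```c
#include <stdint.h>
#include <stddef.h>

/* Return the raw phandle cell at `index`, or -1 if out of range.
 * Even a property referencing a single node is a one-element table,
 * so the first (and possibly only) entry is always at index 0. */
int32_t phandle_at(const uint32_t *prop, size_t ncells, int index)
{
    if ( index < 0 || (size_t)index >= ncells )
        return -1;
    return (int32_t)prop[index];
}
```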

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 16:55:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAQk-0001V6-KJ; Wed, 19 Feb 2014 16:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQi-0001Ub-GM; Wed, 19 Feb 2014 16:55:28 +0000
Received: from [85.158.137.68:59489] by server-9.bemta-3.messagelabs.com id
	D6/B7-10184-FF1E4035; Wed, 19 Feb 2014 16:55:27 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392828925!1377595!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20088 invoked from network); 19 Feb 2014 16:55:26 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-2.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	19 Feb 2014 16:55:26 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQa-00078q-MA; Wed, 19 Feb 2014 16:55:20 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQa-0003OO-4k; Wed, 19 Feb 2014 16:55:20 +0000
Date: Wed, 19 Feb 2014 16:55:20 +0000
Message-Id: <E1WGAQa-0003OO-4k@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 60 (CVE-2013-2212) - Excessive
 time to disable caching with HVM guests with PCI passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

             Xen Security Advisory CVE-2013-2212 / XSA-60
                             version 6

   Excessive time to disable caching with HVM guests with PCI passthrough

UPDATES IN VERSION 6
====================

Since the issue of this advisory, various fixes have been applied to
the public Xen trees.

ISSUE DESCRIPTION
=================

HVM guests are able to manipulate their physical address space such that
processing a subsequent request by that guest to disable caches takes an
extended amount of time changing the cachability of the memory pages assigned
to this guest. This applies only when the guest has been granted access to
some memory mapped I/O region (typically by way of assigning a passthrough
PCI device).

This can cause the CPU which processes the request to become unavailable,
possibly causing the hypervisor or a guest kernel (including the domain 0 one)
to halt itself ("panic").

IMPACT
======

A malicious domain, given access to a device with memory mapped I/O
regions, can cause the host to become unresponsive for a period of
time, potentially leading to a DoS affecting the whole system.

VULNERABLE SYSTEMS
==================

Xen version 3.3 onwards is vulnerable.

Only systems using the Intel variant of Hardware Assisted Paging (aka EPT) are
vulnerable.

MITIGATION
==========

This issue can be avoided by not assigning PCI devices to untrusted guests, or
by running HVM guests with shadow mode paging (through adding "hap=0" to the
domain configuration file).
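A minimal sketch of the corresponding xl/xm domain configuration change;
all settings other than "hap" are illustrative:

```
# /etc/xen/guest.cfg (guest name and builder illustrative)
name    = "guest"
builder = "hvm"
# Run this HVM guest with shadow mode paging instead of hardware
# assisted paging (EPT), avoiding this issue:
hap     = 0
```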

CREDITS
=======

Zhenzhong Duan found the issue as a bug; on examination by the
Xenproject.org Security Team it turned out to be a security problem.

RESOLUTION
==========

This issue has been fixed in the public xen.git trees.

For xen-unstable (#staging, #master), in these git commits:
  c13b0d65ddedd745 VMX: disable EPT when !cpu_has_vmx_pat
  1c84d046735102e0 VMX: remove the problematic set_uc_mode logic
  62652c00efa55fb4 VMX: fix cr0.cd handling
  86d60e855fe118df VMX: flush cache when vmentry back to UC guest
  f1c9658d6802c433 Revert "VMX: flush cache when vmentry back to UC guest"
(Earliest commit is listed first.  Note that f1c9658d reverts
not only 86d60e85 but also part of 62652c00.)

For Xen 4.2 (#staging-4.2, #stable-4.2):
  f1e0df14412c VMX: disable EPT when !cpu_has_vmx_pat
  644e6c5c7106 VMX: remove the problematic set_uc_mode logic
  0fffcffeb594 VMX: fix cr0.cd handling
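
Whether a local tree already contains one of these fixes can be checked with
git.  A minimal sketch (the throwaway repository below is a stand-in for a
real clone of the public xen.git tree, where the abbreviated hashes listed
above would be used instead):

```shell
# Sketch: test whether a fix commit is contained in a branch using
# git merge-base --is-ancestor. Demonstrated on a throwaway repository.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo-repo && cd demo-repo
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "fix"
fix=$(git rev-parse HEAD)            # stands in for a listed commit hash
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "later work"
git merge-base --is-ancestor "$fix" HEAD && echo "fix present"
```

Exit status 0 from `--is-ancestor` means the branch contains the commit, so
the line above prints "fix present".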
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJTBOHLAAoJEIP+FMlX6CvZOZsIAI1JT1S+76kGilCSef5r2XUx
uQ/cFVNjlcACeIF9/ejglQzlfaUcB3fjERdHVuYdiURgiPOwUErJV+0Xg3avFTIj
hE9KeUnBl9+vS8OwmO7va4LEZf3xl8LVhirbsepL6eubvmgtmxqf/MeV6kMF5xUU
9t65V80qPNYpA+2SzUnRZFuzGHLd5IkTFUQXfKEzGH3lWu35qvGqyhYWRXHVmz9c
4e49pqO6QenjSlLxvpiW/FpeUxothpq4xxrSom4XsZrBULp4EywU9EkaF5tuFnpg
dyzfz3Ap7k0H+5NoHTfof+N7rzaEOyR/QtXIerpcwuf5qMIN0c2HSZBzGdrvlfw=
=SC2T
-----END PGP SIGNATURE-----

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Wed Feb 19 16:55:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAQp-0001Wl-9c; Wed, 19 Feb 2014 16:55:35 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQn-0001Vm-7d; Wed, 19 Feb 2014 16:55:33 +0000
Received: from [193.109.254.147:36768] by server-13.bemta-14.messagelabs.com
	id D4/E1-01226-402E4035; Wed, 19 Feb 2014 16:55:32 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392828929!1483478!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4719 invoked from network); 19 Feb 2014 16:55:30 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-9.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	19 Feb 2014 16:55:30 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQe-000792-Eh; Wed, 19 Feb 2014 16:55:24 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQe-0003PS-Bl; Wed, 19 Feb 2014 16:55:24 +0000
Date: Wed, 19 Feb 2014 16:55:24 +0000
Message-Id: <E1WGAQe-0003PS-Bl@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 82 (CVE-2013-6885) - Guest
 triggerable AMD CPU erratum may cause host hang
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

             Xen Security Advisory CVE-2013-6885 / XSA-82
                              version 4

          Guest triggerable AMD CPU erratum may cause host hang

UPDATES IN VERSION 4
====================

The original fix for 4.2.x and 4.1.x was found to address 64-bit
hypervisors only. Incremental patches that also cover 32-bit hypervisors
are now provided in addition.

ISSUE DESCRIPTION
=================

AMD CPU erratum 793 "Specific Combination of Writes to Write Combined
Memory Types and Locked Instructions May Cause Core Hang" describes a
situation under which a CPU core may hang.

IMPACT
======

A malicious guest administrator can mount a denial of service attack
affecting the whole system.

VULNERABLE SYSTEMS
==================

The vulnerability is applicable only to family 16h model 00h-0fh AMD
CPUs.

Such CPUs running Xen versions 3.3 onwards are vulnerable.  We have
not checked earlier versions of Xen.

HVM guests can always exploit the vulnerability if it is present.
PV guests can exploit the vulnerability only if they have been granted
access to physical device(s).

Non-AMD CPUs are not vulnerable.

CREDITS
=======

This issue's security impact was discovered by Jan Beulich.

MITIGATION
==========

This issue can be avoided by neither running HVM guests nor assigning
PCI devices to PV guests.

RESOLUTION
==========

The attached xsa82.patch contains a software workaround which resolves
this issue for 64-bit hypervisors. To also resolve the issue on 32-bit
hypervisors (Xen 4.2.x and 4.1.x only), the respective attached
xsa82-4.?-32bit.patch needs to be applied on top.

Alternatively, the recommended workaround can be implemented in
firmware, so a suitable firmware update will resolve the issue.
If you require a firmware update please consult your vendor.

xsa82.patch             Xen 4.1.x, Xen 4.2.x, Xen 4.3.x, xen-unstable
xsa82-4.1-32bit.patch   Xen 4.1.x
xsa82-4.2-32bit.patch   Xen 4.2.x

$ sha256sum xsa82*.patch
b0fb0289e1da965bc038993e07af4ba78cb746ed8f1a1865f5fec9de7299faa7  xsa82-4.1-32bit.patch
18f2ba14131975b45688e3c5f4c0a85bd78cf089c3d83ae81f86e149b8c538d6  xsa82-4.2-32bit.patch
0a58f3564ca91fd2668c202446c607fdb1ec8643e558a3921046d43675f58c08  xsa82.patch
$
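
Downloaded patches should be checked against the digests above before being
applied.  A self-contained sketch of the verification step, demonstrated
with a stand-in file (for the real patches, substitute the file names and
digests listed above):

```shell
# Sketch: verify a file against a published SHA-256 digest with sha256sum -c.
# The file and digest here are stand-ins created on the spot.
tmp=$(mktemp -d) && cd "$tmp"
printf 'example patch contents\n' > xsa-demo.patch
expected=$(sha256sum xsa-demo.patch | awk '{print $1}')
echo "$expected  xsa-demo.patch" | sha256sum -c -   # prints: xsa-demo.patch: OK
# A patch that verifies would then be applied from the top of the source
# tree, e.g.:
#   patch -p1 < xsa82.patch
```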
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJTBOHNAAoJEIP+FMlX6CvZ6TIIAMS1oTljW2yAB9daiY5P0UBf
u4X+NTUUUO6DiKLakBFjmS01oB7pApSCHmnqUqgFXlbo8KJsz3qtCLWe+IHH0Kex
8ofL/pDedcHm7bSkXCcncz8xVCqPbPrgVV+bwDXHru65/jxf0XDvPRT9af4N2eGY
wlngDFDaWLuozjOqp2mtaOSiqbUc2r43BOalMl6om2BFbF8BEBpPBkcLRxUvsQX0
noZMbknQ36mb0/+dC+pHCUfcUuLquaGNx+I+UF4HXSUdxhVniCD8hzmDxRR9i5Dn
S/g9z72LDF0cISL2K4B/iwRiCjOozHqbNimSAWuWTgj3dAWu8dClI3SQyFpOgxY=
=ie9o
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa82-4.1-32bit.patch"
Content-Disposition: attachment; filename="xsa82-4.1-32bit.patch"
Content-Transfer-Encoding: base64

eDg2L0FNRDogd29yayBhcm91bmQgZXJyYXR1bSA3OTMgZm9yIDMyLWJpdAoK
VGhlIG9yaWdpbmFsIGNoYW5nZSB3ZW50IGludG8gYSA2NC1iaXQgb25seSBj
b2RlIHNlY3Rpb24sIHRodXMgbGVhdmluZwp0aGUgaXNzdWUgdW5maXhlZCBv
biAzMi1iaXQuIFJlLW9yZGVyIGNvZGUgdG8gYWRkcmVzcyB0aGlzLgoKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpB
Y2tlZC1ieTogSWFuIENhbXBiZWxsIDxJYW4uQ2FtcGJlbGxAY2l0cml4LmNv
bT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9jcHUvYW1kLmMKKysrIGIveGVuL2Fy
Y2gveDg2L2NwdS9hbWQuYwpAQCAtNjQ5LDYgKzY0OSwxOCBAQCBzdGF0aWMg
dm9pZCBfX2RldmluaXQgaW5pdF9hbWQoc3RydWN0IGNwCiAJCSAgICAgICAi
KioqIFBhc3MgXCJhbGxvd191bnNhZmVcIiBpZiB5b3UncmUgdHJ1c3Rpbmci
CiAJCSAgICAgICAiIGFsbCB5b3VyIChQVikgZ3Vlc3Qga2VybmVscy4gKioq
XG4iKTsKIAorCS8qIEFNRCBDUFVzIGRvIG5vdCBzdXBwb3J0IFNZU0VOVEVS
IG91dHNpZGUgb2YgbGVnYWN5IG1vZGUuICovCisJY2xlYXJfYml0KFg4Nl9G
RUFUVVJFX1NFUCwgYy0+eDg2X2NhcGFiaWxpdHkpOworCisJaWYgKGMtPng4
NiA9PSAweDEwKSB7CisJCS8qIGRvIHRoaXMgZm9yIGJvb3QgY3B1ICovCisJ
CWlmIChjID09ICZib290X2NwdV9kYXRhKQorCQkJY2hlY2tfZW5hYmxlX2Ft
ZF9tbWNvbmZfZG1pKCk7CisKKwkJZmFtMTBoX2NoZWNrX2VuYWJsZV9tbWNm
ZygpOworCX0KKyNlbmRpZgorCiAJaWYgKGMtPng4NiA9PSAweDE2ICYmIGMt
Png4Nl9tb2RlbCA8PSAweGYpIHsKIAkJcmRtc3JsKE1TUl9BTUQ2NF9MU19D
RkcsIHZhbHVlKTsKIAkJaWYgKCEodmFsdWUgJiAoMSA8PCAxNSkpKSB7CkBA
IC02NjMsMTggKzY3NSw2IEBAIHN0YXRpYyB2b2lkIF9fZGV2aW5pdCBpbml0
X2FtZChzdHJ1Y3QgY3AKIAkJfQogCX0KIAotCS8qIEFNRCBDUFVzIGRvIG5v
dCBzdXBwb3J0IFNZU0VOVEVSIG91dHNpZGUgb2YgbGVnYWN5IG1vZGUuICov
Ci0JY2xlYXJfYml0KFg4Nl9GRUFUVVJFX1NFUCwgYy0+eDg2X2NhcGFiaWxp
dHkpOwotCi0JaWYgKGMtPng4NiA9PSAweDEwKSB7Ci0JCS8qIGRvIHRoaXMg
Zm9yIGJvb3QgY3B1ICovCi0JCWlmIChjID09ICZib290X2NwdV9kYXRhKQot
CQkJY2hlY2tfZW5hYmxlX2FtZF9tbWNvbmZfZG1pKCk7Ci0KLQkJZmFtMTBo
X2NoZWNrX2VuYWJsZV9tbWNmZygpOwotCX0KLSNlbmRpZgotCiAJaWYgKGMt
Png4NiA9PSAweDEwKSB7CiAJCS8qCiAJCSAqIE9uIGZhbWlseSAxMGggQklP
UyBtYXkgbm90IGhhdmUgcHJvcGVybHkgZW5hYmxlZCBXQysK

--=separator
Content-Type: application/octet-stream; name="xsa82-4.2-32bit.patch"
Content-Disposition: attachment; filename="xsa82-4.2-32bit.patch"
Content-Transfer-Encoding: base64

eDg2L0FNRDogd29yayBhcm91bmQgZXJyYXR1bSA3OTMgZm9yIDMyLWJpdAoK
VGhlIG9yaWdpbmFsIGNoYW5nZSB3ZW50IGludG8gYSA2NC1iaXQgb25seSBj
b2RlIHNlY3Rpb24sIHRodXMgbGVhdmluZwp0aGUgaXNzdWUgdW5maXhlZCBv
biAzMi1iaXQuIFJlLW9yZGVyIGNvZGUgdG8gYWRkcmVzcyB0aGlzLgoKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpB
Y2tlZC1ieTogSWFuIENhbXBiZWxsIDxJYW4uQ2FtcGJlbGxAY2l0cml4LmNv
bT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9jcHUvYW1kLmMKKysrIGIveGVuL2Fy
Y2gveDg2L2NwdS9hbWQuYwpAQCAtNTIyLDYgKzUyMiwxOCBAQCBzdGF0aWMg
dm9pZCBfX2RldmluaXQgaW5pdF9hbWQoc3RydWN0IGNwCiAJCSAgICAgICAi
KioqIFBhc3MgXCJhbGxvd191bnNhZmVcIiBpZiB5b3UncmUgdHJ1c3Rpbmci
CiAJCSAgICAgICAiIGFsbCB5b3VyIChQVikgZ3Vlc3Qga2VybmVscy4gKioq
XG4iKTsKIAorCS8qIEFNRCBDUFVzIGRvIG5vdCBzdXBwb3J0IFNZU0VOVEVS
IG91dHNpZGUgb2YgbGVnYWN5IG1vZGUuICovCisJY2xlYXJfYml0KFg4Nl9G
RUFUVVJFX1NFUCwgYy0+eDg2X2NhcGFiaWxpdHkpOworCisJaWYgKGMtPng4
NiA9PSAweDEwKSB7CisJCS8qIGRvIHRoaXMgZm9yIGJvb3QgY3B1ICovCisJ
CWlmIChjID09ICZib290X2NwdV9kYXRhKQorCQkJY2hlY2tfZW5hYmxlX2Ft
ZF9tbWNvbmZfZG1pKCk7CisKKwkJZmFtMTBoX2NoZWNrX2VuYWJsZV9tbWNm
ZygpOworCX0KKyNlbmRpZgorCiAJaWYgKGMtPng4NiA9PSAweDE2ICYmIGMt
Png4Nl9tb2RlbCA8PSAweGYpIHsKIAkJaWYgKGMgPT0gJmJvb3RfY3B1X2Rh
dGEpIHsKIAkJCWwgPSBwY2lfY29uZl9yZWFkMzIoMCwgMCwgMHgxOCwgMHgz
LCAweDU4KTsKQEAgLTU1NSwxOCArNTY3LDYgQEAgc3RhdGljIHZvaWQgX19k
ZXZpbml0IGluaXRfYW1kKHN0cnVjdCBjcAogCQl9CiAJfQogCi0JLyogQU1E
IENQVXMgZG8gbm90IHN1cHBvcnQgU1lTRU5URVIgb3V0c2lkZSBvZiBsZWdh
Y3kgbW9kZS4gKi8KLQljbGVhcl9iaXQoWDg2X0ZFQVRVUkVfU0VQLCBjLT54
ODZfY2FwYWJpbGl0eSk7Ci0KLQlpZiAoYy0+eDg2ID09IDB4MTApIHsKLQkJ
LyogZG8gdGhpcyBmb3IgYm9vdCBjcHUgKi8KLQkJaWYgKGMgPT0gJmJvb3Rf
Y3B1X2RhdGEpCi0JCQljaGVja19lbmFibGVfYW1kX21tY29uZl9kbWkoKTsK
LQotCQlmYW0xMGhfY2hlY2tfZW5hYmxlX21tY2ZnKCk7Ci0JfQotI2VuZGlm
Ci0KIAlpZiAoYy0+eDg2ID09IDB4MTApIHsKIAkJLyoKIAkJICogT24gZmFt
aWx5IDEwaCBCSU9TIG1heSBub3QgaGF2ZSBwcm9wZXJseSBlbmFibGVkIFdD
Kwo=

--=separator
Content-Type: application/octet-stream; name="xsa82.patch"
Content-Disposition: attachment; filename="xsa82.patch"
Content-Transfer-Encoding: base64

eDg2L0FNRDogd29yayBhcm91bmQgZXJyYXR1bSA3OTMKClRoZSByZWNvbW1l
bmRhdGlvbiBpcyB0byBzZXQgYSBiaXQgaW4gYW4gTVNSIC0gZG8gdGhpcyBp
ZiB0aGUgZmlybXdhcmUKZGlkbid0LCBjb25zaWRlcmluZyB0aGF0IG90aGVy
d2lzZSB3ZSBleHBvc2Ugb3Vyc2VsdmVzIHRvIGEgZ3Vlc3QKaW5kdWNlZCBE
b1MuCgpUaGlzIGlzIENWRS0yMDEzLTY4ODUgLyBYU0EtODIuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CkFja2Vk
LWJ5OiBTdXJhdmVlIFN1dGhpa3VscGFuaXQgPHN1cmF2ZWUuc3V0aGlrdWxw
YW5pdEBhbWQuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2L2NwdS9hbWQuYwor
KysgYi94ZW4vYXJjaC94ODYvY3B1L2FtZC5jCkBAIC00NzYsNiArNDc2LDIw
IEBAIHN0YXRpYyB2b2lkIF9fZGV2aW5pdCBpbml0X2FtZChzdHJ1Y3QgY3AK
IAkJICAgICAgICIqKiogUGFzcyBcImFsbG93X3Vuc2FmZVwiIGlmIHlvdSdy
ZSB0cnVzdGluZyIKIAkJICAgICAgICIgYWxsIHlvdXIgKFBWKSBndWVzdCBr
ZXJuZWxzLiAqKipcbiIpOwogCisJaWYgKGMtPng4NiA9PSAweDE2ICYmIGMt
Png4Nl9tb2RlbCA8PSAweGYpIHsKKwkJcmRtc3JsKE1TUl9BTUQ2NF9MU19D
RkcsIHZhbHVlKTsKKwkJaWYgKCEodmFsdWUgJiAoMSA8PCAxNSkpKSB7CisJ
CQlzdGF0aWMgYm9vbF90IHdhcm5lZDsKKworCQkJaWYgKGMgPT0gJmJvb3Rf
Y3B1X2RhdGEgfHwgb3B0X2NwdV9pbmZvIHx8CisJCQkgICAgIXRlc3RfYW5k
X3NldF9ib29sKHdhcm5lZCkpCisJCQkJcHJpbnRrKEtFUk5fV0FSTklORwor
CQkJCSAgICAgICAiQ1BVJXU6IEFwcGx5aW5nIHdvcmthcm91bmQgZm9yIGVy
cmF0dW0gNzkzXG4iLAorCQkJCSAgICAgICBzbXBfcHJvY2Vzc29yX2lkKCkp
OworCQkJd3Jtc3JsKE1TUl9BTUQ2NF9MU19DRkcsIHZhbHVlIHwgKDEgPDwg
MTUpKTsKKwkJfQorCX0KKwogCS8qIEFNRCBDUFVzIGRvIG5vdCBzdXBwb3J0
IFNZU0VOVEVSIG91dHNpZGUgb2YgbGVnYWN5IG1vZGUuICovCiAJY2xlYXJf
Yml0KFg4Nl9GRUFUVVJFX1NFUCwgYy0+eDg2X2NhcGFiaWxpdHkpOwogCi0t
LSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbXNyLWluZGV4LmgKKysrIGIveGVu
L2luY2x1ZGUvYXNtLXg4Ni9tc3ItaW5kZXguaApAQCAtMjEzLDYgKzIxMyw3
IEBACiAKIC8qIEFNRDY0IE1TUnMgKi8KICNkZWZpbmUgTVNSX0FNRDY0X05C
X0NGRwkJMHhjMDAxMDAxZgorI2RlZmluZSBNU1JfQU1ENjRfTFNfQ0ZHCQkw
eGMwMDExMDIwCiAjZGVmaW5lIE1TUl9BTUQ2NF9JQ19DRkcJCTB4YzAwMTEw
MjEKICNkZWZpbmUgTVNSX0FNRDY0X0RDX0NGRwkJMHhjMDAxMTAyMgogI2Rl
ZmluZSBBTUQ2NF9OQl9DRkdfQ0Y4X0VYVF9FTkFCTEVfQklUCTQ2Cg==

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Wed Feb 19 16:55:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAQk-0001V6-KJ; Wed, 19 Feb 2014 16:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQi-0001Ub-GM; Wed, 19 Feb 2014 16:55:28 +0000
Received: from [85.158.137.68:59489] by server-9.bemta-3.messagelabs.com id
	D6/B7-10184-FF1E4035; Wed, 19 Feb 2014 16:55:27 +0000
X-Env-Sender: iwj@xenbits.xen.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392828925!1377595!1
X-Originating-IP: [50.57.168.107]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20088 invoked from network); 19 Feb 2014 16:55:26 -0000
Received: from mail.xen.org (HELO mail.xen.org) (50.57.168.107)
	by server-2.tower-31.messagelabs.com with AES256-SHA encrypted SMTP;
	19 Feb 2014 16:55:26 -0000
Received: from xenbits.xen.org ([50.57.170.242])
	by mail.xen.org with esmtp (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQa-00078q-MA; Wed, 19 Feb 2014 16:55:20 +0000
Received: from iwj by xenbits.xen.org with local (Exim 4.72)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1WGAQa-0003OO-4k; Wed, 19 Feb 2014 16:55:20 +0000
Date: Wed, 19 Feb 2014 16:55:20 +0000
Message-Id: <E1WGAQa-0003OO-4k@xenbits.xen.org>
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.428 (Entity 5.428)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
	xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Cc: "Xen.org security team" <security@xen.org>
Subject: [Xen-devel] Xen Security Advisory 60 (CVE-2013-2212) - Excessive
 time to disable caching with HVM guests with PCI passthrough
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

             Xen Security Advisory CVE-2013-2212 / XSA-60
                             version 6

   Excessive time to disable caching with HVM guests with PCI passthrough

UPDATES IN VERSION 6
====================

Since this advisory was issued, various fixes have been applied to
the public Xen trees.

ISSUE DESCRIPTION
=================

HVM guests are able to manipulate their physical address space such that
processing a subsequent request by that guest to disable caching takes an
extended amount of time, spent changing the cacheability of the memory pages
assigned to the guest. This applies only when the guest has been granted
access to some memory-mapped I/O region (typically by way of assigning a
passthrough PCI device).

This can cause the CPU which processes the request to become unavailable,
possibly causing the hypervisor or a guest kernel (including the domain 0 one)
to halt itself ("panic").

IMPACT
======

A malicious domain, given access to a device with memory mapped I/O
regions, can cause the host to become unresponsive for a period of
time, potentially leading to a DoS affecting the whole system.

VULNERABLE SYSTEMS
==================

Xen version 3.3 onwards is vulnerable.

Only systems using the Intel variant of Hardware Assisted Paging (aka EPT) are
vulnerable.

MITIGATION
==========

This issue can be avoided by not assigning PCI devices to untrusted guests, or
by running HVM guests with shadow mode paging (through adding "hap=0" to the
domain configuration file).

CREDITS
=======

Zhenzhong Duan found the issue as a bug; on examination by the
Xenproject.org Security Team it turned out to be a security problem.

RESOLUTION
==========

This issue has been fixed in the public xen.git trees.

For xen-unstable (#staging, #master), in these git commits:
  c13b0d65ddedd745 VMX: disable EPT when !cpu_has_vmx_pat
  1c84d046735102e0 VMX: remove the problematic set_uc_mode logic
  62652c00efa55fb4 VMX: fix cr0.cd handling
  86d60e855fe118df VMX: flush cache when vmentry back to UC guest
  f1c9658d6802c433 Revert "VMX: flush cache when vmentry back to UC guest"
(Earliest commit is listed first.  Note that f1c9658d reverts
not only 86d60e85 but also part of 62652c00.)

For Xen 4.2 (#staging-4.2, #stable-4.2):
  f1e0df14412c VMX: disable EPT when !cpu_has_vmx_pat
  644e6c5c7106 VMX: remove the problematic set_uc_mode logic
  0fffcffeb594 VMX: fix cr0.cd handling
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)

iQEcBAEBAgAGBQJTBOHLAAoJEIP+FMlX6CvZOZsIAI1JT1S+76kGilCSef5r2XUx
uQ/cFVNjlcACeIF9/ejglQzlfaUcB3fjERdHVuYdiURgiPOwUErJV+0Xg3avFTIj
hE9KeUnBl9+vS8OwmO7va4LEZf3xl8LVhirbsepL6eubvmgtmxqf/MeV6kMF5xUU
9t65V80qPNYpA+2SzUnRZFuzGHLd5IkTFUQXfKEzGH3lWu35qvGqyhYWRXHVmz9c
4e49pqO6QenjSlLxvpiW/FpeUxothpq4xxrSom4XsZrBULp4EywU9EkaF5tuFnpg
dyzfz3Ap7k0H+5NoHTfof+N7rzaEOyR/QtXIerpcwuf5qMIN0c2HSZBzGdrvlfw=
=SC2T
-----END PGP SIGNATURE-----

--=separator
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=separator--


From xen-devel-bounces@lists.xen.org Wed Feb 19 16:57:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 16:57:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGASp-0001w4-HS; Wed, 19 Feb 2014 16:57:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGASn-0001vY-Qt
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 16:57:38 +0000
Received: from [193.109.254.147:6368] by server-10.bemta-14.messagelabs.com id
	F7/72-10711-182E4035; Wed, 19 Feb 2014 16:57:37 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392829054!5446961!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 373 invoked from network); 19 Feb 2014 16:57:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 16:57:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="103958210"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 16:57:33 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 11:57:33 -0500
Message-ID: <1392829052.29739.93.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 16:57:32 +0000
In-Reply-To: <5304E14F.5090507@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-6-git-send-email-julien.grall@linaro.org>
	<1392813280.29739.43.camel@kazak.uk.xensource.com>
	<5304D924.4090504@linaro.org>
	<1392827186.29739.92.camel@kazak.uk.xensource.com>
	<5304E14F.5090507@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com,
	patches@linaro.org
Subject: Re: [Xen-devel] [RFC for-4.5 05/12] xen/dts: Add
 dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 16:52 +0000, Julien Grall wrote:
> On 02/19/2014 04:26 PM, Ian Campbell wrote:
> > On Wed, 2014-02-19 at 16:17 +0000, Julien Grall wrote:
> >>>> +/**
> >>>> + * dt_parse_phandle - Resolve a phandle property to a device_node pointer
> >>>> + * @np: Pointer to device node holding phandle property
> >>>> + * @phandle_name: Name of property holding a phandle value
> >>>> + * @index: For properties holding a table of phandles, this is the index into
> >>>> + *         the table
> >>>
> >>> Otherwise it is -1 or something else?
> >>
> >> Even with one element, the property contains a table of phandles. So,
> >> the first index is always 0.
> > 
> > OK, so the "For properties holding a table..." bit is misleading
> > (that's the "otherwise" I was referring to, e.g. properties which do
> > not hold a table), since this is always the index, even if the table has
> > a single element.
> 
> It was copied from Linux. Shall I update the comment?

I suppose it is better to not deviate more than necessary, so feel free
to leave it.
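[For illustration only, a hypothetical device-tree fragment showing the indexing under discussion: every phandle property is treated as a table, so index 0 always names the first element, even when there is only one.]

```dts
/* Hypothetical nodes: "clocks" holds a table of two phandles, so
 * dt_parse_phandle(np, "clocks", 0) resolves &clk_a and
 * dt_parse_phandle(np, "clocks", 1) resolves &clk_b.
 * "interrupt-parent" holds a single phandle, but it is still a
 * one-entry table, resolved with index 0. */
uart0: serial@1000 {
    clocks = <&clk_a &clk_b>;
    interrupt-parent = <&gic>;
};
```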

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:01:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAW0-0002Z5-H4; Wed, 19 Feb 2014 17:00:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WGAVy-0002Yb-DS
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 17:00:54 +0000
Received: from [193.109.254.147:44723] by server-6.bemta-14.messagelabs.com id
	CF/34-03396-543E4035; Wed, 19 Feb 2014 17:00:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392829251!192546!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2658 invoked from network); 19 Feb 2014 17:00:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:00:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="102256704"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 17:00:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 12:00:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WGAVt-0001PT-OP; Wed, 19 Feb 2014 17:00:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Wed, 19 Feb 2014 17:00:48 +0000
Message-ID: <1392829248-12093-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392824487-8876-1-git-send-email-andrew.cooper3@citrix.com>
References: <1392824487-8876-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Coverity Team <coverity@xenproject.org>
Subject: [Xen-devel] [Patch v7] coverity: Store the modelling file in the
	source tree.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Coverity Team <coverity@xenproject.org>

---
Changes in v7:
 * Correctly state recursive locks rather than exclusive

Changes in v6:
 * Teach Coverity about errx() and libxl_ctx_{,un}lock()
 * Move to misc/coverity/model.c
---
 misc/coverity/model.c |  131 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 131 insertions(+)
 create mode 100644 misc/coverity/model.c

diff --git a/misc/coverity/model.c b/misc/coverity/model.c
new file mode 100644
index 0000000..fac2ecb
--- /dev/null
+++ b/misc/coverity/model.c
@@ -0,0 +1,131 @@
+/* Coverity Scan model
+ *
+ * This is a modelling file for Coverity Scan. Modelling helps to avoid false
+ * positives.
+ *
+ * - A model file can't import any header files.
+ * - Therefore only some built-in primitives like int, char and void are
+ *   available but not NULL etc.
+ * - Modelling doesn't need full structs and typedefs. Rudimentary structs
+ *   and similar types are sufficient.
+ * - An uninitialised local pointer is not an error. It signifies that the
+ *   variable could be either NULL or have some data.
+ *
+ * Coverity Scan doesn't pick up modifications automatically. The model file
+ * must be uploaded by an admin in the analysis.
+ *
+ * The Xen Coverity Scan modelling file used the cpython modelling file as a
+ * reference to get started (suggested by Coverity Scan themselves as a good
+ * example), but all content is Xen specific.
+ *
+ * Copyright (c) 2013-2014 Citrix Systems Ltd; All Rights Reserved
+ *
+ * Based on:
+ *     http://hg.python.org/cpython/file/tip/Misc/coverity_model.c
+ * Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
+ * 2011, 2012, 2013 Python Software Foundation; All Rights Reserved
+ *
+ */
+
+/*
+ * Useful references:
+ *   https://scan.coverity.com/models
+ */
+
+/* Definitions */
+#define NULL (void *)0
+#define PAGE_SIZE 4096UL
+#define PAGE_MASK (~(PAGE_SIZE-1))
+
+#define assert(cond) /* empty */
+
+struct page_info {};
+struct pthread_mutex_t {};
+
+struct libxl__ctx
+{
+    struct pthread_mutex_t lock;
+};
+typedef struct libxl__ctx libxl_ctx;
+
+/*
+ * Xen malloc.  Behaves exactly like regular malloc(), except it also contains
+ * an alignment parameter.
+ *
+ * TODO: work out how to correctly model bad alignments as errors.
+ */
+void *_xmalloc(unsigned long size, unsigned long align)
+{
+    int has_memory;
+
+    __coverity_negative_sink__(size);
+    __coverity_negative_sink__(align);
+
+    if ( has_memory )
+        return __coverity_alloc__(size);
+    else
+        return NULL;
+}
+
+/*
+ * Xen free.  Frees a pointer allocated by _xmalloc().
+ */
+void xfree(void *va)
+{
+    __coverity_free__(va);
+}
+
+
+/*
+ * map_domain_page() takes an existing domain page and possibly maps it into
+ * the Xen pagetables, to allow for direct access.  Model this as a memory
+ * allocation of exactly 1 page.
+ *
+ * map_domain_page() never fails. (It will BUG() before returning NULL)
+ *
+ * TODO: work out how to correctly model the behaviour that this function will
+ * only ever return page aligned pointers.
+ */
+void *map_domain_page(unsigned long mfn)
+{
+    return __coverity_alloc__(PAGE_SIZE);
+}
+
+/*
+ * unmap_domain_page() will unmap a page.  Model it as a free().
+ */
+void unmap_domain_page(const void *va)
+{
+    __coverity_free__(va);
+}
+
+/*
+ * Coverity appears not to understand that errx() unconditionally exits.
+ */
+void errx(int, const char*, ...)
+{
+    __coverity_panic__();
+}
+
+/*
+ * Coverity doesn't appear to be certain that the libxl ctx->lock is recursive.
+ */
+void libxl__ctx_lock(libxl_ctx *ctx)
+{
+    __coverity_recursive_lock_acquire__(&ctx->lock);
+}
+
+void libxl__ctx_unlock(libxl_ctx *ctx)
+{
+    __coverity_recursive_lock_release__(&ctx->lock);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:02:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:02:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAXX-0002od-K9; Wed, 19 Feb 2014 17:02:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAXV-0002oI-Gf
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:02:29 +0000
Received: from [85.158.139.211:31283] by server-14.bemta-5.messagelabs.com id
	28/C7-27598-4A3E4035; Wed, 19 Feb 2014 17:02:28 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392829347!4975642!1
X-Originating-IP: [209.85.217.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17822 invoked from network); 19 Feb 2014 17:02:27 -0000
Received: from mail-lb0-f178.google.com (HELO mail-lb0-f178.google.com)
	(209.85.217.178)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:02:27 -0000
Received: by mail-lb0-f178.google.com with SMTP id u14so498207lbd.9
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:02:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=iRZ8pbFPV3Zp9/PAnmQEoAOZWkN78celoNc5NdnVTs0=;
	b=JSylX8GmNoQjRWDJ4te6Sc86ZVWrvf49izSpmxAkOH2KpVxhOmUacH3vwVjDUxFWrE
	FJmlz6D8grPwVKpqUxnB4jD/1vK0Vwg5lUt1u9UQdSu3mYuvSdcmq2NHh1CApO1/DjYA
	WfyHHZlEiYipnt3oD0WXZZQvjOQKFtBB0WY8ulYOhGnllJr3taIujd2RnWTzF2q8zT0d
	bYQIbRUyZXN43w1Noadtyfo1EemMeejKQvdTB015huMQZN7D53DPB4SrqrocW6As3VgP
	Hw6HhUU8Mb9gnuTWN2G9oZBPnhgn8TFIuTMSo8n5EC1NqDY7LOBPGTv4gWl8JA8UzX65
	1Mbg==
X-Received: by 10.152.115.132 with SMTP id jo4mr1517682lab.69.1392829346973;
	Wed, 19 Feb 2014 09:02:26 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 09:02:06 -0800 (PST)
In-Reply-To: <5304C13F.3030802@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:02:06 -0800
X-Google-Sender-Auth: aQSII7wopRXhIVT9wQIeIUlVLbA
Message-ID: <CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>,
	Stephen Hemminger <stephen@networkplumber.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 6:35 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 19/02/14 09:52, Ian Campbell wrote:
>> Can't we arrange things in the Xen hotplug scripts such that if the
>> root_block stuff isn't available/doesn't work we fall back to the
>> existing fe:ff:ff:ff:ff:ff usage?
>>
>> That would avoid concerns about forward/backwards compat I think. It
>> wouldn't solve the issue you are targeting on old systems, but it also
>> doesn't regress them any further.
>
> I agree, I think this problem could be better handled from userspace: if it
> can set root_block then change the default MAC to a random one, if it can't,
> then stay with the default one. Or if someone doesn't care about STP but DAD
> is still important, userspace can have a force_random_mac option somewhere
> to change to a random MAC regardless of root_block presence.

Folks, what if I repurpose my patch to use the IFF_BRIDGE_NON_ROOT (or
relabel it IFF_ROOT_BLOCK_DEF) flag as a default driver preference set
at initialization, so that root block is applied once the device gets
added to a bridge? The purpose would be to keep drivers from using the
high MAC address hack and streamline use of a random MAC address,
thereby avoiding the possible duplicate address situation for IPv6. In
the STP use case for these interfaces we'd just require userspace to
unset the root block; I'd consider the STP use case the most odd of
all. The caveat to this approach is that 3.8 would be needed (or the
root block patches cherry-picked) for base kernels older than 3.8.

Stephen?

  Luis
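[A rough sketch of the hotplug-script fallback Ian and Zoltan describe above. The helper name is hypothetical and the per-port root_block knob is assumed at its bridge sysfs location; the function prints the chosen MAC policy rather than applying it, so the decision logic can be exercised without a live bridge.]

```shell
#!/bin/sh
# Sketch only: pick the MAC policy for a Xen vif based on whether the
# bridge port exposes root_block (bridge STP root-block support,
# mainline since 3.8).
choose_vif_mac() {
    port_dir=$1    # e.g. /sys/class/net/br0/brif/vif1.0
    if [ -w "$port_dir/root_block" ]; then
        echo 1 > "$port_dir/root_block"  # opt the port out of root election
        echo "random"                    # random MAC now safe: no DAD clash
    else
        echo "fe:ff:ff:ff:ff:ff"         # legacy high-MAC fallback
    fi
}
```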

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:02:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:02:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAXX-0002od-K9; Wed, 19 Feb 2014 17:02:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAXV-0002oI-Gf
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:02:29 +0000
Received: from [85.158.139.211:31283] by server-14.bemta-5.messagelabs.com id
	28/C7-27598-4A3E4035; Wed, 19 Feb 2014 17:02:28 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392829347!4975642!1
X-Originating-IP: [209.85.217.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17822 invoked from network); 19 Feb 2014 17:02:27 -0000
Received: from mail-lb0-f178.google.com (HELO mail-lb0-f178.google.com)
	(209.85.217.178)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:02:27 -0000
Received: by mail-lb0-f178.google.com with SMTP id u14so498207lbd.9
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:02:27 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=iRZ8pbFPV3Zp9/PAnmQEoAOZWkN78celoNc5NdnVTs0=;
	b=JSylX8GmNoQjRWDJ4te6Sc86ZVWrvf49izSpmxAkOH2KpVxhOmUacH3vwVjDUxFWrE
	FJmlz6D8grPwVKpqUxnB4jD/1vK0Vwg5lUt1u9UQdSu3mYuvSdcmq2NHh1CApO1/DjYA
	WfyHHZlEiYipnt3oD0WXZZQvjOQKFtBB0WY8ulYOhGnllJr3taIujd2RnWTzF2q8zT0d
	bYQIbRUyZXN43w1Noadtyfo1EemMeejKQvdTB015huMQZN7D53DPB4SrqrocW6As3VgP
	Hw6HhUU8Mb9gnuTWN2G9oZBPnhgn8TFIuTMSo8n5EC1NqDY7LOBPGTv4gWl8JA8UzX65
	1Mbg==
X-Received: by 10.152.115.132 with SMTP id jo4mr1517682lab.69.1392829346973;
	Wed, 19 Feb 2014 09:02:26 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 09:02:06 -0800 (PST)
In-Reply-To: <5304C13F.3030802@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:02:06 -0800
X-Google-Sender-Auth: aQSII7wopRXhIVT9wQIeIUlVLbA
Message-ID: <CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>,
	Stephen Hemminger <stephen@networkplumber.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 6:35 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 19/02/14 09:52, Ian Campbell wrote:
>> Can't we arrange things in the Xen hotplug scripts such that if the
>> root_block stuff isn't available/doesn't work we fallback to the
>> existing fe:ff:ff:ff:ff usage?
>>
>> That would avoid concerns about forward/backwards compat I think. It
>> wouldn't solve the issue you are targeting on old systems, but it also
>> doesn't regress them any further.
>
> I agree, I think this problem could be better handled from userspace: if it
> can set root_block then change the default MAC to a random one, if it can't,
> then stay with the default one. Or if someone doesn't care about STP but DAD
> is still important, userspace can have a force_random_mac option somewhere
> to change to a random MAC regardless of root_block presence.

Folks, what if I repurpose my patch to use the IFF_BRIDGE_NON_ROOT flag
(or relabel it IFF_ROOT_BLOCK_DEF) as a default driver preference set
at initialization, so that root block is applied once the device gets
added to a bridge? The purpose would be to stop drivers from using the
high MAC address hack and streamline use of a random MAC address,
thereby avoiding the possible duplicate address situation for IPv6. In
the STP use case for these interfaces we'd just require userspace to
unset the root block; I'd consider the STP use case the most odd of
all. The caveat to this approach is that 3.8 would be needed (or the
root block patches cherry-picked) for base kernels older than 3.8.
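
The userspace fallback Ian and Zoltan describe could look roughly like
this in a hotplug script. This is only a sketch: the helper name is
invented, and the sysfs root is parameterized for illustration (on a
real system it would be /sys).

```shell
# Sketch: if the kernel exposes root_block for this bridge port, block
# root election there and use a random MAC; otherwise fall back to the
# legacy fe:ff:ff:ff:ff:ff high MAC so the port never wins root election.
choose_vif_mac() {
    sysfs=$1; bridge=$2; port=$3
    rb="$sysfs/class/net/$bridge/brif/$port/root_block"
    if [ -e "$rb" ]; then
        echo 1 > "$rb"              # opt the port out of root election
        echo "random"               # random MAC avoids IPv6 DAD collisions
    else
        echo "fe:ff:ff:ff:ff:ff"    # old kernel: keep the high MAC
    fi
}
```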

Stephen?

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:05:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAad-0003AE-8Y; Wed, 19 Feb 2014 17:05:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WGAac-0003A1-7v; Wed, 19 Feb 2014 17:05:42 +0000
Received: from [85.158.143.35:16099] by server-1.bemta-4.messagelabs.com id
	49/BD-31661-564E4035; Wed, 19 Feb 2014 17:05:41 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392829537!6870340!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29175 invoked from network); 19 Feb 2014 17:05:39 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 17:05:39 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JH4uQp030281
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 17:04:57 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JH4qow002602
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 17:04:52 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1JH4pmr009456; Wed, 19 Feb 2014 17:04:51 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 09:04:51 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0FF041C0954; Wed, 19 Feb 2014 12:04:50 -0500 (EST)
Date: Wed, 19 Feb 2014 12:04:49 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael D Labriola <mlabriol@gdeb.com>
Message-ID: <20140219170449.GB11365@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, michael.d.labriola@gmail.com,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014 
> 09:49:38 AM:
> 
> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
> > bounces@lists.xen.org
> > Date: 01/24/2014 09:50 AM
> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > 
> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> > > 
> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > > Date: 01/21/2014 04:59 PM
> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > Sent by: xen-devel-bounces@lists.xen.org
> > > > 
> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014 
> 
> > > > > 10:38:27 AM:
> > > > > 
> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, 
> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > > > > > Date: 01/20/2014 10:38 AM
> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > > 
> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola 
> wrote:
> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014 
> > > 10:14:36 
> > > > > AM:
> > > > > > > 
> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>, 
> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> > > > > > > > Date: 01/20/2014 10:14 AM
> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > > > > > > > 
> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola 
> 
> > > wrote:
> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having 
> > > consistent 
> > > > > > > crashes 
> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and 
> > > unusably 
> > > > > 
> > > > > > > slow 
> > > > > > > > > graphics with a newer HD7000 (can see each line refresh 
> > > > > indiviually on 
> > > > > > > 
> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare 
> metal.
> > > > > > > > 
> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you 
> mean?
> > > > > > > 
> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0. 
> The 
> > > 
> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device 
> for 
> > > a 
> > > > > plain 
> > > > > > > text console login.
> > > > > > 
> > > > > > So sluggish is probably due to the PAT not being enabled. This 
> patch
> > > > > > should be applied:
> > > > > > 
> > > > > > lkml.org/lkml/2011/11/8/406
> > > > > > 
> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > > > > > 
> > > > > > and these two reverted:
> > > > > > 
> > > > > >  "xen/pat: Disable PAT support for now."
> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> > > > > > 
> > > > > > Which is to say do:
> > > > > > 
> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > > > > 
> > > > > Thanks!  I cherry-picked that patch out of your testing tree, 
> reverted 
> > > 
> > > > > those 2 commits, recompiled and installed.  Definitely fixed the 
> HD 
> > > 7000 
> > > > > sluggishness and appears to have fixed the R600 crashes (although 
> it's 
> > > 
> > > > > only been running a few hours).
> > > > > 
> > > > > How come that patch didn't get into mainline?  It looks pretty 
> > > innocuous 
> > > > > to me...
> > > > 
> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't 
> had
> > > > the chance nor time to implement it.
> > > 
> > > I see.  Well, I've got a handful of boxes in my lab that need that 
> patch 
> > > to be usable.  If you do come up with a more mainline-able solution, 
> I'd 
> > > gladly test it for you.  ;-)
> > 
> > Thank you!
> 
> Uh, oh.  Looks like those reverts and patches didn't entirely fix my 
> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers 
> again yesterday.  After being solid as a rock for 2 weeks as my primary 
> workstation, X has crashed a half dozen or so times so far this week. I've 
> been in Xen with 2 paravirtual linux guests running almost constantly for 
> this whole period.  I don't understand what's changed, but my system has 
> been entirely unstable now.  I did recompile my kernel... but all I did 
> was merge the v3.13.1 stable commit into my working tree and turn a few 
> things on (netfilter, wifi, a couple drivers turned on here and there).  I 
> just went and verified that those patches are still applied in my tree 
> (i.e., I didn't accidentally undo them).  I'm scratching my head (and 
> staring at a TTY login).
> 
> When X crashes, my kernel log prints a couple dozen iterations of this. 3d 
> acceleration no longer functions unless I reboot.  If memory serves, the 
> unpatched behavior upon X crash was that the kernel continued to spew 
> these errors until the whole box locked up.  At least that's not happening 
> any more... ;-)
> 
> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool 
> (r:-12)!
> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool 
> (r:-12)!
> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate 
> GEM object (8192, 2, 4096, -12)
> 
> and here's a slightly different variant that happened while I was typing 
> this email (on a different machine, luckily):
> 
> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool 
> (r:-12)!
> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool 
> (r:-12)!
> [64348.297561] [TTM] Buffer eviction failed
> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool 
> (r:-12)!
> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate 
> GEM object (16384, 2, 4096, -12)
> 
> Any ideas?

Yes. I believe you have a memory leak. As in, some driver (or X) is
eating up memory and not giving enough of it back. That means the TTM
layer is hitting its ceiling on how much memory it can allocate.

Now finding the culprit is going to be a bit hard.

You could use:

[root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool 
         pool      refills   pages freed    inuse available     name
           wc          259           224      808        4 nouveau 0000:05:00.0
       cached      3403058      13561071    51158        3 radeon 0000:01:00.0
       cached           25             0       96        4 nouveau 0000:05:00.0

to figure out if my thinking is really true. You should have a huge
'inuse' count and almost no 'available'.
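
As a rough way to mechanize that check - the thresholds below are
guesses for illustration, not anything TTM defines - the debugfs output
can be fed through awk:

```shell
# Flag devices whose TTM DMA pool looks exhausted: a large 'inuse'
# count (column 4) with almost nothing 'available' (column 5).
check_ttm_pool() {
    awk 'NR > 1 && $4 > 1000 && $5 < 8 {
        print $6, $7 ": inuse=" $4 ", available=" $5
    }' "$1"
}

# Usage on a live system (the dri device number may differ):
#   check_ttm_pool /sys/kernel/debug/dri/1/ttm_dma_page_pool
```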


But that will just confirm that yes - you have a big memory footprint
and it is hitting the ceiling.

Now, actually figuring out which application is holding on to these
pages - that I am not sure about. I think there is some DRM info tool
that shows how many pages each application is using. You could leave it
running and see which app is gulping up the memory. But I am not sure
which tool that is (if there is one).

Well, let's take one step at a time - see if my theory is correct first.

> 
> 
> ---
> Michael D Labriola
> Electric Boat
> mlabriol@gdeb.com
> 401-848-8871 (desk)
> 401-848-8513 (lab)
> 401-316-9844 (cell)
> 
> 
> 
>  
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:09:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:09:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAdr-0003SW-TY; Wed, 19 Feb 2014 17:09:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stephen@networkplumber.org>) id 1WGAdq-0003SN-Tv
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:09:03 +0000
Received: from [193.109.254.147:34001] by server-5.bemta-14.messagelabs.com id
	9B/DB-16688-E25E4035; Wed, 19 Feb 2014 17:09:02 +0000
X-Env-Sender: stephen@networkplumber.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392829739!192363!1
X-Originating-IP: [209.85.192.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21964 invoked from network); 19 Feb 2014 17:09:01 -0000
Received: from mail-pd0-f173.google.com (HELO mail-pd0-f173.google.com)
	(209.85.192.173)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:09:01 -0000
Received: by mail-pd0-f173.google.com with SMTP id y10so623588pdj.18
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:08:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:in-reply-to
	:references:mime-version:content-type:content-transfer-encoding;
	bh=KuZTDio5x84WjFXt7AObyo0IvSly8ss1lkV7juIKyIU=;
	b=R7BIoiX8QKk0KulPguWwUZnqU0NmYPWMalDD751o5pWDy3TZw2eXKXyAAUUWx5pbvu
	wE0ZvXmhjaWIO7AuIzYDjtutklWSG2tMkslfzBpHfDiAnAoOu/J5xhCXQkPIkgR+5PC/
	B/fx9NHIl9ntJG7YLl3P33GmuaeKIGbRE7ni8oUMpe4eLypbwkdqZuBgdSuNuM0j6lQx
	htTupxCJ02wBxF1+LYkCIi2RsROadaOixhLzkd1KkkT5HqxJnQOGPMgTWotH5sIV6YmF
	vUxaHA8JW9USmFofo28fZPd6TEk7UN/unITuyleRsd/pm2C2QPyHer1okt17bxMLkBZg
	E6hQ==
X-Gm-Message-State: ALoCoQn0VeEfH2k8/faf3B8BhnduiWg6GVhzeRtoWSOQtpw+ouX/jLxcVG2Jx/N0OxACViPpa6fp
X-Received: by 10.66.142.42 with SMTP id rt10mr40973464pab.1.1392829739333;
	Wed, 19 Feb 2014 09:08:59 -0800 (PST)
Received: from nehalam.linuxnetplumber.net
	(static-50-53-83-51.bvtn.or.frontiernet.net. [50.53.83.51])
	by mx.google.com with ESMTPSA id zc6sm5288857pab.18.2014.02.19.09.08.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:08:58 -0800 (PST)
Date: Wed, 19 Feb 2014 09:08:55 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Message-ID: <20140219090855.610c0e04@nehalam.linuxnetplumber.net>
In-Reply-To: <CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Feb 2014 09:02:06 -0800
"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:

> Folks, what if I repurpose my patch to use the IFF_BRIDGE_NON_ROOT (or
> relabel it to IFF_ROOT_BLOCK_DEF) flag as a default driver preference
> set at initialization, so that root block is applied once the device
> gets added to a bridge? The purpose would be to keep drivers from
> using the high MAC address hack and have them use a random MAC
> address instead, thereby avoiding the possible duplicate address
> situation for IPv6. In the STP use case for these interfaces we'd just
> require userspace to unset the root block. I'd consider the STP use
> case the most odd of all. The caveat to this approach is that 3.8
> would be needed (or the root block patches cherry-picked) for base
> kernels older than 3.8.
> 
> Stephen?
> 
>   Luis

Don't add IFF_ flags that add yet another API hook into the bridge.
Please only use the netlink/sysfs flags fields that already exist
for new features.
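[Editor's note: the per-port root blocking Stephen alludes to already exists as a bridge port attribute reachable via sysfs and netlink (kernel 3.4+). A sketch, where br0 and vif1.0 are placeholder names:]

```shell
# Mark a bridge port so it can never become the root port, using the
# existing per-port attribute instead of a new IFF_ flag.

# Via sysfs:
echo 1 > /sys/class/net/br0/brif/vif1.0/root_block

# Or via netlink, using iproute2's bridge tool:
bridge link set dev vif1.0 root_block on

# Inspect the detailed port state to confirm:
bridge -d link show dev vif1.0
```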

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:10:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:10:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAfD-0003dF-KV; Wed, 19 Feb 2014 17:10:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAfB-0003d7-Oh
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:10:25 +0000
Received: from [85.158.139.211:15680] by server-10.bemta-5.messagelabs.com id
	96/E6-08578-185E4035; Wed, 19 Feb 2014 17:10:25 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392829823!4989482!1
X-Originating-IP: [209.85.217.174]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27241 invoked from network); 19 Feb 2014 17:10:24 -0000
Received: from mail-lb0-f174.google.com (HELO mail-lb0-f174.google.com)
	(209.85.217.174)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:10:24 -0000
Received: by mail-lb0-f174.google.com with SMTP id l4so506167lbv.33
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:10:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=/ZbjwUybhgEXXMtQ0H5gNcu66gdrG4Aq8muRbuFbtsI=;
	b=Jy/qIUzOw2fW7wM5Whq2ImHXeTs6zeXia9C66i0HgLN9D8GO6hIbSxaHBYGqPORlSn
	fZBUuBu16eaRN46dxf1VL7V/PWpjI5Rxl6LCwfwXJMK44BzjoxYNtPcdn5MUrDbxNEfr
	5pRoNh9H7yWLlQmPMkvxf3Lc3AZPe0Zf/wcUH3OEzBDOYrjMImjUC5X5jEAC3SLg2gMR
	YuUWanES/GU+gXucmBW537B5pvwTLcDdLGkdah25+FN0orwUpWjfdWm8oqJ7kN4f3mHT
	9u3GoH5lEt28z13DOSn+Au7VSwh6EgJqbh3wOYn2+EO7WzG3rUebbPUaFCw6OqqlL4Ws
	aNZw==
X-Received: by 10.153.7.137 with SMTP id dc9mr27121199lad.25.1392829823616;
	Wed, 19 Feb 2014 09:10:23 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 09:10:03 -0800 (PST)
In-Reply-To: <1392803304.23084.95.camel@kazak.uk.xensource.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<5301E411.5060908@citrix.com>
	<CAB=NE6VxNByeWGk6_Ow7WgxA3HCwGBjrjL9MNVRGsEfFyeKTdw@mail.gmail.com>
	<1392803304.23084.95.camel@kazak.uk.xensource.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:10:03 -0800
X-Google-Sender-Auth: L98WOiZboGPzHeoIcYc-ZkeHgoE
Message-ID: <CAB=NE6Wp+qypMkjkXc9QjnnLRjkS_PRMQLbj0QGnR8KTTaL5oA@mail.gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	xen-devel@lists.xenproject.org,
	David Vrabel <david.vrabel@citrix.com>, kvm@vger.kernel.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [RFC v2 0/4] net: bridge / ip optimizations for
 virtual net backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 1:48 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-18 at 11:43 -0800, Luis R. Rodriguez wrote:
>>
>> New motivation: removing IPv4 and IPv6 from the backend interfaces can
>> save a lot of boilerplate run-time code, keep triggers from ever
>> taking place, and simplify the backend interfaces. If there is no use
>> for IPv4 and IPv6 interfaces, why do we have them? Note: I have yet to
>> test the NAT case.
>
> I think you need to do that test before you can unequivocally state
> that there is no use for IPv4/6 interfaces here.

Agreed, but note that Zoltan stated that in the routing case IPv4 or
IPv6 addresses can be used on the backends, so that already rules this
out. Unless, of course, we want to enable this by default (for
simplicity) and have userspace poke to get IPv4 / IPv6 back if by
default no interfaces were enabled. Even though backend interfaces
would stand to gain in the average situation from this simplicity, I
don't think the userspace requirements are worth it. Someone with
hundreds of guests (that don't do routing on the backend, as clarified
by Zoltan) may want to test my patch, though, to see if there are any
reasonable savings in getting these guests up and running.

Anyone itching for the above?

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:13:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:13:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAiB-0003rI-9L; Wed, 19 Feb 2014 17:13:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAiA-0003rB-7C
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:13:30 +0000
Received: from [193.109.254.147:15675] by server-15.bemta-14.messagelabs.com
	id 87/1B-10839-936E4035; Wed, 19 Feb 2014 17:13:29 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392830008!5473641!1
X-Originating-IP: [209.85.217.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20703 invoked from network); 19 Feb 2014 17:13:29 -0000
Received: from mail-lb0-f170.google.com (HELO mail-lb0-f170.google.com)
	(209.85.217.170)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:13:29 -0000
Received: by mail-lb0-f170.google.com with SMTP id u14so514561lbd.1
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:13:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=aB6qT3riHDIX8LHH4CDMjUpUfC0NET2W+6PtcTkjij0=;
	b=PulXxPXBDlkbu/yYwghXH04YR6EWz8FAha2HgMlaa3CTho/LJqXc6Nbcq9jRC3PY8t
	cI1JlcBCqCgAKF4U+PNY8m6xmtgHImQnB4/6n3buTcLDgf+iHZI9gOte9fYbweTMp2hO
	oLMsmST84lgwY1MuLnRX/voksxu7sGDrBknvABqYlfZnLuZxeMwNk2fETrgFwpZLAK3V
	sjbAn3LzaLmkGP4vFrQTOQO9Nb9geXKegO+Yy/adW/AVpkoum55LGE/Qcjeoc5sL+PBH
	1TvLWbxrauSjG/8tcfJ2vwx69A2XXZT7ngAjUzpb7y8FmgEOvDVZR8GnLa/RgyL1yeU7
	BfxA==
X-Received: by 10.152.19.66 with SMTP id c2mr1969753lae.54.1392830008123; Wed,
	19 Feb 2014 09:13:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 09:13:07 -0800 (PST)
In-Reply-To: <20140218134257.667efe23@nehalam.linuxnetplumber.net>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<20140218134257.667efe23@nehalam.linuxnetplumber.net>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:13:07 -0800
X-Google-Sender-Auth: I1aEJ0YI7qnebgkJqw7BsLxSmyM
Message-ID: <CAB=NE6WQPaWeN+LT6DteetexZEThY-sLeNsYZbe0N7HROtH9cA@mail.gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Dan Williams <dcbw@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 1:42 PM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> On Tue, 18 Feb 2014 13:19:15 -0800
> "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>
>> Sure, but note that both the disable_ipv6 and accept_dad sysctl
>> parameters are global. ipv4 and ipv6 interfaces are created upon
>> NETDEVICE_REGISTER, which will get triggered when a driver calls
>> register_netdev(). The goal of this patch was to enable an early
>> optimization for drivers that have no need ever for ipv4 or ipv6
>> interfaces.
>
> The trick with ipv6 is to register the device, then have userspace
> do the ipv6 sysctl before bringing the device up.

Nice, thanks!

  Luis
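[Editor's note: the ordering Stephen describes might look like this in practice; a sketch, where vif1.0 is a placeholder interface name:]

```shell
# The device is registered first (the driver's register_netdev()),
# which creates the per-interface sysctl tree. Disable IPv6 on the
# interface while it is still down...
sysctl -w net.ipv6.conf.vif1.0.disable_ipv6=1

# ...and only then bring it up, so SLAAC and DAD never run:
ip link set dev vif1.0 up
```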

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:20:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:20:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAp2-0004FP-PI; Wed, 19 Feb 2014 17:20:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGAp1-0004FK-CJ
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:20:35 +0000
Received: from [85.158.139.211:62098] by server-14.bemta-5.messagelabs.com id
	47/06-27598-2E7E4035; Wed, 19 Feb 2014 17:20:34 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1392830433!4998423!1
X-Originating-IP: [209.85.215.52]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2640 invoked from network); 19 Feb 2014 17:20:34 -0000
Received: from mail-la0-f52.google.com (HELO mail-la0-f52.google.com)
	(209.85.215.52)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:20:34 -0000
Received: by mail-la0-f52.google.com with SMTP id c6so519645lan.11
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:20:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=D1A9JxL+tk/Ri0FxvmbHynfLryduRoZBYUCUoIWck1w=;
	b=Y+y9SkhLwR4RO03sThb0yN9+rGeHuf9gPBrOUIj65kkc8ASzURnjJXjv0ePBZy/Adj
	IHGFeZT8wG5KNpsKR6SPrHIDOQNoc6sHiXq//8Ny8xV5l9dQpOomg5y9e+JUmEpLlOpG
	GvX/KpCFs7+XgGVGoL137WrFxkY94qsXow1jUbiYfK/Z/UvHDGUw/SL7gIlfyq8ce6WY
	TBcOu9VI3Yvy/e/t4vzVBeNqsw1bI8pnDpN1tbpOKoskMboCgGrPMutj+5DWauQYfD6Y
	BMJuuYv4hoz47KjHvI+Om+QMCCYpyrgoHqG5CbVR/lX7wfoMVd9ViBo6SPqzrjRCbdhX
	ePFw==
X-Received: by 10.152.6.101 with SMTP id z5mr1990273laz.53.1392830432960; Wed,
	19 Feb 2014 09:20:32 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 09:20:12 -0800 (PST)
In-Reply-To: <1392828325.21976.6.camel@dcbw.local>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:20:12 -0800
X-Google-Sender-Auth: QZP8mEwLEjoaVd71dj1hVNM6nv0
Message-ID: <CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
To: Dan Williams <dcbw@redhat.com>, Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, xen-devel@lists.xenproject.org,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 8:45 AM, Dan Williams <dcbw@redhat.com> wrote:
> On Tue, 2014-02-18 at 13:19 -0800, Luis R. Rodriguez wrote:
>> On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams <dcbw@redhat.com> wrote:
>> > On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
>> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>> >>
>> >> Some interfaces do not need to have any IPv4 or IPv6
>> >> addresses, so enable an option to specify this. One
>> >> example where this is observed are virtualization
>> >> backend interfaces which just use the net_device
>> >> constructs to help with their respective frontends.
>> >>
>> >> This should optimize boot time and complexity on
>> >> virtualization environments for each backend interface
>> >> while also avoiding triggering SLAAC and DAD, which is
>> >> simply pointless for these type of interfaces.
>> >
>> > Would it not be better/cleaner to use disable_ipv6 and then add a
>> > disable_ipv4 sysctl, then use those with that interface?
>>
>> Sure, but note that both the disable_ipv6 and accept_dad sysctl
>> parameters are global. ipv4 and ipv6 interfaces are created upon
>> NETDEVICE_REGISTER, which will get triggered when a driver calls
>> register_netdev(). The goal of this patch was to enable an early
>> optimization for drivers that have no need ever for ipv4 or ipv6
>> interfaces.
>
> Each interface gets override sysctls too though, eg:
>
> /proc/sys/net/ipv6/conf/enp0s25/disable_ipv6

I hadn't seen those, thanks!

> which is the one I meant; you're obviously right that the global ones
> aren't what you want here.  But the specific ones should be suitable?

Under the approach Stephen mentioned, by first ensuring the interface
is down, yes. There's one use case I can think of that would still
want the patch though; more on that below.

> If you set that on a per-interface basis, then you'll get EPERM or
> something whenever you try to add IPv6 addresses or do IPv6 routing.

Neat, thanks.

>> Zoltan has noted some use cases for IPv4 or IPv6 addresses on
>> backends though, so this is no longer applicable as a
>> requirement. The ipv4 sysctl still seems like a reasonable
>> approach to enable network optimizations in topologies where
>> it's known we won't need them, but we'd need to consider a much more
>> granular solution, not just global as it is now for disable_ipv6, and
>> we'd also have to figure out a clean way to avoid incurring the
>> cost of early address interface addition upon register_netdev().
>>
>> Given that we have a use case for ipv4 and ipv6 addresses on
>> xen-netback we no longer have an immediate use case for such early
>> optimization primitives though, so I'll drop this.
>>
>> > The IFF_SKIP_IP seems to duplicate at least part of what disable_ipv6 is
>> > already doing.
>>
>> disable_ipv6 is global; the goal was to make this granular and skip
>> the cost upon early boot, but it's been clarified we don't need this.
>
> Like Stephen says, you need to make sure you set them before IFF_UP, but
> beyond that, wouldn't the interface-specific sysctls work?

Yeah, that'll do it, unless there is a measurable run-time benefit to
never adding these in the first place. Consider a host with tons of
guests; I'm not sure how many counts as 'a lot' these days. One would
have to measure how much this reduces the time it takes to boot them
up. As discussed in the other threads, there *are* some use cases for
assigning IPv4 or IPv6 addresses to the backend interfaces:
routing through them (although it's unclear to me whether iptables
could be used instead, Zoltan?). So at least for now there is no clear
requirement to remove these interfaces or to not have them at all. The
boot-time cost savings should still be considered if this is
ultimately desirable. I saw tons of timers and events that would get
triggered with any IPv4 or IPv6 interface lying around.
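For reference, a minimal sketch of the per-interface approach discussed
above. The interface name "vif1.0" and ordering are illustrative; the
key point, per Stephen's note, is that the sysctl must be set while the
interface is still down, before IFF_UP:

```shell
# Hypothetical backend interface name; substitute your own.
# Ensure the interface is down first, so disable_ipv6 takes effect
# before any SLAAC/DAD is triggered.
ip link set dev vif1.0 down

# Per-interface override (the /proc path is used because interface
# names containing '.' confuse sysctl's dotted notation).
echo 1 > /proc/sys/net/ipv6/conf/vif1.0/disable_ipv6

# Only now bring the interface up; no IPv6 link-local address is added.
ip link set dev vif1.0 up
```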

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:21:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAq1-0004IX-9d; Wed, 19 Feb 2014 17:21:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGApz-0004IK-74
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:21:35 +0000
Received: from [85.158.143.35:27158] by server-1.bemta-4.messagelabs.com id
	54/03-31661-E18E4035; Wed, 19 Feb 2014 17:21:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392830493!6877911!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24488 invoked from network); 19 Feb 2014 17:21:33 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:21:33 -0000
Received: by mail-ee0-f45.google.com with SMTP id b15so370530eek.18
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:21:33 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=z0ySDQq9NelUJ9t0SqwBW6Ss7a6KXsNnRu6vNBRWl/Q=;
	b=UmXdNm+otGyBTzTQrMOL8EDK2LuyprFCzw1IdrpO1QqDL9Ou83K1GXV1UBO+zrRJ2R
	CBVPdCXOA4uzhiFxQqKWuDYJ2wWzudVQgL/OR72KTUQuBrTPqDYN3qAsh+0gCLvN9EfC
	ZLRHjSI6+lj7w07e3mi1WMXeSoqfch1L8lj0sBr7255eT+BeamNU5EvAtPMAKR2mMuyt
	frj6CDsbL/OGRMrFnipi14dVkLcCmXxhuyCLNcLMzrZXFFDTjx17wxt8dBP6wR9a0pwn
	++fomE55eFfh5laVuGGOmH2oheElKkYzYLlZ4epEVoJossrat9utP6kAYa9+r0sEfNh1
	/KQQ==
X-Gm-Message-State: ALoCoQln3HUHmzSLoNqxABGrknb8QzvIA7Es0Uplri5zKqIMvbUHYqTtGoNMJIzdQJgIBYe/IbaF
X-Received: by 10.14.183.132 with SMTP id q4mr4288715eem.91.1392830493631;
	Wed, 19 Feb 2014 09:21:33 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id j42sm2985536eep.21.2014.02.19.09.21.31
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:21:32 -0800 (PST)
Message-ID: <5304E81A.3050703@linaro.org>
Date: Wed, 19 Feb 2014 17:21:30 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
	<1392819327.29739.85.camel@kazak.uk.xensource.com>
In-Reply-To: <1392819327.29739.85.camel@kazak.uk.xensource.com>
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 02:15 PM, Ian Campbell wrote:
> On Fri, 2014-02-07 at 17:43 +0000, Julien Grall wrote:
>> This patch adds support for the ARM architected SMMU driver. It's based on
>> the Linux driver (drivers/iommu/arm-smmu) at commit 89ac23cd.
> 
> Aha, here comes all the hard stuff ;-)
> 
> Could you try and briefly enumerate the areas which you had to change
> please.

The main changes are:
        - Fault by default if the SMMU is enabled to translate an
          address (Linux is bypassing the SMMU)
        - Using the P2M page table instead of creating a new one
        - Dropped stage-1 support
        - Dropped chained SMMUs support for now
        - Reworked device assignment and the structure

> Some comments on e.g. which translation context type we are using and
> how we are configuring things etc might also be helpful here in
> understanding what is going on.

Honestly, the configuration part was mostly copied from Linux. Some bits
are still fuzzy for me. I will try to improve the commit message.

> Also could you give details of the test setup you used, was it just
> booting dom0 on Midway with these patches? Was the DTB complete etc?

I had to modify the device tree by applying the following patch:
http://www.spinics.net/lists/arm-kernel/msg301163.html

I've tried to boot dom0 and a guest with LVM (a patch is coming to
remove swiotlb when the device is protected by an IOMMU).

> 
>> + * This driver currently supports:
>> + *  - SMMUv1 and v2 implementations (didn't try v2 SMMU)
> 
> I guess this is Will's original comment, I thought SMMU-400, which
> you've tried, was v2?

No, it's SMMU v1. As I understand it:
        - SMMU-400 => SMMU v1
        - SMMU-500 => SMMU v2

The words in parentheses were added by me because I don't have a test
box with an SMMU v2.

> 
>> + *  - Stream-matching and stream-indexing
>> + *  - v7/v8 long-descriptor format
>> + *  - Non-secure access to the SMMU
>> + *  - 4k pages, p2m shared with the processor
>> + *  - Up to 40-bit addressing
>> + *  - Context fault reporting
>> + */
>> +
>> +#include <xen/config.h>
>> +#include <xen/delay.h>
>> +#include <xen/errno.h>
>> +#include <xen/irq.h>
>> +#include <xen/lib.h>
>> +#include <xen/list.h>
>> +#include <xen/mm.h>
>> +#include <xen/rbtree.h>
>> +#include <xen/sched.h>
>> +#include <asm/atomic.h>
>> +#include <asm/device.h>
>> +#include <asm/io.h>
>> +#include <asm/platform.h>
>> +
>> +#define SZ_4K                               (1 << 12)
>> +#define SZ_64K                              (1 << 16)
>> +
>> +/* Driver options */
>> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
> 
> Is this just retained to reduce the deviation from the Linux driver?
> It's no use to us I think. (I suppose that goes for a bunch of other
> stuff, eg.. the PGSZ_4K stuff, which I will avoid commenting on
> further).

SZ_4K and SZ_64K are used later in the code. The
SMMU_OPT_SECURE_CONFIG_ACCESS option is used on Midway because the
SMMU there is broken.

> 
>> +
>> +void arm_smmu_iommu_dom0_init(struct domain *d)
>> +{
>> +    struct arm_smmu_device *smmu;
>> +    struct rb_node *node;
>> +
>> +    printk(XENLOG_DEBUG "arm-smmu: Initialize dom0\n");
>> +
>> +    list_for_each_entry( smmu, &arm_smmu_devices, list )
>> +    {
>> +        for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
>> +        {
>> +            struct arm_smmu_master *master;
>> +
>> +            master = container_of(node, struct arm_smmu_master, node);
>> +
>> +            if ( dt_device_used_by(master->dt_node) == DOMID_XEN ||
>> +                 platform_device_is_blacklisted(master->dt_node) )
>> +                continue;
>> +
>> +            arm_smmu_attach_dev(d, master->dt_node);
> 
> Should this not be driven from the same loop as the MMIO mapping setup
> in dom0 build? Otherwise isn't there a danger that they won't
> correspond?

In the second version of the patch series I moved this code out into
map_device.

> 
> A "master" here is a "bus master" i.e. a device, right?



>> +    /* Even if the device can't be initialized, we don't want to give to
>> +     * dom0 the smmu device
> 
> "give the smmu device to dom0"
> 
> 
> I obviously haven't reviewed all this code in detail, but I have skimmed
> it and nothing leapt out.

Thanks for the review!

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:23:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGArV-0004PT-PO; Wed, 19 Feb 2014 17:23:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGArT-0004PL-Lo
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:23:07 +0000
Received: from [85.158.143.35:50731] by server-1.bemta-4.messagelabs.com id
	59/C4-31661-A78E4035; Wed, 19 Feb 2014 17:23:06 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392830585!6898240!1
X-Originating-IP: [209.85.215.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18601 invoked from network); 19 Feb 2014 17:23:06 -0000
Received: from mail-ea0-f175.google.com (HELO mail-ea0-f175.google.com)
	(209.85.215.175)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:23:06 -0000
Received: by mail-ea0-f175.google.com with SMTP id z10so188948ead.20
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:23:05 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=w8slK5inSM3i5ftoMWcuyn+mxcVljv0otZPu05HmVoM=;
	b=Z+Me9PUwGcZV+Z3uJMPq5ZGFSDWyKW2isUgxwQmeN6FPGZ4PmFqp4aRDu9HrtXo/8u
	4PfiJPZzSwIdldfdzk4ubdsVkLgLbklxFQb6eKY88qLpa6orHSUyNGMSPkgKbjDbN5kO
	hwWOrwp2WbuN6N2oQwkCf6+llQ6UiPwgYkOuDgeNP4P4FS+A3d4wYadhZhxtdJrKXMqP
	+mWCEzLOOpKr+qr+GGt5CWbixAXAkK/hMtgeG0gHNAxJ3LyLoD1P+QWwDeMdhvomNfJO
	KrL5MJKAS8KR9iaKnIqsFAZYssw2Y9nnlfwA0ZHfquRDD01t19rP+eY+zGSVgM5OL4Bn
	YakA==
X-Gm-Message-State: ALoCoQm3jTfe5zublzlMKLurZMa2vMgL1m4Xkt11vD/yvM5HH7nZzxXWIMtJWXmMGceCtftBVY61
X-Received: by 10.15.51.196 with SMTP id n44mr41692816eew.27.1392830585267;
	Wed, 19 Feb 2014 09:23:05 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id j41sm2943906eey.15.2014.02.19.09.23.03
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:23:04 -0800 (PST)
Message-ID: <5304E877.7010901@linaro.org>
Date: Wed, 19 Feb 2014 17:23:03 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
	<1392819327.29739.85.camel@kazak.uk.xensource.com>
In-Reply-To: <1392819327.29739.85.camel@kazak.uk.xensource.com>
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 02:15 PM, Ian Campbell wrote:
> A "master" here is a "bus master" i.e. a device, right?

I forgot to answer this part: yes.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:27:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAvK-0004dm-FC; Wed, 19 Feb 2014 17:27:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGAvG-0004df-1X
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:27:05 +0000
Received: from [85.158.143.35:60397] by server-3.bemta-4.messagelabs.com id
	75/A8-11539-569E4035; Wed, 19 Feb 2014 17:27:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392830820!6898504!1
X-Originating-IP: [74.125.83.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29919 invoked from network); 19 Feb 2014 17:27:00 -0000
Received: from mail-ee0-f42.google.com (HELO mail-ee0-f42.google.com)
	(74.125.83.42)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:27:00 -0000
Received: by mail-ee0-f42.google.com with SMTP id b15so379627eek.29
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:27:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=uKnZK9wdngTxOIjz3NSx64avRR3yApilrb4afE0iTgs=;
	b=NAsE3QvPmp2ZMknxvRig3+6HSRvOFHib0ZT1wXv46jd31Zf1dKfZde1PsBl3eZmt20
	um12Io8lOM3PSj/ahKD9PbmEbX3f57WI81n4B+6H0Lmxxu5mSNRTC2aBxoAbSS2jmt6X
	cCkD/fSJ27WJcl8q7g/ZyR6z3Hy2F61/gKPVEt6JBuw06BG19SgaDbtWi9fN3qUq+Wsa
	DLd62CyVjHgHwz5rkLRD7vy8alaFFKSv/X/WPAkA5Y87JcAifwi+As7VAdct6MGb3wc6
	kJIkDD/QCIO5UB4bTn1NEm1sOoQmjBa7QRNJBjZ8+X7STg9EjUEcyLzmlU3XZOQW8cfI
	g35g==
X-Gm-Message-State: ALoCoQlajmN7Poww1GJpd6gWX2/h+VDHgPqUlzgDQlITBYr/93WAOh7LmZZ8HkTPJg+e0t27Fa+K
X-Received: by 10.15.94.135 with SMTP id bb7mr41051998eeb.48.1392830820049;
	Wed, 19 Feb 2014 09:27:00 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id d9sm3019354eei.9.2014.02.19.09.26.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:26:59 -0800 (PST)
Message-ID: <5304E961.3030104@linaro.org>
Date: Wed, 19 Feb 2014 17:26:57 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
	<1392820736.29739.86.camel@kazak.uk.xensource.com>
	<5304C4E1.2070901@linaro.org>
	<5304D6BC020000780011DC4F@nat28.tlf.novell.com>
In-Reply-To: <5304D6BC020000780011DC4F@nat28.tlf.novell.com>
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
	contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 03:07 PM, Jan Beulich wrote:
>>>> On 19.02.14 at 15:51, Julien Grall <julien.grall@linaro.org> wrote:
>> Adding Keir and Jan.
>>
>> On 02/19/2014 02:38 PM, Ian Campbell wrote:
>>> On Wed, 2014-02-19 at 14:35 +0000, Julien Grall wrote:
>>>
>>>>>> -static void gic_irq_enable(struct irq_desc *desc)
>>>>>> +static unsigned int gic_irq_startup(struct irq_desc *desc)
>>>>>
>>>>> unsigned? What are the error codes here going to be?
>>>>
>>>> This is the return type requested by hw_interrupt_type.startup.
>>>>
>>>> It seems that the return is never checked (even in x86 code). Maybe we
>>>> should change the prototype of hw_interrupt_type.startup.
>>>
>>> Worth investigating. I wonder if someone thought this might return the
>>> resulting interrupt number (those are normally unsigned int I think) or
>>> if it actually did used to etc.
>>
>> I think it was copied from Linux which also have unsigned int. I gave a
>> quick look to the code and this callback is only used in 2 places which
>> always return 0.
>>
>> Surprisingly, the wrapper irq_startup (kernel/irq/manage.c) is returning
>> an int...
>>
>> I can create a patch to return void instead of unsigned if everyone is
>> happy with this solution.
> 
> I'd be fine with such a change; I'd like to ask though that if you
> do this, you at the same time do the resulting possible cleanup:
> As an example, xen/arch/x86/msi.c:startup_msi_irq() becomes
> unnecessary then. It will in fact be interesting to see how many
> distinct startup routines actually remain.

Sure, I will take a look at it.
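[Editor's note: the change discussed above can be sketched as follows. These are simplified, hypothetical definitions, not the actual Xen declarations: the point is that every existing startup implementation returns 0 and no caller checks the value, so the callback could return void.]

```c
#include <stddef.h>

/* Opaque stand-in for Xen's IRQ descriptor. */
struct irq_desc;

/* Current shape: startup returns unsigned int, but the value is
 * never checked and every implementation returns 0. */
struct hw_interrupt_type_old {
    unsigned int (*startup)(struct irq_desc *desc);
    void (*shutdown)(struct irq_desc *desc);
};

/* Proposed shape: startup returns void, making trivial wrappers
 * such as x86's startup_msi_irq() unnecessary. */
struct hw_interrupt_type_new {
    void (*startup)(struct irq_desc *desc);
    void (*shutdown)(struct irq_desc *desc);
};

/* Records that the startup hook actually ran. */
static int started;

static void gic_irq_startup_new(struct irq_desc *desc)
{
    (void)desc;      /* real code would unmask/enable the IRQ here */
    started = 1;
}
```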

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:27:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:27:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAvQ-0004e9-UR; Wed, 19 Feb 2014 17:27:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGAvP-0004e1-63
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:27:11 +0000
Received: from [193.109.254.147:53556] by server-10.bemta-14.messagelabs.com
	id BA/56-10711-E69E4035; Wed, 19 Feb 2014 17:27:10 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392830828!5465085!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11499 invoked from network); 19 Feb 2014 17:27:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:27:09 -0000
X-IronPort-AV: E=Sophos;i="4.97,506,1389744000"; d="scan'208";a="102267186"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 17:27:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 12:27:01 -0500
Message-ID: <1392830820.29739.97.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Wed, 19 Feb 2014 17:27:00 +0000
In-Reply-To: <5304E81A.3050703@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
	<1392819327.29739.85.camel@kazak.uk.xensource.com>
	<5304E81A.3050703@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 17:21 +0000, Julien Grall wrote:
> > 
> >> + * This driver currently supports:
> >> + *  - SMMUv1 and v2 implementations (didn't try v2 SMMU)
> > 
> > I guess this is Will's original comment, I thought SMMU-400, which
> > you've tried, was v2?
> 
> No it's SMMU v2. As I understand:
> 	- SMMU-400 => SMMU v1
>         - SMMU-500 => SMMU v2

Ah yes, that sounds more familiar.

> The words in parenthesis was added by me because I don't have a test box
> with SMMU v2.

OK.

> >> +#define SZ_4K                               (1 << 12)
> >> +#define SZ_64K                              (1 << 16)
> >> +
> >> +/* Driver options */
> >> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
> > 
> > Is this just retained to reduce the deviation from the Linux driver?
> > It's no use to us I think. (I suppose that goes for a bunch of other
> > stuff, eg.. the PGSZ_4K stuff, which I will avoid commenting on
> > further).
> 
> SZ_4K and SZ_64K is used later in the code.

But they are actually useless to us aren't they?

>  The
> SMMU_OPT_SECURE_CONFIG_ACCESS is used for midway as the SMMU is broken.

Ah, so this is effectively a quirk then.
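[Editor's note: a minimal, hypothetical sketch of how such a quirk bit is typically consumed. The struct and accessor names are made up for illustration; only the option define comes from the quoted patch.]

```c
/* Driver-option bit from the quoted patch: */
#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)

/* Probe code would set the bit for affected hardware (e.g. the
 * broken Midway SMMU mentioned above); register accessors test it
 * to decide whether to use the secure configuration interface. */
struct smmu_device {
    unsigned int options;   /* bitmask of SMMU_OPT_* quirks */
};

static int smmu_use_secure_access(const struct smmu_device *smmu)
{
    return (smmu->options & SMMU_OPT_SECURE_CONFIG_ACCESS) != 0;
}
```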

Ian/


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:31:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:31:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGAzJ-0004t4-LT; Wed, 19 Feb 2014 17:31:13 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGAzH-0004su-I7
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:31:11 +0000
Received: from [85.158.137.68:12959] by server-15.bemta-3.messagelabs.com id
	FB/92-19263-E5AE4035; Wed, 19 Feb 2014 17:31:10 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392831069!1706390!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18987 invoked from network); 19 Feb 2014 17:31:10 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:31:10 -0000
Received: by mail-ea0-f174.google.com with SMTP id m10so273289eaj.5
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:31:09 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=RUkt4cSeOa8Jvt044RPtaPYRJ/Gb+9M/kMo3wHqGJg8=;
	b=R4HCZ8pnxB5XflC1spTUr6qZXFNbQZ94O+NrKVUCALpLJoz6xRzv9oB6ScjT9go2Yi
	kEwEvyXr+S8ve3YdR1Ur+kxNnW0KAmTW++a9TWZC6wBQCDGeCqbB8i5gcS+HOUBWd8xU
	hAUGxc9N7XHfLqZ8JS7PJjFZ2/unvG76ByYMyZpBDsAy142+65CHDfHtYgpaN4u0r8Yh
	bxuwvS7nvw44QurgGn2AgfG4D4q/t9ugdBc0tOt42I7GFJeSfrdgau2h3xHcXdWASg9b
	XQPzdxXrCBbV8+KwfZwBi/DI2CoPJS2b3CoY2bAX2JBqwNvIA2dNSrkvpvOa1CaUsFbJ
	AHYQ==
X-Gm-Message-State: ALoCoQmfSaScW0BcRieVDEHg/u5xsJbKUPpK6YDa0gQ3I/tiDW/y1oVYqEsoC5h+54i6ds0KCpYh
X-Received: by 10.14.127.200 with SMTP id d48mr41761327eei.9.1392831069674;
	Wed, 19 Feb 2014 09:31:09 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm3133879eeg.5.2014.02.19.09.31.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:31:08 -0800 (PST)
Message-ID: <5304EA5B.50506@linaro.org>
Date: Wed, 19 Feb 2014 17:31:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
	<1392819327.29739.85.camel@kazak.uk.xensource.com>
	<5304E81A.3050703@linaro.org>
	<1392830820.29739.97.camel@kazak.uk.xensource.com>
In-Reply-To: <1392830820.29739.97.camel@kazak.uk.xensource.com>
Cc: patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 05:27 PM, Ian Campbell wrote:
> On Wed, 2014-02-19 at 17:21 +0000, Julien Grall wrote:
>>>
>>>> + * This driver currently supports:
>>>> + *  - SMMUv1 and v2 implementations (didn't try v2 SMMU)
>>>
>>> I guess this is Will's original comment, I thought SMMU-400, which
>>> you've tried, was v2?
>>
>> No it's SMMU v2. As I understand:
>> 	- SMMU-400 => SMMU v1
>>         - SMMU-500 => SMMU v2
> 
> Ah yes, that sounds more familiar.
> 
>> The words in parenthesis was added by me because I don't have a test box
>> with SMMU v2.
> 
> OK.
> 
>>>> +#define SZ_4K                               (1 << 12)
>>>> +#define SZ_64K                              (1 << 16)
>>>> +
>>>> +/* Driver options */
>>>> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
>>>
>>> Is this just retained to reduce the deviation from the Linux driver?
>>> It's no use to us I think. (I suppose that goes for a bunch of other
>>> stuff, eg.. the PGSZ_4K stuff, which I will avoid commenting on
>>> further).
>>
>> SZ_4K and SZ_64K is used later in the code.
> 
> But they are actually useless to us aren't they?

As we only use 4K pages in Xen, yes. I kept them because there are a few
places where the SMMU configuration is not the same.

If we want to support 64K pages in the future, it will be harder to add
that support if this stuff is removed.

The constants were added by me, as they don't exist in Xen. I can move
them into the generic code.
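[Editor's note: a small sketch of the point being made. The SZ_* defines are from the quoted patch; the helper function is hypothetical, showing why keeping both constants makes a future 64K-page port easier.]

```c
/* Size constants from the quoted patch: */
#define SZ_4K   (1 << 12)
#define SZ_64K  (1 << 16)

/* Hypothetical helper: pick the SMMU translation granule matching
 * the page size Xen is built with. Today only SZ_4K is used, but a
 * 64K-page port would only need to change the value passed in. */
static unsigned long smmu_granule_for_page_size(unsigned long pagesize)
{
    return (pagesize == SZ_64K) ? SZ_64K : SZ_4K;
}
```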

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 05:27 PM, Ian Campbell wrote:
> On Wed, 2014-02-19 at 17:21 +0000, Julien Grall wrote:
>>>
>>>> + * This driver currently supports:
>>>> + *  - SMMUv1 and v2 implementations (didn't try v2 SMMU)
>>>
>>> I guess this is Will's original comment, I thought SMMU-400, which
>>> you've tried, was v2?
>>
>> No it's SMMU v2. As I understand:
>> 	- SMMU-400 => SMMU v1
>>         - SMMU-500 => SMMU v2
> 
> Ah yes, that sounds more familiar.
> 
>> The words in parenthesis was added by me because I don't have a test box
>> with SMMU v2.
> 
> OK.
> 
>>>> +#define SZ_4K                               (1 << 12)
>>>> +#define SZ_64K                              (1 << 16)
>>>> +
>>>> +/* Driver options */
>>>> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
>>>
>>> Is this just retained to reduce the deviation from the Linux driver?
>>> It's no use to us I think. (I suppose that goes for a bunch of other
>>> stuff, eg.. the PGSZ_4K stuff, which I will avoid commenting on
>>> further).
>>
>> SZ_4K and SZ_64K is used later in the code.
> 
> But they are actually useless to us aren't they?

As we only use 4K pages in Xen, yes. I kept it because there are a few
places where the SMMU configuration is not the same.

If we want to support 64K pages in the future, it will be harder to add
that support if this code is removed.

I added the constant myself, as it doesn't exist in Xen. I can
move it into the generic code.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:38:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGB6V-0005FM-4N; Wed, 19 Feb 2014 17:38:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGB6T-0005Ey-K3
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:38:37 +0000
Received: from [85.158.139.211:14420] by server-3.bemta-5.messagelabs.com id
	B6/2D-13671-C1CE4035; Wed, 19 Feb 2014 17:38:36 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392831514!4999892!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21704 invoked from network); 19 Feb 2014 17:38:36 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 19 Feb 2014 17:38:36 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JHcQKq008213
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 17:38:27 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1JHcPmJ015858
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 17:38:26 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1JHcPLE015836; Wed, 19 Feb 2014 17:38:25 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 09:38:24 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E53BD1C0954; Wed, 19 Feb 2014 12:38:22 -0500 (EST)
Date: Wed, 19 Feb 2014 12:38:22 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jiang Liu <jiang.liu@linux.intel.com>
Message-ID: <20140219173822.GF11365@phenom.dumpdata.com>
References: <1392789739-32317-1-git-send-email-jiang.liu@linux.intel.com>
	<1392789739-32317-3-git-send-email-jiang.liu@linux.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392789739-32317-3-git-send-email-jiang.liu@linux.intel.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Tony Luck <tony.luck@intel.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	linux-kernel@vger.kernel.org, linux-acpi@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Len Brown <lenb@kernel.org>, Lv Zheng <lv.zheng@intel.com>
Subject: Re: [Xen-devel] [Patch v2 3/5] xen,
 acpi_pad: use acpi_evaluate_ost() to replace open-coded version
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 02:02:17PM +0800, Jiang Liu wrote:
> Use public function acpi_evaluate_ost() to replace open-coded
> version of evaluating ACPI _OST method.
> 

Looks OK to me.

> Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
> ---
>  drivers/xen/xen-acpi-pad.c |   26 +++++++-------------------
>  1 file changed, 7 insertions(+), 19 deletions(-)
> 
> diff --git a/drivers/xen/xen-acpi-pad.c b/drivers/xen/xen-acpi-pad.c
> index 40c4bc0..f83b754 100644
> --- a/drivers/xen/xen-acpi-pad.c
> +++ b/drivers/xen/xen-acpi-pad.c
> @@ -77,27 +77,14 @@ static int acpi_pad_pur(acpi_handle handle)
>  	return num;
>  }
>  
> -/* Notify firmware how many CPUs are idle */
> -static void acpi_pad_ost(acpi_handle handle, int stat,
> -	uint32_t idle_nums)
> -{
> -	union acpi_object params[3] = {
> -		{.type = ACPI_TYPE_INTEGER,},
> -		{.type = ACPI_TYPE_INTEGER,},
> -		{.type = ACPI_TYPE_BUFFER,},
> -	};
> -	struct acpi_object_list arg_list = {3, params};
> -
> -	params[0].integer.value = ACPI_PROCESSOR_AGGREGATOR_NOTIFY;
> -	params[1].integer.value =  stat;
> -	params[2].buffer.length = 4;
> -	params[2].buffer.pointer = (void *)&idle_nums;
> -	acpi_evaluate_object(handle, "_OST", &arg_list, NULL);
> -}
> -
>  static void acpi_pad_handle_notify(acpi_handle handle)
>  {
>  	int idle_nums;
> +	struct acpi_buffer param = {
> +		.length = 4,
> +		.pointer = (void *)&idle_nums,
> +	};
> +
>  
>  	mutex_lock(&xen_cpu_lock);
>  	idle_nums = acpi_pad_pur(handle);
> @@ -109,7 +96,8 @@ static void acpi_pad_handle_notify(acpi_handle handle)
>  	idle_nums = xen_acpi_pad_idle_cpus(idle_nums)
>  		    ?: xen_acpi_pad_idle_cpus_num();
>  	if (idle_nums >= 0)
> -		acpi_pad_ost(handle, 0, idle_nums);
> +		acpi_evaluate_ost(handle, ACPI_PROCESSOR_AGGREGATOR_NOTIFY,
> +				  0, &param);
>  	mutex_unlock(&xen_cpu_lock);
>  }
>  
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:47:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:47:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGBEW-0005WV-6t; Wed, 19 Feb 2014 17:46:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGBEU-0005WQ-Ph
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:46:54 +0000
Received: from [193.109.254.147:34489] by server-14.bemta-14.messagelabs.com
	id 56/A3-29228-E0EE4035; Wed, 19 Feb 2014 17:46:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392832013!1729038!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9871 invoked from network); 19 Feb 2014 17:46:53 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:46:53 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so392029eei.12
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:46:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ErYJXVHQSnLB379qgmrzVFwKczNve9kqiD8mI9z4K7c=;
	b=ScTvnkDsJVe7cizYBjaSaGptm2euqtHucdp74qA1+K9T6DBk+6F2pIG7/cbzyDHzFg
	UEqDuwFAA+WeYpK0GCfyfwCQeez4jpHKfzpEIxb7CIIuxsTSFaS5qj0XfCt+p4Eu743L
	2pQ6YcKvptGoqoIEUIM1SrpvT5DqgFVccjsruzcNRiaeaaT721Iw3BUYAhjgDyuC/G/W
	VAQj8cs6pUPBLpZ7Z4OHXf/H2pQs79d8k9vUhfAO7g97z0F3sL9Uj5Cmmyq89hNHALMv
	gksWgT8jIKurTOEMIqJoO2sk4MvwAYnmVJNmRL00XBGXRF/z7jWalZW+WCGBCr0G9gaz
	ruEA==
X-Gm-Message-State: ALoCoQkllg/7e00nVW9TbmvlshSLCco1gUEIF9mMc7gB6LCSgjwj8/0zj/jTn2X2VtzTo28Mmykp
X-Received: by 10.14.216.193 with SMTP id g41mr42052452eep.13.1392832012944;
	Wed, 19 Feb 2014 09:46:52 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m9sm3257960eeh.3.2014.02.19.09.46.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:46:51 -0800 (PST)
Message-ID: <5304EE0A.60605@linaro.org>
Date: Wed, 19 Feb 2014 17:46:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392154050-21649-1-git-send-email-julien.grall@linaro.org>
	<1392812466.29739.33.camel@kazak.uk.xensource.com>
In-Reply-To: <1392812466.29739.33.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH for-4.5] xen/serial: Don't leak memory
 mapping if the serial initialization has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:21 PM, Ian Campbell wrote:
> On Tue, 2014-02-11 at 21:27 +0000, Julien Grall wrote:
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Although you could also consider moving the dt_device_get_irq before the
> ioremap_attr in most cases.

I will use this solution.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:56:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGBO2-0005j4-CN; Wed, 19 Feb 2014 17:56:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGBO1-0005iz-24
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:56:45 +0000
Received: from [85.158.143.35:44245] by server-1.bemta-4.messagelabs.com id
	D1/E7-31661-C50F4035; Wed, 19 Feb 2014 17:56:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392832603!6887819!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15489 invoked from network); 19 Feb 2014 17:56:43 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:56:43 -0000
Received: by mail-ee0-f44.google.com with SMTP id c13so394742eek.31
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:56:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=RLo5+vlJ+eAi02r7HmjiELaGcqg4PA5gMWtulWHL1iQ=;
	b=K3xiARDYX9RsWBCtQf+tf8ODZ2LKTz5PDB5cXHo5fMxMZZp3G9ZfPVuTEWB4cOIHYc
	OWXkrNzqW8QWARP1Uq4dRHM7OlUIX5L3calNPb8O2dMS8Bo7YngzRSPTYw54DvjAIU3H
	kHeDC2W/676xQczOHb7ipTPhwJvX9akCP0DrPfvsLn60dl+TjFsZ1E+2eTaCuEa3SD5b
	BUVdobunAjX7ajNf4RVdvrawdCOEaVGHp79N38IygTirGNbCxS+eo9bYsMb+yngZq+li
	EaI0EVJqgFkm3acEcc7e7di2wKfHwLzzkWey0Jzod95HF6wJW3V/ECoD6wpLyO8DFOhd
	GoTg==
X-Gm-Message-State: ALoCoQn+n+gqQejKueTWEH+a9rMwt8DB4jBgdZ6r36F7j2Sdt3zgl+fvSn1XtSTwHuhpOjRWGwlU
X-Received: by 10.15.42.136 with SMTP id u8mr16952961eev.52.1392832603151;
	Wed, 19 Feb 2014 09:56:43 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id o45sm3334305eeb.18.2014.02.19.09.56.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:56:42 -0800 (PST)
Message-ID: <5304F058.6090503@linaro.org>
Date: Wed, 19 Feb 2014 17:56:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
	<1392808847.23084.138.camel@kazak.uk.xensource.com>
In-Reply-To: <1392808847.23084.138.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:20 AM, Ian Campbell wrote:
> On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
>> Now that the console supports earlyprintk, we can get a rid of
>> early_printk call.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks. Due to some conflicts, I have rebased this patch on top of my
patch "xen/serial: Don't leak memory mapping if the serial initialization
has failed".

I will resend this series later.

> Now all we need is a way to make it a runtime option :-)

I'll let you write a device tree parser in assembly ;).

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 17:56:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 17:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGBO2-0005j4-CN; Wed, 19 Feb 2014 17:56:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGBO1-0005iz-24
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:56:45 +0000
Received: from [85.158.143.35:44245] by server-1.bemta-4.messagelabs.com id
	D1/E7-31661-C50F4035; Wed, 19 Feb 2014 17:56:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392832603!6887819!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15489 invoked from network); 19 Feb 2014 17:56:43 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:56:43 -0000
Received: by mail-ee0-f44.google.com with SMTP id c13so394742eek.31
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:56:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=RLo5+vlJ+eAi02r7HmjiELaGcqg4PA5gMWtulWHL1iQ=;
	b=K3xiARDYX9RsWBCtQf+tf8ODZ2LKTz5PDB5cXHo5fMxMZZp3G9ZfPVuTEWB4cOIHYc
	OWXkrNzqW8QWARP1Uq4dRHM7OlUIX5L3calNPb8O2dMS8Bo7YngzRSPTYw54DvjAIU3H
	kHeDC2W/676xQczOHb7ipTPhwJvX9akCP0DrPfvsLn60dl+TjFsZ1E+2eTaCuEa3SD5b
	BUVdobunAjX7ajNf4RVdvrawdCOEaVGHp79N38IygTirGNbCxS+eo9bYsMb+yngZq+li
	EaI0EVJqgFkm3acEcc7e7di2wKfHwLzzkWey0Jzod95HF6wJW3V/ECoD6wpLyO8DFOhd
	GoTg==
X-Gm-Message-State: ALoCoQn+n+gqQejKueTWEH+a9rMwt8DB4jBgdZ6r36F7j2Sdt3zgl+fvSn1XtSTwHuhpOjRWGwlU
X-Received: by 10.15.42.136 with SMTP id u8mr16952961eev.52.1392832603151;
	Wed, 19 Feb 2014 09:56:43 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id o45sm3334305eeb.18.2014.02.19.09.56.41
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 19 Feb 2014 09:56:42 -0800 (PST)
Message-ID: <5304F058.6090503@linaro.org>
Date: Wed, 19 Feb 2014 17:56:40 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
	<1392808847.23084.138.camel@kazak.uk.xensource.com>
In-Reply-To: <1392808847.23084.138.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:20 AM, Ian Campbell wrote:
> On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
>> Now that the console supports earlyprintk, we can get rid of the
>> early_printk calls.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks. Due to some conflicts I have rebased this patch on top of my
patch "xen/serial: Don't leak memory mapping if the serial
initialization has failed".

I will resend this series later.

> Now all we need is a way to make it a runtime option :-)

I'll let you write a device tree parser in assembly ;).

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 18:00:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 18:00:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGBR8-0005uh-2B; Wed, 19 Feb 2014 17:59:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGBR6-0005uZ-H8
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 17:59:56 +0000
Received: from [85.158.143.35:34592] by server-1.bemta-4.messagelabs.com id
	9C/9A-31661-B11F4035; Wed, 19 Feb 2014 17:59:55 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392832794!6887744!1
X-Originating-IP: [209.85.215.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32217 invoked from network); 19 Feb 2014 17:59:54 -0000
Received: from mail-la0-f42.google.com (HELO mail-la0-f42.google.com)
	(209.85.215.42)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 17:59:54 -0000
Received: by mail-la0-f42.google.com with SMTP id hr13so556834lab.29
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 09:59:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=zAqQzD6gXSCKJdr2FGt54DT3nsVKm1OVC+MQrMEOT5I=;
	b=MnMC5lWr/L9Tq6vLjT69q9t4QXtb6RaZ5uiFJgH8ntWcKQzh+aakNehxOGmOZJOUrg
	CMhaO/QUiuvpw/03rolU0DO/WKfaK5dxkqZ/Fs542QsCk4cw1FqKPMPeg71nilEi1tlF
	W8/eqkJWqY2kHNIAjkLYqquH9NKm4nifO4qX6Rliczze/GA+jKGRq94MxhgtqGlzH+Us
	WvpxwZ1XO7VQmJP1NBSrOSwdonqzhLQB3OTa2K6npQGcgfaTc1tWopVZgGzkazOQfh+n
	Fgvtie3s01yv0NywQqYB6T0JNUysxXm5GbK/c6PNjLDuk+s1cXM6MK4S0OMxov/kS19P
	7YDA==
X-Received: by 10.112.142.230 with SMTP id rz6mr25579834lbb.0.1392832793956;
	Wed, 19 Feb 2014 09:59:53 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Wed, 19 Feb 2014 09:59:33 -0800 (PST)
In-Reply-To: <20140219090855.610c0e04@nehalam.linuxnetplumber.net>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 09:59:33 -0800
X-Google-Sender-Auth: 2W6AZCV3DN8PrHRsHj3OiUbjK2I
Message-ID: <CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 9:08 AM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> On Wed, 19 Feb 2014 09:02:06 -0800
> "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>
>> Folks, what if I repurpose my patch to use the IFF_BRIDGE_NON_ROOT
>> (or relabel it IFF_ROOT_BLOCK_DEF) flag as a default driver
>> preference set at initialization, so that root block is applied once
>> the device gets added to a bridge? The purpose would be to stop
>> drivers from using the high MAC address hack and streamline the use
>> of a random MAC address, thereby avoiding the possible duplicate
>> address situation for IPv6. In the STP use case for these interfaces
>> we'd just require userspace to unset the root block; I'd consider the
>> STP use case the most odd of all. The caveat to this approach is that
>> 3.8 would be needed (or the root block patches cherry-picked) for
>> base kernels older than 3.8.
>>
>> Stephen?
>>
>>   Luis
>
> Don't add IFF_ flags that add yet another API hook into the bridge.

The goal was not to add a userspace API, but rather to express a driver
initialization preference.

> Please only use the netlink/sysfs flags fields that already exist
> for new features.

Sure, but what if we know a driver wants the root block in most cases
and we want to make it the default, so that userspace is only needed to
toggle it off?
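
For illustration, the init-time default being discussed might look like
the sketch below. IFF_ROOT_BLOCK_DEF and fake_br_add_if() are
hypothetical names from this thread, not an existing kernel API;
BR_ROOT_BLOCK is a real per-port bridge flag since 3.8, but the bit
values and structs here are illustrative stubs.

```c
#include <assert.h>

/* Sketch of the proposal: IFF_ROOT_BLOCK_DEF and fake_br_add_if() are
 * hypothetical; BR_ROOT_BLOCK exists as a per-port bridge flag since
 * 3.8, but the bit values here are illustrative only. */
#define BR_ROOT_BLOCK      (1u << 0)
#define IFF_ROOT_BLOCK_DEF (1u << 1)

struct fake_netdev { unsigned int priv_flags; };
struct fake_brport { unsigned int flags; };

/* When a port joins a bridge, honour the driver's init-time preference
 * so userspace only has to act when it wants root block toggled off. */
static void fake_br_add_if(struct fake_brport *p,
                           const struct fake_netdev *dev)
{
    p->flags = 0;
    if ( dev->priv_flags & IFF_ROOT_BLOCK_DEF )
        p->flags |= BR_ROOT_BLOCK;
}
```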

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 18:07:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 18:07:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGBYF-0006CM-6j; Wed, 19 Feb 2014 18:07:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGBYE-0006CH-AG
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 18:07:18 +0000
Received: from [85.158.139.211:58741] by server-13.bemta-5.messagelabs.com id
	73/57-18801-5D2F4035; Wed, 19 Feb 2014 18:07:17 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392833236!4951084!1
X-Originating-IP: [209.85.215.169]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31639 invoked from network); 19 Feb 2014 18:07:16 -0000
Received: from mail-ea0-f169.google.com (HELO mail-ea0-f169.google.com)
	(209.85.215.169)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 18:07:16 -0000
Received: by mail-ea0-f169.google.com with SMTP id h10so545609eak.0
	for <xen-devel@lists.xenproject.org>;
	Wed, 19 Feb 2014 10:07:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=xsDuEUAwsuFPP+G5nPGuQNaieU4ZRZLOnyeSF+b+7wM=;
	b=WsfHOwpZhir5wClBs+QI1BrXJ9GAeCOL6cpwvXGgZ1alNFk1HP60TKoL2UQlZAtcJu
	teaYgojIX/Jx/aTEGavHF2MOt3g4wlGxxMx+yLEA1gnouyHcmiKTrcy3ilhOm/nVhXBt
	x8ipAFurmX6xUapVSDh8bCSahpY9VOpZLSImapAaIK3V9T9CwLLtZrXIDhxRnf1SUywK
	8mWn2zHqA2d4fU43gxhFNesv8AdMcEky584CBSx8IFXnYsYUfUWp2vjRxhdXoBnTBbNM
	TOQdwxXEkpWJSvE1bVF6ZprVhC57KXN4U0/OMn4q4RQi7QiMnm7IcNTt3jHjPxHNLS3w
	xG6g==
X-Gm-Message-State: ALoCoQmYEVUcXY0ACExyIYKBzsUS4HvFvP5JNuXgAV7tkox/CYLdxpy/th747UGw6+nPnshmWsZt
X-Received: by 10.14.246.68 with SMTP id p44mr4741521eer.72.1392833236349;
	Wed, 19 Feb 2014 10:07:16 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id y47sm3422642eel.14.2014.02.19.10.07.15
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Wed, 19 Feb 2014 10:07:15 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Wed, 19 Feb 2014 18:07:10 +0000
Message-Id: <1392833230-28706-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2] xen/serial: Don't leak memory mapping if the
	serial initialization has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The memory mapping is leaked when the serial driver fails to retrieve
the IRQ. We can safely move the ioremap call after the IRQ retrieval.

Also use ioremap_nocache instead of ioremap_attr in some serial drivers.
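
The ordering change can be sketched in miniature. The stubs below are
illustrative stand-ins for dt_device_get_irq()/ioremap_nocache(), not
Xen code; the counters exist only to make the leak observable.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins, not Xen code. */
static int fake_irq_ok;     /* 1: IRQ lookup succeeds, 0: it fails */
static int mappings_live;   /* outstanding "ioremap"ed regions     */

static void *fake_ioremap(void) { mappings_live++; return &mappings_live; }
static int fake_get_irq(void) { return fake_irq_ok ? 0 : -1; }

/* v1 ordering: map first, so a failed IRQ lookup leaks the mapping. */
static int init_buggy(void)
{
    void *regs = fake_ioremap();
    (void)regs;
    if ( fake_get_irq() != 0 )
        return -1;          /* regs is never unmapped: leak */
    return 0;
}

/* v2 ordering: retrieve the IRQ first, map only once it succeeded. */
static int init_fixed(void)
{
    if ( fake_get_irq() != 0 )
        return -1;          /* nothing mapped yet, nothing leaked */
    void *regs = fake_ioremap();
    (void)regs;
    return 0;
}
```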

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---

Ian, I have dropped your ack because the patch has fundamentally changed.

    Changes in v2:
        - s/ioremap_attr/ioremap_nocache
        - Move the ioremap call after retrieving the IRQ
---
 xen/drivers/char/exynos4210-uart.c |   13 +++++++------
 xen/drivers/char/omap-uart.c       |   15 ++++++++-------
 xen/drivers/char/pl011.c           |   15 +++++++--------
 3 files changed, 22 insertions(+), 21 deletions(-)

diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 0619575..150d49b 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -334,12 +334,6 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
         return res;
     }
 
-    uart->regs = ioremap_nocache(addr, size);
-    if ( !uart->regs )
-    {
-        early_printk("exynos4210: Unable to map the UART memory\n");
-        return -ENOMEM;
-    }
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
@@ -347,6 +341,13 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
         return res;
     }
 
+    uart->regs = ioremap_nocache(addr, size);
+    if ( !uart->regs )
+    {
+        early_printk("exynos4210: Unable to map the UART memory\n");
+        return -ENOMEM;
+    }
+
     uart->vuart.base_addr = addr;
     uart->vuart.size = size;
     uart->vuart.data_off = UTXH;
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index c1580ef..b29f610 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -326,13 +326,6 @@ static int __init omap_uart_init(struct dt_device_node *dev,
         return res;
     }
 
-    uart->regs = ioremap_attr(addr, size, PAGE_HYPERVISOR_NOCACHE);
-    if ( !uart->regs )
-    {
-        early_printk("omap-uart: Unable to map the UART memory\n");
-        return -ENOMEM;
-    }
-
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
@@ -340,6 +333,14 @@ static int __init omap_uart_init(struct dt_device_node *dev,
         return res;
     }
 
+    uart->regs = ioremap_nocache(addr, size);
+    if ( !uart->regs )
+    {
+        early_printk("omap-uart: Unable to map the UART memory\n");
+        return -ENOMEM;
+    }
+
+
     uart->vuart.base_addr = addr;
     uart->vuart.size = size;
     uart->vuart.data_off = UART_THR;
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index fd82511..fe99af6 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -248,14 +248,6 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
         return res;
     }
 
-    uart->regs = ioremap_attr(addr, size, PAGE_HYPERVISOR_NOCACHE);
-    if ( !uart->regs )
-    {
-        early_printk("pl011: Unable to map the UART memory\n");
-
-        return -ENOMEM;
-    }
-
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
@@ -263,6 +255,13 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
         return res;
     }
 
+    uart->regs = ioremap_nocache(addr, size);
+    if ( !uart->regs )
+    {
+        early_printk("pl011: Unable to map the UART memory\n");
+        return -ENOMEM;
+    }
+
     uart->vuart.base_addr = addr;
     uart->vuart.size = size;
     uart->vuart.data_off = DR;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 18:49:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 18:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCCH-0006kH-Sn; Wed, 19 Feb 2014 18:48:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGCCG-0006kC-PK
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 18:48:40 +0000
Received: from [85.158.143.35:49744] by server-2.bemta-4.messagelabs.com id
	7E/27-10891-88CF4035; Wed, 19 Feb 2014 18:48:40 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392835717!6912452!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28777 invoked from network); 19 Feb 2014 18:48:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 18:48:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,507,1389744000"; d="scan'208";a="102303525"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 18:48:36 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 13:48:35 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1WGCCA-0002pv-Sw;
	Wed, 19 Feb 2014 18:48:34 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>, <netdev@vger.kernel.org>
Date: Wed, 19 Feb 2014 18:48:34 +0000
Message-ID: <1392835714-26480-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: linux@eikelenboom.it, Paul Durrant <paul.durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>
Subject: [Xen-devel] [PATCH net] xen-netfront: reset skb network header
	before checksum
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In ed1f50c3a ("net: add skb_checksum_setup") we introduced some checksum
functions in the core driver. A subsequent change, b5cf66cd1
("xen-netfront: use new skb_checksum_setup function"), made netfront use
those functions in place of its own implementation.

However, with that change netfront is broken: it sees a lot of checksum
errors. That's because its own checksum implementation was a bit hacky
(dereferencing skb->data directly) while the new function is implemented
using ip_hdr(). The network header is not reset before the skb is passed
to the new function, so when the new function tries to do its job it is
confused and reports an error.

The fix is simple: we need to reset the network header before passing
the skb to the checksum function. Netback is not affected as it already
does the right thing.

Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Paul Durrant <paul.durrant@citrix.com>
---
 drivers/net/xen-netfront.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index ff04d4f..95041b6 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
 		skb->protocol = eth_type_trans(skb, dev);
+		skb_reset_network_header(skb);
 
 		if (checksum_setup(dev, skb)) {
 			kfree_skb(skb);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 18:50:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 18:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCDp-0006tL-Je; Wed, 19 Feb 2014 18:50:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGCDn-0006tD-Nr
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 18:50:15 +0000
Received: from [85.158.139.211:47914] by server-7.bemta-5.messagelabs.com id
	AB/12-14867-6ECF4035; Wed, 19 Feb 2014 18:50:14 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392835813!454956!1
X-Originating-IP: [209.85.128.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26940 invoked from network); 19 Feb 2014 18:50:14 -0000
Received: from mail-ve0-f173.google.com (HELO mail-ve0-f173.google.com)
	(209.85.128.173)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 18:50:14 -0000
Received: by mail-ve0-f173.google.com with SMTP id jw12so810573veb.4
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 10:50:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=2vLbxkHxvCYb4y+FsXsFvad2cZFVkhy6sklWx/e+a9E=;
	b=0GJuHqXl+F1MXrqcBpPC+yGN4BC9WIEXsNTHxEeA883/HWaoN2nriWWc2z0iov26AQ
	Re2ON77sp3gYO6zYiHMal3rWTzCWwlLJV0WBQ96L1FaPuLNysx+7sl7622jStj5W0AWf
	oPBbNYqfvYORWF55u/qOyxdL2tlus7+U6XMc855Nz7VZRcMbuEvxXXeZ6vOuXlbju3op
	EuVeqRgM5aHJ9t+pICyUDARanrLBmey9WIxA52jyJ7+S580u0T380kR4c53EFg5T7W1s
	9+49DnhTOa7hM4X2Oenru2aQMaTjZn3hmqHivha2caVytN8T6+/9Dh+K6Jt2sg05lwp8
	tUfQ==
MIME-Version: 1.0
X-Received: by 10.52.246.42 with SMTP id xt10mr21993159vdc.9.1392835812732;
	Wed, 19 Feb 2014 10:50:12 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Wed, 19 Feb 2014 10:50:12 -0800 (PST)
Date: Wed, 19 Feb 2014 10:50:12 -0800
Message-ID: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7944294558859415302=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7944294558859415302==
Content-Type: multipart/alternative; boundary=001a1136018ed9359004f2c6db4c

--001a1136018ed9359004f2c6db4c
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I'd like to know what should be configured in the HVM guest so that it
accepts the 'xm/xl shutdown' graceful shutdown signal.

'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on HVM);
however, whenever I disable it (xen_platform_pci=0), it does not work.

I also tried 'xl/xm trigger <vm> power' and this too does not work if
xen_platform_pci=0 is set. We do have a lot of PCI pass-through devices.

Our Xen cfg file looks like this:

# HVM specific
kernel = "hvmloader"
builder = "hvm"
device_model = "qemu-dm"

# Enable ACPI support
acpi = 1

# Enable serial console
serial = "pty"

# Enable VNC
vnc = 1
vnclisten = "0.0.0.0"

pci_msitranslate = 0

xen_platform_pci = 1

# Default behavior for following events
on_reboot = "destroy"


I'm using SLES SuSE 11 SP3.

Thanks,
/Saurabh

--001a1136018ed9359004f2c6db4c--


--===============7944294558859415302==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7944294558859415302==--



From xen-devel-bounces@lists.xen.org Wed Feb 19 18:56:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 18:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCJs-00077O-8z; Wed, 19 Feb 2014 18:56:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGCJq-00077J-5i
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 18:56:30 +0000
Received: from [85.158.143.35:60975] by server-3.bemta-4.messagelabs.com id
	DA/4B-11539-D5EF4035; Wed, 19 Feb 2014 18:56:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392836184!6914756!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27646 invoked from network); 19 Feb 2014 18:56:25 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 18:56:25 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JIuN14023693
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 18:56:23 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JIuMpt009983
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 18:56:22 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JIuMB8009980; Wed, 19 Feb 2014 18:56:22 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 10:56:22 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3FA171C0954; Wed, 19 Feb 2014 13:56:21 -0500 (EST)
Date: Wed, 19 Feb 2014 13:56:21 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Message-ID: <20140219185621.GC12300@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> Hi,
> 
> I'd like to know what should be configured in the HVM guest such that it
> accepts 'xm/xl shutdown' graceful shutdown signal.
> 
> 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on HVM)
> however whenever I disable xen_platform_pci=0, it does not work.

Right. That is expected.

> 
> I also tried 'xl/xm trigger <vm> power' and this too does not work if
> xen_platform_pci=0 is set. We do have lot of PCI pass-through devices.

Oh, that looks to be a bug then.

> 
> Our Xen cfg file looks like this :-
> 
> # HVM specific
> kernel = "hvmloader"
> builder = "hvm"
> device_model = "qemu-dm"
> 
> # Enable ACPI support
> acpi = 1
> 
> # Enable serial console
> serial = "pty"
> 
> # Enable VNC
> vnc = 1
> vnclisten = "0.0.0.0"
> 
> pci_msitranslate = 0
> 
> xen_platform_pci = 1
> 
> # Default behavior for following events
> on_reboot = "destroy"
> 
> 
> I'm using SLES SuSE 11 SP3.

As your initial domain or your guests? If it is with SLES, does the issue
show up if you use the latest version of Xen?
> 
> Thanks,
> /Saurabh

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:02:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCPe-0007N8-2r; Wed, 19 Feb 2014 19:02:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGCPc-0007N3-Ha
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 19:02:28 +0000
Received: from [193.109.254.147:31620] by server-7.bemta-14.messagelabs.com id
	08/28-23424-3CFF4035; Wed, 19 Feb 2014 19:02:27 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392836545!5457742!1
X-Originating-IP: [209.85.220.169]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13275 invoked from network); 19 Feb 2014 19:02:26 -0000
Received: from mail-vc0-f169.google.com (HELO mail-vc0-f169.google.com)
	(209.85.220.169)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:02:26 -0000
Received: by mail-vc0-f169.google.com with SMTP id hq11so858052vcb.0
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 11:02:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qVp7GAcnGlEAw1pZOZyacMpG6CWhEWzTj7Srlkqg0kw=;
	b=VwSlpUecbOKEUq9RJQoEQAm7ahPBJwp5GhQM2aXSWjWMvlUxYGgPojfy4RK/V9+rvc
	dmGC+p49L0iMDPu2QksWsjH4UM1SEas+s/+ZCeKGn+Lg8KHFtNDgChOOmuikjkLFrlQt
	rpUsgZIol7bruuKEdOxKRJBzcIHY2PU4QGjlUVx2+TfhyCYPfEdULyRHbzV5f5cxAT36
	8IUHfs3/KtTmZBdSYzrqQurnkRniQmH6DALvhIcCG1dzA4cfwvxpPR7ScZ7H7LFeoZSe
	p5pLG6y34X+Sy2tgjeZA+8urgqU6f7jxGJFPe8bTcv9HUcaO4F3h1BMX+w4HW2kaBQYp
	BPYA==
MIME-Version: 1.0
X-Received: by 10.52.116.102 with SMTP id jv6mr1228948vdb.94.1392836545333;
	Wed, 19 Feb 2014 11:02:25 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Wed, 19 Feb 2014 11:02:25 -0800 (PST)
In-Reply-To: <20140219185621.GC12300@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 11:02:25 -0800
Message-ID: <CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8494676037758122978=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8494676037758122978==
Content-Type: multipart/alternative; boundary=bcaec548a91383d13a04f2c707d4

--bcaec548a91383d13a04f2c707d4
Content-Type: text/plain; charset=ISO-8859-1

> I'm using SLES SuSE 11 SP3.
> As your initial domain or your guests? If it is with SLES, does the issue
> show up if you use the latest version of Xen?

We have two VMs: one is a WindRiver-based VM and the other is a SLES SuSE
11 SP2 VM.

The SuSE 11 SP2 VM shuts down properly with xen_platform_pci=0; however,
the WindRiver one does not. Hence the question: what config or driver do
I need to enable in the WindRiver HVM guest so that it accepts
'xl/xm trigger <vm> power'?

Thanks,
/Saurabh
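
[Editorial note: the two commands differ in mechanism, which is why only
one needs PV support. 'xl shutdown' normally asks the guest to shut down
via xenstore, which requires the PV platform device (xen_platform_pci=1),
while 'xl trigger <vm> power' injects a virtual ACPI power-button event
that a pure HVM guest can handle on its own -- but only if something
inside the guest listens for that event. As a sketch, assuming a generic
Linux guest running acpid (file names and the exact event string vary by
distro, and whether the WindRiver guest ships acpid at all would need
checking):]

```shell
# /etc/acpi/events/powerbtn -- route the power-button event to a handler
event=button/power.*
action=/etc/acpi/powerbtn.sh

# /etc/acpi/powerbtn.sh -- the handler starts an orderly shutdown
# (must be executable)
#!/bin/sh
/sbin/shutdown -h now "ACPI power button pressed"
```

With such a handler in place, 'xl trigger <vm> power' should result in a
clean guest shutdown even with xen_platform_pci=0.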


On Wed, Feb 19, 2014 at 10:56 AM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> > Hi,
> >
> > I'd like to know what should be configured in the HVM guest such that it
> > accepts 'xm/xl shutdown' graceful shutdown signal.
> >
> > 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on HVM)
> > however whenever I disable xen_platform_pci=0, it does not work.
>
> Right. That is expected.
>
> >
> > I also tried 'xl/xm trigger <vm> power' and this too does not work if
> > xen_platform_pci=0 is set. We do have lot of PCI pass-through devices.
>
> Oh, that looks to be a bug then.
>
> >
> > Our Xen cfg file looks like this :-
> >
> > # HVM specific
> > kernel = "hvmloader"
> > builder = "hvm"
> > device_model = "qemu-dm"
> >
> > # Enable ACPI support
> > acpi = 1
> >
> > # Enable serial console
> > serial = "pty"
> >
> > # Enable VNC
> > vnc = 1
> > vnclisten = "0.0.0.0"
> >
> > pci_msitranslate = 0
> >
> > xen_platform_pci = 1
> >
> > # Default behavior for following events
> > on_reboot = "destroy"
> >
> >
> > I'm use SLES SuSE 11 SP3.
>
> As you initial domain or your guests? If it is with SLES does the issue
> show up if you use the latest version of Xen?
> >
> > Thanks,
> > /Saurabh
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>

--bcaec548a91383d13a04f2c707d4--


--===============8494676037758122978==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8494676037758122978==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 19:13:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:13:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCa5-0007bH-9X; Wed, 19 Feb 2014 19:13:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGCa3-0007bC-J1
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 19:13:15 +0000
Received: from [85.158.137.68:39966] by server-3.bemta-3.messagelabs.com id
	F2/5A-14520-A4205035; Wed, 19 Feb 2014 19:13:14 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392837191!2979893!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31686 invoked from network); 19 Feb 2014 19:13:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:13:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,507,1389744000"; d="scan'208";a="104016089"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 19:13:11 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 14:13:10 -0500
Message-ID: <53050244.1020106@citrix.com>
Date: Wed, 19 Feb 2014 19:13:08 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>, Dan Williams
	<dcbw@redhat.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
In-Reply-To: <CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, xen-devel@lists.xenproject.org,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, "David S.
	Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 17:20, Luis R. Rodriguez wrote:
> On Wed, Feb 19, 2014 at 8:45 AM, Dan Williams <dcbw@redhat.com> wrote:
>> On Tue, 2014-02-18 at 13:19 -0800, Luis R. Rodriguez wrote:
>>> On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams <dcbw@redhat.com> wrote:
>>>> On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
>>>>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>>>>
>>>>> Some interfaces do not need to have any IPv4 or IPv6
>>>>> addresses, so enable an option to specify this. One
>>>>> example where this is observed are virtualization
>>>>> backend interfaces which just use the net_device
>>>>> constructs to help with their respective frontends.
>>>>>
>>>>> This should optimize boot time and complexity on
>>>>> virtualization environments for each backend interface
>>>>> while also avoiding triggering SLAAC and DAD, which is
>>>>> simply pointless for these type of interfaces.
>>>>
>>>> Would it not be better/cleaner to use disable_ipv6 and then add a
>>>> disable_ipv4 sysctl, then use those with that interface?
>>>
>>> Sure, but note that the both disable_ipv6 and accept_dada sysctl
>>> parameters are global. ipv4 and ipv6 interfaces are created upon
>>> NETDEVICE_REGISTER, which will get triggered when a driver calls
>>> register_netdev(). The goal of this patch was to enable an early
>>> optimization for drivers that have no need ever for ipv4 or ipv6
>>> interfaces.
>>
>> Each interface gets override sysctls too though, eg:
>>
>> /proc/sys/net/ipv6/conf/enp0s25/disable_ipv6
>
> I hadn't seen those, thanks!
>
>> which is the one I meant; you're obviously right that the global ones
>> aren't what you want here.  But the specific ones should be suitable?
>
> Under the approach Stephen mentioned by first ensuring the interface
> is down yes. There's one use case I can consider to still want the
> patch though, more on that below.
>
>> If you set that on a per-interface basis, then you'll get EPERM or
>> something whenever you try to add IPv6 addresses or do IPv6 routing.
>
> Neat, thanks.
>
>>> Zoltan has noted though some use cases of IPv4 or IPv6 addresses on
>>> backends though, as such this is no longer applicable as a
>>> requirement. The ipv4 sysctl however still seems like a reasonable
>>> approach to enable optimizations of the network in topologies where
>>> its known we won't need them but -- we'd need to consider a much more
>>> granular solution, not just global as it is now for disable_ipv6, and
>>> we'd also have to figure out a clean way to do this to not incur the
>>> cost of early address interface addition upon register_netdev().
>>>
>>> Given that we have a use case for ipv4 and ipv6 addresses on
>>> xen-netback we no longer have an immediate use case for such early
>>> optimization primitives though, so I'll drop this.
>>>
>>>> The IFF_SKIP_IP seems to duplicate at least part of what disable_ipv6 is
>>>> already doing.
>>>
>>> disable_ipv6 is global, the goal was to make this granular and skip
>>> the cost upon early boot, but its been clarified we don't need this.
>>
>> Like Stephen says, you need to make sure you set them before IFF_UP, but
>> beyond that, wouldn't the interface-specific sysctls work?
>
> Yeah that'll do it, unless there is a measurable run time benefit cost
> to never even add these in the first place. Consider a host with tons
> of guests, not sure how many is 'a lot' these days. One would have to
> measure the cost of reducing the amount of time it takes to boot these
> up. As discussed in the other threads though there *is* some use cases
> of assigning IPv4 or IPv6 addresses to the backend interfaces though:
> routing them (although its unclear to me if iptables can be used
> instead, Zoltan?).

Not with OVS; it steals the packet before the netfilter hooks run.

Zoli
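
Dan's per-interface knob can be exercised like this (a sketch only: the
interface name vif1.0 is a made-up example, and the commented ip/echo
steps assume root on a Linux host with that interface present):

```shell
# Build the path of the per-interface disable_ipv6 sysctl Dan mentions.
IFACE=vif1.0
KNOB=/proc/sys/net/ipv6/conf/$IFACE/disable_ipv6
echo "$KNOB"
# Per Stephen's point, set it while the interface is still down
# (i.e. before IFF_UP), for example:
#   ip link set "$IFACE" down
#   echo 1 > "$KNOB"
#   ip link set "$IFACE" up
```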

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:19:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:19:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCgE-0007pJ-4h; Wed, 19 Feb 2014 19:19:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGCgC-0007pE-KH
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 19:19:36 +0000
Received: from [85.158.139.211:54862] by server-17.bemta-5.messagelabs.com id
	D4/6F-31975-7C305035; Wed, 19 Feb 2014 19:19:35 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392837573!5027858!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29951 invoked from network); 19 Feb 2014 19:19:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:19:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,507,1389744000"; d="scan'208";a="104018867"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 19:19:33 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 14:19:33 -0500
Message-ID: <530503C3.4050307@citrix.com>
Date: Wed, 19 Feb 2014 19:19:31 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392744286.23084.52.camel@kazak.uk.xensource.com>
In-Reply-To: <1392744286.23084.52.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 17:24, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>
>> +       spinlock_t dealloc_lock;
>> +       spinlock_t response_lock;
>
> Please add comments to both of these describing what bits of the
> datastructure they are locking.
>
> You might find it is clearer to group the locks and the things they
> protect together rather than grouping the locks together.

OK, I'll add more description here. The response_lock actually is quite 
relevant here, but indeed that's not obvious; I'll explain that as 
well.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:24:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCl7-0007xC-Ud; Wed, 19 Feb 2014 19:24:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGCl6-0007x6-4Y
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 19:24:40 +0000
Received: from [193.109.254.147:44919] by server-14.bemta-14.messagelabs.com
	id 37/23-29228-7F405035; Wed, 19 Feb 2014 19:24:39 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392837877!5460917!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20439 invoked from network); 19 Feb 2014 19:24:38 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 19:24:38 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JJOZq5025131
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 19:24:36 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1JJOYWD020930
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 19:24:35 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJOYk7002577; Wed, 19 Feb 2014 19:24:34 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 11:24:34 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7421C1C0954; Wed, 19 Feb 2014 14:24:33 -0500 (EST)
Date: Wed, 19 Feb 2014 14:24:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Message-ID: <20140219192433.GB12602@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 11:02:25AM -0800, Saurabh Mishra wrote:
> > I'm use SLES SuSE 11 SP3.
> > As you initial domain or your guests? If it is with SLES does the issue
> > show up if you use the latest version of Xen?
> 
> We have two VMs. One is a WindRiver based VM and the other one a SLES SuSE
> 11 SP2 VM.
> 
> SuSE 11 SP2 VM shutdown properly with xen_platform_pci=0 however WR one
> does not and hence the question what config or driver do I need to enable
> in WR HVM guest such that it accepts 'xl/xm trigger <vm> power'?

Let's step back a minute. What version of Xen are you running?

> 
> Thanks,
> /Saurabh
> 
> 
> On Wed, Feb 19, 2014 at 10:56 AM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> > > Hi,
> > >
> > > I'd like to know what should be configured in the HVM guest such that it
> > > accepts 'xm/xl shutdown' graceful shutdown signal.
> > >
> > > 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on HVM)
> > > however whenever I disable xen_platform_pci=0, it does not work.
> >
> > Right. That is expected.
> >
> > >
> > > I also tried 'xl/xm trigger <vm> power' and this too does not work if
> > > xen_platform_pci=0 is set. We do have lot of PCI pass-through devices.
> >
> > Oh, that looks to be a bug then.
> >
> > >
> > > Our Xen cfg file looks like this :-
> > >
> > > # HVM specific
> > > kernel = "hvmloader"
> > > builder = "hvm"
> > > device_model = "qemu-dm"
> > >
> > > # Enable ACPI support
> > > acpi = 1
> > >
> > > # Enable serial console
> > > serial = "pty"
> > >
> > > # Enable VNC
> > > vnc = 1
> > > vnclisten = "0.0.0.0"
> > >
> > > pci_msitranslate = 0
> > >
> > > xen_platform_pci = 1
> > >
> > > # Default behavior for following events
> > > on_reboot = "destroy"
> > >
> > >
> > > I'm use SLES SuSE 11 SP3.
> >
> > As you initial domain or your guests? If it is with SLES does the issue
> > show up if you use the latest version of Xen?
> > >
> > > Thanks,
> > > /Saurabh
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:25:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCmK-000825-EY; Wed, 19 Feb 2014 19:25:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGCmJ-00081z-7q
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 19:25:55 +0000
Received: from [193.109.254.147:60639] by server-15.bemta-14.messagelabs.com
	id C1/21-10839-24505035; Wed, 19 Feb 2014 19:25:54 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392837953!5492785!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30696 invoked from network); 19 Feb 2014 19:25:54 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 19 Feb 2014 19:25:54 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:55938 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGClH-0000ae-Nq; Wed, 19 Feb 2014 20:24:51 +0100
Date: Wed, 19 Feb 2014 20:25:51 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1533842338.20140219202551@eikelenboom.it>
To: Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <1392835714-26480-1-git-send-email-wei.liu2@citrix.com>
References: <1392835714-26480-1-git-send-email-wei.liu2@citrix.com>
MIME-Version: 1.0
Cc: netdev@vger.kernel.org, Paul Durrant <paul.durrant@citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net] xen-netfront: reset skb network header
	before checksum
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, February 19, 2014, 7:48:34 PM, you wrote:

> In ed1f50c3a ("net: add skb_checksum_setup") we introduced some checksum
> functions in core driver. Subsequent change b5cf66cd1 ("xen-netfront:
> use new skb_checksum_setup function") made use of those functions to
> replace its own implementation.

> However with that change netfront is broken. It sees a lot of checksum
> error. That's because its own implementation of checksum function was a
> bit hacky (dereferencing skb->data directly) while the new function was
> implemented using ip_hdr(). The network header is not reset before skb
> is passed to the new function. When the new function tries to do its
> job, it's confused and reports error.

> The fix is simple, we need to reset network header before passing skb to
> checksum function. Netback is not affected as it already does the right
> thing.

> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Paul Durrant <paul.durrant@citrix.com>
> ---
>  drivers/net/xen-netfront.c |    1 +
>  1 file changed, 1 insertion(+)

> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index ff04d4f..95041b6 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,
>  
>                 /* Ethernet work: Delayed to here as it peeks the header. */
>                 skb->protocol = eth_type_trans(skb, dev);
> +               skb_reset_network_header(skb);
>  
>                 if (checksum_setup(dev, skb)) {
>                         kfree_skb(skb);

Ah that also explains why ICMP was not affected :-)
Just tested and this patch indeed fixes my problem, so:

Tested-By: Sander Eikelenboom <linux@eikelenboom.it>

Thanks Wei !

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:33:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGCtu-0008MD-OB; Wed, 19 Feb 2014 19:33:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGCts-0008M8-He
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 19:33:44 +0000
Received: from [85.158.137.68:10317] by server-14.bemta-3.messagelabs.com id
	14/C3-08196-71705035; Wed, 19 Feb 2014 19:33:43 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392838421!169235!1
X-Originating-IP: [209.85.220.178]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28067 invoked from network); 19 Feb 2014 19:33:42 -0000
Received: from mail-vc0-f178.google.com (HELO mail-vc0-f178.google.com)
	(209.85.220.178)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:33:42 -0000
Received: by mail-vc0-f178.google.com with SMTP id ik5so877488vcb.23
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 11:33:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=xIIHbPrAON5tflKHgl9gbUDZJDODhtJev4VkSAfZkzA=;
	b=Q0HvY82DouWhTsNrVV+82Zy40vZmWQQr1ypWcg6ZE/Dgi+ULorIHshQdagTN3aFXnQ
	nALVZyyd+uCSVmr1th+lApQfJAOgSeiF5fWQ6Sbu34TC1a86X70jeJU8WYhDstxP5q2w
	cVnMF0Z9Yvb9vw3Lr89IHd3q7VOxjikL9f2ZcV1yyd1QujbEsUS1XWq8EdecEX6ajOR9
	FZ54hhpEzKKOJIRl34/yq9Sc67E3bpV5z5LDlU9aZPQhpRXGps3OsGrG4wgXzuWbukzV
	8iZ+33e9JK9pgp3I422aIYCo42+f1CjQY5a1aPRHSpZ7Zueu1samVzo5z9Ix28Laepiq
	DfCQ==
MIME-Version: 1.0
X-Received: by 10.52.227.7 with SMTP id rw7mr1334785vdc.77.1392838420813; Wed,
	19 Feb 2014 11:33:40 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Wed, 19 Feb 2014 11:33:40 -0800 (PST)
In-Reply-To: <20140219192433.GB12602@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<20140219192433.GB12602@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 11:33:40 -0800
Message-ID: <CAMnwyJ08bGqNRKOnbAczL7mRz-OK+rcQHxAZFe9GG-dPJvZHCQ@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1241628175726257093=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1241628175726257093==
Content-Type: multipart/alternative; boundary=089e0122f3c44d5a9004f2c777d4

--089e0122f3c44d5a9004f2c777d4
Content-Type: text/plain; charset=ISO-8859-1

Hi Konrad --

I'm using  Xen 4.2.2

xentop - 11:31:13   Xen 4.2.2_06-0.7.17.62

Thanks,
/Saurabh

On Wed, Feb 19, 2014 at 11:24 AM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Wed, Feb 19, 2014 at 11:02:25AM -0800, Saurabh Mishra wrote:
> > > I'm using SLES SuSE 11 SP3.
> > > As your initial domain or your guests? If it is with SLES, does the issue
> > > show up if you use the latest version of Xen?
> >
> > We have two VMs. One is a WindRiver-based VM and the other one a SLES
> > SuSE 11 SP2 VM.
> >
> > The SuSE 11 SP2 VM shuts down properly with xen_platform_pci=0; however,
> > the WR one does not, hence the question: what config or driver do I need
> > to enable in the WR HVM guest so that it accepts 'xl/xm trigger <vm> power'?
>
> Let's step back a minute. What version of Xen are you running?
>
> >
> > Thanks,
> > /Saurabh
> >
> >
> > On Wed, Feb 19, 2014 at 10:56 AM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> > > > Hi,
> > > >
> > > > I'd like to know what should be configured in the HVM guest so that
> > > > it accepts the 'xm/xl shutdown' graceful shutdown signal.
> > > >
> > > > 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on
> > > > HVM); however, whenever I disable it with xen_platform_pci=0, it does
> > > > not work.
> > >
> > > Right. That is expected.
> > >
> > > >
> > > > I also tried 'xl/xm trigger <vm> power' and this too does not work if
> > > > xen_platform_pci=0 is set. We do have a lot of PCI pass-through
> > > > devices.
> > >
> > > Oh, that looks to be a bug then.
> > >
> > > >
> > > > Our Xen cfg file looks like this:
> > > >
> > > > # HVM specific
> > > > kernel = "hvmloader"
> > > > builder = "hvm"
> > > > device_model = "qemu-dm"
> > > >
> > > > # Enable ACPI support
> > > > acpi = 1
> > > >
> > > > # Enable serial console
> > > > serial = "pty"
> > > >
> > > > # Enable VNC
> > > > vnc = 1
> > > > vnclisten = "0.0.0.0"
> > > >
> > > > pci_msitranslate = 0
> > > >
> > > > xen_platform_pci = 1
> > > >
> > > > # Default behavior for following events
> > > > on_reboot = "destroy"
> > > >
> > > >
> > > > I'm using SLES SuSE 11 SP3.
> > >
> > > As your initial domain or your guests? If it is with SLES, does the issue
> > > show up if you use the latest version of Xen?
> > > >
> > > > Thanks,
> > > > /Saurabh
> > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xen.org
> > > > http://lists.xen.org/xen-devel
> > >
> > >
>
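
For readers hitting the same problem, a minimal HVM config sketch that keeps
both graceful-shutdown paths open might look like this (the on_poweroff and
on_crash values are illustrative defaults, not taken from this thread):

```
# HVM guest config sketch (xm/xl syntax, Xen 4.2 era)
kernel = "hvmloader"
builder = "hvm"
device_model = "qemu-dm"

# ACPI must stay enabled for 'xl trigger <vm> power' (virtual power button).
acpi = 1

# PV-on-HVM: with xen_platform_pci = 0 the toolstack has no PV channel
# through which to deliver 'xm/xl shutdown' into the guest.
xen_platform_pci = 1

# Explicit lifecycle event handling (values illustrative)
on_poweroff = "destroy"
on_reboot   = "destroy"
on_crash    = "destroy"
```

With xen_platform_pci = 0, the only in-band path left is the ACPI power
button, which the guest must be configured to honour (e.g. via acpid).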

--089e0122f3c44d5a9004f2c777d4
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi Konrad --<div><br><div>I&#39;m using =A0Xen 4.2.2</div>=
<div><br></div><div>xentop - 11:31:13 =A0 Xen 4.2.2_06-0.7.17.62</div><div>=
<br></div><div>Thanks,</div><div>/Saurabh=A0</div><div><br></div></div><div=
 class=3D"gmail_extra">
<div class=3D"gmail_quote">On Wed, Feb 19, 2014 at 11:24 AM, Konrad Rzeszut=
ek Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad.wilk@oracle.com" tar=
get=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span> wrote:<br><blockquote =
class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid=
;padding-left:1ex">
<div class=3D"im">On Wed, Feb 19, 2014 at 11:02:25AM -0800, Saurabh Mishra =
wrote:<br>
&gt; &gt; I&#39;m use SLES SuSE 11 SP3.<br>
&gt; &gt; As you initial domain or your guests? If it is with SLES does the=
 issue<br>
&gt; &gt; show up if you use the latest version of Xen?<br>
&gt;<br>
&gt; We have two VMs. One is a WindRiver based VM and the other one a SLES =
SuSE<br>
&gt; 11 SP2 VM.<br>
&gt;<br>
&gt; SuSE 11 SP2 VM shutdown properly with xen_platform_pci=3D0 however WR =
one<br>
&gt; does not and hence the question what config or driver do I need to ena=
ble<br>
&gt; in WR HVM guest such that it accepts &#39;xl/xm trigger &lt;vm&gt; pow=
er&#39;?<br>
<br>
</div>Lets step back a minute. What is the version of Xen you are running.<=
br>
<div class=3D"HOEnZb"><div class=3D"h5"><br>
&gt;<br>
&gt; Thanks,<br>
&gt; /Saurabh<br>
&gt;<br>
&gt;<br>
&gt; On Wed, Feb 19, 2014 at 10:56 AM, Konrad Rzeszutek Wilk &lt;<br>
&gt; <a href=3D"mailto:konrad.wilk@oracle.com">konrad.wilk@oracle.com</a>&g=
t; wrote:<br>
&gt;<br>
&gt; &gt; On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:<b=
r>
&gt; &gt; &gt; Hi,<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; I&#39;d like to know what should be configured in the HVM gu=
est such that it<br>
&gt; &gt; &gt; accepts &#39;xm/xl shutdown&#39; graceful shutdown signal.<b=
r>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; &#39;xm/xl shutdown&#39; works great when I have xen_platfor=
m_pci=3D1 (PV on HVM)<br>
&gt; &gt; &gt; however whenever I disable xen_platform_pci=3D0, it does not=
 work.<br>
&gt; &gt;<br>
&gt; &gt; Right. That is expected.<br>
&gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; I also tried &#39;xl/xm trigger &lt;vm&gt; power&#39; and th=
is too does not work if<br>
&gt; &gt; &gt; xen_platform_pci=3D0 is set. We do have lot of PCI pass-thro=
ugh devices.<br>
&gt; &gt;<br>
&gt; &gt; Oh, that looks to be a bug then.<br>
&gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Our Xen cfg file looks like this :-<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # HVM specific<br>
&gt; &gt; &gt; kernel =3D &quot;hvmloader&quot;<br>
&gt; &gt; &gt; builder =3D &quot;hvm&quot;<br>
&gt; &gt; &gt; device_model =3D &quot;qemu-dm&quot;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Enable ACPI support<br>
&gt; &gt; &gt; acpi =3D 1<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Enable serial console<br>
&gt; &gt; &gt; serial =3D &quot;pty&quot;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Enable VNC<br>
&gt; &gt; &gt; vnc =3D 1<br>
&gt; &gt; &gt; vnclisten =3D &quot;0.0.0.0&quot;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; pci_msitranslate =3D 0<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; xen_platform_pci =3D 1<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; # Default behavior for following events<br>
&gt; &gt; &gt; on_reboot =3D &quot;destroy&quot;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; I&#39;m use SLES SuSE 11 SP3.<br>
&gt; &gt;<br>
&gt; &gt; As you initial domain or your guests? If it is with SLES does the=
 issue<br>
&gt; &gt; show up if you use the latest version of Xen?<br>
&gt; &gt; &gt;<br>
&gt; &gt; &gt; Thanks,<br>
&gt; &gt; &gt; /Saurabh<br>
&gt; &gt;<br>
&gt; &gt; &gt; _______________________________________________<br>
&gt; &gt; &gt; Xen-devel mailing list<br>
&gt; &gt; &gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.x=
en.org</a><br>
&gt; &gt; &gt; <a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank"=
>http://lists.xen.org/xen-devel</a><br>
&gt; &gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div></div>

--089e0122f3c44d5a9004f2c777d4--


--===============1241628175726257093==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1241628175726257093==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 19:50:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDAA-0000PX-3c; Wed, 19 Feb 2014 19:50:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGDA9-0000PS-0D
	for xen-devel@lists.xensource.com; Wed, 19 Feb 2014 19:50:33 +0000
Received: from [193.109.254.147:64802] by server-6.bemta-14.messagelabs.com id
	8F/50-03396-80B05035; Wed, 19 Feb 2014 19:50:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392839429!5532724!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5838 invoked from network); 19 Feb 2014 19:50:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:50:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,507,1389744000"; d="scan'208";a="104031855"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 19 Feb 2014 19:50:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 14:50:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGDA4-00047o-8u;
	Wed, 19 Feb 2014 19:50:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGDA3-00076U-En;
	Wed, 19 Feb 2014 19:50:27 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25135-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 19 Feb 2014 19:50:27 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25135: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25135 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25135/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl           9 guest-start                  fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 16 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 16 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 13 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 13 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 13 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 13 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 13 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 13 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 13 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 13 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 13 guest-stop                   fail never pass

version targeted for testing:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef
baseline version:
 xen                  b7319350278d0220febc8a7dc8be8e8d41b0abd2

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
  Julien Grall <julien.grall@linaro.org>
  Olaf Hering <olaf@aepfle.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=98b9a9e8d3b12cb3210c8c9edd65019b584704ef
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 98b9a9e8d3b12cb3210c8c9edd65019b584704ef
+ branch=xen-unstable
+ revision=98b9a9e8d3b12cb3210c8c9edd65019b584704ef
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 98b9a9e8d3b12cb3210c8c9edd65019b584704ef:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   b731935..98b9a9e  98b9a9e8d3b12cb3210c8c9edd65019b584704ef -> master
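
The `with-lock-ex -w` wrapper in the trace above takes an exclusive lock on
the shared repos checkout before pushing, so concurrent flights cannot race.
A rough equivalent of that pattern with the more widely available flock(1)
would be (lock path and guarded command are illustrative, not from osstest):

```shell
#!/bin/sh
# Serialize a critical section with an exclusive file lock;
# -w 10 waits up to 10 seconds for the lock instead of failing immediately.
lockfile=/tmp/repos.lock
touch "$lockfile"
flock -w 10 "$lockfile" sh -c 'echo "push would run under the lock"'
```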

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=98b9a9e8d3b12cb3210c8c9edd65019b584704ef
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 98b9a9e8d3b12cb3210c8c9edd65019b584704ef
+ branch=xen-unstable
+ revision=98b9a9e8d3b12cb3210c8c9edd65019b584704ef
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 98b9a9e8d3b12cb3210c8c9edd65019b584704ef:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   b731935..98b9a9e  98b9a9e8d3b12cb3210c8c9edd65019b584704ef -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:52:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDBq-0000VI-KM; Wed, 19 Feb 2014 19:52:18 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGDBo-0000VC-Mb
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 19:52:16 +0000
Received: from [85.158.143.35:38421] by server-3.bemta-4.messagelabs.com id
	6B/36-11539-07B05035; Wed, 19 Feb 2014 19:52:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392839533!6897843!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4905 invoked from network); 19 Feb 2014 19:52:15 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 19:52:15 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JJqB06003146
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 19:52:12 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJqB9q000233
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 19:52:11 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJqBe1012573; Wed, 19 Feb 2014 19:52:11 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 11:52:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id EC7491C0954; Wed, 19 Feb 2014 14:52:09 -0500 (EST)
Date: Wed, 19 Feb 2014 14:52:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Message-ID: <20140219195209.GF12602@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<20140219192433.GB12602@phenom.dumpdata.com>
	<CAMnwyJ08bGqNRKOnbAczL7mRz-OK+rcQHxAZFe9GG-dPJvZHCQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAMnwyJ08bGqNRKOnbAczL7mRz-OK+rcQHxAZFe9GG-dPJvZHCQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 11:33:40AM -0800, Saurabh Mishra wrote:
> Hi Konrad --
> 
> I'm using  Xen 4.2.2
> 
> xentop - 11:31:13   Xen 4.2.2_06-0.7.17.62

Oh, please don't top-post and please do use the latest version of Xen.

As I mentioned earlier, you might be hitting a bug in 'xl'. Hence
please use the latest version so that we can verify whether this is
indeed a bug when doing 'xl trigger'.


> 
> Thanks,
> /Saurabh
> 
> On Wed, Feb 19, 2014 at 11:24 AM, Konrad Rzeszutek Wilk <
> konrad.wilk@oracle.com> wrote:
> 
> > On Wed, Feb 19, 2014 at 11:02:25AM -0800, Saurabh Mishra wrote:
> > > > I'm use SLES SuSE 11 SP3.
> > > > As you initial domain or your guests? If it is with SLES does the issue
> > > > show up if you use the latest version of Xen?
> > >
> > > We have two VMs. One is a WindRiver based VM and the other one a SLES
> > SuSE
> > > 11 SP2 VM.
> > >
> > > SuSE 11 SP2 VM shutdown properly with xen_platform_pci=0 however WR one
> > > does not and hence the question what config or driver do I need to enable
> > > in WR HVM guest such that it accepts 'xl/xm trigger <vm> power'?
> >
> > Lets step back a minute. What is the version of Xen you are running.
> >
> > >
> > > Thanks,
> > > /Saurabh
> > >
> > >
> > > On Wed, Feb 19, 2014 at 10:56 AM, Konrad Rzeszutek Wilk <
> > > konrad.wilk@oracle.com> wrote:
> > >
> > > > On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> > > > > Hi,
> > > > >
> > > > > I'd like to know what should be configured in the HVM guest such
> > that it
> > > > > accepts 'xm/xl shutdown' graceful shutdown signal.
> > > > >
> > > > > 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on
> > HVM)
> > > > > however whenever I disable xen_platform_pci=0, it does not work.
> > > >
> > > > Right. That is expected.
> > > >
> > > > >
> > > > > I also tried 'xl/xm trigger <vm> power' and this too does not work if
> > > > > xen_platform_pci=0 is set. We do have lot of PCI pass-through
> > devices.
> > > >
> > > > Oh, that looks to be a bug then.
> > > >
> > > > >
> > > > > Our Xen cfg file looks like this :-
> > > > >
> > > > > # HVM specific
> > > > > kernel = "hvmloader"
> > > > > builder = "hvm"
> > > > > device_model = "qemu-dm"
> > > > >
> > > > > # Enable ACPI support
> > > > > acpi = 1
> > > > >
> > > > > # Enable serial console
> > > > > serial = "pty"
> > > > >
> > > > > # Enable VNC
> > > > > vnc = 1
> > > > > vnclisten = "0.0.0.0"
> > > > >
> > > > > pci_msitranslate = 0
> > > > >
> > > > > xen_platform_pci = 1
> > > > >
> > > > > # Default behavior for following events
> > > > > on_reboot = "destroy"
> > > > >
> > > > >
> > > > > I'm use SLES SuSE 11 SP3.
> > > >
> > > > As you initial domain or your guests? If it is with SLES does the issue
> > > > show up if you use the latest version of Xen?
> > > > >
> > > > > Thanks,
> > > > > /Saurabh
> > > >
> > > > > _______________________________________________
> > > > > Xen-devel mailing list
> > > > > Xen-devel@lists.xen.org
> > > > > http://lists.xen.org/xen-devel
> > > >
> > > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:54:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDE5-0000g7-6M; Wed, 19 Feb 2014 19:54:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGDE3-0000fw-57
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 19:54:35 +0000
Received: from [85.158.139.211:33889] by server-7.bemta-5.messagelabs.com id
	4A/7E-14867-AFB05035; Wed, 19 Feb 2014 19:54:34 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392839671!4990983!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30591 invoked from network); 19 Feb 2014 19:54:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:54:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,507,1389744000"; d="scan'208";a="102334593"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 19:54:31 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 14:54:30 -0500
Message-ID: <53050BF5.1060009@citrix.com>
Date: Wed, 19 Feb 2014 19:54:29 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>		
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>	
	<1392743214.23084.38.camel@kazak.uk.xensource.com>	
	<5303C44D.4070500@citrix.com>
	<1392804319.23084.109.camel@kazak.uk.xensource.com>
In-Reply-To: <1392804319.23084.109.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 10:05, Ian Campbell wrote:
> On Tue, 2014-02-18 at 20:36 +0000, Zoltan Kiss wrote:
>> On 18/02/14 17:06, Ian Campbell wrote:
>>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>>> This patch contains the new definitions necessary for grant mapping.
>>>
>>> Is this just adding a bunch of (currently) unused functions? That's a
>>> slightly odd way to structure a series. They don't seem to be "generic
>>> helpers" or anything so it would be more normal to introduce these as
>>> they get used -- it's a bit hard to review them out of context.
>> I've created two patches because they are quite huge even now,
>> separately. Together they would be a ~500 line change. That was the best
>> I could figure out keeping in mind that bisect should work. But as I
>> wrote in the first email, I welcome other suggestions. If you and Wei
>> prefer this two patch in one big one, I merge them in the next version.
>
> I suppose it is hard to split a change like this up in a sensible way,
> but it is rather hard to review something which is split in two parts
> sensibly.
>
> If the combined patch too large to fit on the lists?
Well, it's ca. 30 kB, ~500 lines changed. I guess it's possible. It's up 
to you and Wei; if you would like them merged, I can do that.

>>>
>>>> +					  struct xenvif,
>>>> +					  pending_tx_info[0]);
>>>> +
>>>> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
>>>> +	do {
>>>> +		pending_idx = ubuf->desc;
>>>> +		ubuf = (struct ubuf_info *) ubuf->ctx;
>>>> +		index = pending_index(vif->dealloc_prod);
>>>> +		vif->dealloc_ring[index] = pending_idx;
>>>> +		/* Sync with xenvif_tx_dealloc_action:
>>>> +		 * insert idx then incr producer.
>>>> +		 */
>>>> +		smp_wmb();
>>>
>>> Is this really needed given that there is a lock held?
>> Yes, as the comment right above explains.
>
> My question is why do you need this sync if you are holding a lock, the
> comment doesn't tell me that. I suppose xenvif_tx_dealloc_action doesn't
> hold the dealloc_lock, but that is non-obvious from the names.
Ok, I'll clarify that in the comment.

>>>
>>>> +		vif->dealloc_prod++;
>>>
>>> What happens if the dealloc ring becomes full, will this wrap and cause
>>> havoc?
>> Nope, if the dealloc ring is full, the value of the last increment won't
>> be used to index the dealloc ring again until some space made available.
>
> I don't follow -- what makes this the case?
The dealloc ring has the same size as the pending ring, and you can only 
add slots to it which are already on the pending ring (the pending_idx 
comes from ubuf->desc), as you are essentially freeing up slots on the 
pending ring here.
So if the dealloc ring becomes full, vif->dealloc_prod - 
vif->dealloc_cons will be 256, which would be bad. But the while loop 
should exit here, as we shouldn't have any more pending slots. And as we 
dealloc and create free pending slots in dealloc_action, dealloc_cons 
will also advance.

>> Of course if something broke and we have more pending slots than tx ring
>> or dealloc slots then it can happen. Do you suggest a
>> BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= MAX_PENDING_REQS)?
>
> A
>           BUG_ON(space in dealloc ring < number of slots needed to dealloc this skb)
> would seem to be the right thing, if that really is the invariant the
> code is supposed to be implementing.
Not exactly, it means BUG_ON(number of slots to dealloc > 
MAX_PENDING_REQS), and it should be at the end of the loop, without '='.

>>>> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>>>> +{
>>>> +	struct gnttab_unmap_grant_ref *gop;
>>>> +	pending_ring_idx_t dc, dp;
>>>> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
>>>> +	unsigned int i = 0;
>>>> +
>>>> +	dc = vif->dealloc_cons;
>>>> +	gop = vif->tx_unmap_ops;
>>>> +
>>>> +	/* Free up any grants we have finished using */
>>>> +	do {
>>>> +		dp = vif->dealloc_prod;
>>>> +
>>>> +		/* Ensure we see all indices enqueued by all
>>>> +		 * xenvif_zerocopy_callback().
>>>> +		 */
>>>> +		smp_rmb();
>>>> +
>>>> +		while (dc != dp) {
>>>> +			pending_idx =
>>>> +				vif->dealloc_ring[pending_index(dc++)];
>>>> +
>>>> +			/* Already unmapped? */
>>>> +			if (vif->grant_tx_handle[pending_idx] ==
>>>> +				NETBACK_INVALID_HANDLE) {
>>>> +				netdev_err(vif->dev,
>>>> +					   "Trying to unmap invalid handle! "
>>>> +					   "pending_idx: %x\n", pending_idx);
>>>> +				BUG();
>>>> +			}
>>>> +
>>>> +			pending_idx_release[gop-vif->tx_unmap_ops] =
>>>> +				pending_idx;
>>>> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
>>>> +				vif->mmap_pages[pending_idx];
>>>> +			gnttab_set_unmap_op(gop,
>>>> +					    idx_to_kaddr(vif, pending_idx),
>>>> +					    GNTMAP_host_map,
>>>> +					    vif->grant_tx_handle[pending_idx]);
>>>> +			vif->grant_tx_handle[pending_idx] =
>>>> +				NETBACK_INVALID_HANDLE;
>>>> +			++gop;
>>>
>>> Can we run out of space in the gop array?
>> No, unless the same thing happen as at my previous answer. BUG_ON() here
>> as well?
>
> Yes, or at the very least a comment explaining how/why gop is bounded
> elsewhere.
Ok, I'll do that.

>
>>>
>>>> +		}
>>>> +
>>>> +	} while (dp != vif->dealloc_prod);
>>>> +
>>>> +	vif->dealloc_cons = dc;
>>>
>>> No barrier here?
>> dealloc_cons only used in the dealloc_thread. dealloc_prod is used by
>> the callback and the thread as well, that's why we need mb() in
>> previous. Btw. this function comes from classic's net_tx_action_dealloc
>
> Is this code close enough to that code architecturally that you can
> infer correctness due to that though?
Nope, I've just mentioned it because knowing that old code can help to 
understand this new one, as their logic is very similar in some places, like here.

> So long as you have considered the barrier semantics in the context of
> the current code and you think it is correct to not have one here then
> I'm ok. But if you have just assumed it is OK because some older code
> didn't have it then I'll have to ask you to consider it again...
Nope, as I mentioned above, dealloc_cons is only accessed in that function, 
from the same thread. dealloc_prod is written in the callback and read 
out here; that's why we need the barrier there.

>
>>>> +				netdev_err(vif->dev,
>>>> +					   " host_addr: %llx handle: %x status: %d\n",
>>>> +					   gop[i].host_addr,
>>>> +					   gop[i].handle,
>>>> +					   gop[i].status);
>>>> +			}
>>>> +			BUG();
>>>> +		}
>>>> +	}
>>>> +
>>>> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
>>>> +		xenvif_idx_release(vif, pending_idx_release[i],
>>>> +				   XEN_NETIF_RSP_OKAY);
>>>> +}
>>>> +
>>>> +
>>>>    /* Called after netfront has transmitted */
>>>>    int xenvif_tx_action(struct xenvif *vif, int budget)
>>>>    {
>>>> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>>>>    	vif->mmap_pages[pending_idx] = NULL;
>>>>    }
>>>>
>>>> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
>>>
>>> This is a single shot version of the batched xenvif_tx_dealloc_action
>>> version? Why not just enqueue the idx to be unmapped later?
>> This is called only from the NAPI instance. Using the dealloc ring
>> require synchronization with the callback which can increase lock
>> contention. On the other hand, if the guest sends small packets
>> (<PAGE_SIZE), the TLB flushing can cause performance penalty.
>
> Right. When/How often is this called from the NAPI instance?
When grant mapping error detected in xenvif_tx_check_gop, and if a 
packet smaller than PKT_PROT_LEN is sent. The latter would be removed if 
we will grant copy such packets entirely.

> Is the locking contention from this case so severe that it out weighs
> the benefits of batching the unmaps? That would surprise me. After all
> the locking contention is there for the zerocopy_callback case too
>
>>   The above
>> mentioned upcoming patch which gntcopy the header can prevent that
>
> So this is only called when doing the pull-up to the linear area?
Yes, as mentioned above.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:54:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDE5-0000g7-6M; Wed, 19 Feb 2014 19:54:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGDE3-0000fw-57
	for xen-devel@lists.xenproject.org; Wed, 19 Feb 2014 19:54:35 +0000
Received: from [85.158.139.211:33889] by server-7.bemta-5.messagelabs.com id
	4A/7E-14867-AFB05035; Wed, 19 Feb 2014 19:54:34 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392839671!4990983!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30591 invoked from network); 19 Feb 2014 19:54:33 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:54:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,507,1389744000"; d="scan'208";a="102334593"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 19 Feb 2014 19:54:31 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 19 Feb 2014 14:54:30 -0500
Message-ID: <53050BF5.1060009@citrix.com>
Date: Wed, 19 Feb 2014 19:54:29 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>		
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>	
	<1392743214.23084.38.camel@kazak.uk.xensource.com>	
	<5303C44D.4070500@citrix.com>
	<1392804319.23084.109.camel@kazak.uk.xensource.com>
In-Reply-To: <1392804319.23084.109.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 10:05, Ian Campbell wrote:
> On Tue, 2014-02-18 at 20:36 +0000, Zoltan Kiss wrote:
>> On 18/02/14 17:06, Ian Campbell wrote:
>>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>>> This patch contains the new definitions necessary for grant mapping.
>>>
>>> Is this just adding a bunch of (currently) unused functions? That's a
>>> slightly odd way to structure a series. They don't seem to be "generic
>>> helpers" or anything so it would be more normal to introduce these as
>>> they get used -- it's a bit hard to review them out of context.
>> I've created two patches because they are quite huge even now, taken
>> separately. Together they would be a ~500 line change. That was the best
>> I could figure out while keeping in mind that bisect should work. But as I
>> wrote in the first email, I welcome other suggestions. If you and Wei
>> prefer these two patches as one big one, I'll merge them in the next version.
>
> I suppose it is hard to split a change like this up in a sensible way,
> but it is rather hard to review something which is split in two parts
> sensibly.
>
> Is the combined patch too large to fit on the lists?
Well, it's ca. 30 kb, ~500 lines changed. I guess it would fit. It's up 
to you and Wei; if you would like them merged, I can do that.

>>>
>>>> +					  struct xenvif,
>>>> +					  pending_tx_info[0]);
>>>> +
>>>> +	spin_lock_irqsave(&vif->dealloc_lock, flags);
>>>> +	do {
>>>> +		pending_idx = ubuf->desc;
>>>> +		ubuf = (struct ubuf_info *) ubuf->ctx;
>>>> +		index = pending_index(vif->dealloc_prod);
>>>> +		vif->dealloc_ring[index] = pending_idx;
>>>> +		/* Sync with xenvif_tx_dealloc_action:
>>>> +		 * insert idx then incr producer.
>>>> +		 */
>>>> +		smp_wmb();
>>>
>>> Is this really needed given that there is a lock held?
>> Yes, as the comment right above explains.
>
> My question is why do you need this sync if you are holding a lock, the
> comment doesn't tell me that. I suppose xenvif_tx_dealloc_action doesn't
> hold the dealloc_lock, but that is non-obvious from the names.
Ok, I'll clarify that in the comment.

>>>
>>>> +		vif->dealloc_prod++;
>>>
>>> What happens if the dealloc ring becomes full, will this wrap and cause
>>> havoc?
>> Nope, if the dealloc ring is full, the value of the last increment won't
>> be used to index the dealloc ring again until some space is made available.
>
> I don't follow -- what makes this the case?
The dealloc ring has the same size as the pending ring, and you can only 
add slots to it which are already on the pending ring (the pending_idx 
comes from ubuf->desc), as you are essentially freeing up slots on the 
pending ring here.
So if the dealloc ring becomes full, vif->dealloc_prod - 
vif->dealloc_cons will be 256, which would be bad. But the while loop 
should exit here, as we shouldn't have any more pending slots. And if we 
dealloc and create free pending slots in dealloc_action, dealloc_cons 
will also advance.

>> Of course if something broke and we have more pending slots than tx ring
>> or dealloc slots then it can happen. Do you suggest a
>> BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= MAX_PENDING_REQS)?
>
> A
>           BUG_ON(space in dealloc ring < number of slots needed to dealloc this skb)
> would seem to be the right thing, if that really is the invariant the
> code is supposed to be implementing.
Not exactly, it means BUG_ON(number of slots to dealloc > 
MAX_PENDING_REQS), and it should be at the end of the loop, without '='.

>>>> +static inline void xenvif_tx_dealloc_action(struct xenvif *vif)
>>>> +{
>>>> +	struct gnttab_unmap_grant_ref *gop;
>>>> +	pending_ring_idx_t dc, dp;
>>>> +	u16 pending_idx, pending_idx_release[MAX_PENDING_REQS];
>>>> +	unsigned int i = 0;
>>>> +
>>>> +	dc = vif->dealloc_cons;
>>>> +	gop = vif->tx_unmap_ops;
>>>> +
>>>> +	/* Free up any grants we have finished using */
>>>> +	do {
>>>> +		dp = vif->dealloc_prod;
>>>> +
>>>> +		/* Ensure we see all indices enqueued by all
>>>> +		 * xenvif_zerocopy_callback().
>>>> +		 */
>>>> +		smp_rmb();
>>>> +
>>>> +		while (dc != dp) {
>>>> +			pending_idx =
>>>> +				vif->dealloc_ring[pending_index(dc++)];
>>>> +
>>>> +			/* Already unmapped? */
>>>> +			if (vif->grant_tx_handle[pending_idx] ==
>>>> +				NETBACK_INVALID_HANDLE) {
>>>> +				netdev_err(vif->dev,
>>>> +					   "Trying to unmap invalid handle! "
>>>> +					   "pending_idx: %x\n", pending_idx);
>>>> +				BUG();
>>>> +			}
>>>> +
>>>> +			pending_idx_release[gop-vif->tx_unmap_ops] =
>>>> +				pending_idx;
>>>> +			vif->pages_to_unmap[gop-vif->tx_unmap_ops] =
>>>> +				vif->mmap_pages[pending_idx];
>>>> +			gnttab_set_unmap_op(gop,
>>>> +					    idx_to_kaddr(vif, pending_idx),
>>>> +					    GNTMAP_host_map,
>>>> +					    vif->grant_tx_handle[pending_idx]);
>>>> +			vif->grant_tx_handle[pending_idx] =
>>>> +				NETBACK_INVALID_HANDLE;
>>>> +			++gop;
>>>
>>> Can we run out of space in the gop array?
>> No, unless the same thing happens as in my previous answer. BUG_ON() here
>> as well?
>
> Yes, or at the very least a comment explaining how/why gop is bounded
> elsewhere.
Ok, I'll do that.

>
>>>
>>>> +		}
>>>> +
>>>> +	} while (dp != vif->dealloc_prod);
>>>> +
>>>> +	vif->dealloc_cons = dc;
>>>
>>> No barrier here?
>> dealloc_cons is only used in the dealloc thread. dealloc_prod is used by
>> the callback and the thread as well, that's why we need the mb() above.
>> Btw. this function comes from classic's net_tx_action_dealloc
>
> Is this code close enough to that code architecturally that you can
> infer correctness due to that though?
Nope, I just mentioned it because knowing the old code can help in 
understanding this new one, as their logic is very similar in some 
places, like here.

> So long as you have considered the barrier semantics in the context of
> the current code and you think it is correct to not have one here then
> I'm ok. But if you have just assumed it is OK because some older code
> didn't have it then I'll have to ask you to consider it again...
Nope, as I mentioned above, dealloc_cons is only accessed in that 
function, from the same thread. dealloc_prod is written in the callback 
and read out here; that's why we need the barrier there.

>
>>>> +				netdev_err(vif->dev,
>>>> +					   " host_addr: %llx handle: %x status: %d\n",
>>>> +					   gop[i].host_addr,
>>>> +					   gop[i].handle,
>>>> +					   gop[i].status);
>>>> +			}
>>>> +			BUG();
>>>> +		}
>>>> +	}
>>>> +
>>>> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
>>>> +		xenvif_idx_release(vif, pending_idx_release[i],
>>>> +				   XEN_NETIF_RSP_OKAY);
>>>> +}
>>>> +
>>>> +
>>>>    /* Called after netfront has transmitted */
>>>>    int xenvif_tx_action(struct xenvif *vif, int budget)
>>>>    {
>>>> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>>>>    	vif->mmap_pages[pending_idx] = NULL;
>>>>    }
>>>>
>>>> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
>>>
>>> This is a single shot version of the batched xenvif_tx_dealloc_action
>>> version? Why not just enqueue the idx to be unmapped later?
>> This is called only from the NAPI instance. Using the dealloc ring
>> requires synchronization with the callback, which can increase lock
>> contention. On the other hand, if the guest sends small packets
>> (<PAGE_SIZE), the TLB flushing can cause a performance penalty.
>
> Right. When/How often is this called from the NAPI instance?
When a grant mapping error is detected in xenvif_tx_check_gop, and when 
a packet smaller than PKT_PROT_LEN is sent. The latter case would go 
away if we grant copy such packets entirely.

> Is the locking contention from this case so severe that it outweighs
> the benefits of batching the unmaps? That would surprise me. After all
> the locking contention is there for the zerocopy_callback case too
>
>> The above-mentioned upcoming patch, which grant copies the header, can
>> prevent that
>
> So this is only called when doing the pull-up to the linear area?
Yes, as mentioned above.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:55:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDF8-0000sC-Pe; Wed, 19 Feb 2014 19:55:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGDF7-0000rv-35
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 19:55:41 +0000
Received: from [85.158.143.35:2966] by server-1.bemta-4.messagelabs.com id
	BD/46-31661-C3C05035; Wed, 19 Feb 2014 19:55:40 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392839738!6905887!1
X-Originating-IP: [209.85.220.173]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6732 invoked from network); 19 Feb 2014 19:55:39 -0000
Received: from mail-vc0-f173.google.com (HELO mail-vc0-f173.google.com)
	(209.85.220.173)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:55:39 -0000
Received: by mail-vc0-f173.google.com with SMTP id ld13so921214vcb.18
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 11:55:37 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=7WePVKTNXoo4/qcx5j8ltufHjWkxtQCVDjwpqXM0u5M=;
	b=F/GJJQvRZCUY9leu+TT4s2C+KhVEda0Nit1aqzPOqjEINz5BWcbt0fLpVmOYutlMF2
	gpqbF3jy+I7ZrBF8HGVkhkVUqXWk2jQyG+cCMuj1bPkdiDur4qQoFPe+Qo/k1yGB7nHG
	fu+eJDKjEAOZEnZDj9nNPN/IFtgjKo6TTAQhdDxvcwyk6uiRgTXg1kLXFa/0sEnkTebC
	g8W/gwyY55AN7JNWh9F/yINp+wBmUW9L1njIvcT58ZxohLCCQkQML9W84pY5DwILQeme
	PwyxHnoMfzS8qgn/fE5rMYAVgofVZsdvydgKFC4KYpP8HQSafvlAF7yhilzFFSE7gEpk
	U2iA==
MIME-Version: 1.0
X-Received: by 10.220.196.82 with SMTP id ef18mr1646656vcb.78.1392839737766;
	Wed, 19 Feb 2014 11:55:37 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Wed, 19 Feb 2014 11:55:37 -0800 (PST)
In-Reply-To: <20140219195209.GF12602@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<20140219192433.GB12602@phenom.dumpdata.com>
	<CAMnwyJ08bGqNRKOnbAczL7mRz-OK+rcQHxAZFe9GG-dPJvZHCQ@mail.gmail.com>
	<20140219195209.GF12602@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 11:55:37 -0800
Message-ID: <CAMnwyJ0c9Q8+kn=E1phXkYWc5WzU3=S+An=b8opPjrRyuCe25g@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8462289377999604575=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8462289377999604575==
Content-Type: multipart/alternative; boundary=001a1133db9ccc76ce04f2c7c542

--001a1133db9ccc76ce04f2c7c542
Content-Type: text/plain; charset=ISO-8859-1

Well -- we are using SuSE and don't want to build Xen ourselves, and 
SuSE 11 SP3 ships Xen 4.2.2.

But what about 'xm trigger <vm> power'? That too didn't work with
xen_platform_pci=0, so I don't think it's an xl(1) bug.

Thanks,
/Saurabh


On Wed, Feb 19, 2014 at 11:52 AM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Wed, Feb 19, 2014 at 11:33:40AM -0800, Saurabh Mishra wrote:
> > Hi Konrad --
> >
> > I'm using  Xen 4.2.2
> >
> > xentop - 11:31:13   Xen 4.2.2_06-0.7.17.62
>
> Oh, please don't top-post and please do use the latest version of Xen.
>
> As I mentioned earlier - you might be hitting a bug in 'xl'. Hence
> please use the latest version so that we can vet whether indeed this
> is a bug when doing 'xl trigger'.
>
>
> >
> > Thanks,
> > /Saurabh
> >
> > On Wed, Feb 19, 2014 at 11:24 AM, Konrad Rzeszutek Wilk <
> > konrad.wilk@oracle.com> wrote:
> >
> > > On Wed, Feb 19, 2014 at 11:02:25AM -0800, Saurabh Mishra wrote:
> > > > > I'm use SLES SuSE 11 SP3.
> > > > > As you initial domain or your guests? If it is with SLES does the
> issue
> > > > > show up if you use the latest version of Xen?
> > > >
> > > > We have two VMs. One is a WindRiver based VM and the other one a SLES
> > > SuSE
> > > > 11 SP2 VM.
> > > >
> > > > SuSE 11 SP2 VM shutdown properly with xen_platform_pci=0 however WR
> one
> > > > does not and hence the question what config or driver do I need to
> enable
> > > > in WR HVM guest such that it accepts 'xl/xm trigger <vm> power'?
> > >
> > > Lets step back a minute. What is the version of Xen you are running.
> > >
> > > >
> > > > Thanks,
> > > > /Saurabh
> > > >
> > > >
> > > > On Wed, Feb 19, 2014 at 10:56 AM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@oracle.com> wrote:
> > > >
> > > > > On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> > > > > > Hi,
> > > > > >
> > > > > > I'd like to know what should be configured in the HVM guest such
> > > that it
> > > > > > accepts 'xm/xl shutdown' graceful shutdown signal.
> > > > > >
> > > > > > 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV
> on
> > > HVM)
> > > > > > however whenever I disable xen_platform_pci=0, it does not work.
> > > > >
> > > > > Right. That is expected.
> > > > >
> > > > > >
> > > > > > I also tried 'xl/xm trigger <vm> power' and this too does not
> work if
> > > > > > xen_platform_pci=0 is set. We do have lot of PCI pass-through
> > > devices.
> > > > >
> > > > > Oh, that looks to be a bug then.
> > > > >
> > > > > >
> > > > > > Our Xen cfg file looks like this :-
> > > > > >
> > > > > > # HVM specific
> > > > > > kernel = "hvmloader"
> > > > > > builder = "hvm"
> > > > > > device_model = "qemu-dm"
> > > > > >
> > > > > > # Enable ACPI support
> > > > > > acpi = 1
> > > > > >
> > > > > > # Enable serial console
> > > > > > serial = "pty"
> > > > > >
> > > > > > # Enable VNC
> > > > > > vnc = 1
> > > > > > vnclisten = "0.0.0.0"
> > > > > >
> > > > > > pci_msitranslate = 0
> > > > > >
> > > > > > xen_platform_pci = 1
> > > > > >
> > > > > > # Default behavior for following events
> > > > > > on_reboot = "destroy"
> > > > > >
> > > > > >
> > > > > > I'm use SLES SuSE 11 SP3.
> > > > >
> > > > > As you initial domain or your guests? If it is with SLES does the
> issue
> > > > > show up if you use the latest version of Xen?
> > > > > >
> > > > > > Thanks,
> > > > > > /Saurabh
> > > > >
> > > > > > _______________________________________________
> > > > > > Xen-devel mailing list
> > > > > > Xen-devel@lists.xen.org
> > > > > > http://lists.xen.org/xen-devel
> > > > >
> > > > >
> > >
>

--001a1133db9ccc76ce04f2c7c542--


--===============8462289377999604575==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8462289377999604575==--


> > > > > > kernel = "hvmloader"
> > > > > > builder = "hvm"
> > > > > > device_model = "qemu-dm"
> > > > > >
> > > > > > # Enable ACPI support
> > > > > > acpi = 1
> > > > > >
> > > > > > # Enable serial console
> > > > > > serial = "pty"
> > > > > >
> > > > > > # Enable VNC
> > > > > > vnc = 1
> > > > > > vnclisten = "0.0.0.0"
> > > > > >
> > > > > > pci_msitranslate = 0
> > > > > >
> > > > > > xen_platform_pci = 1
> > > > > >
> > > > > > # Default behavior for following events
> > > > > > on_reboot = "destroy"
> > > > > >
> > > > > >
> > > > > > I'm use SLES SuSE 11 SP3.
> > > > >
> > > > > As you initial domain or your guests? If it is with SLES does the
> issue
> > > > > show up if you use the latest version of Xen?
> > > > > >
> > > > > > Thanks,
> > > > > > /Saurabh
> > > > >
> > > > > > _______________________________________________
> > > > > > Xen-devel mailing list
> > > > > > Xen-devel@lists.xen.org
> > > > > > http://lists.xen.org/xen-devel
> > > > >
> > > > >
> > >
>

--001a1133db9ccc76ce04f2c7c542--


--===============8462289377999604575==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8462289377999604575==--


From xen-devel-bounces@lists.xen.org Wed Feb 19 19:57:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDGv-00013n-Vf; Wed, 19 Feb 2014 19:57:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WGDGu-00013Z-Ku; Wed, 19 Feb 2014 19:57:33 +0000
Received: from [85.158.143.35:14296] by server-2.bemta-4.messagelabs.com id
	0F/1A-10891-BAC05035; Wed, 19 Feb 2014 19:57:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392839849!6898592!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30425 invoked from network); 19 Feb 2014 19:57:31 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 19:57:31 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JJvA4L031974
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 19:57:11 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJv7cW025121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 19 Feb 2014 19:57:08 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJv7Yv019841; Wed, 19 Feb 2014 19:57:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 11:57:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C9D591C0954; Wed, 19 Feb 2014 14:57:05 -0500 (EST)
Date: Wed, 19 Feb 2014 14:57:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael Labriola <michael.d.labriola@gmail.com>
Message-ID: <20140219195705.GA13089@phenom.dumpdata.com>
References: <20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
> >> 09:49:38 AM:
> >>
> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
> >> > bounces@lists.xen.org
> >> > Date: 01/24/2014 09:50 AM
> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >
> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> >> > >
> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> > > > Date: 01/21/2014 04:59 PM
> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> > > > Sent by: xen-devel-bounces@lists.xen.org
> >> > > >
> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
> >>
> >> > > > > 10:38:27 AM:
> >> > > > >
> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> > > > > > Date: 01/20/2014 10:38 AM
> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> > > > > >
> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
> >> wrote:
> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
> >> > > 10:14:36
> >> > > > > AM:
> >> > > > > > >
> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> >> > > > > > > > Date: 01/20/2014 10:14 AM
> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> > > > > > > >
> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
> >>
> >> > > wrote:
> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
> >> > > consistent
> >> > > > > > > crashes
> >> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
> >> > > unusably
> >> > > > >
> >> > > > > > > slow
> >> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
> >> > > > > indiviually on
> >> > > > > > >
> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
> >> metal.
> >> > > > > > > >
> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
> >> mean?
> >> > > > > > >
> >> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
> >> The
> >> > >
> >> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
> >> for
> >> > > a
> >> > > > > plain
> >> > > > > > > text console login.
> >> > > > > >
> >> > > > > > So sluggish is probably due to the PAT not being enabled. This
> >> patch
> >> > > > > > should be applied:
> >> > > > > >
> >> > > > > > lkml.org/lkml/2011/11/8/406
> >> > > > > >
> >> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> >> > > > > >
> >> > > > > > and these two reverted:
> >> > > > > >
> >> > > > > >  "xen/pat: Disable PAT support for now."
> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> >> > > > > >
> >> > > > > > Which is to say do:
> >> > > > > >
> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> >> > > > >
> >> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
> >> reverted
> >> > >
> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
> >> HD
> >> > > 7000
> >> > > > > sluggishness and appears to have fixed the R600 crashes (although
> >> it's
> >> > >
> >> > > > > only been running a few hours).
> >> > > > >
> >> > > > > How come that patch didn't get into mainline?  It looks pretty
> >> > > innocuous
> >> > > > > to me...
> >> > > >
> >> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
> >> had
> >> > > > the chance nor time to implement it.
> >> > >
> >> > > I see.  Well, I've got a handful of boxes in my lab that need that
> >> patch
> >> > > to be usable.  If you do come up with a more mainline-able solution,
> >> I'd
> >> > > gladly test it for you.  ;-)
> >> >
> >> > Thank you!
> >>
> >> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
> >> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
> >> again yeserday.  After being solid as a rock for 2 weeks as my primary
> >> workstation, X has crashed a half dozen or so times so far this week. I've
> >> been in Xen with 2 paravirtual linux guests running almost constantly for
> >> this whole period.  I don't understand what's changed, but my system has
> >> been entirely unstable now.  I did recompile my kernel... but I all did
> >> was merge the v3.13.1 stable commit into my working tree and turn a few
> >> things on (netfilter, wifi, a couple drivers turned on here and there).  I
> >> just went and verified that those patches are still applied in my tree
> >> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
> >> staring at a TTY login).
> >>
> >> When X crashes, my kernel log prints a couple dozen iterations of this. 3d
> >> acceleration no longer functions unless I reboot.  If memory serves, the
> >> unpatched behavior upon X crash was that the kernel continued to spew
> >> these errors until the whole box locked up.  At least that's not happening
> >> any more... ;-)
> >>
> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> GEM object (8192, 2, 4096, -12)
> >>
> >> and here's a slightly different variant that happened while I was typing
> >> this email (on a different machine, luckily):
> >>
> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [64348.297561] [TTM] Buffer eviction failed
> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> GEM object (16384, 2, 4096, -12)
> >>
> >> Any ideas?
> >
> > yes. I believe you have a memory leak. As in, some driver (or X) is
> > eating up the memory and not giving up enough. That means the TTM
> > layer is hitting its ceiling of how much memory it can allocate.
> >
> > Now finding the culprit is going to be a bit hard.
> >
> > You could use:
> >
> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
> >          pool      refills   pages freed    inuse available     name
> >            wc          259           224      808        4 nouveau 0000:05:00.0
> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
> >        cached           25             0       96        4 nouveau 0000:05:00.0
> >
> > to figure out if my thinking is really true. You should have a huge
> > 'inuse' count and almost no 'available'.
> 
> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
> always have the same contents.  Is that normal?

Yes.
> 
> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist bare
> metal... only in Xen.  Is that normal?

It would show up on bare metal if you boot with 'iommu=soft'.

> 
>          pool      refills   pages freed    inuse available     name
>        cached        15190         59551     1205        4 radeon 0000:01:00.0
> 
> If I watch that file while creating xterms, moving them around, etc, I can
> see the number available fluctuate between 3 and 6.  This is true, even on
> my box w/ the newer R7 card in it, which hasn't gotten that GEM error
> message (yet?).

OK, so let's see what happens when the error shows. Incidentally, how much
memory does your initial domain have? And is it different from what you get
when you boot bare metal?

Thank you.
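As a quick way to check the theory when the error next shows, the relevant columns can be pulled straight out of the pool stats (the debugfs path and column layout are taken from the example output above; the dri index and device address vary per machine):

```shell
# Extract 'inuse' and 'available' for the radeon device from the TTM DMA
# pool stats. A large, growing inuse with available pinned near zero would
# support the memory-leak theory.
awk '/radeon/ {print "inuse="$4, "available="$5}' \
    /sys/kernel/debug/dri/0/ttm_dma_page_pool
```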

> 
> 
> >
> > But that will get us just to confirm that yes - you have a big usage
> > of memory and it is hitting the ceiling.
> >
> > Now to actually figure out which application is hanging on these - that
> > I am not sure about. I think there is some drm info tool to investigate
> > how many pages each application is using. You can leave it running and
> > see which app is gulping up the memory. But I am not sure which
> > tool that is (if there was one).
> >
> > Well, lets do one step at a time - see if my theory is correct first.
> 
> 
> 
> -- 
> Michael D Labriola
> 21 Rip Van Winkle Cir
> Warwick, RI 02886
> 401-316-9844 (cell)
> 401-848-8871 (work)
> 401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 19:57:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 19:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDGv-00013n-Vf; Wed, 19 Feb 2014 19:57:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WGDGu-00013Z-Ku; Wed, 19 Feb 2014 19:57:33 +0000
Received: from [85.158.143.35:14296] by server-2.bemta-4.messagelabs.com id
	0F/1A-10891-BAC05035; Wed, 19 Feb 2014 19:57:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392839849!6898592!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30425 invoked from network); 19 Feb 2014 19:57:31 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 19:57:31 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JJvA4L031974
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 19:57:11 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJv7cW025121
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Wed, 19 Feb 2014 19:57:08 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1JJv7Yv019841; Wed, 19 Feb 2014 19:57:07 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 11:57:06 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C9D591C0954; Wed, 19 Feb 2014 14:57:05 -0500 (EST)
Date: Wed, 19 Feb 2014 14:57:05 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael Labriola <michael.d.labriola@gmail.com>
Message-ID: <20140219195705.GA13089@phenom.dumpdata.com>
References: <20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
> >> 09:49:38 AM:
> >>
> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
> >> > bounces@lists.xen.org
> >> > Date: 01/24/2014 09:50 AM
> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >
> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> >> > >
> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> > > > Date: 01/21/2014 04:59 PM
> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> > > > Sent by: xen-devel-bounces@lists.xen.org
> >> > > >
> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
> >>
> >> > > > > 10:38:27 AM:
> >> > > > >
> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> > > > > > Date: 01/20/2014 10:38 AM
> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> > > > > >
> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
> >> wrote:
> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
> >> > > 10:14:36
> >> > > > > AM:
> >> > > > > > >
> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> >> > > > > > > > Date: 01/20/2014 10:14 AM
> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> > > > > > > >
> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
> >>
> >> > > wrote:
> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
> >> > > consistent
> >> > > > > > > crashes
> >> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
> >> > > unusably
> >> > > > >
> >> > > > > > > slow
> >> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
> >> > > > > indiviually on
> >> > > > > > >
> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
> >> metal.
> >> > > > > > > >
> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
> >> mean?
> >> > > > > > >
> >> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
> >> The
> >> > >
> >> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
> >> for
> >> > > a
> >> > > > > plain
> >> > > > > > > text console login.
> >> > > > > >
> >> > > > > > So sluggish is probably due to the PAT not being enabled. This
> >> patch
> >> > > > > > should be applied:
> >> > > > > >
> >> > > > > > lkml.org/lkml/2011/11/8/406
> >> > > > > >
> >> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> >> > > > > >
> >> > > > > > and these two reverted:
> >> > > > > >
> >> > > > > >  "xen/pat: Disable PAT support for now."
> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> >> > > > > >
> >> > > > > > Which is to say do:
> >> > > > > >
> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> >> > > > >
> >> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
> >> reverted
> >> > >
> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
> >> HD
> >> > > 7000
> >> > > > > sluggishness and appears to have fixed the R600 crashes (although
> >> it's
> >> > >
> >> > > > > only been running a few hours).
> >> > > > >
> >> > > > > How come that patch didn't get into mainline?  It looks pretty
> >> > > innocuous
> >> > > > > to me...
> >> > > >
> >> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
> >> had
> >> > > > the chance nor time to implement it.
> >> > >
> >> > > I see.  Well, I've got a handful of boxes in my lab that need that
> >> patch
> >> > > to be usable.  If you do come up with a more mainline-able solution,
> >> I'd
> >> > > gladly test it for you.  ;-)
> >> >
> >> > Thank you!
> >>
> >> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
> >> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
> >> again yesterday.  After being solid as a rock for 2 weeks as my primary
> >> workstation, X has crashed a half dozen or so times so far this week. I've
> >> been in Xen with 2 paravirtual linux guests running almost constantly for
> >> this whole period.  I don't understand what's changed, but my system has
> >> been entirely unstable now.  I did recompile my kernel... but all I did
> >> was merge the v3.13.1 stable commit into my working tree and turn a few
> >> things on (netfilter, wifi, a couple drivers turned on here and there).  I
> >> just went and verified that those patches are still applied in my tree
> >> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
> >> staring at a TTY login).
> >>
> >> When X crashes, my kernel log prints a couple dozen iterations of this. 3D
> >> acceleration no longer functions unless I reboot.  If memory serves, the
> >> unpatched behavior upon X crash was that the kernel continued to spew
> >> these errors until the whole box locked up.  At least that's not happening
> >> any more... ;-)
> >>
> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> GEM object (8192, 2, 4096, -12)
> >>
> >> and here's a slightly different variant that happened while I was typing
> >> this email (on a different machine, luckily):
> >>
> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [64348.297561] [TTM] Buffer eviction failed
> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> (r:-12)!
> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> GEM object (16384, 2, 4096, -12)
> >>
> >> Any ideas?
> >
> > yes. I believe you have a memory leak. As in, some driver (or X) is
> > eating up the memory and not giving up enough. That means the TTM
> > layer is hitting its ceiling of how much memory it can allocate.
> >
> > Now finding the culprit is going to be a bit hard.
> >
> > You could use:
> >
> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
> >          pool      refills   pages freed    inuse available     name
> >            wc          259           224      808        4 nouveau 0000:05:00.0
> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
> >        cached           25             0       96        4 nouveau 0000:05:00.0
> >
> > to figure out if my thinking is really true. You should have a huge
> > 'inuse' count and almost no 'available'.
> 
> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
> always have the same contents.  Is that normal?

Yes.
> 
> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist bare
> metal... only in Xen.  Is that normal?

It would show up on baremetal if you boot with 'iommu=soft'

> 
>          pool      refills   pages freed    inuse available     name
>        cached        15190         59551     1205        4 radeon 0000:01:00.0
> 
> If I watch that file while creating xterms, moving them around, etc, I can
> see the number available fluctuate between 3 and 6.  This is true, even on
> my box w/ the newer R7 card in it, which hasn't gotten that GEM error
> message (yet?).

OK, so let's see what happens when the error shows. Incidentally - how much
memory does your initial domain have? And is it different from when you
boot it as baremetal?
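
A rough way to watch those columns while you reproduce - just a sketch,
and the dri index, debugfs path, and helper name are assumptions for
illustration (adjust for your card):

```shell
#!/bin/sh
# Extract the "inuse" and "available" columns from ttm_dma_page_pool output.
# Column layout follows the sample quoted above:
#   pool  refills  pages freed  inuse  available  name
pool_usage() {
    awk '/radeon/ { print "inuse=" $4, "available=" $5 }'
}

# While reproducing, one could poll the file, e.g.:
#   while sleep 5; do cat /sys/kernel/debug/dri/0/ttm_dma_page_pool | pool_usage; done
# Demo on the sample line quoted earlier:
echo "      cached        15190         59551     1205        4 radeon 0000:01:00.0" | pool_usage
# prints: inuse=1205 available=4
```

If my theory is right, 'inuse' should climb toward the pool ceiling right
before the GEM allocation failures start.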

Thank you.

> 
> 
> >
> > But that will get us just to confirm that yes - you have a big usage
> > of memory and it is hitting the ceiling.
> >
> > Now to actually figure out which application is hanging on these - that
> > I am not sure about. I think there is some drm info tool to investigate
> > how many pages each application is using. You can leave it running and
> > see which app is gulping up the memory. But I am not sure which
> > tool that is (if there was one).
> >
> > Well, let's do one step at a time - see if my theory is correct first.
> 
> 
> 
> -- 
> Michael D Labriola
> 21 Rip Van Winkle Cir
> Warwick, RI 02886
> 401-316-9844 (cell)
> 401-848-8871 (work)
> 401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 20:30:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 20:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDmq-0001my-NM; Wed, 19 Feb 2014 20:30:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WGDmo-0001mq-5S; Wed, 19 Feb 2014 20:30:31 +0000
Received: from [85.158.137.68:44275] by server-14.bemta-3.messagelabs.com id
	91/89-08196-56415035; Wed, 19 Feb 2014 20:30:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392841826!2988373!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13996 invoked from network); 19 Feb 2014 20:30:27 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 20:30:27 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JKUBJf020292
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 20:30:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1JKU98O023937
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 20:30:10 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1JKU8gu023904; Wed, 19 Feb 2014 20:30:08 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 12:30:08 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 61EAD1C0954; Wed, 19 Feb 2014 15:30:07 -0500 (EST)
Date: Wed, 19 Feb 2014 15:30:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael Labriola <michael.d.labriola@gmail.com>
Message-ID: <20140219203007.GA13324@phenom.dumpdata.com>
References: <20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
	<20140219195705.GA13089@phenom.dumpdata.com>
	<CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 03:08:08PM -0500, Michael Labriola wrote:
> On Wed, Feb 19, 2014 at 2:57 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
> >> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com> wrote:
> >> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
> >> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
> >> >> 09:49:38 AM:
> >> >>
> >> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
> >> >> > bounces@lists.xen.org
> >> >> > Date: 01/24/2014 09:50 AM
> >> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> >
> >> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> >> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> >> >> > >
> >> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> >> > > > Date: 01/21/2014 04:59 PM
> >> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> > > > Sent by: xen-devel-bounces@lists.xen.org
> >> >> > > >
> >> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> >> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
> >> >>
> >> >> > > > > 10:38:27 AM:
> >> >> > > > >
> >> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> >> > > > > > Date: 01/20/2014 10:38 AM
> >> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> > > > > >
> >> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
> >> >> wrote:
> >> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
> >> >> > > 10:14:36
> >> >> > > > > AM:
> >> >> > > > > > >
> >> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> >> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> >> >> > > > > > > > Date: 01/20/2014 10:14 AM
> >> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> > > > > > > >
> >> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
> >> >>
> >> >> > > wrote:
> >> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
> >> >> > > consistent
> >> >> > > > > > > crashes
> >> >> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
> >> >> > > unusably
> >> >> > > > >
> >> >> > > > > > > slow
> >> >> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
> >> >> > > > > individually on
> >> >> > > > > > >
> >> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
> >> >> metal.
> >> >> > > > > > > >
> >> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
> >> >> mean?
> >> >> > > > > > >
> >> >> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
> >> >> The
> >> >> > >
> >> >> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
> >> >> for
> >> >> > > a
> >> >> > > > > plain
> >> >> > > > > > > text console login.
> >> >> > > > > >
> >> >> > > > > > So sluggish is probably due to the PAT not being enabled. This
> >> >> patch
> >> >> > > > > > should be applied:
> >> >> > > > > >
> >> >> > > > > > lkml.org/lkml/2011/11/8/406
> >> >> > > > > >
> >> >> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> >> >> > > > > >
> >> >> > > > > > and these two reverted:
> >> >> > > > > >
> >> >> > > > > >  "xen/pat: Disable PAT support for now."
> >> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> >> >> > > > > >
> >> >> > > > > > Which is to say do:
> >> >> > > > > >
> >> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> >> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> >> >> > > > >
> >> >> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
> >> >> reverted
> >> >> > >
> >> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
> >> >> HD
> >> >> > > 7000
> >> >> > > > > sluggishness and appears to have fixed the R600 crashes (although
> >> >> it's
> >> >> > >
> >> >> > > > > only been running a few hours).
> >> >> > > > >
> >> >> > > > > How come that patch didn't get into mainline?  It looks pretty
> >> >> > > innocuous
> >> >> > > > > to me...
> >> >> > > >
> >> >> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
> >> >> had
> >> >> > > > the chance nor time to implement it.
> >> >> > >
> >> >> > > I see.  Well, I've got a handful of boxes in my lab that need that
> >> >> patch
> >> >> > > to be usable.  If you do come up with a more mainline-able solution,
> >> >> I'd
> >> >> > > gladly test it for you.  ;-)
> >> >> >
> >> >> > Thank you!
> >> >>
> >> >> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
> >> >> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
> >> >> again yesterday.  After being solid as a rock for 2 weeks as my primary
> >> >> workstation, X has crashed a half dozen or so times so far this week. I've
> >> >> been in Xen with 2 paravirtual linux guests running almost constantly for
> >> >> this whole period.  I don't understand what's changed, but my system has
> >> >> been entirely unstable now.  I did recompile my kernel... but all I did
> >> >> was merge the v3.13.1 stable commit into my working tree and turn a few
> >> >> things on (netfilter, wifi, a couple drivers turned on here and there).  I
> >> >> just went and verified that those patches are still applied in my tree
> >> >> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
> >> >> staring at a TTY login).
> >> >>
> >> >> When X crashes, my kernel log prints a couple dozen iterations of this. 3D
> >> >> acceleration no longer functions unless I reboot.  If memory serves, the
> >> >> unpatched behavior upon X crash was that the kernel continued to spew
> >> >> these errors until the whole box locked up.  At least that's not happening
> >> >> any more... ;-)
> >> >>
> >> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> >> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> >> GEM object (8192, 2, 4096, -12)
> >> >>
> >> >> and here's a slightly different variant that happened while I was typing
> >> >> this email (on a different machine, luckily):
> >> >>
> >> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
> >> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> >> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> >> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [64348.297561] [TTM] Buffer eviction failed
> >> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> >> GEM object (16384, 2, 4096, -12)
> >> >>
> >> >> Any ideas?
> >> >
> >> > yes. I believe you have a memory leak. As in, some driver (or X) is
> >> > eating up the memory and not giving up enough. That means the TTM
> >> > layer is hitting its ceiling of how much memory it can allocate.
> >> >
> >> > Now finding the culprit is going to be a bit hard.
> >> >
> >> > You could use:
> >> >
> >> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
> >> >          pool      refills   pages freed    inuse available     name
> >> >            wc          259           224      808        4 nouveau 0000:05:00.0
> >> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
> >> >        cached           25             0       96        4 nouveau 0000:05:00.0
> >> >
> >> > to figure out if my thinking is really true. You should have a huge
> >> > 'inuse' count and almost no 'available'.
> >>
> >> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
> >> always have the same contents.  Is that normal?
> >
> > Yes.
> >>
> >> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist bare
> >> metal... only in Xen.  Is that normal?
> >
> > It would show up on baremetal if you boot with 'iommu=soft'
> >
> >>
> >>          pool      refills   pages freed    inuse available     name
> >>        cached        15190         59551     1205        4 radeon 0000:01:00.0
> >>
> >> If I watch that file while creating xterms, moving them around, etc, I can
> >> see the number available fluctuate between 3 and 6.  This is true, even on
> >> my box w/ the newer R7 card in it, which hasn't gotten that GEM error
> >> message (yet?).
> >
> > OK, so let's see what happens when the error shows. Incidentally - how much
> > memory does your initial domain have? And is it different from when you
> > boot it as baremetal?
> 
> I can reproduce the problem very reliably on 3 boxes.  All three are
> booting the dom0 with as much RAM as Xen will give them, then giving
> up some of their RAM as needed when I create domUs.  The 3 boxes have
> 4G, 8G, and 16G if memory serves.
> 
> Does the amount of RAM on the actual video cards matter?  All the
> older cards (that crash all the time) have 2G, whereas the R7 that
> hasn't crashed yet only has 1G.

The TTM pool has a limit (a hard one). It is pretty simple:


                pr_info("Zone %7s: Available graphics memory: %llu kiB\n",
                        zone->name, (unsigned long long)zone->max_mem >> 10);
        }
        ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
        ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));

so 1/4 of your memory. Which means that when you boot dom0 with as much
memory as possible and then balloon it down, you might confuse it
(the initial memory assumption is made during bootup).

If you boot the troubled dom0s with 'dom0_mem' capped at some good fixed
number (e.g. 'dom0_mem=max:2048M') - that might shed some light on this.


> 
> I've been reproducing the crash by just logging in and out of fluxbox
> via XDM over and over again right after booting my dom0 in Xen w/ no
> guests running.  That makes it happen within a few minutes.  Otherwise
> it randomly crashes while I'm in the middle of trying to work... ;-)

HA!

Does fluxbox use a lot of graphics? I mean, does it do a lot of fancy
things when it starts up and shuts down?

> 
> >
> > Thank you.
> >
> >>
> >>
> >> >
> >> > But that will get us just to confirm that yes - you have a big usage
> >> > of memory and it is hitting the ceiling.
> >> >
> >> > Now to actually figure out which application is hanging on these - that
> >> > I am not sure about. I think there is some drm info tool to investigate
> >> > how many pages each application is using. You can leave it running and
> >> > see which app is gulping up the memory. But I am not sure which
> >> > tool that is (if there was one).
> >> >
> >> > Well, let's do one step at a time - see if my theory is correct first.
> 
> -- 
> Michael D Labriola
> 21 Rip Van Winkle Cir
> Warwick, RI 02886
> 401-316-9844 (cell)
> 401-848-8871 (work)
> 401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 20:30:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 20:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGDmq-0001my-NM; Wed, 19 Feb 2014 20:30:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>)
	id 1WGDmo-0001mq-5S; Wed, 19 Feb 2014 20:30:31 +0000
Received: from [85.158.137.68:44275] by server-14.bemta-3.messagelabs.com id
	91/89-08196-56415035; Wed, 19 Feb 2014 20:30:29 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392841826!2988373!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13996 invoked from network); 19 Feb 2014 20:30:27 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 20:30:27 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1JKUBJf020292
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 20:30:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1JKU98O023937
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 19 Feb 2014 20:30:10 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1JKU8gu023904; Wed, 19 Feb 2014 20:30:08 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 12:30:08 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 61EAD1C0954; Wed, 19 Feb 2014 15:30:07 -0500 (EST)
Date: Wed, 19 Feb 2014 15:30:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Michael Labriola <michael.d.labriola@gmail.com>
Message-ID: <20140219203007.GA13324@phenom.dumpdata.com>
References: <20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
	<20140219195705.GA13089@phenom.dumpdata.com>
	<CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 03:08:08PM -0500, Michael Labriola wrote:
> On Wed, Feb 19, 2014 at 2:57 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> > On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
> >> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
> >> <konrad.wilk@oracle.com> wrote:
> >> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
> >> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
> >> >> 09:49:38 AM:
> >> >>
> >> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
> >> >> > bounces@lists.xen.org
> >> >> > Date: 01/24/2014 09:50 AM
> >> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> >
> >> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
> >> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
> >> >> > >
> >> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> >> > > > Date: 01/21/2014 04:59 PM
> >> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> > > > Sent by: xen-devel-bounces@lists.xen.org
> >> >> > > >
> >> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
> >> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
> >> >>
> >> >> > > > > 10:38:27 AM:
> >> >> > > > >
> >> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> >> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> >> >> > > > > > Date: 01/20/2014 10:38 AM
> >> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> > > > > >
> >> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
> >> >> wrote:
> >> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
> >> >> > > 10:14:36
> >> >> > > > > AM:
> >> >> > > > > > >
> >> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> >> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> >> >> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
> >> >> > > > > > > > Date: 01/20/2014 10:14 AM
> >> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> >> >> > > > > > > >
> >> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
> >> >>
> >> >> > > wrote:
> >> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
> >> >> > > consistent
> >> >> > > > > > > crashes
> >> >> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
> >> >> > > unusably
> >> >> > > > >
> >> >> > > > > > > slow
> >> >> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
> >> >> > > > > indiviually on
> >> >> > > > > > >
> >> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
> >> >> metal.
> >> >> > > > > > > >
> >> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
> >> >> mean?
> >> >> > > > > > >
> >> >> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
> >> >> The
> >> >> > >
> >> >> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
> >> >> for
> >> >> > > a
> >> >> > > > > plain
> >> >> > > > > > > text console login.
> >> >> > > > > >
> >> >> > > > > > So sluggish is probably due to the PAT not being enabled. This
> >> >> patch
> >> >> > > > > > should be applied:
> >> >> > > > > >
> >> >> > > > > > lkml.org/lkml/2011/11/8/406
> >> >> > > > > >
> >> >> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> >> >> > > > > >
> >> >> > > > > > and these two reverted:
> >> >> > > > > >
> >> >> > > > > >  "xen/pat: Disable PAT support for now."
> >> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> >> >> > > > > >
> >> >> > > > > > Which is to say do:
> >> >> > > > > >
> >> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> >> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> >> >> > > > >
> >> >> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
> >> >> reverted
> >> >> > >
> >> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
> >> >> HD
> >> >> > > 7000
> >> >> > > > > sluggishness and appears to have fixed the R600 crashes (although
> >> >> it's
> >> >> > >
> >> >> > > > > only been running a few hours).
> >> >> > > > >
> >> >> > > > > How come that patch didn't get into mainline?  It looks pretty
> >> >> > > innocuous
> >> >> > > > > to me...
> >> >> > > >
> >> >> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
> >> >> had
> >> >> > > > the chance nor time to implement it.
> >> >> > >
> >> >> > > I see.  Well, I've got a handful of boxes in my lab that need that
> >> >> patch
> >> >> > > to be usable.  If you do come up with a more mainline-able solution,
> >> >> I'd
> >> >> > > gladly test it for you.  ;-)
> >> >> >
> >> >> > Thank you!
> >> >>
> >> >> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
> >> >> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
> >> >> again yesterday.  After being solid as a rock for 2 weeks as my primary
> >> >> workstation, X has crashed a half dozen or so times so far this week. I've
> >> >> been in Xen with 2 paravirtual linux guests running almost constantly for
> >> >> this whole period.  I don't understand what's changed, but my system has
> >> >> been entirely unstable now.  I did recompile my kernel... but all I did
> >> >> was merge the v3.13.1 stable commit into my working tree and turn a few
> >> >> things on (netfilter, wifi, a couple drivers turned on here and there).  I
> >> >> just went and verified that those patches are still applied in my tree
> >> >> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
> >> >> staring at a TTY login).
> >> >>
> >> >> When X crashes, my kernel log prints a couple dozen iterations of this. 3d
> >> >> acceleration no longer functions unless I reboot.  If memory serves, the
> >> >> unpatched behavior upon X crash was that the kernel continued to spew
> >> >> these errors until the whole box locked up.  At least that's not happening
> >> >> any more... ;-)
> >> >>
> >> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> >> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> >> GEM object (8192, 2, 4096, -12)
> >> >>
> >> >> and here's a slightly different variant that happened while I was typing
> >> >> this email (on a different machine, luckily):
> >> >>
> >> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
> >> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> >> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> >> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [64348.297561] [TTM] Buffer eviction failed
> >> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> >> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
> >> >> (r:-12)!
> >> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
> >> >> GEM object (16384, 2, 4096, -12)
> >> >>
> >> >> Any ideas?
> >> >
> >> > Yes. I believe you have a memory leak. As in, some driver (or X) is
> >> > eating up the memory and not giving it back. That means the TTM
> >> > layer is hitting its ceiling of how much memory it can allocate.
> >> >
> >> > Now finding the culprit is going to be a bit hard.
> >> >
> >> > You could use:
> >> >
> >> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
> >> >          pool      refills   pages freed    inuse available     name
> >> >            wc          259           224      808        4 nouveau 0000:05:00.0
> >> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
> >> >        cached           25             0       96        4 nouveau 0000:05:00.0
> >> >
> >> > to figure out if my thinking is really true. You should have a huge
> >> > 'inuse' count and almost no 'available'.
> >>
> >> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
> >> always have the same contents.  Is that normal?
> >
> > Yes.
> >>
> >> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist bare
> >> metal... only in Xen.  Is that normal?
> >
> > It would show up on baremetal if you boot with 'iommu=soft'
> >
> >>
> >>          pool      refills   pages freed    inuse available     name
> >>        cached        15190         59551     1205        4 radeon 0000:01:00.0
> >>
> >> If I watch that file while creating xterms, moving them around, etc, I can
> >> see the number available fluctuate between 3 and 6.  This is true, even on
> >> my box w/ the newer R7 card in it, which hasn't gotten that GEM error
> >> message (yet?).
> >
> > OK, so let's see what happens when the error shows. Incidentally - how much
> > memory does your initial domain have? And is it different than when you
> > boot it bare metal?
> 
> I've got the problem very reproducible on 3 boxes.  All three are
> booting the dom0 with as much RAM as Xen will give them, then giving
> up some of their RAM as needed when I create domUs.  The 3 boxes have
> 4G, 8G, and 16G if memory serves.
> 
> Does the amount of RAM on the actual video cards matter?  All the
> older cards (that crash all the time) have 2G, whereas the R7 that
> hasn't crashed yet only has 1G.

The TTM pool has a limit (a hard one). It is pretty simple:


		pr_info("Zone %7s: Available graphics memory: %llu kiB\n",
			zone->name, (unsigned long long)zone->max_mem >> 10);
	}
	ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
	ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));

so 1/4 of your memory. Which means that when you boot dom0 with as much
memory as possible and then balloon down, you might confuse it
(as the initial memory assumption is made during bootup).

If you boot the troubled dom0s with 'dom0_mem=max:<N>' set to some good
number - that might shed some light on this.


> 
> I've been reproducing the crash by just logging in and out of fluxbox
> via XDM over and over again right after booting my dom0 in Xen w/ no
> guests running.  That makes it happen within a few minutes.  Otherwise
> it randomly crashes while I'm in the middle of trying to work... ;-)

HA!

Does fluxbox use a lot of graphics? I mean, does it do a lot of fancy
things when it starts up and shuts down?

> 
> >
> > Thank you.
> >
> >>
> >>
> >> >
> >> > But that will get us just to confirm that yes - you have a big usage
> >> > of memory and it is hitting the ceiling.
> >> >
> >> > Now to actually figure out which application is hanging on these - that
> >> > I am not sure about. I think there is some drm info tool to investigate
> >> > how many pages each application is using. You can leave it running and
> >> > see which app is gulping up the memory. But I am not sure which
> >> > tool that is (if there was one).
> >> >
> >> > Well, let's do one step at a time - see if my theory is correct first.
> 
> -- 
> Michael D Labriola
> 21 Rip Van Winkle Cir
> Warwick, RI 02886
> 401-316-9844 (cell)
> 401-848-8871 (work)
> 401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 21:03:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 21:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGEIA-0002XF-5P; Wed, 19 Feb 2014 21:02:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0127121803=mlabriol@gdeb.com>)
	id 1WGEI7-0002X6-G8; Wed, 19 Feb 2014 21:02:51 +0000
Received: from [85.158.137.68:14743] by server-13.bemta-3.messagelabs.com id
	00/8F-26923-9FB15035; Wed, 19 Feb 2014 21:02:49 +0000
X-Env-Sender: prvs=0127121803=mlabriol@gdeb.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392843766!2992900!1
X-Originating-IP: [153.11.250.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MSA9PiAyMDMzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15940 invoked from network); 19 Feb 2014 21:02:46 -0000
Received: from mx2.gd-ms.com (HELO mx2.gd-ms.com) (153.11.250.41)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 21:02:46 -0000
Received: from ebsmtp.gdeb.com ([153.11.13.41])
	by mx2.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1WGEHy-0006wg-3r; Wed, 19 Feb 2014 16:02:42 -0500
In-Reply-To: <20140219203007.GA13324@phenom.dumpdata.com>
References: <20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
	<20140219195705.GA13089@phenom.dumpdata.com>
	<CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
	<20140219203007.GA13324@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
X-KeepSent: 78ACED06:414AEB6F-85257C84:0072CCCA;
 type=4; name=$KeepSent
Message-ID: <OF78ACED06.414AEB6F-ON85257C84.0072CCCA-85257C84.0073988C@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Wed, 19 Feb 2014 16:02:36 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx2.gd-ms.com  1WGEHy-0006wg-3r
X-GDM-EVAL: score: /30; hits: 
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Michael Labriola <michael.d.labriola@gmail.com>,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 02/19/2014 
03:30:07 PM:

> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> To: Michael Labriola <michael.d.labriola@gmail.com>, 
> Cc: Michael D Labriola <mlabriol@gdeb.com>, Konrad Rzeszutek Wilk 
> <konrad@darnok.org>, xen-devel@lists.xen.org, 
xen-devel-bounces@lists.xen.org
> Date: 02/19/2014 03:30 PM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> 
> On Wed, Feb 19, 2014 at 03:08:08PM -0500, Michael Labriola wrote:
> > On Wed, Feb 19, 2014 at 2:57 PM, Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com> wrote:
> > > On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
> > >> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
> > >> <konrad.wilk@oracle.com> wrote:
> > >> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola 
wrote:
> > >> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 
01/24/2014
> > >> >> 09:49:38 AM:
> > >> >>
> > >> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> > >> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, 
xen-devel-
> > >> >> > bounces@lists.xen.org
> > >> >> > Date: 01/24/2014 09:50 AM
> > >> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> >
> > >> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola 
wrote:
> > >> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 
PM:
> > >> >> > >
> > >> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> > >> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > >> >> > > > Date: 01/21/2014 04:59 PM
> > >> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> > > > Sent by: xen-devel-bounces@lists.xen.org
> > >> >> > > >
> > >> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D 
> Labriola wrote:
> > >> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote 
> on 01/20/2014
> > >> >>
> > >> >> > > > > 10:38:27 AM:
> > >> >> > > > >
> > >> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> > >> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > >> >> > > > > > Date: 01/20/2014 10:38 AM
> > >> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> > > > > >
> > >> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D 
Labriola
> > >> >> wrote:
> > >> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote 
on 01/20/2014
> > >> >> > > 10:14:36
> > >> >> > > > > AM:
> > >> >> > > > > > >
> > >> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > >> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > > > > > > > Cc: xen-devel@lists.xen.org, 
michael.d.labriola@gmail.com
> > >> >> > > > > > > > Date: 01/20/2014 10:14 AM
> > >> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> > > > > > > >
> > >> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, 
> Michael D Labriola
> > >> >>
> > >> >> > > wrote:
> > >> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm 
having
> > >> >> > > consistent
> > >> >> > > > > > > crashes
> > >> >> > > > > > > > > with multiple older R600 series (HD 6470 and 
> HD 6570) and
> > >> >> > > unusably
> > >> >> > > > >
> > >> >> > > > > > > slow
> > >> >> > > > > > > > > graphics with a newer HD7000 (can see each line 
refresh
> > >> >> > > > > individually on
> > >> >> > > > > > >
> > >> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine 
bare
> > >> >> metal.
> > >> >> > > > > > > >
> > >> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that 
what you
> > >> >> mean?
> > >> >> > > > > > >
> > >> >> > > > > > > The R600 problems happen when in X, using OpenGL, 
> on my dom0.
> > >> >> The
> > >> >> > >
> > >> >> > > > > > > RadeonSI sluggishness is when using the KMS 
> framebuffer device
> > >> >> for
> > >> >> > > a
> > >> >> > > > > plain
> > >> >> > > > > > > text console login.
> > >> >> > > > > >
> > >> >> > > > > > So sluggish is probably due to the PAT not being 
enabled. This
> > >> >> patch
> > >> >> > > > > > should be applied:
> > >> >> > > > > >
> > >> >> > > > > > lkml.org/lkml/2011/11/8/406
> > >> >> > > > > >
> > >> >> > > > > > (or 
http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > >> >> > > > > >
> > >> >> > > > > > and these two reverted:
> > >> >> > > > > >
> > >> >> > > > > >  "xen/pat: Disable PAT support for now."
> > >> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> > >> >> > > > > >
> > >> >> > > > > > Which is to say do:
> > >> >> > > > > >
> > >> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > >> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > >> >> > > > >
> > >> >> > > > > Thanks!  I cherry-picked that patch out of your testing 
tree,
> > >> >> reverted
> > >> >> > >
> > >> >> > > > > those 2 commits, recompiled and installed. 
Definitely fixed the
> > >> >> HD
> > >> >> > > 7000
> > >> >> > > > > sluggishness and appears to have fixed the R600 
> crashes (although
> > >> >> it's
> > >> >> > >
> > >> >> > > > > only been running a few hours).
> > >> >> > > > >
> > >> >> > > > > How come that patch didn't get into mainline?  It looks 
pretty
> > >> >> > > innocuous
> > >> >> > > > > to me...
> > >> >> > > >
> > >> >> > > > <Sigh> the x86 maintainers wanted a different route. And I 
hadn't
> > >> >> had
> > >> >> > > > the chance nor time to implement it.
> > >> >> > >
> > >> >> > > I see.  Well, I've got a handful of boxes in my lab that 
need that
> > >> >> patch
> > >> >> > > to be usable.  If you do come up with a more 
mainline-able solution,
> > >> >> I'd
> > >> >> > > gladly test it for you.  ;-)
> > >> >> >
> > >> >> > Thank you!
> > >> >>
> > >> >> Uh, oh.  Looks like those reverts and patches didn't entirely 
fix my
> > >> >> problem.  My box with the HD5450 (r600 gallium3d) started going 
bonkers
> > >> >> again yesterday.  After being solid as a rock for 2 weeks as my 
primary
> > >> >> workstation, X has crashed a half dozen or so times so far 
> this week. I've
> > >> >> been in Xen with 2 paravirtual linux guests running almost 
> constantly for
> > >> >> this whole period.  I don't understand what's changed, but my 
system has
> > >> >> been entirely unstable now.  I did recompile my kernel... but 
all I did
> > >> >> was merge the v3.13.1 stable commit into my working tree and 
turn a few
> > >> >> things on (netfilter, wifi, a couple drivers turned on here 
> and there).  I
> > >> >> just went and verified that those patches are still applied in 
my tree
> > >> >> (i.e., I didn't accidentally undo them).  I'm scratching my head 
(and
> > >> >> staring at a TTY login).
> > >> >>
> > >> >> When X crashes, my kernel log prints a couple dozen iterations
> of this. 3d
> > >> >> acceleration no longer functions unless I reboot.  If memory 
serves, the
> > >> >> unpatched behavior upon X crash was that the kernel continued to 
spew
> > >> >> these errors until the whole box locked up.  At least that's 
> not happening
> > >> >> any more... ;-)
> > >> >>
> > >> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> > >> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached 
pool
> > >> >> (r:-12)!
> > >> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> > >> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached 
pool
> > >> >> (r:-12)!
> > >> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to 
allocate
> > >> >> GEM object (8192, 2, 4096, -12)
> > >> >>
> > >> >> and here's a slightly different variant that happened while I 
was typing
> > >> >> this email (on a different machine, luckily):
> > >> >>
> > >> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 
0
> > >> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> > >> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> > >> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached 
pool
> > >> >> (r:-12)!
> > >> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> > >> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached 
pool
> > >> >> (r:-12)!
> > >> >> [64348.297561] [TTM] Buffer eviction failed
> > >> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> > >> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached 
pool
> > >> >> (r:-12)!
> > >> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to 
allocate
> > >> >> GEM object (16384, 2, 4096, -12)
> > >> >>
> > >> >> Any ideas?
> > >> >
> > >> > yes. I believe you have a memory leak. As in, some driver (or X) 
is
> > >> > eating up the memory and not giving up enough. That means the TTM
> > >> > layer is hitting its ceiling of how much memory it can allocate.
> > >> >
> > >> > Now finding the culprit is going to be a bit hard.
> > >> >
> > >> > You could use:
> > >> >
> > >> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
> > >> >          pool      refills   pages freed    inuse available name
> > >> >            wc          259           224      808        4 
> nouveau 0000:05:00.0
> > >> >        cached      3403058      13561071    51158        3 
> radeon 0000:01:00.0
> > >> >        cached           25             0       96        4 
> nouveau 0000:05:00.0
> > >> >
> > >> > to figure out if my thinking is really true. You should have a 
huge
> > >> > 'inuse' count and almost no 'available'.
> > >>
> > >> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which 
appear to
> > >> always have the same contents.  Is that normal?
> > >
> > > Yes.
> > >>
> > >> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist 
bare
> > >> metal... only in Xen.  Is that normal?
> > >
> > > It would show up on baremetal if you boot with 'iommu=soft'
> > >
> > >>
> > >>          pool      refills   pages freed    inuse available name
> > >>        cached        15190         59551     1205        4 radeon
> 0000:01:00.0
> > >>
> > >> If I watch that file while creating xterms, moving them around, 
etc, I can
> > >> see the number available fluctuate between 3 and 6.  This is true, 
even on
> > >> my box w/ the newer R7 card in it, which hasn't gotten that GEM 
error
> > >> message (yet?).
> > >
> > > OK, so let's see what happens when the error shows. Incidentally - 
> how much
> > > memory does your initial domain have? And is it different than when 
you
> > > boot it bare metal?
> > 
> > I've got the problem very reproducible on 3 boxes.  All three are
> > booting the dom0 with as much RAM as Xen will give them, then giving
> > up some of their RAM as needed when I create domUs.  The 3 boxes have
> > 4G, 8G, and 16G if memory serves.

Actually, they're 6G, 8G, and 16G... and I've got a box that I can't 
reproduce the problem on even though it's got the same video card... and 
it only has 2G of RAM.  Could this be a PAE/HIGHMEM issue?  I'm running 
32bit with CONFIG_HIGHMEM64G on all my boxes.


> > 
> > Does the amount of RAM on the actual video cards matter?  All the
> > older cards (that crash all the time) have 2G, whereas the R7 that
> > hasn't crashed yet only has 1G.
> 
> The TTM pool has a limit (a hard one). It is pretty simple:
> 
> 
> 		pr_info("Zone %7s: Available graphics memory: %llu kiB\n",
> 			zone->name, (unsigned long long)zone->max_mem >> 10);
> 	}
> 	ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
> 	ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
> 
> so 1/4 of your memory. Which means that when you boot dom0 with as much
> memory as possible and then balloon down you might confuse it
> (as the initial memory assumption is done during bootup).
> 
> If you boot the troubled dom0s with 'dom0_mem=max:<N>' set to some good
> number - that might shed some light on this.

Ok, I've got one of the problematic boxes booted with dom0_mem=5G and it 
doesn't seem to be crashing.  Fingers crossed!


> 
> 
> > 
> > I've been reproducing the crash by just logging in and out of fluxbox
> > via XDM over and over again right after booting my dom0 in Xen w/ no
> > guests running.  That makes it happen within a few minutes.  Otherwise
> > it randomly crashes while I'm in the middle of trying to work... ;-)
> 
> HA!
> 
> Does fluxbox use a lot of graphics? I mean, does it do a lot of fancy
> things when it starts up and shuts down?

Negative.  It does next to nothing.  Super lightweight; it pretty much just 
gets rid of the login box and puts a taskbar-type thing at the bottom of 
the screen.  I'd say the majority of my crashes have happened in 
Enlightenment (with plenty of extra fancy things), but it HAS happened in 
fluxbox doing next to nothing, which was pretty surprising.


---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 21:03:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 21:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGEIA-0002XF-5P; Wed, 19 Feb 2014 21:02:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <prvs=0127121803=mlabriol@gdeb.com>)
	id 1WGEI7-0002X6-G8; Wed, 19 Feb 2014 21:02:51 +0000
Received: from [85.158.137.68:14743] by server-13.bemta-3.messagelabs.com id
	00/8F-26923-9FB15035; Wed, 19 Feb 2014 21:02:49 +0000
X-Env-Sender: prvs=0127121803=mlabriol@gdeb.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392843766!2992900!1
X-Originating-IP: [153.11.250.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTUzLjExLjI1MC40MSA9PiAyMDMzMzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15940 invoked from network); 19 Feb 2014 21:02:46 -0000
Received: from mx2.gd-ms.com (HELO mx2.gd-ms.com) (153.11.250.41)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 19 Feb 2014 21:02:46 -0000
Received: from ebsmtp.gdeb.com ([153.11.13.41])
	by mx2.gd-ms.com with esmtp (Exim 4.76)
	(envelope-from <mlabriol@gdeb.com>)
	id 1WGEHy-0006wg-3r; Wed, 19 Feb 2014 16:02:42 -0500
In-Reply-To: <20140219203007.GA13324@phenom.dumpdata.com>
References: <20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
	<20140219195705.GA13089@phenom.dumpdata.com>
	<CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
	<20140219203007.GA13324@phenom.dumpdata.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
MIME-Version: 1.0
X-KeepSent: 78ACED06:414AEB6F-85257C84:0072CCCA;
 type=4; name=$KeepSent
Message-ID: <OF78ACED06.414AEB6F-ON85257C84.0072CCCA-85257C84.0073988C@gdeb.com>
From: Michael D Labriola <mlabriol@gdeb.com>
Date: Wed, 19 Feb 2014 16:02:36 -0500
X-GDMEncrypt: FALSE
X-GDMMarking: NOT_SENSITIVE
X-GDM-MESSAGE-ID: mx2.gd-ms.com  1WGEHy-0006wg-3r
X-GDM-EVAL: score: /30; hits: 
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Michael Labriola <michael.d.labriola@gmail.com>,
	xen-devel-bounces@lists.xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 02/19/2014 
03:30:07 PM:

> From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> To: Michael Labriola <michael.d.labriola@gmail.com>, 
> Cc: Michael D Labriola <mlabriol@gdeb.com>, Konrad Rzeszutek Wilk 
> <konrad@darnok.org>, xen-devel@lists.xen.org, 
xen-devel-bounces@lists.xen.org
> Date: 02/19/2014 03:30 PM
> Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> 
> On Wed, Feb 19, 2014 at 03:08:08PM -0500, Michael Labriola wrote:
> > On Wed, Feb 19, 2014 at 2:57 PM, Konrad Rzeszutek Wilk
> > <konrad.wilk@oracle.com> wrote:
> > > On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
> > >> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
> > >> <konrad.wilk@oracle.com> wrote:
> > >> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola 
wrote:
> > >> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 
01/24/2014
> > >> >> 09:49:38 AM:
> > >> >>
> > >> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> > >> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, 
xen-devel-
> > >> >> > bounces@lists.xen.org
> > >> >> > Date: 01/24/2014 09:50 AM
> > >> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> >
> > >> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola 
wrote:
> > >> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 
PM:
> > >> >> > >
> > >> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> > >> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > >> >> > > > Date: 01/21/2014 04:59 PM
> > >> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> > > > Sent by: xen-devel-bounces@lists.xen.org
> > >> >> > > >
> > >> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D 
> Labriola wrote:
> > >> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote 
> on 01/20/2014
> > >> >>
> > >> >> > > > > 10:38:27 AM:
> > >> >> > > > >
> > >> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > >> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
> > >> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
> > >> >> > > > > > Date: 01/20/2014 10:38 AM
> > >> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> > > > > >
> > >> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D 
Labriola
> > >> >> wrote:
> > >> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote 
on01/20/2014
> > >> >> > > 10:14:36
> > >> >> > > > > AM:
> > >> >> > > > > > >
> > >> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
> > >> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
> > >> >> > > > > > > > Cc: xen-devel@lists.xen.org, 
michael.d.labriola@gmail.com
> > >> >> > > > > > > > Date: 01/20/2014 10:14 AM
> > >> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
> > >> >> > > > > > > >
> > >> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, 
> Michael D Labriola
> > >> >>
> > >> >> > > wrote:
> > >> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm 
having
> > >> >> > > consistent
> > >> >> > > > > > > crashes
> > >> >> > > > > > > > > with multiple older R600 series (HD 6470 and 
> HD 6570) and
> > >> >> > > unusably
> > >> >> > > > >
> > >> >> > > > > > > slow
> > >> >> > > > > > > > > graphics with a newer HD7000 (can see each line 
refresh
> > >> >> > > > > indiviually on
> > >> >> > > > > > >
> > >> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine 
bare
> > >> >> metal.
> > >> >> > > > > > > >
> > >> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that 
what you
> > >> >> mean?
> > >> >> > > > > > >
> > >> >> > > > > > > The R600 problems happen when in X, using OpenGL, 
> on my dom0.
> > >> >> The
> > >> >> > >
> > >> >> > > > > > > RadeonSI sluggishness is when using the KMS 
> framebuffer device
> > >> >> for
> > >> >> > > a
> > >> >> > > > > plain
> > >> >> > > > > > > text console login.
> > >> >> > > > > >
> > >> >> > > > > > So the sluggishness is probably due to PAT not being 
enabled. This
> > >> >> patch
> > >> >> > > > > > should be applied:
> > >> >> > > > > >
> > >> >> > > > > > lkml.org/lkml/2011/11/8/406
> > >> >> > > > > >
> > >> >> > > > > > (or 
http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
> > >> >> > > > > >
> > >> >> > > > > > and these two reverted:
> > >> >> > > > > >
> > >> >> > > > > >  "xen/pat: Disable PAT support for now."
> > >> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
> > >> >> > > > > >
> > >> >> > > > > > Which is to say do:
> > >> >> > > > > >
> > >> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
> > >> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
> > >> >> > > > >
> > >> >> > > > > Thanks!  I cherry-picked that patch out of your testing 
tree,
> > >> >> reverted
> > >> >> > >
> > >> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
> > >> >> HD
> > >> >> > > 7000
> > >> >> > > > > sluggishness and appears to have fixed the R600 
> crashes (although
> > >> >> it's
> > >> >> > >
> > >> >> > > > > only been running a few hours).
> > >> >> > > > >
> > >> >> > > > > How come that patch didn't get into mainline?  It looks 
pretty
> > >> >> > > innocuous
> > >> >> > > > > to me...
> > >> >> > > >
> > >> >> > > > <Sigh> the x86 maintainers wanted a different route. And I 
hadn't
> > >> >> had
> > >> >> > > > the chance nor time to implement it.
> > >> >> > >
> > >> >> > > I see.  Well, I've got a handful of boxes in my lab that 
need that
> > >> >> patch
> > >> >> > > to be usable.  If you do come up with a more 
mainline-able solution,
> > >> >> I'd
> > >> >> > > gladly test it for you.  ;-)
> > >> >> >
> > >> >> > Thank you!
> > >> >>
> > >> >> Uh, oh.  Looks like those reverts and patches didn't entirely 
fix my
> > >> >> problem.  My box with the HD5450 (r600 gallium3d) started going 
bonkers
> > >> >> again yesterday.  After being solid as a rock for 2 weeks as my 
primary
> > >> >> workstation, X has crashed a half dozen or so times so far 
> this week. I've
> > >> >> been in Xen with 2 paravirtual linux guests running almost 
> constantly for
> > >> >> this whole period.  I don't understand what's changed, but my 
system has
> > >> >> been entirely unstable now.  I did recompile my kernel... but all 
I did
> > >> >> was merge the v3.13.1 stable commit into my working tree and 
turn a few
> > >> >> things on (netfilter, wifi, a couple drivers turned on here 
> and there).  I
> > >> >> just went and verified that those patches are still applied in 
my tree
> > >> >> (i.e., I didn't accidentally undo them).  I'm scratching my head 
(and
> > >> >> staring at a TTY login).
> > >> >>
> > >> >> When X crashes, my kernel log prints a couple dozen iterations
> of this. 3d
> > >> >> acceleration no longer functions unless I reboot.  If memory 
serves, the
> > >> >> unpatched behavior upon X crash was that the kernel continued to 
spew
> > >> >> these errors until the whole box locked up.  At least that's 
> not happening
> > >> >> any more... ;-)
> > >> >>
> > >> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
> > >> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
> > >> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
> > >> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
> > >> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate GEM object (8192, 2, 4096, -12)
> > >> >>
> > >> >> and here's a slightly different variant that happened while I 
was typing
> > >> >> this email (on a different machine, luckily):
> > >> >>
> > >> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
> > >> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
> > >> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
> > >> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
> > >> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
> > >> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
> > >> >> [64348.297561] [TTM] Buffer eviction failed
> > >> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
> > >> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool (r:-12)!
> > >> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate GEM object (16384, 2, 4096, -12)
> > >> >>
> > >> >> Any ideas?
> > >> >
> > >> > yes. I believe you have a memory leak. As in, some driver (or X) 
is
> > >> > eating up the memory and not giving up enough. That means the TTM
> > >> > layer is hitting its ceiling of how much memory it can allocate.
> > >> >
> > >> > Now finding the culprit is going to be a bit hard.
> > >> >
> > >> > You could use:
> > >> >
> > >> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
> > >> >          pool      refills   pages freed    inuse available name
> > >> >            wc          259           224      808        4 nouveau 0000:05:00.0
> > >> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
> > >> >        cached           25             0       96        4 nouveau 0000:05:00.0
> > >> >
> > >> > to figure out if my thinking is really true. You should have a 
huge
> > >> > 'inuse' count and almost no 'available'.
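(Aside: the 'inuse' and 'available' columns of a pool line can be pulled out mechanically while reproducing the problem; a minimal shell sketch, with the sample line copied from the radeon entry quoted above:)

```shell
# Extract the 'inuse' and 'available' counters from a ttm_dma_page_pool line.
# Column positions follow the header: pool, refills, pages freed, inuse,
# available, name -- so inuse is field 4 and available is field 5.
line="      cached      3403058      13561071    51158        3 radeon 0000:01:00.0"
inuse=$(echo "$line" | awk '{print $4}')
avail=$(echo "$line" | awk '{print $5}')
echo "inuse=$inuse available=$avail"
```

Watching those two numbers (e.g. with `watch cat /sys/kernel/debug/dri/0/ttm_dma_page_pool`) shows whether the pool is pinned at its ceiling: a huge inuse count against near-zero available matches the leak theory above.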
> > >>
> > >> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which 
appear to
> > >> always have the same contents.  Is that normal?
> > >
> > > Yes.
> > >>
> > >> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist 
bare
> > >> metal... only in Xen.  Is that normal?
> > >
> > > It would show up on baremetal if you boot with 'iommu=soft'
> > >
> > >>
> > >>          pool      refills   pages freed    inuse available name
> > >>        cached        15190         59551     1205        4 radeon 0000:01:00.0
> > >>
> > >> If I watch that file while creating xterms, moving them around, 
etc, I can
> > >> see the number available fluctuate between 3 and 6.  This is true, 
even on
> > >> my box w/ the newer R7 card in it, which hasn't gotten that GEM 
error
> > >> message (yet?).
> > >
> > > OK, so lets see what happens when the error shows. Incidentally - 
> what amount of
> > > memory does your initial domain have? And is it different than when you
you
> > > boot it as a baremetal?
> > 
> > I've got the problem very reproducible on 3 boxes.  All three are
> > booting the dom0 with as much RAM as Xen will give them, then giving
> > up some of their RAM as needed when I create domUs.  The 3 boxes have
> > 4G, 8G, and 16G if memory serves.

Actually, they're 6G, 8G, and 16G... and I've got a box that I can't 
reproduce the problem on even though it's got the same video card... and 
it only has 2G of RAM.  Could this be a PAE/HIGHMEM issue?  I'm running 
32bit with CONFIG_HIGHMEM64G on all my boxes.


> > 
> > Does the amount of RAM on the actual video cards matter?  All the
> > older cards (that crash all the time) have 2G, whereas the R7 that
> > hasn't crashed yet only has 1G.
> 
> The TTM pool has a limit (a hard one). It is pretty simple:
> 
> 
>         pr_info("Zone %7s: Available graphics memory: %llu kiB\n",
>                 zone->name, (unsigned long long)zone->max_mem >> 10);
>     }
>     ttm_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
>     ttm_dma_page_alloc_init(glob, glob->zone_kernel->max_mem/(2*PAGE_SIZE));
> 
> so 1/4 of your memory. Which means that when you boot dom0 with as much
> memory as possible and then balloon down you might confuse it
> (as the initial memory assumption is done during bootup).
> 
> If you boot the troubled dom0s with 'dom0_mem' set to some good
> number - that might shed some light on this.

Ok, I've got one of the problematic boxes booted with dom0_mem=5G and it 
doesn't seem to be crashing.  Fingers crossed!
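(For anyone following along: the cap is passed to Xen on its boot command line. A sketch assuming a Debian-style GRUB layout; the variable name and the regeneration command vary by distro:)

```shell
# /etc/default/grub -- pin dom0's memory at boot so TTM's zone sizing is
# computed against the amount of RAM dom0 will actually keep, rather than
# a large initial allocation that later gets ballooned down:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=5G,max:5G"
# then regenerate the config, e.g.:
#   update-grub
```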


> 
> 
> > 
> > I've been reproducing the crash by just logging in and out of fluxbox
> > via XDM over and over again right after booting my dom0 in Xen w/ no
> > guests running.  That makes it happen within a few minutes.  Otherwise
> > it randomly crashes while I'm in the middle of trying to work... ;-)
> 
> HA!
> 
> Does fluxbox use a lot of graphics? I mean, does it do a lot of fancy
> things when it starts and shuts itself down?

Negative.  It does next to nothing.  Super lightweight, pretty much just 
gets rid of the login box and puts a taskbar-type-thing on the bottom of 
the screen.  I'd say the majority of my crashes have happened in 
Enlightenment (with plenty of extra fancy things), but it HAS happened in 
fluxbox doing next to nothing.  Which was pretty surprising.


---
Michael D Labriola
Electric Boat
mlabriol@gdeb.com
401-848-8871 (desk)
401-848-8513 (lab)
401-316-9844 (cell)






_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 19 21:54:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Feb 2014 21:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGF5F-0003a1-3t; Wed, 19 Feb 2014 21:53:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1WGF5D-0003Zw-7u
	for xen-devel@lists.xen.org; Wed, 19 Feb 2014 21:53:35 +0000
Received: from [85.158.137.68:13504] by server-13.bemta-3.messagelabs.com id
	B8/BB-26923-ED725035; Wed, 19 Feb 2014 21:53:34 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392846813!2998676!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18841 invoked from network); 19 Feb 2014 21:53:33 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-14.tower-31.messagelabs.com with SMTP;
	19 Feb 2014 21:53:33 -0000
Received: from localhost (nat-pool-rdu-t.redhat.com [66.187.233.202])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id 91E065821B1;
	Wed, 19 Feb 2014 13:53:31 -0800 (PST)
Date: Wed, 19 Feb 2014 16:53:30 -0500 (EST)
Message-Id: <20140219.165330.1494132817520500296.davem@davemloft.net>
To: wei.liu2@citrix.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1392835714-26480-1-git-send-email-wei.liu2@citrix.com>
References: <1392835714-26480-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.7
	(shards.monkeyblade.net [149.20.54.216]);
	Wed, 19 Feb 2014 13:53:32 -0800 (PST)
Cc: netdev@vger.kernel.org, linux@eikelenboom.it, paul.durrant@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH net] xen-netfront: reset skb network header
 before checksum
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Wei Liu <wei.liu2@citrix.com>
Date: Wed, 19 Feb 2014 18:48:34 +0000

> In ed1f50c3a ("net: add skb_checksum_setup") we introduced some checksum
> functions in core driver. Subsequent change b5cf66cd1 ("xen-netfront:
> use new skb_checksum_setup function") made use of those functions to
> replace its own implementation.
> 
> However with that change netfront is broken. It sees a lot of checksum
> error. That's because its own implementation of checksum function was a
> bit hacky (dereferencing skb->data directly) while the new function was
> implemented using ip_hdr(). The network header is not reset before skb
> is passed to the new function. When the new function tries to do its
> job, it's confused and reports error.
> 
> The fix is simple, we need to reset network header before passing skb to
> checksum function. Netback is not affected as it already does the right
> thing.
> 
> Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Applied, thanks.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 00:41:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 00:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGHgr-0006cF-RU; Thu, 20 Feb 2014 00:40:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGHgq-0006cA-50
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 00:40:36 +0000
Received: from [85.158.137.68:18381] by server-2.bemta-3.messagelabs.com id
	27/7D-06531-30F45035; Thu, 20 Feb 2014 00:40:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392856832!1750516!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9388 invoked from network); 20 Feb 2014 00:40:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 00:40:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,509,1389744000"; d="scan'208";a="102442805"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 00:40:32 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 19:40:31 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGHgl-0005aQ-6L;
	Thu, 20 Feb 2014 00:40:31 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGHgj-0000ZY-RV;
	Thu, 20 Feb 2014 00:40:31 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25139-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 00:40:30 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25139: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25139 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25139/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-i386  8 guest-start            fail blocked in 12557
 test-amd64-i386-xend-qemut-winxpsp3  5 xen-boot          fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                960dfc4eb23a28495276b02604d7458e0e1a1ed8
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7059 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2386813 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2386813 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 00:55:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 00:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGHv2-0006qj-TI; Thu, 20 Feb 2014 00:55:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WGHv1-0006qe-IX
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 00:55:15 +0000
Received: from [85.158.143.35:64029] by server-2.bemta-4.messagelabs.com id
	2F/E1-10891-27255035; Thu, 20 Feb 2014 00:55:14 +0000
X-Env-Sender: dcbw@redhat.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392857712!6945776!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16433 invoked from network); 20 Feb 2014 00:55:13 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-21.messagelabs.com with SMTP;
	20 Feb 2014 00:55:13 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1K0swG0032118
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 19:54:59 -0500
Received: from [10.3.237.60] (vpn-237-60.phx2.redhat.com [10.3.237.60])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s1K0suT3009701; Wed, 19 Feb 2014 19:54:57 -0500
Message-ID: <1392857777.22693.14.camel@dcbw.local>
From: Dan Williams <dcbw@redhat.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Wed, 19 Feb 2014 18:56:17 -0600
In-Reply-To: <CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 09:20 -0800, Luis R. Rodriguez wrote:
> On Wed, Feb 19, 2014 at 8:45 AM, Dan Williams <dcbw@redhat.com> wrote:
> > On Tue, 2014-02-18 at 13:19 -0800, Luis R. Rodriguez wrote:
> >> On Mon, Feb 17, 2014 at 12:23 PM, Dan Williams <dcbw@redhat.com> wrote:
> >> > On Fri, 2014-02-14 at 18:59 -0800, Luis R. Rodriguez wrote:
> >> >> From: "Luis R. Rodriguez" <mcgrof@suse.com>
> >> >>
> >> >> Some interfaces do not need to have any IPv4 or IPv6
> >> >> addresses, so enable an option to specify this. One
> >> >> example where this is observed is virtualization
> >> >> backend interfaces, which just use the net_device
> >> >> constructs to help with their respective frontends.
> >> >>
> >> >> This should optimize boot time and reduce complexity in
> >> >> virtualization environments for each backend interface,
> >> >> while also avoiding triggering SLAAC and DAD, which are
> >> >> simply pointless for these types of interfaces.
> >> >
> >> > Would it not be better/cleaner to use disable_ipv6 and then add a
> >> > disable_ipv4 sysctl, then use those with that interface?
> >>
> >> Sure, but note that both the disable_ipv6 and accept_dad sysctl
> >> parameters are global. ipv4 and ipv6 interfaces are created upon
> >> NETDEV_REGISTER, which gets triggered when a driver calls
> >> register_netdev(). The goal of this patch was to enable an early
> >> optimization for drivers that never need ipv4 or ipv6
> >> interfaces.
> >
> > Each interface gets override sysctls too though, eg:
> >
> > /proc/sys/net/ipv6/conf/enp0s25/disable_ipv6
> 
> I hadn't seen those, thanks!

Note that there isn't yet a disable_ipv4 knob though; I was
perhaps-too-subtly trying to get you to send a patch for it, since I can
use it too :)

Dan

> > which is the one I meant; you're obviously right that the global ones
> > aren't what you want here.  But the specific ones should be suitable?
> 
> Under the approach Stephen mentioned, by first ensuring the interface
> is down, yes. There's one use case I can think of that would still want
> the patch though; more on that below.
> 
> > If you set that on a per-interface basis, then you'll get EPERM or
> > something whenever you try to add IPv6 addresses or do IPv6 routing.
> 
> Neat, thanks.
> 
> >> Zoltan has noted some use cases for IPv4 or IPv6 addresses on
> >> backends though; as such, this is no longer applicable as a
> >> requirement. The ipv4 sysctl however still seems like a reasonable
> >> approach to enable network optimizations in topologies where it's
> >> known we won't need them, but we'd need to consider a much more
> >> granular solution, not just global as it is now for disable_ipv6, and
> >> we'd also have to figure out a clean way to avoid incurring the
> >> cost of early interface address addition upon register_netdev().
> >>
> >> Given that we have a use case for ipv4 and ipv6 addresses on
> >> xen-netback we no longer have an immediate use case for such early
> >> optimization primitives though, so I'll drop this.
> >>
> >> > The IFF_SKIP_IP seems to duplicate at least part of what disable_ipv6 is
> >> > already doing.
> >>
> >> disable_ipv6 is global; the goal was to make this granular and skip
> >> the cost at early boot, but it's been clarified we don't need this.
> >
> > Like Stephen says, you need to make sure you set them before IFF_UP, but
> > beyond that, wouldn't the interface-specific sysctls work?
> 
> Yeah, that'll do it, unless there is a measurable run-time benefit to
> never even adding these in the first place. Consider a host with tons
> of guests; I'm not sure how many counts as 'a lot' these days. One would
> have to measure how much this reduces the time it takes to boot them
> up. As discussed in the other threads, though, there *are* some use cases
> for assigning IPv4 or IPv6 addresses to the backend interfaces:
> routing them (although it's unclear to me whether iptables could be used
> instead, Zoltan?). So at least for now there is no clear requirement to
> remove these interfaces or not have them at all. The boot-time cost
> savings should still be considered if this is ultimately desirable. I
> saw tons of timers and events that'd get triggered with any IPv4 or
> IPv6 interface lying around.
> 
>   Luis
> --
> To unsubscribe from this list: send the line "unsubscribe netdev" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
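
[Archive editor's note: the ordering discussed above (write the
per-interface knob while the interface is still down, so SLAAC and DAD
never run on it) can be sketched roughly as below. This is an
illustration only: it requires root, enp0s25 is just the example
interface name from the thread, and there is no disable_ipv4
counterpart at the time of this discussion, which is the knob Dan is
suggesting be added.]

```shell
# Per-interface IPv6 disable, set before the interface comes up,
# in contrast to the global net.ipv6.conf.all.disable_ipv6 knob.
ip link set dev enp0s25 down
echo 1 > /proc/sys/net/ipv6/conf/enp0s25/disable_ipv6
ip link set dev enp0s25 up

# With the knob set, attempts to add an IPv6 address to this
# interface are then refused by the kernel.
ip -6 addr add 2001:db8::1/64 dev enp0s25
```

Writing the /proc path directly avoids sysctl-key ambiguity for
interface names that contain dots.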



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 00:58:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 00:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGHxw-0006yc-1q; Thu, 20 Feb 2014 00:58:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGHxu-0006yU-DW
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 00:58:14 +0000
Received: from [193.109.254.147:60859] by server-3.bemta-14.messagelabs.com id
	5A/1A-00432-52355035; Thu, 20 Feb 2014 00:58:13 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392857891!5529138!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19465 invoked from network); 20 Feb 2014 00:58:12 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 00:58:12 -0000
X-IronPort-AV: E=Sophos;i="4.97,509,1389744000"; d="scan'208";a="104142842"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 00:58:10 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 19:58:10 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGHxp-0005fZ-Ue;
	Thu, 20 Feb 2014 00:58:09 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGHxp-0004Gt-R1;
	Thu, 20 Feb 2014 00:58:09 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25145-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 00:58:09 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25145: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25145 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25145/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin 15 guest-localmigrate/x10      fail pass in 25131

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 14 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 01:01:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 01:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGI0l-0002rg-6X; Thu, 20 Feb 2014 01:01:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WGI0j-0002fO-1e
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 01:01:09 +0000
Received: from [85.158.139.211:57066] by server-14.bemta-5.messagelabs.com id
	84/35-27598-4D355035; Thu, 20 Feb 2014 01:01:08 +0000
X-Env-Sender: dcbw@redhat.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392858066!4996526!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5220 invoked from network); 20 Feb 2014 01:01:07 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-206.messagelabs.com with SMTP;
	20 Feb 2014 01:01:07 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1K10xKA003631
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 19 Feb 2014 20:01:00 -0500
Received: from [10.3.237.60] (vpn-237-60.phx2.redhat.com [10.3.237.60])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1K10wqW021712; Wed, 19 Feb 2014 20:00:58 -0500
Message-ID: <1392858138.22693.21.camel@dcbw.local>
From: Dan Williams <dcbw@redhat.com>
To: Hannes Frederic Sowa <hannes@stressinduktion.org>
Date: Wed, 19 Feb 2014 19:02:18 -0600
In-Reply-To: <20140220005833.GF1179@order.stressinduktion.org>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<1392857777.22693.14.camel@dcbw.local>
	<20140220005833.GF1179@order.stressinduktion.org>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, Patrick McHardy <kaber@trash.net>,
	xen-devel@lists.xenproject.org, "David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 01:58 +0100, Hannes Frederic Sowa wrote:
> On Wed, Feb 19, 2014 at 06:56:17PM -0600, Dan Williams wrote:
> > Note that there isn't yet a disable_ipv4 knob though, I was
> > perhaps-too-subtly trying to get you to send a patch for it, since I can
> > use it too :)
> 
> Do you plan to implement
> <http://datatracker.ietf.org/doc/draft-ietf-sunset4-noipv4/>?
> 
> ;)

Well, not specifically, but with NetworkManager we do have a "disable
IPv4" method for IPv4, which now just doesn't do any kind of IPv4, but
obviously doesn't disable IPv4 entirely because that's not possible.  I
was only thinking that it would be nice to actually guarantee that IPv4
was disabled, just like disable_ipv6 does.
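
The per-interface IPv6 knob referenced above is the `disable_ipv6` sysctl that ships in mainline Linux. A minimal sketch of its use follows; the interface name `eth0` is an illustrative assumption, and the `disable_ipv4` counterpart shown in the trailing comment is the hypothetical knob under discussion, not an existing sysctl:

```shell
# Turn off IPv6 entirely on one interface (requires root; "eth0" is
# an example interface name):
sysctl -w net.ipv6.conf.eth0.disable_ipv6=1

# Or for all interfaces at once:
sysctl -w net.ipv6.conf.all.disable_ipv6=1

# The hypothetical disable_ipv4 knob discussed in this thread would
# presumably mirror this path; it does NOT exist in mainline:
# sysctl -w net.ipv4.conf.eth0.disable_ipv4=1
```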

But we could certainly implement that draft if a patch shows up or if it
bubbled up the priority stack :)

Dan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 01:51:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 01:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGInG-0003OO-Rx; Thu, 20 Feb 2014 01:51:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGInF-0003OJ-K3
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 01:51:17 +0000
Received: from [85.158.139.211:9574] by server-4.bemta-5.messagelabs.com id
	4F/44-08092-49F55035; Thu, 20 Feb 2014 01:51:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392861074!501087!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27618 invoked from network); 20 Feb 2014 01:51:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 01:51:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,509,1389744000"; d="scan'208";a="104166081"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 01:51:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 19 Feb 2014 20:51:13 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGInA-0005vE-Mu;
	Thu, 20 Feb 2014 01:51:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGInA-0008HU-53;
	Thu, 20 Feb 2014 01:51:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25144-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 01:51:12 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 25144: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25144 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25144/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24737
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24737

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 17 leak-check/check        fail never pass

version targeted for testing:
 qemuu                65fc9b78ba3d868a26952db0d8e51cecf01d47b4
baseline version:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:43:12 2013 +0000

    xen: Enable cpu-hotplug on xenfv machine.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    Conflicts:
    	hw/i386/pc_piix.c

commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:41:48 2013 +0000

    xen: Fix vcpu initialization.
    
    Each vcpu need a evtchn binded in qemu, even those that are
    offline at QEMU initialisation.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 02:23:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 02:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGJHg-0004C6-9V; Thu, 20 Feb 2014 02:22:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WGJHd-0004C1-ED
	for Xen-devel@lists.xensource.com; Thu, 20 Feb 2014 02:22:41 +0000
Received: from [85.158.139.211:43289] by server-12.bemta-5.messagelabs.com id
	FF/D1-15415-0F665035; Thu, 20 Feb 2014 02:22:40 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392862958!5051248!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31231 invoked from network); 20 Feb 2014 02:22:39 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 02:22:39 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1K2MVXD023362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Feb 2014 02:22:31 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1K2MShu013431
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 20 Feb 2014 02:22:29 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1K2MStw019227; Thu, 20 Feb 2014 02:22:28 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 18:22:28 -0800
Date: Wed, 19 Feb 2014 18:22:27 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140219182227.6a37a33c@mantra.us.oracle.com>
In-Reply-To: <52FBA5BA.4020301@linaro.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<52FBA5BA.4020301@linaro.org>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	george.dunlap@eu.citrix.com, tim@xen.org, keir.xen@gmail.com,
	Jan Beulich <jbeulich@suse.com>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 12 Feb 2014 16:47:54 +0000
Julien Grall <julien.grall@linaro.org> wrote:

> Hi Mukesh,
> 
> On 12/17/2013 02:38 AM, Mukesh Rathor wrote:
> > In preparation for the next patch, we update xsm_add_to_physmap to
> > allow for checking of foreign domain. Thus, the current domain must
> > have the right to update the mappings of target domain with pages
> > from foreign domain.
> > 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> 
> While I was playing with XSM on ARM, I noticed that Daniel De
> Graaf added xsm_map_gmfn_foreign a few months ago (see commit
> 0b201e6).
> 
> Would it be suitable to use this XSM hook instead of extending
> xsm_add_to_physmap?
> 
> Regards,
> 

Not the same thing: add_to_physmap can be adding pages from a foreign
domain into a domain's physmap, so the foreign domain has to be checked
as well.

Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 02:37:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 02:37:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGJVx-0004R2-88; Thu, 20 Feb 2014 02:37:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WGJVv-0004Qx-PB
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 02:37:27 +0000
Received: from [193.109.254.147:64466] by server-12.bemta-14.messagelabs.com
	id 9C/E0-17220-76A65035; Thu, 20 Feb 2014 02:37:27 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392863844!5518529!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14843 invoked from network); 20 Feb 2014 02:37:26 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 02:37:26 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1K2bH81001232
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Feb 2014 02:37:18 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1K2bFZV014010
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 20 Feb 2014 02:37:16 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1K2bFVU018146; Thu, 20 Feb 2014 02:37:15 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 19 Feb 2014 18:37:14 -0800
Date: Wed, 19 Feb 2014 18:37:13 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140219183713.1aa2418f@mantra.us.oracle.com>
In-Reply-To: <52F8F13E.1070308@linaro.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
	<1392039724.5117.94.camel@kazak.uk.xensource.com>
	<52F8ED31.609@linaro.org>
	<1392046038.26657.19.camel@kazak.uk.xensource.com>
	<52F8F13E.1070308@linaro.org>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 10 Feb 2014 15:33:18 +0000
Julien Grall <julien.grall@linaro.org> wrote:

> On 02/10/2014 03:27 PM, Ian Campbell wrote:
> > On Mon, 2014-02-10 at 15:16 +0000, Julien Grall wrote:
> >> Hi Ian,
> >>
> >> On 02/10/2014 01:42 PM, Ian Campbell wrote:
> >>> On Sun, 2014-02-09 at 16:51 +0000, Julien Grall wrote:
> >>>> Hello Mukesh,
> >>>>
> >>>> On 30/01/14 01:33, Mukesh Rathor wrote:
> >>>>>>> I'm not sure what you mean:
> >>>>>>>   - the code that Mukesh is adding doesn't have a struct
> >>
> >> For ARM, a call to xc_map_foreign_page will end up in
> >> XENMEM_add_to_physmap_range(XENMAPSPACE_gmfn_foreign).
> >>
> >> For both architectures, you can look at the function
> >> xen_remap_domain_mfn_range (implemented differently on ARM and
> >> x86), which is the last function called before going to the
> >> hypervisor.
> >>
> >> If we don't modify the hypercall XENMEM_add_to_physmap, we will
> >> have to add a new way to map Xen pages for xentrace & co.
> > 
> > Wouldn't it be incorrect to generically return OK for mapping a
> > DOMID_XEN-owned page?  At least something needs to validate that
> > the particular mfn being mapped is supposed to be shared with the
> > guest in question.
> 
> It's already the case. By default a Xen heap page doesn't belong to
> DOMID_XEN. Xen has to explicitly call
> share_xen_page_with_privileged_guests() (see an example in
> xen/common/trace.c:244) to assign DOMID_XEN to the given page.


So, how about the following:

+++ b/xen/common/domain.c
@@ -484,8 +484,20 @@ struct domain *rcu_lock_domain_by_any_id(domid_t dom)
 
 int rcu_lock_remote_domain_by_id(domid_t dom, struct domain **d)
 {
-    if ( (*d = rcu_lock_domain_by_id(dom)) == NULL )
+    if ( (dom == DOMID_XEN && (*d = rcu_lock_domain(dom_xen)) == NULL) ||
+         (dom != DOMID_XEN && (*d = rcu_lock_domain_by_id(dom)) == NULL) )
+        return -ESRCH;
+


Ack/Nack?

Thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 04:27:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 04:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGLDw-0005rz-UR; Thu, 20 Feb 2014 04:27:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WGLDv-0005ru-Gu
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 04:26:59 +0000
Received: from [85.158.137.68:44876] by server-15.bemta-3.messagelabs.com id
	D0/4B-19263-21485035; Thu, 20 Feb 2014 04:26:58 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392870413!3033470!1
X-Originating-IP: [202.81.31.146]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NiA9PiAyMTM0Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21632 invoked from network); 20 Feb 2014 04:26:57 -0000
Received: from e23smtp04.au.ibm.com (HELO e23smtp04.au.ibm.com) (202.81.31.146)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 04:26:57 -0000
Received: from /spool/local
	by e23smtp04.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Thu, 20 Feb 2014 14:26:52 +1000
Received: from d23dlp01.au.ibm.com (202.81.31.203)
	by e23smtp04.au.ibm.com (202.81.31.210) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Thu, 20 Feb 2014 14:26:50 +1000
Received: from d23relay05.au.ibm.com (d23relay05.au.ibm.com [9.190.235.152])
	by d23dlp01.au.ibm.com (Postfix) with ESMTP id BDD292CE8055
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 15:26:49 +1100 (EST)
Received: from d23av01.au.ibm.com (d23av01.au.ibm.com [9.190.234.96])
	by d23relay05.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1K472O43735898
	for <xen-devel@lists.xenproject.org>; Thu, 20 Feb 2014 15:07:03 +1100
Received: from d23av01.au.ibm.com (localhost [127.0.0.1])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1K4QmSV020878
	for <xen-devel@lists.xenproject.org>; Thu, 20 Feb 2014 15:26:48 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1K4Qmcs020872; Thu, 20 Feb 2014 15:26:48 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id C9C3AA039D; Thu, 20 Feb 2014 15:26:47 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Anthony Liguori <anthony@codemonkey.ws>
In-Reply-To: <CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Thu, 20 Feb 2014 12:01:19 +1030
Message-ID: <87ha7ubme0.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022004-9264-0000-0000-0000057E4709
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anthony Liguori <anthony@codemonkey.ws> writes:
> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>> Hi,
>>>
>>> Below you can find a summary of work regarding VIRTIO compatibility
>>> with different virtualization solutions. It was done mainly from the
>>> Xen point of view, but the results are quite generic and can be
>>> applied to a wide spectrum of virtualization platforms.
>>
>> Hi Daniel,
>>
>>         Sorry for the delayed response, I was pondering...  CC changed
>> to virtio-dev.
>>
>> From a standard POV: It's possible to abstract out the places where
>> we use 'physical address' into an 'address handle'.  It's also
>> possible to define this per-platform (i.e. Xen-PV vs everyone else).
>> This is sane, since Xen-PV is a distinct platform from x86.
>
> I'll go even further and say that "address handle" doesn't make sense
> either.

I was trying to come up with a unique term, I wasn't trying to define
semantics :)

There are three debates here now: (1) what the standard should say,
(2) how Linux would implement it, and (3) whether we should use each
platform's PCI IOMMU.

> Just using grant table references is not enough to make virtio work
> well under Xen.  You really need to use bounce buffers ala persistent
> grants.

Wait, if you're using bounce buffers, you didn't make it "work well"!

> I think what you ultimately want is virtio using a DMA API (I know
> benh has scoffed at this but I don't buy his argument at face value)
> and a DMA layer that bounces requests to a pool of persistent grants.

We can have a virtio DMA API, sure.  It'd be a noop for non-Xen.

But emulating the programming of an IOMMU seems masochistic.  PowerPC
have made it clear they don't want this.  And no one else has come up
with a compelling reason to want this: virtio passthrough?

>> For platforms using EPT, I don't think you want anything but guest
>> addresses, do you?
>>
>> From an implementation POV:
>>
>> On IOMMU, start here for previous Linux discussion:
>>         http://thread.gmane.org/gmane.linux.kernel.virtualization/14410/focus=14650
>>
>> And this is the real problem.  We don't want to use the PCI IOMMU for
>> PCI devices.  So it's not just a matter of using existing Linux APIs.
>
> Is there any data to back up that claim?

Yes, for PowerPC.  The implementer gets to measure, as always.  I suspect

> Just because power currently does hypercalls for anything that uses
> the PCI IOMMU layer doesn't mean this cannot be changed.

Does someone have an implementation of an IOMMU which doesn't use
hypercalls, or is this theoretical?

>  It's pretty
> hacky that virtio-pci just happens to work well by accident on power
> today.  Not all architectures have this limitation.

It's a fundamental assumption of virtio that the host can access all of
guest memory.  That's paravirt, not a hack.

But tomayto tomatoh aside, it's unclear to me how you'd build an
efficient IOMMU today.  And it's unclear what benefit you'd gain.  But
the cost for Power is clear.

So if someone wants to do this for PCI, they need to implement it and
benchmark it.  But this is a little orthogonal to the Xen discussion.

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 04:27:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 04:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGLDw-0005rz-UR; Thu, 20 Feb 2014 04:27:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WGLDv-0005ru-Gu
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 04:26:59 +0000
Received: from [85.158.137.68:44876] by server-15.bemta-3.messagelabs.com id
	D0/4B-19263-21485035; Thu, 20 Feb 2014 04:26:58 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392870413!3033470!1
X-Originating-IP: [202.81.31.146]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NiA9PiAyMTM0Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21632 invoked from network); 20 Feb 2014 04:26:57 -0000
Received: from e23smtp04.au.ibm.com (HELO e23smtp04.au.ibm.com) (202.81.31.146)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 04:26:57 -0000
Received: from /spool/local
	by e23smtp04.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Thu, 20 Feb 2014 14:26:52 +1000
Received: from d23dlp01.au.ibm.com (202.81.31.203)
	by e23smtp04.au.ibm.com (202.81.31.210) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Thu, 20 Feb 2014 14:26:50 +1000
Received: from d23relay05.au.ibm.com (d23relay05.au.ibm.com [9.190.235.152])
	by d23dlp01.au.ibm.com (Postfix) with ESMTP id BDD292CE8055
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 15:26:49 +1100 (EST)
Received: from d23av01.au.ibm.com (d23av01.au.ibm.com [9.190.234.96])
	by d23relay05.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1K472O43735898
	for <xen-devel@lists.xenproject.org>; Thu, 20 Feb 2014 15:07:03 +1100
Received: from d23av01.au.ibm.com (localhost [127.0.0.1])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1K4QmSV020878
	for <xen-devel@lists.xenproject.org>; Thu, 20 Feb 2014 15:26:48 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1K4Qmcs020872; Thu, 20 Feb 2014 15:26:48 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id C9C3AA039D; Thu, 20 Feb 2014 15:26:47 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Anthony Liguori <anthony@codemonkey.ws>
In-Reply-To: <CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Thu, 20 Feb 2014 12:01:19 +1030
Message-ID: <87ha7ubme0.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022004-9264-0000-0000-0000057E4709
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anthony Liguori <anthony@codemonkey.ws> writes:
> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>> Hi,
>>>
>>> Below you could find a summary of work in regards to VIRTIO compatibility with
>>> different virtualization solutions. It was done mainly from Xen point of view
>>> but results are quite generic and can be applied to wide spectrum
>>> of virtualization platforms.
>>
>> Hi Daniel,
>>
>>         Sorry for the delayed response, I was pondering...  CC changed
>> to virtio-dev.
>>
>> From a standard POV: It's possible to abstract out where we use
>> 'physical address' for 'address handle'.  It's also possible to define
>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
>> Xen-PV is a distinct platform from x86.
>
> I'll go even further and say that "address handle" doesn't make sense either.

I was trying to come up with a unique term, I wasn't trying to define
semantics :)

There are three debates here now: (1) what should the standard say,
(2) how would Linux implement it, and (3) should we use each platform's
PCI IOMMU.

> Just using grant table references is not enough to make virtio work
> well under Xen.  You really need to use bounce buffers ala persistent
> grants.

Wait, if you're using bounce buffers, you didn't make it "work well"!

> I think what you ultimately want is virtio using a DMA API (I know
> benh has scoffed at this but I don't buy his argument at face value)
> and a DMA layer that bounces requests to a pool of persistent grants.

We can have a virtio DMA API, sure.  It'd be a noop for non-Xen.

But emulating the programming of an IOMMU seems masochistic.  The
PowerPC folks have made it clear they don't want this.  And no one else
has come up with a compelling reason to want this: virtio passthrough?

>> For platforms using EPT, I don't think you want anything but guest
>> addresses, do you?
>>
>> From an implementation POV:
>>
>> On IOMMU, start here for previous Linux discussion:
>>         http://thread.gmane.org/gmane.linux.kernel.virtualization/14410/focus=14650
>>
>> And this is the real problem.  We don't want to use the PCI IOMMU for
>> PCI devices.  So it's not just a matter of using existing Linux APIs.
>
> Is there any data to back up that claim?

Yes, for powerpc.  Implementer gets to measure, as always.  I suspect
that if you emulate an IOMMU on Intel, your performance will suck too.

> Just because power currently does hypercalls for anything that uses
> the PCI IOMMU layer doesn't mean this cannot be changed.

Does someone have an implementation of an IOMMU which doesn't use
hypercalls, or is this theoretical?

>  It's pretty
> hacky that virtio-pci just happens to work well by accident on power
> today.  Not all architectures have this limitation.

It's a fundamental assumption of virtio that the host can access all of
guest memory.  That's paravirt, not a hack.

But tomayto, tomahto aside, it's unclear to me how you'd build an
efficient IOMMU today.  And it's unclear what benefit you'd gain.  But
the cost for Power is clear.

So if someone wants to do this for PCI, they need to implement it and
benchmark.  But this is a little orthogonal to the Xen discussion.

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 05:53:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 05:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGMZO-0007Cz-97; Thu, 20 Feb 2014 05:53:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGMZM-0007Cu-QN
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 05:53:13 +0000
Received: from [85.158.139.211:32138] by server-9.bemta-5.messagelabs.com id
	9A/A9-11237-84895035; Thu, 20 Feb 2014 05:53:12 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392875590!1127956!1
X-Originating-IP: [209.85.216.46]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7643 invoked from network); 20 Feb 2014 05:53:11 -0000
Received: from mail-qa0-f46.google.com (HELO mail-qa0-f46.google.com)
	(209.85.216.46)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 05:53:11 -0000
Received: by mail-qa0-f46.google.com with SMTP id k15so2326466qaq.19
	for <xen-devel@lists.xen.org>; Wed, 19 Feb 2014 21:53:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:from:date:message-id:subject:to:content-type;
	bh=To6cTLyK4o8BFez4IfCGgXxuFdMPs93N31YbGUbk5lY=;
	b=ECsFMpqJnfynpn8nFRzpOPKFVWXrKZsx7l5D4Tc5/Y0GxSYQEupSZafyrgnu6Su0wU
	Ngp6MzEMdY7jTy1lMBddCFyQ5yk0gwodxSDnf2RxeUjw7Vk4veFC9IWBsFb8eAPtwxsO
	LVHm844crUuhP/I6kHuUkorG0cs1UkLkrE2xQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:from:date:message-id:subject
	:to:content-type;
	bh=To6cTLyK4o8BFez4IfCGgXxuFdMPs93N31YbGUbk5lY=;
	b=l7i9isM1fq0ggqavT61s48neVWNKxkeea/lsPPd7MMjHjvVHDP1HVwUFD8iIC3v8pL
	wE/HTR3XHzazS8vUfBP1s0CedRGENUkGviBPVLSClXfcoiiSOK9jCC8pYQMWg9In0lxG
	T1xOG678QvQ9e9EY91DbFsI9iYGPIZPSoHIS+Ex6EerHXQ5eEN46G/Jh0oAzEuGiNAAD
	5zjdANOBhIJwts2BxXW7+PKLJh5le2FCjyxwDJrXhkmCy7dFcY3VpEoZ/ebtkd2UCUtD
	fCei9coFQS4y3fMfgVNYIi9nUYBkZJIHi9DkkC2QZBqHqcjJvxYIs+bftSFYeCoQEXdr
	MQwA==
X-Gm-Message-State: ALoCoQn1yqD2G7nA7XFeJjNQiGPBiG4cOkFhgXNfkkllmXhHsS4/qUEspjknMAz79vHBZ2ffoDn/
X-Received: by 10.229.189.65 with SMTP id dd1mr55420301qcb.5.1392875590037;
	Wed, 19 Feb 2014 21:53:10 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Wed, 19 Feb 2014 21:52:55 -0800 (PST)
X-Originating-IP: [85.143.161.18]
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Thu, 20 Feb 2014 09:52:55 +0400
X-Google-Sender-Auth: kYp1RxYHZvHMQX2_y_rVuVXWilc
Message-ID: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello. I have some problems with swap files in domU - I have SSD disks
that cache all I/O, and if users use swap, the SSDs may fail very often.
Is it possible to use tmem frontswap without a swap file at all, and
transparently push swap pages to tmem?

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 06:07:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 06:07:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGMnU-0007WH-0L; Thu, 20 Feb 2014 06:07:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WGMnS-0007WC-Si
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 06:07:47 +0000
Received: from [85.158.139.211:13000] by server-6.bemta-5.messagelabs.com id
	25/09-14342-2BB95035; Thu, 20 Feb 2014 06:07:46 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392876464!5073518!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16401 invoked from network); 20 Feb 2014 06:07:45 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 06:07:45 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=gVIjHstHp/mAoh7MrrrpTyJ/4DAvS/8TQQbnjFemt4ODvQmH1nr1/3oS
	zEEhzmJ7mVuV6aJ6yMB7LLOf6qbCc1RwTzS0Mq9vDonRSthhlZHRk2X4y
	6dBEgl34346RbWVWjQMpacIQNAJ1bWrDDHG+ZOKC0uHJLcHIg37fQTolb
	q4gCSTXJEo2gx6+uK9bjRFO4VVA2ZDVLCkirLA/Z85iRfTnU9QzVQomLg
	eDyLvq0D4C3fg2YROoZXrUqJbqTQ0;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392876465; x=1424412465;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=KfTzU40mvmwsHX/APvnlVO/Dz4zkDScKf47IZVfPCec=;
	b=tqaH9BEMdX4Y66Y1/+ZQlHJouWKvXTlL6YNFN0ZaZdD4Cb98ICds1dSL
	FhXeRJjoqfdkuRHVsezsGAcrpsN4lL88zzvo9KBFjs8MOHQS43G12Ytg2
	s8kCAch0GtGZxZwwjIBkxe4R4HZu6ReN8jeRu4S6G0cW6AqDDTtCch25V
	yaxnHOlaQd0RcPPhdMDvceoLwDmS6FgpntkzkCFqSxKo3tFU4PqX2yv1N
	IwpnNpkpQ56L2L4vSNu0p6gT37upy;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,510,1389740400"; d="scan'208";a="159434282"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 20 Feb 2014 07:07:45 +0100
X-IronPort-AV: E=Sophos;i="4.97,510,1389740400"; d="scan'208";a="31923726"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 20 Feb 2014 07:07:44 +0100
Message-ID: <53059BB0.1000705@ts.fujitsu.com>
Date: Thu, 20 Feb 2014 07:07:44 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
	<53039F55.3030901@terremark.com>
	<1392746781.32038.594.camel@Solace>
In-Reply-To: <1392746781.32038.594.camel@Solace>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>,
	Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18.02.2014 19:06, Dario Faggioli wrote:
> On mar, 2014-02-18 at 12:58 -0500, Don Slutz wrote:
>>>> root@smartin-xen:~# xl cpupool-list -c
>>>> Name               CPU list
>>>> Pool-0             0,1,2
>>
>> Change to something like:
>>
>>
>> Pool-0             0,1,2 (and part of 3)
>>
> This would be cool, and I personally would be all for it... but it
> perhaps will not be that clear at pointing the user toward
> hyperthreading. :-P
>
>> Or add:
>>
>> Warning: cpupools share hyperthreaded cpus.
>>
> While this one, although a bit more "boring" than the above, would
> probably be something quite valuable to have!
>
> I can only think of rather expensive ways of implementing it, involving
> going through all the cpupools and, for each cpupool, through all its
> cpus and checking the topology relationships, but perhaps there are
> others (I'll think harder).
>
> Also, we are certainly not talking about hot paths.
>
> Juergen?

Adding some information like this would be nice, indeed. But I think we
should not limit it to just hyperthreads. There are more levels of shared
resources, like caches or memory interfaces on the same socket. If we want
to add information about potential performance influences due to shared
resources, we should be more generic.

And what about some NUMA information? Wouldn't it be worthwhile to show
memory locality information as well? This should be considered for
display by "xl list", too.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 07:45:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 07:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOJS-000152-T3; Thu, 20 Feb 2014 07:44:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WGOJR-00014x-Og
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 07:44:53 +0000
Received: from [85.158.143.35:19259] by server-2.bemta-4.messagelabs.com id
	FC/5B-10891-572B5035; Thu, 20 Feb 2014 07:44:53 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392882292!6995594!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28817 invoked from network); 20 Feb 2014 07:44:52 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 07:44:52 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=mEZi/Ijy64842u2j9z+jdRQ7KnuEaizEfUP3YO/4E2KoukuroFH5sJ9X
	+tzbznuZ/ev2bfa4dnP7SyPowLKvIkjiQOyV0R0tHhq57+BC7jaG+Zmta
	qomnn3CTfK4xuF4vIDO7wGFFNEt08QAoG6AjAcZ+SejYez8vt7HAsSr1u
	YitlPkMmJ3RyBMKfoKFFBee7jomDAz3psusY7V4Z78kw0jBGXXj4E1EDp
	8dJAaCuXnpxv5chpg7oGVk0CIrIh7;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392882292; x=1424418292;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=QMvFEpCrvlVlpEqm02OkX98xJBz19bIkno04CjuARyU=;
	b=Vf54kDzzOUOMUfSH4i24jJNxDk05UQjToL3pAulEWoSV/n5wfcSGZxpB
	vktxB8sPdUhRxXyCRqw7FqS/wWg6G6mo/ToH2Dbo/2uHeazpvwN8+KM4u
	IbODAnAJLwgxJPay7D0H6JV9oV5zS9IBFupHgoYMv/ivjWlxue5iWUd1K
	sI3jxr2aNjpBe796jQjbaiRG4IzlB8mw28W7vxQQA3ESSQMsckiLiXTEG
	n55/Wq2N6kycdkxFbsjrHnNwKlyKt;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,511,1389740400"; d="scan'208";a="186117727"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate10u.abg.fsc.net with ESMTP; 20 Feb 2014 08:44:51 +0100
X-IronPort-AV: E=Sophos;i="4.97,511,1389740400"; d="scan'208";a="31934296"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 20 Feb 2014 08:44:51 +0100
Message-ID: <5305B273.9000803@ts.fujitsu.com>
Date: Thu, 20 Feb 2014 08:44:51 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
	<52FE21E4020000780011C6F5@nat28.tlf.novell.com>
	<53035689.4000602@ts.fujitsu.com>
	<53036688020000780011D4B5@nat28.tlf.novell.com>
In-Reply-To: <53036688020000780011D4B5@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18.02.2014 13:56, Jan Beulich wrote:
>>>> On 18.02.14 at 13:48, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> On 14.02.2014 14:02, Jan Beulich wrote:
>>>>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>>> Is this the case when the guest itself uses single stepping? Initially the
>>>> debug trap shouldn't cause a VMEXIT, I think.
>>>
>>> That looks like a bug, indeed - it's missing from the initially set
>>> exception_bitmap. Could you check whether adding this in
>>> construct_vmcs() addresses that part of the issue? (A proper fix
>>> would likely include further adjustments to the setting of this flag,
>>> e.g. clearing it alongside clearing the DR intercept.) But then
>>> again all of this already depends on cpu_has_monitor_trap_flag -
>>> if that's set on your system, maybe you could try suppressing its
>>> detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
>>> the optional feature set in vmx_init_vmcs_config())?
>>
>> I currently have a test running with the attached patch (the bug was hit
>> about once every 3 hours; the test has now been running for about 4 hours
>> without problems).  The test machine is running the Xen 4.2.3 hypervisor
>> from SLES11 SP3.
>
> Which, if it continues running fine, would confirm the theory.
> I'd like to defer to the VMX folks though for putting together a
> proper fix then - I'd likely overlook some corner case.

Okay, theory confirmed.

Unless you want to do it, I'll start another thread with the info found so far
and include Jun Nakajima and Eddie Dong.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	by abgdate50u.abg.fsc.net with ESMTP; 20 Feb 2014 08:44:51 +0100
Message-ID: <5305B273.9000803@ts.fujitsu.com>
Date: Thu, 20 Feb 2014 08:44:51 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
	<52FE21E4020000780011C6F5@nat28.tlf.novell.com>
	<53035689.4000602@ts.fujitsu.com>
	<53036688020000780011D4B5@nat28.tlf.novell.com>
In-Reply-To: <53036688020000780011D4B5@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18.02.2014 13:56, Jan Beulich wrote:
>>>> On 18.02.14 at 13:48, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>> On 14.02.2014 14:02, Jan Beulich wrote:
>>>>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>>> Is this the case when the guest itself uses single stepping? Initially the
>>>> debug trap shouldn't cause a VMEXIT, I think.
>>>
>>> That looks like a bug, indeed - it's missing from the initially set
>>> exception_bitmap. Could you check whether adding this in
>>> construct_vmcs() addresses that part of the issue? (A proper fix
>>> would likely include further adjustments to the setting of this flag,
>>> e.g. clearing it alongside clearing the DR intercept.) But then
>>> again all of this already depends on cpu_has_monitor_trap_flag -
>>> if that's set on your system, maybe you could try suppressing its
>>> detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
>>> the optional feature set in vmx_init_vmcs_config())?
>>
>> I currently have a test running with the attached patch (the bug was hit about
>> once every 3 hours; the test has now been running for about 4 hours without problems).
>> The test machine is running the Xen 4.2.3 hypervisor from SLES11 SP3.
>
> Which, if it continues running fine, would confirm the theory.
> I'd like to defer to the VMX folks though for putting together a
> proper fix then - I'd likely overlook some corner case.

Okay, theory confirmed.

Unless you want to do it, I'll start another thread with the info found so far
and include Jun Nakajima and Eddie Dong.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 07:59:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 07:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOXK-0001Nt-DS; Thu, 20 Feb 2014 07:59:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dexuan.cui@intel.com>) id 1WGOXI-0001No-Gf
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 07:59:12 +0000
Received: from [85.158.139.211:47298] by server-3.bemta-5.messagelabs.com id
	77/DE-13671-FC5B5035; Thu, 20 Feb 2014 07:59:11 +0000
X-Env-Sender: dexuan.cui@intel.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392883149!543989!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8889 invoked from network); 20 Feb 2014 07:59:10 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-5.tower-206.messagelabs.com with SMTP;
	20 Feb 2014 07:59:10 -0000
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
	by fmsmga101.fm.intel.com with ESMTP; 19 Feb 2014 23:59:08 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,511,1389772800"; d="scan'208";a="484657916"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga002.fm.intel.com with ESMTP; 19 Feb 2014 23:59:08 -0800
Received: from shsmsx104.ccr.corp.intel.com (10.239.4.70) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 19 Feb 2014 23:59:07 -0800
Received: from shsmsx103.ccr.corp.intel.com ([169.254.4.202]) by
	SHSMSX104.ccr.corp.intel.com ([169.254.5.227]) with mapi id
	14.03.0123.003; Thu, 20 Feb 2014 15:59:05 +0800
From: "Cui, Dexuan" <dexuan.cui@intel.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Thread-Topic: [Announcement] Updates to XenGT - a Mediated Graphics
	Passthrough Solution from Intel
Thread-Index: Ac8uCM+UTJUf7TAWTkKmKWKJu37vjg==
Date: Thu, 20 Feb 2014 07:59:04 +0000
Message-ID: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Tian, Kevin" <kevin.tian@intel.com>, "White,
	Michael L" <michael.l.white@intel.com>, "Dong,
	Eddie" <eddie.dong@intel.com>, "Li, 
	Susie" <susie.li@intel.com>, "Cowperthwaite,
	David J" <david.j.cowperthwaite@intel.com>, "Haron,
	Sandra" <sandra.haron@intel.com>
Subject: [Xen-devel] [Announcement] Updates to XenGT - a Mediated Graphics
 Passthrough Solution from Intel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
We're pleased to announce an update to XenGT since its first disclosure last September. XenGT is a full GPU virtualization solution with mediated pass-through on Intel Processor Graphics. A virtual GPU instance is maintained for each VM, with a subset of performance-critical resources assigned directly. The ability to run a native graphics driver inside a VM, without hypervisor intervention on performance-critical paths, achieves a good balance among performance, features, and sharing capability. Though we only support Xen on Intel Processor Graphics so far, the core logic can easily be ported to other hypervisors.

The update consists of:

Linux-vgt:
    Rebased to kernel 3.11.6 
    Lots of stability fixes
    Improved sharing quality of render engine and display engine
    Multi-monitor (clone/extended) support for VGA, HDMI, DP and eDP outputs
    Support for VMs with different resolutions
    Improved monitor hotplug handling
    Preliminary support for GPU recovery

Xen-vgt:
    Rebased to Xen 4.3.1
    Emulation of MMIO accesses wider than 8 bytes

Qemu-vgt:
    Included VT-d GPU pass-through logic for comparison
    Grub2 graphics mode works now

Please refer to the attached new setup guide:
https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf
It provides step-by-step details about building, configuring, and running XenGT.

The new source codes are available at the updated github repos:
Linux: https://github.com/01org/XenGT-Preview-kernel.git
Xen: https://github.com/01org/XenGT-Preview-xen.git
Qemu: https://github.com/01org/XenGT-Preview-qemu.git

More information about XenGT's background, architecture, etc. can be found at:
http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7.pdf
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt

We appreciate your comments!

Thanks,
-- Dexuan

> -----Original Message-----
> From: xen-devel-bounces@lists.xen.org
> [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shan, Haitao
> Sent: Monday, September 09, 2013 8:46 PM
> To: xen-devel@lists.xen.org
> Cc: Tian, Kevin; White, Michael L; Dong, Eddie; Li, Susie; Cowperthwaite,
> David J; Haron, Sandra
> Subject: [Xen-devel] [RFC] XenGT - An Mediated Graphics Passthrough
> Solution from Intel
> 
> Hi, Xen Experts,
> 
> This email is the first public disclosure of project XenGT, a graphics
> virtualization solution based on Xen.
> 
> 
> 
> As you can see, the demand for GPUs to be sharable among virtual machines
> has been constantly rising. Targeted usage models range from accelerating
> gaming, video playback, and rich GUIs to GPU-based high-performance
> computing. This trend is observed on both client and cloud. Efficient GPU
> virtualization is required to address the increasing demands.
> 
> 
> We have developed XenGT - a prototype based on a mediated pass-through
> architecture. We support running a native graphics driver in multiple VMs
> to achieve high performance. A dedicated mediator owns the scheduling and
> sharing of hardware resources among all the virtual machines. By mediated
> pass-through we mean that graphics resources are divided into two
> categories: performance-critical and others. Performance-critical
> resources are partitioned for the VMs' direct access, as in pass-through,
> while the others are saved and restored by the mediator.
> 
> 
> 
> XenGT implements the mediator in dom0, called the vgt driver. This avoids
> adding complex device knowledge to Xen, and also permits a more flexible
> release model. In the meantime, we want to have a unified architecture to
> mediate all the VMs, including dom0. Thus, we developed a deprivileged
> dom0 mode, which traps Dom0's accesses to selected resources (graphics
> resources in the XenGT case) and forwards them to the vgt driver (also in
> Dom0) for processing.
> 
> 
> Right now, we support 4 accelerated VMs: Dom0 + 3 HVM DomUs. We've
> conducted verification based on Ubuntu 12.04 and 13.04. Tests conducted
> in VMs include, but are not limited to, 3D gaming, media playback, and 2D
> acceleration.
> 
> We believe the architecture itself is general enough that different GPUs
> can all use this mediated pass-through concept. However, we have only
> developed code for Intel 4th Generation Core processors with integrated
> graphics.
> 
> If you are interested in trying it, refer to the attached setup guide,
> which provides step-by-step details on building/configuring/running
> XenGT.
> 
> Source code is made available at github:
> Xen: https://github.com/01org/XenGT-Preview-xen.git
> Linux: https://github.com/01org/XenGT-Preview-kernel.git
> Qemu: https://github.com/01org/XenGT-Preview-qemu.git
> 
> Any comments are welcome!
> 
> 
> Special note: We are making this code available to the general public
> because we take the community's involvement and feedback seriously.
> However, while we've tested our solution with various workloads, the code
> is only at a pre-alpha stage. Hangs might happen, so please don't try it
> on a system hosting critical data.
> 
> Shan Haitao


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOdw-00020j-4I; Thu, 20 Feb 2014 08:06:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hannes@stressinduktion.org>) id 1WGHyH-000717-4t
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 00:58:37 +0000
Received: from [193.109.254.147:18677] by server-13.bemta-14.messagelabs.com
	id BE/BD-01226-C3355035; Thu, 20 Feb 2014 00:58:36 +0000
X-Env-Sender: hannes@stressinduktion.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392857915!5531358!1
X-Originating-IP: [87.106.68.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31869 invoked from network); 20 Feb 2014 00:58:35 -0000
Received: from order.stressinduktion.org (HELO order.stressinduktion.org)
	(87.106.68.36)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 00:58:35 -0000
Received: by order.stressinduktion.org (Postfix, from userid 500)
	id 798B11A0C2D9; Thu, 20 Feb 2014 01:58:33 +0100 (CET)
Date: Thu, 20 Feb 2014 01:58:33 +0100
From: Hannes Frederic Sowa <hannes@stressinduktion.org>
To: Dan Williams <dcbw@redhat.com>
Message-ID: <20140220005833.GF1179@order.stressinduktion.org>
Mail-Followup-To: Dan Williams <dcbw@redhat.com>,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Patrick McHardy <kaber@trash.net>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<1392857777.22693.14.camel@dcbw.local>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392857777.22693.14.camel@dcbw.local>
X-Mailman-Approved-At: Thu, 20 Feb 2014 08:06:02 +0000
Cc: kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, Patrick McHardy <kaber@trash.net>,
	xen-devel@lists.xenproject.org, "David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 06:56:17PM -0600, Dan Williams wrote:
> Note that there isn't yet a disable_ipv4 knob though, I was
> perhaps-too-subtly trying to get you to send a patch for it, since I can
> use it too :)

Do you plan to implement
<http://datatracker.ietf.org/doc/draft-ietf-sunset4-noipv4/>?

;)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOdv-00020c-OU; Thu, 20 Feb 2014 08:06:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <michael.d.labriola@gmail.com>)
	id 1WGDRD-0001T0-W1; Wed, 19 Feb 2014 20:08:12 +0000
Received: from [193.109.254.147:57621] by server-8.bemta-14.messagelabs.com id
	1C/1D-18529-B2F05035; Wed, 19 Feb 2014 20:08:11 +0000
X-Env-Sender: michael.d.labriola@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392840488!222474!1
X-Originating-IP: [209.85.216.182]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11659 invoked from network); 19 Feb 2014 20:08:09 -0000
Received: from mail-qc0-f182.google.com (HELO mail-qc0-f182.google.com)
	(209.85.216.182)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 20:08:09 -0000
Received: by mail-qc0-f182.google.com with SMTP id r5so1514925qcx.13
	for <multiple recipients>; Wed, 19 Feb 2014 12:08:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=loEZer1cCjPyAUpXYz8JahfpCDvHnlFc6eFCy2hhGNA=;
	b=CcgrascZ1pzD5WNYdI6kgNe3Non5TPk8vXyUU2lrECb+MPHTtraG3DH+C/x5QUwBM6
	ZnjbqcLWZ1J4sEwkF3rfZjp1NnBFd7xmGZcTYA5GBQyw/aYBlb6w7wTUD4XdBAJLOhYy
	3GGwVlWCi72wuZQaHTqqjBKN7Y4h96jpevNGkcMtrXAPB7BT7YtDV802mWP9ukGrwNfQ
	LNzMpenUNFj5hfRGt8dSaAZ8Z5oajflLjBodq818SEEtVBHD7IViqFuh8b+Z2DXEwxqt
	B+PBKOZgB+PFAq01/YYkp2fKveDKdRH8+B2nywmqZZGrHShgKvbK8isCXNceR9ISxBEF
	YDVQ==
MIME-Version: 1.0
X-Received: by 10.224.68.10 with SMTP id t10mr3994904qai.87.1392840488460;
	Wed, 19 Feb 2014 12:08:08 -0800 (PST)
Received: by 10.140.32.132 with HTTP; Wed, 19 Feb 2014 12:08:08 -0800 (PST)
In-Reply-To: <20140219195705.GA13089@phenom.dumpdata.com>
References: <20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
	<20140219195705.GA13089@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 15:08:08 -0500
Message-ID: <CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
From: Michael Labriola <michael.d.labriola@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Thu, 20 Feb 2014 08:06:02 +0000
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 2:57 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
>> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
>> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
>> >> 09:49:38 AM:
>> >>
>> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
>> >> > bounces@lists.xen.org
>> >> > Date: 01/24/2014 09:50 AM
>> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> >
>> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
>> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
>> >> > >
>> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> >> > > > Date: 01/21/2014 04:59 PM
>> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> > > > Sent by: xen-devel-bounces@lists.xen.org
>> >> > > >
>> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
>> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
>> >>
>> >> > > > > 10:38:27 AM:
>> >> > > > >
>> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> >> > > > > > Date: 01/20/2014 10:38 AM
>> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> > > > > >
>> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
>> >> wrote:
>> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
>> >> > > 10:14:36
>> >> > > > > AM:
>> >> > > > > > >
>> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
>> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
>> >> > > > > > > > Date: 01/20/2014 10:14 AM
>> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> > > > > > > >
>> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
>> >>
>> >> > > wrote:
>> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
>> >> > > consistent
>> >> > > > > > > crashes
>> >> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
>> >> > > unusably
>> >> > > > >
>> >> > > > > > > slow
>> >> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
>> >> > > individually on
>> >> > > > > > >
>> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
>> >> metal.
>> >> > > > > > > >
>> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
>> >> mean?
>> >> > > > > > >
>> >> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
>> >> The
>> >> > >
>> >> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
>> >> for
>> >> > > a
>> >> > > > > plain
>> >> > > > > > > text console login.
>> >> > > > > >
>> >> > > > > > So sluggish is probably due to the PAT not being enabled. This
>> >> patch
>> >> > > > > > should be applied:
>> >> > > > > >
>> >> > > > > > lkml.org/lkml/2011/11/8/406
>> >> > > > > >
>> >> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
>> >> > > > > >
>> >> > > > > > and these two reverted:
>> >> > > > > >
>> >> > > > > >  "xen/pat: Disable PAT support for now."
>> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
>> >> > > > > >
>> >> > > > > > Which is to say do:
>> >> > > > > >
>> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
>> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
>> >> > > > >
>> >> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
>> >> reverted
>> >> > >
>> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
>> >> HD
>> >> > > 7000
>> >> > > > > sluggishness and appears to have fixed the R600 crashes (although
>> >> it's
>> >> > >
>> >> > > > > only been running a few hours).
>> >> > > > >
>> >> > > > > How come that patch didn't get into mainline?  It looks pretty
>> >> > > innocuous
>> >> > > > > to me...
>> >> > > >
>> >> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
>> >> had
>> >> > > > the chance nor time to implement it.
>> >> > >
>> >> > > I see.  Well, I've got a handful of boxes in my lab that need that
>> >> patch
>> >> > > to be usable.  If you do come up with a more mainline-able solution,
>> >> I'd
>> >> > > gladly test it for you.  ;-)
>> >> >
>> >> > Thank you!
>> >>
>> >> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
>> >> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
>> >> again yesterday.  After being solid as a rock for 2 weeks as my primary
>> >> workstation, X has crashed a half dozen or so times so far this week. I've
>> >> been in Xen with 2 paravirtual linux guests running almost constantly for
>> >> this whole period.  I don't understand what's changed, but my system has
>> >> been entirely unstable now.  I did recompile my kernel... but all I did
>> >> was merge the v3.13.1 stable commit into my working tree and turn a few
>> >> things on (netfilter, wifi, a couple drivers turned on here and there).  I
>> >> just went and verified that those patches are still applied in my tree
>> >> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
>> >> staring at a TTY login).
>> >>
>> >> When X crashes, my kernel log prints a couple dozen iterations of this. 3d
>> >> acceleration no longer functions unless I reboot.  If memory serves, the
>> >> unpatched behavior upon X crash was that the kernel continued to spew
>> >> these errors until the whole box locked up.  At least that's not happening
>> >> any more... ;-)
>> >>
>> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
>> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> >> GEM object (8192, 2, 4096, -12)
>> >>
>> >> and here's a slightly different variant that happened while I was typing
>> >> this email (on a different machine, luckily):
>> >>
>> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
>> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
>> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
>> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [64348.297561] [TTM] Buffer eviction failed
>> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> >> GEM object (16384, 2, 4096, -12)
>> >>
>> >> Any ideas?
>> >
>> > yes. I believe you have a memory leak. As in, some driver (or X) is
>> > eating up the memory and not giving up enough. That means the TTM
>> > layer is hitting its ceiling of how much memory it can allocate.
>> >
>> > Now finding the culprit is going to be a bit hard.
>> >
>> > You could use:
>> >
>> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
>> >          pool      refills   pages freed    inuse available     name
>> >            wc          259           224      808        4 nouveau 0000:05:00.0
>> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
>> >        cached           25             0       96        4 nouveau 0000:05:00.0
>> >
>> > to figure out if my thinking is really true. You should have a huge
>> > 'inuse' count and almost no 'available'.
>>
>> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
>> always have the same contents.  Is that normal?
>
> Yes.
>>
>> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist bare
>> metal... only in Xen.  Is that normal?
>
> It would show up on baremetal if you boot with 'iommu=soft'
>
>>
>>          pool      refills   pages freed    inuse available     name
>>        cached        15190         59551     1205        4 radeon 0000:01:00.0
>>
>> If I watch that file while creating xterms, moving them around, etc, I can
>> see the number available fluctuate between 3 and 6.  This is true, even on
>> my box w/ the newer R7 card in it, which hasn't gotten that GEM error
>> message (yet?).
>
> OK, so lets see what happens when the error shows. Incidentally - what amount of
> memory does your initial domain have? And is it different than when you
> boot it as baremetal?

The problem is very reproducible on 3 boxes.  All three boot the
dom0 with as much RAM as Xen will give them, then give up some of
that RAM as needed when I create domUs.  The 3 boxes have 4G, 8G,
and 16G, if memory serves.

Does the amount of RAM on the actual video cards matter?  All the
older cards (that crash all the time) have 2G, whereas the R7 that
hasn't crashed yet only has 1G.

I've been reproducing the crash by just logging in and out of fluxbox
via XDM over and over again right after booting my dom0 in Xen w/ no
guests running.  That makes it happen within a few minutes.  Otherwise
it randomly crashes while I'm in the middle of trying to work... ;-)
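Since the crash takes a few minutes of XDM login cycling to trigger, one
way to catch the leak in the act is to snapshot the pool counters
periodically while reproducing.  A minimal sketch (the debugfs path is
the one from this thread; the `snapshot` helper, log file, and interval
are my own, not anything from the kernel):

```shell
#!/bin/sh
# Periodically append a timestamped copy of the TTM DMA pool stats so
# the trend in 'inuse' is visible after a crash.  Adjust the dri index
# (0 here) to match the card under test.
POOL=${POOL:-/sys/kernel/debug/dri/0/ttm_dma_page_pool}

snapshot() {
    # Tag every data line (skipping the header) with the current epoch time.
    awk -v t="$(date +%s)" 'NR > 1 { print t, $0 }' "$1"
}

# Run until interrupted, then inspect /tmp/ttm-pool.log:
# while sleep 1; do snapshot "$POOL" >> /tmp/ttm-pool.log; done
```

If 'inuse' keeps climbing across login cycles while 'available' stays
pinned near zero, that would support the leak theory above.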

>
> Thank you.
>
>>
>>
>> >
>> > But that will get us just to confirm that yes - you have a big usage
>> > of memory and it is hitting the ceiling.
>> >
>> > Now to actually figure out which application is hanging on these - that
>> > I am not sure about. I think there is some drm info tool to investigate
>> > how many pages each application is using. You can leave it running and
>> > see which app is gulping up the memory. But I am not sure which
>> > tool that is (if there was one).
>> >
>> > Well, let's do one step at a time - see if my theory is correct first.

-- 
Michael D Labriola
21 Rip Van Winkle Cir
Warwick, RI 02886
401-316-9844 (cell)
401-848-8871 (work)
401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOdv-00020V-Cy; Thu, 20 Feb 2014 08:06:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <michael.d.labriola@gmail.com>)
	id 1WGCtd-0008Lm-O1; Wed, 19 Feb 2014 19:33:30 +0000
Received: from [85.158.137.68:64662] by server-17.bemta-3.messagelabs.com id
	93/23-22569-80705035; Wed, 19 Feb 2014 19:33:28 +0000
X-Env-Sender: michael.d.labriola@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392838406!2957229!1
X-Originating-IP: [209.85.216.181]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12102 invoked from network); 19 Feb 2014 19:33:27 -0000
Received: from mail-qc0-f181.google.com (HELO mail-qc0-f181.google.com)
	(209.85.216.181)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:33:27 -0000
Received: by mail-qc0-f181.google.com with SMTP id c9so1100455qcz.12
	for <multiple recipients>; Wed, 19 Feb 2014 11:33:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=F8pbF+dINaMSyYCNGuSc6qxv+rvZoSlmCBaRydKxCGI=;
	b=0RRQ4f++iElebi/pS3dAfQ2M6L9jGkqb3OdaDwJjFahbiqF/4m1SpMW8tL4WOQLmp1
	yWBirHMk0jF2jlusVVEqHVXoNFriVvKwh5K1tur1kdja+x8iUCzBSYEOjyzlEjtdfir3
	LmiLo4Qx5rSQH+xzf1IzvjeHI95+tQUWxQNSzMEpPdx1l/JBhDjohynhTn6gpTznlMz7
	Hh/NIGjOk6ejSYM2Nl4Whi8MqZpFxdOBE5hxVtHpY5w4jyxDON31quLG8W5ZjsnRxvgc
	Och2p+W3kitf7Gk+gT84OpSbazDjlV6yjX1h2ZQp2HuO2phnaDa37QI13Z21QVdOvnvI
	jdJw==
MIME-Version: 1.0
X-Received: by 10.140.84.19 with SMTP id k19mr3445189qgd.98.1392838406276;
	Wed, 19 Feb 2014 11:33:26 -0800 (PST)
Received: by 10.140.32.132 with HTTP; Wed, 19 Feb 2014 11:33:26 -0800 (PST)
In-Reply-To: <20140219170449.GB11365@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 14:33:26 -0500
Message-ID: <CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
From: Michael Labriola <michael.d.labriola@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Thu, 20 Feb 2014 08:06:02 +0000
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
>> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
>> 09:49:38 AM:
>>
>> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
>> > bounces@lists.xen.org
>> > Date: 01/24/2014 09:50 AM
>> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >
>> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
>> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
>> > >
>> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> > > > Date: 01/21/2014 04:59 PM
>> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> > > > Sent by: xen-devel-bounces@lists.xen.org
>> > > >
>> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
>> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
>>
>> > > > > 10:38:27 AM:
>> > > > >
>> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> > > > > > Date: 01/20/2014 10:38 AM
>> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> > > > > >
>> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
>> wrote:
>> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
>> > > 10:14:36
>> > > > > AM:
>> > > > > > >
>> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
>> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
>> > > > > > > > Date: 01/20/2014 10:14 AM
>> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> > > > > > > >
>> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
>>
>> > > wrote:
>> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
>> > > consistent
>> > > > > > > crashes
>> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
>> > > unusably
>> > > > >
>> > > > > > > slow
>> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
>> > > > > individually on
>> > > > > > >
>> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
>> metal.
>> > > > > > > >
>> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
>> mean?
>> > > > > > >
>> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
>> The
>> > >
>> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
>> for
>> > > a
>> > > > > plain
>> > > > > > > text console login.
>> > > > > >
>> > > > > > So sluggish is probably due to the PAT not being enabled. This
>> patch
>> > > > > > should be applied:
>> > > > > >
>> > > > > > lkml.org/lkml/2011/11/8/406
>> > > > > >
>> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
>> > > > > >
>> > > > > > and these two reverted:
>> > > > > >
>> > > > > >  "xen/pat: Disable PAT support for now."
>> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
>> > > > > >
>> > > > > > Which is to say do:
>> > > > > >
>> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
>> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
>> > > > >
>> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
>> reverted
>> > >
>> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
>> HD
>> > > 7000
>> > > > > sluggishness and appears to have fixed the R600 crashes (although
>> it's
>> > >
>> > > > > only been running a few hours).
>> > > > >
>> > > > > How come that patch didn't get into mainline?  It looks pretty
>> > > innocuous
>> > > > > to me...
>> > > >
>> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
>> had
>> > > > the chance nor time to implement it.
>> > >
>> > > I see.  Well, I've got a handful of boxes in my lab that need that
>> patch
>> > > to be usable.  If you do come up with a more mainline-able solution,
>> I'd
>> > > gladly test it for you.  ;-)
>> >
>> > Thank you!
>>
>> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
>> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
>> again yesterday.  After being solid as a rock for 2 weeks as my primary
>> workstation, X has crashed a half dozen or so times so far this week. I've
>> been in Xen with 2 paravirtual linux guests running almost constantly for
>> this whole period.  I don't understand what's changed, but my system has
>> been entirely unstable now.  I did recompile my kernel... but all I did
>> was merge the v3.13.1 stable commit into my working tree and turn a few
>> things on (netfilter, wifi, a couple drivers turned on here and there).  I
>> just went and verified that those patches are still applied in my tree
>> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
>> staring at a TTY login).
>>
>> When X crashes, my kernel log prints a couple dozen iterations of this. 3d
>> acceleration no longer functions unless I reboot.  If memory serves, the
>> unpatched behavior upon X crash was that the kernel continued to spew
>> these errors until the whole box locked up.  At least that's not happening
>> any more... ;-)
>>
>> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
>> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> GEM object (8192, 2, 4096, -12)
>>
>> and here's a slightly different variant that happened while I was typing
>> this email (on a different machine, luckily):
>>
>> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
>> [ 3114.491717] usb 9-1: USB disconnect, device number 2
>> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
>> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [64348.297561] [TTM] Buffer eviction failed
>> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> GEM object (16384, 2, 4096, -12)
>>
>> Any ideas?
>
> yes. I believe you have a memory leak. As in, some driver (or X) is
> eating up the memory and not giving up enough. That means the TTM
> layer is hitting its ceiling of how much memory it can allocate.
>
> Now finding the culprit is going to be a bit hard.
>
> You could use:
>
> [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
>          pool      refills   pages freed    inuse available     name
>            wc          259           224      808        4 nouveau 0000:05:00.0
>        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
>        cached           25             0       96        4 nouveau 0000:05:00.0
>
> to figure out if my thinking is really true. You should have a huge
> 'inuse' count and almost no 'available'.

My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
always have the same contents.  Is that normal?

My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist on bare
metal... only in Xen.  Is that normal?

         pool      refills   pages freed    inuse available     name
       cached        15190         59551     1205        4 radeon 0000:01:00.0

If I watch that file while creating xterms, moving them around, etc., I
can see the 'available' count fluctuate between 3 and 6.  This is true
even on my box w/ the newer R7 card in it, which hasn't gotten that GEM
error message (yet?).
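For eyeballing those fluctuations, a small helper that strips the output
down to the two interesting columns may be handy, e.g. under
`watch -n1`.  This is just a sketch; the `pool_usage` name is mine, and
the field numbers assume the exact column layout shown above:

```shell
# Print only the pool type plus the 'inuse' and 'available' columns.
# In the layout above the data fields are: pool type, refills, pages
# freed, inuse, available, device name -- so inuse is $4, available $5.
pool_usage() {
    awk 'NR > 1 { printf "%-8s inuse=%s available=%s\n", $1, $4, $5 }' "$1"
}

# Example (path from this thread):
# watch -n1 'cat /sys/kernel/debug/dri/0/ttm_dma_page_pool'
```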


>
> But that will get us just to confirm that yes - you have a big usage
> of memory and it is hitting the ceiling.
>
> Now to actually figure out which application is hanging on these - that
> I am not sure about. I think there is some drm info tool to investigate
> how many pages each application is using. You can leave it running and
> see which app is gulping up the memory. But I am not sure which
> tool that is (if there was one).
>
> Well, let's do one step at a time - see if my theory is correct first.



-- 
Michael D Labriola
21 Rip Van Winkle Cir
Warwick, RI 02886
401-316-9844 (cell)
401-848-8871 (work)
401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOdv-00020c-OU; Thu, 20 Feb 2014 08:06:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <michael.d.labriola@gmail.com>)
	id 1WGDRD-0001T0-W1; Wed, 19 Feb 2014 20:08:12 +0000
Received: from [193.109.254.147:57621] by server-8.bemta-14.messagelabs.com id
	1C/1D-18529-B2F05035; Wed, 19 Feb 2014 20:08:11 +0000
X-Env-Sender: michael.d.labriola@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392840488!222474!1
X-Originating-IP: [209.85.216.182]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11659 invoked from network); 19 Feb 2014 20:08:09 -0000
Received: from mail-qc0-f182.google.com (HELO mail-qc0-f182.google.com)
	(209.85.216.182)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 20:08:09 -0000
Received: by mail-qc0-f182.google.com with SMTP id r5so1514925qcx.13
	for <multiple recipients>; Wed, 19 Feb 2014 12:08:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=loEZer1cCjPyAUpXYz8JahfpCDvHnlFc6eFCy2hhGNA=;
	b=CcgrascZ1pzD5WNYdI6kgNe3Non5TPk8vXyUU2lrECb+MPHTtraG3DH+C/x5QUwBM6
	ZnjbqcLWZ1J4sEwkF3rfZjp1NnBFd7xmGZcTYA5GBQyw/aYBlb6w7wTUD4XdBAJLOhYy
	3GGwVlWCi72wuZQaHTqqjBKN7Y4h96jpevNGkcMtrXAPB7BT7YtDV802mWP9ukGrwNfQ
	LNzMpenUNFj5hfRGt8dSaAZ8Z5oajflLjBodq818SEEtVBHD7IViqFuh8b+Z2DXEwxqt
	B+PBKOZgB+PFAq01/YYkp2fKveDKdRH8+B2nywmqZZGrHShgKvbK8isCXNceR9ISxBEF
	YDVQ==
MIME-Version: 1.0
X-Received: by 10.224.68.10 with SMTP id t10mr3994904qai.87.1392840488460;
	Wed, 19 Feb 2014 12:08:08 -0800 (PST)
Received: by 10.140.32.132 with HTTP; Wed, 19 Feb 2014 12:08:08 -0800 (PST)
In-Reply-To: <20140219195705.GA13089@phenom.dumpdata.com>
References: <20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
	<CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
	<20140219195705.GA13089@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 15:08:08 -0500
Message-ID: <CAOQxz3zrT16dieXtxYDeUR6_xci7tQWftQ5Af-dJyjS=eA_bkQ@mail.gmail.com>
From: Michael Labriola <michael.d.labriola@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Thu, 20 Feb 2014 08:06:02 +0000
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 2:57 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Wed, Feb 19, 2014 at 02:33:26PM -0500, Michael Labriola wrote:
>> On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@oracle.com> wrote:
>> > On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
>> >> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
>> >> 09:49:38 AM:
>> >>
>> >> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> >> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
>> >> > bounces@lists.xen.org
>> >> > Date: 01/24/2014 09:50 AM
>> >> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> >
>> >> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
>> >> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
>> >> > >
>> >> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> >> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> >> > > > Date: 01/21/2014 04:59 PM
>> >> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> > > > Sent by: xen-devel-bounces@lists.xen.org
>> >> > > >
>> >> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
>> >> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
>> >>
>> >> > > > > 10:38:27 AM:
>> >> > > > >
>> >> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> >> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> >> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> >> > > > > > Date: 01/20/2014 10:38 AM
>> >> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> > > > > >
>> >> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
>> >> wrote:
>> >> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
>> >> > > 10:14:36
>> >> > > > > AM:
>> >> > > > > > >
>> >> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
>> >> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> >> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
>> >> > > > > > > > Date: 01/20/2014 10:14 AM
>> >> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >> > > > > > > >
>> >> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
>> >>
>> >> > > wrote:
>> >> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
>> >> > > consistent
>> >> > > > > > > crashes
>> >> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
>> >> > > unusably
>> >> > > > >
>> >> > > > > > > slow
>> >> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
>> >> > > > > indiviually on
>> >> > > > > > >
>> >> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
>> >> metal.
>> >> > > > > > > >
>> >> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
>> >> mean?
>> >> > > > > > >
>> >> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
>> >> The
>> >> > >
>> >> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
>> >> for
>> >> > > a
>> >> > > > > plain
>> >> > > > > > > text console login.
>> >> > > > > >
>> >> > > > > > So sluggish is probably due to the PAT not being enabled. This
>> >> patch
>> >> > > > > > should be applied:
>> >> > > > > >
>> >> > > > > > lkml.org/lkml/2011/11/8/406
>> >> > > > > >
>> >> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
>> >> > > > > >
>> >> > > > > > and these two reverted:
>> >> > > > > >
>> >> > > > > >  "xen/pat: Disable PAT support for now."
>> >> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
>> >> > > > > >
>> >> > > > > > Which is to say do:
>> >> > > > > >
>> >> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
>> >> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
>> >> > > > >
>> >> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
>> >> reverted
>> >> > >
>> >> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
>> >> HD
>> >> > > 7000
>> >> > > > > sluggishness and appears to have fixed the R600 crashes (although
>> >> it's
>> >> > >
>> >> > > > > only been running a few hours).
>> >> > > > >
>> >> > > > > How come that patch didn't get into mainline?  It looks pretty
>> >> > > innocuous
>> >> > > > > to me...
>> >> > > >
>> >> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
>> >> had
>> >> > > > the chance nor time to implement it.
>> >> > >
>> >> > > I see.  Well, I've got a handful of boxes in my lab that need that
>> >> patch
>> >> > > to be usable.  If you do come up with a more mainline-able solution,
>> >> I'd
>> >> > > gladly test it for you.  ;-)
>> >> >
>> >> > Thank you!
>> >>
>> >> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
>> >> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
>> >> again yesterday.  After being solid as a rock for 2 weeks as my primary
>> >> workstation, X has crashed a half dozen or so times so far this week. I've
>> >> been in Xen with 2 paravirtual linux guests running almost constantly for
>> >> this whole period.  I don't understand what's changed, but my system has
>> >> been entirely unstable now.  I did recompile my kernel... but all I did
>> >> was merge the v3.13.1 stable commit into my working tree and turn a few
>> >> things on (netfilter, wifi, a couple of drivers here and there).  I
>> >> just went and verified that those patches are still applied in my tree
>> >> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
>> >> staring at a TTY login).
>> >>
>> >> When X crashes, my kernel log prints a couple dozen iterations of this. 3d
>> >> acceleration no longer functions unless I reboot.  If memory serves, the
>> >> unpatched behavior upon X crash was that the kernel continued to spew
>> >> these errors until the whole box locked up.  At least that's not happening
>> >> any more... ;-)
>> >>
>> >> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
>> >> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> >> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> >> GEM object (8192, 2, 4096, -12)
>> >>
>> >> and here's a slightly different variant that happened while I was typing
>> >> this email (on a different machine, luckily):
>> >>
>> >> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
>> >> [ 3114.491717] usb 9-1: USB disconnect, device number 2
>> >> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
>> >> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> >> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [64348.297561] [TTM] Buffer eviction failed
>> >> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> >> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> >> (r:-12)!
>> >> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> >> GEM object (16384, 2, 4096, -12)
>> >>
>> >> Any ideas?
>> >
>> > yes. I believe you have a memory leak. As in, some driver (or X) is
>> > eating up the memory and not giving up enough. That means the TTM
>> > layer is hitting its ceiling of how much memory it can allocate.
>> >
>> > Now finding the culprit is going to be a bit hard.
>> >
>> > You could use:
>> >
>> > [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
>> >          pool      refills   pages freed    inuse available     name
>> >            wc          259           224      808        4 nouveau 0000:05:00.0
>> >        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
>> >        cached           25             0       96        4 nouveau 0000:05:00.0
>> >
>> > to figure out if my thinking is really true. You should have a huge
>> > 'inuse' count and almost no 'available'.
>>
>> My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
>> always have the same contents.  Is that normal?
>
> Yes.
>>
>> My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist on bare
>> metal... only in Xen.  Is that normal?
>
> It would show up on baremetal if you boot with 'iommu=soft'
>
>>
>>          pool      refills   pages freed    inuse available     name
>>        cached        15190         59551     1205        4 radeon 0000:01:00.0
>>
>> If I watch that file while creating xterms, moving them around, etc, I can
>> see the number available fluctuate between 3 and 6.  This is true, even on
>> my box w/ the newer R7 card in it, which hasn't gotten that GEM error
>> message (yet?).
>
> OK, so let's see what happens when the error shows. Incidentally - what amount of
> memory does your initial domain have? And is it different than when you
> boot it on bare metal?

I can reproduce the problem very reliably on 3 boxes.  All three are
booting the dom0 with as much RAM as Xen will give them, then giving
up some of their RAM as needed when I create domUs.  The 3 boxes have
4G, 8G, and 16G if memory serves.

Does the amount of RAM on the actual video cards matter?  All the
older cards (that crash all the time) have 2G, whereas the R7 that
hasn't crashed yet only has 1G.

I've been reproducing the crash by just logging in and out of fluxbox
via XDM over and over again right after booting my dom0 in Xen w/ no
guests running.  That makes it happen within a few minutes.  Otherwise
it randomly crashes while I'm in the middle of trying to work... ;-)
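Since the crash shows up within a few minutes of login/logout cycles, one thing I could do is snapshot the pool counters on an interval and line the timestamps up against the crash time in dmesg afterwards.  A rough sketch (the log_ttm_pool helper name, the interval, and the dri/0 path are my own assumptions; the file only exists where the TTM DMA pool is active, e.g. under Xen or with iommu=soft):

```shell
# log_ttm_pool POOLFILE LOGFILE N
# Append N timestamped snapshots of a ttm_dma_page_pool file, one per
# second, so the counters can be correlated with the time of an X crash.
log_ttm_pool() {
    pool="$1"; log="$2"; n="${3:-1}"
    i=0
    while [ "$i" -lt "$n" ] && [ -r "$pool" ]; do
        { date '+%s'; cat "$pool"; } >> "$log"
        sleep 1
        i=$((i + 1))
    done
}
```

e.g. run `log_ttm_pool /sys/kernel/debug/dri/0/ttm_dma_page_pool /var/tmp/ttm.log 600 &` before starting the XDM login loop, then look at the last snapshots before the GEM allocation failure.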

>
> Thank you.
>
>>
>>
>> >
>> > But that will get us just to confirm that yes - you have a big usage
>> > of memory and it is hitting the ceiling.
>> >
>> > Now to actually figure out which application is hanging on these - that
>> > I am not sure about. I think there is some drm info tool to investigate
>> > how many pages each application is using. You can leave it running and
>> > see which app is gulping up the memory. But I am not sure which
>> > tool that is (if there was one).
>> >
>> > Well, let's do one step at a time - see if my theory is correct first.

-- 
Michael D Labriola
21 Rip Van Winkle Cir
Warwick, RI 02886
401-316-9844 (cell)
401-848-8871 (work)
401-234-1306 (home)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOdw-00020j-4I; Thu, 20 Feb 2014 08:06:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hannes@stressinduktion.org>) id 1WGHyH-000717-4t
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 00:58:37 +0000
Received: from [193.109.254.147:18677] by server-13.bemta-14.messagelabs.com
	id BE/BD-01226-C3355035; Thu, 20 Feb 2014 00:58:36 +0000
X-Env-Sender: hannes@stressinduktion.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392857915!5531358!1
X-Originating-IP: [87.106.68.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31869 invoked from network); 20 Feb 2014 00:58:35 -0000
Received: from order.stressinduktion.org (HELO order.stressinduktion.org)
	(87.106.68.36)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 00:58:35 -0000
Received: by order.stressinduktion.org (Postfix, from userid 500)
	id 798B11A0C2D9; Thu, 20 Feb 2014 01:58:33 +0100 (CET)
Date: Thu, 20 Feb 2014 01:58:33 +0100
From: Hannes Frederic Sowa <hannes@stressinduktion.org>
To: Dan Williams <dcbw@redhat.com>
Message-ID: <20140220005833.GF1179@order.stressinduktion.org>
Mail-Followup-To: Dan Williams <dcbw@redhat.com>,
	"Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Patrick McHardy <kaber@trash.net>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<1392857777.22693.14.camel@dcbw.local>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392857777.22693.14.camel@dcbw.local>
X-Mailman-Approved-At: Thu, 20 Feb 2014 08:06:02 +0000
Cc: kvm@vger.kernel.org, "Luis R. Rodriguez" <mcgrof@do-not-panic.com>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, Patrick McHardy <kaber@trash.net>,
	xen-devel@lists.xenproject.org, "David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 06:56:17PM -0600, Dan Williams wrote:
> Note that there isn't yet a disable_ipv4 knob though, I was
> perhaps-too-subtly trying to get you to send a patch for it, since I can
> use it too :)

Do you plan to implement
<http://datatracker.ietf.org/doc/draft-ietf-sunset4-noipv4/>?

;)



From xen-devel-bounces@lists.xen.org Thu Feb 20 08:06:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOdv-00020V-Cy; Thu, 20 Feb 2014 08:06:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <michael.d.labriola@gmail.com>)
	id 1WGCtd-0008Lm-O1; Wed, 19 Feb 2014 19:33:30 +0000
Received: from [85.158.137.68:64662] by server-17.bemta-3.messagelabs.com id
	93/23-22569-80705035; Wed, 19 Feb 2014 19:33:28 +0000
X-Env-Sender: michael.d.labriola@gmail.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392838406!2957229!1
X-Originating-IP: [209.85.216.181]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12102 invoked from network); 19 Feb 2014 19:33:27 -0000
Received: from mail-qc0-f181.google.com (HELO mail-qc0-f181.google.com)
	(209.85.216.181)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	19 Feb 2014 19:33:27 -0000
Received: by mail-qc0-f181.google.com with SMTP id c9so1100455qcz.12
	for <multiple recipients>; Wed, 19 Feb 2014 11:33:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=F8pbF+dINaMSyYCNGuSc6qxv+rvZoSlmCBaRydKxCGI=;
	b=0RRQ4f++iElebi/pS3dAfQ2M6L9jGkqb3OdaDwJjFahbiqF/4m1SpMW8tL4WOQLmp1
	yWBirHMk0jF2jlusVVEqHVXoNFriVvKwh5K1tur1kdja+x8iUCzBSYEOjyzlEjtdfir3
	LmiLo4Qx5rSQH+xzf1IzvjeHI95+tQUWxQNSzMEpPdx1l/JBhDjohynhTn6gpTznlMz7
	Hh/NIGjOk6ejSYM2Nl4Whi8MqZpFxdOBE5hxVtHpY5w4jyxDON31quLG8W5ZjsnRxvgc
	Och2p+W3kitf7Gk+gT84OpSbazDjlV6yjX1h2ZQp2HuO2phnaDa37QI13Z21QVdOvnvI
	jdJw==
MIME-Version: 1.0
X-Received: by 10.140.84.19 with SMTP id k19mr3445189qgd.98.1392838406276;
	Wed, 19 Feb 2014 11:33:26 -0800 (PST)
Received: by 10.140.32.132 with HTTP; Wed, 19 Feb 2014 11:33:26 -0800 (PST)
In-Reply-To: <20140219170449.GB11365@phenom.dumpdata.com>
References: <OFF3ADE803.67F2976D-ON85257C66.004EA77A-85257C66.005243B9@gdeb.com>
	<20140120151436.GA19918@andromeda.dapyr.net>
	<OF16949C1D.7EB315DE-ON85257C66.00540CD1-85257C66.0054D016@gdeb.com>
	<20140120153827.GA24989@phenom.dumpdata.com>
	<OF2899D9D9.D33A4E3B-ON85257C66.006ECFD4-85257C66.006F4611@gdeb.com>
	<20140121215905.GC6363@phenom.dumpdata.com>
	<OF72DA261E.EBD23FAC-ON85257C69.005C71A2-85257C69.005CE475@gdeb.com>
	<20140124144938.GD12946@phenom.dumpdata.com>
	<OF60145CF5.5A89BE42-ON85257C7B.006FD806-85257C7C.0055A1AF@gdeb.com>
	<20140219170449.GB11365@phenom.dumpdata.com>
Date: Wed, 19 Feb 2014 14:33:26 -0500
Message-ID: <CAOQxz3yaJECiVZjgjN43JjGkfdG8czbTx4dFhmvKvHFS0eOZTg@mail.gmail.com>
From: Michael Labriola <michael.d.labriola@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Thu, 20 Feb 2014 08:06:02 +0000
Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>, xen-devel-bounces@lists.xen.org,
	Michael D Labriola <mlabriol@gdeb.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Radeon DRM dom0 issues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 12:04 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Tue, Feb 11, 2014 at 10:35:18AM -0500, Michael D Labriola wrote:
>> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/24/2014
>> 09:49:38 AM:
>>
>> > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> > michael.d.labriola@gmail.com, xen-devel@lists.xen.org, xen-devel-
>> > bounces@lists.xen.org
>> > Date: 01/24/2014 09:50 AM
>> > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> >
>> > On Thu, Jan 23, 2014 at 11:54:37AM -0500, Michael D Labriola wrote:
>> > > xen-devel-bounces@lists.xen.org wrote on 01/21/2014 04:59:05 PM:
>> > >
>> > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> > > > Date: 01/21/2014 04:59 PM
>> > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> > > > Sent by: xen-devel-bounces@lists.xen.org
>> > > >
>> > > > On Mon, Jan 20, 2014 at 03:15:24PM -0500, Michael D Labriola wrote:
>> > > > > Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote on 01/20/2014
>>
>> > > > > 10:38:27 AM:
>> > > > >
>> > > > > > From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > > > > > Cc: Konrad Rzeszutek Wilk <konrad@darnok.org>,
>> > > > > > michael.d.labriola@gmail.com, xen-devel@lists.xen.org
>> > > > > > Date: 01/20/2014 10:38 AM
>> > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> > > > > >
>> > > > > > On Mon, Jan 20, 2014 at 10:26:22AM -0500, Michael D Labriola
>> wrote:
>> > > > > > > Konrad Rzeszutek Wilk <konrad@darnok.org> wrote on 01/20/2014
>> > > 10:14:36
>> > > > > AM:
>> > > > > > >
>> > > > > > > > From: Konrad Rzeszutek Wilk <konrad@darnok.org>
>> > > > > > > > To: Michael D Labriola <mlabriol@gdeb.com>,
>> > > > > > > > Cc: xen-devel@lists.xen.org, michael.d.labriola@gmail.com
>> > > > > > > > Date: 01/20/2014 10:14 AM
>> > > > > > > > Subject: Re: [Xen-devel] Radeon DRM dom0 issues
>> > > > > > > >
>> > > > > > > > On Mon, Jan 20, 2014 at 09:58:32AM -0500, Michael D Labriola
>>
>> > > wrote:
>> > > > > > > > > Anyone here running a dom0 w/ Radeon DRM?  I'm having
>> > > consistent
>> > > > > > > crashes
>> > > > > > > > > with multiple older R600 series (HD 6470 and HD 6570) and
>> > > unusably
>> > > > >
>> > > > > > > slow
>> > > > > > > > > graphics with a newer HD7000 (can see each line refresh
>> > > > > individually on
>> > > > > > >
>> > > > > > > > > radeonfb tty).  All 3 systems seem to work fine bare
>> metal.
>> > > > > > > >
>> > > > > > > > I hadn't been using DRM, just Xserver. Is that what you
>> mean?
>> > > > > > >
>> > > > > > > The R600 problems happen when in X, using OpenGL, on my dom0.
>> The
>> > >
>> > > > > > > RadeonSI sluggishness is when using the KMS framebuffer device
>> for
>> > > a
>> > > > > plain
>> > > > > > > text console login.
>> > > > > >
>> > > > > > So sluggish is probably due to the PAT not being enabled. This
>> patch
>> > > > > > should be applied:
>> > > > > >
>> > > > > > lkml.org/lkml/2011/11/8/406
>> > > > > >
>> > > > > > (or http://marc.info/?l=linux-kernel&m=132888833209874&w=2)
>> > > > > >
>> > > > > > and these two reverted:
>> > > > > >
>> > > > > >  "xen/pat: Disable PAT support for now."
>> > > > > >  "xen/pat: Disable PAT using pat_enabled value."
>> > > > > >
>> > > > > > Which is to say do:
>> > > > > >
>> > > > > > git revert c79c49826270b8b0061b2fca840fc3f013c8a78a
>> > > > > > git revert 8eaffa67b43e99ae581622c5133e20b0f48bcef1
>> > > > >
>> > > > > Thanks!  I cherry-picked that patch out of your testing tree,
>> reverted
>> > >
>> > > > > those 2 commits, recompiled and installed.  Definitely fixed the
>> HD
>> > > 7000
>> > > > > sluggishness and appears to have fixed the R600 crashes (although
>> it's
>> > >
>> > > > > only been running a few hours).
>> > > > >
>> > > > > How come that patch didn't get into mainline?  It looks pretty
>> > > innocuous
>> > > > > to me...
>> > > >
>> > > > <Sigh> the x86 maintainers wanted a different route. And I hadn't
>> had
>> > > > the chance nor time to implement it.
>> > >
>> > > I see.  Well, I've got a handful of boxes in my lab that need that
>> patch
>> > > to be usable.  If you do come up with a more mainline-able solution,
>> I'd
>> > > gladly test it for you.  ;-)
>> >
>> > Thank you!
>>
>> Uh, oh.  Looks like those reverts and patches didn't entirely fix my
>> problem.  My box with the HD5450 (r600 gallium3d) started going bonkers
>> again yesterday.  After being solid as a rock for 2 weeks as my primary
>> workstation, X has crashed a half dozen or so times so far this week. I've
>> been in Xen with 2 paravirtual linux guests running almost constantly for
>> this whole period.  I don't understand what's changed, but my system has
>> been entirely unstable now.  I did recompile my kernel... but all I did
>> was merge the v3.13.1 stable commit into my working tree and turn a few
>> things on (netfilter, wifi, a couple of drivers here and there).  I
>> just went and verified that those patches are still applied in my tree
>> (i.e., I didn't accidentally undo them).  I'm scratching my head (and
>> staring at a TTY login).
>>
>> When X crashes, my kernel log prints a couple dozen iterations of this. 3d
>> acceleration no longer functions unless I reboot.  If memory serves, the
>> unpatched behavior upon X crash was that the kernel continued to spew
>> these errors until the whole box locked up.  At least that's not happening
>> any more... ;-)
>>
>> [  702.070084] [TTM] radeon 0000:01:00.0: Unable to get page 2
>> [  702.075971] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [  704.720699] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> [  704.726635] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [  704.733910] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> GEM object (8192, 2, 4096, -12)
>>
>> and here's a slightly different variant that happened while I was typing
>> this email (on a different machine, luckily):
>>
>> [ 3107.713039] sdf: detected capacity change from 31625052160 to 0
>> [ 3114.491717] usb 9-1: USB disconnect, device number 2
>> [64348.271534] [TTM] radeon 0000:01:00.0: Unable to get page 3
>> [64348.277312] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [64348.284470] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> [64348.290257] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [64348.297561] [TTM] Buffer eviction failed
>> [64349.550518] [TTM] radeon 0000:01:00.0: Unable to get page 0
>> [64349.556417] [TTM] radeon 0000:01:00.0: Failed to fill cached pool
>> (r:-12)!
>> [64349.563714] [drm:radeon_gem_object_create] *ERROR* Failed to allocate
>> GEM object (16384, 2, 4096, -12)
>>
>> Any ideas?
>
> yes. I believe you have a memory leak. As in, some driver (or X) is
> eating up the memory and not giving up enough. That means the TTM
> layer is hitting its ceiling of how much memory it can allocate.
>
> Now finding the culprit is going to be a bit hard.
>
> You could use:
>
> [root@phenom 1]# cat /sys/kernel/debug/dri/1/ttm_dma_page_pool
>          pool      refills   pages freed    inuse available     name
>            wc          259           224      808        4 nouveau 0000:05:00.0
>        cached      3403058      13561071    51158        3 radeon 0000:01:00.0
>        cached           25             0       96        4 nouveau 0000:05:00.0
>
> to figure out if my thinking is really true. You should have a huge
> 'inuse' count and almost no 'available'.

My /sys/kernel/debug/dri directory has a 0 and a 64 entry, which appear to
always have the same contents.  Is that normal?

My /sys/kernel/debug/dri/0/ttm_dma_page_pool file doesn't exist on bare
metal... only in Xen.  Is that normal?

         pool      refills   pages freed    inuse available     name
       cached        15190         59551     1205        4 radeon 0000:01:00.0

If I watch that file while creating xterms, moving them around, etc, I can
see the number available fluctuate between 3 and 6.  This is true, even on
my box w/ the newer R7 card in it, which hasn't gotten that GEM error
message (yet?).
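
To watch just those two numbers instead of eyeballing the whole table, a small awk filter over the pool file works.  Only a sketch (the ttm_inuse name is mine, and the column positions are taken from the output pasted above, so they may differ on other kernels):

```shell
# ttm_inuse POOLFILE
# Print "inuse=<n> available=<m>" for each radeon row of a
# ttm_dma_page_pool file (field positions match the layout shown above:
# pool, refills, pages-freed, inuse, available, driver, PCI id).
ttm_inuse() {
    awk '/radeon/ { print "inuse=" $4, "available=" $5 }' \
        "${1:-/sys/kernel/debug/dri/0/ttm_dma_page_pool}"
}
```

Run in a loop (e.g. `while sleep 1; do ttm_inuse; done`), a leak would show up as inuse climbing steadily while available stays pinned around 3-6.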


>
> But that will get us just to confirm that yes - you have a big usage
> of memory and it is hitting the ceiling.
>
> Now to actually figure out which application is hanging on these - that
> I am not sure about. I think there is some drm info tool to investigate
> how many pages each application is using. You can leave it running and
> see which app is gulping up the memory. But I am not sure which
> tool that is (if there was one).
>
> Well, let's do one step at a time - see if my theory is correct first.



-- 
Michael D Labriola
21 Rip Van Winkle Cir
Warwick, RI 02886
401-316-9844 (cell)
401-848-8871 (work)
401-234-1306 (home)


From xen-devel-bounces@lists.xen.org Thu Feb 20 08:07:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOfV-0002Fh-QS; Thu, 20 Feb 2014 08:07:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGOfT-0002FV-SE
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:07:40 +0000
Received: from [85.158.139.211:4336] by server-10.bemta-5.messagelabs.com id
	9F/01-08578-BC7B5035; Thu, 20 Feb 2014 08:07:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392883658!5062469!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25202 invoked from network); 20 Feb 2014 08:07:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 08:07:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Feb 2014 08:07:37 +0000
Message-Id: <5305C5D8020000780011DEB3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 20 Feb 2014 08:07:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Juergen Gross" <juergen.gross@ts.fujitsu.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
	<52FE21E4020000780011C6F5@nat28.tlf.novell.com>
	<53035689.4000602@ts.fujitsu.com>
	<53036688020000780011D4B5@nat28.tlf.novell.com>
	<5305B273.9000803@ts.fujitsu.com>
In-Reply-To: <5305B273.9000803@ts.fujitsu.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.02.14 at 08:44, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> On 18.02.2014 13:56, Jan Beulich wrote:
>>>>> On 18.02.14 at 13:48, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>> On 14.02.2014 14:02, Jan Beulich wrote:
>>>>>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>>>> Is this the case when the guest itself uses single stepping? Initially the
>>>>> debug trap shouldn't cause a VMEXIT, I think.
>>>>
>>>> That looks like a bug, indeed - it's missing from the initially set
>>>> exception_bitmap. Could you check whether adding this in
>>>> construct_vmcs() addresses that part of the issue? (A proper fix
>>>> would likely include further adjustments to the setting of this flag,
>>>> e.g. clearing it alongside clearing the DR intercept.) But then
>>>> again all of this already depends on cpu_has_monitor_trap_flag -
>>>> if that's set on your system, maybe you could try suppressing its
>>>> detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
>>>> the optional feature set in vmx_init_vmcs_config())?
>>>
>>> I currently have a test running with the attached patch (the bug was hit about
>>> once every 3 hours, test is running now for about 4 hours without problem).
>>> Test machine is running with Xen 4.2.3 hypervisor from SLES11 SP3.
>>
>> Which, if it continues running fine, would confirm the theory.
>> I'd like to defer to the VMX folks though for putting together a
>> proper fix then - I'd likely overlook some corner case.
> 
> Okay, theory confirmed.
> 
> Unless you want to do it, I'll start another thread with the info found so 
> far and include Jun Nakajima and Eddie Dong.

Please do. And perhaps also include Yang Z Zhang.

Jan



From xen-devel-bounces@lists.xen.org Thu Feb 20 08:07:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOfV-0002Fh-QS; Thu, 20 Feb 2014 08:07:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGOfT-0002FV-SE
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:07:40 +0000
Received: from [85.158.139.211:4336] by server-10.bemta-5.messagelabs.com id
	9F/01-08578-BC7B5035; Thu, 20 Feb 2014 08:07:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392883658!5062469!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25202 invoked from network); 20 Feb 2014 08:07:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 08:07:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Feb 2014 08:07:37 +0000
Message-Id: <5305C5D8020000780011DEB3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 20 Feb 2014 08:07:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Juergen Gross" <juergen.gross@ts.fujitsu.com>
References: <52FDE2ED.4030008@ts.fujitsu.com>
	<52FE00C8020000780011C649@nat28.tlf.novell.com>
	<52FE09A2.4000909@ts.fujitsu.com>
	<52FE21E4020000780011C6F5@nat28.tlf.novell.com>
	<53035689.4000602@ts.fujitsu.com>
	<53036688020000780011D4B5@nat28.tlf.novell.com>
	<5305B273.9000803@ts.fujitsu.com>
In-Reply-To: <5305B273.9000803@ts.fujitsu.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Debug-Registers in HVM domain destroyed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.02.14 at 08:44, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
> On 18.02.2014 13:56, Jan Beulich wrote:
>>>>> On 18.02.14 at 13:48, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>> On 14.02.2014 14:02, Jan Beulich wrote:
>>>>>>> On 14.02.14 at 13:18, Juergen Gross <juergen.gross@ts.fujitsu.com> wrote:
>>>>> Is this the case when the guest itself uses single stepping? Initially the
>>>>> debug trap shouldn't cause a VMEXIT, I think.
>>>>
>>>> That looks like a bug, indeed - it's missing from the initially set
>>>> exception_bitmap. Could you check whether adding this in
>>>> construct_vmcs() addresses that part of the issue? (A proper fix
>>>> would likely include further adjustments to the setting of this flag,
>>>> e.g. clearing it alongside clearing the DR intercept.) But then
>>>> again all of this already depends on cpu_has_monitor_trap_flag -
>>>> if that's set on your system, maybe you could try suppressing its
>>>> detection (by removing CPU_BASED_MONITOR_TRAP_FLAG from
>>>> the optional feature set in vmx_init_vmcs_config())?
>>>
>>> I currently have a test running with the attached patch (the bug was hit about
>>> once every 3 hours; the test has now been running for about 4 hours without problems).
>>> The test machine is running the Xen 4.2.3 hypervisor from SLES11 SP3.
>>
>> Which, if it continues running fine, would confirm the theory.
>> I'd like to defer to the VMX folks though for putting together a
>> proper fix then - I'd likely overlook some corner case.
> 
> Okay, theory confirmed.
> 
> Unless you want to do it, I'll start another thread with the info found so 
> far and include Jun Nakajima and Eddie Dong.

Please do. And perhaps also include Yang Z Zhang.

Jan
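For reference, the change Jan sketches above can be modeled in miniature. TRAP_debug (exception vector 1) and the VMCS exception_bitmap are real names in the Xen/VMX code, but the plain global standing in for the VMCS field and the helper below are hypothetical simplifications for illustration, not the actual construct_vmcs() code:

```c
#include <assert.h>
#include <stdint.h>

#define TRAP_debug 1  /* exception vector 1, #DB */

/* Model of the VMCS exception bitmap: bit N set means exception vector N
 * causes a VM exit instead of being delivered directly to the guest. */
static uint32_t exception_bitmap;

/* Hypothetical helper mirroring the fix discussed above: ensure #DB exits
 * reach the hypervisor even before the guest first touches the debug
 * registers, and allow clearing it alongside the DR intercept. */
static void vmcs_intercept_db(int enable)
{
    if ( enable )
        exception_bitmap |= 1u << TRAP_debug;
    else
        exception_bitmap &= ~(1u << TRAP_debug);
}
```

The point of the thread is that the initial bitmap lacked this bit, so a guest single-stepping itself never trapped into the hypervisor on systems without (or with suppressed) MONITOR_TRAP_FLAG support.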


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:09:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOhV-0002ZI-KP; Thu, 20 Feb 2014 08:09:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiezhenjiang@foxmail.com>) id 1WGOhU-0002ZC-Kl
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 08:09:44 +0000
Received: from [85.158.143.35:56193] by server-3.bemta-4.messagelabs.com id
	15/C7-11539-748B5035; Thu, 20 Feb 2014 08:09:43 +0000
X-Env-Sender: xiezhenjiang@foxmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392883782!6958789!1
X-Originating-IP: [184.105.206.84]
X-SpamReason: No, hits=-1.1 required=7.0 tests=FROM_EXCESS_BASE64,
	MIME_BASE64_TEXT, ML_RADAR_FP_R_14, ML_RADAR_SPEW_LINKS_14,
	spamassassin: , received_headers: No Received headers
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28711 invoked from network); 20 Feb 2014 08:09:43 -0000
Received: from smtpproxy19.qq.com (HELO smtpproxy19.qq.com) (184.105.206.84)
	by server-5.tower-21.messagelabs.com with SMTP;
	20 Feb 2014 08:09:43 -0000
X-QQ-SSF: 00000000000000F000000000000000S
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 115.156.145.101
X-QQ-STYLE: 
X-QQ-mid: webmail642t1392883771t1879861
From: "=?gb18030?B?Q2hhcmxlcw==?=" <xiezhenjiang@foxmail.com>
To: "=?gb18030?B?eGVuLWRldmVs?=" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Date: Thu, 20 Feb 2014 16:09:31 +0800
X-Priority: 3
Message-ID: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-QQ-SENDSIZE: 520
X-QQ-Bgrelay: 1
Subject: [Xen-devel] confusions on monitoring VM cpu usage in Xen hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="gb18030"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi everyone, I'm trying to monitor VM CPU usage in the Xen hypervisor and use that
information in Xen's Credit Scheduler. Most references I found on this topic cover
monitoring VM CPU usage from Domain-0 rather than from within the Xen hypervisor.
I'm currently stuck on how to determine the state of a vCPU. My understanding is that
a vCPU running on a physical CPU doesn't necessarily mean it is really consuming CPU
cycles, so how can I determine whether a vCPU is consuming CPU cycles or not? Could
you please give me some advice?

Thank you very much!
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
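As a starting point for the question above, Xen's runstate accounting already distinguishes a vCPU that is executing from one that is merely runnable. A minimal sketch — the RUNSTATE_* values mirror Xen's public vcpu interface, while the classification helper is hypothetical:

```c
#include <assert.h>

/* Runstate values as in Xen's public/vcpu.h */
#define RUNSTATE_running  0  /* currently executing on a pCPU */
#define RUNSTATE_runnable 1  /* ready to run, waiting for a pCPU */
#define RUNSTATE_blocked  2  /* blocked, e.g. waiting for an event */
#define RUNSTATE_offline  3  /* not runnable (paused, down) */

/* Hypothetical helper: only a vCPU in RUNSTATE_running is actually
 * consuming CPU cycles; a runnable vCPU merely wants a pCPU. */
static int vcpu_consuming_cycles(int runstate)
{
    return runstate == RUNSTATE_running;
}
```

Inside the hypervisor, per-vCPU time spent in each of these states is what the scheduler's accounting tracks, so summing time in RUNSTATE_running is one way to measure actual cycle consumption.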

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:11:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:11:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGOj9-0002i0-4I; Thu, 20 Feb 2014 08:11:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGOj7-0002ho-DJ
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:11:25 +0000
Received: from [85.158.139.211:4448] by server-4.bemta-5.messagelabs.com id
	C3/A0-08092-CA8B5035; Thu, 20 Feb 2014 08:11:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392883883!5100629!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 914 invoked from network); 20 Feb 2014 08:11:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 08:11:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Feb 2014 08:11:23 +0000
Message-Id: <5305C6B9020000780011DEC9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 20 Feb 2014 08:11:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
	<1392819327.29739.85.camel@kazak.uk.xensource.com>
	<5304E81A.3050703@linaro.org>
	<1392830820.29739.97.camel@kazak.uk.xensource.com>
	<5304EA5B.50506@linaro.org>
In-Reply-To: <5304EA5B.50506@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Ian Campbell <Ian.Campbell@citrix.com>, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 18:31, Julien Grall <julien.grall@linaro.org> wrote:
> On 02/19/2014 05:27 PM, Ian Campbell wrote:
>> On Wed, 2014-02-19 at 17:21 +0000, Julien Grall wrote:
>>>>> +#define SZ_4K                               (1 << 12)
>>>>> +#define SZ_64K                              (1 << 16)
>>>>> +
>>>>> +/* Driver options */
>>>>> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
>>>>
>>>> Is this just retained to reduce the deviation from the Linux driver?
>>>> It's no use to us I think. (I suppose that goes for a bunch of other
>>>> stuff, e.g. the PGSZ_4K stuff, which I will avoid commenting on
>>>> further).
>>>
>>> SZ_4K and SZ_64K are used later in the code.
>> 
>> But they are actually useless to us aren't they?
> 
> As we only use 4K pages in Xen, yes. I kept it because there are a few
> places where the SMMU configuration is not the same.
> 
> If we want to support 64K pages in the future, it will be harder to add
> that support if this stuff is removed.
> 
> The constant was added by myself, as it doesn't exist in Xen. I can
> move it into the generic code.

But if possible I'd like to encourage you to use PAGE_SIZE_4K to
match what we already have in the IOMMU code. Unless deviation
from the sources you're cloning is deemed worse than inconsistency
within our own code.

Jan
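For context, the two naming schemes discussed above denote the same values. A minimal sketch: the SZ_* defines come from the cloned Linux driver, while PAGE_SHIFT_4K/PAGE_SIZE_4K are shown here as plain defines approximating what Xen's IOMMU headers provide:

```c
#include <assert.h>

/* Linux-style size constants carried over by the cloned SMMU driver */
#define SZ_4K   (1 << 12)
#define SZ_64K  (1 << 16)

/* Xen's IOMMU code expresses the same 4K granularity via its own names;
 * shown here as plain defines for illustration. */
#define PAGE_SHIFT_4K  12
#define PAGE_SIZE_4K   (1UL << PAGE_SHIFT_4K)
```

Since both spellings expand to identical values, the choice is purely one of consistency: with the cloned Linux source (SZ_4K) or with the existing Xen IOMMU code (PAGE_SIZE_4K).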


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:32:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGP2s-0003AJ-4u; Thu, 20 Feb 2014 08:31:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGP2q-0003AE-AP
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:31:48 +0000
Received: from [85.158.143.35:53245] by server-2.bemta-4.messagelabs.com id
	EE/8D-10891-37DB5035; Thu, 20 Feb 2014 08:31:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392885106!6985852!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27984 invoked from network); 20 Feb 2014 08:31:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 08:31:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Feb 2014 08:31:46 +0000
Message-Id: <5305CB7F020000780011DEDE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 20 Feb 2014 08:31:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<20140127175550.4cc67171@mantra.us.oracle.com>
	<52E7951802000078001177F6@nat28.tlf.novell.com>
	<20140128180802.152b3f8d@mantra.us.oracle.com>
	<1390992026.31814.63.camel@kazak.uk.xensource.com>
	<20140129113846.GA54797@deinos.phlegethon.org>
	<1390995662.31814.76.camel@kazak.uk.xensource.com>
	<20140129114837.GB54797@deinos.phlegethon.org>
	<1390996295.31814.84.camel@kazak.uk.xensource.com>
	<20140129173315.592e593e@mantra.us.oracle.com>
	<52F7B213.3000302@linaro.org>
	<1392039724.5117.94.camel@kazak.uk.xensource.com>
	<52F8ED31.609@linaro.org>
	<1392046038.26657.19.camel@kazak.uk.xensource.com>
	<52F8F13E.1070308@linaro.org>
	<20140219183713.1aa2418f@mantra.us.oracle.com>
In-Reply-To: <20140219183713.1aa2418f@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Julien Grall <julien.grall@linaro.org>, Tim Deegan <tim@xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.02.14 at 03:37, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> On Mon, 10 Feb 2014 15:33:18 +0000
> Julien Grall <julien.grall@linaro.org> wrote:
> 
>> On 02/10/2014 03:27 PM, Ian Campbell wrote:
>> > On Mon, 2014-02-10 at 15:16 +0000, Julien Grall wrote:
>> >> Hi Ian,
>> >>
>> >> On 02/10/2014 01:42 PM, Ian Campbell wrote:
>> >>> On Sun, 2014-02-09 at 16:51 +0000, Julien Grall wrote:
>> >>>> Hello Mukesh,
>> >>>>
>> >>>> On 30/01/14 01:33, Mukesh Rathor wrote:
>> >>>>>>> I'm not sure what you mean:
>> >>>>>>>   - the code that Mukesh is adding doesn't have a struct
>> >>
>> >> For ARM, a call to xc_map_foreign_page will end up in
>> >> XENMEM_add_to_physmap_range(XENMAPSPACE_gmfn_foreign).
>> >>
>> >> For both architectures, you can look at the function
>> >> xen_remap_map_domain_mfn_range (implemented differently on ARM and
>> >> x86), which is the last function called before going to the
>> >> hypervisor.
>> >>
>> >> If we don't modify the XENMEM_add_to_physmap hypercall, we will
>> >> have to add a new way to map Xen pages for xentrace & co.
>> > 
>> > Wouldn't it be incorrect to generically return OK for mapping a
>> > DOMID_XEN owned page -- at least something needs to validate that
>> > the particular mfn being mapped is supposed to be shared with the
>> > guest in question.
>> 
>> That's already the case. By default a Xen heap page doesn't belong to
>> DOMID_XEN. Xen has to explicitly call share_xen_page_with_privileged_guests
>> (see an example in xen/common/trace.c:244) to assign DOMID_XEN to
>> the given page.
> 
> 
> So, how about the following:
> 
> +++ b/xen/common/domain.c
> @@ -484,8 +484,20 @@ struct domain *rcu_lock_domain_by_any_id(domid_t dom)
>  
>  int rcu_lock_remote_domain_by_id(domid_t dom, struct domain **d)
>  {
> -    if ( (*d = rcu_lock_domain_by_id(dom)) == NULL )
> +    if ( (dom == DOMID_XEN && (*d = rcu_lock_domain(dom_xen)) == NULL) ||
> +         (dom != DOMID_XEN && (*d = rcu_lock_domain_by_id(dom)) == NULL) )
> +        return -ESRCH;
> +

Changing a very generic function for a very special purpose is
almost never the right thing: I'd suppose that _if_ this is really the
right thing to do in the first place (which I continue to doubt without
being presented with full context), it should be done in the one or
very few call sites that actually need this (depending on their count
a new helper function might then be worthwhile introducing).

Jan
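A sketch of the call-site helper Jan suggests, leaving rcu_lock_remote_domain_by_id() itself untouched. All types and lookup functions below are hypothetical stand-ins for the real Xen ones, kept just detailed enough to show the shape of the special case:

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the Xen types and functions involved
 * (hypothetical simplifications for illustration only). */
typedef unsigned short domid_t;
#define DOMID_XEN ((domid_t)0x7FF2)

struct domain { domid_t domain_id; };

static struct domain dom_xen_storage = { DOMID_XEN };
static struct domain *dom_xen = &dom_xen_storage;

static struct domain *rcu_lock_domain(struct domain *d) { return d; }
static struct domain *rcu_lock_domain_by_id(domid_t dom)
{
    (void)dom;
    return NULL; /* stub: pretend no ordinary domain matches */
}

/* The helper Jan proposes: special-case DOMID_XEN in the one or few
 * call sites that need it, not in the generic lookup function. */
static int lock_remote_or_xen_domain(domid_t dom, struct domain **d)
{
    *d = (dom == DOMID_XEN) ? rcu_lock_domain(dom_xen)
                            : rcu_lock_domain_by_id(dom);
    return *d ? 0 : -3 /* -ESRCH */;
}
```

The design point is that generic lookup semantics stay unchanged for every existing caller, and the DOMID_XEN special case is visible exactly where it is needed.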


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 08:36:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGP7k-0003JI-29; Thu, 20 Feb 2014 08:36:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WGP7i-0003JC-6L
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:36:50 +0000
Received: from [85.158.139.211:28535] by server-13.bemta-5.messagelabs.com id
	87/26-18801-1AEB5035; Thu, 20 Feb 2014 08:36:49 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392885408!5102275!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12857 invoked from network); 20 Feb 2014 08:36:48 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 08:36:48 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:Content-Type;
	b=oY391q0FthoBACDor2YhVTSxsfhwoH/vn3AC99FOBNxdvGfGwkYMaec7
	pEFnsCQHZ5KnlczkNPo+E9vpDZyM1NF4zzNzSQN2RqkBFVL9//+ujYK/q
	/IxMDzOuUSpIS6kPuzeHTw3FSnToQUmW1TvWxWhS94NSs1+SyhCqTB/gQ
	QB00KqrJDVtjy4j434czbuZ7V9gOW9cDJOTwNs5mY7nrWohoKVdOBPQ/E
	ddBnd2T7Vfkr0pioNkyem1d4OYUu+;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392885409; x=1424421409;
	h=message-id:date:from:mime-version:to:cc:subject;
	bh=/VGAIKi2J2G2UdJtWCN5VE33u6l97z2ppfGCOk/XQj4=;
	b=nfeppII/YBIfPfKw5/CTI9Fae/yJMqYlYWiHwOCXCY7zua91DT7pBQUB
	RKZ9NonTLrnvAKKjnPDT0drEpirc2/giBxg+DiHMvuhGpJp0i34aJSdcL
	XMYHxl+tzRA752wKYG072Fjfrl2sKjspuqeF27tiuW/qeCwh0k9H4iEqW
	T1XMRJcYKMnRfROJZrLRovp5ZWzoPOKlzNqALNn7Uq7u9ZLDpt/Kp7d4U
	PQmb3jsIZ+VMU0tB9uPeRMi45v8hG;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,511,1389740400"; d="scan'208";a="186126073"
Received: from unknown (HELO abgdgate60u.abg.fsc.net) ([172.25.138.90])
	by dgate10u.abg.fsc.net with ESMTP; 20 Feb 2014 09:36:48 +0100
X-IronPort-AV: E=Sophos;i="4.97,511,1389740400"; d="scan'208";a="80443004"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdgate60u.abg.fsc.net with ESMTP; 20 Feb 2014 09:36:47 +0100
Message-ID: <5305BE9F.2090600@ts.fujitsu.com>
Date: Thu, 20 Feb 2014 09:36:47 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: xen-devel <xen-devel@lists.xenproject.org>, eddie.dong@intel.com, 
	jun.nakajima@intel.com, yang.z.zhang@intel.com
Content-Type: multipart/mixed; boundary="------------020908080302050204050106"
Cc: Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] Single step in HVM domU on Intel machine may see wrong
	DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------020908080302050204050106
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

I think I've found a bug in the debug trap handling of the Xen hypervisor for
an HVM domU that uses single stepping:

Debug registers are restored on a vCPU switch only if DR7 has any debug events
activated or if the debug registers are marked as in use by the domU. This
leads to problems if the domU uses single stepping and a vCPU switch occurs
between the single-step trap and the guest's read of DR6: the DR6 contents
(the single-step indicator) are lost in that case.

Jan suggested intercepting the debug trap in the hypervisor and marking the
debug registers as in use by the domU, which enables saving and restoring them
across a context switch. I used the attached patch (applies to Xen 4.2.3) to
verify this solution, and it worked: without the patch a test reproduced the
bug about once every 3 hours; with the patch the same test ran for more than
12 hours without a problem.

Obviously the patch isn't the final one, as I deactivated the "monitor trap
flag" feature to avoid any strange dependencies. Jan wanted someone from the
VMX folks to put together a proper fix, to avoid overlooking any corner cases.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

--------------020908080302050204050106
Content-Type: text/x-patch;
 name="single-step.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="single-step.patch"

--- xen-4.2.3-testing.orig/xen/include/asm-x86/hvm/hvm.h	2014-02-14 19:05:59.000000000 +0100
+++ xen-4.2.3-testing/xen/include/asm-x86/hvm/hvm.h	2014-02-17 07:43:05.000000000 +0100
@@ -374,7 +374,8 @@ static inline int hvm_do_pmu_interrupt(s
         (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))))
 
 /* These exceptions must always be intercepted. */
-#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op))
+#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op) |\
+	(1 << TRAP_debug))
 
 /*
  * x86 event types. This enumeration is valid for:
--- xen-4.2.3-testing.orig/xen/arch/x86/hvm/vmx/vmcs.c	2014-02-17 07:48:43.000000000 +0100
+++ xen-4.2.3-testing/xen/arch/x86/hvm/vmx/vmcs.c	2014-02-17 10:16:25.000000000 +0100
@@ -168,7 +168,7 @@ static int vmx_init_vmcs_config(void)
            CPU_BASED_RDTSC_EXITING);
     opt = (CPU_BASED_ACTIVATE_MSR_BITMAP |
            CPU_BASED_TPR_SHADOW |
-           CPU_BASED_MONITOR_TRAP_FLAG |
+           /* CPU_BASED_MONITOR_TRAP_FLAG | */
            CPU_BASED_ACTIVATE_SECONDARY_CONTROLS);
     _vmx_cpu_based_exec_control = adjust_vmx_controls(
         "CPU-Based Exec Control", min, opt,
--- xen-4.2.3-testing.orig/xen/arch/x86/hvm/vmx/vmx.c	2014-02-18 08:04:23.000000000 +0100
+++ xen-4.2.3-testing/xen/arch/x86/hvm/vmx/vmx.c	2014-02-18 10:45:42.000000000 +0100
@@ -2646,7 +2646,11 @@ void vmx_vmexit_handler(struct cpu_user_
             HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
             write_debugreg(6, exit_qualification | 0xffff0ff0);
             if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
-                goto exit_and_crash;
+            {
+                __restore_debug_registers(v);
+                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
+                break;
+            }
             domain_pause_for_debugger();
             break;
         case TRAP_int3: 

--------------020908080302050204050106
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------020908080302050204050106--


From xen-devel-bounces@lists.xen.org Thu Feb 20 08:58:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 08:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGPSb-0003gr-S4; Thu, 20 Feb 2014 08:58:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGPSa-0003gm-62
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 08:58:24 +0000
Received: from [85.158.137.68:32710] by server-14.bemta-3.messagelabs.com id
	83/96-08196-FA3C5035; Thu, 20 Feb 2014 08:58:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392886702!3076639!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10739 invoked from network); 20 Feb 2014 08:58:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 08:58:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 20 Feb 2014 08:58:22 +0000
Message-Id: <5305D1BC020000780011DEF7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 20 Feb 2014 08:58:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <5304CA45.7050406@terremark.com>
In-Reply-To: <5304CA45.7050406@terremark.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Questions on vpic, vlapic,
 vioapic and line 0 (aka timer)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 16:14, Don Slutz <dslutz@verizon.com> wrote:
> For some TBD reason (very very rarely) the routine timer_irq_works() in linux 
> is reporting that the timer IRQ does not work:
> 
> ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
> ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> ...trying to set up timer (IRQ0) through the 8259A ...
> ..... (found apic 0 pin 2) ...
> ....... failed.
> ...trying to set up timer as Virtual Wire IRQ...
> ..... failed.
> ...trying to set up timer as ExtINT IRQ...
> 
> hangs and xen's console is spewing:
> 
> vioapic.c:352:d1 Unsupported delivery mode 7
> vioapic.c:352:d1 Unsupported delivery mode 7
> vioapic.c:352:d1 Unsupported delivery mode 7
> ...
> 
> I have determined that the lines (in linux):
> 
> #ifdef CONFIG_X86_IO_APIC
>          no_timer_check = 1;
> #endif
> 
> that KVM has are missing for Xen.  The simple workaround is to specify 
> "no_timer_check" on the kernel command args.
> 
> The reason for the email is that I have found the routine 
> __vlapic_accept_pic_intr:
> 
>      /* We deliver 8259 interrupts to the appropriate CPU as follows. */
>      return ((/* IOAPIC pin0 is unmasked and routing to this LAPIC? */
>               ((redir0.fields.delivery_mode == dest_ExtINT) &&
>                !redir0.fields.mask &&
>                redir0.fields.dest_id == VLAPIC_ID(vlapic) &&
>                !vlapic_disabled(vlapic)) ||
>               /* LAPIC has LVT0 unmasked for ExtInts? */
>               ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_EXTINT) ||
>               /* LAPIC is fully disabled? */
>               vlapic_hw_disabled(vlapic)));
> }
> 
> 
> Which looks to imply that the vioapic supports "delivery mode 7" 
> (dest_ExtINT), but this case is missing (the message logged above).

Not really - the code above suggests the LAPIC emulation is
prepared for the IOAPIC emulation to support that mode, not
that the IOAPIC emulation actually supports it. At present only
LAPIC LVT0 really supports that delivery mode.

> Also linux and xen both set "LAPIC has LVT0" to APIC_DM_FIXED for "Virtual 
> Wire IRQ" usage, but this code only allows for APIC_DM_EXTINT.  I have been 
> able to get "Virtual Wire IRQ" usage to work by adding:
> 
>               /* LAPIC has LVT0 unmasked for Fixed? */
>               ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_FIXED) ||
> 
> It is not clear to me if it should be added or just changed.

Adding this to __vlapic_accept_pic_intr() would be contrary to
the purpose of the function afaict - fixed delivery mode is
unrelated to delivering PIC interrupts.

> This code looks to state that:
> 
> ...trying to set up timer (IRQ0) through the 8259A ...
> 
> is expected to fail.  Is this by design?  (QEMU does allow this case.)

But in the end I think you're barking up the wrong tree: all of this
code would be of no interest at all if Linux didn't, for some reason, go
into that fallback code. And since that fallback code exists mainly
(if not exclusively) to deal with (half-)broken hardware/firmware, we
should really make sure our emulation isn't broken (i.e. that the
fallback is never exercised). So finding the "TBD reason" is what is
really needed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:04:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGPYe-000455-GS; Thu, 20 Feb 2014 09:04:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGPYc-00044v-Tc
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 09:04:39 +0000
Received: from [193.109.254.147:15716] by server-12.bemta-14.messagelabs.com
	id 2C/51-17220-625C5035; Thu, 20 Feb 2014 09:04:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392887076!1917205!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8644 invoked from network); 20 Feb 2014 09:04:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:04:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="104243925"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 09:04:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 04:04:35 -0500
Message-ID: <1392887074.22494.7.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 20 Feb 2014 09:04:34 +0000
In-Reply-To: <5304F058.6090503@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
	<1392808847.23084.138.camel@kazak.uk.xensource.com>
	<5304F058.6090503@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 17:56 +0000, Julien Grall wrote:
> Hi Ian,
> 
> On 02/19/2014 11:20 AM, Ian Campbell wrote:
> > On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> >> Now that the console supports earlyprintk, we can get rid of the
> >> early_printk calls.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Thanks, due to some conflicts I have rebased this patch on top of my patch
> "xen/serial: Don't leak memory mapping if the serial initialization has
> failed".
> 
> I will resend this series later.

Thanks.

> > Now all we need is a way to make it a runtime option :-)
> 
> I let you write a device tree parser in assembly ;).

;-)

I was actually thinking more along the lines of a .word at a defined
offset which you could hex-edit to a specific value to activate a
particular flavour of early printk handling. That would be sufficient,
e.g. for osstest, to activate the appropriate support for the specific
platform.

TBH I'm not sure a dumb DTB parser which just runs through looking for a
single property on a particular node would be all that hard, but I'm not
all that keen to find out for sure ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 17:56 +0000, Julien Grall wrote:
> Hi Ian,
> 
> On 02/19/2014 11:20 AM, Ian Campbell wrote:
> > On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
> >> Now that the console supports earlyprintk, we can get a rid of
> >> early_printk call.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Thanks, due to some conflicts I have rebased this patch on top my patch
> "xen/serial: Don't leak memory mapping if the serial initialization has
> failed".
> 
> I will resend this series later.

Thanks.

> > Now all we need is a way to make it a runtime option :-)
> 
> I let you write a device tree parser in assembly ;).

;-)

I was actually thinking more along the lines of a .word at a defined
offset which you could hex edit to a specific value to activate a
particular flavour of early printk handling. That would be sufficient
e.g. for osstest to activate the appropriate stuff for the specific
platform.

TBH I'm not sure a dumb DTB parser which just runs through looking for a
single property on a particular node would be all that hard, but I'm not
all that keen to find out for sure ;-)

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:09:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGPdK-0004JC-7f; Thu, 20 Feb 2014 09:09:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WGPdI-0004J3-3e
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 09:09:28 +0000
Received: from [193.109.254.147:30756] by server-11.bemta-14.messagelabs.com
	id 9F/07-24604-746C5035; Thu, 20 Feb 2014 09:09:27 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392887362!328097!1
X-Originating-IP: [202.81.31.146]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NiA9PiAyMTM0Mzg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21564 invoked from network); 20 Feb 2014 09:09:26 -0000
Received: from e23smtp04.au.ibm.com (HELO e23smtp04.au.ibm.com) (202.81.31.146)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 09:09:26 -0000
Received: from /spool/local
	by e23smtp04.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Thu, 20 Feb 2014 19:09:17 +1000
Received: from d23dlp03.au.ibm.com (202.81.31.214)
	by e23smtp04.au.ibm.com (202.81.31.210) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Thu, 20 Feb 2014 19:09:15 +1000
Received: from d23relay03.au.ibm.com (d23relay03.au.ibm.com [9.190.235.21])
	by d23dlp03.au.ibm.com (Postfix) with ESMTP id CB4283578058
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 20:09:13 +1100 (EST)
Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139])
	by d23relay03.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1K9908j655630
	for <xen-devel@lists.xenproject.org>; Thu, 20 Feb 2014 20:09:00 +1100
Received: from d23av04.au.ibm.com (localhost [127.0.0.1])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1K99CFS020064
	for <xen-devel@lists.xenproject.org>; Thu, 20 Feb 2014 20:09:13 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1K99CqB020061; Thu, 20 Feb 2014 20:09:12 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id C36B0A039D; Thu, 20 Feb 2014 20:09:11 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392804568.23084.112.camel@kazak.uk.xensource.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Thu, 20 Feb 2014 18:18:00 +1030
Message-ID: <8761oab4y7.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022009-9264-0000-0000-0000057EB096
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Daniel Kiper <daniel.kiper@oracle.com>,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell <Ian.Campbell@citrix.com> writes:
> On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>> For platforms using EPT, I don't think you want anything but guest
>> addresses, do you?
>
> No, the arguments for preventing unfettered access by backends to
> frontend RAM applies to EPT as well.

I can see how you'd parse my sentence that way, I think, but the two
are orthogonal.

AFAICT your grant-table access restrictions are page-granularity, though
you don't use page-aligned data (e.g. in xen-netfront).  This level of
access control is possible using the virtio ring too, but no one has
implemented such a thing AFAIK.

Hope that clarifies,
Rusty.
PS.  Random aside: I greatly enjoyed your blog post on 'Xen on ARM and
     the Device Tree vs. ACPI debate'.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:15:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:15:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGPia-0004jn-Nu; Thu, 20 Feb 2014 09:14:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGPiZ-0004jf-OZ
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 09:14:55 +0000
Received: from [85.158.139.211:51345] by server-14.bemta-5.messagelabs.com id
	4E/DB-27598-F87C5035; Thu, 20 Feb 2014 09:14:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392887693!5076783!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18724 invoked from network); 20 Feb 2014 09:14:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:14:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="102552259"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 09:14:52 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 04:14:51 -0500
Message-ID: <1392887691.22494.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Thu, 20 Feb 2014 09:14:51 +0000
In-Reply-To: <20140219185621.GC12300@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Saurabh Mishra <saurabh.globe@gmail.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 13:56 -0500, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 19, 2014 at 10:50:12AM -0800, Saurabh Mishra wrote:
> > Hi,
> > 
> > I'd like to know what should be configured in the HVM guest such that it
> > accepts 'xm/xl shutdown' graceful shutdown signal.
> > 
> > 'xm/xl shutdown' works great when I have xen_platform_pci=1 (PV on HVM)
> > however whenever I disable xen_platform_pci=0, it does not work.
> 
> Right. That is expected.

With xl, in the absence of PV drivers, you can use "xl shutdown -F" to
fall back to an ACPI power event to shut things down, assuming the
guest is configured to respond to that.

Although if "xl trigger power" isn't working it sounds like this might
not work either.

> > I also tried 'xl/xm trigger <vm> power' and this too does not work if
> > xen_platform_pci=0 is set. We do have lot of PCI pass-through devices.
> 
> Oh, that looks to be a bug then.

Remember that this could also be a guest-side bug, since the guest has
to be configured to respond to the event. I'd have thought that most
modern Linux distros and versions of Windows would do *something* by
default (either shut down, reboot or, in the case of Windows, perhaps
hibernate).

I wouldn't have expected xen_platform_pci to make any difference to the
functionality of this option, but it's not clear if that is implied by
the above.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:16:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGPjX-0004mQ-5U; Thu, 20 Feb 2014 09:15:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGPjV-0004mL-Do
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 09:15:53 +0000
Received: from [193.109.254.147:20728] by server-7.bemta-14.messagelabs.com id
	D0/E0-23424-8C7C5035; Thu, 20 Feb 2014 09:15:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392887750!1920747!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6901 invoked from network); 20 Feb 2014 09:15:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:15:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="102552719"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 09:15:50 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 04:15:49 -0500
Message-ID: <1392887748.22494.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Date: Thu, 20 Feb 2014 09:15:48 +0000
In-Reply-To: <CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:
> config or driver do I need to enable in WR HVM guest such that it
> accepts 'xl/xm trigger <vm> power'?

Support for ACPI power events.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:33:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQ0c-0005Rq-Rg; Thu, 20 Feb 2014 09:33:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGQ0b-0005Rl-Qx
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 09:33:34 +0000
Received: from [193.109.254.147:6760] by server-1.bemta-14.messagelabs.com id
	BC/E7-15438-DEBC5035; Thu, 20 Feb 2014 09:33:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392888811!1865271!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16440 invoked from network); 20 Feb 2014 09:33:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:33:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="102556648"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 09:33:30 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 04:33:30 -0500
Message-ID: <1392888808.22494.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Thu, 20 Feb 2014 09:33:28 +0000
In-Reply-To: <53050BF5.1060009@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392743214.23084.38.camel@kazak.uk.xensource.com>
	<5303C44D.4070500@citrix.com>
	<1392804319.23084.109.camel@kazak.uk.xensource.com>
	<53050BF5.1060009@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-19 at 19:54 +0000, Zoltan Kiss wrote:
> On 19/02/14 10:05, Ian Campbell wrote:
> > On Tue, 2014-02-18 at 20:36 +0000, Zoltan Kiss wrote:
> >> On 18/02/14 17:06, Ian Campbell wrote:
> >>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> >>>> This patch contains the new definitions necessary for grant mapping.
> >>>
> >>> Is this just adding a bunch of (currently) unused functions? That's a
> >>> slightly odd way to structure a series. They don't seem to be "generic
> >>> helpers" or anything so it would be more normal to introduce these as
> >>> they get used -- it's a bit hard to review them out of context.
> >> I've created two patches because they are quite huge even now,
> >> separately. Together they would be a ~500 line change. That was the best
> >> split I could figure out while keeping bisect working. But as I
> >> wrote in the first email, I welcome other suggestions. If you and Wei
> >> prefer these two patches as one big one, I'll merge them in the next version.
> >
> > I suppose it is hard to split a change like this up in a sensible way,
> > but it is rather hard to review something which is split in two parts
> > sensibly.
> >
> > Is the combined patch too large to fit on the lists?
> Well, it's ca. 30 kb, ~500 lines changed. I guess it's possible. It's up 
> to you and Wei; if you would like them merged, I can do that.

30kb doesn't sound too bad to me.

Patches #1 and #2 are, respectively:

 drivers/net/xen-netback/common.h    |   30 ++++++-
 drivers/net/xen-netback/interface.c |    1 +
 drivers/net/xen-netback/netback.c   |  161 +++++++++++++++++++++++++++++++++++
 3 files changed, 191 insertions(+), 1 deletion(-)

 drivers/net/xen-netback/interface.c |   63 ++++++++-
 drivers/net/xen-netback/netback.c   |  254 ++++++++++++++---------------------
 2 files changed, 160 insertions(+), 157 deletions(-)

I don't think combining those would be terrible, although I'm willing to
be proven wrong ;-)

> >>>
> >>>> +		vif->dealloc_prod++;
> >>>
> >>> What happens if the dealloc ring becomes full, will this wrap and cause
> >>> havoc?
> >> Nope, if the dealloc ring is full, the value of the last increment won't
> >> be used to index the dealloc ring again until some space is made available.
> >
> > I don't follow -- what makes this the case?
> The dealloc ring has the same size as the pending ring, and you can only 
> add slots to it which are already on the pending ring (the pending_idx 
> comes from ubuf->desc), as you are essentially freeing up slots on the 
> pending ring here.
> So if the dealloc ring becomes full, vif->dealloc_prod - 
> vif->dealloc_cons will be 256, which would be bad. But the while loop 
> should exit here, as we shouldn't have any more pending slots. And if we 
> dealloc and create free pending slots in dealloc_action, dealloc_cons 
> will also advance.

OK, so this is limited by the size of the pending array, makes sense,
assuming that array is itself correctly guarded...
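[Editor's note: a minimal sketch of the free-running index scheme argued above. The struct, field names, and MAX_PENDING_REQS value follow the thread's discussion but are illustrative only, not the actual xen-netback code: dealloc_prod and dealloc_cons increment without wrapping, the ring slot is taken modulo the ring size, and occupancy is the unsigned difference prod - cons.]

```c
#include <assert.h>
#include <stdint.h>

#define MAX_PENDING_REQS 256    /* ring size assumed from the thread */

struct vif_sketch {
    uint16_t dealloc_prod;      /* advanced by the zerocopy callback */
    uint16_t dealloc_cons;      /* advanced by the dealloc thread    */
    uint16_t dealloc_ring[MAX_PENDING_REQS];
};

/* Slots currently queued: unsigned 16-bit subtraction handles counter
 * wraparound correctly as long as occupancy never exceeds the ring size. */
static uint16_t dealloc_queued(const struct vif_sketch *vif)
{
    return (uint16_t)(vif->dealloc_prod - vif->dealloc_cons);
}

static void dealloc_enqueue(struct vif_sketch *vif, uint16_t pending_idx)
{
    /* The invariant argued in the thread: a slot is only enqueued if it
     * is already on the pending ring, so the dealloc ring cannot
     * overflow; a BUG_ON in the real code would assert this. */
    assert(dealloc_queued(vif) < MAX_PENDING_REQS);
    vif->dealloc_ring[vif->dealloc_prod % MAX_PENDING_REQS] = pending_idx;
    vif->dealloc_prod++;    /* real code needs a barrier before this */
}
```

[The modulo-on-use convention means the counters never need explicit wrap handling; even when the 16-bit counters overflow, prod - cons still yields the correct occupancy.]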

> >> Of course if something broke and we have more pending slots than tx ring
> >> or dealloc slots then it can happen. Do you suggest a
> >> BUG_ON(vif->dealloc_prod - vif->dealloc_cons >= MAX_PENDING_REQS)?
> >
> > A
> >           BUG_ON(space in dealloc ring < number of slots needed to dealloc this skb)
> > would seem to be the right thing, if that really is the invariant the
> > code is supposed to be implementing.
> Not exactly, it means BUG_ON(number of slots to dealloc > 
> MAX_PENDING_REQS), and it should be at the end of the loop, without '='.

OK.

> >
> >>>
> >>>> +		}
> >>>> +
> >>>> +	} while (dp != vif->dealloc_prod);
> >>>> +
> >>>> +	vif->dealloc_cons = dc;
> >>>
> >>> No barrier here?
> >> dealloc_cons is only used in the dealloc thread. dealloc_prod is used by
> >> both the callback and the thread, which is why we need the mb() earlier.
> >> Btw. this function comes from classic's net_tx_action_dealloc.
> >
> > Is this code close enough to that code architecturally that you can
> > infer correctness due to that though?
> Nope, I've just mentioned it because knowing that old code can help in 
> understanding this new one, as their logic is very similar in some places, like here.
> 
> > So long as you have considered the barrier semantics in the context of
> > the current code and you think it is correct to not have one here then
> > I'm ok. But if you have just assumed it is OK because some older code
> > didn't have it then I'll have to ask you to consider it again...
> Nope, as I mentioned above, dealloc_cons is only accessed in that function, 
> from the same thread. dealloc_prod is written in the callback and read 
> here, which is why we need the barrier there.

OK.

Although this may no longer be true if you added some BUG_ONs as
discussed above?
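[Editor's note: the ownership argument above, sketched with C11 atomics in place of the kernel's mb()/wmb(). All names are illustrative, not the real driver's: dealloc_prod is shared between the zerocopy callback (producer) and the dealloc thread (consumer), so publishing it needs release/acquire ordering, while dealloc_cons is touched only by the dealloc thread and needs no barrier at all.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 256    /* illustrative ring size */

struct ring_sketch {
    _Atomic uint16_t prod;   /* shared: callback writes, thread reads */
    uint16_t cons;           /* private to the dealloc thread */
    uint16_t slots[RING_SIZE];
};

/* Producer side (zerocopy callback). */
static void ring_publish(struct ring_sketch *r, uint16_t idx)
{
    uint16_t p = atomic_load_explicit(&r->prod, memory_order_relaxed);
    r->slots[p % RING_SIZE] = idx;
    /* Release store: the slot write above becomes visible before the
     * new producer index does -- the role of the barrier in the thread. */
    atomic_store_explicit(&r->prod, (uint16_t)(p + 1), memory_order_release);
}

/* Consumer side (dealloc thread): drain everything published so far. */
static int ring_drain(struct ring_sketch *r, uint16_t *out, int max)
{
    uint16_t p = atomic_load_explicit(&r->prod, memory_order_acquire);
    int n = 0;
    while (r->cons != p && n < max)
        out[n++] = r->slots[r->cons++ % RING_SIZE];
    /* No barrier when updating cons: no other thread ever reads it. */
    return n;
}
```

[This is exactly why the code under review can set dealloc_cons with a plain store: single-writer, single-reader data needs ordering only on the shared index, not the private one.]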

> 
> >
> >>>> +				netdev_err(vif->dev,
> >>>> +					   " host_addr: %llx handle: %x status: %d\n",
> >>>> +					   gop[i].host_addr,
> >>>> +					   gop[i].handle,
> >>>> +					   gop[i].status);
> >>>> +			}
> >>>> +			BUG();
> >>>> +		}
> >>>> +	}
> >>>> +
> >>>> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
> >>>> +		xenvif_idx_release(vif, pending_idx_release[i],
> >>>> +				   XEN_NETIF_RSP_OKAY);
> >>>> +}
> >>>> +
> >>>> +
> >>>>    /* Called after netfront has transmitted */
> >>>>    int xenvif_tx_action(struct xenvif *vif, int budget)
> >>>>    {
> >>>> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> >>>>    	vif->mmap_pages[pending_idx] = NULL;
> >>>>    }
> >>>>
> >>>> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
> >>>
> >>> This is a single shot version of the batched xenvif_tx_dealloc_action
> >>> version? Why not just enqueue the idx to be unmapped later?
> >> This is called only from the NAPI instance. Using the dealloc ring
> >> requires synchronization with the callback, which can increase lock
> >> contention. On the other hand, if the guest sends small packets
> >> (<PAGE_SIZE), the TLB flushing can cause a performance penalty.
> >
> > Right. When/How often is this called from the NAPI instance?
> When a grant mapping error is detected in xenvif_tx_check_gop, and when a 
> packet smaller than PKT_PROT_LEN is sent. The latter will be removed once 
> we grant copy such packets entirely.
> 
> > Is the locking contention from this case so severe that it outweighs
> > the benefits of batching the unmaps? That would surprise me. After all,
> > the locking contention is there for the zerocopy_callback case too.
> >
> >> The above-mentioned upcoming patch, which gntcopies the header, can
> >> prevent that.
> >
> > So this is only called when doing the pull-up to the linear area?
> Yes, as mentioned above.

I'm not sure why you don't just enqueue the dealloc with the other
normal ones though.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:33:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQ0j-0005Sc-DY; Thu, 20 Feb 2014 09:33:41 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGQ0i-0005ST-As
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 09:33:40 +0000
Received: from [193.109.254.147:64638] by server-15.bemta-14.messagelabs.com
	id 67/C0-10839-3FBC5035; Thu, 20 Feb 2014 09:33:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392888817!5592587!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20002 invoked from network); 20 Feb 2014 09:33:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:33:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="102556658"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 09:33:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 04:33:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGQ0e-0008FF-Ft;
	Thu, 20 Feb 2014 09:33:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGQ0e-0007DL-9w;
	Thu, 20 Feb 2014 09:33:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25148-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 09:33:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25148 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25148/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
 test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
 test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24865

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xend-qemut-winxpsp3  7 windows-install  fail like 24917-bisect
 test-amd64-i386-xend-winxpsp3  7 windows-install        fail like 25060-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 14 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass

version targeted for testing:
 xen                  6d83fd0a975731653efb0c470a4eeed6a44b99cd
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           fail    
 test-i386-i386-pv                                            fail    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6d83fd0a975731653efb0c470a4eeed6a44b99cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Feb 14 16:24:39 2014 +0100

    update Xen version to 4.2.4
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:38:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQ5P-0005jN-8m; Thu, 20 Feb 2014 09:38:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGPOS-0003gT-CN
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:54:11 +0000
Received: from [85.158.139.211:23882] by server-5.bemta-5.messagelabs.com id
	B2/51-32749-FA2C5035; Thu, 20 Feb 2014 08:54:07 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392886444!5086804!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18710 invoked from network); 20 Feb 2014 08:54:04 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Feb 2014 08:54:04 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:49426 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGPNJ-0008W3-O2; Thu, 20 Feb 2014 09:52:59 +0100
Date: Thu, 20 Feb 2014 09:53:59 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1142136480.20140220095359@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140124174806.GA15571@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
	<20140124174806.GA15571@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------09910D0C003FD46CC"
X-Mailman-Approved-At: Thu, 20 Feb 2014 09:38:29 +0000
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------09910D0C003FD46CC
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Friday, January 24, 2014, 6:48:06 PM, you wrote:

> On Fri, Jan 24, 2014 at 02:36:02PM +0100, Sander Eikelenboom wrote:
>> 
>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>> 
>> >> > Wow. You just walked into a pile of bugs, didn't you? And on Friday
>> >> > nonetheless.
>> >> 
>> >> As usual ;-)
>> 
>> > Ha!
>> > ..snip..
>> >> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> >> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> >> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> >> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> >> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> >> 
>> >> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>> >> > I totally forgot about it !
>> >> 
>> >> Got a link to that patchset ?
>> 
>> > https://lkml.org/lkml/2013/12/13/315
>> 
>> >> I at least could give it a spin .. you never know when fortune is on your side :-)
>> 
>> > It is also at this git tree:
>> 
>> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>> > branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>> > want to merge it in your current Linus tree.
>> 
>> > Thank you!
>> 
>> 
>> Hi Konrad,
>> 
>> Just got time to test this some more. Merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
>> seems to help with my problem; I'm now capable of using:
>> - xl pci-detach
>> - xl pci-assignable-remove
>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind
>> 
>> to remove a PCI device from a running HVM guest and rebind it to a driver in dom0 without those nasty stacktraces :-)
>> So the first 4 commits seem to be an improvement.
>> 
>> That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to give trouble of its own.

> Could you email me your lspci output and also which devices you move/switch etc?

Hi Konrad,

I have now found some time to figure out what goes wrong with xl pci-detach and xl pci-assignable-remove, and I have been
able to narrow it down a bit:

The problem only occurs when you:
- pass through 2 (or more?) PCI devices to a guest, and
- remove only 1 of those devices with "xl pci-detach" followed by "xl pci-assignable-remove";
- when you first detach both devices with "xl pci-detach" before doing the "xl pci-assignable-remove", it works OK.
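For clarity, the failing vs. working sequences described above can be sketched as xl invocations (a sketch only; <domU> is a placeholder for the guest name, and the BDFs are the ones from my setup):

```
# Failing: interleave detach and assignable-remove per device
xl pci-detach <domU> 02:00.0
xl pci-assignable-remove 02:00.0    # <- problem occurs here

# Working: detach all passed-through devices first
xl pci-detach <domU> 02:00.0
xl pci-detach <domU> 00:19.0
xl pci-assignable-remove 02:00.0
xl pci-assignable-remove 00:19.0
```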

In my case I'm passing through 2 devices (02:00.0 and 00:19.0).

I added some printk's, and what I found out is that:
- after doing the pci-detach of 02:00.0, it doesn't call pcistub_put_pci_dev for that device ...
- but when I subsequently pci-detach the second (and last) device, 00:19.0, it does call it for both 02:00.0 and 00:19.0 ...
- so somehow the call for the first detached device gets deferred .. but since these are different devices, and not functions
  of the same device, I don't see any reason for it to wait until all other devices have been detached ...


I tried to capture the console output, but somehow that didn't work out, so I attached a screenshot of what happens when:
- doing a xl pci-list for the guest
- doing a xl pci-assignable-list

- doing the xl pci-detach for 02:00.0

- doing a xl pci-list for the guest
- doing a xl pci-assignable-list

- waiting some time ...

- doing the xl pci-detach for 00:19.0

- doing a xl pci-list for the guest
- doing a xl pci-assignable-list

There you can see this strange sequence of events :-)

But I haven't been able to spot the culprit.

attached: screenshot.jpg

--
Sander



> Thanks!
>> 
>> --
>> Sander
>> 

------------09910D0C003FD46CC
Content-Type: image/jpeg;
 name="screenshot.jpg"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="screenshot.jpg"

[base64-encoded data of the screenshot.jpg attachment omitted]
Rx7goa+ls3r1J88IG66YyqtXuSBX/HAUcZIIrkpyoSe+BRIS6kKsgp28B5Yl9g6dfXovc9BX
z88TWjZVUk6aauhxLQmmliGp5HrhAFBUVb7hUgDCUjNXSa0qPCn4/TAEeQJqevpPeufT8cSS
JQZAaFP5k6VwaSJzIoKnImuJAdD6tL5eAyNPxws00WkIEQHMEfhhJn0k0atQAOuX1xloFw1E
BGSkZueppgDM7uGqNYGljVadaDG5Qq9Tef7e2HUt+N3bWm6QXIDOkbqzqnVgO2OnNw/0+Htx
3Tgu72kDyz3Vs6A1j0ilT2LUqfpjHrnMBYbrsu0StBJcPuO3zEFpEj0yIemXj9MFlaxbtyTa
I7R4Y5mmVkpGWUgAnor1/wAsGFBs+5bJaRa0uZrSfq0Mn82Fj5DtjPTUirvdxtV3Jbi2iT0s
S6xZK+vP0+FcakOG3262u+RJbWV0uQAs1q46L/tcek0OGMXxSRlV6ZZVJHc9MaYWWznbv1CG
99wqGDCSEiqEZ5g9QcO3E1O87tsFzaQzbfeAXMDhikyMAT00+WOc1nf2Ue6cfuxbXbXEtrfW
gNFlUSIWJq2kj8vgMN1rnlbbJu8F5fgQyQGdAVMM7+2k4PSj5aWxjG3Py/bIxG05il26Yikk
bMrwuR3DL0OKXFri43unHobN4rqeaCUAByUquqlDShHp8sdKMpW267fYGW0aX37dmZ0mRSpB
YZAg9PHBYbHBa7vELGW3l+/1BJB0NemoYsZzElvvm1Ptn9PvLFHuYl/kXiVQihr6qZNhwVSS
OhlrXM1NfLCztOoVmNTl0r4Y1Kvqv9n3W1t7F7SWtCHKuM6E54x1G+eYksd2tYrJrShqyPpo
KjVTqfDPAbzBWu9oluqOpEiigPY+NfDCzZHbDuWxLcLfa2ExGh7duv1Vhl+3Gbavrip3u+hu
78NbklCoBLKA1KYYnPZXgtrpZjGXUDSUHU1+uWNJPu12JrhZIGpEQBQmh/dinJd2y75axWxs
rkywo6lUuIqEof4iDgswalfcba1vI5oLk3cNKMCNOVM6jPAbUjbjsTXP6+ynlgu2XQ8Mygxt
TKgYYqIpd1uFurn9Sa5gdT088MrNoNqnt4Zw9xEJImBEiGtc+64LW57F1cxcYuFb2L6S3qAw
WWNiQQMwSP8ADFKPq5rTcYYY5LO4b3rciglhGeZ/halRiaQX91/J/TpIWibPLoc8sUjNVrB6
mn416/hjcZqx2OTbYbgSXTukgNY2rVW8RTtjPZ5uu3kcm3XFxHNaTiRVUGWNlIIp1FfDBye/
HTDfcen2eS0d5rdnTQCVBoa1zIOYw/ln66p9u3GLbrgs0KXcPR45BUMK9q5rh6awG4T2U157
1rCbeBqEwVrpPehPbDyrV9FebHNsptWdopCCqMRqAPX1eOMn8OS0v7B7E2N8zBR/4bhMwCpq
MvPFjErl3K8hdBDHP76pnGehH4HDiveJbvdLK9sYxNZpFeoVX9RF6A6gUPuKO/ngxqT9IrTc
rcWBsrxnMS5o/XTnXMf6Yvrpc+4XyzILcHVEpDLN3BHgcGCpZr+0msBG9UukFFcZFgO9cWFW
dQKeod6+OESO7bdw/TT6inoKkNTuOvXDi13yjaPXNHJ7izffFIpVh/2t/ngXw6o77ZrjbBaS
SSW8seUbfcHUdOgybCdU17BBHI3sXKzAdBnqX6k9cTOumfcI59rFs9S0emjDPUe5xK+qto+g
BqAAa98MNdm0S2guPbu0LQZ1C/eM+or4YrGXdcW20lGaC9Ysp/lqwoD9cGsW2Oa3vUls/wBF
eEgI1Y5Oq1r0K9sVM6R3dyrmOMyBlXqVPh/xlixvVrc3O0XGzNbJcgTqFKq4IpQ9/Ptgkqt1
V2G9XtoVRSJYw4ZI5PUooewxfVXp0cj3SPcZYpEABppYLlmMz+GCRWKZMm1rmBmB3yx01iT8
r60vrO82z9FeStDMjBkYCqsB2NOhxz+HWemfcYIp0iMgkjXJpFGfkadsajPSPbdwhs90kc1M
LV9S0yzqDivq/GKu+kEl3NIcw7atQ71PWnbApfEYQLRjkv5vEUzxLFzu25RX9hbujCOaFSss
RFVcmlCh8hilWOHb7pYy0U2pYXyLoKkVyrQ4rBQoHgujHFKHJNBKPtIOJfTWi22CDbnka6je
AXS0XUCUD+T9M/DFetX0k+FbsbWMd00ssjQ6SwYN0IJ7eeCmTHFuwjO4SvHIs0MhLRutQc/y
sD0OFcxXVBNK+jsfp2xNYVTQEjM/mIxA5BZaN0GYA74lS9I6ChPbwxMfBwDWh+lfHE3CpVzR
RkOv+uIUyjUc8iOh64QZ6EenOnXAuioVzPT7h9cQGGJFe5BxHUdKAHLLw8/HEt9MTnpoRWpG
JU5YABadsj0xHTaaKPAd/PEvk1CGoD6ievapxCnoVJqAQP8AHEkbV6U+h7YQRK0r2GZ+vngb
0jEASWOSjMfXEqdcnSpOk9O5xAzBdNTk3SuBrAVooJ7jt5YWL4JEUjpUgfswVqGY+oVyp9v1
xLUZQ1GoahU0Hf64WKcL0Y+nKun/AI74jhi5FQQD/CvTI/TAYbV6T5duw+mElHpADHMGtTiQ
WUdD0qDiQXiYimQpmG8MRMdKgqc18cRC1Mi4PgSMssQpmIOWYFMq+GJzokKkjsSe/l4YmoEa
a1A6dTQ9fHBTCIDNWtBqH7aYDCajAgjMdD4Z50xQhcMPUB50HfCzQMWWppUdgOpwiHFAoLDL
sBiaOAuqrHTWoHniRmcAA/m6jLPGUcs5CkDI9a4jz6ErHUDoK0bLDETx+HfIVzyOJHdYwCSD
QACnkoxDDIy6KjpT9uA4QKjsfVnQUpX/AExIJoAv5T0FOmJUIoJGVwCRTp4fXEIKRfXn06Ee
PhiaC6CopkIx9vj9MQOrM1dQo9f3U74SbWdJDUNetK1p+OJaH0ClCM+4/wBMACQ49JApnprn
+36YEkCID1FSKVOf4YjiNqkGpzqCK+B88QMdYaldWqtKZ088SPGMvtFDl54EWha1yyzr4Dwx
IAGlgCM3yp2rhiAgfQBkG1Gg8MSCImBbUfU3h4npi1QRi0A+rpl+I7YkYqxepWlKHPpU+eBG
eoc0FT+ZvLEjFSAKFQudfx8cWqmbQ7UboF86VGFFpDeqtK9K+WClHpNQehUGvmP+WKAKBQFr
2JyJoDhJFGAEiiqjw7HAsIBdOkkigqPA1zxBHpRk0UNa5j/A4BpvbbVp6LWp/DEJPUxAFQT6
VpTw8sabRFyBUg5HPuMziVN6PAlTlllXzxIyx6dWdKAkDrniWGYF8wSNPVfEnEDDShVnA7np
mKeWBGBcGqAV6qCfHCDhSwPT1A5j+IYEKpW3YMPUcj4YiEqvoaoq2ZHWuAk7OrVJGkdB3xCm
1xnNWNXpUH/PFQJqaAoypWpHfPoMRFIBUhh2y7YsR/cBVVNemWVaDCQNGK+2HBrnqzrniZwL
NUlBQkCg8/wOJkTFtOkZ/hQimJqWniYtHnkhPhQ5eOJEDUmuQJoaZZYkjm0uC1CpH2suVcDN
EqgMA1dORP8AzxNYOQKGVK6gTWhOQr3GEhCkVAIBZsz1riFE4JTURQ9K98KMGz01rTMDEgqH
1amIOdAT4HrTBqkOS0dBqyJoB1AOJCCEhg+RNAW8fD8MJMgcAjr4+H4UxacNJUEKK1AzDYma
SD0lMkp0pma+OJHADLVqkHufLriSSkdCPzdMumeBaCjU8OtKZiv44NIWKhqfdTNsODcMgK6j
kDWrHuB2/HCodAVLGorX0jwPTE0c1DFCxqKZDpgWAYMFqp9VfUfEYtGC0CgNSoAqy1pXPEpC
DgFdNTpFQQB/ngUpwZMw1dNNWo9u/bCQgkLpyZa+HWmfTEtOyaogfDJx9c8SMVOZUZEUBp1A
xIv5hT0ih6EVzr9cIDkjVAyORYkd8C8O0hGmhApl+H4YgZVPuijZnoCP30woXthpACKMvSmC
ktSqldBarUI6V8cKRye4xKgVVDmx798QPJQsa+qMdVJp6vwxLTIoC1GZzyrln4YlDhloK005
Zf6DAtNIS5BIoT4Z0PSlMStIMC3rzPRT2A8MRSCgb0f+TPrlTFi0LkgrU1DEhgPLDACoB0jN
h4n8cRMwC+gA1egFB0/7cA0tSBaMCc/SVyNf9MWLTO1aVBPiepOJoQBqMhmcx+amAWFVkNDk
R54hglMUgLEmnQgYTKi/ku38sCta1GWIpFoqmp9RzOJGd0owAqq5jFggiI2p0I7sMq1+vXCg
EKSwUHWfDyxClG60BbJujA5U/ZgIUzkK09OemuIU0j+plCZhQAcNYkEyPo0o2S09J8xWte+B
1hVNR+UCgIPjiWDJ0yUqQzDqelcMFqBgzNn0+00yIHicLNoggVFVWpkKEeHQZ9sDWEoAOalo
yen+JwEtbK1WXU3QLXp5AYUUQZXZmNP4OmXb8cRMBqbWwy6E9Sfw8sWg8TFZiT9r0VSMwPpX
FqgJkVHWhp2NM8uueJaCZnoVJJJICAYkze7uM4zXVWrU6YgrKj+LtTEV7xWzWbebRDX2Xk0T
ODSinpke9csab7ejbjs9vZLG0UpaNySFpQrp/KRjTyWD263FxOqe6iM5pR8oz/tqPtxa7cfC
93njH6e6hhWGW0klpSOVgYqkdUcVy8ji2L2Ch4DyG5Qm3SOSVRT2GcK8lM/Rq9LftxnTO1c3
HN6VHk/TtG0RpLC4o6kDPLv+GF066Pa7BulykcscUYtJTQevOtaGo7YNYoU2K/F9+kkQwys1
EMhATyNfDzw2udjnutuvLGQJNGY3LVoaUp4j64dUhlhmlnWKMCSSTNUJ8/8AHG5VY0tr8f8A
I7uE3FgkN2F/+wjkAlHlobTnjF6MjnteHcguWlYwNF7DBJtZ0+2a/nHWmG2Nam3fjHLLIQi8
jeW0mU+3cRt7sLAHojV6+WM+E9twDkV5aC4sFjvq5exFKnvAgVIMZOrLvh+0Zuxz7dxndr65
MFP0s6PoeO4pGAR1VtXQ4LVddG5cM5DttzFHd2hRLhtMMqsrxknsWXofrglVdI4HyIxMsMKy
yJUtbK1JdPc6Wp+44ZRkVZ2bdRD7nsn20YrMctSleoK/cMOs/USbNetEsns+5HINQaM6svGg
xWtxcxcC3+axa6sIo79FFWWB1dwKVoUNGr5Yz9jWeMVxDMUkRkZahkcUZSOoINDjUus31PaQ
Ge5CNIsak/exqKeX1wqxqbbi+wT0i/Wz7ffldSLcRB4HyqKOh6NjOpnNx2+eyuGiuFpJGuRB
yP0OCHEEJR5PXlXof+mExopOJD+kDcre5SQEapIia5HppOD7M9QOzbJtt1bme5eUooPutBRm
ShpUqcz+GL7LEO78ftrWJbnbr9b61f8A2GKSPwDqe+NfYKbS1GAavc9qEf54sZxb2GzW00Md
xdNILaZfuiI1qPEBsj9DgawG8bM23kSwSrd2DLWKZARpHg1c1bywYdVQlLSIrEamzQE5Y3Ix
z9tW238a3LdVUWzRhj9qyto1Z/lr1wV10VzxXkFvuP8AT7m1ZJyC0QqCkqjOsbj0nBKOqOTh
fIf0puoLcXMUQLXEcTVliH+6M+o41OnPNR7Rxncdw0+yYxK41LFKwVmXxUHwwWnnmwO4cd3r
bty/R3ls0DHNXqGUp/ECtRhlg/p7XXLw7fktBewxJe2+nVIbeRXZR4lRmMEsal/ALLi+53cS
Tx+3HBLVVldgF1A/af4fxwWnqoL7je8WV29ndWzxzBA61ppZT3VhVT44p0HNZbbdXEkkSITJ
HUMjZVp2Fcaqld1hxfc9wcxxmOKTr7UjUZh0OntlgtV5ce6bNue13P6e9tmglWuioBBUdTqF
RjWs3j9OINQqHNakU8cGKbF/tex7Jc2YuLu6lTvJ7P8AM05kaSgGv8cUtjeItw2BYJ4/0V0t
/YyLqW4RSrf9pU9xhvp3PkpuOwXVu8m2XHuXECVlspcn/wC5GGR/7cY3BPQ7JsAv5mSaUwsK
hQR6gwHRlPTLxxK3HFdbfLa3rWr1Z0NCVGVDhlcvttWT8akG2m6SVDKgLPG1eg/hPfLwxSul
nhrXaIPZSS9laGGZdSTIusp4EqSKjxw3qa589VC2wzQ3hilljkUeoSRt6JY6/kY9GI7HFaeL
b8rB+N7deRyNtV8BeRIHexuEKEjoQrj01/ccUuNbXHs2wNuNxJAZRCqgjUfyOv8AEe1cVuKa
549ll/qEtnIdbwk19o+p0BALJ554NS0bjO23UL/03ci11D1srlDG7Z9FelC3+ODV9GdlV45H
RvvWoeuRAGRGG1iT1Ja7fdXmtbSIzSIobSuZYeS98E6dcjsu+Obxb2n6x7YvZggvNEdYTzYC
pGOk6jFFtnFt33JPdskSVVqyx6wslP4gp6rjnaZPFdPa3NpM1tdQPBdx5SQyAhwD5H/HGpF7
XfumzTWu32tw6OjzUaNh6oZVJz0uMtQ7jFq+prfjm9XBhWGCvvrrhlLBVcV/LXqw/hGeC9Rz
93xzzbbd2917FxBJDcqdMsUgoVxR1/DovuPbxaW/6ia3P6fr7yetQp6MSv7MOCdSg2zjm8bg
oNrD7p+7SDRwB5dcFrM59QT7fdQXbWlzE8EynSRKpWhJy69sTpjo3DYN8sIfcu7RhFQaZEIl
T9qVp+OCUeumx4jvV7bLcWkYlVxVQrrQ1GansGHgcWpTzwz20jQzxNDIraXjcENXp07Y1Dom
urlwIXmYxD0+2zHStO9OlcX1Zu13Qcd3q7CiO11TOgkjQnORfGP+LzwbIfmOBbK6/VNaSRtb
zqwSRZhoKN00vXpnitE1Pumw7ztUWu8s3SA0/mgalq3QEjIV88Zlb0+z7ZJemcFHkRYi0vtU
LIP/AMJp6nT3phXXiCOylkuVgjKtU6ffb0rn0JPQVwaKuLXiV6q3MO4RSWtwtDbuwBQ5ePRg
fI1GEWOKy49vN4rG1gLqlFahHXwoc6nAp0W3bJd3JuIQpS4gAFGBDKQfUHBzywymxFfbJudg
gkng0wN9lwp1Rmv+4ZYaz1455LO7hZEmTQZAHjbqHUioIIwGUv0Vw84tyNEhYCrZAHzOIOuX
j26QyTRSwkXESe5HCSB7idyjV0tTvTFEso+J7e9nBNJuBtvdVdaOAtXIDHxywyi8VTbpt/6O
5aNJ47mCvonhNQR4EHMMPDDI1OUU9nNAyrIjAlQy+DBhUEf54FUOg6c2qB1r9cC+BW9tPcyB
Y42dQw1gD8n8WWCrdq8uOF3cU90mskLGJ7MtVRMhOa59GFMU6a+rg2rj91uCt7J0uyl41bIE
L1/EYWLMV00RDMhoWQlHH0wqdB0NQhjkBX9gxNNBZ8YS4SL3ruO2lkQSL7vpDKwqChPUf4Yz
qcq8W3Vri7tUj9+4t1MntpR9cYFSyEdcsGrXPYbJdXu3y3sSj24D0OWoU9VCejDwxvF4tYOF
CaCJl3KJRKoKCQBaMwrnnXLyxjTFBuVhe2F3JaXSBJIWoxqGB81Yda41jndcyEnLTppWle+I
xo9r4cm5wLdWu4RkupkMLLpddP3I6nwPfA3iGfisypKqSRzuoDRSIRQnuDT7T4YEpbvb7y1o
t1bvCxoEVxQtXpQ98aWUDwTJK8U0ZjlUgFDkRUDt54NZyomjZWaNqhkNGBFD44tQdIFGz88s
6Yta5jULwWQshXcIEDgOPdNMmypTDGlDd7TuMO4Gx9k3F0raIo4yG11zBVumY/HBWNBeWMtl
IYruJ4sq+oUqDmD54jjt3TjO5WUmpIHntjEJUkjWpC0BbUB9cQioBTQHBJLCuodMVQFLUGo0
Xrnl+OBY0lpwm/uQoFxEsxIH6dvSQSKqCW66geoyxSnFMNtuhdPZyI0U0MvtTBh9j16Ht0zx
obB7zsF3tYUzisJIEV16jG4OY0t4/wC04hJ6rXYs+Q/7qdcVNia3s2uWUCRY11ANI1dK1z9V
Kn8MZ1R17rse57ZL7V2g0MaQXSeqBx1VlftUeOBrFlacG3W6U/ptDXP5rZjRq0qFzyq3bxxL
FEtrO0/6f2mM+ooYyCGDqaFaeIPXCoibTHI8ZDB0Yo0bChDKehB6Y1FbI7bbZtxlspr2OLXH
b09wd/UK9D/tzxnRuFtm1S3tzFErLEsje1FLLUKXpqCCndu2Joe+8Z3bbRG9wuiKRygnVSYm
dOqAnocMrN9Vxi0AM6No1aVahKl/CvTEJ479u2W53K4SKNliMp/ls7aQzDMpXMasGtSj3Tj2
7bbcRRXMOlZ20QOKlWev2qe5xSq1OvC98a0adBHNIpYiCM6m9PVe2Y8MOjFDXVQAUzzJ7Edi
PrgIxBcSyCKNdc0rBUjHUkmgAxKrduHb1+ja8SATQopZxGSXAX7qLTPR+bDEj27jW57lD7tq
EZalc2oxp1IwG+I9w49ue2iFL2L2xK4jWatYwSaAMw754qNds3BOSBGpCsiK2gFWFNX17YGb
qguLW4tZTBPGyPHVc8hVepGGxS+BhhluH9qJSZWI0AZkt4YGljdcU322sf1Zh9yPT7jaDX0V
pqIFch3xSpyWO3Xd/cLBAgNwQdNTpFKeJ74U5JLZ4ZdMkbK6GrahQrXL/HFjX1dLbdci3F2Y
T7EjaUcA5ECuY88ZxlAIi7FQhLtQ6RmSSaCmI4ae3ki1RzRmNq10NlniCFgwNc0yyH+OKjDy
FXWqt6QMwfHtikNJUkllRIgz6skRQSSfDLDix13uxbtYWv6q8tXigUhXlFHVGbJQ1K/dgSOy
2u93CUQ2kLXE2ksETqQO1ScUMc0sNwkjRyIY3jNHjIzU16Y1Uj1KHAB+40qcsArqsdpvr73U
s4/1BiUMyAjoTQYi4i5RmjdSKErRhQhgaEYGNTNZziFbgoywOxjWU5KXUVKq3QmhrgaxzSOA
qqynI9PAYVaeHLNhUflI8MKJzpHirfm8RhISdIVgSFGRNcv24kHMr6TpJy+uAYcaRUEFgtAP
OnfESai9qAHMfXErCUAMFGS59OpzwCwRbS4FAcj+JGJECjV05UGRpQ+OIhkQlVIyB6/XxxkU
yKqmukZk6l6nPzwqndhq09PADqCe2EBKaGUNkfyj64iklbSwGmleh7Cn0wo2TAjSK9A/jQ4K
ARka2H3Hrnl/1wI75n0igJ0kAmuE4kBVXI65VNOwHfEAkIy0Dnxz8O2JaEL6MxQ9qdK4lgmQ
jyNOnjhMOwWootPOtanxJwKm0GhUjvmR+YDxxChVi1SpzU0Hl/zwiCKHUWGZNNRrng1qCeig
aQKkdhXEqB9JNTkimhr407YBhzV1PWpHTCSdhpII0L38aDFqMofVQsdPn2xVEzIpNBXPt2xA
SFK1YnSctHTFUZoyznVkPLw/DAYAkqSFNQO/Y+WHEThfcUstR0p38jhxYkdkH8sDMEs7EYlp
1KVKkVpT1da1wGB1v5IoIrXM17Z4keopVswzUr5+GJI2D1FKshrmaVJ8/p5YRaHSVZTp9WYq
OhGBjMqUUILZk0GXTPA6RHpIcBaEioX8ThQqAKrZgjMr1y7jEElB4VH3MOmQxKogaA6QKNU6
fLCtBE9KVT0jLPsfPEBEKy6lJUnIds8QCCylWU6mPQn91cRw1XrqBpTKnc4kdA5BHY5gA/64
gTFtWZIRgKgf4Yh9ikAJ06vT2yFfriGiAA1CgIXLV0y+uJoA0BRGVIzOY8TngJamX1BQ2k/c
OpxIcgUJ6fu706/hiiqL16xUAilKVzA8cKhV0mpFARl3p9cWE9DkfAVqP88SI6GatahR9vT6
4sATEGbWx7ZivQH6YKMGwyBNajKg+uJqHYhjRgcsvpiwoWiqPbOerL8PpiGCo6+ilBTt/wAd
8Kw1dSlXUjLIUIy7YCcKoRasQSQadcsSOxUtWtfAjyxIyxuZC7mh7rliZwmFarmJBWpOWXXC
cR2wDsdbUqSC1M8vI4rVgnqzekMIxn7vWv7MGg7nUoILM4yJGVfAYVghkAtDl1pngaLX6Hov
/acR0CxtUaifEhjUAHwwsnMVHy9SmmY7+WeJYYoBGHBoK+mnQ/UYNIxWikk5nLv/AIdsRsCk
akMxenU/Q+WBFI1EY6qZZAYcFqJZBqGr1M1DT/lhGpmleSXQRQD7h4DtiROtXCipTqadBTxJ
wGuedVcMdWRyOXUDpTDAzW7BHK5Uky1Z4TI4qDwXrTA1i74iVi3u3kmekOoBgT0JPXGubg/r
uePet04ZuG7WNtLttzazRUr7yyLRu/7a4r16486zlvxjcIN4SwvQlnLXUhmYLFJpz9DVpjUs
dLNbnfdj3G2srW7cRzWqOGMiSq+kU6sK45pSbhujRXe3u86/okbU7g1VGJoOnl3xqRma0qzW
tzcxQy3kEUkykwPK1EZT0Af7QfrjLTnTYriy294rmRFdAxA1qaVNaihofri1I7HfNmjs4tt3
xEuLcAiz3D/7eFj2LCupa9mxaPrrI8it722uVWedbiJlJtJRmNFa0WmX1w/YTNcG13MdtuME
80RdUYesZlSe4x0/DXUbu/4zfbjFBe7XcQzRMhLtbzAyL3OqOqnIdcc9MHsc91bWzJPfe5cj
UysrESEAf7864KNZrauS3tvdD9VOwtpJKSR1Lxrn9yr+WnemNUvSEu7yaBLq12/b9ztgv/4x
BJ7cw/7ghRiRjNicG23abpuZVY7a33IAqttcSVinp0B1517Z4jVjuRh/Rj37KHb54aiaKN2A
DVoKIxYf/UuIWM3PuTxb9ZCW5ZYVUhXL5LUdA2HRi5eOPcJZrGKeNbx4darI4j1qRSqucifx
wSrHOu3na7Ff1EoVYhRwDR17VIHUeYxrQ4b/AI3u1xTctuuIpLIJqE9tIGZWGY1qpVsEaZXc
jOZCbiUSyxijODU5dM++GLEVmlrJOpuHMcIU1kFciehp5Y1ox6Ds9nv+3W1JHtt549KoNVKT
hARUHSaOmMdVRQ3m1W+48iayF+lmhQezLP61Jp/464pMFim3PYdw2m7FrfKiN11o1Vp0rXz6
43Lrj1src7TxndJuNBbP27ke2zKVkU6ic9IzoD5HHPq+usuqXYtg3RmP9JuUst5t2IuLK5kE
RNDlp1fdXFrVjq5VJuEtr/8AlGBIdxUaZJkVUMlO5Kel/CtMMrFqmfiN8uzpu1pcwXlppBuF
R6SRNWmllPhjrO4vrXfsr2m4bO22+4lveKCEWZtKy1NaI56N5HGej8nvp5NttVjK6biKgCSL
qRgv5SMYhw78y2uaART8fsn1DORSVOr+LyPhTGpoljt2m2g3rZ5rbbpY4p4s47OaQJItWz0s
33AYrV3d+HFDYbptV6INzdkVlIQSvqUtXqrVII+mK0uDdt03WPcw8d06MgUKyOQfpl1GH8Mx
rON7rJue3TQpHZ3l6hobS5AjeoodcRP+IOMt1xbxdMJ47fcLNNubVqR1kYgGvU1LDr4YMEi4
2+d4HcXlrZSRSj+TfwyBUkbKmaHSrf8AcMRxVbrdFLa9S3kVXQlZI1OVCfD82LBfXBDdQfpI
g04CsQmssSFbp6ga0ocbxmu7bNjuId3mup5oRDdA+1cJIGgdhSg1/lbyOC1c84Ke0O7291a2
E0K3sMmj2pH9uSoPqZD3WncYJWrGfmG77TcBZz7mh8oJv5iah2INag98O6OXfNyXb57WSGfY
7MiTJympch3B7U8sZasT7Lscl3Zi+2eZfcjbOJXImjc9NVe3njTN6QX9zutluRfd7f2VcaWl
jVdBev3enInxphg6orKKTbLp76RoptvvKBLqIhlpWukjqD5HFWeTbJtlxfb5c3VnLFJFqYPV
9EgB6HS2bCvhg3HSzw26pNsXJG/qVl70UqjWjGgcAdUda4NE5ixW/wCI3NhMtulxCwqEWQj0
VHUZ+rBGsV9qIt02U2ttKq3Nsw/kTemUop9LpnRvMdRjTN5N+gF1NDaNIsN0QWRJW0KxHap6
HywaI677beVWtvpjsRIF9ZdCHdadTHSh/wCOmHYJfXFxTbtwvL6S5h0uRqWdWYI41daqeowW
usjj3203Cw36stbSclWt5iaR5D7lYYXKtPBv282KCLfLaGSGdf5U2hSGrmZEde+M/Lcqgh5B
t8MsyXW3w3dXJhuej6CftI6EY1OWeqn2292273cz2kI2xtNSUcmMN0BXKq4qXfbW287ffzz3
cn/xbk6UuInDxsSepKGgr0zwatQf0283DbJG2ho5LqKT2lRW0yICT0H8J/ZhtOM9usO5+/Ha
7gzGVCEVpTRga0Cknpi5GtLuGxb6OLBvZMsMBVykLiQClRqCjt4kYL0r0r7q7mPFoTHL/LBA
emRjfqafwt4YpBcQbTvU1xuML38yyyhNKm6JOsVACM30xqtSStHLeTWQeL+kvbW0tYwY5v8A
45qa6CTUAHtXri1i8uDZWvLq1cPZ/rLGGQgPbS+xd2pr9moHNa+OWM1uUETpbb1Wa4/WwFdE
A3AaWFcmhmJr+B6YqlrLfTWMMi/0eS3t5UKxlJf5DAn7e4FO2rriw1k9hnvWkureycxzXLFx
CX0VYdhU6a+WNeOX2cv6y6tdydt0gNw8XpkjuQS3Tpn4Yi7rvduKXkMkT7U8LkVSSNwpU06j
xHlg9a2Lu5sd1m2Pb12wrOkYQyxBlWRKKKFFJBP4YDMcEmq53H252D3VwoUmc01lctLlu/1w
FXz23KrOD23eVV//AIeKVtUcijqor6Dl0BxpmpuH7Zu0t0L+2UeyAU9xGGoMctJHUYuvDLp4
bGSLfdwsruNY7h5HCRSgIrV+3TXKmMKrXYrXeoYr+0u5G9yOMAWUtWpGKkPHXr506Y0vw7LO
Vm2m3u7ewN5IpEbTwOI5V0flkHcjqK4B8OGO6O4b27QO1vemMVNw4DNIDkC38P1xQf8AgG3R
7vaS3dju0bQQzqWS3ehtyTViRSqeY8cIc13Y3G4bNYz2ah7OFyJJEzMZAodY+5Rlh5uGw1xb
G13GFL6ISWjKouGWhBiJ+9WFcGqcuvf9svbTbkexuZb/AGyPSwQtrNvrFFYN3UjLyxSnqB2j
c+RSbNbxC0t9ysY2KwzNTWhAoyNTOo7asBVF/LtlnvEcq2Ci1mVjJZnVpRwM6F/UtfPGmVzu
m58Nm2i2ZtvaWGT0o6SUkhfTRgSe+M4djE3ccSl1h1GKtIyx9WnsSR+/GlYuuISLb7j+qLsr
RAUMfUr3FPzfTB0eWy3zkG1R7WksV+DFKCbJnhLKrsP5kdMB66xleHW25yXSzQIxtHJXUGBI
k1ZfQZ98atYlUm8209vud5DIhilSd/cRsiDqOf08PLFIzHERo9Yo3iMVOtjbQXb7PZw3+1rv
GzsddtdRSBZYCRSSOozqhzCtgbgNu17VukslhdyFIlWS1kempFP5WHehFCMFUW1jyGz3O23O
S0tltbu8t2O5WiisLyKCBNEv5fwwnJHHuO92Vts+2RXu3x3qNEDG+SmmgfnHXFjP2jF7nfR3
k4eJCkQFEjJroFelT1p44VuuRlLnStOpKk+AGIfVe8OQ/wBVLFigEJWnYmuC0/CbYriaC73K
ZT6kL6lOalasKU8CMMYm5rr2bcbjd9i3O2uaNaRIrwwFdRRgG+0n1ALgsanR96267mTZLpAH
gkSv6gEMp0hQoL/Xx6Ymr8qPlEL2+5vHMmhwBrrkdQFD+04mZ8qf/wCyk1EmgOnT1p5Ym3oW
+bxs9va7dHuO2LdB4vTJGwU0YCuo9x+/BLg1ScUi279Tdo0Uk9sGLW0QY/qEC1OvUtKsvjh3
VmLff952662K5tJhLIxUNGskXrVgDQ+KrXrhkFS8hu+Rbc20T2KGixKHvUGtPaop0P2oO2Mx
WsVyb2G3h5IkUC5UTtGnpTU/3FewHlhrGqt1UJpYe4OrKf3YG2yfbty279Idxt3vrQRrGUFd
ZjYaw0cg7CvpP4YJS4LGc7ou7Qx//jMoJto5KCRigIX/AOsY3WJyk2EmLj2/WF6rG4VWm/TT
GjVij60bpIGoR3wX5PO4ycbsQCfSxHcUJNPDC1uxoNhgWfbb61hUyXbr71vGKVJRc9H+6vbG
Fjt2udW4pvtreU9ynvNDL9ylBpoFPcHthjVrptGuLK4hh5DYyzw6I/Y3C3JEgjYVSRZK6SFG
FKa8W8blCJHMI53mX2J3yFWPpd+3q74FKLnz7jLyad92sIdv3GNEju/a6TkLQTk9CWHhhl8Y
z1bcSO6yWG428scqx/pKQ5HWy6S2kjqVWmMfk1WcTmUbNJM1v+stkKtcWYNWK0HqUdQVw2nV
1us1pNw/dZdtWf21kC3djcjS0ZJXRLGD0amKJWbTJug4Vd1slvtrlce7SheCauUniMX5GK7a
7O8azYxrI23PJSd1FfbkFGQk/l1fvxc0SLDed+EdlLtsYd0leOYCQnTHPGwIkiZszq/MMapk
X1jv2y3ssNxeW1xa7iUDXDQVERcLT3KEg+o4y1WD3oRHd79Iv/Gkx9skULKc608+uFmG2lQN
1sjWipNGSa9PWOp64KWj5bdb/Z8tuYtvlngQ6HVVqsQYirZjLU5641zNMdGx3dgmxzz7jHMs
ZuG1Pba1kjYNmnpIIBOD4q68UnJL2CeQR2V5cT2Uy+57FzqBDA0/NmR4Yqzq02Pcdwbi28k3
chMMaiMBidFBlpr3GMymMldXl1cyyTXUryyflaQ1J/HDpp9sjupNwhFm5/UOyrAFGesn0/sx
VcxshvhsLm5e+mkt5HjZ7cpqaBj9ssToOiyEHPscP11m31UyaI+PpuNkvtypcPQrmVjLVowH
SnbBPlqpN1MV5wjbt0vB724peGL3sg+lgWAJHYmhFcNZ/wCmV1ruD3fBd2BRQ0UsbsCtDroo
YgnpWvQYzPl0uVBxTjsl5Z/r9vkSW4Rmju7R6awOqtG3XSR18MK6R/I22blDLYXM8ccYmj0G
YA5sMj0/NTDGIyTqvtqO4Bqe+AtHx3aLTkUcO2SOm334qlvdOP5Tkg6EkGX3HKuLFenZYwra
b1s21m3Nvu9nO0UocArJEAxKMe/qqFbEftrQSXdhJ+us7LcDRI2V9muACfbIq0SsVofI1xBT
bPDx+32a0vrm7O33HrSOVq6XUMcshXWv78X5GqPmW23kMlpuE16t7abimu1v4hp1la+hsgSw
A64TKzrCM1BIBINSPGmBrHoGzQbHY7Ztl+b47VfNEB7jsBG9B1UUNW8R3xRm1xy8SnvOY20N
1dwXKbxE9xY3QpHBO+mqKfDPI4hJ6ut32HcrT423qzmt1eO1uY5LeQDOPMaylBlTp+ODfVa8
wZdNAXOlqNSnXwOIglJ9qgObGor2OIBUMCdXVcqdqYSByGGQqpyH4HPEjJGtKDMZtTt9MSGu
vOQnIDp2wI0clalhl3+uLR9gKVlb+ELme1CehxLQoWADKaGtPL64UcuAQADUitB/liSSlaqT
R8tfhgwgAqxoAMqin+mJEEVQWAA/GuIHyGkEk6j6j3p3+mJGdVanUNTJfzU88S06tRT0J6UP
an+eEFqBYlaAg0INCfrTBTCYk0fofDEijAcV6HvXLLAD6VrUMKKP2DEsApmDHL0nMDzxokZJ
ImID1J6A9fwxDRITI1a9BkKUxNSCjX1mtQMs/I9emJAclaqoFD6iR+6uBm+HyZCh6CnU0OeJ
rT6RkqGpXsetPriQSSxJI9IHU9KeOJmmEgbJRX+Ijw7nEhLEQAB9Ca9jgQirU9eSg0PhTvXE
T6QMyRQmp8adq4YQMFJ1ClfynsThgOzBmNCW+vbyxK0Pp9umoFuvTp+OJGAd2BLdOhH7MSE7
ITpU1JGZGRpiQQErpAIVT1ANK4kXiFNVPUDPp/hgJGrUFR1rkK5eGLGKHUUGkfaSdNexwiiR
mKqBm3Qk5D64GpDyq+opXpl4fhiQNTORXKhybzGJCAU6mZslGeXfwriMhVp6ifoDlkcUQAW1
Aa9IP0/dhGkFbOMMUVeoOf7sLQpI0euZqOoGQpiGAJ0qooNJPQYzUdg5YtpqnZT0BxAGnNgD
UnsBQfvxaqUYrUFiUH3ZHr5UwsYamTZUOWntT/riHyelBQvRRmD2qMVW4UZlqS5UrXNT/ngb
h6MDpUUHSvYjEYkDEUPU0p0qMMFASo1BxV9XX/LEpR+2mjID1HPxP44G6jELJWpoPAnwwjDp
EXowyHQEnPLtiR3KoaA6QDQUxKhIo1FYvXqCMSMysfU1U00OuuX7MQSBwsYLnSpbSGPX6HFj
QXOk6tVS2VfDEELH1gZtQ0JrTPEkhKhar6i33U6Z9sCJyfbTSuZFRn1PnhVOtWzCk5gMKYjI
S6jJ6VIoSpJxKGAUmuoA5g+GBEB6AivqBzCrlTEjmP1AyGpXqegxHAIs7Bpa1Arp8QOmeEBI
cgAVIGQHcYBqZgyqS3qWlBWtcvDywnQJJCoo+YHXV54QchC7VBQN93kMZrWExCghTkDmD1p5
YsBmFG9LZEUI6E4IdM6moVaMaZf88IQU0B/bbUakBj2/DE5/A43rCQc5MqsBTFrX2Fm66a5d
cwcRjmu2bUXbLLSQMqDxGNRVndxYVDgDSPtPl44aua4Kr5ePbBjeu7a2InABNT1I7fjjpzzK
r8PRdpumjt8pGC0FKE5E5dsZ65ZlTvLPJkzmR6VCsxbp4VwSs7XTHNKjvpY0YfzCD91RTvi1
TUTKAAoNY6Gi/lBGfTFKspxM72wj01QE0U9fP8MOGeJVu7oqqaz4Ia5inSmLDoDMQdBYllzp
XOuLBpme4IQFyyGvprktf8PPEz0dXQ+kEhx1riU6SRzzI5ZDp1L9oNM69aeOHGtSo8juHZ2Z
x0qSeuKxQiV1jU2VajvngI4LyeFm9mRkJPqoSMvwwYz9jmYTPqkerrlrqa559euHBOhvc3Es
gDSsY1FArMTUfjhxbRIzHTU6gOle3nnixuDebux1Kpooapp3y/HGcFqSS4lYAGVzlTTqOYON
Rz72nSaSP7JWTx0mmeGNb4b3dZZy1Se3fFg+xiA0eeQ70NMR0cdxNGAEcqta+kkA4LNV6Czy
M9SfSvY4h9qkaWVkUOSydBnWg7YFboopp0Ue05VR9wBIB/ZiUp3lZ6uWLOKAGprl0zOeM41L
6Uj3Eo1GRmCn7Sx6/jjQsHFNLECFY5/dQ0rXrisUoFkABOrOtRXrSvX8MVUo3kMhJZy7DuxN
T554kBDqNWFGHRRT9uIYdZHV6xsR+45eGE/CUySOAGLFCMlZi2X+0E5fhgwW6CR6rVvy/m75
YVDNI5A0tQgghh1BGKRaf35ZKCQlqGqk1IFcVi2pHkYIVVjo66VJpU9ajBinVDrYLXUQxFCK
nt/liHVMzkhTWpGde4ONRXs6yy6NCkoWz9J798GNTo8krg66nWAAWzDCnniwUckkzEO7tITk
GYk/44sQdfQHInMD64sJ0kkQExuV1jSaE1I/DEoUtxJJpV3ZhlQE5ZYlUMTOxdVJCk9zll5Y
axzPU6ymOhUnUDVWBocDpbkM15M4/mM8mf5mLAfSuCxznRvderL0PSmDGuezowR9eomg6+Hn
hbAzO8hYvqJzJNScsaxytupzf3YpSaQD/vbqMss8GN6iW4lSbWHPuH8wOZH1wWD7UU9zLMwa
d2kp9oY6qUy79MUG35N79wyCIuzID6QzEqp/2g9MTSGTUF6UNaHDrNiW2u7i2lEkJpp+5T0Y
eYxUJZL64Z3dJHjWY1eNCQrU8RikatiBbuWItJCxjcdGUkEDDhnR5rqeZw80jzN/E5qQPAE4
mdTJf7hEFWK4kjQdAHIBB65DGTdqJpZGrqb76Fv9xU5EjpjSnKKoK1HXwP8AniDqG5Xvsfp2
uH9lRT29TBetc1GWM43qGGa5gfVFK6SP+ZWIy8DTzw4Bz3lzPp95zK1KCRjWo8DjNWnG4Xoi
MBlcxUpp1tp+mmtMMjN6cyu5ct2JrXwpjWOc3RzXFxN/55GkcgLrclmoOmZzwV0gHofvyFM+
/wC3FC7JN5uXs4LYyeq3fVDKtQ6AZBQw7YpyXJNcyzytLNI0kjKPWxJPhXPGRozuN6YDC1xI
0Q+1GYlQB0oMKtBBe3dq3uW80kLMAH0MVr+zFfVDzXt1O2u5laZgtFZjUjPxwNxI+67kfYP6
mSsNPbJY1QAU9JrUZdsWC00V/d2xY29xJCzU1SRsQcjlX9uGRjqlJdXMkguJJWkmOZlJ9R7Z
nDk/A/nLPaJ9yv8A2v0zTsbbq0Wo6a/5YHTQW24XlsJGt5mh9waXCn7qdNXji+q1Lb7vdxyp
MkhEkYp6sxQ5mqnF9Rrvl5RO0QKW6RXAGgyx1God9S/aa4pyp6rLe+ubUt7DtCH+7QSKgeIG
X0xDpFc3M87l5naRpKepiSSfM9cMEqF2AWi10dxXL64VhavTn+FOmCIUUrRsrKxEnVaZH92A
/CVrppIhCWZoVYsIz0DdyB2xH8CtNyvrOQm1neFWpqC5VIGVR5Yhgb/cZ76YzXchllGWo5tT
64YrY56imojLw74mXXZbtf2Gv9NcPGJ6FwDkSvenSvngalDLuFy9z77SMZ2J1S1/y/zwRIrW
+uLaT3LeRopKaQ6mhocmB8iDiM6DLczPEtqWYwIS0aV9Kscshic+7qD26OA2VT+zCeDnR+Tv
l0yGJ1PBcywSLJHIVZTqQjqKYANLuVZfeR/bkepYjvr+6uFFbXc1nI0tvIUdlaNyv5kcUZD4
g4LVqWx3zc7K3FrbTslo4bVBQMprl0Pli1RyX95cXkyyTSGUxqEJc1OkdMz4DGoziDUBShyH
QDOuJX4TteXMsEUMj60g1e0rdVDGpAr2rjNPNQxXUsMgkhkaKUDKQEqQfEEYo1a7rvkW8XtF
u7n3mQGjhVQkEaaVUCtRhvwzL6Oz5TvNnHBFDdOsNuCIYjQqEOdKGtcYP5VU13JdztcSsGkY
lsgFUd6KB9oxrGZYD10yNaZivYYLCvbbmW/wW8Vql0vs29faRlDFdWeRPauCNRU7hfTbjdNc
TlFncgvJENA1DPUQO+NCj3TdL3cGWS6kDyqixvJQB3A6FyOp8ziicYVfy/d4k1/DFo0SStEy
OtUl/KQSDXr2xlqV17lu97ukxnvSpmChCyCjPT80hHU+eNM102HMt9sbFbVJRJbxn+Uk6rIA
v8IqMl8sWDarJ7kzTySSH1Oe3QDsB4AYG+R3u5Xt5cI97I88kMaxwtI2o6F6Ln4YsFW+zcx3
rbITDbyKdNQvuKJCFYZrU50OM2GeuC33vcINwm3C2C20jvrVI/RHnSqhR+XyxVSOjduT7nuK
Se97a6kMcvtLoLRtQ6Xp92fTDC4LXddwtIriG1uZIY7pQJ1Q0DJ4Hywh17RyO92lZFtmDRzU
EsDAGNx29J8MRHu3Jb7c7f8AS3MEBXUGjlVNLKwFPSfMZHFfT9dPHyrclsUs5fbmERpFOy/z
U0/aAe9PPGcZ6VDM0jtK7epiTn1qfDGmQqwVqISrEAMQTXEdaJOc7sbVbe6WK8bQI/flWkml
BRdTDqR49cUOubaeYbtthuAixTR3R1SxygkahmrLShrgsVpt55JNuttouLSBJozVbmMHUP4l
BP5TiGOCx3W8tLW7toyDFdoFkRhVfCtPEDAnEFLD1dBnQ5g4Wh28z211DcRO0M6MHidezDoR
iDWXXyJLda5LzbbaeWUaZsqK3p0nIdNXU+eKUfVndq3eewuvctzRGaskBAZWU9VcHtire78l
fbpJcSTRwj9NavJrS1RiyKR9pqfDtitEmL21+QZ021rO7sYblZI/anlY6dag9Soyr54zB10y
azTCZ5IWeNZKjSCVFCculOmNatS3243F1FbLcMzm2qAzEsCD0y6ZYtUcn5tZJNT07UxF0x7p
LHt6Qr/LuInJS5XroOej9uYOIOmTkt1LbWVQDebdP7tnfGvuqtdQQnuNfqzwYZVxf89gu43l
fbo1viC5u4zoPueOimnSfDFIbzPwr9j5TNZp+mureK9sAxlWJwAyMcvS1D49MNGeHvOSRXVz
ZwyW4bZ7WZplsnYllEy6XRWy9I+4DxxDmO2S6+PDHI4tpULGgBjKs3/bRjniNiDbeVWliBaX
FqL+wtyxs2NFlXUa09WoZYOjsxW7zyQ3v6eGFWgtrAyGyr6njDkMRq70PTFGbU9rzXdE2G/2
iaeSZLzS8UzMxKMrDUT461yIxSMys6zF5CwUmn20zpXDjX20i5Uigr5+ZwUGIy8j374SBmZE
qoyrQN5YUYuQSFPpGZ/0p5Yhp2JC0BLDo2A6SlfuA6gU8PPAAsvX/wDWXviRg7FANQB6BTSn
7cJGsfpJL+tR930wFGGYE06jqtPHEBAHNCQAOlO+E6QoH9sr0ocuhxIgdQqGGX2HqfpiBCXW
a0o0ZABr1y64EQoRmPSerfT/AFxAx0mrLmcgABnTEzaeQqKuFOQoGNMv8sTchLUxggjMZEjE
B0C+kU9X0P1OIhyypRlByzw4gkBiJMiR9uXnniA0IGTd+hPicRKIadQ6+ZNK+eJaZtLsxQAZ
VIPXwyxDrrQkVU6s+2XauJQWsFaVOqgFRiaMoDeJKnx/wxIWohwO/ViB/pgFpqnUdLVp3Hn1
wA4lX7aZtkwPTLFDpncFwKUYdO4HlhViMgs5LgVUZeFfPChK5QV0V6enoKYlon9twVC0J69i
MS06sqowWlVyJA64ijcgPqNAG79ssQSKCSK/cCD+Hjg0gfRkWOVRVVyxIiq6jT1kio8fqBiF
hpHUgqVOroaf5YUJXC9QDlQE50GBGYhakDWxNBTwwpEyyM2tR06keHliWJEqi0qSKioPbAka
BWLM7Zr0UZ6R/wA8QnolINVHVB1PSmGNYEtqq3RgQNI7074Ro1cEMVOk/m74iZV0opIqamp8
sGMyHLsQQQVU5mnc+GAkuYzzr375Z0GGRUzNTyB60zJ8ssTASNRCuBrPT/DPCodoQSBoy7Cv
SmA4eiNkGFTXLywVGaSTSFQAaahu+WLENCwBzCgDUB+PfGiiZ0NXp6a5KelfPBROTq5K6jQa
ei+GJo6ydTXU3YeR7YiFhJQZGg71oM8MGEPTUqwY0616U8MSOFUNqIJQDI9iTiqEshbXlqWt
NNK0wEDJ6iWAooFFNOuHSIkKpIAZjlTI08wcApiyg0H3U/CuBkizBdJAJ61GJo0jGukfVf8A
jxwg2pRGaOc/vPXPwxLSCElczqXPIin0BOIyiZGAJoAOgrkQfPEgqSCGABJyqAOuIlJpPpXo
ozLZivfEqBQysAsmTAlh0rTvicrSVqDUTpY9PLEoXqUaiSB4ePjliMF7asQ7ggN+Uda0yri1
vkRIU0fOv2geAwNItSgguRQHNm6YcZSxpQ6jm1DmfPpiCIQSNWStFjBNMqg4NQfajC1cD1kE
CnUnxwDINg7UKZAVzByxECqU6lcslqc/PEUdxmrUbXQZE4ZVWa3gEye3UGlCCMdIOXHpj/8A
wf5dP4+OBt27Mum4VmTXU1Wh741z8s9fDe2Ptfp1ABGXppmBn3w9M812JGBkTTuM/wDDGWkp
qoBb1N4VxlHL6aMfwA7Uwi0A0gnrQiuR/wA8KExGYUip9Qr4+WIiiJIOsUORGVemJDIQvQkZ
dPriYoUC9eor0xDEpZa5nIdPDEjqSZDSoqKnPsMJw4C+4F6jxHXEtE4Bele3qp0rgMJWoAOt
Ohp38PpiA+urKpP/AB1GFRIhzKDwBHkfphQlB0KK5DMHvTGUcUANQASOpOJi0SZCmVT+7GlB
Z660AJGWVa4Yi/mZnpXoD2OAyiUjQNRqW7dOuIErKCGQdu5zwNHRo2ZgaUPWnjixSjGkoFqM
q1HhgVhIFANfr50xKHAV2qDRvPEjodSsTmD1IpXEsIkFSBmxzJ6GmJCNcs8xl+GIh1qXoBmO
/cfjiWmLGmdSK5kdcajIqtkQagZDEhEgNRjkO4FRiRiyAgV1aj93l/lhxnThqMKeOeCtSnAA
LaiR54DgvQSAcwfuzxIy6SQB+Aws2HZhUMTUjtliJFV0jz+4f6YlDaiGGVAcGnDF11UZanpT
PFoECamo0/4DETMwINR1NKDxxCmCohJrQnqD/liPwJUYsBXIZDCzpdyNOYypiBvbatVPbrWm
BYQU6RUVpXLyxY0fIPqFczlXoRhR9WdKU1Zip7YiA6tQbKvQfTwwCi1jSczqPlXFgtPVgozy
8+uAhYjPvT9+EEQ1AcxUUphJelV8Se564kWoiooAKYhToc9PWmYPjgMIGlS1OtKdPxwNaRVS
QBmvf8cKCtNRINNIy8xiBAqQSMz3OJfIT18Msxi0YdPWDqGlh4eGAkVckVYAnI4YKQ6E9Sep
xoYEoUJJOoAUNMZJytRmM8DULSoAIGTVri0gJqVGYUYmaAEBvWPQMge4OJCCrkASR3OI4cKh
U0NR0HjiRxqCGnbtiQVyB8/24WNOS/bqKeo+GIbhKHJyHpHU4G5SZtJOkdeoPhhQclBrRc6V
6n8cLIwx1Ainp/ZTBWuaTVqWUZd6nPAeoZhQjxPh+/EAhaZL6TXMdRQ4kcBVoBXI/d9MKCSS
W8sqj/PCMECppSlR1OMtyikHSnQ5HEKiAIJIIDZ5/wCmFk0QqC1cjky0wIdVAJoen+GJoH3L
kCNI8MABqADBqmprl44RT+kUXV1zr/piWAo4K9TWta5fTE38GJD+juTmD1xLTMKNU18h0xA7
+k5AFSKU6YDgZGUJUZk0Ar5YhYjXUzHSaHLUewPlhOnCkEas8RMfzZ+omtMQsI0FSc2A6HpX
AJMAQ2oVzFM6+HicCKoZQDmKkA9MSwjpatOq9KeWJAKaV1ZClcz0zxU4Tk0quRbwOIaAq61Z
89RB0jLphODXSKE5EHPPpiBq6qAUofuI64CYsoIJyKihpiEO2f3DIGgzocTQAjhqEVrkCT2w
gy6SCf4RTr4YgWo5t0DDOv78RlNGdWQqT2NO2JJEKda59j2OAwxJ1kUqWyr9BiQAZHVVAIIy
JxeKCDHSFUUNe/bCRLk1SARTI9P24EHU2rXTPw7YVpo/cZNZHQ11fUYGadgR0NHHUjPLEKjE
lEyWp6KR1+uIaYsR260qKknEtGQzlWpShqfMeZxRvSOsoScx3GIGGooKD1A0byywEm66qeke
mp6HEjOTUEKCB1p2wk6kFidNCOg61wIB9BbOtetB2OIEQxJIoDpoB44FodOefXIVPYd8SKTU
mVQXU00/vxECnVUMRTw8PHEgSMzCqECoNB1xA0asV/mUFAQtf9MSwtQI1GtB+04kEBtQUEBT
mR1qO1cJLUoOjNaE5HyxImZw+osA3hiCF3NQSW65KBmCe/0xIYqFqMgMz5VxLDMQciPU3U4k
BylCvQKOvU/9cAwogq+keruO2HSSqnqUfn/N5jpU4CjoAASDq7Z4QaQuoqKEOdQp4+OIURzQ
Fsh+bT3ODUBC3pYDqf3dsVQyulqhaoa6h/p9TiIXLagtK0zB6YosAEYjL1DuB1+owjBpqoFJ
PnX937cBwzoSFeMCi0JU5dcCEr0kLDuO/icaG+h1fziFfMDp5nxwtHI9VVp9O2Q74kEoFGss
FDV1L1wM2ErMAqgaaGufTPv54kk1BnIyH8FAa+dcDRgCoY1qRkwPhiooAtXGkZk1YeXlhEG4
jGoLVmbIA9hgjRljGkIwqRkAMiCM8NJmC5rpr01DPLAxoyoY0+0dvrhUDkKoc2NBqGVMRBRV
dWkNT9o708vpiYz0XuJrI00Nc6dPwxNiCgNWmlRmc+uIwVWiqxoSegODUFQCP4QcyDX/ACxI
LkqV0+gZ0+uLGadh0JBFBSueLFCUh2r0H0p0754Wjspev8Nc+wBp1wILaiVIzK0IHY/XCyKl
GHmOvfATMrClKFu9TiVBkGVNQAU1r1/CuJCoiuCGzalKjKmJCYCppSlcx9cRCUYHrka0r2r5
4lRVzJrTSKV8fqcINlQk0AGeX+OLEYEafuAYUr/rhxDA9RyCAUz618cCB6GTJTRjSpywILqR
QIchnTz6VwlIFURhvSWPRsRc5hILOAGbqPLFrOJlZAFBYgt0PmO2Em9sAVJFewPb/rgRFh7h
1ZV6Af4+WLECVvDKh+pp4YRTkKEDOQoYUBGWeBZgPcJmUAejoT4EDEzLtSyKViLDNWzYdwB4
4mkaNV9QC0/i/wA88S0pFHQmpPhlTPEzUIDFqg+qtF7/ALsMX1dDFQumMCi5551+mFrA1pQK
AE/Me/8A0wE7CMEimdMl7Z9M8ZRihUVJoe9T0w6DUUZVpXMda/twomDPSq6UrQEdwOte2JCA
OapUn8pGQwYdAigCQtm5/N5eGIGBBjKhaA0z/wCOmInEatHQnpmG8cZWHoQKmhHifPKmEWnB
ALUzK5EnP9mEEqKvr/NnRqdP2YCiQI1V/KTlUdfPChSe67aWb0UNPBgPPC1DDUq6hmH65dAB
kBiVpimumkUC9CPLz8cDJ9ICa9NFGWrufLEsKRAiozdSehBqRTviWEkJWuVB91etBiMh00kh
pM6VoenXAdCsoLlepAzP1wiGk1EsHIZKii9hh1FLr0gnMLQ0By8sCwWoHLUKjp3p+zEEbvIz
eltTDqWyriZ0cir7FSa16Dy/64GgOh0iQHSqZIevbLEUUjrJAyMGANCzCn7MS1m91RXfVqoB
ko7088akEcef/wCpT8MTeu7Y1lacCMAsxoaiv7Mb5Zvw3NlJpiBpQKKtUUoB3NcdLzrEuO4K
GQSItE/iA8M+uOdmNSjU6G0la6swadcZWnUktV8q5A9q4hpGmSgCoyLDCdEV0NpyNciV6n9u
HCcB2BAqKAihxSM0ccYrQt6ugJwVF7ZoDUZZFh1OJWCzYjIUqOuIQRDenT1zqD2GHGjxvRwG
GZqAB2wyKCY0NR16E4sV5FkdNPTllXv/AM8WMiUF9RJJp0AyxYYMCp6DVTqP3YFTHSKUJ9Pf
rkeuAX1ItBmwzpl5r4DBhwelWAWgGY9Q8Bh0/Xw5ilL0jBc9l65+VMM6YvJFGroNdYPbsfPD
QNQKAkhiTiWGCeogJR/LpgR+je2ASD1NKdPriRKG16D188RHpULppTtqB64rCQQjKuXY4FiV
InZDoTUQK0H+JwL5AQdOR9XQ4VYEM2pRUgHpjUjNSRxyAOwUuv3MBnl54sWgDMzVT7B0J6Ys
NpBCK0+3qAMQEW1Ghy/xwgXtsw9KksTSiiv+GE2GWjAAghgcxjNUGdFQCf8AX9+BvCaKRGz7
5rUZH8cMZsJVYknp5Z4ajDQ325M1enh44IMMp0soLAg5/UYViSlCSAaHvTpisQSwZAfuIrXG
EXqqK506+YwkgadBqI/ywjSYKwBNdR6DANgkfIGhJP8Ax0wnCBOot0JP4YQZwOhp0yPYYlYW
iQClCSelev1p4YzqkMD+UDP+L6eGFHYHUtVIHUE9M8ROQ6k1WtP2nFiB92S9xmT49sA0ZVqg
dfL/AJ4AEqSK1FegrmMLWlUo1GIIIz7AUwmG1FwudT/phG6RpX1ZHqanGV8HYpUKB1NPwxYt
IqGFVyNaAnr/ANMWHn0IaijUanpTwxGnYioU0AOVPE4YNCVy09x2/wAMQwiaE1FARnXscCSB
V0LRupo3jXA0GlOh6HqfDDGAA1kLD7R6aDLCokVc6tWgyA8RgrciM6gK51rUfTExdh1kYkhh
UkVIOKtToUjAjIZjp5fTAtRUcjOnXr9fHENIL6s+1K4Togq0IJGfWmA6To7EAZADp5YmbD6f
uHY9ThgsAdRrU9cqnEzDqx7HyNM88TZz9oIzyoSe2BIz0zPq8O2EnCkDSPT3GDTh9VBQ+oHL
8cRMfb7ZHEDqy0OkV8sSDQAUPU9ThGGGk1qaL1ywg4cAigJyIr4YKZToxJAbIDp9D0wEJBpQ
A5dAR1+mJkqrkBkRnliFCo0kkiqHMfXAuT5kFq+S16YnRGanUQAaZEjx74tBmrWtSKDuMKwz
Fm00JPc+OJUIcAV6Nn6vEeWJaRWtSTQZdTn+GIwDFSoXKgzI8h0xGC0NoNTTyAr51wMg0qpy
JLHqB3GKMhMtKVHalSemFrmkBWtaEjuPpgrYtAEVSa07nw+mABIfKmZAqfADEsC5qOmoAZ+A
xM0Ik7nMHJV7iuJQ7HS1RU07HwOIgagpSuXU+WFBYk0r/wDTXEBgKaLpoBmadfxwNBUrJRga
0ByGWIYEgMWIypliOCOmjEZ0pqBzxJGTmdRII7YRaddOlgV06+hPjgBwgaMUzXxOLTD1FCTk
D6RXETIq6dRIy74kcFWFQKBTkT/hiRULrpodPTWcqjwxJGxVx3y7dvDCjqxYn0khf34hp6uX
9FKDBTom1afUpzNa4hgUZVajDTqqEr3IGJkSoKUAB8/HAYBuumh/DwwgxDKqgUAOZzPbFD8H
Ri1ADme/08cNR6oiaa10muffGSb7qBcvHwGImV0rkCVJoK5A0xI1MxSpB+1fx7YSYt6wAKg9
SPLxxAOoEKaVY5FlywAlDIVObgAde+BExBNT9wNTXsOgxYUMi1lLsAaZGmS/TEoJioU0A8NP
WvgMaQCWMijSoAyqfH64Ec0ByGa/d5/TwxEK0ahKig7079sQRuK1qNVTU18PDEhNVhVSD3Le
GJBHcE0H5Wp498RMWPY5AZnCgkoyLQ6WXLzNM8Aol0GpFdVBWnUUxIMoIJJYgnMnxOBB1NpN
AWOfll44kYqKUqSo9WeVD3womdcsgFIIy8BlSuIaEMzgmlAaZVwIJApQZKDWnWv7MQMJCVNK
sSQV8SRii1ISSKMMute5wtSgEeRKgUB/aBiMEylgDq9NK9MRRoJDQlaFsgD2wM4ehQe3qGfU
0yriULQwWqAFyaZ5UH1xqLRF5T6eiA0BUVJxaNpiE1VLBT0UHxxQgdtBzPai4lokb0U11IzI
7188BIyLUg1zP7qftxM04YgEK1adB2wGHqq6BSrHOp8MLQShzoxNMgR+3viFOiho31FqjqAK
YGcCpBIFfVSlfCn+mExJqjQ1NWLD0keOIoyPUA32k1HgKYmLPUjkKA35WzPfTiw0DMyClCBq
qKeffEUlWFanVXMsMBA5UEEk0pnWv7cQpKH0AUqzUpTwPji0FQlV9Z9JrUdK4ThhqZi2kkfw
NiIyCwctQp3Vcu2Ah0vpIXMdietPA4nO6dWFehqQOozzxYp0RevTqOtfDE1KUlGU0FWNGHTw
8MWJEEDGrGpyanToeuFYkMhDAAaR0zzP4Ylo2INNTUAAOeIomkAcqpp/DXy8BhZtAsZAYuCV
c5nzH/PEYMazQ09JFS1c6jEdOtNNADqNcvCvXrgBIyhApqe6sOte/wBMSAdLMVOZr2wg4OkD
URl4ZVpgaMzBlCKp09TTvTPrhRwCQcqk9+lP+mBDLoyBVo3fPy7k4VUcoamoVagHhkcQJm0t
r/P3PXKmBEqtKuSio6qex/6YkFgwK5VXtTxwaKL3KBUBHqzYtl+GGLQvHFnnUrTKvWv+WGAm
1NGBUCpz+pyxKmrqIFQKZM1e4yyOI6dvU1AKAdT2oO9cWkjp65sVzC+P1xLSyLE5gV7/AOWM
qAGosVY1Fcj3piCSUIpVBkAPEUI8MaIVYlGoSF6den0xAixGgE+gDMDtXAQ1CSBdRPiP+eJC
ersSGNT/AMd8SNrkXInTpFGypWuLBoD7iijeoMaCg/HMYmfUiFghUDVU+jt3xNQ//wApVZq6
c/TpzpiaxFIzNpSp1A5keB6/jiZqXUahK1UZBj1IGFpFKSDSoNa1AP7MCpK3toGZa0/HPBrn
1cBUswNKqOx/5YmdtGCCrVoAwoPp5YWpp2YltCkaQB9B2y/zxN6SSRNQ0JI+6vT8MRIxqAa0
LfdlTPEjkUCs3h2yoMSoWLCUUAJVe/ShPliACCpOZ9xvuUimXjiFhOCAwCn3GFRXsPrgNJY5
NWoRliooQT2HjiEH7sbDSFJByAOQxNOeWQgGg9PSnQYUzG7L6y1fST1p1ONqOKg8T0xg4sdl
neC8WS3Y+8Cad1HnTHXhWa+iOB7wt1sjw7htttNJA4AuJIkZySKnPMYv6eXxznK32a42i/8A
1kTWkNkPcowC/wAtxSh1LmP3Y561VHecK2m8YvtN66uGo1vOvpWh/K4OYw/YLCz+MrZ4VN9D
fQ681v7dBPB1/hJBGDTjH8l41Ps1z7TyiaKUkwShCpoDTNT0xqUVybTZR3N7BbzV0ysFYrka
HuKY6MxtTwbjC+xBPfTIZ6rHKtGof/3iH1Ef9pxja1Rbb8bbatYrzcAZw5EEoJWKRCDp1V9S
N2xm9jHBccR2C6khk2rddbqSlzYzoVZCP4Xr6hh5tax1wcT41LLFZXl3PZX04CROQJYWI7j7
SMTOpbf48jS6lN27XltEK67P7tHYkN1y6jxxXrDdrouOEcaRoPYupJbRzpW4QaJF1dmQ9xg+
ysRXPC+LQSJHc7vJG0prDcKo0CnaSFtLU864Z0trn2rg0U1+1rO8l+q0eKW0dfUtK/yw1K0/
hON3tm81Y3/x1DazxSRTTvbzZvbXUJhmUHuslSrDyxj7qTHR/wDd/wAdaJWjvbklz6oGC+4h
A+9X+1v+04r03NR2vxzAssr65dxhTUxW3ASbQMq+zmW+i4L0mf3vZdjt/cG1blLJLEwWSyuY
WimjNO9aVxSpR6FX7iST9xH1xpatuNpbTbzbQXJkRXPplgI9xSMww1VBp4Ycc78tHu2w27bt
b28k4mtXU/8Aykj9uTw9Ufiv78Z+zdiY8J460q2K7m0V6VLRzqnuxPTOjL6GjywaHPb8Js7V
Hfd5pDErNnbkagPGh6jF9lin3fbNnhbXtO5m9jU0eKZBHMle2RZXH0wyWleww2V9xBRc28ck
0SEwXCLpkBDECtPvr+7BfKulWnG0k2SO/SYmaNWLp1VwMqqexw3rROXRHxCK4topo56a01Mr
A0LEfarDscWnV5x7Y+Ox7iUhu2tb4Rfzba4UtFl/DIv25/xDBbTiS54NFdXU901tNJb1q62e
k+2T/t7g0qCuDRVNuXBv090otrlpYnGr2po3hlUH8rV+76jGp0x9VlxrZNhh3F4rW9eHc1jP
uW94uqNxXPTIuQz7aaYuqsrM8shSLebiNUWPS3rRAFQEjOgHauLlc39oNm2+O9uhFIxU0JBX
Og+mN03xbLwyYzxa5Ve2lqJPyOoFaMvWueM/ZFYbDHZcgjtZrqUIw1w3kCj3EUjoUPpPTPDe
9h+qPe7aK23wC9Imiehe4t1CPInZqGoDU64yIv8AlfH+KjZoby2meG5YL7czJ6WBWul6GoOL
m1ZQ2lls8/GU/Vgn24tbDL3ANXWMnI4WLzazm+ceisBHPY3hvLN1rV1MUiV6K69OncHDutfC
nY1YBe/h2GItXxKx4zeW81tuPuxhaM2kBgwP5lP3Kw8OmM3QHatu2uHfZIILsz2UisqO6UZa
Ho0fj54bfG8T3HFdnu7m5t7G+WG8iQyJEfXG9OgUqKrXzxnTh9s4JK9t792k725BV5bYB9Lj
sy9fxwj5VPIeNybRcj2phcWsmccjIYpB5Ohy1fTLDGOlMygnPIHofA4Yz9fW0aSwttggnmso
rlqIpJBDFTl6iMZdMccnFo7u6DbUHVJV1raOdUgNc0VvzeWNToYnueCiSKRopJbS6C19i6jK
qXHbUootcH2akdl3tO2Nx22N/IUmt1USslNcVSRkPzg+WLRYy+9bR/S50KzreWzisUiq0Z+j
Ke+Geue5cWe7bbDHsVvdwy+5CwUSRzLSSNm7Bx6WT9+CN9RHsvHLe/tWnZZ8qDXGuoL5Ov3f
QjFeh9R/+nCLcjby3arAVrHP7bKQwP54z6v2YtMpLxKzuhNDZ31NwgBd4XoyPQ0qjLkAe1cF
6M5ZuSJopJIpRokQ0bLOoxpizHfsFmlxPL7gBAQFDTVpPmD2waPVxJDt8trK247Q6Wykg7pZ
gr7T9NTAVXr2xYr567rnjexbjxeO4tiY94tQrIyLSO6RT9pU51pnX8MZ3HS+uOHabBpQ0kGl
njo6nt/uUY6a5Yh49tdjd7neWlyBLEBRKHQ1K0qCOhxjW54qf6WW3OSAECGKRlEp8jlWnljR
W1ttthJfSQTQAsFAcLnQilHHnigxn7iJYrqZM9IYha9cu+Frn1ZcXsYrm+EbzCByjGKZ1Ekd
QpJWRT1VhljNORNb7Cl5vNzZCSOzkhqQSf5OrsA3YHx7YGbRtxJbiKb+n3iG8tBWeylIEjU6
mNlqhHh44WPa477ZDZ28F2WJhuNIaI/ert/iMUrpIKTj4W5SMXCq0wpaFzQF600SE9PrgtWg
g45cfqJ7W9rYXkX5LhSqlq9G8AezdMUps1UtrrppkTTSf2dcLnZjv2ra5b6doYgXdKExL/5G
8dAPUgdsFMjsvtgsyKWO5JNMilmtJVMczEdlB6EeBw8tXxz3uxSWSWsxcvBcqoJagdGYVIIG
XTMHEvBjjUxvRAH98SDXDHFX3XQH1UU9SPDEyK42CBwx2+/imnjGdrITE5p9wGofcO69cR/L
v2rYdhv9pMs+4mGda11LTSyipGXUYzPlqxV2+wXNxd3FvAxuf0wDzi3Gpmi7vH/FQZ0642JI
7bziE4tZLra5RdNEnuG2cGORlI/LXIkD8vXGZWb4mXhqNb280V6CZI/cNvIumQ+nUyxnoStc
q9cWtap902WexRZw6z27GiyrkwINKOnVThnov+VeZDrJpmQe3fAWg27itveWkcrztGWB0SBC
y1/DuK54mqCHiV0dwm22S4iWaJdcVwT/ACpQTQDV+XV59MLMsKfisskMh26RZ7q2UNJZOQk+
kffpA9LafLGVamtuF+5HHNPcNFHcoskEoQtGQwB6juv5hh0KPd9puduu2guCpYUIliOpHHYq
cOlx0IrSteorn1wMtfx3i97cWS7haSW8xQsYdRBNQvqjdT3pgnptxVwbLd7nuN3bRQraTwGr
QSGg1E/apP7u2GzDPSl4nKbCW7sZ0uHgBa5tCaXCqDRm0d9HcDFvq+qVOEzyRW3v3H6Z541l
hJUlGVh6SCMu+Y7YtizHL/63uI3Ca3nj1Ja6TO8A93+WTT3EUfcuC4tT3nFL+OzlvbZhc28N
GkVfTIFP5gp6qO57YhrPSBTKVJOhqVPjTFDa69v26S9YhMo4yNTICzKp/NoGZp3pgpx1bjx1
raB7yC5iu7eL1SCNh7mkdTorWi98akZzHTFxCRzGtxcx290yq0aTHQhWQAq1e60P4YFrl27j
V9NvsdhNFEkkUy+7Cz0EgBzCEfcCueK06sud8dGyze2IGFs3qtrpRQlM/wCVID+dcZV+Ue38
Ntdx2U3UF/HG5JdWYUyH5Hqag1w8s9T9KEbFPJb3kiOGlsqe8gIp0JOf0GFv646bbi24S3M1
qVMU8cIuoFbJZVyIAbzByxNa6bHiE91FriuYlc00rISAGPQ1IxMuO04xu0730DRiO624/wA6
EsKlerGPs9FzyOfbDVP6C3Did7b7edxt3F3YRaTcSRke4gf7S0Yz0+JxkX10w8FvpKJ+pjgc
qtFnBCksKr6v4W8Ri0yKSWxmt9xWzvQbWUyCKUOK6DWhOXUDrXDT8urk/G7/AGG/FnclZYpl
962uomDRzQno6keeVMCVAUgmrVHcf6YWTLIxU1yNaCmYpTE1CkYkCgrn9MvwwIwZihA+4dBT
piRuzE5LQk9jTxyxINKSEgV09BiFSMFoqnNmzVT0xDDOeoVhQHt/hTETto9QJ1ADMd8RCpVV
Cq1a5rXrlniAa+oknMZU7VxIYYsAFOkE1qfLwwaDKGqF69aAV/HFpFpZSSPt7061rTGlhgpY
CmQXqOxOCmQWog9D6+vfIfXAjgkEKKU/xOJGd4wmgAEkjpjStCAagnI1+uBiBmQsgBY6TmR4
nBK0TqCaBjqAoe3XCSKArnTI0xCHIUKUY1Hh54Gg6W1AA1FMge+WJFQLQDwzI65YUCNW1Gla
g1B6ilMSM5ovqOQ8P24hhMzEF60IPTywYg6loHYjSfur0PhliAACa+Ff3YkZhHpU5AKaAjEq
EkA9K9s8/wB+FSGNWJRQVamWVaHAt0RQhfVUkZeAPiaYGgySeoKagHIjxA6Ygb1iulAFAFPP
CjagQA/XOn18sRCSFX7aV8fPED6FAVurHrU9f24EYjSf4DloA6mnjhQSXIBLZdACMqeGJEx1
GgNdXTpSnlgOhkDUCfmHQ9qeeFAdaRqFNX8T4+OBmm0lXOXpIy+vfBQSiidBqP7sQJAKFhl1
y7fhhMOunSTIKk9a+GLWpDsegppGQB6A/TCdNqfUUqdJoB44kQJBrWgGQp1qcQCHVQc8upY5
1xLTpKKmoJrmq+NcQEBGDmNI7E9q4jCl0ByXoa5kEdD2oMMWBDI2VPSMywFajEL0HSQ4PgDl
TLyJxD0RC0IoK1oR54DTpkgIIUas++YwHk1ENBUk50HapwtHRGIoTQkHr1yxAwBVAKtl1Pj4
DEKFCpSoGYyNcziGpFdhHqoKj0knw7YjpSAUJBzGQr4deuIWhJoAxBJPXtliR9NFYLkTRhX/
ABGJoQ1Cmmv+5sCMztQhgSBkJK9cq4loI1LAhXIJ6joBTCzDgEDLoTQEDvg1rT+s6wBn0PgD
5YkEO4cEZsO3kPPDRabWSxIyDHP8PHEzoiFqdOXYeOff9uIyYARMKAGpX7gennngMgzQNmQM
uv8AlhISrswK5dAPp/phRgIxVSdVelete1MAFJ7mRANMh/x9MQw0i+qhAr1qf8sTUGCv2nId
++fan0xK0lCGNiNNPt1E9/DAKjDHR0Apmc+30wjTFlIWpov7PxxA3uAAtT1A5N4jEcJQX1Mv
Q9RiMOqGnoByGZxI4YlMzQtlnTp/rhIVAVgqkjUale3/AAMSPMF9CVqCcx1qR0xAxYhRUdaB
fH6fhiQw7k1Rqaj6TgBvWGrq9Byp/ngBq5VPnQ0qcuuJA1IRlRg1Mz1p9cMaOiHQAvievU4Q
JkZCqxsKNnT/ABpgOmIKdG0q/UHOtMQviAtUClKk6vL6EYREoVqha5UqK9h4YyTUU6lqCx+5
T3wmUriOhBanqzKjKnlhVgvZUE6lXIZUJoR3ywGTDzLqooyB8MjiJOjMyMOpFSCOtMSwJIPp
0EHs3+OWIWH0rmCNQHc9/wAcSwKxrpCkseorWvfy8sKM8jMTlkKL0pUfTENC0qp6AM8s8zlg
GpAvoXUpUsakjP04ieQIanIdj9D44jqJRGslNFK5A0r+GIW0pE1nQG0gVLEdc8TFhmUBApai
9/EjzwE6uaDIejpXsT5YmvgtdQxaI5ZE0/diWjBBizUFKdB/yxFHoLrpqAK0jA+vfzwhJqGg
jJmPQHyxYajNWYBcj3r1J+uIJFkOr0MDlmDTMeeIwwajkg1qPVU1IHbA1kMrakpqoT1I8fHE
sIoDXOgGdCMycS3UV26+2wdqjv418sR8ZXdhWVtLEocwPOmeOmhxaX8T08MZCy2pdNyGrWhA
DdOuN8tY9f4Ryay29Wtb1WVJmUiaMagtPEU6Y33xs1mfz9Wmz8msba7uEmRjbvMXR1oTQ1H2
449RXlPtnJrS1WWKRSFjdvYYgkkMa4zIzi7seVcYvLdNe4bht17FUj2m1wPX+IVVgDi9ak1k
OSX8F3c/yryS5Y/c8pLE59q5jGouvHFsskUW7WrSlVjR9bMTlUdAcdJ8Odej313wt0tri6Nx
t99H6oby2Kz270/ijJDqT5HHPa1jkk5vsol/SSVdWBRJ41IFSMiynPLGcrXOMSl5+m3IXMYq
ySawPyuK9/wxvmq2Nxbb/wAC3T2r2W5u9r3i1FI4mQTW7nyGTafHwxWBecQ361vd6H6G5gS6
RCvsXhaGKVB/+Dk7H64Kk/N4LeJ4dwktpdtnSrNAHilgm+kkf5h1zGMyqx5zybdLPcJImRjW
MFdJFKVzocakFWXEOT7PaJ+l3uCf9K50pc2bfzISejaW+6nlirN6am+51sCWyQxbjcX5TKOV
wQrU7MjV0sfLFPQ4E5vs0ZEbrL61qjgACo/Ka98ON6Q5PxC9eFtw/V27wM2m/sZAJomfsR0Y
fQ4MSPmPINkvrOFI7992lj9MNxNbiK6QAZrJKuTj64oMYWrZ6mq1QBXxxtL/AIrJxy2vEfdl
uRRgUltyDTt9jUr+3B1amn33eOHte299tV9LOLUOtxbSxFScsmRzkaeGBKQck22LfY9wjR5I
NBRq+mQBx4dKg4zJVi+i5XxW7mns91W7W0nTTHdWxUyKD1Dow/ww5Wvr4zu+2fFIDF/Sbprp
TUsGjaI08Cc8/MY1zaMafbt3+OzsZsJbm9sJ2QhRoExDHPtQEYz1NVcO17vxmC0n2jc5pmsw
CttuFoBUZ1GuBs/wBwak19yTjltbLHtrNJKlF1UZY2T81Fb1o3lhysY6LXfOCJcf1KJ7v3pV
C3FlKo1BgAGaGUZeelhis1uRJBy7j36iW1nnvIbeXKG+tBokQV6PGetO9MGHFbunIba2vka1
3WfdrVK1S5DISpNCDWv7RjTMdsG88FivH3SBrkSSIUlsZQNasci0M49PmVI+mM+lkN/vbW73
ee4tH9yAke2xUqTl0z7jGuZjANm3E7derclPdVsniOWpe6g9q+ON2sttDyniRt9Xv3USy09p
WQPJC1c9QGUik5ZUOOdamuW13nir70lxuM06rFnFcxLSPT09UR9eJtwcy/8AXryWK92fcXmY
IFa2liMbLQ9dfRq41yz9Pylk3rad42IWd9I9huER1JIE923lCin2/creOGzDXNFvVi+xSWMg
dZok/lyflah6CmYJ7YFZrjut5guNoSBSVnBzQj7gB3OHGOlWChQ0oG8uoxGV3bDudvZXbveQ
tNbHKX2jSTT4rXKvkcVUnqyW52jb9zN7aXBu7OcHVqTTLHU9GWpFR4jGMrUDt+929vvk92FL
wP6AwGek51ocP1W6v9t5Jsk0txb3d5dWLuf/AI99a0ZGj6hZE66hg+GZzVBya9jnZEj3N9yh
QExSMpQgnIgoeh/dhUjPqoY0aunpUefauNFtdt3Pil9sI27cZ5tvvYtIX+X7sMmnMZj1fswW
G1wtyC0tLmIR/wDyoYmBcIdLAA/lLdD9cGWMxcbhv+3Nbvc2HIbx0ZfXZ3AIlX/scApl55Yp
Natc0G98P3LZhZbo9zaTxMCl1FpbWK5OynKvjTEr8KverrbWMNqJzfQK1WnVfaLL0PpatGph
kYnPvq6ll4bfbALGHdprW4oNMdxGWUBa+l9AI79Ri9bQ7Jvm1G3/AEVzezbduFtRYb+FS9vK
i9FfTRs/HBYh7tyzbZ5IH9+S4cfy59YGlR/EjEAlT554sFUuy7/abdvNxJIjG3uaIzpTpqqr
GuH6rnpU7mVa/nkjbUjyOQx71NanDF0n2Tc1267WWVC8UnouAuTaCfy+eKwcxsdr3vjm0zTy
2O5zzWW4ZTwuhV4mJ6PGfRJH40zGMWN2Kq7363js3/RyEXdpcLJEgH8t11Egqfp2wuU5srs3
bmGxXclruNvbyRzSALuNmCAAfzNEx8fA4XT6mgfjdhK+57bupn95Stzt91H7ciq5DDQ6imof
swYL+lVcbhtL2s08DEXLNRoyDRg35suo8RhEn4cnHtwsbfcPc3EyrbMuhpIhV1qeoXuB4YaV
vuPG+Ize5Na8iBlcaoY5oyqVPUPQVH+WM7TJjk4pHsCXHvX96bS6gJ0qwrEy5rkw6EZ9cXWq
V0blJstjv881ruCXdjehNMwBDRnwdad/EZYYOnHxzdbSy3O5F01be4AVJRVgoDE9PMY3fgYs
oL3jm8bcttdXhsb2ym1KJFJhnjBqNMi+pCfpjHw0593uOPT3UWichAh01A/lyH8jfxqfEYoM
90V1yWDerOPa95kMf6YabDc0GuSM/wD4OU0rJFTp3GKRVlAKek5sDQsv2mnf8cTPv5dG1taf
rUa7eSCNTU3UB/mREHJlApX9uG1Tmtfu1xtd3tyrf7jbbvEy1ivAhivoWGQ0n86nvrwQ3nQc
dtrzdduhjubKS7is5dVrfW5WRkAFPbniYjWDTLDTPHPv9ztbbtbsDNaoimssdfcgcN6tK+lv
/pxkc313b7cbTdbdGdwvrXeFKj2r6Jfav4MsmYALrFcmDZ4Yemb2G729Xls9ymNtBOKC6Vda
pIclLr10+OAunZr6HYd2kS4uQ8RYAXdmdYHgydNQ8RhK/wB23OZLeV032C/imUgPGipLGTnU
xkAsrdDTMYhK4od+2r2Nvmmk0mMOksag60LKF1le6k+GM0eKveL2xuduuLdJB7qS+mhqrANW
qkYeYumeDD0rXrWhw4txu9i3Xa7ja4ZINxTbdzgCi8tLkqsNwi5KUYj0t5jPE1od33rbLndV
niug8Dw6RLoCaGBzWQjI+TDriY+usQ8ziX34iySqxCOCaitRWoxKPQNj3y23DZobaHeF2u5t
SRcW9xEvtNRQFaNmyPTp1GA/VleT3Ms14RM8UjR+lZIqFetagjqG7YlKoQX8xQ5A4QtdrvY0
sbqFnKO6ltXQEjz8aYDYk41ucUVzcpK+kyx6UY51K56anp5Y3WcueKuG6nt7r3o5mDA0Eik1
I6HPzxmxra9D2fe/6nsVtbWm6QWV3a6lmsrtRQhcleJqigYD64zhvqqh5CbTlFby6iiOgRPe
WwDrEBUgtpykBr+zDikjv3a9v7O195twsZ4ZUJiubYLT1A5MgOpQ3TyxRjNrziTJfHvpB8et
PLDHSx38fmjj3GNnupNukUEpexjUAT2YDM1xWMytdyGz2yazke/ktDfPFWDeLAhSHCk0mhBz
WQdWp1wTWLsCbOw5PY7bcw30FteWKGO4tLhgtHUDQVcmhRtPXFrVZrfb1o9+t7mv86zaMOsZ
BKGJwzAMuRHhgY56v2/w7PkTdY73fVntbg3FjPDHJG2okM7EmhU/mFaHDjra49g/TX8E20z3
MVpcTj/40s1REz/ljZh9vhU4sZPsGiwur3aNzcWU059tpyRJFG6qRRqdVNfuGGtT2NeqwRXE
E8t1GZrGBra7t1cFljbSUuIsyJIzT1AZjGWcwe1XVpJZRX23zwS2qFjf7XIB7sDBvU0OshSH
6074takVO7X9iN43QRTxBpLaOS3khJEbUUnKv2stft7YRWGtN0v7VpJIJCksyFJe4euRDeOR
xU8vVrW/n3na7CTa7m0uFghMU8F2QksMgIJjqStQ35WzGCHrx5xv8s9zyItOsccgZIpYgdCA
g6aEmtMjnguqOjm1nZ2V5aCymla0miLx2cxq1q6kB4QfzKT6lPhhGs5KQRSmrTm2IaABdBYG
qgE0GRHfC1KVaBKeZKdDQ4qaSsC2eWoV/ZgBNlkPVQZEd8SLKrHwzPmcKDVV6+l6mh7E4gdM
qMBQk/8AXAiDaKmlWzqfwwoI1jR0Ip0717g4CclgAx6DKmICqKFiadFGJERVSv7/APHAjLKx
YnPLq4yxLRsWJAK1TtQ9CcR05zFPzA0p4nCUZqoyAJbx+uJFlqBA69a+OJmnLHSTQDsHGdO+
WIQvUUJHSlFwNGCgFj1JHqr2xaMOip0qCV6jtlhMhmrWvSp6+X0wEmJD6SuVAdY718cKMa5k
KKDviAQSoOeda18u2InLlRqpXuaYkiUepUAqCCSfriA9KuStPT3B8sARNqp6wK1yKmmWJGKq
FIr6jQ0/0xEDoVLUPpNKeGJH1MuqtMwKZ5Dt1GJBPtodYJZa5V8R44FpFqSDXmpr+HgcSLQH
VlBz6AjpT6YlgGZlYkUyAAYjLEtD7hP5fSAKH/P9uEECGABB9J/Go8vDEQk+smmYNCvWlcCO
W9RNPSwoF6jIdfrhQaMakADodX+VMSPO1CSe2WWfXKuJUwyRqZkAAocq9xgQTQUap1CpB6im
I4iCsXYMfTQnLr+GEYkQGMerNaelh/niMI1OQJBHXL/DApAChK6SWUGlKUBr9cKOhCII9BBJ
NSB3GIUQANQD9uJIzRRqyYVyp0OJFUVzy7AeH4jENSAqrHV9SB6sBhtZJ9XqU5gUGY8cJCgL
SagKqTWnXEzcFpYTUWpAAy8TiqEQgp+SmZJ8PDEajYkHQAAQexyxIWt2CigDD7gOw7YqcMQa
6kFWGROCIglASTQVy+mNQYFw7KpFAQTQ9KjxxMjorENIdIJpo6dMFX/kOlivXr0A6YUQGoAs
dRFKN5YjRBiQCRQA9e+BCi0BqE0Fcz9cDSMF3Ypk9e/T8cIOAFFKDM1J8KYkcr6yASQRVT0z
PXAqBKV9VRSudaEjCjCshBFaAdTX9lMQ9KjBhShbqR9cicFH19EzSGmkafFiK4WqQkI9LCuI
GYyiYVYLToPI/uyxE5YFqq1WGVD3GJaFm1ilfUaGoGRpiAkbUGANCpzY9c/DATNpB09HcEBc
SPFT2wSwr3Wn+eImkXSCVHXIr5eWIYIksmfVe7UqcUFQqqlyTVs6Bq/uphGJdFE0EVAJBIzI
r2wH0kjQFioCgEAnrU/jiIMkalTpqcj1xKCahrQenpn0PjhSOSh9OSgj7fEeeIWpGVDRSNSg
Z0yriSMkgkBSCT6WJyAp2xI4Q5rSjDoScv2YQZFqc/tPQjGdRNpCFe4yPgPxxKm0sEAb1L+U
06Z9MTUMwjRasTVTlTCiqqAALqPXUfH8MQoi0j/eQBkAAfHFWflGQ2tlAUKMlYA1J71rliaG
ZQFAY0Y0H7cMhJUVTRWJAJY+By8cOGQDa/TUDXWoJyFPDEzbh/cLIaA9aVGeBqXRwuQKCrKe
p6Z+RwUiYEICRmc/pXsMCR66nSuWfqJOf4Y01yEvKDoVq6jXPw8MI6pvafoDoB7g5jBrnaIq
QAr9h17jwwLQxsY3zGRHXKuX+WISiD+5VVqFOZYdsRl0lOp6lAY61OWRHbLxwki7VBAK1HqF
O31wEz0Ede2n6k1yzpiFRkMSqMBU0yHQUyGeACUDVQ1ZgM1HYV61xNQnYaiUDBSaE1ywo/pC
BUPpBzpmSO4FMQl0DEflXUOoJqafQ4ikXQ5oCDQduowpHQEClXIPfIinYYBgtIRg2S+YzOA6
JRHoUMAp6gjMD6nzxNaZggfSOtMwM/8ADENGAFZmJ1MK6WP7MQc1zqCqJFFQKqe9T3riGstu
ZZqMakE5A40XJWTx7V/5YksdoRTewh5kiQsAztWijpU0x1/m31cmvfNn+I9yuePrd293A8hU
sCJUKSKPyr3UnB13lceP62n45woXkdxabhGbe7hfQ7rQtpI9PSoOM9V1tV288B5HtzLJDELq
01aUuYZUdK1ppIrqBwSsC2/g293yt+jmt3mBzsmlWKbzFHpnitjPNsqiv9uvbKV7a9ie2nVi
pjbKpBzpTr9RjUbqCO3edhFF65GPpH8R8sbjj1b+Gp27ge/X1iZLN7a4cZi0M4jmrT+F6Yxb
Id6xxDiXIEjuDLauk1qSs0NBrQ/txWxvEK7Fu8lsk0MJdHJzUgstB+ZfA4pjNi3tPjze76DX
Yy2l7KgDGBZhFcM3dfbelD/ji+0MtR2fE95nuGt5Stlco2mlwxiGodV1ioritjWujdeH8r2o
xreD27OU6Rcxy+/Cp8CQTp/HGJGJtq7k+I93fY49xtbqCaV01qolUq5/2sSueH7t3lhZYJoJ
zFKpSZMnQ9Qe+HVia1tpLmbRCCznwGdfHGfgWRruPfHu5Xk6TNHBf2AVjNFDKDKuX54jpYZ+
GNfZmxyXXE7az3+C0tnCCeje1OdCtQ0K1p+ArhnQnNDv3HDb7gtvb281vPJ91rKVah/2MhIK
4zy6RJF8db7ewG4sZrS8ZTUwRzKJq910PpzAxvrqM3x0cd4Jc7xdTxfqI4Z4AA9pK/tyoeld
LDNT4jGfsZ65914Tv237mNulSN5pWPsSI4aOQdqN4/XFzNY7tiQfHPLzHM0VostxbjVJYCQC
cDxCtpDf/STi2Lnq/pXbDt9zfbotoY1M4OmS2mYwlj/Dq/Kca+0x13xdLxuzj39Nra3ntyyF
mtpyPT2AWWtGqcDG2JNz+Pd5geV7FTcxw0LwKR769ydLULAeWM6rFDNse5R236l4SYmBLMci
lMqthMv7SwbBuDPD7vtwpLT27iRgI8+gZu1cVqjR718X7rY2S7havHOoUNcwiRWdarXVGagO
mDVrP7Dt/wCt3VLYhZiQS1u7+37gA+1W7N4Ymku9bILe+/S28dxERRminprUeZHUeYw80X10
/wDo3IhY/rLX2L23Hqf9O4dkXv6DRsvpgc/hDZcN3m6hS5CJHayGizyNpRWr0eldP1ONWnn1
ybrse67PP7O427R6vscUZGHiGUkUwHHHmetAMQrScc4Xcb7CXt7iESCtVeQK6AZepD2PliZt
V1/x3erDcmsruL1Aeh0IeNlBpqUrXLE6R2S8K39rE3lpGl9EorotpFaRQOpKHPLAOpEVlxPe
ri1jnKrDbSkUnkbSFYGhWSv2/jjWnQXPEd9t7o2d1AtvOw1wu5BjdD+YOtRTExU7cD5IjqBA
riVdcFHAE1P4GJpXyOeJn/wjseIbrfSiJQscxqDDO3tNlkQNVK4tbcO5bJu+0XL2242rWxOa
N96Op6MriqsMNhnjiOWRFGp+IxqMVpeN8W3C+K3a2D3dutNJH/jOXqUlTVfrjPWKXXDuNhZG
8eLb0uImQ/zre6Hqioc/VkWHmRjEaxNc8M5JBam9/TC4tOpmtXEyhT0JpnTzpjUsOVFacY3q
4SKZIwtvNT2JpG0pIxqNAY5Bsu+N7GPXDue17ltt40G4W728wGavTOudVIqGU9jgvoxA7VC5
0C/4Yofstdt4xum46WtmiLAFjE7BGNe4B+7GKYgn2bdre5NndwG2uK0PuD8v8eX5fPDG46rj
i++Wtqb39OJ7VU1yzW7CRdNaajTth+UbbeLbvuQpaqjMBqWJnCtTxUN1AwW4E9jxi+a/k2zd
LWS3uNNVRsmB/iXswxbqv+XPfcS3u0ga6W2NzZxVEs8R1BFU9XUZrhvQkiof7OlaEEgdRXAX
RY7fdX0pS0jaWcDKFcyR5Dvi0OybjO7xWMl8IDLaRV9yWL1aSPu1gZrp88Z02+Hg4vvdwIHi
iVY7kareZ2Cxuf4dRyDf7ThG6guto3Wzu2tr23e2nj+9ZBQj/d9PPDG46r/i++Wtob+W1Mll
oDe9CfdUKe7aa0Hni0VHYcY32+gLWUHvNQukIYCQqP4V6nD9mfqFti3NJmRIWmlUamijB1in
3Ar1qMCzCm47vdpatfvAXsaljPCfcVamnr0/Ya9jgP2Ft3Gdzv4ves1SQNTShajEnp+GDVZq
KPj+4vfNZPAYryE0aGX0EV6ZnsT3xrmszlLJxbfYHlSSzkWaBdcsIzf2+nuKB1TzGK1r4VCF
jTqOq1OIflJDHLcMEHQEKpb0gHoPV2xeNO+/4/uu1xe9d2rJDUUlHqUV6VZagYrGdPb8e3ia
RYobZi7p7sMZIUyJ/FHU0OA2uRLG7W4a2kjaOZG0SJKNJVj2avT64cUro3DYd3sY63lq8cT0
BlHqVa9AWHj2wLXNZX+4WJZrS4kt3ddJeNipNOlaHOnbDfRJg3O4bnOZJJDLPIwDyytTM5Zk
+OM/CXe38OvZv1dveI9jcwqpspGAMUhNS3q6MKeGHVqrseOb3fK0lpamepoFRgTn09Jp1wm+
ONLS4aYQPEyThxGUkGjS3ShrSmfjiwWujc9h3fb40a6tJIkr/wCQglQTkPUOlcMZvqSz45vd
2rSW9s065KBGQWJpX7a1z7YzWpHEu3X8nvLHbOWtiVnQqdSDvVaav3YeXPrfwluth3WzRZrq
Bkt3AZZRRkFRlVhhtEl/KfaNqF7LIZUZoIkLTSx0LRns5TqyjvTGPy6yJ9t2H+oyX1tHOWe3
UvHKoLIwU01f9uGnXDuGz7pYQRTXMDLbzCqzr6ozXoNQyFcMgveJbTjm+XkYa1tDcW5FTRhq
Hb7etfDxwYtv4cUdlNNdGGJWEvue20b+nSwNKNWlPxwavrfld7bwu8uGvre6jlsp7eJZLaSR
T7UpYkldQ+nbEGdaiqGU1BHbsRlTCbQaciBk9e/ljWiLHfNoayMNQY3lUGe3kFHicrq7ZMj9
VIxT4Ft0MGw7uzNGtq7OEEoTT6tBFdQB6r5jGK1KrZkkDZLoboykUOXUEdiMOnqGoFOnrQZV
zBwCLeHinIbnbTew2bvEBrBShqn8S+IxStVDtG0yX8N2FYrcWiqRUVGdQQfDphVrivdv3GwY
G9heJZRVSw9LU8O2FjrU8GxbvPCssVlLLCan0ioqoqV0jOtO2M1vmK2jqaEUGqpB6/jhGSEz
E/7h2r5YlIFVJQhfuB6Hv+OJGZm0Uc9RmfI9jgqzIbWV+3KlAKdh0wYtMXcVGo1FdZBpWnQj
F9V9sRNJI6mhIoa+Fa9emFqXQhjqHfxI88IkE0mmMAipOTV6dcUPXwYyatQfMjrXM/v64aJT
XU8kq6mkaRgoQKTWiDJc/LGVUanKhJ1jNvE+A+mJkINQAmSnx8cTRm9xRXJywCmvn1/ZhIzS
qjwGRGVf24zQRIBqxrl0H/LEqHTRdI650bEjenTpzFcyT4eWE4cNQsi0NBU+P1xIQ1AsxFT1
J8R2xIj9le9M/piFOiaqKOhBz8MCIaTlpFRSvl2wItIGrMAj7j5YUJgakjLxp3wIzuka1bPV
Sg/1wxfAlYlSxzDfur44iFKEGlMjmG8u+IkzVDADxNT0+uIBCsEViaDuvQCuLRggKov5T3Xz
GBYcltFENS2RxIMYFaMcx0/yxKCJyrWpXKnfLE0joAw/MKZ6uo/ZiBqaaivStKeHnhQ6KQ3Q
9x36YEjNR6a1BGdOpPliJ0Nfy0pXOufhiAMlXLLPp3/biVKhLEMpAyo2IRG5FXIIFTmf8sRJ
qGMEVz/xr4YlgahhpOVO2IHKqBSgNPHwxELIxH8zJWrTTmKDMYEYB1Wur/X9mJoLsFIKZ1Hq
OffriZtAgOo6mVQSCB9MSDqBfUR6vMUoK4SSUEhFAvfyoeuIaIamUUAIFendRiKKpJDDpnWn
7sQIe3pL0NFFKdeuImDBRqAzr0PWmBm0tSVY6SvegrSmFaQiGkGnTOnemIlI4I0U1UNfwwka
ufSvXsD4jBiCBQ16gdR2r5YiEUd6UoxyHYiuJkgKKUbqPvGJFSMAKG9Qyr2piQa11rWlSKDp
iQ1BVKHIHqR54kQIDgdSooCRkwOWJBCsKU61pl44mThmJJIJrUChFQcSmmf3HDCunQMjTv8A
8sDRoiQajt1y6YaBKQTQDP7ifHzwGD91TpK9D5eGFByZtIXIZajnXEsOiKAFBGpT0GIBZkBA
86/64sFoJSuvUfs7f5fXELDhhorQkg0KdziFIK+s0NV7jz/5YtMFqXP05rmxAwNGGgAlahyK
rTqcKOpYnPoRn4YiQVqEUBJz69j0wIS6/b059KE9/wDlhxB9xVFCOnRfE/TET6TqBpn1Nf8A
DEguwJ9XpXOgrT9uIaYArV3BqaDrWuJEEarFR6h3OWZxM2FGlENWBb/HFghih1UU5KcziMg2
PUnp4mlD9MBAVZpR1H+mFQm9FaLRVyJ65V/xxEcgQLWpIpXV0/DBjV+Ao4clQv8Atphczqyh
qEe3nSnX6YCBySfQfuyNe1MShKTkFFNINT40wHCZWdASadz4mvTDFIRC6mVs26ADp9fwxEyK
DIwI8NL9q+OEA1tqMZWoXqf9cQ0Wr7mUmlASfqcSLqNbE1IooP8AniAQ7IGI9PYd8RxIHJZR
XPtlka4qrSJUNobJVPpp9uBaY0X1EF+uR7nCQSCKT0gkdiwyxYjrGgTSBr76sgSR0rXEPrBI
xKmrBTQgeNaYjiEgyONag6PtJzpjQiSM1FRl1yPWnlhbgIwCSTl1AJGAWFSmnIZj1jEIMkii
6qj8wGDEAu32kGg6E9c+4AwE4YNq9AFMqdST+GJAjjLN1yHSv788Os+0TFaN2bpTsfPBrOG/
mhq5ZCgPUYtJh7QA/M7VoueVBiIhGYxppo6VFa1xaR0YpU50/AU8sSiJddQFFe5ByxE7Rsoy
IZjmDTrTtiFCqqupmI1GlD2PkMCxGKajUaSe9cxiUSr9mls1Irl3GGEBdIyRmTSoUdgepxD4
IkGFWBIFK16Ur4jEkiIF0saANnTpmPGmJrDCRCCgAU0+4/txIIDtqJcFnIorChoPAYLRIkb7
ii9e9cEasASNTBK164WTrqGb5J0A7U88GjXPOWYsV9JXNQ3Wnhhgxnd1RgdXjkxPj407Y1je
Kqp/iHhhS02mBWukVwNLePX8cdODeX0HwTcdvk2JtsW4SC8SOQCOYhVaq+goxyrjHbn1M+D8
PnWykubG4uP092ktUQvmcq1Vu4z8cPV8a+zu2fc7VTcJczIZ4pWaSMnMiuRFeuOZq7febu5t
Irjb0269CE6zIEE60yINaNl9cQYPmW73W5TxrLDBCYajVFn1NakE/swxRTbVcx2u5289wD7c
bDVQVqB3oMb3xnPXrX9cmuLSKfboduvlC+uQhUnUDL1Zgt5Y5NRUR75fyb3+sja1gvYMntkY
gPH10urHPzphQtz3PZNyVLvbol2+Stb2yViysB1MQNCPMYZ4ZFptNxFbG0kIstw29zlcyANL
EPCgpIKYNTt3K42uW7/TPNa2jTofbLPqhYEdNbd8++IY873TYt42pZAlwoiYU0iSqMAc6aSR
ljUrEuVpti3CzveLS2EVxF+shiZBaynQz51qp6VwWetyvP7hJkcrPUSBqsa1NR440evXZsF3
bWu6QXE7MI429ZWpop8RgZx6BbWkUm+Wu6W1/B+kVGP6hHJIJzUOooR9e2IYg36ya65ZbBby
DUiq5LOCjas/uFaHBKok53tt1bmxnSddCmn6q3kWQg1qMh0wTpoYVN2t0fe4bS9SJM9029zB
eRgdNajSsmnuCMWrVJa3Ntte+6J70zwSqqw3j1JUMftcn1KR4Y1gkx03EcVnyVJLm4iayuCT
Hco+pK0qD6cw2KK3HHyzd79d0heO7lh9pFaEo5qPOvnisE/pB7LNc75vkVzc3UCzgqJDMwja
QDvUUBOWD4O61O8W36Te7Oa6ZVsWjKi5jdXzJqAaHBo/8neO5tN9Xcf1kV7Ye2FCxSB5EJFA
WQkEL9MNaKW62zcrW+sLS5iadUK6HIUOD6joLUDfTFPBit3yCGLjYRbmOVowqlU+4FOmpcVv
qzBbjcW25cS/+Bco9zFoMloSUmXSPUQO4+mK0qLiNl+p3JW/UxRujaj7r6WPkuXXGtGtJv8A
bttfIoJruQGzltwhuY2EqqT/ABaa5UwapAWlku374+56o5bKWJlWeFg33AUbxxSsyOvdNwhk
2i/awuo3AT1+01PSTnVcjSmJqKPdZrabjUbRSrKUCBgrVZDp6EVrXyxYKymrMVoKeWNYxV5x
i8to7wxyyLCJchI/2jzY4KeXZa30u28iL3Uo9lnpFOvrSh7gfw4Z8NVo7rebuGSW5isNvnid
fTdQMQfCvpIYV74zTI47ndlm2G5kjZY5dJEturVAqfPqDhFqj45u8huxBPcVgZSI/doQpGYo
e1emKxZrTxXVqUJMq6YWrJqP21P3Yzq+qO+mvXuXhjFpuG3yRLJLaXL6HDHo0MgOoE+Rpi1Y
zXIrSCEAWs0yW+XuWU8vu+21Pyf7TjXNZv8AlQhyRqPQZEdca1jWi4hud3Zi4S1vGt1f1hNZ
VXJ6imDpTaO13qWbeTLulwZGcNGkrmoGYpqPhTA6RZ7Yj7Xvc9zJKGsZhojlhesdCeuXUYsO
ujeLyJtgnSxlRo4pAGCGoHqNBQftrjJ1ip9yuLm2W2lcyxwsfbaQ1IB/KG8MbxizXMqAkHoO
hHY4tU5j0DadyS447FBaRW95PakA20hEcy5k+lhmwxnWrFdf7lFfXUEd9GNvkhJEUpJZFPge
uXnhlMur7ZLyk09le29qJHBMd1DIUSU+K6fRmPykDAz0rdxSK+26SKzkRb60kKPCTokXM508
PMYbGPhV7Q15bbiLe9lfUR/LEjkkN4KW8R3BwR0t13ccv5X5FeLdXFSymNI3NC4B+0E5M2Gi
Txkt1j9ncLpIwFVJWApmBQ5YZWfVrw0k7lIGcQJ7dGkzOlicshmK4rVi+sWu7Per2bcplktp
1MS3CSao6E1pJp/dXGWgb7eCPjFwltOrRrKqTrEwZFoSQGHjilOKbaN8LzpDudzUaPbgllq1
CTWjNnRe2eKDVtsPubfut088hjs5xoi0Mfa9RzDUyoR+HjhtUd+3Czfb2jupRaX1rM0u33Wo
oWhVsvbkHpah7YBzXIbu4k5KpudNvOYQqur+mYg11qa01MPPC11Gfm5Du1pvF1cQ3De6zskk
fVZFGWl16HLrXDHJcbY6f0mO8t0juIQwN1aKdM0Lt3ShDafpjLcvi33FoJNygkSP23uIKKfc
EiSKDUaCc8u4w/hiX0EFzVduujcV9pmiSUOaoSCPbqT+7A61iOQx+3vV4gACh6qAKVPc0864
1+ByWwpE90YnmWGU09j3Ke2x7qx8MAb7j00E8d7t01p7MyxVWASH25NFalFPpYCua4KlJv17
KnG9vazmpGH/AJRGZRwtGoTmGFKYOVjk49vcd7uhbdfbubl09oNcZrKvYOcvV4HG9GatP64l
mk1rNtkiWsq6BC0wYOqnJa9Mu1cZa8YEhA7KB6dRqD169DjTNXvF2t5JZrSdlBuEpHrICkrU
0z8sGNY03DpN3RdytLi4Yx+3/LtHfV6Fr649XYf7cBwbXlsdj2+6s7QXcqAwSzwN7cqhVB0S
KMz0yGJVQ7pvi3m8fqEtV9yVP/m2tyRIkudKAnMUHjhZTypczxyRWktxZSBATtlxIZYJkHUR
O2SkddLfhiMg9wNw+y7fd7Q+qYqI52jfTIPbXIOAQQVPji1XYh47e3s17czXNwf1oiAjZz63
C169zirM9VMXKd1tppqyCW3nBS4s3FYWFf4PynuKYK1Z4sOAwXY3uO5hJVfbkjQVzDOKBWxU
c+O/j0M9tyTdIRJ+lukZmjXJSSCahe1MF+R1PB8elu67hYbqQkF8p0wNQQsxqxCD7Q3fzwz5
WSx1C9sU2HbL22tDPKqrbTzQMI5NSLUiUdcu2JuRXf1C03fdrjVGYp7mERMspFZXQkg6svLz
xaLHfwW63ktf2t5Mz+1ECLaVg2lVJBdK5mgHUYBPY83NF9IOsdadO/TG8c9FEHlICqASQFBP
c9sWu3ONhzK3vm49tDFDJHEhRZCNRGhB6a9aVwz4Fm10ck3a6tdo2CWG5OQUJcLTUvpWqhh+
U1zXHNnu5VDzJpTuaTMKz3EYeUgCjHpqJGVcaH5Z9wQv+IwNY1PGd33iPZLu3tLpg8f8yKEN
mFNM1U9q9RgbLh+43Jn3G7Mhe8dSz6gKOoUhtakCooKHGhfI7OMbhJvG1b3Y3h9+2WFJbazI
BSIBWoYyc1ANMIli1mvYLTadtv7WGQvcRgST2p9DtEFU+5nky0z74xGqw3LLyyvd2kurWN19
9Q8hkAXU/wCYgDt+/Cx18qJWGYqM/u/1FMKlC/QEV1Vp9cTRA56M8zTPtgUgJAoUaT3oAPEY
h0bIVPYdfOvhhZsiIDSntAkA5gk51JriakwaNmVCkADr44Goc0cBKigxJAGoa6emRPXp4Yhh
81BNfuPTCCIc1qSKnIHAZAqXYjTXLJgMjTvTFq+oq0oGNB0I8MWk1R36nv1FMFAVZFzBFATi
BBgQCanPv0OJaKpocwtOqnPC0aLKQ1FKnLxzyxIUqk+qMZAU8c8Cw2QcVrWhz8/HECSQgKB1
Y1LeAxEmZUYFftY/cOpPauIU51MlMiaVOJHBoSCPKvXPET6QxVSBXPQD+/AUlWIGlh6sqeGC
KgNCQurMdfwxoacACpIoBlTxxAzNXr4ZHqD5YCf0k0ait38z44BQEA1DVABJqOnli1D+0BgO
v3HLCYaMkLrZQe1PPESf7QdORNCfocQM6giq1z6jEgxoykaqtQGh8cSkIgmo/N5+GIhZKVqD
XIEjEDFdQYVoDl+NcCA+rWooadKjucMRgzqdLLTVmSaZGuWLUHU2kKqCuroMzQeGFHZaetj6
a0HgD54ECRc9SnIdc8SErkAL4Znx0nvgOBkcLUhaPSgPgO2IIfUI2HU064hRiukavUNOZGEg
IIoWHr7VxILRrqbWSSBRadB9cJwy6lyy9OWWXTvgFMUPt6ivQ50yFMCC4YjIaf4q5YheiYMU
Hen2n6Z4RC1O2lmIApUN5eeEwl9zVUUVQfUCcj554iKqggHt1oMhiSM6gtQQNJyz618MB02s
awukgKK+52NfLEqfM09taHuxOJHkRqEEGpNajIk/XyxA1aHNKV7HxxIwRdTVIA6gd8sSF6Wy
Pp0/aK/jXAgyv/LA/M1KgdaHwwjTesCtKKMhiGHEmnOvmW6/gMWGUi+ulFNDmzeJ7Z4TRVAX
SaBTmxFcqYMZtMpqpHU56WGQp4YmpT1ZSCCCuYDDriOm1CTUE6KQa9CMSCwzB1ZDIADz8sAE
CvqcjTTtTrXCj6SaAioGZ6YlIdlNVDEAihr3xLDFdJGVCa0P+eBSHHUacq/l8aeOJIpDRhIF
Ib8lfLqDhgSRuSM6VapIHl2wrQh/Q2ogdaKMz07YqfwAIASqsexJ8MQkTaFKEgEEmmpu3mMF
OGTUjDVUhfy1wIz0kqQSaHM0HbsMSwPQ1IJalABhGFJrMYatVB/fTEqEP7Z9ILNTOmFg6y1G
QqOgXpn54moWktEx0+qgy6Z+QwNU6gjTUkk0FfEjCBZmNy5FOxrnXyxHQqwKEN0pSnf92A6E
nR+YtnQqOuJi+HPiRVsgD2pTDBEYjHuAkmv+I74G5Bks1CxoF/bTDhEjFRlmKfjiMMXEcikD
qPtxAZlYsQKUXqRXMHyxJDSE0FdJrl5064MZtSAr9i9fyk+eI6jZSwUE1alNQy/diGiIcK6t
Sv208R5HCUCkIRqBrSmRqfKn+eM4xokVqAFc6fjl44lJfylRDQZUoQTnl08PDC3AOACCa9xT
p+AwrCjIHp/YMCIH1Mukah2HicSChGoKcmrkfPCYkkC69fVloARlmeuFUEinUSar300yp+OD
WergdGssy1p+cfvywWsabU4aoUBSKE+OE/YkJZqsat0A8PA4GvRV9vN6kn00/wCPDEaYFiw0
A9c27AeOICdA3ppkOn1xYaaTTHnoJFenniZtwcTUOfoB9RHUk+eBoJlFQNILHoPEdjhQRINZ
RWIPevY/jgJpNOuhUmTNf+eJaYOyMqdRX016U88IGdCioyA+1SP24CCPS5ZqHUBhEPJoUBn6
rQKvX9oxEMgieTWo1fxUFDkMCpCMtGNP1KnI4VJh1bSzwlqavz0qB9RhJ6IUoraqde2WCqCS
op00jpq7f9cBA5YMMqaBmB/gMQH61UPSp7aqEjFhONZSjtpB9VR/jixlx3ioG6sEGZp1P0wx
M/ukupqaACOorUn6421Iq/cH8K9fDGQs9oY/qaOenWmeXljfNrVtemcf2x7+3UwyIZ0IIics
p8hmKYOq5SiubV/ckVtXvrkxqSQD0ofCuBXkCq6sC4JIzLEGteh64VI6dvt7q7uYoYwYmaqq
5qBQ5gGmD4G11bhsm6bfcJHuMdDIarIhBVvGvfC3gNx2m5sYhKzrJBIfRJE1a+RGCpxo7p6o
qCopQE5j6dqYpGfsltpZZJQrOQX9LSCtR2+uNYZY7b/Z7qwVJndZopP/ABzI2ujefcHGc1bU
e0WVzd38dtEximuCQjPUDIVFcVjP3T7ntW67fO0N8CrutUo1Yz/2/XFD8uUzyKsUdaL9xFew
8caxuyF/NBZ1BQD7mpSo8sZrOuq2tJr26jhj9csuSrX95J8cUiHebbfbdMI7y3e2Y50cHNex
U4U5mnaSQqlQq/lBpl+HfFORenXt9hPegW9sGdgCQF+9dOZPjhsEn5dN5aTWkRBk/lDOnQ1H
iMYxvTWm1XdxYtdwToTUyPbhtMmju4BIqMPw5flHZ7deX90ttbkPM5yjb8x60Fca1s13aXtn
K1tdwvBMjGsMikHLuMK6mxGQ65BSFPZq9/AntjLlAAurg01lqqABUnvTLFYdxPFO8v3E0r6h
nQ0wWNc9+gmcs5Gpsh06Dyww9VPaLJPMsRlWPX6TI+S5/wAR+vfCzo90gurJ1juKajnrVgyM
OxBGRGGTToFd1CsNRDii0y/EHvjNjUoBK9asM0Ofj+7FgvWJBPKTTUQvVR5eFMOCd102VteX
TslqCZACwRT6iBkcvHywY1K51EkZaJ1ZZB6pEb0sB0zBzxYzpKsjJo7DMUy/64dUoSNBBY5s
ARUeP+uGASLJWtCDQkqMqjr+ODqGHMhICtUKOx/yGCMddOm0hu530W2p5AtQFzan07nFHSde
IT7olIIZW+19VQcuoNe+GBJY2Et1L7MGTAagG7he318MVpKVLq3cpMGRgc9Qpn5jApQB5KCR
c6LRG8AewxYdAJXaTUxrUUH1+uNOV9omGlh550PhixYQKkZdswewxEOmSQVIBBGkd8hip5dN
vb3lyri1V3Ma1eOOpJUHPSB1/DBXSodTZhKjWQHHTp4/TFjFldN5tl7ZFDcIdEo/lsM1OVcv
PGoZ050opIXqcx54zhN10lSfqCRTzFMQs0xZ6gVJC+JriW11Rx3lxGyxK0iRKZGVKmgHU0HX
8MS1yvIWrJ9xqCJDUkedeuECeaZ1VmYtQ1qxr1wieCggursySwhpPZzkIzYDxA65YMa+3oH1
k11amPQ1r064lYks7y5sp/djkKSgDw9S9aHyxMew7XVw8zvUgOdbLUgH8MWOkRF3o66qBjVl
ByPkwHh2wfVi0Og6QDQmlaYhIkLzBCFYhR6QoJpn5YsadVnu95aW5tletuTrMDZpqH5hXphv
JBebi8yImftodSAdATikXXUBNt95DEl2y6oXrSUHUNXcN4H64a52II/eD6lYq4yqvWhxYN9d
draXl1rEWp5Lce6ACSyjoSAcc63LrmLy0IBOktVlqaMfEjx88LQZfelDzsxcAhXk6mo7HDAC
MsCQpIr1p4DPPEtwbXFwskZWRxpOtKMcqdwR0xQUjcN7XtgH2ydQXqK+ODGrQoz0Ir6e1MsS
lSPPO6KhdqJ6QNRyHXEqhYEtVfuPUnvhZ0ILKDX/AA6HDKnSt3dqFYSsrg6l9RqDSmXcVwVu
BiuJEf3EYxu2baSVJ+tMDNBJM8j6nJLE5sTmfPGozgnv7t10GVjpr6WJpTpjON2lDeTQ1MbM
AR6iD1Pn44sc/sb9TcGX3nkJn6mUfd5UOF0niNuuf5vzeOA6lju54SSjsgqCdNR/hiGCkv7l
7gXTzO82f81jV88uuLB0jF5cGMIZGMVSQpJOZzqBjQ4ELu4jLvFKyM9Nek06djjDpqCSWV5N
bMS7GpLH/PFjOpxf3gdJVlZWQEKwJrmKdcB5mOWhLClM6kjGmcJ1Ck0NVbqD/hlgLqO9brJA
tq105tl6RliVr0BAxHXPJdNJbJBJIxijYmKMnJWIoSB5jriwdBa6mlRI5GZ40qsakk6R1y/E
4mYhoNJRgR/ngJRyyQzCRCUlTNXXqCPCmEzoYvrpLprqOZkmYMGcHrqFDX641KLQ2t5dWsnv
W8hhfSRqQ0OlhRg3kR2xVYKHdLu2UJbStEta6AaqWp1A7Yw3rnuJ5ZpCznWWqXJ6k964WLUK
ooGnT6hUiuWX1xGGGSsR1Hj0zxEvvYkgjKozxUymcAADKg6UwarEBaQyZmgH5e9fHGmPr6KV
ULK2qvYV8sTYdWk9aknxy+mAacKSSe/n44DhgE1kH0nx/wCeJYhBPXox7noB1GEYM1QBmqWN
StOlPHEsMAELEZBvzDscCDoIY1zJPp88JhwWqrVoTWo/1wIKBtPqAqR26dfHEDBs60OlTlll
ngUgtY0kkdTkP8c8aa0zOxI6KadRiAfcYgkNmOtPHExTkhmrQrl/jgI61SjDPsOtcRkI0LjO
n8WAw50p6Qe+bDzwqiAUPoBOrIr3FD1wDRCpqO1agnM4joKOpo/WvUYlaTKAykg1GWXTCyWo
liinqa5d6YkIKemR7/TAT6h0OYAzODEFm11UDSDmWOLERDlCWrln45eGEn1o7A5qKVAHjhRh
IPUBU1GKolDBADX1kZjOn1wEzhatQk6ciO1cSKg1hz1NMzXqBiJnIObN/wBv1ws1CxWvrqDl
p/DAiJIAr+b7u1P9MRFIQSKD0dQO9RiCPTU18616AeWBEwBjYfTsaVJ64VDBVDFSQSPPMnxw
qBfqCPu7jGSHUrMdI6HPEAov3VyJ6H/HEIaQqB2qMlUdq98LVBpJ0aiRnmPDEBiQFWL9WOnz
xLQLIi/jkKjPwriAfWuRIqenhiWl6iwoahepPjiAWVtS1qBqOrEcMwq1NNeoXEMONRSpyIOm
hzJPfEQui0Utk1cj5YWjq0egAmlc9Pn2IwINCxAoaZU6j8cQO5iWMIoLPXM4hpB9ROkkkjMd
adsGo7+8iAqtQMwT1p4eWEiZEdQTm3dunQd8WFG60ANaasiT4eGMikDpYrppkNNf8sWjSIDm
hB9AzXsSemHVAtnpBbSg69wT4YSMawmXqyB007nEjFKMAT0+6nj4YtBQI4UrX6eNTiWHqQ1A
PSRSg7nCSY6aDpQ0BHY4Bo11BKkitfuBqDgWI6yatPavrOFepFkq1JBRRWh/d+/E0SihJqNP
YHrTEA+sqSPUK1pWtBiVEDqIUrqbOlMqVwFD72qrDMplp8fE+eNM6QVTX1kEdK5+eeJCJAIo
ASRlXPM4VT6BnQj0/m/ywAz6WoCetAfLArRRkEZmo6fs8cBhO2kjQc8iafuxIxIqrlSCT6vr
iaggBqArlSufTCkULEyAEV7q340xMYcmjsaAktQgDqDiJOrKwYt6T0FehP8AniItRLVHRQTq
bwwjAhA6sTmQK/UYkQNCCQcx0UZeZwINAystMyKahl54FBgagpI0kd/LCUbDUxZTUE5AZkDC
jppYEDNgfUa/4YicxkUZWJYdF/1xK0wIFTIAaigJxIwA10DZnp4YgWgE1Cg5+pj0y64AIUaL
0io6Z5Z4kYlh0o3Qahl17YkGjAChGo9j54qsPVAQCNIXOuRoT5jAcECKjUtGFQCBkfx8MJww
NcvtU/l8sK0LO7SISRpAOqvj0wAw9pXc/auWZ8fAYkNvaKhCCWNenUqcSMhAbJTQVK16jEQk
slFNa/mPT9uEUBd2JBIbT/8Ay98FrNGkpzCkaaZClM/+eAIikunUCNKnocuuHROaIgBfUfUP
uYdKeGeJvBsfSI6g5/af8cSKqsCQaEH0gdeueImb2aVRjUd64ieRdB1s1T+UHMH8MQoFNEqv
3nLP/LBhEhNRqzByPl51wqGcZrUAgfbTOmA4EowcVJ1MfQMziB9D1JA6VBIxIXpADN93QDt9
c8SMjEN7detSGGWEm9t+hWijp/ywaMPoYIq0OoVqR59PpgNNRtBUigOYp4YUSaGUBSBrOeXQ
+GeIGfTQr9zDqRl0xI6qChYksCKHt+zAdAhYuQQa1zc+B6YoEsruVFFqQQKp0yz740UavTNi
Q3UKemGJDNMTQsK1Byp0+mGRM1uMqvpAoNJyZain7cVLj9v/AHH9nbxwasd2zEteRqEGstVG
OWY8PPG+YnvnxkqT7NcQXcazQq9VEi/arLUkHrUnwxf1kYWfF4bdU3BJLZJ4BcVQyDNa1Gkd
+1cZqrrtrbbt0tfavLNNMMrok1NLkoehI7DGNWGt9W1bpZw21qLqB31TRumoZHIqQNQoO+GD
6ubm80P6rbmhBRdRBUZiqn1Vr+XDw1z/AJPyxrW3S3vY7SKGUMjERD78vtYdMQ07bDbSpbb5
ZxpCr0bWR/LY+B/L5YL1+GVVvb2C7xZNbwCzuVcGdUNYmJ+00/3YZWsXnJ7GxaSxlWCGKSWV
Un0DQrBvEDKuLmqui0UbXvFjbW9olzHKTSORCzpTusg6Uxr5ZvC63eysL+Ck0SGNVavuZ6Tp
PQ9cZjTNbXsttPsaRz2yqqhzbmRc61yJYZnDrPq349t9n+uisbyaGhBMtnPFq1gg5qy9hirW
s9yPYNv2jlFmloWSK4ZXCqxdF9VPSxzofA4ubsW66/kJ6xQO7FyGOqtSKDMH6YzixghpJJKl
VPpDDtU17Y6SmRs/j6/S2v2hewjuVKHSSSrDoa1wdM9A5zcbfPvEhtLU2x0gyRhyyavEVzxn
menlccehtrrh5S4gjmaESexI6gSRnxDj1DD0bzIy+wqV361BIoZR6hmK4YJ8tpvsdtc8h26K
+BkiWN0K1AY/w0+mKVV2tbcZtLl7aa+hvLXQR+iuYdMias82XPLxxm6lHbvxe3upIY5ls4pA
f0l7KhaAt/DIfAdMs8VjP1V3Jdv3OGJJZrK1ETZx3tg2qGQHv5HxrhlH1jMkqpJUknpl0r3r
ha8XfEJDDvVqJApacldLoGUg+NRQ4rrMjVyWdjByiJYLaIQSwsXtiNUVakmqt9tfAYNas8SQ
Q7Ve7/cbY+2xfpGiLCLUTpPjEeqYN8EjIcj2uHbt4ltoG1IlKFuvqFaE96YuaqqkhlkbTGhd
upp4d8dE1Etrt9lt0csLqzIq6Rq0yhiKksveh7jGZ8j6rC6tbfcOKC+uIhJfouVz+cNqoKt1
IIxUleWW3R2EFzcWTO8aL7gXJwCOudOmCIBs/wBZDIdnFtvUZUtLAKLeRADNtDUrp8VxrRmu
1Bt0fEo7uSyQXMMbGGQg619dKSDvXzxhueM3d3O2bhSa32/9LdpQTCCpiI8c6kfUYYz1Nddx
DY2+2Ryxt/MB/luhowc9nX/AjDCk2zcNtmr+tmjtdyentT3Ss0Mvb1MAaHzwVp32Nne2W+Qy
XlpHEXjJjngIeGZexVhUfXwxM9fDrg/p+4b/AHdhd2SPbCE1VmIeMig1I+WJcXI449lG3PKW
lWOy1COG5lAZCSaKHpmG/CmHdC2k47tlrcpcRRwSJJHSS3UFomr1Ohsw1fDFokZ3cLvZYr+4
tJbAPZUoAzkSoT1ZGXwPji1q86toNl2JdnjvIbhbalKrcCquKf8A6rYb6vrIpOQ7ftTaJdu0
+8f/ACxRElf+6h+2uKUBvIYbC1hntX/nAr7UsZKHWBmHU0ZDn16YefR6sZ7W3TYod20ia4nC
/qI5BUO5ND518cY1uLO7trO9jiheBoiFroU1CkDKlemNTxz6Vey2W339/ebddwRyRQxn25gd
MgeooQR0NOtcFMg9y2yz27bUmiAMqUjcvmsmZoafl8MsSusxOqXD6kiMT/nI6CvbPpiMi2vo
4Nvtbe8gkBuCFVZojpNQKHWvVWH7Di5Zrs4rttjuO4K94VjldXbWwDRPlmHH5T4Ybcak1x8l
2CGw3J0hX/4zlTHIh1IAR0J7fjg+zE+Su4bLb7SK4s5KT19E0JpqI66l6q4/YcOtR1XVvb2u
0Q7ksSSPMB7yOAVkd2Oo9PSfGmBquHarKyv76XXB7RWPLTnpbsV1eGJmzVjs237dul1ebbe2
ys0KkrdRel1YGlVof2jEs8T7ttu3WGwQXAtoxewsI1nIoJEBNda9z54dFhn2jY0vI5TEscG4
IqtbsdSBsj6GOY64zpc/Gdkt5Ny3K2vLZpbKNPbb3AQVFaq2rIjyONHmOne7HbLPZI7uO0UX
UbLErj7ZY8xRx4/7hgl9OMrucm3zAS2sJtmP3wk6lFO6nGsYzKvd3tbR+L2d3GntXUoX3DGS
FdM//InTUOzYeT3NTWGwwy7La7lBbtdAN7e4W9RqIH3PGRQ9M8H21mcY69uG1R73JHYsbqyM
KlS/pkUV/wDG5HVlxmmcxybbt21bzdblaT64jGNdvcJkylnpUgZMB0OI/guN3sdraXtpc28d
zGNSNUZmlQCD+/Fh/DJXCxNK5iTTHXJfPwGLWAalDaa0IBqO9cTZvVQFTQVypnhZKlMydRPS
mJGLKX9Zp4Af44jTsK0qOmXWhOKMhDMBlQ16/jhImFUFPy9z1xlqGYLpZmIzIJPfEzTMnSla
9z5/5YkALR6A0WmZPXGoL8ky19BI65+FMDOCC0Q0qT0xNc30hrAXSa51z65YnTQsSW01yr9M
TOkanPOlaUwD5IavDNcsKnhBqEkgk/mXxwLQ6m7jIdB5YiYlydRFO4U/4ZYgbUp+z0+P+dcT
RpHbQCczmD+GJnoFCEWgNAa4VLRMPTVgBXoR1xNAZqmq9MZBEamLHqv+GKKh1FmFMlX7sWAI
UkhSBU0oe30wozEU6GnQ4iHUioCKGleg74CD3D9qoenXLL6YhQMzaumdK18sIhNIzD0qQh6A
Z0xNALtpAGY7jv8Ahgxad2B9LZIeoFf8cZOo21qpaudQAT/hjShmFEyPeuE1GGVV9JzOVTnm
e+Jk5kcgAZnLURlngahNISaUFW79cSBViCQMxlp8u+JQ9C2YOa5r/wAHAgMQPzEAZmuYJOEe
HD0pq6HOnfyxLTBiSCcwDmPLAodWCgnPT0oT/piJNICrClVI9PhXCjRqxClmqSdXlgBmIAqO
hoRTtXCLoqnSaEeRp/jgJgWHU+noPLEsErMwUjouYB+tM8QCGZnYHsMvxzxEbGICnQDqfDFq
EpodJPfM17eeIGBjWTUBmDRqdTgIySaHr3qMKJQVCt27k+eBYFmHUdRkKYQbWNADA1Y00jER
gkDTTM5j64kL0qhrQkZn8cBCCSgXVpPXp2OFBJB9NNK9vHEhADMnqBVfwxHAh1OZGmuWWIBY
ouSmoDZ/5HAhMxVya+nuD54kFyCdIHqBFf8AliQW0qAHzUEkL3qfDEidRUKPtA1VJy/HEQLQ
sRmGr+2vcYhpiw0EavuNHFO48cAJT0TSanIU7jzwlGK6x5Egk54RKTfcf3Dy8MTRnZkAIyIz
IH+WAUxJII6N3Hn5YFpv4qZaRmxHXCkaygqc/UcqdziEoWBC9DWlQxzp44lhwcqV7dT54ijK
sz1JNF6L2P44gkpQKxyUVr454EEo7vUNpoKmpzNcSwvtIZepGfgKdsJBQSSAMaqKkEdcSD69
ArmQKDvXPEKQKZkrSuQHcYiIamHpz7lq0y7DEkbhaACoAzI6Z1pgZHVSQBU0FGrT8OmIjDkk
LWppT64WglVH355j6DzxIKVL6SKnozf4YKKT6a0oQ1evjTAxQliZgj+nuCfDywwykNLRgVGk
11EjwyxNpU9D6aFaDPwp54gAnUdGrOo69enbEzfkg4VqaaV6kdMvPDounXUD6slaoqDXI98J
KgFAPUp+0DsR4YFCKvUGur09R2z74lh1VlqzAV8R0+mJqHLjQSB6cqAjv44kYpUCi0cnNfD9
uEWkij3epqciB0y7DEpDM2oLqUKACpNc69sB0DB1OS5GtfKuEEIQAT+7wOLUdQaiprnQAd8G
qHlZDp0ilc2HjTCsJfUNQALU6Hw8zjKpL205A1APUHEDh2DFCorWgP07jEZSkalarTV0708a
HFi0COSGBqG6ZjChxoAdRbMjM9q+WBBc60GfQ0oOtaYkbUShBFCvl3PfDDUbI5BFcz0FPHGi
KStBWhamRrTLvjIsHp9NKkimfj9cAKJl1FTnl+OFoztVm05hMiQO+JFpLin2ilNfQkYtACDT
0A1TIDKlMSEqlRQDTrNTXw6dsJFIKVJYVGQU1zAxBGDpPgHGXjXvgAHDqmoelRl5EnyxRfA1
DGrUIyAP/KuFQQAQVrU/lB8cRCtCak+nTUCtKGuAkxQOFNQWGQ7Z+OJHV11UqSAPUWPevb6Y
cISoZT6qJ+UEUOXeuIYZY1A9JDEAqCc8z3GAHLqi16k/aO1R1riGn1JUFAVZ/uNKdPHERKAZ
GctmAOmYoO+ImD+rxBzI7jEkTMS7MQAtfup/jgZzRMCeootOhyqe2EYWRjoagHM59u4OLDsg
g3uHQEHpFK+IxGBEagHURmfTgIk0qdJWreFaZ4UiZHK1YBaHM4hRGjqASKV7eXhiRIE0eo6W
NQv17ZDERvVFMZ6n1Fa5ZYiFkNBRdFcy1c8/8sApjpNQKhhmCa0xUEdYFan1D7h4jrTETag4
GqmkCpA6+WIw2as7CrKM1HU18sSJpZGXpUDNjTv4UxDBmVlI6aO/etMRMSqktI1DkB5d8Qpe
6wBYCo7Hp16k4ogaXJLsa1yrXt1woXtsqjSBSmQY5+PTAiBAU5aUGZI8T1ywRQJOlgFBzyB8
RjRBJr6KCT4U6fTzxKoZm/lFGNA56g18qfh3wxVmdwhkWSjUr2/Dy7Yao5Pc8sGHXftDKb2N
yBVelemr6Y6ysdXHtPEPkbe9j29IovYuokGlRcxK71+ooaLjn3BLBT83vX3AXltBBaPJ6pvY
TSrV61Uls/PDjdc9pybdYJZGUpSUsxiIJALeGM/VlZ2vyLv0NvFbTCK8ig/8Mki0kiP+xx6q
eWD6nYju+f73d3MdxSIPBVUOhW9xD1EiGqkYcxbHdu3yXf7pZLYXVhZ+yg9Mqw0etOq6T6cH
PLNn6cXHefcg2eB9vhkin2yUMslpOgkTPuK9MaslHHXvrls+S31juJvYdHvFtXtyKsiFD+XS
e3hg+rr1V1vHyVNvO3/or3bbYKprFLGumVT4qQRikxmWIbD5G3WC2hhkhguP0wAgunBE8ZHQ
6wRWnnhwaNPkfkv61L1Xh95P/IBGGSVfB0bKvmMWF0XfyPuk8DxRW0NrBKTrWBaLqOZoDXRn
/DgniRWnyLvUVrHFNHFdSwAi0mdAsqd6ax6iPLDlQJ/kHfJNxS/9uH9QiCNgY1KOta+tCOvm
MZ+tZ6dm/fIVzyCzW1v7C0dIP/BPGmhk7HOuNYYyhkQya6VVu1aDFC6LTc7uznEls2gr3Ph4
YZFqS+3O6urgTyODOKVcAUJ6dMMjNuNJs3yjyHb7Frd7e0ulo0bLNEp1IRQaqfd9MFgnX7cN
vzG9s9yN5tttb2sTke9aCNZIT3qFbMfgcMnit3x28h53PvMUTXVnaxzxUMN1DH7c6hei1Bxm
Ncwf/wB4m6vapb7haWe4MgoLiWKk6rTKkqacxixWufbuabxts0ksHtSRS0MltPGrwsR0JBHX
6YrFKDd+VvuCMsdpb2SyD/5AtgURm8dNSBinIqirGKgGpONQXKtdi5DfbJKz26wzhiC8Nwgl
U0OWmuan6YbdGLbd+d3W6MkslpbwX0BCrdwqVeg/I3Zl+uMY19lY3INzXcBfxsIrqlGZQCGB
+4EHtiwxz7vuU+53bXUqAO9AxjGlagU6eeGReILW/u7C5S5tn0SR1CkAENXIhga1rhYqS7v5
ryb9RKqoxOaIAFH/AGjt54ZcZ+yz2Plu6bPHLAgju7KevuWd0gdBX+GtCv7cZpl10X/N93u4
FgjUW8SMGRRnp/7S1TixvEzc4uJIljmsLSWZSCl0kZhlWnfVGQK+OWeDC4n5VvE0VxBczLPB
cDMMBqqOhBGLFarrK+ltbhJ4TpdRShrQ16gjuMLI7y9mupzKQsdfyrkK/TpiZqw2Tk95tiGE
iZCHKgf7fxHjnhUJixYlerHPPEYTORmwy7gdMSpsjTSBQV1Vrl5jFolM7Rq9CCTTLKv7PDFp
09AxKMSB49sS0zB+i1BpSnSh8cCRAnVUnp9y/TCykVNR0ioA+7EsIxBR6QFJpQ+FOv44GpIB
6MfA9FHQfXArCIJbWc86fhiB0VWJSuormAegPWmJSE+krqJ6HOmNGwipFWXPpl9cIwQB/MKO
fy/4YzTgQVRtLAknsD1Pngi3BKfSAvj08a4R8n9sZ5+j8wFWqfHCsAhQPQD0uepNc++JaYPq
ADMQQcq9euLF6f25GKAAKCfVnQkVyp+OA4J1eNipAFDQnzxIEjFMyK1JBrlTFEcqdNASCaZe
OIUyspcp0HXMZfQYBpyutSpHTqfHAR5ag9SGp9q+WWJaF6tIFBzAzpn3xqENGkJGQ7MPE4hT
AEZrkWOf4dcsIw2aPQAnVXLriIzKNQBFCQSSPHASlpIMwC4yqO4piURkqmnVQMxpUHpXrhVO
rfzQFqI6EN4mnTAMOUVpNamhYjIeXjiRtZ1qTTR+cAVp+GIicsdJFCpOYPUV7nEMCVYOAfHE
gOWU6mrlSracv34gdnLSUADIMyT49RhIqEDU9CCciOgOJUyNEJDQ1/xwDRsNbdRq6GhoRlli
IESsh7sO3TtiUpqhAWr0FaA0rhwmJLrQAqWGeXjixU7I6gBa6aAE1ywCi0KFzqHpk/hgMCNY
9JJyp6gO2EHZqFgTUqQFHbDiNHp+wnOudKdMCG6nRUClcyT5YjaFfUgrQEj1AHI08MSgTGun
3CCpboPDEgrIgb/cDllnhQmqzB43DIa559/r54EQVqUqcjllXLxxKEUABXIEH0jwIxEz0kDK
B07HKpwAIAqAMitQa5ihHbDrN50YA+wCtM/r+GDWpMEfYNCRVvDw88MKGYoV76q+nTmNOIDa
UM1NVAw9X08qYgFNFNDSUCCgzoTiaNISpApq6Z1yxIcgFNRfM54lTGIfmJ1Hqv8AhiOBWUuT
XUaZAKf8ziWHkX0UVzrH3Vpn5YKBkmpFaEeXXLAgKWqSxNFyWgwlIpX1agACKgjLphQZAKj0
AhaZ9OvjiZRnSriLp1B8D374h+RsEpQ9csq5ZYGrCYkShpGAp08x44UKGrIzZ06BO4riMiOc
qrKqlSGNNPU/u6YKhqRqOo1Ynqcsz/hgQtEZqS1AOgWoo3l/rih8RqwZyXH25AeJ8cIwSLLJ
JU1AYV7YVhiXDvmWjBoAMj/wMGqclprVWyB7dc8Fa+qK+fQgyPgc8/rhkZsZbc6CWoHjQnpT
GmY4NR/iOBpb8W3IWG7w3XthjE4YI3Q0PlnjryLcew73zG23nbo/YiaK5X1nURRqntjn9fRq
TiXMTsl1puolu7GTOWHNWHeqMv8AnjckFq/bm3H7T3Ttkc8RnbWkFywfPqdJGOdjRv8A3jje
4QwPulhPDeW7loLiF6jUD+ZG/wAMWN4r+Qc3N7cwTworPZyEqD6S5bqWUZYZyz0beuU7Ru8U
d3BHLZ7nBQMqNrgenUgt6gfLFOXNd7B8ntBFEt/7wlj6XFswqwHaSMjScGNzp0TfKlr+uDPB
+tsmLB4xSKUV7+kUr+GCxKe83XhIdLmwiufe1e4Y7ghqtXOjihrjXwHPynk1nutrAsERS5UU
YP8AbSuZ/EYJDYPiHMF2R3ivLf8AVWMjEuFPtyKzCnpZczjVojsv9/45Ay3W1yXaTKdYWd9a
ip/MBStPDBG6sLTnfG5biG4urWSzv4G1FoGPszAdC1SDXFWVrd/KGxvPpNvce0yH3aNrYHoB
6s6HBmrFBsvO7WxtZbVoWAVneGYZg6s1Dg5419Us4+a8ZuZ7XcLqymS8t/vRJA0co7ldWaGu
BOm4+Tdjkvfb/RzSbdKhDLq0yoTlXViwaye8NxEqX2lJ4/cqzJLSobyI7fXDE7tu5jtsexPt
G57Pb36qpMN0tY7hCTl6s60wNyay2qNgWiUqvVVJqR+ONMWO7Y9xj2/cLe8dPeWFqtCTTUO4
riGPR4vl6CO3e3Uzwo4JhlqHKPTIaR1H44xjUeb7pu97ud5Lc3b+5M5++gFR50Axtnqg2+SC
G7R5k92BTR4yaVH18cF6YythZc9m292tLUyz7NMum4spaDUD1ArVencDBW5rhstw4pa78t2s
FxNbMVYRhgrIx+5fD04VsdvOd54xu6RSbXJcpLEpDwTqtNVa0Ujthixi2IqQuY707Y1FJr1L
iO6cf/8ATprK8oxjVmmiVwk2lv4CfupjPUavOK6Tmm17fbRQ7cr3kSrqiEw0HSPytTrjKc+5
7vxDeZP1zR3NvdsiqbcUkjOkeemmLAq9i3DYrHeFnuI5WtFYFCp9YPlXt5YSs+c7xxfdjFNt
U1yJUGl4JkBWtctL1xY53ys3Fvu7xLoW9uFDU06ZXFKZU64py3qy47u1jFu8V5vBup/akUiZ
JA708w/XGrfEtOcbvx3d54bra5ZWKqFltplIZRqJ6jtjGM2XVps/JuGxcdk2u4S7tJHDLHMq
LKAWz6A/b9cNjbObVdbDaXVLqS4T2i36fcrQmOZa5iq+H44qzXXyffdv3G1iq5vrmKgjvpUV
JtA6LIw+8+eGQWfpLLyux3Ti/wDSb+F4r2DSYLuBlKHT9uuM9D2qMGHxkZCFcgZhj/xljWBa
cc3ePbNxW4ljaSEDSwQ0YV8D5YK1Dchv7a/3Se6txWCUghCKasqGoOKLV5tPJ9pl2B9i3WGR
YqUjubZirAV1aZAeo+mDFXRJzO1tFt44Yf1MURUGtUrHShB/3eYwsZvym23k/E7HeZtySC5d
bsaJIcvdiqeitkGUdaHFfYuOcqrXlFra8ivL1ImuLC6di6E6JKHJW+oGLF1cc9tvFrte8/rb
FjdW75PFMNDaSan1dyO2KrmrW95Dtdyz3dtf7hBJJmbZ5fStBQ0KEUp1XFG0G4cos73i7bcx
l/XKQFkcl1lWv3s3UMB1xRm8siwqAK0FO+HBYHQ6sFB6+GIfB2BPmO7A0piWhpQUOeXhhGma
NmowND3p4eGLWc06xODmdOoUP0wOg2jJOffviBvaqxNOnc9DiWhEQoNQBpUgDOmISkVbLSKE
Doe+eEkAWYEnUCKjti1YFdfVh9Bi0Zh1qwYd8FUEMgGUeXn+zAjVqS1PoR44VTsyKNNa1oAT
44GykFBkagULfXEzSLhqZkeP0xNyhcivpoQvj1GJWwTEZV7jM4mcMNJLVAFOhw4oYkhSftJ7
4orMCGYtkR/3djhc4QLE6v2gf4YjpI5FTnpOVCemBSiB+5ele+Jv7GaWgr07Ad8Ui0JYUJoT
Xp5jCD1L5U9QFAcS04YKdLCgORy74zTCJABopOWf4eGBAzFKrQnqcJM5NK9F7DvlhAQzVUgA
EdScQHqAJHhmO2eBqQOpagk0I64haTBgwpTSRn5YKxZpAKuWRPanTEZMASO5y/fTC2CQ0yWn
Tr4fTCLTUYRqB171xU/UiR0U1r9MCoWdNNK+moqTiZCSoBUCvdKdxiPiNypboStfu8TiWgr6
j6qkjpiARIahCPOvlhMRMBqqoLE9K4BpjTtSh6gf44iCgNaHpnTpgMgXKqczQAAk1yzxCxGU
q9AKggZ4RgWBX7swSR06UxNSGlYiLIileo8MSc7ySN1ORIZh0yws4I6fowGQwNIaTVIavZTX
oKGuI/A1dTMaepVFM+lcCRsshr0oex65YlgU6EfcpAqe/liMC4UdCRTqOuFGJdSKEBD1I8fx
xEgzB6gaip9X08cCJ8+gI8T4eGFixEC4U5Ztk1PAHtiEiQs9DmOtCTXIHpSmJomYVU0zrQgd
T4YlTiSuZNNJA869f3YjKfXqchsgueA6dTGaADOvQjLI4gOR9BKmmrqPIHEKZS2k9wK0xMfY
66DpbpQUC98DXNL0h2oTViajw+mJalpRPUcifS4+lMRJCCw05ZdD+zGmadAur0VNBTUen78F
MiQDUwJIA6so6eWAyEJsgOgzrTPPEkiIrDrp7EHwxIoyVQsACoGZGJfBaGrqYUPke2LR6IPr
XOgArSnbEZTovenUUav+GLSFQA49P+0k/wCGBk50hGzqaZkdfPEQgN6iBpHYHPLCiqxdSCKK
On+OIlQAMfcyYUI/0/HAglq9fy+PjiGBIofT4Zn8e2EXwvcBYUNQTU/hiwSnBQNpBqWJ6mmD
GvDIFIqvU/dUf64Ci9KuF+5s6GmflhwnaXQKvmQKL4fTCrUauFq7A1plp6/sxMylEpKsrihO
a/TxJwGAkiZkIc6T0pXPEjlRoCqB7YGbnMn64kicAsPzCnQeOJm0UbkKQubdCcSMWqSKDTkB
4fjhQWLBgFFQakUxAafaajJaAAGp/ZgaRl1L0NDn6W/1xKDMbM2RoD6gR/yxEJowqzEsprnk
MsIREl5CR91KZYsF9EodDoUVL9F8PPAcEka6ycs8jQ5YjIcjMBhpVftK96+Ne+JUEZP2ip0n
NaZaQa54tEGWJJBfpUhu4xNaF31OpHc1Zf3YGaQJRMqaxlXt16DCNFVepzI/MD0/DE3DKoAq
PSmeX+mFU2bM5YagftP4Z54mSYsrhifTShPhhHoVLqAakqa18T9cBkDpq1VNKflY5dcWIcZD
EKQSMyadu+BE0LiRdRPorSn7sAMJaZU9TGuXgMRiRqMACp9VanzwkkYmi1NR6T9f9MQRkBTp
qM8iw7YSejkEKKUy1dx54AM+hQWOrP0kdaHETFoqMQBU5Z9caxozZKdIqCMlGRHjgY3wwUe4
jgUH5gO/nTBrOX8nYqaaVy6E9xXvgjRtJVtQB9WR+mE2HOZNAWp5gdMCggAVIJ05VIIp+OHC
HUweoAKUpQ/44kRJcVY0jJ9XkPDAAOy50qTUaQaAfXCqaN1GrM1yqDniZOiESGi0JNR3PTEZ
CiOk6j0zqa1rXpiRqkgE0rXLCS0hFaQ0JrSlM6YkVCFepAbIntl+GKoQ9t/zdczl1H+uASHB
K0IzBzbLI4kj9RbUKljmR0y/ypiJH1ocqN11V8MSSChfI0rSrEeH+WJG0MzMuvIdfpiWBbSh
AJAbIHxIr1yxEmIJJH2dK+J6HLwxMo2YoBoTWQciOgB/0xVaddbMAczkQBi1ejDMtaN50Izq
PpiJllP2soqenjTxxEmANNGa9z1rXwwISAilciKljXLyxCAYoHrU1OevxOIksYWLNqyA5sBm
S3T6YV8ItMiagKhUOX070wsWCTS1VZSD2pTwwGUZjD0H2Rmgr3PhijRvaUBgBqqat2/HEiMK
OoJPqBpUZYiMV0trzpllTMeNcRwLOaFFHtpk1R9wxAEikL6sjWo8cDKRGUioOrOuvt+/FhgW
eMCtftyHXv8ATA1SQiuVSijrhZ0CqWJ1DIk6a5j6YVBxRihYANQUAORy8K4kZVDioBBy1dyP
qMCErFgYyNQ6jLv9cSNE0rMR26BQMh9TitQ1RFBOk1brT9tcu2AlVGFKjPMg4sRg5IEYoDmd
Xenh9cKOiEMobL+Hz88KwIyOkknPLKn7fDEDFqA9DGtaNmCD0zxLQe3IQtKoR+0/6YZmOn28
RXDKoKk1mzoc6D8B1wRisvuCM0lSSVZiS3UfXDQ5dC+Xh/zwHVnxqG2l3S3F0/t2+se6/cL5
Y6civYdy4RFZfpprOVryGdwE0A1o3Snj9cV6Y9Ws/wAeGSxWawn1Fv5ckco0FJCOhYdq5dMH
PXonOrXa/jF49mEm52sougrJHcQMJENPtCnpni76jpPFDH8fb9ckRWccUsva3aUJJX6N38sZ
mNfaON+FckX3kG3v71t/+MQMQrDvlXriHVjnj4rvE9kb23s3eFahtOZVh1qMWsVWBTCQXOYO
mvTM9saEbXhPDdu3i0uZ9wuHhaFxRo6DI9D55ZUwdeOsg924JC9q95sdw1/CreqBl0OpBpT1
HPGWO/HDa8B5BdoRBGjyqCRbB1Ep+ik/sxpT1EnDeQlJnFnIGg/8sZykHfNTnT6YGfqih45v
Mlib+K3M1qAdWj71p1qn3UxN24uuMcXbcNrkuJrGS6jT1GS3bVJEP/6XUjviYnqrtONbpuFy
8VnEPb1FI5CaEGv5i2QFM8LVS7jwfk+2kLLaOySmkDxMHVz3pTGubrN2Oq04FyW8gE1vbfqG
FT7UTL7oUeKEgk4zWoor+2lgmkguo2imjajRupVgR1qO2FI11au9QMh2xAmf0ECobtXriaIE
5UWgpTTXKnicQqQEUqgz6eeI/JwWWtKgDpT/ADxM4c1rRydQ+2pzIwVJkbSNPbOn0xkhZlVq
LWmZ/ZhRkShr0GZY/XGnOwa6QampHavXDgFlkwAAPXwH188DfNGJZEINMugHgDgvrWnZwUJ0
/iPDAKdqg5Oa0y+uGKGUer1k06h8Kt0lckkNUmlAPLxwwXkg4DmorX7fLEaNGFW6KD+cnApR
a2qAGqtP+M8B0+pKaPtr0PauIaaNnJoaUIp+GFm2iemoVFf8vrikFqOqEjSBqHh/ljTGiIB9
OX1xGUlQjIGg+uJudDBJRqdun/PBToVGlRUj6dsSlIs1dIBIBxYKYFtWZoSajxywiHJBJBJ8
RiIiFYAA9RQsa4lMM0j6OnkSfLFjWnEoqG8STSnTtixn7EdJagy71r274VaJidIAatcq+WLA
FCwko3RRQAjxxD6pGU1VaCnXBUFhpNFBI6mmIlVnb01IP2174junWgko320qVPlisZJlZvwp
QdjiagNJBzGQ7HCEgC9OvYV64LEQ75CnQNTvgIfSWGWrz88OM4RQCv8AEP8APEi9rMZ18jgM
gdNGAIyByPSuGAjHWpNNfUDt5YsRihJyoezeAOLFpGMu1KjLBVDe2DUA9ex64DpLFkwPXw64
jpjGdQ/h/fXChGINQVNfHzGIgMR7Gn8Q8sLNhzHTJcie3cU/xwD6/o3tjqB6u1cqnEAmNiQS
agdfrhOmeM9DnU5k4kf2pFFVAGeRP+GBrCfI9Cw7/XCD56mB+tR0wYQ9T4eIrngwHYEMM9Qp
Sn/HhiKPQHFD4VoO5B74iQXI0OR6LiZwLH1HOmWZ/wA8RC0epahslzHjhZpmcKK0y/GuKg5N
EU16554DKHUrElhp7fjia+wavqFenYUphFO60ckEVpXPr4ZYGtRsoGX2/Xx/DEsC4LLQClck
/HEzgQoBGXeg/wBMRRtQPmcj+wYkSABciC1etOuIBJr169CMR1HT1MRQn/HyxJGa6q9+lD0w
asAWGQNK9vpiWgejk9aflXDGjayiUZaEZAjvhCNiXVlkBr91P8P24DoGUkGlMqenxxMWoZWU
01DTnl2wg2Q0kCoAyJP+OLW+TCRipfPLrXoa9sB2UNdNWOWk+qnbwxVZQnWVqOmWoeH1wI6k
L6TSgzY+P7MJiNijE0PXx6ftGJSnZ0kC+moA6YkZX0tUaivSg6eIrgJEmpBZgWzr4YhgW1hA
MtTH1HwB6YWaCktaGtV861+uIJtSoTXq1B4jxxNBZQxBVT1oFwM9Gkqr9hXLTiUHIVCClT/D
TLMYmrTxqAilhTT0Df54WcLMGmfStR374mKKlI9ZY1+417+VMFpifLSCPqPDA6Q1WMZoDqrU
eOIkFI0mnqJqTXoPDC52VJloofvrSnfLGa3CXUSaUHn+/CRLoqQBXof+YxMpFzzAyz+ueJak
otT4CtDgSMnpnqAyoetcTOiFQ1CtAwyrl/hiah2VyNNR0ybrl+GIhAYR0Och6D8euAYTBtND
TxywrAEMa1Iz6nrl9MMrRxVj0BdRlTwxI3tIDUnIivl54kUYBqaeef8AngGhYOzaaZAVoMsL
OAqFXTTtnXp+GFWGAJYFQCKdTiGBqtepNP2demBvRrShahLDoenXAQuXL00FvKmf7cK+SBdW
AjIYkdPr2xDP0idmzBofHKlPGmInJOVSC/UADp51wIPRCGzZuvgPriVAvpfM1BGR8SPHAx9a
Z/SQa6C2edTUYTgsmrTImtf8sLWIwB7hNa0Fa08OuBkSszdaBj0DdvD8MIw0gAJ6Bm+4dR54
mg1oG0mpJybt+NMSN7Q6KxBIzH/XEqdQW01FMu2R+uEYbST2pWoY51IxmrAomRKqR2y6UPhg
OJA32r41yPUUwktBFaBSPzkdQfDELDEDOvpan3dT+GJZhtFFDdR4Dx/1xCi0syNU0anUZ9MS
w4VNK1OnT38BiahxSppRgOh654jqPXIqqFHiaeWHRaaEsw1Hr2yxMpc9NR6T0r+7AYAqa6ez
dT9MKw5ajEAjLw8sQOzlq5kLT7++BT0/oQVFc19TdjXFjZB10VJoR0IFenjiCHWG9Irr8Tll
i0eiWNXbP0sO+LQSBASQNJORIH3YiJXKyUIBJ6Vz65dcJhnC6hQ5Voad8SLUamgpQUCnExYG
i6AY6k9xXOtcDUg6K9SSCQPUT+/AQlegzyGTYUTVpQkU6+efhiQC1SwZadCM8iPwwqU4DZk5
kUNe3jXARGlUOkZ/vwC0EldTCgBGZJ8PADEz6RGsVyz6np0w6UjAhQ6tmRk30xFGtdZYgAAU
p0rhJgmlqMQe5UdBiB6NIxABIH7PxxKCQFdSuDoH5h1PjiJkQR0oo0d18B9MAEVYNl36A+H4
YkFNQYtXLP8AYP8ALEofUtNVcj26D9nfEqRCirg1HhXocSD7n1Pck9BiREH1FciPuXy8sSqO
L22WrjLuh7H8MAkGGRmAVQBSvTviWijclmjRaECpanT61xKEQVJDgaTnXC0BT9xYEqBQMeue
LASVIAXInoB4DzwEmLmhpQ9NJPatMSJomYABqAZ061+uI4X3RlkbTQ/aOtPxwjBEKooVANKG
gNcRxHRmOtDpIGk0yyHhXEziSf3ymoZv+Y9K/TAsOrHIfmBqp88Wt6F/T6ciwGZrXLzxUGJX
RUVy7gVp+OJUgyhB1Zz4+JxIzUcVFQehr5YgGRkSmVRX7Sf34ikMkY/7QPL9+DBqKNkANK6u
56g/hjQ0ShghqSdRrU+OIyHUIW9a1alAaZ0wExUrpLEgnKopUV6HECCk6CpbLLV+PXEUhcBS
gHrFQT0+mJBjaUH26az+cnKn44qoYMwkqSNLH0rTp5g4NJxInrOkGhoKeHlhBLKwWpq1ch38
6+WFYev8xvSc81/DEAo5bUNRBWtaAHM9sICrUoDmxHq09PxwHUVyZjHQHLsR0ywpltyLiTrm
xNRhxOTV59/3YvEsduQJc1DDV+WtCMsa5p6fRfx3ve37psi7fuclLy2AVH1iMuq/bpBHWvhg
7jErQbTv1rfbvJs81wkUwIETXTiPX2IL/bqHnjOHVptuz7ns0Fyl/cpARI7i3Mvp01yKH7TU
YPk2p3nu90soLrats2/d0hr/ADjKqzxZ1qVDK5OLFjOtyi6g5LBYbwtvbW8wMYeJ3bQzEZyC
T1DDOdZvju5lYWvHrFpoZYkSbO3u7abUrZZ6kGdaYz8tY8alnWWdywprOpD4jr08cbUj0/4l
ihu7K6s0lhN4zhlimlWNmyIqpfr5gYOo1atoorvi4ki3mH2FaQye4pDKK9KUrUfTBzdV+F7c
SXW8bfbXu07Vt28JDX/5KyATRAdyiuhrXExigi5HKeRpab4sVhHcKUL6iRG3ZmqdQwrE/NLG
DYdtLxFFaaotru3lDxyhuoIHTFCXxlx3epNqlnt/ZYSElGWZCQGGasCQVr54erBdPxeyurXc
b7ZL2Jba9mZzHDcOsfuZEKUcnSR+ODqbBilbi/LOM7gt7uCvbWqy6nVn1JSuWggspFcHNabm
5TcbqO33Hbtjst0RfWdytpdMgI8Y421ah9MNTzDn+4395uccl7brbP7ek0fWXA7saBq/Xphl
UZdaVqei9RhVATXSdRXsV60piB1yb/j9mLEImoLL5rkKfsxI5PprUnMGtMjiZHEPcGok17Kc
s/PBV9RUCkqa16DpQYG7BqGFQSfJux8sLJgirIK1+vgPHCsEQGj+3R9MIsChyoRqpnl1zxMp
Epp9TEUI+uXfAZRNJKoFANPY+WDGj0P/ANoaeHl+OFFRxVGzp4YgSdQCfr44SMDvSo6MDiIW
WMxii1zz7nAyMJUD00bqPCmIWnIYMKqP9wrkBiF0gV15inhhxHLE/b6QepHQ07YcBjpbxJ7j
ALDjSDRQDTscaAgqdW7dfr2xaZC1AMDT/Q4GtPTSpYmtftJ/xwNUq0Gkdev1wg2lSeozy01G
EFQaajoMqYD8G0FXqDQU/diAgprprWn+Hni1WEA3cZU+meGKnVenT1eH+GBaeuZUkUGdadca
BBQepzzIbwwKmBcU1EGuAnWrscvT54kVGAooIZemeFGZnVszVjmT9cTFMla9evSvl54GuRIS
2QFQASDXrhaOCxB8RnkP8PHACWhOf2gVGAGIRRr05n7aeP0wnSFDm3T9/wBMKP0FQevUHOmB
EwqS5owGVO31xDRZZGtScvTQfhiwA0UbUBWnfviR2Ck08/pT64iQQhq1HX9mBExLDTkBWgPl
iakJAaknPwA8u+Ik2osWU0qMx0xM6VG9NRmSK59R54kRKGhArT9uIhLNpoorQ/44hhH7SOwz
U+OJYEL6gpBI6jwxoaZqE+fWv0wIiq5VUgjscTRlDt6qAeJ8K4jhjHVhQdOpwCnIzOntmO2J
QxVVWgrqPQUwGm9JcECgA9X18MTO4ZkViSOv+GJrUfsqiljkAa/jh1ikFCkMTXLPV0xJG0YZ
wxBIpWv/ACws3kjGCATkDmPwwNIyjNSpppNFbr+OIemaMg6ia1OVfHE1DCrGpGZqSMWHUba1
qKUHUmvfwwLDMCaMuVTn4DEvQEAZHuM17YF1UK1BJrSnXCybUrHSD07HIH8cTURySEMV6Fhl
gVgTKWFCuRoVI74h6ZpKilK1yPYYcTnc1IoNXYDpXCdPrXTpXJutfpgV6AWUguMiMgcS0ACr
IBqqp/MOv78QDJWpemVaDwwNfULzJWlNNSQAe4PcYWYjIAVa0PfI5VGJuQzhehNKU7+OLCAF
mDDUDX7R0Bp9MRCzDWpIoF7jCwj0g1C5HrUiuJEjkAnNu2eWBQbvkHXLOhXxOJr5HIVEYJGa
0r3FMCRkxsehDdaVyrSmIaYLGiEmpzAPh40GLRggGofAn1DuMWnD6iXOZovkSCPwwLDVowBW
g/MetKeGE4Iy1enQdiR455YmakRdRo+VPxBriahAMK1GQqB4DwpiZwiw1HMA9ic/8cCFGEJ0
fae9a9++BqJaMSB4ZUqB+OEkPUzRHMgCo7fQ4mToMzTp0qe2KoSEV0kdRQHwPngFogFDhifo
Mu2GVWnaqAOM/EDPriR0KuoUZilKdDXywIxJJK0AIH7MQGtaUYUJHXviOHPSqjWFP+OJBLy1
A6Anr4eWIEdRoVNQp9Q65eGJoix6AZUqCPPCQDRQOpNSNI+h6gYlDsSoVn+3ppGIBkIU6hmx
6CtBTEqIUFTWpHXwof8ATEdiKuodRpPfzHliBqstKnV9O/0riAZD1LEE5EV/0xNAYEAAg6QO
lcQpVIGpSTTv0qMQw6+oser9V7ZYtUAVZjQZMD07E+eJoThoxqWgXI1/ccCoEPqYHNupp3Hi
MTJy+qLQopq6V7HC0SaAn8wZ0ywHUKSACr9elex8cWs6TKag00pTOn+AxAQLErp6UFK+AHTC
cpiFDeDMKtTPL/TEjKyI4Q9CMwM+uJUGsCQ0ADUy8zhY08a1aqtRuhauRxNCyBJBKmuoHr9c
ZaMA4BMdAMj+OJQjrJbIg0q1RShxI5B0MxIDN2GVcSoVdqkMBXquJjBKqAGrAKO46V88RkMW
GrSoo1Ovf65YWhKpXUKVrme4P+mBECik0XNuq174haASyE1BJpnQ+GJQcgPtk0FX7E9MRR19
IVRUmtSTkCMQ1KDqbMVqBketcCRuvtoWCnUSdXnjQsMELMpOQGbow6jyxKRI5UnOqgekVPh4
DBjZix050y6EZVH4YlSDln01rp8BT8BgZ0yLqB1GhzoBTLETyuciADQ0oPHDAFg7ELQ0XL8B
1rhJaTWtAppl+HliJxUhqDMdSf3YAZVozaqgZVP18sSOFiOQYagc+tMWLTBtUgXL/t6dOmHB
p2bSxBWh618MSpkIK11UrmMupOJGqAPUantTpnixaZFbIKfSuZPlgREpT+a1Bmq4AIJoQVNa
jPLr9MTWHShUV9I7DKigYSjNUc0BILAk9cqYmdGzQM7EgggZk9K4cMMzEsCvXsK4sRomJb1/
b/gMCGRq6U/3f6YCjjRAumpJ69cxhRwaIATmc2pmPLAjsVK1bIeJzOeEWg9CsRlp6D/LLEDs
rEhQdVfDx8sKwyiT8qkspy8TgRFAGoTQE5+NcCw/tstCGIHU/t654hgQSzACpBqa59uoP1wm
U7NI8elegFT4Z4GjgadLNmq5dc8KMyIc1OYIOoZAV8fpiRg66iv+6oxI9VDe2BQ5lz0xE8rj
3FVBq7g0oB2xLCJIUnTViaE16UxNWB9xSaMa5ZN4HxxME0jAas2JyWg6L9PHEgIxEIDkCnf6
+GBDiLliykAMKFj3+mCqCJ0mjAEqKAjKueEogVZqrmRWoJpiAjM2gsgrU1AGf7MSOWLdT0HQ
jEgGRWUqoC1FA2ZGXiMQ0lSNEB0kvQVAPXwyxKRJJIBRUOnIUU+WGGl9xDA50NVPifDAQ6cw
Rkxzz64kdShbM6e5HShGJGZS59J6/bTufriMgvaKxkL371ywEyFdNSQGOQIHU+VcSwLopNDQ
f7shU4dRm1kqV9NDmOx8ziFgkSIVJ1Fzmta1IGLWTI2iN3YfcTpIwoARhkg0gABf4Rl0xM6g
ncLER4+kFj+bzOEysvuCyGcAkkgnyxoofZm/iPj0xlOnao5FuY6qWNclXMknpljrxxrXU8ey
8f4Dy7c7I3W32Qm0KGNssq/qcu/tkq37MYvlxznTr2PgXLd4eRLSKFZ42KSQXMywSBgc8n6/
WuH7RnLKtN0+K/lDbLR55rH3oVILC2uY5SoA6suqpGMeNesc0txCWjZ2jcjQ2kkUbz88ONa6
bLbdz3edUtqSSA6JpJWJRR5sfHGpc8ZtWm+cR5bsFv7u5WLQWcpAhnV1lgYUyIZCwH40xh0l
8ZzMUcU6mtMiD44dYsd+0bRuO53aiz0LQj+ZLUKCTSuoV0/XGrZIouN94tzXayn9Ttbj2piW
tbrX79scsgjKWWvljn9l1an2zg3Mb62a/wBssxcRp6nghl0zqF/N7QI1DGpjH3/Thsdn32+v
P00cRM5J/wDKSNJr6g5YGn44vG/XXv3GOU7KsK7tayxQyGtvMSHidqZ6GQsmYxn5Nrp2zhPM
rrbTf7dbtNGBrkit5AZhTu0WTafpXBg+yguLy5Yuly8ssysVMcmoutOq06jHSGXUsdxeMAPe
mmVuiGR2P0zPbGcC/s+Jc4uLJtzsbOZ4CP58VtLWRKZgvEpDdPDDsi3XPsvFeSb/AHLpYRiW
dhQNcSiMM3can6sPDGrJ8jmpN/4RyzYVLbrts0CEhEnXTJEx65MpI/bjGtWKFdbEBVKnrpYg
MR5Y1sc5qz2rYty3KQR2kYklNFXWyxrWnQu1AMGtYLd+Pb3s86W252U1nOwqiyrkQDT0sKqf
wxaq6Tw7kw20bkm3vNZUrJNARKYu9ZEX1J+zFpVGtCQQK1yDDtTtixbBxproKVNaj/niwrqT
h3KP0I3H+myybfUs00JWQR6f/wAIqksn4jEFTq9OXqFajyr9MSDrFG/yzxpnSBjVQwo3jiip
19S0btU54sZ8SLUgEjxBp5+GBqDcKNLA0rTzpTyxEwUuCDmRnXwOIGUsWAYjM5geWIC0uHau
Q7nyxNHbI1SoGXpOX+GIWEylyQzUpmaYmRKreRFPT9fPCtM7O9RWukVIHT64kJVcKO9B1xaT
LVdTjNj0HfEhenPL1kj6DErh3VQq0NKdB2NcWik2TUPb9priI1HXOopWh7YjoHDUB1ULdQMM
FgkUKoYAV/iPXCiJqBkKYAWmgqD1yzxE5YHTkM8OAgW0kaaHs3lgUlMwCgMa55VOGHC0MAhK
5d2HamLR9aYhh5+JGJHXMgVoDkMqZ4AIUDKqdB1J64odM4UvQ5E50/wriByqGnQafy+eFaFg
rUp26jAhD0gkCoIwnTBWAoOnh4HyxDBAEgaTkwNB9MZwhpnUUNadMsMVw7EflzbvlnXDglDr
Jy7jKvbBqwy6tQPQHqfPEhAtQqAdXc4jhF9IrTLue37MIsPVSdYBJIzODFIRCZ6RUkH8MWNa
QAVM869j1GDAVAKMp64UbT1IOVKgd8WKkubAavSB0IrniwQjqIPiBWvXp2xE5FTlkSAaDpiw
gKopJBqcqgYBSFB0zpnp8anCzoAAQctPWvl9cWtYJNNA9a+IwEj0JFR2PSlMRIFQBSvmBgRE
5GmY7qeuJYE6QBQ5jPPwOJahJYOamurqows2CV2AArU9vIYiA1UEac+vlgGB1esVFAfHvhwa
HUzH0/RVGJQtbghaae34YGw189Q7+JPbEKjkEnYZjrn4YQYsxJAPQekYUjGro/Wpy6YDAOrg
ZHLrXAaBhJpBBoennniERsoIpmMQM/5RnXI18x2wtAdix0kZjIk+WAm9pmUMx8SvalcSkBHF
X01oc8/P6YmcRMCtAASVBFOpHniSFVFQS1Opz6YlgTVTXVpr+YjM08sTIdSstepPbtibkiP3
aEmta09JypgaJhRanxFfDEzgG0H8oA6jv1xExKnp+WlUGFbgXBViooAen+lcR0zIqqSwB1Ee
VcsTFAFPugVIXz8R2xGDJbUTp9HRvAGnX8cDSELRitCwr0JyAPfEoUjE1ABofTX8tMIEiFa9
wuR8TXxwLDI6qmlyTp7mhB75DAIYMzDV0DdvLFiSK1VJqKeJqMsTQkYmhJocwFHShwkIXSCl
exoOg/HwxM2pA5C0GQXq1ehwIK6iFU5Zk6jlU+dMREqlwWB1EChH1xJJrpReoIzP5RiB6agN
JyXERGn5uo6V61xI6SKlNLBQRmM8Sw6D1FgOvUHpXFjODMqvITSle5HfyxYj1IJFag0/b3xV
YkGoKcgCw9J7ftwNGRWr9oAOZ8cQw7N1Wh1HoMIOhcgmma9vEYkB9QkBLUFRqXuThxq8nHYJ
kD1Pn4YsZD0encZGvgcDWiKEqyipcfh59sBMBWg6BgM/r1xILNVyxJIC0oMq4WaGpKsQpA75
ZYkYFQ5WlS3ll+3CZQFHaRtRyGVfAjEMEiH8w7dT4YKQl1FE6rkAehFcC0pGAAWiih+3oPI4
QBQyyZH7jmR4d8sCkE59IK+OQPT8cBqNpG9SkgrX0nvXwphY+3oKDz1jI4iYltRPnkB0/HCr
aM5yDVQhhl4YjpqLUgilT17+eDEjFdJ1NkpyAzyJxpGLMCQ2ZOTKOpGIGjUtSQmjDIL5DFpw
6sCQSKUNVby8/pgWlJnRVNO7HtU+OFghpjoagk+kjxGBqQ4qW1HJa9s6UzrTwwUheSmRFF6s
OhP0pixWmeslT9iKAFB7+OIbpIGMhrnQUWuf/LEodjIhpp69cRwkOiUrp1aRqyzy88JzDhVp
qAoSc8+mJDGpBkDq/iwGImQLIWofV0Zuor/piZSx6MxTTl1yqcShj7JqCSDSqt2+hxYQMUMR
ZCD31DPEsGqkaasQ1cmpl+ODFmAqQxZqNQ0oelMIOWBb0mvb9vniI6EkAePqJ6nEkbagChpT
wANPwOIU6yMQKDt360rixmjIUo5+09M6V/b3xY3EQkFNQFaDp3H4YsZlFFPqVia1B6988S0J
zYGpqDQH698DWDBFCCencDI08TiIXlPpU5A5FuuFnQhQTVj9nYDqB1OJSnBAPug0LGgUZ1Aw
jTkgLVSSG/Kex71xKBdELaWFB4DocRh3UqooDTwU54FT6fyioJ7DwxE5WgHRqdKHMYEBA1QG
oB0z7YNJeoqFjAVW6sO/8WWNA7uwQKAeuVeoxaKBSWkCVzFNQ8vPChEVDAn7MgeuX4YkZQDG
RkM8x9MRh6vENFdNaEEDt9e2Aw7szDMD/u7jEgqU1nTVczmP8KYhTLG/u5MdIFWQ4maXt1kd
9IJbKoNemJqC/mfatQB1PY+OEk+orUV9IGYyOAAbMU1aqmtO+eIFrYIddVJ6YEd/U1FHqA7G
la9sR0gwzPalCO9egwxI5NZYaz0yA7/TCsOpdiFUGgzIyp0wCkixhgVJdSQKnqT45dMTUg5V
Ao1K9v8AqcWqlIoABD0oMvP9uIgUgIWeqggaf+mI6SojKCjEHqCemIafT/EagnoPAYQD22VP
bFAvUdzTGQkKyDNADl6QBWlMqHETQEs1GNK1ByzJAxYoH1aSK1IyIHh4jE19hM+gMNOk5UXo
cQ0LygrQ9TmPx+mKCnqNA7t1oMgSO2eEYLWxYkkEih1Dx8PwxVBePW4Lr6q+hcGqEiaqnVQ9
x4jA1g20BBlqY/lFPwzxAMasCXdRr/h7U+pxKHOtUIIyP29uvQZYmpomIY9KAihU/wCWI05Q
fkelF7+GI6ilctJGAobM9h/gcsTCWmlAK01fa1Kin18cDplM2kOxJoGpRh28RTthjNDI2kA6
qIOtafs8saZRs2n01Jr0wsYC59pYiAMyMjSv1wNyMludRLWlNRqGxsYg9yT+Mda9TjKXfC5Y
YuRWTSPoT3V1s/QAGuflj0/y7mHq3MfSm8QT28213lqrwoZQ36iA+ho+xDjL6481EmLu85Ba
bWFvLjam3e0lr+qkt2PuCvQ6lrl9cZUikvdp4TyHb7m92P8AqOz3yVJtnn9yIkCpyPqFfHFu
HXms3HeRNC90ljcT2iNoN2EZowRmQXx1ljNrf/FtoJNuvLUIjXAf3RB1elMyF6muM9U/XFvs
7XT7NuFpdsxDtIDFN0oMw2eWMUPJLxGW6lFB7QbuBQmvamOjpHpHxgBcWF9Guk3EZDCBKGQp
Q+sKM6DF0sX3G5p2tLi2vWKq8rO0cpotK5MFPQ5Y51lzb1BJa7ltckOq3heTW80ZKoyVrUMK
V+mHWcamaCC5aaaBUkndDNJ7ZBlK9NRA9WA1S8ZuppdsktruRirTszwTGlKdCqNl+zFU592a
5st4202uu2gZwWliqiUGZJbpT6YYrA8hG5nlO13WyJBNus/rjMyoUmYGnrJy/HFIxy5uU3u7
PyHb/wCv8ah2S7D0lu4lAjn89QJDY1y6Y7b4XlhyTazGXt7d6vKYaqpXswKkAg9MZrLt3qDd
Evf1MDrDt8+d0rKpSQ9ySBkfMYZfFqi3/m2xW20z7bYvcMGUpJbyye9EmeehmOrLti5mq10b
VuHJI+ExLe8Rg3rYfbb9LeoFM8RJPqYirek+AwWZWZ0Lh0MVzxO5dIQ8luZNdBqdBpqD/EPI
4Wlzsc5vtitYb6QzpbktEtwdTIfLVmK1xGuZZLzb+YW0cM8lpbiPW1CVShFQT2Nenhhl8MzG
c+QdoA30ixs/5s0ayzR26FtZpm+lf4uppgjNU/DfZHI7VLlQqBiJFkWlAajSVbvXxxuxRv7e
6vdu5lbW0UrWMDoffjWqI1cgGBFKHpjFrUjI/I1jBa8hZoY0gMyCWSJBpQlxmwAy6jFGdZJa
fdSgrTGhRUDilMj50FcIoSQcq59vPDrOJFcmoYUqf34zTDADI1rnSuI6noKV6U6kdMSIUABA
yr2GBH9yoApWprQdcKP2YFgQe/l9DiVMqrTLoelOuJnCWqZVz8+pwxU4yNRQE5075YiJQ4Wt
Op6+WEafXVielOn1xWH7BCEZjr1NcFR3ZyQMyK1BxAZLBcqEnqB/riRIQAB3A6HxOJolUf8A
cTnn1+mE4Qqa1zFcsTJNoqFzHmMSpKRTSCD3B618jhWGVRULQr4164hRH1AnOg7dz+zATZg6
jT6Dp+/CpCU5gDvnX/XBURWgzbIfv+uJDpUZ/Z4g9cCBUE16HoB40xqCioozFNLZk+eIHrpP
qzGQFOtfriJLG35uvUk9cADHnqHXOlcS07rq9Iy/ypiWnoDU1Awm0lKnPqPDwxM6FSFNe/ce
I+uIwtdBpIoK/wDAwIQC0Gquo5AdssWIPYqCa9xilQV01rWoH+WIwRahI6A/8VwyE2olMu3X
xpgtBKSx8ujeeJHJJI0DyJPhiWnpQGv3HoMINWo9IIPVjgGmI9Fa6e9QcOkwJU1B1E9c++Iw
iEYZ9T+4YEYqqkFcyO/jTtgRCta9O5xKUwPqFB1yoPHzxYdORQ5kU8B54haY1X0BdQGeBacj
M+WZPhiaAy5hjTLyyxAzrVgVPTrhiNqD5EUpnXzGLBpMAy0boe474sFMIiaA1ApQ4FgXiHUe
Hb/HE19UaAEk9T2r1GIaQhUUqKU6j/LLEguFoSvUV+tcSAq6aUoAerVrh1HKRA1J869jXA1P
EJVSGAAKn7hiFBROnc55f54hiI60Aypmc/riaA1OlT9MSA1PSWz9sk/tyzxILMxbIDMZg9MQ
RFfbXz6k+J8sCC2QFRWtSadcTTnOlqEioBPXKhGESBWUMQiipJoa9cQwJQxioILhjkf8MRRl
dVSanPPvQ+WCqGyAFc/AfXEQsdILEgHoG7V/HEqjm1KwINTlU9BhZLTkAWqMzXEQOaCjZsaZ
nsO/44lII0Y5ZNlTPInAdNRixodJoK0zrTriRjrZSwyrllmaYNMNlIAK6aD7T1qPGmEYdXYK
w+5dNMvr1/DEoFdBGea1qcq4MRy7tIAAAtKMO5+mFHDtmtKCtB4DwwLcEy1JzDCg8MvCmElV
lYkZLQBu5PjiBnNwzBFORFGYDt+GBCNQVj+7UoA7/txIURZagdFyDePbAUoWoBOZrmvbChqy
HIrmB2xYjA1diKdgCc88SGuYLGqr3qMq9MSMGFCQDpHUjx8aYlRKQykgFfPt+3EBoQYwGyFK
KD10nAoZa0rX01yPliR2DFgQ3qUgjz8RhGCAYrl6tHWuZNfPCcA2ZUg070r/AJYAkcam06sq
jI4WtM6sFOkAlfuPQ5eHjiFpZSVcZqpFAcjXrXGUUjBG1kkI3bEtJX1KDWnSviO+IlTT0ozk
g59KYVQtVTmcqmn/ACxBGDqVhX8uR8MS0KVzalGA9RzzH1xI51hj2UjFVpnAatDnTPLOuBaC
o1Eg/Vad/riJFo8in3AZsPPLEkbMQBT1NXqOmeJmm9FMjqVfLMnwGEQhIXU5+sAa8u9OgxI4
Cmun1ZVA+nfA0jOa1LUyyxI/ukaCQGY9VpnTCtMS9Tl/LP3eX/TCj6Cc8ytMyOtD1wVQzyVo
o6IAP2YDpgjMA1cj0HfEANlRJQSVNPOp8aYVhxT0k01J0Hl5YiBF1VCsVFcs6V8sAzT0ZQVO
WeZ8KYhSQqtRX1MajyOFmDWQljqNNRqB28MUdIYvVipGsnt08sNJKTqYAAxigJHXAyZiiS0A
9T0qK+HliZ0a6pFrSuroAemffBXSUDFFUF2LMSdIPhiAgGVASKmnpI7Z9sRwALEgAD2q11DC
yRkVWC1ISvWnfvliQ3bSaqxIGerALSGrI0r1B88Gk7miactROajPMYlphQkVGljlUnLETaZG
ZkJHgKHthGGQuyldQ9OQy/xxHCIZKilQMQpKqqQWarHqP+eJm0xqSQRWmQJyyGJQRqUBGXgT
gaATJSmXXJTiFGqqlZABqOQBwrTlAWVjkejDoPE0xMhlRAupaAkggHt9Biahm1iMVNFPU9Ms
Ok7EV/2tQU/54lh2VQKq9R0Ioa0xI6MNJY11gZ+WJqUiSzUVdNBqJJ7YKglar6QczmT1r+OB
BAAYosZCnP6VxC07L6mGmpUduxws+krhXq6aWIp50OIfAgtTqJHp9II60xNkjIKZAx1Jz7U7
4CZiDXWaqOlMssaRpNDKPbILdDlWvcf9cSpaslAyYCv+tcCRxn6sxqan91cQwSFVkJYFaZHT
XDqwVDI4JNKHua1Pjg0gFWkIkNSch4fj4YgfTIKaBpUijE51AxE5UVyppPcjt+3CEaFi5BB6
9fCmJQSqCSAa06EmgBH+uImeqtGFNDU1AA64hRtITmwHma9cQ05qzIQdKrXMDKg8cDQFUhmF
QSelPPEhkFWIZa08czhaRh1kFH69Qe1OmeCsaSAEEatSmpp0A7UwE2lFU6Qa0zP0wo+ShakK
tBQ9CT9PHFAYECMaCyvUnUTnpOJCdzEoYqusE0Y1Jp4YjqMsderSFDEMB3zzzpiZ9E1CCdPU
9PLywNQy0DMFFVWlD2z7Vw4jy+x6dTagfuC/uriJw6iRQvpI/LpHT64AdpUIbSxOdOmY+mIk
5LOoBKuB6lIpQ+eIGj0lqU1McvwGAkn8z1rSg7HpU4Qc1XLMkHVpHc4mpRVYqBSqnOvQYiQl
pqJ9OqtK54BoCcgaZEADPMU8cJgtdR2K+IOJuGUatQXp3xMdHU+rSTVRnp8W88UZC4UEnr/D
455YdSOVgo1D1mlACKiuCNWsjugb39JzBJIXsMdfww5PZk8MDWLPaQgmQ6a1qB4EnDzBbj1r
j/NeUbJtojguyNvl/wDLbzIskOrvTXUDGep6zLqfb+X8gt9zXcbC9khuRVdUJJQg50KdDivL
r9fFruvypyvcbRoLmW290AgT/p40mq3iwAyxiZrnccXH/kPmHHYnTbtwIglqZrWQB4iSKFtJ
8cbyUbisTer99wG4Rym3vteqKeAFWU9fTTG7ManWrPdud8l3aD27+5Rmb0u4jWORwOzlApqc
YKhD1YE1BFa6iSTgZ1Pt17f2V7Hd2bSxXMJ1RvEaSV+o7eWNZrN7aDdudcl3i3EF/cevJpyI
1SZx0pIwAbPGI3KHaub8g2mz/SRXDGzfNredFlt+uTAOCAe2Rxr5Nxzx8j3X+opfwTyRXCtq
ja3JDKeo0kdsF5onjq3vnnI92i9u/ukZnJWWVI0jdlPQMVCktgivU/B9t53yXa7RrKK7Em3y
1AtbmMTxqTkfb1fYfocazTx6r599v5bj3GuX91TVWJPpPYLTp5YJGLMrqv8AlG+7nbQQX15N
dQwV9lZmLla5mlc8MjerHaueck2+w/QO63O2NX27a4QSKhOeqMn1IfocNjXMldGwfJPKdnnl
axuwYps3tJYxLF9QO1MX1c7U+9/Iu5bzatb3m37c4auiaK3VJQT/ALhXp5YzIlXtfKeRbbDL
Dt+5T20EykSwof5ZqKH0morhxTAbVyTd9pvP1ljctBcgUZkpRgTUq6n0sp8Dh+up17pzHc92
oLhY4m1euO3X2w3hqC4MErrtfkHfYdtO2XEiXVtGKWy3MYaSI+MUh9Q+mM1a4LDlG+2O5ruN
rdvFcxH0N1GnwI8D4Y2q7OT8v3LkjxTX8MCzxihlgQRO58Xp1xmHxNb/ACBv8W3x7bM0N5bq
h/TtdprliHT+U4Ieg8zTFitUN3d3F2+uabW56MSTRR2FemFmuetQRQ5Z+GVcaQzRRQZ/T/PE
iU19RXMZZ9cQwgT179h54WRqCyiuWeZH+WMtnDekg109KdqHEzowraadhkDgaJa1Dr17NhRk
K0IYZrn5nDg0YI7UoMgOlTgR9AFCFIy6k+OHVh6diw1dfDLEyFmIUoBkTmDhWCU0NO47+WIy
EC6khep7np+OIUyuyv1qen0xYz8JOlSCKg9AO5wGUwoQCcgM6DrnibKoHqFdRFNI/wBMSOp0
gnoSRVcSOyKRQsQpOJGJH21GodAP8sIJlNVLGorlniJ1dtRZsszpPbECIBqR9AMS04ChQtaA
Z4kZvuFPCtR2+uJCOpl9NKVzOBYAoWoTkVzAHXCqWhsiSB3041rGC1CgOY/D/DA0LouZqvic
jXxwKkoQCvboQe2IBrWijKg/4zwo1PS2XU5jr+w4tZI5CncfcT4dsREOgFenQHz8sCO6q1Cc
sRCa0on3DOvjhMERRQ1Kk0rgREANQD6jsf2YCAo2WeRNKfXCCIKE9CfHy7YhpAEqMipOeR8M
RPSpNcz2PYDvhRqBDprWv+eJW4Xq1VqTTpgGF3IbMEVHiMSCVBVeoJ+79uKNGY1YaRn38ziG
iZlBzH0UYMWg9NddPoMSF9wGR1fuI65YQZRWoagPUYBSd1IIFS2LGtLNgSTQ9qd8B0watABk
OoOHAapqQVoOgOWeKEA0geWYoMIPQHqaN0A8sTNpg5XLqw6HAoGpCVJrUfhgdJQsBWoyY5mm
JnoAYZhuvSnf64WfQ0GoAdO+IhcVchR6e474iEjSVVs18egrgOh0irVqK109/wDgYliN69Wz
AP5TQ0pniGULlQvqpTpQ9jiaRVoBSgPc9/riVoSBQ551yAzxK4hZfSAe+bf6YgidQ+VSKZqR
4YkDSy5OxDZEnrgJiEJ1H7a1P18cR1Cw0gyGgqc6ZZdO2EBaM6cjSvbxzxBBXJqZU6jsT44t
GUx10HpA0jM+JriMM4BFDkaklfMYDQHXVzSoFNSjI+efhiX1R6PQCTQHMGuVDjWqQy6Vb7TQ
erT/AJfjipkIyam9Ppr2+n+eBYI6iwWlVI1VPkeuBEmsgBRoy+44kEhW1GulvD/pgIULKFBI
0nIn6dcKw6gGtMj2p1PmcQw9SSdPU9B2I8RiVJlNKEZ/4YkcxotKE0NcvEjEhKaBBqooIIXz
/HETuzqCQCUrQsMuuCjT1H3KtdQFanr18cSlEiEqwJ0qOhH+GIpIxX0hqAg6fwxAlCa270oK
+OIiUDU1Pt7g4hoyGopJpSop2xIzZivQZkgVAr2xDUkYZlVa1BrkemJqJF9wjTUA1qKeAywI
DPQ6KnIUJpkc+mIURUihUZA1NfLrjUGGaRgTpJBJzqO/liFpzpoNQoxxCW0tYJJofLLA1DK2
pmJ9LAeoYkRav/ae/icSpFG1Kx9YPY9sFEOBqPp6dyf+eJswcZhT0NM/HEQkagdPqZcmr2rh
ZOD6FIyzoD54hqJ9S5jr3HTriJRsdK5VFSM++ImI1NStCcq/v64BqAyKxIjXOtG8D5nCJ0eu
oaSKU6le9PripJjr9JyVfy96+IwKwkXWoFNJFdVTTCIYIQemZyoexwNYEqxBqP5erI9yMQwz
BiFWlFAqB4AYojSghdVSBSlfr4Y0KH1qo1HUAOnfLviB1CkkL4V65YDDjMjoKgip8cDWBJkB
A6UrXxy69MUBBVFCGJ69MJJSNJr6a5kEZ08MREAlQqgU61HfywM4FjGWzrXM18M8sQpl0Fq9
lyp4+eESnbUSq9UFfby6d/rixuCMaOArLqJoNQP7sRN1bSppTqfHywM2GlJNGrQdMuv0/DDg
p2koyKOhX/D+I4MagRAGzB9RJP0PliFEPSmk9KmhPniJ1ARlcnLpTwP0xILxsxKnOhBqvYYg
IEdKUWlNXn/niIUBJIJaoy8AAcAEqoMznT7QPHEiFWFAQB49DlhJqeitPUDkwxEtaag3UtQf
j0xAzMsr/camtanv2y6YhmhELAMWyJrl5eWIZgiVWPTqDdyGNdVOmLFBhlMgBop0+ntXE1oC
fSSB+J8vDAzpgGNK5E5iueFmnNDJprQdTU1Nf8sKMyrqqpzIrXqDTxwHQk6+h9LdQcR+TIlG
bUaqBl2FfDzwHBI4ajjof+KnCQkUY0PVug6ficKEHdSWGVctPc1GVcGo2tSST9woCo6j/XAN
M7DXRWybyyrhBe7RQGUgUyetST4U8RiXpNXMitRlniODVBUH3AG6nvXEQtQodKmlcgM/xwIx
NdNQCB2zzxKUguhK9zWuWYrhJ09bDUupiKeX1oMSNQkqK9ftpmDTz74ho9YzViNQOQ7fU4la
jca6itK9WXOmIfImWjfeKrQHx8gcSwgrjM1GWfhiKNEIHrq6GoUf4DChx6WqCtdOTAdq+eI6
dQrV9tdLUNFyP/BwInZAgY1JUVGXfEqiAq6lWrqzqcx+zCIkZgBpJKr5VGQ+mIgzqxQAE/mJ
7HGUR1BdS/iwOeGq1Gnt6jQj26erSMq4GJPXQ6RqgbV6KUA74o6I0kLHSE0IO5NRXCAssbkM
Puz9Vcj9MQoTUp6jkKVIqB+zEEqs4aqUzyNcSlCGjLERrqNM2NaE/TESQFwQDrBOYzFPpiJS
fdpHbMnMEDpiWky0CiMCtKAt3GIioQ2rJSOpJqDiR4WFT6SGHUNT93lgpOSKGirn0briUR1Z
SCSBnmfHwwIhGgFdZVTmKUzbvXCMDmk1CpEgFD16nz6DEaNY5A5XX1+6vfwwskypRlerUNan
/ljJL+Y0Z0gAdx59sMOkqMrBQAzLllkAf9cQ+wk+4uDpVR9xH+GIHKsUalMx18/qMRCqSgq4
cBkGakD0k9/PFhQ3K6oayEMB9wGVfDphxMpuooVAB8c+uf0xta5MvDtTqcGnWg4rbfq94tbN
mokrqtRQEGvnjpwx3fPH0Lb2CcfjtrWHRNaXTrHJbzxClW7EGtfrjnfazxyuLb452qS5e5t5
ksmYjSrKdIJ8FGC9N9apuWWHJ3tp0ItL6xh1RPPGitMpHXSQAe2eDxy5sl9eYuEqTmunIk9s
dOY1vrdfH/G472Ga8kl9mdWCxgjUtOzEdsHTbQS7TZ8msJReQxx3UDSRJcRoFc6PGlCa4JRr
y6eP2p3iJqVyz6nwP44sTdcC4xb3tpNcKzJcIw0ilQcu/cHBaMX42ew5Nbn9VFGtxaSmMXaI
FlOnLS5WlfxwbiDJaR7FPabWEjuLO9b2jBNEHVdZoa18fuqMM9Trj+PNttJXuLWb2mU1jAWq
+nsPDGr2a54dk27kdiJLyCNbi3mZDNCiqzU6aqAaqjGLWcOLGPaL2222kU9ndtp0SIGpXJqL
Q+oeWGVTyqXf+LbHtG92s140jbNOCZYo9OtTWh0164r1rpmhv+OcL/qtpHsW7PdW0xGqCVWS
WIdPVWlaeWCWs1fLYRbZuNttEkUV3ZXZ6PGDIv7cwcMusy3U68As9vMt1byjUlXVCCchnpr4
YtOOLc9is962EboYo7O8tgxdok0CTSaCoH+OLcMin2/ZOE32xs8m7y7fvKanNtKmqOWn8DL0
rjWiuPikW2JvEKXkQvI2dUQk0GZpmP8APEcavmu08Usd3spEtv0tjcyMtwiMTQdyAaUIxM5G
g2zbo7qaC2tP6fumzsD6Z1X3UyyArQ1+uMyHGD51x622Xd5Le1UxRlRKsTGpUN26nLFFFVx+
wi3Pc4LNmIWViCehpQkAfswpuLbarNdxi2C+tI7i0lBeGUJ/MialKq+R60whk+U8eGzbm1sr
iVWAdGAoc86EHuMGmqZgAKkhadT9cK02k9QtSRkK4kPUaVPXw/0xKhUEu3ie3fCyMMeoHp6B
sAtMA1aVBHl4YgNXUinTsadcDZ2TTlXSD0GJYIIdBFPV1r2xoYZR6m1H1dsSEFUrQHURUnri
HVJBU6sqDADsTWoWp7jyxpo4OQqoJ7HEQSIwfI0bw65Yoz1BgZAkZUzPfExYS6SwyOfX69sS
0VCGDePftXA3phpdjlQjvX/DDiKRqP4jpUjriSQBMl6L0wC0GlA2sflyriJzVzmKE5imLRh1
6UYksex6HEYRJBqOoGf0wowaPTQClenicQ040Uz/AA8QcOKGX3K6B9orX64AQrX16VYdwe/n
iJd+n4HChFTXPMUApgNgqhiVZev7xiZygpnTIr2OFYf0ii0ypkAcQ0jQEN+WlAK5A9sR+TVJ
r3p1rmfwwIqGunoa5EYlacag1fDLPDFpMrKwKkUPfEiWuqoPQZg4FCNSuqpzyqO1OmInYqM+
/wC+uBAWoqDl4E/5YUkCs2mlMvu8MSDKaaichQUA/wAcRptVSSM6du+IeENIr6vqPPFgpiBp
qDn5dx54kdxlVaNXIg4NOG9st5A/a2EBpQZ/dhgOAa6l+priIaMp1daHLyrgRMEILZ1HQdji
IaZdhn08RgR3LfaPSR2OdMSwv5lTU+fhiNgDkQSadxiAi5A/3eH1xYAIAx1aqnpU9sOnDEj1
ZdMuv78GMhBWhqaeP4YjKYP2qademKq3Qk5VIq/j3piZyomDuDU0bLSR2Pc4moZxWg6Z1Jr0
GKDqUneigfcRllgOIydKE1Ap1B8MJyomNWoR18MK1HKRUDI0PfADOZCFApQCmodcBRIRVgPo
fP64SHLVqoG8O2CkJICnTliOIGBZq/bXLCMA7KGBU5U64iDWsoCk5iopgEoJNakoCKg0VvDL
EkeoE6R36ntiRuwD0H8R7EnAqH0qxJypnU5AeQxYkbjRVmyopCg5VPbEgaHKAn1HpTz74Vpp
KqK5HTmPDwxNBC9E6K2f7MSDq9tmDZdxTzwwXwYfUBoNCO3YDwwHQys1F0mg6EinXENOz1St
ADTIf8sRMVVVFRTyrXLtiWGBCRnLrkPH/lgA9TEVFWIGZ8q98QJ1FW9VaDIdK/hih0o2qdLd
fHxwkbIKBPyHNhgFhlKUc1FAaGuQy74mdS1bTmQykeogZYGoaMjOtKEVB7/uwqjWMKTnUGlQ
D44tGCSNg3qNT5HI4Fg9R0tqzIqFFe+EUSSFq1WukUJGIwekaj+Ve9M8sC05ZQQKg174jKKi
uSXrl0U+H4YlpfkBocuvjXwFcMBmrTQtABXM9TiQ1hAUUPpFagdRgQaFmUkkBsxXw7YkOSNA
BWmeRPckZ9MRRaKg9aHucSE+lQpBNOhr288TNpjVqgrUAih6CmJuFKoZqLQAggU6+WJGBJB1
tRuikeWKiIySpNRWtKgdKdMQAVOokZutQfpiMEVqaD6HLx74jqMsyEBB3FAcTNC6Fe9G7+dT
iFJVzJDalpWhIH7MRiSLUeoD+GXfA1iMUEh69Ovav1xKDdhqzFAaVU+JwFzPHJqAFadcunnX
CCf1EEZDx/zwinUVVlJ9Xnn9DgWgoMlr4kYdGCVQB4RkZ+P7cSkwLBSK6SQtDln38MWEpAwT
1MBQ1Lf7cODTF0GlhUA9gMziOnUK75gimTHtQ4KZTssen0t5U+vTAkelwoDHI1DEH/DDjNhh
mCB6iuQPb6HtXEpBkStkTQgUBPj5Ytazwmce2PTQ+X+JxmCnX1gHpX/HDEjRXKlQcnPbrl3w
qDRqBgK9KVp1PngaSL3Yj0gZgf6YkBZtFQc69e+muJkINdWnocz559sCONQOnwNSSfHETl2D
sSagU6+eNYkZY1DGtP8AimCsjUFTUGtO3f64DkGSpWgOY/wOeImLioA+6uYrTIYcRnaQGpUF
SepIwo+k1ooGk50bM1GLBpSaZYwanzAH7jiw30DKqpRVNctX4YhpKGCAjv8AmOI6FBIQxrUj
7vp5DGWcEiswJUCtKjPt/rhH1CiER+r0hjQmuGmTCq2gioOWYrQjP/DBQcFGGonSAeg7A4DP
TFGqy5hhnQUIpiJwVD6T9tajwp+GEifQAEFQO1Oh+pxI2pgooCprSp7jErTBNXm4pXzHjiB1
9gVBBJXwGQPfEYKgKkEUXvXoMRCvlkG/NWvTENCY1q3d/Ed/DEsJftogHuHqTiFpiGDagPtP
qPagxMSnFJPtzNKmngO+JuXQypIvQV1ZkLhV0Sq9R+UdgB4d8CgdEjHVUEdyD1xI4Uop0v1y
Nc64UAFD6CtT+/8AHFi0etl1Ek0HQA1FPDPEtAQAC2nUOjEnpTvgRzmP5YOoCp/xpiOiDF6u
cuwwEitVGfTJlpTPELTUNPQh1g1LdqYTgWMgahGoN1Nc8RFG0ZFApPYU8sSpgM9JPo8Dl+/B
ooFSSMBlXKuRHY+eIDVSysfyjKh6f88RMEanqXLqK+HniQurBR6dIqRTFqIxEHUTQDOniTg1
AkhnFDTQCjAdsVqxk4OFbbuW4S29tyzbdVdSm4WSDUp7jXQE+IriinONHx34+l2febW9feto
vYImIlijuNDkHIhdWX78VpkXe+jiNryGO+k3J9uvooqfpZk9yCRSevuxAnLxxRmxxR8/4vfS
3tjeyT2MMo0RXsae5GQRQ6lHqHiMsJjJb3s/D7OANte9HcZK0YxqVND11qwAH1BxauZWs4nu
/Abbjs23Nvz2z3AKmO4hIaMstCFKggrXvXFUxVhZcTtuQPBvF2dx2qlFuduLqQSK6qMNRA7g
YfiLNd08/FOPchtr3YL192sdOp42XQ6VP2hmC6jTxGKGRcbvF8dchnXcrjkD2rFQDYXMRiYB
R/GgNcZrP0ys1YXvGtq3b3ZrT+ubUoP8qQhD/wBy09LEdq9cUJuST8Nm3CJ+PRTW1pJ6riGf
qrVqdFScqeeA43Ww33Bo+KzbWvIhbyzxsv8AOhKuhfr6RXUvbLDtDN8ZteN2fIgdx3sW7W7q
8F1BGzW01MyCx9SH6jGt8LQ8i3njtvyC13+w3WDcYo0/T3NkoKTBCT64yRQ0rilGY67XfOOR
bl/XLHlVY5UKHbLgSRUr2FK0f8MFWIYuY8X3a0vtum3CXYpZpCILsISpFdTEaaaSfPAorOZ7
3t0+x2+3R7km7PH9l2hZmXTl6g+ak/XGlinvb/gl3sCBdsm2vfYFVfct2rBKRSrSVPf6VrgV
XWy75xjc+Kf+u7ndnbZY1os7IXilUNrAJGanFKsXV5zDiVhDaRWd0bpYSsc0cVTSMimuNmAD
9PriUQWt9xK13mXkFtv0E7XCsJLG5jMEmYrpVu7fUYosZ+64n/7dfXe5bZuljEkspP6e5kZJ
kHbUmn83WoNMMuM31e8L4Numxbwtxcz2d7atGyk2k41iufRgCanww3rTJjpDca2bk99usO8B
ZiHS52q6QxMA4GcbgaWPhXGdMc55NxTkW13m03d+20GWWsFzNEGR0DBlPXJh0o2LVYxm97Tx
yweFbHeBfLIwEzRDVQVoXzoVP+3FjDcF+JT8K/o1vyWBJtNI5bhGjAbVqo6V9Ofhhla65tY/
afjrcd4jdtv3PbZvbJSWP3jrVq9aaft8MP2xj6VebBxXeeI7iu5bi1vcbfodLie0kErxhvzF
G0t+yuHZTPGT5LewTb/dT20qyRSSFo5ErTtmpxmmN3xHm9nNx87TJvR2fdLepS7uEEscik1D
Vb06u1DiI7nlMFpvNrJuG/RblZFWX/4yIyqzZe4zRgEZ9VI+hxasTjmfGFtDqvA0lvdB3jCE
l0LVDJT7gB1wKJN73zinJbC52ZN6itWf25YrllZoXUHUPV6dJrkVOeJYoOYb7Z2W2WFjb3UF
3e2Mkckc8EnuRMkYpU06HxU4ZFuVa7xyjjNztE+82F1E93coiX22yUSf3jQB0alWKfjl3xS1
fFNxTmtvd8bTaE3pNm3i3b/z3UYmjlQtqDLqopNMjU1wUs38h7pcTtbWs27w7p7ZEgW3RBGM
qEh46de6sKjG+WLHDvb8CvNiiutqgn27fVCLdWqlmgb+J6tVa9xSmMzqt3nWXLDTUD9vfzOH
WcCrHV6T0698Qa673i3bhdnaWu4JLd2pCS2rx6J4wxYn2pVoHibuGzBwYfsreI7nb2W/W9zc
3L2EaV1XcYDlCftLIfuWvXAZ8oeU3JuOT30zTwztLJVprevssCAAUr2xu3wczF3ue6203DrC
G23BXntmEM9rIgS6jUhqqJBlJC3avTGYelXwzcobDfYbm4vZNvALRi7RBKE1Cn8xD9yfxYaz
Jjn5NM02/wB/cM8U7STOxmgJMMlaepK9FOK3RzLFxuu7283DdtSC+R5YaQTwmPRdRZGqah/5
IO6k5jGZD1VbwrcLfb+QWs814bCFAwF2F1qrsCF1p+ZDX1YTzYg5PK0+/X8skkUrTzM0j25J
hNe8bHPSw/ZhEXG77lFNw3bntLuJ5Lci2niZPbu4RpJEWoZSRd1bqDgi7cPC9yt7Lkdtcy3n
6EKHCXWj3FVmWg9xO6H82M1v8OHkjSSch3KSSSJ5ZJ3f3LckwGv5o656TjVc5zi63ncoG4nt
P6a9hmuIx7N0iqI7uMhTSNyp0vH3RvuxQ3lXcG3CLb+SW8095+hWMMEvGX3EVnWirKv8DVox
HTFWpXDyWRpOQbmzvFJI9xK7tCdURYtn7Z7r4Y0JMVLJqBXIhh07iuEPU7/nstlxDjtxx/cF
W+tgLW+hIrT24aaZI2/LX7TjGN1e2vLeP3PI7GcbjHCt9YmNJmqixXAcN7bZj2+9MQxRco2H
lc+13Yk5pZbnb5v/AE9powzquY0Mx+4D9uGVnrHlBUkVWnTriNjb/FG9bft26Xy3d0tmbq30
W0rZASKaj1dvxxYfwvfjr5AvLq+3Ow5BuryJc27CxNyV0NJnUBwBpJXpg3KxJq4gu7neuJ7H
/wCucpttmks4xHexTSCNtYjCmMqe1RWuHW+oo+L3EnHuZbqOQbpBcXe52Y/SblaurRTSIxcZ
r9hyzDYM1jmyarbHll1zDju4cc33clj3WAtdbRezMEDugJNvKaeH24rMO7Na6G7v994psJ43
yi12iW2gWLcIZ5Arh1QKVK1qCCPxxc3FeXjXN9t3nbd9uF3eeC+urg+8by0dWjmDZa6J9p8j
hXMxnRIQak9O/wDrga16JwfkFvDwTkW3Nf8As7i/862j1lGKkAExVIXVlQ0zOD8i/CH4u5Xt
ltf7xZbzem1k3y29mHc5tTIsulkCy9wCG6+WKHPGo4/sUXANi5Au6bjZXVru9m0FveWcgZFl
CuEjlWur+ZqoGAp44vyPwzPy5v8AZbnxvh0233Xv2q2TpPEreqOWNYkbWvZhQjGrVJ6823He
L/cJIf6hcvcvbxiKGSQ6mEa/agbyxar8uIFFYH6rmTQkjocFi171ue2WXPds4nv22bhGltsU
aR7rZFlW8QpoNFViFJHtkjPMdMZzzGrPdWe7co46/Md02+63a3WHf9rSKw3Jm1W/uDWCkzDN
c2xqRjdqm2Ljlx8d8d5Ja75eW9xBvNg8VnfWr6o/ejjakTgmoLa/S3Q4rGrmY0W1co47JyXh
9z/UIlS52Sa0ifUAv6hfaBiep9Lek/dikH5VOw7vxm74ds1pJusNluVvdXMVjczMNFtdB3b2
5gaj2pUOmhFMOixdyXD2PFuUbTu8G32G5z7ZdPZxWNFjuYhCwLwuTV6Mc0IqMZxf0m8+Md/b
rxadbluSC/tntJbVrOWAPS4hlqD/ADgaUrpqCMTX4cfBorziF3znZ92uEst3Nq13YlJAPcQN
IwkhkBoTpcHxw2MczOVd8U71Ztw35Csrm5H9QvtulnghlPrkVYpPcoDkzLqBywT5N95Vfxpa
cRvdo3OK95JPxnkQQGwvvd9u2eLSKI65BiD1Fa9xikytS+MXx/Y7je91t9tW5htri4d4oprm
Qxw68wKsA339q+ODqrHtP9wPCuQTbHtG+28azW+yWAg3WONg0sWkCsmn8yVXqM8PNF8uvn5G
qBqOoEVFPpXDad16t8P3u23fHuUcRa8hsd1363H9La5OiJ5FBXQZOz9x+7B+dHfP2mRsds45
e8S+Lub7HvN4H3doP1AtCxNIkUBZIXY/zEPc9VORxT5EmctdeT3XJIdq37jO2WG82QtIhPfP
KyX0Ukf3RrGMtadaMwJ6DEc9eaWPOI1+UtyHJFgXY+Qxrtu6QI2qEo6aEmcmhUg/fWjL+GHq
+NSaH5ChThPG5uGWRC3lygVpIyDFe7Y7kxO47TRldNf9cZv7c75MaX4R4PvEXH97vEuLe6s+
Qbc8Fk0bagkullMcuXoIrmMHNdM8Z346227PEeU/Ht0U2zlc8qNa2l0REsjIFQrE9fuGn006
g43Z6xJ4qN64PzHiNl+p3y6ognhluLMSs6SW0bhvdiZ8nkjbJk+4A1wVv7a9q3l77db6LkvH
tntd12uWCGeLc4p2juAy1J/loCS6ft7YpTjwjcb2y3P5Ma432T9FaXN6rXtzYvqZE6CRHUVq
GArlXri6jPN9Q/JsFzbcvu0n3teQo0cTW26BleRoqH24pGX060HhhsmD8swpFQ2QJNCB1qO2
MU/D0jhe4WG8cC3XhH6mOy3i+u47rbmunKQStkDEXAOlvT3GeHmYb1qzk3JOKfH8/F9zQxbr
a7pbXiWxyL+3LG0iHw9KalbowxrmM3r9L29sLrcvkmP5G25orrjrC2kgdX9Yljh9uSKda/yT
1oW7/XF+FblVn9a27l218n4xtN3DHue47oL7a2umMUVyKjVFq/K/pyr1xbiy1B/V4OKcAh49
uKsm6WO8W24G1P3tGkqtIlD0ZNFQTkQQcU89Pyvrvbbq6+S4vkexMV3xpxDPDMkgLEpAI5Ec
f/ZOBWmvIkUrU4zWvh57ay7DufyhNcS7pJZbZf3rPb7tb0WSNmJaJ6tUKNVASegxVnmn5DdX
ex/J73G7343a5sbuCZ75GDe5EgVlNBkGEdFp2OKrm+49AO2XH/3mN8i25ivONzFZ7S6gfUzD
2VSVJO0TINRo/WlMHqvir4/wwb6+48o2N13L3b+4WS216NFHrDIq/wC5W6nNeoxvVNjNfKUf
ObBIrffNuTbbG/YSA2xBguLiCoQy6fSs2kmtKahhnOs9dSfKa62y6Pw8bvaOS/qLVnA3vjc7
IAvrA/lg+uqsFanQj6YOLisvlT7J7fKvjSLiG0yRnkVhfteR2cjBPfiOpmMZbqwr0wSt9erO
Lk+3cb41xGw3Filzsu8m6vrYKRIIPWWcIc6xs9GU54vr+2ZVpYbRfbR8pX/OZykvHL8z3Fjf
wyLIskE8YBdKZAxirFTmRWmIx5lsHE5Oacw3DbtvvIbaW5e4ntZZtTI4XNUFKGrDvi6bwPDL
ocT+Q9tffkNs+3XYi3AZt7RFVY0WvSta+GCxmV6VsWy7lxr5H37le6GJdj3E3kllcxMJBJbX
BDJOrL6aLlqWtaHDaxJny8Kumha4mZBVPccgdjVvHD1da55b/wCBLiBPkzbvccIziVRqqBUx
NpVfrjnW49B4/tfMNq3jk6criMWzbtDex7dbFka091i0irozCa0GoV6nHQeY4OJ7VFa8S2/e
bKwl3mwkgJvHtpQk9lNExEscig+uL8wFKjFWJKr/AJ52aO+37h8G2RLLf7hYtDFIXAaYBl9p
S5yP3HTXxwT4bnyXyVxjfLf4b43Hc2bLJtN1N/UVAqYVkDhSw8DkK4uB1XZyu25Zdbdxbf8A
h0xttsfbIIt43G3cBQ8QEf8AOA9Xo6Me2Az5V3zxtO5bnz/j23W6/q9wutniVY0ZQJpFlcnS
TQZ9RjU+FP8A2H807RvUXx5wq5u7WRpdtSS3v3ycxSMEVVkYVoPT1xc/C6+W53drUQWG8bZt
d/d21xZQyz7htbp7TlEA1OtdXuLShoOmKCz1y8Zvm3fnHL57CyEd7c7EgjilCabyUrRJ/SaF
ZPSp/fg3FZsZKKLkKfEXJ9v5jFIm4wLDdbVa3irrjjSilofDQcq9Ri91nmTPC+E+Mb8OOcqn
/Rto3jaZI9skqpEzKHFFIPWrDrilzp1v/qqvhnYLgXG/xXkMjbxtsUddmZtDsI3/AJpSpUe9
HlpatP241/TqWnfG1+Qbezvfh3dbj9PN7trdK8Ud0F9+CVaBsxXSSv7a4zz8ufTA8UteV/8A
3NcivNq3GCXbGem67TJGDJHEApM8T1yZh2I7eIw9TKzLseXMCZGXTQAggeFO2CtY7th9ybdL
ONLtbESTxo1ywqItTZNSo+054xY1HuN/sMu57oF57sEtpuELJH/7jtrII3qwWOeRQSKUAOqm
WNaMc78c3a7+PvkLjkr/ANa5Db7tCzsgBmk9UeiajZgmPv8AXG5curubFjtO3JtfB+BQ7wIQ
qb5/NZ6OqozOCrk/aexBxnmejFLyO3+Vds5XuFhtFu9vsC3bRWH6lENsY5GJWMu+o6WB0j9m
LRg+Pw71/wDdBcPxqzZOS2O8uJbKBAZII5qLJGyPlpy6HuMEzfXSzMc/FLjkVx8hcZPMLaES
M0tqs0kao1GjZTbygenV6sgRWmKw5G8jv9p2ncLiyn2+/wD6dpngNpcWpMUkOlqwI2o6vT9n
jlijnWK4BeXj8XhRtuurO1S6nXat7sI1uWjiZ9T21zET6HGTAkHLDflr8Obe7PkG3fIdpcXF
tByZb21lMUsGmJ7y2pplQ6dOmeOnYVyw2+Lldb/t267rs16m13Mk6m1lkk43utvHDLKsaail
vKvqZ0HqBrXLBKzXz2rOSjA/lHQUrUVzHbD34d06k1oe2aEZZeGMYAkSOVAAFTX6eeIYcKx9
Jo2nIMe5wI7oopStAtAfIdcWrDSAaGGouq0IOFYYZj1LTVl6cz+7EZCcFfUM16KCMzTEjvK+
hQ5FKU8P24FaYe0VatdJA+o/DEjEj7CTXtXI/TEDaEXUor07nOvXI4ikJLKpU5jv4EjEqePW
GKgVUD6EHEKChUH1Zuc/IYjCc0oCxZB2Hc4kf10NG9L9u9fpiNhjEqEFDmTlqNaYZBhxOWYo
ARpp6D/yxYTUAAYUBWtQMjiAJfUNVaE98qZYlTlidLGrBhSmIQy/fXp4UGZpiaiUk0qagnt2
/HEaimRtFFBCihKggmuBzsJSFIYVU0oTn27YlzMEaM9BUkde1RjTfyVAzFAv25Gp/wA8SOxY
A5BQKA17+WBGEjF6lQCO/l064lDamoAaUzoO2FEI1qWIp0yrUEYEKTVkFy1Z5ZfhXES6ISGO
dK174kH0qgVBXUa9ciO+BE5ojaV9dMpDlhRjT7V/NlQ/TAKZSdBHRq0UEUFBliBIUWvpq1CQ
vUYoZ4eRF1B6nSo9GkHOncDCQlSK6q1ypXqa54zppITpq7HwBXpXw+mJSHpEAVLasqjCjH2R
koIag/DEBoVU5oQT0I70wI8aIzEsaUBIyHboD5YqoZWHrXIoSK5Z1PTPEDspDKtdMZ+4Ma5+
eBrUchBBVGyHcCuFfKRV0R5gUoTUnr4/XDEHX6vsDA+rLrq88SGukVPc9CB2xAJ8QQG6srGo
P4YjAhyEKnJTQ5eOE6cKFWqNRhmRXM4zUZtbMfTkBXMdMADoYkPRTXv1z8sMGginkSU6+x9K
0qKV74atSlpWJqMuqdjng0AUyIMivtk1ZepriKXSS2ZrXKh64nXlDcoqqQ65A100zxarjJ7n
/wCcUY0z9PT8Rjbk4/cHn/zwJ27bldxrSiDNiT28cdf531X/AA92+MOIcS5LZXS3ct7aXsRo
t5alHRhlk8bDGP7Rys1Wbzs8O378NsFyjnXpinlUxx0rQM+nV1xcc67fbG55FwTbNv2u2upd
uktndkEk9rcpcW8oIFT/ABr+zBJ6zL+1punwibuyiu9lVbO6YV9q4uaQsCMtFQ37zgp1ipvi
nnsNxc28lkFvIF9wwiRSGiJpqjYEqRh2G2Vx2nAeTXCTS2tuJTbf+WBmCyEZ1ChqBqdcsX2Y
lytBwbgux79DdQ30t5Y7hC1BNB7bw1P5XjbOvnXD3PGvvrPXHGJ25Adlt7hZC7hYpXPthmrR
dWdFxnmZDa6+T/GfMOMwLcblZqsLHSJo5EZDXsKGv7cWsssi+plFag5A5VPnjQaHjfDd536R
ltGiijQVFxcyCOIHwLUOG4tXl98S82tY1nEFvc20hCfqbWdZYgT01EHUB+GMavl1W/wh8gsu
v2rVXYZRNOFZ69DHUFSPqcW6Pq47H4k55ezTQLZJDcWraLiKeQIFBzVqjVUeYxasFvHxbzna
o0llslmtiQBdW0qyxVOXajD9mFqJrX4p5vd2ouNvSzuS9a2/6pFmAGR9DUofLCOtQ7f8X86v
pHWLb/akhYxziZ1i0Htqr/iMVqlQ7/8AHPMdih/Ubpt5Nr1FxA6zR59iynI/UYtalDafHXMb
rZot5stuN3t71IeGRGkAHWsdQaDFoqgf0ytGyssiGjK3XEL0JWdqZ6GpVVpXFjM6b3YOA7Xv
fH23GDdJbTcYgwlgli1wllz9LoQy1Hjgb1ntk2WXcd7TbtAnkchRH7giZiv5UZqYfr4Gs3Hg
W3Wu/wC32It763juTSW0uQBpplWKdTR88EosTb/8Q8gt6tslvJeW6qWKySx/qA1emn0kj6Yq
YyU/EuTRWz3j2MntQMY7rIa4mHUOn3DDKKj/APWeQLtS7qLN5LF+sqUYKK/mFaj8calkanjT
7PwKx3jjsm6Wm6vb30BYS2dxH/KNBkUaOp9Q8e+M9Fi3SSN2iYV0HT/FiTq2zZ7vcZ0tLCD3
bmRqJEKAknwr3wVlov8A7svkA2xn/prSxoM4/dRp1H8JiJDfsxapa49o4PyjeZJEtbSksR9s
wXDCBtQ6ij0+3vjXisQ77xDkewaButhJAjHSswAeImnQSISDg8q5X1hwa1uuLNutzFdQtnLb
XVuI57ZgooBIoOuPPqTjVv4F5Y4JpdgtVpX0/wDMYykkNvcXUi28SGR3+yNak5fTFIWjk+NO
cRWq3Q2xri3I1Rm3ljmb/wDRU6sI2qnaON7zu19Jt+32ryXsIYywORGVK9a66dMSBuGybtt1
4LPcbZ7W5rkkopVT0I7EeYxaLrVy/Fu6Jxz+tLewa1XXLamoOkdCrgkMadsV9GYj47wTb9x2
p913LdnsrdWPuvEizCPQafzc9SV7ZYMxse6/Gt8jQ3G1Xse7bfPUpPENDqv++Mk1/wDpwxUF
3w3j9tZJ7nJYbXc1X/8AErqLQhYGpAdCW/dg0RHsXCBf2z3m5X/6K0cEQTqnuAlTmxBK1X6G
vlhVjg5DxDctkSKZpo7vbbpPct9wtzWN6jzzVvLFa0h3TiPJdrtYLq92+VLSUK0c4o6DUPTq
K10k+eCVJdn4dyvdYDd7ZYG5t60L+4iNTyRyCw8xhoq73/47jsttt7q1/UR30pVJbGaho7fm
DmmnPxywz1zkqr3D485zZ2xnuNqeS3C+p4GSYgAVrpjLGlO+C436ztW0DUFOnoxFaeVcCdO3
bVuO6XDQbfZtcXSIZBHBT3KDqV6Zjyw1mdatr3hXNoLU3t1ts89t9zyKwmcCmZkVSzhh54Fz
aDauF8q3e2afbbIzwj0iskaH66XKmnni8atVm6bVu2zzNa7laS2dwBU+4CAR01K2YYeYOEWN
FunDobDjEG5TPcWm4PpLxOvu29wr/a0MqVVTQiobFFt5ZUFgQGzZTVexH7MOLXVYbdf7nctD
Y20l5OFLBYhVyBmaDywN477rh3KrXb0v5druDaMuozJSUKOnqVCzL51GJjKHZeH8o3W2F1tu
3m4hBydWQV+moio88VMBHxrkL7r/AEpdulj3WhkFoy6HKr9zKTRWHmDihTvwfmUJnX+jzgwK
HkQAM+k91APrH/bXDsc+pZ8Its4jyndI3udv22SdVIDkFUbV5qxUjBbDPsgk49vgv5bEbfOL
2FCxtyhD6RmxA/MPphjVlrnsts3O8dorS1lnaFGkljjUllVT6m0jPLvitDv2riXJ92tjNt9h
LcwhtHuJpXMeTlT1xm1barr+0v7K4ktL22kt54yFeKVdDg/TuPPGpWb8p9z2Te9nSN9ws57Z
J11QystI3FAaBlqtaHpg+WrcV5ZmNSPp/wA8Il9WvHeO3O+Xv6a2qkiL7jAEe4VH3CNCRrYD
PTi040k/xeJo5n2Dfrbd7q3BeTbiohmIrQimo0YHsQMZ0fT1nuO8cuN73hNrWUWk8gcI8qk6
XjUkq6jMZimGXFn6T2HDt1vOQXOzPQXVl6rwRUc6FIDewPT7ho1aYqtXN38YNNaTS8d3m33u
aFdc1ktIrgJ5As2dfytTFPn1dees/wAZ2GbfdzG228q20pRyJJQSuqNSxVwMx0p5Y1Zhl2al
27h+6Xe8Xe2SaY57CpvI4SJWAUgM0a+n3AAa0GeM6ubq5ufjG8azmuNh3i13yWNfdltYv5U4
j8kJbP8A2mhxS6PWd41sE/IN2j2uCRbeZkdo5JAdOpASVamYri3FnnibbOJ7pd7zebSaLd7e
T+tRD7hVVajtGooZNPgMVonK4vPi27exe72TdrTfTEnuvaW/onVf4ghZq0/hNDilOKDjfHp9
/wB0O2WkywTFHZZJgQmpFJ0vTMVpTEYm2vie6Xu6XO2spin2/wD/AB7T/NeJQ2hnCrnIFOdF
7Yr4ebsW+4/F19FYy3ez7na74kCmS4trU6bhU/i0VbVTwyONfYVnuNccn5Bua7baSxxXEkbS
RvLUJRBqINMxWmC3BzdVt/Z3Vpdy2lxH7dxbSNFNECDpZOuY6jwIwtVzgFmomf7gDhYjR7lw
jfLC32i5Ci7g3xY/0bxCj+7IK+yyddVMxTqMY+za5g+Jt2feH2m7vbe1Ag/Ux3ZUurANpK6D
pZWB66sWo25fE8lvZyz2O/7VuE8VGFqHSJmA/KrFmAb69cUosefuxr0NfzDpTCyuOO8V3Pf/
ANYNtMbXFlCJjFIdIkBNCAfHwr1xfZuQ3G+Kbnv8O4vYtG8u1xe/NauaNIK0IRvtDCnQ4L8s
ZrVbd8PT3G02d/LyGxtDfRLPEJ1IKiQatPqZdWnph10+qvtPirdLjf7van3azgSCIXBvYz70
MiM2gZKdSGuZDYHOcz4R8i+I962zbJ9w26+s98htEL3kNo1ZY17vpq1VH7cO/tu+RYWvwtc3
O3Wd4+/2FrBexRzxrcKQwDqGIFSAxAPbGDGC5Vxm841uLWF1Nb3AdfchuLSQSRulSoY6c1Pk
cajNiiGok0YBad+/jlhwLW04hy69sFv7XZr25sZFLpdQwmRfT9M8ZNVQtr6W1e5iheS1gZRc
yBTSNmYhdfhmCM8Jnwb9NcPbT3vtE2sLL+ouApKI0lVSrfl1UoMRkaFfi/nE/HpN6t9qkZba
jNaFHWd4SK+7ECKSLTwzxT2rrxj0ElxLHHap70kjBY1UZlmyCjz7Yvhnqb8D3DbtwsLuSyv7
eS0u4spbaVDG6V8VahzGGKuU6iyFailCTX0kjpkMTMq841xO/wCQ3ErJLFa28JRLu+nDe1CZ
ahWk056DShYdMX2xr6ytBzT4b5Zx3aG3Fp7fd9si0fqLiyldxAjdGkRsyv8AuH44yr4wUtte
C5FlLA8V0wUm3caXAkzQ0pX1ChU41vg4SW23bldXI262t55L3Xoe1RGMhYdRopqrg1vIl3XZ
t129kt9ytLmzcglVuVeJyAM9JalRTwxRm78CtOP8oexTcLXab64sWVnW6hgkaNkz1MCo9SjC
Y47SHddymSCzhnu7qQFoo4AZGc0oaKtSRTB8KWFuG1bxtbCG9sbmwmejQ/qY3iL1FGIDAV/D
DfWeek0XG+TS2Q3GDZ764s1BeK4jt5Hi0J1ZWAKmlDjLeuSztr26uEhtLeS7nuBqiS3VpHeu
ZKqtTXvjS+8zXVuce82ASzvYbq1kere1dCWJnUDwemofuxnBepVYrMxagpoNK+f0xVCM0iU0
sUbqxHWteuBR1Ne3UgrPK7vpodbs561zJ7eWFs8N/dR1SKaSEMRQRO0ahvE6SBitZ6vqBpqu
2ZY9anMmpzJJwSBPJe3U7J77vIFGkF2LensBU9BjTOJrfcbyCErDcTrCWYn25HUaj1JCkdu+
B0vppLmS4kEssruwKn3GY69QPp9Rzy8cZusfUdxf7hOSbm4lnCmqCSVnAqP9xOLa1lS2273k
FYobmeJW9WmOV0APdhpI60wi1zJKNbMT9xJYg1IJPWvj3xoDjMS5kU8/H64zSlFSDQgnLyp+
GIlHrLEGpLZavxrhEdE93dXLiaeR7iUAASSsZG0jKlWzywtaUN7uFvHcRxXMkUdxk8KSMiFf
9w7/AI4GLLoQzBtRYFwOoyp55YGkk1xeXEonuJ5LiUj1TyuZHYUoAWNTiqiVdyvoIHtYZpYb
aY1uIUchHI6a1GTYF16hRyKJU6u/+mJA0FQSBlSlO/8AwcOsZjoi3G7S3e1E0qW8xrJbrIyx
vTpqQEK344F8+FY7vuG3vIlldTWYfJmglePVTpXQRWmLT6kv963W+jWO9vLi5iLaxFNK7rqH
5qMW9Xng2s/XVe5Beg6nKvgPA406aeC5ntZFnt5DFOjBopI2KuhHRlYdDgX2w8t7c3Nw9xdT
NcXDsXklkJd2Y9WLtma4dZm6m/qd/wD05tr/AFEy7ez65rRZGEJI6EoDpwbW6itLq8tLiK6t
J2t7mBtUEsbFXRh0IYdxh0IZr26muZbmWVpppWLyzSsXd3PUs5zJxX1DbeN3bbTtn6uZtsqZ
UshIxgDeIQmlScXI+XIsigrlViOvnhW4Vve3drOJbZ2jnWjrItQykHIqfEdsTG+rW45lyWba
X2yTeLuSymP/AMi3aVvbYltdSte7eGBWg2ble/bJI77NuM23tMKSfp3Khq/xj8xxOu65Lrdt
zuBBHc3MtwLPUbRy5/lGRtbaBX0kvnljUuM1b3XyZzm6gktrjkF7NBKmiaJpao4IoQQcqYjX
Hs/MeSbRBLb7XudxZQ3IAuUhaiSAZZggipGRwVa5Zt93iW4tbiS8maaxAWxmdyzwRo2tVjY5
qFY5YNUXO4fKHPb+znsr3fLi5tLldM1u9GV0PYilf2YNNcuyc95hslu1vtO6z2EEr6vbiagq
O9DWh/DFGfs5I+Xcgt9ym3ODcJob+41rJdA0d1k+5SR2NcNX3xLu/OeV71ZRWG67tcXljbNW
K0mYFfSNIOqms5dicU6Z+/6dOxfInMNg28WGz7tPZ2esutvGVA1EdgQafhi103XN/wC4cmG/
/wBf/qVx/WPSTfah7mQ0DMAD7csxhMmOjeufcs3mOZL/AHee4iuFVJonIX3AhqusKFBp2NK4
Nxnq4ore9uIIZ4o5pIo7j03CRuyq47BwD6h9cQ5xCZH0nTU6uq+B8sWtmErK9KE1BoPI5HLB
Q2Fh8sc7sNsXbLfdpxZKrL7MmhqKRTSGdWbTTtXBqtVGx8y5Dsm5z7ltt/NbbhdK3vzq2pnB
/jD6g3lXG+ponaffOc8p3uB4dzv5LmCWVZmgaiR+6opq0oAAaeGCWxq39O//AO9vn/8ATBtC
7tLJZhPbZJFjchDnT3GUv9DXLFo1xcd51yjj95cXW1bhNBNdkm5f0yB6mtWDhgWr3OeC+rfA
8j5pyLkF9DuG8X8lzNAF9ummNVK9HAjC+rz64ouPFpN8v/IM9iu3T7s8tsgVQzRxGWgzH80r
r1Ds1a4az104OM/InK+NCc7LfvFHcHVPEwR0Zia6iJARqz69cDSDeuZ8k3TeIN1vb0/royHS
WBRCEZR96KlAjHu3fD9mbc+Fxufy98gbrtz2F5uoe2WmhlhjSYZaSfeA11K5VwWiTWOB9FOv
QAeQ7A+QxN4BBWStcgKaQag07+eLRgSRqBFKgZAVAxIzEkAA0JNajqMSwahdAjPUd3z64ieS
MlggOfdT5d8StMtCwXV6h0p2xYgsrk1rVgKVwAjpahcZg+uoxLTuArVpQd/phVOT1IBKrnXu
QT2wYjxxkhj1pmpPUeWWE4jJBPpOQyJxM6kL0jLdSK0H174lUEUa0yrWhGZrXGdWDR9Le336
06Z0wtEC5YhhQ9Q1akYpUYu1aA5VzB6f8HGkehJBrTVmKGlMCESpDqxrSmXQHxxJDUhar0OS
qMWs1MoQKEqC651HcYiZnJ0yA1JJ1CnYfTEvgzqNYoaHwbvihJiFWgWiEEU7kHErTVWpzA1A
ACmeWJjQsygr6qU+2g7Htli06JmWtUFK9icyMRggCV0Z6c6NkajEQMqx6ZC/ppQqOuIikYe2
AGPudiwy+mIG0yFAMiB9xyyIwIhrNQ5BTp4f8UxYoH2QNIDUUD8KfjhJRI2k+rTpzGeQ/wBc
GmCkPpqFr5f54RRKEjjUnPUNTNSuA4B/S3rNYzQ5dPxwjDSaaAhqMOgzofxxAaOioWLeoemh
OYr5YKpQuuo6gSFGRyrjLpLplkGnSnpXoKeGFU7E6SpGSCueQriZ0CKn3oTQ9R44VJqQlPcN
aMzdU6npliQVK1ZMvSM/DEqWWkaSdNKF+priEM1aAGpr9q+XYYjgjSmlFqpFPOvc4kai10MK
rkAetD1yxAK/yz37/b2WnfEkiAKA4FSftc9CP8sCCzjXV6hvPMUxHTLp1UJBBzoPPGjT6/bq
WTUAM8sxixmiE66ADXw/5YzYjS6aqpJCd9Bp074gbShQFTQAj1Cufma4kQJYlSRSuRrngR0L
AUb7mND2NMRw9FCrpOVan69jhb+2Ipw9GDPVl71yp3641BWR3hQLtmLA1Ppp4Y1GHDVf4f8A
HFiWO2sovU1ehS3ppn6u3XGufkV9O/CmzVtJJW3HbgzgEwNOIpVI/iRqYP6eiOPmfEd0XlFZ
LqziincfpbhrlDCw/ibTXSfwwfz6xuXG73vj17NxqBILuwuZIPbJEN3EVIXOorSuf7cZt9Yu
0W6bKm77RbR7TvliWjdWntZbgQuStKoCCVOeLacHu1xaf1CyKblFZX1jIJNLShS5FCQzAlCp
+uMxvEW+8h2XkNjcw2MkG1b1BqVm1qkUvZtNaY1IrFT8WcduUF28m5WLmYqwU3ASQMKgj2+u
Y7411XORluWcbbb+WiXeZBFY3T1d7SWOVljNQNOmvTrQ54JTZfkXLtv4tb7RH/Q+YT7rCrAf
065EheMEdtWX7sEg9YVANJNAeuZ/fWmFp6r8Mci2y1Fxt89zbRXUi6olvQPZceHuEEA41Zq1
tdw3bfrGMyXO2bFa2TSZy271JB/NVG608VxgyMj8jbuf09lNZ35ZAxKtBMToYdwFOWR60xQf
b13fHvMLjcLGYX+4Br2Goi911jk00yo3pwdQ7rJWHMd02zkEqvuEwsZZj+pQ/wAxaVpWhB/d
jUax6LPst1vG4bZvG0va3tlCAZmhnQSdcvSSrVH8JwMLZ9zsi8yRXaLeQqdUYlCyimRGmvUY
cTxpuY79afrLOO7Z7aUsHjl/mDM5FA32mmKRNFxSytJtjebaubPsu7uC9xtcgaO3bLtUiurx
w2q39MFcrJ70plYSOpIaRTUEk9anxxmMyftCgYkIprU0Wvj9fDG4zj3T434vvC8XkRvYm94s
0TRTIynUukBtJOYxno8ysJtvDuRrywWr20Ud3BKHeO4lSNgNVQyMxo+X8ONTrxt6RyzZd1g3
LZ766jpa20n825LgrFX7Sc69TTGVqbctn5K/IbC+tboX2z26B5RBLWRSe/t1DFT9cQAb5rbe
p9xiu47mz9kJfbcGVtarX1yKc6074Fqi5hd2m7bD+u4xOIYX/l3ti/plUV7UJqvbDisWfBeN
70eJyII0Pu6zEYpo3VqimZB79wcNLzC+4byw3sqf0i7DqwDosTPpJ81BGLUteIbFvlhyOza/
sLqygMn/AJpYXCjxrli0Y9Ku9t5V/wCzWl/Hcfq9mgjOv2pdToGH5ozR6D8cSia5kh3ZNw26
yuIbm/0em3EypKSwyZS1O/fGYvXnG68e51tO3Mm5XMqQysEe3lm1Ll+YVLK31XDVja8R43yK
Lhk8Cxq8k6Se37U0bK6uMvtPfuDhVeb2PD9zv+Rf0aQptt05Jpe/yjkKgUHUt2phGatbDjt9
xDltqu/NHawN0utReJwcgRpGoCvWoxnTI0fJuJ80ud2W92Vmis2UMt3bXAKAr30I1TXuQMOr
GY2+2vZ+Wad932Tab1QCu5xFSdVPSNa5Z+J/HD+FD89g3qPdLePcd6i3mP2wLW8jKnShalH0
9DXPrgk1nfcbnj3GeQLwOW0NuGmmjkaJUlR9WrNSrVIGG03lS8M4TuEdnJukdux3WNmV7UuE
liZDQK8b0GfWuLT6s49y5bab/af1+wi2iydiIrqMKImlOa+4yFkBPmBjOtYvbtd9u93mt962
W0u9hdPRe+2srGviVJI+uWJmRTta7bf7FuHFtkuI2urY6Y7UsoYoWD/yyT6gv7cMuFUcxmis
eMWu2XB9m+iCKICNNQozYA5jPphlZ/LmubTdk4YZ9p5Ym5bboVb3apyqvHmKxx1qw0ntlXFO
dq6W+y28+7/HcUGySiTcrUBHjjfRNG2vVQ0KsFI7jFZhjTXAv7Ky29N1u1cCkJuZmADSMK0Z
mzHh6sUVVG1bXybauWXdxMs0GyyRk28sb+5bg5UqqltIP0wWxiSvO+X7RuU/IL6a2sbie2lm
ZopIYWkjevdXjGk4TJVr8Z2t9bcm03UUtj7kDBGnR4wGFDkaDFVzy3Vnb8tt+XT3W5M7bS6M
ltdQsPaPTT7gj/8A5mGDWpEO62t7uXHb5OPMJ9xt5mVBbyKksbFwWoarQEeHXEq8+5Fa85ig
trLepXniZhoW5fUyO4pTU4BFaf8AbiljNbibYeSR/GT7athKbtUBFrG6yEgOGGnSaHLwwyxd
yvJG2beBIPb2+5auakQyEV750wy6LPy1fxnHc2vJ4knje2lljkEYlDQsXAy0atOr6DFjpHXy
HnPKNr5dcvb3RT2WK+w6jQ6DoJYxTP8A3dcFg1s+P7nb7nxGO/t9tW/uY5H9yzsJPZkhZ3ro
UkqwHfGUkTc3v92s7WXapbOeNX0PcSiR1GVY6H11NK5HFA7PeumtbaR5JAYb0RBy5Gn1FSC1
cq+BwlzcttN5udnum2QO28wOmcDBJ1LHUQxqpoy/twiKrft8udustl3O4Y/rbVkhvTOtZgGX
+bG4NGz61H4YD+U9xttlse53PKbW4MUO4KJra7P8yAM4q0TU9SiTtXv3w7qvjp2bdIdx4rFu
dvtrbjdiRxLBYT+3NEzSMWQsWRqd8Zwsd8o7ot7b2cNxtk9pLASPfncSvpI/8Z/OKHx/DG+W
L8g5RDySLg9tTfLfeePyNHSoX34ny0R6j6iFP4jBPaO/fl59IOoGZ7174lCt5JoplkhkZHGa
OpIYEdwRnh1qPQ0vX49xix36wt3hv7nKS9UiWGatTUuCSkmWaPkcZg68U3x/e3J5tBdLCJ5r
h5meHWsZJlVizJq9OrPJcNXHTi5fdzLzHcr61aa0l98NHUGOaJlUL0rVT5YKLWlW+bj/ABuy
37b7aS23C8T+ZdCksMxzrIHrVXqvqjbDDYp/ji8nHMYZYoP1TS+60kIZY3YurMxQt6SwrkO/
TD0Jc8cXKdxuE5duN9ayT20n6hmicq0E0TADqvVT9cFo5saFNwk4zx/bt5sraS33G9Wj3DH3
opQakyLMuWokeqNx9MUg76y4rfjW8mHMYrhYGuZSs3uQoyhmDKSxQEgFh4d8VjpFVyW/lj5f
uV7ZSS28v6p5IJAGhlTsag+oHxU41ax8NOu5NxnYLDfrGKSDcdwWs07ASRSmh/mRyL9KtE+M
z02/pV/GV3KOZRXKQGclZfchiIVzqBZiob7j3098XVa5VPId2mi5dud9t0sttL+pcxSjVDKp
Jz1A5g9iDiZ5+GmTcW4zsW375tkMlrue4xgNLKNcMtalnWRajST90TdDiatVvxlcyxcwhlS2
a5lKSmSOIqH0lDr0KSNXX7cF9XMkij5dcw3HJ91njZjHLdyOgdShpkKFWoRSmYxqwTqKg59F
DDppGVfxwKvc7jlW27HwnjX9VtWutvvIIkRoyPdhkjTUsiGv3L5GowSNb+F28083K43EfvFt
scpGQKuhlFNQ/NUeGLBY8w5063W2TSXXA22n2GrFusSsug9NT6EAKsOzHFvqec9TqA60of8A
lhY2vRvg7Wm67xIFWRktFZoWFQyhyTl4Zde2K/J5+Gv4JyfZt9l3u3sdit9qltbYs8kDL/ND
6l9Sqq1H1wflr5gr64lh4ZxynFhyVXtUUIqgmIe0M6UbqMsULN/FS21tybk36XaW26BLdW/p
N1qkaMaquh1gMVr0w27WZM2p4eT7Zeca3PeOG7NHt2+WCGPcrSMghrR61k9tQBKFzPYrhz31
dXYud4nt7XiXF1k4p/7RHLZoiAIGaMNEpqPS/wB1cZi+HgW/RWybteQ2e3y7VCJT/wDk+4Zn
khYdVZmCsR/DUdMbtYVTBT0Boain1wNePfOUX3LLDbOLXvDffFvd2sZ3SayjE0TCFYxV1oVD
gagT1xj8NXdXD2u2zcr5ALGOGWXedojnaJArLcn1gsqD7zWlSM643JBvzjJfD/Hr6y4zzH+q
2vtpLZ+3JaygF1aOORv5kZGVQag4LPVLcB8Xcq5RJ8W7um33ks+4bUtbOIUmkijCgj0sC2jI
0xU/bY8hfc7zeN6G43U0Vvf3lwkk1zpWKNJS4/m0SgWjZkgYzWuY23zieZfrtotuUQWs0sED
iz3mzqP1i1UkuPylT+Xzr3xuMX/2eWNq1Aqar1b6U7DFosehfEt3yuzl3a549ZwbtGsKf1fY
ZiC1zAxI1RL1Jj708cY/LXLVf0Cyutl3zdeFx3/Ft0gtZW3fjl/V4bmycEy+1rqaKtaeByyx
rV1yP5Q2K83e9+ObrabdbiKa0gil3BFCw0HtMnuTDIAZhQTh5+FL62G57bBNy3lHswD+qwbX
a3dl7ShLoXUOsrJHp9VRkD44zjO+15Nv/OflDf8AYdyseTbWL3bIkVpluLVUmtw50pdxBQko
EbDNxkOjZY6/XDmvQecck5txuz423CNbWm57fHNePFALiD3ERFD5gpHVfuIpXHNrFZ8Vpb7r
vPKrredvFjuNslveEbcuiYSrqZrq3Vela+oJ6Wr54umZM+F1zzk/D984Nvlluz7hfmK3Ett+
rsjDLbzkEQXCEKjKmugZqU8euKeLqbC43y3d5dosbQXFzw/dmgjLO1j+r2m5BjAiuEf7YQ1B
r9QH+ODxvGS2O53TZebbyN54zQ64J7642EVmgmp6L+1UZ+1MK6lA0+ONdz8ufMiz+U7vkO98
Ov7tLy15PstrH789vPaGx3Tban0XarRSyx09dB6hXtglw9R8+Ox6SsWbL1AUBB7/AEwGfHok
KiqgZ9MSlIFSfUxoOijuaYjadlJSjVND1BocQlEirmCxKjooyypkpxash1LlgAvoVfVTrhQo
2YNSlK10nyHiPDAZUrMEYmuQHYdSf8sKKNgwqMya1B88ZxaOI+ohRq/iPTriRBVqXArUHKlA
MISLGBQrUEfkPngKUipLhadPpXuMQtO4Rs/4iDn0qO2BacghSwNBSvpqKU8cKJWUEdSDT8Ti
WxIdXukmqqFzy8e+DSYvpWoBLU9FOmLUJW/OwoT1PniJagr0Pganx88QAGrnU+P+mEaRpShz
J6070NcQtCz6hVgCR38cSloXY0BQedew8cQ9NUGrqaseoPXFTqNg9VpXT0phQyGoocVzof29
cBKZii6iTQHqMBMNZHUU6+GXli0oyCAyV9fh2p1ws2h0EEFaKy9a9PwwowBD0I0swqD2y7Yh
h1OVVo1D6ie/0wLERJBKsoqQP39MRzBShBSgq1ftBpgOg9Z1KMlyIBzNR54UTxKSyZEHt28q
4QAHSQPDIVxAqqwpUsP4e1MBHWhFRQ0yplgxpExagJJOr7a4Yzh8gPUatQZd/wBmFk8cik+o
GnYn/PBigUADZ5ocwPA+IwNjLioB9RHVWyGGHUaszj1ZnpUfuGJn5MIJAQjEmmeogAkeFMVM
EZfb6flrmfHxwK04ZwwIowapFO/jio2kTJr1E1qKDviiCFBAZ19ZHQUzp4YVgVdlKkgFh9rD
wI88CPqFBQ0rUVHb6YmaKMmhqRUdRnWmIyD9JzrXIaq9a+YxNQCmgKiq1qSB4YkTtpUJlTrS
nXELTg1A9RDPkQelMFoxKQWAonXqtczgagD6GOkAhh1GEmjKtGw1lQB1GVM/8sQol0x0UtUU
6Ht3GFIwBlppQmmr659cWrRyOBSo1gnIny+mJaTCuYIqMye2EhU9aNSpqfD6YgOuVQDmBXwp
gIVo2qmTKanyxYMMNIAOqrdlxDCkcaCWJLCpPamInEyE1jfVTJ6ePniWhKproCArdT0xEaUI
GlfT/DXOvhniCP3WQkZg5Vp0J8q4EJioKOzVqOtMCgqkqSuR7DtXGoQayEoFBAOeGo5WiU7g
+kdTU+OBG6CjCrkjUf8AjxxahMhchsgF9OWIGQ0Pp+/OoOf1wEJBqHC6SeoHhiAlEYkyyHWg
qf8AHpiJaEK6mNQxopJqcsSRyBqMT6aGqkYozgkRSiZgE5eeI4E1ErhaaPyk59MQ1KCiioFB
/COoJwtGGSmi5HuRXEkb6jUA1/3ZHEij9sClBVugzB/HAhTEr2opFexGFBjbLURqFK6gc/Dp
iRK8dWVSTn0PUD/rgR1JjJP5j0Y54mtEsrNQAn0Zg1616YhoWXuy6QchQ9aYUEfzF0qoU9ye
p8MQ+qRVb7SRVagLT9+BYGN9Lhaeg5A5nP6DAobVEa6Bn3Iy6Hviw2lVGoHzUAGnbM/44WSC
xgEJ17kmvftga5p6qKkgVOYI6nE1pn0BM8nGQy61ws9UslgFTSv2L0JI7A4kUbOy6WGknrn0
xNfYbaxUkF9RrqHYDLpgZ1GCtMs88voDnlhoLoX0jr1GJQVFKqlaA00AdMBM4WqtQ9CGp0yx
NBiWIVKkl29VT0p3rjUVSMxYahnXPPFWSd9DgIuo+dK54sIJI6quqqkZlRnQ1ywLCYyDvVa9
fr1wRWC9XuGhGnyH5aeOIEpX21Bp6egqa0rnTEdI6KAVFa0UdDhOI7iJJIvUaqctJ65Y0qx+
5L/8kDoBUAdRQZY3WMcukePan4YyVjtShbyMyk6A2pl6agB4nphkUmvZOBcKfkvuexffpDbg
OHMfu1r+OVO9MHXjHbo5LxaTYZWibdId1QmpEYKaSeoZWzxzb52s+0siv/MIKintqBQIcbUh
GbUHBoQwpXPt4AdMWBY7Nsm67uxhtwCEoJJJKiNSft1EeOD6tXrxc7r8ZcysbL9ZJBFdwK1T
NbTLJpyz9P3dPLGoztQ7Vw/kd/A5so19wdIncRuf/wBKlfoMZsb56xDY8T366vXs2hMEkTaZ
Hn1KgPT1HM/jizxXrVhvfxxy7bLZbyeKGa2pRJ7WRZVof4gCCv7MDLLgPpowIrkR0qRjUTu2
zbby+uo4bdQtTp1Nkv0rjWha7lxLku0R6riALFcMaXMbiWFm6dQcjjOnV1t/xNzG/hD2jWks
bqPb0XUZqaVyU0OKxmxS7jwblO3bmtje2YguWoQXZWir0qGWuHVuOi44ByixMS3EMcqXDH2Z
4ZlkUtSueeoYvk1a2/xVz8wr7NvCrMMylwmoj/tJBpgi5tc0Xxvzi7uHiFtGroRHL7kyo2o5
5Z/vrhNthbj8Z82sWRpNvHtswTUkqP16HI1xQV0r8W85MZkTbfdLDJRNGp+rBmFPpisURbNw
HetzvnsJ54dv3GENrsrmo16c/TImpf24xivSo3vj24bHftaXqIrJ+ZG1qa9CtO2NSqVyxXE0
JYwsyO33FCVr+zDQP3LuUh5WkkZTX3NRZhT/AHYIJBi8uiucsrRnoWZyPwBOKnS/USxsNDvF
4gMwo37a4cQ7X3bi7RddJJWCiRz3bKhbEl5JxDfINxgtgYx+rqIZoZNSsKV0kqemWCrXHuth
vex3Oi7Z7ZgAV9t2CGvQgqQDhi0VjybkW33HuWG43NvI49ftuxDD/cDWuKRas0+SeZufRvdz
qXqusEV8MxgDh3Ple7bndi6uptF3oCGSMe0zAd/TTGsPFn5Qpt29fphuP6eZratf1tCQCMz6
hU4DvrllvZpR7U0jyKCGozM2fjQ4ke33G7gosE0kApUiORkGfjQitMUildNta7tu1wI4lku7
hzk8rEknqBrY/szwnS3S25DbXX6bc4545l6JMWrQfwmpBHmMMkYuuxNq5Vb7WLy3S4FjqqZI
XZBGe+rSRQfXLGcNcdltu6blc+zbJ70xzYFgKn6nLFrPPOFuG3blts5ttwtXtZuvtSgjLxp3
Bxa1IsEs+W2m3i8hW6Ta/wA0kLkKPNtJ9P4jDIdcC7pfrI80dzKsrijze4+pvq1anBVOjSbn
usx9mS6ml9waTE7sy1/EnDnh13S23J9v24XJ/Vx7Yxp7sTv7Qy6HSfT+OKRi9FtfH+Q7wyvt
1u07AlvQ6oR2JBqD+zBhQ7vYb9ZXJi3SCWK4YAUuCzOKDI6mrUfQ4pRarWooUgaiDk1BUHGo
xbVns1hvVzcD+lRyGYCqtG3tsT3o1V/ZhvrXN8DeXO8C4eC9acThyJo5y1dYy9ascZOuqWLl
W32UU5NxHZSVVXikcw5/lbSaD8cUi3B7NzDku0RtFt24SW9uxqYBRkUnuqsGA/DBWln/APed
zYMpG5mX1ZJJFGyEnsV054sTk3q75brfdbmOe0t74gubZnW3ZiOnoOkfQ4mNUdpuNxb3HuQT
PDO401idkan1UiuHKp0O63O+uWBuriadVHoE8jPQD/uJOJnr5dFnyTfLaIRQ7hcxRA+iOOZ1
VRTsAemDGvs7to5xyrbHdbLc5VSQ6pVekq6j3/mBqE+OKRpPuPyFyzc7VrS9uxLblqlHhjqC
OhVtIZT4EdMOLWcnnlnlaSWVpZzm8jksx+pOJmxNabjf2Env2l1JZvShmgcoSD2JBGJXxNd7
zud68cl1dTzvb5wvJI5ZPoSajGbDIV7db1BqS6aeMXWlpElZwsg6q+f3fXChR75u0NyLhL+d
LkqE9/33DFF+1K1zA7DEtBuO77puEiy393LdSoNMbSuzGndRWuNMWm/q24iwbbxdTCychntQ
5EJPWunpXBjUtT7A+9HcUg2aeSHcZ6JH7cpi1d6VqFw1rfA77eby117W8zTPcxVjP6h2d0AN
R92eDGf8q/3jqoTSvUDx8cUY6lDVaAg9SQK+OIwIZQPT4/TDjaYX9yls9osrrbyMHe3BIjLD
o5Xpq88WCo1Yh6Voy0YeoqR4HLPBVzE9xd3d3P8AqbqVriebN5HOpmpkDX6YlhkvroWxtFmf
2C3ueyGJQv3YL0rTEr4hWeVZ0kik0yK1UdTQ1HmO4xM1NNc3dzcNcXMjTTynU8sh1M5pSpPf
B8qLC72bkFvtWotr24Sa5bMOf5UxFNTREjSSO4FMUHU2uLatt3W/v0h2uNmvgpkiVG0uSg1e
g1HqyyxrY39UG4TXkt08180j3TMWuJJj/MLd/c8DgHWifcbr9GLAXEgs2b3Bb6iYhJSmoL0B
plliTnSR4ZUmhdo3iZWV0JDKwzBBGJaV3cXN3PJdXMhmuJCXllc1ZmJzY4cMqU7jePt36AXD
tYh/dFvq1IshFC4Hao8MFKGGaaCVZYpGhni9SOhKsCOhBGeDUa9u57qeW5u5DLcTtrmnbNnJ
6k418sSeoqUH8Y66fI4C6H3K+a0jsHnc2UDmS3tyaojsKEqp6VGHRnupoeT79BJbOl/MDZFh
ZMHIaEN9wTP7T4dMZkbiwk+QubvG6NvlyysKMrFSCCKMCCtKGuLFb4zRLautCev1xplPYbnf
WF2Lm0ne3uEBpJExU6WBDKadiOowHcLbN83Dbbw3llcNbXQBAlhNDpfqp6ih8DisUWO2c45d
tNnHbbduVxbWIc6IVNY1LGpAJBK59q4fqdQXPMuSz7gm5z7jK24RoYluagN7bVqhoM1z6HGS
qtu3jdNtvXu7Cd7e4KNGZIjSqvkyspqCp8DjQ+FptXP+Z7Tt6WG3btNb2UX/AIoFIYIO4XUC
dPlg81XcVXIN/wB83+7SbdZ2u7gARxyMFDkE5LVQK/ji3GLdRX3G952+0gvZ7YiymBrKDqCM
P/s3HVH8ji3WsW/EOR8/tlaw43fTQ++xZLbWgRpAP/sxICuo+HfFitU+4b9yI7zJuF3dT2m8
QSapWH8qSOTKvSmg+OnLBreSfDvvvk7m11Mskm8TmX2mgcr7Y9yJ/uV6KNS/XErig2jft02i
9S82u8lsbuLITQto9NfsYdGU+Bw1nmSOfcb273K+mubn+de3sjM4WNUDyt4KgAGr/HGaZ4iv
N03K7hgt7q7ubiGzDRQW0zsyxDoVUNmtCOmNaa4XDllFKd8sjgc8de27rum2bhBd7dcy2l3C
w9maA0kDNlQU8a088RmtLyT5O+Q9xh/p+9Xsglhqul4khnQOtGBZQGAdeuHD18ODYPk3mex7
fLtm2bg8W2vULYyxpNCpObBVkBoO9BhY52m/9z5num+2W5Jf3M282sQit7mIBJtK19IKfdXV
+auKtfl28j+Tvknc4P0G93LqQGVfctkhm0sCjqJFAYqw+4dMG+NTKg438vc343Zf0/bNzaHb
gKxQSLHKkfYhfcDadRzIxYbYqhzPkq70m+xX81ruSFvbuICIwqk+tFRQEVD/AAUpi66rH/ha
ch+X+e79bm03DcA0ellWZIoo5dDrSRNaAHS4+4dPLFjSfhXyb8k7VYJsXHLqS4t11OlpJGk4
RTVmWP3KhVzPpwTF+FPBzfksXIP65FfT226oNKug9v2lXJoQn5I/FKUw2LlY8j+YOc77H7V/
eIdSPAzwRJE7RSCjxF1Goo/5l6Y1yzZ6xhZmQVyGVK5Z/jgxqmjYBiGAq3Q/6YsZOGGvKhUG
mfiMZWksgAJfMHLUehqcFKeMO0qQxqWd/wAqglm8gB1xaijLnWjqVkjJVlI0kEdmBzBwmXT6
mJBofQKk9CcQqQoZWAU5gZL0BOFS6JYgXala060pU9MBGqlV1srCFvSJ6UQmtKBumBJI4ZpJ
ljh1SSSZBVGomuWQwi9YMsEkCuCjJ6ZAfuqPEdsUgnRV6EGqk1of88TXydMnAPSh69MumAYI
F2HiTWvjQYKtSNGtakfj0IBHXATodfqFTlQj/jtixQRRW7EVOajxGEheN/a/UCNzbh/aMoU+
2HIqFLdBUYhpyhGbCi5jPtiQCmrSGWir0ArX8cKE1tOsJuPbP6fV7YlodAcjVpr01EDpiFAA
qj/aR6hX92I4BzkDQ59sQ6hvbJBJBr1Ncjn2/HFokAzJGhJGf5R3+mI3wiW0qzZGnqOFQnjl
BGpSC+VM8/pXAQAaUoMtJoM88Sppj6wCMurEdcGjD0cgEgaBnXsB2w41IjkY+pgdQ659Prhj
NMxaJgSQG6n/AFwU5iNnWoYedKmlT54mKlkhmRUuCp9t2KKx6awK0B+mBvEWtQQXFDXOneuW
FWBYldP5VOWXbEzSCFVFc8ia/XEdRhyEB6KPEUqK5Uw1UTsHNGYZZH6eeMrTU7aqqOjdBhIS
NfrrqoQKDr9KYWRM/qYK1SOppkAMGHBa9UZ9ulG6augHc4CYKHIbWDlVTXIj8MSKhUhtJp/E
B1/ZiGGKhgJCxIByP+Rw6QsK55kdx3GAYJGdnyNNQqD5DEiSgqQSVNB4UriRiAzF1Lah1IzA
+uLSJQNXpYgnKtP8cSJFjUZCj/mJ7YMZw2pHqmdBWhAzOEwNFGRJqAKL/rg04IMdVFoHpX8M
KMAGYh0rT7aHpi0YIyKaUNFStPHAsACSvWiitT3+gxIWlcmSukD9/bEjJF1JJLk1pXKmGInV
ichVele+XfAYItVwpGnTnTx+mJUmkU0A6CpIByPhn4YUeh+zKgAORpn5DEiRSxJpUUqanL6Y
WjHUpGnIGus+WIDZkChVoVPc9aHxwYTQuqV9wAnsf+WJlGI09RBJIzoemfhiUpRoUNStEbp2
y8hgUGYyaGi/WlcsJIRnSaEk0zYUpTEiILkKv21oD/jgREChBJK9x54lpHUy0egU98IBoIWl
KgDKnjiRlYuSjiviBXPFQfJjpZCAPtBrniZvpIvpJX1UP21rngbg+9VUKK9R0FeuJoAIVqEi
h8Oowgj7YYUIRaEk9KjwxClCrIpYAUOefn3wVHMpYmnqpSijEtC+tQ2QoSK+IwxU2nRRQa6R
UeOeeeICOrSpBzDVKnuO9TiOCMyhM8jma9KDEUOl179RnTFQkRQFBYhUFTXKoOJQ+pULEHNs
60HTofpiIVYD111GuXhljKEUQuaGp6kd8SlCCHYaavQZ+X1wrSFFyoQDkadm64iZnJUBgSfy
/wDHY4UZCyNkAVPUgVIPhUnAkiodLV1A9ST/AMZYq3+C91CtASAuS51zwYxpBhCoXQGPU+JG
GADKjByXUKtAwzFG8PPEvqBwRQqPdqSW6CmIpKmmoIKr0och+GJabQagsNZFNJJzGLThPGAt
OwNAOuZzxKlqFCqgavCvbEDlpF1FUIXoK9SadsSKQKKFUpXqRmfOtMSwx1ZaTkMzQZZd8QPU
tmpGgZhadB54MQSdMQJXSRQkUywxpIYpCpYHMjsPHEUYSToSBqH3dAGwjKdU0jUwBZurDp9c
8FOYYIjOHByJ+2udO2CmZiRz6dAb15ZgV64kBwcqVRga0y9X/LFq+ptLr6DRWJqtex/DEPqY
aStDGCa0JxRrQ3K6kPqpTMU608MagsZTdgfepSgr1/yxphz6/wDZ38MGnK6bAL+oC0DsxAGX
T9uNxl9EfB6Sl74SCkiop9IoKE/cKeHQ+OH+vwsVfyAYv/Yblxk5NCcwDU1pQZY4Hi5WUkjq
xCsGBoVqO/njca6pzTUwWpNQe2FlpOINyZWmuNlWaSKMD9THFQsyf7lPUY39pmHx6nw+5tLu
wuJ4A9rdq1bqC5o0ZNPyjtjl0MR8qEMZsZrYCOQy0dkrSnUnT3wxYn5Mu4rb215x8GeQU/UG
FgS8dPt09/xxQB41eQ3+z3Rsddre1ZLq1uhqUmmZGfppg6U6eTbjtO5S3VzdJZytBG5DzIpa
MPq6EjDK1cbr4qi26SzvIrlYmlrqPuZEL/tJ6YbWbV3x9YH2jcbW9dSNT+0sx/KPt0lsZtVr
NcH2S5uNxvLv2TJb2shpHG7A/dlTPpjVvgeg8duts3VrxL9Hu7W3JWNJDSRKCp88sF+Fms/N
xnZ9xibfNhmnafb5TL+iuGOhlU1NCCaVHTBKciS45Jwzk9xb7Zdw3+3XxJAaM64Q4yyNQafX
DYvhLbX1nweSey3iOW92q6YyQXsJIkVSKeuvf8cWq1Emybdc2ych47d3UyQPrk2+8cjUBnVG
BNPxOLVElxv/AAnlN9a7e7bjtu5UKgLIPaDD7v8AuxMz5cO38c3DZOawQ3N0bsSxl45yTrZC
SAM88u+Ln4blU3yeo/rulRqqlWI8umCVlj40qajv+8jxw6Y9Z+OrLb34rc3E9ik8qyExzkH3
AAtaL2Ayyw2Dr4WVvt+38k2aC5u7aMTI5UNGAGAHRWpgU+FNyPc7jj8sVolna3UJUaRcxLK+
nwQ5YtZ+rK7TuFrNyayuba3W3WSVVe3ZfQPEKp6DGo1j0Le9u2+25XstxbWiW8kshMiQiiMd
JyKH0596YysWO8brbTcks9olsI5klUmaOVQ66WH5QRlii1WtxFdn3OaaziWHaxRi0w1CMnMq
w+7ThlZ6qzv+L7Jutnb7hcfpZLg0P6izX21bPowpXLFrUjOck35tinO3R2Nrf2cqgt7yKx8N
NVFR9cUp8X/B1sbzikoQC1ilMtbct/LBYdFPn54KMeZct0RXxsjb/p5ox9zKPUPEeIwwqIR5
kr6u/wDyGEPUuK2O0ycGmnZA13Dr0urFWVqZVI+7PtgvylzsllabpsG3XO4xrdTRUUSOdRBH
QeWLV640C2HOLayjpHZOh96GQ1jcspyIOWR8cWpYbrx/aLbaNyktrdIZoQ0sUsZ0sD/D4Mvk
cX5WI9ks7bcti2263BUuZ4hpZ5CHIqftIPbDanJBHFafIA2yNvasGiIks3NY3LLVNQPUeGK9
TyxM4wGy/Hu28pkvJbfkNtaXcEh/+PcGnuqcw2onp2yxbjfwsuDb7t3Gtwudj3F1jEjaf1ae
uLUCQKkVyPY4vkXGktdz23jsV017cQT29zMZI5IZASUOdKE0OHEtrbcrm6htrziu8QxWYbVc
2sk4Q+LKUk6fTBQhvNo45yHdgVvbeHdI1yhmeisR4AHTgxKTkG0c7Qfp7i9t49oeQRyCK5Dk
IehamYywypZW/EN/2qz9zil5ZSpMoDS++iFqfxVqK/8ABws3WHuNs5bd8mji3iWJ7tZASTMo
ShPq0Mo0gn6Y1nh5r0P5H2Ce52O2YSRtbwyap3ilViikCrAVz6Y5zytaruK7JyTZnguNh3aO
+485JubfWqkDrRopBQEDMgY3ay6bqyivOaWcm2K1wFhZ7oIdSodX2gVy+gwQsd8sRyDel91W
RWQaAw8M6CvTPFExCMApY5eGVQc8MMe0/E1tI/GrpfdEUjSUjlBIzK1GY6UxdKu/btGw7W8O
93MccokeQymRXRtZ6iQHv4YAqOZcXuuQtb7htM9vLaBKSyiZNQIGWVQa+WCplNo4/u8vJreG
a6inmt3Ut78+khVOQq9c/LGpVK9L5bt9zDuuz3zGNLZZW92UupVaj81CcsEphbztW5HkO2bp
Depc7VED70cM1WA/j0j7hTGmLfRPJsu4bnObK8tpt0WME2c//ilX+E6tOfia5YzjSyFxaW2x
o9zBZ2Dx0M8EDD0Z0yWpr9cUDIcy4puu83kO57X7ctkYdUlxFKKmnQhQQaimCmLnhu4i143d
Lfzx3lxGHNzqk/me3SgpX1dMaVrzvmFlNdMm6WV6bzb3BCxMQHhr2ZB/jhc+urKyi61apIrn
0/fgxqV6fwbeNuvOL3PH1mWHcCsnse6QEk1j7VY5AjzwtfhbWu92HH9nt7LcWKTWvoMaDWSS
a5Z9BgGHmETclteTW0kd5tIgAkkhdS4NOhQ9DXrgw/Dsfkey7zFuO1WlwBdPGRB749tJWZc0
BOQI88UZxx2W+bfx3ZbOy3NilxFRNCkFitfuB6ZY1aSlVI+UQcrgkjuNp0affiYAglNPqHVS
PPEpFBuNzs2883eWHe5dpidF9ncY1ZSsij7eo6+JwfgTn8jnv7nYuW21xu27jereNQqX8LCR
1jbIEip+2uYriMX6wJb8tXk0TJcbJNCQLqBg1RpA9Qy0sunAIGTjp3i7uuSbMUv2YgCNJBHI
rJQdDl+3EcZTmtzzmOJIt2tDa2jmoCqqo1D+fQdJb8MalYrrax3AcLafZ+XC5smUfq9plIRl
1fdGgarfhSmKfLXW4sdiv7Xe+BzcctnH9ZRCBbSkBZRq1VjJyqPDEvwto+UbTtNjt9ruUrxy
wBYpAq1YACmoqP8ADCz8obZDt/MpuSSOk2y3kZWG5gIkWjHqe6kdwc8Zt1rnmxlNw2GXmHMd
zXZJ4qN/NiaZ9GoUocs2qD5YvgSA4utzw/lgh5AjWzrG0cki0kVVfo+X3DLDbp45xq7OI7fz
WXkM4D7LeR+3b3sPrVgy9cqUoR3zwW6vhQQC33jnt7Pt+/f0csS1jdiq+6aAMmZWgr1DYaOY
oecjfF3x4t3vI9wuggC3cBVkdB0A00pTuDniHVZ8N0AGR8OvTEy9dtry7h+IDJaT6XRDHI0d
GKVkCkUPQ0wR0qn4Pzu8ut1sNu3u4jktIdf6e4ulBbURRY2kPY+JxYWwudxl2XdZ5LTjLpEg
zvkuGEJD5/YCyrq8aYFrOWG6btftexJsgvdoE0jRW9rOsV5YFsz7brpOksa+GIqvnGz31nts
E6blc3m2E+iyvm/+Tayt21E1IPiuHWPq0XNL++u+AbZe7dcSzxL7a3slu7Fl9FGWQL6sj1xS
qoLKOXcfjGKPaB7+72ZKkW7aJ42aSp6aSQy9O2JqtLtjF4dleWQncbeMwMZG/nI7LR1Oo1qa
Yla8+5VzPkW085uprS7K+yxjNu6j23T+GVBQH/u6419WZYH4+t9z3Lm0e7WlqEthK8lwsPoS
OooABWoUk4OjzGtiivoPl64eWNohcWYaGU0AchACqnxywLkHE903C6+St4tbm4JRI5Y1tydK
vpIAOjoWp3xVqO7bLW3vNvltZX/pt5tl7K+3Sxt7Tm2SUsKFqBlrUEYtZYH5Tv76fe0Se2a0
/kgsA1Yblm//AIiOlR09Pjh/DEn+2sOEJZaAnt4dMMrPcr1q9gur7402mfj+qTcLd0imktGI
mXNy6MykGgNK4zI665eDXe4/+yiy5PcmV5rZoLU35DOzEgmIs2fSpAP4YRmrbhMW9WfJNzs9
49+La5o5IbSG6ZjC/roqqGLKPRWmKwcdyufmV9d7f8abW+3SGNffWISJT0KGkyVxmp9NMsWH
q/CP5M3G/wBtTYt3sLhob4xhDeoQzOhCk1b8y9ajFJ4r50m+ZuQbjDZbZYwyxta7hC8k6mNH
JKhdLoxB0/ccxh58mjv24znw/Y7k/Job2GFzaRxSrJOn2VK0CtQ1wUzXfby/oPlbd4rqwEm3
XskkN6zodEcMmhveqBQUIHlg6omauuTJuPH+PXUUT3G6WNPetLlJNctmxJCTRuDlGMakHdse
JSu7udRJeupj4nuTjJvsaLhV/ulnuRFhaQ7ibqIxXG2zAFZ0JrpWvRgcxiakxuJYd43CCU7P
d7ltV4I3rx/dWFxZ3CEeqGF5MunRXzwysdR28iHIR8a8cfYoZZLiIQ65IFImjXS3qQ11LmKY
I1b7HTvdjBu/INqu7K7iF9NYvRZANN2UK1gmBoa9euY7YTflacUmtt2h3Pb7jbry3jEBjktL
64aeNg2XoR6kLXo2C/KnsZTkLbonCOObhxYzfrHQW17LZuQ5jiQ+iXSRUo/44YrcQfHlxe7n
yi7teUiO+mnsPagE4UPOqSA01UBdkHQn1YrDFp/7fttvDf2M+ybtdWMsbRGC8mEkUkYy/liS
uY/bgYvrwpgEUpm6BiBXrSpoPwxq1ib+Ww+L9quNx3q7gtbhEuBb60tJiPauhq9Vuw8hmD+O
M115r17it1Zbxt28WFxY30UMUJiuLTc5vfU6QckRq+kEfd9MRvww/wAQ7puf/p2+2NneyPe2
8bTWlsrVdRp++ND/ALhnTF+XPd5eT7vfXl7fS3m5TPLeTU92WQ1clRpqT40wtZ4rWDMGjRss
9LdxllXE5dz9Pc+X3W92nGuH7nxGaaNruBItzlsT6XWGJP8AyAZalIYV64Pw6/FcvyRv0uwc
/wBk3zZJkifdIYkupk0tDcIZAje4OhNPxxrPGN/3Qf3G8iuId4tdkaGBrJrVbhJGjX3Vcuyn
TJ1VaDPFJcXWfZZ/GnNuQf8A3XbtcySpctsytFZmRQ1EWMFVYj7lz79sHPPrfV2PN+Ecmvxv
25XabJb7ta7ggbcNotwIJVCtrWS0C/8AjKGpywW6eZkxruRzcn3jY9zm2bc73cLOOAtf8W3+
FDOIQKs9vIB62h+4H7sbmYzZWxvt42XZuP8AHdwtRuNL+ygLz7Ow9qQwRoP5wNUyrQmlcY55
b15fzbmtqPkvb9845HcbPcTtCu6yGkPuVcB5DoJR0KfcThxmXKsf7l+QbuvIYNpFzIdju7OK
5t4lAaJ5A7Asr08KdDinwt9eIh3YAj0k1qev1wNPWfgW72lo+V7LNKov9zsh+jgegEskavRE
LZaqkZYp86O+diw+KYUs+Lc6sd9U219uNlKIrW9qpZoY5aBtf5wWXT+7Gr1d1j/nPriw2Wx2
Xd9m2NefPHt3J9vhji2W/nFElhkCm3iuVGT/AOxx2/3DGeerpyeOG33zZp+e7tZ85UbBvlvG
sG1bjpDmNoKn+eVBEnuqRR6eoeeHr0/XP/LQ85tIL34Hv3t97S+QTI4mZCi1jlH8mMAsan8p
PUYubjl/WbI+atVUGsdegPb6jBa7SLPjV3DZ8k2u8uX9q3t7uCaeQCulEkUk/uwWeNvQP7hD
/wD9Lco5/p81tb3NkBX26SjVLJF+X+Y33U64nKf+1ey3uwce3nlNhBe2cV1bblxYxiqhhMVd
GQK3Usq5imdMPNyGz/ZXW22TyfHHGtxsYvffZ4CNwhiOi6MMJYH2ZB6klhb1U79Ma1my2yh3
yTZuXfE/KLq6kvL5duikubObckjjmguYYy6PGEGpKGlfEYOflruf6qTaeZbrd2djaR3t5w3e
XSM3Mkdslzs9yzIui4RiD7Qky1acvHPFjMuzx4x8g2m82vLb+Le7KHb92Ztd0bRQtvMWFRPE
BlSTrkMarXHOKGGUpcRayVIZdeVRSoJy+mM1rcfQPyM8138m8M3naS8myyW9rfNdw1Fu7JK3
uMWHp9z21UU6mgGD8D8pORSQH5A5FZmxhvuEbz+n/rIABT9TImsTQyKfRPpNa+OITmbVBz6z
s+McN3Da+PRQ7zxHeJRF/UgVlNpdR5oJNOazoBRW6MDQ4ZgtvwuPg2fkFxxrk1nemWW3h2qR
LAsDQCRHPto3cA9B2rljP5b7nij+MXtLj4h5RZWY93kNsY7yzt1FLhSqKmuLoSRRgaY1z8+s
c3eXfxFoR8M7wu8SVnstyt9yhWeuuICaINKitmDk1af54PybfF1y+h+dePbhaFhsxS2vluoy
Rb0cv7rqR6SXAXVTrivwv/k7OQLYNsfPbXZ1Q7rZ7iu4W0VtT30R0QPNDozao1V04pPVOlDx
x7T/AO56d90qLmw3i0vl96rSRapoo2kAb1CnqU4p8tVZ8gkkh+ftrlt5Hj2VUgvI3UkW+mYM
ZpIyPSQ7EaqHzxfgXrKn362sYuI83h2aAf1LbN4e6tYrUfzYUl0AyRhM9BGrIY1z5RfIy1yq
yfAExvqzbhZbpDND7prPGkzImoVJI1jUuD8s93Zrzg7NuUO2Ddmt5BtskvtC+ArCsn8BYd8Z
b1yKXBJbqK5HL92Ir3hPtjmeys4oTeQUJIAFZF74rE9j5XxTY5R8g3K26R3W3XVpe2boulkl
eINIcuqSFjqHTCmj+Q72y47w+feLWyiW7jlt3tbhBoaOeWMD3ldRU5KNQ6MOuKRnPXh/LOdw
8t2WE7xZovKLWYexuVqoijltn/8AJFMg7qaaDilX19a3Zo7u8+B+RxbhA7naJlO2GVSHjBZC
+g5HT6mr2wc/LfU8B/bxyTdIeUpx9Jg21XSyztC4rR0QkNGeqk9/HBVjA865BuO/civbncUh
iuo5pYBLDEImZIpWVDIB9z6R1xvocc/lvrP3rr+3veBfoQ203Srt0jLR0VpErTxHrbGeJ6bH
bYb6nHPhbYd5trWP+rR3c0VneN6Xib3HJPT1oyqQyNka+WNWeru1NvU+z3m3cC51vdjC24Xe
4Nbbl+nQIkyPrAMiDJmTTUH9+CXRZl1qeSbnvHG7HeJN2uY3j26Nv/Wb4opmjk0n2kmapLJM
P5bFhTLPFi18u3txLcXT3B0qszl3CABAxzOkeGCMWtx8JtaH5G2hbox+07yIPc/MzRsoXPxr
0xWOvLecCS9vfkfmG17tJLPtLw3dj+lnJ9thFJphQA5a1jX0nrl1w2+s54puL8a2jmmy2+0b
/CNovtqSSPat4PoFxbh2McNypp6Qej1z+vXX2y+Myb6znyNuRtt62Xbk202G7ccj9hrwiouI
VfVblAB60Q1KP3GK/HjWtb8lX99uXwhxq9ukpJLuRZqLoRFKygen8o8BjPF+Wup6h+Rbu92/
inCbvjsskH9TsYop7q1JUSSQIABqX7XWpBzqemDmK/LeWW3bTc882C6ZI5bzd+MOt3MtNU0s
OgaqjP3EzFev7MIyRgfjyTcd94TzqPkfuXiSQm6tbafIJNCX1SQKfsoSCdOOls3wfWWJfk68
v9nuOKnjcrW0O67fb3EstsxUSTRgIxDLSnopqA698c/2Z8tzLtu0R/IO+3sEcYu7/jMe4AoB
6542osyKOp/ljp+OLdirAcQ/Ucj+IuXDfJZb6SBob7bDISWikA9TwA/YA2RpljW+n/wnXeL6
3lXceaXdzt1zbwRQTR2DyQSuyKPZnkVDSeUrTyIxnRawnO+WNynmab3tMbwTUt4jJGBHJNNB
kZgozUtll2pivkPPy0Pz/aWU99xreQUe/wB22pHvrhcvelg0rqNO41EHG+er9WbJvjyYsqvQ
0UU9Z88c0LWFUkZ5/d/piIVZmJNBU550yGAGVFdeuYNKjxxIEgAaq0oD6j54WaJqMevpf1Dx
OI6FyFDAd+i174klKspoVpUD8DiIVJVwKkmtK9evjipEfc1DR1rVj5DAhvQDMnw88KD+UGg6
UPY4BpnmA0oFooOeJaTRENRXK6jnXpTriWCBD0IzIJpnll0xICgD1lWJrSgz/diIqFlCudOu
tK5YlgY1AQn7mjP4U88QPGSZCxGSipA/ZnXthJq6iWRSNNKgigGBCpmSCNNamuX/AFxM30La
RRlPX7h4DFGcOpAbVkGYUqKdMTUJk0yeKGhU4yi0lnBoKd2HhiWlpbT6BkGqwPcYSTurHKg1
ZZnwwnBE+3Sv5u3bADODIQwGQyB7YiTKWjMddKHp3zOFYTRDSlXAVSPV59ziR9Q1EaiGHcdw
cuuJai9fqyAA6HywoZjDU9XrU1qvTpjKCxkC1ata9xhRI3cxkajpr1qPHEsFoIrmK/kP/HbE
sDSgX0hQSasPHERFShBBoOpoK5YgYtkGIp3FOhwIzM5UkUVSMqnx7YgTM4AK5UGWkYhajQpq
IlB1ePan1wxQ6ysNVF1AAZUzqcap06KwYkKA/T/PBpSaiWCmigHUMhQnGNWmAf1edcx44tJM
ooCg9S5+P+OJYeiZnSdRFCP88CCaaFXSQQcz1H7MICkchd2LA/jQAeGFCqaEN+APhiQ0Ck1S
iU7KKUpgahmHr1NIrHovUD8cQB/LqVWgPWmfXwwwDAHtlqgKtKeRPkcTUCFCr6RRetRn+3Ei
eWNgH6aRnXLI9emAWgZQxABoAfUR18sKPUAelaitGJr/AJ4johqdcyoA7Nli0AklRZRGlega
ngDlniWk4Qn3HPX7vpg0pNK+7pSlKZKe9friKJgXGlBp0+klunnliX11JHHmK0L0GYOVPHEM
CUKtp6EmjDwxaaIhyGEoGkiv18MsTI44xIQBp9Irl1/fialKNStasASeg7/XEpZS9lA+bA0q
dI74GtMQD0bIHv8A5jFqEARTKq5+rtiZoGRNOaAMTSopmMQ1G6lY6n00OnTSuZxoz4EkKEim
b5FiK0wHCbRTSS1OuYz+v4YmUExaOEMKnrp+h74ofGUvwzShh0OR8z443rOOep8T4dMSdu2C
X9QDQEk0ViOhw81mvX+DcR5DvkLzbEBcy29PdRJFWRa/m9RGHvYeO8S77JyaG5/p+8S3DyRt
paG4JZlYdBnXIY5czXe5njql4XcLtkd5He29wjqDoR/5ikmlCppjprjKm3z4+33arBL6R4bm
BgCrRMcqjvkKYLV92cf0JqkHo6itdPkcGGUy3BRyw1NJUVckkE+ZOWGRr7L3jvB+R8gjkuNn
tRdC3FXVW0yJWtaaiOvljNZ1W3e339jcva3kLxXMZKOj9ajGoa5RI40xyhsqkI5NMuhGNeM+
JI3bIqaMM8iQQfDLGQYvKSGLE5+tu/0JwxmptUpVVVjpX1gBTRafmzFMGtHL3AIKmTS5orLU
6v2YvAP+Y7M0mXYNqOo0+mLWp/lNNPdZRmWUE0IQMwqO1KmhxRuSBWe4g1sjyRqDmoYqtT1B
p4+GFmyQZnlVs3JVzRV1Z+NT/rjN0Ohb65zHuM0ZrVdRZWr1yxQ6ghunjmYxSt6hU6WIoKdx
XGqzrttN63azkEljdzQOw/8AJExFRgXgb/cL7cpvdvZ2nkanrZi2YwwrGw4pyS+2p91s7CS5
2+PJ5oaMVpmaoDqy7mmC1a6Nh37kO1u1nZXRghmekqN9i6hQnOuK1l1XnFuRi9gtp7iKZb1h
7E4lEsLlsxqH5T9RjOHXNvex79x+5MF236aoGcMgKmvfLCbVSJH94Pq1S0qrajnXv9cWrEiX
t7TSJ5GHdXdqfsrhxqVaWmycsk287nY2U81oCVknhPoWnUNn54fhi1VPPP8AqNRJWRCaoSa1
8zijG1MLyZ1YNI5BrUlixH0JzxNQ0V5cxgmO4kQV9Sq7U+tBlgqDHcs7kowWVyQ8laavIk/4
YLq06ySOjBWOmhBIOQX6YZUstv4pvO5bVJf2YjuIYj/MhV196nj7ZoaY3ay4H/lyMqnT4U7U
7DGWtO9zczNSWV5NPRmYkinb6YZDyIXMsVY4pSI+rqppSvY0wYz1aEySk6lchhmSDSlPPtgU
p5Lq4mYCZ2diKKS2pqeFcOs/a67l2/e49ua8WCdbCQ6WkUN7TUy6j/PFa0Lbdk3vdWcbdaTX
QX71iUt+GeDGsQX+1bnt0vsbhaS28q1CrIjKT5CuEO2PZuTx2IuY7W5Fi2Z0h/bIPXphGuC1
uNyimYWjyxvI1CkZYMx/7VOf0wSafXXuTb/bk226i4idczHcBl+ho3UfTGRFYZACVVvVXUdP
WmLVRxzzQuHVjHL+V1NGH+eNYpfFtuvHOR29pFut9CzWtwB7d2GDqa9BUHL8ca5Y93VZHd3K
RmGOYrD1KajQk9TTBY6ToEU8kUqtbuVdcw6kqR5gjMYcGpJ7y5uWD3E8k0pp65GaQn/6jU4B
KdLu6VTDrkWNjV0BNG8yPLEfEX52oaU6EZGvjiX2IspJHf8AMVywi1Y/+ub0dsG7NbFtvbL9
REVYLQ9HUHUv44DeXGLy6iheCKV0jlI9xFJ0vT+IdDiWozrIKj0kjt1wj7Otd83iK3WBb2ZY
VqFi1FkFTnQHxwLYitb+9tZmubeV4ZifVKjMGNc+oOFQV1uN/ey+5eXElxIMleRizUP18MUZ
9prfc722jmSC4dEkydFY6TXLMDFjVs+D2W531kS9tNIjsNLOhIy/DBYZgrndb+5lWe4uJZpU
FFkdvV+0Z4YOsc8s00zmSR3llkNZJGJLE+ZOLRmJrHcdxspFltLmW3cZMY3K/wCGKzTLia+3
7dr6RJLq8nmljqUld6upHShwKuQXV6Z/f95/fBB92p1jz1dcTMlaDa+d7jb2r7feRx7nZlzK
kV36jHITUlHOfq74y6RBvvKdw3qOCxCBbWE//GtlGooWy0q1NVPAdMMrN+VNdWF/arrubWaC
IZFpUZBX6sAMWpPtO47xazMNtmmjZ1zjgJJcUzqorXCZEVxf7nuNwkk8r3FyaBWapYkdAPMY
jjsv945XDaiC+uLpLZwFUThlXLwJAxSuXUm64obvcZbb9PE808RPuC3Qs4GnIsEHh3NMB9pf
q7yeOGzWR5Y0JEEBJYB2+7QD4+WCNz0V/cbzHb28F+s5toqpbmYMFUMc1TV2y7Y3Ix/TrBbR
uu92Mko2uaeFtJLLDqOpV9RyXwwdR0jtsuZckXd4tyS6ae7po/mD3BLG3/2ZXMOD4YLdYmau
JeeXO2lru32k7ZdTEpLIVYwOHqDH7TjSR/txadYZ0kkkYxxEltUmlVJKj7jkK5DEJDxSXKyx
tbE+8SDGIz6q9QRTDkMXG7cl5VJAINwnuTDJQaZg6hjTPMgAnBBasZfkPf5Ni2/a7cuk21+u
K7twxf2VWmlwKgqteuL4arM3e+bjdzm5uJ2M5bX7imlGOeoEY1rn9sd55zytZkmj3OYTQCkc
oNCEIzU06jywSavfwh27kXINoDJY3UtoZqOyL6QScw4Vu57HFjaK63Hfb6/a5c3E15EVZmCv
7isOjekVX64Jf2ZMdO4cp5dcWZhvLm4a3oNSuhRdXQZ0GeHGYzbMSSNNBmT288OK0dqbuGQT
WxdWhAf3EBGhuzFh08sWqauxzrmBUXf6+dZbcaZLqOtQjeij5UoT498FbVlm2+WNzBPYrcQT
hg8M0SOjDuDqA7/vwyOPXnw57h913q5mnkhkuLksRce3EQ2oZHUtMsVmNcd744ZLS4iC642j
rUUYFf8AHrjJxd7NvvNNmt3trBruO1k/n+zpfSA351BHRvEY3Dih3Hcr3cpmlvZGadu7grQg
9NOVMHUyufM/J953vdd3eB9zuHuWs4RbQPIdRWJalVqeoFcPNas2otu37ddtguYLO5kgiuoz
FcxI1EZD/EMP1/Tc/Tms727srmK6t53gubdg0M8TFWBGYzGMXlS4uNy+QOT7nA1veXzSsFOk
10vXo1CtO3XBdi59RbPzzk+0WKWu3300FqXJiXSGjV+4XV0Ld8PNwd3YqN03zcNyvTfX0muZ
iRWmWfXLoMOnn4S3XLd8u9oi2a6uTcbdF6II39bIo/KjHNR4UwHPyptFNRQHQgqxoaBelT/r
gIo5ZYtEqakYGq6SVNa9iOmIWrfeOacj3eOKLc7xpxCojYsBqdQOjkfcfPDP8s65d05Nuu5W
Vla3c8k1vtiGOwRqa442bVpD/cQCMgemCeK+/Ad15Pvm+TRXO43D3NxaRJbCdlq4jH2gvQas
vHFaYZOQb4uw3OxxXMsW1Xrq8tvWie5EdQIB+1vph0dTVUXcqzjIj1EeIJp3wNQyeDCquMwx
yz7YjV9u3Lt33XZdv2q/lE42vUm2XL5zRRmhMWvqy+kUr0xrmOP9HRt/yLy+w2222+13CRIb
Ng9oxoZIc9WmFzXSviMEjfF/YpfkXlL7sm4revbyxMWiih9EQZhpYlBkdf5q9cWN103XynzS
4t76D9YI4tzt2tL6NEASZGGk1GeekkAjFGfrsBsnyRy/Z9rG2Q3QayjJ0QSqHCAj7FrXSDjX
21zn88U+6XG77rHNvd1HPPAjiGS7Klo4SwJSIt0XL7Ril2uvitRnTSWBBJoofrT64zRrQ7Jz
nkm0bfLt1tdlrOXM2sgDIrnLUop6T5jBQj23lO/Wce42trdN7W6xmO+jI1e6K6umfqr0Izwx
qRMu48o2Dar7Y7mOWzsN5jQXNrMh0yKpDxv6uhBGRGeIX3xf8V+TOe7DtTx7ZI39LtI9M07Q
rKkKOdKI7UOlS5yJxm9a11VCORbsm+ndbOQ2m4s/uKYBpVXyLBR4HFazOcFv3Kt3326Ml5IA
KqxjjUIrMOpYD83jXGt8Zkmuyw5vyKx2ltrhuCbKhMSyKC0Jbq0ZzKn6YpR3LarNv3vcLK/h
3G0upYryF9azhiGqPE/4jGdM8dm/cs3ffrlpryQKkjF3SIaULkZsQMVpt34d1nzzkNptC7OJ
vdtY6m1Lirwg50jY5geQw6bN+XBtXIN22vcY76zuWiuoh6HU5keB8Qe+LdMSbvyfcd3uDLdn
Skh1pBGNMWrx0+OHpjmZVzsvOjYcJ3fjRtRPHusbxyQv/wCNJPyzoezKe3jjMaZCpfSAKFh6
upoRl1w4ZE6xXNvClzJFJHE5b2ZHBUMUIDaG70r26YcTax/KXMfaW7kpMXT9Hc3ToCLhAPTF
MSKFtIy7nBBqu3L5C5FuOzSbLeTCXap3jdLZxq9poiSBGTmozzxamWdSJNRB9VTX9344DG7s
fmXk1psT7MYba4t5ohDMZ1rqVRSrHuaZYZ4r6x1tvl/t+6Jf7dMbW7jcTQPF6dDA1AWvbyPX
FVesdHIuRy73un9Ruo0junFZmiXQGdurGmWo+WLVz01Vt8ycig2J9lkgtprOSP25lkQMXp+Z
vBjTFDYys3Lt3n46/HJXX+krdG+ihZRWOUggqp66PVXTjVuuVvqz2j5F3mwsNnsdEd1Z7Nej
cLWKddWZUo8Z/wBjK5+hxzlx0+W7uflvjwLv7lxeQzRSRHY70e7EbeTrb+4egT8rY1Fjxa6/
Sm4Y26CKGpKRDMKCftWvhjNY+voVkaOUSR1QihVlqDUZihGYI65Y1K1Gp3P5G5FfbWtvcMn6
xhon3GP0XEkeVRIR1OQ9XXAarb/mO832ybds1wyPaWGoROK+8Uc10M351DZgHph1ncNunNt5
3O22WK5K+/x4GKwviAJDFqDIkrV9ax09Iwc1NXvnzZuW88cfYLzbLV7DIoaHUHAoHQVplnSu
GeNes9xX5B3HY4JbBQl1tMhq9hOC8SNmfcjr9rfTri2mWVWTco3aXdbfdhdSxX9tU2ciOw9k
6tQ9sAinmO+L7Vnnn1c7/wDJO9brZ+3/AC7OeUabuS1qnv1OdR2B7jF9jllR8d+Sty2ra5Nq
mjW+28qwtref1ezJ2aJuqivUDrgNsV0HMt/g3iHekv3W+tqexOtaoo/+zoMtBqcumK2sc3Fp
yH5H3LeIDFFDHtqTLW7FrVVlNOrU+0eXTBrUvrQTfOO4XNpZjcNntbu8s4Ug/VNVWkCfaXHS
v0xQ1l4OVWSc2bkTbcltbkrItrBQ6J1Gcq1y1Fs88atlmHHJy7lU3Id0e9mRINVfYt46+3GD
QtpB6FiNTU74higoCxXyzB74tBa10Zj/ALVPbzwLQq/pJGRHVDlQeeBz0swBUVKdQvevjhbh
/R1LGhIFKdD54lgnYiME5MMgoAwE2hmIJYA9aUrQeAIxIgdTBWzH/wCtUYgfWo9XSuZr0ywN
Dd09ulAxOYH+uJVFQsKs9a5gdOvnhEhAvU0+j+QxBJST3FA+4+kVzyOI4AlwAWc1PUHPM9KY
gKgDalNScmUdPwxGBMiqafaK0FMWNCJGSKoFOjdTTyriB2jjD6kFM6Edq98UQVCB2SpOrN/A
HyOECGsrX7hQ5f64AS9DUesjp+7AiVR6KkFhk+nMUP8Aph04RSNwa1BGYIFBQYibSpb05in7
sVZsOSqqQhpTMDtgVOhAORBFKha5E4sMMFauoLRqUNOmFozCVqGldQpn/niZ+SRyiFDkD18P
qMBh3ViKin0J6kd8SMWKtq0ChABPWpPbCiyUMgOoE1JGJaRGrUqmmWdcyPHAg63pSNGCjv8A
5VxASrIy5/iK9uueEmWrGoOYBAUZ5fhiR/ddiuuhKAjTSmQ6HADq5emZOXXt9MJBJIFyNSa0
U1zzwHDMKShQKVHXqemeIUyoAw1Einc98SHO7H7Bl0K98WCxHRApBrl91c88LNPG7dApApWo
zxKUQA1En727dMhgahjIQKBNVOlcqA+GDCloKHLp2xNALgEtTM1IXFiNWuQ/ADCCGorVWo3a
vhg1YKMFgDmK+FM8Wo1aalajSeJ7+QxHk4aiiopq7HA1gSxZitBoIoR/1ws00bFtShM608Bi
ZIr6krmgzJ+n+eEwTMoLhV1K59ROR6UNPDBDaTOcl0Ahcqjt4fXEYZkXOin1EVP4Z4UTtpCM
DWvSvh4HCqBajUH+o8q4GRrqeQhmFKV0dKDxwKE56ECqDKgHcd8REELjUBpBNAD1xHAqmiSl
fEqT188BMdQoiUqTqNOh+pxDDSMUUu4qzdWr0HjlhZKJnYBvubqCegGBYKq1OsaV6kjvhWHh
UPQ9GoQQMzn3/HBWphBScjkU9ORxHTigGRFG751qcGDTaQkgIbUBk2dR+AxM2mABA0kKelD5
59cUtWkKl9BH2mjN0J71FO+NaZTI0hkahzA69AB44kQrIV90iv5RWn0rTFpnKKdqIQoGpstQ
z/Z54ob6ym5KVmA6qP3EeONsY5a/7z49cKd9kHF0RDXw0gZkd8jh4jXMfQnwHHonuWJZiI1W
qqfUNWojV0646f3vjGeuf5GdxzCWWOT+ajhg5BD0yPpU+J8ccf5WNTa9I5BIt3wqC5uoIDcK
kTGURqrE0FCWUCpp1wflmrXcN8v9m43ZzW9irSNoCyNGSGXxKkFWJw9M8wG6C0W7tbuDaYJF
uKR35jjVIhGaHXozFV8O+MyNwG6cc2nZ9qvNz2Vk3aFqvLt7woyqzeKkVUDFoZH4qn9zet0m
9sw6tGqGhVQczpUeWG/Bql5JFuD/ACEF2tUur4zg6JGGgsDkvq9P1zwc0a0XyJuPO32VYN74
ba2hLANulsqUpT7dVWK1+uKKV5HQs4LVWpoxUdx5YYa9c+Ets2eQbhdXdVlUqqv7YYivjWoK
16YeqrjaX+/cJvImtNwvotxljkCxE7c0TIwNFX3I1p/ljOBz8v5anHLG3msdqs2LkmYCFAsi
qQKaaUBwkPB982Dd0utzttit7KTUf1Vu6qYvcH5osvTXvg6gs9Z+751s+6blJse9bFZz1nMV
rPCirIKdATQdBhaxody3eLYrnarC2t7WWzu3VJYbqJZiqHI6SaGmdMDC/t9p2Owurp4tptaT
nU8SxrSqigKVBp+GGw15tvPP+P3qXe2btsFv7iBltru1VUkUg1WukLUYZDlS8RvuTxcWMa8M
g33ZSzGC40qZa1zDBasaYuucY+tjzrcSf1crmA2rFj/8egAQVrpp5YlI5Qz0UUAJNARkBh1p
7H8STMux3uiTS2vSwBoSdHXrSmDUwVrd/puVxSRhHLXPqSQB1OdTWuRwyNW69P5Za7eu77Fc
W9nBDcyyhtES6A4FC1VGWfjTGWVrvu+QLvm37LNtNrLHejTOtzGHFDWmkvmMKkcEmxcZTdZt
mfbYottuU979QFDfp5my9JNGAp4HBDVNzTY9q43sUNs9nBucEhK2l4q+uMf/ANRcq+ROGM/l
1/F8qLxO8VWKR1k10aooQa6qHwxdGvKLujXE1CCdRLf9lcs++KLxc8Hhhk5RYW8sCzRO2cbC
qsOtGHfGqHq19d7FByK02JdgshFeqTMHiUP46lOTD6YzIpIJeMcb2VL67t7CGi65/auFE6Eh
a6dL1pmMvDFIK803jlG1bzbEy8et9vvsyl3Zao19PVWQjQcsOLI2XC4Nqv8Agkr3NjbyyRLL
7Vxp9uUClVLSKQTTBTjyyaFZZWWANJoNVUKWOXkoqRiX1XHC9rtdy5HbWt5GZI2IMkROnKhy
w/ZStryrfdp2O5XaG2Db9ytygJLxaZgrfxMg8PzYlbrN8Yu44+SST7Nx9tzgYMW2uQCbSrCh
CFssvFsFo55xDzG42593iez2B+Pzx53VrLlqfsQtABl4ZYopHovGt3ubz44uBKsaIsU0YCgB
WAzB8K+NMNPTNcSuOUPtDwahbbSHZY7jJZl0mrGJ8qde+DTJ40lnyPY9yu7PY2uH3pIyay3K
D3F09RrzVz5rhQ9z2rZr7dX2ja+RXu2XxUsLZF1R0A1UYilf24hY4+N/HwtS8v6iG93u3l1S
RFwpBB9LqOo1DOpw6om+ZLXcZNr2+4nRRFA7iWSoIBZQAD9TjOC1mtwvLIcVt7feeGvFeLEq
2e7RRmOJjT0uxUVzHiTXAqtdj2vZNu4J/WpbCC/moWkiuVEgajflfquXTG22ra12fdtisYza
tb20yIzWwPpWgqqqT1A8DijKitrTjd/ySfi97sNkYYY2ZLmEGKagA0n05avocFUea8r2O22n
f7yxtnaSKJqRSMKOQRWhp5YtGLP4427b73kohvrWO7t/ZcvHLWgyoKU+uJbG4j27hW4cim4u
mxRJFDCXe4VjHOjAAjS6mrdcKkgX4vxrjnHbu/FhHubWrFnS9GouNdBpb8mkHwwHI875BunG
dw0TWG1naboH+Ykc3uwOrZVCkDSfpjUFkjcfoNuuvi7+o+ytrepEV922cxrIFfSPeUelj44P
yeq8sJU/Tv406YWbWp+Odp23ceSxw7jai5t2jcmGSoBIWlciDXwxm0yNFvFx8b7fukmzXOxG
O0SiPfxVWeJmz9JqfcpXr/jh9gnOrjj3CuMf0U31ulruCGR6SbiDGsiqaIwI+w9jka4rWpMG
OFcL/rFndwxW2qRWS62pJVubdqLky56l0nBqdkvEeGSJ7g2aFUaY20kKswWlaao6EaHqcQxw
brxHinF9ku9w/p67jErrW3uz6iHbSoWQdKYlXHPxbg8Y2zfJrFoLDcAsVxt+pnRDKCFdGqGW
jYlZPhBZfHFnt3Kr1b6z/W7HBGzIhfXKVfo65hqx0z74Kzzznyttl4VxVtnF/bQW24wySP7R
3GQpWMOQpWQDIjoQRijeMt8i8S47tttb7ltUsdvI7aJ9ujmE8akiuuPPUPxwwbHByC54lJx+
31bDdbJvyKhSTQ0cM6GgaQs33V69K4pTYxtAGLUoGFCSO+FzWfHd9fZtwF4IIroMmiWKZdQZ
a5gdwfBh0xXxTXpWy8g3OHa23nfL1L/jV6DCNucrMIqkj23ciocAZA9cZ+W8xl/j292+L5Bg
nhZbWxlkuBaiQhQEZW9tat0Pl3xrPG58epOTbumwfIu7SrZwXMLuqS2kq0Qo0asdNB6WrmCM
Vrn9saTaN+3NdofdeSSx3/Gb5DGLBisnsnspY+oSU7N26HB8isr8Y3FrD8gwSRlYbVjcCEOQ
CFdWEYJ75ZYeov5/b8lyHe7fj/yDvDRWEN1amVop7Rh7a6WRW9DL9jA/mHfBWpflqNs3zcY9
lO5crni3PjF8hiS0ISTR3Ca6BvdUdm+oNcWr8esr8Vy2kfPoJI3EVq6zx23uEA6W1aFNTmdN
BhuDno28bxDx35E3lksoJ9vNw0b2bDSApVWJiI/8bhswRiyLlqdv37cDtD7jy2SLceL7lHog
tjoZowT9pegb3UA6dT1BwfJvjL/FEttHz2EB1ijk95baORgGKlWoor1JXtisZ46/aDcN0teO
/IO+ymxinsTNJEbQAJpGpXDQsPsZWFRTGvGuWu2/fdxGyrfczli3Pju5QhYYyqM8ZzoA6gEy
qOoyPcYzqvjKfEM1uvPbdahVdbhYtZA1EowUUPU0xWNT4ZznUEMHMt6t4o1hgjvJFijVQqqB
TIAds8PUcM1SgrpYMKjwHf8AHA6yvf5uKbLv3E+Lm79mLcLeGCSBZTpW4RVrJbs/30K+HTBD
floTBaw8vhlhQW8h210lkCgtRJVCK1M2C9q4sDzjm28cit9uu2i5XtW8WbakudtMUKSqjEii
fdWn4HHTnr9sWPGiFNAc6HOvTLocBr1D4GtrW43Lf4Loo8M9ivvB1DqAXp6gevjg/JzY1PAO
C7HtaciEe7WW+Q3Vq0UtrEA2hVLN611N36YLfVzLi1upL2Ph3HDacgs9iZrVFMl7EkokpGCq
pqK/bhlUUPxpuG53PM+Rf1S5s5LuGyjjkvLJVMElJKpLVMmFPurn2OC3088Y6N+hS243ez8u
v7HkPHbtHignsoUie2uRUoYyhJOYodJy8MV9C1v23JeJcZNjv9hsdyLGHW+4RJJ7ieytAmoj
7fLDLiv6fPXOtx3a/wB/uDuclpc3kB9l7vblQQygZrIpTJq+PXthq55yM2hVmC1LMSAKGmK/
Cny9n3DinxtxXadiHINpl3Zt8jSS1uoX0Sq8iqXSQVA0p7noYZ4zyfziT/7jePW+88h26a6m
lig25b7apx6Z4CSwZJAPRJ9gAy/fhl9Z6ZT424FsfKOPcre91Q7htMKT2VzEQGjdVkZlZT6W
VtAqDi6+WpfNbj49uOBbh8Rbm+5bAwt7ajbpEra2eURqBPAzkGNiM8u+KcrqzHz9uo29by4/
pbyTbYHP6N7iiymM9Pc05auxpjWfpn61yx6gSOjCnbLzxl0j3T+3K42C9uLzY7jbYpbi6tZR
uMsqCQTxahQHV9gANNPQ9euMm/Dxi6htYeQT2/tFrO2vpE9pK1eCOYgxKex0igONd3a5/wA7
saT5QsvjqGbb7zhE8scVymu92qZZA1q4pQEyeqrVPpqemXXDMxdbPhhqFmowK1qQe2fnjOqP
bv7d49nk2jlx3i0S+26OCGS7hlVXBQayxAI7L0xn8n4jS7L8dbTtOy/IO2SQpebBfWMW5bFP
IpYey0MrRMrsPviamY8sa5+R+FBf8D+JuNf0Gx5LZX11eb1BC9lfW0hBrKFVlmQnSpVnyI6j
FizXm/yvwMcG5VJswuTeWkka3FnMyhXEbkjTJTLWKduuN/XZrP8A0y5WQijVpFOr0E0FOoGM
OuSve7H4w+M147Z7om07pyDbLi3Lx7vtLmSWMhQZFuIK+mSNq9qEdsUjn1GV458Z8W3njnNL
qy3GW5i2ZEutnvipjb2yjO0FxER1qNJI6Y0LbI5OD/He0ck+OOTb1LK9tu2y+3PazxsWQx+2
zPCyn0sH05NSoxn/AOWDnv7c7Gu5Hwj4c4g1rtvI7XcJru8sxdWt9asWb2SNJ1xk6BIjeGRw
RvFj8RbRxpeGc8s03GO+2CUQyJeyIyARe2xCzIalHy9VOnUYefldfDK738f8J3vhW6cn4W1z
anj5B3GzvX9yN0K6n9mQ51X8vY/XGrfwzzPy8oEZNTpzI+wmlDkQcZdMek/CO1cavOYWTbrf
vZ7lBPDLtcbKGguGD5xPl6WNMvHB1TOlx8/x2dryP9FY7xDd2kUh17GFpLYO6hj6iM4nqSBX
KtOmNfhy+2dOn4e22wveIcuayLW+8WtnNrnf+Za3FqyM3tSwtllpoCOnXGZPXTr4eXcemltd
0226gqs8EsLRljUVJBH/AHYrDL49O+cdg28fKcNraW/6Vt1it5Lh4aBfdkco8pHTUQB9TnjW
+OWf7thL8B8VSb+nXFluSyn+Um7Wz+5DQisczoehBHrWlMTXU2vD+VcdveNchv8AY7+RJp7J
iGliHpdWFUYA9CVzI7HBZGufXNsW3LfbnZ2RZkiupkhZxQmMSMFZvwBwWGcPY7j40+LoeRjh
s097Y8nkoltMgM8BDgtFJ6uxUZqTka541+GdtuODbPiDYrfY943Hf74Q3Gx7jLb3oR/bhlgR
VoI2YEpI2qo88sUgu54zfKeL8NTY25BxDdJLiyt5kt9w26+Gi7jaU6YpY+7qzZN4dcXVY5rD
EsRQAZ1pjLq9H4jwTjc/Ev8A2jkM1wdo/UtZ3ItjpkhZdOhwAD7itqzHbDzR140vMNs43B8G
QybLdHdbKHeIpba4mj9uaP3HAeJgfLI+Iw830dTxWcc2u0uvhTkt5ZPIlzGSd0tLnTLbuVNY
5IKUaORexrjPPyfts1Hxb4745d8Ch5Xut9JBHBdyxX0OtUEkKEKBCWGUo6gV9fTFParXLd/G
3Gt0udsu+L75Jd7PfXse3XqzppuLWWU1R/bOksjiv/HS1jvnq3ytjL8C8enE23wjcLLc/tS7
kAmsxKoqvqAB0SL+IxRqMls/xpx/a9kfeeXyzxWb3c9jM9sdXsTwsVWoAYyJJpNCOnfGrN+F
L+1Dv2z8CsdxtL/Zdxl3fYJZhFfWTqYruCmZ06wNasK+ofaRTBngsyvQflDiXxlHwjZt0s7h
7LcjaaNmuxE3/wApFAb27hVFKsMtR6HFw3nrJ8f4DxGLi1hyfllzPBY7iZYFlttVYZkYgB1F
Sysqk6h0xSM95IsP/uh2GffONT7duUt3xLkUjwxzf+G5SRFY6fUDqB05GmKtQV18X8Ca8ueM
2+9tbcvsy0cBnOu1mZQSFLAAIzjqK+k4ZMPV88ZPk3x//R+F8f5Ikuu43Jpra/tWoUSaGRx7
sbj8pVcx+zGued1xvfwbmXBLPZeJ8Y5DazNIm+RyLcQSZaJoqnUjfwMuVOv7cc58OvwxAfUS
C3rOWoigoMA1E1Sc9WRoD2+oxM300gkoV1DqDkfPvjSkOoUhqAaAKEYm4Cnpp36DvTEsEGy9
YqVHpPf64FoXZc61I6AfXEL0ZDpyJyIyFM6UwA6BD6dORyYDtiUFUBSEWqdx4Z4G4QI01ApX
pXxwnQRGSrMVqq0ypSgwgsmK0NRStQMgMQExIpTOuQ8DiRmrrIIoV6jr1HliRmIKFW/7qjx8
j/liVCSqkdad/PEqeNtCkDLw8VJ6YDBlsvV9vQHBRTdaBSR0K9gcQwMkmmgyJboaZ4RKlOQo
Tl2FO/jiblMAaBVyIrqoOgOBEClQa0p1/wCmFB0iokALHzGdPDEBsuqlGp5DxwkOb5vkoP4Z
YlggCQDq0+OWRp0wELKh/LXKpPT8MS0y6NNRlXJV64gdT0UGtPu/1piBZ/bnTx88SGW9LVNC
OletfPBTaSijVIqD3xIJ1IutgKn83c4BDghs6kBvVTr+GGEwidlVg9D0K/40wozRnVk9B1bx
GIYRRW71pQr5/twESxOwBahIzYeFe2LQSq2QqdB7V7/88MEhgQzaFOk9anw8MTR5KKQDlq8f
HEsPrXTp69s8sSCY89RANOunBQFB7UZbIHsPEnt5YF8CJNdIP1NKivl/rhaEo1ao3UGM9Tln
5YQiHvRliW9Na0Azp4Ygdiqsat6OtT1z+mCtRIEBOpQCpzA8cB8BKq06+k9jlTEjNRmUPkwA
0Hsex/HCBZZroOnqCe9OueIBbRoBGRp1PX8PPEj+2SCB2oWP78UrOBdjUqh9QAoeoxGBkzFU
bPPXX/TCYdjpqwJ6d+pB74CJAYiznq2XdgR+OLDTFUZ1pmADXrn+zEBEhggY6V7qO3hXEaLQ
FWtevbriAFkCtVVJPQBcCOgbTWpFc9eR/wD0sSGHYPnQin3dsvLALUK1UdMxWi5nLDqHKNLo
XNNfTy/5YkVQGJyJJNfPDEYxscjkT51OKmEFBKqx9IA0nzwHSUuMiwEerSpJrWvhTChMVCFc
tWdD1+n7cSCAGpUZOBU9wBgGnVEJyJIVqgdDUZHAZIExrUMshPfQc8RO0shyrWv5u/0womWk
3qHp6iuRGBaM+pSAukDqMqnCZTD3JGJKhV+0yDy6CmJZoChCk5Bulc8vrixU7j0MHpQkZnqT
/pgZGjUWooKZHz8MRkRMGVg1ciKhh/phQVWh10JNKlelR9BisGJWBfQ1dBpmuRHnXGWTBQ0l
CaDuPDCZBatDfb1618u2FBkjFGDAdKr5nyGJrAqx9oSAAj+CtSMVakRSe3oYgU1GoYdFwyGy
4ye4qHuifHv3P1xusYiqfAeGAY7dslJu1pQMSKgdfwxrk3rHtXDPlbmvHbNLSxvVituntzQx
yJp/LRtOXlnjXccp1LVhufyzyzdbyC9lmt1vLZibe4ht41KsB0Y56/xxz5mt1eXPzB8n/phb
3ltH7DqtEmsVKtqGTLWinBZ6sctv8tfJG1x+287wxKCTb3FoDGQ2dKFaf/onBaIrZ/l3mc93
78Nwlu/X2Yo1SFgPGOlMbUrltfkTlVrcT3EF3pNyP5yBVKmuZyINMB+VrsnzFzPaIfaguYPZ
YltM1urip8HFMvLBaVdyP5D3jkUqT3YtRKlQJ7aBY5D4etc8vPBgsjgvOZcmv7EWF5u1xPZJ
9sMsjECnTrhngiqjJJJBJqcgOueJauuPcp37j1z+p2e6a1uKCp+5GA6hkORGNJp7z5m5jewP
G36JHfrNBbqklPBXq1M8GFQ7zzTkG72kVruEwnWL85QK7E+JAzxYvig45zLfdiMv9PuPbjkG
iaFwGRqHI0YHPDYza4Jr6aa9a4dj7zEuzD0+qtdQp0P0wHW12z5e5hFZx21y9tfwxgLGbu2W
V1UZU1DTX8cZvjXMQyfKvLVuJJIp4VikGkWwiCxKelVXMj8DhnqyMncXU01w805DvKSxIFKV
NenYY1BVztPMeTbMgj2vc57W3Y1aKNqoxIpqo1RjKVF1PLcSmeRjJISSXOZJPjhVIEBgSWAF
CAB0IzGLU3ez/MHJdssUtEgsbkR5BZYApI/h/lleuD8hxx/Im8rvj7nY7ft1pPKtJYYLbXG5
U1qQxqG81xvxT1b7j8tclv09ifZ7ESRUaGT9NIZY2HRkBPX6YybMOnzJypkMd9DaX3tigW6t
9LA9+hUhsQcDfKnJy80YkgFvMDS1MWpU8TExOpcKctr8g8gg2ybaf5c+3Tmpikj1aa5+ljnn
54sS32n5n3jbov0ZsdvmyoT7IiYgClWEZAOXljOK4hh5pxW4uHn3niNpMJvURZySQEHv6a6S
D+GIYsrXmnxvZzx3VrxWeyngcNHJBcZ/WhNMRd+9fMqSXCGysYbyBUpW/QidGOfpkjIywjVD
afKnJbW6lnMdvPbzffZzoZIgO1DXX9c8VIt8+RLncrMWQ22ysISQx/TIetOyt6V/AYFju275
c3Xb9uWyOzbfLCFoZBE0SMD11BRoJPfEcUdnzW52/fH3rZrO22x2Gn9JEDJEa/dUMQQD5Y1j
CTfvkC/3rcbfcRaW9lf24J923UjWwNauxzNO1cGNLiP5evnVTfbNt97dKKC7KmKQjzKg/uw4
mcl5bu43s7tZyCxumOTWyqg0nsVpQ+deuNSeM0+/8v3vfZ7eXcp1mktwUiYIqZV/MFArjO41
saew+Wry02w2DbNt80aqFkVUaJWFKEtGAVr40wVI7L5Vl2uKWzj2uybaZG1xWMuohC2bAMa1
BPamKROO+57bXUkdzYbFZbbewsGivrQkOB4EKFU176sQd6/LEskqSXOzbfLfDL9XGWhmrWnV
M/30xYemT3Lk1/uG7nc7h1F2zg619ABU+kClMssbrPPix5LznceQ2Fva3kUYaCuqVKgy+BdK
lcsZiuBj+ROVR7N/Rjd+5YMnte3JGruE7KsnXKmWHB+HZxz5D3DaLA7bJaw7nt7motLkfZXu
jAHw6HBjWrDdPmHeL+OOC2s4LH2mV45ELSONGVDrADKR2pijN6SH5Xjb/wCRNsdp/UqAJfQO
0L6gOpoC1PKuBrXJb8i4buss93yXabifcJGLS3lpMV1+H8vUgWnliomLTZuT/Fm0bqm4bfZb
lZXCr7Rk9LqFb+JWdjT6YsPibdflLbF3SW5ttpgupR/4NzQtbTtQZaqKSfocS1T2fyfeAXEO
7WVvu9jdv7jW8o0aGbMhWofT5EYbFK4+R8wsd0SCO12a2so7cgpmJQQPygFVov8AtxQRdW/y
ttx2w7Tc8dtJICuiSCGQxxFSM9KFTp/A4lsVO03nxi5lG6bReQOpYwexcNMKVyWhKUKjviUs
vixg3v472mdb/j8W5W+4x1KJcL7lvJlQq+piQD4riixkeRbzJvG7Sbk0fsvLmYlNVH4nPGvl
nLF9xvn72O1ybRudhFu+1SkuLeY6WUmmStQgr3zGCt7qeXn9la7ja3Wx7Da2UlshV9Z1lkOR
jRwF0qR2xfhzvWdOhvlm+MMyJt8MatKJoG1MxjcENpauTA4Ma+yZ/lv9YZod22W3vdsnVddl
7hLB1GZVmFNJOdCMOHVRynnUe6bbFt1jYnbrFHDey0nvUK9AjUUqvlgis2u6b5Tu7rYhZTWa
/wBThQRwbsj0cDKtVp6shQ50OHGnLx3n4tNrfZ942yPetrdy6wvRGRia0DEEFdWflgsSv5Xv
207oYo7PaI9viiACMzCSUAH7VdaejwDdMPMYsuj3D5C3vcOPf0G8EFxaoFEdzIhM6hfto1aH
p1pisw24y7lqBQag0piYzQ1Ok6aBhnU+XhiWVfy8miu+Ox7RPYQe/AR7G4Q1RylSSrp9rnPr
gsN6cex7s207tDuAhiuhDUNbTrWNlIoa9e2FqXUW9X9tuG6z3ltC1rBKwZbZnMmjIDSGOZFe
mIrC/wCTR3XGYtrlsY1u7VlEG4RHQ7xivolQelmz+7Az1NcHH99Oz7nDuJtY7z2qrJbTCqsr
CjD6+HnjWa0j3e+tL7dru6t4Gt7a4f3EgeRpSi0pp1Hrh6jjuVY3nJ7e945b7TJYLDd2bKYr
+E6NcWfpnj/Mw/ixmRu3XFxzfF2feIr42sd17eoPBN9jKwIbsaZHI4Kub6h3q/gvd1u7u2he
C3uJNcFvIxkZVoPSWNScahjvveTWl5xi22lrBY7uyYab2JyuuPMj3Iuhep+7wxWVddOLjW9x
7LvUd7Jaw3kdDHJbzioKuKHSeqt4EYBkrn3q7tr/AHS8uoY5IbaaQyRxyOZHUN+Uuc2p4nEZ
ziy3TkVnfcVsdrk28W+5WTqRdW7lY5olBA92LoZM+uNTxn+nOuHjG+R7PvMF9LbJewxhg9rJ
kG1AgsGHRgDkfHGbFP8ALl3y/hvd1vbyJXFvcTM8MUzmSVVPRWc1LU8cQjhDgNUDKnqTrga5
ajkfOhu3Etn2F7cwzbVKNFypqrRBCoBHVW8cMh66aa0+aVTc9vvJ9tDGG1NlfIj5TKaFXirm
pGnMNgGeq7d9/wDhy9s7hU43eWd3NqMdzbyqJEfrq9UjJ16gjFp8jzSWVvcZCukA0HjQdMaZ
rV/HnOl4pukl0YP1VvdxGC7ir6wnUMp6VB8euK+NSzE3AOfrxrdL+ZrX9VZbhE8Mqp6ZI1Zi
UIZvScz0OM/5anxjRw/JfB9y45tu28r2Ca8l2kBIGgcBSANOumuMgkdRixn4it2j5B4pxfkN
zd8d2q5/pW5Wxgu7K4kUvE9SVMDVb0+KscXi++qTi3OLbarXddk3SyfcOO7upE1qjaWikp6J
Yq0AP8Xjiak/DUp8n8E3Tj217ZzDj815c7REIoHhcCNqKE1ffGwJVRUHDPFeXm/MH4nLuKPx
m3u7SxoS8N46sUkP/wCDILEr/wBxw+MSX/8ADOGuqoBAAoCe3j0xqU5Xre3fKnDd22La9u5t
s0t9ebMgTbru2kCEjSFDFQyUdQo8cYsa8+TTfOLryqS+/SG92uS2G33UFwQk09spLa3KVVJR
qP25HAsdtr8wcA2Swvts45slzFY7nBKs0bCMPDO8ZUUerPIhr0Y5dvDFi/DEfHXyHDsFhfbD
ulk248d3NNF/Eje3PE9NPuRyE9dPbyw2jzGM3ldqTdrlNpkmk2tJG/RyXQUTGLt7mj01+mNb
4dcoUEtkSNOYxitSPW/iH5G+POJW63W5bXeJvojeE39qTNHPEzaqOjMojZaAZDPGZB1cYXmN
7xdOSy7pxJrtbWaT9UIr0JrjuDJ7hppLVj1dAe2WNWMSY7fkr5Gh5w+23sm0RWG6wRFL2+ha
puCAApK0FAhrStetMOHrnWHBZtIPqCnsMqd8CxuvjX5HteJ2W/W13aPcWu92ht9UZ9UMiqyo
xB6qdedMU8rH9P8A1rQ8M+bTtXCNw4puqNeW09rPBtVxUCSFpUYLE4b7owzZHqMM+Ws/0WFh
8nfHO/bRtLc92+8bfdhhW3trqwLCMqmkiYLrQa1KDIgiv7MVi+t81gPlPnSc15Gm5RKfbto/
00Ujp7bSoufvSRioRm/hrQY1Lkxm/wA99rIh00Kpqo7V61+uMO2+PaeFfJXxdYWkd9f2W5bL
yCFVW4m2mSQW8pUBf1Bh1iPWR96lSCcE8Z/yitfmTZ7blm9tLtKycd5Cgt91WIexNOUUot2q
DJHdT61rmc64bWJzvz+XXJ8pfGm1cS33jHF7O6Wx3a1lFtNIipLFcsmhI5SSWaOvqDE1GYzx
b7pnMkyHT5J+K+U7ftt3zrb7tt/2u2Fmk1oDJAyqQwkMYZQSSM0YEfhisN5/Kj2j5M45te38
w2bbdtkg2vfYgthp6rMoK1MbH0RuDq01Onpnhl9Ys2WK7iHyBt+0cC5VxncIHY77AyWU8dHA
lC6Asi5UUjPVjMnrXMt5xgXkKoM6stAzZZ0/hwzlrrxa8U399g5Dtu8e2Z1sblLkwlqaxG1d
Ne1cV506tPkbkthyXnG7b7tYZLG+Mbxe6KPqWFUaq1YZMpw7+GZPXpPxfz/4p2HjU9vf2t7a
7puFubTd1i1zxyAgqWiao06ga+WMznPUwexzfHe280eG5F5uPFIwP0tx6obmMBgwcqPv0faf
HrjfWWDiY2/yxzH455HLa8h2m5vY+R2JiCRSQEQXEcbV0OCaKVqWr+GMz4xq8+6vT8s/H2/i
Ddt2vN22retCRy2Vu8z2QdAQH9qNlDq1f+7/ABxfB+u+vFuRX9tuG+3d9aNO0Uz+kXcvvyg+
Jk/N5VzxaJHNt95LYX9vdLRpLeRJhXoWjYMMvqM8WnXuLc++Kt23e25rfS3u38rhjj9Cq81t
G8AIA0Jk6tqPev0wap+2f3j5R2vduJ8q22a3azud3uxf2BP81KejWkjCmkjRqH1xbgx5Rrrp
UvrqKntWp8vPFfXO8+pQSCc/MKRnXE6vR+A834svF7vh3LIJ12W4mW6jvLNj7kclACrKM9Pp
FCMGYOrHbzHmXDJfjtuH7IGLW17b3FrP7LxJOqPqdnVs1kUZN2bqMajNur3j/JPh2z4He7DL
c31mm+wf/lJGSSZ45tOkmN0UrQNmKYOfLrV58xjZ+Z7LB8a3nDo5HurpNx/VbfeCJkSeLxZW
zicUrRsauazKyfHt8uNo36y3SNVMlnOk4WQ1V/bNaMewxzz1r3Xt198ncAvUl3leRbtY3049
x9r1TNBG+ijRj2xpFaelg3Wh8sblHTI8b5rwrc+OXHEuXi9j2lb176y3GN6zVZtQScqDVqsc
wOuHc+BJsZfnqfGccdnHxSSee5jeTVdMjoXikz0XJcDVIj/Yyj7TQ9MWmc7V1f8ALuLcn+PL
baN3uJtt37jsZG1TBTLBcUApFIF6MwGnV26+WCXGq6uI8x4JufDbbh3NXn2+2tJ2uLG+tySp
L1OiQKrnq5plTBB16tbn5H4Zsk/F9p2i4k3Tati3Brs3ZRkkS3dHVkZXCl5FaStVABGL8LXl
3NN7tty5pvt9YS67G8vpLi2moUrHIaq2k0YfjivwJ09K+Hk3DlO1XPHr+3tN42C0l9/9DcSG
K8gd6j3rZu4rnQ5ftxS2HwX9w1zTjvFtpm0w3233FwtzbDSrCNY6RS0XLS9BmMia4ZcHXrwy
QVQstRTsT1OLXP2mrTSoJzqanpiwz0enSFNAQf8ADA3mBK0ZumkHOmWXniIBIinUTUDL/pip
MCFBjI+3MdxQ9q4mSRS9Fc+kmhB6j9mAabQxagAIaqgse2Il0OXXp5nywIar6TVqgjNevmMR
iKg1H05n8v8AphQg7EjV3OdT0Iwo4WtGLBgtaoP9RgEC0khdTmScyDnQYlgiCCSBVOhPSvhh
OIy1T00jt4VGJJEAoRpGrTXPPKvXETa1MlQSfHvgQkrQA00kVA65fTEjE1jAY0AOYH7sDNEQ
QKGlaUX6nyxGB1GNAjkNXo3WpPniSRWAXSBTVnqHie2ImVQxFDmDmfHzwoDKwYLqAqa/hhRx
GpQivQ1K+IH1wIiSwoopX/j8MRwQZlBY50yFaV8OmAaZGJFDUmvTtTCieMiqrk47eGIYZAwI
WlCPzYiRlIGQy/48MWqG6agx1A9MWgSPApqSSdNBXtgQCfVSgGrrnliSQsST6hlkpoBWmIhV
nNFr5k+OEk+ktXJicgKHr0y8cQMdZOphqY9B06ZdDgGCCkJkaD+KmWIglcKAKknoCOlcOjRo
jmqvkgppIIqD9MBNIPXQCvgBmfriOmID1ZVoD9xJ/wAzhAhJUrX0gfmp+GIYcjUxLnWpNany
6Ylho2Uqprl/F3xGU7uUAYjr6dKip/HEgEgDUWzPYd/wwDT0LMAKOqipoeh8MSh/cAqhFQcj
5YmtOkYIZGFFBqCe/wBMFIHQnQyrqCnM+GGAbMwHWik+llH7sGCo2VcgzZjM/Q9OmHBgquCo
WmfpNMwf88SJRpBjU6Vrkvev1xELhk7FGJqR2/bhRy2mhJp4U6eeJE3rCovpK1IB88CoE9Qc
ABtJqKk0piI8mJqoK0zIFM/DEjFarRiSD9Qa4haKtJqUpHT0UzwYicUoTTzA6n9uI0nDSKVU
EIcqf44NGaGOBUUBS1KUPhli04Qpk2Yz6HvhWGjRtR0NqJ7gVHn9cIEQwYZlQooPE/h2xLTa
HK+5SoXpU5E/TEhdyynVXKhrUYDDaiD4sT4UK074kkZgH7saCpp44EiXSpAXpU551JPjiOiY
gEsM1I8MycK1GyVAkU1ocxSmI+BU6WK69WvqD0H0xMJmiZgCrAVzLZGmJrQAN9pXOpzOE54e
[base64-encoded message body omitted]
PUxRctO8J8iRrtdJt1eYfpUlVTqbsjBxSmVca46i/DU/I9/8mS8cMe/cT263jOkvuFu6ymMj
zr6a4wOXialWf0jVWtPLDrT2T4HtdtQ313NUXEBULcKivImrMUrXr3oMa6ivLfXvJ+MXRmtb
68l3QglQG23VoauQDqgzxhly8r5vdcc2m3m2+wttMrDUfa0eimRKgd+mISo+Ecs27fpLrcYN
lt7W6XStydMbBj0DAhRTPrjVmR0sUN/8nQ3e8S7Lu+yW17bNLohlCINADU/OCf8AXGJ6zI0W
879f8dO1WO3iOKxuJNM8EkQkoreZwrF/bW1nbXcs0FhahpwCVSJArFRWoAHU4hjzTfvlSyvn
vtr3rYoL63BZIpEVVljIOWkkZUp1xrFUvCNx+QE4/L/TeL2e87IZHokxjSXV3GkmkgGIPMt9
Mz7pcGay/RSaz7luq6RGa/aB4YWq4Y11E6TWnUDvXBpj2H4XkT+n37B8xIBRag6QtST4nF1Q
xL30tlzQzW8iCWO6p6gsiZmlGVwdWWKG3x6hzKGze+2K5FpbRXLTqGeJApelMjQ5ivhgZXe/
ckltdy23bX2yEw3r6JGmj1AL5KR3+uJOKbZ+PLu77c+x262F8nuSXUcagW8hqKgU7+WJK3l+
x7VxjjDIljFvFs4YRStGjPGadda5jyxQOX4iniTjt9JHLpPuEuqsQQdHgPHzxq0vKN2Jl3a6
oKqsrDX1J8TXvn3xX4Y1b8JSKTktpFNEsqGRV0OtQVY5qR3xSGfL2O/3DarTfbLYV2KxSC6q
XEkK0pmDQaRmPriMSw7Bx3ZxuF9Y7fbW8jqXYOoaIBRXTR6hVJHbAXmO+8ys91s2S945aRzH
0w39mrRMD55EOPHPEmu+Pht24cNuf1dtbzLbmSOOf2wsgXTWmsULUJ74qq8pnhMl80dvG0yF
mosSl2p5AdTig1Y8Q2uC65Ta2N7bO0DMBLC4KMQe3amNSwbtejcw5NabDeR7ZHsW37hbPFqe
OSELIF6Uqint3pjONMnsG5QzcoW52DjC3SGMltqlpIqg0De2zgUp/uxYJA8/u7We/gI4w3Hb
tATOrqgD1/MAgCHp1GKMySfh6HwzkF5uPApyzRAwxSRr7Q0rQKaZHp54Wr8M3wS45RLt9zbC
SK32QSMP1LLouPcrRjE1RQgdWOGpo7PmO03M9psm238u53Jc0kuwHKiPrqdetfGmDQsYrrax
yeK0ttzu7XcGj1/08ESWrDp0XxwF1ynbLey3G8vybBg7NNc2+kEgD78x1P0xF55zbjd3NscW
/Wu/XO67YhDJb3tdcavlqjYda9KUwRncDeXlieIQJu/CZItMQFrusCAJSnpldlowr1IY4Wqt
uK2mybX8fvvT7ZBdzBZJnS4RZA1G0gKWBK0r2xKtOkO07vx6xjawNtbzqrNaKaKvfIEDv0qM
MrNipVOPX/J5ON3ewbdJCsOo3CxrHMaLnmgqPrXF8D6y+vLeb7PbbPyO8221ZjbwMDGzfcFc
A6Sf9vSuJW6sfjTbrK95LDDf2yXds8cgeJxVTVf9emKjl6ALTgt3yNuMJxy1CRws8s+lVdSF
qNDL66+eDG8C/HOMcb47uF6m2W9+1q7SFL1RIzJULRXboaeAw4rXnXIN64lulv71rtP9H3OO
msQOGtpF80oun6gYsUrbWcW3XHxXNfewtrcpG6vLbMYVkaNgAZEU6WrXOuKcju48iZvWwP2A
nM56f2YhutV8b7Vtu4cot7bcbZLu2ljkrE5Ok6VJB/DE1y2O/XXxfYbp/Qr7jyJagATXaKVk
Q0qGDKdbgDzr9cSslWfGuGcM/ojXtrFaX1vJLIYrnc1pWOvp0sQNPh0zwKCPDOEf1Wyu7aGx
Mra1udvglE0D1GTBDmNPXIYk7n4pwuklwuwWgcSCCRdJoVOWpKU0Nn1GFZHBuPDuG8Y2a/3N
dpi3COL1vDd0ZtGoAKjtWgFcTN5jguuMcH/T7XySXbP09lekJdbarNLEfeBCMApBGlhX04I1
iG1+MtusuX3S3FilzxtIvct7VpC8ragKlc1PoPYmuEfC12HhXCDs897aW1leW0lxII5dyr9q
mgHuEVFPocFLL/JXEOJ2e3wbntMsFpciQLPY204libUKVRSapT6UwwYqd7u+GTcVgWXjdztO
/rGgtboRskE2mlZGkNA4YVNCK4ufkVhakmp6j7tPjhC04zyCfYtyF9Fbw3Y0lHt5xVWU9c+x
8xgsMj1vj/KNwXbpuSbxuCzcXnDRrtjaZ/Y1NpEczkGSpp6a5GueMtaw/AL7aR8lwXNoP0m3
STzfpkekYjjcNoTM5dadcarXM8d3Lt/bjXylul9HZQ3ULqiT2rqBUSRqWKkZq9cw2Cscxqdg
5TuS7ZJv/Ib6O54vea4xYMFm9kFqLHK1NZegyDCh8cXyr4w3xtc7YPkmGezYW1lJLObNJDpK
xvULHmaD0npiq5T8l39eNfKG8XSWENzE0gSW1kGkFJI0JZGUehq56sasXNa/Y+S7j/Rp9+5P
eRXXFL5WSOxkCze2rNQJI5XWXHQBvu8a4MN8Yb4pnsl+R4ZIQttbtJMLfUStI2DaEzPdcqVx
dHn1JyPkUPGPlHerhNvhubWRhFLZsvt1DKjFo2X7WLerV4431PI5/wA/mtltHKN0GxNvfLLq
G84ruClIbF1SUoGJ/lSSU1NMB0U/d41xhrq58sN8Sy2afJVuLYBLMvci3WRgpVGVvbGZ+7TQ
acVka4+PUm78ii4r8o71cLYxXNq8xjltCPaohVW1QuB/LeuYYYbGZfw2m0clvm4/Ju3MriG9
4huMRWG0mWORl1EkI7geqVQKaT9QajBmtWMB8NS2kfyVbe04W3k/UC2DtpIVgdKZnM6aCmNd
DiZGe+SYLe359v0EKpDGLpisajSoqATRRlgo5Zp1XTQUBIyB7nFKcfScfEuO8n4JxZNwMUd1
bRwzW3u0X3lX/wAluWrq0P3ANR1xiw41EkFtHy60eFBBPJt0yPIoBbSjoFBI+/R2rh/Aee84
5NyWw2u8lseabRuioGWfabi3gDvH9rKoBfW4/hIzxqRi3x8+B1LBlAWM5BKUpXwwl63/AG52
9vLv29xToGt5NvCyRSrrUr7o+5TWuMW3Tnja/HPA+NbRNvv6LfrbfbW8tmSfb1RKRIGLDUoZ
655ZjDfkczIu4by5tuE8d/Tcjt+P6rdED3cUcqyUTJV9xkoV8j0xRpnfjbet63Dn+/ru+42d
9dWlgqJue3xosbJ7gKsSo9emvQ9OmCiYtN/uIP8A1e9uuXbtZcj4xcK0KSQ26o8NxUhSntl9
RrllQjFhrr/qF9a8I421pyS02BmtIlMl9Ekqy0iFFXW6U0+XbFA+cfk3et63bkk67vd2N9e2
lIf1+2qqxTR9VbWv39e+Y6Y2OOd9vyyCqpdgq0LUBIzP4YrD+XvW48V+IuH7Fx0b3x07s+9Q
xm3uQ7CZWYISJBrRfSZfSQK4xJcXVm4m/wDuB4lDvHIdvuJZprdLFL7a7lm/n2rEvqUkUST7
O46Y39mPrZKxPxh8f8d5XxnlR3AMu47ZAstnexNRoygkYgr9rB9ADDw6YzOvW5f9dehfHV5w
Tcfh3dDdcaRLG30je7FHLJNMsat78TOwKsQQcqUOGzOhbvPr523r+jpuVw20JONqZ1ayW7Km
cRkA6ZNGRI8R+OHq6zzznjhVSkoZaCrDr9vnX8MZb8fT3xjZ8Aj+Lb2826/vLO2SX9RfXTqp
ubK8RFVmiZFPp7rSuWL09/DzbhPHdk5RzbeJt7nbktoJVddzlDW9ldO9QguSoR4mkUeg9A4z
rg6HPw33OPgPjUvGr6+2XbW47uW2QtdIDcG5gnWNS7xuCzspovpYYeaLMmsx/bVDtl5u26W8
UcltvX6cT2O6xuxMUTDSYpISfbddRDAMMZs9Umx47Lqi3Odq+3Pb3ErI8PpKvHISHWn2moqB
2xrtri+PYv7kRFLZ8P3Fo4xuN5ZB7280APKPbQgOR1oSaeFcP8/Y59dZ1GZ+OrL4nveN7ra8
0guLS/Us9nv8STlY43UBUJjBTUrZ+sUNcZvy6deR5kxMRZYgfuYaj1Kg0BI8xnjf1Zm2Hj0i
VGFCpYV7CvnTyxkvqzbuX79vGy7c/wAb3m2RPFGItz4xuYMU6zIqkrAtRpV1B7aTXV44JIru
+MTNwPh/MLbmd9/Q5+Lbls8Mdz+hJEfs3Sxu8gVB/L9qXSP8RjUu1n+nP+txlOBfHuxcj+Nu
UbrcO8W67KYprK7Q56BGztDIhyZX/b0OLfcY44zlt+T8c+DOFR2G2ci2K5ubm8sVni3G3eT3
Gjb0F5VEiIJVY5UWmDPy63r8KPg3xt8dXlryDfv1J5Nx/Zvaf2ZRJA7W7xszI1ChE8RXqDpb
Fu1mSybVVyq3+G95tKcQF1sN/AyOtpOrSQ3kZcK8aktK0UihtSk+nscUO+ePY1+APj0W6Wb8
de5Dpp/q8V0ySkSD/wAroXC+4vkpH+GCeK8SsLxz4c4lt/Ntz47u867xf2eiWw2udza/qrOV
K6kZSv8AOjalRXSf8Ovffnjl/Pmy3VL8mcM+ONvaN7Lb9x4nexuuqwvkZor6EuFl/TSaph7s
QJYDVQjGfw118zG3vPjH4pteNRXSbBfbrZzW2qPkG0O8x0ac5mj9z0uvVl0kVHTtjMdeo+fN
4tbHb92mtNrvV3Pbo2pb7jGjQiQUr6o2zVl6N2rhc+e9chY+7QH1HqR0/bjON6iqUUIDqRic
uhrjUEo6uOnqGWQxNQ4J9xak1p6i3Wg654CcuY3BrUP0UZ/Xpioo9edD3r9v+GBncIMVBIz8
vDEAhZGAJcZmrKMv24ROcOr0AVDROgLZmpxN6kSXVpQN0BAr0Ixkym1MHOrMr0PWmGNIzK4q
AR4gZ4dFtErqa1NHYasxT8MsZwBWUeqjZDt2/wCuNYiD1jIDaanP/jtiBFkBVui9Aeh65YUZ
nB0gnLs/164yTmQBaFss9P8AyGJmgAJiJOZqNGXTCpDiSnpzYklTTrXria3DFgNLDMjLSe5J
xlb4fV7hU6jmPST0/HGjfYAlihegFOijxGVaYmcMHdRkPMZdcCsIihAaoZszXuMQRqDr1aiM
s6g0+mFfUnWRGFCCewr1/Zi1WHY6BqzOrMD/ACxQ/B1LudJ6ipFD0y6ZYDAkSABPvqCVr417
4iNSOlCq0HTpTviFA6oDorSgoD/ywMWEjgIUZdUtQMulMLUOusFiDnSp1Hv+OKnS0swAYUHV
j4YEZMtRcZdyD4+NMQ8CCVNO69f9v+uIJXakekEaT91etMajQfap6gBpXNe9QcSEkgUE0zrQ
nvgolwmZa110DdR3p5YD4SgKCnUMa07D8cIM/uEtU0jAoT3FMSOYq6asBTL/AEOJYNlHUn09
CB/lXAQKBr9sA6QBnnQfhiR1jA/EZhv+WBEURqsE1MCKdKeeIYYkCqU1KMxQGv4UwnRBiTVt
WroKgEUxC2nzrTIv4/8APAoRU1JLktSgGIkjppUN6kAzXoKjuThR9L/cSF71HTLwxEBFVAIz
zNAaCh8sIMqrroFLsRmegoPCmBYkkINC3bNSfLyxEixZtVasMsxlgROyxsFLVavQZjED0NSW
PQ0KjpiH1MKFtVc1+0U7YGpCbWAQ2f8AD4jCiYSUGkkRgAsB/iT1wxWEctRyZO344gSMAxBr
pfNicRExRFrQ1rpXpUf8sRRlCVZdTKzEUI/wr4YlYOTXrCFtRVaEA/vwRgwALGpAcZKQev8A
zwkqtQrT7sq9KHA0ZlVGoDRQPSOpp9cQCGOQ6igLAmmWKI5aNZq6fSMzp6Z98SMTIz1amnxX
NcWnBlVYDwrmRlXwwqxGhjzBOmvhnkcQE8ihNIGkjJTSlR54EarArUEPln2pi1EVfUGMmbHM
UyxasI60FBkCczT/AFxLDrHVsqkn9h7YkUa0JrkPKlcsIOAEagBCtU5mtPrgMhzIUdhQinRh
4fTA0SjV6zUU6qe2AmLmjkequSkf5YRAqgTMHVQVcnv9MCOvrXIUXt1r0rhGGaFDTtlkf8cQ
PQgkV9R+4AE1p5+GFGZ9TAUGRyAOQHnhI4kRCQjhB/DXrgRlQdgevQ9a4F4dULP1yXMHPr9c
SwKlNI7FiRn/AM8WCUeomqMKEZ+FfpgxotQIqtG0nofHyOFUDsjaDUZA1+p6UxMkQahh6agV
6/sODDTgBlBzIzoMqfhia+Ahgx6EKtR+HamFkyrpcJWinPrXLBpOUIl0MpArQqDl9csKOjUq
VYgHIGlaeQxIwdywAA0n06aYgUvrHSnZq9KVwGHcAUUVNTll44GsKVQgFFPUZDx+uIUkeNiV
LNU9SDUCuQIxqwaNYtJr7lQoIJ/i+uBqGzXT7dS5zoDp6Z5nDjNDV/cIFST9pJ/bhwy4471C
EbWdOWRoag4RbrJ3pb9TnRTQA0/x/HEIDT598Sd+3v8A/LXVQEGtRnjcqvke08M+Uuc7Dtsd
ttW4lbUGotpEjdV8gWU4zbokWm6fL/PNxkhnuL9BJCxa3eGGJJMvFgvTBIa73+dfksQGGTcY
GUrpkMlrEW886af3YliC1+aPkO2jLQboWWuSSRxsgP4r08sNNmIbj5c+QZ7xLptxKSoP/HBF
HFGw/wByAZ4ziV1n8jcttbuaazv2hluGLXAZFateoAYUGLTytNk+YfkXagYLTcBNBIxJWSBG
RCc8tKmmHRJ65uT/ACXynkipHuMkUgQkho4EibLppkUavpngNn6VFxyvkl1Z/obzdbuW1FB+
nlld1AAoBmTgxzs1WIArBQAukGgHhhkanNi44/yff9gv/wBXtN01rKRR2UihAPQqQVP4jG5R
a1t98289u4Dbvexoj5tLFBGsikZgqw7+dMCig3bnPJ93sP0e5Xn6iNTVKKobV4kj7sFCLj/L
992BpF2i7eFXp7woHVh4EEHFfWpdc9zuM97eG8lcvNIxd26Ek5k5Y1zcblaraPl/nlhZixW8
S4tYwFiFzDHIVUdtRzNPOuCxihb5X5u16Z1vkQSLpMCxIIqjv7dCB+3BiZa7vJrq4kupyBLK
xZiFoNRNc8I127bynk+2kjb90uLMt94t2KggeX24rUjWHe94uXaGG4v7g5v7SNI4JP3EKMKt
NFt25xzuv6OYSIKshibUufVlpUfsxlStjtHy7zPZ7RLGM2siQLQLNbgMB2qyFK/jiLnl+VeQ
Nu39RiSyjuKaZDDax6W8316iW864mbVpdfNHKL62WKa1sSyHUjtBqYHxUFiAfphMKH5r5oAs
U72twiD7ZoAxJ8TRhniVccvy3y8zTP7lv7Uoo0BhDRrl+UVqv4HFBrh2/wCQuRW9jNt6TpJZ
TEuwlTWAzZ+moy/bjWGWL3Z/mLke12YtorWymRKBWkiKEKBTMxldX4jGDMQQ882S8vnuN84r
t10zg6zb6rd6nPMVZc+5whYwc9+PrG6S6tuG/p54yGjlhuKOCO4AFMUg+HfvXza7+2222CXF
vpB9q+jDyq3kykDDil1R2/yzyxL2a4j9hluKVsWiLxUGWnTWo/A4mrS3f5T3jctu/QGws9tR
zWRYoWR6HwD9K+OD4ZldW1/MG+2FsLNLKwniACh3haNqAU9QjIVsNhqqPPdxtN8O87dbWm23
Mg9aQRViOXqJVjlq70wSLD8k+Rt632ezvnihtrm0IaG4tlIcmtc2JJoOwwyQatrf5k379OJ7
vb9vu5YqKt5PC6v/APpCg/ZhvOC1n77mm/3G9f1mOdbO9YUVrVRGFXw71FPHBi+yPfuYcg3v
2P6pdfqjAaw1RFVfEkKBXp3wxXpqtq+YdytrFbWTbNulQrp1CNog+VPUqeip8sZUuo7D5Z3G
y9yC32rb47ByWex0voDN9xU1PXwIpia8QX3yM1yiyQ7Lt+33cbh4bu1UpOpGeVAOvQ17YNUd
qfMG6FVkn2nbri6TL9UyNG/1qtafhjStcll8r8jinuXuY7e+trj77OdNUYHQBadBT64zrM6l
Q7/8l7hue3HboLO2220yMscFfVTMAA+lR9Bhi2a4rf5C5TDsz7Ul8zWTx+yUlRGYIRQqrMKg
Uyw4bZXVxz5E3bYrNrIwQbjtzmv6W5yCsepRu1T2ocZwrTdPmHf7qJIoLK1sjGymOaMtI6Fe
gIf0sD3FMIo1+W7l9NzPs1g9+g0i7QPG5ypWoq1D4VxFw23LOH7k8t3ybZXut0ldibq3mZQw
JyUpqULpGVcVEWmzcs+NNo3SG/stpv7K4SqmZJBIpDZHUrOa5eGIeOjd/l2wTeHntNqt75QA
tvuLBrecAr6g9AWND2xYdU1l8s7rHJcR7naW262d0+v9LKgAjqMwhAIp9RiLj5LzyHdY7eCH
Zraygtm1IFAlJr+UVVdK+S9cKXVp8uWsW1Hap+NWht3Uo8cTCKBlIz/l6WpX64hcql27c/jB
y8e67BdQM7F0/R3DyxgVyX1NGwoMVinMWdryT442edNw2Cy3K33C31e17tJImJFCrq7sdLA9
VwYsZPk/I5d73eXcmtxb61UGJGLqCopWrZ541rPq84z8jz7Zt7bRuNnDu20v/wDw09A0detG
IYEHwIxlv7a6Ln5Otoru0uNl2G0sGs6quukhZD1QMoUqMLH39TP8xbmwnRLG3SKZxLCWZy0T
LQsCPzq1MTVqeX5mlunMW47Nb3e3zqontSxYBwc2VmHQ+BB+uJT1Ucq+RY9z2hNn27bhtlgj
BvbEhkIK/b7TaV9sDwxTxjp1/wD3u7pJx5NvmtEfcIUEUG5q5WVVApqYUOpiMjnnijbg2D5J
l2/bZdn3bb4d52lnMiQTeloyTqIBowIJzGDDrl5fzHa94S2gsNjg22K3/Oul3oOiqyqlF8Qc
MY30958k8gvuNtsF2YJ7P0Kk0sdZkVM1o1dOVOtK0w41usexKgEdRXI9/wAcLOHWQEknr4Yr
E0j8xW44t/QrjbYHmjYfp75axyBa1PuBf/J5VxmXFZqq2LfZNm3aLcUhjuGgOcFwuqJwcqEf
Q41fV9rD8g3i03TeJ7+1tP0EEumlormRQVFCQxzzPTwwYOddsvMIp+Jx7BNt8az27aoNyjb2
5NGosUkQfec8iTgjSu4/v77Nu8G5LDFdmAkfp511RsrDS1R9Oh7YsqnQOQ7rBuW8XV9a2wso
piGjtS5lMYpmNfcE5jwxVcxYXXMkuuHw7C23xi6s2X2dyicqxSpLCaOlHPqyOGK1Xcc5G2yb
rDuC28V20Leq1nFY5EIp/wDSfA9jgw/Zzcg3Oy3Pfbu+tYDbW1wVdbRpGlMZCgMA7Z0rjWuP
PWXVvecwjveIW+xybbEl3ZMpi3CNihaIV9EqdHbPJsGH7fZV8X5Eux7xb7l+mhvVi1VgnHoY
MKU8j4HFjXNxFybdbXc96ur61ga3t7mQuttLIZmWoAKlzmadsDXNWW48ztbzhdnx6XbxHd2L
r7W4QSFFkjXVQSwjJnGr7sMh6uq3ivIl2HeoNxa0hv1Soa1mHoZGBBAP5WofS3Y4afly8l3K
03Dfby+s4nt4bhvcWCZzLIgNPQZDXUB44tZkxXGUHKgrkARQZ/jgw62W/fJUu88L2nj09oLe
42mZGF4hoksaoyU0dVf1A+GKRnrdjW2vz2sc+03U2065bW3e1vQkgCujaaNFkSrVXMHLA6WK
vdOYfDG4wzauI3NtPOpP6q3mVJ1JzLISxWv1GJytn4eUVOYWuRqNQGqlci1Mq060xq1rWv8A
jT5Cl4Vvc181qL22uoPZu4tWhwtQQyHMVBHfqMFZtdPx/wDJScU33cdw/Qm8tN0SSN41YJKq
ltSEEjTXxrivrcnmNRZfL3CNx4vZbNyzjb7m22V/TlGQxsACof1MjKxU0YZ4MEVmzfKPCeK8
kfdeObFdW+3X0DW99t8syHSKgh7Y1ehqPUrmh7Uw5ozFLxH5Kg2qDdto3O0fceObwsguLFWC
NG7V0yRk+nV0r+3BY1cxqrf5e4HuXGNr2nl3Gm3OTaYxHE0TAx+lQmtQWRgSoFfPEvw8253u
fC7rckn4pttzttnIhNxaXDoY1cUp7AUuwB71P0wxSVmA7GRSPQuTFh1BHTDQ9h2z5h4fuuyb
dYc72GTdbrZQE2++gbSRQABigZKONIzFcZPWX1Lcf3ATty39db2IuNlNsbC4huWC3FzASW1O
Y6ojrqIqMjiql112/wA1/Huy7XuG2ce2G4tLTcYpo546xVSV4yisj6mZ1qcwzZdR4YGerkYj
4y+T4+NbZd7FvNmu5ce3AAXlnqKzBqaS8T5VJAzBp5EYb+2f4z/XK5OPWvxXdclv7Xe590tO
PyZ7RdKV9+Ij1aLlUV9WVQCo+uK+uk5aa74r/bjLaTJbcvvre+ZSLe4ljkkRW8XiEK6x5VGK
SuexQ8Q+TLXYeE8g4lJZtc224tIbW/gYIVkb0MxR8yh0gjuMas9PzHL8Y/JEvELy7juLRdx2
Tc4vZ3WykOlnRQQGiJ/MAxy74zjp5mPRW+cOB7ZtG47XsW1XosdztJoyJWRpIZJIyiBSzuzR
59C2XbFg+vim+IPlPgfEdujfc9klTfxF7FxuVm2v3461/mxu6hCKAenrgs9XMyYyW7718cw8
9j3Xb9muL7jU7NJuG1Xr+2TJJXV7DRtX01quo+WNdezBObG5+RflH4n5hxmLbv0O5Wl/t0ej
abmNYj7TIAEjesh1xsVAbv3wS/Udcz5Y746+Y7ziezbjsc21wbxtN8Webbrk6QGddLgPpYMr
fwkfTF/lrfs88upU93Wqqq+r20U1CKSdK1OeS5Z4bZTmQ8MoA1MxCClB4n/XExY9u2v5W+M9
whsr/mewzNyqwjjhTeduoryCGntyel46Ov8AuBH4ZYy3hn/uDhl5XuF1d7Sbnjm5W62F9ayM
qXM0UYYLNRDpVyHOpAaU6HGvhie/KeX5l+P9v43u/G+M7Rd2227naTJHK6xh4bl1KrUlmaSL
OtSajOmXQ/JzzAJ8w/GG+2u2S892OebkG3QLbw39nR4yqHUGCF0AbVmVYEfhliY6yfKr2T5e
43tG9b3DHsleHb4oS/25CInLU0++gDaU1AnVGrU8Dix0zz1Dyrlvw7/SJbHiXHri3u5ZEliu
LiiJbyxj0zI/uSOfBoz6X8iMEgnP6amD5t4BvUFld8rsL+PkNrGsBuducmGqeoSe37kaZ9wy
nw6YjZ6yG3fJHDpOTX029bB+q2C8lSaHTK5vLaWMaPdhk1KyhxmYddB2OHqz8Kc40PP/AJg4
XufF5dh2i1v7pHKmBt0CsLSSIgpNC7vJITTIqTQg9euKXGevfHdxb5d+MrC1W/bb7/ZN8z/U
rtZaSzMjAVdIHkEeljno0f64zrV8ePcz3+05FyW93exsE2mG8kZjaRNUFxQNIQMgXI1EeJxv
fGJP/wCqlEhVGPp05jMZn/TA38GUIzVoXQ5hfL6+OK1Q9CG1BiGH4DritZ0i7UPQCuX4d8BE
DFpB1GvietcWmCIapypl9tajAMMquun8p/KD1zwnDBnzrUNUhcx1/DC50yyBF00JamYOeJCQ
M46VPY9QKeeA8w+qVG9dK9AfMZ4Y2fUASSBmcu+fUYkjZnJ60rnkKE18cQqM6SvqagYg/iO+
FmVJJX0rrAA8vDxwNUTDUq1Wuk+nx/ZiRs9Jz0OR9pxExUswpQKozp5eOBn8pItYYNr1Ejp2
8cJkAzVJ9wAOfzD/ABxK0zCqFS9Aa1IyqT064MBhJ0qAWzpTv9cLUMGZ1pT19aDKmEhVigUU
oxyavaveuBmlIdTUFRTMjPP8cA2nSVguooFU5lielcGkEsRUlgQDqoz9cu2WHWbydlAoG9Vc
yB4YW9w+ujluh7jEjCSMuTQ1r08j5YBBUFdNcyPt6VOAmVkZa0FeoWtaHpiGgAJUhgVY5gjq
PCuHEKP0BqkHV0r1qcCAHAoDQn7a9qf88Q0RFGABJX7g1KHEiUKQzE9+vjXtTGsQqow60Vsv
+DgaggNIKkaVFSzA1oO2WKnEbRyFQsdCPA9qnBowXtITn2Ppp0ri1FUkgMvqPSnTCj+2HCgn
1L2ByIGedMQpR0k1F81/ZTFokIkiQCg0r6qnPrg1qGEqs5DCqEU1fTpgQ9TKrHqe3nXriROq
kqremh6DCjKAKlBWgopB8MSLUNJqaMDRa9PriBK8n2kDLMHz8xiE6ExPUHVTsKVOJo1BQhmI
YCpyyz7UxHCQkkZ0Uj8AfHDgFrCsEADdjUUoMRJKUOr0hegJyPngRF9a6QMzX26+A/fiOEho
O9SKaKHI/XAsOIwNNMwep7+OIBk1ampQDLSOv1xRZo6mp76BXVUU/dhWgTSQWJ1AUrSoz60N
cFMGWVqhTp1HoK4YAkHXqIqK5nwxCiLj1KfSfyscR0gF05k66UPQ4mghJaVBqyVqfHyzwaNP
oAVfcOnR1Bzr+OLWcAHSoCrRlPUeeFDdh6VbJdX/ANOIkACxWpyqFpgQvaWvtrmSOvbBqQsd
DIvUZioyBwxJj61Br6ehUdsBJdLLVaU/diAAqglaggmuulK+VMKJGABAzA+4Uqc/riWBVdQV
xUZmoIIywIYEf2tk/cjP8MOLDgHNgdanpXrgQWUt6fAg/s8sSDoJY0eqk5nuCfPESLENpGZH
Wp/fhWiaXStaHVnUnr1xYvseOSNwpJoTkCe+CqAQxqSBnp6svSuI/JxIhoxqeo0p1r2xAatp
oSNJByHXArUYNZKaSB11dhhGHBYBQFPqz6+PjiJxSpUAPU5kdvM4TqMB1Y6gpC/aq55+eLR8
jBkClFAVhmc69fDFpw6y6Y9FSCTmTlXypiOIZggICHVTMgfcDXLFGfqkDvoRm++tAD3xUwzF
39SgEigCClfrggwlc6x3FSGUjqcK07sFC5mnRqZmhwHCRFQsQ2bfbTsO+JHWKhqG1g1OedcR
J9aUKUzyJpn+GAEgUCn3ePjn4YUSL/L05huxr/niWDSumhqFHRhmc/E4qYEMoADrVhl6sh+7
GcOmqGUBQVHQDtixaRDfbTvUP9PLCCRwzDT08OmNMHZWKs0QBqe5y+vngM0Ko4NG0uozYjKn
mMRn+RPMBRlY+rJT1NB3y6YRXDdNMIWMj6Qakh8zTyw6mRuSPeNB17/XCIbS3n0p+PhhSy2m
IS7rbRore07LU0r3/wCBjp/GS306+s4vj34m2bjse5bjZXErMqPJGszIqswB9GnOn1xy7nvi
c9n8efGG87h+o2mSaa1QD9Rt00zlzUVGdAR9cE8CwT4++Nt0a8s7PaJtvvLOkPvJdPImqmr7
HqDlgxIH+OuA7HYI277ZJuzSyaFcXLxEdxVRll2ODFbqWTgnB+P7nZ7nHt89zYzMum2luWZ4
2P5gaZjyw6sWPybacETb7c/0mRbqRwqTRMIxSorr/iH4YpC03D+NWmzbVEu2KLaBgC8WhXzI
qxJYZ4NUxzcs4Nx3fbyG4mgDsv3IgEBanYmPpi1a873raPiRbqTbf6bf2W4IyokqTmWFiSOr
E1OEL6/+PPibZNrjvtzt7u5B00pKy/cOooRih1zbX8T/AB/u25i52K4N9Yqo1WDzlT40ZwAV
/fhFi4v/AIS4xLaytbbVdbXcxgmJ/wBYkqvTqSHJwLFfx74x4lc26xS7Ld3twGIa4S7WEla/
wMy1/DCqsE+D+HQ38hujexWjgGKATAuD3BIH7MQxxJ8bfHO5LcLtEd5bTWr6HlM3uK/iCHGL
FKo+UfG2ybZJt4gnmMVzIIpEc16kDJgOuKXFbWj/APuZ4sl1HJquri10qxt2fSzGnZ0zFcRZ
TkNh8SoLi1gi3Lb9zi1CMlxPCzA0OvUScRQcYtPiiXb5Y+RXd5ZX9W9uaGoiKflppD5/UYrW
cQ7DzReMXd1DtUcdzbuT7E8wKz6QfSQ6EUwj6tFbfNO5uR+qsIJrwjQl8mtZdJNNLBQQ4Hge
uKNY9K2TiuyNtkb3QW6mnIm96W0VSHOYCrQ6QOmCh5l8w2mxWl/bw2UaxzKNU4ji9rPoSRQA
4shYCx/Qm7j/AFZdbXUBcGI/zAncrhkUrfP8T7deC03Dap573Y29VzMuUgU9s89QHXEIr7vb
ti2DktoNnuzeIjqStzErhcwNBBybr3xLGo+X9u2uLbrS6Szhs7oufdaGMJUaR105HEL48k1O
JAjUK9aV/wCDgZ5+Xo/AfjvaN/2qS/3G6mg9tyPbhoNQAr9xDEYq6LCT4v4xudkt5x69ugnu
hHF1oYZGjUYaThxI9z4X8b7P7VvvFxuNteyiomhZXj/7gumtPqMQZyx23YbPlNrHbbg17ZGT
V78VI5lU9NSPVajFPTrU/L22Xf8A8ECdLx5f5aaoUjm8tbJk3kcPLNZBPjrnZLE7FdfX0UPn
92eG2IXHIdw2rlNol5Ztb3Cye3Lb3UY0srelgVYGopgkaav5e2faLV7O4sLOG1mn1CUwroVt
NKHSMuhxRlZcC2/9Twi6/SzlG/mCW2uYYriFmC/kNAy1+uGtdRnuH8B2ve1vLi9uJppLU0ax
sWRZloM20SCjCuQpgZxzbvxngVrITb7lfK8bj9RtV5H7VyFrmY5CqrXyOAZG3v8Ajfxzc8Fi
v3imhhjiEsd6Ywbpc6etRRWxY08ZuRbRXTJAxkiJqrsNNR2yPTGoxKt+JbJHvW9Wu2yzGGKa
oaVaMy0FemCujcXvxnwWxni2275DPBuM4rHK4iMRr01oV9NfNsHrFQ2Pxbt1taXV1vt5IbWG
rCXbypAQfnIYNUEeGAw//wB0+z3Ztbvatynk2y89QMyASqpGRB9IYV8QMJxU2vxtG3Mjx+4v
j7ZiaVLmNaNQDVQgkjPGr145zj1p+J8I4la7zfbTuLzXV7AKUkVDBoYA1SgLBwDnXGHWM9vP
CNnuuWLtOxXkh9wt7sLxkNGVzIVjTXl441Fi8/8Aue2mUm1hudygvipo1xArW9QOhaMAZ+Or
FQ5rL4h2+32yW93fdJLb9OrC5iiRMtJyZWfx7imDEoOR8HtLPa03nY90XddqYhXDaUniJ/iW
uf7BhkGMcSAxUn1dFP8AlhOBc6WzGf5iOgpiRy7A6gAPAVyxM0waoJqVU+OJDU+oMcgfwywH
TKSBRWHWgJzJGEgdjpAP4gf88TJq9DQAeFMR0gx0nSaZ9PDEKE0NT1J6nEIZdSjTXPuBhMIt
IVwHC1AsKCncVHfEg6qHJan8xOBYRK6sunbywgAoDSg8hXqcUWo9R6GpUn1KcKOO5OYPl0Hj
iWB9yjk6vSMz3wWDTtLVT/jgkV6R6nJBrUd+1catHNMaEaj4dcZa0B1CPOhrmB0/fjWiGBOn
1kZ5qD59sTSM1B8PE+NMGudIuWboAB28vPEUZDhuw/cc8OrASSfxE5erFqnMMjGqgMArCpBH
TvniaMzgrmBUGlf88SRnQVNOo6+GEaFZCV1LUeApnUYFKbWNJOQZugPb6eOJfZGWFSrAa6VD
D/PCx9sC7Lo9QrXuPLuMTdplf0kk0r0ODDvhVDAAEqVzLVywYsiMzEKe9MqdOvnjURpJDqGn
PoCTiFiJzqqVYrq+4/5Yy1og9FCoSafd9MQQyPqAGnUo6+VcUVgGzrpNCe3XMda41Kz8g9xW
UFq+BrWlPE4jyZpCak5KoORzrXxxlq0zClMiCB9fPEAmQhhRc4z9Mz5YFUZkkJJPfNB5Y1ME
lAxAoKDPoB4gdcXjV5A4kosi5uuR7knwwCTCWX+YpDHUOtT0B8sKlpnagLaiwU6gtAKE+GDD
gTMTVgSajv2OIyHJZqBqg19RrmaeGAUImJ1BvurWueEbhzckrQAjLrUVPj+zDjX2QamaM6ft
UZt1qD1pgoSGQrQL6a06+I6U8MGLSMmn1sKOBnpOVPLEtCGYgMB6cgcsyMB0AfQGCk1U1r16
40kqsxUNU1BzHn9cSwmeRh1GfU06U8cGMdAEkgFQaBXIP0HSmITT62aJlUEVy1fvwn7FWUNU
kgEUIByz8sW6hQO4LKTp/iU9z454muaYa9bOh9PgxyzxLNTRyfb5erTXuPPBjenkuGFQaM5G
pUY1oCaVOM4LQ/qGA9sUFT28O4xrGYkZwewqtKtWnliwm0ihGosWzI8QMQMNFAlMyfwxJJVv
u8/Ue2BQyFFXUWzqT/wMRFqAINa6qZHt54kdiKlep7DFiDJIKKq5DPI559+uHGaYOCwPhTOv
TPpiqgiVZwoOog9D+3EqINEPuaj/ALBiW4FTRtTMSdWXkKdKYKYPXUlaUIzBIyJwpHqzZGqy
LUlvrhElhApo6agRTQcssRRsBWppTMMPAdMCxIuXpBoOuZ6jEDOrFMjmaEg54tagnCFQoYAg
fh9DiQVZy2mv2/lplpI8fLwxA7eokOKEHNRl074hQhiimnhSppWn06YlPDPqU/bStCPD60xN
Wi91VkoToDZaiMiPri0DzC0/IOtO/n54EiDBwGA06utTTLENEHDBlAAUEDM9sR0DMqtTNs6D
/gYojoXSlV+6tT4fXGiZQxJOrU9TQimYxkEpKyMwoSemeQwCFXV6W9YDVr0zOFo2gBvur3Cr
386YkdWck0X1HOvgfDEDKQqEV1FvHMYtBkjSmdSDU18DiJy9VUNkorpoanPyxHA1kqHJqoNC
v1/zw6zgkFXLE+kilDmTgbg2qldIqMi1a9e2AnLkV0fcQCNXU/XAKFCgBr/9XiT2BrhBDoag
6D0XzwoyNoXP7Qc6DqfDEyNqUJK0r27AeWJogIkppyyoqjzxlC11NAtSB9xXIfXCjolAGY51
OSjPy64lhy1EqhBNakdBiKOOQUIC5sSaAUz7jLECEiKCNJVgaqCK5+ZxC3wNEJYk5MeprT8c
NohFmHhTsMGNaKNmz66cyW88KhBZC+oj7ulOxHlgWEj0ehJc0JH0GISiqJK5kJ1J8CMFMpSR
tUFGNRnU9f24mi90AqdVQe/icSISoCwI6mgAOVf88WM2ncrq0knUO4HQHCjMrSR6WfuSoUUr
iRwXEehgFFMwMv2YiZFI0knId6Z1xJJr9J/McgB54AVQVzGRzUjMAHEUdG1daqO1aEn64QLS
wFQ1G/h64KUjVZNS5FMyCcv+VMSRAe5RwQXNBl188vLEi06Bka1qG71wggKAuhAFakE1Ap4Y
Cdqk6i9R1NemBGVg7Aacxmpr3xLTo6AMK0PceGJAppUuWOhu/b6nCjKVQhDmFOVc61754sIm
oSAvpHc9K4CerjUC3pBGontTKij6YcG6cyo7BlPpbIeJGJBYaX01OWagdRgR/cZ1KgkEHplX
CqYa66ydNAQF+v8AhgB9ACkBtT/uP44UdkDJkKkGoDZfgcCwlLhgCagjLtiNME9pTRBXw8R3
6YjAliCNJoTmpp2+uJJteqgZ617jEKB0ZgGByNQV8fPEAEkAAkmuRI7eH4YUIqvtnR1oTRa5
1xNHhEKKMtNOvc/jiXkGgAHqNK1OqueWBqAkKaQpAOf20zPmMRpI2n1AgEAlSB0P44hT/qAy
moUBc8uoPlhZoDnUugA8jTL8MWg1ELkxahTofEDFpPpOqjVaueXjggEEj16VYMx6sM/wrjRR
s3qqtAqdPHr2wUDCs9Sa9K6iKfhgOCHXPIMKg+Xj+OJGUKCSATWpHh/1xLRNVwV1CuRP1PXF
VKFBo16QWNakkk17YNOEaVH5fEHPA1hKRUiQa1ND4dehGFmhlYKoHUnsM/riGHUMy0ViQPtp
kaHDqwTkkDUuqnc9R9cS00rIATGAuo1Y9RTwXDjOuLc2HtMupXBzJIy6eGNctX4ZK5R1mp9z
dyMNYhUH8R/5+OA47dpuWhu1ZBrZHVgp/NTOmOv87jcmvoG9+Wtr3nise2ixktrxUCsoo8Yo
unM5HPxxx7vrGOLgXM5eP7i0xg9+3mUJNHlq0+Wflhlaxvf/AL0+C2plvNtsruHcLogvHIwe
IkZUqD38sDOOZflfim6Wht+TbZcK8b60ltHAWqmoqCa4cUV3LPlm0vxb22227CK2kSaKaZQG
YL2anbyxm85TLBb78hce33Yo4761uINzi/8AxeSJgYta/a1OoHljO1O/iXzPBbQrZb+bljCQ
Le5tgrPp7+5mPwxplaX/AM47WNxgkgFxc2dTqSZFQgdANS0r9MPjSm3vlnxRcyy38Nheybmz
B2/UMUXUPyhgTlgCv5l8kbbvWy2+3Q2MkDKVDvq1BVUUC1IBGQxLPHBwLn6cbuXMsHvWchHu
AV1qoPbPCNrS7zzD44lD3tkNwe+lf3BFK1Yw/WmfbDjp9bXbZfKXEri2hO7W13Be2h9zRBT2
30/xkHvg+GLMq1ufnHi5kiaGxuhGR6xIRrI8VAqTXFTGc2T5Z2zbLq+E1rI8d27TIwYAqT9u
oAUxM3xaj5S4duSwvu9jcrdWjh4SigQuRnRs6jEpXdd/NfH/ANXEbawnFt/9uNShgpzBQD/P
DhxluRbz8WXCSy7db3kt7cHUxn9IVjWvq/54JDJXPw75KtNisZ9vudltL+1lctrlA9ypFArE
hq4bJQyO5Xsd3ezXEFuLdZ3JWJT6VXsPoMWJNsl/+g3GC6kNRDIrHwIB6fTEeY9jg+dNlRRH
7FyiKlGWq+kjqemKCvLuV8pvN/3Jrq4naVasIDJTUqE5DLLLElXYS2cF1FJdxmW31gyRhqVA
/KfrhMj0q2+UrLa/0sWxe/DYVBu7aTSaDuEJHXFgxU3e8cAu+RwX4/WW1mzhrkimUmqpUaa5
HFFWj+QeVfH+/bGkVtuE/wCqgNY4yjAEjqCSOuDA8q/lhiCKn9+eLE9h+Ity2eHjN5b3dxGs
gkJdNYWTQR+VWp27YadRnnvFtisWs9oeW8QSFvbnOlxqPqoQKNiW65945D8d8ligudyvLiwu
oVKvahC0bMc82AJOAMttcvCIeQo8kk8G3xOrK6iragwIqCK0+uN/XxSNf8h8n4XvW3xmx3J/
10BJCNE1KHsTQU/DGcLz+HlfI4o0hg3S6iRRQKJXIp/2kkYsWuvYN02+436C95FfTqIWUiYA
yk0OYalSMblFrY/InIeEb/Y236HdJGu7ViViMTAMGFDqLAUOWWMYs9dvCeU8D2vYG2+63CaG
a41GVXU11EUJUgU7YKbWZ2+54ZZbxMG3S7itya2u6W1VkU16uM8vHDqXfLOXcWudnSwku33u
5QVtr519uVD4saeoYA59o5rsl1wl+O7oz2UgjMcF0gLBhqqFalc8MarzueNI5GjWQSoPtc9S
PHCzi34fvFts+92t9cp7sEbeqMDPpT9uCqR3/IPINr33eVudvL/pVjVC7ih1LWtR5eOH8Mxe
8Q5lsK8bn43vUjWcc4dYb2IFqiQZh1zoR2pjLa4fn3Htk2yz2+wkO6pBpQSoPbbTXMlW74pA
ntuW/Hp37/2L9fJ+oeNozaFGDx6hmQKeonCpVI3P9os/kC63SBGuduu1VC32sooAzAHqR4HF
iny5JuUbHt/MY+QbbeHcrZmYzWxUxyRFsiKt5YsGtNf8v2K9c31ry24tQVoNrIMYFOvQZn8c
NhxWT882S54XuO2TXcsm5MrKs06l/dJYFTrGdRTOuMz5FeXrO5j0KfQPuBJ6419RzUSsoaoI
YeHjhOi1Kx6aQTkOmAaGYEqMwKdR4jriRywoCCQD0BxLQk0IFcie/wDjiQenqIq1ele2GM0z
hWYE16dsTUESFddIPq7+FcAoVoGNTmD3xD4O8lCRWhHWuYocTUCmtOmVete/liImZ65Zj9mJ
AFSaKanvTtiBgyhm1Go8PP64jDFlo1Bm1AB3GA1HqUMKV/3Hx8sLAWcjoCB59DhJga1rn2oP
8cRwAYEkVoBnhrByuYApnn4/hjIMKlg1BVctJxNSAL0LKpqo6KcgMR+A+/qyYBR3Ff8APDEa
pJ1Hp2HWuLSHXGGYADSBWv8AjgZoEb3AW6DrliEhiwIL0zPcdcTRgyGmWVMx1riUqMuCpZSM
69fHErdRsSpNaFiMwen4UxqM0IYkUU+mlQaZfTCAVjAJHWtSCcZq0DlG9Yzp2xLSZRJXrQ91
+nQ4h9UelPynSQPtr2rXofPFVAO4qanMZADxwtwJLUGXpORC5VGKUh9LGhBr0oD4YNQS9DpA
AyrpGHBuI0k1KyivkK9cFUOZDShoKeJ6/TEgOwFNZqQc+1K9PriKP+WzAqQQehHemBeEx0ga
BmRWviMKRkgijChpmTln4YF4YFiKhqZ0H/LEkcrKwAy1NkCelR4/6YsVpmfSF706jtTvhOmU
l1DAUOZANcDUJqqxqMyOnc+NMTNBpoWbp2z654kYyihWlSO474VoGnoy5Bq5U6VP1xYrSJiI
zGoihGeVT54yYFwaCgyXJWPQ5YtWGdH9JAoSADXPPqcKoNdSI1U9aUpl+PjiZEdVCQKBulOm
JFpAUALRurKcqDzxEjVgDUAdKeQ+mBfJH20AopBNfpn1JxQhGlZD50HU518MKJTGCQB1Jrq7
5Z4mbEaBgErXPvUgj/gYtZyJqOEK/aRkPAg4jDq8kZYM2qnXw8/PE1mI1LSJXPVXVToSO1cS
SiZjpqQS33AeXhXAtPGRICnQ6TRgaf49cTRx7ek+kKQOteppliBqgH1g6V/MACenhhB6pnVq
AgUbwGJYYsJF9NWHVSDQ5fXABsB9560AoOviMWE8bqTpKaSSaAk1r3/DFiwVfVUjtQmvQ4CU
bpr9RFD1HWmeeHEL3EHqyD5ihBz8hiF0IY6fUAVHTtUYlg6IFrTShzp4V6GuIwLxihCnIfaR
ng0njlzMgo2kds6ftwsS+i90ioZQCRk1aCmCrS9zVUJmD1YeeJrQkguBlqOVPp3GEGdmWuRy
zy8fEYUaoFA65nMHxwIjp1FwCKZKpyH1xLBMCSexWpB8TTBqsChUoH1VdhnTMHEIUjE1XsOv
+QxLDhmDa2qSfuFf3YTCkiqNIOdKrnliRnkJRRH4UqepPfEtL3Cft9SeDfvpTAtPViKBCQOv
0xI8n20RQCaD/rgNCAwIBJIYH1DthWAJrIQVJApQDr+7CNHqYUWhp4fTETOoqQrBf4QT174E
UtRpYUZT6j2I/ZiWAZqjUPTnkcQsNmFVV+gr1FMKxLHqp6CA/mK5d64MJEUNKAIT6ic8+oxY
Axs7CuohD+UD/iuEyHQ11MUAI8qfjiVJyqqEy1tSqjqcCohpR6sa9wFwITvUEE6ScxUDEUfu
q5GRXOhJyp+3CzRqVDhCOpOXSv1wEejSoBNafsB8K4SiQDSRXufT2r44mRB30Vrqp0WmQ8cS
0g9UVkUKrEEmo6d8BlEWkQuv3J+U+JP/ABliRlJUH+I9M6dPHEiU0TVSsp+4jwxKEMtQFSMv
rn54CTZIanrl+zEMIxsYxQgL11d/ocIsOA1CpqT2Ipl44kSug1EVKjqp6UGI6YsWzjYFsw2I
lQCtfvPXxp4YmDVBJ0gAAAN1wNQY7AGunIA1xNSFIhVVBGZzBPlnh0UiiUSo9R+0dOvhiRlG
pzrJIHVeh/A4BDxoEBBzGZBPWnliqIPWpahAoGNM/wAfLCQlnaJg/py9I7+WLAdFGgj72y8v
2YcMgvczCsAA2THPIdq0wI0kYOgjoMqf9O+JDQKf5gprXI0608DjJwLRhgSGJq33Dv5YkTFq
UWigHMgf8HCLDuBoBP1A8PPCAa2PVaA9D4YFCBkoagAKaAVyIGeE4THU+oUyGZ6dc8WIXoYa
iPsFPDLucSJWVl6UAyAP7PxwEygBKEUY5Cn/ADxEU7VFaVAFWA61xM0NCFFfSB1BzJOFU3uV
VaDoczTt4YkeN1ox9soVyqcv2YKodfR6gCTTp4HAg0kclpOq5ZeOK1YQIHQ5A06Z5+GJHDkA
gsdWRUsMhTtTEifUxX05sfVTtXwxKC1LUIwA0iijyxEzaKGuQFB1zr5YQYtQA1FR2z8cDUFr
FKGuZOk0pWmI26ddbgtUA0+lfpiAJGIAdMxkPT59cGs6dmUppr37ZnPvXEdAEAY6QrGPIK2Q
J7fTCqmFGXUp9QzCt0HjgagXTWSHJAFQAppXzGJjQEOraaAjqTlQ/XDCcRKQQTpSmQ7eQxab
DsGICoKU/ZQ98CgmJZRSq1yP0HfCkZd8lQFsjUCgH+WEUy6WodOnV2OQ+mJYkUTNSpyHQDMH
6YsWGCspIoM6g/hgNoU0imk59B3GJk7KrOWLE0H0H44jIMZprBCse/TLzwNow9Gq3QmhU9j4
YjMEjEq9R0JyHbEPg6aQM6aOjEZGniMQhogADTx7D9mIbpFgGZmarMenhhECVp6kGYzp0698
MMji3DWI2agb0mvgK46ctWsnMdUmVaDKnngrEDQefh+OBrXdtqxfr4lcnSHAalKnPMd8df5c
7cc++rJ4+i7n4r2lOJQ7vtF9c3UJVWkUxggVAJGoeeWeOffGdYeOtmraw+GbS82CO4ju5oNz
ZNf6aZAIvUcgWGYB7HHOxVfcU+DhFbXJ5Farcxkj9PNZ3Aoq09QyHWuE6zNz8Mb5Pc3B297S
2tlfTaJeTLHIyjKoUjv44SqJfhbn0N1JBJbREooZZBKjK4P8LBsz5Ytcp449t+L+ZX8l0lva
ATWbaJ7eSVFZW65Cp7YrW9Zzcdvu7C+ks7lCk8R0yUzocKrbfGXA9m5NPOu6T3EUFsodPYoW
Zv4fUP8ADFZkU9abdPiLjd1FdDje5XVzfWRpLaXCgE5VyYaQcZaUNl8M8xvIvdVbeDSSFS5m
WJ2I8B54lOnJJ8Q8+jvZbNrBRNGochXQo6t0ZX1UJ8sa1j7X8uaw+OOXX/6pbezD/pP/ACxq
6+4WH+0nP8MFrf3q++POAPuU16t9tjXpiAMkUVwsckbdBRWOeK/DF6tVs/x7u15yG62zZbOV
ZIGIcXMioFIzNWbIDBzyZy6dy+G+f2Fm13JZpMiCsj2sqSuK/wCwdcKFt3xFzi9s0lhitkV/
UVluVSSp/iU5jFqql37ifIuPXS2+62bQEg+25IZGNM6MKjCYpQdRNB1NKnpn4YToQKEqBXV+
4DAyMCgyzHYD/DEB0qhFSSOgp1p1xE6SZGmROVKVyxBKmkLU0ocz2xKE4QkUyy7d/piNECNK
knInKvn0w6NPraQaSBo7U6f8sBEXbQFPqFclPTzwokyckE0NaDsf+mFmpGuPQcvWopUdQcGD
RIW06VNQaE/XE3DAsOla1NTXviR1ZzTVUdaEHrh1aYsWIQNkD0z74macnSKAj/TCjxhg9akf
TI4KsSGVdVWIIzpU9/rjNOnDPWpqpI74kNtelepzzI7V7nEqYkqhDEkn7c8KkL8oNczXLxJx
Ey06asxSo/zwoiyg9TqPQdsWAi4J60b+HEzaLUQC1DUZU8sBhaiR2y/N0xYaSSHTRaAgVof9
cII6gQR17U/yxLQ9qMc6nM9cUopalyNNRHSuJS03ujtkxNST0xKmVQRQHM9T2w1Q/tgZCgyy
pgIkIIzAJOfeoxYiLLmSMu588QL0sQCcjlTENBRCCR6iOhxHDBQG8a9csRkJajIL5g+eJAqw
r9c/xxM4PShUA9T3GdfLETEUJ7AZV8sRMegFfUOhr0HniWgodWmpA7HrgZnp/UpHcDqRl18s
MOGRW1HwpU1Na1xHA+oMXPTwxCwDAhszVe474gYpUgg//T44VA5jpUV/bX8cTQFUaq0God69
MTOE2VRpJYjLPv8AXEMC7CmpaVPbxxNQMiVzVqN4YtNmhY+mrj0jLPriZxGCxYE/aemBBOkE
mo8any7YloXK10g5HM/j4Ylp2NCM6qBme+IyA9KggkUOdPPtiVRHSCFHqb+KnQ4cAHeIVJ7Z
aqd8anIMSSvQAn9xwBEqrnnpZR6cq5YPlfHyHXIsje5TQRQmnjiqkOH+7MjTm2eR+mJsBarM
xStRnT/LExYiotRnSuf0Aw6bDyMrHPInILnkfHGVzPPQs+gBVIzNfPzxHcCyhXqKE/lr/hXC
1qP/AMYIY5HI9DnhZAQpAJU5V655+FcQ1Hq9ILE5Z9KkH/TAcJWJBY0zqa/XrSmJIl91XNcl
yIBPevhiE2FKEyUZL91fPvia0zhKEg9B6fInviFoUDBWquVa18fwwHASJ1FKLSpp3GNDDR6W
XWreQNemM1rSLFn/ANoFW8QcTN6DkAp8emJaBUVHq1fBhXr3yxHw0ghUsxpVScjnUeWI+GYr
6cvFgBSmWLB4aQ6yFUkGmo9s/LARMSVop8Aa+HjihwGqRQSSBlTPz61wrUZU0opqqCmX7cLN
ECZBnk4U1WvfEgRtUqqselGrlTPGTDtnIB0kbJVPVqYierHWvcNQVI/L4YUZItWpmXURln1r
44AZl9QpUilE7EntXt1wjBVkQ6ZAHH5ipzI8vDFpzDnSHyVjTr4mv1xYTkJX1BtTfae4OJUy
OwUAGp+7UO+JnBqQ+rVkoI0qRQVwNSnMgDL6RqNMz9pHTENMRpUrU9chh0iQkgLpHgx65YBT
kqqkah1oxHbEAAntkp9Q86eHjhgollLAnSSQcuo+uJuCdg+pm9INDUmooPHFitCoSqkEHPIg
ZGuIC1MSOuWfuZV/CmIepdTFdRyPSuX7xgbQtKwYLWtTUYGbUpkK0Ud6n8MK0yZ1CEZdB3+m
BYKRkJWvfIfhiRkbS4ORUg59M8SISE5Vy6kgZVHh9MKO7o4Gk0JHfof2YlpzpYLqIJ/wxYSa
VmFXjAUn0jrkPPBiEQzeotRaZBszQ4CYFWqUGa5U8PPAyAkqQGP8zrqpkcaBq0BOqpBp0pXE
aLS5K0JUUGknPBpw6lyWJ9SrniXwjf2vcFTQ0oV6dDXCzovccyFguRA9J6UPevng0y3TtIwY
ekitVrXw6UxQ0JKHSoGdDqYZCo/1xpklRVZjWhIAbypgawmaqhq1JNCMQOqH1MxHgpOZr54K
YFnqRSgjpnn0IyI+mJHUqF0sQB4Hv4YUaT22FCMyQKDwwg2oajq+gPgB0/bXEj6TTUBQ9wSS
MsBwhItNVBRRQkdAcK0pOoVaU/jbv54FTO8jEFRTwPcYlBKw9IBOomtPrgQ9QFaNUZZkVIBx
JHKWYhNQI6gN91B44UIyqAQBmmR+uLFowrNECTQd/M/XEQuwpWgoTUkdqYhTq9FKqpAPQdSS
fPEzpBSFA7nw6V8sDSRCS5UmnjU5065YkEjIAjLuKiv7sWo0bBUrSgr6j3B8cQlIZDSwJH8Q
zrTpTBjR3XSVplU/a3Y+WJCIXS1TmO48u5xDAsFX1ivqAoRXId8OLDIa+inoJrQ9TirI9CA0
DKDkSaipwNI2jbVQDM+JzoPDDKsHGzLVTn0oB0/fixQylgxcL5e51zPbPE1B62T7qMaEgjqc
VBhpVaH1DxwIDioQZANTIdv+mIYSyMrLUKc/uPXwAGJJK1chcq9FoOnnhJIP5g0nVp7NkK/T
CDEa/Q1aV6eGFo5EoyqDXNRWmMoDhpKBnC9gngR+Y07YmOiVpDlQBaUFO5xNYNtS+kZZVzNa
fTEdLWrE0Oo5VXpl5nBgAW9OugC1PTPphSUOjAChqe3gPpjOHQKgaRadFqrA9698K0xCZ+nq
aAVrXPriwGZRroWqX7HxH+VcKSe4yChNaip7516YkWtQCWNf4R9PDATSdQdR1NQEdK+X4Yha
FfZkzNaitAD0OIECACpBNMwR0pgJB6nSaae3iP24VDkKuRc9gD3OJDU6lPWlO/h54EjURqfW
Sa9CR3OBHdaMRGKuwyTLOv8AyxIzMNABpRSK9Qajx8MSL0tqK/8A1Zg5/XGsRgOhIqfEZ+rA
kzImTinpOdepxGUAKFgGYBegLdD5YTpA6dJrp8h0A7YkESaakD0fv+uAUQKe4dJGmmde3fEg
siEn2qgnP7cqYCcGStdVKn0nsadqYgMOM/b/APIBXV3FfDETawQBRfcYg1bsO4xHRHPTTPsp
r361HbEqWoKo8K5tl18cSRvJoNBmrdCfHCIY1MhNNNaekGp+uEkcsly1ZEnEBorslVfVpyYg
9/xwGFIJFKijCq6q/dTxrTFo6h0CsAKCiVz6AVH78SnINQBYKK0A0uMq/wCWKmAy1DUDlnTr
TA1Ik9zWwzpTovc4Tga+rIaAfSB1B8MGiQjqYk6c6ZJ2H/LFowkQhmNehyHbwzxayZGZhXJj
3H1xoQXtekaDV/An0gDxw66c8uDcmVrZk6xrXUR01U7YZV2yUjr7gYLl/Dhrmf3x/D/x4YEs
bbTHeJoNBWtRnjXPyH078J8z2yXZ22He5QlvoIgZ5BGCCehY/XKuH+nvrWTPG/HNOPpyNNlF
/FGgWiXEsqmEGmSmUGgP1xy+rCz2fYEsL+/3OXdLOC2u21QqLgGMrTOpqFqcJkdEu8ruFiU2
b+jbpPG5Dx3jqNIH5j+Yjzxk4zu98m3bZtz26PeX2eG2ZtLW1gzBtLZKWqT6Bi5p8XO9vx/Z
tout4truz1Ouv9TBIoctSlaA5+GLNGPmrd9xN/fS3UrMdZqFPXr1rhhselfAs0Qub6Ka6hhn
0qsCzyrGzVz9Feprjp1fGJXokdhLxx9x3bc7i3S0natUkU0Uf/rFvIY5tS13Pun9X2yJ9kt9
p3Ukii3sgVwK9xXUDiCk3flO6bXvlhb7y212tvIdLxWcpdgezOGPpAxRv6zFrydNo2HY7zcr
aS2jeWjC5tmBkd28q51r1wdRzsZj4j2XdJry83dmt5bW5NYis8Zkof40B1L1741L4fF5tm2X
m1ctvv6m8cMe4AmydpkCtSvoLE9fri/BjN7n8ecxg3S43r9fDt1gshlcm50qyVz/APGf3YoK
3M91c7ptlpPs237Vu7ZEtPIg0acjWhrUUxanm3zHf7+8Vtb7ou1xrBUpDZySPLpag9QOQxZq
x5RQAlc6ePTCEqt1z70PiaeGJGJcHKta1qBniqEpDE0NPAjx88B0Sofcq9KAVOVP2YUIv6ak
GhPpAz64icGp0sKnrliZOrVNK08MKiRPRlQnLMnMnETlg2RILHqOn7MSJkAIKkgrlQnFKKUc
alqkeoeqhy/bhEiSEnNvzdss8ZaOmvWdP5u4Pj44kIsF00H1GEGIWrAfUEeH44RgsxQg5np4
YiMsAfw9R8MSIBNNF6noOgGBHzYDyzC4kYswyyFP2/jiRvcNQaA6egw4zaJmBFAKV6ny8sS0
RZWXqMuoOJoxyrlkOlMQD91SKVGQHQHEB63fPUPIHIYid10INS50oT5eOAmFCo9RWmQyxLC1
hQB+YV/4zxI7MaactLdc864gYqCgrXxyPXCsw475VFK550xIAKlsqAHKmIaIAEjOpHSuQGIz
0qNXoAW7/wCoxLCVRrq2ZrQr/wA8BCTpPXOuQwg5Ymhb83b/AFwLDEhTpNPI/XCglgSaih6D
/XEAkj0A5nECLtT05gDr2xExZtOX4jyGIGAqSW79PAfTEibM01ADvQYjEbE6QQc65k4YaIEv
WvbMgYDCkJUhs6HpQ9sQRkh0zoueRHftniRq6T1r4jEsBqjqSRQ5+eJAkYqRTKoyOEZpoyCu
uua5svb64tWADJqrQaqVJplniIHdipoemZFe+IF7jGmfoPUAYEGWU6W0jInLyoMQtQLLmaD/
ALh0rTzOEYZmOgdOtStc88RAdbNVQ1GyBPhTFKcCMtNfU35vww6goWLHrQnNvLwGIYCqs5Bz
oCanLCDKwA6fd3+n7cCwLE6hU1yAr0/digoCChrq1VNTX9mJqQDnMA5eAzwnw5KH7aqV+7P/
mRUVwnSXWDSuoHMVPQnASYlRp00LdaU7YhYABg1T/wCPrXuR4fXEMHGWq5NPSKDV4eWLUbUz
g6lJb8zLkPx+mHUBSPcBpq0flBzIp1xHUseox+k166dXWmCgiopoIofL7qYy3ydWdKBhkvTI
VxIv5bDRnp79jiisANNS7NRRkuWWFQTatS/wL1NMOqkPWaECo66v3YtGBRlJ0Dt2Gf0wA5jo
Tr9J6pXESckAK5BViTmep65YRoYwCKg1JrmT0AwNaLWpGlWqW9Jrmc/M4UYlqU+91GZPanhg
HwcjMMtCSPUp6jCjAKZAQpCg0oO/n1yxAxUgsDSg6AVqadMBPUuwBamrLUPLxxH5HpWufpYd
GPWgGdcEjVsBrNCEX09VYnCvmCL6VzULQfcDU6j/ALe2WFm+GjUK5bTUnMsev7OmCiSk2mQ1
zTTRqnv4gYln7SllkXWKFTn6elcFbliIkt+YBlzJPX8cQ1Xbwim3Z6Vbscj3yxqM9MuWYynW
QMswOmNAtLefj/zwrFhtsmu7VSzLHqFWUgEZ50rjfFU19xQ77u22fHFvJt8nsezbgwPo6qFB
9Ncssc78hn+DfId7yzd0sdztreS8tl1RztX3B/2Ekeo+GNZGtjXWl/vg5He2k9xOdt9seyJC
+inRqN0PnXGaKbf3v9u2SdthLW1yCAiW/j5qOtMZYcPKtzvodgs9ze4khv1aP/5LNpYdm69A
euCdetYb5F3/AH6Li8TQXDlbjQszRd42GbZDvjSi64rf7RNx+zMMscrFR6gNVG8GoOow4rVr
ezQTGKKSWJpK1jCUDVHde+WMjXmnKuXfItrvkllt6z3O2uQJVkhEiUbIjWwrSnnjUONVvvKN
62nhiXdmqxXOhaegBVyzBoBnivyM1T8F5pBzHdQ242SR3lmg9q4DBn1EZlVbME0w0VfbpzPj
1nPc2M7bvuGrJ4Pa1RU/7qL6frgxUFhJPtlmkk11d2dnNIDDb2McbgBsvVkaHxIw0tK8SJeR
XaJI9xoCq1wvqC9cwRjJUW3b3um57lu1peODZwOEjjoCtPPLvhiVXPduMm3WMdpalpFmUIkc
YroJHh2GIVqm2+ESQSNAIriGMBZAAjig8VpgUjy7lHOPkKDcNx22CL9ftwJV4ZoQ5CnpRyMx
9cUqR/Gd78oJa3zcd26zmsvc1Tx3LCP+ZT7UoVbMeOWNWCsLyq+5HDyK7uNwX+nX+s+5b2re
2EIyIGg4JS5Nqv8AeJrqOCC7nV7hhSsjUYscta1ofxw8h9Scbs4LLZ7W202iSLGDKI00gmmZ
88Vul5R8670slxa7dBcW7CEl5ViB9xWIppYnKlMULyzb3nS+hlhQSOjAojCoJByqD54FI952
jbNv5Htdld77HDtm5wEFIkUIkhU+kn+Kv8OFZjEc93C5uOQw20m3RWwt5ABLFGfcmFaAg5ZU
6YYt9a75VVl4fCCAxBUuTXP0g5/iMZtYvy8MAPv1rmaGuNGva/hOGEbRfy/p455A4Id0DUGk
5Voe+Kpo7G4flO0XcO92kRjjmMaQhaFQhy9VK4Crecco3PitrY2+zpGEkqP0rQh00gAUHSn7
cWJ5zNyO7vOSWl+drisLl5F1+1EFDdAW0EMP9cUErc/L9tYtse3TmOOGWSVRJLEgQ0IBpl+3
GpIOqydrxD4umgEz8tnicrqdWhXI/TS37K4zYW2+POP7BYLfXm0Xv9ReM6EvREI6VGoqUNev
jiTR8f3W53nj7y39qiu7vGzqmhWUHTWmGEV/f7DsK2sM25x2Fs9EjtXt1fWP4QwXL8cFDPWn
JNtg53Ba8bKfp78U3CL2vbQuPtZAQDl3phkLg+UN63W53rb9kigilEzKyK0ak6tVB626CvUD
FFGt255bY2m2X17FbTMNJsRaqUfLMKxrgwDXje07XdbjfbbaRQXLqWBVF0ZAkLpI0qK+GIvP
tm51fb7vdlb7htVqzLcgJeQxlGVhlQt3rhGN5vPITHyjbtljtIHgu1YzvIgfIDpQjviwT5eT
/Luw7btPIUSwiEMc0YlMKn0gkkGlO1cEqsVHx3b2s3LtuiuIknWWTOOQVStO9enljVMj1Lmf
M9u4xu1vYrslrNt8q6riP20DFT/Dlpr9cAl9S8ch46nH905HsVittHJrl9qZEcHQK6KUoor0
A6YDY7NssNu5HtG1b3f7ba/rterVHGB5EH+IeAOHGlJJsOyJ8qWkEG2xfp3hMksKoDGDpOZU
5ZnBWZIudmuhtnyFe7HZ20EW3PAtxHGsSqY3I9dHGefWhyxYZGc3mW35B8o2+07jZQexC51S
xpSSRQNQ1vXMA4fhRsdwk4zYXn6Lc7zbf0zLRrSa1T3itKdV7f8A04zYK49ts+I7Xxzcdw2i
xt7u3tWllt5GVT/u0hnBICk0wyKvNOT8w4fyDY63G1foORw+pZ7RFEEmdD7hFCRTxHXpisWv
PUYF6tkG6+eGLXtnELa3tviO63C0VYr6BZ5BcJTX7i/bqPlgXU8X+w2NtvnE9hvd2iF/dREH
3pvU41eZzz8MUThO4bja/LVps0U8ke1NAzPbFqo1UzDDv5Vw1RmN3tLXYvle4t9r49HusTxL
K21hFbOVdUjx1DAUPjlgqlY35AvduueQ+7abGdgdQBc2koVWMlfv0AaRUdh1wxR6xxje7nef
iHcZp0iQpFPAqwgINKrkaH0gmtcsUno/pPGH4F8ices+NPx3ezPZBXc2W72Y/mBnplTqGA+u
D4XP6bjm0e2XfxHJPb3Mm6W0YjeC5uzrmPqoakjJqGmGLrx85lgkaSPGyh6hGYek/RuhxNa9
t4nGlj8H3u62FIdyg99lu4wA5YOFBencA0GAdelz6yt91+G9r33dEW53qGOILfuAZR7jFWBb
v+OKerp4GTTItUgZMO5/HG2QMwILUyp2OdRgIAVf1A1yADdKV8TiE0JfTUqCD4gdT44iiMxc
a2Wp607EnEADWQxUnMdchhxAViDqYAkglfp/yxAMmjqralP3ds/LDKEbHWurqDmvY08K+eKx
o/QjKmrIV6jvjOo38watOSg/44tNBmVNW8yg6UxALMVGokUqAAuZwiwiaUFaGmWfbCdDX0sO
udQf9cCpKVOpz91MgP8ALBYvEctAmrVQjqBliOCDKV0sKfTLEkTa3p27UrnXzxI5Vo1pJ6q/
bTID6YjmBDaX0pUE5FqflxLDPGhYUagyzNKEDBDYYutSOgpQEHtjQA2sCreoDtWnbI4mdCNB
Sh6rme4JwAysfUppX8o8sJOipXKo1k1rnmMv8MWkIVYh6urGmXh5nAgyLINPQZ5d+mLTRMzV
oKmv2iuQrg1mgDLUayCcxprlT/riXweuigIILHMVqMxTLEfQ6dPqUVI7jx6YWcCrmvfKoUjv
X/TEoGrhy3Wnby8c8WoTFihOS9tI61PhiOko0oVIGrKhPY/6Yms8PUBtRYZdwcq/j0wLQ+kA
qhOX5+uZ8PDFjOnkjdR7TAmpz8emJrET6WND6FXv1OWLRiShCVLUIyAPh4188WnEUvuBhQUH
cDCrDqhNQRUV1KOgz7ZYgYkhCCaZ1xKnUyGTSQaGhyzy/dgRswoJNNWVPD6YpVYEhkYkdAM2
Xv5YUSguVFQZAchln4YFp6uJGyzJoF8Kd/wxaPThiysGp1zB6mv+WEyYZAfbpQgJ9pzpniRR
acm1es9a+fhiAVkl1EVCgnMN0NMsvPEvRCNg4b6988h/liMMoGoFWIfx7HxBGIwQORBJ9QpQ
9q4gYhagVplXLv8AhiwWnJdfT1U/d4n8cCAA1QYwPBaVz+uFqDFF0hgCo6fU4iUtTVkFQOo6
f8UxAOlT6V9WR9fmOuWJnTBUdWfuD9vj+GJHUEgF30gDIdcvPBpkHQ62zALUK+HTEdQxtpz0
UpUqw7+IOIJaArqA9NM1plliJROCBqJKtU6TmBXCCdlOSimnJc+g7/txYrTJWtdQ0qMh164z
TsEra2QAVShz7AntiR48koFzOVOoGImR6yUYCQJl+PemJDChgNDFQa6tOWXhnhCFx6h7YNOh
Jzp9cKS6mDZdOjAdelO+IWgSUsUULkf3AYMMozIi18Oh/HEdCgLDrrK5qWzoD4YAfQQCGWtR
kxOf7cSGAmllZq1Wp/wywqU0hGnSKDp6h/kDi0jJWLPNtXWhq34+eBrTRtHrooZQ/XENOAAX
A+2ta9yfHPwxMkVBUuhJL9SPLscOo6hUFdJZemQwIRD0I1GmYJpkfCuLUABfSpyeTow6/tHT
FEILLUg+oDOPxywA6pIVATqB6x4eWLTgwpUqv5lHqHj2riR1RgpYGuYyPTEsEAVIDepeoz6H
6YKTM7KtHUuGr7ZGVM8SD7jtqWMEuvUg5Z4WamQze1Ufd/u7jxJxGBLsqkEh2NcqdvLETyl/
bVxSpGZrmPI4UfU5iQ1qR3GVPpg0nh0tFU5I3T/g4gEiXNNIoDkxP+GIGC6QdXQdjnU/TEBl
WbSAwU09Ok1p2/EYiMEqNHSo9OXQ/XFhwyVIeT7mGQAyz6VxDcRx66dPWDnQ1/biBytJfTWv
hXI+RxWqQTmlT2/MBXt5YlpnbWKMdKkDr/ywqicnrSpBz05/TATqPcYKtSfHtliRm/ln0D19
fGn/ADwC3B16xkVbImvWmIoSdLVY1JNBn3wgdHRyzlmNKZ5nx7YjD6iSprkc/Cn0xE2j1HTU
jMrq64iMMrAqOo+0gU/xwGoo1KqwatR+U9DiCW2yWhKliKA+X+7EzoIyAQmS51avfERoqlya
Alfy1oa+OE6VaLRVNDUavp4YAYrVKkggHI51p/rgxYLPV0JA+6mYr0wqB1GooCaH1kjLEYcM
1aAVJ6npkMWNHMgBqOv8J8PriVRgKWoVCSA5EZ5eWBjBtKCSq+l6UPetMSMACik9jmCafswn
BssYf0ggN0Nc8WIJqr6jkVyoOp8/riQiTQt1AFQR1r1ywrSAkY1OWYI+vjljK5DRNQJoT1Ck
0B79RiJMoSlFApQ0Hn4nCC9LKVXU5qczQUr2ywEwVRUgfzFBIJJp0xNHDhlABqxzNBXP6Yma
agcig71riAtQjrVqBsiRWmr/AHHEdO8pWlFHYKwyzOJegZBqLKSriudMgRiNghI+QYguBU+W
M0zwWs6SH1H/ALezfjiVoamtCasejdcvwxKXTE0UKRXu1B+zLCb4B5C1FDZN9p8MTF6FIFBz
NXpQkZ1PTriGkCmlfTT8pNKHFpNKwPpVa/xav9cKpomLegtVa9G/54hgmQ6iiimfpFevj0yx
IQEntegjMkdBWmAw9YwgIpTKo8fEHE6TCMQL/dTTmQMq/U4XPPRRMM6MBnSvWmA7AVZaEgFW
HXvXGlIFZEAFGOkmgFaCvfM4MM1MREV06uvUjw+uDFajbppatSc6dwPpiVPoQrpy9XQnI/TA
tMyusgBaorpp40+nhiWhdj9uk60/N2qcOipFctED7YB6gg0JxVTQGtSykg0+zqcDSu3gmO3Y
oDT6ZY3wuoyrj11IyrnTGqwPQvn4/hgDrsH0XKuMipJ1dwfLGuPl05j1XaOS8qutlTaf1tw+
3jSf0ygsoHQAUBxruZ6OuMT2ku7WF0ZYvdtp1oyXFCTUfwntjlz3vwz9W4Tnny9c7Z/Mnun2
8ihlEKZDt/N0j/HFbjN8cG28q+U9iikurFrm3gc62knQTxOT3o/RvxxXpfKn5Bzjk2+Oh3bc
GkpmYgqola5kqoFcZ1rMT2vyHy6125tvi3Iy2jKQI2UMADkQNXgMbgwOwc35ZskzSbTdSoub
OkcfuIW8ShBrljfhd25fKvMtylhlubxFuoTWNoYxEy07MKdcZjPU/Tsm+bPkC6sxZNfI0RGk
sYk1sB4uAM8AijvOb8p3Cwawvdxlmtj0jc9APPDKZFftO+7ptN9DebbO0V1CaJIP35HEsbK/
+Z+f3dk9lJeRLFICpMUKK9D19Xjia+rl2f5V5ltMPtQXgMZNAkqiQA+OYOC9LHRd/LnyDdSI
W3QxaDUKsaEZjKtR+7FKp5Vdb/IXJ7K8nvLa+KzXApOwodXb7TUDFunrFltvy1zixhZIbtdL
kkrNGJH1/wAQY9PwxMAn+Wub3NzFLLuBklgOqJgiKo8cqAH8cbzwypdx+Y+Z39o9nNPb+xUD
WkAR/PMHKuMQWs5Zcn36ykaSzvprOR+rROyMfrjWpy3l7cXU5nuJjLO5Op2JJNfE+OBGty8E
yvEdLqwdShoQwzBxRSNhF8rc3jCRruTGVF+4xoa/91RngV8Zrcdxv9xv3ubhxNPITJLIciT/
AJ438g8UO42cscpt5otRBSRkIrnlpr2wYr3jQ7lzLl15+mF9LIjWtGtzp0kaftPTy64zp59T
zfJfKbi9tb25nikntQDC0sSscv4hShrhlP1We5fM3Jdz26SxvYLSSGUUcLEa07HNiAcZ0YwJ
kkkbW+nWxOaj0gY1Vj0L45+Q7Li1tdW01tLN+oo6PEQNLKKeqvXrh3xYbe/lzkW4o0cccNpE
xIMkS6ZT5lgf8sCDYfMnJ7OzjtpUtryOHIS3Eep/21/yxpnVWnyJvg31d3Edu1yP/GjR1iA7
EKKYoYud3+Yt+3fbpbC8sbHTItCFRmKnxXUT+GM3wxh42lfUI1OlRU6RQA9MBq64xzfedgla
Tb2Ghh/Ot5hqjencrXDgnrR3PzRyma2aGKK2tY5FIMcCUAB8CSSMQ0Fn8xckgs1gubSz3CRB
SOaaMmQHtWhz8MPyPszW6cq3i93iPdW9uC4jIMfsgoikeHU4jqz3n5C3Xd7aKC9giaaAgxbh
GNMuXmMEM9W1j8x8lt7VI5oba7uIABFdyKRLTpma0OK+GzES/MHLG3L9c7W7Ap7b2xU+wy+J
WtdX44mUe5/Le9XvtG3trOyMDBhLboSdQ7+rp9BiijgvvkTkN7vFnu8s0S39nQRFFAWlOhXz
w6Vdyrl+7ckulvdxCLKihE9gUSi+Pme+KYzXDtG6X2130V7bMomiNVJAIBHj4jFrUru5RzLd
uR3iXV+qLJGixp7Q0rRT1p5k4hHdxb5H33jbyQQLHdWUoOu0nqU106rTpXvixq1Ybv8ALG+X
kSw2sEO1RIVkPsairkZgLq+3zpijNd8PzdvUaozWFq9+qhf1pUiQinRqdQcBUG4fI3Irvkv9
ehK214qqqiEHSVXLSwORqMjh1Q++/It3uV/BuYtYtu3O3IJu7Uspdl6VrWn0xRRbS/Ml7Oqt
dbJYXN6F0G/cESEUyPTI4cFqv/8AvS5Amz3W0FLdrW7VlYLHpK6upUjx88Z31MRIzBjp7nMe
PeuEXEbFsgtNRzFKnLDBjQcU53u/GpSLWNJrab03drMC8Ug6epa/vxVa6+SfIu47tIohj/pt
qh1C1tnYIHpQNlTA0sh8tbudsFtd2kFzfIoSHdmqtwq/7qZP9cGpmrHmPJrLff6zbXbJuJBT
3z6wU/hZWqCPLGqpMDyvmW9clvo7rdZFmlhT200IsdFrXMKM88UqkbXYvnGPb9lXaW4/bPAI
9EixsY0eooSwIapPfAqy+z802va97vLscetLqyuMxYTHWIx/+7dh496YjEXLfkO73qNre2t/
6VtrAVs4HquXnl/hh1np2W/y1KODrxW92a2vAsZit7qmkrnXWQQfWPEYhelZxH5E3Pjss0cE
aX+13g03m23PqhdQD0HZvPE1oOYfIe575/8AGt4RYbOSDHYxMSBQUzr/AIYZRWPOZoDVRTId
anAJUbEKDXqDSnXCiLUU5aa51plgIKPQODl1DdRng1qQNQTqc6mbMg5UpiGIQtWoaVNBQHLG
mKMopYMKAioPfPwwHyoXSjBfSSw9I6fsxqM/ACV0sCtWyNDlT/djLW4FWz0EaSAdNcq1xM/f
SV3UEN4VC9Pw+uGNGKtIAoHaoUePapxNFJEEUSSLlpHTt54AgZyrkkAIvU9csbkZ0w1M2pfX
qNKeFe5wYTkMGHUk5svegNDngq2DPtsxJHkARgaRKHcDPSAakU6jtgWHAUFhK1ATVTSv4fXC
jGHQhEdSpzo4rQeODUUilhkwVT1NOnliIUSE/aQwH3Zgmo8B2waPEDoNS0FGBqVGa41qsCUC
yZ9fupnQgZ9MNrEkgkEbKWJ0kdGrlTsR/lgMIJKxFB3JOrLP/piWVIVIBXTXXQhu2r/LBp1F
LE5AR8iorQj94xSr8mX1ZnIqMiATl1zIxVpFU6lULT81G6jwNcDJEfyyNXU5g9MsKJWpFrY6
RWilulfKuCqXPlK4OoZUqOnStcMXSJmcMxFSfsp3GFnEbISRIKeFD44TgjprUDTTofAjAgg1
Y1pkRkT1wjTnINUD15VHfGSOSMAClSD1z05+Qw4LZ+A62ZQGyy7nx6CuHDCKE0GqjDNG7imD
FpgyOzDTkRmD4jLAYf1kqoGQ/Me2XcYWgSI9QBVad6+VajEKjlqQgCk5Vr0yxDTlpGVaAhiN
Ir188sGM6SkkEn7lJNSM6UxGBLORQV0g0qDUZ9zhV0VXZdWkaaUUt1J6VxYz9jelT6cs6A4s
P2h3q1K1AHpy60OLD6NRQhAdOpSNNan6kHAb4FojqzIVRQgUzocaGi0S9Ez8K0NR54FtRBya
n8w6Uzp44UlELlhIOq5UOQIwaPUDtq1Lq0k5EdQPHEdSRxRslCQG/wBMRkPo0yHOq0BY+Z7Y
KsNE5DAGuVSPAnw8MSlOzJpzBJfMkHMeRwnUSqUYlXp41Nevh44gLIxAKpFPUWHSoxDNHHby
sxoK6c9Jy6j9+C1fWmIkKOUbP/bl0zpXAzueCJZRVic6EMe1cTQQqI2rNlfIJ54moTxSBRJU
huq5ZZeWKEIRC4UVoM9R7mlcLIqkqzEValBQdaYcWwMKq3rOTL+XpSowaJBqVVmAqMhRsTRv
UTUkB2qMqCgH0xI0kyrky9QApyFfPErcG9W0qVbPypiWiGpgAK1OT07/AIYUiEZrUMCtaE5n
FQeQO5ATp160rTAsogqRjU2ZPVaGnTEcMVjoENdVTQjvUYBLo4yzIyuAUIoAvh4k+OJoo0Vd
NAFr0DHw+uJJkKCQ0y7aTTI/XELUagFTkag+rKmZxAyaS5ATTIcyT4nLFigmWqdT689Qy8qH
BDYf1LTSKEeJ9P1ONYkgrI5Nagg6iMhXyGAn0rooTrRhSn+NaYsZKunUwzOQUDoQOn44Gjqi
lSUYkg+knLp1/Zi0mcO1CPTTL6+eCs5TR+6CSBpJGde5OFYVWOXX+I/64iPIqAPUWIqfLASk
AV6VBCijZ/bXCzaYEaaA5V9P4/44h8jdUrk1FBAZe1eueE6NIXoSB0qakVPnngKOtWLCjKR9
pyOedcSxIrKV001CuQr2xHTM6q46nr6D/h54hacRuyClFX8oOfXzxUSaWgAkKKgdQMxX64jg
Y4iVDAekZAjr+3wwIQWQMGYtpP3P92fhhX1CUoAENADkviD44kMashRQa5t3ywEDSVlJNRQZ
LXz61wsUiieplOpTTSB28TUYlEjKUUAKWUnI9unfAUdZQlQw8CT1BP8Ajh0iSI6NRYOwNTSq
1DdCM8AsJSxnp6chkD2p54Vgmd9QWlO+qmdR1wEBMvualppYZHqwPliH1Ojkks9VbvXv+OLT
8QEYlBJapB6kMSMWszUoBzB70pQ5fXC0f0qulhQHrXPLBjR3OpchRTmp74QAKShWh1nME/8A
LACCsXGlAcq9c6jEsEqAMrUBA+6mRB8fPELT6gpCv6gK0A7DrniWiRzI4IoIz9p7VGJA9xld
j2BpUDpXFiSajpDkgLTqO31xY3JiMBZFag6nOmWf/PEpT0C5GpUnr4DxpiWkhrnmxBpUDt45
4qtGQq0egVe1PHzwIJJBNDSvqYCmRP8ArhgwC5MGOQUUHfriRq1YgDUf9p6/ji1CQk1CrUMM
gcgKdaYkEOy/9o69v2nFENaOaH7aUI6fswVqFJVWII1K2eoeXjiZpLPH7ekiin0hq0Az/wAs
WGUnAIogVg37wO+HGjlYxQ1KdjQUP7sDPUNojr1Ap0NKfhXxwCQQJMepVJNRq8PLI4MaMWUT
L6Cf4iOuJaGY0NM69aeNMIwRMYX3XNPCmeZ6VPhiWlEUUekE9x2H1xVacHSD3Boad/pgxcX0
VEZgSunuaE5fXE76D1FmLBWCf+MD/PDHCwKVLB6gA9dQy8sTMSxCR2I9Kk/sz64m4jkNQcxU
Gkg69PA4ZBacKjKAumg6AeP1PU4sASC0mhKg50r4+HlXChmRtKhhTT1C+I7UwNygNCasfV3I
6YF8DdQ2lkJYDpTuDiFArB3p9pPpIOX7vHCpBAKxIA1OpqApHpp4+OJo8hKlsqxk9fLxxEFA
RSIa1GYrSv4npitZp1JlzAOkU1D6YxajslBUGor1bM+WEJY6LXR9pzNep/HFiR1q3QALUk/5
4UeOSVgVFKH8wyqPKuLGvtfwcvGE7ZmgJzNO9PPBFelTvpb2Dp9Qbrjpw1fhmClXyHXw7Ya4
n0+R8OmBLFhSZQGGqozGRNcMb3x7/wD24bjf2/IFSCcpDNC3uqVU/b9pzHXHT+l2Dq603zkd
e9W7tGCoRWdKBAx6NQgd8sceJ6I1lhtWz3Xx37lql5aKYDSFbovFqBAJZSMXU9F5XVrNx8cE
j/raT3FpHH/MtkNAT0JBBB64rymc3j44+L/08O7TWU0FhMAJI/eYlS1CpHWmMzkWAl+DOLWr
T7le28h2dIxLE36htaAipkNAO3QUxZgkVXxddbZZc5u7bZGY7S6MsZlAlY6OlNQFD546SeHV
Z8zRWKcugkuIglvIR+oMSLGWUfcwI6HtU4zxDLqLeT8GjY9e2jcl3crS2Rw5DSEdGJHt0+hx
X5FrzYAK7Ba6OgJNT5Yi3HxTwex5ZvUsF8ZBbW6a2jRhG0nT7XzpSuNWDcesP8FcMmEsce13
lkyj+XOLtXViBQGhLHGPqlZe/FnxZsu2G/3Vr6UqAHVZep6alCAZYZzC5tq+M/ifeNwWfa9x
mntnj1NbLL7kqse51Cq0+mHEHcOCfCa3U20vc3NjfxCizyy0UsfEkEdfLFOWfkTfGHANj2yK
53kX+4mSRYjNA+hQznL0jT/jis02LIfCPA5Lz3BLepZSDWsesawT/upXT9a4JFmKzcOEfDX6
i52tb+fb7y1Fazy6kLAZVDCp/bixMrxiw+JEe8g5HuFz+pDFYJIC/tFBlUaQ2Z88VovTK8gt
9ji3WRNjkkm21CRDJKKMRh5utRWghMk6dA2eRPhjRj2P4MaKVL+2uLS2lTSHpNErs1OvqIr5
+GDqC1kOY2e3w8zkjEQisvfFY7dQmgFh9q9K/XGuKzXqXM9tH/re3SR39xJaxyxMkVykRyag
FXUD92M/kXlc8y2ziF7Y2cW+SysWdYoBaAK2phSlafbjOGMjP8QcRt9zjt5dzuK3tf6fE+gO
WHVSaANTtgOuBvh/bNqsLy73+8nsdLk286e26FaekMOuo+GGJZ/D8e3XVludrNZwXdrCQokm
iVnIYHuQfSwGNXnGr8PNOW28cW+3kcFusEaSsFSPJAB4eAwsyuHaLH9XuFvZljGkzqjSdSK9
xiheuXfxP8f7Y0IvN1vWuLsrHbJGUDVbodOg1HjgxmzaaL4P21Nxla53OZ7CNdUaRIqzk+JJ
GnLywY0zm6cV+MUjuE2verm3v7eoMN4jUZx1AYIKE4ZGGi+H9vtpNov/ANNeywXJbROkkMcs
RUD0stev44sateYcmiMW8XCllc62+0aUPq7KPtp4YpBKg2nb3v76G0V1ia4dYkkeukEmmdMR
16Xunxv8fbFBAu/breRzzjQJYlHtll6kKqu1PrgxmyMpcbBw2Hf7a3ffPe2KUF5L6NGWSNaZ
BkIyNe9MONRNzHjnCNutobjjm+f1J5xpNuxVqZVB1KFpn2IwT5UrefG3H+B7lxq4H6A3l4fR
eyXKDUH09EIyUfT8cNVvjAcatOLDljRbzHI9oJ9MMMOaFq0USZhtNMVijefIHC+NybjslvYW
Mdj+ul9mR4loCpGQoMgRikH5W9xwqTZtvWy2Tje27uoB9ya7cJI1c/uIZicFSn4T8d280l9u
e5bbHFuUcxEW1SKPYiHUUqTqr2xaVH8kcf3/APS/qrjitjt1sjeq/sHDuAchrVQtB50xBV7R
wjg1/wAc/WS8n/R7uiMZ7STQoDr+X229TA9iuIYn4R8Y7fvGyXG97pfy21jC7oi26AvpjzaR
yQ1APIYW9ejrxfYr3gkW37bLBe2sqhba8kjAc6jm9QNWrDKzbrHH4i4fHfx7PLv1xHu86Vih
eFCrafzUA6f/AFYKIwfLOK3nGt5fbLiRZQiiQSocmRujAH7T5YI1EHEeOQch5DbbTNcG2Fxq
LTqA1Aor9vjjTNkr0ab4U4zHdw7W3I3Xc7gM0cCxKQdPiKkgfjg9OOKy+EIoLe9n3zdza29o
xBktk9we2BX3DqzUU7UxmazJYzfI+F8cttv/AF3H+RQ7pBHlcW0n8q5X/eqmmoD6DGpGsbXi
OxQTfF13Jay2twgSSSSO7tqlHUVqkqtqr4E4l18PF5kVGbScz1rhg8XHDeMryDfrXa2uf00c
7UM6gMRQE1CnInFaZHoV58J8btrpduueVrHudwD+mgaJAD4ahrr+/B6z+XLtXwLcv+sl3vdT
bLZuY1/RR+9rFK+4R1APgBXDrVg7j4FJuLKW03kzbXPJ7csz2/tzxk99DEBl/ZjHrM5rsn/t
92zU7jkkohhISU/pVLq5P/d0zw+rqW/DjX4BFoL253zfUt9ut/XHcwRawUpUtKrH0fRa4vTz
LFdN8FRTvZXm18gjudgvpAn672grxAg0/lk0bUcuowjqftVH4huoudJxa63FUjMXui/ihJQI
QSuoMaA1ypqpjSnK+sfgB/8A50m6b5+nt7OX2keztzNUBQSWUmq9egBxn1qTIoud/D19x7bv
65YXq7js5ABkKGGWMk0BeNjmpOKaL1iC7+KdpfhK8m27klvcXUcYmurBwqUP5o19WrWOmY+m
KWq8vNZRHkVaoOYPanhjWrF3wyPikvILWPk7zQ7MamR4qmj/AJK6ator93lisMj2nZtn+LuT
7hebVaccMG3QjRHya3d44nf8ujUa1+tRjOM151xjhmyn5bXjF4F3TbYLp4XkqAsoC1BOg/ga
d8NmQcb+VjyLh3x5x75abat1EttxtIo5Y9JZtEkiaqSEeoxg/jg6PMm1rtk4n8VcwuLzbbbi
xsLRFPscktpJEjd60VotRrn19VcXwbzHnXBOF7JP8trxvcgm77XbzXEJYnTHKYQdLek188jj
fU8XG2O/eeI/Hex/LN3sO8SzWvHY1SWKQsX0yyKHCSFRqMIqVpjNng+Wz2bivxRy+a/2yz4w
22WcOpLXktvIwhkdTQGPWf3MDjOCcyPOvjvhex7j8nvxfdI13Tbrea4hkXUyrKsNQJNSEN1F
cjjdljfElnjr3Dhvx3sny7e7DvVxNacagVDA7OWRZGQOEmYDXoOoj/PGK58eWtrtHEvirm0t
5tljxufaLe1MiW3IraRjbSODQe2XPrRuuYp2rhx0s15z8a8K2jcPlQ8d3eJL7boJLiI6SUSR
oCwVwAdQrSozxrrwcx33fDPj/aPl2/2DfbqS12CAq8E0pqPcZEdI5WUVCeoiowXca5njZ7Tw
z4k5lPfbZtmwXG0Rw60j5Dbys1q8gOkaDIxDK3UVGDGPpHnHxnwfZrz5SPG92U3traz3MDaW
KCT2NShgwIamVcjjd8PHs9U3yhxbbeL813bZNvkeSztWQxmU6nQSxrJoJFK01ZHBIuJ8smiy
OwoAadKjviax71L8Ebdu3xvsW+7BG8W/SRxNuHrMkcqOxWR9BrnH1AWlRjEZ6l3WzX4E+P7X
f9oV7WS5iuLWUXMbSMI3niRKTih1KzVOVaYYb6yfMeNfGOz2t7+v4BvNlaQ6kG8Q5qudA6sX
YZnpq64cjnvuY+fpPaMjrFUIGJXUBq0n7ajxp1wOsuvR/grgmwcx5HuO3b1E0lulkzwlHKPH
JrUa1p1Ir3wb6Ma/gPwH+q33kFpyrbLyLb0iYbXuMhVWMiyZSpQkV0UyIpjXWfg7400Pxd8e
23D9m3J+H3XIbudQs62jt7moVrK6mRF9RWmWCesdXz4UXB+E/FXKedbnZRcevrSKztS9xtm4
s0bRXAkADR6W19MqMcVi5yxYX3w5wTfrO+sbPjt/xG/t1eSzu7pj7Fy61qtC76gep6HvivMn
wz9Pz8OyL4u+OrLiWxXqcNvN5mvYI3uRYO7Mr+2CXfVIg9THKmKRuvAfkMcNO/MvHbDcdqhi
DLc2O5U1RzBiGCLUuB5McbvOOf8AO3f8MoQrZkBmHTvQeFMDq+mvhn4fS24zNyK8sbDdN4nj
1bL7jrcW7QSICY5B9qMWqpPVcY/K65eaWXxZvHKeb73aRWCcctNsZZL+1dmmFoJK/Yv3OlVL
ZH7cbp5/9fXbv/wBuVttE+6bFvNhyU2fqvrazf8AnRxU+8KGfpTNeuMs671/tqvRZ2t3d8ks
bK0vYo5bSWeNhGWlQPoZiwCsK5D83bwxS0ddXcea85+POQ8M3WTbd6hVZnUPBPG2qGdCaa42
NDkeoOYw6Jv5Zi3ikaYRoNTg/acgK/XGln4e2bV/bZfC223cNw32wtb64Ec8e13LEe5qIYJH
KSFcsuXpBoTjndq+lnTV/Knwbte7c0gXYII9ntFtBcbvKg1RR1YpHIsK55afVp7Z41Phj/nP
trDci/t93rbdouN12bebDkEFpHquorLOaND/APaFQz6lHU51AxbY6xbL/atu7WW3XFtv9pMl
ykdxNG8cglWF1DO8ahjr0aulB+3Fe6OorG/tw5PFvKIb+z/o0qtKm8w1mjZBkf5QIcMv5gKi
mDTLUHKf7e932rZrjeNn3K05FBaASX0VnqE0cfUuEq2pVHWmffDIx11Z7+Fh8AcG2XkTbw93
BZ7mi25gm2a4DJO0UgqJ7eU0VDqyB/wxnqeun23nxmeB/D+7c2n3obVNFbPs8yqYLjUWkDu6
0quQKBP/AKsX3tc+b9psWXMf7f8Aedg2WbdrDc7bfbOxOvckswRcW6nrJJGSzaV/MOo6064Z
/lX7T/w4fj/4Q3LnXGb/AHfZ9ytReWMrxJtUuoSOVUMhLg0USV9JIph1q/GxluH8d23eeSQ7
duV+m0QGQxPPP/4q10lWPQMSMmOVeuNdTDxftNjcf3A/F3GeFbjtMmwu6W+4wlmtZT7iq0dA
WjfqNequnx6YJ8OHVzv6/t5HoAdQR0IqSf8AHC6yN98cfFW5c1W8uI7uKw22zKpdbjOKxxyO
CyKyAg6SPzdBjFrpfHqsvwlY7b8S8nTczZbruMbrcbPvFmwr7ihVVAw9SqWOlk6HBPL652Wz
1Qyf2t7o0Sx/+wWFtuZAb+myh1YyFaiMPX1AtlqUY1LdXXNvwycvwnzFNg3zcWhWO743P7W4
bcxAkMejW08LD0FdJ/EYZ7WJ35/4VUPxnu1z8eXPNbWaOSwsbgQ31uVpIsTAUnUn7hrYKVH1
xnfWuuv9djd/HHx/tO5/FXK90uYYdzPtO8ccf8u/sZrdC6yK59JRh6iB1GXlglur+kznWW+P
PhrdOXbU+4y39rtW1xv7LX84JRp6BtORGhdLD1N1OWHbfg83/WV08q+CeS7JcWP6O7s98sr2
ZLRb6ycFYZ5TSNZga6NXZumLbGtaSb+1ve1hdbPkW3vuVKjbJg8crShamPVX83QHTQ9emKWj
rfwwHGfju93jm0fDr+ZdjvzK9vJ+pQnTJGpYJQEV10otDnjPWxvnqdRW8/4PuPC+R3Ow7qyG
5iUTRSxZxPBIfQ619QqQag5jHViX9sz7Kl0IqFPQkdfpijT2b40+MOKycSj51y2YtseuSFrJ
WeNSUIC+46+urn7KZV69cHyb8J93+D9v3GbY974ZuMh4tyC6Fqn61P59nPK5UK4/+0T0kA9R
59cc7/hjnmy+3xodw+H/AIxvtwPAdtup7XnUSNKu4Es8DPGnuGK5jB0rqTMaB0oa9sak/K65
mqTjXw5sG0bTd8l5+abdaXM1jNYRO2kSQsUzkjzJen8unp7Glcano/HqDc/g2w3G749uvEL5
/wD1Tktx7NvLeAi4s5jqIicfmRgjafA9exxdHiXn5+F7efEXxrum5ycH2e4n2/ndlGzC7ctL
aymNdTLOlfQWU19P1Hhgkw3/AA824d8WX3Iec3fDbm7i2jdLP3o5veHvAyQnogUqG8s+mHrY
ubodv+Kd5b5JXgd9PHbbiJvaecEumjT7glWmfrTOh/HB1RxNel3vxD8Zb1f3PCeP3U9rznbU
aU3U+qS2maMDVHMBkjgMPs6efTFJgk2qzjPxFxTYePPyr5ELvYs8tqLCBmqs0btGUZ1pVyVL
R09JHXFjUusn8mfGFhxqPa+RbBNJfcQ30E2DSgCeCRQW9qUfmyBo3lnh+fhnLKfmnwpvXHeF
bfzKO9g3Ta772jKkVQ9uLjOPUSSHo3oanRsP8/8Abxrurb4F+OOM8u3Z1328Vo4Qf/yUH9qe
UsDR07lF76c6+WOXUutTnxFxn4y2C4+ar7hV9ds1hZTyxwO5CSShKsiGnj0amOtnjXGZ69Pv
fgv475HFf7RtWzbrxveLMM0F9Or/AKNpYzSiuzMssZJ6gDLPGcxxntfMV3AbeSS3fOSKRopS
OhdDQ0/21GRw9cuvN1r/AIp+O/8A3Tk0e2if2beNWnnYDUxSMVKop/Ma5Vyxi1vPy3u6/E/A
uWbfuM3xxJJHvfH1YblYXBf9Pd6agvDJJmj6lPl288MmeOF/ceGFZM2I0lcioyAp1GCzG5dj
2f8At/4Px7lc3JNt3m0juoZdv0xPSssUjNlJEfyuvUEYvy1eZedcHL/gHkOxcel3qyvrXfrW
1XXfC0BEsAA+54jmfOmY8Mb7t1xkuerLZP7bdz3HbLe7m32w2+7nRZRaTFmOmRQyMJFNGBBy
K45y1qysFunGbzi3LV2jkcLJLZSxm7t0NBPAT90T0zWRa0b8Mb+vmnm/tsfmv4349xre9hj4
wjw7fyC29yPb5SzFJdSgaXYkj3Pc9QPSmHmTDzPV5x7+2/eoL7bbi93Gxmube4i/qGwS5PoD
BnUOx9a6O4FGxmyq/Pih+RODbLafN9xx/bJYts266ltXX3R/Ihe5QO4FKUSpy/hr4Y1ef9NH
GXrHR83cFtrP5F23atrsVsJ9ys4PchgotvPdM4hLwr+Ra01VpnnjU/8AU887VlD/AGybxJau
p5FYxbmCVO3SBlYSj8mon1aux00PXHLBbVlwn4ls95+O+S7Xu1qu38h26/RY7uVatbSRAA6m
XNoirEmmRGeKc+td+xh+f/D298PtINx/UQbvsk7Ki7tZmqrIx/8AHItW0/7WBpi+rG2LjbOC
cV3n4T3jka2stvyLZZizXCM/8xKqVjKE6aaX60ri49rXUVPxz8PXPNNvub0brBtlnBKIlklH
uFpOtGAI0VXoT1PTD15WsyOT5G+IN94OsFxLPHuO0XTlIN0tzRC1KmKRfUUbL050OGTWNS7/
APD9zt/x7Yc3sb2K6264p+rhFRLCWbQAzEnUA+RFARg5mrqXWBoVUlKMFOdcBsbH4h4ntvJ+
Y220bncCCC4R20GlXkVS0cf0bTnTDW+b49a37g3w9c3cnGJdrvuI8gkVhZ7pcB1sveWug+4z
tE8bsKCtCw8Dg+uOXXv+Hk/J/jXdNj47svIJZUlst5eSNnirSKaJiAravuVwjFSMOa1rQyfA
HIRuu1wRXkTW27WI3H9SAf5SKqmRShqdSe4KU+7BhdHJfgG52vZp9z2jkFnv0lpG00thbgLc
GFc5HRQzltK5laYvrWLK6tj/ALff1+xWl/uvIrPZ2vY1ngt5h90MmaNrZ48/FR079cGVqzFV
Y/BPJpuT3Ow3k0UFvaItzLuUAMkb2jkBZYVGcgzzHbDYZfHbzD+3zdth2Wbd9q3aDebW29V7
DGuh4oiae5pq2tV6t3xYzPlY2P8AbiX2y2u9z5NZ7ZczxCRLadfRSQVQB2dNVa50GL1u9Z8K
TYPgnkW4b1f2G6zx7dDtRjW8uKGQH3qmGRKU1REL92Kys32Ob5D+GbnimyHedv3a233bYZAu
4PaU1WpkOmN3AZ/QWyr44pzrE2fLzjQVQFs+w6d8WNVEqitFzoagjoMQSq7KQoqVXoCO/jgM
NVSwDqAxPpPbEafUxcqrA0qGrmP2+WKJCX1OSGACkZdPphxnU1WaU6vSlMgeuIwlK9jkOpHX
A0A+1TUCSfzVHSvTErT64yfbL5dQB1r3wsyiqAAwX7egGWQxIgFLVJNG8BUL5Z4iThl0KDrG
Y1DI165+GBH1sCApNR3xExNRRnpUflyoMNrNDRwo1MD00gdf+DiBRMAzVOWZoTlT/XAYIaiq
16HIUOX78JCcjVj3oy1oCMRwVRrNHAJ+/viGiVNCtob7jnlgMpvcUoRk3mcjTviFpkdlBcnL
oD3oexGJHOhhrdgfKvQ+eJHjI1Ds9f3dKYmod5WBKnov3VyrX+HEA6YSSTUqQKoB+btgCRfb
VSrHST0UjM+YxNaFPbPuUqoyAJz6eGLBD1Ji1dGrTw/54pDai1flNBRalevXxwgQWsbEnpTU
TmB+GAkVOSjNl9QCnx8MSlOAzUND4UbLr44DaJtGoK5JH8QoPp17YlmlIhRBWvgv1wGxGyt7
mouS1KFRniGDYrqz+7Sa/QZ4R9hCBXzFAgFNPY+WLSjCjQK/tHh9MISKDkqLUmtWrgMLWSpB
yKn8MuueLGkXuKzKACASSH6Vwxm+j1vQaiMzR38D2piZwhIhUrUa1oVp1OI6YqWj01oSaVpX
FpzCNC7CmoEUyyoR3waLQ+36ipJFMwD4YapUuldQc9AKVPUg9sBRAp7nuVquQAHWmJaIM4Wi
rRSSNfanlhAY0VSK5uxp/FT/AFwHcFUoVIGkjtXqMKJpK0C5GpAqDSp88C0bKqSe2p0kipUZ
/vGLTsOpGjKgK5U6AeGDBajq6jS2ou2TD64dGGcyawV+1QRTMHPrgUGVWgUZ5Dp3H4+GHGt0
CSss+l4ydIIFCBT8cKlE7FSAtfcYE/8AHnib8U+8K62y1oQx9TEk5jGuR2zusoxZD165dsac
j6j4HrX8MCdHuRm4T21IzFM86fXGuS94+I+c8Y4tF797skm57gAUW8/VlGz/AC6CNFDjX9Jr
F6ta3lvyXwnkarO3G5v1KUDCW7KAqPuU6QaHzx5+Oprf87b6trL5n4LDsybRDxe4S00FFhW7
BUVNT62Iatc8b6FprT5x4qLA7ZLxYvYgaEia5Vn0dPWSMj+OBGvPnTYqRW1jxx/0kNDouJlN
AMqIF19B44PFjli+drgXcxm2s3O1TDTHZGcaEB66tS+rD4pEHGfkjhO17ncbnLx6aOeViIzb
TqQiHqoRtAp+OK1WOb5A+QOI766Xe37NdRboroqzyyq0ftrnmgLd8Z5slSTdfnHddx2WXZ5t
n26MyLpa4CE5UoWVKaVbDaZ/l5ozBpMxStWcr6a/QY1Pg3xr/j/nB41uJuP04uYJF0zQodDB
a56XPSuC9D5ba/8AkP41vTJdzbLuM11ICSktzSIP2zWT/LAMV3JPlna9540+1W+1S2UwCqj+
4rRhV/KMqmoww8y1TfH/AD2DjFzJPcWrXcUo0uisEkp41IocWntV8m5N/Wd/m3SK1MEbsCsZ
YMdK/aGIFMMrEehW3yzxPcdotbHkG23mqALpezdRGxQZGhZSMFq13yfO+wxXEawbRcyWaLpr
JIglFBTJRVTTwJxrE8u5bv8ABvG/XN/AhihuDVUYgkDzHniWrnhPyUvGLea2GzWd+JzV2mAW
U5Upro3p8jgxM9ybkA3vdZb9LOKwWY5W1uAsS5DwxczGp4roh4mvao8BjQeq/HnN+Bcas2/V
W24LfzClxMml42I6BUJXT1xn5Frh37f/AIx3Pfk3OG13Z4nbVdBWjiNRkCmonLxwwNdufyZ8
a3+0x2Dx7osUJQxKkaBv5WagksVxI1x8r/HW7RW6X+3XsRtnV0MZRnVl6Vo4rgq0N18ycSl3
NHfaruaGEA2903tiQMa1IQk0/DEtcMny3s+77debdyOwlvoSSbZowq6R+WtT28euFD4LzX42
49byKf6jBdXFPeDIHj9NSAmjoB54rVarNzs/jbkm8y3kHIrnZxP6pP10AZHPgjVXTT/di1O3
beD8Hsb+1u4+b20ntMriNo0BcKagMQ5pX6Yzo9bflnIfjqMWFxuVzreFxLbTWLe8AR/EF1UB
8xjRqoX5n4vJuTxvb3R2/SVF0qgSah4xk1pT8cSl1lt6u/iB7e4lthuN5uDkumtnhVWbOtSF
FBXpngkS24By/wCOeP2MkbXF7Fc3B13AmQPHXsEKDp9c8NizGZ3e6+O35RFdary+2qRzJexy
KI2qxPpUjSzD9mCReOjkG5/GdpcWl7xS0mivYZBJJG5kSMhR6R62bOvfDF5Gk3fmXxjyaG1f
env7WSD1PBGhILEZjXGGqP2YrVrH3e7cE2zksF3sO1yXm3wNruILxyUlJHZXBI8RqrgiScx5
bxnkEMS7dx9dqu0YGS5QxqWFPtCxgA/U4U1fA+Y/H3H9okiuNwvIbm5FboTRalDkUIjMasNI
7YlWcjf46TlH6v8AqW4PY1EqTRwBGWTVXS+oamXT3C4lK1/K+dfH9/8AobuDcppLvbX962t1
hfQ5GWliyqFr2NcUFKXnXx9uu4WG8XN7f2d5a0YWYicqW89AIah71xVrU8fy9xS4vru3uBc2
9nOoWG9VKluxLR5svliSq3Hn/Ctt4hcbJtF3dboZ1dFaRHUrrObM7qK/QDFIx1We2nmvB4uO
iw3XjMd5fRqypdro1OxzVnY6XT8K4sM9WPAPkPjdnsNzsG7iW1hnZzHdwqZF0yCjKwAqKdss
LdaWP5I+Ptl2WHbtslubk21BDGY2BYk1NWYKB44MxlC3Mfje93+15NJu1xaX1omhrOWBjlQj
1aVPj2OIZN1R73Z7L8gb3LudpvlptAUBEtr6oldEFPcALRjS3bPFS7OJfHttsXIbTcX5Ntd3
HC9WgRljdu2Ta2/ZjM1NLyLceA7ZzK33bc93aw3O2hAEDBpInRuh9Ct/jjS2RVxfK3CN3G7b
ZeTT7fa3YMcN68ZIZCKM2gBinlqGJbrC8n274ostsi/pe6Xd9ubMAZ0Q6NHfUrqg6dKYom24
nyP412ziEuyDkDRpdLJq/UwNHJGZBRhpVSuX1OJX1hdv+M9q3i5n/pPK9vlhikIDXIkimcH8
2hiMvMYNok8aTi3x6/F96tt4ueQbXcQWrNJLFFKEfSBT0ajQkeeL0xlvlLkey73yj9ZtU3vw
CNQZCpQA06Z9aY1RJ603xr8l7BYcdl2HdNwk2q8Zma33EBpkGroKUYqV88sFatWd5z/ie3S7
dJNzG93v25tVzFDEhjK50d1CpQL4KTgjH2iwuflz4+9rc403Qt7uiS3f2pNMlKelKgVP7Maa
1Pu3yd8cbzt9xsz7ybRNwtipuWhekdciGqKA/XBuLWX5Pzbhm0cAi41tm7jdrlNKwTQpmCr6
9Uy5ADsCDhgt1fWPzFxCfjse5Xu4JBvdrbsk23MrETMBkq1U1DkZGuXfByr1kUvx/wDLGxSb
He7RuO5tsO5TTSzWl/KBMumQ1AFQRVOlGw1fhUfKPLNquOLx7facxn3y6d6zQxRRCF9OYMhV
VKUPQAmuLmi++M7c718Q3HCDbPs1xa8lgiCJeR1YPKP/ALVn1faTmQVxfk915lIQuTUDVzAN
Voehpiiw6MqnUTmOnjhUeqbXznZbb4evNoj34228xNIw2+WL1MGcE/ppVAIyNSanuMPM2j+l
8Yn483ew23mO37pc3Um32kT+5LdxKXcAgmulga1PUYrMPPvqx+Xt7td35zc31nuKbpbSQwpH
dwr7akKv2lD+ZehxXnYM9aOLmW2w/DK7RFv7W27wag21yw6ZtJkrpt5lp6dPU1OM8T1n+ssn
jGfGO+WWy81sNwvruTbrKIt7lzEglK1B+5CD6P4iBjfV8dP59eJ/l7fbXc+dXd9a7hDuUEyx
CK8tVKQsqoKdSfVnRu1cYtc719f/AMtXNzjbR8MxbTY7+E3e2cI+2yR6ZgjMSUt5lpVKGpJr
g49dNY74r3zbtk5lYbpuW4nbbSCqNdxJ7hDOCullINUNfV+3Gumof5Y3i33T5A3G9tr+Hcbe
b2v/AJlqpSJkEYVaVJzFKN54r8OXN+Wwn5vZD4Rj2/b+QpDvFmNElhJGY7gB5CSsEiEfyypz
qD50xczau+smxjvibkW3bTzTb9y3HcH220Uv7t6EVxV1IHuhgfST1PUYupW51LAfKe8R7nzz
dtwttwttwineP2Lm2BWJgsagadRY1pk2fXD1Phy/ldt/8thc84sIfg222vb+Qwputnpin25k
MdyitISywslNUZDVqQfrg5lde+axvxLyGz2jnNhfXu4tt1tHqWS/KCYDWCNMit+U/mbqMFh5
s+AfMO5x7l8hbtfxXlveQzGKl1aV9hwsShStScwMmzOeN74xzzltYxJHD1BqVOqmYBPWlR44
y3K+ipPl622b4g2NeObpCnI7Foobrb2UM4Sj6wYm+5RlQjBh79+G3s/lThl5ecb3G43qANJb
yRXoP8v2riREylQ5x6mBArinwyynNdk33dRuX6L5Xs4re4L+3tcs0awFH6QvJ7jaVIy6Yzl1
y/pz1Z5XzPKFjuZIwyStA7RsyEFCUNDQ9x4HGrGv53x6t/bnyvj+wcslm3y+isbe5t5IraeT
0JrZ1IV2PSoU5nGc9dfw3fxt8uxDnG/WfJuS6trk92PbHuHX9NVJCQRIBRax9KnPGr/hmTz1
oYN527kHAtpteP8AN7fjl9Zkrcu0iK5oSCjI7ofMHvgZkUfCL6y4f8hXt3yvmVjvD7xYiC33
VJkPrhcERzaaiM6ftqc8UlMqu2H5Js+WbFyHhfKN8S2uNcr7Lvzye2jjUdEcjLpHp/8A1h9M
PxRzuetad4h5LwXYIuOc6tuN3dlGsV9qaMO5SMKyFHaMihFQcZrVfP8A8s7DuO27/wDqN15J
a8nmv49ablbSK8hEdF0TIpbRpFKeONTcHHU3HnwIjlUqc+pIGVPDG8h17/8AC3NeM7d8c8r2
m/3KO1v5kmubSCRzHrDQ6f5JJpq1/lGdccp8ju+efKq/t857s+x71ucPI7prY71bJBFuMrNJ
EksZb0zOSW9WrJjl2xqjm+ZW/wCJ7Rxr402jkl3ecr2/cYN2gb127oJUko+gCNXdpAxk7YNt
rPM+rFfOvL9g3jhPCE2jcIr0wROLq2jerRssUSfzY65aSCMx9Mak1qzbHj+7ci3rc47ZNzv5
r6O1T2rT9QxcRRDoqVzAHhjer6e64oWUOCyjS3qqQRqp54xbrT6ivhwv5R2fiV7Bv9tt0fHS
F3GxuGjjutWiMaY2chRUxVVqEEYzdkPU91p93+S+E23NUSXebc2m8ba1pFeRuskMMySMaTlT
/L1asicO+MflluLbLxv424/yae95Nt19b7tbsizW0i61lKOFBQM7NrL5FcZ91W+NFs3yJwVt
y4ZM28WyRybVNbo7OFEcwWAGOYkj2j6TTV4YTKfjnOvj/wD9TsILjd7W3u7a9miil1ittdtL
K0TsAf8AxuD932muZwp3vyi1tuPbtYb7yPabnc7uzuf0f6JooI3URMMqu51VPQn6VwDLnrzr
+2fath2uB+ST8gs1kuLc2s+2Sv7Utu4cNmZGAYFQKFRTBfaeJ9ecdnAY9o4Lv/NNo3Le7Bpt
2tnvtpuop1CyxkynSTWiSI0gGmufUYc91njnOcZr4X5LsEPD+eWF9uKRbnf2M0kSXDBJZAIJ
UbS7kBzqYZVrisumXYoPiVviOXjV9DyjcbjYt/kX/wCLukU0yf8Ax2UBQEiqnpaupXGYwDJe
WJ4dsVnyLkY2qbdrfaIrkyLDuN0paJyD6EIqun3PE4111cxvmePdv7h+J2298Z2zdNu3ixml
45aMtzaCeMSSIFXU8VWzK6K6ep7Z5Yuf059cf7Tp8wFg1HU1pRjXspzyxrGo9z+AOS8cTj3J
uI7hfR7Xd8hSljdTj+QWaIwsjNUUfOudMYvla3W4ttq43wP4q5Fx7+vQbjuskRvWhWaIB1Uq
P/joGJ6L0OdfLBaxJkbkcx/r62W78d3vZ49okiUzw3qVvUkViXRayRqjAZUYdfLFPWpXle1/
Km32XzFuse8brb7zsm8Rrt13LBG8UESAFUMsbag2nNHoSM6g411LGebK5/nXkmz7Fw+y4Js9
0J6opMtpIhSXblZtMEpQmkivpI8hXvikGZ5Go+FeMbVt3x1u0Q5LYXX/ALLa0il1CNrdngaM
xyI7VqjNnjPuul95xn+Aw7VBwzk3xJum9Wdhvk8tLS/9xZLOaNkjo0MgIBYe3mhIYVw/FZnx
ii3DgO1fGp2repOTQ315FuMMt5tlo49u6t0cN/LjUlvdipqPuek/lNcHPP7ZuyvcZeUT7pPH
vWxb9sf/AK/NGjo0qVvRpB1+p5Y1BU/lYCmHGtvzHzhPyDjG5/Ndxf8AL72LcNkkn0zbjt3v
QRhVSkL6f/IApA1lT2yxvvnyNc2WMz8tDjJ5rdyce3mXfdquESWK7mkeZ4zpp7QkkNXVex/D
GY58S778McKoalw4Oanp1wutr3T475RxzkHxlc/F247gmx7ldzF9r3KUe5byMziRY3Pp0NUa
eufY9sY3Ge5sanfeb8Z4DsGxcTt74bzuu0blb300cRWgSMl5QzD0Zh/5ek18QCMXMwS/hZbe
OGpzXcPmWHkls+0yQazYMBHPEwhEMiSoSX1aVoukfdTqM8XyrzZdUNrz3h/yNsG9cPN9/wCv
XO47i247PeXagxMHkEgjl0kKj9ctX78dLMM9dO/c+4rwHaeM8Ttbht5vtj3GK/vVjaMlYVV2
erqdBYmX0U7ZNQ4Jy1Nq121OG2HMN2+YY+TWs+13MXvJYHSk0RMQiljkUsX1mlEUL93XLPBu
+M5leTca5B8f7p8ubnvfJDPBsG53M1zZzl3hktZHoY5JWhbUCMxkcjh7tuDnnNR2/K+PcS+Y
/wCr7Xfy8k2K0n91dxd9U7xyLR01tnJ7RagY/di65a5/pHru0f8ApOy8t3f5Z/8AZ7W52q/B
kitU0pLGZlVHilWrOXqoCUUZ9cs8Hz4vtjP2vLeIfJnFbrhD7mOPbh/UZL7bJ7tQYpomlZwh
JICuFkIoT9K4bcSl+auRcZg4fx/g223j31/sVx7s0yAFEUROrLK6+jWXky0VBGeLnnIb7VTz
eX4wm+KNsTje7zwb3G0J3HYWnmdZXoBMzRyH219tv5iMvUfXBxMrPdz4d/8AbjxywuuS22/T
b3bWdxtMjM+2TjTJIjoy643JA0+rGettdPwi+YeLts3ycN6G9QJY8iuzNb7jbSAm0IYavdVG
1+gEMGXr9ca6uxjXsFjyjcuDbRc75zHmlvyOwWJEtoLRYVd9R9OhVJd5W/LnQitaZYxNTwf4
y4XxPnW8brbb3u77PMxkm2uRTEusmQsBL7lQSqtXRUVzzxvrra1feR/EXKbD48+SJhvUy3Nn
bvNt895bepBVyonQCupDSpp2xd8ZR/PrZ69O2L/1D4u2ffd93Df4N3g5D7i2SWIDs0cpeRER
Ks3uDX6tVFp5imH5o3I+ZGMhIZiKuxFMq0qaE0wVqPbv7Xt+2ax33ebXcb1LCa5s9FtNK+lW
KtVqM1AGVc6VwWNf/Fttj2nbPj7g3Kpdy5JY7tHvsTGC9hertKyOqo41SMzSe5Wo864fmuca
3jfIdquto2u+47uO1DjscMa3W23IH6i2dQPcWGrqFp10sPpgaeYfLVnsfMvmHb7GLeba2trr
bYDbbjrDwag8jKjFCKajkM8bvnLMs1ofnfjbSbdx3kNpuNrcRcWjVL+0WVBLLEGj1PDVsyNH
29Ti/n8Y3OpLqy3rZuMct5dtPyPbcngXZbaGIvbIyRzoI2L6mLNUUP3LprTpjF+MYjEc12va
uXfP1zBDvVtaRG3tZba+bTJC5WFGMQcHTqcDKpx06v8ApjX87m1qPnGwii3/AI/zeK9trux2
YQwX1nFIhnAWfX7qrq9a50K9e+Dn2Yp1lb5+SPvE0W6bDvOx3G0SBHCzr/8AMCj7lVjIqq/8
IdRQ9cYoYvZt72fkm3/IOzz8os7O+v7sLFucDGJDDpSKN0DMrH7fbko3WudMasxq/DP/ACHD
tXEvhscM/q9ve7jJdJNatbAUmT3A0n8tWcJ7dfzHPti4uXazWg4VwnZR8S7nx+DlFlL/AF5F
ljuqqhi1KtY5IzJWo00PTGJcutd1W/DF9t23bRu/GIb+w2zl233be9PclZre4hBA+9WQSKBm
KNUV+uNdfOq9bHR/cFyLbbn40t7GO/srndIr+3/VW1s6t6grHWqRknTmD/jnh4Zrzjf5vjOf
4kto7DcpbDlUTp+s2j3JmS7fX6i0bakA0nUrjpShxj+eWnrzHlXpD5Nl1XsSO5OGxVqvjKXi
sfLbY8orHtD6lkuldo2gkK/yZlZKMPbehxm0yvpZIrKz47d2XMuX2HKOK3UL+890IkuEhKkr
Iro7GQjyGquYwssbttjxj5C+Nti2Bd9h2u64/Oz3EVzp1yR+tUKF2QepGybseoxv7ZWup+Wz
3Dm/Dtk5Pxem6Qnbls7ja3mWVJWgkX2wnvaK0zjpq6V8sZkyDNebb38Qcf2Ox3bkD82tveKT
T2U0DIjLMxqiyaHZ5VkB0ELnnXph3azPKvLjY+K/K3CuMLa8jh2ebj8LQXdlcLG8glKIhDBn
Sg9FQwB64tkb/p7UfAN24VwTnd/scPIBPb3dmkEN1cP7yWd0CD7MkgpGUYnUpXLsaHB9cE+M
ab5D5HebfwzeY5uRbRcPdWjQwLbxqHdnyeLSkkjeuMnQ3QN1w8/Os2g4Fe7Je8SskTf9v3fZ
EiWJtr31Y/1UDhaPCJiw9Or7Kocu5wXrTZjJ8bu+H7f8lbrDxvlcmz+4scdkbr/5FiWUkXFl
J7pWug0MbB/oTh6vinrr+cLHhX/p1xdTXW3QchTR+mm2pwq3jl6yQTwIWyYeqrVoe/jr+dus
185SGNqEkAg964xTIAINVc6g1p0/ZjOLC9xlkqB6OrH/AFwqHkaMrkAM66h0A8BgwmWRGQqh
00P2jpXCLQEaRWgBPUqKjECUl29DVamVaEVOKlMWKRrmAVrQ5VAPngKIqHAIP0NaCuACAcV1
01EZHpmPHCpMHSlQVArkanP6jC0jDKgLglqivpGVOmICEn80GukEDPrjK0xlYswLAgmgIFK/
swo4MY0grnTNqYkABcxkDqyHn5eOLRhyEoWCFgTRj1H1+mIjkGmOi0amdOmIo1Bcam9PYaut
B4YtByGDKFprP5Tn0xarBRtqPpNX/M2dCRiUhpdLHS6gq/3/AO016imJFrAJVcgDk3n2xYqQ
or0YgVzoRkDiR0lA9WmjHIEYCQlzLHwNOwHnTCNEQwLEMHD5AZjrgxQ9GClWOo5EUzNR2xIM
OasxaoOdDhaOVBzoWBABIJyPniASsor7a1oep64lSoXjIemfTOn1OLUdcnoaFV+0Hr+GJFKh
joMq/hh5rc6T7VNZRX0Nxe236y1Wmu11EBu+ZGdDjfyzX0md1sNw+OWuLXa4LKCSAhbYKrKp
XLIkDp54zYxqt2eKDifC/wCo7NDElyyq1xLKA5kJIFT3p4AYsaScvubu02JOWWbLa797aD9T
DGrDS+bIQQRn5iuLRY7uBcp5bf7DLum+XSjImJfYSJdKipaoxVCvdyGybTcb3tcMS3l6ytLK
6AhtVRUsBXSB4YE5uQ8C2nl0dldyxCC8lVXmnthpZlOZFFy/E4YsaKw2Db7LY5rCw2VIFjR0
TVGC7mmTajUsfM4DVJs/JNth2tdv27c7TYL+OQrLaX0aq+oHOqnTq1HvXGhU3O/etbbbtzhu
wu4iZI3urasZdG69MiPrXBIozfzTLPJx60aR2c1qxJNAMuxzqcOp4WQTXPUCen0wp6H8S8N2
Hkl3cpvHumOCP0CFioZi3QkDwxWJuo/jT4tvpL7bbGyukubSgaf35RRiKjJmIND5Yzict78e
fHPG9mF5vNpdbgUyedJGU6mPZFZQP24s0VheQJ8bLcWk3HDdtAraruxmNHIB+1HbUQT+OGcr
XqnLrSN+F2ckNzeLEPYYW0rrKoFQVDUANV6ZYSteVQ8Tutjs15D71xHJoWGGOqsXYAkArSn7
cGJl7r4b4Yu728qDcJba4UsLSNgxrToZTTSvlgxOy6+EuI3lo5gsr3bbhKmKV5xKD/3Jqeox
YtVe5/H/AMa8b2qKbfFv7nXQPcRyGpb/ALEK6RhwVc/HexcEeS/Xjy/r7ZwglW+jDmMkE0Vm
AqPww41+GR+QOJ8K2dZfeubn+tylnhtotIjVScvy0C/vxVl5bIsiMEJ1aOtPLv54oXqfwxxT
Zb+a63a9hFxcWNBaxyAe2Gb8+g/cfrhqjapt+y8y22/hvdshs5LOdoori3VVkBTKtaeHbGUa
7TZuH2+3bbYbVbXEV9MscjzqrOQTmzGnqbwxSLHXF8b8Rg5A+6RWYWQJrS1eht1c56hHSlfL
pixY5E27YeZ7VfR321w2cllcGKGe3VVk9Pf7f3dMWKFcxbJxNNq2qz2i0uLe/l0SvOil61AL
MSCWY1+mHFemQ+QuF8L2jkljLdyS2G0XxZ7zQAwjoaDT10rigVW8ca+M5tz2yy4xucl495Ks
c0a1kKIe4dlX1HsDXBIno1xHsHGbraeO2ey209lfvoeaZQzddLM7MCSx88OGpLH414htO8Xu
5wWQmKIXgtZ6PFEaEnQpyqe2rpikCo3PZNg5nwybeJdti2zcLX3mje1Va/y/ysABWo616YGe
tYbj/Gfim52FpN53+W03ga2micjSoqdCouk68qdDU4Ma2K348j4T/wCxIu+Rz3kJdf0UaAGI
sTTVMuTEHw6Y3Y1uvQflHY+HWW87HLc2SWG2yyE37WyCMuhNAG0DLPwGCRltINosJprWHa9s
2W42F19cgVTIFI/KApDH64KXiXzRxnaNi5SItsh/TQTwrM8K5pUsR6fAZUpgxnPWb4bslrvv
JbDbbiV4IJ5AkzR0LBT4V6f441K1j3CXbuFbZvW3cGHHrWa2ukNbohTItQaEuQXLkjrUYrBk
rzvkfx3w3aefjaN13SSw2WSL34rgUDxl81jZiGH/ANVMHwLzrO/IXGOJ7JdWsfHd8G6xTKzy
pVG9vpp1SJkdVcsPI+uMa6liF/MM6dvxxtPWvjz4f4/yDjB3zc90ubVFdxLGAgjVVAOoM4Pb
GOvWk9x8LcZ3ezhu+JclE0UsoikW50l1WtGZNIRi3cKVzxSYKvf/AM3jiCslo91urXBA/wDk
6EMRalNbUUr+/FbTiu2r+3i0W7vl3nc5ZYrMgW8dmq+66MK+46sGpXpRcCxn+R/H3xxZqk1v
yK4EMEgj3CyuYwl4sZyLQK6x6iD2pimmdN5uXx78TP8AG9tfMzW9pFEHg35Iz+qJY0rIqj1V
OWkjFmjusHwz4Z4/ve0X2+bju0qbNbSMkU0Mel3WP1GWRXDaRQ/aK4Zqvws3+BuO7habfuWw
79JebNdTLFNJLGocKx0l4zRa6W/KRimqeOi++AuBWNzBtt9y2WHdruq2cTLCqkjpWPP97CuB
ax+xcHj498qW+wb9uAjaKWN7O4giW4jlZqNF7iPXSrd8N5uLia13ybwCLfPlbZtmna3sYtxt
2d7mziCu3t1o0iMSpeop9MXsYm/Ypv7feFwX8eytyqUb/cK0lvb+3GQQorVkGogfVsW1rHDt
X9vdrbWd/f8AKN7e1gs5zEDZRiRdK0HuPqVjmW6AfXF9qccvyx8VcA4lxe1vIby6feZ6C2Zq
ulyMixpSkWlTXrikZs14eUpOSckqaAZtl+7DKcew8I+E9jvuFjl3J96k2u0YloxAgZYY1bRW
UsGqWbwGC7+D9JAH4O2Hdt5sbfivK7bcrC+dhITp/UwmMa3Yxqc6jvQZ4poXf/3BcFutxuNi
2jlU3/sNqrarK5iB6DPXpC5Z9QcZsowG2f27bIvHzvHIOQS7YsHuJuCqkehJInKVDscwaeFT
h9PSq5D/AG/2ll/T7jb+QWj7BuZAttzvqxCNmXUgYrkdfQHLzwfWqPXOb8bm2j44h2/YrfZR
Fb23t3FvuEaiCRfb9TQsxX1lvUNRw8wd7jwz4AsNuvufxC5uUt7qFGNnA1us8M7UOtHDGsfp
rTG+q685i65N8YXHIPmm/wCPWSWm0RvGl3M1urGIJoBZ409NJGr06A+WM1y4ntaPbf7eeCXF
1eGbkF1uFvYRtFeW6gQTW0lMnqta6QDUUzwZV38Pn3e7Wwg3a9tLC4F5t9tO8VnegaTNEpos
mntXGrF/Lq2erf484Le8z5NHskM8dv7sbyvJJ19uP7go7sfyjBXSPS92/t/4bbF9st+ZLDvB
H8mwv4Vtg5H5NZIK17EVweud6S7F/bxtkvGId+3zkn9NgKSG6CxIywushj0iUvR19OZpng9r
UVG8f27XFpvW1ww79bSbHu5pZ7tKPbz0htBQGhdx/wCOhofLDNFi+u/7ZNvlgmg2Lkksu6Rq
XW0vLQxRuR1Rn/L+/BdXqq2f+3/al4wm98t5N/QVnleIKqI0UZVigWSRyAJCVOXSmGadccX9
v899yO1sdn3yz3XZblJJxukDI0qIlNStAGP8w1GnPSfLCvV1uP8AbFazwXMfH+SSXG8QqWSx
vLVreN9I+xpPynzxbWb/AIeDXthNY3EttdDTcW7vFcKTWkkZ0uvgaN3GNUc9yxoPjnhK8w5J
BtDbjHtwmGsvJm7hescANFaQ9lJxm02PWN1/tjt5rG7/APXOQyX27RKZEs7u2NurBfyB+gbt
nguixZfFHw78Y79xLcn3O7O47kAEvmIaCbbpYwwcIwNG6V1EEZYm57PHg/NeO7ZsHILjbdu3
SLerWOj2+4QHUpRvyN4OtKHPG4xIooYUIqo1GtR5+WF0ke57Z/b7xyPYNt3blHK02aXdY0kt
YjEjQjWNSqJXZdT6SNQxjbWevlxp/bvvllzWHY73crEWM6e/ZXc7tHHdRhxrjSMer3dJzHSn
TB1bhn+Xovz/AMYXbOEi32rY9rbYrOIa56aL6xNQoeCmnWnY1qfGuGa5/wBC+Cvijbtu4rPv
rNtm57pfD3NvvAPfjiRkGq3m1LlRvupmMZnt9dLPHmm3fE248y+Rt+sLuC143+g0zXtnYf8A
yEjRhRWtYy3qWQ+s55VyHbG7ax/PE/JfgbYoLK5fjfK7fdt1tEMtztUqJbTmOPNjGpfNgPys
M/rjPuq+1cbV/bTt77PY3W9clawuLyNZUgitvdi0v6lpISpJowrXF66X/Cosfg2z2nnA2Lme
7xbfbzhJtou0XVDfUk0vESxX2nI7H8MFtE69yun58+H+G8Y9zdNi3K3sHZUd+OyuA+lm0a7a
pLFa9UPnQ9sUjHV+rwkoAxqTVagnxz7Y6Na9T+EPjDbOZX24XW5ySNt2zrHNLYxelpw1ahX/
ACgBen5vEY5223DkzWs3X42+Nud7NvW5fHsMuz3mwgGe3IZbO6jVSzNGjF2jc6T+zPrXD8Vz
k/MY3fPg3f7JuPS7Q533beQrG0N5aISsbvQukvVVC6sjqz8jh6/pcU8vqXkXw/Y8U5ztuzb/
AL7BHsl+hmj3hYyWjoaFZIqn26nKtSB1xfhv7XWj+fvi/hvF7Pj248Wj/Trf6onh90ywygIp
SVS1fW+rqDQ+GLmRjrqzqZ+Vj8ZfAO0q+233PShW+NLPYmemtnBZPcYEEkL6gg/b2xm+u39M
njyD5I43Ycc5xvmzWZeWztLp4rd5DqlC6VcKxFNVNXXDz44cX5n6ZNI5S2t1oOrHId8ssbdP
Xv3wBwjjVrsG5/IfIoku7GyaS3jinj979OqKNc/ttVXBD6dNK0zxm+1X9sfx/wCLn5vt/KeQ
bELfarbZ5jcW+0traNreQPJpWUnUuhUqoI8sXfW+Ry54sm1ZbT/bzyy93WXbLuaGwuTto3Pb
5SfcguQGVfbLj1IQH9Xpy+mM/e7mH73cz8NPt/8AbDZSbdbvPymG23G7iDRpHAZIWd/tCSll
1JXuB+GG66T4UOy/248ivN/3Sz3q+XabfaWRZruNBda/dTVHIqAqfbah9RzByp1xS1fKt5/8
JLx7YbjeePchtuSWdkNW7JEUWe1TUB7hjV3LRjo3cYdv5ZvefLyp41GonNm6eH4YW3sfw38Y
cU3jYLvmfJpGfaNmlKTWmltDqqh9cir6mT1UKr6u/ljPtuNXyO7lHxbwrkfGV5pwh5Nn2WO8
W13Wwu/tgRpVj9+Akk6RrDaCenh0wzI5W1TS/wBv3JIOewcXkeSXbJaTDkEMOqE2bAkS0rTU
Dky6vPpjHVZ4vUuVruMfBnGdhTdt95vLHfWWxXTW0sEQZ4WUIrJK4U6yrCUekZqetca5lvjt
OvGZ+Rfjbi0vDU+QuDrLa7E1wYL3Z7ps4Xd9CyQmrakqw9JJ8R3xrmeuHXNl38PH5CkZZQAx
X7TSmR+nfxw47oAorryIf7SCfxrjTF8aPgfFzynlO2bD736X+oz+y12KN7dQSW0EjUTTpjHV
xvn1t7/4D5ntsm4s7QTWW27hDYXM8JJcRTaCLjQaegCRSwBqMZ9cv+ln4bqT+0u4tyHu+QxG
whaQTXJgrIsSrWOUgsF6/wDkFchmMXrbzz5L+HZ+H2lnu9nuMO+7BfH2bfdbfSqrOc/bcKzj
1UOkg9iMsMtPNy+ttJw7Zj/bbeblGtvul3G6XK3vtiC4sp9axujPUmRFBpTwNcU9rX9ZL8I/
gn4p+O+V2ksu7X63e4aJY5tjK+3JEftS5jkqGanUUFAeuM2es/Xz15x8m8HsOI8jG37bu9tv
lrMHdLiBlLxMj6WiuVUsEYeRz8BjtmzXO7K9N3fh+wD+2tNytEg3WaGaK6hvxEtvPaTySrHM
jUqZUFSmk9Qa9hjn/P5b7niu4T/bzbb5xSw5Xfb+Nu266ilkvT7K64CkhUMWZgpX0+rpTF1t
ptuRybp/b7vFpybaLLbNyg3LYt+Yjb98jyCsilmRlUsuqgLKQSD5YGOb1L619r/bfwcTTzXH
K5b2026KSPc0jWGCa0lI9MstNeSkHWrUyxSNdzfl867zZxWm6XdjFcJdxW0zwR30QOiaNGIS
VfJlzx0vinTWfEvA7HmnLLXYbm5MNtIHlnKdaRjUUUjMFqU8sc7Wpzr2K9+N/ivme5bnw3jN
gdj5FsCPTco0/lSvGwjkhmVmLN6iKuPqPDDWZteabl8Icgj4vt/INnm/rUd1I9tu1lbIGezu
Vcx6TpJLR1GbmlOvQ4rMalW938Abtbb5xvbZt2trVt8t2cC5On2rmABpLb0atYYMNLjrjOXF
OvXpXzxw/wDonxj+j2rZLKbZLQRPPKpZLqxmDBRcQ95FZjRgx7+HTfEY76z1iPjnk/xLuNtt
WxblwSK85DKUtHvP5emaRm0q7szalLd+uMWZ8t9SWtBzX4W4zvXyFZbXxsrs1bWSffrGABmg
EZUI0MZy1ShsuzUrjeeBe8N+JeFbbxrk0MG7WnKtunsiZCyxmWCeFXYEe2WKV65EGoxmT074
5OG8b+Mb/jVk+xcUg5eGgjfcZPchW6tpWFHSVZWUmrK2nSMN5ZpbT8NfH1h8o3loYFuoJ9uG
5bTsc7+hWLlJYDqzcLSq1+2ufTFedmtc/CXnPwxxXduIbhukXGjxDdttjMts8UsUgmH3FZFh
Z1I7Z5jDP0zZVTf8F+HeIx7fxPftuudy3XevbaLcohplRJPT7sTghY9LZMmdeueD6+ac1x8e
+Cth2PkvJZORzC/2njscd5FEq1M0EimQPMgpXQqEOg+7tialyOHf/jDhPNOOrzLhludhtorq
Oz3DanA9kiSRE/UQAH0uBIDp6N5HrSCRoNw4H8MbPuNj8dbht1xNyC8II3mEETxtIP5cwkJo
VY5e3Q6c+oxTnPRmsJxeztuAfKe48X5HbwbttVw422/QorrLb3AV4pljIJV/UjMAajOmM9T8
tc++M38xcEj4Vze92a3cPtjol3tlW1SrDKSPbc9fQylRXqtMdcmaOaw5CachqkI+1T08vLGT
QKVqSARn6RU9R9cTNMJGdWkf7vy+IPbApTRuTlQVAHlXCUsUiBSzDPMeVa+WCgUaqAWJp451
NPocRRFwQHUnIdBl1ywIasghoEGthQE5nEDoYglGyy1D9meJacgFQEoB1FOv7cCLL2zFSorq
JPj4YUInKpK5dVp/hhjWnegDOEYAkaHrTp0xAlMgbMHrUgmpGFm0mYGrAfs6YDCRGIOVB1Ir
+HTATqFRlYip71FR+H+uLSOgAOXpr0GDUT+zrGRYimXYfXDjJ2GoswzHY+ZxYjFoaZgq/wCY
k0B8OnTCrTa6keimnMnsa+GAWiD0JBHXvTtTtiOkroNByUtkPAj64CNnCKDIfpXufEYoQsdJ
IKGhFa1HWvfEjSNQjU2odBToMIEVR8gaknIDv9cCMo00TVpataDsPDyxK4KgpSmonqWp++mJ
I20gEOo05KvXvniRAlTVSSRkT2phwFIFoXaTUVFR1J8/xxKJVLGMKrAqKVqa9e4wUw+tUYEt
Q5gr2wG4iIo9ZPXQ/aB+ymHQlIFClMq9T1qeuImMTnU1NNKVPamIHo+nNaAdCOpxIURiUUBA
PdicGKUPqZqsAq1yJ/dlhQfaAZqOQG6mnTPpgWHdmLDv+YMMq+dO2EUlelAxHjWlfxpiRBw+
pNIABqHr3PYeGJGqYpKNUigpTLPEjagkoNNSk1VKdfGuJqVOkcQj+09/p+H0wJFQsAARHGDm
a0xLR62z0UcHvSufmMSRU1Cle+erqKf88SlSayQGYeonp4Ymh0zB1aSO1NWAo1T1AB/5Y+4A
dsTMEHjckA5AECgrXPECjdidKfcDWrAdPphJ2jDZLRia55CtOuLQFWBcFmzP5+lR4CmLTBSC
PUSDkahmGZrgBgSgYhgY5BQmtRUYjo1cA0GdRmf9MSRhHRDSpqcu9B5DFpypKkekNqy9Xenk
MRCSq1bUCCcwR0+mGC0ygPSgoa0qczTEyaRCIiRRtVa9B3wHwCppBfUWUKNK0yB+uJYlLMyr
ISC3ZRX8evhiOoBIQ2mpdwczSlB/3dMKtTFqoWzFOlc8j5YNOG0sR1H/AGjKuIWHCsqBlFV6
eeCnmHSMqxqKjr1zGI2YDI6mIJ0k0pmf2DEoSiRlJAICZaehBPh+GHWQLqWQrpJDflr0PXti
OYlVwK6hQdNIGIaEe6I9CMvpzJb/ACxCwkZCD2LdO/1xEVAFqF9fQCuVcRxT78FWDQXGpyG6
fuONQW1nU+850FP2nCyVT/EPH8fDEHWZUeRSAPT26jCXrPxDyS02TdrfdbsSyW8FSyIyq9GG
YVm9PbGvwzZXqPyD8h/GvKNuRrW33JtxCkQ6wiRrTrqzaoHljjs105jyRzqZmC0UklTl0w2N
IgdAzXJjnQVoTgUTwXM8DrJA7JITRWUsDX6qRjpFZseicP8AmPeNhtZbS7B3OKTMSzzS6ojS
lEqxyxVw9lS8T+TNl2rd7jdNy2P+obhNms/vsJEr1HqDKa4vw62O7mHyjxLkSITxhfeRwWmk
nIbQOqEJTrg5gk1cW3zbxOz2X9BDxNxbRx0FslxqUilAKkE/vxdSM22MpsfyBxjbN7u7t+J2
15BcEFbeWQOYh19DyK4JwTmNXrXHyPmGwbxvkd+vHoLSyFPesIn9ppBX1VkjAoT5DD8fB5bi
2+bOEw7Ou1w8WmSzCaWtxONFKeqjH1H6nFjNVe1/MlhCJrW/2P8AW7M1Db2wmOuNfAs33Aee
DvwYq+afK+4b7ALHbbQbdtSEn9KG1s1BRSzjpl2wyxOm1+Xli4YNgXaCssS6Rce8SpBzY0p1
xq4pKn498xW9vtf9N5Btv9asVIMNHEboo8QcmAxjTfHPyj5l3Xc3SHZUbZbGM/yTGx94hf8A
f/pljXhnOzVlsXznutjtFxZ38c+43Ta/Zvmn0kVGVQB27YzazVXsHLOCxar7kWwT7luJkMi3
JuCwJOYDISO/1wjFzN8m7PyDdraDdWl2bYrYq8MMAFy7OuVJCR6RT+EYTKk+Uef8S3jaIbXa
Lie7kjJDF4yi5UAqWof3YxLDPXk+WrIEIepr/wAHGlj0/wCFeR8f2ncbtt1u47WKVQsX6iqp
rJzpStCR441+BWq3P5d4JtV7fHabB7q+mY65oW0wO4HVmPT8BjKVbfLXE982w2fJdkuG0kFh
bSVjamYP3IR9MOJnd25R8c3l7aJa8ae026LJzFKIp3zqRqGrLx/xwYNbG9+Wvju72gbfJYX3
tooEUNVU+j7fXrJH1wj7Gl+XeCbnt62+5bLcezBRo/adZDqXpQgxnBa0UnzltY3OOOPZ5jtq
qAz+4vv17Gn2gDwrh0a64PmThVpDObOwvneTUzGVgfX4Zu1PwxYtcF38l/HvILJYOQ7ddH2W
DGOA1Uk+NGRqeWDDiHa/lPhGxXUo2rYp7SylAzWRBIWUUroLN/jhDj3P5U47ve1zWXINnmmq
zfo7mFl91MvTXoa/uwC/Dy+T2tbANpVT6RWrfQ+eNKRtPj/n1zxi5ZTCLqwnoLmHJXA/jRux
A7YNajWXvy7xrb7CaLjG3zxXNw5eVrjSEDN9zU1MWwKhsflbjW4W9oOV2M0m4WR1209qo9tj
2JWqhT9csWrcQr87XQ5BrksAdn0mM2+oGfT2fX9pby6YWft6mvPl7im37XLFxOxuFurmUmU3
S6EV2+9zVmZj4dsWG09p8rcR3O3tZ+T7bMdysSWt5rQVjLDvQsKH61GI4zu9fKUu6chtN1uL
CCSzsSf09hP6wytX/wAlAav38MFRuW/KNpu0tjNt+zQ7ZeWcizJdqVdmK/aKKqen641IGkg+
XOFbilrechsLhd8sc4GthqiZh0YeoAHxDdMZ31Wo7T50L7vMdw24nZp09srCQbhO2utQGr3G
Glz7/wDKvG7Dj0ux8TtJ1FwrLLJdpREEmbEZlixxc+iKnYPlLj+28eG1X/F7e+f10uqxgOzf
mkDKWB81OH6qqThe48Qh3t77kP6iCJJA9pHZaSFYNUaxTUVHamC1c1ufknmvx1yfbIxFd3n6
+0JNqscRCsT1D6xSmA5qv4tvnxFsCRbrHNuku7RLUwyLRGlpnpVAI/pnjWVKe55zx3knNU3T
lm3s21JH7cdtAxOkDNS9CpY1zahwWaoDkPJeEbTvtju3BIZYpbZtc63Ct7BII06Vcl/GuCcx
S42SfKfxzeXdvyTcobmz5DZxlUhRDLFqoc105Gvi1MKrGXHypabhzdeR7ps0N3ZRx+yli1GO
kdJDqBUv9csXyzNVHyPy/jHILy2l2XZF2lolb9Q40fzDlpGiP0r9e+GcliWPc/8AHfCry9c4
R8pcW2jgV7sF+Lhby5Wb9NRNcTe4mmmquWfXBrXXUYHiPJk2LktturQfqI7KQMsRbTqUZGh+
nTD0Nle33fypwDc5otwbkm67aAFMu3RI6L5g6UbPzDYwNZbaub/Hk3J9wuZt13yxuHIW03xp
2d2jA/8AHJHpYaAc11IcNidfyT8l/H1/xhdvMkm/7rUfprxrf2XiI/Oz6UBy6qBnh5n7Z6uO
TjnyRwjdPjj/ANQ5DeT7TJApjS9jjMwcBtauNKtRq5aT+3FjXXpfHPyLwyw4/uXEN2u5YbK4
aVbbdRE2ho5F0nUnqKMPuFcEgvsXtp8n/GPF9i2/Ydsv591ggmUvKIyrJHq1NISyxq3kBiOv
Lvmnm/HuS8lt7/YpZJbf2EjmeSMxetSSAK0PT9+NZ45Tm/bXP8TzcNg35Nw5PvNzYS2sqyWT
6fcikYHUyyuQ7jwA8MFtdpMescw538Wvy7ZuWLyAyXW2/wAr+nwwSOZEYkkqSEoRXv1xQRmr
n5S4dH80Q8mSaWXZWtfbedYmDrLo0/Y2kkZ5nF1BJleqcI5Fs+73HIN12G8G7R3VyjGwUiJo
wsSrqCy6fvwUxhP7jNlFxsFtvc24SWU0BMUexTuml9X3NEEPX+I55YpLWL3j5s1KfRq9Na5V
/djbVr3rhHyFwbd/jFuF8k3GTZMvb/XAEpIuvXRWo2k1FCGGM/Bt1z2HIPh3g3Jtq3bjV7c7
1dRo8O5zLUp7Timr1qlH70TIjFjO7Wu2/lnxDacxuucRcq13d5EVfb3iYAAqKgBU1assGGeM
3zr5X4VyD4t3XbrGZ4t2nuzLBYTIysQ0+sNqAK00Z1JxuRdc7HDz75L4dvXxRteyWF07b3ZG
2Sa1kjOoe2ul21U0aT2OCfFgvzHJ8v8AyLxDkfx5xyw2y5M24WJUXNmyMrR+3B7ZJJGk+sZU
OLmNWa4Pgubgm377/XN9319v3K0fXb2zxgwyIyaTWUBmLCpqMsZp8eg383DN2+YNs5Hs3MIU
uJ6LJbKGVSYkCpF7npXTMMmrio5uV6jvtnc7vZ3O2ol1s1xOjJ/VoBEyg07kEkq3mo/DAOpr
4e5FYSbXvm4bWZo5DZzvCk8J1I4jamtT4NjU5M+FhwDctgs+TWdxvpvYrKMnTc2Mnt3MUv5J
lP5gvdcVmidPpVOb/G0mzSWvIeX2nI9rkjIW3vrcLdUpWgZQrF//AKQfPGfrVc/Irqx4Xu3w
jZ2u4bo20cdkkH6a+NHaNROxiWUZ5kZHzxSfo44d1+YfjbaJON7ULob5bbegimubdBJGkft+
ysjV/OtAaL+GHBrRbb8p8Asrma7uucw3tnM2m2tGUL7Go/a2lS5p4vTFjWsbecr+OOacQu+J
7rvw2JrXcHniu3UFZ0EzvG6FvSah8wc8OM/hQ7FyH4e+Nuc2Nzsm6zbvHc2zW+6Sxfz0i1kF
ZVZQOpX1RiukdMU5uCd//wAN5efInGo3vL9/kyM2Miu1tYwwwLNE1KppBVmen8LL6sGGvkje
r39VuV5dvK08k88kju4Cu+tyfcK9FLdSo6dMdPkcyN58Ic041xfmQu+QRa7C6geD39OoW7Fl
ZZCP/ppVcxjnY3r6P2v5U4Lt07ybhzu23K2uSf0cZCL7SdaP7aljTprb8cA15B8K/KPGNj3r
lG271Kbez3qd2td0UaolVXkVSy0qFZZKhqfXHbrj8xz48mPJOb7Ttmy77cWW3bvDvtm5Lw7j
biiMr+oA9QGX8wGWCSL+Vs+fVLAQJBU1AIb3FyINeowV03X0oeQfG/yDwTj+1btyFeOXuxaD
LFMqEylI/a1IWorI3XLMHHOfCuSqL56+Q+L7xvGwW21Xi3R2dg19PD6oShKlfZky1H0kMO2N
ZkZu7v4Vn9xXNON8k3zYrvYr5L21WwKXCoSPbf3dSq6no9D0xT4Ny13/AA18icS2ngnKto3e
/js7+7R5LSOUMFl1QlFCsARr1UFMZkX9L54w/wAU71x+15I1xu+632w3zxBbLebMhlhkHX9T
GQfciYZUxXRzJnnj3PlfI/jTceMXNtyjku17/cLGXtr6ziWHcFmpSMxpGz6vAjLzwzml27D8
s7BvnFdoaw5bbcWu7KJYdws72GKVnZFCDSZGUafSSCtfPFZWupjyX5j5vYXnOtrvLbfG5Dab
SyMypDHHHHRlcrHOmlZtVOtMsNjG5Xd8/X/DuZ2FvzPZeR2v6qK1SCTYZ/5d2VDsToXu4MmY
IplUHBDceEF1WmkEBe/Y+dcdLDXqPwV8o2HDL++tt0t3udr3ZFguJYT/AD4gC1GRPzj1Gvfw
xn6/ky7MejWPIPjH4x41uke0b2eQ3fIY5PaggK6gJFbTrQU9nT7n5vup0GM/N1z55+viWw+X
eC8A4jsO38VX+tQ34Em5xvKwaGmn9QZY2BKyNqIRQKGmfjidLZ8MRzfZfj/c/km2ng5jr2Tf
I2nuNwkDXUlrKfsimqV0oe2r7fzeOL8MSevRfmaw4lvnA9rh2zlW1td8bg1QwyTx0ulSJVKo
VY6ZCI/QKHM0xSfsd+e/pScG/uWW63HZrDku1Wq20NIZd6P/AJYgFoJNGk6aZa9JwWY39trz
T535Fs/IPkfcLvbIrYW8LLD+stXLpeFUB95j9uoD0VXwxr6szn3Xm5dDQaSqVrkcuvX8cTWv
evhDlux7nw3fvjLdbuHbJd5WZtpv5T6WknQAxNqIGpCoKiufTrgXfOzEvwvyXZ+KScq4Tyu5
Oz3W5E2Yu5SGhinjR4WQkH82vUpOXngnOMfz/wDX165cc0+PbG8j3V+TWMr7RtjWE9pDKjuw
bSwkho38w+j7F/xxGz8j2T5F4pf2m37ntfKbLaNmhjP6/ZrlYop9QzbSZGDJTwVTXtnha1k9
55nxW7+Upr7beX/0i4nsrZbHc4/bn26RfUXt7pGp68wy1Ip9cX4Znyrfm5/j+54Rc3N1uu2T
coiGvbr3ZyqS3Ep+9LmGNpNUcncsaA/vMHXM+fy+YWNXRq6ASNVe5Ap0xoz17L8Q/InGrDjG
78F5FI9htm/6hFusQ1mCV0Ct7qfw5A1H44zLl1rq+NZvu9fG/EeAN8c228PvMm5ywyXd9aFJ
FiiMqM8rMmpE9EdBHm1fHFJ+WOZ+F/P818E47uO1cI2+4M3GprdYn5Bbzlv0wmBSJo2AdlkQ
5vX7e3hh+uRqsztHK+FbRDyb4p3TeQ2z7jI89lyuOkyKZwp03BqASpUAkZVr0xrM9HMuZXH8
jci4Hx74uPxvse5jd7x5oriS6idWhUB1kZvcSqnUFoEGa98Yxmy2YpbP4d41ufwrccyje9st
/s4JbqT3XVrS6jhY19pQPTUZfdUN1GN83fD1M+Pl4pMVjdlDLqbofEeYGHS1fxfyS12Dm+z7
vdQs9vZTrJIgpqKj7ipPpqK1GMdTTxa+sJuUfHu4We6pFyiwSLk80VxbSPIFaOSJYwY5UJBS
vtdWp/qq8s183fKXDt++MN3tNm3eCXc0nij/AEwch20SDW0RFPdSlfUuRxcz1dX9Pmdd/wB1
udvTZf1U67T+oFw1mzloRKmQlWPL10J6YTLr6f2vZ/jlvh+/4Vbc2smTeYzJHfTtFE6NIVYB
4C9VoUoQTXGefk2vJfhjlu08J+SpjvdzHcWSCWyO52pMkNC1FmTLU8TUzIFRh7590cdbus/8
mcZ2PZuXN/Tt8tN42ncZHvGubNlLRI8pMkcoUsA66qg19QxW3FOfXvEO1fHLfD1xwi05vY6L
9RPbX0zRxMhMiyqGhL6lFUoa54IemD5RynYG+AbLjcG528m87buHtXNpHJX3FWWR/cjI++Jl
YEN07Y1zPWLbY0/HeVcc3rifx1tO2b3HYbht24ql0tQJoikcmWljmkhIXV0zwRuzXrXLdobe
9l3PbLNG2XcdyhktpNya3SSJvcWjJKQfUrjLV27Z4InwjudnNt+43djPT9TZTSW8xRhIlY3K
kq4yYZZHww9fLMutT8Uc5XhXLrPejaC7gjLx3US0VxHKNJZPFu+M46x7ltvJ/ifhl9v3yPYb
827Xm+O8i7ONK3CNM2toPaHq+4fe32071xrNc7z9apuOfLHB+E8Qj3nYlF9yDeppDe7XJKY3
jlLO4LoR/wCIr3Gdf2Yc0RkPl7fOLbxuew8t2DcWkuNziaO+2VpC72EkVGb2wSNKvUgAZVFR
4Bn/AK1rMru/uK5nx3kw4xNsW4frIorORLyNCwaNiUosymg1VU5YzPgdcy1QfA2/ca2TnkFx
vjxwbbPBNDJcSisaNIuRZvyLlSvYnGL7jpzJJa2fBvl6wtPmDcd45JeC4srsNttpuSABY4Uk
pC7AAakI6sc++OljPM8bfjMfxZwLZd/tLPldreXm8QyssxlSv2vojKR6lFGfJq+rGfyNZ3id
t8Y7lZWW/bDy2Tgm6rBHb7vZROkQlljA1OVkPRznUZfQ4updOKr5E5JwzkXy7shg31tvhitE
tb/kNj6gl3G7NG6SZBhRgNfYY1/8RK1HLfkbjnFuG3uzJyh+X7juyvFFMZxK0dekj6dSxqn8
P5/LBzz+1svkcbcp+LufT7RyXfN1bZd545b0uNsmlESTqh1BopPz5ioUZmtCO+C++Nf+vrnj
+cuLbrzHf4LyOa145yOzj247nppPG8aOhnMWZWM+506jrhvkZk028cw+PuAcDPFuO7gvIb6/
lW6E8Th41aN1cSykelK+2F9sEnvhk/J1YjlPxTyrerb5Gu92k27eNshj/VbHKQH1x10vEKap
euSqc+9MZ+fDLnrIbHNx35J+br3e7h2sNrotxHBO4jnkNrGqDQegaqiTSfy1GNdzzBz+2Y+d
+bbdyrnU93tqiWCygTbxeRsrRXLROzGWMj8vrpn4YfiDmvNiGDF1FOoDdCcZSIsNIJbzBP8A
hgAtHUKftqSvnhWE0TLkq0JGYPh5YiFOquACtc/+RxVJv5ZkzqulqjuMu2BQxRgGCghSfVQ0
bLAkgaH1FloRSla508MSIkkDQKuOxGXliFoAa0UqdBPqNaUy7Ylg10BtOZYda5/XPESoAtHU
k0IqorQHxHfDgBFHOgbV6l6JXM/hhR/aKsDQlB4E1rgoxIw7qMj2Hj/ywSkYmU6SQ2oZL+HW
vlhMpyRoHQqAT+05jFhSBqKD0Umufj9MGHUYkRmZeqdFFKivhiZtMqERFKaCT0Hh54gZkoRk
FqBp7qfriEg1DiQLXvXSDn4HAs99LKNwPuVSTpYdK4lQjS61LZAkqB0FewxFLG+r10IYDPwr
5YGguyMDlUgUCnI/t8sQplNX9VadQKZDLpXCtOGUGlBWvby6DDi0tCOKior1Ff3VwYiAFAc/
qO1MSM5NCueZy1YoLAaZFFABpJzIOEJ4/TSgOXQ0zwWkTH3DXSPAtkKeWWDWgqyayDmRT098
SIory6tVFpQKBiOHVBqGfiQfPCrDJXNgxKr9wrUg+NMTJJQ6tJIzqM+xwVYINSrgVqdI1Cn7
cSCUfIDPOreGHQaqkEVrWlSDWlMBHqVjrqTQUJoOnlhKMgB2cgg1qKd+2JSDDJob/f3HbwOI
2QIZwKeHU98TOCCyMa6iQPsrTJhg1qCVjUUbJc/EeeKHAaDrkDair+NNOFkUSMtAepzp0Bp5
jBVhVRhqCfblq6Y7f0/j9eZ1suu3X88ku/JlDAhtIAr4544MQ8gPuALWg+7T+/r3xI7VUAAZ
Hr45eOLQehNFyAA+0DFFSWiLQUUAUHfEoQoKsubdycQkMregachnn4nEhq1GFNJqc1/yxL4B
ImhtVRr6KvcV70GJaMhVl9XqHiOtfwxGF6gzEnI0UU8MSRjUrZr6T2zqMTQiGIDMFBpkqj95
OLWfqBSzpU0UggavEnEvDs0Z9DE6+w6GoxI+RyzVciynP8csSA5ZvsNRXqciK/XCoMKtfuoT
lSlAcGukkNLESRIDUdARlQjE52aSKSCCB6fDv+GJYZSQ5D5MOpz/AAxEXsMPXq1UOYB8vDEo
cIwdiqkRgZkkD9gxYJdAg0yVWjA101J74jp3oiVpmxIJ7UOIDgTQASvU1yz/AMe+M60aQkDV
TrlSg6eNcOsmcKxpXNaFgBQZ/wAOFekjlGHYn93nngaU++EvGZGjCCoyrqOeVfDGozYzgX1G
p+uNsn9PhgTtaFmb3MgpIzHh4Ylj2b4EtbG55Zt8dxEt1CoJCTgGIdvUpGZHUVx1s8Ykey/N
W9y2O2pt9rZ20MNwn8x0hRXFGpk1MhjzXz114+XgLsupq+k9FXz79MP2bzAamK/aVJ6V/fhE
WWx7Rf7pdpZ2kOuWYhUzpmTTPLIY1IzbY9x4j8NcLTbpIt6aS+3Xoyo2hF8NIHX8cFo6rN8T
4DwncOW3dhuU7Rwxe4ttYxalZwDTUZa0Wnli/Atc/wAncH4xse7W1rtKta2s4BkeQ6wvUEgE
1ODnVLjW2XxT8cniTX1qbi9uBF7n6okqpcLX/wAY7DrTF1FrwzcYQl5IiAaI2ppGdPrikV8S
WMEVzfR28sq20D5S3VNRWn+3GpD9vHtx+KPjpOGyblbSz3t4Lf3Bch6BmFDQIaBc8sFjKq43
8Y8Y2/aW33kNrJfW8q+5HBFIyBVJoBVSNR8RXDfTKtf/ALnuLbjuFtuFpG1js84DfpUkOpq5
gau1cGBfx/FXBppbiyXissGlRov2mlpUjPTU/tyzxWLcY/8AQ/Guz7v/AEK/43cXu4avbWZZ
zRzX8ylk04fonb8gcH+PLXareW0gOz3crhQHaR0Cj7wVqwyr1wYZam3X4m4HZ8KkvbKee5uF
i95bjWNEj0qBTsD2xWes3xmvjDgXF+Qzyf1m9KSop0WMDNG7UOdWI7eAw3nxS66L7Z9h4hzm
Ozt9vi3GxdlRbe+QSaVcVqvao7E4OYtXfzNsux2+y293ZbXbWlx7mhmt0VGIIH8NAc++GTGp
XibICK9SxyzoMJafhvBd05Xcy2m3skTQANM8pIRe1a0PjiorX3vwVvUFvKRu1ncXUVCbZNYY
D/jyxmSs45Nr+F98mszNebnabXqPpiuNZYDpUkUXPGrUqeS/H1/sF7DFNd29xFcECK4hfUhq
aEN3XFzGp63fJOC7Tt3FLS4j2izF0PaWW9hl1M/uEAk6gOvbAzkW2/fEWz32yW52uK32ufSh
e4nZirZdPTlngNYG4+IeQw7tHYm5tikwLLelmSABexJGo/sxRmOm8+GN+hsnurHdrDcHiqWt
7dm1EeCkgiuNa0aw+FOQ3VolzdX9lt8jiqW1y7GRa/xUFMWpbcf+F4HvLu1325EpjSttNZsH
DE/m9Qy+hwKsxv8A8abxtcF1c+9ELCIkQ+7IBOyg/dpHU+WGVnLWNQrqz9TKaE0zwtNbwPgl
xyi5l92VodvtgHuHQfzGBOSpXIHzOCnPNbC++MOL7jYzy8UluTdWblJ4rt9Ssy9VVqVB/dgw
aVp8WcV262tv/bLm4N/eNpgFq1Ig5GS9Dr698Wardc0XwjdLvBhkmb+kke4Zlo01D+UKcgf3
YcWJtz+K+O7ltc83E5rqS6tGMc0N24KFq0IDUGeWLAVr8U8VsbCGDk11dJuN6wSA2jKsSv1C
qCCWI7k5YrGrWZ5B8Vb7t/IrTZ7Qxzm9JFnJIwRWA6se4p3GCwOffviXl+yz2q3n6eT9ZKLe
2aFyVLt2OoKVxqdeM/lrLT4o4pt0Ftt+/wB9crvl/wCm3a20+1Xw0sPVTBJp6uOSx+Eb6Ldr
hN2uNG023r963P8AOkXyU10kdziURck+Ktou9ibeuIXVxc28AdbiC5A1EKKs6minLwxZiig2
X4k5fvexjdLNITa+rQpkAlbT/CtKftOHqm+OLhnC5d/3k7fPdxWKJ6ZZZSA9PCNa+psFgjVc
z+HLfad22ax27c5G/qkhgrKBVSACGr364pFOvwvpPhngVtdQbVd3e7Hc7haLKlDBqHemgqB5
E4tLzL5B4TLxLff0Jl9+LR7lvPX1lD0ZuwOCRmeKDaNsv943eDbrOP3Lu5OmMfXrhw/L1WD4
b45Etttm5b1cW3J7tSVi0B4WZR0AA1ae1a4MP+GHk+M+Yjkx40IEl3IVdAHCRtHSocs3amG+
Mb7jg5fwLk3Epo4t3RV98fy5UIdWAyNG8vDFqnyzQIYEEVArRetaeOFpreK/F/NOTWH67arJ
P0wcxmWaRYwCOtNWeXfGdF9S8g+Jed7BDC11ZC5jncRxS2riYCQ/apoARq8xi2rmftYQfBHy
RJAJ0jttb+tYTdIHFRkpHjho6l/Cp2T4n5xud7dWsFkIbq0b27mW6f2UBrTSGNanyGLXTzEu
+fDvPNnuLVb62QJeSCGO6jlVoVdvt9xzTR06nFtc7Naq4/tq5BDxldwt74S7/pLT7YGX2SPC
OXL1d88Ws/XqXZWF438b825Dd3UW2WcZazf2ruWaURRB/wCFWP3nxpinjo6d3+HPkTap7S1u
9uEk95J7VtLBKskbP+VS2RSv+4YNU/y6F+A/lGS0ec7XGXWpaL341c6ewUnOvbD9jqi4fxHc
Ny5db8furOQzpMVvLRmWCZVX7wrOQNQAyxWmetH8hfGsFlzPb+P8Xt733twUtFZ3hAKsKltM
hJDLQda4sZlmuT/7h/lc20l2dpjKxaqQe9GZmociErgqrm4l8d/JO9TXJ2aCSBrVzHJK036X
S4yKByVYle4GHfDkdHOPiT5B2rYRyHf5o5VQhJ4GneaaA1otWYsCG6+nFNcrz7teZ00uKmtK
FmHWuG1qRsOJ/FfO+VW0t1slmsltGwX3JpUiDDuUD/dTBrVniLfPjXnO0bzb7VfbXOt3ctpt
BEPcSTT/AAsuVB3OGVnmLu9+BvlSzsmvn2tJIUXXJ7M8Usuk5nSimpp5Yvsa5Ni+IvkLfLSC
72/a3NreahBcSSIieglTr1EEZjKuD7HVVvXxrznZt3TaNw2i5/VXBJt1gUTCUAVPttHqrTv4
YB+XqW+fC/COKcBt9w3qDdZd0uIA8l3Zj3FhuJF1COWA+lUXpqOGLrx5v8V8LPKeXx2c0Etz
tEaGW8S2dY5wgyrVyNVelFw9UfT0ubcNG2c9m45x5LrcFjKm3MkZST1CpjaoFNHTVkMa55nz
WebbWwg+JvnC5KbZNPPZQSxMytLes9qwA/8AD/LZ9LNWgBxhvp5Bu213m2bjd7fuERj3C0lM
UsRNTG65FKjrjWYubsDtm17nud7Ht9hC1xdyMPbjQFqajmf9v1OWK1n61t9z+E/lXbLBtwu9
meWGBKs9u8czKoFS2lGZsh1ODWpQ7J8afK287XGNs2yebaLyskBNwqQSMhI1aWZVqDXqMZ3P
gqfdfj7nVjvMeyXm0TxblcDVDDpBV1HpLK49LCveuNaflb7l8L/Km07dPe3uyyNbwoTM8Lxz
MqgfdpRiWFOtBljNrH2z5cfHPiv5E5Nt5vdm29rq39wK85ljiVsugDlSVHji+zdzFZf8D5pZ
b8mw3W0Tw7y//ityoo+dNayLVdA/jrjcvjnnq73/AOHPlTaNuk3G/wBjlFvGKzvbyRTlVH5y
I2LUHc06YJTev8MBLGdTOdIY5NTv+OFquzYNm3jetxjsNptnur6aqpEvn3JOSqO5OCiTWv33
4f8AkzYtue+3LYpY7OHOaaBoptCkU1N7RZtPiaZYzKft9YvOEf278s5bxybeWmXbqxhtnExV
0ulNdWoo2qOhWmYxud2OffH29eacg2He9l3Gew3izksr6E6ZrebJgB0IpkVbqrDqMRzzxXRU
D0YUpTUgP78Sn6bvjfxF8j8h2sbnsu1NPaFmWGaR44Q1PuKq7K1D2NKHHPfW8/amPDuUHd32
Z9tuI9xWQRzWphdpE1NTWyKCdI61w3rG5lj0r5e+JuB8J47aW/6rchySSNGguSplsrllI9xW
JAEHchQf241y4dZLHP8AEfwtack22+5DySG8/QWQolnbK8c06umrXC+WsIOw6nBffG+4wdxx
BN25jPs/C0u91s1f/wCA0qe3Lp7rNWirpaq1alTh2Rji7Hdyb4z+Q+PWEd7ve0XFtYV0tcMU
kVGNAocxFtGo9zilrcuJtn+HfkredtXc9q2iWe2uB/In1RoGofyq7K340xXo9Vxce+Muacg5
DPstvt0q39g6f1GGUrG8CM2kuyuRl3y64L0sWfyf8N8s4LcyS3A/XbM5Ah3WJCI8+iyKNXts
PM0OAfl5u0TayaUUGlPPGtFjV8G+Pt75huTQ7amiK10m8vmyjgRz9zHzplh+y5jV88+E952S
yuN52XcouQbRZkJfG3olzZyHM+/EC/p8wfPpjM/yu+rPhiN24/yXaJraz3Symtrq8jSa3hkX
/wAqy/YyMPS1a9sRtdkPBOZnkcfHZtsmh316Vs5gI/SVrqr0I051GNWzDz7Wg+UPiLd+A/0q
a6uY7y13BdMEsAI0SgAyIUOZpqybv5Yxm+uXdyyftd8e/t43K92KO/3be4OP7rf0G3bfdAEM
0grErvqBR3FDShw7reY8s5Jx3fdg3ifad7s5LO/t8p4HyUjs6EZMj9VIwiXVYB/MZtLADID9
1KYmnqPw38StzWeW73MyWuxWgYNcIFDTTqKmKN29IZAdXqxmxWebWOuuMbpJLu19tCS7jsO2
TaJd2WJwArkhGmQlmTXTHWZPGeds2o7Pj3JNwuLmOz2+WeW0tzdXirGVeO1BFZWUiun1ClOu
LOarMaGy+Jvk++25NzteP3M9nLFqimUISYlr9qE69WXSlcYsh6qj2PiHKuQ7lLt2zWE95dxM
TJEBoZWGbBtekBl7jrjNN9guV8L5nxiOE7/s9xt/6r0QyzACNyPyh1LLq8q1wszr9su2o/lN
D6WOYNR4j641W8b741+LuSctu1eORbXZ429q73SQVEbU1UUEjU+nMLlXxxm+mzx2c4+G9+45
c201jPFvuy7jMbex3WyaqtO2QhmVS3tyHtnTzwSOcueVk34vyZORR8bXb5f660ht028jTL7t
NRXOgzGYbocdLfHSthwn4c3/AJNdut8X2XbLaWS2vr+dAWiliH8yJY2oGdKioJ6Z4xNY6szV
X8i/FnIOHm3uDPDu3Hr30bdvdkawM6jJZKVCOQDlWngcXU31zu/FZteQb2mzvs4u5f6a8om/
R62EIkH/ANoUrpLYuddZx4p3DMTX1kmlKUqa9RhOLLYdr3Pdd2h2zbomubqc6LaGMAMzsclF
csGicuybaN+t7+XbprGaO5FwbWWOVGUC5FQIWqKazTId+uGq3Gih+Ifk64u4bR9huEleV4Yk
lZUXXEup09wnQPEVOfbGYvhQ8i4jyPju5Db9/wBvl226ZdaRSfnWtNaOKqw8dJxpc9StxP8A
GMEHw7JzW6mmtd0jmWS3hlCm2vbQkIojoKrJmSKmuVKYoeucqH40+EeS85t5b+Efpds9uYWm
4OVaJrmOmmMqDrALZFqYz1fwpIxnI+L8g41u0m1b3aPYX8Z1CFx6XUmgeNh6XTzGNfhz4tt+
MbzdfjSCy+HIuXXTSWe+SXMftI4VoLq2nYIntkatLipY59qUxnmbW/6TGa2b4q+Q92W0fa9o
knttxWV7KeqpE4hJDjWTpBBU+k9e2Ok6jnN1Bf8ACuYbHvybHfbXcRbtKFNvaU1GQVoDGwyZ
dXcYxtddx6hFwr+5O6tzsu4Xt7Bb3FpKIfcvdcTiNKm2LRlhqdcl1dc8Z0S/t4VdW88E5tpI
zDJAxjlQZFWUkFCPEUzx0sMWvEOKb3yre4Nn2lK3dwS2vPSsa5uz+QXwxlvmPTeWf2+3lhtT
z8c3hN83Tb4xLu+zqES6iibMSxBWYuvehoaeeWHn59Z6vrzDdONcg2+12+6v7OW3g3WP3dvu
GoY5o6kHSwzDDupzGHVa634Py5L3brSXa5/1G6xfq7CONTIZbcZe4oSvT8w6jGL14zOvcemf
K/xNw7hnDrQyXF6nJH0PaX2hntLsNm8Eij0wMoJIzrl55a/nD3t8iDhnxJ8Ycg2Ky3C55wdv
3O5UrcbaEi1xSVziCt6m8ssY+tUlV/Ofgvdtk5Xt+z7DOd9t95hNzaXDlY2ohCssvRFUFh6u
meN7439vwv8AjP8AbxvLbFyN+TQz7VutnZ/qdnlSQNDqUOzrJ7ZKuPSBprl1xzltvrGeOHYP
gO7u9n/Wb7vtvsG43SRna4JgvtSPKKp7rEgjUOhSvnnjdm/+DbVDtHwxy+65ld8bv9O3z7cq
y7hdSeuJIZPtmjbLUrdB59aUOCnPHVz34YvNhsZN447ua8k2NJVg3KcKqz2k4y0TRKSuglh6
h0PXxxrPHOSy/wDldWX9u80mzp/Vd8i2bk87KNs26co0TuRq9qVvuDN2K5YxzK3bWf4p8M8q
3Xk+4bbuY/pFts8iw71cswYxtJmgh/K3uDNW+3G7F+NHzr4av9i9i92C+HIuN3U621vuEGnV
DdM2n251U5EsevQ+WCQTWgt/7d9wO0LbXHIYbXmE7P8Ap9nmKpFKFFfaD5uJB9KH6YJPyfWT
+N+HbVunK7vinKjNs95OstnbTGitDuETZCVWydTpIArnUZ412fmefLNcv4xufG+SX/H90Ux3
1k5VnVSFkjOcc0YP5HXMY1jM6ijIYMpNSOxPamCwg/Tj2w+ZzqV8T3pXtjNRwQi+Cn7QM6dv
24kT1BqQS4oBXwPXFqPJpjCkUrkQB44lTyAUaQDMZClaYhTpK4jB0ksBQV8sq4FKciiqTmw6
ilcRFVW1fw/lFfAYgTS6UPWtMvDPwwowUadQJYDInpn9MKIatIHQ9DU5jyyxE6uhCFyS6dKE
9+uAJTrZhpOn+HOoOBBYlYjqPp8KZ4mcJXR6asivUd/24modonb0qSBWoJyNB9cTRyZBJUsW
oKfT/LEKSlVYLpKkVz8a/TxwDRq7soLKa0yX/UnEYRZZHqjUIFS1M/8ATCACJFFS59zrq7U8
MCwyqoA1OxJr1Fe+X4YVgtANXANagUWtMWjBqT6lGRJzNOlM8BkO2pmFHNOhNMq4ii9WSlT6
gaMTlkeuEFUKFDN6gQQfoMWoTMrIzatKtSmeD4WpU0qy6moD4daDE1DHUXIdQVzoR59MINV1
jKjPInIUphwU0JcqG1VIoM61OCqDJOYXNvynBiOhFCSoUk0NehGIgi0qWLek50anXwyxGJHT
XT2zRe9DXEQBqEoaaRSh6HEqmVlUNlkehGIAZmLDTmSc1P8AzxDAhmZRqqS35QPA/wCWJCVU
UHKpp27V8fHEoE6lXIVJGRH+mKQildX0Ma161xEhViSPUF7HEAoZBWgASta+OADV4y2SV1VF
RlQ+GeI6AkKwYt6vt0jy7YkIKNPuFTnlppn9MKKQgKTHkGpppn9fwxNWDZwKMSoalC2eWMG+
BDEv6jXLVQd/CmFmEdJQBjVupIyGXbEdIEmlW6/lp1p54jBsFB0vkDmD2zOAaYlVqSDUnLP8
MJDKrZsD6WyZew74mQsVMgCqDUZk1pXEqlYnTQgauqkClD4jEKAvJLnIBpWgqDn+3EhKAAy/
b08Pr1xY1CJJQuoqBko/0GJWjqzoaih8O5PfPBg+QKgWMhcx+Y+PfvhUCrEUAOZzzpniakKQ
KQKgGn7sWDUhK6R6szStf8sWNbEWmP7B1ByBORP1PXEzsF1yrVsjpHan1xUhaTSvWoqcvM4k
ZXJj10AQGqv3FfDEMGBI65UYahR+h88sRP7miqgEA5U+uHRQ6V1ElSB/HgWH0lSqgksMipoM
vEYNNIn2wCQSQcyen7O2C1CZs0ViNB7jOv4YvBdA6gSkavSPMYdR/ZVmLE0YHJq4tbnCPpca
dWp+tD0yxYM9Uu/NkyhqkkMy+Fca5jHSiUDM+PbvjSwWgePniTtlmVnVVpUnr5+OKRPZvgZ7
KDlNg9xcJArMdUzkBQVBA1VP5umOtn+quSPZvmvj99PtsV7G8X6dV9b+4pNPpXP8Meermvn6
ULTSSNNaVPcf5YmrdBE6sjFXBjU0FM8x4Y1g1ouGcoueObum5Wao8wFHWRaoyHqGB7Y1OvGe
q924b8zbPudpNJu62VhMgJXRUhxTLtljPXqjNcQudr3fn02+SbpY7faRlvZtnOkS1y9JYgef
XG5fBY6/l7bdj3OS1vIN/sCBpjdC6yAdwfQTjHPJxp+NQ8es+Eptj8i2/W0VHkWdMi3j6vww
2DXkey8J4nufJLu23fkEdpboS6XEZGiQk/lJyA+uLFcrh5ZxPjG3b5Ft2z73HuEcmn3JWCCI
HwLqT/jh58HL3PZ7DbbXhCbZLvlg85hYe6JUVTqzpm3ni62n4Z3bORbHue0S8Ym3G3tZ4KoL
hpAI2H/7snI4vq1jj5jzDbIdqt+Mbddxy3aFFF3E9Ej0dGMgP3Ypz6LF7se73nGdra+33l1v
uUTrWK3SVXZAB0QZkt2xUVh+Ocj41vXyE+6biRb2ylpbdmY+pz01Edz1zxfhH+SN5i3rkNvt
23bklxa9FlDfyqn7gT9csUUelrtFmODjaTvNiHWH25JTKmkEiuk59sFgrD/EWyQWm+Xd3Lud
kqxho85QGapGahj0NOuNX4Pjt55xi53Dl9hPtt1b3glkQmOGQF1UHOtKjFynZ81wmDjlqk4C
EuaAkatWnt9c8Fox4JqWg05MOtR4Yo3a9f8A7fg8m5XiaqAR1amYJJpQ/wCOG/DL0f8AR2ux
7nuu73t/CILhQzKWAZVUAD01rjKVu9x2PNuOvbbRf2uosDI8ziqFTUZA1zxZU8x3Xg8237nb
7Yd8t7u+mYatT/y4/JmJoDhi3Hr297Mz8Lh2+K6tTPCsRZvcGhjHTVQk4pPRQ7xt1nv+yWVt
Y7tbGaAxs8fuAoVSmoUB8sWFJuG78Xm3Oz2u7u0e8ioY4ww0q/bWenqpixLa0uls7e5G4XNh
aR1ZoEjZFIjH5mzzP0xJmeU7W/NNjii2O7tyqSfzJJX/AISa00+oYMDn+O7O14xfX1hdbza3
dw4109wejSuYWpw4tUHOYtr5ZBNe7fuSwzWLn3bVnARlB+6hz/EZYsF149JC8UjI+ejLLy8M
SlenfD3Ldq2u5m2+69L3PpikrRQRn6q+NcLd6bi1vLXh+37jcblcQzJfzmaD2ZEOo9AvUZ0w
YyC5ms+bW+3X20XKRpt8nuTLK2iVQPuGivp+p64ZRnrsHyZxZ98G3fqFWZlMQm1D2jIBTTq6
D6nFitc8F7bcM269bd5UpfTtLCsLhidRJGXfzwYpQ3jW3MRs+6bTcoIdsl92dJHCvkcwF/DC
WY+QOSbDyDlO12a7g1nbWhZbzcI/UEJIJ9qnhTriajh5f/61s19tO42XIpt99qVWa1ml95kR
aMWRhRVJA6EYp6Nbm7Ntyu92fkm13EclltrCaeOQqrCufTscVidNv8g8a3De7naIJ/8A5c8Z
WNpCEjZ6GqqT3wxm+qie9tuF8HvNu3l1a4mEwjWJgQ3ueQJpReuCRMpw/aNkveK3M83MrjaL
qQM0+3pMI40X8o9tiNeoDquLGqyfBOO7jvHLo49vKTm1m92SVyFyBFWzz6dsSj1X5z2XeZ9u
sty26LOxZjJKGzQNQ/bUeHUYtGe6fg25/K36a1vd1eCTYANbyyFDM0dPEHL6nAdY3nyw/IXy
Lb7bsl5BGFt/aaeclUqpJIWn3nwAxrIObqubiN38acy2m+3i5iuLTV7qPbkhtKkB6o1DjK/L
1Hcdvl3flu08x22VLnZLWP8AnurDVQVNKf8A1d+mIyZWG5Tueycl+VLOS03t9ntI4xEd0Sin
3Ur6FetBX+I5Y1+HOcf7azHzTYGx3Ky18ok5AHj0qrujmE/7tBI9XbB9bmm+V5nkD6DUn9/7
cLWvpP4gvUT4lv2juAksCzknWA6EJkc+lT0wYevh598b/Km9pu1hs+47kJNkecNcSzIGYVNQ
Q/WlcVlE6j3Pfn3Mbxb3e1ceg3IaA6bkZQmknpQDNsu+JMntvJeb7lyXeY4Nusru2VEW62cT
gPkNPuLJ/E3Q5Yj+FL8lcOhg4fPudtdXWx1KtPsF3cGWCSpp/KqTRhXsca5FyerTbp7rkHwM
LXY7g3e528CxyRQyMZlZJKlDU6tWnpXGfyurqH4jE138Z7tskUiNvcTzj9O9BIrMoVX61rUd
fHFqvs8afh8N3tHC9p23fpP0+6pMoaO4kUyEtISulmOfhUYGtec/PfOOSbBzvbH2u8aFLW3E
ixjNfcY9WXoykZZ43J4xL6zHC7vlPPvkq33z2BNLbSwSX80GmONESgBIGdcYrpy9j5lt96Pl
nie6GA/oI1eBrnLSJH1aVY+OeWBiZrhvt63D/wC/6x2yS6ZbJLRmjg1EIS0VcwOrV6Y1fhc9
e2NLDEm5XXJ9o3AG128XKG1vEPtEvJErSUky9QYfvwJ5P/chLyJ9q2uL9CBsluWC7nFJ7gkb
SFUPTIU098alwf8Al8+ByzhVbSX6igzyyxLH09xBbrfP7fxYcek97e4F9uOK3kCzQyGWtGYU
odNcHMxd7is+PbHn3G+Y7HHzS/b+mOJookuZDI0dy8ZCJrJyBBy7Ybap1Gy2jZOV2PzDf7jc
rNHxZoWS2YyUgEjKKUStPHOmC0wHyJu0+2/E/ILva5xA0d5IkcsJFFVrkBtJHTqemLmes9qj
5J3e+sfhrju/W1y0e5Wv6aWK9VwWDNGdRDdW1f8AXDz+TZ7EvzXzTfNs+MNkvbW50S7ssUd8
VCkyLLBqYAgECp8MXDXU9eYf277FvNzzu33W0ikawsyVu7lCBGnuK38tvHGK3Zj0TkUe/bP/
AHEWu8W+2Pc2W42qW8kgUlTEsYWWSo//AAfcY3+HKfLd7rZ3ezWe7XfHYJN7e/Vpm283C0Rg
KD2Q3QD+EYoz2+I9wW7N7I1w7m6kldpxJUyayTq1Vz1VwW21vmeNf8N7nySx5vaScf8AYkv5
A8LW12QsU8TCrRaj0Zvy+eLG+fh9GjZ9x3mR54F3fhO+aSR/N9zbpJBlQglko3kBlic8DfWH
NG+FIrLj7e5v8blHaArU6bhvdMTf5jGZ4asdy25NwsuE2u538m3bnGaG5DBbn9SkA1RVav8A
5DUMO+GDqe+LjisSR77e2p2a6sv0ylFvZ5neGcFszFGzEaSfLD0WN5Zab7dfFftcMEn9WtNy
kgQWVFkiQXEnuJ6cwMxUYoMU/wAXrz6z53t9vzu7aVZbGaPbRdMjS62064WNAS1B0qcV3GvG
5XeZ9r3e9NpxPcTT3FEzzgQSgZ0iV3K+qmQpirMfFu/zLc75uM0cQgie5lYWwFDCHkJ0U/2V
phEj0T+3Ta13D5CWP9e+2yxWssluFIPvuCqtGwPVWQk064zfluPp/iKRybluFvLsl1t3tKYm
kupmkhuFLUJijZmGg/TD1WZHmv8AbpuhMnMtmjnZfYuXawtNfRQ0iMYlY5UIWtMq4eucrP8A
P/0fOHNZt+HIr1t+ad90RjDOLss0o0EhU9X8I6UwT2nnyeKaIL7i1kqpI1NmSK/TDWn1hyuL
le4fGPC7jgxnmuY1hjupNuajCFIqFWoeiuOnjjE8is9gPm/lN5xzfuHb5s86xblODbXUw0ln
hcp6JVPVdVcj0OHPBn+yq/uq5fu9qNv4zGyDa91tTPdRMily8ctBpc1K/hjXDH9Jvi2/t95h
vt58Zb5+ou/1EuxKy2BcKWRRAZFVulVqMq4x810tznXm/wARc45tunON43q02+HdrndoQ277
W3t27SxJ/wDgKCmtSanxxX5HM8en73sG/wC48f3G54/fbrtNybaRrvjG8kzWskZB9xIml1U0
EWMMW9K/xHKrGuKtWZE3BPkHdOH3E81hFDObldMgmBPpBqNLAg9cLMjk5rzTdeU7i99uOhSF
EcMcQ06QO5xRM6WcmurUR37YYHo/xvzji/GZJJdw217rc6aRdLNpASnQI2WCnXbzPnvDuRXl
vcJscisjg3LvLRpY/wCHKoX64JYsaKb5p4INnO1f0O49ho9CQNIADToNdSfxwyLUcXzhxWTa
0sdx2CRrSoQIH1AAdD6hU4iivfnPZZLmFY9m9yygWqI70daDJgQKA4VHLB82hpr0blYSXlhP
/wCC2Z6e3lSgqDqXxJwMouF/JPDdkubu8l2WWO7uG6wSAhUPRQrFc/HPDnilBvvJfjblm9Rz
3Vvd7NFpKzXo0yM38NUq+j64pyrXRBsPwlDPHInIbn9RE4f/AGinc1Sn7DgkplbvfvkL44j2
iBJpI92twR7aQg6gU6Ma0p0xYNZV/nuzO6QmLayNpClZAXrOT2I6KAMR9Vm7/InxjcJdXVvx
h5dyuMmuLuhXURSv3OD+GLGT/HnyJwbjcczXO33C30xOuaGjAoftVI6igGK1pTcx5bwbdeQw
7pZbRO0anVuAnbSJlqKqFBOk064Oco118r5zwS6sLaLYePf06+hZZI7gKiBQvVBo+440sXUv
yzwbeNohteU7XPK9uFBjhP8ALY0pWmpSMuuKnGP37mXExvdpebBsSWtvZsGEUtay0zGpasKY
zgz8u3mfy/Jyba121tpt7PSQ73IOphp/KhoKDPCZ/ldfGvyNwHjG1abm3vEv5wFuplIlViMh
pFU0jyw2K1YbT8ofGu2b5f3FpYXSDcKG4uNSlg+eQWtQD/3YBJip3jlnxEbiXdbK13Kbdo2E
0MkjtoMleh1s1VxSKy/hcN8ufHW9R2V9vVrdpuW3triWMDQHAzIIIyPhTEozXL/nDd7rdo5u
PmTbLSJPbA1K7THsXWhUAYfMWV3b389W9/wk7aonG8yoIri49IRgfu//AEhlimCqbiHy/tOw
ccfapuO295JR2M7lQXZ+vu1Vi2fgemCzDa7fjD5Z2vZ4b3bt5tilpeyPMs9oM0qKFSpOYocs
NaxqE+Z/jPY9lbbtkguXEdXQSLQF2NSzsWqc/AYoyGb5V+LdzuNu3/df1sO47cuqJI1LRqx6
0APqwVSYz++bhxX5O3xry/3pePWtmgjs0mTW8q5ksalV64c34GOvjPx78fbHvthutvze2uHg
lEiQSe2msg/aSHNM/LBlMbXnHMfizbd+26/3m8I3SzX3bSe0Pu+ljQq2ioI70xfVbjO2v9w3
C77dNwtL22nt9qul9qK6UVc5UbUvYN1GHBsYbmN98I22zvbbFZXN/uUrVF1OXURitTnkCfID
GcW7Ww+NvkX4k41xf+mG/ubaW5B/V++jNVyKHQyCgXwxY11GGh4T8Z8m3/cJ7LlLbZZFh7Yv
YwGYHqFOpVoD+ONZ+mfw0uwfHXBuMbvZ76nNbS5TbpBMYXCLrAFaAqzH9mM5TKy3zf8AIWxc
q3azuNlWVltYjG0sg0ZknoPDPG/wx1Lq3+HPl3j/AB/ZZ9h35ZbSO7dnhv4DqbUw00IGY8jg
a3xd7l8m/HO3vZyWG7bxvVxFcLLLJLLJ7YjQ1YFXCqfAenFhi+k+f/jqRr90e50XMS6G9sZs
oIKnPthxakP9wHxjuNuNunubm3tLuBoZbl4yoRiumlczn2ywYqy/IPlf432LgEvFONTT7s06
FYi4KBNbaizuwX/9EDFJgtXfG/n/AIX/AOpwybtObffbGExrbrEzLKVFECU9NCKAgkYp7TVH
8b/N/HWtd12zfJZNql3CaW6jv7UHTEZfyAerSV7ZYrVKqPlLnPCp+LDb9t3/AHferuZ6u0sz
iEr4yqQuoeAXDy597qnsflD41i+OTsF5xQT7xbwlEulWIa5WqVlaY0cGpzGeD6/tuvILh2d2
kKjSTkB2HUVxpvALIrNVqgAZiuefn5YNHy9k4b8h8c274k3bYJt3u7beKv7NugV45VkAChaj
09PX0xT5Hfw8/wCCb3bbTy/bNwu55ILS2nDSzW//AJljBzMdcmPjXBi5i/8AnLmm0ci5gl9t
V9PuFisCRpLNGY9BBJMa0C1ArWuNyQyZfWg47z/YLf4W3Djsu73ttvDiT2rbQHhlV21URgtU
B/MdQODmTR/b/aZHn3xxv1hsnMtr3G/nls7W0k1zSW4rIaEDIZ1qD3xj6ufE+lz8Ln5t5Xtv
I+cPuG0bhJf2gt4kjuJIxFQrX+WVAWumv3Y1njctafaPkHYLf4PueNvu9xb75qZhY6A0cgd9
WiN6HQtM+oxSLrWF+NuRbbsvN9s3G/nmtdvgmrLLbgGQVyBAzy8fLBZa3zVl808n2vfudT7l
tO5Nudm8ESLdPGYaaQf5WkqhOiv3Uxqxnlpts+Q+PQ/Bl1x193uLbeUc6tvMfuq6O4YCKQD0
p3Jr5YxzPTWH+MeS2Gx8w2vcdwmmtbO3mDzXFtnIFIoag1DDPMU6Yby6yyTMdfzPyiw375Ev
L+xvG3OxKxLHcmMRhgiAAACg9NaV74c/DjJ8tXa872SL4HPHoN5mtt5SQhtrMepGVpCwWGQL
VFpmc+uKc58nqX8MT8W7/tOx8523ctyuZ7Swt5C1xNar6kyNCVz9NfuoMZrc8iX5n5Ht2/8A
Ptw3Hbtw/qtrIsKxXoj9oMFQAJpov29K0zw1x5uMNFKZJaGgCEe538sStfRcvzLt9l8N7Rtu
wblJY8nsGijeEKQfbUsX0saqyaSMEjVvrbbZ868BuBxi5u92IuY4nj3ISRnVHJJGqGSQgUCl
x1GLB1/7Rlua7V8Zb9dbhef/AHnXFm94XdLdXcwJqz0lBSqdsOUWePmm7hhjvJEgmEsaOwjm
zHuKpNHoezAVw6frs9ep/wBvPN+PcV5pNe75c/p7S5tXgWehZY3Lq2YFaL6euM2etT4bv43+
Xtms/k7kE+9bzL/Rb1pRts0rSNCP5mpBoJbR6RQUGGiVov8A3L425VwSx2mTl0mwXFrcOzNC
zRT+l30gin2uGDDFYqqOF758f8E58bx+Ytv1luFk9vLfzapHgkWRXVZXFdSkDIjocZ+o5cHG
/lzjG8f+y8T5duRk47fTTybRusvuOYqklVOWqg+5P2Y1Ws8aFOUfHnJfjrY9ol5jJx+520KJ
zCWSYmJShDAflb7hiksZ6jwj5Z2ji9lusM+x8ofkyXKVmmkDGWOmQV3OTDwphkErD2sqpKNQ
DA1UrXrlliq19N/+0fGXyDwnjthvu/f+uX+w+3rhajB3SMJVHIoyELXyxiTxrqeyr+8/uE+P
rXnUEX6iSfbhZfo7rdY0PsxyM4dGQH1MpoQxplljX1G6i4jvXwzwq13qCw5PHdXO7pJIbiQM
cyrMEIVadW/HFeb8ieTHnvwX8kcS23j+9cR36d7G23ZXMe6U1omqL22VwKnLqppgz0zjeceP
cn2u123f7uwtdxTdLOKSlrfxAhJYqVVs/wB4xrB/P9VVHSG1IStO3lh1uvo7+2vcvjnj9ncb
3fchW03q6j/TXu33emNFAcMrxPT1BqeOWOd20fXPXl/yDa8V2H5C/Ubduacj2a8uRfT+36CA
0pd7eRqUqc6MO3njd4tmsfx88WHzNv8A8T7u+2XvB7F9t3GRGXdI0iEUOhQNClQdOtWrmO2C
TGru+PMFYLQZLUgmh/dha8e3f278747sFjymy3y6WyO4Wy/pHYN7buqSBkYitGOoUxn6+6fw
3nD/AJw45unxrue1b5dC05Ba2FxaxSSrQXSKjCHQyg+rMAg/XFnrJXO9/GHyHZ8W3y65GNgv
9kjRf6ZOF1awyMVYt+WsWTLisZvLzD+47nvHuW8ws5djlNxBt1u1rLckERPIJCxMXdlofuxY
OZ/tryNbhEmUg69JJEZ/MB+UE41I3a+s/jzmnDotgspLLmf9P2v21jn4xvCJcLEyiksSzNR/
bLV05kf4Yz9bpscOwfJPxHtnK+T8b26R9s2PkEaqNxSOlpFclGik0ddKOG1KSAK+GH64x9dm
T4HZbt8T8I+ON/45snIYtwvpYGuVcsdVxJ0VUAGkNlSi/XBl1T4w/K7j4h+S5dq5JufKF2iW
2tRaz7VKUV8n9x0dj6h1pqXtngNnuqLiXNfjPYuNfIHHNm3SRYZleTav1K6WlrF7bJGwFHo+
QNASM8b/ACztsZziHN+OW/wBzDj1xdpBu0y+5a20po0yyFFpH/Ey6PUMZ652s7bz78vFjMzl
2FFRiQAcvwpjTcmNx8O8p2vYPkLZdy3dylhbysLh1XKLWhVXNMyAxFfLGby1OkPy5u1vufyd
yO+26ZbmwuLovDcRsCjqEShB7gsCMays879rfw97+M3+Kdt+Ltx2j/2pIP8A2OB/1ovCkU9v
LLB7ci6MtWnt44zOa137HlPxnsHCbH5Iktt55GsdvtU6XG3btAumC5ZGDBW119tSB+3F1PwO
Ph6L8tcg4RZc82P5E2zfrfdLizngiu9mhIaQxAn+ZEy+H5g2HrZyLcrcXnyJxS+uP69a/IX6
TbZI0lXZ9EKkAJmup0aTWTnT/LGeffhp8jc03qPeuVbruaXkt/HeTtJFeXEaxSyJ0BeNfSuW
WWN2rmO/4w5PZ8b5vte9TxPPBYS+49sh9RUgqxFepANR44zeWpzH0dabr8PWnPLr5Lg5gjXV
zGzybZJQChhEbKq016wBkPHLDlrF8rG/IXyjxDkPxFudpY3RG5Tbw1za2E66ZWhaYyB6Zrp0
t18ca5+Wep548Cud33B7AbfJcSNaRye6LUyExBxkG0V06gMtVMFdOsrlgcqwYCh60PjjNZfR
nC+VfHPLPi604NyndjsM+2T+6JWoEmXUzqQ7ArX+YQRi5asRfNfKeA3/AAHjnH+ObhHcRbTu
Cx3EUTM0qQJFIHmXV9ynVUHzwznPlfLW3f8A9zl18RnhtvzKCO1crcwXk5UyrIJBKoeNQvfI
gZ4OPldPO+bc84vuHwPsnHrG/jn3far1Yri0WoLRxNJ/Nj1dUIYEHG5Par6y/wAQfJMPFuX2
V/ujTT7bGXDQqWbQJV0F0QmgZep8cc63+Hvm9fJvCbS33Ldl+Q7i7heOR7fa7URCRGfNBD6A
WZelGNPHGpNcdjHbXy74x598fbfxrlO9SbNebPK0jS0H87NvVXSw6P6x2OHGq8x+T7b442fd
tsi4PeNfmGPTucoJa3kbrE4JoQ5zDqPT0OH65PWJsutx8vcz4Lz7gm27zHuZ27keyxiI7HOp
pP7ugSKjL2UrqVq0wcRvr51cbVyv4n578f7FsPJN8k2G+48E9K0QSFV06kJWQFaDp1GMm2fL
SR/Mvx5Zc42G2TcjNtu1WM9lNu5RhBqk0e2QfuP/AIqMaUqcOeLdfMN5vUsW+X15ZzNFFcXM
7pJGzRsY5JGIzUg0KtmMXQ/n1+a+kPia53DnnDLBeQ7FDvdlsTmPab6OdFuYniAASWJioYaa
AE5EdRjHy1186yP92VzDPyTjgiZI5ottmE1sCoaOsy+hl7UocdeZ4xfXgJDFitajx/6Yy1AB
ZCtEWhBJLHyxImBcqStQuZNMjniUG7ISSPS35lGQIxIGpaZpSnVsySTiIii6Qr0FTmR1p+OJ
Ghc6SBmVyWvYds8VZ0Te8TpJAZPuPY0xEABKkFdTt0PcDEvRoArAkav4geuBC1RrKMqKw+3s
AelMR1J7arWhqooQB4+BxBGrEEOMl0lWr0zwrQIyksA9R3JzzxAUeoSUp6aauuY8sWLYJisS
nTp/iABzofGuLAWpitQtCwyHkfPFjRaZNKyUU5aQewPXPBUNBrYhaDzB79epwLRSAMo6nqAV
z698SRBSiminQDpqM/p+3CMGIwdOdKdB2p44SIsjARoKjOgJ/wAsI0vdQdH/AJnQ1wVD0qWJ
caWyz6gg+OAiVTpKkgAZUqf+KYERGkgEZtQFic8SJaBSC1Qp1V75f44tRpT6NKmhY6wTkBXt
gVSR0dkJSiDItn+04UjkozFUfJagnCzp40UAkNVsxTxr3xFIs0gbUwoKgUyrUeAwGdEsjUaT
ucyT4+GJaQrqOfXoBnQnviRaf5hYqXBH4A+OJQ5ZlWvU1pkevlXAiMtUDEUYHMCv7MKEQqkH
TrUDtkSa50wL4JmOr3AKgjoO3/PADjQzDKhUZqMifrhQTkDqH31ypmK+WJYEAQ96joT/AJ4c
SZnOZqKMAKd8BMsVEYVzPU5/5YmoIhgdQ6mg0jx74USpmAp01zA/xoMSkM+gJqJowNCPDGWb
4EAqM/STXqMgMKNqLPRWqTkVPTBUN0jkNCQpBBzqenbEhhdRVS2Ryz6ZYtJn9qtSTqU0GdK+
GJaQlAZuhQirVFMx9f8ALAtJQAhcAUpl4n9uEEhUHUAVr0HSh74DhHQ/pOdD6ifA+OJGctqL
qAR0qPHEhrIgJK0UAg5jvhNRGTT62BzNFp4dcQwy6klYLQKw/wDqJ6/TDrOJH0Oi1XRXMEHP
L6YmxM6q1AaqAM698CvRhICp9QJJGqvbAtMoVXqRq8B/zxDD6gG1hQVJzwrSkCEUoCozyPWu
AkCyxjQ+lq0oc8sBGVIUE1p0z6jzxavriIe4WOldNDQeZHjTFrMqQ0DMVORGeeYOJI100Odd
OeoDMVwqRJkfSzkyUrU0GWImGhWKtkpPqY51FMhiMgWSdl7M6mgQdCMR+p6MrChqSM+xIxas
GyqAoGYAzp2zxM6j0gqQuVT6s64idW1OKA5ZU65DvXDjUPSMsSxrXw6eWLFSVDUa1CilBozG
M4C0qiqAdOedR6q4pGtOyUBelC3UVz/6YXOwy1pmWVT9x7Ni1mgeMf8AkFdPQ5DIYtWHWP0t
SpJz60FO3XFG5AlasWLdOoJ1Vrh1YJYipGiqBzpJ6ih6knBqwQCMjK3rAy1DocGrMA8MaxjS
tBWgA6gnDqqh3aVSwQrpYih7io8MblGqcqR6dJr2IxKw1H8cQxZSLFRTHWh6v0z8Maxu16b8
PWe13vI7W13SI3NvK4JtwxXIdaH/ABw449c7XrvzFxHjW0WVrLtdgLRDmyI1a16UrXqcjjlz
8ukq0+N+JWEnC3u40267uJFJ1yxF5QaVC1oQPKmD+mq7Vzwbh2xX/Frj9TaWa3BZ1e6kUB1a
pzFe2eHIPhld0+DrefTdbXvUNxbytSWXSVRf9wILYqpaqZfhDcW3WOygug6sup7wq3tL/wDU
BQ1xYfsWz8G2LaudWezbtKu7xuSjrCSEVq/vxuTY6eYsfl7ifGtnktpNtsktUcBm7knpXP8A
xxzY2flzWPwot1x3+tz8ltLWJ4vfWIUOkddLvXv5DF1sY15lexRRzMkcgdVJCuMwfMeNcPMT
t47x3ct+3GLbtvTXPKfUCQAqDqzk9AManKeqwf2+xKntPyNGv9OprVYiF+leuMYkVj/bxudx
GZLrdILKAH1Elmeg7qD6R+3F619qCf4Av/dj/pu7RX0Uhp7wyQL9akH6A43II62/t2ZGMM2/
27XbD3EttJDUUZgLWuXjTAr6r9q+CN4kV5dy3ODa7dahXcgsaHrpqMqeODNMuJ5/gbclnhXb
Nzt72CfP9U/pSlOtBWv4HDhnTpb+3m+VWU73bHcNNUtlBNadiOtD44MGsxsfxDvO6b9cbO11
BYy2ecxmYVoc6Kozb6jGox9opubcNbiu5Hb5L2O7lHqZoTqoD4+GKJnQ2eQ60rTMY0XoHxRx
jjfId3ktN5illAQuqRtpJ0kde9M+2LqIPydxjauP7+bTagY7ZwrD3GqRUD/PHOcnW3uuB2lp
wIXcm22cs5jWZryOUlxqFelKHwxuireT4o2HcuIW36G2htLyaJXa/mZqA0rmPCuD06w178G8
mW/WCCW2mjZdYuQ+hCemRYZ4Jv5Fqt234k5Nfbjc2SvEgs20yuzUQsOyn83nTClr8ccD2Pc9
+vts30PNLZgiJbZtKGhzJceWHPDyzPyPxvbNg36eysSzWpoI9Zq57np4YoGZiEk7xxHNSQqr
kaEnI5+eNM2evTrL4E5ZPZpPeXdpaLKgZdbHUtRlrIFP2Yy1jjn+FOXR7pHZRBJYmX13akey
vmT1zxNyujcvgrfbKyluU3G1umgzeGJ6P9DUHr4YPWKf4k4Pa7lf3pvbKK+FsNJR5dOksaUI
74rEyPyBtw2rkV1bCzFnGjlRAr6wv8NSOuWLnmCs5G80hEajU5OkAVLVOX4Y3kUek7T8F8t3
GyS5urq121ZVHtRXBYSGoqDQd/rjOtKPevjPkW073a7PK0c1zduI7eSNvQ5IGVWppxYzHZyv
4f5Nxba13G/uLWaByEcxvTQ7CoFGAqPpiK/+NfiDaeR7fJfbpuJA0hYLW3ZdSVzLSA/sGC81
qzHFx74241dcj3GDdd2SDb9tbSFZhHJJpJFV7UyzxYt2L6X4i4DvmzXF3xi5uo7iNmRZbokp
Iy/wggZHsRhkZ2xz2vxDxXYdu/U8skvZZZR6f0oZqZZV9sH/AExBW8K+K9h5Ff392LqaPZLA
+2IvsuHLerOv2jT3w43Phc7j8KcSv9hnv+MXFxA9trLre6qP7fVasA30wMRgdh+GeYb/ALXL
udhAjW4Z0jEkiq0mg0OgYrVeZ8uHjPxtyrkd7PY2MADWxKzzyNSNSMtLNShNcsV1udPWNu+D
NrsuH3r75bGXeo439qWGQsoFPSoHfANYW2+AfkS7s/1MQto9QrFDLJok8QSKEDLKmC6uv8MF
yLjm+8e3A2m72xhuQKCpqM/4T3HeuNS4zz1qDa7C93G9g2+zj13ly4SCMULM56ZnBa3cehp/
bz8lTRO0kdvFKB9rThgaDLSRXr4YdZZ/bfh/5B3Debja4NvaCe2ULPNKdMak9CGPpNe1MDnm
j5V8N854zZjcNxgiktRXXLDJrCnsG74G/I0nxh8W7XvPDtw3XeLG7u5HL/o5bZ0GkotaaGI6
Hxxs9PIb/wDk3Jib7UJTS2ZGdM/rhjItr2rc913GCw2+EzXc7rDbRg9S5pQVyGL7SGR6HF/b
t8pSRuzWkCvGa6WmQah4LQnPGbdKk2n4h+QNz3q52i3sDHPZH+e838tB4DU2WfbDPGa7t5+D
fkbaZ7WO4shMt3IIkltpBIvuMaKG6UB88Gic+uq5/t0+U4in/wAe1f3KgsJgQvfPwrjX2Pqt
h+D/AJJud5k2lNsEc8IDPLMwEIBzFH6MT2pgtWaXIPhL5A2d4Yrvbw/6phFbNbP7yGVjQKxH
2188EauVSXXxzzmz5Jb8buNuMO63ZH6e3DVRx3YP0ph1m+rTbvhf5Fvt4vNph2xoryz0md5W
VIhr+31nIgjwwfYxwcx+KuccSiju97sVitmOmO4jdZY6/wALMldON81nraZfiPn03FxySHbm
l2lg0gZWBf2x/wDaBKhtPnTF1fW+ucYsxstQSWUEqc8iR2yxCLjhnG7rk3JrDY7WZLae8k9s
yTZBABUsfHLpTvgPPte1J/blwmTcP6EOWt/7QilmtDGlDQVJKjPTTzwVnLXmUPxfua/I8HB9
wnSC8e4EJnjqVWMjUHFOtRTGbWsXnIfgzc9v+RrPhltucc43Bf1EF3NRCIiTVWH8Q0mlOuH7
UTbWy/8Azb+FPdtsVryyT/2hF9z9FIFIXSNWYShoev0xenmvL9u+M9zufkqPhN9KLG7/AFHs
XUyj3EVNOr3EGWrUKUzwy4blXO7/AAhult8nQ8Htb63lkuYv1KX0v8se1Q/kz/menoDh5/bH
N9xr/wD82/iM15Psthy0ScnjjZzt0iKCCo/MAdWnzxWszXmWw/GV/d/Iw4ReyGzuzcNBLOKO
FCLqZhTypTFrfPq43D4U3W3+ToeCx30Us0yiaG8kOlfa06qlf4qDoMX4HH+W1b+27ity8uy7
Xy8PyaFWlewdFWhXMh1U6gKnBY39v08x4x8aXu5/IX/p17N+hvEnMN0TSQo0ebEAdajp2w22
NcdSrmf4R3ZflE8DW8ikk9v9Sl8WKL7BGsDT1107DF114xz1tutrN/bfxm5mudr2DlqScjtw
zSbdMqhvSM1k0ksPV3pljNlV6/TzHivxzeb58gx8Mvrj+nXkc0sF4y+oxmIEtpPfpljW4Zft
Fd8i8KuuGcrveP3Ey3DWwRxcKaB0kUPGSDmDQ5jBjjL+GZRGMiDprIXTXw8aYWo9Tv8A4K3m
L442zl1lcfqhdlXls40IaJJW0RsP4vVStfHGNxuz1sLb+1u/G47Pb328LFHuFu7XhWLVJHJG
of2hU5/d1rg9q/Ll3j4D+N7N7mJufQxblbKyrbzCJAHHRXo1cbkrOzXhN9CsdzJEXEmhyBID
6WplVT4HGsb1sPib42k59v1xtUN6LJ4Ld5/eI1CqkAJp8ycyOmMdULfg3wlufJ+Tbzx2bcBZ
Xm0RyO0gTUjSo/tqnX7WOZOHrpjnnzW3tP7edjXi9nvO+8s/o5vQEcGOP2klBIChmI1H098E
01wbP8Ccf3rl/wDRtp5XBuO3LZtcyXFuqmVCrqhQ6TpqxatcC+XTuP8Abjs1zb30XE+Uw7nv
G3Rs8+2EL6ih+30k51FM++G6p2Kx/t445Hxba9733lI2h9wiR2SVEEYlkGooCxHQDDtrXfTz
D5O4dsfGb+C22nfYd+tZlDieHTVT3VwhIw4zPli2iUscyqk1qudKdTiN5bnjXw58j8h2aPdt
n22SWymYiNpGVK06kKxH7cGi6rR8e85bcd029NquGu9ljM+5Qlf5iRD81O4PUaa5YtjnxPy5
dn4ryPd7S+vLGzkuYNpjEt9JGusxRmvXx6Z+GG10vw9D4l/bdy/fOJz77G4t55FSfbbGY/8A
4xGyajpYGiV/LXGPtaviPKNz27ctv3Cex3GF7a9gcpJDMuloyPynGtPNl9ccKOzGpPfp41pn
i0ya+gNq/tz2ldk26TkXMYtpvN0iSa2tQkTRsJACBEzspb7lrTB8rt5b8l/G2/8ABeQ/03dt
E+pPdsbyMnRMhJGoA9CKeoHDLkc/ZcZO3gkkdUI9ZP2HL8a4rTG5h+C/lG4C/ptguCdAlckq
ilXFVKlj6q4z9mr4o9p4JzLcN8l2TbtsuJN2gJSe2YFGiIyb3A3SmOs7kjO78LDlHxnz7itq
l5vu0y2VhrVP1X3x+5XoSldPlXrjNuryINp+Pucbzst7vm07ZJfbVZ6hc3MOlqe2AzhQTViq
mpC4vv8Agdd2MszSVVT9wHoHanWo7Y3fDLpQRNNIqDUx1LpFM9RNMYtT3HZf7bN1l2pf6zvl
vsu/Xihtr22UARySMK+2zE11UP5c8Y9vrVueR5pccO5NY8mPFr2xdN5WUQfogKl2cgKVb7XR
q+lsN6P8+9Qcn4nyXim6vtm/WcljdBRJRiGUq1dLRup0kGlMjijObVRFMsQqGqslCzk9+2Zw
41r0z45+Ft355xXcd7226WG92+b27e0mWgnHt6yNX5TXJf34xd1nqfpw2fxDulx8a7ty8TLG
+z3fs3e2Op1aUIWUqT0kR3GXfPD9vWJ/SZqIfAvyrLY/r49ilaF4hMiBlL0IrQKDXV5Yfs6W
sHLBcQyMsimKSMlCjekh+6kHv5HGtEsoY1klC+2rO7ELpQVZmJp6V8ScErUjfTfBXy1Dtkl+
dgeS1WH3/SUaTSRqp7dS5IHVcU61i9WMyOLcjTYF5K9nIdo942TXlPSs6qGKHuPLtgt9Yt/P
4SPxHlC7Rab3/T5ZNqv5mt7O6Va+5N3Q98/y+PbDbrq0118GfK9ht0m4XGyyyW0EZmkCMjsI
qVqFB1agOwGM8i7K0Oz/AAX/AOwfFFryfaRJccla7eJrQvSOWDXpVVU/a6jPzw8nrrPXm/Ku
Gcq4lera75t8u33M6e5bmShV0U56HXL098Mo+83FIQJMmIOYqPPyP1wynNelfDPxbLzbkCRX
8U6bNHE8kk6qQJPbp/JEhyXUcq9sZ669aUvyvxzh+y8nFrxWa5axClLm2vI3SaCdWIMZLBdQ
yyOOmMc+tDuHxXZbf8Ijl12stvvguomthIawXNrOQqpHTo4zepp0pjnzNp/p58KD4r4TtvM+
VW/Htw3GSwS5jkKSKquS6iqoUbIgnFbinrk3vgu/7Ty664lBF+q3KC5NuggBYS1zVhQaqFSD
nh75cv5dW6vN++EvlHaNpuNzv9nP6G3ALzRusjon8ZRc9I706d8Z5jpeseehZI3YsoJpQnIH
9uOnyZXbsWz7tvW7w2G2RNPdznSsSVOXfp28TjNuNba2u/8Awb8l7FtUu73e0/8Aw7ckzGIr
KyKfz6EqdI7t2wc1jEOz/CfyPusVnPt+3lre/g/U2lwWX25I60NDX7x/CcE60VleU8W5Dxfd
32re7N7W/wBOv2pMwyHIPGRkynyxuU/KoAc1SRfQDUFepPTEXoXHPhX5C5Fsf9Q23bS1j9wc
sEeRSKgxqc2y6N0xz0yRdfM/xltfDDxptuMscG8WIN5aTsWZLmEIJHqegf3Pt7EY3zfGepl8
+AfJ/wAWbbxng/Fb9Y5LXfL3Um5WcjakkOnWksRHpFB6SK98H8/8rub5EvGfib5rhjji2lJr
NJ7dLtPbufaWSKShQ1B0sRUZdRjO/mNfExmB8f8AyHvvMp+P3VjPJySE1ulumNQriqu8pqNB
Bybpnjf2yHn4S8x+E+fcSsI9z3awC2TN7ZmjYSLFIegfSTQN2Jy7YZ6zXfsP9vvyjuu0Jf2u
3IIZfVGJZBG7ZVzRqEVrkcZtKD4/+IOS8m5lcbDc25sm2yQLuyuwWWBSwUnQ33UDV9ORwWrn
0vlH4a37gm9LDPS62u8am2bggoHYUqkijNZM+n7MdJPGeZbVhYf27fJ95tAv4rBWLhmWB3Ec
mXYo+YY45y638Kbj/wAQ835DDfSWe3sr7bcLZ30E1EeOYkqAynPIjMj64rb+GZ76LkPwtznY
L7btv3Tbgj7pIkFpcq2q3MzmntM4+0/Xtilok/Crj+OuTPzD/wBONp7O/rP+n9iQ6U1EVBEh
yKEZhu4w39nn1Rbjs+47ZuV3Y3sJs76zkaC4hfqskbFWGf0ywjdcj6xFXpXv3p4jETaGYAuQ
DSi55n/niR41dax1q3Uls6g4kMAuAKZEEaT1zHji0mKZ1K1y+4dMu2IB0hahAaMKgVri+RDl
aNrMepWyANKUwqDBGgENSmQZj2/ywNaIBXrWpXTmMu3ngWkGIXSCWAyCriJkYD1OpAzBr2P4
YsVIliipHn6tQbPUB9TiZEXkIYDMkUZu4AwkwjfUag6Oo/HzxaDOQGLAZ1A9S5E+GBa6FGba
6kEDWCPtHbAZTKX0sR6VNa0608cQOHqoJJqxpUgVoOtcJC0hZm65ZA/8sC1I1AWzoOpPn54s
QVZgzBssq5HpXDGaZGZkr4ft+pxVC0SBD1RuxHl4YgTawvpGseI7VwappyrKC2kHIVzyrhME
tKmgGphQCuWWCnSEhqFJJH8OIir6BGoyOZI8MAPRm0ocl8cSOCQFUadK9wTn554Tpmbqn3Gg
Pn1xLQ+lKHMMD071/wBMCEWamsAMTkfphACn88N26VOYFR/nhFo1XS9VpVsqdv24yYMPIKop
oO3kfrgaOrKpz+3xPfFqCxLNTVRa1r1P0w6hGtQa1Hicz1wCj1g9avU0B/DA3viMSRLKxIJ1
D0U6CnWmHGSKu+k9ADWuef8AzwrBqfQVIGmlAD4jBitC6N6ulAPUT0+mJGJhKLRdK1pWpP7s
AOq1BY5qOurt2/HERxnUDRg1BlTtiMpHQxo329FoK5+eImpR3yNB+UdwPDCNNUEKWBoTmD2x
KQ8ZVVp0/h759xixoziqggHr1FDQYmbAvo1amY+k50yr+JFMsQG2igCHShNSOuRxGH9qIk1G
quQByy88R8JSpYAD2wv2nrXsaYsahai1cgDU0p0I8cTNhBSyglqjtTLAMRyMquHppUClVHeu
Z88sWKfI0Zs6sStOlKjPpiWCjYI2eSsKE/4YmvCYSszRBS2kBtX5QMDnSijcMS+QHWmdf9MW
t4ZzE81GahShJGeXnikH29L0l/V08TlUeWFqw7SOnpApQiop0GLCVAEYLVtTdB1B+uAYGNUZ
HBALAag1ehxM0iVjBJJPgBUHDFDxltOp6rkdPiB9MONylJrrkRpoADTEJdEjKXozHMZHrlgN
hNIqP6s16gt1GIb6NvXSh9VdQbxwGzQhmagKaaHxqD54hYBiCxpXSetcq/gMLGHIJFBTV4nv
5HBrUBqAzGTA0BI6H8MJ+RCUt0ycE1y6eIGJTQhi03tsS2nMjpWmAfk7o7MWWpUUFOtP9cUF
5qg3svqXUCGFaVFP3dcdIlONTVNc8SPWTw8umJO+V4NI0sXXL6nGk9c+D4dll3aC83bd4Nrt
rRg2m41+o/cCCBpp9TjeeL7f4e1/Kd3wLftriaLlFm722qkKgyFySDRdFaUxxninUd/x7c8I
2Xi/6OTldi7SVkZlb29BZaDJqE0Hli69PXep9n5D8d2223W0XPJoHMuv27gIyoA5qACRniwG
l5F8f7XsSbKd/juIZyU92BS2kV6swFBiYwUHy1xKwaDZLbcIW2wJoG5vq9J7dRmfGuHDjJ7R
Z8I/9/G42nI7eK3hkMjy3TEGVyNX8uuVATWpOKW43evFv8wS8Q3Tblkj5NZGaJKC3jIlkYk9
YyhOeMzn1iMvtt/8E23GtF3b3s+7olH9wuWab/aVPtha9qZDGuk8yvninuJJbVBFEzfy0OZC
jxGKN2NP8Y8gs9j5RBe3g028Z0O9KgVFKnHSZjMmPf8AdOWxzyCbb+Z7VY22nUy6YpZDQVPU
j/DHKhnty57xJ+L3kU+/pf3TK6V0sGlL9goAoKdMMFjzv485XFY8ohbddweDakY6YpXPsqaE
AkDJeuNt2x3/ACpy2zn3yO62HcWdkjCieBu5P5WFMqdaYxmD5bPYeQbDyLhZ2eTeLW03DQIZ
GvZPaalMyAetfI4g0UXJuK7JabfttxvlpNNGgUSxuChC5VJXUPLEnknyjzCKTk7S7HuUnpoH
kgkZQ5IoaEUyGLRqu4HdcMudynuuabrdRenTFKrOAT0Ot0DPh+YrzFbzh+EDdpE4vNPPalQD
NcMW1N19LN6qfXBilZijUGRVSagDv+GNJ6/8GbWh3A7xcX1nBFGpiWGWZUnYnxStVHfD1fBI
sflvjkV/u9reW287eskpWIQ+6NVR0LDMAZ5nGOY1zP23CbVax8FG0HdrA3HslPeEyiEnr1r2
xX1UFzZbPu/FItlh3+0juokRCwmQj09q1qRhR95vuPznb9obfbeCa3KO8kcyk/ywPTWoUFvA
nFlWIrvmnFd9g3HZI9xh2wW49obgSi+5Tr7R+vXFgZ74n4+tnvF5uX9UtZrQ6oVpIFdiprq0
nPTTxw5Y1sxV/J/A953zkIn2qSLcTKgKw2zqzoF7uMlUGuRrnjN0RnbH4X+Q4bmKU7YoiV1J
VpoiaAg1NDg0SPbuScYl3Pj9tavdraPH7byLMxVDooSpNcJE3Jdhh3O12t9wt2uygUASJpqo
yqwOkHywrXnXLvjm4fcdx3y55JbbZZSFpI4hKWan5aAFQxIyxbRa7fhDYJbX9VujXVubOWsc
FXX3W0mhLr+UfXFYWW+UuIyz8xg//KVjFDuUwCTGQN7QJoxlHUDwwYpQcj+Ldr4pbWu7pv8A
Det76H9P6QWzqShVjUAeOGanovLdrXm/HbJdj3K21JpZ5ZJgGppA9QX1Bhh+FXlPJeJw7RvO
32e7cujuV1KJ5YiXe2jBzPXsOmCWj5WfyPZcEi4/FJZcruN6vFAW3tZpv1CkH81KDTigvjXf
AnHdzsNvutwnaIW94o9kq6l6A1OpRXTTzw2tfhJt3xdY3nNdxvt3aGQGRprS0jmUq+o1X3EB
1Zdxg6Urt5NYfJv6gPFcWdlxuzkV2ghdEkaKM1NSQo6D7cUTVXV9yLcF2y543PbPtrsrXcrF
XrHTPTisTF8zuN1Xm9rFweW3G+yQMNxRnjMVARQyqTp1fTPD7gnyudz5ZunHeH3UvNru1i3K
dHS3jtgG1FhQKFGZ88HLPV8edcDsNgu+NXVxLzmXZGlZy+3RTLEsS9qoxDMT/txYdaL4Outt
bZd42a3uhdXfuswRiFkkU1GvM9DXth1r8Nnxra9x49xFrffL2MOju+qSXUqRlqqutjnQYE49
92Ped45Lx7edq0vtVrSSeQSqA6n+FQfUKYh7rzz552rdOT8it7Lj9pNuklnBovf0qe4sTsxI
R2HRiMVmDmsn8f8Ax7zjaeX7Te3ex3dtZQzqzzSRHSlTT10qQPPGNa177yjYOSX/ACfY7+wu
K7bYy67y3EmnLOpAy1E42Y7juFhf7hvO22lyku4RQgSQRuPcRmUgE55EeOJmvBOSfG/yttfH
twu9w3lIdr1MDt810XLITUaSfTqPhWuDm3XO8b8vSfgrjG/7Twa6j3GIRvfky2alw1Y2joCa
E0qf3YXTPHgXJPi/ncfJLuxTY57mQNr/APjqZlCMfSSVyzxaeYt/jngHNdp5dtN/uOyXlnYQ
3Kmad4WCKtaEk9sVxvyPQ/7guZ7xsO/7Idsv5IGRfeSONvTrLEBnT8y0HQ41njl7rS/D/OLn
lux7tPuLw3m8BgZLOOkdUCUWgPSp74zutZi0fceSWVnawzbDa7Rtst1GkjS3wlkUM1aqNNK1
7asQaO5b17tRyQsSkeodlrlnlhKXcRJc7JcWsBEl3PZt+nRWAJYpRWDV7MRniDCWX9e4v8Rz
jfpjDvVsWZJZ5BJVy4MYRiWrkMhXD81VdbbZcc5LFs3N6Cae1tjW6LlShA/mErWgKEHBpscv
COX2fKNt5FcWrruzR3TrBbowid4QgEXhprn6sVmBmvlHed/t/jK9tX40m32UoERa9vEleOrV
DKvqLN4erFGbN8ZvYNj5BL8NtPac6it7WS2kaXbXEftxx0OqD3CTIrEdvPBN1vvMfN0hCsY9
RkVfQCDUZeHjjQ5mQdncTrOGVikoNUlBo4/7WGdcNqluvov4mlex+M925om2tf77bvMI90kl
1yERgZMpIYKtak9Tgl9a7uR5rwblW+bv8u7VvFzS93G9uVLgssSuTQaa9FCqMsV9c+esaP8A
uZvr9vkKzlWOWxnt7SIoQ4LGjswZShNB2xfgz5rTfGdzLtXxJuHNLba5LrkUbTH+ryyLI7kM
FOoM2pUQZGgzxmXabcjzn415Nu+6fMO37reqb3c727EklWSMM5y0joooOmNdc/k81b/3Ibrf
r8ox3EcUljLb2sQRyVWUMNVJI2QnI164fw58z/fWu+PL2TZPiS95zBtrXPIneUvus0nuM4Vq
L7lTqRVr074zPa6d2SPOvizkW7bn8wbfvUoG4blfXTSTUZY9ZcaWoT6QAMXV1czZqw/uJ3Pc
T8rSXMST7bPBbW4jcsok9NfUugnI9s8a3JHPm7a2nA9wk498QXnOrTbJLzkU0spbeJpBI0g1
6AZSTrVF6U74zLbXTq5HnvxFvu67j8w7dvFz/wDP3S+uZJZ2BWPUZFOqhPpAHhit1c8Y6P7g
N23BPlq+voY57CeK3t1j9Xtz+lKalKE0B7GueNX4jlzfa3XENyk4v8Mvzix2uR+QXLyPPu7s
JmdfcK6patrVVpTTTPGObtdO/h5/8L75ul98v2W7XEf6/cNwmmluipEfuPIDWTOijSO2Hpvm
ZHN/cXdS3fypu8jW0kDCO3j9mYAMQkQo9ASCD28savxHDm7a8yhpqpKaEHIKM8X4dMfZ2yc4
sOJ/B3Ht8vbZr6yMUEE0aFdVJWZdXqyNG7Y5yHq42y3qX+4cavo10RXUE00asQSBJCrKKjKt
DhH5eT/Kv/ue6RbztS/G1tf2bM/s7stPcNPtnGgayw6074vtjN418qzwTQyCJloy5erIjT1q
uNSt2Pcv7TmH/vN3UDV+gcAjt/MQ/vxnoyePafj/AJ/b75z7kWxDabSxm2zVW7gI92cLJpq3
pU0zr1xVjj1Jf3W6QfH21PtnHouSzGZgbOQKyqA0n8z1AitcvxwU9Mf8S2+5R/Lm7XO48cHF
5r3bNaWKV9t6SpqkTID7uoHTFaOMxods5fByCw5RbcYs7bbea7K86exGil5VBybJV1azln+a
mETPwluLzebX4y41JtnG4+U3Jii921mCsIz7ZLSUYHPV6cBvj5a+VrLeI+UXF3uPHDxc3SiV
NvUExaj9zRsaAhiKkLkDhlHFYyHVrXR9w9SnrUjqKYXSfL6/5dJze++POGXnx/JPI5jtxfPY
Mq1hSIAq1KZK9RTxwTyK/LfC4s15tbmsS393s76DUCWT25lJXOhbTU/TPBJ+Wa8++GOEci47
ZcyuN4tf0q7gXeGF6B2UK/q050U1/HDfnRzM5xXf2873ul/8cch2yHcJbi/sVYbdAkgaWINB
6fZB+1fc6dq4fz6N/wBfHzJyi73q432eff2lk3NnP6t7ivu609JDAhSOncYbF/OzPFYnUVpU
N0NaAHLPA1r652vg17w7i+1XEm3XPN94ihA2+2ZfctraJ6OUQsSEpl6upPTBz/ldV4L808r5
pvvJ68osZNqmsEKWu3SijQwyEMPV/wDaVp9+E8yaw+3Mkl9DX7A6AHoKkjqfDBhr6m/uE+QO
UcWtOK/0O/azE0DzXDxD1N7Yi0hia/yzqzGHm+M35dXwz8iy8+v+U7hLaW1nvQsLZBDasdUp
jEmmX1erJiBXtlirHMm2x5/uo/uOl4rv/wDWYJbraPZZby13FI2c29fVNBHQNWKmosOnXFLd
X49Wf9vG289m4Ru91xDkNkgWZ4xtN1F7y+8kYpIWqvt6xQV6GmeLq7XSzOXzndo6Tyu5DSe6
wlYDIvqOqg8jijn/ACmcujab1Id0tp9OpYJkaUDMFAwJ/wCmKumPq/5E49u/NOacD5LxmMbj
sEMayTXcbj2UKTpJ66n0tpBGffLGd8xi/Osd84XdtuvzhsKbTusNrdxpb2/9QDDRbTGcnVIV
IzTI0OG3w8z1Qf3O23MrbftpTk1/a7hC1s39NuLWL2WIU1k92PU1CTTPp4YZ8Odm9vFcqBQR
IrHMnt45YXT4fUH9vW53W1/FHLNytnLXNlcC4VjShEcKPQ18hQnGJ8tW+PQd2m43N8a7ryPa
pV/pm+zWu4TA6QkUrzQrKW8PUtXr3xVzvMkqDlQ+RI/l7Y59oe7/APS5o4ZNzMBU2urXJrL/
AFXQTTDp9188/wBzEFtH8t7p+ljjjrDbO/t0A9x4wzuQMtRPU41KJz8sLxKe9t+Qbdc2RRbi
K4ia3kmIEfuaxp11y06uuLrMb5mPss/+ybvJS7G7cS3iMCt1aslztMk38dDqqjH+IL554zKs
Zg7Fvm4/EnOONe1Dfck/qUyz29mFVWeVopVlRB9gdfUMEc+59ucdewbLNxX4s4hY7wUtpYd7
s3lLGqoz3JIDk/aQTQ+GLmKb9Y4OeWXzz/8AeVdNxO4nt9gYwvZtI6Gz1iIGQOr1oGkqD5nF
rf5RR8h5Nxz4B3rcjFFYcgt764ju4dFEhnkuAraFB9ObArTph5vo6njN/J17cbz/AG17BvO6
z/r9zN9HW8kIaVdbSoy6voACMPFH9eZZHzbqOoZZAUr9fLD8Nc9PpL+1Hme8nc34i0yHaI4Z
buKBlGsOSNWl+uZNSMY91vPHlHyVzPeuZ8wkn3EQG4s5ZrC3MEYjLxLOVjD/AMTgjrjr1Pqz
HvUnDOXj+2W+4/e7bM+9weuHbzR30R3KSgRip6KCQMcpfT/T15b/AG68X3y85/t+92lnJJtm
3XBjvbimURZG9LA+rHT+vjPF/T0eDbrjZf7ov1u6wtbWm9mU7bPIAI5XWAIAjeNe3X9uC/A4
59r1Abze2m73sMXHd0kWMyxxNLNELSYDOkIeSnq/KCBjOtPhrkMok33c5hbGxSS6ndbNusKF
zSMD/b0xvdZ/nzj0/wDtguNvtPk20FzOkbXFvNFAJKDXIy+lVJ7nOgxjp1e5cCtPka153yJe
UyXP/rEv6iPZ47mRXtyplrGqipp/K6V7ZYrXOesv8oct3bjXw3xp9ivDYtPcGJpocpFRC5/l
SD7KEafpjfMmtSPCOe/JO9c52rZLfeY4Zr7ZEmUboARPcJLSgl7ejT26nPFOoLJax1qCVAQK
ATQaiep/zwWtR9ebrbc03b454NN8f3bQvFFFFuN3byKgjiSNVZZB+ZUdTVaYzzivyzn9zO1b
ruW88H2yJUvt0uYZ4YkqFWe4LQjKtFGo5jDz8UflYfMfEeTTfCnGoZbKS5utkMc+6xijvGkc
DqWOdSFqK07YePTbJdQfOHOOR8e4Vwc7FuD2kd9bIZLi2NJP5dujArJ/CdWY74zz8M9e10fB
vyTJzXlVw29pbrusezix9Hpe9CS62dg1PVp6geeC0yflquR75uEHDd/jm4pfRQSWzrJBd3MU
oMekgyRqXevt11MAa43zNo1S/Je1fJu723G7/wCPLqcWL7ei3VxZXCxq7enQzVb1aRXtgvxT
+XmNruXLuM/O22Hl+5KdwH6SG53iMrElzZPRUeSgGrKquWFcsZy4P532yr7+4VuRW3yZts+6
STnh8xt5rDU+q1E0QAm0qPtfv5jG7f8AXz5PPz69F5ttnyFe/IWy73xi4nPFDDA+4/pbgCOb
1k69FfX/AC6D6Yzv+v8Akye+p9/3k7Vb/KO4bSYlvLW2t50KgMDIbQ+th0rXrjUnwy+fti+c
eURWcOzb5Kd4sEvre9967Je4iMEgcpFIezU/N07Yf6ZHWTXuG3cd2Dl3ybt3yXtm7pNt7Rwx
iyKlLiOdFKqro1KKa/t8sY+2zGJ/ra+ePngrH8qcoIrqa7BcnpT2kxv8MvPmkK5CjVpTAiWn
3rQqBQqeuBWES5kLBSF6hqdcSkMUIy6lKGpGBE/QoVrXMDsfxxLCeMoQBSjGlSen0GLVR+5p
ACkNQ1of8cK0Kq0itpoVAII6VFcBzS0+2oWlAMwOp/biiSRlPSB9xrVj38cQ0wNBpqC1aMvT
CRKzach/9P8AniwHLp7YOdRlTT1/EYsWo/e9PprkepHXyGBan0S6dZ/MSADToMRkB7unUqLk
cqf4n6YkkyEQCHJh9pHQ4sR0qhDMajz7fXEikOZDV19enQdTiRBmBBUEquWkdc8QOG1gqyrU
gkVyI/54FQtGFB1HTSmqvQivliZsSZjPqgGRzFBgUENI9K5Vyop7nE1KXthxpUnV4Dth1YBk
IOhjn9MWiQxWhAWhOVKHp44ice4GNWBA7Dwp4jEhJKxVyaELQAAVy+uJDJlAoSNbZauxHXLA
gh0Dkk6j1qOn/TGkJnZnOghgaDM59cFJ1HqoX69l/wA8QA0blia1WMUIHUkeNcQ+owUABcCg
H4DE0FZfcWobS9dVKdKfXEtJpAwIqCW6HqQR4jBgtEqUUHSAoyFD38cGGURWighe9a98RIup
BatG8K/uxIYiI6nIip8+/bFq1GsmpdRNKH0qP3HCSJLNq+0E0WuYqOuIU5diAMwpy8ya4maa
mmrU/NWngBlgpSMxNAoBDiqr3/fgVCVVFGmnu9Wp54sQl91CSlSOtO/41xHQUmObnSVNaV79
Mag0RV6AECvenT8MRgRIVQkE5Za6HPywH7CRlKkIACxq4/zGLF9jy6Si1B1dAe1B0ywCkZIh
6aABsgwzOQzwrTaA2qiEhQCTXt4YEJRpU0P07imLWzBanM0yyHj9MRAJ1rpVQ4Gb5nCxekyo
w8BXOp6Z4EippJCA0Y1cDwHgRhMwOhAfSpI6KG7E+NcQSgu5JXJQKHPw8BgXwZp3Wqlu/wBn
UYcUpSFSQyVWg+40GZ7Z9cUV6JTp71YChdugPliqltIMTTVn4jvXFTLgiyDUCrAKc1boDT8u
M5T9wyMoTUwCVFcupH+uHGeuoQI0E50oPVTMU6jAylJGkVrRVoK5EDrhb5gDrNAWGggVPbLE
QBwjUp91QCMhTEzaaQEiigav30PjiGJGJI9Rrop6KUOCtQtLNUj7hkviK4zD0cAKqyMAS3pJ
/wBcLO5Aqrxxl3GtTlQdsKgmKox0UalCK9VHnia3DA6V9RoD07Ma5fTFh0Te2wA0MDWgoQF8
OuHGfsA1WRVIBj6DOlM8s8RmVn9+DB29RaQsan/aOmeNyCxT5BQa/UYBRe9J5/twh1tauy6l
z0+VBnhheg/GnHE3u9h2+W+Sw9w6RczBigLHJPSD36Y1dsVuPSud/FlpxKGCd90W+kOoelDG
QQOmZp2yx55Lq5kd/B/iy23rZJt53D9fCGBNsI7f0Ouk56z92YxWNZ+nfx34d27dtnmvY7m9
juVqsVrEqHMDIGpr1zxvqZ8Jld3+MedbeQx2m5ELuFRiAzOadAqlsc9q+0VU/EuUR3CQz7bd
JcyD+XA0ZDfgpxqVrx3bPwLerzeoNq3I/wBEkmp7Ul8j0NewA741Nc7Vnzv4uuOH20ddxiuh
JmojjZAB41Ymv4Y1EobDgXNNxsjf220XUtp191YmKstK6ge4pg6ok9Ul7b3Vu7x3EZjdfGle
legxSm/JoPcZo0RWaaY6UC5sWPSgH+GHVrWW/wAXfIUsPuf+vXenrqZAC2Va6SQcc+mvI5Lb
gXNLyZoI9mu5XGTr7TVB86jLLzxm659TUl78fc625EW52W5jjlYKhMRaoHb0k46ToxJB8Y/I
MiGVOP3XtjIBUzFO5rSv4YfuY5dv4hzLcJGgtNnubqWMlJAqEhW8GqNI/bjWnInveEcx2x0h
uNouYWn+xPaYs7AdEIBxzntZo1+OefyKZW2C9SMeol4mpTy8cUrMVdvsHILy+awt9vnlu1JL
20alpBTI1UDpjWtB3PZdy26b2Nws5bWRKELKhTMd88UpcQoHUdCcwfDGg1nBuF7lym8ltdvn
tLeRU1O1y5Vj2BVQCzU74L8GZA8y4duPGL/9DeTw3E9A3uQVINe9T6sZ41rZ00K/Fp/9TG+X
e7sshj1fov0zj09lZj38Dht9cuufXRdfDl3b8Zj3KC+/XyyIJFsooGYlXFVzFf3YbVWGu+Nc
gs5v0l1tM8Ej0McTwsob/cuXfF9zPHPDtW6l2T9LKZq5xvGdSD/tpUYvvGWi4f8AHfIOUTzx
2bQWr25Gv9W2h6HsIx6/xpht8OOPe9o3vhu9NYyXhgukNTNaSuBXqKEaTShxbShPNOTFlH9X
vad1NxIanx64tFrvnv8A5E5HbxW87bjuFuvqiVY5ChHQEsBnhtU5UM+2b/DcJb3G3XCzsdKR
SxsshY9BpIzOCYnZc8X5TBAk91tF7FFlSWaBwte/qIypjNpvMq04NwmblN9NCb9Nt9hfVKys
zGnai+mn1xepSco2ePad6msRdrfPCwBmVWVT5gNmManSirafWCrE6Myyn7Tih132Wz79foZL
Hbru5j7zQRSOAR1FVGD74K47+zvLZjDc2ssEjVpG6Mrk16UYVNcM60W4a42vd7W1WW5spoom
FUkmjdFI/wC4gDF+RN/LT8N4Bzfk1q1xs0TizjyM0jmONz/CrMV14K0LbPjvnW5cjudrs7Zf
1NmxS7neUCOOnTU48fAZ4vlYvN6+FPkW0tHu5ZIr1EX+ZDFOzkHIZLIAMZ9Z+sS2PwP8lG1J
FxbRe4K+2ty/p/8A0BT6541eqXDL8HfI0e9pbRW8TMR7v6pbikYoaamNAwOMy9DEXI/hX5K2
2wm3CZYr2KFauscplkC9z6sz+GH1Y85/S3rB1SB5DFnIVjZtJPYtSgONbEe2l3AXMcVtr/VN
Vo1jLK4+hX1DFa1Zr0bZvi7mvI+Ozb6+5JGsAIktblpvdJjHqpq6eVeuCU9TGIN1yWKF47GS
+S3XUpaBplhrmKih04b0x0l2HnXMeNBk2rcZbFZT7k0QoQz0zLK2rFLpxdRfOHyQsqn+sySq
DrClUzNejUX7cSxy8j+RufcolE11cyr7S+3ps1khQDqSfb+78cH2UZW3u9zgvka2kmW6HqVk
Z1lNcyaqa4ftFYPeL3e5ih3aW7kcN6DcvI1O4I1H9+H76skbfgPE+fcl2a9vNm5Au32ljRGt
ZrqWPWevQelR54z9vfBjO7b8k8543dXQs94nhleQi4VnE6My+nURIH6dsbP4WTfO3yfNER/X
pFDCnpjhUknwOjKuCwc+sXdybruFwbi7ae5nLEl31OQT1q2dMZva+2G2/cNzsrsNt8k8Fzqp
G8DPHJXvRlocMrWa7dz33kN1QbteXc2g/wAtLmSRjUfmUMeuH7nmg/qe/q5LTXbGVVQnXKVZ
KUCdc6eGD7IFpyHkcVzqgvrmO7iRlhaKWRSD0AUVrXLOmDRzNNvXIeS7iy/1jcby4CeoC5kk
cA9CQrH01wzqLEFtyPdre3ksra+uoLaZf51vHLIkT1yIZAdLZYf/AAzOpUm1b3vW23Yl225u
LW5INWt3eOSnShKHpjV7ljpcRb/ybkO80t95vbu5WFtcaXUrtGpPgrGnbGJ0x4rVllMegyMa
H0CtanyGLcWOZvcLnWxRWpobvjWtY6LCxu7y/igto5J5ZGCQRxAl2dsgoUdTgqnOvSx8OfN9
ntsqW9hdR7fKn/ybWK4QNItK0aJGq30xid2fhnq+vP7LY9+beU2mztZ/6q8giitVXTL7in7R
Wmk1x056lms/K05Dxzna7/Dt2+W13Jv8gVIYLgtJIynJBGatq/DLBempNakfEnzXa7NLEu03
kdlIPcuLOKdCGoK5xI/qPlSuMTr34VYDato5BLvUW27dbStubze1BaR1SYTA1oCKFSKd8dr3
sa55/MWXIdi5u/IEsN+tb2432QpFHFOJJZ5A2UaLqqT+GWCdKe3xqX+Kvm202e4tl2q+Ta5K
NPZRzKUYAVqYVc1p9MY+3rNjCbFtu9y7vDt+zwSNusrmO3ht6pKrqei5ihHjXDpjr37Z+dzc
hG37va3txvjEQ+xcFpLqQj7VBapK0zFDTFe2L/hq5PjH5wsOP3Fs+238e0SD3LqwidWSi51M
SOSxFK9MZ+2GxidkseSXe7W1lscUsm7SSabeOFikxdfV6ftpSmHxW3PHTuu0c2veTNY7nbXt
3ySWQRSQ3Gt7gkZCoOehe3bGr14zxN+GtvfjH5u2zj1zZPZbhHs5Huz2kMgeInqXMSMxqPpj
jO7vw3fIw2wbTv8AfbtbWWxwXD7m7FbeKBmVyR1ZSpBFKdcdp0OetByVOTLvVynIxdHekIju
2vmJmqg0gEmuQXpi3flz4zarUYggZ1CkkAV6fXC6a117bfIVtxWya/gvYeMXri4tPdYm0aZf
zKK0VvAEZ9sY1rr9NFtOyfOCTbNaWcW7afbN1tKK+lESgrJEWIUZMPw7YPssae9tP7p44JZT
NuqxRr/9m8bue9aCrf8A6IwToV4TetdNcyNe6jeCRjM0gpIZCSWLV7+ONrV3wyDnVxukY4nD
dybnCrSqbFisqoB6myINM6UxTF9ri04zF8i3e/Xd9sQvm3e0jlnvmtmZJ0WtHdz933dv3Yrc
Mbri+2/3ErtEUmzrucG3XI9+MRyRgMzklnAckjV1PTGfsnLf7d/cRfcghtZ23Zd5tUa424tI
PTGSFkZJFKrQ5BhXDrMnqpl4X8xcU/U8iksNx2+TWzXG5QvWUGT7ndlJbSe9csF6VyNBxjav
7i4NngOxjdItpmX3rTTNGVIk9WoLI2oA1rTF9jWH+Sx8ordW0HPDftMis1iL5lZSCfUYmT0f
UVxbojEL7iyIoqFkoSK5jzwrY1uwfJPM+PWrWey7tc2Nr/5PYifSmrpq0tWmC39tOK95lyS4
34bze7lcy7sukx3zSES1QekqwoBTyxq9D7TmLe8+Yvke79oS8ivpWVHiZRIAGWUUb7QPy4Nx
nrrxx8Yi5ttttPyTj0V1HZ7W2ibc7NWAhDAEq7L2p1BFMavcpmczaoN73TdN53K43bdLg3l7
cMrT3MjAs7KAq5ig6DGrdHMnzHFC385GD6Cfu618q+AxmxqPbOKz/wBxk3GrObjw3NtpVf8A
4dGj0+2uVIxL62XLIH8Mcvtl+B3v4eZ873zme8boicturqbcrFWhVLsBZY1Y+pSoVdOeeOvN
1z56Z5k06VJ9C9ad65AVwyx0qz3Xku+bxa2ttum4XF3Ft0Zjs1mcyLHGxB0JX8vpGIXxbcKs
ObqZ9643BeatoIe5vbPUGh9wEKWK5sDTwPnjHXTPPF5m/hd7ryz5i5Xxu/mvb6/v+PWZjXcC
mn2kLGie6Y1V6HvnTxwzuRqz82+MPbbhe2sL/prhreOv84ws0YPih0ldWLfW+XFLHJr1aACx
AZCeopWuE0oKJmgo35mFDQ9qYzYNaPYuf802K1ns9j3m52+3uc54o3Ajct1bQwND4kZ4zMgq
i92f9SSCWldjI2s1q1cy2rM18cb8E8or69u7t9UzyTSBQNUjM/8ALGQALE0HkMMwXr1zKj6V
qtFXpXI5dT54r1PgVvOJce+RJeH79v3HppBscYW13y2SUh3TSH1GHMOgVhq7gfjgtn4HVvPO
1Q2vKt+tNnudojv5Y9rvgBd2ayH2ZaEEF079MEyum/aLvavmL5G2qwj23beQXUFnB6YoNSt7
a/lUGRWIUeGLBig2rZ9/5Zvtttllr3DdLyQohkapYklnJY9gKknww243zzq95/8AFXLuCvYn
dkge2vtRtru1f3YtSD1oSQp1afLBKzb7jT8Z5B81XPANw3vZN9uW2DYqQ3FokyNNDEFFWUMr
OURTU55D6YJ8s93JrK/HW4/IUnIyOIT3Lb1dqw0QShGkCguzP7h0MB/uxq2Rcal5fyv5Je6u
tm5Tf3rSLIj3lldmg92L7CU0gekdCOuGUy78Ch+Yvky02+Oxh5Fex2qxiNVLKdKMKBQSpIy6
Z4zL63/5ZxuVb6dpn2Y7hcf067k96a29wmJ5QcnlDVq3njWDpE/It9m2Jdia8mk2r3/1UlgX
Jh94DSsmjsfpikxm+qvTL7opSpP4ZZdfHGfvFJix2Ld972Tcf1+2XclpfRNqSaFqSK1KClPE
djkRjez8tOJbu5aeWctrlkkeSRmzqzGrE0p1OeC37MZl1urb5s+T4rZLeHkd8sMaCNCHQlQo
ooBZWP7cZjdmqrj/AD7mex3t1ebXu1zZXF+fcvJoiv8AMYmtZAQVLajjXfX5q54kjVfIr/K4
2jY995buU1/tW4FrnaWDKRE9NJD6VUo5X1LTt9DjHPX2+PhnM6dm+bt81SfG0XIL3fZ7ri16
5sLiITB5V1VQLONIbSWFK1r49cPPXq6uPJ1SUoymkgHpKkmvStDXxxXppJYJdQye/b+qWOhQ
gn0spqMx9pGHyxmTa1u5fJ/ybuG3S7Zf8hu57KZfbuYneiumXorSvbrWuLjym8s9d7ru91tk
O0T3lxNYQuZ4rSR2aFGbqQhyUnFe/U4be1nlVAg1FjpGnMVOQoBnXGdS35FwvkfHrsbdve3y
bfd6Fmj1j0SI3RkYVDUPXw741ql9dmxc35tx5JbbZt1utuhcBnS1ZghalAxrVT4Vxn4P9bXL
uHJ+Ubk1nPd3l1cy2JJsppJCzW5La2aJh9nqFT54Z0xxq4uvlz5Tlie3uuQbh7MysssRkprU
rQ9RmPHG5YZNZa+3zfb62trK5ubiWytFYWls7M0SA5kRq32j6YysDt257ht1zFd2k8lpdQkN
0sg1Elg4/ZjM6n/5WN1xj5i+F7DgjcevrG8hF7B7W7WcUTShpCuh2Vyw+7r5Yzx5db/p1sxi
+EbP8G7pHuEm/b1Ntc0V036JZQ0eq3b7CQAw1dmFcdP6fPilmLvcD8S8Ils+S8F5Gu4b5t1w
rNt87swuIH9MkaelQrUPXGJwJ/TP/wAtHYfPnxPt+6vvu17Tdx7lujF93Zo1BzHqZXZsyGWl
MqjCNUvFfm7iN1s0uy8t2+4lsobuW422WzYliskjOFkAaOlNf0w3qb4cYr5C+QdivOR7XfcX
tp9vg2YBbW4uCFaRhJ7iBkq/2nKvfBevPFz5Wo+UvnXaeWcMTarC1msL/cDH/W1KLokWKjgJ
IDVqOB4ZYf5d/W6z3Px+HiLzKWLKAdXUDpT8cG6Wp+O/kLeOF70Ny25EnV1EVxbuQI5Yiasr
HzxVr8N/yT5u2hdpba+F7W2z224a/wCprNpYAkepY9OqobxyxrnqfkTr334aaf55+L9822z/
APYeP3V1cxxoJoaRtGZFWlQC6k18xjC6+XkV/wAr2jafkJuR8OtHs7GKZLm0spwPS60LKQpb
0V6Z433PI1lif5E+Tbjm3IbbfXsk2q8tIUgQwtr1aHLq7E09SlssX2znHOXK3Fn8/WlztaS7
3swu+W2UYSz3eIIolUdPeWoZW81rnjnz1+Ff8KHj3zfvNryDcJ9/tot32vfSDvG2FVVGKgKp
TsGVQPrjV7/TWZMD8jfK437bhx/YLdtu41QSGzfSXaQNVcxWirTL9+Hn+klZnO3aO++Utk3D
4mh4bdbYZN2sGVrLcCV0KQ5YsfzaqEqR0IxTN2u3X+18cvxz8tycesZtl3yyG98buG9yawdg
JIpgah4WPppkKg4xuU9exx/JPyhdcrlhsbeMWnH9vYSbdt5FWB0ldUrZ1bOmWWOn2+s8ebLf
lhXBFXHQ5EkY5a62PRfg3l2zcb53Z3e8z/pbP25ImnYEJH7q6as3hXrgsa5sysrzjcLe95hv
lxasJrSbcLh4JlrRkaQlXXyIOOveZI5TrYp4HCSBgaFfsboSfHHLHXnqNl8hfJNxzKy4+k1s
ltNstsbaSVGr7tSo1BaDSKIMvHHX+f8AT6yxn+n+3WpfjX5S3LhW5XMiQLuG33ie3e2UhCrI
vYK35Wz645Xz2Ny7MrZb3/cEl5sl/su2bGu32V8qGH+YGaGVXVi9FABUlemNTv8ALn6zPIvl
bdd05xtvLUtobe9sIoFeMEukkluxPhVVevTHT/p/rYfZ6sN9+cb/AHjbOT2JsVjh5AYpBRyX
geMKGC1GasE/DGeP6ZZWcUB+S75/jpeFz24MEdyt3DdKaFApNYyvg1f8cZ/6X7a3/Tvcegbb
/clYRcai2e/4xDNE9v7NyiSBY5Ci0BMek/dT9uLfdF61g9y+Tr2/4RHxUW62+3W93+qtJC2q
RUNSIm6iisxx0n9cus+27+mo4t83G1m41a71ZJcWexCWMyqSZpI5xTVpai6k7eOOf2b+23Xo
3K/l745m47e2825nfo7+Ewnb2gaN6sKK4dl06k643yxXy7I4YvXNamn+4Yz3160fbr+W2uY5
7cmKeBg8T9aMhBH7Mc7h+1bvmfy1fcvsdqTcrWCPd9sfUm8Q5SSgfkZaUGefXHXj+mQTN1yc
k+Tt+3/km28luQlpum2JGg9urKzQ5hyD0L9xi5/vkxcXOtWm7fNm97vZckt5rSGG25IsYuYl
ZmaJ4wqhoj/uC51wc/1yw2eMLt243Vhc293ayGO5tpBLDKrEMrqahh40xz/rftdrU/pY9Zm/
uX3x7JVk2iybcgoI3JS6uJgKe57YGmp8K4ee3K/4UvEfnTkmyC8t7m3t93tL6dryRbosgjnc
1ZlCDKvcY1elKubH+4LcpN62q53myhlsNvklpbQ1aT2bhdJFWyJT8mH7eGXGy5H88/Hj7Ffx
QXF1uxu4Xtzt89s0WtZBpH81shoPfGuJLWbXiPJfkHe972bZdpvGjH9BLmxnUEyMGpTWSaHT
pyxv/pluN+/J+b893nl6bfPu7xCXb7cW6tDX+Yi/mevcnwxzndksPNvyodq3L9Hew3iKpaB1
cKa0bQa44WOnH9Mu17Tyj+5a6e+iG0WEVxZS20YkhvVKmKcA+4UZTXwpjv8AzzHLr58YzY/m
zl+0bzebks8e4S7iALy2uPXC9PtrSjVXscHfe/B8+EPNvlnkHKhFFdwW1naxsHFvArepgKam
Lk5r0FMHPTPsur7aP7iuZbftEW1G1sJEjjW3M8qOWdFWgY0YA+nLGdkVtql4t8xcm41PeLt3
6aWwvHac2M6FoI5D09oA1Ufjh66261qq5t8jb9yt4DuTxQ20JrFZwKVijdvuZRU1ZqZk4p/b
PFJZ6kb5U5JecS/9VvLhLrawytD7qapIvbNVVHOdBi57xi71dWHGfmnmWxbL/Rrd4bmxQ/yI
rtPcManqIzUECuC2t5al/wDvu56N4i3Vb6OO7itv0PuGKqNDq1DUK/eCcmw6upIDkfzPzTer
NbC7mt4rcEe/LbRe20oBGT0OYPXBO8rMnrS2f9wsMNlDby8ZtHZEEfuLIUroFAzrpPXyOK2X
5bvqgk+Zt4g5Jc71sUMOztdIsV3aD+dazaf/ALR0anqxv7yxz5ji375e55vlq1pe3kQtWmS4
RIIkjEckR9JQA1Xxw894frfl2zfPfyNNYtazXULKE9sXLRKJwAOusdT54weprOcU+QeScZ3S
Tc9mufYvLhStw0irJGwJ1Zqeo/wxWtc3zHHPy3d3319/knDbjJKbhpNOke6TWunw8sav9LZg
k+vw128fOfyJumyzbfdXlq9tdI0U0S2yVZX6mtcsc+P6XlfTar+O/MfPNg29du2/cQ1qhJjg
nhScJUZhS2YHljf22tWa815vybd983eW73aYXFzJ97aQAAemnwA8MbvfmRmeM2kxEZU5jsMY
ZpqyeHbEsds9sqBdLEqafvxQPZvhb4eflu3yblfXT2W3QuFHsRe/M5H+0dFONd8rjr16vff2
77O9i0u1Xl9+pqDru4VjUAeVFP7scc9PXVtRp8E8O2+0Rt33q8eSWrF4bc+0leoqQ2X1wdc7
+W9eYcz45xzZr5rfZtz/AF8A+1iCGHkwIAqMMH2t+VTsmz3e7X0G322kSXUixq7nStSaamPg
MdJGtlj23b/7ddjSCKHcN4upb1xVv09sfar4AsD+/GOprlpoP7b9rSacXG9SWttH6grIrSMv
ZmqaD6DFzLGtcl18F8Um2973aeQS3SxsRNK0a+0NP3Fu/p8BjTU6rU2Hxb8UjigmeUXAMRL7
sVYE6fuZE/LjNmsW6xuwfCOwcigm3OHef0+2K5WFmjGsKhzYAnLLucX0w3rwt6+D+MW+w3G6
bNvzXyQ6vcZwiqSMqaq5HD6NaH4z4vtLcRuZoZ7C6mIdWZrcySo2n7GdyFI+gw9TBYzWw/Cs
nIri63Lcb4WFgjsESGIzPJpNK0Wmmn7cGVr4We5/29WD2aTbJuE9zJqoWuYvZRV7k1ocvpg+
tZvoV+A+M2ECpuXJzDdSAkaYD7Rp/wBx7fUYcrX2eW8n2C12ndZbOC9jv4UFUmi7jz8D5Yl8
i4XxF+S8ht9s/Uraxz19yZl1EItCxXzpjWKV7unx78a2123HY9pZtx9syi+eRmcmlCa1yNM+
mM3kX5Vth8L8R2j9Xu2+mTd7aOrQ2gJjWNR/FShdjiwabcPiHjHIYbTcNmjG07bdBXe3Kl30
dfTUmhP1xSCLCH42+Nvebj1vtJ/qEMWr9a8jA1OfYgE+RGGwq2w+HeGcbivN45Az7nbxkstr
EGWOME9T0LN2xZpV3Ofi3Yp9iTfdnX+nRyBWS3dixofFj9uXnizKvhUXPweLPjEm+XfI7MSJ
F7oh6x+oV0BtWb+GWG6L4uOKfFvEts4yvIuUNJuSMqyQ28NUVEPStKa2NfwxfLVsxZbr8T8F
LW29T3DbZx9gJJow2o6TQge6ftBr/pgxmXFvsvB/i/f4pU23jUiWUNfY3VzKiM38UZZqt49K
YLyz1686234ett/5LuljtW6QW1nYSaZJ5PWzE/8A4NFIFB3qcayw8ms/hhLrmjccO7xT21sq
y3d1DXWAeiBQTR/8MMtLdL8XfG96t3sW2Ws0O5WQBe8nkeSle9CaMPwGM1ar7P4m4Xx3a33P
lrtuR16UjttSImpqKQooxbv1xSUWs78n/Fe27Ntke97M/s2krVS2kJdxUVzP+uEx5PDDJJcI
hUnWwGXSp6YTj3HY/iXhmxcdi3bmXubhPdBNMNuGVYRL0AoQzNmKnB9aLkZ75T+KbLj9ou97
RJp26UKFtpWLOhOdAe9R44A8m9ckwRFqWyocznjpOfNH29fRPxJ8NW0O1vuvJtsS6urgB7K2
mdWQRkAqzKtRqPn2xm+ums7yD413vf8A5BG2Jslrtliulrn9HMpP6fuxY9z2WmL8MxsvkH47
2rZeHzjj/GbKRo0/nXcr0mQGgLgmmo+VfwwSG3XB8U/C9jFtr7tyfZ0uNwlr+itZXHtrDSoO
lSRVj/F2w2m1mdy+NN35B8hDbRskO0bfGVa+htpEdUgr96sT1YeAwY5yRr/kT412XauLi22D
i9mVGlZNznl9cIrp9xhXUx/Hr2wyFDZ/Cfx7t+32ibhbX+73V0F9yS01e0rMAa0UiijxJwXk
0pf7euFNyABridNrEfufoVeszP0zfw/DFihXvwj8fbltN6dpivtuuodUcUt1qCMyjsrjME/m
wYvsqbD4O4DsllbJyq/nk3PcCIrX9OdKB2H2g0bUfM5YvqrXHB/bQDyR4p9xKbEiiUOoBuGH
UqBSgP8AuOFST8uzcPgzhG9beLvil5cIltN+nuzMdVdDUfRUD1fuOEYt9z+GPibZbaI3dput
07rm8AklYnuxEa0GM3nfRjg4t8QfFfIbq+ubNNzWxtNI/T3H8ti5BLU1Lrp5HDjWKLlfB/iS
ytJUs7Pfl3KQlbOMxyCN5K0UFpEA0k+eCc4zOoq7b+2/m11AtxNd2e3vIv8ALtJ3IlGrP1aA
wri9arPN8I87Tko4/JZKZ2j917iNtcIhOWsuQB17YdXLac5/t/41xrgL7qu53VzuUGgSEU9h
5HIFFjAqAPrh9tVcvxl/b3e71Cm5cmnk23b5FDWtmgCXEtaUchgdCU8RU+WCnqO3bPgngkXK
d6j3jenh2vaSGWFpEjnkLLqLkkfavT0jPD6zzMjt3b4K4FuvF7nfeKXt3C9skkiNdiscugVy
V0Q0NMj3wYzeVH8af26Xe8o25cnlk22xkB/R2iECeUH87Bh6F8BSuGtpdu/t2s7zme6W/wDU
JE45s9FuJwF992ZNWhR0y7scXplmB5Z8Cceu+Nycg4HuMt7a2xcXVvMwqTFkxViFzXvhmwbj
wlomXXU+pDRiaGvhQ98OiRGZtOqgzOeXj9cRKaZUyarscszlmMSA8szp6FordVBBrTFINNHP
KAqtVRU0zzrTxw2LkzNIV9VQPCtcsDVQF2yVVrTNjTDy592gDGjMaFa5/wDHljVZ46v5FDGW
cSKMugJr+z6YxXTUnueltLaDmcu/0wYfuBWJKstVrmSDmK4WdhPJXT6QafbXqcOGh1VOgmiU
J0k9PPFPEZaM4BIXSKgnpTzxLBFpDL6qDV0YdKH/AFwGGLSg6TmtfsB7eFMGLUahGYKKAVI8
sWM+JCziQDoFGVDnigpvdYL66gV8canjUuADFNQp/L6R0659fri1aAllUCmghtOXh26YdVHr
aRSpqRWmo0yr9MZwQkZ0XSxqM+mZOFoIko1XoSRlU1y8sU5ZtISMKaaEAitMaxkbOdNGOpB0
8TXGMXXWzCFy6xKFaueXiPrhglyHknlZXDmtQK+ND06fTBY1KEzBwunJcxp7V8cU8OlEJNZq
Mhnl3p9cJ0pZ2LdK1qKgdh3rhkZtomvZRHRDUHI0qennixShNw9Q2osctC1yGkf44zhOXzU6
h5n+GvfA1tFJcu60Bz+0V7jCKid5EWqdgA2eQOBI9ZY1zJHQ0AOWNQ6f9RMFoGZcqhaZjv1H
jh8H2xJHdEKxYmmRBOZ88F9UovcWUlq1PUmncYNrV61EsgWQNU0B69q4rHP8kl3IhJz1oT6u
nTphxrfRrdF49BZkjFdOZqAeoyzpXDR16gWeVpGLmvZT/wBcGKSJPdOgnSSCQVBOeWWIpLi9
f0rU0BB1E9KeeCzTaYXZFVDkKerj7qeGCRmdA91lcsGalf5dafvOEna6kcaT6Rq1N0HX/jri
YtA9yVBBJKdhnUV6keGHF9qD7ug9HY1IxY0jAkYMCCc/U2XfAcSLkNRqFIAAwnCLlBnRTX7R
16YEd5aIq0bxFe34YZBqb9W2lasNQyAp37EnFivoBMEoqn+Y1ak5HPz7YyvwZ7iXSwL+rx6j
6Y3KubYi96Ufe1FAOivcd+mDRZRLIwYAN2qPAU6UwUJVnmLlqlqAhW/yzxacMl5KrakJDAUp
lSnf8MasglCly0QVh1IOdcxgxv7C/UyPmz01mtaZAf6YLBKYXMpkSrABQaKBkRgwlHMAXYsV
Wua9j+GEToZldn+8LU506/TEr0S3Lxe6VOpuuYH0wYydJ2CksxIcUL9h5YMaiJ5lNadKmo6D
PCrQKxRjUeodfLEYMMS7OCRqoCnl4YMF9SmQOQZGK18OvhhxQJmaoAatKBSTU07UwrSM8rhs
jpI79SemHW9JLiU0UuxJH35Agf8ALBXO2yiivG9SFRoWocACp1GtTib1M243EkaxNKxiiqIo
XYsi16lVPQnywSGVE7gxt6jToQeo+uEdYY3LMKGpUEKAOwI64tUpR3Drq0lgwFK+IHjjFmiU
wu2fUrVBZiQDlU0xueNbBLO8ie27FQv5SepHj44BaD9RKQAalgdLU6ZdBizXL6mWYrRmdmDV
oCcgRisalGJ1Ioc8s2Phialw0rMW9zJzQfUDtQYoL7QGRG09VA7HrXuThRyymiOxKmtAcx9f
rgp0hVRql+0VBXrnSuDFiZLueNTpkZUkWkgUkVQmukgdRXPGmbaaOW501YkoR3OXlgq9P783
syg1Or84pl9MA45wzyyqAA2RAp2oRn+3E6QzSMM2lLqOpP7sTNgppzLVj6HoFApQUxLQ+5Jr
opDah18MSEZGcKRkyD09yAPGvXDFKH9TMACzaewUVI/5YToy7agdRHgorn9cB3SadhIy11UB
JHgPDEKeKV44qaVZQfUvVqnzxazkSG6kCrR2XIAAE5+FQemK1qUzXhrXupyJzJ8jjP1avYTM
zZuxrWq07lsajGmMpNGI6AlKfxeWJHQyMup2AZlofzerwzwXXSdeJFuZSKNUA01AAZkYJGBN
IXdgrBFpUL28csat8VuxEpCABfuGRK5EeHTGVDtK59ByJOTZ9fHB9YtTJK/pVjQqSQSakk+O
JaY3Loy1NMyTXpiZ9DLMZHDBQVppahz+uNxQwaiFWc6TTVQ0ale2A/YYlYfaTUkUIy6YB9jh
5JGOthUdC2Y+uLD99CWkLGtSKkk9TXxP1xYTkoULNXLriBoXWpKZgmpHauFRKbjSoUihY9u2
DDh5DmtahgfQwoe3jiJi8/Zwp75Z5fXDKKcUYAkjVSuWX1qO+K1SEGIMjqSezZAgUxnEHWQq
iQmldRqa/T8MJH3LK5OoEivnkcWA7tSMFc65PXqAMVOhD6fSnqU5gdwDiOiQEMNAFT1Y4NBn
ozEr6TUAAnoRiSRmQwlNer8pB9Rz754NZ6uhXT6VDAKnpp0pTzxacCQ2uhHpfzGRH+WEDUEN
9tcqGvj/AJjAYTEArRfSfSe1cLUL0KFNSSMx374lojMy0ZcwDRiTn9MTNCHf3NSBQBmWJOQO
CqUncaaVyPTx/bjMi6OrA0Jyz6+PcDGjKGSfM0qFBz8f3YcVtC0jn1KdRI6N0AxoYmhkkZCN
Rq3VWPQ4y1DO5DClWDZDOtB0ocA0xb8pXQx+5lzFOmBaYVQa1zINQRl17YcZw6opb3P4z18/
x74sakEsgRaR+kA9x18jixUJdWkOqop49sQCzkqAGBkA9QIypXLMYkKoL+pjmtQRlSnU4kEE
lqqT3p/kaDE59T0aPJbrUkszklm8B4Z4o6QRkfUCwND+Xx8MNalJ2DU0nS5GS5fsywVaYmgq
2XQFaVr+IwG2EBq9TGprmT1/AYhpLUsC50Z50rWmJnRPNVdNKrWhGJGjcaSv3EHr1p/rgrWG
eVoyI65NkHGLFfDwu6nSy1IBK5ZHDilpAtIApQ0HRsgBi8XyZXj0HTXQCCaDMEdRliQm9TMV
OsCg86demFfJndhq1Ln+U5dfLEYJVCIyNHpJo1FGZ+uDNNoU1iMFQTHWufqy7DEeYUp0KS3Q
CpHbEusP7hLEslR0HgQemFzPLq+3p1/4H0wHcRh1dVOWs5gVyA8c/HEfsGsxJDnKuXb8csWN
YJig1eo9PUD5+eJM1vUhe5IIIC5Gv7sbjOK5VXPPLKuJkVH8fLriw67TNDpALNqBqFPT9uJm
vfPgf5U2Lj23TbZuk9xt4uDqiu4EMyCoAppXMCox179jlJ7re798j/HptHjl5Dv29yymn6WJ
2giHjWoXL8TjjHaTVpxn5N+O9uslaPlO4Q2yD+Zt13C84GWaozI5oPI4rFfHmvy1zjjW/wB5
TYrQqgNP1UiBDIa/cFAqK458zGpfGW4jvCbTvttf3Kuy28qSMg6sAa0GPTzZg+r6EvPlPg26
28UzcyvNqoAZbeGFlc+I1e2/fLLHPGJVUny9wGGzvbZL++uJ5VYRS3iFpJCcgan7R9cTVVvH
vlXh9txS7sp3n/XMZD+nEY0Av9o11Fa4qsro438lcEn4y20btuh2h6PGuqNpdQr1GkEA/XFZ
qxDsnyZ8fbFxm/2tL+4nce77KtAVMgP2nwGKtfV4/d7jLeXswDvFbTsAqaq0DGtSoNDTDzTe
ce/8D5B8bbDxldsuOUxzyuKz6oZIyGK0IUaatTF0za6OPfIPA9shm2a05G0EOpnS/lhOhmfM
kHPMf7sDNit5JzjhotI4Z+abjuzSOCEskRFWhyLURMh5nFIpGg27n3BrOyRbzmEd/bhdRivI
dcxy6VIrl4YsWvB/kjfeLbtyBrjjlq8NqTQyk+3rbrq9s9K9hhkUrn4DypuM79DuUkIuI0Pr
hrpJDek0r3pjUxV7snOvjI3X/tEm9f8Aykg0CzCN7g76dNM2rlUZYzi1X7d8scS5PFd7XuMv
9DtphSG6nYNrU9j0CN3w/UQ9/wDL3CNgSx2jaZn3mOHSk9zFkiquTdaamp4ZYJDqxPNPje3u
ZeWPyBZTJEIv0cSMXJGdNFK6u2eWDE4bH5R4lyqyutv3G5/oMchPtSTkMGUGoNeganUHCtU/
yD8ocUt+PJxrZrs7mwVIpr8CiRoO6k/c30FMCvSnvN6+C4OMtDbbfd3G7mIFZJVkLCWg9bHX
7ZFfAYla0fFudcT3/i39B3S+XY3twE9yYCjxjoUY5VplnjdkzwSrubm3xxeXFrxp7+O52+0j
Uy3ctVt6p9iNWlen0xnDouYybbvNmba259abTtujSLa2MdGWmSkq6nT5DBZU8v4inxVt293H
/su4TX8MfotLmMSpA9DmXVaPn+XDZsNq22D5A4DsPOnm2O1mi2W6QxNI5JfW3/2iKxLaR5nF
OWdkb+XkfA9jn3HkR3+K8mvUWllbsJGJUekKBmPxpTFilcEHNuF852U2FzuA2OYOrlLkjUdB
rVGNFYHuDgLPfJ3yDxfdLS14ntV0LiL3FS63UV9mMp4H81OpxrF9lLyHg3xbsHHhuVlylr3c
hpaNVKS+8cqj20FUFPE4JzdWtttfLOG884vBYz7smzyW2h5I7hlV6x9CpcqpBHnXGvRust8z
fJfF9w22Dju0Sm+e2Oua9UfyKKukKvc+ZGM2eD5eJxyhJDIdVK6gR4dKD6YY18PfvhDm2x2H
Fr9d33iK2cSM8CXEjKdCjogbsKdsIteabTyK3PP03L9bItm957j3WtgArNXUa50wrm/t6B88
8x2Lctr2232vdEvm165UgkYpToC1PScZ+B+Vh8Lc02Sz4Vf2+7bytvJEzGCK4l0yBdPSNSa0
B6acTVrzTinInj+SYL/+oPHaNdESzvM61jNRViak/Q4NEegfN3Ldhvb7Zbe03WO8t0f3bmO3
k1rpqKnSp9TZfhhkZ/L0j+rpvG1bdLxPkFlZ2gCM6syljHQegoalG+uCt6nXl3DTyr+npulq
27iCjMJFpXugkrp1eXXCNR3W8R7Ns9/ccs3y0nhZ2a2EZAYJ+VNIoXb6YJFqkubnjnO7bZ91
sd5t7WDa5/dliuSqSDTSoKsRp8j0wp2W/wAs8FveUzbRb7hGzOntrdsQtu0nT2xIev16YMVc
X9R478ecavk3Hd7e4udwuJJbeGAguGlOVFBJ0r1J6YZE6tj/APfrs2l9b8v22+2ZiryIIUEh
i6lda19VPHAhbtuy3/LXtOMcqsdu3NYSbu2dUnEzg0BNaKSo8DUYsEWEu7vs3GZJOc7zYSTC
TUs6URSARp0pkSR5YpDasbrcbzcoLS92Btqu4HAb9VcuzaQehTQCa/iMGKMDv3zXYcW3+5sd
6e23iSREJ/php7IzGhtddRr/ALsazwTVzybm/DLb42Td1hgmtG9uSDbmmVXEmoMBkWOoHFJt
Nrn4T80cQ5dvMcZs32+9jjY+7dSoFXKpUeoA/sxXkprPYOEcl57ue9OlvcXG3BIIIWkV45JR
m05UHOmSjtgwKzntr8nhW3C73HbIeOWLrcybTFJ7TSRRNqVHldRqrT7cU1O7hnzJwvlm+pH+
kfb72OMhJrqRFTzQUalfDLFi8dO3c74XJzLfuPw3EVre3wAScyAxTy+2VNH6BqdsOLVNuV9s
Pxl8YXm0b5ucFzdX36hLWGDNnM4yov3aVGbE4Z8rr2Y+SJ29wyM/UMdJApU+YHjiZnOOY1I1
FirLTVXL9n4YmiZmDAFa9q9TnhOHk1AKVNSMiAO31xSM1EtNdOjVzHQf8sWiSiLENkpJ6VNM
j1rgjR2IqKD/ALgP8MOBGsYVGZl01p0yp54tWGLAMAA1CCQSfHxxM0OnVkw9Kigp3wIDqUJB
GZyBr+6mGLLBf+Nq00kigetcz44iIh2BqKkZUH+OKwnLAkD7wP254mgMG1Aae9K0p+7yxIKB
WRiCAwIoPOv44qyJwVWrGvl3p/niwYGI0zAHSrE9sBkHRZSfy1HrYHrTFiRGMhk6gVzzJoMO
qQZDMaN0H5h4YGsRsvqBbMEZDp08aY1g8I1oCBUU9JGVMA9CM6vpCMvQU61wyjBUDABxop3r
3xaDZMTGcyvj/ri0w4YJHqXScsq4mToSIiW055sTlQYza1IAirAr6UIrXw88ROus96mv0qfH
GoTii6lIJUGpPY4h/wCDIhJJrSgGkCta4tMKqF6dCAQzZ/XAAtHUVr4aiOoHbAhOEqoDfcOh
6jFhwHrVKuxKseg8vDEjgAEaT1p+JGEU380uXPUA0p2p44tQHchT2Bp+P1phZErGmjUfAr/h
0wEJ0qznURWgJ8x4YtIqac6GpyJ8a4ocJs1GnJhUlfLvi0AAojClRQnKv7cSOKaFqTpbMDw/
HEfg7LrPtMSSTl/yxM0JWMU05lMyc8ycUCVAjx6wCpBpqPb8DgahpKaSK11dD4jwOGCo6Pkd
VD1U/wDHhhGYZlKFSWyIotO9cZIhoOaVFBUk9Kg4jKYsSxUnM0qPIYjp4xUZMtF6+Pn1xEpS
CQOh+0mlcOggjKSFJIJ6gDp+OAnVNVWBOmtACKkCuLEYairDM0yIpiAddWoVJSgox8euJadS
pFaA5VXEi90v6RkwP7cKEUGkjqQdRI74YMAgqCKgkihJyoPHBYZDlCGIQGlPSaePbEUkcZJK
9+gAy6dcCRKBrOrUR0y7jxxMUZ0KwPUEZtTNcK8E/pEYDAp18vxwIKAHNcwa0FcsRJkApU6S
2Zr/AK4CeN2U6dNdRqa/54kdxpcacwcwRlQ4URqGFOrZZ9x9fHDEYZMaVAXIf88ANqfpXWak
qB18a4laIEE0WpK5sG8cQgkQEah6SPUKdcszga0LAlS1asxrqHn5eONRadXZq0FGU9+hGKqH
ZSAEJGfQAZmvhjKAaISisSzZP507Hywi0m91nUVoR2p2HniUGVLPn9/5SvSmA6Bw7Ka10Hv0
zGFHjDaSelO7DMYlgRGpQk1AByOIDBX2n9RBr6VIqQO3TEdAjelWY+rsDlQ9KEYkRH2kE1Hp
IPbzGIHTVqFSfVlo7HAR6SslTQGnbpiWHIkVSK1Byr1oPLENCpFaHIUpTPOmJobMKqQSdI6D
KlO+AaEEqDQaVArq8frhWlrLBXKaQcmHXphQyUBDUqQPupXI+WBYUa1Y1Wpyq3X6dcUQXUFi
VPpBqfD6YR4cMa10+jpr71wI+oux09D6i2FfJnaRVoAMvUT069yRii0+ZU1FWNPof+BiP209
Q1BJUBSB0qSKdqYiF9I9SOaU6DtXExaIqV0DUQMqEf54GjnUqsdXU5HFpSSVC1AOYBy6ftwL
Eah1J1DJvUfHPEBMSrKtKEElD4jr1xVHozh2bMk5jwpmMWggy1WhH/dgIwBp1ykUr9vVajCT
el2yooAqSBTLEzpUOkFhqrkPDEjigYVPpUerLpnljIwqFasKgk5EefjhFSKSWbqB2r0r5Ymt
PG5YjMVrQ9O3UfXFETMlSqZeRH76jCdNRTIaGpGZDdMsS07ONNR+Y161IrngVoXkAUMBUdeu
f7cQqRXQsWVqagDQ9SQMWtynUxq1HzalelMh1xA6HUDQaUA64kR0xn01BYV+uGIJdVWiOWJG
dPPtiSVY1CaxSpAqAMgcBCxbSQ382tDU9cApnXWpOZJzIHl2xAKVY0ddLr36ZDOmJSJqAgls
jXKnj4YKcDUSEgA+4DRh+/CjpKC476RmT0BOBnSMr5gKKkGudRSvjiJZljr+5fSaeHjhJxQ+
pz/L7g5VGJBaiUQagtaigqCOvXEiBXWwHqLerrkMGAVXVelB3FMOEiw1KdINR9x7nCSFSRUE
Z+rKuR7YkJdB1BjV60oO2MqkFRSAQCq5gkmtfPEkZKtJkxD9CtMjiwJA/b8ytWlPHEiOrMpQ
KOpHSuLTDBWZs6EHOh7eWE0gzg9Rq7kZ18hXABV16qAs/bTnX8PHEjH0poCigzyGZPjiwYbM
FXQgo3j2PfFiwT11KAPTXvmaeGGRrDqFLNpqQB6hnlX64qpKMhvb/wDGrZ1BHXLGSExOrV1U
r+UeHmfHEyjLVcaAxC/cxzphXyNWX3ASSCR6a554MPkMZXK5gaScmHceB8MQ04jFQYhl3Fc8
vAYsOmDAawwIYkUIocOIqBQdJDEE1XOtPHEB07gkUyAOAwAZKHWpIBzHcn8MFiMrqJAyKADm
Qf8AliglGVZ5AHGknOlRSmFv5CC7ExsfWvpVa18+pxaswSnTVT1UUatafhga0NFJ1g6m6HsQ
R0p+GKs81ICDlSjE1J6Yijo5KEVIFS3UEeP4YWfkUagoaUK9RXz7V8MSnJ9JHqb0hh6a+Iyz
xNXr9IGQKAasdXY9Aexwxrmaz29ECUBev5q541jHdVpoKDI/TEwWoeHamJLM26G2VyKkHr2w
a3OWk4ZD+rvba1d/ahnlVJJz+VSaGgx14mrZJ6+mOV8P+NeNcXN2dimvppItKSS3D0rT7iFI
+oAGON5Y14BcXetiY09hGIHtitAAKUGLqLQsPWrAEgVrnmP+uKOm+D91zFrJApkBnl543IMe
p/HvwjvvJbM3u5XZ2fbmANvrAM0h/iC1Wi+Zxd3RUd38Q3UvL249tt+HhoPdvrghVVe7AHNi
PDGOeVOom5/8PWHEdriuF3V7+af0U9sIoK51IBate2eCy0zt2cM+Abnctn/qu5bwlks6l47S
LTM6ilfU1fST3GeHrYxb6835HtcW1bnNZrMZEjJCS0zJrQ4HfnrxWQli3oNS2QzzLE0ph5+W
OvHsPGP7f9y3DZV3Let4i29GT3UsoQspUUqut60Fe4GHuax9lRxv4fvt63K4Mm4Cx2O1dka9
ZC7uQclSPLPvmcZ5lb/6eA5txDg202iW+y77e7nupOn2VholAaUJKqAcW+sy1d8e/t53K92g
blu25R2UzoXhtkAdgtKguVOkMfLpjVZ69DxX4M2Pddol3Tdd7a2tlZgkUUQJCoaamYk5/TFZ
QkPxL8bz3EO2bTyqaW8m9KoYS48TVqZU8MEl/Znjt3T4T+PNodP6xytoZVX7XgWhHjT1YrL+
w5ds+ENkv45dxG/vBsasfZu2h9cqL1YDVRV+uGbD8OjcPhPjFlb2+5/1i4vdmWjXTwRKsnte
Kk1qMUgT80+KuB7fxKTednF4ZtAkjkml1Aqe1CBSvlixV4aZnrT29TeOqhp4/jjStdO2bTue
6XYg26za6umA0xxLqahNAKYAtX4PzEbqm1ja5pb8rq9pVOpVpWpwLVjt/wAU/IV7cPDDtE0h
iIDuxWONSf8Ac5AJxa1Bbl8XfIlhcRwvtcqvMaROpWSrA0IOgt1xfY30f/3OfJgT9QuzSaEq
ZQ+gMB30rq1H9mD7KMddWt3aymK7UwurFfbbLTnmMMus2ptq2jdd0vBbbZbyXl0VqIIFLGni
adMalwWa1L/EPyR+mkmOyTJCq62BKBmoK1ILVH0wfY449n+POd70kj2ez3EyofbZyAqEjwZ9
NcFw9OTe+C8u2R1g3PbJbMSn+VIc1appX0lsb1Nze/DO02PCU3e7vr7+o3CK0luYNKK7dP8A
t6dcArtvPgN04vb7hts9xuO6XCI7WiIiqoIrkT3Fc8FtEed7n8fcxs9wisbzap0u58oIlWry
f9gXVXA146Lj4f8AkW3tTdz7LOluis0mSsVAFQdINf3Yfsscu0fG3N96hM237PcXFsDlIqgR
6+41OVr+GK0YuuPfCPMd53VrC+h/pMkKB9V5qAI6VXT93TBLW9mKjevjnk+33N5BFBNfwWDF
J7yKNjFUZ5MBQYtc/hUbJxHk2/uybZtVzfNCKztHGWCV6AsMlONbjTg3Ha9z2u8a1vIpLe6S
oaNgVZexr0zwy7GZ8uWpNDpGutKDqa+YxNY0dl8ec93CENa7HeToyh1IiajJ1U50FPDPGfsx
1EL8V5VFuse1nZ7mPcHIpae22snuSgzwa39fE+6cC57ttsbjcdju7S0i/wDt3RiAvn1wToc8
4u/jf4xbmZu5LrcP6dDYqNLNE8xkc50NMtI754609SZrNXvH7+PkVxtG1rLusyOyw+1ESzhT
T0oBUD64x9hOtSbpwDmu2RLdbjs15aQgBTI8TnLr16DF9hGl2H4S+R952GXdII/00AFYYZme
KeRVFaog/Z6uuL7NMe/G+SW1/LaLtlyJrRC1yvtOTHQ0q2VRXD9hgtl2HfN53KOC3tpiZJFh
lnZWaOJnNAzfw+OHTOZW+5r8Ico43DbHarm4373lrMtpC6MhIz+w0p4HritjEvryi+s76yuW
W9tJILlQEkjlUo48Kg54LdO2uYCeQh9BLKajqTWvlh0Zq/seFc1uVLW+zbhL6dVVt5AQDmO3
fGb0XNFtfJEvZYorK6ilto/5wSKQNGOo1D8ueGd4upKZbbkO4WskogurqGI6J30uyg1yr1HX
thtORrtn+CvkLdOMzb9HZCCJhris5i36mVFFdSx0yr274zaMef3ce72N3Ja3ivbXSHSY2BUg
L4hu2NQYF3uJ0MTF3cn0kkuaHsK1PXFTIsrbg3M7pGEOz38gXORkt5aCnSvp74r1D9VfBxve
Lm8Nhb2M9xfrUm3jidpqg90pUYPshbtxDkm1Ijbptt1YhzVJJ4njVj4AsPuwT+mjK2fx78MX
HMNl3HeW3SPa4bCgjSdCUlbTqJL9gvTLvhnVNmTXnVzELaV4dQdI2Khl6Eg0xpiXQ/y2o2k6
jlSmKmNdw/4z33k282FhHbTWNveuR+tuIZVhFB1rQVy6YPtjeJ+c/FG+8Y5Q/H0L7vOEWZGt
IpGPtMOrIobSe2LWc1l7vjm82E8Vvf7dc2k8hJijmikQv/2gjPF9pROXOduuUmEYhYzk0ERV
tZc9QFp1xSn6uaeKSO4KTIy1r6GFGBHYg+GN6sSW9lJLJHHHG00krURUGpizZKFAzJ+mCnmb
8PSf/wA3X5RGxjeP6eGQxl/0PuqLoJ2JiOVe9K1xn7VXnK4uDfBvNuXQTXe326QWdvJ7bTXT
mLXIPuQChJK/myyxnvu/gZ6i578M804dHFcbtArWkpKi8tjrjBpXSSM1bLuM8XPVvyrWJj22
9jsVvnt5/wBHKdKTmNgpJ6DUBTGvsrGo4L8U8u5pLMmyWeuO3oZbmY+3CjHNVLnqx8Bnh6pk
iDlnxvy3iu6ja94sGS5kGq29n+Yk1TT+Wy/d/jiamXxe3/8Ab18mWfHRvkm1FohGJms4pFe4
RSK1aIeqoHVRXGb0xY87likjU5lWFarT8MwcaiyCt7K6ujHBbRvJcTsESGMFndj0VQATXFpz
W93b+375O2rjo3y52kTKyiR7aJw00QpUGVBn9ev4YPtrN8efTI8catKoRgPtPbPCYiCs0iBG
JLHr0wwPWpfhOGP4cXnsO6MJ9LyT2jopRo/cMShH61rnnjE9o/rcjyUBSQhPqpXwyGNGXxIs
KzJpVKj92XbLCc161d/257vY8LHId03vb7C5mg/VRbbO+htAXUESQkKXK/lA8sY20X/V5bab
PuV4tbe2lu9I++CJ2HXuFBp+ONfaT5OI5dvulcQvG8cyMdcWn1Cg7jtTD9hYZLK4d9IVmYAk
ooOS0+4geGK2CJbbZNxlYCO1nm1NQGNHZaeNQD0xi9xvnkE1hdW8hj9tv1FaFCDqB8CAOuL7
M2/htvj34Y5HzS8vrSJ1265sbY3Ea3aMomJIAVelASfuPTB99pxhbqyltbye2moJIWZGUZiq
EqwB6HMY3PVLL6jhSSab20FQaCNVBLFvBaDPLDY3mumfaNztI1klt540ZqRNJE6BjTOhYUP4
Yxuufz8Gj2i7nXVHBNKKZGOJ3yp4AVzOHTeR7fsV7e3i2lvG73LnRoRSxU9BqUCq54vtjM52
rfnHx1yrh17bWO9WbW80ih4nBEkTqR+WQZEjoR44p1+2e+5yoYtp3Qv7aWc8jMC2iONpGKj8
2Q6Yb1DL+0LWc6lvbjce3T3EAPpqaerwzxa1DrDNqlKoRoFZGAqor/Efy188WwzXMoYuwPcA
jvl36Y1YsW3HuObtyDcYtr2u2lub+U0RIYy4z7mmQA7k459dYZGk5r8P864XFHcb7tpFu5AW
+tnWWEMR0Zh9p+uD7M3I6eN/AfyRyXZJN922wU2IV2t/ekEck+gV/koc2HYE0qcP3X1Z7aeC
8l3TfTsVhZTNuesxy2zIwZCvUPUeineuK941Jq35r8O874Z7J3m0X9LLTTexN7kBYipUOBkw
8GA8sH2/bFWPH/gT5D37jx33bdvBsSjPbxyOI7mZVH3Rxt1DdsU71qxgrjarxJf01xDIk6Eo
UKlZNSmhUp9wauWG0aimsrq1k/SvbyQzLnolQqwr0qrAZnzxSh6Ds39vfyTu/GTv1rYAw+2Z
YbVnCXM6EVBjjPWvatK9sE6avOK/hXw7zHmN1Pb7XaCNbOi3ct03spHKcvZeo1B8vtxff9NZ
47Oe/BnO+Fbcm5bvDC9g7iM3Nq5kWNmNFElQNNfHDLWL1J8sO20boYHuY7aX2I8pZ9BaMEml
NYGkYr1GrFzwf465Jy/dRt2z27PIpH6iYkrFEvUtI/QAYb1gkdHO/izmXC7sWu9WbL7xP6S5
j9cUgpUhZFyLDw64Pt+xq/P9uvyWeLJv67f7ie2JXsNQF2sZ/MIupyzp1wa1cjzVoTDLIsq1
KZMCKMtOxHWuNwTqUUFqzyBUqysVA0AtmegFMz1w61j0i+/t1+Ubbjg5DJYJJCie7Lt8TVvE
j/jaKnhmQDWnbFP6OXdseaz2vtyMGQ64yQ6kd698H21ufAoF1Ggz/iHUHDYp09a2/wDt4v7j
48Xk9xvFpt088Ju9tsrhgnvqo1UaZiFRmWtEzxjd+B/S5PGQ+NuA7jznk8G0WMsdu8iNJNJK
a+3Ev3uAPuIrQDvg66zz8nn2a6vkr40tuF73b2UO72+7wXcTPFdWrKSrxtSSOaMM3tsPrnjU
0fnGmX+3u+h+Pf8A2e83m2tbyW3a9stskIQTwUD5SMQBKVP2AYz9ra11ZIy/xl8ZX3OeQps1
pOlrEkbTXNy6kmKNf4UFCzMT0w9dfgSb67+U/Dm42XM7PjGy7hb79NeKHtbmzdWCqWKusyBm
0FKV+7png9inzi/3n+1v5G2/bZdxjW13COCkhtbWYtIyd2UMq6qDOgOfbD9mrji4Z/b1zTlm
0pu1j+ntLWUt7P6t2RptJKl1VVY6FYUzwfY9SSKLlHxBzTYORW+w7hZar+8o23NbAzLcVOnS
jCmanqDSmL745c225YseZ/AfP+LbLFvF9ax3O3EA3bWrGRrYn/8ACqBXSDkSKgHGvsrLv+Af
HvwdzDl8Et7t8ccFpGh0z3JKRyv2jjanU+PTGPt7jeRHsXwty/d+YycVmt0sN4t42muFuaqg
jH2sGXVq1diMsNuLGu3f+1Tn1htk99FdWNx+miZ3tIWb3JFXM6WkVRqoMs8Z5vX5FgNn/tZ5
tu2zWW5w31jCt9As6q7yFlEg1KGGmnQ4b11vg+rGc5+HOV8Q5Ba7LcqL+6vkWSwltNTpKCdJ
VVpqLI2RBGN74su5FBunDuTbUEk3XbrmyglLJFNcQvGpZe2pgM/IYzz3qkW3AvjHk/NNzXb9
mgDBQHuLqQ6YYVzozuAevZRnjV6hn87PVdyPh29bByG62DcYvb3C1kEUkaEvqL0KMlBVlZSC
tMFuCRf8i+GOa8f4jacm3GAW9nczCIQVHuxhh/LaRB0ElDTuO+M8dWmzPlr9l/tY5num1QXk
26WO3TTKr/oZy/uKrepdegMtW8jhnVXUrzffuAch2Pk9xxu/tmG6wsP5MQLiUP8A+Nov4lYd
KYb1g598XnL/AIb5bxfjW271u1uiQbm3t+yrFngehZFlyyLKCVA+hzxS6bMq52f+2r5B3Tiz
7vFFHFdVElrtlwxjnlTuVLCikeDYJbv+Gnlu57dPZ301heW7QXts7JcxyKUaN1yZGU9DjpWN
DZWzXc6W0Q1yyELGiirsxNFAHiScZvkMaOP4z5w0E1xFsl69tbMyXEogkIjkTNlag1Cg8sc5
1rpeccOycQ5Hv3ux7Nt01/7AJdbdSzLnQsQB0xrWMHd8G5PZ7lDs1ztlzButzp/R2ssTLJJq
OWhepri+3mic6tJfiT5DiuBaPx+9F17Rm9sROW9sGhalOgOL7rHDsHx9y3f1uP6VtNzfJbf+
dYULMpNQAwr1qDhvWMzmIZeC8si3c7C+03Y3hRqG3tGwlK6S2pUIqw0+GKdaeZrj23j28blc
T2232E11cQKzPbxIWkCICXbR1ouk4dONF8V8DbmXJ7TamEq2UhrdzwLraKKn/kPgA1BU4Omp
EO8fHm+23MN141ZRNutzt0sqD9GPcMkMYDe5pXwQjV4GoxdXIxJai3T4y53tG2f1jcNmuYtl
KqyXuj0aGAKs1D6a1yrgnWq8Yn2T4z5tvG2ncts2m5u7FdQeaKPUBpGqh6Hp4DF9vWrxny4e
ZcG5JxO5s4d3tfZjv7cXlrMhDRvG1K5j8y6swcajP29xn9FK0Ho61bMivlhWBEhElCC2Vc8s
z4YcRFTrA0k/wgHLx/bipkDLpQCtTTw7VwKpAFNDUEkZA9qefniEJgrCoXIiuk5Z4NIlMmka
gR175DyxLB+3QqjZE9+37fDAQBKAnVX1fae30xMiYAI0imrN4dvGvjiRZkCqgqaMD4AZYkPU
qeiufie3lih0grMTkCOy5AZ5dcJH6lUAMNI6eXjgVRj1KaNl5/5YGaUbEqyCo8SPHpi1kVZB
RajLLpn9cKiRQFOk+o9ch365+eBozCfRqRSc/UKZgeGLWbadAGIr6ie/nhrUp9DBNJIGeZH7
sBOAoqHJVe57/uwKU+mOigmlO4Gf7MJORWM6cwOpPUUxLSDKy6QMwOuBGjkJAYjUB1BND/0w
6hrSlaZGuroM/HCdDAHIJBAOdEJ7YKDhvUPTQfbXsaZ4GdNqbSdKUAPc1Iwq+jjbQfWoI6dT
3wYcOsvryX09z1HlixaRJqwKaSfzE5fUeGIW1FRiftz/AIepOAOhFLKcs+3l/piaNGGICkFj
llUZdq+eHDgVMagxlDq1ENXtT/LChsBGwqvQVBOJIalm1xtT1ZD/ABP/ACwIa6//AMJ6cxWl
enj9cWozKaZrXOta0AHliWEXKjWGNaVHn5UxYqkSrABaUbMr9epxkow1GoTRtWlGHh/rjUB1
jTWQGHgSeuIwbPRqKpBQU1E55+eLEZlDAkgjOgIwKnTMmoyPcnLLBaiC6SrAatP7aYl+REgs
GSqjwrnhiNpBpRgrd27t44STa9OSlwpqzinTxwBEukOVVDprUauv4EYT8pUX7hJTUx61JAAw
NTxJWqnS1CO/Y4giZg4JGZqAOtOvbEMEyvXIkL5UyP0xLAsgArSq1Of+eJeHEelQQx1mpoRk
fHAycKpr7y6nB1eFKdK064TKRUFzp1BTRtS9/HCt0wyXUhFAaEmop5fjgR9RUjSe/qr54jog
IhIGcadXUg9fr2GM4pUcgo5YNpH5dPbxGeIX0qB1DOtDSoPTLxxKQIOsgjtkPGnXE1ElSyMp
YoDma9cvPCbTLqoGKVNfUB0wWGHlBAUKBozqDmfLLFgpxopVya9FINBhAdbLIVrRTnl/piZh
esrWUin5U7Ek5HFrU8Q3DPHF0GfQHx/DFI192b3YqZ6KKU60NR+GOjFrhBFM8vMYmT0TxOBL
WRStuGY0Ldj1p44HWWNLwFgm72LenS8yAajUihqa06g47cfDP9L4+sfkvY95vuGL+ksmmZVB
y9IGQodRP24893WeL4+YLgSQTzQTEe7GxBRTllllTqTjQspozITWlK9C3T6YTFpsF5bbdu1v
ezqJoreQSMlNVR3AByxviz8tfMfTHDPk7iHKNxWCOzms5okBRriVFjHagof2Yrx+WNZXlNlH
v3ybZW/HLWJri1ZZLqZpwUKK2ZqTT8BjPMFar5d4vu15xwmExRpEKyF3AJp0Va/jjP5KH4V4
zvlnxid7uKNVu3LwSK+rUoJAORquNdXU8u5R8b79uPP5tohkgS4uZfcVmfUBUV9VPAdsYnLW
uLnfxDuPELaBrrcre5muPtiSoYkEdjTueuNcS6pHsvw/xjf7DiLLfR+qdi8ILjMFaCgNdONf
0Fki24hFf7bY322D2pdy955PYBVyK926VxnAh5NHO/ErmXliW1rcgN+lhQoSW/Lo056jjP10
xB8X7Dv0fD5Bdx+010GaAMwaqmoDUqaasb6hql4+/wAqbcbzbNqttunhglYpDOQ0gqT3Ur18
8NYam/O6W2xW+57tFZ2fIFzihAQRe4xpStST2rngOn2O4+Rb69Kb1tljHYFBW5iCk1p2Dsa1
+mBM5d73yqz5JcbNxKxg3iFafqI5QPageuYLVUD8MJtW/PuRLZcPaz3YRW+7XKoEsbb1UbUD
+4YpKHBzWCYfGaiSJxogjY6gVIoBQZjB+R1XzDJpLtKvVzQrSo8s8alXy1vxYnJZOWW0OwSi
C+IJ1EjSqAeotXwH7cSnj6Zku9qF3NYWlH5Sbb1TAeoADOp/hrmFOM/VWJLCWZNh0zwPuV+h
rPFay0JcnM1UqBhwgst13d7+0t5dlO1q2oq8sySysAMgNBPXzOBnfXBZbjuF18ibht8ly09t
BAhjg1gor/m9PSoriwvDfljatxvubXJ2zb5LqBaoWt0aQKTlQaQR2xm+LWl+Adm3rbeTXJur
Sa1UwlUa4iZKk06awMbUj1+2s+Stye/kvneTaTGv6SOo9sNTM08cBcfJn3p+NSDjTyS3obTE
bRuhU5gken64Q8Y5ft3zG1tZjk0+oe6v6a2RkeaUlsgFT1GmHTj2Dcdp5Dc/GgsZo5ZNx/Tq
HiY6nJBrn5jGVXXuEfLV4Pax7MrQ7ksUI0kLrCgeoDzw1V2XVnNK+2xtdm1vtIM2llE7gD1g
avVme4xRO3a1Z2uY2tboKmSy3UhcSGmZUE/5YMTPc2PJRxsJxb3hfq9FFpQgdevaleuNJXfH
EXyGN2lbmmbPB/8AEXUhbr6iwTywafHJ8pWnIN62O9teM36x2sJZb6wh9MshpmNY+3Lt3wYw
xnwht/ySLXcbXZLyzsLCKQCX9WnuuZaZ0VfVl4k0w+mV5X8mR7vBzC/i3jcItwvhIxuLi3zQ
scqDwp4Y1i+FPsA9zdLWlJF92IujdGAcGmKnX2NzCfmEe0bedgWhMkIufbUFxEaVoM9NO+WM
wX5XHsWw3aOV0UXpt/8AykAy070P3dfDFheN79v3zi8m8xW1lLLswMixyzQxoBHUjJmALCnf
Enf/AG3pvUVlu8U6ypaCUFCUpEZRUOEbuenTFrVnhuJXm5ce+ROQLPsM8q3p1veQr/PVdVfS
GpqQ17YqxJkarkMPKN02O8uNj3SZUClpLK/t1Q+n1emRlFKU75YmJ3ofjbknI91+Pbm+vLhr
rdIWmjil0qSxQemgUZ17VxY6dXxT/CF1vN3HyW73oSPuUswMwnUe5po2lTUdKZDEfwuvjDZd
y27i24pdWbWxuL2aWKJ1CsyaqAkdc+2IX2LXmW48utLjZotlgraTzqm5ShNbJHUfsr44g8g/
ups7EXm0ypEn6qaNhM6Ae8VUihamdMWLHknx5A6cz2oyQmn6qI6XU6MmB6nLBbFL6+u+Z7py
613PZYdmhJ264uAu4zogdljqMs/tr44YVs8Fuu47k6xqXltwJqAFn0qQNQ75dMWBivjLZdy2
n473VLm1ezuZJru4jRl0vQZoRUV7ZYh+E3C+U8l3P4su92eU3G8wrcJbSCMF2aP7Kp3bFh/D
5I5TuO/7hvFzPvJmn3OVz7skiFXoT+YUFAMPjMtiz+NLWVOcbL+oVgBeRMdaVVgGHcjBbKed
19dcy3rl1lyTYLTa7cttF3Np3S5SP3GRQelfyCnfFMKz/p1tbbtvG5WsCR38lumqdEBdyqkq
G8emJV8/8j5t81bxxfcbTeOPpPtTk67ua1ERQBqDSGyy61pjUs1Nv/bgdxf483Gzu43NrDM6
2ayJRSrx1YKQPV6uuDfT1PHy5yXbru03e7hmtpYblZn1xSelgSx7N0w6xx/PGh+I9ps9x57s
NrfQrNbNeR+9DIMmFclNeueM3qNyPrLcd/5TbfJO27HZWnvcfnty12/temIgGhEgFFpQenF+
AuQtraXe+7o6mKT+WJLpE1ylIowQAtGqBXIYix/KOe8bj2Sz3Kayu90msrmOa2uJ7RoVR9Wn
UXZUCihwyDcWU3Atqv8A5D23nCOCsNs2lVCtEZCulX1jwB64Ma+Hyr827/Z738i7te2MKW0E
bi30xlTreH0PISop6znjpmMTn8rD+38QD5T2iSbQixlyrSFaamQgde/hjPTcv6fQj/8Asa/P
Med0NhFkTq9Qt/cKdP4fuwX4Y5vtBe7Vvt7uvJhd7mdv4ElwBJbWyD9RLKFV59EigtGmvqRm
TiUeY/Lny9t91xQcS2PaJ7PatapJdXgdXKRnUvtBqsakVJJwzysXqdLy/wBy+SW+DooJuN2H
9JNioF2JP5scFPTN+mIoH/NUN54y13ciz46bm2/tlZ9mkZb5IGJktfVKXM4Dk6fVqK9e9MXJ
tuN7x+F5OM8Mk3dVk3GL2iXuSGlWVoWBoWz1YjbjP7VccmPz3f28r3I2OKzJijYsYKsq0pX0
/dWmHr8M8/NfPfybxLc97+X+SbbxvanvJxctI0FotQo0rqZqUVfUTjp15IxPm4vv7d9mudm+
X4bLeLY217BBPGqzqEZW0UFA2eo5jHK3Xbi+V7Jx645QfnTere5e6OypbkW8Uhb2ACoI0gjS
QT0w0cvmD5fgtIfknkkVvGiWwv5qJGAFWh6ADIDG7zmM8X5ZGFf5qs1AjGpJqB9MDXL6jnSn
9pp19Vt60pl/+NeH44z/AC+WP787MfLEgijk06QAnRwain1xrDz5Man445JabByKLc5dsh3i
FQVNjdZxt4uKggMvaoOM9cunNfTnz/y/Ztt4NtMV/skF/wD1pSlqsrU/SP7SsHjYCtV1ClKd
PDFzHP8Ap1NxBY7xecF+CNk3PjtnEL+5MRmX2jJ7jTFi0jhKMWooxlu1urTadru9+2DkE9jC
u7323SLdyhANQeON2UgjsxIz7ZYk88+IeNRxfLPNbj+mrHtdJLeGsNIqPNqMa6hTSQOgxdfI
5+F5yfmUvA/ifbd12m0t/wBS862UIdaIgkkkJyWh/L0rhkWtCOM8fvuW7ByGbbbc7rPYSySX
IQD1lY2DgfxAsaMc8Z+Vnrl+OuZcj37k+/2W6beqWm2SNFZbisTRl1DldBLda01ZYbMp5ux8
V8mdf67fqa5XVwCMyQDK1M8drPXPj/1ezf2mbXYX3LdyubuCOaWys1ezJUEwu0gUutfzEdGx
y7+XSXx7du3JeFbtY7xtm8X53eyhikNza/o3rEENC4ZU6qe4xWBQcv59d8F+OOLXGz2luLnc
FgtkkkFUSNYtf2rQkkeeKSK/Kv8AlTdn4TzTjfJNggt7Xcd+ja03ZnT+VcR6o2WoFKOGauqt
aYpPNc71ncn7cX9z/L91sRt/HBbQPtm5Qe9PcPFrkSVJQB7TsaIcuwrgbsluLz5F+SL3464R
xa52vb7ea73GKKB5JFICpFArkaVozFvrl1w8yNdfLh+OeZ/G3Mub7ibPa49vu912xYtys7hY
wlzOsnqCUyfSp+7LV+GLGcrm+Ivjbddi2PnK8h2tIre+Dx28EwR2aKJHp4+jMFTXrgz02/6v
k2WFEPpNWrQU7988dNHMuPoz+01Y1XlYiOq7/TxNGgP8zLWfSBnTVTHO/JrT/FlxyHeuCc4X
lxnu45VleFb6pHoicsQrAadLKvTwxX0Z4l+T9x5RFf8AxrZ7DLOLa4MMky2ZOhygip6kyI0F
jStCMPxGbfY9Gnt7WDlO+yWkaJuF1tUctYwBLI0bSqrZZmnpH7MWflrfl5h8Zyb/AL18Wcxb
lQnuA5llhN+GP2RlmZVemnSwHTFm30c3eXd8lXnKm5d8e2mxT3A264EUsv6VmWJ2V4zVimWn
2tXXKmC+Q5tjJfOllJb/ADlxqfY4oTu8qW04hplLOtwVT3aeIAFfDDbkc7xb3/hV/PV7vl1z
3jrb/sEG0SQe2zXsUwmW6T3UqrvpWqR+DCo+mH/4tzne3pPybdcnb5Y4Ra7NLcf0mQJNIlsS
ICfeozMy+kr7IzUmlMZ/C6uXHXv2zcmn+Q9+h2i6i2vjF3a2x5JcGMNIZWVgJIKZrJ7YoXOQ
6ntiky6fy8/+UPkbiW18K3nhvHbC/nj3P/4v9Tvmla2qoBaSOSbU7sNOSigJzxvmesdzZn4P
8cbpzeD4J3P2ePWm88faO701l0TEEkTMYtLBwjVYdDljNk107z6pPgFr23+GuXDanZ96t3Nx
B7Ocwl/TLpZRmfy5eOeMz5c5s5XXEk3F/g4S8naWaeHdbe5t7jcKs6hruKjgvmoBZx9K4Pn5
XVv138rTlEnLT/cLsMdpLeLsS2yTSKhYWxU+4Jqkeg/kqD+GN74c/wBnivzlxa43X5z3PZ9g
sfdvbsW7pbwKiappIQzmpIGf3MT9cO4eJ7VB8d7Rece+V9ksORW7bdc2N9C0sN0dCoxcAFif
SVocm6HF3tjXP9Jbj6KvW5bJ/cjbpGbz+gWtqpyLrbKskJ9z/Y2qUivgcFqk9fO/zxbW1p8t
8lgtYkt4BNHKFioFLyRK0hy/iY1ONYOPLWR4vur7Tv1jufsx3BtplmMUq643AIOh07g98VF+
X1p8p842cfCe27lecfgurbfRFBFtzOI47d5UZhJG6rUaCtV0gYxzF/WyRjv7U+XRrdtxSTbr
eSXTNdxbuqqtwAaa4nyqVJ+3PFZNa58jD85+Q9lu/lCLfdu2C0tpdqnlgu4mHuw3eiWgaaOi
qGyzpnnhvwxx7a9j+cOabNH8SbRPd7HBdxb9Ei2sLPoW0laH3EljZVr/ACz0pSuLiav6X8KP
+1nl8Fxq4y9hAbq0hkmG7KqrO6M4OhzSpFWyzwWZXazIq+C8msOT/PtjuO1bZFsEiC6hu4om
Di59vWGkZaKFZlWmQw1n+fuvW+Nbrxeb5A3SwsdqvbLfYDOZ7ub3f0chDAN7dW0EMSD6VxUR
UTX+12Hxtts+/bbcbrC93cj2ttEmuGRriUnSUKNpGYwxnFtBebc+98AmhQ223zx3wsY7on9Q
kjQZRMWJ/LXvXBmt35Zz4qj5VcfJ3Nv6yl0+265oIUudRhK+9/KVVf0lTGMiB0yxUSyqbnyb
8/w7xteKGcqk5tbxbAkfy1dwI30ZhVkX8MPPhanfN53LYd94FeHbJd23STbru33GKMqbzRHF
E7tnTXocH0986YJ6ur6qTsPDfka13O1sNo3Xjm5wq13Ffz+7GJHJPpKM7K6kn1LT6YNGfly2
PO7KXhux2XJuGbhf/pbdVtLm0Be3lRFEfuI6MpBIXNT0w5il1fcS4DsGx/JpvrH3Ba7vtBu7
KzndpJLRxIiyLEzliAQ46dDXEZ47uR8o4runFt8tN0jvt2s47aQywXFiyhaVVXU6F+0n7u3X
DgB8cbjwb+ibBZ8V3aws0kWJrzb4whuJplT1qwLBgxYHqPpgzGrrP3o4VY/3B7pd8gurNbua
wt32ueZgVt7gRiPTMGoquUGpNXUeeGzwRY/Ldlb7x8Obk7cggvNNytzb3ilRFI8b+m2QqWzP
2r54ePlOJNll2fZ7DcebJdcq3i3tYods2+ziegiyYeqMLqdehd/+eCxdXFdw75E4vyH5J3jc
eSW8Ow7tYWsdltA3D70WMu8hlL6VWU+4MssumKzxT438rz5ZtbO/+LLS/ffYbyC03GC+W+IV
UnAkZTFGFrUgPl40zxT5HVxw/Je38r3H5R4buGyi4l2Fo4rma5hkYW/uLJmxodNTEencYLbn
hlzr143/AHLLbJ8u7rKgXS1vae5opTWYqVanU5Z43+GZ8sh8ZJGOc7KpRWY39vmWotRKp/bj
HUdOX0L8y/OHI+F/IMG1balutlFDFcXcbpqe5L9VJy0UUUBXPGpPGV/8S71sF5xjkfJdtspd
vtLzcnnFpZBXuI6qmpdKgjNyzUpTPGDfItN73bZdyuuKs9nfG8Td4hZXt5b6GQgn3EZqDSGU
5VGeLFHZvHIt1Sx5BcpMqy7VuUFvZvpUFI39nWp/i1az1xrAh56m87Lxzd7vhVsI97N/FI6W
8QkZ/dVPdLJQ6tQNfrngFZ3kvIr7bOLcF5nv9sF3+0v0tr95E9qRI7lZEmRgBkNIDUxczWtW
dnwfYuL8v5F8gREPbXVqL6wKuiQktGzXEY/KfcoGQ/7sWfawS5K8c/t4j3m++Tn5FtVpJDst
zJcLerGP5UUc5eWOJ6UHpJy+mOn9vmRrjJy23DNsuNv/ALlt+FxatapdfqbizJQqksckcZMi
BFxff272NvtrW9nv7X2+Qrqks1T0AkdMjXp01YctHX+FbxL4H3C6tp7rke4DYtvRzHCGAeR+
wJrTTngy1qXPlRc8+Kd747PFc20rXe3S+mCZl0V/7kFdR/hI641LYftVbuvxR8gbdsp3jcds
9iyAFGYprIfMagCSg+uL7VnJK1HDfgS93Xahu/I9yj2WyuBrtEYapWr9rkmgWv7Tiu03HTa/
2+cjm36Pbba//T7WQzruHq1sB0oOv4VwetW7PVzff287ZfWtz/ROTvuW5240yxaE9suBmrMG
ahxZWZ1jyzavi7l28bpfWm07dJePt7CO4kyVFfw1Gi6vLDWvqteH8N+R7nf5tg2iS4266jDL
uEqytFHFHWh9zSf2YpR9Wn5L/b29rt0lztG9DdbyEVubRUKnxOYJzH+7Dtcb5XPxb+3zcLnb
Tf8AINyi2RJ6La2zrqkkPY5kaa9hittdJzIxvOPjrfOI3am5g9yyYE21yQaMD5Hpg04yJSSV
/bqXc0UqOtK5UAxpmx7DxX4B3i+26LcuR7jDsazJWys3BaZ1YdXAK5nw64z7+FmMby7455Px
vdI7EwSXHvELZGNSBKGNBpOVT5YvY3zNrm5H8ec447ZpdbvtklnYyU9uSTTQ0zo9CaN5YJBj
KR3NS4CirGrUBoSO4xvFF1x3h/KeQ3Eg2SzfcHRNUixLXSPFsZtxqT8rbe/iX5C2q0W/3TZ5
ILOnrmybSO1QCSpw/ZdXTbL8N/Im82P6+y2WaS2c/wAmVtEepa5Musio/DFehZI5oPjvmp3k
8fO1zNubAvJbaSxAB+45aaYNUdm+fEvP9kt1utx2aaG3rpWSMq6hmyGvSTSuH7asjWbL/bjz
vdeOtut08djdIuq02mZAZHAGZLA0Un8tcZvX6XbA2/AuYT73JsFrtkz7pCxLWqLUqE/MxH2/
UnDz3h5vjs3z4o+RdjgSfctmkhikanuKyyCp7FgWzPhjV/pvyz3dTW3w78nXUTyRbBctGoqC
ygdRkQGILU8sYnTPHl2qKLivIZuRxbJcWk1vuDzrA0LI2rzbLyFcdNjd6+zcfLXxRt/EZdqi
2m8utxur5KSQuumjig9IoK6mOQ6jGGbbbkUMHxF8pTwvLHx27IiyrIAtaCvpUkYZ2qqts4Vz
HdNyfabHaribcUqZIgprHT+Jm00H1xrV9ZV3f/DXyRt2wXe97htYt7S1JM4kYCUeLhf4fPGd
1m5ywUjsiepQTnUkZ5+ffFjWrrjnGeTchneLYbCbcpUUM6xLq0AjInoBgtw/WpOU8I5jsAhO
9bTc2aytpjLx0UkfwlagnwocPPW/Ixbw/FfyRNZG9TYLwwBC4l9s+taVDD82C9fpXmfKt2fh
nM9ySY2O03V2qt7RKRs1JP4SR+YYz90DeuG8v2CSOPctrm2+ec6kjkUrqA7r2ONaZHp2y/B2
zx/H0XKOW7xNthuVM8EEUWpV1A6PdyLanpXLpi9o7mPLuI8SPI+Xw7HaSuIbyRRLOELFIQ2b
FR4Lh6uRRZ/Knx7Bw7labHt9624K0ayJIyUYavyELkW+mGeDf0Gz+Gfka+ureBdguo1u1/8A
jNKoijkJXVVnbJMhXPFO/HSYzG9bVu2zbjc7VucLW+6Wx9uaBipZSDUHKoGXfBGdcVqlw0yC
LUZ2YBI1OZcnIA+eNUc81rrr48+SF279bPsd+lkie47tDIF9vrWmMTpnvi/Li2bivNtyi9za
dsu7mNm9qCa3iYh2GZQOozp3xXuGf5c+68c5RtV/+i3bbri0vZSB7M0ZViT9tB3r2w/Y8yz4
Wd58efI0G3teXmw38VqVDPKYnCqo6ayOmWLTXDsHF+Zbv7zbDtV3fpFRJWtkZgC2fXxOLZFe
PNQbtsfINu3IWW5bdPa37MEjtZo2EjFsl9JzI8KYvtrMx37h8Z8+sNvlv77Yr22tI/U8jQvT
T/EKA5Yz9mp4zce5XsZ0qSq10g9CKeHfHaNT2eiVtxv5ljijee4lpEqINTksclFM8zi2RjqN
He/HXyHtO3Pf3+x3drZJQtPJC6gU6Fqfb9ccr1tE5en/AA3Z/Nt1sN2eNXptNvtlD2/6saor
iU5MkXuKw1ZZ9sLpmR5f8k7/APIG57w68xmnbcLNigtJ19tI6H8kQAA1ePfG9mY5xjkkowzo
a1H0OBNZx3h3Pd8szPse0Xd9ZI3ttc28bFdQ/KSOtK4x1004J9p3q03g2V3Zz2u4qxVoJYyr
6iQnQ9an9uK39rNr1T5D+E9n4jwqK63DkDHkgh95NuMf8mVqjUsTD1DTXqca4+WevPhT/D/x
HNzuK7v9wupNu2Sxp+pkiFZnLIT6NQK0TT6sZ6u3HTzNrLcj4tAvMJ+O8Zu5OQDWEtJ4UbXK
tM1p/EpyNMsN8cp19nPvnAOcbHard7zs15Z2oOhZp0bSKmmkHOn44PlrHRsPBfkfdbJb3Zdl
vLq2YlDNErGOq9V1Dqc8U6i6iw4NZfIcPJv0vHYbuDfrc65hCH9yMVCkShvyVOeoYb0JG7+a
d8+ddgs5Nq3zcWm2i9UR/rbOPRFJrFPbaRFVh3qvfBzR1n5fP2tqhHAqCRVcx9cdVzP00/CO
G8l5XvC7bsts08mkNOxOiOOMmmuQ9BnjPddJG2+Q/hLmXE7H+qTTx7ttsNP1F3Zkn2GGWl4z
6l7Z9MZnVny4/b63Wb5byXne5nZ7Tkj3Uj2kCx7ckyaJDFPQowICs+sADVh56dPlW2XGOVNv
CbGNruBvDsEisZEKS0Pq10OYoM9RwddHfMjRfJvxbyzgYsf6pdRTwbnH6XhdidaAFo21CpYV
6jLFLXC2c9Y0XHv7fPkDeuPLurXcG1XFxGG2+wumKTXAI1AeCFh/Fi3Xbv4eW73tu97Fu022
bjbSWd9bgCW3kXSQeuRHX6jDPRHAby6kVS/qUigJJJ/f0xUyPRfh74WuOdPd3l5cttuzWKH9
VehNTGQAtpjr6TpAq3hjN7tuQ7kY7+h7kn9QvNvje72uxnMUm5RI/s0LEI7tSqh6VWuNUS7N
PBt2+35u57e2nnEEXv3TorSCOJCFLsB9q50rjP3lHK1t/jL5Gudtj3C12K/l26VPdW4SJtDx
UqHyzK4Pt61VNsvEuU8ivJLXZ9unv7uMEvFCtWUA0JzpSnnjX2WB5BxDlXGpoE33arna3lqY
P1KFVbTmQp6E43Lrn1Z8OB725MWipdhVu6kFjn0NDXBz43Zvy3Hxr8Ucm5pcBrVf022wtou9
xl9McVfUQBUF2/2r9cZ6vrUnhc++IOWcS3K1tZIRfwbk/tbZfWfriuHJoIwB9r0zocZusXqM
tHxfk0+8LskVhNNvAdof6eFPvhkBLCh7qBhlXz8OjjHFOR7zvK7NYWjvfs3sCAgoUcmjBi1N
Omnqw9dSLjqVovlb4u5F8fzbbHuVzFeJuEZZLiIsQWjprj9QB9OoYpbVslxg57yd4lh0kr41
rl+3G631UTTSEd+vQjuPI4HP137PtG8b3fQ7ft1rJc3k50wQxnU7GnRe/bF1ZGpDw7Bulxer
Yx20vvyTfpo0CMC8oOn2wp/NjO+L5aq2+IvkyW4jtrfYroy65ViVxoBMJ0yZkgDScqHB9kz/
ACLjnIdkv22zeLCSwuko720qlfSfzr/Ep/iw89zRP03nKfigbP8AEW2cwuJpoN0uLiOK4sbg
ehoZq+00VDX7aHv1we27F35YPgHwNyvle13W5wwi1tTbPLttxIR7d1KG0+0prqXMHM5Yuu61
9WB3XY932fdptp3aBrLcoSDcQuuYbTUEj6dG6YfsMlb/AJ18VDYfjTYOUSzTxbtfTCC82uWg
AWZS8bxEfwhasa51wfzt+R3/AKs1t3w58j7xFA23bNO8dxALq2kYKivCSBUMSK5npjd/oup4
rv8A0PmUnJE4y+1zx70PT+lZdLjV0NOmk/xdMF7g5lrYr8CfKi2m4wS2D26W9v8AqZ4GlAF0
kZqQhjLK7r1CnFOnS3I81e4lRgqUyNV8iMalN6ljUfH/AAjfub7r+g2lasi+5d3cv/jiQHN3
bvToBjF8ZbP5F+CNz2DbDu+w7jDv1hZH/wDLRiotxbyAVq6KXGj6Z/hi5tgyPPd44zyTZbyG
y3KxltpbuJLi01jUJYpftZKV79adMFv5pXG3fFnKbnl0PF7yzlt7giBrrSokWK3uSv8AOqMi
tG/ywX+mTWuK1nzX8X8O4RYbXZ7XNdDeg5W9jlVjFcwsP/PG32hkagKA98a5nnos2tHsPwJ8
X7rZ2MtrzRWkuokMduqxM4YrRkKltWRqKEYzlZ65msZyH4Q5fac+vuJ7Gv8AU1t4Y7mG5ICA
xyAlPcJ9Kt6SP8MN8Ev4UvJ/iPnfGriyTeLAwrfMIrW4RlkT3JMvbLCulvI9cH2yacei2X9t
d0NjjtrvfoLLlc5ItNtloYn0CpiJNW1jqKZYJLfVf8M1wb4R33ft13BN5k/pO37RObfdJZaB
0dQTRBUVpSoNaY3rXkmn5v8ACW8bTd7adhnTfdi3WeO22y/gAp7sjaRFKFJCt1oSaH64J1YZ
1JWkvP7Z2fa47O25BCOZaXlG2S6fbmVK1VX+8MvQmlK+WGW/LPfvwo/ir4c3u+ubnfN1v5uP
bftkrwyXmr2547iE6ZAtCKFca661z5mTVP8AKPxjyLYdyi3CO7bkOz7u+rbN5QiRpiwr7clC
fWM6eOLdjU3Wi2f+23er3gW575f3S2O5R273FjtzpUkRKWb3TkRr0nTjEu1rueKr4c4PsXOb
Lddju7t7HkXtLc7JcZGMaAdash+7VUEkZgYeplUnjzvdLXctq3W6229QxXtnJJa3cXZZIzpc
Z9fHDRu1XS69WpzU9vEdgcCp1uioJB9App1dTTzwY3OjGeRjU0NPuOFjDSXUhZwQMqGo79uu
LGQpcPX06gi9q+HamFrUwuJfbpT0rQRimYOCtcoxPIjKalWU6l8KjywWafhc7dzDe7CD2rPc
Lm1jL63W3neJS1OvoIzwSWfA6u/Ktu9wluGZ52MjuxZpWYszM2ZLE5k/XGpv5Zz9OViQTpPX
r441rNFC00YKmgrkTXMeeCxQ7yOrqQfUpIqKg/hgNo4ZygL6ag9GHie2eCoSXagAJ4kMjdCc
WKl+pkdiEYqBln0phG2gleTLU2pkGQrQU8MumIzD+46EEMwjc5ivj4Ya1C9wq7NkA1fUfE9/
xxnAZiWfTGCxH3+f/XEsPJkKrkTQHxp+GJF7jAMrDOtRT/jvhJShyxIoWGYNO474kJBqOX2g
ZKc698Wg2tGkFTVfzU6V8MC0SnShKA6Sc69vHENCNahgpz6jxxKJY2Z2Go1cZE9K0xUmBjyH
Q1OZ8OuCxaeJNSE5FRWqk0pjJCurUWGQ6Z+B74QZ0YkqCVyzHiD3w6Lg4VIj0A5NkSDmAMGm
HGVQFqcgSTUgeXhiIxABkQRkGBJqcvDEsNTV6AM1oVAy6+eJGdXVw65IciK9cItMjMpOr7R1
r0xJItKFgpKgivb9lcWmExqR7gq6/s8sROSAaH0rWrAdvxwJGWrWrAHoGOXTxpgCWT/xHxbI
/j9MIoUKuKNU6R6T9MSIMTUli3fLrTv/AMsOE4OpnqdJ0jQRmfp5YLFpLIQDqarNno64gSrE
5WSjAqCAfDLI+eInLMhGn1aa5dm/464MAWYBg33Mc/VmAPH64iKNPUT6atmFXIDLCiic5uAK
dx0P4YFpjQzVNNLdX8P+mJSHMiNIykZ0ybsR9cTQ1LIQSgI655HEzQFFf1M2YPShqMQhywBK
KSQcx5/TEdGJGIOonI6RlmCPDEdAsmkkaDrrQ59R44DojpJJY1UnMjqKdsWDRayVKgZIcwRn
9MGI5dQ5UUcU79jhkFkBRgaEUoPxzws/UtThoyNLAZitakeeJpIxJHqHqOfpP+GBrUYLajXo
VFNXpr41xYjopVQlQaCob/CuFHKkLo0qCaHUPHCCoGAB+4A0IBJ64D6AuyMQqgHv4VxI4Eb5
EgofU3jUf6YFpMhU0T1gD7T4fXBuGYdnUIqBaEAkoM8u4wSrEZJlSqV00qc88agFE+ohiNLE
aSp74lKTxEMHBZVANAR/liawgoCgFdTfxD/jPEf/AAdWFCFTTn3PXtTyrgVhJGiksNQYAgxk
5V7nUcWjEZYsul6jvpBy/HGtGBZlHX05516eVDi1rWZ3Rf8A5DgHzp3FcPItcYBUZd8jjTB6
nx7UwrFvCCsBCEFh+bLPHMYtOLV/Wxy0J0yICBSv3Drjr/JrH2Rybat03f49jWytzcO0MbL2
Wq+kvXpkopjHflUfLe62s1rcSwygB430sKgkHqBUYI3jkU5EMCdQoR/zxLHft04s9wjuCgdY
GVnQjuMxQeGN8ZrFj6R4R808S3y/s7C52sWNwiaVuXlHtgL18KVw9csaqPkS5t+T83sdq4/H
byXUTo88xlUKVDAnrlkO2McTGpW0+R+KtecRMP6mCJLda1aQDJRmFJ6nv44LupTfB/GN0sts
vLyZo/alekFGFGGR1kA+nDdNsx578g/Ht7e8+isUu4FuNykohMoKqtanWRmAB+OKc1aXJ/hb
beOy2ST77HNdXhCGKKihRUAtQknTn1xT7aL1r1yy4Hu3Htjjt+F/pVuZU/m7lcnW7qRXUAdQ
p4YbQPhEu/bZFebfut9DecgkkZqB1dmqKpSh+3FiiffDcWvGr2bmlzbB2V/ZSoVWBHpASv3V
74KmC+C7XZim9b4ZxC7SOoDsqL7S5k5+Qx06+D9tHs3zLxXb9y3G33K2lumuriq3tsFY6T6Q
tKjIU7YzIzrY/IHIOKjgUswEbrcJ/wDGiZwJDXuADgvK1kPgSx2212bct5/UR287HQyyMuhU
Hr1erDZh155Ldx8s+S4457oC1N0VYofSV1aQVY5YJKY9M+bNyj2/ZrLjtlNGpn+2GKjye2uQ
9I6YozV9uFncwfEaC5jMEq2il/coCp7Vr3YUwdX1PlS8cLcyFT/MZ6seuZ+uKG16D8GCU89s
9LaCQaL3yy9PnU46b4J0+kbzZt6l5Za7gspewiiYPDX0ajXOnfHModwnO4bXuFrs8iXN8FeN
Y4XFVkOXqYHKnjhxPEOX8D+Rtu2TVvu7gwO9I7F5/cZqnoKfvONc31PWuBce3yD4zXbbqN47
qWGUIrHOjj0Gn0wdfIrv2Lad6Tgf6C3b9JeKjooU5q5JPboa4KqQshZccsdv3e6W0upSEebV
/MZq9FLZmuKJbbRYiC/ZIdtMdusY/wDnSuSWP8IRs/xxJw8lS9vtj3C02Z2bcTqSI27DJ/Bn
6LiZtY3gnHOfbVyGzk5TeJNAyOttbNMHkr/n+OGeteLb5FguN8iudm2i+W2uo1rcQRsA7A1A
1HMgdcHwtfKe/bRNtN/NaXLEyxnLOpp0pTCXov8Ab9vO1bVy2MXcwQXETrG8ndzQaVPbLxwy
L7Pdtu266teWbtyK7QQ7ZLbLFHKxpQIa1PlgTj36/seX8TurLj0wv55H0mmRBBr6g2dD44vg
VbHfdo2OPZts3K5S2vREsQiPUlVAOQzAY98WFx7Pt19Zcv37fruP2NsuIVWKVzQEKKk+FMO+
L4cfJp7TlvE2tuPSruM7Sh1ETD0lG6n8OlcWBS/M28QWXH9i2SO4SPfGlhEERIBQhQoZ65KN
Xc4MH9K5Pk7aeaQcMiu9+5PDPCrRtLYQxrEsrUyCuM5MUaXvJ4W5Zw3aodipdtG0DN7VPT7Y
AYau2nD8LPVpyraH5FNtuw2189pPbKJNyaBqOItNNGVM2ODDrN8jveacZ2a72rhHF2gt0BMm
6H1MTSjSju7fXpi31isz8IWPyNfWO7tt+72+22pkP6ppYvfka4YZuKkUNOpOK1S1c/EG4WW2
8y5LabluqbhuszkPdhsp2RiTp8czjX1rf28ajh9pPsNryS+3hP0VrdXbzxTS0GqM9D5DGWcQ
c8spuU7dsb7ARfLFOkzyRGqiKoBate2H4GXWT/uSu4t2k2Lju3FrneZZS8VnEakaqKNVOhJ6
eWD6+atuvNr34g5fwy423e99WFLX9XGHeKQNoNQ38z6AHFyu+sr3jn9nNyU8em2St5DFcxzT
SRN6faFDUMOpFMUP+VH80b083K+KbXsVzCd8FwZI9bCkNaBHf+HPxxZcG+s5/cNtXNouJWU+
/wC/2c8CzqBZW8BiaSQitVBLFtNPwxcnbHzmGOeoAMewNKV6muNz1XqV9Q/2qMg47u6xgBhN
GWIFCfSev0xnqer7f6qGz+b95bkG48d3BIr+C6u/ZFxMMoo/c06QB2HiRivg5ux7jv8AuFrY
R2kUdne3uQCR7etV0qBTVQqNODDmszPy7eU5c8cHGLgQfpAJZQUFyAWqCBXp5VwrEXKtp5Bu
vGb6523dLmCNUaWayv0CiiDUVBI/Z2xY5y0Px1yDfNx+Iprv9U93u0Szw28goz6kFIxljM+X
S/Cn+Aortdq5HdbkH/r001bmSUgzE0YqPECp6Yvyp7Gr4ZNyC54jeScpDm//AFL6P1CgUiVh
7dFp2wpnvnT5H3/h1xsTbcwEM7s9zF/+FEdPQT2U4ZPE8ibn29c6+Tdp3badvFtd20sS+zbg
uSFILO9M/tyr4YOsXHL3D5WsJ5uVcHuxbmW2tty/+Q4BoobTp1EDIVxmxflNzLlW82XyZxfY
Laf2Nvv6yXKgeqQBiumvhjWeazv+2L61Fu3Kd92+2i/TXc9vBNJfxqNVWUooPmvXAZHnf9xW
4chsuCR7VZQXU0EjBNy3RTRGjCmqyac/Wc8b5snrPc24+TmRiQKevwrUCmKtySPq/wCEoZbD
4Uvrza0B3ZjcOJIQGd50UaAaeBypjH5dP6deK3i+5fK+9bzsEPNttA2f9crB7iNVYyIpKEoR
XrSmHXGN7ebvy1flm02yEzDjJh1TssVYvd0khTJTKreeLPDPl371uEmwcV5VuO3IkU1s800N
BVfdKLVqDvqxnDb4y+873c3fwpt/KNwSO83W3EV6ksyAp7olpXT/AA07Y1PR1cdvOef3tn8O
QclFlBJcbhDAr2rjXEv6gZ0HemLmenp4F8BXO8v8k2M9or6JZmbcHjX0iNqghsvtwdHj+ea9
Y+SVtbD514zu99YPdWggWJAE1K0rsyKy+Lxkg0wWiX16xfyX21G/3KV7jcLYpqgsIEq8egZh
KZsWwjp8H8tv73deT7juu4hv1tzIzSGT7h5HzpQHG5dHHORd/DO5foOf7ZdrtrbtKhYRWK0q
xP5hUHNBmMHdb5fWf9V3fep5DsO6T7ZuC6g+0bpbegkdRqIqB/2k4vMYu6rrK45TsXxHdS7Z
ao3ILaWesEKBlWQ3B1lVH3UBrjLVrqn2++37YeH3d3oi39pI5ZbqaMB0Pts0yaPw6YZF+Wg2
a/hm3u725rm8nuLZf58csdLb1ZelqUb9uHpMlvtxu3GPjjcLnh1ske4RbjN7UMUXug6rhlaq
AerL9mM8/KtZv4+3rnHI+bbDJzjaYYXgtbmWxumiEbuxWn2EekgVy/HGuv8ACkeiycr2q23y
7sableSRKytapbvJDkKsEagDZeeDFr4f5nPZXXKt5e0tjZ2rXs6w2Z9LRprNFI7UHXG/hjjv
xuv7cNpu9z+R7c20yQtaQSye66e4CoAXSAejZ9e2M9VvnqV9Y8fvrW/vb+zaW8ufYUxXUd3H
SCpNGCEqA9fxwdJgvgbfLuW+5XsJlpY7ReOtha0H8pGlkBA70quWCz1nnrY+Yfk7fOR71zC9
u99dv16uYVEie3oiiJCDSBlljXx4zxPyy6NoCsa1LAMOwNcunbDXTX2Fvm78i438RcUm4XAR
PKtusywxCX0PEWkdkAPVxUnHOfC669F8t8hHF7/hvKktIbjdJ62dwZ1pqjkRGIoPzKSSPDFn
gvyqv7o+YtYbRZ8e/RW8y7vC7tdzAmSDQyisJ/Kx8ca5uK33Fn/btzO73L46vrea1ijHHh7V
voGkyx+2ZFMg/iNMZ/LV+HnHxNzu5vvlDeeQw7Ct4t9a/wDyrXbk9dsisP5kSmmZp6u5xrqj
iTHrXIm3zfONbxPsG7DcrSS2c3Gx7na6HVdOYVyFbWtMsiK4tY65uLb+pbNsPEeOKJby1t5b
WJYU2+FpNVIlY61QNTM1r3wSNvN/kz5DO2fKPHLvYYZduvJ1S33W4uYvbW4gmkUKjqfv0jv2
xWeMzr3xy/3W8h5PBLa7JED/AOu3Vuss4EVdVwsjU/mHuoANK41zcVm318vhnWTIUVelelR/
rjoZH0Z/aPNbmblFpqU3k9tC0UZIDOAZAdNc8iRXHG/+xab4m2zfLHgvPTyWOaBJhc6Fu600
pFICaP8AUfXF18sz3louObDtXJeM8S3H5Bsrex3+xSMbTrcIZAVAi1oaAsaA6PHBJ4ZMx5F8
r8n+QT8uRXI287Xum3xG22v9MpeWW3ds5Swrq1dsa68jPHc+/wDltv7l4byTj/Db+SJmS2lL
XkrKQsZliQVc9jWuM2bF3J9pqx+U9s3zdeZ/H97sSTXO2L7cklzbn+T6JEcMSMvsBxe543f/
AGecf3ctt5+QNr9vQbk7aPedSKj+c2kOfNemOnO4xudPCS4TMD1AE+NMWOj6h+OJJrr+2PfL
fZZGfc7f9SWWA0mBqrMSBQjVHX8Mc+fkd3xzf2+20V58U80hEYnM0gUp9+oeyNNB16dMO+q3
Z49cTYbODlV5Pb2kcC3nH1hlmCAI7I5ADZUyUj8MZnPusZd/xjqtL0WG77ZstxeXUl4Yoyiw
Q0tHRVoWLUovTpXG63Ga/qUHHOc8pt4dnuBtd5JaXFxue2x+5Jb3MtvQl41BOlgtajxPjjMZ
m7YxXz/Y8hu/jO6u/wCoRb9ssdxDIbiaD9Pd2TEhVZaU1KdQVgRXPG9Y75+NfLfsBW1Bj6ex
7eJFMP2dsfTXxQl5f/28ck2/Zy0+9R3ErLFA38+jCIqRTP1RqwGMT5HXwtYtv5Js/wAEbPbL
Eycjh3W1l222mo0iyNdBolo3cqcx9cWaz/TmWRvV2TYo98fkiWFovyY9iWksBP8A+R1jCHSp
PSg066fXBh+PYx3x3yfjs21b9NvV/a8a5tcbhcDczLpSWMkgBUD9VHl3w/n1nnmfhT/3b2UD
8e4rObpXaCaVBGc3lV4lrIAOw0iv1wbYup7HhO6/FfMNv4TFzF7ZZOPXDR6LqNwXVJDpDunZ
S2VcM61vq/tjpBJQUapOQPkMjjocer/21yRp8r7SJnRSVnWOoAIb2iNI8Scc/wCk+Ma58fSd
xscLWVzObUC4tuUrexyhKMv89FMgPnGSCfDBXKf/APS+eOZblw/gMm57XoS+lu4oIp3XV7Rk
LEuo6avTQVxvienr4fNHPvlS859suxbZcWivyLb5pFfccgbmOYUSPSBkdVKitK9MFgvuft65
zbYuSS/2z2G331nO+62bwGaB0LyxxwzGpI60WP8Adg4a/rn5cH9rvKd6mtd648t0ZrOws2ud
utWoxR3b8rddJJ6eOK/Knw8R5Pecy3vlzpyBZbjkUjQ2zrINElf/ALOPSAtB6shjVPMfQHyb
sfIp/wC3rYYLu0mk3Lbp7aS/iKl5YoovcRi3eiqVqcZ5jP8ASal+Q+c73xH4i4VJs1yLa4vU
t4Wu1AdlQQBjpqCM++L+c89a7+WwmuxHzzhN49o15ebrtE8F5dgAFY9MUvuN5BmP/wCliH5Q
/K8vJ+M/Gu8vtlzfbneOdUN4ArSWsbEe4W00JjCVGHn5V+HxJKHWTM1krqY1yJPjjd+Tz1Me
+/2mbrY23Id4264lSO63C1AtFYgCVkapVa/mpjn18tfhs/irjm7cP4ZzZ+VRfoP1Cyon6hq+
4RHKagn7tXuADxxr5rE9jQceh2qPinHYvkySw/q0KRrst49A4SRV0Kx8aAA9j3zwdRMTuXMe
b7H/AHA2VvuYt4P1CW+3Ikan2pbCWQlHVupbXXr0Ipizxcz1T/3X8x3CXkUHFHSL9BaJDfRu
B/PDyq6GjeGWN8fA/Ln/ALWdssp983Perr+bc7PbNLCi9CXqrkqfzaMh54x1PW9mN18acpuu
X7b8j7ptMkjbldMW2WB2X9THGkTfp1HksnTtXGuplxnPNZS9m+bLHY/13N5DNx6K+tZL6K4C
GeFUdSJlpmErlljNu/EU6n5bzlXEt43r5k4zzDbWS445Hbwt+sWRTGSjsxCgdSyHLxxX4xqJ
d5ubXk9l8l8X2G5jn3y4KSW8KOo9ykEQOhq0NGTSfA4bLB8uXZmtuD/F+wWXJZ0sbuPdLe5e
3kYe4qfqQWJXvp/NTBmrddEvC96b54j5jGqy8ektkkjvA4Kf+HQwWh8tWX1xe41Lkrh3r2Oa
8K5xx7jk0d7u0e8vci1jkAaSIzRnWjA0YEKRXphzGL7HTfbxsnB+B8KteSSRC5269iknsgA8
ixn3QziPqfa9wV+mCc07Gr4/yHgPJLDkVxs+7tcW36dotxJDgW8TRvVlVwDSmo4sqfP39tfH
7+856m5WNH23Z9ZluD6Q8TqUTR4sykNTwxv+3y1z/wCrG/M242t98o8iubGRZbWS7qkyEFXp
EisUIyOYxdTyM7GHLZ6aaVFfUDnnjOBBITRjSlcwBngVMaxnSx9bUBYeHfCqIO0YC6QTq9Of
UeeHBqNz0Y+mngcz5YlqSKVdXTMDJTma98GNQnLAh6A6jWvXMYsFKRyVrlpIpQCn1/fhQdOp
KrnSnXti1YmjXWTlVjQOw61GDWsDKsepSWrmQD+7EzZDagCGpq8KdcvHEpBPVmqQF65LXMYB
DvCoTVqzFAag5eGeKHEZzOqvpzqp8j3wjNHqb7CCrn+HoPL6HEyJZFQhaa89XkMTpDO6Me9G
NATQ5fhgsSVaijK/fI+A7YlgRUy17jKnUZ9vpiBnLKepB7HwHhXARPIQtKZmmQzPlhAlkhGq
oplkxPf64kBUUKw1dfHuPE4zUc6FiLJJTVlXrmMQ0WsotSQxOQ7Co8MKIse+RPQdxiawRY5P
mR3P+eMinVieoID5ilAKYtA0jZtRX7e7EdD50xHAgEE6upGdP8MTBe5QEBfTXLOg/bhamjRw
si09TMKV7jxwY0IKo1PQkVrX/l0w6UZaNzVmoOo/Z2xA8avQkEMAKFf88WjDLQllI9R/J0b6
54iNdbClSvgDlkf9cCOyqx7kDLLPPxNcRLT9zlqoDQ/WmRxI4Ueoint0yp44EBiI1L1qpGVa
j8cUGDCsash9NNR8BjYMPaSlOhGdPPyxLYkjBDa8lyoampI8BgIDEhJZmHqGQ8gcFqNGxA8W
GVT2GIaIs5pTKh8P29cJgDHI4LZqdJDLkKivTAiHuoFqy0yIOXTzGEjByy0uD9oP78ZoNqk1
kdI1zbLI/wCpxL0VUKk/xZU8jiIQXf0qGKqaamz6d8QwZYEZnTUipH+WJGjKyaj+YChB7d8q
YieUn3AASWHTOoJ8MUWow1GY0pr/AC9hTriAxpotPSG6eBxJIr09OfUtl1NP8cDQgYwRIM2P
QE1pXyxIFWDg1Cj7tOf/AFzxqDDNKxkZyNQYVp0GMikzsTqClT0BHYdcsS9EwkIY5HoWFcOt
QyS+shl1ArQUORIxapAnSJFLrqA6g5dcIEVQLWMUAzBGDT6EqxJUnVn3P78NGiHtRsSPUzCj
UHUdMhjK06iTUNC5oMqHpiplPXUdVAGAoWX0keeMnAhHjUSMobtWuf1OFrAswY5kgg9OvTpm
BihMXUhS3U5CpIHXCzSj0LISWKvX7Sa4sM6xI2pxqYlh0yNK+GeDT7UZnkCsrKGX+Clc/Pzw
semjIJLK2Tda9P3YNWlKhkiZtPT7VJr+7zwWlltx/wDxhwDmD3643yy5SNOdc8bQ6P4npXth
GuiIoAQvU5n6YxIHfsXuvce2FcszAIiEChrmTXwGO/8ALk6+pbb4wv4eJx3e884lsLFohIth
CZGQKy1EZqw6+AGOPdrUsjw3dbeygvZorR2kj1UMnQtQ9aHA6Xu1A1FJIJIyAr0J74mBI5Yh
Sv3ZBiM/wwq3F9xLiPJOQ372GwWkt1LGC08o9Mag5etzRfwxWixZch+POV8e3O2sbyMPfTsB
FHC3uSam+yhXM+eM89U8SVabv8YfIG17R/U95iMdvIKBZZqkd/UlTpxbdUkV3DeCfIXJUkm2
JJzZR1RrgsYo6/wqWIVqeQx066uM3n3aouTbTvGx7jJZ7g4e/hOhqMSwA6qtT+/GeelarRPc
yzRkK007UVBVmY/vrjcXXMj1XZfhj5Z3Hb0vBK1hBIuuOO4uWWShGRKV9I8jjnbWagtPhX5O
l3F4oI1kkjP8y+9/Qv0Dg1P4YftXT7THduPwB8jLZy3F7cwzLEGkPuXTyUAFcg+M22i2KPYv
iD5G3OyN3aWxitWB1TSyezHIoNBpTItjV6rn1w4Nr+K+d7lu0u37ftrPPbema5ZhHEp/72oK
4zreeLbefhX5E2my/U3ipJmKwRTCZj4UWteuG2s8ySp9n+DvlK5s5Jf0y2cE1W9uWZY2oR+Z
K5fji6t/Dd+qotPh7n027ttlrYN+phIaR1YCFB2b3KqufbPDOsYXdz8QfKez3VveyPE8qsFi
keVGaM19Ls1TRR54Z0rI1/NOC/KN5xd7je+WQXtvFHre0hqqGnmoXV9TjPVuh89XMTRTP6wz
KaSDwI71ONLFtxblG4cd3GHcLJh+ohJ9vXmPIUphlWNJv3ydz7f5GkkvLmCNhpe2tC6QUIp1
HXB9jIzlhvXItouXNheS2lw2ZWNmBevfIgtjV/oeuT7hvXJLu4jur65up3Ueh7mRzQ1rqGrp
ngnQeg8Z235d33j0m7pvclrtMQNfdumQnTkF0g1AwW05IHZuGfJ77HLyG23eSCxBZ0c3J15H
M6a9zg6/pV4wu679v819+p3G8urmWMn25ZmdzUZVUnOnhhnRdR5vzq6g0He9wljWqJ/NegHg
B4fXD/0k+WbyhseZcssJ2O37ld2csn/kNvIw1nxp2xm9+M9Sntdw59ve7xJHc7huO6kGOP1P
IxNa5k5LTDz141IDebXnnH90kTcZri1vZwTIGYiShFfURnh+4+sZu4luJZWe4BklY11E1Jp1
OJqRLBJMkqpDUN0YqfHoManSx6Je7B80XfHo5b6G/bZYkDUdnoq0yb2ya6aeIxm9eqyRUcKT
5He7mtuJi8F2g/m/pxQiMnoxPpXyqcM/pMYs91xcn23mm2737m+xXEe4sagzEmQ+eoda4uem
pWj3TafmjcONK+6jcm2SJA0fvFtAUUozDqR4VGMfdWKvg0vyct3Pb8PW8WXTpuXtEqoHbXX0
/wDbjU6hyKfkNlyuy3aWTkEU67i7VlmuqmQt/urXB9mcipn3HcrhVWW7mmiiYnTIxdVPTKvh
jX2FjU8Au/k5orm34VJeyBmb9SLdGIBJ9VSchWuRxq/0nweeb811bVuPybsXIZltBeJyG5YJ
cIoYyOeujQ1dWMXvW41u+3/9yJ22V7wXsNi0Z/UKqKDo7gsBUeeMTtnvnfh5Gu6b/ZyOi3lx
BJJqWRUdkJb8wbSc/DPHS9CRJts+6299Bc7a0i3WpTCYi2pmr6aBcE7xfVsuYXHzPe7UkvKj
fHbVI9j3ozEgJ6MaAaj4VxfZTxBwfePlWOC5tOKPevbD/wAy26M4VuhZiQQK+WC9mRmru95N
s+//AK65kmg3mKTWZpGPuK/ia9x4YZTY6+R/IfMOSQwQ7zuUt7Fb10RSEKoJFKkIAC3mcWr6
xccH5D8ufop7Ph5vpbBSRP8Apk1xxtTopINCcXN/YsrKbndb1Z38s92J49zd9UjS1EquDQsd
WYocP22hzbrvO7bikcm4Xs90yfa00jSUrlRak0/DFOvwfiKxQBoZgRWtAegxpnxo+O825VsU
E0Gy30tlBcKVmSM6QSfM9zg39n5VSXFx+s9+QkSOS3uGtT3bM+OM3qNNxtPzl8jbbty2Nnuw
Fsin21kRWKAdKOwLfhXF9oLFXY/J/NLXe5d4i3eZtymFWndixK0zHtn0/hTFaI7eU/L/AMgc
l2z9HuG5SGzehkt40WJZKfx6QDTyxbhyH4PyX5I2KyvrzjIuY7GMD9bOkZli1HudYKBsV6mt
dfDg2X5M5ntW9XG7Wt+6bjcSMLiVl1ayPyFXyIGLXOVYch+Z/kTfClvdbo3sxuJBHAixgupq
MlFTTzwzqY1ZWb5dzDlfJ7iO43u7ku5IlEUfQBVHhTKvjiGWJ+D8t5LxbcxebDIsd/KPaVTG
JDJqNPbo2LYLsek8/wDkH5y2zbrUcjdtut7ujxNHFFGWK0cCq1IK9TgnSee33yTzW73u35Bc
38h3S3Ki3mOZQL0Kr2r3Aw/ZTn8vReCf3FX+3S7i3JYH3QXoR3miPtTB0GmgHTTp7YzrX4Sf
Iv8AcLb7txeXYeN7VNawXn8ue5vG1kp10qKtTzqcamDNjwgtoerHOubf9MQbPhPylynhyzDZ
rwxJcU1RSqJImNcm0nvTvirTr5X8yc/5HLAL7cQiWx9yGO2URLrH58qGuDfwJ8+rtP7kPkxN
vjt/18I9GlbloFZyBkWZj0p4nDkiyszN8qc6fYrzZRubSbdu0jSXJoHd9fqbTIRVdVMFZ753
xz3PyTzOfi0PGJL7XsURBWz0gminKP3PuoDnTGuci6/nbMaezt/mHmfBYtqsrWe941YN/K0I
EVjHU6QxIMmmvQYzesvjVnim+Nefcz4nuUtpx231X24OsEllLF7zNIppSgOoH8cPUs+Vx1LP
G55H8xfLGy8l25eU2FvHJt7/AKmK1eEJrVlKFtQLZ0ORGLzE2Df3O8KjT9f/AEvcf6gEJa39
1fZDfw/fSh8dOLE+cOY8jl5Jyfcd8uIEt57+UytDHlHGtKBR4nxONSrmYr9l3a+2rcYb6wlk
t7u3IaKWI0cHr6T4YzaeblerP/c58mTWwtRNaxSGIj9SIB7uXVhUkA4JVb6tto/uTm2bgEez
2kL/APsVu+v+oT0eIrJIXd3U5k+qmNWfldMdyH5w55vm62e4Xl77A286rWK0/lRhz/8AaZVq
aeOM/jxiX1ezf3PfJJjRVubdDH1YW6EvQdz0z+mKWT5atVfGvnv5A2S5vnhvo5xfytd3MNxF
7gEz9WQAjQvkOuLyrm7Ffv8A848/3neLXdLi/MM1ia2SWqiNFfu2nMknz7Y19vxDPF7e/wBz
Pyldba1qtxawa46G7jh0zeFQa6Qad6YL1Ix1rx+8upZ5Xklk9y5kZneZzV31GpYnuSThmice
LLjHI9545ukW5bZcPb3tu4ZZYzQ+BDAZMtO2D5rWY9Ql/uf+T5I0YXNrGY6jUtuvrFOr11D9
gwzDz6xfEeY802nkF1vuxTTm+mLTXzxqZFKai59xaaNJY98HXUc+ePrVZzfm+9ct3qTet59p
9w0rGzRR+yoRAdIKgnPzw/LWYolYgK6k5kGgxYpXqPDvnnnnFdo/pm2yRzWaDVbQXKe6IgTm
ENQQte2M43JrN8z+SOUcv3Rb7frr32t6Lbwwgx28I6sUQZa2PU9cLOXWk+Qrf5a5LsG3cm5N
YTttdhbiK3vfaCD2pCPXIB6vVQeojGZT3k9c/wAXbv8AJ6vuOzcJR5RuUFLuFI1ZaAFVkMj0
WNqEjri3BJsUe27ny74/5XK8Im2ne7P0TwuM88yrA+l0b9h64bDL+mt37+5L5K3TbLjb2uYL
VbkGIz20IWT2yPUAxJ0kjwxrmxrxFxL+4LnvGdoi2i0njurSOogF1HraIU+0NUErXtXLB80R
j+Yc95JyzezuW7XBubnUqxRqPbji/wD3cajoPxxq3wTnmXxe8t+VvkbcOKDh+/upgh0BxPBp
uSF9UZ1t6sh5Z+OMc9Rd/wA9ebqNZYyMMh1pQnGxix49vG87Xfw3+13MsF7CymF4GIcMDT00
zrn+OCyflSVvfkf5P+Tt9toNq5HK9tGEEggEDWvvD+ORRTX/AJYzOv0t9yqPkXyXyzfn2Ybp
etO2xoFsCqqhUJpKyOwzdvQMzjUVuLS8+ZOcX/Ltt5RLdq27bbB+ntmESlQh+7VH+bVXPB+B
kt38tBy359+Sd54td2W5W1qu07ijW7XP6RgG8faZzoLCnXtgli6n7Z3h3zfz/imzybNtN2p2
8A/p0nX3jGxrX29X2jOvXDIawd7fXe63ct7e3cl1dSktLNOxdmYnPM+eNW+HFjunDuTbRs9l
vG6bdPb7ffEfprqSMorg9lr38PHGZ6L1J8r74z+TuR8G3ZryxVZ7S5XRebdMD7dxH9ezL2YY
sWn4j8l8k4lvVxuHHpFsRdyOXsQvuwGN3LCMq33aK0U9cXUg4meRpN1/uI+Rtx2282uW8hjh
umIkeGMJJGrDOONwfSp6YobR2X9x/wAl2Gzx7TFdQTewgSK4khDTIi0pV+hKjvTDIb8aqrH5
95/bcru+SQXyG8vFSO7hMY9iSOJNKfyhlVezdcXXOLn1z84+beec32ttr3W4iXb2YO9vbR+y
rlSCvvZktpIyGKdTGZ/ljZNj3obYN1ezuBt7tQ3ntuYQT0BcClcZ1qyu/i3MuS8T3O33LaLq
S0vIWHukE6HSv2SIfS6Hz/DFJqtXW/8AyzzTknIot4vr+l3Zssu3xQr7cMDIdQKIOrVzJPXG
rcEuUc/PPkObkrfIYkl/XRyokm5xwlbVJFiESxUp7frQZr3wbsdO5OYpp73k3L+RXF+sUu4b
zujmadYIv5jtQAlY1H5VAxnquc5x2cx5hzXfDYbZyKZpZdmRoLaKWMRSRKKBlkoFLN6AM88X
N0z2t3u3yFvqfBsXDo+OXFtaXie1db1cCQQuPcEoaBWAoz6enQdsXORd/p5Ltew71vV5Ha7Z
ZzXl2QxMcSFiNAqclGNXrD9ae0k3TZ9xWWP3rXcbWUFdIKTRyKcjnmpw/aWCV69f/NXzntMN
nuW4O8NpPEIbW4lswIJg1CC9QA7+DZYzzearcZSflHynz3am4073G820MzXgtYoi0hkzJZmG
dEJ9K9BhvclXtYOOS52+8L6ZLe5tpTQyDRIjo3ge6sMbt2Odsnx8vU5P7nflb9KbVr63kQx+
2ZWtoi5BGmrClK4zIb1L48+47zHfOObsu87Rdta36SaxOgH5ydS6PtZCTmpyxfXWuMkx28h5
3yff+TryPcrkf1QLGVniURANBmjKEAzWmK/Ckba5/uQ+Uriwe2N/G0U8ZilJtoSwVhppXT1I
74Jca6jFbrzTk257Rt+yXtw8+3bUWO2RkVRC3XoPy9BnkMZ+zPvy1HC/mzmO07xtN5cS/wBR
tNrhktEt5zkbaWhaPUBXqg0k9KY1Z41PXsPOvmi82Xillf7fxiS0XkkLLt95cTi4t2Qj1rIi
10tQ1UH/AFxc/wCWevnK+Y7Xje+XO23u529jI237cVN9OAWWL3XKpqNPzHD3/SWqSc+RzWV5
d29wklu0kdwjAwyREq6sDkysvSmDGnpPyhuny7JsuwDl161xtl9CbnbZUKaZtIH/AJdAUl1U
ggN41xrnvJ4JbKxu+cx5Hvce2DcLyS7Taojb7eZaApGKEqTlWlKDVngtmHq+6Ld+Xcl3rcrP
cNwvpbq+sIY4bSetJI4ozrjGoU+04zFLt1DyfeeU8hvJN63qae+nIjt3vJEFBpU6EqoArjX3
nwz+Qcc5ZyHjr3Fxs99JaG5gNrce2fuibqpBxUWrP49vOZW3Irf/ANTkuY93c0t1t/UzK33K
ymqlf+7GeutdJGg+TeUfLUlz/Rea3F1AIQJYLd1jjRw4+8+2NL06dTTD9vGPrNZ3avkXmez7
HPsm27rPabTd5PbqxbtnpLVKV76aYOb6bUXFbzk43q2l4+0670ZB+kaAkSa3yAU/mBrnh761
0k8x3893Tm03KJrflrXEm+2wSCRJ6DSCoKaQo00YGobvi/H+HOWbn5aBbX5s274/YP8AroOJ
XJLFRmEAPXL+ZGpI+mCd/prqftV/H22c8vN3jn4SJotzhjYxTQEqAh+5XY+mh8Gxm1T4VfMJ
uZXXLGTlBuZeQIRFN+qU6105KEUALpPiOuHvq45S+rz/ANa+VeJ7G2+w293YbfukJglmjY+2
8DmjJMozSo7tTBz/AE2Oliz+K4/ls7bu228FjdbW6RDO+SGNlNVMbvpCtQkeYx0/6TdbvM+r
E2vEeQX2+NsC2U/9XEjrNaSIfd1aiXqD31HrjPfV3a585fhfco+CPkXYNon3fcNuYbdaqGml
iZJGiT8zskbFqDuaZYJ01ccOzfF3Id34Tfct2jReW23yOl5YxnVcxqqhvc0/wUNfHDo6mTWL
9t9IJGVagk0xMwBRAWehOeYGNIIiYNr01GWRzyxKYUiRmT09B4dSB1GKETKp0iMVbw8MWELE
aXVDmAAfpiZOmiNKA0Yda9zitOpY2Y6kA0kHMjrXGbTpgK10ZlcumfTtiMpqGJSMwaZ16U8M
ADrCjMknsQf4uuGAWpvzEt3r2A88ISKp1VOa9tPT9+AkFObBiGUfaTlTzxHBpGftbSVrmQex
xFGWKBkXSwzOkChAxAQZEiAFBl0Ne/h9MVWjQArqBLAD7hlX64EaupTU6lNR5g9cOAKkVLAZ
j8p7fswrBKV9rSyA0qTXrQnBhwm0kVBqB1FM/pXGWTsRpXQRnX0Adu+JaP26KyMoFPtp93ie
vjixbA+pSWH2nKpOeEwYKmPQVNBmtOlO+A0Mw0oCreqvYdAfHFjOjSoU6ftoPVUio8D5YsJN
IjtUgLp7LWhHlXFi8Oq6TVEJc1HXDhEuWpdBHZGFKgeNTgPwGRv5WiT1eAGWf0GAGT0AAN93
5emfhigOrFKmh/iPkfI4UOtVaRvUT1anfzwUSmEukE11ADMgV65UwSHTFlUCRQRqGZJ6fhhO
jWhU0zXtTpQZCuIkZGYURcgc9R6DxB8cGLTsVVtKnVXKp64sWo3UkVZdAb8K064dZsGGqgot
Aa6QPEeGFYSmlBQEEdPr/lgWFXTnpGXqoOn0xIlKylnUaB5dMuoxIxYhdRNB3HUYtMOwdFoM
wejHvXChskQjSgrIQAXzzI8cCtRwrHpIUmlfwp9BgoxIFdVCrl16nKuBaBpFV1C6tQNWA71x
LUjFwFCZGuan00+tcWkhJ96saaev+78Ma1EFlUBAwrSoJ8DiRkCqKnIZ5jpUYlhyVMhKjVH4
GlPxwHToWJU1AUnImhpXEoLPUciB2r3P1wHw+lgGJzCjMefXLEgK0TBXFRUCi9vPPCPk0suo
glRRemdB5nDgEzoyUBAY0apzGXfAadV9emnqI9XmDgtWA+2XUo6dB2/Zh0HlQSULAhwaV8cB
+Dq7hXLRhyOg75+eKLUboyPrGb5UHge/TDqgonSmpjmah3GZ8qDBaMDLKyUBLasiG75dKgYL
VKLSdBrUtWta5n6YLTDlSxzquWdOtMWtGJXSVLUHQHp+GFq9FVQodsz2TtQ5VwoKiQvUisRH
UHMinc4hpmZwVjTNVP3Hx88WC2nQkA5ggd/LwwsaZZSgpQKlRqbtTywWEM0oaPoCOhIOf+hx
m8tSsxuCkTsGUA9cs8sb5ZcpIqKCnauNo9Pr0xFZ21uUhY5a6EiuCVSeurjdTuMQ0azq1Mni
B+XHr/hmjrjX2PzHL42hLOhQxIxC/mBAoF8SMeT+t9UfMt+srSsDUFfUGIoSvY45yO1czDow
zpn1JofGgxpj2O6xWB7mGOaT2wWUSMOy/map6DG+edc7X1rwWfg1rt1pt3GN3soYyqvJBC4e
dyRmXqdVa4uuRusp8r37bDyvbdw2q+ru1wCFHtByBqC1DNXrn0zwfzbi9+UoL++4YlVknuWX
U+mNmXVQHUop069cZtys/Cn+C+Q8lurC72/cS6W23kJbII9OkV+1tIzIGN34Tyj5b2bc5+Wz
RRWc8muQtGFjclzX8mVWqcZ5+Sr4fjz5D2KS23GXaJrMmRRbSSrmJCfSNJPfG/ub7Xu1pa7h
tVna8g55uV3um7RANBte3glFrmFYJ/5CO+M2jHmXyJzvk3JN/trW02642Sz9xYrYnXHK2s0D
lqoMHHWnqSPX5Fh4rxaws5X1XN28cc0js0geQj10ZiQMvHLGr7WHVyB73+u7DBZtLJDrpcRQ
1KGPT0cCg654Gku+WnKL2/Tbtp3BNp2tkJv7lQDKvj7S+Jw54MZbeOfW+wWo2LiG3X+87pr9
sX91rMfuVpm5zbPtkMEw/VZ+/ebHBBv/AD3dbm83UD+Vs22KTBH5aE+9vEk0xWiuHaOS865p
fXB29Dxfi0dVnupAHuWI/wDwVR93+GKUzFZzL5I2bZ9hl4/sVpfbpcyKYJNyu1cqueba2FWO
H8ir+PS3xOsmor/8fUa0qG71r554uk+Vr2gnfSKKCQVbsK4zIY2Pw9t1hfc8s47+3jurGgcx
PmC2ekEeBxuQx9Q3N9HZ75abVZ7VBFaToS1wsQGnTSiigA74xkF1zS7PsGxxbjvVttVtJekN
KzyICSQOi5ekeQxZBbXj/MfkvdeSbI9q/FI0Gqi3wRiiAZ6gSuR/HFOppkr0P46lt774wCyW
FvEyRujCJfvKL9zE1q1cb6voru4Zefpfj0Tjb/1slushWArk2liVGmhwWBLcWG3cn43aXm4b
NBLO5Vo7SntKpr0L0DafGuCwr6wt7BJf6dJ/T45QlTaWsIqF/wDqrl+GLFqlu9l4/wAY23ct
127aLWa99UrPOvqc1rStGp5AYPrBrN8G583K+R2qnYBtksKuJL9FIjPkrFVyONeLV98hbfYW
Vrdbpb7HHvu8TJpRp81jUCmXWgHlgwvkTdBdNukzTRiKQudUSjSoI6gY1K6ePS/7eNt26+5s
pu7VLn9LE8sPuAFUlGYbPqfCuLrkXx7zabpuF7zvcNsnkM22wwLJFCR6dTZVrSh+mBz+Tclk
l49xG+utiRLK5dvcDRxjNifUxyzoMUitdtltlluVjs9/uFpFd3oRJmmlTURKVFXz88Rc1jul
/fc33baruT3dtggjeGCmWtiQ1SOoIHTEc8cvKJW4rw64uOOQRWVwZVakMa1Ys1GZh3oMXMkZ
qg+Y9v2mfge37jfWguJhJA8pzErhgGlQHr6sIvih5nf7dPwRY9m4BPtkEioBuE1vHEIEYgal
YVdtQyqcZyN/LYb1dHh/x1trcajjsvTArhEDFtYGpmPckk+rDi6W+/y3+3Ntd/se0Q7lv14P
aEklV9tXXUzl+oXxxYFZuHPds4bY3R5bv67tvMylhtNqi0Ukf+KNV7eJc4sDzf4q3nim4bjv
F5dcSn3fdLljNEIbZZ0hiNdMZV6LGWPfDZG88aH4i2TbLzn2/wC53GxjbLm2INnYyrQWpY56
VIoG+n4YzkGtlxnc7nkkvJLHeALq2srloIIio0e3TIEfmPeuGwZ45+ZbjccQ2HaLTjkUdjFP
cJFJHGg+0kVp/ubxxQML/c5se0Q7Jt24RWqrdyO0b3IFXOQK1/HCLbrwPZ9luZ9ws2uIJFtp
pkEpZGQMhYBlBIGf0xmdfgyvrfl1/PwvadiseNQQ2VnNcRW80aoPtegPT8xr1wtX2sp88bHx
i13Djt9uFkZYJ7krfe0D70ypQ6csyTWmKQS+s181ycd/9Ts49q4VPtQdwV3Ke1W30Jppp9JL
MWy+7pgkir59LgsjZsK9emWO0GvoP+2zgnE94s91vt4sY7+6jaNEE/rRQQSSq9K9q9ccuvaZ
fGmuN7+Fd6tb3ZZtug2m7iuDaWiRxKLhn1aQ6FBRan+LF9IpW527414LsdhDaw7PYMCP5016
FeVz3Ys4NTjP1VtZxuB/DI5ypaOya8eD3Y7DUvsa60LnOlaflrhxQXyBxTjcPHLqR+HRyRRD
VHcbc0QIHiaBWC+ORwfWULrgvIOPTfGsu4WOzrt222cc6z7Z6SCYlrICaerX4nPGvrhteefE
fE+H8w3Pe+V3u0RRWtvKEsNlPqhi9JYyNXJmb9gxWRb41Fjxv4555xe4u4dgi25ba4aJXiVY
5KxMK+uMCqsMZ+sW35Py2x+HeEvty33G7fRf0j94RKyRxrSrvqPn2zOHFbrybkr/ABbZ/Km1
z8dgW+2gvG95awMY4lldvT7bHMfxEfhhvPgm2vT/AJw4/tW67/wu0vld7W5vTDcRFzT2jpJp
4E9CfDF+DzPVlu3H/ifY+RbZx5uMW8t7vTn22MYKIPt1FnJ/YuCcr7eott+F/jqx5Nu12u3L
fTxxo1rtkrApGrg10gnMMRQFumD6qOD5fTifFfjG6hg2OztbrdSIo7MLHrWRxVpKj7tFO2N8
SMdW/h8jaH1qi5sRRiMgT9OwwtTx9GfCHx5wgcGvuab5Zf1OaP3gIZR6I4oVBOhCQNR8TjnZ
rW5EdxbfCPPL/adq2fbJNp3a6uBE5iQKBEoq1aFlPgMayYzZtbiXi/xBt/LLbhP/AKtHJdXc
IP6oioKhSas2rVXLBh3TWfxF8U7Bt++X+4bV+qt9vmklLPrLJCqBliQKy1Ar364rNCm5L8ef
FO9cLseY223ybJY6kkneADWbYvofUhLLXLqMWNfdtt8n4VtPxXA0W5XW1cajgRLS7siyz6Wq
FUempLE5gjFzMZ7fN3widmX5S2/9Rby3MEszpZOXKSLK7VSWShBNO4rh6b5lx6b8scQ2rkHz
hsO1bldyR2e4Wmq4dnAKqjPSGInJfcI/bivwxz816Rtvxl8c2G4F14xBbLtoV4dwmoyP6aEn
UzV0/wC4Yzi18efJG4WG4803e72i1Sy22S5ZILaOhyT0lhTL1MCcbnlZ5ldPxbsOw7zzHb9t
5BeCz2qVv/k3FdOemqRa/wApdqLq7Yu25X1FvXxR8YW1oYDw5pbF0J/XWJMkiUX7wA+uvmAc
Z+sZrNcc+OPiDbOAS8n3jb3vra3mlka4m1iV0WQxxqYwy+WR74MatLf/AIb+M94tuP77tVhc
2Nluk0Sz2Fpm0sTqWHpYtoPp9RB6YsGZWyPwj8a3Vs1m/GI7WJ4/bW4SZhIAOhFGPqxYaxdh
8U/FvCeJbjvfI7B99WG7eIyMCWRBL7aBEDKP+4k4sHPii23iPwhz3mW0xcbjuLJWEku6baqs
iaYVqoDMW0knI6T08MTUr1G5+D/jK5Etq/GYLSMoUW9hlIkHmoqc/rivMox8bcz2a02jk257
Vb3S3MNjcy20VyADqWNqVFMv2Y7SMS60HwxxDaeUc3tNu3WCe628K0k8VuDrfQKhWOVErTUc
Z6uNT5fUtz8G/Gu4Wk9geNQ7esiFUu4JSZEPYr6j+/HP6pV/AVpxSzsd62Sx272902qZrTdr
1/ULkLJIE6k09I+2mL65TLsfM3yxdcS3Dml7Nxawbb7GrJKkhHqlViJGRAToXLIVx1/DMjG2
6qZU900UHSM6DM5YLVj6k2/hnxFwf442bfOSbO+8Tbro9ydhrfXMhcIE1IoQBaY58+/LfVzx
HzX4p+Jtl5PsO7XrTbZsO7Bv/ixZqlwNDxEmjkIdVCM8/LDZsUvrRf3J3+xw8RFjcb9c7bd3
MLfpdthqYbxFIqkygUp51xrhx/tzbHR/b5ZcJs+AXD7LdTe+0aNvjvVWinEZYmP09KEleuMu
nMznHmvEti+POXfMm6nd95m3zbyiy7fcXz+2bqYAD2HNFNI1yWlNWNdUcXJjafJPxv8AHNvs
d6Ljitxs3sxF7Xe7Ae/GrjMBwrkhCcjqXGZIur4veJfCHx/a8U2z39hi3e5uIUnuLi4kKtql
UMaCoFBWgwNMhvnBvi/gPyTt8l3tx3Dbd7UfpbAsJP0d0sq0lUEglDWmZywWOf1/2B/dXfcJ
hiWzn26R+Wz2yS2e4R+lBCJSpWY19X2mmX443JGrb8R8tOUUhvz+Yx1xa+gf7V+M7JuNxvW5
7hZRXV1tUML7eZAHWNn1sX0dNXoGOV+cb3xt9u3q1+XeHcsbkm2WyNsvuPtc0VfdiKRu33tn
1jz8cGudmzVNbfBXDec7fx7f+JSDb9uKxxb7aye5VzFQv7dQfWcwaHScZs/TUT2vw18Xcn+Q
9wTYGkg2nZIY13OxtyT7lzrb+XFI5qE0r6vPphvKny13zbtUc3wducFjtJs49uETWlnoXVFF
DMqlworT0VJ8uuN84x/a3NfFrx0YBSCQc6GlT9emNVqNN8a7JYbzzrZds3GIS2l1dwpdCtKx
s+YB7Vxjv4bex/3Fchu9y51YcEAMOy7ebOR4o2NJTNp06wcv5aE6aY1z1kceuZ1cajlP9u3B
Z983Rdt93bVt9oW4tYIjqRZlLrqOrUxBEeYr1OOeNZ60G0/CHxnHDs9+uxGeV7CPVAzt+meT
Sra5CTUSHUadjhxrVN8sfCfBjxK43m02wbDc7Y6STG0YOJbdnVZAy9KhWLL54HP+nP5XjfGP
xZHsdu+28UG+bXPAtdxtJFknZdNPcoXRi9M/T37Yq62vkrnm2bHtfLL6z2BpZNphdhA9xG0M
65V9uVWoap0rQVx1mOXPVSfHm1We78x2Xbr1C9jdXkMVymYLRs4DDLPPGO24+rd05pNZ/K+3
/GVttdoOLy2yR3FuY6hlljZgAv2gLo/HBsNu1kJviL4z3fd+T8Hs4DZ8jtJ1utovW9x0WGSJ
X9gkVXQp1Ag555YM/LE4+WX+WPjjg3A/jux2t5kl5otyLiOSNXrLEcpPc8I41+0+IxTjfae/
8fLfx73sHIP7bd9G27Wu3WdpYtbPasFIM6IjGQEdaswIJzrhi7nio+Pby24T8ET802/boJN+
9xojPLm2kyiFcx0CjOnfvi4ktN68aJdl2PnezcF5fv8At8L7zd3aQXbRLojnT+Zk46kaogwB
Plipnjsg5jNv3y3u/wAcbhY2c/FobcxNZslTUQpKG8vupQdO2eK4J7ar7s2vxf8AFu5bxxuw
h/qMV/NaiaYan0m4aNCW76FAy6YeZ+111cdcfHtj5dcfH/Lt32+3beL8v+vWNdMc2mF5FMim
uoo8YIr9OmCzRfLKPZuXXXL/AJN5NwPeLK2uOMWcckUdsyEsGgZF1lj0rqyp07YjP9pdVW6b
ovxh8U2d/wAYtIReXd7JaTXUia5H/nyKsjEU1EBAKHLGuYJ5GG/uQ2vb7/i3D+YNaxW287vG
ybhNENCvqiEgYr3YNXM554Ob6z/T+fsrwKWxuoIVlkieOOXJXcEBqH8leo8cbnWrM9cw0l6M
CT5ZZDGq1Hs/9tHFtj5Bze8tN6tI76CKxkYRTCo1OVXWg7EA45dfLpz8PTuNfDPCrSXh+6mA
3RuLy7gvILg+5HMKTGJipyqntD64MY1oueQ/EnxttdpuF3x6G4e6LW1tbLGG16TrYkvVQwB+
6laZYueJWngHy23x1cb1s+88LKR2m6QmXc9qRafpZ43U6dGWn3KkFR4VGRxq83GZLz09H/uC
NpdfEvC72ziXb7SWdJV2+E/yVaS2ZhT/ALCDT64zL4138tZ8Q834/ffGG8XZ2CKJdktdO6xx
hNF77ULMWOodWC56q54OJ6r+3zTvG4bZecwut22fbxYbdPOtxDtLn3BGRQtEWUfazCtB0rTH
bvJPD/PXt/8AcekFzxbgc8Ma2ltcGQizjyjj923RvQAMitafTHL8Dqf7NlybdfjbhE+yWs/H
be63DkEcMegQx6SqAJrq4ZdQLZimeLn+fmr5qHZ/jTgezfK+7x21lbveXO3pf7Xts6qYVd2e
kuoKwiJkL5g5E5/twcq1qbZ3TdhalL6QxoS13JQWxNPqKn6DCnh93sm4Xf8Acs9xt1oXt7GW
3mvHiT0xK0alnc5CrHviq4nypf7sOP70/IbLeo7aRtrW0W3a6AOhZS7GhPbI4Yy+c5Im05dB
mMdYzj7F+PF3uz/t+ik4lFTe9LNbrGgZnk9wBsj30+OOMdeg8Sb5XveR8ZbnlnbpZQyztbTF
E98zCJghkA+w+AGC0RrbC6+QW+VbqC4WUcOWFmtmMahDJoAprHqPq8cVrMcfMuQ7hxn4z3/d
NmMdtcWt5MsLBQVTXcBSQvQt6sanK6vip53ym+sPijjfMHWK73m3e1nSWaMMC80ZEmXavlg+
R3cN80/Im6bT8X7ZuNtb27y78iQ3AkUuqCaAyH2wTSvhXDz+x3fZJ+Xk39sVhyFubJuVjDKd
sAeHdJ1FIwChZFftm2Y74Oq7z49ek7hY7htf9ycO83G2TXFpudulrZTxpqUERBXfV0GjPV5Y
K58/Nen76N62vZt2urD9Tu11MjtaWqlA0TFSB7ZyqB1pmcIseYcr49vfJ/gLZtt2yJtx3OSa
3eYLmwcSOZS/gQSdVcFZ6nkenWNlLZScZtJlCy2tm8DgZgMkKKwB/wDpxR0rw3kf9wnJ+O/J
G/WN1DDuGy2sslpFt1QhXTkHLUNSe4xvPGZ69M+PVlsfjfYp2F3crd1mSDa0UCEXDtJoIFKq
hahJxkyKf5+3rcuPji+7bTIbbcv1TW5uci3ssqlo3P5lPgcsFnmr/wCUUv8Ac9f82HGtu/pH
vnj15CV3owoHU6ypjDkZrXx6Y1wz384zX9pu3clh3m9vUilHHpYWhknp/KedCrJmfzLU9MHV
9dJMjcfGexXm3fNXOJb+1aFrwvcWMjrp9yGSYHVEe4PemM57rnz+Vxwmb5Bum5Vb8yhkG2iO
QbaJ0UI0XryqPu9NK1w2+tRUfJnOd/4zwzhSbNci1k3RoIJZioZtCxJQLXLOuHBvsYn+8G3t
1uON3GgfqDFOsktACyqV01PfMnDKzbfvMfOMC+65DMEY0qT5Z439nR9w8V3Petw4hstvuFnu
GzTNaRexfbbolgKaQI2dVDaSQM1K45q+u/i+2b5Y863ttzuIL24nsLZra6ijELSIjyKPeUE+
qvfEzFLx5/kC849zWLnduvtCKb+nwsiCH2ljckoc6qKKanFfVJ40eyrNtuxcdsXN9fs9tCgu
rJVWFQFUgyUIAWh69xii10yQRJzHd5II1W+l2qIhkykfS8gBr3oaDAJ814luFz8/T8d3leS2
iT8fWMfq0vIog6xCTOSLTRiVAqa9OuG9fqGyZ69y3DcYrSLbFt7PcbmFokaJttUNCFAGkSUK
ihH7sE+C+WP7pL6C7+RYq7a9hNFYos7ShA0zFiVkOgsDRfSM8bkEz7PF60DCnq65Z5fjjVOh
kBZToNdOWjvh0hDEVFDppmKZ08BhBiQVq2Q6VBxM6HUodj3PQHtTocZ0wzrqfuR0r28+uKFL
RmqMwRShHTFVCcNSpJUnLEyagKaa/wC6nWn0wNQ3o1KQcwejdQcaWnLhpCCCVpkMA09VJqCW
CZ5/4YLTKEV1Fsqdf+mLUTVYZNQ9K0p+P44URZli16clyBHX8cGLQUzYkUc/a4HY/TGmcSDW
IwSNNK1Pc1wqckBpqzZA5VJ79cASExtUkCrD1GmYp3wYUf8A5EZUGmnRh0xAIDLpBzKZ/ie+
BuDZjUUHqrU/TDipFiABQinRvr1xSKmo1BQ+VT0GNAUUlF9WRqKHGUZmcKBWtCaAdM/DAbpw
rGNmUEqF6dfxpjWokqc+r0+0d8Wk7AAZVqMyx6ZdRgsBkFW1Zjt9Qe+IWER6iWNQDSozqMTJ
zGx1sorQdevXoMShKn8vUWyPQdwRixq4IMykMfWpJFAaV88C0WoszjrUZClM/pixboEaWpJz
r+HlhlWUjHpz/N+fxAOGnDhI9BrU9vGnnjNX1FpGrT+UCgB/ZiWB0hZXTUC47Zg5jqMM1E8V
TStTQ5eHmaYWQpVwoC9QQK9KfTGaYMArXsRkAP8ALBBSZZNC6ga1qBl0/iw6sEsaZGh1N9wP
+OLTIeRQraEp6u560xNBaNQoUkEjt4jyphxjqlERqI09sj0B/DFYpTRFm00ADk5npQ4vFp2l
ZdWta59vHxwatp0AZKq2hh2X654mvk5UjIAksKhT44sIfSVrSp6E/XLBguEqSamoakAUr3w4
zp01lMmqzGoy6E5Z4EYNKp00rU0z7ds8VqkGDITUk+kZ9v2jBpwxUuV0jU2fqbsuLVg/ZWhX
OozJPhh1YjUqlVByPQZVHlgEh1VX9Ve1KeWFoymRV6aqHsMh2xAYhI/2n8gNSCR5YlhmDHI0
qT0p0xLTgVOWVOpbqfwxE6yHSVrqD51AFRTtiWjogGpRT/Eg4yKQ0s9SaitMsj54gY5PpIOn
ux7eOIWCKNUBSSaeuhqBTtih39JkkRQSwBJFcyPpirc6RvI9CEFErSnf64Gb2SBWQKAT46hS
o+mLVErqdIV/S59JAyopzxacKNSCFCkADI9a0/wwnQIDI5OnUrd/AfTFaoKVWABrUrlprkPr
gJkCaA7fcx7dcu+JkwiJqrGqkdBkaHvXFWMJLaTMA6vCpqRTE1zKIqp+4B1XMt0OFrQ63Iqa
dfSfEHpiZKFnkejALJ5eHb8cGE7FgarQEZMO5+mJDBUFVqKdDTETKwK6WNBUjKpHkPxxAZah
COQWX0H+H8MR0FQctNHXq9M/LACRSgJjAZj1qTniJnJcUFXVT93h44QMAAgaiFPY9Tl2wEpp
PEGvQVGWGM2iibVQEAN1qfLp9cJhOWctX1gnMjICvliSIR6TSoZB+UtnU5jEU5IZF9NAvpqO
tPrgQdXtjRFU51qR074icOzEEULE16V8+nTEjI1SSR6sySB+7EqJmXSKBetDnXAMFrZZAwTS
jZEZUBGAgpI7en1KRUUFMvxxCJEZgp0CrdBQ+HfAQFW0amjNAw8xmOowqExCjUGJSuQpXr/j
i1k4kVlVPvXTkBkfriOBOQCKKAHUQPynvTFpxKynIDp0OrM/QYDpMXVUXqDkVP8AyxK0BL0B
bJfzL1FfCuHRhKsrOfb/APEwyWlM+/XEpAp7YJ0k6D0rlWn78SSaQ/TIr1By+lMS04bQunRq
LV1aTkKf54loRQEVPXox6g+WJqI3Pb7QvbtiFhGJ6k0rWmoV6fjgZwTKGXSBVV6AdM+uIloW
ulsgAAhGImkOlRRfMk+OAbhtMci0C6T1LA1qRlTPCgpVUWpoK+kH/DCYRzf/AHGoYdKHrniQ
ZGlS3cKy+0eviKeODNWMpcsGmLeeeN4oalUBr06DEqm0jwPTyxYE0cMLR0LUr+/AZ8rHZZI4
5wmgOWIVImJBY9gtOp+uO3893w94+oeO/D/F7biMe8ci5DfvHNCJZrazLJGmsVCL1LZZY5/0
l1jnqPEt7GzpuTxbPE8dkpKqJSGc0P3GnjjDvbvyrHZTlSgrQV7/ALMWMXkwCJGxYHqDX/pj
UUjU8I4NybmV/wDotlttap/+NXMqlYIkp+dyPuPYDDXT6+bav+X/AA7vHFrm0sP1Ee57jfUE
FtZqQ2onSK5Dv3OCWuOR37r8C8q2Tjbbzvd5axe0geW2i9Ug1fl6epq+GNX+lVxV8F+JuW8t
Esm2KsFhC2h7y4BjjLdwhpV6DrTLB1asin53w674rvb2Ek8d3LEArtEPTU5g554Oejfhmo45
TITGpklc1oM21HL01xqVl7Bx3+3zlV9tFvdbxutrsiXIDxQOSZdBFRWmkavEVxnrrqm4kuP7
d93XcP00e82cdgoBNzcVj1A9aK3qOMzRK74/7Y/cgL23J7e49pCHjjQOtevUGoy8cX26OxFt
39u95OhL8ksofcOn2UcMTTp9uVThvXVGRLJ/bPvX6j2YN5tWj/8AtZXDilewyqxwbWvtMcO8
f2+2W02Ut3d8qsAkXqrJUu7L+XSCST2AxTVLHJsPwNyjdLAX97eQbXt8nqtnvGMbtGfzaKHS
D1GrDe+lcUtt8bb5dckl2Hjd6N1lQkXF3bfy7aNEPVpT0rhnVExoN5+H9y41HFeblyqxt7l2
AiBdzLGx7jIk08RilonjY7n8YT7rw17y75ze73FHF7kShqwMQK/mP+OM96L4+cLu2EF08ag6
FYhMyMgcjjXNE9dWx75fbPuEF9Zv/wDLt3DRMc8wajI/443K19daTknKee75LDdb+92GbKJS
HgTyCIukMcF6ZxWwbZydLuKGzsbuO4nP8rQki6x306QD1wf9PF9drq3XivO7aE3O42F8jE6f
duEkJz/hd+mMc/19bkxJZ7Z8jXFsI7GzvpY1GaxCcr4assjjV/qLFXdz7zt0j+800Nwh0MxZ
lKnpQVzxffW5ljjk3C9nlSSSeWab0gPJJI5y6ChJ6439qzsi/j2z5Cu7dRFb7lcwqCQf57R5
/wC0VWn4Y5/9PWbFfYxcxivJLW1hukvGFWjiZxJqHkhxv/p4zOdSbhtPNrL/AORuVnuFuxGo
T3gkZiKZ+tugxid61I1fEPire972C55HNu8e120YZgpBeVyB0IAOkUw/arrlLtXwTvG4cbk5
DLfxwwOWNvBRmkl05Vrn1IxWjmMFfcY5Bth9ye0uIYCaRtIjKNXatR5Yp0bwsTa8zvLEAR7j
d2VAyRhZjb0UVqq005ePfF9j9ccW3HkdvOV25buO81En9KXWQ17EJ6qfXHSdZ8tzmdLCy4nz
Td94W2h225bcron0yBwWp1Ls9B5kk4x9vWLJrr5js/O9lnGzb5NPJRAf0omaWIClQQimnpwz
+miYods3jkFjcmHbZ7i0upCAXt2eKQjw9H3Dyxu9eH6a579N0FxJNeNJ+pkJd5Z9RkYk0JOr
HHdokcpkZ2QVYtCPQSf8D38cdNo8XOy7ryy0YrtlxdwCYESm0dk1eXp6kYxesNmiXduVbNct
ewy3VlfMdYuRI6SuPNjn+3Gue4z9dWl7zX5Rn26SS53DdmtXB1nXMqspHQ0FD+GKdxXxycA4
ZyHmO7HbtsmSNwNc9zOxUxp10nu30xq9YeZEfLOObjxHkjbc14lzLBpUNbliPcGf7RjM7qvp
btuvOt0tkG6XO4XVpGP5QunkeMV8Fb/PBe2Z5VzwHaflbcYLmLif62C2jBN5cxSexHWlPbBN
NT07Yr26dcsvdxb5tm5SG4SeHcFejuSTLqU1rU5k964ftMZljo3XkfLd/ubey3K7u9wkTKJJ
3ZhU96E0ri5ovPreXXBfk7g3F13Abm9rbbjTXt1nK5ce4K0encj+Hpg67oyvLb1d1jdp7pJF
nb1NNICHJORLV64ubsM5xyz3F1OweST3ZFqEYsxFKdACafhjrGuZIO3sLyZT7cMjdATGpbrn
lTrgvcavLvst25BtB9izlurB5lKyJGXj1KOocDqMG6z9arQsrz6k1O5rXTUk1zJNO2GdYzY9
S4TffN1/ssi8cnv/AOm2aaRJG49paZ6U19SBn6cc++/VOcjFpy7l+376+7Pudyu7aj7t47sJ
TTKmfQeWN/eU2unevk3me+25t923i4ubXJv05ekdf9wFK/jjH2GOzavk35LtLJLLbN1vIoFB
WKNTqVV/2ag34Y1O419WZ3LdN8ubx7y9knub6Z6PczlnlqO1TmKYPtpnPniz33mXO9429dt3
a+vZdvjoI4bgt7IZBUZn7iB44Z2sz5af4d2n5M3iS/HDdxbbo4lAv5i5jQkfaoBDDWfGmWC1
VmzyvlvF+WXV3HuUybwHeO5vA4kMvYhyailRi3XPnL8Ozdfmn5C3KD2Lnebj2dQ/ljQoJHRq
08egxrxdRz2m6825HyW1kS8urzeC6xbfO0h/lmooAx+3PvjP2jc5etJzjn/xbvD2XLtG9jdI
ln1CUl3YekaXpUU6UIxnRFH8if3C71u+xTbHtO1rssV4pSediZXdDkyx+lVFehON89T8U/Xf
l4ZJ7ijOtQhJ1Vqc6VBw76LFpxvku88ev7e+2yeWzvo6iOVag6KUIofHvis1j108i5hyPk+6
RX+938l/c24pCzGiqgNRRFooFeuWKeNZNesbFyv+4m740Ny29LuXbYlAgl9iIs6JlqUMAxHm
BjFrdmfLFw8h+VuVSXvHbK7vry53Ri25WCkkuFybWRT2xlQ9B2w7+RedjN8j4HyzjFykW97d
JZSSrrVGoUZRlUMpKkduuH7NeYs9g+S+ecc2p9s2jdJrexfU7RpRgpb+GoOj8MGsdRLwbnvy
Fs1+bfjlzcPc37/zLZQJvflY1B0MG9Wf3DD1156pz+l1znl/zHHeRbPyS5vIZpyk0dowCFmr
6SixgA0Ph3xmdGRtFu/7objaYjFFeKoUNFqECSlVAILn7yfI5nF9vR1Hm+3/ACl8jca3zcro
bhJHul23/wCUDcora5EFBVGFPRWgyxrZRNcHM/lzn/LdtG373uXvWgIY28UaxIWByZglK07V
6Y1Ks/biHxhzw7CN+Ox3L7Q61W6Vfyfx6fuCn+KlMZ+7X1x1cQ+Uub8Pge32jcDbRTlZJIGU
OlQNI9LhtPniuLdLkXyzznkd/Beblu0zy2rarb2P5KREGupUjoK+fXDGK9g27fvmvdvim65c
OSxRWUAekCxoty8MfpZveC5NX8fPHOVd7I8PvfkDlN5x9+PTblNLtDTe8bNjUF9Ws6mPqpqz
64661+Ablz7le67HbbDuF/NPstswNtZflXQKANTsO1cEmDN+Qci5pyrfdrsdt3S+ludv2yps
7TokYChARkCdIyzwm8ZNWPCfk/mPEbe5j2C+NrFclTLGUV49a/mKsDnTKoxhTbGqsf7hPkFd
9sd43C6F4LQuI7No0SIrINLA6NJ9Q/N2xq2XyKPRbz+6bbEt5Zdt4243Mx192SYPEjkfn0LW
n7MYLyPaPmn5E2b9Y237o8b31w1zcR+2kkZllOpjGrA6K+WN2wfXJ4Cf5o+S5rmC8l324/V2
xdbcjSNJceoUAAbGfszz/wCXHvHAedHisfyFuSiaw3SYyPcmTVKzOxAkdfuAdlOH7a5/07vF
n6rq4p8s/InGtokttr3JoLN21CEorohbqyBwaVwbHfKq+T/IHMuUC3O+blLeR2zM1urNRQTl
qXTQY3epgkx2n5N+RL3i7cVi3G4k2pYSs9sE9yQQjMguVLhB9csZnUnwOud+S4Z8v854tt7b
bs+5GDbw/uNbsiMAx7qWBK17064sP4SXXy38hbvv1vvE25yvuNhqWx9qi6deWnSgUGvh3w34
Mi/538n/ADXbqmw8jup7JruBZXh9pIGkR/B0AJXxAxmWMd/OK75G2X5S2/jvGH5NPJLtjKH2
VUcN7NUDANpA0tpp+GC9RrPWT5Ny/lXKGtl367mv/wBCrR2YkpSNDQHMDMmnU54rVJ6of08y
kM6Fcq+odcX2VegcV+Z/kjj+1rte1bmyWER/kRuiTaQeoUyA0XyGHYrV7wv573zj2571um6x
vu93u8KrJLNIY/bKVCEECiqNX2gfTFJtWKPffm/5E3zbDtV7u8n6KSqyxoFT3Vr9jMoDsKeJ
xqzFqTj3zL8kbLtMO0bZukqWCD+RqRZBEtT6EZlLBfDPGfIzNX3HLv5p5RuV9yHbr+6bc+OW
663mb2pTC9X9sI60fUFLEEZ/XBa1fPWb5P8AMXyTyPbZdu3PdJH26X/zWyokVQD9kjRgFh4g
5Y38KZYLj3zj8j7FtcW12e9tFZ2oC2yyRpOVQHJayBjQeGC+Fi9/3jed/wB0m3HdLmW9urg6
5bmQk6qnsv5R5DLGr0vFaYHBbWr1A1FaH7elQcZvUGGe1uAisI29vqSRQAdq1wf9JEUVjNLd
JDpPqPUZ5npU46feZqrTc++M+RcKvobPe4kWaaFZoJo21xyKxz0mgzU5EYxLq2bjN/opqemI
n3BrRqfd4kE5dcH2itxYbBxfed+3KHbNptWur66OiGMCgyFWqTkDTD11IJa4r7aNx2q5uLW7
jaC5tpGimjlQqylTQgg9KY19ozt/Dna0bUY0BMgpqy8RX/DBLFlbCH4q5LLwSbmVsiy7TDcf
p5QGBlWhA16aZrqYDGftrXWczax7QuiSOVpITTMdKfXG5dX/AIAqll0p1PWtf3YVXRBt1zJI
NMbOD1oKZeJpjn31IMsanfPjHkWy8P2flc8aPtu9Myw6G9cJFdCyA93CkimM8d/Znv8ArJn+
WZudtuFjq8LIjHRqb0kN1HXG51HazEJsZI6+k0NK5dR0/ccN6jJhazVKAFWrRh1yOKU42W5/
F2/WXx3Yc2LxHar24NsEBJkiNSqNIp/jZSPLLxxid7V34yUO230kQEUTSn8xAJFa/Trjd7gv
FwI2+5aX2TCySAqChBDeo0zGHZYzzLXVJs99AZWmgaGOtF1CgJxn7RrARbVuUgaSC3kmA6vG
jEZdyQCAMH3i+tcs1lPFIY3VldQda9x3xqdakaKxUAkmnWvUY0y7rXYt1u4gbe1lmjlPpZEZ
kDf9wBzxm9YrKUW1XMrlI43Z6FpEVSSNHWvh+OM3uN88pJ9l3G2KPcWk0SSZxGWNkDDxFeuL
7RNBt3xdzK841ufIrSweSxsGRJgqn3SH/Mkf5kUHMjBP6fpXnPWdtdtvWaRkhllMRIcohbRQ
0oadMzjfXSkBfbPuO3yI15Zz2vuhmT3kZAwBzK6gME61i1tfjv4m3bmtpu0ljcC2l2y1NxAs
yHRMa5x6+laDLBb63Z/rrIbbse5bhciC0tpJ51Ul441ZiKeCirHD11kYnqW+2HdtteKK8tpr
f3Rqi96NotYB/LqArQ9cHPcrWY6F4HyyazF4Npu2tGX3FkETgUrWtaHLFezePFNJA0czRvlp
A/AHDGMTWVhcXcqwQRtLK70jRASzHoFA7k9hgtyGTXqjf25fIS8YTekszIyjW225C7CUrq0f
m+gzxznXTdnqk+P/AIf5Pzi6nXbYkt4IWpNe3NREkgy0H82quVAMb67/ABDZHNyf4n5hx7ka
bBfWTG5nbTZPCC8VyvZo2/xHUYZfBJrR7x/bf8hbXx2LfBbxzjQz3+3wkm5gVcwdFPWaZnTj
OpXfHPwZyrm8T3NkVstvj1Kt9dAhWYZBEAGo59cP/T1fWfLM8x4JyTi27y7Ru9p7F6GYwjMx
zIDQPG9KMG8salc64924tvmz29lLullLaruC67VpFZQw7Mrf6YJ1rWV6FwH+3blnLttfc0nj
261b/wDFpbnV/OcfwAVNK98ZvV1q8RW8e+EOZbhzG44xdW67deWkZlunmNYxHWgkBH3I3iuN
W+KcfnWo5z/bRyfjvHrnerS8g3aOwjMtzawaxKIVWrMurIhevjTGed30Xp5jtnB+VbvtVzuW
3bbNeWlknuXdxEupIlC6/UR/tFcavS8s13/H/wAdcg5juC2O1x6lorTTintxRsaamJ8e2Hvv
BzxL7Wm+Q/gDkvCoF3ITx7rtDlI2vIFIZJHOlUePqueWqvXGbaZzF1sf9r/Ldw40+4vdR2m5
zLrtdtuRoaT82kn/AOzJHj+OM809cyePJN62fcto3G527c4JLK+gf257eRc1YZ9fMZ1x0jlN
1z2dhPeXCWkC+5LMwjijTMkt4Ym+Y9si/ta5DccTN8LmO35E8Zli2eXIyBc9OuvpananXrjl
vW7+D1P08S3Pab3br6fb76CS3vbV/aubdxpdJB1VvPHWMuP2aE0alPAfs88LJgTSrNkMmAzz
wkRlYnwUMBU9TXr0/di+q0auqsRKf5Y6fX64z9UZWboCVcdVGdR2wGQUbBanOnQ0+vTAYdol
Vtdak9RT9wwqw0oP5MgMj3OEfIVEqeZyXzriVEKqumtWBqD4UxAxRz/MIJViCoBxDBynVp9a
t/DlmDhjV+DaE1AqQA/3Fe/7cZRwFKUGdMh5DEtPQmIe6wrXKg6jEq23xXwWXmfJINvHqtlP
uTsMpPaT79JbLp+3Ger+nT+cl9vxG05z8W8fufk634rw1yXltY2u45CfaE4UsGWSp+9BVh2O
N/WznWefahuf7aedQbTLuqxRSvCrtJYI498aK6gAfSaAVA745y0/Tli92+N962nYdl394xLt
u+SPFayrUMjL+WVTkpYAlfocMlovElS8p+Pd94n/AEe8vo1ksN2h/UWcwNVK0BaJq0o4U6sM
mr651jafJ3xbx/aeC7DzHj800aX5ihvbSY+4fdkQkMhI9IVlII8MHH8/tXT+m83EXwZwLifN
W3Pad7edL9oTLYzRNoowJDdiDpqDTvg6+cZ5951leLQ7TsnMUj5FbJuO32ty9ruFuMlkCuY2
IHXqNQGN/wBeci/l1Ktfmv41XiG/Q3e1P7/GN8X9RtLdTGKBnh1dWpq1LX8v0w887PBXnLRC
gowBHdcsZwU5UpUgamAqreFMSRqZCtVPrXMkf88QPCpEQYADpUk5DEjgIztqXSp6AinXFiNq
ZPCpOkv3/DEtR/cGYk0NadOowJJF6s6VrmVIqR2/xwI6oVVVGQBqxPiOmJHWur1ZV+1h0P7c
SP6T4t1BK51wkMZABUAZf8UxAR1OgqGBGY09RiRK9chWR+gB7Z98ER20kBncA1oamvQdMKLO
ootAPpn+GIWgZyGAQ6W6g1yOE6OqsfTQuBmxzGXl5YCZn0+ktQNmCP8AjKvhiRA1SqDQK5E/
44icsdIAzBbSKDr9B2xIggWinq1RUeeC1CfQuVSKD0noaeWEC1KaIBULkW8zgOEaxupNQtKk
9KHw+mBBFHUOuRDH6Yl8iAIYM5JUdsMQJPdcjUQVJrToRXrniCUtHF6W0r2DEj/HArTQhKkg
6gTmfAnzwHk4Gl9WmoBPma4iTMKEE6ZK5eOHEMEldLGj9ycyaYsKFKtVVqfrktMKwVAHAoCo
74ACQhqD/wAWYoSKgk9jiYtEWCLQDOp8wfD6YGiKxqdTKDXNu5p0wgMr+kLpqBll0H1xY1KT
toQ0NVNK17DFhpIZFlOdV6BV6YmTFRqYsSATkgzOBJKHWVavp7Hp/wADCkfqehRqrmQT3wHT
KF0grSlKgYUGSVCpGnU3bqOvniTluhGVohagFWFcjT/PCmduQokYDxrhRlIoAMjiTo9pfEdM
WlNZ0o4ObZaR/lgUd+xqqXqSMvq1D06tJpXp4Y7/AMp613zMfZTxPN8TW6xxu8n6MFY0qXNR
SgC5k17jHP8ArMrhzj5h3O1ure8Y3akTdgRTSvfpjErp9nA7F/toQSCxORA8sR10wGP3P51R
GSBQdh/FjXM9GvqvgHMfjY7dtuxbNftYzAD/AODHA6M7EDVqcg6qnqScbv8AO/LP3/as+bJo
bS82xtoe7ffJHEUEduP5gDHMig1Yxx8i1p+T7JyDcOB/pntXub8wr7iN95YDM54z38nmsh8H
Rcwj3C9sL1bptstE0RxMf5Eb/wAKn7ajyxq3YbGQ+cOJcgk5MZorI6LtglsxABlYH0qB49ss
ZjXNZ+5+GOe7LYQ7/uKJYsXVYoGkUurt9ootRqNMX29Zsley8f4ZuG27Rb73yKO75Zv6Ir2d
gGPsxVFRSuRI7nGtXjzr5b3X5I3q4hXd9pfZdvyjgt1YKp1GlXNati5qkn5elTRw8H+LB7Ua
/qZIQ2uP+WJHYD7yc2ovXBWbXnnwHsj71ya63q6ZZktPUsZBJDvUhqdMicicdb1/q3z149Hf
nG7x8o3K0tdhk3W1hpGXEyqEPXSciDkcscwtbDZtu3yxkvb/AIvFs0kUuoGRVaQ6fzh6Cgxm
jVvvslldS2llJsZ3hJSNbtR0jHTWwOWKRal2vaNr2JLtIrVY0n/mPDEAgPWg0r27YZBrN7nx
jj3Itjv7274+m1FFd/1EyUmIQV1asqYp8tSuXikCn4uljt0ontzpGq5grmCw8fPD3do69fKu
81XcJ1JoDIxj/MCB54DPh1cRvp7Pf7G4trZLu4EimG3ZS4dgckI7541Gbr7HisLG7Xb925Jb
xR7vHGHtbBire3Jp6Kv5mH7sYxrU+xTz0vNz3SIWlw7skWhKt7K/aBUV/ZjVg1FLvtreJFEN
vvrwGQD37mAxwgg9fUAP3YMSPd+RbtFyratphKxWV0je5oA1swzABPQU8MWJ4r/cdaxRbzb/
AKeAFpog8iigduo1Z9cUi56ZH4W2zXzzbJbuxMyBqlZ48q0yybKvhjWxSPqS/v8AkacksLG2
hWLaZI2NzIq6mDAZBSPtGMhHcwps9puN9t9tGm5OpbXoBeR1rQE9TihrxjmXLPlze+M3MW5b
IlpYwnPcjEyA0PUh8h+GNbIMehfGlzuN98WD37dUYwyQwpFEEDKBpDDLOpJNcFNWPCG3Kw+P
lNpaF763EojilUqS2s0yJGWJOkxbhvfGrKfdbSGbcWdXaC5QiFHDddPgB+3FiXljcR/qBt7X
fvSxx+uGGH24gvTqKgfSuDEqtxjt+N7PuW47NYQx3pJkPoq8jE0odPqP0xJk+Hcw5nyXf7A7
1sbbbbwiQLeMjqrkjMLqzBNO2GU2Yv8AnkV7Y7ffXPHdvt7je7mPS91cLqIUD7UB7jsMHmsW
48X+EZd/sOW7gINgO8bq6VeeZ1hFuGYnX7jKaajllnh6srUqm+eZuWycsjl3+K1tWaIe1ZW5
DnR2LSdWz74JBlea2ymWZQV9WbL406Y6SDK+yNtkj458bbfdbBtKSzxWkTJGEoaso1O2kajn
n44xZDatZdo2zfP6PuG72MU9yFE2iZAQrkA00n+E9K4MhYvlfypyWw5Bfcfs+Nf1W1Qe3E8I
cVBXPVVWT8MVsjnbrL/AF5cNzbeba62+G2klRpWVYyDEdX2KfyjPG63zPFrfQcct/mq4u97s
TND7amzb2y8KT0r7kigHVpp+GMrmvSN0nvdw226Xbv0m4RSxlEtnVkJr2br+8YMGqb4n5HeX
22bhYXO3223NtEntezbAqlaEnUKn1Zde+Gxq3We+Ob9+U/Im97lvdlC01ihhsAYhpiQPT0k/
cxH5sFkE+Fj8e8fsP/aOW3k22xmRbopbTtCKKuZKxkjp9MUinw0/K99u9k2S3vLPbv6hPrSK
rDJAxoWNAThWvNP7ltosm4vY7kLWNr6RiHmUaXb0AgZUqOvXFF6+cdkso5r6CGddayMgeLqx
GoGmXTLvhvWmR9n8gvLTiew7ado2SGZC0VuipGqrEjgVY6RXGciWn/r2xvv67jNYwSXjW2gz
PGpIANcgRTErWJ+L+M7U83Kru422Au97LDbyvED/ACRU6VDCgFfDDivw0Px/yoblxzcJo7GG
xTa5ZoY4IckIiFQSAMie+KzGebs18k/JfK9y5Pye6vr2FLU19tIok0KFX97H/ccamGc/lVcc
skm3jb7a5jVoJbiMSIepTUKj8cHlam6+zeT7jY8Tttri2vY4ZI7maO1V0RVSFDRRUgV+mMzm
J1Pwni78s/rEm2wvex2/okZF9sPWhfTSmunfF9ROnlfKflmPdtt3faLjhT3qRa4lmjAaOKmW
tzpBGf8ADhkhjp/teuoJtn3u0azSOVJw81xpIeQSA0Rq5aUAoMVXXw+fefxC35nu8JjEKC5l
ZRSgKsciPAYZGOOcc3Dtjt925Rtm23isLKe4jWR+lFZh3P7MHVdJPX2PeX+w8U3zYOM2GyRC
23AlYnhVdURTIMagk+bE1wZGdW8/H9mvOTzbvf28c1zZW6wWskwDJGj1dyA2VSepxYlFzRvj
u72FpN6ayuIrWSOQfplDsNLg6fRU0I6jFOVqs5D8U7FyDl3Hd8t7O2i2uFRLc2oi9v3VA1R6
goA8MjhsX5eA/wBxVzsUvyHPBtEEdvFZxLDdCNFRXmzLFQPqBXHSc+MSXqqX4f22x3L5E2Ky
v4kntZbhRNE32uFBYKR3FQKjGOo6znPX1NvXI98tflTY+PWbaNlnt2a4hEY0EqrH7qZU0jIY
PhjdqxvLS22Wy5dvO02kVtujRs4nSMVZ44apXxzzpikZ6tzxkOQSS8o+AG3XkECXO5/pDP7r
JoZXWQrrUClDp7Drinp6+GX4tNtS/BjrZ8LlvJWtZRcXjRR6JJM9U/uMTIQvkO1MHP8Ak9/D
Ef298qj2bl0Ns21R3F9uzpaw3LN6rZGY109QA3fvjXUh4ux6V8zb7t2wfL3EN53OJprKztnZ
1VdRBMjAMB4rWuMiX/Z6DsHNuI8i3yFtp5DLcTSIWXagNKekZlgUrl/3YtasfNH9we33N58v
7haWNpLcXUog0xQIXZi0QOQUd+uN/EZ49tQfCHFY5vlfa7HfdvYtCZZHtZ0YKJI4mZQ6OPVQ
555Yxev014+jRy3fz8xrxUaRsIs2kMQjGZEdQdXhXLww0R8sfOm2WO3/ACnv1tYxLb26SoUg
SiqPdjDuaf8Accsbxy4s9YWOP1EMSM/QBkTXzwVrH1Xw0kf2wbkCaMsF2H7dJMx+OM/z+T/T
4fKMukOFFVFMx188dV8vU/7bdk2jd/kmyhv4Eu4YIZ5xBMNSagh0nQetPPHPtvn4e5b/APIX
xJsPK5+GbxslrZ7cqabm+eGP2w7jWFKKpfSf4sE5Z+34ZTm/DuGH4EfcdmsY6tdtNt18YwJ/
bkuSoq1K/wDj7HLAz3uePQuK8B4nFsnCVm2a2aeO21sZI1ZizW2pjISPUamueM8xqeG2++4F
vfL90+P141bfpdvhP6iYxxgEgiqgBdQHqyOrGrF8uGZOD/H3xzHvU2ww35tJ5LWImKMysXne
NS0jhuy9cUg+2Ri/7iuJ8YfiWzcv2+xj27cL14RL7ICoY5ovcNQoALKBQGmeHnnfR1PYtedS
W+5/207Zcx2wtAI7YQwx1AXQ5j1UP8VKmvji5uL+k3FpDdcO4L8R7ByC64/b3tzJFBAoMaa2
knBcszurGnpJwSa6Wre9+PPjq45nsG93W129vPucEj/o2AEEtwER0qlNLOoLeRxm8s76vN8v
Np4tx3dd/vtosLK5tIJo7dYwg9+OlViLBVPrYCq43JGeuvHwZc3BklkkIUvMzMyr0XUa0/Dp
jco5+G++FeYbfxnlUc99tMW7pcqsEaS6aws7isiagw1dumM9R05e6/3Ncu2uy2+z2C42mK8v
NwiM1vuEhAa2IfSDGQK6j9aYJPNcv6de41PMea7FxfiXF5ty2ld1kvo4ILWKRVKxkwqWYlw9
Oo7YJG7XTB8acIh5xd3Q2m30bltxe5tmjUxBxKAXRKUUsDnTBjWvO97+Sfhjk1nvfHty2mLb
EtUkj2y6ESiV5o6hdAiXUhqO5oR1w/WMf+zR3t9xH4041xOxh45De/1YIjSMsYdZSqFnZnVi
amTp2xZDt1df/ddwa25bvTLtUBtdy2wSXFqyAosnuOGeIH7Cw66cUN+HlXwlwvY73g/OJtx2
6K5bRIttIyAlREkjr7bkZMGCnLFfaxxt49excT4rxiLjuxw7Nsthe7LcW6vcXkoQy1Kgl/Ur
F2LdcxTBjc+EXFeS7fvfM+ZOsUf6XbEhs5ZI/UbhYxIWckZGma4RKxl3d8F+TeAcsuU2CPbo
djSV7G5RUSYvFEzh/wCWq0+yhXPLEz1/66+R2jULqVSCQNIrUCuYzx15b5ux9K/2xbDx664v
yW+3bb4txFs8LKJI1lYJFG0hEYIyJYdO+ONm060m4Nw/5O+PN63A7DFtrbTMFs5ECpMpUqTq
KKtKjIrngyVm/Gu3nnN+J/HF9tHF4OJxbja3VoroVEeoJ7hjZTrRtbGlcznhnHOHfXgW433G
oPlu13Pjuyy2u0pc28r7VeAgCdW/mRFfVpjJzzw9WTlmcWX/AA9K/uatbF/kDi0t4jyWj24/
VwAmvtib1COvpDlajzxS+My//wB2T/D163Xjl5tkEHGNp2rcLRIlRttm0wXKR6aFGV0Yhqfx
AYsjpdeZ/EnIdq2H5S3Xh237A1ja3t000KXVBc2jpF60X7tUbU9NG6YepHP+V9sVvyByLZ+T
fOGx7BLs8EEu07usNzd5NJdRnSaOukVFela4z3Jhln2/y7bniVgv90ypBYILEwRXU0Cx1iLP
A4ditNP3gH64r+Dxv2r0oT7NxPinJ7s7cbnb9t3GaeLboEBFWWNgFXoF1PXywyYv3ryf5otd
j5T8Pbfzr+kx7XvP6lIVS3AqY5HZCklAurpUV6Yec0dSzMfOAgmaVSFKlqUFK5/QY3reevqj
47n2bhfwjByi02mHcL26nKXQlWryBpTGBqoxASnTHLP213s8bi/2/ZuS7Nwf9XtP6Tb7i+F0
21OAqo4gmdVZaD06s6UxcyZ459T2L7dk4XuJutm3dtsuLXSyT2jKolSg75+nT4ilMONV5HNw
XjHMvieSw41Gj33GdzmgiulWsk0QmJcalAL64ZARXuMUjGXGe/uN49xvYdi4dxmzVDulnE4k
nSIRtLa6dGpiPGUVpU0xvmeetd22zF78hTNu/wDbTsdzJaiKYXFmGjgQopaJ2i1FR/EFqSe+
OfJ7y/K6seQ2Pxz8Q8X3La9mgvm3JIxdIw0O8siFzKzBXJzGdcakl+V1bscvCLrYuafKG3br
e8YO07nHt8t1cpIo9mdg6rb3CDSuqgLDUfLwwUzyvRuQScD3ratz23ebixu7NI3NxGsfriKV
HuEjVpKHvizUz/xjvWxwcI2zbdnS22ncTCB+mv4mjS5Yf/biRdPuBxmCCcX1kXteE/3L7c9v
y60uJ9oTbdwuLcm8uLRtVteUaiTRggaWA9LVz6fXHTnHPrj3XkthBHJdRpJRBIQGatO+Rw9f
DXEyvvPaZ+P8Y4rslsLm22q3ktIikZi1K7+2pZvTTP1Z1xyjXV2qrarTig+Td4u9uhilNzss
U+5IqUVnMp0NpIoDJGM/HB9YJ1+Gc2blm3/LfBeVpvW1QJabbE7WKqSZUPtyMrGQ/a40D7aY
1k3FngvjX5T5BuHw/um9JZRXW4cfthFawxqw94xRAgOq51oPy4rzJWr8KX+3/ebfdLznXJ7q
yhSW4K3NxaQrREKh3ZFDVpqK1z74es1f/FneR/O3GubcO3rbeUbSI7lomfYntlMjxTUOgu7n
0suRJXIiowyeicfaLj+2b+rNxbke33UDNtktv7luGRl9x2RlfQ1M1PTGevOnTrjOXV/bBttp
Z7TyW9vIvZ3CynETzOhMkKqjFhpILdsx3xdXax+Gv+T9y4nvPA57i5puc9rPC23M9u8Z/U+4
pWOrL6dYyNcqYsZaGPkF3fxR22zzJst/FRZNp3O2KqtBQorIVWnhpJwRXXxv8s2stv8AI2+w
TbemzzGestlEdcKsyhi8bED0SH1gU746SeMSC+HpYIfkvYWmdI1W9hJL5KCWGZPbPvjn3HTl
9PXsXPj88Qe3+rHFI4Ueqki3IeM+4D2P80DLDfhRByJd0i+POcw8XDR7nHvNxSKyFJVWSSIy
ZCp9SM3TthzAstnims+J8BbfyV3KC+jjEs7anV5YpQq6mzOpSBg59b6+XHx2y50nznu818br
/wBeWFltdTH2PaZQYgo6GjV+hw2+MzPVPye3366+HTZ8SaVpot5niu4rEkuIP1MxdDpzpUqT
hnlCh/uJintfjvg43Sp3OJxHPMxDTK36arerv6lFcHG2VrqT7eIvnSXlsPxTxm33Sys7q01W
xG6WzH3YmWH0LpcUGtPuKmhpi/nGf69Sdf4XXK4uTbx8T8IXiJmmg/lpuIs2oBojopkIzUJK
MU6zWup62nI9m36/5rx1tq3Bdu3C02yb+o3BUTO0LNGjRlD9x1mq1754p8C/P+Ge5jyC14Px
jcodm41fXQa1NvdbxIri3ZpqozSsx1NTVUEDyxSe+i1lf7aYuUQcX5BNsjWl2quurbrrWuuV
YzpCsuS6wNOeWMX/ANmsyIP7ZJQ15yywJjg3O8hZre3BA9Su9VXyjd6eWOv9PmUSf6tPwzZu
T7L8V8wbmAdJphLN/wDKfUaqh1P6iaEkAjx7YL/tfGpfI7+ebPyTf+YcM3fY3kudgWKKSa4h
k/kCT3A3uOQepTIH8MF+MWZfXnHz7sh5J81WO0ba8Ru7yzghLZU163GmQjoQPHti68jHM2+M
LPw7deA/JO22PIo1jWKe3uI54ifakh90anjPfTpNcFlsPP8A7PofkXHuUbr8zce5DYMbji8U
EUsdxHJ/IDAsXAoaEutD59Ma+0+plyvAv7jJLOX5b3i4tJFkjZLZWeMggypCAwNMsuhON/X/
AFc/r748tcqj0zBPfy74oMOC2ggHIdKdaeNcSsCoXRpBp2J6kk+OEkqdn6fw98uuC1YIK7UC
gh+7VFK9sYOHRaBTkFFSa51r4YD8DJrShJrmPIHCA6lBBBJY1CkZgDzxC0LerLOhqNIy/YcL
PpDSDpXOoz8aYDRhSsPqrpPfocDZIr1oVAZgMgTT9+FkXtVWuYcdF7nypgOJKIy1qpIWop1/
ZgaMFDJRSfM6cs/rhFj1v+2/ke3bN8iW6bgwt0voXtUlI9HutQItSfSDTFW+fhrePbbLxv8A
uRksdxcR/rL2aeyY/niuVZ4s651rp+ow922Q8eStZxbmG4XX9w29bTeX7i2g923tLQtpRvbQ
MBpFASKmnfHTvmTiVnnqWVNecaj+Qfji32XZbqFbnad2mkvYGbToaOWai0Ga1D1XGd+t1Tr4
rg+dtgt7jZeCbAkygGd7WOeQhVVkiRKsfy0PXF/P4rV/261Z/JXxrv1z8L2my2bpNe7JS6mj
Umk6RK5YRnxo1RjP/wBfr631f162s9/bDxSQvJyaK4j9iMS2cloSTMrHSwJHYHtjl1z/ALa1
9pOM/bA8q+Nd7tPlWXYJlDy7vcPdWMsZorxyOZMq/mFDUHHf+v8AtzHP+flaX+5XcdmisuOc
SiuWl3DY0Mk9ENBG8apG1elSFPTpjp/9fzm1fbengbhdFc2Wp8zTzxwPXOIfc0KupsxlmMgO
xOBkKM2ssqalpTWtMwc8QHVCcqMnQqcSO+ssGBqFyo37sVhCr6SQFBBPfI4hhKn8sqCCT0PT
LwGBEFZWDKaECtB4nKgxFIr6zRiGDiras8q+WLUYkAEAnyNch9MGISFIx6QemVT1YYW5PAZM
S9fUOop0+uJiwSvpWrHIZGuVcWjDITSoNaZmtMBFoLAADInMn/LFrJjq6KpKD7mOX1GFYJc6
AZA9VpXFqIBUcBAVrlUdzi06dSZNVaUzDZdsGqH1A1TMrSoP0xHSVRWhp46jlXBSR9sgZGp9
KMwzBPT6YkMR5VX1KB6jSoHYjAAqpUjTUg5BadPDP/HCdGNXqEgo3QL/AKVxaYjC+jL0sKmp
7jEtOK6tX21FKn1DxBwgjIlMhVSKkDx8MVX2JlLKFCjQRpoR0PmewxUYeABTpNa1zpgMsPKq
qwIqfxyzypiRaiXLkZL0Zs8+h+tMWLTGqr7lBUilBXI4sMJCWXMEnv2xWETg1DMCY/4VHU+e
LR0EMaKXy76SK1GBgi+osr1CkVNKVPh+GHCZmaRAyrRiKUalc+2FGKpIKCqtSjA9a4G+TONW
kUYr2p+zrhrX2PKHKgBdIqP3DuO+MsdGQSFQyZlRqr3A74hSSSooa+v1Zdj4YtUMK6noKD/b
6c/PEjROimuRNencnywkmQVIFKnMAHMV8cSjnujRKk0bIs2XT6YsVxm7lw0zMi0zzAxoARgG
rSuIuj/6e378KTBHZQ0baQcqg5D8cA10bUNN2pesmnqgBNfHp4Y6/wA+/rRZvj6s4B/9/d1x
GEbXNbWG0LGRbS3gQzCPr6ajUMjlXGf6d7dZnH1eJ8qhuod7nS7vhfXjOWnljNV1eI7Y5St2
fpSHTryBPfUcj9MawHlcjIdx93XG4VhtN5ucV3GdtaX+pNRYBCSJaHw046TrI59cS/K/v3+R
9lvF3q/lube9qP0908j+6F6fm+3zxy57mn6+eNDJzP5l3HZpbh9z3BdvCeuR00I6UzoxWpP0
OM9/0n4jV5yKvjfPPlCzC7bx6+vSGOr9PT3Tn1IUhqeeNfeZ8Mzm2/Lk5RyL5A/qSTb9eXT7
jbEPE0zFTGeqlQKZYNdHPuvyHzDeZIU3Xcp7+GI/yo5GqFJ60Aw89KRteM80+cd0tEj2GTcL
uCBaJJoVlyyA1stCB4Vxd9K8YzvLLb5Q3HekTfZLu83GqmFNJOgk1CBFr3xTvGPosN92H5k3
DbQOQWt81hCoCNONEUajoAOueOd62tTlVcQl+SIXubbjBvEk1e3N+lWq07hjTSMN6Umpdk3X
5C2HemG3NeR7rK5SRUXXI7Mc/wCWwOWNc9+elpeTbl877ltko3lL+028j1jSIFA8WpmcE6U5
1JxrmPzvNYrabMt7d20K+3GxgU0yy/mFc8b6/pz+mPrf2pLjmPyZx3dpb26v7mDdJxSeVwHF
R2KuCuKd+GxPc8/+VuUW8b3TXF1tyOGNusYjglIOQcqF1D8cGwWV6Vum5/N11wuSFNnsdmsm
g0s0NA6xFc9MerIHvjPVM1803sN0ty6zqRJGSr5jI96AY1Flde1bxfbReR3lm/t3UB1RMADQ
juK4VGjuPlLmdxvVtutxuLyblbJRJj+Woy9Iy+uGdQWeLXbfm75Dstwlv33P3ZpMpRMA8Wn/
AGqRQYvtMUn7FvXzjz/dGjmO4tEYHDRxxoETUMwSoADHFzY6TmWKxvlPmku+R73LfyPeoumK
Vgvp/wDpAoP2YLjEjS8e+dtw2+WW63barPeNxlP87cLkaZPT9oXqAPIY1eZjP5aWH+5tmuIW
fj1tHGpq5jYGQDyqBSuM/WGqvkv9zPLby5pscC7bY6SuplWWUv4+sZDF5B6zWyfNPPtou5Xg
vhOLhtdybtBI2rrVQftOKZjX1qHlXy9zTkhji3K9H6aB1kNnCqrGxBqvuIPvr4HGftDY3HG/
l35j3PbXG07fDJaWq0E0dsAsagUAOar27DFeozJUGy/KfzVLZ3F3DH+ptLdmMsjQqyau4Ld/
p2wyzF6zm8fOHyDuN1C11diJrd9SwW6e0moHIMMyfxxbIeVm39yfyAUEK/pLdgaSPHEGI+oO
WeNTGfdcm0/3B82sJriYvFeSStqK3K6ivYaSNOkeQxXMMlNdfP8Azy83a3vBcRa7erQW8aAR
55ElKmpplmcWyKuN/l75Dsd0ub6W4P6i8zliuIiYRTppiNAKDoRjMsrX035Udl8mcxsdzut0
sd0ktb66qJ3VVCaa1ClCCtK4ZcYs/Sg3ze923m+l3Lc7mS8vps3nlNWyyAA6BfIYNMcsFzHF
IDJRiCBpU59eoOGe0W4+g4v7kLew4rYWO0bcZtytoVike4yhAQU1ChDYeuZPyuetYzcP7gOe
Xm62+4tcx28dqapZRJSLPI+4DUsG8zli5xrmW1Z7r/cvzW5254LOCzsjIrKLqJWDg9mXUSuH
JDeXH8dfM/JNjhbb7DbYd1v72XX7joxuZWb+N4zU+Irg6xWOzknzl8jwchglvrK329bUsRYt
HUN0rq1HVjPNjnYbef7mOb3Ni0G3WNnYSyLRrqIMzgHLUoc0rh8XcsjM8C+WeX8au52tqbl/
UDrubedS4ZxWr1X1ZeJxrzPTJ1juT505lHyp99eOCByuiWz0H2nj6BSB6ssZljM5utFc/wB0
XLJYmS226zgypqQOevddTYZI36sIP7g/kKz2yO7vtht2s3Wkc8iyRqT2zBzyweL1nIfm6033
kqX/ADbbv6jZ2wP6HaYT/wDGjc//AGlG+8in58WapWqT53+J2CtDxGP3AaofahU1rnmE7Yz9
IvtVlyP+57ZYLeBeP7f+qcLWYXdY418FSn3H92NfVWVhY/7juYtyFt2mWL9Kqeyu2aR7FCam
r11E+eKmLPcP7peRzQyRWO12lnqWiyai7Kx6mhOnLBMgrLcD+beS8TW7jht4twgvJGleG41C
kh6tqXp54dl+RzzifZvkzjN5zC75BzbYo9yadBHaQwqqxxEEZiNvS3hUmuG5Tljbp8z/AAxH
J/K4kEkVqe8YYVoRnXWPV1xmcRasuWf3Q8ftEt12LbzuQYapnuaxIhHQDI1Iw3laxFn/AHK8
pHJG3S5ihkt5I/Z/pgqsSpXUGDfdq+uLZ8NYPlX9ynId22yba9r2+32k3PpluVYyP7ZrVAKA
At44pjNdHx3873uy7TFsO28aju7mMGRjblxI61JMs2lXqcWRbQRfLvCN35NdbzzfjyTLGghs
YIkVwgBq7Shiupq+OK878JcSfMfwdChuLPjDC6io0JESIAw+0hgx/wAMH/NffXPB/dTerbgy
bJbyXgyhuGciteg05tXxocORTajl/uP5bt++S3W7bKkVlcRKn9NlDRUUColDsCxrXoRhyD1Q
c3/uG3Petnl2faNpttotJ/8A8alSju4BrQDSoUH9uKSK66eP/wBy+/bPw5NpFmlzfwRGO33C
R2bSDXSWU/do/LnhnGtT4eLbpud1fXct7dzPPczuWmmalWdzU1xrzDs/Atm3m/228t9wtJTB
cwMGhkXJlIP3A9jili+XuNn/AHRcjGwNGNqgut2iXQu6tWjHprKAU1D60xz8/LNZzhfz5y3Z
90u73cpf6tb37M95BMCEZh0KAfbTpkM8asmCOb5M+cuS8tgj24QptOyqAWsbYklyDVWkag8P
tpTGZl+Gb3nygg+c+YwcIPErV4rTbUiaBLhUrP7L9Y69ADU50rhuStT/AGnqP4j+TNq4LuVz
ez7RHuTTqFjfVpkjp0MbMGAr3wZpnkyNR8n/ADrHzHZYtvXYIYayLIbmZxJKijMiM6VpXviy
H6u9f7lP6fsQtePcWt7C7giERvAAyKqgDWEVVY16jUx88ZyRW1heDfL24cd51ccr3K3berm9
V0uWmbTKGegDI1NK0A00p0yxuzTL54XOPmPeN/5vFymwUbPdWcaxWwhbWwVKks7UXUc6dKUw
X4xzz3W3n/up3w8dVrXZ7c8hKaRurgFAp/MYwP3aqeWKSNvCd+3a/wB33e63TcpTPf3JLXEz
H1MzGvTwxuxjI4YqsRpJLLmBXOo6Uxk+vpXivKrSL+2/c7H9LcySr71u0iQt7AkkYN/5Pt9P
fzxniZR38PmuZizOZFpJU9TnQny743p5njW/GHPpeEcmh3qK3S69hHQwSErqEg01BXoR54Lz
p1y855jd8r5RuO/XEKwNfOHMEbakRVUKKE5nIZ43ngnElt/b0n42/uHn45xVNg3ba493261o
tgrsA6r1KNqVwwqaqcc7ydXt3/dfePc2VxFscISzZmaASMWYOhTSDQBaA4cirH8Z+cb7avkD
deXSWMc0+6KTLaEsoRWIyqK5qFGK8ieFzb5zvuR8Lj4zJZwW6G8a6nulYksvuNIEVexBfM41
OHPvLBci+dbnftl47tN5tUBtNjljllklJb3xboFQFOwI6+OOcrpvrf3n9zdjPxxoZuIKbCZW
hhDt/wDEYgUKj0UGnwBxZDa835h8xXm+8B2rib2UdvDt06StcxuSWEOr20CsMqaszi8T0Hiv
9xG2X2+bAvItvSG12y3aMXMdXKzOiqJdLf7UoR2rgOPRORfNXxbBst299vMe8wTxMqbYkJLS
6gRoFVUfiTjUgs18TXM3899MYjRyzIimoVWNVX8BljdxjmZMT7VcvDewzo3tvEQQ38OnMHPz
ximXHr3NPnyPlvCBsm6bLbXO9BVVd7+0roIPuRLTUrkjx0+WHmGzVfz/AOXN45Nx/i+3Xm3f
oP6WFeK5GoNc6FEepAw0gejPrjEw2NhL/dNuy7vHuC7JA5FgbX9P7rZyMwb3NVKaQR9vh3wz
MVjwia/mk3Ca9qBJNK0kiAEAlm1NTyw/I55z4e6cd/ud/S7Da7fyLYYt4uLILHa3ZZFbSooN
aur+sAD1L1wfVquT/wDOZ5QeaNvYs4G24wfpTtJJA9kNq1e7111z6eWK5+Fn7Wu4f3Rxzbbe
7ZYcft7C3vIZIlIkzDyjSz6EVR9p/bhyMyV6Jw/5q+N5OLbYq7vHsX6WJYJbB49VDGAG0kKf
T54zJq3GC5B897XtnLN8uOHbI99tt/aLHfXZUxRm4QMqThVFQjBqEt1xqcz8iT15lwr5e3Di
/G+RcejsYruLfonUSsxQ28kkZjPT7l0tkPHFOYbzsx5uSSVVnqlKGuQFPpjbM88e8/2/fKnH
OG8b5BDucphvJ9E22qVLrMyIwEdFzB1H9mOf1trWn5Z/cvcbpsFxs2z7HBsv6703VyrB20n7
iqKqqG82rjpf5yTR7XbYf3VSiwtDvXHbTcd2t0Cm91BCSpyYBkbST91AcHP85nrdjFbN83Xr
/JV1zbcdqtb154zbvZCOiLECChUmp9wU+4gk4rxLGOY1nyp88tyHZf6RccZ/QbkjwXMN3M2q
WDQwkDIpRTmKftxjmxdc7XZt391RSyiuL/jlvd7zCgWW/UiPWQKaq6WNTToDhk2t2PMp/ljl
Nzz8c1aZV3ZZQ8QijrEkSroWHT1IKelicb75i54k9XPyL81S8l3rY97stsh2fddrf3xuFv65
HbKmvUBUKRUVxcfWzK5d/wA5ep03Cf3Y7lHBFN/QLObcPbVZbsOyl9JFRQLqAbr1yxmcT8uu
I9u/uT5LY225bi2wwybdf3wkNw3ue0jvGokhDgUY6UHX9mM3Bz8eqDmX9wE3I/6bYXOxWttx
21vIr28sI2LNcGN6hSQF0LQ1NBnjf15kUaxvlb+3sZf+naglQKQxUHfKr9MYn8zqosP7i7DY
r27suO7Ij8UnYSWm2XjDVDLSsugrrojNnpNc60w/SKbQcz+feabhZ7Ndxbau2fp7w3233iq4
jdI0aNo/5npcDX6sXMisdF9/dZuTWcjWfHrKPd5EZBfNWiyaaF9NCWHlqxSTXO278ML8bfMu
9cO3q93KCIXkN+zNfWZPtxSSMa6kVckIPSnbLGp/Pa7ST6+KflvyFu3IuXryi+/88Uivb2j6
nihVXDrEoYn0ahn44OpPhiR7FuP9y29vxpZL/iVv/TL8NbxSPra2lYL6wBSmVa0rjPM9Hc8Z
XhH9x+/cY2QbLc2VtulpaVNiJgyNFGWJEQYV1Ba0UnOmN3mDLJ+3Lu/9wfLL3lm38it1gsjt
cZhgsYV/lFJiDIsjH1Or6Rl+XqMGeeGc/lfb7/dNv91tNzb7ftNjt9zdRmMXa6pGQv8AfkwC
mo8cX85N9N1UcJ/uQ5Fx7ZINlvbK23SG3B/SLcKQ8aV/8Y09QPy+WHvmazLjGfKXylvHO92h
vb+OO3gso2gs7O3HojEhBY1PqYsVGOn0mNT/ACx0LlJUJr6MiKVJpjnapHtPEf7nOW7Pslvt
1xZ2u6wWqrDBJcBhMqrlpJUgMFGWeeM/WG3xxL/cPy//ANtvN9VLYS3FmbJdvKUhjgBLItQd
VUYltVfLFv8A/Dnz8s7wT5R37iG27xt1kIZrfdovZuklUtVmRl1rQhqhXPli6s3Y6Xcx2fGf
yjyngaT3O2D3tukKRXVtcKzQswB0NqXo+XjguHAbR80b/s+9ch3C0ht4F5IJDf2gjPsKZFI1
Iqn0sK1FTn3xrzWevIwayyFtTMWIBKmlNQ8DTyxXFK9v+N/mr5K23iE+3bRtMO52PHrcOzmN
vcjiJLamCEekZ54zZ613fNZqy+duZW3OLnlVn+mtn3GONb+wRGFtKyLpVnWurXT83XGupHP+
fVsS83/uB5ry3bf6Xce1Y7e5DTm1VkeXSaqrFmf0qwqKYuci93/C023+6Hn1ptkNpO1tdtGu
lbu4iJlYdFL0YKfqRnjJuvJuU8i3jkW/3W9bvN+pvrpxJNIQFHpAVVCjoFUALjr9/MGOC3lk
WVHjJArXUPuFMxjnWnr0fy58x2nChaiWX+kzfybXeZYi0qEfdGs2QyGQJxnmxWbGf4By/wCQ
Ns39r3irTXG43IIuLajTCfT3kTMHxr1xXr9r4jm5vzrm3JOSG55BNKt3aEwxWSI0UcBX7tEX
Z65k9cdue+ZPBOvWn3b5V+ZrfhNtYbhJPBt91/Lt9zkhZZnjGWn9Qp/AnGJcvg7v78VPxhy7
5I2fcZJuJJLdtKv/AMi0VDNFIBnrZK9fMYxetby4z3NeWci5FvEu4b5LJNd6mi9pgUWAhs4o
4+iqMb++zI58c/lDuvNuV3+y2uw31/PPs9qFa1t5fVHGV+3STnkMh4YObh6jX/GHKPlTZ7W6
k4bDPPbsnuXMAhNxGApzk9s5Bu2WM7HTfHFsvyZz+05bNyS1vZLjeLjUsqSqG1KcmjKMNIVa
ZDtjXX9ZYzPGx5n8y/M1naTbVyK1S0tdytmXS9oq+7FMtDpc1X7T9cMs/AsryrjnLOTceM77
NuFzt5mAExt3ZDIq+Q60wWyNfMxy7Tu2829+LzbJHiufd92JoWcSq5Naqw9WeLrrflc+Np8k
c0+U9wuLHbOVyTxfyFlitHiEHuoaFWcIAHPgTjX8/wCnnjPPtajhsHz/ALZxySPjMF0NovEd
lSRFkCVB1NGG+1j/ALcc/t66/wBPXkrbru9rvIvf1EsW62s2p5ZCROsympJJz1V61xu21wnW
VY8y+QOR8z3O1u9+uTcy20Rt4TGiqoFdRyUCtT1w/fOcO+622z7l82bVwW7l28Xi8ZVSJWjA
k9tHHqZQRrQUPbHLm+ul635eUT3DyyGR2JGfrJ1EknP1Y7Xq1n4+ESq7yKFUlqZLTqOmBfL0
La/gvm25cPn5RDEiW1tbtdQ2zN67iGP7/bplVAK59cZnfo65sUPCeB7ty3e49s24D3SpZmyC
ovUyGvZR1GDrvDz/ADtmpedfG+/8K5ENn3ZopmmjWa1u4M4pI2Okt4ih6g43Z5rMl1o9z+CO
U2XBZ+VSyRaYVjmNlmJjayEATL2p6sx4Y5c37XxuzFP8ffGW8c13WSysZo4IrdRJdzy/ZEhN
Azfj2GC9fhTjZqXcPiLk1rzi54gkQuL6HQ0c8BLRPFKupJPELT7vDGrcg45+18d/LPgTnPGd
qbdbuxElpDldPEwkEYBpUhc9J/ixmdUdcu3ZP7eeeb3taX1jBFGrqHjS4YozBhVcuwbscHPV
tb+mT1ll+NuUvyl+OPZSRbvAwSa3K106hVaMuTKRmCOuN9dYuONdHNfinlfELmGLeLMrDcgm
0uYmEkT9KqGXIMP4Tgloxabf8Gc53Tjbb3t9kZFTpa1pOy9yinrTvTFOvTguCfCHKuZ2ct3Y
vHbrZymCVZxoZWBNVz/MKZ4Ouvch+n5WW+/28cw2W6sRdRxSW+43CWqPC+orI59AbIaQcM0T
2rmX+1bnYQrFcWjUqFYyEGo6VqMQeW2PFuUC8uoraymkvdvcpcxxhiysjFWzWvQjD34z7PXd
yTeuXbhuVlHv0s77htcH6e2lmBWdYSdSrroGbPNT1wzvxu38xe2/xxzu54xd83iLSR7cNcrs
7LdMi0LujfexUZnPDer14z1/LJri4Vt3Md53X9Hss9xFdXclHkjZkWpObyMhHpz7459dZ8t8
8bFzL8d81uucXXErmRnvrE+48zyvInssoYToGz0MGqaZ41erIOY0nL/iH5R4/wAZn3SHfP6r
ZWcZkuLe2llVvZHWRQWOoKvXvTF/O0XGK+OeF8x5DupttnuXto5wPcnWR0VQATVtJofLvh67
yt/TxJtnEefXHM59nRrlt52e5aKe71sxi0t6ZEcmoVuoOHv+lxniZK6vlL4s5px0DfNzuv6x
3N2vqCg1poWla9sc5qnWOo/AfCZoZ4rHeLm93KAVfSgVFoO2VP3nFl/Z+zCbD8M8m5Bvd1YW
U4t7G2cpd7jKCYQ3dV6FmI7DGvvcOh+R/i5eMbnZbZtl3NuVzdALHDGlaydCqgft8sU6v5Zn
OtFaf237hacdl3LetzS1uwgm/RQoGCMR0djlXBt1fCbj3w78c3sduu4cvilu58hZ2+kEP00g
t6iR/wBuG7Q798+CuF7ffWtvd8j/AEMEq6tdxRXanQLTL8ScU07rrh+BeBXu3S3238knuoLc
EvPF7ZQFRUkt0p3wSWKu3bPiv41v+JNd28t5uZhjfTczyMgLL1YR1w9SqV817zaCDcJ4I1CR
JIwrUnIGg/bjUxu3XPFE8k0USJraUhEUZkkmgBxesX16JtPwd8o38Ikj2pYY5BXXO4QkAZAq
xXGfsPq4pfijnq7wNpttvkuLzT/NjQ1RR0q0n2afxxf9Dkdm7/CHyFse3NeXtpFHbD1SJHMs
rV6aaDF9rVkXnxh8N23JrC43LfLi4Fpbao7aztECe5QFs3I/LSmG9VdfDR8E+I7PcIdzvxfX
lssUjR21lA7ISVqA7EHrUYtsjE5ZrknBPlXZT/UveuorWJyYiblpJNPjoLGn1w892NY4f/ut
+WeVRtuFzZTT60pHcXbUZwPASMGpTywXtSYo1+M+bvun9ItNskurtKEhF9CA/m1E0UedcH3P
XGrR/gv5DtLi1hmt1t1uJQgkeQOKt2bSWoMEtHPMjs5d8Nb9s13a2NhK+4386rI0dupKAd6U
6Ux0nda+a8+3W03DadwktL9FjuoQUkQ0r16NjO1mxz2q39xeRQWySTy3DCKKKIFnkZjRVAGO
/H9M+VPh7Ptv9um7rtaXe8bzFYbhIgZNuVTK6hh0Zgfu8SBjj9+qPhRbH8Hc03Le5bNVSzsr
Wnu7nOSEAJ/+zA+8lc8V/pcyHJnqfl/wrum0Rk7Vcnd4qgTTRx09X5QFBYEHHOb+VMWVj/bd
vC7S1zu+6x219OgdduX1tq6gFq0HnjW03Ge458F8w3feZ7SVVsrG2b/5N9ONCEk9Av5mp4Yp
VJiPn/wju+wW5urW4G52cQ9dxCukK1aGtScsMrPXrO7X8ac0vNrXdoNtm/p0Q1vOU9NFP8Xg
PLB9vTeW+4hsXyxzzaRtY3RrPi9kfaM89Y4yVOSoVoX04fupziq5R8N8q2a+jg22X+qh3VIZ
kViTKcgtSS1a9DinVnyJF9e/29cv/ooudw3eH9ewBG2gtJQ9xX7CfoPxxTqnI82g2Dlyb82y
2FtdT7gFJFpCWqoXqdK9B54fvcZvHrk3faeVbVun6Hcba4tt0JVViclpHLdh18e2Lm35rOe4
3158J89h4fLv29XP6eO3j9z9BJK7ShWoPt+xWz6HFe7fh22QfBPgXle9bam6S3ke07fIK20k
+oTOg/OoWmkHzOD7VmyOL5B+It74ZCu6i9W/tn//AIuIspVumZYk18KYebYzcrzyfftzdleS
9nkCmg1yuQPGgJxud0/D0T46+Lef8ttm3KK5fa9sZSqXs8jIJK900+ojxPTGeu61XLvmyc/+
N90kvLS9ljM3o/qdqW0yIO7E1BHke+N89b8sc86sp+cfOkO0RbrcXG5Rbe4Dm6ZdCaT56e46
Y5ddQ9TGW5N8wc33yGCC93WURQCkRgpCzgjrJppU43OlP8s9tMm93u4RLYe/dbgxLRrGXeSt
M29OeVc8HUxnj1o904x8pf083V5ZbgbZFJeWT3WQg+LHP8MV/pK6ZIr+M8Y5xuPvS7LYXc7q
NL3VorgLTsHHXPtg/wCgnKy2q9+S+GbqYLY3tlu9+umSPQzPKa9CrqxJ8MXX9N+Wtnw195zX
+4C1tBdXv9QtoBRmleFVWngfTl0w/afpzyrSxHzj8k7X/UIrl7Db7WMm2cVtffbuEC0Z+nU5
Yzev0bHk1zBy3a+SNaslyN09whGq4uS/ejfdXDbVIsd+2n5OksRJvVruTWkbag92ZXRSR941
k5+GM3tfX9FsX/3ppEkW0rugt3XVAlv70aEHuAlBnjXP9B9LUm0c7+SuL7tIsd1dW24XbKs0
dwPceVifSH9xWOnPLG9lV16v8ibt8ucS4tYX13yZZLvcfRNaJGitGWWtEcLnTx/ZjGtZ+Hkt
i3yndRGC1O7TQuDKPbNwVOrMNpHXV4nGvtJBeFbb3POYN3VIRervEje2YkaUXAoM1LZP5muM
/ZfXxbbnxj5T3iwnvd3s9yvbey9ci3fuMsa0+4auuXfFO9Exkn5LyCCzk24X1xFYSZG0WRhG
a9igNDjf22NZHHY7bf7ldiCxtZLm5bNYYgS5A60UeAxm3DXVuXF+TbSFlv8Ab57XU1Y3kjZR
l4EgDBOmbyutvtfkO52OS2sotyl2iTUxgi979Oe7Up6WqcU7PPDh2i25PDfPJt9vdRXMSrrm
gVlmQnpmnqBJ7Yze4bxhchs+YG4N7vsN8biT/wC3vlkLsR0AaQf4Y1/0c51lyPQ/j74G3LkX
Fn5Fve7DZdulq1osvQqMmdhVQq1yGC9Xr4a6+GI4u3IrXlA2Xi25Tx3NxP8ApIpYXaNH1NpD
VFDTvjpZIuJrUfNmw8w43uVjt+8ckn333IlnWNpJNKMDT/xkn8wy8sHPUnyuflSJF8sb5tNt
YgbtfbfcsqQQymYxtU0UKDRWHbPFf6TfG7FBd/8AsvD97NtMJtp3W1JD6CY5FqOxU54ptcb1
NxVXm57juu4SXm53El1eymsk0jtIzKBlqZs/wxr4dJjSnb/ku/4/HaJBuT7KsYMUP84QaMyG
0/a30xm9xdT8s9Z8b5DuRYwWc91pPtymJCaEflNBWp7DD/1xyv8AK35DuXGN62ueOPcLKe0l
ZQY4pkZDTxIIHfD9tP8AzekWfIvlW0+OLjitns839FudRkuBbO7aHIZ6MR0J74583K11xsyv
ObDi2/bnLJDt9nPeyxgPMIlZiK9NVAeuL74xz/IO58c5Bttykd5ZSW8zZhJlZMgPAjGp1HTn
m6urvjnyFc7dA8+3bjJYWyVhWaOZo4lYZmMMCqg+Ixc9yVdcZfVdtnKeQ7Dqh228ntC7fzjb
yNHrA7HSc6Y18/A0N3yHfNzvUvLy6mu7pSAjSu7vQfatWJOWLqiRrby6+X2so7q6bd0tYhVJ
GM4RVpln0UZ45/fG8Hw/gPyBzyFrSxV22+x1TM0zt7Hu16ISdOs1/wA8V/ppvMxmN8ueV7Pu
A2vdHuoLjbjohhnZ6RkHJoqnIUGVMa4lrFwU/wAjcwuZIxNvF9KsY0xhpnOkkdQQwpjfwHBt
vHeRbwHuNvsLq6EVfd9mN39beJAOD7SNzlJHa8g2fdY4ZIJ7O6VwDq1RSRuRlTow/DGb3Dj2
3mXx5zOz+N/6xybmksU7RCZtlmlakqtQrESW9Tio6LjPNtY/rc+GM+MvjfknyMzW39Sa32ba
h/NuJ3eRYjJ+WKMnTqbSa4bc8bnxqg+RNn3TbOWS8at91fkH6LTFaXKu0wIYahCi1YKy1oVU
43Mkc+boN1T5Ch2VbLchuNps0YRRbzCVbcUzA0tkMc+e/Wrzrii2zm2/2EJgtr7cbKwrFblR
JNHE5FWSPIqte9MbvczFjp4LJynZeZWl3te3m43iwlGmykjdj7hqp1rQEdeuM3uZg47/AA9p
+XPlf5c2O2l2bcNog2hL2H1bta6pNSOtGVHOpVbqD3xTMZ669x8yMrFgwJTz6ZdsdY00HD+P
8g3rdotv2K2murosHAjFVXMDU7dFFfHHPrrPlrn+e16B8n/Gfyxs0EG58gnk3SzjCqLlZWnS
DUKFWr9tPGlMc51YznrC229ct2QNBa3N3tpuoleREMkOpX9IOnLWp8cdJYOpvkQ2GzbzJu0V
itjN/UCwZIXVlkLHMMqmjBjXLGfvkbn6av5R4R8hbDNtkvKbia5a8i02czzmf26ULRUOYYVz
pjXH9LGK2/GP7f8A5afYEaDdF2qK+i9yWxM8sdSwyEqRjqVxXv3Y1HjvMOL77xbepdm3mH9N
fW+ax1qro32tGehU+OKes9/0nVxSNcyPLWQmvdjnUDz8sMhr1j4i+O+Wc2sdwt7XdZNm49Ch
a8u6uFkZvVopUBgaeo1yGMTrKuuZYwSWm6Wcl1c2CyXFvYTmJdzgUvHVXIR9dKAVHpxq961x
1LNPeHkO+NLfTm5vpoIdd5cSa5SsYyDOxLELXp2xT+mGSfhOvA+eS7YL0bPfS2OkzqywSaPb
019xSBmKYz98vg65BxfauWzbjTj8F1LdpGTS11hk82ZPUMbnci5ibllzzm2ijs+UncFdgHgj
vWm9YU/cokOk6cbnWsbzPKqm5PvUtp+hN9O1gPSLVpX9qnYaK6QBgkwddStj8V/GHM+YbkZ9
nD2VtHIRLvTFkWNlWoVSM2b6dMcurGpzUfP/AIu+QeP79Fa7nBLe3W4yaLa7jYzfqmyHpbNq
0p6Tnh+7jJdxn1j5fdTHjVtFdyzwu0X9LT3MmjqX/k9MqZ5Y1z/SSOvVd3A/jHk3ON8FjtiG
NI203ly60igArq1tTrUdOuHrvGuJ+VVy7i+68R5Lf7HfMq3llIoYoaoVYagyE/xA1wbVsqpu
L+4kg9iSRzEmarqJUHrUDpjf28wYe33C5swGtpXRtOktG2k0bqAR5Yzmtb5iXZNn3zd7hbPa
7SS6uH1usUQLmijUchn9uN/efln/AJW+u6323e9zvLWwgtJrm5Y+1Zp6jqyySOvh4Y4/Ycr7
b/jT5KluAljtF3LNIjMWRCvoVtEmeWVfTjM/o3Zqg3Hadz2bc22/cLaSyv0YBreVSrA9Qc6e
ORxr7s/D0v5L4Fu/HeEcd32+3G5lm3htF9t10zfypdJeMqCcxpX1VHWmLnq/J6u2D4j8NfJe
58Vud4sDLZ6YkntIWNJLzUCax0IX7eleuOd+W+pkebXttvC7vLt99A8e4CQRXEEgJdZRlpYH
1asdduM85+G/+WPjK84jsfF7w3c9xcbrDIl1b3FRJBMiK5Uf7VBpTyxnm2wdWc3P2qNu+Lfl
CScx2OzXYBijllCgrqjlHoatRUNTt+OMXo4q9p4fy/c9+k2CzsLgbxAXL21CkkZT7q1+3yOH
VPV9ufxJ8nQbLue77pZSxptCpNdRzPqb22qdceZ1BAKtp6Y3/wBKz1ZPXn8W7XNrG5ikdST+
UkAk09WN8w1ufi34237nm8v+jk9qzio97fMDSLw0nqWJ6LjH9O2+eZmrr5R+EeS8YSHcNtuv
69skzJCl2lA8M8jBNMgBbJifuH44zOrHLmWXWEuOH8lseSDjl1ZyW+8s6RLbuOpk+3SR1BPc
Y1ej9fWl4Z8Ub7v/ADf/ANd3CCWxW2lMO5SImoQFATVqZeqlFzocV7rtO5i7+Y/jbjuycn2j
Z+JSTzbjefyLrbZqsyy1ASRXYAFZNVMulMPN89cvmtJ/+bRvMnH4Iv69DDylAzJtLFfbKKKl
Feuon8NIxn35a7k+Ipvjf4Di3fb9y3XlO4HZra1upLLS4BkW4jYLIH1UVRqP44u/6XpZJHF8
s/CY4pa2m+7NuP8AVdhu5Ft2kFA0UprprpOkoaZEdMHOjrqD5V8C7vx74wk5TeXCm/t3WS82
+msR27kKrRuPzAsCR0p54eN1n+nn+Vd8N/HvE+cjeNj3G5ey5CYkn2e4U0WsZIkDIfvGamnY
Yut1vJn+Xm27bZuuw7vfbVuEfs31jPJbXCnu8ZoWXxBGYPcY6/Vy46cLSyvJ6/uHqTzrhOpY
7pkAIlKSKcj1OMWN/g8l5IT6hVjm1KUI+mKRm0K3typIJJWUUZjmCPPD4RfqZAhiJDKwAIbw
HQV8MX3ov6DHMQx9r7gvpp6RTxyw335Z3PgP6gtF7eWoCpalTTvU4JF9jwX0h9BPoNDTrmPL
GrIZSlmmqQWIqSxAy6/mA8cFZ8/CVrxwoJJJA7YzWvu0W1/I/LtssY7Pbt6u7S2iBVIYZnRB
X/aDQYxnp661mbm7kuZZJXfXISWJPUsc6kk9fPHfru1mSIZGJbPI0r+NOmM4kfuakP5lyBr1
riZ0YIFBpANQwbrXLpha0zFpGDVKBD6iemM/B+RBDqBB65kHz74F4akygsRVa0bxA/DDsFg2
jjRdXuVqa5dx/CQMZaREB6KCPUcienmDhPlSOQAM/UOgHY/6YovgnjNNTtSvU9xgoRsqinUk
ZAdyB1wLEkpOZAGpMhn+7CzodJ06dVT4nIUPbCcSAItADpPQAHFYTn7Q1cyclGef+mMrAUqp
MhIZhkvhTEoYF42Wh1A9R1z+uJRKzEEq1T3H+2nbEsqNhVgVrTqKdKDvQ98SSs56gsTUENXP
LtgaRgnVUGpBzYeGHXMVchpAoD6h0GBv4DIaUXx7djhY04lowCkrnQAjPPEYkkVwSgGsN27g
+OA4GoUrpqWI6dK4j8HkWAaWIeopmeg/64kGjep2pUmgXxH+uLQeLW1aVoB9pNQD2wnDoJg1
dNaZH/XAyMBtZZjU0pkcsRh5SqmpXoAAy06eFMGHSQ0Gsg1B9APX618sWH6lXUNSgDPqTiQE
00LR0JHVumffLCyYhtemvp7k9PHLEykKhc1Ne2keB7YD9YfoQSgAXyzzwtEEdf8AdQ105f5Z
YMIZSK6oxq1H0qT08csSDpm9wsRkOnYmuKM0a/y2qFrWhcHoMIJfcJ6rkK0zyJ88BEBNp9a1
Fa6x1xkjLMrBSFpSjHwwpGSpzoa9yvXABw6yoVRQdmH+OIjZT7fUgpmWPc+HnihQzSTFtVDp
GWqlaj/LFrN5tTq8UkZ1CgNBTqRT64sOESaAVAp0y7YjaZmkDMMs6Bqdj+HjiZtFSpBqCKai
fA9K0xNQJdg+rSQRWpPQ4DomVGJqMwCdXcYBpM9Ia0D6qgFhnQdCPPDIdwCE5O6/cuX17YQd
RImpSur81R18KA4YjRtKCwFMs2qc/oPHAsSGPWuoZnMCpzxItRLai4ppAzyFMBkM4YKdIFQw
pnSpI7eOEBTJNLZd6jMGuWBHUB0IWmvqA1QBQ+WKEyV+7Lzof8PHEsAzsoLVPkaZn6+OJGZj
pJZa0HTpSo6HCBBY6poppA/AZeBwEtdZtFRr6aRl07nAZQyAF/Wuv/b9tT9cQtONKlkzqKgV
7E9sRxGPdZyQNY6FR1/E4mpNKYMDQDUhpRupB8Rhi6V88blSGrq7Ef5YtZnNU7odZr1xuLDV
Y0HbpXEBaTiWO2w/Ma1PU1zA+uAOmwVv1DUYIaZufD/Xwx2/jP8AY9V9ncGt5rn4gtIrWAz6
4XEEa1NdPj596YP/ALH/ALVz5r5r5Nt0ljuFzHKhVg5aTVmQ7NWjdgccI7bKpyftdRodctIG
dcaCWCEZhgFU5kCtQeuX+eNc/Jlx9G/Hvy3wI7VtPHHsrq3ul0xskbD2y56uxBHXvjtf435j
n1/Xb6sPnDdNmkgsdu26ITbtNkhhkUsqNkOh645ccenNam14pvX/AN21vt86AX36ZfcLMDR6
E1bOnTGe/lX/AA88+EuMb6vLb2Yn9RY2imOSZW/lVYkaFHj3xu/+rV6l8cnz9xa8G6xXDtGF
nZUt0VhqLMKUK0/fjlIpWf3P4E3PYeM/1/etwgtWRQy2sTlmo1KUNBUiudMalurcb7424lyH
aNss9y2qW23y7nAKvNWT9OhI6OSQGHljXVt+Rsvw9NuZLddzgjv7tG3Fk1C1WQEt/FRK4JBQ
28u+OL9twhg2vaIyfZf0ozr3aTPIYMVUfGOe/H1/dwbFt1xIs6O6rAtBEzA1JLf7uorh+lzV
pck5jwHj3Lg+4lkvxBpSeMK7Rg1OkA5rq8uuDnjVuLefeOJ3nCrrdTJJJtckLuXkOl2B7KD3
rh64svo+zx34H2q13nmO5b5pjWOy1C3hIrUOaajq6keOGzxrWd+YORT77zr9BHNW2jdI1ZX1
KiA0NRg54olep81li4V8W2+222gPcRpErMVAOrNmy606YZPRdrq+LY5P/utDyRNQrPIwZdIK
/jg/rffE+XOQup3a6OrWokahGYABy/djMUWfx/GH5dtjFACtxGRqHpJ1DMjHXg6+w+RWnIrq
721bOZo7cTA3cStQPGOoalMczUz3NpFNdwW8qC7jShiiNXDEZCi51xYy8a5Zsny5Jte4XG57
l+j2RGLk3EimQoSSAtc/wxaGr+BLHf4+JXTXiTLbzS6rIy5a10/cB1FTjXeKa0nBdu3Sx2/d
43VYp5LqVxJUHMj/ACxlrS2u13aHj91Jv86tdyTt7VzOVI0tkgFaCg8MSWm3pS4gtyt3uDUD
S3ksh9lSfIUH4DFYk1/J+lW9FnRLpkPtxx/dqp6TpGCQV5js9p8rf1iyn5FLTZVugT7pX337
hVpnpPaox0lVrecyvLhLeXb9nuFtd0mjJZgA0oQg506jPGJB1cfH/Ndgv9o3mSO/kMlw7GRp
CdTHPPVXzxprYu/hm6sbPn22XN3KkUKv6JJGouo1yqemGej4fS0233W4fIVpuduhk2y0tnSa
Vvs1OaimfXKuMnXTu2+bdf7Ru1rYXKT3R1RJHE1W1Hr0w4Kg2AQcc4jtVnu8sdnOXoiyuNZL
mv76/hgsJ/0F7ffIltusK6trtrZo55Car7g6UNaYsEqTfNztNz49udptU63F2+qGOOFsxIcq
ZdK4MTM80u7Xjnw5Bt+9SpbbhNEkQt9VZJJdVSopmcuuGTVagvoPkSf4rL7luNhsth+iWsca
kTeyFGmPXXQGZcssGem12bVKm4fDdpYbZIZryW39pFiI1iQN6gxXoTjVmUW61u2yw7Xs+x2G
4yx29+Y1QxSMFYsOtK+eDNOq/ZbS9k+Qt13V1b+lC2SOKaTKIFKdK9+5OKqfDEcU3ffL/wCU
+RtxBLS6tXGi63C4B9uMBvSF0UZs+w64bMZmvNflO55VtHyab283WK+3qPQbdrdBohNBRFU6
qfjnh56/FGa983SHfN1+IpUvI5Jt1urTXKjLRy9dVNOM76eqxnCvk25v7KHju68YudyuLGkO
mAVUImQ1oaZjvXDcMWP9xm7WkHx7bWAAivbuaIW1llrXSP4R/CMsZk1fl4bc/DvONv2CPf8A
d7Rbfb1KyFHYe4FNCrOvbFPkXrPl9Fcsjvbv4+2uy44PduDHb6YrVgDoCgtpK5deow41b6qP
nW+t7Xi2w2LAS7nLcwmHba+qV1UD1eQbrh5lG+uH5lf5Dl+NDJuZsNrtFERvbeBiXOeUYLZd
ey/4YJ/gX/L5Ul0MT6i61z/yAxuMd19D/wBpkFsL/eZdAM/sqVdgCwXXQgeAxjpvieNFv/zv
/S+T71sG52ougs/6awRBpVQcqyE119cUg+0eorcbXsew2Mah7eIqumK1j1sWcaj6VB6k4MaZ
/c+UbRHy3a2bZbqaf2pPbvmi9cZbIHSfV+JxYpHbvg3zetjvBtdzHJ7kZ0w3EVAuWdagZ41B
fhW/EnK963XiF9JuPtNc7XJJbRe2oRSIl9IoMsXUmqdbNZj4Yju945vyDkHIog29xgLbs6lB
DG7EEIjdMlAri6ujj4bfjO5bxv8AZ76u/wBuotre4khtUMZRXiWvqIb7unXGWnD8o89ueD8e
2u6sIYvaklSOQMPSsKgVCgUFT2wyK9evCuefKdty/nOxXWw7f7Elmyisi+5JO5IOlwB6gB0x
02SOdnX2mfD2n5n299y2XjnvWxkpuFvJcoq1VVoCwNegrji6/lc875jece3Xju12UUSxbtci
3lkbL24xQUQZDvhkZt9xYLYbNBzeeaO3Rd7ubIGK4KV/lq2liW8a0rgws38zch3zZPj64gtX
eXdbsezLdwxHRGjV1tSppUekY1z/AJHU3x8WkVuKnsKMT1rhlMfUH9s+2WVjwTed+it45N1D
SItwBVykUepYxXoCx7dcZ+a11fDwc25rzN9u2nf+MBNrnvokub1o2VQuvNQsn7NS435ij0Pe
eSb1tXNtk47tlin9FuVVZ5UjIEIBPpUr6VypjDM+VmBZbKvJ9ys7ZBNGDdS0GkO6QaqMR9MH
1VrJzb6nKPh5+S73ZwTyRq10INJMf8mSgUdTnTGs1nrz1YbrzjbYfiEcoXaontHtlaPa20+0
CzaApFKUDeWDmenq+Pm74f3q+k+U9tvbaxjM087BgiAxIktQ2kGpXSv7MPTXNez/ADLYbY/y
XxC83e2L7NHVbyTSSrkyehGoOlcZXNyvWpWuLMy3Ttq26GIezZQxVkBXwp1yyC0wjq4+GPlv
etz3rnW67luEZiklmIgjfJo4RRY0YeIUUx1lcuZ+TfFk22Rc52mbcLJtwtVmUmzVdbORmo0n
I+rGe3SXPl9pQ77JubBdmu44ivpaxuoGXtkuVCB9MZxbqk4+277BwXfr2326E75FdXcz2kI9
DzahT7cyKZjywX5W5CvLO55Vw3jl7u+3wPvEtzbymK4jokbayWBU+rTRemJVqrLcYn3Vtta5
eS4hQmSFYdMQA/30p+FcRZS5VOIcT5Tu3Gtui/qP66WQQLGaSPrRQCFoaAMSAMXM9VZHYN/5
LzbknGRynjaWlvDNNPHcyRke6Y4yyLpetF1CtMVMep3PJtpg3c7dJcyvKMjapA7r0r94Uin4
4cD4h+U5LKb5A3w2Nt+nszdyfp4tOgooyNV7VNTTGuZGJ8Yu/gLapNy+Stst0iiuXiLzt+oX
XCnsqW1ED838Png7deMz19kWO5W95udxtz3D3ElupW5hMOmLPL7iM/pjNgYL4h5DN/7by3iM
MMUW0bLdSPYhFoyh5DVSe4xXnBOtj5t+bOR79v8Azi9n3iD2JbMvbWcQj0UhRjQmuZLeOO06
smOUsvrz63AWQH8i5nuchWlD1wWunMfYW37xuPCvg/j99xXb0uL66ELSxaGk1mUMzu2mjHoP
pjlIe+nX8pbxtewPxPl247Vb3l/IRb3KSKBX3ogeprmjH01wZqtyuD+5DmG3bfxqy26fa4rm
feInNrdTUrbsuk+nKtc/HDyx3Z9s/Lr/ALcuUWG48Bns4NvjtW2ULHcvHQfqGZGbW4p9x059
cWet7sec/F/NeP7r8v7lyBdg9i2kikW1htYw/wCnKEB7kxin30OYGNdcri+PZ+UXm6bxxfc3
2x7XeLGa0kJ266haKSgBqwr1K9QCMDN3HVsU+xcX4Rx+2imG220ltF7Yih9wM7Rh3JCg5knr
gjU9ef8AyFzu3498m7BfcetUN7u0a2e4zzQlFlilkBjIqFYtUdcF8jH1zpX/AN1vLd+s7GDj
sFsp2i+gWa4ufbLv7iyEBQ59KUAr441yu/8A/T5YLNVj3BBGeWXWmO2mPpT+0qKMnlDwqBcm
GARMKawDr/8A2hjh1/7N2+NV8atym54Jzn/25pmUm5EP6wmgVYnDU19tVMXzWJ7yvtn4nsPN
ePcU3jl22Db912+OMWduZAglRQDGGHUo2nUE6jBhjyb5O5lzSz+ZrbdbHaf0G67dEbS1iKCb
9RDISfcbL1VXJadMdLn1Z/nd6rV/3K2m43ljw68WJ9UUpa5k01SNpBF93YZ1xiXw9T2PSOT7
jsNtveyW257bdX1/MsYgu7VZPYjOsCr6WAHqzzBywfhrq+vnX+7YTL8k2utlMb7bGYwooQA7
ghvHPPHTh57x/wD3HiQLRyFdJPljo72Y+p+EncIf7WdxbZda38aXJkWE1kH80e50z/8AFn9M
cPyz/TrOdjl+DdptJ/hTk9msK3Ec05Qqq11+iMjLyrUYxJ7T88vVk4xt1tybff0djFBHe7NB
DJpQBHZTIi1FKZLTDJ6pMq7ttyjg3e12SSeX9UsKFoo4KW5Cp2enpGXSuNNshse47Xx3kvLb
OHbZbWye9jlbc7OISKkssCMUkRQSAD6hlTPFIxzPlif7k7fkM/x2s8slru22x3cbLfiL2LmB
mFFTSaqVetCca5c/7WSS18rgDNiNWYqK5jPyx1pkfUvAl3a4/tpuY9jaSXdI5pqpaH+aP5w1
BQM6lD+zHH8uvU2Ly8/9n274v4OXtjccptt1tvYtbk1kLn3fQSc/sPq8BjHtZ6m42Y45x6w3
vcOUbXt9vcc6lttU9itwKlyqhwtftrSmqmf44fr6v/Dw34R5BzeX5P3M29s9vZbnuTTcgs44
z7ULszCjVHp041/Tr8D+MmeMt/cttF7H8o7hdTQPHHdCJ7WdwQkkccKq1MqHS2WNSqZtZHf/
AIr5dsfEtt5ReQIux7kEKTq4dlMgLR61H2aqd8U6h77nLIaNIqT6+tOmZ6EY3G49u/tOEJ+S
ZGJ9YsZQlaVrlX8fp2xj+kdNn1e/7Xxu1hseNyiwVLm03m6lLhAGjWVripPhWq/uxzxxrg+e
fkTd+Ccb2+62VYo7q8umhE0i6hGApc+nodRyxviRXx89fJHyD/8AeTuXGZbXbDDvsMP6W8ZK
VuJZGUoqKM6V1Ur0rjX1yVjrjepXq/zptHJdw+HONXF3aS3O57fJHLuZC1eMC3kRmYAeNK4x
w338x0/BPN+T7l8a77DDL+svtjttGzwlQzgiFzHHl94DKAv7MGet9fGvnjcJOWck5+0l9G03
Kru4QTW8ae1IZkoNOgAUKhemOvdmOXH+12PoP+4ja98vOIcRvpoGl/Qkvu0pWpjeSFFq/hV6
g+eOM3Gu/Olz8rfIe98WHEdt2mVbc7giG4nIDVRPbQIK+JbDLJGr8tPELeL5b3C3itWW43LZ
oXmv4lFY2jkdAXbzFKfTFn5TOfPl7ynZfi+4t9vmubxppBFuG5BY/RaPUSrJSmnVULqAw83K
x1j40d6URagVowXNaDwx0jVfRf8Abbdpe8C5jsNnMF3u7geSxt9WmRh7LIpUgj7Xp9Mc+pla
s/1aTY+Pb5xr4N3jbd8P6Pdb24H6GC4cFzK0kSx0qepcVy+uCaz+Hoca7L+t2j/2htuHPfa9
q0kGTM4GqinrQ9f8MU5302vL+B825XbfPW77LvCwwTbrMY7+BB/LBt4f/jtExox1xjqeuNdX
xniT39qT5B+Q3ufnfb7HdhFHtvGdxRYrhF0OscmgsZWNSwU/hh6n+jXF9ej7jwvfJvm6z5eh
RuPCGNkulcFaiIoVoD0Jzr0we2Ypkvqy2LdfeseX3vHoId9uv6y1bD3FMbALGtVOYHQn8MZs
TKfPsu43Hw/Yy7hbJtN5JuMIuNujIK1JcLmKf7WyxrierNdo4Xze++A9x49fI1xvk6A20cjh
yYlMbooavgpoMZ/ncq/r8PJf7beK7pe/ICbjHWO22cNJdVHU5po8Q9T08jjXd9Z4vmsh85XV
jf8AypyW5s3WW3a6QJPGaqTHCiSCvk6kY6WWSKSMAyeshCa9SD2r54FDMzA6Xb7RlT/PCfsH
U1AD1qengehxOd6KNWb7vTl6v8sFMotS0Aoa9x4VwNaBCasOy5KfHGozaAF6P6a6upwjRjWp
OkenrQdfriSRF1VbUAzHp4eWM0znTNIvtha+pRShz64sFCMlYUBUHx7+GJXDs6hmH8RBoB0x
Emf0laUIHU59cMVCulkpTSRnT/HCqEv/ACwhHU0r0IzwYExoFNR6Wzy8B4jEtQBX11B9Pj40
w6cGFY1IqSfDxOCjDnSqkPUeI/088ZaSIo0apAK9j4fXAajJ/maVNSMvIA42wkyqczlUCvSn
hjNagIX+2lVoSFPU4qUhjCA1JLVoDlkcUmqxHqUkFxUdhTvjQw6hgaGqnqO3/AxLBDWKgEah
U5dxgpIu1Q2Zyoadc+owYje6pCgHvQqBQkd8WD09QgDJUDvXtXAfcMpk0jU4oTmSe2AJmjUL
SoBNKHvXz+uBvQqhOqmVMqfhhZwjI7oEjzp1HU/j5YlpkQ6RRgSMzQdPL64hiSQCg9OfVa9v
riw4S+moHalR9fCuLDQUzT0murMjqRgqkExYFw5qKjPywIgoaTSW09Cpbp+OFWiBcIxFWp0A
GQAxM+nAZdJV6sRnnX9uJRH/ADAtGcEVzp4nxOIDKBpA2ZcCmmunr4Vxa3MJHq2hvtUmhy69
O2I/JSKoXRWtOo8cSogY/cJHp00yIxAAWpJIDAHMjwPliowQdmFQo8NP+JxLA00nUxIBr16e
OI4ctRkWor1AHSh/zxCkWRgdJNa0TSKk4lKNNDKAPVnTPx8q4jQs3rK92zIp4YmYIe60lK/c
KmufTv5YCKRlCsSfSgoQPPEkemiqynQMiVbOv+mBVJoFfTUrSrE9v2YLFl/JGgZWUEIh6ePj
XFhHr7k5VNB1P0GEi1HXmSCo0soAoO4qMWEIJoMqAn9nnikVOQAQxOonMV8ugyxM/ASIynWj
eGIQQXIFhpIP3ZVbyyxlomq2lgfUp69FYeGFExjrRBpYihP5T454JBaZCQNBYtT7h2A8cKGU
ClKDSwJKt2Y+OEkD7eVAaEGv164kcqjNVyUFK5Z5+OImkZlq5TS1OvUfXEj6FFDlnmWr1Hh9
K4NGk3pXUFoV79yBgIGkAFT6l61Wg/DLwwjCEkpDKFOkiuYzp3xGHDLo0ChYVAalKd86YGkT
u4oZASKdRWle4piZspg1FoUND0yJp/24hopFQHVqoRkFORAOAhSNS7U8as1a1riODKgsGAr2
0nBgNMVIFDTPv3OGRr7I4l0alDGh/HI+AxWKeoqMVIqR2DV0mtfDCLEF2xpRWEZpmPHFi+yj
amo08euNDCHh+/ET5ePamFa6YondmAah6t2zwavqnsInWfU3RTUjw88dv53Kry+n/ijifyfu
PDor2w5INn2Ro2WGGRtRZRmx0/lGL+/e1jJHlPNrKC33+6jW/O4XCOVluPyk0zI8fLHmmmM4
5UMEADDuKZ/XG2hISnRyaVGmlMsIdO1w3ct9DDYQyXF7M2iGOKust2C0zxuWxmzWx37hHOuP
28O67uktnJIcpZ2BmqfAHP04Oe7rUi1s+P8Ay3uew/1AvfJtAWplkmZUKDKtCR18sHf9LV9Y
ouL/APvkt+dt49LcvOWOsWxboD93pI6Yb/W5inE3R8y23nG03qLySaV7gGqe7J7jA+eZpjnz
3dbyfhQ3nIeQ39t+nv8AcJrmEGqRyMSF8gCehxu9/li8rTjXIubQSR7dsl5e67j0C2tWILN4
Kq+I641f67Fz/ORfbnxH5MsdztZriK5Xd7p1/SgH+ex6AZHLzxnj+uGyVfbzwH5rvtp//LVz
cLaL/MltZZdQJH0JNPLGf+guM7xn4t+SL52utosZUKH03hOhGI8CSDlh/wCnis8Q778a/ICb
rHBd2r7hud29NKn3GqBkTQn9uCd5XPn+X+VxuHxh8rWWzLFcia325F1C3ZwUWmdNOo4r3b8t
eRzcT+Pvla8iln2G0lt7WQ6Hufc9lZOx0kkV+oxXutdfCrvfjbntnu62q2jzbpM+tBETIzFT
XUzCvTG+P641zJI0G8fGvyp+jhut4heWK3Ik0u/uAkHp3yxn/pt0XHok2wfNu58PME+42u37
a0P/AOLppSYx9aagMsF6YzHzhudk1vfToZFk9tihIr1BzJ8zjQ1Nsm5ybZvFpeKNRgcSKjVO
YP8AnjXNwvRN++ePkPe5PbtJW26CP0uLUN/My/iIqPwxbDjI7fy7lu27h+uhv7u2vJCfUCVZ
j2qpH+ON/wDWSYrz6k33m3Od+b2N2vbi5hH5ZCSB2qtPT9csc9jX1xs+Ab/8y79ayWew30kF
hYDTPJIQscakZANSpP8AhjXXc/Q64z10cf235h3O9vn2vcrh4bVit5dvNpQyKK0BNdX+eMT+
uM5+WQ5TzHnV5cGHd7+5uf00lI1ozRVU0BNKDDO5Wpjosfln5DW2Wys92uTDE2gLD+Xv1HbD
11GcritPknnNnfyXcF/L+ubKRpalj9Q2CdCypNy+RPkPerm0uLy+up3hesAQFFLdvSuWGf0k
+GZu5R73dfJFhdQ7xuT3Vte3I/lyzhqsoFBQGnQYL06sfuW43l/eyXN/O007fc79f2YtY8c1
vcNEwNNGnNWIq2KXC9N2jcfmPduMm12trptkK6feCmJGU9vdGbUw/eacsZ3jN/znZd4ji2dJ
U3FnKNAimSRnHiM64f8ApyzZ0sebQfIov1m5QLkysobXISpqegC9B+GMTtr8LWw3n5h3Ti7Q
WCXjbOnoMoUqukAgjX1YYf8Aovrig4rvfN9n3j2tlE/9TaqKoVnYilPx+vbF9/DedDzsc9lv
0veT/qJLpSpUTglQK9BTLFz16oprnlHJLy2SwuNwnayiOUDSMyen8pDErljV6Zn84uuBct5n
tV+0fGWke8nWnsqhlUZ/doNc8FvnqvH5Dyzc+fSbwt5yM3CbhX3EM3oAocgB+UfTBP6ZRmtB
uPMvmPduJski3J2SNNLzRRsqug8X6nFe4ZzWGsOW8g2EyTbbez7fIwAkMRKhkP8AFTF9sZ65
RW29X9vvC7uHMl4rCYTS+vU1a1Na98XNl+VefPHqj/OPzFuOzSy2sCrYhNDXy21ADTMh816Y
d51qc3PVZwf5A+U7WG6g41aSXrTsDd3C2/un3CKipIoCcZvUi6/wzXK7jnI3yPdeRpci+DrI
rXA+0g6gQOlMumDnraHZzD5p5tyvaBtO43CJaBgXSNAvu6emsimeOnx8M2al+Pvkb5A2uR7D
YUkvZJFOiAJ7vtj+JQftIw9dNfWs/wAm3rlU++Nf8gknO6RNr1S1Vk7gKe1MZnXpc/Jed8t5
Hax2+67pPeWlsaxwyN6VNKDLxpjfXc/C+usyKhFAp3AYdq4ox9Wv4P8AI3I+IS3UuzzrDJdR
hHLIGBoa9+hzxnxqc1S3W4bhd7o+43U0k9/K/vGWQ1Jb+Lw64p16OuXpfHf7iub7RYCz9yC9
WEBY5Z0q4FPtLVzFcauUcTFVH8283k5OORSXKtconthBT2QnXQU7jFkx0lW3JP7jOe7vtdxZ
D2dvtZEpJcWqlXZTkVq1aVGM9fWfC6il+O/kzmPHJblNjhN5HNSSe3lDyRZdXIHSvjXF5+V/
4Ft3zPzWw5Zcb484NzPqE8Gke24JHpZfAUwwTjF1yD+5LmW97f8AoFEO3B8pP040uQPyktU0
8hjMz8JkOd/KHJ+aRWlvusqe1ZKFihiGhagULUHdqYtZ689V/BuaXHFeQW2929vHc3MFREkg
quYIqPPGpDO3r/K/nP5Kk43De3WypY2F7QwzaHBcrn6SxIocYufgX/LAci+ZOXb9u23bpuMy
a9tYPZxxqFQOp/8AIy9ycP8Ag5+3o3BP7iFbfr285TGKXMaRC7tloYQv5VQk1Uk598P1lK0+
RP7g+Mtxm823jkU19e3yNBLPcqdMauKFtPc0+3GbME9fL8hZZR9xYmp1ZZ4JDY33xv8AL3IO
Fe/DaFZ7Gb1vay5x665tTxxuZPljmXV5zT+4fmfIzBAkce22tpIsxEGrWzj7fU2fXDJPw115
V3af3U8rh28Qy2VtJeIoBu5AdTHsW00A/ZjPWNzi5rMy/PPNpNp3jb5Z0K7yXee4p61DKFKo
3ZdOVMMxixX2vy9yaPgR4dbsibYh0sxWkhjY6ymrwrimaM1c7RyL5O5jwL/1DZNve52CwoJ7
mGMlsvWI2b6muLrJ8N9T9qT4255unx5vV5NbWMd1eyabaWGQEsKGmhAvqDFsq4Ml+Rb549I5
j838zi3HaouT8eTbrWKWG+/RShh7qqaj1NXLGdgnren+4746e2S8d75rilVtET0l6VCgg0p5
419U+YvkXl0nKeX7lvn6dbaK7ce3Co+xUAUV6VY9zh0yYptk3m82rcob60kEVxC4eOXuCPpi
l/a+XtB/un5kLJbddvs47wDRHd6GYkhfu0k088OcqyrDjf8AcdbbRwSW1dZL7lD3EkrzTCkU
mtwzMzV/CmDJo3WR5X/cHzjfL60mjkWxhs2E8EVuKan/AItda+WNc3nVq7X+6bm6QRAWtoZk
AMmlPVIPFiTQH6YOpFtV/H/7h+Z7RdX08vs3a387XMlvIMo3c5lKHIUGCY1J4q+T/wBwPO97
3a03NJ47Bdvb+TFbhvb11oWq2dSMsbyZ4Pr/APy0N9/dVzQWRtore0junShudJ1K7DrT7c8H
PPN9q9ny8R3LcLy+u5ru7mea4uZGlmlP5nY1JxRnIsOJ8r3TjW82267ZObe8hBCyLlVerK47
hsXXOrnrHsbf3U83RYSLOzCqoMjEH1mnfwH0wf635O1g+LfJfM9q5Ze77tJaXctwkae/joXS
XW2sh0H5BjNzWOPyqvkbnO68z5E27bpGkMojEMccIoNC+f1xq39Nc8SessNJJVK0yOfX64o2
9e4L/cNy7iewptaJHuVtb1FulxWsK0yQMtDp8B2xZBWc5/8ALXJ+c38Eu5skdvb+q1sIRRIm
PUqepbzOHr6yeM7Vxz29+TucbFtfId02102faoDDb3SxlAxyDyNUmusKM+mOfPUg6432uX4o
5v8AIO1LueycUsze3O8oKxBdZTShRXHhkaYftN1nidT5VOxb3yr435YblInsd1tiUuraQEhk
Y6mjavZvHGtlb45kjfcj/uf5Zum2TWFpb2+3yXaaP1MSkyEMKMoLEjyrgn1hy1y8R/uT5ZsG
zW+zvb299HbDTBJcVDpGMgo0/cFwyctTmsPzn5H5By/e23jcLgq0J/8AjRw+lINJDDQetajr
jNkMaTnPzRzHf+IxcZ3qKOgEbT3LxlbltIqGJOQJB60w856x1J+XkzEa205gH0+HjSmKamh4
pynkPG9wivdkuHgudQINfSxJHpIr6h5YJP2d/TefI3zN8gcmtI9r3SI7TaSxj3IYleP9TpNS
W1DOnh0xq2Z4zJ76qeUfL3Kt+k2a5lm9ptjjVLT2RpBdGHrcA0LekDBPI19f26775z5Zcc6s
+Xzx273+3wmC3gC+gRt94Yf7vHtizYzM3V/zz+4bl+98fudlv9rgs4txiRhVXB0FgyuhYmoN
MiMXOKzfK6OM/wByvyBb2NvtENlFud1Egig1BnmdVHU0zJp44J9Z8t48l5vy3fuT77Nu28zG
e8Zimmn/AI0HRFA6BemNWz8L6z5/Kh91tQYKAelfHDq16z8E815dsd1ebdsm3S71Z3kLNd7W
pLLpUEGbUa6Msqd8YvyzVR8e/KvKuDX13/TKJZzOzSbfOKxNQkKCMtJXoGxmz0fz+Gi3f+4/
5C3La7japp4YP1BLNPAumYRtWsat4Z9aY6z6t5Xbbf3Q89i22K2k/TrJBoX9S6gyPop1r6W1
DrjGQWqfZv7gOabbyfct/WdWXdypurR0Bh1RjShVfysFFMdLJZ5GZaqfkX5u5hzqxj2u/eOD
a9YlkggXQrsh1KW8aUxieLr9POySErXTUnV/ljes43nxfzHnXH93R+M+9PNPm9kAZFmAFNBj
GVf34z1ZHSa7+QfKvPZuY2297tLJb7rtkvvWtoVZYoX6FPaatAQPV44p1LME6cVp8rcqtOcX
HMortTvF1UupH8r1DSU9smmimQGCrn/K7+OPkz5G2/ke7XHH7Zt0v96me5v7ZYiy6qli6qPt
01PfGe7N1qfzyeKz5R+V+V8xa3sd9hWKba5HaOKOIxFHcaSHr1yGNeZ4zOZbq25n81Wm8fFW
18EsLEwiBIFvbmRq6jbnWBEB/G3fFz/OT5Z798eOyO7x6pAQRXUfPzGN5ilXnFt/3fY9zt9z
2uZre8tyHikjNCCOuXcHuD1xnr35dHpzf3OfJB/UyNPChuU9kGOEFUYGmtV7Pn1w3mL62xke
Y/JPLeT7TY7Ru9x+st7FnkgUpV2Ldy/U/jiljNZvad3uts3GzvrRjHeWkqTwuRkJI21KfwOH
qSxq/D2/l3zH8xxcOs7nc4be32rkkUsdrdCJdTppo60H2MQfTXtjH8834c+q86+Pt/5zxcXn
IOLmUWtiqx37+17kQSRqKsqn0kauh7YOrHXn49V0vO+RTc5/9yaRYt5acXBmjUKmsClNJqKA
CmK3YxzM+HqPyP8AKXy7Bxq0t+QLbw7TyK1M0DLCmqSIAFkqK0OYbxoa4Ocq7n4eY8i57yrk
cW2jdrlp/wClQrDZuqUKxZD1MB6m6erri+PFw33DP7gOQ7fyldy3of1SCSyTb7yEKsUvswk6
GVh1k1Ma164bJjrfrn+Wk+QPn613XiO57PsOxy2ibhGbe9vLo+4io4pVRX76ZZ4Zn5rlmvnM
suqi1JFBQ1rlgh1fcGk5ON/tH457w3ZJCbX9N/5B2PTKnkeuC2KStL8p778nz71bxc1SZb2z
jSS3hKeyCAfTIgT0k+NO+N/bzBPfFFu/NeWbxvEW9X98827Qe3S4oBIvt/8AjIoOo8e+CdSz
BdWuybjzvlXyBFuVhNLccnnkR1u10q+qFKBhp0gaUWlMHXUxrLrt+TOAc42rl1nHvciXW7cj
Aliu1OoyzMwQxt4MDQUw/e2es3ptLT45/uKg483HbZ5I9slIaW3eVKf9qyatSqe4GMTuxvNn
rzjjHIObcT3i8Gz3FxY3Xrt7yKMZ6kbSdaEFSQw8MXV/NMk/DWcw2D5R5Jx7Y+SbpevvEO7T
G0sLYAgxS1ZVVx2LaWzwz+tzwfX1rJNr/uZ2XYDJHd3EllZW9WjRozOqR9VRTUnSPxwTr1nt
mfiXj/zBd7Tu24cUZ7a03g0uriUgF31Fi8ZNCDViDTxxrr+k3yNdTzxg+UfHXMth3yPaN2sX
S+uEMkKj1pJqahZWFdZLdcN/pvtc5rr5R8Oc847s1pu99YN/S5wGeeL+YYa9PdA+z8cHPTV5
Hwz4b5nzC0nn2m0Voo0HtySMEQnwDmgxm/09anPih/8ARt/bejsTbfcLucLFZbUoQ4YGmY8P
PD12xz/PV1zb4c5lxGwt7/dLVjYzqrC8gGpEZ/tWQ/lP1wfe0/WJIPiXern44POtv9u7gtZZ
Y9ztEr7sSRkfzadDk2pqdBjXF+1xrqSTXn5KUXQOoqD1xuRzt0y+4KqhzOECWVFH25/aD2Pi
K4ms/YP5akaMgRnWp9R7YlhPVZFIOpjUEn/L6YGcEqj1MOidvLvXBqwykHUhFSpqAMssKlEQ
ZKdE7aa50GeFqeo2Dl69AxND4UxncUEwf3VqwGZzP0wo5LkM1MhQUI8cPg07+nSxyJyy8T2w
HSGoSVX0E0olaktiW6GQtpq3qatT/ngR/cZiAM9OX7cI2GVgPQyZHPViqOgfWq/dXx6DGWpE
sbAVVgKVNNPQnriKNiS9RUFTVadD+3EyLQ7gEHpmAOw8sSAxjClakMT6gPDFAcMCOlB0LeWN
EQeinSanvWlfrhxWk1B6QoXV38z3xnFKkeiaNVSBkx6HIeGMq1EX0sdJrX8BQ4lgsyaU6507
HANHrZkZhUBRQ964oZoV9xDUnRU9qVNexw6cJwQ9BkwNRXwGBJSGLawNQPbpTE0jV2ZmJFR0
VsMgEGIkqTqC/hlTpiqDLMoPTSDkppX9uDAdPbLqMwKUFexH7sCKj11liF6EDL8csSCyNGCM
j3VhXpiFOpWlQG0g1XIdcQ+p45m00fOpJLf54Wp4S0CMKeZ6VoemJSnFNOmv0PWn1xEUekOy
NmDQinn2xEbqg1UGfXM5+fTEg+6wjZUzkIyNKZdMAtMpNQK0AHqVv8MQsFGAVLgEOCcjSlPK
uIU9I0owzz7CtQeuI6WiRD6wGDfbTwxLAqi5knr9ijt5YLUMSsAGHQZUBoR+3AoKWNC5LLUr
TOtCf2YsIZAjygOCjdBSn/BxSjTygh2VDTOlD3+uIWW/BCXQaECijS0fiMJz9moBQlRUGqAd
aHEk3odmIFcqMmKtBYENpC6lqKN0pixEQVYq2krXIjEjnTQGtQcgoH7z5YFaDTSNQeg6Dqc/
DAkpCt66hu4Xrn0zxDAFSxoo0hKivUEA4UKjKrLVaN26kDxGLESMulUB1jqBnkOtcJkh3jlH
QDS2Z+nngUEiye2dNGJNNK96dKYsJKNWoN0716/hiUqNNLkkZL+Va9hgCQyLpUshoSfX3HYD
8MWLAnWG1dH/AISPTXxxI6MPaybp0I/ywHUJDFaJnrINR0r9MS0+mQK4dQwJrkf8cC9EFcoS
z+rpTCdRAt7hJH2j1V6AdsFFSCnt0U0AJIXtTFAZ5DTURQGlf9cSRyrVQrBSvUMCa18sMRA1
IVUOkgK1MjlhalRymnroTTKn+ZxCuO5CKjHVqJNQMhiSmbNz2wqH0gHERenwP/PEllCjmVVy
c5knzp3wH09rK63qMrlHVq6qVXwzHfHbi+w7X2J8csf/ALmYA51SCCQOafmLE5D+Eg5eGL/7
H/s5vnLeYpU3SdyoCySO1D2B6Af548/O0cy6rGKVBIoy55Z1x0dNONRZArDI+lTl+BpnjXMH
Vx9R/CvFuNbXx223fbZrN9/vhqluZZEDxqx/8aIxqMPXFlZnUsdvztbQw7HbbneXUcpt2rFA
fUHYHVWn1xjm5Va74N73LePixbq9UCSe0pLBEumPpkq6e1Bg7zRjzv4K33cLDlM+zxwRQrdB
5Lq59v8AnmlWVQx/LllQY6fWfXXX/n5rj/uEtryTfkdEcHRqkNKhiRlTt06jxxxc5K8tteG8
qu7EXybdN+irQ3DRNoH+6tOnjjUxvcfT/wAT8B2nYNjtNy21YbrerxFN3fzMupNX3JGgrop+
0411xjN61Zc3HJ7Pd7G42JILvdbkG3jEoqUU9XBr6aeJxiMzVRfcpsOF2krcl3W55Hv9wKix
gGqKI0zRDT0gdyf2Yfk3Kg2PeObb3tce9b5dJxLisJ1w28SkXNwlPtIatFP0w+RYb/78OLR7
37Fvbz2+2Rx6W3OSM6gfHR1Kk41OJmm85Gv4xvHGd8tLy8sdxm3gEnU0yMEQEfaisBQYxYzH
Rud1aQ21p+v3GWwjZ1WO2tv/ALSv2pkMRd6rIt2WiQQLo/8AOwBYA54kq7O42y+t7/8AQ3ku
73Sa1kmmqFRgCQoFFFPpiosUHCJJRw7cxNM0rRvPVq1HfJa1pTF0MfK3JoZBu93U6gJG1EU0
kda/5YZWuJEvEtst9w3uxiuCxgedFlQUroLANT8MdOZrp9N9fXl/Fx3jUW22G2bDAyXDrCJX
RRoXL1FiGZmOOVnrF6pPwTiS7q+73Fit1PApKCQDQO9SoFK4JGdeX80+S9tvra92eDi9bmKo
jNrErLlkGYoocU79sMkXtXv9vK7fccbv4pLAQ3ED0uJGctr1gnTp6ZeOOn9JI1bWl+N47W3s
d6jkhb20vJNNuvQinSnljnYDPZbByrY75rjZ0skgkaFYVRfdan5y4AP4YswrTj/F9g2+wt7R
dn26wWRQAkuiWdzT7iSBqbyxWara5pvjvhNrul1vU+1x3lxHHqSJgAg05kqn21PnlikX2xi7
f5A41vu+2W32ewC1vIbgVeNE0BValdS0r9KYZzGctbrnex7FLGu6bzYtvEtoh/Sbev2lz+Zu
lcBfIPMHnm3q4mmso7HVIQkMdAqZ1VB44ZGZHd8Z7PYb1zbbNu3BTLazTr78QyJANSMux6Y1
9Na19V7vuFzByzaOO2yRw7RJC2u2jGkLoHpXLpljMg313ttW1bMu6btt9nDDuBjY/qCgL1Ay
XyHlgnK1S7FaW/LOJw3m/RLd3M8hWYgUHpagp4dMNi3HXf7ncQcw2vjFuix7O1sxeGJdOnTk
B9PHDOT8prnadp49Z7tvW1WUMe5qjaJ9AL5L0Xqc6ZgYz9WbazHIrW33z4pl3rdlWbcJIdTX
BBpGSaHRXMUw/lWeMztq8Eg+MH/ofF7rdL9rYie9ltjoWYrR5GmOWleo04smtVoeAbbt/E/i
puQ7ZbJ/WruFp7q5cAuzgkaB4KtMgMaz1ddeNFb8c2Xk+x7RuG+2q3NzclJpi/QkfkH+098F
gniaPcpdw5te8VmijGz2lskiQoAv3GmkgenTizxR5jBsXxvt/wAr7pbbxZNLBEoNhYRxvMrS
sAW1ogLEqPwwXk8+sB8mjjcnP4f/AMizbTsY0+5aMhhkkCnqEP2BsPPLL6C3W+2+8+Gbyfbr
JbCxeyKwWaUOlQQBSlAcGetXXD8a77xaPg9nY7DuVrtUsCj9a8+j3TL+dijkFq+ONXjEh+fL
OzPxo1y7LczrJGFvDQkhgSWB8PIYzInyZbWNzcAyiEiAkIZTWle9O2NfdSZX1lBabV8b/Glp
uXH7OMXt1Hb/AKm5mUmWT3BVix6+nwwHu2qf5v4tsE3Dtv3i4QJeTzQm6uMiTFIKyUrkSB9u
DGcxQfIVr8WWfxqyca2Wa4uCiiPc/wBPIumtNUksrgBq9KdK4pFr51aRKgBAoAC6F7Y7Tlq9
Y9h/t2+Pdh5Rvd5JvcRmt7SEPHaE0RyWA9VM8q459qV6puG0/Bsr7vsrW0Fhc2FFluJCVdmI
oBDqJ106HLBOGJ3q14n8GcE27a4ri421d3vbhdcksx0rR8wFWoAAXGWnByP4V+N7rlG1w3Aj
tI3DyNtcTafcMYBAGerCpU/L/jv48i2a9EXHmt9EdP1cIZ+gyyq1QO+Kcxnq12/C9pwGPiE5
2C0YCP0bpJPGPckcLUip6rToMVh3x5xxPgnDvkD5F3S4gtWtOO7YVcWgJDzyHLr+VNWeWG08
2xt1+N/ibk9pudrtO1C3uNrdoZLsBhIsorkuotqGWM4bdR7p8bfDXD9msLveLFpFcpCGYsxk
kk6u4UjKpz8Bhk0W68q+WuJ/He0cz2iLZ7hDBdlWv4YGEwhQNnpbz8Djfsjn7r07592SzveG
cetIZXS2kuraCEkelYmUAMVyzC4zG7fXRuXxX8L8YTbbbctua5ub5ltbUMzuZJTSrEAgLUnr
gk031zf/AJuvBBy17qSJv6ZFB7jbapJ1SuadepAAwRc+THVzPivxrxPgO7b4NgigeWL2oYZq
iQyN6E0hz6T3yzw887WbsfIFwTJKWVQp1ArFXPPsMdcHw9u+Bfh/j2/2N7yXkTGbb7LUkdkh
IqyjWzyH/aPy4522tTMXfIuO/CPJrK3seKutpu9xdJbW8ZDrVi1CWVu344rzia9Pi34W49f7
fxm+smvN43BAIWl11kPQsSuS51wZrd6t8Dt39vvxvYXe9z7ijz2duoMYdqCGEp7j9jXGcc8V
2+fE/wAUcg4X/wCwccU7dax1Au/XpZUbSzOrZ0HXLD9TuN/tW1cW438Yi223eBZbRFCztvia
dTFj6pOlCxOX7sMPfWvlz48Tj958rbfJfXEk+3i9BhmodczFqRtIOtGYg412zy9f+e+Jjkvy
TxPaP1bW39RQwvJSoRA5LUHi3QeeMNT5bTa/hf41s54LNdnkm/QhJP1lwzFHKnpWtD5imLBa
+WfmPctguef7qNjtY7Wxjl9lVi+xjH6XkFMhV60Ax1kZ5vmqngHHLTkXLNr2e6uxaW93cBJ7
o6QEjALN92Wo0oMFmNyyvrC8+Gfivb7Nbd+PTTxldDXkeqRqU+46TX9gxmRm2szxH4m+K4+M
btyHdoJbqyt7qYrLKWVkggIVVCL44zYdyH5V8Q/G++cb2zf+PxSbdbXk0UY9tWbXFLJob+Wd
RVsjiUa+0+DPjFrUWw46YwUMZupHIlp/EaN9x+mKxX2sptPwx8acWst937kUTX+32Ny8UasC
VjhUqAxVSCzerFJqvWKLcOE/DPNd92O14k5tZLqVheQRq6qIYgXf0vkrZZUxZjXPj00fBPxk
Iv0S7AAmmn6v3CWrSmrM/d+GKM318gfIvGbHjfMd42fbp/1NtY3RiSSlKKVDaSBlVa0rjrz1
gzYn+MOL2PJOWbftF/K8dvO59/2kLyFR+VB4t08sX9Lvw1/PmPrNfgn40ntv0q7AbZTH7Yuj
IfcGXWlWFfwxyxVT/BvHeFbLue/7LawCTkW2zywXk7oCptw9EEZ7A9xgz0c3x88/NknD35xe
pxW3a2tIqrdxuCqm51EM0an8mOkjE6sv+GCt1HuKWqpJAr51xqta+meE/GPxjxj442/k/MoH
uptyKsz0f0GWpRAiHwXM45brVScz+FfjO23zYN3/AFb7VsO7vSZHPpBMfuRqrHNfcrQ16Ys2
MZla/wCeX2yx+OWso96G2xG3EdptyhG/VqmkKmfqoB4Y1xGf6y1zf238Z4ztXDJdwsb5LndL
xR/UnGkNbBdRWKhzA/Nn1xn8ukmTHn228P41zr5w3W33PejutlbxCUXAKobp1IAgWnp/l17d
sb6Z/nvrZc8+J/i6z2PcAu13O0XNvA81tfhHeEMoqseqrD1HLtjON66+EfAXCbXim3TbnZSb
rf3kSzzy6tIX3lD6AtRkAaYIrWe334x+LOG/Je1PukTybTukR/S2LH3FjuUdQCyjMpnjWeMz
q/bDf3Vw8Dis7cT2rDljxKbN4RpU24Yq2umRp2xriT8sf1l/D5ajVtahhTKlMbPM9e+f2xcN
4/ud/u+57pbrdttEUclpC+aK7lm1kdK0TLHHq7XT4ehR7ns/y9wfks277VHbybIJf6dcRMRK
mmNnX1f/AEZjpi0dcbN/LOv/AG98X5ZZ8c3fiVxHFthijTeNRYsXWnusq/x9QV6YPTJ+2Z5X
Y/EXHPl6wS1ga747bxE7jbRMXUXa5KgLH1DoXGN5/q5Zft/hqP7qtugv14c0MIje5MkKABQV
VwhRfotcPHwx/Tm3uV6B8X/E2z8BltAts9/vN3Ef1e6FQUgouaIT9oPTxOOd99em5+Hy78y7
XPtvyXyGIQfpIzeTSxRkBQY5TqQgUppPUHG5HLm/LBxrrcFBpzybMmpNMa0y2vqf4wa04L8C
btyuwt1beJ/cWScmjag/sRZ0OSM2qnfHKX3V35GV+OPjjZOR/FXIt83INLvPvlbe8UkGMjSW
9PRtTOSa4PttPxG2sfgThm3bvyHbpEluoF2qKaylc1nikcvrKn81Wjyr2NMXuidfhtbH4X+O
v6ZbWT8cSSNoVWS6lYrLUrm7DVUNXwxo2b5WO4D8J/H1nv8AyWyvIhvNzYXIjtLK4cKUtnjW
RTpBXUasV1eWM826z/O/MYv+4Ph3Bds2G33Hbton2LdI5/a/SOhEU0WkkuranT0U6A546cz0
9a+fzGrVapDAZg9D4Ux2+DY+nvi+ay4X8FXHOLawim38ySa53Gpv/KIUWo6BR+3Hnzb6111c
WXLzxTkPFeE/IXJ9ujSee/gh3JYFJWSGQyLpkA9TBSgI79RjO78OdnssSW39vfCNo5Pfcs3S
a3l4bokuYLCVXpGJly1N10pX098W3/8ADd/ybbdw2f49+IZeV8bskmubu8lS3uJl9RjkndYt
dc9IRRljXIlsmMp/cJZWG8cE4pzf9DHbb3ujiG+aIEK6yRM+fiQyek9cPOaz1x1vj56msLmG
ITPG4QmiysCFJr2NKY6/aNdcZHI6DUcjlka4dZ+r2L+3Dimyb/zr9NvFsl5aJaTSG3lUlWYF
VVsqdNWOHd9bj1Djnwlw+3/9Y3N4muDNud1DeW0x1RyorTCIUy+z2R9cBlabmmx/D/x/tVvu
+6bEk1Wa3hhiT3GcsS5YgkL6R+Y9sXPGsdXHgvzRafHy73sW78PdBZ7rb/qrqyiNfYkjcDMZ
6dWqhHlUY6fW4Nv2n6ei/wBwcVjdfFHCriwhNvZNcRyR2YFaLJauxGr/AG5/XGePhrvn2Nb8
Ucq4ZcfFe4Srswit9qswN6gVFZbnREakZ+rUFP3dMY5nuOne2PmDfrzj11zK7vdpsnt9iaYS
LtG3wvW6ZlAIfPKpp5nF8eI0nL9i5Ba7rt2z3C3d2xaDStSCWFB06jDmeqzWe5RuVhwj4q/o
u6zxjdrmGSKK1UhnHuGoqB2UZVxfNGYrrPbr1vif9TyLln6ax/Sgw2FvpAVafy4iwOti3SmG
82Uyx1cD3Xa96+K047t1ysu4GJ4JLcEa4yxJU0GfTD1zZ6uvWott42bh2xcf2Ler2CC+9MPt
BxkczUjtme+MSaklpts687u+VXDLb7Qlp7DSSMFocjqr0p9MQmPPuNb5ect+X95uOK7qllYx
Jpub1lWQyRKQtI1agPqGXgMasyKPN/mKFbT5GB23e33rdEA96YZvHJ1EfpquR8Mb48+Vu19D
bLse/wBz8S/067Rv6vdWbApI1WLPmoYny8cc+utp6ef/ABvyX5EX3eNHZYt4XbG9pZGcJHEq
mgq32nFokXX9wvK7K14GmzTlP67evFosITrKkHPLwxSC+3x5Rdf2+b/tXETyjdb+3twEWaXb
5RRwjGukt01HwwbrrbI9luf/AO5/ivbdo2WUXF8YIFbQR6Cg9QanQ1GH62fLG6q/nHe7PbeE
7Pxv3VO9u0Gi3RgSpiWhZh5npXFzPyupqD5c2rfm+KhJyXkKMxSKljbIsaTNQenUDqkoMz2x
Td8Fn7fKhCSV9ptGnIVzoO2HWtfQH9qKRpyPclFCwtak09VCy559K4OmZi05h837xsHOt825
TFfQowhtom0skNR1Uinq8a46c8z8rm/L2OzvIYuKbYbe3e9aWGJ2itKD1SKGduwHqJxysw7q
l3rke9Q8i2SC32dYlkEo0yspnCFc2UCtOmKRO2/2i/3nbbytxLtrSI4Ly5oop9zHLt1w6zfW
b+D+RXd5s2+W098b2HbLgwWR6nSASdHcgt0w9xc/Cq+Htv3G3+QOSXnIRXc/brFLJTUsbuCQ
K5jKnTGbGp1423FjyEychuuQFv0IkY7YZKBBBQlqfU4lPhR/NHN9w4pwfbbzbplgkuJY4yCB
UpprpXDORa8J5X8m778h8i2K22+y/TPZSL+kihBdpZT17fuON3wfXfh798u7Puu5cM2iEQtL
JBc2st+FzKhANRNP92OcPU2pvlXle67PcccsttnFuN0u0hnFAWaMsoIWvTI4ZGpm+tJLJBFz
GO0hgH6iexYi4AH8qNHpTx9ROMrWV+YoORw/Hd9tuyJLe3U4pe3VQHEJNXyHll9Ma5uXWO5s
x8Tzh1mdSAasfA5jrUY3p5mR9T/2xw29vwnfb6GNV3CSUIJBTWSIqoo8tXbHL8tb4JIvl7cm
tIOVW0f9C/W25umk0xu6GQUjp1K+ON7Bkeg8luOWnm2y2e1Kx48CBuuhRpWhyUntkBjPwlvd
X0VinJry29tZrSH3WYAf+RICw1ePTBivwxOz79d738JXW+7qYrm9KSyt7ijQGikooI8qY1m1
nryOy9+RL9PhJuXJDGt4baiRdEr7ntekHy6DBJ610+cvhiXkO4fKFjd2kbSXTze5csoyWOvr
60ypjffxp44yPdvmOylh59xHfrixe82mwkCzBV1L7rSehad2PbHPNjFudPUJ4p4JZNxLPO5j
Aissgqv4jvXtiar4a+Uot7fnW73fIImg3Ked2eM1IVK0QL/t0jHSdMc+G+Lb57HnGx3EdmL6
WO7CwWbUpI+k0JY9AvXGrJY6cR9nI25btNqDXO2zqSChHor417qMZ+2fDH1lqr2Sy3uw4Fvd
ttkyT71Fc3YhuABpebUKNQY576b8E1lu17wrjtpvjrFvtxdQNPJIAWWVWZyQp6kKMKsaWykZ
t5eD2rmRIQS15JQQlvAeJw34MZ7d491g4lyRuNx+3vU1/N7BiUBjKXQVPj6RjM8Wsfxe2+SL
jlnGpebRxLAklw1vG2n3TMsZKuQCenbDerWfrNely7pp3spFtt5cUy/UKB7APiCT498WNPiP
5Vvmv/kLfp2gjgaS8kJhRgwU1pmR37nHWMzpoP7etrF98o7REsohaJJplkIrXQhJVR/EfHGe
2+fH2FtTl9zmiaC5Kwiq3cv/AIpD09J7nGKM9ed/Em/XjfIHNNkklCbdZ3Ms1taUACOZSHK0
65UwWejm+PnP5v8A/am5zfyciSRJpmYWglFaW6n0EaaClMd+Y5/aSsHaKuoClMuhFevfBa7T
4fYsC8qs/hLjUfBoidyKQ6vaVa6KN7zUOVS/XHAdVJ8v8quOK2PEN5CQvu6S+1SYVzmiAly6
9e+GL8qn+5fnG47Zx3b9ntkipvcZa5kI1FQhU+ivjXrg3Ge567f7cOYXW58CvoZYESHZfRb6
RQspVn9Y8csONW+MB8S863fefl3dd5Tbo729vLaX3IYToaKFXWpTV+bIDPrjfUg5+Hs3Idu3
bdePbq9je3Fo0lpMslpdp6NOkmmf2sfEHFaM8d9hcW1jxDj62lrc3MP6WLTHYqG6RLUsMu/7
8Y5PVecfKnOtx2j5I4y9jE+3SMiwXk0irVoZZFOh+vpH+OK+QT/2/wDwr/7sJ+XJZ2q24kPF
vaBvVQDQbjWdIc9ftpTG+Obfgd9yX18tgq3c6c6Z5kDG/rjPvzH0j/afewaOS2ZkSO7nggME
eoAtRZAdIOeVRjlZ66c3xreHbFvHDvjjmknJiYP1QnNsJW1FtUbIumufqZhQYPbTZ41uyx7S
di4w3O1sTvwjQbWH66tKlNNfz9K9q4s1aoLLktjsfyTvz88u7Wx3Ga2hXZLkD0raanqqGhIf
Uc8a+tvwxOvcP81tsNz8GXc43A3Nixhls7quozP71VUEda54OfKP6TZ4+NZQplKqaAj7Tmcs
en7NyNv8K39rY/JPHbm4ZY7dLpRLK49KhwVBJ6DM1xw/qf8Aw9O+atm3Cx+cNr3e+Q/0y+uL
JrKfVkDCUV0zy650xm/Dz3rOvXvG67HLc33J5P03ufrtsjt4GK11lVlqg/8AqYYzj0Oya2ZL
CGfTLLcW1pEr2UNPdbp4nrUY1BZrO/Ikzp8cPcC3aG4F1aPFBdmpWX9VHp1H64L6LPFwqbpu
V2Bdw3mz3kYWksLrLbu4HY91+oGHchs9fFvy9tk1h8i79FdXC3Fx+pLTSxgBG1DWKAfbStCM
duetcueZy6vhfc9u275E2G9vZFhtYbkGW4fJV1AqCf24x/WO3Nx9OWvCN+i+c7rmUhjXjz2l
UuC4INYFjp1yFRq8McrKzN0/GNzsWteTb1u13aS8EvdzuGgSZQ3qDBHepyKu61A/HGpP0eeZ
JWB/uSN7dcW45Nsk9u/D5bmlosIp/PKsI/OhGvTT8cPP+rHfzGyuuHc3u/gi62LcFe75HJpk
CO4dnWOZHUauldCYxz61/SeYoLHZbjm/wFYce2OVH3jbrhRf2RcRujRzPrRhlTJvxxrmXk+Z
42A3faOMX/Adi3S/t4r62jmt50ZxVXa30ITU5anNKnBObjV+XHw7hm58T5/zPl++zxWux3rT
S29y8gP8uWQSaj/CFApTB7rHPzVHvGwzfJPxTsdtxi6hkuNuvGkuoDJpYBXdTWh/3BsdOf8A
XWOpfMa2bk2wbNzTh+y3t9Ct7Ht01rMS4osrLCsase2to2pjF5dN26qOD8Ou+BXvM9/5JdQQ
bfftIbeRmoCjO8g6nq2ulMGW1T86rOR8Zl+S/j3ib8cuo5DthX9bErhWSqBSGXr6dP2nrhkz
wWfDD/3QX1hd8h4ztVrcRS3NjaPBdaCG9qSVkVVanQ+k5Y3Of9bV9J11GN+YPhu44DabNfx3
a3druVUYjIrKFD6QD1BHfGea6d9SXHl7pUB1WtDUfXxx1gtj6K/tXuYJYuTbUJEW8v7Qfo4m
ahk0qytSv8Jcfhjn3PWvfq96stsltLjY72ZUSLbdrktLtyQPbfTEaE+H8s4xjDyH+5fnG+bL
vHHLbbNwezgktmumWIkFmDgA5dRQdOmO388yp5lZ7hyD5b+UbScGC13cwovoqiaLRSxapNat
1xnvyZFxx/tr1P8Aum4ju91tW2clgjE1rs8DQ3g/OvuulG8aVGeMc865f2uelwSTkHNPga92
nZ9xZeRW9x+aUrIsaOrqobrpdVIGNc+dOnV3nY8e+KOLbryL5CisrWb9PdbdIbmVJmIYtDKD
Ln1ZgcsP9vnGv5XZr3j5p4zfp8g8S5kxQ7RZT2tldVNGR3uSyt/2nVTGM2M7lcPLOUXsP9yG
z7dNfsm2RJBGsIbSitKhJV86HU5XrjX/AMWpG64pafq925vBuCxSbLJuv8sVOoTCGL3Mx4EL
+OMjHl/90uybsdh2XdbGUPsG3O1pNpZvdVpaaGevUArpBx04otx8zglGLLVT/GMwSMba19Gf
Gs+yc++H14Cl7Ht2+WdwtzHFOcpVWT3NUdeuZOXbHH4pv7avfH47A3DPjJd4Zd1sLqOaa7iI
V4tCyFatX0uztQDGvpc1idfZom+R+M8j3S+4FbbpLZ79EjQQ7olAv6iNTkr1zYfv7Yfpefad
eXf2y8aa65NuG/ver7+0TTW9zbO3qcuXVpR206gTXGf6W2tzPqu7+az+NvnN9/3adH2Xk/6m
Q3UXrMDOVFHHXSKCtMV581mX8LvbNv4j8dNv/J5t7jurHkSSNt6xkMztKS9FoetTi55vVEv4
VO1W3GfljhXGbZd0Tb9y4u9Ly2lIDGiBGYZ5ii1r+3F1MuGxa3XzVwO2+RoLE3RfbobB9su9
wUViSUyKy5jMrRTVqZHGuv5XmTRqukh4b8UcP3+2uN5jvP8A2GGRrBYqMz61YIaKSKevNsM5
vXwPt+E0EfE/k2347v39Xj2+949CsW5bfMwRwFKOWUkjL+XUNmMZ9njXxdQX+/fGHyTzjcrG
a8/TypaQw7NujnQBcwyOztCxNPzDI/cMa64vMh5vmrT5Lv12n4o5FtnIuRwbveX9vo24qVWR
2TSdIVK51Fa4P583dZ6mxnuBHZOCfBe4b5fXQeTf7aUQxRkMSHVoYwo6sys3r8MMl67b66yS
PlsgPCrNJqdECkjufLxx1/rlvjF6DpdmLNQ6evgR2xzlZAfv06fUASCe/jjSwJp3OosMtPUH
zxDoKN6itCC4zpnQjpixmQhFrAqemWeXfBhE0bmoA+w1UZD/AAxaYVDo1OKr271/AYjTetau
TUeI/dg1YdSBKCxLautM/wAcQwyKzemoWpy+mDGp4UpkXLUSq9x1+mGGkK+5VmqCMlGWLWal
ZEoDUClKDy8MFGEDRCKha1rikSIKvpy/f1pjSkOaZqnpQ9RWuRxafkxjIGsCop1r07YNOEBI
VJ8cgSaD9nbELRe6GADgKqjr3GLGdI+2GAWgZO58/HBla050AaQMzn5Z541jNpBlLAN6qHp0
A88Fhkgvub23FCo79fpXFWsJXOoBQxB8BTywNCYMQKsaDqa5mvbBazYYq6asvUMsv34NGYdC
aVC+qoByy/acRMSpGg9sz/zxABoqaT0qKCmVcItG6sNJJr2NOx8qYkIEINRFWr1PX64I0Yt7
Z0ZMfDCvSVwUV6U09Qex/DALTE1ahAEZORHY4D9iQqNTR5FhmDgItKlkJOfV2PhiAlNPSarE
TSh6mn/LBqNUalGrSrGhbt5GpwyqgMYBArrDV1UOfXG9Ao006gAGU5AE9R1rXBTCjiZlJQ51
oP8AaP8AM4zVPBohD0RgCOoI60+uIkTIorT1NUKwNMv+eECjUKukNU0qR9f88TQER6ls/JyD
T9uLRlNpJqSTVsghyPmcGo+pSKO3r6LTrTsKf54liWJRGalVLEfafDBiwSuPbYOaKv2gZE5f
vxU0MblqLERUfcKZ5+eJnBNJUBAoUf8AH+OLEdFTqQBQZ5f4UxYQylhRgfTWtKYtCRXIfT2c
flNSKYCHWrZtQGudK0oOmeE6B4wSWJJCmpUd/pTFoFK2llKgn3D1HbwxLDpqJ1rXV59/LETy
yAVqBpoCVYUPjmeuBGEtTr01WlRmKZfTESkVdLBtJYHJeoFehxM4lBCRnuy9SD0riwyI1X3T
RsgQNWnx8jixJUaINoyK01V/HoKYChoguT6iykVkUjp9DhBGqKWSlKUJrQ0/DEAIyFq5U7Nn
kf8AXAtEjaQ3tAsxJqc6Z5YcSPWQzKIzoXICvfxwtECBSvelR4YMWGjaj6RTT1NetMVUEwct
qXPKuRpTwoMFovpnLFRRi3jXx8TgOgrpfWzaiRmVGf7OmGwHm9cauqk0IyP0wYrUJUpTPI5U
6EHDqxDePGI5FUgkqKtTPI17YYlKTnlhIlqxqfxwLB/yvP8A54sTphWN20VNB3rgrWp4kAux
FC9dRopI643/AD6srVj6I+GfjCLfeOy7xu2/3W220L6Tb2LuupQK1ehp+7Hb+/8AS1nvqVg+
fw8VsuQ3FtsIuJoYzpN5ck1kbu1Djz4xjKO59zUVKmtCoA00w4kimsXqI1nuTlpHYDDgWWzb
JvG8X8G27RbNdXlwdMESg1J/iLDoB44PtV9db/kvwjv3E9ij3LeL2B7gj/8AF4SWbPtU+FaY
4z+nX2E4wfHvgvmu67Kd5nMe3ba6GSL32IkkXqDo6hadMd7/AFrrLIznHuBcl5BvT7JskUks
0VWmlBpDGtdP8xiQBXthn9LhvEs9dnyH8Zbrwu3t49wu1ubq4DNLGpZlUDoAzZn8MY++31nn
JMYhz+okVZSQABRSTlUZ1p5Y622sZHq3Bf7fOTb/ALVFu015DttjMK27TlvcZQfvAXpXtXHP
7dT4OQ3MvhaPYLWGL+txbjfzyBLWxtQ0s7ashVR1GMy9bpyX4TN/b5y+DZ/125TxwXFAY9vD
e44FO4AIU088XXVMxZ8V/t95te2n6qfcY9psnH8mN3fW69mYIcvKuN/9K1bFJzr4vh4XbfqL
rforyeUl44omZphnka5nFx/Xrk82G478RfIXJNqbc5Wa22kJrhkvpnVpF6+hGJIXwJ64O/62
/DF8dHGf7e+Yb5qvTdpttgHJguZiQXAy9CgatNe+Kf16k8Z+svtdt9/bzyr+oLaWF3FeM1Cb
tqoiivqBL1LfhjPPda2LU/2ub9pSSbe4GkjbWwcyN0zy9PenTG53d0TGotPjfk258cmtL7nc
0lrEGjawtNJjAXMxu2TCmLrrfVr5r5Js0G2btcWUZJWCQjWxrqz8Th5ptcu2yyWl5Dcwj+dA
feRx2K98dOenPJXoW48k+XOaQWzQ/rJ7RF9tVtldEcDoToHqNMZn9JHT6MzJtXMtmv8A2prO
6tL96LEPUsrVzrpHqxX+kjdk/Dr3bjvyVdwJe7tabjLAFKmS59woKeT4L/WVixpPjTjHPeUi
eO13aXatutNIuC88gBByVViRsPX9tjHHE+Vzxf4t5TvG67obfeHt9vsH0SXkjvrkkH8Ede3j
jN/p4cY7k2x85Eztci9vrGCT+XcTmQxqQaBgXrSv7sE/rDmOWyb5Iv4WjsZL+a1iJjaOFpTB
kaj7KgaThv8AbGfo5Euea7TuZSMX0G5SU1vHrWdj3oOox0v9pi55d9/tvyVfzR3V5HuU0jUF
u85kbM/lVjWhPfGJ/Wfg/V0co+P+ecctLXc901wXdwB+mjWZmmB7K1CxyrlnjP39X1kYbchu
YuWfcfc/VOKyGZiXJ8STjW6LjntpD7tVqDSgr0yzOHcaj1jjnxr8t79xlZreaaz2N01RJdTm
BHVsyUi6kHsT1wf9aO5ik2/g/P4eSx2Gz2sx3JatFJCfbIWukszg+lfPGp/eX5a5mz1ac3+K
fkKyD3W+3P66dvXSNzMyDLIucc/vZWZ47eP/ABn8ub5xwzfqnsNiKn24J5mVZB/FoObBvHF1
3q65jPbBwz5Ch5MNs2QXKbmC0btETGFUH7/cHRfOuH/oZzrq558b8+2h2vd5lN5NQPMwlNwy
0/iappjM6usyRj7S13zcImW0immt1qzsusxpl/8AohvDG+v6b8r64uvjvbOe3+7m14qLhL8U
Eskf8sRiv/2jGgXG+/67MM5y6s+e/H/NtjumvN5P6y4Aq0yye8WataBvHHD73VZKuh8efNm8
8Phmu7meDZtCum3XExBKEVUiI+quN3+ui848ykt972q8ktKS2twW9tVXUr1OVKjPPHTnuSOs
ylJDve0XUEk3vWt1GyypIwIIYZ5k9fxxz+21yvD1Wxm+fd84zPu8e43sezCH3BLMdBZKVLKM
iQB4Yb3DOZIr/jLZPmbdorl+LXE9tt7sRc3TSCFHl71Y5k/9vTFe5+Gu5LNiv5z8ac/47I27
b7/Plc6nuoX92hHYnGfuzrM75z/lu87bFtu57ncXllBp0W8j+gBRkdP+uN278MTnxf8Axda/
Jl5dT2vDRcRPp/8AkTrVI0BzzckLU9sZ67rpJFbzzi3Mtl3eSbkDTPdyHU9yZDIJH8n6V8hi
+w6/wz083JNys/euHnubODIOzOYlp9ch+GN3+mrJFQuqpLZ17jIU8sWM1d7ByretjuxLtN3N
a3MgEYkh+/Sciv44r4vpr0C3+COYf+u3XMN+vYNuSRDctDcmk76s6sT01dgc8c/tdasxRcf+
UvkPZrL9Fsd/ObdWI9qNSwHfKoPbwx3/AOk/Mc+JXJd8255Fv43y8vrn+qLTRI5ZXUH65AVP
SmCdc310nNq25F8m/Ku+7Y1peX92LJwFlVE9tXApUFlArjHXXP4N5xn+K8o5nsV3JcbBNc21
w49txGC4IrkSMx+3Df6zMc5zdSnnHOLDka7/AD3txHurElZmBGoDI+mmmmeM89ym8127980/
Ie92a2u4bo8tof8AyQIqxq1OgOmlfOuOl5n4UlT7JtvPflndItoF5JeG0iEjy3DlYoY09I6e
PQYx1W5Ip4f/AGLgXMZP0Vxo3PbnkhEyESDVSjBQcumD7X8szfw029fNvyrfbe9pe3kkUMp0
fy41BcdRmqjGvvz+IMs+Wd3LmHMt6vIL+9urm7ubEIts5NTGyGo006dMZv8AQ5seg8S+aeXc
Z3Vt15XbT7hBukSoZJB7cuha6PakpTTXqKYdlOZHfzn+5tdy2G52vjO3Nt8t2DFcXU763EbC
jFdOQanjjV/nZf2zmvC7Da903O8itrK2e5llZUQqvWRzQKx88Ytxr6vS7mP5I+GtwtYWulin
vonlCR/zYmBOetSOqdsZ56XwpeTfMHyDydIody3OR4IGDiG2iEQLdmcJ4dsdeuuZ8DF7Y/PX
yjBtMdnb3/uKq+2kjQD3WHQUcipI8RjE/pL8syVmZfkrnH9P3LbmvZ/025vW/DVLyP0NW+7M
DHT7c43JYhfmnOo+LtxxLl12RiWazANSWbVm3Xr2xj7RWflFPzLmF/xe34zJdSHZbf1wW1NK
hkJPqI+6h8cZtn4c+d/Lr+Peacv4ruTtsC+1uF7og0PEJDICfSFDDu2L7Nz3x6V8gc4+XeOb
vsz8qmgYxMu4W9qqIIy6GgVtHcE0+uL7foZNar/86HYf0cdwNiuJr5lqP5qiPXTPP+EHDZP2
br555lyLfeXcout63FNV3dvlFGCFRFXSkagkn0gdcZvUE431U2lzuG2X8dxAJrK5izinpQq3
Q0J8emNzqG17dYc6/uD3LiU28WrP/R4UNbz20DMqijFGyYgfxYL1J+B1uIeC/L3M5+KjhnGb
Frvkss7ypfMwLEM2qRmVvzA9ziti59jOfJnJPlW15BbR8suZYtxtUWWBIyEjU/xIqenPxHXG
ue5+mOe59soZPn/5QeKGEbo2mJh7bJGBJXoAxpVvoca665/Edf8Aw4Np+ZOf8fnvJrTcWEl5
IZblJKPrlY1J9QOk+OOfyLMVm9fK3Ot/3aDc7/c5GurT/wDFTH/LWNq1BVVoK+OO31yCNLef
Ofy5c7Obc7jKkDLoaeK3VXNR/GBUfhjn13xvjeTPWC4/xHkvKN4XbNmtmvdwkLSSZ50rVmdm
+tSTiv8ASM2V3X+18o4DyUQXZfb94syJBIhKEDrVSOoI798W6p8PVb7m/wDcInEYOUyXDQ7S
i+i7VI9LRt6A7IM6GuTEYx9/8MW2PK9o5ZynbN3fe9suZlvGLSyyrVyWfqc66s88atjp8/Dm
5jyLkPJN5k3Lfp5Zb51ES+4KekfaoUUVaV7YJ04/SW6phY7hCKmKShANKHL8cOtzmxveH/Ln
yJxyy/p21bg/6RfVoeMSiP6Bg2H/AF/I9qm5ZyrlfKN9F/vE8t1dkFLdaUWMZVRU+0V8sHXX
LXv4WfyLH8jyw7TfcuEyotqBtrTZD9O1KU05V6ZHPGb3L+DlVfGuYct2awv9v2O7lgtr6kV4
qDN0IIoT1rSoyxreZ7V9bji2De9947vQ3Darl7K+H2yrkDn9pr1+hxq5imvX9n5N8zfJe3bj
tUN6Rb20DTXmS24ZRksWpQCS9Ppjnev0c/bHce+bue8b2ddn27cCsEDURJVWRoxU1VS4P7Om
GWfkeX4ZDfOT7/vm8PuG5Xkt3eH1u7Eucv4fAUxvZflz+vq33v5P5pvnGouP7tuUk1hGAyxt
6iVjFF1NSpp54Lfr8NXmX5c3x58abzzncZdu2bQslvC08skhKxqBkoPX72yGOd6rX1mKyxn3
nZd3JtHktL+zkILxfcrRmjaqdsqY1a4cf027Gg5b8h8/5G1q/Ib6SW1iKGG3VQiHT0bSMnbG
p/WZkejndWNzB8rco23/ANslS4v7DYUWC2uEyMSxsHHtgfdpNKnGfvPhruMtyLkvJuQ7oN03
e4mvrxYwnuy5NGi/atAMszjU6kZwV1y7k+48Xj49NdzSbLaN+ojtK/ykYNTr39TVAxqSC5FR
Fte4SxNLBaySwsKhlU08/Vg+8HqbZtq3LcNzg2+wUm/uZFit7cHNpWYAAY59dLiV6P8ANO+f
JUMez8b5nCsYsALqCZdLtJUaAPcXJlyz741/Ka5f3kqos/nP5Ftbezto98m9qwyt1NC2kDSF
YkeoAZZ41P5u/P8AkG1/KXyMvIZt5ttzuJtxuiwZ1zDLTVpMea6V7DF34foPkfyPzvebCa03
a9uJLa8eOeSGUFQXi6OqgCgy6dMZ+0HXDvi+XvlyPaEtYNyvEsYoae/o1MIwKL6iurp3GM82
fkWXPHnj2O7X8ktx7Us8rsBNIQzFpGz9RzqSM8bvUZ6lpjY7hZSaZomWQrqSNgQCvRu2D7LH
uWy8J+Xt8+JJbh99e14/+na4ttsklIklt4qnSG6ohC5KTjPP9PfGseacWm5LyY7dwm0vZF2/
cLqMpbCvsiVz/wCUjtpUZ4113k2H6yujnlvyzjm4ycG3S/e5h2m4/UWtuhLQgyDVHIinMVU4
OOrPljn3pprHnfzvemDarK+vppmhakaxgVjjWrEHSWyGD7yOncYbbOQcw43enc9uuLmzu7h3
WS7XUmpwf5gr+Y1xrft6xJHJcSci3LdVur33rm83CVpI5JNTNKa1Zkr4EYzf6a1Ji05DyD5E
vtjhst2ub+XZbWjRW82to8jpQVIoVHauHn+klZ6huJX/ADewllvOOG7t2I0Sy2qyZHsvpBB1
+eLr+jXUlcMllyW/3ySN4bi43eeQ+h0YzVXvpIqtMZvewTjfhqPlbbPlfZoLDbeW3F1c7aER
7AtI00AYZ6CwFNYAzDY3/PuCz8Oj4W2jmW+8iutu2LdZtqlks5ZJrqPUBkAFjdh/ET17Yv6W
LmVio7LdW5JJYhHudxt7ltYoXZpomqa0rqOWM93z1n+NbT5a5f8AIHLLTbV3/a323b7P02cY
iMcZlKBWbUw9VV6Dth4/pznw6Wb8sTb8E5ZcKTBtV1MyqHCpG1Qr5qaU6Ed8V/pFOccdjuG7
ce3JXgmms76zbUroWjkVxkR2ION82WK1bvz/AJbepcwz7rdypfODcRvK5SRh/GDkSB3ONa5/
VPccZ+Q99ihvZ7G9v7e3j0W1wUkfTEBkiEg5eWOX/aNzjHJxrbeUxbk99slvdG6sWUPLArh0
Z/Sua0oScqY11/SWHjmz1dcrvfk/9Cdv5PLuMdrcf+O1uzKqtT65GmL7zGZ/P9m4/wAR+TrK
2/qOz2V/FDdR5z24loyg9ynUDHOf0yn6qnimz8yu+QyNx6K6l3K39xpHttYm9WUlWHqBPeuO
vfXK42rLlz/JFs0Njv730ck5VreG7MxDEd1DdwcV7mNXmJp/jr5PniG8zbTfzlVWU3AVnLaR
UMCeoHXHK/11nMT8G+XuXcOvbmeyn/Ui9at1BeanjkYZazU1DjxH4418n7S+D518uc25/wCz
tlywS3VwY9vtl0xu46SODXUR2rjpz1OZfGZffWws/wC37ZI/jJt/3jdv6bu01vJPYxSUjhDR
gsEevd6Y4822+NdXHn3xHxm65Pyvb9qhaaKKaQPcTQ5SQwrm0g8PDGv6Sz5XHsPy/hu4bV8i
7xx2zlkv7i3utNrKup55FKB11MPVrVW9VMa67tk0fz/Lu3r4i+S9mspt3uNnn/SwIZJHBLNG
oz9xitTl1OOV7t+Vfjx0fCfAp+abxfWH9YfbZYYDOjITrc6qZgFdQzqcP9OvYuOfNUt1xLk1
7y+92BI5rndtvuZLd1jYuDIjaCy9gMa67yetcTXo/E/gHf7iHeLfk6S2Utvt8lxtj6qoZvBu
opXrShxmd1eT1juKfD3yRvWzjcNpspBbymiylhGTTwqVJXD11lY5tvtYTfNl3TZd0n2/dIJL
S/t3PvQOpVh9a9QT0PTHqv8AX7/LFk1yXN3PJCqM5dVpRWJYgDwwS46eBW9mQeyuSsO/enXP
HP6/lnqylBfSReuJyj07HM/hi6tt9G5D3G4yzIzSNqcUIqxzp4VONS2LUkm83c9glrJNKYIX
LRQs59uPVTUUQmilqZ0xmbKq4LiRi2rIhsmI6HG5IDKQoNSSKCoHQ/TBSeSQIANGTZDv+/FF
aWuMD2xRZD9tOtfPFVMM4VKgkopzFPHGVgCT9z10kUr1/HFpl/Y1KouoVGeYPT8MQ39AMmpl
YkrmPT0pixDGkkqcszQeOLFAxxRA+nJz9oOXX64LCTNoYqQCOnnXEaTNTPqozOrOo7YmaJcz
qLBehGQwqEJNT61yI6g4KCFezVYjMeFfLFjOgYqFVftPY9fxwzxpKjMpJY+oUFPLzxW63JhG
QNQtQZ00gUqPPBGfsFtDJRkJOrLFi02QYkgmppXz8/wxYNLQoAJpXw7Z/wCmBqciMJWq1Pq6
Hv8ATDtWEqKoIpQU6f44NO4RmJJ0gADIrWtfDPEdOj6YgaEGmXjjKngwX0qxoDWhp18sVMpt
WQzLKDnnkf8APBjFodZYAK2YzZfPDFukH1EnpXIrTv540tFWgBFChyJ6geYBxmw/A2oAvqC6
l+3tl3wYLQiRAK1zGVKVqMOD1EZCVqTmciKdRiItbFgRQD6UI88QTaTJXuo6k4IaSQfywK0Z
RU+dfr3wNIW9yNghJLZUXt+3Eol1AMC1SKUJ7g4sGC/l5u+aqKZeHbriWgAESmlBX1Cvl1GF
DURGOiHMmn/TDq0PtLQKKkA0X9ueMk5DVDnrUACuWX1w6NJZmZhojBpWpOX7PPEoZtIrkans
cvwNMRSITIEWRtOkeNQB44Cd42EZAqUY/wDk/hpiBaWqNNQQeoocz5npgawys4JZ2DZ+k08e
2IYUhZqsVAPav+WLBaGEsrV/Ocx5Ymfsb3BQP1AJUmhyH0wr1IFkTINmc0AGR+temDWvkpJ3
9IINAM888UgIz6jpRQVbv2+mLDomBddNCpOVOmYxIwYirV+3p41/DEsSRo3qWpLEGp6L454q
YBtFYyTn+TqADgBNIyqwkYnVQIaCmf8AnhIgAYyymsg/L9tfriQqAKaAVOda9zliSEHUuVTJ
mOnfEiDFBqrQjJwOxHh5YhiVQASyrpAApU9a+GM1pGwLBlNQv5TTIE/5YNB19pG0VI10q3Wu
X+AwxBkjBVdKEsvVh3/4GFWC1lVUg1AOZ6Ur54gTuNZ0DVlVqmmf0wWGUDsiqNNK0BRQfHtU
4sa06Mh1g/u6DxGLRoH9uqlKleursD3xKnctVa0CNkGBzqOuWJIwHaEkg0BpVcsq4hhCUswS
pqDnlTLwxDBazFSo1CtQPA/XpiajhvgPaJHpoD6a50PjjcNVNKZ/swKCBz/yGIUWl/AdK4k6
bUt+oUj8vQkZfjTGWqnjAF6tTQV69BjXMPV8fWvwHQfGt+gLyOGkK0oQ1V9I86DHX/7H4cOf
l4nzSzvot9nM9vJGjuxQygqGAzLLUDLHCV3s8ZgsWJCH7T6mHXPwGNsaX82XU6gEdMzQjGuP
nGepX0V8Kco+Ndl4t+nk3SPa9/vHYT3TRPJMyk+lVeh0jyGO39P4X5i568ytp8xNsEPDoLp7
mSWZWBs6/fI9K5ZZnHlnPpk1JwReU7r8b+7uUVxJezRyLELhfVJGQdGlTQ+WNf0kl8EmPOvj
q25ltPyGNngimW0klafcrRAfSCKKZCOhA8TjfNmetdWrb+4Tje6XjRXYtz+lhUVuWGkIRkAW
OOGLm4802j4R5ze7Q2+/o1tbH2zOZZ5FQtFSupVJqQQMdOe8rWR7BwfmF83Ebfb5uKXO6Wts
PaiuUkAilzrRVIrlh/pZaxecbqx2DaLG6s97O3w7TcSKI0i0qrI0nbUcy5GWWMHccWz2O5T8
33S/dWO0+xoWSR6xghs+uWQGHfBHNLzG3uUmtE47dbtZJIUjnjYCOUA0qq/cVHjixSujdOI8
GhuLLeN7s7TboAVMcU5WOP3DmFk/iocGH7NNbz7Vu+1XLQbpHdWbKymSAqURVFKCmWWG82fL
LzbZvkra9zvP6C2x329y2kuhZIB/JAXJWYVyy86YfqXZ8/b3DYcUtrdZxZ3lywEUKN/N0gZj
0kGlcsXM9BcdQ8H+Jn3S6dm3G6g955QzSMDIP5YZ3J+0Y117Ug+CrmW84duV1IjM81zIzV9R
ZdJ/xri/op8Pmznsft8kugTpAkYM3YkEigrjHAri43Dbyb5aKQfbaRQwPqDDV0PljrGuepH2
xvN/vW37ZtsOwWUYido0cKnpWIjMoopTHHPTa7W2+xgvjfSQIbuKMlLiUamBPU1PQnBgeW8g
5h8rbku5WNtsSzWS+57N80bxoiAU1HV6W8a4bYZ/kX9ulzfvtO620sCFIJqtdIgq8pJ1+v6d
BjXWYrMbDgFpdWt3yET2jAyXnuW4YGhUr0FeorgrMdVq+871tm5LvdoqpHI8cFsKhGj6DV2r
gyFYbQYdttrXbmkhtWentWNjCQiDw1gHPxY0xYkk+17XaX8u5LZxG8iQhLiQamFc6hj0JOD6
xPOk5r8hb5uSWs/HjFYQ3AruIDpGqaqZ68q0741MWY9A5DDawJ/UoLWO+3WFGFo02apUdaYc
Wvjz5FtN/O+T3u8H+fdO7agulKDsgGVMXN9Yk/KL4ytLK65rtEV7Ek9s9wgeNxVOuVf9Mdfr
qvb6x5HNuk/MdksLIs23V/8AnxxiqKg6BgOmQxyxq+ru5eK0i3CW2CRTKvt1jADAgVRKjp1w
ZijPcPgvpuHm435S13LOwd7lfUUMlFGedMN9a8xPvbbpPzXZ7O3Ms22aHF9Ag/lqQPTUdBTL
AFruMsVrZ7rNaaI7sqY9cYGv3FX0rXy8MWK1kGFzH8QX19vp9q+kgkM9zcjS5Oqi9fEdMP5Y
s2KzhN/yl/i8wcf43BYw/pn/APn3ThVnyOuYR6dTk9q5YupNazz12fHVu21/ELS2VLbcbn3n
kukp7ssuoqHPfLoPDBmG1puN200vEtsl3qFf107q8xuANRkJqK6u+WKzUBrndbr5KFuXln2Y
WhdoaVhWVcga+NcSYWe+s4PnW9Npso36+ECpBFEY1FvIANTszelSFyJPjhzwc9awfzpecgXm
1tdb5Z20SKitHt8TCVWRMw0z5Vw8ZrPXV+J8vbds3y+5D8RzbhcW6QTXFpIqW0AIQALRQvkc
HWb4ZuesnwP5H4tb8atOMbl+r2u7skCNbwRv7kxc6teqMVFScxhwrz5rutth+I7mWNilvL7R
ieaodqt+avq1EYyo+XrL475XdbQd5G2XH9LSpa8dSIzGB91e4+mH7H6vp629/j3wzt78aQWt
7JbxTMI1q7NJTW57k+eAWuP5hitz8T2Mu4xiO7mktzLqA1+44rIAe1cStcHNL/d5fh1otk4u
No21rZFM1yYwYo6CkgjHqz/ibDzkXXr5PmV/dap0lAdVTjpL4fq3vw/Pxi05fZ3fIrSS6gRq
wxRgMvuGlHYH7lWnTGevWuLY+mvnLftosOEObuyF5JfL7dn7g9EbMB6z5qDljEjn3cD+s49w
T4y2vdrfZ4JWS3h9ICKfclTUzNKVJ61xfXVa4do3P4/+ReT2DRWi3dxt8X6m8ZlCoXy0xsP/
ALRVbPFeWuem73a4401rc2t7JbPEEKvaqFZjQdKAE/swSC1xcb49xTjuxxSWdvbWa3P82W4u
Aod2fOrM3+GLIbaw3ynxDh/L9z2nbBeQWM0jNNebkKIphRaaFOSMxr44vr+hrLbv/b/8U2O2
z3k3J4wlvGz6i8JNAMl9L1OeN8c9ab3kN/apPtQ3LfbS1t2c6EMd2xP/AIw1Cmk+Joa4O5lX
NtmtN8fcR2O5+WOXbhf7fFNNbyBLIyIGjUMdLsqNlVgPuxU82yNTtl1xrme1b7ZNtEVvYbfK
9qRpQayBUuukArTBecZqy4vxTj217RZ2mzbdatt5Ws126q7OR1NSCWz/AGYMkUfOv9z3JG3T
lsG2QAR7ftcHto+kKGlY1eh8AMhjchleR8b2obvvdjtrFgLmeOJnBzUOwBI8csde7kW+vtWT
/wBQ4NcbFxux2ZQl4yxRXEcatIGSgDuaamJPU1xww26td74/s+47tuF1f2sdzPFY+xb+6A4U
Sai2lT0qaYmdYD4t+NNps+AX8m8bQF3G+uXErToBJ7KOFj016LSvTrgwzrxq96v+H8d5Bs3H
YNit2ut5pFFIIo9EcaHTVqipp4YpzF9nPY/H/FoPkHctwTaYnEMMToZVUW0LyVLsikEFzT8M
WL7fhbcn4bxPetsRrqxtpVSaMiS3VVJGsBlLJ1Hjixa7rnb+NoLpX2m1ZNqVHjUxR9WQkUyy
yxYtecc72TjWzfJnD94Tak1XblHjjpGnuVASTSMqpqrTvixmXK5vmji1lyD5V4Zt92G/S3YZ
bor1ojErSuQrmMO+KedL3lPNeEcH3e0443GRLA0SlZYIo3opyoFZSzkfXFh+21w/EOwcaur3
f+RQ7Atik90Vg/WoFSJAKkRo4Ok1NWxWRr8O/wCbuKcbvPjrcN0NpbfqbGMTW1zCir1YKaMv
XrX64pHPu4p/jC5vrv8At9v4rhzI1vDdQW9aZIqgqBQdicU+T37HivwTdX1j8nbZ+lDxfqJv
ZuCRVjGxq6sCPzUw9Xxn+c/D2jnmy7du39wPGINygW4s1tGb25P/ABl01ugYdCdQ6HGer8GS
fbRSfGdtdfO8e5ttAGxW8HurpQC399FyLACldZ+3FWuZ8vBfneytrP5V36K0hWC2WSMpEgCr
VowzaQOnqOO0+Bzrk+HNjst9+QNqsr2yN3AZCZLZTpZxpr6m/gAHqwd3Y6cc/Nfa8O2bG6nb
TbWOhU0PZxohITpQrTpjl9YxZr584lG3H/7j7naNlU2+1tO8M0agEe2Y9Wgk9tVMWYZfMUH9
zcEg+UZLlo/5C2lsC5GRYBvw746S+MSXXpEc93e/2we5Mx90W2lFp0RLiiJQf7cY5H9+bZkF
wC3458f/AA3Z8kbaVv7u+ZZLpaLr1O7IqgkNpVAME9+W5skjWbrwviPKb/i28X20xpJODcND
pFCPa9xUkoAGoxzxYsyqDkfyZxWHd904pdcSe4t4BLbu0EKurlVyIVVFFp3rljXkXysfhrjm
w7bwWDcDt9taybjLJM892FLMC5WMAsKgBVoBjNKH5NTivEd+2DlzbNFd3MkklpLHEqoCHTKS
hGnUvbB9dYvf1sn7c39xPMLPb+JW20ttoupt6RjBMyhhbaNJ1AUPqzoMamHqtJ8bcV4xx/g2
1GOC1Sa8hjuLq5uFUNJNIoZjqbw6AYxJvrf2vw57jgvAdy+RrW/Wytp7uOze5uI0VTEzh1WK
R1HpJzONWMzx3cG5jtPIZt6/SbS23TbaTBLJoULKBq6MoA6r0OLwbsfD+6FW3C50CjGWQsfI
uSMdJdc/5ezx7J/a5ZcZuuR7rabrBFcXW42mi0WVQ49BPvKtRRSUocY7+XaTxv8Aj/xhsXA+
Lc33nkKxXMJW5jtpJY6lLcA6NNQc3dlpTGZLb78MZcVH9o14nt71ZfpKO6xXDXZFDX7DF/nh
skp4lz10fAnG9mbfub3O52Ud0YZTHEJUEgEDSSuwAYd9IxdXenP+Fv19DyD5Z+LeS8V5Dt19
tUe3y2kZO1L7Sl5ZRUIU0KNBqB+GNTjfh022eN3wHntnP8Ryb9DtaJBtULwvZQkaZTbooYj0
9WrnkcZk9xq39vPvhaz2vkW0/I27T7ZCkl27CG2CgiJDFJIqJUZUen4jGus+zNlnLk+L+JWn
/wByfLprywUz3esIZUFT7S+gpUVB1Gv1xnq+jb9HumxbZse17Vtu1LbWlmwgiVbJghk+0Aj/
AHGtc8UM185/MWxWvHPmXbZuO236eaQ294Y4MtEhkIZkA6VpWmK/C53cWP8AeBbu+6bBNoJj
W2lHuaSQD7gy1dBUdsb4vjHfO9PnrRG0mkg6V9ZYdR5Y666Ppb+3u02rYvjLfOX/ANPF7uNv
PJpVlBf2o1Q6ENDSuomuPPu1r7eeJt95ZsvyOmzWu48TmsNW7WsCbhKB7ehzqeFnAU1dfy9M
NkYtei3nMILX5DteDR8fR9vkhVP1wUe2qNEW0adNKALTrg8h31x2y8b4Lxfk+6w7bHLb7Zuk
1zDCFWquyxqtGIJULroPDBzPWZ1fWX+Qd64/yL412Dn25bNH7kW4wN+mBqzQtKYnjLgDUGC4
az/SfFaL5M5jw61+IItyuNvkfa90gWLbLWIBCkssZMQJUgIBTPGuJ631fHgX9u1xeQfKG2zR
2/vxtrhlOglUEyke4GHQimNf1xcb+XoHJ9q2wf3MwybxbhtrujAy+6p0NKIQFNT9w9ylRjl3
6OPLXue/7ldbJs267xPBBLHt0Ek1jHEjlyVU0VqDvkPThxq14hzzapN8/t/2O9trMT7pc3wu
gI1JdTcTSmbTpzphn5cbzbJl/L0XYuP29k3x1DcWqJd2dncoVZRqVmtlL/8A6xNfPGOJ5669
W7GW3L574gvId44nyHali2G3aezrpDszxHTQxDIBiDppmOuN/WYJdtjXfFX9L2T452QhorKG
8RnikClmkDuzJrIH3aCOuM8xqRnfmDfIuIc64tyeytopb66jns7osNPuRExha0odQ9w0OM9s
9dZ1P8q3+6blW52XHbXYU29Hsd0Akl3J61glicFVXLSGYefSuO3Ei7+cZz+1Jd1td4vkFuf6
ZeQszT6Syq8bAqok+2p1ZjGerNdd2LP4M2BbP5R5gbyAm9sxJJZGdKMqSzGjLXxHfBfa583x
stu37dOd8R5Pbcr2FLGG0hcW6ujqWcI51qZO6lQajBcvjQedfJ0nCLXiVnZWUM93vUMURmlJ
AWONEFPTQ/nyrljXHPgvteMf3YWFlZfIdrcW0AWe+25Jb3SKB5BKyBz4HSuN8RmT/Z41tiTL
uEMsahj7ih0Y0qtc6+WN/wBPhrn5ffm37nNcbZb21gv9FvViQ/o7mAmNRpGSFSqkfQ488a6+
Wf4pDdWG/c/uBtkUd8pgnW0gNUnlFuzqy1FR7rD9uH8syqu/ut75d8V7rNynbUsdxgnQWiMh
XR649L0bP8xBxfJ6mR6PFexQXsW3O7RzBV0RxxEQ0A/K1KAZdK4h+WLS3k2V/kC84tZRvviz
xTxQBal5XgRytB2ZixphTI3u+/IO9DYRy7jsNpaHebUQ7hp0yQNrHp0MWOl/4sq9MVUexTbr
a224pZn9RqZkUKkLNEC/T1gUA/HBifDfy81nH8k8ljsoPZsxfyiOIjQEcU930/lq+ojHpnHm
s885qh4dv8uz8msN3hRZJbCaOdI5qlGCNUq3kcZ7njXN9fV/yr8qrZ/FW07pJtUFxDyhGtZL
eViyRe7CxqtB6iKZY5/zktXdx5F/bTtXIX5zBue2RM232BFvuUmWSTgj1Dz01yxf26X8vZf0
9QtthvNt/uZfcLu29i03RJJNvuCPTKy2qrIAemrLGetsjXP5euHcYWuJbV7O8kjIkRvcjBhZ
VUkipOdRkK9cWMSPm/8Att47uU3OZt+tYB/SLSa5t3YUBj9xWKKwr2rh/pMuNfz85bTg+xXm
z/3Cck/qMJhXeEurvbnfpLE7rXQfwNe4wde4J+Wz4ufkZbDkQ5W0ZtVjmG20CiXQFbSTp7af
HOuG1rPEXxpfbnecB2KC6imiR7VBb7jYMrL7YyHuAj0uKUORxVmx4L/dHt27W3L9tl3GaO6V
rEizvo41jlkjEhrHOBkWXse+OnF8c7PXiDN7aiuTHpl2x2woiMgK17DxxCw1KMcwoPh+zFYM
M7LXU2Wkig+uMtYmdl9vUBV+n+WDFqAMdIOnNPtHnXGhpxqLFWWjCvr8e+DxenGpozl1qDXz
HWmCnTKlFXIs3bxxI5Ya/W1SBQHriUEf/GNY9JP3fTuMRCzNQ6asepB/0xDTGMM4JWgHQg51
xWiQaKwqQehqR50zxLQIVZmZgQwNMBgve1Np05CtT4eGHFpLGshCF8hUnxxk4Yxp9gNV8epr
hFIVLZ9ew/6YKPBasgaBXqKEDt/yxeqXDBVK17ltRbsfpiWCElADpJGbHxJJpTPEdDpZnPpo
3UDyxGTRt6aFaVPUHqK+GJWoSTksmQUV/wCWJHZxpYBcqimeZy64jKliUKh0sQeoBzJGJUVS
wyzJPfqMFrISY6oAc+j0zoeoI/HtgqlEhWRG1EMVPXoaYy2RqTRloe1fDCKZDkFStBXLtTri
ZCC9dDCiEagoyGX0xDSLJrGnvm3YVwtQQmC51C5AafPEtILqcrIvpBzHgMGjDVJDEqAgNUB8
Prhw6NctLRkUH3HrQDtjOIcvtONNDpHdvu/dhxn7eolkDv16ggZUGFfJfzTmQCSdIA64zW8T
ySKAiAeRJ/ywY0Gtfvy/ipliFAzqvpNKFs/piFIPG2oZ0yz6EUwjSUspoFBWtRXKuJFI/tmg
BBy01zIJ7YjRsW0qWAGYWpzzxkSEySK5K0LH06R/D44m9NmoCdh1JyGGVHAoKLSjGhFO/fPE
No9LsxBJUqf5i19NRiJ/dK1VaFjkcu2DEQJPrA1UOYOQr5YrDhjRqo1fT1PYDAA6gjKEqA2T
E9zhZpGvRl0k11AHsMOhLbqzaRISpGYBOVOxOBqRGYakmQGlToNaEnCDuI6BNJDE1r36dcB0
S5OoYZL36Z9sSwaFgPUATXpT/TAjVdAQo1spqV7/AIYpEF/sB0esk+nwwjTIupUq1WNfSeow
kSzZ6a109iP8cWLQl2yoRWuVa5eX0wKHD6Cdf2keknqT0yxVGRUI0moZszXEdG4NQozzFfp5
YyDMCumrUUU0+R/3YiZvcKDWtFPRu3+uIhEml1jRyF8D/jXCiYMgYdSO48D3wAg0ZV2Ddch2
OXni1IkZlzAFPA5gnywjBsQpkcEhtIqF+0kH1ZeOMqULyDQja+nqCjsD3IxLSESlgy6STT7j
QjCUlTGBUkivrocqHvTAdL0giVVqFBFRkaHCUbyw+yRXuSor1Pjixa4b4P7ZBzp4j92WNQfK
rqc/8MRwlJHQnzpiCXVJ/EP29sRxJarK7gI2lehNaAYzidaIRclah3oBXyGOn87lbzx9EfBr
/Mcm0vacPFvbbRGx/UTXixlFmbshIJJ+mOv9/wCkv49c+s/DJ/KVrv8AFvtwvIN3i3LctX8/
221rGD/9mKAAZ5nHCQSVhDAUk1EFgtKBfDwwL8pQEKsKaK5muWY8MM+Wk0M5jCsgLS1URUYg
11V7Y7T+nUM6bLc4fkm5s7XeN3a7e0tTW2acsUBHda+mn0wT+k1n4+FztfyF837tbkbPf39z
bR1D3ESoyLpy06ytVxd9cs5VTs3yF8hbPudw1rfzQ3d1L/8ANjiUOZJF9PqBDVP0xfbmT4Mm
pubb/wDKt7t8P/ttxctZu3uQ2sye0WHYlAF/Yccp01kkVV98n853DZjtF9u00u2Iqolu2lVC
KKANpAJ09gcb2Vjnn1o/jTe/mOO2NrxP9ZNZE6mf21khWnX1SDSMH9f6T9N98b8Ojme3/M00
kO6cna5EcR/kOzhEUnwRNKgntQY58/0lZvP4dl0fn7fOPC3kW9O0FKKpT9PqjAqCWA1EeeG9
wzhDwjlfzPtp/oXHUlmEZrKj24uEjBFB6iK0/HHS/wBOc+FeVX8gL8qbjeRzcrSV5FokEJJV
AxrksS/b+OOc/rjX18WW27d812/FprCwtryy2W4QllMQj1qR6iWbNcbv9/t8sfRwcb+S/kTj
FivH9kpCWc0i/TrLMz9GOsgs2LrvnBxxVFvlrza/5Erb5HdXm7XjVSGUH3NJ/Kq5/sxc/wBJ
8unFxsd32v5w33aYNu3SyvH2i10iO1WLRqVRRVbpko74J3LW5OXo/G7/AOWYOIDbNj4rabJb
QwFUuXchy1KGRY2pVm88XVjl+XzVyWx3GLd7lNwl9671sLh6VUP1qPDzxSjvmYWyXgsL62up
ACsTq9FPYGp6+WN8/LPHPj2rf/7nN6eKCDYrKPbgqeqa5pcM2kU9IoAPpjN+pusVZfNnPYt5
l3ia9e5u5AVjhmQGEDwWLIUGNTrnG/r4m5R86/IXINvexkvY7W1m9GmzTQz+IZqnLGebzonN
/K84D8s/JxtIOP8AGNttZkgBRVFvTTTNndgRU+JOO3f0Zyra0+RPnCXfrqyt4Uur1KfrIUgD
RxrmaV7CnnjnLznqs/Kg5d80/It4r7VezxWEcYHux2cZjZip/MxqaHyyxjZvhk2Dtv7jufWd
gltBDaSSRD/ymJpJCg6ayCBX8MdP9azeeo5IP7gOdxbhJfTzR3TyKD+knjBiFOyIKafrXBPq
pOr8H5F/cFz7doko8FvbRSLJ7EEZpIUNdLGrErXBLJT1zXLf/LXyYb6DkVy0g9tdCAxBYdB7
+1kr/XD95T9bjF8t5tvPKNw/V7o6lhkgjXSg8lUZAYz5+GbFPZX15bTo1uxSTVVWH3L5jHTn
rBHsWw/M3yo+yHbtntVvJIUodxWEy3AT/e32DyLYz31yeeOozmwfJXNuOb5NdGSW7uLhj79n
d1kWWQ56nH+FManXNmVr638O/nPyD8o75HHPu4lsbPrbWsCNCgPZ6HN6eJxned8M5Wu0/Nny
lNsj2G1WgupY00zbkkZkmChaV1DLLsTi7vOsznqfLN8Y+UOZ8c3OW5bXfy3Xqnt7urpI5/iA
zw/bm/LUmj+Ruc/JHIzCu/W8lpYsoaC0RWitwR3oereRxj7SfBzKrp/lznU+xNsH9WePa1jW
L2o1VPQMtPuAawPKuNbMF5yujgfylyfisnsWKf1CB2JSwnDSRaiOwqDX6HGvtzZ6Lx+Y7ub/
ACT8mbluFvebo0liYGEthZrG0MStWodRlqp0q2eM8dTRZsaCT5z+Tr7jz2m12CQSlNNzukET
M/T1OH/8Yb/DF3ed8Elz159xr5F5RxTcbu+2y5MN/epS4uJ190yUbPVqrnXGee5flmSqzdeY
bnufJI9731v6rdM6Sstwx0EIQBHpWmlD4DHTnK7bJP8AL2yx/ud5DPt5h2/jdtGIE06o2doY
lAoKKoA/DLHPucyszbNrNcI+ZN92y+3O7ttji3ndr6QyNO4b3EA/KCo+0daDFbFZ+lB8i8z5
9yTcILzkcctpa2za7ex0FIErQ6tLAaqjLPBOpbgnK55X/cTyHeuJf0G2srbbYnRYZJIiwZok
AFFBySuN2cy+M+35Q/HnzjyPYIV2/wDTpvCoNNlbyFmMa06gLUmmNdfXDzznwpPkP5D5jyTc
7a430MEtpDJa2caskMek9dJ6/U54582a11z4blfzdzzlGyf0W/vUFllrjgQRmRV+33WH3fQZ
Vxvr6z4Y55t+XntXDAP3NWY/44w27tu3FrK8trkMGEUiyhRmPS1e9M8sX5Vbv5G+Y995sLC1
u44rXbbQq6W0RIDuBTUST2/ZjU6kZ+n7NzL5t5DyTitjxf8ATwwWFroSZ0LM8oj9K6tVKU/f
jr1J8/tm86rvjD5F3The8SX9uiSxMNE8MgzdDkRXqvTtjHjp9a9M3n+5+7utseDZ9jhsLmdS
hu5Dr016gCgqT+7HOySjDce/uWu7XbYLTetph3K4iWgmLaOmZBFGX8cdLzMHrz75P+W945ve
xa0Sx263BW2sY6hVr1LnqxIH0xzl/SnyxEk168AaRHZWIILVKgHw7VpjV/rfhXl7P8S/OO1c
P2CLaIdga4nZyZ7qNqSyHxeoPQfswdYuf6b4sLj+57cRvVxf2mxxxLJAbaGJmqysG1e4zADV
9OmGTk3VPwv5V5XsHGd2lk2pnTfbh5f6qwZEqfSwjUih7gHF5b6rMe38c+YvjV9gtdG7xbdF
FEqG0lB9xNIoQSAQc++DrjKJ1K+e/wC4H5D2LmO+2sWx25FlYx6TuJUK07k16ddKjpXG+PGL
x7rzXZNxm2/cLa8iIWeGRXiJFVJQ6tJp9Ma7w83H0PZf3VW4sIpLzYVn3eMUN0JAEanVlXSW
X6A4xOJ+2vXLxj5oflHyfHuW67kvHtngtfbWIAyRyAeopI1KZsa1OK8ZPFseg83+d+DbXssq
7bfLvl85GiC3NEXSQfW2VB9M8Z+lnyNleO7r8ybzyb5B2zkdttZls9mCyJZRBnYKp9bMyj+I
9cN5xr6560m3/wBzEyb/ALhcbrtCttt4qotkjfzR7a0+4ihr4HFkxn10br/dBbtbJbbZsS20
CSoaGRSNKkMV0qqgavHBOY1Y4Ln+5i+nh3d4tqCfrfbVXJqIVVaGo/OaY1eZFlHb/wBxu2XX
JrXeN22JZra0t/ZsIopFd43ZvVN6gBmop5Yz9Vh+X/3L2+6JaNtGzJb3lrKs8V3dsshXTWmg
AD9+NXiT/LPzVp/+dJsZgEz8aL7poBEjSR6dYFT6iNVK9hgvM1r1n+I/3J3FjJuMHIdqW/j3
C4e5iggYKUL0BQ1DArguDMD8kf3CPvfHJuPQ7INutLhkWcs4ctArAlUUKtOmZ/Zhkn4F+fV9
s390HF9s2S3sIONvBBDEqLHHIixGgoTTSf34v+fvp1jOC/NO1bLzDe+TX2xR3Fzujk26QMEa
3jJ+xaqR6sqnB1x63zxkd3yV8/x8qt7Ox2/b/wCl6Z45H3Vz7k0ahqHRpAYeOH6yM49k275b
+O9q2S1N1ygXktvFWUFSZpW01OoBev44JxaXyX8k8q/9r5ru29RIYIrqQNbQk1KxqKCp6am6
kY1fPKzJgvj/AJdf8S5Dbb5ZAfqYQVKMARIrCjKR4EVxi+tc9X4fRO3f3BX24Qvf7Jw6W6jg
NNwuYyT21NpcJ2/3HFkXUx5/wP5tj2nlu97zuGxDcdz3y41pJbsAYFBIEaVDE1FK0741f56P
/AvmP5gfkVxabXcce/p/6GdJrj9VpaVwpB9uukFVOKSM761v/wCdHx6DaltIuNe3GiBRb+4n
sJl00haEYzOZ+a1ao+F/3GptG0ybZvG0pf2kcrPae2Vj0LIxb2irBlahJ0nD9Z+BbXNvH9zm
8XPJLC+2+wjt9t28MEsGGppNY0uGf0gZfbpGNXiSeKVc7v8A3UWEdhKNp4+qbk8dFupHUorN
lU0XW37cHPMt9X1t8VnCv7jYdo43HtO+bSu5fovVbyKyiqlidJBDCoJyPhivMh+PGc+V/nGb
nEG3WSWKbfaWchmaj+6zOckNQFAC4MmeM3nbNW3J/wC4afdeArse4bLGdyuIRD/U3/8AGAuW
tEI1KxA8euDmT4Xa3+Pv7hJRse3bBuHHzvN5bhYrMQULNGooo9sq3qCjrh+kkb5lqtk/uP3W
DnDbs22x223RRmxG2Uo3ta6sGagpJrGRpTFZMZn7Xe5/3Vwrb3UG2cdFuZImCrLIAwkYU1sF
XSQP341P5z9p82yySvO7PnK7FjQdjmaeWBc85Mi34pyvceN75b7rYy+1PaeqNlrQ1ILK1OxG
WKwyvSvl/wCfdz5ntFts+32p22x9Ml8DJq996AquQHoBzpi55il/a0+Kvn3a+F8Tttkn2X35
4y7y3cLiMuWOos+pT9OuMWTVLRH+5M2+/bxf2WyRQx7parCkYemmSMNpdioGonXnjfPHO6zl
jwoTXnvSMxJJc1z7sa/ux0/1/DfUs+Hq/wAUfOl9wba7nbLixXcdrd/dSIuFZZGydtVDWtMx
THK8zWZ8D4P85XfGeR7zuUW3wzbbvUzT3O3p6QpBOj22pT0hs/HDePVzGg5P/c9c7rsu47XZ
bTDawX0fswuJNTx1+4kAKrGmG8SDXdZf3TTW+12wvdlgut0giVDfs+hWK5FqaSRXwB64JzKr
1Z+GYT+4Bl+TJOX3+1RSJ+mW0jsgdRVB0ZGINGrn9MX034a+Jrr+Uf7iU5fxO72G32ZLZLnT
qu5ZA7LRg1EGkUY0pjU/nI5d7f8ADwpbiR0pmCKgtlmPwxWOs+HqXxL847rwOzuduFkm47XK
3utA7aSr6QNSsfFQBTGfpBuLzn/9xW8clt7Ox26yi2yC3uI7tmRjJI0sXqjzIUAA9cMkkGb8
tMv92d7FaRrLsUE24COjTrKQCwFNZTTUV60rjPPMvyt/Tz/dvnTkO7cN3jjt1HDI+93hnubz
NGUOysVRRlQFFGNfWauvhW3vytfzfG1vwaWKP2La6Fyl2CfcAVy6xgfbTU37MZ+jfXMsiLkv
yxum+8G2fh01vGlvspWRLiMktKUDBA6t00hsb54yMd8yrH4d+Y7r4/XcY126G/ivNEjszaJF
0jSKGh9PljHXO1rn4bLd/wC4sbxybje4Xe0Qpa7PdGaQRuXlb3E0Mo1CgoDqHmMP1mCz8vV9
y/uN+LrbbpruPcJLuRYyUsVjOp2bpGaigJwc/wA7Veo8b4f/AHJ7rx6O429LC3uNuaWaezhZ
mX2Fdi+hWH5atkMN4msyyeRDef3QcnvN52rcWsbZX2p52ES6gJlnXSVck+kBOlMa/wCc/Bl3
5eP71v8Afbtv24bpPQT31zLdSaftDSuX9I/GmNXhTx638e/3J8g43sEGzT2sO4W1tVLeSaqv
EvUCq9UHaoxicS112Ws18lfMG885G0SbhDHbTbXrMXs1FZHYHW1SeyjLF1zz+HPr+e/Lt5V8
+8j5HwZuKbtbQXIDRGTcnBM7e0aiijLVl92HjmRdT9u342/uE3rhfHf6Pb2FtdWkcjTI0pZW
GvN81IrnTrgn89vp+3gLv+4fk91zi25ZHb21lcwwfpP0sZZopYNRYpJnVqk1r27Yz3z7kc5P
dWHMf7oeWb1ss22QQW9hHcqVubiFWLNGcmjUuxpqH5sdOOOfy3axfMflLeOUnZJL9oon2eAW
9s0SGtFpRzqJ9R0ivbBeMiv7cPP/AJG3rm252m47uI/1FvaraI8IIVkRi2plJPrZjVu2KcyQ
TplYp3NWSlU616543i+z23j/APdHznbtlhsp4LW/a1T245rhWEjBQApbQQG8PHHK8yNX48dn
dKMT5/5Yq3IYxkijHID0j8cBg0BYFWJah6eAxVGfT6PBuh8h0xYz0ZGIrSgH7ajFVIMOR4Be
tO/0xSNQqF82oQvTPp9MSiEgljpqoHQ166vLFjNSGM6UYkEHqtanLLM4cX/gw9oIVHU5NXzx
rDqUJoBPTx8xjDWhRkbUK/U9cOMbCY/yDRR7grp7n64vq1OgIoZNIcZZn6+GGH5PRm1KD/ri
1g4o1AaU6k5/gcEQGarACoINDXP8cKynDPn2rWp6mnhiJB21eokg+GVPGuHCdSC5IzBIH7em
CxZAMC71I9S9wOhxCiDFRU9vDxPbCoISgIXJoelO9TgkblRuQwGk+oipriwabL2q51PjnXEz
TGpiJfPsT0oO2LRgyGWpUmjCnXwHX64JWpEbKAPRULTwGdcdOYz1MIMAtPzV+3DrMSVzagGY
z8zTxxmtQvUU0r2X0/6YKjUZVqpFTTMVz/0xKhVwM6aiAQRhxqUwAIDAUatarWlPphrOFJku
lqdM/PAtCslGJ01oQP8AlhMuJtQqa5ZHLBYrTM+TfTM988WCUBiCyEE11ZB/+O2BqCdCQ5UA
FuvQHLrlhasgQVzoKhgB+H44XISV1CMrmK5eeLDDLUHvqr1Hh3yxNS/sCy0ajGmnrU5fTGxb
pD/y6SQCf3U74tZGWqCS2amoy8MYqh19S50A65ZZ+eLEZtPtsVBGoEk+QxYtAVaRv4aUINOp
OGqpK1qj+kA1IHXLoMZoOrfzMvtp17UwVrQoKk06jMLnQj/LAYdh1C5ilST2P4YouojHqbSS
Qo69xXHTELRqDBBkcwT2ONY1PTmPSgIPclq9PDGLKz14CQArTPV1JGflhY+RIzlNAajN6aeA
8MaMhVQUIAYrkSemXfHOy1rSFBToAcz5YZGbf0aEuCWqa0JyxYsGVYAe4MiKqPDDIkbqdQoa
VORHXLtjRKgrVOo6A54zYJ0NAGTSKhhklelcZ0mUlfSV1Edc8q9saU8Kkzk5aWqST3APjhVo
ahWo1VagIJ7jGQc6vSagDqO/fDkR2DFiKZHMEnr3OeMnDmMg1NKHOviMVOGegozHr086+WDK
wbSalCpofDFpnItP8wahqHc9DTp0xaMEAdRVcj+7FVDOxMS6FAYHr4Yo0YFixL/QkdvMDCrQ
lSBRTUg1BrSuHWRDMAgZ16npg06LSml9QJHY9wenTA1PDCMiOlNOeZyxVfUpHBNBRcgSaZYJ
yfAo76mUnLxPlg8A6HWCSASPu65/TGVAsasqE1Pl0/HG5WbTKK/cQAPuYnvhWDHqAz+05506
4z1TIEUUEEEAHLpWmM6bMSgksFY0kIqa/XqCMGDDe2sbamr7naozofDE1J+zKsb6QBSmZp2x
ZikhSMGapADdm7nwwxYbXqTKppkT+OWGVHR30FVJB8BhxnaJKe0B4fafxxmrQyPRqk5p1Pan
lg1EqO7NTv08x2wNyaYgKAdOs0zr/wAsMVmEoNRSgTz7/XFRD6V616jqPLxwYLdCgQKQRkc/
w7YZB8DQAK5bNSKeGXgMNEG1FUgeqhB0EDPLGHTfERQnSxBGv1ZZkDv+GI/JEkqRpoKHr1yx
pzpgwdKjI0yIPUjEoJTRSXY1qPMCvbPGa1IlVYgSFqKmpyyOBpETqZjkFP41PTPCPToi6fbL
1bwBoa+WLBSjlLfZm6k6uxxD2kFV3r360/54NaPOyMpYVJ6Bh/ywC0znJSMytadepxKeiBBW
pA1L3OeJWC0EErUkEZP2wGQKliwEhqgHXwGEYZ6liBVXFDQeA64hfRLUKNNFYkE065Yl9Q6U
0MzNUn7mHbBpkhBqBQwLKO4/zxrFMSAhqCupny6ZD64zil0QjVZDTNvymuVMBOtG9JoC1dLC
gy88GILwnUItQOR9f7+uFn8hKqCGy9P3Dpn9BhxrwqUqy9X+8EdfIA4lgmZA+S0AoAe1cSwQ
o0hYNVqg9KeWLTKaRCUIBVS2Y6io8AcRoFeRTUHVGoo4H+uJztsNqSYELUasi57HtTFih4JS
lSPtrpVhQk4mtIoXVmdtDL+XVQ0xUSHLDVSFlDjoT3HU4GtOnuOfcyQkVqT388SwAYsAKGhJ
NK51/wA8StMrKsmnT61Jq3emASjn1rTLMGgHX8BTFCCLQHBC50+wmgofp3ww6dZmq0YakZJ9
PgfLCQlpVkSvQ5itCBXELTBlrmpLJkSKUODEZZGDGuk18jjKlR3A1Gq5s5opXMD8MRclyD7d
SenVcJriBpXLPxwhKAhrXr2OJF7Y/j74Djq1KgLN9xOQwG1o/j9Le45ftEd0A8T3USSRtmCj
HPHb+f8AO9M9d4+t/kLndtwmKxtrDbbaK0lUi6aOIe4yAepY+/44588S0brg+Pt049vMG58t
s9pjja3IS2FyqnTRKkiopqJ8Ma/p/L6/IlW20mPmOwnc99s4xIsrxIkahUZA5WtBnSmM9cwp
dx3h9k3ba+M7Tt1vHt10um50RAyhDlUf6tg55h023XC8f+Sk2nbrWAfrYA07kfzsvUxHemeN
8zwVQfIG63G8/Ie0bXeRI+1wzKZgorqauSuT9wXKv7MHM0PVkSAXCJEnvTLQFFICIB/CBkgG
MnVJucFhaXm577Jbw3V/bR6YzKoZEUUqNPi3c4MTzmx5necw5Tt1nuexUsoJ9UhVKQsFzpmK
HG+eJVbjU89+Qdn4jd2u3vt8EG0yrqkkjQ+7RqiiKvfFzxvkHtT/ABxb8YutmuuVRW9vbtM7
LDcXKA+3Ev4danPD3/P63KZ1TcluOJ7xbxW9pFHvu8SsI44rVCIFFc9bBQun6nGPqdrR2G37
eoTZpjYW7GOr2NolXBGRqadvHDjOua42/j/HNp3DcodphnuLarK8irqah9NS1cZ+srW+Oay2
naeQ2e3bzuFhDJcXSiVIdPoXV0oPHxxfXGZMZqHjXHtw+VJIZ7GOe0tYmdYlB0Bh9qntQeGH
Go0V7Z7Bxbbb/fodrglvCwRVcKKVyVI8svr1xSJ4h8g8wl5TbKbTZZLNI3CzTumlBTqqHrnT
rjr/AD+u+itluO4Sy/FyW3HeJSbVt6RKbjcblUDHSPU6n7m1N+bGbn2N15PJ8sc0CWtgb8R2
e3sDFDF6UIHivnXHok5rlOq9D+NeT758l8os7DkLPd7ZYEzPaWyiKEFPseUDtq/bjl/Ticzx
0kz19ISJDAHuCqr7EZpU0UKo/cMcMD4X+Rd6/qfJb25MiSmST0hB6AVNKDG5FY0Pw1eceh3t
F3XbP117PSKxULrWNz3KGtT3r2xu/wA9msy17pc8D43xaC73/drVNy3i4B/SWlAEjJ7ACgOO
WOm+Mt8BbpuU3K97jnj9iJ1YyQLnoYH0g1/ZjV4kmsSevOfnG2lPM7qREIiDNrbPRqyzqcGt
x55tECXm4QW4qfelCEnoNRz/AHdMdeJ6zfl9g3FrxLiHH9si2zY4Zmm9qH3SgajSAanLEHV1
xy659N10y/HXF5t5TcL20W7khj1+04Hts3jSgyXtggrzjm/yJstyt7s9tx0rc2+qKJIY1Ytp
6VZRkB4Y3OYrqX+26K0ltN4WW1MV0X1SMregFj9pXyxruSRqfDJbj8Xty75F3SD9RHtu3QOQ
ZJJAoqpp6emokY4zfwNWG/fAvHdj22S8n3pJEiKqsYKvJJU09NM8/DHSSrXrHDeG7JtfHrK3
i2qC0WeMF5LvQ88jN+Yg9K9QMZxag3Lg3CLLlFtd30AubkpWC1fJHf8AjelK6RgnOj7WK3nG
12d5tZt4NphgjB9MsaCoWv3ZdhXF9B7VhseycL2zZoYNk2uG5Vl9d1MA8sjAUZm60zxr6m9a
8k37bIrH5N2uS4sGhtZbiFY1AClyGzceXh54ZNikb75z49ZblJsVoz+2tzcCOW8fPRHX1E17
muCL8trsPCOI7Xb28G3bVB/TrVC7bhMA5lYCjN6q/djOG1X7NYcR3s7zybcbJZkgd44ImUhV
hgXqoFM2ocFi1U3my8X5dw6XdI7AWtpqZLe3jUK4KenPT1rh+o1o9i4Hx202y12+HaILJGiC
sZAskzVHqLaupP0w4ljb7ZsuxcbvbR4ml2+39xzbRj7wTUrTpniErz3lfHuNb58eXPIobJbJ
UjY2cKLpb0tRg9PuPWmL8nasfjG9lufhq69AKxxXCRrGBq0KtRWlKk4bGqqvhr4o4yeOtyfc
7NL/AHG/kleNZhWOFAdIAU9Sad8ZxW+Y6flPg3F7njyRQRWy73dSLHt9tAVUaier0/KvfD9X
O3Plih/a1ci3Et1ukKmMAyn1Ko7t+GHa1bMVXwtaQbN8uRbdazpcFmeBpgK+lULUXLKtMOee
idX8Jv7pLctzKB6mhgRWI7jwPhi5u+My58vRtssdol/t/kWFXjtUtWLRk5CRaBqU/ZgvOU9T
Ypvh3gPBLTjKbpcQxbnvVyS9wGAKQ+olY9Jr6sH1O5MQc/2ridvvO2XN/tHs7csqfqRElKit
VXPLNuuHDLW4+aN82TbPjbV+gjlWdAljbtH6FDLWvT06RninLPe7j4suJNUhalSWOmgpmTU0
/bjU9b3HPIxdyhor9ux/bjcBVAX0gstQNfTM4UiOpJDpA1Cte4r9cLNh1DslQBQDOmdQeuHR
9SDFWFckArgV8IKzUzqA2ROef/TEvqfUFcElqgUXxJOVa4saiT3ARkRXoVPl3xfVddIg+lgW
OX5adBgZ0TFmbST6etemRwVuU6kN1UAKCDlSnhTFFqPQcyQBSmQNTU+OIWymjMhkJTI0NFHS
vnXDaxBudPqX765hug88DcOxQgA0P+49ic8OqVGupVGoekZA96DCzaKP2ySSPuBpXIUwGG0P
qzYhey+FcRsJSVXXk0gqAMawYeNyGyrUd8WDSR6sWJ9TZggVxmxQI1AEV9Pn2+uDFRtRlAjF
TTyzrjTJvbAQaidXfF63cO+SAtTVX1eY88MgpnyNAABTLv8Asw0YYaTpY1GnrUDBIi1AFQTV
dQIZe1e2BSiYgy5ZjqGGE9WIm0Ala5j7qdvphZGBXofUAMjnU4y3otT6vbUdcs+n78PjP2oD
9xyNF7HIgd6YUJNKsW018P8Ap54Gvk6KClWGruKdBisEA0hclm9PiKUPlg+p2kyvQogp+Yk/
5YZFT/zFBOmrDNfr4nDgNQqxVhWudf354vqbSq+pmjFKjM0rTP8AwxrPBJ74YAfcR6m+7v8A
txBMc101qMiD4nvXGPqflGQyk5EAmrHtTvTDBYZANRemn+Gv+eHVBMa6SuYINT9cTXyLSoAO
Y1Zk9svpiwUObGi9AaV/y88FY9OZDqJXIpmrEZjxyxWGdULGq1ppoQT54VKEqwBf8lageXhi
+XTJg/R7ZYA+odO/4YlbAsoWlerCpIzyxYBRxOyhSQVJ1AnMjBWJPTOKkOKUXPT1zOKNw1Er
mcxUEDPLpljWIxVTqVWKkUK9xlixn5O0pZSAMvHqa4swmDKKqSammkDF9RpBdTn05DsOlBg+
FhiCClCD3I7/AFxT03kXoaiLktOn0ws+gRasR+brQmv/AFwtSCcEE6cvAk+GJUAR9dK0A+2m
RxGQ5chaV+2npWlcPjOk0YKVXMmhzyp4ftwYNIIymhYFup69+2I4cBqM3RgMvp5YRhlppOqu
vu2VKYsVKMayM+hoDTPBh5hTtq+0UYZU6g074tOmDJoX05EUJxLTaKSKEqARRQetcXwRyHQx
XV26/XFOtHXnyUYaoAA0mtT2r44ZcGgYqxUkDVT1Uzyw6NEFk0qQBpObU8DitVNpDEqudDke
x+uMyU7D6TrUH1ZGoGNfhFIxRKGhXLMHMeGMyar4FBkurM91H+RxMCjYFjl93Ve4AwWNwTep
tI6rmtMssGHSYLpD1Cufup388WrxGEfQyjuNWXUeeGI59wipNAn3U743Kfv+ikNB1r2XrQV7
f9cLj8nfUjnsGpVaVFcUmm0zghtQH08K+GK1qQS0WLNa1FaUxjTYcxhUVxVSeoOeGVYSyEqC
MyDUj6eOGYyCSoqDmwFeuVMMaEWRsxkezfQdsOaxekYBOlyaau//AB2xi7FEnqD9iCa+I/HA
18nAKE1pq/Lnl51wxr4Jw+bV8D5kdsa1mmT7QWBIBPXtjNZ0FftIy09DT9mLTNFo1xg1qBX0
1/wxk4MtSpOa/wAJyp2698QxHooCB6kNCCe2eHTiapX1VBpkfGv0xhUIIAJIFQc2zyr5YVqE
ssYoKl8tR/wxqSLRh6qTo+4+rxxWREVpUqaoDQE9T9cGIwoxLaQAO3bEISsSQaEnLpipSGbo
KEKe3n9cGHTxsgqgzFD6jnmcZxuA06UYE1A6j/piQFYL6iKgE+eR6GuHNAmAqSpyr6gPEDFj
MSJErrpJGonyxAnEWRVQxHcdB9cMlFoVUoDpGWeR/wAcZph0EfUkEnNQcqHvTBY3IaQkGiiq
j7q5H8MTOmVn9OqpJ/L/AJHFiEslK6VNK0r4j/TFh0UlCasMlHTyxYf/ACZZVSukVUjNRlQn
pinI1GElcs4NOlcVHyRBYOv2kdwe3jitX1FpUt3dqUK0/f8ATGapKcNItFoQQaAUyJPcYYto
yEAKoo1Ba1rSuDDKE0oKmpOWXTp0xNaY5OtVoV7HocTOw0rl2EsaABjpyzFQOuKUWU4kAjYn
M+J8cLYXKhVap1Hx65eJ88LOndmZKqWKAV/64ziRHWRkPu+7PLLCziSNSwqOmVF+mDW8SOSF
KkUGRPf7fHBukCyAkOrCq/cp8OuJGLamJC6aio8a4BaeVvcc6BT+KvUfT64YLDrVTVag09eW
LD8EF9LALWnUdMsZo9plZAusLkBmKYDecOhBjzWtMz44cMOKJUOfSuZ/HFi07MJQI2+3pqGW
QzpixULU0ZFjQjSvQUxaLRIVaOq55+pSc8vpi1mCdhr00zPqXtWmM1uQgCzPpGnUfw8qYoML
2pgCK5d2xCwLKwSlKORX8BhWHNWMakVAH3jLBWsOWpJUEVNdHgT9MREGNRqUqzEEUyz75jEP
gJGpXZR559T5YlIJCNLkH1Dt1JBwrDKyaw1fSv3K2dKjAdOGUy6tQAXv2+uf+GLBgkKotWYE
AUJPb/niaRypCpU682oQVPbzAxQdUTj00WtR9y/5YGcJYlBDqRHkaIDU1JqcsLWYGQxvFrcZ
HOh8fPENR6GK66CvVQMsx5YB8CLLVSetM2HSpzzw4Caf0Ar1XLLpTxwHRLGDVncFwQEI6V61
wLDFfcICkg0pIvX8fLFqkA6UpT760APXLG2h+1HJHSKtV6luoJ64FqNQ3ttXM5AAdRTphRIp
oWCkUYaqdDXpgpPICdEg9TnJ1yFM6Z4zg1HFCKsy1NPP9xwGOa9yXMVAz1DMZ9q4TrgqKZYV
o4zStMSLUcSSvrYVJrnlXyxHFlsO4XO3X1tfW4pdWziS3JFQGU1Hp7iuPR/D+300fWXyt9v/
ADDmfN76H+oTfq5kQVhgXQqUpVtK1occPtJV9MWvEefcj4i0sMUcVxaBNT2d0jMmroPSCKk+
ePT1/TnqZfkWe+NHvnzNz7ddrCR28e0bagBH6OMxH6eBxxn1ldZ/Ofl07N89cwA9iy22xu7m
NVUXskLPPkOrEGmWM9/X8OV5usZe875mOTf+xz3LtutSArjSjDpo8QMdf5/05kut3+X+vi05
N8icz3mC0mmsI9uhioxuLSJopC3WpdiSccv59yXxz55/a72T+4TlG0Wht2sbS+mIAF1ca1kb
Knq9sgfjj0f0+l9X0scO3fPXJ7W9uJbm2tr+O7f3EsJQ3tK1fEeo+GeCfz4sV5v4Wm9/OnPL
h7e4Xb7fb7S1b3IrCGJgrtTLWWoT5aTjjz1IvowfKeZck5lvI3Dc6vMq6RbwRnQgHiR0/HDO
pzfFztaTiHy/vHFbIbdcWUO42UJJS2uSVWpNcivf6469dzqe/Ksuu3fPn3l25S2jWcFvtEFv
J7nsWi6Qx/3E9fPHGfWX1qc6ubf+5HfImIsNjshekUku0DySN3Jby/HG+uef2zObPlSb5808
y3Xa5doaKMfrpdc0wUmVqtUIB2WvTGOuuZ8Kc10J8y8927bLCySwFra2BC1lDKzr1IJ8MdP5
3jq+mc38ruH+5i7s/TZbFZwyMv8AOmYyOzt3JatfwOG/z4/bf1V+z/3B3sf6pN62mHd1eVpI
I2ZhoLZhQlCp/bXB1zxny53dyM78i/KnI+TgQ3Ngm07RCUaKxtlp7hGY1PQVxji865/1lzxw
ck+b+c79scey3dwkVgmhVijQRMUQaQJGHXLtjWc74v5W5/swokDP+UAZBx0/aca5mOjc/G3y
Jf8ACbm5vLS3S5mu4zE+okUXxAyzxqZZ6LLq65X8+8m3vjw2SJRaxtlPPGx1yp/CxrjnZJ8N
Y8unkQ6ix/mZBgf21+mMSDWu+OfkJ+D3l1utvZQ3d/PF7UUsv3RK35k8CcdvtLMo9X2x/PPI
oU3K4vo03W8vamFbhqC3JqNSVGdPDF1/KZ4xx1bVr8d/Oe18WtLmWfYzf7zdyFrq/EwXWDnQ
LpP24v6fyk/I56u5jM/J/wAo3HMrpHtLVLDbz6mh+53b/caA44ZI7XnxitnuhBultMTpSKRS
9cqaSCSPOmN/z9rOPprfP7huF2VhYRWlm+8PFErPGp9pFdPtzYHVi64mr7MPD/cnvlxvsl1u
FjEdvloo2+FipWMdKSUP4+ONT+cxaXJP7gbC52qfb+Pcfh2u4mH868mYSSCv8BUDM+JOC/xz
8tpPjf5v4pw7YmtF2aa7v5gWubwSqpkrU5KRX7j0xrvifsbawvyJ8mPyjdmlsrQWFvWoiQtq
r/FXKuMT/UXlmdv3u/t9xiupSZWhZZEDsTqK9CwrTKmNf9aefPHuFh/cnsyiC43TZGvN4iGp
ZzIPbQKKVVKenPIY6f8AGWeVzne3K4Z/7hbLd9+a/wB32g/okULBbo/8wU66XpjlOMHVHy3+
5Kyn2Vts2HZ5LVZ6LcS3DhpCAa0Wnbxwz+dtb2SOrYP7geF7VYRFtknF5HGKOHHtM4Hh1pq7
Y6df/WyfLPP9Jfwzcfy3ZbnzqHk/LbIzWNotLGwtiAsdDUFq9sc+eN8HXWVq+W/PvD+UJb2M
W2TQxmVTcXk9DojBqwjCVNSMM/hWrfy9Zg57w242uKSXeLK121I1CxGVTKEC5Ky9m0459fzs
MuvIeUf3AbHBYbltHGLHXBOWRL58kOrJj7YzqeuL6z8n1Tca+d12XiNnssFoGuIJfcluXNA5
1VOkdBXzx3+vP7Z51sLb+47hsTrfNttzc7uievWyqgByPtn/AFGMX+W/FLnt/wC5vY5YrqG/
26Ro55Gd7iMiiRVFE0kGpWlMV/iLbPwzXyF862u9bFFx/jln/TtlmZDdXDge97amrBEHpXF/
xy+Nc83qa0+wfPXxjs3Fodk2/brsW/tldDaAZCRRi5r36HF/xvzpl3xWcX+fuPrZT7ZuthLb
7bE7G2gtiqalJr6qkV8sHX8/NY3GX+SvmXbd5tI9q4tt39It1OqW+kzuSR00U+zxOMz/AFo6
5leaTcs354Sku5XJXp6ZHAI8dIOO321jnua9B+EObcL4lfybzv8ADdS7iq0tzFpdRrqGdtRF
CfrjnebfHT7Lr5c+WOG8ue3hsdtkjiMite7hKFWb2xkREATni55kcu5dbGH59+Ktu4jHsVnt
97LbxwBBA8YUOfN651OZOK/xrpepZ4zfBPl34+22CdtysZ9vd5DIkdrmrCtaVr18sP8Ayo5u
xT/LHzxacmitdn2Kze22uFxLJcTU1yHwYA4Of551lblWm+/PfG944WljuW2NdbrFAsEFqD/I
RtOgSE/dWmNdfzk9Y9r5/mmaS4YyZAnMJ0r5HtjnfGuZfyhkLklgKilASK/4YmsG0rEqGWtT
XLoMsaSNytft068iSeuNRnKKNW0r6iGB6UqfxxEEtWp2CmmLAXuOEFc6nIdsWqCdCEBICnsx
P7sZ09UK+phrIGZ0AmlAPHGsYPqUvQEaR08NX1wKUgwajAklRQE55eFMZxrSIZaMTSvY5nFo
GqgkV+zoWPeuJqUJoJDWpIA0g55YWSKIuROZNadTXwzxNYSIdFK516+R71waMONRzf1RjpX/
AJ4dH1MFpQkUVsiozwNQ1WaPUaipqpH7K4QSvGuoE5AZ0/ZjWDcLUPbUdG/KK1JpiOhAcGmR
BqdQFMZqlOFybW1B9wjOVcsQqSIrGVqo1MB18+mGI0tWlOrv0I740qAhK6WOfT05/txnVhwV
ioxNVOR70OLVIklKNpdga1yUdvrTFDgAQQSKKpGRHT64mbQ0J70AOqg7/TDacFJEanUaGuXk
fLGdNmEAEIKmp8fD64QFiZGBp+bI16UxeALgiX1kGvT/AFrhUupVUsQVbWQehyy8cVaNInt1
0NUk1wyLDkqcjRmYUYnPOvbGVaVWMgP2kLTScv340pdE60GQzfNfp3wyGgBNDmPUaGo7DLP6
YWafUgZQBUGtaeNMUh0JCIQzGjP360GKxfAy5VCFzqQD51zrgxWhMikjqNI9PmcMjJ2kJWhp
nl51xYNIMQNSioDUKnvgsaAZQ8jBagHrTsK4fgaJlcMBqoK1Vu31wasJ/RSh9danwbz+uKLc
NEtBm1PHxOE2Fp1Jl9uZAHfzpiq+AgVAJOdc+wBOJo7UY0zGXXw88I+pO4SoWmpRTVXv2ywY
LCYkUzGeZp2wys5gAob0joa6iMj9R54UNG9Q1AVpl/gPxxYdwz6TSQjTpNBi1QTFTQn0k/b4
5Z5YxbiKuYINCMzp8MR0pNasSunMUqOte4xrDtN6QfVVWNC+DBoGJLM4IHYAZ9e+WLKvaJlX
29JHq+v7cOflA0kek5IcxiGUhpUEL6hXIYsXwdlP3E0U5gdM/rg04lUBlC9K50/3UwyJGxJ6
flzyONqhljoAaAgE5eOM6zIUSkMa+ZGX4Ym5BEVoTX056jlUdK4jfQyRuEPqzHbsf24xlYsw
6COoY1IXoozzxsSnIWi6hRu3fKuGw4Un2ErkT54zmCogAtMiAOo7jzxqC+JJDIfTQ1r0HfGl
CiFBpIBahz7V8cYq0wUowJoK9K9MWmVIQGWjrmvWnfww4zYAI66a0CHtlXByrcPWoFOmX1pW
nbFhkMKAkn1AGhHnizSRKsCp9B796L4UwWI9AZK6iABQAeB8KYouqQID6iQw7gYdRgQDVuoN
dPgBjUGUUrIW1Z5ig/DP9mFShUFQCzCpJIXxGBoalnqKaSf3HB4NOaF/RUHwrUfjgsVRNqoS
RWvXtQ41zBYf7RpIDavHsMLOGRY6FeoPU1yp2+mKtEqqyV1V1ZEVFcvDGbTIIsNARqVy0mnh
2wSLRCpSlDQZV7EeOA6dsnKgEqy+r/TG58MGJ9NC3boeoxnF8gVEpXVkASQPHwwWtfBg5qAF
Oo9x4jFMElp/FH7Y00kVyoZSPT0BPSlKdcSD6qBWyJGQP+OMj4PGKp219GJ/fjNURSe2aEEA
9vPDiSgsUK9jkB2OBrQ+2SdIpTqPEfXFq8MQQCur7D6iPHthjNGrsyAg9PSKdK174fqoB6Ei
hyrnXsfLEhospK5LlWnhl5Yy1p49RDkUochTx/zwE0kJ06HBGda/64BaBYfXVVqp6LXpTrhR
06tp7DI9OnXDINMyg1GqoOeXb641azIIRugFDUsPHGWoUnqIooDA6ar3J/1wRoJk9GlQCwrm
frijNhFUIGf4eH44rqHG7ZajUIaA9aeP1xM2ko9LF+x06q5H/nianoCxUEIfSTQ5eAxLKkjD
BdJzBPUda+B8sGRqUxUrIDkCals61H/dgzTBBijMa1BBWv1wL4AI/wCXU+lgQVDEjr/hiZ0i
W1NpyHQZ4qzfRozg6XWoJFO2KNZQMlBXoCagdRXEvBa1FG0k6BmMvV5jBIZYGRCWUKKBszXp
+Jw4r6RVK+sdtKqe2JWEisK6CB/t8cZ3AcU9skmhrQAip/aMU9MNSZTUigIrUeH/ACwj5Ili
c86mq08MGElCMRTJhXSB388GCXUr1uFpQBgMyBT6V88SoYHaukINXXPoSMVhhwrBxWq5UJr1
rhaNlpqtaVOoHw8cZqgn9ChV/wDGcz46jijNiJXXUVIC+VepxpbpqKzVNQB9viTTFTBJoBqB
qNOhyFRjGHqipIylmyI7+fnjTBLqzKCiKOlBiw4fWAQhIH5jX+E+GM1qBMrDUG+7qCBgFFES
7UzbKtVrSuLBYkOliocGnQt9BixbiCRih0oTQmmk9aeOGLdTR0dVYqKU7ZnP/PBWglZQSSKd
SVYZ/j9cUoMrvm2ofQ9PxwxRHHLKrkUzY0p2P44bUMCEy6TUGmqnhXAsEG0FamoPb8uffEfg
TQoZKiUAjOnXr0qcS8P7SglmBCkag1KjALyAS+pGCkhvzHp/rgA2UEEU0t4jsfr4YTYib0Vj
kTrmB/ni+RtP7gYFVFKg6VJoPriPyeJFroeRfV/F0qBipkgURPfBJ0qBQgDzy6YsHiaf21C6
qda0HWvlgkNqKZqJmhqTV/L6YWLQKwFZBQlgQDnSgwgKzy0KqfqBlXzwKSiBC1YavcXJgaZg
+OJvBhCV9Rofzaex+mBpEoVS1Kk1zbqaeOCoXttGxYepjkoGdR5jwxkuG7WNYlAYnKpB8cMD
h0kDPvjWEa10mn4jEi0n+HElhEVzLCg/LX930xjWlnx6xj3Xf7KwnLR2txMkUsq5ONWQK08c
bkt+DPH1rcR8f4KNp4zsu02ht74KlzeyoJLiSuTOzNnStcZ55c7bXdB8ecStd7uN1msv1txa
QmW3huKGHWATVkGTjwrganTnis7DnmytebhDHbrbv+njgtVCIATQmh8PLDeV9sSXH9C4jdbT
xnYtoto49yPt3d46Brkg9W1EZ/TFONg+zpt/j/iW3breb49ku4T2cZktIrgD2VfSWZ9AyJal
M8ZnJ+9cE9ptnNuH3O931qlvLCrJDbxKEjXRkpFO1TjV4y+L7sDxTj/w2myT3m8X895vzB1X
aoQxMZWukUVfUK55nD1xT9l/8ScJ43Ht+58yvdvS8vbVmFpZzU9mFFFQzKcmY969Ma6lia3+
ibTz3ZbbdtxhS3MpaEQRKKJoOmi9KCmM/RS58plOw7FudlwjadqhtbS9QiW9NDKzEVqzHNia
GoxfUXq15/y7gvCBzm3sd2vjtWzsnuzXbULMxFdC5ekE5V7YpLfgxlvk6y4Ha3drZ8Tt5ZNu
VQtzuTatJrQUWoXVQeWN8caz+Xrvx9tHAU+Op7vjVh7rGJxLud3GvvySx5M41VoAegGL+n88
uUW64/iPb+MJtt5fWFpaX3KHnYXDXjKqxKWIRUDfaP8Atzxdfysa2tD8l8csrjiNxf7hHA+4
Rx1WO3A9pT4L5fXGBr5FlrLM7RUYgnTGMic8shjendfQnxnwXjHGuDrzK+sl3jd7hBJHBOf5
UIBppVcxqr1JGCzQ7fkji+0bpw2TkmjQ88YeK3VVXQrDIVXLV4Yz9crn1uvmO4iCMdJDqcy4
6AjvnjpK1IhFVRjpqS2RGYr4Y6SgUnuMCrGhX1Ba5Gn+eC1B1tr0tTMZg5jPvXAEZb7mOVTp
Wvh2JGLTiSshDAVLjoD38TiMpqnSGORNNVD+5calNhnYrRlFATl2P7MF6Bkk1kGmpRnq8MZ+
SjeWrkdVeoYHKp/1xcinFzMQVjJAApU1r9MayVi+mCzKRqJYEAPXIkYvszebBVUNqZqKw9Kg
5CmC1c9enEgJodKgZHrUjtU4zY7/AGgI3rLUFiwoNAPQeXlhZH72lCKemmZPQD/HFgRI5C0f
MBhqFevfMjtjUrP11MJH1ltWlXWlT0GDpqEsmjpky9WPVsOjAGWVSJaimeX+uN/YEJmc6mJF
Bka0Awbi+u/II5DpWjlWrXyNetManbN5SS3R9r2mNa0GkeH4+OLvr7fK3wCze3Iyj7wvQ9B5
Y52avvcIzu7AqKLTPBI3Oj1ZmPq01GbE+qnhjXrf30zvJHCV6xvlUdfpi+zNp2m9yM6Rqp0P
cDpmMU6P3pRPJpJJp2A8BivSnWRGHjMZ1n1A9+3hjNogi7ugcuSWpmfEYMNiAykzVocuv1xu
VyvKdJkSoBY9SR1HjjNrcRmZwT6iGIyYf/y4ozmXTpPMEoXNcgKivnjf2tJe5Win6gHoCOhx
faty4jLuVLV8AKf5Yz6zp3l1HVUEDueoxaqaNj7ntxr1GbA0+uLGfQFyC3rorEV7eXTE1thy
ykAfY4NHp3w4tonRzpGVcya5jwBxN6hVZA5CktqNC308sanRwWkrkSWpnl3PhTBrNhyBUKoJ
NciO1cQPIGZhlSmdSOtMFo6gHVGIYkMKEdhWueLWfp4cVD50bL1DsMuw8cRw8bRqQqkUINPD
FgpLIwIFNYFcyc+uLDo00CsYzpmx6VPgcZSOoZq9gftrnhpyJG0MoHUrSgr1piRmOTL+YnJj
375Yo0Z2AjBb7myoM8z44YLKZAVUmpUKe4/DMYsZ2lVaVqBStV8R4/jiqnp/bDUYhVqejCtc
P2bOFVciCO9K1/wwashjQ6mGQByP1Hh54tZA+WbV79RTIjGhlEiKFVm+7/jPAiQuSzkBQozH
ck+GKrTqKRlgPT18CB44KYCpDKSoIK/XCjhupWnqFPClDikOhZWFQaFTmudCQcWM2CKMtCTR
eq0707E4MQm9wqHKj19j/hiWEIUFXWWp6BOufhhynwCsa1VhoHpIxvGLdGjK3UahmfD8KYMa
3ANQuAT/AC6CoHbxGI/KTUrEH7F7jwGIWgiVFkoMyxopOQy88XqkSEk6jUV7DqcjSmDmq1Dq
cORTrWvjjdqiTQWFTnpOa+P1xncVunWnqUUYjqK98OatCXJ9NMxkCen44YuugBVAX1UJPqFc
gcOsykjxFtLZkGpIr+GEjMaE1LVPUN/ywew3DrX1KAMhlT/DAMoP/GSCtNVGPeueIDJct/MF
NP208O+Bq6GgDoUeq9BXqPPDiE6a6EVqD06VFcUuC3TFiImKgBVyBxZqwQkQAjTUj0qe31GL
6r7IyWfS1PUMtQNanxxv4WaIxxmrBKmuYGM2U4jVWLtX7gPSw/0wxijDaB9v3ChJ/wAcVxTT
atR7kEdT1OLP01P8ieWPP0kkjLLoB44loQ505ChBqK5j64yrTSGShqfu7eR6YzCdXL1DZE5E
gf4DGjKAs3vayaKoyNMjTthjIgyEnSpU/mHYf9uNZT9hDTroQDmPUD4eOM1To7mklAABUkD/
ADqcZSOTI1VegzNcwD3OHVpgNQOpvWDkO1MMrOhqG6A0FDUf44ZUc9GQVapBVh0H4+eLToyR
mCF1dsuvmPDF9VDaWZgGqKCtfDvixT5LQWBBBI7gnxOCk0kagjMk9q5YmevkiHjauVfzdgRh
5qkEKO1BQFQSV8f241WpCo2kkjpmadKHLGKS1k+kkE0HX/AYZNZzQlCmR9Jr+/G4cKQyKQrU
zyNPDwwVytGV1JllpGVcsYa5NEy6lBz76uuNXxSGYkGoGVTQnocU6WaQC5EHsSBjSw0aFl0s
aDqPPGbcU9p3VjoLHVSmQ6jDK1TakXSM6jqPx7YLNYwZ1ABOxrrPQgjF9WtsMXLgk6aDKo6f
jiwbTF8wWHkB2rjUgk04BC6wRQilDlmMZvTeYYs6ero5GZHcYrwzpgCTpz0VJOkGvn0xWYfU
gRQaFs1+0+IODTgSAxChfV2pQfhjWMWnIGkmlaUBPYYOjIZaVoVGkZ+f4Yj6cZRkD7q5V88Z
xadAQaN2FMj44YLcQtKqFSBk3Y+XTGpzoEh1MeormfHw/Zivhgsl0/wgHU31xnDQkrrVgx09
COuXbBjOijkWursDQdxQjGop4QMfYV050PT9mNWeGUmeR21Nnqpp/wCWMZB6ZI5S2XpaprWn
4Yid1KgBqZHMDqMSlLU1KHoD6V7YydPJKhqnc5kjqw88A0KIFqZG9JGZPbGtMMwKnUvpB6Ad
/wBuHR9hoy0BA9JrqFMssZs0ylpNKp0HU96DEjhyVyashqajIHFi0pWqDT1d2HUVHfFYtMzC
hzz8KZmnlixYbTqGphVj/j5Yvg2wlUhaFgCDSv1ODVKZUoAvqOk0Gn/PDurBxioJ1Z0NcqZ4
q1z6BygUHqAfVQDv3xYeujqumoyOXpPiD0ric8Csnt5UqvfxxaNGLclqJnqofqcZrUukQBQk
5dgOtTln4YD6dloh1H+YPDpT/PCvwZDQlKFaZjv174FtJhINTMMmOQ+uDxXSf0t1qCPszrTB
jKNiCDXIK3qrlXwOLGtTVLtUMdJFVWmXT/HDi21GVdCD9yt0QH1D9uKVdQlB9I1VBrQjqK9s
NqEkZqwrUd65HLuMZMhR1eSuZK55Z5DvXFfBYSSAs1VrVvvHj1rjNKSZWZVCgeR8fE4zq2I4
9SgVr3AB6U88OnCXSGQAdCfUT3PkMKkOQEVqCj5VByH4Ysp8CxDOEIpU9jQZeeFjSZwxJHoN
KDv0/wA8FOidpQyknUWA8KYDTPrRiTQkZgfj3xLDyPlmaljQ1yA8csWDTaANAcatXTx+mK1C
klyWi0FKGn+eJaARgsan0khj9PDBh+RmViSBkK5DywwWnZlbIGhr6QBkD41xHSj06tD1pmQe
pPljNmmF7YDamf0j7R0GJADFWOg6an0/TBjI2Zm6ZeoHLOpwogiNUBqP1UGp+or4YkdZEBrT
1KAEocqk9cuoxKUihLJQt7eZOnocB+aAPRSSpKkUr5YlbhOyAKYwdRFBXOtfDCLTNI3sqW6r
37ZHEhehiYhVW/KpzFRnSuFYIAEs2kFgB6enlnXBWjtKVUI+VQAFPTEKEZTNUAqVotOlcAnh
wSukNVtX7q4jppCWGSksD6T0piOmcFYfSutgc8u/nhVOoOgO5zPcCtfDAyBtcZOWoEUYL3wm
QakBGqwYDIrn37g4ii1OW1KTppSoApiFgqAlDTVStad/2YFgo3ZlYhSqA+k9PV3GJSnVQdTn
7vAZ0pgMP6mXWDkelcq+VcVINMix6Qc+46ZnLATIj0c1BpQkDIjtg1Y4rwDRpJBI6KOo+uKD
HEakZ5/TCcLVQEDMYUXtt4H9uBO8SSABj6c/tPXGG5Xbtu7z2F/BcQALLBIHiP3AMudfMjHq
/wDr9zm+s97+H0Dsvzxx28tody5Js0l5vdnQWskMgELMPsLK3rzOZUY1/b+XMu83xx/n1b45
dv8Anne7jfrmXdYF/pNydL21uAsiKMgqk5VplnjP/Lm8/PrfUsT7z852dvtkewcJ2k7Zb6mk
uZ7plkkY1qQKdCaYzxxt9P1trv2/514xLDBuW/7FNd79a1ELh1WAMPtciurMnHT+v8Jz8Vr/
AJ1xbd893c26Xzb5aatsujoMdpRWRfKuRy8cE/lLz8r6eB5b812bcfbjnDrA7ftstf1t1ctW
5Jbrp0+kV8sc5Jojh4l81RcZ4ou02XH7Nd1ZGE+8TNrkZ2J9TKF1NkaD1Y6f04n7a75sP8e/
MUG2W15tW+ws+2Xkhkkltv8AzK8n3HPIg9q9MdOv5Trn/LOX5Xe9fPtjDbWuycK2/wDSbfA/
uNcXQq7sufq/7j1pjhzzN9XtWsXzpwpjFv13tV1ecqiRljhoPYjemdGqMvDLLD3/ADkvz4Oa
wFj8uGfmb8q5LtsW7L0ttvyWCNhkoSob7e9Rnh555x06lxzfIfye/N99s3Nqu07XaD20gjAI
PizFQvTtjE59yOb1biPyn8O7DxBNkR9xmWUMLgNANUjuP5mn1aaY13/Lr8id+4wmzb78Vbhy
K8vt6W8stpjkCWdnapX3dP5pnrWvjTG+Z19fK39q6vkr5ltdz2VeL8SszY7EtEedy3vyU6Ia
/YvjmcceZ6vpR2vLPh7ZuEGx2rapLvlUsCrcXF0gJWSg1ESVpSv20GHr+eXWNrq4P8vcdueO
tx/lTPY2MFT7ltGzalbPSw6/jjr3/OX2US2fKs+Uvm+23rb4+McYtmtdiioDeE/zZCoouXYZ
Z44/XFenjkzKS2deyjtXzwxaBWUgLQhR+YdScbsWi1mR2BWmnr0AyxnVgaKVGuoOZK+XTBrW
Gmkip6RmvWuRP7cIOqMRrpXxIrQA4rBgBIzOdWaCtCe3hhlM6pvcrUyaQQa+LHzOKnQpJqqw
opzqB0K+fnjNipSyAppqApFWetR5UwhGr0PqFBX0+H1OJm0xc6dC0cnMasyB+GNQXSJjrVgW
WPMjqc+2C+jDjSSxr6mNdPgMDcw0atRmJOs5dev44lmjkIelTqYZasIATorVdKgUfP7q4atp
oiAv26UBqBSpxLUgYMvpzkPWvXBh+xlLPnH0/NUj9+EfJyyKlCagmppnXFBUcgTIE0r0C9f2
9sGGw5Pt5qQScx3oaeeFX/BUateokHq8Se30wDQr60MdKeB6V8ThMM1ZArE0K1BFK18xhtax
IgASgqtMySf3nGVgVeT1L0UZdcyR1FcWMmq82f20GZGWNZgykmldVRUnPUfDGaZ4cB1q6kKO
oqK9cFrWhYq2VSVGWQPfvhxm+nLeoKWooFMuo+mE3wALlhVhllTzOXTDiE8bFc+/QeY654Yg
ggE5AHtU5nAJKFdIUAD1N1JNAMsBMWOhEpqXqScsaZJXKu5SoBA9XfPscSOiqSADn3c5nLrl
i+WoF21nSo7k59sWDTKWqwz0mpPnhOaOiqj0NDTOvhjOEQVtI05AZt4nFViIer0I1NXYfcKY
yNS+kHQ7gKR9SB/1wtSYjHVirURarmK1P+uG6yMCoUyZdzp74hhm0aie3Zm6geGWIyAQAUcg
1JFDXKhwLEr1cMvQL0HQ1OJaSRgZPUsFq5rX8MWVZDNGzfbU9NKVyxRYd9ZRFYjUK0IHhiwB
ZtS0WvpOQ8T44sagslUMop6gHqdRNO1T1+uHBqNwanQAFIqK+B/0xrWD6wAEk/mKPuA6+OFf
AZCrORVlyqvmMZ1obqvtAqaUyU9yDni1ZvwfSHTTnp8fPrixaZhWorRBQZePgDhjISoAClq6
86+H1xasOoIjGo11GgBzxX1vBmMEqrZlMwOmDF8hkoBSh1Z9P8MEZt8JHckjqSaAnw8MasZl
MSAdI+010/XvhI42DodRqRkFP788Zw/YzRrXVWrA+lB06dcM0ZEQ1akABOnImmdD3xqKb8pK
Z1I9WdM86+Qw2GEvt6SdOQ8TShwVrRBlbSrChPSnShwC0LKoVkI9Q6V8vDEP/AYjITnQeZ64
bjMNIkiMNRJWtTh2GypEYBSGB6Vr9MNmpEW0yE5AUNQe1cZzENPvCsMxkOuffEsOyaq0Olga
mnbGovAoASD+YeOXTzwGJFFTqUZnqfDDaZAg0Y6qaCak+eM2ue3RtEhUuoy7muD1qWA9boOn
pGQNcbk9PtAF0LUNqP5gO2Nyes4IsXpVippXUO48sZsWHYloVIFPEn9lcZIWVPSANI6Np7n8
cMWykwjDCjVWlP8ArhMwmrqB6NShIFRXuMMq6v6NoZgXqPSBkDSvjgrOW+nkYlQpAaoACf8A
PBD9vAsCiVBBoMKkIyAny8QK0A6dcX1Uhw4ALilSDQeGM3nFKkyAU5UrkB9MVIGNVBVdKkdB
9cU8V9JtSqq01ZUTwNcIwMerVqJ9VakH/LEDu2lj6aUocvDBNa0SiRnLBgAaVrlXB8rCBEmp
iSB+bzI88QCuliT0JNMssqY3gheoUQkHLIDocSzDM1NJWhPTSPAdsai8OsmtyFyJGdcsjg6G
mVGCnV1pTPv5YDCOvQHB09tI/Z1wykqj3NJ7Gjfh1w1HYAZ01LXofDBkaDG4DNQavAnGlowz
EFFICHOo/wAMZtWU1BQkenOgbt0xn5BjoFSTn1IByHbHTm6L4TLQaR161Of4YbzrMmlqAqK5
ihzFRTBOcbOrIWSno8A3So8/PGcIpFr1I1DMUwxmoaq1KLQmoINaY6ZjFuniRgHqSM6K3hjO
jEooootDXMnrTPGbDO8AQPU6/actQOdfPDhEWYmlNQGf+uH6s6aqqBQBlJqR4Yz9W50MlchS
uo+gdCCf9MFjUsgMxqrQKVz70IOXXFi39kpDKBmQBmR4Y1PRSUMpqVIBGQri0zYdR1OZCjP6
YLB9ghVTTXMitPKvnhjnqSoAIFGGZxrqbBx1Sqq9FFSagY5zlu0nDBQXYBegzzNOuLBTqqj0
kinh4188WL0Mkauoq1BmK+BPhjU2LEaq4AAOlwPVlXLtgpEVdSNXU/j0xndFOEAFWHpGX/M4
pLGQFxUemnWtOwx0+qlOjKQ2nNjlUCuXiMV5w7+j6WooQEaTkT2/ZjONe34PViAQfUDmaZnB
VIB5SZcl6ZkfhitGDUa0WlCa1HbHPGg0bXnXScz4iuIHLBgAB6QaE+fjjXMFpdWB1Ag9a9hj
Vgh2VUGtcwcwfLzxmeNUauKKpJHT7en4419WfsHSXJ6ZdulfLBTKWtg4HfpSmWWKQaAqGFV+
4Ghz7d8ViltF6AgowJrmfPsBTBjUpFlyfVRhkVAxG02r0ggUJyBHXPB4tSBXCkn1B6UP8JHa
mLRUZTTVhmPpkanpit0TTsVajKuY9JA88GNGmXQwJpQZ5eeGrBIUJpqK1BIr/wAZYNalkO6P
WiZCmbDP64znpvvwZlehK9DSrHpXy+uFigUvU9QagsT0+mGwTqiLDQQDl5DP6VxSLTS1yfqF
7dzjNMokdDmwrXNa5f44yZ0Yo1Qxo2sVIX8uLToWOkqKNqPjjcg2DaM11UPatP8ACmMnAM7V
K6aiv2jqAfPAhFGpqyJ8fEYzqoTpLEsSAMyCf9MSS5D7SdQFfUMApjK1QCB61zB8fLDIPtYE
VWniO3cH6DDXTnov5ijW4qTWnfp/hjX2iMFoCztqzFPp4YzaBAKzUUeoUAbBRPTaSKggKyfl
Brq88TVha1IDBamvTyGIWmRk1N1bxJ7E9sSO0q6dSgDTk3iK4qrSRjIVWv3dF8Rgo8PKqAjR
XSRWnfFD0FTqIINNJpnh0YMSaS2ggnpJQdR2yxY1ETHW4Gog1FdPicCtTHNQ/cmjeNOmDGQv
F7hdaZqKg9aUwGQSe7oGpalR08sVXqTL3KEGp9Ve5xFAVBclSQGFT9fD9mIJYySqrWh/MzdD
liIZVKrRqqex8D9MQodAEZLVYoRpAGRJwLCLKY9LEZntnkcOCEqZVDhyPyd6DDjSRhUppbSM
9eeLEjYh/TTUFzanWnlgVSL7TDSKggUBHj3xlFrBIUH7KD6k/wCmIHLFhVG0lcz+zFhRVoAp
6eI+mQxYhvpCVDksv3qM6/TDFTKxy7KOtPPwwjTDQS5K1HdAc8R0KEldOokDIKcqYjgmdkWp
Wh6NlkfPBTRREPHqAoSM1Hl44MAQkkS6qGpyJ7nwywHEcs/pWudMs+2fbCiUNqIOqgoanOtf
PBo0phRaAZipArTPzxYXDdUZQSKEZU7jyxKOShAFD+GJoSAqakdemIHoPHv/AMDEXfHEJKVq
SDkadq4A6LKzkm3G2ggj1STSrFGhyGpzQVOOnM1rx9I7R8XcD4lYWsHIxLuu97hQJ7Y/+NCz
dlBIZ6eJxnLR11J8Jk+CbGXebgbpIYdqgX3ZFt6LIEI1aVIPpJwarUu4fG3D+QbU8nF7T9Pb
2re3K87M8rOMgC5pUYNrP2SWvxb8f8atrW13sT32/wC5ZQzKAYIW7AL+bPKuH/atz+lBH8E7
ZLvc8+7yiHZoI2luILaqyOOuhOoGMb0vsW5/G/Ed52aXceL2htdvtyUczV90suQYNU16Y1mL
nr9snxX4L3XfNnu+QXO4Q7ftMJkZNZ1OdH3inl2rjVtXdSfHXxDtu+3l7uO43Tpxywakjw09
+5K5gKeirQdca+1E6yNfuXxbxHfdvju+NxLZ7UjexLcSmsoKmhJbqR3pjnli+6y4/wAG+Hxu
C8e2u1u963dV/m7jSsKPT73KkAfTDYNYjmXxO15zqPi3H2ifcJBrnK+iOCP7mZvp5dcXOtzp
xco+G32Pedt45BfQ3+97idPtx+lVAyzrmPxxnn7brPmtzb/EHANnii2K4kfcuTXEYIkeogVv
zCMKRT8TjV35Y66lcO3fBG0w3lzf7/dvb7LZBmMdvm7nwBoaIMG1vnrPhzc3+Mtku9lbetgV
YNoij1Qu7HU+n72atK4s9F6rwyVIkvCAznsQRQ1HUkDsMdp8Ddes/GnxBtu57VLynll1Jb7H
G4S3t4T/ADJqHr5Kcc+rTHbzv4p2hNnl3rYrcW+1MAInZiWcGtGIOY6dMY2yivD5oZFk0gDW
hzI6Dtjtz1HL60KRFSUOZrl5+OLW/qaaRQSGBOo+ofXqcStwmlKqdIrHXIjOmBajkUsFk7Dr
4V/5YSbUtCus07mpy+ow6ycKGIDnPpUEZeeKNSAMJOZFWXIA+PY4jUkZShBADUqAPHphqviP
R1qAARmB3wM6UqAFVJH8zNgvY+ZwTTfhEtQyhRkMqHqfP/lhUOYnCNTLqAPI59cUowRLLDqJ
Kg5Ch6DCKSFghofuFRX9p/HFiOrCoyrnX/XFhOAtCKkgmreWHFaTLRtZoVbwPh2AwQYEKqnS
AKtWh6fXFTMPXRXOmdDXPEsKSNhQZEMKEDI1xWj6oyNIpXqCqk+OKU4KNJdWYqVFfLDRgHQA
A/cR9wrQV/DBEcFQq0qBX0qM8/HCh6T7iOCdJFW6Dv28cVahpKZ0b1Z6dXX6ZYJFTJ6cwSB1
YdRXyxoU7hGb1kllOYX91ThkFoHJSGjfcGqR16+WM06FhrHiRnl/x0xQX4KFnKliPsOliT2x
YzPBSgGRSMsjRvHzwuk51HkpNc5D+/zxQYemskZL0IB74dWBlVyg1/dmY/24RbYRKnOtK0Bo
aio6HANIxhmYZ1GbE9AT3wYdEQhQLTQDlX/n44swQBcIFXMAZenqPPFsITUZjMnse+FSplyB
ZWDU/wAe4GDEEAMNQH8zw88Fa0v5pdlFMsy3bFrPqJCyUIFT3bqc8UqxJK7oSQCXFAK+fWuN
RToY1OtX+wgZDLMeWLcatANWslFIXuO/1xazpAIB6cwxFewH49cBhO50Kikla0rTqa4IiYvq
GVCCatXInyrhWGWRhVq5nM/s/wAMXypBRyEgZkdCSf8A+bErBNplYkV0sASPp4YFLDFFVtIN
aLWgr08MOoPu6BQrpByZWzphk0WmLU+7Lzp4dMODRUFDqBzzr9cMgRyCjF66gBkvanauL6nw
ipIXUNSKKdc6nBRh3Vg+jUKJ9wHX6Ypas9GUZTpYeo5qO2LT9bTGMBAOjk1qTXOuLCR1LUnJ
a/jU9cNMgmNXBFQwBUnxFMCsEWAGfQUBr/p54MYoZ9IkOefR86Y0MMGQVQiuo08aU8Di+rRj
WlBnU0XoPPvgnhO7qxKrVSaBST4imIUpNbAAZKo+lfrhVKNqEkdT38vLGvljSqGIU9D+zp1+
uCNynYBlrprWgWmVMsWN4ChqSTWgp3yxqOVgwQwp0VW7jBcPJpF9ILtVl7AHJRgO4Ss4av3K
3VvLG9FppFBIJANfDrl44zYd8EC8j0IHpH3d/KmDDLvhEhVZarUVoe9fE419VshglaMpIoKH
8e+IJBpJ0rlXue3btgxbAFGaOlda6qHxCjGbPTnh1+31U1E+rwFemNCG1x6jpQnVlX/SuGTG
pQxUEYrSgrkeuHqj7SHcMzAj7QPVT/XFrEulUGMitFJAUHwPXBPlq0TJqDmtATQV7nxxVEIo
s6ZqAa161werz8h0jUxYgDKi9ziWw2StmDWmYGZxoaWrS6169fEZ9hgzUQOgKwNQTSo8MMGi
AU/caA5U69MUla0AUx6jIBTOneoOG+gwQ+kq1ASQVP0wLS0sKMBXPMDviw0imlSCD9329x5Y
NGiCkt0o2VB5HwxaCZArHWakEVxW6IbJwEJoDXUvf9uDDL6WkfamRyoB4fTGpMOGSIFywOlT
UUrX8Bh1YThDMSuQOQHiPLEPkVV06iAqqc1PeuDKusRla1Y9R0AzrjpWcEiuIyXFD1y6nPGT
PAKZSa1qSagDv9RjeRDKhXNPs655Z4z1G5QyHqDTIAgf5DGZVSXUDUZ+Q/zxrIBmjNVgBnXV
Wi5YfqvvvhgQBkR1p45Yz9cVoSCXqT6RkBjUZwTAihc0rkM8qdBhkaoKoSQK0pkPA4rPFKN1
Vh0AIp+B8aY4+xoIXUWrk47eeNbXO3DjWDpJFB3r443ui0jUMUDHwAGYqcI0SaQDnmufauf+
WM0I9LI2k0o2R+mJQYXSpAbIekkdgMsW3W5Jginq7EE0I/0OK+m8hISoFKCuZ8fpgkR/QFKs
NWrMN4/XDOdU6wY6BAn3AAeGXhinJvUqOSSNgCCdK/cp8PDD9BetOxJhOQUdwP8AA1xqQaet
OgqD0IFaeX4YfqxumGhWIyZwaE9qHuMDfk+CdxQqB6gc2H7sYsU9SLGroa9c+37hjOm8o0C1
CJnTqSf3jEJYWkkk5ZGhqa4furBAAtrY1HQ08D3wbWbQOFDDTSvXIf44WdFpAYkmhFNNc61x
RUL6tPiW8csbkFsOqOGK/m7V6HLtgvqlwWR1ULLlmvb9mOea6/bAxmWNmLfaRQnDGtGyV6Cj
0qf9MOMBBUKprQjLLoAemM3xr5I6Q9QKt0FcODDJQoAxKqDmO2DB9RLQlinTPIihP7cXq8Mq
qMmNK1/f/hh1nPTBI0XLvnWuQxS0+BVwta/cBUU8/HG4KkonUqKn7vpjNilB6aHSKAmoGH6r
cEUAAqNQ7gZVOOddPwAMoQEAVr0Of1wWKZPlIixEtoFADUd6Yz9cNunZVUv1zPpNf34cZ6CC
5DAkayBWvamL4UpBStAKa+ufSuL5RyCahVDP+UHETABUEjqA57UywaciQMnbJaZ1waEWoFQR
6VJplnXFCHUSAAvo6V/Nn440zRMlToyB6lqZV7YBYWglinXKjf44CIxRkVz0NQDxB8BjLWk6
6Bqamn+Eda+JIxcqmVtRL9T0CkUH4DDYiBIByzppC9Bgq5JowMw1C2YqMVrXh5ZWKplqHfL0
1/zxnFe8A+lWGhQGbrXGvqzoVlk9wl2CrShU+GNRm0SlNIUggD7Gr3xeMemZHaTXUgDJq4y6
cQwY6CtWzHoPShwY36lVACWY1IpnWuZwDQqFGqmS1yPQmuLD9gSMAaE+QWlMNmCUcTKKKz6D
406f9cGk66BUPm33Kegr44PRPDR+0R/MI1GtWPh3rhxgyIMswEY0BPWn+mBrmxIWjcFkIRh6
RqGVD1rhdNh/SoTVpBJzr4YIL4icorkrUqxP+uHGfSbSSEqVPYqMz5VxYj5/aVDD7QQadB3w
LUkdCGbNVH3AGpb6eGAxGsgYOxJAoSaVNfOuKtSYLVpIoA4JGk1ofPPANGSKZjL8w74NUMzD
2tVR7dcmB7fTDKKiR6hXYl2GRZjlTDUN30MoQV1GgpnTFioNIYZfd1FOhp1OAw/smpCrUeI8
8P2GD0gqq6QCKVI6GnniJ4wpAfV6qkK3fLtjOGCUKmmuk1NSKHUD41xYcKgMhUVp3U5UwABR
Q5SlEboa1rT/ACxA3tqzVAzX7W6nFqpBtLiiEsaHU3T/AIGFkDFQCynSSaVBqSDihn+T+hiK
D1KRqc9xhSRxoU0OmveoP0ywNymozik1cqFKEio8cWIwEY1IagNmGGBJRIsgUFc19Jp4DxwL
UI9o6gRVaE6afvpiQEcA+WdAOmICdCSCzBkUHp1OEuC6RkOtiCT2xGOTWa16eFMBHUstQKEY
oga/+PPEFrIxjlrqHpGVOgriOO3Y9wtbLerS6lBlWKVXaMfwqa0/bj0/ws3GOpfw+nv67x7m
Z27kF1uMG12W3aXaKd1V29vMqErU9MZ/p/O80T/Lrt/l/j19vlzZ2z+xDOoigvJfQjsQakE9
K+eOd/m31MiNuWcY4Hxz+nybgu7bjesZWS1IZIwxyYkfsGGcazPXTLvXHeV3O37/AC7lDY2W
1kM8ZZS7MgFVCn6Y39OuVzYe3+YOP7vu+47bbn9OZwUtZ5yqoVI0knwPhjH/ACuar58uXeOY
8c4Fwu42cXyblu117uhLdldAJK09Q8j0wfS07GW4dcfGJ4hc3vJ96nfegGVNnWRlVaCsY0D7
q9zjffHUh/6y+Ra/FXO9nbatw4/lBe3TNJEWZfaCnIUrmAuOvf8AL/SdRm9e+tPNu3HeN7HZ
8Ibc4brdNydvemiZSkCynMkjuK+muOHPF6+FZ+V9ecW3Cx2Zdm4jutrs9oQWubzWpmlZh9zO
M/HGb4d14nYbTsUHyD+i3jkUltHESbzdoZWZyaaqe4K0r546c25431Z9V5v/ACz4+2L5B2m7
2N5L+3twGuL5meQu3Y6m6n6Y1/H+f21y/wDD0CO92K95BFzWfc4rXZ7WImNSQzymg6Kp7U+u
MfSzwyxBafIOx8vt9y2fZpgt3csREZSFPtk0qAT1xdfyvM1WxQ/JHOOPcT4FHxKG4jvt3jX+
cITqSPM6tR71rn2xmTar1GSsPh/YLHg//uHIt3RJZo/ehsVYAjXmASDqJ8sXX2lwzL8Nfwnk
m38l4EOK7bKr3sYIPukL/K16g1OlQeuN9fzs9F624H5Q5lsmx8LtuJ21z+v3JVVLkxMCkenM
ip61OOd50/l83tI7M2shqkkyDLrgkxAdGEi5Ejs/avfGoMoZUIFPtzqPH8cWoyBwdJqxbIp0
0g98Xyp4aQRLJ6evgc88MIDrUl0QPU1anbyzxrRYTKzgSKQF6U/N+7EPk2nS1DlrzYDqAetM
VhC8Sl6IKn8r+XngHyWlaDUM0NK98SxHqQRkDv3NSST4HCzDqFVV/Oa1auWY+uBocTvkyqW1
VahzoRkK4DAEKzGmoqtR5E/TCsEmohS4oq9u4OFYYq2pj41qTnQ4dUmHWOPURn9uVMjg1ULF
h6K+kdexqPDGpXP0TAkpqFS3Tr4YzWpyVdKBdOk56qdT/wA8EMmEfVDTUSeqnsT2w4NRjOQB
6GgJP1XthWjjKFgaZN9p8e/4YhoFijpI1Tk+nIGtSK5YGpg/SwQFQGH5jkcLO7TOACFQkE9V
7UOEhjJL+o6UHpJ7f9cDU04alU6Ek6XPQjDpvodMjkqvXt9PHGqPqQqsdG6d6ipr54wyESMi
laVFak4mdy+GppTUp15ZL1yr3xrYvkLM2sEeo10nt+7D9ofU5QiNa/eO/f6VxluBKqXDU1KR
XrkD4YUCRQpBK1zzNepP+mHWTtoYfvLAeeVcYp3TTrUEMSQOlOtcQ+py9UbOtPt71Plh+Qj9
1SKEaTU+keJ88Ui0SBgn+Ip0GEHRSwDKPqAf2nAQR+4rUUVJ+3xy74rVgh6zqpVwTQeP7MBh
MrKpJFHPQds++DBYcMqIQDULmAev7cONaZ2WikZH7Qqkj9tcayL7G0A0EmqlKmmQGBEM1Kr1
A9I7HPBjOnCMcx26/iMTUGyvkzEBKAivjhkRmj0+oksaj7elMMCKNvQ0jfiK0+n4Yb6Uqe2F
yNKigc+J+uOeEv5Qcg6qBa0rnX+LLGmQGPWOlSx6kZ/hjUh+SyACs9D3U9aYWcOHINNRI7Hy
ONQaFFNSakA0/YMVhhVGsgHR2Q9a174FbonqR9or1ZvEjtiMhyf5QqDqGZp44D8FEBVmJFFI
z8ScGqkVLDW3pYGn7e+IfBOSBmaD8x6g4cOhZdLiWukkCiny8MMHWn1LQM1CCamorlXvhvIE
[base64-encoded JPEG attachment data omitted]
------------09910D0C003FD46CC
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------09910D0C003FD46CC--



From xen-devel-bounces@lists.xen.org Thu Feb 20 09:38:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQ5P-0005jN-8m; Thu, 20 Feb 2014 09:38:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGPOS-0003gT-CN
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 08:54:11 +0000
Received: from [85.158.139.211:23882] by server-5.bemta-5.messagelabs.com id
	B2/51-32749-FA2C5035; Thu, 20 Feb 2014 08:54:07 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392886444!5086804!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18710 invoked from network); 20 Feb 2014 08:54:04 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Feb 2014 08:54:04 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:49426 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGPNJ-0008W3-O2; Thu, 20 Feb 2014 09:52:59 +0100
Date: Thu, 20 Feb 2014 09:53:59 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1142136480.20140220095359@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140124174806.GA15571@phenom.dumpdata.com>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
	<20140124174806.GA15571@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------09910D0C003FD46CC"
X-Mailman-Approved-At: Thu, 20 Feb 2014 09:38:29 +0000
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------09910D0C003FD46CC
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit


Friday, January 24, 2014, 6:48:06 PM, you wrote:

> On Fri, Jan 24, 2014 at 02:36:02PM +0100, Sander Eikelenboom wrote:
>> 
>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>> 
>> >> > Wow. You just walked in a pile of bugs didn't you? And on Friday
>> >> > nonetheless.
>> >> 
>> >> As usual ;-)
>> 
>> > Ha!
>> > ..snip..
>> >> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>> >> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>> >> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>> >> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>> >> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>> >> 
>> >> > Yeah, that bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>> >> > I totally forgot about it !
>> >> 
>> >> Got a link to that patchset ?
>> 
>> > https://lkml.org/lkml/2013/12/13/315
>> 
>> >> I at least could give it a spin .. you never know when fortune is on your side :-)
>> 
>> > It is also at this git tree:
>> 
>> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>> > branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>> > want to merge it in your current Linus tree.
>> 
>> > Thank you!
>> 
>> 
>> Hi Konrad,
>> 
>> Just got time to test this some more: merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
>> seems to fix my problem. I'm now capable of using:
>> - xl pci-detach
>> - xl pci-assignable-remove
>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind
>> 
>> to remove a pci device from a running HVM guest and rebind it to a driver in dom0 without those nasty stacktraces :-)
>> So the first 4 seem to be an improvement.
>> 
>> That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to give trouble of its own.

> Could you email me your lspci output and also which devices you move/switch etc?

Hi Konrad,

I finally found some time to figure out what goes wrong with xl pci-detach and xl pci-assignable-remove, and I have been
able to narrow it down a bit:

The problem only occurs when you:
- pass through 2 (or more?) PCI devices to a guest ..
- and only remove 1 of those devices with "xl pci-detach" followed by an "xl pci-assignable-remove"
- when you first detach both devices with "xl pci-detach" before doing the "xl pci-assignable-remove", it works OK.

In my case I'm passing through 2 devices (02:00.0 and 00:19.0).
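
For reference, the exact command sequences look like this (just a sketch; "guest" stands for my domU's name):

```sh
# Failing sequence: detach only one of the two passed-through devices,
# then try to remove it from the assignable pool right away.
xl pci-detach guest 02:00.0
xl pci-assignable-remove 02:00.0   # fails: pcistub_put_pci_dev was not called yet

# Working sequence: detach *both* devices first, then remove them.
xl pci-detach guest 02:00.0
xl pci-detach guest 00:19.0        # only now is pcistub_put_pci_dev called for both
xl pci-assignable-remove 02:00.0
xl pci-assignable-remove 00:19.0
```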

I added some printk's, and what I found out is that:
- after doing the pci-detach of 02:00.0, it doesn't call pcistub_put_pci_dev for that device ...
- but when I subsequently pci-detach the second (and last) device 00:19.0 .. it does call it for both 02:00.0 and 00:19.0 ...
- so somehow that call for the first detached device gets deferred .. but since they are separate devices, not functions of the
  same device, I don't see any reason for it to wait until all other devices have been detached ...


I tried to capture the console output, but somehow that didn't work out, so I attached a screenshot of what happens when:
- doing a xl pci-list for the guest
- doing a xl pci-assignable-list

- doing the xl pci-detach for 02:00.0

- doing a xl pci-list for the guest
- doing a xl pci-assignable-list

- waiting some time ...

- doing the xl pci-detach for 00:19.0

- doing a xl pci-list for the guest
- doing a xl pci-assignable-list

There you can see this strange sequence of events :-)

But I haven't been able to spot the culprit.

attached: screenshot.jpg

--
Sander



> Thanks!
>> 
>> --
>> Sander
>> 

------------09910D0C003FD46CC
Content-Type: image/jpeg;
 name="screenshot.jpg"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="screenshot.jpg"

[base64-encoded screenshot.jpg data omitted]
9V29aUSncMfH6YsOm/8AVrGo9FdVcu2KQWh/9YsAo0KQ6j/6f91PM4sX2GeM2MiCi6a19PZR
gsM6QtxaxEitpNMhSnT6Ylo5uL2JZmC+hPtrlSuJaF+Lbei6gPWoFT064D9gScXtqKWAERH2
9HH1OHGTpxWy01NaVFQ3Sg6DCNoTxayIZGBCA1ByDfTLA0YcYs9IoNEhbMk5BaZCniMSQtxO
3WTTRqnNg2XTtgc+urov/VrWlB379xTsMWN/ZG/FrUormqCtKnM/XzwyLRLxq3EdUrqPQHMn
/lhW0443ZqlSlQD6j16jrjOHUA41b00kECvqbr+wYcMopeOQKW0gn+GmXTBi+wf/AFuNlJkr
prU0FD+GFDHGYQx01C9SPAeGBajbjkYU1UVcUBHT6jDBKdOLxU1aarTIAnOvTFSiHF4xT1HU
RWnYYzopf+uxgekkEdBiH2CePDUCpGYqQf8AA40zewrx1tdFFc61r0FMVMqQcdhZiqVquVc6
nGdbtA/HdblQNLAig7afri0Al42UBIzAOXj074tWmOwViXL1DLLz7nGsH3D/AEPSw6sCaBT/
AK4vqvvDjjhq7FugJUdenUYsOmXjsv5uhAYZf44kaTj0qICc2NDpHge+DW5Ttx9qqDXwpiX2
D/69Kf8AcfGvSnjXEBHjbk1VvwYeVcKtBHsM2YOVDkfI+OLWfsYceuM2LAmtAw6HEvsdthmj
bSxGeZODB9jf+uSGWgOlR2PXEdPLx+4IJFdI+0U/1xqLTLx660Ift1ZVPQeWDVoW4/chfTm3
TPti06L+g3KjMhiBn5Yzp0A2G7yC08jT/jPENGvHbtQ2ofbkf2VphQDsV6I9WRUUA8c8WjQL
s1zrNO3XLphOj/oN3pZxmRmB4jEtMNjudIYDMmhr1r4UxLULbNdhmFOhp9a+GIad9mu9OpUy
rTrnXwxLS/o15l6aEjVU9sCgv6RekkFfT1JHQYmgNtF4jA6MQ0v6TcFgqKWr3plhFoV2y71U
CeOI6TbXeAghAx64lpNtt2oD6SMtQFMyPGmAlHtd29SENRQn6YtJ22q7DaSlAa0JzxagDb7w
R1CHuCDkcsWgjt10QpCHMZ1yocK0L2N0AD7TeZoc8A0nsp1IpGen44lp3srnoEIAzJ7YlphZ
3LCgRj4GmLUFbacg1SgXI17YtBzY3FSAhOkVPli0mNncqBVCAcOo6wXNRpQgOKDAtL9PKPUy
EjsRhIViepOhh4Hp/jhBCBx6tJp54Cb2pTlpNOpPliBijhQCMj0xIlSQ1BGXQkYDKIRyBdSo
ela+WImEUpU0H/cMLJgjlRlShpkMRwbB09JBqRkMNVhkD0KgE9zgJyzA0Wp8z4YlptUhJLGt
O30wM/AGNTmv3Zn/AJYkQBNa1AHbCkqiSlAlaZ18sBqNj6hStfDCyKrkVAoPHzxENa0rUnsM
KSByDXotKZ5188FRi7Nn1NfuxkgbUwqBSmR8MaQ4XkQkAZkZkYgTPKBQ9v8AHAgiWYRmjZH9
uJYJZGKEE/6HEjvM50kH1d6fsxogM76gKkmlM8QGJmQgg+o5fhjJJZnB0k0A6V7Vw4iSXSaK
DWuZHXDgL3X6E0UGtBiQ1uHBrnUda98FhwjMQdbZE5g4MJmMrUINSeh74ASvKvqLEE5HuKYV
hSTSVJ6DpU+eJBN3KzqWNQB+7DgP77fwjx/DFgxf8OoN/siWqjSeodP2nwON80f0zHtz7ptF
tBH79vKs4NUlgfTQj64KxxyKw3Rdxc3MOpxbyaHaP/yL3BpjP1rdz4W+7Cw3KwjuNaTXK/ax
jCOdJrQkUrjM2M885XHt+4+7EsNnciwuASrWV9H7sUvegenprjWtuKwvv0G4yyXli1qAf/xi
y9YUk9dFDVMP2Fixv57mZopJGtb+31gx3UUftzpX/wDCdzg1ZC/oNhcAFoA2pvV1yHjlilCz
i2yJIjeQp/8ANhbQssYFStMq9cOjHEtx/VreNr63ie6jZtMwjCyFQcgxGLT9YC/3yLaXQQ2k
ABzmhZAYpBXpo6KfpiMkDtQsZXlv7CD9NIznUVBYLUV00/hxUTl0y37Xm0SSXVrEl0UakkC6
BJ29VMsZKvsri/XYGhvNuj3LaKs0E+nU8DDvVf8APDp6ys0dDOSgGeQB8MbrkNVUvXoe4HgM
DUgZGJlzoAOn0xQ0YFGFOnTPCzRnSPV26/j2xCCQ0Wn7PHFTpihypkfHELBqfSakE9x/pib0
zqAtTViemJk4C9aELkK/5DEYfTRq0oTTPtXAgsPaXoP/AKf88RwyEAEg1H8XXGnGykAChJzJ
P2nMUwY1C9ujClNNMh4YozeSCdA2YFSR44VhyMiUoQKUHiD2/DGWzqD3NAPD/LE1BKMgQadq
9jiagVyFTkQc8LHRChJ1fvxM6d1XocqdR2zxCk5oTQ5nqfDEfwdFIrqzI+04lghGKAk0p2OA
w5/YnQ07jC0jCs1PVUZ0/wCeFzvJipLeo6SBkRiGnpn92Z7D/jPA3C05ZGgGZ8zXFEc0K1OR
bNvGuIHFK9R9MLQdArlUUz8a1wKzTyAkgj9nfFFgNWmhp16sfLDrMFpX7ga6un/XBWvAKKvn
U5fcRlXEBivYZA08sTUKWPqSOnSmWWIgFDICpzH7K9sQqRQFOgj1Hr+OAQ2gZg9fy0/yxNYi
aOlRU16kk5kY3KxeRUoukkEDrl2xMwiiChz09csZahqPTKmXQnFGdHpUmjUJHXP/AExVqUKk
U0tTI4hOvSoVRsu/hiISMwVzY+HhhZwQVXGpfSfPxwNQjHStaHzOWWFoKqW/2kdMZwEYxpAp
Ummo+P0wikUUensc6YDREGlR0P5u+IH01IX7QK/txLAGlTWvi3+RwxAZKhqGtc/2dMKIBiQR
mafbitVmiZCDVsyP8cDJZminPPr2+mBUzqrCp/d54YKFVzzPoH2nGsZnyNwqEgGobqfHGHWw
LoQBpyJ6D6YdWBYlftGTGrfXxGIGYNlnU9PLFpOqsEp005jxxNSndCzUOffLEgBG0lSa/wAP
iMDPJyhqBWtMqeOJozFtHpNT0P8ArhGCAOlarmg6f9MSMEBNW6DoMQAFOpqChGY864tZ0kWr
VAGkjMjqPLEcpFXrqUilaU64B6ZQalqgUOf+dMRl0mX1UZQ/UgDxOJqEKrVRliSPomonOtBT
uPHEBqmkhv4ch5jzxHAhNJLafoPPxxENTpoRUEVy7VOI6EoCpPXLKmIBr6KlcjmG7YgRWtRl
Q5U7YjD6AuRGrPuM/wAMRs0xQM+plrkfIj6YBIERqCKVFTl9MSkEygGpAXLMeOIhodSlehy8
/PEtM4kOpDQFT27jAgkFjp6Dx6ZYkLSDRei9mxGGNNYFCaZhvHCkPQAGlW8M6ds/DCLRKrHK
oY/mB7eGMsfkz+ldRBUk0Ze4GFqGmRW0nVUg5GlKd8TXyQhVSCoHXIjz8MFYsOI6BiPtpVjg
akCAKE0zr2zxoafVTqaAin4YFKZqqlB6a5U+uIxH7IDgux6dvDFqw/8AMY6Sc16V7HCDkOci
c+xOEfgzoq1rmB1Hb6jAdIwvXJsupXpljNWGBAatTpUZ+eJYFWWpP3U6geeHDgmQMpKilBXP
EodwACjHKgowyGIyhJ1UAFF8PL6+OFaQjZwq9R3JxasMsRBYCla+oV6DAsCQiAg9DQknrTsR
gR6UUasg2ZI7YkjKlpDXME5+VO+IWHKL46gCMj388WtH9sH7hQAklRi0BMYc6mBJrkp8sTQB
GoFSCNXWnUEnthRpACcxVRlQ9MsTAJAigDPP/CmBWlQZKQTUUGXT6YdRNHRwa1FKV8AMTRgp
AOnNKipPY4ABlq1UP1B8ThiMtWYhstDUp5YLTD6WUCpBGdD2p4YtQXFCdH5j1JxI2jpUGlMW
qErNqqctJzHl9MCMc21EDM5nxHamFGiSkxrTLx8MQD7MYYlaaulaf4YtOHMaVqMnJoaYLWbC
9lQAGP2mrf8AAwKHolSScz2xYtNpqhLDJuxHhiMA6KygKBqNKAdRjcosM0J1AmhPgfHFowlT
S32+nt54tOnXSSdQJp95OXTwwVSjUM5pTt0/wIwNQESVBANY+7Hxrli1HMRWQDTU/mypXFop
m1k6SAATUg9sLO0bIAaMNQbJR1NPPA0ge3zWIAIadR3+uNShN7IDhXAeg9Ne/fFpwmtyz6kG
kNTLpmMGnBPANRY5n+EeOLRoAQNNUIByFcyf9MZxmXCaKISDUuZFWbthOhjjV21AAL1FOtMI
xKsUaLQHUAMvxxHTe0oKtShOWntTwOBn7GMQZtJXqc8+lBlXATCBApagZlyI7CuFqH06m6UX
8wIHTELTexGzn01K9u+Jn7EkcQlox9PhTr9MK308lsrAUA7nV/u8cWulMlrGE0vSiGhP+pxa
yRiVi1QSwINaZ55YTND+mRpV1LR0rQeFB4+YxM+jWAIxckKGHpJ7n/l2wFF7CtpOnM5gnM5e
eIJDaxVVyc6Z+fjQYmoB4FoaioGSMT1NcWr7HNrGGNQI6/srixfYhDCQxZe1TTLMYhKRtowx
ooYL1qKk4mpTfpow+qhGkEn6+QxILRIWCt1P3LiZvQf0ie5X2hnT09qYNaGlsmpvSKA1NRiG
0UdlEFYsMyMvLxxKoHtIyxIUEdgade2GRkf6KMKQyhqCmQqT3wtSktmgUHSpByVRQ0P1wadC
LCMggAUQUGXjg0aiNhGQUIGdMuxoa9MOrRy7ah11XW3QUphOhl220LKCh0gdaZnBqtRNtkSq
QqhmfLzp36YtQk2yP26FQG/M2VPoMaZynG0W4NYwAtPWaDPPA1Bvt0bJpVP+xcgB+OCqoW2m
BZNQUUz1GudfDBrOmXbIUprVSCMyOlcWqWjbaraoCrReoB7jF8tUx2i1RwAKFwSV60AwjcHH
tVqauqAvSlCOoJ7YlK5xstrUVUMSamn7sDPXR/6Va1VxHmcilOmLWZ3vwZthhZzRQSehIpn9
cWt7TjZrRcihIpTtTzwkL7JAz9BqUZP5eGJm6YbNbR+r2/cP8RNKfhgah/6HbGIqAATQ6u9D
0ph9IP6NaJGapUA0Ddv24BbgI9khYlSNJ7L/ANcOGD/oarWgqpoBXIYkQ2CPUWI9ByA8KfTB
qRT8fgVCqktI9dPh+/EL1iNtiiYIjDSACWp91fDGvgc20/8A656wmYSgq3c16DEfRtsECSaU
BZwPUBnSnXLFGqjfZ4ipXRmT26/uws3o0ewKsuhjqUDNgK5+IOBSkuxQ6qayVNdVa5+FMGmG
bj51fyiz5dch+H0xajvx6rJVq5UYAaSB/ni1ac7JGuoNqI/LpArXtU4jOgtsBDekmgIy6jzz
xE0ewtoL00iukE+OIac7EulmUEkHJQcSc1xtRhQqp1lq/SvU4YlZJbui1ala9BhWo6f7sSWm
0STLeRyKXBVwEZOxxvkdT9vS03O4ubSKG4f3FUVBP3UPji+rE8S2F3d2M/vWU5glFPXGcmA/
KwORGJrItk5VuBjAGlAw0yKEyIPh4Y5/Uo7fku5QViYrLCM1SRa5HDjNpR79ua3K3EUzRSR5
hkqukeFP4cWHXRcciu5tSyUDSeosvpHn0wYtPFyreLVAqzggCiawKr9DhwaKHlW5pIZUkMci
kMVjFAx7HTiw6KblN5ckyehJf4o10mvmMP1DmvtxuLtV9w1cdCpzoMa5mKltW6bjtk4mt5zC
chIqHJx4Nis1c1Yz8k3GSNlOhQ4IkQKFVqnIjwxj6Lqubb913OwRzZyvDFIaEKSFIIoag+nF
eXPrflwmUe6zEAsTWoyrXwxv6nRoxB8KZ/8ATE0NQWNWy/MB4HxwDak9WoL9zDP6YDhaA9DU
hh2GJfB3/lSAvX1dqZU/DEPlI6gLVhpVj6QT2HfEs0w+6tSSOn4YUWpiakHSc8v8BiwjrQDI
6e/emJGz+5WJ8u9cCwRLaiSdLHoB18/+eGDrrAD1tQdO4A/dhxndFGrEHR6siajqPLFWTLQj
I6yM6A5ZeBwNw7BSKA+oitOmAyFqJT00JXv54sEOCQCTmaVy61xE1CY9VTXsMK+xgzAlSudM
UaGaVFMh08BiczFOzGmeQGf4YjJp6dCpqDkoPbEoL1gUJpiOBYEMdVSB/niZPmPpQVpha0lD
L6vHt5YWKdqCh0npVadcWINKGvQ06dcsANRqVHSoPmcSlOuoajpJOGxSnzWSpFU/xOMukASx
YerM1ApiW/satT7iQTlTEMI5VI7DLyGKBGUeNSNIUdVUnCokC6hQVoO4zwVuQJVhnWq91xRW
GKlyCSc8gR0y8cShFiG9IGQzwK04VasaeoDPzxMwqSmhqA3bvlhxqUmrTNcz1HemKGwwYaa9
ADQDy8sarmSkD1VGn9oxk4ZnBIKrl2AHbFIxTKcyAOnfxw1QhoCitTnkPPz8sTUEoY9U9Pgc
iB2wGUvSTSlAOh7jEAoqasyfGo/54STFVqvh9pxLQ0Ar49c8Rw51Fgc6+XfAyTL6KflP+GIm
+wBWJIJpXrgOCIoSCcuw7Ylh2UEFhQ55jt+7Ekcieuq5Hw8MalWEpdSKZimJkSls6nPIkHwx
lqGqO59I7DEyGTIVGYPbrljUZsNEmp+pI6t2w2rnn0bKyiunVX8vgfEYw6mZgXXt2qB0xCwn
FahhXPtiGozQU6mp6YWUgFaBvuOXl9MFagaAA55j/DviVIA5/mr1PhTtiRUBIHcjriaDmg61
PT6k4UeSNlTSxp5YgQWi1PTw8vDAybICpNQen4YlAgkZCufTEdJiCqUqO5pliRKFzr9hHfE1
hitOnTPpiIJUAeudCaAHtiZsI0rQj09iMSwnrQA5jw7g4lhidKUYFu344kQFFGdRWpr4HEkY
9NQmVD0PSvXERMqnMmte46Z9ScSBRQaUNB49cSFG0hB1DVTMf8sFahh1Vq5nOtK4AjJqadKV
0j6YRYJqkVqOlD9cRA/an4GmZGJHBo1ACezEZU/biVC2kNVvUBSo7HFgDQ6tNKGhNPr2xEJR
wQKZ5YhTM1GqfxNO+JnfTGiOWbvSh864ikkFfuyoc8TWAD1rpzSvpOIaWk1yzOf4YKslOw0q
c650I/yxNIjoAOod6AD/ACwslUVI6qB9cRSFV0lSPrXOhwVaBlowFfuHWvTEAMG9zW2YPUYY
zTqhoanOg0088xhOU4Bb0g59KnrgJCiLXIBahvHLAi1KtABm33efnihgdIA0pn5YSFmelPDI
18cQsKupV1UzyoP3HAsL1VZSoYVrTwxIhqVgo6E/XLFpFIir09J7554kFUX765UINP4vHPAg
pkQHPc/t88SMQ3QU65eH0xEvzmopkMj0OBBcMPUuQ/hA6eeJE+qhJGpiOnn2woAQj1OT4Gp7
+A8sOg+hvSqk6swa0oR1rjNqoWLe4WyK9PLLEoZadTUk5eGFGcKzACnjmMRJ0Jf0mnY+eAI1
OkkgEtU186nCoUhVWqDTUasK+PjgJ6IaUopyyH7MSCyRpSuYB8fPLEsM8gDCoLKxppPXEEel
VlNAenTsPPEtSSaNINOtcwOuFI9ddKkGmYowzH1xHCQIpIB6d/LEilCsSqgBejEZ5DzwA0hU
KR1alR5Yh0jYOcw3q/ipUA/jhCTQDSh1KRmD5d8FawzAagSRUdqUxIzUMlK5Ur5D8cIOxOk6
jQDOoxCmFPbCsppXIHpTxGBJFoQvQAfb+Hngbhm9sLmCKnTiJnIcEBtS9OudfLCDBGA9JqSK
HxyxMyHYBT3zyzyOedMRIqpOs0BUZGmdcShozqplQnNSfHETEsSdR0gdT40xazqTSCtKkUGI
4Sqx70A6DwP44lgAKsTqqTko8x4YsODVGT7hpTso654hmAlrpZUGZOVPDCzfToXz1Ag5E18v
PGazJTnNQV+0Gn44mocgiqgVrnTv+/C0R0kAAZk0XxA/HEDElXIXqfSAeuJmh9QZKk5ZHvQY
QlVaLUHM5g1/fgdNIanap+wZA1oa4iYUrTSSx7YUYAKzMTQqP2nyriBnZitG0liOgHSn+GEU
vUYmBOYHpPhTywM6GgC6jm1QfwxNH9ZFQQdR6+Q64jhSR6aM1aGgAGdDiZ+pLpVTU0BOZ6kn
xxKGk0tShoK+nrXLrib0QXUag1buDlkf9MFWhdVMg0itDRiR2HhTFGak/NRc/wAcz9MRqKb0
ijdTmoH78QOrrRSfSCPt7jEsMyhXqwKkZ0pUkeeEmBZiGBpq6DP/ADwaBMWUfaBTpTLPAgws
HUlgc/uoO+FJlCAlF+4D1E+BxFH7ILMweijPz88ROzCmWQIpQYEYgKfSSGOVR0xEClJdRCkK
PSR0rTv9MIKlBRkGlANOdP20woYZjHka1yXLLLEkchDEeqmYBBHQ+WAU+h6FcilaAnPPERih
AfSF/afxxJF6ipz9QNK4RgkaMLoY5nz7YCir7YZaatYIBA79hirNmiX3USoXzIrnT/XBhwpa
UStQFyIr0J7YUOL0g0FS9TU/uxNSGMUekFzmT6gOlfPCLyj9ypYBSVyAHgcSSBAiByaMa0Bz
/HEiopAZW9PWhzOIEunVVhSgGmv7qYjpwyMpYqajse1cWLUc0jUSpoB1jHl4YgTKZAH6inh+
0YNF9GVUotBpU9O5y8cTUhmAC9Qpy6DsMTUL7HHtnrmx8O2FaF4yHAC+gig/zOWGMWenSKNg
ppQUp16geOCmBcIaKKiP7ScDR6xKwVm0kjIjqKYsZoRWmt3A6ksoqaf5YkNfbdtSCtcqZ1qe
9MSC6q+QFCOh8D9MK051MNJyHQkf44MRSRZhRTS3Spz0jM1xFBNDGEYMaA5ZeHlhlOM5ua0X
0ALHSjEmpr/zxtK7QfFelOn78DK+4YBJvdpCB6WkAYHofHG+Z431fHp2/WlrbmJ400SSVLCv
py8B4YI4IdqsBeSUKe4D+TMdO9emWNaPr6vLriX8hbiKV42NCystcvHLvjP2bwEmy2cFt714
HeKQfyrhB6fTnQgZjGbWg8ei2xdxEUo9+FkOmVetD4g/XFrF50uSbXs1nIr2UzVfrCa1BOQO
fQYT9PFHUMRUBiMv+eGDHTthdL2AoBIQ4BU9888sbk0Y1G+7TZSXMDxKsJmOiXLtTI4xozHN
bbBZG7W0uGNu832zrVhQdKL/AI4NE69SDiM66qsswQ19NQxX/aP9cH2dKV/sNtBafqEYSoVr
7vev8JHlhnQ+tSbHZyz7bIYnMcmohonUPGVA6+Ixrco5lLa+PxX1uZ0l0lH0mMjJgOwPbBem
sxz3WygQNcwOHUZlT0FDQ0OKVfI7Gz2y+2t2F0bXdIydUMiVjYdRpevcfvxXyjFQrMoVWzfq
WGQqMSGGYSVUgKc6/wCGNya499XWv28xT7LqaBTMisVfLUKeJxz78bmCsrSzvNjkeVESgcae
pZfHV4k4lFZ/6/q9t0lLRaQCWGa1yAy8Ma1dUFvsccl41nJIVqCVkHkK54rTPXNuG3mxvGtm
apQLUjIeoVGM7piXZYo23FY3AKserdM++LCtd6sJYHJWOF4wKakYFgPEYNazXLb7BcTQfq4o
zIKEsU6jDKzkdFnspt7tJY5CyMhViRRgcX2ZvIJdntLi/eOErESurIZVHj4YtHEsvqmu7ea1
uGhmWjDLVSvnixu0EWlZRUah0I70xDFvbfoyzmWMyQspAIyYE98WEc20Rpatc28nuRMNRB7j
p1xaJz6gvtuVbeO4i6MQGT92KNKyqaigBNOowyM1Y8fEJv41njEiNkVIyoR0wdNSB3m0tba/
0W/8tCFYRdAoPh/lijOLGWKzfZDNJEqzqnpZRn1zrTFPlm1TWA29pQt4r+w4IDoaEH8tfGmH
Fn6NfQQ28+iOYTx0JDgUJXzHjjTci2nsrKXZUnKBZo1BVh4eeMyiwMNrFLZxyBGhmoM3+x/p
htHLn3SCBEjJVopj9yU9LAjIqcEhsBc2m0y2KXNjeUuFos1nIDUZZsr9CBilESy7SYbSO7DE
owBdW8/4fLD92MtqK/29Y1jnjf0sP/GagjvXGNdZE9hNbiNorm3SZWGllNQwB7oR0OJrIqpR
Gksntj+UpPt6utPPzxpmyOvarVLmaVJfQAlVYZ9TlgtWEDarK1vNHryIDA0IPiP9MbZkXM+2
Rvtcc8IVnQAM5oop4EDvgbkxRStJHIQy6GGemn+OMmri/S2G0wXHsJHLoUPIopUnxGGM3xni
KFiT6W+0dCcVFS25UZuKqR17ivhiMnjtt47RmZbkFUYfy5F6qfE41gg73bI7W1W6EgaAmjHP
vlqHiMGm3HLebf8Apiko/wDG5rTy8RiZWO6WdpJtVvcIumbIFgKBgepYeOMx08c9jts6XNo8
8R/TSuh1joVJzWvSvlitZnJuR7db2u4AWwZIpFLaT4knphlc5yqV09GqCQc86nCYsJNtMcST
sS0EiqdXhqGQI8cGmxJDs7yP7buEDD0Oen44Wcw1vss100sEJU3MILe2DUMAaelumeK+GK6S
JkYhx6xk3iP+eKNFCqNNGGOWoVYdQK9c8FC75LtVraSwzwNQSg1SuRI8P88UrH5QbZYWFzbT
NLJ7cgB9lj0BArQ/XBp6mfDjt7F7sSqsgDxgMopkQcaHqXbdoe+WRUcK8fStSOtCMVa570Ww
2sE129tcxkqyOrKPuU9mBwdGeuO8t1hme31MwiJUMepoe+Ba5/UTTrUZHwGEw6gBgfAUrXx+
uAhcfmH4gYmLDE1qO3anljS+S0HMgnrU+WDTDu1c0J9I9VcB05JLU7tniV6AHYKQwK9saZmi
NNIoudak9/wxmmwmVtQK9K4llIqApHUk51ocsCw+k5KpKgihHhhQAf5mRpQZEdMRonANCGpl
Wv1xEzK2oMxrTCKE6SSKEUGedRQ4mTINNa5qTlgMIsqdF9XQ+AOFqhSpqKEkGhz7YGdOTRSO
w7d8sTWBrmAprXOn0xIyucxQ0rl3I+uEYdqBczmfwOAhIZvy0/hxKmTSaqwyHXyOJkmNQVI9
J8fHEtCzekEDr+2uBo9BoFBTxPniSMMaMTkF8e9O2FCDpGuVSMyo6dc8BBrUxrU5scwPM4sJ
M6mufjQ+GEGVVZzQg5DUP9MCMrHSA/Toe+IUq1cVr4CmIBdR+K1/D8cRsJNAckEVNKgdTlgM
MQCQ7Gleg+mJYYAE0BzBqT4YR8GAB+/PuD1/diBmWgLA1B7Dr5YSERqEy7dB9PDEcHpUkZkV
6jGTCKHVUVDJka9MWpDIzA6QoNTXXXpQdcISM1CDpoTnTwGFULaQcyc8qds864KD6VDr6agK
Kk/8dsBJl9Qqa1/EU+uI4TKKekDSMjlXp0wxnoAoy1rQjviUh3QEgk1XV6a5DyxNYR0qSpHq
P7PwxYjuFBr4AZDPM4CQQLVDRs6mozIGJIlCmpDZflP44gUa11E/dXPOn454FJhg3fIM1QT+
OICZTRjTtQf9MJAjEZacu4OAkKE62NCTUgZ/jiRmpGD0JH250pXBotMiKY/DrQV8cShi2RIP
pXp3rTywoFOw+1xke/jTEiGhlGoZqPSetDiQ6gEamrXIfjgICASQa+mv4UxAB0hiDmD1X6eO
KILkmoC/X6YUcV0FyBTOudKA4CDV6ioNC1MsQwygAkKKP3riSNwFBK/lPqA6n6YgKRlkipTq
M/w6DDG7Te4jAUUoBSvfPBgpnOj1L1/xxMAkV2iOZ7adPXrhag5FOtdOatU5+XfEUavXIj09
h1xSIBC6ep1HNR4/sxYMOYiznVQF+lOuXhgWDIrQigH5R3r0wiQJLCugatOZpl/wMSoB2Y+o
HMg9qZjGREjxqfUCKuBQ+B7YWiIGoCoK9/r4HCsMQW9LgVGQH7++Bn8iJOkAZ9ifDLpga0Jz
OlvsP4/XEdPSOoK5ox6d8MQtDioHiPUfDEjAqDqYFgo6N4nzxM6YAh65aMh518ziMOaMpBQ0
ORb8pp4jEtOQpH8sVC+PniAh0K6fTTPxwFER6iGrQZGuWFUSMKAkUAJGIilMftA+NAtc+uJU
xLMxFAAKUA8aYWJCZtJ0j7qHIZ1Pbr4YMG0grEsJD9CBlUfTE0IOahTmxp17U88ShjpUEhWq
W/w/1xExEfulgKMFqeta4kB9QIkChWc5Dr+0jCzZo/QulGFWPRqVHXxxNfBlQCQn8prViemJ
QSq/uUJpXucvxxEzoKUA1GtQfLx/DFqICNjrIGsClTWpJ75YGSZxSp86U7YtSLWOgNaEEVxo
RKGZI9SgEg5g+B88DWkShDaQ1CQfEeYxGn0udSgUoPuy7n/HEyEBiaLnQYlDaqMAhqB+H78D
Wm1Mpo4FGYmudP8AlgWJEijVvcUUIUrl4dc8UQENasGHoGTEVIr2xqowce50AatK9un3YicA
M4IqPHxy8cFFA9C5YGhGRJr1GKsikIZSakkgBge474FTZaTRTVjQ07LhAq55E0ApmPy+eFrQ
n211Z1HTVnSmBHAOf41Hl5eGJGb1ClKDy7DEdMxIoakpTTUeWJaNQGANaAjMHwxICiUDUxoK
kAV6DEieMAaiQF7V6kYgZCrBB+QVH18AMSMwJAFCNJ055UBxIjpNSKVPSgp9a4jQaQyElaFv
25YNZSL7YQUOpx50ywnAHVTUc2B7gj/imI4fQNJVupz1A54GaBGkWUBBl38MvPzwmU5aUy0r
6WzNeoHli07aJgoOZr4jrn26YkcKNR6kEda98WoLBU9INVrnXv8ATEArpFWf1FjqAOWkntlh
Rx7goa+ls3r1J88IG66YyqtXuSBX/HAUcZIIrkpyoSe+BRIS6kKsgp28B5Yl9g6dfXovc9BX
z88TWjZVUk6aauhxLQmmliGp5HrhAFBUVb7hUgDCUjNXSa0qPCn4/TAEeQJqevpPeufT8cSS
JQZAaFP5k6VwaSJzIoKnImuJAdD6tL5eAyNPxws00WkIEQHMEfhhJn0k0atQAOuX1xloFw1E
BGSkZueppgDM7uGqNYGljVadaDG5Qq9Tef7e2HUt+N3bWm6QXIDOkbqzqnVgO2OnNw/0+Htx
3Tgu72kDyz3Vs6A1j0ilT2LUqfpjHrnMBYbrsu0StBJcPuO3zEFpEj0yIemXj9MFlaxbtyTa
I7R4Y5mmVkpGWUgAnor1/wAsGFBs+5bJaRa0uZrSfq0Mn82Fj5DtjPTUirvdxtV3Jbi2iT0s
S6xZK+vP0+FcakOG3262u+RJbWV0uQAs1q46L/tcek0OGMXxSRlV6ZZVJHc9MaYWWznbv1CG
99wqGDCSEiqEZ5g9QcO3E1O87tsFzaQzbfeAXMDhikyMAT00+WOc1nf2Ue6cfuxbXbXEtrfW
gNFlUSIWJq2kj8vgMN1rnlbbJu8F5fgQyQGdAVMM7+2k4PSj5aWxjG3Py/bIxG05il26Yikk
bMrwuR3DL0OKXFri43unHobN4rqeaCUAByUquqlDShHp8sdKMpW267fYGW0aX37dmZ0mRSpB
YZAg9PHBYbHBa7vELGW3l+/1BJB0NemoYsZzElvvm1Ptn9PvLFHuYl/kXiVQihr6qZNhwVSS
OhlrXM1NfLCztOoVmNTl0r4Y1Kvqv9n3W1t7F7SWtCHKuM6E54x1G+eYksd2tYrJrShqyPpo
KjVTqfDPAbzBWu9oluqOpEiigPY+NfDCzZHbDuWxLcLfa2ExGh7duv1Vhl+3Gbavrip3u+hu
78NbklCoBLKA1KYYnPZXgtrpZjGXUDSUHU1+uWNJPu12JrhZIGpEQBQmh/dinJd2y75axWxs
rkywo6lUuIqEof4iDgswalfcba1vI5oLk3cNKMCNOVM6jPAbUjbjsTXP6+ynlgu2XQ8Mygxt
TKgYYqIpd1uFurn9Sa5gdT088MrNoNqnt4Zw9xEJImBEiGtc+64LW57F1cxcYuFb2L6S3qAw
WWNiQQMwSP8ADFKPq5rTcYYY5LO4b3rciglhGeZ/halRiaQX91/J/TpIWibPLoc8sUjNVrB6
mn416/hjcZqx2OTbYbgSXTukgNY2rVW8RTtjPZ5uu3kcm3XFxHNaTiRVUGWNlIIp1FfDBye/
HTDfcen2eS0d5rdnTQCVBoa1zIOYw/ln66p9u3GLbrgs0KXcPR45BUMK9q5rh6awG4T2U157
1rCbeBqEwVrpPehPbDyrV9FebHNsptWdopCCqMRqAPX1eOMn8OS0v7B7E2N8zBR/4bhMwCpq
MvPFjErl3K8hdBDHP76pnGehH4HDiveJbvdLK9sYxNZpFeoVX9RF6A6gUPuKO/ngxqT9IrTc
rcWBsrxnMS5o/XTnXMf6Yvrpc+4XyzILcHVEpDLN3BHgcGCpZr+0msBG9UukFFcZFgO9cWFW
dQKeod6+OESO7bdw/TT6inoKkNTuOvXDi13yjaPXNHJ7izffFIpVh/2t/ngXw6o77ZrjbBaS
SSW8seUbfcHUdOgybCdU17BBHI3sXKzAdBnqX6k9cTOumfcI59rFs9S0emjDPUe5xK+qto+g
BqAAa98MNdm0S2guPbu0LQZ1C/eM+or4YrGXdcW20lGaC9Ysp/lqwoD9cGsW2Oa3vUls/wBF
eEgI1Y5Oq1r0K9sVM6R3dyrmOMyBlXqVPh/xlixvVrc3O0XGzNbJcgTqFKq4IpQ9/Ptgkqt1
V2G9XtoVRSJYw4ZI5PUooewxfVXp0cj3SPcZYpEABppYLlmMz+GCRWKZMm1rmBmB3yx01iT8
r60vrO82z9FeStDMjBkYCqsB2NOhxz+HWemfcYIp0iMgkjXJpFGfkadsajPSPbdwhs90kc1M
LV9S0yzqDivq/GKu+kEl3NIcw7atQ71PWnbApfEYQLRjkv5vEUzxLFzu25RX9hbujCOaFSss
RFVcmlCh8hilWOHb7pYy0U2pYXyLoKkVyrQ4rBQoHgujHFKHJNBKPtIOJfTWi22CDbnka6je
AXS0XUCUD+T9M/DFetX0k+FbsbWMd00ssjQ6SwYN0IJ7eeCmTHFuwjO4SvHIs0MhLRutQc/y
sD0OFcxXVBNK+jsfp2xNYVTQEjM/mIxA5BZaN0GYA74lS9I6ChPbwxMfBwDWh+lfHE3CpVzR
RkOv+uIUyjUc8iOh64QZ6EenOnXAuioVzPT7h9cQGGJFe5BxHUdKAHLLw8/HEt9MTnpoRWpG
JU5YABadsj0xHTaaKPAd/PEvk1CGoD6ievapxCnoVJqAQP8AHEkbV6U+h7YQRK0r2GZ+vngb
0jEASWOSjMfXEqdcnSpOk9O5xAzBdNTk3SuBrAVooJ7jt5YWL4JEUjpUgfswVqGY+oVyp9v1
xLUZQ1GoahU0Hf64WKcL0Y+nKun/AI74jhi5FQQD/CvTI/TAYbV6T5duw+mElHpADHMGtTiQ
WUdD0qDiQXiYimQpmG8MRMdKgqc18cRC1Mi4PgSMssQpmIOWYFMq+GJzokKkjsSe/l4YmoEa
a1A6dTQ9fHBTCIDNWtBqH7aYDCajAgjMdD4Z50xQhcMPUB50HfCzQMWWppUdgOpwiHFAoLDL
sBiaOAuqrHTWoHniRmcAA/m6jLPGUcs5CkDI9a4jz6ErHUDoK0bLDETx+HfIVzyOJHdYwCSD
QACnkoxDDIy6KjpT9uA4QKjsfVnQUpX/AExIJoAv5T0FOmJUIoJGVwCRTp4fXEIKRfXn06Ee
PhiaC6CopkIx9vj9MQOrM1dQo9f3U74SbWdJDUNetK1p+OJaH0ClCM+4/wBMACQ49JApnprn
+36YEkCID1FSKVOf4YjiNqkGpzqCK+B88QMdYaldWqtKZ088SPGMvtFDl54EWha1yyzr4Dwx
IAGlgCM3yp2rhiAgfQBkG1Gg8MSCImBbUfU3h4npi1QRi0A+rpl+I7YkYqxepWlKHPpU+eBG
eoc0FT+ZvLEjFSAKFQudfx8cWqmbQ7UboF86VGFFpDeqtK9K+WClHpNQehUGvmP+WKAKBQFr
2JyJoDhJFGAEiiqjw7HAsIBdOkkigqPA1zxBHpRk0UNa5j/A4BpvbbVp6LWp/DEJPUxAFQT6
VpTw8sabRFyBUg5HPuMziVN6PAlTlllXzxIyx6dWdKAkDrniWGYF8wSNPVfEnEDDShVnA7np
mKeWBGBcGqAV6qCfHCDhSwPT1A5j+IYEKpW3YMPUcj4YiEqvoaoq2ZHWuAk7OrVJGkdB3xCm
1xnNWNXpUH/PFQJqaAoypWpHfPoMRFIBUhh2y7YsR/cBVVNemWVaDCQNGK+2HBrnqzrniZwL
NUlBQkCg8/wOJkTFtOkZ/hQimJqWniYtHnkhPhQ5eOJEDUmuQJoaZZYkjm0uC1CpH2suVcDN
EqgMA1dORP8AzxNYOQKGVK6gTWhOQr3GEhCkVAIBZsz1riFE4JTURQ9K98KMGz01rTMDEgqH
1amIOdAT4HrTBqkOS0dBqyJoB1AOJCCEhg+RNAW8fD8MJMgcAjr4+H4UxacNJUEKK1AzDYma
SD0lMkp0pma+OJHADLVqkHufLriSSkdCPzdMumeBaCjU8OtKZiv44NIWKhqfdTNsODcMgK6j
kDWrHuB2/HCodAVLGorX0jwPTE0c1DFCxqKZDpgWAYMFqp9VfUfEYtGC0CgNSoAqy1pXPEpC
DgFdNTpFQQB/ngUpwZMw1dNNWo9u/bCQgkLpyZa+HWmfTEtOyaogfDJx9c8SMVOZUZEUBp1A
xIv5hT0ih6EVzr9cIDkjVAyORYkd8C8O0hGmhApl+H4YgZVPuijZnoCP30woXthpACKMvSmC
ktSqldBarUI6V8cKRye4xKgVVDmx798QPJQsa+qMdVJp6vwxLTIoC1GZzyrln4YlDhloK005
Zf6DAtNIS5BIoT4Z0PSlMStIMC3rzPRT2A8MRSCgb0f+TPrlTFi0LkgrU1DEhgPLDACoB0jN
h4n8cRMwC+gA1egFB0/7cA0tSBaMCc/SVyNf9MWLTO1aVBPiepOJoQBqMhmcx+amAWFVkNDk
R54hglMUgLEmnQgYTKi/ku38sCta1GWIpFoqmp9RzOJGd0owAqq5jFggiI2p0I7sMq1+vXCg
EKSwUHWfDyxClG60BbJujA5U/ZgIUzkK09OemuIU0j+plCZhQAcNYkEyPo0o2S09J8xWte+B
1hVNR+UCgIPjiWDJ0yUqQzDqelcMFqBgzNn0+00yIHicLNoggVFVWpkKEeHQZ9sDWEoAOalo
yen+JwEtbK1WXU3QLXp5AYUUQZXZmNP4OmXb8cRMBqbWwy6E9Sfw8sWg8TFZiT9r0VSMwPpX
FqgJkVHWhp2NM8uueJaCZnoVJJJICAYkze7uM4zXVWrU6YgrKj+LtTEV7xWzWbebRDX2Xk0T
ODSinpke9csab7ejbjs9vZLG0UpaNySFpQrp/KRjTyWD263FxOqe6iM5pR8oz/tqPtxa7cfC
93njH6e6hhWGW0klpSOVgYqkdUcVy8ji2L2Ch4DyG5Qm3SOSVRT2GcK8lM/Rq9LftxnTO1c3
HN6VHk/TtG0RpLC4o6kDPLv+GF066Pa7BulykcscUYtJTQevOtaGo7YNYoU2K/F9+kkQwys1
EMhATyNfDzw2udjnutuvLGQJNGY3LVoaUp4j64dUhlhmlnWKMCSSTNUJ8/8AHG5VY0tr8f8A
I7uE3FgkN2F/+wjkAlHlobTnjF6MjnteHcguWlYwNF7DBJtZ0+2a/nHWmG2Nam3fjHLLIQi8
jeW0mU+3cRt7sLAHojV6+WM+E9twDkV5aC4sFjvq5exFKnvAgVIMZOrLvh+0Zuxz7dxndr65
MFP0s6PoeO4pGAR1VtXQ4LVddG5cM5DttzFHd2hRLhtMMqsrxknsWXofrglVdI4HyIxMsMKy
yJUtbK1JdPc6Wp+44ZRkVZ2bdRD7nsn20YrMctSleoK/cMOs/USbNetEsns+5HINQaM6svGg
xWtxcxcC3+axa6sIo79FFWWB1dwKVoUNGr5Yz9jWeMVxDMUkRkZahkcUZSOoINDjUus31PaQ
Ge5CNIsak/exqKeX1wqxqbbi+wT0i/Wz7ffldSLcRB4HyqKOh6NjOpnNx2+eyuGiuFpJGuRB
yP0OCHEEJR5PXlXof+mExopOJD+kDcre5SQEapIia5HppOD7M9QOzbJtt1bme5eUooPutBRm
ShpUqcz+GL7LEO78ftrWJbnbr9b61f8A2GKSPwDqe+NfYKbS1GAavc9qEf54sZxb2GzW00Md
xdNILaZfuiI1qPEBsj9DgawG8bM23kSwSrd2DLWKZARpHg1c1bywYdVQlLSIrEamzQE5Y3Ix
z9tW238a3LdVUWzRhj9qyto1Z/lr1wV10VzxXkFvuP8AT7m1ZJyC0QqCkqjOsbj0nBKOqOTh
fIf0puoLcXMUQLXEcTVliH+6M+o41OnPNR7Rxncdw0+yYxK41LFKwVmXxUHwwWnnmwO4cd3r
bty/R3ls0DHNXqGUp/ECtRhlg/p7XXLw7fktBewxJe2+nVIbeRXZR4lRmMEsal/ALLi+53cS
Tx+3HBLVVldgF1A/af4fxwWnqoL7je8WV29ndWzxzBA61ppZT3VhVT44p0HNZbbdXEkkSITJ
HUMjZVp2Fcaqld1hxfc9wcxxmOKTr7UjUZh0OntlgtV5ce6bNue13P6e9tmglWuioBBUdTqF
RjWs3j9OINQqHNakU8cGKbF/tex7Jc2YuLu6lTvJ7P8AM05kaSgGv8cUtjeItw2BYJ4/0V0t
/YyLqW4RSrf9pU9xhvp3PkpuOwXVu8m2XHuXECVlspcn/wC5GGR/7cY3BPQ7JsAv5mSaUwsK
hQR6gwHRlPTLxxK3HFdbfLa3rWr1Z0NCVGVDhlcvttWT8akG2m6SVDKgLPG1eg/hPfLwxSul
nhrXaIPZSS9laGGZdSTIusp4EqSKjxw3qa589VC2wzQ3hilljkUeoSRt6JY6/kY9GI7HFaeL
b8rB+N7deRyNtV8BeRIHexuEKEjoQrj01/ccUuNbXHs2wNuNxJAZRCqgjUfyOv8AEe1cVuKa
549ll/qEtnIdbwk19o+p0BALJ554NS0bjO23UL/03ci11D1srlDG7Z9FelC3+ODV9GdlV45H
RvvWoeuRAGRGG1iT1Ja7fdXmtbSIzSIobSuZYeS98E6dcjsu+Obxb2n6x7YvZggvNEdYTzYC
pGOk6jFFtnFt33JPdskSVVqyx6wslP4gp6rjnaZPFdPa3NpM1tdQPBdx5SQyAhwD5H/HGpF7
XfumzTWu32tw6OjzUaNh6oZVJz0uMtQ7jFq+prfjm9XBhWGCvvrrhlLBVcV/LXqw/hGeC9Rz
93xzzbbd2917FxBJDcqdMsUgoVxR1/DovuPbxaW/6ia3P6fr7yetQp6MSv7MOCdSg2zjm8bg
oNrD7p+7SDRwB5dcFrM59QT7fdQXbWlzE8EynSRKpWhJy69sTpjo3DYN8sIfcu7RhFQaZEIl
T9qVp+OCUeumx4jvV7bLcWkYlVxVQrrQ1GansGHgcWpTzwz20jQzxNDIraXjcENXp07Y1Dom
urlwIXmYxD0+2zHStO9OlcX1Zu13Qcd3q7CiO11TOgkjQnORfGP+LzwbIfmOBbK6/VNaSRtb
zqwSRZhoKN00vXpnitE1Pumw7ztUWu8s3SA0/mgalq3QEjIV88Zlb0+z7ZJemcFHkRYi0vtU
LIP/AMJp6nT3phXXiCOylkuVgjKtU6ffb0rn0JPQVwaKuLXiV6q3MO4RSWtwtDbuwBQ5ePRg
fI1GEWOKy49vN4rG1gLqlFahHXwoc6nAp0W3bJd3JuIQpS4gAFGBDKQfUHBzywymxFfbJudg
gkng0wN9lwp1Rmv+4ZYaz1455LO7hZEmTQZAHjbqHUioIIwGUv0Vw84tyNEhYCrZAHzOIOuX
j26QyTRSwkXESe5HCSB7idyjV0tTvTFEso+J7e9nBNJuBtvdVdaOAtXIDHxywyi8VTbpt/6O
5aNJ47mCvonhNQR4EHMMPDDI1OUU9nNAyrIjAlQy+DBhUEf54FUOg6c2qB1r9cC+BW9tPcyB
Y42dQw1gD8n8WWCrdq8uOF3cU90mskLGJ7MtVRMhOa59GFMU6a+rg2rj91uCt7J0uyl41bIE
L1/EYWLMV00RDMhoWQlHH0wqdB0NQhjkBX9gxNNBZ8YS4SL3ruO2lkQSL7vpDKwqChPUf4Yz
qcq8W3Vri7tUj9+4t1MntpR9cYFSyEdcsGrXPYbJdXu3y3sSj24D0OWoU9VCejDwxvF4tYOF
CaCJl3KJRKoKCQBaMwrnnXLyxjTFBuVhe2F3JaXSBJIWoxqGB81Yda41jndcyEnLTppWle+I
xo9r4cm5wLdWu4RkupkMLLpddP3I6nwPfA3iGfisypKqSRzuoDRSIRQnuDT7T4YEpbvb7y1o
t1bvCxoEVxQtXpQ98aWUDwTJK8U0ZjlUgFDkRUDt54NZyomjZWaNqhkNGBFD44tQdIFGz88s
6Yta5jULwWQshXcIEDgOPdNMmypTDGlDd7TuMO4Gx9k3F0raIo4yG11zBVumY/HBWNBeWMtl
IYruJ4sq+oUqDmD54jjt3TjO5WUmpIHntjEJUkjWpC0BbUB9cQioBTQHBJLCuodMVQFLUGo0
Xrnl+OBY0lpwm/uQoFxEsxIH6dvSQSKqCW66geoyxSnFMNtuhdPZyI0U0MvtTBh9j16Ht0zx
obB7zsF3tYUzisJIEV16jG4OY0t4/wC04hJ6rXYs+Q/7qdcVNia3s2uWUCRY11ANI1dK1z9V
Kn8MZ1R17rse57ZL7V2g0MaQXSeqBx1VlftUeOBrFlacG3W6U/ptDXP5rZjRq0qFzyq3bxxL
FEtrO0/6f2mM+ooYyCGDqaFaeIPXCoibTHI8ZDB0Yo0bChDKehB6Y1FbI7bbZtxlspr2OLXH
b09wd/UK9D/tzxnRuFtm1S3tzFErLEsje1FLLUKXpqCCndu2Joe+8Z3bbRG9wuiKRygnVSYm
dOqAnocMrN9Vxi0AM6No1aVahKl/CvTEJ479u2W53K4SKNliMp/ls7aQzDMpXMasGtSj3Tj2
7bbcRRXMOlZ20QOKlWev2qe5xSq1OvC98a0adBHNIpYiCM6m9PVe2Y8MOjFDXVQAUzzJ7Edi
PrgIxBcSyCKNdc0rBUjHUkmgAxKrduHb1+ja8SATQopZxGSXAX7qLTPR+bDEj27jW57lD7tq
EZalc2oxp1IwG+I9w49ue2iFL2L2xK4jWatYwSaAMw754qNds3BOSBGpCsiK2gFWFNX17YGb
qguLW4tZTBPGyPHVc8hVepGGxS+BhhluH9qJSZWI0AZkt4YGljdcU322sf1Zh9yPT7jaDX0V
pqIFch3xSpyWO3Xd/cLBAgNwQdNTpFKeJ74U5JLZ4ZdMkbK6GrahQrXL/HFjX1dLbdci3F2Y
T7EjaUcA5ECuY88ZxlAIi7FQhLtQ6RmSSaCmI4ae3ki1RzRmNq10NlniCFgwNc0yyH+OKjDy
FXWqt6QMwfHtikNJUkllRIgz6skRQSSfDLDix13uxbtYWv6q8tXigUhXlFHVGbJQ1K/dgSOy
2u93CUQ2kLXE2ksETqQO1ScUMc0sNwkjRyIY3jNHjIzU16Y1Uj1KHAB+40qcsArqsdpvr73U
s4/1BiUMyAjoTQYi4i5RmjdSKErRhQhgaEYGNTNZziFbgoywOxjWU5KXUVKq3QmhrgaxzSOA
qqynI9PAYVaeHLNhUflI8MKJzpHirfm8RhISdIVgSFGRNcv24kHMr6TpJy+uAYcaRUEFgtAP
OnfESai9qAHMfXErCUAMFGS59OpzwCwRbS4FAcj+JGJECjV05UGRpQ+OIhkQlVIyB6/XxxkU
yKqmukZk6l6nPzwqndhq09PADqCe2EBKaGUNkfyj64iklbSwGmleh7Cn0wo2TAjSK9A/jQ4K
ARka2H3Hrnl/1wI75n0igJ0kAmuE4kBVXI65VNOwHfEAkIy0Dnxz8O2JaEL6MxQ9qdK4lgmQ
jyNOnjhMOwWootPOtanxJwKm0GhUjvmR+YDxxChVi1SpzU0Hl/zwiCKHUWGZNNRrng1qCeig
aQKkdhXEqB9JNTkimhr407YBhzV1PWpHTCSdhpII0L38aDFqMofVQsdPn2xVEzIpNBXPt2xA
SFK1YnSctHTFUZoyznVkPLw/DAYAkqSFNQO/Y+WHEThfcUstR0p38jhxYkdkH8sDMEs7EYlp
1KVKkVpT1da1wGB1v5IoIrXM17Z4keopVswzUr5+GJI2D1FKshrmaVJ8/p5YRaHSVZTp9WYq
OhGBjMqUUILZk0GXTPA6RHpIcBaEioX8ThQqAKrZgjMr1y7jEElB4VH3MOmQxKogaA6QKNU6
fLCtBE9KVT0jLPsfPEBEKy6lJUnIds8QCCylWU6mPQn91cRw1XrqBpTKnc4kdA5BHY5gA/64
gTFtWZIRgKgf4Yh9ikAJ06vT2yFfriGiAA1CgIXLV0y+uJoA0BRGVIzOY8TngJamX1BQ2k/c
OpxIcgUJ6fu706/hiiqL16xUAilKVzA8cKhV0mpFARl3p9cWE9DkfAVqP88SI6GatahR9vT6
4sATEGbWx7ZivQH6YKMGwyBNajKg+uJqHYhjRgcsvpiwoWiqPbOerL8PpiGCo6+ilBTt/wAd
8Kw1dSlXUjLIUIy7YCcKoRasQSQadcsSOxUtWtfAjyxIyxuZC7mh7rliZwmFarmJBWpOWXXC
cR2wDsdbUqSC1M8vI4rVgnqzekMIxn7vWv7MGg7nUoILM4yJGVfAYVghkAtDl1pngaLX6Hov
/acR0CxtUaifEhjUAHwwsnMVHy9SmmY7+WeJYYoBGHBoK+mnQ/UYNIxWikk5nLv/AIdsRsCk
akMxenU/Q+WBFI1EY6qZZAYcFqJZBqGr1M1DT/lhGpmleSXQRQD7h4DtiROtXCipTqadBTxJ
wGuedVcMdWRyOXUDpTDAzW7BHK5Uky1Z4TI4qDwXrTA1i74iVi3u3kmekOoBgT0JPXGubg/r
uePet04ZuG7WNtLttzazRUr7yyLRu/7a4r16486zlvxjcIN4SwvQlnLXUhmYLFJpz9DVpjUs
dLNbnfdj3G2srW7cRzWqOGMiSq+kU6sK45pSbhujRXe3u86/okbU7g1VGJoOnl3xqRma0qzW
tzcxQy3kEUkykwPK1EZT0Af7QfrjLTnTYriy294rmRFdAxA1qaVNaihofri1I7HfNmjs4tt3
xEuLcAiz3D/7eFj2LCupa9mxaPrrI8it722uVWedbiJlJtJRmNFa0WmX1w/YTNcG13MdtuME
80RdUYesZlSe4x0/DXUbu/4zfbjFBe7XcQzRMhLtbzAyL3OqOqnIdcc9MHsc91bWzJPfe5cj
UysrESEAf7864KNZrauS3tvdD9VOwtpJKSR1Lxrn9yr+WnemNUvSEu7yaBLq12/b9ztgv/4x
BJ7cw/7ghRiRjNicG23abpuZVY7a33IAqttcSVinp0B1517Z4jVjuRh/Rj37KHb54aiaKN2A
DVoKIxYf/UuIWM3PuTxb9ZCW5ZYVUhXL5LUdA2HRi5eOPcJZrGKeNbx4darI4j1qRSqucifx
wSrHOu3na7Ff1EoVYhRwDR17VIHUeYxrQ4b/AI3u1xTctuuIpLIJqE9tIGZWGY1qpVsEaZXc
jOZCbiUSyxijODU5dM++GLEVmlrJOpuHMcIU1kFciehp5Y1ox6Ds9nv+3W1JHtt549KoNVKT
hARUHSaOmMdVRQ3m1W+48iayF+lmhQezLP61Jp/464pMFim3PYdw2m7FrfKiN11o1Vp0rXz6
43Lrj1src7TxndJuNBbP27ke2zKVkU6ic9IzoD5HHPq+usuqXYtg3RmP9JuUst5t2IuLK5kE
RNDlp1fdXFrVjq5VJuEtr/8AlGBIdxUaZJkVUMlO5Kel/CtMMrFqmfiN8uzpu1pcwXlppBuF
R6SRNWmllPhjrO4vrXfsr2m4bO22+4lveKCEWZtKy1NaI56N5HGej8nvp5NttVjK6biKgCSL
qRgv5SMYhw78y2uaART8fsn1DORSVOr+LyPhTGpoljt2m2g3rZ5rbbpY4p4s47OaQJItWz0s
33AYrV3d+HFDYbptV6INzdkVlIQSvqUtXqrVII+mK0uDdt03WPcw8d06MgUKyOQfpl1GH8Mx
rON7rJue3TQpHZ3l6hobS5AjeoodcRP+IOMt1xbxdMJ47fcLNNubVqR1kYgGvU1LDr4YMEi4
2+d4HcXlrZSRSj+TfwyBUkbKmaHSrf8AcMRxVbrdFLa9S3kVXQlZI1OVCfD82LBfXBDdQfpI
g04CsQmssSFbp6ga0ocbxmu7bNjuId3mup5oRDdA+1cJIGgdhSg1/lbyOC1c84Ke0O7291a2
E0K3sMmj2pH9uSoPqZD3WncYJWrGfmG77TcBZz7mh8oJv5iah2INag98O6OXfNyXb57WSGfY
7MiTJympch3B7U8sZasT7Lscl3Zi+2eZfcjbOJXImjc9NVe3njTN6QX9zutluRfd7f2VcaWl
jVdBev3enInxphg6orKKTbLp76RoptvvKBLqIhlpWukjqD5HFWeTbJtlxfb5c3VnLFJFqYPV
9EgB6HS2bCvhg3HSzw26pNsXJG/qVl70UqjWjGgcAdUda4NE5ixW/wCI3NhMtulxCwqEWQj0
VHUZ+rBGsV9qIt02U2ttKq3Nsw/kTemUop9LpnRvMdRjTN5N+gF1NDaNIsN0QWRJW0KxHap6
HywaI677beVWtvpjsRIF9ZdCHdadTHSh/wCOmHYJfXFxTbtwvL6S5h0uRqWdWYI41daqeowW
usjj3203Cw36stbSclWt5iaR5D7lYYXKtPBv282KCLfLaGSGdf5U2hSGrmZEde+M/Lcqgh5B
t8MsyXW3w3dXJhuej6CftI6EY1OWeqn2292273cz2kI2xtNSUcmMN0BXKq4qXfbW287ffzz3
cn/xbk6UuInDxsSepKGgr0zwatQf0283DbJG2ho5LqKT2lRW0yICT0H8J/ZhtOM9usO5+/Ha
7gzGVCEVpTRga0Cknpi5GtLuGxb6OLBvZMsMBVykLiQClRqCjt4kYL0r0r7q7mPFoTHL/LBA
emRjfqafwt4YpBcQbTvU1xuML38yyyhNKm6JOsVACM30xqtSStHLeTWQeL+kvbW0tYwY5v8A
45qa6CTUAHtXri1i8uDZWvLq1cPZ/rLGGQgPbS+xd2pr9moHNa+OWM1uUETpbb1Wa4/WwFdE
A3AaWFcmhmJr+B6YqlrLfTWMMi/0eS3t5UKxlJf5DAn7e4FO2rriw1k9hnvWkureycxzXLFx
CX0VYdhU6a+WNeOX2cv6y6tdydt0gNw8XpkjuQS3Tpn4Yi7rvduKXkMkT7U8LkVSSNwpU06j
xHlg9a2Lu5sd1m2Pb12wrOkYQyxBlWRKKKFFJBP4YDMcEmq53H252D3VwoUmc01lctLlu/1w
FXz23KrOD23eVV//AIeKVtUcijqor6Dl0BxpmpuH7Zu0t0L+2UeyAU9xGGoMctJHUYuvDLp4
bGSLfdwsruNY7h5HCRSgIrV+3TXKmMKrXYrXeoYr+0u5G9yOMAWUtWpGKkPHXr506Y0vw7LO
Vm2m3u7ewN5IpEbTwOI5V0flkHcjqK4B8OGO6O4b27QO1vemMVNw4DNIDkC38P1xQf8AgG3R
7vaS3dju0bQQzqWS3ehtyTViRSqeY8cIc13Y3G4bNYz2ah7OFyJJEzMZAodY+5Rlh5uGw1xb
G13GFL6ISWjKouGWhBiJ+9WFcGqcuvf9svbTbkexuZb/AGyPSwQtrNvrFFYN3UjLyxSnqB2j
c+RSbNbxC0t9ysY2KwzNTWhAoyNTOo7asBVF/LtlnvEcq2Ci1mVjJZnVpRwM6F/UtfPGmVzu
m58Nm2i2ZtvaWGT0o6SUkhfTRgSe+M4djE3ccSl1h1GKtIyx9WnsSR+/GlYuuISLb7j+qLsr
RAUMfUr3FPzfTB0eWy3zkG1R7WksV+DFKCbJnhLKrsP5kdMB66xleHW25yXSzQIxtHJXUGBI
k1ZfQZ98atYlUm8209vud5DIhilSd/cRsiDqOf08PLFIzHERo9Yo3iMVOtjbQXb7PZw3+1rv
GzsddtdRSBZYCRSSOozqhzCtgbgNu17VukslhdyFIlWS1kempFP5WHehFCMFUW1jyGz3O23O
S0tltbu8t2O5WiisLyKCBNEv5fwwnJHHuO92Vts+2RXu3x3qNEDG+SmmgfnHXFjP2jF7nfR3
k4eJCkQFEjJroFelT1p44VuuRlLnStOpKk+AGIfVe8OQ/wBVLFigEJWnYmuC0/CbYriaC73K
ZT6kL6lOalasKU8CMMYm5rr2bcbjd9i3O2uaNaRIrwwFdRRgG+0n1ALgsanR96267mTZLpAH
gkSv6gEMp0hQoL/Xx6Ymr8qPlEL2+5vHMmhwBrrkdQFD+04mZ8qf/wCyk1EmgOnT1p5Ym3oW
+bxs9va7dHuO2LdB4vTJGwU0YCuo9x+/BLg1ScUi279Tdo0Uk9sGLW0QY/qEC1OvUtKsvjh3
VmLff952662K5tJhLIxUNGskXrVgDQ+KrXrhkFS8hu+Rbc20T2KGixKHvUGtPaop0P2oO2Mx
WsVyb2G3h5IkUC5UTtGnpTU/3FewHlhrGqt1UJpYe4OrKf3YG2yfbty279Idxt3vrQRrGUFd
ZjYaw0cg7CvpP4YJS4LGc7ou7Qx//jMoJto5KCRigIX/AOsY3WJyk2EmLj2/WF6rG4VWm/TT
GjVij60bpIGoR3wX5PO4ycbsQCfSxHcUJNPDC1uxoNhgWfbb61hUyXbr71vGKVJRc9H+6vbG
Fjt2udW4pvtreU9ynvNDL9ylBpoFPcHthjVrptGuLK4hh5DYyzw6I/Y3C3JEgjYVSRZK6SFG
FKa8W8blCJHMI53mX2J3yFWPpd+3q74FKLnz7jLyad92sIdv3GNEju/a6TkLQTk9CWHhhl8Y
z1bcSO6yWG428scqx/pKQ5HWy6S2kjqVWmMfk1WcTmUbNJM1v+stkKtcWYNWK0HqUdQVw2nV
1us1pNw/dZdtWf21kC3djcjS0ZJXRLGD0amKJWbTJug4Vd1slvtrlce7SheCauUniMX5GK7a
7O8azYxrI23PJSd1FfbkFGQk/l1fvxc0SLDed+EdlLtsYd0leOYCQnTHPGwIkiZszq/MMapk
X1jv2y3ssNxeW1xa7iUDXDQVERcLT3KEg+o4y1WD3oRHd79Iv/Gkx9skULKc608+uFmG2lQN
1sjWipNGSa9PWOp64KWj5bdb/Z8tuYtvlngQ6HVVqsQYirZjLU5641zNMdGx3dgmxzz7jHMs
ZuG1Pba1kjYNmnpIIBOD4q68UnJL2CeQR2V5cT2Uy+57FzqBDA0/NmR4Yqzq02Pcdwbi28k3
chMMaiMBidFBlpr3GMymMldXl1cyyTXUryyflaQ1J/HDpp9sjupNwhFm5/UOyrAFGesn0/sx
VcxshvhsLm5e+mkt5HjZ7cpqaBj9ssToOiyEHPscP11m31UyaI+PpuNkvtypcPQrmVjLVowH
SnbBPlqpN1MV5wjbt0vB724peGL3sg+lgWAJHYmhFcNZ/wCmV1ruD3fBd2BRQ0UsbsCtDroo
YgnpWvQYzPl0uVBxTjsl5Z/r9vkSW4Rmju7R6awOqtG3XSR18MK6R/I22blDLYXM8ccYmj0G
YA5sMj0/NTDGIyTqvtqO4Bqe+AtHx3aLTkUcO2SOm334qlvdOP5Tkg6EkGX3HKuLFenZYwra
[base64-encoded attachment data omitted: the message body at this point is a raw base64 Content-Transfer-Encoding block with no recoverable text]
ByPkwHh2wfVi0Og6QDQmlaYhIkLzBCFYhR6QoJpn5YsadVnu95aW5tletuTrMDZpqH5hXphv
JBebi8yImftodSAdATikXXUBNt95DEl2y6oXrSUHUNXcN4H64a52II/eD6lYq4yqvWhxYN9d
draXl1rEWp5Lce6ACSyjoSAcc63LrmLy0IBOktVlqaMfEjx88LQZfelDzsxcAhXk6mo7HDAC
MsCQpIr1p4DPPEtwbXFwskZWRxpOtKMcqdwR0xQUjcN7XtgH2ydQXqK+ODGrQoz0Ir6e1MsS
lSPPO6KhdqJ6QNRyHXEqhYEtVfuPUnvhZ0ILKDX/AA6HDKnSt3dqFYSsrg6l9RqDSmXcVwVu
BiuJEf3EYxu2baSVJ+tMDNBJM8j6nJLE5sTmfPGozgnv7t10GVjpr6WJpTpjON2lDeTQ1MbM
AR6iD1Pn44sc/sb9TcGX3nkJn6mUfd5UOF0niNuuf5vzeOA6lju54SSjsgqCdNR/hiGCkv7l
7gXTzO82f81jV88uuLB0jF5cGMIZGMVSQpJOZzqBjQ4ELu4jLvFKyM9Nek06djjDpqCSWV5N
bMS7GpLH/PFjOpxf3gdJVlZWQEKwJrmKdcB5mOWhLClM6kjGmcJ1Ck0NVbqD/hlgLqO9brJA
tq105tl6RliVr0BAxHXPJdNJbJBJIxijYmKMnJWIoSB5jriwdBa6mlRI5GZ40qsakk6R1y/E
4mYhoNJRgR/ngJRyyQzCRCUlTNXXqCPCmEzoYvrpLprqOZkmYMGcHrqFDX641KLQ2t5dWsnv
W8hhfSRqQ0OlhRg3kR2xVYKHdLu2UJbStEta6AaqWp1A7Yw3rnuJ5ZpCznWWqXJ6k964WLUK
ooGnT6hUiuWX1xGGGSsR1Hj0zxEvvYkgjKozxUymcAADKg6UwarEBaQyZmgH5e9fHGmPr6KV
ULK2qvYV8sTYdWk9aknxy+mAacKSSe/n44DhgE1kH0nx/wCeJYhBPXox7noB1GEYM1QBmqWN
StOlPHEsMAELEZBvzDscCDoIY1zJPp88JhwWqrVoTWo/1wIKBtPqAqR26dfHEDBs60OlTlll
ngUgtY0kkdTkP8c8aa0zOxI6KadRiAfcYgkNmOtPHExTkhmrQrl/jgI61SjDPsOtcRkI0LjO
n8WAw50p6Qe+bDzwqiAUPoBOrIr3FD1wDRCpqO1agnM4joKOpo/WvUYlaTKAykg1GWXTCyWo
liinqa5d6YkIKemR7/TAT6h0OYAzODEFm11UDSDmWOLERDlCWrln45eGEn1o7A5qKVAHjhRh
IPUBU1GKolDBADX1kZjOn1wEzhatQk6ciO1cSKg1hz1NMzXqBiJnIObN/wBv1ws1CxWvrqDl
p/DAiJIAr+b7u1P9MRFIQSKD0dQO9RiCPTU18616AeWBEwBjYfTsaVJ64VDBVDFSQSPPMnxw
qBfqCPu7jGSHUrMdI6HPEAov3VyJ6H/HEIaQqB2qMlUdq98LVBpJ0aiRnmPDEBiQFWL9WOnz
xLQLIi/jkKjPwriAfWuRIqenhiWl6iwoahepPjiAWVtS1qBqOrEcMwq1NNeoXEMONRSpyIOm
hzJPfEQui0Utk1cj5YWjq0egAmlc9Pn2IwINCxAoaZU6j8cQO5iWMIoLPXM4hpB9ROkkkjMd
adsGo7+8iAqtQMwT1p4eWEiZEdQTm3dunQd8WFG60ANaasiT4eGMikDpYrppkNNf8sWjSIDm
hB9AzXsSemHVAtnpBbSg69wT4YSMawmXqyB007nEjFKMAT0+6nj4YtBQI4UrX6eNTiWHqQ1A
PSRSg7nCSY6aDpQ0BHY4Bo11BKkitfuBqDgWI6yatPavrOFepFkq1JBRRWh/d+/E0SihJqNP
YHrTEA+sqSPUK1pWtBiVEDqIUrqbOlMqVwFD72qrDMplp8fE+eNM6QVTX1kEdK5+eeJCJAIo
ASRlXPM4VT6BnQj0/m/ywAz6WoCetAfLArRRkEZmo6fs8cBhO2kjQc8iafuxIxIqrlSCT6vr
iaggBqArlSufTCkULEyAEV7q340xMYcmjsaAktQgDqDiJOrKwYt6T0FehP8AniItRLVHRQTq
bwwjAhA6sTmQK/UYkQNCCQcx0UZeZwINAystMyKahl54FBgagpI0kd/LCUbDUxZTUE5AZkDC
jppYEDNgfUa/4YicxkUZWJYdF/1xK0wIFTIAaigJxIwA10DZnp4YgWgE1Cg5+pj0y64AIUaL
0io6Z5Z4kYlh0o3Qahl17YkGjAChGo9j54qsPVAQCNIXOuRoT5jAcECKjUtGFQCBkfx8MJww
NcvtU/l8sK0LO7SISRpAOqvj0wAw9pXc/auWZ8fAYkNvaKhCCWNenUqcSMhAbJTQVK16jEQk
slFNa/mPT9uEUBd2JBIbT/8Ay98FrNGkpzCkaaZClM/+eAIikunUCNKnocuuHROaIgBfUfUP
uYdKeGeJvBsfSI6g5/af8cSKqsCQaEH0gdeueImb2aVRjUd64ieRdB1s1T+UHMH8MQoFNEqv
3nLP/LBhEhNRqzByPl51wqGcZrUAgfbTOmA4EowcVJ1MfQMziB9D1JA6VBIxIXpADN93QDt9
c8SMjEN7detSGGWEm9t+hWijp/ywaMPoYIq0OoVqR59PpgNNRtBUigOYp4YUSaGUBSBrOeXQ
+GeIGfTQr9zDqRl0xI6qChYksCKHt+zAdAhYuQQa1zc+B6YoEsruVFFqQQKp0yz740UavTNi
Q3UKemGJDNMTQsK1Byp0+mGRM1uMqvpAoNJyZain7cVLj9v/AHH9nbxwasd2zEteRqEGstVG
OWY8PPG+YnvnxkqT7NcQXcazQq9VEi/arLUkHrUnwxf1kYWfF4bdU3BJLZJ4BcVQyDNa1Gkd
+1cZqrrtrbbt0tfavLNNMMrok1NLkoehI7DGNWGt9W1bpZw21qLqB31TRumoZHIqQNQoO+GD
6ubm80P6rbmhBRdRBUZiqn1Vr+XDw1z/AJPyxrW3S3vY7SKGUMjERD78vtYdMQ07bDbSpbb5
ZxpCr0bWR/LY+B/L5YL1+GVVvb2C7xZNbwCzuVcGdUNYmJ+00/3YZWsXnJ7GxaSxlWCGKSWV
Un0DQrBvEDKuLmqui0UbXvFjbW9olzHKTSORCzpTusg6Uxr5ZvC63eysL+Ck0SGNVavuZ6Tp
PQ9cZjTNbXsttPsaRz2yqqhzbmRc61yJYZnDrPq349t9n+uisbyaGhBMtnPFq1gg5qy9hirW
s9yPYNv2jlFmloWSK4ZXCqxdF9VPSxzofA4ubsW66/kJ6xQO7FyGOqtSKDMH6YzixghpJJKl
VPpDDtU17Y6SmRs/j6/S2v2hewjuVKHSSSrDoa1wdM9A5zcbfPvEhtLU2x0gyRhyyavEVzxn
menlccehtrrh5S4gjmaESexI6gSRnxDj1DD0bzIy+wqV361BIoZR6hmK4YJ8tpvsdtc8h26K
+BkiWN0K1AY/w0+mKVV2tbcZtLl7aa+hvLXQR+iuYdMias82XPLxxm6lHbvxe3upIY5ls4pA
f0l7KhaAt/DIfAdMs8VjP1V3Jdv3OGJJZrK1ETZx3tg2qGQHv5HxrhlH1jMkqpJUknpl0r3r
ha8XfEJDDvVqJApacldLoGUg+NRQ4rrMjVyWdjByiJYLaIQSwsXtiNUVakmqt9tfAYNas8SQ
Q7Ve7/cbY+2xfpGiLCLUTpPjEeqYN8EjIcj2uHbt4ltoG1IlKFuvqFaE96YuaqqkhlkbTGhd
upp4d8dE1Etrt9lt0csLqzIq6Rq0yhiKksveh7jGZ8j6rC6tbfcOKC+uIhJfouVz+cNqoKt1
IIxUleWW3R2EFzcWTO8aL7gXJwCOudOmCIBs/wBZDIdnFtvUZUtLAKLeRADNtDUrp8VxrRmu
1Bt0fEo7uSyQXMMbGGQg619dKSDvXzxhueM3d3O2bhSa32/9LdpQTCCpiI8c6kfUYYz1Nddx
DY2+2Ryxt/MB/luhowc9nX/AjDCk2zcNtmr+tmjtdyentT3Ss0Mvb1MAaHzwVp32Nne2W+Qy
XlpHEXjJjngIeGZexVhUfXwxM9fDrg/p+4b/AHdhd2SPbCE1VmIeMig1I+WJcXI449lG3PKW
lWOy1COG5lAZCSaKHpmG/CmHdC2k47tlrcpcRRwSJJHSS3UFomr1Ohsw1fDFokZ3cLvZYr+4
tJbAPZUoAzkSoT1ZGXwPji1q86toNl2JdnjvIbhbalKrcCquKf8A6rYb6vrIpOQ7ftTaJdu0
+8f/ACxRElf+6h+2uKUBvIYbC1hntX/nAr7UsZKHWBmHU0ZDn16YefR6sZ7W3TYod20ia4nC
/qI5BUO5ND518cY1uLO7trO9jiheBoiFroU1CkDKlemNTxz6Vey2W339/ebddwRyRQxn25gd
MgeooQR0NOtcFMg9y2yz27bUmiAMqUjcvmsmZoafl8MsSusxOqXD6kiMT/nI6CvbPpiMi2vo
4Nvtbe8gkBuCFVZojpNQKHWvVWH7Di5Zrs4rttjuO4K94VjldXbWwDRPlmHH5T4Ybcak1x8l
2CGw3J0hX/4zlTHIh1IAR0J7fjg+zE+Su4bLb7SK4s5KT19E0JpqI66l6q4/YcOtR1XVvb2u
0Q7ksSSPMB7yOAVkd2Oo9PSfGmBquHarKyv76XXB7RWPLTnpbsV1eGJmzVjs237dul1ebbe2
ys0KkrdRel1YGlVof2jEs8T7ttu3WGwQXAtoxewsI1nIoJEBNda9z54dFhn2jY0vI5TEscG4
IqtbsdSBsj6GOY64zpc/Gdkt5Ny3K2vLZpbKNPbb3AQVFaq2rIjyONHmOne7HbLPZI7uO0UX
UbLErj7ZY8xRx4/7hgl9OMrucm3zAS2sJtmP3wk6lFO6nGsYzKvd3tbR+L2d3GntXUoX3DGS
FdM//InTUOzYeT3NTWGwwy7La7lBbtdAN7e4W9RqIH3PGRQ9M8H21mcY69uG1R73JHYsbqyM
KlS/pkUV/wDG5HVlxmmcxybbt21bzdblaT64jGNdvcJkylnpUgZMB0OI/guN3sdraXtpc28d
zGNSNUZmlQCD+/Fh/DJXCxNK5iTTHXJfPwGLWAalDaa0IBqO9cTZvVQFTQVypnhZKlMydRPS
mJGLKX9Zp4Af44jTsK0qOmXWhOKMhDMBlQ16/jhImFUFPy9z1xlqGYLpZmIzIJPfEzTMnSla
9z5/5YkALR6A0WmZPXGoL8ky19BI65+FMDOCC0Q0qT0xNc30hrAXSa51z65YnTQsSW01yr9M
TOkanPOlaUwD5IavDNcsKnhBqEkgk/mXxwLQ6m7jIdB5YiYlydRFO4U/4ZYgbUp+z0+P+dcT
RpHbQCczmD+GJnoFCEWgNAa4VLRMPTVgBXoR1xNAZqmq9MZBEamLHqv+GKKh1FmFMlX7sWAI
UkhSBU0oe30wozEU6GnQ4iHUioCKGleg74CD3D9qoenXLL6YhQMzaumdK18sIhNIzD0qQh6A
Z0xNALtpAGY7jv8Ahgxad2B9LZIeoFf8cZOo21qpaudQAT/hjShmFEyPeuE1GGVV9JzOVTnm
e+Jk5kcgAZnLURlngahNISaUFW79cSBViCQMxlp8u+JQ9C2YOa5r/wAHAgMQPzEAZmuYJOEe
HD0pq6HOnfyxLTBiSCcwDmPLAodWCgnPT0oT/piJNICrClVI9PhXCjRqxClmqSdXlgBmIAqO
hoRTtXCLoqnSaEeRp/jgJgWHU+noPLEsErMwUjouYB+tM8QCGZnYHsMvxzxEbGICnQDqfDFq
EpodJPfM17eeIGBjWTUBmDRqdTgIySaHr3qMKJQVCt27k+eBYFmHUdRkKYQbWNADA1Y00jER
gkDTTM5j64kL0qhrQkZn8cBCCSgXVpPXp2OFBJB9NNK9vHEhADMnqBVfwxHAh1OZGmuWWIBY
ouSmoDZ/5HAhMxVya+nuD54kFyCdIHqBFf8AliQW0qAHzUEkL3qfDEidRUKPtA1VJy/HEQLQ
sRmGr+2vcYhpiw0EavuNHFO48cAJT0TSanIU7jzwlGK6x5Egk54RKTfcf3Dy8MTRnZkAIyIz
IH+WAUxJII6N3Hn5YFpv4qZaRmxHXCkaygqc/UcqdziEoWBC9DWlQxzp44lhwcqV7dT54ijK
sz1JNF6L2P44gkpQKxyUVr454EEo7vUNpoKmpzNcSwvtIZepGfgKdsJBQSSAMaqKkEdcSD69
ArmQKDvXPEKQKZkrSuQHcYiIamHpz7lq0y7DEkbhaACoAzI6Z1pgZHVSQBU0FGrT8OmIjDkk
LWppT64WglVH355j6DzxIKVL6SKnozf4YKKT6a0oQ1evjTAxQliZgj+nuCfDywwykNLRgVGk
11EjwyxNpU9D6aFaDPwp54gAnUdGrOo69enbEzfkg4VqaaV6kdMvPDounXUD6slaoqDXI98J
KgFAPUp+0DsR4YFCKvUGur09R2z74lh1VlqzAV8R0+mJqHLjQSB6cqAjv44kYpUCi0cnNfD9
uEWkij3epqciB0y7DEpDM2oLqUKACpNc69sB0DB1OS5GtfKuEEIQAT+7wOLUdQaiprnQAd8G
qHlZDp0ilc2HjTCsJfUNQALU6Hw8zjKpL205A1APUHEDh2DFCorWgP07jEZSkalarTV0708a
HFi0COSGBqG6ZjChxoAdRbMjM9q+WBBc60GfQ0oOtaYkbUShBFCvl3PfDDUbI5BFcz0FPHGi
KStBWhamRrTLvjIsHp9NKkimfj9cAKJl1FTnl+OFoztVm05hMiQO+JFpLin2ilNfQkYtACDT
0A1TIDKlMSEqlRQDTrNTXw6dsJFIKVJYVGQU1zAxBGDpPgHGXjXvgAHDqmoelRl5EnyxRfA1
DGrUIyAP/KuFQQAQVrU/lB8cRCtCak+nTUCtKGuAkxQOFNQWGQ7Z+OJHV11UqSAPUWPevb6Y
cISoZT6qJ+UEUOXeuIYZY1A9JDEAqCc8z3GAHLqi16k/aO1R1riGn1JUFAVZ/uNKdPHERKAZ
GctmAOmYoO+ImD+rxBzI7jEkTMS7MQAtfup/jgZzRMCeootOhyqe2EYWRjoagHM59u4OLDsg
g3uHQEHpFK+IxGBEagHURmfTgIk0qdJWreFaZ4UiZHK1YBaHM4hRGjqASKV7eXhiRIE0eo6W
NQv17ZDERvVFMZ6n1Fa5ZYiFkNBRdFcy1c8/8sApjpNQKhhmCa0xUEdYFan1D7h4jrTETag4
GqmkCpA6+WIw2as7CrKM1HU18sSJpZGXpUDNjTv4UxDBmVlI6aO/etMRMSqktI1DkB5d8Qpe
6wBYCo7Hp16k4ogaXJLsa1yrXt1woXtsqjSBSmQY5+PTAiBAU5aUGZI8T1ywRQJOlgFBzyB8
RjRBJr6KCT4U6fTzxKoZm/lFGNA56g18qfh3wxVmdwhkWSjUr2/Dy7Yao5Pc8sGHXftDKb2N
yBVelemr6Y6ysdXHtPEPkbe9j29IovYuokGlRcxK71+ooaLjn3BLBT83vX3AXltBBaPJ6pvY
TSrV61Uls/PDjdc9pybdYJZGUpSUsxiIJALeGM/VlZ2vyLv0NvFbTCK8ig/8Mki0kiP+xx6q
eWD6nYju+f73d3MdxSIPBVUOhW9xD1EiGqkYcxbHdu3yXf7pZLYXVhZ+yg9Mqw0etOq6T6cH
PLNn6cXHefcg2eB9vhkin2yUMslpOgkTPuK9MaslHHXvrls+S31juJvYdHvFtXtyKsiFD+XS
e3hg+rr1V1vHyVNvO3/or3bbYKprFLGumVT4qQRikxmWIbD5G3WC2hhkhguP0wAgunBE8ZHQ
6wRWnnhwaNPkfkv61L1Xh95P/IBGGSVfB0bKvmMWF0XfyPuk8DxRW0NrBKTrWBaLqOZoDXRn
/DgniRWnyLvUVrHFNHFdSwAi0mdAsqd6ax6iPLDlQJ/kHfJNxS/9uH9QiCNgY1KOta+tCOvm
MZ+tZ6dm/fIVzyCzW1v7C0dIP/BPGmhk7HOuNYYyhkQya6VVu1aDFC6LTc7uznEls2gr3Ph4
YZFqS+3O6urgTyODOKVcAUJ6dMMjNuNJs3yjyHb7Frd7e0ulo0bLNEp1IRQaqfd9MFgnX7cN
vzG9s9yN5tttb2sTke9aCNZIT3qFbMfgcMnit3x28h53PvMUTXVnaxzxUMN1DH7c6hei1Bxm
Ncwf/wB4m6vapb7haWe4MgoLiWKk6rTKkqacxixWufbuabxts0ksHtSRS0MltPGrwsR0JBHX
6YrFKDd+VvuCMsdpb2SyD/5AtgURm8dNSBinIqirGKgGpONQXKtdi5DfbJKz26wzhiC8Nwgl
U0OWmuan6YbdGLbd+d3W6MkslpbwX0BCrdwqVeg/I3Zl+uMY19lY3INzXcBfxsIrqlGZQCGB
+4EHtiwxz7vuU+53bXUqAO9AxjGlagU6eeGReILW/u7C5S5tn0SR1CkAENXIhga1rhYqS7v5
ryb9RKqoxOaIAFH/AGjt54ZcZ+yz2Plu6bPHLAgju7KevuWd0gdBX+GtCv7cZpl10X/N93u4
FgjUW8SMGRRnp/7S1TixvEzc4uJIljmsLSWZSCl0kZhlWnfVGQK+OWeDC4n5VvE0VxBczLPB
cDMMBqqOhBGLFarrK+ltbhJ4TpdRShrQ16gjuMLI7y9mupzKQsdfyrkK/TpiZqw2Tk95tiGE
JDdWTE6rW4jWVK+K1zU+JBxmnnq/kd3yu5di1paxWCs2p4INRiY9KhWJp50ww65Tv1/HuQv4
2CT9H0j0sp6hge2NJZWnNdxha4Bhhure6H821mXUlexQ/ch+mDDJIbcOa7nKEjjjSIRGsRFS
ACBUAn1Hp3wyDq+Ki6vJLy6e4kAWSXrToAPDDYzzVlsvLb7bLaW1Mcd3Zymv6a4XUvmQeowR
q+gn5HI13+otraG0JGl446lGHh6qnBgsV1xezTyM7AKGNFjA0gA+ONSYNWmzcsvLCzexeCG8
sZDnBOuoKRnqVuobBeTKK95juF6IxUQvbNqhdD6wAairfmHmcXwZI6bnm3vkNJt9vHdijC8g
UxMWPVmUHQa4zIdxz7ZzfcbVZ4Jo4Lu1mkMjRXKBwHPdOmmuNXlyneoJN+LXfv28EcCt6Zbd
amNl7j1Z54rDxfwrJ5DLM8mkKGYsEXt4D6YpWuuYkivLiDOJiKGooaUJ74BOnTBvV/Bdm5Em
t3FJFcAqRSmkjvhxbHHdT+7KZVGjwVei1zpTyxRq1b7Fyy52+1aykghvNvX1rbTipVz1ZW8/
DBRrnn31xepe2EK2MkbaljjJdR5UbqPLDDMjvl5cskayDbobTcAdX6y3LRknqap9pr3wFYLz
zTsS2kkMdxOGDNBNGHSlSSy1+04MChvt5iu7mM+z7VsGDexEx/EIW+3/ACxrB8tTZ8p2ywgg
u7HdJLl400S2V5GfdKE5x+6OtP8AdUYI0zF/yCe6spbFqCAye5C1KFQCaD9+GRaqVmMciOOq
n0mlcx4+WFndamXmdrc7S22Xm1QtC4BZo3ZKMPzJQen6dMEas13cW5FtMVo8Ut5LtMwXR7ir
7tvKg6a0AJV/Bh1xjcF51Wbzu9rbbt+t2/2ZZDQTSW4aOKUD8xQ5q/jjX4Y+qu2nfJ9u3B7y
BQPdqsqscipNR+IPQ4lKisN7ubO/e6jRDrdmeGYe5G6tWobxw61zd8c95NZtcmS1jNvE/q9n
UWCN/tbrTwrga3Fpv3J/6zY2i3dnAm526+3LfwrpaaMfaJB0r54mLVFISRl0607YkZVUipan
cVwqk1RQ0pXucSMVfqSCDkCczXAjhRQ5dfuPTCTaT6gvQZn8MLJaNQOo0H+WBEzMqlVyOVa4
VabUhoep6VwLSc6QRnXwPTEqEIRUgklumeJcw5aQoAMwK0r0wkIDdK5L91O2JDBVlIbIihGM
0w7UrqH/ANWImJB9IFPBv+eEI5GFaL38etMC9OztkK5dfKuIgJJpTvkR5964h1SoV05eoivi
cTJy1M6EjwHTE1AAU8/I4jajapP+0HMDEBsASNPl+7FBUGo5+nLrXxpiWE7UrpzHiB0wI4A0
AjInqMQAUJJcGtMh/lhanpqShioFMssDSOgbJcyeuEENKsw8MifHENygAZI/Fv4u4p0riKIG
rhVqKjMdfViUNpK5dSBUVyFK4moEkyaSaVB9NOo+mJaZaAD0igqNPauJj8mX7hp9K5+k1rQY
mtMWzJHhWoyzwLQs5XICpbof8cLRlYUooY9QxNKnywC02oBSCK0BAPgfDENCCC2bUFM6jriA
dRC0/d4/TCkgc6AB6SxqR4DEjakA61Y1VgOmMt4ZW1EilK9AO1O+NAMjVQVFWJFWAwYzRBlA
JIzH3DoadcS0+nU2pgaDM/XwwGU+oioAyFCPxwLRAajVhQZlj41wgzAZflIzNehPhiR105kZ
N55dfDACKER0GWrM/TCRD1ChBFcgR1yH+GAnOohlJ+vhTEqYOgcUNKZLXucIOzeoKMq9M+me
Ii1KCKGlT1PWvfETk5jTmR1HTL/liOGGkkqo9VfVXpllXAsMnU1GRHqr0GFDklBoBnkKkYgA
lSaDMda/uxIxA0EUGXSnhiRtbqDqqPAnuPPEsO4B0kkihJOWBIyanXQ0atB4DtiRtZUDsuYr
iFNWjaUqx6HpkcSNrY5Gi5598SPWhGVVbx74msAw9DMpqa00DIAYhUaVVSCfOmZNPLCNDrGv
wA6gf5YsG6b0irGpKnKvfwywGGZpF+0VDVyrQ4kSmrqSBUZMAafvxITISorWg/CtMsBgCVFT
Wvh+HjiRtdWDMansB1zxM6H3GiHck1qv1woxZtIVKZdz1IxGaFqMukDSvUeeIw6aaEU6dz5e
GImY1AFCfCg6YWbQozgFqaxnpXpl4knEJpCRdWkmoXp2ywGHmV6qVoanLyXEsEKAlTQUFB45
4kFWVSAzEM3/AExL7EZVX0UOhq9euI1IGIGVCScqfuxKoyNTMcqHoQKZ/jgApVjLFlFCopU5
Zd8TWExUIKAZkEUyJxEshWunTUVbx/biAGCCTUepOf8Ax4YmPykZV0lAKFuowtYYgsfUaEdP
LyH1xGxGYiWOj8D5/XCySE6TUDI+oHLrgGnGnUQoqMwqf4jETOX0VWmrqScwPIYhaJdPXUGZ
etM8RxHVgTQ+de9T/phWJhRie4pmQOprTAjNUmrGgA6HxGJYYMWY66CgpQd/M+OIWnaqn0kh
vy1GVaYlD9ZCaBgRXy+vliaRSMxZdOZrUZZYGaeqMgChgUOdMq+OLQcnUaVBrlRsqYYgoWrU
GirXST0oPHERUrq0Mc/HM18q4jBFjnQEHpU5ZUwUgDqEoOgFScSLXqFfzDJT0zp2wxaYj1K7
Akjoa0xpDHq6qdJND4YGjU1TAUIy+4dKYAfTqi0ac+oUdMsSqP7hpXKmer/TEBllAANNJ+0d
c/8AXAgMxHpGdc1HiDhWpCQaVoEIoT4D64SjfSw1k1Zar9R/piHRmD0R66WyFF6+GBndMCJB
oKgg9Segw60lKBozVtJ6D/XEkSLUesmgPWmf7MRGtAQSKjPUcqfTAkYqa5Z16nww4jolBUCt
K6adMSDpFSWGoA+k4tR0UB6k17qT2PhlgGG1JrBFdPauVDiA66tRILEdew/HET6QE9I0s31r
XETUaM+kkN1I60xFHJCSXqwAc5jOtfLEzTuCVChfILWgJHmcQpjrHpJFa9sxQDEMOhjH3HSD
kF8cNMpBw2pQFHcD/TA0FWdmA0mvVz1AxDUjNpFVr0zHbPEgqQadB0/DzGADYuFyoKHKvfxx
NowvqyWjtShFKCuQxAzjUwpJ6hkFHWo8sIptdQTJXVU+mhFAP8cSSQkF/U9CRRaivXPt0wUw
v4qfcDnTFhRDUza6aAeleuEiVndKajWtCuQGIaReEMDSpPWvfEzp/SrsBkak+roD4YEbNySf
T2IHQ1/NT/DATmqk0pXt9Pxwo1HGTt16V6keGJDDZ1K0YZaV6EeOI6jlP8zoSRnTwr4YUQoQ
rZ6x9tfrn+3EkUwdauooudadvrjQZfdzILgEZ1z1AZHzzxLXJSfxP8fbr44i7NrSR7hWAoVO
Z6HGuKOvhvLR6RxqaDoPD9mHNY5rrDKp9HTsfPE0Z1LAgnTT8vfrgoSxkg0PUA5Uy/HGaEia
FAK0oRXSP8cVMApTUQSQPLqSMWHT61RiRlShr2wjClLFQVzzz79cWmjFKU6VGWWdMQkFXUQa
D/cMSGFBTQpOo518sCIMfbIFaDsfLviUPrL0AoadfpjcVSVTrWig0JOY8MVoMFKgqK5+OCo7
K+mjDyWuJZR0Uxgt/wCRfy9vCuBD9tgppQEZEdjTxw6MEi0+0fWv+GJYMDTWgHqHTp0xE5AK
An71706YMMpxIQAprUjOnhhxklegqtcuo8B44mhsE1FlqfHv08cQOi6qEUocx2IxaqF11ZA/
5HEzToRSrAlulcVE6wZYnUegyqOg+mLGue9JSSOmXmcRh0YsPR0P/GWJH1CvSgA6eYwM35EC
5BLk6egOLV9YJWVq5+XhUYsblDU+la9KkHvT64heijGZNSA2HWTqhYk9adR/piQxQAKQAB2H
hiREegVrSuR74Cev/GeIg9VaZnP1eOeFDK+kmtCP2DGkcdVrX/v+uLWLCopAK5AfjjNMFIWN
NIy7V8cTUIkqKGmvpl3xIFSTRuvQf8HDrNggHYllHT8BhEp9NfSKUpU+WJqG0gCh7Z4tRiFI
plUnr54WcIA0Bz1eHjjNX1IlR9pNad8sCtOhoT4GuojwwAIBJ8qZHDpHpAXIZnqfGvjhRIVU
lVy8R5/54EcVWpFBXL/pihBqplnUeONYzadfsZQaDrSvfyxLRqR9XGBuUvcYU8eh+mMnQkAq
Qa0/N2ws032sPSCvbLP9uJCDErQCgIp+3E19jLIVTTkB+YdsGDTPIpbzqM8OLSGrSdQooFWG
BkhQKDTr0r54kY0oQCCCPxBGI6IEHOtD00/51xExooUaiaVoD5YcZtwnoVDdiOnjgNCpOVO+
R1f4YUei5opqQcx/pixHNCpAGfniiCUIp3Y9h2wqnJ7AU8PAYgFc2Ibr4/TAYTqQfSOhypi0
kaM2kmpAqPriBwBXwbufAnAYRBI6Vp54UEkrWnqXETZhepP1wILtU5ZD/HEofM5nrSlMSw1N
VTQmuWquI4TZLSlAO3Y4QBVJIoMh44BhiNJqD9T4HELC1P0B65HEdKtKknyp4YSAjU3pyVa1
+uAEzr6dOQ/iPjiFoKBjVqgdT9MSNRVGlftP7KYkFx18e/gfDEETFgB2A6j/ADxL0/26GLfU
eXbE2BiCak1fuOmWJI6jVkaAnKvfEKYkgsQ2QHQePhiGl6Syk0qPzDrkMRgBIjCgUZ+eZxNA
kCh2ND0/Z54lSK6W+gB/bgAHoa9aNXMdBiUROxACr0GZ8K4SRavhTxB6+eIfYJeigKKEAgHw
8MRojkCG9JIzPaowM6j0kKanVUnPwPlhIm9VNVKgAeGQGFGAVowaUNc0GADCxkfaDqp165Yy
3KjJ0glmzrSg6gY0hodYyI1HJcDOBoS51CuXq/HFpwY6dTqYGtcxgOnQAA51UihPbABVBTQT
6e9K/wDBwnAqCtDSoHSnc4lhwR1bI5geWEUSrVSmqhPUeHhgEh66WCUJIGZ7DEcMtXBoD9PM
9a4lgwoR81ALClG6fhiUIqAoIqx6Cnlia0/cBvxA64kdV1GoyHUH/XEgksyklfWMjTEjrpYM
y5MciT4jEAUKhdIrXNh4VxITB6inqTv5Yhhi1SDXv6lGJBUSVJFKEde9cRM+t8we2f1waQ6S
YzUnPLAqaqhKOKnuPphZMaachSmWXUDEirpb1DP9uXbE1DOCGBGQ6k/XEgFSshocxkF8Pxwo
pakFgaUAr45dsTACpZSWqK0yPb60waoB1WlDmB0Pj4jCcEnuIlC1QTRQeoGAmqgDagATmDgA
a0WpNK9+4OFGU+mjUVqdvHywFE2khiWACj7aEd++FmjoCQTmwABPfPuMWHDihVsugoScOKhN
QlF7jp3+gxELVDA0qT2GWVPLAQmM0qhoD3J7DrlhGDqCKCjVqdJrUADriSNlpKQtaGjN4Cn1
xDNENR1EePQ9aDAbAh9TFQKnu317YcBwa1Y1yOljXMDGazg5X0kk9Mh50xY2YsorWmRzHSnl
hRlkDMcia9QcgRgQyKlQOgr1+mIhKgr7nc1ABzOQzxIJFFIBqQPQp7g+NfDCqFdZkNakflr3
8cTGJGrQilV7eOeJs2skgsMswCO1MQtDlrNBSlCi9vPEzz6JlNNen1EAE/5VxRrBKpQ9KNT0
iopl1wnCEYavanY5dMFGFpj05DUVBUdic64sR8qDLSO5PUjpXEiKtQrUEKfT9OtMSxF7laqw
y7HEBUUmlfoR4UxApHoFAaikZr1r9MOExZKqBQOO1adelcCpAE5dRXBWadgakaMg1Kg5V7E4
oTBkYBQfV3Xz/HCYaiZhqs4GZzoadBiRRgBDIanoCBkc8SkF5EVXL1V7YmgBFBLR5KCfcb/X
BgxIgWgzBbqDQDI4VgNLvUVCrhRDUtc8l7n/ACxVGzYKCetOuKrThyA/Yr+Y/swExShoKZCp
OAU+pWOqn29O4r54RoKVOruPHoCcR0TFKABSpAqfA4iB0BZQftp3/bhgo4QaM46dSCcCHqQk
6gFp0y8u+LEiatApAA8OwHhhWESvt1yAH5u1OnTEja66vdYnUKCgqPphGkAQoKmqdh0ocBha
pDQg1HQ16g/hiWmYk0oehp3z+mJCdlXSop6s88CDMKBW9KpWhNK0qMSowQQFNSR+Xrl4nEtA
XouRGrUcs++JHYjJyQJKZeJFMRMrMtHBOvwPYjywgIZXz/MDknSvlniWJAtKsSWZTRl6AVwU
YhKhVrGtGJNT1p+J6YEkiX0GPoR1OVf24iH1rJqANa5U6VwgJCE0BJpmaZnPEoJdKAkg5dq5
4DBlQumma0LEVzB75YCFgETUT2yAzFe2GVBj166+AyXxwgOWos2bHPT59MICNdWTNlbNiork
e1cTUGz0JEQPp8MiKYFptQDAKle9T3H+uJaIosikkgr28hiFRhSmRU0/L4+VBgWJfZAbIliQ
D1pU98GnAsBWmihVs/qcURizaFK9upHXLChLISgdqVAqK9SB3PfCtMkhYEkkAdMqN+zENO0i
kF1DhWyVV9ROffERk0ABYEDKvjXviLnnrGjAtQnvnT65YlrL7lJEX0Zh0yBbxxrBHH6P/wAI
f39cRdm1Ae+ueoqa1qf8Ma4Z7+G7tBWIFj9gqe4p5Y0zz/l11HpRgQGzBHUftwVo6QrRtJK/
xV65dvocZX1SBYwqkVB7jtgOBkhDOHDEU8MLFEiggUOfSpxYYIR+nVTPoQf88TRm1AqVoSxp
UdsQtTAipDGpBybrTEjIUVSF6HNiT1xI7FSFbQVPQAf4nCsHEfRmAR1z6EfTFgiTTGGqaUOR
HbBhKhCkrkaUA6g4WaUSqAFJpX/imJDXJtOeWZPXEUlCM6FQevTIDAL4NGBqRnlT/niFpaau
FUnUBQDvhXJwuWpjTT0xa3gohVmqaE9R0oMCJWZ3ILBadRiZ0fq+0L6SdJr3r9MROsbgEVpT
oe+JQYFfLw8TTCrEbatQyq3fExYIt6qHIeWIYRRiw7joB2xHkTNVQtCDWhGJsbrQCjU7gD/X
FURjbqfSTgGBoFeopnTEKVQH6Vr1JHTDjNpwa9PEYhBan1Up06YGhVJYaRQjviJl1GSjUbrU
jpiHwImqKvUDNz4kYURIEZC10nPLM0xGUdVYgZrQDPrXEQ6iCT1FPoThZogejeOBqDco1aDP
oSOh+mKJGZK5qchTLDg0f8wUoQQe58cCMzAqx88xTrhRgo+4fdTI+R7YhSXSpBrUnoMIIAl9
QOZ6jtiOkwypnUnAj1bJTXPw/wAMRExWgByIGX/PAqEdKdT3I6fUeWKsHy61NBkSMDUgQdWa
5qfxzxpCKstMwSeuXTEQsZDmOnft+OBmnBZgKEEE+knLLGgIKCanx6UwKBppYlq1b7aYmztU
ICcyTWvf8KYJBaH3HoDXr3640xohQgA50z88DRUBAJJXviioSGPpI69G7YVoSpCiv2gnp1xm
lIuQ11yIpTrgIWcAnrQDM9jhwWkqjWKilTSvjiZ2HFQWBIoft88DdERTTVfUOv44QEka8xQD
r5YhTA1JpmR0zyp9cRMdRoSPUe2ImGoUJINPxyxMnBbr3riAgaE/v+uAhFaHKhrkTniOEQaU
FfEA/wCGI0iKJp7nsP8APDBDKKtpAyXx8cBwzN6dWZU5VXocRLUtKsTqXIYRKS6hWtM65Ysa
D6vupRV7DPFRCJXqwyJ696HATluhUek0rXEgE0FSamvQdMIMpPpZQdQzpiEL0ha9/HzwHTFR
91ageGFkBXWanIHt1ODTJ4A1GoVqa5DvlhRwy6mJAoM1wEBJzZBmevYHEPkjUD1AV6HLEkZY
Hv8AUdsQgNBZc2oe+JrDaDWp8vQPDriUM2mpH216D64lqI1IFSK+IxCwKVUjpU/cMSkIhQDS
hqSxNO/fE1AFARWgFMiRgIWQ5Z0A6+GQxarAA0JUmrd6mmXliZKoWmkVXL8PHEQtTRVR17d6
nEjUGRGZAzr4DCgEZggUDeHUDEzDFss/UK9PEeeImUAEkiinx7n/AFxLw3pYdxQZeNPDFpNT
+WqA06ksfDEi9BjrmHHbxwICs1TQZfuwjlKHFNHetSa5jvliaJ9QZ2Na6RUf8d8ZR1FWAPpI
yI65YhDhnACitAO/n/liIy3pIJzyA8/LFFCVwYyKAMDn16eWIkanKhqKBfxwg4kOalQD0r/j
iSRDRTQ1Hcd6jxwISyxjMihOYH17YsSMEFqHqBlXMYmRKhC1ZqZilOtD44DB+llYEggHL/TC
0AuFYrXr3xIVCqUUhs8yOo8sSJQKGnTrXsT4YhRKSO2TZ/8ALEjHRq9J6dQe+IVEwYgHVpoe
vifLCPlIrakQaaED1UwY1ptBDNVgqnt5YCH1aATlU+nwpgCNgVY6cvI/44RSBAcMTUAfv/zx
KUmoQWbI/lr4fXE1oHJA6kk9PGuEaEBw2ZGfUnqMuuWBaIDLsxPWnfEJDDUBSoFM9PjXEQsF
Cseq9a0yHmMRA4auoAGg9I8fGuIWho5IAz+njgQSSF0ntQVOef8AphRqrUj+HrXvXzwimZdL
VOemnoGdRTEpATMpFdOQFQ1e+JUSmudKAD1djXEgN55AeGfXEdMrqyGtaipY9x9MOKmNGADH
uat1z7U/zwIdKGgX009Tk0/Z4YCYBqnMle/YjEjyDSnoIIz65/8AXEka0I9GZPftXxGFnREK
QA1OvXxPnjIhnkKioWuffocJOWMkZIA1H8P34kFBIKk0qOwz64CISMVNTXSRmev44hCjfqtN
BOf/AFwtHVgASBq71Jz/AOBiJxk4YZ16Fsv24gdjXr0FQe344ghLBFIFWyzUDtiFg9RbSoPp
AofH92LFIcaEQLUsFBNKdTiaMzVDAVKACjE0y64qBayyk0zpQ1754kRCKAQK0NCACB4dsRAx
qwAPfNup864gMnSWJ6A0B64kBFiegVahSanwPmDiB6UORqvQ1PfEZDmjOtCQQcye3jTEgUBe
gX0k5sR0xCw4LqxAOqhqG8BgBnYEUcmrDIdz/wAsJwWoU1gBCFB8a+BOJQ4ZgDQAk9z498BC
mtasY9Q6MMyK/wDLzwo6mldVKZkDtniGhBrHqGpT4AilP88RgVoQdJOodu5Hc4ojLq0k1FBk
KdM/HCBKQze2GoRmR2p0OEEyhNXXOg60zGAyCJ6lnWlBXL9uIgJVpVApRsh5088SHQqWBy11
IpmARgFgPSCFAJNakdMsS06qHXTkMzUnqe4xGembSSooQpyz6/jiNp1dUUqqdM8z1PkcTOk/
uOFZCADl4n8cRMH1sEp6SfUeoOXbCN9Rn1FWNDTMAd/qMQojCsr6QQoIrQdajLFq+RVZT7ZF
AgrmKYjKEk0IHb1Env5YmjLqAVSQGB1jPsMQwQcMxD50/iyzPcYkGZUc5rQ9mJP+WCKw4UIV
IqWA9VOoA+mIyBI9JYpTrm2bNiWGDMOw0nqO+WJHNS1fvC1pXrXCAgMRqr6h9tc6Ad8sRH7m
rPoCAMxiFLQFHr+38yjrg1UDFlYGmVPqPxxA3qeI6hqNKlhl+zEMMvTPJfy5dcTQgsigI7VO
rM9xl2wWoaDIsKlTULnnn5YFIZC9AgNFHiTXEsFIi1p9mX255HDIkTIxalaNlQjwGNKxL7aw
wF1eoB/LU1J/ywGeQLKCAoHqahY+HlgBgoVmVcq1Ir0riJljCux1VoNWfQDpniWGABcaiQ1D
Rh1/6YkJDQBKagBQE5DESLfzBqqT0Byzy8fDEgtpkoa0J7L/AIfhhwEzI5JpQg0XvX9nbDhs
TBE75k9SBl9BjNWIyFUACoANCBlnigPkxACaQPzVxJz3KBo2YEnLJh0yxqLGVvQRcanP25AU
w0ahp/8Au+9ep6Yjq04vbRXO82kEmUMkoSXuSGy7Y7/ykHXXj3OfhnD7eCK1Bube5lOlHRg6
MQMwQR+zHLrr1nlV3HDd1guvbOlbcmkdwwOinYMB0OCWNXrEn/pu7W1xBLK0VxYvKsZa3OdG
8QanFRtXe+fHisIpNvCxmXpHI1AxHYP4/XGZW9Y+TaL6y3MWd5GYnZhRXNUOfQMuOkys9c6v
uT8cFhbRslk9pcyuqH1CS20sKnSwzHjjM69Yyz4ctlwjdb2IJbzQSSgZQs/tsxr+QnLGrW5d
+T2vBN9uPcUW+kwmjrXU3WnQdRjFsVoL/iW8WNo1wVSWFfUzREnSPBgc6+OK1ap160ANG6+P
jhl1RpuMcZg3WE3ciPLCjmN0RwrCmZp1GNXxi7asY+K8Qv5VGzXzpIXInsZ1AcKMmKvkGH4Y
xbWo7d5+K5dtubOaG7F5tF22kzRijxmn2shzxTpUMnxoqWkmm5JajNHIVqfTnpYeeLWZtVcH
D1n2YX0czLPCGZ4iARQGhzxq9NSVxbTtS3MoluAy2BYo8qAgh18cF8a1om4JZXW2PcWl6JdJ
LKKEA6RUqfDGZUxgJrpUjzPjjTlflqrLjkN1sDXslo0vtq7G7tW/mIV6CRD+XzGI/VxWXEd4
vrH9bboDbJk0lQQtO57kYb43Yafh+8QSRpoWUTDVHIjalP7M6+RwCRHb8e3G4vjYrCVutJcK
ciwUVOHWb4Kbje9QWrXL27e2jaXTup/3YL0ZUNxs1/Dai6cB7ZlqJF8f4T54NacKl2CkGhbp
4Y0xq2sNiur6MNrWLX/42kyRqf7u2BWGueObrazCG4jVy/8A45IGEsZUdcxiqny604huskJl
ttExVSxjjapp39JpijV8K24bvt3bCW3jBUhqgnS1FNDpr3HgcWiygl4tu6Qroi9yPuwFCtOu
teoOKmRIOJ7rLC09uY7pVFZYo2DSKo8UyP7MGrK57bYb65IzS3JJA96qjLzwj7I7/ZdxsWUX
SUSQVV0YFG/7SK4WbHEwYFWodPh18sBxYDYd1MEdzDbPJFLkjKKgEeIHT8cOLwJ2ncRbG6EL
GFX0SNQkKR4+GBpbS7JbLx+O9kilSRq6biM6o2atArgfYfCuDUDjHE333344rqOJ0AISRqH6
nyw24JynsuJzxb7JtW4LoYgopjYEN/CVPn54FmI904ZuNrPKLVRcez/5IlYCQKMy1D91PLDq
8UDqUYqVpTqp8cMVg7W2aUla0B+45kAfhjQnLQTcbglgLWs2q4jQOYgwZXSvqZTQUp3BwNa7
LXi20Jsy7jctcFHTU6xitGLU9IwWixBNwqRrqL9Fci6tbhdcBI9t88/bYfxYLWZuAOx7HcO1
pFczWO5JqH6e50lHcfk9w5VxSs/W0S8XgsbaOfdfdET0rPAwOgk6QHTv9RjU6b+o/wD0kyXA
WC51wSLqglZdJH+1qYvsPqzc1u1ndSQualG0n/urQjFfVFxccalh2m23SOZWin0+5FT/AMZY
0FG71wNYV7sNvt3sXFw7T2UwFdBCyIT+Xw+mHRNi3ueK8Xt9rXc1vZzC66m1KA6scqsKkEDy
xbouwDcV2W122O+mmneCUDVLGur7vtNO3mcZnokV0Ow7Zf3M1vt124VY9aNKmkhgftOffxxq
zxrVNcwSWs0kTkakOlqZ5jvilNi0fZK7LFu1tMJbctonQj1I3Tt2ODdSK62cw2JleVY5vSxS
To6nOqHocUYtVR1CtRRB/wAVxoNRecIvU47bb9aTx3Nm6/8Ay4hlJCWNASP4TTBGsyud+H3Y
MLQMJkkQSaSfVQ5lR/uGDUOHjlnuNrMNtnk/XQAtLaTJpOkGlA3c4pcanrOmAxsyspRgdJU5
H9mNMYsdn2dtynmt4pFikWPWurKrA/bXsMZtaT7Nx2bdLu7sw/t3FstaH7S3ZT4Vwy4LNPac
aLJ7l5N7MSyGKSRQW9lq0DSIPVoPZhi0fVLa8QuTus+3XZKywx+8pjzWWOoo6t4GuC1qJBxO
K6Nz/SLoz3dmNc1m6kO0YPr9vpXT4YzqHJxWwg2+2vru7eKK4AqaCils6GgrjXNVkV24bBc2
u4C290SQSKGgulHoKH+IEenww2sTn0rvYWht2mgeqLRZ4W9UkdctbUy0eeB03HZu3DtxsNm2
/eYyJ9uvkSsoyaKRuiSDtXscELkvdmh2u8SK/kJgnX3EniFRl11A59cazRb6t9z4fttjaCc7
iv6e6A9iYqdIcrq0leuM6OrIptx2Se2srXcEYTWtyfbSRRSkvRkz/wAcM9HP+UF3s00EBl1B
ZIyFuLdjRlJ7+Y+mI31xMtCQGr5/8sVGBVAxHiD0J/fgUddlYy3d1HCvq1OAamg60J+uLWpF
hyni1/sF9+nmAktnBaG5U5NnSh7hvGuGMf0V1vs95dAtANTLk6Zq1PpipnMQSJLDIySjS6fc
p6j64DqPPV9R6R54Ut9u4nvG47bNuFsiyRW7VYKwrpA608fLFKOusLY+OPvcd5GrtBc2ih0G
k9q1qP8A6cVuVv5itm2u7toFldSyN9sozGefUZfhh1jn9Dg2y9uATEoYrQkZd+mR8cGNa5pI
5YXZJFKOho6kUIPbrgVRVqadfPEJXXFtW4XEWqGMlSaFQRq+lPDFKr5HO9rMFlYxkC3H8yoz
XtnjU9AYYZngM0alo0AMrqKhRWgr5YLD8La24vu97tU24WsBnjgp7ugg6QehxnTFP7ErSFGX
S9aENkQR2PhhIrmxuLb1TJoB7gekV8+mJn5Mm13zyafYcOFD6D6Tp6givUYkVlYz3d6LaMHW
zBdBFG60Jz8MSnyn5JxrdNg3FrO9QK4zjkGasD0YEYj4qnVhVgK9gO+DUj166gjtSuEQL6RT
uQcq4lsRvmdS/lzI/diRCjAUAUH7vriJmABoaZjM/wCWJIdRWQACp7g9KDEdO9a6gPGo88C6
QoxPrFSfzV6fjhrMEtSciR2r4Yy0B1YFhUE0p5+OFYjlqKGuQPq754UcaXIK5r0J8PLAyY/+
R2H20oa55DphJa0OZpqJ755jEg9AEJIWpqPI4KdEXRmBUZL0HTr5YqvkDsQjgEKvfv8AswQY
dPbVQVyFK1+vjhghA0FCOh+4d6dsTemWqrUdBQHxBOBCIJXrSmbV/wA8SO1XDFjliRwVIFR9
32k9vHpiZ+Baz26+Pl54oZREHVQVNQBTvlnhJMOpQ9Mgf8cQ0o9Nc1JAyHh4YlKJ2RgqkHUK
0HbETmI6Q9RVhQf64BYJdAQNqJoO3U+GJSkpUAgmhND1xE5CFCNNT0BPhiJ1LUqB09IA8MQO
ZAQCT4ah2rhWgYsr1RcgaGmf+OIFJpIociT+3AjZZg5jrTywo4LBWIqwAyB6Hy/DAsBEzMCW
Ude5/fiUhOwQrQ0zq1T28sBpMhoAfUxFXP78sTITmCxX1g5A0A6YkZmLUBXIjviMqJg2mjHL
rq6YRYlZkJoRkev1ODGgGlCU+5TWvjiJpWLMStACKknPLEtMz+mmqlKVHTCL0jqdZFST/F2w
MymZ9Jr+U/mApUnE1SLA1bp2BpU4loSpLZUNB0xM6BtVNVaN0Bp+FMRNRiKUqtTmew+mFaWm
kZUtQjJiaZn/AKYtIGGlUb7UA71p+OGLDMqkirV1ioHT9uJDaQIp0imQIIwIOoqakdex6VwI
4LMPSCxNSR4UxEEqkgBRQDIivTEzaURVFKihH8XhiF/wcMtQNOfc/wCGIyjzYdKaSNJr1phw
g0+qhBU9vAnv0wAdEpQgEIcgO+WBWh9xAdA/NSnl4/jhkEpGnpNNRNdJJ/zwmUlZm9VKqMzl
QeHXA0TKsjAg0Ph5YmQ0c6kBJHavSmJJSgFVXSB1FOuJAKioK550AORJxETupbv6SQV8MsKC
q1TTo9IOCqEKAA9v4T1GJH1UYKcych4fjiR2QJ0+4Chcdz54haEBDpDZGoy7YCERKrFl6Vqa
dz2xITgFmBFaV6CtD3/HCiRomXMZ9l70xLSQEqx+wDtiqPr9tADnWtfxxLUbiNjrJrQUYHrQ
+GBm0gFpl6lJzI7DCYTUGQauXpr9O/hXELQqz0KrUKT9aYiJB66M1TT7gMqeBwaj6XB9DDSS
CxriQU0e4C5oprqrXCDLHXUvU9j0OFSksWTVoGr6SPDC0f21FAc26Vav+eJYbNSUycg5knKm
BUtCxpVFoPuzzpXw+uCqUek6yoejsCR4gYtVRnV6k/NT017gdMWs4ZGIppqDTNhnkMRsOwK0
INZDmPrgQuq+sVJ8MKCpZTRlIPgTXEqEVGQyp5ZAYWToqDXqWr1oD/y74tWJApKhhQEigY5H
rkP9cGkDBlbSeg/Mc8U9MwKu/uEFfQ2Rr5d/LGmjmOFWqwNBShGYxlCZqkMO+QpiBOR7hA70
p2XpniIFU+8zZVpQR1pT6nEcOdQHrNTTwr+OIGOrSdIqWpme4+mAaTsCAyL0FSCaZ+GFHUqq
VYUcd/CufbDCifXqDfw5tnSuIRIWAXM0YmtcyM8SR1BAU0pU9un44sFEysEpq61DHKpP0Hli
QoGWRaqwC5KAP8frgah2JSQZA1rpoKDLv9cWIhJHX/eMjTEQ6gzBaGoGqvenjTACl9pXIGeW
VMwcIBUmhZAE7Me9exxDaIB9SlctJ+0ZCv0xaSTJDpObmjf9TiJKshk1ZinVP+eIw4kjrpDA
E9POmLEdSQNJag61I6UzzwUGkLSKuuh7lugI8csUQJEDGgeuYGXaufbCjhkLn00Iyp3r2OE6
Sq4Fagr3HTv2xBL7ihaAkfXsPwxmxrQsSzqGyAyDn/HLBBhyuk11AkGo+njhic9xISj0AVOv
1wwMvuXokKsx9JqCO+GqRwe5L/F5/jiWLzhssce+Whk/lxh1rJX7BXM07478b+F1PH0fuG3S
TW9huNlLFdwwMXYqy10nP1AfTHC2jlY7bzHa7icWqskM5P8AMikYFCD0oT+zBhxx3O571ttz
WW0higaX/wAikH0E5MtOuXhhTr3O2muIbK/2+8hurdJf50cUlHHiAh/xwJR8rnt9wurCOCVY
ZkYlllIQihzSp7nth4MXXMdqlOy288qrPbI6NNGjhnUU9RpXti/IwGzQ21u1pcWLQX1ira5C
4JaIdq09Qpi1NHf7lYK4CXEKOxNSkgBNemfhjH1tFZLZ76OWxvEmmT9Q7uRE70oB0UDzw9Dr
XnExBuZAPSQ7ek5kZ9K43Pgct78c3VqLaXbSwW4kOtNTBFbt9K1xdN40Vjte1/omfc4oF3K3
kLi4WkcpFcgQDRvKmM2iuuDke1R7naWl/crHt1wSP1IzVH7BqdATijU9WV/c7at1/T4r2KQz
KWhaNqq4Iy9XiMAZiAiy2+ew3CUWU8mpFmf1ISTlpK41qU11fLs+3fpGiUySBvbuYXqDqNSW
XMMP8MPyzV9wJVv9iubKGWKS8DO4t5HVHIK+kLXrnipYjdeLbzto9yaFQpJLMDWlcyMulPA4
pWZMbzhW2XT8XeGFkldhI0aahVlbzHYeGLqkHHriCDjl9YvKIr2NJgYXCqxU9l8cHyZdSbRu
9jFsaSSTahEKOv51I7kdaYRU8VoX5HFuizxizlhASfUChqOhb8v44NX10+4TNZ79cbnHJbXl
u0arc2JYlWAUKWAHUUHUYIvrYz3JRaXdk24bJGsFpJQ3FoH1GMnwVjq01w6YxaKlM86VJStK
42nouwXVpuHE5dug0vdJFIEV6KQWYFdJPhjItxUbRBPsm5pJuGuOL7G0/lqKVzyIw/YZK0MM
M9pyX+sCMybaYQnuwkOua0qVGXTBvjUi6G7bS8bvDcwlUProcwTn9ppngZsrosN026C9Mhnj
L6KkKQwoe58cLUUu6X24Wu5/r7CygSNkBa5iGqM1yzXz74CpbbcN33C4uxb28M9vI2qfb6/Y
aZmPV0r4jFAr932m0t7YPbyvFqNJrOcUZD/s7FcatDN56zUkhfynw8cUosek8P3ixfYFgWRv
dtamaKh1HOusUOdBgpkdm6X9nHHKokUi4jOmTorEjKtPDzxKodt2+5fhM0Ef82VFPuxrmTnX
Uvjl0pgilZjit/DaX8iXEnsO/pVjkASaUPh+ONHXZYVsuUTy3hKiQuqSE+kk9CD9MGs31Z2u
5onM3Se4UK8ZWN3YFTkKCoyyxBl+WwRxb3cqNBTWftACk96Uw8iXE/BL5bPelld0QupjUPQq
QxzUgimHqa3Gr2redqj5VeWr20MEcisElqBqcmpA7AHAYsNxN62x3ke0BJRE9XiVlNPXWhFe
lT1GAWs1uUdhcxQCS6O33gYAghkUP/Gw7fUYVEYsp5i1vyOIMNJEe5wUJD0yJYdVOKxR3zCO
74rNt9o6XFzasi06B9JzI/DGTXXtt/Zs9tbGQQy0CFZzpoQPtr0z7YWZGU5HsG6jdLud4P5K
nUoWhJHalMM6OLlydz4fHb2JRryARrJBkGDI35QTTpgNcXJ54X22GGMaZ4qLJEwowoOp7ZHC
xZXRuE1tPwhFgdWMYjVx4GvqB8a9sUXdWsZ3G44Tb/0zRM0IjSRBSqgVyK/TBuNVTcZ91N/k
juaW80kYUVGlWz6KfHGt8Ziq3zY91j3C6le2KprYhhmDn1y8MUElFxG5jTdTBcS+3b3SmKdC
dKsCMq/jgp9Q8khNrf3FkkjPaREGEMa0BFT0y8qjrjUZlVLAn1HNaVoR2w6sehrukMHELO4B
Q2g9uO7iXOisxBVlGdaYxXSxbbmu27dPt1xbXkU1jeArbXKsCK0GT/w4N1n6qPjbDauR3L7h
/wDHjuAyxzsKoDqBHTx7HBWoxm7sDul4a6lMshWQGtdTE1oMbzxi/K/+P320b1pvW/lSR6KD
Jqn7dJPXBrUjbbXfbBdb3dbe8kI3WFCtnuca+2LqMGvsyL091fEdcFhkUe/z7cvHkneETSpO
0EsqU1K2o6kemEWO+G6sf1Nlc24WNmhKgswIbTQ6V8/LFaMUZvTucl5HZxLDvdmxmtpIWKSS
RA5qf4mH+GCVqRZXW9RWvGNqnmjjvYWljjuYGUdW1a0z+1hma4jZHNt36aDfpbeR1udpmhL2
Ckkj1mpU6vzL0IGNYxRcOn26fdN0sliQNMjLFGzV1IpzSh/wwG8+J7rcbSHYNpiuM9snkFre
KCWC5N6zT80eXXFPAq/knb3sTYLHOlzbSxl4LhCrCRchX09K4Z0toeTyauI7PMrgxsFGoZUZ
UzT6+OM/Jv8AlV8Q03V7JtVxMVsbpdZhYjT7yZrpr0Y9sa+Bmq/emnXc57CWQulnI0MTkVqK
/wDFRhEvuB3LZNy26ON7qIrBcKJLaZfUrqRWobx8R2xnWrFeApqBkRkAe9MIxLHL7cyMXoEZ
WJHWgIOX+WDFPlr/AJNvoL24spYJv1Fpc2hIkjrpzkP3eBHgcQ6vp+QGK72bZb3bzruU0wXE
q1D0ijXKTT1pT04tyGz1V83ktrm8truHQWmiHuSqNJYr/EviuHlm85WdFCfVUEdx4eWKtRqO
KXs426/treYid1LrApoWUfw+OBVJ8f3EkN3uJM+m5eNTGtKFtJOrr4VzwLmV2cGFvuC77Z3B
WQ3UKsLcigJRmJZF7afLpi1vJgt7l26TYtmu7OH3J1AgmlhyasSiiyU/MuYr3wxixmeRblBf
3Ec4g9idUCTVNQxXIMB2xL4U66QdPQ0pl0zwVrmt7u72B4/s24WcWq4oILmeGoNY0BpIB3DD
qcEVUm47lcXu5pf2lusM8UY/qHQick016O66fuGNSud/Se4FgLdty4p7qvHD/wDlTa5Br9ta
6XZCf/LHU16VXFv7avPgOL7hMuyblbWspWX23dEVu1Kd+tM60wflmzwHCZLG6vNwi3RRcG4j
AWKQ6WZ0/hPZgOhxNWZA2282lvDf2t3Zm7sruIxtHLVSCpJSRa5grjY+VpzOdk2LiMyOCxjV
feAoQqhPTqHavbGJBZdir5n7Ntynb72ICNbhVmmeI5Eh/U2XiMP4b/Lr+Wboy79bvHMJLSe0
jnhZTqUlmf1KR5dsZXwwJZQxXMjxGfXCJSJCkqDQNlqAyJxG1zOKSih9IFT+HTGtZFQihI65
sR/zxFGQDUdTTtgUC2qor1H3EYEB2oKhixzJGEoRI7EEeFKHywg5cpRcwvWnngV0KyK6jTWp
ORHQfXE0Ah1ckEEdTU+eeJkg+Y1VIrlQ/wCOJBORqpy7ntln0xH5CxqpyNemrtT/ACxI7aYx
q1UUjSBSueBElQgBJJqSW8j4YrAXpZSXqoApgUpB11VP4U/wxNkEFa9u4P8AjiGGrmM606H/
ADxLMJHAOZNPHM4UkRgR2INQKdT9cBPSnqVvEFD1H0OJUGk6g9cmyH1GLRiXQtFC1XV3PjiV
BI76gzA6RmMIS5ahpzBFSP8APEibIV6kdAKjEhPqoGXIkVI7DE0cZn1EkNQZDoKYtFhIEpTw
P7u2IHrpFQT4/TtiaiQPQajmen7cCC4amqtNXRSRlT6YlaZaUWqkHsT3phR/SuWZpTr4nAiM
ZcA9e9T3864NGGEaBhXJiMz1+p/HCha9I9NDXr9DiJg40s2VR2Hf64ijcFpPStF8/HEyTFlj
HqOWX08cAtMP5lCKlTUE+Y8cSsAwz1J0/N5nt+zDFCk9TqRlkBUdvwOA6Z8lBp6u3/TEiJAQ
empGJpGQoGTekj1VzGEYFlCUquTZnwz6YhITkZMRpoMqjMf5UxGEfsU1GeZr4YEDJKnsc+ta
DAgsISNWYpkB/ER2wjCYnMEekU/H/li0ADCtNX49zi0nIUNQ0GWdfHCsMspKkSCoGSD/AFri
hgdYBdmGVM/8emEkrEkMtWy9OWR8sFAn9QBpnTOvTAgsr6as1AxBCg/hgipHJtbVavYdPxwg
w9tWLNkD+UdM8qYmSdXC6mFKDIL4HFhPpQ6RQVFQGGf7sRKQgIoJrQU/acI6plqWFCNJFASK
Cp/z8MA5h3jzqVqy+fng1vA/axOnrkoGEJBVVKtVlrlT6YCYqFoxYA9u+eFmkcvTXM5AriR1
Mgqa5d2OWJqFpZi1WIp6vqe1cWqBUsD1q3Qn/ngQ6LkCBkSaivXEkbfcWX1NUGngcOg6ooBa
oQ1qCcCMx1ipb1Up+/tiFmjYAKPMafphakBVQdQGRHTwOKo9ar1o5PpI6U74BpgHrVRRkp4A
g59KYVQtVTmcqmn/ACxBGDqVhX8uR8MS0KVzalGA9RzzH1xI51hj2UjFVpnAatDnTPLOuBaC
o1Eg/Vad/riJFo8in3AZsPPLEkbMQBT1NXqOmeJmm9FMjqVfLMnwGEQhIXU5+sAa8u9OgxI4
Cmun1ZVA+nfA0jOa1LUyyxI/ukaCQGY9VpnTCtMS9Tl/LP3eX/TCj6Cc8ytMyOtD1wVQzyVo
o6IAP2YDpgjMA1cj0HfEANlRJQSVNPOp8aYVhxT0k01J0Hl5YiBF1VCsVFcs6V8sAzT0ZQVO
WeZ8KYhSQqtRX1MajyOFmDWQljqNNRqB28MUdIYvVipGsnt08sNJKTqYAAxigJHXAyZiiS0A
9T0qK+HliZ0a6pFrSuroAemffBXSUDFFUF2LMSdIPhiAgGVASKmnpI7Z9sRwALEgAD2q11DC
yRkVWC1ISvWnfvliQ3bSaqxIGerALSGrI0r1B88Gk7miactROajPMYlphQkVGljlUnLETaZG
ZkJHgKHthGGQuyldQ9OQy/xxHCIZKilQMQpKqqQWarHqP+eJm0xqSQRWmQJyyGJQRqUBGXgT
gaATJSmXXJTiFGqqlZABqOQBwrTlAWVjkejDoPE0xMhlRAupaAkggHt9Biahm1iMVNFPU9Ms
Ok7EV/2tQU/54lh2VQKq9R0Ioa0xI6MNJY11gZ+WJqUiSzUVdNBqJJ7YKglar6QczmT1r+OB
BAAYosZCnP6VxC07L6mGmpUduxws+krhXq6aWIp50OIfAgtTqJHp9II60xNkjIKZAx1Jz7U7
4CZiDXWaqOlMssaRpNDKPbILdDlWvcf9cSpaslAyYCv+tcCRxn6sxqan91cQwSFVkJYFaZHT
XDqwVDI4JNKHua1Pjg0gFWkIkNSch4fj4YgfTIKaBpUijE51AxE5UVyppPcjt+3CEaFi5BB6
9fCmJQSqCSAa06EmgBH+uImeqtGFNDU1AA64hRtITmwHma9cQ05qzIQdKrXMDKg8cDQFUhmF
QSelPPEhkFWIZa08czhaRh1kFH69Qe1OmeCsaSAEEatSmpp0A7UwE2lFU6Qa0zP0wo+ShakK
tBQ9CT9PHFAYECMaCyvUnUTnpOJCdzEoYqusE0Y1Jp4YjqMsderSFDEMB3zzzpiZ9E1CCdPU
9PLywNQy0DMFFVWlD2z7Vw4jy+x6dTagfuC/uriJw6iRQvpI/LpHT64AdpUIbSxOdOmY+mIk
5LOoBKuB6lIpQ+eIGj0lqU1McvwGAkn8z1rSg7HpU4Qc1XLMkHVpHc4mpRVYqBSqnOvQYiQl
pqJ9OqtK54BoCcgaZEADPMU8cJgtdR2K+IOJuGUatQXp3xMdHU+rSTVRnp8W88UZC4UEnr/D
455YdSOVgo1D1mlACKiuCNWsjugb39JzBJIXsMdfww5PZk8MDWLPaQgmQ6a1qB4EnDzBbj1r
j/NeUbJtojguyNvl/wDLbzIskOrvTXUDGep6zLqfb+X8gt9zXcbC9khuRVdUJJQg50KdDivL
r9fFruvypyvcbRoLmW290AgT/p40mq3iwAyxiZrnccXH/kPmHHYnTbtwIglqZrWQB4iSKFtJ
8cbyUbisTer99wG4Rym3vteqKeAFWU9fTTG7ManWrPdud8l3aD27+5Rmb0u4jWORwOzlApqc
YKhD1YE1BFa6iSTgZ1Pt17f2V7Hd2bSxXMJ1RvEaSV+o7eWNZrN7aDdudcl3i3EF/cevJpyI
1SZx0pIwAbPGI3KHaub8g2mz/SRXDGzfNredFlt+uTAOCAe2Rxr5Nxzx8j3X+opfwTyRXCtq
ja3JDKeo0kdsF5onjq3vnnI92i9u/ukZnJWWVI0jdlPQMVCktgivU/B9t53yXa7RrKK7Em3y
1AtbmMTxqTkfb1fYfocazTx6r599v5bj3GuX91TVWJPpPYLTp5YJGLMrqv8AlG+7nbQQX15N
dQwV9lZmLla5mlc8MjerHaueck2+w/QO63O2NX27a4QSKhOeqMn1IfocNjXMldGwfJPKdnnl
axuwYps3tJYxLF9QO1MX1c7U+9/Iu5bzatb3m37c4auiaK3VJQT/ALhXp5YzIlXtfKeRbbDL
Dt+5T20EykSwof5ZqKH0morhxTAbVyTd9pvP1ljctBcgUZkpRgTUq6n0sp8Dh+up17pzHc92
oLhY4m1euO3X2w3hqC4MErrtfkHfYdtO2XEiXVtGKWy3MYaSI+MUh9Q+mM1a4LDlG+2O5ruN
rdvFcxH0N1GnwI8D4Y2q7OT8v3LkjxTX8MCzxihlgQRO58Xp1xmHxNb/ACBv8W3x7bM0N5bq
h/TtdprliHT+U4Ieg8zTFitUN3d3F2+uabW56MSTRR2FemFmuetQRQ5Z+GVcaQzRRQZ/T/PE
iU19RXMZZ9cQwgT179h54WRqCyiuWeZH+WMtnDekg109KdqHEzowraadhkDgaJa1Dr17NhRk
K0IYZrn5nDg0YI7UoMgOlTgR9AFCFIy6k+OHVh6diw1dfDLEyFmIUoBkTmDhWCU0NO47+WIy
EC6khep7np+OIUyuyv1qen0xYz8JOlSCKg9AO5wGUwoQCcgM6DrnibKoHqFdRFNI/wBMSOp0
gnoSRVcSOyKRQsQpOJGJH21GodAP8sIJlNVLGorlniJ1dtRZsszpPbECIBqR9AMS04ChQtaA
Z4kZvuFPCtR2+uJCOpl9NKVzOBYAoWoTkVzAHXCqWhsiSB3041rGC1CgOY/D/DA0LouZqvic
jXxwKkoQCvboQe2IBrWijKg/4zwo1PS2XU5jr+w4tZI5CncfcT4dsREOgFenQHz8sCO6q1Cc
sRCa0on3DOvjhMERRQ1Kk0rgREANQD6jsf2YCAo2WeRNKfXCCIKE9CfHy7YhpAEqMipOeR8M
RPSpNcz2PYDvhRqBDprWv+eJW4Xq1VqTTpgGF3IbMEVHiMSCVBVeoJ+79uKNGY1YaRn38ziG
iZlBzH0UYMWg9NddPoMSF9wGR1fuI65YQZRWoagPUYBSd1IIFS2LGtLNgSTQ9qd8B0watABk
OoOHAapqQVoOgOWeKEA0geWYoMIPQHqaN0A8sTNpg5XLqw6HAoGpCVJrUfhgdJQsBWoyY5mm
JnoAYZhuvSnf64WfQ0GoAdO+IhcVchR6e474iEjSVVs18egrgOh0irVqK109/wDgYliN69Wz
AP5TQ0pniGULlQvqpTpQ9jiaRVoBSgPc9/riVoSBQ551yAzxK4hZfSAe+bf6YgidQ+VSKZqR
4YkDSy5OxDZEnrgJiEJ1H7a1P18cR1Cw0gyGgqc6ZZdO2EBaM6cjSvbxzxBBXJqZU6jsT44t
GUx10HpA0jM+JriMM4BFDkaklfMYDQHXVzSoFNSjI+efhiX1R6PQCTQHMGuVDjWqQy6Vb7TQ
erT/AJfjipkIyam9Ppr2+n+eBYI6iwWlVI1VPkeuBEmsgBRoy+44kEhW1GulvD/pgIULKFBI
0nIn6dcKw6gGtMj2p1PmcQw9SSdPU9B2I8RiVJlNKEZ/4YkcxotKE0NcvEjEhKaBBqooIIXz
/HETuzqCQCUrQsMuuCjT1H3KtdQFanr18cSlEiEqwJ0qOhH+GIpIxX0hqAg6fwxAlCa270oK
+OIiUDU1Pt7g4hoyGopJpSop2xIzZivQZkgVAr2xDUkYZlVa1BrkemJqJF9wjTUA1qKeAywI
DPQ6KnIUJpkc+mIURUihUZA1NfLrjUGGaRgTpJBJzqO/liFpzpoNQoxxCW0tYJJofLLA1DK2
pmJ9LAeoYkRav/ae/icSpFG1Kx9YPY9sFEOBqPp6dyf+eJswcZhT0NM/HEQkagdPqZcmr2rh
ZOD6FIyzoD54hqJ9S5jr3HTriJRsdK5VFSM++ImI1NStCcq/v64BqAyKxIjXOtG8D5nCJ0eu
oaSKU6le9PripJjr9JyVfy96+IwKwkXWoFNJFdVTTCIYIQemZyoexwNYEqxBqP5erI9yMQwz
BiFWlFAqB4AYojSghdVSBSlfr4Y0KH1qo1HUAOnfLviB1CkkL4V65YDDjMjoKgip8cDWBJkB
A6UrXxy69MUBBVFCGJ69MJJSNJr6a5kEZ08MREAlQqgU61HfywM4FjGWzrXM18M8sQpl0Fq9
lyp4+eESnbUSq9UFfby6d/rixuCMaOArLqJoNQP7sRN1bSppTqfHywM2GlJNGrQdMuv0/DDg
p2koyKOhX/D+I4MagRAGzB9RJP0PliFEPSmk9KmhPniJ1ARlcnLpTwP0xILxsxKnOhBqvYYg
IEdKUWlNXn/niIUBJIJaoy8AAcAEqoMznT7QPHEiFWFAQB49DlhJqeitPUDkwxEtaag3UtQf
j0xAzMsr/camtanv2y6YhmhELAMWyJrl5eWIZgiVWPTqDdyGNdVOmLFBhlMgBop0+ntXE1oC
fSSB+J8vDAzpgGNK5E5iueFmnNDJprQdTU1Nf8sKMyrqqpzIrXqDTxwHQk6+h9LdQcR+TIlG
bUaqBl2FfDzwHBI4ajjof+KnCQkUY0PVug6ficKEHdSWGVctPc1GVcGo2tSST9woCo6j/XAN
M7DXRWybyyrhBe7RQGUgUyetST4U8RiXpNXMitRlniODVBUH3AG6nvXEQtQodKmlcgM/xwIx
NdNQCB2zzxKUguhK9zWuWYrhJ09bDUupiKeX1oMSNQkqK9ftpmDTz74ho9YzViNQOQ7fU4la
jca6itK9WXOmIfImWjfeKrQHx8gcSwgrjM1GWfhiKNEIHrq6GoUf4DChx6WqCtdOTAdq+eI6
dQrV9tdLUNFyP/BwInZAgY1JUVGXfEqiAq6lWrqzqcx+zCIkZgBpJKr5VGQ+mIgzqxQAE/mJ
7HGUR1BdS/iwOeGq1Gnt6jQj26erSMq4GJPXQ6RqgbV6KUA74o6I0kLHSE0IO5NRXCAssbkM
Puz9Vcj9MQoTUp6jkKVIqB+zEEqs4aqUzyNcSlCGjLERrqNM2NaE/TESQFwQDrBOYzFPpiJS
fdpHbMnMEDpiWky0CiMCtKAt3GIioQ2rJSOpJqDiR4WFT6SGHUNT93lgpOSKGirn0briUR1Z
SCSBnmfHwwIhGgFdZVTmKUzbvXCMDmk1CpEgFD16nz6DEaNY5A5XX1+6vfwwskypRlerUNan
/ljJL+Y0Z0gAdx59sMOkqMrBQAzLllkAf9cQ+wk+4uDpVR9xH+GIHKsUalMx18/qMRCqSgq4
cBkGakD0k9/PFhQ3K6oayEMB9wGVfDphxMpuooVAB8c+uf0xta5MvDtTqcGnWg4rbfq94tbN
mokrqtRQEGvnjpwx3fPH0Lb2CcfjtrWHRNaXTrHJbzxClW7EGtfrjnfazxyuLb452qS5e5t5
ksmYjSrKdIJ8FGC9N9apuWWHJ3tp0ItL6xh1RPPGitMpHXSQAe2eDxy5sl9eYuEqTmunIk9s
dOY1vrdfH/G472Ga8kl9mdWCxgjUtOzEdsHTbQS7TZ8msJReQxx3UDSRJcRoFc6PGlCa4JRr
y6eP2p3iJqVyz6nwP44sTdcC4xb3tpNcKzJcIw0ilQcu/cHBaMX42ew5Nbn9VFGtxaSmMXaI
FlOnLS5WlfxwbiDJaR7FPabWEjuLO9b2jBNEHVdZoa18fuqMM9Trj+PNttJXuLWb2mU1jAWq
+nsPDGr2a54dk27kdiJLyCNbi3mZDNCiqzU6aqAaqjGLWcOLGPaL2222kU9ndtp0SIGpXJqL
Q+oeWGVTyqXf+LbHtG92s140jbNOCZYo9OtTWh0164r1rpmhv+OcL/qtpHsW7PdW0xGqCVWS
WIdPVWlaeWCWs1fLYRbZuNttEkUV3ZXZ6PGDIv7cwcMusy3U68As9vMt1byjUlXVCCchnpr4
YtOOLc9is962EboYo7O8tgxdok0CTSaCoH+OLcMin2/ZOE32xs8m7y7fvKanNtKmqOWn8DL0
rjWiuPikW2JvEKXkQvI2dUQk0GZpmP8APEcavmu08Usd3spEtv0tjcyMtwiMTQdyAaUIxM5G
g2zbo7qaC2tP6fumzsD6Z1X3UyyArQ1+uMyHGD51x622Xd5Le1UxRlRKsTGpUN26nLFFFVx+
wi3Pc4LNmIWViCehpQkAfswpuLbarNdxi2C+tI7i0lBeGUJ/MialKq+R60whk+U8eGzbm1sr
iVWAdGAoc86EHuMGmqZgAKkhadT9cK02k9QtSRkK4kPUaVPXw/0xKhUEu3ie3fCyMMeoHp6B
sAtMA1aVBHl4YgNXUinTsadcDZ2TTlXSD0GJYIIdBFPV1r2xoYZR6m1H1dsSEFUrQHURUnri
HVJBU6sqDADsTWoWp7jyxpo4OQqoJ7HEQSIwfI0bw65Yoz1BgZAkZUzPfExYS6SwyOfX69sS
0VCGDePftXA3phpdjlQjvX/DDiKRqP4jpUjriSQBMl6L0wC0GlA2sflyriJzVzmKE5imLRh1
6UYksex6HEYRJBqOoGf0wowaPTQClenicQ040Uz/AA8QcOKGX3K6B9orX64AQrX16VYdwe/n
iJd+n4HChFTXPMUApgNgqhiVZev7xiZygpnTIr2OFYf0ii0ypkAcQ0jQEN+WlAK5A9sR+TVJ
r3p1rmfwwIqGunoa5EYlacag1fDLPDFpMrKwKkUPfEiWuqoPQZg4FCNSuqpzyqO1OmInYqM+
/wC+uBAWoqDl4E/5YUkCs2mlMvu8MSDKaaichQUA/wAcRptVSSM6du+IeENIr6vqPPFgpiBp
qDn5dx54kdxlVaNXIg4NOG9st5A/a2EBpQZ/dhgOAa6l+priIaMp1daHLyrgRMEILZ1HQdji
IaZdhn08RgR3LfaPSR2OdMSwv5lTU+fhiNgDkQSadxiAi5A/3eH1xYAIAx1aqnpU9sOnDEj1
ZdMuv78GMhBWhqaeP4YjKYP2qademKq3Qk5VIq/j3piZyomDuDU0bLSR2Pc4moZxWg6Z1Jr0
GKDqUneigfcRllgOIydKE1Ap1B8MJyomNWoR18MK1HKRUDI0PfADOZCFApQCmodcBRIRVgPo
fP64SHLVqoG8O2CkJICnTliOIGBZq/bXLCMA7KGBU5U64iDWsoCk5iopgEoJNakoCKg0VvDL
EkeoE6R36ntiRuwD0H8R7EnAqH0qxJypnU5AeQxYkbjRVmyopCg5VPbEgaHKAn1HpTz74Vpp
KqK5HTmPDwxNBC9E6K2f7MSDq9tmDZdxTzwwXwYfUBoNCO3YDwwHQys1F0mg6EinXENOz1St
ADTIf8sRMVVVFRTyrXLtiWGBCRnLrkPH/lgA9TEVFWIGZ8q98QJ1FW9VaDIdK/hih0o2qdLd
fHxwkbIKBPyHNhgFhlKUc1FAaGuQy74mdS1bTmQykeogZYGoaMjOtKEVB7/uwqjWMKTnUGlQ
D44tGCSNg3qNT5HI4Fg9R0tqzIqFFe+EUSSFq1WukUJGIwekaj+Ve9M8sC05ZQQKg174jKKi
uSXrl0U+H4YlpfkBocuvjXwFcMBmrTQtABXM9TiQ1hAUUPpFagdRgQaFmUkkBsxXw7YkOSNA
BWmeRPckZ9MRRaKg9aHucSE+lQpBNOhr288TNpjVqgrUAih6CmJuFKoZqLQAggU6+WJGBJB1
tRuikeWKiIySpNRWtKgdKdMQAVOokZutQfpiMEVqaD6HLx74jqMsyEBB3FAcTNC6Fe9G7+dT
iFJVzJDalpWhIH7MRiSLUeoD+GXfA1iMUEh69Ovav1xKDdhqzFAaVU+JwFzPHJqAFadcunnX
CCf1EEZDx/zwinUVVlJ9Xnn9DgWgoMlr4kYdGCVQB4RkZ+P7cSkwLBSK6SQtDln38MWEpAwT
1MBQ1Lf7cODTF0GlhUA9gMziOnUK75gimTHtQ4KZTssen0t5U+vTAkelwoDHI1DEH/DDjNhh
mCB6iuQPb6HtXEpBkStkTQgUBPj5Ytazwmce2PTQ+X+JxmCnX1gHpX/HDEjRXKlQcnPbrl3w
qDRqBgK9KVp1PngaSL3Yj0gZgf6YkBZtFQc69e+muJkINdWnocz559sCONQOnwNSSfHETl2D
sSagU6+eNYkZY1DGtP8AimCsjUFTUGtO3f64DkGSpWgOY/wOeImLioA+6uYrTIYcRnaQGpUF
SepIwo+k1ooGk50bM1GLBpSaZYwanzAH7jiw30DKqpRVNctX4YhpKGCAjv8AmOI6FBIQxrUj
7vp5DGWcEiswJUCtKjPt/rhH1CiER+r0hjQmuGmTCq2gioOWYrQjP/DBQcFGGonSAeg7A4DP
TFGqy5hhnQUIpiJwVD6T9tajwp+GEifQAEFQO1Oh+pxI2pgooCprSp7jErTBNXm4pXzHjiB1
9gVBBJXwGQPfEYKgKkEUXvXoMRCvlkG/NWvTENCY1q3d/Ed/DEsJftogHuHqTiFpiGDagPtP
qPagxMSnFJPtzNKmngO+JuXQypIvQV1ZkLhV0Sq9R+UdgB4d8CgdEjHVUEdyD1xI4Uop0v1y
Nc64UAFD6CtT+/8AHFi0etl1Ek0HQA1FPDPEtAQAC2nUOjEnpTvgRzmP5YOoCp/xpiOiDF6u
cuwwEitVGfTJlpTPELTUNPQh1g1LdqYTgWMgahGoN1Nc8RFG0ZFApPYU8sSpgM9JPo8Dl+/B
ooFSSMBlXKuRHY+eIDVSysfyjKh6f88RMEanqXLqK+HniQurBR6dIqRTFqIxEHUTQDOniTg1
fUKSW6khgFbVUClakjuB2waSOpyTHpAByHT6/XGgFndTktK/mwowjPvD1VLCqkf4HBqxKdZO
rT/MNPURWv0OIwIjAZgftp188URyAykvnnQaRniXwdDGFeo6AlSRWn0AxE4MZGn0sSKgnxxI
DmUmmQpWnjXECbWY6ktroSQev1xCiNTCGyU9K9T+GJI3ZmagaiU+0itMQpKtATXI98IHoIrp
6AVYePicBhmlCtkfRWin/PERsVUACor9tOvjiKNgqn3SCrMPr9BiiRzn+S1AAwyoTmO+XY41
GmV3SRveC1FfzU6VxpjXJ7J8R91PxxkrXY3Md3DNGNMkbVVx2I6Uxvmh7psvNtuv9vjTkKSQ
XkBBt54/5qMOqs+YYNi75xqSrSy+T1gvRFeI01gDnKKCTT41PUjwxjBZUr8x4zbLczbVczSi
Vi36a4hKE6u2oGn44MYvMvypdg3D4/aG5i5Jts0lxI5aC+t3pIhbOjJ9pAOHdbskc2y8p/8A
X9yma01XG3TEhomorAdjU5VAxu/CW91y/bLWCSbZZXdpVLNBcAhQzZVJXr9RjGL6sM7F2+0B
+poO/X9mFNBxHll/slw4ze1kP86JDQnzDHwxVL+fm1hZoz7M00YmYtLC6U9bfca9/qMGLAR8
z23craJ94Se23G0qbe4io0RJ6ax9wp44oE6fJdzFdKZqz2SkqygBZGXpqHbpiWHuOZ2FlBq2
S4lCyVPtyoFNW+4fsxIoeXbVuNlA+6tLa7pZ1Mc0S60IJyGVDqyxQWOG95xeS7hZ3UsaXkVs
SEjnQOrKc6EdMWNfEDyPlGyX09rebVtKbXeQtWd4nqjDqCiHp51w4xz1q2bme07isN3eyzWW
7Waho5kUGFx169VbBjWJLX5KlN7o3Bde3OPbl0AIyoR96jvhxIb/AJZaWu2yWe0zG4gnGQlS
jKp611d/pigc+2cm4u+wtt+78eiurqIE224W7exICTlqYfcR3w2eqeq/jVzs0G6xSXjywwoS
0RFHNVNQG8cTWNVz/duIbtBbvt24NLNDUSQSxMtWbMUalMUY/Ks2efj+3LHuFructrucIOiB
0OksRmoIqDX/AHYNaprjk227xvsM+/25ntAntTC3b220jOo8DnjLO0G9HjO3X8F5xa8mkt0o
wiuVAZGB6D+LGo0t15VtV9Jb7t+oks95tlztyn8tqZelhUfgcTLM8l3+bfL4XcoAZV0gqNOS
9Mu2GRi3FSV1KKCtcq+WJrmiIZaH8xNB/piaw+nQpzrXoppT8MIwKhkNK1HUjviZSt6lGdQO
pPXE1IHQwalNQPWn+OAWCyXMUPiemIaOjHMd+oOJoRBAqKr9cSwzUOeRPWuFnTaSDVMjTr4Y
RYddepaEsa+OAYTkluuQ6ria0ejVkr5Dv+/CdDVhmKZZV74hTkAZ1FCOmFkWgkijAntTrTyw
LUgEgRVNaN9xOMtSI8idKZEGmeWNQ4L1k5fd3J6YhaY1DZmtBlXExpa2ocq+WBqCBcZ0qSBU
+AxE5Y1p91Tme+JomY/aBqoeo8MLIfAnr5ZYgSv6woFWJp50GeHUcOD2oTn+GJk+lVYH8vYZ
YiSgkHMZmvXqTiWlmhoBUdh0wVoUZfIg9KgGv+uAGbVT1HM/dTPPCDAChAqT0OdCBhWHQg6h
ToK5+GKnmnIpVhWoOdfPAqbSC1akasQOF1A0Iy7+GIwmH5jWvYeGKVUxatQeopWgphBwQ5AJ
yNfLAKInSCqqCR3J6YMalRgkswYjyIPSvcYVRCila9CaMTiRtWY7rnQ+eJaP0qQOhOVR3OCN
aEgaTqFa5ZYWaTAKMwaHoRliGGVgpz6sPwwIlLE/wrWoPn44SYkBulWrmB0wgiBSiECvXx6+
GJEVBB6V8cZtSPt0qehPTCdF1zDAaf24hQ6/VTp2oehGJQ7AEEUPjTv4YGsRMQo0sK0GRwgg
NQBbL6UwHTmqsVIAXrVfHAAlQK9KnM1wgOhftXo3cdsQCq+LZdB54GsAtNdcwa9O1MKC7KDU
CoHQYmTHUBWubZEd6YqpUYdSdIOdB26YDpiKmte+ffC0i0sCVBrXPEUbR+r151FRU+GC1kLE
qaEVIH4Ygi8gM8s6fjhKNge4FfHAZEOsvUk+noThh0DCQsFA0qDnXwwr0JZRUHNj91B2PamI
ai1E5gUoc6f64zg3CVzQqQKnMA5D9uLGpfEDUBIK5D8MSM5VmBJYOMyCcgPLEzoCDoJOasaV
rWuJaakrnQoyrlqzP44K18m/mBjnmOw6n8MRRzM4ByoB27Uwo1OwzqM26YlgQRVuhXIA9wRh
hoCZWzbOlaKMuvjht1mHQOtNQoK1YHMYyUgaPUSoofE9csB1HkFcVBJrUd8RSwE5BwAKVFeu
JGYNVmbIaep8PPELDKS4rpoFzBPiPEjCxiUBa1YAgZs3f8KYic51UVqO1aGhwEy6XJRaqvYd
cu4zwqxI7gEMoPqqD4YDp1JQUJzPRT0B/wCeIHoWUNq82+nliB0ZSyn7owc/9a9cRiWSmWeo
V+nXEhKV9sCtC2dRQ5YFaeQkn00J6L54oz7ToQrgE6gpqa+JwmJA4GrpqOVT2xYYVW9QB1dq
Dw8sQpDqOlB1wI762OlAEAzBr18cRwRjAbIguRmPD64loS9BQ9G8O+IECT6SKZ9cSDItRkuQ
rQnucIoVcgKD36/TET0Oo0b7smINRQYlaYKx1IQFzqvhQDvgUgVaOpOnyFfyjCjNrGkgCh7E
UNBgRmq4zOQyqP8ADEQMF0hV8aLXscSMtHY1HqWor3rhAaLXLoB0p1/ZgRA+moPXt3zyxNEP
tNajqRTqDgOhB1qWpUqMgev1xM6bUdAzoR2B6/sxIwVGVSWIFD9MRw1SktKZEUP+4H6eHjiZ
sCGJqytQ0oAP+PLEArQKQxNTQmnTPCcCp9taCpJrQ+Iwkkjq9WHpz9PUeVcSkEyqUDLkx79a
jBTaAAKwBWlc6jw74pRtOKHrmK9emFCkSPRUn1dVHmepGImB9QAQkHowyxmo4UBfU51EejxF
cAwiGA6fbkR9O9cKwKaixIBr08M8SoVUfxUIJB7fjhETtpWKlexbLrgb3wIYN6yPTSp+p654
hoHDaunUfaepwI6q5TQx06ft+vhhxUSqWy60/aMKBICfVWo6BQfwB/DETor6NFaOTUVy/HAz
h6Nm9WJIqUOf4jEsJ2DELTwLYicqhaimpp9vh4/txImLEuA2k0ArTw7YhiNCEIpXrXVhWJPQ
F1joTWnn3OImUq+oVzPUd8IgH01oMoxStP3ZYKDOdClVqS35fLEqJSaUIKkjPpQnyxNGahYr
QUIp1r/xXAMFGhYVAoQKHwwLQh1dyG9K50A88QEhf2yEyJ61OXl1wNTwnUUXoGU9M++KE41F
vuA6au34YQcASAAOAxPpLdPrnhCJlk9IqA3h2I88Wn6wRLlQaio7+OJGkZWqWGoUpSpGeIhd
ZBUg0ag70B8Rg1kWoe0FDAnuOmZwqgUqWApqAOY860xCncorhdLaiST4YZBykUoGPoorDOvj
0HTE3zDfzAQC3gc+/wCOBs1AWZ19JrpAY5EDvhZsRkr7gdO35ewHjiZwZJowyKkVFfHEkZEd
CzEgAgZda0/wxatSIyupAoT1Nf8AAUxCU8kYqRUggdD0NfHA1iMKSlGrqr0r2wKC1MrBS2o9
MvHESOthU5sTpY9q9q4hh2Z6FVzK/ePAYSYMZDRR7enMkGtPwOJEg/hA8QR0xEmVtZTME+oK
e/7cSIuOjZOuVT0P1GIAVWyoSGpXwpngAhXU4rmwzr4fTESpqQkAqOrMT6vr/wAsWIgWoIzm
tPu6H/rgp04fJtIzNPV5dMGGhCBZKHo2ZGVBhBELVanM1C+FPHCMN7ZCMoJBrlSnbEhRKRF6
uo6muWLSH7VFAStanvWvfLCtMRVGX1AjoKZ18a4ikDCoo1AvQjMjBTQ+ihAjNSakCpPiD+3E
CDgkPQED9mIGQF6nWAFq3fAcMWqldJYD8vTM4QaGql9ZybJvw6U88KkGgBUaBmKEsO/7cQw1
Sx1UqTWhrQZYNI/S38xjQ0oSew8vPBTPAtroGALAdRWgHhTFoPEsjO1RqAFWrTL8MJxz3QlB
OmhqKA06VxqFlNzQfqtIqQDpqcq4bWAaf9vl0wHXdxuwurndbe1VqPM4RU6+o9Pw88al9FuP
cbH4Y57LGPaghDkaoonmVDIB/Bq9Lftwd963/wBEEXxtzI3hsruy/RyhtDm4YCM+FHGpc/HB
OoOusde6/FHO9ttRdtZJcWXQz2c8c6p29QU1xa53odh8Ucs3CNHtYbSYkeu3a5jjnqM8lciv
0wa1ui2r4r3+83s7VuMC7fcICVt7iVEdlP54yfS9PrjU7jVdnK/iDkOwrE8ZjmtJWEZlR1Oh
j01KD3xS6LasuO/D1rcxBd1uLyO6fKOS3COhPf8AlmmrT3IbDaxjg3f4i36y3IWllcwXMMlR
HcThrbOvRw4op88Yb5rj3D4r5vtqrNeWSz2ldP6u2ljmQM3ZghqtPpi0Wui3+HOdyqXFkitS
scLzIjSAitU1UB+mHRqut/j/AJdNeyWUloLK5gbSf1TiFK9hrPpzxrzGrE+8/F3Ndqg/U3Vg
skDZR3NrKlxHr7A0Ppr54zKE9j8Wcw3Cy9+1ht5nKg/p1uIxOKdmRiueK39M3UNl8Z80uZpI
49vYSwHTcQTHSY/Nv9vmMOwzRbl8Zcx22aIXW3kxT0SK5idZYWdui6kPp/HBpcKcW3xNwi22
W1Md1NT2A7DS1c/uHpw4tdtn8c8wubt7ePb2jkU+lbhljV/+xydJ/bg1m0HIeC8u2BS+57dL
BEcv1KUlhFf/AN4hIwSocXAOYvs8W8Q7ebnbZVLe7bsJNOnqWQeoDLwxr7NO/i208OuLWT+s
3LiQVJUOInQDMGOoIY+Rw3qn7VarwriV1ae/t26ybmkpHsiMhLyM9dEts9NX/wBOD0V0WfxL
eXNtBd/qp4YWJEiS2zBkJ707geBxmhkuTcaudhvv0kkqTqyiSO5Qada1oKqftPlhWuGwsri9
uFgtv5k0lQkPcnywh1w8f3uTc/6aYXjv/wAsMo9ssR/3YjE+2cc97e4tp3cXO1tM2hiYw0ik
5VocMvnjn1PXXy7h78ZmijW7jvopgSk0WpGp21o3RsZ+2tSYogCoOrqc/oOuNNauNu4Nybc7
UXNlZF7SSoS51KqlupGo/afriFBuHE+SbTMsG47dPbyMDoZk1KwXrRk1KaeWNbDzysm+N+aJ
bJeR7W1xaSLq12zpM5BFfsRi2M6cVuxbNc7lu0dilu7z6jrtdSwyEDJlUyUow8MNZrr5Zxuz
2i7WKzW5jBGqS2vIykqV/wB3Rx5jBBikBoF1CvgRnn+GLEuNj4jyLeiz7bZmdVP53WOvc6dZ
XUPphtagd04ryLbLwWl7t0sE0tfZ9JKN3OlxVTTvnimM9O2f495glmt3/TZJrRl1iSHTMNNK
kkRlmwasc218T5DvCn+mWjTutdSalRhpyPpcqTTDoxDufH962qRYd0spbSTM6pEOg/8Aa32t
+3Fq+q5PEIP/AFld1mju7SV0129wiCa0lHQa2Spjbr92CU2K7auJ8g3WN22+za5Cj1orKrED
qVViur8MXwpddEvAeURophsXuGcZwRj+ah7q8bUYHGpWM9RzcG5ZBY/r5Nsn/TKCS6AOUA66
lQlhTvUYPs1zwbbeHcr3SH3tusnuYa01qyAlR5MVOKWKyx3bDwy5ud5O07xbXFldGMsiaf5i
/wC5QarIPpire+A3fgPJdr92VbGW62+Mki8hUMun+IqpZloOtcWuc5tc3F9gtd93H9DJemza
QfyZvb91dVK0YVUgHxwUzhHu/GN02vdW2yUxTzoRSeFiY2UjIkkVX8camJLuPCOY7bbm4utp
nFuBUzx6ZVA61b2yxxbBbYpa1oKhmNKnFTCUjMMKsf3YKY1O58HMGww75Y3gubaYD3YZk9q4
jcmlKAlWTLqMEqsV/GNhm3TdBB+mluraOjXCWzoLhU/ijRiNVMVrMgNx2NYN4lsbF2uSjUjk
kQwuK9nVqaStaHtjX4VnqTcuEcu2+Brq82qdbUU1zIFlVQRkT7Zb0+fTFrX1WHHvjfft9sXv
LNRHHp/+NI5BSUg5rUE6T/3DBelYz+42F9tt3Ja39vJb3MbaJI5F0keBWvVT4jEzY5fTTSDm
c6dqYhpZAaR1zqtcJJDStehp1/yxALBmNK+nx8cRO4IANa/8UxIStmDTMdcvLEg+4urPJTmT
3B8KYqoLWpWldRGZNMEVpwVqQxyOeNM6HJgTkD2PUnGcXyclW9NanpiUgEoAQcmGI5SNCRnT
xr1ywnRHrqBoegH174kcAhT9a0xAVK9anLL64jIEh6AHoM/rgJiGBIqKDMYoDe1Q+Z6+eFYf
QwFQaaepxLA6ctTDMYqDNpOR6nLwGWDFpAjOhrTx8f8AliRirdj6q0pi04GQeoVPQUr2riiO
SQo1Hr0/64UFkYrlmfHw+mBCFAoqRUZ6cSR1b8xrTP8AfiQAxKjua/h9MTOnDaRXq5yJ7YDA
FTUHp4+ZxELRjTmO5woJGQ7DwGEAcswzaukU+uJmwFMyaAKczTrQdMAnPoSDXVXLoR186Ym5
NA1QCVIqOlcRD62BByJyGAoizadNAGB9NT1y6Z4kE6glaUXvpxFztp7GgP8AwcsQRuq6RWoX
+EjLEgSNqQVB9Ip4VpiO1FWlGANRUN454dZ+QGMgUGSn7gMq+OAwLIpYgGoOdc64mkdAE1MG
0g1Ne7DviAP5b0YKfUCAR4d8SCy6AwUDRkKA+HfExmEz6lBBoRU5eXTFjUqP1avcY1JBp/zx
Y0EBnAHUKahenXEDDUPSFqWJBAyoKdcSDItEBAArRhn3riJpFLOgqRU1JHgPA4UTGoAH29QP
r3IwKwkyUkLqcnL/ADwUQaKqai2ZP3KPDA2jB9ylQeucnQeNP8sQoi4Gpy1F6MT4jEzTrG1S
qjOgJAPU9sMUh0AWMo4LP9fTSviMVI2TUBpNew7GmLWTsPuY9ajSRTKnbE1BFQFWoLMM/L/r
iRgxZCerE18/rhZETRgrZNQaj3NfEDpg0jC0Qsp9IzUf64EdQzKy1+3MfU4gmQoVYAUI6+OJ
aIkBFIzrkWOKCUWhK/mBbM1/wwtCVWD1BqCcq/T/ACxLCGkfbnU1p0r454gdFUdxXv5fXAZD
MSCATp+g/HEdEqnXmcj/AKYVoJnIGlV9INFbtQeODGfk1Y9IIqg6Emv7sMip2AIJH3Zdf8sR
wztRgtR5YlRFAFquUnTLAjAArpLUkIJDd8SC9QwbSDXoPOmIygLMAAVzP3AdsKpPpQAIAOgI
GdPM+OIB1hasVqR0pn088B0xq2YIqaknvn2wgxppIQEZZnz7YFgVGkBic+gjP76HESlbSoqy
qSR92f8AhiNRk1kpXr+0+OAeQhUPSgJPQj/PCQscwMgejEd8SPSgOZ00qp8ge2BWI1RtI1N6
GP4/TEycBtJIy1flIzyyxprAsjABanUM/wDXEKGj9CKVyNMDN6EC+hQ1KgAAdD5V+uBqlVqg
Cng1f8cShvbIb0kA9anv5YUIMSBTqOtelfpiWmKg0YnvnU0FPDEdN7ik0ocs69xXABlmHRDn
9Mx44iE11NmFJGQ659MsQC7LooRUAfd3r4YWSAUlTpyFCc+v1xKXRghFNM1GRWlBSuCtFUmT
IHxHj+3EZDSAoGfSS9CVHc07YkSTSMpYDSSBmOmFAChnZnNCa1oKfXFqIutAxJPYZYAkzIqz
UORFehxE/uEPXTl28a4UGqsdZXNRQKMqd6VwKCjybURpz9IJBH1xEw1OfS2kk0DdQQcKO7ok
RVs9PfFGaj92ihgalRnQda/TDRDgjSAKKx6ZZ1wGURUiinOmWrvnniVJCCdByArUeGIaRVaE
UNB9tep+mA6H0o3pBzFD2P8A0xAmRMjqqQAT2GAkfcrn0/Me/wBcCGaP+ei0BqfHC0j0sX9Q
6nIYVgjH6gKAEZgHyxLDqH/NR2Jqxb/KmBJPQe2ksKBvrhSBPbUj3D0rkor08cCNIGFWUlcx
VSc6HCDN/MyIBypq+nQV8sQCsSZKDkDWq98Qwcj6iqjzOYyOFFHIHWin7jl45f4YHSUztUkN
kv8AEK9cCFMNA65075jDKKFCgFNP1I60wo2hjUqC1T16UIxMfKRQRTUAT+dqdBgaQZVCq3/6
PUeH44hEraSoCsRQ/b4/XE0ar6Bq+7p9RiR6r6WUZg55UGWJBYyU1UrXPT5+OWIH1OKZh1Ay
J/biICyqS6HI/cevTEhf/ZltFPD6nESpJ7or6j4GtR44lgdKe4zKrMcsj2OIYIsBqUDXQZ6c
6HATKBmhAUtnqBzqen44gbTp+4nPImmRIxAIaQIWc96Ktf8AHCThgf5hNUTJaePhiawXqVar
muTE5V+mfbALCZpFkqDQHo2R/wCBiSUlczQAgGi9ycSRLQkEEgN9lc/+M8KO4NaV00/AfhiR
NGSDVtB/46DEQJ7gJIKllz65kHufLANHqNRpJUt+Yf4fXEjo8RLL3Ayr/wAd8WKBaUBRkVP5
TSgyzocWI8ObHXVq5gnriwwx0HNRnXocWIjFJpJrpYn0qfDBasFqb7UX0j7h4eeIHIIIDZoQ
CD0BwLA6pCW0GjVGbfaw8vPCTFyCdSgFcyev4V8sI9c87tQlSAa5VyGNQstujMs1VYkk18q4
ViL9RN4dq9cZS44hMkO92cjv7eiVSzZ/ZX1AsOmOvMY7nj6M5tu8yceglsb33YXdfa9mXI6a
VPpP7Mcr8tcfHqf425Wd3SXbt13BEvC38k3Te2hHQj3DkenfF1yeo0lpsl/sCXF1uHtpYtI8
guoJo5IxGTVVIDV+gpiZvMqG62CfkTWd7stzZXiwSe5JGLhEnVRmfQ2k1wKRW843Kwt922xL
yTR7T1kVjVlXL1EqTl9MMhc3NLG+CWe/7eYdy26CmuazmRupHpZKhlP1GM89en61vOIbvtO+
2MKWd2sxiJEtssyQXMDnt7cho48wcatZxPfT2g3cbXcXYW5kDPCb11jRwv5RIGdTjJkee3PA
eV7JuQ3h7uK1sPf9xitwrRlCT+ZT1z7rhjHX2nwbnu83MVnZSW94XhZg4eGTUoav3DScunXE
eatvjTms98stjuV9bT3qjTDFuVPblQ50Eh9JI8DnjX4aXm8Tbzt1m5veP7VZbfcNpN1azNpI
buNBpWnYjGYnVsltJcfpZ7O12vddvCk/rC4M8VD+bSVf9ow1ZXXud9DDMVhuYYZmi1RrDNqU
oPAk1ND2xmC+Mtxfc4jtt089wXkilczrWrKAaltPWn4Y0lm3F77dN32vdbGaGXaoR7gnjkDg
d9LAZpX6ZYvtjUi3u7KSeY7cLiA3jxs8ETyoBICKaoyTRsZ+ReXme8cS+QNhsJmvJWjs5GIe
BpiyaSfMlGP0wsdXHXxiwJ41JLsvNDtO6aGe72i4IWNhnkuf5h4A4Wnnd3G4uKzPVyTVgcjX
GozurbicUFxvlml5I0dqJF9SyCN61ypIen1xq3C+io9+2+O2YyXF2ssCanRJ1lJUD7qnrjmY
8F5xvdru+8yXVrcXN1E4ya5VUYUyAOg0NO2NYlVsdrJc7lDHBcLayGmmRm0eodKMelcKeyx3
2yXEdvte/j2d+9vRb7vkrMR9nutlUeffGcTKRcX5le8yjWdUupLV0Y1lTUYhmrpqILCnhjUu
LFl8s7BvIS0uZLCR7WIN7s8YBCVao9wg5eWOYeXE0GogVH5WP+NMdINey/HttfXHCbiGwulF
wwkVYGbQGYrTqfTgt9ad1n/Uto2e1t97kktZoRSRpzVK9CUYFlI/7TgVqo5HxXm1zuEd7tGu
OzdQwurWf0sx6NSE6qeeAyqDZePcu3LlXvXEa3l5aSKbxzPH7mn+Ifaz/wDcBXGt8C9+TrHd
rK72/c7jbzd2lsGDLMplgqW9KyUNQCMEClHL+AvbhLvhlurEBGaKVlFe9KLqXEsaXhUOx3Vh
ePsdiLlY2/mbRczl5UAFQ0TtpYV7EYjmNByKK5h47HNb252+WAroMs7SRwlsvWHzWvTPFBYq
xsu6XhMm47fe8fuKBl3XYrn3LV27SNAhJA/DCscm2pzD37yzkjteUWoc+/onW3vEyoJY2Gn7
q/txVRXcx2TcLfagbe/v1hZlMmz7qxdo2ORaGZqq48aHpihq52bjfJbfgd1ZGyczzwuP06Oj
K+rMEaWoajBRhcBtUm48dv30SbfLHIwsJnrBKBWpozDTUH6YtMjs3Q7qnJ9vW51eyVkW33PU
PbmyyUkE+oU6Hv0xM/lxbbvO8S/IE9nNcuGFu3twlxGXIAKimWo9fPCYs7TbVU301pYjcRHP
/PsFkMUsMpzYppIKVHameDUtb2ImXb5Cj2p+2F5pdYViPtDN61bxxDFNbxcuseW3N9uPurtb
xaIrqJg8JII0mQJ0zrmV+uGtRmtj2Pcrnnl1uG3WyT7dBcMsksbINAkU0Oiob91MVEi0mS42
35FWTcIBBDdW4toJLgUt5T/BrHprn3OIflfPdXe0zTtHxie3RASk8V3/ACmz8BqVK/7hTAa8
8Xk3E4bq5/rPEYZpvcY1R1WQEsSfcA9B+q4ayqd5u+KbvJCuwbXJtV1OwRkeYNE2rIZPkp86
0w4ZY9Bu+M7+nx+LP+nSG8hT1W6MruQGr6aE1y8MZi6msn8a7Fv0++Wu4wwO1vYyFLhlZVdD
Q+lkJD5430zzv5W/KI+SWXO1uYUWG4nSNIZb6gtrmi6WhLt6TXwJGJqLYbPvMiLJ/wDlDiG4
g5e1L+q2tmPYqC3thj+Axm1YpOC3N9Fv26bcZ0i3KZyGt45AiSupNWjppQ/hgUusfyo70+8y
pvBuEuY6rEl2DqVCSQEJ+5fAjHSMWqXSlanqMsODSUZ1oBUdMS07nuRQePauAiFcxpyHfEMC
7Emlcq0oRiawwQgEmgU9fr4YhhOFIrQZ9KH/ABxYYZNJGVTl2yP7MTJyoemR9OY/DFUVcySK
V/bhGHCkf8eOM04QYZUzI61zxIKkaiT3FQDhRn7f7R+FMR0qUAFP/qOIDXqAxzByyp9MRhml
qdIGailBiVpAhhQtTtiRAroFMvEYCEH0UGXcGuFCPUEZ50qPLAMARqDH8wPUeGJGZTlqORyB
wacEG9AzAJHXtTEfgOfQjI98S0BCNmSAR08saAgWyzyHcd8AwJ0H/jpiRiyAEZ55fTFiMqBc
jma1p/ngxn4MVp0INDkMLSMUJAzzxELZUFat37ZYgaoyJ9Pn/wAsTUAaqxIyrkDhjHUAVoa0
p2Y9iMSkRyNpYaq5/aAeuJWYF1XSDUjV+XsMBRtkKtVV6AeNcARfcQzdsgOueI4DUfUCKdqY
kiCFnIOSgfccvwri0xGxdsly8COmJIpB6SumpXqcQAQwNOtBTy/biIBpYFj1PSlKZf64aoik
1AaR3JyHhiJizUIFCGHTAcA5ZAFAJLZ5HtgWBqRUUBIzrTIYdV5NpGda1alR/h0xCco5AQSa
VAoKVxHCRxUkD09ycQCfuoO/3HE1iN5Foqg5NkCcz1zxAyqVYEGpGanrQnLCqddVD6uvSuVR
3zwKE7aToFTU0y/xxEaoFqDUqvj4/XAgksBUL3q/fIdK064lp1alKmudar0OfTEzYSCkmupB
r/zpiUiStS3tkGhoQcq18MTXyQFKFidXSnah7nAzYcUBy9BA/L2wkYJrmag5huhwg6EkEsQK
dh+zM4hBKAPX1BzqeuXXLA0lUIU0/bnUntgUPQaPSD1+uIUhUkZjI9f+uFRIhINctH8Z8cRE
clyP7emJeDCpTVWjAfgcAyEBRAw6d/MYBg0YHUW/Ad8JxHSj1bPUaKB5Z4RpICxU09NTQf64
tJFxTQQc+4zGeJYWappK1p0OJG0h3Tt3IODUVahnpXPMVpQfjiVJ6EEUpXrTvgBwkf8AF6if
GuFvEbFc+oY5DywxkJYLpOjNT0r1w4rTM2pmJWg7IBXpiWnqwUVJqcwv18cZpRkDUKjIdAPL
AqTMMgRTpn3wgNGBLBqnpXwGIyBdSVH5lPWvl3GAgaokqPuA9Jr3GLRgZXzqBVWPTphiOEDG
tKDx+owmEDRQhBU9CwPfAajfWGqjLkPQT0Bp388Wg1WUAMCxYUJrU18Ri0ekxJPqOXj54tVC
uYY0IIPXri1nB+3WJgfvrmadadBgMMcs8iUGde9fHCqBSSBTJv4mPfCpEmrSdSmqDr4/swLS
LPqJYjSQaHI5YVuGSNzXOi5aq9cFMgjIhdQzHXT8DgJLGQQK9QS2o5+VMCCFTVqBI0+OeeFm
m9xBVlQVOYY9ThUSkoigip1DMdeuCxpGTpBb7QO/lgWlr9GpiQ4FQfEDphg3BKpZgUIVBmwP
SnfCjakctSnWmrwOABpRaP6l7jz8cTMiRCsjMprQU0sRQfhhbhFG7DIfaBl+3AQKCANRAbwx
aEgAC6T06jz+uJrQxBwpAYUavn07YlA/b1WtenmPH8MKMihWYjNga0XIZYqzCZ2FCBT+Fh1F
cDR6qh1FqVyr1OIUo/c6qMupp288GsmYFgWNfDLqSO+FSAY6iPcY9c6dsSSNmEJ9S/lPX6HE
tIqzZOKVrRh38zgX5CTSgUV7GtKnE1ogxBVW7+OZphOiBDMQBQpUerp+7EAVFWOYr1X/AJ4k
cOhkLMaqoyI7fhgITQa2YVJIHevlhBOpKg6RrFO/Uk9fLEhSh1Sn2r3IHWvXEijDjToFVAqW
6k4kbMVbPM1GWJI2rqADCleoxKCiUiuohh91OgqcBO2lQWU18T1/aMKCtKlQAwp94ypU+Xhi
QloqkKKhu+IEKxnST6GBo2LUFAxNQBpIFSR2piR3TS+YzIoOwPniRip1BaeQp5YiQkKhhQ6j
+UZnLt+OEEjdA6mp7CgKg+WJQslYlT/Lp6ScBhzHqQF8sjSncYkFSQVBNaiiYNMFEkZVipOo
fd2/ecSMhIB1kgqfy+IxAIl66Bp1H7/88KwQDAZtQg0rTw74EY55sTmaAHpn4YUfMVBWhr1/
6YlBvHShZgC37K/64GvsFirsFDAU6eNcWM6YLUEH1Edu5/HFhEEHtgFdLZ6qnqT/AKYRoft1
fXMgjoOmA6FpVc6lJ0rmK9CDhUEwP5hQKagnM9O2ImPQOG1MewOZwASuHdkXoncdRiSJtRkZ
kzz6ZfjhGnZmoxIIz+oOWQwImZQooKnrUGpHkMR06hCVLr6KVI88CFRC3oNB2JOWJqU1aZIa
scy3niQpA9KyAUP5euXjliwaQQacyNI+5R4ds8KOzEAKRqNMq079K4lqC7WPTryUAU+pHYYY
fGV3PS0mumkKTVMbxuTxz+7F4dsGMurZSVuV9VCSAqg5Gpxrlz6r0CyS5FvqMUtFH8L6Bn+a
goPLBfFzXTAzXA9qBTKa1dEVmYU8lBOBVO7zKqKTIFFNEcgdTUH/AHAVwaZB0voUEot5YQpL
GTS2kjv66YtjNSbZt9/uN4I7RJJJiM3AOlR2q2YAON+YZU9/tm9bVKsF9bvA8gyjAJ1k98uu
Of1la+682T495TusIns47dJSKRxzzrBMy16ANkSfrXDZB9lRvex8h2uaSz3SzkinQ/YR7rUr
QEFdQp54BXGZrvKGRZdYGSuHRvppameGemdeHBmVdWh9EgIICtpy61oKCmLBKUKTuQY1aTR2
RS1Cc6UFcMp1J+puwwQs4IFWiaoNPNTTBilomE0FH0vBXq5DIufSv1xRr7lJNKqqxLHUeoJY
V/7hgkY6tplnlEjEFjKuTqSQ4+vfFkY9jogu7lNYjldTLkzRswo3jl3w43KTTSOc2dWRqUGp
jUZ+enGTqRtznkARryWVTmInkZgKf7a0xRWogQ9MiCDU1rWoPWpxpNDx/g2677bm5hnt7eLU
VSS41BCy50LgEL9Th1n5dUvxpyq3aSG6hSGSMaokZgUmXqHil+wjBaLK44+N8tPslbWaQSEp
C0TalKr1Fa+n6Hrhwy4rL+xv7OZoZ7d4JiaNHKpB/EeGKL5R6WXSwWpAo1Blni0/CaZ5HoHe
opkSSen1wl3bRbbzut9Hb2DGW8FP05Mmhiy9BG7EZ5eOLGb/AIdG8tynbZXs90F5bsfuhneS
jV70JKsPPGVap0LU0k1PeuNwWL7a+Xbtte2S2EMqGymDEwSgEBv4lJzXGejFedz3C8RRJPI6
IaqjuzqPCgJIGIX2ntt2u7WMxwXckIqf5SyNH93cKCuLBuUdibqe+j03BhldgP1byMmk1pUs
Dl9cVje4s+Qxcn21vY3W7lljlX0EXDSxypX7gakHyri5rNuqJZFdSwU9eh74kmhubi3ZJI5T
btX+W6uVYUzoCCMKtXt/tvKW26Lc7p5ruxkUH3jcF9AOYVlJ6YZWdqstd+v7KP2rPcJ4EGZS
Od1FT06EYMalSWKbrNdmSzWZ7mP+b7sbMrjxbUCDTFZq2leb1vF2xi3G5muVQ0CySM1G6UAJ
OeCxWjteQbtbKIbTcLiJRT+XHM6qAD/CCO+LB9quNp+SOR7VLIslyL22mYNLa3y+6rN4hmOp
csWNfZ1Sb/vnKyNus7W1sVIaX27cskbsOhIclVP0wlmNyW/gvTDfa1voT6yxOsH61r+w4mNP
Fum4QXP6uG5liuvtE6uwag6Bmr6vxwY1pXW8bpcFjcXk8rOdba5HNT45nrjc5FqW33zeELRQ
31zGXzdUncA+NRXPAt9QxbpeWVw9xBeyW08lQXWQoxz6VFMsZxqpLrft3vYjHf39xdRVqI5p
WkUEZVAJpXzxpy3068g3qO2/SJuV0tqPT7IlfTQ50IrgwzqVEsFxcK7RIZtHqlCjUyjrmBnh
KFyMhlpbKmRriZ9d1tyneIUEUG63EaxZCBJ3Sn4Vypi+rUuo4923K3umvI7uSC6kzedZGSQl
v4mU5/8A1Yr63or7kG7X8KxXl7NdxIahZ5GkWvQ1rlixj7JoOT8ihtRaRbndJZgaBAJW0BT+
UKT0xYz9w7VsO8bnrO320l1Kp1AIKP6cyUJIqR5Z4rWoG9uN3vrhYL6See5t1MQScszxgHJP
XmoqcM8VT7jxDk1jB+pvNpuYIRSkunWgHixQsAPri1nFMYxVX1DtQnLPEz1zUkZYmmbEmntg
Z4m+ZZ8uy/2bdNviinvbWaG2nUPDOy/y31dAHHpr5YlLDWGy7vuQ02FpJcsgLCONasfGgP3H
FuFyCNwWqpV0JV1cUYEdQwOYIwiQHqrWgqOh6DAz267DaN13F3Sxs5LkhDJSKhYhetF6tQeG
HYeZfy5yskepSNDDJlYFWDDqCDQg4t1YEKejZpkPOuDUdz6goFCD6vPwxLEYyYitPDEhIKnS
Scz18cSMp0sQw/l9CDiREOKjTSv44lfT0ZkoRp8O9MQpgM2NR9T1wQ5CRvUCcj+2n0xYhGta
qNRPiMK1G3qYg5CnhniRzpBFT0FAQc8FGkFAWhof+eIyBqKaRUlepOJotLUzAPj9PLAKSkmh
GdMs/wDPEKbSp6YUZq5VI60r5YloCPUDSpGVMCJ6EkiorSmFI82NOwrXxPlhRyT3NM8x5YEi
JFSaEA5/XETMoVgST516YgZz6aAliOmJXozFmyqK0zOBfKII+ohjVT+6mHVOcRFtTUfIrko6
4Vb+0Ur5rpBIY0yzyr1PhiVpE/chBp0qe2Ms2o2TStF+4nv44Gp6ChOkkgkda/5fTEcxHKx7
CoI7ZZd8sJc4K1JJ6Aj9nbEpEZkoOpBIyHbPxOFkIdimkdO606+OAxA4FAR6BT7fEHvigtMw
YPVj6dOQw6JDJUnV0B7eXbA6wElQykgAdz4+GIgYkiiZdqkfgcSJ2ahIpUjt5YAGhIoozXow
6YVgHYg1UDrUAjx+mFENaj19a9exGBajqula/eorQ/XECqGYv+Y+mnQZ9cSJyaUApQDTlUV/
5YloQWYA06dfr5YWR5aasag5kV8MZMtKMJqUk1BqSR2H0xEtRVdP3t3+vUdMKlMpOZbLVm3a
vlgA9SitDpr0PX6nEUoyAq3pORJ74mjEkjSnp8z0piB6O6gE5nKtKEDvhFSxrpKrJmB9tev/
AAcAg0bI+JyZR9csDWiLqq5eo1r5DPCjwofuOS9x4eYxCDBVhpJo3UEjLApBaiQKGjDqO2FU
yhyVLfbQ6qdfpQ4AmqxUlBqrmFPTL/LAvDrkhJQ0GYAxEIIoaGi1y/H/ADwg666AUooyBPev
+GHVOTEGtDUdziWCDgtkcxTp5dRiOmf+YG1emvge3fBqM6gLq06iB9lewxIzVbIrpIANOuCr
CVo1qKkp2xEz6UAqBWtQ308cRAVYzEqMiKin+eNM2ImNSwdqK3X/ADzws0TMaaSSKeknr+/A
QgNT15nxHXAcDIFpkelAo8fHAQlSytXLuSMIwwVicunQHtniQtAKKoNaCgNBgKNgSw1EgKOt
KZ98QILVRTpUmvjiQVDEagRp6nzI8cIRmikyFtVTUdu1MsTUonIKgNkQRqXrgJCUhWGRA7nK
mIEoBTI+kj1DpX8cOAAcKmquXSnfLxxM0Won+YfStNQoa4GiIoelVofA1B/zxEvQGyHXInwI
+uIBZ2Daa06k+PlhAQVXKlBSpUdMKETVatVl7nx8sVQidWVKMvT6Yy1DGunrpk7MfLEi0OhU
tTPrn3r1/wCWBUMmll7gk9B/lhYtEsrNCq0oe5YdvPDVLQrVqp91M6dhXGSJWQDP1PXSdXQD
EoIgoWVRq7DPC1gAAr6TQ19WvpXsARiEmFmZMhTsymnTscsJFoFNQHTIqOufngRHQKaT/j+O
IaHL1U6jMMcKEq+otrFGHc/5YiBmUEJTIn8SPEYiegkyX7hlU5ZeGKjTFFJCoKMOo6eXXAhh
a+lhn4DuMRMxQij9+1OlMSIFgvpYE160NDgZA1TWpqe1PHwwapBoVUgkEkjLvnjWkyHKgHqH
f8o8cDOGLEU0E6jkRXLEgpR2oyUpmCe3bCRFAWVRQr+Vh1r4YFpwHzAIzqK4SFgq+geqv5ji
w6VGIbQP5nWvagxCgogapboK17eGWFCBjY6lNR0J88BOWdfQMgc+/TFoILUgOG0/wjv4YETB
KNqqozCgf4YYjAIERQPUewzwkpJEOSrV+uX7MFAgGAqKLrBDA5kCuDSjRmZSpGTDLKgyyOGQ
U5fTXSSRSnln1xYjkMVKA9T9nXLx+uJEZFBOnMgVauYy8aYotOy6gGJ0mtKnoDiRpS2r05hP
zd6nCrTVLLUgBj2IqcuuJYdQKM9KI1KVNScBNSsRb/xsv3iv+GJGR1WPTXSoGdep88SMzIpR
gTnkF61r4eGAakk0BggJKDJss8s6DFh0tKkMyvqLCq18R2wkPtADrqzyBGWXjiQqRk0D+oDx
y+meAUlUSGndcyDnTwwoxQtGdbUpmfDzxGIyie4GJFAK+rp+7FosGoUZmoJBCk5V+mL5E5My
Vb1VCjv0Jrgawyx6FNDRBm3jiWHCqKk5L4r0P1+mIEwZlAVfp5/XEcCyhOoLUpTwNfPyxL4S
GOMDSppX83l4YkF4QHUhhQipIz/biFC0QSikCtaaq+rPqT4nEhVRVApVQOh6gYkGNDTWa0HQ
dz+zBrWDaNfUJKuKjp6Rn4HrgFNI6pKFH2sKLTMZZ0w4tP6n1A19fY0A6+WGG0nVY9S9ie3h
/EfOuKs6eNRQ6DVe+f8AliJtTEGpDK3dT1xHUEsker7WKgHT0pnkRilGsluXpnIYkmuY8B4Y
3qlQ0X+AfbXr28PriOLjhhaXkNogdYlWZHMhGqgB8O5xv+bHd8fVvIOY3HH9rgurK1t4yukE
CIAsKUGtftb9mOO+mcoeM7/Z8jnl3JdvXbt0jAMs1oQKk+AQDTjVKxtN53HfILqx3plu1t5N
Fu0saBwn/dpDYkivt13XjrWabW5htJZVR7Zx7kJRsjk4IwYJDblLNtnJdt3TbJVtLi6kKyxx
BBHL2oUA0gVxz5t+Bjk+QeYbzLc2MV4I1tnNZnkiSqsDTJqAj8MdIZGx29dq/SW7Reyrs1Y9
Wn23FAVYNmpxURJfQRvcrdRRCO5twwWeA6itcyajV1OBp53ec95Nf7o23brbQ7hZpMIo5Zbd
BNHp/MJFAYH641Iz9ovuT85vOOW1tNYWkILsqzLo9Mg00pLH0IIxSqQfCb7je/Sz7tDYTbTc
Bv5s1kQtD1BVSOnlipxPvG+cY3a3ayvNxfcboPohkuLFI3WQGgDSJ2PicsS+rq2C8FkIdrvr
2a2D5C3a1iureQHyYEoMFSz/AEG17XPcz2trDIJfU9IVjGoDqq09JxSlR2d4OQ2yz7jbwCSK
UrDMsaCUBTkDIAC344bGbFXuO12kfLdvjt7NAz0adI0pqANKt2NRgwT9NMNttrGSW+2+P9Jc
BTUoi0YgE+sUoRXrgaeab3zGfebd03TabFpULGG/hg9mZSMj60yYfUYZNUi42zdd9g4YVv8A
ilvu+zGvs7iI6OlR1Zlq2X8WLMNmMPDu+52YmjtriSCKYFXRSQulhmCpws1d8Y37kdzPbbRD
ePNZyOK207F4QF6kq1aCnhhMe82Fpt8e2Rxw2doIF/IJGXS38VDgZx4p8mXP6jkkiNEsPsrp
X25RKrA99Q/dhwxnNtuZrW9gnijSZ0OUTqWV+2k+RxF6lY8U2S9t49+2oWtvewxk3m0PmtKZ
6a5NXAPhj2udun5dbPa2a2C+6q+0AQCynMqO1cJX/wAsM7vZlm1KFcAsdTBa/bXGYnm6KGAZ
sgOox0lD1rga2kHDpNwewjnuITIVunUF0YdBqIIpl3wdUuqOy2rkm32m5X22W63jer3IU9pn
H+/QaVxQKjk3KLraLkbV+hs90ttAKxXdujyBCB6dYo2Q6HriZZ3YN2hHJreSxtUtYp5AslrI
PciIJzU6xmvlgka1d/Je37NHuNsqQLYWs4IuTbKTSh/IjGladaYuYscMHE+GPAkictWMHLRL
[base64-encoded binary attachment omitted]
2YNSlK10nyHiPDAZUrMEYmuQHYdSf8sKKNgwqMya1B88ZxaOI+ohRq/iPTriRBVqXArUHKlA
MISLGBQrUEfkPngKUipLhadPpXuMQtO4Rs/4iDn0qO2BacghSwNBSvpqKU8cKJWUEdSDT8Ti
WxIdXukmqqFzy8e+DSYvpWoBLU9FOmLUJW/OwoT1PniJagr0Pganx88QAGrnU+P+mEaRpShz
J6070NcQtCz6hVgCR38cSloXY0BQedew8cQ9NUGrqaseoPXFTqNg9VpXT0phQyGoocVzof29
cBKZii6iTQHqMBMNZHUU6+GXli0oyCAyV9fh2p1ws2h0EEFaKy9a9PwwowBD0I0swqD2y7Yh
h1OVVo1D6ie/0wLERJBKsoqQP39MRzBShBSgq1ftBpgOg9Z1KMlyIBzNR54UTxKSyZEHt28q
4QAHSQPDIVxAqqwpUsP4e1MBHWhFRQ0yplgxpExagJJOr7a4Yzh8gPUatQZd/wBmFk8cik+o
GnYn/PBigUADZ5ocwPA+IwNjLioB9RHVWyGGHUaszj1ZnpUfuGJn5MIJAQjEmmeogAkeFMVM
EZfb6flrmfHxwK04ZwwIowapFO/jio2kTJr1E1qKDviiCFBAZ19ZHQUzp4YVgVdlKkgFh9rD
wI88CPqFBQ0rUVHb6YmaKMmhqRUdRnWmIyD9JzrXIaq9a+YxNQCmgKiq1qSB4YkTtpUJlTrS
nXELTg1A9RDPkQelMFoxKQWAonXqtczgagD6GOkAhh1GEmjKtGw1lQB1GVM/8sQol0x0UtUU
6Ht3GFIwBlppQmmr659cWrRyOBSo1gnIny+mJaTCuYIqMye2EhU9aNSpqfD6YgOuVQDmBXwp
gIVo2qmTKanyxYMMNIAOqrdlxDCkcaCWJLCpPamInEyE1jfVTJ6ePniWhKproCArdT0xEaUI
GlfT/DXOvhniCP3WQkZg5Vp0J8q4EJioKOzVqOtMCgqkqSuR7DtXGoQayEoFBAOeGo5WiU7g
+kdTU+OBG6CjCrkjUf8AjxxahMhchsgF9OWIGQ0Pp+/OoOf1wEJBqHC6SeoHhiAlEYkyyHWg
qf8AHpiJaEK6mNQxopJqcsSRyBqMT6aGqkYozgkRSiZgE5eeI4E1ErhaaPyk59MQ1KCiioFB
/COoJwtGGSmi5HuRXEkb6jUA1/3ZHEij9sClBVugzB/HAhTEr2opFexGFBjbLURqFK6gc/Dp
iRK8dWVSTn0PUD/rgR1JjJP5j0Y54mtEsrNQAn0Zg1616YhoWXuy6QchQ9aYUEfzF0qoU9ye
p8MQ+qRVb7SRVagLT9+BYGN9Lhaeg5A5nP6DAobVEa6Bn3Iy6Hviw2lVGoHzUAGnbM/44WSC
xgEJ17kmvftga5p6qKkgVOYI6nE1pn0BM8nGQy61ws9UslgFTSv2L0JI7A4kUbOy6WGknrn0
xNfYbaxUkF9RrqHYDLpgZ1GCtMs88voDnlhoLoX0jr1GJQVFKqlaA00AdMBM4WqtQ9CGp0yx
NBiWIVKkl29VT0p3rjUVSMxYahnXPPFWSd9DgIuo+dK54sIJI6quqqkZlRnQ1ywLCYyDvVa9
fr1wRWC9XuGhGnyH5aeOIEpX21Bp6egqa0rnTEdI6KAVFa0UdDhOI7iJJIvUaqctJ65Y0qx+
5L/8kDoBUAdRQZY3WMcukePan4YyVjtShbyMyk6A2pl6agB4nphkUmvZOBcKfkvuexffpDbg
OHMfu1r+OVO9MHXjHbo5LxaTYZWibdId1QmpEYKaSeoZWzxzb52s+0siv/MIKintqBQIcbUh
GbUHBoQwpXPt4AdMWBY7Nsm67uxhtwCEoJJJKiNSft1EeOD6tXrxc7r8ZcysbL9ZJBFdwK1T
NbTLJpyz9P3dPLGoztQ7Vw/kd/A5so19wdIncRuf/wBKlfoMZsb56xDY8T366vXs2hMEkTaZ
Hn1KgPT1HM/jizxXrVhvfxxy7bLZbyeKGa2pRJ7WRZVof4gCCv7MDLLgPpowIrkR0qRjUTu2
zbby+uo4bdQtTp1Nkv0rjWha7lxLku0R6riALFcMaXMbiWFm6dQcjjOnV1t/xNzG/hD2jWks
bqPb0XUZqaVyU0OKxmxS7jwblO3bmtje2YguWoQXZWir0qGWuHVuOi44ByixMS3EMcqXDH2Z
4ZlkUtSueeoYvk1a2/xVz8wr7NvCrMMylwmoj/tJBpgi5tc0Xxvzi7uHiFtGroRHL7kyo2o5
5Z/vrhNthbj8Z82sWRpNvHtswTUkqP16HI1xQV0r8W85MZkTbfdLDJRNGp+rBmFPpisURbNw
HetzvnsJ54dv3GENrsrmo16c/TImpf24xivSo3vj24bHftaXqIrJ+ZG1qa9CtO2NSqVyxXE0
JYwsyO33FCVr+zDQP3LuUh5WkkZTX3NRZhT/AHYIJBi8uiucsrRnoWZyPwBOKnS/USxsNDvF
4gMwo37a4cQ7X3bi7RddJJWCiRz3bKhbEl5JxDfINxgtgYx+rqIZoZNSsKV0kqemWCrXHuth
vex3Oi7Z7ZgAV9t2CGvQgqQDhi0VjybkW33HuWG43NvI49ftuxDD/cDWuKRas0+SeZufRvdz
qXqusEV8MxgDh3Ple7bndi6uptF3oCGSMe0zAd/TTGsPFn5Qpt29fphuP6eZratf1tCQCMz6
hU4DvrllvZpR7U0jyKCGozM2fjQ4ke33G7gosE0kApUiORkGfjQitMUildNta7tu1wI4lku7
hzk8rEknqBrY/szwnS3S25DbXX6bc4545l6JMWrQfwmpBHmMMkYuuxNq5Vb7WLy3S4FjqqZI
XZBGe+rSRQfXLGcNcdltu6blc+zbJ70xzYFgKn6nLFrPPOFuG3blts5ttwtXtZuvtSgjLxp3
Bxa1IsEs+W2m3i8hW6Ta/wA0kLkKPNtJ9P4jDIdcC7pfrI80dzKsrijze4+pvq1anBVOjSbn
usx9mS6ml9waTE7sy1/EnDnh13S23J9v24XJ/Vx7Yxp7sTv7Qy6HSfT+OKRi9FtfH+Q7wyvt
1u07AlvQ6oR2JBqD+zBhQ7vYb9ZXJi3SCWK4YAUuCzOKDI6mrUfQ4pRarWooUgaiDk1BUHGo
xbVns1hvVzcD+lRyGYCqtG3tsT3o1V/ZhvrXN8DeXO8C4eC9acThyJo5y1dYy9ascZOuqWLl
W32UU5NxHZSVVXikcw5/lbSaD8cUi3B7NzDku0RtFt24SW9uxqYBRkUnuqsGA/DBWln/APed
zYMpG5mX1ZJJFGyEnsV054sTk3q75brfdbmOe0t74gubZnW3ZiOnoOkfQ4mNUdpuNxb3HuQT
PDO401idkan1UiuHKp0O63O+uWBuriadVHoE8jPQD/uJOJnr5dFnyTfLaIRQ7hcxRA+iOOZ1
VRTsAemDGvs7to5xyrbHdbLc5VSQ6pVekq6j3/mBqE+OKRpPuPyFyzc7VrS9uxLblqlHhjqC
OhVtIZT4EdMOLWcnnlnlaSWVpZzm8jksx+pOJmxNabjf2Env2l1JZvShmgcoSD2JBGJXxNd7
zud68cl1dTzvb5wvJI5ZPoSajGbDIV7db1BqS6aeMXWlpElZwsg6q+f3fXChR75u0NyLhL+d
LkqE9/33DFF+1K1zA7DEtBuO77puEiy393LdSoNMbSuzGndRWuNMWm/q24iwbbxdTCychntQ
5EJPWunpXBjUtT7A+9HcUg2aeSHcZ6JH7cpi1d6VqFw1rfA77eby117W8zTPcxVjP6h2d0AN
R92eDGf8q/3jqoTSvUDx8cUY6lDVaAg9SQK+OIwIZQPT4/TDjaYX9yls9osrrbyMHe3BIjLD
o5Xpq88WCo1Yh6Voy0YeoqR4HLPBVzE9xd3d3P8AqbqVriebN5HOpmpkDX6YlhkvroWxtFmf
2C3ueyGJQv3YL0rTEr4hWeVZ0kik0yK1UdTQ1HmO4xM1NNc3dzcNcXMjTTynU8sh1M5pSpPf
B8qLC72bkFvtWotr24Sa5bMOf5UxFNTREjSSO4FMUHU2uLatt3W/v0h2uNmvgpkiVG0uSg1e
g1HqyyxrY39UG4TXkt08180j3TMWuJJj/MLd/c8DgHWifcbr9GLAXEgs2b3Bb6iYhJSmoL0B
plliTnSR4ZUmhdo3iZWV0JDKwzBBGJaV3cXN3PJdXMhmuJCXllc1ZmJzY4cMqU7jePt36AXD
tYh/dFvq1IshFC4Hao8MFKGGaaCVZYpGhni9SOhKsCOhBGeDUa9u57qeW5u5DLcTtrmnbNnJ
6k418sSeoqUH8Y66fI4C6H3K+a0jsHnc2UDmS3tyaojsKEqp6VGHRnupoeT79BJbOl/MDZFh
ZMHIaEN9wTP7T4dMZkbiwk+QubvG6NvlyysKMrFSCCKMCCtKGuLFb4zRLautCev1xplPYbnf
WF2Lm0ne3uEBpJExU6WBDKadiOowHcLbN83Dbbw3llcNbXQBAlhNDpfqp6ih8DisUWO2c45d
tNnHbbduVxbWIc6IVNY1LGpAJBK59q4fqdQXPMuSz7gm5z7jK24RoYluagN7bVqhoM1z6HGS
qtu3jdNtvXu7Cd7e4KNGZIjSqvkyspqCp8DjQ+FptXP+Z7Tt6WG3btNb2UX/AIoFIYIO4XUC
dPlg81XcVXIN/wB83+7SbdZ2u7gARxyMFDkE5LVQK/ji3GLdRX3G952+0gvZ7YiymBrKDqCM
P/s3HVH8ji3WsW/EOR8/tlaw43fTQ++xZLbWgRpAP/sxICuo+HfFitU+4b9yI7zJuF3dT2m8
QSapWH8qSOTKvSmg+OnLBreSfDvvvk7m11Mskm8TmX2mgcr7Y9yJ/uV6KNS/XErig2jft02i
9S82u8lsbuLITQto9NfsYdGU+Bw1nmSOfcb273K+mubn+de3sjM4WNUDyt4KgAGr/HGaZ4iv
N03K7hgt7q7ubiGzDRQW0zsyxDoVUNmtCOmNaa4XDllFKd8sjgc8de27rum2bhBd7dcy2l3C
w9maA0kDNlQU8a088RmtLyT5O+Q9xh/p+9Xsglhqul4khnQOtGBZQGAdeuHD18ODYPk3mex7
fLtm2bg8W2vULYyxpNCpObBVkBoO9BhY52m/9z5num+2W5Jf3M282sQit7mIBJtK19IKfdXV
+auKtfl28j+Tvknc4P0G93LqQGVfctkhm0sCjqJFAYqw+4dMG+NTKg438vc343Zf0/bNzaHb
gKxQSLHKkfYhfcDadRzIxYbYqhzPkq70m+xX81ruSFvbuICIwqk+tFRQEVD/AAUpi66rH/ha
ch+X+e79bm03DcA0ellWZIoo5dDrSRNaAHS4+4dPLFjSfhXyb8k7VYJsXHLqS4t11OlpJGk4
RTVmWP3KhVzPpwTF+FPBzfksXIP65FfT226oNKug9v2lXJoQn5I/FKUw2LlY8j+YOc77H7V/
eIdSPAzwRJE7RSCjxF1Goo/5l6Y1yzZ6xhZmQVyGVK5Z/jgxqmjYBiGAq3Q/6YsZOGGvKhUG
mfiMZWksgAJfMHLUehqcFKeMO0qQxqWd/wAqglm8gB1xaijLnWjqVkjJVlI0kEdmBzBwmXT6
mJBofQKk9CcQqQoZWAU5gZL0BOFS6JYgXala060pU9MBGqlV1srCFvSJ6UQmtKBumBJI4ZpJ
ljh1SSSZBVGomuWQwi9YMsEkCuCjJ6ZAfuqPEdsUgnRV6EGqk1of88TXydMnAPSh69MumAYI
F2HiTWvjQYKtSNGtakfj0IBHXATodfqFTlQj/jtixQRRW7EVOajxGEheN/a/UCNzbh/aMoU+
2HIqFLdBUYhpyhGbCi5jPtiQCmrSGWir0ArX8cKE1tOsJuPbP6fV7YlodAcjVpr01EDpiFAA
qj/aR6hX92I4BzkDQ59sQ6hvbJBJBr1Ncjn2/HFokAzJGhJGf5R3+mI3wiW0qzZGnqOFQnjl
BGpSC+VM8/pXAQAaUoMtJoM88Sppj6wCMurEdcGjD0cgEgaBnXsB2w41IjkY+pgdQ659Prhj
NMxaJgSQG6n/AFwU5iNnWoYedKmlT54mKlkhmRUuCp9t2KKx6awK0B+mBvEWtQQXFDXOneuW
FWBYldP5VOWXbEzSCFVFc8ia/XEdRhyEB6KPEUqK5Uw1UTsHNGYZZH6eeMrTU7aqqOjdBhIS
NfrrqoQKDr9KYWRM/qYK1SOppkAMGHBa9UZ9ulG6augHc4CYKHIbWDlVTXIj8MSKhUhtJp/E
B1/ZiGGKhgJCxIByP+Rw6QsK55kdx3GAYJGdnyNNQqD5DEiSgqQSVNB4UriRiAzF1Lah1IzA
+uLSJQNXpYgnKtP8cSJFjUZCj/mJ7YMZw2pHqmdBWhAzOEwNFGRJqAKL/rg04IMdVFoHpX8M
KMAGYh0rT7aHpi0YIyKaUNFStPHAsACSvWiitT3+gxIWlcmSukD9/bEjJF1JJLk1pXKmGInV
ichVele+XfAYItVwpGnTnTx+mJUmkU0A6CpIByPhn4YUeh+zKgAORpn5DEiRSxJpUUqanL6Y
WjHUpGnIGus+WIDZkChVoVPc9aHxwYTQuqV9wAnsf+WJlGI09RBJIzoemfhiUpRoUNStEbp2
y8hgUGYyaGi/WlcsJIRnSaEk0zYUpTEiILkKv21oD/jgREChBJK9x54lpHUy0egU98IBoIWl
KgDKnjiRlYuSjiviBXPFQfJjpZCAPtBrniZvpIvpJX1UP21rngbg+9VUKK9R0FeuJoAIVqEi
h8Oowgj7YYUIRaEk9KjwxClCrIpYAUOefn3wVHMpYmnqpSijEtC+tQ2QoSK+IwxU2nRRQa6R
UeOeeeICOrSpBzDVKnuO9TiOCMyhM8jma9KDEUOl179RnTFQkRQFBYhUFTXKoOJQ+pULEHNs
60HTofpiIVYD111GuXhljKEUQuaGp6kd8SlCCHYaavQZ+X1wrSFFyoQDkadm64iZnJUBgSfy
/wDHY4UZCyNkAVPUgVIPhUnAkiodLV1A9ST/AMZYq3+C91CtASAuS51zwYxpBhCoXQGPU+JG
GADKjByXUKtAwzFG8PPEvqBwRQqPdqSW6CmIpKmmoIKr0och+GJabQagsNZFNJJzGLThPGAt
OwNAOuZzxKlqFCqgavCvbEDlpF1FUIXoK9SadsSKQKKFUpXqRmfOtMSwx1ZaTkMzQZZd8QPU
tmpGgZhadB54MQSdMQJXSRQkUywxpIYpCpYHMjsPHEUYSToSBqH3dAGwjKdU0jUwBZurDp9c
8FOYYIjOHByJ+2udO2CmZiRz6dAb15ZgV64kBwcqVRga0y9X/LFq+ptLr6DRWJqtex/DEPqY
aStDGCa0JxRrQ3K6kPqpTMU608MagsZTdgfepSgr1/yxphz6/wDZ38MGnK6bAL+oC0DsxAGX
T9uNxl9EfB6Sl74SCkiop9IoKE/cKeHQ+OH+vwsVfyAYv/Yblxk5NCcwDU1pQZY4Hi5WUkjq
xCsGBoVqO/njca6pzTUwWpNQe2FlpOINyZWmuNlWaSKMD9THFQsyf7lPUY39pmHx6nw+5tLu
wuJ4A9rdq1bqC5o0ZNPyjtjl0MR8qEMZsZrYCOQy0dkrSnUnT3wxYn5Mu4rb215x8GeQU/UG
FgS8dPt09/xxQB41eQ3+z3Rsddre1ZLq1uhqUmmZGfppg6U6eTbjtO5S3VzdJZytBG5DzIpa
MPq6EjDK1cbr4qi26SzvIrlYmlrqPuZEL/tJ6YbWbV3x9YH2jcbW9dSNT+0sx/KPt0lsZtVr
NcH2S5uNxvLv2TJb2shpHG7A/dlTPpjVvgeg8duts3VrxL9Hu7W3JWNJDSRKCp88sF+Fms/N
xnZ9xibfNhmnafb5TL+iuGOhlU1NCCaVHTBKciS45Jwzk9xb7Zdw3+3XxJAaM64Q4yyNQafX
DYvhLbX1nweSey3iOW92q6YyQXsJIkVSKeuvf8cWq1Emybdc2ych47d3UyQPrk2+8cjUBnVG
BNPxOLVElxv/AAnlN9a7e7bjtu5UKgLIPaDD7v8AuxMz5cO38c3DZOawQ3N0bsSxl45yTrZC
SAM88u+Ln4blU3yeo/rulRqqlWI8umCVlj40qajv+8jxw6Y9Z+OrLb34rc3E9ik8qyExzkH3
AAtaL2Ayyw2Dr4WVvt+38k2aC5u7aMTI5UNGAGAHRWpgU+FNyPc7jj8sVolna3UJUaRcxLK+
nwQ5YtZ+rK7TuFrNyayuba3W3WSVVe3ZfQPEKp6DGo1j0Le9u2+25XstxbWiW8kshMiQiiMd
JyKH0596YysWO8brbTcks9olsI5klUmaOVQ66WH5QRlii1WtxFdn3OaaziWHaxRi0w1CMnMq
w+7ThlZ6qzv+L7Jutnb7hcfpZLg0P6izX21bPowpXLFrUjOck35tinO3R2Nrf2cqgt7yKx8N
NVFR9cUp8X/B1sbzikoQC1ilMtbct/LBYdFPn54KMeZct0RXxsjb/p5ox9zKPUPEeIwwqIR5
kr6u/wDyGEPUuK2O0ycGmnZA13Dr0urFWVqZVI+7PtgvylzsllabpsG3XO4xrdTRUUSOdRBH
QeWLV640C2HOLayjpHZOh96GQ1jcspyIOWR8cWpYbrx/aLbaNyktrdIZoQ0sUsZ0sD/D4Mvk
cX5WI9ks7bcti2263BUuZ4hpZ5CHIqftIPbDanJBHFafIA2yNvasGiIks3NY3LLVNQPUeGK9
JU3Fn/RucXUG1bKu5QxjVJtjxmUIGALFaVpQnLBg5rn2uTYdw5qhj2iPbISjpJZSk0EwPXSQ
NOrEF9tIjg5xNtJk1WLxMrQSH+Wz0rpNcj5Y1apXFuyblt9/ebckYi2VJK2zgUkSoFfakABA
8q4LjSp5Vyaxv9kg2wN+plgoIbt6iZQo+0nv54uYz0fd9ztZeNW8G7cYay3B4lFpusKGBXVR
kaUoxPcYzfDzFqm37evx1DuNuFivYVzlQ0fUXoSQMqiuGWir+32ja90tttvr62W6uXhEbzvm
XNOpI6MCMsWlXbCkA5xfbKSZNtaJ1FpIaoTSq9cieuNaJI8/5Vtlrt3IL6xtdQghkKKDnSnU
E4qZ4sfjewtL3kH6a7jW4jaNysZP5lGX4+GKmRruPqG5vuGwTVfbPYelrKRpZuoAp38MFZ31
QWkUGx8x3W2i2I73t0TmJrf2zJLHHQGqsAcwa4bWpGc5DcbLdbhLNtVm9javl+llJLo35uua
jywMyKwaBpA+3L8MVPj1Czsdhtfj6DeWsVN/GhYSkemQiQgq6jIj64Fri2PbeH8o3eD2bV9u
u2hZr63ioYgyivuKa5eFKYdM5X0vHdiNxLYbuu3taAaIryEiO5XsrMo6YEpNm2/jVr+qsbBo
bvdLVyiW96UaC6Xs8Uhpoah/bh1nqKHf5VsN1Ep2b+l3ieqW1YF4JAB1APX8DnhuLndbv5O3
G2bh+2tNYxSy3ARYiRQxaogwK+Xli5h6qhs9p2/aODwciitobssP/k2lygZfvKhkY5g+ODNH
S6g4Xxye82+/e0CxX8Gq5sKlYteiuuMg1XEcZ7drng1vut1sd5t7Hb4h7a3y0W5hfIg1FBIq
+eLKxKreDWe2ScuhsJI1vtvlaSJPdUoWWh0uKZq2WWHqYebrQWnHtpuPkncNrvYn3Cxjttar
cOS4LBdKl8mqvbywVSeVLtmzcX3re9y4ybULFaK36W+oFmRkOmjMMmAJy8cDUnhbfxqzs9lS
6htF3RYblrPdrWVamiNR3hbrXR6sMVZ35Jm2Y3VtDt/sTw+2Hju41KSBM1WGVchVfGlcPqjF
qgBHcnMA9R9MLMvr0prDb9h4jte8i3ivBfBRNBMo1B31UZX8PTmv7MZ59atxX8Z2TYOT77IL
S2bbvbh9yOJWMkayFwKjvpOeFVZ7HtGy8quN02mfborWexjYw39sCre4r6MwDpoeuLqqXUu7
bXsWycW2rd1sY5rxgLa4V1/ly01AuVNaN6cExnr2q3l3HNg2LcNvv0hLbXeKjPZKWqhWjOY3
8DXLDfgz5XfzQNsQbdIuuO8lhfRIlAJIxQBJKZnTXrg/At9ZT4xtrO75KlnewiaGaKQ+5UiR
HRdStGy00sCMEjXN112ewbbfc93Ta7+6drr3WWxupQCZZ0IoJaZVZTTD1FLqyEG0WdruEt1t
kFruUKmK4spgfauYEqWZO6SZVVlocOOfduPL5hqdygJRiSoPUA5gfsxVT1bcYg2Z9w0brP8A
po3/APHORWJX/L7w6lD3pgtbnDRck2+ytbWQ3uyewlAId4tGDQlfyv6fTRvPCLPVjuu0cb2T
i2w7/wDohNPeRRJc27H+XIChfXn9p/xxmRr8uPkvB7Da98tZdvgd9vuYRcyQGsn6apFdf+yp
pXtjfPwz0urXgVhyCzvYltYdvu7eJZLS+t3DBnzp7kYP2t38MF8MikvNp2njvHto3O5sotwj
3WOntS1DxynNqMMmQgZVxT0dde44uKcT2Pk2+Xy7fqsVjhE1vbyH3Y0bVT2zX8rePbD11o54
xoJOBbRvFtd211Db7NfxqWsr2KQFWljOYkWv2kDpgw7jyHUC4BGn/ae2GwTrfVvxuxaa/Zzb
maGEB5mpVI9RorS/7ScsZjb0VPjXb+Q7Xer+lt9uvbWITWl3avVGOZKui19DD8RhtxfXPTfG
17szcH3uK+2qOWOEguFoPd0x1BqftYGueCfIvw8hv5bKad5bON4rdiGjhdtbLXOmrwxrDPhy
lGJL9F8fLxwVPTN32DYuKbVstxdWKbpBvMMciMwpLE7KpcZZMvq9OMy+L845OR8N2Di3PLCC
5klfaL545bTP+bBqfJG8QGFK4b7GJ84sv7gbfboN+Rbe4e2vbm3WW6sxlFN6mUSmmQcaaHBP
IPt/tjr+O7biE/x1vPvmWAROHmkKh3jcICrRN1zbPrinrrfIwHGbHZtw5Dcx7juQe4kVW2+4
uVpDOw6pMDUhmHTOleuCZRE3M7DZYLdor7ZJ9kvVUpaXyHXbzMPVRqZUbt3x0vMcr1bca+X4
1s9ptdsdtti3a03CCKYSswSZCyK0iP2emuqEYOZMdWV3HYdl4V8kWlqvs7ttN5Lby2hkYmRA
8oKqxHRkPjgs8Z+3+2LD+4yHjsHLpY7eyMG6S28U09xHRY5C5YVde7UXrhk8Z6udf4eOljoG
Xl9fM4GnofxPxSw3yx5DeT1iutqtY7mzlrq0sGYlWX/cF64Pzhvk1ccbsrf5C4fym53R1S/4
9Cs223yqPcRVR2eKTvIraO/TGrPwr8bFRB8dRcl2uz5PxeKRrGGKP/2HaHIeSCaMAu0QHqdJ
gC2XTPFLnlPMxy/1biPHN/vE223a62u5jR43BIurG8WpPtSCvuAdx3GDqMXrW1uGO8cJ36Tk
Eqcg2+OBLiz3GzVRcWVxRvZYqPVokPpkPbvinLf41nuQcV4dwmPa4d9sF3CHdLVbu3uVJV6M
AXjdR0ZC3pYYudYnzleZb2u2R3rja5nm20kPbtL/AOVV/hfxI6Vw2K31wigo59KjKgz1V8cZ
aj1v46s4Nv8Aj/c+XQRpJdW1w9rc2ky6ori2cKPbIyKGuYYd8XM2m3IpOGcItOU8V5bulxNL
Hu22qt3b3NQwkrqZkkB+6tOvXDu0fE1rNn+CrCTfZtpvLtjb3uxLum0XoAEkcxZKiVOh0EkZ
dRig+2uXi3BOPXlta293skj3E0gguLoNpljLkK2tX/MtdS+WHqYzzba5Y/iex2rmO77Bf3Ed
9c7eEm2yyuCEW8tWGdGqKTrUHTXFfY6s/wAz2niUEEn6Kxutn3O39cdnMtIrimTCOoHq74zg
1iwx9w0PpPWopiwPSeCcXtbrhXIeVBiLzYZEeJfytF7ep0FOjaiCGxcZuHu3PHdcbTa8z+Ot
95hcpHBvGwzxNJPGAv6u20LWOYgCr0OT9fHGr84Px4gufjSxuZbbk+xrLecGn0S3CswM9q4N
JraU9fS32v0ocH+PyOt3fwz3tbBtnNFjtJP6ltMcyGB2FHXUwrqHRipyPbGuucjPF1tPnDad
ptflC2gs4Fhj3CK2lu44xRC8jlHeg+2qgVpgz/XW9yrnddssdh+Q7T479iK92Dd2hSWOdRX2
J9QTS35ZU05OME+NO/hXzfHtpxja+SbsszTXPHtxEEbEUL27hWQV7OpfM98XzVFXu+yW3KPj
q8500UdlvNjuAgvTCoWK6gk0AF0GXuqZB6h1xrfwzjAFAsdCdRr16VGObpA0ctQrkKaD0yxJ
Y7FY295vNnazF9FzPFG2n7tLOFan4YKsbrkfxFdbdNyf9HeC5h488EzRSZNJa3KlxmB98Y7d
8LHF+daDevh7jW2bC+9zXUjLb+xcGGQ6VkglRdUJIz9wMSV8emKQ9WsLv+wcek2Ib9x6+Mlr
Fcfp9w26X0zRmT/xSqDmyN0PhhzCveN7ZHdfEfKJ7J6Nbe2Nxt5wHjkiHqWSLoUmSpo34YOf
kdTwfwnacduuW21luVuLm7cSiIONUZT2mBDKeuA8zxkeabbsm37/AH9rsxkW3iuJYmhkOpka
JyoANcxTG+qOZjU7JtQl+GuS31ow1RPH+utZgHjdVYFJ4T90cy1PlTGZNq6DtPANhm4Bb8sv
LllaO6lhurVyAkiKaD28qhl8+uLPTf8ACK++ONvvI9i3fabwJsm9Xv8AT5JJxQ282oqlTXNX
oc+2GDPVvZ/HXFzuW4WlxBObvZInfc7DWf5sSemR4HbLUisJFHfDIuvh5JdrapdSx2shltw5
MMhFGMf5a0y6dcFS84JsdvvnLtq2y5LezdTrHIVzKg988sZ0zlvbHj1nyLku5/Ht7Akcm3i4
Fru8fpeK4t1DKyf/ALt6+pWxrcZs1kI/j2fcdo1bBIb7ebGSWHedpyM6e21FuIe7xHv3GKxu
uLftg27btq2a/guhNc3GuLdLEijo6sWWRT/+DdKLn0bBglbPlWy8Zu/hex3zbLZLe6sdxW2j
mUUMkczDUsv8dKjPyxrhf0cLbZacP4xsnJxFFex77E6yW8oDKGViJEPihWmCI2/fF0l1d8cv
ePwBrbkVpLdixJ1PC9uuuZYq/cpX7F8csEmn/CwfjHHJPhvkk62yDcNjmhnt7sqRIGcqrISR
muliNPjjU+RZql+I9ui5Fdbrwy9iRrferOR7e4Nddvc241wyRt2APWnXGbMp+ux5zPbPBPLb
zaTJC7JI6moLIdJK+AJGWHoAqwfSDQj7q/4fswCnASQeqp1E0ANKUxFE+rIkAUPb/DENP7Ku
pVvs6+HXucChnOWnTVVA+pxRfJlCBga0Y1oM+3nhXkN7i19fpHcePjhFuDIVjpYUPl4DxwNQ
B9qJyFWgGdBikFNVT6QSAwyGfTwyxBKfbyFfSB9nYVwa1pUFMqjOoH7uuAYCsbHImpBHStMK
w6J40oozGJEDWgXqegOQxH5Jft1FqsRkDiBh7ekPn6iK9xXDiOlVfJjVsxpHbARkJQswAYnM
HxxIlGlQw6np4jEYQpQmmlsxQ9MRAFi1BdNak0Ydc8QpqqZGjYHL008TTxxMpDCHRWDEhc6Y
CEmlCRQA0r/hhFpaiEOv7xnSmXXLPEtOfUc8qjKo/wBcBMhqdDfhXywoQjVSWZSCTka1piOm
Fc65Cta9a08cSNIUeQhgFL5jDiOVAGkAVr6a9cCJwy6VYhnJOoAZ/jiAQoEgzppP3eJ8vpiG
H9s0IrWhJr/zxGFVgwBauVP29hiJtIP29Dn+z/LEtJidOpTQnsO30xC0y6SQSaL1qD1/ZgRS
UNEr6ACB/wA8QJNZI0/aAKV708MOknqa0IEn5czSvniMHUEUqwc09X+OI1EZVSpUkBvSV8Di
Z1KZCo1U1gUqO37BgOo/5ZZmqVJ/+zwo4Ri9S1FUVUUr07Z9cCKTWT6Vr2zOkU8cIMsbE1Jy
6+GIE5cZ0Cp0Ne9cRDIpDg1FAKNnkcWpKaaB/uyU+GDSDWlNLff2NeuIaGSVifbAAJyJ8RiH
pwoyOYr1AzGWJoa1zLdKdPLBaiOpQAyggiqgZfv6YkB+pcjP8tDXph1YJWoGYAZ508DhQEcy
MwVs6VrQ0z74lC9SuRkyr0bp+7BSLV68h+ByGJm0xLEk68wa5+HnijU6CABSlCC1Tp8cVUGQ
qrmzBanr44GpTPGppIQS2QVe1BnX9uE/Um0BCWBNQSSMjXyxM0nce37kcekBalR/zxDDJKwX
RJUl66R4D64ltF6FHpXw1E+GJEixmTT26gYDh9RZvFD91TTp064UEvmQGpToDl+zEd0zaZG0
E6ezHoB36YhYeMgMMvSMyM6HCyd5QPSh9Z8ftpgOnikYLpIK07t3wkDBg5AkAbqo6hh9cCiS
EjSVbOpNVIyz7YLTJoDEqhs6LWoHWmDWoRBLUV8qZUOf0ONLRByQPQG/3eNPDyGIW6Yh9PUn
xp/ngZRosgUHQS+ZFchTA1A3CDV6spMx16HrXFAy26FvfoDVwa5nM/UY6azHN7z+I/54y6a7
NskDXGsgdRU+X/LHTlysez/GnJd92W6afa9uO6EpSeD2nlXRUZkpmK433ljVyR1865Hab3cm
7Tj8m03zNS5Bkcox6k6HAK488i59ZFJKEsxApUEdRjYTFSStT9ueqnj40wGRZbFyLdNm3Bb3
brpraWP7lUAowPTUpyOGrpqNw+U+QX8DxzW9ilxJQNc28AilJp1OkkYwpFbtPyByTbo5IEki
nglb+ZDdp7qk1yKavUD5jGtMkRWvOORWG7NuNpKtvLWntqhZKVzBVvScallZ6+sW+5fLO+X6
FGtbKKZhpaeGARykHvVT44zJ6x9nPxv5L5hxxJYdvuke3m9U0U0SyAk9Tn3xqm2qyLke8R7r
Lusc4guSxkcxoqx1P+z7aYGljuXPt53a1Cyi3VtNPfgi9st4EgEgnAuY49m5jvOxz/8Awrj2
neokjP2mvZs8as8Njq2zm+87Vur7rBcRxyy5SI6h4mr1DIfHGBsWu5fJ+5X0cgt7WzsZnyae
0iMTNQdTQ6Sfww8xnSsflLebe0WG/srDcSlFWW4gVZwD4SKVZj54bEFPkne4r03SiFovs/RT
KJYNJNSul64sPifcvk6/uoCltt1lttAVkksw6KfH0AlfxxD4Sbd8pX8VqIbraNtu5zmtxJFS
YgdP5iaTXzw3h08dO0c+VNwO47ps8t8AGWCaCSQGNepA9LA0xMKXlnJoN83A3dtCYIWACKXB
OXiQP34zMayM+ZBrBqdHYDMVxuM69G4J8jW2x7XLZyxP/MJeKUDUhNPtkU9q4KnBuXyTuE4Z
bSzt9vZidctpqUPXvoYkDANSRfJ9+9tHa7jtVhflEAju5UdZcv4mRh/hixa4tv5ncWe7LuCW
NtM75+y8QZNS5AqfuqPHGrfF0tt2+U73dViL7XZw3NqwaC7t/c9xCuRGkmh/HBig5fljdrpD
HuO32V+0aj27l1Mc6U60dDXEcclh8l73a30k9IbmCYaBaXGqRNP1J1V864qHTd/KW9zWL2Nv
Z29lb0oixBm0V/MjPVvwwEyfI0jQIm4bLY3twi6VvNLQy0/iOg0Jwj6uKDn+7Qw3NmiQ/pp1
NYFWhTUc2U4MNcF/yq+vtuFjeolwYsobll0yIP4dXcYZGJFSrKQNPfucsTS22Lk+6bHcvJaF
PZkASeCddUEgPZ1/wIzxaK7N25ZNfHTbwrt6sRqitnYgZdQTniErsTnl09l+mv7WG8liGiC8
A0zItPzEZSU8xibcO3c23ewuZLnUtxG40SWlyPchkTsCreHY4cAt65TJeqHhgWwjNKx27sQa
eBbPAa7pOfXE1qkd5aQ3N7GAqbiKxzsAMhJT0v8AU4EprXlO+2u5ruFpcNBdpUK6mvpPVXr1
Bxuc+MXpLvHKt63m/hvr+ZGu4lCpLGgjcD6r1xYFlPz65urT27qzt5b+MaV3FaxyMB09xV9L
n9mM56bHVsnybeWlg23blY2+52ZJdBcahIhbMioDVXFYuaquQb1sW5Khs9ki2uWpJmgkbSfI
ow7+Iw8nBnnvJ12d9le5E1gyhAkiK7Ko/hc54saRbDyi92lJYUjiurK4NbiwuBrjc/xChBVv
MYMGH3XlN7dXAktVaxjU1SFHY6W6g1/wywiu2fnlxc2o/U2MJ3Kg1bhEWjdiO8idNVfCmIRz
7HzHc9l3Rr9Y472S4H/yVuAGEteuZzBwX1px7zviXe8Hctvtk2pjpolt6FVhnVSO9c8alS3v
Ody3FqjS2USbmoAbcYiUaQgU1Og9OrzGCs9eKzbeW7xtm5Nudlcul5IKTOTqV9X8YPX8cNh5
Rb/vlzvm5PuF2IxcSaRL7S6QSBQmnjg1m/KsWQLJUD09MKaYc1vhxWTj8iLJa5exIKB0UHVp
y888B8U23btf7beRXllKYp4mBVuxHgw7g98P1P2am75vx69SW6ueOpHu8g/nXMEx0M3SpjYf
ae4rgyr5Vu1cm2e3DwbrsVrudoXrEmopJED/AAsa1HkemDFsHyPlo3azhsII3hsYGLRxTyCZ
48qBUkorBadsMjn9kj8xS949Hsm9Wv6tbU6rK8R9EqELQBhTSy0yxSN30ex82Frtj7Lu+3pv
GzlxIkLyGN4yP4CMiK554rVLrol+RpLa4tjtdq0NjbGqWdy4lAXoUVwAdNOlemKXTWZ3zck3
TeLjclQw/rHMjpWoUntjTGRbcO5Tt2w3Rnm2xb2YHVDcaykkZHYE1XP6Yyef8LS/+Q7X/wBi
/r237e9tdOgiuYJXDxSxgAflAKNQdsWa1Lir2jmt7tnI7jeoIo2/UlhPauNSlJDqKhhTMeOG
8szpseN8z2ZrW4WK9i2TcJ5HeeC6VprSYO1Q9QcpB0yOMRrWX53Lxu5mimsFj/XsSLuS1LNa
S0H3qHFUfxGHVGPoDqIyp37Ux0jFjYbNzuxj2NNj5Btn9W2yF/ctU1mN42Uk01DqM64xjdrk
Tlq2O+x7lsVt+jRQFMUxDh1rUo+jTVcvrjVqi1TnuyWV3+v2bZpNr3WYH35Y5Q8DknUVaNh0
Jxm0bJVVuvNLzdeN22zXUKCSzl92K5jrmtT6GU9fupghtDyPmd7vu32FvNEsM9mCrSihEi5A
emmVAMKDyvmz8ni24TRJFc2EbRu0belwSKEKc1yXFGeprp4Xy3a+Oyma52v9TdKSY7mN9DhW
yKPq9JX6Z4sPw6xybiNxy6Xdms7qzjvfXNJqWRrecEFZ4wPuXIVU4q1LGm5DufHdyt2k3rcL
DcoJVpHc2DGC+hpksiwsSHJ/MuJix5CwCMyKdS6mo3TUAepHicOKOnbryG0uPcmto723I0zW
0tSjJ1K5EEHwIOKlrYubcbsbRk2TbriyFzH7d1Y3MguLORG6oyPmKdiMB+Vxfcx4bccI2Syv
7M7ilsyxXFpqaCeFo0NHQjIqemeCCqWb5M3GHdrS8sS0trZRvbpDdhS8sMhBaKVlyNKZHEsd
u0fJWw7JfSS7TtMqW94pF1BLKrBWArqhk+7TXqpGN5rP2xW2fOdmudktdn5Tt0m421jI0ljN
byCN0RqgJIMg1AeuDP0dyOCy5omw8i/qGx2p/TaPalhuaD3oj+VilQD5jFZsXNHuHIODT2br
t20XdvcTIROk82uJqmpKtXWpB6HFLgyVjZY466vOvnnh1r6SLninLLzjO6G+t1E0U6ezd2sn
2SRH7kPhXscZpjX7P8k8c2GW5batsuVsb+Mrc20jIXiJBoYZB1FW+1v24t1W+YyvEeZNsIub
e4g/VbXeAxX1pqKF1aoDq/5XAOK+VmZmM3uo2xbqZdr907eGBg92gkC/wNTIkeOLVK5F1N0J
HiR1p9cOmVvbT5B41uWx2G08wsri8faKrt19aPoYxkCgcAj1JQUPTBivz4peZc2m33dLS51s
67YqrBPIoWV9LagZaZaq+GWIyel8k83HLt1ttxjgEEyWiW06EhgJEZmJFOx1dDgxnZKj4xzy
TaeP7vsdxCs1tuSZS1CMk1AoOfVafjhxrq7FBsG62djfU3Xb13Xb5RpnhaQxSDM5xOv2tgnH
6U8bO553w+HZ7mx2a3vZbG+Qw3m27nSeOhX0SRyai6vGftIxtw63Ulp8k8b3nj227bzO3vTf
7PGYbW6sJNHuxkKFaRKr6l0DyOMu3zGE3zdrSfeRd7YJI44XVoZJaltUZ1K5Qlgpr1WtMFqX
fyLzPauZR226TxzWnJ4YUtblVAa1mVKkSKa1Q+vpjc+F1xrBtWoB6j7vHGFIuuK8s3fjW4vc
WDKYbiM297ZyD+VNBIaMjkdPIjpjU+TYv5OcbJtEVzFw1J7CHc7eS3vra4pJ7YlUo0es/wDm
XMlGIqMVow0nyc22RbPPxWN9p3G1iWLckjNYXaIAelT1SWmplP5umMwwpOdcQt+R3U0ewa9i
32NRvm1MaCK5B1e9Zy9UNSWAxqYx1xn/AIW0HP8AgvGtu3O34f8ArJYd4tpLO+2+9GXrRlSR
ZQfSY9Ry71xWthHPuC8q2HaLbnNteJumywmzgurRVeKSMUKOyEj1KEGXfGZcZ6515zySTa5N
zkm2uNo7RmLRl8tbfxhDUorDPT2OG9auf5+KtGIahI0r1Y/bU4C3Hxvy/bdtW+2HkJaXjO+R
tb3zxkhoG6R3Ea/xIaV8Rilb8sLgnM04nd7xtN/H/Vdk3NHsdwNs2iRo1YhJoGIArpNaUw3y
ueeY3rfNPHIJ4Lm3muLi/wBptH2+zd49Ed/ZMARDMvWGZaCj9CcMsZy1xwfKPGtzTbN33YXF
nyfakjhkihXVZXohNUebSyEtp9PiPpi1vM9VXLfkLhvJecnd9x225baZ4YVaSJ9F3bzxrpLR
sMmUdwcZaiTl3NOHz8dutlsrm53u2uVDWzX6MlzZ3MeSXEUxqWUD0lT1wwZ+HlyMoqa5n7Kj
IYFfGp4bze72MX9k8Zvtg3aMW+6beG9tnTprieh0yKOhOGG/C53Dl+z7Vs+48e4q8lxsO5oq
3CXiaGChgwRlzq6006hkRiokqxm+Vl2zcrKXi9v7GyXEEY3japjqjkrVJbcVqCmiuh/uzxlX
dz8M7HLwJOYmSGO9TjMxrCwI9+3diGAo33KjZUPbDerTzxI13y9yfhXJLi133Z7+Zd/to0tm
heFkikRCTqjY/aQTX1YZfMHXNl1E/PePb5Pacj3h57Pl+2xRxCZBqguBb10SRGh9qf1fm9OB
dftX2nyhdXt3vK79bC52TkGlt1soDolDKQBNAT9sg098sOiSg3nlO122xbjxXZJ2utlv3ilk
neMxMWjYMCFJycUCuRke2BZfy6OPbVw684DuQ3ILa7vbxvPtO6K51PLGCxtJUJoDJ0GWdcjl
ikatYolguagydz06DMYyUm3X81lucF7H/wCW3dZLev260IYVH1GNY5/f17RN8o8Pv5twuru5
ktYOVWSWm7wiNmeyuIIyizilRJCa9vVihU3KflXa9/4Pd8dcPFuULWwtrxEYQ3K25pqGr1L6
RWh740zlseXe5E+ZGTZCNiaVHT6YzfTz49e4rv8A8aWPBd22Wfc5YJd9gC3SSRM0kMukjoo0
sgbOozpgjdvjDcQ5H/6vzK03VGS6hspGDe3VfdhZSradVKVU5V74KzxXPzF9ll365v8AY7z9
ZYbjLJde26NHLE0zFjG9RStT1GNVqPQOO7z8c23x9u3Hpd4ktm3qNNXuRM0kMqADSdC6WWq9
R2wwdMvNyzbV+KpeKFvc3C23IzW8yj+TNA1akVzFPA4L7S7rHlfG7vgG0cXvJprZ7beEubmW
JOluwdmdD/FGz9O+My4rHqPJrBt2R5b6CW1ikgET8k28rJGxZdKXJC5+0ymjjsMdOaq+abqB
rSW4tEcSmBzFHMp/luq5Ky0/LjHQ11bBvV9s27W252TKL21kSSIsPSWQ19fke+LG+XoX/vHF
rHfbjm+2XMkW9XjNNc7VcAsFkkFJFiIGlo3pp8cTn1cuKfbOZWOy2cO97RW35P7sn6haVRlc
6lapr6aNoZT164ZWoruX7xsW8ptu+BTFvs0kke/2UQohA9SXUP5V110FPH950zecut3fXnx3
cfFE/ErTkMaSrOl9atOH16wQ3tsAoAOVPDBy1/SfaYymzcl2ffON7fxTklz+ji2l5G2ncwKo
NYqYbgAf+Mn84GWG1SZAbp8pXkU3Hk21tEvF3l/R3C9XSWimNwRmg05V6g4oJfVzY/L1vd8C
5ht26Rwpul8qy2NuEPtzF3AkXOp1D7sMaqs+Jd02jYDf8mnukjv7OD/4SEk+qhYxSr3SdaoG
X7TQ4r7TevHnd5Ot1dS3Xt6fekeUKMtOti+k/wDbWmLpnEDVb1NQP1J8sCCI6qK+kg1zyH/B
xLCdSHqvWnpHYfXEzQlhU6VIpkSfHESDMtVoD4V6VwLTMtTQGnl/lhU9C0Wp2K+og1DE+Hlh
1WEiZaqeoVoc+nhg0yYZ/bojDv3PUHEBMTmMq+I6eWJExPp0jVTMU8+vTxxI6rrUO2X8IPbA
iX1AiiqRmCPAYmiJzOlgCD3PU+OHBQOGcU7kgEgf4YohAsNStQgUCr4nEKSFwCMicjn49MsA
04LV0sSWzzP2mnbApTqiglpDTPOmJokKlznkT6aZ5Yjog0fU9K+rzriX2AxYlgNIHgKigwjS
QjJaBmJrq8frXEjM/qMlCe+jpTzxIipNGJJJNQoyz8cDNhFVzq3r+6hHXPoMLUhGQCIBqsT0
z8cS0jUUz9fSoHjh1akckpkSVGVe9fpjJOilgRTMedf8MJxGyLkKUqfVnkcQOpCerTqI6eH7
MQ0ySNqJU0JyAPhhQzRk0/aBmABXpgQURQfuIBNSR+7AT0ANRm1enYAd8IploFPT6DoaYIzC
OkrqyDMK6fPtl2wtGQx6qhRkCMxkD5YiIagQoANfurll5YECQhWBzI6GnjhRqFjqYAU6f44k
Z5WWRAG0vXI+GfjiQQsnuMXFB3r9p88QSLIyrp6u1a1FBTyxJIWGkn8w8hXp188BOHUtqPqV
B6gMuuJIkJNdZ7kqACaDECMisTTJehJFM+4+mESlGEatDqYde9MB0nCNqV8xlSnXPviISwJL
DoRpAB7jviJnL+gmhpnVzTAxYOOoFZHqgFQDUftxGQ9JPc61J+xRkD+zEYTp6RRaMcyen44m
kcgB0leo+tPr5YhRLqVWJABJ9JzrQ+GJr8BSNifQNC5VqaVI8caYSFXA0A0NM86g18xgIQ61
zOf07jDVRgCRQB6VYeqvanngFiPNTp+5WpqJpX6nEMCvoUgihFTp8RXt4YmoLSC5AX1L45it
MRhlYaSRQZDLt+GJfYAcAFiahjQVyp2xF0agEDVogHqJ/wCOmIWgAUFiQWHUYhoS4ZjoBCn7
lOdP2YjJqRUOo0Iz6EmlfpiMgKsV0lKdaqepBxATpqQEGjLUhTTt2wGEWj0n00yq1RmMKvRi
5FCpowFaE9R2piBVTSutBQfur4jvgV8HIVddKULVrUDriWodNNQNM+tD0woUOpwScgv2t4+G
Aidm/O2sjKoFa4mpUZRVkUNk1anxH0xSs9JGjyqhqRkARQftwshJcyDIgdx4jFUeravS5NOt
cqZYGtRXT6V9Z+37aDOp/wBcOKzGV3IFp9YOvX6q96dKY0zHHWTx76emAuyw1m98c+g6UrjU
ZsfRnwHO8E940LNE+gKsgqpqfMY1/WJz/KPIt6vN4kt7i6eeKI6UDgE0HUVpkMcpGuWD0B1J
C1Uj8vWvb6Yl+UnteijGmWY/xwxq1ruA8i2XaNwX+o2cc5uCohuGoVjHTUVcEGnU1xvNZ3Xs
b7ZFve1zNuFjsu6bc51QX22BBNGD0rp0sG+mMWMqS5tNn4zFHBDtttfWtzIqNHdxKx1MQK1I
1fsOM4qHcLHiXFt1h3NLNEivZKRPIFKwHxAcMGTx741FedXl7t1ru+x3s247fs24WIVmh3Ha
9BljFO+gBq+NMU8OPAL5LaO+uLeBSiI2iNMy1B3Nc8VFb74t49t+4Q3G4XEYkltnAiYgNpIO
ZKnJvDDZjMaySx2Tku2XlxNtcG37naO3tbhaIsbEp2lUDScutBjN8anjg4lzXeLu+Tbrrbtp
kSPJrqSzUM4GXqKkLU+ONKtEt3wt+SSQSWe27TvHt192REFvIT9oqwZK/gMUg+XZunDY972i
dty2raf1Y1CG+28qhX+A6gf8sUEil4hwfZdnkMW+bd+tmlViDLGzaR4IQKUp+3B1TivtJuGb
Jyy425bCCJpz/KNzpaFa9FZZainninpi85Jx/brnYJ7veNlsbG4ofZ3HaiCpA+3JKj61xfDN
rh4TsnAr7jNwtxtjT3EbsZJJMjVQQCjKcXXVaZ/4/wB53LaeWS7fYTvFZSa1MJo6tQ5DSRTo
cbk2My+uT5RET7/IYoUhV1V30KI1L/magA64xjUrGiFanTmaZ06UPXCW/wCCfHHHt82l72/3
O5s5I2pKiKugoBWuoggN44drOO25+LNjvrdL3j+8ySRM1DFeRZgDqwdKA/SmKxWSm3Lg/ANu
jSDc94vLC8dNRmMKzQk92XSuqn1xnCotv2jabLlNtAN0/W2Lspt7+zVdXWqn2pK9D9wwyLW8
37bv0/Kdmke4gn1EgXUduIZMuvuKPSx88Wp0cq4PwzdL+GOTc/0O43B0xi3iQox8WjorZ4JR
jObV8VWa7jd2e4XNxcPbKHj/AKfErF1YmjEOPD8o6Y3bo1NuXw3GiJLtu5zqjOBJFfWzpIo6
/cMj4dMZiuubcuEcHsmW3vt9u9uvSgOmSFZoRX81YhXT+OI/K72L4/2e44vIrR219Mpf9Pu8
BNSvUNUUp/2t0wWs3xhOT8d2valia23P9TI/W2ZKsviS65fhijTP/kyIBWtA2NrXpHC+ObdH
x873cxreTRly8EqB46IK6KHIlvE9MCxYPwXjG9Wtnum3QttLz/8A41aIxljy7rrzB8sFF8Sw
7RxF9wtuMXW0JJM6My7jbkxyx5Vrqr6jl0bFIpdK34FtGxxXW43Kxbo1sGJiuEBQwgVppOWs
9/3YlmIZeCcb3yws9x26B9njuM57JT7iDwZNX2k9xhlxqVOux8Um3NOM3u1Ibj26puNt/LcA
LUEno/8A9WC+s/LLHhux2fI5to3veP0MKoGt7+NPSxemgSBwQtRjWmcpBwDb4+Uw7THu0W4W
0yGVLqADID8rBGYV+hwejGjTjvEr7d24xe7SsNykJZN1tW0SDSoIJXo/XOoxDdc+0bZxraXu
Nnbbo923e3kNJLyP+RLqPpKzdIsjQhsVg5qLn3FOPWe2x7hFY/0Xc2Ye7Zq+uGTV1KdsvFcs
DanueD8ebjw3TauRwzzRR67qxuAEkqRmiAHVUHpUYdqW/HeObJYcTTfry2j3MSJWS2nUUX1a
fTSn7euEV1XHxtx69ktNx2qSTb7a8UPLZOzSBGpUBHzNMUPkEnG+J7vucnHZdtNhuNojGPdL
ckepf40qRIPEnBYzPXnm9bXNtm63W3SESSWrlGkU+kgdCP24sP4dvDNgh33dzYyTfpwq6y2n
VU06VH+OHBzdbVePcT3fcpuOCzO3bnax1Xc7epDFR/8AaxnKQHucjipzWX2bhu1Schvdm3fd
126W3JSGYge275HSC2kdM8zhus8qnkvH5Ni3R7FryG8yDLcQGsZVui96HxzxYPyrM6AMKdss
WHW/2X4zsrzYYN8ut5FrbOjSToIgxiCkgk6mXUKDtgvy1Yhk+Ktxa7gG339tuW23YLQ3kWT5
Z6SlSNRHTOmGdM/R3S/EcEkTxWG9OdwAqLW9tWgApmVLioqcZtMjl2n4yD7W99vm5nbFR3jf
RF76xuhowmYEaCO3bCs1WXnD9qtbmF332G52S4Yp/U7JRNJE46e9BWoB8QTg2qc41nM/jjiV
vssF/tl7b2Nz7Y0tKxFvckLXUPu0Owzyyxc6uufFDs3x5Z3mwQ7zu27ja7aYkwziISRgA00u
xZdJJGWGrlP/APdRuBv47eLcraXb7tHkstyUEjUuYSSOuVR3Bwa1g7j4i06bKz3y3feSDINu
nUIHAHq9t1Zq17VGHf25/W/hVcF2WvLF2+8kW03KDWP09zD78Luho0cgqMqd8VPOu654X/U+
aX2zRC32lo4/eiSJjLATQHImjBWr0/Lh3Dm/LouPicjXbWG7w3W8Rp7sm2yr7VVHX23DMCK5
KSPrjO0yIbD4zgk2y3vN03UbXLdnRbxyxBojJqKiN3DAq1RTFKcji5zxCz4+0MUFy5unUe7a
S+qhUAF4pMtUbHoCKjEz+WQUZALl2Ne2NasbnbPjnb5dmtty3bextUN2uuJjEskRBNAA5ZaN
4qRjO0/Vx23xvuM25tawXlve2ZT3I7u0cSB4waH0Eikq/wABOeH1THXL8YRXUE7cc3+Dd7uB
S01gy+xJQGlBm2k1/i74dZvH5SQfFkH9Jt91vd6WwspokMkskWpopWy9uUalAWuQb9uCVrHD
P8cb1bbzb2D3VmDcqXs7x3KwTqDTShINJP8Aaf24KZGx+SeLtYcdRbLaLe7sYl1SPktxZHKr
RlaNJGc6g1pii6rGfHG1W+474Y/dgaaNG9uxvIjJBOhUhwWB9DU6GmKMSagg4VPufLdy2y1h
/Q2+3kvLbg/qGii1BaI1V90rq/ZjXTnxx6t7f4y2yS2muDv8Nzt+jRb7jAvphnrXTcwklgh6
VDZHBjt11J8vPblWSTRUMUYqWU9aZBh5eGFnd+Fpxjjd7yC+ktrVtK26CWchdb+3qClkiBBc
rXMDF1cMX1/8cosRbZ99st3uY1LSbcpENyQBX0Rsxqw/hNDgl1W46LX4nnl2623O63q3srO6
iR4ppIm0q7/klqy6CDkD0OJfXVZJ8e73a76uz3zRDWnvQ3MJ9xZ4AaM8CZM7L1KD1UwVO3cf
ifcY7KW52fcoN5aEe49nEDFOFpWqo9dRFOnXDKz1z4Ha/it7jb7a+3Dfbfajcp7sSXS+h4zm
CshZF1eK9RivRvKvg+NN/m3SSxd4jBEgna/tT+oja3LafdiCkGSn5lGYw6ZHduXxLukW3yXe
0bpb737MfuTWsKtDcaPFEYtqp4GhxnSwIZTULmPE9cK1acb4zNvd8LWOVYiKvKSNbhB97JH9
0mkZkLnTBasrR738Ubnb2El9s25W++x26e5PbW4KXCocyyoSdXmOoxqDrz1ZcQ+LONcg4rcb
h/XIhd1BiuAug2rUq8M6Ocx/uy8RjPunJ8vM9126XbNxuNvneKWWBtPvW8glhkFKhkdeoIOH
GLfXBI7LGSWChfzUNKYmo9D274anubC2ub/f7PbJ7qJZ4ba4qAwkFUKSalDg1zpmMU6ovKrs
vink0vK22OeGH9TCI5mjecItzb6hqaBqetade4wa6c41PzRw+12LbIXtNhV7CIARb5bnRLDq
NDDdRqKSZU0yNnh4c+/U/wAefFd0vHW3l7Kz3W5lHu2ELuk1tcW7pRopQR6ZK1o35Tilm+td
c+Y87suA3W9b9uFvb2b7HZ2Thr+3n1TyWUbsQqFae5Iqn81Mhhvfrn/Pc9dnJ/iDedq259w2
vdbLfreFfduobBw1xDAv3SiKpMij81OmHWunftfwRut5t0Fzc71YbddzIs0VvcBgHSQaoisl
aOGBzp06YxreKLYPjK+fmf8A61yC+t9ovo3jdYrosY7uMsC8cEqUHqX7T1/HDZ4Oasfl34fb
idy+4bZdR3GxswpbO6rc2xY+lGWtZU8GGfj44Z8Drr15dJqetKKa1aoz+pxhpreD8Hk5St7c
sStltCxy3qKKyNG1SzIO4XT6h1p0xflW4vd5+PeN7ztO6b3wC4kmi2QJJuu0yksEjKlmks7h
6NIo0klX/wAcsbz8M9deay28/H3I9u3HaoXRLq33yGO52y/tKyxOj01ii+rXCGq6/sw8z9qZ
Uu4cAu9p5dHse+X0NpDcKJLXcVb3LZ4n/wDFKjDs4/i6Hrjn1z+Yt9xefM/xrtXDbvapNrnl
ktN1t9RiloWikjVQ7BlJBD1rTt2yxrm+NVaj424Xsdht1ly27nh3TfUj/pm42lWMbyhfbf2D
lJH6wHrmD5EHGZL8ide48+5tw7eOJ7/Ns27opniGuK5j/wDFLET6ZFzqAw7HpjfU81c9fhnH
9JYMM/y5VGeMnHpfxbwbZLvb7nku/wAay7ChezZ3q6285UESTRrn7ZBpqGamhwfLNmM5YcNv
N723fd52OMvtmxOGe0lfVci3kLaX1j0voC+rvTDPnF8Ta7tv+LOZ3+5Xu3pbCPcLDbxua2km
RuISVAWBwShNH1D9nXDfFst2NFt/wByfcNtS8g3bbY55YlkG3zu6SAumtArUKnXlRhl+/FP8
tazu0/FnKtz3C7sbpU2V7GZbW7ub0ExRTyA+2r6K+l6ZSfbizDPjXRzv4g5Rw+z/AKrdPBuG
1Aqkt/YuZUheTJfeUgOqsctXTzwyaz9s+WGJr1NKAVAzFBlgxbK2/BeCNvdreb3eSBNj22ZI
b5VqGVWXWGkb8sZGRYV0nPpjJ+IseQ/H+zPtNxyThl9Lf8bsbgW+521ypFxZl6ev3DlPCNX3
DP6jGr/kc9qbceAch2/mKcXulRL2YpJbXSEvbSWrmi3SuASIeupvy98DV6Tw/H/JP/Yn2C7g
/SXUEum4lPqiCD1akcfcpT1V8MH1o+0aD5P+L7Hi2x7TuNhfvdLdn9Pe284BkWcLrrGV9IXt
+8HCPvdeeDVTJgF65+PYYtawYYFQKVI796d8AdFnbvdXMFnFT3J5EijZugLsFFe9KnEYuN44
TyTZbrcrfcLF1Ta5Io7+eP1xqJTpglPhHJ+Vv20xrfGerI0H/wByvOFRJ5o7eG2ldE/VNIfb
iWZA0c0lAT7eekkdD1xhr7KrmPx1ynh8kP8AWLdfYuDS1uoH9yB3pUqWoNLUFaHr2xqOWe+r
nZeDWd18bbxyK7QPNDWWxvrRmeaCaLIw3UIH/jk7v264Oflu+Q/xl8Z3HMp2lnuFi2+ItHeL
HKP1cTaDokVWBDJqp+/FtOKLl/Bd94lu36Dd0UiQVs7yLOG4jH5lrmG8VPTHS+zYOZ6ubXg8
UvxnunIrsapYXV9vvrdjIYmQ6XtrqLskmoersaZ458/I/pLFdsPx5yTe7G03Ozig/p9xO1qt
1LJpjjnQ0pLkSmominoT4YzTKHkXxrzPZN1tdt3CwJnvm0WUkTB4ZDUDSrj83qHpIrjUK+i+
DOY0tp7ma0h26aqpuCOZVhen8v3UIVlVnGhj+XEx3zWA3GyvLC9utuu4xFdW0rRzorB09xT6
gr/mHgcavOU/z62epNh2q+3neLTa7Wn6y7kEVurnSNRPc+WC43J69HvfjqKS3m2TYt7uDye0
UySbI8rotzbhav7VCBHIuf8ALb/nhkF6/Tz294xvNps9rvcsJG2XTyQwOtaLJCdPtS/wOT9t
euM2ZVKmbhPJxb7VcLZPOm8s8W3LEQf5yEqYpK6dLmlQD1GE63PKvjPbOM8CkvdysblNwuoI
pLbc0z/S3gye1ukrpETN9r9cHM1jvmVUcJ438Y7vsqnfOQTbRuwfRLAwBjkB+1ojpbr0PeuK
wzmyeOnm/wATR7VJtP8A6xeS7rbbtJJbqk4pJFNCvuMWoANHt+rpXDJ41PFVyT4h53se0Sbz
eWcM22RgGW4tJhcaYmFRKaAfy+5Ixci3E/GvjiyuLCLdOS7i+1bbeWzXFjdppK5kqrvXNlVh
kAmsxKoqvqAB0SL+IxRqMls/xpx/a9kfeeXyzxWb3c9jM9sdXsTwsVWoAYyJJpNCOnfGrN+F
L+1Dv2z8CsdxtL/Zdxl3fYJZhFfWTqYruCmZ06wNasK+ofaRTBngsyvQflDiXxlHwjZt0s7h
7LcjaaNmuxE3/wApFAb27hVFKsMtR6HFw3nrJ8f4DxGLi1hyfllzPBY7iZYFlttVYZkYgB1F
Sysqk6h0xSM95IsP/uh2GffONT7duUt3xLkUjwxzf+G5SRFY6fUDqB05GmKtQV18X8Ca8ueM
2+9tbcvsy0cBnOu1mZQSFLAAIzjqK+k4ZMPV88ZPk3x//R+F8f5Ikuu43Jpra/tWoUSaGRx7
sbj8pVcx+zGued1xvfwbmXBLPZeJ8Y5DazNIm+RyLcQSZaJoqnUjfwMuVOv7cc58OvwxAfUS
C3rOWoigoMA1E1Sc9WRoD2+oxM300gkoV1DqDkfPvjSkOoUhqAaAKEYm4Cnpp36DvTEsEGy9
YqVHpPf64FoXZc61I6AfXEL0ZDpyJyIyFM6UwA6BD6dORyYDtiUFUBSEWqdx4Z4G4QI01ApX
pXxwnQRGSrMVqq0ypSgwgsmK0NRStQMgMQExIpTOuQ8DiRmrrIIoV6jr1HliRmIKFW/7qjx8
j/liVCSqkdad/PEqeNtCkDLw8VJ6YDBlsvV9vQHBRTdaBSR0K9gcQwMkmmgyJboaZ4RKlOQo
Tl2FO/jiblMAaBVyIrqoOgOBEClQa0p1/wCmFB0iokALHzGdPDEBsuqlGp5DxwkOb5vkoP4Z
YlggCQDq0+OWRp0wELKh/LXKpPT8MS0y6NNRlXJV64gdT0UGtPu/1piBZ/bnTx88SGW9LVNC
OletfPBTaSijVIqD3xIJ1IutgKn83c4BDghs6kBvVTr+GGEwidlVg9D0K/40wozRnVk9B1bx
GIYRRW71pQr5/twESxOwBahIzYeFe2LQSq2QqdB7V7/88MEhgQzaFOk9anw8MTR5KKQDlq8f
HEsPrXTp69s8sSCY89RANOunBQFB7UZbIHsPEnt5YF8CJNdIP1NKivl/rhaEo1ao3UGM9Tln
5YQiHvRliW9Na0Azp4Ygdiqsat6OtT1z+mCtRIEBOpQCpzA8cB8BKq06+k9jlTEjNRmUPkwA
0Hsex/HCBZZroOnqCe9OueIBbRoBGRp1PX8PPEj+2SCB2oWP78UrOBdjUqh9QAoeoxGBkzFU
bPPXX/TCYdjpqwJ6d+pB74CJAYiznq2XdgR+OLDTFUZ1pmADXrn+zEBEhggY6V7qO3hXEaLQ
FWtevbriAFkCtVVJPQBcCOgbTWpFc9eR/wD0sSGHYPnQin3dsvLALUK1UdMxWi5nLDqHKNLo
XNNfTy/5YkVQGJyJJNfPDEYxscjkT51OKmEFBKqx9IA0nzwHSUuMiwEerSpJrWvhTChMVCFc
tWdD1+n7cSCAGpUZOBU9wBgGnVEJyJIVqgdDUZHAZIExrUMshPfQc8RO0shyrWv5u/0womWk
3qHp6iuRGBaM+pSAukDqMqnCZTD3JGJKhV+0yDy6CmJZoChCk5Bulc8vrixU7j0MHpQkZnqT
/pgZGjUWooKZHz8MRkRMGVg1ciKhh/phQVWh10JNKlelR9BisGJWBfQ1dBpmuRHnXGWTBQ0l
CaDuPDCZBatDfb1618u2FBkjFGDAdKr5nyGJrAqx9oSAAj+CtSMVakRSe3oYgU1GoYdFwyGy
4ye4qHuifHv3P1xusYiqfAeGAY7dslJu1pQMSKgdfwxrk3rHtXDPlbmvHbNLSxvVituntzQx
yJp/LRtOXlnjXccp1LVhufyzyzdbyC9lmt1vLZibe4ht41KsB0Y56/xxz5mt1eXPzB8n/phb
3ltH7DqtEmsVKtqGTLWinBZ6sctv8tfJG1x+287wxKCTb3FoDGQ2dKFaf/onBaIrZ/l3mc93
78Nwlu/X2Yo1SFgPGOlMbUrltfkTlVrcT3EF3pNyP5yBVKmuZyINMB+VrsnzFzPaIfaguYPZ
YltM1urip8HFMvLBaVdyP5D3jkUqT3YtRKlQJ7aBY5D4etc8vPBgsjgvOZcmv7EWF5u1xPZJ
9sMsjECnTrhngiqjJJJBJqcgOueJauuPcp37j1z+p2e6a1uKCp+5GA6hkORGNJp7z5m5jewP
G36JHfrNBbqklPBXq1M8GFQ7zzTkG72kVruEwnWL85QK7E+JAzxYvig45zLfdiMv9PuPbjkG
iaFwGRqHI0YHPDYza4Jr6aa9a4dj7zEuzD0+qtdQp0P0wHW12z5e5hFZx21y9tfwxgLGbu2W
V1UZU1DTX8cZvjXMQyfKvLVuJJIp4VikGkWwiCxKelVXMj8DhnqyMncXU01w805DvKSxIFKV
NenYY1BVztPMeTbMgj2vc57W3Y1aKNqoxIpqo1RjKVF1PLcSmeRjJISSXOZJPjhVIEBgSWAF
CAB0IzGLU3ez/MHJdssUtEgsbkR5BZYApI/h/lleuD8hxx/Im8rvj7nY7ft1pPKtJYYLbXG5
U1qQxqG81xvxT1b7j8tclv09ifZ7ESRUaGT9NIZY2HRkBPX6YybMOnzJypkMd9DaX3tigW6t
9LA9+hUhsQcDfKnJy80YkgFvMDS1MWpU8TExOpcKctr8g8gg2ybaf5c+3Tmpikj1aa5+ljnn
54sS32n5n3jbov0ZsdvmyoT7IiYgClWEZAOXljOK4hh5pxW4uHn3niNpMJvURZySQEHv6a6S
D+GIYsrXmnxvZzx3VrxWeyngcNHJBcZ/WhNMRd+9fMqSXCGysYbyBUpW/QidGOfpkjIywjVD
afKnJbW6lnMdvPbzffZzoZIgO1DXX9c8VIt8+RLncrMWQ22ysISQx/TIetOyt6V/AYFju275
c3Xb9uWyOzbfLCFoZBE0SMD11BRoJPfEcUdnzW52/fH3rZrO22x2Gn9JEDJEa/dUMQQD5Y1j
CTfvkC/3rcbfcRaW9lf24J923UjWwNauxzNO1cGNLiP5evnVTfbNt97dKKC7KmKQjzKg/uw4
mcl5bu43s7tZyCxumOTWyqg0nsVpQ+deuNSeM0+/8v3vfZ7eXcp1mktwUiYIqZV/MFArjO41
saew+Wry02w2DbNt80aqFkVUaJWFKEtGAVr40wVI7L5Vl2uKWzj2uybaZG1xWMuohC2bAMa1
BPamKROO+57bXUkdzYbFZbbewsGivrQkOB4EKFU176sQd6/LEskqSXOzbfLfDL9XGWhmrWnV
M/30xYemT3Lk1/uG7nc7h1F2zg619ABU+kClMssbrPPix5LznceQ2Fva3kUYaCuqVKgy+BdK
lcsZiuBj+ROVR7N/Rjd+5YMnte3JGruE7KsnXKmWHB+HZxz5D3DaLA7bJaw7nt7motLkfZXu
jAHw6HBjWrDdPmHeL+OOC2s4LH2mV45ELSONGVDrADKR2pijN6SH5Xjb/wCRNsdp/UqAJfQO
0L6gOpoC1PKuBrXJb8i4buss93yXabifcJGLS3lpMV1+H8vUgWnliomLTZuT/Fm0bqm4bfZb
lZXCr7Rk9LqFb+JWdjT6YsPibdflLbF3SW5ttpgupR/4NzQtbTtQZaqKSfocS1T2fyfeAXEO
7WVvu9jdv7jW8o0aGbMhWofT5EYbFK4+R8wsd0SCO12a2so7cgpmJQQPygFVov8AtxQRdW/y
ttx2w7Tc8dtJICuiSCGQxxFSM9KFTp/A4lsVO03nxi5lG6bReQOpYwexcNMKVyWhKUKjviUs
vixg3v472mdb/j8W5W+4x1KJcL7lvJlQq+piQD4riixkeRbzJvG7Sbk0fsvLmYlNVH4nPGvl
nLF9xvn72O1ybRudhFu+1SkuLeY6WUmmStQgr3zGCt7qeXn9la7ja3Wx7Da2UlshV9Z1lkOR
jRwF0qR2xfhzvWdOhvlm+MMyJt8MatKJoG1MxjcENpauTA4Ma+yZ/lv9YZod22W3vdsnVddl
7hLB1GZVmFNJOdCMOHVRynnUe6bbFt1jYnbrFHDey0nvUK9AjUUqvlgis2u6b5Tu7rYhZTWa
/wBThQRwbsj0cDKtVp6shQ50OHGnLx3n4tNrfZ942yPetrdy6wvRGRia0DEEFdWflgsSv5Xv
207oYo7PaI9viiACMzCSUAH7VdaejwDdMPMYsuj3D5C3vcOPf0G8EFxaoFEdzIhM6hfto1aH
p1pisw24y7lqBQag0piYzQ1Ok6aBhnU+XhiWVfy8miu+Ox7RPYQe/AR7G4Q1RylSSrp9rnPr
gsN6cex7s207tDuAhiuhDUNbTrWNlIoa9e2FqXUW9X9tuG6z3ltC1rBKwZbZnMmjIDSGOZFe
mIrC/wCTR3XGYtrlsY1u7VlEG4RHQ7xivolQelmz+7Az1NcHH99Oz7nDuJtY7z2qrJbTCqsr
CjD6+HnjWa0j3e+tL7dru6t4Gt7a4f3EgeRpSi0pp1Hrh6jjuVY3nJ7e945b7TJYLDd2bKYr
+E6NcWfpnj/Mw/ixmRu3XFxzfF2feIr42sd17eoPBN9jKwIbsaZHI4Kub6h3q/gvd1u7u2he
C3uJNcFvIxkZVoPSWNScahjvveTWl5xi22lrBY7uyYab2JyuuPMj3Iuhep+7wxWVddOLjW9x
7LvUd7Jaw3kdDHJbzioKuKHSeqt4EYBkrn3q7tr/AHS8uoY5IbaaQyRxyOZHUN+Uuc2p4nEZ
ziy3TkVnfcVsdrk28W+5WTqRdW7lY5olBA92LoZM+uNTxn+nOuHjG+R7PvMF9LbJewxhg9rJ
kG1AgsGHRgDkfHGbFP8ALl3y/hvd1vbyJXFvcTM8MUzmSVVPRWc1LU8cQjhDgNUDKnqTrga5
ajkfOhu3Etn2F7cwzbVKNFypqrRBCoBHVW8cMh66aa0+aVTc9vvJ9tDGG1NlfIj5TKaFXirm
pGnMNgGeq7d9/wDhy9s7hU43eWd3NqMdzbyqJEfrq9UjJ16gjFp8jzSWVvcZCukA0HjQdMaZ
rV/HnOl4pukl0YP1VvdxGC7ir6wnUMp6VB8euK+NSzE3AOfrxrdL+ZrX9VZbhE8Mqp6ZI1Zi
UIZvScz0OM/5anxjRw/JfB9y45tu28r2Ca8l2kBIGgcBSANOumuMgkdRixn4it2j5B4pxfkN
zd8d2q5/pW5Wxgu7K4kUvE9SVMDVb0+KscXi++qTi3OLbarXddk3SyfcOO7upE1qjaWikp6J
Yq0AP8Xjiak/DUp8n8E3Tj217ZzDj815c7REIoHhcCNqKE1ffGwJVRUHDPFeXm/MH4nLuKPx
m3u7SxoS8N46sUkP/wCDILEr/wBxw+MSX/8ADOGuqoBAAoCe3j0xqU5Xre3fKnDd22La9u5t
s0t9ebMgTbru2kCEjSFDFQyUdQo8cYsa8+TTfOLryqS+/SG92uS2G33UFwQk09spLa3KVVJR
qP25HAsdtr8wcA2Swvts45slzFY7nBKs0bCMPDO8ZUUerPIhr0Y5dvDFi/DEfHXyHDsFhfbD
ulk248d3NNF/Eje3PE9NPuRyE9dPbyw2jzGM3ldqTdrlNpkmk2tJG/RyXQUTGLt7mj01+mNb
4dcoUEtkSNOYxitSPW/iH5G+POJW63W5bXeJvojeE39qTNHPEzaqOjMojZaAZDPGZB1cYXmN
7xdOSy7pxJrtbWaT9UIr0JrjuDJ7hppLVj1dAe2WNWMSY7fkr5Gh5w+23sm0RWG6wRFL2+ha
puCAApK0FAhrStetMOHrnWHBZtIPqCnsMqd8CxuvjX5HteJ2W/W13aPcWu92ht9UZ9UMiqyo
xB6qdedMU8rH9P8A1rQ8M+bTtXCNw4puqNeW09rPBtVxUCSFpUYLE4b7owzZHqMM+Ws/0WFh
8nfHO/bRtLc92+8bfdhhW3trqwLCMqmkiYLrQa1KDIgiv7MVi+t81gPlPnSc15Gm5RKfbto/
00Ujp7bSoufvSRioRm/hrQY1Lkxm/wA99rIh00Kpqo7V61+uMO2+PaeFfJXxdYWkd9f2W5bL
yCFVW4m2mSQW8pUBf1Bh1iPWR96lSCcE8Z/yitfmTZ7blm9tLtKycd5Cgt91WIexNOUUot2q
DJHdT61rmc64bWJzvz+XXJ8pfGm1cS33jHF7O6Wx3a1lFtNIipLFcsmhI5SSWaOvqDE1GYzx
b7pnMkyHT5J+K+U7ftt3zrb7tt/2u2Fmk1oDJAyqQwkMYZQSSM0YEfhisN5/Kj2j5M45te38
w2bbdtkg2vfYgthp6rMoK1MbH0RuDq01Onpnhl9Ys2WK7iHyBt+0cC5VxncIHY77AyWU8dHA
lC6Asi5UUjPVjMnrXMt5xgXkKoM6stAzZZ0/hwzlrrxa8U399g5Dtu8e2Z1sblLkwlqaxG1d
Ne1cV506tPkbkthyXnG7b7tYZLG+Mbxe6KPqWFUaq1YZMpw7+GZPXpPxfz/4p2HjU9vf2t7a
7puFubTd1i1zxyAgqWiao06ga+WMznPUwexzfHe280eG5F5uPFIwP0tx6obmMBgwcqPv0faf
HrjfWWDiY2/yxzH455HLa8h2m5vY+R2JiCRSQEQXEcbV0OCaKVqWr+GMz4xq8+6vT8s/H2/i
Ddt2vN22retCRy2Vu8z2QdAQH9qNlDq1f+7/ABxfB+u+vFuRX9tuG+3d9aNO0Uz+kXcvvyg+
Jk/N5VzxaJHNt95LYX9vdLRpLeRJhXoWjYMMvqM8WnXuLc++Kt23e25rfS3u38rhjj9Cq81t
G8AIA0Jk6tqPev0wap+2f3j5R2vduJ8q22a3azud3uxf2BP81KejWkjCmkjRqH1xbgx5Rrrp
UvrqKntWp8vPFfXO8+pQSCc/MKRnXE6vR+A834svF7vh3LIJ12W4mW6jvLNj7kclACrKM9Pp
FCMGYOrHbzHmXDJfjtuH7IGLW17b3FrP7LxJOqPqdnVs1kUZN2bqMajNur3j/JPh2z4He7DL
c31mm+wf/lJGSSZ45tOkmN0UrQNmKYOfLrV58xjZ+Z7LB8a3nDo5HurpNx/VbfeCJkSeLxZW
zicUrRsauazKyfHt8uNo36y3SNVMlnOk4WQ1V/bNaMewxzz1r3Xt198ncAvUl3leRbtY3049
x9r1TNBG+ijRj2xpFaelg3Wh8sblHTI8b5rwrc+OXHEuXi9j2lb176y3GN6zVZtQScqDVqsc
wOuHc+BJsZfnqfGccdnHxSSee5jeTVdMjoXikz0XJcDVIj/Yyj7TQ9MWmc7V1f8ALuLcn+PL
baN3uJtt37jsZG1TBTLBcUApFIF6MwGnV26+WCXGq6uI8x4JufDbbh3NXn2+2tJ2uLG+tySp
L1OiQKrnq5plTBB16tbn5H4Zsk/F9p2i4k3Tati3Brs3ZRkkS3dHVkZXCl5FaStVABGL8LXl
3NN7tty5pvt9YS67G8vpLi2moUrHIaq2k0YfjivwJ09K+Hk3DlO1XPHr+3tN42C0l9/9DcSG
K8gd6j3rZu4rnQ5ftxS2HwX9w1zTjvFtpm0w3233FwtzbDSrCNY6RS0XLS9BmMia4ZcHXrwy
QVQstRTsT1OLXP2mrTSoJzqanpiwz0enSFNAQf8ADA3mBK0ZumkHOmWXniIBIinUTUDL/pip
MCFBjI+3MdxQ9q4mSRS9Fc+kmhB6j9mAabQxagAIaqgse2Il0OXXp5nywIar6TVqgjNevmMR
iKg1H05n8v8AphQg7EjV3OdT0Iwo4WtGLBgtaoP9RgEC0khdTmScyDnQYlgiCCSBVOhPSvhh
OIy1T00jt4VGJJEAoRpGrTXPPKvXETa1MlQSfHvgQkrQA00kVA65fTEjE1jAY0AOYH7sDNEQ
QKGlaUX6nyxGB1GNAjkNXo3WpPniSRWAXSBTVnqHie2ImVQxFDmDmfHzwoDKwYLqAqa/hhRx
GpQivQ1K+IH1wIiSwoopX/j8MRwQZlBY50yFaV8OmAaZGJFDUmvTtTCieMiqrk47eGIYZAwI
WlCPzYiRlIGQy/48MWqG6agx1A9MWgSPApqSSdNBXtgQCfVSgGrrnliSQsST6hlkpoBWmIhV
nNFr5k+OEk+ktXJicgKHr0y8cQMdZOphqY9B06ZdDgGCCkJkaD+KmWIglcKAKknoCOlcOjRo
jmqvkgppIIqD9MBNIPXQCvgBmfriOmID1ZVoD9xJ/wAzhAhJUrX0gfmp+GIYcjUxLnWpNany
6Ylho2Uqprl/F3xGU7uUAYjr6dKip/HEgEgDUWzPYd/wwDT0LMAKOqipoeh8MSh/cAqhFQcj
5YmtOkYIZGFFBqCe/wBMFIHQnQyrqCnM+GGAbMwHWik+llH7sGCo2VcgzZjM/Q9OmHBgquCo
WmfpNMwf88SJRpBjU6Vrkvev1xELhk7FGJqR2/bhRy2mhJp4U6eeJE3rCovpK1IB88CoE9Qc
ABtJqKk0piI8mJqoK0zIFM/DEjFarRiSD9Qa4haKtJqUpHT0UzwYicUoTTzA6n9uI0nDSKVU
EIcqf44NGaGOBUUBS1KUPhli04Qpk2Yz6HvhWGjRtR0NqJ7gVHn9cIEQwYZlQooPE/h2xLTa
HK+5SoXpU5E/TEhdyynVXKhrUYDDaiD4sT4UK074kkZgH7saCpp44EiXSpAXpU551JPjiOiY
gEsM1I8MycK1GyVAkU1ocxSmI+BU6WK69WvqD0H0xMJmiZgCrAVzLZGmJrQAN9pXOpzOE54e
khUHzoQelfrg1QQEnvGtNf5UbMAYlCJz1rSvRgRSn0xI4mrmV8nB6fXCPS1BUBcVLA6D1/HE
tsDB7bLpQFtJ9VMs+vfrgoKWjdtR79sOKQZ0U1lc65AdAfPEfg0kqGgGRGa4jKEtWlBnX1ds
sEajnuzVSIxpoKkHoT441IdZPcJD+qJ6VOYH78arFRfy/Hv4fvwDHVamP9QqodDhsmAzqMa5
+V1Y+k/gLc72J7u1VUa0MYZYpkV1Zj3UMDnXrh/pdmuUrg+RrWzHLRJNbp7HuL7sEAEYK5ZD
28g2Of8AF0k16NvVtYy8LtJbO7vVt6oVtJ5I541PWisVDZUp1wX5EWW7xcOm4xbLvzXc0LaP
5ULFNLkfkYdsVWMvffFvx9b3kJa4vIbTcRogeSTU6OftBZV+1vPBBiC6+GNr2WCeXkEk0dqC
WtrqBx6VI6FQKE+Jx01uJvh+dY7rctugEd1tisNAuIo5ASelNQrmOuDqeCxjuY2+yx8ylj3G
3aLa4X/mxWgCSldVWCqvTyxczxZXZyux+G220XfGr28Tc6ClpcKSGWuYYsAR+GMb6pjCRUZg
UTQR+3y/HG41OXo/xh8f7fySSaS9DXEcNGNtHN7BJJp9/wDlio6mN1d/BnGpLcyW1huG2Swt
6lNxHOjp+Y9WI+uMsuHdPj/4j2Syjl3OW/YzuFaRZmUoO59I0kLhSPZ/iv4zv55rmDcrq+28
kPA0UyswB/JJpH+WL7UOSbhPxNfe9ZbRuN1Y7nC5SOG4fVHIR1CawNX4HBTjqbgPxtty2dlu
UO4G5uSFW6gl9LSE9dBFFH1wfJx223w1w4Xcxvbu+eAHVBpZUYClCrKFP7RiCg3bhnxdcW10
+w7xPZbhAKmC7kUxyAH7VqAwr5Y1Faq9g2n4qutuaDed4u9u3hGb1ijwMtfSEGhs/IkZ4qJW
SvY7WOaVLSYyRKxCu40kgGgy8cLVQgtqQtVdJqCDigex/Fxs77il1aX1na31jGx9uOWFSQSK
mj5MST+zF0NYGwjsY+UpFNHL+mNysapbOInUFstDMD0xqfDUeo8stpU3XYZHv3ubf31GqeGN
JlAIqpePTXLxGMQXddvLtg4Lul1awblc3K39wSsD24VWBPeQMpBH1OIYyyfEuxJeTbQu7Od2
KGWwjdVQTJ4U7GuH04rtw+NbLbtke43S7ksd1iA0QmjxMxOSdjWnfDo9aLgcG2bnw24t9wsb
W9RGdYpJIF1rQE/+RfUTUYKK8nuojHK8a1UIxAqKentkMBix4vs8W673b2U8hjjkYanWhYKM
ydJ6/TGpE9HuPi/gUE0W3Hd7w7nMC8ci6GUAZkvGV6fjjNUjntPibZbU3km83lzNBFVlksyI
X0KK1KsG7dhiarOb1sHBTZS3Owb5NLPF12+9Qq5B7o+lRUYozGv4XZxScAuDZ3roSshmtLqC
OWEyAUJjNA6/WuNHHktyrRzsGoSD6qCg/AYRXdx3Zf6xusFisogM7f8AkapAA+nfFYm33bhP
xxtsn6G+3ncNuvSAys6LNGT/ABDQhqv4jGcGRnrHYeJLvz2u674BtdKxbha10sT9uoMGKefh
h2j5Lkew8c2i6Rdp3pN3tpV1HSR7kY/hcr6c+3+GBrmN/tnHuCbnwKSa3219aRsDcSgi4WRe
7MDmK+GWJWKXgW57DDZS2abOJ92Lep7qNZ7Vh+SsmZi/ZhtLS7xwfj27C0F7t9vtG6XRozWb
gJIAM9On0kDx64oyk3fi97a7cLOz4ltm728K0Da0jmZQKAggByfxwVpwcGsNnk2aeG3sLKTf
beRhcbZuAU0zqmguC6qBlXx64qsR/IHCtuTj53cbTFte4IR+oW1bVCwbsQtBlT7gMUFZabiX
EJuPJuVjyRTfJErz2FyEVixFSiAaW+nWuFY7eP8Ax/ss3H/63u99cW0DAhJIQjCMA0q6kFs+
1MHpzGz3niVjuXFrGxtp4ZwAi21/7YWb2xn4VBPnljUuDGZPxpxK7vX2203y5g3eNdRtrmJX
XpnUoqinmDg0XlgN52q82nc7jbrwBZrd9BKGqk9ip+mFmxZcO47Hv+6rt0lybUNG7JcBQ9Co
7qaYFlbGX4j2H9V/T7fkL/1koXETorQsF+6oTNR+NcFP11yWfxbttvt019yLcXtIoXZJHtlD
xx0NM6gtn2IGEyYz3I+M7VZotztO9W+62QIVlp7dwlRUak7g+IGGGzWtfaLWX4uM8PsXcCqX
H6mAR3EUgcBvZlT7vUctXbFPljrjx5kKqdQ6mlK5kHDaueZF1w/jycg3hNva6NqGSR/cK+5m
orSlRjOtRrLv4l2kSLt0fJ0TeXUPFbyxoI2p1oVOoDwBzxbTZodt+I2e2nn3q8msWjkaP/40
QnFEy1NQEip8B0xLMIfEjrucEZ3Qy7VchtN2sWiWNgKqGif+Lxw65zn310SfDm3AMU32RoZG
EULNCDplzylWo9JOVVxavq5k+JbextZrrkG8/oY4nOma3jWSER/apct6hq7YtraJviiT+pWs
UO6wz7VfK36S+06WEmmqLJHX8/YqcBVO18Aubvk93x+5uktZoFYG4VC0ZkAqgNdOTftw2scr
uw+IybWSXedxmtWSR4lltYTNGNB051GsV7ZYzrUUfMeBXnHfbuxOl9tsx/lTaTG6nsJEbME4
1Fpbvwawt+Nw77tu+W9/b6Va4t2okyu+RVAGNdJyoRXGfarGSkjdFIp4Vzz/AAxoYuOKxceO
5xjf3dLJhlIql1VxQr7qjNoz+amC0vRdu2zgXJZp7IbGm1pGD7G82re2jsDRWRaD0H/dlgUY
/hmxbdc8+/o97Gl7ZRyzROcxHKI6qG9Jy6V641YZlTSbPw/Zue3u17nLJFtFuVFrI491FkdV
YLNTN48yD3xWDmtNt2y8D5M1xYx7KuzGFGMG82r6I3OqgdUampG6+oeVcGGsjwXj2233Mv6T
uMIu7cNNE4NVVvb1DWCpqOlRniXPwk/onENp53f7Vutw67ZbyAWjyVkQOArgXIFCyZkGmG6J
jS7fsXB+VvPaR7IdkkhVv029W0h9t3rRWCMBqRuvqFO1a4yWQ4Bx/b9x5ou0blCLm0U3EUyi
q1aNWGsMpBHqFRnhvjPPwnj2Ph+1853Hat2uJY9st5ClrLNWRQ6hWUThaMUYEgkYauepa021
8e4Bylbuytdl/ocsYPtbrbvWGR+inQfyN1GoeVa4DYx3x1sG37nzQbPusAntdFxHIFZlDMim
jBhQjNajPGuhxo7TZOJ7fznc9m3m7KbbaysltLMGZXdSGCXJShCFTQlcFjUsvw1O38c4Dy6O
a3sNmk49dxBv0u6RsTbySA0pQn1o1KgkCo6GuL4HlZD4547tm8czG07pH+otmjmV1UlTVVNG
VloRRhlgsF51Qcq2q32fku5bVC7SQ2M7RQyPTUVABBagArnTGsZ8VK66gk1q2ZwDXrV78Q2l
7x3j+9bOGge5jgG80JlAikFTcrGcyyn7gO30wS+NXn3Wig+J+GWnKktzY/rbWewecWsjt7Yl
V1XVGahhqBOROWAzjLrN8w2zgm17dK19wXctqhJKRbnA4IRqkA1MjqNXgwzw88+rqbHj0ulZ
ArAkqT1/z/DGrn4EmT1uviXhWz8rvN4s9xjfVHarNazwsUeOQvQFfyn6HGbfWs2LnhPw3c3g
3uDlW23FjEkGrbL/AFAOkqkkuoRiDUflYYbffGeOWnt+AcKteMbPff8Apkm+3d5bobprZ2rr
0As7B5FHqPhg1qxScI4v8d8g5Xu8VvsFzFb2VsrnZ7xyvsz+4VPtaWDioy0ueuG1nnmOvePj
biO8W1zDa7Dd8S3SGFprG5uGDQ3DRgkxFdcgNAPI98Zb1YD4/wCEQ8a2a8k4ZPvUt3axPdNZ
uSwf2wzF1eWOuonth02vD+cPxWTdZW43ZXm3Wy5S2N5QGKVSQUUVZwv+1s8as8c57WaZtIo7
asuo8cDduPUtv+C/e220m3Xktps19exLNb2c8fpZGAZNMzSIHrqGrSPScWjpTL8Kcva73mxk
MUO4bPbrdiOtYruFic4pegyWvqHli/LP4U/Ffj7deS7LvO47bKmvZY455bSSqtJGysX0ydNS
BO/XBfK3OpZr0Livwjw7f+Avuke/ob6YiSK/oY0tyEGu2uI2aho3VsvLDL+2ceM7vtU+17jc
7bNJFJLbuUke3kWWEnt7Ui9VpitMrijXKiLqI7E0r+OBp7xx/ivxA+27aI9quuRXG4QM73e3
u5mQppE4uYBIgjaEtlQHUP3kjLzP5V4Nb8N5BBaWd9/UbC/h9/b7g/cIiaBZCKKzeYx0vs1X
r8MbFqeT2Sx1HLWAMq9zjNrT11f7dd30RSXfIrHb0uoY3sJ5kdYp/cTXo1ErodfDMkZjvglY
26zu3fDnKW3q92rdKbZHtuj+o3Uie8kaS10TDQQXiJGbL9vfFVro5f8ACO8bHtL7ht+52PIr
K3X3L5rBgZraIEAytFqctH4spqvWlMa5F6sc/Cvh/eOa8f3Dctm3C2N5t5KLtU1RLKNGpfWD
RQ/RWIpXGft667kefsrrk/pdaqwOf2khhjVjJIGIUE0UmiaulWNBjOnXt9z8X/G3G7Dbdj5N
LctvW+CMWW52jN7kUktAswjJ9t4NbAENmDlmCDh/yz174875D8ebtsHNI+K7zcxQyySIF3Ff
/AYZTRZqE1UfxKemC/DPM/FSfInxzvnBdzSy3J4riK6QSWd7bmiTKPu9DHUpXoQfrXBI3PGQ
qdIULTQdWquf4/XGtFet/EHxVsvOuN781zdS2O52txD/AE69U1RWkjJ9t4z6WQsB5+BwfkWa
h2r4nhfhHLrvdFktOS8auwGIJKSKFo0RXIe3JUMrdeh6Yfy1x147Yv7cd4ntW/8Ay7t1tupQ
H+lzahIZSupI/c1UIauTqpGH7D+nvw8v3nZN12bcpdv3a2e1vbVyk8DZkN5EZFW7MMiMXWT4
Z52z1HBE88qQpQzSSKkYrpFXIAAP1xztwzx6sn9u/J0s5HO/bcm5IhJ2mfVDJ7tMohJXTRuz
0ocanqusrL8c8th2Lct6ktvYi2a5FputlLlcRVAPu0+1o6sPtPn0wyRrr41HZcB3/ceJtyWz
ZJbSO7FjcwLUSRmTSI5SW9Ogs4Q+HXpjOn5jax/298maxleLeLF75VoNqlLRTiUDOIuTp1V6
N0P0xT/I+HRxL4sg3z483KO6tjY8n2jcjEty6NqgXSnuLOi1LoKkmn1GGTKz17NjI84+LuSc
Ojhu7x4L3aLlvbTc7Ny8azHNY3qKqT+U9D0w4t/bIqWj9JqwJJUAmv44xmjndeifFPxnuPKd
0hvJ4RLx+GUxXsiyoGRimpf5ROoqTprlmMVdA/Muy7Zs3I4bSz44+xT6XEk0bBrK7UH0y26i
pUmvqBOQyp3x0zY577jo2Thm23XxBvHIrqNLqaIGWzubSpubSaLJ4rhGophfIswrQZ4zz8tX
cZPhdptN7yjbrDdkll226lWGYW7mOUe76FdX/wBrMCfLB2zm+LXmXArjYud3HF9tlN/qeJbE
vRZCbhQY437FhWmrDZk1cTPGjn+A+ZptD3tleWO4zW4J/p8DuszMp9ca6wBrWh9JwT1u3HmE
sbLM40vHICQyMCrrmQwYHMGuKzLinvqfbduu9xv4Nvs4vfu7l1iggJ0lnY0Xr064mnot7/b/
AM2t9nkv7O5s9wZF9z9FbF/dan3BA4ALp/D+GHkKbjPxDyrf9vtL63lgt9tu5Zo5Lq41AQyx
NpCyoAWGs5A9PHBQquc/H/I+G38UG8IjJdgm0vICWt5dNNSqTmGGVVOffFIt9ZkK5aofpmwy
6V6YqcbfiXxJyvlNuLm3ltrBHVns1uWYG4WNtLlGQMKq2VDglFi2+SPjuLjfCeN7lPatY7/N
LLabrEpDCXTreOQ0JGoKoFR1GOn88/K8lQ71wHb7T4gteQvErbjNcRyW+42rGSN4JfQYLkGm
ho2Bof4ssXNmtdyb45eK/EXMt3s9sv7WeGxsd1jc2t7JIyqroxUxOYxqDuVOnsfrjNrF53xw
br8bc2s+Xpxm/gMu4XEZktp9RljmhWpaVHbOiUJZTmMZs1j+fGeO7knwPzfY9ok3j3LTc7KE
e5Kli7SSLGR/5SCF1IO5XPGo39UPEfhLlXJbP9RaS2sAliSeCC5dkZ4JSdMy6Q3pqtPLvg03
nHFD8TcmXmw4jf8As2G5uoltzcPpguEzzikUNq1aSoFKg9cF0yauflL4Q3XhcCbraTm94+Sq
POR/OtWfoJuzIW9KuO+RxrkVycZ+B+acgsv1lo9rbOoV1tLp3jmMUi6o5KBGGlx9pxnTfFbb
/FfMrnfty4+LAx7rtcJuJoXYfzIqjS0TCqvUHKnXp1w4r7EnKvhzl3Gdqg3m9WC62qehe7sm
Z441YDS0oIDANXI0+uHGZFDyTh28cd3iLbb+NVubmOOa0da+3JDMAVaNu/g3gcGNzEXJON71
xneJdn3iIJdwKrllNUkSQBlkiP5lPSvjXDnmsX5VOrVUKKeVKU/DANoTQvWtaCpYf4YidiVI
Yiq01Ad69sTUB76kFi1GbMrSg8gaYsY3aJQK1FQzKCaZ4jDlzp0Dov3A98TRlemTVJYZmn7D
ngRelDrUgscjhQvcyNB95APniRPJlQgstOvfyxEOqN3AGYyqRXKnfPGWKSaNZGYfqrEZ+OLV
INidJHbrq7/sxENCaio9QBH08MSIo+igNR4+HhiJRMOrktQZqczlhAi4oX6EGlP9cS0Kh2aj
1AP5u1P+eKkXpCqCGJ71yI/6YgJBVS/VR6dR6GuLCXcDPUpyNOuJFoJ1DoW6N3OAlQIEViM9
VCKflPgMCJ4wyZeVSOv44QERxRnI1GVfDLtTEBTa2K6sh+Ynp5DFiCG6gUqBn4jCRoFAAPpr
UkHriWndVVTpGry70PT6YCGhZUlcGORsigaoA8DTDBgpNDAOpKqeoIJzH06YkFHUCvXwxIIU
awxJ1D8vbPvgR2GnPqSaA+XepwoLCrLWuXQDwA74lRkhc8iCK6gO48sS0xOpVFQS1aDw/biG
gZCklQvUdO1B5Yhgg6OT4ihJpgtUJSzVDJQHNQO5warNNqkVssifuHTrhZ9SGRUC0X1Cupj0
8hTE6RGFOYckEtXVTtiMFIxckKCOwJxADVemkg5Zk/5eGEUVNaaWbpkT2r+GBA0GPMmrAEEn
wxKmq5Yk9h+GJDFJSRnmOp6CmEnZIRWgNewrl+3ETR+6rMXFWFKN16+WACjCq46Kpzc/6Ylo
SxUj8oJ6YlpmcEmuWv7KjCNEVoUAYjqS3WhH0xEEhk0GM+sp0PXzwBJ7jOpYir5VHTp9MZbw
6uM0b7j4dPxxYsAXiLVCamXKhz7Z5YsWgQiMkRElm6/7R5YYLgnMbEKzk9lBr1/DPCykqxQ1
oF8hXLwwNI9MYRQtRqPpKnw7Uwo4A1lVNCB1JNCPrgQ1LALRhWnQ9MUSN0AYnXpY9AO37cKM
2tTUkFj0Yd8VQ2DUINc6DsPqcBOsUag0AWv3MTmcCwNBGRlWMihXvQ+ONLAyswVRn1yIOZ7Y
GhRpBqMjNU//AGYqaL5+BOLBYelaSlSa/npmMQgpH0ihqWGa0xG1GhJUo40qfuIzrTFqsNM6
gaAa99PbCDKrZICE8QO9cChBo1GZ1Ejr2wiirWKlK1HXwOJHZApWgox+4UrjLUyJKRFizCjK
aAf54mp04rqjCQEUoDQDI17Y1LhjJbgW95QRQ9SpxpnpDr/2r1xM677KhvVKEEA1DHy8hjfI
vse/fFnPuFcctS97slzJdOP51xFcEKa5krC3b6HD36zI6+Tcp+PNy3eHcodrv/b/APtIJZwI
2BNdSOpLI2OMmHlqLz5Q+ObnZBtr7TfxxALo0TRk6lzWpqe/lgrUmhb5c4JfbfFtu58cuJoo
CqxSJMFYKvRiQVGWEINy+ZeOe9Fbw7PPcWSEqJJZVWZMsqUGk4ZF8OWP5iimFxtm52k+4bVK
v8pGKj2zWopUdsVBcK55wHY5bl59ou1mkaiyxz6wy9gEOkAjFbpk1Ucq5dxabfod52Ozu4bq
NwzRXwSSGq5j0DOn44YU/KPlRORbStlc7BYQ3QI/+XCumQGnVfCv1xZFOXn7MPUU9DdRnmSM
K1tfj/5Ai4xMy3Noby0uK/q0VzG48CjDw8DirF6aW95v8ekG5tNu3N52kEmme4rHXrT0sWpj
GOnPqo5f8g2PIdugt4rF7OZDmS4kB8hQDL643gviP4/57Yccjure7s3uIpmDEQsEYUFPVUYK
yorre5P61+vhycSCVAwBoAagHGsjpvmPRk+SuF7otk++bbfQXtnQxS20yshcmtaNT9hxj4c6
65PmrZUu5Fbb7iS1AISbUiyE+DLmtPxxYceUbndrd300sY0RSn0AUqtTXDImt4v8i2e07O20
3ewWW62nqo0yBZanxbS1R+/F1WcZDcrsTXjTpGII3NVhXNVHYA+WI6FCtFdswB9tKV8sSep8
F5xwTZNm/T3MG4RyyUMsitHIpbxWmnTTtioUrXvAF5P+rpudzt0je6xf2opEbrl11DxGRxSl
rOQc/wCAbhDbyxNuJu7WRZEiMaUcJ+VmJpQ+IwavhJdfI3x3uN1bbjc2N/Z3VqC0fthGTUf4
lLUP1xY1jjuPlHjUl3NdyWVxHcRpS1uxoJbuFda1XPuDhZcG5/Iu0cl2F7LfopHvY9TWd9GA
tK5gZUy7Z4fgatOIc1+O9o2d7Vl3KBpTWTpLVqUqpWlPKuDdat1m49j4Ju9/cGHk01hGTqjW
+tKEhv8AerUy88TK745xXie07xb3lvy+yu2ib1JMpj1CuYDaiMXpafkm+8Fst5tNwunnW9gW
sD2TrcQsCCPUobFgVlv8q8du7i7tr22uY7GZdMd1EVLgUNdSHFYZKym/L8cCzRtpN7NeO1A0
n8qgJqWb8rfsGHkWVqeLcv8AjzbePnazPf27Sg+6HQSULCh0sg6fXFYaxtmeEW/Ina897edm
rSjKYJgT+YqCtdP1FcSdt3u3C9l3+x3Xiy3FzChLTWlzqChuwVjmOuVcVGLzdt8+Md+nTcdw
lvoJQmiS0ePWpFP4o/VTzBxDGXsd541tO+G6sdsXcNqA0for0hjT+IMwPqHaoxHQcm3fjG5X
8dxs+0naqik8epSHetdQCkgZYMEut1x7lnx/ZcbbaWvLyEyIwkEkQZlZh6iClVIHauLGnLxv
kXx9t22SbcL64Dh2Zdzjg0mQN/GhDZrgxlXndOJ7Tu0W97du11uUqN/PtZ4ilVOR0vkFP4YY
1IvIOV8DbfRvybleW96Ywr2MsWuOmmlPRl+IbFUz+5bvw3kPJbncN0mvduVgqwTQKjhtC0Os
UL9PDFgiflHNttbZk2TaZDeWSiPVcurxvpU9FDertnXFIfhxS8l4Zdce/RTbCE3iKP24b+3K
oSwzDP0bPuM8US143zDjc3F245vAuLGNAfbu4l9xdJIPq7hgcOpd3PyHwnb9vt4dsaa4/SFE
0+26hkJ9RV2z888QQ23J+BR763ILfcp47xojG9hPCxUgjpqQUr4GuDCoL7Z9v5lu15u9vvtp
tqSvRrS9qJEoAKq1VVgfLDoi44Xw+LZN/hu33va76HS0bQwzBJDrGQAYmuCnVpfX/DNm5fdb
tJuUlvuDJomsZkMkbAihMbRiqmnicXojgbn/ABHe7K/2ncJrjaoZnb2bzQJFKHMagAdLVHhh
LF8g23hVmtsm0bjLeSDKSVAVDL+bUsgGk+GnLDA3O27z8f8A/p7bB/XjD7qOmq4gImBc1OpF
BU08jgmq1i9o4Jb7qkptORbejROUCz+5CxANA4DdQe2KnF/x7ikvEt1j3q93OwvrWNXE62Ut
ZUVxpDhHI1DxAxSplua7nY7hyO7vLCUTQMQYpdJXVQCvXPrgrE6ytlw3n+3f+vf0a+3Kfar1
K+1uYX3wwJqK1DUYdMx0xNuk8422y3KyNzyGferT7p2iiX26n0h9NEIpXMKThi11yfIHEVtr
iMXjOY7hZo1WNv5qagf5Z8V/hNMWgW+8x4Nv+23eyybs1pDcpG/64QsyABteggjJss8Uqqg5
dynZI+P2W0bVdx39xaPG0N1GG0ERnUC6sBQ18CRhjF69XV58h8Un2ibco54zujwiG82iUMrS
Pl9rUPQ5qanBPTXBxTn+2Nxr+k3G6zbHfQyPo3F4/fR1ZtQOeqmR054rGoovkDkEN3bx2MfI
ju6gh3ARQlenZVZW/wBvTGufhnpX7zuXAbzjtv8Ao9ql23foAiFowGhkUUDNI1e/UZVwSenq
+MgZCzFagU/HDXOd7SEqqOpNO3jjOOjbvyW0f47i2+DcXW+t3Gvb5ko6oxJrbTLT0HKqk4pG
evhTcP3e32zkFtfXNxNaQRVrLboHdCwIBKGupfEYrGuaj5nuK33KLy6F3DepIoKXUKmKOUUG
miNmDQeoeONfLhlnS7vd+tZvj+02+Dcz+ttWCvZzJpmRTq1GCZKaojXNWwTx16lql4Pu9ttn
JLW7ubl7OKKuu6hUOyAqRmrAhlr18sZta5mIea7kl7yncblZ4btJZAVuLcMscgKDMK1SDTr5
43HPfV7uHI7Sf47srC03INdQOIprSRClzGmZpDKtNUJ7g4zL66eVTcH3WDbOS2l3Ldvt8EQZ
Xu4093QGUgB0P3KSfV3pitHMc3M71brle43P6mO692UN78AKxSVUUZASxFfCuHRecaDcuSW8
vx5YWlluokvrRxHPZyIY7iKP1akjlFNcJrnWuMyjvpT8E3q123lVpeXV49hF6lN0qiUJqU01
qeqE9cLfN8cXK70XvJ90uPeinM1wZBc29RFIpAoyAknPwOFcxf7vyO3m+P8Aara03FHvbUC2
ntGUx3UaEElA4IWSD654Yx/Tfwp/jverba+T2V3PeSbdGutP1axe8ihlIpIh/Ie5/ZjPUrXP
Su5fdC85TvE6zw3Ek100hlgLey2oD1RaiTpI88agxTqzBq5BulfCn+eG1fV69uXycNv4Rx1u
Nbiq7raabbcbUqGIVYzkyPkV1dGGMyG27MaeH5F4bcb7tl426pHBc2UschZSvtTl1YiUf/Z9
D5YMW+sryXbdyuLK8WL5Ktr9JA5XbriYRxOtdXtlgzj6VGNT/C6uR44XqaGhJrSmYH0wYp1s
eifCHKNl2bf72PdrpbVL+29m3lkqqCRW1FWcfbqAyODPTKvfjT5MkTed3tOSb9I1vPE0e3SX
b1hEoJ+1iPSNPSvXFYJ8ertd82/feGbFFs/M4+OXVkntXodgkjMqaSpVitQGzBHXEbql4bvN
hxznG7Hf+UW27NudkkdtvCSBgzwmqrMV/wDGwFAtevjiEVW1c+PKeM7vxXke8Nb3lHn2fc5W
Kq7JUmCZhQEH8tcN8W7GuO+Wm/8ACuPjZObQ8burSFIb8SOqSMyRBTGyMy00kEg4o1a8Q+QN
uvLPkc7Xe9wcglugr/1O3kDlxT0h1H2sAKY1+HKTKzSUL0Y6NOTfTrQ4zXSWPddzk4x8kcf4
u0PIrTZNw2BVW42+/ohd0VARqYr6G9v7lrilyKz3V5e/KvC15tdWtxeiOPcNtFhcXUf86C3n
BYirpUPGwfJ1/GmIdTVdwvb+J8L2fkO2NyS1v7/eLKVrSZJozFKkcLhUABJSQFujH1dsFu0X
mc85GO+JeU7B/wCob1wzc75Nrvd2FbG9uB/8XU6BDHIQaqdS9TkcOfkzm/V5Xue2zbRd3FhL
JE09qxhMkLrLE3+6N1yYeFMTX4ckUVAr1LBxmD5dcsGaeZr6T2vaNjfiu2S8M5Hteye5Gjy3
F04W5WQJR0I1KSVetGbMdMxgkxdeV4b8hbbvVjyS5j3bdIN5upD7v9RtpVlSbX3GnKM5Zrjf
4c58syjCgLAdc6dcz3wNPYfnLk+3bxtnDm2ncEubYWDrcxK1WSSP21/mR19LVBGYxT4X13oX
xB8l2vv71s3Kd2dJNysVsts3G7JkjiUK6rG7UyX15V+mCy/KsVO+/EdrxjaLjcP/AG+ylmgi
JtjZyKVeoOu3mQMZNMqmiMKiv3ZHGpVmIPi//wC7OfY9xteR7nd8c3sgrY7xDK8aqjD7Asdf
tfqrZEdDjH5at2ePMdWgyrq9yjsPd7OQxGoDtq+7G7XP+dtnpQzMrqDm0dCp7DywXMar6Afc
uN/Jdzxrf03iDab3jduIty2OdlWdmjdZFeCSQqkikqCR1pl1xn8YZfyxnynzjjnK/kG03CNW
utst1S3vhGQPc0uBKIGNDpIBoWGKnme6q/lv/wBPO7WknFN6m3La3gB/RXDSO1gwp/KDS+oB
vDG2eZ/sw6spcqAKgV1Dpgw9V7H8O8g2qz+P+c7TcXYttxmg/UWSSME9wRx0JjbIalcjLrg/
I/Db2/yNxvknxVyO7mMUHKpLIDdrRm9trl7YBUljDH1alAGX0wT5X4Sbztu1855Jxrn2z77Y
pY7XbQe7tlzIsU7ywTGVoqkqEOdKnKvlh/GDPdeUfOe/bXu/P7zcNum/U2kkMCe6oNQ6rpZD
/wBpyyyxUfnWCR0jkT3NYjqtfbIDqO5UnofDFIX09stra8h2G1bdb/Z+V7CITHbz3lLbeYkI
p7ZkLaRKh75VPfvhlOftQ8fvOMXG28u4DDyNJJ92nT+lbluDsVkACgwu5p/Mj0aSO/bB+R7Y
7Es9q4Z8Tbzs8u4w3u6Wt3bXV3bxSRkMP1EVDCAasjKnQ5g9cUnp58gedcEtPkLkMXL9q320
Xap7WFFh1hboPEX1KqMVUP6qUY41L5jPUVmxb7a2fxZzHb7bfBNvVvdLNHcxSPHLJA3tosiF
yHb7WVvDB+Ws8VVhuthJ/b9v9j+oQ36X8M0kDMDKUkmio6q33D0npjM+V3PHlBZQGINMqqB2
XFE9A+EuQLs/Odvee/az2+5k9m8JbTE7MjLEsudKayMz0xVuVw/J2+7vuHNt5tp757u2tNyu
f0CSSF44kLU0x5lVGXbGmNya9b+PuD3I+LuQ2EG5WN1LyO212UsUlEBaIrokDUKsGNGxn4rW
68w+OuK3V3zeCze5ttvvdruklkhuHGqQ28v8xIyvoJoPHFYpjefLG0XWy/J+381uykmxSXNq
ryxMGkjaID7krX1ANSnhjd95xnmevRdx3be3vn3vaINkvNlKLNBf6na7KaKlzpKghT1Fa0xm
Lrx8uctvm3PlW7bjMYGnubh5Sbdy8J1UP8tzmV8MVUdHA91sdq5ftO4XzMLWC6ilnZRUqisC
SAPDrjNjXN9e92HF722+UJ+eJfW11x65LSoYJgXWGSAR+6VrpKI2bd8aXwy3LOSofiTdf6Ju
aiZ99njl9iQKzwTuz0y/I6kEZY1Pljp5Be8p3a44/Fx+adn2iG4F5HbyDUyThStUc1ZVIc1X
pjLXio+9iCQAKFvIHx+uIvb+NWH/ALx8T7bxzY7+3tN92q6knuLOdyj+2xcj26UJVg1K/txc
3F1NP8rww23xnxTbZb9ri4stwe3vZ5SC8MvtSM0coBamnVQZ9MUVXz/HXIF+DrzYbOa2vb6S
4S+tmt5A0UsQZJKRlvzEKT54uflWsnyXfru3+DONWu3bgYWW5ntN1hifSSys7e1IOozo2Gfk
W6qfiv5EuYeT8es+Q7gDs22yzGzmlALRtcRlCpkPqEdSMj9cZ0817Pu+78n2i23Dcv6LtUe3
6ZRb3i3LlZI3qUaTSulFcd+lcanLH2jCTbFufPPjvjFtxO/to9x2VZodzhaYwyISQCBpFdFR
UdqYtNl/Dzbm+1ck4/f7Pt27bm109ujywwGdma1kMg9+JQSW0sQHjYGjeWC7Dx/ltPnf9bvW
2bXy/Y7hb3jJt47W9aGVmMMx+39RDWi+r017HFPV1cXW9cZ3r5D4pxO84ffQFtssVttwjado
pEkRVGh1XPqppXBK1rWbfvFp/wDeJtNt76w7jFxuW2eCU/z1uIpASjA9xpLDxGYxJ85WXyNy
u1j3aNr43K71HJFfw3Z91CHYnWq/ajVzBAw9+fDHN2vU5xx75Ii2Tebm7l2y+2K1jgvttlT2
1uIo29UlrO2Vcz6evbGZ1cdLzlZv+5XQOfW/ttWL+k2qw9yV9yUg546z/wBWLjyVDmT1BGqn
U+FMYwQzUoupSKdP88BKQswU5hVFBXy6YEjSMFQr1Vcz+/t3w6JPUmgL6hnX7e2XhgawQRct
Xpan/FcSBrpL6mq/Zc8OA/8ACFy1Dv8AvywVUI1RjSTRR1r3+hwyiVJqpGKCo7EZfjiP2MoU
EpQeIYZ+eAi1ksBQEjqB3wIMigEtXURTStcQoSRpIPU0AI8umEUSg0BqTT1H8MqHAZQyqrVU
DwPniSYBVLVWgFKjofxwoNQpL1OX2VwovUPUDqqK088FR1boXyY/lHT6nEtOXAap6nopyFBi
WhoQRQ1qc16UHjgRigCnUoZmJIoa0r1zxGEUKrUsR0p2H7cIpD0uCKEP1Hc+GBYIOOn1r3Jp
9cRhaDUvpFTlQZEVxI0UQoWFC/8ACc8SGxBYEVocipA01HfEhBAfVqzGerpU+eLUh9da1pn0
7fUYlgu+thpU5rTKpHliWEezHLuKeODUj1+o6sxT0tjWpLRnIU9D+U5ZfhiQKgirnSVagr44
lSIIoajT1K9ziQmCKutq+KkdSBgGlUZyCvq6joadxgUHSNKZ0StCe2eBqIpKe5pzp4jMCn/L
DBREgZsoJAopGVcawaVdKhm01GZrmc+lMRMzMzkUqKZ+NfHAiMat6h2OdO9PHCjqratVOorl
4+OJH1EEADVnn0OfnXEAmVi/tkDxGWdMAPMEWlfs6DPwwxtGo1ZNkT4Z4qsGC7KMtIGTf61G
BGohAIIJHcqRl+OIYEo5GR1UHQj8aVxIw0tStKgUalaVPlhGCGpUowJBy0kZ18csWoKMAA5J
0k0BP1wLBzKTmtQcvUO9MTcpElUD5AdPOuBU5JXNk9YFA3n54D8EPbIbIlaElh1xMIxqDCgq
wFdJ/h+uFJarQJTrmW/0xEKrEuoFfWa6XIzzwrRU01NQcsvr+OJBaQDMigFPV1xQWkdOn1ep
iMqjCjxqzKQzah2J7YlDh1K17gHLx7ZYG0ae66KXAUGpAGdAOx8cFWJdahCR96n6jFitCCjt
pZwprn3/ABxI7LTIFaGpr5eNMWsn1gLpSugfcW6VPgMCM4VAe7A/b2xHQoWLFtORFCK5D6Yl
NpnVVeoUgEkaRmNXjXEZD6fUARQ9evWmFGWLMCgdG7DoMIw5RAwDOAhOZzy8vHADswjanWtB
Wv8AniwE7CWVj2/29/KuLDrnuFCxPKFOoVqPLywRuesdfaPfPtilc6kd8dGai0DwPT9+AO62
/lXY05HxY5D9mNc+0dXHu3xBxTYORJLa7jHMsijUksT0VRTP0nr5Y1/WWCXXLy3j1vs++myt
5S9uG/lvKAT1/MB+/GOI1I2m88GsLPj8O4S7XHGx0Fr+2nDLJ0rqQ1KnwwVlbbt8OWN9ssd5
t8UVhe3CpSSWXTEQy1AIzCsfLGRGJn+J+UxXRsRHEZ09Q9R0sg7o2YOeNa1HHZfHXJL9rmOB
FN3aVEkRZUrTwqcKXvx/wLZt9kvrXeTcW95bFaNbMMu3U+nL9+LqYpsUW/cXFryNtntZhLI8
oihckIKnoWrkv+7GZG/l3cl+KOY8c21NwvxAbM013EDh1BboPHPCwxsmtZAAACOvhTCtXOw8
X3TfZlNoFEK5PM9dCsemph2xpeVpr34m5XawLNDcWt9E3pDWz5hq90PqpjlZ6rXVa/CPNWNZ
hbW6y5x+5LRWP/cMs+wxuVOOD4h5pNeXNutrEWtyBI4eikHuuVTitIt0+JeY2Fs1wI4LuMen
THIPdUnxXrTFp+2ptr+JOVXkIaK4tYWABFtcuY3qe2YNcQqO1+KeZ3c1zbLaKslu5SUmQBK1
pUE5kYFUW9fFvLdrs/1phjntVAEjW7+4VbuKUGLQbZvjLlm97W+57VEk8KsU9tZFWSo6nQT4
9MGrGcubC4s7p7e6XRcw5SRnOhGRwkFcgQCe1D2HYY1A33Cvj7ZOQ7FcXMt7PabhATpXRqiP
gaHBWcZuDaZDvX9PU6zHJ7fuMdKuQaUxSNNpvfCdvsd3261bbLmxecgXEUkqSQuTTOKQf4Yp
B9nXyb4iuxOjcfFUZdTQzyBHJH8HVf24vhS1jBwrlPtzyizdRasVmQ5sNPViB288aOoo+L76
9j+vFo0tstdcikECnjjOpqeMfHe38g49LdrfS2e4wuV9sgMj0zAP8NemKxWMRcxNbXEtvIwY
Rkrq8xhEdG32W4bjdR2VpGZZ5jRIh1ZqdhialaGL4x5u0LA2SrKpr7DOscjAd1Q/44Wb16r7
Dh/Ir64khhgWKaJirLcN7ag1oQWI6+WDW50PkHDeS7GgfcbQrDkRNE6ypn5rn+7FPBel7sHD
LK943PuV7a3jUBaK9tWVo1I7SRkj8cXVZYuUFJ3Ck0UULUpmPLwxQns4byeVYo0MjyEgKtdX
0GENMvxzzGa1F3BaJLDmSqTKXUeakgg+WKUqzbeMb9uO4PtkFoTfqD/JJFaL3NaAYBgd049v
m0TrbblaPb3FKqGyBHiG6UwD6tPafF15dcaO8x30fuhTI1qc9Sjwbx8sKsQcd4Rt1/t8u432
4tb2MZoxgMbulDQlkb1aT44tDq3r4/trSyi3ez3CS/2OQgST6VWZFOVaVzxY1enTtfBuCbs6
QWXJ5zKy1AeBVoeoDnIYMWGT4wtbaS5k3XdgsFqSDNZ6HIHXU0ZNQCMMWqzkfAJLDbod62y9
Xctplz/UKpRlr/EMa8Yu/lWXPD+QWu3puku3zJYuokS4FCmluhyNRXzGMukDtHGd+3hmXbrY
ysK6kchSKD8tcWM241G/fGy2PHba/iM8e4EolxYuA1XfKikHoD1wyKRQzcG5ZDbG6/pbyxKC
S8LJIQB3oDWg+mA/DPtqJZZBQqaFWAyIOAWuvbLDcdwvP01lA1zc6SwRF1NQeX+mGxTVnNw7
ly2bXbbdM8EY/mUzdAOupD68vpih1ybVxvft0LtYWpuXGeiOn7q0rhCLc9l3Xa5/Y3K0ktJ2
zAkWhI8V7EYFlq7fitnBxKPeZ3ubW8LUQPDrhnqaKEkWuk0/ixcwXYy7tqerLSleudP24sO6
ntbO83CVYbSB55j6lWMEtpXM9OwwGV3T8T5Klk95LtszW6VLuB6k8daD1CmNSM2odo49ve6i
T+mWrzuo+1Rpr40LEYzTINuL8iF8NubbpheMKrCwoSP9tewwlI/EeUxu0Um1XCFFJaoFCB10
5+r8MakZ1Ft/G9/3EsthZyTMpo8ZOllp/tbAUc/Ht9juzZSWFwl8oqICtWKddS06jEJygh23
cbmY28cMkkyAs8cakkaepoM8sEpx0bbsO/7mko26yluXQ0KCi5HvnTrhLjvrDcLK4NruNtJa
XIHqilBUgDvn288LNmprzYt6sbWCe8sZbe1nUNbyyKaSKwqCD9MZ04rSEUHz/wCMsLGRYbDs
V/vV+lhZUaZgXCVFSi5tQHqaYtalaqX4vkuI5Y9n3eK93CCMyvtboYp6KfUozp3phXXLO8c4
1d73vi7TE4trl2Kt7oNEKA6lameVMZtEmjThO/Tcjm4+yJ+vtatMiGpKKA1V/iqDkBjp5HK8
3qrx/jOe5tZpNl3WLc7y1jMs22aGjlVfzAA5agcqd8ZsdZuM3xrjl9vu+ptNvILa4l1VMoNB
7aksCBX6YMw5qaz4Zvc/IrjYmi//AClZgtNEtGb2xSpQV9WRBpi+FzF1L8Yz3NlO2ybrBut5
bKZp7FFMcyp00gHqwPbBYNZvjXHb7ft6TZrd1t7mQNSRx6QUUkhqVp0pgp2pLPhm93e+3Ww+
2BuFjU3ccZDlVUgMyDLX1qAO2N3xS6uJfi6/ktpn2jdrfd57SMzXFhGrR3AQGmSZ+oHzwaM1
n+L8ZuuSbtHtdrMkFzIrsnuAgehSaMO1aUxGQdhw/kF1vtxsZgP6+yBa7CkEIFPqYCvqoDWg
xadXG4/F9+1ncTbJuVtvE9snvT2UPouBEDQtozGoHqK/TFn7F9Z7i/F9w5FvSbTZzJFcujsj
S1ArGuoqaVp4YbR/4Vm5WN1t9/Pt14hhu7WT27iPKiuvUVGWKRbNch1FivXx88J1pN14LyHa
9r2rcGQT2W9LGbYxepllkFRCyjvTMHvjPyZ4vIvhflB3k7ZcSw2yyW36mOctqUopAYaetVJ6
YdjNmuPcfix7WzuJoORWFxcQCpsg3tE6eq9/Vi5nrPTAlqD3CPU3QDw8xibxc8T4dvXKWv12
sJLLYQC5ELHS0i6tJVScqjEh8e4PvnIY90Xa2R7za4P1DWrdZATpKKelcZ33B8zWnT4f3AbP
Ybjd73a2EF9Ck0AkHdl1aSzUqR3wt1wbZ8Pbrue+3W1Wu52ciWsH6lrqGro6u+kUHY+WL/DG
a595+Jt5s9tudw2/dLferayGq/t7TKeFOhYqSdWYz8MN5xnie6sI/ha9Xb7G8vt7s4Le+gSe
1Mq/xjXQFsiyjqMElrrcjBcr47dbDuYsZbuC+BBkimtDWNlP+B8sdLMjjepuRTl0yjA1EHpj
GFaW3HeTXloLu12ye6tGromiQtUDI0pnjMnrpzLHB7dyYnmhjd4bdgkzoh/lscqPQen8cV88
G7fEawXZgkumjb9PbMFnlCllQufRqp9tT0rhPrU2fxjza+41c75a7Y0tvbt64VzmZNOouidW
XSe2CXbjX2xhoyVRiF0CPL2iMwOtCD0w0SjodWsatSjJadf8sGp6Ft/wvvO4QRyXW4We330s
aOLK9rq0yjVGQQdLK1RmvTDz0OmL3rje97Ju9xs252f6bcYZBG0KgEMWoFZNP3K1fTTrg6Zl
9cF/tm4WVxLabhbSWt5Hp9y3mUq4BzU0yyPbFGtQiRlVmWpLkBieuWWeKKWtPw/485Py+y3O
yk0OYalSMblFrY/InIeEb/Y236HdJGu7ViViMTAMGFDqLAUOWWMYs9dvCeU8D2vYG2+63CaG
a41GVXU11EUJUgU7YKbWZ2+54ZZbxMG3S7itya2u6W1VkU16uM8vHDqXfLOXcWudnSwku33u
5QVtr519uVD4saeoYA59o5rsl1wl+O7oz2UgjMcF0gLBhqqFalc8MarzueNI5GjWQSoPtc9S
PHCzi34fvFts+92t9cp7sEbeqMDPpT9uCqR3/IPINr33eVudvL/pVjVC7ih1LWtR5eOH8Mxe
8Q5lsK8bn43vUjWcc4dYb2IFqiQZh1zoR2pjLa4fn3Htk2yz2+wkO6pBpQSoPbbTXMlW74pA
ntuW/Hp37/2L9fJ+oeNozaFGDx6hmQKeonCpVI3P9os/kC63SBGuduu1VC32sooAzAHqR4HF
iny5JuUbHt/MY+QbbeHcrZmYzWxUxyRFsiKt5YsGtNf8v2K9c31ry24tQVoNrIMYFOvQZn8c
NhxWT882S54XuO2TXcsm5MrKs06l/dJYFTrGdRTOuMz5FeXrO5j0KfQPuBJ6419RzUSsoaoI
YeHjhOi1Kx6aQTkOmAaGYEqMwKdR4jriRywoCCQD0BxLQk0IFcie/wDjiQenqIq1ele2GM0z
hWYE16dsTUESFddIPq7+FcAoVoGNTmD3xD4O8lCRWhHWuYocTUCmtOmVete/liImZ65Zj9mJ
AFSaKanvTtiBgyhm1Go8PP64jDFlo1Bm1AB3GA1HqUMKV/3Hx8sLAWcjoCB59DhJga1rn2oP
8cRwAYEkVoBnhrByuYApnn4/hjIMKlg1BVctJxNSAL0LKpqo6KcgMR+A+/qyYBR3Ff8APDEa
pJ1Hp2HWuLSHXGGYADSBWv8AjgZoEb3AW6DrliEhiwIL0zPcdcTRgyGmWVMx1riUqMuCpZSM
69fHErdRsSpNaFiMwen4UxqM0IYkUU+mlQaZfTCAVjAJHWtSCcZq0DlG9Yzp2xLSZRJXrQ91
+nQ4h9UelPynSQPtr2rXofPFVAO4qanMZADxwtwJLUGXpORC5VGKUh9LGhBr0oD4YNQS9DpA
AyrpGHBuI0k1KyivkK9cFUOZDShoKeJ6/TEgOwFNZqQc+1K9PriKP+WzAqQQehHemBeEx0ga
BmRWviMKRkgijChpmTln4YF4YFiKhqZ0H/LEkcrKwAy1NkCelR4/6YsVpmfSF706jtTvhOmU
l1DAUOZANcDUJqqxqMyOnc+NMTNBpoWbp2z654kYyihWlSO474VoGnoy5Bq5U6VP1xYrSJiI
zGoihGeVT54yYFwaCgyXJWPQ5YtWGdH9JAoSADXPPqcKoNdSI1U9aUpl+PjiZEdVCQKBulOm
JFpAUALRurKcqDzxEjVgDUAdKeQ+mBfJH20AopBNfpn1JxQhGlZD50HU518MKJTGCQB1Jrq7
5Z4mbEaBgErXPvUgj/gYtZyJqOEK/aRkPAg4jDq8kZYM2qnXw8/PE1mI1LSJXPVXVToSO1cS
SiZjpqQS33AeXhXAtPGRICnQ6TRgaf49cTRx7ek+kKQOteppliBqgH1g6V/MACenhhB6pnVq
AgUbwGJYYsJF9NWHVSDQ5fXABsB9560AoOviMWE8bqTpKaSSaAk1r3/DFiwVfVUjtQmvQ4CU
bpr9RFD1HWmeeHEL3EHqyD5ihBz8hiF0IY6fUAVHTtUYlg6IFrTShzp4V6GuIwLxihCnIfaR
ng0njlzMgo2kds6ftwsS+i90ioZQCRk1aCmCrS9zVUJmD1YeeJrQkguBlqOVPp3GEGdmWuRy
zy8fEYUaoFA65nMHxwIjp1FwCKZKpyH1xLBMCSexWpB8TTBqsChUoH1VdhnTMHEIUjE1XsOv
+QxLDhmDa2qSfuFf3YTCkiqNIOdKrnliRnkJRRH4UqepPfEtL3Cft9SeDfvpTAtPViKBCQOv
0xI8n20RQCaD/rgNCAwIBJIYH1DthWAJrIQVJApQDr+7CNHqYUWhp4fTETOoqQrBf4QT174E
UtRpYUZT6j2I/ZiWAZqjUPTnkcQsNmFVV+gr1FMKxLHqp6CA/mK5d64MJEUNKAIT6ic8+oxY
Axs7CuohD+UD/iuEyHQ11MUAI8qfjiVJyqqEy1tSqjqcCohpR6sa9wFwITvUEE6ScxUDEUfu
q5GRXOhJyp+3CzRqVDhCOpOXSv1wEejSoBNafsB8K4SiQDSRXufT2r44mRB30Vrqp0WmQ8cS
0g9UVkUKrEEmo6d8BlEWkQuv3J+U+JP/ABliRlJUH+I9M6dPHEiU0TVSsp+4jwxKEMtQFSMv
rn54CTZIanrl+zEMIxsYxQgL11d/ocIsOA1CpqT2Ipl44kSug1EVKjqp6UGI6YsWzjYFsw2I
lQCtfvPXxp4YmDVBJ0gAAAN1wNQY7AGunIA1xNSFIhVVBGZzBPlnh0UiiUSo9R+0dOvhiRlG
pzrJIHVeh/A4BDxoEBBzGZBPWnliqIPWpahAoGNM/wAfLCQlnaJg/py9I7+WLAdFGgj72y8v
2YcMgvczCsAA2THPIdq0wI0kYOgjoMqf9O+JDQKf5gprXI0608DjJwLRhgSGJq33Dv5YkTFq
UWigHMgf8HCLDuBoBP1A8PPCAa2PVaA9D4YFCBkoagAKaAVyIGeE4THU+oUyGZ6dc8WIXoYa
iPsFPDLucSJWVl6UAyAP7PxwEygBKEUY5Cn/ADxEU7VFaVAFWA61xM0NCFFfSB1BzJOFU3uV
VaDoczTt4YkeN1ox9soVyqcv2YKodfR6gCTTp4HAg0kclpOq5ZeOK1YQIHQ5A06Z5+GJHDkA
gsdWRUsMhTtTEifUxX05sfVTtXwxKC1LUIwA0iijyxEzaKGuQFB1zr5YQYtQA1FR2z8cDUFr
FKGuZOk0pWmI26ddbgtUA0+lfpiAJGIAdMxkPT59cGs6dmUppr37ZnPvXEdAEAY6QrGPIK2Q
J7fTCqmFGXUp9QzCt0HjgagXTWSHJAFQAppXzGJjQEOraaAjqTlQ/XDCcRKQQTpSmQ7eQxab
DsGICoKU/ZQ98CgmJZRSq1yP0HfCkZd8lQFsjUCgH+WEUy6WodOnV2OQ+mJYkUTNSpyHQDMH
6YsWGCspIoM6g/hgNoU0imk59B3GJk7KrOWLE0H0H44jIMZprBCse/TLzwNow9Gq3QmhU9j4
YjMEjEq9R0JyHbEPg6aQM6aOjEZGniMQhogADTx7D9mIbpFgGZmarMenhhECVp6kGYzp0698
MMji3DWI2agb0mvgK46ctWsnMdUmVaDKnngrEDQefh+OBrXdtqxfr4lcnSHAalKnPMd8df5c
7cc++rJ4+i7n4r2lOJQ7vtF9c3UJVWkUxggVAJGoeeWeOffGdYeOtmraw+GbS82CO4ju5oNz
ZNf6aZAIvUcgWGYB7HHOxVfcU+DhFbXJ5Farcxkj9PNZ3Aoq09QyHWuE6zNz8Mb5Pc3B297S
2tlfTaJeTLHIyjKoUjv44SqJfhbn0N1JBJbREooZZBKjK4P8LBsz5Ytcp449t+L+ZX8l0lva
ATWbaJ7eSVFZW65Cp7YrW9Zzcdvu7C+ks7lCk8R0yUzocKrbfGXA9m5NPOu6T3EUFsodPYoW
Zv4fUP8ADFZkU9abdPiLjd1FdDje5XVzfWRpLaXCgE5VyYaQcZaUNl8M8xvIvdVbeDSSFS5m
WJ2I8B54lOnJJ8Q8+jvZbNrBRNGochXQo6t0ZX1UJ8sa1j7X8uaw+OOXX/6pbezD/pP/ACxq
6+4WH+0nP8MFrf3q++POAPuU16t9tjXpiAMkUVwsckbdBRWOeK/DF6tVs/x7u15yG62zZbOV
ZIGIcXMioFIzNWbIDBzyZy6dy+G+f2Fm13JZpMiCsj2sqSuK/wCwdcKFt3xFzi9s0lhitkV/
UVluVSSp/iU5jFqql37ifIuPXS2+62bQEg+25IZGNM6MKjCYpQdRNB1NKnpn4YToQKEqBXV+
4DAyMCgyzHYD/DEB0qhFSSOgp1p1xE6SZGmROVKVyxBKmkLU0ocz2xKE4QkUyy7d/piNECNK
knInKvn0w6NPraQaSBo7U6f8sBEXbQFPqFclPTzwokyckE0NaDsf+mFmpGuPQcvWopUdQcGD
RIW06VNQaE/XE3DAsOla1NTXviR1ZzTVUdaEHrh1aYsWIQNkD0z74macnSKAj/TCjxhg9akf
TI4KsSGVdVWIIzpU9/rjNOnDPWpqpI74kNtelepzzI7V7nEqYkqhDEkn7c8KkL8oNczXLxJx
Ey06asxSo/zwoiyg9TqPQdsWAi4J60b+HEzaLUQC1DUZU8sBhaiR2y/N0xYaSSHTRaAgVof9
cII6gQR17U/yxLQ9qMc6nM9cUopalyNNRHSuJS03ujtkxNST0xKmVQRQHM9T2w1Q/tgZCgyy
pgIkIIzAJOfeoxYiLLmSMu588QL0sQCcjlTENBRCCR6iOhxHDBQG8a9csRkJajIL5g+eJAqw
r9c/xxM4PShUA9T3GdfLETEUJ7AZV8sRMegFfUOhr0HniWgodWmpA7HrgZnp/UpHcDqRl18s
MOGRW1HwpU1Na1xHA+oMXPTwxCwDAhszVe474gYpUgg//T44VA5jpUV/bX8cTQFUaq0God69
MTOE2VRpJYjLPv8AXEMC7CmpaVPbxxNQMiVzVqN4YtNmhY+mrj0jLPriZxGCxYE/aemBBOkE
mo8any7YloXK10g5HM/j4Ylp2NCM6qBme+IyA9KggkUOdPPtiVRHSCFHqb+KnQ4cAHeIVJ7Z
aqd8anIMSSvQAn9xwBEqrnnpZR6cq5YPlfHyHXIsje5TQRQmnjiqkOH+7MjTm2eR+mJsBarM
xStRnT/LExYiotRnSuf0Aw6bDyMrHPInILnkfHGVzPPQs+gBVIzNfPzxHcCyhXqKE/lr/hXC
1qP/AMYIY5HI9DnhZAQpAJU5V655+FcQ1Hq9ILE5Z9KkH/TAcJWJBY0zqa/XrSmJIl91XNcl
yIBPevhiE2FKEyUZL91fPvia0zhKEg9B6fInviFoUDBWquVa18fwwHASJ1FKLSpp3GNDDR6W
XWreQNemM1rSLFn/ANoFW8QcTN6DkAp8emJaBUVHq1fBhXr3yxHw0ghUsxpVScjnUeWI+GYr
6cvFgBSmWLB4aQ6yFUkGmo9s/LARMSVop8Aa+HjihwGqRQSSBlTPz61wrUZU0opqqCmX7cLN
ECZBnk4U1WvfEgRtUqqselGrlTPGTDtnIB0kbJVPVqYierHWvcNQVI/L4YUZItWpmXURln1r
44AZl9QpUilE7EntXt1wjBVkQ6ZAHH5ipzI8vDFpzDnSHyVjTr4mv1xYTkJX1BtTfae4OJUy
OwUAGp+7UO+JnBqQ+rVkoI0qRQVwNSnMgDL6RqNMz9pHTENMRpUrU9chh0iQkgLpHgx65YBT
kqqkah1oxHbEAAntkp9Q86eHjhgollLAnSSQcuo+uJuCdg+pm9INDUmooPHFitCoSqkEHPIg
ZGuIC1MSOuWfuZV/CmIepdTFdRyPSuX7xgbQtKwYLWtTUYGbUpkK0Ud6n8MK0yZ1CEZdB3+m
BYKRkJWvfIfhiRkbS4ORUg59M8SISE5Vy6kgZVHh9MKO7o4Gk0JHfof2YlpzpYLqIJ/wxYSa
VmFXjAUn0jrkPPBiEQzeotRaZBszQ4CYFWqUGa5U8PPAyAkqQGP8zrqpkcaBq0BOqpBp0pXE
aLS5K0JUUGknPBpw6lyWJ9SrniXwjf2vcFTQ0oV6dDXCzovccyFguRA9J6UPevng0y3TtIwY
ekitVrXw6UxQ0JKHSoGdDqYZCo/1xpklRVZjWhIAbypgawmaqhq1JNCMQOqH1MxHgpOZr54K
YFnqRSgjpnn0IyI+mJHUqF0sQB4Hv4YUaT22FCMyQKDwwg2oajq+gPgB0/bXEj6TTUBQ9wSS
MsBwhItNVBRRQkdAcK0pOoVaU/jbv54FTO8jEFRTwPcYlBKw9IBOomtPrgQ9QFaNUZZkVIBx
JHKWYhNQI6gN91B44UIyqAQBmmR+uLFowrNECTQd/M/XEQuwpWgoTUkdqYhTq9FKqpAPQdSS
fPEzpBSFA7nw6V8sDSRCS5UmnjU5065YkEjIAjLuKiv7sWo0bBUrSgr6j3B8cQlIZDSwJH8Q
zrTpTBjR3XSVplU/a3Y+WJCIXS1TmO48u5xDAsFX1ivqAoRXId8OLDIa+inoJrQ9TirI9CA0
DKDkSaipwNI2jbVQDM+JzoPDDKsHGzLVTn0oB0/fixQylgxcL5e51zPbPE1B62T7qMaEgjqc
VBhpVaH1DxwIDioQZANTIdv+mIYSyMrLUKc/uPXwAGJJK1chcq9FoOnnhJIP5g0nVp7NkK/T
CDEa/Q1aV6eGFo5EoyqDXNRWmMoDhpKBnC9gngR+Y07YmOiVpDlQBaUFO5xNYNtS+kZZVzNa
fTEdLWrE0Oo5VXpl5nBgAW9OugC1PTPphSUOjAChqe3gPpjOHQKgaRadFqrA9698K0xCZ+nq
aAVrXPriwGZRroWqX7HxH+VcKSe4yChNaip7516YkWtQCWNf4R9PDATSdQdR1NQEdK+X4Yha
FfZkzNaitAD0OIECACpBNMwR0pgJB6nSaae3iP24VDkKuRc9gD3OJDU6lPWlO/h54EjURqfW
Sa9CR3OBHdaMRGKuwyTLOv8AyxIzMNABpRSK9Qajx8MSL0tqK/8A1Zg5/XGsRgOhIqfEZ+rA
kzImTinpOdepxGUAKFgGYBegLdD5YTpA6dJrp8h0A7YkESaakD0fv+uAUQKe4dJGmmde3fEg
siEn2qgnP7cqYCcGStdVKn0nsadqYgMOM/b/APIBXV3FfDETawQBRfcYg1bsO4xHRHPTTPsp
r361HbEqWoKo8K5tl18cSRvJoNBmrdCfHCIY1MhNNNaekGp+uEkcsly1ZEnEBorslVfVpyYg
9/xwGFIJFKijCq6q/dTxrTFo6h0CsAKCiVz6AVH78SnINQBYKK0A0uMq/wCWKmAy1DUDlnTr
TA1Ik9zWwzpTovc4Tga+rIaAfSB1B8MGiQjqYk6c6ZJ2H/LFowkQhmNehyHbwzxayZGZhXJj
3H1xoQXtekaDV/An0gDxw66c8uDcmVrZk6xrXUR01U7YZV2yUjr7gYLl/Dhrmf3x/D/x4YEs
bbTHeJoNBWtRnjXPyH078J8z2yXZ22He5QlvoIgZ5BGCCehY/XKuH+nvrWTPG/HNOPpyNNlF
/FGgWiXEsqmEGmSmUGgP1xy+rCz2fYEsL+/3OXdLOC2u21QqLgGMrTOpqFqcJkdEu8ruFiU2
b+jbpPG5Dx3jqNIH5j+Yjzxk4zu98m3bZtz26PeX2eG2ZtLW1gzBtLZKWqT6Bi5p8XO9vx/Z
tout4truz1Ouv9TBIoctSlaA5+GLNGPmrd9xN/fS3UrMdZqFPXr1rhhselfAs0Qub6Ka6hhn
0qsCzyrGzVz9Feprjp1fGJXokdhLxx9x3bc7i3S0natUkU0Uf/rFvIY5tS13Pun9X2yJ9kt9
p3Ukii3sgVwK9xXUDiCk3flO6bXvlhb7y212tvIdLxWcpdgezOGPpAxRv6zFrydNo2HY7zcr
aS2jeWjC5tmBkd28q51r1wdRzsZj4j2XdJry83dmt5bW5NYis8Zkof40B1L1741L4fF5tm2X
m1ctvv6m8cMe4AmydpkCtSvoLE9fri/BjN7n8ecxg3S43r9fDt1gshlcm50qyVz/APGf3YoK
3M91c7ptlpPs237Vu7ZEtPIg0acjWhrUUxanm3zHf7+8Vtb7ou1xrBUpDZySPLpag9QOQxZq
x5RQAlc6ePTCEqt1z70PiaeGJGJcHKta1qBniqEpDE0NPAjx88B0Sofcq9KAVOVP2YUIv6ak
GhPpAz64icGp0sKnrliZOrVNK08MKiRPRlQnLMnMnETlg2RILHqOn7MSJkAIKkgrlQnFKKUc
alqkeoeqhy/bhEiSEnNvzdss8ZaOmvWdP5u4Pj44kIsF00H1GEGIWrAfUEeH44RgsxQg5np4
YiMsAfw9R8MSIBNNF6noOgGBHzYDyzC4kYswyyFP2/jiRvcNQaA6egw4zaJmBFAKV6ny8sS0
RZWXqMuoOJoxyrlkOlMQD91SKVGQHQHEB63fPUPIHIYid10INS50oT5eOAmFCo9RWmQyxLC1
hQB+YV/4zxI7MaactLdc864gYqCgrXxyPXCsw475VFK550xIAKlsqAHKmIaIAEjOpHSuQGIz
0qNXoAW7/wCoxLCVRrq2ZrQr/wA8BCTpPXOuQwg5Ymhb83b/AFwLDEhTpNPI/XCglgSaih6D
/XEAkj0A5nECLtT05gDr2xExZtOX4jyGIGAqSW79PAfTEibM01ADvQYjEbE6QQc65k4YaIEv
WvbMgYDCkJUhs6HpQ9sQRkh0zoueRHftniRq6T1r4jEsBqjqSRQ5+eJAkYqRTKoyOEZpoyCu
uua5svb64tWADJqrQaqVJplniIHdipoemZFe+IF7jGmfoPUAYEGWU6W0jInLyoMQtQLLmaD/
ALh0rTzOEYZmOgdOtStc88RAdbNVQ1GyBPhTFKcCMtNfU35vww6goWLHrQnNvLwGIYCqs5Bz
oCanLCDKwA6fd3+n7cCwLE6hU1yAr0/digoCChrq1VNTX9mJqQDnMA5eAzwnw5KH7aqV+7P/
AAxmweI2KH1rnQZ+X4YB8G1glVOZpUdev1xNIqEmsh+7tXphGDD6idQypSvX9gxHmOdnFArN
Ug5AChr9cIuAkL5Bcq5n8MUqzSaQKCQQA33L1GCr4piUOpANOo0Sle3fFi2h9wK4LHIdFr6h
TLFp0JIdXXME51HXFRQstCVoSRmD0zwKwIOpian0EenMV8ziMJtJo35h4HEvA6qn05BVqB0B
wtXEbjIEZdge/wCHngZpg0amgJKgCg8MQw0iampWrHt5DEgSRqSAFpSmqnb/ADxHIJlRFFRV
+zdMumI/gBbUKEZU/bTAztE0lABXSxzFT0xNShYRs49R0/7upwxAVCytQ0J/P0rn0xoejYBW
HRjl6vEDAvgDg1YNmAB0GdT0wC6RGeomjEaaHrXtiU8GlNJWmosfu6Uy7YDICNDQrXUBmAe2
JrEoQV9RqDn4dPLEvgtYZjQ9RkB4/wCmEVH7jBPUDoU0XxxDRKRRWX8wyU5ig7YCQ0moVKBq
EHwxA9H0kFtNex6A0yxL0OY1MwzP3DsPphH/AISRNUMKdKmviP8ALA1KdHLKVGYINfr4UxCl
TOoqewB/yxLDsymqUJBap/5YiYVJoKgKOmEh0BhRaA0qfHPyxGwNVUAA0K5ZeeWJizB1Y6/4
6ekfj1OJaIP0qmgHIgmpPbATyVBAqAOhqfDAtCWAqXXUo79/LDgSRuhBZwNJ+0fTAYcEqK1o
3iT6a4jCRgYzr06h0XrnhQMwCzAimYQV6jwwIdACmkUAyZuxr2xA5RHVvykArl9MKBDKFIUV
qPE9Bip04ObUJ11yB/x+mAGLoGZtXqbwFD54EfUCqtUsT38MSOAQA3U9AOoOEyH1t6aqQetR
1GJGdnJow6DUaGlcAsN7h06qABhnXyPhh1kWWgtnpbIU6/swN6F0rqkV6GlCpFRUd/LEqf1A
Ak0C0oBlXzGEchbSWIYVUGtc8u/bE0Sg0IAIB9RJ7YRhKfWNBozdaitcsCwSAF+tQMs+n0xL
DUAWteooVPWtf9MURyXYIyqAlMmPTCtMrh3JWjCuRAp+3EoNpItJJplkwz6/hjLWIwYygRcw
zd60K9cOudomcAqAKrmX7+WLDBqBnpBIOR8MBgXXzqKUHYVxRacBwgbTpqa06YsQiNWQyZSd
J6ft8cMQY0IZqrqp3zxVQlqTWtc/SD0p5YkNtKxtkGZuijxP+WIgoAtAaUofKvhTCyNTXSQR
XMsT0rgJmk9JOk5mgpl+zEhFAI1CkaQMj4HwwIziNl9Jq3auVfGpxCmDsjaTkCKimdPxwtQS
6mGVf9xbMn/TBSIIhUhgC/hXr9cAwBdVSoGfQqKAg4UeRlIqBRmIB8aYlTpGNesgazlQ50y6
4AEktU5K+npXr+PjjS0Wmnqagp3wI4BZmVcx0z7/AIdsR+TL/LPt6gOv1OBUMmoqQKgUoPHP
wxDCQsjaCSSaUYj8cRxJnTS2RbMk5Z9MsOIgugMCAa1q3U/twkGgGRW6EjLwz7/XADhiD5ZD
r6jTriIlqevfopFMvPwxLDMAjV11z9MeXTECao9a5NXNQKimJGR4gzFu2QHfChVTRTq56r0p
/wAsA0wXVRRpCUNFJzr1yGAmOll6E1+6uIG0sUBT1dqdKHxwtEyupyajmhP0754klBpWoGok
ED8MQR6lLMMgSKkH9mWJEHkFWCkioVXC5Vp3PbAjEamrWlM9WdfpiOHDqat9x+nf/niqO5ZU
1AUCjsM6nwxYiDalqMwMjiQqkAgCpB9J759q4l8GLOQKgaaUp4k9cCRxJJU6wAqH0Bv8D9MV
UiQiNVUnNgMz4YEE+1qoRRf/ANr6YgJAgTNKnrWnXCTrEadWJz9NT0OJGc5elugzB6YgAP7t
GIo4yI8fIYjBpVSQQAR+BIxKUYoCzE1UZKB0FfPxxLQKaGqDUlcyf9MSNLRiK1Xwr0GIB9KB
asCrZ17V8aYzXTmRIzIKFc+5H+Rww9XAGnpYdzUk5UOFg7j1UoTUioGY8jXAsMy/zCc+/pNC
D5nEbBEsNFVUmlBQZVHcnCPTqNI9xj6fzN3/AAxYfgLlalWUtUVAXKoxI6xCupSAi0AC9RXq
cWqhYSZiNiARTSe47YhDtraMr2B+45A08RiammCFeoqxzIBpX8cR2HMYA6E6smBNSB1waoOO
MnUKUHQg9gcBOTSMrSh6auophwWgqy0JrlkSvXAztFQrmo1UFf8Ad/0xqBFIlBqfItnpU5gf
54qiWTQoVC2odn/zphP2rm3I1g+4KoBoa/jXDPkfZkXkAl1BaU618cNQPcTwH7P3YsDtslkW
50ChValj1AHjTvjfHyse1cF+LOX8msRLt6QLboAVNy3tNICMgisOmNf2sjr9+c8dl58e8ssd
4/pVxZUvCBqjTS8ehhk7SD06RjhrGT8La5+Fua2Nkboz7XNbBDIQLpV0LSpoHAXGbbXPKwZ1
pSP0uqZBh1r2+oxucu2eOvads3Hc7hbSwhNxOfSQagAN/Gx6LhvPjPXGRrd3+G+e7dtj7k9p
BcwIKsLWdZKCmZK5VpjjZVzWFkSVZGj0UA9LtTOv446xO3adq3Hd76KysLf9TcsaIimhB7lm
PpUAeOGqzxsN4+JfkPbbRtzvLZLi1UKGeG5SXT5hR++mOV0zpBx34y5pv0Zutts0jQZf/ImE
Mj6f4FY1IxqSs7FfNwbli71/RpbB03EmjI1GBX+L3B6dPnXF6b0tN1+KefbHt3626skNqv3y
286SgDwIXxxq1m0PGvjTne9W73u2WcaxlfSWlEEjg5faxXLzOD8HFLvWz7zst+9lutu0Vynp
kRiXX66qkfTBtDlLyJFpo7IRUqWbp46WOGVVquP/ABjznedv/WWFovsEVCvKIpGJFfSGK1FM
QVzcO5S27f0eSydNxc0aGQgKoXu7k6QPOuNSiu7ffi3mWyWjX9/ZILU0/nW8izIP+7Saj9mD
SzAjlVmXSxC/cyoxAp40GLRmJ7KzvbyVLa1hee6mIWKGMamauVKDGhKvd3+PeYbSkc+4bXJA
khARwVcVOYU6T1wNRa7f8P8AyFe2q3EFpA0co1KrTxowHmpOWC0M7vfG972aY2e6272lwBro
wByPQhlJBH0wxpwRgEUc5j8xPTDgaiw+OOZ7lYf1Kx2pprQ5V1osrAH8sbEEjBq2M/d2N7az
PDcRvFPGSro2TAg9DilQNFNJYA0GZHWvnhQjTUtTXyGdPriRm9JIPQZKRiQgXCkmo8B/ngZ9
F+VQDQf8dcDUpzTJQPtzHTCj9sxQV/HDEZVOodBSv4+ZxCwdfcFCpKt/ji1HBNCKdMSP6aUN
a9vLEijUHqaHsa1riqPoSvqPqHVhiWGrRSXzB8OoxA6vRc1B1DviIjXqcgMhiB9K+rUK1GX4
4gY6gwYdBkPLET6q50qO2JGLGoz1V6L/AJ4kdmNAK5jqO2IlkW9QJPguJGq7VrSg/biWHzAB
NQD0BxAGmrkqv4jywiwnGoUXLtXzxGHWkdaHMd64DhVyNOrdv9MSCxY0NdQ7+NcQJzU06qQR
l5fXCkdT6fST/wAsQ07EM+r9gGKrQ1BNMwR0ywI1BoINcz174WdJiBWgIRcRNUDT6vTT1V61
/DFpwiVY+AIypmTiQCCSdJPliQW1jM5AigplXEcMTlQ9aUBGdMSLUvc/XzxGVGZVYHMipqaZ
Cv1xCgANFJIB7+OJYZpCKECpzz8sRACTn+XqRWmQw6AqQakdAcx9PPAkTmTSe1en0wj5DG/p
oahSK5Hw+uIGZwwyHTufH8MWYfAPMjSVP3AdaZVGHGaD3TUsxpnmMBwyS5UVjqbt2xYfsj1q
H0/mOYOHBQmRgub6j2p3xYtAZDXUfUvUjtjQCwBZWDEV7ef4YF9YHWtTQg+JPY4BPDl0NAWz
r9M+v4YW5YgdnrqY1XoDXsPphct2hEgOkqKLSgHYnxr1wVvNEZFC1AAI8RjJ+AoSULHrUqD/
AJ07YrAZCKgVACmpNa/jgaBKBryU+K0z/wAMMNRuQzUFPSKlj44mcBqZwyjKopTxP1xpmWnQ
AMtaVApT/OhxluRGyaDo15Hq3gBgVMrxjP0nppfucQhdAXND46utPwxY1aB5A/qFdX/HTEN0
ys61p36CvbFqwFASRkDWtR/yxKYeSlQdNB2PbEQyyUGQ1VyWnbz/AAxIyqyjTUEkVYf6YjmG
WpcHTSlaE5/txIBV2lDVJrUsOhyyGWJYUbyE5AVA+mX0wDAMylwpOk1y/Zl9M8SwMgQSUb7i
NVFzNcKGPVk2dASFHnlniVCtVYjtWhJ8e+XbFoykxBIKGvmegPlgOBkKqwFa6syPLEsKi1qq
kMagDr+OIYdGCNWgJByqMgcJ/wDAKEMWqNVaEeXniSRGQsQDqrlnWlcSlIKdVY/Sf4qioxKz
TFiagipUerOn7K4AEUpQek92OdPrhWpFK016hqAyHjnnTBgtIFUD1oFI1ChJI8csKIlHXVmE
6nV0JxE6vocqhypqFcqHuKnEfQqJGLMxC06074gcSn3KChC9NPj54sA9T1GYLj8o/wBMDZmZ
kBFa1HbLOuWeEETr0azV1/Hp2OJWnolBVSurI08cDOnX/bkQKaic/phpCwDUPVlbPzJH7sBE
pJY6h6gfRTt4VxKQ2WnWlC7ZkHpUYgISARkahQdVIyqepxNacstSuoGQdF6fjgIa6T7ZNdWf
liCRjLpCNQqo+/of2YRdA02mOtKxjq3l4UwAUdArEGoagVKf4YWoINk4YepclJ654qQmpIYL
6ula5GgwI32lSSFLGtAMDItWqjVy+4ZUGEknqQ1alCch4eOI6ETLQAdM69jXsTixnUjIXUVO
nrWvX9+JaEKAw0Ag5Cq9iP8ALATGNtJUMCetT54jh1Egyb1GldXfPFoIKWahoe4boa4VhIza
VpUNWtewPgcWI6lqUc5E1zzwElZENPsp4HrhRlSp60yyUkUBHShwahKlQMySa1PgPxwjDhNB
0j6geeFIwzRCppn1ABqK/wCWBJiFamojUwr5/XGWtNrZgS1DU0TLtTscQpgoNCCadHqKYZER
JR9KdVHToD54sZMjsUOrJSamuIyBZkFFUk0yFOn1zwpMY/UAytQeeeXhXEsRvNJrLBcyaHw/
54kf3HaJVUgLU9u+JBJI1CldP2j/AEOFYljcsWAWlTmD/mcZEgnDUYEihoAOtKdcRCPU1JKZ
mlR1/acCCxChR+U5HPPLCymYKUQZAg1So6fXE0DVpDe2aeOVcJEZUJ9xQFNadMq+WBaYBlOn
Tl3OBHdgXDFdZTLpnTEjMoX1IlG6MD44kWlychQ00jTnlhBEKp0lcjkQR1/HEsM0UaPnmK5g
51JOVMREGZWqKBaksB/jgR1AyNQxOTN4fSuJHzVwq1AOYIFaEfXEgtVfv9XUK2Qy6duhxLSj
jNQCAwA/ZXCoVXoag1ByPTUPHEDRFFTTUmpNCe3gDgJ1ZWclXqw8fDEpSzZyfykeonxxLTFW
LE18hlniSViypTJSRQv1y/DEgK4RwSwZhmOlc/DCZDOqGYENpPgPDvjIsMpRldiSF7ZdulQT
iZ0aerKpYAVApnllWpwxuQykLUrmCa59fwwgysjeonPIhadadsSMqFnz6gZg9c8SExUZuahe
y9QfPEUZMS10M6h8m0minyIxYElEIo+TkVK0piap29KjoKABT0rTzwAMi+kyGpOeTHtiBJ/4
yUyJGaj9+LWoFz6QpY6uqZVA8ji0JNZQUpXSP/IRQE4lYFiSAwJap9QI6YEkC5aQurL1dAB5
4Ei0aGAoH8jl9M/DChRoc0J9J/E+JxagI0jNV8tJIqDlQeeIRJJMoOkIMxWvcfhiIFZXJ0E1
yzP+WHElb3Dmx6dGOWBBiP8AKZa1ZaE0yz+mIhQsCxB9LUNB+/EME4BcKAWJFTXqPwxADAOx
RgoC5EjMGn+mJU6k1LgZACp8uwp44kdSVJNC9cu1AfDE0cKVrQkN4DM070wE2o1oxJYdiKf4
4lId1IYUJ8BTIVPl5YjqRURQWJJCnI5Cp88MitiOMPUNl7h/ZTECQsJOhFevY/h5YdRjm4rX
/aAM6nzwYMMJCTlQ+Fe1PEYjoo1WuVS1Mh+X8MC+CljZdQBJYEGoPc50FMC0QIplWp7jOp7/
ALMQ0KqSNNSGH5ia0rhQoy7K3u0oxoQvh2wNS/sDgxpRV9HYkkmn+7Cp6IRoT16jPM9MMZsC
kQBCn1UH8v6fQYlHBuWiSOR4mq1CKVOZOXQ41KeWSkH8z1EhuhGFUXtp/F/1xYlnsjxR38II
USrIrB2zA0mpy+mO38k+y9wRty4HazbMpvAVjWM25q66hU6mTuD4449Sy+s42O2NbtDbpJGj
ssaq9uxOuukV8zmO2OYsxmd049wnku531lecdmt7mPJb0TSqSx/Mq9Kf44rzFK8c3T485A28
X1psW33O5QWbUd40qVJ6aiP8saldNa/4a2+W03u+tNyjNrdPFpEU3pkDA6nGlqNjVuxi+t5s
dvdwb9u36lWSJmRYTMCkZBXoNXp8sc+vhSPCediEcp3ARBfbaRz/AC2BStft9Ph5Y3zDa2/w
VJbm7uoCYzLPGTFCzAOexK+eGxlvuPRXMO8br+o9yKBm9uMS1VG0dStcvxwIuZwySbXb3G2q
zgyppe2BK6K0P2ZU8cSaSFoJP0wJSSWRSI0LDWafcAPuwYmf43+rh3LeEvXeKNnpFHJ6V0Du
K5UwkHOLW7k26zlsQ71nWstuctNRU1TLLrXDA5vkKWX2Nol29IrjdPfAtFKqdTKAVBDdQT2x
mSnVdza/+SG2yAcj2Cwt7FZI2mv7fTI60P5jqbR50wz5E+Vvy79RLY7VLt2t42kjaSWA6lEf
1TLOmGqtYFtWmUExvO8YNDRpGUAfiaYyma4yZUj3uO7DxxPK/spPUKyAHMe544tKi4Te/Ilt
Z7gnHdnstz2Vbhwn6hljfUKalHqUkDzxrqeBhTvPJdq5bPcxwDar+Sb/AOTawoCtWPqjKUbK
ueKewyPUvkrdrxeMWN4ZC0iSxTDUrFQyioJWn7cZCq2j5E4/v17YWu57JeLuiMAl1aSt7Zr1
ds0oMuhriRfNCJdJtkdiktzJISqKh9xq9ANK5sM+njh5WvONp2q9sN+sV3S0ltR7qkG5jZAN
JrnrAGGdH5ew8pnu7Td9n/TSSRRTSK0rw6grR/7ymRGdMLOYzPzfFa+/ZzRqPdkX+ZKoFSo7
5HP8cZNeWIRU+GHRKRPgM+xxEwHpzIzP3YUly05sdQyA7UwI9BkD2H0/fiQQqAnIivU4cQvc
zoOp6fQYho4xpqaZ9RXocSOS5IRe56DAg0CgqxqR1I/1woRZa59T+UeOJCY0GmlAegGeeJBL
V0qVzHcYMWnpUUp6hn5fhhREDIkg9wB2rhQCzsW1EUGVcSSUVT9wNOnjgQjWmXQ9R3xIBcay
DktKADED6WzbTU9Sw64kcVNSevenWuEmCspYk5jrT9uBGDPmK0rnhxHL1oaU0/8AGWJkwDGp
LZHpTEi9RGRpQ+oYEZtFfUQCPwyxNaJdK9cwOhwoLPqIpkKmpHTEKYsQM1yJxM6H7Voc/DEa
ZGAJPUnt4YlKT50ZjTt4YgEUYGpNAKA1pXEAAkEAd+uf+GInrmGJpnX00xEMnUlD1pl4DyxL
QtI9TlpHl0OIWmJ9NK1A6+J+mI6iUvkT9pOWWdO2HWfyFgAxFS1anOuWBrwHVaVAz8OuJGNd
FOteniMRMzVCtmaEZjrlhSJpASFrnXMEdcQ0i6gaFI9dDpOAyh1hmJriWAJShPZuhxDUdM9Q
AzyIwoBY1JyPiQa0wig1kZHvXOmWIGUkZCmfbpgagJB11jOlCMLNBEUBc6aEmgr1H4+GNKWF
pr3zFOnhjNawJVRkGzGa07eWDVQMRoyGpvDEyAAE6qE/7T0xDDfyiSVyIz8v2Y1pkgfR7vXK
hoeueAhYVFChC+B6g+OBQJWhopqACDU+GDVbA6xQBMyetMOLmw6aghzNTkT0APfE18hVVdCj
genPwr4UOKiQDEkgVq/Ydf3jBqsoaAnS33np4+WIzAalJddOmpA/ZixXrDuo6MwUdAR1zwak
TydQM6UBqaVGEEI0jRvUWVjQE4hzCkADCn2mn1/diWhy1kDNemXYDETMoFA3/k6KT4eWLD8G
DEMPTQAkNXL9lO2IhJAFT1qaV6/8HEgNr6gEMMiBniRywQ0JI8CMz5YEbWzEs1BUds8vwxHA
g6WBoSGGRAqaYhQ1b3QxrTsvnhGjNddUNGJzJ/z8sSwEZOqrKK1oD/maYDIIuCDlmv3nEQSA
6gDmT9reGFkx9JUPUMD6WPbBFonkZDpYk1NKgA59anErTGJwmoEEVGo9MvLCDMCHLofTkTnm
fPDgpNpJDswzNV88WDTgErUn0xkg5dD1p54zTBs4KBlAOoVC+PliIQtQCD0zHjjQE6xplqyI
ofGv4YGoFyx0xhciMj3AHY1xLTsGSgNGPQ9P34lfBV6mgFPydyPHAyjAUMwA7VDA5Yj4IqzA
VNGFaGnhniJtK+lR1JJNfPrh1CyCaRQr01d/xxA+bIuXQUzrn4YCYiqrpJBH3k0qThV9IVL0
6kNTT+UjAp4PTWQuhNCenh2piVMrmPVQdKtXrSuJQhSqso1Bhn2r554jKNStKUqc6A+OAgqQ
hzqw607U74RpK8irXUSG+40rmcQMrflQ9PuH5a+WFYOlFq9KjNqClB4DA1DxgalVmqx9Wfh2
qD3wVB1aWKyH0scx0oPwxDTodTaywUrX8KDt9cTMpEnSSq0Jzqe9fLE1KZSfbADCr5V6VHb9
uNCkyrQDV6x1Ttll374MZGJ1NNR1Mew88FjQkkIoCldRzH/PASZgrEMfUVrl0r5fTETIGqoV
wFHqbUK/uxayQZ8i2avWnbCvTgUWi5A9z5eeImoK+rSc6VXoSfDApSYUXNdQGVTlmMTQCCSR
+aM+s9s/PFAMBi7rUgClD41xpnKYpqYFjpUZVPT9uInIND6vSBUV7+WM1UkWki0AqQaUNen+
uJRIHoo0gnPOuR86YiFjUhlzAqQvfPyxAnqW00Ar9cQPAGCmgGZ6/TwxE8bgMo0DPqQOlDgW
ndwZehGVVAzGWNQlGzEg1JAqc+x8BiQ1FQBUUUZL0+mWAIScizHSB6Qv1yywakiRyhAVPpAJ
8yMJw8oRQ+s+oZKe1DgZCzUCeoOemrwriVEkCdNQBPqJH8XjniUh3QGhoSqfd0IJPQ1ww6UV
akBad2zzy7ftxUnI9AoBpzIQ/wCmBUPut+YByorpHWpy6ZYgJJDkAoUnJyfDtiRtIHpJ9BFI
2Pj3phQyahQpAy6/88SCSEdVJrUekjrQ+GJHcFSVyJBqK9KjEiUesL6akVr2y754CQb0kg1p
VvHENCzMQRUmvb64qgs6ppOljTw6HzxK1N6KEjpXIYVKDWokC6cm9IBJOXjip0ZAVCSBTtlX
/DBiwzCNFJYKHAy0g0/5YBpKX+8gAMMu2IBDZsxyUjp5/TCYKoVAF6GvpHh4VxIw0KDqUBAK
qQOlPriJatWYFGXJh3z/ANcQMVJUgHIfl8O/XAcFRqAV1eKnrhRj7gcKxUZ+on+HyxI9TpPp
6GqsDlTEsNqZq6h6fzL3NfHDSZxkF0kg59Pw64kFUGmjEUPQYkKMyhtRIoMhXOv0xIS5sdQA
HYd/rgoR6Ff1dfqSTlgSU0QBSF9xh1/yxYQuoIAfLLr3r07YlDlSTpZiSKeQFcR0xBVgAQaC
lB5YQPPIgVJrWnmPDAggsWAJ1KR6AaD8TiRVULqOZ70rnnSgOJYPTIhIYZgdDSp+nhiWAXPU
uqhOf4HxOEwQK6mK+kjJvE0wLTaWAyYkg1UHsMSEzPG3YqR6mOeVcBxGfVQg6a9KDt9MQE7K
ooKs4pTzHjhwUTIHV2ZACcz2/HEkZUllNPQ2QJ7GmWWIyHCKSDUhk/N0qDiJ0CKcq6+poe57
4EebUTrpUD9v7+5xG05lRovaasbnJaimX1xM1GWI01H1B/xp2xaj6RrrQ1PcEZfWhwnwLkh2
YnUSdK0zocQw/uOPzCo6HCvQ6tL0aiufxqfwwUJBGxVdQGrvTIYGjsSdAQUp9xXoKYBYjzqN
Z9NSQcq/uwjBPIpWoGgr1JyqfwwxaJySgYEV6uOmXlTEb4YSBhpNBWgKnuMDYvaMcRAP3/jl
XtiF+CBomR9QyoOw8cRiq3QfyGMbev8AMQKVPjjfLUrLyrWTMjUcq9carnS0nwHXT9378Adu
3TM9wASCGNGPj5YZca++PU+Ecz5Lx6LRs+4T2Ub0rDEwKMf9wNc8PfWse/la3HNuV3G5rusm
5z/1FGqlyjUZT09IGXTGN8PVtXdx8y/IM9obSXd5CrDQ5CIjEdvUo15/XBKOVNsXPeWbHcST
bbuUtq9ydU4r7isV/iDhqnGrWu745ty5VyHdt0O53c7TXf3Ryg6TVehyp0xSMz1a3fyjzy+2
s7debq8loAFJZAJCfDXSv0xWKMxonnkJiVnkHqdFUsaHMk0GLTRWd5d2kkc9sz29xE2uKQVU
6gexwfZnY1l78o883CxNhuG6NNCwOoFUDHtTUACRjF3fDK5OP/IPKuPo8ez7hLDGzFpIm/mR
ljlXQwoMdPqZHJdcp5Ffbmu6XN04vI39z3UOkhuxWlKHEzi43L5Q5rue2/ob3cXuIG+40Ctl
/EwGJmoeO/I/J9jV4tuvmijlqDDIBJESRTVoaorgrWxw7pyHet0vDe31081ywAV66AlO6KuQ
/DFKhT8o367thaXm43VxAB/4ZZW05f7a4isuP/IfLtgtv0e2X7pakk+xIqyRVbrRWBp+GIVx
ycp5FJup3Zr+Vb9G1RTRHTpP+3GrStd1+S+Z7ttosdxvmljpQgKqMwP8TKAaYyIqNu5Hvu2h
v6dezWQkGl/acgEjLMVzOEfZ3bBu3KIN3j3Da3mm3JTX3lX3nYn+LVq1Vw7h+Ws3T5Q+TZLW
Tb7qP3ZHQieE2YDafGmmuM/I3FZtXyvzPbbT9DBNGIaHRHJbo1K9hlXDi+/imbl3IhuybrFe
yJfw+pJiFovkFppofDFFKs+RfJnKOQ2K2G7SQyRRsGDpEqHVShqR/liUvqbZflHlm02A2+C8
WW0FVSGaJZNFfAtnQfXB+W6z1/ud9uFy1zdyNIzHItSg/AZY0y5VcjuTXv2PjgZhwCK1NQTW
nhXtiaEGCZlRQ5Vp4YRaRq5XOlc8/DEJTmoFaFh0p1/HFhp1XWwFNTnoPE/TCzozE6tUqwpk
WZSAP24tJiKmpNSuda9MBOaUNQSeoav+mJHUFaaK5Z169MGgqotDWhOVe2FWiyINOlfxriMu
kQxoKek9M+uI0aqKNqDaR6S3avhgAApNAtSK/j+zDoNoIoMw3XwwlKqZAE59a/8ALEjGpGR9
PT9mJGjjLnTpq/gMz+wYAcalNAR6cq9RXEsMe7UOZ6DEqajEVU0zzr1wqB050Nc8zXrl2xI5
JJ70UZV6YkSqT5KeuJQ7lftJrT8gP7MAodasaGgp+/EZTv0OnMU/wxExAoQR6iMvCvicLNAX
1MApq3bwxLT5rXV0pWtO+CoKgMSP2fhixBrqNTXv45/twrDSB9Cqp0t44tEhmI0VIqehIxGw
PrNS1ACMvpg1BNakkUoKfQYsWBMqhdCmgGYNOn44VoS6hcjQ9SDliFCxz65n7WwAAZ65965/
54dOGdqAN9x/MQKYig1hnqxOeVQczhwH1gr9Tn5UxYtBqqBpoDTV5UwHQk1qStAopiOgLDV6
WopOYOJI5CpAWg60APfEKRoPVUeAIP8AliQA2mpGVO/TphWGdj1A9WfXv+GFlFQlteqnl2+m
LSTge2WFDU/dUg/hglVRVaiAjURl1yxqMYQOskgFSpJFfTl4eeBrnaY0NAlFHbLM+WC0hDlm
bWtAOoGDEj1AAkfbWhHl9MWLYEP7jElssjmM6Y1o+QsT0NKk+nxOJYZx6A4pQ5kHLAULAU6+
nsBmc/PEeecPoJ9ZyJOQ/dhGRHUlqnoDSh65YaYNSFNWAJp+7/DGKgCRhErEDNhQjzxEEjks
WU1INaDqfriFmhqCQxyNfSK4dYzb6cipZcjXMt2Ixl1iJmULnQqpz8c8MjJaQ9PVQVzrl2oK
YVsOEGnKtaZk1OY71wD6o19wSfcWSlR06dMIm6dqaa5kitDWgH44HQpSGoQKaemrIn/liiAp
oB7tGZRkoNRWuJQYEayCtQnTSDQjzz74ijBUMQM0JJVq5jyOIBMmWoKCTUGhpgJtLKv8IGbE
ZVB74pRSZdK0jYszCmXT654RMINTTrFcqah5YCFpRULm1Sa0y+mLF9iLMramquWar0OGRUzl
tINaE5jPLpiZpKjOKMo9NGqa1/ZjNJj0DD01b7jmD5/TEJQsaMtRQA11dfwpjUFJWMjsa+kZ
LlSv/XGtUP7dXTSB/wBvQgDzwKwRMb1WrAKSKeOM4PgIL6SRmAAoP8VTibwTMzsGQkA9NOZr
50wrDNIBpFchQau9fPEBNIDUaad9Q8R5Yjo0IJC0Byq56nEtL2qSEEGoyJY+IyzGAIx6qoy/
mzGQ/ZihHVwAFr5Uwg4NFClQrj7hWuWI6jSQZrXInOmeIQdalVDEHppIrT9uIkyD8xNa01A4
lCBCKSBlWtPL/HAtL3fSSAfbYemuJQo2fqrekeOZxJIpQJ6TU5lVPTEtRE/frIY1Gog0xHT1
D0YAhD1Hf6nEPkRNBqStQfT4H64VpAuxU9KV1U8cS0RYhlDDIGoIPQ4miYq4pmKk5nqD3xhC
RV0jVRnFfUfDw88QwIjNR6wwXr4D61wick0qhaEVIPU9MsRMzODTqq0AFRkCOpxpnoyZ5aug
1KRQ1/biUmiVEFH055AjrnjNaxLH1BBqq5HUPHBhRhiWavbIeP44kc+ohyaFVIA/54hQIHqA
tASPTTvh1iS6Zi58an7+4y74WksZUAlaBgfuPbBUb3TIGH5ctL16nvgMpwyhcsyTStajFI1K
eTVqZwxUg1OQqT5YdYvQULaBroQalj516Uxa0djlQZgZk4AJZBUIRQihDMOgH0xE4dq6SQKG
pqcjiRJqNSwppyqO2LRSALU0kDOtcQLPS7K/pOZFMqeFcSPH7iuDIKDpX8tDiMGDEEYOfNaZ
4VAqNaqKlG1elh1xE4pSoHr6Vr+3AC1gEq1Mj27VxLS0NKdOsxkEUp+7piqPI0enSBQ9n+7p
54BPDqNQUgDSg9Q7nwzxNEPuWh9P+J7nDAS0VGUL3zp0oPLCjVowdSAvenTERUYEBqMejeFc
ZR6Oewr1r3I6YgabS1PBaBiaAfQDDAQ0rGBqOfTyHhhxBJKplWncjt5UwIQDUFQCpzqDVgR4
YFmGkABrUdsz0ocIupIzGSQaEAdT1OBqI5fQy0Wjv/lgIkGqgIBX+Hv9VOEYN1P2k+gEHLwx
LABSsrAELGv5SanDB9SZVdgaMq0zI8frhaCsrPpCrkScu4oeuCo8lKJQUkGTA4BRNVF9HWlc
8hQ4sOEWUKrUoO1BlXCiLxqo9xSPDTln9MVWlKrM3uRjIfav/XAkYd66dLZ516CmFJV0KubV
c56e2BIwGkkqoypnSta+FcRiVtDgs9QF6hf8M8S+DUjIGZCrmB9O2BGzA9JFR4dK/XCtE0qu
iqT6hWrVrX8BiRtY0KK5E1of8sIIOOjDL8qeJ8cu+AikotKk6fAZdemJaj0rIp05v1cU6EeO
ICHpoGU5nPvSuJHXqVUV7dR3wNGYAVrmc8h4+ZxLQgAGgFT38fPENGQUY0NNWRHlhQdLD+Yf
3DqPpiJAelgxYKxrXt+zEoT10ga86gAk9sKuQwRqsBTIZHsfxxIRNRqOYNCxXMg98CCTQGhy
P3VywWrBaCuoaSUI9XcAYtWBoR6qksRln+GWJYdIyra1Yhz38B4Z4tRyGUemudRnmKHwGJCp
6iSzerrnSn0wNUMsHp1L0Xoe9D3p0xathklCyjSdQ61AxDR6iQ1fSGJYZ1yxM7SQKerZZHUe
58M8R06OASDTSuZb/kcR01CWeQAAkigUV8sWjCYupYFRRcjX/li1BWpUqaGueXgM8aUNCyM5
BUg4KpBO3pEbBilajLuOnXAqAsgJTVUEkVGYoB2piEp3ZWA1ChrVutT/ANcSo2RCnqGVehyz
OJQNPRSoqp0r3wq05ao0adRAzJ65YiJJJaUNKEUWvTywN8/CM6hR2qSepH+mEzlX7k8vssxf
0CvoqSc/DDrHXNZSdgXyUKPBemNM4ap/h7fuxJ3WEKvfRR0qpdACTp6sK5+eOn8pLfVd/D7B
2v44+Lto4tb7ruVtdTExK1wtSv3rWi0xy6+Ttrltfjz413y+in2C8cW4XVLtzEl1p0pXMDBi
WQ+PfjjcLi822222e13G2CiS4MxkVx0LUrTI4JIMcf8A92HBNgsTebxZzboNdF9p9Ei1PTKl
RhF9Nf8AxvwfZrqy3iGyml2ySVK2TyklQ/gD/ri38CTF58nbfwBdgWT+kNHduNMEsYVCMqev
POn+OKfJni34hw7bNr2i1n29vbrGp1MoZiHHqNe9cNpttNy74941v7RPPCGlDKXdFCE6ehqu
Ve2M6zjzvkHHviC0uJNqEF5ZbvC6oJNTSQMT1o1WJqPHGs0xfXPxt8WbVs67juk9zcRkKCq1
D1PhTL9uI2uOx+JOE71fJPsm5G924ep7RjqdXPQEqR+/FZiW978G8UuLWZbbb73a7qMVFw0g
eJj3Y1riCr498WcWkjC3W2bhuM6OVNxAyxqq9KipGX0w1m86tG+COKruJNxJcxWzL6ULBpB3
zalP3YD9XK3xT8e3/wCrtto/Upd2Mml5ZZCy5gVxYpypuT/FmxbOlpPBcyyLPIIpYX61P5gR
54jF9/8Acrx5ZYn9+d4aBprc/wAv3MqihFSMVVZ3k2z/ABFFHPaW7X227jbdDIxMbN4VNf2Y
ZAreIbZ8UXNrNHyW5ube71VtZI2ZVKHIEUBz8jiNiLZ+Y2vFN1uotnhTcrEnRDdXC6ZfAMKH
DkXOr7bfmHc3LNc2sUl29EW+UFpEUnppPUDyxabHpvHeJ7Tc7Wl3uLQXlze//IaY2+mhfOgF
BTBQ87+ZrLYLCS3gsYo4rojVIIY9Fc6VP+uCRPNLJLeSaJbiqQs4EjDqBXM40o9Ek+J7O9is
7/Zrme92587qSNAGVR1HqJqfLAlbu2w7RsPILGLaL1twVnVZYZ4xVM6FWHfzxciNh8vbRtC7
Ja3trZxW1ySPXbqEqNI6jwrgnyunjyKQK/mOamvXzxrE9B+O/jfauSbVPfbhezW5hkClEAoR
1zJGCqxaXvxRx/cNvW649eyvJE5jdZ2X2yQaGtF6jywyM4a/+P8AgWxRRR7/ALhdQXE4OieJ
C0QI/BsDcZM7Rs1nyOCG1vxebezqVuIFrItTkc++NYsjcfK+03cdrYe3cpdq5EaAwhJaAA+r
TQMTXwxiQYwB4Ry8R+8uy3ZXOjCJmqO4yzxpYteKcKgvb6SHeDcWrRGjWsSMJq+JqMh+/Bas
ai/+H9uFl+u2W7uEi1UkXcE06V/MQaK37RiwxBuHxtw7bLGKTedwvYJZVqt3bxe5DSmRFFbL
/HDBXFxz422fdLm4K7n+rsoM0FqdNwyUrr0nDRy4uR8X4XbW7ttO6z/q0P8AMs7yP25CO5Ul
VxmLq41Hx1s6XHDr9ra5RXqwkhuYElWoSoZWPqzGEWbFBxf49t91t7vdNxuHFnG5Ui2I94Cl
dWk5acVMnjSXXxZxNLS0VJ55GunCQXpaqkHM6kA7DEYgn+IOMW15Ht0m8yG/uam0oi0y/iWp
P7MBVNr8Z2Ftu09hvN1KGhGpFtlLsVNaPQVP4YU0Wz/GdttW97Zu9lM93tpalzDdxBHFQQDS
ma59O2ASOrlfx9xbd9+Fuu4jb9xuE1RwQIpVqDMlR0xKMTs/H5Nh5zb7PudtFdCSRY2WQBld
X6EDtXE1Fzz/AILYTct27bNlt1s2vI2LIPsBVsyP4fThlZ/KQ/Dm0u/6AbldQboU1KJIh7OX
Qlh2/HDqxhdz4Vybb7+WyawkuHgPqeBGcA+OQ6UzxWs3VbcbVuto+me0mhY5hZUKZDuK4DPX
qOy8X43unxnNd3O3xi/tkk9m7iBSWqGq6iPM4FZjD8Fsorrk9nDLKkMpkAjZoxKmodAUbriq
jTc+4nPd81sttUWVvPex1ikhHtRtSoYsvZvTijUc958bcYsWNpdcjNjudK+xdxBUJp0DjKmN
WMV0fG/CeLXu5Xtrvcq3F7EtFtlP8rST96OuM6eYy/P+N7Nsu8y2+0336mEn/wAJrqiYdY2N
M/LGljLagaU606A+GLAagZSTkx6Z+GJBGsVDNlXIH92JaENUk+VDU9/LBSSZA07dGOfXAEIf
OpoanKnj9MKJmrVdIFPLCrQPTMZaegr59cQpgvpC5mnWuX+GAzkDykEKB0A6juPDxwnETto6
10+J654YLUbBSuo/TzxVnTH7hmSWGa4tMhm1EEZqe30wFFraoII81OR/HEsp5KmlMmNKHIA4
loHABr1ANDXMYlgGoAaoSK5NgW4EMCpqPS3QdvwwrQChYk1NBSvU4dWGrqWimhXucCCz6iFB
BVepP+WEUEjjV6T0+3/lhYRioUljUnoD2xNGoaNQaRWla+IwLEQWM1CFiwHXxxer6mGqpHUU
HamESYYyU9SspjHqYk0piayBKhiDpzPQ98BkOxjFOocZ17fsxLIGR6DScg2Zr0r44loVDqdG
ZCigNa1+mLTIGvo0aTqOfniwEdfgaflr0GJIm0ox1epKVYHLvmcWK0wYSBjkFYVBp18sA+TM
opWtaD/qMSwgwOf20FQaYmg/awAHVci2f7cJAuWVaEZaT0qcTNP/ADGBBGfQeOCtQoswCSFW
tCD38hiWBBdWOo1QZivXPEDSlgQreonMLixWmpTVUVUCo8vxxL4A+pqUo5I79h54UGI0RlFB
nUsOlcC0iXKF1IUEZE/6YFpPVl+8ENTLqR9cQNGKMWUeqlEJyH4YThkUkaa+ofb54kHSVlAY
gnpl1+mFiHqAxZ6qoPhhOmZjpKlDUZr0qR5UwYCjLldRJJzybp9K4zTCArRStFJqa9cSIFQu
l81LekEZ/u7Yj4bQVUAkEMcz3IwtYaMUFaVY5AUrQYlRHQRU/ap6dDXzxM0l0hakkrUFq4lE
ivRv4D1IBp+OBrQColKaQxU+mp6164mClIJUgVp1PiB2yxHQsUWpHpSuYGfXCRspIWQ1YtkG
8PM4FgjmpVaMBmx8CPGuKUow33UOdKgDPphZ1IyLUGv8xjRm6jPxxNbEZ9tO9WrXL/PANFG2
sagKEmvWpP0wszoj6mXUvp1HMdDTviJaiXJA6ZBjllgJvccAhkqT+YdB+GJBRlD5AAdv9p7H
EtSOULaQT2Jp4jEKSUPq+4gUB8CP+WJYcZSB+ooRQ5gf64mgl1LEKKIBWh8Pwws0aMqmlTSh
0g9vpiZlw7Fmzf091y7eeB1MnpJNTpzGfl5YEdZkVvGmWlvVWvSmJHYhCaVVT1XPr9MSJFBr
kPbOfWuRxIOoEnT1Y0Pj4YcZ+SVQrBdWntpIzzxVqXBqsakE98qdDq7VwK2A1GpoD6a6QCQa
+eIDFdRFBqAqh7H60xHBe76UBOlh4eP+mBYdDQVRRQnMfXEKAMzOQ1VAGajInPtjQkHpIUgZ
JUH/AFy88Cw2k6MyB1oKeGAGGQWlFBqSKZHCkgdutKnuO/0wHAiRdTgeorTI+PXLATJUkppI
ZjU+eeJCMtUI1DSSQB5jLpiWiURha9WXIV6Vp3+mFCDLUVclqUavU4cRMyN1GgdhjKJWXQ1Q
Vy6npQdsQKIkqdWqnUdMhhMMFQuC3ftXPEhfy1T1UFK/RvxxK0s0XT94apYAZ18vpiFMMhUq
GJFKHpTEiLVVSgrU+pajp0xE+p8xSnnTp5UxUWCUgMFAp+7BFC9TVY5Afl8MKwI+zWtStP8A
jLDqkGVGmjAA0qeuLWiQBloSAoFWenf6YkUNGQknVnkR5dBn0wDSkjIkUrQaj6gT0OLRoiVW
pqdLDNh0B8cRKqECtQykjzJPfDiMopRV9K9/OmdMBOAzlTkFFdI7knxH+WIQMVQhDDWa0Feg
piIQrBQAMlzUHuCevjjr39Mn1+W+s/CVCCPUdBGYbv8AQY5MaRYqRprpU0JOf7cSMrKX0moH
8QHfwwkiSjjI6TmqE/vNMQpEEDKiqvSnicFQSZSuogF1OYP+WBJJGaq6jrPdCMwK+GIaAsTI
ASKjMEeHjhJSvrqUzIpmf8sQGizenV0AGdc6YGsBLG2rM6iKlien7sDJkqVJNSerAHsPr4YS
lULkA2moqM6N+7EYYatFFPfIeXniQWB9L1+2tB4eWFHVnZMqUrXwP0wImAqGoKHKpyoMSLQW
bSMgBQVOVMKOfdqFDAVr17/jg0ImkontMQGr49Ae1cItSqqslFFGA9VTT9mBaGUhSlGJr1Wv
Q4UfIlR4fbT/AFxHTUPuAUAU9RWo/CuBCaTQpWoPWhPjgVoWYFlLZhc1I+0mnXEje5MlE0hi
3251oB4DEhMQrKANWXrB8fLEUaL9xTPPMMMq4QkJf2vStGGRJ6EYVpo3olCKVzyrQeWAw6sw
fIih6kZ0/DAYJANWRNK9AfLvhVISOsvp6dDUZ/hgAjVgKNTV3PUeWJUIWX3SGXTToDn9enTD
iJZHckLSmeZyH7MGH7FGXBq1COpr2/DGaqJzJpWgFSTQjID64tHqLSpOYp2rXuO+eLEdY60q
CcqKB0BPfPCD5g6Qe5BAzzwa1DxpQkk0Y9DiIaZ6CTqAqxAplXDGSKiqFFJDAlySAAB2wwpF
R0YZ1pkxBBy8cVUuI3Ka8wSo69c/P6jAL6cKI6HX1pQgePfLAswRAMgAzWtHJByJFQcS+Qys
VqlfUOjZGg8sLeYQL6jnXT3p27YVh1kDMzEBT064B8nVQQRWpGY74q3zEblg1FyUGkgOROGL
/vGLTsOpGjKgK5U6AeGDBajq6jS2ou2TD64dGGcyawV+1QRTMHPrgUGVWgUZ5Dp3H4+GHGt0
CSss+l4ydIIFCBT8cKlE7FSAtfcYE/8AHnib8U+8K62y1oQx9TEk5jGuR2zusoxZD165dsac
j6j4HrX8MCdHuRm4T21IzFM86fXGuS94+I+c8Y4tF797skm57gAUW8/VlGz/AC6CNFDjX9Jr
F6ta3lvyXwnkarO3G5v1KUDCW7KAqPuU6QaHzx5+Oprf87b6trL5n4LDsybRDxe4S00FFhW7
BUVNT62Iatc8b6FprT5x4qLA7ZLxYvYgaEia5Vn0dPWSMj+OBGvPnTYqRW1jxx/0kNDouJlN
AMqIF19B44PFjli+drgXcxm2s3O1TDTHZGcaEB66tS+rD4pEHGfkjhO17ncbnLx6aOeViIzb
TqQiHqoRtAp+OK1WOb5A+QOI766Xe37NdRboroqzyyq0ftrnmgLd8Z5slSTdfnHddx2WXZ5t
n26MyLpa4CE5UoWVKaVbDaZ/l5ozBpMxStWcr6a/QY1Pg3xr/j/nB41uJuP04uYJF0zQodDB
a56XPSuC9D5ba/8AkP41vTJdzbLuM11ICSktzSIP2zWT/LAMV3JPlna9540+1W+1S2UwCqj+
4rRhV/KMqmoww8y1TfH/AD2DjFzJPcWrXcUo0uisEkp41IocWntV8m5N/Wd/m3SK1MEbsCsZ
YMdK/aGIFMMrEehW3yzxPcdotbHkG23mqALpezdRGxQZGhZSMFq13yfO+wxXEawbRcyWaLpr
JIglFBTJRVTTwJxrE8u5bv8ABvG/XN/AhihuDVUYgkDzHniWrnhPyUvGLea2GzWd+JzV2mAW
U5Upro3p8jgxM9ybkA3vdZb9LOKwWY5W1uAsS5DwxczGp4roh4mvao8BjQeq/HnN+Bcas2/V
W24LfzClxMml42I6BUJXT1xn5Frh37f/AIx3Pfk3OG13Z4nbVdBWjiNRkCmonLxwwNdufyZ8
a3+0x2Dx7osUJQxKkaBv5WagksVxI1x8r/HW7RW6X+3XsRtnV0MZRnVl6Vo4rgq0N18ycSl3
NHfaruaGEA2903tiQMa1IQk0/DEtcMny3s+77debdyOwlvoSSbZowq6R+WtT28euFD4LzX42
49byKf6jBdXFPeDIHj9NSAmjoB54rVarNzs/jbkm8y3kHIrnZxP6pP10AZHPgjVXTT/di1O3
beD8Hsb+1u4+b20ntMriNo0BcKagMQ5pX6Yzo9bflnIfjqMWFxuVzreFxLbTWLe8AR/EF1UB
8xjRqoX5n4vJuTxvb3R2/SVF0qgSah4xk1pT8cSl1lt6u/iB7e4lthuN5uDkumtnhVWbOtSF
FBXpngkS24By/wCOeP2MkbXF7Fc3B13AmQPHXsEKDp9c8NizGZ3e6+O35RFdary+2qRzJexy
KI2qxPpUjSzD9mCReOjkG5/GdpcWl7xS0mivYZBJJG5kSMhR6R62bOvfDF5Gk3fmXxjyaG1f
env7WSD1PBGhILEZjXGGqP2YrVrH3e7cE2zksF3sO1yXm3wNruILxyUlJHZXBI8RqrgiScx5
bxnkEMS7dx9dqu0YGS5QxqWFPtCxgA/U4U1fA+Y/H3H9okiuNwvIbm5FboTRalDkUIjMasNI
7YlWcjf46TlH6v8AqW4PY1EqTRwBGWTVXS+oamXT3C4lK1/K+dfH9/8AobuDcppLvbX962t1
hfQ5GWliyqFr2NcUFKXnXx9uu4WG8XN7f2d5a0YWYicqW89AIah71xVrU8fy9xS4vru3uBc2
9nOoWG9VKluxLR5svliSq3Hn/Ctt4hcbJtF3dboZ1dFaRHUrrObM7qK/QDFIx1We2nmvB4uO
iw3XjMd5fRqypdro1OxzVnY6XT8K4sM9WPAPkPjdnsNzsG7iW1hnZzHdwqZF0yCjKwAqKdss
LdaWP5I+Ptl2WHbtslubk21BDGY2BYk1NWYKB44MxlC3Mfje93+15NJu1xaX1omhrOWBjlQj
1aVPj2OIZN1R73Z7L8gb3LudpvlptAUBEtr6oldEFPcALRjS3bPFS7OJfHttsXIbTcX5Ntd3
HC9WgRljdu2Ta2/ZjM1NLyLceA7ZzK33bc93aw3O2hAEDBpInRuh9Ct/jjS2RVxfK3CN3G7b
ZeTT7fa3YMcN68ZIZCKM2gBinlqGJbrC8n274ostsi/pe6Xd9ubMAZ0Q6NHfUrqg6dKYom24
nyP412ziEuyDkDRpdLJq/UwNHJGZBRhpVSuX1OJX1hdv+M9q3i5n/pPK9vlhikIDXIkimcH8
2hiMvMYNok8aTi3x6/F96tt4ueQbXcQWrNJLFFKEfSBT0ajQkeeL0xlvlLkey73yj9ZtU3vw
CNQZCpQA06Z9aY1RJ603xr8l7BYcdl2HdNwk2q8Zma33EBpkGroKUYqV88sFatWd5z/ie3S7
dJNzG93v25tVzFDEhjK50d1CpQL4KTgjH2iwuflz4+9rc403Qt7uiS3f2pNMlKelKgVP7Maa
1Pu3yd8cbzt9xsz7ybRNwtipuWhekdciGqKA/XBuLWX5Pzbhm0cAi41tm7jdrlNKwTQpmCr6
9Uy5ADsCDhgt1fWPzFxCfjse5Xu4JBvdrbsk23MrETMBkq1U1DkZGuXfByr1kUvx/wDLGxSb
He7RuO5tsO5TTSzWl/KBMumQ1AFQRVOlGw1fhUfKPLNquOLx7facxn3y6d6zQxRRCF9OYMhV
VKUPQAmuLmi++M7c718Q3HCDbPs1xa8lgiCJeR1YPKP/ALVn1faTmQVxfk915lIQuTUDVzAN
Voehpiiw6MqnUTmOnjhUeqbXznZbb4evNoj34228xNIw2+WL1MGcE/ppVAIyNSanuMPM2j+l
8Yn483ew23mO37pc3Um32kT+5LdxKXcAgmulga1PUYrMPPvqx+Xt7td35zc31nuKbpbSQwpH
dwr7akKv2lD+ZehxXnYM9aOLmW2w/DK7RFv7W27wag21yw6ZtJkrpt5lp6dPU1OM8T1n+ssn
jGfGO+WWy81sNwvruTbrKIt7lzEglK1B+5CD6P4iBjfV8dP59eJ/l7fbXc+dXd9a7hDuUEyx
CK8tVKQsqoKdSfVnRu1cYtc719f/AMtXNzjbR8MxbTY7+E3e2cI+2yR6ZgjMSUt5lpVKGpJr
g49dNY74r3zbtk5lYbpuW4nbbSCqNdxJ7hDOCullINUNfV+3Gumof5Y3i33T5A3G9tr+Hcbe
b2v/AJlqpSJkEYVaVJzFKN54r8OXN+Wwn5vZD4Rj2/b+QpDvFmNElhJGY7gB5CSsEiEfyypz
qD50xczau+smxjvibkW3bTzTb9y3HcH220Uv7t6EVxV1IHuhgfST1PUYupW51LAfKe8R7nzz
dtwttwttwineP2Lm2BWJgsagadRY1pk2fXD1Phy/ldt/8thc84sIfg222vb+Qwputnpin25k
MdyitISywslNUZDVqQfrg5lde+axvxLyGz2jnNhfXu4tt1tHqWS/KCYDWCNMit+U/mbqMFh5
s+AfMO5x7l8hbtfxXlveQzGKl1aV9hwsShStScwMmzOeN74xzzltYxJHD1BqVOqmYBPWlR44
y3K+ipPl622b4g2NeObpCnI7Foobrb2UM4Sj6wYm+5RlQjBh79+G3s/lThl5ecb3G43qANJb
yRXoP8v2riREylQ5x6mBArinwyynNdk33dRuX6L5Xs4re4L+3tcs0awFH6QvJ7jaVIy6Yzl1
y/pz1Z5XzPKFjuZIwyStA7RsyEFCUNDQ9x4HGrGv53x6t/bnyvj+wcslm3y+isbe5t5IraeT
0JrZ1IV2PSoU5nGc9dfw3fxt8uxDnG/WfJuS6trk92PbHuHX9NVJCQRIBRax9KnPGr/hmTz1
oYN527kHAtpteP8AN7fjl9Zkrcu0iK5oSCjI7ofMHvgZkUfCL6y4f8hXt3yvmVjvD7xYiC33
VJkPrhcERzaaiM6ftqc8UlMqu2H5Js+WbFyHhfKN8S2uNcr7Lvzye2jjUdEcjLpHp/8A1h9M
PxRzuetad4h5LwXYIuOc6tuN3dlGsV9qaMO5SMKyFHaMihFQcZrVfP8A8s7DuO27/wDqN15J
a8nmv49ablbSK8hEdF0TIpbRpFKeONTcHHU3HnwIjlUqc+pIGVPDG8h17/8AC3NeM7d8c8r2
m/3KO1v5kmubSCRzHrDQ6f5JJpq1/lGdccp8ju+efKq/t857s+x71ucPI7prY71bJBFuMrNJ
EksZb0zOSW9WrJjl2xqjm+ZW/wCJ7Rxr402jkl3ecr2/cYN2gb127oJUko+gCNXdpAxk7YNt
rPM+rFfOvL9g3jhPCE2jcIr0wROLq2jerRssUSfzY65aSCMx9Mak1qzbHj+7ci3rc47ZNzv5
r6O1T2rT9QxcRRDoqVzAHhjer6e64oWUOCyjS3qqQRqp54xbrT6ivhwv5R2fiV7Bv9tt0fHS
F3GxuGjjutWiMaY2chRUxVVqEEYzdkPU91p93+S+E23NUSXebc2m8ba1pFeRuskMMySMaTlT
/L1asicO+MflluLbLxv424/yae95Nt19b7tbsizW0i61lKOFBQM7NrL5FcZ91W+NFs3yJwVt
y4ZM28WyRybVNbo7OFEcwWAGOYkj2j6TTV4YTKfjnOvj/wD9TsILjd7W3u7a9miil1ittdtL
K0TsAf8AxuD932muZwp3vyi1tuPbtYb7yPabnc7uzuf0f6JooI3URMMqu51VPQn6VwDLnrzr
+2fath2uB+ST8gs1kuLc2s+2Sv7Utu4cNmZGAYFQKFRTBfaeJ9ecdnAY9o4Lv/NNo3Le7Bpt
2tnvtpuop1CyxkynSTWiSI0gGmufUYc91njnOcZr4X5LsEPD+eWF9uKRbnf2M0kSXDBJZAIJ
UbS7kBzqYZVrisumXYoPiVviOXjV9DyjcbjYt/kX/wCLukU0yf8Ax2UBQEiqnpaupXGYwDJe
WJ4dsVnyLkY2qbdrfaIrkyLDuN0paJyD6EIqun3PE4111cxvmePdv7h+J2298Z2zdNu3ixml
45aMtzaCeMSSIFXU8VWzK6K6ep7Z5Yuf059cf7Tp8wFg1HU1pRjXspzyxrGo9z+AOS8cTj3J
uI7hfR7Xd8hSljdTj+QWaIwsjNUUfOudMYvla3W4ttq43wP4q5Fx7+vQbjuskRvWhWaIB1Uq
P/joGJ6L0OdfLBaxJkbkcx/r62W78d3vZ49okiUzw3qVvUkViXRayRqjAZUYdfLFPWpXle1/
Km32XzFuse8brb7zsm8Rrt13LBG8UESAFUMsbag2nNHoSM6g411LGebK5/nXkmz7Fw+y4Js9
0J6opMtpIhSXblZtMEpQmkivpI8hXvikGZ5Go+FeMbVt3x1u0Q5LYXX/ALLa0il1CNrdngaM
xyI7VqjNnjPuul95xn+Aw7VBwzk3xJum9Wdhvk8tLS/9xZLOaNkjo0MgIBYe3mhIYVw/FZnx
ii3DgO1fGp2repOTQ315FuMMt5tlo49u6t0cN/LjUlvdipqPuek/lNcHPP7ZuyvcZeUT7pPH
vWxb9sf/AK/NGjo0qVvRpB1+p5Y1BU/lYCmHGtvzHzhPyDjG5/Ndxf8AL72LcNkkn0zbjt3v
QRhVSkL6f/IApA1lT2yxvvnyNc2WMz8tDjJ5rdyce3mXfdquESWK7mkeZ4zpp7QkkNXVex/D
GY58S778McKoalw4Oanp1wutr3T475RxzkHxlc/F247gmx7ldzF9r3KUe5byMziRY3Pp0NUa
eufY9sY3Ge5sanfeb8Z4DsGxcTt74bzuu0blb300cRWgSMl5QzD0Zh/5ek18QCMXMwS/hZbe
OGpzXcPmWHkls+0yQazYMBHPEwhEMiSoSX1aVoukfdTqM8XyrzZdUNrz3h/yNsG9cPN9/wCv
XO47i247PeXagxMHkEgjl0kKj9ctX78dLMM9dO/c+4rwHaeM8Ttbht5vtj3GK/vVjaMlYVV2
erqdBYmX0U7ZNQ4Jy1Nq121OG2HMN2+YY+TWs+13MXvJYHSk0RMQiljkUsX1mlEUL93XLPBu
+M5leTca5B8f7p8ubnvfJDPBsG53M1zZzl3hktZHoY5JWhbUCMxkcjh7tuDnnNR2/K+PcS+Y
/wCr7Xfy8k2K0n91dxd9U7xyLR01tnJ7RagY/di65a5/pHru0f8ApOy8t3f5Z/8AZ7W52q/B
kitU0pLGZlVHilWrOXqoCUUZ9cs8Hz4vtjP2vLeIfJnFbrhD7mOPbh/UZL7bJ7tQYpomlZwh
JICuFkIoT9K4bcSl+auRcZg4fx/g223j31/sVx7s0yAFEUROrLK6+jWXky0VBGeLnnIb7VTz
eX4wm+KNsTje7zwb3G0J3HYWnmdZXoBMzRyH219tv5iMvUfXBxMrPdz4d/8AbjxywuuS22/T
b3bWdxtMjM+2TjTJIjoy643JA0+rGettdPwi+YeLts3ycN6G9QJY8iuzNb7jbSAm0IYavdVG
1+gEMGXr9ca6uxjXsFjyjcuDbRc75zHmlvyOwWJEtoLRYVd9R9OhVJd5W/LnQitaZYxNTwf4
y4XxPnW8brbb3u77PMxkm2uRTEusmQsBL7lQSqtXRUVzzxvrra1feR/EXKbD48+SJhvUy3Nn
bvNt895bepBVyonQCupDSpp2xd8ZR/PrZ69O2L/1D4u2ffd93Df4N3g5D7i2SWIDs0cpeRER
Ks3uDX6tVFp5imH5o3I+ZGMhIZiKuxFMq0qaE0wVqPbv7Xt+2ax33ebXcb1LCa5s9FtNK+lW
KtVqM1AGVc6VwWNf/Fttj2nbPj7g3Kpdy5JY7tHvsTGC9hertKyOqo41SMzSe5Wo864fmuca
3jfIdquto2u+47uO1DjscMa3W23IH6i2dQPcWGrqFp10sPpgaeYfLVnsfMvmHb7GLeba2trr
bYDbbjrDwag8jKjFCKajkM8bvnLMs1ofnfjbSbdx3kNpuNrcRcWjVL+0WVBLLEGj1PDVsyNH
29Ti/n8Y3OpLqy3rZuMct5dtPyPbcngXZbaGIvbIyRzoI2L6mLNUUP3LprTpjF+MYjEc12va
uXfP1zBDvVtaRG3tZba+bTJC5WFGMQcHTqcDKpx06v8ApjX87m1qPnGwii3/AI/zeK9trux2
YQwX1nFIhnAWfX7qrq9a50K9e+Dn2Yp1lb5+SPvE0W6bDvOx3G0SBHCzr/8AMCj7lVjIqq/8
IdRQ9cYoYvZt72fkm3/IOzz8os7O+v7sLFucDGJDDpSKN0DMrH7fbko3WudMasxq/DP/ACHD
tXEvhscM/q9ve7jJdJNatbAUmT3A0n8tWcJ7dfzHPti4uXazWg4VwnZR8S7nx+DlFlL/AF5F
ljuqqhi1KtY5IzJWo00PTGJcutd1W/DF9t23bRu/GIb+w2zl233be9PclZre4hBA+9WQSKBm
KNUV+uNdfOq9bHR/cFyLbbn40t7GO/srndIr+3/VW1s6t6grHWqRknTmD/jnh4Zrzjf5vjOf
4kto7DcpbDlUTp+s2j3JmS7fX6i0bakA0nUrjpShxj+eWnrzHlXpD5Nl1XsSO5OGxVqvjKXi
sfLbY8orHtD6lkuldo2gkK/yZlZKMPbehxm0yvpZIrKz47d2XMuX2HKOK3UL+890IkuEhKkr
Iro7GQjyGquYwssbttjxj5C+Nti2Bd9h2u64/Oz3EVzp1yR+tUKF2QepGybseoxv7ZWup+Wz
3Dm/Dtk5Pxem6Qnbls7ja3mWVJWgkX2wnvaK0zjpq6V8sZkyDNebb38Qcf2Ox3bkD82tveKT
T2U0DIjLMxqiyaHZ5VkB0ELnnXph3azPKvLjY+K/K3CuMLa8jh2ebj8LQXdlcLG8glKIhDBn
Sg9FQwB64tkb/p7UfAN24VwTnd/scPIBPb3dmkEN1cP7yWd0CD7MkgpGUYnUpXLsaHB9cE+M
ab5D5HebfwzeY5uRbRcPdWjQwLbxqHdnyeLSkkjeuMnQ3QN1w8/Os2g4Fe7Je8SskTf9v3fZ
EiWJtr31Y/1UDhaPCJiw9Or7Kocu5wXrTZjJ8bu+H7f8lbrDxvlcmz+4scdkbr/5FiWUkXFl
J7pWug0MbB/oTh6vinrr+cLHhX/p1xdTXW3QchTR+mm2pwq3jl6yQTwIWyYeqrVoe/jr+dus
185SGNqEkAg964xTIAINVc6g1p0/ZjOLC9xlkqB6OrH/AFwqHkaMrkAM66h0A8BgwmWRGQqh
00P2jpXCLQEaRWgBPUqKjECUl29DVamVaEVOKlMWKRrmAVrQ5VAPngKIqHAIP0NaCuACAcV1
01EZHpmPHCpMHSlQVArkanP6jC0jDKgLglqivpGVOmICEn80GukEDPrjK0xlYswLAgmgIFK/
swo4MY0grnTNqYkABcxkDqyHn5eOLRhyEoWCFgTRj1H1+mIjkGmOi0amdOmIo1Bcam9PYaut
B4YtByGDKFprP5Tn0xarBRtqPpNX/M2dCRiUhpdLHS6gq/3/AO016imJFrAJVcgDk3n2xYqQ
or0YgVzoRkDiR0lA9WmjHIEYCQlzLHwNOwHnTCNEQwLEMHD5AZjrgxQ9GClWOo5EUzNR2xIM
OasxaoOdDhaOVBzoWBABIJyPniASsor7a1oep64lSoXjIemfTOn1OLUdcnoaFV+0Hr+GJFKh
rReo6semWeAFHIXjLUFT0qaHLxxNbSRpip1sVHYkV/Ziw3ohrWj01CtMq1wDBSKjv4NXpniU
MSVcHUD5Ef5YTaeQKAwAyPQDt5/TEqBAVXVqoa0oemIaJGLhu1DShFMShBgCDWirlpp0xI9f
uNAU6ZYsCMtGhzqsgFR3p4DDi1ImrTUMCzjIdTg04UcZzJozjItjLXwT6mJA8gCeoHlhZMza
TVhqp0AxLBKyupOSmhIr0xGX9goW0Z/XOlPphitCAgJqRp6B69j5YBEuvIE/aB6XAqaDELoS
rtXI+NK5fsxLC1LpGo5j8w8+uI7CBi1FgK1yqT379MKwUdfygEdKD/HAgsWkBVSY9JzHfLrn
iJiW0LVchkpr2P8Ajh+V7Tuuk5LQfXKvjia+o9TFVqBVcy3+WA/lR8jU0YDNMiPGvfpjfLHf
yzwJzJywsjqvg3SvXviTpQR/qVBWp7+AwxqV7j8HbXx7e92FrvG2ncLd0McYEjRMHXOpC9Qc
a758Y68aj5X4pxnYtwSParY2tvKB7kZkZ6VGWbk0rTHn/nMrpOsnjZbVw3av/QzcCy2rcpRE
ZFuaSidTTwIpVca6nrCx2n464zuXC45JLawspZYzqvZFoRnQsaFTnTucXX+FGNuvgi9EsTWG
9WU0Ew9EgR1U16Co1YzJWpYqoPhTk0u6SWTXUESwqD+qZXMTGpBA1U6YcU7d3DuAbAOTz7Fy
K3O5SIpYT2twY1Qr1NFpXGpPGVf8mcN2LZN6hstnjkhtXoum4lLULdDqPQV8e2Mc8etR17t8
F75Y7Md3XfNtljij1yQqT6h1osn2k418Mz15tJC6SEEqxXowORGGVrF/xLiG58l3IWthLHCe
ss81SkYHchcz+GGwY2jfAu60kMHJNuuJ4664NMgOXbua/hglrHqS0/t/5Q8RubrcrCztwoYm
R3YoT2ag0/vxNTrAv/b/AMtF2ttFfWU0bp7i3Q1hCK/aMqhjg+Bamk/t85HHHJ7W72E9zEup
rWMyaz3pniyqOay+EuQTWgnv90sdpJ9Ps3TNrB6Z09P0FcM02pD8DcsN8LW3vrKZCmpbnW+j
9gGquHaqlu/7fuUR28kke67fNdRisluhkGX4itfDFtCm4x8Scj5E11FbyW1s1k3tTLcsyyav
JEVjp88VqZ3kfH73Ytyl228eJ7i1Yq7W7akqOwBo2KU6rUKmqrUsaHVlUY0HpHxZw3i3JP1k
G7C8aeEa43glCx6MgQyEH1Ke+K8+KxScj4vBt3JX2m0kP6cPpElwwAAqKaio/bQYIo33KeAb
FZ7BZTx7PAk7mNJby3uWeNwaaj7ZoW1YksuRfDO1XdhBNstrb7bclRqeeVglGGWQDZ4zYMYe
T4a5yLloIv0jiNS5cTHSwp+QkZ4h9nLtnxfy/corgwRwr+lZkm9yTQxZcyVByZfPE0vPj349
4vyCC+j3Z72PcbV6E28qrEFzA0ihNcs64bBZrG8j2uLaN9udtglaWOEkCRwF1DtSn78TXNcN
tBcXVxHaQUZ5nVI1rpDMxouf1xSDqN4nwfzl/wCZcGzt+gNZszXtUKcZ9Z9cUfxFzk372DQQ
RNGA/vyS/wAllPg4Bz8qYdakSbn8Q80sbV7sG3vLaEEzJay63Wn+1gpP4YfWLbqz+OOAWG82
F5c7nZG9CnRGkNyscsZHUGM+PYk4a2we+2kVrus8VvHJEgchIZG1sq9hUZYoLHJbwS3Eywop
aZzRR3J8MKxt7T4e5xNbxSFLWBpBUQyTBZAT0BGk5+WMqqi4+PuWxb3Fsk1ske4T0MNXX227
5PmMMGaLkPxzyrjcK3O5W6CCR9CSxSiQE9lNQDn9MGqRrOIfDNru+z/1C73cQ3Eg1xwWrLKs
ZOdJSfzeIHTEXHxTgOybnLfy7vu36WG0cxxwxPGk0gTrJ/Mr6fCgxfU26tL34n2PcNma/wCN
blcyVb0w3q01mtDRiqMo8MsEgQT/ABnxDatujk5Ju19BfuAxS0QvGpHUKBG5YeZphVqt4r8d
WO+XF1cpezrx23NEuyircS9fymqoB54sa+Il5J8YWkOzNvXGNxfdbAE/qI5KCRAO6kBdVO4p
ikYqh274v5te7Sm72tmJLOQGWPTNHrZBUfZ1/fXGrWdqHjnBOV78052y2AihYI80ziFNRzot
QdXnTFa6N9P8OWNtxB7i9huByFFJPsS+5GXJyUKBQrjOMbWXj+GvkB42nS0tySKiP9Sms/sy
r5E4mmN3Tb77brx7TcYHiuoXoYpEIIPlWv7cQ0tp2vct13GGy262Se9lNIYyyxg/VmoMOHWt
Pw58lvbmRrCKqgj2muUMmXhnT6erBomq3aPjXnW6TTw2ViI2tn9u4a6kEFH7oK1LEeWHWkXJ
eAcz2CNZd3stFq/oW6hcSoD2BK/aT2qMUG/tpeP/ABvtl3wOTetwgvo7r25Jre4t5EmheNBk
HiUllqRnUYl18PMZivuMZAGNe+dCOgxSCOvbNrvd23COysYffvZfTDACoJr1oTkMNaadPhr5
K9k3P9KUKK1i9+L3SP8AsJGeM2sXYrdj+Oueb1cXkVjtrCSzb2phcusARz+T1dWHcDDfgWWx
JffF/PLC7trG52krcXLCO1aN0kidj29wGi//AFYJWuefPXUfhf5REixHaEIOptQuYCAB0FdX
U+GG1n39OCx+LPkW9ubi0h2R0uLVgLgXEscYq2YKFiA4p3BwfJvsxDu3xrz7br63srvaZPdu
nCWpRkdHlIqFEikrWnjTDBz+qrU4hyqTel2Bdulj3dqiO2lojHKpzY0pQZGtMLXysNo+LPkP
c7m5is9pZZbFhHOk0kcGlj2DORU0z9NcV6WYruT8J5hxl0/rm3PbRyNSOYFZIi3XKRCy1p+O
MaJ8lffHnObXZo97l2mZdpmQSC5BRvQ3RnQEsqnxIxqU93GVnMsakEZmg6Zj9uNyC1c8N4pf
cq3+32e3nitbi5qyyysASq5nQK+ptPQYrRPXp7/AewT3kuy7XzFZOQQR1O23VuqtUDUTJpYs
Mj1WuOd05WB2bgF/uHOv/Tbyf9DdpKYri4UCUIy/wioDdsO/s8XVk/w7uo+Rxwxdwt/1Ecaz
NckFUaJhUEKczJp/Lh0cctPJ/b9sN3NcbPtPL0u9/t0ZzttzAqMun8sgViyip60wVWW15/xf
47v9652vEbuYbVdxtJHcyafeeJ4syFzCt07417jU534Wsfw5urfIUvC/1cLXEMXvSXNRpMdA
xMcZoS+lq6MZ605Glm/t62i7mn2zj/MILzfbFG9zapotBUKcw9HZ0Oo0zGCysdTWC4h8f3+9
80/9WuZxt93rljndk9xUeEHUoVSK5ila41a1zy7rb4e3q4+QrjhL3cH6mzX3ZZoidJgoG1qD
Qs2YOjAObrWS/wBvO13KzbbsHL7fcd7tFZ32yWIQyUGTB6OzqdX8QwexmSvPeI8Avd+5evFn
mFheh5I5J3j9wRtECXBUEfw0643elPZ4s7D4e365+Qrrhsk8IvLRPdubhM1ENA2uIEAsxVhR
evjhvS/lxmtVd/26291+qtNg5VZ7putshJ2yWMQTUGTK1HdozXL1DGftYbK8+4V8f3/IeWpx
ed/6bdM0kc0ky6zG8IJZClcz6adcXTXHMzVVzTi+6ca5DdbJuTJJdWThGaIlo2VlDowqAc1Y
YYN1UqWrQJlT1Hp07YmpHoW7/D/IbHhG18ptJl3Gy3MRf/EUN7sUkjaY4xT7g7ZasqYzq7v1
8am2/tp5Cd22+zvt1iggvrUz+8IyZY5IwrNbsmrOmv7tWJjKh3L4J2O2Fxbx/IW0C5SoW1uP
ah/mL0R2ErFfrpJwXTa8auYEjlljGlnjdkdkIZCymmoMMmBpkcUmLn4ar43+Nr7nW5XO22lx
DaywWr3CmcMUkIYLo9PStcycOtO/g/xLu/Kdz3vZ7W6gtr/ZI2JtnOuOWRX0e3qH2g9mP7MP
Vy4OepZsbPb/AO3y0k2Ow3bd+W2u0ruCKY4Z4kARu8aySSLqp9MCcu2/AK7nyKfabHlu33tu
lubgXdsqysoVwmmWJX9JzqDqph2s+g3L+32dduu7jjnJbLkd3Zky3u3WwRJRGMvTR5PVVejU
r2xmysy2f5dSf2+Wp2Xbd23XmNltcd/CkkIuIglCyh9Id5UqQDni+V1t/LzP5E4HDxHcY4rT
erHere7BeK5spFYrU/bLGpbSx7erHWfDGT7MnHUtmAFzqaefc4naNjs/xZ8ibrtiX+27BdXV
hISba4QINa9yFZlYfsxxtHXkU0HHOVP+tMW1XTNtvq3NEjbXbLXTWVaVXpnjU6jPN2aay2Xe
r61vNws7J5tvsVWS6uY49SRCQ6VaQqKrXDLGpfz+G72j+3/5C3XidxvVvarHcxgNb7dONE11
EyhxJDJXR0+0Hrh+3q6tk8jzYbduQv5Nukt2F+kgg/RupSX3idIjKn82rKmNUcdyujkHF+Tc
cuVtd/22fbp3X3FjnWmpSaVVhVTn4HBKPy4orvSCzgNqIWlK0ANe+C+ut6bP42+PN05tfXot
Z47GxskWTc7pwXaNXroOhaFh6Tq8BjOfhmftqeVfAW67Zss+7cf3ex5JFYfzry3sgguIoqE6
wA0gcClaZH64p/ly6tns9jzXctr3iwuI7C+s7iGe5RJYoJIyrSRy/wDjKVyZW1ZEYnX7T4dr
8N5bJuq8fXaLtt4UCu3GMxyqAK0rkOnTOhxSii33hHNOPx/qN72W5sLedhHHPdRAKxGenUCy
9sq4ftF9vcde2/GvyHutlDf7fxy7urS5UyR3UaBkYZrlUgnp4Yzarap9q41v+7bk+zbXt8tz
uK6vct0jJcOmbK6nTpP1xudQ8/134T77wPmOyQQvv+xXG3xTSFIHuIxoYjPIjUoNOmf0w/Zy
+K7Nq+Nvkbc9ti3Cx47fTbfcR64Jo41ZWQEioqQe3SmOdvp95qo2njvId13E7VtO3zXu4qGM
lrEhMgCVLBgwXScswc8a6sx1nUs8dPIuF8s49bRyb7ss22pMaW8lxEERmAqQhzGoeHXBK5fe
/lmBKVYBxRCSPM5ZY20njuJYUUAZkEVOdQe2LG5ShuHjp0CMQTmNVfHFZGbDyTo0jGZFbWAB
UVFB0OeeCRXmI/1JjGqMmpGn/risWZ8H94aSV9BYVNO/ni+EYXCBQpAY1qe2WL5Rxdyxqy6g
I2bV7NAF/ZSmG+mUxmUsSie2KeryJxcizTrMGr7gDFc0r+zp0wtczAe8EiUKehOkD/DGWaja
RpfUzersT0y7YWPQqUKEipFTRq5D/pgdIkS4Ksq/cB9wPQk4LCONyulF9KA9+xrXI4F8Dlul
JyH80+kt4LikWhjuXFdDeRLLX0+FfPGtWGL5vQFVWjp0GYyNAMGmXBLOmj3pApdhUmmeXTz6
YmJlRatSVodB+4eK9RXDGqZ2ZiTGQnanjTy+mNbovJGenqRAJTXUxGeeQp/yxmwTiRMkrjvm
aavA+VMDYVnkUNGSB6iUA6CuE6d5XkIBIoBQL41wCihm9vUKVIFBWpAPiMQlNJPqABYZ0LaR
nTtn4YsZvo2kFFKhY6Ak1AqWIp17eWJvIGOTTCVU0V8mDDI/X/LBVfZhK7R0Ioqp0B7Vw3oU
f6rUNVcpPvK5Vp3OKQzAN7boZOlSQrE+H+WBfJQ3EiItCD4k554VozcBae2ACBQ0A6E59MWM
57qRJvRpKrqYUAOVRg1u0yXBBcAUBPryoPEA4KIOSYSsrMatSmmgqCf8PrigvOl7skWSkMwp
UsBlTtXvhxZAJdFRSNR7YqpIp0bM+WKQiN3IzgpmpOkqMhlTM40IeWXVIwZPVUqGyJAPSpwa
cP8AqZEGYFUWgr1H/HhiOktwwb0UDmhNBSoH0wWCUbXJBC5VIOtSBnXrXBhGZ2FVESgUqqAA
jPuB0GDAjjkKgKh0gV0itAB5YUYXMzOaU00ocs+mYr4YpIvkLDTJnUBlr41PhiNSRMq6WBoK
VpnWvngsZov1QDkBRkdSmg/zxSM/CSOYk5+krnQiow2N30mdtYERERyJIAGXbBiH+rIaoRfc
GRfSv7RUdcFkVA87en2wCK1YMAfqPpikEhjdBQY1oqD8oHiegAxq+khOVqqgRtX00A+0/TEL
CFw2vV6QFAWhHqI/5YMZ7qRLtgMs2qc6VBr9cGCdVG0h1qCATU6iMtJI8T443K6QKavuqMzU
n6Yqju401LVIy0+OMhGxXUUBAA/J/hlhRag0YyFa09OQxLDgkVCoCtSFbuP9MSHVAxrQley9
DlniRGNh6gwCqaoFPj40wExRQCSSW6kYlgfQAShBBPfoP+YxYrDsDQMrEsOpOQI8DiBFS2oi
okY519Q6YTTuZCo7SDrTocB8IIwHqAoaihH+eDBpDUWC6QFb8w8u2EaMHU9BTSCAD0pl0wk6
kqSaA0NFc9vMHAQ+9VSunSOxHhgRgdY1E0XrTyGECFxGwBUirCgqOmJaYMFYICcumVST3xKJ
H0hiV6gUBI6VxFGG1gNkKGjDFiDJoLVAop7Ht9cQHG4AIIpTqBn07YkUbK1aCgZsifDywILe
0QV0+omhah6DpiUGqmNFJzDVPqNevhiWI9R1KANTGpHYnDpSIsgQkswDH7f8sWnEYaRa6CW1
fcx6j6dsLPwMaiRRqdvxwIJVCfaGfc6uuX8OJCAcMFWhDVOYr+8Yib22DaQcqfYTTLFaZDgI
wIPpYZt08cSODGvcluwp6cR09REoqM2FK174ECkozPp/3d/Kvni0Qi7lRlqA+6n+mJCJBb1n
SSKBQMI0l9C1cUU/bp8PxwEyMMgF0EZ0BNTiXomUslHB1g1Ugda5+rEcIB69SEUUPhU4tFC6
tIy6vUFB0noK+eIYeKVagxUatRQH/DzwVvUpXTmxIJyr5+eMqgKuakH00/AnvilwfUxiIofy
kU09csMqP7ZC9aoBWh8ulBiIaF86Bc+56/6YYLCIZG0gZUrXviB0ZlGlG1GtSp6YsFoo2Yip
qM6ZnMYjAuD6QrBhTMHscS8ACaqEFR5jLUPHE0l0sWDZFgaEjJc8WixEXLUSpIBz7Eg4gkEe
nM0HcdMvAeWF0gH9wIACAScgT3P4YlfDrpqBnUDJegA8cWMzpR74X00rnT0sAenhjcVULED7
hWudBhrBvxHjgGO+TS0qm39S9x3/AGnFI1H0J8CQ7FY3aX+5b5b2M0a0e1lR9LHpTUPSMa6v
jLcfLUXF93MM8HKLRmWiNCEMhAY1LHScwKY488+mXGn2O64da8NTbDymzdzC38/NCus1yQnV
kfHDZ6pTWu6/H8vHDs0vJYI5kXQ8gU6K1rlUdD3zw5qqLcuRfH6bRabPccggmSPSRLAWAPt9
iy1pXBiTyfKXF7tn2S4v4YrH2yke5IW1aqUyoCOnfD9VuMn8f2PGbDlt3e/+z2xgz0rc1RnX
PMMaLl3FcOeG1H8v22wXVxDuMO+Wt3ECFa1t21SiMn1aSCUOXY4zzMU6VV9F8F/0EmybchuS
xgQoztRpKZllJKUJw3liSa80cqzNRTGvZmqDQHI0GDl116L8N8o2rYd5d7+4/SRyr7a3B9SL
qIPr8jT8MdPmK/D13dOS3Ot7225dtFvYMNSBI0lmIAzqSf8ALGcc2d5tzDj9xw+VE3aO6u5C
iuQNLMTXrGO/cdsArOfFHM1S/mh3ndv09oqUg/USFo6jwJ+36DCJVTzLkzpzK6n2rcWWKSQf
/KtnYAqtD1FKmmWGRuPR2vti5fxq3htt4s7W4TQbiK9f23BXqdJPU9a4rQv/AP2viVndwbfL
vFp7wjHqWVTGaDICT7a0zwSNXrXinMeWz2/KLy52XcnMUh0rPbSuqsFFMxXDGYm4KvAb5Zpe
U7puFvfu5MDwSSIpBz9ToCdVcPcDNcoj2BN0nj2S6lubNGolxOKSN41r1I8e+MtKiBQxNCVN
c37nzxuHmPbPhLbILOO53CXdbAJcgRx23u0lXSa1YNTB0Kq+c8S93l6TRbttqx3cg0O01RGR
1MumtCT3xmRSPROQbbbz8VitY9ysfctxG0jmZQpEdNWlu2EFvdjtm/WFmlpv1mJIXRinvLpJ
FDQUPWnTLFqPum67HNu9l7e920E9iCzIsyhJMqadddP1GAeOPeOVbLyfab6wtr6Ha7yENodp
FHuUyqtO2HCr/iLaP0Md7cS39o4meiqsoMlVyJdT9vliqrKc1+PuTX3JbiTaootyjmJlUW8y
FkBOYcEimeMqOTZvjHnttudtNcbTIkayL7jh42CoM60VjmD4YZVr2DlHH7rcjYSLfxwi2kSS
aCVigYCmqn/PCrE55Hx07r/TxuFst2Vp7TSDTUZBdXSp8K4sWvMN++PN22/9fulxvdpYR6mk
9pZXZpFJJAoKZ5+GKaz40vw1st1a7bdXUlxbSx3pV4jHIGddII0uvVT3ocVaYblHDr//ANz/
AEV1d2tpBuEpaG5MylACanWBQr5VpgZ10bjwOPh1zt+43W62d7ZmdPcC+l1U/mCanLD6YZrU
bjm2xXPLbWwuNi3C19pTqe5aYoQGFAaAdRiFeb3uwDb+S21hvPJQ9qAPdvYWkleHtQMdVM/2
Ymd9dfP7HYRZ2psOW3G+0YqtnNKJfboPuqKU+jDBIZ8t58P7Dd2exXM00lu63hDxLHKHIWnS
SnRsa1quPjfxrbjfr683YW89/FJrsrUyKy6T6vcy9X7cDM5de6t8kR7hBc30trDxq2lVp47e
VDKIky9ZKqW+gwxa0824cgvb6wudlvbS42QjVetVC2k5inhirSvO/cUg3S/2O1vre2vbxDKK
FfaMrChBYehWPhiwdTYpru5seIcEm2jdbmJr2ZZBElsdQLSNqCqPuA8a4omW45t2zNxUzR85
n2i7YO1xt4kCRKc/T7Z9R1D8y4FY0HxdeWd9wy/2a3u45NxDzaIHcof5g9DDV1Fe4wYdaza7
a42HidvBvV9H7lrQ3EhfJQWyFTmaVwpw32xbtdc42zfLMxSbNDEPelSYUatT9laN1wp558sb
LvG/8rkudjs7jc4IoEhmmtUMih1r6dQyHmMDOOT434pynbOXbXdbjtV1ZwLIRNLJE2hSUIFT
TL64tMer7nsnIpOdWG6R3Hu7LbRkTQLNpKNQgkxn7gfri1Jbq6s92tN82va7uC63PQQIEkCs
GZaD1dvqMSeOcg4f8h7FsK/1bdo1tJ2Cy7cZ2aueRUMSrGv7Malo/L0r494zvVh8d3m3TwKL
q7Ez2+mVHDLKlF9SnSPPGZ5T1PHhd7wPmIuJ7aPZLuWaCTTJojLqCR01JqBw2iXxo/jXjvIt
p5rtlzuu03dhGH0GeeB1iWv+45LXxwVrnpoPmPl3INm5rZNt24PELeJZEiB1JrI9RZehBBpQ
41njl7rYfHnLH5PxbcJphbbhvokdptvVhDUAAIfIMO+B1dcO5b7bvt1lfbNbbTaSTjSWuw7o
QewpQ1+uBleXCsYN0DsraGR0qw9K0BBzOX1wrD8kt7i/2DcLGxYSblPas1oiSKj6iKK6tXLP
8wxFkLq83fjPxZbSbtILbe7B0ZPfcSMz+50VmJDtoJ6YZNZtXEVjx3c7iz5+gzayq9+CQ8QU
Gre36hlmCMG+Y25uI8ntuS8b3a7iij3a9W5lK2KOIZHjFFiOZ9GpR1wUfhkflbf99t+AvY3v
HotttZnVV928WWWOhrVFp6iemRxrnnWL0o12vf4/ic3W384SeyFuP1Wzy+3RFPWBZCWkBzzB
64zl/DXdeGzSKXK1LSfdUigONTVb4eGdo2BBMbA1V1yYHxBGY/DFWPh7vwrcbnY/iq55ZZ7b
7+9I8ld2eQTs5Vgv/wAgMfcUAZADywSNXrxg/jbft23L5S27eLg/r91u7gySB3WEMSKEVppB
A6DF0eHX887hdt8j3NyYptvnW3gCqzaZFeMH+YrRsRpI6EHFonltbHje63ewfFQ5XZbazbzM
9JN3aRbj3Kvp1T1IdNI/4zxfk9Xxhvifet1u/la23Ur+s3C8mmkuVd0i913U6mDdF8ad8avX
jUmF83btdf8A3obhdQxz7fcwpBRmPtzI6RijqyN0PYg4L7GG32Hebni/xHFy+w2xpN8vTWfd
XkWf33ZyvuTNX3FI7A5HvjPM024xPw3ve6XfytbbpPG1/uN88r3La1hDu6klhX01FT6e+Hr2
O3NyVzfNl/fr8q7hfWyzbdKvsgFmMc6uiABgY2+0jMZ5jGt8ebnm7W+2TfZ+K/EkXMbDbnl3
u8YmbdnZbhZ3aQoZJmLCSOnYdPHGefWuvIw/w1vN/dfLNnuDp+tvbt55JUDpHVpFOtxWg71p
g7+WubMcvzVv24R/Km63tl+o2+dGhEcgJhnjeKECtVPfsQemOl5+GZ1+G+2XkEnE/ie35pYb
W8u/XrVud0nImWdnchnmcN7iU7A5GnnjGHq5PGH+Gt33C7+WbXc2T9dfXks80wRki1NKGZ2G
r0jM9PwwWt8eRwfPO5LdfJ+7yCGW3C+ypjmXRJVYlHSpy8PEY0xzfl5/G/8AMIcE6s8+lPwx
Y6R9dbdzSx4l8Kcd3XcbUX9lJHFbzQoVH/kLFSK1Bppxkd/LbC7W83fjl3GCsNxaTyorGpAk
jjYAnvQYp8B458oS8qv7feLJvjC3urFWkK7wsdJQorS4Qxj3Cy/dQYzbjj1b+nzhIZlOiQEu
FoCRQ188ajtHtf8Aag4PNNwR6e4LBirZZr7qZAHPBfkyePWvjnntrv3M+SbPHsdpts+26y13
BpEs9JAhMgCg/vw2MRJfXu7Q/Hmxy7XxmHlErHS9lKEIQer+YNYYdRTEtZT4ctdwg+T9/kuu
PDjMlxYB/wClhW9vV7i1eNjkwY9QuQwflT4W1lzNN847ySfiG2WlhzHZnljubVIxWWFWI1DS
EEmqh9LVo2NZ6pPFhvl9vVrwHjD7VxSHlpe2j961lVHEQ9lTrGsMMyadMEL5Z+RbO9g5Jci4
44eLO5WZdoozRqKULxMfuDNnlkOmNCcxnI06NJ1lYagMya9sFaj6v5w/N5uM8KueAtdTRSRR
jc59tZGUxpHGF9yhoaHUP24z+Gbf9m8cba3Nbhomi/U7ls1KLRZJ/blYV8WKBqeWNYr+Xm/w
nwvkfH9q5hPvFk1hFdxOtvazafdZUEnrZBX0sDke+L8jZ9cQ/A2+b3e/FfIbO2vJbndLD3P6
ZbCTVNGDFqjEYP2qz1pivyObvHj57udx3nceTrcbzcyRb1JcIlxf3VY5YpQwTU4pqHt08MqY
evhjjr34eof3Dw/JNptmww8j3Cy3jaW1/otws4vakeUqKmUEtXUtKFcjjPOj+l/3mvCZHUKF
Bo1aqAKnpnjbtXsfwAOXDcd0ueJ3do15bWyPcbHd/buERYgqD1Vk/i86dDjFvrT0SPh0e47N
vG67ZsV98d8mjt5pLiFZCLO+hALTRhQaaWHgo09cV9cufzcxJ8pcY3XlNl8c3Gx2f66yiiHv
3kZHswqywaWlkHRRpP44OpsPX/tHpu4CCbmO4WrgNcybOhtoSdLuUmc1Q5H0uRmOmFr8vALq
9/uIO07qu+Wk11tCwyJe2e5RRSLJCWp7sEZCuxiFHJU1XrQ4rGba3nyRL8jLtXEp/jp7ue1u
LIC9l28I8TlUiELyOQQBQtmP8sM+D1uun4nm5Hcb7yWLm9pAu/NaWkq/pUjWS7ii9zTcRvEf
5jAkDWKEGgywWetz2a6+S8kjl4RyG33Hjm+vbNZuSu5rFIVNCqSxqz1Iicq7kZqM8TF/SDZd
25Vu+y7XbbnbbtsG4Nbwn+u7CIrvbJ0ZBolYESafTTUNOXjTFKz9bf8AyqtuvPkTYeb7zDuu
1W3I0ube2uLqTaVS2vp4lZo4L2JKjU61KyUOWXbFl+Rzz878ofkfbuW3fCt5u7TcLzcNr9h5
Nx41yO0jS4gg7T20oCHXD9wOo/t6sY7nXP53n/8Aq+V5BpoUJfTln3HSuGV6cmEHGoHSdJJq
G/5HLCAvKA4NP5ZOYOYB/wAsZCNnJDNk5FSPEfXCAgs0X8ytG6ZUzHbFrWCLOzkjNlyHkMQD
rj92unIdAcwCe1fHEBEGtaKqE5iuZ/biQgBpHUOM9Rp+BxH7QzBTH91TmWPU5/4YiY6NKGma
j7QD0wIMkgp9tQaDLDFAayoLahUGtB59qYkJFjDErma0A8iMR1KWk0AR5kZkrnSnbAKhjIPb
MZgds8IS5EFskqAKDLPEpS1SgAv6VB0qwz+pwI4MOZdKgdBXPzNMSmHljVgHBqueXQn6Ym0O
hwCGA1jPVXFoo8xQGmoZPTqARln4YUB2DFamij7W/wAsWjThs/UBStNdc64EYyFVIVch4DKo
wnUkT/zMzUEVDeNc8Qw6qGOkVGrMMSAajriUMGDtRqOOxHfEhn09StMqKelPE16YyTtH1IJI
8B0xASy0YlgEUAUGWYHXCgF2kyFNAGQPT6YCKijTpHpBoK5EYoDBqkhu+Qp0A/DvhJxRZFZV
qAKEN1wAQ06grE1JpXt5YElVigQ+kljkfphOg1MWV2+4Eg9gK98SCoQiimudQO1fpiQ8gSA2
VKn8O1cRDKysQFNSPu/0wCjDjUocUf8Abl0rh1DfUp0genNa9SR4YlSK09XUDIn6/wCWBGDA
iqmq9KDpiWHUqpppoqkgHsa+WKIwNASrCj/cT28MsKEZBp0sar1IHfADK1XJUHsCmef7cOFI
ACuhlPgQe3ngZpBUXo1CppTsRTphaOhjzJJI6jPEj+8GZtILqaBT2BHngApSpDgkCOgowzpT
zxEDmJgGWusdQMsqYGRqdS9a+kCvQ+VMOnQPrboPLV1A864mbNErDSPVqTuvgcBFVZNQJplV
gehwkk9EBqTToFI/ZXEsRjVrrkQv3HEzqUN7jNQaVTy/4PTFjUpfyXUqMhXLqKEdM/8APEtC
rAPoFWORehyBI8+uJCX00pSn8JyzOJHkAIzXMEUpl/hgIXXMMa0BAqM8SJVQOT1FKVOJBlYK
pbPSTpCjoT2wyKnQkih6E+od/AYRBSArGdPQGlfM+WM4bAxomgDPOlVz6dc8Qw9AaMCWNa07
DEcPIqAU05HxyOHCehb1KQFA+1ulcSwL6PbAIC9dT0yP7MZxHr6NJOVK5eGEGXQsNAQQx6nO
h+ow4PgiS2ZbNcq/6YsQldwtM3U+Pl44q0QZQKxUBHWueBadWSgY11A1z8fp54DC0rrLdUNK
j8wr44R8kjosh0j0EUVmH7cGLTMJGNAwB6ae2WIBVtQ9Q0sB0yBr0wakioEQagC1ag9cjhaJ
XZiyHMrTWpPbyp1wskE1enUKgVp0xEyKg6LQ9yMwfxxLTSMkikAHSD6m0kEHy8cVCRKaQB0p
n2p4HLARjQlFKijfm7kd8WHXOsDam0UVTmtK0H0riUh/56mgHehB8friNkGQjgrIANJz9VR4
4kWsIA1Aa/d0GWJaYsoSsSjSKZds+uBYYM5AyqvVv+uHR6JV9yjP07qOnlgEDWk4kJ0uoNNP
av8AriOJDUGpalTVq1z8sQtRhv4ermuk59R0wmHHqABqAKAg+I8cVp0WlkIWgoPtpk2BrDAt
QhSDT8vn9cAsGzGgNNOWa+FfPBjPoS1H9zOj0FPAU7YVgWzIJNHj+0DKlcKOVTQACzlyQaeP
fEbZmBGli+XQBTXqKZV8xiGHVSStKqVqDTocVGafUmjTIhyJrTrhjVh4krWQdFFNPl+OC0zn
CkjKKHjrmaue3/M4hfDpNGymvp1Z55AH/LA1AzxUZcydGZVTke+NMflE1w7ElQEHQ1A+uLVt
TNq9oa/u7kmo/wCWDWpEdNMZoutW+818MajNVG9MJID7QJFczmD+IONJn1ArQj8Tgqw1fMfs
xDHeEjilVRIdR6A54pWsesfE3GX5Nu1vtE921rbsAGuACzVAJoRkGNMdOufG7ZHonPvjPhnF
rRP0W8XMt+y6hAyKEYn85IH7scPhjNeZswDFgAtejfmp0z8ca1fWAE7CioKRqAGr1PngxHX3
G0lQ3WlEAzxfWi+tvx34c5xvm2DcrKNLW0zZROwR209V0Zn8cXVsTk498acp33dZ7Db0p7BP
uyyMFiqO+Y/ywS3F1I5+U8H3/jV/HZbhSSd6OBGdZJY06DLDz65/C9b4V5pBsbbvdW8MEKp7
rw+6BIqUrqIHX6DFbjTCuvtkrl6a1Pga064Y3CtLaeadIYQXaU6QFBaoONSmzxubv4g55a7M
28XcUSWqqGMZcCT2yMiV6DBrEcnHvjrku/gPD7UEYBWM3Le3r8waf9cC6xwcg4ZvuwTLb30Y
kaoClGEkZz7MMsU52icrrafijmW4bR/VYLILZn1UlYI5HcqrGuWNXw/C1234K5Zc2yXc13t9
r7g1BZXJbR2rQHGaKp97+M972u6jsRc2u4PIPSlmwcKxNBqX641E6b/4k5Rt+3i9laCZSAZI
4WLOCR1b/ljO0j2X4f5Tulqt379tYxsAVW7do6r0FKVIqfHDfR1/hKPii9h3ZNu3u+j22KWg
W/FZUYE5aTQV1dBgkVNz/wCMxxSOKZdxW8tZsgypocHzFSPV5YvtTKwyP6yGXUOuWf0xvNCS
IzSOVhDNIvQIpNK9K6e2Kw2up7bdbVQZLaaOIrlK8bBXH4jB4zPD2dheXCt7EDsGOlHjRmz6
nOhAOLqq3Qi0uRIYZkYzBtIqMx/upg+WZzWuvPj+7srC2uzudncyzhf5ClgwY+IpibHyb443
7YbKO/MqXCTBaiBWLrUV9QAwCxnbG/3i0nrZyzw3SnNoWZZAx7eIr4Y1K1zNWR5XzZCpk3C9
hqK+suAa51ocWiwO7cl5RvUEablPc3kUVQrUY07VJUYtWB2PifIt7inO12Tyi2AMpyFO4zYj
8MFoxVzR3sMjRzBw6sQQ9dWpcsz1wys3g0VzMi91qakKTUkdzgrbo27bdy3S8SCztzNLMdPi
DT+InsPPFixcb/wnlWzxxTbjaaIJBRXVlkAK+anLLGfR1S2rgPL9xs/1Nja6oEBZLcvpZh3Y
KaZY1oxW2/H98n3BbCC0d71wQ8YIRVA6lj0GL5Ujs3zhHKNkjjkv7Ix27gkTIyuv4le+EyJt
m4RzC+sTuNhbSfpwKijiN3HXJSRXE31YpLn9ZbXL+4JIbhPS5JbVl2Jri5uMaCee+caZC7s1
AisWJr/24ol/tfCebX21/q7WyJgIrQuYy1KmtK51piwTcVdlse+X99+itrR5L+tTFTTpHclu
lAfHDK1E3I+K8o2SSNd4geAuv8qTV7ikDrmpIH0wBSL7g9RPpp4eOeEWOna4d0uryK2sIpJr
uRtKRRBmdiczQjpgp1YbxsnKtuv0g3O2uIZZdIiilJcuTl6MzXw64YNXVr8cfIzWTTWu3yfp
6GirOqse9BGW8MVKksuR8v420ltbXlxtzqzCaDWQdQ66lav7cHo13RfK/PAyON4uGYEeghCD
36Uzw4p67t6X5P5Dayb7d28z2oQUkt6R1Ve5RSMh+/FVaxP628g9cUxieMmsisQRU9yMR3wM
+5X9yqm4nkk0N6Q7s9POhNMK468HFvG5QRGG2ubiGLuI5HVaHvSuWM6qsNp5zyjaBKdv3W4g
WUapFDl1LDIE6qjCc8ddx8sc9uoWhk3idUkUrIvpUkdCKgZYNGayM1yzvWeUnWD65KmgHfUc
IyQVreXdrKJLaaSLRVQ0bFGz/wBwphPIrzer6+TVPeyT+2aLWQtpNKMczlijVkd0+38nSxj3
WZLpLC7TSl67SaJQPTQk5Uy6YtkrPUV0W87jazxzi5lhaAaIzHIykA56Foa/hh0yIbzfNy3B
0lury4uyGIiE7O5Xx0gk0riZFFyLcobWSyivpobWcEzWkcjLG1cs0+04NFs+Kk2GfeX3WGDa
rlre9uXESOJPZpWiirilBnhsaju59tfMdn3RLHlM8kl4qBoGmmMyFWNKxFiR1GZw8i2X5Zhm
jNUIDaT07aj3ODRLEdBUVoaGgHUL+OA08kEyAiRWjBWoDL9w8a/XErDpuV9HayQwXM0cT09+
3V2Eb0y1FK6TTzxC5XMkpSVJtZSQCiOG0moNfTT83nhU8T3t7f39wZbu4e5mAAeaZzIxoMg3
enhg8NIbzfrZSWMF7IttLpM9skjCJ2/LVa0PllisXlc1tN7MyyAmKSN6q4NPUB4ihqMFU1Nf
Xd3fTtc3U0k0pADTzO0jMqjL1MTWnTBrrZ+G1T44+S4uEz7pFri2tkWafbnnaNzEcxKYD6aM
OhGeKW/hx/p4y3Gdj3ve+QWu3bQA24Sv/wDEPueyoZfVk/alMO43+EXKLTkllyC6i5J739bt
6R3MlxKZGNANP8zPUNNKY1+HOdOeLed0O2SbZHdSvYysGa2DMYQVzyQenzrjM8a8/LghmuLS
dJI5HWeNtccsZKOD2KkdPwxqesfFSbjvF5uLvuG4XDzzufVLKxZ3Cig1E5k+Zxa1KnXfN2j2
l9tju5xtlwyvLbB29nX2dlrprTqcGtbrktLq4s547u3laN0OpJY20kMvQq4pQ/TBRqfdN2vd
1vWv9zuWubiWmuaZtbNQADU5zNBlgacUcoWQsGDBepGdM8I1aS8k307Wmzy3sh2cP762buTE
JQMmVTllXDuNWOm15zyyyjslt92ulTbiXsFEraYieugV9Ip+GBjfVs/zV8kThdHJrxT3USAZ
nxIGGNeMTLPLNI7ykySyMXllcjUxY1J/biojq2ze912q8W72y5msLyI1W5gbSVyoKHrn3wYf
XVY8q36x3hN0sNwnh3KTUWvIZdMh1/dXT/Ef4sO6w0Oyc/8AlCz2u4baNyv4dmtiZJmjDPHC
0mZ1Gh0hjXLLPGbV358K68+R+e3u4W+4zb/d/rLVWWG4SVtUYceoL4Bhh1qSVW7fyrftr3Bt
z2y/ms799Qe4gYqzrJ94bxr1zwxX4W20fK/O9lsBt9hv1zaWiHVbwI9VUsSctdcia+nphsn4
H49VfIOW8o5Ncwz7xucu4zoCsLympVScwoGQz6YK1FtvfxVzrYdjg5FuFn7W3S0BcMrPE5FU
EyD1RnGZdZtxb/GM/wAu3zHbuF3lxb2hasrCX2bYTEE6SWBQMw7d8VXW54oN73rnFpyma83m
6uoeR2ErCaT3SJIZE6gEHpTPLI4bKuPY67r5c+SbhgX5FeOdJiFH0/y3Hq1CmdRhi+uqDZeQ
brsl3+s2i5lsLqMELcWz6HFTnWh9QPnljNMkkQ7luO8bzu8u4X8xu91vHDSSEUeRzRVzHTGr
0PrqHdm3eO5/pu4tcpPZjS1rcs+qEfwaX+0Z5UwRymW/+HA7iNiTQLpzbv8Auwu0duxbnue2
bjbXW0yyQ38bg2zxMwkDk/l055+HhiuYxdbbmPyF8otbts3Idwv4hcIG/TXI9vUD9pQ0GoZ9
jgjVs+FXxz5Q5xx7b/0O2bxcWdoW1LbqaoOxpWtPOmG+FHe/InNr7crPcbneLo7hYim3zu9G
jWuYDDOh88WCWLLlfyP8qXVuu07/ALlexwzCOUwTL7fuRv8AaRkKo3Y4pRbL4suKb381bZsU
cfF13P8ApDlvZEEfuRKRm2gENTGdNrKPzXki7/JvbbpcLvbMX/Wq5SZWrR1YZaMx9tPwxrv2
eL/ps8WG/wDyvz/frEbfvO9TT2cgDNCCFDHsSFzP7cZhzVl8f8x+WFji47w69uDCocx2qEGO
PLJquCEBbt0riVljOXXK+ZWvJ5d3nvry15HGzLLO0jLOj1o6Z9B200pirnLvwsN8+V/kDkFo
1hvG8y3do4q0AoillzBYKB+Iwb4bGLIkDBWYK2ZXzHfFK1tC0bIoV2BLD1U7DGjhFSCRk1cj
Q1HTqBiYtxEI1LZ0YmpqR1K9MVUdO3WtzuFzHZWqGa9lOmCGMFmYnoKYmoO+22+2++ks7y3e
1uo2pJBKpR9X+5WpSmKVndcQKvLoCCoJIyzPnhxfKe3triYrDFE0krPoRVBJY9BSlc8FrUmH
uba9tp3iu7dobqElJbaZSkiOpowdTmKYmJ1L8OdHBYkAEGmunT8B5YWyepY0NGNVz8x1OI4Q
VyVAJ1KaNX7svLFqP7PqKkZpnpPUH64gemZBXJT6n6fTpiWnYKZKKaEdAh6np2yOA6t5OFck
PE35PDaB9nimW2uLtTURyEg/zB1Vc6avE0xc33Ge+vrHHabRuV5K1vYWst06q0rxxKXKxxrr
d9I7KvXDblMuzXLFGckBLq5orrmDXwxavV4/COQRcZXlSWpl2P8AUG0e6U1EcpHR16he2rxy
wT1v64pKoVP5QKkDxPYYrBpirUEjZA0IPc/QYRPTezL63cBdNdXeh86Z54JThhqJppAZRQAA
kHtiqPpIZWJoT6QD4k9qYNGLLc+Nch2izt7u92+a1srv121xJGwSQ9SFY/4dcMun/CtZqNqA
1Dtp7DGh6kjjkZvcQD7fQB4HzxnTjS33xfy/b+Ltye6sjBs6PGlwSaSxmWmgvEfVpaoo3TFL
o+FXtPFuQ7wtz/S7CW+NsvuzeymrSneoHh3GDTniGx2u8vpIbeziae7mf2oreMFnZyaaQvjh
Fdu9cR5VsMcNxvW1XO3Wsj+1E86enXSunWMgcZl1n7OTb9o3G+WZ7O0luUgjaecxIze3GvV2
pmBnit9xrmp+Pcd3LkG8wbbtiJLf3VUgQuFRmVSQoY5ampQeeFqRxXltdWV1La3cEtteWzmO
e2kXQ6OuRDKcwRiZQBZGkqpAPWncnEKdPWNRGYyYUp+IxEQUU9qmRzzOVPIYjoanV1GQ+8/5
YgYZLXxyXrX6YtWDjV0dg4H36hp7mmLWtIUBD6QQcyPA/X/LECDvqDAVqaAjwwDExZAAwIDG
tBXoMJDoAIYNlU1BzrXuMQGNKutTk/QVqc8GHUZpXR3r1BxYUgoCTnrOZFO3niQwhp0Davy/
cRiRgwqdLBtVaVNThBkMhFXQGhJLV7dq4hhyPSCRkKN498BlMxLMCxAUeGVa4RRMJCukUbyB
WebbWwg+JvnC5KbZNPPZQSxMytLes9qwA/8AD/LZ9LNWgBxhvp5Bu213m2bjd7fuERj3C0lM
UsRNTG65FKjrjWYubsDtm17nud7Ht9hC1xdyMPbjQFqajmf9v1OWK1n61t9z+E/lXbLBtwu9
meWGBKs9u8czKoFS2lGZsh1ODWpQ7J8afK287XGNs2yebaLyskBNwqQSMhI1aWZVqDXqMZ3P
gqfdfj7nVjvMeyXm0TxblcDVDDpBV1HpLK49LCveuNaflb7l8L/Km07dPe3uyyNbwoTM8Lxz
MqgfdpRiWFOtBljNrH2z5cfHPiv5E5Nt5vdm29rq39wK85ljiVsugDlSVHji+zdzFZf8D5pZ
b8mw3W0Tw7y//ityoo+dNayLVdA/jrjcvjnnq73/AOHPlTaNuk3G/wBjlFvGKzvbyRTlVH5y
I2LUHc06YJTev8MBLGdTOdIY5NTv+OFquzYNm3jetxjsNptnur6aqpEvn3JOSqO5OCiTWv33
4f8AkzYtue+3LYpY7OHOaaBoptCkU1N7RZtPiaZYzKft9YvOEf278s5bxybeWmXbqxhtnExV
0ulNdWoo2qOhWmYxud2OffH29eacg2He9l3Gew3izksr6E6ZrebJgB0IpkVbqrDqMRzzxXRU
D0YUpTUgP78Sn6bvjfxF8j8h2sbnsu1NPaFmWGaR44Q1PuKq7K1D2NKHHPfW8/amPDuUHd32
Z9tuI9xWQRzWphdpE1NTWyKCdI61w3rG5lj0r5e+JuB8J47aW/6rchySSNGguSplsrllI9xW
JAEHchQf241y4dZLHP8AEfwtack22+5DySG8/QWQolnbK8c06umrXC+WsIOw6nBffG+4wdxx
BN25jPs/C0u91s1f/wCA0qe3Lp7rNWirpaq1alTh2Rji7Hdyb4z+Q+PWEd7ve0XFtYV0tcMU
kVGNAocxFtGo9zilrcuJtn+HfkredtXc9q2iWe2uB/In1RoGofyq7K340xXo9Vxce+Muacg5
DPstvt0q39g6f1GGUrG8CM2kuyuRl3y64L0sWfyf8N8s4LcyS3A/XbM5Ah3WJCI8+iyKNXts
PM0OAfl5u0TayaUUGlPPGtFjV8G+Pt75huTQ7amiK10m8vmyjgRz9zHzplh+y5jV88+E952S
yuN52XcouQbRZkJfG3olzZyHM+/EC/p8wfPpjM/yu+rPhiN24/yXaJraz3Symtrq8jSa3hkX
/wAqy/YyMPS1a9sRtdkPBOZnkcfHZtsmh316Vs5gI/SVrqr0I051GNWzDz7Wg+UPiLd+A/0q
a6uY7y13BdMEsAI0SgAyIUOZpqybv5Yxm+uXdyyftd8e/t43K92KO/3be4OP7rf0G3bfdAEM
0grErvqBR3FDShw7reY8s5Jx3fdg3ifad7s5LO/t8p4HyUjs6EZMj9VIwiXVYB/MZtLADID9
1KYmnqPw38StzWeW73MyWuxWgYNcIFDTTqKmKN29IZAdXqxmxWebWOuuMbpJLu19tCS7jsO2
TaJd2WJwArkhGmQlmTXTHWZPGeds2o7Pj3JNwuLmOz2+WeW0tzdXirGVeO1BFZWUiun1ClOu
LOarMaGy+Jvk++25NzteP3M9nLFqimUISYlr9qE69WXSlcYsh6qj2PiHKuQ7lLt2zWE95dxM
TJEBoZWGbBtekBl7jrjNN9guV8L5nxiOE7/s9xt/6r0QyzACNyPyh1LLq8q1wszr9su2o/lN
D6WOYNR4j641W8b741+LuSctu1eORbXZ429q73SQVEbU1UUEjU+nMLlXxxm+mzx2c4+G9+45
c201jPFvuy7jMbex3WyaqtO2QhmVS3tyHtnTzwSOcueVk34vyZORR8bXb5f660ht028jTL7t
NRXOgzGYbocdLfHSthwn4c3/AJNdut8X2XbLaWS2vr+dAWiliH8yJY2oGdKioJ6Z4xNY6szV
X8i/FnIOHm3uDPDu3Hr30bdvdkawM6jJZKVCOQDlWngcXU31zu/FZteQb2mzvs4u5f6a8om/
R62EIkH/ANoUrpLYuddZx4p3DMTX1kmlKUqa9RhOLLYdr3Pdd2h2zbomubqc6LaGMAMzsclF
csGicuybaN+t7+XbprGaO5FwbWWOVGUC5FQIWqKazTId+uGq3Gih+Ifk64u4bR9huEleV4Yk
lZUXXEup09wnQPEVOfbGYvhQ8i4jyPju5Db9/wBvl226ZdaRSfnWtNaOKqw8dJxpc9StxP8A
GMEHw7JzW6mmtd0jmWS3hlCm2vbQkIojoKrJmSKmuVKYoeucqH40+EeS85t5b+Efpds9uYWm
4OVaJrmOmmMqDrALZFqYz1fwpIxnI+L8g41u0m1b3aPYX8Z1CFx6XUmgeNh6XTzGNfhz4tt+
MbzdfjSCy+HIuXXTSWe+SXMftI4VoLq2nYIntkatLipY59qUxnmbW/6TGa2b4q+Q92W0fa9o
knttxWV7KeqpE4hJDjWTpBBU+k9e2Ok6jnN1Bf8ACuYbHvybHfbXcRbtKFNvaU1GQVoDGwyZ
dXcYxtddx6hFwr+5O6tzsu4Xt7Bb3FpKIfcvdcTiNKm2LRlhqdcl1dc8Z0S/t4VdW88E5tpI
zDJAxjlQZFWUkFCPEUzx0sMWvEOKb3yre4Nn2lK3dwS2vPSsa5uz+QXwxlvmPTeWf2+3lhtT
z8c3hN83Tb4xLu+zqES6iibMSxBWYuvehoaeeWHn59Z6vrzDdONcg2+12+6v7OW3g3WP3dvu
GoY5o6kHSwzDDupzGHVa634Py5L3brSXa5/1G6xfq7CONTIZbcZe4oSvT8w6jGL14zOvcemf
K/xNw7hnDrQyXF6nJH0PaX2hntLsNm8Eij0wMoJIzrl55a/nD3t8iDhnxJ8Ycg2Ky3C55wdv
3O5UrcbaEi1xSVziCt6m8ssY+tUlV/Ofgvdtk5Xt+z7DOd9t95hNzaXDlY2ohCssvRFUFh6u
meN7439vwv8AjP8AbxvLbFyN+TQz7VutnZ/qdnlSQNDqUOzrJ7ZKuPSBprl1xzltvrGeOHYP
gO7u9n/Wb7vtvsG43SRna4JgvtSPKKp7rEgjUOhSvnnjdm/+DbVDtHwxy+65ld8bv9O3z7cq
y7hdSeuJIZPtmjbLUrdB59aUOCnPHVz34YvNhsZN447ua8k2NJVg3KcKqz2k4y0TRKSuglh6
h0PXxxrPHOSy/wDldWX9u80mzp/Vd8i2bk87KNs26co0TuRq9qVvuDN2K5YxzK3bWf4p8M8q
3Xk+4bbuY/pFts8iw71cswYxtJmgh/K3uDNW+3G7F+NHzr4av9i9i92C+HIuN3U621vuEGnV
DdM2n251U5EsevQ+WCQTWgt/7d9wO0LbXHIYbXmE7P8Ap9nmKpFKFFfaD5uJB9KH6YJPyfWT
+N+HbVunK7vinKjNs95OstnbTGitDuETZCVWydTpIArnUZ412fmefLNcv4xufG+SX/H90Ux3
1k5VnVSFkjOcc0YP5HXMY1jM6ijIYMpNSOxPamCwg/Tj2w+ZzqV8T3pXtjNRwQi+Cn7QM6dv
24kT1BqQS4oBXwPXFqPJpjCkUrkQB44lTyAUaQDMZClaYhTpK4jB0ksBQV8sq4FKciiqTmw6
ilcRFVW1fw/lFfAYgTS6UPWtMvDPwwowUadQJYDInpn9MKIatIHQ9DU5jyyxE6uhCFyS6dKE
9+uAJTrZhpOn+HOoOBBYlYjqPp8KZ4mcJXR6asivUd/24modonb0qSBWoJyNB9cTRyZBJUsW
oKfT/LEKSlVYLpKkVz8a/TxwDRq7soLKa0yX/UnEYRZZHqjUIFS1M/8ATCACJFFS59zrq7U8
MCwyqoA1OxJr1Fe+X4YVgtANXANagUWtMWjBqT6lGRJzNOlM8BkO2pmFHNOhNMq4ii9WSlT6
gaMTlkeuEFUKFDN6gQQfoMWoTMrIzatKtSmeD4WpU0qy6moD4daDE1DHUXIdQVzoR59MINV1
jKjPInIUphwU0JcqG1VIoM61OCqDJOYXNvynBiOhFCSoUk0NehGIgi0qWLek50anXwyxGJHT
XT2zRe9DXEQBqEoaaRSh6HEqmVlUNlkehGIAZmLDTmSc1P8AzxDAhmZRqqS35QPA/wCWJCVU
UHKpp27V8fHEoE6lXIVJGRH+mKQildX0Ma161xEhViSPUF7HEAoZBWgASta+OADV4y2SV1VF
RlQ+GeI6AkKwYt6vt0jy7YkIKNPuFTnlppn9MKKQgKTHkGpppn9fwxNWDZwKMSoalC2eWMG+
BDEv6jXLVQd/CmFmEdJQBjVupIyGXbEdIEmlW6/lp1p54jBsFB0vkDmD2zOAaYlVqSDUnLP8
MJDKrZsD6WyZew74mQsVMgCqDUZk1pXEqlYnTQgauqkClD4jEKAvJLnIBpWgqDn+3EhKAAy/
b08Pr1xY1CJJQuoqBko/0GJWjqzoaih8O5PfPBg+QKgWMhcx+Y+PfvhUCrEUAOZzzpniakKQ
KQKgGn7sWDUhK6R6szStf8sWNbEWmP7B1ByBORP1PXEzsF1yrVsjpHan1xUhaTSvWoqcvM4k
ZXJj10AQGqv3FfDEMGBI65UYahR+h88sRP7miqgEA5U+uHRQ6V1ElSB/HgWH0lSqgksMipoM
vEYNNIn2wCQSQcyen7O2C1CZs0ViNB7jOv4YvBdA6gSkavSPMYdR/ZVmLE0YHJq4tbnCPpca
dWp+tD0yxYM9Uu/NkyhqkkMy+Fca5jHSiUDM+PbvjSwWgePniTtlmVnVVpUnr5+OKRPZvgZ7
KDlNg9xcJArMdUzkBQVBA1VP5umOtn+quSPZvmvj99PtsV7G8X6dV9b+4pNPpXP8Meermvn6
ULTSSNNaVPcf5YmrdBE6sjFXBjU0FM8x4Y1g1ouGcoueObum5Wao8wFHWRaoyHqGB7Y1OvGe
q924b8zbPudpNJu62VhMgJXRUhxTLtljPXqjNcQudr3fn02+SbpY7faRlvZtnOkS1y9JYgef
XG5fBY6/l7bdj3OS1vIN/sCBpjdC6yAdwfQTjHPJxp+NQ8es+Eptj8i2/W0VHkWdMi3j6vww
2DXkey8J4nufJLu23fkEdpboS6XEZGiQk/lJyA+uLFcrh5ZxPjG3b5Ft2z73HuEcmn3JWCCI
HwLqT/jh58HL3PZ7DbbXhCbZLvlg85hYe6JUVTqzpm3ni62n4Z3bORbHue0S8Ym3G3tZ4KoL
hpAI2H/7snI4vq1jj5jzDbIdqt+Mbddxy3aFFF3E9Ej0dGMgP3Ypz6LF7se73nGdra+33l1v
uUTrWK3SVXZAB0QZkt2xUVh+Ocj41vXyE+6biRb2ylpbdmY+pz01Edz1zxfhH+SN5i3rkNvt
23bklxa9FlDfyqn7gT9csUUelrtFmODjaTvNiHWH25JTKmkEiuk59sFgrD/EWyQWm+Xd3Lud
kqxho85QGapGahj0NOuNX4Pjt55xi53Dl9hPtt1b3glkQmOGQF1UHOtKjFynZ81wmDjlqk4C
EuaAkatWnt9c8Fox4JqWg05MOtR4Yo3a9f8A7fg8m5XiaqAR1amYJJpQ/wCOG/DL0f8AR2ux
7nuu73t/CILhQzKWAZVUAD01rjKVu9x2PNuOvbbRf2uosDI8ziqFTUZA1zxZU8x3Xg8237nb
7Yd8t7u+mYatT/y4/JmJoDhi3Hr297Mz8Lh2+K6tTPCsRZvcGhjHTVQk4pPRQ7xt1nv+yWVt
Y7tbGaAxs8fuAoVSmoUB8sWFJuG78Xm3Oz2u7u0e8ioY4ww0q/bWenqpixLa0uls7e5G4XNh
aR1ZoEjZFIjH5mzzP0xJmeU7W/NNjii2O7tyqSfzJJX/AISa00+oYMDn+O7O14xfX1hdbza3
dw4109wejSuYWpw4tUHOYtr5ZBNe7fuSwzWLn3bVnARlB+6hz/EZYsF149JC8UjI+ejLLy8M
SlenfD3Ldq2u5m2+69L3PpikrRQRn6q+NcLd6bi1vLXh+37jcblcQzJfzmaD2ZEOo9AvUZ0w
YyC5ms+bW+3X20XKRpt8nuTLK2iVQPuGivp+p64ZRnrsHyZxZ98G3fqFWZlMQm1D2jIBTTq6
D6nFitc8F7bcM269bd5UpfTtLCsLhidRJGXfzwYpQ3jW3MRs+6bTcoIdsl92dJHCvkcwF/DC
WY+QOSbDyDlO12a7g1nbWhZbzcI/UEJIJ9qnhTriajh5f/61s19tO42XIpt99qVWa1ml95kR
aMWRhRVJA6EYp6Nbm7Ntyu92fkm13EclltrCaeOQqrCufTscVidNv8g8a3De7naIJ/8A5c8Z
WNpCEjZ6GqqT3wxm+qie9tuF8HvNu3l1a4mEwjWJgQ3ueQJpReuCRMpw/aNkveK3M83MrjaL
qQM0+3pMI40X8o9tiNeoDquLGqyfBOO7jvHLo49vKTm1m92SVyFyBFWzz6dsSj1X5z2XeZ9u
sty26LOxZjJKGzQNQ/bUeHUYtGe6fg25/K36a1vd1eCTYANbyyFDM0dPEHL6nAdY3nyw/IXy
Lb7bsl5BGFt/aaeclUqpJIWn3nwAxrIObqubiN38acy2m+3i5iuLTV7qPbkhtKkB6o1DjK/L
1Hcdvl3flu08x22VLnZLWP8AnurDVQVNKf8A1d+mIyZWG5Tueycl+VLOS03t9ntI4xEd0Sin
3Ur6FetBX+I5Y1+HOcf7azHzTYGx3Ky18ok5AHj0qrujmE/7tBI9XbB9bmm+V5nkD6DUn9/7
cLWvpP4gvUT4lv2juAksCzknWA6EJkc+lT0wYevh598b/Km9pu1hs+47kJNkecNcSzIGYVNQ
Q/WlcVlE6j3Pfn3Mbxb3e1ceg3IaA6bkZQmknpQDNsu+JMntvJeb7lyXeY4Nusru2VEW62cT
gPkNPuLJ/E3Q5Yj+FL8lcOhg4fPudtdXWx1KtPsF3cGWCSpp/KqTRhXsca5FyerTbp7rkHwM
LXY7g3e528CxyRQyMZlZJKlDU6tWnpXGfyurqH4jE138Z7tskUiNvcTzj9O9BIrMoVX61rUd
fHFqvs8afh8N3tHC9p23fpP0+6pMoaO4kUyEtISulmOfhUYGtec/PfOOSbBzvbH2u8aFLW3E
ixjNfcY9WXoykZZ43J4xL6zHC7vlPPvkq33z2BNLbSwSX80GmONESgBIGdcYrpy9j5lt96Pl
nie6GA/oI1eBrnLSJH1aVY+OeWBiZrhvt63D/wC/6x2yS6ZbJLRmjg1EIS0VcwOrV6Y1fhc9
e2NLDEm5XXJ9o3AG128XKG1vEPtEvJErSUky9QYfvwJ5P/chLyJ9q2uL9CBsluWC7nFJ7gkb
SFUPTIU098alwf8Al8+ByzhVbSX6igzyyxLH09xBbrfP7fxYcek97e4F9uOK3kCzQyGWtGYU
odNcHMxd7is+PbHn3G+Y7HHzS/b+mOJookuZDI0dy8ZCJrJyBBy7Ybap1Gy2jZOV2PzDf7jc
rNHxZoWS2YyUgEjKKUStPHOmC0wHyJu0+2/E/ILva5xA0d5IkcsJFFVrkBtJHTqemLmes9qj
5J3e+sfhrju/W1y0e5Wv6aWK9VwWDNGdRDdW1f8AXDz+TZ7EvzXzTfNs+MNkvbW50S7ssUd8
VCkyLLBqYAgECp8MXDXU9eYf277FvNzzu33W0ikawsyVu7lCBGnuK38tvHGK3Zj0TkUe/bP/
AHEWu8W+2Pc2W42qW8kgUlTEsYWWSo//AAfcY3+HKfLd7rZ3ezWe7XfHYJN7e/Vpm283C0Rg
KD2Q3QD+EYoz2+I9wW7N7I1w7m6kldpxJUyayTq1Vz1VwW21vmeNf8N7nySx5vaScf8AYkv5
A8LW12QsU8TCrRaj0Zvy+eLG+fh9GjZ9x3mR54F3fhO+aSR/N9zbpJBlQglko3kBlic8DfWH
NG+FIrLj7e5v8blHaArU6bhvdMTf5jGZ4asdy25NwsuE2u538m3bnGaG5DBbn9SkA1RVav8A
5DUMO+GDqe+LjisSR77e2p2a6sv0ylFvZ5neGcFszFGzEaSfLD0WN5Zab7dfFftcMEn9WtNy
kgQWVFkiQXEnuJ6cwMxUYoMU/wAXrz6z53t9vzu7aVZbGaPbRdMjS62064WNAS1B0qcV3GvG
5XeZ9r3e9NpxPcTT3FEzzgQSgZ0iV3K+qmQpirMfFu/zLc75uM0cQgie5lYWwFDCHkJ0U/2V
phEj0T+3Ta13D5CWP9e+2yxWssluFIPvuCqtGwPVWQk064zfluPp/iKRybluFvLsl1t3tKYm
kupmkhuFLUJijZmGg/TD1WZHmv8AbpuhMnMtmjnZfYuXawtNfRQ0iMYlY5UIWtMq4eucrP8A
P/0fOHNZt+HIr1t+ad90RjDOLss0o0EhU9X8I6UwT2nnyeKaIL7i1kqpI1NmSK/TDWn1hyuL
le4fGPC7jgxnmuY1hjupNuajCFIqFWoeiuOnjjE8is9gPm/lN5xzfuHb5s86xblODbXUw0ln
hcp6JVPVdVcj0OHPBn+yq/uq5fu9qNv4zGyDa91tTPdRMily8ctBpc1K/hjXDH9Jvi2/t95h
vt58Zb5+ou/1EuxKy2BcKWRRAZFVulVqMq4x810tznXm/wARc45tunON43q02+HdrndoQ277
W3t27SxJ/wDgKCmtSanxxX5HM8en73sG/wC48f3G54/fbrtNybaRrvjG8kzWskZB9xIml1U0
r0zxuVjrm/Vprm+29eNcZurXab7d45rKILLtMjIEWONKe4FZKivj0wY3Hk/y7zi5T5M49fW1
nPsVxCIxcXzMEM0TSCpZoz6lQVBDYbx43xg/7rNw5Mm42kcctyeK3VrESIyTavcB3I1Eemum
hwRz/wDk+c2kZjkMq1YkUOeHG7698/tW3jaYbrf9onnSG93O1RbCGVgPcK6wygk0r6xjOZR+
Gl+KNg3PiXFecf8As0B21r6Gb2P1TiraY5FoCcqHUNPjh6+RKvePW2yxcX4unynJaHebbSeP
Xkubokqr7aMR6XNAK1FPHPBZqsjyr5aPyduHytb7ffwMN0gjK7CLLJXgDavcR1oxLEVNTVTl
hvw58dX743f9xe37zLwrid7LbzSCxQHc7ihb2nMUdTJStGZlIwT4P9J/tB/JO07hzJ/jzfuM
x/1LaIFH6u8i0tHHRoWAkFarTQw8ji/GNX/238MR/druO2X3L9mNpMk8sVgySiM1MbGUsBJ/
CaHocMjP/wAteFgnSuo6R+Y402+k/h949z/t/wCV8dsB729xi5n/AEq/+RhLGpR16avsIy75
Yzz5R3N5Rf23aLng3OtujAnu5rYaLYU1uDBKmS/92X1xn37apl48exWe3wryXbb5LcKbnj7W
r3IXSHdGRhE57lVBoD0zxZ6kFog23cdk22VNyvjJHCYt1t5Cloy1on6kKdOsZL09QpjWqTFL
vW4bjs/yPv0G3bEd12m9t7S53iGyIjvI5mDRpcxAEF8loxGeKQye1jvlvad+ufjnetxsN4v7
/Y0CPuWyb5CpurZFaqTW8rAOGRvDtXFWepHy+sgUkZnV9wJ9I7nGjK+jPg+6h3f4g5hxrbJB
NyKdZJrexY+3JIpiUK0YJzFUpjHxXTrrYtNq2nc9k/t+v9mvStlyKe9ieyhlYJIJnuIfbVQT
XUhGfhg9cr7G6t4dkXd9vm3trKP5cWyMFvI1U/UFFJAy9Hq6ahn4eGH/AMt1ldt3Df8Akvxt
zzj95GBz43U0s+0pojkNREFaJK5jSmRHX8cb31zzeWe5RYvsn9szbLuwWHdReRTCzcgPnOHy
SvZRVh2xz51rr2PIZfjDlZ4L/wC8QwR3PH6kytC4M0ShtOqSM5gBuuHdPfX1ZDJiFJ9RyUdC
B44cabr4QuLe2+U+Nm4dIoUuKCRiBRmBVa1ypjPTMvr6q37aPe27l6vZCWUblaXtuClWb2xA
fdQ0rkEbMeeNC/lL858m3Dj/AMbX+6bW6Lce5EiTkBwodgNa/wC4flPji4+V18PlnlXyxvPL
OIWHGt6h/XXdleC5t98kNbnQysrQnLOpbNq9APDFp7zY9ltOJcrT+2DeNhudvuG3Rgz29o3q
lMRmjl1KvX7dRpgjf9Phlf7Vt/uoea3OxPfPFaTWzyrtzMRG8ykVdVP5wo9WHqYzxZY81+Q7
nmN3y68t+UTXMm4wTyxWkd1q1R27zN7apqrRGyOWNd/EHL3a54jyyP8Atiu9kuLCVt4gZJI7
SmphGlykmpBnloq2Of8AOr+0v1QXfJN047/bTslxtcq21xNL+nMoGox6ppD6T1Vqr1w8z5XM
8mr6Tdr47L8Wb8LZt53SS5MUsrUab25oXEvq8U01/DDz78ruexo+X7duux8a5LebLLe7q97A
81laxSqWtGp/LktlrUJGTqIXPwwyaXw/eSNcXM1zPM81xM5kklYnU0hzLGvcnDb6bj07+3Lk
O2bT8k2Vzucy20Lxy23vNkgeVdKBj0FTlXGOo1L49m+N+ObtxX5N5fvnIIl27abmS5NveTMA
jxyT+6jq3TTpFD4HG+puYzsxz8Sn2214MsnyPLa3PEri5e747M41vADM+jSy59GqAOn0wZvw
JGV+WuVcw49zbjl3bXdumy2cTXPGLu1jHtyQ3BVJg/8AESukMnSmeL8KfPq5/uu5dvEG2bVx
6D2X2/d7c3F5VAzh4nQo0bn7epzGHi56vy81/t72W0375IsoppPbNij3tutMmniAKaq9V8Rj
HTrMz/L2HgPMbHlHzbvyy1ht4rJ7KysZWDKZEYC5MY7ByuojrjXUxzk8WPxnB8iJxPlycve4
cezMLCG5JeTT7cgqv+1gBlgs9HN1mPkHYNw51sfBN640q7ht9lbRxXckbAmGVVjqsik1GakH
wxqT5jXxW4uuQbHcfI97s8d1Cdx3HjotIoS49VyJJCIGbs+lq0PbGfqGH4Htl3wj4l5PZ8rK
7bfXQZo4rhqM7KlAo61Zu3jhzabZmOz5G4/u3L+XcP5dx4pd8e/TxJcXSOpCMJBJpdBnqp+/
LB+ME8rS7rvO1bxyHnvG9tnil3jc9phFlGjqPclSCRWj19PcXUtV6gYefwKy/Gof/Sfha6se
TOu33r30cy20rASOFliqqqcyyhNRpjXXzp6sWHKeMbnufzJsvNbER3XGDDbTLuETqU1RFqjI
1qRTLHO3zIZcvrz3fNrk5V/cRe/0N0ugl3BP7qOPaH6eKNnOvpU6StPHG+r5Iv52b78OD+56
7s7z5K128odotvgScKfVE4dyY38GAbph+OWfLXjUySEgB8mFK9jjGrBt6FUVLacutcsRREgt
6qlRktMsCH7qjqMgaAU7YloQQaDVVaHUT9uIaKjkUV9SgVUgVOX0xKmQgrUsSFNTQEDzxKCA
V5NWugI7jOnjTEh6tAXVIWYCrZdO2LFQGSr1oBHl6hWle2Fn0giBqEkkkgnMgd6jyOIxMoc0
XOi9SOvlgpASFlAcGlNJyqKHxwLUpj9sxowfSpOk9qYicu9GoTXoTSpP4YkFqhdQI051BGZP
Q4tCWJqoPXpXKgIz/CuIgULVwvSuTYkf0awQ51L0Zs6YgMvqUKpoa1JH+uI6ETyK5Deod16Z
HwpiQoypOkEaRnUdx174kjdmIIrR6gjw/biQ0YBQDmcy1D0xE/vlq6zQtlnnl26YMGm9xgNF
PTkCO1T3xM+jYkEio9IyHkcLRZsQx9NRUAChJHfAiZFK0D62B9WVMsSwLtUEgDI9SMyOwBwx
CVqgAijg5EZVwIMhLqGJCkfsr9MOokZ/Zr38KdD0xIQErBXNNVRUjpXFVh1LCqlgHp6QvSvf
rgRMHprZRl0JNMRJiEXN6d8I0LR10vqBIIIBrn9cQTPpVjTr3XtWmClz/wA3UKULL+XqR4gn
ELEpkqtFCjPpmCO2JekGJJUgEChFOmJJNAI0qP8AcSewHbLxxNIX1s/5anIt+PQYUeNVV2Cm
rf5+GJGKvqpTPv164LFiZin3EitOvmfDBiAHBJJJJHUHriwQjrY6anSMzU0b92Ew9CGAqFXx
65YMbI1IU100yYUyNMQ0INUPcnMHt+zAtOqCoahFM6A5ftOHVILSnua6lRSgQ+PfFjWkGQMQ
lKnqe1fp54HO040oKlaOSOuY/ZiNM1VclyW0GhCj1Cv1piV8KpZghXSRmBWpoelfHFBbp3iY
0YCqkVC0z+uEYGNJXcqooFFQxyBJ8MJkEhAQgDURkCR/hXGW4KQJpAeoFMgP+WI0Bb3FACmg
pWuXXxwufykUJGdQB1n8tTmPpiPwZWBD+ilTmQP+MsFhnRmjGgKR93fEdKQqNJA+zIEdD+Jw
wdXUZ1Nq6U6moz8qHAzIY1VchRaA+f1xNFp1fzGbPsaf4YdPqTSFUMtMxUmvbz88BMZEUIa1
Zq6a9CD3wyCj9wpTU2oUNVNDl4ZYBUVfcHuITlnRiKU864q1PQMX1Bmqlete+JmT1KrUq4oS
RTPscZzTaQoQpUEE1BIyPn9MWCTSAVfuWgHWvXC1mHWZ4+n3E5Zdh4jDILQIwljoQEZgaHzx
VmKbfsl6EsuZbKn7MPMNZ8NT7e/bGgOrfup074k6Hj9qSmrUy55DFE9V+Ldkj3zdLOwmnFol
wQrXCL7hAPX0jr+GOnuM3/L1fn3xpxTje0Evv99cXrKTEjx/y2p1rnRV7Y83U/y6/wAs15EH
CMQMgftQZig7ivjjchtlvhMnrBBFD3GR8sTPSZY5NANKkdh4DDjGN5xX4f5nySwa/tkS0sn/
APG11SNpCMxRRVgvnipnjl2n4y5Vu2+TbPaRJM9mKyy10xRk+LkUqewGCfDd6lRcy+ON94xL
BDuHtTyy5wpakucvoBU4uflj66t7P4P5tPsC7vIIrOH2vfMMjaZQlNRqtDTLPD1v4WSMDcK8
LyRUqwNDnQ1HbDFgbeJsooIgzSNX2QNRZvId8MU8eg2/wZzifZ/6pdRQwR6PdFu7gyBKahkc
lOM9W/hjqao+O8H5HybcZbHbbUOYs5pGIVEA6En7fwxr3G7fFnd/FXLrPfrfZPaSS/nXWojb
UmkfmLZUA74patX0n9v/ACoDX/UNueQZtEHIcAD7VNKYNq+zi2v4O5nfs81y1rtsKOVT9Q+T
qOpGnrg1l03HwXzBdwit4ZbRveJf9Skp9tUB71X92KHXRcfBHIInZn3ba0lzqhkKaj40K5Yt
p1j7Xh3IbzfDsu3xpuV6h0u9swaJR3dpPtC+eEY0qfGfINn3O1h3vcY9jinIMV2jlqNWlB7f
Q/jglqS/Inx1uGyWcV7cb++6RE6NMgdmBOamrMwxes484oEf159iv+WKJY7Jve87TcF9ruHi
mIBIhqKhegIHXGilvoN+uNd3fW106yNqeV43pVvGvbGb0fHJa2m5ySD9Ja3EqLkfZjc9ex0g
41/0H39O9tcQyiC4ieORj6RIrDST3OoYpT8tbd8Avbfjtvuj7xaStIFb9JEX1ENl6S1AadDT
FarJE+/fF+97Ptlvukt4lwhCsYI9RZS4qMgBgtFZGYXcMjRUkWVj6gVap8qdcZli081vu6xg
vBMqV+9lcIPozAYftGdDDDeykmNZ5DWumPXQkClNS4ftjS12Ph/KORPONutTPLAKyaiFovSh
LUJOLWrOc8Vd9bbhZSPa3cTLLESsgzY5flBH+GHWHGjlGIYHSO/n4HCPhYbRtO47vepabfBJ
LcuRojQfdXz6AYrcMjRcg+NuY7FYR3O4WRNouTuJVlCk/wAWkmmMyrAce+Oubb3ZPfbbYH9K
fSrvKINdB1QMQWH7sNOqv/1jkp3YbQLGZr9jT2FALVHckZBfPFrGaseQ/H3MthtBebjaOlsS
KyBxKEy6NpLZ4pWp1gti+OOb7zZG+sLCQW5UlZ2kWEvUdFBIY4KflmtzsdxsLh7O6haKaIlW
jIIp5nxrh5ol1zaZFKimZUk1qDTyrhmNSNZs3x5z7c9tfcNssJTbOoGsyCJpAOwRiCwxmtdS
RSWPGeQ3e7/0iys5JNw1UaOlGjYHqxP2jzw6xPh2cq4hzHYo0l3yykRCx0zahKgJ/wBwJ64N
G4zLSOrBiA8YObnopPTGtH5dO2Lu8+4JBtKytd3DBIhAW91mPZQuf44txrNXXJth5/tphtd6
jvkhmI9hZpnkDv8AwijMCcErNi1g+Lfly42ykdnIlq6+4lqbj2/oCmsD92DrpmysZuVnvG03
clveQta3sJCurgq6MDWvY9e9cU08zxy3t3e3kwmubh55xRfcldmPlmxNMblwyNNYcM+Sptm/
X2djeHbHUu7xysqlO7e1WrZeWDR11jJye9by6JAUZcmD9AD2z8cEp521BPR3fp2AXyPbDKeo
hNSaEAkGoA/zwiang3G7hieOOaZIpAVZA+kMPNehxacc6voLCM+nqafb5Yt1nZFva8u5PbWv
6eDdLuC3poEKTSImf+2v7sYvhvLitN63S3vBc2t5NDcZ/wDyIpXVyQczrBqcbal8PunJN93R
hFuN7PeLG3SaR5AD4rU4PtAsLXbudbbsK8is4L2y2y4Yhb6AMiNQ0q5U/aexIxbl8FVVhvm9
bfcPe2d/cWdyS2q6jfS9D91WrUjEZyl3Pl3IN00ruO6XF6kI9yMyy6glczQnpi04q76+ur2Y
z3Nw88pQCsjmRgAMgCa0HlgtZyLDjV3v8G6W8WyXbWl/cMIlaKQxVLGgDOCMq4Y3I03yNN8q
8fmj23lO6XLays1ujXLSxkr0dKeB/ZhlZYh97v3mS9lvJJLvUGWdpGMgIzU6idVMLHVmt7w3
5t5Lx6C4s7sRb1aXb63sr3Oj0zKuPHzwdc4eetBz35k5DyXZI9risodn2eQj3be3BaKWnShY
CgHfTgnWHfXmheb3D01A0oB+8Y00uNl5dv8AsJMu07jPYSNT3Xt3KEj/AHU/zxm1q5iLeOV8
g3mZZ933S43GVKmF7hy5VQfynDLrnn5Wp+T+bixFim/3yWrLpNuZCVpSmn1V/Zia2Yp49+3V
rSbbzeT/AKNmEr2jO/ts4/OUY9cZtxiX8huN93prWLb572aSwjIeG09xmhBHSiGoGKVpt4vi
/wCXN14fHuLxzT7Dbq13Z2M09CEp/wCSO3Y1AK9D4YZWfZ6o/ju+5z/W4to4luE1rf3xMftw
y+wjMtW9bMKUxWty6ut4+QPlvjHL4od93K6G9ba2sW9w4kg0uOoAFGR18OuNzm2CZ/8AlrP/
AM5K4JN3Z8WtE3Z0YNf28jZOBmxTTmo7gnGNivLxLet0vNz3i83O9l928vZmuJpEUJWRutKY
vt9hJjkhmkiZXjZ4ypBDKTqBBqDqHfLG8PPTXTfLnyHNt0lnPyK9Nq6iN0aQMChy0liNX764
zsNX0HzryuDg9txiyZbR7GVTb7tC7LcBVJbQRQhixbOuM6LdZDdeZcm3jcUvt33Se8u0ULDK
75oVpRk0afUKdcaPNxan5f8AkiaOAPyG90wMTAzOVckZAg9T+OCdQfb1WbH8jcy2Oe7l2jd7
ixnvZBLc6Gr7jE5s4eoLE9+uNXmCoN85hyredxXcL/crq6v0ACTu1GTR9ujSBpI7EYpYJq4v
Plz5Ju7M2FzyK9mtpU9tkDAEqMiSwGr99cGxqxhpDpdm1EOSWfzY9ycW6z8Oiz3C7tJ0ubeS
SKZCjJNExV1ZTVWQjMEHDYI2TfMXyTNLC78kvX/TVMfrAYFhQ1dQC3TpjDUU2wPyqO6n3vZ4
74XO3sJ5txtg9IWc6qu69NXcHI98bvUxmf6/+FdyHkW98i3GTcN4vX3C8lAWS7cKrERjQoIU
BfSMsEbV5EekUPQ0C9qYhWq458k844/ZPabFu1xYQyH1W8enRXKrUcMAx/iGLxv5cG88k5Hv
m4frt2vJNwu3Cgu1NZJNBRVy79sGs+tVzTgny6ePWHIOVRXF1t9hB7EE9w6vNbwkhlWVVq8Y
zH3Yuazbji+Ndl+St6ku7Lhks4jmj9q+kST2LZhT7JXb06j0AxSyVu3xnp7fk3E+RPC5l2ne
bCWjCNjHLC4zXSw7EZjsRjdxidavN7+W/kbddvfb9x3+6ubKUgPCNCEkDNS6qGI8c88c5cav
wh478q8541YHb9l3m4tbfWW/TDS6Bmz9KuG01/24cVsin3fe965Fuv6y9uZbzcLo/cSSC/dU
HQV8AOuNff8AbOfpY75yz5Bj2F+H7td3cdhalQ22XiBTCY81UBxrXI+kV6YzVZsY15dLFwKt
2p0/HG8MuO+0a8V4/wBKjicP/I9qvuayaLopmSDjF6atavnG/fKFytttvMri+b9NCoggvU9o
up+1nWilyDnVs8WsYo915Xv+6W1jDuN/Pdw7WhSwWU6/ZDEFghpX8oxSOfHV313Sc75pd7hZ
bxc7tdS3+2x6LC5LVeBQagKTjcx15nutRvny18zXvHva3O9nl2XdFaD9S1oiQTClWVZSmdB1
occ/vDWW4x8kc14pDc2uw7nNZw3ZJngIWSMsRpL6WBAancYTWdnmuJ55Lm6lkmmctJLI7aiW
J1ajXqTjTGLnd+D8z2jZLPfNw2m4tttvXEcNxIlBUiqq/dNQzGqle2DTZguHc65BxLdo912S
f9NewijBxqikVj6klQ/crAf54Prta3BWfMd+23k0nI9ndtqvZZXdf0hKRKrsWaMA11Ln0bDa
zz58Ly8+ZPkS9ttwtJ95ma23E67qPSqEEgA+3pAKVpnppihDtPzB8l7TsY2Wx3yaLbFUhEeN
JGjVvyxyOC607Z5YvDjjj+T+djkEfIG3m4G+RxLCLwkeqJeiyJ9rDyPXBapMPyr5X+QuV2Y2
/fN0e6tmJrbxKsOoHP1aANS5dDlhtFmqF+M8gfazu4265ba1f+Zfey36ZSDp/wDJTT92RzwT
pd36/KLat33TZ7yK82+5lt7qBg8c8RKujqaqVI+n0w2CdbPFryLnnKeUbjHfb7fSXN1BpEUg
CxgBeh0ppAbL7h1wymT12Xe/c83bchzO7luZ5tu9iP8ArCggRFMoKug0q31zOM7Pg5lDZck5
1u3MTvVjPd3fKJvUtxaoRcMY105CMfwjMUphtTn5fyXme877NPyW4nl3CAaJUdfaSOgoawgA
IfHLBrnLb8Npb875ls/w9Lx+Hj88O1bqHik394pDBLHO1ARUBdfVdVcxi5snrX9P1Xm227Dv
O7XjWm2WUt/dmui2t0MkpI6+kdgMX30of0t/aXjRSxyQ3iN7csTDRIpBoQQemGxTmfL0y6+T
Pm7advsN1mvL+DbrZPa26+ltw0cqvlpMhWkwyy14JV1ZLihj335E5TBfccs5L3dRdSfrLjbo
wZgZVFdWkdPpkMO+j6sdNBd287w3SvFNbv7cySK0bI6noQaEHF9WP/at7Y/3A/LVnaR2ttvr
GK2GiISRRSMY1FEBZ1zp54zjpjEx79ukG8LvFvdPBugmNys8Z0OsrHUXTT0z7dMbt1nnIsOS
845Nyfcot13y6a/uraMRQT6FRhGCW/KB3wb5lV591rbT57+VreyjgXfZTCiCOJTFEz6egBZl
LGnicZayshecr5BcbYuxXm6XE21Gf9WtkxBUXDVOsUHjXLpjWq8rXYPkvl+yHaltb2R7fabg
3dnaSeqISEFWH/aykgiuDDPXuG4fN+723DrfmNhw4WZvpmt4t1W692BLgfd7sIAOh89IJGfe
uKC3Hzqu2bvyCbcdztLJ7lYVkvb4wISkSFqtIyj7UUt9MV69Ekiqt3kjFQxCSAEsMwVONSp6
Fyi++V//ALvNqm3W8uLrht7MYbJzIJVVo8ljlYeta09AbI0xS/I78rKXfJt7u9lttmu72V9q
sZXe0tZCCkbn7iB1FfDBz1jpzfyDcOR7xuu12G33l3LcWG0tI1hCWrHAJKe5o70anTFb4zOp
Uu879yTfYIP6ld3F6m3RC3t3mOv2YhTTGCBko88YnX4X2cuy71umyX8O5bVcyWW5W1fZu4Wo
6kjSRT8ehw2hNt26b2u9R3thcXDbzNOJY5oi3vG4Y11qw6MWxdXflvluuYc3+cNrmih5Nfbl
t8l3bvCsMiqnvRSACQKVUA9umHnpyvfuflj+P855PxxLj+h7jNZx3atFciLNXB9LB0YFTTx6
jF33t8btV0V5eS3ouo5nlvS4f3tTGVmBFG19SR2OG9bFO98jR863j5IuprOz5lc3rXCQJPZw
3i6WEDk6JVoAW1UP3Z+ODfNi3128Zuvljb+M7jfcd/qEXHZ00X9xaoWjpShZhRtJ0n7lxmfL
N6z5UfHLTkd3utrFsX6iXd2cSW/6dz7hfqGVhnXzxd9zW+LM8WfP9z+Qbjels+YNdNu1qgUR
Xa6XEJzVvSNLAn8wxr7eMTqbjt227+V9u4VcXFkL+PiU5K3skan2AaihzGpeuZFPPGZfW7UP
xvuHOLLkEh4dE1xutzG8MsWgsGRhq1A5aWRgGVq9cPVVm/Ci3vbuV/8Asl5DvFvcnkEkp/W/
qVb3Xkb8xAFWr1yGN9f08wcdS+Rc3Pwz8lR7a19JsV21okQmEiJ+QjUXpXVQDPpXHOVrFfxj
gfJeSW+4TbFD+sfa4hNNbKVErwk01IhoXofDFaOucmqBlckjSUB/KeoPTPFRLqNarqboKUH1
GKokDFSVNUQ1YU7nth1aCvqAz0/caZE164dQqKrGp9LdK9Ae1cCOsX3EjPqSc6jwGAHRaA+o
FiPSKZA4lp9DAKXYlP8A8H2/HGiIFSQ+Q/hAH78CBLI4j1RONX8PifwxC9JEepyIWmTEZj9m
LFpqsC+ptZGS+WI6fRKoYgdRVs/2YKkkYZgFYdaGtMjQYiYhdRq2agUGekGvfEjamElQuZ60
zFKYkkhib2xRtIXMKM8vx60xKhZtRADLSlc8gR54WdM7AaUY+ont0p9cR1KFYoArCnQeJwEk
icqdTAqcvoMBh6MsZyDLWgOWY/ywpADoYmh7kgdPDEzalhopNMxpoCc/wxapRK4oKEUPh1/H
ASkZqalNADTT1oe+JGAJFHoXU1J7/Q9sSL/7QECiDrQ6jgZHkWIByPQH/jI4TDhdOT50+0nM
EnucTQZGLAtqoo606+FMIP6w61pQLQEkaaYzaKKvToo6161HalMWkEjVIB6EdfA4VKShhHUD
MHMkVAJ+uJEtA9W656h0P/04UMKQCVJY0yr1p/ngQEViahgCmZIGZxNTm0TK2pXorsK0B8xi
ZoaL+QfbmPy1z8cGgS6xq9IU5+rqKUyriMA8YNKg0rRyOn1xa1iSWNFzUjL059RXthVoA1Rp
B0k55HphB4k1fzKtmKU88CESBkwoAKlh9cWKkJGyKn1fw07dfUcWCaFSpJIUtXrXIKfHE0Qq
HLAVA+6nQjAZBstVDgZg9B3+uCC0lBYv+WhqNX+WJCVgxoKjTkO9afXCtRsivQZ0qTRcgMAp
1ii9wqC7ORXSOgIxIcjBkOvN16GmFUK5D7WBrWmJD0NIUAIFc6+H0xacCxcuKOT/ABoOnlhj
I1YOtdQyFBTL93bA3Alj7ih6+C1y6+J8MBJXoFjViFqRTrUeRxqRmwIVWLBjVc66s64D9RKR
5gA0HYnLEjSsQh7lqVBNDlgFPGQwArVhn/x4DEEbS/zCAMx+btn1BGJSpFVWFB2/KOn78TpI
BqKlCRQmpqelelMSo42BH3EuBSh7fhiWI0YFqVAJFPUPSKeWKjEoWMxagw8MwcziWgydzrag
6AUpkPLEYZ6ldNAG7E54AJioiBcAORkWpkMQwMbSEDpVfzVpUfXv9MUa9EZiVqKCjUJbI/XC
yAhh6jLmhNAf8sWqw8iKi6wauaGoHj2+uFKHfQNK0qUBzPge+NQ5cUqgMrGgJHTEzaVZP4T0
8MQ1ZPAVlUllB/iXMftGGNSPZPgpRDy7a5VA9tDooDmCR6SPKpxu/DF+Xqfz0kgitzCjSuVC
qAfzD7q9RQY81pk14YyzUBYes5FCcyR4VxrmeKJFjcICAMugY5jPG25LWp+OLzj1lyO3uOQ/
zLMMWbVGZQAoyJUZnPwxr8Drmx9Mce3vjO9pK+0bv+qt0+6KKJkCGngQDjnYzrzvYpL+P5Mm
g2i5vLjadTm5KRN7VTWtT0BByrjW5FqL5sst/Fza3lpZ3LPGoKXcKOaEGtKqDQjGeZ6Ja1/E
hyC74ADuS3Ut80L6PeVvcYkemoOf7cPRx4OnB+YbxvNzZ2O0yyzxMfcQKIwig0qTIV69sWpF
f8V5ZxS9ja/s5LS61B7VgQzEg+kpo1Hyywz1a9/4/wD1294CU3BLmW5khY6ZFIkKkVGVBnXB
0Hn3xPJyi15fJYH9Xb2TBmuofaZUJXpUFciMb3Ymo5rsfKL/AJrZDaJZLJyqs97RtKpnX1AZ
kV6YzKcdV9dQcNUva7Lf8i36Zf8A5N+wft0+0Eaf9q4zaFNarzi5hfd+T7M267VcENHta3DI
YqnKkSAtl31YQ3l/uNxZbPbX237TJKEVf/yYuoEAilNRFfT9MTSltLTYeVXl3HvHDZtvuVTU
bi4ZqliKelloMVxR17BbcH2N/wCkbXuNvY35lAlifOV3Iy6kasumEVW/LVvYj+l3L34a4E6x
pblSTJXuOwpi5Tj+XSW4lAI5Msgp/KG0gVI+mIPBFEmrsQQTqrnhhzHrXwAa7vdkRRyOI6UK
BiADkyk596HG+ucg2vV7Tdd8ut43KyvofbsI1CwKYzpcEZ+o1rjmVbyXcNx47xuuwRLbyo4A
gig90ksc/SB44sDyvlvJuc7nDaXO97L+lliIWO4ELRNIQcvu1Uwc9emV6ZyWS7vPj6N5F1XE
iRPUxhCT+Ayw0X5W267vvW3cbtJ9rstc2mIMVQuQCACaf54KXRdbfLd31pfokEd4iUkvHhDu
ikVIQGmZPjixOq3eHcLW8t7qd7+FSY5Ult1RK0zUekVwpmuWbre8W49CnG7aOOTVpjt0g9xa
VqagZ5/XCKb4/wB53jeLy7ud82wbZOqKokSJ4hIPGjVNcRkVXyrHvFpsrpsm3Qy2Erarm8VV
eYHyAFfqcCeAzrLHJpbp1p59yfPGoLHtPwEsfs7izBRNoWpIGrSSchTFYZWy4f8ArHj3pbyK
QRfqHEJuASpSvYvlQfswGoecNexptH6EyrH+oQTC2qAy9c9PbDjE+WqCwm9DAL7rR11UGunf
p6qDA1WY4S9/Lbb0m4CRkN24gSerL7YOVA/5cTOI+byblFc7Gm3e6qNcL7wt66WjqPv05acJ
xXfJpmh5Px2fa9th3O+ZpDDbOMpAB0J8BmanBhkVPOL7lN5uGzrvXGrfbLL9TEWug8c5c5D2
mYUoPLFB+Wn5hNucXJePx2LSraySj3khqIitfzUy06csVTT3Edss9/LBGgmaE62jVRIaD01I
Ff24MOMhx9p7/wCPNzO7lp3/APkMzXgrRV+0jV0p2xrNZ+Ga4Nfcph4hLFY8Ig3Hbjr9mdni
jaWpNS0bBmkA7Uxmn5YTgvJt42Tl+q0t4beS5k9m5t3XUQpahVPzIa+GEx6n807zJtN1xzc0
j96S2meREJotQFOeIW+pNi+RuE8n3ywmil3KDdKhP0sZb9MXP3FtHpYeeCmsb/cRb3Fxyrbo
LSB555rY0SNSz1DED0gVNcOsy+sX8e7JeW/Pdos932+SFGmDNBcxFdR7ahIOnfFbrfNke68h
3Pd7T5L2PbraeSHbrhT79vEKRsRX78qUAwD8stzWL+mfLVnJtfH4d6nltxM23qAuotVWl9QK
gjxOHBL+GK+bL67ubqzkn4k3H2RGBkJj1TZg/dGApCn8cEuM768pRyW9YA1EGvfGmtfSXwht
mxR/HE+5XW2295PbySye7NGjvSNdWnW4OQ7YzjXXkVdhzH445/JZ7NumwJbbzJciO3/ToqKV
Vq0Mi0ah7jD9WMl9ej7jFwnaLqPb7j+jWluV/wDxOe3T3SpyOnPOv0xVpl9muPh6w5HukNhF
ZrNOEaO8uYjJaq5yMKMwCpQ5/wCeDFJ4r/ka2vLba03aXiu13yWjrJZbvtxDxJQ/bPCVqyN3
zpisjNuNPLzUD4c/9lk2y0CG3GrbSpFrQvoI00+3vTDhrJfEmz8cbhe8cwl2ezk3BjNIsMii
SBFjXUsUYeukefXF431fGh2rZOHcz41tPJb/AI/ZQXSzikcKLoorlSj0VdSt5jBjLg+QOWfH
/Ct4tNjueLWclldKHu5BBCKJISv8tdPqbLxGGch5Zs248Ij+XLS64tYJPx+7dIltLyOqo7ij
lUckjP7cV8alx6bznjmxv80cSgmtUktbiCVpoJKyIzLUKNDVAHl0w74xOcvi2vjwWHndpwNO
KWDQ3ULyT3LQRABSpfSlFLdR5YziyUuNfHfDdlbkNvsOz2l9u0E4021+A4WKRQyIGcNpX7qH
9uG3TJkZf+4je9t2riVjxy0tLOOa5q09oqoHtVHq1RADIO5p54Zmeh80LCVcEtQKQzGpr9KY
dbsfR/Adh4pxj4auOWy7Jbbnftrkm/WKH9xfcCKoLBtC08sYg6/wqeMXPxp8i8z2qzfi7bVe
rrluP05C2dwkSFgCoC9COwxTGfy3UH/pe6c/vuA3HFdu/TwRGQ3saIj5IDkoUMGz6hsVivMo
rjjfx9w7g24bvNsFvuZ2eaeKJ7iNHlkrLpTW7A5eoD6YZyr1kZ/m/GfjhOLbJz272FYIrgxG
+22xb2VdJ0OY0haGPypXB9VbjQ/MF5waP4z22Tc7e9/ps8cSbV+ik9qaMtDWJXJYDTpyINcP
C7jxP+36ayHP7a0ubG3uobrXEjT1aSGnqR4yOjZUNcXUh4mNzyXjPErj+4qOy38F9tnt4Wij
lldg1yyfy42qSdB/hxq9XGuJdr1OPZ9g2GS+3C92LbdngsFZNv3GIIqSxsvqWVQBpJ6Z1xjJ
rPV8fFvJ9zG5chv76G3jsoLmeSWO1h+yMavsXtpp0xvz8M8TxdfFi8M/9ws25jr/AKMGyVas
nu09BmVakxj82Dpr5fSe+cb41d7bNJacM2zfePvHUX+ySxLdovYpHQVZRnQPjOQKq02v4041
8TWXKbzjcF/LHWOATxKJpGeZlQSk+kNl6ji8NuJd1+KOD7luHF+RWPHhp3FBLe7LBIIYSrRi
X3j0B9s9VFNVcMmi7rVSfG3x/vgn2y82bZWj0nOy9N1GRkraQoKFfrjOQ4xknHOAfHnxsm9T
ceg3yQ3jQ3L3oQyuTK8atrdWAChB6QMKtZ3hO2fEfyDz21ew2ObbLiOCWfcdu1hrOQqQY3BB
BIqTVQAKYvMM8em7j8e/Ge4xXe27pt2x2kNGCz2cqQ3Mbj83RdJH1OLA+ON/t9vtd8v7KzkF
za2tzLDHMoFHWNiobLxpXHT64zOtbb4F41ab7zyKC+23+r7dbwvcSxFgqxkUAkkU091RWmjv
jNp4fSUvxX8f7/BdbbPtO027iMrHc7W4FzGa5NQKNNPA1GMWHFL/AG/brYPsO/7Om1QQvs0z
W893GADfIusK86mvropB7Z419cHPX252vmLnm7bTu3Jb692fak2WwZqR2Cv7ml1r7jBqKAGO
YUCgwyM/ytv/AIUCRI7DUc1Ffr55+GF0x9PT2Hxz8ffHPGd0veMwb427e0ZZbkq03vyx+4X1
yKw0eC0FMYHVyubnPAfijiXL9k3S92u5G1b2pZLK0kottcqySCRQCpKHV6lrTuPDF9Nms/bO
sd/90V7xWLboba4ur+3365tjJYpbE/pJ0RwNN0K06nI0xrk9RbfA93wub413Nttsbq0WBP8A
8txvJrZpUhq0kDAilQKjpQ4Pyb8evJ/j9vim++TL993u577arhVOxXe9VIaSmaXmonUBXShY
9sa7rnxzjffJPD+Lx7BdPe8Mj2/bkQta8m2J4rhEYD0NJANDe0zZNXpjGRddX4a3ZfjLge2c
b2gJsu1XJubeOWWfc2AmZpEV3Icq2rr07YpHSxgd92n4++PPk/a5Ns2+23qw311/+C0gkawm
Eo0ywNVqAl8lb8Dg+v5YlzrA/wB1e9cajvItofZ9fIJLdLi33uOQIUTWymKVKVkFFNK9MakX
e758vmlo4/cLsTQDqvQVxp0e/f2sbLtd0vJN0ubaOa9sIo5LG4ZNftkhy2mvTVoFRjH5Oz6t
PxPeZvlnhnLoOXwwXC7WDPtbxoFa3dY3YGOQ1YiqD7sN+XPjrZri234c4r8g7fx/k3Hbf/16
JVQb3tzxsfe9kAa4FPpcPQqTWhH+7HLrm/isd/ztssdG0/GHxNy7n+7f0S3eKz2OOH9btML6
IJ7nUwb2g2YjXTRhQBm8MdLPw6zr9Nl8x7beXnwdvNrFtX6ZrRY2t7QIrGKCCZD7iImrTSMH
IGoFc8PMnw5/1t+uvi6ZImqyklWFVbr9MamukaD48sody5js1neKslrLdwLdRsPTJG0gUowG
dCMjTB/S+HievXv7lN8uX5jZcRicx7RYwwMbeIldXu9FkFaOEUDR4VwfbPGN9bHkv9vXx7uv
JJI7eCTazJsrXUQt2/lx3COsayaDUHI5r369cGeDvnfFrsnw/wDHZ2rj+8f0OGWaSxRLj3K/
omkCBvcnjqaOxrR+x64Jz+zzuRwfJPw5wS64fuG82u2W+yX20Ibj3Ntl9wSxoKvHKtFWpXoa
Vrhk/Ss/Lqs+HfGNxxWzvOMcNtOVbc8K10TRpfAEUf3RJQtID1zBr+3BjXXWPmH5D23YbHlF
5bbF+rWwiYBbS/QxXUDkVaCRTmdHZu4xvxjmuPiVnDcci2u2nTXDPdwRshyQq7hXRz2BU4z1
8Ok2evq/lnK73YPlLjXAtsgtoeNX0UCXlk0SujxzO8PtFWqoAWPIgVPeuL8M/NsrL7n8Z/HN
/wAt5JwKCyfbt3cpuGxbtGrPHb+5EC0D6fti1g0Vss8swMLHHObIzPyN8dcJ4N8dLtG6N73O
ZZ1uLO7t0K6kJCPGztQPbqlaas9fhgk9atbngvI9u374F5NaW+2RWUW17bPBPDGFMcswt2Yz
AUyZqAkHo3TFz8n+nsqi+I3s+MfBW88zsLSNt7tJpR+okU62Qe2FRmFGouuopi5+VJZzIuLS
zsefcI4nynk1rBeb6d3hsZp1jCLcQNMVMUyL9y0zFeh/HCbfr6urvl24zfN7/HjxWz8Ve1WO
5254lZZBJbGYEgii6clAXIjzwL5tcs+3bZ8dfH/K944zaQw3tjuU0UM0oLMyF0VQzD1H29fp
+mGT0W+Oe02fbebcd4Ny/kNnDd77NucVpdTCIRC4gLSCk6DJtPthl8Pxwbp+HbbctvN2+bdz
+Or23tp+KLbtHLt7xKyMVgWYPQj0tVqZZd+uNWeD53XHeJZfG/xhu258Ws4or2HdJ7JbmQVl
ZGnKI8jjNmjyC9qdcE8ou4xXzPZW+/8Aw/xrn99Aq8pnnjs7q6iRYhcRv7g/moBQlfaGnwzx
rjodzLseAy20yIGaNwG6kqaftOR/DBLK6VAFIy/HUcIx6P8AAGybZvPyXt1lulol3ZOJvdt5
l1RNpiJWo6HPPGOjHsEPwdwezaPdLeB3ax5IbOS0nJliuLVpxGsLKf4NdQR1pQ4aI1vPeN/E
vANkHIrjjVvOYpmhhgESusjTktobV6VAodDEenpgk1V4F8o3HxpvFptfIOHRLtm5XUjR7tx4
GixKqkrPGAAg1UodORr0BBxqMye+N9zBrK+/ti2/cIrdNu03Vot3BbemKZ45/a1SL3JNG/7s
XF9bsXX9tPK9pvNqvtlGzQRbht9q8lzfIFD3ERfJJhQ1Pqp16dsZ/LXUfP3yFv8Axfe+USbp
sW0f0OylUe5Yhw6mYZM6BQAgI/KBjrjm9o+RWs7/APty4/uttCtgJbq0FxbW/pglZS6e4Yx3
1JqFO+Mc9N98+ru5b4x4f8d8Y5HuXHLbcNxv7NIEVokYTEgO5kZgQGHUNQntjPEHXqWT4z+N
U+QuL7l+gjsLXkdnNPHtDnVbfq1VHT05BtQfNOhphzYZJz4v+UjYOIcU3vfL7iu32FyIDavt
0ckZi3C3dwoWoRKOCdWmhIGLmTWLNfGTEK/pXSwYjQDWi1JAJ76RlhsPw9y/tQ22wvuT7xLL
HFPJb2RaF2GpomZwpZK5amGM1qXxruEbvefJ3DeY2/Loo7q029ZJtp0xhDayIslPYlHq9Ohe
58OmNXNxn6+eodt+JuM/I22bJynb7afYLiGCOPeNuWIKt0YFAEsFaIa9z+bKueZPPg2e6xXL
t/4VsPzDBvXG9pintbRVg3LbpEMEDXaBlkMKEVjYLpOY+8Vw9XOcXHy2f9yDxXt/wO6lSj3U
EzOCAWKuYG0MaZ/dil/1q/8Akv8A5D5lvnEfkbjHG9gEVltN6kctzbxxKaoX9ox0p9gGYp08
cb553kSf7LKTY9j4jffJG67DYQw3dpaRX0Q01VJHheWRFGWmN2UMVGOeCc5PFFsLf+9/FNrv
nKUW+3ax3OEWl5IiCRYmmiBQlQNUbB2BHfFnrdk8qy5FzLkG1/Nu1cNsfaj49dRxNcW/tAh4
5gVZGqCNApRSOnTBfIJ8hv7Sx4Lwz5Cv+N2sNrd7de6rZiKkJMIWCEj1aEaUlBXIgeGNTn2K
3xW3c26b18X8b5o9r/U+b2G4QLtlzoCTXA980ibTpqrr/riw9ZL4vvj75AseR70ouN6u9m5B
+pkSfh89GiUp9yKxQNTv9wz7YxqeUcYvbraf7iLv+lwNZxf1a4tZrRQCsdvIw1qyrVVTPUPD
LG/6TZK6/wA5bLrLf3A7JZbT8rb1bbdEILaYQ3TRrmBLcRh5Cq9gzVNMb695lebmZuPOJHJT
IEZgEnxPc4w0AMWIfILTI0zy+vXEtE7BEoDn1B8a/TFg0Gmq69dEoTpp+GWJJRr9kUOpyRTt
+3AaaNkAYs2fegywL6lQg+hqoOw6fvxpJF0ADqCTUAdh4YCj1KGBUdO46VPjirCV6Oo9sU7m
nj4YmjEqo8SaZdKYUdSKlQPSTkW7/WmDEREoXUCKmpWmZBGBDNCSSxDkDVTOv1GJHRzrrIpo
2Xl9BiagVUIDpBAqaAEV8sIsGoVowrKCQTQfXrXEMMgCykMoFR16j8MTWHIYVMYAHcnrn5YB
g2mkEWaqGH7K9sCDLp6n0k0NFyHTCLoEQ6K/kNfV2piGVLCrqQAdRypTp9cBkOYyZCG+1ftI
FaHtg0gIcihFSDWvQH/nhWJSVD1kzB+0Cn78SC0mmQIqEsM+gAof9MWIiFGaV7kjxriIlDIp
rSh6imFIw5Ioa1Gak5fhgCUmvqemjoB5jw8sGImjGmrtpQUI7jEodTHpIjBBz1E+WeFaGHUU
rmAwrTERAegBqIzePWuJH9Wgg5L/AB1z/dg0wHuKFoBVVyJHXDBaeJY/vlqGPQV/dgA1KD01
oBmoPXPPFhkRM7hkYDI51xLKlUKdRJ65DxNR+7FGvgDBCVNRQfn+mIG1A68gegUV6/swjRqS
XUKuknIjsMWmUyqitqNa/wAPYk4EdEVUVtRyyIIpliVNTSA2agHqDXI4hIQuGClStSuYQCgz
8frip2pftBZRmc2HTAQPKCaFRQgmoyGJko66SalgBmD4/XDq0tTimptJ61ApkcSPFo90ZlSc
xXufMeWAEwJNa5A0LDx/0wnAFxGpIbM5au3XFhGGYkaSAO4zy/5YVotYUksAGWop1r/0xLwL
FEYM3+hqcC0zMJBUuSTkVNRTErdCkcitQZocgfD/AExo8iqHqAchUZfvwCw6s2goxHT0+OMn
IzTI0OG3w8z1Qf3O23MrbftpTk1/a7hC1s39NuLWL2WIU1k92PU1CTTPp4YZ8Odm9vFcqBQR
IrHMnt45YXT4fUH9vW53W1/FHLNytnLXNlcC4VjShEcKPQ18hQnGJ8tW+PQd2m43N8a7ryPa
pV/pm+zWu4TA6QkUrzQrKW8PUtXr3xVzvMkqDlQ+RI/l7Y59oe7/APS5o4ZNzMBU2urXJrL/
AFXQTTDp9188/wBzEFtH8t7p+ljjjrDbO/t0A9x4wzuQMtRPU41KJz8sLxKe9t+Qbdc2RRbi
K4ia3kmIEfuaxp11y06uuLrMb5mPss/+ybvJS7G7cS3iMCt1aslztMk38dDqqjH+IL554zKs
Zg7Fvm4/EnOONe1Dfck/qUyz29mFVWeVopVlRB9gdfUMEc+59ucdewbLNxX4s4hY7wUtpYd7
s3lLGqoz3JIDk/aQTQ+GLmKb9Y4OeWXzz/8AeVdNxO4nt9gYwvZtI6Gz1iIGQOr1oGkqD5nF
rf5RR8h5Nxz4B3rcjFFYcgt764ju4dFEhnkuAraFB9ObArTph5vo6njN/J17cbz/AG17BvO6
z/r9zN9HW8kIaVdbSoy6voACMPFH9eZZHzbqOoZZAUr9fLD8Nc9PpL+1Hme8nc34i0yHaI4Z
buKBlGsOSNWl+uZNSMY91vPHlHyVzPeuZ8wkn3EQG4s5ZrC3MEYjLxLOVjD/AMTgjrjr1Pqz
HvUnDOXj+2W+4/e7bM+9weuHbzR30R3KSgRip6KCQMcpfT/T15b/AG68X3y85/t+92lnJJtm
3XBjvbimURZG9LA+rHT+vjPF/T0eDbrjZf7ov1u6wtbWm9mU7bPIAI5XWAIAjeNe3X9uC/A4
59r1Abze2m73sMXHd0kWMyxxNLNELSYDOkIeSnq/KCBjOtPhrkMok33c5hbGxSS6ndbNusKF
zSMD/b0xvdZ/nzj0/wDtguNvtPk20FzOkbXFvNFAJKDXIy+lVJ7nOgxjp1e5cCtPka153yJe
UyXP/rEv6iPZ47mRXtyplrGqipp/K6V7ZYrXOesv8oct3bjXw3xp9ivDYtPcGJpocpFRC5/l
SD7KEafpjfMmtSPCOe/JO9c52rZLfeY4Zr7ZEmUboARPcJLSgl7ejT26nPFOoLJax1qCVAQK
ATQaiep/zwWtR9ebrbc03b454NN8f3bQvFFFFuN3byKgjiSNVZZB+ZUdTVaYzzivyzn9zO1b
ruW88H2yJUvt0uYZ4YkqFWe4LQjKtFGo5jDz8UflYfMfEeTTfCnGoZbKS5utkMc+6xijvGkc
DqWOdSFqK07YePTbJdQfOHOOR8e4Vwc7FuD2kd9bIZLi2NJP5dujArJ/CdWY74zz8M9e10fB
vyTJzXlVw29pbrusezix9Hpe9CS62dg1PVp6geeC0yflquR75uEHDd/jm4pfRQSWzrJBd3MU
oMekgyRqXevt11MAa43zNo1S/Je1fJu723G7/wCPLqcWL7ei3VxZXCxq7enQzVb1aRXtgvxT
+XmNruXLuM/O22Hl+5KdwH6SG53iMrElzZPRUeSgGrKquWFcsZy4P532yr7+4VuRW3yZts+6
STnh8xt5rDU+q1E0QAm0qPtfv5jG7f8AXz5PPz69F5ttnyFe/IWy73xi4nPFDDA+4/pbgCOb
1k69FfX/AC6D6Yzv+v8Akye+p9/3k7Vb/KO4bSYlvLW2t50KgMDIbQ+th0rXrjUnwy+fti+c
eURWcOzb5Kd4sEvre9967Je4iMEgcpFIezU/N07Yf6ZHWTXuG3cd2Dl3ybt3yXtm7pNt7Rwx
iyKlLiOdFKqro1KKa/t8sY+2zGJ/ra+ePngrH8qcoIrqa7BcnpT2kxv8MvPmkK5CjVpTAiWn
3rQqBQqeuBWES5kLBSF6hqdcSkMUIy6lKGpGBE/QoVrXMDsfxxLCeMoQBSjGlSen0GLVR+5p
ACkNQ1of8cK0Kq0itpoVAII6VFcBzS0+2oWlAMwOp/biiSRlPSB9xrVj38cQ0wNBpqC1aMvT
CRKzach/9P8AniwHLp7YOdRlTT1/EYsWo/e9PprkepHXyGBan0S6dZ/MSADToMRkB7unUqLk
cqf4n6YkkyEQCHJh9pHQ4sR0qhDMajz7fXEikOZDV19enQdTiRBmBBUEquWkdc8QOG1gqyrU
gkVyI/54FQtGFB1HTSmqvQivliZsSZjPqgGRzFBgUENI9K5Vyop7nE1KXthxpUnV4Dth1YBk
IOhjn9MWiQxWhAWhOVKHp44ice4GNWBA7Dwp4jEhJKxVyaELQAAVy+uJDJlAoSNbZauxHXLA
gh0Dkk6j1qOn/TGkJnZnOghgaDM59cFJ1HqoX69l/wA8QA0blia1WMUIHUkeNcQ+owUABcCg
H4DE0FZfcWobS9dVKdKfXEtJpAwIqCW6HqQR4jBgtEqUUHSAoyFD38cGGURWighe9a98RIup
BatG8K/uxIYiI6nIip8+/bFq1GsmpdRNKH0qP3HCSJLNq+0E0WuYqOuIU5diAMwpy8ya4maa
mmrU/NWngBlgpSMxNAoBDiqr3/fgVCVVFGmnu9Wp54sQl91CSlSOtO/41xHQUmObnSVNaV79
Mag0RV6AECvenT8MRgRIVQkE5Za6HPywH7CRlKkIACxq4/zGLF9jy6Si1B1dAe1B0ywCkZIh
6aABsgwzOQzwrTaA2qiEhQCTXt4YEJRpU0P07imLWzBanM0yyHj9MRAJ1rpVQ4Gb5nCxekyo
w8BXOp6Z4EippJCA0Y1cDwHgRhMwOhAfSpI6KG7E+NcQSgu5JXJQKHPw8BgXwZp3Wqlu/wBn
UYcUpSFSQyVWg+40GZ7Z9cUV6JTp71YChdugPliqltIMTTVn4jvXFTLgiyDUCrAKc1boDT8u
M5T9wyMoTUwCVFcupH+uHGeuoQI0E50oPVTMU6jAylJGkVrRVoK5EDrhb5gDrNAWGggVPbLE
QBwjUp91QCMhTEzaaQEiigav30PjiGJGJI9Rrop6KUOCtQtLNUj7hkviK4zD0cAKqyMAS3pJ
/wBcLO5Aqrxxl3GtTlQdsKgmKox0UalCK9VHnia3DA6V9RoD07Ma5fTFh0Te2wA0MDWgoQF8
OuHGfsA1WRVIBj6DOlM8s8RmVn9+DB29RaQsan/aOmeNyCxT5BQa/UYBRe9J5/twh1tauy6l
z0+VBnhheg/GnHE3u9h2+W+Sw9w6RczBigLHJPSD36Y1dsVuPSud/FlpxKGCd90W+kOoelDG
QQOmZp2yx55Lq5kd/B/iy23rZJt53D9fCGBNsI7f0Ouk56z92YxWNZ+nfx34d27dtnmvY7m9
juVqsVrEqHMDIGpr1zxvqZ8Jld3+MedbeQx2m5ELuFRiAzOadAqlsc9q+0VU/EuUR3CQz7bd
JcyD+XA0ZDfgpxqVrx3bPwLerzeoNq3I/wBEkmp7Ul8j0NewA741Nc7Vnzv4uuOH20ddxiuh
JmojjZAB41Ymv4Y1EobDgXNNxsjf220XUtp191YmKstK6ge4pg6ok9Ul7b3Vu7x3EZjdfGle
legxSm/JoPcZo0RWaaY6UC5sWPSgH+GHVrWW/wAXfIUsPuf+vXenrqZAC2Va6SQcc+mvI5Lb
gXNLyZoI9mu5XGTr7TVB86jLLzxm659TUl78fc625EW52W5jjlYKhMRaoHb0k46ToxJB8Y/I
MiGVOP3XtjIBUzFO5rSv4YfuY5dv4hzLcJGgtNnubqWMlJAqEhW8GqNI/bjWnInveEcx2x0h
uNouYWn+xPaYs7AdEIBxzntZo1+OefyKZW2C9SMeol4mpTy8cUrMVdvsHILy+awt9vnlu1JL
20alpBTI1UDpjWtB3PZdy26b2Nws5bWRKELKhTMd88UpcQoHUdCcwfDGg1nBuF7lym8ltdvn
tLeRU1O1y5Vj2BVQCzU74L8GZA8y4duPGL/9DeTw3E9A3uQVINe9T6sZ41rZ00K/Fp/9TG+X
e7sshj1fov0zj09lZj38Dht9cuufXRdfDl3b8Zj3KC+/XyyIJFsooGYlXFVzFf3YbVWGu+Nc
gs5v0l1tM8Ej0McTwsob/cuXfF9zPHPDtW6l2T9LKZq5xvGdSD/tpUYvvGWi4f8AHfIOUTzx
2bQWr25Gv9W2h6HsIx6/xpht8OOPe9o3vhu9NYyXhgukNTNaSuBXqKEaTShxbShPNOTFlH9X
vad1NxIanx64tFrvnv8A5E5HbxW87bjuFuvqiVY5ChHQEsBnhtU5UM+2b/DcJb3G3XCzsdKR
SxsshY9BpIzOCYnZc8X5TBAk91tF7FFlSWaBwte/qIypjNpvMq04NwmblN9NCb9Nt9hfVKys
zGnai+mn1xepSco2ePad6msRdrfPCwBmVWVT5gNmManSirafWCrE6Myyn7Tih132Wz79foZL
Hbru5j7zQRSOAR1FVGD74K47+zvLZjDc2ssEjVpG6Mrk16UYVNcM60W4a42vd7W1WW5spoom
FUkmjdFI/wC4gDF+RN/LT8N4Bzfk1q1xs0TizjyM0jmONz/CrMV14K0LbPjvnW5cjudrs7Zf
1NmxS7neUCOOnTU48fAZ4vlYvN6+FPkW0tHu5ZIr1EX+ZDFOzkHIZLIAMZ9Z+sS2PwP8lG1J
FxbRe4K+2ty/p/8A0BT6541eqXDL8HfI0e9pbRW8TMR7v6pbikYoaamNAwOMy9DEXI/hX5K2
2wm3CZYr2KFauscplkC9z6sz+GH1Y85/S3rB1SB5DFnIVjZtJPYtSgONbEe2l3AXMcVtr/VN
Vo1jLK4+hX1DFa1Zr0bZvi7mvI+Ozb6+5JGsAIktblpvdJjHqpq6eVeuCU9TGIN1yWKF47GS
+S3XUpaBplhrmKih04b0x0l2HnXMeNBk2rcZbFZT7k0QoQz0zLK2rFLpxdRfOHyQsqn+sySq
DrClUzNejUX7cSxy8j+RufcolE11cyr7S+3ps1khQDqSfb+78cH2UZW3u9zgvka2kmW6HqVk
Z1lNcyaqa4ftFYPeL3e5ih3aW7kcN6DcvI1O4I1H9+H76skbfgPE+fcl2a9vNm5Au32ljRGt
ZrqWPWevQelR54z9vfBjO7b8k8543dXQs94nhleQi4VnE6My+nURIH6dsbP4WTfO3yfNER/X
pFDCnpjhUknwOjKuCwc+sXdybruFwbi7ae5nLEl31OQT1q2dMZva+2G2/cNzsrsNt8k8Fzqp
G8DPHJXvRlocMrWa7dz33kN1QbteXc2g/wAtLmSRjUfmUMeuH7nmg/qe/q5LTXbGVVQnXKVZ
KUCdc6eGD7IFpyHkcVzqgvrmO7iRlhaKWRSD0AUVrXLOmDRzNNvXIeS7iy/1jcby4CeoC5kk
cA9CQrH01wzqLEFtyPdre3ksra+uoLaZf51vHLIkT1yIZAdLZYf/AAzOpUm1b3vW23Yl225u
LW5INWt3eOSnShKHpjV7ljpcRb/ybkO80t95vbu5WFtcaXUrtGpPgrGnbGJ0x4rVllMegyMa
H0CtanyGLcWOZvcLnWxRWpobvjWtY6LCxu7y/igto5J5ZGCQRxAl2dsgoUdTgqnOvSx8OfN9
ntsqW9hdR7fKn/ybWK4QNItK0aJGq30xid2fhnq+vP7LY9+beU2mztZ/6q8giitVXTL7in7R
Wmk1x056lms/K05Dxzna7/Dt2+W13Jv8gVIYLgtJIynJBGatq/DLBempNakfEnzXa7NLEu03
kdlIPcuLOKdCGoK5xI/qPlSuMTr34VYDato5BLvUW27dbStubze1BaR1SYTA1oCKFSKd8dr3
sa55/MWXIdi5u/IEsN+tb2432QpFHFOJJZ5A2UaLqqT+GWCdKe3xqX+Kvm202e4tl2q+Ta5K
NPZRzKUYAVqYVc1p9MY+3rNjCbFtu9y7vDt+zwSNusrmO3ht6pKrqei5ihHjXDpjr37Z+dzc
hG37va3txvjEQ+xcFpLqQj7VBapK0zFDTFe2L/hq5PjH5wsOP3Fs+238e0SD3LqwidWSi51M
SOSxFK9MZ+2GxidkseSXe7W1lscUsm7SSabeOFikxdfV6ftpSmHxW3PHTuu0c2veTNY7nbXt
3ySWQRSQ3Gt7gkZCoOehe3bGr14zxN+GtvfjH5u2zj1zZPZbhHs5Huz2kMgeInqXMSMxqPpj
jO7vw3fIw2wbTv8AfbtbWWxwXD7m7FbeKBmVyR1ZSpBFKdcdp0OetByVOTLvVynIxdHekIju
2vmJmqg0gEmuQXpi3flz4zarUYggZ1CkkAV6fXC6a117bfIVtxWya/gvYeMXri4tPdYm0aZf
zKK0VvAEZ9sY1rr9NFtOyfOCTbNaWcW7afbN1tKK+lESgrJEWIUZMPw7YPssae9tP7p44JZT
NuqxRr/9m8bue9aCrf8A6IwToV4TetdNcyNe6jeCRjM0gpIZCSWLV7+ONrV3wyDnVxukY4nD
dybnCrSqbFisqoB6myINM6UxTF9ri04zF8i3e/Xd9sQvm3e0jlnvmtmZJ0WtHdz933dv3Yrc
Mbri+2/3ErtEUmzrucG3XI9+MRyRgMzklnAckjV1PTGfsnLf7d/cRfcghtZ23Zd5tUa424tI
PTGSFkZJFKrQ5BhXDrMnqpl4X8xcU/U8iksNx2+TWzXG5QvWUGT7ndlJbSe9csF6VyNBxjav
7i4NngOxjdItpmX3rTTNGVIk9WoLI2oA1rTF9jWH+Sx8ordW0HPDftMis1iL5lZSCfUYmT0f
UVxbojEL7iyIoqFkoSK5jzwrY1uwfJPM+PWrWey7tc2Nr/5PYifSmrpq0tWmC39tOK95lyS4
34bze7lcy7sukx3zSES1QekqwoBTyxq9D7TmLe8+Yvke79oS8ivpWVHiZRIAGWUUb7QPy4Nx
nrrxx8Yi5ttttPyTj0V1HZ7W2ibc7NWAhDAEq7L2p1BFMavcpmczaoN73TdN53K43bdLg3l7
cMrT3MjAs7KAq5ig6DGrdHMnzHFC385GD6Cfu618q+AxmxqPbOKz/wBxk3GrObjw3NtpVf8A
4dGj0+2uVIxL62XLIH8Mcvtl+B3v4eZ873zme8boicturqbcrFWhVLsBZY1Y+pSoVdOeeOvN
1z56Z5k06VJ9C9ad65AVwyx0qz3Xku+bxa2ttum4XF3Ft0Zjs1mcyLHGxB0JX8vpGIXxbcKs
ObqZ9643BeatoIe5vbPUGh9wEKWK5sDTwPnjHXTPPF5m/hd7ryz5i5Xxu/mvb6/v+PWZjXcC
mn2kLGie6Y1V6HvnTxwzuRqz82+MPbbhe2sL/prhreOv84ws0YPih0ldWLfW+XFLHJr1aACx
AZCeopWuE0oKJmgo35mFDQ9qYzYNaPYuf802K1ns9j3m52+3uc54o3Ajct1bQwND4kZ4zMgq
i92f9SSCWldjI2s1q1cy2rM18cb8E8or69u7t9UzyTSBQNUjM/8ALGQALE0HkMMwXr1zKj6V
qtFXpXI5dT54r1PgVvOJce+RJeH79v3HppBscYW13y2SUh3TSH1GHMOgVhq7gfjgtn4HVvPO
1Q2vKt+tNnudojv5Y9rvgBd2ayH2ZaEEF079MEyum/aLvavmL5G2qwj23beQXUFnB6YoNSt7
a/lUGRWIUeGLBig2rZ9/5Zvtttllr3DdLyQohkapYklnJY9gKknww243zzq95/8AFXLuCvYn
dkge2vtRtru1f3YtSD1oSQp1afLBKzb7jT8Z5B81XPANw3vZN9uW2DYqQ3FokyNNDEFFWUMr
OURTU55D6YJ8s93JrK/HW4/IUnIyOIT3Lb1dqw0QShGkCguzP7h0MB/uxq2Rcal5fyv5Je6u
tm5Tf3rSLIj3lldmg92L7CU0gekdCOuGUy78Ch+Yvky02+Oxh5Fex2qxiNVLKdKMKBQSpIy6
Z4zL63/5ZxuVb6dpn2Y7hcf067k96a29wmJ5QcnlDVq3njWDpE/It9m2Jdia8mk2r3/1UlgX
Jh94DSsmjsfpikxm+qvTL7opSpP4ZZdfHGfvFJix2Ld972Tcf1+2XclpfRNqSaFqSK1KClPE
djkRjez8tOJbu5aeWctrlkkeSRmzqzGrE0p1OeC37MZl1urb5s+T4rZLeHkd8sMaCNCHQlQo
ooBZWP7cZjdmqrj/AD7mex3t1ebXu1zZXF+fcvJoiv8AMYmtZAQVLajjXfX5q54kjVfIr/K4
2jY995buU1/tW4FrnaWDKRE9NJD6VUo5X1LTt9DjHPX2+PhnM6dm+bt81SfG0XIL3fZ7ri16
5sLiITB5V1VQLONIbSWFK1r49cPPXq6uPJ1SUoymkgHpKkmvStDXxxXppJYJdQye/b+qWOhQ
gn0spqMx9pGHyxmTa1u5fJ/ybuG3S7Zf8hu57KZfbuYneiumXorSvbrWuLjym8s9d7ru91tk
O0T3lxNYQuZ4rSR2aFGbqQhyUnFe/U4be1nlVAg1FjpGnMVOQoBnXGdS35FwvkfHrsbdve3y
bfd6Fmj1j0SI3RkYVDUPXw741ql9dmxc35tx5JbbZt1utuhcBnS1ZghalAxrVT4Vxn4P9bXL
uHJ+Ubk1nPd3l1cy2JJsppJCzW5La2aJh9nqFT54Z0xxq4uvlz5Tlie3uuQbh7MysssRkprU
rQ9RmPHG5YZNZa+3zfb62trK5ubiWytFYWls7M0SA5kRq32j6YysDt257ht1zFd2k8lpdQkN
FNESsqMMwwpmCK4PGuauOQfJHN96shZ7xv19e2lVk/TzynT7i/a1Fp443z1jItm+S+abJt7W
e0b1eWEQ9Sw28rKhJ76TkPwxi/LpepYob3dr/cbuS8vbiS6vpGLTzzsXd2PUu2Zz746Tpicu
6/5fyfcNtttsvdyuZ7GzYNb2sspkhiKigKK3QgZYLcUiy2f5M5vtG3Cy27fr2ytFYskMUzKo
Y5lQoOX4YzFetVack3qI3hS9mQbiHXcGSVv54kzcSivrqcatSsWUt1FD6cwTUZ4zWua9t2v+
5bf7LZrbbZNh269nsokiW+lDRO6x5I1I+jDywZDdryPk/Ib7kW73u87kwkvL+Uy3DgBAScgF
UZUCgAY3etc58eqldKqa0+2hXGUl7jSdOQJoOv1+mJpGPcQ55rWpJ8MR0/tdK/bq6A4gZ6RU
0jWTkRXOmBCL0j0kVaoJz6Z9Pww4CLaMnzDU6YkD7RVRVq/l6H8DgKSpBJGZp/xTEMCTpFaa
iRUsMzniFS+vJ6AsAKjsfx8cS08lRVs6A1A8PDCvTDToC1HuU1UBoKE5UxHCgorer0kGpJ/5
4DA+24agcmMGoYYtGCEgBoPTWoqOvXEUjSKH9HiAfLCDrQK2s11E1PgcCOukKyg+lcj5g+eE
adNBD+3+UZkdBgpggFBOkAuo6HvgVqL1sT1AP5MWs4k9YXTqqF6GudPDEcEGHt50BX7fEsO2
XbA0d5NBrpINADTrU4Rp1kqpDU9wGpp3JxHAsq6Q7KAB0YChA8sIExopaoQ/l8cROG1AazRR
1K5muAEpcGquCooE1CopiWGCadVCAG6GnfEiVXQio9KmhPT8cSJG1Vr6STTwy8xiOCE38wqF
GjIau9e+WFaFqsdNaUNSO2ChIyZAkZ0q1f8ADEkCsuqg6rmSBXLwxD10iQfcCAoy6eI8MBRC
WQhQFJfoQe2IZT60DEMoaOmWkd+uJoY9ppAxqB3WtDiQCmlhppQHv0I8BTEcSutULipCnoRW
mXXEOvACR3oSKafUD/pTEzLRVBq+qo7rlTAdPMVzkJLE0AoAQPIeeLECjPI1MqUrTzxE4yyD
MRqAIJ6U6Yjh1GpqD1LUgVJrXv1wQfU+ktVUJXOhBywqgVHZQqVoMyOmFYVaMfRRjSoBrSn0
xKQROjT6qeY6k4yaZhGSoJJYnUCBQA9umJeJJ39OmgL9B4V8MSDpqNRzPhUUBxGG1Fw1T0PY
508sS+QyMyKGRTToWYU6n/LCzqUyDSQWzqKGnXxxY0SlddEdgTTIjrT6YhpmoT6XOYIGdafX
Fh0Ijkdxp+7u9cv3YosE2lSVNC3Wp7U7YgJhqj1swB60Arl54jhw6aRX8fx70wYND7bFfUCV
6g+Phni04dEdBqJqDnXI5Dtg03Ak6q6QNAPfI1wsGXUEJFQCwqFNQPPEoJDoUgnWCSF0jIV7
YmwqlH9JJDflHYDxxLCBKly46GnXt4YkSSNWmk5ZAkUI/wCWIXRALIVyNRStDQD9uMmUbg5q
rEGoIqcI1EWLOV0VB6nrQ4tGCjHo9QLU7nrliRlDMGIrToAT0/48cRlM8JdQngftOef1xGlE
hjjKOCWzOvKhxaLaaUoEVR6lNa9unUnDBaz27tGZMtRzrQ9Bljca1VZM2WR7eeJmn0DxwJaT
C4MQJAVaUyyONGR6f8K7JvW47/aJt9pJee0VkmdDUKlerVoAB+/Dtcu+r+H0L8vcT5HuGyRP
Z2plWAhp2JRSFFM8z2OOXNu+uvM9T/FOwcqs+K3EF7b3Fu0p1wxyN+QqR6amgHhjfV09z3xe
8Q2nlNnsV7AkL2c7SyGJHKg+ro3fBQLb7Dk1vsLJuckj7mzOIJ5pAxDEkpQntiGSuu2m2i2k
tId00TcleNvZcV9R6+nP0+eCxa80u9o5zefJNvPdI96kLKyiIeiGMflbt+3Dx8NSSrf5l47v
15YQzWtlJLBAhMjenTFnmWJPbywURDxzbfkabiqu/OLOzszEdMBVHkjjA6GVtLJ/ljXTN+fH
ge6hBudwhvP1jo2c61Ktnmx7YMb+Wu+ITaJziwkmEYVW9LvTJiOq16HwPbHScywXx9KbluN7
Bdqttsu4bhVa+4soSMeR1soxjGXNdbtvMexX9xFB/T5o1ZhCGWZkbwrSmrEnnXBPlTlG8cih
2i5SJwrtquPU01FFc89NaeGNfhv8LX5M+ROScW3y2FvIkkUkVGtJh/LYsK19NCCMZ5rC72ze
d93bhJv7BPa3CdDIsViB6XfP7R+b641ZhXmyNu8e1bfJuYlG4KgWZ5vuL98ZqYT5I+UOR8W5
FGts0VxatGNVrJkAxNMyvqywax6z3x7v/POS8m3C+2CXbttlnUNdrKutCoOQQD1nzxrGoz3z
JBya23pBv+7W1/dOn8tbUaVRegQp1XGZpledBWyWoCjue318cIepfBG33knJhdJbyS2qRSK8
6oxQN+UM3Tr54d8S4+adq3v+vw30NlcGEKvszIhILVroDAHPywQSPQNHIbz4zaG4juZL6S2I
9mRSJT3UHKvbFflp2GTlVtwiD+kwst6kKaFeMe4B3Gg98VFdN9LyFbbbJrWD3dwYoLkTemiE
Vlz7U8sUQ7iOztG3G62VUuOSSoGuEY5kgenWprQeFMWanm3xdFv9xzrcbvdLaVpFVkmcJpRJ
O2dKY1+GpmMz827Xua8mkuXtZBaaQYpCh9tzTMBqZHyxnWI8+2+GQXMOuJqGRQx0k6a+dMM6
i/L6q3W95Tb8Ws22K3D3WiFWGgEqlBqIByqMRq0WFXubW6mjDXaxkJM6j3F1D1hSRlgDzjf+
SfMKb3uFvtO1te7PnGPdgFApFKq1V1eeC1Ob4Mk31Ny3ZLhXjic65o0jpCJvAMB2qaAHDrWM
T8wbXvEvNJSLCd/1LfyJNBCv+UKtBQnAJFNJ8Zcv2tIdw3namhsPcQSGTSwFT0dQ3pBwy+qR
71zHdeRbJxyzHFYCZUEaJbwwe8oTSMlRR+/Daa8x5Bvvypf71srbts0VteiVf6cZIhHreuXu
Fiwp/txmVjJrQ/LL/J0/DmXe4dttdvLL+p/Ss7vWlACZPH/bhjST+3/feQX1pd2128r7Zbop
s1KaI1NaOiMAO3auNWpHs+0c2vPkLe49svpdm2oymS+nEfqlHRFjVwQW659sC5rp5Hzmw4jt
11tvH9k3C/u5A36ncLpJmg1NkzM5zY/QAYpfWOr+GU+Ddlv963+73a8kD2lmS62upwwlc16d
CvlhrTbjc+Zck5jf2+w7i+2cfsKQX92URnLDtErAnUex7YKZFbyb5G2ziW1Xuz7BtG4bhckO
ku43QlMJd8ndnb7voKDEHN8Vbr8kjiDjZ+O2N3t7tKyXFxP7DyyE+r0UOtQelafXBTVp8K2L
xwb/ALjfWcUO+rcMkrKgAjpU+2nYLXww41+Gx41vW7b9xWa43ayFtLJJJDp0soeNW0hqHPPE
y4t+5Bumyb3x/Z9rgiXbr1xFcosJPtIOpGkgD/6sXgl9eSf3NbZt1jvW3XNpBHDc30Lm4kQa
dWhqAmn+OJPPfjmGCfme1LJErr70eqNgGQnUOv1xWtPqjkm/7ns+/bNtm3bbGLHcJdF1d+22
lFrQj0jSK+eLIy6o9h2Tbt13XdrPboIL94Qz3IjzJUE9v30zOD6xfDw/lHyhzDf+PbhZ7jw+
G+toyyruXsyqkJByZa1zFK9RiyJsf7fbqS/4LuFveQxtBC7CANEoJV0LNqIHqAPTFJ6bMj5p
5HGYt6vQ6GJEmfTGwppNaEfTGqzz8Ln4sigl53s6XEcVwhuoz7UmlkajD0lTXtnjPVa5fSfy
h8mR8Cv9uhj2qCayu2BuyAFkK1IIRRQVy6nD9R+XT8dXnC9/G6cu2fa125pSInuJo11gotWK
qKhQe9OuLC6Nx3rgu8WlutzLFutxFdR/ppf0jgLLqGlSxXSPPPPBiaa4MWu6H6eEtZxh4SYx
kSD0yw5E5bjadms4rrkEO3WybqbTU1x7QqdKl9JpnmetM8WJkrLcNv5T8fJynfdotmvLCY3C
RpGAVEDigq9TSnUdMGRdJNx+Ntn3jmvHuW28FvDaxxe5LaiFSsjFapqp6a+qlaYWZMq02Cw4
k+48h3q2262sZoZRaz3LRIQBAtWagyoS1cuuDFGN+Wb/AONN14JeTXE1vebnArGwurW3dT7o
6JqVWFPEFsM503xn+N7nvy/DgiX4/S/tBaSLb31YVLjMmZ4iPd9PXUOtMZw183XjHU4YVjB6
dCCT08csbgtWnEeR3/Gt/tN7sZEN5ZktAkg1LVgRRhh+pj6X+OOa3++WV58h8k3k2cFkGH9A
tpCLcIiULkMaO7/lAwT9C+TXmFhzra+QfPljydY/0FjPdRCPWR6UjXRqfTlWTqcb+p/hPm1q
PnjnU/HvljZd42eWC4u7OxKjXSVUDswZSoORIPljP18c9s6/w1Xx7zPcNw2u7+R+T760FrDr
QcdtTSFFUUDaGOp5G6gDBPfhr49eYca5nt2/fPsHIok/QWV1dq4WSgogXQC1O79Tg6i/lc1o
Pm/nN1xT5ksd92gQ3F5b2KJSX+ZHoYurL6Tkc8ak8a5vy2Px5zTcLjZtw+R+S73JHYqzp/65
bGsEQFFBVGOpnY5imDZVbkeV8N5XtO8fPtvyGONbCyvbt3iRuiqy6fURkGZhU+eLrBzMaH5k
51NxP5rt+Q7T7N1dW9ikJRx7kVG1B0ZkOpGNcGLie1suAc4u5divfkvlG/Spt5LqOOQEGC3G
rQoCk6nYnpTF8qzHlXx9yWx3f+4CLfjB/T7TcLqSSCGXSFQOp0VZctZxYOP0uflrnt7xD53m
5BtSQXUsdpBBSQ1jIZNMikqev45Y6TjZGef/AGrecE5zOnG7z5L5Pv0psZzII+PWzaobca9A
VY2Opm6U8K4zefcdOvHl3xZv1luHzuN9CfoLS/urh7aByqrCs4akZ0+mrV/bg7kh5mKj+5aa
KX5b3JoWR1NvaqWQg5rEKgkdwcP4jjz/AO1rzG1UtLp7kZEdv24zXSPtXjMHFLv4Z4/a8peG
LbJoIF1TkIglViyZnJTVcDXXy2Vw9s277EbfQYGjnMDR00aPaWmmnalKYQ8k+Sec8VsZd5ht
+e7vt+9W5kC2EKl4kmUEiIARigPT78Tl37+Xyjc3M9zcPPca5LiZi8ssh1O7MaksT3J7406S
ZHt/9pKBedX5YHW1i5DHP/7RQRjPXyOJcr23gO2/Hdvzvf7nYNwnud/n1/1W3lJ0JSUaiBoX
8+XU4qefILdN62PaeA7Xc7xu93sds8zKtzYVDFi0h0H0Semg8MSZb4p5Eu7/AChfx2fIbvkO
02+3M9rLfKUeJ3lTWgBC1yAq1M8GrmNpHum1WG175yK23S/5DZWvvRXu1yMsqRsmboqMqaQo
61qNOGpX3O98e274347c7rvN3sVnPHH7M9kxDHUhYRMQkmQXy7YIq+X/AJd5Ku878Le25Ldc
l2e19Vjc3dUMbOPXHpKrU0yLUzpjUYnvywcNWZVVQXrUA98NMj6y3TcONfGvx9xKSx47ZbiN
1WNJ2ulX3PdliEjuZCrklmJFMYklm1rrr3Gki+KuALzS8kbZbf8AS7ztBe8smQNEjrMtWiU/
+MnXnppgk9Y65eZfBvC9j3SDm0G5bXFc2QVobWSRCTGsTPpCSdQ2QNVNcNk08bePW8+E+bLu
HxjuM77XbQnY4jFIkKiKO5EcGqsi0PqYCjda4pPTv+vr5K5fu1juu+3G4bdtkO0Wt1J7qbZb
ktHDUAMBUDq3qpSgrljpZIOJkVkH8qVXGZJC55jFZrT6t438kXc9ttO1c7u7/hm7wRw/o/0q
BYNwgk0rEWXRLpYFaEA98c9itUP9ykTR/I3Fbn+mxTThVdwpH/zPbmUrE4A1U/LnXrht/wBX
Hr+e/wBJWZ/uIvdnnl2v2eHzca3hoy1280cUUckRppUGIlHKt364ucdP/k8RUEyALQk/dnQf
jjZ8fT39p15+l2TlV0y+4LcQylV+4rGkhp9TTHK/LrtvL0S34ztO38d5num0FTsfJrJr6BUK
6FeSB1lUKuSg6q0xczK42eKDkO8cS4FdcS4rb8VsLq03a2SNZZkQunrjjYMWRy9fcJzOK5Dv
xHiv9xvBeOcR5tFbbJC1vbblai6/TVqkT6yhWKvRcq07Y6SeMff/AGx5ZYLAb+FJgXhLgOif
eRXov1xVqSvtDY5+IXnH7S24Zs+zXLxQrFNsW5Bbe+T0iqNrR2YnrVhn4455HSsxsXCuGXfH
/kOObjDbYLYCVbC90vNb3CW7PWCZakRVoVoca5nrhv2lZr4x4XsW5fCe+3O5bZG9z+pje2uJ
E0uGTSqskn8NWPTGbNrfzGz+SeXcX+P932zi1rw6w3O2msleNCqCXRrMXtpWN2kb09eueHJh
/Kb4k3XiFnxPme6bdtk9rsNvdmeXaZ9LyRlIEaSKjUGkN0Ddsjg5nq/DK81j4p8g/DG9cyTY
oNm3HZZmFlJahdbKroCJNCqGDiTp2PQ41JJWP6bOdj5qejFkah/KaZHrjTq91/tf3biMXJIN
svtp9/kUsjvtm7g5woIjriIqMqA507452etb4q/7kd24lecxuLTZba7tt7srl4t3eR//AIkz
FVYPGhZtL+JAGWN5449S7423xpcruv8Abnym2ntY7aTbrS6ijlth7UroIjKqysubUORr1GM8
/LXc/wBfXj3xXK0PPePvGTDMt9AymM0orOAwqOuRzwdtcdbHr3ydsdhvH9y+yWd/EJbCeG1j
u4sx7ldZCsw7UHbG9/1Zz17Bu1j8e3E1xsW7Lsj22j25rF1iS5VNNQKAhhQdx2xn6rqy/L4r
5/tOybRzPeLDYZRc7NBORYTa/cDREA/eMmCmq18sdLWeVv8AC2xbVv3yHstjuEP6qxlm/wDk
W5qFYKpYaiM6VXp+3GenSPpG33/jXJ/kDd/jLdONWD2FsJUNwoVXKxokikKqgofX1VsYyCzf
ly+3wn43+N33x9mXcJNpvru1s5TFE07e7O6j3HYDIhQGPYdManLMuTx4r8u8t+OeXbBY7/td
kNo5kJBDuW3wIfZltippLrAVCUIWmWrMg9sa45NnrypF1ygEEK4rpBoD+OCtR9T7Be7PwD4U
2Tklrslpf3O5SrHepOKlmld1VtZV29ISlAM8Yzfld9Y5P7ibmCb4u4fPabeduhmujIlgwoIS
0DnQQR0qT2xrieeM9U3M7obt/bLt17JBHbTxXNojrb+iN2jm9rU6r9xI61754OD/AEq6i37j
PCvh3jXJZdigu9zmgS0tyY4vUZGLUmcitDprUZ1xczWur6r+Gz/F/wAg/IWy7zZbQltu6W87
b1tugC3WSHT7E4oAjnV5Z9xli/DOZdj0bkUPxzvm23+z7/dbVPaQq5m9pVSe3aM0EupS2hk8
fH9mKK+sPech2/4z+NeN32z7Pa3/APUgwmklTN3C6hOWVSxLj8PDGvqfl5xd/J1pbfIW1cn2
/i0e0blNFp3ezuABbXccxAiuIF0qyMQTV+/7cY2Mz/2yPQ/7oOWW1rZ7bx652uC4W/ia6t76
UsJbeWNwD7VBQ1U0YHtjfM8Nk11b1ye0+NeBcUbathtLxN2t4/1ELpQCUxIzSlgD95bOv4YM
a6+Wi2nh3Fo/kdNxh26BP/YdjM+4WmhWg1iSMF0Qig1h6NTrTAPh5dv/AMyfH/L7HeePcq2D
9JYxLKNjv4EDzJNHqRC4QDST/ty8ca64xR3fJ/x6/JbPgm78W2xLpri1it9xv7WipREjMYmP
2rRtQ1HPtjP4F37Mr/dRsW27bzLa5bS2S3ludrQ3giUD3XjkMak07hRSvhjfPPmrz7PDDQhi
Tr6GuZFMCpL6lVCaqe3n44hgTojOoD1H/GmX1rghMXb0gKKUr16dsKukCiSEadRbw7fhiYSI
FAfIlTXwGYwKUGoF9SLSn7wMONSpPW0gatKAVP4Z4mtAFU9Ktn26V7dcLOC0MH9RAB6U8MB/
yQRlbWoIL5A+X1xatSui6QNRy6he/wBTia1Eyo7Z1JAJA8MQOtEI1as/20OGDUbgamAFAx6H
r+GBDC6aOc2A+3M4kR06gymtKD8cSOvvK51EZ55+HliIgxdgSCPGnU/h2wMnXUoHt5Kppn0+
uIWJMjTMg1FSPEeOJoANDTowJzIr+NeuFHlldqFupAqD1OBaVFpoIYOclr0p/wA8TQ1rpFUq
y5GnT8MSwwSIscirL2GQP1xM0oipmpq75Ke/ifphZEHjeMutVCmjr38+uA6KFiM60UD7Rka+
eJSiQF2VkBao9Wf76YsMJCVbLMVyJ606UxKkyhWDtT1HxNa96jBiMKggKAUFNNBXEYeUjVpA
DZ5n6YlSUjWPcarUJVfP/XAsOPc+78gPqy6eWLWBPKKaaAP4AdcMrpDVUk9ssxhgpIshAUEC
mZr+3EDk0YHSQlD6ssj54EPUUQMANGQp9cSPEoaSrLpA6sM8BCEJTUmfUV6Z18MKpoE/KDRh
498VYiVqDRrJ1Zgr4YG8LSFHpFK9SfDFqERVCCKGlQwzFPocBRqSFoG6GinwxA6qBIdSg1Op
an9nTCi9sM7MKEdB5HEtOoiKkEGo6MMwcRItUBUBYn7iDTLELD+3IoCg0J6jp1/yxLAqoZ/U
aduuRxLDn0ua10kClOlehxISMFjNB9adfHEdAxfSfTpOTEePngSUAulWOQ9QNKAk4EYVVGoS
NRAPj+3DqwMnoBKSeeWRGNaB6SEOugdl1AnsD3yxmtYAKusF1FRSlep8PLEIGjI4cCrsamte
laEU/wAMSowlwpZZQBpJGoGpP0OK1Q3pjag+w4GgLQSVd+oqD5fQeGJn7OgdQSNYWmfmfLCQ
1QOGKggn7QKZ+JOLFqRyxVSKkEmgORAxUImVHlqG0nw8T54FCFG6mlDSi50A74Yidol9Rr9R
1I88R2DjZljJyGo9PAeFMCgWQkOU7ZmvX6eeLWsCTMNC6fV3OZA+uDQeqF6NqNBQSEUH0GIf
k5XQioFqo+0eB+uDWqaNToOoAEZAjpjTMLQRQoC1eq4msFEoNTQ5Cp8aA4sV8NMVUrIBUtkP
GnXviZwgihhJUGU5BiSFK9lp44EQLPUyCjr16dvpiwnXUxOurVzAX/DFgoSxqVUFsqEgHr4V
74lNJ4/zAUIFKV6+PXEbDLFmEDN6uq/mrgtXMkSmR0A9JNMq5UGBIn0nU7eIFBlhtFMQpJUA
1pRiaUr5eGDW5yzu8oiuKMddenamOvNp6VYND5YXItf+0eOJO1Y58l1HQRWhwypuvj6XfxdJ
DtD3BvZfTHDbO0bMTlkykHGr8Cc+t7yK0+TbG0jO+i/ghlPoSe4aRTTsyhsc9XHP+zv4nF8m
7vt7HZ9znjtIifcV7po1FOoQVzNB0xWu39ZJfHRtXH/ky/FxuFpeyOsJPu3DXL6iUFaD1Vzx
nu2RxtZfdt/5Gz6Nx3G6meI0AeaSRVbtQk5NinQ+HHNvW6a1nkvJ2uEoFd5HLIvZQSa4J0cW
WzbvzaTcWi2ie/n3GahLQO+thXIVQ9MdPsfhacql+VLW3ReTT7hHDIaxxXTkg07Chwc+qSMY
9zJKfvJY1Dq321Gfq/54z3MV59OTWhcUFMvy1B7UxqHadbhYSjkn0sCultB1A5eoZ4Z4fK0h
+Q+cxwSW39bv1twgovvMSoPRR/piovOKybf9+aBojf3Iif8A+y91xqPWpz8cYvR8R2l5e2je
/bSGC4VqmVGowbxBHfDrOhv9w3PcJBd3sslxOMtczM7mmVc8a2CyrHY+W8j2IOm17hc2YfMr
C5CntmDUHD92OZ66rvn/AC67vFmud3upJUWqMZGBUDoy0IAwOlxT3+9Xu4Tfqb2Z7mR66WYl
2anXr3xkyeAt7y4hGuF2hAOr0NStPEjF93PqxFc3TSSh5WaRzm2o6iaj+I9cb1qUyBSqgEip
yr2wSJrOJcs53Y//AAeO3FzU1b9LAC6D+I6aMuHfGeecdW+c4+R1nWPer28hmtz7kcMn8ojw
bSoXPwxmda3MaC25T813WyndBdXUW2qKvNSP7fHVp1fiMa+AiTf/AJps9rTdBeXoszWRXlKu
GBPiwpTGbVWan+SOczXhvm3e5S5WojlSTovcAfbTyC4ta+rmt+ccrt7l7u13SVLm6r+omU1d
mPXPscIkW3HPkH5Jtw8Oz3l3cKWLPAkQnoxzLGqsc8H2GOs/K3PbPdC28ubm4jSsdjuEA0Bj
+YR6U0keONzEuR/cDyUqtdrsKgEEe21Kj/6sFxYW6/3DckvLRIbC1i2udKM9wh1qR4BXFMXg
Yy4+TebzbrHukm63H6uIEJJUBFDdVWOmkA/TFKsdu5/LvyBfWptJd4lEMikSe2qRuyEU+5VB
ocJsdnBub/J1BtfGHe4t4l1e0I0lp4ka+mCqq7mHN/kR9zhTfLueC7tJPchiISP2nU5MqoOv
1xc1nXHvfyXzXfLAWG67nJcWqkH20ATXT+PSBqw2NY7dq+Zue7PaJZ226E28a0iSaOOUqPAM
41YvGdUG/wDNeS71erd7puUt3cL/APi5aiLH/wBipTScWqY5b3km+7pEFvdwuruFPSI5pndA
e40kkYN1Rq+OfMPyBtNim17Xch4IlpBE1uJiqjrTStaDFchuuyL5w+UBuWr9UZrhhoFutuvt
5Z5Rhev78Updm7fL/wAvvZtHcK9nFKpVn/Re3VWyqHcZYPtAy3EeV/IG0rPNsAuF/V5O8cBl
RyKnqVK1w2xYn458qc54zPdNHO3v38jS3KXMOsM4ObUoNOC9aOVxvPzl8p3O2yxywrDbTKUd
1syoKsOmt6itME69Ft3GH2vn3M9ttZLWw3e7s7aQsWhicqCzdaL2/DHSqR2cQ+SeTcUu5Ztt
vCxuK+9HKTKhJNftfGfs3raS/Mny7ve33UlnGP6cqlJZoLU5ZZ0kzz/Zi+A5LP8AuI+Q7GzW
0eS2uJEOgTTw+sAZaTQrU/XFsFc20fNT/wBVvNx5Ps1vyKeYBYWuAP5Cgk6IkYNGqdzlXGrD
Gih+fOLQXCzx8Ls4SjA+5HoRqDujCMCuMYan5L/dBevJGeOWawRaP5v6xTKdfloK5fjhErGW
fzr8h2+8S7qt+s73ICy2ksYNvpHRVQU0keIzxrY1huU/P3OuQ7bLtxnhsLSWizrappdx3XWx
YhfHFMZq64l84fJsezCz2zaYL61sIwokhtnJjRcgZShCdB1pjPVLitPnCNt4ur7lXF7HeryS
ixsEjiaIL+X1h9XmTni+VLi2PzxwuHTPZ8Eso7qMq0U6e2mhx0NVjVv2HFkU15v8g/I/I+Z3
0NzuzRlLUEWscCaVjRjWhJzY5dTjX2zxmz3XXwP5a5TxGSVNsnWS3lH82zuV1RMezAArpI8R
+ODG/lo9+/uJ59uEUcNubSwVZFkUW8VS7IagNrLZfTFKvqjH9xnyM/vu81rWRfbcmBdNAKZC
tcV/wLK6Lf8Aua+QIJ4BM1i6xroMUkRCyLTJmKsGB/7cU9gvjg5b/cBzbf8AbJNs/kbbZS+m
4/RrSSRev3ktpU9wM8EqvKLjv9wPM9i4z/6/Zi3mgRGjtbmdHM0KsSSKqwDUrlXCxbfhx8H+
auV8TN0tq8d5a3bmSazuwWT3GNC4YHUK41Y3xPHXzn545byXam2dY7XbdtLBpktUIaWn5CWL
aRXwwTF9dUFn8w8/2/j39Ctd2ePbtBj9kKhZImBGlXILAHPB8FhppCGA6EnVqOfXGojIrHV6
9IBOk+Bp1w6rG+438sck2Xh15xW0tbWTarkyavejLS65ANciGukE9q9MYnWXRedjK8X33c9h
3i23m1obqzl96LWAyAg1zBypjX208c4uee/IG9c43eLc9ztLe1niT2FW0UhG0mpZy1SWwVdc
xY7L8t8i2rgdzxOK1s7ja7z3KTXKH3YzIfWVzFfItmMZ5sWMtxvku4cc3i03i1CteWbiaNpF
1Rlu1VPjjd9Vrv8AkPnu78x33+s39pbWciwpEYrZSiqBnrbUSSWr3wcVj86tdu+XN/suBXPD
0sbWTbbgnTdyofdhLkMXRgdJrTKowbJR1/tMrPcY5Lf8e5BZ7xaLG9zYyCVFddSFhUHUDTIg
4K6Wu3nvOrrmPI/63eW1tZ3EqLE0NsGCUStGNaklq5k4mJZqysvlrerL49ueER21jPZzM7W9
zKhaeLW+ttP5SS3Q9Rh5snrXTPcS5ZufGN7tN8sfbkubM6kWcVRmpQhgfLwxDa6+f8xveXch
l369tbezeWNFa2tQwiGgfeNWepupwzu/C5nvq6h+Wd1b44HCZrCzms5D/IvJFZblAX9w6dOR
NejHPBz1latl+Wb4lzC74pvlpvVjJG1zZMXSO4XXGykFSpHn4jMYL6zKLnvL7/lvJZ98uLeG
0lutA/T2ikQ1RQpIr1LdSa541usXmy6oUZlYvmKihP8Argaj0O9+Yd53PgFrwe4hgO32TRvF
eqGWVkjBIjbPTmWzPXFJhs1rNo/uT5PaW+zQy7fZSHaU9kSnWhkiKiMKSCQp0qMxi8xu4790
/uYguIZv1HDdtuGuNSt7tG1VyYsdNT/jjPjFjwS9mEt3JOIRCsjs6QIToVWYsEStfSK0GN/L
Psar42+R914Lvh3TbIop5JIzDJFchvbZGOpgCpBBBAzxWNTpoOJ/NW+bBzLceR21tbSy7q8h
vrRg+kh21gKR6l0Ng6q45yNVtP8AcpuNrsi7duew2G4Q27lkExKhQzFhVWD106qDKuCetYr/
AP8AOKurfksO/bNxvbbGRImtbkRFgLiN2BUPpC6SpWoy+uH6xi274p+L/M3IOM8j3jc7e2gu
od5Z3vdqlZjAzOxNV0lqEVI+mLZWszxp9o/uTu7Pj9vtl7x2y3G3s/TAZjpCrU6V0FXzUGgI
7YrI11MeefJnyBZc0vLW6tuP2WytCjLJJZ1Dy1I0+56UX09jSuHm+Of026x6SRpISBVqDIjq
R5DFfW3snFP7jt52vj9vs+8bTa75DYqEtpLvJ0AFFUHS6uFGQNK4xIqgvf7leZT8rt94tkgt
be2jMCbaqmSB4moXSRzpc1IBHh2xuyfhi1f3H9026QwSWthxzb7SGVZBJF7jhWZxTUBGFp59
a4PG88YH43+Yt84TczwWiRXlhdit5t9wD7LNSivUCq5enL8Rgua3xxsxjuX73abxv97u1tts
W1w3TVWytyTFGfzaa5+ps8LlJnyqlZhKrsuliRU+WFPeOLf3DbydptNpvePW3J7rb4//AI87
hnnWJRToEkqVp92XnjEn7ayVhPkP5Z5DzHkNvul1Gtmu1nTt1tGGDQ6WVw3uN6nbUvfHSYzx
Z9nP8h/LnLec21paby8ckW3hjbvHEsbu7gB2Yjx09sH1HftYy3trmdmeKJpDGmuXQpKhV6li
K0A71w2xRvfjX5c3jhlru9vYWsF5DukRidJyVAdVKqUK5/mPkcY6nut3cxc8V+ceRbFxK84q
Yobzbry3mjtBKWV7b3gdRR1+5dTEhT3wzKpz41HF/wC4bkEu2Wthd8Zh5PuVggSO80kziMUo
WVY5MxT7hSv1xdSLHl/ypz7fuZciG4b1CttNaRtbxWqIy+1Hq1CNtYDlge5xrZmRiWfP5Y63
uVimik0GsRqKHMN1wYJ1j3PY/wC5u/t9vhj3fj1ju97AqRpusrBJ3VRRfcIRiXFB6h1xixrr
quGx/uO5jHy293yeO3m2++QQS7S6Vh9pK+2pYesuuo+s9Rkcb/B55118k/ua3XdOO3uyW+y2
dhZ3sBgX2JHrA5NdaelV+gp1xcyUZXXtv90e6x2tqm67DZbnuNsoWPc5m9uZh1VvSraWr/CR
44Pq19Wcb5/5VJb8otbu0spU5KGDgIYzDWP2wV0EayEoPVmaVON2QXn8M7tHyjvu1fHW7cGN
rDNte6kMZyxWSGpUtp/KdWgVrjH9Mrzznr3m/DCSsqNRfu6mmWXbC7/DRcH5funGOR2O/WCx
G6tWqolBKsKUKNSmTAkZHFY1KXMOQy8h5Luu/XFssMm5ztcyQKSVjYqF0qTnlpwzrwx6/wAc
/uevdp49Bt0vHLC4EcIguJY2MKyaECAyRhWGY654585o6Y/iPzAeK8s3TfbLY9v/AEm4uGO1
ipigzrW2cgsnqOYpQ+WWGz1jmYu/kD59u+Tpt9xbbPBtm9bfcJcWG7QyM9xGVOcdCo9Ld6kj
DMK7P9z+4+xHcT8ZsJt2WMar/wBUbmRRpMmoK1B/t1eXTBzV1PHiG/3u5bvvF7utwqPc3b+9
cexGIotTKCdEaZCvljc6hnJ+M7zuWxbra7ptkrQXtk4aGdTmCc6muRHkcXWVqWR7bP8A3Sbo
kCzHjlim5+3STcqsrCYjSH+3/wDVJzxiYz158Mnv3yH8hcl+OLzY7rZ3u9tmvG3KXfIInUKB
J7jj0D2wNZNT2GGdyUfh5LMZNSs1V9VCta0ON2ndOJdLBqatJyJ+3LpkO+DDHr3x58/bzxbY
12S7srXedqiJkt7a6yMDE10o9Gyqa0Iyxysso61F8gfOe9czh2yC5sra2O1X3662aNjIxoKL
CweqsB3Y9cdJcFm/LW3X9011c7TJtl1xXbp4pE0TQNIxgNRmWTT4+GM841lee8n+Vt23vglj
xGW0iisNvuBPbSJq98KurTHmSpRddPGlMa+BVDwvmG78W3yPetpl9m6jACMfUpX8ysp+5WHU
YzaeZj1bc/7qt7ksZ4bLj232d7cRtG94NT6WcUZvbI9XjRjTxw84FZwv+4rk/Hdki2e6sLTe
raJibL9T6GhqSSgKAgqK5ZZfTFRLWP8Akb5T3bnG6Wl1d2tvYx2KNFZ29ouaq5Bk/msNZGoV
Vei9sOzBPKt+R/Ne/ck4LFxvf7Oz3CaB1/S7u6st1EqUAZaGhZlGlj3BzxfzuVd8zr5aLgHz
9zbbNmj2GPaYuRxWURe1SSN2mhtox6hWMNqRB3IqBjPUbtPYfPvyBd8qfkdntsD2lhZyRPts
ETvbxWuTuzuPWKMAxPQdOmLZik/LxyW5aaaWdKlHZnIqPzknLyzyx08vyNe8/Cvyd8e8d4w9
tuu4blYbkzst0Ii01tIpzRlSjKrUyJpjneW+rKxPzt8hbRzfk1nPtJmNttdp+jFxMvt++2sy
FghzGRGOksnLOevLViZiErpcVLIuYoemWMaKUiCgR1PudMsunXLrlgZ3fD+0zBU8qCgr+GLW
8AEbRqb00zStCKYYglFCqagEH1jvn54mKOMOxYKKgZ5+GDRITLJ6mYEaR3yy6nC1IFy+ZAPg
4rTr0w6bHTYbZdXU8cFmplmlISGBBUyO3QKPM4LRILcNs3Cyuri03G2ktL6ycxXVrKtHjlAB
ow+mKUz2OZcjVW9AFSw/wocLNqWO2uWQyxo7IprIVGoqPE0+0YrY1CEbMGHmaDvn4YNJmD1U
ha6aBfCtelcIxYbjx/d9utrK+vrKW1g3JGlsZpFOmWNG0syHyPXGd1WY4nUZ5NqplXri1EIg
QFHU0AHicIIpMrAEUNcl6nwrg1o8QiUvkQ5JWo8V6/hiMhSIxFK5DuRkK4tHXLoe0vRafqBC
5t9Wn3KHTqH8TdB5DFsWOeNHRGJ9UjdWJpSvngqSJC9RUZgdep/HBaoIRPUErRich41y74pT
CSN2LaWr7Zq/lXIftOFAijeRiQdQY106hnhZxbw8V3mbYLnkEVv7mx2UiW91cr6hG8v2ah1V
a5E4B15FSXk/UNqGRppZR5YVpyv8yiEhSOop+P7cBX3GuKb5yO6ltNnhN5dwRNcGBaFikYq2
kEj8B3wXpRVSW7h31V1gnVVdOkr1FOxwhCqM8fuBdWWRGfU+eLUJEOgv0U/dU0IPl5YmpTBv
UaH7uh8fKmHC6Nus7q8ult7WMy3Mh0qgFSSe1PPGbcU9dW78Y37ZLxbXdbCeynlGuKKVCoda
0rHX7hXwxM3la2vxtzu92wX1jsl5c21arOsfpIH3U71Hhg+y6uM81pMHdPbfUhKvE4IbUDQi
nUU741uGEsRyQUKsdPpNQfKuNDFy3EuQLtDbuNsuH2eNtM9+qkxIxGQLDOg+mOdpzPlz7ZsW
57vew2W12cl5cNUqsak17jPAai3XbNw2q/e13C1ltL22aklrMhSRWfKp8Qe2Hkz/AA5DGgk9
RIWmQrmScaC42HifIN/nNvsljJuF2MzHGKEKo8TTBapFbe2F9aXUtrdwvb3Fu7JcwSrpdGTr
WuHEgEj6QwOgKa1brTAGw+N+DS815CuyRz/o2lR2W7YByuhdVdHeuM1vmfLNbltl1t+43e3X
sftXtjcSWtzGPyyRMVb6g0qMa64sjFrlZHJC9Kg9OpPemCUUa276CQrnqXFMsuv44tOHWJtC
yCulh6KCta9ssJFGshkGiupqqIxnUjrn44NSSW1lX1NExSlQQjFadOoyxfZIxazyNRYXc/l0
qe3h40xWj0wt5Wk0urI6HNXGklTkDTLLzxacaY/H3IG4dccvtGSWxs50t7mChWWNZDQSimRT
UQPxxrmSnrnGdXb76Ufy4JHkrkioxJB6sKD1LjNsZJbaZ5GhQM8ob2/aAIYOOoAalcDUdcXH
93e2nmjsp5IrVfcnYRk6UBoXPgoPU4oqrjoJB6VzBNM/pjSi02XYdw3m8is7WNpp5XWNAF1B
dWQBp2JwXpq878OzdeG7rtXIjx7dYjZX4dV0UL1V6AMn5WGeGc3NY5m3HZzj435Fw/epdtvo
Wu41RJIr6BHdHVx+YAHSa5YMV+Wct9rvpGSOO2klElRGYgXzPhQdcZtUlhn267ilFtcwyR3A
OXuoU6+RHTDPWt1NPsW7Qr7s1jNHFp1e77baKE5UamnBenL8oZrWeBImaGREk/8AHIQQrf8A
aelcPN10swKWF62hZbSQPI38r0mrKTlpy9VfLGmJbVxZ8O5NN7lNvmT2UM0qyIylYU++Qgjo
viMZ10mflecv+KOQbLZ7NfWkR3C33a3NwnsKSyaaEoy59QwNRikN59xh3WdX9uSIq4NHSnQj
KmGRinjhMrLFCCWY6AtK54ktP/WuQi3kmO1z6IARcMkTOBlWvTwzxnWnNt+x7puBpt9o07gf
lBaoGdaDDVSvtsvbCYQ7hDLbTuBWGdGjqpP3DVSowSWqNUPjK/m+PrzmCzGE7bJFrs2Ao8Lt
paRXP8JOLi7cPUyKDbeMb9uCF9qsbi8jBJfQhJUd6+WNdc5cG2Fb8Y3i53P+m/opEvh1tXWj
kHppHc453yNc+/8Aldcy+MOVcSa0O5QarS8jR4LuIVRWYV9t/wCFsa55tmjfcZ19p3W1kgjl
gkV7lNcGoFRIjdCviMsU9NuNxw/4eu+Twbskhlsdw261N1DDKuj3GpVA3+1+xGM77431/SXn
4Y3eOLb7tUcP9U2+W2guP/BLIvoYAVI1dPpjr9XLxWHRGpZaGhpn0/f1wfU9Lbj3FeQ73pXa
7J7g/dIAQCFHfPGesjPy2O/fD97YcFt+URvNHdm7a1vdvYGqVJVCF69Ri/nPs3JlZy8+NubW
dg9/c7TN+kiXXJKoBCrSpagOojFfLjXWfCmudj3O2tbW8e0dLe7ztbg10SUpVVqOoBzxqRyu
6nvth3KwurZb22aA3CCaEuKLJGejqx6jtjN5/TpzP21fOvjT+gcd2HefeZpt1WT9RZvQGOma
MtcyrLg/l/O092bimsPjXmW42ovbTbHntGpolDBNQp0APcYOph+HnXJrW4tLySzuE9qeFiJY
jUFSDQgg4687jj11tUyqumoah7nwwjT+rx8sQ1YyCT2VK09PUjuDiMbv4tuWj5HtqITWSVaO
KjoemWO0+F9n0H80NLJs1qx/mF1NM60UZ0Jx5/ypaL4YO33fFroPt8KPGWT3KAyFSvQj94xr
sW21o/j17G22jctVt73tSuRAckFB9hy61wUYcDZuQbLJf3m020EVuXPsRwqPsOdTTPGbmtYh
svj7j29JbbubWC0sYQSLE5K5Hdgc6eGeH6pgp9zsf/vE22PZ7P8ApltbzBIlQH+YRU6s6fsx
uRmy1pPnVgm22ckr1YP9vWp05kE+eDn5aVfCd72qHiAS34FNuFw0bLLfGNTHKxJ9Tkg4O1Pl
4/vMk730qvbi3oxBh/gIP2/QdMHBtW/x5sNpvfKrGzuwDbSMQygjMrnQk9B9Mds8VuPpmTZu
GbNGIGi2ewtzGSXljT3mHTUNfXHLBqvsuPfG8NpdblBtltuIAJ9/TrRiOgUH0j8ME5W6zVjv
vxzyq8trS62eO13D3ShtYYVjUhTQ1kUZimNXkRbcm3T494zfwWVxsNrBBMh13QhSV1/hHdji
nMq1Fs3H+DWmz3PJ7TaoL6OUNLH+rXUQgzoENR+0YvrFFja8Y4jyOwsdzudmtoEcFjbQr7fX
+JkpXBY1rh5HuPxnx7ck2m82O1trWaMe5ciFWINMqIMzl1OKcs6892G/+Mxy+aQbJNutkVJt
bcRhyravvEY6jsB2xXmBQ/J99tU+7L/Stil2S2NR7UsftM7jqdJ6Vxc4PsxR/lrqqQ/fvTyp
jTT1D4Ivp4uWRrHWJJI2Dr11L4/twzMart+eZCOTWcpAcIoLAjIiuYJ/fi5xzrc21rtdz8WC
WOF4gISqD3X0nSfAnKpPTFZ6bVtFfbLa/HdlLuNob+FYUH6Q5qzH7VwWJTb3xDgIhs99utmj
SKRUC28AK/fQrVQcznikX2B/90vFLC4u9+3KwD7X7YMFmHI0JStWHdjXBY1rP/D25w/+77jF
taG125o2MUNfuVT6a1rjUkxn1nfnSd//AGtqDSXjrUnoR16559cZMedWokmnhjqEWQgB/qaf
vxqRXp9GScP+Mtg4zbbldbJ+skZI9Uczsza3AB6nT+7B9V8uif4k4Ve7haX39OS3tiokk29P
/G7dRq7/AFpisTJ8v3X4m2+e+2GbjSwXiAqlzAlfX49dQH1wfWQai+BE2Z963iG3hePSoKSp
26WO3TTKr/oZy/uKrepdegMtW8jhnVXUrzffuAch2Pk9xxu/tmG6wsP5MQLiUP8A+Nov4lYd
KYb1g598XnL/AIb5bxfjW271u1uiQbm3t+yrFngehZFlyyLKCVA+hzxS6bMq52f+2r5B3Tiz
7vFFHFdVElrtlwxjnlTuVLCikeDYJbv+Gnlu57dPZ301heW7QXts7JcxyKUaN1yZGU9DjpWN
DZWzXc6W0Q1yyELGiirsxNFAHiScZvkMaOP4z5w0E1xFsl69tbMyXEogkIjkTNlag1Cg8sc5
1rpeccOycQ5Hv3ux7Nt01/7AJdbdSzLnQsQB0xrWMHd8G5PZ7lDs1ztlzButzp/R2ssTLJJq
OWhepri+3mic6tJfiT5DiuBaPx+9F17Rm9sROW9sGhalOgOL7rHDsHx9y3f1uP6VtNzfJbf+
dYULMpNQAwr1qDhvWMzmIZeC8si3c7C+03Y3hRqG3tGwlK6S2pUIqw0+GKdaeZrj23j28blc
T2232E11cQKzPbxIWkCICXbR1ouk4dONF8V8DbmXJ7TamEq2UhrdzwLraKKn/kPgA1BU4Omp
EO8fHm+23MN141ZRNutzt0sqD9GPcMkMYDe5pXwQjV4GoxdXIxJai3T4y53tG2f1jcNmuYtl
KqyXuj0aGAKs1D6a1yrgnWq8Yn2T4z5tvG2ncts2m5u7FdQeaKPUBpGqh6Hp4DF9vWrxny4e
ZcG5JxO5s4d3tfZjv7cXlrMhDRvG1K5j8y6swcajP29xn9FK0Ho61bMivlhWBEhElCC2Vc8s
z4YcRFTrA0k/wgHLx/bipkDLpQCtTTw7VwKpAFNDUEkZA9qefniEJgrCoXIiuk5Z4NIlMmka
gR175DyxLB+3QqjZE9+37fDAQBKAnVX1fae30xMiYAI0imrN4dvGvjiRZkCqgqaMD4AZYkPU
qeiufie3lih0grMTkCOy5AZ5dcJH6lUAMNI6eXjgVRj1KaNl5/5YGaUbEqyCo8SPHpi1kVZB
RajLLpn9cKiRQFOk+o9ch365+eBozCfRqRSc/UKZgeGLWbadAGIr6ie/nhrUp9DBNJIGeZH7
sBOAoqHJVe57/uwKU+mOigmlO4Gf7MJORWM6cwOpPUUxLSDKy6QMwOuBGjkJAYjUB1BND/0w
6hrSlaZGuroM/HCdDAHIJBAOdEJ7YKDhvUPTQfbXsaZ4GdNqbSdKUAPc1Iwq+jjbQfWoI6dT
3wYcOsvryX09z1HlixaRJqwKaSfzE5fUeGIW1FRiftz/AIepOAOhFLKcs+3l/piaNGGICkFj
llUZdq+eHDgVMagxlDq1ENXtT/LChsBGwqvQVBOJIalm1xtT1ZD/ABP/ACwIa6//AMJ6cxWl
enj9cWozKaZrXOta0AHliWEXKjWGNaVHn5UxYqkSrABaUbMr9epxkow1GoTRtWlGHh/rjUB1
jTWQGHgSeuIwbPRqKpBQU1E55+eLEZlDAkgjOgIwKnTMmoyPcnLLBaiC6SrAatP7aYl+REgs
GSqjwrnhiNpBpRgrd27t44STa9OSlwpqzinTxwBEukOVVDprUauv4EYT8pUX7hJTUx61JAAw
NTxJWqnS1CO/Y4giZg4JGZqAOtOvbEMEyvXIkL5UyP0xLAsgArSq1Of+eJeHEelQQx1mpoRk
fHAycKpr7y6nB1eFKdK064TKRUFzp1BTRtS9/HCt0wyXUhFAaEmop5fjgR9RUjSe/qr54jog
IhIGcadXUg9fr2GM4pUcgo5YNpH5dPbxGeIX0qB1DOtDSoPTLxxKQIOsgjtkPGnXE1ElSyMp
YoDma9cvPCbTLqoGKVNfUB0wWGHlBAUKBozqDmfLLFgpxopVya9FINBhAdbLIVrRTnl/piZh
esrWUin5U7Ek5HFrU8Q3DPHF0GfQHx/DFI192b3YqZ6KKU60NR+GOjFrhBFM8vMYmT0TxOBL
WRStuGY0Ldj1p44HWWNLwFgm72LenS8yAajUihqa06g47cfDP9L4+sfkvY95vuGL+ksmmZVB
y9IGQodRP24893WeL4+YLgSQTzQTEe7GxBRTllllTqTjQspozITWlK9C3T6YTFpsF5bbdu1v
ezqJoreQSMlNVR3AByxviz8tfMfTHDPk7iHKNxWCOzms5okBRriVFjHagof2Yrx+WNZXlNlH
v3ybZW/HLWJri1ZZLqZpwUKK2ZqTT8BjPMFar5d4vu15xwmExRpEKyF3AJp0Va/jjP5KH4V4
zvlnxid7uKNVu3LwSK+rUoJAORquNdXU8u5R8b79uPP5tohkgS4uZfcVmfUBUV9VPAdsYnLW
uLnfxDuPELaBrrcre5muPtiSoYkEdjTueuNcS6pHsvw/xjf7DiLLfR+qdi8ILjMFaCgNdONf
0Fki24hFf7bY322D2pdy955PYBVyK926VxnAh5NHO/ErmXliW1rcgN+lhQoSW/Lo056jjP10
xB8X7Dv0fD5Bdx+010GaAMwaqmoDUqaasb6hql4+/wAqbcbzbNqttunhglYpDOQ0gqT3Ur18
8NYam/O6W2xW+57tFZ2fIFzihAQRe4xpStST2rngOn2O4+Rb69Kb1tljHYFBW5iCk1p2Dsa1
+mBM5d73yqz5JcbNxKxg3iFafqI5QPageuYLVUD8MJtW/PuRLZcPaz3YRW+7XKoEsbb1UbUD
+4YpKHBzWCYfGaiSJxogjY6gVIoBQZjB+R1XzDJpLtKvVzQrSo8s8alXy1vxYnJZOWW0OwSi
C+IJ1EjSqAeotXwH7cSnj6Zku9qF3NYWlH5Sbb1TAeoADOp/hrmFOM/VWJLCWZNh0zwPuV+h
rPFay0JcnM1UqBhwgst13d7+0t5dlO1q2oq8sySysAMgNBPXzOBnfXBZbjuF18ibht8ly09t
BAhjg1gor/m9PSoriwvDfljatxvubXJ2zb5LqBaoWt0aQKTlQaQR2xm+LWl+Adm3rbeTXJur
Sa1UwlUa4iZKk06awMbUj1+2s+Stye/kvneTaTGv6SOo9sNTM08cBcfJn3p+NSDjTyS3obTE
bRuhU5gken64Q8Y5ft3zG1tZjk0+oe6v6a2RkeaUlsgFT1GmHTj2Dcdp5Dc/GgsZo5ZNx/Tq
HiY6nJBrn5jGVXXuEfLV4Pax7MrQ7ksUI0kLrCgeoDzw1V2XVnNK+2xtdm1vtIM2llE7gD1g
avVme4xRO3a1Z2uY2tboKmSy3UhcSGmZUE/5YMTPc2PJRxsJxb3hfq9FFpQgdevaleuNJXfH
EXyGN2lbmmbPB/8AEXUhbr6iwTywafHJ8pWnIN62O9teM36x2sJZb6wh9MshpmNY+3Lt3wYw
xnwht/ySLXcbXZLyzsLCKQCX9WnuuZaZ0VfVl4k0w+mV5X8mR7vBzC/i3jcItwvhIxuLi3zQ
scqDwp4Y1i+FPsA9zdLWlJF92IujdGAcGmKnX2NzCfmEe0bedgWhMkIufbUFxEaVoM9NO+WM
wX5XHsWw3aOV0UXpt/8AykAy070P3dfDFheN79v3zi8m8xW1lLLswMixyzQxoBHUjJmALCnf
Enf/AG3pvUVlu8U6ypaCUFCUpEZRUOEbuenTFrVnhuJXm5ce+ROQLPsM8q3p1veQr/PVdVfS
GpqQ17YqxJkarkMPKN02O8uNj3SZUClpLK/t1Q+n1emRlFKU75YmJ3ofjbknI91+Pbm+vLhr
rdIWmjil0qSxQemgUZ17VxY6dXxT/CF1vN3HyW73oSPuUswMwnUe5po2lTUdKZDEfwuvjDZd
y27i24pdWbWxuL2aWKJ1CsyaqAkdc+2IX2LXmW48utLjZotlgraTzqm5ShNbJHUfsr44g8g/
ups7EXm0ypEn6qaNhM6Ae8VUihamdMWLHknx5A6cz2oyQmn6qI6XU6MmB6nLBbFL6+u+Z7py
613PZYdmhJ264uAu4zogdljqMs/tr44YVs8Fuu47k6xqXltwJqAFn0qQNQ75dMWBivjLZdy2
n473VLm1ezuZJru4jRl0vQZoRUV7ZYh+E3C+U8l3P4su92eU3G8wrcJbSCMF2aP7Kp3bFh/D
5I5TuO/7hvFzPvJmn3OVz7skiFXoT+YUFAMPjMtiz+NLWVOcbL+oVgBeRMdaVVgGHcjBbKed
19dcy3rl1lyTYLTa7cttF3Np3S5SP3GRQelfyCnfFMKz/p1tbbtvG5WsCR38lumqdEBdyqkq
G8emJV8/8j5t81bxxfcbTeOPpPtTk67ua1ERQBqDSGyy61pjUs1Nv/bgdxf483Gzu43NrDM6
2ayJRSrx1YKQPV6uuDfT1PHy5yXbru03e7hmtpYblZn1xSelgSx7N0w6xx/PGh+I9ps9x57s
NrfQrNbNeR+9DIMmFclNeueM3qNyPrLcd/5TbfJO27HZWnvcfnty12/temIgGhEgFFpQenF+
AuQtraXe+7o6mKT+WJLpE1ylIowQAtGqBXIYix/KOe8bj2Sz3Kayu90msrmOa2uJ7RoVR9Wn
UXZUCihwyDcWU3Atqv8A5D23nCOCsNs2lVCtEZCulX1jwB64Ma+Hyr827/Z738i7te2MKW0E
bi30xlTreH0PISop6znjpmMTn8rD+38QD5T2iSbQixlyrSFaamQgde/hjPTcv6fQj/8Asa/P
Med0NhFkTq9Qt/cKdP4fuwX4Y5vtBe7Vvt7uvJhd7mdv4ElwBJbWyD9RLKFV59EigtGmvqRm
TiUeY/Lny9t91xQcS2PaJ7PatapJdXgdXKRnUvtBqsakVJJwzysXqdLy/wBy+SW+DooJuN2H
9JNioF2JP5scFPTN+mIoH/NUN54y13ciz46bm2/tlZ9mkZb5IGJktfVKXM4Dk6fVqK9e9MXJ
tuN7x+F5OM8Mk3dVk3GL2iXuSGlWVoWBoWz1YjbjP7VccmPz3f28r3I2OKzJijYsYKsq0pX0
/dWmHr8M8/NfPfybxLc97+X+SbbxvanvJxctI0FotQo0rqZqUVfUTjp15IxPm4vv7d9mudm+
X4bLeLY217BBPGqzqEZW0UFA2eo5jHK3Xbi+V7Jx645QfnTere5e6OypbkW8Uhb2ACoI0gjS
QT0w0cvmD5fgtIfknkkVvGiWwv5qJGAFWh6ADIDG7zmM8X5ZGFf5qs1AjGpJqB9MDXL6jnSn
9pp19Vt60pl/+NeH44z/AC+WP787MfLEgijk06QAnRwain1xrDz5Man445JabByKLc5dsh3i
FQVNjdZxt4uKggMvaoOM9cunNfTnz/y/Ztt4NtMV/skF/wD1pSlqsrU/SP7SsHjYCtV1ClKd
PDFzHP8Ap1NxBY7xecF+CNk3PjtnEL+5MRmX2jJ7jTFi0jhKMWooxlu1urTadru9+2DkE9jC
u7323SLdyhANQeON2UgjsxIz7ZYk88+IeNRxfLPNbj+mrHtdJLeGsNIqPNqMa6hTSQOgxdfI
5+F5yfmUvA/ifbd12m0t/wBS862UIdaIgkkkJyWh/L0rhkWtCOM8fvuW7ByGbbbc7rPYSySX
IQD1lY2DgfxAsaMc8Z+Vnrl+OuZcj37k+/2W6beqWm2SNFZbisTRl1DldBLda01ZYbMp5ux8
V8mdf67fqa5XVwCMyQDK1M8drPXPj/1ezf2mbXYX3LdyubuCOaWys1ezJUEwu0gUutfzEdGx
y7+XSXx7du3JeFbtY7xtm8X53eyhikNza/o3rEENC4ZU6qe4xWBQcv59d8F+OOLXGz2luLnc
FgtkkkFUSNYtf2rQkkeeKSK/Kv8AlTdn4TzTjfJNggt7Xcd+ja03ZnT+VcR6o2WoFKOGauqt
aYpPNc71ncn7cX9z/L91sRt/HBbQPtm5Qe9PcPFrkSVJQB7TsaIcuwrgbsluLz5F+SL3464R
xa52vb7ea73GKKB5JFICpFArkaVozFvrl1w8yNdfLh+OeZ/G3Mub7ibPa49vu912xYtys7hY
wlzOsnqCUyfSp+7LV+GLGcrm+Ivjbddi2PnK8h2tIre+Dx28EwR2aKJHp4+jMFTXrgz02/6v
k2WFEPpNWrQU7988dNHMuPoz+01Y1XlYiOq7/TxNGgP8zLWfSBnTVTHO/JrT/FlxyHeuCc4X
lxnu45VleFb6pHoicsQrAadLKvTwxX0Z4l+T9x5RFf8AxrZ7DLOLa4MMky2ZOhygip6kyI0F
jStCMPxGbfY9Gnt7WDlO+yWkaJuF1tUctYwBLI0bSqrZZmnpH7MWflrfl5h8Zyb/AL18Wcxb
lQnuA5llhN+GP2RlmZVemnSwHTFm30c3eXd8lXnKm5d8e2mxT3A264EUsv6VmWJ2V4zVimWn
2tXXKmC+Q5tjJfOllJb/ADlxqfY4oTu8qW04hplLOtwVT3aeIAFfDDbkc7xb3/hV/PV7vl1z
3jrb/sEG0SQe2zXsUwmW6T3UqrvpWqR+DCo+mH/4tzne3pPybdcnb5Y4Ra7NLcf0mQJNIlsS
ICfeozMy+kr7IzUmlMZ/C6uXHXv2zcmn+Q9+h2i6i2vjF3a2x5JcGMNIZWVgJIKZrJ7YoXOQ
6ntiky6fy8/+UPkbiW18K3nhvHbC/nj3P/4v9Tvmla2qoBaSOSbU7sNOSigJzxvmesdzZn4P
8cbpzeD4J3P2ePWm88faO701l0TEEkTMYtLBwjVYdDljNk107z6pPgFr23+GuXDanZ96t3Nx
B7Ocwl/TLpZRmfy5eOeMz5c5s5XXEk3F/g4S8naWaeHdbe5t7jcKs6hruKjgvmoBZx9K4Pn5
XVv138rTlEnLT/cLsMdpLeLsS2yTSKhYWxU+4Jqkeg/kqD+GN74c/wBnivzlxa43X5z3PZ9g
sfdvbsW7pbwKiappIQzmpIGf3MT9cO4eJ7VB8d7Rece+V9ksORW7bdc2N9C0sN0dCoxcAFif
SVocm6HF3tjXP9Jbj6KvW5bJ/cjbpGbz+gWtqpyLrbKskJ9z/Y2qUivgcFqk9fO/zxbW1p8t
8lgtYkt4BNHKFioFLyRK0hy/iY1ONYOPLWR4vur7Tv1jufsx3BtplmMUq643AIOh07g98VF+
X1p8p842cfCe27lecfgurbfRFBFtzOI47d5UZhJG6rUaCtV0gYxzF/WyRjv7U+XRrdtxSTbr
eSXTNdxbuqqtwAaa4nyqVJ+3PFZNa58jD85+Q9lu/lCLfdu2C0tpdqnlgu4mHuw3eiWgaaOi
qGyzpnnhvwxx7a9j+cOabNH8SbRPd7HBdxb9Ei2sLPoW0laH3EljZVr/ACz0pSuLiav6X8KP
+1nl8Fxq4y9hAbq0hkmG7KqrO6M4OhzSpFWyzwWZXazIq+C8msOT/PtjuO1bZFsEiC6hu4om
Di59vWGkZaKFZlWmQw1n+fuvW+Nbrxeb5A3SwsdqvbLfYDOZ7ub3f0chDAN7dW0EMSD6VxUR
UTX+12Hxtts+/bbcbrC93cj2ttEmuGRriUnSUKNpGYwxnFtBebc+98AmhQ223zx3wsY7on9Q
kjQZRMWJ/LXvXBmt35Zz4qj5VcfJ3Nv6yl0+265oIUudRhK+9/KVVf0lTGMiB0yxUSyqbnyb
8/w7xteKGcqk5tbxbAkfy1dwI30ZhVkX8MPPhanfN53LYd94FeHbJd23STbru33GKMqbzRHF
E7tnTXocH0986YJ6ur6qTsPDfka13O1sNo3Xjm5wq13Ffz+7GJHJPpKM7K6kn1LT6YNGfly2
PO7KXhux2XJuGbhf/pbdVtLm0Be3lRFEfuI6MpBIXNT0w5il1fcS4DsGx/JpvrH3Ba7vtBu7
KzndpJLRxIiyLEzliAQ46dDXEZ47uR8o4runFt8tN0jvt2s47aQywXFiyhaVVXU6F+0n7u3X
DgB8cbjwb+ibBZ8V3aws0kWJrzb4whuJplT1qwLBgxYHqPpgzGrrP3o4VY/3B7pd8gurNbua
wt32ueZgVt7gRiPTMGoquUGpNXUeeGzwRY/Ldlb7x8Obk7cggvNNytzb3ilRFI8b+m2QqWzP
2r54ePlOJNll2fZ7DcebJdcq3i3tYods2+ziegiyYeqMLqdehd/+eCxdXFdw75E4vyH5J3jc
eSW8Ow7tYWsdltA3D70WMu8hlL6VWU+4MssumKzxT438rz5ZtbO/+LLS/ffYbyC03GC+W+IV
UnAkZTFGFrUgPl40zxT5HVxw/Je38r3H5R4buGyi4l2Fo4rma5hkYW/uLJmxodNTEencYLbn
hlzr143/AHLLbJ8u7rKgXS1vae5opTWYqVanU5Z43+GZ8sh8ZJGOc7KpRWY39vmWotRKp/bj
HUdOX0L8y/OHI+F/IMG1balutlFDFcXcbpqe5L9VJy0UUUBXPGpPGV/8S71sF5xjkfJdtspd
vtLzcnnFpZBXuI6qmpdKgjNyzUpTPGDfItN73bZdyuuKs9nfG8Td4hZXt5b6GQgn3EZqDSGU
5VGeLFHZvHIt1Sx5BcpMqy7VuUFvZvpUFI39nWp/i1az1xrAh56m87Lxzd7vhVsI97N/FI6W
8QkZ/dVPdLJQ6tQNfrngFZ3kvIr7bOLcF5nv9sF3+0v0tr95E9qRI7lZEmRgBkNIDUxczWtW
dnwfYuL8v5F8gREPbXVqL6wKuiQktGzXEY/KfcoGQ/7sWfawS5K8c/t4j3m++Tn5FtVpJDst
zJcLerGP5UUc5eWOJ6UHpJy+mOn9vmRrjJy23DNsuNv/ALlt+FxatapdfqbizJQqksckcZMi
HoasDXB/SbJRz8Vc8A51ufMubcw47uiwnZLOOa3TbBGNBUSGMuSfV686jGb5ik8tX3xjDe7n
wLZXtJptnm28GFlCq0dzGjHQSGB9B8Rn1xWeq9W/L5//ALmN93W95u+23cU1vt+2gNt1tOoR
dUqATPGVHrRmFQa41njMnuvGxQMyAmhGeroMTWIyBXVmASAD4YGcEhRWD6vQtR49T44lhSa1
Go5g0GR6A+NMSwBeMUWg7Z9RUeWJijEoCkZ1/dXwzxnDOjSFtA01Ab7R/liOiQuTXSGBGbV6
EdcsRFRw/UFTmAPPCMDqOogA+k5sCBkcRxJHHO1VrqC9QO2AQzRMCChDMMyevTxxERjZU1JR
QRnTzw6hGoAfVXoPbAzJ8cCFUaq5UNSRkPwxYjII+ooSpzB61PfFgpkC+5qB01Geo1xMw2ka
gEJKr9urvTBqdBeSpIAANKfUeOJuaaR2BBqNXQgdBXriJpIgaDVRBUso8adzgVDVPtIqPyk+
QzwspKoyAvGVzyp4dsDVoP5xesYoQDXVlWnjhZog0poQvrFASM/pg1acHSc1ozdQe/j9MUIj
HQaqgAeHn5YTTEAEtpId8z2r54hYJHcAalBXxBzJwYpBEsKt9h6qpzJHhiRgsaAg061FMq/U
nCtMKMoLKdVQWUeoAYFpH72bJWJNPDLESI1uKOaL9zfXrniWHDKGYKCwbrQ9O9cOLBhI6qxY
616AdgcDXwE5s1VOYr7hOR8sQw4UOqj7XORPUYtFIFQKDMBjn2864hpnLA6tRCnqtOvngag/
yAMQVAqrDz7HFpQOoYrpzzqSO+ID0yKlagEilR2OE0zFzRfSo/MeoI8D3xMiQoBoAFCevlhW
DYEZ1GmmSn7T+OA/BAFqEn00ooGdCe+Cr2nXUqvqFKVB8M/LA1hRr6e1D9o8SP8ADEML2wpJ
YgMuVTn/AIYhfAKxLUUHpkDTGho1R/b0fao+8E/uyxaTsIwAcq1FSOo88ClASZB9vpByPQGu
I+mLqo/lUIpkvSh88Q0OnIkkgSZeQIxNSptSqAQG/wBw6Z9K4tRkij1V9WZroGeXiaYmMKSN
1atT4j8fDAcEpUx6sm7swrXLxxIAQUZ/tNMyD+7CBKW9vOlSPUcuvhiUMFoatQEj0iv/AB1w
NnZVLAtlUD05Upi9PhqgtoGZUii5dPGmJnSf/wAjOpzUd8h51xSDRAnSCKVyyH7iMWNc04Uk
gEmhGR7YK1JDAotVboOjA9T/AK4lpRsSxamknIsc8x/x1xazfkzRlpFK1FB17ZYRSGT+oUXq
hGefSuDTIb221N6qhTUk9KYtMm/IJDH6n6BwdRNK1H1ww2Rl9xOqYqq6QBXT0/xxuOdceWQG
YGEab0+BwJ011DM17+VcLbScXlMR11oV+3SMx51ONS4Pw9AfmfNH24WMu43r2JjCpE7PoKjq
M/LGb1o44kUBq+t0BDtQoppUUHnjMN9XHG+I8m5FcCz2i0N5cIfWtaKB4lzkMVuCyxp92+Fe
c7Rbrd3tmiW6rUskgkda9tI88GmVjPYu4LphpIkiydxXOmdMa+3hsiSG9vlZZYjIh6hxqU1P
n44Of6YOb6nuty5K9swupbmW17lmbJeoIzzxudS10+sFY75yQx6bK5uVhoFOhm0kVr0GX44O
2LzHLNc7wlwshllWY100LAiniSa4J0yO7bd7kB7r3pIwAUL62anUks1csWtSfh2W3IuS+3HH
BfXiWhWhjieTS1OldP8Ajilo64xA28b7Hde9BdTxzNWsodtZB7BidWeH7M2B3Lc98uWFxf3d
zMsagFpZXk00GVKk0OCdiO225rya3tzHb7pdwilECzuqqvklaVODrut/Vy2/K97tZjPBfTpO
xrJL7j62PiWrilrPRr7kO8Xrxy31/cXciH+WZpGalc8q41okdb845T7DW53W6FqAB7aSOqnt
Q0PTFKfq5tu5Hu+2ye5YX89qz5sYnIrlnX9uNfc1PZ8v3u13FN0FyzX0TB0nl/mNXxo1e2Mz
qqLbk3y3zHktr+i3W+BsgdXtxL7WY6FtNA344LROWQSRSjaiQa+kV7fUZZ4YsT2m43lq/u2s
rwSAH+ajUI7VqO2JY6IeS7ylx76XswnceqZGKk0HcjPGtFnjusOccnsGkew3K5geU1mdJCCx
8frgtoBecv5Pcbh+tn3W4kmyWOZpCzAEdK4DIhi5Pu8V0bhb6YTSCjyIx1U71xS4a12xfOfN
tk29LGzeCSNavrljUswPfV3OWGDFnH/chztJYXnjtJkNS0TxiMAeOsH9mHxXcZ3kPzBzLfJZ
9W5TWkEh/wDxeGQpEVHaoNcZvnwzKqtq+QeW7ShTat4uLSM1LxIxK1PUhTXvhvZvrnv+Y8jv
r1b+53G4muomLwzu3qjPfQR0xc9N8riX5e+QprZo23650dGOqlVpmPSK4bWpw5Lb5U51ZRAW
+73KBasoLtpFe/Xwwbrl/T/VyT875Vd7hFuU+5SyXSHV7rv6hXoAT0pjeyKXYtLn5d+RLiIK
/ILghhpZRRRpHkKZ+eMfZrHLtnytzPaoDBY7tcQoxJ06q1r3Fa54oHPJ8j80k3CTcH3i4S5k
9DTo1CFpmMj3xqxY44uY8itJ5pYb+dJbgfz31mkgJr66GhOM0q2He9yt2keK6lR5G/mMsjoW
rnnpIGGUeOKWeV5WcsSx617V61PfDDXXtl5+ivIbg1rAwfQMyaZ0xGPVt8/uR5ddbfbwbTDH
tcsUYEk5o5emVKGtPEYLjM9YqP5Q5oN6G9PusrXqCizscwndaCi0xWxrE+/fNfyFu9rJYXe6
uLaQUIjCx6lPYlRni5oTcc+bOc7FYLYbduGm0jqqxyxpLQn7qFhkc64aPyhk+Yecy70u8vu8
rXka+0rBVRQvgFA008cBLffmz5A3+xewv9wraNlIsK+2GWvV6UJ+nTDPHOVWca+S+XcXMh2e
7aFZGHuB81Yd6qcvpjMa6qfb/lXmVnv029QblILyY0lc0YsD1DKcvp4Y2pVve/3AfJF6hgk3
XTEw9axxrG1K9KgV/fhkhHF/cP8AJ0SkLuSzVACCSNdVB1JoOuCyD0uM/Nm87Xvc+7bnb2+8
3l3Hpd70EsgrUe2fy+GkYz5TI1n/AOdFO+gjjNko61LHUKGuQpi+sMQcq/ug5Lc+y+wW6bYF
U+8JgJtRPgD0xTFjC2HzJzq13g7z/VpX3GQaXdgGjaMGoQp0IxWiVZbx8+/JG5Ws1tcbkscF
wjLILZFRgCKHOlVr9cGpRcV+V+Z8Uimj2i/YJIQXhlHuIWH5s/LDPTat+E/M1/s29bjvG47d
a7te7j/55LlfXka1R86delMPUDbN/c9CzJTi1mGB1BhmVFeq5Ka4pzBrn5R/dFyC7miOwQLt
senTPHKFlJPiGIywYZdYbbPmr5AsN8m3mHdGku7oBbhZl9yJgPt/l9MvLBhkHzH5x55yaz/p
95eJHaVDSRwqI9ZHY06jywys2O/YP7iefbLtUO1281uYbZQkAkhVqKPPFc/S9dewfPk1rvF9
uO97ZZ7zd3wUGWeMAqF6KhoQF8qYr78n4XO7f3KbdeWckEfErRJmUosxIDRmn3KQqkUPQg4z
kG2qOH+5j5GgsYrUXVuZE0gStCGenT1Mag5Y1cLisP7hPkG13m63Vr5ZZ7qkbwsg9giMemid
FPni+W+cxw85+Z+a8vs1s7+7SO0T1PawoERmGfqI64Z452eis/njnlrw6TjVtdQw2axG3jk9
usqQsCCquPI9euKyRuyPN2uGkFFAUtVmbxPjiZSWd5LayLJDI8cuRDjrXsSRg0R6cn9wfyND
x7+jLfAqY/bS7dQblaZUWQeP+7PDy1Ifh/8AcLzHi+3DbLVobm21mSl1GXZWY+ujKVNCczXF
kZ66m5HHz/5z5dzOyWwv4raKxglEn8mMAkjwdizUPTLFuM9c7VBL8n82m443Hzu1y2zFdAsQ
yhQgzEeqmrR5VwRqx2cJ+UuVcKm9/aboFJsprSWrxPlSpX+IeOKSK3IDl3yvy3lm5xXu5XbB
rc1tbeGsUcLfxqo7+dcNUy+tBcf3F/ItxxldoF2inT7Rv9FLkquROseP7cPMit9UHAflDfOE
bxc7ptISaa8TTdrcAusoB1VJrqDavA4uufF/4cXKfkjfuU8pl5JPItrey+2B+nrGkawiiAH7
qjxrjPlPEu+tVd/3H/Ilzx0bPJcqjU0DcQoW5MYyFXHfzGeHZF18vL5pGeR5XBLSGr1NSSTU
sSepPWuDavEUZ0lQTpoQQxzr9caZ3HtLfMm0D4Obg5tpV3ansLMlDGVMvu6jXpllTGePKe59
niUlRI6KNKg9+le9MaNiWCcwsKEinqC9K0Neow5rHVsbPmvyxyblezbRtu6vE1ttPpikWPS7
VUJ6iO4UUyGMzZ8CdbV9wX5/5vxHbV2m2MF3tqGttDdBmMA7+2Qft/24zjp6befn/wCQt03u
03Zb79C23sTbW1uoEJJFG1Ka6tY6g41G/q0Mn91fPKRlYbFHAIaQRn11GVRqyIxZBeWI3/5g
5Jv/ABaHjd+6PYWtyLmIhFSUSAsw9Q6qNZ7YpMUiym+f+fyjZv8A5Yjm2Wot540UFwVCaJB+
cEAA1weM/NaKT+6z5FKxlYbCOSMkSL7TMZMv+7L8MazlqR4rf3MtzdPcyyappXeV2J6mRix/
DOgxbtYzPFzwrnHIOK7tFuuzzfp7qOqM3VHU/kdfzLXxxdcibHo++/3Qc83DbJ7OKC1s/wBQ
hie6t4yJTqFGK6manftinMb5lYrknypyLf8Aj+zbHuEiPZ7L67UqtJSNGldbjrpXIYLzKOvl
JzT5Y5Ry+02iHeZYmG0p/JMa6CzkAa3YdT6R0xn4Y6491Zcg+ceW79w1ONbsttdxLo038seq
6XR0o/QMP4uuKXD1NU3MflDknK9g2TaN2kjdNi1CCZVKyOSgQaj3oq9cdZzMdMl9qp4buu22
fILC83uKWTbYZV/U/p3MU3tipLIy5gjrjn1yo+kd4/uJ+PrDjd8myT7nu91Pbm2hivCwUa1K
hjI9TlXPxxTlmzx8pzyF5CWqZDRm7ZUpiwc/C241y3eNh3NNw2u6ayu46GOWMlSKHNT4gjqp
yOL6tytzzT+4Hm/J9vXbrhobSAqEuHtVKG4Izq57D/bjVn6GG4P8/wDN+KbU22RNBfWAoYIr
yriEk1JjORAqahemDmT8q2VnIflHl8XI25GN1n/q3uF/1JY1NfyUNR7dMtNKYvkyL/nvzzzf
ldnDZ3UsVpZkD9RFaAoJulfdFTXpkMOZ8OfUFw7+4HnnFtlm2e2kh3CxFTapeAu1uGGXtMKZ
A5hT+GL6w6xF/wAs32/3c7zf3klxvDyCX9WznWr1BDKw6ae3hi8a56x3cs+ReWcsltrnftwk
vXso/bhBCoqZjOiAVLUzOM1nfy1XE/7hPkDjmwS7FDNFeW+kizkugTLbahl7Z8Ac1r0xKzYH
g/ztzLi13uU6SrfPuLLLuH6ystZugdTWo9ORHTF/5XLs53/cLyfmPGp9i3CwsYrKYjVKqFpF
dGDI8Z1HR0p40xrZPgdfz+09ZDYPlDmWx7HdbLtu5y2u23ius1qmkoS66WKFgSle+nrgkP8A
hx8P57yLim4Q7jtF41pdp6WUU0SD+GRD6SpoOuHr1S5Gg5980cw5nIqX08cO35OdstqxxFh+
ZjX1tXxxkWasdj/uJ+Qtn42diguknjAeOzuZk1T260ovtuTnp7VwyYbGQ49znfNm5HByG2uW
fdbd/dFxNWVmY11a9WbawSD5YOvWo6ue/I+8825Mu+X8UMVwYkhEcQKoEStBRiSSCTWpw2+M
5Nafbv7iPkXb+KHjq3scyhPatdzkQm8gWlNKNXTRVyDMK4OL76rrzG8vLi7meS4leR3Ot5HO
pmY9SzHqcbtWIoZtBNDVVPqBNKg4xVjX7x8lci3XhdhxC+kR9p2thLZKVHuCmoIC/cKGKjFP
D1JflxcE53u3EN/h3zanVLuFWBjYao3VxRlZcuv7cH1Snv8AcLi73W4v52V7i8lkmuCtQuuV
y5oP/qwiRo+Q/JfJt+4ztXGtzlSXbdjJa1ammUjRoVXYfcEBoMM+PF1zL8ouA/IO/cK5BHuu
zNGbkKySxzgmOSNhmjgZ9siMGa1qu2vku57Zu8e72dw1vuaTPcQ3MZNVeRyxA8vVTPDa1zce
rXv91nyJNZNbJ+kilZdAukiPug0pqUE6a98XMjG+qPhn9wnPOKwzW9tPFfWs8jzPDeAvpd6s
8iMCCNTHUR0w/VdVV85+YuW8xv7e6v5ktf6e/uWUNrWNUkYUMoz1a/OuL4jGrPff7gPkLe+N
rsd5eiJAoWa9iX27qVFP2SOv8VM6dcHNxrFfwP5h5fwyd22i6DWkwPv2N1WS3Yn82kZq3+4Y
Pm+umar3+TeXXPK//ZZN0Zd2hYG2kXJIlU5KiGoCEZFe+Hq+eMNnun9zfyLu+2T7fJJBaGUB
Td28XtzKDkSh1N16VxufWT/Jis4T8+894rtX9MsJ0uLX3HdY7hPd9oE1/lkkEAk1K451Vybr
8z83veUW3JTftHuVnVbUR+iKNH+9Fi+0q/5ga1wW74MWnJ/7kvkXftjudtnube1gudKO9rH7
cpj/ADipLU1dDjXPyza892nkW57TultulhL+nvbVxLb3KUJBU1FcPXrX212cm5puvI9/ud63
aVZb680C4kjQIje2oRar0HpGM34PMdyc73xeGvxMTqdla4W7aNh6hIOmluoFczTBz1lb8xtN
s/ue+RrDbYrJJ7aaO2RYYJ5og0jKgoCxHU4Zl+WNYLmHPd85dvzb1uiwm+lijima3jEcZEYo
rEddXnjVq5qe7+QOSzcSi4o1wrbLbS/qYoDGNaTUI9L/AMPqJ0+OMSN/0y4seM/NvPeN7Hcb
Ptu4E2c4IRJwJTFXqYSc4/pmMXw526xG43txdzS3VxcPPNKzM80pLSOzGpLV71w6hbXfz2Fz
Dc27mOWFlkjavqV1OoUP1FcHSlWvNOa73y3e23zeZRPeOqxDSgRNCCi+kfSuH7eYsWfBPk3l
nDbp7nZbsW4mXTPFIoeJ8vTVDkSOzYLF9lpyT5y5/wAhu7Oe/vWjO3TLLZrb6YgJENVkZUHq
YeeNSyNc2xFe/MPOLqPd4rncCsO+NHNuGiNR64SNBQfl+1QaeGHUsbD+4H5Dtdzm3iLc9dzd
RJBOskaNE4hGlW9s09Y/i74zgU3N/lblnNJLf+uX9YrdSUt4FEUWvsxUU1OB3wy5FKluPlbm
V7w+Hil1eI+ywFWjQoNZEZqqe4c9I7DGeevrR16i4F8pcs4U1x/Qr1LeK8p+ohmjEqsyn0FV
OQIqc8H5blWu6fPHyDuG7WG7ybgg3DbC4tpY4kj0iVdLqUAoQw8a41vgz1QbD8gch2bkdxvu
23bJuc/uNPNT1P7xq+quRBOeLq76nqPBP7ibDa+PRbJyTaJL6OxBWwuYJPaPtMfUsoqMx44F
WG+Y/lVed7nZvb2X6Ow2tGgsvV7kzo9Cxlc9aMuQx2l5k/y5bdeeoz+4wYihzqKeFBQY52x0
07yLUsMlA0kdyfGmAo4yGkDBui1FOmeWJgwlZIzUeKk/9cBKlVWlAB9zdcLFEArUGrxz7AeN
MCmJKCOR1UahSqntgOmLBGUj01Pfp+3E0Jp1UVA9Q79s8OKBCBmpJVgVqaDthWDcMhoraSx9
JPSgGBFpZo9FdLDpXz/1waD+2ygs7aaChQZinhiNFCRqRS40nv2H44kKRkjFKACpIPWhGJUw
SrDIHUANY8u3lgRm9ppKOAOykVocWMxKGKsCtGVDVRTsMROxkUEoAGY1YfXyOEnOaocqHrl3
xEPRnY9up7HPwwIwqrN6eua9v+DhGpV0BRl6wMzlXPxxLTODpjRR6KVZun7cA+S1tUaTRe1B
XpgxrAuqyFnJJcnOprQ1xLBlyw9ebLStOhwmoh629bHSWqO4+lepxMxK2kD0jU/Zj4+GJo8i
CgGZpnXywVUwio1DlXMZ4hhFTTJzqr6VIyr4VxKk8bAkrrZAfQxAH1p3xajrIyOqgmgNSPH6
4gFcz7gzBOY6DriaTs4GopQKcq/4ZYkZXkVyXBatAegy7mmA4GqhAFPpqSSev0ri1mwB1FwQ
3p6Fe+LWMGryqGKUI7aiQcTc0nkLKSM2pUgdvpiWn1JkwFdRoRTE1pCLUAAwXwHU0GdMWo7B
WUsQFIy0nxr44YALUHv4UyJ86Y0CMbuoBPp6jLt4UxlJVFAoyIHfzxlvQn3AjUWoYkKa1yHX
EMMavQV9OXTPp2xCmbNaFdQX7iOhpiBtMYBdOjdqYTgkFAc9SDw8cJwxOn1aiScq+XbBRggA
CDWtclGYp+zEjutV1Nka+oEjOv8AjhRi5ZupoOinpT/XFQdZGYFDVW7igAI7YyfUij05PSnh
l/hiaJGJXoAadT/riVkAaLVWzZj0HUk+OEU8lV9XRagANlQjviZJ/bDVZg5bsPtr3riRm9t4
tLAjOtMRNSMhaqWVvuFaU8aVxHTgaWJcjTWlaEH61xHTE6Zs/VU11A5dMTMomWPTUA+NcR00
bjSKGoJyp2B64zW5T+2ig6vTGKjPr5YtWomKFgSKhT6QDnQYWEsUqMGr4/acqeGDRYZpQHUA
aajJ17fhganVGGSunVU9x3xNoZArFstVO5FP3YWYzO8sGufTlQdMb5jFvqvWnc6RTrhQ6y+J
/Z+/CMd8VUjcqAaj01GefniabD4ve5h361lg29d0nWVfZsJBrWVickGNb4y99+VLv5DvONQ/
ruL2WxbcHVmaNlaUtSukhRkPLHGfLUjxDRrCqx1GnUGuQPXGxI9g+FOQ7pZ2d3ZRbXPebcxP
v3tooZ42Ay1VpWmffHTrmYu/HolhZfq4ZJ9j3WS6tZJALyC6Rq9aMoZjVaeAxgSouWbBtkm4
7FbCygYSy6Jo9IXUF9R1AdfqcZmFp22Dj9vO24Nttqs0EdYxJGPbAAyGnpiyB5rec+ud93dd
gvNigEHu6P1NrGdOjLKoBNPHPDMPw1HMeSWnCbC1TbdjsZIn9JRo/UB2IYZnF4LT/Hlpsu8x
XPJL7aoLG8LERi6jJRU61AIXpjXUn4Mrs5HvHxzeWEke5XVrevGf/wAXsoCWpXoNIJ/fjngd
Gz3eyS7Yq8VtdrESAK1u8ZSQGhora6H/ABwlSbEnHrHfrlt62y2st3lOqGS6BeE0/goNC1xr
PFlN8i2m5XOwzsu0WFzCFJjnsqAhT9rEDrTGPIHzRcLKkrxzlRpJDaTUE/XC3dcxZWYIQaqM
z9MaZGZNQCkae+RrXEDlo6ghqDoQK0/HAjFgBUAk0rTIAVxINVkND16nKh8sSEXUR+0aghvt
pUEntiM06HJqt5GnY+WEUFZBI2k1yz7VxAlctXr6u56UPl4YiQLHouSilDmDiWH0qVoxJYfZ
nkMJDqWmtc/4h1xIUkxFHY1A/LTpiAKqV0s2sD1FewH0xI8iDSWFP4mH1ywCwCKtSKmpH3YD
IZpDGxUkkE0yGS/U4Y1mH9xUqTUk507GuJaIEdT6lpUg/wCWDAH7quQStf8AHxGHUBlYAlaZ
dR1p9MQpvc1GoAHYnucMGonNBqalB+U5Y0DEEqDXUqilAKV8z44CTBWCjVp8adRgQWejKpJI
IqB2p26d8RGSSAaVAyAPcgYTh/dCUkcgDwHYYGaAvUkg1FKpXL9mLBOkbxyEVBAUnt3ph5N9
JGbTmRQGiinTxH/PDqN/LzUjpnl2p5YFaCpOoA1OfmM8Qkh40dYypJJBqAc6/twavqjdtRr0
Hb/LDowgtVLa6swo1B/hi9ahFqMnU0FdS40iGoUZ1BatDXpgGmaRg50NRRQVIrWvb8cC+x5H
Iz0+pRTV5YpDfUSsQzODUdxTPPpTFYsShvUvp7ZeGJIHOot1LDp9cOs07ah0ppOQPkMMqIaM
xTXkKHMUr1qcRhyKIGQ1VcqHrl2wVBL6SPcrpPRQD/gMB0mcEgIdVRQf51JxLTrOgbUQSa01
dKVyxKULmQOK+nwHepwoTSMKlRmAK1GdfDB9Uid2LZUIFKfj3P0wyYrNGjuyEIwDDIEDrXFR
ARNKxDOaueniKYNQShoEBGg1NDmaf6YkZn1UBAUg5AdfLPzwi0qSRsSGGphmPriWCIJQFmpJ
4VoPxwtaYuozJq1PTTvTt54MZ8OrBo0UGhY5Dt+zwwXkyh1/dQ001qOgxLQ010rkDkrd88Wj
66UR1HSpqD1Ympr+OeJqAZaqxBYZ0NMsh2Aw6LBqulRQkOMz517HEYEMNRJbp1T69cS0jVlY
VNCMifr2/wA8VZMHA9LDPOrHscA+Ds+tRRanpXoKjChGXVRegABAHSo64MbgXIk9LEBjUljm
Prh1Iew8FP7fp4YfllIqNQ0elcwD0wHAlmQDTXPp3+tTgOC95tBFCNRFKHCtoZHBB+4uKnwU
HzwwSjVqEZCtTTxxU7Q+5JqFMhX7unTvUYyNMZGYBh96Nm3Sn4Y1hMCpqCQD1OrvXwphkZBJ
IQAYxSmRI6D/AK4lNHE6sdWalRpUeOCt6aWV0k0qQK9Aen0xQU5Y0BBFe48DTAAJJ+Zsxnqr
/jiGiKjJtVK5U8j0wNUjKQxYdVpmfPp+GKiDaSVAV1EqaZDMVOIxF7nUjqO/av8AzxrFCdxS
qireP17YsJE6WIDVIFevjiFpFgqkVz//AGe+DFomdAACQRSgI61w4rQKMgtNNK1qcxTuMTJB
pOopl1PY4lIYeltIyNMGkzM7D05DMUHYjEsJWYrVwS4JBzz6Yj8JArsQGAyzzOZAxIIkVSVD
U70p288WJEshSrGgFep/dTCyk0mjE/d1GWCGEdAFDlXuO57YiTlBpzrQ5/j+bFo0zGuQ7Eaa
dz9cC0VZDRQwofx6/wAWIkyiQACmpT0HcfXFBqOUOQtKKfDv1wwkgYLnmfE4AIe4QSgzFSAe
uk/XA0coCBoFR0OeGCheqhVUenM5Z1/6YhQyPJrJyANBTr0w6zEwkZmGoaKDM0wHSqoq65aa
injijWmR1X1DLzPT8fDG1hy4GqjVA70yz7DCthyWVtBVS1KAjw654zipvWTrOnp2zwaJCLqo
KKtD0y6nwrgbBGoGZzLUJJr+GNaKkWVlPTTXqOuQwA2tgQEoFB9PWpxYB1ZqMxAKjx6Z9Pwx
Gl7ZozuNenoK0oTi0WFGagsKZZ0JpliogRSlRk7mp8PHFh0WbAn7lUnv2wYdMGJkoxotKgeX
bECEgQsJDXuD/pTDjUG5Eh60Y5jOv78HwsACAw1HJsq4LRkKRIwwNRQdq1xATe2SVJJz6DLp
0riWgleTUKAk/mz6jtiiKP3MiD6l7HpXGtWDWXTSoqDmp6VPQ54sW0mPrY6suukGuXlgqDqU
gADPOiAUFcQlGsoZACM+hBH+uJo4kByKnSaAmp7dhjJOxkZFVcx4dzgFGkiZF8h0IIzr54Vq
MtJHIPVktaZ9T5YVRLI70qxkOfp6DzzwYiYZFjQEYRhiEWhdlJrWnmM8qYmjmSMAlhQt0HfP
CggqtVrRsjmMApHIKHoR2IFfKuIGFuTkpPWvma+GGrDstYguoL59yR5YGbBKdR7Bx9tQRkO+
JTwTuaqh0mp9OXQYm4Qik1CqioJJPgfHE1INJZAvYqOgOf4YlaQzK6s6mtR/hgZnqM+hxUU8
M/24liQxu9NB0q3c0xKQSVGqM/lyBHTETuoaMFV9zuTXL8MSDGjtrXoo7/5eeKgRVCQlaZgG
nU4kLP2ajVTz7jwywLAsFU1cNQHqR5dvHCcEwaupev2jwpiB4w8hIKn05ajkP2YrThGQg00a
h0FOpPjgApm9uh/MeijPIYYSj0MlNRAOdR2/7q4sR4zochXo1cgKUFP9cCJmUg1y09SPE4EX
vKIzQVanbtigOyLppShA6+Z+uEnIYKCW9XcflH/PAhhzoClwpGfnTyGIkuqONmLajXIk5j8M
SMWiNG1VRup6UxDRF2oJNdTlQ+Q6YsQFNGC5alqag9Sf9MQ9PHVqrSrD9n0zxKUiEJrIdIjO
RrSgOJqURoGYH1J11Hr+zAjSLrVm1aRQGg6EH/PEgxUdAhUEL9tD1rnhESzIGA01Y6fVSn+G
BagkjBIkC1plq8/HEvqNdQAC5g9R0qfA4jglVJCQBpKgUoe464TDH3A2lW9PU/8ABxC0gdBp
lTIg9TX64QKhB1mqsPynz88FOGSZiK0IP5WHTL/XAYkRo2JIGo0qfLAfwjZQHqBUEZ0HeueE
YJtQJK4hoI3kr6vtNTnlU/5YjKJSwDe3ShqKnpn5YiSIpBWRgJR1b8op1yxaDEhFART6DWoy
UjvWuJX/AAYUlUal8aj9/UYRggoC1J6d6fuwJJ6GYkMDmK+P1pgUCWYgimbdzlXE1hg7BT7y
1AzJArQ+QxIcbqHqSCG8sS0MdQ7a1NPy9xQdKVxRnASL/MrU55gdh50w0EhZ6rpyJozt1H4j
EoJECSlWTUDSpB7eNcWmDydStTpByr4jAQq60MVKMa+sAU8xi0YYLF1JpTtXEfDKUQ6c20mp
pnmcRSgH1MTn0HQ5YlaEOgH20cGmv/PGapC1kghqGNjXVTqcUipn9IABDIfVq+nhhZ0KhmyP
pDZ5dR9MGnSZRoYfcVGZGZGFqxk9wANy1FCivQd/M43Ky5gp69hhQ9fmcQ1LqLVNMh2J6Yjr
S8R3O9s7lJrKRoZlYMhQ6WGnwbHWf1yYO+b03O+c25hvixQ7lutzexxZJFPJRB2OQ+444/eO
3P8AOyKSNpw5JK0aoGkUpTxqemLXOytdw75F3zij/wD5P9qWBspbWZdcRPXUQCMP2V4taTc/
n7ml3t5tbNLTbkYn3P0kWhyvkSW018sE69Yyuja/7geZwWqQvY7a9wij/wCaY3abyLUYAk+O
G2RmaFfn/mn679XdC0uV06GtnjPsgVzbQDWv44bY3gt3+feZXlt7Nra2m225yL20TaqnPq+r
T+GOf/Sa6fTzUtr/AHB8ktrFI7jadvuruMFTdSRsXPn1A+uOlxyuuGy+e+Yx7nLdXot76GVd
Js5UIhUA1AREpmPrh8M5tT738+8ku7dbazsbHakZgZGt46s4GdPWTl44z439cdFp/cVvFtBG
g2HbXmjGhbptWup/NRf8jgtjP1ris/nrlK7hPcblZ2e6pNklpNCESMDshGYH1xvfBdNyT575
JuW2SbZY2VjtNrMpSaO2Da6N1XVkB+AxnwY8tmeZvcZVFWNewFa54tOloejFKhtP5hVfr54d
WFpUZ0zbNqf8sQJgy5KtQ3UDtiODRfUSAT1JrlSn+mJYBVZtZNTT1a6UyOCtfSuqPbr9YRK9
pLooCJWVgpJ/3Upg+0FCbG+ize3ZQQWVqEA18K4fA52Z1B00ZgQK91r5YNAtEuqoodQ6nLp1
zxpHSN9NRkzHI9Bnl+JwmR0z7ZuMURL286R5BpHidQaiuVRmMZ+0OORtQAReqjP640AuWK08
s/rgAY3bVVx0y+uFRNGT6gp1R16Ht+GDWpETFlByI1dQRRqV8MSM0uSqBVGNRQdT/wAsUFoV
cAnWQWGainTzw3GMOGBNU6kde1e9PPGW5TB5a1ppJP2+ONCmeNh6myByUV6/64NjPVwy0Kg6
SFA6ivU/TCOetDNDJpcCjUzAP/LD9m8JUIpqIIAqwByHng+x+oEPuk+n0jMv1BHjio0nLRAO
SCv8a/uwS6qHQ7GjAaB9wzqPPGsEomRGSmoFR6QAOprgNgZhpZS4oKhFbtqPbBeoyZoWjapo
STUmtQK/TDLpxG7KWYjMimVMyMKCUaUGuXcEdjiUlOAooBmVybtniAnU1DMTUjt44mtRSlyM
z+PTInFrNJ5IySF9RAIanT8KYfsoYJRA6mnlnQ16UxS6b1pKk2R+7UTUsan6YNY+oXRgTQVY
dQegxNQyo2tiUzrRa9x44jDBGAIZshTSv+FMIhwR7XrH4t44CjGRUV8+vX6YYCVdZEdK+NPP
thGEVb1AAihxnTIejPVqVWtfAVPbEvROgaIqQUHUEnp5YW5ESRq0J0+pVFGalK064BhCMoTp
FVIHTt/piGBkOZ05noy9CMKwy6z6qgFuinPIdzjUGCQRnUgyPQU61OKrS0qhQDMioFTmK/TG
dOCcVlYEaCANQPWpyzAxkI6SKGrQdhUZ/TCcCIkbSig6qgr4ZeOFnR+yUJ10Zj0PfAfqRQk0
/Mvpr2/54hEZiUuF6DqfM0wnIEReqhNFGVenXLALCKaEWtWJPbPLFpMQdIVmqo6UGf7euEHQ
tUsGBA+00qTipNWkhaTLVk4XP9mBEWTQdFQ3X6iuVcTOiKggM33A5HvQ9caZ0xoFCsSEU5V6
+Z+mMtwpERhqIqa1B7nC1gfvQUzByyHU4WcMUpSq6QPtamWmvh9cGNwpPQCa+uulgK/gcUFM
A3tU7Dyzr3+uEC0SOKsTmPu6fuwNYYoGWhP+XTucQpKhDKqEOwzYjwwM0PqBIr1P2kdx5414
IYKCaBRUj1jGT6KjoAgJoPUQete2KHQPq1VVamhP7csajNRhWYAlCW8cu3jTCMSLkATktaaT
1y74jKJRVvMjLwxlvSdRJT0+oZdR+0YDhhE7BqqWYZsP9cTIBEwAfQQJAQK96YdiogGbQpXy
p3riXtCyus5VlAI/LTwy6YoKEKVaoNNWeXQYmpB0UMh+4EHM5ZfhhOwD0NY6aW6huhwi2EIj
oIWh0jKvie+AQbUJGpakDAQLSmlmJPRhTMeeIaIii6WBNCKUHY9cKwLK9CHFVUgimIaIorH7
QWORP0xmtSglRlAPXP8AH8fDEsMYtR60BFT2zwhLp6Mp1V6eAAHSuA4iILoKDMda5DPGsVO1
uCul1Wv8WKrACNlLFTUZBmzpn5YBgolJBqdTE5jrTFpkHpojBaaicyMq0wCwxqvqRTTLU3Xz
zxC6SxGuS1B/N2HbFq9E0YUAAgOQTSnVR1phN+EZhBOosS3j3GFSCcBh2FcshQD6eOM1UzRs
xbQain2nuBiVP7QCAjIt9qnr9MNZlJyoCLTUK0ov7K4EZI1Z8iBQkfj/AMd8R0XqdSTQmozP
h9MSIhhHkBmfQRih+A0c62OQ7jt9K40sSJFrUNT0rUnLIUzwWqGeNfSwOmuQFa59cjh1Q0qy
ahRSA49JrTpjOtGAFGbNWBCqhHXyxEQL6gaBozmB4kYmSPqYFqksT6fCnX9mJnTmEldNQDnQ
4miiRyukH0geod6+WJG0LUD1agO/+OIWEQtKUyFdTDpXEgsFXTRCQchTzxISo76ifuTN6ZA+
VcSl04icVk0Eq2Q8Mupy8MGrDezIG0qelKg9640pNGqFQRQ17mopTwriIntwRQrSoBUnBqwE
Op6My6etT5dM/LAIMRsPTSh7+WKq3AsoRqAH0jp1OfSuCCaiLtU1YEjJvPHTGqNCCCKnQ/5s
FigSpBoMzTI+Xc4lTkfmR9LHt2wYK6PbZiNeTDv5eRxm0ymQZ+kgAgEDpl4DGUJYPdUEKSSx
I6sQAPLwwrNBkXBBZixIApkKYWdwwgMa6pCHBHUjt/ri1rEsdvPMD7UZYAaiqjIAZ1ri0YF0
Lyakb0v0K0IJHh44tVDoZgR1YZmuVf2Y1Kj/AKcoqn7QBUMexGeeM6TN6SWY6gcxXphFO9CC
UyYZr5g98RiREZNIeoZgdR8zn6TjOr0zxlioCg0IHgaDpi0WaERuAwZehNO+WNMYlNuVWMM3
qr0/CuDXSCpNpchCyr91OgU964LcOhET01EZU/l/Qf54Qkit5VLyADSuQA/bhpkdJ2u+a1a+
W3ZrRKJJcBSUUtkAXppXPtjNqxwvGyKFCn/tPY06HywqiWhQHSFZhmOgriRpEZISy51y0jxr
i1EqOo9tkGquRHQeeJHdHy/LTMAUzpl2xDBBnchWyBzzyp+zESNckeule58K9cAO7BFyBKD7
ga5eeEl7iqy/cRIcj1FPPwwITVFAGDAA08fLEocIwIJ+ozoTTE0QFBViTn1A/wAcQC0kR1fc
WApqywspFU5azVGHUf5jBjUptCgqQ2R9WZof24MVO06DUGoIyKUOZqP+MsQE0g0lWckClK/8
dcRKqD1Hyp4/TEsPGGOoigXy6mnY4RgAK1OQJNSG+mJFroxDZKewHT9mAaJaEqgFSelTkPxw
LQqr+4aZU6gVyw6PRardYyxWprQjxOAkWpUEEkGnTIfXE1h6xlQq1Z26imVfrhB0WIKcwKZB
f9MBEAFcaDRiKHp+GeJaWdGD9ABpC+OJabUTUEHW4zGeWIEQ/sh+4608aYWgq6mMEerKlT1p
3xAtDVBALU6nri0DQhw5app+UdcZunCYLpUCvroK9gD5DFh3QjWoCBlAU55Gv4YoRlirjIEV
pUioH4DCIcMSKVGnPr44kEg+r3ftoD4YhTAg5VFB+b/DAUihgwGZ7ekj99cS8DLI6ir0DV+t
cMRoy+QTv1A6188SOXFaFc+xHiO1MQCgKtrjWgObav8ALARyxhxVDUeFe5/yxYPRHSiAuCSn
24Mb3A0LDWfuYnIjEJ6LILQmhH4jLCbAKB4+kghVA6AnMk4mcFFpodI0p0Fe+IUixatRVRXw
ypiOEVP3E+TCnf6Dri0YaN1059TX1AV/DES9KsSWUBqAaumJSn0ip06Qa5GmeeAwHsmgKmjV
6ePniOUMjMr1rSn3DtiVSrX3NRzPcDoCemLViMCkoldiFY0X6nt5YtWQ/vvq1FdROdV7dsEX
sRyuVWmugY0IoST3wqsxuoDXGvUG1Y3Gccp1L3y8sKpqr/DgZWDpCsaoVOpvubqBihte4/2/
fF/GOVW15fchubhFiYLFFBRAQPFgGpmMav8APxz57616TLwv4Nke5srXVt99bN7UVzd3DKXJ
HSMerUK/THK/zd53XSPib4o2eC2l3aC93G5uWA1o2lKtnQIpX04Zwztdc3wRwqbcVkJubbbI
0Ht2ds+qUsfGR65YpGp0iT4u+LN1sbwbTY3cT2rFHlklJIZepVKkN+OL65Wb1Wk4nsHxnBxZ
v022vPHGrpNPPEhuHI+4hgaZnpTD1xvyPszXAeE8I3vfb3eodtb2Lab2rWwvD7iggV1uB+4Y
1mRa9N3TZNovNrms5oYKMKKkUKJpp0pQYxeZTtjzLefjT4k2O3S65NJeNJOSGnjJRFB/2RVy
Hji+otc2xfEHxbyP9Tf2W5XX9NhbRHGpGoBfza2XVmPEY19TOiuvjD4f3GxdNlvGjvg/so11
PUkg50jpqNe2D/nTOv2vtv8A7e+C28CQ3UO4XkkmbuHWONT5DqBivKvWqS4+COIR7/JFc3F4
0QAePbbNfdnKHvLJQ6cUlZtWm4f298JubIGyhvrBq+uW4kDnPqTH+Y/jioQp8F/Fu2y2lheX
l/c3lwT7Kq1Ax+gUgftwZf2XFff2/cZit7y5W+uQsYLwWx0noPzPlX9mKRfZU8e+DNr3XZDu
DbjIssrMkUWhdKiv5vzfsxqrYsrv4e+Htj9m137eb1buVaiRn9tCQPuoEagr0qcWVr7MXb8H
4HdcxksByP8AS7JEA/6yUDUxOQRCfRme+Hnnxfh077tfxJxPkVpPt878iSIa7mHWrqp6rRgN
BFexyxcT9j7Ncn9wnF793ttz2OT+lxANbQRCN6kfxhtKU8AMa+krLU8Ysts5+8e9XtlYxbLB
qS12lSHkDdNU4UKoNO2M9c/hZg+fcN4Ns/E9wvP6daw3KoTHKkNX1H0qF60pXGZzD9nyndgN
M2paHMmmXToaY3F9tb74Rhspuc7es0cdxoqQJFDKDQ5516Y1+Gt8x6N/cHy/frSCDZ7R0jsZ
1DShUGpulBqNaL9MY8c5bK+dJmLyVrqkJNcvH/PFGrTN1AeiqSAG6g4WV5xjiu98m3eHbdpi
V55CQC+Sp4sT4UzxrGua+kOI/DHx/s9jLBfWy73vGn/5l1KG0BqfbGvRVH7TjnYurrDfGnCf
j7cuUXyb1GZLv3GFrsrxuqKEbqz5asvy41fgy+K/5c4zxmx5la2VpHHsu2TohnljQvor1Kx/
h9MU5XPtegXPx38Zw/Hd1d7XYLeP+mLx7hcFvdeQCgZm6DPsBTGeuWb8vmS8i0XDqgCqCcjl
pz8sEorUfGllweXf0l5lctHtgBMFuNVJpeyEr6gv0646SbBI+itm4xwje7O4kj4fDYbOqUtr
+4RImlSn3qho6indsY+sN9V+w/DXEtthm3ZdsHIZ7glraBnVIUTooVWOlj/ub8MVh1hedfHP
JN65DZbZs/EbTj8Ug1e6kqyKVB9bylBpUL4DPDKo2O0fCnx3tPHL0XEA3nd4Y3M92+oaZQuS
oqkBVGKTTekfDfhLi207Uu63Ozjf9yuVDRQFhHDEh6LGGIFPNsVi1RbjxHhw5l+o5jscfF9j
tog1vaxSGRLmUtQGR4sgB4Lgkxls7D4/+PORbVdOnE12mzof09/OvsSOun/yqK6gKfxYcalY
3jd/8OSbtFx3beIf1C8R/YS7LpKr0Oku0jsP8MV4/avVavf7XgvEeTbNBtXHrA7huknsTxum
aRkgawTVF9XlinEZ31jv7ndvtYbPamsrSCBmZ/ckiRULUIyJAGWGSRPnV1LSEKoFG6ZAZ9sa
OPSPi34evudNdSw3cdja2VBI8iF3Z3zAABGXnjNMrabp/bJbrt1y+08gS93C2HrtRGAur+Go
dtLfXB6NiLb/AO2SC3torrkfKIdsmmA9u1CoVRj21yOtWxbRJjLco+IrLj3J9u2t96tb20vn
UfqHYRgRsaH3QpbTlnXFNT1b5N4Nx7buN7BbWe2bfFbm6treWWFfXLFkCuqlSG8a4sU8q5+Q
PhTh+82VlFCLTj8cBCrMqKHeooqV1Jn+JwWfo568vm/tn3KDkYsX3iCHbXj91txlGlmNaaFh
Jzb8aYPV47d2/tgptk93su/C/uIgf5LxaFanUAhiK/XF6vEe2/2wRpYRTcg5Lb7fez0024VS
dEJ9K9BhvclXtYOOS52+8L6ZLe5tpTQyDRIjo3ge6sMbt2Odsnx8vU5P7nflb9KbVr63kQx+
2ZWtoi5BGmrClK4zIb1L48+47zHfOObsu87Rdta36SaxOgH5ydS6PtZCTmpyxfXWuMkx28h5
3yff+TryPcrkf1QLGVniURANBmjKEAzWmK/Ckba5/uQ+Uriwe2N/G0U8ZilJtoSwVhppXT1I
74Jca6jFbrzTk257Rt+yXtw8+3bUWO2RkVRC3XoPy9BnkMZ+zPvy1HC/mzmO07xtN5cS/wBR
tNrhktEt5zkbaWhaPUBXqg0k9KY1Z41PXsPOvmi82Xillf7fxiS0XkkLLt95cTi4t2Qj1rIi
10tQ1UH/AFxc/wCWevnK+Y7Xje+XO23u529jI237cVN9OAWWL3XKpqNPzHD3/SWqSc+RzWV5
d29wklu0kdwjAwyREq6sDkysvSmDGnpPyhuny7JsuwDl161xtl9CbnbZUKaZtIH/AJdAUl1U
ggN41xrnvJ4JbKxu+cx5Hvce2DcLyS7Taojb7eZaApGKEqTlWlKDVngtmHq+6Ld+Xcl3rcrP
cNwvpbq+sIY4bSetJI4ozrjGoU+04zFLt1DyfeeU8hvJN63qae+nIjt3vJEFBpU6EqoArjX3
nwz+Qcc5ZyHjr3Fxs99JaG5gNrce2fuibqpBxUWrP49vOZW3Irf/ANTkuY93c0t1t/UzK33K
ymqlf+7GeutdJGg+TeUfLUlz/Rea3F1AIQJYLd1jjRw4+8+2NL06dTTD9vGPrNZ3avkXmez7
HPsm27rPabTd5PbqxbtnpLVKV76aYOb6bUXFbzk43q2l4+0670ZB+kaAkSa3yAU/mBrnh761
0k8x3893Tm03KJrflrXEm+2wSCRJ6DSCoKaQo00YGobvi/H+HOWbn5aBbX5s274/YP8AroOJ
XJLFRmEAPXL+ZGpI+mCd/prqftV/H22c8vN3jn4SJotzhjYxTQEqAh+5XY+mh8Gxm1T4VfMJ
uZXXLGTlBuZeQIRFN+qU6105KEUALpPiOuHvq45S+rz/ANa+VeJ7G2+w293YbfukJglmjY+2
8DmjJMozSo7tTBz/AE2Oliz+K4/ls7bu228FjdbW6RDO+SGNlNVMbvpCtQkeYx0/6TdbvM+r
E2vEeQX2+NsC2U/9XEjrNaSIfd1aiXqD31HrjPfV3a585fhfco+CPkXYNon3fcNuYbdaqGml
iZJGiT8zskbFqDuaZYJ01ccOzfF3Id34Tfct2jReW23yOl5YxnVcxqqhvc0/wUNfHDo6mTWL
9t9IJGVagk0xMwBRAWehOeYGNIIiYNr01GWRzyxKYUiRmT09B4dSB1GKETKp0iMVbw8MWELE
aXVDmAAfpiZOmiNKA0Yda9zitOpY2Y6kA0kHMjrXGbTpgK10ZlcumfTtiMpqGJSMwaZ16U8M
ADrCjMknsQf4uuGAWpvzEt3r2A88ISKp1VOa9tPT9+AkFObBiGUfaTlTzxHBpGftbSVrmQex
xFGWKBkXSwzOkChAxAQZEiAFBl0Ne/h9MVWjQArqBLAD7hlX64EaupTU6lNR5g9cOAKkVLAZ
j8p7fswrBKV9rSyA0qTXrQnBhwm0kVBqB1FM/pXGWTsRpXQRnX0Adu+JaP26KyMoFPtp93ie
vjixbA+pSWH2nKpOeEwYKmPQVNBmtOlO+A0Mw0oCreqvYdAfHFjOjSoU6ftoPVUio8D5YsJN
IjtUgLp7LWhHlXFi8Oq6TVEJc1HXDhEuWpdBHZGFKgeNTgPwGRv5WiT1eAGWf0GAGT0AAN93
5emfhigOrFKmh/iPkfI4UOtVaRvUT1anfzwUSmEukE11ADMgV65UwSHTFlUCRQRqGZJ6fhhO
jWhU0zXtTpQZCuIkZGYURcgc9R6DxB8cGLTsVVtKnVXKp64sWo3UkVZdAb8K064dZsGGqgot
Aa6QPEeGFYSmlBQEEdPr/lgWFXTnpGXqoOn0xIlKylnUaB5dMuoxIxYhdRNB3HUYtMOwdFoM
wejHvXChskQjSgrIQAXzzI8cCtRwrHpIUmlfwp9BgoxIFdVCrl16nKuBaBpFV1C6tQNWA71x
LUjFwFCZGuan00+tcWkhJ96saaev+78Ma1EFlUBAwrSoJ8DiRkCqKnIZ5jpUYlhyVMhKjVH4
GlPxwHToWJU1AUnImhpXEoLPUciB2r3P1wHw+lgGJzCjMefXLEgK0TBXFRUCi9vPPCPk0suo
glRRemdB5nDgEzoyUBAY0apzGXfAadV9emnqI9XmDgtWA+2XUo6dB2/Zh0HlQSULAhwaV8cB
+Dq7hXLRhyOg75+eKLUboyPrGb5UHge/TDqgonSmpjmah3GZ8qDBaMDLKyUBLasiG75dKgYL
VKLSdBrUtWta5n6YLTDlSxzquWdOtMWtGJXSVLUHQHp+GFq9FVQodsz2TtQ5VwoKiQvUisRH
UHMinc4hpmZwVjTNVP3Hx88WC2nQkA5ggd/LwwsaZZSgpQKlRqbtTywWEM0oaPoCOhIOf+hx
m8tSsxuCkTsGUA9cs8sb5ZcpIqKCnauNo9Pr0xFZ21uUhY5a6EiuCVSeurjdTuMQ0azq1Mni
B+XHr/hmjrjX2PzHL42hLOhQxIxC/mBAoF8SMeT+t9UfMt+srSsDUFfUGIoSvY45yO1czDow
zpn1JofGgxpj2O6xWB7mGOaT2wWUSMOy/map6DG+edc7X1rwWfg1rt1pt3GN3soYyqvJBC4e
dyRmXqdVa4uuRusp8r37bDyvbdw2q+ru1wCFHtByBqC1DNXrn0zwfzbi9+UoL++4YlVknuWX
U+mNmXVQHUop069cZtys/Cn+C+Q8lurC72/cS6W23kJbII9OkV+1tIzIGN34Tyj5b2bc5+Wz
RRWc8muQtGFjclzX8mVWqcZ5+Sr4fjz5D2KS23GXaJrMmRRbSSrmJCfSNJPfG/ub7Xu1pa7h
tVna8g55uV3um7RANBte3glFrmFYJ/5CO+M2jHmXyJzvk3JN/trW02642Sz9xYrYnXHK2s0D
lqoMHHWnqSPX5Fh4rxaws5X1XN28cc0js0geQj10ZiQMvHLGr7WHVyB73+u7DBZtLJDrpcRQ
1KGPT0cCg654Gku+WnKL2/Tbtp3BNp2tkJv7lQDKvj7S+Jw54MZbeOfW+wWo2LiG3X+87pr9
sX91rMfuVpm5zbPtkMEw/VZ+/ebHBBv/AD3dbm83UD+Vs22KTBH5aE+9vEk0xWiuHaOS865p
fXB29Dxfi0dVnupAHuWI/wDwVR93+GKUzFZzL5I2bZ9hl4/sVpfbpcyKYJNyu1cqueba2FWO
H8ir+PS3xOsmor/8fUa0qG71r554uk+Vr2gnfSKKCQVbsK4zIY2Pw9t1hfc8s47+3jurGgcx
PmC2ekEeBxuQx9Q3N9HZ75abVZ7VBFaToS1wsQGnTSiigA74xkF1zS7PsGxxbjvVttVtJekN
KzyICSQOi5ekeQxZBbXj/MfkvdeSbI9q/FI0Gqi3wRiiAZ6gSuR/HFOppkr0P46lt774wCyW
FvEyRujCJfvKL9zE1q1cb6voru4Zefpfj0Tjb/1slushWArk2liVGmhwWBLcWG3cn43aXm4b
NBLO5Vo7SntKpr0L0DafGuCwr6wt7BJf6dJ/T45QlTaWsIqF/wDqrl+GLFqlu9l4/wAY23ct
127aLWa99UrPOvqc1rStGp5AYPrBrN8G583K+R2qnYBtksKuJL9FIjPkrFVyONeLV98hbfYW
Vrdbpb7HHvu8TJpRp81jUCmXWgHlgwvkTdBdNukzTRiKQudUSjSoI6gY1K6ePS/7eNt26+5s
pu7VLn9LE8sPuAFUlGYbPqfCuLrkXx7zabpuF7zvcNsnkM22wwLJFCR6dTZVrSh+mBz+Tclk
l49xG+utiRLK5dvcDRxjNifUxyzoMUitdtltlluVjs9/uFpFd3oRJmmlTURKVFXz88Rc1jul
/fc33baruT3dtggjeGCmWtiQ1SOoIHTEc8cvKJW4rw64uOOQRWVwZVakMa1Ys1GZh3oMXMkZ
qg+Y9v2mfge37jfWguJhJA8pzErhgGlQHr6sIvih5nf7dPwRY9m4BPtkEioBuE1vHEIEYgal
YVdtQyqcZyN/LYb1dHh/x1trcajjsvTArhEDFtYGpmPckk+rDi6W+/y3+3Ntd/se0Q7lv14P
aEklV9tXXUzl+oXxxYFZuHPds4bY3R5bv67tvMylhtNqi0Ukf+KNV7eJc4sDzf4q3nim4bjv
F5dcSn3fdLljNEIbZZ0hiNdMZV6LGWPfDZG88aH4i2TbLzn2/wC53GxjbLm2INnYyrQWpY56
VIoG+n4YzkGtlxnc7nkkvJLHeALq2srloIIio0e3TIEfmPeuGwZ45+ZbjccQ2HaLTjkUdjFP
cJFJHGg+0kVp/ubxxQML/c5se0Q7Jt24RWqrdyO0b3IFXOQK1/HCLbrwPZ9luZ9ws2uIJFtp
pkEpZGQMhYBlBIGf0xmdfgyvrfl1/PwvadiseNQQ2VnNcRW80aoPtegPT8xr1wtX2sp88bHx
i13Djt9uFkZYJ7krfe0D70ypQ6csyTWmKQS+s181ycd/9Ts49q4VPtQdwV3Ke1W30Jppp9JL
MWy+7pgkir59LgsjZsK9emWO0GvoP+2zgnE94s91vt4sY7+6jaNEE/rRQQSSq9K9q9ccuvaZ
fGmuN7+Fd6tb3ZZtug2m7iuDaWiRxKLhn1aQ6FBRan+LF9IpW527414LsdhDaw7PYMCP5016
FeVz3Ys4NTjP1VtZxuB/DI5ypaOya8eD3Y7DUvsa60LnOlaflrhxQXyBxTjcPHLqR+HRyRRD
VHcbc0QIHiaBWC+ORwfWULrgvIOPTfGsu4WOzrt222cc6z7Z6SCYlrICaerX4nPGvrhteefE
fE+H8w3Pe+V3u0RRWtvKEsNlPqhi9JYyNXJmb9gxWRb41Fjxv4555xe4u4dgi25ba4aJXiVY
5KxMK+uMCqsMZ+sW35Py2x+HeEvty33G7fRf0j94RKyRxrSrvqPn2zOHFbrybkr/ABbZ/Km1
z8dgW+2gvG95awMY4lldvT7bHMfxEfhhvPgm2vT/AJw4/tW67/wu0vld7W5vTDcRFzT2jpJp
4E9CfDF+DzPVlu3H/ifY+RbZx5uMW8t7vTn22MYKIPt1FnJ/YuCcr7eott+F/jqx5Nu12u3L
fTxxo1rtkrApGrg10gnMMRQFumD6qOD5fTifFfjG6hg2OztbrdSIo7MLHrWRxVpKj7tFO2N8
SMdW/h8jaH1qi5sRRiMgT9OwwtTx9GfCHx5wgcGvuab5Zf1OaP3gIZR6I4oVBOhCQNR8TjnZ
rW5EdxbfCPPL/adq2fbJNp3a6uBE5iQKBEoq1aFlPgMayYzZtbiXi/xBt/LLbhP/AKtHJdXc
IP6oioKhSas2rVXLBh3TWfxF8U7Bt++X+4bV+qt9vmklLPrLJCqBliQKy1Ar364rNCm5L8ef
FO9cLseY223ybJY6kkneADWbYvofUhLLXLqMWNfdtt8n4VtPxXA0W5XW1cajgRLS7siyz6Wq
FUempLE5gjFzMZ7fN3widmX5S2/9Rby3MEszpZOXKSLK7VSWShBNO4rh6b5lx6b8scQ2rkHz
hsO1bldyR2e4Wmq4dnAKqjPSGInJfcI/bivwxz816Rtvxl8c2G4F14xBbLtoV4dwmoyP6aEn
UzV0/wC4Yzi18efJG4WG4803e72i1Sy22S5ZILaOhyT0lhTL1MCcbnlZ5ldPxbsOw7zzHb9t
5BeCz2qVv/k3FdOemqRa/wApdqLq7Yu25X1FvXxR8YW1oYDw5pbF0J/XWJMkiUX7wA+uvmAc
Z+sZrNcc+OPiDbOAS8n3jb3vra3mlka4m1iV0WQxxqYwy+WR74MatLf/AIb+M94tuP77tVhc
2Nluk0Sz2Fpm0sTqWHpYtoPp9RB6YsGZWyPwj8a3Vs1m/GI7WJ4/bW4SZhIAOhFGPqxYaxdh
8U/FvCeJbjvfI7B99WG7eIyMCWRBL7aBEDKP+4k4sHPii23iPwhz3mW0xcbjuLJWEku6baqs
iaYVqoDMW0knI6T08MTUr1G5+D/jK5Etq/GYLSMoUW9hlIkHmoqc/rivMox8bcz2a02jk257
Vb3S3MNjcy20VyADqWNqVFMv2Y7SMS60HwxxDaeUc3tNu3WCe628K0k8VuDrfQKhWOVErTUc
Z6uNT5fUtz8G/Gu4Wk9geNQ7esiFUu4JSZEPYr6j+/HP6pV/AVpxSzsd62Sx272902qZrTdr
1/ULkLJIE6k09I+2mL65TLsfM3yxdcS3Dml7Nxawbb7GrJKkhHqlViJGRAToXLIVx1/DMjG2
6qZU900UHSM6DM5YLVj6k2/hnxFwf442bfOSbO+8Tbro9ydhrfXMhcIE1IoQBaY58+/LfVzx
HzX4p+Jtl5PsO7XrTbZsO7Bv/ixZqlwNDxEmjkIdVCM8/LDZsUvrRf3J3+xw8RFjcb9c7bd3
MLfpdthqYbxFIqkygUp51xrhx/tzbHR/b5ZcJs+AXD7LdTe+0aNvjvVWinEZYmP09KEleuMu
nMznHmvEti+POXfMm6nd95m3zbyiy7fcXz+2bqYAD2HNFNI1yWlNWNdUcXJjafJPxv8AHNvs
d6Ljitxs3sxF7Xe7Ae/GrjMBwrkhCcjqXGZIur4veJfCHx/a8U2z39hi3e5uIUnuLi4kKtql
UMaCoFBWgwNMhvnBvi/gPyTt8l3tx3Dbd7UfpbAsJP0d0sq0lUEglDWmZywWOf1/2B/dXfcJ
hiWzn26R+Wz2yS2e4R+lBCJSpWY19X2mmX443JGrb8R8tOUUhvz+Yx1xa+gf7V+M7JuNxvW5
7hZRXV1tUML7eZAHWNn1sX0dNXoGOV+cb3xt9u3q1+XeHcsbkm2WyNsvuPtc0VfdiKRu33tn
1jz8cGudmzVNbfBXDec7fx7f+JSDb9uKxxb7aye5VzFQv7dQfWcwaHScZs/TUT2vw18Xcn+Q
9wTYGkg2nZIY13OxtyT7lzrb+XFI5qE0r6vPphvKny13zbtUc3wducFjtJs49uETWlnoXVFF
DMqlworT0VJ8uuN84x/a3NfFrx0YBSCQc6GlT9emNVqNN8a7JYbzzrZds3GIS2l1dwpdCtKx
s+YB7Vxjv4bex/3Fchu9y51YcEAMOy7ebOR4o2NJTNp06wcv5aE6aY1z1kceuZ1cajlP9u3B
Z983Rdt93bVt9oW4tYIjqRZlLrqOrUxBEeYr1OOeNZ60G0/CHxnHDs9+uxGeV7CPVAzt+meT
Sra5CTUSHUadjhxrVN8sfCfBjxK43m02wbDc7Y6STG0YOJbdnVZAy9KhWLL54HP+nP5XjfGP
xZHsdu+28UG+bXPAtdxtJFknZdNPcoXRi9M/T37Yq62vkrnm2bHtfLL6z2BpZNphdhA9xG0M
65V9uVWoap0rQVx1mOXPVSfHm1We78x2Xbr1C9jdXkMVymYLRs4DDLPPGO24+rd05pNZ/K+3
/GVttdoOLy2yR3FuY6hlljZgAv2gLo/HBsNu1kJviL4z3fd+T8Hs4DZ8jtJ1utovW9x0WGSJ
X9gkVXQp1Ag555YM/LE4+WX+WPjjg3A/jux2t5kl5otyLiOSNXrLEcpPc8I41+0+IxTjfae/
8fLfx73sHIP7bd9G27Wu3WdpYtbPasFIM6IjGQEdaswIJzrhi7nio+Pby24T8ET802/boJN+
9xojPLm2kyiFcx0CjOnfvi4ktN68aJdl2PnezcF5fv8At8L7zd3aQXbRLojnT+Zk46kaogwB
Plipnjsg5jNv3y3u/wAcbhY2c/FobcxNZslTUQpKG8vupQdO2eK4J7ar7s2vxf8AFu5bxxuw
h/qMV/NaiaYan0m4aNCW76FAy6YeZ+111cdcfHtj5dcfH/Lt32+3beL8v+vWNdMc2mF5FMim
uoo8YIr9OmCzRfLKPZuXXXL/AJN5NwPeLK2uOMWcckUdsyEsGgZF1lj0rqyp07YjP9pdVW6b
ovxh8U2d/wAYtIReXd7JaTXUia5H/nyKsjEU1EBAKHLGuYJ5GG/uQ2vb7/i3D+YNaxW287vG
ybhNENCvqiEgYr3YNXM554Ob6z/T+fsrwKWxuoIVlkieOOXJXcEBqH8leo8cbnWrM9cw0l6M
CT5ZZDGq1Hs/9tHFtj5Bze8tN6tI76CKxkYRTCo1OVXWg7EA45dfLpz8PTuNfDPCrSXh+6mA
3RuLy7gvILg+5HMKTGJipyqntD64MY1oueQ/EnxttdpuF3x6G4e6LW1tbLGG16TrYkvVQwB+
6laZYueJWngHy23x1cb1s+88LKR2m6QmXc9qRafpZ43U6dGWn3KkFR4VGRxq83GZLz09H/uC
NpdfEvC72ziXb7SWdJV2+E/yVaS2ZhT/ALCDT64zL4138tZ8Q834/ffGG8XZ2CKJdktdO6xx
hNF77ULMWOodWC56q54OJ6r+3zTvG4bZecwut22fbxYbdPOtxDtLn3BGRQtEWUfazCtB0rTH
bvJPD/PXt/8AcekFzxbgc8Ma2ltcGQizjyjj923RvQAMitafTHL8Dqf7NlybdfjbhE+yWs/H
be63DkEcMegQx6SqAJrq4ZdQLZimeLn+fmr5qHZ/jTgezfK+7x21lbveXO3pf7Xts6qYVd2e
OdUBqCDpU9PTU0xCbKj+Zt02XjPxlfrd7JZ2V5vzCznsImTSG0sY7ldKrqMVB0HWmNfz5mq+
vj9AQoVwAtO1D38cNovj6N/t1kisPjvmXI7K3ifdrGBv08rLrascDOF7HSWGeOef7Ol+F3b7
ldcr+Dp+R8qtV3PdtoukudvmaJUdhHNGQqsB6lbUynx743k3Ge55q9g+HuFb3yGx+Qre0l2+
EIt1c7C8AAMy5kmIdQ3dQPVjFieYcO5vs+zfNu53XHtr1bNvN4bW3ikTRJbySsqyyRpT+WPd
1EJ/Dlh6yH+frT/Itla3X9zGwRXSxvFJDZAxyAEPSSXxxv8A+OLiTdaz/wBv5UfnV+I6P/7Z
S3GuD2axMksWrU7H/d6R27YrJJBPy4N7c8H+LOSy8Ut0tJbXepYEkRdTxxfqFFdRzrGremvT
BJ6rV5YWcO/WXx7yLfLVJd+95kkuJECuytBKaMtAD9gYeeYwftWeuDjnKeT738vcj4puwNxx
mCOa3Fs8Q9tkoNOpqVqwYivfGupJIZ8Knkm5bpxH4Y2U8XY20jbg1o88S6pfbEswrq/jPtKu
o9ssEkW7i93+13+PduD8h2iwhueUXFtLDupuB7ZlgMCGQSlaZoxqP2YJ8KzKW58z4/wixuBy
ffLnedzeCVptqjJmtyspOmAChVKA6QXPTDJtZrzH+1u43H/22+gjjddmu7d/1URXVFXV/KVz
0DdQPEVxr+1m+Ncc/wCvrxjmFnZWPKt6222i0WtluF3DHDX7IknZUX8FGOl/nJJf2OJ5ihbQ
pY1NMqdvwGOdZsIhiAewz+mASGlQMzUYKR0c9BXtiapkBUAnI9KnL8KjviJmqCCK1PqDUzp4
HyGIUxZDQkEgZ6j0J/1xDRFpSNFQPFh1wYtpJrRc8we+IiMgLBSKnsD3OJHKeoEUFMzppUYj
pCpGhW0Kw9J6/jhAwBWpLZ5qfLviJAsFK+gk51bsf88CJ0korDIqKk+PY5DAiIrShFTmfDrl
hBHUzBmp6QQTiCNndSEBNT0UCmdcQqUgnSuqh6kAdfI4takKikBmNGGQC09R60xGDUBioJVW
YdO5xL5M6PG2gZjImufXwwIKVRgG9JXOooa18RiZ1IVLAM1a9VIOde9cUWEhFQQpKZmhHjgO
H1VTWoJ6nT4+GEnEukioBr9ynw8MsTOgDoWLAUqKnrQfXCEo6hqihHqHf8MDcGfZQdyD93l9
MB0PvDVoC1UAAU8uxxMXo8lS1ehFOtBgalA7MTkhOda16HFEOhK6qAEEDSTWh6ZYmSKLG5XV
nloH164kXVmB9A7OmS/TLE1CcekGT/x9aj92FHJKolBqDZt5DFFSaOMURjpjGZFcj4Z4joY1
jUgLlU5160+pwDUojPue4Vq4qNDGv44iXpYFiNIXqpPn1yxYDF/cUGgOn8p8MWE9WYK4qDU0
A6imIGJ/mZnSMyKDrXxxLTq2VKVIyJ88ChpCjHXSooMSECtFGfT7QKKMQoPVqJWgX/GuRGJY
LJIwKVHbv+7EjjVqpXUxGRp174FCdyqgVDaaD19Wr1xM3TsQxDe2FB6muZ/bha2/kwj05ahR
x6mORqMJGpIpqGphkM8iO9cBPAIkagAK9QnZW8sSkJRrqCK6ich1DYtakKT3ApBPpyoAMssW
s5BvlUMSq0FR44Daj1orBVrTrpIyxILo+onOgoSB1AxUSEkcRepNZOmZ/wAMDWRK8aRMueY6
CueeKClIdLAqQ1TQgdSMNGmFSuqQEL+Wnh4nEDI0oJJaqgf8UriKN11xqGqFBqBTPFqSRe2h
AHpDdvPEdJ0AZqMDpox8cZsFMzNrUkgsR3yyGLAUjSCQK+mgGTA1Jr3xrCbVoT1EFuin64MM
DGA6BT6JBmO4/biaOhfVQeonIr0oMI0qsCygCgNDU1NcWC0jHX1dK+BA/diGkwjZaKfSRkte
lMWJyyqREVYAk1NRl1xYmbus7hwakV8cbikQknooyGYOEn1Sfur0xanfHFerCSKe2OvjXyxl
R2ceeaO4WSIkEGpcUNKZ1pjtx3ir6w4zyP5p3DjUcFnxWzawEA9i9uYxE0ilfvCO1D+zHLvr
3xSR4byGbd03OVd1ZGuyzG5ZSrKr1+waMssYi31VlqIdQCMD2HU+YHTDT9qZUU0NNSkV+mLW
byudl5Pu+z3cV7tkxtbsehJ0UFvDQFNeuNfbxYvR8g882/fYeQ3LS/rnGkTXcKyCpFBSNhQV
8qYJYrK0F/8AOHy3e2Twu5gjdNMs0VsqKOxpKOhxfaDHJx/51+RbCCPbtvZJkqwjj/TrJJXq
WZsjnjdsMmqzePlLn9zvlvuu6XWjcbNq2hEOhYyOgCdO+Lmw2Q3IfmLne/TWr317VLVtSxwx
hELDuVGROCH6tZtHzx8tXcRSzt1vI4xpEi2TM9fAlfT+7F11I53WV5Ny/wCRuQb3b3G8RzNd
wlWtYI4CqoFNQQiiv3YeepPW7zMdnJuQfLPILWKPdobtoYAAKW7oK+JAVVDU745/9ZrM5d/H
vlb5Xs9uew21HnhirEJjamaZCMqNJQ109ica6/pyueabYfmP5C2Kaa2aIT3FzKZJVu4HeVmI
oSGBB/DDOpWs/S73P51+Umtx/wDkwWcOkMHS3eMEih++QkAfhng+0Fjjt/7meeMPbSzsJJel
Ehl1Z/mpr7/TGv8AVn01t/cbzq1maS8trS4SX1OjwlQgpQaQpVv24vF6mvvnrlm8KlpdbfA2
1TlRLDAjo8w//B6m1FdX+3PGZY39Wl5Lz3mcvDpLLaODS7TYSRe1+puKsqJT8qlVqfrg1mPn
KUzCZmmRCyk1PQ16E43PTca34z5XZcc5JabteQu9vaMSyxULspBGQPXrjeeCdPV+R/3OT6we
ObZE0JWguL7UHVj/ALUPbwxz8ijLbR/cHy7brmX9alvujztraO4QhVbwQrTSMUsqxzcu+d+U
8giWwnW3stuqrT2NspIlANQHdjq0nuB1xcpp9m+fuWPYfpNm4xatBbIAHtopfZQUpUhTpBJ7
Ybiug2r54+SlSXTs1pcwQsRMFhkRYzWtCUNMZ2D1Sb1/cVzPcLiJYf022w2zhmt7ZSS5Hiz1
y+mGYLKtm/uc5FElIdm25bhlAM1ZBrP/AGjMftwWr1x7b/cdyi1knbc7G13P3H1rrDIIx4JT
tjXikqLcP7keZTXMNzDa2sVlD6ktY0LRknprYnOnliyNWK9Pn7nUO5y30wt5Ful//FnQmBCD
+ToRWvXBLGbHn3IOTblv27TbhemNJJqsUiXSlPKnfDIuam47yXd9k3CO826d7e6hIZHQ0y76
vHwocdPw1Zr1G6/uU5VPtTW9jt1rYXbD136gliOhKKfTq8/2Y5XGPVXxD515LsUMibgi73A5
JjiumbUjk19LZ5GvSmG2YZuoN7+dOdbpvNvercpt8NnJrgsoAfaFOgev31GRrlgnUNi83L+5
ff5NvktNs2q3sb1wUl3BayPl9zIv2gjzJphmDcV3Efn3kOz28kO4W0W9iRx+nW5f25I2Na50
bWKZ9MbyU8+qTmfy7yrlG5xXNzItra2Th7SyiFYFYGoJBzc+ZxnwxPyj5z53yLaf6beXMcVs
9Gm/TR+0zhT0Y1Y9RWgwbAueJf3Bbhsm1/od2sot6EIH6aSVtLJXsaK1R+GKtO7av7j97Tkc
24blZR3UMkYS3sI2MYgQmuRo2pmHjh8xj137t/cBsd1DN7fDLaSeUEGWf23oSPuY+2C37cYt
h9YbjfzRy3jEV7Fs6WyR3cvu+1JGXCkigNaqchhjVQbB8vcr2/lk2/ST+5cXZAv42B9qb/bo
8u1OmN82MXa2XJv7ktzu9plsuPbXHs8k2Ut1rEsjVyYoNK0Pma4z4UXGv7jLza9vFpvm2rvc
8I/+JdO6q48n1Bq6f4uuKSVc3bjKb38tb7v/AC7b9632GOez26US22z5i39Jrpy6k/xHDuRr
rn61f/JPz1d8q2uDbNu2qPbY4JUkednEsqkA+mOiqF64uZGcrq43/cpe7dtK22/7Su8XkIpZ
3cjrG1F6awQ1afxDPGrOWdsZDffmHl288msuR3ckQm22US7dYhf/AI0IBrmhzZj/ABHPFLJ5
+G/r+UnP/m3l3NNoXbb4wWtlXXPb2oNXYfbqZiT+GDwSb8vOYnd4lJBQ9gaGlMEp16h8X/L0
3A9uvrWDb0v2vSshkdyoRlqKEAZg1xYzmMim47y+6vyEoyC6uHufcEZMZlrqCgkdAcF6mqT6
x7RF/cpx2+26D/2HjX66+iorMjoY2NM3CyD0g+GKxXpV2v8AcFapyaW8vON2n9LeP2Esolj9
1R/F7hWhr3yphk0urkf9y8B2a42zjGxnb57hGj924YOIw4oSqLlWnnjfPE/avqk+M/nZuMbF
Nsm97X/U9vk1SIsbqjKzn1q4eupTjPdm+CT8Onh3z5Y7Hvm5MmyRQ7BuJ1ixgYK0TCvcijdc
8YMXO5/3KbPBtzWHGeP/AKFXcvNI5QIMwW0ogzZvHDImB+Wvlm5+QZNvSGw/RRbeGKxhjIzu
wGo6gPtFMhhtyDPVZ8Zcl4/x3kf9T5Bta7rbooEMbMB7U9cpACNLH/uxj7aZ17j1flf9xvEt
yS2VePvdXtpKk9tJLIn8shhVkZamp6Y1IaynJfnf+q8+2fkK7b7Fvs61W2Z9bSGpJOoZDrTD
+BOfXqnBfmnhu973uF5uDpsu43Mccdqbl6xNDGCdOv0rq1Ek+WMtWOX5q558a3HDLqzu7m13
vd5AU29bUBmjkPRi4JCKO+efhjU5v5+GNfKESkUYZAdGPgfp3JwU49p+KPmm24tsNxxjf9t/
W7ZIzNCIyBIBIKOkiv8AcDixLHe/nri1mllBw7jNvYexcLcPcTJGlCp+1RGK59zXDOYWgH9x
XAJbqHeLjjkx39E0LMGjIFB9vu1rpPb04LyGa3X+45Nx4xv21XG1st/u5lEM0bj2oonUKA1c
yQB+OGQZ4qbj5ys2+J4+ErtrC6WJbdrtnrFoD6vSo9VcUVmublPzXZ7t8U2PDYNuf9XbGFJ7
uRw0ZEDE1UDPU37sZn+Vu1xfDPNeF8Uv5dz5Dtst/cpRrG4iIYwEV1ERkqCzeNcVmul7ej8i
+YPijk/Idnu77a7mJ7OeNptwlAR0iWrqjIhYsuognwxZrE9ew3HNOG3dqz3G9bXLtMqEyJJK
hZkpU5Fv/wBnFi18T/I1xxm55duMvF0aPYvdP6MsCBTLUVBz0aq6a9saki5lqHgu/W+xcmtd
1urOHcktW9xrO4qUkqCM8u1cvPGuvhZj6JsP7j/jHbLVp9r2m+inYM36EMFj1d6AuVGf+3HI
attq+XuBP8Zz7hyE28r3s8hv9hUqZKzylgixsRUBaGuNSb8NMnvn9y2z2N1s9rxvZjFtO2sC
6XJ0uy00e1EqltPpJ9THDOfB6uov7kfjm2vW3G12W9/qFyaXkrMgVfoxdl/YBjNSk27+4ThG
42m5bRynZZbjabm6luLaJP5lELawsgqmanuDhsSquv7geH7Zv20z8Q4xFZ2G3KySzSqsc0kU
goyKI9XbOrEmuGfzHutFef3BfErLJukWw3lxusyljHMQEJIodRMjLT/6cP8AzM98fNG+bgt9
ulxexwx2sdxK8iWsNSkas1Qqk50UeON6z9fq1fxP8hT8E5Sm9ewt1F7Zt7i3roLRyMC1G6ah
QUxjrDLXu0P9yfxrtt1Lebdsl4bq8at1JVQS3Uganbv2yxj5Lzb46+cP/VOXb5uM1ibja99u
ZJ5rZD/Pjq7MmlvtJUNQ+ONdT8qc5GI+TuQ8S3rlVzecX2qTatvYIZIpj63larO4QFgq59K4
Z6xuMnH6H1AHr0HQf9cVmtR9E8X+eeDX3DrHYOcbJJejbQotnhUMj+2CiHSWQhwpzzwT+Z6v
5Zf5d+bbbmFxtlrtNh+i23ZpBJb++aySGgBDKKhVCrlmcX18U/bi+aflq151NtFxZ2UlnFY2
5jmEhBLSOwJKU/KNPfBzhxL8SfNNjwfYt92m+sZr47pV7aSJ1AQiMppYH8uda41OZrPrP/GH
O9s4vv8ANebntcW62N3E0FzbTgM6oWDB4SQQGBHXwxdcHnl7Ruv9x/x/YbBdbfx/bLlpLmBo
oorpv/iqHGkl6vJQZ5DvjMk/K6cPH/7g+DX3F9useXbTPc3e2IIUe1r7R0KF1FQ8ZB0gVGeL
PU82+Rfk3auRcytN22Pao9ptduCi3kahlmdGDhpVHpAVvD8cVi5X3yr8x8K5vxWE3eyyQcxi
RIkvUcGFADVqZ1eNqnIioOHmD+nNnseJ6BIxMeaKRVVFfLphtZ+rd/Ffyru/At1mksUjntJ9
KX1nIDplRa9+oZa+kj8cYsbl8ejco/uJ47Dxu62fgWwrtTbqj/1K4uFCqplqriNUJ1tQn1HI
eGNTmfIxz7x/ctdx2XHrHido2z2u1xp/ULRgmmZkAAiQitEyYsaZ1xSKbVPzn5wF5yuPkPCY
5+PTT2yw7mwK/wA+Sur+Yo9OX21698WHMd2+f3F7juXxXd8YvvfO/wB3WF92Rl0SWzPWQMAA
amMlDQUph552jq68MJRSDlToFHh9MNmGYtdk3a82i+t76wkEV7byLJA/WjIdQH7sc7zpr1r5
b+RuEcv2jZ+T7f7thzWzaEXto0Z9uZB3Eg9De2RVe+nLD+McP6fz/wBp1G923+6HictlHd7t
s8zbs8S2m4tb6faeMkn0Bm1UJJyp+OMvReUX/wCdFxlr47OuxzNxhIRbkOwFxQCgyDFdNMqa
q43efNZit5l/cBwy54ZunGtj2e4tY7mJVtpJihUP7gdmlGpzp9OWZr5Ypx+x18LDj39xPxta
wx7ldbDdWG8sg/U/oABBIwUCqJrRfVToVy7nGfqdeD/KXNG5hy+93/8ASx2CXOkR20RqSsY0
qZD3kYfcRljc6/DjzzPtaoNl3e727cre/s2eK6tpElhdcmR0NVYfji658dZdfQ21/wBynH/0
Y37c+NLc80t7ZoV3GEBIpFAopZjmlScwK96ZY5yftXxn9n/uV3O043vpmtlPL93n9+23WBFC
KpCoI2Q1JESr6frXrjpk3/DU5uaquWfMr8p+PYeP77bfqOR2t0r2u+ELqECn1Bx9wdhVadD1
OCYzedeg7D8+/FljxCPYn43MttcRBdxs4hGYZH0hZHqzLXVpB8cZxrpi+G/Nu1cduNx2Kfa/
13A7yeR4tpuNMk0KuagK59LdMwfwOL/wxzxJPRc5/uH3Ldrnb7TjVoNj2fapkurBdCGUyx5J
rAqgUAsNI61zx0+skPN1ov8A853ZItsn3W04wkHOL2FYZ9wogtyV9Id2r7hApUJT8cYnMvyc
9Zrhn9wt5ZJebXy3bl5Dsd/NJdXMTBA8csja/wCWD6SpbPT2PQ46dfz/ACrzgea/3E75uu97
Y/H7ZNm27ZpTNtsQClySmis35aaSRoGVDgkmBpJf7nrCPZrm92njkW38u3FBHe7idHsGRRT3
KU9x6dVB/EnGeZN9axScC+eLyytZdq5LYJyXZp5muEtSq+/FKzGTVHUaSuqppTLscHUxYzHy
18wX3yBc29pJbR2Wy7dJI1nZx01AOunVK/8AFTL05Y1kkY65tbn5k5/wHefibj21WN9DunIL
Y2xWWOExtBGiaZxJl6CR6aDr1wcc/ldx8+KI1JCCoPqr4Dxzw61Gy+MPkLcuD8lXdrBYpXKG
K6jlrpkiJBK1H2k0FCMHXsdNe3j+6PjCLHFDxd1tYQZ7FPeQGOeramyWij1dvPGWLHnXy18z
XXPtg2iyn25bS82+aSaaaNwySF10qEUiqkDrU/TDB6wvFd8j2nf9u3O7tor+KymSVrOWhSRV
NShqCM8VjVr3Hlf9xXCN941NsdxxBntmhYWyPLEqwPpIR0otVK1/LikxnvnXnvw78uXfBZLm
2ubQblsm5rpvbNyFzCkalqGFTWjKRQjBf8McfpVWvN9lsfkw8l2/ZEi2gzm4h2Kch4wCtJM6
ZAkkoKZHHTvqWN8W8vU+bf3FcW5HxqbZ5+JvmhW0keeKlu9KJJGVSqlfAY5TFfWB598vTcsu
OOXb2gs59ihWMun8wSOGU+5py0LVBlXG58Yc916bwz+4Lju4c5bduRWqbY1zYRWSX6H3Vilj
ZmdwCNQR9fatO+MKrn5i+T/i7duE3+0ybjDyC/uUK2CW6BZYJ6akm1tQKoIFafTHTjn1nqb5
Pl8t7fsO87mkwsbKa5EK652gjLiNB+ZtIOWDrqazzzcytT8X/I++cM3LXtDrNb3Hputvlzhu
E6EMvUEdiM8HUdeLL43HyB898kvpbXarHbf/AFuwtGjmm28r/wCdtQkQOCqUiyqFAz61w85i
nz64d7/uJ5bfcrs98tmO3R2RVTtqStJbF/8A7XUvpr7ncHp2zw5MZ+Dce+Z4LD5J3rlv9Cgl
i3hVL7aGBMbjTqkSQjJ2dNROnOuMdSYObXZ8pfOlvy2ztVt+PDbt3s5Untt094NPFpJ9Ioqm
h8zjf87MVtWu1/3J8yuNpG32u0x7hyVk9q33VBWVo1OeuBFOtvxp3pjnvq31lOCfOHKOObnf
XV3/APlez3OZptzsLsj+ZKTRnQ/lcUocqY331rUR83+duUch3mG6tWfarXb2STbrKB6iOROj
lqDU37gMsM+PBxffWhu/7pOWXGxG1t7C2tt9cBLjeVFGZAOvtgUDeGeWDmT8rr/DP/H3ztv/
ABYzWdxDHve03DmWWxuzULKW1GZHIajMalhSmNd++n4juH9xXKTz2HkUqJJawI0VptWphbpF
IKNQ/cXbI6/8sFzPFGk3r+56G9sbvb7jiVnJBewtHKryEhldSG1aUr9MEkZ38Mr8dfNEfC+E
bptFlt4k3m8dnt9xL+nSV00kAo+qLrGR1Jw3L01fJkeVX15Jd3Es9w7yzznXcTP9zyE1d28y
2NXq0c/6zHO2kLR2Nc9PTGRoHeqhA9HObGtcsWLQq7LJ4pTr/wBuJeiZmdTTpXNfA/6YyTKJ
FEZBFRkxwjBlBoqp1Uzp2p54jhwxTNiFBBBHXEKg1ESiuaA5H/PCz6mamhWipqPcYFaTrLq9
NAT92fT8MTU9GluNOXqI/N4/TETj3AG1ZVy/dkcCDKsYcVJ9XXvn+GIkJGWqGtBUDPse2Fm0
8bVXQx1FRnXL/imHEUcnoJXNgaUYUqMCSlgEWlCw6+Of+WM1Sm9IUmvp/OATiaN6CyqwFD0H
YYQJhHX0UFftzzAxEoNVaVJqMz/1xaxlRxx5e4T1OXjXFREjqaA5qeoBp+3FhtPGxKkn0CnX
PLPFWpEgKDSCaeBOX1pga+AgIGBPqHXxIp0xM0ccapGHJBataDvXAT11AsRT9vfwxI1F0sCf
X3J718cRLT0rRiMmp2r0xYMHoRIySNZkagPjTww4bAOzqQT3yAxA/wBsYdsiagjvgR29tVJr
n0UtmcDOFrKae+oamPhXplhUhLICpelVzGk9MsRwCu3t+n7B9oY5gfXEUhLggADSRUCla+OF
GBRqUGamg8BXBYNGKlyxUEj7SK/hgMMBWkn2r0bwI6HCp6kUaSuimhqGtOh6DA0j1MAV8GOe
Fk51ABuh7Z4yjVUPqIIZuoUVP7MQ0Slya0oo6DuBhw6Uj/lQ+oVy64gH741DjRIRUAdMWHCX
KpTqaaqf6Yhh29xQW0nI+k1oRgVghJ7hoczSpP8AnhRKaalagYD8KeOAaRKsi0FadQRn1wtJ
FrUKaqO1cRpAlQcvStSaDxxLA6tQJQaSOtcgQcCC5k1gV+4UDAmn1p3xH6pPWNOlS0f7QT/l
iWCZ0WLLNT1c/lOBfZEJgg06mbUQWNPHEvsNhkKj0Zfd3PlixWl/LK6i515+n/ng9ZtMV9sk
sK160qc8KJVcFR6q9VNfHEYjaLM6Xqyj1LmwDVxIaydA59enpiWnAIJ1mrD83bFTIdjkDWi9
SAPHLFp/ISDVY2oOtG6eeLCYwCQj/aDQ1xYLMKqMUXTRehJGQPjn1weimVZGbRqooOIYkWIB
jqIpkxGnofriakLQrErqybMimY8CcKvMM0WgLqJLAUReuWAfACi6xrBIANAMs/I4lrnldQXo
pJpQrnTLwriitZi8ZDO9Kg1yB7Y6yBHkDUnPEUvuj+JumBa7LVCVK6yUGdPE+FcWB2cbWm5x
qGWNnelWFRQGtPA1x2/lztZ66faG/Xt1H8ZwyRyywn9KikKxDBdFaAg9PHHHuZT/AD9fLdzc
SfqWMiiN11a48iatmc/PGI6VyMejsaMRQDr+Bx0xzdNha/qLiGFJKNK2kZ0AZh1A+uKc6NfU
3xh8UbfxS2ttxmsP6pvdwqu946gxQhhkIlOXT83XGbGlT8wQxWnKNt3PedvN5txdQlp7gjV2
XqWp9w8Vw8cpe/KF3HL8fUtbdLa2kRXijiVVoKVCrlTPGbMqih+AN02A2V1bWmzpDfJnNfzM
JJZOxXp6APBTTHXrmYra8t+Z1duX3XpoxIf0kLke1PLHLcDE2KwJNDcTRNLBFIrSxCoLKPyk
jpXHTnqfl1j6n4dyvke62FpcWNjacV4rboDI84q8q0/+yFFoP9xwXPw52Mzzv5qsdn3b9Pw+
4ju7+Q//AC7xohLGW6AAn15+RxkNxbcs3a34Ad25LKkk00etkiURAK3RadScVicvDOeNvy27
bdPY7LtgPtm0k9V1Ky9dIoq5/icORbqXfN7G078BtWwtum93wAivZAFVT4liCQopnTFMQeWc
msNt4rPFzPdLOfcZUJSwtQCNRzCgZt6fE4LzKLrj+HLriV7sNxPtWzrbzVb9VdTqJHkelepz
oB+XGuuMUeYbWeLRfKJfe7Fr9GmP6O2BC26ylstYPUDwP44eedM9b350uBt+1WNxt0CWUkDV
VoVClSKEaSB2+mMySU6v/wCp7huXxe9zezG5nltS0jE0J0jyyrho18j3yk3E2oZh6Cvh4Yoz
IveCcZteR8os9onk9iC5KrI/UqDkSuNVrMfQn/3PfD+03drtLWdxc3twtYtUz0oozLFSKA45
3nS44vgTgdpcXt/uTXD2CLqWzgYgIi1J1N9zfhgnFGsXzW0+DRsjtx/3Ytw+1YwXY6hkNYc0
A/HGuZgt8ej/ABdZbb/91/s2FzNpaOVZwqolJFBJoaVONdQy7FnwWLbR8cEX0vs2g91rnTQv
kc86VOM4rWW3f4l+O9x2SPc9pjksraRtX6pqyOQWzIQ51r0GM/U6tNr+CvjqW0VBtV/MXUH9
VdS+0Sf4wFIoT4acOLVXb/AvBtrS93DfJbiWxgJZbaE0Cp/uYDU2KxfZFxvjfw3c8p25+OQN
PcoSWtJy8i0GYJR6gUxvPGt8d/ydwHhELXO/cidrO0jjpbWNigUu/idI798c/pvwzOnzHuH6
L9TK1orRw1rGrZsq9gx8cdPgfVrfiLhW18s5XBYbgzC1Ue7MqmhcLmEXw1dzh0yvpFLPjsu8
TcLtdks7ayjt/cEyRoxOoZL0qD/urjOQKqy4Bwrgtjf8hj2xdzvkYmJ7qh0Bjp0RghlX64Pr
qlTbt8bca5hFte7XNsLQXASV4LZFjDoRURmnQ+LYMw/DuXbeL3u5XvDINhtrS1tYEczxxpq0
nsGpq1eZOeH6xm+qe04Fwr4+2e85CdqXeL4MxaS40toQn7Y1YEL5nFIVD8ufH3HrzYbXkscK
WU960WtIkVYlSQArUCmYw4LMrN8o+OfibY+Grd2m/tfb0VBt4kkWVZZH6qsQGpB51xSftrqt
px7h3EPj3htrvt5ti71uN0qvI8wU6PdA9MasCuVc++LBasOTcK+N9tvrHlu+WapbuA/6KBB7
LTP6o/QvWngO+LDK0Ntb7PvGy3V1vfH7TZNkKkwSXKxLMUp97KFX269s64JzGa8S4h8c/HHI
9z3e+3HfEsNotZtFpbpIkLSdi+qTt5AY1Vy7Pj/4r4Zv/O9yggna94/s9DGBVf1JrQAn+GvX
xwY3z+3ph47w/ldrvGyQ7JbbcNqkMK3UcaBi4FAVKAMB4g4LGarrfhnAvjbYbe4m2ld5vb+Q
QzXE6oxq/wDArhlRQfDPFIIw39wHxxsmxWcG97aotpLyXRJbKD7YZhWqeFaZ4sF3XiVjbNd3
CRRU1SsqBz6gGJAJyw60+pLfhvBfjLZbGW72pd53DcJFhnvJ0RzrcDJUbUqqK9B1wyC38Mj8
w/EPHLTeNouLK4XaV3icQ3RA/kwioLOvgBXpg3Bes8UPyf8AGPxtxLjcN5tG9vuG5MwDQI8U
3u1++T+X9tD54Gr08cnClghDKWFSmNSDrHo/wf8AH1lyzlEMm5Swx2Fkfdezlejz/wAKBKjV
nmfLGetPHX5e7/N1nLDxGLarC+tNrsbh47ZbERKJH1MAFi66VH5qDp3wxZtZY/25fHWxWtvN
yDfbgGfTEASiB5npQJkx69MGWqz1zSf2xbbJyv8ATpujf0aOITS10vcgk0CUGS1/i/dhmyGO
/ev7auKybRO2yXF2t1DVglzkjkdgdKkfXGcs/I0Oz/2x8Yt9vgXeb+7mvZqNItsgEa6h9nRy
aV+4nFedOvP/AJF+CN22Te4du41DPuyXUbTRRRgNKkaEA+4MgBXKuEay1/8AEnyNYWr3l3sl
xHBGuuWUp6VHfVn0xfZV7L/bBsG2txzd7mT2Ljcnl9iSOSIOY4wtVBY5ENU1A8MNnqvwy3Af
g+35juXIb3c9yazgsrtreFLdVbuWJNfTpA6Yuo5/z38rfeP7dePXfH3uuI7017NbSGKQPo0M
1aMNagUK1xnHXVht39sHE4/09lu2+XE+7uBLIkKIilR9wSob9pOFWvKvmrYOM8X5eNn460xt
44h+qWYltM3dQSBXKmNYxOmN2DZtx33d7LaLFFN9eypbwoSAgLGldXgBmcapnL6Y4j/b7w7Y
N/2x9w3cX+829Lj+nSKgicr1KofUVB8cY9b+yH5W+F7Dke579yGKZLP+nWqFLWOMBZGjj9x2
YrQioyywOee6wnD/AO3j/wBi4Xab826+xJuMmlLfQGCRmTQTrJrUUrSmHa6deNZL/bHxKCeO
wu+USpf3NVtIRHGrsFGRCFiT50xmSs/LO7J/bdaNv26We9bxHaW+3soiEA1zSKw1B9B+1SPr
nh9Tu3n+162R7CXZN4kntLudIbmS4j0mJGr60XLUPI4vYPFpP/anxeNgU5DOkEJP6tmjQsCR
kMjpH44rLVWb2z4TtOPfK+07Nu99HNtkhFzZPLGNN17ZqsTIOjFutcsXrXNn5dnzdwa53r5X
2nY9otrezm3K2UqYgI1IUsHlkoOqqKZDGviMTdd9z/bbxGEnb7nmHs7xInpgdY1WunoEZ9Wn
HO82nGW4V/b3abs+7neN/jtbbb7g28b2hEzyMvVyK5Rn8vjjfsXFQfKP9v78V2RN/wBp3Ftx
20UWYvGIpUJPpIAyIPTFtHfTVbdwLge7f2/yb5FsqJvNtA8hvjX3XniehkJB+1h28MXPO0d+
cvOvgrbeM7rz62sOQbbHudtcI0NtFIaoJcyZCPzUAph68b/l7y1vO/h7adx+aYOJ7Fo2mxvL
VblyoLLFpVi+gE510/bXBuRmbtZzbPg24vflKfg0u6COO0jaaa8RdepAoIoh6E1HfFRz78vP
eZbBNxjku5bDLcCebb5Wt3dKgMAcmFfI46Srn+nqfhHE7jlXI7XZIbiKzNxk1zctoSNVFWY+
LEdB44LcdJHurf2pcfuIZILHk8s9+EJ9uWACPVTxDVA/bjG1j65WP+FOI7BF8nXXFOXbQm43
ISW2USENFFLASxan5iwWnli66trU9Zz5x4ztPHfkzctt2mP2NvQQzxw1JSNpIw5VSfM5DtjV
+HPm+vTeX8L4NdfANlymw2ZLDdI4om92N21s/ue3Jrb82ognGeI131fMUvC/7c9q3HiVvyPl
W/DZ4Lz1WSjRpCP9pd3IXU3YYrt+G+5Ik3L+12/XlW2WO37nHc7NuEbSJubj1qkQDHUgqGYh
hpIyOKWyMz59du6f2zcLeC7h2Plsc+7WwdjYzGJfWoqVajalz8RgkrPny4OBf26bNvPHY953
vf8A9NLcu2i1tlE4TR6f5lDk9RWnhh+3Tr9/Dbj/AG6bbsnMbOw3ffVtuO7kjtabk6gOWQr/
ACGDEBWYN91afjg2s76vPnj4Y+P9p2ZNz2m+i2W8hiVY9tJLG7KkDUmZYP45UPlingt9cHE/
7ZNsvON2W67/AL5JYTbgolgt4Yg+mNhVA7mnqp1GGdVqxxr/AGtbuvNxtX9TT+jSQG6XcdJ1
eyrBSgir/wCSrDvSmNfeufM/b0346+GuFces9+/TXtvySzurc28wlSN5IigYshoWpX8DjnZt
9XXsfH1xVHIC6EY1AqSQpPWnbG78jjbzNbz4i+JP/vEn3WCO/wD0k22wLNGSupZJJGIVDmCF
yzIwW+tZ47uA/Bm4cqsuSTy3BsptlR0gDqfbluELa1LZUChKZYtu4d3nWy/tQ4psl7d7xuF4
0N1fJF7A22eIMDC2TSqx8T6Tli6+Rx7zv7Zn4/8AhSLnO/8AKLVLgbcdouXWEEGRFLTuoiqC
poip174zfLjH8b/q0XKP7a9rtth3G+4zyEb1e7ShbcbE+2jqEBZqFGbQwAJ0kZ4brpOmu4T8
G/Fe6/G0l016l5PeKJ33xToa0cRrWOjNSiH7tXXBNN/w8r4r8IryS95bFZ71DIOO5290i64b
lvUcjX0rpTrnnjfu4xO/Lrh4T8QXPK+Icl5At4Lc7KpaG1K1EwRDJIpavp9K+mgxXqymdT6/
Z6BsH9sHF7zY7C5vuSNFd7jCk6R2sQkhHuZqqvX1aehwbb8ta8v5X8f7h8e89tNs3pEvbFZI
7mMIKR3NtroVr9wrQhvPGetxxvd+2Np/c1wjjGwz7DuXHrAbeu5QOZraGgjJGkqdPZqNn2ON
c5Gf6/b7ST4eIqJs1U6U6MR2FcdNd5K9U+G/h2Lm0G5bvf7p/TNm27TFPMBqf3Cusn1UXSFI
qScsYvVtyLPyt+c/Aez2u0DcuKclg3qNZore8g1xBkE5CIylGao1MKjwxbYbd8aZ/wC2HiUM
dttG4ctFtyGeFPaswsagzEfkUkO6EjGPrflnr3xWcW/tae+bdYN63JbS52u8WByiEq1sYg5l
ViR92qori+1o4yK7nPwBtOz7RByDjG+R7rtTXUdruEzaaQl2EevWrEaQSA31w7Yr1j23ZPjc
cR+Mnstrsds3beWgc30lxVILsNq/+0Oa+ggL2/xxcT9r+3sfLXxTte37hz7aLO+uBaWxvI9U
hUOqsr6lhAP8TAJU+OH+nP4Z/hLI3vzXwdtz+c4tk2y2gsZN8WD23ApG8hWryuAOvpNaYviK
b9r+m02/+2j4+TePYn36S7ksU17vtvpjIUplIDmyeohvpjOV0ebcv+F9rs+EX3NNq3U3u32d
/JBAANBkthJ7aSqw+5iT0pjfOxj38rDi/wDb5b7px/h+8Sbg6rv8zRX8WkAwrRyhizzyjNa4
Pta3JI01/wD2wcRPvbHacqA5O0ZktrOVY6MFBNWQHXpK9SMEl+Tv6VHxx/bxsm6bXPuHIdzl
tby3uZbGSxtk9xoZIG0sJCATVvuXyxq9Wndnrm5b8DbBxXk2xyX+7yjh+6yNDLuLx0ngkVaq
jqRTS1Mmpl3xzu5g31rfnT4i+NbDhMG7WVzDtO5WlsFsF1BU3DQgOkqOsjLnqH441xF3Xnv9
s207TP8AJtrNdysJoYZWsYyAwMgU6latQAUYmvlh6tPPkG3xpHyv5537jsCxbdaLdzSypF9q
QqQ0ntrlm2rIdicPc8c/5T51sb3+33453mz3C04fyFrjkFhqkktJyrqPbOlopFCqy5+mvjjH
PN5/Lp9pQbR/b5wGPiu18h5LvEu3Wl1bI1yupE0XDE/a5DVWlRQCvfD7VbPw8z+afimHgu5b
dPt91/UNi3mJpdtuWNHBjClldVpXJ1IbuMb5njF79x56Gcn1EVJAqT3PnjWL7PpLiv8Ab7wD
/wBHsd75RfXTLewx3LzWtTDGrgFUbSshqCcz4445a1bjg49/b9xTdObXdja70L7ZJNvN5ttz
AVMisZBH7c6j7WRj5VHhh2xSurkHwJwC/wBj3Q8K3prje9gBfcbaRhJG2hSWjqAApOk0IJxS
WfIurHh/9vPANx4ttm43l3e3l1dxLO01uvpQydYwpViGTpngw4rrH+27a4PkS827c7yWXjlr
ZjcIyoAnmjZimhj2KMDXLPFZcH59VXMPjD4gvdiurvgnJF/qVihmmsrmTKaOoGmMuqaX8KVr
h+la5bvZf7avj+HabH+qjcLu/liWSSe29MdZFGQUK9NPnjOU2vBPlf49fgnM7jZlnM9oyLc2
c7UDGGUmgYClGVkINOuOvPPjj9t6xjraYSTiOU6kZl16Mj16jDXXmx9pcA4ttHFPjuVuP8ht
FNxIJ13ydI/aBagEUoLLkOlCwIOOfPz6er+Hyvs6bfc/KKS76sa2k26O13Lt9Ah1y5tbqBpC
u2Y8sdO3Pjm69a/uH40Nz+WuObbRIl3OzihN0oAei3BTXJ/Eyg0WnbGPiHNrbP8A2z/Gn6db
CW33FropoN8j+jXT/wAhFNPXtgNjxXh2yXHBPm232qb2717G/FozOoZZIpwF10b8xikB8jjX
XsP8/wBNT8o8D2vdf7gotktgtlBu8ds9yYwKCSQMHdV6VISp88Pxyxxz/ta9K4bwT4m2LnsN
lsk8sHKNrDG4tpCzrKrR0JzGmtG1ek4xjWqLZPhvgm77nyveN0gkvUXd7uNNvsvS9sokLH0J
6vXXp+zGu/cHPjyL5t4Vw7Zbu0vuK37/AKa4f2r3Z7gOt1bOBk5SQK5R/P8ADGuJ4z18vLZM
gGGQGTAHqRi1ogtVOkU8G8K4gQLyAgtQjpIOtB2xlqAErEqDqK0NPrX9+JmcidwKBBQAULHM
/SnhhPV8AWzqBUEUqDhxiUCPRqPQhhmfLywtCjpp0svqJJqMvxxYRJGVYIDRSaahmKeGeDAZ
nJalRpHQdBnlixbSC6WKk5joo8PEYDDmOTSaUrkdPj5jEaBmDaTUg9z2/fic/hMEzA+7qaD/
AI6YrDqJSpIGS0P3dv3YhmpFRSn0OYHjXE1hAuVZlyocz/0waKTmsYIY+ZrT9uLVo2RAgaME
9hXpTE0Y6wp01oxoQMhTxw4rDMaS5LnQBhigpVDfb0H5u30GHAMfyk9FATWh74xVTamqVbJy
KdPHEfRFm0mv3Ll4CngPriNPriZanIJ9/SoxYtSAVGmEgJ4kVxECopVgxLE9DT8w64dZ+pdY
/UBTqOxGEQQJCVOVVoo69e2AmoAat6g3Q9KHwwE6vnpLaaZ6cumJbp5NIbWBXQftBzPn9MGL
CeRmlrmaDI0AocUUETrQh8x1JrQ1xNHqBQBsulD06YkILmnq9PVUzr9Rh1BdSCuomlSwHXLC
DuHBBIB1dK9vAYBTkEgAiq9W9NTnjOoswSaDLqw8MSpVkC0YAuBX0+eJHJIBLUzGZP8AphQX
ZdICjUwFQSKHPvXEhFGSMAHU9Klv9KYSZGJjKlvUo1AnrkcQOjl1DU9LHv8A6YzTo/SX0mvq
yp4eGeKrDDUkRSpOo9OpxItRZylchSoH7emLWaauptQBUdjWv+GJEJf5hC10uKlPHCiYxhsy
NNDUnoMsTRFC0YcDVlUHplgIVDP6EybI1rT8Ppiqqdw2mj+pa0IBz/DEKjMkagZ5nKh8PpiA
zCxHQDV1FK0p4YDJCGsMO5FSqjrQYkcqftd6lslJyp3wtmYPVUzIbrTsPM4maf21I0ilSMqC
gNMssWE7mQKEQjKmmvanhgwnDUZRmQc2P1wi+HiJZfbJ0VrXMio7ZHvgjJe2oFGAzPpY5YkF
z6fb6eXWpxLTqg0EMCCp65Uz6YCQc1quYIoKmlKYkBnYhSFrX1A+Ir1H44VBh3oysdLdNQ8T
2xE0fuFSNJDLkXrUUwI4p7RRiKLkPH8a4hpl0qQI2PtMKZ59OtMIMK+5XTQP9zdf+mBuUTFO
hUsR0P5mHhjUVsQH3XbMEBSevl9cGj61IlFSoYM1akKOgpXPAPgJbRqdAQG7E1AyzyxAKsSo
DVJNSe1B2GWHDEgbWopkDkCKCmC1qQLAKCq+qn+PlgajmndqZn1jMgVqPrhjNZq8NZ2NM60x
0ZRLU9BUjAk1U8D9tO+JOsTxxQ6KVlPVj0H4Yoa6thet8gCZg6l/7gainnljv/H5YsfVyfKn
xjfcMh2y7ubt7r2VRreNCKyAUproR5HHH+k9XNrwXejZS30psomiiWppJ92ivpJ8csYjr64x
pYChIFB07D8caNmihJjYsPS9a616jwp541OmLy3nCvlnkuw7hbzXl7cXljAtEtDLpVkpQCna
mOt6li/51a8k+Utq5TySzvN8spRstuQf0kTkSBhTvnke9Mc+ZGbrb8k+dvj662NtvTZJ5yi1
hjm0hAaUU1qScZ64h+tU3xr8qcB4vZTG42u6O4zktNLGQ60rkFFRTGrz4PWe5l8i8F3vlMF+
uzt+g9wG7TLVKF66mqafh0xn6waseZfMHCNzsrPbdk42tpDFIks81I1dVTsAv3V8zi54mmbW
tuPmn4p3XbYrLdNqu5giKJbc0CVQU6q61GNdfzk91b7jzLlnLuHz7vbTcZ2NLGzjZSVrVjQ5
0ArTxpi4xq8+NJzP5r27fNvsdvsdu0rbshnW4Jo2gZDStKrg81mStLZ/M/xvLBbXO6bTMm4W
lPaht0HtBgPy0I/fhvKkrs23+4PiM15cS7hYz29q/ogkjo7AeD0oRXyxfWYsZnlnM/he+sbm
Wx2iefdZloslwxWjVrU6mauDIl18e/KXxXxfj5txHdpeSASXihDIGf8A2moX8MPXH+VlZxeZ
/GNzzf8ArE9rNbbXEwlCoTr111ayudfEqMHM/TU5rUc6+VfjPk+2pZ28U9xMzBY5plMSICRU
LwGBEFZWDKaECtB4nKgxFIr6zRiGDiras8q+WLUYkAEAnyNch9MGISFIx6QemVT1YYW5PAZM
S9fUOop0+uJiwSvpWrHIZGuVcWjDITSoNaZmtMBFoLAADInMn/LFrJjq6KpKD7mOX1GFYJc6
AZA9VpXFqIBUcBAVrlUdzi06dSZNVaUzDZdsGqH1A1TMrSoP0xHSVRWhp46jlXBSR9sgZGp9
KMwzBPT6YkMR5VX1KB6jSoHYjAAqpUjTUg5BadPDP/HCdGNXqEgo3QL/AKVxaYjC+jL0sKmp
7jEtOK6tX21FKn1DxBwgjIlMhVSKkDx8MVX2JlLKFCjQRpoR0PmewxUYeABTpNa1zpgMsPKq
qwIqfxyzypiRaiXLkZL0Zs8+h+tMWLTGqr7lBUilBXI4sMJCWXMEnv2xWETg1DMCY/4VHU+e
LR0EMaKXy76SK1GBgi+osr1CkVNKVPh+GHCZmaRAyrRiKUalc+2FGKpIKCqtSjA9a4G+TONW
kUYr2p+zrhrX2PKHKgBdIqP3DuO+MsdGQSFQyZlRqr3A74hSSSooa+v1Zdj4YtUMK6noKD/b
6c/PEjROimuRNencnywkmQVIFKnMAHMV8cSjnujRKk0bIs2XT6YsVxm7lw0zMi0zzAxoARgG
rSuIuj/6e378KTBHZQ0baQcqg5D8cA10bUNN2pesmnqgBNfHp4Y6/wA+/rRZvj6s4B/9/d1x
GEbXNbWG0LGRbS3gQzCPr6ajUMjlXGf6d7dZnH1eJ8qhuod7nS7vhfXjOWnljNV1eI7Y5St2
fpSHTryBPfUcj9MawHlcjIdx93XG4VhtN5ucV3GdtaX+pNRYBCSJaHw046TrI59cS/K/v3+R
9lvF3q/lube9qP0908j+6F6fm+3zxy57mn6+eNDJzP5l3HZpbh9z3BdvCeuR00I6UzoxWpP0
OM9/0n4jV5yKvjfPPlCzC7bx6+vSGOr9PT3Tn1IUhqeeNfeZ8Mzm2/Lk5RyL5A/qSTb9eXT7
jbEPE0zFTGeqlQKZYNdHPuvyHzDeZIU3Xcp7+GI/yo5GqFJ60Aw89KRteM80+cd0tEj2GTcL
uCBaJJoVlyyA1stCB4Vxd9K8YzvLLb5Q3HekTfZLu83GqmFNJOgk1CBFr3xTvGPosN92H5k3
DbQOQWt81hCoCNONEUajoAOueOd62tTlVcQl+SIXubbjBvEk1e3N+lWq07hjTSMN6Umpdk3X
5C2HemG3NeR7rK5SRUXXI7Mc/wCWwOWNc9+elpeTbl877ltko3lL+028j1jSIFA8WpmcE6U5
1JxrmPzvNYrabMt7d20K+3GxgU0yy/mFc8b6/pz+mPrf2pLjmPyZx3dpb26v7mDdJxSeVwHF
R2KuCuKd+GxPc8/+VuUW8b3TXF1tyOGNusYjglIOQcqF1D8cGwWV6Vum5/N11wuSFNnsdmsm
g0s0NA6xFc9MerIHvjPVM1803sN0ty6zqRJGSr5jI96AY1Flde1bxfbReR3lm/t3UB1RMADQ
juK4VGjuPlLmdxvVtutxuLyblbJRJj+Woy9Iy+uGdQWeLXbfm75Dstwlv33P3ZpMpRMA8Wn/
AGqRQYvtMUn7FvXzjz/dGjmO4tEYHDRxxoETUMwSoADHFzY6TmWKxvlPmku+R73LfyPeoumK
Vgvp/wDpAoP2YLjEjS8e+dtw2+WW63barPeNxlP87cLkaZPT9oXqAPIY1eZjP5aWH+5tmuIW
fj1tHGpq5jYGQDyqBSuM/WGqvkv9zPLby5pscC7bY6SuplWWUv4+sZDF5B6zWyfNPPtou5Xg
vhOLhtdybtBI2rrVQftOKZjX1qHlXy9zTkhji3K9H6aB1kNnCqrGxBqvuIPvr4HGftDY3HG/
l35j3PbXG07fDJaWq0E0dsAsagUAOar27DFeozJUGy/KfzVLZ3F3DH+ptLdmMsjQqyau4Ld/
p2wyzF6zm8fOHyDuN1C11diJrd9SwW6e0moHIMMyfxxbIeVm39yfyAUEK/pLdgaSPHEGI+oO
WeNTGfdcm0/3B82sJriYvFeSStqK3K6ivYaSNOkeQxXMMlNdfP8Azy83a3vBcRa7erQW8aAR
55ElKmpplmcWyKuN/l75Dsd0ub6W4P6i8zliuIiYRTppiNAKDoRjMsrX035Udl8mcxsdzut0
sd0ktb66qJ3VVCaa1ClCCtK4ZcYs/Sg3ze923m+l3Lc7mS8vps3nlNWyyAA6BfIYNMcsFzHF
IDJRiCBpU59eoOGe0W4+g4v7kLew4rYWO0bcZtytoVike4yhAQU1ChDYeuZPyuetYzcP7gOe
Xm62+4tcx28dqapZRJSLPI+4DUsG8zli5xrmW1Z7r/cvzW5254LOCzsjIrKLqJWDg9mXUSuH
JDeXH8dfM/JNjhbb7DbYd1v72XX7joxuZWb+N4zU+Irg6xWOzknzl8jwchglvrK329bUsRYt
HUN0rq1HVjPNjnYbef7mOb3Ni0G3WNnYSyLRrqIMzgHLUoc0rh8XcsjM8C+WeX8au52tqbl/
UDrubedS4ZxWr1X1ZeJxrzPTJ1juT505lHyp99eOCByuiWz0H2nj6BSB6ssZljM5utFc/wB0
XLJYmS226zgypqQOevddTYZI36sIP7g/kKz2yO7vtht2s3Wkc8iyRqT2zBzyweL1nIfm6033
kqX/ADbbv6jZ2wP6HaYT/wDGjc//AGlG+8in58WapWqT53+J2CtDxGP3AaofahU1rnmE7Yz9
IvtVlyP+57ZYLeBeP7f+qcLWYXdY418FSn3H92NfVWVhY/7juYtyFt2mWL9Kqeyu2aR7FCam
r11E+eKmLPcP7peRzQyRWO12lnqWiyai7Kx6mhOnLBMgrLcD+beS8TW7jht4twgvJGleG41C
kh6tqXp54dl+RzzifZvkzjN5zC75BzbYo9yadBHaQwqqxxEEZiNvS3hUmuG5Tljbp8z/AAxH
J/K4kEkVqe8YYVoRnXWPV1xmcRasuWf3Q8ftEt12LbzuQYapnuaxIhHQDI1Iw3laxFn/AHK8
pHJG3S5ihkt5I/Z/pgqsSpXUGDfdq+uLZ8NYPlX9ynId22yba9r2+32k3PpluVYyP7ZrVAKA
At44pjNdHx3873uy7TFsO28aju7mMGRjblxI61JMs2lXqcWRbQRfLvCN35NdbzzfjyTLGghs
YIkVwgBq7Shiupq+OK878JcSfMfwdChuLPjDC6io0JESIAw+0hgx/wAMH/NffXPB/dTerbgy
bJbyXgyhuGciteg05tXxocORTajl/uP5bt++S3W7bKkVlcRKn9NlDRUUColDsCxrXoRhyD1Q
c3/uG3Petnl2faNpttotJ/8A8alSju4BrQDSoUH9uKSK66eP/wBy+/bPw5NpFmlzfwRGO33C
R2bSDXSWU/do/LnhnGtT4eLbpud1fXct7dzPPczuWmmalWdzU1xrzDs/Atm3m/228t9wtJTB
cwMGhkXJlIP3A9jili+XuNn/AHRcjGwNGNqgut2iXQu6tWjHprKAU1D60xz8/LNZzhfz5y3Z
90u73cpf6tb37M95BMCEZh0KAfbTpkM8asmCOb5M+cuS8tgj24QptOyqAWsbYklyDVWkag8P
tpTGZl+Gb3nygg+c+YwcIPErV4rTbUiaBLhUrP7L9Y69ADU50rhuStT/AGnqP4j+TNq4LuVz
ez7RHuTTqFjfVpkjp0MbMGAr3wZpnkyNR8n/ADrHzHZYtvXYIYayLIbmZxJKijMiM6VpXviy
H6u9f7lP6fsQtePcWt7C7giERvAAyKqgDWEVVY16jUx88ZyRW1heDfL24cd51ccr3K3berm9
V0uWmbTKGegDI1NK0A00p0yxuzTL54XOPmPeN/5vFymwUbPdWcaxWwhbWwVKks7UXUc6dKUw
X4xzz3W3n/up3w8dVrXZ7c8hKaRurgFAp/MYwP3aqeWKSNvCd+3a/wB33e63TcpTPf3JLXEz
H1MzGvTwxuxjI4YqsRpJLLmBXOo6Uxk+vpXivKrSL+2/c7H9LcySr71u0iQt7AkkYN/5Pt9P
fzxniZR38PmuZizOZFpJU9TnQny743p5njW/GHPpeEcmh3qK3S69hHQwSErqEg01BXoR54Lz
p1y855jd8r5RuO/XEKwNfOHMEbakRVUKKE5nIZ43ngnElt/b0n42/uHn45xVNg3ba493261o
tgrsA6r1KNqVwwqaqcc7ydXt3/dfePc2VxFscISzZmaASMWYOhTSDQBaA4cirH8Z+cb7avkD
deXSWMc0+6KTLaEsoRWIyqK5qFGK8ieFzb5zvuR8Lj4zJZwW6G8a6nulYksvuNIEVexBfM41
OHPvLBci+dbnftl47tN5tUBtNjljllklJb3xboFQFOwI6+OOcrpvrf3n9zdjPxxoZuIKbCZW
hhDt/wDEYgUKj0UGnwBxZDa835h8xXm+8B2rib2UdvDt06StcxuSWEOr20CsMqaszi8T0Hiv
9xG2X2+bAvItvSG12y3aMXMdXKzOiqJdLf7UoR2rgOPRORfNXxbBst299vMe8wTxMqbYkJLS
6gRoFVUfiTjUgs18TXM3899MYjRyzIimoVWNVX8BljdxjmZMT7VcvDewzo3tvEQQ38OnMHPz
ximXHr3NPnyPlvCBsm6bLbXO9BVVd7+0roIPuRLTUrkjx0+WHmGzVfz/AOXN45Nx/i+3Xm3f
oP6WFeK5GoNc6FEepAw0gejPrjEw2NhL/dNuy7vHuC7JA5FgbX9P7rZyMwb3NVKaQR9vh3wz
MVjwia/mk3Ca9qBJNK0kiAEAlm1NTyw/I55z4e6cd/ud/S7Da7fyLYYt4uLILHa3ZZFbSooN
aur+sAD1L1wfVquT/wDOZ5QeaNvYs4G24wfpTtJJA9kNq1e7111z6eWK5+Fn7Wu4f3Rxzbbe
7ZYcft7C3vIZIlIkzDyjSz6EVR9p/bhyMyV6Jw/5q+N5OLbYq7vHsX6WJYJbB49VDGAG0kKf
T54zJq3GC5B897XtnLN8uOHbI99tt/aLHfXZUxRm4QMqThVFQjBqEt1xqcz8iT15lwr5e3Di
/G+RcejsYruLfonUSsxQ28kkZjPT7l0tkPHFOYbzsx5uSSVVnqlKGuQFPpjbM88e8/2/fKnH
OG8b5BDucphvJ9E22qVLrMyIwEdFzB1H9mOf1trWn5Z/cvcbpsFxs2z7HBsv6703VyrB20n7
iqKqqG82rjpf5yTR7XbYf3VSiwtDvXHbTcd2t0Cm91BCSpyYBkbST91AcHP85nrdjFbN83Xr
/JV1zbcdqtb154zbvZCOiLECChUmp9wU+4gk4rxLGOY1nyp88tyHZf6RccZ/QbkjwXMN3M2q
WDQwkDIpRTmKftxjmxdc7XZt391RSyiuL/jlvd7zCgWW/UiPWQKaq6WNTToDhk2t2PMp/ljl
Nzz8c1aZV3ZZQ8QijrEkSroWHT1IKelicb75i54k9XPyL81S8l3rY97stsh2fddrf3xuFv65
HbKmvUBUKRUVxcfWzK5d/wA5ep03Cf3Y7lHBFN/QLObcPbVZbsOyl9JFRQLqAbr1yxmcT8uu
I9u/uT5LY225bi2wwybdf3wkNw3ue0jvGokhDgUY6UHX9mM3Bz8eqDmX9wE3I/6bYXOxWttx
21vIr28sI2LNcGN6hSQF0LQ1NBnjf15kUaxvlb+3sZf+naglQKQxUHfKr9MYn8zqosP7i7DY
r27suO7Ij8UnYSWm2XjDVDLSsugrrojNnpNc60w/SKbQcz+feabhZ7Ndxbau2fp7w3233iq4
jdI0aNo/5npcDX6sXMisdF9/dZuTWcjWfHrKPd5EZBfNWiyaaF9NCWHlqxSTXO278ML8bfMu
9cO3q93KCIXkN+zNfWZPtxSSMa6kVckIPSnbLGp/Pa7ST6+KflvyFu3IuXryi+/88Uivb2j6
nihVXDrEoYn0ahn44OpPhiR7FuP9y29vxpZL/iVv/TL8NbxSPra2lYL6wBSmVa0rjPM9Hc8Z
XhH9x+/cY2QbLc2VtulpaVNiJgyNFGWJEQYV1Ba0UnOmN3mDLJ+3Lu/9wfLL3lm38it1gsjt
cZhgsYV/lFJiDIsjH1Or6Rl+XqMGeeGc/lfb7/dNv91tNzb7ftNjt9zdRmMXa6pGQv8AfkwC
mo8cX85N9N1UcJ/uQ5Fx7ZINlvbK23SG3B/SLcKQ8aV/8Y09QPy+WHvmazLjGfKXylvHO92h
vb+OO3gso2gs7O3HojEhBY1PqYsVGOn0mNT/ACx0LlJUJr6MiKVJpjnapHtPEf7nOW7Pslvt
1xZ2u6wWqrDBJcBhMqrlpJUgMFGWeeM/WG3xxL/cPy//ANtvN9VLYS3FmbJdvKUhjgBLItQd
VUYltVfLFv8A/Dnz8s7wT5R37iG27xt1kIZrfdovZuklUtVmRl1rQhqhXPli6s3Y6Xcx2fGf
yjyngaT3O2D3tukKRXVtcKzQswB0NqXo+XjguHAbR80b/s+9ch3C0ht4F5IJDf2gjPsKZFI1
Iqn0sK1FTn3xrzWevIwayyFtTMWIBKmlNQ8DTyxXFK9v+N/mr5K23iE+3bRtMO52PHrcOzmN
vcjiJLamCEekZ54zZ613fNZqy+duZW3OLnlVn+mtn3GONb+wRGFtKyLpVnWurXT83XGupHP+
fVsS83/uB5ry3bf6Xce1Y7e5DTm1VkeXSaqrFmf0qwqKYuci93/C023+6Hn1ptkNpO1tdtGu
lbu4iJlYdFL0YKfqRnjJuvJuU8i3jkW/3W9bvN+pvrpxJNIQFHpAVVCjoFUALjr9/MGOC3lk
WVHjJArXUPuFMxjnWnr0fy58x2nChaiWX+kzfybXeZYi0qEfdGs2QyGQJxnmxWbGf4By/wCQ
Ns39r3irTXG43IIuLajTCfT3kTMHxr1xXr9r4jm5vzrm3JOSG55BNKt3aEwxWSI0UcBX7tEX
Z65k9cdue+ZPBOvWn3b5V+ZrfhNtYbhJPBt91/Lt9zkhZZnjGWn9Qp/AnGJcvg7v78VPxhy7
5I2fcZJuJJLdtKv/AMi0VDNFIBnrZK9fMYxetby4z3NeWci5FvEu4b5LJNd6mi9pgUWAhs4o
4+iqMb++zI58c/lDuvNuV3+y2uw31/PPs9qFa1t5fVHGV+3STnkMh4YObh6jX/GHKPlTZ7W6
k4bDPPbsnuXMAhNxGApzk9s5Bu2WM7HTfHFsvyZz+05bNyS1vZLjeLjUsqSqG1KcmjKMNIVa
ZDtjXX9ZYzPGx5n8y/M1naTbVyK1S0tdytmXS9oq+7FMtDpc1X7T9cMs/AsryrjnLOTceM77
NuFzt5mAExt3ZDIq+Q60wWyNfMxy7Tu2829+LzbJHiufd92JoWcSq5Naqw9WeLrrflc+Np8k
c0+U9wuLHbOVyTxfyFlitHiEHuoaFWcIAHPgTjX8/wCnnjPPtajhsHz/ALZxySPjMF0NovEd
lSRFkCVB1NGG+1j/ALcc/t66/wBPXkrbru9rvIvf1EsW62s2p5ZCROsympJJz1V61xu21wnW
VY8y+QOR8z3O1u9+uTcy20Rt4TGiqoFdRyUCtT1w/fOcO+622z7l82bVwW7l28Xi8ZVSJWjA
k9tHHqZQRrQUPbHLm+ul635eUT3DyyGR2JGfrJ1EknP1Y7Xq1n4+ESq7yKFUlqZLTqOmBfL0
La/gvm25cPn5RDEiW1tbtdQ2zN67iGP7/bplVAK59cZnfo65sUPCeB7ty3e49s24D3SpZmyC
ovUyGvZR1GDrvDz/ADtmpedfG+/8K5ENn3ZopmmjWa1u4M4pI2Okt4ih6g43Z5rMl1o9z+CO
U2XBZ+VSyRaYVjmNlmJjayEATL2p6sx4Y5c37XxuzFP8ffGW8c13WSysZo4IrdRJdzy/ZEhN
Azfj2GC9fhTjZqXcPiLk1rzi54gkQuL6HQ0c8BLRPFKupJPELT7vDGrcg45+18d/LPgTnPGd
qbdbuxElpDldPEwkEYBpUhc9J/ixmdUdcu3ZP7eeeb3taX1jBFGrqHjS4YozBhVcuwbscHPV
tb+mT1ll+NuUvyl+OPZSRbvAwSa3K106hVaMuTKRmCOuN9dYuONdHNfinlfELmGLeLMrDcgm
0uYmEkT9KqGXIMP4Tgloxabf8Gc53Tjbb3t9kZFTpa1pOy9yinrTvTFOvTguCfCHKuZ2ct3Y
vHbrZymCVZxoZWBNVz/MKZ4Ouvch+n5WW+/28cw2W6sRdRxSW+43CWqPC+orI59AbIaQcM0T
2rmX+1bnYQrFcWjUqFYyEGo6VqMQeW2PFuUC8uoraymkvdvcpcxxhiysjFWzWvQjD34z7PXd
yTeuXbhuVlHv0s77htcH6e2lmBWdYSdSrroGbPNT1wzvxu38xe2/xxzu54xd83iLSR7cNcrs
7LdMi0LujfexUZnPDer14z1/LJri4Vt3Md53X9Hss9xFdXclHkjZkWpObyMhHpz7459dZ8t8
8bFzL8d81uucXXErmRnvrE+48zyvInssoYToGz0MGqaZ41erIOY0nL/iH5R4/wAZn3SHfP6r
ZWcZkuLe2llVvZHWRQWOoKvXvTF/O0XGK+OeF8x5DupttnuXto5wPcnWR0VQATVtJofLvh67
yt/TxJtnEefXHM59nRrlt52e5aKe71sxi0t6ZEcmoVuoOHv+lxniZK6vlL4s5px0DfNzuv6x
Y3jD3N0jLOfdboshb1DPIVHXFO+rMHPy8xkVgRqeh7gClcBqIs5JD+quVe1MTJ41ZRpVQi+I
7k+GCklV9WlQAgzVf93T8K4tGBVJBISQVB/GhH+OLVg46aala5keBHlgJNQ0ZcmGWXTLxxIR
DNpr6ew8K/XEj+2V6NTSPqMS0g4MhVsiv2kjrlXCzUYEpyqKDOnhTxGDWpRxEsQRmFNT9R4A
4PlSnZSSzAmhAFDWn1xIDK7LpXIHMnxxDEilUQIcj2BrTyzxK+GDSNJqYDPqvYD64KPUjOR9
uVPtp4+XnhPqMqxUAklzm3fvniowB6lBq1joPDCLqRVNKMSWc001yI+nljLUF6h6KVH5WyP7
sRN7xUBH9RpqB618K/TEYIXND6CSakFegGHFoQxzata5GlQKdcGBIsiH1azka6T/AK4VpnVi
pc0KsR9w6HxywHDguTVzUGn2jKn0wgx9lZCVyHj26YbFaEy5KFdlJOfgfpgxU5WNFQxsKUzH
fANGjmukKGpnV/Dx+uJaeQL7aBu/3E+ff8cJMBnUitDpLJU1xa1A6EajVOla5+dfDEtSI66f
uJFe+Z/0wEJkYgivpJyr4f5YcZCpjVAK+Q71r2+mIygY6moG0muRHXLtgAxQEE+qgKhz2r2x
GBVq6lDEmvpY+PU4jpzUfe5EhGVaGmrpUjEJ4HUqFoWBKg0p2rgxCDUp6TSoDZ4KzopmNNVK
VHqPlhw1zlGB1/c1PST0p2xHw8fuffrDVrX6/TEHNcCgLEZj0t3r3xqapGenAE7hchXDhCdK
kEdcSS6z/H2r0xYvXVDIoGhc0oVPbrhZdO0uUuxk1SwCaOtRnn3pjf8AP5F19mrcP/8AdDBJ
7jhktFpGCVUBclFa+HfGf6T1nn35fL+4Sk3UyaNTqzA+kjv/AJYxJHRySgSxAUoQQSK9DjUZ
0cAd5EjjUCRiBXtnl3xvmatfVPxJ8X2vHdus97t4Y9w3S9RXN1J6khjIyCHy74z0cc/zjbzw
3G3blfxR3FvGyn2JD6XKH7Co+tcY5l0Wrvlu6z7t8ZC7VI4dVsssMQyC+nIGo6EDLFZ6tYn+
3zf7WG6u9ut7CO2nZC8141WlYVr6j+Xr0xu8+acrI/Pok/8AbZpCQA1S9RX1VFATkOhrjErU
rzOzhnTRcvblrfVX3cwpAPSvnjXOaJ6+leD7/wAz3fYrFkEHEeL2ajXcP98yIfUIw1KV7tjp
19fwsv5abYuU7DvO/wA8OwxtdC0T+fuhAC6m/hPWvnjnYFkpt7j9dG+4NvM5BQ2wpogr2J/1
xmxKvf7uXju2WFttqpaw3dyqXAQCpLijeZ+uKSJDyK+t+JyQ3e22ay7jfqIxcuCzL3LMRn1w
pY7ZOLzZJp7vcF3+7KszFVEcceVdAHgMNKr2ne+V3yCe8mg4vxayOmRmZfclCnPQT9q+eHwO
aXaOM/Ie9Ge0UTbLtBHulcmu5SSR5+2PHB9TGq5Zt163Bru0sbSO0EMP8mNQAEEdCKfswCs/
xG9u7n4wlku5XnkSOYOWOoilSFr3oDi7WvlPfWRd0uYjmfcardBnnh5hlNscW3Sblb/1Niti
H/nun5UrmR50xr66Zj6Ptfhvh3IrexutqiFvsACySzliZJkArTUele5xi84Frsnxb8e7rulx
ewbUrWW3N7EFuWIDSAZs3iMWB37/APGXA7iySK6sLTbUZ6RpER7rmtAAcz+AxYpQXPDPi/aL
uw2eHY4p7u+BSN5AzqNArqeppgnJ14384cL2fYNytzYpo/UL9ijSletFH+0dcamsWsZ8d8bt
OQ8psbG7maKzlmEUxUVNGNPT59sJ52Ppe54H8X7VuNlstvsMU19eKwilkDMFVMixYnGPrDbr
gh+Ffj/b7q73fcLQ3UcHqWAkqihc9Rz9XkMNmqV5zz7ffia62xodj21bPcKkKFUqwYHKvXvh
55kYvT0P4o2/bZvjBzbSTLrEpnZTSrhTq008Ma65jV+F1wBNqt/jVVvj/wDAjEpljj/Mtc/q
cZxazu+fHXBt+49HvG32QsbdjWNolLSuNWk6gTWowfWL1e7V8ScD/QLaRcc0AoNV5dO3umo+
7r3xYdU1t8I/HeyPf7zvNu15FES4RSQioPzUB7dOuL2rccPG7f4l3bmG2f8Aru2xLNGzO0BQ
5MgJBKtUdc8P18EtWfy1xHiym45DviGV44TFZWUICgkitWpmTX9mDDenjHxZs3BNy5FPPyyQ
Wm3Q52tvVqO5/K+kE0xrqfouL5YThicgMXGLeSLb4gP5jhgjsBmy6s6V6YpPEwiyRs9NLMMg
qgdzixnX03xz4c+NNr4Rb77yBJZy8QnuirsBVh9oVczTGPrq6dO5fA3C94/QXG2xNttjcaWe
NCxdoiNRKsehIw/DfNxxcg4v8CbG82yXNsbS/hj9MrszNUjtU01H6Y1Jvyx1dZ/+37bdoHPL
+S0kdAIXFkxUMSgNG1E/a2NdYObfy7uV/HO38l+YpLLcdwaG1SNJZ5mYB5FFAI0r0J6Yy1Gg
5D8VfFdjtk0Me0TxSohpdku1aD8xBp+7BIzasPhLYPj0bVdvtduLi/X+Vfyzx5hT0VK9FOLq
N3pkds4Hwnm/yRuVtbRSQbPYVeWEEgvIraaLWukV/di+uRc3wuI/C/F9x5jv0d2zm22VzFbQ
g/xHUCajtgsqleq8w23h54raQck9G0waESFKqC1NKj054pGbXh3zf8T7FxjaIt92ZWgS7YR/
pnYuUJFcm8KY1Li3I8ZtIJp7mGJGoruE1NlmTTVlh0c/Ovp3/wC5D4p4/stlechknuJZVRWd
pDpkmkFQEVRkK9BXGcb6uif+2vhU+9JUy2+2NGZBYxsdWs+Mh7YLGcZXhHwRxret15G17LIL
Tabh7WzjjIBdgCfcc9x5YRP/AFej/GHDvjO145fDbLaO5BLRbrNOutvSKlRWulQM6DFYZfHz
F8injA5RdLxhJBt6SFUD18cyPAeGNZjW1RbbYyX24Q2hqrTusQfuS2Q64cUvr6cuPgv4m49t
9lJvs9xNJMUhUtKVM070+1VGMXaur65Ln+2Xj1xyyJjPJFsyxe9NbAgys2qgjDeA/iwYtUnM
eD/BibTdrtV61vutmTEAZCzEg9CGAqB44ZyzbVl/a3tm3Jbb9PbyBb4usTEqGIjz0sCegqOm
A548D5/bSW3Md4gdxIf1UmpwAFrqNdIGOzH85FbtO1T7tuVtYW2U1zIkEBz0iRzQavKuL8Ol
kr6k478IfHPF9w2e33W8e65DIyy2/uELE7oQWVUpmK9ATU45D7Z4v+d/FW1825nbXe5M6WO3
WYWRIqB5JGclRU9guC+xMryb+3LiFxtzXOyST2EiSL7sly2oMmoByNQFMsH1Z+zJ8t/t4srX
l3Htp2iWeTbtwOm7uigb2wubkUy+0Vxv2RudS+MD82cE2XhXLE2farhp4GgWYpIQWRnNKOR+
3GuefHGX/bGc4Jxock5dtmxSyGJL64SB5gK6Vr6qDxpgvw68+vqt9r4Fxvdtt+NrTY4vY3SI
t+qcKz9DUsxGqp0nGc8F6244dm+HOFcT3XkHKJbb9eNtUvZWko1IirH7jHScix6VxRv7eKbn
nEOL80+M5OeWtgu2X0UDXEccOkalU0ZX0hQc60yw/wDhizPlj9l+JPjeb4sm5Ju+8GPeXhkm
WMSoojZa6IfZpqLGn78U/wBqer4zvwXsnCNw5bFHyNjOhISxsmX0yyuctdPyr3GM1R6X8t8N
4eflfidjdRRbbs93EzX4ipCjCNiqr6RRa5A4cE+XrcXHLRL2GwteP7cNhWMJ+pKIX0gZKFpU
jzrhskF2vkn5141tPH/kW/27a4vattMcqx1NFMqh9IH8K9sU8Z++XFT8X8Pt+Xc12/YpJDFB
dEvdTAAuI4wWIUdiaUBw9OnPcfTT7T8dDkKfFY4/Ctt+m90XgChyNOr76a9VO/jjPgvr5f8A
lXh9rxXnG6bHauZYbZ1MDNTV7bqHUGnfPHSMS+4ySR6SFIoT0YdR44m9fV3BIUT+2LdEQg/y
LslvuzqDU17jvjnx8ruePk/SdZCZ9zXM0OOlErbfD/BYeac1s9lupjBZypJLcyJTUyRqToUn
uaYOusblj3N/7d/iU7g/Gv63OeRGMyxwmRQwj6r/ACwOg+uM+4531geSfA+37N8Y3vJVuWl3
W0vHhoorGYVl9lTl3rmT4ZYvfgX4ajYP7a+O321cXvZr+eOXdY/c3CNdNDWIyD2yQaUpjMtb
zKuD/br8WTXlxsFpu07ckhiMtNSnQOil1APiK54c/K3Vbsf9u3x/Z8Vg3fl1+8IillW8YMFT
0ylEUN1rVfxwc6MZH5k+F9o4tt1lyHYLp7jZ9w0RxJOauruNSPWgqhXy+uHFnrb8z2Xb4v7Z
NoTbZUktYjBO80igMWkdvcoe1HalfDFwu+XDxv4E+Po+EbRyTlW5zW8V1GstyFZY4wZATEAx
BaoHXxwYe4G8/tg21uYbdFtW5yf+s7hC10Pd9c0YTSdCtlXUHyJGGrnxqbP4M+Iraz3HdHku
bja7C3niukuGP8p0GppRQISygHtTFIL0+RL5IRPILUkW5YmMnqVLGlfD046axxPG2+JOPcQ3
vlkNny+5NptRTVE4JUST6gEjY0PXGOq6fR7b/cfwv42sttt75DFt3IIoVWxs4l0rPCjCM1Ci
g0g9cXLPxfGv5pwj483vhnFF5ZcDb7K0t4ltniIjZnkhUaA1CfOmMyav6c6xFr/a3sg5febe
+4SjaDZfqNudQPeV2k0lZOmoL+/EZ54h3z4B+M77Zt2h4tvMkm+bNEZr2EyLJGWiFWUig0V0
mlDjWWC25p9n+Cfi/auO7Pc8x3KaHc91jEtv7bBI6ygERqCrFmAYV88Z+TLqNv7XrJuTbvYS
7g7WkdiLraZAAHMjMy6ZR0oummXWuGK2sT8ZfDe3cs43yncbq5eC62eMizjQVUyKrOxfuQQm
nDb7jU6/116RsX9uvBxsm0W2+X9yd73OITRPbmkYd1DaQdJqFHicYwNvwriHCdu5Ny7Y9hhV
bc2ltb38D1dY5GWSqgvWooQx88UnqeX8t+COA33FN43XhG7yXG4bEjm9hdlkjLRAtJHqoulq
KadR2xvLGd8fOBC1JNQ9K6e+eeOh/D2r4L+Htj5xtG87hul3LaCwkiSDRSigqWkZifECgPbH
Prr8H8NHzD4K4Fe8Tvd64DuUsz7dIq3UUrCSN/UA41aVOpQ2M7YzuTYuLj4V+FeN29pt/Kt3
kt97ngWQz69CEudAZU0v6Qwp6vDDJa3e3jNzxXZdk+To9hmv13HZheQa7+3pnDIwY00k+pR1
xvrnORx1769N/uf2Y3HyFxyEPHCt9bR2yTUA01mKa5fHTXLyxy/A3/ZpL/8At7+H9osYbXep
L1LpoQJNyj1rC0jfnGhWAp4Vxqc6v6XVD8H8L+PrPnV/t253g3Hetsuq7JTO3mhMZIkNRR2H
1yxXw8dXFV8k8B+Pbr5Z2nYNgnKtul6YN5sY6hLd9YLCM0y1AnLoMVmTXO9X7Z+HG3w7sMXz
8OFe9K2xlBOFBAlETwGXQX8VbKuC954Oe/8Aa8vYdn4BxbbvjTfeN75OF2Cw3SaSa5akbe1G
Y3BrnRqemuGT1vPHjvzH8T8S2niFnzLhVxJJtFxKttLHOxPqeoSRCwDULLpIOHnyq/h4tGJV
lFWFW+6uY8MdF9fXv3x38TfHacAj5fza5lWzvJWhQx1RYTrMaGqhmOqmOW3p076mY9E5l8b7
HvvDOEcX2y8WTa/1wNtuJAZ/0/syyNppSrMMsE8Yv4Rbn/bf8d7jaXW3WUF9YX5QhL521Rhl
Ip2oQ3lhw268q5X8JWtr8Z7dyLaCzbxa3clnvcAJeMUnaFSABVShCg+Nca4tny53uznxyfMH
xPtvDuM8Uv4RJFu+5I8O5W8h1Ksqx+7qAOY0VKkYxy331ja/JO27Un9tPG02lStkt3Zswcap
Ed9Ym0Ht/M1fhh/ncZ/tNgePfF/xBsPAtp3zmksuneUWVJdTKqsyltFEqftzxqb069dSZHBt
Pw38eb/8lWUXHNzF3xe4s2vpYEYmVGicIYWOTKGqCKiuC+KXG45F/bnwPcNkvY9ns7ratzgR
pYLiRi6O6gnQdVag9KjANVfCvhf42i4Pt+9bxYT7zNeRrNNPa6mMTnJk0RmvoOVfHFmj7bHi
HzBxbjexcm9rjt01ztl3ELgQyBhcWrFirRTBgDUEVFc6Y6/DO3WJtrd2mCZlKdQcz5YL1k1v
n5fYHHPgL4+suPbat7tc+63FxEk81yrU9bqCQaFchXLHOe/K/p1+oqNu/t34Tb873nbbz3JN
rlsEvtuQtR4NUpSQMw+6hXKvbBZvjH0V29fF3xTyvhm6bhwRWtdx4+pkkmBcxSlFLsjaydVV
UkMO+LMrd68afhKfDtz8N3t09gBsAjibf45FZnFxGF9QpnUO2oFfHF+V16wfw98ccB5JJzC6
vY2m2yydf0E8j6Hjgq0tX/8ApSjeWG/+2KXwua8R+GuScJ3be+FSjbdx4+vvSW7llW4WldAV
yWNaHQy98bnPuVz6lzY7P7co7GXg3L44YjDuxsm1XNNQ9p4n0pQ1U6WGMf8AydbN5cP9vPxf
xzftr3Led3s/6gbNxbw2RyV9Y1MxrQ1ofTi69p+s55av5Y+GOERcXXfbGxOwttk0KXaKdSyW
skqpJkSw1KGqpHhg+u/DH2xo7r4u+Mtu2mGJeLvutgYiq7lZD35TGRUPRGqT5qMa55atfInL
9u23b+RblY7VdG+2+1uHW2u2BDNHWoVlPRkrpbzxvqRx56vurv4l2Tb91+RNisr9fftri5jS
5gb8yg1Kny8ccu9deMr6ai5kknyq/wAYttlr/wCtx25RrZohpYe37i6V6ClaefXGrM5lak2K
W3g2z4w+Pt/3vYbGN9xh3ieyhml9REfv+3ECfCMHFzz651brxnYuYPwbmG7bdD/Vb+Rk3Iou
lZgIpCgcDrpaMEYLG54fZuU/+0fIPIfjjdbCCTj1pFLEsJQdYyF1q3YkNXyOHrzBeZZ6pZbx
PjP4k2/c+P20C7hdXzWsl1KoLlRJKqsadfTFSnTPGZMTI/3G7HYzbTxDmdvbx2u8buNO5e0N
KSsYRIHYCnqBqK43xJWepZfHH802fBm4hxS423bW2rf5EX3YTC0QkhaOjM7kBHOuhUjMjBxz
PW/6S2xvORcoufjjgvC04zbQwPvCqbuVk1ElYkdjQ9S+qmeHmLvjevGwh4RxhvkpN6isIEu9
w2k3My6Ro/UCRAsyr0D0ahOMZq+GM2Pkd18m8J5vt/I7aJYdsjll29kX1QPErlNBOZ0mMf4Y
3ZNF+NZP4NHEJ+E8hTkm0G9spIgz36WxlWJPaOtdQBMbDI1GDrnOsNv+ry/473qz49yqxv57
Jdwgtn/mWz01MnRW/wC4ZEYeuZFxXsf90tzC93wncxCDKyySyVpVov5cgjJ6dca/nZlF4/2e
nbBzfjvJhte4bXyQ7asqRpLsjhFkMiEekq2YqMvTkRjmr8vBP7ktliPyzPFZ21JL22t5isKZ
vOQVJoOrMFx1m/XWPrNYjge1FOe7dt+4QmNjdxJLDINORkAKvXxxw/p1sdv4ybj6c3fl2/bf
8z7Zw+xWKLjrW8XvWqxA6hNqDA+Crp/yxvycsya+bvnHYds2D5N37b9tiWG1eWOeOBBRUMsa
uyqOw1Mcdpz5rnljMcZuYLTe7Ke7hE9qsqC5gao1xagXCnrXHLuHj2vrz5K3bg1v8RW0yLPa
7ZeR+3sYtw6vHNIjFFYAj0dQwOVMa/hzt8a7+WS/tb3HjQhuNre0C8iiaSRbxV9DwEKCoPYr
+/HPvPs1fhhOXck4I3y3Y7pY2095sUD+xuFnc10hwx9wR6yfQevhXHo/p/Lr65R/PqfMesf3
CbtxGL43s4J0nhm3GD/8hPACukKqOUloR6DGQKGuOX8uFf8ALn/t237i9xxK8txYiPdrC3kO
63KqNM8NSw9Q7gZacYvM+zVv+v8AhT/Du5bBvPy9vF/xoSDZ2sS36S4JMqqaK2mpJoXpQE9M
b/rMZ5vleibTuXG7zaeQrsu5XG7PDZzJcWF2z6UFGGgaxl0IwUAu9w41ZbJxkb5uNzZXEljA
tu9qzqsnpX0sUBGVe+DD+XXdIg5jyK4hiAvodihMFwKGSUgysrjvkwAwySq3xg+E326ck+H9
+n5NrnEU4mtjOBWMxlW9Na6QrDp9cPXP+2L9Lnne98psfljitntk0kGy3EaT3KRCiPV9DhgM
iNIGWKSfXVzmuq45BsGxcl5ptd9b3MG3XcttcS3tpG7BJriEagxT1KWYagfrjODfwpOTWtql
lsnN+M3l9e2NjuMMV1tdw7v7g1hdUavnrzp51xqe+Hlfzbrwfl++SWEc+7We53ylFIEtsInR
aVzyBHh44zLjOJPj7bbbjPDbh765ji3C13C5t77cWXV7rrLpq1Kn1AAjFZ6b8OHm7/Hm77/x
K43eSCe4luXiNwQYhJCUbTrchfSsmnvkcM43WuWwm2Rzxnf9thnga1mtJrezSKgEatEyjWQa
fmrgnmCsT8RWfHn4FtD7JdW1ruWv2t0kcr7vvRtpZaEg9svHD/Tn/ZqdX4/CHlXH+VH5zi3T
j91BFNJtoM3ugPpiQe26snWp1ArjXn0Zlji3jlmxcF2Xc4razvb+9MDW93dsHNqfeBFWY1Vd
LN2ypgk35Eu+NX8dx8Yk2Dj83G761gR4Yv1UKlPekdfvUpWoJbGLPW++cuIdosvY+SudWcbp
LdbhZRXNuFYagShUKR1BV6Y69T/WC3/XGD2Oz33/AO5LncHJFlM7wvMkdwSCWQVZl1dPWow3
PtMFsyPm6WVTpIT00yAzzOD+vMlFRmmnUtAMlPWn445jTNKCpb7Av3A9wcQ0gdNScwSaV7YD
pwaVUHUAMqdMC0zqzAaGNepr0yxasR6mZCTVdJ+9elemeJYeNyVKk1p2r3w6j66v0rQdTlRc
S0bFQwdqMBTBq0gJA7HT9wAI6dOlcGjKcNmVX1UOVAK1wNG9VKhyDWgfIUP08MIw7yASMGpo
PSn78KMVZ6KpIHTr0wI6xUGQDECpatKkeRwUnQpmC3U1rSlfLFqE7nOhoAftbrn2rhjOgdSf
uHqFCB28fri0nNwWjYsKhD+OeFejQK4DeJOWr92M60PUCzDSSB+WgqcMSOQ6VZ1zGdIz3J74
kaIq4VtNSPtrXviwxK1K0Y0Zh0p27DPFTYQDspAHp7EdB9cEFgdOknQWYkUNKZd8aGGoxcA0
NSTWnfzGIYeMkKdLamFQaZ/WvhiqOyuVCqAW6UPj9fDGcZvJBRHQtkxFSa1oMTUmEtalQQzM
aKfPrXChKrIvrqQvXscWrAMfUQXIIFcGtSJCXBDZMuRGnxxLEZJDArVRXMEdsIPMU1AooNTm
w6eX7MStCkTaixprBz8QcAg2IJVGyA6gYEZpootJoSGqOlaHzxEDxkI2vNWofT49sSwwT0eo
ZHL64msD6TqU1JGXetPxwsYahLGh9I6Dsf8AgYkZllcVLaSPtFO2DWpDKGKaxkuemo/06YLQ
5rmZFoWJ1ZjUakZ9yBjfNPNUNwUaUlc6nr441TaHQDnXLACon8X7u3jiTsidUVjTU4yp2xCO
naHIvFJfQ5IqoFQw8CfDHT+c9HUfVHGPlzhbcHttj3Sx3Ca7hi9uRYUVw1CSCGrWgxf1klZ5
/wAvF+TyLNuUksdm1hbqXMcchOvSTX1HHPG1L6ZFBUU6V8QPH64gOJ2WQBeoGdfGvUeGHS1X
GOd7zsu4wTx3Mr29qfcW1MziIjqQFB746z+n4rPXO/C85H8sycu3q2ud1slbbIChO3pIyhgM
yNfUVOM84JzW6vP7kONPs8m2x8XYRRxLGsTzBkGkUX0qB6R9cZ6kakql4H8x8X40txeS7C91
uVw5LzRusYVGNdIVx6fPFfg+qn5B+Wdo5TuEE1vsaW9rA9ZllkDSS5gspYZfswcTm1SO/lnz
pY7psEWz7Px232u3Gn3XbQxotKKiqBTzJONfWazbjSW/z9w+bY7fb9247NdC3RIzH7qmJ9Ip
qANOuLqT9mW4rU+dNistyMuz8fG27W4BuYgyrI+nI1IqAfDBzJRdd17/AHEbBZWEw41x8293
d5yXEzqwqRmfTmWHniuRbqu2b5+tIrRG3/ajuV5C7PbTawiajmNVQxy7UxWQ3ym23+4C5O+T
XW9WgawnFPahC+5CvQKhIPXGpzK3eFpffP3GLOxkteN7BJbyzEmeafpn5KSanzywfXKxea7Z
fnf48v7OKLeNhupggFUYxtFr8qMO/li65k/JxgeSfKqJuXucOtX49aAVYJIUJGWeWVTh56wW
Wr27+frrcuNR7FPEy3E49u53F21VQ5MxA6ED9uNX66JLPlqYvk74947wE7VtU9xvV6YjGIxE
6BncUNSwAA8ccrPTfXzfuUpnvZZZIm1OxYj+EE1wb6on2e5s4tytXvITLZRyK8sC/nUGukdx
9cdOesrHWvaLz+4a2/X2Vvt9ibHj9vRZ7NSA8q0pSnbywyT8tcrTa/7heMi6mim22az2mXUz
CEgz1/iY5DPDeZjVjk3P5j4BazwvsuxTGaKVZpbq6Yl9INfR6nzOOYjh3T592y95ntu7f0+S
LbLEFTFUGZycjkKjG+cCwv8Akvx38h7l+u5Luh2mxtV0WdkqlpGQ5lnejAfSmMfWiLPj21fA
2xbvZ3u1b8f1COGjBY+2xJy1sUAFMU5rVrT8p+YfjPbr+OZzJuN5bLrtzaqGQl/95IxXnAyW
1/3D7HuM13ByC0mttuuDpjEAEr0OWhjVciO+NTnYNZfl3yH8VzWv9N4tsHtyTtpn3S4UhkUn
PRVmJ/bjP0wZGz4f8vfE3HOMxbNCt61Eb9RWL/yO4o9DqH0w41S2L5y+PbWzl2ptsu7e1Z2a
h0yAIevU9fLFeRrn3T+4HiW2xWdhxzbpns4JV9+SUaaJWpCIK1rXxxc87Vq7T54+NorhdwE1
9d3jIVCFSsSAdlz0ZnvhvHp1xQ/O/Cd4S8tN4S4srGX7EiUSGRDmdRBy/ZgsWqWH5S+Jdj3u
2l47tciIhJnvyGVyCM1UEnx74vrnydK/+eeP7rutyu52jR7QU/kBl9xjlRtQyB/yxZ54Ooxf
Dvkfh3HuXXO7jZGvYnRltYyQPbJNS0YaoJPjiyYxzaz/AMm/IU/M98kvf0UdjbIuiGCOhbSv
5pGyDNgv+G5uslaXLfqoTQ0VshTx8ca5ms9dY+wLXlHBrb402yHfr6I2strFFNHUO9StT6Vz
88Fl1rWbvP7hOHWd/YWG1xyybZajRJeyig0qAPSDmQMU503xw8q5l8C309xvl17u97o6UEKq
1NVKLVWoo+uC8WfLH2l+FR8P8t+Odlv73kG63h269uiYrazCs0UcXYekN6sX11p38x538R3/
AC2y3SGae5Y1F3dxagiLTIqMnJHljUgaaf5n+Ltl22SS33C43aah9uzKszliOjM4AA+pwYvx
4wvxZ8z7BtW67sd2A2623J/cEsdWWFlJpGEHXI9cPyb5HXxr5N4Bx3nl3c7U8sm07ihWe/lX
1hy2osEyJXVgsgkxs7b5a+I9kmv7mz3CS7vtwPu3GiKQ63AoOoAGD61Sle/LHxJyDbYrDdb9
oI/vZpI2UB1z0ggMa+GWeH6Wmxm+Xct4v8pbha8R2y/Sx2iypLPut1SP3CtAEiD08Op64vrc
P5RR/A/BbaRLk8utxIhDxlWiCnT/ALfcbv4YzJVr03mW5fHsGybfb8m3GGK2UxmB6lmLRgUN
EDED8MOUM1F/cDwmXlEVnFMybYsLIdykQhA46ZCpIw4NSW/yn8Q8Xh3C4sd0/W3d/I91Osas
XkcjoNQUBa9MH1W+MZ8Q/MHE7a33ex3iQ7Ym4zvPFK+ahX9JBAFa96411GOL5jG2vBuHc05z
eWexbwtltEOp2v7ohGlr19tWK6vI4zea3zfG5tP7eeKbdcx3kfKonEDq41mMfYa9Q/XB66Tp
6hzy64C1vt68l3OKz9qRZ7V2b1Fk8KBsWViszbfP/CLjl8tp+o/T7eICi7nKCIy4apPTpjU5
o1h+aH4E27btwvra8bed6vS5iigYke8xJ1HIKgqfHF9f2bbni4+CN7+OeMbBJPd7/DDu1/Rr
u1lOgRhSSqrl6uv3YuuLF9vGY3X424Vzz5AvJOO73EtmqGe7u5iNJkc/bADp1eJOK9XGP5zL
V7tf9vWw7Jf2m8R8mgZrCUXK1ZUVhEa6ahm8M8Z9bta++5x8Tb5f7dyDct9hjn2lj7FqHIKy
V6kUq2YyIxq8jY6ovmzgMvI7qyXc0hglhQRbiwPtGQVqoNMqBh1xTk6x3yZz3httxue1i5Rc
79eXJAht4JB7alTX+YUAFPKuL61bK0XEvm3hD8Ltty3e9jg3Swh9t9vB1TlkGgaF/MXpgkqt
j5W5xymflPJNy32dTGb6YvHHmQiD0qpr4ADG7ROcPwLkq8c5Lt+9FBI1hMJQjVoQOoyzxmzT
H00fkb4n3K5sufblufsX+3xsILChMuYpp0AerM5Yr+hmeqrj/wA+cX5Df71tO9g7Vt27alhu
5MwkZTRok09yB1xSD7Kj5L+UuIbBwb/7v+Iyf1QSRGG5vwfRFGx1E6qDW5rUacsUmVdeqfZP
kT4l274lfaJNuN1yN4HSX3IgWM7V0ye8c1VeoH4YeebD1PFB8DQ8Xflo3HkO7x7am3aZrdZG
CpNLWtAxyCr+/Gbxp46yevQ/7idx4DvFnabpab9FNvFtS3hs7dllDo7Elnp0C1xrnz5Z3LsW
PFeR/H3Ddkt98veaT7rcwW417Z7rMvuFaFRFmRQ5VOC82tddPONo37h3P/mC53rmRWz2WdSt
tAzEITGoWJZJFzFQKnzwdXzBzEvI+VcD4V8u2m6cPjW5260T+fFET7PuuKMqOa5BT+3B9TLd
eozfIHxHFuC/JE26n+r+z7C7WpBmJ06QgjAJrT81dONfTWZ0+Y/kTmD8t5buO/8Asi3F7IrR
wnMqqKFAJ8dK41+BOc9ZuAgygaSxyrTuCe2DWn1Vwe4hi/tk3YiWgEV4iEkBtTHJfrXGf5z/
AGH9b/q+Ui0aBgAHLEq5GR/ZjdXHw9M/t/5NsnHefWe57xdLZ2YjmjlmcVUe5GQtaA/mxz6m
1rcR/L/OIN0+TNx3fjly6W7+2ttfQkozEIFZkJzzIx1nLnbny9c+PuYcD5N8VDiHId2/pU8J
/wDkSSOA0qmQyB42NQc8mGOf5al2Np/96nxbtjcf22z3qJ7SxYwtIAxEaJEY1aRiBQMe4xfV
q9R55w35L4jF8577vNxfpBs9+jpa30tVVmAWlfBTp9NcV5cuL7dB8o/KHEN3+JjtO333vblc
bkSlrQhxGk7ye41R9lKZ98anFb6uzwvkrn/D+Q/HXE+P2t//APK9y0/XrpJMEcMeiXX/AIin
XGc8Fvsbfcrj4hn+M4uFHlFv+hiiVYZjIDIWRjItVp/F2xc8WNW683+RPkbiV98Lcf2GwvVu
t0triGOa2FaotuHVmcHswI041OWeu3q2x864tybe+H2m07wn6q0tnnu40qhP8pY/YOqmZevp
8sYb3a1XNNkj5Hx3c9ruZZ9ot5oXWS7BRU0ju+ea+Ne2GM9vgO+hSO7nSNg0aMRGR0IBoG/H
HSuf8e/tzvw69pupbW9t7kAuttNHKyKaVCMDTPHPqa7PoT5s37485pxfb+S2e9xxbtZw+3Bt
LLWU62DOpXqpFOvTDOfMZ6l3XB8wfInGt44nwW2228W4vLTRNfWy/dEscKKQ9e+rIYzebFb6
9Vj+Z/jcchgujvUIt/6YayDVpEnuK3tdPvoMhhnJt9fI03Jb2Lfb6axupIIrmaR3CMU1o7k0
cqfUCDmuOk5XMs+X0vLv3xb8gcd45Lue/JtF3saqWtJGVXWQKgP3dV/l+lhjl9davlWI+dPj
+f5BuLQXpO3tZCzfcwD7AlDl8j10lW+7xxr65GPn1zbXvHxHwjjfINv2vfo7u5v4JpXNS5Zy
jIiAoNNatg+tWyePROGn2OGbDEsj7sP0cVL5Ch/KO9fy/bl4YG2BffeH/H/LuWx7tvayTbzb
LeR+8Q03ugSKYDoH3ZgqPDFOfXP7fh5d8Rc+4xsvxxzba9yvUtr29hlksY3qxl9yJ0CKAMyG
bDfapPMeGA6wuoaUUdT1NT0B8sdJ61Nz19Pf2tT7T/6Vyz9V6bQMhvan1e17L6yf/prnjlnp
mOnceW/FXAvj3cdm49vH9Vn3STXa2qHW4diD6jkFAVe+Nz+NH4x1cjvfhH5Gm2vke98hFjJD
bLCbCRxEwAcu6vUE9aj0nGZKpceT7La/FN58qSadwl23iSMZbK4kPWVCNK9GOgnpXD3+lzL8
16J/cFvHxpv+32m+2W/Q3e8bY6RQ7fC2ozxNJ/MVRTJh11Yz9Lg/+WxseMfIvxrZ2EVzBy8w
7a0S12a/PumGg9UYZhr8sicM5rdvrw+w+S+J7T84f+17ZaSW/HHmYexpo4WRDHJJHF0FWOsL
jXfLlzMq0+Q9/wCC7T8nbTzbjG6DdnuLr9dudoCR7NCpOmoFCwHQ9MZs2NT5eox8w+EX5any
C+9j+qzwpCLVgS0dE9upjUEghTnnTGfrp8l1z/8A3ofEO+bZyPYt73SljuW4uFqrrqjcR6ZV
IBoqumZxv62D5Zf5D5B8b3PD9m+N+P7ytxbPuELSbgx/lW0SuSfdc6amr4pxflS/hA39r/GA
8Z/9xtiKULMY607UHuU/HGL9mrq9sdy+NbLjt18V8r3xIo9qnSaHcYW/lTxuxlQB1DhXStGU
41OLGbfs6d++ZvjjZrTjEexT/rLXZb725reIEOtuIZIjKuqgNddR44f+da+Vxunyd8bwLe7v
/wC5XtyCjTJtUDsKax6Y410Chzyq2HmaOtnw85+EflXj9hJu+w8mfRsu6TvdQ3NxVmD1GUoW
v3BQdQ/Ng7+VzPGS+Yvkbb+afIVu4DDYLApZiQVLSQCXVLMENKMwNKdaYeuP9fGpztewcl3/
AOCL745XiR5KsVjaqLiyYe4ZVkjJkjr6KN6mpTGeeWf6KDj/ADf4m5d8fbVx/mV6+3y7EViF
Q2iYhSiyIyq/VeoPQ4pMLg275G+KeFfJNjLxZJJdo/R/ot3uYwwDliGSdQwBZl6P443f55NT
W8p+SvjW02Xc5bPle4brdXEUkUFnFNINLyZ1UlUAC17npgkF98il+J/kf40suPWbXe63XH91
tECX0Ks7WtxJ/wDhtAWRSXFK9MZs2t5481/uG5nxvlvKrW748DKltb+xeX2j2xcSBqqwU0bJ
O5xqzPHOe15jaTaZo3BoUNamvXoMFb5uPq/ZPmb485Bx3bRv+6XmxbjYxLBcQwPIkTsqhS6v
GDVfTXOhHTBNPUVO2/NnBbfnG/SrPdvtdzty2dteSgy654qj0j7grg/t643eL+RJsYT4h+Sd
i4zwXlexbx7qXG520n6BkUsHk9uSPQadPvBri64kvg7mTK7Phz5J4jZcW3TiHMVddn3X+Z+p
hz0sAAyEL6vyAgjvjFnpnwD49+R+IcSt+ebH700lru0M42a5CElyI3SJWUfZq9yuG85R8cvF
47qT2VQqQzAArmO3THW2Vmvpf4f+Rfh3jfERBeSTWu6XUHsborpJIstNX2FQRSjeRxwnHrp+
FVwj5X4BxXkXI9ht2nm4VuziWzu4wwngYx0dKE6tAJoD1xu8iXUXyf8AKHx3ccbi23jBvL6e
a5jnlkuWl0xi3YP7TiQ+pZOhHbFxm+s/lruOfOHxDY7fHuUKX2zXkq1n2yFXkiWTvpSpQKT5
DGcbtx84833yy5FzTeN8s7QWFtuU7SQ2tQStQNZamVWYFvxxvrM8cOefbf24ON71e7JvFnu+
3OIryxkWeBzmNUbVGpT1XsRjGa6c+PoQ/wBxHBWtP/ZzsDLz3QsDxov8p1XIOJq/bTp3pljU
5n5Nt/DMcK+dLFm3XbObWA3Pj283L3s0cSeuG5kbW2haiqah45YLf0cFzD+4C4fedsh4hCtp
sOxSxyWCzoA8jKpV9Sg/bpYjT+ONeSMS+tFd/wBxnD7bb5t92bYxDza9AW8OmsBIFPc93qfp
T64ueN+W6z3A/mvYxtUnHucWH9V2JpWvbRoxqmhmd2dlCVXUupyVzyxjq+rzGd+XPlqXm1/a
WkFp+k2LamZtvhYUc6hpDPT/AGj7e2O0yc+fLPsF8jfMX/unDdi2K521be72kq73qNVXKR+0
AqkVQMPuFcZ45mem3a0vA/nTj8ey2+x86247va7VRtpvlRXmipkEYMQDQUo37cc/hu5+Fdd/
3Hcofmse/wBrFDFa2iva2m3OKK1szAlJWGeo6QR/DjpZM8Z5/wArrlfz9scOxXNrwraBttxv
SON1Z0VvbaYUkKBDQnrmaeNMHMku1dTzGX+KPmXceE7Nue3x7VFuVpdDWbdmZBHQFNTGjDQV
oGwWy9bVvmMt8dci2vj/ACu13Ld7CPcNuRnM1p4h60ZSeuiuQ6Y13JTx19XrHyv8zfHvMeJS
bTDtNzFuVuoO2XcojVISKZVVidLLlSmCc5+WeurK4OEfM3A9h4taw3vGve5Jt6ssF8qoRI/q
aN2Y5ilaHGZzLT8sPJ8tbxefJNhzTdI0uLi2mjdbMUCCKPogNDSg/Njt/X65kU4z5dPzD8g7
Ryzmdvv+y2k203McESSSMVEkkyEsHOksKqMhjnJLGed3W5sf7moI9lM+7bGt7y60hMVtu0aq
itlkZK+r6074OJLffhrr/DxXlfJtz5BvNzu24Te/e3pV55GoOihQBToFGQGOvXX4/DNmKmGd
xImdKE+3QVoSKZVxyo516Dyb5Zl3v442PhclkYZ9mnEn673KrKiBlVSnWvrzzxv+Vk1rvq2u
X4s+Rrjg3KYt5EP6uD23iubQPpMiOPyE1oVNCMc7yp1+GV3C9Ml9JcMp1SzPKe5GuQvSn447
99/a7GPr9fG1+R/lW55lx/jm13VqludiUr+oRy3u6kVAStPT6UH44583Jf21b6h+NPlK54dN
uKLb/rrbdbWS1mTWEKlwdEgJB+3wxnnn1vfFd8c893jhW9wbvZUllRSksTmiSoPuRgPEd+2H
+ntU6eq75/cylztN1FsnHoNrvbhWie6MiuDHICJNSKiEnOoOLnN9Y2ouLf3ISbfx+12rdNmg
3iayjENtMxEbe0oFAQVfoBSuWMW+t3/DLbr87cquedQcps3S0e0jFtBZD1Rm21FjDI2XuV1H
M43ep9cjPO/l0fIPz/vXJbKDbtvtE2azZT+ugt3NJWJ8QF9J7rh/n1Offyz7rp45/cdyjbeP
/wBJurePcp7ZdO23dx/5oQooPV+fLLPPGZffT31+lXwv5y5Rx7fr7ebwDdP6qR+vtZiSrlfs
bUftMYyXLpjX9LL8HfFjzn+4LfuTWsdtZWibPbwyxzypA2pnljYPG9SBTTTt1xTJP8rier9P
7qORNt4SXY7KfcFXSL4MyH3F6Se3++gNMc5hrOcS/uC5fsF5uE93Db7rb7pK11PaTjQqzuc5
IyuS1/hx06ytXJP8qn5H+Y915u1qLm1isra2JaOzhBK6yKM4Zsxqyyw7k8c/tZdc3FvlPf8A
j+17vttpIstpvEP6e6WbUzrVSutDX0sFYrjnL7tXXe/DM7dvl1Y3SSQOVMZUpISRQqagj6Ye
u/tdbn9G7i+ceUw88PMkS3N5JAsFzDpcQuiIEqVBrVtIOWC3zGV1v39y/J942i72mbbNu/SX
0LwzRlHNVcEEirUBzyxqYxbXl+x8j3HYtysb6xlZLqykWSF1oKEePY171xi/Ltzcnq0vfkDk
Fxy+45XDeG33eeYTiW3rGFbSFKgV+2gpnjp13vMn6Z+Flzf5j5jy+CCDcbiOK3QUlitk9tZX
FRqkFTn+7DxfqxOd9YMlj6WNCo9Jbx8yO2OVrWBLBVJoNRIrQ5HzwCgYF86ZgdfGmLWYQ9BU
0y7D6+OIzwbSSV9JAJ+3w/bgOmQsMwammfbM/XEYRdg7MTUdOnWv+eFHjAo6uMlGWVe9czhx
aFZTKSQMiT0z/wAcAgg0epRT0fwjMCnmcFIzLXKuk+XTLviGnUEPWuYzFPHviRmYfaw+tKGt
OxxIxCEkEent+P8AliaLUxfoApGR6U+uJHV2Sn8J+0UxKm16X9WZOVT1+hwCJAqNpYCikEhs
BL3EANPUGyL/AEwgEcgzA9LgHKnbxBxakq+2VVVamWQ7k98B0wUBTr9NOtepauIwWTAvWj0o
SBnTzwxEpowWMggZ1OWfhiIqLQMxAJPWueWMrQtM6DUKkdCeuWFm0K+lgwfKh1U6Gv1wkkZX
Yopox6gdzTCD25ChkaisK/jiKStXHoOWRNeg8cCxDIkjZKBTVT1VNK+H/PFqGRoQDP3aU1Do
Bi1W/oYZg+l39Q+3wJxCFV39ISpNdbV6Ht/1wECl0hKiiquQXpU+IxasAwZzpYhVNNZPX9mJ
YkERACA5HLI1oRmMHpsBISP/ABgAgerP7qdTiYyiJOk9ivXOvbCTOfSOkhK5kAgCvYDAj+45
AZvUa0AOVPrgQXKsQjUFKV7k+WJqelOaLUAKCM6/dnhgphGAQR6mAFa1AP7MSwpWRfszqMz/
AJDErUAJCha6dRqVHavbBoyguhGEJeUUAp9cutcTUn7ZuQjX+OOpIH1A9RgQ6p4HpTpiGuyx
KVkGRFMx/pXEk22IDdmMrQA56u2O38udrc8favCZYNj+KbS+2yxtob0W6mSRoQzlzk/uE5n/
AAxz/rMrluvmLk27327bvNdXkmt2kYso9KKD/Cv+WIq0NQlmyB7HqR5YF9QKhJd0qyuaLSpe
vSgGGKx7P8V/CMe4Qwb5yuY2mzSFTaWkdfeuQf4z+RK/icFmL4ab5Y4Lw2zO32e3WcWy2Eza
e8Zimmn/AI0HRFA6BemNWz8L6z5/Kh91tQYKAelfHDq16z8E815dsd1ebdsm3S71Z3kLNd7W
pLLpUEGbUa6Msqd8YvyzVR8e/KvKuDX13/TKJZzOzSbfOKxNQkKCMtJXoGxmz0fz+Gi3f+4/
5C3La7japp4YP1BLNPAumYRtWsat4Z9aY6z6t5Xbbf3Q89i22K2k/TrJBoX9S6gyPop1r6W1
DrjGQWqfZv7gOabbyfct/WdWXdypurR0Bh1RjShVfysFFMdLJZ5GZaqfkX5u5hzqxj2u/eOD
a9YlkggXQrsh1KW8aUxieLr9POySErXTUnV/ljes43nxfzHnXH93R+M+9PNPm9kAZFmAFNBj
GVf34z1ZHSa7+QfKvPZuY2297tLJb7rtkvvWtoVZYoX6FPaatAQPV44p1LME6cVp8rcqtOcX
HMortTvF1UupH8r1DSU9smmimQGCrn/K7+OPkz5G2/ke7XHH7Zt0v96me5v7ZYiy6qli6qPt
01PfGe7N1qfzyeKz5R+V+V8xa3sd9hWKba5HaOKOIxFHcaSHr1yGNeZ4zOZbq25n81Wm8fFW
18EsLEwiBIFvbmRq6jbnWBEB/G3fFz/OT5Z798eOyO7x6pAQRXUfPzGN5ilXnFt/3fY9zt9z
2uZre8tyHikjNCCOuXcHuD1xnr35dHpzf3OfJB/UyNPChuU9kGOEFUYGmtV7Pn1w3mL62xke
Y/JPLeT7TY7Ru9x+st7FnkgUpV2Ldy/U/jiljNZvad3uts3GzvrRjHeWkqTwuRkJI21KfwOH
qSxq/D2/l3zH8xxcOs7nc4be32rkkUsdrdCJdTppo60H2MQfTXtjH8834c+q86+Pt/5zxcXn
IOLmUWtiqx37+17kQSRqKsqn0kauh7YOrHXn49V0vO+RTc5/9yaRYt5acXBmjUKmsClNJqKA
CmK3YxzM+HqPyP8AKXy7Bxq0t+QLbw7TyK1M0DLCmqSIAFkqK0OYbxoa4Ocq7n4eY8i57yrk
cW2jdrlp/wClQrDZuqUKxZD1MB6m6erri+PFw33DP7gOQ7fyldy3of1SCSyTb7yEKsUvswk6
GVh1k1Ma164bJjrfrn+Wk+QPn613XiO57PsOxy2ibhGbe9vLo+4io4pVRX76ZZ4Zn5rlmvnM
suqi1JFBQ1rlgh1fcGk5ON/tH457w3ZJCbX9N/5B2PTKnkeuC2KStL8p778nz71bxc1SZb2z
jSS3hKeyCAfTIgT0k+NO+N/bzBPfFFu/NeWbxvEW9X98827Qe3S4oBIvt/8AjIoOo8e+CdSz
BdWuybjzvlXyBFuVhNLccnnkR1u10q+qFKBhp0gaUWlMHXUxrLrt+TOAc42rl1nHvciXW7cj
Aliu1OoyzMwQxt4MDQUw/e2es3ptLT45/uKg483HbZ5I9slIaW3eVKf9qyatSqe4GMTuxvNn
rzjjHIObcT3i8Gz3FxY3Xrt7yKMZ6kbSdaEFSQw8MXV/NMk/DWcw2D5R5Jx7Y+SbpevvEO7T
G0sLYAgxS1ZVVx2LaWzwz+tzwfX1rJNr/uZ2XYDJHd3EllZW9WjRozOqR9VRTUnSPxwTr1nt
mfiXj/zBd7Tu24cUZ7a03g0uriUgF31Fi8ZNCDViDTxxrr+k3yNdTzxg+UfHXMth3yPaN2sX
S+uEMkKj1pJqahZWFdZLdcN/pvtc5rr5R8Oc847s1pu99YN/S5wGeeL+YYa9PdA+z8cHPTV5
Hwz4b5nzC0nn2m0Voo0HtySMEQnwDmgxm/09anPih/8ARt/bejsTbfcLucLFZbUoQ4YGmY8P
PD12xz/PV1zb4c5lxGwt7/dLVjYzqrC8gGpEZ/tWQ/lP1wfe0/WJIPiXern44POtv9u7gtZZ
Y9ztEr7sSRkfzadDk2pqdBjXF+1xrqSTXn5KUXQOoqD1xuRzt0y+4KqhzOECWVFH25/aD2Pi
K4ms/YP5akaMgRnWp9R7YlhPVZFIOpjUEn/L6YGcEqj1MOidvLvXBqwykHUhFSpqAMssKlEQ
ZKdE7aa50GeFqeo2Dl69AxND4UxncUEwf3VqwGZzP0wo5LkM1MhQUI8cPg07+nSxyJyy8T2w
HSGoSVX0E0olaktiW6GQtpq3qatT/ngR/cZiAM9OX7cI2GVgPQyZHPViqOgfWq/dXx6DGWpE
sbAVVgKVNNPQnriKNiS9RUFTVadD+3EyLQ7gEHpmAOw8sSAxjClakMT6gPDFAcMCOlB0LeWN
EQeinSanvWlfrhxWk1B6QoXV38z3xnFKkeiaNVSBkx6HIeGMq1EX0sdJrX8BQ4lgsyaU6507
HANHrZkZhUBRQ964oZoV9xDUnRU9qVNexw6cJwQ9BkwNRXwGBJSGLawNQPbpTE0jV2ZmJFR0
VsMgEGIkqTqC/hlTpiqDLMoPTSDkppX9uDAdPbLqMwKUFexH7sCKj11liF6EDL8csSCyNGCM
j3VhXpiFOpWlQG0g1XIdcQ+p45m00fOpJLf54Wp4S0CMKeZ6VoemJSnFNOmv0PWn1xEUekOy
NmDQinn2xEbqg1UGfXM5+fTEg+6wjZUzkIyNKZdMAtMpNQK0AHqVv8MQsFGAVLgEOCcjSlPK
uIU9I0owzz7CtQeuI6WiRD6wGDfbTwxLAqi5knr9ijt5YLUMSsAGHQZUBoR+3AoKWNC5LLUr
TOtCf2YsIZAjygOCjdBSn/BxSjTygh2VDTOlD3+uIWW/BCXQaECijS0fiMJz9moBQlRUGqAd
aHEk3odmIFcqMmKtBYENpC6lqKN0pixEQVYq2krXIjEjnTQGtQcgoH7z5YFaDTSNQeg6Dqc/
DAkpCt66hu4Xrn0zxDAFSxoo0hKivUEA4UKjKrLVaN26kDxGLESMulUB1jqBnkOtcJkh3jlH
QDS2Z+nngUEiye2dNGJNNK96dKYsJKNWoN0716/hiUqNNLkkZL+Va9hgCQyLpUshoSfX3HYD
8MWLAnWG1dH/AISPTXxxI6MPaybp0I/ywHUJDFaJnrINR0r9MS0+mQK4dQwJrkf8cC9EFcoS
z+rpTCdRAt7hJH2j1V6AdsFFSCnt0U0AJIXtTFAZ5DTURQGlf9cSRyrVQrBSvUMCa18sMRA1
IVUOkgK1MjlhalRymnroTTKn+ZxCuO5CKjHVqJNQMhiSmbNz2wqH0gHERenwP/PEllCjmVVy
c5knzp3wH09rK63qMrlHVq6qVXwzHfHbi+w7X2J8csf/ALmYA51SCCQOafmLE5D+Eg5eGL/7
H/s5vnLeYpU3SdyoCySO1D2B6Af548/O0cy6rGKVBIoy55Z1x0dNONRZArDI+lTl+BpnjXMH
Vx9R/CvFuNbXx223fbZrN9/vhqluZZEDxqx/8aIxqMPXFlZnUsdvztbQw7HbbneXUcpt2rFA
fUHYHVWn1xjm5Va74N73LePixbq9UCSe0pLBEumPpkq6e1Bg7zRjzv4K33cLDlM+zxwRQrdB
5Lq59v8AnmlWVQx/LllQY6fWfXXX/n5rj/uEtryTfkdEcHRqkNKhiRlTt06jxxxc5K8tteG8
qu7EXybdN+irQ3DRNoH+6tOnjjUxvcfT/wAT8B2nYNjtNy21YbrerxFN3fzMupNX3JGgrop+
0411xjN61Zc3HJ7Pd7G42JILvdbkG3jEoqUU9XBr6aeJxiMzVRfcpsOF2krcl3W55Hv9wKix
gGqKI0zRDT0gdyf2Yfk3Kg2PeObb3tce9b5dJxLisJ1w28SkXNwlPtIatFP0w+RYb/78OLR7
37Fvbz2+2Rx6W3OSM6gfHR1Kk41OJmm85Gv4xvHGd8tLy8sdxm3gEnU0yMEQEfaisBQYxYzH
Rud1aQ21p+v3GWwjZ1WO2tv/ALSv2pkMRd6rIt2WiQQLo/8AOwBYA54kq7O42y+t7/8AQ3ku
73Sa1kmmqFRgCQoFFFPpiosUHCJJRw7cxNM0rRvPVq1HfJa1pTF0MfK3JoZBu93U6gJG1EU0
kda/5YZWuJEvEtst9w3uxiuCxgedFlQUroLANT8MdOZrp9N9fXl/Fx3jUW22G2bDAyXDrCJX
RRoXL1FiGZmOOVnrF6pPwTiS7q+73Fit1PApKCQDQO9SoFK4JGdeX80+S9tvra92eDi9bmKo
jNrErLlkGYoocU79sMkXtXv9vK7fccbv4pLAQ3ED0uJGctr1gnTp6ZeOOn9JI1bWl+N47W3s
d6jkhb20vJNNuvQinSnljnYDPZbByrY75rjZ0skgkaFYVRfdan5y4AP4YswrTj/F9g2+wt7R
dn26wWRQAkuiWdzT7iSBqbyxWara5pvjvhNrul1vU+1x3lxHHqSJgAg05kqn21PnlikX2xi7
f5A41vu+2W32ewC1vIbgVeNE0BValdS0r9KYZzGctbrnex7FLGu6bzYtvEtoh/Sbev2lz+Zu
lcBfIPMHnm3q4mmso7HVIQkMdAqZ1VB44ZGZHd8Z7PYb1zbbNu3BTLazTr78QyJANSMux6Y1
9Na19V7vuFzByzaOO2yRw7RJC2u2jGkLoHpXLpljMg313ttW1bMu6btt9nDDuBjY/qCgL1Ay
XyHlgnK1S7FaW/LOJw3m/RLd3M8hWYgUHpagp4dMNi3HXf7ncQcw2vjFuix7O1sxeGJdOnTk
B9PHDOT8prnadp49Z7tvW1WUMe5qjaJ9AL5L0Xqc6ZgYz9WbazHIrW33z4pl3rdlWbcJIdTX
BBpGSaHRXMUw/lWeMztq8Eg+MH/ofF7rdL9rYie9ltjoWYrR5GmOWleo04smtVoeAbbt/E/i
puQ7ZbJ/WruFp7q5cAuzgkaB4KtMgMaz1ddeNFb8c2Xk+x7RuG+2q3NzclJpi/QkfkH+098F
gniaPcpdw5te8VmijGz2lskiQoAv3GmkgenTizxR5jBsXxvt/wAr7pbbxZNLBEoNhYRxvMrS
sAW1ogLEqPwwXk8+sB8mjjcnP4f/AMizbTsY0+5aMhhkkCnqEP2BsPPLL6C3W+2+8+Gbyfbr
JbCxeyKwWaUOlQQBSlAcGetXXD8a77xaPg9nY7DuVrtUsCj9a8+j3TL+dijkFq+ONXjEh+fL
OzPxo1y7LczrJGFvDQkhgSWB8PIYzInyZbWNzcAyiEiAkIZTWle9O2NfdSZX1lBabV8b/Glp
uXH7OMXt1Hb/AKm5mUmWT3BVix6+nwwHu2qf5v4tsE3Dtv3i4QJeTzQm6uMiTFIKyUrkSB9u
DGcxQfIVr8WWfxqyca2Wa4uCiiPc/wBPIumtNUksrgBq9KdK4pFr51aRKgBAoAC6F7Y7Tlq9
Y9h/t2+Pdh5Rvd5JvcRmt7SEPHaE0RyWA9VM8q459qV6puG0/Bsr7vsrW0Fhc2FFluJCVdmI
oBDqJ106HLBOGJ3q14n8GcE27a4ri421d3vbhdcksx0rR8wFWoAAXGWnByP4V+N7rlG1w3Aj
tI3DyNtcTafcMYBAGerCpU/L/jv48i2a9EXHmt9EdP1cIZ+gyyq1QO+Kcxnq12/C9pwGPiE5
2C0YCP0bpJPGPckcLUip6rToMVh3x5xxPgnDvkD5F3S4gtWtOO7YVcWgJDzyHLr+VNWeWG08
2xt1+N/ibk9pudrtO1C3uNrdoZLsBhIsorkuotqGWM4bdR7p8bfDXD9msLveLFpFcpCGYsxk
kk6u4UjKpz8Bhk0W68q+WuJ/He0cz2iLZ7hDBdlWv4YGEwhQNnpbz8Djfsjn7r07592SzveG
cetIZXS2kuraCEkelYmUAMVyzC4zG7fXRuXxX8L8YTbbbctua5ub5ltbUMzuZJTSrEAgLUnr
gk031zf/AJuvBBy17qSJv6ZFB7jbapJ1SuadepAAwRc+THVzPivxrxPgO7b4NgigeWL2oYZq
iQyN6E0hz6T3yzw887WbsfIFwTJKWVQp1ArFXPPsMdcHw9u+Bfh/j2/2N7yXkTGbb7LUkdkh
IqyjWzyH/aPy4522tTMXfIuO/CPJrK3seKutpu9xdJbW8ZDrVi1CWVu344rzia9Pi34W49f7
fxm+smvN43BAIWl11kPQsSuS51wZrd6t8Dt39vvxvYXe9z7ijz2duoMYdqCGEp7j9jXGcc8V
2+fE/wAUcg4X/wCwccU7dax1Au/XpZUbSzOrZ0HXLD9TuN/tW1cW438Yi223eBZbRFCztvia
dTFj6pOlCxOX7sMPfWvlz48Tj958rbfJfXEk+3i9BhmodczFqRtIOtGYg412zy9f+e+Jjkvy
TxPaP1bW39RQwvJSoRA5LUHi3QeeMNT5bTa/hf41s54LNdnkm/QhJP1lwzFHKnpWtD5imLBa
+WfmPctguef7qNjtY7Wxjl9lVi+xjH6XkFMhV60Ax1kZ5vmqngHHLTkXLNr2e6uxaW93cBJ7
o6QEjALN92Wo0oMFmNyyvrC8+Gfivb7Nbd+PTTxldDXkeqRqU+46TX9gxmRm2szxH4m+K4+M
btyHdoJbqyt7qYrLKWVkggIVVCL44zYdyH5V8Q/G++cb2zf+PxSbdbXk0UY9tWbXFLJob+Wd
RVsjiUa+0+DPjFrUWw46YwUMZupHIlp/EaN9x+mKxX2sptPwx8acWst937kUTX+32Ny8UasC
VjhUqAxVSCzerFJqvWKLcOE/DPNd92O14k5tZLqVheQRq6qIYgXf0vkrZZUxZjXPj00fBPxk
Iv0S7AAmmn6v3CWrSmrM/d+GKM318gfIvGbHjfMd42fbp/1NtY3RiSSlKKVDaSBlVa0rjrz1
gzYn+MOL2PJOWbftF/K8dvO59/2kLyFR+VB4t08sX9Lvw1/PmPrNfgn40ntv0q7AbZTH7Yuj
IfcGXWlWFfwxyxVT/BvHeFbLue/7LawCTkW2zywXk7oCptw9EEZ7A9xgz0c3x88/NknD35xe
pxW3a2tIqrdxuCqm51EM0an8mOkjE6sv+GCt1HuKWqpJAr51xqta+meE/GPxjxj442/k/MoH
uptyKsz0f0GWpRAiHwXM45brVScz+FfjO23zYN3/AFb7VsO7vSZHPpBMfuRqrHNfcrQ16Ys2
MZla/wCeX2yx+OWso96G2xG3EdptyhG/VqmkKmfqoB4Y1xGf6y1zf238Z4ztXDJdwsb5LndL
xR/UnGkNbBdRWKhzA/Nn1xn8ukmTHn228P41zr5w3W33PejutlbxCUXAKobp1IAgWnp/l17d
sb6Z/nvrZc8+J/i6z2PcAu13O0XNvA81tfhHeEMoqseqrD1HLtjON66+EfAXCbXim3TbnZSb
rf3kSzzy6tIX3lD6AtRkAaYIrWe334x+LOG/Je1PukTybTukR/S2LH3FjuUdQCyjMpnjWeMz
q/bDf3Vw8Dis7cT2rDljxKbN4RpU24Yq2umRp2xriT8sf1l/D5ajVtahhTKlMbPM9e+f2xcN
4/ud/u+57pbrdttEUclpC+aK7lm1kdK0TLHHq7XT4ehR7ns/y9wfks277VHbybIJf6dcRMRK
mmNnX1f/AEZjpi0dcbN/LOv/AG98X5ZZ8c3fiVxHFthijTeNRYsXWnusq/x9QV6YPTJ+2Z5X
Y/EXHPl6wS1ga747bxE7jbRMXUXa5KgLH1DoXGN5/q5Zft/hqP7qtugv14c0MIje5MkKABQV
VwhRfotcPHwx/Tm3uV6B8X/E2z8BltAts9/vN3Ef1e6FQUgouaIT9oPTxOOd99em5+Hy78y7
XPtvyXyGIQfpIzeTSxRkBQY5TqQgUppPUHG5HLm/LBxrrcFBpzybMmpNMa0y2vqf4wa04L8C
btyuwt1beJ/cWScmjag/sRZ0OSM2qnfHKX3V35GV+OPjjZOR/FXIt83INLvPvlbe8UkGMjSW
9PRtTOSa4PttPxG2sfgThm3bvyHbpEluoF2qKaylc1nikcvrKn81Wjyr2NMXuidfhtbH4X+O
v6ZbWT8cSSNoVWS6lYrLUrm7DVUNXwxo2b5WO4D8J/H1nv8AyWyvIhvNzYXIjtLK4cKUtnjW
RTpBXUasV1eWM826z/O/MYv+4Ph3Bds2G33Hbton2LdI5/a/SOhEU0WkkuranT0U6A546cz0
9a+fzGrVapDAZg9D4Ux2+DY+nvi+ay4X8FXHOLawim38ySa53Gpv/KIUWo6BR+3Hnzb6111c
WXLzxTkPFeE/IXJ9ujSee/gh3JYFJWSGQyLpkA9TBSgI79RjO78OdnssSW39vfCNo5Pfcs3S
a3l4bokuYLCVXpGJly1N10pX098W3/8ADd/ybbdw2f49+IZeV8bskmubu8lS3uJl9RjkndYt
dc9IRRljXIlsmMp/cJZWG8cE4pzf9DHbb3ujiG+aIEK6yRM+fiQyek9cPOaz1x1vj56msLmG
ITPG4QmiysCFJr2NKY6/aNdcZHI6DUcjlka4dZ+r2L+3Dimyb/zr9NvFsl5aJaTSG3lUlWYF
VVsqdNWOHd9bj1Djnwlw+3/9Y3N4muDNud1DeW0x1RyorTCIUy+z2R9cBlabmmx/D/x/tVvu
+6bEk1Wa3hhiT3GcsS5YgkL6R+Y9sXPGsdXHgvzRafHy73sW78PdBZ7rb/qrqyiNfYkjcDMZ
6dWqhHlUY6fW4Nv2n6ei/wBwcVjdfFHCriwhNvZNcRyR2YFaLJauxGr/AG5/XGePhrvn2Nb8
Ucq4ZcfFe4Srswit9qswN6gVFZbnREakZ+rUFP3dMY5nuOne2PmDfrzj11zK7vdpsnt9iaYS
w2c3qopozRGlcq5fTHTvnI58zPl7n/cnBY3fHeBzW8RtrSUSGC0GYjR4I3Cr5hfTjPPXhs2t
jyRvirhk2zx3ewxXN9vUcUYhSIGigKnunV6Rm1GpmcU502+oNj+K+C7V8qb1HDYwylrKK/23
a5Ke2ryF0l0huuainhXGbBPlB8zNxvj3xZuavx6KwvN+dbX9HGYz7cqglLiqZegL1XHT+cms
9zZj5B0IECatWfpdfuPfGrmr2R9Hf29mDavjXl/IILVG3iwjaSBtOp1AhLqBTpqOZp1xyz/Z
1685aT+pXHMvhCff+T28d7uW2XazWs3thGKxSxmlQOjBippkcN9Yxap8NcJ37f7LnlnbyWdo
4FxcbO0GirgVIC5UB7qBTwxnFFD8Qbv8ez/LPI4rDb5ILy5uXl2aV0KBVRCLlAgPoGvVSo8s
b7kmD+e5WC/uQ33ZH+QLb+iTXSbltTyLuAbV7Uc4ZXV4NRND40yyxrmeB6f8Dcx5hyDbtz5B
ybefd2uxAhjWTREFlUAs0hUAadJ745+7jr1PC47yPZ7Dge8/JQ2OK7v903OVpojRj7bTCIAM
Q2S/dkM8bs9xm60PJt0vp+FcV3XYdpEV9JuVvc2m0MQiklZSy1oANS1INMZisT/H3LLHetyE
d3e3lryEPcC72CYMIYirH0qCtKKtKZ4CyXCPkbjJ2u+43uz3eyXG1bhd/p57LVpdPff0B41b
MVppI7DGr4JNeiXsNhc8g4bIa3ACXjQS3ArKQbcfdqHU9TgVYvgm9cv3znHNdk5Akk2wos9t
BbyR0hyOlVFR+aM5411ngvwquXbtyLaPizg6cTLwCeaOKWS2UlgFVioBHTU654ucmnmbmvRk
tIH51tl5PCqbnc7JKtw+kBmIkiJFPFScZk8N/Tz7g+58s37ivPrPmEck1uIbv9PBOlEUIr5I
CMqEA/XGr8sczIoP7Uob2dN8tJY2k4/dW4SWJ/VG0x9LV8C0bEEYOvK6X/1fN+5RrBcTxQKq
pDNLGKDoqyMqgfgMd+/HHnnxxAZVJGZNKefhgMMQCAXqFXtgFgCoLkKdND6utK0xNQKqCSx7
0yHl3xBIof3aknLJ6ZjI4GL6TKpnqKhTQBh0wyNQMzFTpNaCpLDuBjWKpI2ANUbWpFD4YMV8
QmPX6lyK9u2ISumMxtoMjUrlQZD8cFa2Iva0yDSNYI0k9uvU/TBFk/BpGNSr5kGi08OlMsK0
iTVgq6WU5Vzpl3wsaYGlFIJBzBWta/5YMI11AF/3nwxmqUg7M+sEKp6U8B1xNwgsQcqp1VJr
/wAsB2Bd1IYIaEGniKYWKdiVSqVIpmB44BTBlQgrnlnXpXGopcSqwkU0NGXIg5Cnjia+flCF
YFhTUOoJ6A+f+WNDBsquqqooRmZD/wA8Zaufg9BT1AEjI17jBqOyVAowOWZP7hgiAW9AcVVe
jEZ4Wcow7AhhWijKvQg9hhkZ2mVXLA1rnqo3Qf54GoJ2IOjrU1AxKRIFJ1EsQa517U74G4bU
pLqW65kj/FcOI6yIaKFyP5vI9jiwWnS3LKQzegVIPf6YxqMD+WhDLka9D/zxENIxIAhOsg6q
minxAwjIZzKoIOVR18u2DViQaFIrVgMvIYB8BWMu7aWpnmG8PLDqnpjGFZtZ1LXKmdfD9mFr
CKyVUfbXOh6/X8MQOqrUscivameeCqRIyMWqMg1Ovl1xESSxBf8Ax1OWZyJzocRAVo1UYjrq
U5nAiM2hq0OqlCQKAVxM0iPRRvz/AGkHLIdjhWD1+gaSH0eknvXAcJQCVh0jM51608T4Yhpp
RoOnUABnUCtCTTERiQdVALEURsiCp6598R0xUN6idDnuOo8hiR5HUUCmrIQGB7nxJxM7gWjJ
bUn3DqBmB/ywtE3vAagCH7EZ1+lcWgVSVDGpYZsPriWDyqSp6DLOlR4UwNGWoBZqr/EPGuIH
Vk/I1aZkHv8AicSOWQtQqxJPpIyPmTjNFp5GZiPSKD7WHWlKYkQIDdTqatRTt5HEToylSGWj
moz60PTFSdUK0oQtQag/54dZ+oJFZVoxPt1yH1xacPI5jGlSVqa9618sStFHGrx1B0sM3WmW
fXEsRTag9AKGo9VM6UxYh0L0TUKHsfHB9SdkMdRkQzeodK/TwxYClhUKfaBHenYDrTBiDqcr
6VZgeirQZnEgx6SSlAP9RiOkDKBRgVJNGJ7/AFxatBLCrFWVlUHPT4jEKJFUMI2INCa/XtgW
HkYkZNQA5D/P6YUjLZA5hqGhIpTzxRGVAkYYFnC5NQ5VPjjS0LGteyPkD19WJa5Z4mYEk0BU
0A8sUMqjemrLpiWkGIFR1GJD1DxOJO+2uJRq9tTWmbdMRmjsmIuEqdJLhshU6cb5vptfS3xJ
8icgHHF2TauIPv8ABETW4VnKmv8AH6dOXfPD/ay3WLzjzr5BTev65O28bem1sGqtinp0qcxT
Opxxjc+GU6g1IVK9caGwhVHBAoGIUMMwPrjWqTVltm7T7ffW84KySW7CSNeoJU5dPDHSf1sX
XM+I1W9/JXKuR3tvf7zHFc2dkaRwmFlt2H+8qaH6Y5faWs5jaWf9yPMpLNbGw2nb41gSmuON
yqr2AUNpGWLq8xcy1nuMfN++cc3C4uzt1pe3125D3U6MJUUmtF0EUXyx1v1+q6nQ+b/MfMOS
xQx3lnFYWiMJVFvEy+5Q19Usmojp0GOM6mtTkuSf3A8x3jjx2NIoLC2ZRFM1slHZB2qSQte9
BjeT8DM+WH47yzc9n3aHdLUl5bWQyxxu7GMt21Z5Y9N/t/rjF5316TY/P/KId7O83NjBOskY
jktmVliXPqpHQ48/N5/Jy1oJP7gN/uLdzbcQthI4Oi4KvQHrq9S5/txzvXMqQWv9ye7RW0EG
58ftri6UAH3GZWamQyIIB/DG7OVJdUHIfmPcN33m0udx2CzjsbXUU2wqwWUtkPdcAE/TpjGz
V1K7N/8Amzl+52C7ZtO0rx+wZQJFtENWHgX0gKPoMbmafrVpsXz/ALvbWNvBf7HDuO4Wy0Ny
+pWqPtrRWUHLth/pefwJK5E+f+XneHv76zX9Ew0LtojdUYdhq+4kV64Jhyurdfn7k09m237N
sCbNFICJ7iIM8iqRmUIVVBPbGfFJWh2j5Gt9v4WbHinFdxu5QjGW5uULR+66+t3Zalq5mmWC
2GR81bzNezblNLdxiOZ31SR1oobw043xJRVjxLc7ay36zurnUkEUqmRhnQBgch3x24mj/b8P
ovk39y3GrRootn2591kVA5kn/kRoaU8GqccfomHsv7j+Rf1S4uN1s4L22mBVbCMmNY17DV1N
B441OeTQ8l/uJ3PcNvk27Y9otdlSdQs9yCJJdBHRfSoFfHB9ZKpLfFjwX55sNi2qHaNv4r+p
uSC0z2spZpZAfU8lVZqnD1zP2Otlyu6y/uB36HcrkR8ZRhO4ZraJXWVQuVCAOp7scZznD64e
Rf3G7q0D2e17Suz1Ymedj7koYGuVQFHnXFJBXbbf3K2UUEEkvHo7jdVjqbgTAqSBnpqpZK4s
imuKH+5m/k3KZ9z2iO4sJAAtrFKUK9vuodVfPG5xzYLbKj3b+4qGRoo9l49BtsMUiyTOfbLy
aeqjSqgV798ZnMh9cN7/AHIchut5W6bbYV25EH/5MqasejH3KCmNfXlPOuc81n5ZuX68WUdl
CVIWFAKkg9Wbqcc7MZyxWbFu97tN9FeWbGG8jzWdSNS/SvjjXPWGzXttt/cg0FiGXYUm5Ayn
/wCez1iz6kR01D8Dhs5tXM/ag478+b5a3802/QNulpOxeS1LaCD4oR9oGHJY10Lmf9wXIb/2
INptF2bbIWDR20YPuMwPp1OAB+AGM85omr2L+5Nbbb6xbLHJyEppa+kYmP1dwoBYZ/lri7+u
nbVHxf583mzvLgcmgbd7a6bUIkZY5EY5kVII0eWLJg62OD5F+b955Sg2a1tl2jjyfdaRjU05
GYDuKClewwSxZ66Lv+4XlT8UTj1jaWu3xRW4tnuIUZiYgukhQTQEjqaYZIcT/H3zhHtO2ptW
/wBtJuG1wxgQQLQMBXIEnIg179ca6ys3ypd9/uL5Fe7nbNtVkm27TYsPbtFGrWvQ6yAO35Rj
EkWr+8/uS2uHb2m2TZGG/TxlHvrhlMaHr29TAdQMPXGfk7rA8G+atz45vu473e2S7nuG4qwk
nmYhg2rUdNBUBvDDkqtv4VO+85n5nzKLe+SozbeWCvaW5CMluBmkZP5vM419fGeZnWvb0/uC
+NLXYBtdts9y1rFEIraxITQwp9rNqNPPLGfp+ddPl5/wznfxXY3V7fb7sEt1eXEha1tolWWO
FENQqoxWn/ccFZVvyR8xbjzO6hsZLb+n8cs5AUsEPqkANKyN406AYeZPhbnrQ8x+buF3fCf/
AFrjOxfoy6KhlkSNUhC0JMempZjTrh/5ZVbb66eH/wBwGynZY9t5jZNeRWkarA8SqVYIMlkU
n9hxdcG1kvkT5qvuX7jaRfpf0exWTrJbWDD7gp6zHoagUp4YzIz+Xdz/APuI3PkvFDx6y2uH
bILhFS4Mb66ohH8tV0gIuQxZILLXjLFKkKdSaqivQHrXLG54o9V+FflHa+DX17c39tLdR3cQ
RVhA1IQak55aTg69a9ZLk3L5t95Te75PGIVubgyrbqclBOWfkMF69cP5zLj6A2v+4Hgt9sNp
bcit7qK4gRVMVpUxMEFFbUHjbOnTF9P076zt181cDl5faXB2V4dltVMfvfdclya+51yp4FsE
43wtLvn9yXBNr2iaPjtvc324FCsEdwhWFSR1dyzVA7gYf+dnyzu+Mb8QfNezbHb7nZ8ihk9j
cpWnee0UNR2qCvt5UU9BTFedUnjr4p80cM47zfcZNv2yW045eoqVSryrozDhSe5Oa1yxfWNS
Vpr754+Ntl268XilpLJuG4P7krSIY49TZNI7OTmP4cZ+g15981fM208u2Xbto2q2nVbcCSea
Si6nIC6QK1I71xqeCxhPjubi68ptrnlRmOzQAtMkILs7AVWv5qavupg6uta+guZfNHxPuuyp
aIJ7uW0aOW0i9togjpTSSzeAwTlm+sX8gfO2w79yXj9xaW0q7btM0c9w0wAldyyllRQT9tOv
fDOfDr1fhfyXxTmHMLm6sbl4WtrVYrS2uP5JlqxaR6E0OnIDGbFqb5luuDnht4eUywsVRv0E
SNqlM9KpoXxJ65dMPMos18TXhEtwXSkYY6iB069MaF5e7fBvyzxfjmz3+w8gDxWV4/uLdxgu
DqUI6OgzUeeL6tyeLTkHyZ8NbALVuIbOt1uRnSb9W6uoj9tgTRmJNWHhi/5/sNbL8yfDl/e2
XIb/AN/+sWgH6dRG5MZP5Qwohz8cH0W4ob3+4/jV1s3JVNrOt5uAaDbLcgBSpj9sNI9fTman
FOfyzZvjM7b80cc234ak4p7Usu8ypJC2QEY9x9RkqDWg8KYJPyb+kO7fMGzzfCMfELWKQ7w6
rDcagojWP3PcLL39Q7Uwyzm7R3N8Z74O3bg+1ckk3Hl8kka22ltvUIxQSg11yafVkPt88Y3a
Z3I9b5p8lfEfKd22U/qp1u47lFO4FZIBDb6qtmwAOo0HTD9VzZa9mn3HaZbRvdubU7eyet2l
WhSn1pSnngw318P/AC03E/8A3jck4uoO0RsFWRTVWkOb6f8AZq6Y6ZnyJMio4Zum2bVyGxvN
xtv1dlbSq9zahigkQZ0rnivqkfU9p88/Em22Rura6vBqXULQJI41dlGZXrllljP19advHPkn
ht38ebhvfIDBb2N9dzi421SHcCVgFTQKFmI9VRi659DJbx/cJwrbF2nauJ2T3O2bdIksuusd
FBb+XHrzrVq6jli+uQT1om/uB+KoLobkk97PdygLLEFYiMEZ0QsEP4Yef53r4OqLavn7433i
03nbOS2s1vtt5cySxRspf3omIIEgQ+k+muH/AJ1WKS8+ZPjHYd82iTiGwhbeylZ7i706HZJl
9tlWpZq51zxn/mca3cfnL4jLNucj7hc3mklbQCUIzAfbQMI8Ums76+VeSbpFu+/3+4xW62dv
eTtMtoh1CMVyFfGgzxtY03xJzmDh3MbXermFbiCISJIoydY5V0MVPj4YLy1zcnr6Nj/uC+Jr
W9N5DLfS3V1nMvtuwQHpRWbRn/txfS1n7R5ZwT5q2/Yfkrft7v7dv6bv8zl1Q1eBTJqRvB8u
owWS1nnyML8yb1w7dOXSXvEYJIrGVNVw0woXnbN9AJNFp1zxrmRz/wBt/wAMJC9AWVygBB00
qMdMb/D6a4d8vfG26fHNhxznULqdvCKqgO6S+3UI4KEEEA0ocee8+46b5rM/NXzHs3JItq2f
j1sf6LtjJILmUhHcqoVURamihRTPPG5/Pw/XfVZ85fKez8zTj0e028qR7ZC4u3mCikkgUFFp
WoUJmcYkE9uur4V+Wdk4dsvIbXdY5pbq9RTZ+yuurqjLpcVFASwzxv6fkes38U8s45snMGv+
Q2bXG3SpJG5QESQyMQTKAtKlemWeMYufI903757+Ndt43eW+2y3m6SzxNDFZTq5WsgpV2l6A
Vz743zxat/Tn4584fH+5cU2yz5DcXe339kqwmO29xRJ7YChw8Z6aRmDjN4y4Y8p+V/k7Y965
rtu48ftHW02soFnuXbXcNGwP2sWKL9euOl/ncYsy60fzH8nfGHNeJxbhDa3C8sEaRQh0KrEu
rVIrMfSy5mhAxznJ3fh8+e4GOTaDmA3ceWNzU9O+FPli54Lf3Oq2F7tu4FEvbckK+lajVGx7
ipyORxnrkzXpHLvm/gO07FebH8f2IV95DC+uXBjVPdFHAVqksVJFegwTn8rc8De/3HWG0xcf
2/iFp+m2myjj/q0U0SqXYFVeJT40BJcdzi+viuqLkPPfirfPlux3i62+ReNvb6L+kYUy3DE6
ZGRTWi9yOuNf895Y93fw1XzB8q/EfIOJva23v3G82qU2V1jZPbkBFDqJppyzwc8XD3N+Plm+
Af3H8jsd4sY+T3jXOwwJ7Nz7UatLXSVVi1AW0mlaHGa6fEeXfKHMJ+X8w3Hen1+xPKyWiSfc
tup/lKQMhRcdvJMavPjIqzrIpFfWO/btgcduvoL4b+UOM/8Aql/wTmREez3KObWcrqKas2Dh
c/uGpG8ccfit3jeUHw/8ucZ4tabzxjkEUtzsktwXjvFFWNKLRlqtQyqGyw/X8xj+c6z1vt5/
uP4Wtpc320WE779OgtkWddKCJSWR3YE5Zk0GeGct46YP7hPjN5LfeZ7a8Te3jWOSBQxjBIo1
CW9s0HelcF5NZbafnL49uOVb9Nu+2zmwvrlJrHcIxS5UpGI9J0sjBTpqKNjfX88ZlVPzn838
Z5JxCLjGx2s9wk00c813d5NGkLVCpUs2pjl16YeeM9HU3x4AJGLMUNPUfCoI7Y3TMezfFHzP
t+wbFLxbk9iu5cXuHLNHpBki15kZkBxqFexGOPXy10veRf3A7De79x+1sdo9nhuzXaXIt3Qe
7MyKyofbzCaNRYDvhvMkZnsddt/c7K3Pb6W8ia44ZKGgg2/QnuBKDTKQa1LGtRXB9VN/Ks4X
88bHZruGw8g2YXfDJrmWeyt9KyS2yPIZFQo2TgMcs6jGuuJPhT4Zv5j+YxzX9HtlhZiw2LbJ
TJZRLlLI2koCwHpUBTkBh+kjUnmtj8s8q4Rc/BXH9shubW43727R4IbcASRKgpPrp9p01DA9
TnjPH871XPu3qePnOpd2YGg7aulfDHWzDI3PxL8hz8H5VDu6Re/bkGK7gY01RPQPRvykUBGO
XXLXw97P9y/x6jLBDs1yYbYm5sR6ARcHUXyrRR6znXucsH1X4eZ/MfzTBzrju0WEdq1pf2Ny
896VNYXRlKj2z1pn+bFkZu15zw/drLa+T7fuG4Wv9Q2+2mSW4szTTIgNGShy6Y3fZjU+dfQP
Ovm/4v3bhs2zXPH7xrYxONsRkRFjcKVSWIhiRoJ7YP58+jvm2Y87+Gvlqw4ZFuG27tZNuGx7
lH/8yIANJqFVX0nIqQxBrjPXPq5lkyqSy5bxW1+VG5DabSTxxboyx7PPpbTGyaHVcitasWUH
p0xrvmZh5mPXPkT5v4BvvFJtrfYbkTRw6tqkmREELAUSSMgk0FKenr0xnnmLp5z8l/L0nLbz
jl7aWTWc+yRIG1sGWSVWUuRp6KdHTD9ZILLXqvCvnDh+9fIMm7btCNokudvjtbW5uGDLFIjM
ZNTDJVfVkT9DgvPjUWfznzf433DgN9YXO4W267pMpG1La0aSO4X1KxYE6Vyz8ca44t/8Mdzx
8hsyj0j/AMhrkBT608sOYf8Ay3/xT8pbxwXcJGt4/wBXt1wFiv8AbX9STLmFp/CwqaHp44x0
d/DZ/IH9wl1ukdntXHdvj2nZ4JEe7sm0kyMjhxGwUBVjqOi9TjXM5z/ItxDvf9ynJ77l1nve
3O1lttoF/wDyOzAxS0ylEjAAnX28MEkzDz8qHbflWRfmWfnO27WyJdSE/wBKDamIeMJINaim
p3q3Tvg6zBxvwoPkeTfbzm24325bbPtt9u8v65bGUMSI5QFTRUDUMuuHfBZlXvF+bcj2XgnI
uGf0p5o95P8ALmcN/JJAVqADOqr3xnjuS7W6tPij5xvuH7XNs13ZJuu1OzSRwStpZJBlVSQy
lTTMU+mNd2W6taPnPzzyu82zZ7m22Q7VHHerfbReNqdJVgqvtgGiuKMQ1O2KXk9c4s73+569
t9tFyeLRQbncL/JvSx9vWcvcoVDFfLVg4zfWffwxXxf8v73xm63qVNmG9Lu07Xl0oBDCUliz
IFD0X1Hth7zV7ip5X8zc03/k1rvAuDt09nMX261hY6IMtNM+pYfdUZ419p9cg5t1rOVf3Gc+
n2OKx/QptN9cx6Jd2RGDyqB+VWHo1ddQqMH8/r+V1LfhwfFvzFzXju0z2cW3Nvu1prla0fUx
gBNSVZAxA7kEUwdZrpJ/r/lm9w+W+fbly1OSR3zRX8DE2McVdEURNDCsZyZW/NX7sd++ufrg
54/bTfIfzv8AI247XHtN3t67AZYx/UpI9QkuI2Xz+1Wr+U44cdcudl0e1/JHKeIfDaWVhszw
puc04h5Gh/l6WOeSepZUbIasqY1/P6/b1uvDpTqmLULBiWdjXOvWvnXG+utrOIkNT0IAIC4y
jPUkqexJJHj54gYv/MK1ouQC+OJaCVgoGk0UimWdcI+UgqKjVXLMf44AYuNFFBVCQAOtfphh
gXLRjVmQCaf9Mb1HUtkSaV6DwwUakc01CvoahPjjNi0zEknUoApQEGv7cCMyGjUNDQaj2OKr
MMAzR5HoPUfAYy1pzQAM1GHUkdaDGoz4Ye4WYfnNQBXKvUYcZ+RBpSjBATTJgOg/4ODT9TgM
Muh6EDMU+uAkSVVwvgfUe2Eo40UsGNCT0Xof+DgoTAsuYBCNka55YMRLQnuY6ZgZ9PHFhlwV
AyEJkpzrTOh8sK3TKqsEFKdsvAd8HrW4RQVyI8Knt2GL1TDGhCgkr2UjP/DErSVgzorZVyrT
PI+WHQdlCKypmGzofE98ZVJJFAYEUFKL5/twiUA0CjeGVB/mcSSMJNJYMNPY/XBqkAVYIEWp
DZVr3ww+iKACrDoKEg0OQzphonyJkHtqxqDTJvHyxzbotaE6W9LkUX/XC52m1gBgz6u5A8O9
cWNSkKSKKLQp1H+H7cRlFSN6IARmAfDwFcC04ZSpUek1NKdMhmcC0AdTXUlFAqCP9MKlF7o1
DSmQPXKlMK3Sb7gzDUwroApSnjXEQo5LOeoXIVzxARUMFIOQFCTUEeWDWjsXppQZilNWWXji
QmR/tJzOdV60HjglQdBBJD+o1I8iMQiQsWiACZHs3+OIoJCVYaFFPzUNaD/XDgwopQ7FGVqk
Zg5kDyxYzo5EiCe2CQCfur08MRFGrlQgAFPuA7g9MDUPJHUAfaB9+nEjuoGmp1BxnXt9cQFU
Rv7RFFH5q5U8MSFrH3FwFrQnyGJaiF1IxIUaaijDqQMUUGYvdoQPt9XhmM6YSHV7gYCgXqpJ
6HwxM0TGi1eoP8Q6EYyRFogmkVZzQ6h2xYtCwdpTUBmAyIxLRIWqyVohoWXuT4jvgUS1BT3N
Wa/aR1+pxNaCT3FP8sF8qsvUg9iMQug1MYwWP8wdR1BOFJHzALdKdRmV/HAaY6iNQNQD6SD9
2JI3VgCak5mp8a+H0w6sL2yqihH0I9X7cUpuG/mqigtViKahnXwxaDFpUYp6gDnSlSMDN0Qk
NCGUKvQEZ5HLECjZ1UOQQxNOnUDIUwNGJBdiD6sgRixQhHpPqNTU5Hw8KYKsAGHuNQZrQRkn
r5GvhjWHCL+27K6lq/aR4+VMqYhTt00qy9fWc+vhhQQEp6fRnkAaCvc4jAFGU6q+nq/07Ylj
jvyPa9NdJz1YIFN3PhjROFFetR0wJP7SfxflpiadlpPnQCrk1NOn44qJcSwCZrwMqmvSvSlT
543/AD+T16+w/h6/uoPh+L9JK1tJGs2mSPSKPqpqNB443/eZ0x/h85cpuri83i5u7t5JrgyM
ZXkYvqOo+rPPHKG7FST6Sy1XT49yf4cTJ0Z2bIEDoRWtDhhle9fD/wAObfLttvynkMD36SEG
x2qFWIcE5NM3QgfwjGeodn4bT5o2QPsVqq2H6Lao2UXKW6pH6a1otPSDQHM4OZNWu/aE4p/9
2zTcf2mOwtWhfS8iK8zFQdRZ6fcaYO/5yUba80+Gbbg55dcxbltr7jvU8jex7oVrWJQa10mu
p616jLHT6ea3Tf3GRRjeYRHHHCNClUjIVAQtOgpT/PHJmPFY7eViyUOonNhX9mXXGuems8e+
/EHwrt8W3W/IeVwtdvcMGstoiBaPTWge4NMz/txus7j1q72Hal5DYmTa7SG0gjpDHIiLEH7H
MBSw6DGJBtSb5c8nNrOm2y7UKBkiio8sgqKCgU0ri+sUZiz2XZ+FbEu+3FhFunIrx1/WXlyF
JV3P2R5HSo8sLV6tBzP+l8XhXlX9MttxvJArBLhfs1CpIAy6HrTBOJrN6q94jy+43viEu+7j
t1rYWbKwjSKrgquWo6lXKvTGrJPhaubW2ii2yJ9mtNuT3F1vJJTSQRWpKAmpOM/VbXnPM+W7
/wAc3Ibruez7dudpGvt2ukMkSOKltLMp9X4YZi+zYcc5Wdy4TJyLc9utrCBo2lhiiAk9K5DV
qAGbYrzBuuPgvLNy5Fwu4vZigZfdjgiiX210hSQDSgy8sa75w18l8wp/X7kk1XWc/E9WNT4n
GefGah2mwO4bpb2yyeyJ3VTKRXSCQoIGO3O/hudY+lo/gv4s2GCzi3ue4vby4KqoDspkkpnR
Uz09+uOV2m3Ucv8AbtxS53uWaWSS02rSCbZDqlPgusg0xmSsqDlfGPgXb9rvBaXTJfW6+3pl
kZnqMqIjAav8BjU5p2r7+3faNiXZNxu7KUSXbOY6iL+YkR7LIepauHuY11fGk+L9rskTkUjB
Pdku3Wa7ko0jJStKnMUPbGbGIody+IvjrfLK+vNpkku70MyS3kr0jD9xpCr+3GbKi41/b98e
GxUXL3e6XbZyXUWqCE17Ll28a4fWr1+nHJ/bbxz+syzXF5LZbPENYgibVKT3DSEGgA70xTRO
nLN8cfB5mtIdrvXuJHuAkkJmLSNnQqqlVbM98ayr7XWn5x8ScVvbaB5Fg49sNkhNy8SD3JB2
TzPn1wYHy/ymLY4t1mg2BJv6dGxWB56B3ANCVp0H1wpLw/jp3zkdlsySrE15Isfu01KlTmSO
5pjWeLH1LZcU4PxO4suK7dssc91fIQ+4XKrI7GtCWLVI1f7aUxmRX1yw/DfB9u3W833cLc30
NspeLbyAIlK5szAU1U7DBiNuHD+Oc949Few2UW2Qu5jt/ZjVSNDUZjpA8MH1z4Tus+M8E45d
WXELDZYpLi+jL/rbhFlkag9TMzZioHY4fqtV1p8M8E2XcL/kO4WxvobZDLBYtT2gVFST/GfA
HLFIdV3NuH8b5NwWbk1vYx2AhhL2sEKqlFU0GpVoK1xT5Y+1jJbb8N8C2/hX/sO+cgruEsBk
jhhZdKuwyi9vNmOrrhu63q0+L/izh1hxX/3rf4W3IPG8sFmTVEiUn/yDLU/kcsH1qrSb78Vc
V5vt237pa26bTBeOlY4kCkQ9wFFF1ZZEYpMHwsoeEfGi3knCLHY4Unht1kkupU1tpOQPuHMv
XDgtteV2/wAG7VuHyHf7M+5/odpsVEkznSskmv7Agc0FcPuNc2VludcK4ltnMYON7Nuwe1JA
n3GUgxQ5jWGdOrLims69pn+LvjLbfi+9udns03GVbVnj3aSrTSS9Neo/bU+GBdUfxj8XcP23
iVluB2qPft1u192UyaGWPWT6E1+hQO/fBY1qv+e+A8btuDvvUG2wWF/A6KqQAIo15EtpyYqM
UZ6vj5cVJJJtCVABCiQ9TXyHXHSd0TX0vwv4l4Zw3iVvyrl8B3a8mVJDb6Q8MSyfYNH52AIq
T36Yx7W+6r/mT4r4+mz2fINqhWwa8kjiitIxpQe99pIP2+eDLrPql5r8C8Z4lwU7ze72Z94R
U9qFNIR3cjUi0JLAV64Zp6rw2WNaEaQqHqe/njcVjd/Enxfc/IG8y2S3C2VjaxiS6uKapAK0
CoCaEnGeqo9en/te4uILmK33yS43KBatGVTSnh7gU1WoxnKrI4eK/wBrsFxa/r+Q7rJCZDW3
ghUOQmrqzHuw6UwenwO+/wBr5XeLKDab52srot79xcZGJU9RJUfcW6DDtnwPy6t8/tp4tabV
ctDv0kl5DGSqSqiLkK56TUVxei4t/jL4B4Euym83SeHe7uZKVif+Tb1HQUNdQ/3YLrW5HnP/
ANyttvPyPcbDxncludvtR7u4X5Ue3CpP2ZH1NXIUxq7Bx3rYbr/bFsRsbqTad8kvL+Fc4GCF
ajPS7IfSfwwem3XPt39qe3iygud75AYDKg96JVWgL5iMSMy/txbV1ZWL598JT8Y3/atrg3BH
t95cRxXEv8qgBAckZ6dIxv0SvRvnDgGy7TwHZdv2Owt4pFnit1mVaySFgBnIczqzOZ+mCXB1
8q/bv7VILeKGbeeRLA7KutVjU/zD1TUzKGp2wfateRwD+2/e/wD2wbbDf6NsjQ3Lbmymqq2Q
VUqPX+ODbDbLF3vP9vnFts43uW7blyK4uYoIWeGYgBQ4FM6l9VTRcsGW35Ynnw+Y5kT3GoMg
aUB8+2Osat16L8TfDm789llmjuf6btNp6bq+I1EuR/440yqafcScsHXWfB56bTmf9t0e0bMN
w2Xeo7+QSLElsUVNbOQq6NLNVvEYJaL7VpY/2rR/oYk3rkKQ3ciiltGlaNSunVqUtQ+AxTqq
1X7d/a3ut3ue4295uscVraMiRPpY+5qGrUBX00GM7WQ8s/tmfbtrXcdj3M7sylIniIVfuOld
LAsuROZxbYo9S+PPh+x4bxFwdutd05RcpqupbnSULHIRh2BoqjrTri+flruzMfO/FOJ2/Jfl
hdpupLeKBr6R7yJGpEFjNWiQ1z6UXDZnjHGfh6D/AHAcKD8x45s/HNthge/hMSRwqq62109f
kB3xGT1cWH9si6bWC+5GVuFobi2t0JGmuenU34VK4pabI8N+VuO7Px3me4bPstwbmztGVPca
mT0q6VH3FTljrPj1fbYz/GeO7lyDfLPZNuX3L2/lEcRr9tcySeyqM8YtxmcvoaL+1TbIraKC
65Nov6URRGoUyU7Aspb9mMXW/sqONf2zz7vb3txeb2trBbXL2yGMe4rLAfU4bUoAr0xbR5UX
L/7abjbxZ3GwbnHuFteTJbSPNRNLOaB9QJWmWdMO0VoE/tR2l7VEPI3/AKhSp9qFfb106fdq
pilp8Z/iv9s15fz7k297l/TbTb7hrdJkoxlZT6upUKlCMzh/6V0nUxFy3+3Cexu9qh2Ddk3F
NylEMTOQNBP3yHSW1IoFag4PtWft603/AOadtBthbnkEpvdPqT26pqp46q0xmy31m2a+duVc
WvuNchvtmvCPfs5WjZ1+0/w/ux25vjP2lqbhHEr/AJVyaz2SwMRnuWILyNRUQCrO3ko8MPVy
N8za+gpP7Utqa1IteRMdyCeisQ9skCtDRunnjjtVw3xD8AcXuI7u+5LIu4XsTyWx2oNT9Mwa
mp9JBLfw1xn5Hjx/5k4VsnEOWybXtt/HfQmISBFoTASx/lyUJFaY7cyyax9/cee6DrCqut+p
C55fTHQdS17z8ef25He+Jw7zyDdY9tguzrt1oKOh6MwZlC17d8cOvnx0nkBvn9sm92vKNv2/
btwjvLLdA5jvn9IRYlq2pRWp0nKnXF9rIpfXqfyPwXbONfEtztuzbLaTxWtrS5v5ionQgjVK
CV1OxOf3DFyO+s+FD/b18NWVttS8p3yzguZ7xRJtNu51qiGpLv2JbtXoMVut/bIyfJPjHfOX
/NV/tdvaWmyQwxJcXkcGl40hoAkvp01eQn7aDDb4xxPnVjyb+1+1ttovbnbuR/qru2haZ7SV
Vj1Knq6hmIPbpjM2NbEHCv7ZZNz43a3+9bt/T5Lz+bDapH7lIWFU1amWjEZ0w/a1q2cuSP8A
tuWDncexbtvEcW2vAbm0vDQST6XClFUkAOAfHDe+sUvO+u75++FeIce2WXkOzXKbd7aRo+0s
dXvNULrjBOrVTM4uZXLu38PnNxC0oINQK17E47SeM1638CfF9lzbdbu73G5MOz7VErTiMjVK
0laLn9oCqanHLu/h156ma9F5L8ZfFnLuLbluvB0FjeceSRJotLCOQRqWIbUSfUqkqwOMWYxe
pJrDcg/t65VYz8fh2p23WDeUjMtyq+m3ZwCwYg00KDUNgnRnWiv/AIO2/aPk+x4ru2/INsmt
xdC/l0xsozBjoSQpqPTX643b1OfHOd73l+F3/cB8c8Y2C64wvHLZbX9eGt5aMxEgQIEfVn6j
qz8cXNxr7X7ST4bO0+NPh7iUOz8R5LD/AFHf96WgvWDVDsdNUKke2uttK5fXGb766dX3Hg/z
L8eQ8H5bLtMVybi2mjW4tnNA/tOSKNTL0kUxvhf9N8efrEzTUUVINc+mOtcs9fTXwvxTiPGf
jq/+Rt/hjvpAkogjkQSBFQmMKoNfXI3px592ul6zl59xX4pn5ZwjfuWxTpZ3FldFbfb1FVKk
h2XV2+8BcP3v4XHV5nq/sv7bd7a83fb9x3GKG8sLOO9hljBKSNMW9DE0K6RGQT44zO79jOmw
h/tW2KTbIoX5BMu4PFUaEVohIRWgNaslcbvVo661m+F/20TbhfbqN93L9JbbXcm1RrYamkdV
Das+i6WGL7VmZWe+X/hjbOJbPFvuy7yu6baZha3MRKB4ppAaMApNQaZjth40d9PG9FPTToaA
DoScddEr6L+I/j7gez8Db5F5XC1/CrMlvZldaKA3tKdBprZ28chjz/8AtXTrrI6+X/DnCd/l
4vyPjk0m1bNyG8S0u7d8vbLhyrRqx9LVjKaa06YPhj/Km2X+2HfZObXe07m8kGwQCSWLeY1X
+YMvaChsq5+odsP2rbUcQ4F8ecE4pNy/k0f9Uk/UT2kUQUSRNplaJNKH0szKlanFJbfWfv4x
Hz58d8X23bdj5lxiN7XbN9NHsD0jdk9xHQVOkEVDDtjXPWM35n6eKTOXTQxFFOqoHUjI546c
3Gsc5BoKGr50Hie+OuiPSPhb42sed8mfZ72Zra3jtXuJnSjOWWioKH/cwJx5v6X1rWz4/wD2
6X9xNsEm7X4/RbvcXNrd+x6Xi9gSaClcv5giP0xXtc9eNvN/bPwPa7NLnft8e329E0SSM0cI
94tRTrbsV/LjOW0/Z5D8sfF1rwfk+3W9neG82TdYhc2kwA1aS4V9WnIgBgQRh9xmZvr0f+4j
ZdlsPjXhQ24pc2dvIYba9ICu8UkBcCqj/wC0YAnzxcXxrv2tLwL4u+Jrz44uzLcpdJe20Em7
XjuvuWcqJrIRqVjCsT9cYm611XzjvOx8es+azbXZbgL/AGWK6WOPcoz90AZdUmXdQSvmRj0f
05vMYnVr2r+5XYdvi27gkNiiPaCOa1huFAVmiEcbR6qADTnq+uOUvjF5v2i7l/t4+K9pSE7z
u08BvvaW0EkqI3ulaOq+k11E+nLLGedsdb+lHsH9um3Rc43mw3i9aTY9sgWeB4vTNJFcVK6s
iAU0NXLPGtrMdfyJ8VfF+wfGu8bnDdStLcaDstzKW90XEYP8oZLUSaTWoyxr+c2sf1tzz5fM
MKF1DyZsxzJ6gnG/WeJ1+X0B/bvxfittx7eudb1bm6fZAZIoaB1XTGWZtP5m7CuON96x2zI0
XLuN8F59wmLnqbYNluLG7ii3FYSqrJbGVEk1aABqCyag1Kjzw6Oorrn+2O3/APdbW52+dLrh
FwyzSStIGkSNqao9YOf+1hilomyrT4d+NuF2PyXv0lpuSXMuyXjpttkSkmqCSPJifzGPVpqO
4wd81vnqxTf3E79PtHyJx3cluLPcDtuqW0gVVLxFJFYxXJUnIkemvnljpObYz+deo/Ffybcc
7N5cnYotv2+0A1XBb3CzdSi0UZgZ452Rr6+awuwcA+Jn23e+ab4rDYr7dZk21AHVYEaUhWAT
1AuxIp2xrrnWOZjc7rxTh258d4dtNsgvePm/12hbqI/blcLqIr92RxnmZGuva4ot34lyvlO5
/F95sMLbbtsTxrP6apIgBLRinp++qkHrhq+VHss+0/FHxjb73ZWCXu77heNZyXEnp9wrJIoL
HqF0xdB3wyftauG+N+FbxzPjfJztkUJ3q0luryyoDE04jSVXIyz9R1eOM/LUv1qGHeOP/Jj8
m4fuGzRW1tssTpYXGXvRtESmtf4RrFQB2641ZjPrhXkVh8VfHfGZ9s2qO53HfdKXEznTVgnu
NqI9VD+UdsE5X2q9s/jXhI+Q4t4h26JY73amvhan/wAST+6h1qvgQ+Y6YrNgnVlxnbjedo+X
eD8rhvtqjsZdhhkk2m7UVlj9pHZfVlkTHQgZUONXmS4rNmqX+3e5h5XwvfuD7pH7u0tAZY9Y
zjaWmoqe9Goy+BGLvn69NWePm66hMVxJGDq0Fl1+Okla/jTGp8s8uCVSakCgy09gfHLG1YcI
xDENQeWXXv54bBIRAMZpQlOuA0IXMlSDpNR4U8MOMYZRpdgoNDmfKvbEzYdU1tmaDNR4eODW
4Yr6tLsWIyQdqDuMWkmqP5bA6hUVph0WEEYMiytk3TLqDiZFKq62FV0jKv8AnjK0TxO5BBDI
v4VPjjLQagqz09ZoH/Hphw4akgAFQT27dO2FnDuGX1BQQKBx4E4lg21AhQuX3HVgHoKsCFUE
hjUknIU64FII6hGxGfcVOY8MsMrVmI6qyl2BGWZOQphA/SR0JJOdK/vwVGZVUZ1qOnXp4YJT
hF9JbQaagMiM64QWogGoPiq9zhqMSxU6lyJowP7v2YKRqraCEJFPuWvfGVhkEhalK0zoeow7
BCR2Iooz7/TFqnI0IYhWWpApT6YLSBwRIFABQZ0U1/DEvslIEgB06KZsgOVB3xLLTrIrKdKj
qB/zwNB0VYqfUXOQHl2OHRg9JFCp1E5Mp8u9PLArQsdRVaZZeo5n9uJjoEUa6nXSakGtaZjE
INFZAr9uw8KeeBqU6yDQTWoGX/XFTKONFLalFcqqeop3rgaiL3NJNT9taHyHhhwXC0l6hWBD
ZE9KVxCQbGirHWtMjXtiPgdaqTHTSSOvUfTFi2D+91KtSlDn1phgl9J5lDLoUgdADniO09AG
dqEnrQHPPrjJw5I1Dx0+ojI54kP3DpCZZfaR0piawzV1mlAe5GdMLNQuzkAZqaGhH/HfCz6Z
UjLgD1CnQ9M/D8cCTIzaTWgIFCa0/HA2IsGqCMwKNXLL6jENIrpQFgKEii9wPxxIExJK9G09
gOv1xQUVaJ2AFK+VcOG4ZnlC1VSgqNJ/ir1wYho4LaSSFIyp0oP34RpBXB0qa6jUVH7T+zAs
GFjLVDUOknQaEYmsM7xtRmTSR+UgkAfhgBhSmrIR9lPeniMKwTKlEIFXyqTmfHGcWpdQOoKa
DoSOn1zw40gLszhlPq06WAPQ9MsWDRRgaNI9CKKMK+r6iuBFJJpotTrpkO9O2GA6SuKnSMvS
q1qMRloZFWV2BAqoqpPTxwrThx7a6/SDkF69OuAmj9xFLMNQ7H/jpgFLUpB01aKTMnuKdRXw
xYDRMNTAgsnQemhp4nFSf3SgHV1rTT0p9MQ1Ezu0h0gaHGbmlTQ9PLEZUntlyCGFRlqXop7f
jiac5RgSgNG1UNemZyzxYLEkjOKIwrTIr54hgJDUErkwyxEK+nNh18888Sgi1WJBq1Ml6YtL
iuWBgMbL6RVqHuR54hVQaBv8hjSMSK0GBD0n+I9KdMSx1W0kULBiCSRQgYa146IZpJLj0t7e
Yoe30ON/zyVnrc8fTvxN8i/H+38D/oW97lNZXg1/y0hZiQTl7bCoORx0/wDsTbrHOz5eUc3l
2V97nbbIJUsjT2WnJMlak6j0oMeaO1srMH3W1l4yj6gwLn7gD9y405pI3LSBoxm3Qj/Pthi+
ut3xr5T5fsH6azt93uIrCFhW2Vv5WgGraQcej7835hv8ZJ5Wp+Rvmy25TBY2CRXNvt6FTeqS
NbqTRqedMceeJok8bHaPmn4m2zjUWyW1jfvAiUaOQAFm6mr6qZnB3x+xLf0y/BOf/HG28kut
+3KyubeYOy2MNuVeNIm//CZqzN+7F9fD6i+XPk3gfJ9A2rb5TckgvfXNIwFTppWp7+ODnmMf
aj3D5H+I9v4ethx3Ynl3d0Ae9uEAcPQanElSzVPYAYrzJTOtVXFfnPle0zW0V1ucp2mFgXto
40crGDUotQT0x2kl+Tf8NTyr5v4rybfbO33C1uG47AA7rmrtIfzFV6/9uMThau7H5Y+HOMQT
z7Ha3dzuE60EbKAD+IagA74zea19a4do+cuK7zaezy+GWBImLxR2yao5BX0gg5/jjd/l5usW
[base64-encoded MIME attachment body omitted]
az/SfFaL5M5jw61+IItyuNvkfa90gWLbLWIBCkssZMQJUgIBTPGuJ631fHgX9u1xeQfKG2zR
2/vxtrhlOglUEyke4GHQimNf1xcb+XoHJ9q2wf3MwybxbhtrujAy+6p0NKIQFNT9w9ylRjl3
6OPLXue/7ldbJs267xPBBLHt0Ek1jHEjlyVU0VqDvkPThxq14hzzapN8/t/2O9trMT7pc3wu
gI1JdTcTSmbTpzphn5cbzbJl/L0XYuP29k3x1DcWqJd2dncoVZRqVmtlL/8A6xNfPGOJ5669
W7GW3L574gvId44nyHali2G3aezrpDszxHTQxDIBiDppmOuN/WYJdtjXfFX9L2T452QhorKG
8RnikClmkDuzJrIH3aCOuM8xqRnfmDfIuIc64tyeytopb66jns7osNPuRExha0odQ9w0OM9s
9dZ1P8q3+6blW52XHbXYU29Hsd0Akl3J61glicFVXLSGYefSuO3Ei7+cZz+1Jd1td4vkFuf6
ZeQszT6Syq8bAqok+2p1ZjGerNdd2LP4M2BbP5R5gbyAm9sxJJZGdKMqSzGjLXxHfBfa583x
stu37dOd8R5Pbcr2FLGG0hcW6ujqWcI51qZO6lQajBcvjQedfJ0nCLXiVnZWUM93vUMURmlJ
AWONEFPTQ/nyrljXHPgvteMf3YWFlZfIdrcW0AWe+25Jb3SKB5BKyBz4HSuN8RmT/Z41tiTL
uEMsahj7ih0Y0qtc6+WN/wBPhrn5ffm37nNcbZb21gv9FvViQ/o7mAmNRpGSFSqkfQ488a6+
Wf4pDdWG/c/uBtkUd8pgnW0gNUnlFuzqy1FR7rD9uH8syqu/ut75d8V7rNynbUsdxgnQWiMh
XR649L0bP8xBxfJ6mR6PFexQXsW3O7RzBV0RxxEQ0A/K1KAZdK4h+WLS3k2V/kC84tZRvviz
xTxQBal5XgRytB2ZixphTI3u+/IO9DYRy7jsNpaHebUQ7hp0yQNrHp0MWOl/4sq9MVUexTbr
a224pZn9RqZkUKkLNEC/T1gUA/HBifDfy81nH8k8ljsoPZsxfyiOIjQEcU930/lq+ojHpnHm
s885qh4dv8uz8msN3hRZJbCaOdI5qlGCNUq3kcZ7njXN9fV/yr8qrZ/FW07pJtUFxDyhGtZL
eViyRe7CxqtB6iKZY5/zktXdx5F/bTtXIX5zBue2RM232BFvuUmWSTgj1Dz01yxf26X8vZf0
9QtthvNt/uZfcLu29i03RJJNvuCPTKy2qrIAemrLGetsjXP5euHcYWuJbV7O8kjIkRvcjBhZ
VUkipOdRkK9cWMSPm/8Att47uU3OZt+tYB/SLSa5t3YUBj9xWKKwr2rh/pMuNfz85bTg+xXm
z/3Cck/qMJhXeEurvbnfpLE7rXQfwNe4wde4J+Wz4ufkZbDkQ5W0ZtVjmG20CiXQFbSTp7af
HOuG1rPEXxpfbnecB2KC6imiR7VBb7jYMrL7YyHuAj0uKUORxVmx4L/dHt27W3L9tl3GaO6V
rEizvo41jlkjEhrHOBkWXse+OnF8c7PXiDN7aiuTHpl2x2woiMgK17DxxCw1KMcwoPh+zFYM
M7LXU2Wkig+uMtYmdl9vUBV+n+WDFqAMdIOnNPtHnXGhpxqLFWWjCvr8e+DxenGpozl1qDXz
HWmCnTKlFXIs3bxxI5Ya/W1SBQHriUEf/GNY9JP3fTuMRCzNQ6asepB/0xDTGMM4JWgHQg51
xWiQaKwqQehqR50zxLQIVZmZgQwNMBgve1Np05CtT4eGHFpLGshCF8hUnxxk4Yxp9gNV8epr
hFIVLZ9ew/6YKPBasgaBXqKEDt/yxeqXDBVK17ltRbsfpiWCElADpJGbHxJJpTPEdDpZnPpo
3UDyxGTRt6aFaVPUHqK+GJWoSTksmQUV/wCWJHZxpYBcqimeZy64jKliUKh0sQeoBzJGJUVS
wyzJPfqMFrISY6oAc+j0zoeoI/HtgqlEhWRG1EMVPXoaYy2RqTRloe1fDCKZDkFStBXLtTri
ZCC9dDCiEagoyGX0xDSLJrGnvm3YVwtQQmC51C5AafPEtILqcrIvpBzHgMGjDVJDEqAgNUB8
Prhw6NctLRkUH3HrQDtjOIcvtONNDpHdvu/dhxn7eolkDv16ggZUGFfJfzTmQCSdIA64zW8T
ySKAiAeRJ/ywY0Gtfvy/ipliFAzqvpNKFs/piFIPG2oZ0yz6EUwjSUspoFBWtRXKuJFI/tmg
BBy01zIJ7YjRsW0qWAGYWpzzxkSEySK5K0LH06R/D44m9NmoCdh1JyGGVHAoKLSjGhFO/fPE
No9LsxBJUqf5i19NRiJ/dK1VaFjkcu2DEQJPrA1UOYOQr5YrDhjRqo1fT1PYDAA6gjKEqA2T
E9zhZpGvRl0k11AHsMOhLbqzaRISpGYBOVOxOBqRGYakmQGlToNaEnCDuI6BNJDE1r36dcB0
S5OoYZL36Z9sSwaFgPUATXpT/TAjVdAQo1spqV7/AIYpEF/sB0esk+nwwjTIupUq1WNfSeow
kSzZ6a109iP8cWLQl2yoRWuVa5eX0wKHD6Cdf2keknqT0yxVGRUI0moZszXEdG4NQozzFfp5
YyDMCumrUUU0+R/3YiZvcKDWtFPRu3+uIhEml1jRyF8D/jXCiYMgYdSO48D3wAg0ZV2Ddch2
OXni1IkZlzAFPA5gnywjBsQpkcEhtIqF+0kH1ZeOMqULyDQja+nqCjsD3IxLSESlgy6STT7j
QjCUlTGBUkivrocqHvTAdL0giVVqFBFRkaHCUbyw+yRXuSor1Pjixa4b4P7ZBzp4j92WNQfK
rqc/8MRwlJHQnzpiCXVJ/EP29sRxJarK7gI2lehNaAYzidaIRclah3oBXyGOn87lbzx9EfBr
/Mcm0vacPFvbbRGx/UTXixlFmbshIJJ+mOv9/wCkv49c+s/DJ/KVrv8AFvtwvIN3i3LctX8/
221rGD/9mKAAZ5nHCQSVhDAUk1EFgtKBfDwwL8pQEKsKaK5muWY8MM+Wk0M5jCsgLS1URUYg
11V7Y7T+nUM6bLc4fkm5s7XeN3a7e0tTW2acsUBHda+mn0wT+k1n4+FztfyF837tbkbPf39z
bR1D3ESoyLpy06ytVxd9cs5VTs3yF8hbPudw1rfzQ3d1L/8ANjiUOZJF9PqBDVP0xfbmT4Mm
pubb/wDKt7t8P/ttxctZu3uQ2sye0WHYlAF/Yccp01kkVV98n853DZjtF9u00u2Iqolu2lVC
KKANpAJ09gcb2Vjnn1o/jTe/mOO2NrxP9ZNZE6mf21khWnX1SDSMH9f6T9N98b8Ojme3/M00
kO6cna5EcR/kOzhEUnwRNKgntQY58/0lZvP4dl0fn7fOPC3kW9O0FKKpT9PqjAqCWA1EeeG9
wzhDwjlfzPtp/oXHUlmEZrKj24uEjBFB6iK0/HHS/wBOc+FeVX8gL8qbjeRzcrSV5FokEJJV
AxrksS/b+OOc/rjX18WW27d812/FprCwtryy2W4QllMQj1qR6iWbNcbv9/t8sfRwcb+S/kTj
FivH9kpCWc0i/TrLMz9GOsgs2LrvnBxxVFvlrza/5Erb5HdXm7XjVSGUH3NJ/Kq5/sxc/wBJ
8unFxsd32v5w33aYNu3SyvH2i10iO1WLRqVRRVbpko74J3LW5OXo/G7/AOWYOIDbNj4rabJb
QwFUuXchy1KGRY2pVm88XVjl+XzVyWx3GLd7lNwl9671sLh6VUP1qPDzxSjvmYWyXgsL62up
ACsTq9FPYGp6+WN8/LPHPj2rf/7nN6eKCDYrKPbgqeqa5pcM2kU9IoAPpjN+pusVZfNnPYt5
l3ia9e5u5AVjhmQGEDwWLIUGNTrnG/r4m5R86/IXINvexkvY7W1m9GmzTQz+IZqnLGebzonN
/K84D8s/JxtIOP8AGNttZkgBRVFvTTTNndgRU+JOO3f0Zyra0+RPnCXfrqyt4Uur1KfrIUgD
RxrmaV7CnnjnLznqs/Kg5d80/It4r7VezxWEcYHux2cZjZip/MxqaHyyxjZvhk2Dtv7jufWd
gltBDaSSRD/ymJpJCg6ayCBX8MdP9azeeo5IP7gOdxbhJfTzR3TyKD+knjBiFOyIKafrXBPq
pOr8H5F/cFz7doko8FvbRSLJ7EEZpIUNdLGrErXBLJT1zXLf/LXyYb6DkVy0g9tdCAxBYdB7
+1kr/XD95T9bjF8t5tvPKNw/V7o6lhkgjXSg8lUZAYz5+GbFPZX15bTo1uxSTVVWH3L5jHTn
rBHsWw/M3yo+yHbtntVvJIUodxWEy3AT/e32DyLYz31yeeOozmwfJXNuOb5NdGSW7uLhj79n
d1kWWQ56nH+FManXNmVr638O/nPyD8o75HHPu4lsbPrbWsCNCgPZ6HN6eJxned8M5Wu0/Nny
lNsj2G1WgupY00zbkkZkmChaV1DLLsTi7vOsznqfLN8Y+UOZ8c3OW5bXfy3Xqnt7urpI5/iA
zw/bm/LUmj+Ruc/JHIzCu/W8lpYsoaC0RWitwR3oereRxj7SfBzKrp/lznU+xNsH9WePa1jW
L2o1VPQMtPuAawPKuNbMF5yujgfylyfisnsWKf1CB2JSwnDSRaiOwqDX6HGvtzZ6Lx+Y7ub/
ACT8mbluFvebo0liYGEthZrG0MStWodRlqp0q2eM8dTRZsaCT5z+Tr7jz2m12CQSlNNzukET
M/T1OH/8Yb/DF3ed8Elz159xr5F5RxTcbu+2y5MN/epS4uJ190yUbPVqrnXGee5flmSqzdeY
bnufJI9731v6rdM6Sstwx0EIQBHpWmlD4DHTnK7bJP8AL2yx/ud5DPt5h2/jdtGIE06o2doY
lAoKKoA/DLHPucyszbNrNcI+ZN92y+3O7ttji3ndr6QyNO4b3EA/KCo+0daDFbFZ+lB8i8z5
9yTcILzkcctpa2za7ex0FIErQ6tLAaqjLPBOpbgnK55X/cTyHeuJf0G2srbbYnRYZJIiwZok
AFFBySuN2cy+M+35Q/HnzjyPYIV2/wDTpvCoNNlbyFmMa06gLUmmNdfXDzznwpPkP5D5jyTc
7a430MEtpDJa2caskMek9dJ6/U54582a11z4blfzdzzlGyf0W/vUFllrjgQRmRV+33WH3fQZ
Vxvr6z4Y55t+XntXDAP3NWY/44w27tu3FrK8trkMGEUiyhRmPS1e9M8sX5Vbv5G+Y995sLC1
u44rXbbQq6W0RIDuBTUST2/ZjU6kZ+n7NzL5t5DyTitjxf8ATwwWFroSZ0LM8oj9K6tVKU/f
jr1J8/tm86rvjD5F3The8SX9uiSxMNE8MgzdDkRXqvTtjHjp9a9M3n+5+7utseDZ9jhsLmdS
hu5Dr016gCgqT+7HOySjDce/uWu7XbYLTetph3K4iWgmLaOmZBFGX8cdLzMHrz75P+W945ve
xa0Sx263BW2sY6hVr1LnqxIH0xzl/SnyxEk168AaRHZWIILVKgHw7VpjV/rfhXl7P8S/OO1c
P2CLaIdga4nZyZ7qNqSyHxeoPQfswdYuf6b4sLj+57cRvVxf2mxxxLJAbaGJmqysG1e4zADV
9OmGTk3VPwv5V5XsHGd2lk2pnTfbh5f6qwZEqfSwjUih7gHF5b6rMe38c+YvjV9gtdG7xbdF
FEqG0lB9xNIoQSAQc++DrjKJ1K+e/wC4H5D2LmO+2sWx25FlYx6TuJUK07k16ddKjpXG+PGL
x7rzXZNxm2/cLa8iIWeGRXiJFVJQ6tJp9Ma7w83H0PZf3VW4sIpLzYVn3eMUN0JAEanVlXSW
X6A4xOJ+2vXLxj5oflHyfHuW67kvHtngtfbWIAyRyAeopI1KZsa1OK8ZPFseg83+d+DbXssq
7bfLvl85GiC3NEXSQfW2VB9M8Z+lnyNleO7r8ybzyb5B2zkdttZls9mCyJZRBnYKp9bMyj+I
9cN5xr6560m3/wBzEyb/ALhcbrtCttt4qotkjfzR7a0+4ihr4HFkxn10br/dBbtbJbbZsS20
CSoaGRSNKkMV0qqgavHBOY1Y4Ln+5i+nh3d4tqCfrfbVXJqIVVaGo/OaY1eZFlHb/wBxu2XX
JrXeN22JZra0t/ZsIopFd43ZvVN6gBmop5Yz9Vh+X/3L2+6JaNtGzJb3lrKs8V3dsshXTWmg
AD9+NXiT/LPzVp/+dJsZgEz8aL7poBEjSR6dYFT6iNVK9hgvM1r1n+I/3J3FjJuMHIdqW/j3
C4e5iggYKUL0BQ1DArguDMD8kf3CPvfHJuPQ7INutLhkWcs4ctArAlUUKtOmZ/Zhkn4F+fV9
s390HF9s2S3sIONvBBDEqLHHIixGgoTTSf34v+fvp1jOC/NO1bLzDe+TX2xR3Fzujk26QMEa
3jJ+xaqR6sqnB1x63zxkd3yV8/x8qt7Ox2/b/wCl6Z45H3Vz7k0ahqHRpAYeOH6yM49k275b
+O9q2S1N1ygXktvFWUFSZpW01OoBev44JxaXyX8k8q/9r5ru29RIYIrqQNbQk1KxqKCp6am6
kY1fPKzJgvj/AJdf8S5Dbb5ZAfqYQVKMARIrCjKR4EVxi+tc9X4fRO3f3BX24Qvf7Jw6W6jg
NNwuYyT21NpcJ2/3HFkXUx5/wP5tj2nlu97zuGxDcdz3y41pJbsAYFBIEaVDE1FK0741f56P
/AvmP5gfkVxabXcce/p/6GdJrj9VpaVwpB9uukFVOKSM761v/wCdHx6DaltIuNe3GiBRb+4n
sJl00haEYzOZ+a1ao+F/3GptG0ybZvG0pf2kcrPae2Vj0LIxb2irBlahJ0nD9Z+BbXNvH9zm
8XPJLC+2+wjt9t28MEsGGppNY0uGf0gZfbpGNXiSeKVc7v8A3UWEdhKNp4+qbk8dFupHUorN
lU0XW37cHPMt9X1t8VnCv7jYdo43HtO+bSu5fovVbyKyiqlidJBDCoJyPhivMh+PGc+V/nGb
nEG3WSWKbfaWchmaj+6zOckNQFAC4MmeM3nbNW3J/wC4afdeArse4bLGdyuIRD/U3/8AGAuW
tEI1KxA8euDmT4Xa3+Pv7hJRse3bBuHHzvN5bhYrMQULNGooo9sq3qCjrh+kkb5lqtk/uP3W
DnDbs22x223RRmxG2Uo3ta6sGagpJrGRpTFZMZn7Xe5/3Vwrb3UG2cdFuZImCrLIAwkYU1sF
XSQP341P5z9p82yySvO7PnK7FjQdjmaeWBc85Mi34pyvceN75b7rYy+1PaeqNlrQ1ILK1OxG
WKwyvSvl/wCfdz5ntFts+32p22x9Ml8DJq996AquQHoBzpi55il/a0+Kvn3a+F8Tttkn2X35
4y7y3cLiMuWOos+pT9OuMWTVLRH+5M2+/bxf2WyRQx7parCkYemmSMNpdioGonXnjfPHO6zl
jwoTXnvSMxJJc1z7sa/ux0/1/DfUs+Hq/wAUfOl9wba7nbLixXcdrd/dSIuFZZGydtVDWtMx
THK8zWZ8D4P85XfGeR7zuUW3wzbbvUzT3O3p6QpBOj22pT0hs/HDePVzGg5P/c9c7rsu47XZ
bTDawX0fswuJNTx1+4kAKrGmG8SDXdZf3TTW+12wvdlgut0giVDfs+hWK5FqaSRXwB64JzKr
1Z+GYT+4Bl+TJOX3+1RSJ+mW0jsgdRVB0ZGINGrn9MX034a+Jrr+Uf7iU5fxO72G32ZLZLnT
qu5ZA7LRg1EGkUY0pjU/nI5d7f8ADwpbiR0pmCKgtlmPwxWOs+HqXxL847rwOzuduFkm47XK
3utA7aSr6QNSsfFQBTGfpBuLzn/9xW8clt7Ox26yi2yC3uI7tmRjJI0sXqjzIUAA9cMkkGb8
tMv92d7FaRrLsUE24COjTrKQCwFNZTTUV60rjPPMvyt/Tz/dvnTkO7cN3jjt1HDI+93hnubz
NGUOysVRRlQFFGNfWauvhW3vytfzfG1vwaWKP2La6Fyl2CfcAVy6xgfbTU37MZ+jfXMsiLkv
yxum+8G2fh01vGlvspWRLiMktKUDBA6t00hsb54yMd8yrH4d+Y7r4/XcY126G/ivNEjszaJF
0jSKGh9PljHXO1rn4bLd/wC4sbxybje4Xe0Qpa7PdGaQRuXlb3E0Mo1CgoDqHmMP1mCz8vV9
y/uN+LrbbpruPcJLuRYyUsVjOp2bpGaigJwc/wA7Veo8b4f/AHJ7rx6O429LC3uNuaWaezhZ
mX2Fdi+hWH5atkMN4msyyeRDef3QcnvN52rcWsbZX2p52ES6gJlnXSVck+kBOlMa/wCc/Bl3
5eP71v8Afbtv24bpPQT31zLdSaftDSuX9I/GmNXhTx638e/3J8g43sEGzT2sO4W1tVLeSaqv
EvUCq9UHaoxicS112Ws18lfMG885G0SbhDHbTbXrMXs1FZHYHW1SeyjLF1zz+HPr+e/Lt5V8
+8j5HwZuKbtbQXIDRGTcnBM7e0aiijLVl92HjmRdT9u342/uE3rhfHf6Pb2FtdWkcjTI0pZW
GvN81IrnTrgn89vp+3gLv+4fk91zi25ZHb21lcwwfpP0sZZopYNRYpJnVqk1r27Yz3z7kc5P
dWHMf7oeWb1ss22QQW9hHcqVubiFWLNGcmjUuxpqH5sdOOOfy3axfMflLeOUnZJL9oon2eAW
9s0SGtFpRzqJ9R0ivbBeMiv7cPP/AJG3rm252m47uI/1FvaraI8IIVkRi2plJPrZjVu2KcyQ
TplYp3NWSlU616543i+z23j/APdHznbtlhsp4LW/a1T245rhWEjBQApbQQG8PHHK8yNX48dn
Df7j7yyj5RuO9kybvukaNt0saj245YlZY0YHog1Y3/znXXnkc+er7s9Zvmv9wvNeV7XHtNxL
HZ2la3jWalDMAQQDrJyBFcsPU55+FvVqzs/7oOeRbOLESwyyxqEiupEBnOnoWNdJbLMkY5zm
Z6dtVdh/cDzjb+VX3IoZoTJuej9XaOlLdhEgRSFBqGFOuN5LMjXx8ufnPz3zXlcEFvdTpa2s
LCVLe1BjDyKQVZ2rU6SPT4dcM4z4Y3bq6tv7o/kWHaVt1ngnnjj0e/LCPdJ6VY1AJHjTHPJL
63vjyLed7vN23O63C7lNzeXrma4nemp5DkWP7MdurPwpXGGTTpIrlnWv+WMWaWk3vn2/bpxz
bOO3k/ubVtLM9pGVqyySAhvVT7RX0jtg/nzjFurD47+UeScGlnk2adIxeALcRyIHDUNQQG/M
K4eudp4ueLLlPzfznkM1jcbjfiG52ucz2T2yCExyA/cCuZNBjUk+MNlny7tw/uR+Tb3bpLCT
dViSaMo0kcUaSUpnSRRUE+ODnmS/DHXV/Cj4J8t8q4Zb3MWxXaxxXbB7mJ4w9WH5gWyBxizb
66b46t++YuZ73vFlul1ubre7fU2M0IWJk1fdTSBWvevXG+frnwo7r35++Rr4KJ95cMEeCqRx
oGWUaZAQozJHQ9u2M5JfgxW8V+Y+dcZ219t2bc2gslJdYmCyKpy+wODp88N530S6yvKeUbzy
PdJdz3e6e5vJeszGp0+FBkB9Bhl/DPUqmV29zUx1LQAjt+GNaJCJjByYEAV7mlcU0/JEFlqa
EVyP1xKzASKuo1FSwAU9qjvgxJEBULn0HqX/AFxasAklB2JDVBOeFi0UjAnUx69E6DGatMmj
UAdQFKsa9fDBreQNJA3uVqx+7w8cM6GEMm1lRpboPr/phMkPRyNAOXUeWfbF9Udh2Bofy55/
twyM2hFM0JOVQx8vDGaEgUovpoVrQjqMBwq06BgCMz/x2xKGUgqQR6gMyehzw4dhlEYeoWle
/h+GDBKZDqyBJfMDwwmCVDrIZslGVPHwri0fUhlUsK5VPhg1nBgnp28e/nTAQF8zpzBA11PU
A/44sUES3uKqVGVWZuuLG9M5oQqgEDxzJJxHDoMvWMqZDz7VxLAhgKUPq6CvbEdxKFGigIDe
I6DANQqSCcyHH3Dtn0xYzqSIxsaqDqjzp0p+OCwwo5qKNKmpyp16f54vqZ0ABmRqMWJr17+W
IJIzooCCAM3C9sqYsHwTlh6lIJ6V7eVcGLQ6xUh8vHplTvhEv7Eq6Czg6qZ+B6d8Tfhg6Fqn
78yD2z7YAbS2kgghe46gk9/phmCylkDVa55gnoe1aYINOqkMFNBmK98sKwg+kkFvVmQF6Ykc
SkpUKD5YLG5aINGxZXUhDTPwI65YMWlIjjPrGeiHvXwxaLUTSSAE+P2gjsMSEFBo7VZ/tUjo
SfHFokSL9668v4T1+tMDUFQxylWPprUNTt9fHESlLIFC/jU5Z4YLURYUNCfcGSkVqT3xI+gu
rAUoR6VPj5HARD0KSRqoOg6jx/Zg0eijKsCxDUXoe1PPCTtpKVFTTuueJaT6iqtGfWB6nHhX
OgxYdGrRFlamnLIEECv44MRS0pVsjWoHVenamDAEKmkuTQmulj4UxrGYHVIW06iBpNT5Ylpw
W9sgHtpJbpiMEWWisGzHQjP9mLF8JRmCpycdzjNa07JVa1r46T0xYtQSNNGxlQhiDmoHUd/2
Y0NP/OZxVsz18hTAMMhU9RR6nS/niXg2Os5UBHVv8a4iJo0WmdR1FT28cJBHok9JcaTXSSaZ
eZwDTq0mvRl3BUjP6/sxkm0MzaUqSATq6Gg/yxCQR0k0c+o5MB0y754qTFGH2+qMihrnXx+m
JBkU0Pq1BB6K5E5dsSDrUqGWtSKZdwO+JaJZ4g3uRgjVTKmQpl0PjixWo2JdslpEGyYeIzpi
xmmZ5CSyqQa5nywiJXGpwxo2oACuX4ZYGpAmUVZqUZaqTSv4f88ShmRKhKVUZAA/j1wY1Aoa
po1aEPUHMgVwrTaFMLLGKg+ku3XLFpxw35buQdICinSgy7eGKBWnrh0njFWyH7e2JYk/Tyef
Xw74QktPTN6TUE08sRTtrFwGORBoaDG+PlWPrb+3ce18bXdFLBHkfI0LOyEmhPfLHX/7Nmxy
4/Tw7lxH9buTJT3S5Fa9h08a0x5nf4iiSRVbST6znQd/PCxhw2pigBZSa061OGKvfvgn4y2e
faf/AGiWBN1vo3P6K1kKCOI92kB+4/uxdSj4eg/M+33c/DxcXkkaRQAe6FOlVzrQU8aUGMz5
SL4/35t0+MZP09pBt9tHFJHDDbA6WABAc16lj1xr+kkrM38vLPineoNp+QjCdujmvbuQxLez
hmMOn/8AAjpU92x0nMvLXU/S+/uRW6uZLZlq0ippIUZMa51/7R0OPPafHg+3bZuN2G9m2aXQ
KyEKWAByz8zjrz1DI+ofjfm3FbPgdrtUu9x7RfQKwni0HWjMSa5A50xf0n6ZtTb5FuETW3Iv
1M3IrG2pJZ293RUZmyDqqgeRxzV+FvxXfrzd5nn3ndveupVJi47boIxGB4sRVsvPG7PA44d1
5jcXtztOxWcGwbZaSFr7eZqMAvWmk/c9OmeCSRrf2reUfL/FNpv7S32q1bku6IwSW5Aool6a
unqbvQdMU4lOWtTxrdZt0WRt43UbjdTKJTslvGFjgUk0VyRVvCpOLrlis18fblYwc/3ayO1I
m6Sana8NP5USj0oi50B8sX0mCX8Mh8s8z33jHyF+v22SOK6MYDSNGH9PQKoYZVGZxc4bL+Ho
vAuW8jveBTcn5NcrIsqySW8IjEQEKDI6VA16u2Ndyb4srl+JOQ7lv+xbtdXly1wP1Ugt4mPp
RAKgCvQZ4u5IsfMnyG7DmF7HQ/8AkOlsgCDnVj45Uxmcs9KjbIDd3dtCWBWWVVk8dJahNPLH
TnnW+fh9bLw3404jsVpI+0LuU04jUSyDWxMlKHPJR9Mcrx61tqS8+H+IbjvMd5fWaexEur9J
Avthu4UkUyXGfob3+GK5ny74qhtL3abXYY4L+GqRIIFZ3YZZUzFKda4eeJGOuq6f7cV2xrPd
lEc0V7I2qYNTRoHQVpXLvnnjt/ST8GfDYfGlrbw3vJ0I9mI3WkqM2K51zz/ZjnWcckvGuB8k
sNyNjs8cAs5GimupEPvsy5llzJp9cZvJlxJxD4v4Ra7bG0ewfqWmOoXm4HNj4iOvpHlhzxr7
Urr4X4XJvjbhe2yvawoXFnEPbQtTPMUy8u+MfUTrGXv91+Jbi4g23atoS2vUuRD7XtEN935m
BPpPgcdZwfa3PyBw3jF/Y28+/IU2rbozpsbcBTJlUJQDIfTGcZfJPNZoZt8uRZWS2NoDSGCn
2gdK/wD041z4LKh4dsybvyTb9tZ2jW8nWKaeMVIUmhI8KY6fX8nmY+uLgWnFbnZeHbDZRWe2
3lYJZVA9ygqC7kZszdSTjlmq31LDwfi217rdcgXb45L60jPsvMNQQ0qWAOVfPBivVxX2Vpa8
947NuO8wUd5WhSND9gU0qCe9MODm35dk72ewbhsvD9ps4rPbNwRtc0eUlV6sx/NqPUnFi66t
qSHhHFtl3C/5DFYRzbhBEfaMwDIpUFqhf4m8cU5O5FFuVvacu4Bfcj3SENdKkhjiWpjRozT0
6uo88VjO+axXE9i+I9v4PNdzwNvPI54mNxbosje3KagAaRoUDrqY4ry39ti9+IOK7NtHD7zm
r2S3O7ssphWYVSCOOo0oD3NMz1xfVm9ZGkj2LbOe8Xtd13qItLfMCVWn8tVYhgpPjiv+Dz56
sP1Ntb8jtOB2NlFZbN+l1lIgQ2leq+B8++HFbryzfeA/G6fLJtN9kWx2i3QTiEswE0jZqhI6
DyGM5Rx0xPywnB4+ZQW2xWT7fs6gLd3ntt2yLIHzOXSuOnPOq9Pfdjg4ivxFdNxqwEG3CzcQ
s6r7kjKv3u1Kkk5nGbxlxddbHB8Nw8esuGp/QzatvctX3O7nKhhIfGpDFR0yyw9cL7Jvm3Zo
Zvje7vbwx3W4jQVmUegE5UTrl+OMyel8gRW1wyfzlJYdRTr441arH1NwXjex/HnxlFy5LSPc
t4uIknmuHAGgSZBI8sgP34zOdPXXjm+YeNbVefHcXKDCItwujC4ZRp0rPQ+qn8Iwzn1i9Yzv
KeK/EGw/GpmsG/q2+vGhW5hJdjI1CwYD0onli+nrfXb57mBlkL6dJcfaD08sbkZteifBvxrt
nNeWm23NyNus196SNTR5CBkg/wAzg6HM317pc/F3wsby/wBmjt4xuNjH7lxJcOQsKMtRT7Qc
uwxz+plc3Cf7eOC2+3Nue6wPubXTGS0jViI44uidMyWGeeHq2k3KP7fOH3e8bYtu39Lt7mQm
5tw+p2SMV9BOde1MU38Gd2OnkXxX8Ubfs9xBBt80MsUZImq7kedD6ScWDrqufg/9vvD4Nkg3
HfbeS/u7we4sKHSiRvminTmTppXzxerUHJfhz442zku0XG4sbPbr6X2/0A9TSyADQhYUIFeu
Mzgc31Yf3B2fB9r4dEk1iqbi9INqSFdAFaD10yKgY1zxqt/bo+D+IfHEPEXu9uCX91LFo3S6
lUkoSvrRAw9I/fi64sWSR8/c5Tid9zuW12BWg2hZlSV5Cx1CtWKrlXp0w2Yp092+bdntD8Vb
RbbfII7SOW3itVA0Ah10q1B3HU4IO5tRp8G/FPGtssG3+aWaebQgrIQJZ3p9ir2qcsWWrHHd
/wBuHHJuYQhpHh2ZIf1NxEGq/wB9FjDf54MrfNWO9/GXxHxvh278hG2OII4XEDTs2rX9qlFN
CCX8cP11z7vj5KuZ5HnkdVzZywoaAA9MseiSQfavYfgv4m2nmO2bnuW6TsY7JTFbWsVFcyuC
QzeIyxz7tl8a5+Fxxn4U2qTg+/ci5HPJbm1aZbWAGioIMqknrqbIUxym2rqyRpv7Zdk25eEb
9dQOg3O4LxOxWrQxFCY1r3rmTh22+mXxnPib4d47ym33ndt7uH/S2c7wxKhCkaas0jntl0xd
T074tuV/DfxTccUfetivf09nbSCG53B5C6kagHAFBmtfxxfWwXpuLbg/xBYfFkuiKObjzxe7
PuQGq4kcHTqDH1atXp04s9a66eS8H+H+M7/w7kXJmeSKC0a4XaolNNCxLq1SMB6jjXVrE682
g4j8H7PuHxZLya6u3Xc7mWlvWntwxCQR0Ydz3wc9WtWvTNr/ALevjIWMNvLDd3tyy0a61mMM
1MyKDIeGDKb08zsOH7Vwj51sdlmgj3KzZ0EIlzISYVR280NKjFg5+U/92tjZ2vJdnkt4I4TN
aMZWQBSx9zSpy8AMdOM+WL8vAyZNNQTpXJh1JGOtxS1718M/D3Gtz4tdcy5lM0e2LqSGBW0B
VUgGVyAT3FMcOutvjr1fHdyf4w+Mt73PY7Dhm5Ksm4XH6aaMOZf5SDU7nVRlIGKzI4zrr7f4
bNfhv4Zj3CPi+uSbkCxCiGRlkoFr7h0gDpn1wSWTW9Vuz/29/H+37fu+4ciupGt7K7Ye8xVF
jgip92RzfVmRg202+K7mXwZwS72jbd941dmy2u6niiaWVy0ZR20iUas+opQ4MxmV6fvGz7Tx
P4uk2/a9zi2uxgt3Bv2Cn3i6ks1R+aQ9xjXPyv6dPnH+32x2W++T7A3zkGAvNYClRLMKsCfL
vjXUyL+d8bP5P4GeXf3AR7MJzEt3BDJNJTJY4oizgAUqaD9+OdtZ4/8AavQtu+Evim0maT9F
JKm1j3Lxpy3tNpUnMUAYd6DFY6WvJebfF/G4Piufm22I6TX18JLSAGiJavOyRqR/FQVxqXGO
+rzGu2L+3fi9ztvFHvnkee/V7nctJ0khofcCIfy6TpGMba6as/8A7k/hi4vbvjVncP8A1+CJ
pJXEpLRLWoLL9vp6HG5zfkfbVN8V/CXBbzYpdw3f3b+aa4kWOKBiI444WKgE/mLfdgu2+qX8
i5b8LfHewcq2e93CSS341fGVJoHOaSgAx+oZhTX8MZyj7YtP7g+PfGljw+2aW3SHdkhEGxxW
4oWRKZsOhVR3bxxvjiWsd7+GJ/tS2/a5eX393dsF3OC31WMFKjS+UjV8cPcxrjr8G2X4ug5p
84clt7+Ypt+33E9xeCOgL6pNKRoKUHXM4zuD+Pc9bO++IPh3km3brb8bkZd122NmaTWzCOSM
HSrhutSueLMbvWo7L4b+Idi45s258omkWW9hQMHkbTLcSgN6Ao1AitMjTFNrV6eV/P3xTtPB
t1tZNo1Gy3RXkhhfrG6MNalh1FGrXGpXO9e5XlUCl5VRzQtRQcyNX+WOt8intx9Z8Y+Cfjjb
uJ7dc73DPud5dRJcS3UJdlDSKDRFTstccLNbvWKni/wfwHcuX8ito7h59qtYontSPS8MsxJY
En+ALTBZYyl5B8S/Fm88R3q64lIxvtlRmkn1FkMiKWZSCBUsFPqGGwXq4uuG/B/xw3GNsa/2
+TcLu5gjmmuBqRS0i1yApkK9fxxmc76Z3sU23f288Tsub7tJfyyy8f26BLuK3HU+6G1K5GZC
Bcalp56xQ854n8J7lxqeTiF4F3e3kjSOHW5EnuuEIIYeB6419f8ALHe34ek7X8AfHdltdrt1
/t73117YFxcB3UM7CjEUOQHbGZGrXzV81cC2/hfN59psZTJA0Mc8By1qj1oHHTKlK98deOmL
7ceeSM7MdZqF6dq/XHZr6/sKUSmfqJyphwHNNPqNW6AUqCPx6YxTk/I2bS2kCn8AGZBByNRj
OC39F+pkZteZkUkVHevjiyMzUbMwJqpyzLA9Dh+y+ptbE0BppoTUdsVjX2F7lSzq1Sa6lpnT
B8G0g8hLAUI0UyPUA54VKNZGUhtWbVJp1p4YGaF7qZmVZD6a1BoOnn9MMOo2kAkJrWlKE9/9
MaF+dJnAiGknWxJPljcVoFPVAKuTWuYp4jGL8qQbMynJjWh1eA8wcBOXNaGtR9rHpUjBkO0J
lZmoDmBUsMzlhwX08Zr6qaqDypho1INJ6DMD1ri1X0A7hSAR4mop174hIg1O0jAiunq3+GGt
SJWZlehNA3qamQp3+mAYYAsNVarUCg7qMVEg3kBQCLLx61wfX9nTxv1qculW6eeKtTrCiYtI
3fv9BjODURIyK9BWvauffG4zqXWAlK5AZL9eoxGehoDUoaImQPcYdpshw2igP11dGocYvrO4
EE9EAGf5upGL6qUzM5OkVqRkR/xlhw+34EGIj9Zrp869MSwjJXST1Oax/wCWNH8BD+skjJTm
e9T0w6vsKrBjVagj0ntXFotJJCdGs10da+WMXBbqVJAgCimf5h2xjGpcCHaqqRWvQDscb5SJ
ZTragpprl3r/AJ46CXBxMtDVqVHhlTvjGNXoxVCNWZQ/aO9cQCagKq0Kv18aDphivwAOo0lh
6cgAvkfDEJMS1CMABUdVpnl3xlXqIzqrQjNswfDPDqktPpSVfR286UNcAk/Y2VSSq+sD7T0J
A8cFFDHpKgr1qAAPA4Mb+wkdUDA/d+UU/ZikaMZfSp6ls8+wHfCKjIIH3EEGorjUoOF1VYGo
GR6d8NUgowgNMyKGp/3f88YsZwS1QEEFVIoAf9MGr6mcmlAMj0J8B2xNUIB16iaqOnc0ws4Q
UFqtUAGunvTt9TiqOhYMc8qZEChwmQSrQs1NY66ugr2/ZjNbCZJAOhbpVfriYppCdAopqADX
8cTJkBVmDD6k9/2YpTolMgGknMZgdRQd64qsMpCs9BnlgxaPXIKhQBQeo9a/hiaEoLAFe3+f
QYCb1qKk+RUGoB8sKsMzR+kganYZDuP8sOazmEj1Si9xVq/XxxlacRkykjIAUoOtfLFVITOw
oNJShILdaV8Thaom06SclJ6+fngYsA1PaCAVFBqp/j9MWDTFAQTXP+KlKHw+uDDo4tLAq1QW
ADkjx6YrGof211sBqY0zXFYzb+hD0aKfsOCNaj1MGINARnQeHfLEjjSo1aSAxoudQMv8cSmQ
+nT6WoxHQEVphZ0wAoaZV79j5YKZQgVPqWlDUnrQnviKShST+adRH2qTX8SMAJQpUGtM+vUY
sJFgNSIKCtajxxIySSaCNXqqABTpgpGdbFlY+uvq71H+eLQFy+ZYZacvriWnPsVr0VfA0/b+
OFqU0TkMSRQHJR5nvisAmMqrQkaq0qR49cCtO4araaaxkKH09MQunhAMtWcplmK9aYdakw1C
oBFWofV9PLECkBPUjxPh9MBF7jaCKlT0qM8Cp5FrHUVzyAGf7sUFR6XqRSlT6gf9caZsF6fa
opFQaBTmc+2JoTaq0A0kUpTpXviJ3KhQZBSQGgJNRgxf+TICCGUetsqAZf8AXEuYJWLBiSAM
xQDEiAAPpU+rIZ5eWIUlUGKigelqage/UZHAYGMOzEsKsfHI/uyxapBSKtClB6qDUPHvgtFM
oVV9qitoPQDrTwwE5PqyJLEVUmlB+OLUT6ww15Vyqvge2WLSdqK5ZfUmQYHqW+nkMKC8zBWV
Wo3bLIUzxYg+8ugVqxPfrgHwmWNQPcAycVK1617U8sWmIVjapU/aBUntiQGBQ6WWtcz40xAx
l11DsVAoRq6UHTGgnTT6dXrB6KMq+BwNYiIIkYsCjr9qgdR1zwHDSTN0YUqDU06/TAfgxNVU
ay9RQMDnkelcK03q9sivT7a5/t88WpxXmgIQMifuAwfIkVwpQ40TqdLCmbVxLXX+puvH/phx
BtZEjOrJj2B6YDDxyyS3GWZAIz6Y6/zs31m19C/C/wAt8Z4vxqfad7t7idpjqBhAbSdOkgno
K49H/wBnnmzY5c7LrD833/ad43ie62yz/SWtR7YkzdgPzN2z8Bjxmy26zcU3uoXIopqvpUDp
5HFXU6MQFjV/UudTTCsWu28k3WykRre8miRWBESSMsQI7FVpjc/pYzeW35j8ybxyTa7SwngF
tYwaTKsbVaUqKMWJ/di81cyxseN/3CcY2Hjke02HF2MMSesiZdJJ/MdQqa0xdcy/ke/lm+Nf
LWw7fymTkO57MJ5Tq/R2sDaViDmtBXInHTn+Us+cVvjq+Tfm6z5Tt621ltAsWbUJZndZHzoK
DSBQimOF5kpwQ+eds2/hS8e2vYYrW6MIhe8FM69WIpqr5k4bzz+F9raruCfKfGNkgkh33Yxu
BlOr3o2Ab1da1BGfTHafy2fJ7mNDu/8Ack93PZwbRta7ft1rm1vL6nIpQLl6Rp8sY/5zfWKs
h/cdxOyjkvrLYZDu7Lo/WTNr6+QzGeH/AJT8VTr9pto/uJ43Lswt9+2eeed2ZpliZSjajU0B
IrTzxnr+c/al2/CnuvnXi8G82txsPH0sY0qs0kqpqoSa6dIPqp0xc8T9m2rf/wDOK4Tt1vPc
bBs0p3W4H86SdgaPSnqpmafhgvH+RKo/jr5k4nst9eb7yO1nn3rcHYvcRsGCRk0CqmQpjd4m
Dbv+FN8u/K21cqmVtm2025y96aUBpG6dPOnhg44m+tc+/Dt375xivOE2HGrGwa2EMUcMt1KQ
WPtrSmlemeH+nM3xq82fLXfH/wAj/GnCeDta/r3v9xdTLNaqhA9wjoSegr3xm/zrE6/DwLku
8tvG73W4hVj99tQVPtCeAr5Y5z5N9RbLJDHuVvI7hEikQmQHp6q/hXHf+fMpzI+sOR/MHxtY
7fYLcyfrnjRJBFbgNoZQKBmrpGMdfzsrP2Y2L+5eyud9kM1k0OysNC6KNMv+5fGuGcTFaq+R
fNHx3Dtko45x8y7tOuhru7VV0g9W1VcsB4YzP5rHZ8W/LPxxxLZnjuRdTbrdMZLxwoZfH0VI
oMdOv4XNa+y02r+4DhFluF/JFYShbx/cVqBCMur9RnjH/Nn4c+7/ANwnELPbLiLjVi817fMJ
LiedQi6yc/Tnqw/8rPku60/uB4I8Nvfbulx/UogKWsWcSFRmQ3pXT9cbv8r+FLvgB/cpsF5u
ksUlnJbbW6Ee6AZJSQOuVFAIxi/yHXUii3X5d+K7BoDxnaGku3lV7jcJE0uoBqxAJNf+7Fz/
AD35ptyD3/8AuM23ct3ggitH/pAWksj/AH6qULDDf5yflfMeTfIPJtk3zeS+0xNHboM5HAVm
b/ljPMwcbv8AhSbBv8+z7ra7nbxh7m0kDxGtBUZZ+OOnNmrua9+svnrh3sQ73vFrNc7/AGw/
kQ/kDEfdqxnv+Z9U+z/3Dz324XQ32Nl2i9NCbdfVEoP5kP3hhg55lZ3/AAm5J8/7baWkOz8Q
tPYs43Eklw9Nch+4+Qr1OGcTW7VpZ/PfCfZXe9zglm5JCh/TWpACLQU1BhWgbwwdfz/Xwxuq
jZv7hJbu9uouQJ7e23R1B4szECcxSnq+mOnP8tn+Rbji+RfnOyl2E8T4batZ7S4CT3jVDlSa
lEX/AHHrjnzzlXV8Nsvz3tmx/H68d2vZxa3jRtFNeV1BiwzlYUqW+uHvmS+VqbUfxz802kG0
tx7kjGLaCzMLmKmujkkqVoS1fHGuuJm/liS/n4WfIP7i9ut5LLaeHWSw7LYOGEswoZQMwoUm
oBPU4xzx+3S1fD524JZwy8iMUt5yZ4iqQMPTHXqmrIAV8O2LrgSvOeLfMNmnPrjlnJ7H+pGV
WEUEaikJPRl15HDzx9vzhkqn+RvkV+d8tgvHQbftw0xIi01LHXN2r9xpinGfAuveNo+U/h7Z
eHxbDDfvNCICPZ0NqkJHqpqp1xi8U7vw8s4Xu/xVNv8Af3m+3UthaaybPbw8ih1Pc6Bnjf1u
eOW231L8q/NNpyTb4eKcbg/T7BbuiSzyEiSQJ0UV6L44Jw7Tn8rffOafDOz/ABiu1bDZJdbx
PGsdTH/MWbq7ySmuQNaYzf5WXaz9tHwn5j4vufHIeO8rkNrYWSaEIUnWEzXp18savP6Y+3vq
h+WPmqz5Na23H+PRtbcctCuppAVecpkF/wBqj/HBz5fXTHXzf5y4tLwH/wBW41sy28hhjSaS
QLSPIVK09RcnucanMl2uf9Jepjwr9QzMWDFmY9syD4YtanOTHrHwDznYeI7/AHN9vUxhglie
MaQWYnI0I/wxXnfhczGe+ROctyLmt/udpI9rbXEoEUVTXT0qSOtQMXNyj6voPjXy9wzcuLWe
37hun9E/SQRxyaGOp/bUKSrLWmY6YLxb63b+KyfIvkngV5y/bIIbu7lsbXNr+V2L6m6kaj6V
xc8VNxuXy78a7Ts0xXdzu7yKUjtkGohnFBqOVBivF30I9l+YuGb3sFsl5uj7N7Q0FAaSOyjq
hFSRi+n6E6/byv5S+S+MX/KtnGztNf2m2yI824zuT7hVgxVNVKdOuD62HcrS/LvyP8f8r4hH
de+8m5iMm0sxVGiY0qzg+YywyYz3zvwpvgP5O49sG1blsm+3H6Rr8CUXrj0dCrgj+Khyxv6/
abDueVjr2Dg26/JsFtaXjW/H/caS53Bhr1hAD6a064x9P0eb+3vfyDzj4rueHRWQ3eKZLVov
0sURLPVB6ainSmDn+d1W/ph/lb5d4jvM3F7PbJmnjs5op7ucgqgUFQyZ0zAFcU49w2V6hsXO
+O8p5wJ9rvg9pY2RDRVo08kjVFE7hF7+OD6VJflmz2TcOGbjccgn/R2FrG0kas+nVIB6BTKt
TlikHVz18NyvGbgiL0RVIj15nR21Y3P0xLb8vVPgLn9txjlaJuDCPbrxTDLcNqoms/fll6af
sw/8/t/5dJ0339wXyvx6fj0fFOL3QuVkYTX9xB/41UHUEBP3MzZkYxP52fI8saT4Q3f4841w
gxS77DHdbgBLdxyNpKHSVp3zzxdfzpnc6njg2f5G+OeL7Dy+ysLz3EneRrKAeoyNLGU9J8K5
4fp+Wd2Y+am3u/Ft+g95xbMS7Qa20K5Nala0rXywzVlj6I4Pz/ge4fEQ4rv94tibWJluQxoZ
vUZV9ts/UWwTi2jq75C+IeecFTjG68PvLxbKK5lkaOWVgFaGUBDmejZYPp6fiZWrvfkH4t4t
w+22Xbr1b2KKZB+nHqaQe5rd2JGmhp1wfSw7vwvY/k/gLXkG6yciSOKRdKWIPpGXVwBlT64Z
xbPF9nmFryXgvKPnS/3283EW237YkAspZPT7skaDUVOYKgjF1/P/APlcdZbT/wByG/8Ax7vG
0wXtpuUe4bvH/LtoYfWqJXNiR0zxf88+WOt3Y+a4pkAJJC50ofHHXNdObH0r8Pc64Xd/Gb8M
3y9SxLM4lkkqFkhkYMQD2Yd8cpx6u6Gbk/w5w3keyNxotfSQXDPe3aNrARlI0hmpU51yxfT9
s7W4blnxHDyH/wB5ud4jXchCVMKksQoTTpKAffTIYvraYx3Lfmjim7/Fe/W6z6dy3i6lFtZE
etYnlUo7dui54pxV/hU8p+UuLyfBmz8dtLkXO8ukKPboPs9piSWOCSz1dzfI4flP5M49f/Ef
G+Pbfce/uq+011HnRBEhV9VetS2WCcXNHSv/ALcZ+GWm/wA+88g3JbGfb6foVlYIjFlOo+YA
w9cdVr7R6nvu7cI3j5c4/f7LvSf1KQ6b2aN6RiCJckqaeqT7cF4uMzzrXqm828O7WU9jcaoN
uljZbicNoGimfq8MGNWa8i4/yn403zgkvEN33JYbLabh4hIW0e+kUrMjRtn/ABUxr6Vm3xoZ
PmD43tL/AGC1t9yX9FaRypJK2r+UEi9uMPUd6HPB/wA6ft6+Y+ecxO5893nddqkltba+md1k
VyjmNsqenpUDHXnyDiWPffjj5H4Vc/H+27RNu52a625dM41BXlbNm0kV8c8Y54tbt1m/7gvl
Die8bVsm3bLeG+lhm9+d1DVQKoADVAqT3w/878MXi2wXy3zL415jwC23Q3pi3ezg9u0s1Uhx
Iaa1YeFBli54HVu+IP7a7zguy2E++b5ukdlvTM0CRTvoX2j0YDxamM3m2t/C/sfkT41418s7
hd2F5+ott9tyby8jOqNLgya6np6exxr/AJfmufMkt/ysTzD4k4PtO8Xuz7mt9uW7Ruf0yEuX
katO3pWrVzxT+VtdMyMH8ufI3Hd22Tg1lt10LiSw0SbgQCvtsqImkinWqnpjN4wfmK7+5X5C
47yu72KLYZ/1cNlFJJPMAQuqYABM+4C1xrjiz5Z7/n9rrxO2m/mqxcKikE1GeXamN9Rvm5X1
z8dc/wDjpOM7Yw319umghWKewlZjR1yLUNRn2OOU4q7s3V9wbnfE+Q8n5buO2lEsbS3gS5kY
KnuhBJrlPl2qcXXNlxTWQ3jn3xPxPhe9WXHL5Lq73lWT9JGWYK8ikZk0oqq2K/ys9Ytnw0W3
fL3xzd7Vte5TbzJt0lnDGj7cSUqyAAqVHpbplni54rU8cr/OHx7NzTdLGW9P9PvbKKBdwVTo
BAfWG6EZPkcN/nZ8i2MLzjcfhTYOOMvGAL7dp5YpYGVi/t+0+pzqNAvTp3xf8qp1r0Y/LXxr
vttabtNyCfb5VjXXYozRsGHqbUB1p44uea1Xy38p8r23k3ONy3WwEn6W4IMZmbU2lRpFT+FR
jrOMZlYt2OSCpP5R2yx02AQRnCtXIZk+YwWqGYrpIBLOxqGHWuMWnyl6lkAK6a9RXKo7gnBP
VZQltIap8emdfriwYWoJUE6qAGpH45YbFPDhqkM3X8uWVO3TGcavUDGWLBkJDZ6qd/wwiSiW
P1OpUa1NBU41DmHK5a61I6UFe9MHq8RtXVqyoBQ59cTJ2OamgK0OX+uKH5CwKqpelG6+dMa1
QmZqrpb1J1IzywKyCZiqkqMyK08/p54KDSPVKg0/i0n9xwyQWhRtA1AEnxP7OmLDDkBULI1M
6EHtX/XDDsOrEZkfUDv5YzZ6pAs4TMDNhUL2+mKfKlw+dA4Y+ojMjGmrCJFWB9VOpHXBayYa
SarUKKE9TXFsP1EdKSN6qAgVDeH18MI+DHSepoGz0nxODANlYAhTpov7SOmeKDUfpoanUeuW
dfritEoyPRXox6g+WLWjA0B9NVr6ulRXB9kcj3DR2IoBQnM+VcOM6WohxQinQ0H78GLTCT09
K+Vf34cMuCUA/cCK9COurEYjKkKQ3UfhX6Yj8CVEfJgUcZrTuMQ0xy1UJJHRux8sOIcQIKkg
Gi1z6Z4zY0ZmAbXTvmR0p44MZ30QX/b92WRz/fhiRUBDEgkDMDuT4Y0YTBmqUJoczTuf+WIf
IkD6evQ1BPUU64KC0vQHIr0PmB3wxqQLMCaaTXuBll0wSK01QxIUlIx0PQV8DjVEw5bSWULk
Miep8MZsUt/B+orXSTke/TFgOF9RNKahUdq4LFhnQihTKuequdcUX1wmrqVBkxqRU16Z54Gp
6cas9YyoSSf30+uKVqzBGkoLICWFCQe/b/DGhKYAKvXMij/ji+pp2YU0gAkmhPfyOCsBchmK
moPeh8MZMgtRAYE5djX9gOFYSaAP4dWRFcqjGsoqNgQ4rUqB1HWhwRDEwBKe3XUKLn3xXnWv
vhwEVdSmprQr4HxwYtMvQVYGpJNBl49cZqwGr23YVJUmle/7MIsORXSi9OjefhXDjNg1FKgU
0D7gf2YgZESg05HwJB6dsFMh8tKFa16N+GBUAbSVLVOk+pQMWEctXOkdOvhUYjZptJPpUE0z
JPUH8MMRtIAIALHL9njg0YkTRo9RoTlXDp+BFW0LobUW6hsq08cWhEzgyUy0kUNOlO+KC+lR
29QUUBGlVPTzxDD+7poGplmqkZeFfrh+BhNVULA6lYCo7/QYK1BAlSCKggeo+GM07iRXE3ar
LUfXv1wKeo3AdqqPUBnXpQ9MSItRqKSGAANPIYrERcBly0kr3B74YbTAAFlDB/8AIDFUNCFY
GopmOn78ZpJ1BJqQT27kD6jGVhpdSqEVfTStSemJI9VAcirGlW8sKoomABINQBmMs6eeIEly
7yPoUAgUC0w4IkcMY/tDKASfrXt9MBDoDqABpp2pX64lp1I9xxX0qADStRUdsTUIqpzclutB
U1/DEiYr7ekjS2WqmDF1R0JiKMKKCKV6/TDIvwUdACwBbrQdf3YmcRx6jINVQNRyPcUxKJfS
CaUopqRWn44GgAFzkxBA9NAep/zwATRMihmqfUAB4k4dGYYsfdqMtLDP6dsaGjeplABNSPU1
P8aYodAVbT1LEkgClR074h6kjcLQFqClPMV6GvhgrUEyKtSopXOven0xlYZWaho2VMgooa/T
CzlKMyeshdWjMigUr+HfE1tMrPRsqOR6qdhgahiXioAPcJoSR1IpngtGCiJBLIGCkdGzYYFg
2YkBlC0PTOlB9MWGUvUVJLAdhToPwxLRK+pDqoZa9e2XfCUHSXTQGh6d6+eHAdpEAPtilTpL
eHjUYBQPqorAkBT93lTtiwC1TEVVTVunQZDoTiIFQUJrWmfgAfxw6pyhjDlj7tGOdCemBVKu
lQB0/MetK4jgw0ZUBjpkOZGf7sZaqOSSegJoVI0r0rl9caYsRxVVQoGfYAf4jCx6TqsjAk07
sTkAR40xnHSOW7UhWPpp0I7/AFGJpXZg9MJEikkH93TEElT4jrXEgkgE+X2jEalt3pINJoW6
nphjNaCzCFARqNQNY6ADu3n9cO65R1s1JFRalamjHsfOuF1gA7AotQFHQDr5Z4MVo/fRpKRs
pCmpJ6jFh04Rl9S10VzOQB+nngtGk1cvI9egH1w4KaOZ2Vg2TeRyphZ9M0bJGzsaac9RzNPI
HFbTIJXolQx1DofHLpgKNZevTWSKV6YcB0kFSlTToQMq/QeWHafsKU0IYHMCpFM/plg1H95O
lCegKr44kISSIKx50oAnYDxzwaNCZgvooFcZg9jXDivRgSS2k0PgPPxwys306+VdSilTkrDu
PwwWtyBVQIyozAyGdBQ9aYtZ/J1c6QK1C5gitcsWtXobSMsZArRxmppU/XD96zzcpTuhjSqj
QvU06fhjLdBmGBXoD9g6fjhlsYsSSuKl8iT9xHjitta+UaySlHNenmaYlgEJjU62DHM6T3Pj
jUoqRZpUhKq+hz9unt44batOLqQgqCAQfpX64yTn3BVmemrMKOtR3pi+1CINIRmfUMwtcgBj
X3qTe+wYmtajPLucFoxHLISfdI6LmB1I8PwxSqovcaQLU08OopQZ4dUpkkSSIBc69j1/acZa
vRJQOQw0mmQ8BiZH7qsdCGi19Y61OK21v7DaRjQKSPPrnglAGdhJQGrnt0IA8vM41GaESMZF
YkgjGtFO/pGkM1PEZipFc8X2V58MBSrkkjoT4fhjA+oQSyuKmpP0yxSNz4CrspUKNLEZjvTG
6yMS5eX8RGdfHBtJCZyT6sq5jxxmozySUMYFCR2rSn+uKICCVSSTkfuH08Mb+7V9O0h1Aj0F
cw/li+x1IJ2Ua9X8w9T3p9cYtHgWkuHAZ89WajsO2GXFqJpJSTU1zrkKCnTIY3brlo5JGWtC
dQzqCMu2QODTIFJJeushctSnPPBbrXpqsWJAzkoT4k+OeM6Q6PbBCltFa0BPUeXfFqEJgCC5
1FTVT1ofDEdw5uJaBhWpHWor+zDOWb0cyFoiCKHpl0oM8sb2nTLckuzGtPzEnM/XFeqz+dF+
olcsAPSn4dcsYzWjfrS6jXq1KCoAFTl4Vw4x+SFwWAIXWepX/XzwxrdE02qIIW1eIPljPUag
ZZm0LHT0k0ZT4eRw8XGf6c6D3JgurXV1+3Pt5Y1LZ8Mc8phcymLWXLL+UNjMt11iCS4cOoBq
B9oIBHmTXHWM9dLLa9+3DbpxLY3DwOBpV4iQ30+mMy4p3b4l3rl/JN6RV3TcZ71EqBFM5ZVB
/KF6dcF9anKjLO8tCaLTrXqR2xMWCEjh19Xtmle+XjinWCzB++GZqklTXqMjXFL6OZaeO8I9
IcgD7kFaHKg+uH7OkmBM7EtUKgbNgO5GM2sgaRvvDBiQakZedPPBpwQnZodRYFRSg741tH1E
l1KtEAyGeoioofrjPNxYX6qQlaFlFf5fXTX6dMb1nbEhuJXDIc3I0lu7eOM7Vpv1UoOlGKAA
AKOvXOh88X/k+0Ek87hgGJU0NK5AdMbt2NfZA+kepTlWij/XAziSC6njogJIetQa5L3xdfBm
1I1zI1M+gyB8O/THPN+WgncJCGRnqHAz/i+uOsn6Z+2HFxJQ+rLPLqPwGM2KUyShVov3EAEr
l+z64x6dMzMdTOT6hXMUqB0wyqihuKSkfkapYDpjUtDptd0uYnEiuQwIZXU+r09KfTB1tWrm
8+RuV3tubW63e5a1KaDGZDTT/CaY1xaJzGeS+aroKFWJy8znQjF23J4RuZAW1N6vsAPYeflj
PjElQtPI0oEnenTP9+Na18JhcSKKRsVz6/64twgmmJDBSdNaAeAPX9uLdZuo2uJioDuzE0C9
sgaU/AYZMFuDillKyLrJHVdXbxocLM2n/WlvUhJU5FTWlfPFfXRL+sleMayak5xk9aZf9MYm
w3rwAld3MZ1KrjURXoPAYLtZ4iFpNWtVY0JqdOQNMsdMavQVdyS5XS6+n9mWDQm912UNkVFP
bXxNe+M6z18Oi03m/sI3WG5eNZ1KzoCfUOul6dR9cdeazLXO16zCrkNpOqoHjljF5tbtOLgq
je3my/dU9K+eKxqdaeSdQo1CrGgepyNcEn7HXQJLh5BqVqKnpLDrl2xZlZ+2HScrEpqdRFGH
UUGNfLXPSJ5fdDUFS1Kkd8qYYuuvMAtMiy1Kg5DwxMTwlkyYgUH5amlcJlL0kqGbUa1bwB8s
ZsPhEgsdWYFcs6jApaAgAvX7GIOXXLFpw4J11cVArTwxQYJJlBJC0OVKVz798Vi5gTIdbSHo
T0/474cb+wVodRppK9AeuHLGacOa1QUBH7D4jBGaZiCxDA1YCtTiYtpUUipJHYDyxNQmcjSG
zp1r4Yo18ARRRvFTUk98WI5YilMwRRh5eOHAYKWABOkVoRTBiqQAaWSvrQZN1HlTFhIGOik1
AORUdK4vV9oj1MRTutCT41y/ZisXyMxjQcs+1T1wfJpRSUTPMmlD2GfXFYCVnV8lzqaAZYpT
IcSJpLaMu9PDFJh2hGnWaChJoWPh+ONMSaaQNTSQKD7e5wrDmrMaGpIBr/jg+FgwoUgAVJHb
BWvICisKE0PY4qyeoDHUBr+0t+/tixnTRxGpJqFPWhzHhh04aTTq0qT4V8j/AJ4DIOikgrQE
ZHLI4sNEXRVNKIOpPgRhh+QmV3Fc9K0P44cjFoWJYimchpXLIYFgw7aT0FT6W7YLB/4OxAjo
33N17DAfsCNhl/Ee3gBiWnVqsa596/XthxSk0TU1fkpm2L7NHDKoOg0b8tKjLwxDML3RQGlQ
TWvh44sanoMtRI6da0/y8MazB8GJp6qmtf8AoMsTNokQ5GhzJqKeHXGaZ6TUjJ0ii9c+9ep/
DFlOi9lqDPMEEH/M4tQVSrAsaZkEV7+OK1fU4pRjnWufjiZkNRNQNDTqD3xGHZmaoAoScmIw
GBdnj1E0zAAoegwyi0vSF0tmR6mFMiOtMO1Wn1EvqQeoHMHuMGLTgerWW7/bTMDwxmRG1Bo+
gyOeFaS+ptHdsh/x44dOGDgtUVqBQjoKDti0YJFSg1JQHo/l4YygKoVRJH2NCtK/hXBrSaMm
PNsw1fSO2Mn4C2kyKW6mtW/DLDgp5o9aq1dPY+Fe2GaKZtCjSxJPQnrWvjh2i0khQgNSpJ0k
Hpl4+GMmDEqBCsieqvQdKjGcPgRqZ6qQtTU/T/XDWDAr/wDaDUCagKPOlThitP7xAIFOtFA7
+OGLRFRVBTSTmD9O2M6dKSUhqsgJpTMZH6YYNNVqKtSMsqdfpiwgAIarCniegxVQaxkk6SMh
mAaf4YtWEI9QBXPVXUO+IWBqVJVh0GQBpi0CShoQar0JJ9X/AEwNfUS+4VZYfUAaHt+OBf8A
g7sVy6kjPy8cXyfgCaR6qes/m8sHo0TMHZCKmnTLuMMh3QhXaVgoAr3Ph9MIONIqr0FTQeH0
GM0wmcj01GnKp7kYZCJlTRQNUsQGA6/SuI0TKCATRWWgXwFMAtRkIrEnrQrln1wxaZkTVnQn
86+H1xasOoIjGo11GgBzxX1vBmMEqrZlMwOmDF8hkoBSh1Z9P8MEZt8JHckjqSaAnw8MasZl
MSAdI+010/XvhI42DodRqRkFP788Zw/YzRrXVWrA+lB06dcM0ZEQ1akABOnImmdD3xqKb8pK
Z1I9WdM86+Qw2GEvt6SdOQ8TShwVrRBlbSrChPSnShwC0LKoVkI9Q6V8vDEP/AYjITnQeZ64
bjMNIkiMNRJWtTh2GypEYBSGB6Vr9MNmpEW0yE5AUNQe1cZzENPvCsMxkOuffEsOyaq0Olga
mnbGovAoASD+YeOXTzwGJFFTqUZnqfDDaZAg0Y6qaCak+eM2ue3RtEhUuoy7muD1qWA9boOn
pGQNcbk9PtAF0LUNqP5gO2Nyes4IsXpVippXUO48sZsWHYloVIFPEn9lcZIWVPSANI6Np7n8
cMWykwjDCjVWlP8ArhMwmrqB6NShIFRXuMMq6v6NoZgXqPSBkDSvjgrOW+nkYlQpAaoACf8A
PBD9vAsCiVBBoMKkIyAny8QK0A6dcX1Uhw4ALilSDQeGM3nFKkyAU5UrkB9MVIGNVBVdKkdB
9cU8V9JtSqq01ZUTwNcIwMerVqJ9VakH/LEDu2lj6aUocvDBNa0SiRnLBgAaVrlXB8rCBEmp
iSB+bzI88QCuliT0JNMssqY3gheoUQkHLIDocSzDM1NJWhPTSPAdsai8OsmtyFyJGdcsjg6G
mVGCnV1pTPv5YDCOvQHB09tI/Z1wykqj3NJ7Gjfh1w1HYAZ01LXofDBkaDG4DNQavAnGlowz
EFFICHOo/wAMZtWU1BQkenOgbt0xn5BjoFSTn1IByHbHTm6L4TLQaR161Of4YbzrMmlqAqK5
ihzFRTBOcbOrIWSno8A3So8/PGcIpFr1I1DMUwxmoaq1KLQmoINaY6ZjFuniRgHqSM6K3hjO
jEooootDXMnrTPGbDO8AQPU6/actQOdfPDhEWYmlNQGf+uH6s6aqqBQBlJqR4Yz9W50MlchS
uo+gdCCf9MFjUsgMxqrQKVz70IOXXFi39kpDKBmQBmR4Y1PRSUMpqVIBGQri0zYdR1OZCjP6
YLB9ghVTTXMitPKvnhjnqSoAIFGGZxrqbBx1Sqq9FFSagY5zlu0nDBQXYBegzzNOuLBTqqj0
kinh4188WL0Mkauoq1BmK+BPhjU2LEaq4AAOlwPVlXLtgpEVdSNXU/j0xndFOEAFWHpGX/M4
pLGQFxUemnWtOwx0+qlOjKQ2nNjlUCuXiMV5w7+j6WooQEaTkT2/ZjONe34PViAQfUDmaZnB
VIB5SZcl6ZkfhitGDUa0WlCa1HbHPGg0bXnXScz4iuIHLBgAB6QaE+fjjXMFpdWB1Ag9a9hj
Vgh2VUGtcwcwfLzxmeNUauKKpJHT7en4419WfsHSXJ6ZdulfLBTKWtg4HfpSmWWKQaAqGFV+
4Ghz7d8ViltF6AgowJrmfPsBTBjUpFlyfVRhkVAxG02r0ggUJyBHXPB4tSBXCkn1B6UP8JHa
mLRUZTTVhmPpkanpit0TTsVajKuY9JA88GNGmXQwJpQZ5eeGrBIUJpqK1BIr/wAZYNalkO6P
WiZCmbDP64znpvvwZlehK9DSrHpXy+uFigUvU9QagsT0+mGwTqiLDQQDl5DP6VxSLTS1yfqF
7dzjNMokdDmwrXNa5f44yZ0Yo1Qxo2sVIX8uLToWOkqKNqPjjcg2DaM11UPatP8ACmMnAM7V
K6aiv2jqAfPAhFGpqyJ8fEYzqoTpLEsSAMyCf9MSS5D7SdQFfUMApjK1QCB61zB8fLDIPtYE
VWniO3cH6DDXTnov5ijW4qTWnfp/hjX2iMFoCztqzFPp4YzaBAKzUUeoUAbBRPTaSKggKyfl
Brq88TVha1IDBamvTyGIWmRk1N1bxJ7E9sSO0q6dSgDTk3iK4qrSRjIVWv3dF8Rgo8PKqAjR
XSRWnfFD0FTqIINNJpnh0YMSaS2ggnpJQdR2yxY1ETHW4Gog1FdPicCtTHNQ/cmjeNOmDGQv
F7hdaZqKg9aUwGQSe7oGpalR08sVXqTL3KEGp9Ve5xFAVBclSQGFT9fD9mIJYySqrWh/MzdD
liIZVKrRqqex8D9MQodAEZLVYoRpAGRJwLCLKY9LEZntnkcOCEqZVDhyPyd6DDjSRhUppbSM
9eeLEjYh/TTUFzanWnlgVSL7TDSKggUBHj3xlFrBIUH7KD6k/wCmIHLFhVG0lcz+zFhRVoAp
6eI+mQxYhvpCVDksv3qM6/TDFTKxy7KOtPPwwjTDQS5K1HdAc8R0KEldOokDIKcqYjgmdkWp
Wh6NlkfPBTRREPHqAoSM1Hl44MAQkkS6qGpyJ7nwywHEcs/pWudMs+2fbCiUNqIOqgoanOtf
PBo0phRaAZipArTPzxYXDdUZQSKEZU7jyxKOShAFD+GJoSAqakdemIHoPHv/AMDEXfHEJKVq
SDkadq4A6LKzkm3G2ggj1STSrFGhyGpzQVOOnM1rx9I7R8XcD4lYWsHIxLuu97hQJ7Y/+NCz
dlBIZ6eJxnLR11J8Jk+CbGXebgbpIYdqgX3ZFt6LIEI1aVIPpJwarUu4fG3D+QbU8nF7T9Pb
2re3K87M8rOMgC5pUYNrP2SWvxb8f8atrW13sT32/wC5ZQzKAYIW7AL+bPKuH/atz+lBH8E7
ZLvc8+7yiHZoI2luILaqyOOuhOoGMb0vsW5/G/Ed52aXceL2htdvtyUczV90suQYNU16Y1mL
nr9snxX4L3XfNnu+QXO4Q7ftMJkZNZ1OdH3inl2rjVtXdSfHXxDtu+3l7uO43Tpxywakjw09
+5K5gKeirQdca+1E6yNfuXxbxHfdvju+NxLZ7UjexLcSmsoKmhJbqR3pjnli+6y4/wAG+Hxu
C8e2u1u963dV/m7jSsKPT73KkAfTDYNYjmXxO15zqPi3H2ifcJBrnK+iOCP7mZvp5dcXOtzp
xco+G32Pedt45BfQ3+97idPtx+lVAyzrmPxxnn7brPmtzb/EHANnii2K4kfcuTXEYIkeogVv
zCMKRT8TjV35Y66lcO3fBG0w3lzf7/dvb7LZBmMdvm7nwBoaIMG1vnrPhzc3+Mtku9lbetgV
YNoij1Qu7HU+n72atK4s9F6rwyVIkvCAznsQRQ1HUkDsMdp8Ddes/GnxBtu57VLynll1Jb7H
G4S3t4T/ADJqHr5Kcc+rTHbzv4p2hNnl3rYrcW+1MAInZiWcGtGIOY6dMY2yivD5oZFk0gDW
hzI6Dtjtz1HL60KRFSUOZrl5+OLW/qaaRQSGBOo+ofXqcStwmlKqdIrHXIjOmBajkUsFk7Dr
4V/5YSbUtCus07mpy+ow6ycKGIDnPpUEZeeKNSAMJOZFWXIA+PY4jUkZShBADUqAPHphqviP
R1qAARmB3wM6UqAFVJH8zNgvY+ZwTTfhEtQyhRkMqHqfP/lhUOYnCNTLqAPI59cUowRLLDqJ
Kg5Ch6DCKSFghofuFRX9p/HFiOrCoyrnX/XFhOAtCKkgmreWHFaTLRtZoVbwPh2AwQYEKqnS
AKtWh6fXFTMPXRXOmdDXPEsKSNhQZEMKEDI1xWj6oyNIpXqCqk+OKU4KNJdWYqVFfLDRgHQA
A/cR9wrQV/DBEcFQq0qBX0qM8/HCh6T7iOCdJFW6Dv28cVahpKZ0b1Z6dXX6ZYJFTJ6cwSB1
YdRXyxoU7hGb1kllOYX91ThkFoHJSGjfcGqR16+WM06FhrHiRnl/x0xQX4KFnKliPsOliT2x
YzPBSgGRSMsjRvHzwuk51HkpNc5D+/zxQYemskZL0IB74dWBlVyg1/dmY/24RbYRKnOtK0Bo
aio6HANIxhmYZ1GbE9AT3wYdEQhQLTQDlX/n44swQBcIFXMAZenqPPFsITUZjMnse+FSplyB
ZWDU/wAe4GDEEAMNQH8zw88Fa0v5pdlFMsy3bFrPqJCyUIFT3bqc8UqxJK7oSQCXFAK+fWuN
RToY1OtX+wgZDLMeWLcatANWslFIXuO/1xazpAIB6cwxFewH49cBhO50Kikla0rTqa4IiYvq
GVCCatXInyrhWGWRhVq5nM/s/wAMXypBRyEgZkdCSf8A+bErBNplYkV0sASPp4YFLDFFVtIN
aLWgr08MOoPu6BQrpByZWzphk0WmLU+7Lzp4dMODRUFDqBzzr9cMgRyCjF66gBkvanauL6nw
ipIXUNSKKdc6nBRh3Vg+jUKJ9wHX6Ypas9GUZTpYeo5qO2LT9bTGMBAOjk1qTXOuLCR1LUnJ
a/jU9cNMgmNXBFQwBUnxFMCsEWAGfQUBr/p54MYoZ9IkOefR86Y0MMGQVQiuo08aU8Di+rRj
WlBnU0XoPPvgnhO7qxKrVSaBST4imIUpNbAAZKo+lfrhVKNqEkdT38vLGvljSqGIU9D+zp1+
uCNynYBlrprWgWmVMsWN4ChqSTWgp3yxqOVgwQwp0VW7jBcPJpF9ILtVl7AHJRgO4Ss4av3K
3VvLG9FppFBIJANfDrl44zYd8EC8j0IHpH3d/KmDDLvhEhVZarUVoe9fE419VshglaMpIoKH
8e+IJBpJ0rlXue3btgxbAFGaOlda6qHxCjGbPTnh1+31U1E+rwFemNCG1x6jpQnVlX/SuGTG
pQxUEYrSgrkeuHqj7SHcMzAj7QPVT/XFrEulUGMitFJAUHwPXBPlq0TJqDmtATQV7nxxVEIo
s6ZqAa161werz8h0jUxYgDKi9ziWw2StmDWmYGZxoaWrS6169fEZ9hgzUQOgKwNQTSo8MMGi
AU/caA5U69MUla0AUx6jIBTOneoOG+gwQ+kq1ASQVP0wLS0sKMBXPMDviw0imlSCD9329x5Y
NGiCkt0o2VB5HwxaCZArHWakEVxW6IbJwEJoDXUvf9uDDL6WkfamRyoB4fTGpMOGSIFywOlT
UUrX8Bh1YThDMSuQOQHiPLEPkVV06iAqqc1PeuDKusRla1Y9R0AzrjpWcEiuIyXFD1y6nPGT
PAKZSa1qSagDv9RjeRDKhXNPs655Z4z1G5QyHqDTIAgf5DGZVSXUDUZ+Q/zxrIBmjNVgBnXV
Wi5YfqvvvhgQBkR1p45Yz9cVoSCXqT6RkBjUZwTAihc0rkM8qdBhkaoKoSQK0pkPA4rPFKN1
Vh0AIp+B8aY4+xoIXUWrk47eeNbXO3DjWDpJFB3r443ui0jUMUDHwAGYqcI0SaQDnmufauf+
WM0I9LI2k0o2R+mJQYXSpAbIekkdgMsW3W5Jginq7EE0I/0OK+m8hISoFKCuZ8fpgkR/QFKs
NWrMN4/XDOdU6wY6BAn3AAeGXhinJvUqOSSNgCCdK/cp8PDD9BetOxJhOQUdwP8AA1xqQaet
OgqD0IFaeX4YfqxumGhWIyZwaE9qHuMDfk+CdxQqB6gc2H7sYsU9SLGroa9c+37hjOm8o0C1
CJnTqSf3jEJYWkkk5ZGhqa4furBAAtrY1HQ08D3wbWbQOFDDTSvXIf44WdFpAYkmhFNNc61x
RUL6tPiW8csbkFsOqOGK/m7V6HLtgvqlwWR1ULLlmvb9mOea6/bAxmWNmLfaRQnDGtGyV6Cj
0qf9MOMBBUKprQjLLoAemM3xr5I6Q9QKt0FcODDJQoAxKqDmO2DB9RLQlinTPIihP7cXq8Mq
qMmNK1/f/hh1nPTBI0XLvnWuQxS0+BVwta/cBUU8/HG4KkonUqKn7vpjNilB6aHSKAmoGH6r
cEUAAqNQ7gZVOOddPwAMoQEAVr0Of1wWKZPlIixEtoFADUd6Yz9cNunZVUv1zPpNf34cZ6CC
5DAkayBWvamL4UpBStAKa+ufSuL5RyCahVDP+UHETABUEjqA57UywaciQMnbJaZ1waEWoFQR
6VJplnXFCHUSAAvo6V/Nn440zRMlToyB6lqZV7YBYWglinXKjf44CIxRkVz0NQDxB8BjLWk6
6Bqamn+Eda+JIxcqmVtRL9T0CkUH4DDYiBIByzppC9Bgq5JowMw1C2YqMVrXh5ZWKplqHfL0
1/zxnFe8A+lWGhQGbrXGvqzoVlk9wl2CrShU+GNRm0SlNIUggD7Gr3xeMemZHaTXUgDJq4y6
cQwY6CtWzHoPShwY36lVACWY1IpnWuZwDQqFGqmS1yPQmuLD9gSMAaE+QWlMNmCUcTKKKz6D
406f9cGk66BUPm33Kegr44PRPDR+0R/MI1GtWPh3rhxgyIMswEY0BPWn+mBrmxIWjcFkIRh6
RqGVD1rhdNh/SoTVpBJzr4YIL4icorkrUqxP+uHGfSbSSEqVPYqMz5VxYj5/aVDD7QQadB3w
LUkdCGbNVH3AGpb6eGAxGsgYOxJAoSaVNfOuKtSYLVpIoA4JGk1ofPPANGSKZjL8w74NUMzD
2tVR7dcmB7fTDKKiR6hXYl2GRZjlTDUN30MoQV1GgpnTFioNIYZfd1FOhp1OAw/smpCrUeI8
8P2GD0gqq6QCKVI6GnniJ4wpAfV6qkK3fLtjOGCUKmmuk1NSKHUD41xYcKgMhUVp3U5UwABR
Q5SlEboa1rT/ACxA3tqzVAzX7W6nFqpBtLiiEsaHU3T/AIGFkDFQCynSSaVBqSDihn+T+hiK
D1KRqc9xhSRxoU0OmveoP0ywNymozik1cqFKEio8cWIwEY1IagNmGGBJRIsgUFc19Jp4DxwL
UI9o6gRVaE6afvpiQEcA+WdAOmICdCSCzBkUHp1OEuC6RkOtiCT2xGOTWa16eFMBHUstQKEY
oga/+PPEFrIxjlrqHpGVOgriOO3Y9wtbLerS6lBlWKVXaMfwqa0/bj0/ws3GOpfw+nv67x7m
Z27kF1uMG12W3aXaKd1V29vMqErU9MZ/p/O80T/Lrt/l/j19vlzZ2z+xDOoigvJfQjsQakE9
K+eOd/m31MiNuWcY4Hxz+nybgu7bjesZWS1IZIwxyYkfsGGcazPXTLvXHeV3O37/AC7lDY2W
1kM8ZZS7MgFVCn6Y39OuVzYe3+YOP7vu+47bbn9OZwUtZ5yqoVI0knwPhjH/ACuar58uXeOY
8c4Fwu42cXyblu117uhLdldAJK09Q8j0wfS07GW4dcfGJ4hc3vJ96nfegGVNnWRlVaCsY0D7
q9zjffHUh/6y+Ra/FXO9nbatw4/lBe3TNJEWZfaCnIUrmAuOvf8AL/SdRm9e+tPNu3HeN7HZ
8Ibc4brdNydvemiZSkCynMkjuK+muOHPF6+FZ+V9ecW3Cx2Zdm4jutrs9oQWubzWpmlZh9zO
M/HGb4d14nYbTsUHyD+i3jkUltHESbzdoZWZyaaqe4K0r546c25431Z9V5v/ACz4+2L5B2m7
2N5L+3twGuL5meQu3Y6m6n6Y1/H+f21y/wDD0CO92K95BFzWfc4rXZ7WImNSQzymg6Kp7U+u
MfSzwyxBafIOx8vt9y2fZpgt3csREZSFPtk0qAT1xdfyvM1WxQ/JHOOPcT4FHxKG4jvt3jX+
cITqSPM6tR71rn2xmTar1GSsPh/YLHg//uHIt3RJZo/ehsVYAjXmASDqJ8sXX2lwzL8Nfwnk
m38l4EOK7bKr3sYIPukL/K16g1OlQeuN9fzs9F624H5Q5lsmx8LtuJ21z+v3JVVLkxMCkenM
ip61OOd50/l83tI7M2shqkkyDLrgkxAdGEi5Ejs/avfGoMoZUIFPtzqPH8cWoyBwdJqxbIp0
0g98Xyp4aQRLJ6evgc88MIDrUl0QPU1anbyzxrRYTKzgSKQF6U/N+7EPk2nS1DlrzYDqAetM
VhC8Sl6IKn8r+XngHyWlaDUM0NK98SxHqQRkDv3NSST4HCzDqFVV/Oa1auWY+uBocTvkyqW1
VahzoRkK4DAEKzGmoqtR5E/TCsEmohS4oq9u4OFYYq2pj41qTnQ4dUmHWOPURn9uVMjg1ULF
h6K+kdexqPDGpXP0TAkpqFS3Tr4YzWpyVdKBdOk56qdT/wA8EMmEfVDTUSeqnsT2w4NRjOQB
6GgJP1XthWjjKFgaZN9p8e/4YhoFijpI1Tk+nIGtSK5YGpg/SwQFQGH5jkcLO7TOACFQkE9V
7UOEhjJL+o6UHpJ7f9cDU04alU6Ek6XPQjDpvodMjkqvXt9PHGqPqQqsdG6d6ipr54wyESMi
laVFak4mdy+GppTUp15ZL1yr3xrYvkLM2sEeo10nt+7D9ofU5QiNa/eO/f6VxluBKqXDU1KR
XrkD4YUCRQpBK1zzNepP+mHWTtoYfvLAeeVcYp3TTrUEMSQOlOtcQ+py9UbOtPt71Plh+Qj9
1SKEaTU+keJ88Ui0SBgn+Ip0GEHRSwDKPqAf2nAQR+4rUUVJ+3xy74rVgh6zqpVwTQeP7MBh
MrKpJFHPQds++DBYcMqIQDULmAev7cONaZ2WikZH7Qqkj9tcayL7G0A0EmqlKmmQGBEM1Kr1
A9I7HPBjOnCMcx26/iMTUGyvkzEBKAivjhkRmj0+oksaj7elMMCKNvQ0jfiK0+n4Yb6Uqe2F
yNKigc+J+uOeEv5Qcg6qBa0rnX+LLGmQGPWOlSx6kZ/hjUh+SyACs9D3U9aYWcOHINNRI7Hy
ONQaFFNSakA0/YMVhhVGsgHR2Q9a174FbonqR9or1ZvEjtiMhyf5QqDqGZp44D8FEBVmJFFI
z8ScGqkVLDW3pYGn7e+IfBOSBmaD8x6g4cOhZdLiWukkCiny8MMHWn1LQM1CCamorlXvhvIE
GCVDd6UNPxxYYTmmpqekD8K+WDDiP7CGBqO48vHFjAmrmOvY+IOKQBViT9tQenhiZwRSsZP5
lJOkfXxwfl05hEt7lXNT2A6YWj5qAAKsxrT/ABw4zgpNFfVQE5lO+DFZgUNXLJmBk3j0xYyQ
UqFocmNRhlMhwwDEsKqcvxxuVaBqqexC0I8qYPyLp0UVqQBrJoT4nxxsZaIO4OagtX1V8cWH
2eEWlBYlqhu1KZ4zYJ6dWKmoXVXqFzqeuDDLYjLKtAcz+cHKvfrhnLX2F6mToCARqWv7KYTe
jFSw66HJr1zrTvg1zOkL6c3pn+BwasP6StW9SVofGleuLTITQk0pko6V8MUuNEyqutAchnWl
AK4Np6wOhyaL1NK16/XPGtH100zASChp2rXMHzxqRjr9EwZvu6KMlyFfpgtjOGLOqoFNGqKj
Bp+Dgg5E0WuSnD8NT07rSMkgDPr59sX2VCIHUanFSorSv7SfMYpVYAM1fCM5gipxAQBJqSSv
jgWD1N7hfPJT9Ri8IGJRaEUBpq71r3w/WKkgZT/MNFzoep/HFi+BRnMedVBp/ngaRqQXKZ06
ah1B8MbnKxKYlzZzUkjLvkOgxnBoHrWhq1KZdBXrTDGejlP5YetM6nwwrBAqcwB0yr1IOWKy
HTNEVC6TQkV8wMU7NwzBmIJGlT0A8sNrM+SWIkMVIqMs8sc7Gx/c1AOwGXT92NcRm31AVA1A
+ehfDHTGbUoKMyKRTxHmfDBeSB9Sgk1Lg0RT1GMr4I+6F0M1a+o08T4Y6/Ye/JwrasslIoCc
iT0xm3xvSEb1FQK/lWtenUHHOs20nND7aCjHr3H4YZjFDXUQB9wGXfyxuA6oVGnr1DU6ftw5
q06I2nV4Z06+nGDJplPqB+2lVz74MMOACGIH4dDXFfWpDgl6ClT2r49MNMSGNwGL0JPpA+nf
Fmn4+QEn26KM+hB/yxfVjq/ok1UWp69R4U8MaH1JQQpamRyKn/MHA14JIwaBwNQ/GnhiNiMv
6wDkK/4Y1PXO3BhgaaqhSSCf8cFXN0UTooOkghcwT+/GbG/t6YjW1FJEb5H/ABywfAttCiLS
g6DpXr5YzdWCBRSdSj6d88UQVKVKqtBUipyHkMakZyDIowrQZ069Dgkb8JkZpTX69vrXFGer
ngGzrWpqcs+mNfZzOoAKOB6s6j/LGN04bUxYKKEvUeJGLVqRgukhT6KdDnQ9MUa56CqqzVbL
vn5eBxRvSIB6Nl2FOuHF9ggggEkk9GpkPqMGatKRlaulaAnpn2xqc4xerToxYlsxUUAGeWCo
wBypUAnPxoO2AYKiKlBn2p064Jp8gXiCjVXUWOQ8D543rUkOwIXtUdPHF9lZDg0pUFqmoP0x
j7M/AoWCSl5AWA/L5UyGM31T9g0qGY0LCuVfA9vrgutzoqqiAAamzp4in0wHRqrlDXMUFCD4
/wCmDEiMdCRU0FNVepxvFaLR7iamqpOdR3r2pi6Ywg2g6TmKULZA16/jgV+BVYMW6eI8RjPi
nwFU1g5+qldJyriamGRFXqDU0CjtgkJwimMELqJOS9qV8MVqGHYBanqaLlUAYd1i+IRVZCS1
GNSKZ18sFrXP7H3qoJUCoByPnlgO6H3I3AoT3pTofLDILRQkdSc2oBUdK4Ph0k8CT6ygzPQ1
HftirH0sIhvImlR/mKYgSLG1TmFPq/Z9cB9Dka6x6j9tPDDhoniyz9S9/Hyw5jNPoLDQOtQB
4eIxZBhzVUOR1A0YNnXxxmtyCJRiSev5Qf8ATBY3qNsyA4zOVOn4ZYmaYF9JXI1OYOROBmCr
GW0kUalBUZnGsahOrqQrerKtetO3TAtDX8tK5mhA7YGokdj7tajOmvyP4dcTHUkJ/wCW6lgG
qTlXLF8s6FTSSgQlQehzFOtK4Pq3zTuAasSanqQM/LEerTRqrZkUVeor+GFSmJZCARVB4+Pb
PFjO6RZhNpNc/wAwyy88QGBVWfSaHoe9cGNwnQ+16mAr0UClCfHAaERsoAH219QyOeBnRCRl
BpQpSjE9aYjDMoT1qKoa5HoB+GGw4ZEQSdB7ZarN2r4UwKmVJQGXo2YJy6dcqYmcMtdOlc18
TkKjxGIyCj906mRtITMn/lh0pDJpVSnpLCorkanEbcQqwSUAkhTUtXMCuHGeRGZWWoNSrUAG
efiMZxvThk1KNQU+JzIA61piB2eP05Zg0Jp2Pc4lbEjmiqw9Kjs1M8WC1EjeGchHpatKDwp/
ngHhvQyB2ojjJtOYp9MQNo6lahaek9a+XliMJmQKD3UVC9WPjiaAz3DIpJAHRa9PwwgaMFHp
U0PfrTzofHGVptR0OBnqBoRkQwwYocAMATVQBU1y6jPFjRUGlsqAAAHw8PrhRtCKtGyJzqeh
8/LAccV6G6sKZ1J6VxCOPrUjE0cH0lf8cOKgxBYTRv8AbqAz+3wOFoaRyRnP0ueh6tX8MO4c
/T0DYuD8s3K3DQbVcToyqNZiYofPPIjF1/bWbMBJsG/224xWLW0gvCxjFsBVqjwHjgnRk13b
h8e8xsE/VXW3TQW8h9DOKEk59PH64uP6+s9c6i2rg/LN2t3ax26d1U1aQK4BplQkCmWLr+90
SZXXtnAuYX26xbPa2chv5TpUEZKOhaR+gGN/9a9F7nUxa8z+JOQ8ZtdV/SSUZs4FUz/KpGRJ
8cY+9rl9JVrxH+3/AJXvG2x3t+8W02ToGt452/nSL4aR6qYe/wCl+GPpOVHyj4v3zj14lnbI
01276oYwDqKnqVpnn2ONc/3s5wfX7VW3/AeU2IMt3Yz26uAauCCWpU62OeH+X97zW+Z+B2HD
+eXiRPFY3Jt38A5VkH7h+OMf1/8As61OMuq+bjO+Hcv0gtpzdO1TCilmr5gDMYOP6WC9SuqX
gfKtut/cv7KSJUFU91WSoP8ACDT9uGf1y+HmYVlw/mG5WzPb2VxLAR/LIDBGUd1byw9f/Yt8
X9Mvwhg4vv8ADdGG2sX95TR2iJ1An/txu/8A2vMc5yO/4dym2kE13aOBKcmcEEk5UOrrjHPf
5Z641x323b/bokW4JMI3yihk1AU7HSe3hjV/tK78cyQdvsHJEuf0ltBcw3MgHtiHUr0Of5cx
g/8A2bI5/XafdeJcjsF97c7SW3L1U66lhQfmJz+mMzudGSOXb+Mb3uNo81pbSm1hqJHK+mgH
cnBsDfcA+E9y3zZW3zebuHbtuqUgik/8krpl6M+hxm26pYsL/wCDY7XarvfNx3BbPa7YN7Ec
lBLK4FaDuR4UxTTkql2/4O5XfWkN1DbPF+r9cEcmbBCK6iozpTpjN7X1ZDmfGrnje8/02Zg9
yq1lovoH7e+O3M2BQsB7blV1gnqRQ+Rxq4yjJOkrSug1AXoR2wxfJSBiRU0bLUPL64mbASF9
esmjKMx4A+eAhRYw1HJBH3U6EHBpzSOlVKhQAT4HUKZYWaQBKaidK/tzHTGaeSAnjBKigX00
6nxJxG0IYk/y21E/dSozr49MaG1IdRXSxAK9dPT64pVtCmp5ygqq5kMcgTiUEYZERmHqB7mg
wabDIukeruMx/jmcLMP9rAv6WLenUakfsxrTaEsxqGXOtajIZ9RlgwTaZomWMHKrdjl18vAY
dRo10K+ZoOo7eeKjcOKlaA6BTJf8sOL7GSbM5kgile2oZ9sGKdE9ApK1LHMn6+GFW6ZToVa9
xRj1r3xYoI5g5Zfl8M8UO0RjQVRmGQzPenjQ4y1QolWqtdQPqXp16HCxJab1GgbtWhH174as
AuoZMdXVSRnn4n6Yzi8Fo06SR0z+tcsFEpBFBJqELCn/AB9cMjc6JY/SR2BqK554bCQoQWI9
deo64hQaRQimk1qfPENFpYfYP5fWnn44oaWr+WW1Zmvp+mXXDjKACPKtSp/KMqDywyMpNUS1
Cj6VwfWrSRZPUBQJ36g40YFNJNR6SD+2vjgVmCGg/a1DnqPbBYZSIDMQhyGZ8Di+pz0LRsSK
lmQfeSafTFGsJ8zQtUEgH/txSMikEgHrIUtkOx6+WDCjFRJUklh/xTGsZEVGujnoRT9nhgxG
U6SQDkvh3zw5BYTylzR6DyGdcMXp5HPQZ0XKmLEQI0qGGZ/Keg8cVOHBJIU0qen/AB44wTyq
AVYGlDWg6dMStga6kUkacsh0/fjcyAKKdOqoYkmjd6/jjUAiPVpBrIQaitRniUz8nbQiKpqS
etM8u+NQ+G0qGppGnrXwy6YLyjVYVBzSlQO/78UwjAr9xoGFCv8AnizBSRAIznXP7fHGaYUj
CjVBDGlVp2JwYNMrUADAKBl5E41i0aIgBUjT1IY0IIPcDDnh0LKFNTQnqD/hixmUl1aSAM17
eP44q18m9Qj0HIaswfEnFgvhZAEMvrJp0/CpxZjPv4OigqNGVcjUdxh+tX2lMWGS0JTodJrS
uLFokjJGRzH7D54LGpSJqCKAEdGPUYIrp81UHLX1J7AY1ItMSTIdIGRybp1GNfVjq3SCFamv
U0JB/fjFikw0gKjSPtHXBIaJQAn3VPcHDOVLhF/c0k10n8opl+3GsH2O6r4E51PlhhlDGHYM
QKVNTU55YtwzjRFQ1aMcstOWXl54sHkJAA1B3qCOmXjivNMRkAk6AdanOv8ADhsA0Dr0AINa
g9vxxT1QOg+kn7amgH3V/wBMWM4c+2+pQDrHXrTG7zihpAVFNJ00p9DjFjc1IHoq5Vz/AAr0
wT/JswGpnYitNOWnvhowtbEHStG6g1rXG+Yxt/ATHUl8iCBQ9cGKnVUY0III798s6jGepGfd
OaPVa0Bz04y3yaVRqouQ6Hxp0NMabsEQM1r6QMz3PYZYLWcQt6RVqaRkKf4nG5VkOhWpDdAa
5d8X1YMx6stRTp2p54MX2Gr1BXSc+mfj3wfU6BqlyxzKjSCPLGpIftRKNYBP29AvceZ+uKwS
nyLkjr0r2xSNz0ySZqAAM+3cnvh+p0mLCWp6r6SPDxwWMYVEoEI1AkmgzPlXDJpmfk2tXOpC
fV1U9Kjtjf1yLwtRVxX1MtMwOlcYsGYfUxXIeokg/jiBFH9wEkU6Edc+2KVH9VNLVILU8Pxx
VfJiQKjI0NQenfpjC3AzLpNNPpr37Y6SnBBSopqq5Hpw/LV8MNTfdkT3rXBYAoKsDX0tkWpX
p0phlJ8swT06NTrhxnZCqpYmpNBU08R4YLzoO5kHrBoTmB/pjH1Y0OkH1A6WJybvTCqdq/bX
pnXxxuMwSj+WueeYoa18qYG8L71B7DqD/j+3GLz61zgilUJXoMiDkQTgxrQL7nuaDQdNJ/D/
ADxuSM7RxkPr05Z0LUzGNfDU9MyAipI1ZBgcWashiEKaUopGak9f+mHMc716SyEIpJJDfnI7
+WCq0/ryqACep8B5YFhNG9WJU6uw8MWxkP8AMWgc6umk/wCWJTwTa/SY/uJNVH7a54sbl0SO
TQVGXUeI8MZP2CUOotnp6ZdaeBzxWCaJyGBJAY09LU6eWGSM3ozaAtdPTr+GeH6tbp1YUUnN
qZEjrhUhFxXUQAcgCegHTB5HO7aJzG0dKfb104yr0DrWhqR0qaZ4LyJ4FEZa6qAk0Hale9cG
NDYBG8WbOnhXvjUjJ/VTSBVqZkDB9XSU3tggaWHXML0OKwyCK0JDAE10in+OMHr9B9tQFUfc
DTzxuMkwIXW30FMz+GKxuCQpQqKZfmzyywXkAkFJRrHpUdR54Iz1DOklQegbPVXL64tXI6Cg
Dfcv4/TDh/JAoZFCZEGrE1FK96YzZB7p5EJOZUODUg9PAZYz7DcCUcHTIa5alI6V8MU61rJI
coSp0+mpoWHUV7YPgzkDaVGhGFT38QMsSkGoUMqfc1Pu6H8cUVwifSHGbCtBWhy8Ma1mwDFW
YZVAIyPTzz8cZY+DmQMp1ENU0/4OD6q9W/BO4GRqAMlI8f8APBjpDMj0rJ3oRQ5kf5YKco4l
UIdXhQf4jBYs8R6Y9LVyPc55eeJnDSBCuo0YgimdM/E4cW4LWciDWQAZ9frgkNu/AKtl+Ruo
p0NfDGsWDVV9tlFSh6j/AHVwN5KYs6qQ2Tdz3A7Yz5VtN0Prqe47YsFNKwMQ1VoTllkM8akZ
+yQKhYFsgOpH+eIhUxtUPV2/KD+3EsOrBm1H0+Vc/wBmM4zKTsC5PXILU9xgrXNAaEMASTWi
+WeJuBLEgqMzX9i1pT9uIWCiBKUoDpOZ6sBgVKRAzEL1oP8Aj64lg5V9BY1ovWn+uDVIYlhQ
KQQPy+GX+OJojXSE05sxOfc074rR+S9shvWCQua5d8NsWYAephrIWprTFrPyLXrrnmMtNaHL
wxKylRmFCgzFSVyII/zwHBBgqLqWo00z8vHzxa1ckJVOnXT09c/34HManrSvgKdKHFYtoLYV
VxJUgH0nwbrng1uTwyqhGQrmWPma98Sg16atPX8pp374sVp0kcq2o01AgR+Y8cAnSEJJWqHU
pNSgFKkDMkHFqS6AgOg+s0P0/HESpIyjUa9KNTpi0S2nRWclNVNZ+lQMFalRzIFcoDUHowzA
ONSihEZKCpAb/d4YtJlXUw7OuRPTLzwIigWQktX/AHAdsTOJoFRmbWOmYI60wVozrGyjQait
D/tHY4j4NoVI9VK5BCvfzwRZEIGmTTWjL6ScLMHQamUGppQnoMWtYILCASwDGnQZCnlgJnUU
Vh6SwNNPemFm1CdTF601J17aa+WJnTK4XSzITn1HjgMomNWI0+liDqOdMTQtYIJoQ/2qevT/
ACxQo7gq4TSdP8XcCnniVcdyRkall/3dcxiTkoKUBwNCyoVOJA0/X9mLBqzLaGYsauDkRnU4
WoseMpFNv1jq1kCeM6UGrIMCcj1GOnHO0Xfw+sflP5E3/jm27dHt862sGke5FCqjUMgF6ZKc
Y55msfLj+L+Zf199y32a1h2+6t09q3aNfdkBzNVJFTqp9cdv7fw+k1ff8NBsH/se4cd3C+5S
JZCJf5E14AiGI9BpIFKY82a38RLvu58s/WbRYccaaKxdUW6jto19pYq5nIfmX82GYJzrj3zf
LjYvkfa9rsLoWhvlDXUEemukVID1B60zxrjnWOvIrPk+933cORbb7s7PsdpNCZTpCwhwa17A
1xczaZ8PT7a5guDGLIpJMQpE8n2lKDpXrTF1MUuua9vEh3O/vY9H6+3h9uO8KhmApXTGDnSv
gMZLzKK++Vt53G1l3yzU7JHcCt9dRiGqBslC+nJvpXG+frVF98rfJ278V3CwjhjjG3SDXc2c
QUSSKRRV1HpXBxzKXR8Yck2rcdn3DkzwLt0sspjj9mNZJUjA+xcjlXPzw/0/l9WdWG+bxHuu
2RxWu1Xd9czSCNb/AHKL2YFqc8u4pnQYxOTKuNpul/URbLLuE19IqUlis7URWiJTJC69P24b
yNBf3cOx7VvF3t1pBFNbBtDMqgBhmoOWdfPGfqdVHHr88i2DbN73ONLm+lkLanVSPTkAqj00
643eMWqobJJuPyvDcpYi7srONizstY1elFPSlV8MZkUXu43L7Btu67jtsUI3iaQrHOUEkjEZ
D09aBe2L6w68W5/uXyLvGytfb5tiWNsjA/qaey89KZJq8ewxvnNGNbI/Nbr4iEc1nY7JsKwK
BdMT+pnU+XQF8Zvlakx4Jfcj3hxb2p3CSW0t/wDxKjnSWU9aDwGPT/Lvn8xzu749A+MNy37m
vNdust8uG3e1tayrbXUrGJFUgaqDM0qMu+H+nEz7NczI+qJtw260t5bl7i3ht7NNIkLAIgUU
+mPJ9Vr4l+Q98st65Pe3ltK04aRgjFfvWpofDLG54z8se7ENpY9c2r/jjflW4c1+4moNQB0B
88saHoDWRi4bUR9oPU4NMuowayAZ6evr6f8APARSIy6XU5FScutMZVhodT6dZNTX8RjWwZ+y
TUQUUVCmtadPOuKsyXRxn01Fc+gY9jidKRUe2AKoM6yMPHuAMUgl1AUZSTUlScs8vqRh1iz9
J82QAUNPz4rg59RhZQUqQVYZp/CO2eCtyVK8bVKsRqpU+f0/DFBYhaPQASaj+MHsfHCrJDsz
IAMieun6ZY19FbfwcKRpOkEHxJp+3GcwfaiaQjVoQFjlpHcDvh8P2tRAsq+rNa5eP/TGpNZy
xIvt+0FCgAGh7Vrg+rUtCw0UINFJoO5Phhgw6hSdLZVOZpQfuxfVozIqCg7muR8cC0J0BDqT
VT82ZJHniW4JkcqzCvornX9mD8rQklXWpBr4dKnFWL+xKFYavtY11dB08sRyU2tidKr/AC2q
ajt44Jyz9r8BVmUR0X0nz7HG5MagCQWoaChAUg9u/wCONfUiIJq32gEFQOuXbGcGk2mlSTra
lQeo/wCWDFppTocAn0GhFT38Rhi00mgkFSKk+kf543g3TI2ghiAQ1enbGRTMw0kD1MOhGX+O
GIcTSDSGzJ/IRlXDYjMhToNRpln4+OCQgofbWq6gTTT4f64TIH1NIClcvDp9MGrUsjGoyGQJ
CDMZ4PDlCENNWTqM9I6keWEyYGRgW7k19PlUeOFkQIZQtdQyNehy6Z4LGMM0Wr7SXVjQ/gcZ
xuQZiqnqyp0fr+7BjQEGlfUvqGZIBqMawXSbUCaGv+A8sOqSH0CpyP8A2/vOCkvaVqqoNTmG
U9O+eJCo7DVQUFAPMeYxYz9TMFJ0knSaEd/wxrBsCopUlCVHXPr5nEL4kQnMKNYP5qdvpiWw
wGqqqdRWtCemf+mNYocVWmo1IHXoPxwn4OgBfUx9JrUeNcZwy+ilRSdIbMD0gZfvxer5RZUO
o9gBTIk+RwwCJToT/M65Zg4fozqFgPuK5A/cc/LCoMULmhpoHp6AefXDTM0Sqpzb8w6DrUdc
YvNa8gqKAKA0OZXxPjhnI66RmruGppH/ADpmRizB9tSjSPWKKDQBT1864sa0Cq+vU59BrRaY
1JnjF9JWQMaLReuQ659cFh2SHNBmldQOXgB54BbYThnWooNJ8csavEZ+90irIPV6qipPYnz+
mNfXY3KRzZR0ZhQsv0yocV5a5vgM9Kk0NPS2CxdWYJi5X1CmkfX8cUjnIc0Zh6x/DQeWLBSB
JZQemdAPDBWpCFR6tQKn7RgOX5OEaoVTVBWpGXXGvB6Bo6ClCDWh86f5YudV5GEOSgkdl88J
0qjRX7ZD28vE4p6PtiRaEflJ00C0FAcOGWIkLiQajkQfSOuHGbMIGQuxRSKZDPsPHDJIt34B
Qkt6/VXPP9tMVE0bPWQMxBQH9/SuM3lqHkIBLjoB1/5YZI11v4RxsutqAjUKL37Z/TG6z8CW
JqUBGkDxxnq6zIfQCRQkBTXV1rjF5wfYiNGpwemYAHbpjOGXPUZDL6yc/pXrnjUmumX5EGUr
X8uRLeZ7YrIzaUcer1SCgyy65YNMye0GlgRoAVSdQJ/zxuCm0A5kagc269BjTFgiXLejr9tO
lBjGfhr7JJPUigCjHuBkCP8Alhkka3fgGgDqQqgHVTrjQCZGJyWi9B9Bh8imndQUBAK1OZHb
BT4EKAKA0I6N5YKhJpDaqlmXyy/bi5X5GrBSVK6e9fL/ACON/U0NEYAgAHtXLGOoxpK+mjUJ
A6+RwYzLYQALgyGlfUAOlfLDjcyw7hgaGopX09+mLRIZIwRqP1Ncv8cY1DILhIyPMkeNMXJA
SwUVzCkrTyONxZSLAPmMxlXxrjUgohVhQGnegHfpjN89anoTECxJJ1D8vl5YdX0DEgIL5KCK
CmWGsWmZ9bF1GZz/ANaYzmM4JmZtOoitNP7e+CC3QrGuqldSr27mnnhlMERIakgdaHwoPPGm
vThVDjSTppXPMVxn5UhtTEEE9epA7dvxxQUzFxT1Bycie+eKlIiEHUAAAaN9fDzxm9NSYTRj
WrEhWGZWo6Y0r6HUpYKaVNQvgBjWM2kxIooqFAGZAyP4YfGdN7kgqpAYkAAV6k4PILRVcqpq
QD1zy8MYtjFhSLqbUOnTV9MEa5mmViFLV6n7a5nPrjU9bkpnjWNxo1VAq7eXicMrX1ypKAqQ
jaH7keB+uHBdR62jHpzXpQnMjGsZFrqdS+rIVpgsak0QliGQzIzI8O2DGbaF/bZ2WtFGefgO
2DMV2mQZEhhUZn6EYmbwbVWhpQg0PeuM2iJAyGoGQyz6Z98Mz5a+tIKSTmNQHQ51GC088lrX
TppmMzU9u+M/JnFggqKgkU6hnQDzxi9V0zDVqQ61qDUnvn/jinTHtpMHyagNfu7nGvvgm6R0
MrEHp1yxr7RuywyuoJPuAKAK0H+NcZ+y+lotZFT+VjkOuXbFO4OubAE+oMrCjZnvTLGb3okh
9UeZDLRaCjGlR44PsbydpI1anj3oMsZ+0XofdANKVOrME0y+uGdRmcfkNU1MwfKvWuWG9RqQ
5mDMBrBHTwqfpi2HNOhiAZlIouRHevljGtBedQ41nWKZkfurincZwwlh0V6qMhTrXxxXqNyB
9+MAJ0JFDXrTxGD7SM3jRl4SchUUFPHLBelAiRVagXqPSwOYPhjM7anJhJAopX618cVqSa0J
zYgHMClKUxn7tfVDLPHU9CDSvn9cOudh1kgYDUcxm3hlg1HEqM5z1EjInqQPDDrpJMJpEZVJ
qNP2mnUeWH7Mzk6XMSo3WhpqPj2xfZXkEl3EyqoJUDIE+GBWYf8AUQyGjjzDDsPwxaZJQvPG
y6TVT/CPEYda+sw7TwrH/LBBNA6/88H2YpSXEK0GdT1bt9MZ1FHcQADpqPRgO/hivS+pe6RR
CdIB/ZXBemuYaafWSpz05UpSo8cX2gw0bhx6aqaZgdvPFrQ9bL/MzCHJq9ajFOhYdbjURpjJ
avqPennivSEQZEAK966SP34zasAspCsaFtWbUGYp2xfYBNxI6BlqxBNcs6HDFTq8xkCBCzEU
AB8fDCyFpX6aCaZDLuMqnFrUSJE7MmiMCgBJGRr9MF6UlOoZVICFX6+VfLB9msdFvatcpKK6
Ag/lFhkx7r5HBe2ZyjAuoKRKjEjr3NMH2OGEtxHVWh+40p2p4jF9jOQS+8AfQQG+wrnlhlMh
oYrpzqWL7cqHpmPHFVg5IrmMAFC79B3A7YZWL8hMV10ZTUZUH78Ws2EP1KkqyEso+/tQ9hjO
tAIunBKj7hQL3oMOiSpBDM6CMK1T0NKE08MWt5goY7poWYwuenqplUHMV8sFrNpxaXcpOiMa
VHqLGg/HFqnJlhmZyIxVlFKEdsOk0cdytVZCCBU98h3waTtBItFC0LEMSfP/AIzxSq4F4bnV
pKgg5Eda/TEDSWtxHRxGVjAB0DOgwacTLFcPE5oVYAH8D5Yjhore5GqRdL+Fep8v88Vo2BEN
2aMWoG+8daGvhiixF7TAn3R1J+tfKmJamTbr6RVMbDyXw/biWSnXa7pwS8g9ZzYdajufLFqy
Im22cU0yURSVOfaueLUkl2+StI5KqKVUdR3xaNO9hP7xBINPtI8MUalD+kOoIGFTn45jphQT
s8rtRzRxStB3OfTBqxwz7XNHWgqmqnTPFpcxQKhNOmQJwaUOr/HGgtGX3ZjQ1Yn0kZEjAXfZ
WdzCElhYmVWBSRDRlIPljf8APvKvst943vk9/wCyNwuZbv2skErlyv8A2k+ON9dS+qZjr2Pk
XJdhvP1u23E1neFdKSwGrZdqHL8cU/v+L6zJFtyPn/PN5sxFum73N5HWqrK9FDCn2hdONc98
/OGyfh2bd8tfJVhZLaWO93FvCBSOJCrU/wD0wxxjr+nNvwOeP8s5c75vs25SblJcSy37ZvcO
xaQtWuqvTr0pg5/rl/w1ffFhv3yDzbe7OOz3XcpLmGMD2Y2aiJlQ5AZ/jgv9PdjnZ479g+W/
kjY7NbDa9ykt7dQfbRQrqP8Ad/MViP8At6Y63+86nsc+eaC0+VPke03STeptwddxkOn3lIIZ
San0MGHXDO+MzGso91+Wfkre7mGa/wBzkkFrIJbVQQqqw/NpUBa/XGZ1xz8OnPOxQb/yHkO8
3v6rdp5b+9FNEktARTIUC0UD8Mc/t74JzIuOIc+5vxkSS7Vdm0uH9JRkV0y6HQ1Rjtf7SzKu
ufzHTyL5Z+SOQyQybhuEk8Vu1YRGBbrXrXSmkfj1wfz64l+GPasR/cB8qQWC2abiYkjAWMLD
EGHYVkC1bzw93+dvjUmKLcPkz5Cu7GawudxZ7W5k92dKAl3bM1OH7/z/AEpKUfyTziLbrXak
vXgsbM//AB4IqUFO1Rn1xX+0v4Zs+rRr8+/Kxto4YdwS3jjABaOGPr0OrKp+ta4z9v5/prmW
/NcGwfM3PtiuLiaG9WZ7xjLJNcQpIRJ01LqrT6DD31xmYcsqo5n8jc95ZcRPvNy11EgBiiRR
FHGVz1Kq/vOOXNkP1Vl1zXl24WQ2y9vrmewUaUtjIxiUAUBCnplh76lhmRUNJfAkGIaPLJiP
w6YzKx16uuN8r3zjN4m4bUxgvnVo1lUVIRqEihy7Y68f1yZWfqst2+TudbltT7VcXRexMnuP
EiCMAnsSPUxPmcHfcvsbxlQ16XKgUQUNRmAPr+7HPUhk/VNKWCEocj3p5YfsLyh13J0goTH/
AAiop54fsPqkrcAU9rOvYdFHfBp0QD6K6KFcx+OC0nRZQaFden7a4tQbk3BYFc2Gf0PfFKsN
FFO66wrJn1I64dWg0z6hRcyep8v8sGs9TR0nKErGSOhBy6YZVmQibmOApo1ahmwHX/TFeoZ4
CI3Aj0tEaHw7DyxXpTk6yyligiPpGfl+OL7NaP2brSGKuMslpX8cWi86gaeaiokbZGhyJNPP
GtZ+pnkJbSqHM1BPb/li+w9SNJOFFYiR3Ay/YO2H76vqlRZHUyhG/h1nt9MZ+xxzm4J6KS4P
QeWHSf3w3WPNczlnXyGNaRiVg66kalMh3+uD7atJpST6Uqxz+v0xfY+Uk95jRV1jtlkP24rV
9Ts0/t+mE9TVqYvsoiWeX7dLDsK/44r1AEyMxzQtp7jL6DDeoEjI5ZdcbKAKmuRr/ljP3Y+q
FbmlUVTQ5MBUnPGvtGpNSPPVMwSQKGvpocH3OI11KWT29Z6lu9MbnYp4pSSQU6DIAZYb3DId
5/QQqsSc+nnjH2EgC7kr6SwOQ7ZYJ/QWHEuljVTVegPQU/xw/dSBDvqMhSo8B3Pnh+0X1FEA
5DAV/h69cV7U4KUuGGtSWXqK+Vf24p2bwGO6rX+VRMvwrlnjX2ikP+rjyYKRXJwO57dcH3wn
iuHBPoOkj05YL3F6aS5C0UpnpoSB/pivSOtyjux0EELkAO2D7Cl+qStCNQOdaZ4vuMB7ta6Q
xJPqFKZdRh+5whchSSVK6sqDoT9MVumTDS3DFiBXrmR5eIwfeH6iM5Wrv3pn5DGb2fgmnHuk
AGvYHpnnhn9FmhNyoQxkEEElnr+7D91gUvVX0gEV6kVwzsSCF6UbTpb28xqpn9MNsGUjdVZS
q6AcyanOnTBOzZKGG4ofShFT6q9B+3F/0ZvJG6ox759M8U6YwZuzQaV09a+Yw/8ARr6hkuYw
vQknIDtnhn9FYYXLddFJD3z6eFMZv9FOdH+syyXplTtXF/1X1KO40r6079Tnngn9WskM9y7Z
vXOgA6Z+OOs/qr/PSE7KT16kBevbFf6xX+eGSYsRUZZ/XFf6j6Q4uSTlmOrKOmXTPD/1wzgS
XFUkyIBJYd8vDGf+g+oFuxpI0nQRmKGnXB/1i+qT39SkAEqCGr1pTFP7YLz/AJRm5qShRhXL
Gv8AqPqITSRR6iCAftr1OC/2av8AMUU7AhtLaaVzHXGf+w/5h/UPRtMZalQcsb/6yM/89L9R
II0EsZz7DtTDP7xT+eU6STEfaaIaADLGp/eN3khLIKakqSaaSPtrnT6nGb/aUfQYkALAxnwZ
etK9cZ/6wXgOs6qqpATIA9wcU/uvoIzywyEMhUrmB4gjB/0la+mQmkkKopSgatexOH/pDzP2
TyzZH2zUZuaUBp40xT+kb650DTSu1ChoKVz8fDyxr/tI5Xm6ISyGSoU+nSe9MX/eRfQ5kk01
9s9TQ0pl/CMF/vDmw6yOJDpjFKUPhX8xwT+4wMrXEbZKakaieuRONf8AeH6o1klUk6DXoK98
V/8Aswf8xvIwCjIE5gn/AI64P/2Vz/KfkQkkFKp3ofqcHX/2GsNI0mjWyFMqFQP2Yp/dXmB9
y40jQKMRmvQ08cb5/wDsT8s/UIknqpCZ9x9emG/2g+iQe+SSUIIGSnqK+GOd/vFP5wUbySFU
K6TWgGM/92pzA3IkErK6MSAKEdKYef8A7DN5sIe+xB06hUDyAOLr+8HMlomE7axpI0mo8/LF
z/Zq8aBGnLSBh61ALZdD4Y3f7Mzj1HW51BCh0HoT4Yr/APYN/mkLSqrt7ZA/KaY5/wD7GtTj
Cjmm9dIiE70BGX443/3h55wLG7kSojNSOhFDTFP74Pr6eJ7o0JXr+XMgDwOD/u6T/JKt6G1a
KKDQedcbv/2JjlZ6eWO7U1KkEHp164z/AN4PrUqQ3TAF1DFwchkaD6Yb/wDYz4bnMiORZi9T
GaqMgK1qcP8A+ypNRgXQNdFQe/elcZ//AGWeuIQ/ULJqII1ZBu1T/lgv99H1FS4WilQxJ9FP
rin9liX277M6QXrl40GC/wD2D/zAY71lYldFCVoe58ME/sLALDe69JJB7V8Bnh/7qQZW691S
VDaf34J/ZuRBIbwEkp6RU9D0PSuGf3F4TezcrFq09QKg/uxX+9akkHJHdBVrmVBofrjN/tca
vMxEqXQAFPQWoE7VPjXGp/8AZuOX1HJbXi6aDQa0A6CnfB/+zV9ACO61kEBtPj0OC/2pn84M
RXQDM6U1AaV7eWMX+1P1hpIbnKqgZUpX9+Os/wDsVm8DjgmQoXeo/wBcYv8Ae1qcyUD2t4sw
UgOGOYByHgcan/2GeuZTvbXNGemkdm+mD/8AYo+hCO716K6aiqkitR/ljF/ra0Y21zQALpAr
qFak/hhn9rF9TyQTxojVB79PHKmKf3pzzBNaXEcdAwzz0nM4v+9WTAxWdwI9daAkkCvhjN/t
afqnFrJJbAg6XBb0nxwf9brH0lQxW9wQ8Zc5UOk9KHzxr/rVzxId7C5XSGcEkgAjMDDP73Gs
9I7dOjke4StKkYf+1byIl2+dzm9FbM/hjXP/ANixm8yjksZ42Yu2pPytXL92L/8AYovMQR21
xp9L+rqgOWXjjPX96z9YmFjKaB3BNKjr171pjH/at+fB3tZFyDAOV6VrSuGf3osSRbXNQa5K
ClDTxPj5Yv8A9ir6yoms5EYj3CwUn1LmDTwxr/trH1kT/oJdSqrH+Yua9+lcc7/a63OZiBtu
m9vWJNWWS1zGL/tR4lXapH1e5ISQBka/swf9a1JEsu1sgU6yFUASBsuvYYv+lOhFoz1jV6gD
PsVp2w/9BkRCwkqfbkJZgaEnw7Yv+lZh2sJdJZXKkAa88qYvubakXbi711UAA/DFf6WtHksF
UIx+xsmbuPA4p2zTrtykBw5qB6h4muK/0onJjtsMhrrIkY5J/wBcY+9akGdqVXK6qA0Zq9qD
LFO8Zs2ii2pXkJZgyyA1Xp+yvTFeqLz6iO1wl2SpQHoK9x4+OL7NWo22yESLpZmUd6UP/Fca
/wCgSPtEULVDag1Mq50OD7s3mn/p1uXC1PiR3ri+zUiKbbaVYCqnsO2fTB9j6nhs7R4pG6FS
FBOVB+OL7tb4E7bB7irXrmx6VJ6YvuzicWNmEDAnM0an+ODW0EdlaByR0Nanth+zOJo4IdAz
BOYoMzTywazUTbba+0ZFKk6iKk1NB3Iw/YVHBYQF/aZCtQSCex6/ji+x5h2s7aNl0+oK1Ael
QeuDar4lFpFoYEBgasKeX1xfYab9NaaayL9oFD0GrF9x9qZLG2oWYZU6ede2NfczR/obWPU+
isZFCKZj6YPs1ArZxkuuRWn/ANXji+1OJFittGuJC8qihFM616kHBqthPFa5K0ev1faBU5jr
4DGbTKJIrSK41LHQAdgKNTPv3xagztaSLqWMA0yy/wARjUqRs8TLTSPb6dO4718sVoIw2qN0
Y5UAGZ8c6YvtqTCMFlJUMpAFCBnTEoiRI4hqVNCNUUrmDXw74KamBUKzVzrlX+Htg1z+1iMN
HoZhqII9VBl+zxw61LqOFkViR6Q+VD1NMaZmpZZAf/G1CPUcsC+2hJaN1VtNWzB7Z9MGtSJD
InuKWBrmpIyHjng0/BNM0khKKEIpTL/imGKUUFwiSZrUFSWoK5nyxIUV6Gj0FCzVoh6AjzxU
aheaXX6U0kDTXEZqNJzQ1AzPbOlMRGLp2FAP5Z69Aaj/ABxK0T3rqmhlNWYdKVp2GFm8F+tC
MVKBkPoJzqfA/wCuJXyIIp5Y9VTpir1bM6Tl9MZEg/1NMtJUD7Sv+dcLUqX9UDnFVmIqnbrl
liOniviiaT6dI9X8JbxwYgC8dlYItMyGxTxkKTaZFPqXStDSmdfGuFS4JLlgCNCACn5ug8K4
MaiK5nkkkLEKNGS5ipBz+mGMdelHNI86+oRgj1U+78Prhw8wQkkkjILUqKHV1y+mBo3uS6da
NRgNINcWs6ETPVvddavQgjsOn4VwnBe+rJ/5F0g1HkfCuMoDzVbU+gUPpC9BXCKEXDrpBetG
qO2WEwZuT7eTAqxqB3pisATeFSCrABlqB2HhgwemjuFUBmNM/pSvU4sUgZLxo2/lvXSQak9j
iaML2M6jq00NKnMDxONIX9QTUfceqr9hXoMFg1HJdxSkBzTUakGufn+GM4fXJftbiIKtNedS
MMjTg0f7h0whb2H/AOML9vUdev8A0xVL6H/8XP29fy/Xv5YIjJ/5x/5Pw+3G/wAB1yf/AGH3
9T1/7v8AHGEiu/8A8WX7/uPXp+GNNRyx/aOv39+nT82Mh2L9p+7/AD/6YaY55+qfd9y9MEDs
g6P9/wDx44REV19o69e/XDFTWf2j7+n/ANHXGaeUw6r/AMd+2Kip4v8A8ZT7up+uJUbdT9R1
/wAvLEoCf/xJ06dv88FbqBfu/L0/HEyik/8AP2+09Pu6dsb5Y7Pb/wDhPX8vTp0/xxHkEX39
+p+7r+GCtOi8+z8/b6/jiiD+Vvx6f5+WKqgH3nr1H1wcm/B1/wDKv0/DDWYFvsPX8On4+eIV
If8Ax9u329cTX4QXH/2X29T0+3r3woa//i/b7u//AB0w8jv4M32fgenXr/xXFfljlHB0X7en
fGa2nh6r9nf6Ygiuug+37/8AL/Hwww0D/aPu6n7f+OmGuPQx1X7ftPXp1xQ8/KN/z/b1PTrj
Nb6dC/8Ak7/5YIYGT/6fu7/5+WJsS/a3Tr+H4+eGh1Q/m6dPy9fwxmMRBB/+KzdPsP29fu/N
5YY6Rzzf/Zfb3+mN0Ucv/jX/AMfb/g4zBUk3/gT7P/H/AJ4Ry4V//GV/8X+fU9cV+FUdt/53
/wDH9x6fdhidsn3j7PtP3dcETkH/AJ3+3r369O3ljcH5dMP/AI1+3vjNbiY/e/2dfy/8dcYp
rkn+yT7Ovb/jrhgDt/X8vU9emNQc/Lr/ACH7O316/m8sZPSvTqPs+49Pr3wxyPcfav2dT9//
AB+zGoo6Lf8A8p/8f2/k+/8AHGK7OdPtH29D/wBvXvjcZBF/4pOnb6fjgoicdR/4+n4Yzyaa
P75Ps6H6fhjQ5B+R/wDxdW6f8ftxN1EPuX7Ovb6dsVQ+8/2/bjMFRzf+BPs+0f4/4Y1Ahm6R
/wDi6n/j64qnRJ/+Lj7Pt79P+mI1CO//AI/tXp9e+NVmDH/4wP8Ax/cenXpjFaRL/wCSX7Pw
6dcNZh0+xvs6jp064IU1t9kX/h/N93441QTf+If+P7j9v074xGkEv2H/AMX49cJjot+j/wDh
/wDq/wA8B5ckn/hH/i+5fp174j051/8Ax1v/AB9fwxqMOs/d+TqPp1740k159p/8X2jpjEUc
tz/4R9nQf4jEKOH/AMjf+Lp+P/TDRDL/AOWT/wAXXv1/6YoqkH/kP/h6Hr0/HGaQt/8Aaf8A
j6fl+mNRAj/8p+zoemCmOm3/APsv/H369MZCa5+xvs69+nTG4YjP2v8A+P8AL1xNVF2H2dT9
PwwsQpfsb/x/b+TEUsX/AOLj/wAf2j6YzREE3b7P8un+GBnodv8Aafs6j7f88KnwR/8Ax1P/
ABfeemGJM/8A5B9nU9f8sFbRx/8AgP2/f+bAEsv/AJT9vb7enTv54o0jf7h9vTt0xplJaf8A
iP29D16dcSgm6jp9o6/X/imMmmP2n7fu/Hr3wClJ0/8AsvvX69Di5ZvwFf8Ayfl+4fd/njV+
Ght96fZ1/L0/64YjS9Zf+/v9vT/iuBozf+P8nf8A7+mBGtOifZ0H1/DDWRSfZH0+5uv17+eM
lBa/+Q/Z/wDVhSeT/wDGG6fZ36d8Chx9if8Aj6/m64Chk/8AJF/4+jfTp3xqCjfpB0/+rp+G
Jnkj/wCQ9O3Xr1OKNo7j/wA6/b07den+OJmnsfsP29T/AN344b8EQ/8AtOvX/L/DGYPwiT/8
Y7dO/wB2IOi5+w/Q/T/6vLBFCt//AC/l6n7enXvhA26S/b3+7phaiCPqv49Ov4/541+BDR/m
6dD1/wCOmBsbfY/XqeuMwVDN9x69/wDg4U6rb/zR/bgpqJOnb8cX4FPH/wCQdf8A6enXt54v
wx+XRJ/45enf7f8ALyxR0PD/AOKLp26dfw88aZRyfdJ9f/qxQoW+0dOowCjm/wDCenb69P8A
DFFTSf8Amj6dF6f/ALPlhjPKdPuHXv069TgrpUU3T8e3XGeXJC//AJB16DG2vwjH3x/d1PT/
ADwGO2b/AMzfZ0H3fT/imIgl6p07fd92KKHl+yP/AOnr9cTaEfd2+4dfoftxOdGPuP8A/UPX
/tHTAYif/wAg6dT9euNCpIekn/f+bpjKMv3t9v3jr9e3ljTQV6jp/wCQ/Xp2xVUcnUf8HpjL
Ibr7IvqOv+eKKIl/8o6/j9Ma/CrpP2j/ALu/XpjJ5R3n5evTvhFNH96dP/Gfu+n+OCsz5KL7
R16Hr1/HFGif/wAbdfzdPwxIrX83T7R9PxxUpj/5R9R/3dD0wimX7j934YmaBv8AzHp9v5cR
cjdE6/8A0dMQ6HB97fX/AFxVJT0X7vtP29evfE2Fv/L26D64GUg6t93+XX/DGR0iX8v/AH9/
rhrHCZO339+nXv08sUbK0/8ANJ069vpgESP1f/yfZ2641GkH/wDCp93Uff1698FUTN/+MS/d
93b7ftwko/8A8Yf6fjhqiVOknTp/n3wAHdPw6YSU3/i7/b+b69/LFBQL/wCMfd1/D/piphn/
ADYzCc/+X8D16/8ATCPylHUdeg+3Cqa4/wDKvT/PpjICPsPTv9e2KhFL9o6df/qwBLbf+R+v
Xv17dcLfI/8A7E/Vuv1/wwGon+/8v3L06d+vn4YUa4/8Q/H/ALuvbCif/wAX4L/hgSCD/wAb
9Ov4dcAo4+v49umNRkoOj9Oo6fZ0xCHt/wDwy/h/wcUMC/3x/j9/Tp/xTFUI/dJ9e+Ms08v/
AI1/4GMwQrbqPqMbdI6JPvbr0740HHD/AOYdep6dPxw1qJP/ALSP6jp06fmwMdCb83XofrjN
Z5+TxdV69D930/4rgjq5D98v29R0/wA/8sajQn+5f+7t+GKs0n/8ifXt9cUESD/zL933n7un
TtiNSL94+h+7p1/xxE1x9x6//R9v/TBWO/gx/wDFH1+3t9e+KHn4cj/aPu6n6fjhSWH/AMq9
ft/4/DDRz8oZO/8A5P8AL7sRNN0X/wAnX8vTp3/ywI0n3fn+4f8ABxQnPVvv6np9e/ljVSSH
/wATff8Acen+eMihl+z/AO2/+rEYBP8A6vu7fTDWkX/27/8Ak79OnX/HFAkl/wDAn3fd3698
ahjmn+4f+T/6vpgFSR//AIufu6/h074yjXP2H/ydO+ElD9if+T7z1/zxBCOjf+T8Pt64g6D/
APjD/wDl7/4YAgk6D7/tHXr1wxoVv9yff9w6/b+OFHb/AMA+/r+HXAIEf+T8/wBmNGGb/wAM
f/l6/h+GMmif7E/8nX/imBhCPz9f88LRn+0f+T/LGjQHt93/AB44GU//APDnr1H0xEP5F+7o
evT8MVKFfv79O/Tp2/yxCkeg+7/6v8sSiJ+p6/j9O2AiH2/hiB16d8Kgu+FoE/Rf88ZSP9uI
P//Z
------------09910D0C003FD46CC
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------09910D0C003FD46CC--



From xen-devel-bounces@lists.xen.org Thu Feb 20 09:44:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQBP-0005vq-46; Thu, 20 Feb 2014 09:44:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGQBN-0005vl-MI
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 09:44:41 +0000
Received: from [85.158.143.35:28124] by server-3.bemta-4.messagelabs.com id
	0C/75-11539-98EC5035; Thu, 20 Feb 2014 09:44:41 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392889479!7018975!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15334 invoked from network); 20 Feb 2014 09:44:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:44:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="104252709"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 09:44:38 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 04:44:38 -0500
Message-ID: <1392889477.23342.6.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 09:44:37 +0000
In-Reply-To: <osstest-25148-mainreport@xen.org>
References: <osstest-25148-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 09:33 +0000, xen.org wrote:
> flight 25148 xen-4.2-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/25148/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
>  test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
>  test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865

I think this test started before the fix made it through osstest's own
gateway, so this failure is still expected, but the next one should be OK.

But this made me remember a request I've had -- would it be possible
to record the version of osstest used for a given flight somewhere?
Perhaps in the output runvars? I'm not sure where best to hook that in.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:50:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQGr-000683-UC; Thu, 20 Feb 2014 09:50:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1WGQGq-00067y-Ou
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 09:50:20 +0000
Received: from [85.158.139.211:13599] by server-2.bemta-5.messagelabs.com id
	A4/0F-23037-CDFC5035; Thu, 20 Feb 2014 09:50:20 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392889817!5131574!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30658 invoked from network); 20 Feb 2014 09:50:19 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 20 Feb 2014 09:50:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1K9oEN2027651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Feb 2014 09:50:14 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1K9oDO7011896
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 20 Feb 2014 09:50:14 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1K9oDj0011885; Thu, 20 Feb 2014 09:50:13 GMT
Received: from [10.191.14.241] (/10.191.14.241)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 20 Feb 2014 01:50:13 -0800
Message-ID: <5305CFC6.3080502@oracle.com>
Date: Thu, 20 Feb 2014 17:49:58 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: Sander Eikelenboom <linux@eikelenboom.it>
References: <1772884781.20140218222513@eikelenboom.it>
In-Reply-To: <1772884781.20140218222513@eikelenboom.it>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/2/19 5:25, Sander Eikelenboom wrote:
> Hi All,
>
> I'm currently having some network troubles with Xen and recent linux kernels.
>
> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>    I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>
>    In the guest:
>    [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>    [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>    [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>    [57539.859610] net eth0: Need more slots
>    [58157.675939] net eth0: Need more slots
>    [58725.344712] net eth0: Need more slots
>    [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>    [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>    [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>    [61815.849225] net eth0: Need more slots

This issue is familiar... and I thought it had been fixed.
From my original analysis of a similar issue I hit before, the root cause
is that netback still creates a response when the ring is full. I remember
a larger MTU could trigger this issue before; what is your MTU size?

Thanks
Annie
>
>    Xen reports:
>    (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>    (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>    (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>    (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>    (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>    (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>    (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>    (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>    (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>    (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>    (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>    (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>    (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>    (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>    (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>    (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>    (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>    (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>    (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>    (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>    (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>
>
>
> Another issue with networking is when running both dom0 and domU's with a 3.14-rc3 kernel:
>    - I can ping the guests from dom0
>    - I can ping dom0 from the guests
>    - But I can't ssh in or reach anything over HTTP
>    - I don't see any relevant error messages ...
>    - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>      (that previously worked fine)
>
> --
>
> Sander
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 09:55:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 09:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQLj-0006LR-Nx; Thu, 20 Feb 2014 09:55:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGQLP-0006LB-TG
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 09:55:18 +0000
Received: from [85.158.137.68:13424] by server-12.bemta-3.messagelabs.com id
	50/78-01674-7F0D5035; Thu, 20 Feb 2014 09:55:03 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392890101!3062584!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31705 invoked from network); 20 Feb 2014 09:55:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 09:55:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="102560560"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 09:55:00 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 04:54:59 -0500
Message-ID: <1392890099.23342.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 09:54:59 +0000
In-Reply-To: <1392889477.23342.6.camel@kazak.uk.xensource.com>
References: <osstest-25148-mainreport@xen.org>
	<1392889477.23342.6.camel@kazak.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 09:44 +0000, Ian Campbell wrote:
> On Thu, 2014-02-20 at 09:33 +0000, xen.org wrote:
> > flight 25148 xen-4.2-testing real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/25148/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  test-amd64-i386-pv            9 guest-start               fail REGR. vs. 24865
> >  test-amd64-amd64-pv           9 guest-start               fail REGR. vs. 24865
> >  test-i386-i386-pv             9 guest-start               fail REGR. vs. 24865
> 
> I think this test started before the fix made it through osstest's own
> gateway, so this failure is still expected, but the next one should be OK.
> 
> But this made me remember a request I've had -- would it be possible
> to record the version of osstest used for a given flight somewhere?
> Perhaps in the output runvars? I'm not sure where best to hook that in.

Doh, I always forget the flight-NNN branches in osstest.git...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 10:13:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 10:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGQd4-0006uZ-48; Thu, 20 Feb 2014 10:13:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGQd2-0006uU-F2
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 10:13:16 +0000
Received: from [85.158.139.211:62774] by server-1.bemta-5.messagelabs.com id
	46/AC-12859-B35D5035; Thu, 20 Feb 2014 10:13:15 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1392891193!5153363!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6384 invoked from network); 20 Feb 2014 10:13:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 10:13:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,511,1389744000"; d="scan'208";a="102564414"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 10:13:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 05:13:12 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGQcx-0007Vj-Oe;
	Thu, 20 Feb 2014 10:13:11 +0000
Date: Thu, 20 Feb 2014 10:13:11 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140220101311.GA24574@zion.uk.xensource.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392743214.23084.38.camel@kazak.uk.xensource.com>
	<5303C44D.4070500@citrix.com>
	<1392804319.23084.109.camel@kazak.uk.xensource.com>
	<53050BF5.1060009@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <53050BF5.1060009@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 07:54:29PM +0000, Zoltan Kiss wrote:
> On 19/02/14 10:05, Ian Campbell wrote:
> >On Tue, 2014-02-18 at 20:36 +0000, Zoltan Kiss wrote:
> >>On 18/02/14 17:06, Ian Campbell wrote:
> >>>On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> >>>>This patch contains the new definitions necessary for grant mapping.
> >>>
> >>>Is this just adding a bunch of (currently) unused functions? That's a
> >>>slightly odd way to structure a series. They don't seem to be "generic
> >>>helpers" or anything so it would be more normal to introduce these as
> >>>they get used -- it's a bit hard to review them out of context.
> >>I've created two patches because they are quite huge even now,
> >>separately. Together they would be a ~500 line change. That was the best
> >>I could figure out keeping in mind that bisect should work. But as I
> >>wrote in the first email, I welcome other suggestions. If you and Wei
> >>prefer these two patches as one big one, I'll merge them in the next version.
> >
> >I suppose it is hard to split a change like this up in a sensible way,
> >but it is rather hard to review something which is split in two parts
> >sensibly.
> >
> >Is the combined patch too large to fit on the lists?
> Well, it's ca. 30 kb, ~500 lines changed. I guess it's possible.
> It's up to you and Wei; if you would like them to be merged, I can
> do that.
> 

As I said before, my bottom line is "don't break bisection". Do whatever
you want to. :-)

Wei.
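[Editor's note: Wei's bottom line — that every commit in the series must build on its own so bisection never lands on a broken tree — can be checked mechanically before posting. A minimal sketch; the throwaway repo, file names, and the `test -s` stand-in for a real build command are all hypothetical:]

```shell
#!/bin/sh
# Sketch: run a check at every commit in a series so "git bisect"
# never lands on a broken tree. The scratch repo, files and the
# check command below are hypothetical stand-ins for a real build.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git config user.email dev@example.com
git config user.name dev
echo 'int main(void) { return 0; }' > main.c
git add main.c && git commit -qm 'patch 1: add main'
printf '/* helper stub */\n' >> main.c
git commit -qam 'patch 2: extend main'
# --exec runs the check after replaying each commit; the rebase
# stops at the first commit whose check fails.
git rebase --quiet --exec 'test -s main.c' --root && echo BISECT-SAFE
```

In a real series one would replace `test -s main.c` with the build command (e.g. the kernel's `make -j"$(nproc)"`) and `--root` with the base commit of the series.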

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 10:51:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 10:51:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRE3-00080i-Nd; Thu, 20 Feb 2014 10:51:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WGRE1-00080c-S9
	for Xen-devel@lists.xen.org; Thu, 20 Feb 2014 10:51:29 +0000
Received: from [85.158.139.211:63947] by server-12.bemta-5.messagelabs.com id
	C7/77-15415-03ED5035; Thu, 20 Feb 2014 10:51:28 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392893486!5129037!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19057 invoked from network); 20 Feb 2014 10:51:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 10:51:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102572093"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 10:51:26 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 05:51:25 -0500
Message-ID: <5305DE2C.7080502@citrix.com>
Date: Thu, 20 Feb 2014 10:51:24 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "stable@vger.kernel.org" <stable@vger.kernel.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen fixes for 3.10.y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two changes fix important bugs with 32-bit Xen PV guests.

0160676bba69523e8b0ac83f306cce7d342ed7c8 (xen/p2m: check MFN is in range
before using the m2p table)

7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 (xen: Fix possible user space
selector corruption)

Please apply to 3.10.y.

Thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 10:59:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 10:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRL8-0008Ge-Ky; Thu, 20 Feb 2014 10:58:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGRL6-0008GX-Ml
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 10:58:48 +0000
Received: from [193.109.254.147:52424] by server-7.bemta-14.messagelabs.com id
	65/75-23424-8EFD5035; Thu, 20 Feb 2014 10:58:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392893926!1892548!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9526 invoked from network); 20 Feb 2014 10:58:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 10:58:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102573523"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 10:58:45 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 05:58:45 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGRL3-0000Du-1K;
	Thu, 20 Feb 2014 10:58:45 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGRL2-0002EG-LH;
	Thu, 20 Feb 2014 10:58:44 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25149-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 10:58:44 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25149: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25149 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25149/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 10:59:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 10:59:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRLX-0008Ik-1n; Thu, 20 Feb 2014 10:59:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGRLV-0008IR-R3
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 10:59:14 +0000
Received: from [85.158.137.68:7699] by server-12.bemta-3.messagelabs.com id
	7E/69-01674-FFFD5035; Thu, 20 Feb 2014 10:59:11 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392893948!1857664!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25036 invoked from network); 20 Feb 2014 10:59:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 10:59:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="104266987"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 10:58:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 05:58:08 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGRKS-000872-4P; Thu, 20 Feb 2014 10:58:08 +0000
Message-ID: <5305DFBF.1030909@citrix.com>
Date: Thu, 20 Feb 2014 10:58:07 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>	
	<1392807985.23084.132.camel@kazak.uk.xensource.com>	
	<530499D7.70902@citrix.com>
	<1392811341.29739.24.camel@kazak.uk.xensource.com>
In-Reply-To: <1392811341.29739.24.camel@kazak.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, --cc=paul.durrant@citrix.com,
	wei.liu2@citrix.com, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
	front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 12:02, Ian Campbell wrote:
> On Wed, 2014-02-19 at 11:47 +0000, Andrew Bennieston wrote:
>> On 19/02/14 11:06, Ian Campbell wrote:
>>> On Mon, 2014-02-17 at 18:01 +0000, Andrew J. Bennieston wrote:
>>>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>>>
>>>> Document the multi-queue feature in terms of XenStore keys to be written
>>>> by the backend and by the frontend.
>>>>
>>>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>>>> ---
>>>>    xen/include/public/io/netif.h |   21 +++++++++++++++++++++
>>>>    1 file changed, 21 insertions(+)
>>>>
>>>> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
>>>> index d7fb771..90be2fc 100644
>>>> --- a/xen/include/public/io/netif.h
>>>> +++ b/xen/include/public/io/netif.h
>>>> @@ -69,6 +69,27 @@
>>>>     */
>>>>
>>>>    /*
>>>> + * Multiple transmit and receive queues:
>>>> + * If supported, the backend will write "multi-queue-max-queues" and set its
>>>> + * value to the maximum supported number of queues.
>>>> + * Frontends that are aware of this feature and wish to use it can write the
>>>> + * key "multi-queue-num-queues", set to the number they wish to use.
>>>> + *
>>>> + * Queues replicate the shared rings and event channels, and
>>>> + * "feature-split-event-channels" is required when using multiple queues.
>>>> + *
>>>> + * For frontends requesting just one queue, the usual event-channel and
>>>> + * ring-ref keys are written as before, simplifying the backend processing
>>>> + * to avoid distinguishing between a frontend that doesn't understand the
>>>> + * multi-queue feature, and one that does, but requested only one queue.
>>>> + *
>>>> + * Frontends requesting two or more queues must not write the toplevel
>>>> + * event-channel and ring-ref keys, instead writing them under sub-keys having
>>>> + * the name "queue-N" where N is the integer ID of the queue for which those
>>>> + * keys belong. Queues are indexed from zero.
>>>
>>> If "feature-split-event-channels" is required then I think what should
>>> be written is queue-N/event-channel-{tx,rx} and
>>> queue-N/{tx,rx}-ring-ref, rather than queue-N/{event-channel,ring-ref}
>>> as the final paragraph sort of implies?
>> I can change the wording to make this more clear.
>
> Thanks.
>
>>>
>>> (what a shame we have event-channel-DIR and DIR-ring-ref, oh well!)
>>>
>>> Is it required to have the same number of RX and TX queues?
>> Strictly speaking, no. But the implementation assumes this to be the
>> case. Since the code already sets up one pair, simply looping over this
>> sufficient times to create N pairs was a fairly sane approach to this.
>> In practice, if you have an asymmetry between RX and TX queues, you will
>> end up hitting a bottleneck sooner in one direction than the other,
>> which seems impractical.
>
> OK. So we should either mandate that there will always be the same
> number, if that is going to be the case, or we should design the
> xenstore layout to be extensible if not.
>
> It sounds reasonable to me to mandate they be the same, but I don't
> really know.
>
>>> Are there any other properties/behaviours which should be documented,
>>> e.g. relating to the selection of which queue to use for a given frame
>>> (on either TX or RX)? If not and it is up to the relevant end to do what
>>> it wants then I think it would be useful to say so.
>> There are no other requirements. The current implementation will
>> transmit anything it cannot hash sensibly on queue 0, but this is an
>> arbitrary choice (albeit a sensible one, since queue 0 should always
>> exist). I'll document this.
>
> What is this undocumented/unnegotiated "hash" you speak of ;-)
Each end must decide how to split packets across queues. This decision 
is made by the transmitting guest, and the receiving guest will simply 
take packets on any queue and deal with them as before. This is why it 
is unnegotiated; the two ends could be using completely different 
mechanisms for this, or could always choose a single queue, with no 
adverse effect on anything except performance (i.e. if they do this 
badly, they will not benefit from the full value of multiple queues).
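To illustrate the transmit-side behaviour described above, here is a minimal sketch (an editor's illustration, not the actual netback/netfront code): hash the flow tuple when one is available, otherwise fall back to queue 0. The packet representation and the CRC32 stand-in for a real hash (e.g. Toeplitz) are assumptions.

```python
import zlib

def select_tx_queue(packet, num_queues):
    """Pick a TX queue for a packet; sketch of the unnegotiated,
    transmitter-local policy discussed above (not real Xen code)."""
    # Hypothetical representation: a flow tuple such as
    # (src_ip, dst_ip, src_port, dst_port), or None if un-hashable.
    flow = packet.get("flow")
    if flow is None:
        # Anything that cannot be hashed sensibly goes on queue 0,
        # an arbitrary but sensible choice since queue 0 always exists.
        return 0
    # Stand-in hash; a real implementation might use Toeplitz.
    return zlib.crc32(repr(flow).encode()) % num_queues
```

The key point is that the receiving end needs no knowledge of this policy; it simply services whichever queues packets arrive on.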

These properties should probably be documented, though. I'll add a 
paragraph to the effect of the one above. A complication arises if we 
wish to support Windows frontends in a logical manner, since Windows 
expects to be able to control the mapping of packets to RX queues. This 
means that for a Windows guest, we may need to write additional Xenstore 
keys to specify these parameters to the backend. However, that is not 
considered here because

a] the default behaviour of decoupling the TX and RX queue mappings is 
not affected by this; a Windows frontend would specifically request an
algorithm, while other frontends may remain silent about this.

b] Windows also wants the hash value (used to select the queue, via the 
Toeplitz hash) to be transmitted along with each packet. This would 
require either an extension to the ring protocol to pass this value in 
an extra slot, or some other mechanism - either of which would require 
separate negotiation anyway.

For these reasons, the default behaviour is to _not_ negotiate any 
hashing between the back- and frontends, leaving each free to do the 
most appropriate thing. I'd like to avoid writing unnecessary stuff into 
Xenstore, preferring to have a sensible default behaviour that can be 
overridden for the few specific cases that require something different.

As such, I think the docs should, for now, say something along the lines of:
   Mapping of packets to queues is considered to be a function of the
   transmitting system (backend or frontend) and is not negotiated
   between the two. Guests are free to transmit packets on any queue
   they choose, provided it has been set up correctly.
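For reference, a sketch of the XenStore key layout under discussion (an editor's illustration, not part of the patch): the top-level key names come from the patch text, while the per-queue split-event-channel names follow Ian's suggestion and are assumptions; the placeholder values are hypothetical.

```python
def frontend_keys(num_queues):
    """Return the XenStore keys (relative to the vif directory) a
    frontend would write for the requested number of queues."""
    keys = {"multi-queue-num-queues": str(num_queues)}
    ring_keys = ("tx-ring-ref", "rx-ring-ref",
                 "event-channel-tx", "event-channel-rx")
    if num_queues == 1:
        # Single queue: the usual top-level keys, written as before.
        for k in ring_keys:
            keys[k] = "<value>"
    else:
        # Two or more queues: no top-level ring/event keys; everything
        # moves under queue-N/ sub-keys, indexed from zero.
        for n in range(num_queues):
            for k in ring_keys:
                keys["queue-%d/%s" % (n, k)] = "<value>"
    return keys
```

A backend can thus treat a one-queue frontend identically to one that does not understand the feature at all.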

Andrew.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:01:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRNt-0000BO-Rn; Thu, 20 Feb 2014 11:01:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGRNs-0000BF-C8
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:01:40 +0000
Received: from [85.158.137.68:59711] by server-4.bemta-3.messagelabs.com id
	D3/51-04858-390E5035; Thu, 20 Feb 2014 11:01:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392894098!1857907!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30710 invoked from network); 20 Feb 2014 11:01:38 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:01:38 -0000
Received: by mail-we0-f176.google.com with SMTP id q58so1305826wes.21
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 03:01:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=lG6CSSuWsvUw7WlFl3kPQ5ywDuO6UX1QdBIGY6cv2ZY=;
	b=B1qkUwx3Us8t2ryoJpO5f54gzgFbjx8vFHmzQb3O4lLlWDcyqfz1iMD9YOTzxhz4WP
	hRe5tEu+ol7D/StPd+Eh6ZZnj/Eywair/g1YPkB7yNJCHxtiBKlJdX9JBY+m/Klek8tT
	Mdj/vAP3JXvjKQM4O2Y829nbPStYXoN5+fWtspgJzdOoDSBm08nnU8XhcV7Ny64v8KWH
	8gUaDPIfCpoqkf9UYQqNlk9y3u/CdeNYyE7OiE8zVXKWa95QTVXfiYxyIsnt8Pd+AqUl
	Nz4Q7HN0Afn7IAlUx5nq9CSdjUA51JZBHOdkeW7QAkY8/CG0qDPdDibv655ItxvJNEwG
	FYSg==
X-Gm-Message-State: ALoCoQmnXnEOCEddzAQXn3p0vjW+xF7pnT6xLpcry+XM0vb+0dZgDoKQhr+E/0TcN6hMk34XW67/
X-Received: by 10.180.87.232 with SMTP id bb8mr6475157wib.48.1392894098194;
	Thu, 20 Feb 2014 03:01:38 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id h9sm8000788wjz.16.2014.02.20.03.01.36
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 03:01:37 -0800 (PST)
Message-ID: <5305E090.4030602@linaro.org>
Date: Thu, 20 Feb 2014 11:01:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>	
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>	
	<1392808847.23084.138.camel@kazak.uk.xensource.com>	
	<5304F058.6090503@linaro.org>
	<1392887074.22494.7.camel@kazak.uk.xensource.com>
In-Reply-To: <1392887074.22494.7.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 20/02/14 09:04, Ian Campbell wrote:
> On Wed, 2014-02-19 at 17:56 +0000, Julien Grall wrote:
>> Hi Ian,
>>
>> On 02/19/2014 11:20 AM, Ian Campbell wrote:
>>> On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
>>>> Now that the console supports earlyprintk, we can get a rid of
>>>> early_printk call.
>>>>
>>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>>
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> Thanks, due to some conflicts I have rebased this patch on top my patch
>> "xen/serial: Don't leak memory mapping if the serial initialization has
>> failed".
>>
>> I will resend this series later.
>
> Thanks.
>
>>> Now all we need is a way to make it a runtime option :-)
>>
>> I let you write a device tree parser in assembly ;).
>
> ;-)
>
> I was actually thinking more along the lines of a .word at a defined
> offset which you could hex edit to a specific value to activate a
> particular flavour of early printk handling. That would be sufficient
> e.g. for osstest to activate the appropriate stuff for the specific
> platform.

I don't see a useful use case for such an early printk implementation 
in Xen. When a board is fully supported, failing at an early stage (e.g. 
before the console is initialized) is very unlikely, at least if you 
don't play with memory.

I see osstest as a more generic test suite. If it fails at an early 
stage, the developer should investigate himself (most of the time early 
printk is not useful for finding the real bug), and that would mean 
recompiling Xen.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:04:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRQl-0000M6-Dn; Thu, 20 Feb 2014 11:04:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGRQk-0000M1-T3
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:04:39 +0000
Received: from [85.158.139.211:64623] by server-16.bemta-5.messagelabs.com id
	02/AD-05060-641E5035; Thu, 20 Feb 2014 11:04:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392894275!5149616!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25692 invoked from network); 20 Feb 2014 11:04:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:04:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102574657"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 11:04:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 06:04:31 -0500
Message-ID: <1392894269.23342.21.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Date: Thu, 20 Feb 2014 11:04:29 +0000
In-Reply-To: <5305DFBF.1030909@citrix.com>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
	<1392807985.23084.132.camel@kazak.uk.xensource.com>
	<530499D7.70902@citrix.com>
	<1392811341.29739.24.camel@kazak.uk.xensource.com>
	<5305DFBF.1030909@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 10:58 +0000, Andrew Bennieston wrote:
> As such, I think the docs should, for now, say something along the lines of:
>    Mapping of packets to queues is considered to be a function of the
>    transmitting system (backend or frontend) and is not negotiated
>    between the two. Guests are free to transmit packets on any queue
>    they choose, provided it has been set up correctly.

That's the sort of thing I was looking for, thanks.

(fixing Paul's CC at last...)

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 20 11:05:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:05:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRRe-0000R1-Sd; Thu, 20 Feb 2014 11:05:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGRRc-0000Ql-Au
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:05:32 +0000
Received: from [85.158.137.68:41965] by server-14.bemta-3.messagelabs.com id
	04/9E-08196-B71E5035; Thu, 20 Feb 2014 11:05:31 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392894330!1857814!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13312 invoked from network); 20 Feb 2014 11:05:30 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:05:30 -0000
Received: by mail-we0-f174.google.com with SMTP id w61so1351083wes.33
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 03:05:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=0pWCRkqCmVUwY/30ElkSgBv5xI05Af1Y1szVo+9u+dc=;
	b=FuJZD7vC6CuipqM0YhhFIbEn7QfS/QAYVRVJoQNI9fACXPInVCQYMoPjSg+KREBrYD
	QjLhRssBXvuZeXa9Yd7b/Xpf2mAZMsLArqTX4V11bRtr8KjZj9prSmu24l5ACZG0D/k9
	xS9JtemLwfyFiNrCji8gmYsYyXcvpgNXz8+QW8BH7tq8NG7cmhyEdus5bNRlQBbraCyc
	58ZlEI6TGKjO5K1ZTWDT16uOAn57zI1ZgzkFaRO12BP7vlr0MIf2BYqR4LH/rJ5JjgZ5
	m7heDNCmgXxk/LwE7WLdoHvKpui4iULVlC4dBB2qcj6qy5mhQ61z/c/UQuAwDNdR4ejM
	t3kQ==
X-Gm-Message-State: ALoCoQkkcOYzNdoCKPt/PplpIEwS6MGUYEL6CwBWc/DjbCwvUtX1iBcNAx0hx59BAnEz8LzXqxn6
X-Received: by 10.180.12.43 with SMTP id v11mr6274831wib.33.1392894330200;
	Thu, 20 Feb 2014 03:05:30 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id de3sm8014550wjb.8.2014.02.20.03.05.28
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 03:05:29 -0800 (PST)
Message-ID: <5305E177.7080800@linaro.org>
Date: Thu, 20 Feb 2014 11:05:27 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1391794991-5919-1-git-send-email-julien.grall@linaro.org>
	<1391794991-5919-13-git-send-email-julien.grall@linaro.org>
	<1392819327.29739.85.camel@kazak.uk.xensource.com>
	<5304E81A.3050703@linaro.org>
	<1392830820.29739.97.camel@kazak.uk.xensource.com>
	<5304EA5B.50506@linaro.org>
	<5305C6B9020000780011DEC9@nat28.tlf.novell.com>
In-Reply-To: <5305C6B9020000780011DEC9@nat28.tlf.novell.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, patches@linaro.org, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [RFC for-4.5 12/12] drivers/passthrough: arm: Add
 support for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello Jan,

On 20/02/14 08:11, Jan Beulich wrote:
>>>> On 19.02.14 at 18:31, Julien Grall <julien.grall@linaro.org> wrote:
>> On 02/19/2014 05:27 PM, Ian Campbell wrote:
>>> On Wed, 2014-02-19 at 17:21 +0000, Julien Grall wrote:
>>>>>> +#define SZ_4K                               (1 << 12)
>>>>>> +#define SZ_64K                              (1 << 16)
>>>>>> +
>>>>>> +/* Driver options */
>>>>>> +#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
>>>>>
>>>>> Is this just retained to reduce the deviation from the Linux driver?
>>>>> It's no use to us I think. (I suppose that goes for a bunch of other
>>>>> stuff, eg.. the PGSZ_4K stuff, which I will avoid commenting on
>>>>> further).
>>>>
>>>> SZ_4K and SZ_64K is used later in the code.
>>>
>>> But they are actually useless to us aren't they?
>>
>> As we only use 4K pages in Xen, yes. I kept it because there are a few
>> places where the SMMU configuration is not the same.
>>
>> If we want to support 64K pages in the future, it will be harder to add
>> support if this stuff is removed.
>>
>> The constant was added by myself as it doesn't exist in Xen. I can
>> move it into the generic code.
>
> But if possible I'd like to encourage you using PAGE_SIZE_4K to
> match up with what we already got in the IOMMU code. Unless
> deviation from the sources you're cloning is deemed worse than
> consistency within our code.

I think this change is fine. It doesn't fundamentally change the code.

I will update the patch for the next version.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 20 11:05:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRS1-0000Ua-9w; Thu, 20 Feb 2014 11:05:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGRS0-0000UG-0Z
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:05:56 +0000
Received: from [193.109.254.147:16474] by server-13.bemta-14.messagelabs.com
	id D0/6E-01226-391E5035; Thu, 20 Feb 2014 11:05:55 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392894353!5645063!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2760 invoked from network); 20 Feb 2014 11:05:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:05:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="104268455"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 11:05:53 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 06:05:52 -0500
Message-ID: <1392894351.23342.22.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 20 Feb 2014 11:05:51 +0000
In-Reply-To: <5305E090.4030602@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
	<1392808847.23084.138.camel@kazak.uk.xensource.com>
	<5304F058.6090503@linaro.org>
	<1392887074.22494.7.camel@kazak.uk.xensource.com>
	<5305E090.4030602@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 11:01 +0000, Julien Grall wrote:
> 
> On 20/02/14 09:04, Ian Campbell wrote:
> > I was actually thinking more along the lines of a .word at a defined
> > offset which you could hex edit to a specific value to activate a
> > particular flavour of early printk handling. That would be sufficient
> > e.g. for osstest to activate the appropriate stuff for the specific
> > platform.
> 
> I don't see a useful use case for such an early printk implementation 
> in Xen. When the board is fully supported, failing at an early stage 
> (e.g. before the console is initialized) is very unlikely. At least if 
> you don't play with memory.

a) there are boards which aren't fully supported, getting some debug out
of a distro package might be useful

b) even for boards which are fully supported there may still be bugs
which only appear under particular circumstances.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 20 11:08:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRU9-0000iB-Rn; Thu, 20 Feb 2014 11:08:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGRU8-0000ht-Id
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 11:08:08 +0000
Received: from [85.158.139.211:58281] by server-1.bemta-5.messagelabs.com id
	77/0D-12859-712E5035; Thu, 20 Feb 2014 11:08:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392894485!5103203!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29794 invoked from network); 20 Feb 2014 11:08:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:08:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102575253"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 11:08:05 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 06:08:04 -0500
Message-ID: <1392894483.23342.24.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 11:08:03 +0000
In-Reply-To: <osstest-25149-mainreport@xen.org>
References: <osstest-25149-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 25149: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 10:58 +0000, xen.org wrote:
> flight 25149 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/25149/
> 
> Failures :-/ but no regressions.
> 
> Tests which did not succeed, but are not blocking:
> [...]

>  test-armhf-armhf-xl          10 migrate-support-check        fail   never pass

>  build-armhf                                                  pass    
>  build-armhf-pvops                                            pass    
>  test-armhf-armhf-xl                                          pass    

This is our first ever completely successful osstest on armhf. W00t!

The migrate-support-check is an expected/deliberate failure.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 20 11:14:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRaA-0000yM-ND; Thu, 20 Feb 2014 11:14:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGRa9-0000yB-4T
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:14:21 +0000
Received: from [85.158.139.211:48826] by server-2.bemta-5.messagelabs.com id
	0C/DA-23037-C83E5035; Thu, 20 Feb 2014 11:14:20 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392894859!5154949!1
X-Originating-IP: [74.125.82.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23653 invoked from network); 20 Feb 2014 11:14:19 -0000
Received: from mail-we0-f174.google.com (HELO mail-we0-f174.google.com)
	(74.125.82.174)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:14:19 -0000
Received: by mail-we0-f174.google.com with SMTP id w61so1328106wes.19
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 03:14:19 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=v88soz5L018oxJiMmrxOD6vWsKy+l7FQAgFDZ+X4DCo=;
	b=nN8QewEgZRGZjTgb10srrRRELdJRl2HJJMEH/0BL/YhJi5w3rCudvSwaZUlA6aSByp
	nJk/ni2Hz9XKxgZNeiqJiZlHIioXeEr1/E+g3vrzp/gsOhh8tuvgfRElqp2+l3QuGQY6
	S6wOig2/+7LAxWfAy6pX/lBNn/ToyTTJZgemP0Ptkg64B0hthfqI8eOWSu5pb7jiv9HD
	TUS7+gIyPRAYdwdULwQme73aLe7KLspZIFHAS58SG2KvKWEbdehHyjoh4H4SZD9thif4
	ILB4rOeiKWqT+K02pJmzKYNy8gkNm/Dc+ZRF1hke0Muw2RaDRprj3Wc56oSjJRSJEW0b
	i7oA==
X-Gm-Message-State: ALoCoQnUDeVyAR/n6Z84R4KUdMh4XmSS6bctI7saNrdy2zHIaPjXYC3nx6CM1w/k8de4G77EZFRk
X-Received: by 10.194.63.228 with SMTP id j4mr1491263wjs.34.1392894858687;
	Thu, 20 Feb 2014 03:14:18 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id xt1sm8053217wjb.17.2014.02.20.03.14.17
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 03:14:17 -0800 (PST)
Message-ID: <5305E388.2030609@linaro.org>
Date: Thu, 20 Feb 2014 11:14:16 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>		
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>		
	<1392808847.23084.138.camel@kazak.uk.xensource.com>		
	<5304F058.6090503@linaro.org>	
	<1392887074.22494.7.camel@kazak.uk.xensource.com>	
	<5305E090.4030602@linaro.org>
	<1392894351.23342.22.camel@kazak.uk.xensource.com>
In-Reply-To: <1392894351.23342.22.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 20/02/14 11:05, Ian Campbell wrote:
> On Thu, 2014-02-20 at 11:01 +0000, Julien Grall wrote:
>>
>> On 20/02/14 09:04, Ian Campbell wrote:
>>> I was actually thinking more along the lines of a .word at a defined
>>> offset which you could hex edit to a specific value to activate a
>>> particular flavour of early printk handling. That would be sufficient
>>> e.g. for osstest to activate the appropriate stuff for the specific
>>> platform.
>>
>> I don't see a useful use case for such an early printk implementation
>> in Xen. When the board is fully supported, failing at an early stage (e.g.
>> before the console is initialized) is very unlikely. At least if you don't
>> play with memory.
>
> a) there are boards which aren't fully supported, getting some debug out
> of a distro package might be useful

A few months ago we decided to allow early printk only when Xen is 
compiled with debug enabled. It seems a big mistake to ship a distro with 
debug enabled :).

> b) even for boards which are fully supported there may still be bugs
> which only appear under particular circumstances.

I understand this use case. If I understand your previous mail, the 
solution would be "hex editing the Xen binary manually to set the early 
printk", right? If so, you are assuming that the distro (or anything 
else) is providing the zImage. Otherwise the developer has to:
    - unpack the zImage from the uImage
    - edit the zImage
    - recreate the uImage

That seems very complicated compared to recompiling Xen...

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:16:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRbr-00016C-9g; Thu, 20 Feb 2014 11:16:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGRbq-000160-GL
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 11:16:06 +0000
Received: from [193.109.254.147:54791] by server-5.bemta-14.messagelabs.com id
	FC/D0-16688-5F3E5035; Thu, 20 Feb 2014 11:16:05 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392894959!5647771!1
X-Originating-IP: [74.125.82.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17798 invoked from network); 20 Feb 2014 11:15:59 -0000
Received: from mail-wg0-f50.google.com (HELO mail-wg0-f50.google.com)
	(74.125.82.50)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:15:59 -0000
Received: by mail-wg0-f50.google.com with SMTP id z12so1322389wgg.17
	for <xen-devel@lists.xensource.com>;
	Thu, 20 Feb 2014 03:15:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Zx7qNXSTg6F2vUaEQHDUjVjwqtPM8F3Gv3Lr3RVKI4g=;
	b=YJS88edAFdOSPUAPwfg1YLBJrwg6Y+pXdOMnDrARKfS1e/7weYXr5xaA78cgqa7xT8
	tBeTAPsdoNCLkUZD6H21FHIRjWbGgoyox+7ndULYG7fhRGLaP/UNnuL5BvesuYCTvQ8Z
	2//9MYnYsYWHkbjsHN379YKDlJpbeb8ZMdXMbwwWSBD+Xbl+IezlLBZhG8lAm0afnIAQ
	GiFBLeVMKwaKUBmVJad9kQKg4+mBMfr3HQUFvm3fFX6mkFR3da9yJ6G1KujH4XDwNQVk
	Jk7CSYWdD3WN6096S9StTe4NP4QLCJkfD+iYmoE1fMhowj5+Tbcs2g6D/7pVo9y/u8In
	QFKA==
X-Gm-Message-State: ALoCoQm4IBqCq3/PKn1Z/aLHLra4HhMyO5NjxOsIdx8yB/aoEjLRn3u/PxONaNd5jNwhOAq3yEp9
X-Received: by 10.180.189.139 with SMTP id gi11mr6365015wic.53.1392894958529; 
	Thu, 20 Feb 2014 03:15:58 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id di9sm10401052wid.6.2014.02.20.03.15.57
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 03:15:57 -0800 (PST)
Message-ID: <5305E3EC.4000308@linaro.org>
Date: Thu, 20 Feb 2014 11:15:56 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, 
	"xen.org" <ian.jackson@eu.citrix.com>
References: <osstest-25149-mainreport@xen.org>
	<1392894483.23342.24.camel@kazak.uk.xensource.com>
In-Reply-To: <1392894483.23342.24.camel@kazak.uk.xensource.com>
Cc: Julien Grall <julien.grall@citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@citrix.com>
Subject: Re: [Xen-devel] [xen-unstable test] 25149: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 20/02/14 11:08, Ian Campbell wrote:
> On Thu, 2014-02-20 at 10:58 +0000, xen.org wrote:
>> flight 25149 xen-unstable real [real]
>> http://www.chiark.greenend.org.uk/~xensrcts/logs/25149/
>>
>> Failures :-/ but no regressions.
>>
>> Tests which did not succeed, but are not blocking:
>> [...]
>
>>   test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
>
>>   build-armhf                                                  pass
>>   build-armhf-pvops                                            pass
>>   test-armhf-armhf-xl                                          pass
>
> This is our first ever completely successful osstest on armhf. W00t!

Nice job! :)

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:18:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:18:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGReT-0001Gb-Sa; Thu, 20 Feb 2014 11:18:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGReS-0001GS-1j
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 11:18:48 +0000
Received: from [193.109.254.147:20824] by server-8.bemta-14.messagelabs.com id
	98/C6-18529-794E5035; Thu, 20 Feb 2014 11:18:47 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392895124!5662625!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7278 invoked from network); 20 Feb 2014 11:18:45 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Feb 2014 11:18:45 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:50083 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGRdL-0003GV-NQ; Thu, 20 Feb 2014 12:17:39 +0100
Date: Thu, 20 Feb 2014 12:18:42 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <587238484.20140220121842@eikelenboom.it>
To: annie li <annie.li@oracle.com>
In-Reply-To: <5305CFC6.3080502@oracle.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
MIME-Version: 1.0
Cc: Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, February 20, 2014, 10:49:58 AM, you wrote:


> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>> Hi All,
>>
>> I'm currently having some network troubles with Xen and recent linux kernels.
>>
>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>    I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>
>>    In the guest:
>>    [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>    [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>    [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>    [57539.859610] net eth0: Need more slots
>>    [58157.675939] net eth0: Need more slots
>>    [58725.344712] net eth0: Need more slots
>>    [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>    [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>    [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>    [61815.849225] net eth0: Need more slots

> This issue is familiar... and I thought it had been fixed.
>  From my original analysis of a similar issue I hit before, the root cause 
> is that netback still creates responses when the ring is full. I remember 
> a larger MTU could trigger this issue before; what is the MTU size?

In dom0, both the physical NICs and the guest vifs have MTU=1500.
In domU, eth0 also has MTU=1500.

So it's not jumbo frames .. just the same plain defaults everywhere ..

With Wei's patch for the other issue applied, I'm still seeing the "Need more slots" issue on 3.14-rc3.
I have extended the "need more slots" warning to also print cons, slots, max, rx->offset and size; hopefully that gives some more insight.
It is indeed the VM where I had similar issues before; the primary thing this VM does is run two simultaneous rsyncs (one push, one pull) over some gigabytes of data.

This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference", as seen below; I don't know whether it's a cause or an effect though.

Will keep you posted when it triggers again with the extra info in the warning.

--
Sander



> Thanks
> Annie
>>
>>    Xen reports:
>>    (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>    (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>    (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>    (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>    (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>    (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>    (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>    (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>    (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>    (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>    (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>    (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>    (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>    (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>    (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>    (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>    (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>    (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>    (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>    (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>    (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>
>>
>>
>> Another networking issue appears when running both dom0 and the domUs with a 3.14-rc3 kernel:
>>    - I can ping the guests from dom0
>>    - I can ping dom0 from the guests
>>    - But I can't ssh in or access things over http
>>    - I don't see any relevant error messages ...
>>    - This is with the same system and kernel config as the 3.14-rc3 and 3.13 combination above
>>      (that previously worked fine)
>>
>> --
>>
>> Sander
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>
>>    In the guest:
>>    [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>    [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>    [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>    [57539.859610] net eth0: Need more slots
>>    [58157.675939] net eth0: Need more slots
>>    [58725.344712] net eth0: Need more slots
>>    [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>    [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>    [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>    [61815.849225] net eth0: Need more slots

> This issue is familiar... I thought it had been fixed.
> From my original analysis of a similar issue I hit before, the root cause
> is that netback still creates a response when the ring is full. I remember
> that a larger MTU could trigger this issue before; what is the MTU size?

In dom0, both the physical NICs and the guest vifs have MTU=1500.
In domU, eth0 also has MTU=1500.

So it's not jumbo frames, just the same plain defaults everywhere.

With the patch from Wei that solves the other issue applied, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch.
I have extended the "need more slots" warning to also print cons, slots, max, rx->offset and size; hopefully that gives some more insight.
It is indeed the VM where I had similar issues before; the primary thing this VM does is two simultaneous rsyncs (one push, one pull) of some gigabytes of data.

This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; I don't know whether that's a cause or an effect, though.

I'll keep you posted when it triggers again with the extra info in the warning.

--
Sander



> Thanks
> Annie
>>
>>    Xen reports:
>>    (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>    (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>    (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>    (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>    (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>    (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>    (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>    (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>    (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>    (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>    (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>    (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>    (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>    (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>    (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>    (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>    (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>    (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>    (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>    (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>    (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>
>>
>>
>> Another issue with networking is when running both dom0 and domU's with a 3.14-rc3 kernel:
>>    - I can ping the guests from dom0
>>    - I can ping dom0 from the guests
>>    - But I can't ssh to or access things over http
>>    - I don't see any relevant error messages ...
>>    - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>>      (that previously worked fine)
>>
>> --
>>
>> Sander
>>
>>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:20:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRfn-0001SF-Iu; Thu, 20 Feb 2014 11:20:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGRfl-0001Rv-UP
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:20:10 +0000
Received: from [85.158.143.35:20789] by server-3.bemta-4.messagelabs.com id
	40/27-11539-9E4E5035; Thu, 20 Feb 2014 11:20:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392895207!7022920!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28560 invoked from network); 20 Feb 2014 11:20:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:20:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102577788"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 11:20:07 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 06:20:06 -0500
Message-ID: <1392895205.23342.28.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 20 Feb 2014 11:20:05 +0000
In-Reply-To: <5305E388.2030609@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
	<1392808847.23084.138.camel@kazak.uk.xensource.com>
	<5304F058.6090503@linaro.org>
	<1392887074.22494.7.camel@kazak.uk.xensource.com>
	<5305E090.4030602@linaro.org>
	<1392894351.23342.22.camel@kazak.uk.xensource.com>
	<5305E388.2030609@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 11:14 +0000, Julien Grall wrote:
> 
> On 20/02/14 11:05, Ian Campbell wrote:
> > On Thu, 2014-02-20 at 11:01 +0000, Julien Grall wrote:
> >>
> >> On 20/02/14 09:04, Ian Campbell wrote:
> >>> I was actually thinking more along the lines of a .word at a defined
> >>> offset which you could hex edit to a specific value to activate a
> >>> particular flavour of early printk handling. That would be sufficient
> >>> e.g. for osstest to activate the appropriate stuff for the specific
> >>> platform.
> >>
> >> I don't see a useful use case for such an early printk implementation
> >> in Xen. When the board is fully supported, failing at an early stage (e.g.
> >> before the console is initialized) is very unlikely. At least if you don't
> >> play with memory.
> >
> > a) there are boards which aren't fully supported, getting some debug out
> > of a distro package might be useful
> 
> A few months ago we decided to allow early printk only when Xen is
> compiled with debug enabled. It seems a big mistake for a distro to ship
> with debug enabled :).

This was because earlyprintk only supports a static single configuration
at compile time. If that restriction was lifted then there would be no
reason to limit earlyprintk to debug builds.

> > b) even for boards which are fully supported there may still be bugs
> > which only appear under particular circumstances.
> 
> I understand this use case. If I understand your previous mail, the
> solution would be "hex editing the Xen binary manually to set the early
> printk", right? If so, you are assuming that the distro (or anything
> else) is providing the zImage. Otherwise the developer has to:
>     - unpack the uImage
>     - edit the zImage
>     - recreate the uImage

No distro would ship the actual uImage; it's too machine-specific.

I would expect this to be used by running:
	xen-enable-early-printk /boot/xen midway

where xen-enable-early-printk is a simple tool we provide.

Then if a uImage is required, it would be generated by whatever distro
tooling would have generated it in the non-early-printk case, by
rerunning that tool.

> That seems very complicated compared to recompiling Xen...

Not really, and this isn't so much for developers as for users with
access to problematic systems, so they can provide us with information.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:37:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRwc-0001yV-9h; Thu, 20 Feb 2014 11:37:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGRwb-0001yQ-CW
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 11:37:33 +0000
Received: from [85.158.143.35:23252] by server-3.bemta-4.messagelabs.com id
	8A/C9-11539-CF8E5035; Thu, 20 Feb 2014 11:37:32 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392896251!7053851!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 935 invoked from network); 20 Feb 2014 11:37:32 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:37:32 -0000
Received: by mail-wg0-f42.google.com with SMTP id k14so478308wgh.5
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 03:37:31 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=L1ZBv7/ZZcRXt7P8vXbhVuW0IlZE53IMBuigcwc3Cjw=;
	b=JznnZG6hVaGUsuWfh+23B/EAPl7j5sHiTnF4cLJtObQ/Hinx/8RcbqVVypwJ58CKcb
	GfwSdTth11c2ZO+P8NejnHT0DpTfSrAF85eFmD9UD01gR2HooT41nXatx4Yq8YrrE3IX
	9FMEm2Dv/t+O9YzsdBFAg+wjr4VT/Nve1HaSvYIXzFPC5PpE6AvtEucKMBrmY/L/oxeY
	7bxBywkyGuf/JQqkDPa2kCbwUFeMidBAC1170ha8T/bOqgziyVcx2ISl7Wpx1EslawOy
	W/zA4H1V3d16sCymemQJZ994cnLZlFBXt7Jb3CAPI7hxNzuSoZQaQvv8iVIjYJuulCba
	VslQ==
X-Gm-Message-State: ALoCoQmH/U710sUQGivRDdjbBqSOw0KPQAn+TV8qojHZsesc7zbJZOxL7bqnE4UZKTeaazKQaYmS
X-Received: by 10.180.149.206 with SMTP id uc14mr6640662wib.44.1392896251609; 
	Thu, 20 Feb 2014 03:37:31 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id v6sm10662858wif.0.2014.02.20.03.37.30
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 03:37:30 -0800 (PST)
Message-ID: <5305E8F9.1020704@linaro.org>
Date: Thu, 20 Feb 2014 11:37:29 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>			
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>			
	<1392808847.23084.138.camel@kazak.uk.xensource.com>			
	<5304F058.6090503@linaro.org>		
	<1392887074.22494.7.camel@kazak.uk.xensource.com>		
	<5305E090.4030602@linaro.org>	
	<1392894351.23342.22.camel@kazak.uk.xensource.com>	
	<5305E388.2030609@linaro.org>
	<1392895205.23342.28.camel@kazak.uk.xensource.com>
In-Reply-To: <1392895205.23342.28.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 20/02/14 11:20, Ian Campbell wrote:
> On Thu, 2014-02-20 at 11:14 +0000, Julien Grall wrote:
>>
>> On 20/02/14 11:05, Ian Campbell wrote:
>>> On Thu, 2014-02-20 at 11:01 +0000, Julien Grall wrote:
>>>>
>>>> On 20/02/14 09:04, Ian Campbell wrote:
>>>>> I was actually thinking more along the lines of a .word at a defined
>>>>> offset which you could hex edit to a specific value to activate a
>>>>> particular flavour of early printk handling. That would be sufficient
>>>>> e.g. for osstest to activate the appropriate stuff for the specific
>>>>> platform.
>>>>
>>>> I don't see a useful use case for such an early printk implementation
>>>> in Xen. When the board is fully supported, failing at an early stage (e.g.
>>>> before the console is initialized) is very unlikely. At least if you don't
>>>> play with memory.
>>>
>>> a) there are boards which aren't fully supported, getting some debug out
>>> of a distro package might be useful
>>
>> A few months ago we decided to allow early printk only when Xen is
>> compiled with debug enabled. It seems a big mistake for a distro to ship
>> with debug enabled :).
>
> This was because earlyprintk only supports a static single configuration
> at compile time. If that restriction was lifted then there would be no
> reason to limit earlyprintk to debug builds.
>
>>> b) even for boards which are fully supported there may still be bugs
>>> which only appear under particular circumstances.
>>
>> I understand this use case. If I understand your previous mail, the
>> solution would be "hex editing the Xen binary manually to set the early
>> printk", right? If so, you are assuming that the distro (or anything
>> else) is providing the zImage. Otherwise the developer has to:
>>      - unpack the uImage
>>      - edit the zImage
>>      - recreate the uImage
>
> No distro would ship the actual uImage, it's too machine specific.
>
> I would expect this to be used by running:
> 	xen-enable-early-printk /boot/xen midway
>
> where xen-enable-early-printk is a simple tool we provide.

And a similar one to disable, I guess.

>
> Then if a uImage is required, it would be generated by whatever
> distro tooling would have generated it in the non-early-printk case,
> by rerunning that tool.

Sounds good. Do you plan to work on it?

It would be nice to have this item on the ARM todo page.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:39:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:39:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGRye-00029u-R5; Thu, 20 Feb 2014 11:39:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGRyd-00029m-TV
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 11:39:40 +0000
Received: from [85.158.137.68:6263] by server-7.bemta-3.messagelabs.com id
	B3/2E-13775-B79E5035; Thu, 20 Feb 2014 11:39:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1392896377!3107162!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13196 invoked from network); 20 Feb 2014 11:39:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:39:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102581509"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 11:39:28 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 06:39:27 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGRyR-0000T1-FV;
	Thu, 20 Feb 2014 11:39:27 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGRyR-0000SG-7G;
	Thu, 20 Feb 2014 11:39:27 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21253.59759.79641.260689@mariner.uk.xensource.com>
Date: Thu, 20 Feb 2014 11:39:27 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392889477.23342.6.camel@kazak.uk.xensource.com>
References: <osstest-25148-mainreport@xen.org>
	<1392889477.23342.6.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL"):
...
> I think this test started before the fix made it through osstest's own
> gateway, so this is still expected but the next one should be OK.
> 
> But this made me remember a request I've had -- would it be possible
> to record the version of osstest used for a given flight somewhere?
> Perhaps in the output runvars? I'm not sure where best to hook that in.

It's recorded in the table flights_harness_touched in the production
database.  Perhaps sg-report-flight should show it somewhere.
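The lookup could be surfaced roughly like this. A minimal sketch using an in-memory sqlite3 stand-in for the production database: only the table name flights_harness_touched comes from this thread; the (flight, harness) column names and sample data are assumptions.

```python
import sqlite3

# In-memory stand-in for the production DB. Only the table name is taken
# from the thread; columns and contents are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE flights_harness_touched (flight INTEGER, harness TEXT)")
db.execute("INSERT INTO flights_harness_touched VALUES (25148, 'deadbeef')")

def harness_for_flight(flight):
    """Return the osstest (harness) revisions recorded for a flight."""
    rows = db.execute(
        "SELECT harness FROM flights_harness_touched WHERE flight = ?",
        (flight,),
    ).fetchall()
    return [r[0] for r in rows]

print(harness_for_flight(25148))   # -> ['deadbeef']
```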

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 11:43:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 11:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGS1n-0002KD-El; Thu, 20 Feb 2014 11:42:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGS1l-0002K8-MF
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 11:42:53 +0000
Received: from [85.158.143.35:32789] by server-1.bemta-4.messagelabs.com id
	2F/23-31661-D3AE5035; Thu, 20 Feb 2014 11:42:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392896571!7055048!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19130 invoked from network); 20 Feb 2014 11:42:52 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 11:42:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102582050"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 11:42:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 06:42:50 -0500
Message-ID: <1392896569.23342.43.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 11:42:49 +0000
In-Reply-To: <21253.59759.79641.260689@mariner.uk.xensource.com>
References: <osstest-25148-mainreport@xen.org>
	<1392889477.23342.6.camel@kazak.uk.xensource.com>
	<21253.59759.79641.260689@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 11:39 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [xen-4.2-testing test] 25148: regressions - FAIL"):
> ...
> > I think this test started before the fix made it through osstest's own
> > gateway, so this is still expected but the next one should be OK.
> > 
> > But this made me remember a request I've had -- would it be possible
> > to record the version of osstest used for a given flight somewhere?
> > Perhaps in the output runvars? I'm not sure where best to hook that in.
> 
> It's recorded in the table flights_harness_touched in the production
> database.  Perhaps sg-report-flight should show it somewhere.

I was looking at adding it to the URL in ReportTrailer (with a
s/ubti/tution/) so it would link direct to the correct branch but
sg-report-flight seems to know about multiple flights for various
comparisons etc and I couldn't figure out which was which so I gave
up...

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 12:07:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 12:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGSPF-000319-Hh; Thu, 20 Feb 2014 12:07:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGSPD-000314-CA
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 12:07:07 +0000
Received: from [85.158.139.211:56830] by server-13.bemta-5.messagelabs.com id
	E7/1C-18801-AEFE5035; Thu, 20 Feb 2014 12:07:06 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392898024!5128208!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1452 invoked from network); 20 Feb 2014 12:07:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 12:07:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102587133"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 12:07:03 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 07:07:03 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGSP9-0000bk-9T;
	Thu, 20 Feb 2014 12:07:03 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGSP8-0003AG-MR;
	Thu, 20 Feb 2014 12:07:03 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25150-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 12:07:02 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 25150: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25150 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25150/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24737

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 17 leak-check/check        fail never pass

version targeted for testing:
 qemuu                65fc9b78ba3d868a26952db0d8e51cecf01d47b4
baseline version:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:43:12 2013 +0000

    xen: Enable cpu-hotplug on xenfv machine.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    Conflicts:
    	hw/i386/pc_piix.c

commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:41:48 2013 +0000

    xen: Fix vcpu initialization.
    
    Each vcpu need a evtchn binded in qemu, even those that are
    offline at QEMU initialisation.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 12:29:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 12:29:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGSkg-0003kF-P8; Thu, 20 Feb 2014 12:29:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGSke-0003j9-FI
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 12:29:16 +0000
Received: from [85.158.139.211:9187] by server-10.bemta-5.messagelabs.com id
	81/4C-08578-B15F5035; Thu, 20 Feb 2014 12:29:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392899353!1225981!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12301 invoked from network); 20 Feb 2014 12:29:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 12:29:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="104285364"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 12:28:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 07:28:56 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGSkJ-0001Gr-N3;
	Thu, 20 Feb 2014 12:28:55 +0000
Date: Thu, 20 Feb 2014 12:28:50 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Rusty Russell <rusty@au1.ibm.com>
In-Reply-To: <87ha7ubme0.fsf@rustcorp.com.au>
Message-ID: <alpine.DEB.2.02.1402201214300.15812@kaball.uk.xensource.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, Anthony Liguori <anthony@codemonkey.ws>,
	sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014, Rusty Russell wrote:
> It's a fundamental assumption of virtio that the host can access all of
> guest memory.

I take it that by "host" you mean the virtio backends in this context.

Do you think that this fundamental assumption should be maintained going
forward?

I am asking because Xen assumes that the backends are only allowed to
access the memory that the guest decides to share with them.
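The distinction under discussion can be made concrete with a toy model (this is not Xen's actual grant-table API, just an illustration of the access rule): under the virtio assumption the backend may touch any guest frame, while under the Xen model it may only touch frames the guest has explicitly granted.

```python
# Toy model of grant-restricted backend access. All names here are
# illustrative; Xen's real mechanism is the grant table hypercall interface.

class GuestMemory:
    def __init__(self, num_frames):
        self.frames = {n: b"\x00" for n in range(num_frames)}
        self.grants = set()          # frames the guest decided to share

    def grant(self, frame):
        self.grants.add(frame)

class Backend:
    """Reads are only permitted on frames the guest granted."""
    def read(self, mem, frame):
        if frame not in mem.grants:
            raise PermissionError(f"frame {frame} not granted to backend")
        return mem.frames[frame]

mem = GuestMemory(4)
mem.grant(2)
backend = Backend()
assert backend.read(mem, 2) == b"\x00"   # granted frame: allowed
try:
    backend.read(mem, 0)                 # ungranted frame: rejected
    raise AssertionError("unshared frame should have been rejected")
except PermissionError:
    pass
```

Under the virtio assumption the `grants` check would simply not exist; that is exactly the gap between the two models.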

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 13:02:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 13:02:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGTGg-0005Tv-QC; Thu, 20 Feb 2014 13:02:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGTGe-0005Tm-SF
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 13:02:21 +0000
Received: from [85.158.143.35:53481] by server-3.bemta-4.messagelabs.com id
	2C/25-11539-CDCF5035; Thu, 20 Feb 2014 13:02:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392901337!7097219!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32624 invoked from network); 20 Feb 2014 13:02:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 13:02:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102600206"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 13:02:17 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 08:02:16 -0500
Message-ID: <1392901335.30563.2.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 20 Feb 2014 13:02:15 +0000
In-Reply-To: <5305E8F9.1020704@linaro.org>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-7-git-send-email-julien.grall@linaro.org>
	<1392808847.23084.138.camel@kazak.uk.xensource.com>
	<5304F058.6090503@linaro.org>
	<1392887074.22494.7.camel@kazak.uk.xensource.com>
	<5305E090.4030602@linaro.org>
	<1392894351.23342.22.camel@kazak.uk.xensource.com>
	<5305E388.2030609@linaro.org>
	<1392895205.23342.28.camel@kazak.uk.xensource.com>
	<5305E8F9.1020704@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@eu.citrix.com,
	tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 6/6] xen/arm: Replace early_printk call to
	printk call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 11:37 +0000, Julien Grall wrote:
> 
> On 20/02/14 11:20, Ian Campbell wrote:
> > On Thu, 2014-02-20 at 11:14 +0000, Julien Grall wrote:
> >>
> >> On 20/02/14 11:05, Ian Campbell wrote:
> >>> On Thu, 2014-02-20 at 11:01 +0000, Julien Grall wrote:
> >>>>
> >>>> On 20/02/14 09:04, Ian Campbell wrote:
> >>>>> I was actually thinking more along the lines of a .word at a defined
> >>>>> offset which you could hex edit to a specific value to activate a
> >>>>> particular flavour of early printk handling. That would be sufficient
> >>>>> e.g. for osstest to activate the appropriate stuff for the specific
> >>>>> platform.
> >>>>
> >>>> I don't see a useful use case for such an early printk implementation
> >>>> in Xen. When the board is fully supported, failing at an early stage
> >>>> (e.g. before the console is initialized) is very unlikely, at least
> >>>> if you don't play with memory.
> >>>
> >>> a) there are boards which aren't fully supported, getting some debug out
> >>> of a distro package might be useful
> >>
> >> A few months ago we decided to allow early printk only when Xen is
> >> compiled with debug enabled. It seems a big mistake to ship a distro
> >> with debug enabled :).
> >
> > This was because earlyprintk only supports a single static configuration
> > at compile time. If that restriction were lifted then there would be no
> > reason to limit earlyprintk to debug builds.
> >
> >>> b) even for boards which are fully supported there may still be bugs
> >>> which only appear under particular circumstances.
> >>
> >> I understand this use case. If I understand your previous mail, the
> >> solution would be "manually hex editing the Xen binary to set the early
> >> printk", right? If so, you are assuming that the distro (or anything
> >> else) is providing the zImage. Otherwise the developer has to:
> >>      - unpack the zImage from the uImage
> >>      - edit the zImage
> >>      - recreate the uImage
> >
> > No distro would ship the actual uImage, it's too machine specific.
> >
> > I would expect this to be used by running:
> > 	xen-enable-early-printk /boot/xen midway
> >
> > where xen-enable-early-printk is a simple tool we provide.
> 
> And a similar one to disable, I guess.

Yes, by choosing "none" I suppose.

> > Then if a uImage is required, it would be generated by whatever distro
> > tooling would have generated it in the non-early-printk case, by
> > rerunning that tool.
> 
> Sounds good. Do you plan to work on it?

Not in the immediate future.

> It would be nice to have this item on the ARM todo page.

Done.
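The proposed tool can be sketched as follows. This is purely hypothetical: no such tool existed at the time of this thread, and the magic tag, platform ID table, and on-disk layout below are all invented for illustration. The only idea taken from the discussion is "patch a .word at a findable offset in the Xen binary to select an early-printk flavour".

```python
# Hypothetical sketch of an xen-enable-early-printk-style tool: locate a
# magic-tagged 32-bit slot in the image and overwrite it with the ID of the
# requested platform ("none" disables early printk again).
import struct

MAGIC = b"EPTK"                                       # assumed 4-byte tag
PLATFORMS = {"none": 0, "midway": 1, "vexpress": 2}   # assumed ID table

def enable_early_printk(path, platform):
    blob = bytearray(open(path, "rb").read())
    off = blob.find(MAGIC)
    if off < 0:
        raise ValueError("early-printk slot not found in image")
    # Patch the little-endian 32-bit word that follows the tag.
    struct.pack_into("<I", blob, off + len(MAGIC), PLATFORMS[platform])
    open(path, "wb").write(blob)

# Example against a fake image: 64 filler bytes, the tag, then the slot.
open("/tmp/fake-xen", "wb").write(b"\x90" * 64 + MAGIC + struct.pack("<I", 0))
enable_early_printk("/tmp/fake-xen", "midway")
assert open("/tmp/fake-xen", "rb").read()[-4:] == struct.pack("<I", 1)
```

Disabling would just be `enable_early_printk("/tmp/fake-xen", "none")`, matching the "choosing none" idea above; because only the binary is patched, any uImage would be regenerated afterwards by the usual distro tooling.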



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 13:19:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 13:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGTXG-0005sH-LK; Thu, 20 Feb 2014 13:19:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGTXE-0005sC-Pj
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 13:19:28 +0000
Received: from [85.158.143.35:8204] by server-3.bemta-4.messagelabs.com id
	76/A6-11539-0E006035; Thu, 20 Feb 2014 13:19:28 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392902366!7102936!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23520 invoked from network); 20 Feb 2014 13:19:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 13:19:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="104297584"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 13:19:03 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 08:19:02 -0500
Message-ID: <530600C5.3070107@citrix.com>
Date: Thu, 20 Feb 2014 13:19:01 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>, Stephen Hemminger
	<stephen@networkplumber.org>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
In-Reply-To: <CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 17:02, Luis R. Rodriguez wrote:
> On Wed, Feb 19, 2014 at 6:35 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> On 19/02/14 09:52, Ian Campbell wrote:
>>> Can't we arrange things in the Xen hotplug scripts such that if the
>>> root_block stuff isn't available/doesn't work we fallback to the
>>> existing fe:ff:ff:ff:ff:ff usage?
>>>
>>> That would avoid concerns about forward/backwards compat I think. It
>>> wouldn't solve the issue you are targeting on old systems, but it also
>>> doesn't regress them any further.
>>
>> I agree, I think this problem could be better handled from userspace: if it
>> can set root_block then change the default MAC to a random one, if it can't,
>> then stay with the default one. Or if someone doesn't care about STP but DAD
>> is still important, userspace can have a force_random_mac option somewhere
>> to change to a random MAC regardless of root_block presence.
>
> Folks, what if I repurpose my patch to use the IFF_BRIDGE_NON_ROOT (or
> relabel to IFF_ROOT_BLOCK_DEF) flag for a default driver preference
> upon initialization so that root block will be used once the device
> gets added to a bridge. The purpose would be to avoid drivers from
> using the high MAC address hack, streamline to use a random MAC
> address thereby avoiding the possible duplicate address situation for
> IPv6. In the STP use case for these interfaces we'd just require
> userspace to unset the root block. I'd consider the STP use case the
> most odd of all. The caveat to this approach is that kernel 3.8 would
> be needed (or the root block patches cherry-picked) for base kernels
> older than 3.8.

How about this: netback sets the root_block flag and a random MAC by
default. That way the default behaviour won't change, DAD will be happy,
and userspace doesn't have to do anything unless it's using netback for an
STP root bridge (I don't think there are many toolstacks doing that), in
which case it has to remove the root_block flag instead of setting a
random MAC.
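For reference, the "random MAC" in this proposal would need to be a locally-administered unicast address so it can never collide with a vendor-assigned one: set bit 1 of the first octet (locally administered) and clear bit 0 (unicast). A minimal sketch of that convention (this is the standard IEEE 802 bit layout, not netback's actual code):

```python
import random

def random_local_mac():
    """Generate a random locally-administered, unicast MAC address --
    the kind of default a backend could assign to keep IPv6 DAD happy."""
    octets = [random.randrange(256) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE   # set local bit, clear multicast bit
    return ":".join(f"{o:02x}" for o in octets)

mac = random_local_mac()
first = int(mac.split(":")[0], 16)
assert first & 0x02          # locally administered
assert not (first & 0x01)    # unicast, not multicast
```

This is the same scheme the kernel's own random-MAC helpers use; the fe:ff:ff:ff:ff:ff convention it would replace also has both of those bits set, which is why swapping it for a random value is safe for DAD.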

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 13:50:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 13:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGU0y-0006Wz-C8; Thu, 20 Feb 2014 13:50:12 +0000
From xen-devel-bounces@lists.xen.org Thu Feb 20 13:50:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 13:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGU0y-0006Wz-C8; Thu, 20 Feb 2014 13:50:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGU0o-0006Wu-J3
	for Xen-devel@lists.xensource.com; Thu, 20 Feb 2014 13:50:09 +0000
Received: from [193.109.254.147:10857] by server-13.bemta-14.messagelabs.com
	id 07/D3-01226-90806035; Thu, 20 Feb 2014 13:50:01 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392904201!5682969!1
X-Originating-IP: [209.85.215.175]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24099 invoked from network); 20 Feb 2014 13:50:01 -0000
Received: from mail-ea0-f175.google.com (HELO mail-ea0-f175.google.com)
	(209.85.215.175)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 13:50:01 -0000
Received: by mail-ea0-f175.google.com with SMTP id z10so600934ead.34
	for <Xen-devel@lists.xensource.com>;
	Thu, 20 Feb 2014 05:50:01 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=YReHWlsN2tLA4+E4dIJQJOPw+6vM3iqhBT9RfEWuabs=;
	b=Y47eoto131Dx80t/rQslOnw0KQiZRp6QLfkYRk/MTDpvKhqho1ayNbRBoe761kmjl2
	zgQRMQ/EUgvNq6yAWP9PHRtMqSul4Iguk76DFST7pMeaHPD5AnFM9bJQ8YACYEl8jzVb
	MbO9twkgyef53aTvnUHT4MmYT6kf7dtWV0jIYSYNy3xrCwZN/+WucPqTcXUOmkOKuMFL
	btT4OZeCC+9nmT6XujBnunLj2maPr/8i31JtQDe2S6MrwduQb9HYRZMWSQB/g8i3Hla8
	AJVm2KTqTWY41bcDT65uIq+3KYXh8ttGVz4i8U8BZXxkl0l0YUJ4jPzEyc0bG+xnoeQ7
	eRyw==
X-Gm-Message-State: ALoCoQmY1g1jropRG18dHiqhb4eIv44TPUgYcMvrUJV6gY+EntJL8qZRW+T3geFglYiHBTzUEVn8
X-Received: by 10.15.51.196 with SMTP id n44mr1985628eew.27.1392904200755;
	Thu, 20 Feb 2014 05:50:00 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id d9sm14034367eei.9.2014.02.20.05.49.59
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 05:49:59 -0800 (PST)
Message-ID: <53060806.7040903@linaro.org>
Date: Thu, 20 Feb 2014 13:49:58 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<52FBA5BA.4020301@linaro.org>
	<20140219182227.6a37a33c@mantra.us.oracle.com>
In-Reply-To: <20140219182227.6a37a33c@mantra.us.oracle.com>
Cc: Xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	george.dunlap@eu.citrix.com, tim@xen.org, keir.xen@gmail.com,
	Jan Beulich <jbeulich@suse.com>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/20/2014 02:22 AM, Mukesh Rathor wrote:
> On Wed, 12 Feb 2014 16:47:54 +0000
> Julien Grall <julien.grall@linaro.org> wrote:
> 
>> Hi Mukesh,
>>
>> On 12/17/2013 02:38 AM, Mukesh Rathor wrote:
>>> In preparation for the next patch, we update xsm_add_to_physmap to
>>> allow checking of the foreign domain. Thus, the current domain must
>>> have the right to update the mappings of the target domain with pages
>>> from a foreign domain.
>>>
>>> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
>>
>> While I was playing with XSM on ARM, I noticed that Daniel De
>> Graaf added xsm_map_gmfn_foreign a few months ago (see commit
>> 0b201e6).
>>
>> Would it be suitable to use this XSM instead of extending
>> xsm_add_to_physmap?
>>
>> Regards,
>>
> 
> Not the same thing. Add-to-physmap could be adding pages from a
> foreign domain to a domain's physmap.

Let's assume you don't modify xsm_add_to_physmap. In this case:
   - xsm_add_to_physmap checks whether the current domain is allowed to
modify the p2m of a given domain
   - xsm_map_gmfn_foreign checks whether the given domain is allowed to
hold foreign mappings from the foreign domain

The two XSM checks are distinct and should be used together. You don't
care that the current domain can modify a given domain's p2m in order to
add a foreign mapping; you only want to know whether a given domain is
allowed to hold foreign mappings from a specific foreign domain.

IMHO, modifying xsm_add_to_physmap will complicate the policy, because
you need to say explicitly:
   - my domain A is able to modify the p2m of domain B, which is able to
hold foreign mappings from domain C
   - my domain D is able to modify the p2m of domain B, which is able to
hold foreign mappings from domain C

The second half of each rule is redundant.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 14:13:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 14:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGUNT-00071F-FY; Thu, 20 Feb 2014 14:13:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGUNS-00071A-Ge
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 14:13:26 +0000
Received: from [85.158.139.211:14252] by server-13.bemta-5.messagelabs.com id
	7E/04-18801-58D06035; Thu, 20 Feb 2014 14:13:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392905603!5202785!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23074 invoked from network); 20 Feb 2014 14:13:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 14:13:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102623276"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 14:13:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 09:13:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGUNN-0001ES-B9;
	Thu, 20 Feb 2014 14:13:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGUNN-0004x9-4O;
	Thu, 20 Feb 2014 14:13:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25151-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 14:13:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 25151: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25151 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25151/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24859
 test-amd64-amd64-pv           6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl           6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-pair    10 leak-check/basis/dst_host(10) fail REGR. vs. 24859
 test-amd64-amd64-pair      9 leak-check/basis/src_host(9) fail REGR. vs. 24859
 test-amd64-i386-pair         13 guests-nbd-mirror         fail REGR. vs. 24859
 test-amd64-amd64-xl-winxpsp3  6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-win7-amd64  6 leak-check/basis(6)     fail REGR. vs. 24859
 test-amd64-amd64-xl-qemut-win7-amd64 6 leak-check/basis(6) fail REGR. vs. 24859
 test-amd64-i386-xend-winxpsp3  4 xen-install              fail REGR. vs. 24859
 test-amd64-amd64-xl-qemut-winxpsp3  6 leak-check/basis(6) fail REGR. vs. 24859

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-pcipt-intel  6 leak-check/basis(6)    fail REGR. vs. 24859
 test-amd64-amd64-xl-sedf-pin  6 leak-check/basis(6)       fail REGR. vs. 24859

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-freebsd10-i386 18 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 18 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass

version targeted for testing:
 xen                  934858f00267a92bc2a2995a0c634d02d2c60fbd
baseline version:
 xen                  e21b2fa19946806ea27873a8808bc1ace48b7c69

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         fail    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 934858f00267a92bc2a2995a0c634d02d2c60fbd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 20 08:43:11 2014 +0100

    x86/AMD: work around erratum 793 for 32-bit
    
    The original change went into a 64-bit only code section, thus leaving
    the issue unfixed on 32-bit. Re-order code to address this.
    
    This is part of CVE-2013-6885 / XSA-82.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 14:20:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 14:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGUTj-0007Gn-P0; Thu, 20 Feb 2014 14:19:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>)
	id 1WGUTh-0007GZ-N9; Thu, 20 Feb 2014 14:19:53 +0000
Received: from [85.158.137.68:37348] by server-3.bemta-3.messagelabs.com id
	DB/9E-14520-80F06035; Thu, 20 Feb 2014 14:19:52 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392905991!3177910!1
X-Originating-IP: [209.85.212.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32015 invoked from network); 20 Feb 2014 14:19:51 -0000
Received: from mail-wi0-f174.google.com (HELO mail-wi0-f174.google.com)
	(209.85.212.174)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 14:19:51 -0000
Received: by mail-wi0-f174.google.com with SMTP id f8so5924639wiw.7
	for <multiple recipients>; Thu, 20 Feb 2014 06:19:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=t6Jl8yLNrk60mE8m4lObmLwP+iA5YlMu2faG6PWygdw=;
	b=pmVXjvZZptldWLikboR5iqtWZdLvs0kS3qKqMToclph5ziKYUQ0JnUC9T0PRd2Aizv
	iL9vqWqcEmQCe72gyom1lscRP8l8hR0OGkDP1KqbJtfjcH2r44EWrn6LovMpoFWSVRK1
	p9I7hIftrMhsI3KzQ+g0emg6zBcWtb8LSZNllIdGZVrqWxiCWJe4s/RbVWzQsXld8OIh
	hq276euiJJ8wVDhoVfCE/rYwgyd/upkEZ1AXzDnqG5G/HEupjq2GkNSWgqsX2UZp5inz
	pXPkAJR0cw8hkwhxAQ84uGg3qc9Qc1hh44tiMN16GzCYYN4T6e1z1qSwQJH32gv71Vzd
	1XFg==
MIME-Version: 1.0
X-Received: by 10.180.105.41 with SMTP id gj9mr7355895wib.28.1392905991537;
	Thu, 20 Feb 2014 06:19:51 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 20 Feb 2014 06:19:51 -0800 (PST)
In-Reply-To: <CAN71wULW2S7izfJBJ5mpQ-0txw_StT7_YEk4MdBGhEwuSexo+Q@mail.gmail.com>
References: <CAN71wULW2S7izfJBJ5mpQ-0txw_StT7_YEk4MdBGhEwuSexo+Q@mail.gmail.com>
Date: Thu, 20 Feb 2014 14:19:51 +0000
X-Google-Sender-Auth: px4QPu6h0t1JYgfKdgIW-Di_yb4
Message-ID: <CAFLBxZbHbCs=H6J0-n_YCxj0xx0Hp6ZpLOrzWZo05A1OixmeKA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Kai Luo <luokain@gmail.com>
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem when launch instance in OpenStack when we
 use xen as virtuallization layer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 1:59 PM, Kai Luo <luokain@gmail.com> wrote:
> Hello all:
>       Recently we have been trying to deploy OpenStack with Xen 4.3.0 as the
> virtualization layer; however, an error occurred when we launched an instance
> from an image. We have confirmed that the OpenStack services work fine and the
> problem may lie in the Xen layer. We checked the xend log and got the
> following exception:

I think I would start by asking this on xen-users (CC'd) -- it seems
likely that this is a configuration issue between OpenStack and Xen,
and there's more expertise for that here than on xen-devel (where we
have more expertise about issues internal to Xen).

>
> TapdiskException: ('create',
> '-aqcow2:/var/lib/nova/instances/db4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
> (32512)
> [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:3078)
> XendDomainInfo.destroy: domid=21
> [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2408) No device model
> [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2410) Releasing devices
> [2014-02-18 21:13:43 11395] ERROR (SrvBase:88) Request start failed.
> Traceback (most recent call last):
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/web/SrvBase.py", line 85, in
> perform
>     return op_method(op, req)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/SrvDomain.py", line 77, in
> op_start
>     return self.xd.domain_start(self.dom.getName(), paused)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomain.py", line 1070, in
> domain_start
>     dominfo.start(is_managed = True)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 474,
> in start
>     XendTask.log_progress(31, 60, self._initDomain)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendTask.py", line 209, in
> log_progress
>     retval = func(*args, **kwds)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 2845,
> in _initDomain
>     self._configureBootloader()
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3286,
> in _configureBootloader
>     mounted_vbd_uuid = dom0.create_vbd(vbd, disk);
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3979,
> in create_vbd
>     devid = dev_control.createDevice(config)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> 172, in createDevice
>     device = TapdiskController.create(params, file)
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> 284, in create
>     return TapdiskController.exc('create', '-a%s:%s' % (dtype, image))
>   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> 231, in exc
>     (args, rc, out, err))
> TapdiskException: ('create',
> '-aqcow2:/var/lib/nova/instances/db4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
> (32512)
> [2014-02-18 21:13:44 11395] INFO (XendDomain:1126) Domain instance-00000035
> (db4b07c5-7443-418b-81c4-a1f55fb264d3) deleted.
>
>      We tried images in both RAW and QCOW2 format and switched the Xen
> toolstack from xm to xl, but it still didn't work. When we switched the
> toolstack to xl and stopped the xend service, virt-manager did not work
> either. We don't know what caused this problem. We know XCP may be a better
> choice, but we have to use Xen as the OpenStack virtualization layer because
> some modifications have been made in Xen. Could you give us any suggestions?
>
> Jone
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
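
[Editorial aside: the `32512` in the TapdiskException above is a raw 16-bit wait status as returned by `os.wait()`-style calls; decoding it yields exit code 127, the shell convention for "command not found" — a hint that the `tapdisk` binary the blktap control code tries to spawn may not be installed or not on the path. The status value comes from the log above; the decoding is the standard POSIX convention, sketched below, not something the log itself states.]

```python
def decode_wait_status(status: int) -> int:
    """Decode a raw wait status (as logged by xend) into the child's exit code."""
    # For a normal exit, the high byte carries the exit code and the low
    # byte is zero (no terminating signal). Equivalent to os.WEXITSTATUS.
    return (status >> 8) & 0xFF

# The status reported by TapdiskException in the log above.
print(decode_wait_status(32512))  # -> 127, the shell's "command not found" code
```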

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 14:23:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 14:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGUWt-0007Vy-KI; Thu, 20 Feb 2014 14:23:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>)
	id 1WGUWr-0007VU-RC; Thu, 20 Feb 2014 14:23:10 +0000
Received: from [85.158.139.211:27149] by server-7.bemta-5.messagelabs.com id
	08/3F-14867-CCF06035; Thu, 20 Feb 2014 14:23:08 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392906187!5169549!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31019 invoked from network); 20 Feb 2014 14:23:08 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 14:23:08 -0000
Received: by mail-wg0-f49.google.com with SMTP id y10so1491766wgg.4
	for <multiple recipients>; Thu, 20 Feb 2014 06:23:07 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=b8yr8BjtgOLPWlfkaa4CBmOUCb3/AVUF0595oMRNHwQ=;
	b=yXOjaPbPib3qlxVaGb7IAV2UwYyw4HPXtLC4PFMKIC1q1fGhDoQmqtETdcx2fdQpd9
	kuzTpaiyw7ZCTJjIrrzeA/g1prxQ7PdJNrZTF17WckQ/EmyyXbWa04nAs/1+Zi1qJfu3
	GZck/NaggisnvzNZI5HSDjYT8JmVMCDCBYgviNQJkvv+1GlyULHckMlihQxdaOvZx+z/
	JqpdaCFZOgSA507b77RmfIbk5dO4gE1JglShbem34KxC1PP4jTK6BPAE2R5yjc9wuwjz
	JJ3pNWi8V8UXVDOpVf77QKjNetYC9QvJXdfs3SuM/5nNVC019ICOK8LYf9vO2G4Idbn7
	bDVg==
MIME-Version: 1.0
X-Received: by 10.194.240.7 with SMTP id vw7mr2249310wjc.75.1392906187864;
	Thu, 20 Feb 2014 06:23:07 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 20 Feb 2014 06:23:07 -0800 (PST)
In-Reply-To: <5303B34E.5000702@xen.org>
References: <5303B34E.5000702@xen.org>
Date: Thu, 20 Feb 2014 14:23:07 +0000
X-Google-Sender-Auth: s3iYLhUbvBHezy5rxlYOOX2FbhM
Message-ID: <CAFLBxZZVFPkMuGfK8-0Q1vbGodeAdDv67r7N5o2YzWQuM+2wXA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Lars Kurth <lars.kurth@xen.org>
Cc: Tim Mackey <Timothy.Mackey@citrix.com>,
	"mirageos-devel@lists.xenproject.org"
	<mirageos-devel@lists.xenproject.org>, xen-users@lists.xenproject.org,
	Russell Pavlicek <russell.pavlicek@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>
Subject: Re: [Xen-devel] [Vote] Proposal: Moving XCP binaries to
	XenServer.org
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 7:23 PM, Lars Kurth <lars.kurth@xen.org> wrote:
> Hi all,
>
> I wanted to propose to move the legacy XCP binaries from XenProject.org to
> XenServer.org.

+1

> With XenServer being fully open source and XCP basically
> being a variant of XenServer, it would make a lot more sense to keep all
> these binaries with XenServer.org. The fact that we have XCP and
> XenServer.org in two different places has led to:
>
> * fragmentation of the XCP user community
> * a constant source of confusion in the user community
>
> In a nutshell, many people don't know whether they should go to XenServer.org
> to ask XCP-related questions or whether to ask them on XenProject.org. As a
> result many questions remain unanswered. Russell and I spend a lot of our
> time pointing people to the right place and/or cross-posting. I was hoping
> things would get better over time, but they have not improved.
>
> When the Xen Project was created, there was no real alternative but to keep
> XCP as part of the Xen Project. With XenServer being fully open source, and
> being established, there is no reason why we can't clean up some of the
> confusion. In my opinion we really should do this.
>
> This proposal does *not* affect the XAPI project: the XAPI project would
> continue to develop the XAPI toolstack as part of the Xen Project (and
> deliver source "releases"). In fact, I would also propose to make the xapi
> mailing list a developer mailing list. This fits much better with how the
> Hypervisor and MirageOS projects are run and creates an overall cleaner and
> easier to understand model for the Xen Project.
>
> I have in principle agreement from:
> * The Xen Project Advisory Board and the Linux Foundation (which is needed
> as I am proposing to move assets out of XenProject.org)
> * Citrix to take on XCP as part of XenServer.org
> * Citrix to provide resources to migrate content and redirect URLs from
> xxx.XenProject.org to XenServer.org such that people won't be impacted. This
> part is quite important. People who come to download or find
> information about XCP are effectively encouraged to ask XCP-related questions
> on XenProject.org. If they are redirected to the right place on
> XenServer.org, that means they are redirected to the site where
> they should ask questions.
> * I may also be able to get some resources to clean up the wiki and add
> some redirects there (another source of ongoing confusion)
>
> == Who and how to vote? ==
>
> As this is not an entirely project-local decision, I propose that, according
> to http://xenproject.org/governance.html:
> - Members of all developer mailing lists (including the user lists) on
> Xenproject.org can review the proposal and voice an opinion
> - Maintainers of all mature projects and the Xenproject.org community
> manager are allowed to vote: these are the maintainers of xen-devel and xen-api
>
> You would vote by replying "+1"
> If you don't care vote "0"
> If you object, vote "-1", which must include an alternative proposal or a
> detailed explanation of the reasons for the negative vote.
>
> Please vote by Feb 25th
>
> Best Regards
> Lars
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 14:47:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 14:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGUuC-00005L-KG; Thu, 20 Feb 2014 14:47:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGUuA-00005G-K7
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 14:47:14 +0000
Received: from [193.109.254.147:23424] by server-1.bemta-14.messagelabs.com id
	B5/89-15438-17516035; Thu, 20 Feb 2014 14:47:13 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392907630!5695508!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31745 invoked from network); 20 Feb 2014 14:47:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 14:47:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="104327797"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 14:47:09 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 09:47:09 -0500
Message-ID: <5306156B.4070105@citrix.com>
Date: Thu, 20 Feb 2014 14:47:07 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<53024C58.4010900@citrix.com>
	<CAB=NE6XYjOd2vRpQCZOG-S5ZW4xjam+FOPAYzribNQpb50Q5pg@mail.gmail.com>
In-Reply-To: <CAB=NE6XYjOd2vRpQCZOG-S5ZW4xjam+FOPAYzribNQpb50Q5pg@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 16:45, Luis R. Rodriguez wrote:
> On Mon, Feb 17, 2014 at 9:52 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>>>
>>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>>
>>> It doesn't make sense for some interfaces to become a root bridge
>>> at any point in time. One example is virtual backend interfaces
>>> which rely on other entities on the bridge for actual physical
>>> connectivity. They only provide virtual access.
>>
>> It is possible that a guest bridges together two VIFs, either from the same
>> Dom0 bridge or from different ones. In that case using STP on VIFs sounds
>> sensible to me.
>
> You seem to describe a case whereby it can make sense for xen-netback
> interfaces to end up becoming the root port of a bridge. Can you
> elaborate a little more on that as it was unclear the use case.
Well, I might be wrong on that, but the scenario I was thinking of: a guest
(let's say domain 1) can have multiple interfaces on different Dom0 (or
driver domain) bridges; let's say vif1.0 is plugged into xenbr0 and
vif1.1 is in xenbr1. If the guest wants to make a bridge of these two,
then using STP makes sense. I wanted to bring up CloudStack's virtual
router as an example, but then I realized it probably doesn't do such a
thing. However, I don't think we should hardcode that a netback interface
can never be the root port.

>
> Additionally, if such cases exist, then under the current upstream
> implementation one would simply need to change the MAC address in
> order to enable a vif to become the root port. Stephen noted there is
> a way to avoid nominating an interface for root port through the
> root block flag. We should use that instead of the MAC address hacks.
> Let's keep in mind that part of the motivation for this series is to
> avoid a duplicate IPv6 address left in place by use cases whereby the
> MAC address of the backend vif was left static. The use case you are
> explaining likely describes the more prevalent use case where address
> conflicts can occur, perhaps when administrators forgot to change the
> backend MAC address. If we embrace a random MAC address we'd avoid
> that issue, but we'd need to update userspace to use the root
> block on topologies where desired.
If I understand you correctly, this is the same thing I suggested in
another email sent 1.5 hours ago.

Zoli
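
[Editorial aside: the "root block" flag discussed above is exposed by the Linux bridge as a per-port attribute, so no MAC address tricks are needed to keep a vif from being nominated as root port. A configuration sketch, assuming iproute2's bridge(8) and the standard brport sysfs layout; the names vif1.0 and xenbr0 are just the examples used in this thread:]

```shell
# Prevent vif1.0 (a port on bridge xenbr0) from ever becoming the STP root port.

# Via iproute2's bridge(8):
bridge link set dev vif1.0 root_block on

# Or equivalently via the per-port sysfs attribute:
echo 1 > /sys/class/net/vif1.0/brport/root_block
```

Either form sets the same kernel flag (BR_ROOT_BLOCK) on the bridge port; toolstack hotplug scripts could apply it when the vif is added to the bridge.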


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


On 19/02/14 16:45, Luis R. Rodriguez wrote:
> On Mon, Feb 17, 2014 at 9:52 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> On 15/02/14 02:59, Luis R. Rodriguez wrote:
>>>
>>> From: "Luis R. Rodriguez" <mcgrof@suse.com>
>>>
>>> It doesn't make sense for some interfaces to become a root bridge
>>> at any point in time. One example is virtual backend interfaces
>>> which rely on other entities on the bridge for actual physical
>>> connectivity. They only provide virtual access.
>>
>> It is possible that a guest bridges together two VIFs, either from the same
>> Dom0 bridge or from different ones. In that case using STP on VIFs sounds
>> sensible to me.
>
> You seem to describe a case whereby it can make sense for xen-netback
> interfaces to end up becoming the root port of a bridge. Can you
> elaborate a little more on that, as the use case was unclear.
Well, I might be wrong on that, but the scenario I was thinking of: a guest
(let's say domain 1) can have multiple interfaces on different Dom0 (or
driver domain) bridges; let's say vif1.0 is plugged into xenbr0 and
vif1.1 is in xenbr1. If the guest wants to make a bridge of these two,
then using STP makes sense. I wanted to bring up CloudStack's virtual
router as an example, but then I realized it probably doesn't do such a
thing. However, I don't think we should hardcode that a netback interface
can never be the root port.
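A guest-side setup along these lines can be sketched with iproute2 (run inside the guest; the interface and bridge names are illustrative, and a reasonably recent `ip` with bridge support is assumed):

```shell
# Inside the guest: eth0 and eth1 are the frontends of vif1.0 (on xenbr0)
# and vif1.1 (on xenbr1). Bridge them and enable STP so any loop through
# the two dom0 bridges is broken by root/port election.
ip link add name br0 type bridge
ip link set br0 type bridge stp_state 1
ip link set eth0 master br0
ip link set eth1 master br0
ip link set br0 up
```

In this topology the guest's br0 takes part in spanning tree together with the dom0 bridges, which is the case where a vif may legitimately end up behind the root port.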

>
> Additionally if such cases exist then under the current upstream
> implementation one would simply need to change the MAC address in
> order to enable a vif to become the root port.  Stephen noted there is
> a way to avoid nominating an interface for a root port through the
> root block flag. We should use that instead of the MAC address hacks.
> Let's keep in mind that part of the motivation for this series is to
> avoid a duplicate IPv6 address left in place by use cases whereby the
> MAC address of the backend vif was left static. The use case you are
> explaining likely describes the more prevalent use case where address
> conflicts can occur, perhaps when administrators forgot to change the
> backend MAC address. If we embrace a random MAC address we'd avoid
> that issue, but we'd need to update userspace to use the root
> block on topologies where desired.
If I understand you correctly, this is the same thing I suggested in my
other email sent 1.5 hours ago.
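For reference, the root block flag mentioned in the quoted paragraph is already exposed to userspace; a minimal dom0 sketch (bridge and vif names are illustrative):

```shell
# Keep STP enabled on the dom0 bridge.
brctl stp xenbr0 on
# Mark the backend vif so it is never nominated as root port
# (the bridge port's BR_ROOT_BLOCK flag, via sysfs):
echo 1 > /sys/class/net/xenbr0/brif/vif1.0/root_block
# Equivalent with iproute2's bridge(8):
bridge link set dev vif1.0 root_block on
```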

Zoli



From xen-devel-bounces@lists.xen.org Thu Feb 20 14:59:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 14:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGV5t-0000Pb-Sd; Thu, 20 Feb 2014 14:59:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGV5s-0000PW-PR
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 14:59:20 +0000
Received: from [85.158.143.35:21782] by server-2.bemta-4.messagelabs.com id
	01/98-10891-84816035; Thu, 20 Feb 2014 14:59:20 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392908358!7120548!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8698 invoked from network); 20 Feb 2014 14:59:19 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 14:59:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102640193"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 14:59:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 09:59:17 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGV37-0003gA-6U;
	Thu, 20 Feb 2014 14:56:29 +0000
Date: Thu, 20 Feb 2014 14:56:23 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <george.dunlap@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1402201452550.15812@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Anthony.Perard@citrix.com, xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] request for a release-ack on qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi George,
the following two commits in qemu-xen:

commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:43:12 2013 +0000

    xen: Enable cpu-hotplug on xenfv machine.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    Conflicts:
        hw/i386/pc_piix.c

commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 25 16:41:48 2013 +0000

    xen: Fix vcpu initialization.
    
    Each vcpu needs an evtchn bound in QEMU, even those that are
    offline at QEMU initialisation.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

were already present in the tree and fix cpu-hotplug.
One of the latest merges from the QEMU 1.6 stable tree effectively
reverted them. May I have a release exception to push them back in?
Without them cpu-hotplug does not work.
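For context, a minimal way to exercise the cpu-hotplug path these commits fix, assuming an HVM guest whose config leaves headroom for extra vcpus (domain name and counts are illustrative):

```shell
# guest.cfg contains e.g.: vcpus = 2, maxvcpus = 4
xl create guest.cfg
# Hot-plug two more vcpus; with the commits reverted this has no
# effect inside a xenfv (qemu-xen) guest.
xl vcpu-set guest 4
xl vcpu-list guest
```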

- Stefano


From xen-devel-bounces@lists.xen.org Thu Feb 20 15:00:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 15:00:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGV70-0000Wj-Bq; Thu, 20 Feb 2014 15:00:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGV6y-0000WY-QP
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 15:00:29 +0000
Received: from [85.158.143.35:45370] by server-3.bemta-4.messagelabs.com id
	CC/2A-11539-C8816035; Thu, 20 Feb 2014 15:00:28 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392908425!7092490!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31192 invoked from network); 20 Feb 2014 15:00:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 15:00:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,512,1389744000"; d="scan'208";a="102640563"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 15:00:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 10:00:11 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGUzh-0003XH-Cx;
	Thu, 20 Feb 2014 14:52:57 +0000
Date: Thu, 20 Feb 2014 14:52:51 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <5304BC73.5090803@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1402201447290.15812@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
	<5304B501.2010207@linaro.org>
	<1392818038.29739.74.camel@kazak.uk.xensource.com>
	<5304BC73.5090803@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>, George
	Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xen.org,
	Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Feb 2014, George Dunlap wrote:
> On 02/19/2014 01:53 PM, Ian Campbell wrote:
> > On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
> > > Hi all,
> > > 
> > > Ping?
> > No one made a case for a release exception so I put it in my 4.5 pile.
> > 
> > >   It would be nice to have this patch for Xen 4.4 as the IPI
> > > priority patch won't be pushed before the release.
> > > 
> > > The patch is a minor change and won't impact normal use. When dom0 is
> > > built, Xen always does it on CPU 0.
> > Right, so whoever is doing otherwise already has a big pile of patches I
> > presume?
> > 
> > It's rather late to be making such changes IMHO, but I'll defer to
> > George.
> 
> I can't figure out from the description what's the advantage of having it in
> 4.4.

People who use the default configuration won't see any difference, but
people who manually modify Xen to start a second domain and assign a
device to it would.
To give you a concrete example, it fixes a deadlock reported by
Oleksandr Tyshchenko:

http://marc.info/?l=xen-devel&m=139099606402232


From xen-devel-bounces@lists.xen.org Thu Feb 20 16:08:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWAg-00021e-HV; Thu, 20 Feb 2014 16:08:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WGWAa-00021Z-D2
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:08:20 +0000
Received: from [85.158.143.35:15583] by server-3.bemta-4.messagelabs.com id
	CF/26-11539-F6826035; Thu, 20 Feb 2014 16:08:15 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392912493!7133732!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28219 invoked from network); 20 Feb 2014 16:08:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:08:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102674087"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 16:08:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 11:08:11 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WGW2E-0004ed-L6;
	Thu, 20 Feb 2014 15:59:38 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <drbd-user@lists.linbit.com>, <xen-devel@lists.xenproject.org>
Date: Thu, 20 Feb 2014 16:59:36 +0100
Message-ID: <1392911976-37506-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] block-drbd: type is "phy" for drbd backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The type written to xenstore by libxl when attaching a drbd backend is
"phy", not "drbd", so handle this case also.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 scripts/block-drbd |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/block-drbd b/scripts/block-drbd
index 5563ccb..975802b 100755
--- a/scripts/block-drbd
+++ b/scripts/block-drbd
@@ -250,7 +250,7 @@ case "$command" in
     fi
 
     case $t in 
-      drbd)
+      drbd|phy)
         drbd_resource=$p
         drbd_role="$(drbdadm role $drbd_resource)"
         drbd_lrole="${drbd_role%%/*}"
@@ -278,7 +278,7 @@ case "$command" in
 
   remove)
    case $t in 
-      drbd)
+      drbd|phy)
         p=$(xenstore_read "$XENBUS_PATH/params")
         drbd_resource=$p
         drbd_role="$(drbdadm role $drbd_resource)"
-- 
1.7.7.5 (Apple Git-26)

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:08:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWAg-00021e-HV; Thu, 20 Feb 2014 16:08:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WGWAa-00021Z-D2
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:08:20 +0000
Received: from [85.158.143.35:15583] by server-3.bemta-4.messagelabs.com id
	CF/26-11539-F6826035; Thu, 20 Feb 2014 16:08:15 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1392912493!7133732!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28219 invoked from network); 20 Feb 2014 16:08:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:08:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102674087"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 16:08:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 11:08:11 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WGW2E-0004ed-L6;
	Thu, 20 Feb 2014 15:59:38 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <drbd-user@lists.linbit.com>, <xen-devel@lists.xenproject.org>
Date: Thu, 20 Feb 2014 16:59:36 +0100
Message-ID: <1392911976-37506-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] block-drbd: type is "phy" for drbd backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The type written to xenstore by libxl when attaching a drbd backend is
"phy", not "drbd", so handle this case also.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 scripts/block-drbd |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/scripts/block-drbd b/scripts/block-drbd
index 5563ccb..975802b 100755
--- a/scripts/block-drbd
+++ b/scripts/block-drbd
@@ -250,7 +250,7 @@ case "$command" in
     fi
 
     case $t in 
-      drbd)
+      drbd|phy)
         drbd_resource=$p
         drbd_role="$(drbdadm role $drbd_resource)"
         drbd_lrole="${drbd_role%%/*}"
@@ -278,7 +278,7 @@ case "$command" in
 
   remove)
     case $t in 
-      drbd)
+      drbd|phy)
         p=$(xenstore_read "$XENBUS_PATH/params")
         drbd_resource=$p
         drbd_role="$(drbdadm role $drbd_resource)"
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
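
[Editorial note] The one-line change in this patch widens the shell `case` pattern in the block-drbd hotplug script so that a xenstore backend type of either "drbd" or "phy" reaches the drbd code path. A minimal standalone sketch of that dispatch follows; only the `drbd|phy)` pattern comes from the patch, while the `classify` helper and its output strings are hypothetical and purely illustrative:

```shell
#!/bin/sh
# Hypothetical helper mimicking the script's "case $t in" dispatch on
# the backend type read from xenstore. "drbd|phy)" matches either
# string, which is the effect of the patch above.
classify() {
  case $1 in
    drbd|phy) echo "drbd-handling" ;;   # both types take the drbd path
    *)        echo "other" ;;
  esac
}

classify drbd   # prints: drbd-handling
classify phy    # prints: drbd-handling
classify file   # prints: other
```

With the pre-patch pattern `drbd)`, a libxl-written type of "phy" would fall through to the default branch and never reach the drbd handling, which is exactly the failure the patch fixes.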

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:19:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWKt-0002Um-Df; Thu, 20 Feb 2014 16:18:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGWKq-0002Uh-O8
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:18:53 +0000
Received: from [85.158.143.35:60009] by server-2.bemta-4.messagelabs.com id
	E2/B3-10891-CEA26035; Thu, 20 Feb 2014 16:18:52 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392913130!7140871!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2576 invoked from network); 20 Feb 2014 16:18:50 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Feb 2014 16:18:50 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:51942 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGWJj-0001h6-0i; Thu, 20 Feb 2014 17:17:44 +0100
Date: Thu, 20 Feb 2014 17:18:46 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <929649832.20140220171846@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1142136480.20140220095359@eikelenboom.it>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
	<20140124174806.GA15571@phenom.dumpdata.com>
	<1142136480.20140220095359@eikelenboom.it>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------0860880181AA81244"
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------0860880181AA81244
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 8bit

Thursday, February 20, 2014, 9:53:59 AM, you wrote:


> Friday, January 24, 2014, 6:48:06 PM, you wrote:

>> On Fri, Jan 24, 2014 at 02:36:02PM +0100, Sander Eikelenboom wrote:
>>>
>>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>>>
>>> >> > Wow. You just walked into a pile of bugs, didn't you? And on Friday
>>> >> > nonetheless.
>>> >>
>>> >> As usual ;-)
>>>
>>> > Ha!
>>> > ..snip..
>>> >> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>>> >> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>>> >> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>>> >> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>>> >> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>>> >>
>>> >> > Yeah, that's a bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>>> >> > I totally forgot about it!
>>> >>
>>> >> Got a link to that patchset?
>>>
>>> > https://lkml.org/lkml/2013/12/13/315
>>>
>>> >> I could at least give it a spin .. you never know when fortune is on your side :-)
>>>
>>> > It is also in this git tree:

>>> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>>> > branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>>> > want to merge it into your current Linus tree.
>>>
>>> > Thank you!
>>>
>>>
>>> Hi Konrad,
>>>
>>> I just got time to test this some more. Merging this branch *except*
>>> the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to
>>> help with my problem; I'm now capable of using:
>>> - xl pci-detach
>>> - xl pci-assignable-remove
>>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind
>>>
>>> to remove a pci device from a running HVM guest and rebind it to a
>>> driver in dom0 without those nasty stacktraces :-)
>>> So the first 4 commits seem to be an improvement.
>>>
>>> That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to
>>> give troubles of its own.

>> Could you email me your lspci output and also which devices you move/switch etc.?

> Hi Konrad,

> I have now found some time to figure out what goes wrong with xl
> pci-detach and xl pci-assignable-remove, and I have been able to
> narrow it down a bit:

> The problem only occurs when you:
> - pass through 2 (or more?) pci devices assigned to a guest ..
> - and only remove 1 of those devices with "xl pci-detach" followed by an "xl pci-assignable-remove"
> - when you first detach both devices with "xl pci-detach" before doing the "xl pci-assignable-remove", it works OK.

> In my case I'm passing through 2 devices (02:00.0 and 00:19.0).

> I added some printk's, and what I found out is that:
> - after doing the pci-detach of 02:00.0, it doesn't call pcistub_put_pci_dev for that device ...
> - but when I subsequently pci-detach the second (and last) device 00:19.0 .. it does call it for both 02:00.0 and 00:19.0 ...
> - so somehow that call for the first detached device gets deferred .. but since these are different devices and not functions of the same device, I don't
>   see any reason for it to wait until all the other devices have been detached ...


> I tried to capture the console output, but somehow that didn't work out,
> so I attached a screenshot of what happens when:
> - doing an xl pci-list for the guest
> - doing an xl pci-assignable-list

> - doing the xl pci-detach for 02:00.0

> - doing an xl pci-list for the guest
> - doing an xl pci-assignable-list

> - waiting some time ...

> - doing the xl pci-detach for 00:19.0

> - doing an xl pci-list for the guest
> - doing an xl pci-assignable-list

> There you can see this strange sequence of events :-)

> But I haven't been able to spot the culprit.

I enabled some extra debugging and added some more printk's .. (see the
new screenshot).

From what it seems, the frontend state for the first device isn't
changed on the first pci-detach ..

Is the signaling on pci-detach the guest's (pcifront) responsibility or
the toolstack's (libxl)?



> attached: screenshot.jpg

> --
> Sander



>> Thanks!
>>>
>>> --
>>> Sander
>>>
------------0860880181AA81244
Content-Type: image/jpeg;
 name="screenshot2.jpg"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="screenshot2.jpg"

/9j/4QA0RXhpZgAASUkqAAgAAAABAJiCAgAQAAAAGgAAAAAAAABDT1BZUklHSFQsIDIwMDkA
AAD/7AARRHVja3kAAQAEAAAAPAAA/+EDlWh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8A
PD94cGFja2V0IGJlZ2luPSLvu78iIGlkPSJXNU0wTXBDZWhpSHpyZVN6TlRjemtjOWQiPz4g
PHg6eG1wbWV0YSB4bWxuczp4PSJhZG9iZTpuczptZXRhLyIgeDp4bXB0az0iQWRvYmUgWE1Q
IENvcmUgNS4zLWMwMTEgNjYuMTQ1NjYxLCAyMDEyLzAyLzA2LTE0OjU2OjI3ICAgICAgICAi
PiA8cmRmOlJERiB4bWxuczpyZGY9Imh0dHA6Ly93d3cudzMub3JnLzE5OTkvMDIvMjItcmRm
LXN5bnRheC1ucyMiPiA8cmRmOkRlc2NyaXB0aW9uIHJkZjphYm91dD0iIiB4bWxuczp4bXBN
TT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6
Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtbG5zOnhtcD0i
aHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wLyIgeG1sbnM6ZGM9Imh0dHA6Ly9wdXJsLm9y
Zy9kYy9lbGVtZW50cy8xLjEvIiB4bXBNTTpEb2N1bWVudElEPSJ4bXAuZGlkOjg1RjJDQjUz
OUE0OTExRTM5OEFEQkJCRjhGM0FBMDBEIiB4bXBNTTpJbnN0YW5jZUlEPSJ4bXAuaWlkOjg1
RjJDQjUyOUE0OTExRTM5OEFEQkJCRjhGM0FBMDBEIiB4bXA6Q3JlYXRvclRvb2w9IjEwMDMx
NjEiPiA8eG1wTU06RGVyaXZlZEZyb20gc3RSZWY6aW5zdGFuY2VJRD0iMzdBQzQwM0M2Mjk1
Q0JENTgwNUZEN0NCQThGRTNENEMiIHN0UmVmOmRvY3VtZW50SUQ9IjM3QUM0MDNDNjI5NUNC
RDU4MDVGRDdDQkE4RkUzRDRDIi8+IDxkYzpyaWdodHM+IDxyZGY6QWx0PiA8cmRmOmxpIHht
bDpsYW5nPSJ4LWRlZmF1bHQiPkNPUFlSSUdIVCwgMjAwOTwvcmRmOmxpPiA8L3JkZjpBbHQ+
IDwvZGM6cmlnaHRzPiA8L3JkZjpEZXNjcmlwdGlvbj4gPC9yZGY6UkRGPiA8L3g6eG1wbWV0
YT4gPD94cGFja2V0IGVuZD0iciI/Pv/tAFxQaG90b3Nob3AgMy4wADhCSU0EBAAAAAAAIxwB
WgADGyVHHAIAAAIAAhwCdAAPQ09QWVJJR0hULCAyMDA5ADhCSU0EJQAAAAAAEPkXFbhi6c9J
PDKtAE0qv1X/7gAOQWRvYmUAZMAAAAAB/9sAhAAGBAQEBQQGBQUGCQYFBgkLCAYGCAsMCgoL
CgoMEAwMDAwMDBAMDg8QDw4MExMUFBMTHBsbGxwfHx8fHx8fHx8fAQcHBw0MDRgQEBgaFREV
Gh8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx8fHx//wAAR
CAKiA6wDAREAAhEBAxEB/8QAsQAAAAcBAQAAAAAAAAAAAAAAAAECBAUGBwMIAQEBAQEBAQAA
AAAAAAAAAAABAAIDBAUQAAEDAwMDAgQEAwUGBQECDwECAwQRBQYAIRIxEwdBIlFhMhRxQiMV
gZEIocFSMxbwsdFiJBfh8XJDNIIlNVNjGJJzJsJENqKyg2Q3RREAAgICAgIBAwQBAwQDAQEB
AAERAiExQRJRA2FxgSKRoTITsfBCBMHRUmLhIxRy8ZL/2gAMAwEAAhEDEQA/APLvNXUn/wAN
aNB8wSNzx+Pz0wIZdKxQ1416Db5V1lohXIUJC9qU2G51IIOqXR2uRcSlSfykV5fxGkgJluBN
OLZ5VKTxFf56hgeR7/NjqBZ7QPH0bBp/PU0UHRzKrq4nip1O4AO1BT11lE1JxF8lBXM9sgjj
0P8AOmtGOmQxfZFUkNN7VrSv9uqR6Ck399KQS0ggGo3I6/HRIdBRyBZNFNN8/RW9NxvrSZdB
aL68tApFSeNEhXIgA/PRkOgZvUhatoqUrSoBQ5UoTp7F1ORvo5Hk0SAfQgGvz29NTY9TqnIE
7BLKimlacq76A/rFpvcYN81RHA0ogAhQFVetDqZKh3ey5tTSY4bW1HSfc0lVQSfUg6pM/wBT
TmTgu8QzU9lz0oRTf4DVJvpIRvcUUJS6nqKfj6apM9A/3qCFFBC/xUn3fLVJKgEXeCkEpKz1
9vH009jXRgTdoG3LmAOlUV1SZ6QJXdoagFFxZKegAIIppknVi/3WCervwNSgkD5dNEikxQuV
uWDR0AqIrsRT5U0KxdWF9/biCruJKx6Gor/ZpLqxAmW+pAkgGtdyeJpqObq5FCRBWCUvIBr9
R2IPy0SaSAZMWtO8gpHUBVf9+pMeoaXovu4PJJ68q02OkuolTraqgOoNOnuHr6DQMBpba2CV
pSa+7dPT1pvpMwKLLRJKXglw7A8ug+OiTaOtHB7EPAj4khQ2/HUJxfZS4aOMtrPRRGxr06jU
Zszj+2Q1EgIW3TqAsEfKldRdzgbSVV4L3HxSa/MbaoJXEftUgbJor5EEED+Og1JzVb5lFDgV
UNKA+uopCXDloKebJ470B1FJxUw6UlRbUn02G2kZOam1UJAUAPSnr89UBIZCwOW/SpSev9ug
ZB76Upy2/wB++oJEinGgrXodUCHXietaf2jUQASaeo+Px1EALrQUr8B+Hz1QQVD0Ow+Hrokg
0+3YUr8NJCgoig2IHUfH+OggcxyG3T6gNJCw+pKRQ1+Q6U0EkLD5NArqnoR89JHRLqabn8Uk
DUQFBlSB6dSCn/eaaiOSo6iCEnlv9J+rUAgoWhZJSdRCajYK2+fzrqEAVvRQ6fDqNRCao5Af
4fgeuohZG+493ofT+GgQipRB/wAW9BpAMgqoVbUGx+P46ggBBFRWih/tXQQSdtgCR60+WmCA
ngoVHtI9Px0CEUKBG3X4fHVIAINeP8Nt9IigCTvsR1PQHUQniTuTsBUCv+/QQR402BPw+R1E
BXSo6fH/AMNJBUNSPU/7tBCQB06p0kKGyj6gDUQNyduo6aCOgVVJHQChJ+eoRWx2J47fz1AF
uQfbSn9uoBQSOIKfTcn+7VJB/l+o1rUg/wC7UQFUoo06fT/fqIJIUFH0JpRXyOkg0AhSqCqR
tv8A8NRIWgca1pU9U+mo1ImiTUpO4Fa/Gv8AfoMMFDQJ9etfj8q6SQa6pqAaHcGu9PhQ6jTQ
SEUVxVXkP5fz1GQyk8QKfw+WgWJUkcU1+r5emkoD+PIcvVJG23x1MgBJVunYDcAD+7QAD7nK
k7Hqrpv01SIYRVPGgG/Wtd/TSKQXTam6dv8AzGgAxwb341r8etdIhHmQj4q3A+HxroABC61N
OHqdQwJJSaA7An/YagYPppSvH/bfSQaEEqp0B6EHfQygJQBc4j0+XrqIOp9xO9f51/jqJClp
UAFAVrsfj8tCE5qFNq/81BrRA5A1r+CR6aATDSfw2NadKem2oQAior671HT+WmAgJRUlIBFT
WgrqESFBIJUSCfQdNQCv57HY/wBuggUFKqG9NzWtNUiAKoDvSvStd9RISqtD8R1p6aUQqppz
/wBvhqIsGKYhdL1FcehQ1SG23A045VIQFqTyCVKUQBUdNbSUSzF29IloXj+8y3pTMO0vOyIS
uMppVElCgKlFFFNVAeg9NdMLZmlv1OMPCLhOiPyoVskPx4/LvrbSfbTrUH3bfLWWktmm2dEe
PriuwG+pt7yrQFcXJSBVCQPzf+muxPodLqk8B2zB2a8aXN60uXVtniw1VbjVU98NJFS52T7u
AG/LQ0pg1LbwMn8KdZZZkvxJCGJFOy+ppSWVlQ24KI4nRBqTi7iQabbU606yl4lMd1ba0Bah
t7eQAV/DTWpjvHITmKJYIS+w6y4U+1DqFJWRWholVCf4ay65wFfbiTk7jDDbnb96TsS2tJCt
/kRUa0qMXcScaYCikkpJ+muxHxBGp0L+1IJWOxkVT3SE71Ktjq/rJ3ZzONNVFF1CulevyoPX
U6h3Y6/0JcVQFz2mXlwAoNrlJSS0lX+En+/pqSXLL+w5uYdIZcWw/wB1DqFcFoKTULArxO3U
DfVb1ivZiRv/AKWqBR01JpyB2GswXY5/6ZcFKuUBNBT0OtKkh/ZAtWMukcQ6eKfnsP8Az1l1
g0ryck4y4T/mbf4vn6aepdwKxmQNu8B6gH1GswH9ghWOTO4Ry2B+Hp8dXU1Ia8dmLXy5hZP5
gNzqgO0Cf2CcklSFA0r0rXbT0ZdwlWW4qV3FLB5nrv8A3aoHsGizXNCFMpUkdzdaOp2+eswH
dHJNmuQ2SQBWmx9dMF3TALTcTQKT1NKbenx1dWPdCHbdcVrX+nuOtNv4D5aoDshCbVPSQvt0
I30QaTDMC5cwSzUDoaCm/wAdPUhLkOeKq7NAdjxGiAVkD7SY0aqj8hxoNqjfodvXULaEJjvp
BSqOVH5jcfy1QXYIR5AClFgqHqoV6nUgkSIslRp2lfMU9NTFNCi0Qj/KWNvntTrTQaaCKRwI
QlfOtUddh6/jqMwgitYUke7jSitz1/u0wUCkuhJHFTldypdafhtoABkOinBbiqipUSf7PloF
IIPu7/rLTT6aE0P460TQa5L/ADoh1ZR8SSSRoZJA/cJnEp7yv5+mpE0Kbuc5sex5QNagildI
Qg3brPeNXXlLUdlKUASf400FAgzpNASoEnatBXUMIJUt0ihpX40GohJfcNAaVGwoBpIIPGgF
BQb7j46igPvD1Skk+vHp+Gogd47EhNfmNEEJ7nQUGogqinQEg16nUQrlQE03/HpqKAJUitaV
/j8dQhlwADryGoGEHACU0NSKH5aiF92orU7dANvb01CwB81JKlbGqTt/bqCBYlr3UVVBHT/d
qISZHqQN96U2320wQO8jYlPuPrvXQAQU0UkVIHr8a6SD5I2PL0oAfSn/AB0CKKWaDi4AfWuy
jqILigGoUnY+7fpoJBlKSaJWmoNQquohX2zxHJKkUHX3p39eldJHFQ4Gn9gOog6jiAk9D6/7
9UAGtKy2abj46iEnkoDfp0/jqIIgFVT6DfUxDopW3Qj6aeugQ+NQeld6f8NJCATWpHXYeugA
6E+lKdB8dSIKgUKk1ptXTJB/4QenqR1/8tRCkdAnpX0+O+hijrXrvQV6H5ai2AJPKp9w+e3X
UEBcQBWpHKu/p+GqAFIJAANKUqf92oQykgdBWp29R8xpICakkgBKTtXrXUASSaUqa13A/wB+
ggKTXr/sdMkBYTRRpuNuXpoIPkQoAUCuPUDamtEKNOO42pWv9u2ssRKQDQjau4J+P/hqAMgo
XQCtfh6/x1EGoACo6j+RppJnIAUrvUenwGohaj0KD7uu+3T8NUEEBQ1IHEn6TvsdAg6GvHY7
D0pTUQAlVf8AmoaE+v46gYkrKVCiuSuvy21EKFKdP4+oJ0EAJBBr0UfdQ/DSIRCdjTielT+G
oAgOQoD6Up/4aiCKD9SU7E/H4ddRBqUk/I/4ulRpgRfIrodg3Wv4HptXQQhXFRI61PSvrqIA
UAlVd9JBAGn+L/lPwProFVAaj2DqTv8AP56UAW5PoPj+Pw1EAhJNOhpUDqDqIUptQUVHelKA
aiQQVRJJH1b/AIV0QQRNEdNlHYn4aoINPIdTt6H/AHahkA4UNUgkb1r1r+GlEI91K/lrTjpI
tuNXjs4jPtfAjvympIWD7eLaCgpUn+PXXRKUZ5ks9tySI5bm4t5MlTseai4ty2Slbi1ttpb4
O89+NEJ93XV1c4BXzkkWMytz1xh3GY0+xIt0t2awhjiWXi893+24VUKadKjREFVNOVoi373G
est1tylvsImSlTY6m1ckclAjsLTUURRVSoevppq9A8aLLE8hwV4/KaktuMXIwnYjaEE9tYeY
SwpSlA7ijYUAobHWbL8sGk5yhldsmtsy4IntXOWIzzkRxyyqQS22WOAXQ8uBBKCocUg76l9D
Lf6HOVm0t7JXJ0qQ7NtC7k3MdiPb/otO80BtKtmzx6hPX11qkDCZKXnL7bMnwfuZKHYjL7r7
UiM293GeaaNoWXlLUUg0Kko/hvoqmghD2bmdlWhEhEhoXhEC4xlSUturSVvFsxgFyOaz9KqE
/ToUjBDyrzarpAU2/MbTdrhBjokTn0kJ+5YdUV91QClVLfGigN9SlBzI+N5trKFIsdzjxroI
FuZRPdb4NKLCFiUglaF0VyUnqPdTrpZWSiSBtdxbauN2dflxWlSob7ZdVF7zTy1UPbaSB+kp
ZFUuD6f46XqCIqHeLnCDKosl6OY/MsJQo+zuf5lPT3/mHrqdZJF9sWQY3bsTRzUiRd0pM8ud
pLrplvpXHQOSnEnmy2o89t0kfDWXbJlVSM3IAASPYEdadQB+OqBbLFcoVlFvlritsm79ofuU
UK/6dgBaSlyAoE81qH1pPTemtd/0M20dMhhXBrFcdXISgpYEtBKFtqUlDjqVtJUEkqAKa0rr
Eps1EDHC7bEuGTQYM5gSIj5dLrBUWwoIZWtA5jdNFJ3OtIiUTZba8IT8qBHi3JcWQ6i0Nr4s
uuNFP2/Ic1K/WBV0VvSu2syF0tkfGhuuZPbG27OIjjrzRXbG3Fq2S4AoiqlLRQb9fSulsaxx
oRcbZDGR3ePcJirc4xKeDZcYW73ElxVFHgU029fX00yC8D2y2O2KjMyPt37upd0bhtrjlTBb
bohff4EKryUSni5ROru5wLUoef6Vxtbr7s+QtC5UqU2hTCHj9slp9SPoaQttVafStQ029kmK
euNsgpLVhaw4yEw3v3ZEgsmYhz9M8RyBW3xJCaeleuqstwN0kh6cUgKtE53mtF0t8Nc99law
eTaeB2bSiiahwe7uaXYuiZzkWeyQr+xaQmS5IZmMRpofDZjOoWpO6Amik15bcidKymVqoa3a
02tMKZOtjz6m4csRJCX0JTyU4VlKmQkkpSlLdCFHWUELgZWSC3cZC2H5H2jTEd6Ut/gXKJZT
WgSKE8umps2rcDxmzWeQ848zcHf2uPFdlSitlIlNpQpKOPbCuBKysFPu6ddUrgOuTlc8fRGh
uT48jvw0ojPx+SO2tTcoqCeSaq4qHbNd9KeYK1YHkfF7X3JjNwuJirZhtzWy2yp5Kg8hLgSa
EU4BVCPXWdFEoRY8Gm3eEmS04ltt51UeLxbKw46ghKkLVUdsbg8iKaGwVIY1VioQww47OisP
yG3lRIqirmsx3VNOJKwOCfcg8STRWnsTUkMQhABCaJAqdtwKb7aGXwS87GmYyn2A+kTIC2hd
W1pIaaEjjwU24KlYBV79vmNaWsj2gRc8eiMWxu5MvreZ5IQpL7XZUsq5e9j3K7rYKOtB1GrA
KWc3MbkR7gqA+lguJZTNdW04h1tLKkhfMrRWqkpO6eo6ay0KtwKOKMlXfSttVuDbr6blwUAW
GHA04vs/WFJUoDj66XHA5nJHzbO3CkBv9N1C0ocjvN14OtLFULSDQgKHoeh1OqM5FwsZjzml
uqLMZkLDHffVwSp9YKkNDiFe4p9eg9dSrkZYyetUdt1bLrHbfaUUOIIoUkbEEdNTqMiDbICg
FdpKqmhp0prMF2ZxNnhpB4tCvpXfb5a1Bl2YX7LCVWoqT/vP9+hpErM5/sMMA80kb0G+22qE
b7CTYIyiAAQn41+PTRCFs5KsTHIJ5mtOvwOtdUZ75gM2JgmgdKa9Cdx+GswKucxYh6Ob+hI2
+GgewkWCtaub9B81aoJWEGwroSlwGm1AOv4ahC/ZHuHJJB3ofQiny1FIRskgD60n0H/loYyJ
VZpKTvuqtAB66CbEm0yUq9Px6j+etFKE/tUlRoAOXwr6jVBSJFtmVpw939vy1QEhG3yQKlHw
H4nVBdkD7KVy49smnw+OoZE/YyQaFBqdh+OqCkSYkmv0HY0J+GoJB9tISD+mdQoLsPcQoNq3
Ox/u0CDtuJNVJKQNwDsdQCQh0n6T6k01CCih1BFT6jVBBFKtqig/DUgAnn8Kj121EDkaj19B
+GogjWukgA0IIr+GgQyokivQdKaiBy3NOteo21QAB89gRqEAJKeNd/QaiAFqrxrQfz0gEVAm
tK/L56CFKXWg6elflqELmrjvsPTUAovLAANKevrqEUHnE+vXqNRB/cqr9Irt0+Wog/uh0CQA
etPXRACRIAVUpr8aGn8NQChJJJJT7j8PhpFINUgdCkmh1EBMlCRXiQQCNvnqAAeb25AhPxFK
1+NNRClPMkAcjU7lXx+eog/uRUUJ/E9dQh/cNDofpO5/4aCC7jJNGzXeu/r8dRALzfRJ93UA
7j8NQClONk05e31ApuaemkmJHCtUkGmyzXr8NtQBpHUdEp3JGoQAj6Rsk/m9a/DUKAKqpToP
p20MGEk1HEnl6k/L+GogFICK8QQo05GtfhtqI6J5J9x2SNiB1P46pEQQk0oK71+X4agAUnrW
p2Cd/T+GoggsigHp1Hxp8tRAWOQoqm3Q9TpIHEbAUFRXjqINdFAUIA3O/U10CDjRIUK0G4Po
T/HSQAoKApsK7gdSdBSJJ3+Pxr131BICnj+U9Nif7tRBAA0APXao+GkpAahRSD8qU2pqIMLo
OVDU1AP/AIaibCqCClR6Cm3x/v1Eg+IISlJHL41rqNBUrsAa19KV/t1ADginU068t/5aiL/4
WgSJ2VxG22DIZPdElJSFICFNKT7gfy1proo6uTNo0ydwDHZknIJUF6DIK48aSh0JSoOMvBlR
QFbfmI48T11rvFdj1XWCStNhan41iTEy2uOx5EmVCnTWuVIqfuAKuACoU3y/PtTU2m3kzHE4
GUfC4BhrfKlrdZEgItaV0fuKWHFo+5iEVACAP1E+tDx0didcDGDbQrxvOnuQ1OyI1zYQ3O7a
qhpxlXcTy6FvkkE/PTeJQPCG061ssYxarkmDLYfmPPNKmOFKor6WvytcfchaPzV/hqS8sy2m
N7Hb1XS9Qbf3O0JklphKiCrj3CE1oOtK611JJ5LS1g9ifubcNF2ASFSmpHYIkLSI7SnEu7BK
aFSeJSd9c+42oR1qsVkuZkrYnOoTDgSZzzSm+S0/bceIqPaUuBZ+YpvpmECRxZs1pciTLmzM
Wq3wmmXJSeFH21PLLYRxJ4E7VrWmrJt4HsrDWI9ukXOROpGQ1BkRkIRycWzcErUgkVACkdvc
ae+MGZbRGMWu3PTZMZVyKWY7C3Y0lthx1DzqQChsgULYVWnJWwOrMSZUkYVkgJSBU9T1p6ak
hvYk7ZaY0jH7vcS64l+3Ki9kJoUFL6y2pKwd/SoI1XYVgJm2JcxuZd0+5yLKYjqHIhY7qVKG
1CFBXH41B0pank28khf8SMa53Rm3SW3mLWEvPoUr3tMOFCe4tVKEpWuhA31htHOGRM6w3KCw
t19IbSV8Y7hqEyW1GvdjqoO4gEfVrbqpD8jtcIK7S5bZEOUpS5kBmchYBQWy/wAkqRyB+R/h
rLOiY0ttmk3NbrEVKOUVhclwLUEJS0zTmanpTlsNKxkw3I5Tj13SEugAxy130zQoBrtBzs8+
5/8ApPZ066rQK/FCLfaJlwu7dtqpxxzlxU2QsqCEFftKiK7J+OjgezYyakyUIUWXVhK0gK7a
lJFBuBQEVGjqjSt4HcePfBHU80X0tPgrWErKUvDoo8ajnv166XVM52s5OUeLdxbZU1lDhtYW
I0p1BojmvdLaxX1pUba0o+5S/sOC3f2rCmcJC02xTqoCGw6eqkBxSSmv0KT8fXWG1JpN7O8g
ZEpvgqSuQbe42C2hwOKYWo0aIpWtVbJIrrUZLs2N48e/Pgwo6XHzNdHcZR7uUhIUaK//ABlC
o/z1WQ1bOAj3KDxkxSA3IS5GS62QpCwoBK2gfWoI21gg4S7lbriEMBUefUxnGXRQ+8gKacbW
KUrSoUNKhi/gdqnZD+7vQ5fH7wn7aQw+lHZSpolKUlH0Dtknj8NDhaM1tOGBy5X55bt7eIkF
RRb33FoSpDnFr2trbAoR20Vr8tUIlZMZsXd9tlDammZLDYUWEPthztFR5LLZNONSKnUyTSCc
kTp0RpLoDka2oKAtKaBCXnCv3Ef4ln2/y0vCFucjJVSqikihSf4/I/I6EzNpJJd+mOFRebbU
86WlTXSn3SQyQW0PCtOI4ivGldTGUC4zm5yHH27eGffRclDjzobCh7WhzJSgCntHy1bJW8BO
Xd1c5UuNGjxQuMIzzMdHBtaCkIWVJqfe6BVSvjvpWsitihfVpQmElhIt4ZejIhklSw1JdS84
O7TkV80JIV/ZoRdglXCM9cmXLlGcchsN/bGI0stLS2lJS2kuUJqlRqTT3dNSHp2ycbfc0tQx
EkNl5huQJiQk8Fd5LfbSDWtUEdR/LWzmk/0Gcx9yTLdkukhchZcWkDaqlcj/AA0NmxJrx6VR
/hGxpXWGUiA0UqKaEgj+Xy0pghJTx2oooBpUD1+ekmgwlJPL/EKAH0GssUgLbPKprxI6f7tS
NCQhFa+vQ1/4apMNHMpI3JFQeu/TVIQwCvqAUJFORrXfUdAh7llPSg6fL46gQSknkSCfkkeo
1CJpVI34kio9d66AQRbqVChJHqPQH1rqECikkjdSq1SOm346oISEndIUBXdSetPXbUSQdU1A
2qfT4HUSCCfZv1TtX8d9tMi2D2cOITsN1D1roMwFxQeg/BPrvqIBZBAKjsSRX5aZANHAChA/
j/t66y2aQklHp06D5gGp66CkCkGtKUAPpQ/z0iw+2kgGiQo7mvWvwGkIOa0BKVJUNiahXrv+
GoYAG2uNOI413FPQaJIQptNSCkEg1rT0+WlA2Elhsg8khIFTxpvT46gRzUxHUrZscdk/L8fl
qGQlxWKUSmilD209SDtqRpg+zaUN01I609T8tDCTmuHHUqvbKUprt8dKISYLB9yQN+ldtQSE
be1ySD+JPxp6ajciFwWUp23JPX5aoMyJMFlR2VT4j5aikP7BsKJqaEVSD1/jogpCNtAVQKFa
V36aWMiV24JO6tzvT1HyOgpB+3b7KFPT409dtRSBVtUkBQcBCvp1DIhUBSVcQeQPqn00h2C/
bnSaClR136aoKQvs3QaJIUo9abjVAtoMQXzsAASD/ZoFNCVQZISPbtT+OoGARJA6oNR6fLUU
iFRX6bJNP56oCQfbPAVKdh1+WoRPZd3ISdtRSH210+kgU/hTUUiQhZrQb11QQOCydga9dRAI
VSoSQPjpIJIUrem40EkCh/jpKAfjqAAUqppUjfpoggclVFCR8NRBhSq9dvmdQig46E05Gh+f
+/UQO+vpXp6/HVBBqeUSan/zGgQu6QeQ6+h+eogd9dDQ0+IppALvq4nYf+OoBfeJSagVPqBq
QgL9E8aUA6gfPSQSna0FaJ+WoBRf9tAkb0qNZKA1PoKAAmhG1dQwJ7yR1B22r66QFd4VqST1
oPn+OooE8xw5b7bU+R1FAaXEpHLcmv07DUIOSKAcvd1J0kBC0e4HcE+v9ugg+5+XmeNaU9Ka
gLp4us4vV5j2lUtyAZhcQ1KaFSlxDalp5CqapUU0O+uqrNW/Bp1JzFIMy83uTHbuL0Z8RX31
qbeLbrhjNlYSgqPuI413NeOjp+Ewc7exPHJIWzDsnm2G33e1XNPO7qkMSIj0oRi+604EBplJ
NXVuVrQ9fjp/HTGs1ZGxsYy55yAiMy4DJQ+5CWFUDYjKUmQgrrRpSCk8kkj+3WRbZ3inI04V
LuEO8OtwGpQgybQVLSkh9Bc7gT/llK+JSoddNlAOVgYSbPfo1iiXiQ2tFlmurZhPhxKkKda/
zE8ASUEfMb6ZMjGKJK5DXYSpUgrHZQ0CF9yvsKeO9a6Ykp5LdeoXlP76Ezc2pq56m3v29KA2
ta0qQe6B2eQKuBNQr3U1msFYjmcfzm1TlQhbJsedIiuJUyGiVORFgB0GlQpHSvw0yjK8hWuH
mtoyB2BDt0pN3UgJVbFxi6XED3J5sLSpKx6p2/DWmuyFOGdZSs2uLsq2GDKffccbTNhIjKKk
LZ5dpAQhP6fEKPFPTRWnLH2WxgZWqRf7cZwtQktOLYcYuYYQsr+2V7XW3kgEpRX6qjQ0YqoW
SMoAEpFaHp8z/HSmTyWWzzLwxi13LFrizbMVst3Z95tLjiFLJ+3qQpKgOYPEgUr10NtOQrDl
HOLcpaMSkwBbIn7WqQ0JM5QIkl8hRaKVchUo3FQkgeutNNuTpXwOr5kt5RKnxrhBjxrpKIj3
Z5tHEvoRxX2llKi2r6EkkCvz1lpmvkiLnksu4NKakhsslzuRGx0jJH1NRxWiGVeqRt69dNEz
lbLgf3y6NPv2uNdLCq2Ihx2UISy66247C4lTXHuhad+RUFgGuprmZLGg4V7xK2Oy3I9slOfd
xHYi25j6HW1B0pKd20tqBTxO9dX5PCGFATWWw/tzDMJIsIjKiJhJfWFpQp/7gESSOVe4N9tx
toatvkYUCMeu8aFkbFyYtT0hCAv7SAy8oqqWyndfFSl+wk7DV8Ik2tkfGfsSXJinoT8iO6yt
FtT9xwcYc24OrUlNHePqmgromMIkvJ2NztMi2w410bkF+3pWmF9utCUKS4suHulQK0nkdijW
1IezL8DVMuALW9EVC5TXXEOM3DuLBQ2B7mi0PYvl15Hcemh7FD4z7McRVbCZIuQlmcCpLZjk
9sMhFeXMJoK1p11izei64wOEX+yQo3bs4ddeQ9HkR4koIKIqmV81BDqCVuBZ9p5Aba1vYYSO
Jv8ADbjPQ7c45xnS2J8uQ4AlbLzQWOKOJouhcNDtXQ02XeNB3q+sPxoMFp5U2NCfXI+67QjL
UXSkqSltNeJATsonrrMSaTjY0ky7TNyR6T9zKagvP91MmbWTLKKhQ7xbp3FmnGo1qGLeSZau
1hT5E/ejP4WgS1TBLUw4CAa/pdr6uW9K9NHUE1IiBLtrVnkWpu7IaeF0bmNSu052nWEMKQrf
jzQrkacSnfS0UJwcDc7I5OecKaJkSlLtpSChNuJeCg+pI2eCk7lvoPx09mXAVqaYVack71wi
pkTEITHaW4ptx9TEkOlYTxpxUkEp+e2jsTQy/wBNZA7bxNbjIciKR3C6l9ivADkSpJXySQPS
ldSSkNo6x8Zu8RbFwuEFZtIU07KdSptQ+3KhyVRKyvdPoBXVHAInoCLU3Ddh3l+KmFIu0RwM
MOJ7ioSUP0cX2/RPJIqdweujnBp1TgjJUOBFuEGLNjBhcgOtSHytl0pS9xDToRHJQC2dwSdL
coIhnB+1FF4gWVlpmVPhOFE51laVIfWHAtwFwKDakNI9qVJI5D56ybgczrJ/9qXqNEjfcXJm
YTEh8+dYiislY93vFOBryrvrRlaE2qwwZcF9a4zqrizOSy/DaQXy3F7XNShxUmh51HImnpqy
tmb8EBJiqU/I+1Ye+2S6pLVEKcPGpCQVpBTyp1po5EnRjkcQrW8Ir6GZ1uemPzCSptLza3Uo
26I5FsJ4q6npo4LmCMudpDBUllh55hDbbqZtD26rbStQqBwohSik0PppUQSrkVLtoZxuDOW2
6gOSJDauaSGylKGylSVUHXl8afDRJqVI0xyBEu19hW1bqm2JKlBzt0LiUhtS6pB/9OpoEjpb
bXDue7L4j9iM7LkJfISgJaCagODYcuXr09dSYRI8iY1DmIcLVxQ6ENNLAboviqRKTFShxY9p
+sL9vpt11PBuJQ3umPotkA3CRIBh/dPQ6JT7+5HKkqO+1CWzTUZdsYOsrD57U9MAvJelJm/Y
vIaSo8CpgSUrTWlR2+RKevtOpA8jeNis6QxGeQ419vPSpyK+VJQz22yoc1qWQpAJQabfDTsE
4IYcTRQJoQCNtxtWmss0x5Is7rMkW8uIM9K0sLjgkUcWQAnkfafq66QUtHRvGZbkoRm3Wy4m
SqE8oqKUtOoQtagvavRpW420NihTONXJ1tTiQ32m2u+txS6J7YZMhJ/+ptJUBrWB5Ob+P3SO
h1TzYDEdaW5LqiAltbiQpCV09VJWD/HQLYk4/dQw5JLYQw2GS48ogJKZSVKYWK9UrCFUPy0I
GcUWi4qVPQGfdAaL0wLUELQkEJKglRHI1UPamp1QCGPEBSUj4VqN9vl/HSQ+TbHlIeDbra34
7JlOoQobMpAUpRPqQDukb6Cs0gv2yUY0aQ2EBmW6plrkoAlxASVVrsPrG+ohLsF1phUtIC4y
XSwt1JBSl6hXwJ+JSCa9NAphOW2ShwtLb4u9puQakBKGXAFIX/EKGlILNQJMCf8Acx4wZUp6
YU/aoH/uhxXFKkHoUqUCn8dMQZVpEJgSnVrCGiVIUW1b/nBopAPqRoZtMCYckLcS20tbjCS5
IBBBbQlQSpSwegSTTfQIFxZXAu9tfClQ5Qn2j8wHw1QEnEsOlsL4KDSiUpcoePMCpSD0qBvT
TEAIHcPtSgrPUhIrv+A0EBDbrntaQpVd00B5EevTrpJyIcYUlXFfIFW+4p/DUQErIqQCaA+4
j16amjSRz5tuHiPq2oRU7en89SRTIQTtVWxR8PUjTBJBbH3iilVqqnroBiUqoACaevIelfj+
GokgEo5VI3O2/wAvX8NQoCgs191Rtv1/DVIMJKFV61bG5PQ09aahQpW52G1KEEeuiCElIrw2
UOIJOwIppANKq7041G341676hBy9D7q7V+Ap01AJPHkCAKJ2qK9SNtJBpCj+UACpr8TqkoFD
c0G5PSuw6emsiJCifaK8gKCnSnrqCRfIjoahI6j0+ekgqUCSDuTUp6En/hqkgKXyJ5IpXepP
w20GhRUkiihRX5um9fnqAKqCvtkVCfQfHppAJQaKthxSn6fnpRaAhLRoVAdPdXqQPnqEIhKu
QKfq9TsKddBCSlgABKAa7VpTf10CrA7LIV9KPUEAddRSEYrPo2BXevpoTAAjRkNpKkAKpUn8
daKRIjxyKKSAo71r6aoLIpMCMRwIqo/TvTb/AG30NEEYLPAbeytBXY1/HQaEGDFFKA0/xVqK
fHSAj9vYP5yKnYkDUUhptzP5lEfAgVNPjpKQv2xKjw5HkelPXQWxP7WnclwAAV6fOgrobGAl
WtyhKFVSOupWCRRtrnHZST8aeoGmRkT+1vBRSDunpqEIWt+o3BV6JG+ghAt0lR6Gqag03G2o
AhbpRJATUjqnSmQPsJNAQnqCRqkghClFJ/TNAN/iNEkJ+2kgV4HYaSkL7dzrxNB6n/jpIP7Z
8EkINBok0qyI7DlacT0rXUEF98UXiXaLwidFtCb3Ka5rjxSpwKSQg1WA3ueKa1+WtqeuDFty
WbEMit0HKJc6BjaJin23jGgfcPJSw321B8IWPe4OBUfduNLtbpHBJVnIu25jCtdssv3GOIej
W6Q/Jx2eqQ42tKy6HOJX0eQ2oU4qGiWSaUwIieS5rFvk2tcBr9quJfVfYaSQJbzyytDvMVU0
pqo48fhvXUqN5Kl5ZztuR48xg8+ySbTLdlypAkIuTcng132UlLY4FBBCUr9yQqp66fY3g1jl
kS5doDtgi21u2tNTIzq3Hrk2tYW+lXRDjajwHD0I9NLlnOIRysVyVar7brqhBccgSWpSWeRH
PtLCqVHStOuulLRhlVwaFcPK8R65xJEdy4LZYXLkKZdTDYU29JaU2gtmKhvlx5/Uo8qa5KuG
beisYzl0q1IltPPyHGHLfNiR20LNWnpiUguDetFcfdQ11dVEB2hHWwZYyqPMtd7mzGosthmL
HnxauvxEx3S6EthSkEpJUR9W2nrCMu8vJMXfyBGds9wt9rkTWnXGLXFamklpT6Lf3Oa1lCuS
SrmkgVP46EpOk8kNZsqWi53OfeZlxVKnwnYwkQXENOOuLSkIEqoAcZomi09TqeEc+6X3K8ug
AUQCvoANh+IHx1VHqi0Y45jicSyGNOu6bfc7j9sIzC4rzyeMVwu1LrVac68QKbHrtrTbwZFQ
HLQfHM62u3xpNydmMzo1pW2+eJZCkOJQ6AWwpwKB22NN9FrKTeWWbIsvtU9q8OIyFE6FNbab
tFv+2U07CloW0TIUkp48UpQr3hRUr1G+s/A1iSCy29YzPgymrMGmZS5oVc3S0ALiSCBKjpA/
6UJVUqaG29flpWzNmhOU2xqe/jsa2Xe3XN5Nvj29aWpBBaeZC1K7in0tpS2eVAa9dKaM2r+Q
7xXFbjaZ1wfugtrYVb3/ALWRKcYmsofQpBQpbbSnCmlag0266rNGa1f2H6JEAy1KEqzOZcq2
8Vz1CP8At7kr7qpopSBGUoxdq8Pl10O516ojcXYdf8ipVIdtSWwh371TbjTML3sqQS0SUI59
wj/L9dxtpURkIfBWo2MXF524RQ7FRItbC5ElLsloJWlr6gwsEoeWa7JSd9EhME5ZbYHsdjOQ
LNbbih1Dn73InLSl2KrmQ0ELLja2R2xyqhKtMtMnGyHYjQ/9Hzn0sQlyW5bSESVvrTcUoWPp
aY+hxk09y+o0pyyaSglpFmuT/jCHOTakpYj3JxwzUISFmMphKVOOqBK1I7nt6UGs4TZWROXi
Dd5FkQ1cbU2m4IlxW7bFW1GatzocXw4QJDHF1xtaKdwuqO3TfVjyDzoisyYksQIUK5WeQuel
9AEwwRBZoAecOOpKUreCqgc1+6oqnrqnODTSYwzWzOxrzZkmzm1NzoEFTsNLa46VPElDqQXN
+Z2ClHodzpn5MumcDOfbFwc2ftkW1uJMWaltNok0mrQkFPJt1TXJLop1UnamqzUSKwyViWWG
jyubOuA4ISbi60mE2g+xs8u3RDiV1bG31DcaIaFQcLPbLOvEZEq8RJi1ovTMQqhpQh1Adjq9
q3HEqCU1orjTc/DWl8BxIhvC2nZM5tuSUt2qQtq4uLRwW6yHOIVCb377wSDyb61+WslVzoj4
UO1v2W/rU2pbsJMd6FLWsoWhC5PaKVtg8T3EKFa/SdWQbwQRQ1zKgkcuhFB1OnqHJ3hQlS5z
UVgpS9JdbaSpX0hS1BKSadAK76EoNNlrg4XarnEEaz3H7y5u3OLbkrcbUwGC626p7kklSVoC
mvasKr8hrTfky66OF58fSLZLZjsy0SPuoz70cFAZdW7GI5MhpLjvuWFexXLffWeDUNEOzY0J
XZRJfaYj3eiuBSpRZaL3aSpYSCSHPqRxHTQTmRyjGI/7hdUOzWrdBt8tcUypPcdAcClcG6NJ
KlVCD7qa1acI50XI8sGA3u8tyHLfIYVHiPIiokoLzrbrrie4hKey2tSQUnqsAV21No6dXBDI
ut+tbjsFqbIhmO8pLsdl1SQlxBKVbJPHrqgyrNko7ZMjk2dia9PZ7L8ZcqLFXKAkvR23V9wp
ZH1cFhaqK3puK6yok1ZwhnLsl8ZtJW/LaEdA5rtP3SS+gH3dxUUHYKSoKBHodNYCycBTDeGb
BbZjk55yLJdcDEZS+bTZiFPH28jxUkq+lSRtuOus2qmZytgZvWWX19m3sLVKmPuARmWGWm31
OJBPtW2lCulaiu+jRtOTicfym2XCI0mI7HnvUdg9tTZSUgjcOJUpqgJHIKO3rrTUBHgdPQs6
euU7uRpS57DYkXBKUpohlhwLDiu2eBSlwDdNfdqjgl5Djxs3TcHIrcGTInNIccfgrYD4SHj3
VrLawpNVFfIq676yaeFLI6NdchjGYWlvJduyTDlqWgqW9uKoSViocGwqn3Aba00GwRnbqi1u
9q3iRFiq4Gc5F732oH5UPFJDdCdxX10IkkEi5WsRQ2uzMOPNoCTL7z6FKWCT3CEq48v4U0FZ
ZkKVd56mmnn4iG5bhS6LqWSh15TZ5BRcPsXQjqkaUpRdocHeRkk4SEPIiMxFJdMp4IbWC8+t
Cmy6vmVV2cNONBog31wcGsulsxXYwQ0WXmUx1LpQBCI6oiR+PbWf46uph4yd3sudeEpu4MId
hTXkSJrAWprmW2kNIAdT7kULYOqCTyN7hkipsAw3W0NxymKlj3lRQ1CQtDSCo/Xs4an11JEI
kXtMmZcZMiEw65dkUSpfJSozlQpLrJBTRYCeO9RQnSkaRGn3qJ22+kjpUfLQZZ2TJKbeYwQU
qdkd4PVAGzZRw6b+7f6v4aiFvygu1x4e9WHHXCCQUVdSnenWvs3roLAiTJZfajMskhplhKHU
jjQu1PNdE+pqBU76eDLWRxcri1JmtPuKKmAzHacCgAQllKUEbdR7fXro2aYpq5oRe49wKlCP
FkBUegrwZQ5yRxRWgoPyg6fgZRxiyEpmPOvL3o87GBBKe9ups8QRQ19fT11NGU5OkS4qQLw5
LdLjtyjOtuLcBUtbjjiHDuPU8a1Olm2kjnGnOR4z0hMik1Z+2AqqqYy21JXSvtodk/LTWOSY
bspP+m48EFPJuc5JSBXkOTKEAn8vE8fx1nyZ0h1hM9m3Zda50paWmIzqnluODk2lQaWU8k9C
kqoKHQKwT9tvdsmQYMWPGjwlIgXFhFuMnsnvyHGVVVJUEmqylSkg/SPaNaWUCaaKpeYz8eeW
n0oSopDjaUPJkBKT0/VSTXp66yVR3BXZ/wBhZalNqefcu3NJbcShYaDKBVYKT7a6pFvgs+Sw
3pEJ2TaY7rqW5l3q5D7HbCBIHb7gO9OAqmnp01ooIe4m1uWpLTCPuJMeyMOdlQbDFC0VOvhS
aOF5rkFgHqa10VcIUsiPIlteiXCWWIb0eD3Gil0sITG4llHHsup68jpwVimrUkkEAlX+H0Pz
1QZkAUog8RxHoR130M0hKVgEkEe4U+ZOgmAj28tyAAmg231JlIQCgCagj4fHSAEpBbGwBqAE
0+PrqIIj478NqA+ldRSCnGgPX4/EaAFVBFG9uRFR/Dc6RgSQqoAVvQdfh+GkA2wErIO1flto
bFASCRVKSOR6DY00AAlSKeqVdPwHxrpkQEFf0gjaqvlX4ayyDHIr/KAN6danppRA5BSqDfj/
ACOogUCyARxUN+np8dJISpFfjUV2pStOmoQ0ra6nZY606EaiFJqOZrQdST0/D8dZAKuxJ2BH
ruB67/PUhCSFAdeSN+J9a6QAhVab9NiPw1CmEkEg7ciNyVD+3SEhtk8fRSjsSr4k9dRpOQ0n
go1+qu6DuSR0poIIAqPGhAqFEnep+GggyKOGmxUNvhQemqQBX3JKj0qeu3I6SFJB3UolFCCq
vy9NIhDmCpW1FfxFPh8tQhVX0IBIoR6Ch0GWAjlU1rtyAH+3TQR06J32qP47/P10mhCtth1H
r8a/8NBOwbfFCSvoSRU/DQACs9wEV5fEfz9dUkGpzkpXEmgFCdhUV66jQalbUFAmu1OupjAQ
UTuo9CaU6/jqMsNSqEL6Cm1OhJ9dIBV5GiiABWo6io1SKQC5ROwNeqR02rqNJg76eXHj7aU9
ela9dEiWzwo41EzS1zZEluHGiOrW6+6vgAhTSke34klf8teirSo/k53caLH4/tseLl8hmbJi
CMmNKafW48jtKW62sM9tfr76dNCt+LQNZJbHno/+ncXtz7sBxiHLlIyWO6ptbzcJb6SviV/l
UmpBRvqvDcmozoZwYuLlCQRGXc+UtOPBRH2j0XuLCRciSFJeQj/KPrtXWXbBdElA0tcKWvxL
d/p4oukeW3GWpBcS020tLi0pJ5daVoNbu02oMSRUxuO7htrkpZtoeMqQlb8dxSrjxpsJTR9o
R/8Ag1DVZQ4MxKkaYzFt8jIrZFuK/t7bIlMtS3lK4hDbiwlauX5aA6CSyaGjHMQdvseBJtbk
dSXpqXEBP2rciMzHWttQVzWS7zSDzG2s9nGzfRLJX8XYxa+PTBKtpjuw7ZPlmKy6UsuORkJU
wUk1XyrXnv7vTWm5UhWpytkPHbnFn3ONblfdW2Ow4bOlxZZkOuOltZYoS9xSiiqV6/LWlZ68
nO1UsyS87FMat9hmXoxVynEx7VJRA76m0MuXAOl5hahVZ7ZbFK+4aHZpwjrWvJA2e2Wmfcbu
mHbp1xgR4L0qO22823Ii9sJPec5bOoQokKSncjS9fJiJZXl/4lClNwNZRNwWSywosjCcnlvR
UuyIC4DkKUlB7rReeU26nkPyqSBsdaaiAVu32CiW6OcFuNx4JRJj3OGz3CgGrbqFmiF/Un3J
9w6HVORalE5lGJ2Y3zJ029D0MWJtqbIbUlKUFlam21tsoAryHcqCTQ/DWVo2l+hXshxpu1xE
TDJLgmqK7WkJHJUYblUhIJ7Dqdv0z1661VqYOF9yJyq3W+GbL9o2WjPtMSZJQVFaS+5yC1J5
VoFca8fTQ9it45OWPWVu5uTw5JEJi3w1zX3QgrK0tFIKOKfU8tif46YSUs3NtD8Ydyg/u6pS
xYzDFwS8EBUksrkmIkdoEIqHBvRXTV2SGrc5OOM2WHcsnZtgcEmG6l0trcCmefbZW4Eq4lSk
Gqeor/LWW5BJrRXxRRStVEigIFAOKvlq6h8kmbGg2+NPmyG4zktKnICFIUvuJSstq5LTsj3J
Ox0aFHFFvBtjs4zY7a2nUtpgFR+6UlYqXUIpQtpOyjXbT2QZ+w5ctCU40zfEzSv7iYuE5E4q
T2+DQdrzr7qhXTRKYptfcdy7DC/b0yolzX9q282yZUlKUR3A57VOR+BU5RsiqgpINNS8QT7J
+Q5duP2LE43OSqCl9LLgkgh4EpJ78ZsuKC208actjuNEo0rvyc75BeEW2Pt3R+eLmXRFbmBx
C0BLgaLnvW4ngtVU1B9DXWqpGW2NExLpaL+LauWYFxjviIuVHc5dpSiEni6yr3J935T021ng
ZfnJItN5QM2Vb03OT+8pecgLuLTji3ylqoISsqC1A8Nk11pQVXnJ1tr+bv26Rf7ddHy+5OTG
fSl0pU46tgqS4VKUEqVxTwod9DiS7W40QqF3pLkJKS8H2ZC/sEq5ckyg4CrtA7hfcpX56rKc
lVkrbbvlzVivQhXAtW9tX/21BIbKlqlOdtSyFoUT+p9VFbHfWYyNngYt5IUW8wf2q1uIS32Q
+qGkyNgUhfeqFcxWvL462pWTIpeTo+0Sy1aLZGkoCSiexHLUlKm6UcS4F0C9qk03Osy5FwSL
99zMWpM6oiQosyM4EoZbYUqWtpfZfKEpBWVt8go/m9dKQWup0RM+8XQzosgw2LZJhqS6wzFi
iIjkk1C1NgCvTevUbaoNduRbd/nv5ALw7Giy5jriVJhuM1iJXUceDIKQlCFfSkGidZaJDiTk
cpV5uJl2qBJTKeU5MtqWVfafcp5JU8lLa+QNVK3Cqb6W2Z751gTBy6VFTJZVDjSIT733SI/6
rLbTwbDXJvsONmnAUIUSNRqWNoM/GSF/udrflvqWpxx1iX9skgmoSG+24NvjXSm0CQ+kZbbz
Bhss2xpM6FDcgMT1uq7iA6twqqlNEOkNu8UlQqk76NC8kfcL5a5kc921j94WhLK56X3OPFCE
pQft6cAoJR1B1KScIcTbnaZGOW6ALS/G7L7jirmJCnA6pQQmQlCFoCAoBKSkV9uhsLOYR0hX
PE7PcI12tSrnJkxXFcY0xEdtooU0pB/UZUpYUOYI2prOxI3Er6uwyQSVCN9u9GJb4KWhMjiH
VtodC2is8Oi00+OtNcktE5Kzhl6Ysn7mTHQmG2wp/wC3bWlqNNTLcR246G2k8qEAJH1b6skz
hKyKyXK3Its1ybAQmVMmiXFShxwmU44pDZSpSKJCV+48uvQU1EOmc1tD9xuM+4wnHFtyV3Sx
NVCkiWqOI4beUacW/al0lO/NKR00TxwSQdlzuPDsEdouNNTYjK2SlyEZL0lbnMuO94uttIVV
yiQ42qnX4aUsl1nBXWLDCNvafRfba2tLfcMRa30vJKU17av0ijmeg91K6mieESt1v0Z1Md39
4dlwDJakHHFMuJbjtIfS4WquEs0CQadvqdjtonGAaykyTlZRHTc40l3IDd1sz5My3uFp0KhM
qZcDTI7qR7u4tB4pqkU1cDDDTm0URH5CbgkXdiO4uA4ppJWJT9t7b6hVFOTkv6iduW+oICev
i2oEeLGbjORC9GRVIZC26LW2CapKPX8daSZjfA7tOcRLC/MTbsYbhy50J2C4047IfJbeII5M
vg1A4en89EOTcJCI+Z5I9HmFFtZex+FETEuFq7S0w48dbwdbS5RQcQPuPcDy66WmhlDDHMyc
sWSpyKHbIa30JUmNGUlxLLXcQUEo4q5GqFEVUdTtJhKArRJccXd3IuNNXJhcZwqYDTz6Lehz
o+0tJ5JUihopddKnYW0do2QPyccZamWZN4h2JssRLme8GoqZCy5we7Y7auS1Ep5kaHWH9TVb
QiP/AHGYnG3YKba0qCqWlw3hUcl5t0Cn24lUolKh/wC2dNcGWoHEPInbXY3mIURcRVzYcgz5
6lrcaksqUFraaQoBtCgQORSSfw1ND2aQcbKGkWyx2uTbmpUCzy3pMthS1IE0PFJ4P02HFKeK
VDeh0QSykJyS6Wm4yEyIjExiQpXJLct5LqOyfoQ2gJQUp9B8RoTegsoeDjNuNmmXpMtNtTaY
KC337bAUfbxG5bL5UQVkV32GptnRvJM3bKsYneQzla40yPBckNzpEYrZU73mlJUENr2T2zw6
nfS5agyqw5OlzzDGJ8XJLc8ic1bshuabq1KQpouMq/UJZW3UIdBLntNfnqYz10OJHkaO5e7h
eBA5PXN1lEmM7xdbZissIYDsatO3M9lUufl6apxBO07GGO3fHIkXJosyZMR+8RjFgO9kPu0D
yXg5IIUn3eyiqfGumch3SUckREssJ+IiQ7eYMdSuR+2e7yVgjoKpbUk8vx0NrRjryKFjjoZT
L/d7a4W0pf8Ase6sPGhBLfEo4lXp1pq7IonJekZ7jTEq6TVvvXGLcrrBubNnWyShliM4tTrC
i4QioCwAEDidFjVYIt/NGYky2uJVFnRo8xcp5Edt8PdlxpUdxnuySopT2lkBCdgfcN9Pwanz
wR10lWVN2sVrs81C7RZilTd2cStgOOOO995wpUkugbJQEmtCKjrqtVoxW1W5JSdcbKnIcuct
tzjIfu7rciyXNJW000j7kvOo58OTa1t+2nHc7aLORqowN7Nd7c3BuLDj7Ll+dmh1V0ceVFQ7
CSzRbaXEpVy5O+6hTv11cmp8EFLtNyv90nXK0W9lEF2SpKI8d1tptravBCHlIXxpvWmtSjNm
XFtm22+PizVzMBqyi2uHJYVGzKcWXHw33CAVuUWEce2qopvrKg0yt3g87BCFuci/sybfF+6Y
cLJf/cQmklaa/r8i5vWvHVUGvIm6W8jx5ZJALBfblzO6hLqO/wBt8tFnk2FcvcoKpttoQHLE
8bcYym2f6ktrrdpDqxM+7QthggNLUlC3PaE8lAAb9da6t6KQ8YdsN4useNc4DMZbbMp1sNex
DqkMkxY5ZJ4rWXupKqr6HWbOWNMV+w4kQ7Ob5HZRZHky/tf+piKbaDYkdwhLn2aHPcnt7KRz
rX3DS1GAVmKg43a2zdXJURp+dHTBVFtkVt6SjjKLgeUqOlaHW1p4J5I5fp1+elZDs4EWpnHE
DN40S1JktsxEuWgTOX3jZ77SXEJS2qlUAqVtU8Rv66IllV4K3abS3Ixi+TnYKpCof24RcUSk
NJid1dPdGPukd0e2qfo1pP8AwLr/AJIFQJCiFDkkU49T8jT5a5yLLNOuFsk4w885a40Xm+iN
aPtmyHWVsJSt9x99Qq8lxKqBJNeR+AGlB2lo4zGrDFi4w+I33HciuPXmOlSmw88mQ4lCSoja
rYSPbpa/H7lZfl9jnkT9rXDtziYceFdXWVSJCYDZajfbu07KQhRUe8KK5706azxJLY5u7WOQ
MyukODBbuEBLiWLQw68r7cLWW93VkocWgVV6jc/LS1AzKwdYlrxaX5AYtpV9vZ1lYnHucGxI
bjuF1EdwlRDPfQA2VGpGl2whS2McU/07KDcW6wvuHHlKdmTnHlsiLEQ2C4tpKCO46DUhCqgn
iPjrM5BfOjlCtdqkWXJLh3lKctyohtQUoIU4l+SW1c2x9Sg2OVB0OllbCHTcLF38fkyA09Hk
Q2m63NbpPeuDnuENMUAjtlIVR2uwFTvoSJuBpPtUeLjlluyHFKfui5qX2CQQ0IjqG0FFN/eF
mtdS0U5O2GWCHf76zaZExNsjPMvuPTFp/Tb7LC3Eqcp+WqaH5HWVgWWhjxWmIxHF4kLTLNvu
M6dboqmOTLkBxlptjvOHtVWl8LVXYCg662448gmUy7RYUe4PxoYfQhpXFYlKaW6FbchzY/SI
+FNUonDyTNkwq53i2xZsI80yLoLW42AkBoLbQvvEqUmvHnTiNHElOR9OwKLb7YJ0qVLWkqlB
T0eMhbCREkLjfqKU4hQKi3yNEmgOlFLmDnNwFyPZGboZa+blvZuSu4zwh0eBV2Ey1Kop4J3S
jj7t6dNNUL3BxyzDmsedejyZ7j05lxDXZMF5hpRKAtfbkLPFQSDtt7tDWDKcsgrZa7hc5yIM
BkvynApSEIHuKUJKlK/BKQSflogYDusFmHKMZuUJiAlLndS06yKrFSng8EroPjTfVBZBbrQ7
MDrvcbiwmv8A5E6SSiO2SCUpUoBR5L4kJAGo1A4YxO8uLebUyIzrTv24akq7fcfKA4hho0UF
uKbUFJHwI330wZg4R8cvDscyBHCXASG4izxfeCa9xTLNOSw1xPP4amswEHJVjun7aLgptP2x
4KKeQL4ac+h8tfV2Vei+mjqwW4Gqo8rizzZd/wCq/wDikoVR4V41b293u29vrog6/A6dxi/o
mMwhDUqVIQVtpSpC0lKDxWVLSrijgfr5EcfXWoMIb3WyXO1SBFnMFt1Q5gJKXErSNv01tlaV
0Ox4k01FY5wLdcpy3UwYzklTCC++hA5FDaBVSjqBIJi3TJMaTNjRnHYUQJVKlJSe21yIALiv
TfTArA1UCFqJPvBIJNDrInVcR9tpp15h1EeSFGPIUhSEOhH1cFH2q4+tDqGTlWo4kD57bkjQ
Ehe6oBO5FKagBVKSCBWhISR8tAgQ4oEhIqeu/wA/WmoJCWU1qj4bD8P9qa0TYR5VIG4HUnrv
8NQASsbkD2j09KfjqFB7BZ41Cevy3/46haAmoqr6aHod/loAILIUdiADsPTrqKQEVJ7ijUnZ
Q679NUkDikbdadOWmSgSEqoKqAp/tvqFIQHFe4KG6gAfXb00kdKlHoAfRNadeughKQfp4gAn
Yj46pEFQOifwA0AGQEpIANeg+NT8dJAPIUrQkAfgPjoIL29Uq93r8K/x1EGBX8tBToaVH8Pj
qEVyWTRQChSp+G/x/DUQkFIUpNKJIqFV9em+ogk0K9jQJHoK/wAdRIMFQWKVAPQDoNRBKUta
gim4PoBuT0OgA+2njxKvdSqlK3Tt66TQByJHGlQaAKp1OqACVxAHIe8bAjoanQIfEBaQTU/H
0A1EggkJ67kmnL12+Hx1EAFO9EAI+Pr/AA+eggyKoNCdtgNq/wAfjpTCBQSgI51IpvT01GoE
DnQgCqSRT06D46iFCnIqJ4VFD67/AA0yMAFNiAAU9VH/AHjQQanCDUVAVuVeu+gQcna89+P9
9fhpKck14qcu0fMbM5aW25NyTPYMGI8opaW9y9oWofSk9CddvXWU/oYsvksGfxi1lU+sONAc
Wvm7FgyzPYS6alzjIO/1dU/l6a5oW8FaUt1VOJpQV2226UOkyXfBcmfs9qvbLOKtXpiXFU1d
py1yUlqGoj2OKZPBKeaAQo7/AD1u0uvwbSlSLsWXQ7dh12tCsVTcLbPdaNwufekt9t0cjFQo
oq2ko5HiPz+uhy0pCFVie1mkDAX7O9j0xm0TZbVyXd1R3kpIQ3wQlS+PBTO/MVPXU2Z7eCKN
/hMw7Ou1WxNrvdrc7j13ZdWp2StKgppwoV7EFsjbjrLGpcWPN+VFFkcnRkzn7fOkSfuyyGvu
lSEFtTVWUIo4gLPuSakkVGletk7oK559blqtVmm4zcn7fFdkSBFu0qQ/ci5JSEBUR5baFo7R
QFN0SfdpSf3JWTYq4+ULe7dYCH4t7gxbZEXETMTcXGbyS453eb8haAHUj6UpWnpohha0MkG/
M9vdcmNcLtDiSokKMi6QJLabwtUAro8+6pPbcLvd4r26AaIY9l5Ow822x5+5dwXOxRpM5FwY
fs7kbuucGUMrRLS4Aj39vmop/MTqgMIh5eVY9JxnMJjTb0OTlE5h6Fb0RV/asNx3eX/yxRpR
UlVeAAodNtJAkZ2SK8qVA6/A/DTBmcmy3TyFiTEZmVHus6fNOLNWRVoQkGEl1aAla1OKUAlb
ZT7klBr6HWTSZXs2yyx3pN0ukLILs3IuiGh/pdbP/RNlPAKQ46XFNFAKOQ4Ng1pqlEo+xKwc
wwS3s4K5++yJjmIyHXZzS4jzai1KPOiFKWpP6NKcTsr0pq8mokdseRcRjSLYLhkc+/D95Xdf
3JyO4y/DYXGUyhg8lKJAcUFKSwQkp6UNNKfgWs5H87yVh0n7FH7qkXZMCfCVfmIsoNxn5Km1
x3UKkKdlFCOChWtU12FDo1gylJCW7NbFbFXs3y+DMJEm3RWGnBHW13HWZfd7aVup/VLSf1E9
0cSRwNRqllxBneVPNO5DMks3UXhuY53k3FTRYW53PcUutUSEOI6KSn2/DbW6rBzbzBdsDyC0
tWG0w3L0ixSLLfP3iWXi4lMyIG0oLTKmQe46CkjtroKK+FdFtnREvbMyxdy4Wa8tXBu02ezS
Jjl4x9xPB2a3KeccaLLDY7UijbgQoK+mnw0ElyjgzkdiXZos1VyZRZmMees8rGlEh9dyWVlt
4RKdtaeSkud+tRT46kweMnO8XywP22+y/wBwjPY9crQiFjtiSol+JPb7fJf2lKRylSVq7oPu
5etdJNY+CDjXDG0PYTIuarXLtsMg3liFFWmYltKhVNxr7ZC/gU/A6niTKqlCLTC+5cnWuFOv
VtXlTt5VIsl3dLMxpi0KiuA1UOKEp5ce00uhC6fTob2zcRghs3cuS7hjcK9wEtx2HaJuN3ls
SnZiS6nn967EKuDIH5OqQTQnU34Dom5YiyxIycgy6NaGbf8A6ibZ/wD1VZgrS6x3w6kvCC5I
olauzy4chXrTVmcjBKXODFS9czicWE/mrSYZmRGksupQTHIuhZacrH2f49zh0340FdMvkyko
wVbJcTbn5bfmcKiCXaLa0JLojuJW200Gkl9SVLUCpCXeY9taem2ifIWr4JnD7FbJtsxws25m
Xari44jMp5AUqGEuqDFXyQqIO0OYKfqPqfp06LwVoWWOrDJt5TAcWtieIrV1EpvtIQSKNOQ/
8xSlejo23+WhIU+SCjMB2Yy0aBLzqEUHwUoAgam8GVVSXqfiGEPZorELc7cIU5N0Tb0yZCm3
m3WyT3VqQAjtKbp7evL5aXKRY4Os/CcFRLtgauckR5UtUW4IZS5KKGksFzvJUqOwU+9PFSeK
uKfd6ac/c1VJnGHheNMZpjsOdHnu2S9uIaY4vRnkPqLoRVqS2EBTe/vHFLifx1q1pUmV/Iim
LJizqskurz89mw2JxltLTYjrmrMh9TCQVKIZCUKSfmRT11iJeBUj+74Dj9nYkyrrdZgiolMR
YyIcdtbrqZURMxCllxaUIWhK6L3ofy6E4yLTIizY/Yn5eSRlTGp/7Za5cu2zC2UNPdgoIeop
SVtlIV7QajlWunt+gqIY8t2BW+SzZ2G7wUXi9QlTbdA+1WppASHVFDj/AD2BSweJSk/MaFEZ
MWngKRgEWLhycklXJ0B2GxLjMoi0aedlq4tsNSFL4uluh74Cao9K00QD0VGHDflz40VlKTJl
utx2irYc3nA2mqvQVUKnSoGG2WGXiZfvDWP27JGbtMdkOxJcdaJMcMrYSpTpUl6qXWk9pXuT
60231qVGQ6+BLXj9yc5Hctl6gzLY+3KUbopL8ZptUJlL7yXEup7g/TUOJpQnbR+JpzyOcf8A
GMi4ZTaLYq7Qzb7q395HurL3aLzCSpDnYQ+G194LRx4qFae7poshquCNsuOIuWfJsUSQqM26
+6lpxMppbigEFfbRJQO064rjsaUOrBRyN8dwrILymPLgOsRlynVx4Xef7Dsl1FO+hhO6lqbS
vkvcbfHVrAJYnyV1+OliQ4lK0PhtxbaX2q8FpSSnmgmh4qpUVHTVaBSJZ/C79FtAur7TDTDj
P3CWFyGkylMU5d5McqDikcTyqkHbQqyNsbO94wXNrRbyufCWIjbjfdZQ826WlyUpDK3GW1qU
33QpISpSRXpqSkkc5+F5o39jGlwXXUvrEO3tsutSQHFKKgynsrc7Siaq4qp6n0OpplrAm5Yb
m/7jEttwtz6przKUQmytt1Ko7RKQlLqFqaSlv5qHH166YhGO2Y5GF7g5JbbtzvTbzF1UpMgL
eKSoqH0uBaSpKqceoPy1WJLhHaxwMpudxXNs0SRLuDfN555hsLKS5ULUqo4+4LP89Y2zehzZ
MXzR223W62u2yDAtzaotweDZJbQ7+m43wUCqqR9Z6pG+tdcxyCj7EIzLdaiyYjdAxK7Ze2ST
+ieSOKqEp3P5evrokcDv7e9NWYrVEeTapDqXUzFsLDSnUpKEhMgp4moJ9oV1+elC0dLnb7nF
sNpemmOzEeQ8/BZQpH3SmXHKqkPIT7u0VNkNqUfQ60tGbNzA5S1fMYmMOLS01PmRnEJj8kuu
oZltlujjYJ7a3EL5ISdyCCNc2iR3teYXCDARa1RmZcJqLLgLiyErSVNznG3XeRSUr5BbCSn+
/UlGylPCIxbhmyXXIUJMcIbU4qMwVLQhDSauL9xUqiQKqr01pokPIORSY0K3wUstqat9y/eG
jSvJ4JQkJV6cKNg6ksMVhkjcMkbu8Du3GymQYHfCrk0+8022Zr65CQ6lIU0Pe4Uo5EVAprdL
PRhqfg5v5fKlMriyWkLiCBDgfbhSggO28foyQK/5g3qPgSOmspwbdZFZbfYV6eeuioEuFNuD
gfDj7y3I6qj3hlK0J24j20VsNWidYGOM3yRYryi4siriW3WfaeKwl9pTSi2r0WErND8dZkgX
+8RpseBEiofDVuD9Zk1SFS3u+pKqPLR7SlrhxR8AdKtgylAm33JhFtl2O4IcTb5jjch1xoAP
tvMtqDdAuieKg4a1/HQnGRs+GTj3kEXCWJc+MWVRrg3eIbTG4U8zFajJjuFR2QUx0HkN+ulF
2zg5R87AnW++ym63a1IkNx4zQIZdTKLy1KccJ5I7apBoAPcKfPVOTKcI4ry2EYLzraFKucmz
R8eXH4lKEMMpQn7kOg+5Sw0P06dflp7cm+IRyhZRHhXHEp7IlzP2EBUiLKcT2UrS8XOESn+W
2obkK3Kt9GOsGm4eRzHuOMvRItikzlsxG3J0924JbdQl56UlIRDdQj9TtHhR1af4aXdS/kwn
IiNkIg5dZ5L17LMO1BX202zRhxgoWVKKI0Z9KUkqWqqq16166sRga2h5OFqulucs/wC2yJAj
PNXg3pyY8FUfaQgI7A7Yr3TusA+2p03SlsVZJJFgazTGpN3t96U2mPDtcy6SptrUAHJSLg6p
bCUISO2vgkhCuXQfLVa8/sGjPzbmE2BNz+/jfcKlGL+1Aq+6CeHP7ggjj2ifYDWtfTU3LYLE
F0y+/WGVEyF1hxD7F6fgu45CCqqiMxU0fCm+keo9lB9X4aysfoNmjPgpSlUpQAGo/HWTIlZU
rb0HSmxA6fjqKRSlUqTulPT1FP8AjpETxKh/iHH3HpoKAFKuNOlKU9eumSgOlB8eXr60HpqG
BJoBxKaJ3r8Pw1IGGeaQUqqAAOIPwHoNQSBJC1V9SaKr8tA7DAUBvvtQ1+B9dRQIKSSVfSR1
6evTbVApAVwCafVXoo/DUEBlIqBUcxsCaf26hE8uSlfmr1UnYCmtAEj+Cq13O3+74aBQopFV
GtVEUofUf7tBoApx5beynt9P4aTIRXRI6VpuTTb8DoISQqiDU8h6D/f+GomKHBCuQ2B9PSnz
rqJB8khQ2p6g9an+OgQkqHMlQomhIpvrRAqglPH3LT16Afx0EETwFAArc8v7tRB8dysfVQVH
qSflqITy5DYVrXY/Gu9DqFHQpQo0FQU9CfQah2IASEnfpsafDUDCoFUBNTuAD0/2+eoELdNQ
mm6aVCdunx0GnkSPQdCOgO/4aiDokLITUDqQo+ukoAUpV9AIG6k1r6+uggxySAvp1FCetPlp
EBcqSeRIOw2/3A+mplAagsJBT7iaDkR/edZFyglUSniSeVfqG4GoA6ABKjU03Ceu5+OlCI4+
/qadaeumQnJYfD1zh2zOLHcZzqY8GJcI7smSsbNthY5KJ+Q10opT+hhknl7DDWT3MtyWJjLk
p15iVEd7rCw4sqBCgBU0O4prCwXaSGokmgJSFdQPiOhOmSNL8YXJlnGMttlwv8W3xbrb1sQr
dJcUgLlqKD3aJSrbggpqf5arNQlybTFYvLYV4kyiwy79BaLz0d60219/trqysrfUBw3Lnt47
709NNnryD3gaY/lKrZ46yClzcVe7i6xbWoS3lqLUEpLjzjSFFSSlZAbUmmw6aImIGXHwV9Fm
gwItivU2VFudunu1m2iI6RMZaacAcQ8kj2dxNeJ0vBlPBsTeZ4JIRiL8O5m3w4N5ndmLLjR0
iHGdjlKEKaaJoipCUOH3VJUdxonYpwN7jmEm3tWBaGo79+RPlsptrt1VcHRGkxw13m7otRXH
3rwFfarfRHgsM4ZNkWAqm2UX3uC82yE4lQUo3tLUxT3JoTHipsywlskhPKiTsdtKbWUDUjy+
XG5p8gX93FJTU8SIlucnMwHY8G4vcWwP+heWFtpCDRTyR8aU21TgEkkdpEy7N5nd3cfaZftr
kiOq5osbkNu5pdLCOZfL6XELjpVy7vAbuV6aoJKNkVeAs2DyrGhlKsfYmw3IaItfsEOKfq4Y
/VtJrTnxPX5U1MzbRjCgAg7gE7pJ2oPn/HXRIGoRssvxzjH7eyn9keisPYym9KyUSHC2meG+
X2/BR7ACz+Uivw1g3gr2cYnZ7Mm4wbfjk1caCy04zlglOKZeK0oUVuoWkxiklRTRo1B/jqbk
yuUWPHLAgN+JnpNtWy1JnTG5bMlkrQslY4rUpaASHR7kpWSP8O2m2/JrsNGvGdhvEhl2NZ7l
alIvjlqlWuTICXH2G465K5DanG0hojtkcWwoU2TvTQZak7SfEmKmLFvcF2YuzvQLhPXb21OF
137BaE8GnZDTTqe4HPVs0ptoGIwROMYVh9+g5A5HjTLWhmHFehKupCnI7ypYbWWFoDKXu6j9
NPMD3bfPW3Z6gHUoN4jRY13mRWGZLDDbym22JqUolthJpxeCQEhf4aJky2XTDsYx1ditEy6Q
V3NeQ3v9h4JeWz9qjglQea7fV2q6+/agpT11SbSRKs+Ncbj3ayYxKS9Nl5LIlsMXpDimjD+y
ecZTwZSFNvcu1zXzPQ0FOupNhEjRvAcdMeLaCH13yZYXckbvXd/TR2eZ+2EWnBTRS1x515cj
X5auwzmDheMBsLH77Zo5fRfMXtjd3l3NxwKZlpeDauz9sAOyEpfHBYUa030qdhghbZh9vulw
xqDAuqlzr8sNSm3YjrQiLrTZSqJkJO/uQaaG+WLrkkUYlh8qI3dmFXFmyR7oLNdYvFuVMeeL
CnW3owbCR+spHAoIPCtd6ajLzkY5RjGMQn7U3Cmft0iasputulvtTV26qgEOvPRkpSeQJKm+
PJNPnqXkb5ezjbcUtz7t8fm3IyLJj7aXpMu2pCnJaVupZa+271EJ5KV1X0A084JY2SFw8f26
zCTcbpc3kY+39r9pIhspXLc+/ZL8YKaWpKEcWwe77vw66N/QeudlfySx3HGr7Mssp1KpUNSU
LdYKghwLQlxBHqAUrFUnQ0jLbJmx4M7cIVuQq5mLcclK0WOAhC1tSuysoUJTiSA0O4khFUq3
3NNSwSIT9iuJsr94DrP2jEj7R5ovIEkLIA5JYPvKBWhUNv7dXJqyZHsIcW6hpuvccWlDXp7i
QE/hvpMFlu2B5zEk85kQyJxlJjSOw+iVIakrqUIeDalLQtzjVBV10lLWYyPrhC8qu3O1uSZM
udO7patkxiamUll1pHdUkOoWpLa0tjkqtPbudtZf7Cm+VkS5j3kbIcugQZj65N5fAVBmOy2n
EJbbX/mIfbWpHsWdwj3/AC1tYWdA3OtghHyonMLkqCuS5kjSC1dHELjq9goEl5xZ+3NSBxqa
1+ddY6olZsZvWnybe3JkZyFPuCxMU7OacpxTNDQ5KWVFKAvtUCaGhT9OtONCrSHh91zm2Rr6
1Z3/ANvjw47ki8ofjNuHgghvtLDjbiklf00NE7b6zCkm8fA3ju520iJfY0aZ9raWCiFcUMEt
Mx3OYolfEpKT3F/GmpZJ2gnnspyqXgCLSMYWbUzC4M3ApeMMR45BXJaZUkNIeT1U6lVakmm+
pDaHsz6NKVFlNSmHCh5haXWXUmhQpshSVJr0ooV0QSZZ7tmWWNTYUxy1xrJPS6qa06zbRCcl
uOVStxwqHJ4LDhBpt7unTSlgW1Kg4z82vrPGK3aYtlipYktC2xIS4yP+uQGXnu2sqWVlCQkK
6CnTWmgbk4WXIslOQ41HhREPXayc4dphOo4FXd5rWH+RTuA6pXI0onWVgJ5FYXLmY1mDCY9q
i3m6MLLUBhyQSyh4IJ7jb7C0oc9lfzcToazBteSZwzyPi1ot10t8y3PMxbu8Xfso/GUhbW1G
Gy+406yRuA4glSq7/SNM/lIRjJnUxxkyHSygtMKcWpppRKlIbKiUoJNOXFJAKvXQyLOM9t3+
jZGMm1h+Q+2ppLsiSp1hpSt0vsxVoUWXR1qhwAncj01JuBcNElkXkjGXp13fstm5P3h2Eq5P
zHlOMyGYBbcQj7UCrZdcaov3kcempaB2JJ7zVGIhNM26S81EuTVxUmVIaUUpDLrDkZrsNNBK
Al4ltSqqHRVdM8layIHH8vxfGr03Ix+Hcmba5DfhSlSn478lKJHEKWy32/teSUoAotJ5b8vT
WXeeCkjM4ydrIbhEdYcluRIUf7dhU77ZLoSVqWri3EQ0ygVV0SN+pOl2UElmR5geYWiyxJrE
5p37h50PsyGWo8kKSlHD7ZxqWC2hCj7i6kFY6dNSXI1a8jmdnloud5zV15UuDBylaZEZbKkr
cacjr7jaHEckpKXT7FkHYeh6apcphHBREpK0or0IofQdD8NDJo0qdm1gcx+alu5XB6TOs7Fo
TjjjRTCiutJbBlIdLhaV/k1HFsK93XUtC8kRmWf3e+oxxP7m6/KssFouKeQ2OFxC3FFafb7k
8Sjr7flq4gnuSVvHkRN68ks31y4pMWKy0wxJkRUqDRDKQ93GmeDu7nI8knmnqj008BWuSwRc
3xVnNfvk3oTS5bJEdq5yzKMaPJddSsduT2xOCS0FCpSeB9oJTU6rW0SUShhG8hpZ8gXORb7s
xa4tzgKhKuMdpwxVy0xyhmQ6p5C3iA59ThTU+qdatwCW15Mzlc1ynFqdS+4Vr5uI6KUVEqWn
ZPtPUbD8NDcsYNBsF/s7USyTJE9DVos0d6NebAQouTH3g923UMAFEhP6iAVL+jjqWFgL0+5x
UcQRjKW23YicuatkNqVLKSqD2apLiY6aVNyQnj3Fn2qHLj7tCc7Bp8FilyYi7CxGyS5RJkad
crepMpq4GSiRGElS31Ihf/8AOShr6gkCo9p1SoNqsQVDP4lwjvNGfbbJb3lrc+2TZFskuxx9
Knm2FuN/CiyQqu1KabRELRmHJD4aqzpy2zi78FWz7hIk93dB2PALp+Xuca/8Nc2mK2WmxYoc
guF2ayNvlkLF0Wq8qLyWnUMCO6VLrUJLZkBlIKR6gDbXV7J1mqnZbMX8a4rIkfaXS2Bb3OJD
uPZUtxcF5URDkhTykPtNx6urNFK5ioIoKbkeBVSt2nBbRNxOVMZszr0ljvly5SJDrLZSwHOK
mZLXKGlVED9J9KSfj7k6XCtgOJM1tjcZ64QGZbnbjvPtIku14cG1rAWqv5eKamvprNiozSJO
JQJMlbEywuR2m7zEt1hbt47Um5QHi6XHUKWSmUrglDge2A6VHo1Sh/T9zVayQl4w61WS42Bh
oqyVqe+svTICyIsptDwQI0cpCnA61/73wqKfHVaiiTKWUPZ2M22BeM4lpshlpsMhtNosK0vh
lSJErslS0JIfdbaT9ND1oSdVllfQa1xI9uGC43a7quOzbn7qidd0WZDBdcUqEl5hh1Sqsiqn
ULfU2jubDjuCa6PkpTcGeXm1mFdbnGjpckQrbMeiiYEkoo26ptBccSOAK+G2+/ppvWHgxxLI
9lC3nGW0gDmtKQOhBUqg/wB+smpk0mD4VflZK9Zf3pphhC56W5amVKXS2OtMuqcQDtVT3odq
fPW7VSz9DUKCnXyy49DYiyLZkKbiw64qPLSqM5HdZKaEvJbUVdxg19qqgkihGtNKH5Riqlot
sXx7i8nytcsVk3RVqt8ZSksONtuyVucIofVxKt0dCs8//SN6a53wq/KNJZZWsTxu2XvI5EBU
oOxWYsySxISFsCUIzCnUcKpUpvlx5UWPlpuoaSCuVI6svje43a12yczdbcxJvLbq7VaXluCT
ILCilYolCkN14GilmmiM/Emms48SB3xxcmMaev8ALuEOLHZisS/sj3Vye3LNIiCEo7VXyDxo
s8fzU0qmYJor1is0++XqFZ4PD72e8lmOh08Uc1HqtfoEjc6ww2yem+O5q48eZYZzd6gyfuwh
0IMd3vW8AyGksOErWrisKb47rHoCKa31entGYZzsvjy+3W+Wmyuux7W9eGhJjPTnOPFpa1IR
zQmrgWtSPY3TkRv00NQaVeDnZcFuFzYQ6JcaCmTKXbrQmUpQM6ahQSWGwgK4fUmq3KJFQNV6
9Xngz1eIExcDyF603S6uttxIlpbdW531hC3iw6lh5EdAqXe04sBavpHxrrS9f5dRepGlyxmX
Cs9qvYebkQ7qHUJUgFKmX45AcYdB/OEqSsHoUnQlKbXAvDIgoTuQCT6/GmsEc1HYGh5Doof2
aTIqq1+gChsDtvXUaCClbhIKCPq/36AYAOqR9e9BpIJIH0gUKuiVHavroYikcuBBp2q0I6n4
emoglV6ndBrQVIO2kAUbV09wrRIO+59dAiUlRVQnpUcD6nUIaSedDShoNh/LUApLaFqSCKqq
eQHx0MkgxxVyoK8SQD/cPlqIJZUEgE+07H+GoYCCgEBSdyOoHpqJCiEue0bgjlv6f7HSIgVH
1Hp0r/fqBhFKAAj6iugIrQj4jQUC9glQFDQ7V6gemiSASoiijsBuR1rTSLArZKVFVKbK2NFf
M6UQdCpKUkkECgUPQaGUCVEqUeQrSqa03NOmoGGKFITSiU9Kf4j8dJpMClnhx9K7KVvU6jTA
lKlKCgoGtOJHQf8AlrJlBnlxoAOKuh6kfGnz1E2Cu1K+/p/zU/DUBYPDsGHOzvH4MtpEiHIu
Mdt9l3dC0rWApBT6gjXo9Wn9BUZJTNFpXl127UePEYalusNMx2wy0ENLKE8UJ6bDfXJOUZhE
IFEH3GhJJPH5ddaIuWDWDGrxY8reuBfN0tVsXcoXbWlDCQ2pKQXKe5SiV9OlPnqjEijnZMYs
Fw8b5Je3HXhfLO9DDSAUiOlElwppQ7rKgn1pT01pqIZlJknZLThVzwy+XqTaZUFVojttNz0T
CoPz3jRpKWloCT6qWkKqE6y5KZKdZI8F69W+LOe7EJ+QyifI2BZZWsJcWT6cU76qtTkuvBds
8wnH7VGVJsbMhy1pmmOL4J8a4Ru0alvmhgBaFLSOQCtz01IGuB5/2zxZvNbXZnbq9+zXOzou
aJKizGcW840VoZQXaNo7hFAF7/PTOCaf2G7/AIxiR5Ul1TFytdmtcI3KcZiosh59nuBv/onI
xUyo8lUPJXt9dU4JKMHNzxdEhQH8gmzlpxRqHFubbzLSTOLU9xbTILRIQFpUg8/dSnTQ1wLQ
qD4mddzOTis25tRHmoa5sSQ2FKL7ZY+4QlKDTieAqup2+emFEgxlj9ojXDxzkcoTJzEiyrju
GChwfYPpkudsc2evcTSvLVZuRrKRUiD0qO2NiFaEwsy7XXD/ACPKtceRcX/vWG7YzKiQxKbU
+q2tAltaY4IJbaCt9qjS1IzwR12xLNLdZq3E8YcdCXnICpSFOR0KpxLsTnyR9QP0+uiFwUwS
1uxjJbmjEI716mstZM841CK3HFsx/tzwQtspc3VQ040SU6Xgy5eBlesc8jxpcFuemY842+qL
a3i+XUodQSQhK+auwug58V0IG+hVkQSbb5WF4gtyDc5Fzot+3Opkl4D8rq0PoWpsU6L9346U
v0BMVPsvkeS3fXbs5KDsWKwq6tSXSVSIzjwaZSkJK0upDtNh6/PVrKJWfBWrx+6iatu7KdVc
I4DDyJBKnUdscUpVy92w231JBZk7iMDOXYsh/H3FxI8smJu6llMt8An7ePz+t/hWgTRVPXVB
S/qLsttz1uzSmLZ9xHgyVK5QVLDb7626h37Zpf6qiih7nbp866oQtvgSyz5AGJOpaU+MfKe7
2KpDvY5e5xCf8/7bn9XH2V66cT8k2xM5WeoxhlM1TybLx4BI494NGnBMin6/ZJ/y+57Phqxw
YVnsjH7vlDzVrjyZUvtW0f8A2Gn3pLKSRvGIAPVIpT4akMucllRfPJkG/wBqIhrTdFK78CIh
hstPvONKQt51pv8ATcfLaj3Cv3JHWmsKs6Np5IjIv9RWq42+TLs0WyyY5D0NyHHbbbcU2sKS
v2Fxt1SFD/jrSMvAiBkmSzMgmvsspuc2+8k3K3Ka5MzAo8ihxlsI2CwFe2nE76esoHfI6Zyf
K7jOmQ5UP95S4Ap+wvslbTX2aeDZS23xW39smqU79NjXRBVb5GaMsflzrnc7vFj3uZdGVNLk
TASWV0CEvs8OPFbaUhKfgBots2mmdbLl18gQ4qmYzchWPq52m5LSpX7cp1RKq8fYpLizUJd/
NuNaVUc5zEEeLvWzv2lcWItb8gShcFND75J/M2l+tQ0qu6KayzTyMYrhakNP0qW3ErCVH8yF
BXp8aaQmCfTmc5vOXMujMpRJVOVcTDKlKZ7iiSEqpQqA5bE76Ooq5IXHP35cy3OJduXahSFS
1JenFTncWgoJYcQhAbIB6kH4HbbUhmBpNzKE1kVmvsK1oZcs7zciQt0oC5i218uTwZS22kgb
c0pqfzaYbMu0Z4OMW/WBuPkNqkxJirJkKmFlLS2fu2FR31PpA5J7S0qUs1qK6pyaX8Rzl2do
v1sfgJiLjJXOjyWqucklmJCTEaDiRQFw8OR9B6aEoJnDGMmtTDmQSsjduD868wH4CZMYNrJV
I48nnS6QVKBQBT11Rklbhj23eQIse6Yu88iUYOP2xy3vRguqVuuNvtlxCK8QD3k8vXbQkaT2
Sl4yCySfHDcBFzQ3chb7fHuLiVlTktyAT2YximhZS307qdlcfdWutp5M2ZntnnMwb5bp0pr7
hqJKZkPMggc0NOJWpFSKe4J9dtZ4JvJcBmdtYyuPdH73Ovdv+8ly/sZDS+MUSUOIbUnvLc5O
Nlwf5dBRPxpqtrBl7xwOoWe262xGUu3yRd7vGiXdEe+OtO9wOTWW0RmQZBW57VtqPL6RXbQb
+QY/5Ihw8lxO7PzlqfjW9yJlEx9kPvLWVPLZqpxK1OcatjkNwNum2qcByQPji9MN53GvF4uM
a3RwHVXBx1v9J1LrSm1NIbabUgAlQNOIFOmiTSwsFo8YojRscebacZfnxZqhHDaA6xMUkDtL
uClJ7rDLTlFsrTx9SrprfL8HNuVPJkU1T7kt9Ul0Oyi86X30qCgp3uHuOJI2UFLqa9DqaNcI
v8q526X4/VHS7a4KmI6g1GbSw64+8kpoeKkJmR3+VSFhxTdakgDbRVpBbJOZ1Gxhq53y3zf2
WBDZmW9vH0RGk95g1QJ5lNs0eIDRKlhZoT9O+lWxAt+DvKg+OJf7M/OZt0V1i6pYdShcRKXo
ioLq21uphhP6S5CW6933DoTudExjgn+5XrMzbZOTtpy21WuJJRAfciW+1/bIakS0rQWEuNBw
xqn30ClAKA36DTaka0CfPJXfIrdkTe2GrZbhbKMAzo4LAHf5r9yWo63WmwUcfaFHff10PJKf
sSXjW1WWTGly7nBZuKg6iP2FIDrzLZTUyC2t+KhLP5e5yKgr0pvprCKM50S7sDHbZB8jWKFZ
4dxegSkC2SHnFqliMh1XcdaUlaQtMdNF+3/6uQ0pqSdfxwZcuqDsK1TyO9AdtcjbRpd2w/HW
cfuPC2LYTarXDuUTI/uHFonvvhoOxkoP6Ff1FgBv3Dh+OtVXky2xp5El4xHteI26CxNjWkWu
PcX2SuOtxQldzuPIcS2kmSrhupZKOntFNC0UTYkc4g4i55GYsaG5sexWmIhlMVtlpTiCptD4
SDGT3S2vuVW4oLcBrxFKDW+2EhrEiYnj7HHL3LclJet9kiQFTW2/ui/9x/1AjfpPoY7ooV+5
K2KginTfU4cQHLEWzGfHzN/yGE999eIMKzvXCE4wtLS21IYC1pPdbSpxbajRK+AT6lOhuWiS
5M9WEBSy3XjX2A9aE7Vp6066GiU8l5teJ2WQLRaFsuGffbY9dUXdDhAihhLy/twz9DiD2P1F
KPIchSnrKIz5Kzc44I1/DwzijGTffUtMthn7AqQoOuTnNnYym/qbbb4ro8r2rAHGpNNPXJN4
HTeIWaYcNYtT76H8j7zdweeCTxcak9k9ltB+kJ6VO5601l/xk3zAxv8AZbKmwQr/AGVt9iBI
mP2x6NJWh5ZejIDneDiEoHFaFiqCParoSNahLHKMyMMVsZyDJINnS92fu1kOvFPPi0hCnHCk
equCDT0r11hiPmMcev6JF1sieMJyc5FgxJbvOQG0Rlywtx1WyiGWf/zthtrpZQ4JvCZK2fxB
kN077cWYyJCQ0pljg+vuKkx0yUBa0JKGtlhNV7Vr6aGtMERP+j7qzjP7lIucWLFklSkQVre/
V4AkguISYwc9v+WtQV/Maf68tBMIrsGLKmzY8COAZMx5tmOF+0KcdUEIBUeg5HrrApNuC2v4
ZcH5EYWS8ffzIUhNme7q1w0xJqkuOdiO46ogsHtLSFDjVXpvrSozSy8EXDsmYxV2mRDD1ukL
Lz1vq6Yy4qWVBD77oJSYyKndZpyA9dZdWkZayoHOVz8vx7M1OLyJ+4XeJGY7N+jPvVWxJaDq
UMurIc4BLmtNYUk7NMr8G+3u3/em3z34Srk2tid2XFJLyFq5KStQ61O5PXWWKsS1uRmkbHV2
eHFd/bskVHeVxb7n3CGni0wkLrxSDISQOW5UOutJvYJRhnK5YFm1styrncbJJgwmClS5DgSO
AVTispCivhUj304/PWSTWyzSsq8vpjY84Lq8/KvvdftUdplvvrSz/wBP9fAdwPDcp5e4iq/d
TSlKnwass4K3Nx3KcLyiNFuFtCLwyptyPCkIRJadU5QIohVUOpPLjQ+u2q7lS2C2Tl6vefue
SnrgiKzPyq0u/bvu2uCVMuuBBQUusoQO5zQS2rkASARptPVJ6KrhzAwxPI8mtF9uU63Y3FuE
10FiRFctzjqIQeUW1tNspI+3C+Xbor8NV95MqIbQ1RmV/tt5tT5jNRpOPKksw4qmeKWC8tZc
QtFa/pqcUkAnbYanZmpLjeMvcleF7ZapGNzGrbHSiJAuDpR9k3JQoqVKQpI75W57qJX+mSSB
01Vbbb2NzNbJepdnvEO8wlhM2CtMiOtaQpAUg/mQdiCNjrnZMKvMl/eueZR71bYFpxFuHJhR
nrlZLLD5y3osmUpJXc1pKlu95PAcG3aBACfb6nc4nyamFhDJGYO2PO4WRZdja/3mI024+266
5AdkzKqKZ7/JJqpSTxKUpCTTTaY/1+hUtk4WzN4rHcVGx8yYNjlqu1hSXnnBb33VJ/UlvIT+
szyQmgVx39dLbtP/ALFZ/OjpB8p5QnF7pa5MRmdBmxnbbHmqiND7ZUp/7lxJf4E+5XIpbKup
5emlX/KeSeVBBXy+xHcYsVigR3kQ4H3EqTLkDd+XJIS6po7jtMobSjrWtSdFHCfyFrTC8FeW
k8jsU1ANKEVHod/jrMGBCUckktgrQhJKlIBUAAaGu2wHqdCFA7YIVWoqeg0FIX0qBUqop1H+
7SQs8COBryPQ1NT8idBCCpO9aK5eg9BqENAFeP00G/46iAVAcKigAqk+uoAJBSpSqmtevU6h
kWlSiORAqTQmtD8tBCPchxaiQf8ACNIhhYNDsOuw2qenpqAJaPaB0oNh6nUApK9iCBsPaB/K
uo0CqCfbTkRsodDTUMhKqR9VOO5I6H8dRCTXgKmlN/ht86aiYr9Tcp3CaDlQddDATSh57H1N
R66TcINSiE8vo5dK9af+OoxAEqJTQkgke4A9PloNQExVTfAkACpBG3XUSOjjYoSD7a70+A1E
0hBCEe3aitwKbmmwOoUdP0yClXoAOR6V+egZEqQAAN0j0p1r8tKLYSuNEEJ4k+uhkxVXPq/N
0pty4/Gut9BJjxVEucvLLTGtU1uBdXJbf7dKer2kSUmrfMhKzuqgGxGunr0zk1kuj9kybN/I
b1nlGIzeFOPJnvRmAlhH29e4722Qkr9w9xSKmtaazWsJsXlYGtv8X5jd8dTe7NEVc21yn4bk
SM2outmPQlxfKg4q5Cgry0yZg64zkXkfGIzdts8Z1lF9U62zHVAaeMtaD23WUB1tS1lJFFIH
8tUSax4H2O3ryFEwm+ftYgnHYzwav8V5iOp8OPKIQVNuILnEKJ47+01pos5iTNSufe5Q1ijl
v7TycXmSkvh0sksKlMpKRxfKSAoJJHEK1NtPILx5FSctvL9qtdsWGftbMsuQlIZQlzc8qOOU
5Ob9ArTWrbNJ9UTl8yzKpEFqBOska0xJj6Jz8aPbFQkTnGxUKeTQd2lfy06nUm1gElJ2vfkS
8XKdb5Fyx22tSLeyGGI64C0tvREp4JZcbcUebSQfYU/Seh1lJjIhzyXkCHosNm3RIloiMLiJ
xhDCxEcYkL7jzbrS1KdIWscq8tj0pq6/qHdMIeTrmH32psOJLssiO3AXjjqVohpYjKK47bYQ
oOILKlEg8qmprWunq2MwdbR5VkQctkZVcYca6XB9pURnmtxlmO0pBaLTQbI27Xs91adeuqHE
GcTJxjZjY4WMZHaIVpcYkZCpkVMkrjxmGHO4hCQtPcWqtaqKtVXIrwVpMO4ORnZDcZ16LHIT
IkJQpTbZV9KVuAFKSfSp1LYM0yb5FhKtbN1tOOvvTbfY2Men3x1whhnvoLakrYSCgk/+0SoV
9dahzHli2kpK/m2SWu6TZj15xx+BmT7bLcl52StDTam20pS4iIUBSVKbSNlLI31S0Z7JkpD8
lY5BiYq3Cs8sPYtKcmNOOS21IdL5Cn0qHbCgCoew+nz1lNimg4fkzGLatpi348+YLt3VernF
uD6XSXlMKZAjqCE+1PPmkug7gemmGhlMkpHmWyuwY1oVCnvW4RZ0GfNfXFTKcbuHAqcabYQ2
wlbam+hTQj56obRSpIaxZ3Y8UbuzOOxpzyZ8NqLHXdFtrIW2/wB1fJtqiWkrQSkds8gfd11R
KCIKfenLQ9cnn7M3JbgO+8MS1h55tahVae6n/MSDWiiKkdd9CRNFoxHM7DFs9ugXlEpP7Dc/
3u2OQ0tr77vFIMd3uEcElSEkLTXau2lrJLRKxPKFolXK0ZJc2H273jr0t6BBjBK48pM11x4p
ddWebJbW6RWiuQ+ep14BeTmjyFZPto92eTIGQQ7G7jqLalsGK4h3klMr7mvJISlw8m+NeQ2N
NDRptCLpn9ik/v15bD7l5ye1t2mZaloHaidoNpMhMmv6qFBkFKOIKSTXTHBmRlGzW32+Vhlx
YeuVyXjrndlW+atoR2ztVuEpI5JSqn/udKDWevBpWJS2X7CGGGLGu9yjAn3lV5mXqOy5Geio
Mdxr7ahKnCtZXxdWjbj0rrTT8ArJkTd7rEYuNhYjXeF+y2x/vRW7LHeSIVXErW725SU9548a
kn6iN9DDtk6xL3a5dwzOI/dAF5S0G4N/lNfbJ7rb6XqvoZBLAdSkpJQKD1GtNvHwVUpJa+ZL
Yr7FnWm33Nq1Tiq3Lbvsnmw1JECKWJA7qAXR3V0cQFj309DTQsbJpzjZXclNmyzL79d7dOiW
iElAkxWpoMdUotNJQptlCAQHXVJKgk9a/Gur4M8k5il4sabfi8h6XHiWuyKf/wBWWx9XB2YH
3FKa4sAH7z2EI/5T8BvpWNGo54Kql+KMIlMJft3fXcg4iIthf7olsJA5NyR7Oxv7mz610tS8
GU4/kQ1sS1+5xA7QMrfZS4FfTwLiQvl8qddZeiWGXyVKwWXn7tjl2u3w8eavCkMXKGpTJTDQ
pSVIWtJUl5Lvtoo04+m2ltoqWTfySN1sFgZu1jjvYyW5j8p5K4nfhRRIjpaJQkNsyHW1KDlF
BS1o5/5ehvZutVOTm3arfj/krGVXCFbGo8txHfjLStoMBTvFCpMYuupacHoe4pCxvTQ24BLJ
GQsYMmZlq/8ATaZF/tqmFQMaSpxKQh95QfWhtDnNYQjifr9vLV2yKrgd5RYcHx6DNks20XJS
bo3DbbdluBEdKoLb77R7dFKLTylJSSajoa6WYSb3ojMQx9N1bzF21W+XIht2mWYDyAp1xtYU
2pDDnBJQtS0E+3rtUaVZt+Ct611yOoGI4rKk47Zlxn03G+WRy5vXESTRl5tp90dtihTv2KLS
s/hqrO/kyrS4+BF2wqw23AWL+Le8+5OhRVxJKnXCtD0ihkuyIwQBHQxSjJUopcB31TOjVpSg
o1itbVyvtutjrgZRMlMxXHtjwS84lBVvttyrTWLOEagtMfDbNcsqZs8SJc7ahuTLYkSJIDiH
RDacc/SUpDVHHA1snfY61wZT5FQMGxq42tu+Nu3GHbFQ7nJdiPBpySly2IaXRJ4oTwc79NxU
U0cwa4kc4xg+KOZljQlypJs98t7l0jxn2kuOFTYeBYeUyUJpyZ7nJPUe2lTqlQK218Fcwm22
e+58xCkKadhTlyVs0Y7TS1JYcdRWNzBQ37K8UqqnU1k5zMnTD8Cg3qwovcy4qiw+a258htCV
txG2gFcpjpUgo+5QT2OKVVV9VBpUSxWF8FLeSjuq7IX2So9jnTuluvsCuO3KnWmhmkWabhbc
bG0Xti4mbzQldWIy3IgUaBTK5SCQ26kmlHEJFRSvrprSdhI6vfjh2Eu7iLembxMskmNDuzaG
nm+Kpaw2wttagove/ZSQKp+eh5FYOsvw7f2HLapuS32bk+uJ3XWlxy2tuOuUtRQrkVoDLSjV
PU7aYx8lZkXaMBdvs9UGwXONdGUxVzZElpt5tTTLZSFco6k91RBWKBINdDrkUsZGGV4jcsYn
tQZvFRkMJlMOJqkqYcUUhSm1hK0EqQfaoV9dVkEh43h91yFTqYa2kKZUkKS53VFRVskBLLbp
A3+ogD56kTJNvx7MbsV/uE+5RbfLxqSiDKtTyz33HFKUkhCkgp/L7N/fuNtXVyDtgqVDQqJr
1IJ3p8f7NZNFll4VlEW2OOyQ2EQgy/Lt6JCFvxW3yAy66wD+mF9xND/zD46R7LkcZZgV7sX7
HHktPv3O8R0OMRhwLaO5Xtxm1c1EuJ/N0TvtqWpDk65JguT4/lybIhTz18aZRJfkJ/SCaAK7
rbylV7TYIBcURQimtdcIwzi3b8+N8IDso3pDXcVMEpJUlgq4BX3Yc7fEqPH6+u2s29bTXyPa
B3ZsT8iz7vdHoLb6LzZmVy5rjrvbfoE7hCiqrhcQfZuUqHrqhtoYhOCoKWsOqqauEnf4/H+3
WoMosTFvywWFxqM4tEGWkyv24OcXXmECq5TbX1lkBJ5FOx+dNVXOhY3eueRKcnh5T5U7FZZu
aVIO0Nvh2Uugj2N0COB29N99ZdWjTJO63TOXMUs0mdyj2Fl9TVjkJaaZWh1kdxXaWhKXfXlU
7K+ehJwT8shblkF1u4bM55TrbYoyyhKUNpUr6iG0BKeazupVOStLCrzoNaL3jd6S4CYl0ta0
uOFJCvt3QAeDhTVNaGik19aHS0ab8D62Zje7bJmqtjLDbEt5yaInZ7jLDjjZbW4yD9NW3FI/
9JpqmchxA8i+VMkjy480tQZD8J37mAVxgpEVRbS1xZSFBKUcG0iiq9NTBWGMHNLjAt8i3MQ4
iVy0LbekOIUVKQ6CFBTfLsqUOZ4qUgqSd/TV2blksIhGm7lbxDuiUvR2y4F2+cpJCVOsEKq2
pQoooIBOgk4ZOHOXFrbQ5bYRirkJuNwihKu1MmoQpKZDwJ2A7nLtooiurs5NpxoOZ5CzN24M
TlPvt5I8wuFNuXCsmcy4oFpt5paC2rgAAiidxTVwZnI4uGZsZHnCL5lVu+9iBlLEq2RuSFER
We0kJKeJQrkOaj0B9KaCfkpjiSpCj+QCtRt7Sepp0023gIwX6w+UYVvs8G2ybUt1yCmO2h5t
7glX2clU2MFIKTSrqyHN90dKHSm6rA9iv5dmE3IMluN5j84RnstxnY3cKh2kISFNFQp+kpSO
XHRwWRzkmQ45NxbG7Ra0T25FlTKTIelFntuCY4Hllvt+8cVigB9NSiGanIWS3+w5BmwvalTI
cOatp6eiiHHmXG0pSoR/cAtPsHEqodVs1SJPJOZL5Ft7r+aOY7KnsuZTKiSG31kMLShpSi80
6ptRVQ1FKHW3fU8KATjHyXTGM+x55y8TUTvs3XJcZ23KfdTEUwpuAiK7MkpUoCWnkk8WSSdu
XVWszhLwas1E/UwqSlPfeQJBle9YEw8v1dzR6i/d7/q92/x0WtLMT5NHczmwOty5zEh2PcLp
aoFkEJaCpMD7JTVZ5X9DiB2ipDaByqfT1e2vgVZT9Rpf8ysS22pNibZhS2rgw/cQ7GQ5+7ON
lf8A9oKQoKQwlNSft6UJVXrpVpTTHvEDTyDlcfJPJlwuMSeIdtlyW2GbhGb+2X9n7QpxQQEq
UoDkSVbmg0N4SQK0MtLeZxp1zyF203SDbblH+yt2NT7nxcaFoh80uBtb7b9FvHi6oFNSVHWn
ZYXH/U6OyannkaY/lVpb/bli7RrdFtlxlS8ojstqYbubDq08EsRwP+oTRKh2lUCeVaU1nssz
tg2m1CxB1aybHVY+gGc2nHP2F63rxsVDiry68tYf+2FEqCapV9xXalPlrfZdseQxKXwSEjIs
RckrN8nMTsTkXG2O4xbOIdEOAw8FTAuOkVjDtpotJ/zP+brrNbqXHj9zSSxIzvN3xaQi3DOp
wuVyamT5Lb0OkoJty0hUBhzsqbqwp6q+yFBQTt7emsyoa4MRP1H9wnW9/wAy2iXapseRaRZ4
ZyN1lhuLFRC7B+8SpsFTaR2lUKOqSQke4aJiq8yaaUtMxh/7Uy3Ps6iKtxwxUk7dnme2d9/o
p13109rTtg51RwWFcuQpQbAdRX565iEKe41oFdSd9x0A1QUB8hWhNSeleoJ9dtUEEEnYihSk
nirrQ/hqITv3TsaHqfjXQAaUilOND6HoDTempgKUsj4VO9etANBoCulCkDluD6j8NQiUghO2
6Ruqvy0khVQTRKyFEfV139dQBlFE1IrQbAfAfDUAVU/UlNDSp/DUbkR7yrYcSKFS6U1SQpVK
EGgAI3A6/HQZkUgGtDuUkn+Hx1Cgl8uQFTx26nbb4akaErUUcTTkTXgpVOnr+GpgxWwBFaFI
PT5nUIEFJA4iiqdRtvokoAFJ5BIrStf4aYICVLoONKivuPoD66ibAGwUlRpuQon/AHbeugIF
inc6VQPzUpv89EmkFSrdB6fTUj+zTJpaOfbc4125167df92oCxeI75ZrBlEW93VmTIj291Ep
pmIWw4XWjyTy7gIKP8VN9dq2irgxCbNAx3M8JgeUzlrcS7PRO4uZChoEZT33DoV3ErpsWwF+
3j7vjoTfRmlXA5tme4IzZmLJIRfURIN4XeYMplUcLdKihSWJDZIbVugioPt0S9lL8khE832l
mXLUbfLKLrPmS7mvuBT8BuUFJQq0ubBl6iquKI9x1dZBQQWM5HgsXDcmsFxuFyFwvzjSmXBH
aeSlEVanGipfcSpS3OX6nwPx0t6COxAnI7arCG7EF3P7xEoyFN/chVtKd/cIp+l0V6/jpu5B
NJQiOsFzZt1+ts91svsw5LMh1gFIK0tuJWUgq9vRPrqqy7G6y/NWLKyGDOXdO/bE3Jc5TDVv
kJkMILLiU8nH3nAVVWEnspCTvrKRrqU/D/Ll4ayL7jJbq7IjMInuW+Y8jvPNyJLJbbSlVCoN
khPs+kHUlhjtaCxDydNkmY1kV7TFu/7c5Ds2SOsFx6O+t1LhU+82lbxBAKUkA0G3TU9GLbLD
dfJtnh2S5G2Xtp/LhbLXGN6bj0MqWy86ZLzbjjda9lY9ykg6cFxJS7Bmq5WWT73dp8exvTYa
477rFtRKafXRI98Y+1K3ONVOJ9fkdM4LrLKOolCqHcb0NPSv+7RBlppmj+PWo68EzGG5eLdC
cvEeOxChypQZWp2O93FKUhQoBw2Qr1O2q1jTcoVY4jD3iC9RXZ9sTMdlxpcCKqS03LCWFH7g
KSQFkkAcEmtfy6U0mUSXTObhYbs7fZ0l/HptrXDY/wBPSm1MqnKuqS2kCSf87jsQr8nHWag6
/BVM4t2CJsd3/wBONW9y8iZGbvAC+LbKlpFf2PlTnHU5y7leg+WtVsZsoWA/KOPZDcIuFJZa
Tc5iLQ1b5KYrzMl4y2ypamylpalKo3T3dPnqTXBWmQeNfG9yN2uf+p8YccaZtciRAj3NLkaO
qU0UFsKfqjjWprv00T8klBOpw2w8VPnG4C8yNnMn/SKHSYwliX2zxZD31fb+4Du/OmntnZuI
0RvjuwTv+8LDQx8WjsR3Vz4UaV3kQlPRlJS4h0qVw5L9vAlXHlTS2o2YTfgz6NheTSJN2jRr
W+p6ztqeujCuKHGGk1PJSVKFelaJqdJlyXS24hajithlxcUlZO/d47ztwnRJTrC4rqFqSEJ4
AsN+wBf62stuTarBWm7FFPj9+8fta1yG5qY6bymagJCdv0lwSeVd/wDMH92huQajJOy8djOe
KrJdY+NrZmG7qjS5YS8tyWyW0ltbjpA4NuLVwSQeI9DWuqX5JkhnCLPaILkabjMNnIo0llUV
uJb3kW6I2kVciypi+CLiVpok0SBy3rpfwODrJjWp+Fi3ew2DNv13Uu6/tVpaVFK7YGl9pC6K
WHAtSS6tNQqiePrqVn5JpTBHZXjuOyI+Hz7fDJavkl+LLNpivQTI7TqUhDEKWtxSHwFUCiri
rbTE/UOucEbDxDHjIy2VclXOJaMZShSowQwi5K77oZQlznVlC08qqHrpl6BUWyfzbCMHtIcv
UhcyDYmGbZFajwmWUyXJMuMXy88mQrtp5JT70jfloWfqbeChZnjycaye4WJ2UJAiOICJHDgV
IW2l1BKamiuKxXUs5RytHJZbL46s02Hjgk3xyLfMpS7+0whFLjSVtOLR+s9zT7DwH0gqFemh
x4OiSRF3TEcettshuXC//b3yfBRcWbaYq3I5bdUoNtfcJVVK1cD1RxGtpwogw1Bwl2eA1gFs
vbHulyLlJhylEqSpIaaQpKONShQHLlzFDvT01hZ2LrDRH41jr9+v0Gxx3UMu3FZZbecBLaDx
Kqq470onWnhE6ZLBbPGTd2uUKBYcgg3My3HWZBaaebLSo7JeWvtLAU81xBCVo+pW2s2aiIL+
tynI9f8ADF2jzzFeucONGMYTES3m3kFXJ9EYNrjhKnW1FxxNCrYjfTgzFmN5Hi25W5L8mder
dCtscHv3Z1b/AG0PB9UZyOUoQXOaXUEcqcT1rowayudkXeMAyO1qcEhDb5RObgKTGWXlFyQ2
l1l3ikV7bzbg4k712pUaoD4Ji0+KPIDqJQt8hllSJb8JttqS62JL8Y8XO0ppJR19oU4pNdLS
FKyKl+03D9olXQSGkNwnxEdjqfCJYWsdW2vqU2NwopNOuhOTMdR29nGVSbUmyvTh9o6hERx3
g2lxUdNAll58J7qmk0HtUroPlqNdp2OrvhMCJZJl3td9au6bS+wxcgiO6w2HX1cUfavuHhJC
VCp40PH3aa5KzaydZrGVT5mNqueRlZuEU3C2y5sl7jDQhxYUSpRP6n6JKQj6tgNZUxI8nPMU
ZI0IN1dyZd/h3ht+FFuKfuGStptaUvtKafCFhslafkr47aW4yisnMCW8fyWDk86DKyGLa3MS
SIn7nJkLQxHS7yQluPRCnSFdxVeKNqmuprCCorHrfnuM5p+x49LZh3mVGC2pDSmyy5F7Bkoc
Q482opCmgT9IV6HQkXwTPj++Ziq0KkMCzrF3mOLtjVwmGC/NlqWkKbRHYo3IAcKUhDoCd+Kf
aTpxJJQogzddmuD8a8XNTKGmLS6BdE1DfaXIeU2EoQP/AMZUcU/SNZmWSSRcZcbyNCxGSy61
b2Yn2iF3Ntj7QXhVuAQoGWlH65Y4KRVSvyka0lJSs4G2S+Qc7nIayEMx7O1dJplx5cNptEh5
22qSG0PPAc3QwpSeHMfz0PApednO05zml0nwYFjt0NLkaU9cmrbBhobSt0RloklSK1cStgr5
pKvwprMmoHOPqzq5z487HcUiJt86C+3+3RWExoUqEl1KZBWXHUuEB3jVXdG9Ka0iwpKznMe9
xb+pu8Wliyy0stn7COB2w0a8VpUHHysqNalSydV3okPcTyy+2S0oQ1bUy7Y7OS5BccL7SU3F
KUhI5sLb7p7dB2V1T601LTZSTDcbyLdmsnkuYtLuMC/SnXLihpl5DcedHWshSFIPJJjLWr9N
Wx6K0xnwc1WEVBeK5ILQLs7aZqbM82FC5lhz7coO3MOgcaH41prPVvRtPyXDI71f/wDT82bI
xCTb5d8ix4t4yBwSRFeYY7fa7bS0Blkr7SPcFn1p101T/QLVWimXq8m6ogHsiMIVvjW6qSSV
JipUA58irn9I6azHBpEzIzRi452/lUpl1pqS4kuR4Ujtut0ZQ0O08pK0kjhXitJSfUaW3gy1
nJPRfJNjjXmc7FtD1vtlxhpiyVxPtPvFuNP98PKbW0YdVfSqjQqN/q30ttlCZHIzWC9lF1uN
ybmSLdd7e7aHFlxpya3GWlKULB4oY5J7Y9gSEgbDWH8D9SpKcaDhABAr7R+Yj06etPhpAuMP
NrawxCuJjOOX61WtdjhNEpEVcZxpxkPuq/zAtCXlexI92240pg0pnydZ2b2ddi/aI8Z4Lt6I
SrdcnOK5M9UZSVLjXQ9FxU79pCfp4pG/pK/kW8HPMM8gZDjEaAi0twLoLo/cpK2HHnGuDrIb
CUd1auPSnD6UpA46Vc52t2iCsWO5C33u23BSA6IUpiSpse3n2XEuca79eNNEHahdrDf8St+T
ZGqTL79rusuJcI8gsKdLiGJ4nrjOs+i1pq3/AIQeu2ujvmfgK4J6Lm2DNFhpi4dpdv5pVIU3
KYYkRXZDkgojpj8XS8jvBNHiEEDb46HdaL6jeLlWJPqvwRe2bXZnrhOkxWozC25a2Xk0ZHZ7
bjElCqU7bnBSNwFb11N5UeAWUZCslSCKgVBAptt6aLIUzVLtkdjkKuk2LdIqm7+q1JtFvfbU
tNqVBLffdmMlJbaQOCh+ny7gV066zXEfQXlkHf5OMTY8dGKqjRHF3FP3SZo4qXLLah92ypfN
67HpqEJadk7le+wHppkICJVyCQdq1KjuPnoZoJKU7hNQk7U+Vanf4apJnNRHLrv1JO3r0rqM
yEqqtyKV+Hy/DUQkpJNUkmtdh0G2oUEgqBJVSv07ahQCeR2TVVabn1HrqQthD5Ciq7K2oK6j
IY+siv8A5nYnQIXLcJHtSPQ9dRINXHpuQDtWuoWEof4UgipNK76TICkqB47gEFKa/H56GItV
RWqfcB/DRJCTQqrTiB0HpWmkhQJWNk9dhTroFMIDcHYbkcfh+GogVQlXH6hyqQOtB8NRCkpU
pRJ2HQAk7V0EEpACgn0Hu/Gg9TpJhFw8wKnfcfA/Kmkg6JJrQCnUDqRTroENLaio8R9PQH4d
TTUQKA+tKD6fhqEIBJVx+AqR6/7HUKD9eaR9W9D8RoZWAhR40HUdARUb+mgwgJVRJXWnUcfj
/HWoNoNQVwqQRQUFDU0Pw0BIAkciAr0rx+Z1AAqCapJoPh/DUMhoQKctjU03Hy601FVAJJUF
H2jqkDULCBNVEUIVU9NtElVizRR5p323Seg0SbYmoCRQAj1PruP7NQA4kUG1BtX/AH11FhIU
U8qE1qnY/iPXQTlgWolAHVH8Nj8aaiE1RUKp+KetB89aHk6A1QpITVJFAfQE9dtBoTUpUBX5
kH4fL8dASDia+4/p7fOvr6agYfJfStRtUbV/gNUkpAaVPEfTtRB2Oo0GUpTQLNeVaAV9fTUg
Ynn+j26Kp0413p8NED2HOAMJfyCJGXJehsyX2WFyo9e6lLjgRyQmqakculder11mTi0ae7hU
yN5bbw1y+SXD90mILqpwsvqaWAoobJUoBR6JTXiTrFXv6EmS0fxpdZEHILpEyV+BcbHd1WtL
lxmiK2WUigW7ICiS4T7UhP4dNTThBlkFI8VeTmJLcZUVa5CbkIKWmpHeU1KdTzDy+J/SQ4j3
d40+eiTffA4x+wZmi35WWcmft0rGx9zOgsvvOIkFRKFrS+y5wKuSaVNeWtOeqCycSVprEsik
Yw/kyEpVYmne2+4qQgLLpIFewVc1bqHu46zZMy0Rcdp5byGmwp15auLSEbqKjskJA35Gumqb
0SNPuFl89pTbmpb1yf8At3mkwmky23VMyeNGg4G1lTblPp7msmtkSzgvl+z3dibFtVwiXeUX
EsyWFIW4XFtqU6lSkLWEqU3yPFdCRpkz8iLZivlnHMgii32m4Qr1IQsRghCV91sU7orVbRAr
7kr6eo1TJbJC4x/OFxnTLZOh3ORJlR22psRLKOBihwloBLYDaW+5XdFN+ungIIGyWjyFaMic
i2aHcoWQsNL5x46HESw0aFR4p3Ug7fLV2xBFdWlwPq5pKXeR5gn3BRO9a+tetdEmcl4wa6Z3
Dx2+HHWLe5ao7Yevgksx3niyuo3S773GwUnYbDWnEGjpY79lrOAXlqFFtpxruIbuCJEZJkPO
PK9nbUR+oWidt/09tTnBvBK5jlPlS1KkW+9xIokPx2Ysu6xoYKnIywlbcZUxI4KSRTkg7np6
6JMNlYumdZdPZukW6hDsWc406+y7HIRFVHASkR0n/wCKClPFQTTkNRLZNZZkd5WjH2MtxWCz
bozCV21mOFxFPQin2oS8246Ut1IVsK11JmsSHj3kfD8fkTJdsxHsPSoj0JfcnPSmlB4CgW26
kBSPb7qGp0mIG3/de48DAVZ4Jx/7T9uNgT3kM9oud7Zzl3we5v1+Wo1sVhGWTmM7aumO4vCl
XFSVNWy1x1rZbZKW1JcUklf6iy3y5Fw6nlGZa0QTWQ2dmZdXF47EfbnJUmPGeceWILhJ97Ky
eSiDv7v92lFwP289tblgttov1ijXVVmbcZtb7sl6OlLb6uVHmW/a77txuK6EMkR/qFpOPOWE
26AXXHxJF07f/wBoJA27SXa/5W3Snx0MJ4LA5mFmPj+LjQsLlIsv75FwckqVHXJAAcS40EAF
tSNi2F7fHTIySivKH7JCQiw2J60qemMT20XFxx6IwY9SGoDK0N9ptfKhCT9OltvYJy4Iu8Zp
FVGi2iFbJlmtSrj++S095SpxlKTxWYrq0oDbaEk8Kgn4nR22PVcBZH5IavL9ijLjyrhZbE4p
5LVykqdmyVOq5PB6SilEkUSjgPaNMYBvkZ49kuOwM7ayOTCkRbbFlCbGt8RYdUhaFBTbZcfI
5IJ+o9dZYJk1jGZ4hbPJsvKy1c3YLpkOxYyUMrfLkxKw6HKK48EBw8CDX463Zing72LOMKtd
us0F5V1LWP3l27255pDXGUk8ShqQnmAlW1OSSafOuswUwKieVLL23Yj0ec1HuzsyVdLglxK5
tvdmKUoN2hWwba+nucvrqrprbMqHggrfd8UR4zn485PkpvkuWxcOwiKFMpUwlTaWu7zBovly
5cdvhrMQ5NPKgj3sXsjMUvNZbbJC+KSIvalpcJWQCKra4+3131SgSaZPYgjHsPyWDkT+Q228
R4S1pciQQ/8AdAOoW0HEJdaQklsrC6V6dN9AqZLJY/JWLWuPZrdIvkm9yITd1bcusuO+EsGe
hAjr4KWZCm0qQQeKgsb020sU+CPnZ/Yrpf3LVeJjT+N3G1/tc+fEjSG+24h37iM/R9bkl7sO
fE+vw1YS+TLSnJzxrNcae8vN3+VcG7JjlrY+wgNvJccU5CbjmMyz+mlRUTXuKK/TY7jRZYKV
vyNsfvcaDjUey2XLY1gn2+5SH7hcAh5Lc+I5x7HaCW1F5KN/03Kf26esZkzlpEzjefY5BwqH
GactalR0zjeoM9Uhgy3XXVltf2LDa0SAttaaArHE7dBpbyNsoziN4/yh+1JuDaIJiLbD+8+I
HA1QkgtFzmCB+XrpcMqynEGs5ldcVjXe+RL/ADrZNx4ybV+0WWK2hUiK62ppU0uNtoC082OX
JXMhVaaxU0lDf1KF5NkXOe4ptd2s0+2Oz1/sbMIsF+PGcJDfdW0232WggoCkuEmo+R1ttR8G
XU6+TMZuF5ye0NWJcK7THrRDaeZtkiO4PuYjXGSSElKEDkfb/i9NZTwNuThYrLNwuzZJNyG2
MRbsqGwrHU3NDMjk+mWlDq2miVpWptCqmv8AHauqcisomcBnYxeo95uq7FD/ANSOvxkpt7P2
SEmOGSJEhtq4lMZAceSCpKN0n6dq6E8mWnGB7CX4+Zt92eZttnBS/e5saPK7UpbbkJuOYbKH
SfeyVuOUQPavoK01pLLLceB/ecRwZq8JhWq1Wp/BFtTFX+9KcbckQ5KUOLaQzILgdQArt8Uo
SR/bollbcFHXj+FpwtGYOsx1Rp9riWqPbO5xcavqnO3Llbe8FlloyKlPE86amv2GHEEx5WxL
B7bYJSbPYnmXUyYzFhu7TKEtym1qHMl4PuqmdxH00bTQ76VWUDwykeP8IVOzGFCyO1y41rWl
9XbkNOxUuvtsrcZYLi0p3dcSlPEbnoNYaYz+pPW3Eo8+9OvXfBVWWXGtEy4wcaQ5JS3cpEdb
aUIDS1Kkp4hxXIIPvpt0OtvSQJbZaLP4tweTMaVOs64790Ysv3Fn+5eQq1yrnIfZeCAo9zkG
2UqQh6tK/DWXYIg5s+MvHxtsW4Ltb5FzaXLaaTMcSlhMW3vy1tIUalQeXGoSrdIJp6a2oLs2
znN8QYbCu9itSbXcJ7eSqUTcW31pTaUKbbcSCEJKHOPd6vEVA1l44NRmCvN+PcLfZg2dtmYi
8zMacyNd4EgdhLjCXF9tMXj7m3Q1uSr2120z/kzbUrwUu52aJHwrH703b5jEi6Oy0vT3Vtqh
yEsqASmOgfqIU30XzFCeldZXIxlEVZH7bEu0d+5Q1z7e0rnJhoc7JdABonmKlIKqVp6ampEt
GciwGNZg1boNsvzrYkXVm0ocRCTFkBCoyeLylnvgFRWU7UoNOqyFW5JO4w8Rtfk2+2iBZI93
YVNbhY7ElPuNwUFZR3FSFpIcWmivaeWx3OprX0KrcN/IVvsmB3HyXItrR442iNJc4hxSGxKY
hLWpDK1nmWfuUfp8t1Cmh7QqYZwwKPg8+2hi820lTLDsnIL07KWx9ozzShkwo7f+c6eQ5IWD
VX/LpSyLWCIg2W2O4Jfbwtav3OBNhR4KOYSCzILvdUpsbq+hO/prcJ2aXg5PSfyTdxtWAuYf
JutviSoLrK241ouEqWHHrnIQEGUhUJI4sIbSsq5g0+kbknWODp18DKVj+JQGcQnXCVJTBu8J
+XeVR+LryS1IW0htlBoE8ghIJV0ry+WhayJ0y3HbBEdsQt7D9sk3Ohm2SXITLeitOLR9s8p5
KUD9dtZUEKFQBX11pxE/6YVlOCIzSxxLBmF4ssRxT0W2zHIrLjhBcWlrbkqgAr+GiyiPoUFs
wDxDJy/HWLpCmhh5V2+0ntOcAlu3paQtyUjkQVqbUuhT8xrK5HWCVvXiHGbLjztylPXWahty
4JL8Zy3tIbEKSqO1zafUHl8+PJfbB+WttA5MmiRlPyY7JPFbi0IUob8eagD1/HWDSRqbngO8
RbxcYs51caBHukG126cpLahKTNkdpToQhZKO2n3cT1O2nr/gKuTkPDlsezKPjTMq7xXnI82Y
+mbCjB1TMSvExUNPKS4p1Sae8pppdQWQ2PCKHLlIjKnXBLDNvFwXA+wDl1bKpIjBl2Gh2ia1
7iTz3RvTV0xgX8mbZJa4Vtvky2w33pMeMvt9yVHMR/uI2WhxkqWUKSrbroaMypIrgmtSeNT0
6mg6aCaCJQGykEgE1qeg1QIPZTYCp+deg6amhEhKeJCVbqHuBO1B89KIASAneo9CetdDKAOU
A406eo9a/HQQaBsQBsQenSn/AIaWyQZ5VCfqVTp6U+GiRCKgDQt1VToKevx1GZAAQnYFNaGt
dQhklI9yK02p/fTQQfL31SOVN6f8D8dKEV0PLdVBsnp1+OoDn7aABPIJPXpt8zqEUdtgmvqA
egHyOooACkpqFA8TRSjufxGqCCKhQpVsk7AE6IINJBFFbj4AU2OlABKyk8a0HwPw+GpmkKTR
HQ19RXZVT8NZNCSKFRqN9v46TIOLYGxJV6/+P4HUAYpSg3V1J3oR6bfjoEIJXxUrl6+7+/fT
JCgvmnlQCtAADQfjqEJJSkFIJJBqB/v/AIayAXJSt6FJJ2FKVr89RQGd0gFI4hXuT8CdUiKV
x4Up1PuqKAH46kMgU2Qmq6Up0Sev4aQ2KTVIBB6ggb7Cvw1ljAnmoFJSKe2lTTfUjYZqEDiq
taEgda6iDUVAEcwQocqU61/HUwDrsSpQ5K9PWny0CwlJQElKdiNx8q/HUQEk0/5Qd96VPptp
CA1EBNQKEHYdf7NQsTU9ae4eh6/GpJ0AKA5FQpRIFafx+OoUg00KqdU/P5etdArAklHLZOwJ
rTZP410jIaiSg86kAbAfI9DoZNiuY68fdxrT0rqKR/41kLj5NEfZtKLxIS6n7W3rUsJceqO2
P0lJXUK3SK7nrr1Us1hHOXBqkzM7q95Qi3mThjDWUofQEWpb0hornFQ7TiuSxxWNgE/QfXXO
rZYgfXHOLk1b8hXd8Eirsb11S/fGnJb4Ee6JT+V9KiUqJ39u3ppnCDGCNV58yA3SXdo9vgsS
rlLakXV1uqlSYTTfbRCXy5UaCa1UNyd9CYMRimcWSBa8kDGGuy7dfAW5piS30sxoqiVNspUl
twN8VbhZNTqcwUzsqreS25Nhes4sdvVJkOFxu7qClT2khQIbS5UApAHHp0J1PJ0hIjoE5yBc
Y0xlIU7FeRISDWhU2oKofUbpppraDkzYpf8AUDDduse4oh3DufdtS5UFU1v7WjQrwbbbZSpV
VUp3FHjrNTSUuCo4f5Odx/NXshcZkP251+TLVaG36JLkhCkJXU+wqbC6BRFaa1xBWbiBeB+Q
3bE9Oh3FUqbYblDlQ3I7TvvjmWQVuspcJb5mnur11Ng3JOzPK1iaxJ/GbXDnpiJtYtcOZIdb
S9Xvd1a3g0QEpp7fbv8AHU3JfUqdkzVbN8auWRO3G6IjRTDjJizXIkhDf5Eh9FFdtNT7f46o
KzllbdWhTrq0ApCiVISSVq3OwKj1+Z9dUGUi9+Pchwy02HIIV8nyosu/RfsW248YPpbQlXMO
BRWhSlVNONP46W8QUYHOOZBh8TxzfMbm3mW1OvLrTsdj7RS22vtl8kgEOUJeAHIinH500OS2
i3XnyxiE794mR73cXkXG2ItkWxLjq7MV5JRWW0srLRKeJVuOVeh1Jmo5IXL/ACDi16sN2tkF
+RBlSHohk3DsJK72lltLanJgSlPZLRBWAk+74akTQzzP/Sl9t2JW60ZPDcftkJNueXKakQ2C
QStTqnVpIQn0CSCa6u0sGTniuyRcXu94ly8lsQC7XIbYkRZaJbjLtUKQ4WFITy48a0G56ank
uB83nWPLY7AyVhWVG0fZpzByOtLSpKpPc+stdwHte3/L+WqCeyHw+92tnymi833JrdNisRy1
PnmMqM1K5tFHbQ2W/wBVaV8SpwhNaanoVOShnH7XIuV3ZdyK3xm4gU9FfPdcYl1qQ3HWlKTy
HT3DrrSTM5L1ZrvbVYVYo1ju9gtfYafTksO9sIcdeeUqqFFotOOvDhsO2vQ0WH9CoonWxHjl
+EqdAE4TeaLd9iszijkCFonV4pR/+Lp021JSwiEW6W8/L8QWFhy72tmZa7omallLzHNiHx4o
UuOhPJ1xK1ErQQVH1rqxkm24ZaXrzKetSo1wyG2C+S7rDdtN4emJuUZ4pqVyUQ3CRAQlP/ti
grtoQlc8gzciXHtdpQyzdEN3MPw7pdp8Se/JkcOPFbSHO3HjKpy7ZVx6VOtKCy/qNfKEAHyf
ap8kW9cOYzCbWp95r7FTjKAmQmR9upSkISeu34V1lvAPZGYbAfc8yx/2huGYcW4IXIFucSqC
3DBSl4tLfKStviqh9fgNacQaRP4HjVya83XRLsHssMuXBa3EONo7DUpLv2zjZQviA4DxTSux
3A0PRlagfYpihk4Rj9jumOJuKWL5KZvJeeLTttYc4hb6kNOJqafmNUp/jozJJKCLi+PMJeXH
UWVOTSqei0WRuYgrv7EdawzJQ8D/ANIQkfT/AO5x9unKB1TIKJj9zk+Ep08WhTimbqw7DniM
O59qW1B9YdA5qaCxQ1NAda29mraUFCVbro1XnCkhI6lTLgTv6VKRqwYl6LF47xVjIczt9muq
ZEaHJ7pc4JKFq7TSnA2lShQKWUBI9d9tDsJpELx/i95x7HWZlmn49HBvE96HIUpUxz7Zprig
Ohr7hTKieQHbK9thrLcPyKK/dPGeMSZsy3487NXeH7T+5WeDIS6hpbzDlH2UqeaZee7jXubA
QDyB0rUk0IxnCbBG8wxsWQlVzZjRltzXFoTJYRcBEU48Ckp4rZSvZHKh5fMam3BVhyQVlxPG
GMSt17yAXiQ/dZsiFHj2xLRUwYwTVTqXkqUtSyvZA4/DS3ljVKET1q8U4lIxuBMumQLt9xuz
Up+IFhtJbQw4ptsLiFK33SSj38FDjX5az9Cy0ZI0GFuJoU8ioVNRXrT+et2Oas8GqXHw7Zl3
i72GzXyU5d7DIt7NwM2Khthabi8hltTPbUXCWy6CaiivTWdGrFMy20YjbXJDFoukuZMt8pyJ
cGpUVDSatlSVONuNLcTTmg+xdCRvpdYGlpFeSrHarLe4kO0oS3EetMCb7eQUtchrmp1YKl8V
r6kA8fhoTwDTUwcMUxKJe4V4uNzuhtVvsTDMiU+lhUlag852koQ2lSPcVHbf8dDeSU8lhsHh
aPfmrjLhZFGlWiI8yyxOYjlxTrj7XdCFsuOM9pTadljkSDtrTwXXAcbwmhyNKdmZLDj/AGL8
xLjiI70ptbFtLfefaW0d930cUdeu+2qMl1xk5XLwZcLbcBaZt8t6MmeYkS7faUtPuCRGihai
v7lI7TRUlpVEqG3Q9dC0aRB/9tL13JjjcqN9rHtMS8tziVJZkNzilEeO0s/+6twlArtySoem
lV0Z7OWSWYeG8sw6yOXmTNaKbc8y0+Gw+2uO88qiCwtxCEu8V+rRqOuiJNL5IOzLz3N7gzjw
vEmWHavlM+Y99s2iMkureUVqX/lgFWw5fDQSTHreB36Zev8Aoslts1tmIuc/kTU9z7eLHZIb
UXnFpDzZ5OJSkcN67eutPRnlj5rw75EnT5CmJ0R9ahEkRribhRuf95zEVcd0jk6pfaWBzoRT
TJMNrxJ5EdY9k6EIoFIzqrkhLLzXYLq3o6jQKabZ5Fw7UTyFNEQTZ0a8ceYIjEmD94Yap5U2
3bv3RLTtxS0ge6MzzH3KOFAlXw21c7JzDIn/ALf+R2LO1fAf0l24SGmWpjf3htRBClJjhfd7
CUn3ilBvtoiXAfJX7jYMiiY9arvKSE2a7KdTayHkrClNKo7+mFEtE/FSRXQaiBtj9jfvd5jW
lmSzE+5JBlTHAzHbQhJUtTi1egSk/M9BudEhBP5VjFyiiz3Y3mPf7bdqxLXdGe+lJMQoZLJQ
+htxKWuSUpIHH4dNbslBKJJFOMZYcwvsq5ZBBtt3xuWg3LIJ8koT94pfFpTRCFOOLJT1De1P
dTWrYieRq8SN4uIZpa80mWeK801f40WRLkPpcBbVGXFL7q0LKfcHWFk7pB39NYs5YcC8YwfL
59iDlplwI6ckaWxDtb8ppubcGYywpSWGlBRoHG6CqkkkfDSm5JtxghocW+rxG5yGOP7AmdEZ
nklIWZZSsxgkEFew5VoafHVWZfkysk1f8Kze046W56In2NoeDs6JHkR3ZkNybxQn7ttsl1sL
ISkA1AP46ayxhb+wzcsGZXVOOWgQg85KgrdsCGktpU7DLq1OOOqBB9q0rJU5uB8tDiJHWxWW
WbMI18hSLtEY+8uKWG4CoimHIz/23BlKULYUttSqhKV1Nanfrps31h6RLOuSHydV0/1Bcv3Z
pMa7iS4J0dKUpQ09yotpKUlSUhJ+B0MxUnrFLzRq0WOPboSnoTV8+5sykt8u9dAhsFgkEcxx
Qg8DQfPVw4+50UlvuX/ca+WL7q9YFFurcYT1tXd9BadaDj7jsnilt9H+S8V0HHalN9MuTLZk
0N55uVHWw2XXkuNqYFP8xYUCgf8A1HamsQaTg0W1Zf5BZyi9TmLCp+5T71Bl3OEGXKtXCM8p
1mLStU9xXJJSd9ttbn/AqR1K/wBV2+8EWzx5Jsl1ukW4RCxymyi+iSkB5xpLtSFMcq+3bffQ
7N5ZONCbfkWWqkzbPd8QnXaau2RrddUNfdx5zjUV7vRX3ltIU4ChNGgae5I3OlN1SBvZnOUI
bRe5gFqkWZPKot0pbrjzJoNnHH0odUVfVuNDTMwnkYptl2diOzWYb64bBo/NQ04tpBH/AOEc
A4I2+J0JSLwN1VSlTavbQV5U6n121kTlQAj4EUAV6fHSZD9qjXj7KVNNRpA9wKTuE1oTqEPa
g3JJ3Bp1rqIIkEgk0qCOI26aAFVp0NBtVXQV9dRBbUG3u36Hbf4aiCoASRuneu/9p1ELUogg
U6ihruQNRMIqbUB8U7JI6VpTUIdUAhIHvAoCo7k6mQYBSkKUqprQim2slIkOEhXUjYkH4jWo
NdsBpUU7bU69OnxqfnqMpg6H2kD1JpsNBoI7jYkDeh9B8dRiwZ4I4gkGg6b+vqSdQoAK1K5L
JqlNaj4dN9DENJSpJUdt6pB3/jqAPmmhP5R9W/8AKmoQVUtzc0oKAEU0FInaoB9PX/w0ohaU
UHLqSapPqfjok0mEFDkpI2cp6gUA+GmAYAFJO4ClKFSSdqfHUyDSQo8ikJ33HwB9fx1koDUn
lsnYdE7U/npJKQlJUpFQmiQevw9NMmoAWfbQkUXT3E/PprLFIWSUmvKp6poK/wA9EGpObZqo
qIKKgn4AV0mRSU86caUQa0rufjudRVFqV+YAgk7g76DUciSFdU15A0CTt16b6gkBBTQV5CtS
T8SdIJgKAglNRT4g70PrpHEAWhQ4qArQVKfl+A1mSFNKTyTv03Ap01EFQqXXcqFaj8Pl8NRA
PGhBVRRpw+H4DUQZSkqBPX5D5dRTUPIKJ5VrvStOW/8ALQawPPHDcZ3JIiH5zFvaS+045Lkq
UhpKG3AslRSFKH0/DXp9bhnLg3K83DHpPnqLkkPIbWq0uvNzHpynVdtDLSQ2tlfJBT3FpBAH
T5jRTEmYJ6NkNqhs5lAsOT2SBcrle03C0yHihUf7ZaQVAKcbW22sJqmvHY9NEqEMrEo4zsj8
PSb3MekLt8mzO3hhdnYjMcHW5LbIEmXKI4c4KnqVR+Y1PTVBki8GR32fIzTt6szSrsyuJFDc
puHFffCirmy0rjxZ4miVU+Ol/wAUL+Ckw5tvZ8ZT4C59nbnqmVahuRS5dFoCkgqYlg8Ut7fD
6a776LOYFuYkrFsTENxifelP2febEven6XMdzfr9NdSgsHo28W3xsbpb4zljtv7NIuEVu3XA
PwG0FgiqwG2D9w4hSfq73Q76EjElSxmb4+v/AJDTj1zxm0RoEaZK+2nsf9OyqKy0sIaeHLi6
VLAWlaj19NPApDbCXsIyu9yIszFrZDusKJOct7Eda2ok98EfaMKjqWApaRXfn7tLSKIJydim
GQbHLu93xu2tZHDsn3k2wlxSIzMoSODRLLTqinkg7gK30LDwFoKTjv8ApW/5g0bfjMNuIuET
ItU+euLD+4QPc60971tg/lb39dMYGMYKLISlLzo4BFFqCEIUVhIBNEhX5h8D66Gx4NL8V4/L
umKZqpu1Km8ren7Nz7cOqEtBNEsrKSrucT0SdSCywDEsfmSvDWYvIt0gOJkRFofSgrD6W3P1
UiqTQM0PLgR/zatQBd89wWwXW63h12ySoKoFmYuMa9MOFEaS4hKE/aNNBHZBUKp68+RqBoQl
IzDxrYLHYbndIqZj7wXGS1ADiFOWYPtpcWm5lJXyUtSuLdP9+unbgHgYeULJBt9lw+dHtaLa
Z9qDlwLbCmEOPggVc5fSsjfffWGVkpGnjDD7Jld1uEW5uSCzboDtwDMIoU+4pkp9iOQUDyBN
PnpJFm/7Z4muxDK3Dc27QbYbl+0c2/vUq+47Hb7xRxp+b/L0ZJojvH+P4zcfKNttrceWq1y2
nHGol3YQpfPsKUO6gcUuNAj2qFK7HTwUStlBeiO9x9LLS3WmVrLim0LVxSlRHJQSDxSPnqTy
Z0slwGIYjbccsdyvlzuDMzI2nX4SIMdp5tpLSuCQ4lSg84VE1PD00OWahEIixQF4m9eXH5wu
Dcn7dMZMNaoJSSKEzCOCXKb8evpq+hlryS8vG8UT45s2Rx3ZirjJuyrfcSvgoAIRzWiMyCQo
gU4cjUnrp+opyWB7C/HUuwyb1Cbmx4kOdGYkMRHzcLgpmQSgpmRFtt/ayCoUSlJVvqgWiNyj
HfHllYivvRnmbiJC2p+MouLcmQIhb5IdeeDdI7oXQKaPLUlI8wN8qxDEIC8aRBXJtEm/Mpl3
BqesSExIzqiGFrLaG1lVByUin4aZwCq5IfH8atM/OoWNyJ5l26TLTEauMEBvn3NkuIS+FcaK
O4Ir8NEkyRxPCLZdPIcrGH3HFxork9tsJWWHXlREucR3EpWEE8OSjSm1NNsGeB1ZvHMS9YlY
7nHuzNvvd4nSYCI0tTpTJUjilptpLSVcTU+9Szx0eRc4I1vxjlSihaXIjbAL6LlO74Ee2uxS
pLjM50D9FZKRwA2VUU0oy5Q1bszrnjp/JGbjKQ5HuLdvftiqpZ4PNlxC0KCuo47gpprUKRzg
4r8i5480qO7kVwcZISC0qQ4pPt3FQT6U21lo1OR5ByTyZmUxnGm71MuD0tYU3GffoirI7ncU
s04hATyr6U1OCkn3vH+cORLJdIN8N0u056Y6mUxOL0aOxDQjlKTMrzGy+KxStdqayENEFesZ
8lW6XKyCa8849bmWJn70iWXVBl5fYadZdJ7iqK9hA3T67a0m4gu0ZO+FWTN2Mwbs0e5SbBcb
lE/cX34i+b7kdTSpLaiAtHcK+vHlyG59NDJPfwIxNzzFPiybpYLhLZbub5Q+8qa3HVLlpACw
33lo7rg6VTvrTiTKb5FWmP5iRjDzNqdntWOT9wVQmn0IceCSUyu00pQkKAKVcy2OoPz1mBbg
h2vI16RBRATEtbjCGwwhardEU7xICQe4Ucir/m66ZYOJLHnF185SWzLv7FwhQnJLJaisoDaU
PJoGEhDRU8AVirYWacum+hM3yRWc3Py4y1b3MuZlsRmnw/DLzEdLbktvdKnAynit1IPR3elf
nqTiTNtnXNMqzG35PCay5m232ZChtOCNLgoDLaZbYUGloQlhRW0BQGpSPy6OCnJGqyHJskiy
7ZjuORorEtDbd2iWGCtPfSh3usqfop0jgtNUkU1ZFsRi1w8gWw3Gx2yyruKEOJeuNnl2771D
LyAUJeUy4hXbc4njy+GmMkrRk6Q8s8mvW95EWJIkwZS5sVwswCodyepC5TSS23RLlWE0SB7a
dNWQ7JD1/wAr+Tp8Z+8v29p91tLzCMlNs5OQ23gUOsMygng0mi1Ch+mp1LwKa/Urb+c5Z/pW
Hi66s2y2OtyW3CyUyEoaWpxhLzhH+S0twqbChxqR8tUB258D7JfIF3un27mQ49FabnSG5lwk
oivwn7j2/cf1ypSUcwaqLKR/LSrOMF1yLj+RMXtUtqfi2JJtlzBWypb9wkzmXY7yC28wphwJ
/wA1tXHklXIemg3+MQcY3kSJBuT32eJ26HbJMJdtuFjbRIQJLLq0ukvOqWZHJKm0lFFe0Cnr
o0T6kkx52vcWSyuJa4MeJDXbzBgJ7vbYZtXc7DAcUorUFKkKK1L31rtgzvI2Pmm5m1Q7cq3x
A3Chvw23eauSkSIioZWd6ApSrkPntrLZmCSf8+olzbbdZ+PQ5F7s4UnHpaJbzSI4dbSg91nc
SPcnluR1ppjg0vJANeUXGrqZyYLTkkY2cYKA4dkdosmT8eVCSUdPnoTgnENeSt3G+RZFktlp
atsWM9a+537gyFfcyw6ap+5JNPZ0RTWokGyMQhxyrIT3HCeCGkpKzU/TxSN6k+g0aBSWrMMj
n3LI4UqXDlQPsI0KOxb5XNKkIioSklpK0t8EuqQVABPXrXTH4mpR3RliZHlP/V5t0iQ2q4ru
K7ZRLsilalBqgo5JNKkoprV7SkjCUaONhzV225Lcb7cVSJUifGnxnnOQU+pcxpTQLi3Ar2p5
e71+FNCq5kXaNnDFcnTj9uv4YLjNynQm4kCcwpKVx1F1KnVdxQK0BbVU1QQrf4aE/wAikKHk
keNg9wxxXMyJ9xizkbI7XbjNrbVWvv5VcHGm2tVe2CeDpJy6OnCIWOwluNyFy5Eq+EBsfdgF
P2nNwEuudn3USv2pPSuiuZNsPJsqbuNvxtuE88yuzWkW2Q5s0UOdxxawhSVFSkFLg3VQ/Eay
sKAnI9v2ZW+ZktoXayWsasqootsZtpDSGUILbktSWkqNVLdSpSiVVX8taUdSVvy+CJy+8R71
lt3uzBUWbhOkSWitHAqQ64paSUVVxPE7iuizBI0bxx5ct+K47Y7T22ne3eXJl2U/HLxjR1oQ
hEiIa/5/HuJ/A6FyblD7LMuwnIcXajs3a2xpLLExvtzbS/IuAU9KdfQI8tB4s8krT/6TvrSt
k06syOySmm7vbpEg8Go8tl2Qs+4pQhxKlEU60Hw1hmVg3K0+XMOk5LcpbrcK1w3Mmiz486LH
dadlw2u+VSJe6ypSVLT+UH3HbWp/waqRDGV2CxZFOu65tlkNO2S4xY1vtIuCWHJLvEttvpfP
cR3vpqhQ6a2mnzyZ7YZL23O8WuDVybiXK3QmJVotjNss91kS4rUVbMpx6ZGdmMkvuKS4e42r
nuFAHpo7YRpVmTD8tKjkk/uzGJ5LxUJUR92UwUqFQlp579RaUA8fdvtquYeWa14pySDa8bt8
i65JDXao0achVvXJdZk29TiXfYm2p/SuffUtCuTiVBPT8usVWTbyo5MNTUAJX6U5JrUAkemm
8S4OaQhSUkHjQim9d66wIkJBolQNN+XHqPTSAog8QSamvu/AdNAhUUogCh+Y6fOukmJQvqVA
UP1K66COhPFNdyg0qetdRBlXFJAHIq+PwPwpqE58CEhKzsNgT0+P8NRCjRIKSK0A3Hz/AOOo
mGmvFRUaGg6dP7dQBAK9pCa70H/hqIC+JoQNj1P4aIIVUUIB6Vp/HShkSndZA+kA0VXpQdNR
mRQqUhSamo9PjoZoIKKhxQDUdfn6+upDACgnqOu3Xb/w0SAoK2CeWydjvWn46SBsapTua/UO
pp10CJoVEcag+o2ppgGGOQcNd1EAj/l1QQpJWTxURtWlBoFIQneu5G/qCevroNIWk/VQ+/Yn
0/lpkvgJSTwrxSSOifnqkID5HjUAhPX02PrvoEFApZUpVU9AenXUAYBoQCQK1Jr1Hy0M0Fzo
Q2oDf4V/3+upIUwwE9EqKBXUxkNdOXFfRVBWu2pAxK+KSpY6Cn/lpMnRJHQH3/hWg+esM2pD
cUAPp/8AqHQfw+Oo00JSQo7dE7LH46Q+oilKk0A/L6DSmZbgWVUSK+1W/TYDbQxkA4H6huQP
cNq09NEmg0oBKqDkVbVVUbDVJdQBBJoenx/Nt8AdRmqyGg0qCakCoA/46Gzqqhf+5Sh5fVyo
K6DPwKwVMZV8jCUyHoq5LKH2SopC0LcCS2VDccq9Rr2etS4OEZNhv+H4pb/OrWMto7dlVOis
CISXf85KTwVyUkqQpR391QOms0UplGJJmVgHj5qzZ09dibam035MKNOgtKeWy0afoNNqXQJK
juVq20RoBi7/AE9zHLxPtNsvTUqfbJDAmtKaU0lm3SWw6iZzJotSdwW077aFgVgisYwfE7mz
m4+5duAx+J91aboiscL4KUmq2Fgniunrp/2jaz5K1EsNodxaTdl3kM3NlwNtWMRXlqdQae8y
EjtI2JND8NDwpDREw4q5kxmOwmr0hxDTRJp71qCQK+gqdSKTWJ/9NmQxS2yi7wjJdfZiyEus
uMICnyACytW74QT7qJB9dWCbI1nwtKm3Vu02rJbVcJn3a4Utkd5ssPNNqc+laeTqaJI5IFAd
RlJnGN4fuct9H7Zf7VcIAbkPTZrTjiExUxD+spxtSe8QPyqSnfVAnRHhi9yGTcEXi1rsv2Zu
CL04663GUwF8FqILfcqlXxTpC0xDIp7xvcG78zZZN1tTSJEcTYtzelJTCcaIqg9wjkhSvRK0
g+urJRBV3m+08psqCwgqQVoPJKqbe1Q+r5HUMltwvHZ93st/kxL1Ity7LFE9URruUfRulQKk
LQEq2FNjpLg72Oz3l7AL5fmbpLYZtjjTX2Md49kiSritT6O4kpSQdiEnkdCyCwT2YeMc7tRc
jwrm9eLOwyzNU0ZYS4kKSkF0xC4TwbUQO4RtqFbKvccW8gQRfVzWJKG7Ytn9/Ut72qW4QWOR
KqP7kEFPKnXUDZLZNcc/sSMbuMrJXLqmcwm6WwOKW82yoDhRTUgKSVJCqVodSRY5Hlg8h+YL
7Pkw7GpM+YY7gW3HhxW3WmV8UqWHEJaKTypxNeulRBcnFELzg1lRcT+5nIzG5CQXUrBig8f8
7kWOHLbr9Xz1kPkVjqfLKs/dgGfLtWTzWlKmy5KS84GG08wVJSlwlGwCe2KfDSMlYTmuWxbl
dJLdzdRMuqVM3WQEoBkJ3TRQ40G1egGqDOyaxm7eXWMbebx1u4OWJnmEvsxu8lmo/U7LqkrW
38+2daeNm3gr6XMv/wBLuobM3/Sn3H6igF/Yfc/M04c6/wBuhGGWdy5eUmMDtUdcVLeNTJKG
LOExmRJckA8m1tLADwUop9rnU/HQnk1XgnbhO87Rn4rvYjOSmprRfTbUw3HVTeNGk3BLB5FV
K/5nrrSSyXJXs9i58+zF/erFb46HZSm2pdnZjEvSlg8mHHYhcqs7ngreu+hIy5nBGZJds4h5
kxPvjKoeUQURVMJ7aOaQ0kfbEoTySpVP5nRBpMPGchyx7PUXqPbWb3k8l1bzcaW0APuvq7iE
BTXFxHH2/DUwT8EvjGXZK35FmXK247bFZHI+4C4DiCy0w4hKjKUgqdSELWAvuVVvp2StgKFn
2Q2ew26ccctn7KLhJl2BxxC6RJiaKcLFHOQS1XYL2PrWmlrJYUSRTHle+shppTMR6CPuDdLe
pAEe5Oy1KKnpyR/mOAqHAgjjQU0RkG1oKLksqN42dsisabXapMtPLIlGQOU5sHgoqB7XdQ3U
ceny0uJHWBpLmeMFRCI1ku7U0hPF1dwbW3UEczw7VeleOhWZckjacxwfHrnHvmL2q5sXiKog
Jucpl6KtpxJbeQtLbaHKqbUQlSVbHfVJErB8qOWS3Wpy04wbXjLf7jHYW3KeLr4lBv7ktzVD
mh9pSUkOAfw1Iu3BGs+WOGWIuy4D0y0OQV2uba5sx2W7JiuVUsLkuioVzooFKRSml5QTwFi3
kRY8kystudplXq5yy4qJAgu8C2VtKaUnjxWVIQxsAKdOWpqcFCSkYt5jhT1ii2K72SdJtdqm
SJljTGmoadSmTQrZlOKbPcpxHvbodK2zN3MfBL2zzGiHikS2hq4xp1rYfjwBb5LLUdaXlKUh
TynGlyUqb50q2scgPmdFVkbWkrUK3eNU2xmQvILki5obS4uCm2oLPfAr2w73foKhTlTpvrLl
j1/Qvl88v4lbMlvt6xyLLnzb3Nt78pyU6lMUt21xD6Cxt30lxaeBSvZPpqVZGeDP8uuOETZS
nLM7eQLlNXNugllhKGUuqJWmO00eLjieZ4rXT4eutz+plYJTN7l4/wAsyi3SId5m2eA1b2Yb
8u5Q+6Eqgthtni1GUpai6mvI9EnWajPYb/u2O4ti+Q2W031d6lZGxEQiXDjvwxHMeT3HWni6
QtXdbNBw29DrUcl2cdSZwvyzFasdwtuSTnRcH5sea1dnYSbklTUZgMNx1M9xhSFJA9rlTttr
NWNmSEfzZZhaJ7Lz89E6YLyrutpDVXp8qM7GWoNKSlCu2wvkR9Naep1pRIqEskhkXl/CbpfV
5BFuc+NDYgTrajFRHUGpqpSHUofWpKwwjd4cg4kq2/DROIMRl/JVX/I2OjGGLglC1ZdcI1us
uSILY4KgW1zk46lShwUqW020gp9OP8zs4+hppckr5P8AIlkym1v2u13WCpi83GNIjsOx5jCo
TaCo1lvyXHWGwnnRXYRQ7jppThGbcELh9mt2EX1u9XXIrLJjPMyYTD9uk/fPRZUhopYlllKA
sNtLoStO6dEE3h/I7suUx494mtzs7j3G+P2d2LastWw+GYMlUlLnEvuN99XcaSoc+HtrxGx1
PeQS3Bb7X5MwGNdUokXWO846uzxsguCo9Wp/2saR968eSKrbW4poFRSCo021aLX0YX/c3DGr
FFMe5xE3eRAkS7koxmy6u6otqBEWsqbI7qZI9tNuQ1pxPwdHVJHW7ZR4+VdmXLPdrHHw1tEp
eW2hbTaX7hIeYJSthosqW5VRAqhaaKr8NXZ7nJhKG1wVN7LMUlxX7DIdt6bAzhbSmWwwyhSr
+2wgpKngjumQl3Ye6nx1JxZFZNylsoWR3CE9hWKMMzYUl+MmUJUSNFLMqMpTgoJUkkiRzG6K
AcdZ1K+TUZIOwZHdceuKLvZ3ft7nHS4I8goCykuIKCpIUCOXFXtPUHcb6GReM4yVUrJcZs6p
RuMTHzFX+7yZImSZC5a2ZMjuySSAltfsQ30QK13rpbwZr/L6EpGyi4OeX79b7ddF2e25Fe3l
XC4wXmG5IZaWtfFmao8EJXTqFcVbddVsM0tDTHsvYvee5TkUqIxFTcbLdSIzZQG0LMUNJPJw
p5rVx5K4+5SiaDTP5KCxl+QeP/INyt2MXMvFt2z4/CSluyIQwhm4u3BxTBXcFKq48GuXIcBy
oANhvqVZYOyIaG9Da8QSUuR0vSjkkUh9XAktIguEoVU9yildaDj/AB1Ss/Q1ZpJEzl2aSbl4
8iruzLLr96luLtLLUdiPGtTFuc7SmovbHdX3+fFQWqgT8Tvq9baT/Qw2htkd9g2GVhlxh2mM
7JjYzGU41LZQWlyXVPgSiipDi0fUlS+tBUay/wCK+pp/y+w+y2RbnvJOM2efETKuNukRI9+n
iK1E/cFyXmnwPtGAlAS0252k1HJY66bJ9UVVWSlZmmK3l98RGZDMZFymIjx0jgG20yFhKAn0
4janp01MyjV/GWLeOJ+O4S7kbZTdrjeZjcZDbfcE9LakN/aSFD6UJUtK6q9AoeupOEzbWheZ
Q7TaMLiJtliZclS4DrqlpsYmN1VKeRzVcwoFhSG0e32njQHXWEnJz7SjIsRgtTsrskF5sSGJ
U+Kw4yejrbjqUqST8CnY64WOlcuDcLF428aO5Hc2oqU3VlnKTAXFciusIiMIYlOripJWrvIB
aG4oTxHx11ay8cFBBWO34Ndskubkm22sWi0WKVclSm7VcIMcPhSEhcmKt5bzyWgajtqHXViE
o5MuuWyTk4b44tqsjlz4EFlq3RbMBKfZnuWpUi4dxSnoTMdxUjtOthASSsgLCtXWYg1FYMLv
syNIvEtyHFZhxVOqDUeN3eylKfant94rd4qpy95rvqaW0ZTk0vE/EsSb4+uuRvlc+aq0y7hb
kRH2UMxFMLCQiSCsOqdV7lcOPAJ6kqNNZ9aTsp0dNGZ2K2Ku16ttqS6GTcZTMVDlOgfcDfLj
t05VprLwZWWazC8O4pkdwfiWlcy0s2W+O2C5rcUJS5gisPSFym0EIDLqvt1Dt7pHIfDfq6pY
5iZJ1nPBFRMBwu42mPmMVmdExwW25XOXZFyA8+v9sfajhtqZxTxS+ZCVKUUEooQK7aumH8OC
6w8nW5eNMPs1tcyu4qnP43Ij2yTAtEdxpuWk3cOFKHZS0lCgwGVEKSj37VpvrPWVK+f2BVhw
yu5B4+gWPIMqtD15S27jiecNTkdxSppWlK0tkN1S0sBYBKvbXQ1rxYGmlJKW7xhZnbZb4r02
SnJ73ZpGRQFpQ2YLUZhK1CO+knurW4llfvTQJNNlb6KpPepg11cYEZD4wtNus18RGnSXMhxa
NDm31C0oEJ5ubwIaigfqpWz3kAleyt+m2t19cuPM/sV64laKrcMdt9um2yP+9RJcee01Ily4
yXFJhB36mn0UClLQOoTrFVKk0lDhl4PiDHm/IN1xV28vvRLdCYmRENpjR5k1b7KHe0yJK0MI
KQ5yPNXQfHS6xVPzJJFU8jYaxh2TftLb7j7CosaY0Xw2l5AlN8w26WipsqQdiUK4n01Ov4pm
MomMa8UovNutC3rsmJdsoEk4zA7JdZc+yqHTLeCgWUqUni3xCvirbWKVnP8AtRrqUIIruBSg
3Hr8wDrVq9XAIsd/w79pxvF7sJC3XMiZkyHAO32G+w+GkIQoe/lv7+XQ9NSX4z8wMZgdx/GN
1N9n2y4TIcGNZ4guN6ubTv3cdiIaUW32d31+8J7ad6nWerxHIQ+SJy7F5eN3UQX32pbEiOzO
t0xkKSl+LJHJl3gsBbZUkboVunW3TCtwwjL+CFHPilYFK9T09elNYYgCRQkDYA19P4aDUiae
0FJp60rvT8NBkFE8SRTj0Ip8PXTIoWkISa7ggVoBU9NtEjAjb202UNlHcba0AYA5kbAHcp6k
00MBXLooUFAan4ayakIrVRJCuO2wr6nUMix15KPuFOQG2pkhJoUpIFPw61GpImxQqQU1SB1p
6E/Aah2I3VUcgK7VofT4fPQAvkBxBoCQKben4akhbCQ2lVak77AdenrQ6SC5KUCfpLYoK/SR
/dqNV0LSlJP/ACgA1Px69ToGfAaN/cepBoo+4V9f7NTMgQkhJ6Djt89ZZpClJPr7ePomu5Hw
/HVJqDkEI5cySTTYkH1+OkwxaUj2q6r9OX9w0Mkg1hRc3qEj6fnXUKQaiACsGnGlB13+O+ss
6JRkIU4gkinw/wBvTQgvH1CKiSabGoAUfT1/t1pGEBRokknkaj3D+8ajpMA7a+51HOnKny+H
/hqlGYc/I88cT7tDyRh+1TG4Nw7iEsTHuHbbU4rgFL5pcHEct/ademsnF6NhuFw8useU4Vln
3Rp7LWuMaPMS0y+22h5IWV+1kE0T7iePIU21mqkE1ol2V+cnlZJKtt0auUm3zG4FzhNQW1ql
ucNnRHWyElIT1K6HVwWIKddss8yJKV3NdxaLN1RI5vMls/uaUpS23ySkclhCUhLXSnpqTFw8
Il8cyHzAoZTKhxYX3bfOVk7NxjRkSihQqsKadSlSkJp9AHH+ejMDjkqMe95wcWnwobkwYtIe
Ls5phk/Zh1RBIW4lHsqQn28h6aTBBxZD0d9t4KKXGSHG1jZSSk1TQ+lDvqQpmhzfJudl+Ld3
rJFiz3H2pAvAtK23pTjX+XV1QoutOjdOWpNSP48EHacqy+y5aMqbiqRd3nH3krfiuJbWuQFJ
c4NkJ5D3GgSdtMwoKuMCMYy2/YtenpzUVDjlwYcjzYcllYS+w8fc2E+xdCR1Qa6FkUv0Jm4+
WMgXbX7Gza4dutTsI25u3tNOjsNKc7qijuKUvkpX+LpqTwZv5KrYMjdx65mdGiw5LoQpstzm
UyGQFDdXbVtyFNj10rKMSMn3/uFuPCgS4oucUAJTVZ5GiRQBPwA1EmXrx/k37PZb403i7l7a
ns/b3ac0/IaDcZVTwV2krS3Q1IXUE6rPBucQdsYyWLEwDIbazij0u3Sy2u83JEx1KGeKyYfL
2HjQ+3/n9dUvBkl8h8ouR5c9U7Ejbcju1sTbJ7kiQ4lLkBSU0UI5RULKE+1VdKJohMj8ptZB
ZV2WbaENWuOWP9OIafWDb0tJDa0ciKyStA/9zoTpg02dsiyrDr/Cx+3yLbdbRHszAiNyCtt9
xUanLkltaGQtZVTflSmsthKZJ4NkHivF5lzlNz7xMFwt8iEpl6K01yDlD7HWXFlJVSgURQap
bQJCkeTcU/Y/9Km13BrGP2024O91hc8c3+/z4lKWD/hp11ZJkbhWW4fYs6YvjEW8PxYLZbt0
MuNypLi1IU2ou/SEoCVVShvpqeiVkQTS8ANxvDk5F2diLSpVo4GO28l5RJIkDdHHkfy6kyjB
PRcyxiXjGOQJ8u82ybjbT7LYtPbCZPdVzB7y1pLRB2PtOk0Vw36AnD5Fl/8AtH792QJCT92o
W4DlUcog2U5QfV6nf5ap8g0iwryXDnfGkDHXLtPdukOd+5LbEYpCgtIbVGafLigjimpQ5Sn/
AC6GY1yTUnOPH7GPu2JFyuc6BLnRnUOxoyIE+HGZqVF6WFf9W9vstW501k1EjHIspxaRittx
q23wlxq6CabzGt67eiNHDRbHdZbUFvv+pWnf56lsN4OOdZPjEvyDacktl8fcjtIityn48VTc
qP8AapCO639wC24pfVINaeupGuyTI/HbzZHvK6MgnXl1FtjzkTzOnskyJHaUmiVIjpISs/gE
6noyTOHXPFbf5Xut6lX+IqxvrmrDhZfIfE7mUthstn/KUoFdRT4V0t6CtXDklcMyezWjG7Ja
P9VwozVouz798YUy64ifb3PyMrU0s+//AAEAq/hotts1GMja35TgKVxUh6K1dR+4Kx+5uwyI
toYeUosRprIV/wBQspJCTxPCo66rOTPWPqV2Gi3/APZuZbDe7c3cH7kxcWLYuSoPhplstqQU
FFA8pZqE13G9RoqpYWUpIiJHivNmWFSFR4pbFFFSJ8JfUgCgS7Wnu9NMo1DJ7DPHszHMohXT
NoMdrG2VKQ887IiyGEvLQoRlOIadcUU98p348fjtqmQhou0C548/Fx6FlMuwzb1Fbu7qGLeq
MIiZjqWjGLiuP2iXVAKHJSeFfnq8xoUQF2jYPer/ADcdfi22zTrta0JZuzD8d8M3Jhfdb7jj
CUxY/eaqhzgDUU9dSZdOxzwhjHZnmN92yNxGMatkVyCuQ4tuKXOMZTKZSCpTZ7jz+xUj8u/Q
6m9EksjHHMXVGxCKxbbFY7zkLM+THyhN1eYV9sy3x+37bpebShKkknm1y+PUap8haYRLWHHf
Ha8Giy37I1dW3Isty9XBt+K32H0OOBKPvH5DTzXBISUcWlBQp1rqUg6rZlcPCcyWmNKcx+4q
hOltZfEN/tllRH6nII48eJrWulrDNKyk2LIPGeCtXe6QrtY2MZs8W5WyPYrs1MUlc1qQ6lEx
ClOOOA8GyVU4gt+uqrYYkz3yNicyOUR2sH/YnHLi5Dtj7L7izcGxUNo7Di3FKUsBKu6ig3p6
600kZ8eRfmzGL2xm0bt2qW209a4P2zKUOPULMdKXWkEBS1Bk0Sonp66yraF1/JkdiGN2ZjGM
nvuTWaRLfsggfZQX3H4AUZUhTSuZCQpSKAVpoeWWi14dgXjG/wBrueQNwLi9AM9iDEtSjJku
x0rjh15R/b0OPLHc5JacdATxG/u1VcuDaUneD4r8ei2yZL8GfKU0q5utrXJVGdLMCexDbYcb
ShQSqkmqldap+elHKAZD4UxS03R3GWYtzkv/ALZNubeUrcpFYXF7xQw40lssn/ICVEuA76u2
TUJT8fuV2R4osAtQvglSW7Dd41pRjL/JKnFT7ksNvNvcgEKEdTbpWkGqRx1didXGdheUvG+E
Y5ap7lnnynbla5qIT7LqluocBKkrccJYYQwUlOyQtQV6aE5JopnjzF7dk99kQ50l1uDBgS7n
KMXiqQtENouFprnVIUs7AkEaWtC9OSbtuIYPcU3W+Nm9xMctFuTOkR5CI4nuLVITHSlh0gML
R7+RVxqPp0LYQ0WyB4HxqVI912nIYuKbauzntMlxCblFdlBMvfiotpjqTVrrUfPQ4C0oSPB2
I/t/7k/fLkI7sZ65xEtxmC4IMeM1IWh2qwPuD3+KePs21qNEnKk4XXwZZbTfYuLzMikKyW6N
S5Fn7cZP2RZihak/cqUsOhxYaP0JIGpQvoMccohpniSyptkv7O+vPX+FYGskkwVxOMYRVtoc
WyH+ZUp79T2+3j8dSWSfMcFPvlit8GyWSdGmPPS7k06ubDcjOMoYLa+KO0+v2SAobqKOmhJE
k04GmPnHRdGXMhekotbSVqfTDCe8tSUkoaSV7JClUBV6DfUMlwyfDMcj5HjFvhB+2SbwtkXq
zPSG5TlvEh9CGiqQhKEFbzDnc4kVR66HhYNJNiomGYcnM71jTrVzu0qNc3bbYLTb1sNLkobW
4kvPSHUqbTwS2KpoK9agadMxGDjHwPH5mU5fbYN4XOtFgts65224MhFZK4aEKSkn6ePNZSVJ
60qOuqMpeSf8ZOuK4XhF5tLqnLjPj3GJCcm3a5BthFqtygF9hh8u/quKeUgISWzupVANjqqi
tWMkLGxNt3x+1lTkstOvXpNmMZxI7SGzG+4U+pY95KD+UDp89Xn4HwSmS4Ri0TFXsgsd3myY
zMtFvZcuEduMzcVEqC3bcEqU4pDPGrncSKAj11pA350crlg1qtV6tES8X37KHNsca9vy3GVO
KR9whS0xGWwf1F1SAipAPrrKWE/JtvZ3uPjrsZfjFojXN9RyksFlyYx2JsXvu9pBlRwtRQT/
AJjfu3T8NTjqVUyo3yCu33ufBW8XlwpUiOp4AguKZdU0V71pyKa6bKDEQaBh/h7NMltmO3G0
SSIN0lSWHVguBNuUxuX1kHo7x2KaHlRPUjWVlP4NZlDy7Ydk1kwkSZWXXJqBLhGQmCxEnrg9
txa2xHclNq+2bU6pG6Vim+/XXR1cg2zOMdtky6X+12yE79nMmSWY0Z8qKQ044sJS5VPuHFW9
U65vGRhs0TH/ABr5NTdlhu6PWiQLy5DRKfElouS48d55U1Ht5OjttrAUKqPKnrrq67+gLCR1
fxvNMjukuNIzpVwahWp6Tc5Etu4tlqEpxAUwtlxoPLS4riqiUkHjrOUoBJtne0Yz5SsJmPRc
zYtNvixbc01OLkgsvxJQdXCbba7LjjaU0c9q0JKa/PSqSlBpX6rPky3InZ8m+3CROuKLrMW6
pUi6NElt9XTmklLZII23SNVsYCuV4LKzhGQWfHDeFX+1Wxm/Wsurt70lTUyTBdXUN9vtlJ7i
2fanlWo0UY2TWCnRY8qXKaZhtuOSnloSwy0CXVOKNEJRTfkTsKeuubwPXwaRkNv83PXCxty5
L8+4MrLFqctz7bpYmtbutSXmKJTLbSP1VOkq4jdVAdacxIJuTpLHmNzNo8hbjT08RXHGpDKo
37H9h0knmkCH9sVbPA9V9fdTU+0I1XILWnzacsvHaZLtzo2Lo1OTHVb+HSGltL1Ip9DEDf8A
9O1dNpwUsqSL1nkf/UbfdmldxSWMpcebWpxY5EcZalJKmzyqKkj4dNWVb5AnYs7yuz49L7cV
ZxpCVx03NTTZmtQnPa6yw4f+oTBUo+8pHCu3IV0Vb4GYBlN08qHEISL5BMaz3NDSHroGUolX
BLCaxW57yT3FBpH+UlwJqN/dTWqTDa4JvOSEuOS5HeLtaHbvGTcpFtYjwbfAfi8Eux2lfpMl
tsIW6F1oT1V8dc+z6xwU5LddpubZX5IdkTMDhTr5BYDN0shYWlk0bSlpyV+sAlTSCkI942oN
9adn1SZfQr9zyvIbdlN1dyazRZl0fiKti4E+Nxbho4BLJjNJolssoA7Z32/np7PBNzWB3jOZ
57bsUL9vtP30PH+43b8lcYU4q0mXs+lp0Ub/AFQr84PAmopoTbf7wLfLKCSoKFCfl6/x1Nzl
mHsvma5SbhieK2//AEu1aWIDK3LROEh5/wC4jF0/cDtrNKLkJJJO46DbUn+LXBqVIVu8gS5W
UT5bOPw34GRxk2+443a2lssvsAAFMYM8nG3SpAXzTX3emsK8RHAtqCAy3Jpd/vX3UphERqMw
3b4UJvlxjRYie20wSuq1KQPqUrcnXW931VYiDK8+SEJTzTv09T89cmMhcDzAB2rsNBBdCoVr
Qe6u/wCGkAgo8hvxUKA+o6+miAkMqoSOJ3UQgmu4+Oo1IaFinQ8PXau3od9TENtCkCnWmxNO
JA0EEEkKU4oCiTRO+38dRC6e5JSKgbqB61/hoNJCV+3dNSdqA79dJWwGrik1O6D0I21IGg+A
2Qn6jQioqa6BkMp4+5SQFn4HetNRSEkrJBSPamlD8fl8tUgkBSSOg91Skb1A0DAYHEFB69AR
vt8NJtIVwHPkdkAEH8fnobNKs7EI5BVCQBSoPTr6n+GqTmzoFCtOiiDT1roaJMShPNJJqUj1
/u+WsydFAAFmgGyiQCepp8dtaM2SAhSDVPxqQknf+eoymKWtZ/LySPQfTvok2Gmv0CnIgVSf
h/bpF2CFFbK2A2A/A9DrIzIS0pKeP5gdgn4D4ah6oUkE0oOQPx9NROzE8duVT+O/H+eqTM4O
uAu2lrImJN2+4Tbmlpcd+17Ze4oVyHEOezqN6+mvVWz4ORsN58hYLfPKUPNA3d2IrS25EiMl
MZS1vR+PaS2nl/lqCfcVHl8NNZRzrV20S0vydgUxvLILwv8AFh5LMauDslhtlDzCwN2inmKo
5JGyjuNZmINurWR45/UFYkXiVdo1smSTLlxawpi0LjMRYrYR90wlNeM1RHXoBTVjkEpIbGM5
8e26bmD8+63T/wDWRl2FEcfZRKkdpZ5d55YcSFOVNOPw1ThIXoqkLKLPFwe44+HbwXZDxVGb
bkpZt6kEpIL8YVKlnjVW53p8NDyU/BXra+iNNiSVJ5tMutuqaV+ZLawopqfjTWk4YQehrh51
xZ25MTWbkVQH5cR563C3ul9pDBCl833XlN1TT29lvfWEjJV7H5qkf9xVy7xc5D+I/fvzWG3m
+682ntrRHDaN1NpHIVSn+OtYgUzhh3mGSu/SE5Xc1rhBuamz3N1oPPRJEo0bdK0hTqW0p9E9
NUhJYv8Autj9vsDkdvIDcsqj2d2K3kSI6wZEpx4La4LeRyqhNd1ig1MWUWy+RJ0vMY97vdyR
an40RUddziQGpLy9qAuNKCkrWv1cOpRBc4KPcH0OzpMkOF0OOLWHSgIKqqJCige1JPqkdNCZ
pwaR4ictsex5Smde7fbjdreu3xI8uQppanlb81p4kduhpy/s1qxng64c1b0+Ksptc262pqZM
cbVaGHX0Id5Rl/rKNUhfFaU/p1+r5am1golF5yfIMSuabg/PuNinWZ2zts21qjSp4uyAntlz
294pHxCuIHUayRVMyPj9+x302L9nXkKjDRdyRxjKVxSF/sdT7ff/AJhPpXWpCwjyPY7vdLHh
EWC/Gu90iwRBejxpbMh8vn3JSEhdVDgn6q020FOTr4w8V3tm8XE5XjIMVFufcgpuYAiCUgpL
ZWpKjx9an0FdU+AJgY7i7jQWbRY3c8/aHHjYkLZ+xVLD4Sg9kPBkq7PT9TpvqwaeSN8eWqSn
zHHU7abdaxFYU5cmLbKC2IqnWlJC0qDn6bqlEJKEKUBX56msAigf6CyWVeb5GjxEiRae5JuC
Xn2UKQ2SpQIJWUuEpFfadTEulpxC3O4XYLhZ8TZyh24NSF3yY5LUwqI4k+xPdDraGKJ39yTq
+oNZKmm2W3/tvKuH7ZEM1EwtN3ZU8/dhHIDgmBTipP8Az/D3akS0WK8Y8654OtV0bx5NvkNX
BS5U1ttRdci9ogSHXF1UEOLP/pG1BqcSDO+a2sWfBsGyKLjLNtlIMpVxbcjqebJ5JSyZXdFX
O4Pcnn1rttqfwM5yS0xpl5jD4j2KWu8ZRdGnr29Djx24bQhBCgymjXBL1B+optRqVAJ1KIk1
iSPzXGbHLGDPRbOsO3yQ/HnsRoibPKkpbXRKEw3FFthXwcKt/XUkYhMhrbgNiLuZyp8G5/a4
wthuPYGHm1T1KlOdv3utpdCi2PcOIIPqdtLszKLf5A8fYdGku3p+3XD7VCrbbWLTbg1HcbW+
wVqkv/puVUPpXsNxoTwbMgzywxcZzG6WFEsus294NNOvFKXFpKAscwKD83XWk/INtl3snizF
7jDxhhcy6JvGVQX5cWS02yq3sLjoW4ULJHNfLhsEmtN66zJbIbIsPxGyRGYbsu7PZI7bGLpy
ZjNOwCJCeYbUlJ76EpGxdUaD11KSaQwv1jscTxtil4iMJ+9nS57M6SpADiyyRRPIE8m0/l9o
OlWmRaaYy8eYmxlmVRbIZQt7b6H3DLQ2HShLDSnTRFU1rxp10MJLBYPHOJ5FPS3YsjcVb2Ys
qZcxOYbYkMIilKUqNF9gJfK6pK1jjvy1PGCRKQ/BtunzZEaLlDDsNLcPsLZabkqS9OeLCGX+
y6ptPFxNSpKz7daSCGMbv4bs1pgou92yYNWWU6xFt0hMFbrypLqnEcVsBxPBKVMq93I126aH
gzWnXPkZSfCF/N0hW5mSzMecuz1lnKZQVIiONth9LrtSk8FxyXeIGwGlWwzcDqJ4KuNwx5u8
JvcPg+h96CpxBTHcajrUhC1ylKSlnvcKpqkjcVOs8lZFQXLubWMsXFrJnA488Y5sCX5KXm2k
9HSQrtds9OI+I1qAbZFNPpu9+aXfLmtpqbIT+4XR5Kn3G0qIC3lJB5OKAHT11E9lkzLGcdt2
OW7JLLcbioSnlsQ4t4Sw2/IjIQVCdFDCipLPMcPeBudtRJuSRdsrOJ5oqLMy65Wppm1szk3G
Ila5rzkttDv2TCEqAqSr8ygk0330LRrydcsxW83TP8dxafklxmC8Jj0VdCVTYAlKJLMmP3XE
Jconnx5eorTVMIyqzMkXhWOQZEyTETk8+23d2cq1w7dZ2HXpboQVAyXkpcZSmOmlD7jTf01Q
H+2pZABQ5y+ngr6fq9dCWTRdLxgmSWzxtLkXfNHzZ4kGPJYsbSJCozr0wgxozLy1JZfT17vb
B7dK0pqWWVmZLY7U7dLxEtyX2YgluoYEmU52o7fM0CnHN+KRpehNPa8S5BZssjWy2ZOxFnvs
Sy5LdYlwuzGZYK3lkuJPcYcbqnm2oj46MwCtGBnD8M3qZcrQq2X2JLtt1jyJNsvEdErksQlh
p1tDAR9xzSVjjXYjeumXyDYU3xDlFlnzHLlfYdnh29qM4q7uqkpChP5BhoNtoU+lZ7auSVJ2
p89Kzou7g6Yv4Wza6tTH7LeYiYn3ZgNTo78l1mW+UhZKVxm1+z3iqnaAEmvQ6w7A7ONGa3O2
zLZc5VulBKJMB5yNISghYS4yooWkEbEcgdx11uBVpyX9jxj5CXgpvcW5NrtDcVdxXbGpT/JD
BADiijgI3IJ+pAWVfLWZyZvdoYZvgM3Gcasbv+o4tzg3lszkW2I+4pDbgJR3m0KCULTtxLtA
a+2nrpTwa5grmJWW/wB4v0SDYl8bmSXWXS6GEthpPcW8t0kcEtpTyUrqPTQxNAg+NPJmQ5rZ
Pvb0maq5JLkPKIs8TW0x47gQ8ph5S0KUtlSgOCd6keldTbgkFjHiS7XPP73a7TKnQrdaWXf3
CaVxfv1IU1VTCUNPdlSpG4Hv4cfrI0t/qZSUEfiFi8ps2yc5il0ctFmclrhIS/cGoAkyGzxD
SEqX21PAEJVxNKmgJ1c6FWwdrTj/AJvRh7zNpkzGbG+iQ4bW3LS288hKimSttjl3VpJQqvDr
Q9dKyDusIj8gxLO8f8aW+ZIujK8Vvbv3CbWxLacCHUKSGnC2FHmVVB/T+n89DoUuS7cMqON2
S93u/Q7XY21u3aU6lERLa+0ruAFXLuEjhxoVcvQaNGonRoeTwvLDr/7xHyA5RHft0yMu8218
utiHFCRcGChYbWnjySpZCKuA8gTrSTA4YTG8wP5HjcSzPqiS3LeoY9JmlkNIty1gqW13Urog
uU40Tz6cdtZs8fc1skcQuvm++Zhd4tlmBy4uSAm9THW432KH46iwh5ZW0WkOEpohSEc1fPTb
wwTlGVXSTPkXGVIuL65M195apElxRcWt3keRKjua6bzIlrtDeTNeLr06t2OjGZ02KwxGlJUt
1+4tq5BVvoP03G0V7q60Kfb11UWfsVphQWJ/yL53WXrW4ic1OgpSZK0QUpmt9nt8lreQ33OR
HaDigakcQrUpQNwLv2Wea0yrdbptkMR1+YqfFtrdqYQ3KllCkOrW2lspfUpLiuYV8a6KvDJt
PBVPJN7zy4SYEXK7ebT+3skW23fafYttNPL5LU21QfWobkfCmptpRwSsuCR8Uq8jPJnt4vCj
Tree2ZzFzQwuAZCVD7b/AOUUtfdcv8kA1/EaK4YtvYbefeUrZPjtOJkid99LfdjvRl9yfKWe
1IRLQAPuu3/l9ulEdB6ab2tyFpJGHI8gQvFNwb/0nBOM3JapEu5qjgS68z25fAOJcCGFq4tu
9vgnprdX+clOF4KbfsoiTsasFlj21mO/aEPfcXJIT35anlBQDikpSeLYFE8io/PWU4TUbZcn
LC8muWO3n9yt7KJK3GXo8qM82pxp2O8gofbcSiiuKkdVAinx0J5EvGQeaM2yawT7KLXGEK9c
DI+yiuqLryFoPdQoFfvIZQggbUHx019kOS0RniW+XazXy4vWzH4t3uSYclTyZq3GBHjIbV93
0U2OSkEgpVv8N9ZScopw2Nrbe7vHxDIGmsdD+J3aQ0oLUJHahSkJUY/ZeCgtRQgqoFlQPrrt
3fdvnksEtb8pnQvFrUVONR12lu7NOC9KkOpfVdGG+aFdkL91GPbsnh/HXKsJv6C1o6WXy7KZ
v9wuN0twXDvd4jXye3GW4w6lcQqKUR1/4ar3r1pSu+l2b/SCrYT5gzewZdJt9xttulNS2ULa
fuEpIZQ8hJqhpqO2pxlHbJJUUqqSfcNSv+MGUyN8dXq7toudhYx1WW2y5BqVNs7QfS4HYauT
UgORquICCaKHRVaaynAttKfBKR/O2QMzZ0yTDiOSbw+peTuKSUffww0Y7UJYoft22m6pCke6
vU9a6tZeNCrSiPTOyWd4vXAhWZScctd5Vc3LsFng26632Gox505hCVJ+nkeldaV82hfyKsqH
yW+7eRc0tijfJ+FOW653GRAkZHcpjUlMaabcUmMhtp1KW4wVxBXxUanpTWKuV8rCJ2UwM8gz
dzK4dkZGN3SdjRur8sOuS3J8yVKLdFxIsktqLbbaaK7YQdCeGNbZO0jL25Xl2XmTWJ3VF6YZ
+5RYz3HVJntthtp54JbbcQwlABKQk1PrvpvdNJMFiTJn0zJLkt9bTi1oKn5yktqCWitZ5qcA
H6aedR7umn22m0+Qbk5qizS33uwvsBXaLxQQ3z6hHcpx5U9K11ykgvtJqyyEsr/VNI54mjhB
oQ2TQK329vrqkQpUSRHecaktuR32lFLsd1JQ42obFKkqAUD8QRphkISk05GpHQj026b6yUAP
KnGnNSTUHUQhSFkdeQrVJ1oWg01B61AFfgK/hrIQH9TfFNSQa1HqD8tJB8khS01qNgOW2gQx
yUPn9JJ32GgUJUap6inRPp/CukhfLnuram/L+/UISKhIUeqSSCN66GyqHRSqAigPVSR8fx0G
gIbCxwIIr/v+JOpmXkMA+u6QKBXxOoU2gFJJAJ22qR9R9aahdpBSoUSr3daU67aiQApHIdaU
BHp1+OgW/IAVKVQ+m43oARpMJnWgUKdUqGw6dPidZZ3wINKg0qU0qK/y1GG8ikn3HoD1qRv/
AD1FUArRZ32FCT/46GbiMg4qcI229CehHz1JhkIJBHCmwrQfL56mCmRQb5N8RQge7rWo+eiD
UYkIEgAGhB6eu3z0sJYNqdPbX/6adK6xJ064I+zgG5KNTxFSoj5H5693p/kjyQehv6jo11Xe
LFNcS8q1LtzCGXVKP2/dKarCKnjy40JoNc7MqmPJUSlQFdunpog0al/TdHee8gJ/+zWJ0ftK
M2VIQlZiJoeCmuRASpxYCSdzTWuDLbiCx+N5OW2vyVdMcky02qwwnZl2uVuLcZRdbSapAWpK
6c+SduQ9vw0Lky1gcePkScts3kO4t43HWbi2+u33Btjk4t1YPGK0SSkdtPE0SPqNSdNkoRGP
M4XlLthl30QFIs9vcLEycpaEpQ4khJRxKuRIJANBoSNOxqkbGkzPBdtmRcUbE4XdguVQpLku
ONlPPvqopDLpNFKCggDprdkkzJdottZFmkP5Li7aJdrnwSi1ogMMsgqVxLFsdTxM7kDQ9w0r
rEIpzgZ3LF7jeJMC6W6yRpbAuLrLWOXi2M2laVcCpL6nGqqfYYSdwa8yNSBEVKagPXmIzHwp
N0uVtjSF3S9rtaoELiFAd6PblFP3nZr7U9VV1QgLDJwPAjFGWS7Uygx7WxIQj9vc7a3XllPf
ds7akqChQURX416aRkiMfxv9ryTMLLPbt0uJLsTl6htMw0MdrmOCKMr5rYKaH2BXXfrqDg88
78UitKbKP0ncb6TT0bPjHhzD7lasZamv3MXfKIr77MpjtiHHcjoK6LSU8lV9BX46LIGiIyrx
7hGMsx7TPkXmXksq3/fsSojLbkIKNeLSmf8ANSnb3Lrt11FI9xnEsZl4Thdwcioeen5IiJdV
ONp5uoUSCz3Qalmienx0g12aJDIfEOK3W83g47JnRV2+7NW+4QDGQWkJkkEfZISQtQbB/Psf
w0Ikxbv9PVlkCG9bsgeVb+b7dwU6I0h1AjJK1dtUZRa5e3iUrOx1CmQ+F+NcHy127R8fuUue
41AU7DNwQIn28vuBtsuqa5IdRTeif47kaoG0tGc5JbIFsv0u3wJD0mNGc7QkSGTFcW4kUcqy
SVJHOvHlvTURdMKwHF5eNQ7/AJH9xIaud4asUGNFdDBYccFfuVrIPcI9EdNINcFiV4Rxy13K
22e8zpUq6ZDLmw7ROjcGmon2daOOtLqXVL9QDQDpvoYLYyb8M2BK49jkzJC8rm2t+7xp6OIg
ttsL4iOpk/qqKgCSuux9NLKORpevEuPQ4t6gR5spzKMdtjN4uL60t/ZPtOJ5FhlAPdQpApRS
up+GnwL8lZtmAM3W647bbbkMOZOyAfrNstu/9DRPOjqiQFqpUUH5h8N9DZTLLTF8KWS9mLJx
q7SW7U3PkWy7LuLSO+FwmlOvvR0s+1SFpRRKFbg9dD2Zzsr9/wDHVpSzjV1sd1LWO5UtceO/
eQhl2K4yopcVKU1Vvht7SP46ZZpYGeJ4TFumf/6UORR4zIdLDd3jcnUyVAgduLxpUr/xKPEU
PXS9CogsWJ+Jm747fAq7z3V265LtRt9sSy7MW2lRT91JS+60kMbUpua11cGHlFCy6yScUye6
Y998p4QnjHceb5IS8mgUnm2DTcK3BrQ6Nmi1WjxROuNpgRjfExb3kMV262exdtxUd+PFFeUl
4HttulNe37VU9ToRZ2ilysYuLWKxcmW7FVbZkgx2owkJVLCxyqpTA9yU1Qd/w+I0oLDTHbde
LlkEC3Wbkm7ynktwyHO1R0/SQ4KcaU66WS2W7/tD5TgXWF9iy07NelOMNTbfObc+3lspLriJ
DyFDsrSmq1VPTR28gsaH7+F+dE5RFkruD715XFdkRb8i59xhMRo8Hv8ArSoJbQhSgkpPqRpn
4KZ+BFk8d+WbzmktX7mmNkUKOmeq8O3BDqnEqbUphTDyFlTiVhBTyT7Ufm020UYlDfELP5xl
PXiRjMiU06ZhYu0huc02iRMoSpLbq1hD7h+KKnR/Fwwr/EZK8a+Zr9amFqtE6Vb4P3AhsSH0
BSVhxRkpaZdcDinC4hRUkDkSNTsabOtjunkSP4rus+DenomMxJTdtXawhKkvOS6l/wB1CUcN
uVetdtZjJePkK6WfzhaGZmSXeFcWmJgYjXGdISh4OoaUgsNut1WpTdUoCfbQ9NKaZluN+Sw+
Vrx5euOHvOZJizVitCpMdybLUVF119z/ACS2HnXO0PaeaWkinRVBpr8DYzPDZ2WRMihuYoH1
X5SlNR246A4twKBC21IUChSFJ+oKHHUKll1N+89Jy4xVwJv7+YhaTZ/29nsqhk8yPtUt/blv
luTT6vnon9Aq1lDG4yPNGRSblaZVruU2QJEZ2fCah8Sy9FQftkhCEANJS2SUoT7SNLgUxUBz
yRlreZz5s5uDGkMsryufOaUyjnGdSmOwEttqUF9xAHFIFPzaOSs/w+P+o3xDIc1heOMi/a2r
enHWFR27v93FbcffXKc4tJQtSVcy0r38SfZ9Q1pL8gsnCLJdc+yW6+MRZv8ARD4gsW3tIuZS
7+3sxopQlc2KytsJQ6n28nEundVab6yoF/JkttnzLfcI8yMAt6O4hxgLQl5BWlQolTagUrST
txINdO1kkapE8vZxarxb/wBwxtuHHhsTFwLNHgOw2y9LZLCpHZcCytKPVKRwpXbR1X2MyiMP
lbO0XBq2iyRGYiIbkH/SiLe43EUxJWH3v+lr3wXFpC+QV/ZqwimXAb/l7LbjLlR7jYIN0jON
R2HrA7Ce7SBAKuypLTSkutqb7pHWnTbUUpja1+Xr8xFlRP2K33CCZSrixCEd5piE+Ww2ShMR
bdG+KN0OE9Px0wVogoMmYqTIckuhKlyFqcUEJ4p5LUSeIHQVO3y1Nj60koRoo85X84y7a3LZ
GK3rZ+xLuh+4CvtUp4pSlrufbJcQPzJRVXroSK3JXsov95mY1jtqudpEU2+MTaLopDiHH7et
ajx4k9tSA5WiwmulaJ2ljHDstmYtf2r1HjNyShtxl+M8VJbcZfbLTrfJJCk8kK2UNxpgWy2w
fJ82TkmHN45jzLMPGnnBYcdZcekqdkTDVyr66uqUTTjt7dYeTK2McQbzKxZldbHCsZm5HMgz
rS7axQuNfdIHcWFNlSPYnepVx+J1q75ZUX4xJ1wTyjFwuDItcuz/ALg6p5RKFzXGmHFJAR2Z
UQpdYeSlSOvEK+eskmoJCN5ucYxqNYzCeTNiRHocIxLg/Eh9t5a1pLkRsVWW+4QP1BUAV9dd
KtCqplUuWVRZmA2XHpVtWJlmck/tV3QsttrYkOhyQ2pkp4uKDgoFpV7ehFdHkm8kfh2Wz8Uy
aDkdvS2uZbnC4hD4JbUCkoWlVCDQoURt066y0KZe0eUrXZp0Fi1Y69Bg2L7qVZYkuS6qU3dL
gEkSZJKWy4yhOyWaDmKFRO+tJzvnYr9TjC8n2OR5EtOaXe1y3psBtlUxqI6gCRPj+1t5ttYW
GWeIoWkeu4ppeVBJOcaHWN+axjWQum320TsUXdV3wwJrbL81t5xBSp1uQkJQ2pPKiVEbJ+Z1
WXK5M/UzK6XD7y5S5pbQwqU8t0MMp4NthxRVxQgbJSmvTRdyzWi1xMqgSfHLFhkx3jcLBc03
S1yEIKo7jcmiH2pSq/pn2gtqpv00LTC1lKNUl/1GYWpL0tFkuL0qQp1ciG6+00yfvyyJSA60
A6AhMYdogVJO9Naq5NzgaXTzxjK02uNBcuzSYlyeuD1wajwI7yGn47jAaSw2FMu8O4OXc3Wk
GqgdCezEGbeVcvx3IJ8A2NmQ0xFjqbfW/Vttx1xzmXGIocfRHB/MEKoo70Gl2UfJLCgcYbm+
LRMUfxjKG5aICboxe4kuClpbq5EZHb+1Wh6iOLiCaOA+09QRrK5+RnXwXCJ5xsBfcdnMTGpF
1mXGZKmNlDkiyqlpUhv9mWofWtJ5PlXEE9Kddas5CUsEbL8rYw9bHZxZlqyp7G/9JKtyuH2o
Y+n7/vg9w1T/AOzx+r81NbrdKyfCcjMp/JRMjtONRLJj8u33RUy7XCO49ebeeJTCWlYS0kLQ
BUuCpKFe5P8AHXNQ03zJcwS/iLKbHYL/AD3L244xButql2sym2++phUkAB1TQIUtPtoQDXfW
OUzScGr5f51xFrDb7a8MnzYd5uEgSoLzEf7PspkPNl5hKgSUKQ019aepO2utHWc+A7FOwDO7
bLyLJb9mmQptsi7Wx62K4xHHEPrfaDKHVpYFKtcQo19y1b6x3cr4KcNeRhas5bZ8W3vDFZFI
jOtTEPWxIbeUxMhBC0LiICTVjurUHD3Pb8d9dFZKzfn9gekSljv+Dx/G2PWyffmVXW239m+S
Lb9q+taIpohyOhYTxqDVwmvE/jrFHCt8oZhouFh8n2bJMu7VwuMZ9AylMrHk3NoIYYtQYeb/
AElcQltalKRxSr/3OJV662nVT/8Az+4plW/qNky/3KwxFXYSocWI4lmC64h6cyoue9+cWStn
uSBTipB3CdxohdNcmNkJ41yCxs4xc8ek3tWNTZFwhXJq68XlNuMQz+pFKmP1eZrzQPpURTY6
xWFM8o3Kx8F3Hkbxlcrqq5ziw1LuFynTLG7Ihgiycme2l6ehBQmV9w4A4locuHqa63Z6n4Jq
EUy9Z5Ak+H7NjaRBVdGbnJdkttxkofbZSn9J9KwAAtxVQpQ9xGx0q8OznPA22voT2H57i0LH
8TtV+ui7ixcruuZmCJanpP27MQcIDDiV8kllThD5p6jfbbXOkpNkrIs0rM0OW2x2x3OLK3kL
L1xclyYCCxAdaeSgNRXpDDbPZ7qNjIQkKTSg331pNJODNWKk+RIruSotlvutoVbv2mAzd5D9
zmRWUSYjy1luHcmv+rk8A5UpUTy6fLRqq8m06xvJ3V5Rw+5XpV8t97ZstpiXaRNyWBIbUw/d
oS46WGEIYQhX3VVIUO2vccuRHrrWIhbxky1heCJYzLCUY4h83KOcSVj6rYcPKayDey6VGQqH
Tgd6LEjlsB1rtp7Ljzn6EmmNvJOR2W7Y/e4UO6RLm3eplsVgdpjKHctzLISiQlSOLYhFVeBB
PuP89FHXfEfuPMcyUvzbdIEzMmA1IbnT4ltgwb3cG1BaX7hGa4yFhz/3KVSkq+VPTRZpeuq/
3f8AQOXGmZ4sgI9g5CtOSSQNcCApI6A+4EcVfCukmglcVAitQk0r8T+Hw0NkBVOo2Fdz/fqQ
h0QUmg9CKA77n4DTIoHEpVumoA29SP4DRINAA9qUVBUQRTf+356AAkJ7Z2ISPoT8dIyHwISm
p4kA7HqPhrIC6KVt0B67U6ahgSDTYqqn83LY0/hqkQymhBBBNaFXx1CAgq2HruCN6/LQbSFA
JKq7hI25AbD+GonVBUpyrWoO5T6fP/hpDSA3UkA1PzPqn4aDKtOBSVIqQU0G45f8NRuAGta1
4gEEn4fwOoogVSp9FCn1bevrv8NA9TmFKqonioq2qPX8NMGYOlOQpTkpI6H4H465tnWoDzoS
TQDYgf8AH4akwvVhEUr8DuVK3r8RrZiICSSCSfyigT6b6y2dVYWAByBAFeqQPTWZDqD30ryT
8K6RhzsjrQhKbooV4ooaAdP469dHk89lg3HzLj2P44LHbbZCcS7IhtzHpr8p99RK07tpbcPB
Ar/h1izyFMZMvWpRcr6DY/gdUm3kufi3C7ZluTNWiZd12pLlSwGW1OPvLAJKUGnBuiRUqX/D
fWobRzsS2J4Rh91yl7FJkq8qvCpz8eOqIiKY/ZZUod15Tp5VCUkq2/DWXTGCacJkjbfGFqmq
y1Fuyp8MY0l9bENtCg+8lkGq3QODKEKWCkcaqIFdtME3Jlgee7fa7i+3uQ0VHgT8QgnjX+Gk
nBpUPArlNwODezla3mZ8+NajbmlurYj/AHBCaPLUpKSpsKHJCU8fSutOuQlSWHI/DlwjyIlt
XfbzMV32o8SdKjn9saWsgBwO99wtpHQURWu2uabkykDJ/Gs+wst3665vdWIsWUqC5JlRpP3K
dtlxEB9SltKO3KqdtMsUOmfG2QXGTZblbc8myDKYfmRHJSJSJiIzQAWuMz3lrWV1ACUkVGrI
NZI+5+O/JlsySDMt2QzH37uwXXr1JW/AdisII5ffd5ai2kFWya1J2Arolkl5JGzYplUW85hj
L2ZyGrxHiG4XF5iMl4Smi0CUuSJBLyT7wninb11qcBGDDW/gAU9CAKbbf7taFGtW0eerniFv
jWdl9jHmYy2oQiOMsLkMmpVsXO84qgNOIBpXbWGyYxRC89uYf2Qm5t46I6lKjKdaQ59sPq9h
UJJa4n+Xy1SCSeyTtKvK0nEMfeayJUW13OezbbTEMdHFqij2n1PJRTilSNgPcaaYNNZCzBH9
QFtntR58i4z48SYhNvuEVpAQ/IBo2ri2O6qpJCe4KaJAb3C9/wBQTE23NSGriy8XVGBHbjsB
C3Sk8xxYQUK9qjyC/wCOlME0JnHzrMbuiZ6HraiPbXFS2nW2YSDBCwp37cNICSsrpXh7umpM
kUXMLllk67J/1Sl/90ZZaaCZTQZf7QTVsugJSpRKTXkvc6WLWcFn8af91DElIxIt/Yl5BUud
2Ux/uyP0xGMj2fc/Dhv8fTRItYJLHv8Avr+2zm7czIUWHX6OXANffokkH7kQFSP1e6RXn26/
z1Ng0cIknzOvCXnI0Zf7Qyy40mcptsXQQ+YD6GVLP3P24XQKon+OmQsIvk3zM9golXG3KZs0
lltqXdUstJnvxN+yiU4D9x2D+UqSB8zomB5hlcfzfPn/APTwLz6P2cFvH1Mxg3xJoj9MobHd
VQcfzf26pFrPyWy+ZP5ntV3syHLL+xuPvqkw7bAhoSzNlPjg8p5tsu9x1xKuKkqIoPQddCZm
ckFmWQ+QLXfbU1dbO1ZE2Ud+yWJEZP2TRcUSpaG+TodK17mqzv8ADSnJKJGeI5Zfv+4CMgbs
ab/kUlwux4QbW0EP0H6jbLPHiUpHQ+0dTvqbkV8EpaHrzdssu95t/jpN1uCHwp2KgyyxCloW
VLUoBaea1ue5SFH02oNTcGUV2Zl8wTcjVklkj3O/3hSkyZU5tbb0J0Ao5MtJ4hChtsenEenV
SkUsE3ZPI2YRcYYnRsfTJdx5hdvgZb2nlC3xnxxcaWEUYWd6JU59NfXQ8BZwylyclS5i7GOI
gwEJjSFShckMj9wcqDVDr9aqQOfw9B8NKsLanByxHIzjOTWy/Jjpkrtj6X22FEpC+II48gDx
61rTQKUEthfkeTjeYv5HHb+4Q85JW/BDpQkfeV5ltQqEOJCvavj+OtWckniCxPeZoLmRQrq5
Huv2tvivRkPG7LXN5PrSor7hQGOPs49tTRSRud6alJmBp/3VtLnkJWSxsdbYhPQXrfNgxlBL
7zT7RbdklTbYbS8QeqWwin89FsYFRBzxTyHhlstEa0XOzzZFvtN5/fLB9vJaQ6lYSlKWpRcR
xWkBCd2wNTqCzDOXkLy/PydVjkx2HLVdLM9OkiQw8r/OnP8Ae5MkcVJDafbyO51rEDmQWLJs
TY8Y3iwvwbw7OuElmXIuEb7f7NqSwSYrZUoVCV9VhXuP5dCZO2oJeb5jt87LMovHYlwE5BEt
8OI4goedhfZLaUt0JJCVbtFSUJ6nrrLWDMZfyyT85Zbgd+sDTVluKFShcFzkW+EhRbkKkJAe
mTStprg8QNglShXb56UjUOTPfGWV2zGr/JlXJD6oc+BLtj70bj3mUy0BPdbCilKlIp0qNTYl
mtGa4RbY94sDV0v71mutsbhfvEjtuSYzqJBf/wCnjpcHFlXRQ7leW+pGX4HmV+X7XOsEi1WV
24R1d2ztNzCey5IiWtlSHVPFtZKVOrIIRU9Kk6VAzLF3zzeh1zyEm0zZ0NnJnYjthaFGw0Uc
EzVOAE8FOtooaVroygdcQQOJX/CYXjHKccul0lRrnkJjqYhtRFPNNmC4XEfqBaQS/sgmg4D4
6q7F/BomfZRZJ/iF2PFyWOi4S4FvTNbZeKzMciJSBDZglwrh0Neag2Eqp7ttVQtl4MJxG6ot
GU2i5iSYggym3jJS0mQWgDu4GVkJcp8Kj5b00tSh0egIHkbD5uWwZacpe+5jQbo5Kn0lJhML
ejBDa4rE9bjqHyd+0hfA0292jJnqR9r8pYlGn2pqRkz1xu9stEqIcrlszGmJL8iUl9tEjtkT
ShttPEcSN/lq4EXf/JuL3afeDj+VDHLtOZtQayP7aSO6mIlxMhhRSHJA5KWlXvrX1VtoJ/Ar
CvKePW23y0JyCKu5m8vTLld7izLhi4MKbSlDgjwUqDivYR2nKD/8460DWMHny7S25l0uEptC
UIkyH3mUoT2m0pccUtISgH2p3+n06arbGujf5Gf4K94ll2FN1jSGFWBDEC3PF37sXFABKVRA
ymO1xWKtuBZUr4+uiuAulYonlbOnsqw/DVrvSJrsGJ2Lpa1p4SW7gkkKkKSEJBaU3RIUlVPl
rXreGLf5fYg/EVys1uzViTdnmI6UxpQt8uWnmwzPU0RFecHFYSEOb8ikgaLE9GrWzyDj9u8g
YNPu90s9yvTEOZGy/ImmULjpKiXIZbebbaSHUoT2+aEVAUUnrpj8fuSiSpeKvItrtN2zO4XK
LaIi7lZ5jkSO5FCGVS6ANwmGwfa0/X3tV91Ouhr8kSXWsEx4culhTh0mLNk2SzuPSXXZs+QI
i5PDjVLb8SYgl6ORVKPt3QoDald9Ty5CMEnb7l46jeJ2oiINplQ1Wx79z78mPGe/clLXulhb
Tk/ly4Kb4LCaUG1NK/lIPRTc7yp6++IsNZ+4tribf3412YbbYZmMyA8exxaSA4G1tUU4Ue1S
t1b6q6ZtuWVnxHPxa3eRbTOyYtCysrcLpkJ7jKHi0oMuLSQdkOlJrTbrrLCINOkPWeWm1W7P
59tvF5t9uvC8ruZfRIlMQn3W1QvtJLQKXZiTVTLfu4pUQaDW0/8AOCjIrCYtuh+Z8ccxmLa/
9NMWuO6Jy3GlPfZLCuUqUXS3wuCl+1xKU1SnoKb6zaOsfJujwxGCsSbQmRAgyrfb7tDyRx/O
UuuxQV4+pIWE8llSH4/FS+TbRJqRUdNbtufOjNFCSZimXLsq8pvD1iSlFjcmyDakBKkpEUuE
shIX7kjhSgOq6yS0X1gR/wDstYRCU2m3LyPjmyhTmTVP2H3FPf2e1z4/l5f82s0/3eRjKnRq
Mnw341jypUt2NEjQ5z0hmGt+aBEb7kyOLdw7a1LSHmO7xp1HWnXSkv2MpM55B468ct3bGZAx
FaH5UuZGkWBjtxFvNMsqcbcXGMpwO0Kdk91Kljb4aylhjEZMj824zDx/KIcaJb4duS9ED6o0
EvoTVTiglTsaQVuRnOIAKOSgqnIHWml1UAsFm8N25l3E0SrFCh3LJnb5HiZC3LbakFrHVo/W
WWnzxQypdQ44ncfEawlv9jWCTi+NvHFxX99AgKfcEm8rx7HmpVTkMKM6v7dxl5SucZEcApA6
uBG2ul65/QkgX3HsebwW4BFrhDEGMciz7JfkJT3n8gdUEuIErkXFuK96FsdAE/SOuqlV2j9S
9iRi10x69WyJBlzoi48a6MmRbnyUqQ+0lQSpaSknpUAg7jWGuVoHuCz+KsMt2WSr9ClNuvTI
trdkW1phRDn3IdaQFBCalzihalcflqol2SeialGqZR4b8bYRhc3IbxbbjdJMC4LiNRFzFQ++
0uSpiO4tSUEpQUJ5ckp93pprWXHwKSnwVXx3jdhud5z53HYE2XZWLDNVZH5LQddadW0n9FYC
XEqdUSoN/noOQ30Va7IVoZNWnET4IduicceXkLF2+xfu6H1lTKgx3e86gJPBih4FpVAVb1rt
qpWW54FpQiSsmEx7r4+wWO9b32U3jKXGLjMLZClx3kIaStt0oBQ2U7IFSnlUjWqR1t5wMLsp
JSL4i8dXaYym3C6QorGRSsalpkPIeVNdjR3HUFCkopG5uN9utCADU6H64n6SZqnBS/MOK4zi
d4g2Oy22TClNwxIu6pD65HKQ+eQabcKG0LSwBx7iB7q79NHVKvy2BI+M8Usb+KpvcyyDJZ8u
+xbAq3uF0ojxZDfcclcWPel0/QhavaPx1lKZfhGmlCXkmV+DcXK58yPPuJs9nl3JmdHUhLky
c1Dqpldo4J4P8RRL6j9NCddbVz8uDMEVf7Thtr8I45Pgp713vs1x2XJkRElx37RQDrDcjkSy
0gmgKf8AM9aaxVJ9muDVqrBPW3CcQy+yYuwbBDxW75JNkPQ3bat51f7Xb2FuSXFJdcWkl1ae
CNtjv01etKG39PuZdR8PH/jVfj+25Xb7bOkwYEC63ZcObxZmTksPNNtMSnWBuy2XCeTY+n4a
1WkzV+TTwNJ/jTArOBfHrVIucS5O2diPj33LrYguXhsuuHvNjvL7VOLXP4+6ujquqf1Lop+4
6T4Swm23e34vMTLuczIpd2YiXsO9pVvbtSSUqDSAW3lrP+Z3PToBqdF/L6Y+pP1qM+JI9nxP
ikqNIxmOmQxfYdliX9eRl0qbeMxSR9qIh/S7SUue1YVy5A/hp/rSeflD1zjyRma4bh9ntN0n
WqDIS7h96Ys1xRNeMhq6haSpbikAI7B9pHFs04n46V6VD/8A5kUlh+Ss+VsZtOPZk/Btbamb
bIjxbhEhuK5qjpltBwxwo7qDZrxJ9NYtVdU/JnUopKgOPPlTntT02+OuaAFTQUA+B6Aj8BqE
MEBPPf27V6mvxp8NAACvh7P8YOo0gcXOBJ9eitQhkqUmoHEECpBFAR6aCDUpIAKvbU7j8Pnq
AFEqUVBXIj8vpTpsdIpoCSOQSqh4+ldvl11mBQorUqlK77FJ3FdQiQAVBIAAB3B2O3w0QDFA
cRRSgoddvbSmoZAriRVO6duJ9a/jpCQglKN+ZI6lJ2HzpXUaxyHsQPbSg6Hc0PpqFqQuCinf
qD8R/u9Rokz18hqTyqsHb8oPoR8j/ZpOjsKI+kEAcT67/PWWzKcsOlSAUkmtD6EH021ls2G4
lsbgilSB+OtJmW8gQv3jiQQD1rvvv/ZrLQ1YRJFNvjQ9RT41GhI05S8ikAcKgVQrrTfc/jpY
VSBz3pX1AoPTWYFWDoUhQFU7evXfRBoT2z9XFVK14U3p+OmEHYY2Rha7qpKxyUf8yigBQ9TX
pTXt9dXODzG3eWJGazJFjteS2KBHuojtN2+VAUp95+OfY3VaVrSkcq7JH9msQ2wrVsh3vEOZ
NXC8QW0xX3rIw1JuIYcU6Al4EtttAI5OrVToNZa5KRph9+yDB7y7dEWRBubCeIXcYrxMUr/O
jdHbKxtVXX009sB1kn8XyDyHPzG5XvFrFHReJMNxUqOzFDLKGHiAt8NOrQSpah1qa/DUngHC
5GGFeSJuKW+5Qodjh3BVwQtmfKktvuKLSti0vtrCQ2PwH46bGrLwyJj5XHYsU6zosNsC5zvc
/dOyVSmUlQPbYUVENoFKD8dTykME9F8mCPgycVaxyG5be731ynVSFlUoUHeVRSUFRp9NePy0
NyZgewPN8i0NLTj+P26yqkPMPzltd9xt0xVckoCHVcGhX/DvqQQMsu8tPZBZZNnjWyPardPl
m43JSHXpLj0g0JIU8aNJqPpSKa0rAlGRve/JTd9v9vu92tyBBtkdENq3RJT0RPFIIBS+j3oq
TX2gVpTV2NPBJv8AnfLPuo5tjiLZZIaG2GbYkiUhTLZCqPPSA4txR/xHpo4AmIfnG3yL5fcl
uNvkOXG6wf2yJCiuNfatMlO6lrWEuqWVb7ClNQcYMojRZT47UVh6QpCVFf27S3SlPUkhANB+
OpBMG0WPyfZIePY09brBOvF8xGFIL0hsluLGD4LalPEJXyQetdumtMSEzPO7He2rfOyHG7hH
vzduEaC6mSuJCdZ93B5I4fcOI5K6VorpoLQqyeTcLteJ2ayNW65vP2m4N3XuqdjhLklB96SA
Ce2d+I6/E6pYT4JRfmvEmZM6ZDsU5wXu4MTr0iZISWwY+wRGLY5HoNlHjqQolmf6hcajtNxm
bbcH4yXXnH3l/ZsLDb7ak8W2o6UoPDkOPLc+p1RBQVvFvKuK4g/NcsDN2uReiFmO5d3mlIS+
XA4mjDZ4ttinu4kqUdTXkqlCzO62O735y4WRmc0Zv6stmY59w4mQo1UhlwFTi0f4efu1IluC
1YX5Bxu34wxj2QsTEotl1bvUF+GG1LW+1/7DqXaBCVf4hqYt5LC55xxq6y7Ver3bpce745Ll
zLbDiKQ4xI+7BHB512ha4bVKU7+mmDLZyHmbGi7FySRHlf6mgWt+ztWtviqKsPr5CQqSTzSE
g/RxrqKRpf8Ayvicli+XeG3LXkGTWpqzzLW4hIjxktp4F/v15O1H0p4/jonyJFxfKVjhXbDp
zci73VWOis6JNdjhlvk0GuMNKAKAf/jDWgA1JSKcNstNl8u4NjqmbZbpM27wZdzmXKbcSz9u
qMJzKmg2htSlKdLfOpUKV9NXyZ+CtXXPMaiIwmw2u7Sn4mJPvSJORsRwlxSn1KVSNFkFe6Aa
e/b4DSlJSc8LzCC35ffyV3JH4NoW73pUyc2RInMgJ/RcRERxSSQDSgTRIrvqaGpJ2TMrTEvt
9gqyK0SsRduZvDRnwpj3J5xZWr7dDfbX3W0USeZ41pT11mxLRUMolYtmOTZXlLt7TZgtZftV
tkMrckSyEBKUDtnggngOp25fI618GUXjGvJOKQ7VjVycvKo0XH7RKtdzxPg4VzpEhHFLraB/
07iFKNVLc6U31IXkzm63qB/2vtlnRdW3ZjUwvO2hEBKFtp957q7hXk5XkKJp60/KNSzkvBH+
MZlkjZ7ZJV8Wym0sykrmuSQFMpbCSfeCCKVp6amjVWaLifkfGb1nLMbK7dZRbYcia5aLp9s3
GA5BSIrC+Ce32eiuTiFUVQnU6mUWJ684U/mduLsLGlTW7dIRJeE2MsF1bie2A8IqLf30o5cQ
tH0+oOsxgpYztGUY9ZfKVxjQblYXYtwtLrRmphx47KJ/aKWmHn0lTG9f1VNENq2qK6XXGjNf
DIzx8xZHLUkrYxiVfP38pyg3AxS03aaCpgBZS2EA8uPZH92llWI+BGa5H4tx2LZU4/jtkv8A
b7g5cjcQ8hS5ZjNyVpiJD5PcZ5IUFJURyokempUQzmCNx+wSJXgu8oS3CQZFzj3COPumG5Lk
VgkPqdSXAVdqlEBSeXwroTyzeMFkv1r8Yyr9lNhttitZg2du2OWh21uBE+W/JdaD7LUhThQo
qSsoCaUB3O+rrBzTlv4Y2854ta8exBxqy4/bmYy7mETbrDZ7bsVpApGiuKddcc7qj/muI9q9
NCezPPD9isd4yiRFucducpq3y37Ra3nO03LntIBjx1KCkcuW/t5Cvx1M1BebXg0GRNvCpuCw
Y+VQrYzItuHMz1LakuuSChx1bSHitpSG9+33P+am+ssJ8DnIsU8cY7EnT1Y/Fmzlz7VAXa3Z
j6WobsyL3ZTQ7TnNXaV0ClGhPXamlKRjg6MxcJwm2+VWLZEk/c2l6HEj3BuY2H1R5ixxbYUp
twoCVKIdIqVp2NKV1qZakoxsqeF4zdZfg7NZUK1vPKVJt648ttJUh5qO6ovpSCDRMehUopPr
7thqxJXcJFlzXxzhVh8bzLtbLEuTPfhw1RZTz76pcYSAlb8qfFJSmMUdGuNUmu+ioWcGP4Ra
YF3y62W6a1Jkw5UhLUhiDw+6WggkpaCyEcjT8adN6am4NJSbU94XxJzIYjabHNZtz8S4vsxo
8uQH5bkJtC22uzNaRIYeqrir6mztQ10JmUoG0Lw7hstUO5ybXcrfEfs67jNsDkh5yRFdTJEd
HIIYXMcSsVVRLW3xppRCMi8O4hj7t2li1Xi+x4y7eyxZoTq0SGvvWFPuOrWGVulKVI4AKbT8
99D0WlkPG/DGBS7Z97dheISptzft8SC5z+6ipZoAl5uLHkpW6eVaK4g7aCfBht2isxZ8xhp0
rZivuttrcSUKUhpZSlRRvxqBUj01uBrlGz3LwpjMHCZ9zMqa1erXa2rvMacUhSCHKK7fFDXB
AUgnjR5RFKkdRorLYWngrHlzG8JtVsxN3HIU6K9drSzOkuynEOIcS6VhKlcBT7g8fdwojjSg
rqroP9xXfG+Jw8myB+LNkvR4FvgyrnLVFSlchTURHMtshft5rrQFWw1qxuDSMR8XePHsxxN+
RLnP4/kUSTcIdrnMAvqXB5BxuSthSUBFU89huPaetdYevuZ5GPjew+PL5fMvvFxdjrg2qBJu
NvhftzzcHtoCAJSozbxWEIUr/wCNzqrry1q6yCq0tkfh3izGsjsKrgq6y1XZ154Isttjsrda
YQshDwZecacfR/yxypSRturUrZDSSHa/CNjj4izdJeTtxb1LtYu7UFzshvtrBW20W1OCWVKS
n6w3xrolsHhED5AxTALRieL3KyS7g7dbxFXKeZltNoQtpDqm1L9q1dtSFJ4hKahQ3NNSyjbT
TwQvjLD4uW5fEs8yS5FjrafkSXWEpW7witKeUhsKonksI4gnpqfhGpUFuHimxZLDsM/E3ZFt
VksOdIs9jnK+4U5MtzyW1sJlgIShLiCpYU4nYjj6jWpgGmt5HmMeADcclkW9+5KlQbSqMxki
IEdSpLM6QriqIyFVSrtj3OSfoCa0BpqbcIkoRnGdWFvH8xvNia59m1zX2Ivfp3O2hZDalEAb
qRQ1A1pqIM0nklLVjdka8bXfLLsh2XJcmIs9liNOFtKJJZL7kl5Q+sIb+lA6nrtoTznSNNtr
BaGvAOTNzzERdoqYzwKHHgXf1HWrem5cFMJqaJbcA5b+7emqZUimls73v+na62uNAeF9hhmV
NYgyVy21w22FyUktrCnCrkn28aUSa0231iXkkVLyJgT+G3CMyq6idJeCyptUaRDktlulFlt4
UU2qvscQtQ9NbhxkyrSwYbiFnu9pveUZNNlR7HYvtmpbcBKXZrz8xwtshovENcU0JWVH5DWU
m8I09FguHgXJI96l2203WJP+1kNMTeKy07GhS2UyG5sxpXHhH7aqOUJooH5aOJRlpkVDwGC9
g+TZI3kAfl4280wmFESsodafkdhMgPK4Ubd3U2lO5HWlddFVq0GkoROSvCqvtXrPEvan8ptr
MObPtDjam7ehFzWhpvsPkk95PdQXCUAEGgO2ik4nTGP0JG4+Ibrk0iFbbbk37kiwS145OE5h
UZuE6w2t9wxEt8u8wQ0vc0WSBXro6tKOdg5meCMgeKWkJi5RZ8mdbxAQZdxdvPYcj3FpEJxL
D7aYzat1rW6kIIcCSknl009HrmSdXyObj4omNF+dkeWuNYqW7ebbcu2/LfkJnhX2QXEK6tlt
IVzCle38ta6stYBUc5KNd28rwW+3XHW7m9FlQny1I+ykuttOOJSCHAUFFfYr8246arVdYBWZ
ERZ92CH4jMt9LNyUEzGUurSh8lQp3QDRz3Gvurrm7cmo4NQjePfPUiScYYlSeEEqDLRuBREC
oZaUUM+6hU0p5BHtFD06acrJWQwNk8k4FcrfOTc4dJNwdiB9qY3MiJuDqOMhqYKqShztL95U
OnrpScNivBPZNg+X5v5Kn4fJvkRpGONFENctDUJCGlMiQoNRmirlyV9ZCjRNFH4abOKpecgl
hsqfj7HskXd77Btt7ctSoFulvXB+2PpX9y3HTy7SFIWgPNqV+ZJ9o309YskVXNWxzaMH8tqx
+0ZBbnVIiMRX3bChua0iQiMQpUr7NkrDnTkXAlO+sZmPkdDa6+OfITGHCbdVtMWKyMiQxBlT
G+cf7whQQ3HSpSkOviighQBUNNVZuFyT2RmHpzW/X6z2qzXF1ubbErNskrf7SITKP1n1oX+R
tNCtVOvwOsOYgvkuGSjzK3IiZWxkEjIIMCMu42zIYLh7aY6lhiQUMrDbiQFgIcQUH49NdIcQ
ZXyIxezeZv8AUd8diXh20X77D9wu7suSEyHI7rZeQOP6ilLUhJKaJq3/AMusuW0bhJZOeL4/
5Rewd5u13kQrdkDT8yNj65RRNuLMcVkusJoogBP1e9Jcp660nbtLDLg6zsT8suYZFsZuYmQo
ojyXMUYkVmxGpaj9ot9v2ntqUapR3FcCoHiNYTtnyL2SF6xfzHcLhY48u6Rsl/bZn2qGg+iR
FhTozfdcZuQUG0qW02j3rUVVSkjl6aq9uoryQeQ4Rk10yddxyPJbQ3+9J++iZE/IV9hPAUG1
oiuNtqPJo0T2+A4imm3ZqfH7GU4wMZ3h/ImLje7QZUN27WaEi5/YsuKUqXEUnmsx1cU/qNo9
ym1gKI6aa+tuP/YnqSjBAFHKckK3JG/XodZsocCJBNSQa7H8NtZFQEXQvcbEjr6aIBsMJbAo
rflvXpuf92oUDklJpy9nqB0H/nqEUSoqPJJJI2HwqPQ/HUZYoBSOZI5fEeugQJBIoK8vT56p
KoRB4fBXoDqFhDce41PSlKUFfjqNB8eSRQciB69NAgKWwOSenUfw+fz1FAHE7hfoB+XrX+O2
opDFU0rvToeigD6beuiQBxoPdtTYn1I+eps3AagEkFR5AEcfX+P8NCANIVQpAoTTY131QCQZ
oSo78VdK13I+Pw0GkGBy48TT4j0/EaYJthVSd006UrT0rqglcNJJbIqUk/UDSlR6aTLbehQC
Uig35UJIOubFJhr47JFVDcck7HUjqlIg8irjy235n/b4a0Yan6A5e3/M2+mld6f7b6jHyNLC
mLJvzaVyFMxHVUffKOZbQfqXwSfdQflHXXq9VoeTEwegM/yjx/kV7xaXa8gkNN2plmJLlfZu
pDTcf3B8AFK1LJ2CE9PjoTzILMst0zy541XlOQXGNkcqGnILfHhszmYjyVRXWSpHMKO6jRXL
Yf26FozOBrc/MODJQ8l2Q/kEViBEgOW55gtNXR9pYUqY646FrSGgNgvdR+OhNM06saY9meFH
yzeMqnZYHLe5DVGiuy2HULJfFe20htNEtsUp7uv9utT+JlIp2P5NZbDbcstLeSzGmZ/JEMWy
I32ptUkc1rkJU40k8uJ3HtqdYehRnKOBPuAFDVSQTv8A8dNTco9NTPKeCDH4rNvmWr9sVGjt
G2Ph9bza0qTzH2QQGUFH1dwq9NXJiZZXrn5lhyvJJtxmQZuCOTYi0vvx09uOlminFtKKR9S9
lLNdKRlIO2+WbNc/I/7fkBgS8RYuD79qnuxk8mkhJRGSjikUar7uShWu5OqEMFkay/EYsdpe
SXuy3bJYrFxdROYQy80lCyPtmuSG0IK+lEUrsdUFEmVWvyI7kGXWCZek2m1uW5DiJN3kQw40
5VJHckMt8ElQ6I4gBJNa6sGeSn5XJYk5Jc348xE9l+Stbcxln7ZpxJNAtDX/ALaT6DQhNH/p
7Spu5XyQ5NjRIbludiFL76GFLfcoWgErIKgBy93prXA4gc+McdkN4fnltkO27lLZVChuqlsJ
LslkKCkpc5CrQqKK+knp66vAQX28/wCnbhbba9d0WSXj8THw3MmuOMrnMzUNgtNIVy7iU9SA
gV5aIyDKZklhwdrHbvIgwrSnL41tim5wEuD7CEhaVc3ISqUclU47cjv+O6kpFs5+SsayGT4o
wRgRi/cIZUzIitrbcf5SKCM2ltslSypI6JrT11IrQR3iXxRdJOXhrLcVki1CK8sJmsuNsKeA
HbCl9Afhoki7I8dYi7HiuXDGrdFzNNunPt4qw7SO+6y4ExippLp7lUb15b9fTYgCtYTjtwb8
z2QO4nFsb7EcyZ1uhSO+hhKkKSJLgStXYXUgBHI61ELAplHuvjnMbnm+R26DblLlQnXpkhl5
1tsojOLKkOKWtVFc0mqd66AWUW/H8EsxwPGrpAw9eZT7yuQm7OIlOMmJ2lcWwhSVpaa23qvU
0JTmLBbh4xut6TZWTLYndhF5XcKLbSHEjtNwx/m0rTl6/V0GmMmU/BbRizUvwlZ7hCxP/wC0
VXlkPOJSvvTItCO4uQr3NsvKUE1B4j01nA8of+RHv9OWNaJ2Ixnb0zIYeiyWLSWbPbgKf9Mu
WpKfvytPtPL2lR+WlJBZvgcSmmH4OFxJOHWu75ReW3ry/AiR24LaISWz2QrtkB4J5BxTZNVE
cdCSiTXMEdn2IY1L/wBBPsWVanb7Jfjz24EMWaXJQ2r2pTCdVxZKf8albj+GmYQJKSEtnjbG
1vZzOuNsuaomLvRo8bHI76FTlGUsJHN9oOglP1AJ+O520irYLr5F8YYMzON6lWueqIp222iN
aLXwjFsvNclSX6IXVQ5UXsOms8BGTCfIVghYvm15x5mQXmYD/bbecKQ4pBSlY5AU3HKh1pIp
NKtXh3DpVsx6K5Kuv7zktpeucea2lk29hcdouFCqpKl8unEKrTeusFkrWY4HhNgjOW4vXmTl
DNsj3Fchtlpy21fSF9tQSO60hINO4pVNaSMWiRvm2OWOF4twK7wo6WpNxNw+/khAS66ttxIq
twH3ITuEbDbSns6W3BD+L8Nt2ZZOu0SrguBHbiSJi5LKA6sJYSFe1JO9a6W/BlFsxnxZgmUS
1O2LKJP7LEgOzbkia1HZmRlJdDTSFrWpMVKXalaSVbAb76NEicg/08Y5NuMqO1lqn4vOExDX
GZZfIdmIWvhJKV9o8O31aUQQRqckmMLj4Exm2KtT1zyeS3EySWxDx5TUJKnlPSEBVZaSsJQE
qUB7CdtHZkk39SO/7BT03O2xv3hl2M6/co19uCGipq3uWupcKuSkqWFoAUOnWmpksbHDv9Pb
owxu/v3+MzKct4uyIziEJZLCk80N94uB3uqR0/S412rpThi3BnVwsNug41aL1GuYenT1ufcW
r7dxtcUN13L6v01lW2yd9/lpTkHhjmNksjIrpboubZBcFWGOpRekcnJSmGwgmjTSyociUhNf
QGvy08DjZNZzh2M2qFjsy0KnQJV+V3FWa5Ox35DcRah9vLJYCQhLtTxQoV2roeiq2rRI9n4J
hdl8iX7GrlNustqA6xGs0K3IaVPnyZIRVIWU9pvgXPUb6HrAJBt+LbJJ8vHCI13eMNuO6+/J
oyuSw41HU+5EWUVZW424ngsp/wB+lzEkht46wTD8qjRo866XFF5lCQ9LRBaZ+ztkVin/AFc5
2QUAoVWvsVsNuuspwDWMERbMYcewTKb8zdHu3YJMSIxFZKgxJRLdUhThqRxTRPJO29d9azIp
tfcvd/W7P8Z3ZULObpdYVpYjMyn5Edpi2zXlcT9hGkKIluuNVrwcqKJ32pqom3AvBSU4DAix
MKny76m2RspbkyJMx1B7cFEV7gFAoPJwqpt09xHpvqyS3BN5JEuGOxLRkOPZFcpCskaftjX7
s12J5iLKUrdQhS3v+mdr7V7Go2+OiHGWHVkBnDN/w/Prnb03qXKuVtcTHF5DrrUhxKmkq3Vz
UsCiqfVpBTGSw4BhvkrJLHLyTGrvM/cmZzFqktNvvIfUw6Ae8t4LCi20ViqPQVV6auyNrRbG
PEuSWK3XVYzO7RI33cqJOXaIUyUlYjIBU9JDDyeHIK6qr676G3OAdsGBOUS64ttXcQnkpCxQ
FSfRQ/8AUN9aBJs2SV468tMxptjlXuYcchQI0huQ790mA+iYtpIitcj2ypBe9w3+nponwDtl
oK/eIr6LhZcHeytySp2c5Gt8CRBnsRmnOClPvMPPgMuISE79smtdtUwixMB2bwxlNqyGDMse
TJjOcJjqLmmJNivNKgpSX0/aOIS+6FIcHHgCFdNZbFNlTzy6ZTaMwYuKsrfvN8THINzDcqG6
wlaVN9oNyUNrTybUa8U0IOtLKBORng2J+R71EuS8SjumO40YFxcbeZjtuNve5UQqeU2HOYSC
UJqdE5k09ErY3/NdgjwMftP7hAau70lu1wAEJWt+MoolpZLlVsuIUCF0KdOsgmm8D+a35nt3
iWPdXZi28Ol1iob7jKn0xln2guUL4ZcUaBAX8ikDTVuXgLNRlbKFJynIZmOxcZdmrcssR4vQ
4RShXBaif8tZHcSmqj7ArjU9NGiJG/Y1mfju82t6S6IF2kR0XKI5EfCnGkqUpKeTiPaFVQap
BIpo2pGchXPynndxfnSJtzWtdzifYSkJCG0/alQWphpDaUJaQpSQVdsJKvWutfQ1OBu/5Cyx
+7W27uz3FXO1NMMQXxxSUtxVFTAUEgBzgfVfIn11NyoZPckHc7tNulzl3Kc4p6ZLdXIkuHYr
dcUVLJ9N1H002YSSlnzC527HrvjyUMv2q89tUlh9HMIeaP6b7KqjtuJrTkOo2NdSeZD4LQ15
78kNW5EJFybC20IbbmJisiWlSUBruiRx59wtJDZX14imhwa2wrh5vzOehKFCA0kTG7hIUxBj
o+4ltgp7sgcaPKWFEL5bHQnAtrggsx8kX/LIsKJOTHjQLdzMSBCYTHYbU79a0oFfcaU60+A1
pQlCMnDEs8vWMCW3DTGmW6elKZ1rnspkxXy2eTS3Gl7KU0r3II3GszAZJtHmjyAy+JSbkVPr
m/uE6QpADsp0I4JZkqFO5GSj2pY+kDT2NSJtnlabb8fvViNjtD0bIHFPXFbkZfLkVl1pKQhx
CQhhaqtJA9vz1J5knLFyPMuXv2xuElxhieEsIl31ppKblKaiKC47b8j8yWlJBFACaDkTTTVw
K1kczPOeZrmxZlvTEtD8eSqfMEFhLaJs1xBbdky0qKu4pxtRSobJ3NBU6lZJPBNoA80Xn7tt
DNvt7OOtRXYRxdpkptyo8lQcfSUlRd5LcSFc+dQQKbaz2wSYtPm/JHZMn9wt8C6WmQI/Yskt
gmHG+y/+J9uhCkrSGRUUKzyr7q6W8QUt/UjE51BnMZXKyG2pu+Q5CEqhz3EtpEV8qPJ9G3JJ
40CUI9u2+nunZN6QNp8FRafcZdZdQSktrSpBT05JNdc3DE2tn+qC/wAV9Epqw24XFClLcklc
g9xT5bVJPAqKU94sI6fTTbrpnEGXsyu35FGTfxcbjCFwiqlLkv2vuLZbdK1lwjmj3JAUrqN9
N7OxqtoLdcPMDT/kn/XTFkYYkPtuMXKD33nG5QeZ7Dh5GimVFr2jhttXS22l8AoX0G2D+RMe
xe5XeSMcTLauLDsGOwZr7f20OQOLjAWEkuEp/wDcV7h6aG27SXbEDiJ5cfiT7K/FtTSWbFa5
dogsrdWatzeVVrWRy5NpXT5/x09v8yPfJZ898oYLf/HTFlhsT1XRlET7fu8UFLsVvtFyZICz
93RNQiqBSvpp9fshuTTzaTNcLzCVi+QtXaKy3KCUOR5EV7kEvR30lt1uqaKRVBICk7jWFhyZ
la8lyj+alWt6P+wWhq3CyQ1wsWLrrklUESF85TrhUAJLj3oFgJR6DW3ecedmk8wKsvlbGomZ
XrKJ+Ouqfu0ZcdTUaaUobXKb7cxwKdQ4o91XuSn8nTfWbWmPKCM4OVt8vxrdaYjLNoS7d7LF
l2/H7kp9ZbYhTahYfYACX3UJUQlftB9RrXdT8bJtyPP++K2u5cIFobZym4NwWLvcHXFPRnGr
bxLIZi+3tKc7aO57iNvb10P2zvjX3FOdaHCPOUOFLra7GGYM6dJueQRpEhbrj8mc0ph9Ed0B
PZb7biu3sSFddH9nPP8A2BrEPRAz/IllevGKNs2lf+k8QVSFa5LgdkvpW4Hni+7QIKlrT7eK
aJ1p+z8GubOWMw5Olr8kw2b5leVSGpEjIr61JYtiVFP27Lc0lC1SFAoUpTTPFCAkempXTsrP
VeDCX4x5M9PbSkJSTRPtqdzt01ztftZvyMCU0qCOg3TTroZINPFXLYkegO5J+OsmwEKCVCnK
nXcbD1rqCGHzOxI2H0pO2pBIqgJVyqsA+nx+Q+OgkEaGlSePWvw9KEaTUCU8UEb1STWvroCQ
ICQaEFQpuRU7n10waTFKQDTY1O1Pw0Ewmx9Qp0Px21DVSKUDUb03oQNgBoFsCSlJNVVAqCP+
FdGwDVRR5JpWmyq7+35aoII/lCtgOqhuf5aDWQ+NDzSNtwd+o0ogcht1A6Jp9X4aSYsUKhzN
AeqK0H8/XWWCeQqJVzV8diOuw69PTUabQEAUUTUBZ2+AH/DQZjGQJASSCQPj8Pw0CGU8hQeh
oSSafw0Gu0/QNQKSQFct/bQ9KfjqQytATxA47A0qVen8NQppg5t0rx9vwpok11Y1t7clWR9p
plRfWri3GQklRNaJSEJFVV+AGvX6lLR5b7PRnmLEuzdMMcjY4puNIhx03GPb4xjpXIKgpbR4
IPFyldiCaalHYqRnJoErEbejMcwDtidlwl2SMYjTLCEKWqigtqMrthtK9gPaK6IUGEU2X4Nw
99sCOmfZ3p1tZuLL85znCgOLc4rZluLCF8l9Gwabg6Oo9msHDEvHWMxPL8nFn7HIl2liC4ru
XZJUXHWwP+oa7VEpQs1CeX+/SlhjmCkYvhVvubORuXC0XqVKtgWYrNuS00yyaKUlUlb5SpIo
KhIH06EsBZlCAXSqRtSoIO1f+Gt1rODXVs3V/wAF4VCtUaPOyN9u/wAiI3KSEhC0KU7T2iKl
KnuPpzUr8dZdYZzlkfc/EPjmJlKsRGQ3JjIO7GbY+4YaWy99yRUNcAFAoSrlyWQPx1Qxqgm/
D2C3LKV4rasmmN3+PKXGlsTI6Pe0ykrdeYLW1BTiOfU6nVwSwxxZ/BmLX2G1c7VkE5q1pXLY
kmTGaL4ch15FCW1ceJoeu+opKc/iGBPXizRrVk8mRDuIWJqVQluTIikioSWGOQWpz0A6dTqV
Q2VnIra1bLzNtzDkhxiO4UtuSmTGeKR6rZVug/I6URavEmIY3ll3n2+9odWlmA9Kjdpzt0ca
pXlseQ93TU1I2DwDD7DfMfy+4TENvzbPby/AZc5hKCAr9YqR7V040CTq6hot138I4zJbsjNj
uSIV9lWYXRcCQ266ZCgkKccLwPBlNTQADUy0yq3HxBIttldv7t6hnHURkSIN0QFFqU+7UJis
ISefPkkjkoU/t0Sa7HLJ8Ot1jwXFMptkyWmfee8Xg4tKS241tVhTYCkb1/N00hycsBYzzKsh
FptmRSYL5YeW7IkyZBb7aU/qApCldQqmmYMtEs14QvbzMe6W+/W+TY3I8iU5fUl5KGkRCEu1
SR3VjkaDj11nIyRuNYBDleQrXjysjjz4dyq69Otb60OKSEFRaqockOq49FDpplwCrBUr209b
L3dIDbq20sSXo6uTilLWG1lNHFAjnsNz660xq8FnsXjXJ5eOxrob1Asdvu5WLYxPmrimWWgQ
vihALfp+emstsmV9nEZpxuVkCrhARGivFgRFSEiY8pKuKlMsgVWmp+r1FT00tspjZaHMSyNr
xtbb8Ms5w509q3JtjUh1UeMlYKx9wvnwQWqVWgIon8dKbJQyYyDCHH8UN1keQ5twx6JIaYnO
zGZRiO8yEl23hbihNDav8A+ddEsmhcLxLIujeP3bHM4kSPvH1xrbKlR34b6GoiFLccigrWtS
WyniEopU6JYEZn3jnNEXiwvfv0rIJ9/UY1rVPEiFNbU0TyStuUrm036hVRqmRkiMaxPy1EyK
8RccfcjXW10Zu81qYhlkFyhQ25JKghRWTtvWumY2K0W664R5xsuUzWLBfZs15UeIq5XV6UiK
lbziTwi85Cyla0Dpx3odU4ApLPkPyLiEiVZHOwzJakOKnNzojEl8yHDVwredStair5q/DVAF
puNz/qNv+MNy0syY+PKhGogJjxkyIqqqUvsIIWaprUNpHt9NUoWo2QV3Z87q8fJTcRc/9EFh
CuC1NhCYw/y+4AfuOzSlOftp8tVXkLfIvJ8k8mQ8KxO4Xu6puFouSzIt9mkRWyw2LepKWO97
EJcQoUUEcv8A1a0TWYGsPyL5DyJ9Vtx20QGbq6hwKXZLY0xNLPGjye4nkeBSfcPw0QiaGWNW
fzBiWR/ZWezT4d5lRj/0Co6XUPReiubbgW0tCT/i6K+ehtBSW2TLN5/qDTdri4zDuKrhDdYf
uDTcNHFlTTakxgG0o4JQlCyUoQKHrqdkMHaBl/8AUIwtyDHts2W9akspWy9bUyVQu02e0tAW
g9twpNeY3Vtqxsm1srELMPKDFrn2OOua4xl63HJCXGnHJEtYJDymVUCqrNUu8OvQ6sbMuFge
uZf5RcwtAetQcsrUX9tRf3bYlTyIoJT2E3BSDRCalPXYeuiYeDTUlVvN3zF/FrPbLgZX+nYK
3DZEONcWO4okuFt3iO4dz+Y01tREGbbIyy2a6326sWu1xXJs6WaMxWh71qAKjQbfSAST0AFT
qbgUmy1+RLHn8O42uTlMBmO8uPHt9vchqadjvIhgNpQVsKcSp3cBdTX5U1SogVuSRatXlO8e
VJVzasaXcttktmfcoiQ23EjutBBaDhW4ltKVhCdu5VW+hudgMLQ15CsnkuT2rGlWV/8AWvuW
lxAUgJlMrU8sJCwngltwqQoKI6ddVmpKuFAMDxjyPIx69zMcx5y52u4wXbVKlFIoGypDiywC
psrcT2x9IV+FdGEzT0Q9tnZE1hOQQo8DvWSTJhG73JaVcozrC1GO3WoSnuKVvVJP4aZyFp2S
t2snkSD43hQ7jjciLj6JyrqzdHG1AlclkMhK/d7G1JSCklAJ+OqrkG+OSPuasju9ixKz/tTq
G47EhmyLabcWucl+QVuLbSa8ilw8PYNTtsnUc+RE5mm7Wp7JLG/Y5UaBEgQ2XUuVebgjglaV
LKt1eqU9NSsmoBPJG57drrd8zutyu9vXarhKfSuXb18yphQbSnge5760SFb/AB1QSJrGs8ym
y4YbTa477ER28Ny13VhTjZccDBbMIrT7VBxG5TWuhLZvssI0STnN/wAmgSkXnxveLolE2XJP
7ZKnw0tKk8Q6w72GjzKAkD3/AMt9SOemYIvZS0JTw5FSQKdPgKfL10s2may55lyR693mW7a5
[base64-encoded message body omitted — not recoverable as text]
9IRkrk5m4XFbs8uRv008XENMtoT9Y6FdePprXXGzWkSdx8/26Xd7XLjs3j7GC/IkPkz2WXwt
5tTae0mM0hmiORP6oVyGx20Rgy4gpPlPO7DltzhSrNaxbmI7S2pDii13JLq1ci4tEdLbCKf8
iaqrvrdVCCOST8fZpjkLErji18ttzntT5jNy7tqcSh9AiJFEqQpKwtFR7uW1PmBrnZfMGkX+
2f1CR5TVwlOWq6IUm4SLgz+1pjvI4PpSUNyHZLLhaKQj62qfHVCnYC2PJfj+d4uehXO7uIvs
q3SochTba13Ah95TyYrLpbLLjBUoBSlqBUOtNarVzIPKwU6zZ7g8Dx7Y7Q7Guy7ja7s1e1uA
MCIuUjil1lKlHl2g2n2093LrqeXl7NVSlPwROWeTWL3Z8viLYkpdyW8sXeCp9fJLcZnkkNE1
NaAgJ4+3SokzGPuZstJUBtXegHX5AADSxEKbKVHlWoNCkghQI6gjQgY5tlpudzdW1bozs11t
tby2mEFa0tNJ5LWQPypG6jqs4FMapKlKANKAA8uop8RqgUOEQ5bkdySiM59q2pKXpCULLSCr
oFuU4pJ9AdM8AwlwZf26ZJjuGOtRSh7ioNqUnqkLI4kp9RWuhwQl6HKYQ268w420+nmy4tCk
pWgn60Eiih8xrKZCnYMpD6WH4zqHDQ9koUlZ5U40QRX3V2p11OyET9pIL6muw53gSCwEq51H
UFNOWw67akwEtsSDy4NqWQnkriCriK/UfgNa+oMMRXy13EpUUhQSVAe2pHqdLYhBBUeIPvSK
lJ+Hx1k0kcwltdSihQCAo/lGmTIqiQAtKwBU0USAFH8flokmBSBxHJVOQ2r/AHaUwAEKBFRx
INfh/u0tEKLZFQfnxA/36EaWRHCqlADpSvTr8dDAL2gpJOx2Pyp8dBASUhJCk1B2p6EaRDoC
QKD41+OkAgCNk7A1+R21NhAYUCCOo6qp/ZqENIIHuNEqHp6U+OgUElJoaEgg1A6b/jqbBgSF
clA03G5+GoIDbBUhSthx+of7tTEVxKa02rsCfh/DpoIOh3qBX41+Py1EGFAf8w6VpoIXtQit
QNyD1A0GkBR95CenUenX56ibATUe0AitCo7Hf46iDTVAqmgqaJr0r/t66Q0KFEpHGoUVVqDX
RBBgU9tQSeu2pkHtyBK9qAUA+GokAooUhXr7gR/PUQpW9VUI26nfbUQFDeidgacRTofl+OgR
VVdOQApSqulR8dUCgUIr7BUjdI9dUA2wwSsAbhSQK70H/nrMCtAVsqlBRR3Feh9KaQkLmoFI
V9ANK0p7vmNIhhBqQCClXqB0PXWWTF1Tx5CiQRQbep+GhDIXJVBQUpvQAGo9dMFIpdAsVGx3
AFKCugYkIca/SFUFKAfTTqdDJA51VXl1FQkbe0eukAHkquwAr9PodRQIb5c6Hfl0+FT6EaoM
5DSKJKqbpPH8K7dNRtCxSnLcn036U+Og1ASQ4ohSQOtSPTSZDLhKRsKdFU2OgUgwkinE8U1B
4g9fx0FAoGi+SdiSa7VJ+VK6zB0S+QcR7kCqUmoJr0qa0B1DjgSohC0p3G1E032G2o5cgBqn
jyoOietKjWpOj0L2+igpXr6azBS5IYqfFyYW4QrkUKTQcRx6AU13OMs9AXp3JT4Lsq7t+1R7
QlSkWRBadcuqwHCF9tQJaRvXkaV4jT7VLyNX+RkXuCqhRB/NvtX+Gso0yw4LLvcXKIciyWxF
3u4VSDDda+4bLh3Cw3VO6Pq5VonrrUwcmvBo8z/vNc/JQuLkC2RcmsCG3Hklcdhgd5BCFOLU
v9ZXFX+LbQm4IGOseYsVz+YGMdiXbL7sgzHn18XkssOqPJaHULbbZS4rY1O/TUngtopY8h3m
3z8gJtNrRNvBVHlO/aoP2vGqVIiJB4tg1PKtanfT2r4ydHEROS8WOd5Nvnj63NN4Ba7tZLVG
U1bZ8tv38QCFOssqcQXF7VKko3Om7q8oy1D2dQ55ftmIypcHBrfY23rf9vNyBoMsSvsgKqKm
lOUSSP8Alr69dChmIK7fMzv8jxjarc7hkSDijLqUwLovvEqWCS4phSyKKd93Je/U00ypyaas
yfc8oZaiXiMn/t1Hb7ALWLtLD7jrrfAJH24VVSQmqVBRTXofnowGZEzc3vmF5AchufjaPZpc
1txtc1999x9xbxFVJklTnBfptRW/XVVokSHj128NZs5erX44kQbmzb1PRYSZXYDqZBKS/Icm
nuOcqUQEjbeuqcE8GD3NTy7lLXJZDUlT7inmx0Q6VnkkH5H11ooL3hnmGTidrYttttzH25dW
/f1qVVVwQpKkIaWVBXZbQhVPbv8A26HBp5LAr+o1H+nUWsWMMuRmlx4LcadIYhIZXs2HY7dF
OBCTQVXvTS6ryZYzPnx398l3D9pZ5TLM3YkoDhJbQk1W8o09x3NEbem/XRiQgGKZB4Cx2/2y
6wxkgm295DqnpJiqZqNlKcQ3VRG52Tq6/JST4/qWsFrvUheN2Dhb5FyeuVzckSKuSFOI7ZW0
lOzRUPcQqu+qBRQjlvjJzILZNgY/Nj26PKMuWfvO/NkPFXNDTfL9JpoODkoJFVaoLRpV48p2
Z6ZmDF2wq9MxLvHiryNBdSl2M22jg26pNOLQUkp4gq39dAQNYP8AUli6JkuRKs8uM3SMi3vx
3WHHlR4qeIaeLyS2nkalXb39K6mkLkaZV5vwDLYqEX623mKuFLdl2tNtlNtKWHUBNHnT7m/X
6OmkyR9382YtesRi4ld7FLXYGISWy4w+USEzmSS0ptajwcaA2Pcqa70rrUJE2UK45faH8BhY
2La+5dI0hTy7k5LdWyhBJPbjxQQ2ioICjT4n12GoFuSt2mNDlXGLGnTEwIbiwiTNUguhlB+p
ztI9y6fAasmS+ycG8PpU0ljyMtfcWlLql2mQlLaKe5ZIV6U2Hx1JM0kTmPXDxt44kvX2y5dK
yiS7HWw3Z48JcNh5ah7ROcdKgpn/AJEiuhTOS0W+2efMObkz1y2p9qk3K3w2ng5HEqPDfikg
x4sRRQoMFK6joK+mqARFZP8A1AWu42pEWHLuTUhy+RJDshSUxudvjJRzTxYI/wAxSDRrfb6j
qiBRWckt+DZzlF5yheaQ7A1cJKlx7dMjPOPhtCUthxwtnikr41CeoHXfTD4JGjXXyZgNgg2M
M5RJuy7ZYV2/9mgNr+xluPN9pDjpVxS2tPDcLqQP7RVeibKb5H8hYzebLKctmWXZsToEaEMM
aY4RG1NISlXddWS32jQlXaHI+mlVgWpLBC8geNrc147fORvT3sS7zcsJhPp5NymiCrkvcBkg
IAFa6OsKC5JGL5pwK1MCPLvcrLHUt3KSZcuMtHISwAzblc+RpTZRA4ADV0MtmY5HBsXkDLV3
S35ZHtzUmGl+SjIFiOmGsK4JgR1NJ7biGxUp4JACetSdOYGIZNKydPjTx1MsFhyiDMyafc2J
qJFlWp5AiJQkKQ48tASkkopxG9Px1VrnIWfgtuAeasbaxz7q63C3Wu/SJ0iXfW3GZCEvhdOB
Ybi+16iBx4uHr+NdX9behkp2N3jFH/HOfxJuRwbbcMqfDtutq23ErbTHfW8B2mwUp7qSAhKS
ePqdXXOA4LnkPlPx49gabZDmW5NrehRYzFnLT70tpyqQ7/0nFMdtTfuV3OR6etdHR7NWbbkf
5l5L8eSrVHjovFqmqZukB62tOtuykNRW1juLfYDbIb4pryQ10FPw0dWGZlAn+Q/Gq80sE64X
+DOdYTLUlpATJgxHXGuLSzM7DbjaeeyWyFca8vQauuAnJQ/KzQ8i5LbYNgm2R2fboLq58xEz
g0rm4kIQqXIQwH1gbhKUe0E11pKOCjI7wG1/6Gs1ytk2/Y/Z8ulSY0tqdKcZnMqtbR4vsJWl
DgClGv6Yoo9a/C6t5gZL/Dz/AMauT77OiT7M47Iui3rjInvpYL8UR0JaLfOO+qSgBKk8EFPw
1l0ZGewcq8WqxcZY6xDZv2Nt3KBbLElujUtc1Z+xd7LlVLbbbcUVFW6f4a11zC0C18sjrdCs
H/YW1xLjeLSXI97au7tv+4T94YBHbfaLSR3FPklR4deProS3AwpXMFkyK4Y1/qu3KvVzxWTg
Ll1YXaoMBptU5mIlBKe8plCC20lXHuhytemqMYQT+pK5DfcFXl2MNPQMcdlpemL5rlx3Gwz2
yGQt2NHRGQVOf5YdSrfrtvq6YFMyrz+bJ/qS3i2S4MtSohMwW9qOgtrLhUhD7kT9B5YSaJUh
KSB1GtUULwYeyS8O3ZtrEbjCs18h45lhuUSW9PmOJjly0s077SH1JWPar3qb9R8dFqwzSev3
NGsV89lxyHEJ8N+1O3WY9aMf+9ZssYJVQOSLhzo9IS4pPJpriAEdSK0AqjpGLiZb3fG99U6u
zovUi8dwMpaV+5cCoKX9msHttxga026V/wCXW7Vhk0uqNZveQ4YIM1+5XC3yfHE1u1t4fY20
oW7HfYWkzFCMlPeZUmjpdUpXu5etaa5qsg1JXvMdziTMWlxp9zt9xemXtErCGoimlri2Qt8Q
D2kp7DRqlPBZrUHbautJRngmn9yG8e49Hw7yFdLNdp1sXkn7Q7/peelxL0RF1kJSqKUOrTwQ
4ncArFAf4arZiSU5SIPzXIhyLzae5IjzMiZtTDOVTInBSHbmhSu5yW2EoccSniFqTtX11pJh
GZO/gJJTk16Wn6m7BdFp3p0Y41/t0W2hemZewhBQhPSiQQqo2NK/x1ojZ8oyrHx4Lxi1QbYi
Mu4TJKp7LMx4griONc3H2a0WZH5QvZv8msKuWTc7HnlnKMXELx7b41nR+zRbXHuLtranPqZC
HytJjHj9LgpVT1O4eh1VUVY8yTWYZDjcz+o6yQJFuRLtdrkQ4MdBmrXFQp1LK23m0f5TTbVQ
C0n2rP1aWmqQCU2bCxnLcbuP9RV7uM2B9w4yZybbKdmqWGVwG3gXEKUOKg6E/po6NdU6LU0S
cJ/JEeKsxsSB5Dyh23v/ALuLc7PjzDcF/dJZdUlHYRIUgqLvLcv05elNadZukHWKjLE8isNp
8I5TJYgPs3iXNYtsuezMLS3hJQ4pBI4FQZbGy2x/mepGmJuxn8RxPvGNNeD8UtsKEuEL1d1o
uBVJCmnVxHG+67ISE8lNqrVCOQ7fxOsVxLNYlGqZvY4klpMCHY4xcRe7a1jbN1ZhMQH2/cXP
tHIgD7rPBPJYdqTtTWaVmfoSIa626I/lVptuQY8h26x2rg7Zr3dIsW3R7lcWmx9tCajMqSFs
9w80988lUHx3apQCI+3ORoOWRY7+Lok5vcrEy9fkWViE8/AmIfPNRhvqMRKnmS2l31Gxpvqt
VdZZLbgnbQ9hq89v1ssePCQwiXBdu19hR4EqO1VrjKYlfcHtMNhaVLcUx61povWEp2SQxxjx
D4xyCDMf/bO/Hucu6uwrrDceW1FYZfWlhCHQUssmiQUocSrkDrd1Fl9i4K/CxTxKnxDdF2+f
DmuNqtCrxd3g4JrLkiWnvNJqgJYR2+SUdutTUqOqtPyhiswMvLeP2qNiV2el2aBaTb7ozEwR
+GlDbk21qQlTrqlJWsykdvivuK6KPWu2mlV+wZbRgikV9taJJqSdqn4aYA4qTTbpT6q9D8Ad
EgKSkkitUin0/LQUClJUoJrsfX4imlIQBKTRVNztv00MoDCSmpSK79P7NSFAKU8lf82xHp8t
LIHVPbI2PQ/7fDQIB7iEn3Hoa/VqMikoonj8NvgT89UiJCKEV2oPp/HWSF8V14kV9SE7baiA
k0TRIolIPU/7zqIUK1AIFCdyfw1ABQUKkpJA6D4A7aoNBpJ9p6pHt6f3aGAFCiz7Dxr+O+lM
gwafUAnahPx1MkxaADvyqDQE9dtZEUQ3TmkEUNCn5fIamIZqgFBJoRUE7j/w1IGIT7gOKag7
k9DQagkNPuQAByr0VWo21EBNVJPGgUCQqop899TEUApSSniT6qp1G/oNAphgCqSD7VE+nRXz
+WooDCQk9SpRFD619K6hC7ZKeJ91Tseh29NEBAa0BSagmo2P8dBQGBtxWSkp3BpT2/w0gKHE
JKhUrrt8KU20GloUAoVJKR60Hrt/tXUaEUJBNAlQ6KO9PltoKQdvkUgH1rTf+ekgbg/4zWgS
BSh+IpqMh0HX8vQAep+Y9dQpwwDmF9KUACfhvoNwGE0BqaA7VHT8RokHgUNzQAqUPz9B8dTZ
TImhCTUdaEb/AA30Jg0Abj3Vqfif7NtaBNiwsV4EV6io6U1k6KwhAVQjqqnoaHbUEHQKVTZR
rTeu1D+GrBpLBz/VpWm1Kcqb16V+GkIIkLQZbBQCEpCU8nBSpH466o5G832+41efCVgs7Vwd
F3sLjqpMNuI+6lXccUaF9IDTYCVAkqP8Na9u/qaczoypyiU1FACKp61prnINlhwSZFg5Zbpc
25LtENpYXIlsoccXwQQotBDRC/1KcTvSnXXWloCS7Z5lnja+eWYGRLnvTLA6605dG1xi3wDC
QEobCqqd5KQOVEjbbWatLYcaJ7DPLmPveV71l92vjlusqgWodv7TqjIaSkts8ks8kJ7Sfd7v
U7a0l+JRgz121+O5l4v8uZkcpERC1u2oMwlh2YtwlXHgsqDaEGgqojl121zgnovmO5Nidqw6
3Pw80Yg5cmIWJEu5RpMx6G0RtEtzaKNMpG45jkTrbTQxjRJXXyN49umOSFZXcrXf1KtwYjwW
Lc+xc0yhsgLdWtSAhJ+fz+WqDJF5VkViuXiGxWmVl0OdkttfTJDKUuOVO6W2U+xKEhlKhUkc
dvXU1kFZcFivOR4tdpGGOP8AkViNcbSw6xebrC5rfX3mx3C26tvg0HCjjyUmorsNabefA4I+
4eRcTsVxgtSZFvv9ggKcXY7PaFuyX40gklM6W/NHF54JUQnl0Uagawi7QWaHmWJ3zy5Z8ht1
yaXabHZnGrtdZUhDSEreCuDY58O4sHdwp208AmoPMV7kR5N6nvx1h1h6S6tlSAeKgpxRCxX0
OgTYvF9+8fxMWtMDLV292YbkqTaWkoJXHWhKx9zdFVSFNcqcR8Kfw11TBs0G3Zj4/hYmXm51
hmqWiSq+SHHG4qZElRUFKTG7LkhxKz9CW6bUA0OpOxGv5l45ekSseLtoFgRjaFrYQw2nlc1k
BDSXKclOBJHsBrXrvq6A2UDAPBWdx8psk3IrKybOmUy5MbelRXaIP5VtJcJVufp31Nm5NauD
HjdN8FtzI2ALF4ULFGjpaQUR0N7IlBH0q7myg57SdEGZgzPJG7/Py/GI0yPixnR7mp+PGtq2
g4qG0rmFzSkpZQ2lCfaOXJStKA1B9EZu++Q1QHbNMlXtqC9a4UyQy4xIcaZUlQkI5f40mgP+
7VklA2s9g8au5HdZsW32WZNaRCZuURoRnEsOqTWQtCH3ER0Nio5FuqqimhrAzwMLzieJwIsy
TgmNY9kCnLg6i9C4yGe1GZSgEBtxax2QCa+0GmqAInKbBit7wO0W+1fs8nLItnWqBAkPgR2o
zjlX3o1eKFPJoO2pxXT3fHSlkWzJ7jCs7PhuDMTFtDFzfmqCpIfL14fQlSh/kgUZbHShPQVp
vpVYJsivFFhtd88iWG1XWOJNulSeMiOSQHEhKlUNDWlQK/LU3CFI9DK8T+OpFwjQb/jkGwPI
vDzVphxpHFy4QmWFONqc9xUsOLABFAR9PrrCRGb3PFJ681w9pXj6LYZz03uSLZElBapUZh1K
u8qMlSlMobAqpZ+rppxDBTJd5Hji0XDzFmL+RWB+5CQmPJtCnf1opQ4EtqdcZaWl9VFpKUkC
iQDt01W4CreRvjmN4la83zmwMYnb581i0qkQYjUhckKK0BK4baVpBZK1p5ED3pBG9KaWsIZc
MgcS8OWe+27B5ibMtwT7hcBl3ZcUUMNMqUGo6yFfpJSUpSPzH476GzX/AGL3k3j7FLrFstun
40VWmJZJIYyZuQppuD9sSppuifYVL+orc6/z1JBJ5swvEbhdb/Zoky2zU22fKYbkvpYeQksu
OAKUlZTxA4nrrVrIYejel+DvHki6Kt7uOTbFHg3hqDHlOy3VC5x3mlukt89kpBTtwPLbfWTC
RWMa8W4zaGMVn5RY5Spl5yJ23Kgylrjp+3KlCMtTShVSAEciD9Y+WmdstNQZ55et8SH5GvEG
12R6zRGnlhqGoOKKwFqH3DSSkcWXPyJT7QOh0qEiks3jfxxi1ywSXkd6tF0vEuPdGbc3b7e4
pjiy6EDuugJKuKS5VRA+Hz1l2lmuuievHh7AsbWRNtl9yRNwukqBCFqXV2IzH4pBUhKaOrUp
X5qdPx1dgWStYpi0N/xB5GlN259VzhSIbcV5xkOPJZbkULTZSkltzr3uB+XTWn/IGvxUGZWK
EbpfIFt59r76QzHS4oEpSXXAjl86ctas0kVZ7Qei2PDuDSLVfMXtTE+NNTeoFrl3q4NocUQ2
e645BWAkAOCoUn02rtrnORmf1Iq4+D/GEm82yFZL3JQ5clS4bTRcEhKpjLKnWCZAaQhG6Pci
lT6aVOwf7mV+RMFjYjEx2DMdUrI50NU+9Q1UU3HLjhEZKNqhRbSrkDvrVcqSlIn/ABvi+NDD
7jl11sqsjcYuUS0MWpK3GmkiUU9yQvsgrK0hXFHoD89ZbzB0xpclza8feM8bud6i3q0tyMfg
3FyK9kF2kvcy3wStES2MRf1Hn0Bau4pQABA39BPJzSwZkcUx6TheS5JDTc1mBcEx7UksAxEw
3HBw+9f/ACuFChsD9VP8WtJyw/7miSfEeGOi54ozDfj3my22DczlZcKvuXJqkpWypggNJbHc
9nFVapO51hPk2yC8h4BhEDG8hes1tk22dh10Ysy5T76nhcg+Kl5xKkpCHEndPDbiRraTkxZp
w/mCteM8Hi37I5ES+sSmYFst0i7vw0pLD8huMkKSykrHtS7WnIDp01WekuRWE5OPlDHsdgx8
bvtgiu2235PbzOTZ3XC/9qpLvApQ6oBakK6jlvqqZe4Hvgay2q7eRYkS6w2p0IRprhjvpDja
lIirKSUnY8TuPgdVsg9P6Ge7/nTyNAancH4bnWnga6Rqly8S4lbbDbXJ+WKh5BdrQm8wYDsY
iKQtNUsLkcj7nCClugqSNx01zTszo/g6seBlu5hkuPovFV46ban7jsAF1y4LaRQjl7OHdJHx
oNLtCRizgkmf6eLJNuP21lytUhEa5yLVepL0NTPYkR4zklSmvd+qkBkpV8+mh2cGbLXyVy7e
NMfgyMRnW+/qumL5NMVD+8XFXHdR2H0NP/pE1UhfP2EGupzD8m6pySk7xPiDmQZtJnXoY5je
N3ZFujttMOTVnvk9ptA5cidqb1/s1TGEFf458ssGPf02z4txujMrKP25EeQu3uSoTLigqIuO
mS8qSvklMZC2F8aLJClbHbV2ZpaMIuDERu4SG4C1PQm3XBDW7RC3GQohpagOilIoSB011cmF
k0+Z4uwePjOE3KJf3p8/JpiGX4AjraS439w2zIDS/wD2zG5FHJVeZ3TTXLu8yaiHHBOXX+n2
03DJJkTFMjZeis3hVsuERbL4VAWsLUygOr/+QpIb4rodjobZlSQDfhi0zZbS7XmcGZZI8WZK
vVzQw6hcJEAoDxMZRLjiFKdT21D6t9M2RKZ+BrkvhyTZsfnZPEvjFys0SNBmQJLbbjSpbNxe
WwkhtwVaUhTZqFVqNKs9C3Gyw23+m68vXGXCnZBGhsx5DMYSG2H3krUqKiW9y4jiwltpey3T
xUoazLNMOX4Px+5WOwvY5kbKbrc4s+TFiTEuhVxRDdWUOtoFUxklhuvuNKnW02tmONGP2qK9
crhCtzS+JmvNMNE7JSXlhAJHwHLfW/qK/JwarJ8D5Em4tW+2ZNCvM6Ncm7VOaZU+gQHnEKdS
oqX7faho8kt7g7ax2cG64KvfMShM3mzx2cxg5BDuMkR1SmFPpciKLiQtTrLlXEIPKqVp6kfL
WnMZDdi0I8Nzbj5GzG1WG6/tcHHXaOSJC3pUtbbiUnZEcd52u5UoDbaui9ngFXka2DxRb5Fs
y5UrMocJdgQkL+3LrkZ5ClJCXXikV7Syso4U5BdQRtvN27CpgjbP42y2a9j9tiXNuOvIra/e
YzZW6ltDccL5JdSnbmQ3t166znZQTXkDw43ZoU+4WC+MPRrfboVwuNkcdUqc20+hAU44EpS1
x7iqpSd+OlSDKHieLXvNMhgWCC+kSXApDKpDigyyw0C44qu/FKUpKuKRvqtaEWWaKf6d3LnD
x5rHrzFuD02PPnXK8NFbkEMR3W22OwhKS8VVXxWmhPL5aU4TnZQpK5kXgnMrIqUlxbMxyNLh
RQmNzUtablX7d4NlPNCOaeCguhroUvIfBzjeL2mEZ+zd5qTcMNic2kxV+118upQFe5PuaSCQ
rooHWq0bsl5FxAU3wzeYViYuMm7WpFykRGrgxjqpPG4LjPK4trQgjgpSga8eVf46z2YxBNXj
+mjyBbUw1lyFIXMksw3mW3FoVGdk0DfcW6hCFp5HiS2TvrKbZQ2yp5942uOFSWI8y4QLkp1a
21qhLWVMuNU5IeadS24ioUOJpQ/HWknEgx3458SZHnbM162vR4sK3BCX5cor4hx0kIQlDSXF
kmnomg1h2FLEk4z/AE65k6p5mVcLVb5SJjluix5UgoXKlNJSsNx6JPLuJXVP9tNac+BaGM/x
AqJ4+Rljt8gN3ATXILtlKyHw42QhcdJp7pKF7qb6cPdXSquWHgGY+E8mxi2zp8udbZZtPa/d
bfDkdyVFRIISy642pKdnFGm2/r00Vq2I+meEblKv+OWOwvpflXmyM3mZJecBjtdwqDim1JTz
LaQE/l5V6aEvx7fJWrlrwdv/AMnPOmbtLgPyrfGix4rc1V2decEZTLiikEAILwIUkhXJG3XQ
SM3vtjl2W8ybVJWy69Fc4uvRnA8wvYELacTspJSf4dOutxBz2aHF8KOSMejOpu7YyeZa136H
Yy0vtqtzZoVKk/Sh5XVKaUoN9Yqm88HV1eh8/wD0+TH3vsLVemZ98t8qNAySCtC2mYj01PJo
tOkfrpT0XQbemtKr2yhecCB4OhPKbuVtyNuVh7LM1253lbC2nY37apKJITHVu7VSx26Hf1pq
VLPHJERP8P3JrKI9jhXSG4xcLcLxZ7hMX9o2/EWCUhYXXtOkggoPTr01OsJP7F138HbFfDqr
7ZLfLdu7dvuOQrkM4zAWytwSnYYJeD7iNmEAjglW9Tv00dW58IHWUO1eGrcvBrhf4uQfcTrX
DM64spjK+xQtJouEJlQlUpPTgB9WtV9TdoBqEQ1r8SZFdbxjltiyoTz2SMrkwnGXwtLDbSQt
z7lKRyQtKD9Pr06jWXRw34cDGdk6rwnCUlN3jZIzIwhUWTMeyBcd1txtuI6GXk/aGq1r7ih2
6Gihvp6OY5NKuclPzbDJmJ3z9rektSo7zDUyDObqgPRHwS05wXRSFKA9yD01OjSngw1GC0Y3
4XmXqwwpIu8di/Xtl+TjlnU2tX3jMMfrl18e2OfRHLqfhrNat54NtQjPoMB2bNZhMJpKlOJb
ZS4Q2OazxSlSj7U/Mk6LLq2mESapkngaTbclxbGIUpxy7XpkvXCQ8thEZABTzSxRfNRbBNQr
6vyV3oqj6Owqu40h3F8Dxp/lSdiUOZJbs9rYS9cJb6o33CuST/kJQsoKVlPt5bo/OBqtWEvk
ksSMMU8MN3e25TepDslq0WNT7MRlpcRU111lRFHQpwMp4AVWQqivyE61b1/mqyWqzyd4n9Pc
mbiUa/JvITLk29u5hpcdYgBpZ2a+9JFHwD9BT8tZdXMC1Dhj67/0y3WM7FYt12El5ctMWaJk
dyG2mrRecfjOqKu+2hKFV4iusqjYbfwQ6vCDIQm8f6jZdwRcVc85Mth1LqWm3OytH2f+aVh0
gJ3ooHWl6rPHIR5JWF/Ti+3bpkm63t1sQ5AaaNsgO3AuMqZS+3IU22UuNpUhf0kbHbWXR8Gp
SWRlO8HQIuGsZGu+yXPu433MNEa1vvRiSopabceQqjJcNK8x7fXT0clbDgNn+n2SzY5c/IL1
CtEmNIiR1p76Hm2PuD7/ALxSfcy4hKgQDsa6zWlrMkkLf8B29m425s5OE266R3pEMvQnmbg6
uOUjtx4Cldx7mF8kUO6a6ejiRdd+RTH9PrSMjkWKXlDaJZUybcwzDdkSnEyEcw5IiJIVGbQN
nFKNEnU6NKQSxJXpvhDMo7Mh8uW5aGlyBHjpmNIkSWoyyhx+M2vj3Ee009dLo5CDrlXhm6WG
xzLim5xpk2y/bDI7S0laXIQm0MdQdVRt+tRyCd01009Ls0vOjTUMztTSjRXpXf0odc2jUYyD
lsQfQ9dZaCRXt7e6QRQ/yr8NBpoJJUUgcfaDUepppCRQqRUAJWTsRXp+GhkCqSQoGivpG1Kj
QQpSuSgCeh/mf7tORd+BNHK/SOXXt1NPjok1nyRD3NUiGle6uKflUjpudd0cYPRd6Relf01Y
6m1pkCEZMhV1McEIUyl5fufUmgUnmB9XU637FDLTMWK0qIVufjXrTXKINbJ7AIjcrLrbGXaD
fQ6+lBtdV8Vgndagj3FKB7lDoab7a6Vqnsxapsub2+8WTzSzbLHZ7bHi3/7WLAflw2HGkJQh
IeLCD9NN+VE7nRWqZJztkpbmcVyPzu/ZzjDE+3WaMYi5SWUojMutAqceeabHaWp1z2I59ANh
XSqLqzK0YxcfHeZ3DKr5Dt9le5wXXH3kFKW0R2FFSmytSylKOaBVKfhrCcGphGpYDYo1vwWF
dJ+DNTrfMZWG48eG5crpcHSk0krkEBuCxX6RuT1GtWSkEWF7GMcueOvQ7FjkfH1ItipCpV2s
ZUge2q1ia4sAKNdqgq9dZayUxsz7IMMx0+EbPfrRj0qPMemJ/cZ6wp6WqO3zCnVLKUobaWQC
mg4jbrphSBOXnx3jpi4DIs2Ey5rNzQ67Pt/cU3JeWUJU0JsxQSlCBupWw2qBpaRrmGTcDAPH
F/ubdqm2FMOZZW3pt3l2uLKh2t7tkJEVLro7kihNVLRStNuuhJbRNsLHcLsEPyPjzhtNkk2H
JIcn7KKLc4y60YiO4pYalqWtKllQ5KUNxtQaYUPAZPOmSoQxkV3aZSEIblvpQhOwSA6oAJHo
KdNKQSX/AMceGoGa4ym4t3R6HPYmpZuKXGj2hH4qXxiKoe9IIA9taD4fGaBfJcYP9PuB/tjV
xut8nwV3Duu26M+lll9tpGyEvMcVOOu/4ginw1lz5FY0N1f09YklxNqTfLgq+u2k3llzsNJi
9sK48FJP6tamlOvrpUlJj2G2xd+yq1WZmeYTk+WiM3JSoqUyVmnNIBT7vhvpTaNVrya83/Td
ZbnKmRbNkEsyrfcRb7mufFShslSe4txrgQpZodio0UfXWYZkoF1xDxzGulsh2i+3CXHduIgX
kyYXBLADnAqQ4iiFKWdggHlT3a0pfJI025eJPGbF38htzI4t8CwRISoLkdDsj7VLzXccdDPL
k84rj+Y00QXBFRf6aIT91W0vI1otSWI78WR9qlLijLBKQsrWhlFKUCQok/DRLREZfvAlgxZt
T2V5gi1QXZaodrdbhreCylHPk/RSQ3/brUthBxyvwjZbVgdtytu4uCA7DU/PkLaW+uTKcP8A
0zbLCQnstkAlS3Ve0UrvolgUORh9tbwhvJhd0vXJ99LH7Qyw6stoqRzfkf5SFUFUp+BHrrSe
TT0c/HVsvtzzS1W+xTf266SHgiLcQSCyriarFN6hNdvXU7lXZpZ/p9vdzll6zZZHvEtFxdt9
zkLQ+0qPIbCnXT3FlaneIBJ4+uhWY9lwVGVh1layayxbZnEe7ou0pEaZMih9mTFSXEoV3Avk
tIVWiN9z6U31pN8mZn6FrX4YuV08lZNZseuxtsKyFtCpT7z8qYpLjaVpSEoo86d6n8qdDfJU
jJyx/wAKQ3LnkkeZmjEV+xRXJS3ogfS4QWwovP8AJKVobBVxcSCVnRLLBBWjxxmEiLjv7Vd0
Nw8umSY1tSl19lH/AEtQuQ8kU4hYRVCaFWmY2aXyXLI/CM8W+2220ZUlc6TbXJk6zTZLtZTk
cnuFhlI7aWU04p7msqzM42ijY95L8t3C6W212zJJqpEp5qPFacdAaC1KCW0q2+gevy1uElJL
Zbbx4n8qXe8hbuVQr5c2Lh9vJDc15X2Mp1Jc96SlKWyQn6WxUfDWFZkiEs+AZ1mDlrmXXIW2
YtwuT1uizLlLedc77CqL7TayauK4kNgEFVN6DTLFtYGOSX7N/Hmd3iJbMrXcZxpHlXVtf3Di
m2lEIZeU+lzgts/UhJomtKnWk5SkykWjHL555zPHnpzOUswLQ1MQwu4y5bcBwyOFEtpdbQlX
E8wOPqo6G1wsilIyteA+bIUe4wmL8myNPS3ozkaRdBG++kJA73YqauE8t1VFdZ7vxJNEbjjW
cNeKcquLGSzIVtsLyIQssR5PaWZLvCQp1STySj3bKTXma60lkJwR8TzP5TlwmcehzW1MLQiB
GjsQ4gcKVANoQ2sNcgregINa76WqrJpbLxcMI83u4yuXdcilSLlCnwotossaWiUfuSqhLq0q
HadYBrvXbqdY7fAPZD5ThP8AUDKuECVeJDt0egpelQJbc1t5qOqIA44faUpS6KAj21V01p2k
ojJm+WQ8tkpgZLkK1yF5GhcqFMeWFOvoYIaUpQ6oHQJBpt0GmrCCz+KrNmgizr5acmRiNqC0
W6Rc33FpQ5JfI7cZLbYUSrcHnT2jfQ3k0kXrCrB5gtncxqNmrFkuapklqLaCDOeddSeT7y1I
aeUyha18g44RWtdVr8wKcozGRY8/Rid9lredXjFvuambyUvpMdc8OcVK7QP6vuI9wSRuNb7f
lrJyUdVJb7ph3l9zC41nfviZybUhiTIw1uT/ANbEYlGkZx4UAXuRRvme3UbfDn2jSNuUxn5N
sXkxrHI7+QZM1foFkfRAnQ48juqt0wpBQ1IFE9xynt7nu3BFdKc8GXjkiccu/k7OM/Yu8K6q
F/hxgty7uuJjtxocbdbjxASntJ5e4cTyr0Oq2FBqrTI/yvDy9rImZmT3Nm9KukdMq23WK4Fx
X4laI7HEIDaEmo4BIodNcmGiV8BJuB8gI+xcbZeTBuB7ryC4hKRFVvxSpBJ+G+hoeGZqATw4
kgihIPQa6BMo0674hnd6wNvOMhlLahWphi22CMuK8VuxWqcCFtoCGmxzql1w0Udgemuaa0hd
eeSyysl/qTatUF6RBlIiz3IaWJJhR/uJLza0GJ33AnulZUhNC5SvQ9dHapqynZXccvnmht2c
LJFlOyH7xKcmgRkOKN2cjuokoUCNnOwtyqRsOvXWnZTkzVKDpbI3miz3PH8fTj7zsuxoky7F
a5MNDwQmSQXnaqqhfE0oVK9iulDrLtU39AvLF18iW6Uu3ZO9FQ/kTEC8XKLFjpYKXGu52UO0
AKX0GvcI67D01pQ+DLUMunjvy15LuFtvM5GHuZO5Mk/cTbjFSphC1oYQ0ll9DaFodbS2hNED
+/WXCY4jBkEnCs2lQ5GSjHZKbO4XJS5bMdaYrbSlqUoo68W09P8AlA1p+xGVhFsmQ/MGPYTj
KZloH7QmciZYHnI6HZMeR3kqQyHPqaTIdooIP+ZrKdXJ0byjvFv3nPHbm9K/Z5UabkFyXduD
sBRU9MQ2sr7SD6JStSuA/H00u1WZagpuLZPlOIyRf48VDkK6NPw3BMY70OYzUd9lwKoHEpVx
rQ7HU32fyHY0DEc88r5ZcJ/7XabVNtqY8dh+3S4zLVmiNx1qcigBxSUIX3Vnt1USSfgNUpQj
W8l2wXNfJizlTlxwwz7o/LAua+6zA+4fZjBsQS08HRIAaRXg1uQT/iGiU39C4KJCd8wOxIWX
22yMR7Xj0afEt8RLYRwiSFKEstx3F951pju8eQHtpvWh1rstIHgqUTA80xiJaM1l2hZtEZ6P
LXVae8lnuVaW80klxlt3tkJWpNNHftsF+OS6XPy75KuGTQnLLjTNolXS4JukBqPCUmRcDRTL
JfWqneSGllJWkJr10p1SY1y4Oky0+R5Gd2mKjBrJEXaWnrizBjtNItbiPpdfflB0pc7a6bF3
2KptobURGWC3sTdcz8i2jyRPmS8Ot71+uMViU5DjMLkB1DVVNzo70Z1TnuAotSF0Vx3G2lxH
wKnKQ0xi6+WcqmZRkkSwR77GvTIYvsV5lKIrpZAUyhhsLaU46ylHKiCVeqqnTaynArWR5jOT
eV7b4+jXWHisOZBtMR+3Q8gfZJntQlKo8G2+aVqaQpfucCKD1PXQuredE8IgZ/ma9XyI5bHr
FAL9ybjQZ8qI04mbNiRVJU3FUsKV9XGnJKeXw0yk5QJpssN/vLnjmXb7rb/GycYu7vviz357
85hxpSSl+M42TwC1Nr4qSpQWmtdc6LOZFnDEvK2Y3y+WvHMdxy3JjKjP26PY43eYjll1aXlq
U93A4goU3zKwr411uzSRVWRzimY+R/8AuPf1WKNZ+bcXt3MKeraGWoCh2Jf3Tilc1tOK2WpW
5O+l2iE0ST+xEePrpfomK5hKXYbVdYDqloyG63N9bK3u4ruBhkoWgOKWtJcQlG5J2PTVaz7/
APsFv46wOJ+Q5NcsEt99lY5aFPoSzZrXf36m5uJiOJDf2kYqPNTSqJU4lNdZWX9DTw18lhzC
4eX5j9qFywRgPzp7Tqx3XpjcmU2kqSy4331oihW6ikBPT0pqraEzLbmCE8oRfKmW3bH8dm4n
9jKZZd/amY7ipi3Ubd3lMccd9jdB7SqidNLJVZdZZw8YPeT8enX3EoGLPXRxRS7dbctx+I4y
uOOTaxIYW3QKB2SFEL9NYbSyaSJa1P8AmO9qtN9tmKNLjWS7zLlFQpaWayFkIejqS+6lzi2W
+P8Air11u91HWPH7FlfQi51o8mowCeq44u29ZHJbmRxrop1CHoTrqgtx1pAc5OIWn2kKSfbr
S9s21vBlzghrjZ/IWd3265kmxqKX5cP72AgOIDipHFplLSF0W62vt0KknbWX7E8LhGoaNGnX
vytHy6yJjePmoFyZtj9tZjMvrU0/a0UC2TJS5xZLCyFBYWFAnfrrC/iPk4Wy4+VjnE15jAQq
Zb4zDQYMqQ24wiqltuJuK36vd0qNU8ilVOmx03suqBJmRZuMku+U3q7XCzqt8hT6jcosaOtL
MVxAAUhdAQjoFKKj7vq9dbtdYRlIuzWXeToXj9i5qx9KYaYZskXMlMq77dtUreMATxCSrZLy
k/IGus0tmNx/k1azGTnnPLnnUPWuFGg3h6QxMvM6GyVu3GRESEsqeaUSlIA3UlsCp1rtCyHZ
cDhXlfJPvWrbDxpiLaOxKak4m3HfLclueQ5KW5yrIq4pIUlSdkU21j+2M8+S2hk75Fy5/JVX
pNmjyWY0MWeLa3oK5EWLFG7bASoV5o+KzyP4Gml+xQl9yzkLGfJmT49Yo7DdqbmGzrkGwXeQ
y6pVrdlDjI40HaVyrUJc+k6u6bfjwalxk65X5PayDGGbPcMYbak26OIjNwakSmksLqCp/wC0
TRjuOK3JUncnrpp7Mz5MWywncqzywXLEL8i1Q7fMhwFKtUhiOnnLhk9lZmIQSSvqklVDvUdd
ELpjydMO2R2ny3eRIRa2sbjM4uzDeiP4mlEhSFMyVh55S3T/ANQlZcSFBXRNKan7sStlBF3n
yBMXkVymZNjcSU7Igt2+HbJbbjKIMZIrGLNffVCVbLP1V66neYX+1E3uNsOweY73ZrFBgtQo
sm52pl+NY769y+5hsS9n2whNG3eW/ErHtr661RpTOtwZspKCFOVUEEhA9orQmnzr1OsXs25f
JJtOS23TyRcbjnFtzAxGEy7b9kGYaSosrFvpw5193upvTp6aXaaKvgZ/cK0eRbna8suuTsRG
VybyJvdYcKghs3Cvc4H6qp5e2vX11r2e3tZP/wAYMrFYGVoyyVbcRvWLtx23Il++0EiQqodb
EJwuICae08z1r/DVX3P+1+yMsW5SXgukbz7dWbOhlq1RhekQGrV+8OOPKQqIysKQkwirsdwU
+vrrnVpbGeRzM/qGvKX0SrVZ4tvkvTf3G6KW/IltyHS0WVthDxIZbcbWQQ3pVlBa+gw/72vh
tNqTY4Yw4RlQl4rzd7XaUvurV91Xvcy4Aqv8Kaf7Htb8lLmWdYHm+Qm/PX652GLLuqXW3re8
zIkxEx0soCGGVIZXwfaQEjZwV67/AALWTjwh7OIEQPNT1vhz3I9jjNX64svsS7sh+SGlCUsr
dV+3lRj8yT9Q0/2p2l6RlMk2/wCoQd558Ynb+/cJbM68LW684JMhhPFCyhYKWwKAhIqAdS9p
vPGiHleVLJKvsu5v4qy+3cG+1cG5U+Y+8p0LDiHmJKj3YziKcf0tiNZd5rBVlVaJd7+oBE5E
+JdMajvR53aaV9tLlRJH28dAS2y5Ka/WeSKcjUipO+l3WI4MvwV6Z5Ltk6xQrfOxaJLm2hlc
SzXJ5+RyjMLWVpPaQUoddaP0rV16nSvatOepq17TPIrKfMl1v9lmQV2+LEl3pUdWR3FjmXJ/
2gCWCW1Hgzx4gq4Df5afX7oc+Nfcy0m8TH/UoDigQRQgbD41FdcIFsBCiK7gfL4/LQSYkAih
qV13IPQDp00FApBruihBqokdfn/LUNQzTqk77lJHXU0IggFW9AQPSvu+OiBgWOSln2kitDWm
xHr8/wAdUmWHT3cq/wDlok3BBONqbdjLQmgWApJrWu/x13RzN1kWe3QvBcK9rE2ZPustyO3H
XOeRBiltwkOJipPbcVxT6+u+t3UPLObTmYMuKx0CgVKPprDR0Ukti0VybfosP9zTZQ+sNG5E
rAaCv/0fvJV0AHU7a1VPgG/Jf8pwfFcZ8gvWLIcgvLwQxHMObEbQ7Mcff/IFKP6SAT7aevrq
Sducj28Inm/DsKJ5HZw6y5bLtpdipl3JTquEpbjgK0sNoYKELUlA5qUo0T866knBmUZDf3p1
uu91tiJ8hbX3DrEhSnVj7ntrKQp8BXvqB+aus1bCDVfG2C3LJsfbbjZ1c2HHGnC9breXDDgo
41Q3NdK0hCl0NG0Dprd1ZbM1jcErffDd3t2JNNXDLb3PhGMl8QIUZb1uqs+xB5ubJqRXkOm+
syzeJ0O8j8S5YzCi2Odm97mmQuNEbj/bOm2JccI4cne4R226fD0ApqTYDTPcY8h481GETMsk
vV6+5QxAipiPNsreUOJUh9S1IPtJodXZl1I3JcW82W9zG3/9TS7vkFzcfRChRJC3ExC2Eh0r
fKg0eINHFEAJod9SsyS8FhsOK5xbc/h267Zs4b7kNtLrF0jR0TVhtsqJZQ9J5JbSAOXJCRy+
GpPAzwefchiKi5BcmFurfcYlvtrkumqnFJcUCtQ+KjvpkC64TiHl+/2i3z8cWtNltUparWp2
W1HZalmvNbKFn3KqojlQ7607QDjgsdrsv9T0m2TIkaTNajLdfbeblSmmnnXASXuypz9ZW9fo
IHw1jsEQRlng+SrhiVzzc3+S0uIpnH22iApbzJdQhbCnKgNsoUoV4ipPr661LRQtl9ydnzPh
clEiDYrLc7VbAyv72JbmGFcqAlDTSFKeSlJPHkB+GsqGxsyuZVcP6mlSoF2nNS2AZQVBhwwz
wQ85s22tpkrV+biO9X56u0MuuQSbV/UXccmt8u9IajP2xp+db1TERhDb7SP1VhlhK21v0VQc
hyHXbSmvBaZA3XzT5qtNwQ/eJAhzJ0ZtxMV+IwgrjnlwK2gmvuqfq3pqhCkSeBZh5tyaXcZN
vuEQwlvNKkz7whhEFmQKJYaYCklKXDsEpQn4aX9BSJuBM/qQmvT48j7Np5iUvtPXdqKFqllH
LtW4LSQVcU8kgCmsz8BCI21D+ouVb1Tm3EpTFjSLe1apna7sxDav+oU3EWD31oUd3FevTWuy
8GYUSViVh3miR46ZivNqGMQ+U9FkUttEstFal/drYSA6W+XIhSz86U1dsi1BBYOjKMVv1gyt
NhlymHJIFrRwUhE10pKQ2wuiiqtfQb6MDk1u6Xv+oxN+tUiNZYcUzZbq24FuQ04z9y42UqRc
VhRo4hmvIqUNh8dSaQFZzCZ5Ksd4sV5udgsL8JDy/wBlgWtht63rmqHEqUlg9xyQnjVFVED0
31KPAPeB3Zs78s3rPpzkfFLau+GOhFxiyYgjIZS0eaXpLy1oWhe4oVL3FAAdP4ipDtNw89Tc
/wAhlLsLFxuTkUQLzBnNIbgCOqvZZSVFCVhW5QErJX66m0C1JA2nzJluJsxrLKsVvXMsj8hd
vM6MoPwVyFFTrbSQpHbHuIHqE7dNZ2Kcllbz/wAx3fD3cgg4rEVGiRnYkfIkxyZjbLppIVHQ
pVVp/wASggpGnEi5RX7Fm/jezi23djxw+hUN5kIuxuEtSVPMkKUpJKQ0pzavGtP4amp5Kcjq
f/UNe3r+ZtgsMK3JfuCZz7DTanH5q0JLbQkEdVcepbFa7611UBBFXny1kC/9Pw4OMR7A1Z7g
btb4bLL5U9IJJ5EOe5dSpRPHroUZByMZOT4peMvuF1yrDpCnpw5ItNtdkRVh9Sit2QsLSp1w
ucvgEj4aH8MdFxsvkHF0YtMx/HPHc24Jjyhd50KZIdfTHRFSn/qVuJSlxtaFBPtp0/GmiF5H
MSQc/wA0ovkNxrJ8SiX+THlSZsNbqpDbMdUmhWl1prZxA4ge8jbWvxFrwRVh8n2i0YJesaXi
7UtV7VyuE4vvtMlQWVR6NIFEIYJ9qUq91N9MKdhCHlrX41tFig5ZExu+FyFJRHjXJ2bGTFVc
2U97ippI58PbyVxHTatdTrLiTNnDWCYH9Q0OPJen2fF2IMydcWLpdpC5LzyZD7IKV8QpIDYI
OwH0/PQ6ryOdHC8/1A3BzILFebbbZMaLaZTjrgmy5ExT5dSUOsc3AhpCS2o0ASSDvqVVA4Wy
g+Sc9dzLIxcVx0wYMZhuFbIDZKm2IzQJSkK2qSSVFVPl6aVhGbIkcH8mW6xWGTj98srV8tDs
tq5R2C+5Fdbmx6BtfcbqS3RPuTTQ0jSZZmPOWPybZcWb5ZJSrteJS5l4nWicbaZSVCjDDikp
W6Wmm6JCedPUgknTCnDCZ0VW2O3CX4/v0a3YwmRBjS0zJWRErUqCyCChkKUUorRNCQCqijUb
ijjsU4Rb5nmO6G0uZTDxxEW8XcxrXd8hcWtyG+u3hLqGWI5PFtwoSnubnYbayqE0p+pAZt5X
t+Q2qbb7XYxa03mei7ZLKL65PdmISE0jhVEtNn3KoamvyGnqvuFtHO1eSsVsGXzbnYseVDxm
5W42q42l6QpbymXgA+4h/cIcXQU9P56eqheS8/JAZ7m0DIl2qPboIt9jscNNvtMdbnedDKVF
alvOmgU4pSqmg4jppSjBhqckh4fyyyYvlSrrdnizBVAmMJUhJWpTr7JQ2kAehJ69Pjqak0sV
aKOkU4JcATWgCK+tNhqkkmXmV5MnSPH9pxJb8lT1vnuyn1OvqLLrFEhmP261KGyioB9o9NYa
3AuZRoj39QOKxrhPv9qtNydvV9k2+Rdo0yQ0YbQt7iHOMTgOfv7dPd+Py1QUrQUXzlglmDxs
EC68ZNxnXaS/JcYLhlT4jjADIaNEpbW6Keu1eul1c5DrCS8EbjnmnHouG2/Dshj3PtIt1wt9
0uUN1H3IVMkIfS4yHfUBvivmRvp6conbgpnljObVlt9gy7YxIZhQLdGtzX3ykrkOGNyq44tJ
IUohQ39TqiFBNyaF4k8tQbNhzFgVDujk21yH7gk2uLHmd5pfuPPvBSmO305gba59ZY6Glu8w
k3G13VMG5PWex2GTBuTTRLjAmTVOfrrIPZCVFxPuWAflrcKYnkyoSIa3+TLIccwSPPXcXLrh
s9LkhhK0uRZUT7lLylq5q5B5ATwbFKD4jQ6bFWOll80ZE15KeurVxX+yzbu9MES5uLUwy2+H
GUqcKOa2+zHdp+n0p0NNausYNJKYYvzbdsCRYMVxrDJiZsK0ffOvlt1chDZluIWB31pa5qVx
NRx9u2qleWH/AEwQWBZhi8TGrzimVMzEWm6vxZf3tuCFvochK7iG+257VpWNia7fhotWMlJd
T5wxC9XVu9363TY0uz3dy+2ONCW242+tbKG0x5K3B+nuyhXcTUUqKao4TwVcZG6PMuKvssX+
bBmJy63wZ9qgw2loNvdFxWtZeW4od1Kmu8oFFPdtphT8GqrwM8m8qYrOst9mwY0xOS5Xboto
ukZ0tiHEbiISkutOAdx3upbTRBpx3rpq03nSK0PQ2tnl60W/KsTvLDFxmsWG3rt81iZJDih3
muw4qH6NpSk1SD1oBtTWfj5CVL8E9Zc+8WQGrRj0tNyfxqxxZj1tnS2CTMlzXUuAyITam0uR
mwnZKlcVqG+mz/VjYhVeTrPD8kryCTeLtd4At5hR5kRDVrltbe1lppFW0MIPQD1NfTdalJcS
CtEryDA/JOH26xWSPfW5yZuI3J+72tuEltTM5x9J/SeLhHZKVge/f2k+uq23nZJki35ixNcV
jIZDEwZVCtEmwxrW1x+ycRLWtX3Snz708A6qqONSQKaW5cf7ZkbJJfUpFjvOGYxJxG/21Uuf
kNslCRfLc7wbjUZVVoR3AKhSgB1rrDXaTMpE35DzzEpmJPY1jips1i5Xt3JJkuehLS2nnkqT
9qlCSrnx51U5Xf01uttt+IJuYGHhTyPFwfNW7tNkPN2d1h1m4JYQlwuexXZTQ+iXeJ2I1ysp
gU9kjgvmi9WCHmCplzdTcL7HW9BdS00pKrm4tI7yvZxT+mKdOPy11aTum9EtHGB5WlxvEF2w
77x0TJU5JYYU02WvsHAVSkc+NarcUT8fgRqrHZtjZysbJCd5nXKxLBrTJlyJYs8ou32MGmmi
tuPJbchhl4AboZQU1BG/1V1mEk/I91KZpDnnbx5Fu7UpuW9Obl3f9wcVEtwt6orQacQESVBV
Zi0qdFVeu+hLEgmiFu3ljBJEZrGTc3nIM62T7dOv1ut4gNsOTH23m3EW9Kqqp2uLvE+7rrX8
V9xga3Dy3gN5jHHpsq4Wq1Qv2z7K8NNB16X+z19jrCVJLZfrVNSeJ66zEYKcyO7f5fwS7JuE
nJXx+1vTZsg41LtaJzi2pe/CPOQpIZUvYr5evQ01NNtJcGE1EjM+SsClYehq8yReJUeE1Gtt
pctjbUuI6wtKmv8A7SSopcaaSkgCnurv66mvy+Do3GSfn+YcAlpmNSMru70O6XSNPbbbYdir
tcVrdbDDoUVdQAoo9D7dKx8YBEFc8/wf9+iMR8qVDxxhiT9kxb7UpiHEecUj2z2HFLXcGnkJ
PcqakgE9dE/iS8k075Q8V3We64/d3GFxIMOFblSYDzlocdY5qVKTa21BIKC5+klxVE9d9Dwk
pKeURh8sWOXaUtO51PtFzhTbg7Pkw7cFm9CQQWHAwohtvg2gNht0Gg26a0/t/wBjLZFXHy5i
MnHZc1t2WcgmY23iyrC40n7dlKVEmaHgeHFQ6NhPKv8APTRqcvT/AFG1v3KZ4ayuzYt5BgXq
8KX9jEQ+He02XVhbrKkNkNp3I5Ea5WyNS8+M/OzybhJVmly7j4iOsWi9ux1PuNF98PLQ8qOW
n1IPEBHFQ4/hrfsa7Y/iKX4subP9QOFsybkpua+HZEgq+4ZiKZS8lEBTTbhBU4uv3JTus8tk
k9NUIG8IOZ5u8eSJLVwZvEpuJEiyWbhjX2KuF1kSWgnvOK/y6hda9zrqVVMF5jZQ8m8n4lcM
EWIscpzO+woVoyJXbq23FgKPJ1BPsUXRx4gbiny12raqs3wpgbRMMsDfknALPd8Zu0bI3rlK
sdlk2hZVBeS4t5bfJl5Sl1TTlRHE1p16a4+uq6w/I2vM/JN+MfJFkvYgm7XlKL+1aAxdJMha
Ikt6QmZ3gGZiihvglGyk+oNB66ruuY8jxKMU8wT0zvIl7lC6tXht16rM+NXs9untZRyKvaz9
HUiu41v2vFV8aOdbLgpblVGiQFcd/htrjJSICFAVCtiehHp8tBAUn2kkgepJ2UdRC1cwfcrk
n8oO/wDD+OggJTvxXsnqTuPnoFBKUQSpI2XsVfhpMthggAmpSEbDod9QgVuFFJJ6AAaiDNa8
hVNBueg1GmAqNE0IqfqNd6fx1QAalK2G9egUfgPQ/DQbVoEnkoA0Ip1Udj/LUYs5FoU4SUAn
broZIOvtoOv+3roNMSe7z4DbnXcelPhpMhhAJCh6DisHcHVJuofMJPFGyhuD/v0A34BWh5E7
+pp19d9QANSRQ1AqTvQfPpqFPIEBYHt+o719PlVQ1C2ERyKgK8Qanp6+upjVydD6KHqPXY/z
1kXVM5956vryrTlT26YMwQrpdJjFwJ4pACOO21fza6knJvsJ69v+BUsSrRbk2KHJdXb7vNmK
afMgrPIMRUD9RYKlJAKvnTbW/bZN64DkyNS6HpxqfoJFSdYRNkrjk9MC/Qpircm69p1Km7a4
VhLrlaISoN+40VQhKeutpwZafBq19uflDJPJNtuycHbZyOzoZnOQEV5uISf0VSVuKohO1EpF
DXWVZIafJzg3jyFjXlF3Ibzhy5+V3zuLtsFKlDtD6XC0hou1SEe0qWaDUr4gFrBWJGcRYGUZ
BPn4dbv3KZyYbgyObjcJ7cOOAK5c3FE7nbfpTUoFaLX44zPLImLRl2Dx7+5ybN3xEvjBdaZ5
up/UW+2n2yHEg9a7dBTTayGyJCz5Zn0WwOXG1+OpgvD8Z5D2ROOyVRlJdqp14RFHgD1Pw/ht
qtdGYDezTLrZijq7X48uMCPMjNIul2lSJTrCorYqtTSVn9DmK++vrX4aZUk1gTePJueWyyso
xbDLnYBcJLbyJ09cmeVqSn2NxW30nilXw9fh66y7JMHVnPI/LXmuPDsYXYJVpnx0uRnZr0Hk
ma/I/IhjgEJ2TslNT66U0Zhk5ac8yi8+RLTdLxil4ZlY9bFiHaIcZJW+88ntvPvKd7KGmvRA
B66FZZgY5MCyWTJeyG5vyIxivOy3nHI61VU0pThJbNOpR0OiMm+INTw7yxgGPeO7NabrElXG
7Wu6G6R4sdQZS2ttRU0tbq/aQeW6RXW2oMNQJyPzlheWW2IrLbBLeuVtckOw2IUz7aKsyFV4
uOji9SlAeOhxwwTnJxw7OMVt/jG62hVlu81T7rc66y4pbTCivJdBjtpcUVKbbHBI9w5K31px
5OjnksuSecsZYuM+cMauTF+vjEVmfDuDoZZEJlfJK0JH6nJxBI2oPXWceTKySSv6oMPjuIRB
ttweR9yh51LwjMoZaAKVIa7W6uP5ee5+Orqigq1m8v8Aj7F8hl3THWL3cHZTMtdLk+haBKkq
C0oajpJS2ioqpf1U230t4yUQjN/JWTYvk15ZvdrhSYlymN9y+Nvudxr7voft1ElZQR/j6dBq
gwmWHx55HxK3YnJxXJ48029ycxdYkm3hLjypEdaFBgtue0JV2x7tK/ce2S5r/qDxO9rjXLI7
bLi3Cy3Jd2skSCpLqHzwLaGn1rT+nxqOagKfDQ0oI5N+fsXd/b8knQpP+rrKzNjwrezxVCdT
OUD3XHiOaEo/wjfVGDTqNrz5zxN6LNvkOJKXmFzsox+RblhP2TLRUSZBe+pXWqUDf0Px1ppK
AdHn5K7E8pWS3nAnmmrlcZWLr5zRMkDsn9Pt9qI1UpbA6pURWgA31NcDEFvT5Q8SuyXbN97c
F4/drq/fb7OothSHFo2go7X6i0KXs6sUHGo9ajO2CZX8qz3FXcqxmdbcnlTLLaJCnEQ7bARb
mbbH/wAMNo1C3VA8aqrt66g5F2nyNgk5zO7NdZU+2WbLnGHmL1IH3kloxqEB9A9yi4R7QNkj
bVkZwWW5+bfHeVRnoN1fmWODb7hDuNslpbD7ktMAJSGlNoI7S3afGgH4auuBbhyZze7nhmYX
PMcsvdwkWq4yFdywWdlAdW8rhwT3nKFKUjinlQjrUVpq/wAGUmXmF5cwZi2WTIDLlIyCx2R3
H28aS37X3FoSkSu+DwQ0OPrv8tPVx8C8yZ7f8vtMnxVjeNNXCbJuVtkLdkwiltuCwg86dspS
FOrqvYlRpVWpIxaywcvCmQ2Sw+SLXeL48I1sid9TkhSVLCCWVhCuIClbqIAoNVlKwbTNE8Ze
cjLyeuaz2jGjomqstwmNkutSJKgUJecbHJtsNgpHAbaHQEWpny7i0jPFPKvlqS01aExXLnwl
NoLipPNTbE5XJ0EN+41TxJ29NPT9SIe1+WLI15Cy1m35YY1ou1r7dvvMxtIZRc20BCHeSW0K
WlCE+1ak+8/HbU6klhheNM2xq24tZQnKYVqEGfPfzKNJb5P3dDilFpTdULU5zSQfj+XWOrNE
LnXm9q2xrTaMQchyrE/aHWrnbHI6Oyh59aghLntCu4yg14A8QetddFSNmfgVYVYy34mw+Dc8
jtJftl9j3Z6A44C+Ia1+5hSQmvcBUVr5bU9dYiZNOJXwWiDn1ly3IZdsmO226R42VRU4vCdQ
0ygwQFF11CkpqoVquqq8lUH5hq6mSE/qtTdI8LHY656HLb3H1fbOpQzNdlkEfcOtIQ0ntob/
AE0qA/HW/WsMy9lO8ESoMZjJQ1cLbactdisox25Xjt/bNUcrKHJxK0JKkUHTf+ej2ZZtTBoO
N3iI4/fVQcjxdrNFXKCq6XZTTceC9AbaCX0RS8haVq58ua0Aclb7bay6skzuc9wWNfLBDsb1
oTYr1e7p++qcYYIELlRsKU4P0m3TUp6VFKbaHSAbzBSMizywJ8JScdt8S2Idcv0qNHabbPfE
RCStqcBy/wA1SaNhwinHamtqqkEsJMfQbBGkeHcKjyZNpmuw743OXZTLaakPRZiggMLHXuqK
/fy3Sn120Rlwbayizf1P2p62YVDYgtRmbW9day1JYbiO1Shf2rDaUNt9xpoFXvqdFdMxbLRk
3gRy0t56FXGJClo+0fTGRcnUMth4AFJaU6lbRepXiHBx6+tNaspRuumbbItNtVKzCZjpxu5X
yNGtSIsuaxGbiMuuuvdxt4oKo/fKKcSjY+0HfXN1U6MjvGovipOSZDIgs2OUh64RWZrf/SIb
ZbEdP3DiVSeSFNKkFY4sjr66WuCXJWZVusse0OoxKBiT9lEm7Iydy8LTzbKH1pjpbXyLwbSx
x4dr5U0qscC9ZOt5h+NI3hRSoFotr7BsaXG7p3IiXv3NaACQSTM7wd9Pp9OmmtWYawefsHiW
uTmdijXZ1lq3PTWUzfuKhgN8xyDigUngeh3H46XoaxOT0V5OVg2PzcTvcez2iJNjXwRZDSm4
ikfYuIP6rrURSk0SKONlfuSaax1wx20Y559edc8pXl56PDZadUlyKuCUFt6OupafcUgq5Ouj
deutVgEuWWLwfY2JuNXSVbrLaMgyZFwismHeSgoatakEvPtoccaHLntyFTrF8salkRkODYxY
vLDGP2+MttiZFixJUeY40p+PMUUBtpTR5FuM5zNEGixsrYaKr8smf9sFadyizQ/6aGrXFt7c
eZPuyoU1bMhwKcWyyHjJcZT9QUlIR2z7AaH6tKWW0bey/wCSYJ4/cxGxSnrNa7JbHJlsZekl
xBlLafebDzjM9l1YfQpsqC+baSBVXprKROZGvla1ePsWkY5Pj4nChvovCGAmQGW2H4BqFOrZ
uKSoOd8ISPYpG1QepNdKqnyEZKres+8a5rkz11zSNeGIcaK1CtLEB5EiQritSnXZLz3FNaqo
lKBSnXfVHCYYLHjWfYNGiosGL4besltFimJyGCpTtJDMhCSHHXuwlSe0mo4D1Ppq6rll8+CS
gf1B3eXjtxmMYxcpP2Dst1yRDeW1AbExxTrYnKSnkFN9zfiR/CupqqeyhwUG4+apsvxYjEDE
5XtTKYEm+FRK1W1K+6GhUlXImiVHpx1rqm5JZJyN5Px61YzhIt+J3NT2Oyy9bZ8h79CS+4R9
6hsJCu4pZVRAH0kj8NYSWVJpdm8DpWeWLBcwhXqJhN4syJqpj9xduTriX3TMSORiJcHZT21e
4+p6dNW+TLfXA4iefIk7Oo0mPbLzLimCqDFZZkJdnPOreDyiplKOwpBSnhw4n461aqS2EGU+
VcqiZRmk26QrUbSyUtx0wFAB1JYTxKnUAJSlZ9UgUTpWgnJe/GmfJTjEdlWNOXmfgSZV6tUt
l8MNMpeqHVy0E1WhJUCAj3Gn8dYdc/U38nFnzfb/ANqamuWdTubxLY5ZI1w7vGEITyiouqYH
uLyQtQA+netfTW/60uQSSwijXXLLVMw3HsciW/7eZZ1POSroXlOKeS6o0Slo+1sAUr+H46K1
QPgu2S+RvHrzWGpsbN6t7+M9tgzO+0l4Qefce7ZST+spf0qPtHqDqVGk0LWZOOZeWcdyy/Y0
zcI02XiViddMhc19LlxmIfPJzuOI4ISBxSlKQeld9VlCicjBywjLcYdk3zFP9OTrhjmQT0Tr
Ta7e5/1zK4ii5HaSTUOJ4Joveo6iuhpNTMGVJR87yyVlOW3LIJjCYsm5O9xUZBJS2lCUtpSF
GhPtQKmnXXZQgLDjGUWmB4pzLH33im63Z+2G3RgkqK0R3lOOqJGwCU/HXN/ykmm0VnEr2mxZ
Rab2Gg6bZKZl9kmiVhlwLKSfTlTrpdZRtbN8t/mjHYt6ZRjNivMl6XdX7/cmZS0uPcJMV5Ly
orYKglDaXO4kH2lI3OuWHyHXg4f/AJReKKyaLLU1c37Oxb34rMtZh/dNvPuIWXW46UCN9LXH
3devpvp1Wh5Gl3864Jfp8xu8QLo3bXDb5cJUJ5lqQZNuCxwdUOKQ2vmDVNPw1OuNg65G83z5
iMy8OXi52WU7Ls9xeu2LhDyG092S2ltTc0Dl7UlHIFFfQaVSfoUwRLHmizftjdwXanXc7Yta
8eYkB0C3iG6okvqaH6neAWUgfSdLS5eBrXGCSvfnCDa7hLnwselQc1uMm3PZPHuC6R2zaSlT
bTDYAdT3uIKuf0jpXWUpMzg4M+cMUssmNCxuySBj8iZKuN8amvpL6n7gwph5uMtA4htpKzwK
t1H4a10UTORSnA0/7sYJ2f8ASptc1Hjr9uFp595v9z4iR92mQFf5PLu/k+H8tMQpnI/JI4d5
Pxqd5JmZFNtNyUiLbE26yW62lL60xGGFtuOSK8eSgyrlVOwOs3hJIow/kjMd8xYhZLfa62mT
KuuHLmpw11x9KGXY89ZUDcAN+bdQf0vq+WtdU3E4ZJ2S+QXbzHaJeNSJCLM+1mFxsv8ApiRM
Uv8A+zzBaUCXm0Ecy9vxIrxGqkJy+DNliIJGD5JxdMTBo2IWq7XHJ8WccXGhvdtxqSiSC5NT
+mVOVrXtEJ9o66z1UOXs3P5YGjHmXDYIax+0WaUMKej3Bi6okPpM9w3biZCmXE+xHa4ANg/V
601rET/uCtlplH8iZhCyi5Wtm1wlQLPZITVptDD6+chbLZqFvqHtLi1K6I210xWsT8hDbkq1
ytk23zXYVwjriTmF9uQw8ChxC+tCk9KV1z7ExsBTjRW5JBA+Pz9NLAdOWy4sQIs2TGcRCmF0
Q5KkENulhQQ7wJ2VwUaGmiU9ChtU8SoEED84pseuiSyIUEJFQsJQASDUb/hpUEL5VHJJ9hAB
Pxr1pqYCu3RYCtlLHQ7HbRMiJSKkhulUmpHwrp7GIZ1chyI7y2n21NOtmjiHElCkmldwdwaG
us9zUM7Wq0XC6zmYNtYdlzX1FLEZoclrNCohCfU0HTTaw9GxuWwkgL9pO1VClCNiN/5aASkJ
VEOdoilTxIHX+XXSIaW1LWkIBJUafj6aGzLyx6nHr/8AuqbWq3yhdFcUt29TKw+eY5JKWyAo
8h0230dlE8DWrmBUWxXmS/KiwYUl6VDSpUuO20tTraW/8zmgDknj+avTU7I11nQlNkujtufu
rEV1y2RVIRMmpQpTTanNk81j2pr8zoVlMA1yN0IIcCRXkACQR/i+OmSSJVeJ5Mm4RLcbRLTc
JzaXYEVTLgdfbV9LjYI9yfw0YiTTo9Da62W9WeYuDdYMi3zEALXHlNqadShQ2PFQrQ+h0pqD
nkXAsN4nx5L8GDImMQmu9NcYaW4lps/nXxB4p1l2zBpJtSAWG8mzG7mA+LOl3tKufBRYDiui
C5TiCfTSrZB1aXwNGmn3CG0ILiiQkITUk1NBpeCWR1dLFd7LMVAu8J23zAlLi2JSC0str3Sq
igDxPodZTFpoY8lfWnYgU+PID5aikNKUV2PEVBSo+ukAwqlVUogA0pv06V0FIdApfPdW2wAo
aamxBQ8QCaEilfQ00ChRUa+6tT6j+z/z0E2cwEUVUkUNB6/iNJhYFGlSa0J3SAP92o2SNmsV
0v12jW61x1SpsxfCM2mgKlEfM0AA6k6MGq1kKVZbrDkSWpUVwOQVqZlEIK20qbJQo80ck0Cv
zVppthwxjyNxEeSlZWhbaUU5qUhQCQr6SvbYK9NEmYJCdimRQnrdHl29xhy7oS7a1roG5KHC
AhSF1pvyHWhGnanguv5deR+x4/y9d5uFlVbnEXy1srlTLcspS8W0JCyWkE/q+whQCa1G41lz
j5FLkrbaQUDjshW4oabHfeutWq04ZgClDj1oQCKgdR8NZEUihUGyKchsE+hPX8dDNp+Q+NAR
zqnpT0roCICbbIH1dBtXppNJgC0gewe4709R8dTJsCCPiTy3JAGsmEgLG6iOSaD0/Mn1/HSL
Bx258U0+qvp+OooZCq7n7e0A2A2lwnkSaqUfl8taRG9eJZVwleLMnsxx/wDcrUpwS5lxcmtw
YjCkNp4tvLXVa/pCiEDptrt7GnVbBrPwZEVAKKqBRWo0KEkII9Cmu5/jrkaFQfthLbXOb7sE
LH3bKFcFLbBqtKVmvEkbcqba6evZVrLg2DyFkd4zn/TMWBgMuGtttIthUp5+RJgx6EttNlKB
26fU4ofxpqVkrS8i6NT8HfI8lyljyTbM2zLDZ8GHFLTNis7bnbo6xuyhCqK5mpJKEIGs1t9j
DT+rIbLMxxeX5LmXnJMIdQlbJL9lkyXEuuSlAFD0gkDihKdg2kfPWVDLJK+I83bhRZn7PhM6
4XhuV93IesjjjbXZqSzFkmi/0G+VAiu/UiuulmkhhstuPeVMvfjvXKNhd/uF2ackJUxGdfNn
DrlQlJYpXi3WnBNd9+p1l9fIQIPnq5TLLKmR8Zukl2HEFuu7gdJs8YJP6zpCRTu0qBzOx2O2
n8Z2UOCuZ5ldmv2FMNWfDL/b7M2ltu1oUVMWdpYOzvaZH/UOEk0KieSt66MeScrZX83nZRLw
PG7A3i1xstmsKSqZKfYdSJEt+g7oHED3FR419xKqaVesjCfBc7bkE26+RcRcfxy8Way4xCSL
NaWobrz8lbYCVrCPYltPSqiegFdzrWHLKHOTKPJ02Vc8/v02XAdtjr8xa1QpJSXWxQUS5xqO
RG9ATTQmHVNQyzeL/LbOCWlxlhh6bMly21uMvrH2TUVBHNbLQ9ypSqEculKaUuzhhEGiWn+p
LCIlqfZah3OFwlSJMVCTGkuPl1RWS68+VdtRWo9K0+Oq1F5KssiWP6i7Wi/WC5SIsxTNlgy0
S4xfClSJkmnEe5SUqSin+YvffYanAKSuWi1eHWrtFyCZmkg3JMtE92Izb3Oyh1TodU2lxe/F
JNCsn56VW3ApLwaTdv6iPHlqyOdJtP7hejcZEYzHFqCYrLUXblES4eRC+vGgBO+sqk7ZGd5v
nWBX28xzCvd+W29c/wBxmSJS/wBCCmvI/ZwwarcP0pKlUQOmtKr8jMI0Zzy74/cyi5XNr9/a
XebO3FL6IqkONssqWRJZA/VokLUoudK0prHX5A4Wv+oPAGpJaVJntNQoUeHDuUhvvrldokuq
dZSpPFxVditR+Ol1wU5gGReefH+QRJTEa93fGXG5bclqbCjgvyUNtBHCgJSE8vRz5empU+gT
mCIkecfHdxxRrFZy7rHjuxH27jem1BUxC3HC4hlZQE93vDdziQmu3TT1l7MvwZhJzaxI8V/6
TZcuLk5yaqS1HWpDUBhruc0laW/c+4oCpCvalR26aVX5LMDHxHJt8TyXjcqa60xGjzUOuvvK
CG0AAnkSrb+fTRDeBmNnoR3zd43gZAWXb5JyGNIuqpvecZKmLchLakIQyVAc0JX7klArTf8A
HPRiZ/ecvx+fl+NtuZ/OukS1TV3By8S4objRUj3hpptKe+66unDkr2oTrUOC8FztnlLxnC8m
5DeW8hSiJeo0dTkl+E+jtraAR2Wn00dAKE8z7aE0+Gj+tsZhEdbvN+MuZzlLbeQzLXYbpADF
uukprucJ6U9v7hDaE9ziEBIbCtyRv11dXBmThhGY+L02PDXbpkP2crEp0+S8xIiuKemOSVL4
PqKedFKSvmVGprtqtRs2mWS5+asETbkXBjJVvQ2bfJiuYumMSuXLfp21dxSdi3UpVU8Ro6OT
DwjOfH/iCJbsix+83HL8cfbYlRpD1tRKDjhPIK7Q24qVXb4abdtQbq0tGqz/AC3gUDJkxrrk
zF8Su7qeiduOlTdsZbZU1wW4kUP6p2WKn10L1tkU6359huFNYba7PkUe5fb3aW/kE6LHPERp
ZUV8itKjWqkCqfdRPw1dZmTGoKVkeLSPI3ka/wA+Jl1rlRGigouc537RgpWVdqNHBFXAygUU
oClfmdacpGlBqWDQ7f498eRYsjL7JbppvPfl3FhTctD8ZKUqdjJKk8gstilQPb09dYVXZ6Ib
OeScOXaDLxXJrZjEF2VPfvsCZAEiRLDzh7JRH9pV+n0AV8vTS6NcE35KLaHMcV4Gu1mnZNbo
8+4TRc4NvKeUtJbWKMuIQK914oHEdEg9aV0uZJuUvgZ434dNivNsv+VXuwO2OG81JukVM1L6
1NpIUW+2hJ7iq/lFa6zLeIJYybA55G8frkQ4eRZHaru0m9Ll29qO1RmNFDLgiCRQcSWllI5k
bGmp+thJH5T5M8d3HK4FnnP2uSxc4M623G6RUl77VMlCe0TLKWkipSagD2/x09GlJbMF8vZB
j1xyCLbMZ4qsOPQ2bXBkpRxMgtVLsg/4ua1H3Hr19dbrWETeS/8Ah/LsdgYNChDJW8YuNrvT
d2vPJKkqnwUDiYyVIqXVenb/ALNZdXIxottr8s4OPsMmYvibRY7e7dVXrES3xkT1z1qXHWlh
HseICh9WyfiNX9bJJrZjdwv1lPhqPaGprQubl4dlO2dqMBIDRKilbso+gB9iR1FB6a1VbYLS
+DS8lvqYWJeN7sm+2Odf8Zk81WplxIQoSShDSEobACQw3/mrNOJBVUnXNVcOEM5G/mvKrLkk
CwYim5wHLo7de/MmRZb02DFaeSWwtcp8A7FfIoRsANaqmlJi1crwQnjtiyY3ec0xq3ZXCjX2
VGai47liiW4wcSe5ICXveG6g8OXrQ0Opp4bRqcQimebcjtF+zx2da5InMoixor1wQjgiRIZb
4vvJFEkpWr8x6/hrarGy5fjglPELjTOLeSXFOJbH+nXUJUVACq1geus32gtoziIlkTWQ4Apo
OICjXokqANaelNbalC0eg8k8kYSryWrGf2eyOYfHuNtXHvEdhA7QZKHZK1rQKPJWatH8oA9d
cX611+QVnZ/QnMftPjnFmpAv03HrjKk3u4XRoNuNSEtxDEe+3aXtsOZFEdKkU31KjfBS0iGY
znDV4a1lz8CxHL02OTW3GM0llMgT0ojo+2r9aG6kepT8tT9cvWCdmPsVdxd/zdachhSbNAio
x+O/fVBxuMj7yWy4hzsJBCUvA8QsA1SnrudMONEsNkbiFxxfGYmJ41LasNwfuF3ucbJbg423
J7cVtdEFD6+PBtYIAWr4beusxttDV4SRcG8at0HwhMuuP2+2xpC7TLXGeeaaUAlxxwLkfdlK
lclxz7WydjQHVVflkzLVcnk+2y/2+ZFmMNNyDEWh1DD6O42stkKSl1CtlJqNx6672rKaJHpe
dmtqe8lWPInJNkXZF47MXD7HBDynBDQHI8wD3pSXKoZQfSoG+uKrhYHEkYxl+GnCkZg5arB/
qv8AY3Vrtv27aWBJRPSiM39qD9aG9x+YjrtqVJZNsLA7/j2Qx7reLXbMatOSPzISJ0K7BCYv
2DbX/UvxkLolKnHd6Dp66X6/g6NQp8khEsHim53ux3W2zLFHx+xXa7OXdmQ4htTrLqgYiEtr
FXUV9yPQDpq6vUZMVhOSxpx2Bb/DSLpZrVbIj7tsbU88620Shl12suR94oHm460apaqClVAN
xTR1XaINFA8v5VieR2vPEqYtZmWu4W5OPz4qUCXLS7QSnS8CVPjiOJp7Uga3WjkzkqP9PFtt
txzeYi4R4cpqPaZjyE3EBURLjYTxW8D0SmvuUOg0exaOkQpRpobxc3KSYzWJq8h/s0TutK7Y
sxfMpf3PD/2u4I/D/m0KnlHNPM8ENgECO55OzCUJeNw4jluegOGE/wBmL91JZpyhd2iqc0kO
qRsOidjqaiMFw87JDx/FwRrxG9BuqseamQ25jNxmPlqUpx5BU2kLSe3IT0SptbCiCKEalT89
C8rBBP5jbLv4rwVE79nTGtV0aZyWN20JmtNJkNltbTQqpSXWklUhQ+rWukdkjSxEl0xOBgOJ
ZFMub13sbZuWRLnWhUR9pTka3OMPcUqIp2kq5cSkbb/hrLr2ylwYrhR9SneRcMXmT1it9jGP
v5fHZmyL0bK60xD+zL6RC9yilKl8Sdvq6121rSmBge+PcOOFW+fDvhx2Jk7smLJS7eXWZLKr
Q2qkntFJUA4FdEiitc4dtocJbL1AyDxNKbyOZKNmmpk3Keq8PzXmUKUyUhLBRzQ646lbIHHs
/wC/XW/qcrzBmcGXTrh41V45fypqPD/1BJsoxlNgSgc0yg6QqeEn3cxHooO+p26nW6Um0cLJ
KXzsn0Q8QYtPih3ILzZ5sCxOPs3eMy+hyhm0cjKWyBuEKSnvqV0Vua641q3VpI29ySNkmYwz
ncVeXysSnXEQZotbtqCG2EuKcR2USnVBUcLU3z7ZKdvdXqNTpKwsApJO23bxk7nt7TG/07Ce
chRESJfcjq4vpK+6tgutJiOkJ4BxKaV2Neuq1MLGQXJ5s8gC0/65vqrWuM7blTHFRVwUdqMp
s0NWmyTxSSTsDSvTamu1lCj4Mo9Bi7eJh4fitJas7qRbWRIStxpE1NwJTzozwVILiV78ufE/
hrh66Ns6O2SBzLPcUkM5U9CiWVD+O3a3qxEsR2qvNLV/1bm20hKkcuQ+mnzprqvTXX/rId4z
8kjkebYwvyNfMmkyLJcYa8ZdfxdPBtxTkgdrizMQRUyOaDwSd+G2ilOyS+ck7RJW/FeV2u++
WbdlFxNmxtdvhlF2JKYzU5wpU2HmkKHBL+6ahJG29dY9iwklySal5Lj40l+M7NiE+03Z+xSp
kefNTfHZD7XB5vkpTK2Ctp1chBQQB26EH5637KN3wthKSSIm8XPBJXhB+MzJscWS3DUI8FtK
HJLkgrqlKkLSmU0+n/8ACBak+tOO2qlF/Z8SXsZkfiC/49YvI9put9b5WuM4vvrKe4ElbakJ
cKKHkELIVtrHsqFbRY2hWfWFlEexSswiS81dtlxiM5wkFTEZ6bJS7GaVK4haOLaVCoHsJprr
1x2a/FvCNdlwOj5IxGVPfhWzKGbHdIci1u3bIXErQm5s29BTMQh1KebpcNKBX16OqXy2hVlP
wc4/lDx+40m5Rboi3Y5AN4FzxFbHB24qnuKMVxLCf03UhKxss+zV0/KOcZM9k0ZtaMuxeDjO
AokyW567FdTJu9s+zH3LTIUVEIkkhLyFA/5atz/9OtNJu8a4Ndsr4RqLHkHDnrjGjXDNw8+/
dZ13jX1mo+zt0hstItpeeSox1LrQpQPbxr1poaxMYwvuWEVe557bEeVbLNmX22Jstvtz8aFc
Lah+6IaS8pVGpRk1ddUdt/T0pU6Lr8F9TFL7+gnxn5Bxm24/Biyb/wDsDtmvMi7XOKhtxKbp
EWlfFhgNdfcadpe1NVknZpRk33xnwP3PKOEv49IlpuSo8L9jlWZOEhs1VNfdUtEvgP0OHFSa
r6pprVUu0eHLfwEyZ9hMnDcMybDMmcuovDBQpd5tyGil6C9w4b8va6Kq5Cm/t/DWPZ+U2Wpw
hUK2P1HflzKLJcLBYbJHv5ye5wJE2XMvnFYQpEtwqbjpW6e4rh1Un6Rrqo62bW4gx7LJszBR
BQp07FIrQdBTbfXn0SyJDS0mvEqr6mvEk/A7jU2gSAapI9tOXLcg8TTrolCKXzbTVyqQEpCz
uOvxHwOgQj2wgKWqiSRSpAFPh+P4agYCkg0JFfQetPl8aaJLYY6rT9S0JG4+fqfx1AmKTSgK
dyjZQG9Cfh+OqUaRe/C+U2mwZj9xcnlR4k2JIgGe0CtUQyEcUv8ABPuUEkCvHf11cpm1pou+
D+W7FjFkxHH3p6xBt8y4jKUtNdxuU0vmmOUFaeTrbilclIG/xG2vT7HRu7e3EC7efA6ufkTx
pJtd8mNTX0Xe646zj/7MuGQllUcEcy8CUL5V9pHQaPXavZJxCYRLjyQGXXnx3c7fgkCDkLzs
SwNmJcFmC6lxCHFBwyA2pQCvcngUVr6jprPZP1Wr5Yv+c/6wTU/N8Bd81u5+m9qNtjRUy2o5
jPNPLkMMiOiIkqFFcwOXLYCtDqvVW9dKysMqWaTMQmy/up0yVwDX3Tzj4bHRAdWVlAIFPbWl
db/5d074ycqqFA3UgU4g14+4J9aH1OvKagFHDQg126fLqdDICSSlRA2oBxO53+OgUFySOSju
B+T4U9dagJFc6e0j20Hr/v0MRXGmwUTQ1/vprIhBStviOv4ddtIMKhpz2/D50+FdA9iGQ8x+
3Fon9VTnJI+AA+OujGrybp4mkWd7wxmtlnXWFbZMl1l6MJzwZK0toBV20mqlH20HEHfXq9ym
lYOdX+RkpHKlOo239tQPkdxrywdHlne3d79ziqjqbakh1BYcep2m1chRTgUFJKUnc1BGuvqX
5IlVcm5ec8ltstnGJ9pzFmTfokVNvnPW11fdK3KF58uN8QhvkmtD1+GtKVfwjHVPY+lZ3Al+
VcTtGOZBFVYMZihEm8zXB2nCoAy3A89zq86kcAU+pNCNUNt4LtCKz5Ix6wZZ5fniLllsYt01
H3km58+bEdDSQntcqpS48qlQlKv+GuaoxlQWnwlc7ZbrQ5XKYAs9tuLim7dKdNtAZKuS56+J
CpS3An2NL9qR131p1caLBd2vIWO3+3PSZV5tzGKIVLQqV9+/EuXa394ithsKUtR9g/w/PbQ6
NcGUxrdbh4tmYRBiuSIDWJswW/tmG5bjb6poWC22YKPauqxVa3ASTX030v12TLZF+Qr/AJFj
+GT1wctt0+U82hcy5LmpU/zqAmPareyktMBPoupV6nffR1c6HYrCPIseNBwS2XjJ0XCbdnXr
je5MxffUxRBTFjqKiUtHuEUr0KeWt29blqMh2lSWK535dwyHELNBlhy7t3VU+dChTVTgi2tB
QU7KkqI4pNU/pdCTsDTWKqJnwSR5u8yS4snydkciK8H2FzFpadbPJJIACuKhsQDUayBa/EET
AHcRugzZiKi3S50aJGmJUf3F18qSRHbQApSGE9VrFPX+HS3rcIlY2nHbVgMdm4zEwLHNuCJS
o8xUQxUR40ZkUZbBkbJCUU5lI3VU65v1kmRFtl+LE3TH7QxbrEiDeo9wn3dwoS4ttpFS0Q+5
QoSv570G1NT9ZdjHoHhrObnekT2MeeVjEmYHGneSGiuCXapUEqWFhBa3FRWmrtmGaTN3yPC/
G8V2XacjtVltNheehR7E6xwYluuLp30uLB5jj0HpxO+itG9AZx5Xx+a3KgWWPimPwmH7q0xY
W4LiE3GdGqaJcQg0Q2Uirjil7bAU1utQ7ZyatIshR5Av0pUaI5+5Y4zDixXH0oS642taSwUg
80NqK0prTfRmC5Ia3eN/H0rI0vSLBalXeDbYwutjjFLiGZTyjy4x+aGqISN1qV09N9ZjBpOG
Ivni/C4Tlwm4ridvya6KmMszLW8+2hmKwWuTikBSuLSiT0P92rr50XZkZdMKxe6YBaIMGDaX
MoQxcJFms33FYi3C8UvuhaeJkllIo2VqCTSvQa1GcGXaEY4bJZo3hGZc1woKbyu4iMLlIfLk
xwNuALaiMIHFITQ8lKNCKn4aUsyw7YK142sVuyDO7HZ7igrgTZbTclCTxK2yalNRuOVKaW4W
BPSkzwz46lzk2y445Hx9v92DFrcYkHvXCMhkvLqVK5UURQpG49DrlEk4M7ybD2FZViURHjxq
xOyLkpLkFiWHXZsZlYWVdhNVIbS2OTji/wD061GBq8ovczxjY7z5qyaXklqfnMGLGfs7FaxV
pLYYWt1LSg77Vp4oFKChPw1N4JMb49h+BW3PMzs0fE4k2fGtX3cOK3JU+KOI4GKhC0/oOOrF
a/UkEamsErFaxnwnar7bsLnftDlLlcZ6ssLDp4RmWVLS1FB5exCVoCKj3nc11doGcmgZP47w
q4wbbDn41W1Q7LJLF+afLLEIRzzSkpTQKW59XcV1odZSCZPNeA4HeL7ktkiT7XORZbnIZTIm
CO6lH2zqvcUuFPFII/Nrvb3YwyrPJ6CT4J8cy7gIT2PSrO3Euv2jLzkpajcWAwp7mip2T7aD
j7tjrhkZKpivijE7L/o5/LLLKdu1+u8mKqFIcLSEJQVqjdxk/U2EN8iOqid9ttOchXGDMvKd
lUnyJdolmxuRamWXFKatiEOPLKCsj7gIAJS26d0JA4gUA10raFsUuUaH4x8TY3PwmNer7j1y
utylXYW5cRlbkYR46qJ+4WiiVcElXJR/D01i123s12ZKzvCuDWVCG045ecsFzlTkMybY6AIT
UdwssoUapaUpRG6lnr+GqZ2zE8FMs2Ird/p6yS4xLRIcuZurK1SeHKsSMoV7TlK9toci4obE
19NbVvyM2f4rwUbBcLuGQZVZrTJjSYsG5SW21zft10Q2rdS0qKeNSnZJ1Xsjdavk9HRPD2D3
CxJsMe0TbHFcvy0zHphKpclmAy4QphxXuDLvD06b65MJkgsg8FeMXrmzb7T96xcLnbLg5aYz
i3uyuZG4cFKcdSlw057opQ60m0gfxsxXypidjxK+wcfhOOO3KJBYVkLq1c2/v3k9xbbVOiUp
UkU10rL2ZeX8Gj+HMEsMjErVel42nJZV9vf7TcS/zWiBA4kLfQhH0Lr7i4eny1izy54NJRot
Vj8W4E2/Ds7di/c7Bf03V64ZY44VqgohOqTHbaeSO20AEbqV9R/lrP8AkXZtZMauOI2ljxDE
yxiJKduUu7Lhm5OOIEIMpKw2lDfLm4o8PqA68tdFmzSB2wvkvF3xDAIONeNLzIscuJj85585
LIcQ4uQ6hKkJb76kdEur3QlNPYaDXJN8PJqcnTzriWHs4xjszGLUzHn3O4OwmG4cSRDU+0Ee
1P27/wCopQc4jkBQ1109eGws5wVvxzgTMaNl9zyDH3bveMTYYU1iznJBcdfUf85LXJSkoRRQ
SNjptaQSSXlle80Y1ZMb8hS7baI64UNUeNKVBWsrVHckth1bIKvdRPLYHT61gytslPFzDKsM
8kOutpUtmwntBSQeJU6PduOtR11m38kVtGdwGESZ8VgHiXXW2iodQHFBNRXrSu2utpSwaTNw
uXibw4nMBhEC63ZrIo9yhxnw92+2+1IAW8ljin2lpo1KlevQHXFu0SUp6QjC/A9guVtfuN8e
uEFCLrcYiUJSEqMKCwt1DqQtNVKKkUr0I1f220gmAz4s8Tm0f6x+6vSMR/aP3MRiWVTuSZf2
1KgcPf6fD46O9phME4cfB3xrxjikLzTFsoZeulilWY3WFGmtpd4KkR3C2iVxHFQRSoP+Kmns
4NdnlEfg3iPBbhYcXGQybqzf8vcnRoCYwbDLBhrKebyHRy2oPb1Py0d3Mh67YRMT/E+N2jw7
LyKbc7rc0IhLeaajvhuAqW44ppCPtxVf6LgCnCr2q1utrOwW/jjZg1pbtv7rGTdy9+1h1P33
2vHv9mvv7fP286dK7a1eYwbWcM9AyMB8XQ/LK7NDgS2rXFx+TNuSJKQ43yERtTT8WtStxCVl
Sq7BzprilhZMNb4REq8R+K/2M5gZt4ViBtaLi1GIYM8n7kRlIUf8s9w9OnH46VOpNa2cLT4q
8VyEP3n7683PHZ9zj2eyJgNN/cNPPx0vOGXyTTiyV9skAdPXS7uInRpPg7SfAECNlOO2f76d
Ji3Fd5RdJqGkhLLdsWpLKkmikp7lKKCz16au7gzM4Xgl0+KsSt/jO23m4Trrc2ZaIRiMtvpT
bnJs55KU0jjkpKWCT3Oeyt6b6le2cj2cIrnmLx/49jxcnvGLqmRpliu7MC5QXEtohJ+6FeEV
KfelKKb12OmicmW42VDxDhVuym83Zm4SpkSDb7ZImum3AKkOJZKatJR+fkn8nrrV7tNJG3DT
L7I8NeOoEF7JZL19fxRcCBMh2phpr90S5cHHW0hzbhRHZqdvX5az3b5+5L8SIwjBMZXlebW0
wps6Jb7FMetCLjELbzbxZS4nvs09jqOVGuP1fUBvqs5ayCnI6xfwxid18Wt5GJNykZG5CdlK
ZYDaUMlqoSDFcCHHkewgrbV/DbVZzYm4FTsB8ZTbH42TaI1ziTsskNsSZ61ILSmlyA1J5rAI
7qT/AJHH8v1CuhVeXIuuSe8WeFLB+5TJ93ivyWI15uVoiwpTXtfiMxnil/gU1W4FIHEp2ros
8gkkjNPJuFWKwx8cuOOJuLVvvcaQ99jdUp+8aciPdlSlhAFOY3p6a7KcthmYLD4k8eYxebI3
kd/Zuk0rvca0w4duQk8FrSHS9JSoHkzuEq6bfjrlZt8nRGgzvAXjyam6zZE16BOnzLmq0tRd
mIzcR5aEJbYShfcALZJHJO2w1OzxByWjPr14hscLDZ+YtTnjj71lgyrG+VgrXdZS+2uMqqRV
CVIVt+XkN9tbrRu0STnJNRvFVpvkvxhbHoMi2RbpbJrt5eS3xkvPMqLxQpZTQqUEnt8vcE/h
rDf4/MnRrPxAeKeJvHeQ3xbwiX2yWSPblypMG6gMvLdQ8GUqYdQlSi3RXv8AYaKoPXTZNKJk
wnklIHgbxyvIbzb+/cp8dtERy0spc7FVSW1LU2Za2uCl0TVPPjUHVZuE5FPeDBcos7dmyW7W
xsvdq3S3o6EykpS/wbWQO6EEpCqdaGmurSSRhKTepngPx6xiMJ4T5X+oHIsKW4orJbcEpxCS
jthHFKVBziFc+u+vPVNs64n4IXIPHXiq3LkzoKbquDj1+asd8jvuN/8AUh+qOUcinDtrp9X1
DXTq+XmJJRKwd73498Wt57mqH4NztlgxeAJ78NlaUFx4uBNYnMEhhxOw5H6qmtNP9c9Unmxl
REjDxhimCXPyolqxRZN4x1y1vPlic13RDlOMLBjv0AS6E09qhSqqUO1dYdVjPJJKGyXwPwdh
F18dW+63+XNiXi5ty1trRz/6YxlKQUlkNr58VIqtKyn4ab1izSGERuV+IcOieK1ZFbW7k7c4
kFmZKlLWEALdUgkLiuJSeytC6hxpaqbHT66/lDC68FI8L43ZMh8gW+1XlvvxC2++Iwc7Reca
bK22Uq/51J1ezwFEkauMFsLUL/Vgw5v/AFQuyqnjA6Odnvfd/blz7Y1doGlcyBtXUvWnifxk
2tku14owi33JbFrxxF7Nxu8W3XeI84t/9nhSYqZDy0cCFtFtwlKVqO3SujriX4/UtYIkeN8M
TYnLSxaESsfVbLhcn80Kyp9idDfW2ywmQk9pI4pHJH566V6sw/5SUqJKZY/H+KOp8YyLnDfh
N5O66m6TFyEGLICN08AFFbLnIhO4A/jodYVs6ZRXtHwaJGwTG5T7F7umKR7Tk7Dd0FsxNHJt
qcLeKw1KYJ5OKXTdSac9a/qrMJ/jjJJ4n4K3M8dYzM8lRYL1h4Kn44i73fH4UhMVUSeoK7gj
hxXspseyo7Vr00On4p/+0BjMEB4uxOLfPGucPf6eXcp7MFSrVdwFLUl5ABDDbYGzo+uo3ptr
SrVe3q9TyH+yS6NwcDvPhWZNex8WxqLZu9EmvxUtFy4N1CXWLiFcnu4vo1T5HWfVT/7Ijkrt
teCgePfH5jeRLBBzGAkQ7rDXPtLDrqUszFhvnHQHAqg5r4jiSOtPXWWm1PA1w2uRXmuyQI8X
GLrItUaw5TdYz68issRPaZZLbiRHX9vVXaWtBNTX3UrruvWlVtPE4M2UuES3g+djVpw7Nr3K
iurvFsggpltFh3iy+e2kNNvpWnuBX1KIII21y9dO/sVTdmlQVGumMWv+nNwxIb7dyvVxNtmy
E9lxLrzTQdPLmkqbYKeiUUUFbg66er1q3ss8QpKzmFwP/I93wxmy+NbQ3bnkWSS0m5SoQdZB
U24S0SqQEc+6FHdVeJGxFemPVT/6rWNN/nBdnJdguXmXIoTcFclzGbQItst4bivOBwBtR/b2
XAhtXFC90uk77ig0X9SVK/IKYb+Sywsdxhm7Tv26zsrnSZNtRc2I0WI8+0HmSt1cxh09tlCj
/mdrf1FdZdMJsinKxzCf2uXGattqTgIbuzmQXJC0KcjXJiSUxW2nirutlKePbSke4fGuuj9K
nX5YLsmv8Dp+Bi8NcOabXbX7JHl2n/QDjTjDT04vJ/69Knlq/UHJR5B2gBpoXprEL5n4Nz+u
SXkfcRchgyl2hcmROhzm42PvR7Sxc4xbUg/cslJDL6CBQIX+O++uapNZjBmeEyNXhdql5RJn
O2+Lkl/YlWxT8OQpmC7bYLiC6tx5MTg2XmVg1UAa7DW3X8VKjf3NKyX/AHJHIYkuLa8juRsc
W/Wh+TcSYFvainmjdKn577x5hTRT7Ese6gqd9HWsr7GJ4Gca2vTcLis3S32ZMlty2pxyW4Y6
rPJUpaSFxVICZnJSAe8hw77021mtE7PGMm7xOGDJLDAueRR7XkOOtzbnHiT37FMl/axGbpKQ
hIRBbbjEK7CSeae8rnsDXrqVVH+sGFpv/X1K9jlmbt8m7Q7Zjtps+ahyAuZaHpDM1pu2OH/r
XGvuFFLY4j3t1JT19daXrrOZ68fI48/Uw3PGrAM3vqMdKP2IS3P2/tKKmi1t/lk/lCq8ddPf
61VV8xk511LK8pY4Ap6D1rufw15xkIABPFftV6Gm1OuiCgUj6RwHQ7D4jWYGBB4qFKUJ9wCv
hrSKBShQ1FeNAaEV3+eplAEuqSRQbHc16A/LQSDVy47KrSgUfnX01QAOAp9I/wDVvWv+3rqg
oIdNTblkdeYofh/Zp5OiZvfhpmPH8SZzfGIbL16gIYbhS3mEyFth0UUGwoKIO/prpZQkZfgx
11LnOiyT6qV619a/jrKYpwGy20p1HdKkxwsd1TdCsN1BVxrQcuNaV1qjzktm4eRLDheJwsMv
tsxL7m0yYK35cOcXauuPBHZMx5ANV71SnYE+lNXVS67+hl3bJPNrB4+RfcHx66Y63ap90SiT
eolpBBC5BCWIqlroUp5VLihvQHjpr68uGE5goPl/DkW7yVNsmO2dbcOrTVthRmnFBau2kqDY
ooqNTUkH8dc1Y2lKLL4bwjFHmHWsox1b9xE5MKbLu3dbhsLUQlMOK20Cp2WalSwqgSKbjW2u
ZMVlaND/AO2fiCNLXZ4FogT8jcffT9tc3pPc5pBW00hTKS3QIoSK7ep0ZKXoh5HgjHY2O/t0
ZFvfza4xFT35klTwXGb5DmIcdH6aUNj2pUtXuOpti7DUYB4gNqiWqRYXrVc7mtliyuLcW/ep
AcWAZT8dILUZtQJUApXT0HTTLfIclnneBfGN4jvWizxWoUiFNbiypsOQ6++EhPJz7nuDj3VJ
H0itCanWc7KSCt2CeOGZeO3i32NUaNLvKsfftUp9xSXigrQZalAhTiv06hJ9utqVKngFaF9T
HfL1qtlr8i3+32+OiHBjSAiPHaFEITxB9qR03OsJDHkkfG3ihzN4F0Xb7i21fYpa+2tbgIT2
VKAXIdd34pFSEpTuTrbbj4FfsX6xf00WOc1JkSMkfatbLxiRVlltlTrrWz6yh5QCWw4CGx1I
FdDs1yZaRzif03Y447BiP5WtVxvJkqtLLEbm0tETcrU4r5UJ+P5dXa2ySRk7svJEX5VjZvMk
vtzBCQTIdCOSXe2lXEqNEgitKdNbr7GkXVGu3f8ApyNxm3RiLljl0yyH9uu5plsKQ3zmfSov
EqUa0r7fTrrK9t0tm1bEIoee+PsPxkli15cblkMSWiBdIa47iPtzsFu8zyPbQT8d+g1pXs3l
nNpeDTf+zmEIza6W9x5bVvtWNtT2JSnXlH7lwuJVMeIVzXw4ckoFE/LWXZtbNNkHb/6cnpd5
jot2VldnnwUz03NLDqZLqFqoKtBQ2V1JWr4bav7LImjjcv6encfdnP3/ADVFlsQdaix7jwXy
kOup5hLqErSEAJ68if5af7bPAdU9bGV48IW6BgEHK42QBUTjKduN09621soUW4jEFhsBRLxr
uTxpudtHZtxIQkZ8zhTa8Bfy1+7NNvtvCNDsbaC7IUKgKcdUk0ZTQ1TXrT8NL/YYTGmAxb7K
zKzxrDJEa8vSUIt8pVOLbij/AJhqFbJG/TT3jgxVZya5dfBOZ3G4KetuZNX28RriIc9S1PBU
R5z9Ra+air3JHuUlO+s93ydGkyr3/DF23JrGqDnzF4nXWSICp7D7iZMVIWG3FLUVLUlG5A3A
J2ppVrJZCEWmb4myq4+VshtmOXyRbINkZjh+7SpT8iTweaC0oqkhxZVuoj6U6Hd7JeWc8a8J
SU5Fkf3ubNw5FnirfcnRXFokr5tBZefqebbQ5fqfmJ21O7CVyVqx4H5CVbceas2QKZgZZPkM
WtLb7zTakRgorluJQRQLSglKfqp11t+xpQDqXjKPCV/Njg2q2Zj3nHreqVcbPOlO/wDVrjn3
FhgHilhsbJ5VGsV9jFrwUTGvLfmO6Xm2Wi15A+7JlPtMRGnAyGuRUEoCzwHsH5vlrTa8Ctly
vPiTzDc71Dkf6xjXe5x56o5calu//Zzy0l5SuJ+g8QTxSmvQdNFfZHBJkFj/AI88i5vJslzv
OTKajzZ0hm3zJkla30fbKo8uO2SB3FlPsSDXap2039k6RVUZZEZBleb+Ps8v0S0ZUblNfcQi
ddhxkrd7Ve024t1KqLbSeK0p2B21usNS0ZnguWNSPPOe4ym4JzGPabcuYuMxJkPCK69IUgJL
Q7KRVO/FCR+ap1i9qp4RtrBG2zxf5kYx2faV5KzY7ep2Uwi0yZ5ZM7s1TIU1TbgpQoVEivU7
aF7M6Mv5ISCjLR4LuN3VkEtFnjz2rTEsTTwEb7dSqvFxIHKi1L9orQj5HWp/Jl2jZxjeYfM2
UrYxmPdnHnLopMNthlthhauRoAHEJSpIoPcQemqUloVWTQrn428vO2mBKVlzt4ytV3Qxbm48
sriRUoYX33nHCAUuN8SlQA6ehJ1jv5UDCK9lfizzfAvByOfeBc59phOz2Lu1IW4tDcYjk0yl
SQrn+pWgTQ61/avBlaMtzfG8gs9whyL66HJ18iovBJWXHQiUpRSX69HFcakemtVchZw4L74o
wa/3Cwuyv9XLxaBkTq7NbI7fcUq4S0pJLSginBrYgr6+n45tfxwNVglrJ4hy9WOysWcy1Nqu
N2Ml+24ohThjTmoK+26686n2oCiiqU+u1fln+1vItrZms7GsnY8fQchmzEf6ffmuR4FuW8S7
3QFB1xuP0oSggqG/89bVpcFMZNOvOL+UJzeBWOdmBl/6kdL0WMlaTHhGEELbPcRTvLa+HooU
366x2xoU1MDLzbieW4rMsuXycpm32cZBZhypkZcR+M7G/USptpft4V3BAG+++qt+GCfBA+OI
+Y32/wB3zJzKFY/HgBL1/wAocJUoGSe0hPaTs4pdKAEcRT8NN2u2ChRLwVPyRjV9xvL51qvc
xNwuCFJkPXFKlOfcJfHcQ6VK3qsHca3W05MOxZfGrcpzAPJbwfLaE2dnuAJSSsB+vHfoCAQd
ZaXZGW20zPIbjrchpxkq7yVpLJ6krCqoH86U109ijZ2k1/MPFud2dvH8jlyJsjyFks9BU0gI
CWXiCptPeCuXfSGwVU9oFfhrirys6RzmLQiTyiyf1HKeQbndhNZUxNV3Y0ppTSENMH7lolCU
8XFNV2p8ab6ldTobR9it49hXmLJMZiwbc8n9gmW4uMsvSG2WRb0yiUhfIe0d9KlgfD+Wruk9
F9dlkxXFv6h2MrnpiTfsprKIdunz3XmC2tgJrGRG5hQd4NErSE0+e51d6+BmSiZrkGX2DPZc
IXyTKk41cJqbfcV8QtLj7nN94JAICnVE8v7NbrlaMVtj7mnSJ/npfh1LH7XATZ3ralgy+QFy
VbnaIB7PLhRaVbqI5b11jtWdG7VUQZ3dfAfk20rZXNhRkpkyWoX6cppfZdkbNKfof021nYKO
r+zAYnZaMgwbzYxmcHEmbym73GPZ3mYLiZCEAW5aW0yW3OYBRUkIHPdQAIOitsZRKOzK9kuK
+YrDjdwt90fpjVqhRu+lh9tyOqE9JJZS2of5lJPUDcfhrasp1krtSRTGReT/ABlcZNkjz3bT
KebZemRmlNuIIdbDjatwoBfBwVI39NVVOTU8eC84bbfLTmFIXAzNMB29JlXCz4865zkTmmeS
pjvcKVlqvuIBNFHfbrq/sU5RKEWS5XLys94yZscaZaF3qPAiGfamEqFzZtbxSmKpThUWOf0c
gE1A3rXQmsuAvD0VO9eKvIMpT1uh5JGv8m9XVuPlUZpZSmNdW21vhT5UPelDYUeSabilNX9j
WY4KFrga2jH8xwG9W1/BrxAvTWaMu2e03tr2JD/cSHhxcNW1tqT7F7gj+Wj68EvBIYy75Zcy
rI7rd8yTjsi2OtWm9XeUUPtqklRTHjNMoHBVVEqqE0SKn4607zCg3OCRxjFvMlouN7C8vYsN
7u1xciRmpKw+u63BlsOe1ZQrgntq2WfiE8dDtLmDKZRk3ry7D8b3CSm7us4o1Pctk6EHkBz7
h9RU8gADmG1LUa8VAGvTrrVX+WECcpfI/GF5+1h+EwkZC0u25Pc0m22pLtG4ckJCmnluprwW
Eq5FCfpP/NoT/lgWsotfltjyxhsG2XpWcTbrHMpyIkuMrgOsyUsL9yELSO4hTfNPMaKuVEFg
gfHGXeRrxfLplVwylqBBtcNqPd7/AHFhMxLDDzv6LTLHGhWt3/DT56m5iqQpYGPkPyJ5fxvM
Z8Kdfw1cQwygyLWlLDEiKqrsd0ISB7ilw7/UOlaa60Sa0YdoJux2zySrwvkeZS8kuMS2Su4/
HhRFNqTJU6rjJffVXm2haqpUEUNd6a51v+eOBtWK42NMm8Y5ZaPEWPrnz5Lv7lcW/sLAhTSr
ez94ClsrVy5dxzlyTT2pqa01U9jt2ejTUNJFhzbEvK0W+YRiqcnuE69zkGWA8401Givxgkco
60HktTDalcuR935a1I1Vf4NwLh2haJF/Bs0k+WrlHRmN4afx+2MLVdVNCRNcRKWpJRGjsEpU
yVI5GoqPUeuh2fVaMvyOrF4r8jMZJkL68+uMeS6/GjqnRIzr78gOshxlcpkn9ANpVxP+H5ar
Xbhwi4gz3/sVdZaJSrhkMc5bOcnPWe2Hm4bg1AcWmS8t41DZWUEo59datezy8oIwThwbPV47
Ax//AF2pb0Q29+8Yzzc4QYct5sRXUumgcLalJUW67bU9NFG0tfQ0vlkrePB0leRRsbezC5PD
IZsiTcxKgOttvuxGlu/cJcJ7brgPTeprUdNCs4cxok0Q2T+Ns3k3122W/KZMxlFoQ3d5mQIc
tjbUPvUjxlqc5dxtbgJTStFddSs0lGzKrLkmsa8Qy7BlUy1WXN5lmuDrUJC48Br7mQtT7XdW
p7hRIjIXXi5/sRpwrGo3A2x/AMwuGM35qF5DlG5rfuButvilbsEvsqWlz72QkgtLkpbJ+YIO
+tttWX2MwoG0/wAa5Pc/GseLYs0m3iIgwm27OtKm7Y4qStKUsRpCjxdLRWCU9BQ9CNNLWVnP
yLWYRWYfgi+R8jix1ZBa1RWw8/Outuk95UFUIdx5LiEUWl1KRtQUqNcs+CqkmRHkBrIcZym3
ZJAyiTd/3qIm4WbIiXWZqmDVopd5e5Kk040GxGu02dY4RhrqyUwjxpnt3x5d+tOSxrS1f1Px
A3KmORnp7wVVcZVP8xTxqRUn56w/ba1p/wDE06tVgkY/gnymcCTJbnoatclLc2RjyXnBtyoH
1JA7ClIT7qV5U+er+yys/I9Fgczv6Ys+HajMXe3Twh3t9pL7iQ02FBLz3FY2S0v/ADAN9Y/s
cRwMJkff/FHkDj/qWLkLOQphW9c6Bdokl111xuA4lDjUdSquJWzzBG9KdN9aVrfxegeBFh8W
3B3yHJtGVXJwTl2dd8efjurEpa3GFLShS1791Kv8wL2IHXWr2vZVfDcIkkpXg64r4xzpeHQb
nacsZty75EemwrGiS8w7J+2Ci+kIRRFUoT9R23Ghu1bv4Zt1jDCybxLf7N49dul2yhv7S3tR
34lkCnloS/LopDTfL9LmptRIUivqD8dao73s1+piySMueuFwe7LciQ4tMVIaj9xalBpAPIJb
qTwFd6J1dn16rRl5Eyps+ZIXIlPLkyHCC8++tTjiiBQFS1EqVttrNm4hkcWpL6ELbS6pKV+x
1CSQlQ68VgEAj5awnDlDHkNMh1DQbS4oNk8i1yPEK6cgk7cqbVpXTLF2CW884EJcWpSUJo0h
SieO/RNagD8NtKeI4MNKZHCJ8tEsSO64l1sbO81Bz/8APB5fLr00Ntmu0i2LncG3VutSXEPP
Di+oLUFqB3PNVan+JOm128ySOIkuFBQFrS2oVW3UhPJOwJTXiSn0+Gs97TM5IBeWtCUqJUhu
oQCo8UlRqTT0J67aE3ESSC++lF3uLeWXWgaOlSioV/5ia63LiJwPZoMTX0JqlwqUocVrSVci
DueZ6n+Oq3ss3LZdsQwJlPhvj3FqArWhIH8QD66z/ZadkmLU+4EpSakJqoI5bD1qB0H8NXd+
TTtIlUh5ZClLUoCn1KUSD1BB0P2WiGzMsIuOk8kqqsmorXqetVeuh3fPAVwIWocDX6+ivQcf
jodm9iGFtqSTQeoB+f4aAEgeiyandIB6np/Zqk3IkKUlRJO1enrU/wDHSFbZgPkCQipWa+3b
p89Bp4FFRSTx39Px0E2guJbAAJIPTkCdvXbQZWBQKSoI49agj01tI0CiuX9ta/L46ighEpW5
BcVzPFpQon8tD66tAjd/BLVyRhWX3f8AfZ9qtVrZaXLgWwM96SVoWEnvPJX2+NKe3ff5a62n
qiwZI44l5RUlKkoUqqAVc1AHfdRAr+OuUB9DrAW+JrC46imQHUhlewPd5DgoV9v1ep21uqck
bLnWPZYWsYYznyCl+23dTkoujuSocbspFFJ7NPuFkq4pIHEfGmqbTjDDqtj264VkMC8Y9k2O
52q6X7JFKZt10mtCKRFbb4reUqRyo2E0SmiamoCa1017S5UlKSiSreR7x5FxLyK4iVlD9xyC
FFDDd0SkI4NvpC3G2kKBSkfMCur+zDwiaR38RyPM14dlWrEby/BgyHedwuclaVR233juoKdS
tS3nT+VHuPU/HTa6aRdeS52Oweao9gurCctkW/G2HpTbExEVyZNmKbWrvOMJQlbzaVLCqKUs
b/z1l3nhGokVj2O+X7xh37Wc2agNtxEyBZyjuvtxhVTaJk5sfo807lsrqBt8dNrZmEHVIS27
/UNkWHybnKvQhwlORxAiBptiZMU48G2VJUlCXGmlroUqWoVHy0K8NOETqiUynCPOdxhRX3cx
juv2uQhT0ZhCrewy91Ly5HFsPdv1O/rpreHMImkKjY5mn+rcfym85dbLv904bdaJD0R1caPK
cHtVDip7SVLXx2d6epPTUpaaSFtIxPyzCuELyJfY9wnKuU8SKvzVIS0HFEBWyEkhCRWgT8tF
chJK+Oofle8WWRasLjurtzcxqXNdZLbAXJbALSXn1FClITQHt1p6nWnZKArkvkG5/wBTs2bc
oTdvEyUw6XHpExiKUMuqSNorjvFse0AgJB1l3XguuZ4ImwyvMEpu65DJuX2i/H8R+NRxpL7i
XnwS82lP0KfI3ccX9O22tWtPGxhbLbJtueY5ZLe9FwKwXZtMRNwmXJDCnH23FVX73XVc3n/z
qUkUqdZ/CcyMsicpzX+oifDTkKLI5ZLMpxmS2iM0gvKSjdlL5JU84mp6FKfw01tVPUk68HGf
bv6g8nutmYumOw4cRyWmZ9s8w0zDdeZSpaVz0oUta+KakIV6+ms9kUfJFXzzn5dtE9m4XGFB
ifeMrZipdiBCZMdp3ipYSpXdLaiKJKtqfT8ddF18B1UfI5wXyb5iyi73BNsZt0qEtllMhE5p
DNqgttEhmm/sUVE8U1NT6am6pRAwWeLd/wCoO43i6Wm62ezyWkOoUmTdWUCIh4tlLQiVV+qt
aU7Df500TWIjIRyRtkuP9SE1DqEx4iG7S3KhuW6ehCW57rh7joSxsHlo/Lx4oSNvjomvgksS
ynxsQ81zvG822swURceMhc+Tb6NszpZS5VbvAfqKZbWNhsNhSoGq1/CM9YKvgrGRY7erFmq7
NKk2diahMd1tBCZL1SEsMKp7lKINKA61KiGFpNqynMPPNvlWy8x8WhQY0qd3F22CEyH3pDie
201OUk8g4W102pv1pTWKuppp6RVM6mZ3j+QWO43vDLQ1bWH+9AtdsRyivTlEhJkOs1ccfQd0
orSvx01deZGf1HkTyf5Jl+TXEowiH/qaXHbjuWztOMvAJUHUvvvcgpI48ale3GmtPoliQ6t6
O2P3LzLePIGUXU2a3XFfbNju8WW4lm3oCVEpituVq5xKiVUqVV36jWeyiASIZjzFlODvM41d
catj91xx9/7B11Kk/aGT71oZS2ePGi/aoH6dtPVNSEtFlbzvy3kGFPZBbsOjAR4jsRjIkgmU
I7lA+qMws8nASnkSPaPnTWW6pm4K7jeaeLLA1ab9G8dz22oT7bbN8clurSp9rdRqQltbmxPH
p+Grqm9lJ0X/AFH3d3I238axqNEbfnqmSYqObsic8tssp7hQPavt/wCEGp31rrVLIfQaXnyd
lSbniceDhgsbNkmLuFstKGX1KkOOhQVQKSlR2UupSOpr6ayuuShsjmp+MXHNrvJyfALguXNC
XIVht6nmS0DVT0h0KCXXFuKNeWyR8Na/GMOBytKS92TyDZ2cbVZsQ8bTpibBMVdXo05a1JiP
NULbwVRThd5/+2N6ay0uWCb2VaR5buOQ2fnesEj5Bd7UuSuNdHmnnI0dUhwuuB1hKS2eJP0q
UNgK61+K5HqyMt+eRovip/FP9FuOszFCROvjveU0X1K2lJTwShJQmgaTy4ig66vxmZyDTJmN
dPHeKWux5nYMJuKJsmQtvHpUyb3EvPR/Y4tbTZKin3ECg9yumjrO2Lb/AFHyPPVws8yD+2YR
+0Qn57816MVPlyW/KQptwtrKE/qFSq1TX0FNPWvkIY0vnmzLIWX2jJJeNv22ytMyYKIUxT61
S0OcS+hUh/cqQrjSifb/AB01rVonUy3LL/kOeZZOvJjOPzJRSpMSI2pwMx2wENoQlIrxQkAV
pudLioQy4YL5Ovdkx1NlGKpvjtllKnWR1bb5XAmKrycdDY3AqaJNPx1jE5ZcYH0LzTlCbC1J
VYUy8kgtyWYGXKacP2zEpZL9W0p7SjyJopRoPhparJrrgr05OVyPDEGtphRMdhXFa03pZSJ0
uQtRRRtKjyUhBVxVw2236HUmrNqoWWpLNmWZ2teDYOw3iLlrgx1lyyT/AL9SpDkeO6n7wNpQ
ApCn3PpWrp1TopVTljjtnZH5t5KcvM+wKuONSYmGWeYXv22Y9IfdlOEBTyHJb/uJUj8o+kHT
1UQnkrKHLGuM+UbXHu+TxDibU3EsocQ5IxxhS2y2iP7mO042Kiivcv20/DVaqQRKzkqebZBf
sxymXe5cItyX+DaY0dpfbabZQENt0oT7EJ9d/XVNVpmVXLcErh+Rw7Tg+bWx1l9cm+QmI0JT
TalthSHeThdcGyAlPx66v9yGywUwRpSEh8NuIar7HwlQbBr/AI6UrXXa3sTYQzQ1+V7qxkOF
X0290t4nCZhtokqcKZa0cw68lahsVpc26nbfXLrVqJyadYcsssLzpjFuei26xYq5Hx1a5rlw
gqlLdkPvXBvtOFtyh4+3oP8Adq/rS5BqTldvI7uSWGXguK4fIiNybfHt0O3oW4+6yzEkKluK
UFJSpXJGxqf56yuqzItS5Y5T/UBYZJ7d7xdU2LEkQZ1sYTKLJZlwIwjpU4QkchtyCafI10r1
prZQpmTJ8rvErI8kvGQiMttq5SnpK0JSVJZDyuYSpYFPaD1Ot4Sgz1g26R5eyGf4iE5GPPpt
0JqLaJNyXNCIKlNLQnk3GoHFuEJCTQkJJ31zpVTEiVTL8kyQ4tkd1nWYwrb5GuUebbJK3kLW
luErucO2PetKx9KyAP7NP4/oVq5gTdfL0eTk9zvjGOOw7tf7E/Z7mguL/UkPtoaTKZQU8glC
GqBA/u1UVYy9CqvIfhPJA1c52PZEzJuWPPQQiRavtHJw4w1F9sFoFCmUNlSnKjqfTfWfY0yS
+Cp+Xsxt2XeQ7nfbe0tqA+Gm46XRxc4MtJaCin8nIoqE+g11qoWDM5Lni+e5BFwBq4tYe7Ok
Y1GkWq25anvJZiMTB+t3mh7XVJ5GivpTX3fPnFU4FjV3zc4i2Jdh2RiNlUqPEg3m/FTikPRY
RBYCY5PBpauKOauhpt6UUlkJJ5zytlKLxZn8fwr9rn3+YL1KYV3ZBu0lbaorimULALbKkOKp
ToTXp1PxVZ2zXWSCzvKLnj18xeG7hhx2wY46ufa7DNUt1T6nl8nlOPnlzSVgUCfp1unV1jlk
oTOOBZq5dL9dbG5iTV+g5PPTcmrDGcXH7MxolxotvDo0gV58vTfbWbQiosYH7Xnm4Cc/KyDH
I1xvsK5SJ9ifSt1luLNWkMOJ4oKu+2ngkAVqSOupqsxwS+CjSMqmrwWVYXLU2lMm5G4u35Ta
w6HtwpjlTjxrXau3TSrLtJl1cJeCfyDO8XkeOrJjEfFZFvk255Mxm5qkO9tx5dDLWlJSK99I
2FfZ6avVEvJu25HuReTZebTbDjlksDztoiTm5ybO/JfuMuZIHsUhch0lQbLZ4hI2HU6cVq/L
JS3IlWYf6Uy3KLfdsKETGr0pj91xB5S2gzwIXGW1IQPb+oCscfaqu3QayksQzKb0VrJnsx8g
5Fd8gFrUt6DH70xiK0UtxIcZISgHn/gQB8z1prs71SSRWriRoznctPjx7D0sjsvXNF1+7ClB
R4M9rs8B7eJ+quilYbYzjId6zCff8exvH+wpKrCzIjsONFa3HzJe7te3TYp+kU0KqrVqSq52
Sl5zi+X/ADK2ZIxbHFPWZEBn7VoOutkwCCAtQFUlziaj0+euaa69ZGcyiz4PnOSOeQr/AAXs
dVMfzaSTLsrjz8F5tanC+0ESEBLjaEV9yiACNPssmk1wFOUWW3+aMtmKvdwewxdwtlplsyk/
avyWhDdhpEZHdfbqqSBx94O3qRTRZKUicxJCW/ypmkzF7vlAsUF+TaHHI8PIqlDkJu8uqUtl
tkHi+OZISTuivrrcV7QtGW3EjC5eV8nFiDMjF2o9ympiQrpfHmXgZzMFSVMRltqohKjxAWUm
qh8NZV1P+sGnKeiTv3ly8R7bAQ7g6rbY256pExic9NcRIlhgoKGnnSlccpbXzQEfL01U6w+W
DtGx7j/kq/5pc12SzYS1drB9gYj1iXJecq2h77kOuTXSFA93oFHfprLsq/U1VzkKX5zu6nrj
a7zgSQq4utxnIKHpMN3nCSlr7Xm2kOOceIUEV2rrb6pKG8GW7PSOGNZXktu8fTshg4pb2WrW
l61Rrw7Idae7ckqAY+2PFMxxhDtKr3pv1rorDvyzT0SuSX3yLIYxOf8A6BTb7kHYiMcuYdeW
lLjKgpDSogKW2u8Eq2UBVJJG+itquZHq5GuQSvINiy2zm34DbbTc7q7JSUQ1Cc1dPuBxlRn3
kqPFFPepG1Pq9NHZNS57FVcFMy2FmmbX+5QP2qNbZOFwiy3jsYhJYhxyC4hipPfUnlzO9SDt
rp/ZCVUt5ObzllTmZhLnYdaMadS2mFaJEiVFkJqHlGUAFBR6UTT2031VbqmvJWtMfBdU/wBQ
GS/6Yh2d2BEdkRY7MNF0V3i52GFhSApkLDPMBITz41OilUnk2rZJFr+pzN2pS3mokFK1Kd37
a1ApffD60EFXy4f+k/HS+odiKc873xq9WWZarbAtUKwpfTEs8ZtX2riJm8kO8iVFLlB06EVG
px1aXIu2RvYfNt9tmX3zJpVsg3WffAoP/eNqV22yngW2eKgQgoogpPUAa17Lq3XhVMK0JryN
2fMWRx5tpkxIsOKmyR5sO3xmkKLaI09RLiCFKJPbrxRvsBvXQ7J68yXZ/sWXNfOFryfxvHxU
Y8mNKjoYSxILwWzFUwOIXHTxDgKkinvURQnT6rJWbtz+5P8ALJjy3qEgoqCSQofEbnWIgGxP
JVKJoARv6E/w1llIZJUkkn2qNPwPw0IGBC0UA4lKugJ31MUBSqq3HuHUen4jVAi0nuVoK02p
WtaeuskgVCvQkAbE/A6TQlZSUp5ElR24/GmqDIn3BKkH2n0HTUik6JKqo6VNABQUBNfjpGQK
5V2oCBuPidEBIpQSTyO52oDsD86ayMAB2ABJJJA+A9dBSGgk8QRuK09RT+GqCB3CkcAQONak
V0QUid1e9KtiK76YEUSAk7kA09p339NRALlQSE19NhSn4V+OqCE1CR1BcQKJPy/j1pqIWlKa
8gRvSifh/LWWMiVK+pdD16Cm38vTSQs8wTWqjTYDY/MCugoApQ5CmyR9Stjv/wAdIhczT5dK
ev8ALVkMkO2ViC6hHEtqUkuOfm+Qpqg0jcfAEm6u43ldljY/+8QbjGZXcnnJiYDEdDYXw7jy
gT79z7d6DXotHReSeDLZakfcOFJQpsk/5f0UrsU13p8Drgslg5MLaCx3KqaJo4mvGqfgFeld
bq4ZlmvZvkd0zDFsZsVswR+3Rqfb4/KUt6S66y2kc2o6VJRyqACpwg/LWnZd52UM75lccuRl
WP5LmuGvwbFZhHhWu0NudoKcZ9zQDiQo1KwDxSnoOOml5biUCT+rGmdZvaLh5HYvmU4O5EbD
HKZZZT60PSnFIoy48SkcEIAB4BPu9dc/xfk1EKZOGA+bMosF3bi22Ayqzyp7koY3CYSSXHRx
[base64-encoded message body omitted]
fJg3VdnF7ZsDhcRGEdY5KTXlx32TRKaqV6aK+y70atEnTIPF2Ots4RAs2YSJt7ytxKFFaXCy
4y8vm7LSCRxS2UAcFbrVQnUvbbLfAtqYOWNeJ8oysomKyl4Ki36RZojzynnHENxG1uOyUVX7
VqCAAlJ9etND9zM1jEjy/eJ1Yv5VxS23HJJ10h5M4WHJ7a1szQEkIDQcC1HgStO/LYV0W9ra
yKabMvy2VccezW/QbZcZjSI05+IHkyHA6tth0oRzcBBVxA9ddK2MVUoumIYDHcw+Je8gzp3F
zki5JtENPecTKMc8HHJCm1ClV/Hen8tCu5fVHTC4yW2HhOSjxqMW/wBdKiy3bQu8tYs1GAa/
blrKv1HwAujhV7hX1pSmua9nMGcEFe/6frBbIlzXAzJh652B6Ize0SWFNR4glrSlCluAmvHl
y/AfHS/bZkzl5rxzH8cvGK2O03dRmhCV3e6qkPvSO492w3KdBJCPYpRaCN+P8NNbNqQz2+CZ
keIlXvzPkVkt94mWu22CNFWHm1uyZig8yiiEFSgpQUsqWuhoNK9zSXkyksv50QWXeMZVlemy
LPkcya6LxCxuS+pS0uOyn2A8+4paVV4NFSUpTv676Le622a3hIlX/Btls7KoF1z1doXcZM6L
bmVtqQxIML2lb3FfBKSBRXL5AV0r3XszCc/UKP4Clu+LU5G5kzqJCbebp+1FKkx0MEFwinLl
UoG54UJ0V991hGpSSZWfLWF4hi9gxt3Hb7Jucq4x0zVMvNqSyppzkRMQFUDXJSQgI60Fda9d
7OU2TTnBP3jxjAn57aYEa7PwrbFxdm+XO+LLr8hSUhannm0qJUlSyU+0dBrC9riDT25CleDs
S7ci+3DMn/8ASggw5sa5rjLVIUJrzjSEdskqB5NGm3rvqt7rWwYhdjNvIOJyMHzC5Y4zPVKZ
YS0fuUgtd1h9sOpDia9eKhyT012paazyMTKNFxbwi1c/GBydOSSGn5EZ6ULdHbW5HbDYJCJK
UHuVVwoqiaa5L3WTNNJQC9eBbImDIYg5E/KyeIxbpE2A7GX20N3FxDSQl1NeakldRxJ2FNX9
1tsWWK9f0wQokq1pg5O67CckuR7gXG0qU0llhb7imA2VUV+kfYfUjWH7bPBOy8EHjPh/Espy
NKrPlU5yxNQVTX5kllUaS28HuylKS9wQQrqVfwOtWtevJhVh/EFB8rYocFzJ+x2+6uT2+w1J
ak8ilwIfBIQ5wPEkU9DShGvR6rtqWZallEcWeVXDyUsb16/xrrtZuw2b5OiZjqm+33FFutSg
KJTX5JO1dYWAq4ErmOLSgKWscdkDkSE7emlSlE4MsLvuGgJG3Qen46UoNS2KZfcQeSF8VCvI
jY/z1lvMitExZs4yuzxZMe13aXCjStpTUd5baXAQRVXEipodUpvKJYRD/crBO9Ek19evx1pu
WZooOa1qJ33QPaR8j11mzyLR0LiuVK+7jxB/5T89Lu4hkkxIdUPpIHpQjWXbyGg+8D7BSiE+
voPgNStmR7CQ8ogE1r8fiPjqayc3LAhXvTUUKdyD/v1PUG6pHaPKeYK+wso7gKXeBKeSVbEG
n1A+o1WWvg3AhSzQBNAlJ9qegAHwpqTZh5AHTsoDkkbCu+9a9NTckJDhUVUFRX+Z+B1mByEV
Enoaq6etaa2mZaCSr3VJoPgST/ADWXkklydA8eJPXem/wOxpqiSC7zvIioPwNPTT1Nq8ClrP
QH0FQemsKpmzkSpZUQVV+Hz1tNgJUoBAH006E7nVBQJqmnt2J2Bpv+GrqAaVLqATSo2Hprmx
gWUgnalBSn8dCLqIAIWDTr8fWmukooaFBNDX1T/u9Nc3kUL9nuINOQJ+e3rok04Ee1JHHod+
JJJHx1pGAwpBTQ032NOny0NDIaFJKCR7UpO1Rtt8NQSEONOShuBXmPnrAphgVTVXoN/xHoNJ
th0QtIqKHp+OsmXASKJqobEGh+GkkGVcwaKp8Ugeg9NUDISaEdagb1G1NTQQLUCCapofmdlf
M/PWSAnooKO5pxPofXRAzgInlRX5h9VOu3/hpkIyHwNCQNknah6E9NDZtINSvckglR+XpT41
0MGwvcUc60qDU/8Aj11kpENgEcVDoagg63IPJ0b9QAFJJNT8PjTWYFBp5g0UaA77j0HrqgQO
EqrxPGiieXQf+egQuRSA4alSa8idtUkK7gSj6aq2qCNz+PpqGQVTz5ct+vH0r8NQEZGQlSZA
UTs3VNDtt6kDroNrBrv9OFvkzcgurrd3n2lm32l6ZMNsKEPyENLT+glSwoJG/KtKk69Ldv6+
OsmYyUK6PNSrrMlspcbZeeU4lDq+44AtRNVrP1KP5vnrgUDb2BwFVT1p6j+Xz012BruTYzkS
MCx+65Fm7sy23iU2y7GQ45LiQ2m08qkBX6rrQ9vBIoDt6a62dnbq0k/0L8YHl5wa0xLBZMvx
7MZ064PzG7VYJU9P2iWkt15OIcWSW2GQDUgUH46X/Yrw4bGkRBC+VomYYjlto+6yqZeLrGhp
ls3VbiwGFOk1EbkpRAI6q6nRX3NN4QStIgcFxbLsyyUyLY+UyWXUS5t+lLIajrUuqHXHTUqd
Uv6APco6P7WtG3DUPRruO+MPJMG8ZA1DzC4wbDGkqalzorTj8ubOKAt5aGUk8ac6KcUrrot7
m6pQjDqtjXA8GzC62WTak5pcLQ1KL6kWBlKnXEMlSkqcuDtaRe7xJ4k1PpXVe1mlhD1SySV1
8YZZB8f/AG90zW6yLciGHkWWBFW7GW31aY7xPRW1Qrb5U1P2t7STJNSQuPeLYF8xyN+xeQnZ
T1idbEaI0y41FiTpiwlLbalFKuZX1IFafCutO3srGEEJliy7Gr5YLzYLnf8ANbndJEOc2zHX
Itncix3HQUKkJQr9JRqQlCjWla+mqj7OEkU8Iyn+oCzv2ryZOYkznrpIdaZfemSOIdJWgUAS
gJQlCRskAax65YpB+Lb15PiY7kYwmjMVkMSbrOCQt9AQSltmOFBVVub+3jrra1UlKkHV8Fxx
nGv6khPnMxLsuFImttzLs9KkNntOvpPabXzSvg9wAJQke0Ur6a5v24jqidci8awn+oy1RrnG
tE5u3NyJbhmSX3Wu9JlKSO44lS0rdUpWwSa9dS9q5rJrEFBv/k3yvFTNxS43qS00yRFmQ6pK
09scO13AOfEjZQB39ddu1XmFJzVW8F/uVt/qe/0uh2ZMU1BQw279m080qX2kUUgkIT3eQIHK
pr6a5L21mepKr0M84uH9R1ks0PJckuQjtwlJTHZaWxzbekoLaS4y2AFPcVkDlWlSdNbpvFRW
NuTvM8QZ7fbhYsKvWRPvx41ndvrrKwFhp5xfbEZmquTiuSgObhoN6DWL2e4GFJDYl48874Zl
DUWwMfa3GWwtxwpdadhdtvYl9SqtFSFKAHz6aV7cQ0Z65klcdxv+pW3368W+1vu/fSUd+6Tn
3W1skvVAcaceqnukDYoGwHoNL9qaiDcKZG0Ow+fLHhj0OHcFtxrtJlolRA5ycjNRgVS5Tkte
0dta68qKqa/E6b+2YxlGYjKKu1f/AC/cPFdwDU1YwS2uBiXILiEKc5kJEdtav1XGxyFW0mm+
+p2T4/IoRUHc3yZ2NbIKrk/9nYzytDKDwTGV/ib409/wUdxrcrD5C2sE5dvNHki6zrbMlXlz
vWqhghqjQDlOJWrhTuLUKhSlfPTNZlLAJjmLl3lPP8xj3aK65Pu9laVNihpKG2IbbIqt1LdO
2jem5FVKoN9c31XAROfBO4HA81Iy9rJrTbhOyG+sPyETLgltauwhSW1PnuFPZCiAhB2qNgKa
x3T2jqqxOYLLDif1Jryu+T41sYN9lojxpdydaj8WEIQotpiKWQlKaEklNanc121L2V5WAVeZ
MzkZTnmKs5RistRjy7y527+66CuWskEqBfJqEuJWeXxr89br1eTDl4LrET/UNc/GgiMR3xi5
jdwSiW0yXIPHZpKye72SlNeAFePy0W9lZwjpHkreS4x5gc8eWibeIbrWG2loOW5tZbbSyh7Z
Lq20+/3ctiv4/PTW9XhIzbG2T3jvxxnVnyTEnrVcG7deMpt0yUHXmVLEOKEUJ4197q21BSAP
pOs2v50Tq5iRxg2HeZMFuF1mxILcJAtjsuYi4pS+h+Owapb4o5VeUo7JB/HV/YntF1wMJXmT
yniuVXF27x47F3mNRu9b5EdHGM2lJWwhDST+l7VlRFa71O+uirV8YB4J/CM6/qGv9rmz7Jb2
ryxJlOuJuEhlpQakcRyDPNSQEoHEJFCB01izouJFVeyCx6B53cwfIIFvtbr9luD0g3R19tCn
y+DxkqYCiHOSt0lSQfluNH9lZ0DrheBhJcz0+BY/elRVYiq5/Yx4aEVnJUVFwhLgHtQXE1p9
R/DWpVnj9SjRdrrk/wDUNabPYOeMx4UOK601a2WIyFr7q0FppCmwpZQfd02qeusz6+EzbTnZ
wxJzzHgt7sFuueMLubS35r1uiJUjuOyZY5vvKeQVJSpCOW6ugJ0dqNcyDrCwRnl21eVvIedR
rcrFVW+ZEhqcYhtuIcJZUscnnpJolVVAISK7enrpVklgkpKlaMq8i+JrpdrOEC23CdHQzLae
HIt7EtvtUNOQCjxVuNaqlbIwXLwN/wB2FWd234ra4gstzlLbn5FOYDqkqKaKJUVAvJbpsmhH
I79Tos6LexVVGXgquU4t5YyqAjKZ1pdlWi2RkW+JLQgAKiwlKbQ4GQSs8jVSlAb/AIaf7KqI
MocyMt8s2m3xLZIgoab8duR5UiUqOjvRS7VMVt9w19h7xogdfXpqfSPlmbLJdr7n/muFkOJ2
2LDgO3W7Rf3hizQonFHGSlba2pFTv7AVqNRSvXQnTq2yjMeBnIvnnV7yRaHZONMOZFbID6rZ
bSwkRmo7qwFvDiviFAgISrl8tYmnKZqFx9yoZRhfmHO82u0mZjrgu8ZLSZrDCEMMtpUCWwkl
VFc9zXkSeuun9lUsEqnGy+YfI2H2X/STbcZk2tbzTDkpgLlRFrUe4lpZNEkLqdwdb605WTOU
Wr/U3n2T4xbfTakpx4REsm8lpAmOQkEcU1r3OCqUrx3TrmreucoojZHZRaPObNtvrl2szyI+
azIguhQ2hTjkhtY+3abQkqU0kqon56K3pOjSUMRnNm8i45Ot/knLm47N9VcW4sW1LbS41wix
R23FBKuJTRNOP+Kp0Uas+qWDLWRxj/8AUzkLGVyMhvNviznnYYhIZYrFKUhzuFSXBzUSogcq
+gFNdr+hYgzSWxk//Ujmyb/dbnbGokVq5utPIguNB9DLjLYbS4hSqEuKSn3K9fhp/qon5Rrg
peZ+RslzBuKi9PNusw1SXWkJbCAFTXe68o061XSnwG2qqSwjKRPOeeM/dxdzHS/HMNyGbc4/
2EfcqjlJQEF3r7U7DQqVTGGQM/yRkk+y2KyTFtOwceWl23BTSeY4fQ2451W2ncBJ0/0VjItu
ZLRK/qM8jTLxBunfhofgIcaZbRHSltTbwAcQ4ncqT7RQV20/1U8AQWT+Yc1yNqfHukxJj3ER
UvMMtJbbSmAtS2UtAfQApxRNOuq/qpwgeSAyvKrxk9+lX27uJemy+2HXEAITRltLSaIGwASk
a2oShGktss1o8159a8QGMQZ7bNuS2uO06GkKfQy4SS2h2nLjVR+euPVJ6Bslsp/qEzy+Mphs
PItNtbcjLjxoqQFtmIEqa/WI5qSHEdynx10p6qJayP1ONw/qG8nzJkKSq4tsuQZH3ccMx2m0
l0tqbUV0FFBaXFcgeusr1VaiMgsjc+fPIy8hF/XcmzORGVCDPYa+37Clcyks04E8/dy666f0
1jRpazsp2UZTesmvUi83mQZU+Tx5qolACEgBCEITRKUpA2A1pJJQjKwQ6hUVVtX4UJOkXAAK
cQDQnoPlogzAfU9CkA7U61HTQyjyGqm31JoakddSFhJ6VTvUiupkkEFcRx6ivTr/AG6IIVUq
VxI+IG/rqRkSFq3SSdhuo+hGt9QFI5EgAUr/ALbV0GguSSOh+XyPw1hogzQI3NEnoPlprXIA
Tur0p1SDTWmQORJAFU/Gvp8tZkkK+mlBSu1PSmgm8CHKAmgp15D/AG9dMEKOwr+Q9RWhpqmC
YXt5Eg/A00iLHHp9RPw/36wwkTXoAd9yaevwFdBClFCa13JFfx+etJGtCVcggUUD6oFevx/l
rZzeADmfWprUnWGEABUolR2SNztXRIoMjmnfcjc+lPnp0MSH2wQAT+O3SuiSDWsEjkQVD1HU
U1mWamdieVE0SOVT7abD+OpgKC+4TWpI2+Ypq0TciuoCgdx6DYb9dH1Mg5gEKNCadfnogUxJ
UmilJ+kbgdN9dKkECSgAA7mpVT+zWWDFJqoUV1qfXrrLRBhQ414nf2lVNqfDWYRtJgSUpCu3
WvoPloINIPPjuogUBqNjrSQ4AlfFRJoD+X4cvnogJCX7TyJpTbbam/TRIg5AkDofzGu23z1Y
IInisA1PXc9D+Px1QFkGtaCdxQjcdCP56Egk6BfFCdwOVQSeug2JCuAURSldh8qbHRspBy3J
V19VDoa9NRQGlaqAJT7T9Q9f5awDB0O3XiCCKda6ZCA0FRPpyJr1pqFBOU23FK7ppXb11Jk2
BtwqWmorxTsKbEDpqaFSGCkcTRVFdf8AhogglDflyBSR0FCK/M6hCp+lShr8Kb0r8dUiRzEl
LKXBwCi6koCq9B8h89Bs0Lw3n0bC7hcpku3quUS4W923OsNu9hRDqgeQcIVTZNOmu9bVfrdW
8yDK3PkMOyXX2Y32rLqyW4ralLS0ivtSFq9yqD1PXXNIF5OPEqFRtX8v/hqQwaBevKkCX48h
YZDx6PEZhVUzcS+486hwmrzvaICAt0k9SQn0Guns6tym39TnlDC6+Qk3iLi1tuFvSbDjLCY6
7eysoMmpCnnFrp7FO8R0BpvrfddpWDpWzWVknc28s2XLcgs90l4uw3EtwQibF+4cWuYy0f02
O4UpDaB60FT665xSctx5M5mRtZPNOSWPJJtwsjUW02m4SWZEqyMNNlkNM0ShpBUk8SUjdYHU
1prp670jq1JT5LbE/qXccVOTfLKqYH7gbjDbjTHIqWylIS2y8Up/UbSEDboreo0dKNbyYdoc
HK2f1Fsxi/OexttzInZL8v7pmS5HiuOOjghUpkVU+GUUQgE9BtQmui1KpYZp2Zxi/wBQ4h2V
aYdqeGSuMOsKuMia85FC3iSp5EPdPKh9o2A6DbTetdpz8FMIqkDyY5Cwu1YxGihCYt1F2uFw
Q4pD0paDVttSkAKRROyl1r0pTSrVVk0Euxcrx50x+8NxbKYE222CRNal3p8y1TpaxGUHG2I3
doltHNA5GvTTSimU/wBjU+TP/Leet5zm8u/RYyokVxLbMZhwgrKGU8eTlNqq60B2GsdeuAOn
jzyM5hCbhMtsfvZDJQhiDKeUft4qOVXlqYH+atYoE1Pt1p5RNQadbf6nbRFFwb/YJUdiW+mW
hUOUlL7sjgEurfW4CKLUmvt6dNX9VWt5M1t5IR7+oXv3LG5b9oKWrJcJdzlsNvFan3JCVpbb
bW57v00ue5S/qPwGpKq+6G25QzavPgi6OyL9k670cmuD7k2a1BCERWnnF8wy1yJJSjYcj166
q+q04aJWhSXLKf6kMOi3mRPxq1PzLjJZjQ37jJX22VR2FdwhDQqvnVRTUgfHQvUuWUlH8jeR
/H2TvOyIllmMS7pLafu9xkSO4W20US4mEwk9oOKQOKXFjYfjprWGoZS40XVz+o3AW71DnxrJ
cVoj2tVmWp11sLLBUkp9oJ6UJUqtTtp/pndkLZxs/wDUXh9vnJtzFnnMWO3QRCtb4cadmhRc
5rUsLq0AsAD1Ip89H9Sc/lkz2O19/qHwLKIs23X61XNi1qeYfiKgvoRIWthOwcUFI4jl/hOq
vqjVsmiIb/qGxb/T8bGJOOvLxx37r91i/cFTn6rpXHQ24vdwp6uKc+pXTbWv6VOXmDNmZ4c+
tbPjufiTNqK5c+eZTM154uNRGQoEJjtbJDtBxU561Py1jrPOgcpFQt6WHrgy3JeEWKt1KZEj
iXODRUAtYSPqKU709dTT4FWNRk47/TmI6ExcivRfWpCVPqje1tHIdxxSeKa7VoBv00L03fKN
JpkxaMk8L+P5n73iU+8Xm7dpxti3vp+3irKh7fullKCtpKt+Irvp/psttC/oWWxf1LYqLpJm
XWBLjSbjBjN3GdFCHgJEdSjwYadVxS0ErNPnuRXfU/VWMPRzViHyj+oi03mKzFjx5zCHL5Fn
yHnHE8vsIgR7KN8QVuKSSWx7QPjpXrS54FuCLuMzw1mV5vGUZNkE20zrnJWtm2MRlOhqM2A2
1zUErCnFpTzVQ7VprK9F9qIHukXjIvNnjG2RoK7NIm3a7QLKLXBQKtQiHkJSTISTQOthO/FJ
/wANdX9FluBVpUlN8oeUPHuT2+a5bheDd7u3GYkRZDxbtsVLHD9QMpNHlJ4niKUJ3OtL1uvK
gH+WCzxvNnii13PEpMJV2nKx2E/AMh5tKCpp1oJqUlXuWpxKfUBIrrP9L5aFyKZ/qTxHHohg
2GJOuEeFGc+xlXFVXHpMmQHF91RUVJbbSNlGpOwptXSvRPKKZKJJHhjL8qul6uF4m4zBeS06
mO4hct5+Y6FKkr50cAbQeKU16/Iaf6bf7SWNnfK/JtlxvGbFivj67zJKbNOcuDt4cR9uhxSw
oobDX/uBKl1PJNNvXTT09XLgbNvJbrP/AFI43b/H0WGr71eRRoS45jtobCHpLleUgylVUjkp
XM0FdH9OZxAP8sFasfkPxnZvFdpsDhnzr7DubF8UyEJQx96hYKkdxRP6SUVp6k6F6W222if4
xBdb/wD1LYRKuFq7CZ8mK3cG5s8FtuP2kNJVwQniSt4hagrcge356f8A87W2CyHcP6k8Ievt
lLDtwEOEZf38z7ZlCld5sIbSloEhSOW5Gx2Gsv0NZwKQI39SPjlnJLi41Fmw4UiHHjsXJLSX
nHFMLWtVY6lEISe4afOpI6al6G3sz2SwUW63DxT5Fyy85Lk2RybGHFsxrVF7BddXHYbCe+5w
SpCOaq/pp6eur+uycLg0rJbLZYPMfijGYdnt0SdeZSMUEqPb1MthuPOTKHIvPt1TVKVEhIO/
5ta//PaNrIWbTyJu/wDUljbnj42+3/eR785bhb24jbTaWmngngXfuTuUfm4gb9NZr6I28Clw
9lS8oeZbDleMwrXb4jkSdeHYr+YTaCqzESG0No6dwDdzpTYDQvVzJllol+ZfEzOVWG4Q3rq8
mFZnLBKlrbSlxthSAGn0gnkt3lXkRq//AD2W2TtyIxLzX47x++TmWJt4k21yEzGj3q5cZsnm
06txSe0qnFujlB89z6aX62+UPySVl/qNwYXy/wA28G4ogypEcw4/Zbd7jcdgN1UlBSWlLUOd
OVANvjpXpb00UHnXKbxHvGS3a6MsmNHny3pLTBXzLSHlkpSVHqQDudLqVXGDfp39RWIu4FHt
LLc1NxEGLAdhhppLKA0UpW4H6lShwSaJHXprmvS4KxFWn+oa0R8hyS5yUTZsW6XqDLtjC1V7
MKLXmKKVRCuhCE+vrrq/+O3htYQOySK15d8jYhecbt2P42u5SExLlJuEiTdSFL/6hBPBuhVR
PJSgE+g1U9TTlsUZKXEgEClT1PrrqCtkSkKoSVb0O/xPx0GusBFZJrX8aaoKAgaAGpB+o76g
kMVChX1/lTWpAIJUTRJpSh3+FdDwSCKhXoKn+I/jqJoFATRHU/DpXUiD3BAp7QN/htogGCoF
ak8ga/OtP92mGSYSSaciahXSnp8tS2KQZ5FW+24321s0xKQgkkqAUajkdtTkzgHEnZOx6+n8
9HANh1QTXqBsR8KapZBJAVWpqDtT5/LVsQBAKaH47nrtqSAILBBFPp/A6tCmGSoJBAH4f8dK
LeggCAa+nw+Z1MA96lKRsd+R0gKUCr81AR/aNZYoIkBX/IOhO3z0ImwUBKh6Dfp/u1QZkNVQ
n/lpVKfhpFMHsV1NCTufwHTWIKQ6FVCnYnr8daRlgPIcQR0BrX1/DWmkbX0CBFU0pQ7Anrtr
HUGGTxVWg4ipFDv89UAggRyG1a9fTf8AHTBWBwSdwdxU/wDHfWGiQuhVv0B606j4aJgZEkbE
mnEUrT5+ulWBBlRSrcgj1oPj1/HSyaAOSiQrYKG4p/v1kIAg0ASNt6Hf462ikNFeJoo1QaBN
KjWdiDlVBTTelQaami7CVbpPEdfQHpoQSKKqVI6k06etOu2tMUGKU9uxA9wO3X56wTDFBvQV
Hp6javXQtjwI4moIFAN1Hag5dNbbBIXUgfEqO41hiEog+0eu3/lrMgAroogb1AASR6ev8dBo
NSk1CTVNaJIHWn/HUDkJSEg06gdT16aclIo7rSeh47E71/Ean4NfIFEKUK7/AA+SfhrBMNJP
0qFD8eop89LBW8ANVEUTsRQKPVJ+H4aPqNnIAFdDuSPXVKCAcQncbbe1Rp00NiEn3ACoFSSV
U6/hoIVT9QpTuPyj1qfhqNBFRJqKhRAoPmD00QZBRahy2CvpIPyPXUUBNmoKUCh3pTqfnqYy
K3SUqChQ7c/n66CaDSn0FKgVPx/hT01SVUEEp36mpqPgfhT4amPIaiak9EtkBSU+tfQ/x0CD
vq4124/DemoiENEj5q2A+WpG5JOAW0t+4qryACU+qvTSZbH4RyWhtKlKdV7UoT9VRv0/36pF
CChRSSa1NQD8KeifnpkGDiNiACTQVO3TodAICQSClKgspNVJ9aH56ZkpFF5R9xTXjskGnUel
NDKRIAompqd6jf8At0phwK4KUCDsmtSANjTprUkkFRRIVWpp0Aodvn8NIoLkonkSeXUf8NtQ
MLcJohW53I+A/wCOgEHxNQf8XSv+/SMCSlPLkKrFd/8AhqllCEGqglYUCRU1qKCnzHprUwTQ
oINADUr4cwOlB/i1diVfIADU02NNvUH5aYFtHMLqeXz40Oss5qWCoJNPT+WpC3ICTxPwBoPi
fjrQgSOJJqajanwHz1AkAKI3BqSDSgFKfPUDqEmpRuaAnf8Ahp+hqqwECCSFbioAV1/hoFhk
7AK67hFfXQYDbB+mvvG9Om2tGlUA5ElJVVVPT1Pz1pGHaHkAUSKDdQ2p8a6mbSlB0rVQVQj6
h6Eaw7F1yGtfRSqgbHkfx1JkJqoH2r26lPoQdKJoBKqBJO5/sHrXWwnAkLUPWp6j8PjowNZB
7ztunbZJNd/lqckw0lRTXkSB0Pp+GiRCCvaEj3A1JUN6H4ak4CfASlO8vkd611qTLF91QVv9
RFB8x8BrnY0mJKjUDqkfx01ZMIOEE7E/h1+WtN+QDSCFglW23ED1I1OxpVACpKaGhIJqR031
A6hEUUB9JJ2IO2rAVWQi4up2PGvT5/DTEi7ORQQs/wDqB32rt8NEwKecgUTUVqqnX01JDbIA
FHYDiDvQ9aapMxgIlagdq7fhuPXWYIIqUd01Jpt89dEad8ANByIFPWp30ZMhlZUSk709fTfe
ulYICgs7jcD1pvv01djMPYChRGxoa9NSYpCRxTQ0+qoHpQ605GEgUHRXX81OpA1SMhoSE0NN
twadd+tNElIkVNQKU3NOv4aWZ3kNKaFSgd+pSf8AhqqxgSgbFVRxO9a/yp8dabQZFEVqofHa
vTQh+AgdyRsr8yfUV1piGGuRJJNetR6/LWXbIRImgr7SnmqtabEEaTLEgJNFAU3JV8DpeBkU
UniFGgHy9T/4aFYn8golQ9qeQ6EAb11uvySYKFP1emyfgSfQH56sMIBxChxSRU716fOmucqY
NWq0HQ067k0KabV+GtIw2H2/aSahIryPw02aWyidILiOJ9oNOp/jrHZTsUmBaQlSqq3H1bau
5NBJFf5VA66iQaWlcid+ad+hG1Njp7IurAhl5avahSjXbiKn+zS7VWwSkIoUFqBFaHooEH8C
NSaemalLYCUjf5Efwp8NahSZt8BoR8FJO1TvoaMhnpQ0+R/jrKyMQGlla6hKFKqKkJBKttye
Ir00tpbJJiVEhQH1dN99vgRodhkAQeRJ6gip9T/46ZYoP1NUk1NVEfAfDWWDCCgRz5bfmB3/
AAOlGUw+dEnjuPUD4D11OoyFQUJO43rU+vpTWerBi+PQ+mw2+PWmswyVhIJpQqrTb+Xz10gO
woJHEFR6dCD69dZNQBArUKFCDuB8TrLQJCRz3+exp/uOtCKqQARXl8flowZhgVRNdqb1Uemh
GogIrTTjWlBvpgmwyolIB+mta6oMNyKClcQrp6EH131mDQFK7hG4JA4pB9ANgAPlpiBEkAqC
qU5AbDen46HBCgCqqTSg+HUU9dc2jIRWmuw48N/w1JG5Dqso39Dt0J/jpYITyCug3H8CNECK
SkkAjcq3oOlRobCQKooio6/m6CmhMcBmgAI2OwPx/H+OhhAdUpAQak9Ff7fDUMgBO9Vcqbkf
GvoKaGKCSoAAV959CKj4/wCx1FIgEVrWnz6+vpqgkzq4KUOxI2CuvzPTWWxEuEJNK16VSDpI
BUvkCTUlIoRtt/DQLAshPt2VWhSR1FPTUASgEjnQ0O+/x+Q0CsCyslIondVBUegGoQDtlISB
7h6E0FD8RqAHLkkgAjf3AfAbV0FIXs4cKb0rxqfj11CM7c0FOLpRQCFE1oN6bUJ0SbNk/pgs
7lxyW7hVrRcYrNskVfcYDyWJJA7QQpQIQ4vcD4669V0l7kZJzxRhk9eP+TGFWl527JiGK1KG
/F+pUuKkgH9ToV8TsNtWOqfMlhWlGhXjBcVuNksibnYO1CiYwiQq/odLTERxpsL7YaSAgrcU
SS4upPprN1lr5J2KLkfi3ELdjt0u9vguKubFvjunHpEhHdtfeRyMy4KCqqWrqhug69PhddGU
MfJ2LOt+K/H8tNk/bUuIAu7jLAYUVurSlsuFQ5F10bp5ep+Gt/1179UVrQslZ8wYzacZyCHa
rZbBZ2kREOrhOSRLmFbhJ7slaSpKFq/wJPTf11zjJNyTfgfxvi2VKvVyyNDr8SyhkNW9orHe
ckFQ5L7IU6oJ49Ej136a3aj6p+RWC75R438NYtb7lkdys82VAjTo0FEFTzkdKO8hLiilJ/UV
RK60PuOivqdmlJm11XMEVJxvwbaMcteT3S3zJVsyKZKahOLdUgxmGOSULMdG6z7R7fid9bXo
faJya7Mm7J4d8fGBBQ5Zpkj94tj13cyBTpSxAb482WSlIDdSj4n4nXN0eZeUTtgrPkSweF8Q
izbKq3S139FsZlwppf5lx99J9ikfS0nopSiOmw009TfIO06RUPIWJ2GwYLicyJb3YU68NfdT
Jst4KkyVFCf8uOkqDTArVCjQmo21p1hvkJ4KPj0cyr5AaRDE4LktJ+yPJXeBWAWylHuIUPhr
VUm4bwYbfBvmdYHamv6iscgqsgRjcoxm2YkdlCWHO0iq6oAopCFBPc26a5USh/Q3OS3Kwayf
bZ8u6Y4/cTMvrSUMxkpZkuxf0u0EL2KWUqUT7KVGp1TgUyAnf09+OpVxnQ4r8iKiFcUNyXag
l5vsB77GBU0U9vRZNSnUm9lPlEDiOCW2ViPlAR8X7dzgOCJaEuAyJLIFasJcNUlxHEFZTvU7
63ZJRklbGDG8cxW53e7QIgiSURJUtqM9LbaUpCEuOBta0qI4kp6632XDBVcno24/05+O5blw
tMCNPtEmFJiR0XaS4HUyBIAK+wk+08U19K1+WuEvyMmd+QcFwWL+1tWPHr1Aji6ptsyZL3RK
a5hPKMVGrjjgB48RQDc66UWcstnP+oLCJLPlJq04/aFojSo0aPaIcVoJQtaGwFIZSkCvHqo/
z0UskgiTrgPiGxpsV3vubxLhLVbJ6bSmxWsVfTIolTinCndXAL6JO2+rs5xgqtfU5Wnxni82
4548q13ENY6wt21Wt9SY3aCm1LS7PeJ9hATyDR9xHpXTHyXbDbg6+P8AC8VunhbLLyxbH5uU
RUts/euIK22wtaSUw0AndLe61EV/hp9lYayKeDRrd41waZgVxt8fG41ruDcJt2CueFJnIU8A
lqTOkj9JBUo9zsoJ9uxp01jMzsrWyMsg/p8xGHhNrs9skxk3Sbd2WJeSylIUtxAaWp3tpQrg
gDieLdfTc6k7T8lK1wQXljxPhdlTgdstKPtGLnNdjTX2iJdwlBakJ7oDdUrVx+kJ2BVTQpz5
BttwQFuwXFon9Q8bE12VT9jbkNoTbZTvdWQY/PvPqbqKV9/A/gdb6xSZKryyawHB7K5/UBfb
fcLYpEOEZ8m1QlsBccJSrgytSF7dtIJLddiaay9FtNlixfxNg11wHELderXOau1zlTi5JjIC
VCTyX+pNURySlDaAEI6V1m23DIgnP6fcVRZBkcaVNn2+JEmGTAQ3wlTJjC1oSWBSiI6Sn3K3
qBX11tt6CVElSyHE7OjwDi9/gWYM3R6c8LncyFLW4ykODmpX5WyQkJHTTVxZ5J20ZaqHK5nl
Hc4ndNUKHXb4a6u68jBoHiXx1Z8lkXebkRmM2awxDLlMQ2yJT5KuKG2uQ+Rrtvtrja2YQ6WT
cpvgTxvcrlEefDlsx+22qIxGjtEMPvyJKnHFPyHFAnnxoCKVr+GuanSLsVGT/TPjk+VSyXJx
2NEvaol7ddUEKjQG2Q6qnIULlPznbcba2r2QVdXkrGPYjjk/xV5Eu9ohrkTG5zMSyvuNdxbc
LvpUntrANFuI3eKRsKa1bD8haeq+Sdvfg7BYDLeNRf3Z7MgmEH7t2SbXzlrQlaqgUSlAcrQm
vQbk6xWziZGzjQjzP4T8f4hhYulkly37pHlMwXkLWHEuuOGiuQSkcKcSQB+GmkvZWvkpfhzx
nbcuvFxTkCZjFutVuenraiIIedU2pKQhHIddyQPU6b34Q62aVI8D+NIrCspeTeDjCbfGlosz
aQq492W4W0VA32CalPp/DWJepJMgMDwLEnPJuWxI9vmvQLXZ35Njj3Bkl1LzrKaOPIUKpUOZ
7YUKnrTW7NpIOGzEvsZzIWhUV1LzR4OoUhZUlaRuFbVB+Ouy9qjJVTNvvXirxHbIjePPXSe1
nEiBEkx3VgGK5ImFKUtNoAB3UqtPyp9dcFazXYndJlvu39M/jZhVqgwZ8xUxc9iJcnO4lXcS
UKU7RITRokNmmjtbYuwp7+m/xc7FcuEeRc24UBpNxnRgUuOORXWVOtsM7bLonr69NHa3kFaC
s3rwLi1yjzFYkuem5ybPFu1ksk4BL36sksvF7auyAPb6E6e9kDyT2M/05+LZUm5mbcpMmHHm
ItEYNuhomUw0gS1cuJKv11KSkelPXS72YyuEed7/AGRMO93WFCQ8/BgzJEZh8oVyUllxSAVU
H1bb671skvkEpUwbBevC+DW60tWGMxeXcuMeE6u89oqtSXJjjaVBRGyUoDvr/vOuS9j/AJN/
YrWLI34A8aXCa1Agpu0U2u6tWy5vTKJRMAQpS1RjT6SUbKGsdmhleB7/APk+eIQwq6D9y+wg
tNzJcZTgKlpmNhUdCFDp26+6vXQ/ZbyPY5XX+n/xZaIEm+uR7xcbfBUqA9bIxC5D0pL4aLyS
mhCPkKfHSvZbyPd5GDvhTxDZry3bruLjJdu98Nns6WHEjtgsNOAvKI/9suHf1+Gr+yz50c62
WoM4meO8dhYLnF2CJMmfj97Ta4EzkEtJZS921KcBNFKUOoA2211Vn2+wOz6rjJn1mNtRdYTl
zQt22JfaVNZaIS4tgLBdSk/4lJBA12um1Cwxnqa75fmYNKwDH50GwxbHeLu+7LgNwkkLbtLR
WykynD7VOLWEmg6a89K76vRO0tSO7jGwrG/NMi1M4/HuinodqhWO3vbQ0TZbTIU8/wBSRxXy
2BJJ/jorWaS3hG0224//AMOqbTglz/qGlW+Lbo37VBhy0TISGyiMq4QojoeUho/lS5uK+orp
dbKq+TNW2m/0OHhW7YAMSlR7xj8STFtUGVccnu0xvvSFpWsNRmYaU7pNFAEq1q3rbe8sVb8S
n260WJPhmXd3o6TMGTxYrj1KuJiiMtxTKVeld/46OtnaE+DCfVKXyW3yleMNuXiW2zoePQrN
Juk5z/TQjNUfRBgr7Uhct7cLUuoHEfH10+uvWz5gbfk1I3vErFsS8hWNxFhj3PljVtTAgupA
jm4SkAIkvpp7xueY6nRSrssv8TXeG4JydCwy6f1NWSzsWuMhmJSJe2WGO1EfuLLDzjhQwa0Q
PYN+tP46b0a9f1CryefpCWg86UbDkrgkdAOR+Hy161WFB56PGT1NjmF+LlY7ZXpaWjlAw9x8
W8oBQ6y80tYmrBG60EKTyrXf5DXzobU8Sel23Hgaf1DxIttwBUG3WJDDPegtSLgm3paQ2gNp
UFImg+8qcog+31I119FVJzs3bZlnge02W45nMavKAu2N2i4OSuaUrKEpZoXEA19yQeQ11984
jydFVQzdsCx3BotshycfhKucFFgjqiyXIbciY8H57oUtccgBSvZ7j6DXmdXOQbf7DrDnbA27
N/1HboltuF6yZy3xI78FpPcDURC2GlNK5pZU80kGoJ9x+ep1UyuESeMbIq5tKt/h9xdhx3nJ
MOe9w+xYkMNj7p0OIefUoLSptvl0+H8Nb9Va9s6M2f4nm7xjCtkryBjkW4hDlteuEZEhLpAS
psrH1k+h9den/kV/FsvXs9Qqs9xuaEIyqywIeQKl3uHYIrbbLan4Jgr7KQkE8qqoa/H4a8kK
caUC+M5KdjIuOFZv4yxqRHjRp9wtQg3+K62244OUlx1NVVNFk7V09Zo7fJ05+xiHkbLrlk+U
yblcuyH2OcRH2zSWUdthxfCoT1Vv19de+tFVQkcrRGMs0e93Lxjj+JYzZrhizM5++Y+3OlXq
OuktmQ6VpYKN6DitJLny9NeSvqdk7TydPY+CT844hbLPh0pdvxa2w7JGXBFjymNJSqXKDiU9
0qaFS4FVV67ddXpqm9uTF5Zn/myFZY2UQW7RGhssO2iCt1VvXyaccUlXNwj0X0BH4H112/49
WqmfY5bLN/T85MTbcjbxp2G35AUYf7QZvbqIYcP3pYL1UFXb+qu9OmuXvX5ZWDdISx9yleZZ
GML8mX5zGez+zKfQWDGADBX20d8t02oXufTb4ba9Xr9DVV22cbOWU4+7cfGvw/36GiQggqHL
ffanTb/joAVzSnqQSdwNZyaC+kg1qAf9/wANbkEAN8RvUpPro7SDQqqkAKoCoVoD8NZCBJPt
FEUUNwR1A+B1C0AEcUg7U9BQ01bEXxVUU/FIPoaddQoS2VcQFVqdqj46ZKICqQogK3r/AAJH
w1NmUhXJXLkATUbn+/QLUgKSaDiB6hXrTUCYdAKAbg1O/QfHbWTX0ApVASU8VetOm3x0ODMB
FAUCoUHrX0/s1CGNqCtD6n/cNZbIIEgkhVCfadv56gC4/EGnp/zH01SUiuC68hQKV0I6Cusy
KAs/UDTkB6/+GtI0Bs0KKe0nciu3z1kEgJqqpI4p329aDVA4AlZKaqJofy9Pntogg3AeIJBS
QPxFNZQA3FdgCNgfx+GolAOQPQfSaV6Db0rqgUw9gElIqog0+X4ajTEg9d6KFOQH9ldBmRKl
EbkAKrTiN+nw1DViqpCwaAdSkHpoaEWmgA7ieKSKUrWlOv46zAoJCkAkioB3RXf8aDU0LgIK
dUmgFSk+2mwJ+f8ADSAEgArWrr0Fev8ADUxSgPnQEAhQQKmnUj066AYmpr3Pz/VSu2rJDKKw
l5woVWlFKomm/EV66zJ0NL8E2BzJMglWlV4m2dtER6Yn7Bam1PLjivBZCk0TQ9SCfQa9itZe
rBdcy0TPjnHn7xasukKu8mMxYYTspq2MynWe88okKceAPuQkJ39VH11zXsv1xqSV14LRP8Qy
pGP2Ji1ZOpUm6WlF2ftEuS4XJK0pC1FpgfpJZZTsFK/ho9nu9k54KU9lVufi7LrZAuV3nXNr
9n+1blTLqH1rYmPK3aiNK3VJdCqddkkfLWV7LNw/JKIF5zY7iz48xG+yMgn3hy/Ba0RJbquw
w61RHFpKlK6HbmfTfXS97d45+AUQVfN8UkY1OjRZV2jXS4PsJkSjCWp5tkqFA0Xz7XFpp6dN
cst52PVQTfiPEM9yO7yBiVxXZjFb5Tbol1bPbbcJ4p/TqtRWR0Gulfc6qIlCkuS2XHwFnq35
rF5yOE1bmJCHXps+U6EOSpI4hzgrke6sUAKvcemsW9trQkjLrXZwa8A5lOaREm3+DHjRZL0C
0MSZBUXH0kl1qM0KpBUoHkAfiTrNr3tsEkh3a/FnlCRjZtasobh26Sl1dvx4zFKVLaZPvdbZ
B4luoruaU3+Gur91uVlGoqyKyDwdmSbbLv8Ae8gt7lxjQxcJkR6SXJYjJFErdK6e0ABKd9z7
RrH9jmY2SSWCmZXid3teO2a93e6NSXLs1yh25Lin348UJCm1O9UtBXL2o/vrrTtZvJWwQ2Ny
p8O+wn7fOVbZndSli4JqVMlftK08dz7VHW6NzoxCZsWT2TM7b5utGNRMwuD91ksMRzkEpaQ6
0w+eTqWkk8QDTZPUmms1u8uF9BgskXCswdnZpMgZvMgTbZdG7fEkzpnFr/KQlUiUsj3KLa+C
EJApofttCUL9AaKbM8ReZ4c2MmNMXJl26cpNrLT6i4y9IAcfmK5EhlJqCpxXuOpe9+DSSezp
jFpyoYZnbTGavi24yHW3YVuUSzMcd5d57vmiuDigochurc6024WECrHyV2yedfLLUa32S23J
AYbDEWHDbjMJqAUobaBCfzbCtddZ9c/xDq3hMv2X4J5+uBNwm5Ezc7nBlNvtWWBIoqI++Qlt
QQkJCBVVE8q7V9Nede6HhQPVPHBXM8tHlqy3ew3q7ZZGu15Ev7O2qZkpdMWSuiVpCClKU0r7
jx/E9NdfXeX/ABx/kvxkaeYbnmmG+T2FKyOXPvFtgtiPdn+3ySJDZ73abSng2FEkUoT89Zpb
bhGHXOxHiOP5fuzV2lYte/2i3qdrc7nNfCGlynvdsXAvk6qtVKGtP3ViI7M0vWssj4+EZ7Je
zO3rvLbVttBMnK5zko9iTIAUtLaqci+4sg0r67HfWP7WuCVVBO4dG8io8M3y7xcjNtxq3tLE
eyxw2ZD5fdCHC6R72kKKjT1Vv0Gt3vlQjT6rZdl+Oc4yXARAmZ67cDFDTci3to5W9DtUhMcy
0D/qFNckg0JAV16a539jnCSf7hCWCHz3wuxjuO263zstnftLE0D9eGpEGKp3/wCRKUtPX/Cj
c1Jp8dap7ru047BOSJzbBfsLbhtwxbLpN/uU2QLdiTQR9twbbUSXWFGnEIdIqo+p66y7uW2s
g4eFyRWI2bL7N5tOPx8lYi5DMX9tcL81/wBWCp1IedQO4Pc6SAn3U92tVviWpHpLks2COeQ5
fmjJLPGyV5LiFv8A7xdu2yp99mBVDKUIcBSipIAA+kVOp3UaBJxJK2HH/M958d26dZstKXb+
5OL8WQ6hsoaLqjwZc4lxbzq+RUpJFPlrm/ZGGkVUUyXiHnBpaXZUhyOw7Y3Y0qap0BiJaGAQ
WFrA4tlQRslPuNevXWv7fj/XkUlqdHa85FncHxBid6fv7c21SpjcePjwjtpjtog+9hqQQApy
hZHJB26ddaX5OISM2egWPzP5fzO+26z2yNb5M9T/AH2GftW0tktpP6jxUdkNA8q606+pKWjV
VO2abfLd5+UjHBbLrCN8mGYuaIDTTUFmM2EpQ444Qru8uXtFNidumuX9lU8VFopF8e/qMxCT
eru/cy6tIhtSX2+3IS6XiW46WG1IIqN+SqD563X20nNSanRH47i/klqb5DsVyvjkZyNa13LJ
zGcS8qTJU13W2C6aceaVHuqT6e3S/asYKIlgxCzedGfFC/2adGteMSI0iUiI6ppqW7HUCp11
FU8+KxWhKunT00V9lZ1L8mbVa2x9ktn/AKhUeOWbrerqkWO2ssTvs+8hEpDbRC2VvcUpUS37
VcSo7/MazX2y8IeqWyr495C8uZ1mthhNTE3W5QnlvWyLKQ2mIhxDauT7yUhIUUIrRRqR6a6+
y1FXQz42anKnf1IN3CLb0Gzd26MSm0XKIG0tRO1xU6644nYKQKJFQRU/HXFW9f8A4sVVeSs2
Uea4vmOJYZuShF3yBlD86dHW0+yYLIUrk22ocAU0UEAJG5+euruusuuDNYdt6M5h+U8/xfIc
iXaru6mbcZqxcJTqGn3HVR1rbbUoqSQCE+idtaqq8qQpDNUwQ+e28DizrQqzi3SW5U5gzUtr
mSVLWt11xXLdS1Hkqp9Ka5O1J/iLrGJKlcfEPmS729rPbi82/OdbjyGmi6Ey0tjimOQkAJRw
HEinT8dL99ZhVwPRcMumQYZ/UvIcgvPXtqe7bneMJMZ1CVd15BjrdUkJTXtJcKVKV0qaaz/d
X/xKERysb/qEx5SHol6alMRorspUiK8h5vhbI4aDSgU1UsNqASmm53O+h+1eAhJMY4xjPnu+
5VGymPcQxeZtrRKVcZK0pS1CkqWhhpbYQUIU5xUtKQnYb9dL91X/ALcGuIGGPxfM+JSssxq3
XlFsTYIyrreVrdStH6ieaVNKWlX6jw3rsfjvp7ptYObiG50VjHPPWe47YmLNbFxEQY4cUnux
UPPKW8suLUpa91qKlHc67v1Uy2pYzP0NHyu15a74wuK5XkNy5PY81Fl3Wzst0ZbdWpLrDBmo
/wAxxKlJKU8uoG3TXGvsh/xWf8DaFxKItdz8mXeT4/duGZLYdvzE2YiQ+UMsQGWUqaU6pSeA
cdUzyAKvU0Hx1f2qWlVE1DFZ1EyWwt45GsGav3a05mpuKwXW1R1FDJZjtL4Eci0lJRxPy9a6
FdKW6oIySuNKypvM8itc7yHJtUezT1W2K6loPyZsqa4Xl9uMAr6lglRoaVpsNZVm9JE0VaRj
+WLznK7W5k633MEbmZFHn0ClOTEpbUV0JPFxSlJC61A47a6W9i6r8csNp+UObJ40j3zAJkiV
mLguT8R3J51jaQp2MkjmpDkt0Hil1fFWytx8NT9zmUoOl7KMIoVv8e/dYtjN7VORFcyK7rtK
UvkBmO22E1ecWd9lKNfQDVb3NtpLBhxiS0+WMAtlsxGxXm0ZO5f7cp9dmt7UhnspShrmtRij
87PcCuSulT1OsU/Fw+JMvagr/lGDe8X8oy2Xrq7cLxbzCfF1coHe8IzTjZHw7RolHyGu3psr
ViDSsptBZfGPizMMwtL+YWK7ONXd26KgTVEgK+1lo/6ySpw7lVHd0+or8tZ9nvhxGhotSXFX
gGyWDFL/ADpGQ3FiB/1jM5MVlKmlxoDx7If+HubCqa519lrWWitZQYNjxul0ft+MGU43b7hP
ZUuODVsSHKM97iOqkoVQfLXruuiduTMy0mbej+nO9LuLFnv1zlN2ONdzbcfClIVyhOtuyX30
pBVwUtTQFPjudeN+58f6YpKZgfL8FY9cvIDdndvl5el260iY6H22xIQ00tDMJtlVSKJ91B6U
Gn+2yrGDUcjm0/0/29zL75IXfrs7LhLiPMyo5bE5D81tbj3fXX6wOtD0VrL9tnAp4iDzlkcW
zRcjuUeyuuyLTHkLbhvPji4ptJpVYHryrr6FOyU2ODaejV/+0mVM44xdhl0QXdyxplx7Ep9a
JKrUtNVtAkj2ALoEj2nca8lfa5mPxT/c7WSWtnHyljmU2jHHo92zxu+PwXYzVzxpMha3I61p
q1+kpVFds/Lbrp9fst4wZstR9Bta/E6oeePWZ3IFNW+NY/3m73GGlXMQy3yfjhFdyU7fMenp
pr7n1TjLeDaqlPwR2dokYVc7U9id+uBsF7tjc60LK3GZDMZ1aqsOcSBs4muwprXrs2nqUYeJ
RSJV/vElwSJE6Q86HO+HVurWvvbDuVJ+sUHu667TiGDcaF/6ryMQjCRdZaYR584wfdDau4ar
qnlSiyaqHqdUuZCSISpSCVJNCk129Kf7taRMkH8gu7zrEqRMkSJMegYfcdWpxsf8i1KJT/DQ
7So4BVg4SLnNkP8A3bsl1coUq8palO+36TzUSqo/HWW295RrQ0cc5LqobqNTX/fTXRuTB07z
poFKPtFEg7gfIV6aFqASyLclyXUIQt1akI2bQVEpTt+VJNB/DWq2aUCkcqrUn1HxpQdf79Z0
QoLU2aJJrWhNSDT5azL2FvgBqUmu3wruB8tPZsEhIQTTb169NVmxASoV6bipFaUHw1mQDqAr
+FadK/DSl5FWaFBRoVUJ+NOn/nrLKQuZKBU0rXaux0wQBUppsQBufkd9WhkMkBINPb/LWUgY
QPSgKQT7tug/HV1MyA03rU06Gux0iABW529d/n8jolGswAUpTluk7JpXUzMikBylOVSrYfCn
w1zk2kBaEbcQT6bfjraZhrwJWqgHodqj4b+mtQDwLJUV+40IG/8Aw1hoU5CQniCfzH4awMQB
PGhNPaaaminAVOSgR+PL56IQZBVKSFVVwOyCRv8APU0xQaQeidwnoT0roNQGgChr9Y606D16
6hbwEOKTVOxVWgHTbUYTAklWyxz6mnTfqdDGQD3FJB39a+oPSmhMQHl6g1B6ehNdLRCgqoVU
UI2ToJ54EBXFVRuFdeQ0yAZryoAAVdEnr8aay2aAsLFAaAk+ny610STqEs+72mp+Xr+GhlAZ
UqnGnJKT7h8xoNBrUFGvQflOx6iu+ogBaCkE7hND8CPx1A8iU0TXluAag/I/GmgUKK015VIA
AoT8x00DID7vadttvj/bpQHKiqU9P8G3Wv8Av0gcYpkoWVsDkogjoCBX8dczsaV4WvuY2KZM
lYviyMhuDyOwZDrD7/abVXm22WlIA5j6q7011rekRaSv240WTD8tzq3ryNNqwmLcnrnUXhww
3zHjMFO0WiFhDbSOvFRqepOntTrDkHW0jyL5L8qSrCVQMZbCbZEVa28hbhu84jPEJcbbcr2g
pSfXfj6a1e3rmc/QnSOZGtzyDy4jCJFwnWVqFh89luBbmCxRqC23RHeiNKqptb1ePcXUqJqn
WXercJZLpyyVzhjyVJ8YQGZuHwLFj1rCfs3G+a5jLSiCQpC1rU0HlU5VHJVaeuj+yismpFKS
kZqjyfkEy3yr5ZH4jpYTGtVtZgmK2lkVVRlhIrvyqT/cNFr0nGg6kpgF5z/Apsm3N41IluX1
CEKtLzb7Uhzt1WgtlqjoAqa+hGulfZR1hl0ts75FmPk3I48/D1Y2uPImSm7jIhQ4zxfbbYQl
DbRqVKptyUtfuJ26alesJrDRVq3lkRLd8iXrFbbiMeyyjFx16Q6tyO06p9Tz5Kld5fRPBJOw
/FWtP/lLt25gLeuzLHC8w5jCsdphxcXaVfIsZVlsl9U06p5LQAQpthn6HHAOpHU9dV7eu1pU
52NaN4Kvmk7yHmct3JZVnkNW+Oy1CDrLThipTGHDj3Ff5iudVKJ9enTXN+xZS0UNHK4T86zC
yY5Y4tlLNoad+ztDcVgsomyyPe4t1R/VcoKqNaJ3On2Or0Dnkl/+z2T4jLj369Qod3tMRxgN
xGZKuE2U+vizDjKQO485z+rgPQ12ropeXmUZ65lFtz/Mcqt+YWnOcsw2HDMBxTUKP3zyfnNI
BDjjqaqUmONkpA48q7nT63SYm0Gur8kZ/wDlGh1y7JlYjb5EK7yEypEPuuDk82kBLil0qVc0
gk06Cmun9dI2wcke3/UblIuTs8RIxXOkmRe0p2E1tKO0xDJUFFthlv27e5XU+us//X8hXJxs
HnONZ7TkEIYvClTMhdU7PeccWiL7iewymKkUDLANAkK93qdb/qo6ppgp0hWP514WtjMFx7D5
028QlIccuLssI5vghanAhFEgBX0oA6baF/x1/wCYqzLBev6oVKvTs/HMfZt7cmWy/cJLyip+
Y3GqGW3Ep9rfWpKSfhq/qonDcgnJUMh8o4pdr3CkoxCLCtbcv7+7ssurMuc9Uq4rllPJDZUe
SkpG/rrS9aTw3HkW4HWXeZcfyrPLbkl0xWO7Et6f17eXVKMxxKaNF50poENGhCOO/rrK9ari
cPkFdt4F2LzbbI8G92rIcXi3e13e4G6It7DqojTDoACGwEg/poCE8R8RXWv66RuGic1tGyBh
eWpMAZI5CtEBiTf0dhh1DdEwI4BT24zZqnkU/nVvXfWOqagy7slsM8tY7jnju64z/psT5t6C
lT5z8hXbcUn/ACOTSU1o0PQK3Ot/1JxNtGqNveCakf1DoZxlq02XHmra6Y7MKXIbkO9lMdBB
WiNHFEsle/JYPLcnrrL9dU9jMkb5F85pyTG38ctNoNnh3Bxp26yFSXJbjvZ9yG2+5s2jnRW3
w/HSvVXcyFbdkQGReUFXnJMfuTttaTAxuLGiRLWFKDa0MHkvmU8T+qr6qem2hUSwv4hVt72P
sf8AKzDfk53Pb9am5kgnuRLfGV9syy4hIQ0U0CqhtA9epNTrPScTg1V+Cdx/zhjFq8hXnM3M
WH3NwT24cZiQUpa7tTIdcWpJ7rr5PWgCRsNdF6k1DcGfyXGGPrf/AFFWeBaYbLOIx/vrKuS9
j6lPOBqI7JUv8tOSkpQviRyqr5aw/VXhisEYv+oWd+xybCi0x3bPIguM/bvnmV3B083Z7yj9
VXFFQZHtGw9Na/rq1vP+sE2oOb/lLAJfj6x4M/jchEGE42uXdC+l19PJVZT0dBCU914FQTyN
E1+Wi3qW+2S7N5aJO1+TPDGI3Bq74njNw/cCFRZJlyRx+0dTR3hRTnvP5aimj+lc2F2xBLWv
+p+z2l2LAt2Mqg49DirjMRGZRL6eToWCh1Sem1DpXoq/9xWb2RkH+pR9jLb5eZFoEmBdIzDc
S1OPKXwdhnlHedcV9ZrXnQD0p00P1V1P3BKHM7IXx55lsuP/AOpbhkdmev8Afck7rUyT3wy2
qO/u4yUAGgUompG9KAaH6VEzBTg7zvNWMTsaiRZuIMSMgtUP9tt1xdeWYrLAJ4foA+7iD0J6
ga0vXWN48DX8nkl80/qNh5Vi6sfk2h6I1PEZu8ShJ7oQw0pCnRFZISkLXw9qlE0rqr6qzhks
7O+IeSPAWKXyPerJYbsxNitupQ648lwUW0U8eHNVSqtPl19NZf8Axm3HZC7yNo/9RFhZdFtj
4ohrD1xZcWXaUyD3XXJziXZC+7QcUkppT8flrf8A+eq5MzOCGx7y5htr8gNZK3iKI1tt0Ux7
JbojxC23Kkl591QV3V8VFKfRI0+z1LTsalnKFk/g+dcLtcsjxyemTNmLkRYcF5IYjxyBxSVF
SFKcWvktZO1TttrC9TemCwQ3kHyWxe3rPFx6K9ZLNYYjlvgJU93JCmXz+oXFpp9SRxKQT67n
W6UVeZJryaBcP6oDMsMGA3Yy2+lcUTlmSsslqKtCylhkABCl9sCqiaVPXWf6KrkMsOF/VPcG
JglCyslSg6lSO6acZExUl6hA68CG0/P3H4aH6K+QaaZEnz9Eg3mxrx7H27TjFmVJLtmS6pxU
n70Ukc3FAU/5dtjua6v66rG/k1ZY+RzjX9SUq3ZDkN0uFo+4h3lUcx4kZ8xlRm4iO0wyhaQa
oCPq2FVa1b00wkyScyVCR5kubt4y66ftsYy8sj/aOFdVpitU4kt8q818APcr1FflrS9NFaW8
BxBnaVpArUhKaUr1/HXTYos68yfV45Ywtpgtx0XNy7SJQWR3VqaDaGuAoKIpyqfWmuTlWduR
6CMmzFy+WTG7T2e0mwwXIKFk8i8XHi6VnYcRSgpoVOv5J5CWsj/IfIzt7zO1ZCqMWmrUiAzH
hBwqHC38falRHt7ikkmg20/1/j1kPkXbvJ0mF5Uez4wkPOqnSJyYClENpVIStKUcwCfZzG9N
6a3b0169ZGjcDHGc4l2MZIpTaZT+SW2RbX3HFGqDKWla3j1K1e3oTqtTtE8GFXENnO25vLt+
F3zGmGhS/PxHZEvmpKkIhqUoNhI2Vz5b19NNfWu3bk22Jm5c9Lwy04wpkNtWmXLmJlBRJX92
Ep4FFKAJ49fnoXqVW2tmHnYMlzJ++Q8eiONCOzjlvbtzISpVHOKy4t7jsErUSBQfDUvXFWvO
zSjkRn+YP5hlVwyJ9hMZ24KQoR2yVhHbaQ0AFq3OyKmuunq9SooRhLJOYn5cyDG8dh2K3Hss
Rrr+6ynULUhUlHbShURfH/21BFT66w/+Mm3Z8m6uXkt10/qKRdbK7BueLQrg+/8AeKQ64+6E
N/eureP6dKK4FYoD1p6azX/j18sGuDJLFdXbTd7fdGkd562yGpKG1GgWppYVRR+CuNDTW/ZS
awOjSbF/UDkNuusu5OxW5Zn3hy8FhxxwoQVsOs/bNkk8Wwl6vx21m3pVoXEQSeDhefNr8pM5
NosMKxOXG3rtr70RTvc4LdQ53OZoeaO3RHwqdK9FE5bbgJH8L+o27pcmOXWx2+7KnCGX0v8A
NIU9DZLPfVx6rcHX4al/xqPlou6mEZdfrsm7XmfckRWoImuqeEOMOLLPP8jY/wAI9NduiSiT
DUMkclzS6X+dap8gIZk2uDEt7PaBoUQahtSga1UeqvTWaetVrHkd5nJZM98z3bM7SYEyz22A
6661InXGGxwlSHGRRPccJNN9zT8Omin/ABvWvLZrukxmnzJm/wDrNrLlvMKuaIzcB5nsp7Ds
RA4qZdb6KC6+7+ymi3polEaM/wBjkh85ze8Zdd03S59pKmWkxokSM2GmI8dupQyy2OiU8j11
r1qtawjS1JXCeSQSabmhGlmZkA23qCnoBrKJAJINTvX0Hy21tKQkMfVUCpI6DWYNJAIO6jQe
m3y1JCxAoVAjfauk5MXupW9eRO/46hQCQRxP1D+PXVAhkVG42PX4gk+miSATuDTcbAeo/HWX
UOQyQFetBvX46UhjAQBTVXX1JHx1pmQCgptyJPp8/XXOMmm8CuQpxOwrUfEfLSzIEEgkE/DY
/wBmqQFN7JIUaE9On4bjWGMCUpQAVo6jYD5fHWgSDXxI5dQdxqRoG4BoOorv+OlmOQ6FXQbf
DoBrJpASSU1br1oQdiT6ayzSQFqBHwJ/nt1pqJhdaBO1Btv8NzoSKcAOyQK0VSqT6aUUgSUd
SKnprQNhlB6gbn09BrDJYCSSOv1b8Kb/APnoQvQFAVpUVJ2HoPXS0EB0cCVAihIqQOp1zVjU
ANQAaV+JPw9dtJmAvj6EmoTX4emkUKbKeBNQf8Qr/LWWU4CSRyTUgUB2r8fn+GiRAjYporpU
f3g/j8tTbMBghQqCa71PrrMCgKc40TtySNj6UPx+eo02JUoJFRxO3r00hICrinYA9Kg1/ka6
GOgKPBQUNwRQiu5p6b6xkZDKQlQUTWprQAD+3UaSAQADT8p3BHod+upIm0El0qQU7BIGw6E7
+v8AdpBMA/UTQbBJrU9f5amTYZVyJ4ilNhT1PpoGQBISoJVUVqa/P56hgLklA5/TXonqOtNE
FAKVVyICeI339DtQ6yZkTX1qePSv9+ukBAyG1UjYV3UDvTXNHc1PFfKhsXiNWLWqVKiX1V6/
cFSWFFpv7YNpASpaSFLKnECqOlOuvUlVdbY1n6mMyX/GPNuPpstnkZHdbk1eLVPlXGexDaBa
uzr9VIS8UKbbSlNQmiht+A0P1p5rENfoCtnQ/Y874eGLfeHpFwZmW+FJh/6UbTWFJfkFRDrr
[base64-encoded JPEG attachment data omitted]
------------0860880181AA81244
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------0860880181AA81244--



From xen-devel-bounces@lists.xen.org Thu Feb 20 16:19:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWKt-0002Um-Df; Thu, 20 Feb 2014 16:18:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGWKq-0002Uh-O8
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:18:53 +0000
Received: from [85.158.143.35:60009] by server-2.bemta-4.messagelabs.com id
	E2/B3-10891-CEA26035; Thu, 20 Feb 2014 16:18:52 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392913130!7140871!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2576 invoked from network); 20 Feb 2014 16:18:50 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 20 Feb 2014 16:18:50 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:51942 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGWJj-0001h6-0i; Thu, 20 Feb 2014 17:17:44 +0100
Date: Thu, 20 Feb 2014 17:18:46 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <929649832.20140220171846@eikelenboom.it>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1142136480.20140220095359@eikelenboom.it>
References: <1447395332.20140110155157@eikelenboom.it>
	<20140110151218.GA20152@phenom.dumpdata.com>
	<1087166993.20140110165729@eikelenboom.it>
	<20140110161248.GE21360@phenom.dumpdata.com>
	<1010658460.20140110171623@eikelenboom.it>
	<20140110173809.GA19423@pegasus.dumpdata.com>
	<1889333978.20140124143602@eikelenboom.it>
	<20140124174806.GA15571@phenom.dumpdata.com>
	<1142136480.20140220095359@eikelenboom.it>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----------0860880181AA81244"
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Xen pci-passthrough problem with pci-detach and
	pci-assignable-remove
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

------------0860880181AA81244
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: quoted-printable


Thursday, February 20, 2014, 9:53:59 AM, you wrote:


> Friday, January 24, 2014, 6:48:06 PM, you wrote:

>> On Fri, Jan 24, 2014 at 02:36:02PM +0100, Sander Eikelenboom wrote:
>>>
>>> Friday, January 10, 2014, 6:38:10 PM, you wrote:
>>>
>>> >> > Wow. You just walked into a pile of bugs, didn't you? And on Friday
>>> >> > nonetheless.
>>> >>
>>> >> As usual ;-)
>>>
>>> > Ha!
>>> > ..snip..
>>> >> >> [  489.082358]  [<ffffffff81087ac6>] ? mutex_spin_on_owner+0x38/0x45
>>> >> >> [  489.106272]  [<ffffffff818e5e22>] ? schedule_preempt_disabled+0x6/0x9
>>> >> >> [  489.130158]  [<ffffffff818e7034>] ? __mutex_lock_slowpath+0x159/0x1b5
>>> >> >> [  489.154147]  [<ffffffff818e70a6>] ? mutex_lock+0x16/0x25
>>> >> >> [  489.177890]  [<ffffffff8135972d>] ? pci_reset_function+0x26/0x4e
>>> >>
>>> >> > Yeah, that's a bug my RFC patchset (the one that does the slot/bus reset) should also fix.
>>> >> > I totally forgot about it!
>>> >>
>>> >> Got a link to that patchset?
>>>
>>> > https://lkml.org/lkml/2013/12/13/315
>>>
>>> >> I could at least give it a spin .. you never know when fortune is on your side :-)
>>>
>>> > It is also at this git tree:
>>>
>>> > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git and the
>>> > branch name is "devel/xen-pciback.slot_and_bus.v0". You will likely
>>> > want to merge it into your current Linus tree.
>>>
>>> > Thank you!
>>>
>>>
>>> Hi Konrad,
>>>
>>> I just got time to test this some more. Merging this branch *except* the last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd)
>>> seems to fix my problem; I'm now capable of using:
>>> - xl pci-detach
>>> - xl pci-assignable-remove
>>> - echo "BDF" > /sys/bus/pci/drivers/<devicename>/bind
>>>
>>> to remove a PCI device from a running HVM guest and rebind it to a driver in dom0 without those nasty stack traces :-)
>>> So the first 4 commits seem to be an improvement.
>>>
>>> That last commit (9599a5ad38a3bb250e996ccb2cdaab6fb68aaacd) seems to give trouble of its own.

>> Could you email me your lspci output and also which devices you move/switch etc?

> Hi Konrad,

> At the moment I've found some time to figure out what goes wrong with the xl pci-detach and xl pci-assignable-remove, and I have been
> able to narrow it down a bit:

> The problem only occurs when you:
> - pass through 2 (or more?) PCI devices assigned to a guest ..
> - and remove only 1 of those devices with "xl pci-detach" followed by an "xl pci-assignable-remove"
> - when you first detach both devices with "xl pci-detach" before doing the "xl pci-assignable-remove", it works OK.

> In my case I'm passing through 2 devices (02:00.0 and 00:19.0).
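
The failing and working sequences described in this report can be sketched as a shell session (the domain name "guest" is a hypothetical placeholder; the BDFs are the ones mentioned above):

```shell
# Failing sequence: with two devices passed through, detaching and
# assignable-removing only ONE of them leaves pcistub_put_pci_dev
# uncalled for that device until the other device is detached too.
xl pci-detach guest 02:00.0
xl pci-assignable-remove 02:00.0   # trouble shows up here

# Working sequence: detach BOTH devices first, then remove them
# from the assignable pool.
xl pci-detach guest 02:00.0
xl pci-detach guest 00:19.0
xl pci-assignable-remove 02:00.0
xl pci-assignable-remove 00:19.0
```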

> I added some printk's, and what I found out is that:
> - after doing the pci-detach of 02:00.0, it doesn't call pcistub_put_pci_dev for that device ...
> - but when I subsequently pci-detach the second (and last) device 00:19.0 .. it does call it for both 02:00.0 and 00:19.0 ...
> - so somehow that call for the first detached device gets deferred .. but since these are different devices and not functions of the same device, I don't
>   see any reason for it to wait until all other devices have been detached ...


> I tried to capture the console output but somehow that didn't work out, so I attached a screenshot of what happens when:
> - doing an xl pci-list for the guest
> - doing an xl pci-assignable-list

> - doing the xl pci-detach for 02:00.0

> - doing an xl pci-list for the guest
> - doing an xl pci-assignable-list

> - waiting some time ...

> - doing the xl pci-detach for 00:19.0

> - doing an xl pci-list for the guest
> - doing an xl pci-assignable-list

> There you can see this strange sequence of events :-)

> But I haven't been able to spot the culprit.

I enabled some extra debugging and added some more printk's (see the new screenshot).

From what it seems, the frontend state for the first device isn't changed on the first pci-detach.

Is the signaling on pci-detach the guest's (pcifront) responsibility or the toolstack's (libxl)?
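
One way to watch that frontend/backend state from dom0 is via xenstore (a sketch; the domid "1" is a hypothetical placeholder, and exact paths can differ per setup):

```shell
# Backend (pciback) nodes for the guest's PCI devices, as seen from dom0;
# the per-device "state" keys hold the XenbusState values in question.
xenstore-ls /local/domain/0/backend/pci/1

# Frontend (pcifront) side of the same connection.
xenstore-ls /local/domain/1/device/pci
```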



> attached: screenshot.jpg

> --
> Sander



>> Thanks!
>>>
>>> --
>>> Sander
>>>
------------0860880181AA81244
Content-Type: image/jpeg;
 name="screenshot2.jpg"
Content-transfer-encoding: base64
Content-Disposition: attachment;
 filename="screenshot2.jpg"

[base64-encoded JPEG attachment data omitted]
SwrtKUhQVQ/SpNaU16q/xYaJfB7Dab5kLttlOhKRHlutIPJIcWw0tafcg1QRxqD0r6UOlp9J
g52zhMlbTg9ku2N4y+boi3Xq+vSIzTLyHHRIcS8G2AEt7NJ3oV76z7K5x4NuuRqx46u0mOuY
mTH+xil9u8zlE9m3SI6y2WZJ61cIHbKQQqusJwieHgZx7BDcwaZkXfdFzhXFmEuPRBZLUhtS
wrl9YUko/DWrJyvkGoGsixhizQrt+4RJCpa3GnLa0v8A6tgt/St9ugolY3SQdLcGVJwt9vl3
G4RrbFbLkuU4hiO2qnvccPFKanYbnRWGyLjL8SZ63JixKNSlSXHY8dxMhYbRIZbLq2VLeS3x
PBJIpsfjo/HgJcjJnxrlbrzbUNUKX32ZDqJUaW24xSKlKpLZc9vFbYUOST6fHTNYOikQzhGV
xpqUR3I7YSymSm6NS20w+y4rtoUmUCEgKWOH47aLRs5vwdBgOdPuTErjckMdhyXJcktJjlEk
kx3i8tYbWhfE8F1661gW3EIZR8Qyty43O3N25z9wtTC5NyjFSELbYbAK17qTzFFA+0mo6aW0
wq2iHBKga1I2UD66yiJ2yDJkY3d51tuH29phLYTc4KV8VuCSoobWEUIWnmkhVTtrbrgJaFxf
9Spw5+QJbqbB92iGYhQVMqeWCqvIDiCgD82/w1loaj3J7b5Ftspxi6KlvstSEst3EJWW3nfb
w4uEBa9+PEK2r00KqjBSyCefvjrNyVK76kmSly6FSaAS0lXEu7e136vaaaLVyGGoJPJJ2UR5
NrF+SxLSIbMi2NkNuNGG6CWk1a4mnX2ncHVWDScbBbcrlRXZKrVbI8eRMjOQ3vtm1qJaeKa7
VX1KQOny1pJvDHshuq+ZMmauG4w6XQ12F2pTCuKWuQd4fagVSOfv6bHfV1lGbWgPHbveWckT
LtkdpN1AdCW3m0oaZoghftXxSg8Kj3fhodcBIyi3gxTLLcSKtM5osONOshaWwuh5MgmrTgps
oaoySgCL+6iGxEltNPtxgpEMvA1ZStRWUpoQD7iTvq6CsZOQu0gWxy0fpGPIdTL5FtPe7jYK
QUu07iU06pBodUMz24Ha749/pdu1iAn7dMoyWrh7+XfLYQU//g1ewD2/x1KmScwhyq8SYgLi
7GmLHuLjD0xp1txEaT9uruJQ2FBIQk1NQhXTWoNz5I+7LYU4HTb34LyyV0fUsgtnolAcSk8E
6G5UM5pKugrhdxOcgKUwhEeDGZjBlBV23A0aqUQf/wALX3066jatmRInRf3b74QkR4wd74t8
RRQhtIUFdtsuczxFPWurezMZJmLk0AZx/qdMV5xBlKmGGlQKypZJLYUBunfY01hpo2snFm92
luzuWtxmQWDckXNqRVPIdpothpSfpVX1IOjLGYQDlCXJk6SqOK3d5Tk9VErLCS5zBhqI5Nug
dV+vTprU8AtHG23O1MWi9wn1vCRdEtIjEISRxYfD3Jw1HuXQA09d9aa8Ge2IYyRAYcjl4zmU
qCeQjqCwuo3KR7eNf46ymgyxzbWYlunRLkuVGlIiutyVQkrPcWEKCi3RSaVpoYrDJ2zZTaLZ
EQ2447OT+6ouCoykfpdoR3myj3k+5K3k+lCBqzIqIIpF6Ql6DFfdak29KHmJq2kLacWzKKe8
kqXyWeJQFJH8uumeTC8BLuEL96tUWJLQm1WlxCI89SFIDie93XnuFC4O6akIVXj00YOlXwOJ
c+2CbehCkojyJNwMmDOBLSREX3OTJUBVJVyT7aempszVAtV1itW9TfNpV2/cEyHJDiyy27ES
yE9olIqodwGop031SMEQLfNuTr8mHFSlpbqylppSEpQSqvBIWoK4itBpJFgkpiRrfam3i0mI
qyOCcyEo7ipXcfSwV09ylJcCdwdh8tA2+CJu7jRStMHtmy9tv7ZCwguJd7ae6of+4Fd3kfho
QcgucVKcRsziAlUoPykupQoFfbWW+yVoB2r7qbV1STtlL4FYta0qyWC1eohRbip5MgvpW00e
LK1JBWeNPclProgW5G1iXa7gruT4yI0iNCccYbYQOD8kFHZbU0TQk1Vy330wClKSUiWnH5Ls
hLkdTUkJgMvNvhLaEuy56WHXGEJUeISwr8xND7tMpicrpZrXDtibhDZRMkPypEZdu95DbDDi
20vJCDz93bFSTTfQDbn4OrmHWZdwmRockuRLRNcRdJXcr/0HZDrb9UhSUJ5hTXP1WpOpkkMG
LNj6Y9uXKfdQmdFMpxxDbi3Y1S4lDXBJ7ayeCeRPSp1SNbNMrSKrUhKjxK1AED3AciBt8tZZ
puWTMizxDfP2NkrTIRMRCVP5BbSw46Gg5woAkCtQAd9LUIylI8jY1b5L7im1OMR4s6VBfHNK
3FmMwt/uIJCQnn2iN60rqBuEd3cNiIhTprkhxDERlb7qE8C4T9kmc0lBpxFAvguvruNUZKTj
c8LFtYmTZMlTkK2Pohyw1xDq3VhJ5N8vZw9/rv11cjY4yMSaiRlyZUpSI/ehtstNICnCmdHM
htRJITVCdlenw1p4yTtwRrloajOXSPMlpZlW0lDLKUKcTIdS5wUkLTs2An3BSuvTRsPkjQDW
voog7f2aDSyTSYVmXBkOIeebdYbJbluEdqQ6ACIyGgOSVmp3KqUFfXURyNsZTbbTOdkFsXJU
lDhKahtEVaUAinuPIr6ahYm6wLezb2JsVx3itwtqiTOAfP6fPvoDft7JV7BXeulBB2l2SNCv
T0GfMUhuMhh5x5CeTqy80hzi2lRA5I7g6noNHyC2LZx51zIIFsiSUhc4NusSKkKZDqSoB0Ak
pcTx9yQdtMCN7TAYlBI/cBClSyY8dhKFOFxS6J4vFJT221chuduvw1WUMkwm7XNQxdQHAlu2
ICZqUqqlafuEtUSU7LSHKH5jfWSejqLO+YUhCJqHvtW1XCVbkFSkpaQgBT/P/LKgFhISPd6a
0mTGj8K4N2mHJcWP2916QIzVejyAgOnj6cklO/y0CdMaYvsm9RWbHy/dVJe+1405EBlZcSNj
uWuQ+egESeP/AOrIUKC/CLEa3ORH3Yirh2hGVFL6UuqPfSpNC+kAVHXW4jYprkicilypFyU9
KVEW8tCE8rf2uwUgHiUhgJQFf4vX46yOFoQw7eAxbEpbC2UTS5bRQHlI5IHHfdRrxFNXDQLL
XwWSTKyqRZHzcsegTIyHLg43NktITIaW4+pUvtnutqo0/wAuNEmh6aUymSImN5SzGkT50CsK
dDbiqdWlNC2hCUsvBAJUlaUpBSsj10NToGsjvNH7nK/6+fizNlmSnkLeno7oU4oo+kpcWtI5
gVPtGtK3Adm9YK3a7g1AnNSX4bc5lIUHoT1QlSFp4n3DdChWqVDcHfWRVoHeQXgXCSyG4JgR
I7QbixFuOvuBClFfNTr3vcqTUE7AbDTwMZIUNKcUUNtLWlIqUoSTsncnboB8dEGbXSAArkVI
CuI9yjTp6Cp6DWoGRAStKt6KP+OuxPw1lkGOSSrYA1CVhPp8tEEggip3Gw2ND/adRBlQok1F
KfX6V9OulIgANkCpNT0Naj56WiYEVSkE0rWg+HXrrJJh1CzUp3qemoBJ3qg++mxNPQ7nSMij
yLYBVQDcV0EGEgUUDVJFFI9K/LSMiRVII2+dN6HoNBlg4J4njQrBNR6aUhQkU6g0p+Y7+nTU
0QCpYIWVAkig29DqIIn2KISAOn8tQyBaEAGmwVSnqdQClAhRTyqNgADtQb1P4ahkQkVPIg/x
9KaQArbZIA6q31lig18iOXXjSlaAfhqRMCPcn3pIHxrUD8KaWAfAEctin5bDbYV0FAgq3SFU
IVvRNT0/DSakBJKRXoPh8vTUTD5BSSn6gr09QK+upgBa1Cg9d6EjRAA4rrU/TUU9BX+Gk1At
IFQBxITuD8vUV1k0IKk1Kh0TvToAnSZbAD7hx3WTX+Z6gagTDAK0HkSUn0H4/LQIYCB7vUHi
CDSh/wAJ+WoQFY3CzVOw4gVJI+egkEogrIAIA+XTb030ohBIoSr2itABtTWjItzZtBHtCd0i
h9etdZEIKKh02rU/h8tIiO6UglJJKjvUV29BqKTspwA8BQk9Kdf4azBNnH9H/wCmvX1/89UE
XLw5PyePkLLeLpjKvb3P7VMpLRCuLZUUtl4cErok09fTXpSmrfgzZJtFoxHJ8qORzp9oh2xN
zkNOOye/EbMdpLaD3ChFOLHNNUqp7STT11zacfBppcoXCyvNoGPwJsS0wZNmL0qbbG/shIMF
5twF95vgOcfiunDfjro6RyT2Q8byDlbK4iULQ8htp/lG7QUzMTLUpbqpLX0yDyVUK/LQfDQ6
la2R7a8gu0bx7KgnGIczH5EkNvXlxtZdTLCCWuTiXB+o2lR4EppTro9jmMme6eyEkZDcXrBD
sbzbKoUFxb0R0NJS9ycPvSp4DkpNT00OWScLRwtVzkWy5Q7pFKUyILyJLKl0UgONK5JCknqN
t9STejDaRbZvlz7u4Rp8e2IjOMKkKfbdnS5iVfdtqaVw+4WrtJ95ISkddaXqto33nBXccyT9
lcnPMJRJEqFJt7qHFlIbTJTwKhx/OmmqHEBMDjGsngW1mdFuMP8AcrLdG22Z0ZD5jro0vuIU
y8Avh7+u2+rq+DFmuSWvnkBFys0qzsW5Ma3uswI8ArdU6tpq29woDiqJDqlB01VQfhoaN9nw
Qtnu8SG/LeuEFu7iXFVHaMhxwKZcNA282pJqVIpQJO2pTAJcyRZVQjjU7bE+pr11BJbcbvmH
xMUvdourdxVOvfZH3EQx+yj7VRda4pdHLdSqL+XTWrN4Q1h8irfdsXawCZZHBcf3eZJYmLeS
hgwe7G5BtG6g5xUhe/qD00O+V8E0oJm+Z7ik1y/TIou7cvIWmoUqHKU2puM0y624pcdwHnyo
3RKCmifjrOeUVbV8kdmOcW3IbY5ERFdhqYlpcjOo4lctjjwLty6d2WgAFDnzI+equyfwKyiV
hd8esLNsvTjaYkFm2ypFwiLbbbDCVHvfordWrmo8eIG3XStuTbqP8SYsONvXV85fHWJlsfiJ
ftjTyZTK1rbKSlEhDYVUAniFV1OygOj0OE5fYC2LQm/SGH/2puAjLjHf7i32phlVUgKMqim1
duvL+zUvPAbwRuK3ezxPIhudzyRx2Cy0425dpEZ1S5ZcYUzwLaeSgKrBq51A331WagzVZK3F
stlVIujUq+NsJhtLVbpCWHnG5rqacWkigW1zB2UsUGlM00WW03VBxa3xbfkESyiI24m8w5jK
nVy3FLUpC0pS24l6jZ4UKknWcTklXwQDE6K3hUu3mclDy5qHkWj7RJC0BNO8iZutv5tHrpaF
1Jh4CR4tgsOXiAp+BcXJibYHwJbbC2kt0SxxHJznyJTWpG+qUW0TF1uCI9ldbfv0KXLlzIrk
S8MSXZinQhSiXZEBYUInbSRVKAOX0muhQ2VkNMhWtGLRIUqTEfuzk+O6wkThckSAGlJU8VrJ
+0RyUkKaqE0Py1Ngq+Bv5Giz0ZFYpU52G4VwIDEh5t1h5gPMji8h1MdRohFQFbbp6aqxBdck
Ui3SZefqZszcMvNzUux021YTBCWlJWpcdT5TRugNAo6m1wYSyWSHYJivNz0ZcBKkme/KW0lx
tCUxXuZS8hba0pA9wKeKtvhqjBtVOeP41cBi8y2SrCbncY9/aEu3OuFDzDKohC3SllaVqASr
qk0HU603wgqsDFnErA5NcbLxQ01NdZtLHdClXhKX+2I6HQQmOpH09xWyq/HWZywWhpZbBOk4
hlckWh1bMdUdUaUWVOKjuNyuDzaXgDQobPFyn4nUw64yVQoXySFNqQFDZZSaE/GpHQjT1YrB
3s8Rq4XqHbXn/t0y5CGC4aVSFqCeVFUrTVDiSbL9aMOst2tTMNMKZahKv7MNUuUkKPbbhPLc
7KylCqOFAJTQ02IrqeCVSKn4hj7s+BFtU9Tz9zZfEZipWhMtqnab7y2o9UuitfbUGg9dZZOu
SL/YrdDuthhTlvpkTSyu7RQlIcY7z/FtsBVPcW6c0qpQ1B1OS6qYDfslmjSLvMukl1i1wbo7
b0Ihob59yrimyA8oIQgJboU1qNarbwTpPI+sGE2i5QH5sm8piRFTfsYinuxHUpSmg9zcRIcb
9oCqEIJPqOo0PLJyVR6K2zJdZS4H0NuLbS6n6FcFceSa70NKjRZDR6ZYFYpDVGiq/dki4y7Y
q7sQOyqgaRz5IW9yolRS0SmgI+NNUQpG1khjcrBbrclbblzbN3aSkvW4suJFFoDgSiR9K1lC
wQmg9d9JSjlPtEeNY7RdW191c1cpDy0808FMKQntkEBJ2XXknrWh3Gs8BZZOVpttzvt3jWZm
T+rKUvguS6oMjg2pzktRJA9qDvpRQOYmGT7lMZatcqNcW3mVviTGU6pCENEJc7rYR3gUKUAa
INeoqNLgvoOXfHWRCW/GeLCHGWGJLLjri2w8iW72GEsBaQtTi3U8UpUlOssduBqvDb9Hccdd
lxI0eq2UT3ZYajvOIrzaad/MoEEEfEaibGi8ayGO+zE+3Wy9NkOW5MdKuJU6yU9xp0A02UtK
t9vXUkEyHGxzKlwXDEZcMd6qywHUh10IB/VbZKgtaKJPuSPQ6oGYGhvV0+2RFQ6lTPENNo7b
alFPQCvHlvX8dZiCbHE+x5XGhxlTIzwhqKURqKQshajxQgpQorQrlsAoA126629BXLHUiw54
m5QociBKE+UFpgR6IJcCUFLgBQSAeNeQUa00RiSwJFszpwqhfZzlJktulxjipXNtlsNO1/5U
NgJV/wAtPTVDRpJM5hWZSJ5jqYlvyZKTLLCmluKcC0p/W7RB5VShO9Pw1dWYbycu5kkxLkRt
mU6UyEuqYS044pLzCCEgpCSpJQ30T6DU01himtjb91vShdHELdU3OoLsptNUOJ7gcT3iBRP6
gqK030JAMCT2wpFU03NegFd66jaRLu2S/M2pt523raglfdEtSAVBVAmi9+aU0p9QA9dU4MvY
zW1Mejw0ORlBol37JwJV+seY7gH+LirbbppTwCeTreYt1ZWy1cobkF1LCG2u6hSCptINFnl1
P4baH5FvJ0W5c37wh9MN1U1C2nUxEpcW4eylNNt1mqUV/wB2qBERpcmHevvQ04mSl5bqmPeh
xK1VJFadwH3deunMkhvDfcZYlhttag6wthbia8UB6m6yAdqD+OpmXqBcWW4zDmRk8imYhDZK
TxTxQsOAKFDy3Ttvq0ajAkS0i3GCEHmt8vlWwBSG+IT05dd+tPlXQSQp65FdpiW7kr/o3Hne
CqcR3wilNq/k3rq+RHWLXtFkvTN0WHKsIdCA2risLeYW0kpVtSil11nrLM1ZNtZ49KhBqfIU
3PMFyGJjbTZZSVSUPBKWAAlCOCKe0dd9dJzkUVOfRU1TjbpkcjyU8U9slR6jj6ayxaJ2yZU1
boljhgAIgXNU24lxlDn6SltEcCoE7JbNQPWmqMEnk7ZLeLddrUwzFfYox+5Lc+4ZJfJkznJD
YZdoacm1pqK7HSwgK6ZRBlPXBMNLUcOxY8cXBLf6shptppt+I51NFqRUH/l66IwDwxpnE6LP
vdwuEN2I5EflrWwuMFB5aFj6ngfn/bqehTIa2SYDEpb0yOqQ0hlwNNIIqHightfu2IQrfh66
yKJLO59vuF8ZkRZCpLQt0GO46uhJdajpStJ6bpOx10dvxSJ7FY9LjNWpthhwMz/3JqRNcCuB
VbG26OIJJ9w5mvD11lMy4mWSkK4Y8bjDXGHYxr7mU5fWCOIUwt5aoYLYPJXFriEgdDqefqaT
Gy3rQLOlsqIs5s7QcG1f3vlVS6fXyHx6U+WlvJNBX1doXbrwhKgiE05HGI+0Dm0p6skpoOS/
091c966cbJ/ucDa7S1eIbcmIxCQLT9yywzJ+7Q/M7SlMrdWCeKnV05M+lKazgGmkPERnlMRr
kzEaevP2b7t/5tBYhq+5DTDn21OPccRsEketdarBVhIZIi267ZHPS3Z5a47UF6QYMctsPBxp
mpkuJWODbYUOam0+mw1lk2dMeslpkQ7GmShqT+9yJTM19X1Q244T21NkEcFK5FVVfDSoyahQ
GxYbctpMIdvkLUu8u3Gv6gfQyVfbbHhwqAnj1rqWzOYIGx2U3O82+3rWYrc95DJf4+2ijRSk
1oFcfXfQ/gUh9c7VAVYW75DR9qj79y2faKUpzn2WQ6XypVCCqtFDoPTWurKP3GuM25m45Fbb
c/VDUqU0yRQ0otQFNvQ1pX4axI1q24OUW2wZEmc1ImIt6IiXlRgUOPd51pZCGEFG4K/8Sth6
66OsODD0S1qxBmc5a4S3FJul9bck29SSnsNts8yvvCnJRV2lU49NZS5GOCrgc2woAJAoaep5
eup4cEGKmpryCtqHpUdN9BCV8AgpG/8AzaSYavpqOuwB6kgnbQQHEkpI67+7b10iEkJVQDr1
+YH4aGQftJNAaJrUn10EE4CaHjVQFSPmd9RBJKvaa8hSla03OokDmOBA3r1NNhvoJhpACPaa
Akb7ddaFBrSpZ2Ip0/jqILpvWlfh8elK6mwgCkihBoT1HXQQCitOVa9CB6DUQaVUpX8g2Jod
UDJzUkEVJO/Q+hA9NQMP3cSRSoHtp8P7tRIJPGgKTv6ppSleuqDSOgQ2EgKGy90pPpT1+egW
0E3UAkUPEVWPTf1OoyEWik8yQqvQnbr8hpIAIqChPKtKfA19dZFMBFCoDYA+2h9P+GlEAFIQ
eO6T1NOhO22tEF6FJGwIJ/D00FIAoKVv7EU2/H46oIRRNOvr1/N10muC3+HrrYbNlVvvF5kO
x4tudEgllrvLUaFIHGop9Wx11o2quOTlakuS14NeMVsOYybhLuzyLUlEltmQ3HUS+JTS0cVt
BQ4hJXWhrWmifxNw9EpjOZWa12PHbUnIZUL9iuD8ubHZZc7M+N3krSiqVApK0ChSv26JzLQ6
2CHnONR0JYUoia8qcq13dTQLllbkurWGG20ijzboO5/JyPHRaWDSIm2P2FXjS8W2Rdo7F3kT
mZjFvU27VaIza0FJWgFHJfP2+nx1WjBNkfPvrL+BWi1m5uuuQ5LrptLkZtLcdLlauNyk+9zl
6pO2mGrfBlOakfiz1ri5RaZd1bS9amZjLk9BQVgspWCscR129NaTgzaibNeuGa4tKyG3LuKL
TIZjuzXI8tTqpoSwuM4lll5CmWkpQpfH2GtD66wrWWns2q8FVxfOGLjc5MrImraZkS13AQpz
sZpIceU2n7aOpKUhCu2pNWqivpXTLSBtOMCLBlLV2lTp0li1xcuEJlq1zX2mWmH3u7+s4+hS
ftkrLJ4j2/260pVYnEmfxb0Td2u2MW+0XO4W9izyMjSzae6FMtPMplq7qZ6o7Ozak/QFcRx1
g3ZQVWxzLROvF8mGDZ4EeRAeLMC4d1TCHfaSmHT3JkE1Lddh01puKwKSKglSisJ49NiafD46
3CRwmS+4hap83xlmCI7Qea70F1hClNBXdYcUX1ISpQX7WyOVOo+Os3cNHSnwCFZpi/Ek+amH
RAu0V9uWlYBUwELS4tSOXuQ0qgB41FdU5RlpbLTntgtcu8ZNKlWJhj7VUZ+03FqUVLuclxxp
DkcqCy2QptR9qU8k/HWe0rYqqXBUszxnHLba5b1oC5j653Zltpcom0LKT/0S6/8AyVKVUB0b
e346abyFliRPkyzTobuOvO21cQzLLCU8tMctBySEHuEpQAC4ABy9fU6VDZNjPA8fh3iZdET4
cqUi3W56bHiRFdp91xpaBwqoGo4qNab/AA02UDVzsn/9E44iEu+/tkwx/wBqbuCMaS+4H23H
JpiKR3i2XyEpT3N0evw1mWEKZI3C7RbZnkNiG1AltxJDchTEOWlDzrKvtVrCnUlHFxAUNqpB
Ox66eJBpZaKYxGmvNuFiO8sMpLkjghau2gbFblASkfNWtJJZMq/wWNNix6FZbXKujlxXKvTS
3oS4IZU00G3FM0caX73DyTy9iht89Z2alTJHs22GcVkXRRlic1LTHQpLBNvKVJr75H5HuvFB
6jVZQ8g7QsZHcu3WU4NbrxHDibq5cnYMtbiwW1cWUvI7bafppXcnc6INrSfkVdrdjkfH8eu8
JMh8z5Etq5tSHAlLhiqbqlvtglse8prufXV0/Yy8NeGScmFgn7Za567ZOs5mST3IrckzXFwE
IJU4A4hsp5OUShYqCOR6jWUbhSMskx2wQ2LTLtjqIka6KW3ISmQLiy12lpCnBIbQ0VEBfua4
1TT56ZngIzEjSLjMebcrpGF1jm1WloyJF4DLqmlsBaUJcbjp/VVVawOPpqhcFBJ3nBbZFW04
i6x4tpahxHn7qtLr7apEtCljttoT3EpWEGlR7TsdNbSHJXLxaJtlurkGRT7qOEe9tRKVNOoD
iFJOxopCwaenQ6XDKszkkbZht1nwIU1iZDYTcluswIz73bfkPtLDfabQAqhUpQ4qNE/Ouhxy
GZDFgv7FtTMVc2IfeQ463alzFNSXkJUpp1aGf8tdVIUN1VNNSjTGXtaH7l/yxGFIlovko256
Su1/t4WktpZSxzoUn3p5A0A6FPz1lKCu2sjRrK87vTTdg/c3pqJxREREX2yHCfa23yKQR8Bv
qCrbO860eQPuYjUl5+VJXIRGY4S/uFtSGUlTba1JWrtOobSoitKJB9Na64+DtRthzbV5Eut5
jomLeuVwS0ZMSYZTTyQ2wsErRJ7nbBbURX3VBprLqc5BCgeRmsgRcIjEtF/lJdfTMUpsOqCl
dt1bjrh4JVz9qgshVdTrkE+ThJyLPrbc5t2mPvsTi4I1xkvoaWlT3CvFwKStpS+AqFAdPXQ1
+o1Z3tU7yPHeuiLfHlFQWHrs0uO28pD3D2rUHkK4udsinHelPlrUN5JaGdpvMxq3SHRj1vu0
WI4lcu4TISpCmy8r2pfeC0U5qqE8vw0uS7IKXm94etjVsS3HYjsxfsU8WUFxEcrWtYbdUCtt
K+4QpKVfTtrDZm2eB7dW8wGPOKmQ4amw3xmy0Ijm6pZBSB9yQS+hATwHIjpx30KfsajA0mXi
8z7RZYEmzwlW1LyzbhGjll6QW1JTIbLrai4S57e4aVOxGkztju8vPY49EmN4k3YZqitUeZ94
5PaWhxtSHG+BW4jkUObVIUOtNbjBrCwRONovlsvbcSDbnZEuWyhtVv8Aelb8dQS4lPJsoWkL
ASQpKh+OuZrD0PJWSXy336Qwq1CHMW5BJgL77jqVQZH3DSQp1S3HC4s8SeXTYa3bRmf1Dt98
uF2j/tsnGXL7EhFyQmLDVJbcYdW4pSnnVsJWvbmU8VCn8dZWQq540cYefXJv95kLhIkyZy3Z
kaQQpP7fJfaMZ59CE7KSphRb4r2BoeuqWM+Bxb88nx8RREEa4fbwY5gMTGJLrMANEEKQ+hLZ
BWS4eR7iSdq6oNymiHjsYS1Cbebk3dNzZQlzgWIpiGQ2AoNhYWHA3y26cqaa28g94O0rIceT
cI98hQZaL45Obn3AvPI+0LjbwfUlpDaUroo7AuE8evXU8mUoO8jL8ZhpfZtEacI0l6ZOmGc4
yFh+XHXGSEdkBPbR3Kkq3JGlpjY7u+Qba/Au0Vll4LuKFNNvBxNEcrezCHKhr9TClbeh0OQS
Hc/ObFeWp0GS9NtsefNcmGdHCXn2kUT2mgjm3WvE8qKFB01JwLUMY3XPWnGnGYEqTEC7lBlO
PBfbW7HgxBGKllCq9xSgTxqevXU1gzdZTI5WQW5z/VhbdmNJvjqXILDKkNMLSqQXSJyDupKU
7oCei9RUWPgrynKqA41Jr7TunrT+3WYNySyLoG7K8GnyblcXFx7m4oKU6uEEtlDfcV7eJcRv
T3bfDTozZccHSVd+Nox9iM925ds+9Ku2VJcaVIeCk+47e5I246EjT3PwIvdwS5aoVqiOhyE0
03LWCVrV988yG3qle9BxCQhPtHpqYklcr0uXmsp5iYY8C5Ow2pMuM6W/0mW2klSXiAtPEpVU
0/mNTgzyLteQpdzu33GSpCWIJMVlRWUp7DDLjbS3HlVKlEEVWrdR0vwSS2NMTvk2E2tP3Km4
tvjuTW4YUGUSXUqbo0+OrqVere5pUD11ckc7fKaXYslccSgLmGKpIBSjitUvuKDaetAPRPQd
dtLcslx9R4q/TTi8r7ookMS3FwWIAS2GYyu0hYloSB3O6KUH5aknrtor58GbKRlcXW/9LWBC
WkpWV3AvO0TVQ7jXAFQPI036/HbQjTTHHjxq1u5U0bpFE23ojTFvR1KAC0oiOHb4FCqKT8wN
PKBVlMu9rtGJKjMu2SHJnRWbRVMr7FE2bIeM3trkKhKUUpqKopyPBIrp2LUMzjJGli9SG3Ip
hFHAIbXF+xXw4+xSo35FKTv89ZaNFixfG8WuMDGHLnPVCkTrk7HLSYrkj7xtMhpHbWtCkhse
8o6etdStCYJZO+RWzGLdirb7MWM3dX2ZymmlxJjr6uE55lotyULTHT2m0AALT6b9daay4KTv
fcYw9mBIRGQwqTEERnsQ0yvvWZL6WSFylPExyyorWFKQOtBqROH+pB+R4WJW26u2+xmIp1iS
8h0xXpbjiEtnjxfTISG0qJ/wV3Hw0RgyyLwyDYJl3cbvzzTUMRnnGlSH1xmTISkdlLryEuKQ
lSup46zLNI5ZbaWLbc0ojsMtRZDKX4q48pU1p5CiR3kPrS2opKgRTiKU1pooHFjxq1SrWxcL
i46hE+5os8JLHEcX1thwvuFdeTaOaRwFCfjrIRI4j4KwudCtTspQvFzL/wBgpKaMIRFcdbcU
8D7z3CwePHp6110SS2WxAw5lcRKGpZN2Nvj3ZxhSQI4jyePBtLleXeAcSo7ceo1l1g1yNrpi
6I8G4SGJSpUiyOtx7w0pHFCXHnFNN/b1JK0hSDyJp8dKqzMuZOScTks3H7P72I4luGLpJlxn
u+02yU8lglA3dQPqb611n6ci3A+j4525Tcr98XGt9xQXoE9LbzkqUA92OAitnuFwODpU0G+p
oocjebiWQf6husBDyJs22x1zprqnQ0ft2kBayvukHuBJFWj7gduulqWkjKq1LBaMZvUu1Iei
ykMIuyXzBhqUe5N+03eDYA4jh095HI7DV1yanBwTYb+bGQl1Cm0sm5OWtK/10ReIV94pNKBs
gg/VX1prT2ZcjEyb3dU2+0cnpbMIqZtUFG/aLx5rDaRv71CqtZTgZY6v0C+s9h24OIkRyO2z
IjOJdYK0pBW0Fo9vdSCO4OvzOiHAsRiL95byOCqz9s3MOD7VbqeaEqTuFqFFfT1rTbQ0iTYq
Ld7+mTcpFtKmnJbTrdyXGaHb7Cz+qFABXBBPU7fjrVp52ZiEdoLmVtWR2bCZc+yjq7KJ4HJ5
oLSQpDBJ5hCkk8ikUHrrKeRckGGSCVD2inpvsB8NLcsoOZQkNihCRTofj11CGrccVq3FKilK
7dNRITsFDelRT8PhpMth0KkqSSRTdfoDoNIBqDSm3+I9fiABobJgNEIAJIUonkPXfUgYSQr0
V7t/afX4aRQCshunWuyulQPx+WgUEVHlwOydj/46QkJZpuSopP0V+WomBQUmlTRQ9KV/DUSY
oBfHrXb30oRX56BAmoKipIChQ1+eoyEE1ST9W3XppRoRXkONaI3NDqMnT9OgJ6A/yFOp0QIk
qKjQpIpsPSo/8dQgSkgnmCUj2/PU2DYtBSlQFKqVtxp0/wDHQQR9rfHpUE1/D/fqEQtI5JQS
Eq6hIqenTSZjItXBCdk+5W/Gv9+qDbUBEpCQTQkVCfn6+mqCAhIIqFU/xfHSxgIFQ6qINKV/
HQZYVNglIoBWpPrpAT20/DetfnXQMlz8HwWp+c2eDLiJnQJUgNSo7iCtCmloVXlx3HyPx13p
HVjBZPHtoYmZ4u1vsKLXbmNthTYUG3ENOKR3ELBBHt/H+OucYkrD6z4zjU3C8OTcWJLUm63O
Xb0zYqE8Qpx5CG1vLWPdw5bI663bLx4B7G7PjOGWJkxyW8mFZ3JbF5QEUkrVFcUht23tUUX2
zQdzf2GtdZ7Y+QiGRMG02N/xvd7o4wUXWFdIsdiWFHmWZDayUKT9NOSAa9davR1gEk3Iwm2i
3M4vb7q25MTOlPOtvsuR1IhFLVaKYk0otW3uT6angWkcsftL94vdvtTDoaeuEhthDyweAU6o
JHKnXrpVUwL854eR+6RoLN7Ya7z0hl1D4bW8hUZpbynQ2w46VMngU12IPprmmWSGtOD2y9Ok
WnIGHIrUOVMlF5lbbzP2SUrXyaBJKHAr9NVfTca0moBqdHGPhUKS3JmR71HdsUSOiZLnJacL
zSXHCyEORh7ufP5/TvqjBOsMdv8AjKTEhyLpNvMaLaI7cKQiUpDq1Ox7hz7LiWUp5hQ7Z5I6
jSrKIgYbIs4wy1dp8CTe7fGMNlUhiY46VR5ZSApDTLiAR3FpVsFeu2sJlBBb1I9NyfQjTJzb
J22Y1HnYxfL6ZBbk2RcQJY41Q63KWpBqrqlSSmo+On6mk4UoRHx4SMTeyJTigGp7MDihSfaH
Uk/qNkct6exQ2611qEnCCy7LZL5R41ulnu1wixHmpkW3ONpePeR3WWnSlKH5CB7W0lbgFfTW
cF+RBXLHr7AYmvSme23AfTFk8nAebu5Bar/mooK801H89aUSOdD2/Qb1j0u0vi6uSVTILFxi
SUrcSppMlJoiiyaKTQgkbazCmDWTtabp5DvbkuNbpcqapuIsy2ypJ4ReSe4srNOIB41I31Qk
ZykIRZM4RdlhpMr91DYdMwPkVZJ4BYlcwlSajjXn121NKC7AsELLF5UuHHly4d7Uh37mS2px
TxQ2guKKi2ea0+3qCfjqssBVNsio+QZA05Klx577Ui4Nlmc6hwgyGl/U26R9SVeoOqENZHlr
ezNm0PftapqbUK94sglpNRvTY0/+nWksg9DRv95NkecQJJsankofUOf2apFPYF0/T7lOld9D
WSqSL07MmsKZYeWv/SkiSpiK0UtcDJSkKWEmnc5cSPdXRAy50OsgOcHH2bddUd622x5KUx0I
ZK4zz2wS52k9xClUpQ/URorsrw1laGD0/KYMi0yX1uNvW5gR7U+niotspKlBokA//hFe1W++
p1DmWC+3HKW5EKLcWTbnIYROgQm2ER0tl2i0voaQOq+IO/w1pJQCbnAhGU3kXK43KS4mZIu6
VN3ZqQ2ktyULUFKDrYCfzICvbSh1ltmk2Prxk2Rc22b1DiuRpEOIpqA+wn7dyM0hQiuhDak7
pSogEGvx0rGhWRm/f41ynOzr1E+/lOhAQ428uMEIbQG0pCWwQQEpHX4aMmvgfpzeHDh2hqFa
WUv2R96RDekLU4tt155LrfBY4FSUFO6Fih01TYNxkj5eT/fQ2mLhBYkTIra2YswqcSpDbjin
SC0khtSgtxXEqH+7Vowrwhf7ywcMNmXbQU/emUi6B52qZHaCOJQP0z+l+X+OqHPwNmmsnO2X
SxW2dEucOPMcmQ3W3m2H1slla0GqgShIXxPpTSzNWxOP5FIsV/jXqMmq4ry3xHJKUkuJUg1p
+YJcNFU2OuZ07NEi/mr7wcbdMmXFVBegoRLfS6W++4hwrTxQhPVobU36nSZo3sfXDPoNzhXG
3S7e4mFdXpL0lbLqFOoMh9l9HDmngSgscVchuDqyKyNU5XaXrq8LjAdcsbkRiOiCFJU4HIbY
RGeNaN1PHi7QboJA1dhgc27yPIahPiYpaJ67iq6fctMsSeTnaQ0lqkoKKEthtPFSd6bHpqQS
oIBN3iqt16Q8JKpdzcbcZcbfLbFUuFx37lhIo9yr7OnBW+htyS1ghqlaK1qlXofgdMGSRaug
aszkZlakz5bi25rgHucicEcGVLJ3TzRXj+G+gXXAJN7X+z2eEwsoety5bgUnkhSC+tBSeYPX
2elKaUsEmcrjcGzaodpglKIQbblSGEgp5XAoU246sq6q4UTt7adNQtKR5eb4mZevuGpARFkR
YcSUporQFNMtNoWlVRXq3v8A2ay/gFtsJOQsKy21TCtv9vtEppECnINtxY75cRVRHM9SSojl
8taaxBfL2cLZd3k3CQh2Y4xBL8meG23lMJW+QotgrR7iF+0cT1+Wl2mweuUjpAvrr0rJp0zt
fdXa3S210o0guyHG1HgkeppsgaJlyMSmhVmyq5w7e+592VojNNxItsW5xiqS6hxtbjkYUS8p
CVVCj0NCr00I1GRh90wjCVRAarN4Q8k8xy4CGUfT9RFT9XTQTww8NcYTmFiW8UdhFwjKWHtm
uIdSSFk7cdvd8tD0Ro9mueHPS4T9hgLiqZn3V9QWtj7pyQuGpYTGUsKbSyhR4x+4D86nXRt6
M2emtQUPPLjd5lzjquIlpCWSmIJ6oy3gkrq57oqG0Ec+lRXRYZCsKceOK3x29CUUNPQkx3IK
WC+kqL3P/O24kJ3p8tHJSaC8xabYzd37dBedaNwgsx1R7dEuDgbNnbdSl5t0ENhbiuSi31VX
TXwUZIjHjYVY5aY0+Iy/crmZ8l63CAwtcwtvlP27ctSkLjKCUkN8U7em+tK2SdfAxvbdth4V
aW2LWwqbLtLEh55dqLrnJ91fJ39yCxxWlKRQFG1NHblC09FCAA2WSlFdz1oK+nxOsIUi8Xyx
41Nu+NY9Y3XksyYqZL8h+K026Q6yX1FTiFKLvINbA7IP060tGbHGBi2N3aI1doMqdBsymppe
RKQ0/JQ5AYRIPHt9tBQ4HAkV3B3Ol1jDNNpIrt1atCXmnbVIeeiOtpX25aEJeaXuFIWW/Yvp
UFOstLgkx9YLLa5EJy43aa5DhfdsQGiyyH1KffCl+5PJNEBCDXjVXwGiJIl5uE2f9+diGeuI
Z9wfg2COGFPIUqO8GP1VlaFNjkoUJqaddaeDFWcmPHncgx3l3NCJjyUqchllZSh2TJXEith2
tD3XGlcjT2Dc10tGznK8eXNjHpd+TJS83CQtc5jsraCGm1hBKXlK/UPJQongK6FVFaIkas4k
ZjGMtwH0KmZB90FLeUUMtrjuBHFVU+3gmvJe4Os8BOY+BtcMcehsxZDE6LPiS3vsxLiqWG0S
EoStTaitKDRKVAlY9pGkqtydrdZZozNiwN3tmItUhMNd4hvuORyHCBVlbPFTqVFVAE9ToGrc
kLc1y1T3/u5DkqS24thbz6lKcV2lFAqVlSvTbfW7qHAd5yTWK4vkN8k25Md1Mdh6U2zFWt5C
HeSnQh1cZpSklwtK9y+GsQKeR+seSGcalpj3CSMZT3g5GElI7sYOqS88IyldztLWFFRCfjrf
SHAWsyBcbydqZdQovpmMMJN3BNVJihTQT3jU1QCWv/5dYZVZL5Zc88u8Kyovk925C5hcyBGU
233OSVFgLK20grLlD67fm01WDVlIygWXPMeyWKxHtz0a+lK1RI5bQ8XG1Ci9jzaWmnUK2Glr
EmajS9t5fe8jcYuMWQ/fQO2YSI4S4hppJVxQw0kJSlCaqokU9dUcGVaThYsqvVlS43Adb4OK
DqEutIfQ28ElCX2Q4CG3gk0CxvoeDafglLXNzZjHpEqIwHILDgCLotKVzWOYUHREKj3uDgUr
vKQFD121VyKhC37jmbWJR5P7chi2OduIxfA2kS3GGt2Yxc5cgwCj2+wcqU5GlNaWhsxtkVwy
Q2qOxNtgtbV4bbkSH0NKQ5clIWotyXlEqqsKUaJSE1O9Dq+UYeDpeLzktuv4lXa1tQX1QWoj
9uMYRmH4XEAhSE0P6yR7nAa1/DWIwabyPcfzatyVGbxs3KG9FVCstqgvvtvw46njJc+3fbS6
8tatwtZFSn4aWwT8kVAmJeu13kWrGlz4r8d5tuE6JE8wu7QGV3AORdbKSUKWKA603DQ1tC/6
jiz5mzb7bFQ7b/uLraUyP9PyQ7xbbMw1dXIaoe7xqS2KgV61GstvPgpTU8hLzKMm3LLMBwX2
TAVYpM0rpFMLglCe22B/8kJTSvLjTeldKnfgm0NLferLbJlluVniPC6W9C13QyXeTD75JSgs
JQAptISSD89DmMhrQV/vVqNliWWxtPptUaQ5OD0wo76nnW0NrRxb9gQjh7T1PU01rvh/Jf8A
Q6eO7nZ7VlUG63aVJjQYZU8l2G33nHF8SlKKckUCuW6q65vIV2KttzstocyJiNNuLkOdCciW
5yKURDI5kGk5tZX+gd+SEmp12vZNyK1BJ2jLLDFbtN4fef8A3WywXrWza0t1S+HW3U/cd+tG
0pL3uQQSaCmucxj7lz9SjoNGu2jcUAHzoKHVbLkG1wAJqolQqmlKD4+lPnoIShSVUAFCK1r+
GpohdORPED5BXX8ToQHMoVUgbqO5PprQoNXJdKinqadNDQphlRKq8CkEe0H4HQiEAKAoSeQB
qPj8taAL3Hia1pQ/D56CkNFFe8H58etAOmokKRyKSs9K0I6+vz0GoC5UUoca7dRrRmQIJCjw
BB3BPXr0pqYgShCaOehqABvX4nWSAkLACdiaVAV11DIF0KSsbBNBt9Rp+OksCUBB9yaGpqK9
dtTZmRRUlXFSTVI2Jpt+GgZCSPqJ3rtX+PpXUQXMVBGx+H47VP8AdqghSuOwO9TUH4fD5U1C
EoFRFAAonr8D8NRSEFUVyNafA7HUaQR4pV9XU1p8B/fpMsWCUqFCFGp9KU0MRLaOKt9lfm9T
+O3ppMgISE+5QrU0FNgfifx0EDgjly4+2la09fx0jJavD8e9S8li22zXZVlnz1lhickuJSFF
JUEL7Xu4rKafLW0pRp6LdhsrOrnmc5iBfJMK7vIeVcp7ay466mIkq9yQUqc3SBtvTfWbVXUb
WWiQsFs8vzcbYvmN3GVKbub0sSobC0J4KbUkLV+qUtqcdJ9vAc9tTSUA9lZYPkRu4WNuMmem
apMk46N+57VL+7DFeh58+6D69dadZQez2Nkhbbvn0fx9cXLdcmk40zKEO425aWlOh2SCeRbU
2SQohQrzqN6U02SwwTSWiDkt5cjGoX3aJYxZT6jbOfL7P7jcOdv8vL6q/wAdZkJwMIL8tiYx
IiOLamNOJcjONV5JcSoFBRTooHprdTDLpcb/AOTIt2hvTLSq0XOj7sdTVsTGU/3UFD7iuDYL
p4KPL4ddPRaN1smQlsGWY7cnm2IEqHOlw3Y7rD0Zzm5EkDi4UIUmpSr/ABjpqdMGdOTrYpmU
2S5yrfHtinnpLfbm2ObEce7yGz3U92NRLhCT7gRrMYwLkc3LMstu0eXanYwbYf8AtmnbazFW
n7dMDkWGWm6FbYR3Fe01rqdTKvJFWXIbrZnJaoSUJVLjuQZQdZS+nsuEFYIUCEEEfUNxqgSO
JIFB8OIA6ADp+OpGNFvsF+fg4Zdopxpu52matpu53VTkhvtuoJXGQotK4J4k8k1G/Q6uuTcq
NCrdfe34/lWlGPpeiKlsuS7ymQ4laZyQr7bk1XgKo5DiBxV+OmHItpId5Fnzqp1/jz8cj2y+
3fts5IgPOhK2mloc7f25J7SlcBycSrppdHX5DaIq/wCcvXuL9rPitqajOhyzFKikwIlAFQ2f
8bSkpTurcUrrKTJvI8ynIMdu8uytzLTcLMzbIbMCTwdD76ozSSWlIbkIaHOqvqJoRqXlm1Hk
d2K9+PrELoG13W4NXC3vwXY8lDDFVOlCk8HY6lFJok+4j8dZmTLfX6CFZ3YHLacfdgTBi/7a
3bW1IcZVPoiWZiFKJT2Ce4eJ9vT563DWeTNnOxrh1/xmyZo3dxEubkCIFpiRGlNPSVFbKmiX
ioceISoqogbfhobbCmHBDQv9Hpfnpk/uL0MsL/ZltKZQ8H//AGzKHuQUf4uG+qYLqplkqi+2
CbZbREuUi4QZtiadaYEFKFNyA44p3ktxa0ltSSrj9KttTwabXBFNXaIjGJVtImCc9JS8hSJB
TCLSRQByLSina1o4NLiSdsEv+4Ys548Ysy7i/wDu7VwVcXIyoyu1xU2lktIe5UBonkFcaemh
tpknhEi/f8WtNzizcYuipUCHLZlCyuxHGH3ksmoMuYpS0uKCt/hXoNGdjBGzrzZkWV+0QJjk
g3O5tXV24uMqZMRSW1tqaLdSpxSe5UrRsabaIYNnfNbjZZF9sMyBd1T2okOFGky2mXWn23Ig
CVupD4HMn6kb/jpSgJUioF8tz1+yZ9d1V95dWFi1X+e0lpaX+6hfceDKFhla0JUmqE031Nmu
IJW83u0XArYtV6hRr2iHb2Gr1JbVHjBEdpaZTSVFtfHk4pNPb7qV1J4MWWcEHeLArI7/AHGZ
irMc29HZQUd9iMFPBhAeWhDy2yUqcCjVIpvrTeMikyw2+2RIFrxhq7rtMWPHflnJI0lTKpL0
RMlIUEOp5peoj6EIVyHprBqM5IK4Mw3cchv2ZVtagIiuJubchTaZv3ZfcoUNrq//AJXb4lO3
9utcmLL9Dgu0IV4xbmhUUzG7kuQ5+u0JJiFhLfJTXLmUh7YDjX16akxdYhgx7DLxGvtsfv1m
eYsn3LSpzklBTHDKiCoOLBolJB3NdVs6GqDx9WL3fJoNvucFmFAVMcSX4y1NpLAbc7TK+ZIU
rvBHFZI/wnrXRlElJLybJYIspf3tpVFms2yRJdiTChhCni80hgpaYed4lKFLP1+74baJgMne
44TYktXBy0Q1XC5x3JTFvtbbpX9ymLLbZ7vFB5lXZcKjxNDSulM00RknCoEy4z7NaD2rs01G
m8HV8g22W0iayUGm8Zai4qp5cUlO50LOw+p3tmKYbcIz1xiiY9AXc1w22GUSJa0RUNtnvhTC
di4pSlILtE0oPjrSYRgrTNpiOWPIZbcN6S3bn47TU8vNtdhLrqkDux68ni6ABVvZB+WhDMIg
Dv0PH+FR+OlmVaSwvO2xVkekOwWm23yqPaixzDzT7fBS3HnVbOo4KI49akdKazTI99LljeZG
s8ezY3KW2pxyQuaLohBUhTiEPJSylKiOIIQTumvz30yVlBzvDdvatERQYQxdngH0COVFkw1o
/TCwvcP9wbgGnH56JFKDtd4NkhZGYkZky4paiGK2HSlBkPMtlwPLPuCea1bDofkNOlkGnMMC
LPYXsutNs5luI84yxeQlY4MPFwokJadI9zaQBRZ+fpo7CqoaWiJb3ZD0ZxlyQ6l12q+8llpu
M3up3kQea6A7fy1W2MiIlqtEl/IOEpbsO2Q5Em2vkBtb5Q6hLXNB/wASF1KRqW8GW4QINts0
qG4XXH46WGOcm5K4dpEgpUW4/aHvX3FJ4hVfnTbVXJqfA0XaGRiaL33Sp9dy/bjG24hAjfcd
wnrWvt+GgmtCcatTd5v9vtTj32zU+Q1HckKFQ2HFcedPgmtToYItqPEE9sxm57/aemrm8WGE
ocWqPDjmQw6kqUlPGSP8up+nf5a1JlWUFNvNrZt0xDIQ+17O463IShK0k047IUoGqd+uh7Ny
OYGPXG4Wa4XSC29JFvdYbdjMtlalCQF8V7dEgoofx0qsk7Inpfj672r79T1xkpMGYiFIESPJ
cIWqIiWVuJQaoShLvDf1Hw1dTGnI0g4ZeJNvt0+PLUDcu8qGvtPBhsNOqbq9KA7TRWoVqVbe
uqEaTZxm2q8QbHEdk3V4RpkJqUxCT9yWlNPk9toqH6HL2mqdUC7EAlpby0tp3W6oJQCaAqUQ
Egk7AVO/9ug0S8+2XGLGFxVObmMxXhBW9HdUosPoBKGQTTYJSaFNU09dRnQqLcMpvN1YhsSJ
EqfMSYbSOW5TIPBbZAFAlfRR+HXWkLQ0dtMtFzdgMIclSGVFDnabXy5o2WngRX2q2qeuhoEx
3aJ2TQO4i2NvoBdT3kIa5cHmgVJNFJVRxIqQRuBqgm0Lby6/oddkfclyS48qUXXEoWpD7iuS
3UFQPBal+4lNN9ZbZKqQ6tt3y2SO3EUXERmm3AtSQUhu2rXLRVahRRQta10rVXTTIWfLOEnL
b3Jjzy52nFXxkxrhO7CEvSGyUuUU6BuUltPGnQCmlShcCjmt2bTaEsRIUZdjLn2TrUfir/qE
/q948v1OVeXpvvojgy9kYi8y2bRFtqO2YsSYue0VDkovLbS2rn6KRxbG2lIZge/v0uHlv75P
hMLnNSUSXIAT2GA4kJWlKEskcECgNEnRaWaTIu5S25lxkTmWExkyllwR0KUtAKzyUQpZKup9
Tps29mYJix5VHt6rR9za0XL9hkfdQCXXGwkuOh5aXUorz5LSOPSnz1n4NJwx3J8hXJ6yJtlZ
DKkx1wwliQUMKZddW6vuM8aqUruEH3AHSrZkPgZT8oVLx5i3x4pamdtLN3uKST9yzGWTETRI
9vDkQ4VH9T2/4Rpq8jgcjLrHGTjy7fDk9yysTGHvuXW1JdE3mVqR20pKVJLppy9KaOI+ZDkR
jOUWXHrg6bY1OcivQX4K1yew8639xxqttkjsEDhSivqrv01N8imEcwQ7lTF5lqmuR48NUJgx
ltwJQQGlIbSFR0hDaeS/eAPcnb10WZeSo04ICRQ8AAlQ2HT4aWZkt1ry+BEetNzfQ59/ZoK7
cxCACm3kvIeR3lO7dviX6lFN6bal/wDJ0TmfkcN5fZmVKuHB12S/Ct9tVbqcG20wC0S6XN0q
S52PYkCo5Gum1pCzyObbmVittwemqL04XK7R7utpCOKojbKnVFoF0kKWC8kDhtROpWz9g1sj
H7xa2kY7b2rj9yq3T3Jsq6lpTzYQ86haaNO0U4WuBWtCvapRpv11N4ZpRJ1Yv8dyVl7bF1TG
uN8dQu2X0oVDRxakl10kM8lRg+3Q8E7fl1NpPOTFSwR8givXO43i23ZpuO/JR9taJMxdqDri
GG0uTZCmwHHW1rQaN8gK/VrNtI19CpIuLDmO5St522i4XCWy82hcdRlrWp5SnDb3UfpsMgn3
g/UmlNbd5efBViI4JyVdbM5bpjDkxleLrt6GLRawB3WroOz3nuykckKNHCXSdx6+ms0tEBKb
D8gXOFIs89Cn2HmTc0KxNpgIqzawhSXAkIALaPoHBe5VU/PSn/j9zV4eeSNsFjNjyW4W+5iM
b3Ft7jtoaLjUho3FbaFxAlaSppS6KqAduWx0pLD4Mx+owzpba7wwvk0q4uxGTfEMpSlAuJB7
9Uo/TCunLh7eVdZbwD2VoJ3UNuexHxP/AIagkUCklSgAB1UD6/7euoZElIKQT7iR7adKDQUB
FwJPX27CnwJ0lIlJBBFN/wAdtQSKBIpVXzIp6aBEH6glPQjpvv8AGuomKaIJ3A4+id/5aCAT
UDfiPlvXSigQCKLHIjcbeo0kLPFIICuSfzCtanQUA2SSAOJp1/v1CEmlSoj2D09K/HUUgNQr
c+1PT+O+koE8VcvaaHqsfD8BoYC9qpRXio9Cfl10CBP0g0rSoP4jpTSQhIUob1B6kHrX00EL
Q2N+QHImgHwHx1FAQK6FIPy36/2aigDaeQKVkb7gn04/36RSASkJCiKKNaHbp+GohKOZSUpP
tpX/AMdRCuIFCr6TtQH3fjoNQEVBKSGzUHYkGpIGxGoGxS0oVxAokGg3+rpqBiViqk9a02P4
CmqSDBPbFRQAb13I0lAS0jam6gNirr8RoIHJfSm/XlT169Px0yUFo8TX+BYshjXqZb37mITn
eZixnC0vuJGyjRLnJIHVNNdKuFgmy34nlWL2zPXb8xYrlObo65boDUhBebU42oOdwhv9UcVK
KaAU0JtVaWmbhPI4tWa4hCtNphybXdFRLLc3bpZJrT7aS4suJcS1IStHaWE8aLUn3alZlMcn
WL5ZhtokxhbXUwrzInSchUh498KmLJSbe6f/AI3AceYpRfrohrDB9fuRdmyPCo2B3XHZirmL
nc3kS0ONJjrYCovMMg8ylZSvn+of5aW24MqqZEyr1a3MRh2dDEtu4MPqkPPqluLhOJUDThFN
UtrFfqT19euq0tmbOENsZvAs2RWm7dvvot8pmU4yFAFSWlhSkgketNtbQbNeleZ7Ei9wpbEu
RIholypiktW9qI4yp6O4037w66p1SS4ORHEEb01zqmbbRUMO8o3a3vyFXe5zJTbcG4N21ypd
damTWwgKStR5hPJIPWg6gaupSjtivkWatmdDyG+y4cp6E1BteQBC5D8QNv8AfVzW2UyFhVSm
oUSPw1uyUYDs2ybv/laKixXWPYLxLF6djWeIi8JbWw9MVB7glP8AcVVxHMLTQrPI9NCqm86C
5V7LnT7l+vV3ulzetsq7W92Ot2BFYWl9xQSO1IbWnilDnGq1o91d66moUEnLwU0BSaJIIAHt
p0Hp/HTg5wX3D/2Vfj7KYEm+QYM+7mJ9tCkrdQpX2TinFFRShSfeDRHz601XejrWuDrB+wV4
lkw5F6tqZn7kxcIltccKZbbLQUl5sUbrzWSlSU8iCPhqq/yCGkWzNskslyVfnzeLHOtz5jLx
ZlplIkMzUvtqW9KSWg6UdvlzWpZQpPporgWmlMFWzh3BpdpuDWM/ZolvXBC7spwce4pSaB21
VKu1C58ytB3px9NPaXkzdLg7eRLDc7rKxNqBKiXiYq0RbcpqHMZkOGTHStTifqrxA6KVsdFW
sjGTpgvj69wpd1XkmPslLdsfdtwuxQmH9y242Ula21nhQGm+29DqnwH1JZqyWJTy1Is9lezU
2hDq7FyZ+wM37vi5xaDoj8/tOKqBz59dDZp1T4I7ArTIc8tIBtkOB9uy+Z0WFJT2Iy3Iy0Bb
Ky77FcyBxStQTWnTVpZJMocTD8mkSLpFZhduTZGXJF0jrcaQttpvdZTyVRynwQSdbUScG2W6
zY1EVidrlW3Fk5IZrLjl5niU4y5CeQ4pKWwtK0oa/TCV+9J/lqbl7Oj1HBXo9virwSXcP29o
vt3BLTd2MwJkNoKRVlUD8yT/APhR/dotCJZRMzLA694os9zZsS2nkXZxp65JacUqTHWyng44
8RQNqcPBNKJFNt9C5BrRZ7xZYDtzgY5kNhRbZcu5RY/3VvgqhRILChRxhNxUB94taaJ5K2qK
gnUsmu3AwvVox2NMtl8etgVjsG4i2zbULW5bZjwU2pTS6KcWZiAUVO4Kjt+bVk0obljDNbFj
ymsVlQoqUy7pKejzY0OE5aXHWkuobaKIb7jvB1fIpQ4TxWfw0y+TnaqlEaxilhF3yn71F0Yt
eNsmQbb+im4lKnkMpbdVRbQUkrqSAQdWdeTSpPJK5biWIW5lV3lifFs0aHbWjDiNMtTVPyo6
3g68l7k0FEIo5Tqd9SkozBTcxx1jH8jl2hT33LbIaWxIWgIUpD7SHkhSd6EJcAVTavTWVky8
E3Z8KsUyFj/3d1fj3DJHXWLXFZjBbKHWnuyFPOqWj2KNNkDkPhTSlsRrPxmx2+1RFXG7rjXq
dFVMjw0RS7GUjuLbQ2p7kHEqWWlb8KdNMQLquBo/arcjCI12RQ3Bd1chulXLkhtEZLqUjcoU
DWpPUHbprJNQhrYbfKu13hWhEjtie8lhK3Cotp7hpVY/wj11NCmTMTx6J9xg2uy3yFdH5klU
J9IQ60WVpbU6pSUuCr7PBCuK0dVe311rtKMqjWuR/O8SzoFzbjyLnFixHYbsxuc+080B2HEt
KSuOAp1J5OJ4mlCOnTWYNQ0xs742ukNS3X7rBiQoyFiZcnFuhll1p4R1R1hCC4V9xQoUgpoa
10QghyRF4xW92wSnXVod7EhuMXIylOLcElkPtPJAAX2nUEAE0PL29daBptk3a/GmcumfGhON
NKYkGC4lD7qRJfDYWW2iyhSV0DgT+pxFTT46HHkkVVFjmqgTrjyaZatbiGJMd1xKJHJ1Rb4t
NE1cCVJovj01tLgE1vgj3AabJNEjZP8Av1gWiyzMUCbHLehX1m4KtaUSLhAabeSywHSlHJmS
r9B9fJSRRNFHf4a1iB+RiLPdJMWwhUlssXYyhbW33O2hjsuhDxUVexAWoVqOvrvrGsgnLFZF
Y3olsjXZF1j3y11NvanMd0cVsNhwR+D6G10QhX5fb6ddUFayk7P47fl5C9HuE+NHnW9qPIkX
KW/xZaSUNqjkucSTxCkCgSaevx0PZqThIsmTRcvhRkyWX7vcnGpttnsOJWxI+4WotvodUkJ4
rWlX1J/hrULZlCLFYb9NXInQn4UNC1OwkOz5DMcPOuji6wz3a810WPTau2iFyESMGIF+i/vc
dqOUPQI7iLy06lJWyyh1CHRRXRSXOI9u+rk04akepxXLTjTstENtVrf4y1pLjBlltpBUl5DJ
V9x2uJKiUpoRv6aqJFYiVG4/6fSssg2szVBEriATK7Aq3z607RCuPT11LkJyDHZFwj36C5a2
TIuTb6FQY3Hn3HkmqU8PX8NZg0mWDEpuYFcaBAsf7zGkuzg3FWhZEhT8cNy2uSVJIDbXFVAR
x1t5DErBF5dbpkCbHYfxw40pxjmIilOr7wqQXf1VLI329ppqbb2MyMI8+ejHZkJtK02+TIZd
flJKklK2m1pQ3yHtHILUf4aDMF4RkN9vTdxRNxG4XFBloku/tz0xhbLiYbUYNulpt3kFNNpX
7999ttCcDZENBvt+jW+MBbJhxqE1IhzotH/tJDDrylrbeWAWwWlLA5dQRpSbwWNi7heWp+Lw
4r9vvDbsKBHiNPpdV+2L+3Kgl5bJb478uoX101QOnMlVZW0l9kvMl2MhaVSG605tAjmgHr7k
7V1hjME9lN1xqelH7KLkwy266pq3TTG+1jMubpbZDFDtQAlXUD46TUjHHrx+z3u33YFxP2T6
Hl9pXBXFCwSkL2+oCmoGyStt8t0g3uPdZMuOzeXWnl3Bgd6QgsvLdSOKlo5hYXQnltTU7Ocm
atNQP73nCJdvdhW8yI7qZsN5qQFcCtuDB+1QtXE17ile/jvt10rApIqUiRJkyXJD6ip95ZW8
6aArWd1KNNtz8NAMvEPJbG1aQtdydjyEWJ+zIsqGVlC5K0LSmStyoa4r5jf6xTUhalHR+/46
sSZn3wXEuH2jcWxBhYMBTLjKnXeRAZ3Q0unbNfd+OlWhyDriBzLyvErjIdTPeWtp0z3FLS0C
vvuT+cRwVHVEUBNfRNR11qraRpJrAwypyxXOLZIlvmwEXlMqSJlwDzzkdDRba7BVKebQot8w
r8mxrrPfDkUpYvJ7jZIV2zGZbJFvnO3FcB2zrjoS62lPKr/aqlKWnAUe8ceh09tfCMNf5Gn3
NhbyKU/DkW+LFctABXKjGUyuWuKnvNttJp231uk8XBslWptODSSyjlg8GVKxnKWmH4rRkQmm
ENyX2GXluh5DpQO6pJNEIUap9dtFY7ZKJRTa8t6lNd6nauhokak1abREtt9SpmLBxuebYxbZ
6ZCRJkQ13Fn7h0gKWtXBIUVLUkFPw002oNR+MDLKsfxS1XOEy/bXoTEybJZafd7aGvslVQzI
CGnn1ultZSsq9tRtTfUjMZIiViqoWVWbFmYip9/jPkXtuOrnzUXUrQyO2Sn9JkclKQfzfEab
YFfyH12xQovWTrXaJN1uEW7qYj2eroe+1c7i/uVdodxQHFICqU31r2JT9i7YDsOC2OVa/u7m
zNS85Nkxn4zLT8l2I0wlBFftkrbDlVmneKUmn46xGSdeSqosyTiE+7CG+4WLk1DauodbDCQt
tSuy4xUuKcWE1Ck+0bjSmpcEsVkjrTGbmXSDEfcLDEqSyw8/sO0hxYSpe+3tBrrAF3tmEWi7
zyw+ly2pZuybO0G/qksoafeLtHOrh7KN0be78NbiJXwSTcNkdHxm1TLDGyNSFRwqHOmu2lpR
l1wJCmyxHc59uSlSCOA3HE9NFGujk7O34wP7RjbqvHjFulY0bw/HyiS1LjPdxLsZp0ISX1tt
FLhok1Cug9dDax9DN2nH0GafGGFm4KZU0sIh3OZFtTQkcl5A21uIzLiSEsLaVRBX61+Os9oM
tEPi2OuS/GnkApsznKM/ENuUpkvPR3A8pL7bbtOR4I2cofx1q+k+QZX3scYT42Zvosr6Xvvj
G/fxKQY6/wD8QqJXmlXwXSnz0MIhkJY7cLnfLfa1L7bU+UzHW4aKUhLrgSVAbdK+uqqJ2Nfd
8R+PZGTR7DBnPsy0XJdvlstOPPqcaS2sqdK3mkIYWlTf0pUoHQtEqlaxzEPHeSXpNvgP3SCq
M3NVNjOlt91bcJorRIbc4hILihxLdNvjpkU2crNimB3hmZdYEuezbLTbzOutqWGzLQpLgbSl
qQQGldzly3T7emgIJRzxZi7djkZM9cLgmxG3wrnGiNoZVL4THls9pa1fpcklFQqlNUwTqVqH
jmLSsjlRI90nTLM3FVIYmxIan5AWEhRaeZA2DdaOLTtXp11Mlgqym9qdK7A9aV1Jgy1Yrjto
umI5bNkJWLlY40aVAfS4UIBceLa0ON/SsK+J6a1PkIxIdqsFsk+Ob5fFhK7pAnQ2A6rkFNNv
lQ9lDxVz9QobU20JTHgVUs+beL7BGvd7bx+4Bg2WIzPm2lxtagzEUhvuKEkk8nOS+XCnT10E
qtfQql/wCVZLY7dJc1j7Ra0CwvpSofubLgClPRqV4JbSpJUF/hrSFh5jjdvx+Pjc61uPBN5t
bVwd7ik8mnlqKVJQpATRNU1HroewteBWJWvJcjlTIcK8Khpjw3ZU1yVIeS0YyOIdSqnKoIVu
NMlMjo+MpwaEpu5QU479mbgm/kPJYDPd7A/S498K7vt+n59NGCcnPFsOF2zVjHjcG5LbrTzj
cq3vcQ8EMqdSG3HE+1R47haR8NEIOz+5VlPOhVeaw4SA5RSgVem9DuNtaWA7Fit+H5G5Zm5a
JsW3xbghbsKJImCO7NQiqVFlv6FgEcaLI1GuzgikWC4Lx9WRpVH+xaeERxvuoEkKNKHsH3FG
/wBWlbONKt54Jx+0ZJGwi33kXtS7bcpS7cLYiQ7xZKUhwd0cu0kU9xTT29dZO0wSl+sF3csb
0pWZLu9jgy2ItwU592YyXFniHovdUUTA2QeXb3A30V+CbYmLgV0mCzTbFlLc7vTFQLc+tEmG
4x9u2qQtbAdNVNtpT0a/OQPXW5T4M9XMyNMos+YSJlnmPXV/IF3dRYtEtZfZf76HOJZ7Mng6
17lCitgfjrMwmPbI0s8fyHCySdGs33ichjhxu4oQ4nuJCSOYfW4otncCnNW/ppzyCcFgmMeY
7Zk7zMKXNud4fgRZU91sJd/SdQVNtOiQC3zb3Hx/hq4yXZzorbeYXyFImouVvgXKa88p2U5e
YSJUlLuySCpzisAU+npqgp8k6M38lP2WOLTblQrTb2nXEO2+GEstpdcUVOtFSXO1xUVJCmiC
NEL7mpwRH+os9VjFPtnHLEhr7RF1XEQ6UMEmrCJhQVpRUkU5bVppSM2fkOVfclVgkCBIbgqx
9chaIqDHbTJ7rASXHO5xCvcmiFLCqnodMqZMvKUhRs6biSmZlmxy2Wu6xFd2LOjIkOLbIBHI
NOOrQr2kihGqMG+2CJtFzvOP3pm4Roy2priVhuM+yoofafSULR21BJcbcSoj29a7azBmckkn
Nb5HlPqiW2JbS4y0w7Bjw1NJCWX0yU1aNXAe42CVKP07DbWmvJOzfA7X5NvMtp6Fc7dCudvl
qdekQH2FhDi3pJl8/wBNaFgIcWeJB+k01mINOymBMTylkbN8u12AjOyLo00l1vh+iyuLT7N9
lI6OReP6ZNfWtdPgLI4WjyJMttjRaXIzcpph2Q/DeXIkxyl2WoKe7iI7jaJCVLHIJWNug20u
2Rbxggk3ZSbO/Z1RYxRJfTJ+6LQ+7bKBQoadrVLZ/MnprHYU5UEYpRBoaezdR9N9UHPrDJe5
y7mrHrPa5kR+PDgmS7EkPB5Lb/3Sg4SgLSlv20/ITUdda2h3ga3q7uXSHaYamVJTboYgtgqU
4HP1VrS42kgcAS5QJTXfWk8D2c4F5Te5dzvSZMhh+OtpmM0mO+pfJKY6EoTxCko4pUUVHt/n
rCyKbOsrJ2Jufqyd6K6425OE5cZwpde4pKSpJJRwUfb1KKfLTbRJ/Amy5eqHmP8AqWW6/JWF
SC45zSt+rzTjSarcqn286EU6bDWSTWiOxjJGrCuc4lSmpzkB2NbpLRRyZfcKB3UlzdFEJUOS
PdvtrXXJlPhCoN/Yi47frQ4pXevJhKaHEKQREeLy+4onkDvUUrU9dZWxbWmGxkzcbFZtojvK
budxlA3B4JTzegJaARHW8T3OHeHItgcehPw1LcmnY53S/MScUsNnStX3FsVOW+kooEiU4haO
Kwffsg12HH56jMtjvxxlMfGcvi3p9xbX2rMoNLaTzUHnIy0NDj8C4pIPy0OppM0OB5YsF0ix
0XhUC0T02uXb1xjbFSLUwpyY06y2mKgq5oU02o/8qtaRjbgy7K3Lc5fnnLfKizI7qUu9y3RV
wIoVxoUNxnaKRxp7vQnRbJtMsuLZvabLiMG2vwLfdJJvRmSW7lHW8mOwWm095hSVN0X7TXr+
GhYkp5LRfslsF5sD7UO5WXvmbenu5clTG5jaJkzusORFMAI9yPdRYO+tq2dmatpDB3PbErHE
2dksM3FrGG4cO8kvqH3IZUmRBeYK+z+oglCFpR7VfzBXCFrYvyxd49ztapFvuNqehgQ+aYt0
lOTZCUR22+DtvWox08FjfgBxpXVXRqXJRMDVZxl9qN4LX7Wl8GQJAqyPae2XP+Tu8a/LrtrD
0C+DQZ0ey3CNHg3xVgVmcu3XNjuW5UdqGHldk20rcY4xUO1D3FWxH5uo10rbzqSak6Ym/YMW
yzGGK2WU+/Z5LF0kr7clgTnFvhpC3iQhDh9jS1dOO1ab6G/8kvgyqc1ITcpKZMdmK+h5xMiN
H4hhpzkeSGQkqT20n6eJIp66LvJmqL346tV1uGKZqzCtsd5tdtcaRMWGRJ+6Cm1IZaW6tKkj
t81+wb+prtprs08oz1QQTx22p6VFfhrIQGpSiskmlTsKdCNTJsKgPuFKdFelCT11IAglQp8t
qfEn00yKBxUU8QagCgI6jUZshJ5nqD8OJ9NBOsiV0A5dADxrXeny0oVgQSEqCkn1oB6k/DWg
kIr3A5FSVEhNfQ+taaMGpCJUhQ32BqNZYNhciCVVFT1HoK7ainIklJRxVShoadR8tEE87CBS
fdSlDUEj19TrRII7nkd3AOINdxTVozassB4hSCQa0pX4k/H4aDaO0u4zpTja5jy5CmUJZYSs
khDaRshI6AD5auBOTbzzK+60otqQNnEKUlQrsaKTvv0OoJAudLdILq1uggUDq1K2QKIrUn6R
sn4amSY9tl/vtsQ6LXcJcBEgpU79o+4zyUkbFXbUmvH0roNHJm7XFiNKjx5bzUeeAJsdtag2
+AeSUvIrxXQ77+umXMh8EgM2yxFnYtib1MRbYvBUaCHVdhpTaubZQ3WiShW4I9dZFWgOw5lf
7LcGZ8OSFqakqmKaeHcackOILa1uI2qVIWpPLrvsdabbGtoOmYZxdcplx357TDLMFrsRI0VC
ktoSpXJZKnFOOqKjv7lmnpTSm4gy3ka2HLMix9br1mnLhmSng6UhKtkmqVcVhSQtPVK6ck+h
Gsl2cC4mY5RDVbjHuDjarUXl27eoQqUSXlqCq81OcvcV11vsx7N7HSM8v7eJnFEfbJsqlpcC
PtWe/wA0mocEjj3e4OgVyqBt01K7TnknqBUryFlskQyqeptyAtDyHo6UMuOvNijciStI5PvJ
T7eSydtZ7YGcycbtnWQXKQw8t/7AMJUmIxbU/ZMslauTqkIZKeK3VbuKr7vw21Ozage7JFzy
jk790lXK4/bXb79qPHmQ7g13oi/tEBMZamuQ9zdKhVdyTXqdUvHwXY723y1kDC0yLlDi3u4x
5blxgT7g2pbsSW6UqLrJbU2nq2k8VggcQKay2Vbx9SEdy2SvG1WBDKUomTDcbvMBq7MeFSyF
1oEoZ5qKUp6qJPy10d5fZ7M8JeCBVUqBABQAfUfw/jrMgw09tNT+Q0Px66CQSqinwANB6/Gu
oQJIClVrQ7qPT+OgUgyQVpKSSlIqr5V6fjpQNAKlghPEcknenQDUSDJqfcKkbE+oH9+hiGHC
QajoK77g/L8dQiSoK4hdQlR67bb6iAgcKhK+tQdtjqIMgigJ9KlR/uGlAJC0jlyqkncfEn8N
RpBq5cgmlEnYAbbdNBkJS1KNFA1BoE+tPXcahTYZCiN01+B6b/PULF0QpoKSRyVQCtDUjQMB
BLlOCtxuUpHQ1+NdBBn3EJSCN9gab6hUiXKkkFQ/9HpUemkGGUAqryoo0qDUUJ1SAr3cqitO
oO+iSEqSrZJP6hNVIApX8dJpAINDyG4r7CaD5fhqkmHz/NT30p60pTpXVJvpwS3jO0sXfJbd
aHpbsRq5S2oypUfdxJdVwQQkkAjkfd8td6S00cW4L3ZcJXO8oKw565OhpmU/DM1D3YfdQzUA
NFXMdxRT7Uq21mv8fknZskbV41uNwxZd/g35MG4i6v2labhLEVpSUUCEB1NXFPOKUBQbHVbS
JyQ//a3yEX40VmKfuVTHoaEtv8kxZMf3LD6kVEcEe9KydxvoCXOzrZrJki8Wya4xsgeif6ec
bRMtrTrxbfTJWWipLjSw2SVA12NRvXfVZQkx/KMkJ/pa/f6WGScW/wBkU/8Abdzvt8w6CRQs
V5gbdaaIjJS2iLitSHnkR46FOPvLDbSWwSpS1GiQKb1J6a0smTRrjYPOSXLe1McuUt9l7s2/
tym3yxIDZq3Vtw9tfAHZdNZk0mRbOD+U7Nd2OxaZ8a6y23gyqPxUtxJQe+3zQpSQrgfchXu+
Wky0grbjXlSwZAzHtttuEK9PNlbDLLaVc2fzlVebSkD1qdj89SSJDuXF8xXOZcrZKiXV+Q+2
wm6QAwaFtBK49UIAQG+XIp4UFa6noVbBC2eDnFuvEliyxbhDvkdlxEtiO26mQlg0LgcQkcgj
pX+GoiDUColQCuR3HLp89RhyXPELlmsHGL8uxx4TtlaQg30yWWHnO0skIqHPeptKt/b0O+mc
mmp+gLRfMojePbmxHj2//TpfbZnqfYQZTzzhJbW04d3FNHofya05bJYRKZnlPku2ypduv0eK
1NcbZizbozFSpT7JShxEVclA7a0lITVHX01ioyVu6ZrkdxYuMWfwfZnPNvvMOMUTHcaG32wP
/wAcKA4qQmlRpWMmXkmctyG7PCwR8qxWLGhwo7f2DEcORFPwSKoQh1C3v06nlUCtdAOy5F4/
5CxHHXZ8i34wphc6G/BWHpy5TCkvcSO424hHt291FV0wJwHlKR2/sV2uGMc+y/bzYkuPJaDR
d+4oJHIvBXc93X5alPBJhYRlTsbOGrxYcaYmTFoWi12mM44lCP01BxaVVKnFdvly5fjrXECQ
0a82JmVc3l4+y8xNbU3DZefeP2Liif1GV7KWQenPV8GYHjWa2p+y261Xy0N3J60NOsWl0yFs
BLbquZS602CHaKNQajVHwLsvuRbd3aGPuWf9thl5T/3H7sUH75KQKdkLrTt/KmiYMpk4MtsC
8CiY47aH1Khzv3BU1Ugdlx8pShxC2+APBTaacQqo0Z/Umk0JzPI8dvCEPRLdcra6TWFEkSUL
gsMgUKIzQbboOPQ/z0rwOEOLd5EZgz8WkC1AQccirZegqX7X3ngpL01AUKNurCkqqfzJGpqG
K2cL/m9quTdhYbZl3GHZHXHJEm6PpM+Uh5wLUw68z9KEpFEEGorpjyZ7Z+gdrynG0qym3yYc
iDY8obbQEMOCTIiBh4PNAKfIDo24kqNdDk0mmT2QZ/ieRwH7NMRcLdbVft7keWhKJEnuW+Oq
Pwdb5IBC0q5cgrWVgpUyQ2QSrHmeTXbIJN4ZsK5khIZhSWH319ttpDQWVspUkcuHT0Otq0YM
uqZao+XYhaIeEOKvr017FkSlOwITa1RZSnXl8Uq5dstLUlQNVpII1laZqMFWvOQ2S5Wm3vtX
abb5cK1s2t2yMsqLTymVLBcL/IM8HAsEgp5bU1qAhHObcccd8aW2yNXbleoc96cqEuO6EcZS
UtFCXj7Bw48yeh/HTVb8C/geYhbrbimWWnIp9+tM2DbpHdkMwZKpEkpU2pA7bJbQVkKUCd9h
rDcolWDnh+evt5BHXkspcuE196qO/Jq6tl+UwppCwtP6iGkmntb+nqkV0w+DK1DLIx5EtDM+
R350YPfYR4iJ0ESnVOpXPbefbdkSf1XKMJV12A9ugVB0uuXWGfb5qLFd2LVkbzjyYV6fKowa
iJnurRH7/BRQFRlN8E0+kU2ppfgXoYSL14+vN8u7Ep1EG0RZUa7wZLaA0ZK2mW2p8dG1VLkq
RyZBomtT66HlGYbyOLFmFjdsomdi3iVIl3GRfrfIkCE26h9QLDYYDTxfQlv20QRTpq0U6KAi
YwMKnRBNYbdduLbzdqVH5SVIS3TuolfkbR9JbP1ddUqS0kV9CltuIcSQFpUlSVD0KTUfjpLs
Wm85NcHcSSxIlqnvZA+t+ep93uqYMN/9EMNdI3c5lSv8Q2G2hFPAm+5M9DZxSbbi21NtNkEV
K6NvEK7z9VcRXgoJXsFe5J30J4ge0NiM2urjk602x9xdwTa+2tV1feRIkyDMDb6kuPJKk9tr
6G0k+0VrvXSslokZ93uS/Kd7i2mQbWi93QMLlxksrksMhXSO4FBscq+7iqivjpSSGIOON3yD
dfJ5uj9tQlhUeagwW0trKltwHGg8vlxQt1ak9xavj01m2XBmumR/j7IZMO1vNFhpdns8ZU+6
Q22G1quQUtCEsy3XQSlv38QobpT89aUyThKSOtQgOYDmCvtUpk/cWow3kI5/bpXIc5pSs7oq
KJH+IDTXDf0Bpx9yQkX1EnCJb8yBGTCfX+22WFHioQmJKbQ28uZ91TuLWU1HaKjUqJ6azVYZ
N6InIGbWnCMScjMdue/+6G4vBBCnQmSkNe87L4I2FPp0I0d/Fdtsk7MG2r4wZdqRDmvSY6KB
VGoq1BxPzbPuT8xobKNmhtYl49t+OsXOKGLtDascqQb5IhOyUSJDdyZjiT9oVpVxSFqQkH6R
11qqT/UxfD+xj+Syocq8vuw2Y7LAo2ymJGVDaUlP5/t1qWptSq+6p1NDMlvxfD8aueKWSfcr
g3bpE7IFW5xbjTzofYDbJ7Se39HuWd/nrL5N+CRyez4Zj1m5/ZWwzHX7whlicJa5C0RJ640d
EdTSktDtoTQc+pG+tqJMZO98wzEIOIJlBmOh2NYYVwmONOSV3QTprai0642r/p/tlO8UroPa
CPU6q2wNlwQ/kyy4nY5Ei1WyLD+6ZcZbK25cpyY2BHQ4tb7Dn6A5rWR7eh1JpL6lEuSrYpYH
8hv0KysOBhU1Syp5QNEIZbU64rj8kINB8dYEn1YBEk2NWR2e4qdsaoEycw5MZDUkrgOttPMq
Q2paPcX0lK6/HXRqcE7QOsT8d2yfcrfEvN0MRi52V28tLYZLqk8EuntqFRWgY5E+o2Hx1l8f
UrW2vBS08QopbUXWkEhpYTx5IJ9quB+nkN6ems2WSqy1YxjOMXeBSXJksPAOrl3MhpECAlAP
a7pcBXIW714NHkK9NNR0VXkAkkjisDoep+Xz1FwWu5eOrtEkvQ2pcOfdoz7EaTbWHD3mlS+I
YNVhCFc1LCSEklPrtp2jKIu9WH9rCUpnQpylLLDyIq1EtPINClwOJT61HIVG2iIGZ0N7/Z5N
jujtskPxpbzHDk9BdTIjr5pCxwdTsqlaK+B0tGU/J3s9hbnRpdymzPsLPbS0mbODZfUlUhRQ
2hDCCFL5KSa77DQk24QjO/WOfZbxItVxSluZGKCsINQpDiA42tKvgpCwflpCfIwAIFSOVQRU
+n/HQJzCTVJBooUPLruNEgDjTc/mB2T/AH/HVJpCFJV6JIoKEj1r8tUg0Jqo+0gH8Bt/HSSZ
z4e2gUeJ/LoJoHKhNKn4kfT+H8dIIOqBUinWtPX8NBtBL514n3V6HpsflqBidvcKGo/xClNt
RAI5U3onoQN6H8NAABWKhXpv/sdJoNRHRJqlXp/edIyIrX6DQk7ctZMiiRX2g1G5I1EBIJWp
VK/w21CGrkeXEe/oR89RABBHEjYVBJ/nqFBdfzGqqVI/3HSGgVQolNCDWg3O+omwc1FXFe6T
ufgdBIUkGp5ChO1fQ/hogQiSDQqBFemlEwBTgUFfSKch8/x1MBIHpuFE7VHX10EAg9ONFHav
StdQyGSD7KpIH4+uokGU1V7iSgb8RUAj8dUjAFBNQjqQOQJIHX1GogJ5JRStfgR1/jpICUkC
p2NTzI9afjqJoHEVBChU77/PWWKD3CAVE+4mp1EBdAeJ+RJ+XQ6SAQUJNTSp/D8DXShC9quQ
5nfrQ9Kb9dABcUipPQetfjpGAyQj1AHU19K6ADbQAa1oqlDU+nXb46pJA955JBqBuD/dqkRX
6Q2pSuw+XxpoN9lAn4JB4gbEnemowHtypSnHY1Hz2IpqJBpQpAKRT29R600G0gia8vbVsUKS
fj8/jqKAJ4NqruVfA/79QJClqBWShQPL1+mo0ChIWVK41FD/AD/HSyTFb9z+z1pTWQ5JzxPc
psHKYcmDZ0Xye04lcC3r50U+g8m1J4KQorQU1ArTXppKTg5uZNDs+ZXJXlNV1YxCH/qZ6QpL
NqW9IaCJ/IlbvuWOLiqlKkK9vw1lawKaOv8ArqREsDz9wweGrGU3pyU0tcmQj7a7JCS42iQl
ZWPo5FJFCNtTnAtJsaN+br2iTLfbgwkpuUp2VkLaU+y5NPoKBHkcuRS222opQUmvrqZMTjWa
Wm34hf4CsSek268uoE6YxLkNtMJbUXIrQUG1pR267EqqoddDch2nZWP32EbALMqzQTJDxkC8
8T99xJr2iutFN+nTpps5JtDay3F22XSHcWQlTsR5qQhC60UWlhQTtvvTVVkjWpHnqMq8RLsz
DuLqW5330uA/NZ+3+hae2whtlBNFLBBdJIprKNOuCnYXnr2PX12U4JEmA+mXxhJeICXpjamw
9uePNIVurqRrXAPwzphObotcS4WS+OzpNjuEJyAgR3ElcTuOB1S47bp7fuUPcKjVtA3JYbz5
VsK8VmY3a2Lgls2yFaYU2QttDqhFkLedW8Gle0LSviAknVDZPyViwZqtF8N1v79znL+0MRp2
DMVElcRQISXR9SEgEcT+O+jWCTKssAqKk1oo1qTuD8dJnZfcFu+GQMYyK33i6yIk++x24iEt
wzJQ0hpzupcKkrSVcvp4+mpNm61lQKsN3xCN40vWOSr861dLs+zLjxFRHnGm1RVE8QoK41eF
KkfT611p2mDLSiC5ZB5Qxa5pvUuNkM6RGuduagQbGqOsIgyW1Nn7tCirsko4Fdfrr01nTyUe
CAzLPMXvNjvEC2SV2+ZIfiuzZ6owrfO0hKC9JCEp+2W0vksJT9XwrqgxaGjhmrGN3qFiVvtG
RwZEi2QEWx/vJeht8klTneLjyOKUfloTWukXSSY8X4yvH7zdp0+82MIVaJSGJDMtid2HQULQ
6pgiqgmlaAV9NTwgVMkiMssNC2cgti81NnVHOVLaAiKlGX3BV5TPGpj+3/K+VdCNxLwQ+FzI
LflUXK93+yyIzcdabjNbbESNILjJSEsoU2EuOBZSVrCU16j4aW0Cko0fGGpE27xXb5bIzluQ
Xm3FvlTMvqS3FcSmi1U26DfUZSZdbWuCcQsjVifxuMlLD6cnavaWO8ZBUopVRaC+oFugBaV1
0VcoXUqiH4B8eOxDItX3f3wcbimM4brwFKqRLrx7P/IR00pwG0WybEuErxBj7RftiJFuuplI
aS9GBRFWlKUOvtJVzWe4olwEFVOuqtlJuqyibzG+ZHY22i8yzk0ZFwjSnLrNlRXmFvpOzcKC
ytSosdRPGpJ23NNZkH8kw1Nx2TKsEDLWkJui7q7LYgXOdHuTiWfs3A2O+0lLbTSpHBKELJqo
A9NakSk5vZ1fvuMT7mtSrncXizPsd7kRXAyw25RtUiRCCQhl7kd+NU+mngElI2smNWade8zQ
1ZIl1n21ts2THo0lxyK4VupS8GXgppbyUNnkFVFPhqbiDKWy4ZrjFtkzpc6PjSb9eYrFphN2
BqS4exFVGUXHeUdSVuFtxPDmTTRWziBaMtzrFWLTl93tVkYkzLZDkBth5CVSOPJtK1Nl5AUD
wUop33+O+tdZyYbgu+OeMrBOteKmTYbg8u/NyBdb22+40zb1sqcCFKZCVJSohAJ7tE/x1lNw
bghbxhlnhWCAqJYbrcnZVnauj2SRnj9s065yKm1tFBj9tvjRXv5/x0zkrJEddrEtHiexXJMB
5pxVzmCTIWyeKmy0jtudwoCktnokFXEncb6U9wFq6IfA7HByLMbVYJbq22Zzq21qYI7oSltS
6o5AitU/DQ3yZSl5LDjuK4Dkt9jxLbKuMJppEt64szS0tZZiNcw8h1tPFsuKBBTxVx676G2j
XVPJLt+MsFkTXTGusyTbhEZeCGA24tt1+YiIkd9bTTTzf6vM8UgilNTsxdYG998aYjY7ZMvV
wuNzXZ4avtVMMNRxKMpMtyItVXD2ez+iVJ/NvTVW2TLSaG1y8Qy2ri9b7TNXcpkW6Q4Uloth
rtxLg0lyNI9xPu9ykqH0ileh1OAjwd7b4ox+TbvvFZQ3HamSp0e0ynQwy2tuE6Wu86h5xt1Q
Wof+0k00G1VcFIFnhHG5FzMp4TmZYjIiCO4uMtqlVO/dj9NKvgg7ka0zlGJIhCW1PIS+ooYJ
AcWkAqSgkclJSeqgn09dCNr5LXkVgxBGKm8Wr7yC87JDFmTNfbeVcmUKKJD/AGW0JVG7NAdy
QSeO/XWk+GT1KG9wxvF7XIx124TpCLfdbM3dZzzSW3HQ+tTg7LaduKV9sDkqvGtd9ZTcFz9h
GV4nZrbLsTEFcmM7cQDNtE9TK5cJLjqUtLcUwA3+u253EJIqKb9dM4kElMHR3D8ZiZXesenS
7lJXAm/t1ph22O05NmL5qR/7nFlPDjuPWu2iDScnJvAoMnO5mLx7oXY0ePIktzUJQpzlHiGS
WVpSS3yQtPbWUqIrWmpuIJYOOGYdYcjYisrvchi7zEuuKYYiB2LCab/964vrcb4Nq+oKTXb/
AJttKtjQSQcPHnX8Xvd/Eji3aJEOOqIhKlB8zFrSFJXsBw7dRUb11qeAS7RJMzMDtpx+4z7b
kaZ5s6GnpqW4zzcJS3eCe3GlKVwdeTz+koCiEq9Brmv3GHMogbrZJcOwWG4uze7FvKJTsWKO
Q7AjvhpwHl7f1Fe72/x0IqzJ3wnGr5kt9XabM6GLiuPIdZ9yk80tNFamUlPq6kcBXb46Us5N
QXOwePvIsFiLNbvjtgC7Sqf2SiW4+1EckhkR1R46HF1W5RwpSn5qoRrXVcFKn7FGzL9xGQyD
Pu373MQhtLlx4PNFVBTgpEhDTqS309ydV1ARA8tliy2RZ7FJhuqNuuF1Ma1tJUtQZnoLYL6k
hJDYPJHv+Xy1hqV9BiWWt2H5HtuPuCbllrhQ33bkEQ57yFOvLbkranra5R3f819KuPvFeu2u
iTmEZ648kZLxPPIcJVydukN9/wDY0F63IlpcuKLKpoe1ccpH6SGVDkOVUjVE4NWUI7eQm87a
hyomS3KzSX2lMfeMRnoblySpKAGw4lpCXvainIk9KV1lLAPZS7HPukG7xJ1rU4Lmy4BELY5L
Kle3hwAPLmFFPGm4NNZkU0WObk+cQbiluXAbt6YrLiFWJcEx4QjSFAvB2GQkdt1aUlStqkDf
WsrKJwdoV88iXm+xJ1vhrmT7XbzHjsR4gDX7Y0lZU0ttISlTSkKWj4qGwqdZb0XVZKi4vlJW
4lKWgtS19lCSlCCTUoSk7gJ6AHoNtL+QUcFwtNwuUvEFx3bHbptqx/uqTcZZU06y7NV3OKCH
Ww8+qlUJ4lXEfAaaOMIy/JUAivy5GiT0J+adEGix3K+X13IWs5U2iM/Imd6K6niWvuIiW+QC
CSaJokq5fHWes/YU0ccnnBTrR/YBZH31ie4pa31rkpePIKR3/pZWaqASKH47a6bUnPs62hDD
Iruq7XJ+6OxYtuMnjWNBaEeMjgkJ9jQJpXjU/Ek6zM4NOnJ3sl/iQYcq3z4onWS5FpUyKHSw
pa46ithbb6QopIKjtShH8DoVmma0Nb9fLhfbtIus9aVzJRRXgOCUhCA22hKf+VtKR8+unJhW
kjnK0orb0V8x8dBCVKUKDrUiieldt+umBaASAa8aIH0/3/x0FJzTuBuanpTrXVBCVEJG1aqV
T3dD/AaRgTyUNiflXqNv7tBQBZAPuTQk/wBuoRNQSd612p61G+oAjvumqa7qIFTU/CvpqIMF
fUnkQCN69DqISAoADap6V2r+GgkEUbUB29Ceu+mRaD5EH6agfyHz1Ag/fUbg/CnXUagIJIpt
Sm9K6gDqrdKUkbem1PjXUDQn2D3A7kUJPp+J1AL3HAVG35vloNiVbihrt0PT8dQA5hBomhUa
0Pz0oghUjbqTQ0+fppIMJABFN+tCTtoET7Qd/cD6HrXUSFBPXc7jfb0GhiBVSodfaKo3/nU6
ggIgV5VNB7j8vjtqIUoCuwqCATU0+e2gQKUDWqiKbD0JPppICUkrNRtToetfl/DQyQqiSke3
iU709NCIFAtJV9R/mKD1/HTIsSnklNTXeprUGv4/DUZTACFV4q2rvQHb8DpNiiFV40osDrSg
IB+PTWWZgAJKhQDjuR/sfTUMCaq5FCgaAAFI6fGuoMiirl7qD20CfUahkBU4d0gV+J9aaRgC
SNqk0IoQd6H5V0ChHJQpUA7kgHf5fy0AdOJCRy2Cup/4U1EJKiDVfUkAmnp6fLWpCA1JNE0o
FevyP/joN4AlQCuRG/pXb/d11SMwHxKhyqTx2oPnrJSEmgHLlU/DYdPSmkJFUFN/cTQkdOuo
WuRBCgeSaV9CB0FfX46jJ05JrSp6/VXb8dMGia8P/aIzO1yJk5q3x4cxiU9LkFSWghhYWU1S
FHkae3Xf12aTOTmVBr0GfZWfPLt+RkFtTY3pT05+cpwlpyO8o8o/vQau77jp89c01Dkc6glb
FfYFtx242O1ZRZod0GROz+48UqYdtziUFYaccaWhNQOP01NONdalOJXAqudHMX7xW7dlvOJt
7sRV4fXi4bZDaIZUghT1zSSjuRVPEKQj0A/gM5iAy0RWIsLkYR5HivXe1tybutpMNgSm47bj
sd4uOKabVxAbUj/LNN+mttqElsWlpFSE2GPGxiGVahMTN5phKiq/dwitOX3VeJaP+GnT11hh
GCEsLcBV5gIuBSm2mSyJxUSAGSsdzpuBxrWmlMU4N9uVh8bO5Ha4UnH4LFrduQTBuPdgtsuR
O0s8eEZfccbXseTwCgaaE/kykUzEZOCZJlIt9zxe2RERBOcS5GcXHjPsMsq7DTieXvWHE8u7
yr8tItYOOG/6MymRLW1i0Nm+wba87DtDbq0wrhLCx2kpjqWlfJDZPIBzf6tUYBqCfuWG4Xbc
fuN7nY/GF7iWqBLlWFT7qWI0yRIU0tBQ24XEjgEq48v5V1P40TmCm49Cxi/Zg/8AtuPB63rh
OOm0y7iIaWn0gc3GpJqSlJ3ShW/x6aklAIpCigKIANK7J60+R1QUmh+PLC/ccGzUqtX3hZhM
uwJH2/dWiQl01DTgSVBQRuUpPTfU2LeICx6zqkeHcomJgPFxqZBUmUGysOtBZSrhyQeAa35K
Qr192lA+C6eQcFxy4XvIe3Z5dvkWy0x7m3cWCG4ckoS2kxmGQ2GqrBIKuXPl6aFsIKdmPjix
WSx3C5x3Jj7rjzCIUIcFPWsONhxbd4A5FLiq8WuPU9fhrVbZLqpGPkSz263WnEp0OAi3OXK0
NyrippC221vhRTyUFk8VkDcamsm5h4GfjjFLXlVynxp0p1iNb7e9cSqIhDz6iwUkoCFVqSFG
mhowWP8A7b4qqx/6oE+4Ixw21V0EVSI5npUmV9sUdz/IpX3Vp8tUk0MMBx3FLr5Gt9qQp6dZ
5TbzjCLmwG1LV9utYQ6htaeSUkVC21b7HQ1gYUFFcqFVSruIBIC0jkNlU3IrQfM60lODHaMl
sViGOQrHZrhf749Bk35l2RAQxED7KEtLKB3lBaXd1j8iTtqT+DcS4IhNotxxdy7LuS03NEjs
C2CK6Wi1t+p96P0gr/kO+iZAl5WJ423gNoyOPMkLmyrmuBcQ42kNtpQ2HFBhA9y1JB+on3fA
aJyWSef8dY7ebD99h3eclfdx4UJh+U08t/7kqTWW2G2zAWngSErJr00x5KzcBXbwnKjsY7Bt
C3Zl1vMiTGmiSwqLFQuKkLW42laQ4UABVCRVYTVI1YkkoIbL8BhWSXYIrNwcTGvcRDz1xnML
itsrU6pta1IIC0tpACqK91OvXRItTgi7PjkCRmcLHXbj3YsiY3CF0txS4hfdIQhxouU9vI7g
+ldMwZrWH8kpi2GIl+RH8ZXKd4sPTWA9HWlh537RLhSEFQUlJX2wSDt11WcjVkviNmyl3C4F
wx3J1WiVcrm9BTbnpiorDzgQgthpKAoqdWpVFE7fE6mocMG20oGU2z+Vbw9EjTJ0iR3fuYq1
uS6NRyytRksT3Eni0eSSaOk8h9OlJFkh2oNzf8fzbm3dpIhQJ7EOVYiXBHV9xVTbyKL7avck
kjh/HVCkHdqpGHJchEFMX9zlLgt8QiIp9xTKQ2oLSA0VcOKVCoFNZjJOzgnm898n5G63Z2Lj
JmvzVhDEZhDKHFKT7tnG0IWnpUkKG1dMQKs2Om/H2X2lm13S2vD93lTX2YSYL7a0objMh1cp
MttZb4JqpK6kU9dElrQ2v3/dFp+VNusiastRmHZVx+4S6yYgfAZIcQpTakB8D6DsvrpVMmbW
gd4tK8nO5JbLbCuMyBMvLKpUd95IeSuHJUp9Unsq5dxK1pUugHKtaaGdEyNx1Pku4zLrc8fR
PmPzUORbtNY4kupc3WhSnSmpPHYI3A+GlrJicYHGP3LyhCsL7NlYlG0RlPBwpjtu9h0AB8NK
cSp1tQH19r1+espSKtieCpJu9yFrctJmOi1OvfcPweVGVPAAB1SfiAOutFsXNsN/hRBOmWyV
EirIQmQ6y4hqpFUgLI41IO3x1JGHuDtdm8mYtVqi3WDIi22Kh39tcejKY5JfV3V/rKSkuVO4
qTQdNtVlg0rZjk55S1eEJtTF3DKKWxgQW2UhDqYjhWWw/QJPe3NSrelNKco1sbXC63O83xVz
cYSuapTPsjMkBSmEpQ3+m2FeiAD8dU8GWsjiPfr0jMl39uGF3luU5Net4YXxQ6TVaTHT70pH
Lp6aLzyVWuBhY7tLtE5ydFiBxXbfj8FpWW0iShTSx7CCFDmeAr1+OpoW8HG13Zy32+6RG2Uu
sXOMmE8t3me2lDiXAU8SlPKrdPdUfLQ0zNUhES+us2G42cNoU1cZEZ1cgqVyQqMVcUJAPBQX
z3qK/DTBuZEO5CXMbj2AJTSHMenh0OKqVvNpaUktV4UQEV5U5b/DUm4JrkF0v65tls9sU0AL
M3IQl0rUruiU93uXE+1HDpRPXqd9BbO2HZWMbuD9yYQH3nIsmI2or4FpUhvgHQtPRTZ9w0qr
KS6Wnza59k2zfW7hLdXb1QJF3hzzFnOOmX913g+UqpsA2R6gafgPllEya52243t+ZbhMLMkA
/wD2i+JUorAHMuPBKOfy26aG5BNFlxnyreMfstitlsekw2rfdHLjcVRni2JjC1NfoKSPgltQ
qdt9ZNJk5cfJeO3XEXLI/IvlsSs3EuwreqIqG+JspySjvpdAX7eaUK4/DbWlbk1GBrdPMD9z
skuwyBITZHbTEhw2EqQFx50RCElwKABMd9SCHG1E+356EzDYrylndgy1hUiFdro44l5LsS0T
YkRuLHSWw25xktK7quIHt5DfTXUGrIq/j3JbZZMshXeUVriNIfZfdYop1v7iOtkOtiu6kFzk
BUdNjogmi5OZvh67Y1jLt7uE2Cq1y4UjLZUdSnucqU1ISlUYurdUhvs8Pr6moG2tVbTk5Whv
fB0tHkKyWrLLa9bbrPYsbWPJsL84ILbiXuDqUSVsNL/USy46FpCTy+G+stwkdEnL+So2jGI9
1RIkOZNaYLiJC2wq5vOx3n6GvfA7buzlankeVeultNkThzNy1eN71hiZ0eS6q5tdlTLTbrDk
TgvvrbkqQFe5zjxV9XwoNVcMLKUWpWa4d/22l2ZM9uR3LMtiHCdD6pTdwCQTyb4piNp51LS0
kqp13rqo4eTVlKwVbJf2p3xzjMRu+QZ9ztzsjvQmQ4H2o8wtqbRVTaArtcFc/dtX2101thoy
5+pz8kt2swcZXEvcC8KgWtm1ykRHHHHGnWFOOKVRxCP0z3AlJ+PpoTlDaZkZYLfIFnh5FOfT
DXcxAQmztzWUyUKkmS2oltpwFBcSgFW+iqyTeCxYte0XhF4mMJsNry1xEFph65ojtRXmW1Of
dvBLyFR0Pu1b5hKRsNvXVVzvwKXggIcm1omZcHHLIhbkR1McfbOuxnHCsEi1gkFl2oJQtW3X
01qzfZZBVSRTFAkKIPJQFEr9AR+OubYGsXq4Wc4zc1olwl4fJsrLGOWxst99N/QloPOBinfb
dStLqluK9pSRuaga7euMfubejIipR6BPX+fy1g5NnNSSCkAf+r4HQKEfmpX0Jp1oPlqEUUpr
X47Af8dZFhFXUA8iNgfw+Hy0wSEFKyqnIED0+WkJCG/5qGv0+upggKBKiFH+Ppt8dAiVlNCa
+09B8vnqGQ1JNEgqp8D0HzOgmHTrWvI7b02GohKa1B6A+nrtqKRSjVIG9NtvWtfT46gbE8Nz
XqdlE+v8NIwGUpNU0oSaEfPUQXBRFagD1B6/iNBQKTVKwlXX49fTVBQJIBUabJA3I9f46UQd
SVKqQNh+I9d9RBqBLianqNz8joFoIewFJJCq05daA/DUSAVCp40r6H5/PUUhVASfdyWOiTvu
rbSQFVAoUg/Gnx6f2aCFewFPU06L6EHUQVCSDyBp8ev8dQhoCQpQ3KfQH0J0MoACipcqVD0H
4fHQQZ4ivUE+0D+6n9+oWAkdPX02qNJNB0SByKiDSqqdNQpwEpYApXc70PTfod9RNhkA7qSU
noep9ajUSCS2guVBA5bVqRoGA67iiSUgEKI241+ekGgcllY9vtA2FNtRINRqSQOnt6dCdZNN
CAg+7keJqST/ALfDSYg6UPIk1KBtyG3QaGQOX6ZTTkn8vzJ9P4aDSAoUSE1BBI+QOkcBEHcC
pIqajYE/+GoJD58wCnYH6q7bjroCQgDRS/qr9JUKbdDqHIRqop40APQHrtpNJhqSonjvXoN6
V1AwcV0py9OnpTpqLBJeNI1vl5RbItwaS/BlTI8eQytRQFpddSgp5J3Bodtd6KTn8mlxcSx1
PnL/AEwUhNmburkJqOv9UK417bbnuQopPQ7101yn9Dp2cQScXBcFdwefPu0lVslM5K9amJ8V
lUglunFuOhpS0pCOXuKlGusxpLwZnKGrfgq4uXCdBj3Vp9yyzFR8kdCFNphxA33W5Y5kd4Kb
Bq2jcHbWQTXJCWDDMauOKZpcVPuSZWOpjPW2Y0rtNPNuvFvktlQUocgKjfbTGCdfBD/s1rGM
i8qu6Bcu92k2IsOlfbPR0Pf5dPWmqClvAwtlufnXCNCYAMiU6lhoLICStxQQn3HoKnfSkmOD
T539POVR5MaO3cIjy35KYMha23o6UOqBKSnmn9ZuopVH46zKB2ZEx/Dt1uFwZi2u92m5hbj7
DzjT60JZdjtFxxC0LQFEcEmjiRx+ekzLk5N+J79LU1Ig3S1S7Wlh2Wu9NSiI7LcdQQ6XElId
RxKgPp39NtSa5F6Ov/aPKv1pf39sNrbjtz13lyXxiLjPKKA6HCnkqikEEKTX4ank1VvnZHnx
3kYv7mPvOQWZQY+7bceltIiusKAKHWnzseaTsNtH0MwVcIUCQCOtNt+m2tSZRbcRtV8l2HIJ
9tvb1tFkYblPw21upDzbqihRBQoJChSm430MUzra4WRqwS6XmPdZke326QzG+wadV9u4JJo4
XAlY7Y3H5DzrqqpYx8E3muBeSbJ3YLU+Xd7AyGJKXRIPFJcSmjpjKdUtKUKPELpTb01Qc3Mw
Ve4WnOo378uW1NCIbjSMkUpyo7yjVn7g8v1FEkFJ30oX5JrJ7ln9p/09Mul+N2jzI6LpaA8f
uW2+QKAHGXk8SpINKbjRk0ObB5C8nT5rrVhiMSbiqM62o2+3sNvpbcola0qYShVU7U+B1A/g
4Gb5lbyn2oujmSfbFPZU0FqVGJ936dCyW67nb6vnqWUTeMBY9cvJ5z95tma9bslkNqEx+czU
tNNoLnvR21ltNEhKeCadPTW4UZBMgY+b5TDl3aTHlhl6+oUzdlBpoJeQoUWnhw4o6n6QNUSF
mySs2TeRo9gUza2X5Fnh8wiR9kmSmIFiq0tSFtr7I3qaK0M222QarhkaMeVAS5I/02t/uPMp
Sr7QyhQhRXTiHNv8WirMWlIsbt6z9OC22Cu3tt41Ik0tUsRG0PqlNnkFsuf5hcPGnOnupSuj
Qr5JjJE+YJkQQZloTHN1lNCaq3sx2pEqWByaExbKipLo+oc+O+pMXsjrtD8tfZ2ayXBh/lHf
eFmAfbLxfd97lXUulVQEmnI7dNJl2b+o2ymd5KgPWCZkUZ9h+0oDdmkSU9wL7K+alKKlOJdW
FEcq/LUoNJvcZGdiyW+OZwzfGbY1er7Ikd9qEpohKpSaKS4200UUWjjVI6fHVElMEnjeX3hH
kR+927HIki+ylyA3aquoQ08pK/uFpBWFcynnyCj8aaWoRir5F27PHrVaIUhWNQlWRm6PT7E+
4t9KWJqUoKm2nAvktLdASlY39dDlmlYj2/JdwbDiX4UV6FMdfeyKEUcGro7JUpXOVSqk9vkO
3wIpQaWmylIRByaMxgE2wOWFbzU99DqryX3kpEhipjjgE8KoSo8kV93U6moYuMDR53x2uGsR
W7yifwSAXnIio/c25cglIXxO9Kb9NGTJLWi7Ypjs5jJsbZvEiZbn0hCbgiMYVXEKStp1xiig
pTalcKeu+nZNwO7P5Ms1mbgRLDZHYUGG9MddSuX3ZChPYDCyhwoCUOIpVB4kfEazaW5YqDgv
yQHcijSXGJ0+ziI5bplvlyO++9HkijqEcEtNpVyoUcU9RXrrU/qQi2Z/Cb8lIyu+Q5AagLR+
322IpCAymOjtMtKDv5UN/UE091TqawZVoY1RkWGv2Zdhns3RFqh3J66Wp+MpgyVF9ASpqRzA
QKcQUqb1TGfINL9CbsXlGHbsft8RK5kGdZ3Ja4KmWokovfdLLg7r8lKnGVorxUpse7rSusqP
B1lFQYseNSbYJL2TojTnkLU7CXBkOKDhBPbLqDwVyPr00u05ZiqjWi85F5CxqPdLxNgSpd0l
XCJboKYSqCA0mL2HnHGnCSfqZKOHD6qmuiCtlMqObX6xXiTPlwbxcpKrtNMp60y2u3HioWsr
KRVxzmpHOiOITrfBiM5Y6zSVi+Q3DHGLbfCUwrYzbJU25RnWENmNyKHVKBeUoOc+PED201la
Ok5OdknRcLiX9yNkEeXNvFsdhW6RaVO9xp8vNLIcWtDZb5ISqihqiQraJJHDvJaRCuqL5MZb
vcpERqFeJbMhQEeIFgtPOw1NyVOL5j3qUQqnu6DS8i8ZglYflaypTdXlTBHlTLhPlJEWOplp
Z/akx4T3AlZSr7pHJPI8gr3nfS4NdpQ+l+Q8NN3tsu3Xxu3WKIlf+prKuKtK7st1oclJQlCk
OGtUq5kb6P8AJhuH8FJRkWLDCkXQLZGVxrWrGW7UWk8VNKcJRPIFEEpiqLXL6u57tEi/gmvI
mWYjcMXkW2zItptkhMVqyRu+4ZkZxJRzV9oWUJjnlyClF0jifWun1agzYrWL4XIsOR2u75jG
iHGYz4RPWmTHko5OIWhjk0y4tZQl7iTRO3rtrLRtvwScGY4vJ4pyqfjs66IhTv2abFTGWwic
pofafduMtoj8A5XgHEniak7a1j7Apj5JyHLwuLMbN2Njfvz0S3xb662hh6GXnrrxdLKUJDHc
EE+9bQ9o366yp2Zex5bx4yeVDcejWVarjJTHmVQ2A3HY++7RRuAyVBhjmobq2r9Wt/4KjwM2
7ZiclFgkWa12CaLimJIzcSezxhtPoaU6YrbriOx1cqlqpSR002UTH2Jvh6Iu1QPHb8zH7ALf
bVwbrEu7txvDjijNSI70r7Pg8paUsrDbTfVNVA76y8P7k0/2M/eYi/8Ab63yVR4H3709YXLD
yv3ItpaB7bsb6Ex6mqV9eW2msQxtwRNulNRJsaUthqWhhxK/t3wVNOcDUIdTUVQfhrmRbfIO
Qv3HHbI1ckolXuQyLmbm2wxES1GkIU23CaTHSjuJQpHcK1716euunbHwZWfsSmQXKJafJbyb
Pa4YXLatsKAJcYONxX3mI6VvojboWpRUfqBrUnrrD0maaUyLjO45P8v26I7b0KgR3xDuUcsJ
bRKlRm3UPSPtWhxbDq0g9tPSn46r8FS6hjHx9c7QlDkOZZ7dKhxW37leJsuMZsh6HHCaxY6S
QlhZBIDgpuaq6DWmmrytlKVcjO1R7HKx/M5oikOR/tV2TuBTqo6XplCkLG3JLJCFFX1em+p5
tkXMYJJM3GJGEz5LtjhQkI7dvtD7KXXLiu5htLqpD0pRDf23EK5NlPLcJHx0JIPkj5EbGI2N
YpPkxlyXZUieb2yytTTrrLLqEsNhwgpR7Sd0j8dSrhi3x8C83NgFmtMiNbItsvkxBluxrcp5
UdEF5v8A6ful9TlZJcSSQnYJ676U1D/YtDPP7VZbVmM+DZ/bbWOwGaLLvuVGaccoo7/5ilfh
ptpfQo39SbwDA7Bk2L3WZcbm1Z5sObEZizZSyiP2nOa5DZ2oXVNpq0PVQp66wbSLPkOD+M8c
jS3JEeO6U3CbHis3O7SoLymYiGilLQYadDiiXDyKqb0GtdmYcMxxwgIU6gFCaEobp0BNQD+H
TWb7wSXk1Gd4gs0e4XkpubLjEO3wZcWE3JS5cW3pao6FiQzxB7f66inf/DrVKptG60TcM7zf
G+BK8g23E4SngzJnPxn5jN0jzVJZjJXyC2kMo+3dWUj6yQnfrTU8L5Dqgm/EmKOXiNFU/KjN
u2qVdnbf97CkL7EcpEd5E8ITGSzIUpSarFU8d+ulxGAVTN8vtlnt18diWoSksRkpDn3bsaQs
ukVPB2J+ipFCOmp6BE5ieC49cbPBuF5nzGDeLsLHbGoDTLnGTwQsuyC8pP6f6qRRG/XWFmX4
NOsIcK8TFrI7LYV3Md26sXF6Q8luqGTbXH2yG0k1UHTGr7ulflro0KSgZpxDCHMAbypV+uDU
lx5UEQfsm1IE4R/uOHd7yf0qHjzpX5az1/Jrwc3UrOO2G5328wrNa20u3KavtxWlKCUldCal
Z2HQ9dDSN1ROPeL8sMiOywmFLYktvSP3KPNZchMtxSBJ+4kVCG1Mlaedf8Q66OocwM2/HeTy
MiZxyIiLJucpsyIymZjCorzSUlRWiRy7VBxPUg1FNLrAwVwpIWpKac0kpPr0/wDHVesOATLb
bPG9+uFnYnMPxvu5rL0q0WZSlCZMjRiQ++zRPbCUcVUC1Aq4qporWWLTKoCniCj3VTULHz30
NQwOYPQ036JB2G/rqICikhAPSu22+2oBSkkngK03KiPX56CgACiQUkk03/8AT8tQhigHtFRv
yPWvyGkUICaoO/Xcj1H/AJ6ADGwIKQVU9xPppIM/SaipO1fX+GoQ6FKaenpTr+OohLtCnoOK
TuafDpoBoVtsobhQ+XU+p1ChKh0AJUVH8NRAQVVAoAQaJI61P46SFN+5RqaK+W4qOv8AHQQo
JKK1T7geh2/t0CEQDUbjff4gH4jUTQVVoFa0UPbUbkD5aAQZ3IQVEg0NBSoOps1Mh8SaITv1
9o6bakQTTZURUCvzPoNMlAEnifUgjf5/z0SKACK1ASoq+Owr066hFKCieA3HUj5/PVJlhEkU
NOgNKdAD8a6CQW/EIoBQ9R/cdRuRTnBQSCoA12A2BpqKQAK58kmqabf8d9Uk2BaiVg9aUFCN
xoABR0oQkAVHzFamnWuo1W3VprYNgkEGoO3w39NIuIDUqpCPhsU/E/HWTKQDU8hsQrrU9D89
INBcE0Kajc0Hx+RHwGk2kgweIKEnmfUjff56ilLQOG3z+NduugCd8TJzkXxyZh0OTJukRPJS
orCJCmwo8eSkuBSBv0NNemq/F+DD+TR0XPzkrNnJaLbMdytiOEugwI5dRHJqlxQ7fb+rYOA8
vSuscBay8BwMn85vXK6QoUCTJkB7u3a3ptbLjaX6ABxxntcErokdOpFdD1JqMfAm3SvOt2iP
XmA1PfZsU5dxW6Wgh0TVng7+nxSp8oAopsg8U+lNXBl2HuGjzJNsd2kWSxwJdvvDri5v3caK
lyYupWrtIc4FxCFVKQBxCtk76bPGRbX3KWbjnjeJu2hbU0YkZBfWn7VX2vf5bnv9vb3+nOld
VnJduGMG2r1anI1z+1kwzGdSpmU4y4hCXWyFJopaeNQdZRkuMvyxkse5w70qyQIF1Mj7xyeY
Tzbsl7iU7qcVSigupDdK6UuBxor9gyy643kjl9ZYbM95D6FNyW1pb4SgUqHH2n8xppgbpLTC
xHL5GNS5ykxGZsa6R1QrjDkFaA6ws8gkLQULQraux0HNE7dvLNwm2SbY2bfDgWl+HHtrLCFu
LLDUZ4vp4uOKUpalKVvy9NSOjK3j2Rt2aaqcYUC6BxtccMXNoSGBy/MEkj3pptrS0ZXydYeM
5JcLbJvNvtcmRbIyiH5rDRU00RuoVHXjXfjXj66BaLVjt5/0rb75j17xO4SJt3abjXApfcju
JbV+q02G0tOcVmvIGvuHpqQdsQNrPmWOW/CbtjT1nlOru623JE1MtKOK46yWOLRbNOO3IV3+
WmeRccEzdPLlimv3e5NWB5u93y3Js9xeMsKjmMkJBUhHALS5xRQUNPXQngyR+U+SWMjsjlom
2l6NCjusOY0UPLSqMy0hLTiX1Lr92VIT7VK6H5alhi0rAybJsCyGNj8FAutsasscQFS30x5I
+3FVdztNlClO8zTYhNNMlbyS+C3TxrjEy6yv9RTZiZ1tkw+yIDkJyrvGnaeDjqUr9uxIpXWQ
4Da8j4oLKMWS3dEWb9sNuFzoyufyMr7kL7CVBs/4dl11SQxwzLsQsPkJq/vy7u7brcwWoqpC
W3ZLilNKZUlwBYDbY5VQmppSmmcAitMtYP8Ad3Yz5tw+34qVZ3WGGkrcdNTSShalBCa0+lWl
NhOMlkjZnYpOMY7Fev11sEzH2HmXo1tZ7ol9xZWlxC+6hpNa0/USdXJTJXW7/bhgr9k79yTM
VLEgQw63+2FuoPJTQHPu7dRtXU5kXlFgfu+MP+MLXZ3b++5doFw/cTFTGf5htaAgxWH1EoQp
sDklf06pG1ZhvgnU5xi9ktDjVqvpemyLhElxrgi3ralNIZUorcuqlFCJznv9D13GhC8lczib
h0ixJMB62zMnVNU6Z1ngu29j7NTfuRIQ6QFuKePLYbfHVJlJTgK9ZVaFX3Fo1nmvMWDHmYva
loa5uIklXdlPlhwBDjoX8qKAHXTGCnImw3K3yfLDN8lXlpMFm4tz3brNb+0LyG1pUr9FpKgh
agDROwPy1PQpckxii7FE8wXO5TrzbDZHnJr65DjpWy+3MDhbbRyRUuIUtJWNuPoTpvacmKqJ
RJ4bcoEDHLRZXb7ZEi23p92+sSVNuoetqwjn9s442oKCwk7JopXp01mdmlkZwpfjkyYpC7Ym
492crGVPNqEGNFUpZYRe0k+9f/4LYlPt5baWzPVb5IW1W9a/DV2jKnw0vP3KLPjwXJjSX+3H
SpLqgypXILJpRPVQ0zkYcEdA8c5TEmQ599sr7GPNvsu3SSVN9tMTmkvKPbWpYTwJ6CuhuUFa
5NRS1iv2r9syBNji2OVkjDkKPb3Wkl+3NodEV2QGVq6q4J5qoT+emhKNeDWH9SAutvw1F9ss
C7WiPalXJMyI9J5wqtl5vjEkdiC660nsv0o4oitTXprRJENGx+BGz/ErDbIrEufa/tnchlod
T23ZBcS8+SvmlNIyDxSpKtyK9RTWbPBVSkejDgi75otNhbv2RxrkhUC0yXe4VW6S66pUoJQ6
jnyAT7uW1da/xBypha5HeJ4Nicq2yZMqyOzLiLo7EnW2Ipc/7OMlKFcUrZeZ4V5KCXFFQ2p6
am8nRRBl9ws0kXGd+122a7bWpLzcRztOPkNIWUoSpxpKkKWkbKKT11GEsYNFlePLEzao1bHK
YZkY4q7v5MqQ722rg2wXOx2qFtPNYAKF77+3Wa5RpKCEznFLXZ4stiBjk/7eCxEdbyvvLVFk
F5ptbinELSGClS3ClPZVyFPx1pZG0TIxzaxSLfhuFTjCkR0SYcv7kvNKSkPCSaVUUJNXE+5K
VE7dNtCUr7mbVi2CJ8fWO0X69yol1W+3Di22bcHBEUlLx+0bDgCO4Cn+emNA3+LZOYrivj7J
JcqSy7Ot9ut1uXNnwZLqVKQ59whlpKJSGnFKQtDnNX6VQRT56JnRqtYRN27xLhM+5PMMXGe8
w9IgRILzYSgMuzWHnlhwvNtl9CPt6JWlKa1+WhvMBZtLJHPeMsNiotEmXJujkbJ5USFZ2o4j
h6MqWyh3lKUscXeBd/IB00uHrgUxj/2kaVcGh+5OOWuG/dYmS3BDYSIjlpKipTaD9QcaCVJC
zuSQOmhrHyU4OUnxbYI+LNTpV/aj3p61NXhuGtbAQoPIK0RksBX3XJSRs5ThX5aU0yaKZdbF
DjY1ZrmlUszLh3xKYkRlNxUdpfFv7aQfa/yT9fH6TtqTK04I6yJsYucZV576bWhSvuzDCS+U
hJ2b5bbqomvw0P4JFhzSy43Di2Z2Cwq23S4t92dZVShODUdwIMd5T3FHBbwUSWzuBQ+utOyg
m/y2Prni+GWjyBe8eWmfc48N9uJZIUNbSXpTrwRyDkhQUlATzNKJ93TbVIvUoOFguOTfJrOK
xLk6bWttS5EhCm3HWHkRVvOxu4AGnC06jtFYFFaLWCrwxlhGMYjkH28ee9PFzf7r0xyIGERb
ZFaKR93Icf8A8xscitYTQilNyRpbHeSPh4zElY7lF1+7LgsLkNEZLaQEShLkqY5nl7kDinmk
aHuAWiWXhuMyMauN3t9zluPWuMhyTcH2G2re/JWEqMCOoq75ke88dvyk9KaK+CnHgj/9HREW
zFZ8m5NxI+QOS0PyX0K7cVEN5LZdUU1U4Fcq0A26akpTY8wdslxa3W2xW6+2y4SJMSe67Hjx
J0cRZC0tJ5fctJC3AqMpR4JX/iro4JLMDPLMelY1k8y0OSjLehdk/cpBQCXWUOgpBJUCjmE9
fSul6RRmSVxLAMmyO13C72XlIlQpcWM5Fa5F4/eEp7wUk14NqFXD6Ak9BogYhFif8PTbSzMe
l3txltiXKtjzkG3TZaFCMhKnVumPXttq5igX1prcZM1M3El+O0621IWI7mzyEKPB0IPJPIDY
02KQeh0SDWS4yfHGYRFz7U4omHbozN3Wsh5MN37gNIb7SlDgXqSAk/gRpjXyVVlrwOrh42vj
d4tuJm9wpk8zHYMaEBLbQwtRKn1gutJQUckVUUVqaaoxJtM5yPHmQ3J6FS7wp8EMPx2rmXnB
HjotzIfcYX3G0uJ7bagQAgjWXVwDzkrWTR5abm45Lu0a8SJCErcmw3lPIJSkISlalJQQpKUj
206arKEjnV5Y4s0DMbhbnrfZ4cufAEhD0iPFaU639yyklsr4ggLSlSqb6q5OnyT1uzDyvJiz
norcy4xmX3X7g4uAiUlt5yndLiltLKFHj7unTTDmA7KJKX9jPdt7lwTEeVb0udlc0IV2EOK3
CC59IUoemiJZpMtM+Z5LgIfyWahxv93YjsTHylvuhppTaopeZHvYSpTCeClJHKmmqcyuAtZp
nGV5Syd+5M3gtQGLqw87JE2NBZZdW68hbbinSke8KDhJB9d9ZbkZGdi8hXmyRWIbCI0yJFjS
YKYs1kOoXFnKSt9lRJBKOTYUkehr8dad2UkJerx+6TDKbhRLcChKBGt7JYYTxr7g3VXuV6mu
huTLfgmsZ8hXKwwWoQt0G5RYsv8AcbZ962pZizaJHfaKFIqfYn2rqnbpokZY7j+Wr6y007Ig
w5d8itymoV+fQv7iOiapan6IQpLKql9wjkk8eXy1u15GSvLySV/pRnFgy2ILM9Vybf37hcUw
GOB/5eKeXxro7y2/I7+weJ5LJxvJIV6ix0SXoRUttlwqShZcQpvcp32C9ZFODviGUDHXZ3eh
i4QLnCcttwhKWpkrjvFJV23E17a6tg1A1q15v2MqsCmcnsEPJUXWLjUJdqaaU2iyT3HJbS1F
JHdecPBa3Kmvp0GsmpnZX1rNOYIBO4PTY77D5am8mWkXez+UDbbfbyLcl6/WeHItlluPMhlu
JL5l3vx//cdT3VhCqgb+4Gg1LwMooqEhLYSlPt2HEdaam5YQK5IIr+FSNvlqKRJUihSN+tPn
TrqENBqo70NKBPQ6CAKq3r6jb1I+GhEAAE79E7pJ6fLb5a0IpO6dqhddyOihoMsQptR24kKJ
3+aR89UikGa0JIorcBJ3p89tQBggJKa7HqR8tRASVFGw2Jqiu9NtRpAVw5p47VpWu/4jQTEr
AGyqpG5SPU/AagDIFBQCo/Kf9qaSAhRFdgB0p0OpiGlKU15GoO9TuANZFBlKFbkgKUKfieu2
omEkbFJqoVpyPoPhqJVCHNKgFJoDWlNyK/PSUCihBSmijWpoK7/PpoJAUlVQASkkihT0I0Gg
LIFabI320gxaf8qoNSKVr1/CmhgEHFKVQ1SDsfXUKUhKSmtDTY1JI6U/v1FAfRJB99ehPU1/
4aiAEp9tTXj9SviPj/DUMCRXlVdOhKSfXRACgQSFdeVBv8fjqGZFIABHIHgQQUnroERT2ElN
U+hHqPidII6ciaVSDsAhP0kD/FoQpsTxqPaKK6EHofkdJBIICuXGq1Egk12PwHw1ApDAHA+3
jX8w2O3TWZNwxXbTxpUU/wBttUlGJLN4mzNnHLblcZ1l9Tl7gfZtSWFhssq58uaiSDT40316
KuVBiJZbsU8iRhjN5xvJ5dxctt1jRY0KfCKXXobcVwuoaabWpAKFE/HbWnwFsFrvXmHDsijS
bbNN2tEESoUuFOjpQ5Id+zZDJQ82FthHPjy2UdZiMmVuZGL3lyx3C95sZBn223ZU20Ij8c9x
yMuOB7lNJW2B3uO5Srb1robxHg3C4ZO+OvOWL2jGIFqu4kxJlmQGYy2kKcRJQnuFHMoKSE/q
+5BqOhB1XtLky8kLkPkSw3e1wH4uTXezOwbYi3PY5EYKmZTiSfepwuBjtuA+6qOQ/HSnkyxG
Y+Z5dyzN6Rap0l7EpaYjL9qlIAQWmS2t0JZd5JbWVoNF9fnTQnDNdWy3ZD5fxOdMbUu7Jftr
EtM0vmMCGeIcbDfDny478q6f7EljzJp24Y6n+UMLvkZVmu7M23WCI7al2iWwGn5TqbQypltp
9pRQ2gyELJ5IVRBpXkNHZRBJ8kBc/I8G5u5rJlGfBlZY93IsOJIbTFoFH9OclQ5PBKSAnhT1
1O+U/CgG1EE6PJeLC3NXdIlpyJrHBiabP2wIpb48PvvuQa8Cj/2eFeXrTRS2EnwKsk2/InNv
JOL3Gz5c5anZMm4549AfuEOQ1wRbRbylfb7oURJK1JokpCaJ677a3S+V8IG1HxJBXnJoF4zS
yT4WQT4rEKLGjpvVwQlb0VTKTy7SYgSS0mtE/m3qTrnd/ikS3Jbr3mdgX5MyKZbMit6sZyht
t64pultkyoiiwEBplyOAlxbiFI5pWmifQ6Xmq8rBJwiuZldcUzfML9eV3X9liMwmhZmVxVLM
t2KyllLIQ2o/bhzhUciQkbHWu6aVfHJNyiexryDi8KxWCbJuTjD2NWy421/HEtLUqe/PSsNv
IKf0OA7v6hd3HHodtZrGvnYtf4McaRxQhqooAAVflFPzV66LubNowjUfIDtjVFwR2LfrZcHL
bCbg3IwipxSFoeLweU2W2w42lug33Udvnprn1tczJtfyxofwsms0jJs2bh5BEi3O+w2Y9hyh
LCrVFbW3wU97UBa4ncbSW+Q+o7nrolfj9C6pa2U/ype7NecwcnWx0SmRHixnp6EqQJMlhkNv
yfcAohxYqFKFVddau11S5UyZah45Kg59BNBQEVodx8v465ECpJVToaHfrXQTYSCTUAEFO5+G
oUJPGhHwNQDsBTSZD5qCyVCgCfqNTt+B0CGkFNQtJNBXboQfh66hSACSlQrxP5OVdxoKRKEO
JAFNz8aEVr6aAydEqUCCVDfr8dKZpMM+0c0+0evz1GmD2gHahpVSldNQBpCSkK5ABNK1+Py0
AxCwagE7CiiR1ofw0khZWEgUNAKEH8DoFhlQBIFUpPwFdvQnVBoJXoFGm3TQaQrdVSqqttqe
vw1DMiUCtABv0BHQD56iVewEtlNepoTT1/gK6mcmhagT/CnT1P4dNZGrAADXelaioPT5/DUb
eRKAEk8qKBrv8d+u/TUznMCiE1SSpJAPH+eg2t5DIIqKUCvQ0pQfMa0MB8jUkVqAK7ay0PVi
QTxBG3E0/lqTOnWA005DkNhvUdRX01M4/wCAuaK0ptWnz0QUjrx3FukjLIUe3RlybkZLSmmW
0hS/03ASUg+00G5rr2UicmWehc4xi4I/qKt62bQ49DkyIj7LTaODam2koDzwLdNm1bqPx1ir
BNdSffx6U3C8mR3sceyBTl4Ykx7a6pxC5Da0ji7ybo4tINTRPWmmFgwoIe4+HPH37vLhqbcs
0Rm4Qks3Rx/uIdclNhTloabUpPF2pqFmpFRo4EjMDxRLs7yNB/0wtmLHguIgwpDX3bjEhFe2
2iQQqrtDyok16anXEk5gpNnxFhzxxe7y9YJEqZAk9v8AeXJiGGo1OALaoiiHFqHLfY9flpai
DUZKnEjB+SwyFBruuIbKyKhIWoDlT4iupKWT+D0BdPCni9m4Isjb81u5tSIbDzyTIe7okKSl
fc5Mpjs15VSULNNCOZXYvj/xXcs8/wBHRU3WHOi3ByI73XkvJkstNqUt0K4gMlKk7Deo0ZFI
aWTBvGWRXp6Da13SD+1NzX7rBfUhx1xmGQEqafoEIU4TTjQ009cSUklE8R4LMsKMt791i2Ry
2P3E27ky5MCozobWjuFIQeQ6bbaieyqQ8YwS45fEg2mVd59nlxi+6xFih6ew8ASWFAJ4LCPz
LSnjpjAlMmRktzHmEh1KWlrSlLoCVgJVQdwD6V06jQiLn43xqxXu3ZWLoyXJNstap8F1Lqm+
04gkbpSaKCvWulmWDGMbtM7xpl17eQ25d7SYhZdcCv0EOroVMqSoAqX0PJJ6aq1Qwy3Zd4mx
R25y4+PT1QJ8CztXeTaiwpbAZSkd1f3BUVd1deXECmonsquReL/2OzSL47eWXLO+GDjcgNnn
cy+nkoJRyqx209eerAZka5xi1ks9mxW6WoPNOXyAZMxlxzuJDqSlJLew4pJPTUVlJywLF7pk
11mQo91/bEsQnZUmW6pzh2W+IcSQ2amoVv8AhpnAzCJoeIKx0XGPf4a8WchLuJvioz6UhDbo
ZKTF/wA3lzPX4ayCbGmIYJZLv5Dh465dGLvbXULWmZCU4wF0aUsNgqSVMuVHuBBpq0jUtlJk
tJYlvNGn6bi2xvWvFRH8emtGK3Tclsh+OZ0iwwrpLvVstQvCHF2qHOecQ5JQzso1Skto32HM
6ELZCjGnFY2vIPvYKe3IEYWwvj79VafqJapu3v1rrTgy3gn5mGymMGteQC/oeauc39veihTv
Yijjyq8uu/ClVgJ29NZNJuIJqV4un3S2R3rBkki7oemNQWU3BtyLElLVUd6Cta3A803xJKgn
6emlIEnOSFvWCTYjUK7Wi/t3G3on/tCrm8XYQiTmqqWn9QlQZSKlLif5DUmkE5OOW4lk0fOI
+OP3Q3y7yUR/tpq3XAlX3KeTaQ48pRCR+OgUIx/FfIrGQXOLZFri3W0VauUtmYmM00VEAIVK
5JR+ofpFd/4a03CBbLTOwjy3acnmRrHdZkmQIsVU+5uSxDHcfQVJjFx5zi4tG9BXp8NE4HOS
pRM3zvF1yrMp0RpDchxUtuZFjyHkyCffyXIQ4s8uv1UPpohG1EFhkXHz3fMbblstS0WJMRfE
xAzGQ/HUVKWsNNqQpZpWvbSDx+Wj6HFtkRMa8yHBEqdRcU4UpkEDkngYxPt5ICu+GPxHGny1
pHS1gr5fc/ZwXHpdxuqpdjuTi/2+1vMoLSU29YCFKVx4rSo7ca/+quiDNnoaw8+zW5Skx8ft
cONeFBSmHrPbmWpqUhB7nbU2CaduvKg6a1rJJLhDOz27yVjORR40C0z4N+msqSzFMcqVIjlJ
5p7agtDjdB7grp66zZ8grcEsb95qRdbglqNckT4iYrlwhMQ0JDDcVReinsobCUNpXVYoN9+u
tJi7YFw8s8zl9y0NW2XcHoLbbjtrkW1Mn7cFSnWnyyttXBSlOqKV9SDoZVsmskHDz/yFDVdV
MyZJfybkiW6ptRcfeALa+xsOLqQvtnt+6nt+GpltQKi5jmkfFxHTbG37ZbW1w2by/bUvuRGl
EhxhEtaFJaAUsiivprqMuWoK+/kWRrxZiwqW5/p5qSqRFaLQDf3RBClB7jVSuJ3Ry2+Gtmm1
giokeTKfbix21yJDqwhlhpJWtxajRKEpAJKifhrNmZTLPnllzi2xbK3lVmXZ2YsX9vgu8UcX
EIUpdHHELcT3RzqQabemmMYGYYqfEzq/ZdFR+xOm/oYiFm1sMqC+zHaT2nVJKjxC20pUpRUB
v6axZp4N/Icp/M4fk9Nxm2FTGUOT0zRZC26kLdWatoQkKLikr6Dis/jqf7BU44Q3m37/AHC6
49YJN2kpblszo7Dbygz96lbagVNlKuSeRoK1231ttcmUmvoRWN3C6Wu336LEgqfTMtyoVwdA
cT9uyHUEuqCaAe9IQQ4OP8dHbIyoHkNzKY2A3aOmzS3MfucmK+/eQ2+IyFQyaJKkjsqSVLoS
o7Hpoqm9E8DKRfZisMiWJURf2Me5PTm51V9tbrrKW1MgH9OqEjl7Ty330pC9pHbKLjfXsexy
03K2SrdEtUeQ1DckoeQ3IEh7vLdbQ6lCEkcgDwrUbnUlgy7KRvl+QP3yXbnn4rkX7S2Q4CEP
LUsqRGQUpeSpQTRDnVIGw9CdHBcsf4VnlyxW3X5q1B5m43VqO1GuTBKVRFMP90rOykq5iqN9
PU2kaCjzg3OiymZsG+R3X02/uz7NKLEkrgxyyrur7SvY8VlfH8NBYMburzDl0kvREvIjuOLW
0iSoLkBJNSHlBKQV/wCI0FdLgxBpFk82zoDOPwAJSLLabTItVxtjbvFmU/JS6lD5R9PtLqTR
Xw1JKIHtnJ3ybyhYLxabeyuTfG3oTduS3aOUU2vuwEoQogkd6iuClD5nUnOBaSySDv8AUCzc
r2J93alPxbdfm7xaGkLQFtQ9w5FfAKUOcR7mSfpVXemh+ASKV5Jy2yZLKjTIl4ut3ls9xpSr
mzDZDbKlckoR9od1cuvMaeBiGcfFWet4VmsO+PvPotrfNE9mMeK30KbUEtmpCVJ7nFVFGm2s
sk2TWIeQba5j93tt/vdztk653Ri7rv8AASX3ni0hTamHaOMqorlXlyp8taby4BMkcw83SJQj
ysUlTbRMTeZ92fYBCW1B/tiNzCTwdNEK5pUCBX11PBJmUyJDj77jrquLsha1uLAp73CVKNAP
bVRPTU8kb2/5fxB2LdnzeZ82Fc4lsYi4cuMRGhrhKZL4DilFj9QMqpwSAa+7fUlEGm5Y5/7u
4e1fW50zJpN7jv3xd2gJdiPJ/aYojSEpjI7pIqXHm0/pbe2vQal/0FfI4heYsJlNtInXZ43R
hmD9hJdZWUtSxBW3LW66hJdQh2Qr9VTfuVsRrbZm1VwRM/NcNn+XMSvrV8jRItqht/vdzW1J
4SXEOud1gdxK3l1bWEpU4Pp+GsWc1+5VUEVAzi04rZ/sLHeYq50jK1SZU6PGCgqzvtpKylT7
dUI5exSaVND6a1dptv4FLSK/d7/jLTPkKNbLk1Eg3ScV2a3NwUuolRw8pSO1IUOUQISa0A30
9l2n4MqEoY/jT2pHgaZZpGQW9Mlq4M3C3WRbvGSlhlK+6gJDY5OuLUFJBUaj1HTXOjy/keEU
XCHbG3l9mdvwSqxomMm5ocSVAsJWCsLSN6U6gaDVdm+tZHhyxbIWT3XHro2L5MmMMW7tJihn
7J1Nv+77bSWqpWUp5qQQnYKrvrc4f+uRUwVy9yPH96yNFglsWe3y7zaHYy73DfZkNx56XQ5E
U68wyzHaoElClITXgdzXW1dJLnJhUE4KcVe8yursS4ETGrPbl28TpTjcRUlSYq2PvEKUUfrP
P9VI34b+uudsQaWckdj9mtkLB7c3AgYxc7sw/PZy1+8PsqLSWlARuw73EKDZRyIXH5VPz11b
bu3y2DSSU5Jh60+MmfFbMhqzxJqlWZLz14+4iNOoupUQU83HBNLiV0AaDXBSdYT/ACkrMy7J
WLc1iWLqYbtYlvNSTMegrcVOUeYCRcG1ji2R/wC3x9NFYVX5krbHXiVqwqzBH7oIxWIsr9pb
ncTHNyDf/RBzn+nQu0pz9teusViVOjSU62aYGu5IcW01bn/MyLK2pbATEVW4mYQ6rt//AAlS
kwaVHX5ctdUk/wCWi1r4/wDkftW/GVXC4vYVEtE69G4wWskacDDsZm3Ljg3VbSJBDTbH3XIO
Ka6floNDzvcGorEcCLPY8EWI6bZEt0jx9Jk3kZldHeC1xmGnXP2wJfcIfYHa4KZ7e6ifXVaJ
lfyx/wDJjqtMyli02X/QdruMmE009Ju4jSLwJ575ij6m1W8D2pSPd3vT+Or2JdrRoa1ypNal
4fg79xRb8mtkCz2Fm+RYWIyWFJiqn2xbZU84t9C1KlIVRtRdUdlKPuFaaKrDjUFDnOypZTjs
d97Ev3fHY9nye7XOREl4/B4W5T1vZWlMdRQpSkMqWSpCX1U5AV3663aqi3haBJJnDx1itnke
bF4/LxpU61NSnGV25x9UluElKTVyQ60FIfCDtuQmpG+uPshQhSwSviW04pJkuWO944pVyF1c
RdJUu2Oy2m4W/wCiXS4z+3lCQVF1Q6b+mt+1Rd/sGIUGYXWxyUu3O42+E8vGYtwehMXQVcYT
R0hlBf6KUWykivXr66PbX8nA1fk0zPcAxi249lTUa1JhNYy3bFWLIgtfcu7k5KS8l1aiWH00
UVpDIHDj8K6166ptfK/Qb1xJi1RyShISa9aA+nxrrizCEpUoOcQKgbH8Pw0EGCOYNSmp+qtd
/lpKQ1FXM8U0FevwPpXQLYYQkVTyG/1A/wB2ooENrqfanpsaem/SmmCDSUpJSo0619N/loKQ
gmqQQaJI6H6qHSUCgVIUaqBNKAHp8tApgJJ9pHGg3PXpvtXQLYG1cjxSmnz/AB0mewe6enuN
egoBrIoFUpNTsfWnXb031HRoCl0OwUanYU2p8dIB8VFXFQ9w6dAD+Hw0gxKQEGgr8vU76yzE
nVFR7RTY7Dpv10GxIAqSSQaevw+WllMB8QADxoBsk/M776yaTFHmrag6b1220iFyUaV2qKfM
D+G2oe0h7J3qQaUG+/8AD4V0GWxJCOHMDY0JA9NAoNCOANEAE7pH4fHSLDpxBLg3P5Rvv8xo
aMigmixQUr+bp/sdDNJAUAk1P4bj/eNSZQHyqfaaJGywrf8AjqOia0wilI3IqnfgPQ/PWhbQ
E79QUk7g06J1lmGpBT2/xr6V/nrISc8NQybyht551lla0IU4yAXQlSwklAqkVFa7nXtpRW5O
K3BruRePzZPKkPDBepDzTzkdH7k852HAHwFFKVJ5UrXimuxPXWKVbkEsE2nxUXf9YOtZDIts
nGZzcWJKnzA20W1pCuUl9AJqAaAIHXbRDhEVy4+HfJUV51CUNznkSWWS1FkqfX3ZIBak8AKh
tXL/ADlU/s1SxT5HuKYVlr0rJ7acmdttyx+Mq4SkQ33n2pBQn3DvtuITz6JJIJ/lrVm4Jsqk
TFb9cMenZG04x+2xVkSS9JQh5bgopXBlZ5uE89j6+mhyTs9EK0h40CQSsmiUp6kk0CR89aSk
ZNPkYV56+wiMPPzXmoqmC3bhcUrejKWf0FLaC/0yK7KP06zbZjEjRzxl5ft90bntQpCLs4+G
/uY0tlchD7iSauLQ4S2VA9VneupM07SIj+P/AC9YbxFfg26VHuby1IjSYbrLpDgBLqXHELWh
JAB59w6tmGPp9j89z7i9Emt3SXKmRS06UusuNuRCv3I5IV2uAWd+JrXV2FNckBCxryPZsjbt
0G33CFkS21LYajkofUwB7ltuIVxKPiQqnpqGUVuUzNYlvtzErRLQ4tMltz/M7oV7uVfzVrWu
mTKbL144uXkRm2Xo4k/Dajw2VSbs1IRHLq2EpNSO8haloAFCnpXULZ1xi/Z4jBr/ACLe7Cax
xk87iiRFbWuU5IVuG1FpSVcOQIqoBPpqa0DJfLb15rtUIxbolLsZ+Gy05dIcILCYzoCkxnJg
aTsqg5Ir6/PRMMiq3HOM/lm8xrkXltzwwq8R3YxCWUsABri2U/8ASpptsBy0gTGV5DlyLZjy
cqsFqXZS2H7OyyylnmwkCrQWw4pbaPcCpIpoWBWw8Y8n2yyTn5llwyK1Meiuxqsvy3iQuhJW
04XUqSCNxT+OkoOH/dDLESEQXLXETBVEXCTjghONxiw4vuq/Qr3alYryBpqZSnoLDcsyJWes
zLLYLWLypsx7dA7SojEcIbVyKQVoosoryW4ok6m1BJ4wRDGXoh3K8yFWC1um5pWwphxpTrUV
VSFOReSlFKuW9STqTBIcR/IbX7DAtF2sUC9N2RC0WyTO71WUOUVxWhtSEOb7+7TMBZEOjJpI
x13HAzCDDzwkrkFhJl8qhXFL9eSU7fTTptqbyULRY0ZvFRg0CyKxRLlrizBJbuLj8ktOy0n9
Xn7e2ruJqlTXKgGrIvZJZTmtyaukLKU4rMsV6jOIcgXGW/KXCS2lPHssxnkNMpQpG1E0p1Gh
Mnsj8hyqTKiQLK5i0i1Y9JnG6SrakyS9OlOghZZfeQVJQQfYlCTT56YKrUwcs+zS3XTMIN5e
xt+BLiNsiVa7k44pqQ0wAloFBbaWlFPqO9dSmMCmpOFnzuyNR8lt90swOPZP2VvQ7c6GFRjG
c7jaWlOhxJTWvKor8NQxwWTJfKVhvCXrXkmOXCHbO9AnwWGHkolIXDZ7aO4qQ3Rba00IIGhf
AFau14wvKb7dMhvtynWedcJKnUwIkRMxtDQSlDdXVON1XRPu9vXppZNItR8keP7Q3isq3/uF
1umN22TDjL9kaMpT6VN8ZDaypSTxXy5NE/DRkpKrk+X4feojdwdaujGSN2yPa+yy6yi3ksIC
A6pYHfUhQ6tkddaTC3kO733CZ/j/AB3HGnLlGm2yU4+5NfYaVH5S1D7ggNr5qS2PoSnc+usp
jtodYhOwPBsij5HFyZd7eZbfZTBj2+RGeUX2lN80uPK4J4FQO+/w1GbQhp4y8nvWGa6xe5ki
TAdhS4kZa0/diK7MUhxx7srKS6lam/ejmK6XuRRcYPnKxRLk4ZMqZLjINpREfaiNwk9iFJW/
IQGGlqISUq9qVKJV02GgkjlcPLeJXuy222OXm5WKTBkMTZNxbYdfXI7TryjFV2nEOEBDiSFK
JT6U20w9ml+LTGrPmLDpNyuV0u8J5xy1XZ+94cwUgkOyGS2ph4o9rae+EyOVTvt+K0Y0jtaP
MNrYwe3sm4NM3OJAkxrhFkQpMhciS+pxSnUFt5uIUOFyp7qKjf5ak0Vqz+hl8u/QD46g2Buf
OXNizlyXLasNm3pQpBSl5tQAd7tVeppudLeytVYjgrkSfJhS2pUV5yNKYcDkaQ0ooW2tNCFo
UndJGskoks+VZGy5iNix20utqgqb/c7y2hK+45eHebbinlu/UQ1QDh7RX46BmGPs4y/9wzhU
u3XMot9xg2+23ORCU4025GQ00iQ0pShz4goPLb02rp4Rf7mPkZzCX5lsdxS9HaxrHZLUGzvB
Tgjt22OVBClOLSp5deZUpahUk/DVZcBSXbJF4TlKWshXHuNwLWPxZM28sQVyHYsZ64ICjGUt
TQKzzISmnqNttMZglo54/ly5kTyHOu7zKLlkVsUV8z2i7KdlocU2wgBQUevtr09dTeSlNDux
Z/coWF3iS/clybg1Hj4/a7dIfWGWrc+2tL60Q00Q8pCfaFq+kmpqaak8i8jBV/bieMMdYiOs
i42zIZc9pvmlaxxZaLTi2FA/pladidlUI0bkJ0O86zOfdMAtNtmznLrOvTrl5vMqTJMlbElD
jjLcZhgbREcDz4dVbUonT8gxh5Zuka5Xq0uMqbU21YrUwQy4h1IU0xRQ5IrQpJ9yOo9dUYFk
14kuuI27Ec6OSsmbbZUWCy7b23EtyXwqSof9PyIqppVF7dPXQ0ahQbRJviZkS7vYyw7IkOu2
dYt9muce2PttC3bgvuVStDZIQUfGnw0Iy0/3PLN//ck5Bc/3ZS13MSXfv1LcS6su8zy5uIJS
pXxUnY61fY1rg3GyS8DQcMakxlO39GHTFNzkPMiE2CxJIQ+yUlSnetPcDuNZqsL6heJf0JPy
A5kBtVkiQYdzOMkWJM+UVwlWVcajPMIQEfcpPcNFHn1r6aZBrK8E3Ou+P5Bl8ONH/wDtJu35
e3DmtzxEpADaXez9mGW0Kcjy1I4e9SqFNPnpmFHwNXOTHvN8rJX5VuTcoV3iW9HfMU3qNCjO
Le5fqdowm2wtARx+upGtNhaq3yMvDNvvkmdepdqjMr+ziIMqeYf7lKjNrcA/6GCoKS889Tt+
4cUjckaxGYNLTkhfKE2NIzWa4xZnrCghoLgy2Ux3y4Ee6Q6wgJbaW+feUIHEemttaMVfgt3j
6Tjtv8QZ5dVMShfUCLbzcGVM+1qataENthxCyhCimj/qpNONDrFP5lZ8IkGvGeKjGoUdVomu
y5eLuZEvLkvufaMyUNrcETshHY4gt8fcvnv+Gn15y+WNt54IHyXY/GOOTbxjsGDcY2QWpMZU
ae7IDzUlx1ptx5lxopT2kJS5ySsEqJ20jG4K/wCNcWhZXmMCyzH3GIkjvOvrZALikR2Fvqbb
5bBaw3xB9K1prmxRqb2B4LdPH9hvCLTcrZa4cO93iVAWUfuclMZ2O2hgSi2klvk5ySsoNBt8
9brWZXyFnycLP4Wwu5SWXg7dIsG7R7S5bI5UyqRGduy5CKyFqSA6hsRa0ABNflrUY/X9hyjr
b/BOEyJNvflzrhFteQCIbIXnorLvdlJPNg8kL+4cbpy4pSkU9dZson4M1llIt+I2mJjflBuV
xl3PHEsMwpTjaeCB+4dhbjdTybcWEfMUJGmIsvEGnXBI3jxViUO3Tosa7XB3I7dj7eTOtuNN
CAGFoQpUULB7pWS5VKqU9DqrXU8i/BS79jtttdix6ew5OMi7suOTkyoxZYQG1pFYbpoJCaKN
T+Hx0LKZdckp5Tx7G8fmY6jG/uPsrnY4lxcMtYW8pyQpyq1cfajklI9qdvhrSr+M8mrSNcKx
GBerdfLteLgu2WSwsNP3B9hkSZCvuHQy0ltpSkJNVmqqnprklmDMODTp3gCw3G7PmDdP22yQ
2LbGaeQ2lxUiVJiJkOyVJkus8G1BQUQFFQ/w0Guia6r7lZOMFPa8KyZUq2NIvEWTCkybnFuN
0Z98aD+01WpxSyRzQ41RwEdAdDo1ok/JEQcSt8zxg7kDau5el35i0tpqpHbbdYU4E7+xfcVQ
8jumnz1petS54Q+v8o+STn+IozeRR8Wt2UQ7jlipiLfOtZYkR+w4tPJS0POAofQ1Q8uIB+Gs
NRsEs/AjOvDsvD5lnal3mOuFeJBjJmusuxSwUkBxbzCypwtBKuXNP4amoRIpV7gIt13lQI05
m4NRHVNtTo4IZfCT/mIB6JrptWDLmS4YX49tN3s8G5Xq6PQUXy4mx2VuGyl5f3/FKu7L7hSk
RkcwCEe8+mspPL4QrZKxPDQKGrVJuxayy4onybRFZb5QFM2txxt77p9RC0KdLKy1wSQnbl10
qsb0adeThI8QUhvRId0VIy2FEh3K4W1TXbiiNcFtoYbalV5KeQXkFdU8aE8emtL1NtJ8k1ka
Zh46ZtFqkzbfeFXRFknCxX1p9ksJamrqf+jKirnG5JUklXE1HKlDrKo4j7i5k5M+O5b2TXSD
c71FdhWSCLjfL3FdVPQmMlKRxYIoXnRzSgIqAD6009bNL5BE234xnWJ+RdXsoNoxUsQ3IN8i
NuqlSkXMFURsxWlJU2fYe6FK9tNq7aOs5QpMhbli2cwHMwhSLokfsikpyFszlpEuv0lCFEGS
adQobdPlqdWrL5QPU8HGPi97V42eyBu8N/tf7mzEVYUOqqHXQUofcTUNt/TRPIE0320VTc/Q
bJqPkseeeJsmsOHpnSL69dIFidZZcgvMSWIrH31KLtzzxLUhHM0UpoD46fXVvkzZ8soeLYrP
yK4LiRHW2Wo7Lsy4zXyQzFhsCrz7lKqKUA/SncnYaylwaQeVYlcccmx2HHmZkWfHRNtNwYJD
cuI7Xi+EqotG4oULHIHTasF1Y9x3xfnmQ2390stqclW8KW0l5DjKCtbYqttpLi0qWv8A5Uip
9NZQNAT4zzxeNO5EmyvG0tIU6/IWW0L7LZot3sKWHihJHuUEfHVDmAeNjJvCMsLdleatjhby
MqRYinir7lSSAoN719pUPqppVZTfgUswPMqwK9Y2xHlPyIc6FIcXGM63vB9lMxoVeiOGgIdb
9fyn0OmHBFZ4ppQH3D6jX+PrrBBFPwNKmm//AB1CK+lAoKU9TuK6jU4DWmquBI6VqNEhCAlA
pStSqu/xFNRl1EpUagEV9T8aDatNQoUAqhqKepO3rqGQxyClAdFfyp6U1EmEkUTQJ3+Feh66
pEMe5StjxoD86/HbWQ6hmtN/Yo7Cu9P79R0QABsqp23GxqK/CnXSZaQpCgW6Adeh+P8APWRE
gA1UfasAhPXp66SbQqo7ZJHJdBQAfD11ABJIBKhQke4H4etPx0CEBQmtCD6fAempinAtSNwA
TUnY+gNPTWZZuMg/UpsfpFCoddRp5QATxKdgn1NfX4akYtCAAs7r+rcCo2+Z1EgbVIIAQN/i
T8v46pAINVUaKqinrv8AwOqR+oqhpwSfTkR10Ewu2KdB261471r+OoBeBSFMZIy4bc1dHC4A
i2P8u264VUQk8FII9xFPdr1Vng4s2TJcyyWT5It9zvmGw4+SsFoN291TzSnnVcRHUsh0iqKD
jvx+OqjJVwSlyzbLEtZa5dMFgPWxclg5Yz3nFNokjZtQdS6TyNR9A21mXBpKsLyQLvn3JE3G
TcIcKFBuL7jCnJrKVKdMOLsiESsr/R6lavqJP4alaUDS0dsN8iJiu5FcrVhKZrd3ZW3cEQnp
KI8eK4Pc2EoSsIClAq57H4UGrgkvJUWcwt7FnuFnZsdspcHCpmc+0X50ZO1G2H3DyATTY0r1
0NlJCxnS1JZeSaOsuJcQSPVJrv8AGumrhyBsNy/qGM6Y1KNtkfeh6M7Iji5OiCRGKVL7cZKB
9fD85UPXUUFStfk92L5IdzExVKaemOTXbQ0+Q2pS0qSnkqlCU8qhRTrSeDT0Kw3yQ5Yckm3F
9hyVbpyZSZNtafUENiYaqW2FckdwdKqTuNSfBmcFlR5jskDF1YtZ7LJatbVslQIrkiShbyVy
3A4pa+2lKSlNNgnc6GEFNsGXpjXKJMvjcq9RIbBjR4hmvRShuhCUoeaPNCEn8g2OqcC8/Ur8
x5h6W66y2WUOOKcbjlRWpCVGoTzVuriPU9dSBwXvxll2GY9Cvib07NEm9Q3LdwitMrbQ0v8A
9wFxaSVg/lpTWmyeR1imUYPbMGyTH5ku6B+/cQ2Goza0NIYWS0r/ADAnksU57UHz0NlBa53m
PEHPuZzEm8KkP2X9nTanEJERDi+I+5QruUChT/DU/LRzBMh8o8o47ecfuVmYcnx3nUxUM3Yp
a+6ufYb4LF0ICfaK+0JOkWMcruOCX7HsXstuyBTcixtKiOvzYbrDJDquSnVKSXFJpSgSEknU
2wdckx4ri4zid6mXF/M7Y407b5LKHIvfDyFniQpCXm0JUocdk13OiJBEm15RxT9vRak5FLcu
iLXIiIzF2I8HEvPSA6klNTIHFIoaGmomQeL5ZZIvkuJf71mLlxhQIpafuEuGtkyApCk9hCE8
lcUkg83tzp4gkioftOLTLxdzJyduPEaC5FvkJhyF/drUVKDKUnipFOlVfw21PYZgu1mzKG3h
Ngh2fKImMuQG5CcghyoZkrkrWqqF9oNLTI9tRuoaGhfxopqb/CHj6RZf3VX3DkvvJtCYLfFS
S4FdwzT70bCvAdPp6a00MQ0XB26xZfi6wWx7LLci7Wy5tTW0dwlcaMBwbSlpLQDjjJVyKT8/
cdE/BNZksSsotVsu8JiRfrZdsZensv3ibKmruFwlOIBKHjEKOzFbQ5xPBoe0DRskMb1k9ug3
m2X2JPt0nLlXF9huMLtJm28W2Q2QXXX3iRDcKqCrdOPwOkER2cZPj8yThdt+5gOTrO+69dX5
D795t7SHzVDb0p0h2SABVSQfbpShZKFIxxtvGpc3NXWk2CbflrYXjqJLaY1pWgrpJLDL6ghK
eHxVy9RrIJYyXXLDjd1nS5lnax+95E0q2x3mpjrLkZu2pZPf+37zjaPa5XcHl8RpSHEmVZR4
9uVzye8P4NZZFwxVMpTcGTBT3WPakdxDaq1KUuVFP5baW0jOINDt/jywwbJjbV7xm2RoEi1v
vZRdJrganx30Nr7KkoLiVJ5OAVUlKt9ttYTnJppFWyvGbZFxVqRasbtcjHlWmPIVlS5namCa
tP61D3Sp1aV7dntCvTWkkwtEDbMMUv48P4Zwtqu/GkSjKaaUlw0lLH27hZStaub+1OAr8hqq
9lZS19CL8Z+O35GWtM5bYprNk7El2QqS1Ihtc22SpoF9QRxqobb6XwYVk5JTx/bvGmZ3dYVj
7lukQLdLkuWuNJeealvIWn7dDKCsSFuIbUrmlKhyIqNNk1g3VSpLdB8V+M5V0lIfssxgUtTJ
hy33IymXrg+tpx1pHcceA4pCkodNa/I6zLRnpwNrh4mwpi2QbtbcZmXpy5SIzDlmjznyITLi
nULlhxpKnik9r/3PbXbR2YqvyQzXh7Cplylft9wUm345dpUfLHX3wpX7choyI7zXCiW/alTP
P1XrUkvJ0t3inx3Kw6Dcn03QS7pAfuTcqMiS8WEpWstNlLTSoh7aUgOBxxKjudtZ5KMGaTcc
io8b27Iv26W3LlTHI67uX2fsHEI5fotsV74WOP1EU2PxGtE0nBXbO9bo9zjPXCIqdbWnUuSo
SHO0p5tO5b7gBKK/4hpKpd/IwxdvFrFKj2eHZ8judbg0i2JeTFRanUqSyiQX1LDknuoryRsB
sdSWGLw4HeTQsHsPkmRAYsSLnDchwm7VblvOsxfvZTDRLsg1Dim+a1EpBG+swacNtC5OMYFK
8x2XG2Y6EW4OtQ8ljx1PoiffISvviItw94M1CQCT1r6a1ZtIxXP0GPj+Dgs64P2m4WRNxlfd
SXpVxmzXosWNa4oJcLaI/wCo5I4g0rsdtF1kys1yRtrsWI3CHnkuIp5US0RfucZ76y26oLlh
tsuJFOZ7SgCk6oyaWESlvtvjOdhVynLt0iE5bYaGl3x+cpT715dSVNx2ICUqbWwVJ6kgpTud
9SiclbWNkGnHsZawvHL5OdeRJn3aVEu/260rWmEwG1VaaX7Q5RR4lWx0w4Zf7lPgf5xZcGj4
jEvdoiu2m4Tn1m1wH5v3rsm3I5I+8kJ4JEdXcSEpRy339N9BmHJD+RcdtWP3yNBtji3WHLZC
mLK1hz9eUyHHQkppRPLoPTVVYk1yyW8a+MmMxs2QvJmtQ7nam4yrY5IcS3GUt5xQcQ8pX080
oojcb6pjAOk5NBuPg3x/a2riqbKdnMwZERiOpc+Jbm1qkwxJcWl2S2pKiVfQgb8fjoUmt4MJ
uTMFFxlJgJWmGhxaYyHVJW4G0miQpaQlCiPikUPXWmsl1NRt3g0PwrTdETgm3T7A5e5pU9GT
IRKaacc7TUckOuNEtp9/E+u+spyZdokd5T4mxyzM25oSro5JnuQI7Mjv29UYOTQhaiqKlX3a
AkOGlUbkD0OlPA54JO6+ALLCv1qtbd1nwlT7n+3MuSvs1mW22hxbj8P7VaiksqbHJLyQRy+O
jZNGd+UsZtuPyYcNh+7OSHkl95F0chOp7deKVMrhPPprUEKCqa3VE5GOEYteLsm43CJdmLDA
tKW1XC6S5Dkdlvvr7bKCWQtwlxewoKfHVfwKHuS+Mcyty1Srq+zKLlwj2xuWZBfW67KYEiO6
lSty0tpQ9x3HQjUjn3ScEgfElzj4nk1xlX6BDex2YIU60mQauuoUpNF7Ac+ST2AQef8Ay6wl
k1MlG/fr6Laq1pnyU2t08lwC86I5V1PJrlwO+/TSnBpsZypkiS+p6Q6t95RBU48pS1KIFByU
okmg21SS0LizpUKY1KiPLjyI6u4y+0ooWhwbpUlQ3BHpoFNpyTavIecu3Ji6Kv8AcFXONzMW
auQtS0d0UcCSSdl8QFDoaae3BmDm5neZqmuz1XqaZz62nXpReVzU5HBDK/kW+SuFOldLsx0O
Lb5Sz63ILMPIJrLZQ0ylCHPpbYr2koqDw4AkApoaamSYLb5Qzm2wJ0CFdnWIlycceuDXFpQe
cfqHVOFaFElf4/26JzIJjB7M8kU/JdNyeDsyEm1Sl1r3IKUpSmMdvoCUD56U3Mi2NpGRXiZH
gtSprspi0jhbmH1lxthBIUQ2lWwSSkVTTRItsk8x8i5ZlzENF+mNykwQURAmOxHKEEU4gsoQ
eA9E9B6ak8QTbG+K5lf8ackKtjrYRNbDUyPIZbkMvISeaA406FIPBXuTttqDOifPmbNXZ0t+
e9FuhmLafebuMSPJZD7DfZbfaaWnghwN+3kkdNXZkyMheRsthWW72VidWBf1qcuY4gKKlbud
vjQNh36XOFOSdumtK7nsxTHETyTfGMS/0ozFt6baHUyUviKn7sSE7okd/lUPIHtSumydtZrb
q8APZXmTKn7hFujLFshXdiU3NeucSE21KlPoBSFSHaqUtJCjyAoDp7YyaTgj7n5Cuc26W64x
7dbrS7aXQ/Cat8UNth8KCi45zLpcqUj2qPH5aJlQSwQd7vUy73mXdp6wufPdU/JcSlKAtxZq
ohKQEpr8ANTbZkn8U8lX7HYAgxmIktpmR99b/vWA99lOAoJccVT+pT0VVNaGmhOPoKY8heWM
njWn7RIYcubaH2Yd/db53CMzMUVym2XKhIDpUo8lJJTyVxpXWlbMsZBI8s5K7axGabjM3Jxu
PGl31puk6RHhqSuK04skoAaU2ndKQpQSOR0r2NCrZOeTeTblfoX2xgQ7c1Ili53dEVCkidcQ
KfculZUU+v6aKJqSaayrME2LjeS57eR3C7SbbAejXqN9ld7Myz9rDkRaAdtIaPJpdUhfcQeX
LUrtR8Eh+fMVxkPyI92tMO5WFwRUwLG4XWmIn7fUQg062oOkNBSuXInnU11d/BN/qQkvPLhN
XkUi4woUyfkhS5LuDrCVSI6wa84iq/o1FBtXYDV/bmX4gzxHA9h5zYWfHM7El2NTkyW+iYq6
iWtNJTIKWnOwUFPFCVEFHLfRS7q2/JpvC+A7v5ES/iAxS1xHYcB8sO3V2VLemOPOxqFpMdDl
ERmgqqilAqelaDTRwpWwwztCz6xWvI5ky04+mFYLpbjarrZDIW4pyO8gJkKbkqHNtxZSFJ2I
FPno7JQ1tGk9kPmWXHI5sLsxUwbXaYbVttMTmXFtxGSSjuOGnccUpRK1UA+A02viAczJ2lZs
+rDsdsMZtyFLsMuXObntOlKlrklJbUgJoUKb4Hev4azW0Va8i7ExkGeYnkFmtrtwt1wVk9qt
SbPElNSkNwVhsrUiQ6jj31KPcJWjlxUeu2t1u0oejN3NpQytmeRrU/i021WlDN0xtZefedkO
uomuKUFJ5NE8WQBX/L+OuatCaOlLw/2DzHNrRcrPGsFhgSYVmamybq4qc6h6QqZLFFgFtKEB
pCRRH5j1Otv2Sn5ZiXopq1FQbCfcobkHqRrkID/iJ32H8NRACXEmpXtSqSrYn5aikASFb1IP
UbVroJAUEq6Ecj1oKfPfTAgBG5I3SPp9dRCgDxCgigHr1O/y+WgGJ49a7BW6aCm4+B0khaRU
GnTY7/8ADQzTwBRUSQT7abAddBkJKSmnIHiBv61r6fw1SaFBSVIOxoFUodhv/u1FIfEFRHHc
dRXY7fHUQWyaA1I6U9T+A1EA9eABH+IE71+BI0Cg61UfakGtfiP4HU2ICEgK4r6fkPqPiDok
mAcvzEqIAoP9vXUkKYaU8fdTkD8Kb/PUMiqAKK07qP11NKfgNWTSS2FyHHepBB3NaD+OiB7K
BVQQONPmfSn46jIlJIJ4kqUPh/ZpgJBxJAruKlSidq/hoFJciu43y6GtP8R6fy0QMuQsIYYl
5Kyw9KZhR3HAHZkjkWW0g15KCQVU+GvZVxk89nk3/Nb1jF280WnJLRk8AQQWFSJq+5xYEUAL
Q4Skjk8KhH9usVcbJMsQy/GI8zOm7LmVrts6/SYsmzTlAqZaogpWlRLakIXsfdQ0rXrq8Fk4
XbNPFT1wmqubkO72lcqAn7CIylt9yc0E9+cXf0uUbolQ/MQfjuqFoYga4LNt/wDq3PZkzJbM
GLnBct8WQh5MNh11aR21NskAcGk+1S9969dM/jBN4gz+yTrfBwLI7S9fLbGllwoZifZfdSpQ
HEFcebUcGzT27Gm52roY2y1BSYyGS633qlqqQ6STyCCRy6b9NFdgtnpy9SvGfbjRo8CyOY66
7b02+QZERHEc0ciGEN/c+0V591dPjqjJlplTZyzD7j5RONzLNY1481cz9jcW2WWW22WUKqCt
FEOhxQ3KzSvTV1Q4+43xrIcLyTNl2u72GzMphuTVWiYy2mMw+r6YrDjY4tuJP1BSzvphQEFm
XZMHi2Uz8is9hGXRLRMlTLYgM/a95t1Ii1bZWW+Sk1HtNTrMAZtj83GsmzO3Kax2zWtv7RaL
hEnvrj2119CSe8UpB7Z/woFa+utQoFVwUa6Jabuc1LKWeyH3O2I1THACjs0pfuLf+GvpoRM1
PwVZLhcLfmHZt5lQ3bS6yzybQtKptD20IUvfnxUen/DWmFkDx/jN3k+J89YFtk/cKS020+mv
J5bCz3mOtFdkg10uFAQmX3MsNsV6dW/dcf4RI2Otyo+SofU2DJaSA3GCUntCvL1HI121iBKP
l3jjFLbj95ukGE+bq0zDVIsH3IUux99vkp2QsKKpHPYAeldKIZeUMclRcHwm4GzGE+5EU3dH
243ZJdJSGg8UpFVqTUjlud9WAjJFeJ8MtmUZFJgXiPKWwzCfktxo5Uw6440U8UhZHrWlPjoZ
TJc0+KMRVZEZC5aLnFP7ZJnPYqqQ4ZPdYfDSU91TffHJJqQEUrqQ6RDYHjdle8q2eIbFcIcG
Qyt4W66JS8tsllZDiwtui2Kj29xINdU4BLZQJFivEm6XKPBgSpCoTjqnUMMLUWmkLI5rSEjg
mnTb8NahFVypLixhWIQ8Ux65XZu83Gdk7b64/wC1JbWmP2VcUpLSkqU6STX6hrL8iQDeN2xW
CSL99vdlzGJfYEtDbX7alvmlPFbh9/d3pTpyoNacgi/rx/xNdcXvs+wMMrRZbe3MZ5GaieHU
kc0zFrP26g4QR+lvTpoQjBFqweZh1ru8vEHIc29XL7W0QLRIfckyWGRyfcQmQtaTyV7B0p10
JA0iHze0ePo7UCFEaatuT/fBm42y2ynrm3GhrAr31Oe1UtCjTttEiu2mGCIO44/ZLbnkSzlU
udajIjNPqmsrt0haX1JSpPAgLR9XtVTfQaqpeSXTh2NI82qxFwqVY27kmGhh0qWVgoSoNLcC
kr96jTkDUDU1gCSa8fYO9iN5n3KWbVLj5K5aok5tpyWAj6URktc0D6ty4o1AHx1Tx4JKVvIz
b8O5CmdOiRL20r9mluRsicQVtIt8dLReRLWCpJcQ40knijcHbTI5jJEW/H7Xd8My68vXCTOl
4r9t+0vlf/TuR5D5bFG3ApaAv6wAR10xLgE8FEcVV4qIHX6vSuhIG8j3HLfdLtfYFqtiu3cZ
j7bMJfIthLi1UCise5PHrUa3rJK2YNdPim55BiqI0HLHr/d3L2u3uqXImJhMtRmVOvJdjPgr
U6hSCpNOoIprCs8ilgq958JZbZHVOJmxneEGVdGCA9HkOoglIfbQw6lLocAcCk+hTXfVLGrY
ys3jqe/muM2a9TRHOTMtTj2XlCQ206lSkIWVJVxfUE+zkCN9UwpKcwcLJhmRSWLhcWb3HsFp
jz12dM6fLdYLz6CVfbpLKV14o9ylEBO+2m2zNU4HVh8O+QbnblPW5+M3FmPPRGEiQrhMVHXx
WEllDjSkc9kqcUAdU7GSkm8XyLDkWf76UzBStSJMFLrgZ5pVRwLaCu2TyG+2lGFfhFnvHh/O
rdBddf8As3lxWESn7dGmtvTER3SlKH/thuEKK0pr11lWNNcEVknjfJschrmT1QVNR3EszY8S
YxIkRXFbJbkMtnmhRPt9d9taq5MsVnuPT7KnHWp8924ibZ48xgOkKbYadKqR46gpaVNJ9Om9
dtTbiTeJOVjtGc5pe3X7YXrjdIDTch2W6+lK2mmClttxTrykji1RNN9hrJROSUZwfyz/AKql
S0MvNX+L2ri5eDLZbQTK5Bl9Mxaw0svUVxIUa0Pw0yYrbAmzePPLbc64OWu3yYc9la7bNUtx
mMpbshPJ2OgvLbS6pxCgSEVqCD66HbMi9DSy4V5P/Zbi9aLbKFskpciTmWi2lb6Iq6uthlSg
84Glo93BJoR8tadlJO2Pg6y8H8h2vxym8rZLeIXOQ3KcSHWzVTYCWH3EVKqK7hCKb1ryA0Tk
WkoIW7M5bGxi1JuDLzGNvuPSbItaUhlxxYSH1sqHuVUJHLl/DSmy6y8gtrOS5lPtGNsKTJXG
aVGtjSy2y0ywkqfdK3TxohI5LUpVTqaxIvJJ51j2V/dWufdZcK7tXRtEG1XO2vNvxHRFAYDK
VtJb4qbqkK5J+dToTxoJcwx3asSz1pjLcU7sO3wIq4zWUrmSWWIja2neUUKkufmUsniE9d66
kaTTRY28n8vC+zMSmW63XO6NtomOwp8eLJY7cOJxbfZUr9MgxUe0j6vxOrwFmokqEHx7m+Ux
f9QQoURMa5uvKgR0vR4hlLQauogxVqSpaW1HjxQNjtqbyENLB2jZLnio3+ow22Y2MwEYy8+W
0JDEaUlxpLK0EhSnFcl1X6HS1x4Mt1f3wWrJ5nktFubyC84pZ2JkEw3pF1EdhV1jJZCDGXKC
HVOtAhtCarQNjTaus1NJ5I+y5h5RYutslx7emZJyC6PX2zMllKw/LUXGZKoyUqqhtfcUHAad
Aa0FdHYUlJD+SbffGHrexPw63YwXSv7Zy1pBRKJoOJdQ9JbWUnokKqK766yH0GVjyK64Y/ec
fu1mjy2Jvbj3qy3JKyjuRl91oqU0tC0KbUfRVDrLXIJyW2P5Qy2awu5TcZhXi2ybrHVaW32H
fto9yix0sx2Gu04kuKSxx9iya9euiYUDGfkS0x5TusHLVuYa9cLbkkt2VcAULb+2uEV1fJ1p
SVpVWO4VJUlVUmm+t4RlPBk3bdWsJCgupokehPT+3WGjeyRbxjI3JT8NNplGXEW2xJYDDhcZ
edPFttxIBKVLOyQeuhokPJnjzOIUqNFl4/cWH5ai3GQuK6FOqSnksISU+4pTuQOg1LUhI3Ti
GV/uSrb+zT/v2kdx2GIrxdCCdlqbCeYSfQ00vCDtJHTrVcrdKVFnRnokpFCqLJaW06AvcKKH
AlQr6baDUyCJb5szumHFflKjpK3xHbW9wb9VOBAUUj/mO2pkhtQ7cRud6f311QQa2VhsOKQr
tulSULIUEqUn6qK6Ej1GtJgwOtqSlPNKklQCwVApqk7ck16p+Y0bJOQi25zNUqqNikggio2q
DvvoFh7qIV0B22/tFNQoJXFsJSSAo9EGn9ldBMP28ggn3/4a7/PbUAYBqDWiRUfHb56ikDfu
SXAahFeQBqR8jqZoBUFge8FPxSajb0OoAyEJqFKoRur4fID56iEqCgCkqHEEVB+eoGKCiCQV
VAHXrt/H4aSDSKAE7pGyPx0ECtSRUg1/8dIpigFV+qtP7BoIBIUAlexNamuqCFFRWsAqqsbc
UgU2+J0EEAaioo4BVNN9RMNaN0kDapKt9tQAIJA91CoVqfh86aiBwok70TTcA7k/D5ahF0HF
Kiap9T+Pw0EFxJQK0IG+/WvXVJB+0Cp2BFB6+mqTQn31oTTYEGnXfSAZryNSEqFCKUBroIHM
hVFAgDeo/u0GlWQCm4VVSq8RX0r8NQM6JPt32/3EjQQOKlIO5O1R8B/LSKEchWppQCm9d6ah
kU0AUpIJKgKlPQ09BqAL9QpJBH/MDv8AhsNRKrYrfiomitgDUVNK+lNgdZNtA4q7lV1Pryr/
AC/HSZiAJJKOPEk1+jfevroZQGKcCCBwHX1Pz30IGwwlJ/A78fkPXVJpJhH3JV7iOPQ9aj46
SYaeJqACf8W1afjXQSYaTuQrjuK8j8RoFL5DCqJon3AH47fx1HTqoCHbUOQTuBxJ9PnpOWwA
kCiv4f7q6yaSAA3wqBX/AAqPr/HWkacABUAFKA36emsyQoFAQK/E8U/h+PXUjbAWwFApTT2/
2fw0ycwwkcvbU8vQ+o611iTaUifdSvE9acaGldRdcDPHmkG7FtzYE04/Ll0FPjr3+lZRxhSb
f/UDFtcDLotktkCHb4MOIy6hMRhLbq1Opqe6sGrnTauuM5MVRmBKwlQpureg+H8fhqg62UF+
8K4ph+V5eiz5CqS6qQ04uIxGUlltZbSVqU85UL2A9qUDc9dadcSc+CS8WY7hd/v7mPT8dMzs
LkPTL6Z0hlLEVhRB5Ntjh7aBIJO566yGdh2fDfHN5sGcXOEuaX7IFv2uMpYSy3HKillS11Jd
UvjU+gHx0tYRSZpHQlx5CUceTqktkb0qs0BNNKSmCxJtT3g/ExOuWOMS5beQWVmFLuVyoh1m
S3LVRxtmNspBTtxJUa+uiOQch5X4lwGzWe7XFP3ikWCSwxLZYmsSXJDTiqHuoDYENfw5V+Gj
IVcORm74mx6+WKxO4zDnQrxkC3XGIVxkoebRAjmjkl7ghK0iqhxSmpV6aUhVh3J8G4rb7vbY
cq6zExJdvcmo7yGoTsqU2qgjsKkBDbBKfdRyqh/PVICLx4Be/doIgS3WselRmpVwkyQ3Idh9
1fANI+25JkLUfo4inqTTWUhOUbxLijV3zTHpqpz8rHoSp8CeXEtIWgNgpSpoJ9/vJqqtNaMq
sIydmQ8y0Q26tkOD9QIWpPy24katnRsv9n8S5tKtcCTEusRlV0YXLtlrM1xEmQ0lPJfFpI4E
hP1VP8dMszobz/HGYQbUVz7rBjuqY+/TYXrhSWWQKpcDCv0yqg2HKuoy2ObP48VPsmNXNy6y
ArJ7qm3yVIWlxCEV9pcSTzU6CnovYaZNNZgkr5gvlCz3g/td5Xc48C4CNFfFxS45HdUSGFyG
nFlthRT8emgESbrH9RiHmWmruqe1MDzLchqRHejoon9XuuFKO2Uo6K/lokHLIaNhHlRc6beE
3hxy6Q4Bkw58Wauc9JbSvtfbx3WVqNeR4kK+WmSTcFXuF4zzHMjm/c3WXFvxAbuUhqV3HSaA
9tx5ClhXHb219upGkTmKWnyBe2pN8ayP9sF4dTavv5r7iXLhKUn2xfYFKV7RTmqgGhhb4JHH
8R8v221rtMS8mwGU4+zb7E7J7bs5xhJ+4EYJ5BIFKV5Jr6aZAhm/HWdrxrtKkIaDjKri1i5f
UJTsZtXFUsRv8vY/E8j11N5k1I4v+P8AlcYW1FudxQ9bbQ03JkY2h9v7qFGVUsvyWUJSePX6
lKKRvQaU1Jm2WRURPlOA5jAZZntKbUteItrQmhLwC3DHBHuC0qqeXofhobRJOSfyy1+Y7zNt
MOYqPdVGWfsZNqEVSWZrQJdQ8/HQ321tfUvmaCla6pMvaIW5s+SrNl0C9SFpu98nUNpuaO3c
2ZZSO2EsKAUhxTfTiBVPX56V8m0O7Xkvlm8eR0yYkFt7MmmzHWw9CZb7SUqCi46hSUhtaSR+
orem2jEApJaz33zLMVeDFj2oRjcOF0TPZhMQTcWwEdtvulKFO+wK9u9fceuhsloqlz8geSrV
eRHnynI10tdwenPsLSElct4AL+4CCEvI4e1KTsE7DbSFWSmOXbyY1hl9ft2Ow5uPXJbrl1mO
wm1OOcjyUW0ckKcbYUqqeCCls6m8yUYgr8fJcUYs7bLmAxZCkMhhV3VKnoLjqU0U4SD2u5X3
U/u0b2DZ0bzvF4pZk2HDI1ov0ZSHrfd2p0t9TT6FA8wy5ybXUbcVbb621iRxwSh86ZTEdjtw
LTbrQyJblxejR4q20SHXkKaf581KKu6HCCU9NqaytFKmCGPk26Rsitl4g2iDazaO4hFtYQ6t
t9EgEPNvl5S3VBxO1BsB008FGZR3svkG+SvJLmWt2Nm9Xiin41tAdSiKI7YCVNJa91GGkcfd
sB11bRJxk5M+V4BizoE3FrfNsdwnfvEe1OPSUNsTFN9txxt1Cu4tDnUpXt8NUDI7tPnGRb7G
izyLRGfbivSXrciNLl29pgyV9xbS2IziUvNJV9KVnYbV1aYPKI20M+PpGPSLhcrFfpUmEUou
1xhS4zMND8lZDI4OIKkBZ2AOrnYMseUebMemz7lKxyw/b3W5QotvkXeTIUpa48YNrUlURJ7X
Pm0EhfL6fTTEGk5bKnmebYhkj79yjY2Ldf7lIEm6XFU12QCeryYzNEpZDp68uVB01Q4M2SJH
LM08fZVJxuPJtVxslrscRNtffYktzH1Q2kqLSUJdS233O4r3LUdxoWoF7Oa8m8e4/Zr5AxRd
1uDmQwF26WbmmMy3HSpaVpcR2CvmfaQUmmqqzkLVcQhfjryv/pq33C2XH796DO+3LMq3vMpk
sfaFXBtH3SHmuyQ4fbx29NTQ7JzGfK8GXebu7c7PeL8i6yDMTamnmrg040y0lCW5TUhpdChL
YJeZ4qA+SU6bOUDUI6WDzjEhYqzGbg3OJJs6ZiYKLWqMqKG5jqnG/uJD7TsprtFzhyQocvxO
p1yMYRQHMptEjxzGxqWxJ/drbPenWyU0psRnEzAkPokJV7wpIR7CjrXfTGWHj4Gt5vtouGM2
S1RoD0efa0uouEx2St5uSXDVBaYUeDHH14ddZjAtTlFfIWEBKFHluEqSSmg/xVG+pM0kW3Ks
xZul+sz8NTzVms7MFmJCUhKEsKZCDKU202e3+o6FK5dVV92mMQEwzvLy+yTfLEjJ56XJNglX
X76TGdZQtTsdKgUhyNy7a1AD6CaarZ0CUbFY55BcjZ1cMlvEyRLelxZ7IkFAddJkMraYT2yU
pCQFJTt9I6DVEv4Cq/GOYI/B8rTj6bnJLqmrq1bHGMckdsPGLMdWkOONKJoypTfMdyh/nq+p
puFAduyaIz4/yCwOOqE27XCDKQ2Wy4FtxitTilO19p5KB3BrqXLM+IOrmXJRgL1tYkLF4us9
Zv8AIKFKdkQWmkCM0uSsqKkBwH2CnpWuquGaspwDIsxW/j+HM2uWtq42O3yY762UrZcadfkL
XxS5X38kK3Umg9NSXkEsi8nyiDIkY7bLOtCccsrENbcdDRabRNWEqnPOBVVrWpYoVV6D27aX
HUk3JHeRb5Dv2d3+9Q3kyIc6a68w+hKkJW1slKghfuTsn109pgq4Rovi/wArWTFsJttrWzHm
T3MhEiUiS2pYiRQ02ETmiNi42QriNYaNOHBLZhk+HZBjBjQr1YG3UOXf33hmV+4IEie7IaXC
W0OCe42QRzHU76eTF68mGQnkplx31HglLzRUD/hCwVfyGo3V8npS1eXcCdy3JnCq321iRfbS
/GucdLyXLhGjvlTr76lKUkhkCtaJ69NPWP0CSETkVosueDIJVysDkAwbwlmPZ7hMlJcfdaKm
TJDyiplT1eCS2RvUaGpJErYM9sFxYdEOZbYjD+OtsQLLPuD8QxpK5iVzY8i4Ff3CuRHcaVzr
x2oNWQMM8hJk/wCsJolSo8l08FNrhTXLkwhtSQUtIlulTi+PryOx1WehiDV/FkrGH/HNvtK5
MaElU2Wcpkpuy7RLZiroGnAlviZo7fIpQqqQdqb6K2aK6TgwaWiOiS+1GcKo6XFoiuEcCptK
iEH5EpoTrV9is7PQV8yHEDb5sidcIUjx5OjWlnG8fb4rdZkRnmjcFJhJHcYWlIe7i6jmFDdV
dPrWl8ORbzkkH8hxBN3jqzm4224217IPvMSbHalNRrH2lpa5BpP/AE8fn2U9pz1TXjsTp6p/
p+4qHrYzZnJS7b4eQz4Mry63AuqbdPU7He7cx1xs2lLspA+27vb7pY5n21H07aYT3/GSTUON
kDdEY9cPJ9kjThZZt3bsvDLV3R/swF3hptfe+4fYBQp5KQgFTdQV+pOubX4qdz+xmEpaOniB
MdeNQl21MAgXdavIH3Qj8k4/wojl9zv9vTuf5fu5U9aanEvzwS0SkKLi3+nEm3s24eNFQLm5
kLzvZLwugdd/bkrW5/1aXQns9kJ6ivz1tJTjclHkyu922BH8d4zcU2+EzcZciQJdwZll2Y92
yQlEiJTiwEn6SOu3x1hVlNorVg0zIrX+xXTxhd7hhCUsuQ1sSLNGaADk1x5XYQkLKw7ICaOg
OVCj10KOn3HkjvMOPx7xLw60WmG4vNbgiUm4w5LMODceKnQqGmWzE4Rmz2wso6Hj9WtJLo38
4KM4I/xrjc1i0XgW+yMXfNIV4jW1+1zWUTDHtyqplu/brVwp3khtbu/EeqeujrDc+CUYnTKT
5DgY7Bzu+Q8cUhywx5SkW9Ta+832wASEOEkqSlfIA1O3rputPyZSgrVapoD60A/8tYEPrQk1
A9PQHoSdRAIT8djsdqn+egAxQbVoaE0/3ahDP0gceXqamp+NDokUKoQj2/m60231EBQSakii
wNlD4aSgIhVQVbg7U+B1A0GEpCPjyqKEUof4aCCQjdIqDTrTVBBtq2JrVsmlR6aiQqtCQVHp
sr8enTU8iBAosVOxqK+gI+H/AB1lkGeXxpT1Br/sNUkIO6lUPqCDtSh+I0yR1SFD3GgIqKeh
0G5CKwrisfT60psf46ikBQkqA2KlDYU/tpoCAkpUlSgQSBuQDSvpUagkNK1KJ9h2FCD/ALem
lMkw0jb12G5BoRtpNBULZKle2vqOlR8NAIVxVUKP0p3AJ3JPx0HT5C4moSFGvUfCn8emmTEC
q8Akgmg2FT7h/wCeskCp409dlBO2+gVVsMKqdz1//p9dUHTWAlrSKChpT3H0HzPx1GIkFCof
AAe1O5r+HyOkFRit1oCfgd0jao+GggVWDxSOQptvt+J1SMgSfaCaAVrUelfWmqAkWSRUq9ya
BW2/y30M3XISVj4EDc16Gg/HQaFJA3ofcrcIJ+P+7RYUgcqbbVpQnrT8K6Ex0IAUlVU12+pV
dh6aQhTgCioAkAA/T8BqGMB/qca/KtK+mrqa7OY4GOPpWLor9Sikn2E+qq1GvZ6pk86qbr5x
Te40m226+39F6uiGEOrS1AbiJQ04kcayASp07dD01yayYouWZV1qon2qNAafDSbdi/eH4/kt
+7yo+CuNsPrQDNkvFlKUJ34gKdQ4scv8KB+O2tW/iYawSeOW/wAh3e83CbBym1Q79dS5bJcZ
19DEp8BXEpQwhlSAlRT7VACo31jKUGZHmOwvMuO2HII9pTCbstoW8i8yS3FebdU2n9RDTjyF
Le4D0pQHbrpzCHLiSiIzfJEY49jaJDabNIeMl9gMtBallXOpdCedOQ9DqHElxuV781x8JhXq
fIch2JbrCIs6jTUx/tVXHLhp33W00qjnsfnrWJM8is2vHllWNsKv8Ji3W7JnQtaI8ZmPKnLb
oUmSlsF0q3BSCBX4aORwmRWbv+TYBsP+pW3ba+1DTGsqGill9MZFEhCgwrmF1p9Rr8tSlmbW
hD/Jrp5IYuGPf6zsrdwW1HMez2u4MF8PINE83GEOdxb3TdVFfLVMikSDn/faVkoLcKXClW5l
habfFUiHBixR/ktFKXENIQrgfapXLrqnBlcyT9um+TVSMyuU7Ho7mRy4ambmp+aGCxA7WxYh
1WFoAFQsK3O3x0dp4JGHBTYSDUcPQ9TT8NJpo1FrzQm22qwM22xMKu1ihOQ2rvKK1KbLyeKy
222QkpI/K566ZkLENkHkyPfIrSrvjlvmXmPGEJq8uLfXxQkUSpMULDPcTWoPx9NEiWGzZnc2
cVx2NbcM7keJdELs8szHVl+5pNVAIqCor3qn6RqUhIc7zJItl3lMf6Xh2y4uXFM/IIrzzj/c
lM7U7ava0repIrvvqQnT/wDKKco0wxaEOQmy+qQX570p5aZKClae6sVbHu9voKUppjBiXoiM
Z8mxMfNwXhGOiCp+IWXZP3D05aFBzmH3uSeHFI9vGiR89T+TScqCm5NerTdLmuda7U1akOgG
RFZdU4wZH/uONpUKoSo9EVNPjqSAtGH+Q2rdZWbPPtDt2j2yci9W0RXFNONymwQfuClK+TJ9
dgfnpZEy35tXJVFu94s4m3mxPSJdpkxHFMxEKnElX3aPeVJSo+2ihX11hjEbOSfMrQYYuLlq
dXlUS3OWaPMS5SB2HlcubjNOfdAOw501r6mVaRve/LNunQrtLZtbrWQ5DBatV2kLeDkBDLCe
POO0EhzuEDcKNB89UjJFw/IkC23LGbpa7S6mVYKmQqXOfkIklSAg9tK6pjp3JogU6DoNTRr5
LHbvM+L2JTUCw26U/aHZ0q5XdyY6yHwqa0ppxDAaHDi2lVQpf1H0GprkPggrn5Ax0M4tZbO1
QlaynfYe2m2pWGsDd7DfNNmvLORuWy4N3qQ/Rue2pDz6pDuxCw2pdOfQ8xQj5alYziSeu6v6
jlzmETWLimRIZdZjNRxG4lCgC8AGPYlRFKlVD8Dqk0kioRsV8n4xfoKI1sn2y9yOQt3ZSC4s
lJ5hpSSpJoivLfYddtKclCnBAX9m/ovUtu/F9N4DhVNElXJ7uEV/VNTU0+eiQVS6eIJnkBMu
fGwwQBK7KnZbk1DPc7KdlJbW4CqnxQnb1OllZElguSZ85acnm2dFqZiMNrl3uRJhIV3goH/p
2+CCgig9rdAkDQw4Je/X7zpbLFb3nYrEiI/bg41cIMEOPRIbqQA0ZCUcWipA3A1Ngyoy/Ink
94zoTzazGmw0Nybd9kezHhNghDjUfjxYTRR/U4/OulPBWTgl8lyXMG8Ox7/UOM2r/Sbiq2hl
hIaWtKRyWlLjLjjrAX1UUiqtA53yJxTyvj1huwn2PBIyLh2nGWzGlS3XClY6dtYcqmo921af
DVCHIa/L+eRpTcFdijMQH2HmEY4iA6yw81KIU8QivdUVEVqk0+WqUAyxbL8gPkCCu14jbUXl
tP21qtRYcgtR1UJU7upCisoryW4Tt00qGhyREnOPssrvdyk4zZnJcvnHciutKfjMvBRDrzVV
nmtatyqu/pTU0ZZ2heT3P9MwbDdsag5AxYy4u2uyg8Us933K7zTSg24PkrQ2JAJzOaMal2Jm
DBDEx7vvzkxUfdjmoKS2h3fttjj7UgbJ204GJRbo+cpa8bxLQrC0v4/HnNvG5vvSVMO3BO6y
TTgorFU9qvEDUsuDKwOPJ9zut+gMT7hgM+0TZHaat9ydkzFsNJoAlmNEUlLKOSdgkAalAw5O
qs+zbHLxis26Ym9Ah2WAbVCgOpfZTJ7wCXHm1FI4PrptxBP46p4BJyR+d57ckyMet9xxN6Ba
7CXJce23tyS+/K7pNS+++ltxbQPSg/3U1JKDWJI6w+T7HGi5NButiZVj+ULYdfgW50w0MLjK
5oSyui6IJHu2rXT9ARb8s8xwZZXbMwxGXAiB+Dc7bCTKDL6VRm+LS1qdbIW2sCooK6GhyU+4
Xfxxll4uGRZPNu1vu90kKdXDtrEd+KhHEIbo46tC1Hgkcthv00wzLUFsuPl3xtbE4+5Zo1yu
d2sVmetkKRIW3GZHfQW1JkNAL5KA9wU0flXUkXbMclUzHPMKyGOq4/YXJGUvwGIDlJiEW5P2
6AgOFDae87Wle2s0+OlYCz8DvJchwa84NieO/b3u2RLQ872bxKaZdaWh9XKWQhuheUhWyEoU
Kfm1lYNWcsGMX3xtgU+Te7JfLje5zkN+E1C+xMIp+4QB3e+p1Q/TIrQCvw1pVbMjXxV5YXjy
JcDIZcqVbpEFUOC6lLcoxSpzvLUhiQeDgcXuoKOqwl1hf1E2OPcluvouc2O1JhrjOLTGZUWI
rTiVJDLPbbQlS3faip+avTR1MyCT5ywi6RcbRNdvFtOMyI01PY7axNcZbCVsu1cbIbqmgUa1
+Gro3o1rJFs+drJIh3K6Src6cphSLk9hz1W1JiouteaXHKDj2ORIoDy1P9gSxPI8uHnOxOYM
3Djyn49wFmTZnLamC24pSwgtqUZa3e0Gl/UR2uQ9N9S2NlJlV/yW2TcJx6xsvXJ2Tai59wxJ
fSuAjlyCPtGgOSTvupXz+OmYFtN4KrCmzojv3ECQ5FkpCgmSytTS0hQIPFaSCmo2Py0mNF1y
/MrRMkY7brIUNY1ZWYy24iGe0lM3ZU19QqpTqlqT9ZO/QU0LRr/dLHd8zawXfyvcLnKluKw6
8XKO/dmuLrbcqIyEUDsdCuauJSaJrXTZY+wDjH/JTA8rTcou8sIhKjTokNaY6ghEZTK2YbTb
De6EhPFIFdvzHU1gVMEd47zZqyty5Nwllc+y295eGMyUuPx49zfWnm42yk9pKqcjVY48t9SX
AWcDa0ZRBb8aZVZZUgfud4uECU0wtC1OOpZUpTznIDtihIqFb77ais5S+o/mZ6V+PZiXp5mZ
bkMsx8ilSe69JXa47aTHbC1fpIQXE78fcab7aqrMmrZwcr5nD8ax4GqwXHsXnH7bJYcdj8g7
HdeeUQCVDjyU0o/TUUOpJF2z9jpm2V259eLWS0vpkWOytR5a3Sp51xVxkFKpq33HveSlSaUT
7UjpXRx8sEsy+CJ8tXeDevJGRXWDIRIhzJQXHeQVFC0BtKeSeQSae34aeC6l98WeScVxnx03
BuUWNdJz+RMyBbpClJ+3ZQ0gon0AUKNOIoR66z1lspiC/wCUZlj94tEluxXyz/cKn3V3vTbx
Mta2kyHB2HmhFp36pHL37DanU6knwjMM8tnj3VNufqEqotX+Lfc//VrTRpM9OS/IPjiTmGTl
gwm3Fs2No5CmS4sTQzIjKcQlhf6QDCUHkUeid/XXNJwi8/UcyskCPKtmus+8QI2NsXKc7z/f
13JtwKZeLDphOKUiLQbVR9PLj8NbegSyIxnPLJd022ZEnJQt2x3dpuFcro4i4t3Jx1jkx9+/
7mmltjkwv5K21Wq5LrCMP8wO3Z7LO7dHg6VspMVv9zF5LLW4KDLHxUCrgRtXWpxqCShly8SW
6wSMYjOtQrFOupvKW8i/fFM82LN20/qR0vuNhJJ5+5uqtcxvVNoscDBvFtyv+PXS1C1PYbCX
eUX1ciUhCnj3HP25C23VpdcPb4qRt09daacQS+SnZDeY9x8B2FEO3WpLttnvM3R1lKG5kYkp
7C+Kl8+Un/3VBJB/5da6qWguoaZRfHUN6TmtlZiGEmWuWjtKuSSqEFCqv101HJIpXj6mms2Z
qpo3n+Rf02y1W26xH5IZkvLj5NcnISpck8KLZaZhFXYi/nSldVVpvrVXhmYUlY8bx8bYxfMb
9d7XDvE20sQf2qDPWtLSnX5JbUrg2pC3AlNCoD8DTWUpcC9FyxiwYfkVhuN/xjC4l0u67jHi
rxuZKJaixSwVvyWQXWChC3/aklSuI0YGIRUI1isSbHny3rZbW5FvdCLel65KXIhnmpJRB4gp
nD8vIkdBrcTbQV0Z42htUlvuAIbUod09OKfzf+Gs7I3/AMkWqwRcMyNJhQI2NxEwP+18yP2g
9ILoT94pt1Ki6/VPIu92tD8DrVKqUuIyDU62ee1J93+IVNBXQaYgJJWafxqRoABpy361AJ6j
5ahCNSrcbj+2mqADKveUqoAdgfgdRSGR7aEk0FDXqPw0IRKiqhSeQ40NfjT5aQFoQmnuVsr6
R0Ar89Qh0VuQKmn1D4jUAQQoA7bnYf8AL+OggBDgNeVK/UfSmqSF8aVJNFkDcDanp09NRoFV
DY+2pBBP92oy2HxVzWOn/L/bqICTw9p3p0QPU6CFA1oE7jrX4V0jAfbBNaetAE6gCOy6A8id
uJ+J9dAi6L+ob+ilD+0DQQSEpIA2JFa8a7V3qdJBpClVSDU9dhsdEgGkbGmwPr+Oo0mKIUWx
Xb0CdBJBLqfaQSB6dT+OogK7tOo5HYA/D4baQD9eOyuIJT6CugTpQkAjZPokj+dNAiV/Tsqp
9PiDpRlhgfp9s16UG9NzvvTUbgNKSoDmaAb8vn86ay2ZSCKiohKabA1PT+zUaFEKUCgkE/P4
f8dJAQlIRSnJPpSo/wB2gVUHVR4ncmgFKDrqAUW6VrUn8qR0GiSgBHEpBofgB8/x0GpgCSVK
SDsVChNKmmqSkCUBJoSVDqK9TTSEB0SEggAJ+rcbV+ehkkBJKUkkgE9R1qNZg6NoBoqqttqU
B/lsdIIOg6JHXenrqgpCUKHlQkdOnp+OmAlilElNTWh2A+H4amh7SJNVVKTUAVCv7tRSdByK
a149OSv4bU1SaqAJPEb8aCm9CSD6jWRvnkIEJCiroOidup9dvXWZyHXAak7JKRWu6SRvXVIr
OhPL9On560p69a10yHQZ2V5oZC07MY5NpcCpEVKikOBI3QVDdIPqRr00cHKGehc/vt4n2fGr
5kOEMQLLD7TcBRmud6RGCOSGGwFdxAIAVzKSaaO0MzWQpH9QMR+8S7krFY6xcIabbPbclPHu
MJNUp+niNlEbD166OyNdPkZf99J33DX7FYWYc2LGRbcfW2pyW/FYJq4hKVgh9xzikbjYD8dL
sg8yPMb8gvNeQZ2XNYHNfufaLD0W3l/2PubPOyU9pY5ueiTTjTUmjLKiMhYg3e9sowht164J
UY0G6B+TIgihKlBCgkqqTyJUBSgptobUGlUpauRUmoAqd1dFdfgdKwOTbpPnh6ZY4y/2q6tO
x46IjrkeR2rart7Falpb7hURtwKwPjqdlJhor138oXeX5IbzNq13CHaVOxnJNuR3EpeaiUA7
rhSltQJFQDtpVoFaOds8xy7f5Gfy4MyP2WZLdlrtPcUltQW2WudfoW4hPT0rtrSyiLTb/OWI
4/F/b7Ra7muA4qbJLk9TTbypMwU4pCKJDSanevLWdmTMrFmr7U+0u3t+ZcbTaVK+2gsSnI5S
lYPJLDiSFI5KpWm5G1dIxOhje3/3i9TptqtrzEd1wuiKgvSltJO9FvEKWo/Eq1lDOC2+Gctx
LE7xNu19dmJecjriRmorCXUFLoBWpwqUOKgU+0D+OtTgGSmB5f48sNuyuFLuV0RHvjbkOK0I
yFqSyoHi+rivt95RWfb0ApvqyVi0o80YOyxaZjcm7Ll2i1LtaLOUBMV9am0o7y1BfFKhx+oj
p00upQ9kTePLmLz8cuFhD1wQ4bezGayFIbRPnOo5FUeXsOMaq6fVUpr/ABzINEdkl4wG9ePM
WxSDkBTPtDigZL8R2PFq+rk4pxZr20Ncj0CirSkRKeIbXimJ5gLvLzeyzGUxZDfCOXUuJ5AH
kC4njtT+PpoyU4J6H5hwZiIzZnMjlzpaYc5pnMpMZzkw9MWFN7KJeq2kUJRtt/J6gyp43mGP
teULTd7pnr1zh2qKptd0mRXG0yOQNY0dKAt1SPdyK3etNAp5KtOteBXjNr+/Ky1MS0LLkuDP
ahOqMhx1RWWG21kEca05GgJ6aXJTwXHGc4tMXx7jtus2XMYfcbW5IVfEORFSFTO4SUKS2ElL
5odgpQ320QyaKYxlsFHjS7WQ3yUl+bNU6izJhMJQ8nupX3X5QHJPIDl20q2ICemtNIkuC5/v
dvk+G7RaJmYwGLzDukefFT7nHIsZqiW2ktpQKuMlXOh2+espeCH2SZu7jbQdxTKrPcW3Jcd+
W/KkvTLlOkAgd19KkJYjso68WgAlI2OlLySgstq8oYTDm2q1XifanLg7OfuLkm3vyJduZdXH
UhpTkmRyWlxS1/k2QPhoSZL4KBmczCEZNic9NytqcgS+o5DJaflXm1MsgnslRfUVOHfdAUN/
h10pOCR2xC6YzIuWdPwbjZXcmkuM/wCnLnc47UKCtgLAklth4LbaHEdD7ldfXQ6isKC9ZNes
Uu1wky7DccdeyMSLc1PmXBUdTJtqG6voYVICkbLUrZG/x0NONA4MfyPxlkOUZRe7xgFj+9xN
ctaLc/GUw2yrgAHO0la0K49zlTanw21tWSUEsI1BeJ2KxWCwxr5abBBtZsbysjXKSx+6feJb
IZLauRWpJdru3UlWucSEJlNzy2W+LiyxZLbjP+jxbYy4lzWtv93MtSAXQyUKMhx5Tij7Vp49
a60tlZHLyHiV5keKvHkOHGZeuMMyYz8CI+047zmLCo4QyhxfJboHJwp9fqppryaeyN8c+M7l
a77KnZ9jK2LBGgSXU/ugDcZUhtAUykq5fUSDQaN4BvBLeM1+O85u0iRMxKDCu0K2qW3aYvAR
pshbmymob7rSCtprYhTu/XU6RgzvJc7ZhHiV2+T4z+PQY5MqDDkxJL7ZLBdYcdeLSGHlJjqP
FJKAtVN/joSwMY+BKvGeCKVY5dkxS33du7PxVZK0ZKltWuG6yFKeZQHkloHc+4q1NEVdnx74
neYkZHHS2vH8RkXRnIY33BUueEFRtpDoUKd6oSngKHTHAzifge3Lxt4zj+N0zhY5MharEi4i
/MElKJik8qKlKfSxRC/YWu1yp8TqSyD0ZNlNmtcXAsWnsWyLFmzy991cUTlSH5ITXjzhkUjp
T+Py9dVSayV3GLzEst7Zusq2x7wzEKnBbZZP2618TwLgT1CVHlT1pvrTUjBovk9+2G6YlapU
KP8AviERpl3uMWI1AbfanlK24qGGvqSyjbmd1HWXEYKtZY9fdtlv803/AB20WK2fdXW5sW+0
vXCL349vQoJLpbggcFlyvrQj06nTav6QVXg4WlrCb/5puzceztpsjcG4BNvWzRBlQ46kqfDC
CrtguoK0p6J/HVDQThjbxVc8TfsElq6Y1AlW2wQX7jks6QwqZOmoWeLLMVdUCJxKgOXKn5uu
pV/UOCAtdssb/ivKbm5EBuUe7QGYMogrWxHe5FSEudPdShr11RLHMIsV/n4TM8YyrunFoFqT
MlptuLiGh1U5D0ZKVyn5s1RCHUKTWiOFTX5V1Vqn9gs3pbIu4R8JsULx5dZllFwiy7U/KvMQ
LUyqY8l5aGu45QjiFAV49QKaYxPyPMHbyPBxlFwxOCi1Q7dkEhLEnIGrS28zB+2mLQqO0kPL
WpbqWye4tO3oK+hGJWgcyVjyja7VavI2QWy1MiPbIktTURhPIhCEpT7RzqrqT11uMAnk0Lxh
45wHIMBhXO/zE2uc5kKYbclSj/1TIaQpUJCR7QtfIlKutdc8y4NtZRaM0w3xZitoVIetNvVK
dlXZMZi4KuS3XUQ5BaabYXFJSnin2ku/jXUqyZb/AFPOLTSXJbSFDiFqCSk+gUrp/DXRvwKP
Rb/g3x61lmQwI8gSY0ORZo7Flaef+5h/fPstyC84sBC+6lauFFHjX01zyGcjONhXiW6eSrdi
8KDDXHW/PM5633Kc86lqE2spafQ+22hvkpIKi2VdCBpeiUyOh4t8Umel96Oy2wmwSL04lu4y
1WlaBIbajPCetrv0KVL7iUoPE01NuCVYMT8hf6XRkUhjHI0di2sBKB9pMensPOUCi4h55DSg
KGnHjraWMkreSxYR40sl6slruN4vTltcv90NmsjEeL9zykjjVT6lLb4N8lpHtrrm2/sLXgeu
eEpbeTWWwSbqgzbrGuchx5DRKWDa1uoKU1IK0ulj2nalemlSZrdtZRH5bhGGW7xpjGUW+7Sn
r3e1SEqjvMcGliOpCXkp9xLfZKupr3Ovt1pWcuS5XyUmyW9mbdYsN+Y3b2pDiW3Zr/JTTYJ+
pYQFK/kNT0bNezDw9j73lRGEY7IREZhwe7cJw78x7uIbS6ovtqDaG3FcqgIUGwkipB20OVVM
Gm3PBHHwzFs/k7Fcau1wEy134IfaktoU0tbalKSWFBCnO0pSkU5oUoDrqcxJV2Q2A4dY7/l1
5ss9b8ePHjXF6G9HWglCoRUpHPuA8kcUUNKVOtXUWSBL8ZM9CqoDhSK0ChxHqR0TqdYYxg0i
6+GW4EBvu5TblX1+3M3ZqxKQ6045HfAKUB9f6XeJVRLfVVK6ypZOowzfx7ExpiRFOUwbje7e
6li42Jtt9p9tawCpLC3E9t7hX3caUG+lTyRXMpx2HY5kZiPd4t4RIjNyVuww4lLKl1/RXzA9
6eu3pqWpKcktgmG2u9Qr3fL5Keh2HHWESbkYbaXJTgeX2m22EuFLfIr6qWaDQ9wjXBaLL4ms
ifKoxe6SnpdjNvXdm5cchiQ5GVFMpkKCgtLa6Ciuo+GprCc7AzG5fYOTXXLa24zAdUTGjvrD
jiGzulLjgCAtQ9SANbuknCMVnkbcQASPcB/btrBs0K9eObFZMew+9y72qdb8kXIE9UJnkYiI
/ALDJcKO8tJXxVyAFRttoVW5Joe3zxdhzcfFpVpyNxEfJXHUNm8MJjFqOyeBlHtKdqlSxwSk
7qNPTSk+rbLq5gqOe4r/AKVzG646Jf3ibW/2RLKA33BxSqpTVXH66UrrTrEfQEWnxp4ts+XM
Bt68vR7q66pCIEOKZX2zIG8uc4VtIZYqQAak/Aa5tsYJuV4Nt8PDGchkXSfJQuO8+t222/72
GgtuKbSFPJdQ5RXHlXhsDU6ermGLkZyfBUhlH2ib7E/frf8AYnILe6lbbEJm5LSiO4JZ/TdC
eaS4ABSuxNNKo2/8GozB1zjwNJxuJGns3UGG9chaX3LpHVbUodWCpMhDhU6lyKQg+/b8NVVh
/CkxLnBFSvDkpnyg1gJvcQPPBhQujoLbJ+4bS4lLaSSpTh5gITX3fLU6tVVvJrrJztvja0yI
ucMy7k+3fMSbkvslppP2khqI4GVcuR7qFrWRSnQa10fZLyYa/EoLjbbfGpKeO4I329TrAIt6
/E+dILyzbS4y1CZuRfSpKmnIz6ghksuA8XHFqVQNg8q+mrZ0h6JR7wD5MZkx4xgtPOynTGHZ
ksuBt7gXO0/QjtKKUnrrMt8GYIPJvHGV4y7HRcYyC3ObcMV+G4iW2vsq7bqe4yVp5IUeKh6a
Um1MaBxMHfF/Feb5LCkTbXAT9rGcSw+7KeaipS4oApRR9SDVXIBPxJ1mTUDxjw15HfiuyW7R
2+yt9n7Z99lmQ4uLs8GGVrSp3hSh4V0w/APCEWbxvKuWMW2/fdhlq63tqysthsr490bvKcSf
aUn/ANtSakbg6Wmp/wDUVWWvkkMg8HeQLPeRb0QfvEuzFQYctlbRQ8oAqQXAFqLBWgcwlwjb
UxqsDZXhzyP+5N2wWUmQ40uW28H2DHLKFBDivuQssDiogEFQNdZnEwwOv/ZbyCqItabWpyc1
MRb1W1JCnyt1kvpdQoEtFktpJ7nPjrSq2TUJPyUYx1h1TahwWglNFUPEpND0r/ZqaacMUi2T
vFPkKFaUXWXYZLUBXD9ZHBZ4u0CFcEqLnFRUkV49TrKy4RNHC++NM4sJiuXy0PwWZqwxGeUW
1oLqh7GlKQtQbWR+VdDorLThaM8nDNMNm4jkT1jmPNvSoyGlrdSlbYBebDnEhe9Ug0r0PprV
qtJN8knk7Yn4/wAiy6NcZVmbaWzamVPy1uupbAShJUUJO5UohJpt+JGiuWkjTUKStJUhfBCV
AFynDnsd+lU+tdTrDaYpyXLIPFGX2Kym6XBiOWm+z+5xWn0LlQDKP/TmYyN2g7+Q7/OmmtGy
fggL3jV8sMtMO9wHrbNcbDqI0pPFSm1EgOD5EpP8tHVxPBJMl8G8dX3NJzsa31YjRmnJEq4O
IccYaQ2mtFdpKlKWfyoA5H00MeJGzGA5XOhv3C3WiXOtkVbiHZrDDhQe1uopSoJX03ICaj1p
rVqQ+vISokhWojrryG2UlTiqBCEAqUa+gSNzrFlGzXXOCyp8X585PhQF49OZk3FZTEQ+yptK
1AVPvPtolPuV8BvrOYkcCJnjjOY98lWP9jlvXaEAZMWM2pzik7BwKSCFNq/KobHWurSTfJlN
PQq2+Ms8uEC43CJZZLka01TcKpKFoUn6kJaI5rWgbqSN6b6ocwWiNXiuRi0G8LtUv9p6fuIZ
cVHIrSvcA40r69NHMGscjGJBkyHm4sVhb8pxQS2w0kuOKV6JSlIJNdZGCyQvFuezE3RCLPJZ
l2tlEt63vsOtyXG3FcAWGSirpr6J1pVnBlvEkU1iOTSFyExrPPdXB2mNNRnlqaPUpdSElSFe
u40BPIg43fRbFXBdrl/t7dCZpjvdgA7AqdCeG/46m8wbSTQ5OG5YmKZn7LOTDHb5Syw4lv8A
WPFoBRAqHFbJI9dFXOjPJwcxjI2pyLe9apqLgoc24f2zodUgdVIa48yn5gaE5U8G2hxk2J3z
GZMWJeGUsyZcdqcyhC+agw9XhzGxQr27pPTXRetunbgwpbaXB2iYDmMrH3Mki2iQ5Zmge9MS
kGgT9S0or3FoTX3KQkgeuuaU4RP8Xkg6cfakck05FQoRrEQ8ndZEGqqEVBA9wHw/HSijIP0/
q4fmrSu+qTRHwC2L0jivkFE+4b9R116kjzs9C+bEJl4jhF0ZfjOtxLYzFfSh9pTwdW2iiA2C
V/lPLbb10ezZmu2Y0tStlpSN9lAKr+I0InJoHgmZLheQ4MpiRFixUb3CTNWy2lMYmiw2p7ot
RoPb7qfLW0lDkGsF0jz1xfO8xo5W1AxufLduk1cOfxjrbbSVIbfU2pCAtfHjQqrTWUZJvxpl
8+/5nmt5ffhR7RcW3WY0h91lt4rQkNx2kLXxWWg3uSBxr89a/wBot43gxZrArk9b73OVPtrD
NmWpt9KpaSp5xIrxjBPLu/BJGxO2gJNwizMys2ERJSHIV+U9bEtqgplRIdpgMABYrHUruSZJ
G6lGgr00vZMk8kuEa9Wx+5Zc47Ycf4xHDHRdWJUCcjmnnHEdkBzipP1U0IEcsnyG22cOy7zE
g3oi6R14BaEFhalxFIQ2QwlkfppG5T3Nq00pFIy8h3YTrPYYEy0qm5q/dxIh45dVxpckxwFc
u79pxQiPUg8FEVpv66EiTJO+4vlX/dWXOs0SJEj/ALQzxuq47Uh9gNE9wW+NXip9RomqxxA/
lqwKCxi85HN8pPsvWWZZIn7M59ymSGe7MWg8WpMsRwWUr3UEj8dSKJUnlx9PGQ8RuhKlA7bU
KjpJGzePvG+G3zF7HOusNyNcnnpRjR1PpSu/dltSg21VQ7CGyAkqpv8AHQ0UlsHiLxvExtpy
VZJb86bDclPyo33D6oztKlpL6VfaoDBBT79zTfQBHz/FvjVTl6sUe0uRJ1ssjV3VejLcW4XF
pV7VMqq3wHCqvU+mrqRRsZ8dePbjkdot7eYovX3khtqRbYsOTGcUkglZD7p4pSmn406aU3Bd
jR4/gbBbq4gC23DHhEuciGuMuQpa50dlsuJdSXB7UuU2KK7aCkzeZjuOqyjGBFw+8WaHMuAj
yokxSnm5KW3E7xy4nuGnVyo400qNSPMGlT8SsKFeVHbjYZE6OiTDVHZhspZkqZ7aVcYjnH2I
CvqUj0B9dRlEa14Bwdi8yjPM8W112Ixbkrkhrg7Jb5qZKkNuOOuJqKVSkfE6pwTRCZR4bwHE
WmDef327uXSdIjQFWpKe5HQzskraAJdUo/Drq2Jzz7xNjVtwG3ZNBiTO81bWu7EhpBcVLeUS
ZlwUsqLbSACOKE/Vt6akxezHbHANyvcG1F0tpmyGmFPgV491YSTT1pWutKNknmDa3f6fMTuU
242qw3e5s3KyTo0K4vXBppTC0yQFco6WwlR4pPVWxOsSBnOa4/46gqUzj1wu7r8KcqDdEzYz
YbQhCilbjLzQDdap9qFe49dhrawSck75L8bR3PI6sfxUQbdEbtkSSn7p5uE1700UpS3CeTri
tyBv/KuhMy0R+HeNrS35GtWM5pLbebuNOy1Z5DcoOOqJ4tvPII7KfaVH83SnWutcSbpojz4s
vd5vd9Yx9MM2+13KRDQZkxhhQDThASkOkKXxTSqqaLGYnLLZjXifHo8WxxcldluXfL50q2QX
LfKQI8ByGlQ71Ukpk8lAdDxp89HZoYRS7nicNmy3+bOvjj11xycq2QraY7rgdabd7ZcL26I6
DuUoJ9D8Rrb3BJ4Kg0gLmNjklpTikoU+vZKORpyURvxA6n4azJNGzX7xFj0R3BMcsNyiqyDI
Epkyb6y6+XVpcQtfdbZolsRgUUQoELJ6+upaZTmCNV4nVepcSJbM9jXqOlUsXBx4SUmGYKSp
90RXFLU8gfSlYpUnbQ5Abt+C464/7tFy6EvFVQFXVN+ejyGx2Wnuw4n7c/qJIUfb/i1dmEMe
2bwRAOQTLXecpYZjt2k3y1SWGXCqTFWhRTIWhQPaQ0U/qIV7lemptwizmSPwXwVkmX22Rcrb
eWGbYqUuHDkliSv7tTWxcKWxVlpW27mlslMHVnwFk5ahRHr/AGuLdbkJP7XZFuurckOQisOp
CkAtUT29l9N9MwacQO8b8JretU93JL20h+NYnr5Gx2K+r7lslBXGefSUqaLZ4nkE7gkaG/0J
6MZSkkJoncgFQT1Nd6agg0Wd4flPzMItGPPJmXbKbcbg+646n7ZK0qUVBJ4pKUtNp3rUk7Cu
icSTr+UEFneE3TFZ0b7y8Q7sqRzCJEN9bqm3GFBJadQ6lDrZTUUqPwO2tfUFbAvCcHyzMHp9
xt0piIIBb+5u0+QqOEvvq4Mo73uWXHF+1NP56naDWSwseEs3Zu9yjRrzbYioLyLe5c3ppiIe
nvshxyEy4RzW4lLnFY9dDcgcrb4M8n/Yq7Zi25U4vxE2x6ciPIluRVkOxkMg0dIU2VAHam+p
WgynKkbN+HfKP+j13dDSG7W+yLi7aPugmUthqvGQqJ0VxTuCTWmiTb+SDuGPXZjx1aMhXcS5
aZtwkxI1sBUQ062gLU8nfh+oNjQV012ZYxsVoyjL7hDsMAKuEttl1NviuOANtMtpU84hBWeK
E0So0+OulsIzGCetPj7yblwg3qI05PEll1cKdIkoRxYtqm2le91Se2llTiUorT+zWZhQaiH9
cj2V4a8tXO8TnJkL7uakNS59wflsdlxEoKLb/wBytYQ6lXbVVQPXQ/YR0tHjTzO6ZFliwn2U
WK4JdVFekssNt3IthTa45cUlLrqm+JSUV9pB9dZlCnyWLGo/9S0+A4LbMnRYa5EluR986wx+
oFH7taw+O5xS4T3FgUCtPaCaq0ZvGwu4nBXs37gTCauTVtjtcSouuLSXFuBdapDfEChG9dtT
eWWVEFun2vz3aFuX6YzcY799fhpkzFOMl12QhaTBS+nlzaUHEp7fNKd6DQmmi7RjyTWayv6j
LbktntF0mOv3qUhUiziC3FUtS3Wi2+ApttKuSEuKSvl7fUH10ppKQSl/JHWGN/ULjc+JYLdB
ntSoUR9US2dtiQ2mHJdR3ykK7rSkKdSjlUnifhodkS8FI8iSc4kZM4MzZXFvrDaGnGlsNRqN
gVRxQylLaga/WK1+OtJmHkuHiB7zHKaTbMObcFpdmthye5FakxYUh2iTISt1K1NqSkhSi3vs
NDak6JYyOmL956sliuSYTUiZYbW/Oiv3v7VEgBAdUiUUSHEl7srcqs06Hf00t/kHBT3p2eNY
8vFpUB1dnspRdHoUhgLVEbcKVclOEdxph7mlSk1oqtdOP1FqfsWPMJ01zH7ClvCrDbZeUNCV
al2mLIRcGw3IDaAkKWoVeIBTStUnWEksg65OJvHmeD5Bduku0zBl19YU1JhSIPNM2KUBpTRi
lPBbRS0AoD4a1KjPBVfAi6eU/I9nzGLdLvCjw79ZoghRLfKt6GURmFK7jfbYogIKa+xQ9NtM
TjgUyOsfmHIrPkFzvkW32oTbsmkltUFstJSU8VJZQCO2HB/mAfX66fZHBlQscFJmSlSJLshT
baS4tbnZZT220clE8EJH0JTWiQOg1O3YVglr9md6v9yiXOe4kzIMePFiuNpDfBqGP0dhX3J6
8j66pxBqSXy7yte8qgrhzrfa2FyVoeuVwhwm2Zkxbf0qffBUon1PDjX8NSS+5nZC5Xl9+yqe
xNvUkSZMaO3DYUltDfBhmvBNEBIJ33J1mcQUZkc4fm92xZ+SqG3HmxJjXYuVrmt9+JLarUIf
aJFeKt0kEEHRs2tE1YvLF2iZ67l11jN3d55h2JIhKP27ZYdaLPaa7Q/TShv2pAHTS3MLwBS7
i/GfmvuRWUxIqnCqNEC1OBppR9rQcX7lcOnI7nWrOXKJIbpUCaUIJ2BHx6ba5sS8ZT5L/wBQ
YfasVNht1vjWYqMOZE7oeSXKF+nNakjvKAUvbr8Naq4lGbPJxl+SJNxyGzXm622JPTZYkaAz
b3O4mOtqGkpaUeKgoKqeR4mlR0ppn8YNduR9cfKkafnz+YSsatshyU0pMu0vB12I86pPEyFc
yVc6U2G238dDaaSBHTGvK8K2YgrFbhjcS8W1UpUxxTr8mOpxahRId+3UjuBH5QutNLS7NrEh
PAqweVmrJaAzbsehNXsRn4bF8Dr4dSxI5BXOOFdlxwIVxC1CuwrobXaTX3EXvy9drtZ5Nsfg
RG5dzRGYyG8NpP3NwYhU+2bcSTwb4BI5lsVVQdNbreM88Emkw7v5Rj3KJBszVhZh4nElifMs
iJMh0y5CU9vkuW4S8hPDYJRQD565pwvqScOSSuHmK0TfJUHOXsXaD8VDJMT7x7iuTHSEx3+d
Pb2kICeITxV676m5ql4KWcnPJGLhjM5TFkfhXTLWXIraG5ReisNuuJeeWvujuFanUlW2wBpT
W17PzTFVUQZwXQCkpAqeih0H89cwNIHmmUm1N2Nm0xWrHb0xXrNbxUpiz4rvd++K6c3FOLKu
bZPGh/nqjVRdnMljlf1KPKu0K6x7Q8pxl9+VJjzLjIkMqceYWyEx2yEoZQkulSRQkdARoTgy
UmP5UyK3Y9YLZYnnLTKsKJrap8ZwhyQic+l9SVgjjRKkfx1u905+Ss28lnsnmOzLxORFzGJN
ye9OXeNcW1LkGPwbitUaWp8BXJSV+3tlO4NdYT+RmEhvfvMFgyaK05lGMC5XOE7OdtbrEt2N
HR+4Ol0pfbSCtfaWdilYrTfWn7OP9uP2MtibL5Oxq14TbLAmyPLuduuse+KnmV+k5Lj0TyLP
CoQWxx412O+s1tDfybTcp+CaZ89WKHc1yLTiwSzPuy71em5UoyO5IU0WwY5KB2+JWVp5Voql
NtLtO/EAm9Crt55sl2iC0z7LMfsL8B22zQqY2ZrnceQ8l4OpaS2FVbor2fhorfroCOgeZbNa
LHOxu243/wDq/cJqVToMuW4+p6D2Ay4wXRxW24pVHApBASdqEV0Sq5WzczszOY5DVNkKhtrZ
i9xX2jTqgtxDRJKEOOAJClBOxVQV1XsnoFZzLNrv/nHE4l3eumOWyTJukmHboTsuU/wiqZhL
Q8pKGOPcQrkjgaK4nrppZVSfKHs/3KXlWXeO7xePvY1pu0ZFynquF9C5qFFfcKlKZjISntJI
WqoW4Cr00WtK+0ArTHwPsv8AMrruZP5RhwlWdyfGZi3FqYmLKC/t0htstJUhwJSEJFfWvy02
snVLwU7kThXk2xwb7e7/AJbGnXC6XWK5b/uIBix0pYeb7bhUjihPc4gcSP4g6z/Y5r/6lCSc
ckLF8pZbb7ImwwJjabK2FNRm3okN11LJUSlK3VNFdadSD16ar2XaUTtLTJ/MPMEC+We8Ih2l
yLecqVCGTOvOh2OlNuCQ0mE3TmkOcAVczt6fHXT1+2FL2lg3MPGioZlktsvt1RLt1vdtcZLD
bIiOynZZ5tCilpcdJUEqrsn01zVl0SMctkr438iPYc5fFBMlRudtfhR1Rni2lqQ4AG5CwCAS
3uQoe4emisKyb0LbdWi7Yd5zgWnB7baHkz2rtaEygw/EEVaJK5RUvm49JS48wQpXu7YPL10z
LbZq67ZRktsuSoN3iTx3EqjSG3ubC+27VtQUVNrorgr4GnXWfZbvZt8lX8TT8v8ALuLzplku
NoiXF+fAuTVxnypSm4hkIb/9lxmKotOufF/glVPTfW1Cq5z4CuLfA5s/lzDYWYZTdnG7i5Gy
JbchiYtDL0iIvmXHo5juL7C2j9KFHdPWmq/sVur8KAmFDE3zzNY7vcc6LibjCt+VRojUB+MU
qfZchoSPe3ySkB4p4qUDUJ10r7q16/Ehusf7pHc3zja38ARa4xlwrmiyiyuRGo0ZxhR48FOf
dOHudtSfyBFa+uudPZVW+jN3rLlcmfeK8zt+HZpbr9MZckw46HWn0s07ye82W+aAdiU8q0Ou
VnLlm5lNPkth8nWOz4/e7TY7zkFxk3CEiPBuNxcDamXFSe48GQlZWyhTfX3GqvhrtX217dnr
wZVeH5LvC/qFwH9zXMeYuMZSXojpWWkyDJSxHDRRQupDSgsV7m5UNZ7fiidW8ES956sbl9Zd
QZ7VoRZ50IwxxUj72W8paXiwF9taUoIqTuOlNT9tUo5k6V9bdX9i3ZR5ZwqELJdo9+Rdmo09
iXKt0NRcclKEfsKkKbXT7Tsj3hmvEn/m300acnJKHkrczztjMkP25263BEKXBkRmbxEgpjPR
HZDqV1ZbDq3lEhNCe4BXoNKslkqIzby9mFuyvJ27hbFyVxmIMaIl6YAJDi2AQXFhJUPcTU79
dbv7q/1Kq3MmlV1l6bLNa/LeMQ7BaprsWX/qKy2iTY4dvRwER1MogfdLeJ5tlO9Ucanah1y9
FklD1MhZuyf/ALGQpQGwhCQKIGx/AU9Nc/f7O93bybqoE8juUqNRSg9N+o1yNMG9On6lfnSn
xpoDrjZGREcb4lLQCN6pHoDT469nrzs5G9eTbZj1iwXD02m0wYsq8w0Sp9wDQM11xKUlX6xN
eKiqqhrHsquwLDMo5D3E0Tx332rrJNln8a2LG79l9tteQuyWYc55MdpqKE83XlGiUKUs/po9
VKFT8NaSkxJc4OJ4KjyjcsKOOSLwVT/tYFJ7rCI7KUgrW4EIWpfEVWVE9NtSRrMEnaPGnie+
+Q8gscVyYmHaWVGLEacCkrUwkd11clRUrj3FcUoT+J+GnriS7YgxNfAuKcQA4UE0KQVcBWlQ
RU8dCwEmuWXAvFLmLNXO8vz7ew9GKmr/ADXW4yX5oIq1BgFKnX20FX+Z0NOum1c4Ikcn8T4I
1aZ8rE21396BGEhU1q8RlcBT3uuRkIUvgnfYHfWIhmZI5fheyS7dhv7BeVyJWVuvIXcX2i0w
hthBWstMbOdUkJC1b+tNb6QzShbQ6X4mwYwWbxBvV0atbU92z3BpUdMidIms1CUxExwR+qRQ
cumiGmZVvgb3/wAa4lGv1itkPJZVon3VzhcIE9bcubDqeLaFGIVI5uHYIJoOpOpJkSuKYPZ2
/I2R4LJul5UyhkufcxZn2vdSy2lShIS2Kuf5lE70A/HWswDZiEhBbecCRulakp+BSDogYwXf
HfF2a3y02682uWwuNJccaccEhQ+wbbClLclEf/HSoIJFNzt8dNh+CxwfBnk2VYy5HurH7W4h
b8eCmTILchoe4OJAT2Ehz6k8zvWp1htgxhO8L+QY0Ka85dYMmazEE6fa0TnVyzEpXk4CAlSA
E7Aq3ptpliim4EnJHcrticYWlN6U7/8AZ61lISFEHdXMFIHGvUa1IuGX26eMvN8+4xp5uzd5
nIlrjokxLiXlQ5AqtYUTwDPH1Cenw0SYakjp9k8lxMssT0jK2pdxujhgw7xEuX3SmQSEvJ7h
UkoArSmwUduupPyKa0XD/RvkRdyy9yBnFxE3G3IzDK5chLIkd1HMqedUvtsoQlR40r/PRmAr
BXrXiPn+Jc7lFg3B6HLdU25OlOXJCUyHHU1bUl5aiXSU+qRt01dhtrAdusH9RqWprEWXNhh2
Q6mQl+c0wuRJp+oWS4vmutPqQRpbkcQRl5xrzLZLbEvKbm+4yLOVSJbTxDcS3lfERHXXSlta
iqtEJqr4fHVLkzJToGOZXYrVbs7EDjZmpbYhTHFo4OvNqJSAgq7ikcmyK09NTcikT8zyr5bz
G+xYsa5yHZrskPW63QEpaQl5IKkBATQq4AE/qKI9TrWBSHku8+dsjmQ5MtibOXabiYsVrsMB
lN0bBJCmUpS264hIUSpaVBI9dEoEkwZtYfPeV3GHFyG0TJ8uO2t2KhDUbiEEhK19xmietBuq
o+GjsUFchSM98WZEXTb/ANnu7jR7f3bDT5LajUqa5BaOooVJNfTTMonbhCZVm8j+RblJyJuy
yLy+tXaflQ46G2uSRy40bCGwqhqfU+uiSJnEZPnC22a4WnGrbcTDiOLQ6EQ+45CkFBS4Iylg
qYc4q93b09kDlkbbI2fveNsjWzPXHxaFIaXdITvILly3lJHEewqUpJSlTnJYptXfWk/yNRob
X/xJn1jmsxpNtdkokCMlE2O2tcMuy6dtrvqSlHKqgFb0GsuyCyzA+vWTeTLBlFvuVzYQifho
bs0OT2OcVtTLZoypxP6brnFwk+6vrpSTCWQuHZfk1kyZy+WhlMue62+mVGUz3kOsO+58ONpH
+X6mlKamSmILBN8k53frc7aIlqYj2i5Q1W2HbbXBWhkRg6HXTFSjnuV05rFR+GjCFp8i4vkT
yMMmjzlWYypNutox962qhPKaXDCT+hIbT+pyUPcTsSOm2h2UDGyRsuV5zZcLakJxeFcbA9cp
S7awpqQTDkpTV9IYZWlYaT0o7UemrDfwYSjRCW/y3lse7Y7dEQ47r2OsSo1sb7Kw26JvLuqU
lB91OftCNk010eo4LkkR5pyBqwyOWPw1T5NrNheyPsvh1UQAtpQCFBkFNfQbkb6xiTTTaMsZ
UnuoBqoIIHDf3CvSo9dbaM9jS715ZizX8eV/pCFE/wBPJ7UJvuy1cou47O6kke9XIOD3BQGs
JKDVnnJD+TPI03NXbcmRbG4CLW0tttIW9JkrCyD+rJfq6sJ4+xJ6fHSmjMTknfCXk2y4O9dn
LmzLDtxZaRFfjJS+lsNr5K5RlrbStSvRdap/joceTfbEDiV5lxRFxnMN4t99Yv3YX+zsSZrz
L7M5KAhb7ziO5zS4tPLtk+3pXT1T5MpxoayPPV2lX/Gsgl2yMuVjr8+VRC1oD6rksrUDXl20
oqONK11OCHM/zxJnYqLMq3SP3BFv/aw8zcJDEMtUKe6qEjilTvFVN10PqPTTBPJUJs69ysRt
+Gi0yg/Ypr0qS4EO8krmBLbTa4/Eds/4Sd1VoNFmlkH5HmHyMr8a5PCyW6Y/MbjILjKm5jL0
Vt1D7Sm1pQ6pIAXwUSn8NXbtgU0tlid8uWsYhIxnHcbfiWhu0T4AUuSqU62JzzTjkh5YbT9J
apTYbjemrTK0tB3XyJeL94+Th8LF5dRb7TDMltDjyS3bnXHe720t7pfLlB6Ch66E0iy86H+B
eVL/AGPHpFnuttvsh+PNM37yAQl5Sy0lBZkqlMvlKUpbTxKaKA/hq2NmsFox3yNidw8eSWL4
q5mfLTc0yn2or8qay1OWXCxAnJaU2W1bdzuqFTWvpqnINGWW3M8Ji+LZOKyYlyVd5EtFxVMb
cYEf7thBbZASR3OzwPu/Ny6HWnWX8A28EpnnlfGMixB23ot81+9v/bhV2uC2FuspZNVNd9hL
bstJ6J74PHrWupY5FLkcQvMuPxvIdizNuDLQ/HtqbbeYZcaUlJbj/bodhlQ3oPcUujWXTCMx
D+p2ybzhAlxbrAgPXJ9MqzO2yJJkCLFUl1+Sh5xQahJbbQ2UN8TSqlH5a6VjZOrfJkt3vt1u
i467lMdlmJHbhxS8oq7UZqvbZTXohPI00JGmkaf428nYbaLNj7F9N0ZlYxcpFyiJtpb7UoSQ
mqJHcUggI49BXkPhrLq/1BLJYXPPOPf6VMZr7uNcG4s6G3DbiRHEuiU46pCzLdV3Gk8XfelK
D0NNMKRaKvkfle0z8BagRo7rWWXWLEtmVXdYTxfiQOQb4K+pS3gpIcr6JA00SWWEInrf5S8c
2aZ4/lw5NyuDmJMyYUpD0ZtkqZmoWFPtq7ixzZUocUeo9RrPXAqewjEPKOH4pe+03fr3fIT1
tlxP3G5sh1EV6UttYW1EU6XFp/S/V/UHLanTS1JZKN5hziNlmTRZkKUubGhQmoTby4yIfINq
WohDKVuqSiq9uairXROKwCXJQyoCtOuwUPhrAhKoake7f+zpoISVCquvFO1emog1V5inu9QN
UiwqmvQ16VHrqBCiU0IqQk+o66INBciBSg3pWnw/HTBkB2V6VAqAdTIMEU2NT6cuoGiBDKuQ
SpJPEf219DqIBUSr0HqR8CNQCSpJ3J3UevSnyOlIRZUQjikHiRsSB+OoAEio3oaDYbahFNrU
EAEGp6H5A+uhkGHCFA9DXf021mQbDNQoJJBpXcfH4aUKYSjXkOpO5PprRAqqgI69Ck9RoZSK
bAB6V2oR6fy0AxYdHEJ40FDy6HauqDUHMLJqOiT1HQfKmpmWdOXJPWidv4/LWYNBrcSKU6gd
R02+WmADbWoFI4kFW3JPrXUQqpHsBrUbE/PrqIAC6EUNEjYbDY6CgVUhW9ajZRp0HWoOqBDK
khyih9aTQ71Pz+GgpFNlQ2/lQjUQQUjkFA0SKjauqCCBWmtCAVAn/YahFAk7ior1r8Pw0QKQ
aO0riSSPSiSAK/jqJhUWsEE7A+nofx0ggJCQFdxWxP47amSFe0HbiU0oN/Q9Nvhok22GVlXK
lQuvpvUAfEagmAuagfw2p6b9P4ajMhpB+roT1BrtrIrAEnryqVV2/wDLSkIaCogkbf3H+/WW
UCQo1HUcRRJAr7j1/l6akQpJUVV5bncpI6n0OtMZFA0VyqKp35fLQScBBwq5UPIEVof7q6hY
dAoCiaBJqa7H8NECtBhYTRKAApVNq9K/HWYKQweO6SRyFCr1r+GtJhAXNVCOW56qO9PT8NUF
1ACUgcqcNvTata120HSqgWhSimtRUGu/p/4ayx2I/UpuTT1AHr/w1HNWa+Rf/PUf4efyp8dE
m8bgiofP95bStzmvbkogfCtNeurOVWb9nichj+J8bF7vzchqay0q1WliAhK0MIAoHZpJVVNR
VKQOWj2fyCFMGSKRUnqr4VO1T1prBRktni7/AFscoabw3gq9vIUEOupZUlpobrWovBQQB/iH
u9Brc4CyLmInkyV5AmS7jltotmT2Y/ZIlzHkM8u6ndMZpLNF1C+JUU1ro/KMAo5HuLYx5uxz
JbvZ8fcgP3HimTfLktLbrSS8OSUF95HPkob9sJ+Z1Zg03VmeJ8jZjEjXuEichCL6tRuzqGWQ
47UcFJStKR208duKKakwvU0mGnzbdscgpLFmV/0AVaokpqIq7KgISEhbLS0qVxp0J/36nszB
xyhfl63Y7KZvEuw2pDkVAuSY7sWPcpEYD2tL4J5q5DbinrqnOBccDG85N5mi47jxcs0ezwnH
Ut423DiIbmhylQllqrjjYcT12HIddNnkHM5Jld2/qCt+SWx79njrkye6IdtiNNCGH1pAfck9
lfFD/H6lLXsK/HVJYGj9n8noyi0zmcDsa3mi5IimAyz9opxKhzfkPod4lTatxzWKH0OsrHBE
/ZXPKK84n3R3FrKzlMmEftnFSxFZcZXstxKWi994rZPJRUAkUHrrSeAejzvLW4JLgcIU4lxQ
c49CoK3p8q6TXEF2xryzkeOW6BAtUGO1BjqdMtgsrKbi48Cmk1Qr3eKVUShNNqaGwVWS8jzT
k8yyoiyMejSFQmVxkSwzKSyyg1on7dCwx+mDQcx6b6nA9WhufNmSruV0ur1sZDV0tzVokFtt
wIbioqklDiqguK5KoV7fLTKAXi+eeJMfvcG7Q8TuUd+CtK0vquPeAoKE8ClIWd+hIGswEkor
+o+bbLiXLJj0S32tya/OlxSpxSpDr44KWXTRLSuJ34CnLSoJlSazfF15VbZkLCWIseM/3fsW
JMj7mVLJHZ7klwFXBCtw2lNCfXRISzQ7r5FuRdzNN28eKVb5AjLy5kzlforSgdnk4AUoWocS
Aj4b6i+hHQv6l0IkyZMuwpR3XGVwzClrjudqMgIQw88UrU6j1I231YNQ2R+S+c8TyxhprJsa
MpyJJflWxpme5Hbq8KFLxSnmoCm5TTS8GVkb3zzRGumMw8dv2KpcsDEFLMJsurYWJbNUokx3
yn/LSDx4+78dMGWmyiXDKrXIxSJYEWiM3cI7pefvJcdckOJqaIQ2o8Gk0IB49afHSkbeRhjF
0t9uv0OdcGpD8OI6HXWoT3276+O6Qh0UKPdSp+FdaQNm3DzvPjy7NlGRYrLZIelotk5pS2o6
7e+ihQ0h6qHn0HiS4eo/HWEpwieBjcf6hbeqLdYcSNcpTEu2yIEV+bIjt8HpFP1O1FbaQhKR
6p9x+WlUgG8YKvccmwTMImPxL5cZmPtY7a2re243GExUl2tXFgBae2gcRTlua/LRAp8lttGX
+H7D46ax5N+u1ySm8IuTQhRhDkFTYSr3pWotKZJRQ1VufTbQkxbG188zYdk0VD12XfLJMt9w
lzIjNleaSp5MmnBDr6intqHHqAaemmAkhsTy/B4/jrKMdlv3ddzyVxLoYjsokJaUw4XGwlxa
gXFLoO6soHx1Tkl8FwledMGTLk3Nh67yH50KHbnrU8EiHHTHWFOSGfefcQPppVR9RogWyp+S
vIOJeSJjAdkTLAuNMUiKp3k5bTDdUpSpT8dslTcrffhXl8fhpLBI4Yr/AKFwO+M5E1m6bwlp
tbSrXa4TzbskLFQ08qSC2hhdB3PX4az1bcBMF0sfmjEIdz5SWZGOOO2kQja1xFuQbW4l0OUj
ss/avqbko96iCNwPTo9ReyMy7zzDetV3Zst1mNXSTLtwZuEeOYPKJG/zvcFuOAKqUp5rKyNu
mlVQFkf834BImomx8mm2ONHvRur8VuI7Sex2m0/bOcCkpLikk77f4hrPUSLh+UvFsi+45lcu
4u2pywIujCLD9kpS1metam1JdaPaQlKV+grXS1iAQ0n+TsPk+MBYpF/cRcXLcIUa2Qo7jCi4
rYtyGXQ7DLQP1PNlLlNx7tSXPBp5KniGGWbFcms+SXLMMblW+0ymZMmLDlOvvqQk+4Ntdmri
wemiG8QZwTWKecPvPITa8ulNTcci3CZKt0uTFQtyN3ELajBJQOSGUpKVFISSFb9dLqKZbHvM
uKi8KkuXeAubHtbrDFxhMSXf1XpbKi335SOaylpCzTjRIJFdHUE+B3evKWD3Szz0WDJ4Njyt
TzqLffZETj2be3L5Ij8+yogLZ+lNOnXSqfBFffzTxFe8ivH37zTFptd0i321Ptsdr9weZjJb
lM025B51AUkKAG5J1OjWBiMok8Y8p4IcOalSFWSHOfXOfyGBLStsrdkOKU2ERm2HjJTxUEgB
xNNXWWDR5sguoFzYWocUB5BUFbBLaVgnbfYDSwPUsjzR47n5JfENvRIKG7nZXReUqUVXJqPI
bUpZHHZEVCTUn8dHTBPU/JlfnK5m5duQzd7TIhfdPLTFtl1m3FxRcJUh95iTVDNE7ezoTTpr
VZRNIhvEmSWCzQM0XemmZTMqyKjx7a+6pn7xZeSSwlbZCwVD/DvqtXJN/ibrBznF5uNiPZJ1
tZki1WVpq1OXd61pY7CnlORxMT+sSyFio9eh1lKBeSBxPyLZrDGg229XQOXGZkkxyTKhXV55
tgKQ2Y70hwEKlxSoBtRcI2r8DpaYnW8ZLcZPjmJHtV0tf7imPcRPfhX39rZZfXKcVVmGnZ9t
QqWwqm23roSc6McHlo0Iqn0G21KfKmujsRxKyKJqSR/YdRSCpO5/s+Wg0gKKxv1SabH01Jgw
JVyT+HU/79IyENjyJ67A/wDhoINKiRWpHWo9ABqZAodySSQajSiAVkgV2URXl8vnrJAUuqCE
02FVKPw0iF0ryB4nbgOo+YOgEEaADlTj8PXSTCFa/EV+O+3+/WSAoU3oCOvKmtCCp5A12O4+
H8dBLIDRII49fgdqakEQFQkg9FdQKddIg4BRO9AT16n8NBkUCNjuRvX0of46pGQJKSfaKqpt
8v56BkI1CuhJA/Cg+J0gLRQGnWg2Sen46BC61TXY0qfXSDBQCoINPiB8NEgGncVP4k/3nUzQ
sqB3PT5VJ0EJUQfmTsB11QUCiFcjSvKlFJ0kGqgokkFRFBX4j46ACTUmgry/w9d9Qihsrp+O
/wDw1EmGkpp7R7670rv+OliDkeKa+prQjany0AGU1UD6Gtaepr8D66hDUEprRNArenXpqMhp
5BIKRRJNfjXQxF8vcE7V+nr00EmBQ+pSSSmgodMDIdeIqTxSaVpoYCiolQqPcPpCj/aD89RA
CQaJA47f216V1MRQBO6U8T03FdvTfQTE7KNDsQfdQb7DamokwwGwfQ7UJO1D/HrpGRSOJCqU
pvyT01kZEniQaVPrTcD5ahYtJIIJVsdzt6Hr/boMwGsqKVcQeO1aChNP7tSGQ0khRJ+FRvQH
+WtQTCCQ4OIUKJIqOp/hoYJAUXAKdCSeRB9PnoN5ByDgqQePQ/8AiNTYIXy34geu9eny0CFt
zFVEfH4H8PnqgQgQkjkaHfkfX+P4ajIpYqBsDWvXoRoEIcVLKB9PQEfH0odIpADZSSknhTcC
nUdd9ElKTFkoAqNyetR/Lr6aBkIijm5rUbj5/iNMBiQJPMJBG6T0/vqNZbOkyLJJIJFQRQeh
r8TqkoaApSiOop/hHz/36jLYlQKSn15A7f8ADUUwFtStTTrSo6/CmsybnkjWFhV6aISpoEpR
xUKelN9emuDgbzmt4tl+8VYwhmLc/urA0Iz0r7RSYHI0Sv8A6pXtNOOwHXTfNgymZUaAglO1
KADcfjrI5LF49u9gs2V2+73x+Q1b4TqZAERCVuOONq5IQQspARXrv+GulSnBcLhn2Bv+YY2Z
tRLg/au79zJjyEt91UhtBDZZbSfoSsJPuVrKhZJV/EdYJ5Sxe351fcxyWTObduRdbjRI7SXQ
Q9tVzdNFtISlIAGmcbMMprR8aEX92Yu7POkq/wBOMoS00FKUCe5KIqE+6ntHp89ZSNOzLrHz
nxxbsNj2yy3u52W8ORQm8PNQ0vS5jykg9oy3VVbYCtkpb4immc7DLJO6eY8Ol4vOi3K4yMku
MyCmPFtsuBGjmNKpTvGW2BUIVuKV6auSaZ3T5SwW02PDQm7y73esXdW8+0Y7iFSDIQpDnJ14
nj2w4eO55U1puWDTQcfyP4tWn/TyrpPNhnXN+9Xa4IbciFKnk1ECjdXVJUTRxQ2pt61FEkk2
ErKvG069tsXTNA9i8SKtMCwwYEm32ptwrCkMyGmypx9FN1Cu9N/ho4LKyOMXzHG2PIpyO65p
b5dmttsXEZV9sq2ssl0jhGhRlclOcUpqoj5DVsOJPP0p5Dsh9TQPBTq1I29FKJ/2GoZNq8b+
RcMs2M2S1XyaiVOZkyHoT6o4KLEVIUlLyiU/9QtalcgBWlflqgmXNvy/hMXFY7Ua9wnO3Ffa
nImNSfuZUlQPJz7RujThfV7vefXVGQshhM8uYpOl3i1u3Zp7HV461EtsNTQ7C7kqqSltPEEq
T7dz7U/w1QTWCneNfFN0tOa2G5XK42J+JHkIU9HauDL6z7SAEtgUUoKO2j7E4NRez/x1Gu6I
OU3623qQi8SH7attpC24EYNlDTbikp4tlDnrU166oJPwZreLrdrlnWKpumaWO8y4E1ckzGG2
2o8aOkhYEmUUJbcUU+1tvjsfx06JF/8A36wM3TO/2bKLEzcL+uHIsrrzjbsdLjTfFfeCgW+V
UE+tKgnVAJjm033xorI7vOgz7KuSXIbE9ZMWMlXFH676H3m1JdTyNOLaRUjrodRIy9ycYaiu
u4BJxVgfuUhV+Xdez2VMlI405ArUj/8AR7fDRAz5IbLJmJXrC7ZaLRcrO3lzdjSlqRKCS0mG
FEvsRVqJRFfXxBTyTzCfh110S5gLJvRll3l2dPiC1wkyrOm4feqcdiMMqVd1pCljlIfJolA2
PEDdPH1rq5KzKZZpMOLcY8mZDRcIrLgU7CcKkIeAP0LKPfxPy0tYNQekL5dsHyHzZbbdcott
kWu2WdTy1uOlyOXDH5oaXyUWEhoD2hIrvU+mubq4RhWUvyQvi67+Os9v9bpidqhXS3Q3lxmo
wQ0xMdKx2koiuqQ0pTTdd1q3rU00usIV5LlEw7xdLyy6BONwv3qJAjqTY0GE8pa3XFh14xEv
CMlYCU1/U6b+u51wBjnkzA59zzu5NYLjD67ZEDKJbdu4yWmpSkcnEHsqW22oAiraVHjpUIkj
RcG8TQI+JYwbrhcWRIkyJSMtmXRSm5EOMnkpLqUqW3TYJ3H0/wAdZtk04JjC8PxuxuQrjjVk
iyrS5ari/IzBT4W+zIJWlDO66KSW9uITsBU600TIbI/HniiyYIqX/p56RHhQYsxN5ZPAyFKK
FLH3i3uLnPkQUJb/AA0RLBsp3ljx9geJY7Ou0IsynsjmR3cQSypX/TW9LSXn1fUrmglXbqoV
6aUTfBj0FNZ8fuJCuTqNjSlCoaXoqrJof9S6lf8AeK9Ghr2oXyBrFRX+WmukcrbZI/06jG2L
xfLhdIsh+6Wm1yrjAkNlrgw2ygBxSUuhQ7+/6aj7U6Gpg3pNkl4quGJs4f5Eyt5mc5eoyCv7
8qjOutsTHVdvslxBQl1av85XGh/KBrVs2BJqo3xqRido8BX25xWpbd+nSm7VNmgRlpU842Vo
aaU6la0ReJqulHCeh1jDZvaSQ4ubOHxfCOLRbUxKEnJLoGprriYyVSXWHUpfDj3FTiW00IZC
CP8Am9dXk1f+SRo1/wDHmBTHJFqbx+Ohq3Xa1xGkoguWxSWpLoS6hM5SlffdwfUU0p/LWYBu
cjWb408dZAuC4bKy2hm6y4i0Rob1jL6Y0VbzUQsrUpUgLWgfrIIr0HrpSMNlURg2MzLNEzCX
ijUTIv2O4XRzCW0vMxnZEOQllhaohPf4FCipSAaLIB0/BQSreLeKrKuRIuNlZYn3K32u4rjS
bfMukS3PSwvvRQ1Go6zz41QlZr/DbREqTesCYeHYRDuFxsF1s1nRk7l3+1jdy23GRbVNSGG1
xmGFsKowr9QKWHFe2prtTU64CXEDfDfEWIWzLLG3kkB+ZfLsu6SDCjoS5YYggrWz2HUrCnFI
B3QtSuoGptgv+hW5dg8eW3Bwb9jwt06ZbkrsnbcffvciYtVES3eH/TR4i17ISv3FOlpbJqfq
SszCPH9wxec/FxZyHdLPPtkZ+zRFSBeUokPhlbMpbx+3ccfT7kdr6aip1RAucNHG5+PfG9/v
EDG2mWbBkUd2XKu8WxuvTGmLfGYLimXHX/01zeYSj9MlKfdWuqI+4PJVrLiHjSSi55JbG7xd
bDZLU5c3LJckCI68+h5LIQX44IWz7uay1umlCd9TYfJB+XcWtON3m2G0hxmFfLVGvKIDy+4q
IZfLlG7h4qWlPD2qUK/HWk5UmazMFGWuh2O52INOnw0nSTkpwqSdgoJ232O2rQCeClHiSCDu
T+OgkpEKRQqoaihqfU6UygIFaiKprvSg6GmolIFcuI3oQaJ/v0AA7nlWgPQU/wB2kQJ5JVTf
l0HTb56EyAlNE+nL1/hpNChSvKtafSabfhoAQmvqnc9dRANOBNBtsB8dSEJVCqldwKb6YMg3
oKdfUVp/LVJBmnGoSNq0r8dAgJFOIqR/vGpGWHRJAJHQb/PUzYgj8vQCtabf7DUZFDjQAilP
T10jIFgHeoNNAQEOR67k9QNRMUpNSKAAgdelP/HUACvpT6vX56RAAEqPoD1J0GhSq9CQK0/H
+GsmRO4G5NDuCPUA6SgMAgkU6mu392tEK3AqdvgD6fKusiHyII2oBX8dUirBcFHimvT1+VNU
mWGtaabgH0JB/s0CABVetOPQetD8dIBgUXWhASKD47/hoEOlCSdqbAjcb/PUQsFFaqHXZfw+
VNQpiU7EbkelK9P46gYsUKVKUaU/s36V0CGeRQTsaUAHTau1NABpNKj8/wAPX8dICgklYp9J
6+n+x1G4DG1QSSRuCSN9QCVDkqqU8kU2+NdQChyJB+qp/D8SNAh+wA7ct6Hf+Wkg18gEmppT
fpX4aAgBIPv2PoRStDqBINCQtsGo/wCb5j8dBtCu2hPJQ33pSh2+O/x0E2JoQkKoeu9K9dJl
nQCqikgJKfrFTsRuOuiBQk8OnIhJ3r13/wBvTUaYRRStK0A32oaHqfloMi6K3P5etfkPTUbT
CFaAhVCSKU3p/t66BYrnRQofkd61/iNBlMQtPQAAk7VHQ/h+GtGpFKWnZIINN/d6H0OswUph
[base64-encoded message body omitted: the payload does not decode to readable text]
UAgyFrSSPT6EjpoZBFCPzH57D+zVAhA71FRXoeo321QUiUhJPtNa9QdjTUAaSkq2BSB60/nt
pgULKipdangE1+J6+uhodgCSFBBNQSOnrXemsmHVgKdqkkkUon/y0yMBhw0pVJPyHy9NRSAI
Rw5c6ggbKJoa+tPjoNBnlsCAetK9KahEBQqTU0HQmtd9q6WCFHlw2oU+gH1b/HWSaC5KCCUg
g0oon8d/npRC0lJTyUNhWoVv+OgQ1H2lIFeW1fw3/hqAABKDuCr0J9a/36pNLQEUUEHhuTTf
/bbQyUg51JookAe4bV21BAZUKctxUEcQnb8d9SINPFKQoElI91KV3Pp+Oo2kgiFH2/EVHQdT
66GUB1VUhYG/XoNAth1XyrU06fKn+Kvw1SMqCCkOoUIiWz9KaLVQgbnpv111SORvLN4xqb4B
i4+u8sRb5b5rss25SXHZDiVFXAIQlJPvrWpIA9ddfftZ2g5wZMpZC+PoKbEb8id6/HXNI0kT
eFyXouTW99q6pspbdClXRQUoR0fnPtCle5NUig9ddKOHINmneWskwK++S7Zd4mSlyzyjHj3c
x232gxHbIC1KcNO4VCvtSDTWapzsE66ZNY/5Ws0/zQ/fE35i0YlCjohRkPBbZkRmUng02gpU
pI7iuZrx9Pw0xhg1GzPrpZcFu+aX6TJzFiLaOa5jUxuK8tchx1SlBiM2riVlPQq6H01hVZpW
UGheKczxOxY/aJt2yyO6YHfQbVLS83IgtcVcG4TDXseccJqtxwK+WulkYZOzfI2EXq1/e5Jd
rY/akxVoYhxnZqLsFA1YbWyFdtS/8aiKV+Wh1aZSOJefYTPaau2VXq0uvKMR2Ai1vzHXkvIW
FVkRFnthDfryTvvXVBT4Ji6+UcZgvvrj3y1OS505ldrCZj0hCmwOKnJp4qEVKan2t7dNSpkU
Ih5bgjORQX7nfYCcicEpDaYlxfmwwy6n2kvvnhHdWRsRQgVGhJmZgKNkUa55piTES4sOrsTc
uTkSYk5UqHHafQUM9yY8UqfUpfRJrTfYddS0xweWs0eiv5de34riHmHLhJW2+2rkhaFOq9yC
PQ/HURvPhyy2Njxbb7uqNZmZDt4KbpPu6GlKVDQr9RDTjuwcCPp/jtqskMkndBgzdrauOBR8
TdgyJUtd7kXspQlLYPFvhy/WSkdaIT06ddXRrgJkruE2lLvhq/x33bUw7cLi3OgPKebjd9pl
9BU4ptauSE0bUGkLTyp+OnErBPwXzP0YZdZN3kX39gXa5jEZm1XTuJM5Usq4KQ462ouJQgdO
GwFSdZ6imdblgfhx2JFgXS22qC01LZZjLbWxHU+jqAFNOOOKQvp+oeR0dQkhsfhW215ZJVf7
Hj2MxWIU/wC3kwHmlSnIY4pS6tkFxKKI/OTUqNKUrrXVcDODEPLmN26zXaEu0ItyMamRkrsj
lvdLrjrSd+9J5kuF08vcpQA9B0OlYBsufgmGj/S9zm48YbuctTY4JlForbtXNBkFCX6IAKed
VDf+zVbPBduDSYTGEGe9csFbtDsB66qRmcklgpRbg2QuheI4Nle/s2J0OvkkxvZ4WBx4LM2z
sWv/ALcTUXJzKJrvbr3QsfaJUpw95NAKIQkfw0RwUjK527DLZiU+kS1o8bqsXft039Nbzt7U
pXRyveW6ajanpTbV14JzBnULEcJTB8dR7tb4NuYu76V3yU7PLkyS0WuXJ5CaIjNKXQEcqjZO
2+lpse0M024Yw2/Li29/GrIrIY94eRiEF0JZZVZ20chIfQwVF1pr6qKHuUAPXRCJFQ8kwF3L
IMRslyxZ9STNWy/eFMxrS5c9iSyw22pXZZTsoqcPKnz0pKGHIjGMUszV3zyXYrJbpeTWJTCM
bx9LwuEdCHOPeWkOFPecSK1V0QrbQ0PBdZWG4fGu0+6YdYrZeb85cYUO9W9RRKZhxngDMU2y
V8GTyV7lDp0+OpLGSSWjCcvwJ2dluXSMIhJlYvYXVOOSWXEmOy0hvktAcWr30IVQJrsNaM1w
pk1iyYHh8XFLbEcskORilysLlzv2UPGr7VySlJQ2JIV+nxJ4hsD5b76zsbOfoZNkWIWuD4lx
7IU2l1m5XSQsSLm/KRxeCQv2R4qSVcPb9SgKU9ajWq7JwMPEWI27L/INssNyU4iBK7ynvt1c
XClppTnEK3punr8NN3AI0PDfHvhnNr99naEXS3ftTcqReIjj3eDjMdwNNKS+EqIUtR5FKAdt
uuhtrkqtNE9/2M8YvZSGG13T9uVbhKTGUH0tl5bxaBVLW13EJ+AU3T57axk0kNbV4w8d2zKM
xs9zsVymvW2zquUGIt9l5SGlICf+nLO6ni5y7SljYdRrTbMpvJDYT4cxG52mwSrvHu65GWS5
jMJEEgNW1uKpSU/eKUmqle3ckDfamh2HqN8q8Z+IcUiwIF9uF3/ebhbXpkaajtKihxtRShvt
JSVkuq2pWg+OtJsw4SGtnwK2XfxTia2orjFwu+SiHdbl2QpxLK0lsdtwD/JQmlOWxVqUKTUP
BNueE/HUqZIhWa8XOC9Z761ZLpLuAZU2tbwKk/bhsBQNeKUKV1UdYbczI1+Ct+dMCxvD02mN
BRc13uZ3HpT1zkJfLUVPsQzybBbUtSvfVKthsdbrrLB7IHxtgNgv9pyG+5FJmt2LHW45kR7Y
2lyW85KWUIDfc9gCSKqr11WeR4ktNu8SePlwpV+nTb6MeduMa12eMmGhmd3ZTQcC30PCnaTy
oFJHu66y7N8hVJP6ko54AweLdLfZ7vfLkblfLhPt9p+2YZLI+yVTuv8AM1ApSoB3PTR2ZStE
ZMw3ArH4VuV0VJauGSTLg5a0ynYbpWy/G+qNGUVhLSgElRfUCDWlOmtqZyU4REowiwyvGmHS
o5REul5vz8G5XaSjiltJTxSCuu7SAAqu1Tobhs09rwS3m7x1a8RsEP7i73e6XqTKXHhM3NSC
0iNHBDzyAlTlEuq4dvcGnUaaJszaymIKN4swi25lkarZdLqm0QkMLeMkhJW4pFAlpvuFLaSa
15LNNav+JVqX+d4XxuzRsgcu9/utvx62JgLdjmCFPurlqcSnkhCyysIKPa4KpFT6jXPsyWyT
tn9MNucvE+DcchdZZiymolseSw0nvd9hD4Ku+tAUpKXAChqp21OzHkq148RYhY4zKMmyh6Hd
pomqtaGIK345RCdUyFPKT+qkrUjokbA6pZEheP6fLfbMDcv8rI20Xhq2oui7cUNcCFp5hpCu
53+ZSaBXDjXWlZtk7JIyvF7A7fslttlYdLTlxfRHQ9xU5wCz7iUJHJW3oNaeEVVLNfvP9P2P
2S9Y6JOSLdst3uH7a+4iOkvJfSOTbSEtOOpouhQVKNUdSNY7MuTMfJ2O2CwZzdrLYpa5sGC+
413HkdtbbjaiFsn/APCdsjjzp7tdaPBjtOCXwbxvZbzjzl8yDIDYrSu4NWaIpEZcxx6a8juh
JQkp4I4/m+OudrOcG6xBcLZ4jxayY9m0zJ58Gfc8fc/bocYvSI7bLznLtuq7SVKLjooWUfTX
ZWqrbYcYIdWC4lA8Ioyd6dGn5HdZCGIilPPNmPw4LcYbaSngt5NSXe4eKR0NQKyVpaGyahck
heP6e3YcW0Ig5CzPl3V+NEjpSw6IJMo0UWJiO4hYQKlSSEkjprKs2itjB3yD+n2DYr3YY10y
pty33eZ+3PPfbqEhDhBUhLTKVuKUhwp4czslVKjVNmn8E2pgzbyZjlkx7M7hZ7Nc1XWFAdUy
JC21NuNuNqKFR1lX+YpriAXEiivTXaulJns2T3j/AATGJmNvZLlUqai2ruTFjhx7cEd8ypSQ
oOuKdPHtNpNeFKqOubs5waSx9SWPga6mXM+3vkU2+0TJkPIbisltNtbiAqQ8+2ohSg+2OSQ3
06aJehXkio2D2aR4qfyKJeHXrw3do0FdvCFJjNpkOEIKuQ/UWtIDgKfpGx31uWrPOjKWETGa
+KscgY7kBtN1myLzgrsRnIUykp+zfVOUED7FIPJrtKVQ8/qGhOz5K2FKMysNoeu9+g2iKsNu
XGQ1GadUKpSp5YQFKA3oOXpobhSNVmEadO8WO5Bl2QWjEpjiGMSVEtKV3J1a3JD63/tF8Vio
ZaDpNEDbiPnrWapD5fBM2L+mvKZ4S7JyFmIt1nirttvyFlaHls9pakFILaS1sqvT01h3sEEc
z4EvyrJdZc7JY0FyFJkw5UZlL8loqjKKayXWRRhLlPb3U9Na7WTJvGTI4EF2bcIsRBCFynW2
WldaKdUEpJI+Z9NVm1JJSzXL14hx5ouQ7Vkr7E+wXKNZr9LuSAiCiRMSVIchltSltoC08FBX
U711Kf8AqU5wR0LwTnP7lBtkiU1Au7vfly7ep1SpEOHGX2/v1JZKypLiva2Ee5R/slZw3wNF
BD+XcXdw7NV25q4TZLKYsZ+NcJZU2+oPt1X8FJHKop6dDpSwnJnbZRUvPNhaW1qbS4ntuBJK
QpB3KVgUqPlredmWa9Z/DfkSZZrbZ4t2jt4/f0wbi6xzUlCJExDvY7yAnkspRGNaGgqNtcl7
HErk6ui5IbN/DsjFIVyfdyO23C4WlbAutpjFaJDSJezKh3eIc61UlO6RudST8GLInLh44inM
8Lxw5E/HjZDa4cyTcXnHXg6/JUptSY6D7m+6gBCeX8dtaTfTtPJvM/BGWTx/amvNUbDWL6Xb
aZi45uURTjDqko5lcbkkVS97OJP01+Ws3n7ma7O+P+JbrkDX72crYtSpl3fstsVMXIclvSUK
KW20ut1UQpOxNf7NdPbMwuC6rb5JC1+FchYxmTdbllQscJmNLmSoqPuHEhuMtTXNSmilpSlO
p48K8qHprFe7tC2UJozjDoN7veT2u0wpa49wuUhmO3JWtz9JRUOKiUnlxQRUD09NXtlPZUbb
NlzbFMgzmBCah5s/c7fGuciyFi9Ibipeu8VpSgWewEpWl7gUNl33A/jqTsk1+ouHnzogME8G
yHs5tloyS6NWx5URV0nW9mQGbjHSmvBmo5cXFAczxOyN9DVoniYJJHLAsJ8fZJmE/GpL93RJ
XOdVbDbJEeTBRFQkq770lYHMpor9UJ923rq90plXQ2m+O1W63fds51Ghi6Jfl2e3SnpEdc6G
HFNsvKWmjYdfSj6Veux0tWqy65gnHvAMaQtePWu+vLyKxuwE32NJbKICP3anBUTiSoLbCk9y
o92s/lH1FpN/QVb/ABBZ5TrGQY3ks+NjdqcnJuEp5HanR37ajnJdhhtVOLqfprQj1rrUW1yS
GrnhrE48ZeU3K+TnMLlR4cq3FppH7qty6OqaaQ8FHtVQtClLVX3Dposm/r/2H+vMD6zf0w3R
V2uzF6uQbgQC8zbpERtTi5LrbPdBKKFLaQk0VyP1e1Px0OVBnEGfOYhapPitnK4PdYuVuuIt
t7jOGrTgfSVsOs1AKVIpxWk/jro/VFrVnQtaZUVoSRxqQE716a5wBybpy3VuN6emoyGspWNk
+8D2lNfTQUhpQT7+RrQCoHX131GgtyKkHfcD5H0GoyGEpKqp9u9SDvSnoKajQkq955Dc9CP9
9fTQB0SkBBPGhP0VO5+OkUALFCKjkdlU22+Wggcm6bcjTY/+Hx0FIalHid/x+A1GsCVKQClP
EgpT16k0O4+WpGGhSaKHTqKkfCh20ihVVJJ4mqvRW3+w1kUcyVEApohXRQ6/wA0gLIBHuPFI
9T+O2gRSa0olYNPl11EFUEAqBAX7SPx+GoUKKOoTxABqK7Gp266CYGwpCVBY5V6kfH+OgEKT
7UgqBIFCD8T6baINiQaGpSfVJptvqCWKFKjkoDl6mp/GlNQLYQUgbblQ2/4b+uo3IOPsoCFU
NDX1NfhpAFOVSRRdCAdZY/QFDxpx93+L10lkhXUEsxOaqfIjelf92uqMxg9D2j76P/TI65am
u2/Lurse4usN+92NWquSwCsI2G9RTXX30rWyjwc6mLOJCAAR7ag7GtCdYk6Nkpj7MF2/QETI
T11j95CXbdGUpDr4Jp2wpAUoAnrxFdbok8MI5Ns8jWkYh5JtbOP4fbH4lxixoFuZnxCuImS6
uiu3VSAp3cciammsUorYZVUktdbJgly8323E3sbYlR4cUJmpgtfbRzKUnurekNtg8m0CgSmt
KncmmsqtYZJtmQ5Zg2SyfIt7tFksMha25DrkWDHZUkIihZShaQqgS36JJ6+mhBLNN8M4fjUq
025u84s2tdxceZVc5jDsxc1xuvIRlNkNxGmePHm59SumtXqmCsWZOB+PJLP7Rjtjt7d3DLyl
PXSBKfBcbJStxM0lDJSk1CSdiem2h0UimNJuD+NZtrlJx20wWWIMRuVcpEqDLRORF6vvMyH+
LKnShKijY779NXXIOw6tXjTwxlFmU7ZreyYQlRmo0mIJLEpSV05Icdk0DilCvLt9BpiGTkdM
+G/Fd4SytizMJjxZ0hmQq3uPstHsJJ7Uhb3vWKj3KRt8K6OoSMY+EeNmb5hr0PHrdLg5A5Ki
Pkh9cUdhClpWy1IPIuVRxKlilK0Hrp67GWzzxncNiJm19ixWUMRWZ0hpllAohCQ4QEJHwA6a
apEX/AvFdjvuFQr/AH273BmLJuSbXCgW9tDiW3HFBAdWXKhPJR9ygBQfHVbZpONEjkvgvB8U
S07lmQzwLhKdj2tUCIHVJDA6uii1lSj6IGsy/JmEyKxXBsTmeMMtvj0dT18t8xmK06+gpcjs
qdQkFA5cQ64lXuqDx6a6taU4BWkuGbeCcKdu1yiYzKctNwtNtYlIt3261xihaikqU8Spxxay
DXgNc3JDZ7+lpKobK7fkHduQebZlIlMobQnubqIQ2pbiSBuErpXV2YkZi/hjx1ecpds8LJpN
8eiJlCdBSwYakyIxCUEunmEoUsnbqfw1rMBjwZdn2Js4tf3LOqWuXOZQk3J1LS2WkyD9TbBX
73EI6dygB9NVSLP43wLHrhjs/K8kek/ssKWxbUwLeoNyHX5KkIC1uL9obT3Bsnc6W3PgYRfZ
P9PeJ2G6txL/AHWbJiXm4ptdkbhhLZbdcb7iHZRV9fE7cUinrrEhKg5Rv6fLBHnx8cvN4mO3
y9Gcuyvx0pREaEA0KpDaiVrW51oOmqWD8DNfgexNQJlkl3iW5l8S1qv4cQhP7cmPyKSylBPc
LhpurTkZX6FDt3jH9wZxtuBe4sm55RIS0YTSFrRCSUlRVIe+krTQhTY3rsPjq7MZWi8TvEuH
R2TdYmVXSO3ZrkbHfJTzKnJDshAolEBEcqXRxRCEINeutJ2klvRG+R/HmPRrnjsO3X2XHu13
c7Eq25DJacegsEVD8lxpS0sp6ew76Kz5FLJHY14zgx5uSXOZkbyMZxPtouN2tSFtyJC5FBwi
hRTRNSUqVWh9K6uzCJUss7vgmDjc2RKu2Sy4ePTpMe3wHLegiW8uf7mkyRVKQhO3Pry1d7Mk
p3syzMbVccQvl1xFy4KcjxHQ1JSypbTLwCUrSVN1AUoBQ66VOwiTQbN4bdl4/DtE/JHol8u8
Bd/tdjQha7f9u2AeUhQNO+pJ9B7fnrLbeeAaRnlzxWbFw+15LJusV1u5OKaiWZDqnJTTaaku
OI+lpPtG3zGtzkJg5YFYckv2VxbVjkj7S8SitLEjuKYCEhBLhK0e4ewGtOuqzg0lJoFo8JeS
rfe7c9jN6gPPS1PpF2tc1QajGP7X0OLSkL9pPE8QdzTWXZ8ooJf/ALXeeYmYPuNZEEzXoYkS
si+9eDZjpVxQ2tSk936hsnhT11nt8E9DPF/FnkJ3J8hmjMoVsvNqYU+/dETi668XEBYWtwHk
hghXvcX9P+HS7PwUKBhhWJeY5+Pviw5EiDabpIkNxGHZ5j/uTqCUvmIk+5wLP5/bXrptbOia
iBncfDPla526Pcbo/GecjwS5Dt8qan7wQ44NUNMn6Q18AafOunvnCJpD+K75N/7XY3OhZJMW
3eLq3arRZmnG0tNNMj9BSlp94V3WxRKtgmhOpxn4K7hr5HkHDPO+K3difGdanzX7olcmI1Kb
mcrg4lQS5MbPQqbKjzJ9o3221l38klka+dZfkmVZrK/lj1oYgOPu/ttstDgeClIRRySV+4lO
5R9VAfTWqZQYx5Kt4lZ8nOXCc7gspUIssp/cJanW2IyEKV7O8t4FvkpVeApXrps44k1HPBcr
Jaf6ko95vaYlzcjzi4ym6TpMyP2X3XUBTIadd5oUrtEce2BRNBQbaw7TwHAyh4H/AFCSvtZ7
S3BJs0mam2KkS2EvIlKWTL7CXCVOLdVyJNDXS7p8BHIykeOsz/7RTMjv0yVHtjEtcy22dLAW
lbzwCXpklwqHbSuvFJ3NfTfUrSywoZJfvfleH47xGUi5qmQ7ncBCs+OJjMOIWmEoKjcyByXy
cR9CutNzqlKTbX5D7zpd/LM7Eoxyy3W20WkT0ntRJAffemdtVdubpSloBXNKSKHT60mYtgzv
xQPIpyRSsDCv3ZUdYkLT2u0ItQV9/v8A6XbKqfV601XYo1O03D+pdF0v1vjxEvXqR9m9MuDi
oimmY6UudhpoqV9t2XE86gJP8zrMrwC3sjcfe/qaVLvKbc3JXKbl8rl959qSmX2xtHMjbmWu
P+VtxoNPar2PBxtU7+pJ/Epsm3sSV2gOSi7IUiP92lSlq+7MfugSAOfKvbHWtNSspBS9kdfH
/PS/HHcmxHE4muE205NLMYSlQNu2HFj/AKrslNOvp100upJma4rKyCNkduesKXTfEPp/bkRw
C4XjskJTvWvT4a1aIyFbZg1jI7f/AFDX29WCxXaD9lNVLVOs3YRGjsoeaoXX+ceqat8uaqkn
1prn2rGjU5goPllvNE5xOTmCWkX5CUJkdlLaG3UCvB1HaAB7gqqqvcfXXSrwsGW0TviWb5eT
b7hFwu2i629DrciQy7HZkMNSQClK09+iUulIp7TWlNZvHIqXkd2nHfJN+xbM7ndpLsC1OPol
3l2ZFedelzonPg232kckdsghxX0p25dNN7qcCmkpI97Fc2HhyJdbg/8AaY5GnF+z21yO4X5D
8jihx8OpTxS3Q/p8zRX5eord5bBzhlkvuT/1Bx7RbxcLS9a478uK8zLjQWmHZEtCgYwe7YJU
oqAPFSRU6zV1NNOfkbZJF85X3IbFaZdhFouKpKptqbixGYSFyEFPOU44ivubqCqqthvTSr1h
qDMZKT5WXl68zmJy2G1CvyUhMtDDSGW3AFKpIHb9q+6aq5/m01As3h1vysIM1WKLiItzjzaA
q7KYEVc/YsJjB8FJmdOFP46LWzqTTmAQ8s84wLnaIDbEwXFL9wcgRFsclT5C1qRPU+n/APe1
IUFJPP6Kbanar2CaZ2tty8mHw7dexaIDWGMKH3lwejIRKddee4hxhZoXHGnF0S4B7B06a3+L
s8ZMP+OHgq+SeVc0yK1ItVylILCuH362WkNPTS0Alpcx1ICnlNpA48vx3OqqhG6thXLAsxxy
ZbOH6l5MIXt2DC5qk25hBC0OSiBRtQTRVB09dc3ms8E5l/BeLDlPmmRlU/JMfxxQnXtpDdyZ
aglUd6RACayi257UPIcUlVdvcemh3x9ChxjRIW/yL5+uEye1GxtM9+3ON923m2BSIEpAUebT
Rp23VciomprWul2rjArUlUi+evIMA3AJZhIucxx9x6e7DSmU05IUS8hCvaQASaJWDTWsTMYM
S2RCvFfkKBZImQognge083GQ4lU9pDiqMSHI4/UbbWsUSr8NT9nZy1g31jRYHPJHmGTKiF63
odmWu4tx5aDAQn7i68S0yJ4AHdkJA/TrShFeuhWSmP8ASCrsOrlH84S8kh4jKecGQWdpye5c
UyeDzEaYA44qVPChxaQTXjWgPSu2l2xMbF5f0JyxteWJPlGdfb1Zot0ulvgJiPJnPts2xaJD
YaiqQ8vuNud76kpT9aiTtvrNnhLgeGZza/D/AJFuUWbIj2lTf2LzjC4khSGZDrrKSp5qO0s8
nVtpSSQn06V119nsTthAlC2aNZsv86Wrx7Bnx7dCdt8aMkRJD8dty5txGyW40ns8g52Weakt
ucfXfbfXJNJxBqxSvJeO+SmbkM3yy3tJfuDzRkpbKFJZe7aO2zKYSpRZLjaQeKvq/HXTsnhY
QWwNcs8x3/ILjYp/2Fvtlwx9SF22Xb44acQhndtklSljtNndKKU1JJKAfkdWTyPlLeXTPIED
HoLsn2tSC1CUYjUh0EpeSlKqokOcVKKq1O+ud4eB7OCYXd/L0xyFOt2Kriw7ZPORRGoNuWmM
h9SByWACeTa0+7iOta6f7FGOShos0HyjPmeFbquXjMaRBMkwn3VuFEd+XcHFO94R+BKnG1q5
K4uDiaaaWm8+CssYMrumFeQMGNpvdxhSrUXCh+33Ajipt5JJSlavyOezkEK9N9ZbmYCINBv2
U+aImR2yNGgxHLo5b/3dq22qIAEuT0lC5shkD/5u26/y126nSrfjLWP8jESiFhXXzanKor37
NJlZTYbeuI/34i3JS4b4Ke5KJ3dISvihdf576X7E195LLRUoVxzDx3fLpAcj/t9zXDXbZ0eS
ErKI0ttKvynZRRxIUDrV7dmrsuzNBhO+Xh4yYUiyW2bb40JTMGTIjNPXpq2rISVsJJ5/bhSt
nOO3X565K0vKKyfkiZ/kDzHIh/tzjT8SVbpcSPc7izFDM5yWg0gtz36clrSf8sKAr611qt0p
NLJYps/zw5nESzmLDgXKHFM6Tb4zbLVs7MwESJFwSP0z3KEOlX8NHb8dbBJji2v+d5OU3NKo
lt4x2YrDlunJjix9sHlATDqe2TyPJjiqta11O7woCHBH2FX9QDDt0ySKuUJK3JLcuDMe4PyX
Eo4yDHhlSQ44wlP5RVNNq6XZt5Wv0CHGSPuOOeT5Hjay2r9pYZsgfRLdjsOAzpC5ayiPMnNK
PcS3VXFCqAep1pXc28vn/sb6uUvBB3rw3lNjkwxdJMBqJLmotb1wakpkMw5S+jcvh7mj8djr
nLa0CUlVyHG7tjl9mWW5sBm5W9wofR+XoFJUg9FIWlQUk/DTDSlgmRySo7gUqaE+pJ1kgBRq
KDZJ6V2Pz1DjgQKJWCnoipSDpMC+4odBx5bgHroKRKQRU8qBXVI3qB8NBIUlKSgGlCTvsQPk
NIiiUqTXYKINFDb+OhiEjtoqFkUVQIPqR0NK6jIYSKjfYVSfkT67aDSAsjiORBRt/PpXSUAU
OXFKa/GgHw0SWxXFQIRXiTTkPTbeuooCUfcd6JJoOlNQthKJ4jj9SdgDXoT6nUVWKUqlCARX
3Ef8NBqQlA8SQVVrsD1BOgGKUVEg8htuQfT8QNUELB5VVsa+6nSvz/hoIR1API8Qo0/j/v1C
LWV9QSpKNgjqB/DrqLIQWniFU40rsfTUyQblQTRBKvRA2qNCwahgrRJCt07Ek0/D01oywgrj
+H4dR03OiCFck8K12+NR9PTrrMDggHEN9lhVVEr29xJ47+muqBM3vGoDsTwHcr3Mu91+0emL
gR7JEeQzE7hCf1JGxWpKvzAEV2GuvsnH0M2WcGSgLHE0AUkb1Hw+WuSFEji7U83qEi2SxAnq
eSI09b3YSwo/+6XvyBIrU66VmcGbJcmn5lia7VmkCHmufSeLEBE2PelNPylNuLV7URkBSin6
eXcOivacPJpXrpIlv+1uSWfOYFvxbN3mrrkMQzJ1ylcoshDClgoCklSnHnHF9G+uxJoBpTtB
mvVYM4ym9ZhjuZX+EL/NfnpcVBuVy7qw5Jbb9vFZJKuB9E121KxpNMvnivEMzvGNNxIGcu2W
Pcg79pZYKXZC+KTxWuV2zSKhXQciK6rWcQZJmV40zm04MIl0z6UxaFMOLFsgRZUtlTIJIbDz
W/v+BIG/w1O8vRQloh8jxTyynCYkW/5FLWiV9u3DxdLbqwlt1YQymZLSAygJ2JStZCdSeTSh
PA8u2I+QZluZvbvkaJL/ANLvtMvuoU41EgOKIbKg6hKW3C3UJVxBV6ak3OjD0SEux+Xbq3as
isfkBnI5P3JiW1MRPYaCqfrubgNqQhI/UJSdtuujs1wVR5ZLV5KOfWu5XbL7fPNzjuxLJejF
+7YDyfc61DZSWktOUQeTivqSKfLTLiIGDBc4izYWY3qLOnLuE1ma83ImqSEl1wLJUspHSp9N
SyaSwaT4yyHze7hv7PgFtBt7Upxx26hDXIuOJBU1yfPb9uxqkVG2qzRl7HGOz/6mzAn2+1tT
nG47rqJj0hDKn23l1LqWXX/fXfl7K7nbQ7IlXBwwy8+V4vjTIHrfMZttksrpjymHYiXZUmTI
co8FPqBopJc961mo6a23ENoK/sTnkaR/UVZoC2ZUlU62x2mZEu726OhriQeSWe4P1FIbUBWg
ofXWe6nQsjLvnf8AUbFgxZ823PW2G++08HmITba3nvyJdCApf6h6hQ36aHdToGnI4F1/qOn3
luMbWiyqfjyi3WM1DjpbUgfcPKWCr9bjQJJ6eg66pUaFmXZzkWcXdq0IycOBLEQftbjzPadf
jE8UurURzdCuPtWrW0yeCyeHZvlAyJsDD2I8iAoJXPTcQgwGXCQGnVlz2h7kPYBufhqbXIxg
tGKyf6iyu921qMZEpp8vPy7slolmYoFVYKnKAvKR9IQCkCmstozxLG9imf1CScRnxYMNSmYC
3203KYlJuqCTSUzCWr9RVfzqA/A6XZNLGQZxlXLz7c/Ga3VW8ptSWVRn7qG0Juz8AfU2afq/
bbbqoCfw0dls1gpj2aeTXLNjSkJfjWixPBGNux43bSqQBwBQoJo+5QlPr1Pz01aJUbNQmZJ/
UBCyPHp8vFI6DJWpyJaozSUsvTXmuC5E4tqqh8N1V7lDiK6zKyEZK35NkZ/YbtZbzcsQs9je
YlF+I7bGUSES5ajTjJUlS1OKTSoQfXfWqpMZga4Xl3l2dm16hIsov1yvw7l8sdyZCI5DaQGl
uhXAMpb2CakV+etfjBhSTliy7zxNvt6tD+Oov05paZT8CewgRoEhn/IdYqUJHEI/TRyPLr89
Z/GDfyZ//r6928ZRHu9qZl5DkBU3cLrPbBlxjQpWGUKASjY7dOO3wGtJJsrNRCLdbM28tSfH
SpcLH0yotrjqtbOXJYCpbUFYCXGGf8SRxALiQeP46y4TCyhFAumWXudhVtxwwGY9ktsguNym
WChb75Cj+u/vzWkLOwPwr01JoLI5+PszmYblEfI4kVuZLi9xLcZ0qSlReQWtyjevuqAOuloq
lwsF6zLxJkyZl7tyXZN7hOl21qeIdbZkL5qqGeSoy+Qr8dZmfoUxgk4/np+PmP70/jhLQiJh
xYCZMtMpCUrLpcMhXJ1ZUdiCmnGnzquIF4IxnzXJOa3y/T8djyol8gqtk+yNFxr/AKev5nUA
uFwn/MWRv0220tKNk9HXGPOFustjtUKZi7NyuONvSX8amd91lEQylFRS42K9zjy47noPjvod
ZyGfqQuZZZlebxG8jZt5iW/G4DNrnzWHOIUuS4o1WKpVR5av8tNQB104TIm7N5hxG1YbYbGz
i63ZllnM3RucqYoJXMQoF50ICaDuJqlKCeI/hrHWeTTHOIeao0a9zZ9ygvxoV2yFu+z5cFRU
plsIUhDG49wJI5Go5J5ADfVCkExr598gYJlz9rex5DrsuElbUiYW1xWOySVIZbYUpXu5nkVU
H8fTdfxCOSIwnJo+KY7cLPllgen43mrDD7fad+2dcRDdJbcaUK+znWvQ/wANDsm8AnwS9p8y
YIzbpFnmYmtNgZubN5tESLOUlxt+OkNpL7rnIucuHJRG1dqaHVLkU8/QVK8/tzMmxa9TLSOW
PzbjPdZZdH667isqSlsqHs7aaCp+rrp6qNhzJT755EfuWFNY0ttaHWrxKurj5dJQUyQUoZDZ
3/TJOtJQ5J4yXe3Z1h8HBMSjqsN3gx7XdUyWshTIRx+8SUKnKZSR7zwqEoOyajfbWOsznIq2
UO/6g/InjnLbVbG8bfVKnRZTr6+y0uNFbZfSS6XULpzfWvj7k/OulKAddMovh7NY2K5K7MU1
OmNymTEDFscCXyta0qSAghSHhtTgoeteuq0bNJGlv+T/AB/e4uTW64Y3cWE3GVAbZsduHbnK
EFKi5JWGwGkKSqgU3QCnzroS+QWco7Wn+p6yNXG53GfZJTEh+cmYymC4yrusoaQwhmUp5JUO
CWuVW6VOtf1p84JMqt88rYFkFvZkXezXQ322omM237SZ9vEU3LdW8gvKQQ7VPc93DrT4HQq/
Ikjlf9QFnveAP2Jhm4NXeZbmrc8yFRhBR20pSt1Kko+5PLj9BVTWq0j7E0ZjhF4TjOW2bILj
AW9Divh8MkFvvpSCD21qpUJJr8K7aH+WESbXwannvnK0XCdjD1shT+WO3VN1c/cksxVvoKQo
NhMdKEpHok03Hx1mPkmoZlPkO9WO95hdL1ZEykW64vKkhM8oLqXXve8PZ7QgOKIQK149dbwl
sxqeS4+OPI2FWzEW8fylu5NMw7wi+26TaVN8nXktpa7D4cIBQePUf3aOjZrtgK9+Zbhc8dy6
C09MhOZDd0XBllt5fZYiK5d6MpQKd3faVACit9Kp1ZlPGCMufkyXM8cWHE0SJan7XNelyubi
uypkEGMykAk/pEVSCKJ2po1Pyak0q6+d8QdTabxBtl0uc63Toki5TpSGmu022FDsOvMURIcP
IhsuIHStdC9fyTZDZz51tM+74vJsqp02NZbmLrJMpuPBDhFB2EojDc8a1Wo71pQ6esJ+SW5M
18gXO03/ADmfPxwzpka6P99puYnlJL76itbSEt8qoQpXFAG9Nb1XJmclp8feQsUtmON49lMO
Y6xaLujIbU5AKAtyawgNiNIDgoG1U+pO41jrL+pvsWa3eebIIaodxizUJvTtzlXydHeIlQHZ
6lKbbs6yf0kEULnLdRqaaIU4J+CKm+d0Xbxld8Yutojic/BhW22So6VU7MRzkpT3JZotFOTf
AU59daqotIPRkSVqBSvbqDxIqKj46LEja0+VsLR5Fl5VJblhjKrMqDkcZpv9aFLUhttamO5s
60pLKePy/lo6tpfBLDfguR/qO8ed59pqPcozMpTk1c9TLMhbch51CnWExlHtrQUN0S4rpq6f
IrIzY8/4K/fb/LuP7kLTPkMS40DtNOoX2YzbROykOxnubey0LoRT4aXSYSYuEeer5NZuF4nT
GkuNNzJT77aH3O+6lLyypIddP+YoA7rPU761dKcHNM1+X5Txp+2Tr6m2XNm8X2LCsF+ktkCD
EZjceTsN/jVUhxttJQ2ojiRU6KridG3dfY4eRfLFky22RLfKiz7RHYuzcqNKYB5TLbw7Kpj/
AC4h2ajhVKgadRXVVKGk+DVR9f8Ayh41lZTcHojlzk2fIrILFe3FNttyoyGUoSy+wCVB3kEk
rSqny+GlJpLyjM7Ocvy/4/vCXLFdolxj4xBNtVZn45acmOftCShpEhtQ7dJCVH6foOh+t6n7
l2ljlHnrF7jPi5Fe7bKZvdgnTbhYocRaVRnvv0BstSXFgKQWuKTySKHeml1XnBTBHt+Z8aYh
G8ohSTmDtmaxpyIogW9MZo1EsOf5nNSQB2/jp39nJl2SwM/I3kXF7pj92bs0OezOzOdGul3V
M4BlgxAEdiMpCQXk9yp5+nTrrNHzzEGm9FJ8hwcMt+UORsQmKn2UMsluSpZc/WUir4S4Uo5p
SroeOmIqp2TaLr4Q8oYpiESdGvqJRK5bNwjOxEJc5rYaWz2lpWRQEO8uXy1zUyXBMeSfPLVw
tmNRMLmT7euwulRdc4td9DTKGmFqbQSlVaKJQoUFdbSST8spyV22ZVgw8S3ewTZ1yRkN2mi5
dtphC4qZLJPbSF8hxQ4aFw9Rpo/ybfJXcL8RnmGb2vJMXxOJJlTm7laGkwLtDWruxlsJUpRl
tLWrd8hVAlQ6f2lI6tDKnJebp5e8fwbsbtYpN1elpxdzHWXnmUNKafbSBFkdwK5cionmfy7E
afXlJPhyHbfyWLxl5dwiU3Fi3y6/ZvQbRbYrzdyWptt+ZBedc76ZSea1FHMUSr6q79NZsv3b
GzWzDM5bTdczyObaH5F9h/cvSjdQyrmtlZ5LcUhIoltClEBVAOIB21v3NY+hmcZ4L9E8t48n
HWbw5bJislZsqMQUtshNv+z3P3KXacg/2yr9L479NZq0oT0nI99tLZyzvy7j+W2dVjQ1PjRI
s6EqLNQsKlTYbSA245cFE++SgJCmjvv+GpNVl+Ubkk8g8p+OncjdcjC6ybRebEMbvvfbbZks
sISOy/GqVdxz6itK/wCHw00tFV5qYdllPlgneXMNlOix361XKFj1sVbJFjWjtia4bWjix9y2
sBHCQhVao+n56p8Pe/uPbM8ht/1AWOdKj3+62mS3k1lduDtgixnUmG6Lp7ViUtQ5pLI6FA92
ptan8f8AsSkajy/hzkZ6dJt0x2+XqDDsmQMJcSiI3BjEBx6MsDu9x1tIAQfpPrqV1zxoVbg4
5/nfj7JJNntsO7XSFjMea2qXalxWWYrEZINXWg0Vuvv+hLhUTUnWE2quHsE/yyUfyfmMfLs4
uV6itmPFeLbMFpZ93ZjthptS/gtaU1I9NdPY0q1quA7eSpqWRwJHKmwqdq/w1xBiTy7gV6Do
D8dUkgKNCRQg1r8lahFOcSEkmiR6gep9dQNBNlVSf41G2+hihTgVXbqNxTr89BMHbI9ldtqA
DYV/u1IYD6KNDUkUP+3pqgyACnpyPTavX46jaDUQE02KzSpptqIC6kGlSrcHj6H0roKwP/zd
9iDX/augyJIFSCaKrUDrpI6LC6ktn9IDpT+f8tRoQmtCFGgIqkA9B/HrqZSLogpNSQlVCip2
6dBTQCEhRJrtsN9uu/x9KaTUnSlAKGqa8gqtKH+PprIyGEgJ2qTX2V3Nfw1AxPuCga/iNutN
CICCOVUgEVoKahQqtNkqIJO9DUcR8NUCGUIJHEU3oQPj+OlBIXtogDquoofSn46BQmjHd4cV
9Pqpv/5ajM5+CFeU4piPUAJ3oQa139ddEMm+4bMuE3wPdrUvHhJssWW5L/epM9uDGTI4Jojg
auuFHXin6iQNdPa5jegltyZB7iQQTyr1Hrt8NYgWPLTKt8a4x3Z8VU2C2tK5MVLhZLiEkfph
xIJRyPqBrdMMpjBreb5Bk+ZZPYZDXj2QxcIjbUpuIovPuyoTCwUJU2tKEttE7FRTU11h9ZkI
aDl5RlFr8qRs2zPD5wuEpfCxWlC1NcXEI4J4ni4p7gkn2hI3Nemmt9llLGSBveZY6M9vN3v+
Ec3nEqKLFJku0RNWaqfkkgE//owAB+Osrqyhk/4qzd632dYsWDz7pc4LypUmfa3HWmHlrSot
oncQrmhoKohutABXrvrVrIy5ZYrH5IzE29d2gYTfJ94UmQ2txD0pVoU66TyUYpCkhKOiUp2H
x9dTtWYkYA951uMuyyZqcWur0X7ZFuvMhTq12llpv2vloBPBLq6lPJStvU7abKvaE8GXJWPJ
GT2zIsXjiDiF+t0JKG2bM06pTVnj0NApqOyjg4tQrQ+pNa6ymlyTkkWPKGQY49ijjWHTrfZc
ehmC8zIbdQHnZnEPPNnjQKUU1QDuSTXWlerkoZK4zl6n8qx63M47erZj+MiRPtduERyXcZr0
jkFF1XsQ00nukg/wJ+F3UMdGK59PkzMzvkp6I5b3X5ry1wn6F1rkqvFwJ9U+uqEU4g07x/5O
8eWPxfEs9/dmvXFm7m4MwYA4LqyUuNlxS+LfbUpNCmu/w0taM1ke5D52wbKrVHVfo15t0+BJ
fkwWbPJbbDvc2QlyQripPt2PFPzGj6MeYI7B81wqF46yKySYl5feuqzLmtQ2kuxoaEOVZT9y
6r14jm44PcdLc86NOr4LVf8AzvhzL8i5ot96Fzu8FmGbfL/6ZgRQskyGiokpUsE0KRvtrKS8
gkSCv6l/H8U8GGJ8oLktOdv7dtkNNIoVVWVrW6pJ3926j8NKrmJMyVq2+Z8CsOSSbsxcr7fk
yBMeEadQRWnpNODLLJJUORFFLUaJTsBvqaxlksmZ+V8qxbKL2xfbWici6y2gbzHlr5sMuJAC
URlk8igegoAB00oCa8b53iELDrpi2TLlQ4cuXHuTM+IgOrU7HWhYYLZofd2/q1pxEpm7TYvs
r+oLCchMebfY8y1ybNdW7raIkcJfMrtIKEMrUaBs03Ua8fnrPXGGUZOafP8Ah8l+05ROYlx8
msouCYtiaAWxIE8/UqQaBCUeu1fkdDhYMob3nznhhjvX+OmWrKJtjVYv2JSQmO0Son7lUgGh
RvsAOX4aoyXn5KpC8o43AjeP0pXcrk/iryXpzT/baioSlsoDcRpI9xHL2uL3oOtTtqP3H6F1
V5N8XuzJNoYyC4NWq+XV293m7tJcjKjIW3/8FCkcnT3D7HFpoAkkA131hqAWpK5lebYu3k2M
SLRlqJFhssnuswLVbDHYtraR9baXyv7l9daEqHz1tJ6Nrcs72jyJg1ykZ7bLldZ9ohZWph2L
kM1PffQItKB0NcSnnSiEp2A20R44M8FkuHmrx5k8R+NLuMvHWrZPhXGLPWyXFzmoISO0EtK5
JW7Q0So9P46FVkssy/Ip+GZndMxy+63V21SHl8rBZENh2TJV2whPNYqhCfaOW+1eu2609IG4
RoMPy1gcWzWS+tXN5mdaLA7YhiSG1JLz60oSH+YPaDdU/Ud/46FVk3szfJcqtUrxJjOMt3R6
TOgPqekW5DKWIsdJCwCp2nN9yq9jX1NdbShsU5gq2HXr9lye23RCWXTGkIVwkth1tIKglSyg
7EpBKk/A76nWUOmb3J8vYs75VyzIBOjmDDsbkTG5vYSS9LIbJCBxKnFKcBAKvyj4ax10YXIn
xP5jhXa4XC5ZfIt0TJ40KPDttyk0jB9sLKpCnH0IWEuKqPalNKDb11NLg1JcrD5Q8cquV/uM
S42iNdnZTQkSXVrhtvsNtCpQ8tpanvfyBokcuujox0jErnmniFd8vUm4Ycb8/LnPvNXBiW9B
jqaJo2G2AAUJoK7ipO/rTWuvyZlwSOJR7Mvwdklvm3yzwXrnNauUK3Ou8pLbbDgK2nE8e4Vu
Jb4tD/dXTarbwsF4L3Oz7Dr5fclxrlaZ+NsotTeN2/g0yiRKWpAf4P0CiE/SVdEp+WsdME22
xX9UkSRCwaKxGejs29dyT9xGLaGH3QhKvt2mUtoQFtMciSSa9NVCZkvgJ/HmspuL1xehRriL
bI/0+9cin7ZFxNA0pZWCgcRXr/v1u9dEsGv3z9hz1KsH+7hS8ml2GP371aGDJhRHYssuvJTw
97aXjQe35fEa5urWRdZOmbZFiGHyGodo/Z0Xdd3tkCWl5hlxxmAmMhEhwpP+XT6ST0qfXQqu
JKZZJ4hJ8RxWrq01Isa7XMulwVPC1xmu22V0aQEupUt1Ck7oLZSkDT1ZI4W6/eJ2rfbrYtyx
m3IYiRHUuNskpZfjOuyklZ9wPsRyPodup0qjJyyipsVgX498dov86z/tVru7kq5RfuWy6YE1
2rYWhBKlr4qT3fUDr0OrOYKcolpybc/n1kRlTmIptTdzlu2duAGy+qOlpX2zcngQylBVwoXD
9fpTT1xhBOcis7iY7O8g+PZOOm1wL20+7JvKmJMVAZYjuNqSHnWKMlYClhIpVXTWXWFoFv4G
GJW52J/VDdroLhBTbi5JnvykSmuJjS0KQ03WtCsuBJUgbjrrVtG6tJM5eOfGtnv6Lda1oiKu
uKZEuTkDyGVSmJcV6im46JTSVNLAHHkhZ9u9fWuW2seQXHwYZlyY/wDqq+CIlKYwny+wlqhS
Ed9fEJ/LQDpTbXWIRzplSejMvs3jeP4YnMxodrSuNamH7Zc2DFDz0khPNTVFKllYUo8ufx6a
5VUnS2SjeTsTvbnjrxyy65DdkwWF255AlsKWl2W8n7RGyvp7aQFK6J9dao4TJrJer1jcN7yN
aJd7RbLlObxRLNmiS5DS48i+QkbtvpSoAp/V25UB9Omsw+sBXEnT9gxlUuZJgWDGpHkdq1RF
v46pbX7amU4+UyCGy52uSWOJ2VXp8dHXyWg/GuMYO/OyKTeLLYVTXLk2xNhRnGJMWEyWEKWp
tUlbaUtrdKq9oEhVU9BpcyKtwirzJVpa8a+RMex202Z1NmvD/bLpQp/9vSXCmYhbix3XGCeD
JT6bUPrpVaYN4yWS9Y14pjeLI62rTD+yVCiKav6XYyXVSXlI7hD3dVJWtJUrk2W6emhVbY2b
Iry/boNm8Z5FCi2m02a3u3WALOq2uIW5Ngt8i0/ISlazy5Voo7n11V2Yspj6mTeH4uL3HLV2
fIm2vtrvDkQYUp76Y051NI73yUFjin4V10vwza5g0fDccskHzpjlksUWNMGKQm/9SXAuJDbk
ziovyRUpqtpx1CU8a9PlrnZQl8ktz8DPsYvgeKmRkOL229ZA/kkmFPZluJecRCLYkHtltSk8
ilXsUfpUanfbSqty2YpwhvBudouHgLKYtmsVvL0G6KdcbcUkz2YDnJTUtS1qSXHmeYZSpG1P
TrpSXZi1hTwTd28X4uyi+X4R7fGxaZZLciyS++3w+8cWymU+ltCi6kpBPNVNq6KqYRWbUwSX
m7x5gzWGRzYLJFtt9FziQrSIxZaVJbfUUBRIec7ja+qVucSOu2qlJn6G1llA/qSs90g5baJs
9ptH3VmhsrdQto9yRFSUyK9s9UqWBWlD6a6U/iZe5OnhTHYdzx3I5kOyQMnyqMuK3brPc3UN
tCM6o99zgpbYVxAG9dtc3vOhWi84fg+NSVPLbxKyzku3p+LljbkxD7dnhIbQU/auKW17PctV
UjY7emhryX1GRs/i5DULGotltzzNwx+5XIZAtwqlJVGW6InbVyAC6ICiT9Xw1v8Arz9wTlFc
8j5JYX/DWBWG128W9M5tya603KK0oeZd7TncaI97jq1lQUv6BUDbWlRLt9Stkuj/AI7U/kfi
6BkNkQq3tWh+FLgtOtuMtzGwZH6yEuclIp73KHfp8tc030+5t7+x2zzx1a7mu445asdt0e4I
sSZGNOxeEYS5y5STNcZLi6hDA4jg4dgo+h1qjSMPP1wRuWWfxxgtjXNVjtqvc1E23W92M45z
S2Db0uy3U8VblSgqh6cjX5alTEuTo3DxolfHuBeKrzgbs42SMuPdFXBxEp1xC5EFCSr7eO48
t1K0OpFOPFCgRudZeWZs8GfZdKtlx/p+xKRBs0Bv7WW9GuU6Of8AqIz3IBBUCrlymBJU5UEf
Cm2tqqTslwFtIua/HkcnxRCyeCwza4sGdFkw1yGwwqc6DJjNLKVn3SFI5K3oem2s0f4NfIt5
xs6XLF8Gs1lk37I8RtcPKIlllzJuKtvgxw6zKbREc4oWuncSpVaH3DbSvXOFqSbMg8v2+wws
rirskVmFEuNpg3ByLGVVht6Q1ycSzUqKU1FeJOtr1/in9TDmWjTfE+EYdePFa5kqxW1+fSUu
Vdp7o7dEVojuIcS7CUkUoS2oH6vXXFr8jekiZu/jrDGMVUJ+P262WpNihymskMkJlru7gb/T
UkK5JSpJJV7PdvpVJYtqfuT+QeNvEKHLKZdog25Cbm1HbWlaGkzGlMuFHJaXXS4044lIC1hJ
rorVtBOSvY3hePu5k0jMMMtNkuYt0l+FZIMtp5Et1LiEoK4hcDYWElXAlwc9/wDDrV6qManZ
JmTecrZjlsz96JYoCLZH+2juSYbakFKJC0qKwEtrdSgkcao5Gh/HWoSqvJjksWC5dYrR4Ry0
Nwe3eHpDEF6W1KWxIeRKCuCuKQfayK8kD2rGytXprNzo6zWCQx/FFOeB7Uua1HFqXkbFxuLr
byEH9q4dhbzqQrn7FK40A5JG9NFXLt8oU4aRP+bsNw9u2WeLjNkiWy8ybq1FtMph2KyiSypB
VyUUvOFaCoJIccCaHrSuj11UNvx+5iX2RJZHi8WX5NySdNtdsvF4kWeG7jcG4PMiNKfaShmY
pYC0J5or+am241QnWv7mk1n6k0qy4ZestvbT9ps8+Q1Ft8YSn3WpbEZpEUfp9pTrK0thVQHm
yo/EbabT0qvqDWzyjfmIjN7uMWL2hGYlPoYVHdL7HFLigkMvKAU4in0qIqRrftrDOaI1ZNRQ
mtf5nXIoQoOE9fcKbep66jYmlep3O9OtfmNEGGjoFEpI40Wr0Hrqg1ISdiD1Sn0Jr/EnUyQZ
SVqClUSAaOCmwr6j8dAg4gA+7puKnbSTEpWQeXKgqaL+J+JGqAQs8ia15UG4FBX4VJ66hCHE
FIWSD8f7joJhHl29t/lQbU+OohR5jpVLZ9oB3rTroIDhQBSoVSh9fU/7V1CmAkFFRUior6UO
qRYYKFbV+noT6V6ddUGQe+lTQmvU7fx0M0kKJUKmtDUlXqSfjqJiQqhH5gnrX4E6DMhhJ91N
gfpJHTVJpCyafWocT1+dPXUTYFVHUDfp09dJoUjlsFrHWo9flrJliVFRJqnc9PUAaAYFFWyQ
CU1/BXyppQrAohtxKUKSeY6pGw20wa7JhBA3SATUUUT0rrIMMkgctufzFTt8PloYpcg5N/E8
+vH50rqKSBceQY0dsLBWCoqAFAmp+OuqMyb9jFwx2V/TxcbDJvMGFc03JUtuHKcIeU2lCKdp
tIUtSlHZNBrr704T+C5Mg4igUkqSmpPI1J/HXM1Ekti7spvIoD0aY3bpSX0FifII7TCwa91d
UrFE9fpOt0eSUo1zzTkuO3jKLDNs2YhTi47VuuTtvU8lbbYXydfcdTwTx3qlB9d9NJT+DETg
mWfJdqm+Zba7ar9Fh4hj8REQTpiuKX2di8lpTqVqLji6binJKetNWYYTGCk5hZMRyXyhfJKc
wt8S0SCue7cyFuNpClCjDWye69TeiTT8TtrnDOnaMF08M5NY7Zj0RyblcFEG2S3aWyW45CWw
xVR76Wmv/mPvVBAcqlA2G+utvW4Tg5zXXJa3PIuJXq0OybzeoDeNBuSyG0TZUe78DUISYjfB
CnF/yGsOrRQC+X3xhNxuAw5PtrNiZixk2aI1Kd7qZyVAoS7b00aDSFbrUsE9a7aujmCkhfI+
VZDYMTm/teY264vPKadlXNU7u3B16oPZt0FlJYiISeiqk03Jrvq6vwOGHac2uEiDh2Mxczjm
7S23bpkN8uCkSTEqkdthIePbS8nmQgKOxTypptXIVaZLMXTKZXkLFrZFuUWTZLa69IlGPcvv
7g4whBSX7i9RptDRUocWk7cvw2EsaI88eUJEWX5GyR6M8iQ07cH1IfaUFtqTz+pKhsR+GqGL
Ni8IY/Z1eL1XtFvtDt0VdksSp15ShaBEQUFxLanPalaUFRT8+uq9QTLFMt+CNwnbvgNsxeei
ROeF5kXZxCGW2UCg7al1LYqeXsSQfTWekcFJWcCshl+Ic1YJtrLd3l922Sw6mO3IQy4CSEuq
qllHD9ILSCR8a61iVgpL7ncPD7ibhMyBmyKtTtuZZg3hbqfvfveXEM80kqbbTyr7E/EnQqS4
KRdy8b+IF2mPb7la7bb2kPR22JTbjTC3kqI+l0OreWlf01XQnroVGPciLFZbRac1pdscsGNw
ojU5UGZHfaXJdioTRDymarSkJTuVrPU0A66eoT8mHeZcatdmuNslWMW9OMS44Nlkwnu69IH1
LellZLhdKldaBI+kdDpiMQBa/AtrZVjd4nWRmFKzxqQwIDcwtqU1C5o+5W2h4hAqnlVfX8NE
fobdnEGqxoGCrusq64ExZ5RXdm28tkKLS0tW5LZEkp7p4pRzqSUbK+err5MzwN7PavHjDDc2
wRbW54/nLua8tnOcFcANoqOTig42n/ClI9fnXR1JN6GE2y4JbMVlFmFbD46fsqpKbmvg487e
FLJbT3eXeUunRHpSmnrn5JmbQcNwpVl8doucGJAavclCr1OdnEzpLJQSVFCaIjscutTyGyfj
ph5FYZqFyxeC8/Gtb+LWZV4jXhxrDbe5xjtOWppsK+5fS0VKdaa+tVR7lAJ6q1mECeSp+TbW
m433ELDPxd1RcnKYk3ZDDFqXcdjViM0lai20AQorcNQPnpqlBOEzljOG2Nm/Z1Ns1jgTshx8
x28cxwvffx0haU99dFqT33EpO56IVt10tYGXGC6S8HxCLdZt0xPHrder4q4w4d2ty+MhmEw+
AZi22eXBtQ5bn0pTbfWeMknGEYTmGBuzsty97CoSZeL2F1Tr0lpxJjstoQFLQhxShz4kL2Fe
mtdoaMRKya1ZcCxCHisCO9Y4kvGLlYXLpesrfI77M9ISptpL4NGwkqoEAenqa6zyabx9DI8j
xC1wPEeOZCi3Ps3K6vqTJuciQji6kBZCI8RBJ4e0e9QHT/mGtp/kHgZeHcNteYeQbfYbmpxE
GQHnHwyeC1JZaU5xSr0qU7/LWvZo0jR8N8d+GM2viY1obucBFpalSLzCcdLoW2y4G2il4BRC
l15ENg7bddc7SsSCtOYJ9Hgfxa7lZjs/uQgJt6JKIau+lpTrrymgTJU33EJ9uwKBv67HWcpb
FPnk7wf6dMAanXWReP3CPCalNw4EEv8AJaOTSXFOFyOh1S+RV7AoCgG+50y/ItyVpnx741iY
X5AUzZblebpYJv2bLyzxfTRYLamkoBUlKQavFSakA9NL3kxOCBlY149HgVjIIFnnyr47MUw7
dOVQ3Ibaq4t3hySmID9KetaVOlfyyNnKhDyz+PbbefGeCIYhSGH7xkCo96uiWAXO04FNhSXK
EhmnEI5e3lqtEsYyicmeHfH17mmLDvF2jqtl+GPz5NzcQ+HVcC5xjpSPYSoBCSRudyNE2RVu
pkpPnPx/ieDGz260Qp7FxnBcibInOh9CW/pQwhSEpaUutVnj0GtUlmG8kd4uwmZlFpyVVlny
GMptzDL1pgsLU2H21OBL/cWnf2gjiK9dNrQzbtJb8S8c4az5eveOnvX6ParXI5vSGw80q5Ja
/X72wqEKUeHry/CusNvEg9Mi7P438Xw8Rxm45nd7nbLjkzMstutNN/bRlRl8P1wpJd+A4jck
+mlzmNE2ohCbjhnjb/tJabvbEXR3ILncFQ03BaUhkvIUlDoeTVXCOmpU3T3FXXQrNNlrCJFj
wFEkX/L4Dc2almwSrfCtz5YClPqnKQl5RFEpV2ws0CT+OhvwXEnHyr4WxfDLLbb3ZbhIuUdy
4ot8+G6Wl8ilJKwlxgew+0pI3pX5a1WzeweWkVLzhjFmxjyFPtFmhmDaWmoq2Y/vUAXWErXR
a6lXuJ9dNVKJuGxWCYLitxxOflOUzpsa1QZbNvZatjCHn1Pvp58yF+0ICfh66LTMCogveJ+N
bPEjR47eWXy3Rsluj9rs7UGOuL3OykAuzmXFIUnlXpTp+Oju5+hVhDFjwZinGJAev8oZRPtk
m6RYrcZBhhuIpYIW4SFUWW9qb6z2s8sFZPIyzPEMBsPh3Hrja5LFxv8AfHDJFycYeTIWhtYS
62yokIabZrwXyBKzun5bo23kn4DV4zx67L8YWy3Pt29eTw5K7jdnkFLjjqXQSngVcSR/ltDb
ltXT3w/Mmnl40Pct8HY3aI8tuJdJwucCyy7/AC2pjDSCG2Foajx1JQohClqUsqVU7AbaFZ+T
LY3Y8I49DxRrKckyGTAgKtsGfIRGih10OXBxxtDKE1qf8se6nro7O2ENrZH2H/094zf37gtn
K+5bUTfsrRNjtNhEqrSXSqjy0BRQV8VIaqag6newKNkLdfGeCWjxzerrMu0tzI7Xd3LPxaYr
GMppCymOArctuBAWXTunpTSnZ22Ttj6kfBwm2XDxlj86KlJyG75MbU5KXySltKmf02gK0Umq
g4pVOVfb6anZpv4BJ4RZmP6fseuVzVbLTkz8hcC6GzXtT0MNBl5LLjxUx7vekFmm/wAa6OzN
T+hRs9wKyWKy2fIMdvbl3s95dkxm3Xo6ojoeiqCVqCSTVs19td9bp4Zl7wh74j8awM2XcxMv
KreYCUdiIw2lyTIUv0ZS4ptJpTpXlX00XbWjVcIscTwLjocbZueVPRH514dslqQm3ucnpCAl
SFOpWUqa+o86/iCRotZhCO1r/px+7xV27qyAMXhLMt9q3KYCW1NxFrSSVKWl0pV2qhaUFPTR
LkOqIy5+IsfizY+Nx8pLuauGEly1uw3UxyZ/BXskI5D2JeCjyArSnXWlaxqq8A8teGIeEWJq
6xMiRdayhAlRFoaS6lZSTzb7bjns9hCkqoRtrNZZMyN91axyUqu9ATvt/HXRTowc2nC2QRX4
EH4Hr/DTkVgWmQ+gKSkkJNKJ6VAPqNUiEXHK0rUCqj6b/HQEhl0qPJR95FEn5/H+WliO4N8u
dunMTYEx1iXHIVHkNKKFoUOhQoGo1LQNjq+ZRfb3NFwu0+RPmABoSJLinFhIGyd/pG+hrjgS
L76lVC1VFTtrTbLYO+vtlIWeJNNumhEBUhR9hNajr0/nrSx9ykISHSkJ5E8dqKJ2Px1PxwEB
h9wrKisqKjU167dNzos2QlxZptUb8iRuf4aypMhofKElAPJNKFJ9fx1Ch3cLtc5shtybJckq
S22yHHVFRDbQ4toqTWiE7Aemnu4gXk4CQrmA6d0jiPw9N/lqScF8iUPOK5e4j1qetR01vs0j
EhlQUTQqNRUk9a652cvJtMSFOJ6CopSh+H46ETFpkuJAQCUitSPWp+B06LtDFJeKQlKzUbAU
ArQaC7eQluqV7V+prQ71PpXU34LmQ+8TUAAAVAAApvuf+Orsa7hcq+5VVEn+RGw1NyAnkQDQ
g1NPlrJBn3t9Unp0FDt8aaBDDleQ6161+n8d+mpA2EhRSqtaKOygP92tGRSDQewin+31aDQQ
WQtS6EfGm4r8KaCk6bAA8aE1PupU10GmIDRJNFHj1p+GtSZgVXryNT+VVK7/ABr8tDGULWBx
ApzAFCPWh9RoFiaJB4gVrsj0NAfX56SCcJ2C00Cf7/jqJoCVUBKiChQ3KfiNDAUV1SoDov5+
ugpCTUigG+4qelfUU1EmKqUpIAJoeW/XUzQByKVUNQfSm4+X46gYfJJXxoAOhTU/3+usshVQ
OPLauw9em+/y1CCqRUbkKPtJ36eg1EgmyniVGqVdKD0/nqGAchyKSmitqE/HpvTUZDGw+C6n
3A7beg+GgRYBJUK8ab8U7nQaSjZz5KDiSNx6p6UOmTIomhAAqQfWtK/Omg0hSlK4lNeu6k/L
UIn2ceVDWnx2/HVBiCFcCTBZoK+812pX4a6Cb9gg+w/p9yW7QGGheV3AQ13HsockFhbaApCF
EKKQAo0prt7Viv0JOfoY8oJVuVbJrUUNB/A+uuBvsuBxA+0VOZXNQ8qGFpEpqMQl9bdaqS2o
ghKiNgTrfrhvJzb8bNt8oWXG8Nv+NybPh0aVAl25FLbMS4pky31+0vOVHN0bD3H+GmlU31Gi
b+pL5Rj3j0+TMSxCfjzEd95pC7zHtSSw2uXJoUtuGpUWW0pUo8TyNeupepOYegrOzMPJOIzG
/J10seO2l8sh5SLfBiMrJ7aAKlugJUn56wmkMt5Lr4Zw7EJkOM1f8cS5LfmKhv3K6ocU2t0V
pFgNt7d1IFXFubA7b9Bu6WHJnPP2Lt/268XitjsVntknJAZSVxboZTrheaJPAOoo3RAG+9NY
deQlkTdPCGNxceNqtn2UnI1Qk3K6T5IdXMRHUsF77VtNI7SeIKUBVf79Iz40Q+U2PxNYMWWb
OSq+TfPlweeXZ8UlhzH5Fkatz7BbSEJeWD31Mcgk80po2FHYb0HrrSolszZ4gstqynxtCxzx
umbljMqViUnvSGW47pQEvNqBSVcfb9uCADuVEaw/XaYOn0JjAPLFrya4wI93uLT1zVfLjJh/
fJS0Y1vLKkxQ2qiUJdPIBArX6q+msujDRRP6qrhBfySytsXwXARoimzbUrDn2m4/VccSpSVu
SP50T8Ka6etNGGsjbwnlGP2rD71bHMnTh9/kS2JDV4VHEgritIHJof8A1VqPnXVetm5NN4L5
jHlfC0x0zFZm5bXY13fn3hT0JKHrwyUhLaeDSeLaTxATTegFdZ/rsIzh+ccdbu9nhJuH22Ji
yz3LpASyOP30ha1NxlAJqXEpVSg9tTo/raBIoXkLyoq/+LcSx9pxn7xtDyr9GZYS0lktOhMR
pKgPZyQnkQn6vX4akoZTnJodsn4PIu3i9tOU2+8XDHgYciOppSGVBxuodK1US2WOAS2FbqXT
Weltmn/KSD/q0lMLk41Gbu4lJZbfrblKS64lSlJ/6h1aCU+4e1P4HXX1qODDeSH/AKa8wxzG
5N+/eLjEtapbbCWJMlKkulCVK5hp5IVwoCDxKTU0+Gs3q28GuDVIWf4ozCkXqDmDFngy8kde
kSJMdtTk+LHaaS8ylKU8khXH2r41I/HR/W24jJlYCx3y54tZxSYli4xIEWY5cnV215otP833
HFtJDSEcClSFDdRPw1P1tOCeoKhfc8xpWKxGIuWRE4wYMGMcLat6HJPcbKPuOSlULdaKKlb0
9Kk6FRllryzr/UH5Nw694Gqz2i5QZy3pkdcFiIHFLaYZqVlfMJSyaUSAnrWmt09b2LUmaf0/
33HbH5BRcb7LahstxH0Qnnk1aTJcACO4QFFG1aLG46eum9ZRJwbBc/NOJseXLVPg3yMm0yrQ
uHe7iy2VtJlJWsxwtS0hzihaq1+HXXN+toG8weZ8oluSslush+cLvIdluuLuaE9tuSoqPJ5K
aDihfUCmvVEIxWvwegPGfkDELRguPtv5FFgWyBEmt5LjCoqXJNwkuhXbKV8SVfUkD4jb015u
jbOr8FHy/wAqpn+J8Sx6KIImsPPG5wm46eTCIjyFRBU7AuhJLhH1711v1+tKZOblJFiyTy3Y
bznfjl16TA/bLVFjP3V4RatRpriFpeaUkUV22vbRKdkn3b01heturNzLkv1w8reOD5As09++
RZjzNvnJalKQXIkWU+potD7kNpdSlSW1j14//Vq/qcGeZIKX5psiPLhdgXqNEtkyzqgT7sxH
K4/7gkLVGdWtxPN1DPP6th6HbTb1ws7HLZ5uyR92Tf7nIcnC5vOynVruSRwTJUpZq8lBCeIc
+oCg16NBaq4NvsnknD2MGtEh2/yI79osc2yPYcltdJsqYOKJAIUGyj3VUpQ9tP5+dUcwbup+
5zzXOfHkvDL5arbJYVlDkG2x5t+bihKLwlniFx2UVqz2faSqm/DWqetpysmYUQWR7OcOi5hg
t+m5pFvUu1wplsukltpYo5JbUpEhxKQni0FUQoDf+3WOrjRtvMrkof8AUBm+O36FjFstM+LP
kWtuYua9b23UQUqkOIKEtB5S3OdEe/f/AIa7ev1uJMMgvBmQ2i1366RrlNTak3u0S7ZDvC9k
RJL4HbdJ/Kn20r8dV66a4NJYaRovjPJ/F48f2i0X66xIFytX3sZbbzCnqOOzGpSZDS0pNU8Y
/EHrU64ur7OTPfsky5X/AM0eLbnhd6YZuDCmJ0OUY1pejr+6+7Wta/e1x7H1EFJrU+p1pely
aZTfOXk3C8h8btQLdfG7hcVvxVNwYrK22kNsA1UtDif0FAKoeCzXpSmmnreXwZtZSZl4ayux
Y7kk1y8SHIUS626TbE3VkclwlyQEpkhIINEU34mvroaz9Bo8QaX/AN0vHcePIgz5LuW2q22u
22hEKQyE/u78WQ44Zvdc5LbRH5+0E+6um1MZ2/8AUAmpOuP53hjuXeQpN2zss2zIIxhwkrhr
Q26l6KlpD4bbH1RE1ZptzA5euqyc1iCwpkx1WKY2fHcvIFXTje410RAi22qR344SFKfCP8wb
Gtfp9OutflLQWf4obePL7AsGeWG+3FKlQrdMbfkBA5LCAdylO1SOtNVk2oGtvJ6Sb8y+Kccs
80WS7l+4qgmPGdbhKbSZMVt5TLq+Q93cXICRUem+2sL1zYk4UGVeNvJqrl5LsmQZzeVRHrHE
Wy3ckMKcclJJUSw+WhU17ivdx6Cmt+2v/iNWsj3EfIVnx57N8ahZdIi2i61k4/fWYyihEkr7
rv8A0596S6n9Ll8q/DTZfmmCeHJH4bPwhvw1fLPc765Bu02UzPatv2zjqEPQjyQhLiRwrJSA
mpI40+Whpu7g04hF8f8AOGOXnKLvHk3ZSLG5LsbmM/ex+cdkRXELmLcTspB+r3KO/pqVEo+j
kpn9Rh/VNe7dNhWCJbsgZnMNPyHHbWh9MpfJyqkSlOoUoUSKtoSrcBW22n01cMzKmDPfGOVY
xDx/IsYyGfKs8C/KiSDeoaSssqhuFYacQmilIdrx26HrrLcWTNI01H9QWCOXh2bKiSnjMuaH
I6JCGnFWtDERMZFxYJQvm87xBUiu3+9fq+Zj9y7Z+pSrh5XuUjxMzjn3L0+SzenHZbr0Ydpy
AFF5kKdCaNqW/wC7b3fOmunWqtbMYwS4cFwPnTBYl4ev7SLlIk5FPt068QX0hKbe3bkBJQw4
aB6p9wpTbrTXJJNZ4QvDg45T5PxC4XXEG0XW4Jh264Sbk/e4Nv8AsuQUjgllplsju9eDix+U
+uqqXR52Vd6ImJnOCQPLmS55EmS5ba4r0i1RTGWhx2bJbLDjCqA8UNiiuaqf2b9PZWestKAW
mYkpqqAQDxqAhwCor6CvTfWfb7E7sGJWytJSqhAUDwBrQkdaHodY7GdHSPBuMiR9oxHdelr+
mKhsl1VBy9qAOSthXYaHZQa6s4EkV5dT0I/trrSMaAKKCjx6U2+WiAAlauvL+8j56QSF8h//
AHPQK69dSRqRNAUEJpQnkfUfPSkTCChsUe6u/Lr19N9ZaKPAYJHTqdjt/vr89BCh3D7VJFQa
g/LQxkJaVpQDQD0SOv8APSgBRBoQDU0+IpoNBpKzUJ3T6HSgkCCTRPSmxOhodgCgk9OQp7l/
+egEdApKj1NVA0BG3T/amo3sCiPoqP8AlPzp0/HQDSEJcSCEkk1UQOXT8DqA6VRSnp6kV0DA
AkcKgkEnr8RqkoEk81JBPEUIqPWnwGohSlADiRyBPpvoSFCFqIXsrkkbga0TWRSlAK5LRUkH
kBoICQCnckV6g/lPw0Ew0EgbipJ3APp8dRIAd9lNzX1poJgR3RsPxIB2A+VdJLIoAlKqg+33
UTtUHcaDUB8kcu5x9Pq9emqDJDt9sQXgVp7iinij126nTydJNv8AA0u2KwjObTOuUK2uXCEw
3GVcH0stLUgr5EFXXiCK0qdd3V/1p/IPgyp5LaH1pQoLSCoFyhAVTaqf7tcKqDL2FHNXmygB
RBTRKtxyqKBXy9TXW67QLJvPmLJYsrCMYejZfEnZTZ21tyv2x0qfceeSAShTQRwbTxoTt8Nd
c1vhQjS/UTds3jvZB4+xzGsgYdZsbKJVxvU9YU2mW8n9dbrroUnuJb5BJoSlSqddar/JtqTP
V/RDXzHa8fzDy60q35Pa24twZQX5xd5sRW47VCXnBRHNfH2ICt/WmuFfXZ8D9Tv4TyrxTiOR
yITrz0m6vTlR4WTlptEf7FI3/wA1VWELUklSqVKaa2vX2XyjLsl9DSrXlfjW5WCYmNOt7dpl
3Kc9fXJkxcN9Ta1qIcCEcVSFOIoUJO3Gg67ay/W0Kb0xri+V4rarD+6PZBE/0yzHeZt7Xf7L
0KOglDEZu3pFH5CgeS3Vitdhpfra4LsdsjzDD7jjJmZhJgotLrcYsRoVxdlOSVhQUllcNISO
I/8AcqN/7dSo5gJ5Hf8ArPCmbyqPfL1AuNuvk+M9jsLkl5ESIy0ndwceLCC4j2pPVW3x1f1N
4jKFeRnkrttuOcYvIhuFWZG5g/awp6pYTZ21833JBBCGUdKoTsenu1VxPgDAvNkyHM8sZG9E
eQ+39wlPebPJFUNpSoA7jY7H56IaKtjSf6f7HGdwG/3WJBtz9+RNaYjyLshCm22qIKkjnsk8
SpQ33NNFqrAyXubFw8/ud08ewsduF1+/S1eHrgptLCGGm/1AFH6QV1PJCaHf5alTygT4Kb44
sy7jh3k1xP7YqNeFPR4VxbWGI7rjKVcktB0hSYzRUOKlAV6601qETeDQL+cbkWNl69Jsz+PN
WNLEie6ptUpMzinssoUCVJR7uVEjro6OYjIppnWRiPi5rEWrXc4dqjWtDcVLcptTLaHe4pIU
ttxKlPmpJ9yzv11n+tzEZJsZQ4zFs8g29qfaMcstmhuyVWOQwttM52OiMoBxSB7UpQBVS1Hb
YdTpVcaFvmTCvN9ms7DlpuePLgDGJ7bn7f8Aaq5SnXeRW+9LUqq1LWo9TsOg1tUaMO3kmv6f
I7a7PkrtkVGR5B7TSbAqQUdxDBV/1Km+77RVP1HrTRZTng124NeEnF5k+4z8EkWz96RcIrOU
TgWk/wD2cyAZZC3KDgVE1Un6jo6NZaGeDlaJGHJaTeMWXbG8MkTZ7ubPqDSUqaSgojBXMcuP
JI4JT1B6b6nRraCeBmt7ELZjBlwxbW/F7lqfdl1DSi5d3F8kIKT+qtyhoE+n8tXSMNZB2ky+
DYsMRiPjhq6xbbAbutxQ5enVPc58liiuS3VigYYNQFJ5VGw+OtKrcpD3hqTXZ9uxl+fGtOcM
2tuAq8JGGxQGWwbc03zRsj/21OCh5bK2Hrrn1nCLtmeSleSI8W5O4TaMrgRJWaSLuUSbdHU2
w4q1BbgZbcWg0abUnh1+dPXWq1n4FLJDYlabUv8AqTlQoNkt9xt0VYS3Gg1VBgpbQgLkbgJW
42fYainNW24Glx1Ktnkv2Cdi0eQMxs7lilIvcu6m4PXWK3GeQm3yFH7dDynTwYQeKl0HuO5p
trDrhMyngxjyBiknKsyznIcYRGON2JwvSZvNDceqGxyDX+NSlJUfbsf4jXSr45KupZrdksON
2vBoLKIMB3A5GPuTr9dXe2pxV2okoSp3lz5hSiEoG4Ow1hrPyMzoxrLMetFu8M4ndGLaxFud
2kOrlXF13lLkoSF7obH+Wyk+h6bfHXSu8k3EDTwVilkynyTAtt9aD9u7T8h5gqKAsstlSeRB
B4g9aafZoFs1PALD4T8hX5pVvxxy3/s8aRIuNsStbrD/AOr2o9FJPN08arPGm9Bvri6wpkVY
s7Hh/wAUP51MjtY86URrdGdRb1B5MRLshxxPccCVF5CqN7BQ6VNNTlr4LtwRNjxPCbbevI9m
h4a3d7lb2EOxoCJHf5sSEpKIrWwU0eaSpSh7gCBpa0HZjLx54lwmXi1hN5xxy4u5BFlzrnkQ
dLUe1lsK7bSEg8RSnFNfUEnRLNNlf8iWfwjiUB7Hl2R9d+dsjM2BeUvrWpUp0ENpWmvFIUUl
S1UpTYa3SreZMu3gmrf40YvGDeI7dKtD7NrdlyHr6SC2o98FaFuKFFDvlI4evE0GsWs22+TT
2PbR408Q5Q7bH4VmdtsZq9TrQ7GEhx374wm1rBcUORQlS0VqOidq76m2uQTKF/Udithxm7Wa
12WwItEIsuSHp7ZUtMmQ4pPNtLqyVLSwAAKgfVrfr0YbyDw7guO3HEbnlFzxmTl8tia1b2LJ
FcU3wQtAcW+oJKeVK0+FNZtbMHQvmMeHsH5NOzMOm3FV4u0mGptcgn9mjNbJ7pZVwO/Xc9ae
mizfkENo/i/xIhcHFV2p6Rd7nZ5l2avYlKHbDLigghIPEk0HpSg0ZmSbxJW/IkrCrZ4FxW1Y
83IaF/c/cFvOpbBeXFXR9yQoVWSFmjSUmlOut0WX5MvLRNPeM4VzHiO3XC3S7ZYpEN5m5qKS
HPunf10tuLCapXIWk0ruE6wnj5KyfaSs/wBReFY3iLljgWWyG2tyEvSpc4OOPhx0lKRHQ65u
oNJBUfhy119ch8HL+nvx3iOXLvD2QQ356oAYRCjJW61HKn+XuedaBKVe32126k6LzJ0nBf4H
hHBGZTrcvG7hcw/fl2xAjyipECKW0KS68pJqUoKia9eldYdjMnaz+APGwxW4KktSJNxUme/G
uJecHFuK642ypIbHZGyE/UanfV2ey7YK1ffFGJxrUzZGMcu6p5jwFrzRKx9ip2WpruFSVKCO
Ke6RRIJ2Gr5nPgrWcBeefE/jnFsIVPsMSQxc409iGt1a31pcS4lXLl3UpRy9lf09tb9UtlZz
soXgrBrNluZuxb02t20QYT06S02ooUoN8UpBUgFfGq+iBXWvbbjyVfJsUfxl4xsXllu0psT1
wjXGwruMGA64pxDbqO4h1IbdotSnEBIRy3Sqp/Di2xbmTzJeWo7V1mIYiOW+O2+4luA8oqdj
oSo8WXFGhUtsbKNOuvRHgwn+KzJu2E+IsLuOK2UTrLcZU6+2yTc3MnbdUiLBW2F9thQT7K+w
fV8dcO7bNtkXmL2AWvwPikK1olNHIpInTZKmmu499o8lEtSnD7gEK2YSk9OutJNu3k52acJl
hzmw+PbllXjTBILExqyCGJzjDLSO84xKaK21qUgF1bzhZPdPQDcb6Ktqsrk26/lPgmHfA3js
5rFH2D8Oxptcme7GRIeeElTTrTaOqfuG+IdqpIT8PnobcRJdhhA8feLbV5LvNjZsEu7pdx03
a3QlrWFIVxWl1iOlwIcWt0ce2pW6FV02rhNszLcnmy5BhFykoaYVEaD7gbiOkqcYQFq4tOKI
BKkD2q+evVGMDXSPQttwrH0eO4sU49HfsU7F377Oy88u+1dkVLbLb9aI4nbt+v8APXkn/wD6
G7kiM78PYlYMRuuQQWJbkt1NtDNjU42X7KuSElz76iypXePtQOPVX8RurbeXgnCJHLfG2KTc
uwJq5W9WFY1NtB+9DxDSvuWSpwRnpCwkd9TfVSt6azW34wvJr/cyof1F43Y7LlNn/Z24cWFK
s0ZxESAvutp4qWAvnQFfcSRxcO66V119WFBhv8sjXwVb4yrrkV3TEbuV3sdllT7JCdCVpXKR
RIVwVsvgkkhOq7eE9G5Sq2XLBPBmP5FhmO5CGZj8iYyZFxUwSW33Dc0MqQU0oikbnXjT4+ms
X9jbZhV6xBbMq8C+OIOC3UxYUlFygtKlt3FDrq+SVPHtoSpQ+3WC37djUeusqZNNOCoee/FO
G41hUO74/ZXrapMxmIuU8873FBbauSX2HgBXkmqVtkj+Gt+rM5MWRTPBdngT79eHXbezd7zb
LTInY9apJBbkT2SkoSW6p7pHXhqusrwarir8mk3XxVhV9vE03dDOKS3Idmm3Oc2ttuLGucsr
TJgdha08XHwUqAH00/nlzGCSUkxjuGWGPJFutuIxVWifeJ8LMU3Eh9+3QYzHKOO7z/RS5/mp
UD6jfR+5T5PONww+8Jx13KGmg5jaJ7lsalhwcu8mqkpKfqoUfmpSuvS1FuvKCHCJHxBYrTfv
JdktV6Ql62zHlNvMrVxCx21FCeQpQlQA21y9zhFRZyb9A8GePsdxm6XXIbM7N/brazcC1JeU
ha1txS5KaQQQAkO0BqDx1hJ2tCNrUFP8RR8Xv/my1XbFLCuDa2oC3rrAUrutwZnacb7jS1Gq
myrgE1HU9Bq9lYUPclWcjfAbXamfG/kKz3HEW7lk9odbXIYWvtSwhS1FKwBUoTDp3CUfUDQ6
62qn7EpwD/jga4nh9yl+BZcpNtLrEq/wZbs9O/dhMVbdcUK7JZUVVI+fw1zlS/oaXH1LrNwD
xfdMguFnt+NKhJxu/wBugPvRn1uOy2pqVKdStBPtbR+NaAnbV0hLO0C8lQ/qQxSw41Hx+32a
wRba0+ZEp+5xCpTTjlQ39uha/ceKQFqB6E7euunqiG3s5+xzgb+BoUpywX2Rj1vh3LMkzYTL
LMxLbikWt1YEpxtt1SUkb0WeoGsXic+Dddf5Lwx4x8SXG9LlNR44Z/dLg1jtuYlIDN7S2wFr
iAk0aDEjklKgfl0pqdWl/n4FPkp18yKyRP6dLLBtsJyFKutxcTJKHgQp+CtLqlPAjkpJoOKf
ymh/HdaJWtOkDei8WbIIeYY5gdszlyHcHcmnyZyeLTcVLbVuQpLTCu2EgLef4pNeqTTXNUSq
34walT+5Ym495tsXFY1rx632m+NMXKRIxxl5kyGUOPs95VtW9yaEjgQviv20J0dUk+UNnnY4
tV5bY8mZHb4VrCQhu2TLzdGHYTLrPJkpfRN7g7ak+zkvs7102o+q+ZLtggX5OIv2Oc7Y/wBo
PjlxF5VlXEMoWqapav28JSv9RNdiz2/4a2vX+UNflgw3iGG9MwlFrjO5G5a1+PUrsxwxFGit
Lw4m4rKWx3k0qru8/TV/XMqvzP8A0OkznkhM2bvsy/4UwqRAczpOQPu2V6EWT2rGKqjhRZFO
zxTUBfpX56utetnGIx9TFdr9zGfLDlmd8kZKu0FtVtXPdVHUzQtlRp3eFNqF3kdttdvZSFXz
Bwn9CnVqkGpANDt0OuLECnBy5U2H5qagDUmiQVgKUNxT56BkCRyUgUHI+v4apNIM1KSCaH8g
Oo1IFUABUCCPQ0IrqgwEtPL6eoNQK7/wGgWgNhSykHcetf8AbfQyqpFFSiapOx/KOtNBWtwB
AUQBUK68h020BkFUpAUOhPp6fPUSAAoJIVvv1/DpT56jSFcUFyv509Cf7T+Oliwlp5qJJUFA
0pTWQFJVSvL4en1Gnx0xIykAq5I5mqSeg+I+WggkqNST0psOhA+eokBJSsbjb0r8f4emgBQQ
rjRJ5ehUPSvy0DAlJ9wHLl1JoKHbSykWSeJ6mvU+lT1p+GgQipCQe5v/AIRU7/OuomDt/pBV
CaDenWmogJqpY4lRR0oSK/hqkg6gA1qlB+O/+7UAoEKNaDkdqHYdPj8NZNpiOTlePbPX6f8A
l/DWpCSMjDlAkjiClJSSrpvX00M6VWTbf6eokRqw5pd24seTdrXbG3bY4+wiR2XVqXUoQsKT
VXFI6emvQ0v60/kzdrgyyYXlSnPuOYfKyp0LHuLh3NQehrrmwYlpr3Cv6QUQVKpVQ33NPw1V
icgkbfmOL4jYPH+GZNasVdehSO7Ju7M9ThceTQBszXkUASSeQSmg9BtpdF26tyKs/oOs8iYV
Hs+FQrnjMe03TIlom3KLZkEPpiK9rLCFrJ9zilDmfy0NBrVfSnZpOEZ7MrXnTAWLP5AbsuNW
RbFudZjtwY8Zpa+8+U1UlFeSnF1+r4euuVdHRZQ+8Q+Emr5dZDuXq+3ZgS0RXMfU6hmW6+eK
iXSSFJbSlXRPuV0FNaacfBmI2aJH8L4HDnzpciNBkz7lcn4Fit0nvuQo6WSUoSlDPuWvaq1L
UAOm2jMQXZwNcK8R4Ukqst4saJN4UJKVXSSpaX5MhnZxVvaR+izGaUeKXFmqj6GmmznkBd18
c+NX8YntYdbLZIusK3qkffTVS1PICQe48pRCW1ceqN6H4U1mGikpF+wvx5aPG2JZTCtky7R7
hKU9eHnSph6Sy2lQWk9vmiO0Vp2IFaetTrpZPtDYu3xBaHbf4ygx8FlN4VGt1wy2c2EwXJEh
fbgqPAOu0KeSlckFKenx1v10y0noy3GDKPN9tg2nyferdbYjcOBHU0GGGUhDaElpKiAgbdTX
XOpkmvGfjuDkmG3q8Xu9TYVitshpty3wWu+XHV0T3Cg1TsF8a8fjvTWru0JGqtFsyD+nnD8X
EibkuQy2LEqQ1EgpjRw6/wAlo51eoFABI/wJ1lXs9MYn6kJhfj/E5Vp8hynQ/KlWKIv9lanN
OIcQkoUpEpbJKQHF09iVCo6030y+qyDSjRcLv4JwebFtEK3SpFsuv7H+4BlLRcQ+sAKcdkvr
qOSlK48EUp+GsN28lCQ0b/pcafx9D8e/rdvnBpZYeaQ2wlTtKJKa95I32Khv8NafttOSVarW
hpZvB/juXmTGNuZJPulwivravUNMdTI/RaLh4vUISgromvKp6DfWv7Lbk3PgzXyhgjeGXhu2
LdcfnOBb0mjakRWkqWS0ww4v3vFCfrXSnLQm3sxJI+McIst4tN9yq/rkKsWOIa+5t8FXCTIW
+aBPc/KgfmpudTbWnssM0y4eAsHsE9U+6SZ0mw3CTFt0C1xT21tuzKFKn11qttqookfidZV3
qQS/UQ1/T7h9uujNnvc+dMdv8uTFsa2KNtR/tGysvSUg/qLJHt9BpV7REmhunwDicdo45crn
NdyORBkXeFMaAEFhllfDj2id3Vp+tR/hq72eWylLRnVv8Vt3Cw41MhXVLt4yeeiF9sllSo8V
CgqhkyBUc00/yx19NaVrInDNGc8AYveyuFYbxPF2s1xbtF3m3EBaVlCQ4pcdIPtCE/QmvXro
/sssyUp/QrWbeLcYk2uxX7GrtJZZvVyNokSL2uq0ONFxLkh10H6R2j7fwprVbNNzky40R2Me
PYjXls4fHyiTb7ekpa/c2m3Ir8tRSF/bsIG/uVX3q9tBy321dnEoVE5Lrg/ibHLteMubn3+e
qLHubtsRa2JyWZLiG1cUyJjjpq7y+lKdZfstsk+YyZL5GsbmKZje8StUqS5aYryVKZKlUdAQ
laS6lPtVw50qRpXsxkHLcmjWfwnbX8bjWS8ZBNayW6W1WQ2+CykqtrLKKcgtFfe8oFNVbU9N
Z72eRiDNLzhqoWBWbJ37wiQ7d3FIZtTSVOKYaSFVU87Xg2qqR+n139aHWqt9g6rBx8aYlfcr
ymNYLRMMGTKQ7zmFSkBtlKCXd0e41T7ePrXWr365KsM0yweBMmRebarEcwiPNSW31uXWCtba
o6GFdtdEoVyWC4eIIPXXPvZcEkiWi+CfJsPLp6YmZFlxUVEq43pLj/3LnMlKG1tJX3TTtk1J
pTV/bYYRGY94ZkQnsvlTc9Ztr9nbdTIkxHHEuKC0hSnpe/cDSuRSUglRVXfbe7taJNbGWHeM
M9vWFxojOYItcG+IkP2fGnnXEmbHa3U52kqohKwK0IO3XS7tPC0KhZGuUeC8gYtEy+XnKbe9
do1tTc5NrecWuR9qlISAVqPRIohG1CdhqrZvMGXCyiTkY3m1zxPxumJlE168ZRJe7ZclufbR
EsIo0G0o+lTLfLl68vaNVfbGSVc5JDHfFHkvGbva5GKZXAnoEqXHfW2pTkWKpKOUoutqqlZK
UgLp7qgDV/Z5RFV/qGgZFHvFmOSZG1e7s9GceRFiMmPGhxioBpSWz+Z/cmu/t+FNa9dvgrJN
4GfijFc1n264Xa1ZY3h9nS63CkXB59bCXn3N0MpCSmpAVXc+ujvDwpF1xktGKeLfIoiyoUXO
o1ogXCa/CiNomOlFzeRUOra4H3VNQo7nY6F7HMwKhDaP4S8g/trdwVlMRi6/tbyItoEhwS1W
9rklbKCPpa4j09u9NZXsfgw0nkZZ34uGNeIrNervPXLvjrqG4EdMlC40OKtRWphhoVLijXkt
Sdgfw3atti44LCuL5STefGrNoyN+5X+7w3rmVy5HfhMKoAqiCKcGo6yhXUk1A1K6ScocN+Cv
/wBRMbMWnbCrKMgiXSS82+5Bg29lUeOxHqgd4hXVby9v/p1qj+xlpSR/hLDfId7TdJmJ37/T
sSClKLjMLrieSle9COLQJVQDlU9NF7Q/JpaLpZMF8urtTtmaziFaY7d0mw2FiStD1xlO0U/7
x73OauiTunfUvZmYBIaWTxT5q/0LJkx8lTAtMYS1CyCQ8QpLJWh3dA7Y58FUBOp+34NNpojb
3gHk9OJQ4MrMY8hccQ1M4gZp+5ZU+pIjp7dQCodxJHw9OmpXxrAWjg5+afG/lSx4zBuuXZGL
5b2H0x245eeV2HnUmigHAkK9qSKjTS8vRm8FP8P4xmd+y9uJily/arky0t5yeXFNhtsUSqnC
qllXIDjTTe8Eqo0yD4czx3yLIbvuaph3a129M2Hd/uFqkuMnucVI7tFJbacCu7XoD89YftlR
BrBg12Dv7pMU5KROdD7gXMQVLbeUFnk8lSqKUFn3AnrXXdfJzU+INZx7x95Ok+PEsxcrYgwb
hDduMPEVy1pfkQ0VUtwNJPEJXxrTp8dcv7EnKRp4WQZR4mftfi3HJ06eZV+uMpDFri/dNCDF
YmOVKEoJqSskKcWn2pPXpoXsbbbQYUJckrlfiG/W3KcEs9mvL72YXGOVPXR6WktsCI2lSExw
j9VtphHIJP5/y711f2N1yha/LA6PizzJI8hRmUZimXeEQnH/AN7TIfKozLa+0ptSCO571qoE
gUO59ND9jiDSaOFu8M5oM5vz17zZu25BaIYuLV4MhwvvJW2Q28pSylaGG+JS78NqDfVb2Nxg
FEuTCLklabhIK3hKX3XAuSklSXVBZ5OhR3PM+6p16smEbA34umN4C5bFZY+m8PWsZcMX4r+w
MICgUtVeIe/s+XrrhX2tZhQavXhcEdlHjfyFZIt4ut5vLKIDkiAhVwVIUpF4Lqklp9pQJLqI
9QtXLoBX01mjlk04LDlnj693XLLNY7rmsm9WRFjORzbpJUt9thhHND64qCSV8koHAnemn+z8
dKZHCbfBn3lLGGrVJss+Bd3rzY7zBTJssmSkokNRmT2ww6g/T21bJ47a6ettqPBlvI58O2xt
dyu+RyrrJtNvxaCudKkQDwlLLlWm2WlEFILijQk7ems32qjVqHJY8R8e+VXrHaF2i+rh2u7o
blFlmU60lkTJP2IUptNApSq1Vw/L89c37IbjJLWSZvXhTylbcEQ+5k/3FsjgNqs33D6WG2n3
g2r3K/SUKkFQ1te+zcwpN9ksle8y+Ps3xi1283/Kf32OHvtmoa3JBUyotlSVIQ/7VJ4pI5J6
bfHT6r2eDLeSt+LcWiXeXcrvcrs9ZrRjkX9ymTIYUqWEpWEpEfj9LnL8x6artzBcSW64+Esq
n3OdFx28m9tPvQLh2JCi3KXGuCCtqdIQpXFfaJKVq6/ho/ve2KUOPBK2vw/GlNC1zsykOX3L
3JzNnehc3oEz9rJK3JjhUe4hRTRO+3z0drJ9tQTSagxtzIcgGON4448oWRuSqZ9hQdv7pQ4F
wqpVRA2FTQa7K0N+TEseeP8AFJmUZMxZ4ckQ5Drb76JK6kI+2ZU+TRPur+nQaz7LwvJpLEmq
QvB+aXGxzLnkOXtW+LCQl2euQ6++hLEhhuUtSlFQryS4ApPqR67ax/baYSNJ/jkiMI8fQYnk
5ywryD7q1qtL1ziXC1vORlS2+wXWkc0K5NKQoclpV/h366m3C8yZrXYzxzDLVc/F13zmRlzk
G/tLEd1lXdAWtYVSO84k81mSEjga8R+b5b7WfsjwTS64FWPF1TPE1snxrnKanXHJGrMqOh5x
Mdll9oqKFMhXbWSqi6gfLWa2atb6G+rxPJZ2PB14hZBFNgzZmTcmbsmDdZKA60uLMS2t5tXJ
SiHlhCKcfRRp8dYdrPfj9gVUQv8AUNb77BmWNi+Za9ktwcaeeXCej/aCMypQSy4GqJ4rd4qC
gRy9vw129LfR+DnbYw8aYNjM3HFZNkU64MxZVzbx62tWril4S5KQrvOLUR+mlJ3T665W7dsY
hSdlWV8slv8A8n26t/evJyJhVosrs9m8zlJU0YLsMFTSg0Ve4Pp4mqDt69NLveUvJhYzwc8k
wjC7J4exy/vyW7pd7zLQ+661IcaUY6FASGI7JHAlAPFxR6K3FRrVFa3YbNKFwTqvFHjq+2XH
J1jXcMbdurkmV/8AaDv3a/22E0px59LaCaHkE8PjrnSYf+sg1DkeTfF3jeFjNvy5Uy4Xe0QL
Y7OlyIpcjyZ6npIZjkJdPJgoCqLI+oDW69v4zmfsa50JX4Pw20XL7K6S7pO/1BdUWmyLYKW1
xA/FRKDss7hxSA5xKaUNK6xNrflOkUTjkZr8I4m1EVj7smc5lbltmXlu9t0TBQiC8WQwto/4
+JNeqTTTNlmfgsMkf+w+FXW7ybLbn58CVjs+JAu856hbnmYjuKWy2f8ALUjpXoRrP5JYexx9
iBew3B4uQ4berVaJgsWRzpdomWCYtZeSWF/bmSw82UuJ2Xz412p8NbdGqvP8I/cyq5j/AMkZ
Vmlgax3LrzY23zIatc16K28oUUpttZCSoJ25caV+eu923DfKOUQQp48DU1+f8f7NcWhTkSVF
SVb1AIP8tOgbDUCK1qKemiASDSoH0FQKCu2346yIlRSdgNlEU+dOo1pBIdPcVGnMDcH5/HUa
QriRQipAUKE/HodDKQxUUIOw+oD4dK6yNWGR7ajqfUDbbVANcgAVuokEgbAnoTrMFIXFXtUF
b/yNR1GkQgvjRKqgVrxG/wDbqSBivTce7Ygb0+WqCkCqBaTxV6+0n1+WoXkNCeSt+tPag7HV
JhIAXXkCBXc1O1APQaGjaEBSkgApqhIqKkg19NUhIsL5Joknj0odhU700SKYElxRHEcQNx8B
6HUIRJJPJPur1O1R6U0AL5I2HUfHrv10QUhEio2pvQqI+O+w+eoQJTVQIANBWhNAAPmNJB1H
uBVRNaV3GhEErkoqO5R/IGv4aWSFUKaDqDtyO+1NZEVzT86U+mp/lqIhmEc4j4qfbQkA0B39
Rqg2kbR/TtElqZyW5N3mfaINrt6ZMxu1lpMiSkKVRsLeStKAih9wFanrr0tv+tLiQtXJmk1x
K5brqG1oaWslsLWXF0Kife4d1K+J9dcTJyStHIJpyUo1CTuSoHbrrdNkzZ8xxjKEYrjcjMc+
cnWi/v8A6qW1OTYURDSR7glB/XWkniAkUCumtO1naISa+xJqJHF7wgQ42OZbYc4lXG73OQLf
YZtxQIIbYZQpK3A46VdtppO2yfXbfUu/Z4k1Xq3LIHynJzvDs7hom5ZMul+gQw61cqlAY+6S
QtDAPKgINDtU+usq7XCyULZCeOsUzfLco+/skhTE2O8l+TkUpZ7cd9xdA4pxfIrdWs+xAqSd
S9jRJ/oaxjmBeXICr/Fj5fOt2NxJTzSpbEdyZKmSkgl9xllPJbSSuvJZWKq/nrNvY2ksGeqO
OHYj5GvOJO2b/XD9uiqYccRYQ2p11EZRV7Zssf8AxS6AT2yutNa9l2+EUIkMqwDyEjAU2++5
tLlRPt2wLJDhLU29yoGWFTPZVPQKK1cQNyKaz3czg1gj8vxPyPdPGz4kZoLpAgpbjyrLAjFu
M4pBSluMzIbQn7ohXFPFG3LS/Y50gspOVow/PLPeMQyHML3GizX1ptNqiyI37h+3pCKNI7bZ
Q2h2oABqeBNVE62rdn/pGprVQjN/NNsuEDyZeGJ9yduk3m2t2c6lDal8m0kJDbftSlIPFIHp
rFbHNZZafC938t/sd1tfj+GwGe+iRNucjtJDRCaBoF89v3AcvpJ109ll1UrJt5zwWGxxv6om
J1ztkF8qd7pdlT5Zjrb7jwFPt3Xh+ZPogUpTprn/AGJrKBrAywCT5bgWvN2o9yRAasPflX2T
2GpUyVPWhS1I766nkAKqUSePoNabWJRlLBYMitf9Q6sOgiNdET4MuA3Knux0MsOtNqSFCM24
f1nf001WUgV30O6nQwiLmXb+ptzFk3V5CotuUlt0qZbYRPWGyOC3EICnjyoK1pXS/ZTaqXXi
SRLH9UVwudnMkx7SJa1pYd4x222nHWlc3JDTXNRWlvlw5VoegrrKtXOBwnDZkvkW4+SDbrTa
8skurt8Yvqs3fSlL7raVlsyV1/WKFb9suenT4632Tykc7Vhkj4We8jpuc1OKTI8C3IbCrzNu
NDbmUHZC3grZTlf8sdf4anZRDUjSvJoFisn9Qdvvt3hqvUaOxIcStV5uy0uR1yHhRpURKgSH
lhPtAFEj06ayrqIaNNZk4Y3Zv6g02q6WQXVm3qS8+3HenrSqfIkkcn27e6aqHLlVS/T09dDu
nwUPZwas/wDUDc/Hr9qcmx4zDTTraYDykou8qCglLqEOA8hHBTTcgq+NNad6vgOrRQ1XHyy5
imLymm5KMftspEfFVsIQ3zljkEKZbHvdWfeErIPrp7qdD1nZpGUr/qOAs11DkRUpiShJttnU
gqbnOANj78D2rcWg0X+VIJ6az3SzAdW8EB5L/wC8sG+WC+XJcKWhmUkWKPaUB2AxPWT+ihkV
7jxVXkVV39dVbVW1ghhicnyyx5kkPP2hq657IRzfauIBbipWlFJBLagloJb4pTTok0AqdLtW
MrBpLMFsxxryxd8yym7M27HmJ8eSm2LvE5rjGYlMEkogjfm4VK5LWoEnb8NDdI1kIM6uuVZp
h0rL8XuKY7l9vay1e7q4O9J4qRVSWXNgEuJXXpt6U1pVq8wDfBdrZL873rxWpMNiMmA3EU1F
lrCW7u/bhRLiGPzBgD81Ao+mstqdDasKGZtkF1zuf45sjE6IqPhMF5TVodQ0GW3X6KBUpQ9z
qgOfupTr660mphA6pjXxpkOUWDKmZmLRDOvjqHYsWLxLlVPoKeYQnqUDcV2+O2m0cgl4LfgT
3lfx3khabxaRJm39lTJgSmVlUhCSVFQ7ZBTx3J6fPWe1XsVV6LZbvI3mWJnM8HCUSLzJjM9q
1BhaDGisKJQpC0nlxK11UVK3P4U1maGlVjTHLt5sn5HmN5XjDd4lS46Y16hTY/GOkxv8iOy1
X9RaAr/L35DdR0u9YSM9cSN8W8jeTm8NbTZsSbnqsbL8GNlCY6luRUHd1KD9FUjrTb4jV+M5
GGkU/OEeS8sP+q7nYJkW3NQ48dEhplxLIjNJ/SWVL3Xy5FdTtvp7JYTJVNAc8jZhasNwOZBx
O2NR1yHE41Ga7rspfZHBSgKVQJC11PH3K/A11hJZBzOdjfB8zz3E7jZYmRYrOXFfuE2fFbba
W3JkzJySCW0n2kN8lfp/OvoNUpkk9EL/AFHX283u72u5zMXex2Mpt5hiRNCEzJZbKefcSjoh
qoCQfjrpRLhmXuRj4pzHKY9mn41a8QazCEt5M5yC8yt4Mu07YcJQCKUpQH+Gq1aTk6JN6LPj
Hk3yY9b5DcfBWLtLs0t9+LLTDcQm2vPAlSW2kDingOgqDTrosvWl8hkh2s58pKcYy4Y665Gg
2NyzJuAYd+3ShxSg5LWvopanKmg9uiaA6OM8kNlNqy1/xVid4urceDY7Tyt9pilShNltvu9x
ySU0oEeynUbb+utUabaF/iaK75Ll2y5+PpE/DI0SN2FSLJFti3VThGUCy20nkE0SsqDnDflQ
VI31zVV1kXM/JVv6j8mlZHcLPOexqZY2UJfjomXBtLUiStCklSOKSRwa5bfNR1v1pcZBJckb
4PzvLMeeuMfHcZORPSC26sNh0us9qoHuar7VFXRXU6rqs5CHwXaB5XzOVa+41girvfYVym3C
VLejOFmFJqOBZCAVJU0j2uBRBPX11mKcktSMbT5j8mf6FWW8VVOhIjyosnIQ0+EBLxUpxZSn
9LklS6lX89LVJJVcDC++Ss1XZ4GRLwaLDfdMP/8AW5cNalP/AG3EtcFqHFPPtAVSemw0JUGI
GHmrO87v9kiRb7i7mPwHZH3ffW2/+u9wIHFb2yUgLJ4p6/w1v1xwF68Fa8QZfkWN5C6qw2k3
uTcY64j1tAcUtbWylFBa96CCBU6fZVbZVqaFByvy1f8Aye/eGMRDk+22hdvudkebdQyIagpf
Elw8u49X2Cp5UoPXXN9YNJbyYXcXlLuEkmMIilvOKVFQkpDJKzVpKTuA39IB+GvQog5px8mu
WjyPnjGBxpETDmpKLZBctMXMjGdUpiIaoUkLHsPEKKSa0rudcuvr7ZY2eMkFllnyp3xfid0u
TbEK1W8PW2zRiV/fyUSHVOqkFoj6AU0Hx2231mrWRaiPJP5C9ntjzvEMkn22FJvku3sMW/HW
Q484I7DXaQH0AckuuJcKhxPtP4aJq6/Bdfy+Sfl+WvLRzZpDmFr++NsVGbx7tSO65G7gcL3c
r3dlJA+A6dTpinkoyRdovvli7+QcgyCLiCZjka3Ktl4sLzLgZEYJBTGVzPcW6qoUPVXwpqs6
9UhS5MRlKddmPrU12FrdcK2kjilslRqgJNaBP0j8NeqFxowk0ba5evMiPE7U12wxjbkxEW9V
5LaTc1WbajBRXn9svoV0/wCOvKnSX4N2wQWT+TfIF6g3i3XizNu26NOhc4ZjqDdpWwsBqIyE
gFsPhHBQO5Faddar05Gvkns98mZ7Y7pid6umMW+wuJgOtRYbaSsSbYopQuJJaXulroUo2IrX
V6qVssGbvOShzJ+V+V8vtlsiRGGXCkQrTbYrYaiQoyfcoADfginNSjvrq2qVhIKrs8nbGJ+Q
4TcLy03bm79Y7gJllncQtUCaGdlqadAJHbJSsKHQH56xdqc7LrKhF8xrPPMuKYQIn+kUvW2w
trYdukqGtTjKWldxIUQoVDDp59Px1j8J5NPWRGUeVvLysJ715xlEe2XNqOh3IHIywZBSQ4yp
YKu2K8OnGihpqqTgPZjRX8l8i5v5bTacMh2iK2v7kyGo8FChycCCFLUpxSu2hAUoqpQa0oom
zP8AIawLf5J8T3qa+q2MzIJig3SqBMtkqC8sJBWsdUdz212IOx66y2rbwaUwWKz5V50vFxvb
luYTbJ0MRr3cHQ2mKpuCwxWNESD/APu6201S3+Y1JPXTZ10l8IYalse49f8AzbNwu43+0W+3
tW+S5MnWxa220zIyHiTNVaUKNQ3Qnn136b6m6K2U3ANYRCXnDM8V4bsFrFibWhmebgpxDqXb
ghE+rcfuMAcm2nCv6io1NOmmvtrLbWxupaS4OVk8ZeWMIy+wT7XGjyLjLdWw2qO42+yw+GiZ
EaZvRHFkkrr6VodtYd1ZZBJrBdsinebshfn4fKRZYlmuEFbkq8RQlFuVDKkpD33PI/5fbDQ2
qB6eumtq1hpfkUTyZhZMozLxHlNzgOQIYuakJZkNy0JfRwIK0rYWkj2vIXuR9Q66317wy7OI
4ImH5JvcOPkUJEeD+35OpTk+AqOlUZpxQISuM3X9FTfL2EdNvhrd0k+xxl/x2Xyz3nzPHsEb
AomOtFUFSZEZS4iVPhbIRJS6HioILqUOIPLrxOvP2rMxs7pW14FN37y/g9/jXPI7CgtX+8/u
iYshCUoeuHBTYCFJUosq4ue0H1APpqt7O2fiCUzBN+VbDmHk7yc5i9ttsGPcMajVlTg6pSlo
kNoeSHXFpSogKIQgUNDU9Drasq0W8h1U7Kl4nb8lWS6ZTabTIRanbPCkz7xFmtokNtSIaaoK
UElIerslxNQPmNF2m0PEj61P/wBRDmLw8mYTIlWK3RZUpp5Zac77E/kqS5IaKqyT9R9ySU+m
n2eyrtER/wDBQ1shslwHyyz44t8m+Q0sYvjbRet6FuNd1tFwdTyHEHmVcik8VbgdNNPZLaS2
Zt8nHx1dvId9yeyRrRcWob+Nw3URZjwQhiLb0nm+qQVbLa93uCq7baLxWsQapy2XHJrv5+tt
4TkUec1dIMGM0mHc7W2yuI9AuCyG6xkiqm+43x9yfYR1pvo7KIagUmMrOrzdbo/kG4qvn2M6
0ILuQsPrQ6846WwQpgDkG19lXtdRTag/DXbtauA0tnVrGvL6PHItLGRNuMy46bq7iwdrN/bZ
KwkSO6aqDS1qqtrkK9eu2pexqztBqykk71g/mS5iw2WPljV2bs8lESQ3HfLa7ZNbb7yRJWKK
eKGweLlTxpTWa3aTx/IesvYtVs8kuZhFy6XmsSXZIMB6RCy91AVFZj1MV7hEKU0e7lBsNzQ1
09rR1gtS9SVi1eFotzyyTj2Q5UiFksh9DsMJYclC4sSUF1uUhY48eYB5Be6ab6PZ7rvLWDKo
o+hV7r407GK3u9Q7gmXJxq6m13qO2P0Sytztx5cdyvuQtYopPXXVVt26vcSc71mGuSjgJBKT
8agenyOsupzeGAq5AJXTlXqBXpqgewaananyJrvrJoSk8DtRNDWvWg+Y1QArlUjeld6/HVBA
KyAo1r+G1R66GKQtHEL2okqANQfU9NZB4ElauO56jr8DpECVKBHt6mhPp8tBIPanwPKlOlfh
qRaDJKVVX6fUOtK/79OzUhhRAAJqFHY/+PpqMphrX6pG9RQ/CnXWUakRzoCqgqSOdATU6mg0
KKiUigNa9a+usjIShX2jYfn/AIagDSR2eJNUU6g7n5b6hEo2qKHcb7fD11ExRCkrSkKqv0Kt
wQevXUZgFacVkDf1/D00GxSioN1qKE1KT1OoAqlKgK9RQpFD19dRA4p4kElNDxAPz/DQaFo4
7pBoKVp0PwrtpNoFVflFVHoPUACmswZYVXuNePt4/H0+NdUgRDAUmK+EpBUviCPhQ+mtG1s2
f+nCXc0v3+1xLI5fU3S3dqYhEhuGhhlKiebjzmwCq02316HdP1JeGDrmTN7gEpmSG2w2hCXV
pAZUVtABRH6ajupIPr6688QZOCRVQFU7im29fnrdQZr2T5ZNyLxtj2OWfBXocBC/tLDcFOuP
rddP+cmKjijuLcVvXcD003dW5mSU/Ris1VmSF4xOy7Dn7fieMMR4LcEL7P3C0gE/qCvBbqkD
ZI+XXW6+yvZtNo1Nk8ZD8lZ7Cu+dWa9ZNhDlvjsNJdmWuU+RImt0IZ5KKUhttOx48fd665JV
8llIi8R825TYchdds0FiPZ7hPTKcx2GyhSVJSQkMMqUlSkqKQBySK19NdKdIyZy/qaLZPMeX
zLnKtU7DLncb/FmuXRFvgvLYDIc/y2piAPchGwoevUiuh9OqaeSSZyxfypfXrdPuluwiZLvj
D8pdxnRFLTalS368nJLY/wA1bSfbQk8QKCmhusCqnSJn2YN4a9Kg4Ve35EmEpiZcp0h1duDR
r33m21n28k1oduP4a1bpKcz9hqvJ0uXkzLrViokYpgtztEYIjumXPSp2BDjx0iiYjBolCFf4
tvianRbqnuRgj4/lPIc1dwi3XazzW4zU5dylSoEZKl3F+MpS2kxWvakNpr+osq+er8Z/H9zH
VyZz5rn3Od5Mu82fb3LW+6pvtwX1JU6lsNhKFOcCUjkkctjrKNayWLxV5DwXH/H2QWbJm5En
91lM9qDFoHHG20gqWFkpSngpO9Trrb14TkE3JYrv/URh+UW2VByqzTmoolpmWxm3SA24oNI4
th1yqFBQpUlH92jokk1YMyRfi/N8Sj2zMGU2C5Sn72y+5ItluUFRYltQkj3PvK7nOqz3HV9T
002zVZgepZbt5tsVvtFruoxy4MXldpNtsqpSw3FdjEJS48N+RAUn2kDfWetZiShwOv8A8qDE
YltQ1bbTNXI4x0mG4ppMdntEFYS4Kuq2H1K6nS/VWdk0yFZ82eNoucoyG02y8TrjIddkTEPP
AoS66yptLcWMlRQStRSCoioT0qToaxDYJ8Gd+Vc/seXvRZ/7U7b8tFUX14rJaPD2toQhXu2H
WtKdNTqlpi1A68T5/YbPa71iuQQJMuzZGlAfNuHclhbNShDbfRXOtPlrTqomYaMS5wX4+erJ
flSbbkeOzqQ5rU+zQLeS5ILsADtsyKjYJ48nFgax1rEpmnuQof8AUBaLow1eLrYpT+TY+/Ln
WePCJVF/6sKQTKWRybS2k0rTfS6rhikNpfny0S7SnIWbPJczaJb12cKQK21pqRRRkOLpXkVf
S3qvRV08MHV6KdC8s2iBacKiR7c/LuWKyhMkypTpIITX9CKgEhpBrU+30/HWn608vTCswX1X
9QOJWK7In2O1TXYl6n/vF8kTKNp5qa7Sm4e1FhGxKuhp89ZtTyymCs5J5RxmNKxmyWNi4x8Y
styVeH7i8kNT31PrWpX26VABCE91QCj1/tIq/ItRA1xPyNiKvMsrM5zsqx2NSy81GZ5yXn3A
EoSh9QUokLVVxXUVoBrXRwFbcFuxzy7jlqzbJX4V1uzdhkSP3GNaEw235EqWslUkpCkK+3R0
QK0J6kimh0xwKq04M7umSYPk07M8oyhmS3kFwV/+rdpYKg0hSkcEuPugUPb4pqDSu9BvtJNr
4JPhbLz/AN7cPRYId7ahzVZdbbMrHmIIoYYbWEhMp50DYVTUI6+nz1l0jkKy9mdZZmlmuHjb
FsZjplPXKzqWuY++aMNhQIS1FbSacTUEqUK7U9ddEobyausheE8xs2HZ21f7vz+zjRJKR208
3CtbdEJTT4navpqvRvBz0aP4s84XS23AKz+ZKYhPwnf2WXJjrdUlT73cccWSA46k0CUqG1BQ
baz07aNy+S123+oHCXswuy5l3easxhxo8eSqIoIfW0txTpQG/wBVpNHPbU77n4an6X8DBV4P
mnGJl7zqLKvF0s2P3xlIs76Cp19LyU8XnUJr+mt4ABA6BIFaav62zn3hwPcJ8wYJZsMx143e
bFmY/b5EVeKMoPZmvu1CHHF07Z3NQonbcnS/Rb4ya7SU/wAnebL7dpM2FjUx84y/ao1rloIU
lgurSS926ija1/QFHcpBptprRVw9kvL0XKHn3i222zxyHcjeuE/ElrU8lEZygRIZUlzlyTsl
rZCOO+sf1Wci2pkfeLvMVuvU60wrzdv/ALTcudzlKcnqI7UN5tSYzLbtUpQ4QqiaHYBXxGsv
1sKvBRP6objZJ+ZW5y2Xf9yWzCDD0RpaXmIqUqBQlLoJC1uVKl+ooNbpVrYPcjjxHleNWnBJ
9jueRzMQuMie3c2bnFZWpb8ZLfBLSSEmqVKSeoodDr2eDcwlJccY8x+P27ZEmPZLc7a5bLjM
myYTrQVIuyXyS33y0O3Uim21OnQaV6LPUEpIxXnOxC7MNJnPtY+zi7zL9uSD2lXZ6pLXAfUp
AVx5/TrC9cGUm0yh+SvKszK8KxHHlTvuDGjc78320tt/eIWW2QaJTUNtgn27b/HXVevrnyGW
54NYtuQ+Pl5t48dhZQi4zLTDdtT0iSytlhTZZILocWEBLvMBCAK03rri6ODbcMov9VlwtEzJ
rImHdhOLEJbTsNDiXkxgHKoUp4KVyce3rU1oka7eqjWWY/3ZFf06eRMUxaBeol+uotbU1xl5
FWHVqdShJSoB1ghaVD0H4n10Xq3k22tGhW/yfgjFst96XlEq1sLu9yuSLY0C7InspXxDUpKf
cjlVKkcqbbemuf8AW2CQ3tnnHx2347dhOXFTUt+DMZVZ1MuF0SHy4oDuJ/S4KLlBt/HR/W3o
EiGv/lnBHJEO8RspuU2BIftyl4ahhKYkZqItpTgWVppQdoqCUHdVPTWv6X1yTIv+ofyxi+VY
1Ds1jugupNw+9d7bDjKGW221pQhSnCStXJz8tB11uvrjeCdSqf09ZpjuJ5RcJd8lLt5lwvto
kosreQhwuBZ7iEELKTx2po9lZiDWdGgO+cMYX5WnTG7xKhY5dLEi3uXBptaS3OTzKH1NGqqt
BR4nehOj+lmEkmzzhOdQqXIKHFvhbrikSHPrcHI0cXv9S/qOu1Rxwehbf5bweNhEJxN8mtyI
uPKsa8NbZUmK7LUhTZkqVsjcq+onp89cK+hvA3/KUUXOPMV9vOM4XEj3Fx2bZmESZwLaUJFy
jukR3ASkBRQ2PT277763T1YaCNFpnebbdM802bIJF2dbsMCCmOzJ+3C+29IYpIUWaJUUl3ZR
G9B7dC9b6xAylJc0+bPHsfOnpbNxfahv2n7Zi9SI77rPfTKLriWm1q7pQa0NFcQrb00P1W+D
L/Yqv/e6xTPI+UuKv0+2Y/ebY3Ai3SK0pC2pTCUj7pDAKle73JQr6t/5bfpajQxswBbLrq33
UJW42CpbrgSapSpfEKXStOVeteuuztCySUQbdM8rYK/YlXBl2czktyskfF3rfQKjRmWHElU1
tz8BVKevL+evNT1t/RFZK0okch8/4zIV9zboLn3lmu0aTHbWEoTeWWm+39zPUGk8XEFIW386
fw0vRjf1F4KV5m8n2jOo2NuxYX2kyDHkKuW7iyl6Q6FdpLjn1o9vOvxNNdKU6pmUpckN4fyS
1WHM2n7ul1NsnxpVvmyGAS4w1Lb4KeRxCj7OpoOm/prnfJpLa8mh+MPI/jSxYo/jl5myUqiO
3KLFmR2Obb8WetFJFDukhLAqnrvot6bS2VU2k2XK6f1HeP5+P3ZtDspmROjT4rVtWxzJcfCw
06XwohCFVAKR0OlehztFZeSn+RfMeF3zxWqyMzJ0y+yWojK0qR2UtiPwLqZK0HtPpCkGlE1r
T4a16/U1lwkF1CMx8WZZb8Zy9E+4Muybc+xIgTm4xAeQzLR21OtV25oBqB66rrCjgVDwjVYn
lCwYvIZslntVxmix2oWux264tFCrm5MfS88uVH48kIQndocanXNpPb3knI2tHk7xy95JyzIr
xcLpGgXiCmJFZ48yC+x2pLSwBTjHVszUdNbdLNViB7YZH2PylgVusFucUzOeyDF4txtOPJBQ
mPKjTypKX5A6trQ2r3JHX0+S6ZabxsFK2d3vNuJQo7l9tsGS7mV0Yt0S7x3lgQWkWtaFBbJA
5HvpaAAP071+dT1zt4UhfQ6X5twqFcGm8dtc6VBvFxlXXIWZC0h4PXFhUZ6PFCAeXb7hUlR6
nbR0w23k1HY5w/JeCyg147i2i6P4WuAbQ042Um7LcU+JJcDQTx/zU8O319flqaSXac/sU5K5
cMn8f5fkuW3jJGl21lFpEbGYoUtT4lxUhhivEAKXRNVhWw1t/wC2qZN7MqaP0IJHcJ+oHfmn
fbXT2JZRlPyenoX9S+ANJiXB21XA3RHFyRHQWvtw44y3FfCVE1KUtMhafidteanplQ2itfMm
OTc/Ve8oeXfpUyTiUm9LukiCHKupBUpKFNk7JcQyoJAG2untaiF4g1XieC4Xry/iI8xMZ5Z2
Z67ZJjqjXuNIKGnDVj7blH4KI9rdFe5X1ay1+KU6MpZc8kV42zjx1jF4yaTc491lQLoy9AtD
iVtmR9lIBCxI5KCe8U8aKGq+bpyaosQd4nmO2QpFiZZjSVW6z2KdZUMLcAq7K5BuRSvHZHEL
9evy1uyr5n8pB2z9i1eYfJfjbIvF0W3Wy5Pyb2Fw3AwG1NuqcYQG1quC1JSldEVCSkn3U1eh
Otm24RXl6M5xN3I/H+TxxerI84xkcFcNy2u/pLlQZ1G1dlY+hwmlK/x66xeytlcGlVrHJcZH
nN/GbnNYtOOG2XCyR2LLZ2pS1uLixI7xVKRMSFcXHXD0I2TqtWqSU/P3JXb/AOpwxzOsWDef
3t/FLnOtGQfpSpUeTyEWNLpVl11VaFTwKkKPpt6arWbtWXoNVg5xfK93lYnNkwLCF5LabUxb
LhkAWe03aGX0lpX25+p3ucQojb11qaq3WW1x9Rep/U6Sf6hZECYzJsePMW2VMmG65Gl1briJ
klTRYdDQV/ltLSpR9TyP8ynW1XL/ACiEYtZoZr84WlbzdlaxpAwNEFdvXjqn1l5TbjokKX9z
17geFU0FKa06pVw/ymWy7vkjGfNV0bz6dmBgttvPQHLVCiMKKftWu2W460uEKUpbfUn1+Wt3
VIrVfxX7k7tJ/JDMZ3Fh+M5OIxYXGbdJjcm8XRZr3W2DyYbQj8hC/cpX/HbVfZ+b9jeeDDtK
VSnFwBSqnkn0I9D8TrjIJhEE/wDprt8NElAkAJBVuaiiVfj8dMjgWFJIUdif8WswaQkpKVeo
I2UOpGtBZ5ApZBICqg7qp1PxG+swZkUEUSB0rvX4b+tdZNACqAnao6E7j+GqADK9tlEkj3k+
v8tDRSByqVAnen4CuiBgIb/SPQgjSZkMUVU8qH4GnTSQKELG9UgbACtSNGBloVy68AKjrt/P
bQ0UvgMKBHMe1H0n/YayKEugqT9Q9vQev4V0yUBpFakkUG6R6bbU1lm0A1BCiVc6kHfah+Wo
A9thsCkUGoAUJboelQAB0+e+ogCoUUqPFIBqfWvpTQag5thQTyCauHokdaagSg6OElIWDUeg
Hr8tBoUlKgocth+b4j4D56jMsMp91AoGo6ddRoL9LlX8vX1pXpTRBSRcVbSI8krNVFIS2DWu
50s2sGyf073K0sQ8vh3O5Q7am5WdUWLInOhhpTqlH2lxX+GvoK69Dq36seTNm2ZtIYaZkPMt
yO+02o/r0UhK6bckhQB4n0rrgEHAhKilRHImlW09PwHTXT1rIpG9eRsqt0rxZizScsiS8tsX
6i2LcVd5S3AEttsltLaWw0jZStumt3q63mIQJpjO8Z7Feg4FjOO5A1Ifhqbul5vNwWXGGrg7
v/1C3uaaxwVU2NDT103U+xpoZXGA/NoxXL/I1tdtuUW5yPPZRGlSypSmYaWEkqdfd2TVfRKA
a/Gmuf8ATeYjIT+gjxNk3irCssnMyX3rjIVKajWnKA20hlmL7Q65xdKu0haqhTgBVw6U1qvr
lRP5C3Bqtrz3xxOiXtUO5W6PGm3d967vzpbkNx6KAE91rt8VvgivbRXjTrrP9TiUZ7DPDsww
q0Qjco+TQ28Wh/cNwba48uO/Cjgq4JbgpH/VvPLVzLjgqAQBXfS/XblGpkVfM/xK54W7NyK6
QEW1yDSLEg3B9dwW6ndllcQBCOWw5kildum+p0afJlKTNpfkqd/2aZgScgcvWT5NOCZUOU8X
ftoTK6BpypohLqkjkduSVH0GtOjlYNuHpaLzmbcy8TvGqrffLY/fra42i4QLVIQKpAQuQW+0
Uttx2kNnlUgEUArqr+NnjBlfUyH+oK52+b5XvL9vlNy49GW1OtKC09xDSUrSFDYlJ2NNc6Iy
5ReP6d7ZAcw/LLqlu2ovbLkdiFcLqhCmGuQJCCVkBNT8PWmm9dfJpWNCkzsGkKul1wd3HF5E
mWyxcJlzKG43ZbQC9wJA2Uuo5Nih/gNH9bSlrBSVDxqxGmwvJD7syzIh3tDsWFcVLTEZcfSh
VQy26e4IrZWKLUPcd9LShYC2tl2nXTExjdsdu0yxSbFCsaYlwdUUOy/uQ2gMtskEqCCSTxCa
11P1vwyTwd3LR4lYwxu2XFNnZtXYjhEptxlPeLpTyUggmRWpPuVuT6ay/XaYjJrvORqyu12v
Ora3LYxi12GLIdVZHI6kCcWWoywVLp+khKacitSvgBudPTGnIN52Yf5zg42WrRd8Ym25eOzO
99pGj1+9ckLWVyJEsqq4taleq6ACgSNKo6vKB5JH+ntMQ27KTa5MWHnqmEIx6RK4p7be5kKb
LgKOlOXrTWrV54JWxBsCL/idwl3KXhdztbF9ZlQ27/cldttK4DNDLUlxwDk2VFQ5I6n+Gsv1
2SmMEnxwJtmQYT2f37GZ9siYmbhMezJbiW2+6yltSI6ShaeZCiAUJSN69N9Do0srYq04Gqr3
htvxhFzhyLY342dtkpc2CEt9166vL5IQGqdxTlDTj6fgNVvW1hrINtmWQomDN4T47jXYWuEz
PuaH760Fd6e/HBUFuSnkgfbtUoC38xT11t+ty0ad450a/dZGJ/ucS05u/aVwpF2Q7h8RIZ4o
trbQU3yCBxQ2pwAe7ZWw1z6N4gO3JTPJciFNk4fYMtbt9yzA3kquMdpxtkC2c19lqQ+2OLKF
o4e3rrSqRD4XEgTf6lJSLTbLbdrfFV9UNHGDCbaQlK3mgaJW4hXs5dCpRKfTRaq6gm5NBwR1
2y5zmlr/AGpZuDtyVc598ivROLUKQolhqQt4/pgJSpfBPu/DWeuBkxTN8WOZ5ZnWXY89Eaxi
FRsen8NEEwJ5BdDRQp1PTSY2D3BZrsDvx6jUxFbhJVQUPUDY6zBqQidioHau1dh+G3TVBNgU
Og49TsP+GlIpBVQFaUrtQdKaGZeQyFJBG4PoPSuswUCN6AHcigCen8TqSGRZBUQE1H+EDbWY
gZkIgoJBAoKbdd/l89KyAVCTQg0JqRXc/PS0iSYsJ2NSCAKEmldZkYCBKQKjr6V1tIzoMFPu
NNkdCD/ZtrDkeBAIA6VrvQ+n46CQoLdr0qU7AnY/LWdmgJWSk19yiqpPzO2popwAhXUKoehF
N/4nQCFUSEbCjmwKfnoJiSmif8RFQa6CCJTwpT3Cnt6bDVBSwUJUKA02Jr/LbVBSG5RJSfco
K+P8jt8tQtITslv1ptTbbQkGQySSTWoSNh1qR/dpshTEgEJJIqOpruRXXMUKUkDoN/RJPt6f
36jYSkgt7139Af8AjqBgWa12HEfVx6n56CkCONSAqlTWny+GpCA8w3Unrtx+XyppMwGVe4Up
yAO1KbdaaDUiEbqPEe2lRTcivx1NiGVNhPt2CTUg9B/x0JGQc006+yvShpStf56iGrX+av6f
X69ZO6LVgvV//wC7PT/70+nr6f8AJ8dYYV0yfs//AN83D/7l6J/+T/lfm/8Ajf36TS0N7P8A
5Vy/+4/q/wD3r/5PT/2f+X/D89bf3MU0Wqd//pW2f/c3/wA2T/8AG/8An9f/AH//AMb8P+XW
Vs1wXnJv/wDQtt//AIf/AMgf/dn19B9H/wCN/wDwn8dC/kaWzHZH/wAa2/8A3B1H/wAf6fT/
AD/n8fnrd98lf+Qi5/8A3pA/+4+h+j/43r/8n/brqroOTq5//Ejf/wBx/wCWPq/+F0/L8/hr
NtcktBxP/ny//uL/ADE/5/8AlfQf8r/l1IKkZI6Mf/d//wAv8v0/V/7/AP8Aivhrp6udmHsf
xf8A7kn/AP3L/nu/V/8AM/8A7P8A+L/wfLWH/JHR6Dwr/wC8ca/+5P8A72j/APzv8zqn/wCb
/wDiP8Pz08MK6PRfmX/7iy//ACP8y2/X/nf/ACkf/O//AMH/ABfLWEc0Zj50/wDuBj/7q/8A
vV//AO5+v/xkfX/yf/gf+XWvVs29GGJ+tX4fx6+nz11OSAr83T1/D+Hz1rgXo5q+lP4H8f8A
y1ck9Ad6/wD1enXQhB/h+nqr/Y6SD/8AaP4fw6/79BcCT9B/H1662iQpz/KPX+P4auSFPfQr
8U9f79ZMiD1V1/L+PX00rRpC/wAjnX+7rqMsR6J69B0+nW2PAQ/zP+Go0wx6/h/fqZlge6o/
E/h1GgQ2evp1PX+/QzdQj9Z+rr+X+/UYsEvoj8B/vPXWjB2HT0+n+/WTa0JT9Cv46SCT/lj8
fT8fT5aVoyGr6V9Oh6ddZRrg5p6J6dBpQcHRXT19fw/jpCwhX+WOnp+GlG0dFf5aP/q6agWh
Lf8AlJ6dB0/v1lghLf8AD/Y+mtoRA6H/ANX8dLEWPz/+r11tEGejX4j8NZtsbbA509f49ev+
/RUxYQ19Y+np/frdiod2+vp+b8f4/PXJmuDmfrHXqOvTp666VIV+cfgen4/7tZRhiB/l/wAT
0/u0oWLc+j+A6a0AhP1ev8fw1qpcBudR/f110QAH1H/0nRwbsH+RH4a58kJV6/h663Y5oJX0
n8R169NXIhj6k/39f4a0iqKT0V/H/Yayh5AOqvw9fq/h89aeirthD838f7tZMMQPpX/6h001
EV/7h/8ASdaQWDb9en06mFQk9V/gOn92s1E5/kV0/wBjrqbWgx9Kv4aOTFtil/5Q/E/V/frN
gAj6D06f8dZWy5DT/HqPq/DWls0hCf8ALR1/hqRMUn6Ff/s9dHIcCVfR69P7x11Iyw2vpa/v
0vQoW59X+39us8jfQD0PX1/DRyCEfmT/ALemuiIUj6vToev1axYwKb6p/u+rodCNCv8A2vXp
/DWb6Kugh9S/w/P1/hpIU56fx/DSYuF+dX8f92pCIR6/iOurgOToOq/wP+x0IXsOP0H4fx6f
l0MBLXUfh6auDCDe6/w9fw1k7CR/D+H1dP8AdptsHo6K/wCHTr01MzwKX1H4/m/v+eufBLZx
Z/8Aq/h+Ol6FHVXX8v8AHWUbYj1HToOv92tgKH0I6dT+H8fnrLDgUror/ZXTWeRQlP8Ad+br
p4C38hLf+UenQdOvX01MkGn6T+P8evrrLDkS56/+n1/v0VFhNdf/AKfXUbQr8yP/AE+v1f8A
lrL2YYTn1fm6f7U0gK/9v/ausPZ04CX/AJB/H830/wAfnpBiE9T06Hp00ggI6f8ADWTSAv8A
zP8AaulCF6D8B0/u+WssuAj9Y6+v0/36OCYpPVf4+v1fx0EA9B1+j166EaYFdU9dAI6udT/+
z16/7tSE5vf5iuv8f79PBMR+f1/y/wCHT/dqE//Z
------------0860880181AA81244
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

------------0860880181AA81244--



From xen-devel-bounces@lists.xen.org Thu Feb 20 16:20:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:20:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWMi-0002Zu-4K; Thu, 20 Feb 2014 16:20:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWMg-0002Zl-5t
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:20:46 +0000
Received: from [85.158.143.35:4856] by server-1.bemta-4.messagelabs.com id
	1D/0E-31661-D5B26035; Thu, 20 Feb 2014 16:20:45 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-21.messagelabs.com!1392913244!7117073!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27571 invoked from network); 20 Feb 2014 16:20:45 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:20:45 -0000
Received: by mail-ea0-f174.google.com with SMTP id m10so778358eaj.33
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:20:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=++wBERLvgLQ8FwilCNu9/6PTFpOD+mGVkxCkJE9EmCk=;
	b=EF8MR4JOnKbOZqjtrJVK8xvZ3NN/abBmGxSM5ynwD6V074Lhfc80jygGZ/uQ6BhXeI
	1EgAOtAhxZj2Vw6byOf7WuUBWDMpDv1YbUho9FoSQtYc8PW4KbbJR/f4tMmNVv5x9oO0
	eAHDSRByDUrjyjT5Q9AO01xcGfw8HtTzzZFuH/bFFN50BG3ZGPvG67ueCgNG+uilpszx
	mX2eMvarFZVOcRELtR13xEmWok99xWMgbhGiKB5kF9OanC/3yfEKxp8/TtDQpFMlJzLw
	pziXlpcRay6Ff501qKYchlyrhxIX2geOJ7jIo5Fjjq8w+Ig2tGpZPvWMTPWzwvI58qAZ
	sZDg==
X-Gm-Message-State: ALoCoQnr2T/itDPvzBuNGk2Z+irt/8qwU5M6YV6Qhj+I+Pxts04WRvoY/DZE6Oak+pFKB1rrUac2
X-Received: by 10.15.48.1 with SMTP id g1mr2698628eew.51.1392913244545;
	Thu, 20 Feb 2014 08:20:44 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm15489002ees.4.2014.02.20.08.20.42
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 20 Feb 2014 08:20:43 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Thu, 20 Feb 2014 16:20:32 +0000
Message-Id: <1392913234-25429-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	ian.campbell@citrix.com, linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH 0/2] Avoid to use Xen DMA ops when the device is
	protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

This small patch series allows a Xen guest to be used when Xen programs
the IOMMUs.

For this purpose, I have added a new optional property "protected-devices"
which lists the devices protected by an IOMMU.

The first patch creates a new helper that contains the Xen-specific check
to decide whether we need to use swiotlb-xen. The second patch implements
the goal of this patch series.
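Under the proposed binding, the hypervisor node would carry the new property as a
list of phandles; a hypothetical fragment (the "smmu_mmc"/"smmu_usb" phandle names
are invented here for illustration, the rest follows the existing xen.txt binding):

```
hypervisor {
	compatible = "xen,xen-4.4", "xen,xen";
	reg = <0x0 0xb0000000 0x0 0x20000>;
	interrupts = <1 15 0xf08>;
	/* hypothetical: devices whose IOMMU Xen has programmed */
	protected-devices = <&smmu_mmc>, <&smmu_usb>;
};
```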

Regards,

Julien Grall (2):
  arm/xen: Introduce need_xen_dma_ops and use it in get_dma_ops
  arm/xen: Don't use xen DMA ops when the device is protected by an
    IOMMU

 Documentation/devicetree/bindings/arm/xen.txt |    2 +
 arch/arm/include/asm/dma-mapping.h            |    5 +-
 arch/arm/include/asm/xen/dma-mapping.h        |   22 ++++++++
 arch/arm/include/asm/xen/hypervisor.h         |    2 -
 arch/arm/xen/enlighten.c                      |   75 +++++++++++++++++++++++++
 5 files changed, 101 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/dma-mapping.h

-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Thu Feb 20 16:21:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:21:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWNR-0002du-LK; Thu, 20 Feb 2014 16:21:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWNQ-0002dk-7Q
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:21:32 +0000
Received: from [85.158.137.68:59045] by server-10.bemta-3.messagelabs.com id
	FD/71-07302-B8B26035; Thu, 20 Feb 2014 16:21:31 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-31.messagelabs.com!1392913288!3209835!1
X-Originating-IP: [209.85.215.181]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24615 invoked from network); 20 Feb 2014 16:21:29 -0000
Received: from mail-ea0-f181.google.com (HELO mail-ea0-f181.google.com)
	(209.85.215.181)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:21:28 -0000
Received: by mail-ea0-f181.google.com with SMTP id k10so1022865eaj.12
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:21:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=cSPA0liprEYohos7LJNr1eZWUQk5Mg+9fN9f+6KpdEU=;
	b=Q2bU8Pv/1qsEnHSeVD/LPBXFr6X3T1EBl+KIw6BSGcBVmf4wBB7/H05s+91xdzspj2
	FB1NjpkQjP7H7ag3ep9bEBYpQ3k/zXGijhhxMNUs645SWxPFC5j38QRwV9sWwgInjrkc
	gWOIq1/N6bKqfJGw1mMME+7wjhhEWeF8K4zPUNURQL7p9uj+YYqrbSckMlIkvcd0s2ih
	IkokWvEz2/Yo6BL7ajscdJEhfVzyuI4VTl2vNTOoaoZXa+S0OQzoX0per1Q4yqJbEpBV
	JNW4ETBD435lzrbfI66UD5iwhLrlhztSJ9aE1jpR5N1JoPHeG4R2wP/9tKHl5JFIv5gl
	uz7g==
X-Gm-Message-State: ALoCoQkfbQ8OYvVlw/FwuROSKrAshz6r27gsV54yYK3RX5E3sYuCx4LID+0hj7EiUwbYm0vxdpWD
X-Received: by 10.14.22.5 with SMTP id s5mr2720867ees.85.1392913288717;
	Thu, 20 Feb 2014 08:21:28 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id m9sm15530834eeh.3.2014.02.20.08.21.25
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 20 Feb 2014 08:21:27 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Thu, 20 Feb 2014 16:21:21 +0000
Message-Id: <1392913281-25483-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@linaro.org>,
	ian.campbell@citrix.com, linux-arm-kernel@lists.infradead.org,
	Russell King <linux@arm.linux.org.uk>
Subject: [Xen-devel] [PATCH 1/2] arm/xen: Introduce need_xen_dma_ops and use
	it in get_dma_ops
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If Xen programs the IOMMU to protect a device, it can safely use its own DMA
ops. Some devices may use DMA but are not protected by an IOMMU; such a device
still needs to use xen-swiotlb.

This patch introduces need_xen_dma_ops, which checks whether a device needs to
use the Xen DMA ops. For now, it only checks whether Linux is running as dom0.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Russell King <linux@arm.linux.org.uk>
---
 arch/arm/include/asm/dma-mapping.h     |    5 ++---
 arch/arm/include/asm/xen/dma-mapping.h |   13 +++++++++++++
 arch/arm/include/asm/xen/hypervisor.h  |    2 --
 3 files changed, 15 insertions(+), 5 deletions(-)
 create mode 100644 arch/arm/include/asm/xen/dma-mapping.h

diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index e701a4d..a40d077 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -11,8 +11,7 @@
 #include <asm-generic/dma-coherent.h>
 #include <asm/memory.h>
 
-#include <xen/xen.h>
-#include <asm/xen/hypervisor.h>
+#include <asm/xen/dma-mapping.h>
 
 #define DMA_ERROR_CODE	(~0)
 extern struct dma_map_ops arm_dma_ops;
@@ -27,7 +26,7 @@ static inline struct dma_map_ops *__generic_dma_ops(struct device *dev)
 
 static inline struct dma_map_ops *get_dma_ops(struct device *dev)
 {
-	if (xen_initial_domain())
+	if (need_xen_dma_ops(dev))
 		return xen_dma_ops;
 	else
 		return __generic_dma_ops(dev);
diff --git a/arch/arm/include/asm/xen/dma-mapping.h b/arch/arm/include/asm/xen/dma-mapping.h
new file mode 100644
index 0000000..002fc57
--- /dev/null
+++ b/arch/arm/include/asm/xen/dma-mapping.h
@@ -0,0 +1,13 @@
+#ifndef _ASM_ARM_XEN_DMA_MAPPING_H
+#define _ASM_ARM_XEN_DMA_MAPPING_H
+
+#include <xen/xen.h>
+
+extern struct dma_map_ops *xen_dma_ops;
+
+static inline bool need_xen_dma_ops(struct device *dev)
+{
+	return xen_initial_domain();
+}
+
+#endif /* _ASM_ARM_XEN_DMA_MAPPING_H */
diff --git a/arch/arm/include/asm/xen/hypervisor.h b/arch/arm/include/asm/xen/hypervisor.h
index 1317ee4..d7ab99a 100644
--- a/arch/arm/include/asm/xen/hypervisor.h
+++ b/arch/arm/include/asm/xen/hypervisor.h
@@ -16,6 +16,4 @@ static inline enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
 	return PARAVIRT_LAZY_NONE;
 }
 
-extern struct dma_map_ops *xen_dma_ops;
-
 #endif /* _ASM_ARM_XEN_HYPERVISOR_H */
-- 
1.7.10.4




From xen-devel-bounces@lists.xen.org Thu Feb 20 16:22:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWOG-0002jB-4g; Thu, 20 Feb 2014 16:22:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWOE-0002in-7M
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:22:22 +0000
Received: from [85.158.143.35:27184] by server-2.bemta-4.messagelabs.com id
	46/69-10891-DBB26035; Thu, 20 Feb 2014 16:22:21 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392913340!7134558!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25853 invoked from network); 20 Feb 2014 16:22:20 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:22:20 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so1028750eak.30
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:22:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=hjOukUcTFUX5sOwdAQWTaJ9tVIAUSel9t9TIbkUrvlk=;
	b=matHRy0noshp3iSY/ou6Iw32/slt4biX4RSyy88y/5oCY/FAXX2+/o4CWjRY297Daz
	4a2Dqm+itDQEYKYI1b09WsxX09zAjUA/B4Ro+JMdl/9GCU5WHA3OLmT8TlM7jHF+vSHL
	04Qb7AnReL/XnUM0NKIZBF2wooIL79ejyE8osRJJlt1A0s4rHhnHqtsDwaF47lV2r3pB
	manL7OKTO9acELr6ssGzlJG0hFI0ymwpk+iqMBQet5+u2pWSEW1IkmLuekvviVI6LQ0l
	bveW8/0smudGTVG0HofS5SZZ+30VTU+EVPP0rtjXO7CgNtYWr9KSIbNEPliod2LWv/8f
	XCng==
X-Gm-Message-State: ALoCoQko1Kpdf3MgmUuYb5RNHb82OVCf6X25lfDW5MzMVH34KeQ3enAiW0A8iNwE3qPlKAxmKvWr
X-Received: by 10.15.45.131 with SMTP id b3mr2656585eew.105.1392913340060;
	Thu, 20 Feb 2014 08:22:20 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	j41sm15524543eeg.10.2014.02.20.08.22.17 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 20 Feb 2014 08:22:19 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Thu, 20 Feb 2014 16:21:41 +0000
Message-Id: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Russell King <linux@arm.linux.org.uk>, ian.campbell@citrix.com,
	Pawel Moll <pawel.moll@arm.com>,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	Julien Grall <julien.grall@linaro.org>,
	Rob Herring <robh+dt@kernel.org>, Rob Landley <rob@landley.net>,
	Kumar Gala <galak@codeaurora.org>, xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
	device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Only Xen knows whether a device can safely avoid using xen-swiotlb.
This patch introduces a new property, "protected-devices", for the hypervisor
node, which lists the devices whose IOMMU has been correctly programmed by Xen.

During Linux boot, Xen-specific code creates a hash table containing all
these devices. The hash table is used in need_xen_dma_ops to check whether
the Xen DMA ops need to be used for the current device.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Rob Landley <rob@landley.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: devicetree@vger.kernel.org
---
 Documentation/devicetree/bindings/arm/xen.txt |    2 +
 arch/arm/include/asm/xen/dma-mapping.h        |   11 +++-
 arch/arm/xen/enlighten.c                      |   75 +++++++++++++++++++++++++
 3 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
index 0f7b9c2..ee25a57 100644
--- a/Documentation/devicetree/bindings/arm/xen.txt
+++ b/Documentation/devicetree/bindings/arm/xen.txt
@@ -15,6 +15,8 @@ the following properties:
 - interrupts: the interrupt used by Xen to inject event notifications.
   A GIC node is also required.
 
+- protected-devices: (optional) List of phandles to device nodes whose
+IOMMU has been programmed by Xen.
 
 Example (assuming #address-cells = <2> and #size-cells = <2>):
 
diff --git a/arch/arm/include/asm/xen/dma-mapping.h b/arch/arm/include/asm/xen/dma-mapping.h
index 002fc57..da8e4fe 100644
--- a/arch/arm/include/asm/xen/dma-mapping.h
+++ b/arch/arm/include/asm/xen/dma-mapping.h
@@ -5,9 +5,18 @@
 
 extern struct dma_map_ops *xen_dma_ops;
 
+#ifdef CONFIG_XEN
+bool xen_is_protected_device(const struct device *dev);
+#else
+static inline bool xen_is_protected_device(const struct device *dev)
+{
+	return false;
+}
+#endif
+
 static inline bool need_xen_dma_ops(struct device *dev)
 {
-	return xen_initial_domain();
+	return xen_initial_domain() && !xen_is_protected_device(dev);
 }
 
 #endif /* _ASM_ARM_XEN_DMA_MAPPING_H */
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index b96723e..f124c8c 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -24,6 +24,7 @@
 #include <linux/cpuidle.h>
 #include <linux/cpufreq.h>
 #include <linux/cpu.h>
+#include <linux/hashtable.h>
 
 #include <linux/mm.h>
 
@@ -53,6 +54,42 @@ EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
 
 static __read_mostly int xen_events_irq = -1;
 
+/* Hash table for list of devices protected by an IOMMU in Xen */
+#define DEV_HASH_BITS	4
+#define DEV_HASH_SIZE	(1 << DEV_HASH_BITS)
+
+static struct hlist_head *protected_devices;
+
+struct protected_device
+{
+	struct hlist_node hlist;
+	struct device_node *node;
+};
+
+static unsigned long pdev_hash(const struct device_node *node)
+{
+	return (node->phandle % DEV_HASH_SIZE);
+}
+
+bool xen_is_protected_device(const struct device *dev)
+{
+	const struct device_node *node = dev->of_node;
+	unsigned long hash;
+	const struct protected_device *pdev;
+
+	if (!node || !node->phandle)
+		return false;
+
+	hash = pdev_hash(node);
+
+	hlist_for_each_entry(pdev, &protected_devices[hash], hlist) {
+		if (node == pdev->node)
+			return true;
+	}
+
+	return false;
+}
+
 /* map fgmfn of domid to lpfn in the current domain */
 static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 			    unsigned int domid)
@@ -235,6 +272,8 @@ static int __init xen_guest_init(void)
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
 	phys_addr_t grant_frames;
+	int i = 0;
+	struct device_node *dev;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -259,6 +298,31 @@ static int __init xen_guest_init(void)
 	if (xen_events_irq < 0)
 		return -ENODEV;
 
+	protected_devices = kmalloc(DEV_HASH_SIZE * sizeof (*protected_devices),
+				    GFP_KERNEL);
+	if (!protected_devices)
+		return -ENOMEM;
+
+	for (i = 0; i < DEV_HASH_SIZE; i++)
+		INIT_HLIST_HEAD(&protected_devices[i]);
+
+	pr_info("List of protected devices:\n");
+	i = 0;
+	while ((dev = of_parse_phandle(node, "protected-devices", i))) {
+		struct protected_device *pdev;
+		unsigned long hash;
+
+		pr_info(" - %s\n", of_node_full_name(dev));
+		pdev = kmalloc(sizeof (*pdev), GFP_KERNEL);
+		if (!pdev)
+			goto free_hash;
+
+		pdev->node = dev;
+		hash = pdev_hash(dev);
+		hlist_add_head(&pdev->hlist, &protected_devices[hash]);
+		i++;
+	}
+
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -324,6 +388,17 @@ static int __init xen_guest_init(void)
 	register_cpu_notifier(&xen_cpu_notifier);
 
 	return 0;
+free_hash:
+	for (i = 0; i < DEV_HASH_SIZE; i++) {
+		struct protected_device *pdev;
+		struct hlist_node *next;
+
+		hlist_for_each_entry_safe(pdev, next, &protected_devices[i],
+					  hlist)
+			kfree(pdev);
+	}
+	kfree(protected_devices);
+	return -ENOMEM;
 }
 early_initcall(xen_guest_init);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:22:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWOG-0002jB-4g; Thu, 20 Feb 2014 16:22:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWOE-0002in-7M
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:22:22 +0000
Received: from [85.158.143.35:27184] by server-2.bemta-4.messagelabs.com id
	46/69-10891-DBB26035; Thu, 20 Feb 2014 16:22:21 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-21.messagelabs.com!1392913340!7134558!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25853 invoked from network); 20 Feb 2014 16:22:20 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:22:20 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so1028750eak.30
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:22:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=hjOukUcTFUX5sOwdAQWTaJ9tVIAUSel9t9TIbkUrvlk=;
	b=matHRy0noshp3iSY/ou6Iw32/slt4biX4RSyy88y/5oCY/FAXX2+/o4CWjRY297Daz
	4a2Dqm+itDQEYKYI1b09WsxX09zAjUA/B4Ro+JMdl/9GCU5WHA3OLmT8TlM7jHF+vSHL
	04Qb7AnReL/XnUM0NKIZBF2wooIL79ejyE8osRJJlt1A0s4rHhnHqtsDwaF47lV2r3pB
	manL7OKTO9acELr6ssGzlJG0hFI0ymwpk+iqMBQet5+u2pWSEW1IkmLuekvviVI6LQ0l
	bveW8/0smudGTVG0HofS5SZZ+30VTU+EVPP0rtjXO7CgNtYWr9KSIbNEPliod2LWv/8f
	XCng==
X-Gm-Message-State: ALoCoQko1Kpdf3MgmUuYb5RNHb82OVCf6X25lfDW5MzMVH34KeQ3enAiW0A8iNwE3qPlKAxmKvWr
X-Received: by 10.15.45.131 with SMTP id b3mr2656585eew.105.1392913340060;
	Thu, 20 Feb 2014 08:22:20 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id
	j41sm15524543eeg.10.2014.02.20.08.22.17 for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Thu, 20 Feb 2014 08:22:19 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: linux-kernel@vger.kernel.org,
	stefano.stabellini@eu.citrix.com
Date: Thu, 20 Feb 2014 16:21:41 +0000
Message-Id: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Russell King <linux@arm.linux.org.uk>, ian.campbell@citrix.com,
	Pawel Moll <pawel.moll@arm.com>,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	Julien Grall <julien.grall@linaro.org>,
	Rob Herring <robh+dt@kernel.org>, Rob Landley <rob@landley.net>,
	Kumar Gala <galak@codeaurora.org>, xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
	device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Only Xen knows whether a device can safely avoid using xen-swiotlb.
This patch introduces a new property, "protected-devices", in the hypervisor
node, listing the devices whose IOMMU has been correctly programmed by Xen.

During Linux boot, Xen-specific code creates a hash table containing all
these devices. The hash table is used in need_xen_dma_ops to check whether
the Xen DMA ops need to be used for the current device.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Rob Herring <robh+dt@kernel.org>
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ian Campbell <ijc+devicetree@hellion.org.uk>
Cc: Kumar Gala <galak@codeaurora.org>
Cc: Rob Landley <rob@landley.net>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: devicetree@vger.kernel.org
---
 Documentation/devicetree/bindings/arm/xen.txt |    2 +
 arch/arm/include/asm/xen/dma-mapping.h        |   11 +++-
 arch/arm/xen/enlighten.c                      |   75 +++++++++++++++++++++++++
 3 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
index 0f7b9c2..ee25a57 100644
--- a/Documentation/devicetree/bindings/arm/xen.txt
+++ b/Documentation/devicetree/bindings/arm/xen.txt
@@ -15,6 +15,8 @@ the following properties:
 - interrupts: the interrupt used by Xen to inject event notifications.
   A GIC node is also required.
 
+- protected-devices: (optional) List of phandles to device nodes whose
+IOMMU has been programmed by Xen.
 
 Example (assuming #address-cells = <2> and #size-cells = <2>):
 
diff --git a/arch/arm/include/asm/xen/dma-mapping.h b/arch/arm/include/asm/xen/dma-mapping.h
index 002fc57..da8e4fe 100644
--- a/arch/arm/include/asm/xen/dma-mapping.h
+++ b/arch/arm/include/asm/xen/dma-mapping.h
@@ -5,9 +5,18 @@
 
 extern struct dma_map_ops *xen_dma_ops;
 
+#ifdef CONFIG_XEN
+bool xen_is_protected_device(const struct device *dev);
+#else
+static inline bool xen_is_protected_device(const struct device *dev)
+{
+	return false;
+}
+#endif
+
 static inline bool need_xen_dma_ops(struct device *dev)
 {
-	return xen_initial_domain();
+	return xen_initial_domain() && !xen_is_protected_device(dev);
 }
 
 #endif /* _ASM_ARM_XEN_DMA_MAPPING_H */
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index b96723e..f124c8c 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -24,6 +24,7 @@
 #include <linux/cpuidle.h>
 #include <linux/cpufreq.h>
 #include <linux/cpu.h>
+#include <linux/hashtable.h>
 
 #include <linux/mm.h>
 
@@ -53,6 +54,42 @@ EXPORT_SYMBOL_GPL(xen_platform_pci_unplug);
 
 static __read_mostly int xen_events_irq = -1;
 
+/* Hash table for list of devices protected by an IOMMU in Xen */
+#define DEV_HASH_BITS	4
+#define DEV_HASH_SIZE	(1 << DEV_HASH_BITS)
+
+static struct hlist_head *protected_devices;
+
+struct protected_device
+{
+	struct hlist_node hlist;
+	struct device_node *node;
+};
+
+static unsigned long pdev_hash(const struct device_node *node)
+{
+	return (node->phandle % DEV_HASH_SIZE);
+}
+
+bool xen_is_protected_device(const struct device *dev)
+{
+	const struct device_node *node = dev->of_node;
+	unsigned long hash;
+	const struct protected_device *pdev;
+
+	if (!node || !node->phandle)
+		return false;
+
+	hash = pdev_hash(node);
+
+	hlist_for_each_entry(pdev, &protected_devices[hash], hlist) {
+		if (node == pdev->node)
+			return true;
+	}
+
+	return false;
+}
+
 /* map fgmfn of domid to lpfn in the current domain */
 static int map_foreign_page(unsigned long lpfn, unsigned long fgmfn,
 			    unsigned int domid)
@@ -235,6 +272,8 @@ static int __init xen_guest_init(void)
 	const char *xen_prefix = "xen,xen-";
 	struct resource res;
 	phys_addr_t grant_frames;
+	int i = 0;
+	struct device_node *dev;
 
 	node = of_find_compatible_node(NULL, NULL, "xen,xen");
 	if (!node) {
@@ -259,6 +298,31 @@ static int __init xen_guest_init(void)
 	if (xen_events_irq < 0)
 		return -ENODEV;
 
+	protected_devices = kmalloc(DEV_HASH_SIZE * sizeof (*protected_devices),
+				    GFP_KERNEL);
+	if (!protected_devices)
+		return -ENOMEM;
+
+	for (i = 0; i < DEV_HASH_SIZE; i++)
+		INIT_HLIST_HEAD(&protected_devices[i]);
+
+	pr_info("List of protected devices:\n");
+	i = 0;
+	while ((dev = of_parse_phandle(node, "protected-devices", i))) {
+		struct protected_device *pdev;
+		unsigned long hash;
+
+		pr_info(" - %s\n", of_node_full_name(dev));
+		pdev = kmalloc(sizeof (*pdev), GFP_KERNEL);
+		if (!pdev)
+			goto free_hash;
+
+		pdev->node = dev;
+		hash = pdev_hash(dev);
+		hlist_add_head(&pdev->hlist, &protected_devices[hash]);
+		i++;
+	}
+
 	xen_domain_type = XEN_HVM_DOMAIN;
 
 	xen_setup_features();
@@ -324,6 +388,17 @@ static int __init xen_guest_init(void)
 	register_cpu_notifier(&xen_cpu_notifier);
 
 	return 0;
+free_hash:
+	for (i = 0; i < DEV_HASH_SIZE; i++) {
+		struct protected_device *pdev;
+		struct hlist_node *next;
+
+		hlist_for_each_entry_safe(pdev, next, &protected_devices[i],
+					  hlist)
+			kfree(pdev);
+	}
+	kfree(protected_devices);
+	return -ENOMEM;
 }
 early_initcall(xen_guest_init);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:23:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:23:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWPM-0002rD-04; Thu, 20 Feb 2014 16:23:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGWPH-0002qS-G0
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 16:23:27 +0000
Received: from [193.109.254.147:46822] by server-14.bemta-14.messagelabs.com
	id 8A/E2-29228-EFB26035; Thu, 20 Feb 2014 16:23:26 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392913405!459126!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9369 invoked from network); 20 Feb 2014 16:23:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:23:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102681210"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 16:23:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 11:23:24 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGWPE-00050Z-4Z;
	Thu, 20 Feb 2014 16:23:24 +0000
Message-ID: <53062BF6.50303@eu.citrix.com>
Date: Thu, 20 Feb 2014 16:23:18 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <alpine.DEB.2.02.1402201452550.15812@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402201452550.15812@kaball.uk.xensource.com>
X-DLP: MIA1
Cc: Anthony.Perard@citrix.com, xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] request for a release-ack on qemu-xen
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/20/2014 02:56 PM, Stefano Stabellini wrote:
> Hi George,
> the following two commits in qemu-xen:
>
> commit 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
> Author: Anthony PERARD <anthony.perard@citrix.com>
> Date:   Wed Sep 25 16:43:12 2013 +0000
>
>      xen: Enable cpu-hotplug on xenfv machine.
>      
>      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>      
>      Conflicts:
>          hw/i386/pc_piix.c
>
> commit 29a757f0fce7bfdf965d9e8ea48e8e34f997a32c
> Author: Anthony PERARD <anthony.perard@citrix.com>
> Date:   Wed Sep 25 16:41:48 2013 +0000
>
>      xen: Fix vcpu initialization.
>      
>      Each vcpu need a evtchn binded in qemu, even those that are
>      offline at QEMU initialisation.
>      
>      Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>      Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
>
> were already present in the tree and fix cpu-hotplug.
> One of the latest merges from the QEMU 1.6 stable tree effectively
> reverted them. May I have a release exception to push them back in?
> Without them cpu-hotplug does not work.

It looks like this touches vcpu initialization, which is pretty core to 
the functionality of a VM as a whole.  Hotplug, on the other hand, while 
an important feature, is much less commonly used.

Given that we're hoping to have an RC5 tomorrow and tag a release next 
week, I think I'd rather these wait until 4.4.1.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:26:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWSK-0003LO-Fb; Thu, 20 Feb 2014 16:26:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>)
	id 1WGWSI-0003Ks-Op; Thu, 20 Feb 2014 16:26:34 +0000
Received: from [85.158.143.35:6355] by server-1.bemta-4.messagelabs.com id
	90/07-31661-ABC26035; Thu, 20 Feb 2014 16:26:34 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392913591!7153637!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6554 invoked from network); 20 Feb 2014 16:26:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:26:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104374556"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 16:26:31 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 11:26:30 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGWSE-00053H-Au;
	Thu, 20 Feb 2014 16:26:30 +0000
Date: Thu, 20 Feb 2014 16:26:24 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: George Dunlap <George.Dunlap@eu.citrix.com>
In-Reply-To: <CAFLBxZbHbCs=H6J0-n_YCxj0xx0Hp6ZpLOrzWZo05A1OixmeKA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402201623480.15812@kaball.uk.xensource.com>
References: <CAN71wULW2S7izfJBJ5mpQ-0txw_StT7_YEk4MdBGhEwuSexo+Q@mail.gmail.com>
	<CAFLBxZbHbCs=H6J0-n_YCxj0xx0Hp6ZpLOrzWZo05A1OixmeKA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	Kai Luo <luokain@gmail.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Problem when launch instance in OpenStack when we
 use xen as virtualization layer
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014, George Dunlap wrote:
> On Wed, Feb 19, 2014 at 1:59 PM, Kai Luo <luokain@gmail.com> wrote:
> > Hello all:
> >       Recently we are trying to deploy OpenStack with xen 4.3.0 as
> > virtualization layer; however, an error occurred when we launched an instance
> > from an image. We have confirmed that OpenStack services work fine and the
> > problem may lie in the Xen layer. We checked the xend log and got the
> > following exception:
> 
> I think I would start by asking this on xen-users (CC'd) -- it seems
> likely that this is a configuration issue between openstack and Xen,
> and there's more expertise for that here than on xen-devel (where we
> have more expertise about issues internal to Xen).

I would start by switching to the libxl libvirt driver and making sure
that libvirt works correctly on its own.
I would avoid virt-manager and test using virsh instead.

If this configuration fails for you, what is the error that libxl
and/or libvirt reports?

Once you have a working libvirt+libxl setup, you might want to try
out OpenStack on top of it.


> > TapdiskException: ('create',
> > '-aqcow2:/var/lib/nova/instances/db4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
> > (32512)
> > [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:3078)
> > XendDomainInfo.destroy: domid=21
> > [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2408) No device model
> > [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2410) Releasing devices
> > [2014-02-18 21:13:43 11395] ERROR (SrvBase:88) Request start failed.
> > Traceback (most recent call last):
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/web/SrvBase.py", line 85, in
> > perform
> >     return op_method(op, req)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/SrvDomain.py", line 77, in
> > op_start
> >     return self.xd.domain_start(self.dom.getName(), paused)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomain.py", line 1070, in
> > domain_start
> >     dominfo.start(is_managed = True)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 474,
> > in start
> >     XendTask.log_progress(31, 60, self._initDomain)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendTask.py", line 209, in
> > log_progress
> >     retval = func(*args, **kwds)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 2845,
> > in _initDomain
> >     self._configureBootloader()
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3286,
> > in _configureBootloader
> >     mounted_vbd_uuid = dom0.create_vbd(vbd, disk);
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3979,
> > in create_vbd
> >     devid = dev_control.createDevice(config)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> > 172, in createDevice
> >     device = TapdiskController.create(params, file)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> > 284, in create
> >     return TapdiskController.exc('create', '-a%s:%s' % (dtype, image))
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> > 231, in exc
> >     (args, rc, out, err))
> > TapdiskException: ('create',
> > '-aqcow2:/var/lib/nova/instances/db4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
> > (32512)
> > [2014-02-18 21:13:44 11395] INFO (XendDomain:1126) Domain instance-00000035
> > (db4b07c5-7443-418b-81c4-a1f55fb264d3) deleted.
> >
> >      We tried to use images in RAW or QCOW2 format and to switch the Xen
> > toolstack from xm to xl; it still didn't work. When we switched the toolstack
> > to xl and stopped the xend service, virt-manager did not work either. We don't
> > know what caused this problem. We know XCP may be a better choice, but we
> > have to use Xen as the OpenStack virtualization layer because some
> > modifications have been made in Xen. Could you give any suggestions?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > Hello all:
> >       Recently we have been trying to deploy OpenStack with xen 4.3.0 as the
> > virtualization layer; however, an error occurred when we launched an instance
> > from an image. We have confirmed that the OpenStack services work fine, so the
> > problem may lie in the xen layer. We checked the xend log and got the
> > following exception:
> 
> I think I would start by asking this on xen-users (CC'd) -- it seems
> likely that this is a configuration issue between openstack and Xen,
> and there's more expertise for that here than on xen-devel (where we
> have more expertise about issues internal to Xen).

I would start by switching to the libxl libvirt driver and making sure
that libvirt works correctly on its own.
I would avoid virt-manager and test using virsh instead.

If this configuration fails for you, what is the error that libxl
and/or libvirt reports?

Once you have a working libvirt+libxl setup then you might want to try
out OpenStack on top of it.


> > TapdiskException: ('create',
> > '-aqcow2:arbva/instances4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
> > (32512  )
> > [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:3078)
> > XendDomainInfo.destroy: domid=21
> > [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2408) No device model
> > [2014-02-18 21:13:43 11395] DEBUG (XendDomainInfo:2410) Releasing devices
> > [2014-02-18 21:13:43 11395] ERROR (SrvBase:88) Request start failed.
> > Traceback (most recent call last):
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/web/SrvBase.py", line 85, in
> > perform
> >     return op_method(op, req)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/SrvDomain.py", line 77, in
> > op_start
> >     return self.xd.domain_start(self.dom.getName(), paused)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomain.py", line 1070, in
> > domain_start
> >     dominfo.start(is_managed = True)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 474,
> > in start
> >     XendTask.log_progress(31, 60, self._initDomain)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendTask.py", line 209, in
> > log_progress
> >     retval = func(*args, **kwds)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 2845,
> > in _initDomain
> >     self._configureBootloader()
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3286,
> > in _configureBootloader
> >     mounted_vbd_uuid = dom0.create_vbd(vbd, disk);
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/XendDomainInfo.py", line 3979,
> > in create_vbd
> >     devid = dev_control.createDevice(config)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> > 172, in createDevice
> >     device = TapdiskController.create(params, file)
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> > 284, in create
> >     return TapdiskController.exc('create', '-a%s:%s' % (dtype, image))
> >   File "/usr/lib/xen-4.3/bin/../lib/python/xen/xend/server/BlktapController.py", line
> > 231, in exc
> >     (args, rc, out, err))
> > TapdiskException: ('create',
> > '-aqcow2:arbva/instances4b07c5-7443-418b-81c4-a1f55fb264d3/disk') failed
> > (32512  )
> > [2014-02-18 21:13:44 11395] INFO (XendDomain:1126) Domain instance-00000035
> > (db4b07c5-7443-418b-81c4-a1f55fb264d3) deleted.
> >
> >      We tried to use images in RAW or QCOW2 format and switched the xen
> > toolstack from xm to xl; it still didn't work. When we switched the toolstack to xl
> > and stopped the xend service, virt-manager did not work either. We don't
> > know what caused this problem, and we know XCP may be a better choice, but we
> > have to use xen as the OpenStack virtualization layer because some modifications
> > have been made in xen. Could you give any suggestions?


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:36:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWbX-0004PY-4I; Thu, 20 Feb 2014 16:36:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGWbV-0004PM-3O
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:36:05 +0000
Received: from [193.109.254.147:60799] by server-7.bemta-14.messagelabs.com id
	D3/D1-23424-4FE26035; Thu, 20 Feb 2014 16:36:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392914162!1992304!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6502 invoked from network); 20 Feb 2014 16:36:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:36:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104379535"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 16:36:02 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 11:36:01 -0500
Message-ID: <1392914159.32657.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 20 Feb 2014 16:35:59 +0000
In-Reply-To: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
References: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Russell King <linux@arm.linux.org.uk>, Pawel Moll <pawel.moll@arm.com>,
	stefano.stabellini@eu.citrix.com,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	linux-kernel@vger.kernel.org, Rob Herring <robh+dt@kernel.org>,
	Rob Landley <rob@landley.net>, Kumar Gala <galak@codeaurora.org>,
	xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
 device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> Only Xen knows whether a device can safely avoid using xen-swiotlb.
> This patch introduces a new property, "protected-devices", for the hypervisor
> node, which lists the devices whose IOMMU mappings have been correctly programmed by Xen.
> 
> During Linux boot, Xen-specific code will create a hash table which
> contains all these devices. The hash table will be used in need_xen_dma_ops
> to check whether the Xen DMA ops need to be used for the current device.

Is it out of the question to find a field within struct device itself to
store this, e.g. in struct device_dma_parameters, and avoid the
need for a hashtable lookup?

device->iommu_group might be another option, if we can create our own
group?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:38:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:38:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWdc-0004aN-RK; Thu, 20 Feb 2014 16:38:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWdc-0004aH-9I
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:38:16 +0000
Received: from [193.109.254.147:62246] by server-3.bemta-14.messagelabs.com id
	93/D0-00432-77F26035; Thu, 20 Feb 2014 16:38:15 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1392914294!5739859!1
X-Originating-IP: [74.125.83.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8352 invoked from network); 20 Feb 2014 16:38:14 -0000
Received: from mail-ee0-f52.google.com (HELO mail-ee0-f52.google.com)
	(74.125.83.52)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:38:14 -0000
Received: by mail-ee0-f52.google.com with SMTP id c41so202042eek.39
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:38:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=EKJ6CIZxGhoNgsszZG+x70YpDmuw2lD13MtL8b04ssY=;
	b=VVZN93jOzwNrx8knRtM53MTrSK0Z/Y/B99XPX2e9NUBmR9VwonZjYC+f7yZUsnwhMG
	TXUdxrHKI5y70uigyYJkxPDPg9gvOfUQKSMiGQ+9tMThdeYWQwtDiPUxZNfgtwI1wetF
	JnsH3ZkrL1Tu/VPy+hrIDhLt67e6xArzVKqnemZbeZ5gx6Ik+RK4ejxfytT3ClW3j0sn
	m+2eHR6tpaW7hwIvVKsKXp210H33US3XKFazfUCo2j6N1yoFKl9/Z/VftCXF+zkPw8Zl
	kZfBO55AvMnC3X0wFHv2b7Pj8StwsKORKbJ0wkh6up67w4Bg+KdeBCek/R7a1bbnFl04
	C28g==
X-Gm-Message-State: ALoCoQnVmiU6acC6vaQywI+odBfriyLYGjOyuaBFYQbW6iSahxHdkNhxlN+JZ3Je5TWeA5HuO6Uz
X-Received: by 10.14.204.9 with SMTP id g9mr2890279eeo.82.1392914294205;
	Thu, 20 Feb 2014 08:38:14 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id q44sm15697513eez.1.2014.02.20.08.38.12
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 08:38:13 -0800 (PST)
Message-ID: <53062F73.5090600@linaro.org>
Date: Thu, 20 Feb 2014 16:38:11 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1388957191-10337-1-git-send-email-julien.grall@linaro.org>
	<1388957191-10337-5-git-send-email-julien.grall@linaro.org>
	<1392808746.23084.137.camel@kazak.uk.xensource.com>
In-Reply-To: <1392808746.23084.137.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>,
	stefano.stabellini@eu.citrix.com, tim@xen.org, patches@linaro.org
Subject: Re: [Xen-devel] [RFC 4/6] xen/console: Add support for early printk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:19 AM, Ian Campbell wrote:
> On Sun, 2014-01-05 at 21:26 +0000, Julien Grall wrote:
>> On ARM, a function (early_printk) was introduced to output messages when the
>> serial port is not initialized.
>>
>> This solution is fragile because the developer needs to know whether the serial
>> port is initialized in order to use either early_printk or printk. Moreover some
>> functions (mainly in common code) only use printk, which sometimes results in
>> lost messages.
>>
>> Directly call early_printk in console code when the serial port is not yet
>> initialized. For this purpose use serial_steal_fn.
> 
> This relies on nothing stealing the console over the period where the
> console is initialised. Perhaps that is already not advisable/possible?

serial_steal_fn is set in console_steal. That function checks whether the
serial handle is valid.

The handle is only valid after console_init_preirq (which sets
serial_steal_fn to NULL). So I think we are safe.

>>
>> Cc: Keir Fraser <keir@xen.org>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> ---
>>  xen/drivers/char/console.c | 13 ++++++++++++-
>>  1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
>> index 532c426..f83c92e 100644
>> --- a/xen/drivers/char/console.c
>> +++ b/xen/drivers/char/console.c
>> @@ -28,6 +28,9 @@
>>  #include <asm/debugger.h>
>>  #include <asm/div64.h>
>>  #include <xen/hypercall.h> /* for do_console_io */
>> +#ifdef CONFIG_EARLY_PRINTK
>> +#include <asm/early_printk.h>
>> +#endif
>>  
>>  /* console: comma-separated list of console outputs. */
>>  static char __initdata opt_console[30] = OPT_CONSOLE_STR;
>> @@ -245,7 +248,12 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
>>  static char serial_rx_ring[SERIAL_RX_SIZE];
>>  static unsigned int serial_rx_cons, serial_rx_prod;
>>  
>> -static void (*serial_steal_fn)(const char *);
>> +#ifndef CONFIG_EARLY_PRINTK
>> +static inline void early_puts(const char *str)
>> +{}
> 
> This duplicates bits of asm-arm/early_printk.h. I think if the feature
> is going to be used from common code then the common bits of the asm
> header should be moved to xen/early_printk.h. If any per-arch stuff
> remains then xen/e_p.h can include asm/e_p.h.

I will do.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:45:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWkn-00054Q-Bp; Thu, 20 Feb 2014 16:45:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWkm-00054J-1M
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:45:40 +0000
Received: from [85.158.143.35:48554] by server-3.bemta-4.messagelabs.com id
	12/52-11539-33136035; Thu, 20 Feb 2014 16:45:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392914738!7158816!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18298 invoked from network); 20 Feb 2014 16:45:38 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:45:38 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so204426eek.27
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:45:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=NA9asQNjko/+e1pYXWNGw0KoyEq2FIz2BQhIEVWq6M4=;
	b=SyjlbVdQxeUa7FYW7icsw6DUzE20rEjvDIc82/66lrWY4GjUs9ywOkYx82x8Kl1p40
	2pd0fpyDT4tJ+IRbwsuAqVSXBU679dmm6cg/oa6eCz39z2AW27OKKJEuZ5+2ACMq7T4x
	IC2oBYsPCG+GI1FQBFCmILHITdkl6Hg1zWhrj4KDz+s5jttcTTz9tChonfgt8Yysgee2
	WVOpHAQZLg4EVBRlniZmJMTaJ7Hwyu/GiFgJn39yGA7EPEdqVZBSGtLOziqI+ZoVrLjx
	2AjbnTI0+VxGhi4mc++arwY+0TF9ktVLS+X0SSthYMPQzZQP+JBa1v6aVb27ZiQ17GJq
	uCQA==
X-Gm-Message-State: ALoCoQnY1ZVsh7MYSVNM2RBIAu/QAi/s1qhUJh/Z7fdpWbnWXAKcDJ5bq2cNBXOcRGRcEGpZ3FPK
X-Received: by 10.14.47.133 with SMTP id t5mr2955475eeb.96.1392914738237;
	Thu, 20 Feb 2014 08:45:38 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm15741442eeg.5.2014.02.20.08.45.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 08:45:37 -0800 (PST)
Message-ID: <5306312E.5000009@linaro.org>
Date: Thu, 20 Feb 2014 16:45:34 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-5-git-send-email-julien.grall@linaro.org>
	<1392812318.29739.31.camel@kazak.uk.xensource.com>
In-Reply-To: <1392812318.29739.31.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor
 specific setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:18 PM, Ian Campbell wrote:
> On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
>> diff --git a/xen/arch/arm/arm32/proc-v7-c.c b/xen/arch/arm/arm32/proc-v7-c.c
>> new file mode 100644
>> index 0000000..a3b94a2
>> --- /dev/null
>> +++ b/xen/arch/arm/arm32/proc-v7-c.c
>> @@ -0,0 +1,32 @@
>> +/*
>> + * xen/arch/arm/arm32/proc-v7-c.c
>> + *
>> + * arm v7 specific initializations (C part)
> 
> I think strictly speaking this is actually cortex a{7,15} specific.
> Calling this file "proc-v7-ca15.c" or something (core-cortex.c?) would
> be nicer than the ugly -c suffix...

Right, but it's possible to have specific quirks written in C for
other ARMv7 CPUs.

I think we need a file where we can store v7 code (as with
proc-v7.S). Creating a file per processor for only one function
seems odd. I'm open to any other generic name for this file.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:45:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWkn-00054Q-Bp; Thu, 20 Feb 2014 16:45:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWkm-00054J-1M
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:45:40 +0000
Received: from [85.158.143.35:48554] by server-3.bemta-4.messagelabs.com id
	12/52-11539-33136035; Thu, 20 Feb 2014 16:45:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392914738!7158816!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18298 invoked from network); 20 Feb 2014 16:45:38 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:45:38 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so204426eek.27
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:45:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=NA9asQNjko/+e1pYXWNGw0KoyEq2FIz2BQhIEVWq6M4=;
	b=SyjlbVdQxeUa7FYW7icsw6DUzE20rEjvDIc82/66lrWY4GjUs9ywOkYx82x8Kl1p40
	2pd0fpyDT4tJ+IRbwsuAqVSXBU679dmm6cg/oa6eCz39z2AW27OKKJEuZ5+2ACMq7T4x
	IC2oBYsPCG+GI1FQBFCmILHITdkl6Hg1zWhrj4KDz+s5jttcTTz9tChonfgt8Yysgee2
	WVOpHAQZLg4EVBRlniZmJMTaJ7Hwyu/GiFgJn39yGA7EPEdqVZBSGtLOziqI+ZoVrLjx
	2AjbnTI0+VxGhi4mc++arwY+0TF9ktVLS+X0SSthYMPQzZQP+JBa1v6aVb27ZiQ17GJq
	uCQA==
X-Gm-Message-State: ALoCoQnY1ZVsh7MYSVNM2RBIAu/QAi/s1qhUJh/Z7fdpWbnWXAKcDJ5bq2cNBXOcRGRcEGpZ3FPK
X-Received: by 10.14.47.133 with SMTP id t5mr2955475eeb.96.1392914738237;
	Thu, 20 Feb 2014 08:45:38 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm15741442eeg.5.2014.02.20.08.45.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 08:45:37 -0800 (PST)
Message-ID: <5306312E.5000009@linaro.org>
Date: Thu, 20 Feb 2014 16:45:34 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-5-git-send-email-julien.grall@linaro.org>
	<1392812318.29739.31.camel@kazak.uk.xensource.com>
In-Reply-To: <1392812318.29739.31.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor
 specific setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:18 PM, Ian Campbell wrote:
> On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
>> diff --git a/xen/arch/arm/arm32/proc-v7-c.c b/xen/arch/arm/arm32/proc-v7-c.c
>> new file mode 100644
>> index 0000000..a3b94a2
>> --- /dev/null
>> +++ b/xen/arch/arm/arm32/proc-v7-c.c
>> @@ -0,0 +1,32 @@
>> +/*
>> + * xen/arch/arm/arm32/proc-v7-c.c
>> + *
>> + * arm v7 specific initializations (C part)
> 
> I think strictly speaking this is actually cortex a{7,15} specific.
> Calling this file "proc-v7-ca15.c" or something (core-cortex.c?) would
> be nicer than the ugly -c suffix...

Right, but it's possible to have some specific quirk written in C for
other ARMv7 CPUs.

I think we need a file where we can store generic v7 C code (just as
proc-v7.S does for the assembly part). Creating a file per processor
for only one function seems odd. I'm open to any other generic name for
this file.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:49:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWok-0005WF-L5; Thu, 20 Feb 2014 16:49:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGWoj-0005W2-Iw
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:49:45 +0000
Received: from [85.158.143.35:38586] by server-2.bemta-4.messagelabs.com id
	31/D2-10891-82236035; Thu, 20 Feb 2014 16:49:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1392914984!7148847!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32581 invoked from network); 20 Feb 2014 16:49:44 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:49:44 -0000
Received: by mail-ea0-f176.google.com with SMTP id b10so1027136eae.21
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 08:49:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=y1Nd97QgUl///XFL5ZFOnFKPYxvpkCdlGYc8UKfbkA0=;
	b=a13KS3dW4zmcuJtw+wvyXAqGd7wkEHcmvlUELav0THXTtZ1NEOK4EQO6C6WlxOiEit
	ek1Az7lKAiNVGrkj2dF2+9L7vXGI5L0a8QwW4DHk7jSCUNO91wuM9ZcrYjFPZL0TN/nR
	bUpzyjqYkmooS6aOL8yueNSfFG0cTdvote9HOdhgW6+tPizX3akZBSpHCtd/kjWA8/2d
	i8qFX+9On148AFGr71Bveu4OCoi3l+B3zX4sYI0KuuW6vGBtJtW8mRAFve/+3tbtXFC5
	6ikW6i3iPRMLAyZzzO4hyveDdaBFQnqnl0DtUDlYBH5yCUNr9w+ybQFRiQEChOPtCKWn
	Ng5w==
X-Gm-Message-State: ALoCoQlc95Qm577g3fqfMVMKtfO3FkPFTSv+wq8qV7nYlUgRpecCa7GX7gQDhXpPunfJ5SFmenOk
X-Received: by 10.15.21.2 with SMTP id c2mr2877491eeu.77.1392914984031;
	Thu, 20 Feb 2014 08:49:44 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm15757135eet.6.2014.02.20.08.49.42
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 08:49:43 -0800 (PST)
Message-ID: <53063225.2020200@linaro.org>
Date: Thu, 20 Feb 2014 16:49:41 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-6-git-send-email-julien.grall@linaro.org>
	<1392812393.29739.32.camel@kazak.uk.xensource.com>
In-Reply-To: <1392812393.29739.32.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 5/5] xen/arm: Remove
 asm-arm/processor-ca{15, 7}.h headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 12:19 PM, Ian Campbell wrote:
> On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
>> These headers are not in the right directory and are not used anymore.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> Although perhaps proc-v7-c.c ought to be using some of these? e.g.
> ACTLR_CA15_SMP instead of ACTLR_V7_SMP, which now looks to be badly
> named to me...

The name ACTLR_CA15_SMP gives the impression that it's only for the
Cortex A15, when it also applies to the A7.

ACTLR_CORTEX_SMP is no better, because we may have other Cortex
processors in the future that do not have the SMP bit.

What about hardcoding the bit, as it will only be used here?

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 16:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 16:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGWrW-0005nO-9e; Thu, 20 Feb 2014 16:52:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGWrU-0005nD-53
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 16:52:36 +0000
Received: from [85.158.137.68:25131] by server-2.bemta-3.messagelabs.com id
	14/C8-06531-3D236035; Thu, 20 Feb 2014 16:52:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392915153!3219433!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7995 invoked from network); 20 Feb 2014 16:52:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 16:52:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102694106"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 16:52:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 11:52:31 -0500
Message-ID: <1392915150.32657.25.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Thu, 20 Feb 2014 16:52:30 +0000
In-Reply-To: <53063225.2020200@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-6-git-send-email-julien.grall@linaro.org>
	<1392812393.29739.32.camel@kazak.uk.xensource.com>
	<53063225.2020200@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 5/5] xen/arm: Remove
 asm-arm/processor-ca{15, 7}.h headers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 16:49 +0000, Julien Grall wrote:
> Hi Ian,
> 
> On 02/19/2014 12:19 PM, Ian Campbell wrote:
> > On Tue, 2014-02-11 at 20:04 +0000, Julien Grall wrote:
> >> These headers are not in the right directory and are not used anymore.
> >>
> >> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> > 
> > Acked-by: Ian Campbell <ian.campbell@citrix.com>
> > 
> > Although perhaps proc-v7-c.c ought to be using some of these? e.g.
> > ACTLR_CA15_SMP instead of ACTLR_V7_SMP, which now looks to be badly
> > named to me...
> 
> The name ACTLR_CA15_SMP gives the impression that it's only for the
> Cortex A15, when it also applies to the A7.
> 
> ACTLR_CORTEX_SMP is no better, because we may have other Cortex
> processors in the future that do not have the SMP bit.
> 
> What about hardcoding the bit, as it will only be used here?

Along with a suitable comment I suppose that would be OK in this
instance.
Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 17:13:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 17:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGXBg-0006tS-UO; Thu, 20 Feb 2014 17:13:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGXBf-0006tN-Jc
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 17:13:27 +0000
Received: from [193.109.254.147:16582] by server-10.bemta-14.messagelabs.com
	id 4C/D0-10711-6B736035; Thu, 20 Feb 2014 17:13:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392916405!471181!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5579 invoked from network); 20 Feb 2014 17:13:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 17:13:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104396292"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 17:13:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 12:13:23 -0500
Message-ID: <1392916401.32657.31.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>, <devicetree-spec@vger.kernel.org>
Date: Thu, 20 Feb 2014 17:13:21 +0000
In-Reply-To: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
References: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org, Russell
	King <linux@arm.linux.org.uk>, Pawel Moll <pawel.moll@arm.com>,
	stefano.stabellini@eu.citrix.com,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	linux-kernel@vger.kernel.org, Rob Herring <robh+dt@kernel.org>,
	Rob Landley <rob@landley.net>, Kumar Gala <galak@codeaurora.org>,
	xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
 device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding the -spec list for the generic IOMMU binding question.

On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
> index 0f7b9c2..ee25a57 100644
> --- a/Documentation/devicetree/bindings/arm/xen.txt
> +++ b/Documentation/devicetree/bindings/arm/xen.txt
> @@ -15,6 +15,8 @@ the following properties:
>  - interrupts: the interrupt used by Xen to inject event notifications.
>    A GIC node is also required.
>  
> +- protected-devices: (optional) List of phandles to the device nodes for
> +which the IOMMU has been programmed by Xen.

Is there some common/generic IOMMU binding which we can reuse here?
Although this isn't exactly an IOMMU, it certainly has some similarities
-- it is providing IOMMU-like functionality (albeit a very inflexible
IOMMU which you don't need to, or can't, actually program).

I'd far rather we followed existing patterns rather than invent our own
things.

I'm wondering whether we ought to integrate this as an actual IOMMU
driver, although I'm not convinced that would make sense.

I'm also not sure about shovelling everything in as properties under a
single Xen node; should this not be its own node with compatible =
"xen,(pv)iommu"?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 17:13:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 17:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGXBg-0006tS-UO; Thu, 20 Feb 2014 17:13:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WGXBf-0006tN-Jc
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 17:13:27 +0000
Received: from [193.109.254.147:16582] by server-10.bemta-14.messagelabs.com
	id 4C/D0-10711-6B736035; Thu, 20 Feb 2014 17:13:26 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392916405!471181!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5579 invoked from network); 20 Feb 2014 17:13:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 17:13:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104396292"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 17:13:24 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 12:13:23 -0500
Message-ID: <1392916401.32657.31.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>, <devicetree-spec@vger.kernel.org>
Date: Thu, 20 Feb 2014 17:13:21 +0000
In-Reply-To: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
References: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org, Russell
	King <linux@arm.linux.org.uk>, Pawel Moll <pawel.moll@arm.com>,
	stefano.stabellini@eu.citrix.com,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	linux-kernel@vger.kernel.org, Rob Herring <robh+dt@kernel.org>,
	Rob Landley <rob@landley.net>, Kumar Gala <galak@codeaurora.org>,
	xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
 device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Adding the -spec list for the generic IOMMU binding question.

On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> diff --git a/Documentation/devicetree/bindings/arm/xen.txt b/Documentation/devicetree/bindings/arm/xen.txt
> index 0f7b9c2..ee25a57 100644
> --- a/Documentation/devicetree/bindings/arm/xen.txt
> +++ b/Documentation/devicetree/bindings/arm/xen.txt
> @@ -15,6 +15,8 @@ the following properties:
>  - interrupts: the interrupt used by Xen to inject event notifications.
>    A GIC node is also required.
>  
> +- protected-devices: (optional) List of phandles to device node where the
> +IOMMU has been programmed by Xen. 

Is there some common/generic IOMMU binding which we can reuse here?
Although this isn't exactly an IOMMU, it certainly has some similarities
-- it provides IOMMU-like functionality (albeit a very inflexible
IOMMU which you don't need to/can't actually program).

I'd far rather we followed existing patterns than invented our own.

I'm wondering whether we ought to integrate this as an actual
IOMMU driver, although I'm not convinced that would make sense.

I'm also not sure about shovelling everything as properties under a
single Xen node, should this not be its own node with compatible =
"xen,(pv)iommu"?
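
[Editor's note] For illustration, the two shapes under discussion might look
like the sketch below. This is not an accepted binding: the property name is
the one proposed in the patch, the compatible string is the one floated above,
and the phandles (&mmc0, &usb1) and register/interrupt values are hypothetical
placeholders in the style of the existing xen.txt example.

```dts
/* Option 1: property on the existing hypervisor node (as in the patch). */
hypervisor {
	compatible = "xen,xen-4.4", "xen,xen";
	reg = <0xb0000000 0x20000>;
	interrupts = <1 15 0xf08>;
	protected-devices = <&mmc0>, <&usb1>;
};

/* Option 2: a separate node, as suggested above. */
xen_iommu {
	compatible = "xen,pviommu";
	protected-devices = <&mmc0>, <&usb1>;
};
```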

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 17:20:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 17:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGXI9-00074F-Qe; Thu, 20 Feb 2014 17:20:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <stephen@networkplumber.org>) id 1WGXI8-000748-77
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 17:20:08 +0000
Received: from [85.158.139.211:19221] by server-1.bemta-5.messagelabs.com id
	24/23-12859-74936035; Thu, 20 Feb 2014 17:20:07 +0000
X-Env-Sender: stephen@networkplumber.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392916804!5259004!1
X-Originating-IP: [209.85.192.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27889 invoked from network); 20 Feb 2014 17:20:06 -0000
Received: from mail-pd0-f173.google.com (HELO mail-pd0-f173.google.com)
	(209.85.192.173)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 17:20:06 -0000
Received: by mail-pd0-f173.google.com with SMTP id y10so2065110pdj.18
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 09:20:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:in-reply-to
	:references:mime-version:content-type:content-transfer-encoding;
	bh=cDaLSE7gA3S8Ag4DyyXgpxDMWlM0YbpLJbWSxnASHKQ=;
	b=BJ+EwotLItTike+oIE8EgEGBj9qXJgwESvK5pFaV1cqJpTnu6cRw0dUMP5wUxyopQs
	EwjQ61BK8nNkg9egp7mfTgyfH5DkXuVTIi4eop7/e+ixckZsQtneNyRqThMCfiAVek1t
	yu9o7eJb4PURrGzv3zLRCA5QJrbKeBXiku2tHT9u+dJBuCnXUaaYc5wpLWasQWp8b3cj
	yJsy5/1UapHNbdW3WkpLjzY0bnEKlZSHx7ZXmme6TOQQBTRL2SxRFMKU6lq4Nsme8YWK
	hTKjyohEIy3YiKZimF3Zfcd5Lq21BSnFu8zoFGlUWsbPZB3xD74RsDY4PEWrkTlZhM+q
	FIHw==
X-Gm-Message-State: ALoCoQkej4n0EICNBpOoUJm6g0hbN7sFFZJF3bJHY6NyCnR/gKWBmLb0ZknVLNTP9DVYsoN5epCL
X-Received: by 10.66.66.66 with SMTP id d2mr3414038pat.80.1392916803529;
	Thu, 20 Feb 2014 09:20:03 -0800 (PST)
Received: from nehalam.linuxnetplumber.net
	(static-50-53-83-51.bvtn.or.frontiernet.net. [50.53.83.51])
	by mx.google.com with ESMTPSA id
	qq5sm13032494pbb.24.2014.02.20.09.20.02 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 09:20:03 -0800 (PST)
Date: Thu, 20 Feb 2014 09:19:58 -0800
From: Stephen Hemminger <stephen@networkplumber.org>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Message-ID: <20140220091958.62a8b444@nehalam.linuxnetplumber.net>
In-Reply-To: <CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
X-Mailer: Claws Mail 3.8.1 (GTK+ 2.24.10; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 19 Feb 2014 09:59:33 -0800
"Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:

> On Wed, Feb 19, 2014 at 9:08 AM, Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> > On Wed, 19 Feb 2014 09:02:06 -0800
> > "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
> >
> >> Folks, what if I repurpose my patch to use the IFF_BRIDGE_NON_ROOT (or
> >> relabel it to IFF_ROOT_BLOCK_DEF) flag as a default driver preference
> >> set at initialization, so that root block is applied once the device
> >> gets added to a bridge? The purpose would be to keep drivers from
> >> using the high MAC address hack and to streamline using a random MAC
> >> address, thereby avoiding the possible duplicate address situation for
> >> IPv6. In the STP use case for these interfaces we'd just require
> >> userspace to unset the root block; I'd consider the STP use case the
> >> oddest of all. The caveat to this approach is that 3.8 would be needed
> >> (or the root block patches cherry-picked) for base kernels older
> >> than 3.8.
> >>
> >> Stephen?
> >>
> >>   Luis
> >
> > Don't add IFF_ flags that add yet another API hook into the bridge.
> 
> The goal was not to add a userspace API, but rather consider a driver
> initialization preference.
> 
> > Please only use the netlink/sysfs flags fields that already exist
> > for new features.
> 
> Sure, but what if we know a driver in most cases wants the root block
> and we want to make it the default, thereby requiring userspace only
> for toggling it off?
> 
>   Luis

Something in userspace has to put the device into the bridge.
Fix the port setup in that tool via the netlink or sysfs flags that
the bridge already exposes. It should not be handled in the bridge by
looking at magic flags on the device.
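
[Editor's note] A sketch of the existing userspace knob Stephen is referring
to: the per-port root block can already be toggled via netlink (iproute2) or
the bridge's per-port sysfs directory. The bridge and port names (br0,
vif1.0) are hypothetical.

```sh
# Via iproute2 (netlink):
bridge link set dev vif1.0 root_block on

# Or via the bridge's per-port sysfs entry:
echo 1 > /sys/class/net/br0/brif/vif1.0/root_block
```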


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 17:28:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 17:28:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGXQ0-0007Jp-Qn; Thu, 20 Feb 2014 17:28:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGXPy-0007Jk-S8
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 17:28:15 +0000
Received: from [85.158.143.35:12009] by server-3.bemta-4.messagelabs.com id
	34/2D-11539-E2B36035; Thu, 20 Feb 2014 17:28:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392917171!7168697!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18668 invoked from network); 20 Feb 2014 17:26:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 17:26:12 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102708811"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 17:26:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 12:26:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGXNy-0005xm-GR;
	Thu, 20 Feb 2014 17:26:10 +0000
Date: Thu, 20 Feb 2014 17:26:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <5301F74E.3070107@citrix.com>
Message-ID: <alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 17 Feb 2014, Zoltan Kiss wrote:
> On 16/02/14 18:36, Stefano Stabellini wrote:
> > On Wed, 12 Feb 2014, Zoltan Kiss wrote:
> > > diff --git a/arch/arm/include/asm/xen/page.h
> > > b/arch/arm/include/asm/xen/page.h
> > > index e0965ab..4eaeb3f 100644
> > > --- a/arch/arm/include/asm/xen/page.h
> > > +++ b/arch/arm/include/asm/xen/page.h
> > > @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long
> > > address, unsigned int *level)
> > >   	return NULL;
> > >   }
> > > 
> > > -static inline int m2p_add_override(unsigned long mfn, struct page *page,
> > > -		struct gnttab_map_grant_ref *kmap_op)
> > > -{
> > > -	return 0;
> > > -}
> > > -
> > > -static inline int m2p_remove_override(struct page *page, bool clear_pte)
> > > -{
> > > -	return 0;
> > > -}
> > > +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
> > > +				   struct gnttab_map_grant_ref *kmap_ops,
> > > +				   struct page **pages, unsigned int count,
> > > +				   bool m2p_override);
> > > +
> > > +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
> > > *unmap_ops,
> > > +				     struct gnttab_map_grant_ref *kmap_ops,
> > > +				     struct page **pages, unsigned int count,
> > > +				     bool m2p_override);
> > 
> > Much much better.
> > The only comment I have is about this m2p_override boolean parameter.
> > m2p_override is now meaningless in this context, what we really want to
> > let the arch specific implementation know is whether the mapping is a
> > kernel only mapping or a userspace mapping.
> > Testing for kmap_ops != NULL might even be enough, but it would not
> > improve the interface.
> gntdev is the only user of this, the kmap_ops parameter there is:
> use_ptemod ? map->kmap_ops + offset : NULL
> where:
> use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
> So I think we can't rely on kmap_ops to decide whether we should m2p_override
> or not.
> 
> > Is it possible to realize if the mapping is a userspace mapping by
> > checking for GNTMAP_application_map in map_ops?
> > Otherwise I would keep the boolean and rename it to user_mapping.
> Sounds better, but as far as I can see gntdev sets that flag in
> find_grant_ptes, which is only called
> 
> if (use_ptemod) {
> 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
> 				  vma->vm_end - vma->vm_start,
> 				  find_grant_ptes, map);
> 
> So if xen_feature(XENFEAT_auto_translated_physmap), we don't have kmap_ops,
> and GNTMAP_application_map is not set either, but I guess we still need
> m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
 
If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
m2p_override.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 17:37:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 17:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGXYK-0007vu-Gr; Thu, 20 Feb 2014 17:36:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGXYI-0007vo-04
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 17:36:51 +0000
Received: from [85.158.137.68:42333] by server-4.bemta-3.messagelabs.com id
	1F/5E-04858-13D36035; Thu, 20 Feb 2014 17:36:49 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392917806!1961531!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18826 invoked from network); 20 Feb 2014 17:36:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 17:36:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102714100"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 17:36:46 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 12:36:45 -0500
Message-ID: <53063D2B.1040502@citrix.com>
Date: Thu, 20 Feb 2014 17:36:43 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
	<alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Thu Feb 20 17:37:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 17:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGXYK-0007vu-Gr; Thu, 20 Feb 2014 17:36:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGXYI-0007vo-04
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 17:36:51 +0000
Received: from [85.158.137.68:42333] by server-4.bemta-3.messagelabs.com id
	1F/5E-04858-13D36035; Thu, 20 Feb 2014 17:36:49 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392917806!1961531!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18826 invoked from network); 20 Feb 2014 17:36:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 17:36:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102714100"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 17:36:46 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 12:36:45 -0500
Message-ID: <53063D2B.1040502@citrix.com>
Date: Thu, 20 Feb 2014 17:36:43 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
	<alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 17:26, Stefano Stabellini wrote:
> On Mon, 17 Feb 2014, Zoltan Kiss wrote:
>> On 16/02/14 18:36, Stefano Stabellini wrote:
>>> On Wed, 12 Feb 2014, Zoltan Kiss wrote:
>>>> diff --git a/arch/arm/include/asm/xen/page.h
>>>> b/arch/arm/include/asm/xen/page.h
>>>> index e0965ab..4eaeb3f 100644
>>>> --- a/arch/arm/include/asm/xen/page.h
>>>> +++ b/arch/arm/include/asm/xen/page.h
>>>> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long
>>>> address, unsigned int *level)
>>>>    	return NULL;
>>>>    }
>>>>
>>>> -static inline int m2p_add_override(unsigned long mfn, struct page *page,
>>>> -		struct gnttab_map_grant_ref *kmap_op)
>>>> -{
>>>> -	return 0;
>>>> -}
>>>> -
>>>> -static inline int m2p_remove_override(struct page *page, bool clear_pte)
>>>> -{
>>>> -	return 0;
>>>> -}
>>>> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
>>>> +				   struct gnttab_map_grant_ref *kmap_ops,
>>>> +				   struct page **pages, unsigned int count,
>>>> +				   bool m2p_override);
>>>> +
>>>> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
>>>> *unmap_ops,
>>>> +				     struct gnttab_map_grant_ref *kmap_ops,
>>>> +				     struct page **pages, unsigned int count,
>>>> +				     bool m2p_override);
>>>
>>> Much much better.
>>> The only comment I have is about this m2p_override boolean parameter.
>>> m2p_override is now meaningless in this context, what we really want to
>>> let the arch specific implementation know is whether the mapping is a
>>> kernel only mapping or a userspace mapping.
>>> Testing for kmap_ops != NULL might even be enough, but it would not
>>> improve the interface.
>> gntdev is the only user of this, the kmap_ops parameter there is:
>> use_ptemod ? map->kmap_ops + offset : NULL
>> where:
>> use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>> So I think we can't rely on kmap_ops to decide whether we should m2p_override
>> or not.
>>
>>> Is it possible to realize if the mapping is a userspace mapping by
>>> checking for GNTMAP_application_map in map_ops?
>>> Otherwise I would keep the boolean and rename it to user_mapping.
>> Sounds better, but as far as I see gntdev set that flag in find_grant_ptes,
>> which is called only
>>
>> if (use_ptemod) {
>> 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
>> 				  vma->vm_end - vma->vm_start,
>> 				  find_grant_ptes, map);
>>
>> So if xen_feature(XENFEAT_auto_translated_physmap), we don't have kmap_ops,
>> and GNTMAP_application_map is not set as well, but I guess we still need
>> m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
>
> If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
> m2p_override.
>

So it's safe to assume that we need m2p_override only if kmap_ops != 
NULL, and we can avoid the extra bool parameter, is that correct? At 
least with the current users of grant mapping it seems to be true.
In which case we don't need the wrappers for gnttab_[un]map_refs either.
Actually, most of m2p_add/remove_override takes effect only if there 
is a kmap_op parameter, but what about the rest of the code there?
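The decision being weighed above can be sketched as a tiny predicate. This is only an illustration of the proposed interface simplification, not the real kernel code: the structure is a stand-in and `need_m2p_override` is a hypothetical name. The idea is that auto-translated guests never need the override, and otherwise the presence of a kernel mapping array (kmap_ops != NULL) is the signal.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the real Xen grant-mapping struct. */
struct gnttab_map_grant_ref { int dummy; };

/* Hypothetical predicate: under the assumption discussed in the
 * thread, the m2p override is needed exactly when the domain is not
 * auto-translated AND a kernel mapping op array was supplied. */
static bool need_m2p_override(const struct gnttab_map_grant_ref *kmap_ops,
                              bool auto_translated_physmap)
{
    if (auto_translated_physmap)
        return false;        /* no M2P table to maintain at all */
    return kmap_ops != NULL; /* kernel mapping present -> override */
}
```

If this assumption holds for all current users, the extra bool parameter (and the gnttab_[un]map_refs wrappers) indeed become redundant.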

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:13:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGY79-0000p7-Uf; Thu, 20 Feb 2014 18:12:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGY78-0000p2-8U
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 18:12:50 +0000
Received: from [85.158.139.211:16267] by server-10.bemta-5.messagelabs.com id
	FE/F0-08578-1A546035; Thu, 20 Feb 2014 18:12:49 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392919967!1310531!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27607 invoked from network); 20 Feb 2014 18:12:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:12:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102728006"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 18:12:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:12:46 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGY74-0006cj-2h;
	Thu, 20 Feb 2014 18:12:46 +0000
Date: Thu, 20 Feb 2014 18:12:40 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Anthony Liguori <anthony@codemonkey.ws>
Message-ID: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, Olaf Hering <olaf@aepfle.de>,
	xen-devel@lists.xensource.com,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org, Anthony.Perard@citrix.com,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: [Xen-devel] [PULL 0/2] xen-140220
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The following changes since commit 2ca92bb993991d6dcb8f68751aca9fc2ec2b8867:

  Merge remote-tracking branch 'remotes/kraxel/tags/pull-usb-3' into staging (2014-02-20 15:25:05 +0000)

are available in the git repository at:


  git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-140220

for you to fetch changes up to 58da5b1e01a586eb5a52ba3eec342d6828269839:

  xen_disk: fix io accounting (2014-02-20 17:57:13 +0000)

----------------------------------------------------------------
Olaf Hering (1):
      xen_disk: fix io accounting

Stefano Stabellini (1):
      Call pci_piix3_xen_ide_unplug from unplug_disks

 hw/block/xen_disk.c   |   13 ++++++++++++-
 hw/ide/piix.c         |    3 +--
 hw/xen/xen_platform.c |    3 ++-
 include/hw/ide.h      |    1 +
 4 files changed, 16 insertions(+), 4 deletions(-)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:13:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGY7w-0000sU-6j; Thu, 20 Feb 2014 18:13:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGY7u-0000sB-Ja
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 18:13:38 +0000
Received: from [85.158.137.68:27341] by server-13.bemta-3.messagelabs.com id
	E3/72-26923-1D546035; Thu, 20 Feb 2014 18:13:37 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392920015!3192374!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28015 invoked from network); 20 Feb 2014 18:13:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:13:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104422168"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 18:13:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:13:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGY7l-0006dM-AI;
	Thu, 20 Feb 2014 18:13:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <anthony@codemonkey.ws>
Date: Thu, 20 Feb 2014 18:13:22 +0000
Message-ID: <1392920002-18522-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: peter.maydell@linaro.org, olaf@aepfle.de, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org, Anthony.Perard@citrix.com, pbonzini@redhat.com
Subject: [Xen-devel] [PULL 2/2] xen_disk: fix io accounting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Olaf Hering <olaf@aepfle.de>

bdrv_acct_done was called unconditionally, but when the ioreq has no
segments there is no matching bdrv_acct_start call. This could lead to
bogus accounting values.

Found by code inspection.
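The pairing bug can be shown with stub counters. This is a minimal sketch, not the QEMU API: `acct_start`/`acct_finish` and the enum are illustrative stand-ins for bdrv_acct_start/bdrv_acct_done, and the completion switch mirrors the shape of the fix below (writes and flushes with no segments never started accounting, so they must not finish it; reads always did).

```c
#include <assert.h>

/* Stub accounting counters standing in for QEMU's block accounting
 * (names here are illustrative, not the real API). */
static int starts, finishes;
static void acct_start(void)  { starts++; }
static void acct_finish(void) { finishes++; }

enum op { OP_READ, OP_WRITE, OP_FLUSH };

/* Submission path: reads always account; writes/flushes only
 * account when they actually carry segments. */
static void submit(enum op op, unsigned nr_segments)
{
    if (op == OP_READ || nr_segments)
        acct_start();
}

/* Completion path after the fix: mirror the submission condition so
 * every acct_finish pairs with an earlier acct_start. */
static void complete(enum op op, unsigned nr_segments)
{
    switch (op) {
    case OP_WRITE:
    case OP_FLUSH:
        if (!nr_segments)
            break;          /* never started -> don't finish */
        /* fall through */
    case OP_READ:
        acct_finish();
        break;
    }
}
```

With the old unconditional finish, a zero-segment flush would have incremented `finishes` without a matching start, skewing the statistics.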

Signed-off-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 hw/block/xen_disk.c |   13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/hw/block/xen_disk.c b/hw/block/xen_disk.c
index 098f6c6..7f0f14a 100644
--- a/hw/block/xen_disk.c
+++ b/hw/block/xen_disk.c
@@ -483,7 +483,18 @@ static void qemu_aio_complete(void *opaque, int ret)
     ioreq->status = ioreq->aio_errors ? BLKIF_RSP_ERROR : BLKIF_RSP_OKAY;
     ioreq_unmap(ioreq);
     ioreq_finish(ioreq);
-    bdrv_acct_done(ioreq->blkdev->bs, &ioreq->acct);
+    switch (ioreq->req.operation) {
+    case BLKIF_OP_WRITE:
+    case BLKIF_OP_FLUSH_DISKCACHE:
+        if (!ioreq->req.nr_segments) {
+            break;
+        }
+    case BLKIF_OP_READ:
+        bdrv_acct_done(ioreq->blkdev->bs, &ioreq->acct);
+        break;
+    default:
+        break;
+    }
     qemu_bh_schedule(ioreq->blkdev->bh);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:13:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGY7v-0000sM-Qu; Thu, 20 Feb 2014 18:13:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGY7u-0000sA-Du
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 18:13:38 +0000
Received: from [85.158.143.35:45439] by server-2.bemta-4.messagelabs.com id
	1B/E9-10891-1D546035; Thu, 20 Feb 2014 18:13:37 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392920015!7178253!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12115 invoked from network); 20 Feb 2014 18:13:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:13:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102728301"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 18:13:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:13:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGY7l-0006dM-8N;
	Thu, 20 Feb 2014 18:13:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <anthony@codemonkey.ws>
Date: Thu, 20 Feb 2014 18:13:21 +0000
Message-ID: <1392920002-18522-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: peter.maydell@linaro.org, olaf@aepfle.de, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org, Anthony.Perard@citrix.com, pbonzini@redhat.com
Subject: [Xen-devel] [PULL 1/2] Call pci_piix3_xen_ide_unplug from
	unplug_disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/ide/piix.c         |    3 +--
 hw/xen/xen_platform.c |    3 ++-
 include/hw/ide.h      |    1 +
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 0eda301..40757eb 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
     return 0;
 }
 
-static int pci_piix3_xen_ide_unplug(DeviceState *dev)
+int pci_piix3_xen_ide_unplug(DeviceState *dev)
 {
     PCIIDEState *pci_ide;
     DriveInfo *di;
@@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
     k->class_id = PCI_CLASS_STORAGE_IDE;
     set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
-    dc->unplug = pci_piix3_xen_ide_unplug;
 }
 
 static const TypeInfo piix3_ide_xen_info = {
diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
index 70875e4..1d9d0e9 100644
--- a/hw/xen/xen_platform.c
+++ b/hw/xen/xen_platform.c
@@ -27,6 +27,7 @@
 
 #include "hw/hw.h"
 #include "hw/i386/pc.h"
+#include "hw/ide.h"
 #include "hw/pci/pci.h"
 #include "hw/irq.h"
 #include "hw/xen/xen_common.h"
@@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
     if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
             PCI_CLASS_STORAGE_IDE
             && strcmp(d->name, "xen-pci-passthrough") != 0) {
-        qdev_unplug(DEVICE(d), NULL);
+        pci_piix3_xen_ide_unplug(DEVICE(d));
     }
 }
 
diff --git a/include/hw/ide.h b/include/hw/ide.h
index 507e6d3..bc8bd32 100644
--- a/include/hw/ide.h
+++ b/include/hw/ide.h
@@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
 PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
+int pci_piix3_xen_ide_unplug(DeviceState *dev);
 void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 
 /* ide-mmio.c */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:13:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:13:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGY7v-0000sM-Qu; Thu, 20 Feb 2014 18:13:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGY7u-0000sA-Du
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 18:13:38 +0000
Received: from [85.158.143.35:45439] by server-2.bemta-4.messagelabs.com id
	1B/E9-10891-1D546035; Thu, 20 Feb 2014 18:13:37 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392920015!7178253!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12115 invoked from network); 20 Feb 2014 18:13:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:13:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102728301"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 18:13:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:13:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGY7l-0006dM-8N;
	Thu, 20 Feb 2014 18:13:29 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <anthony@codemonkey.ws>
Date: Thu, 20 Feb 2014 18:13:21 +0000
Message-ID: <1392920002-18522-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: peter.maydell@linaro.org, olaf@aepfle.de, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	qemu-devel@nongnu.org, Anthony.Perard@citrix.com, pbonzini@redhat.com
Subject: [Xen-devel] [PULL 1/2] Call pci_piix3_xen_ide_unplug from
	unplug_disks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/ide/piix.c         |    3 +--
 hw/xen/xen_platform.c |    3 ++-
 include/hw/ide.h      |    1 +
 3 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 0eda301..40757eb 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -167,7 +167,7 @@ static int pci_piix_ide_initfn(PCIDevice *dev)
     return 0;
 }
 
-static int pci_piix3_xen_ide_unplug(DeviceState *dev)
+int pci_piix3_xen_ide_unplug(DeviceState *dev)
 {
     PCIIDEState *pci_ide;
     DriveInfo *di;
@@ -266,7 +266,6 @@ static void piix3_ide_xen_class_init(ObjectClass *klass, void *data)
     k->device_id = PCI_DEVICE_ID_INTEL_82371SB_1;
     k->class_id = PCI_CLASS_STORAGE_IDE;
     set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
-    dc->unplug = pci_piix3_xen_ide_unplug;
 }
 
 static const TypeInfo piix3_ide_xen_info = {
diff --git a/hw/xen/xen_platform.c b/hw/xen/xen_platform.c
index 70875e4..1d9d0e9 100644
--- a/hw/xen/xen_platform.c
+++ b/hw/xen/xen_platform.c
@@ -27,6 +27,7 @@
 
 #include "hw/hw.h"
 #include "hw/i386/pc.h"
+#include "hw/ide.h"
 #include "hw/pci/pci.h"
 #include "hw/irq.h"
 #include "hw/xen/xen_common.h"
@@ -110,7 +111,7 @@ static void unplug_disks(PCIBus *b, PCIDevice *d, void *o)
     if (pci_get_word(d->config + PCI_CLASS_DEVICE) ==
             PCI_CLASS_STORAGE_IDE
             && strcmp(d->name, "xen-pci-passthrough") != 0) {
-        qdev_unplug(DEVICE(d), NULL);
+        pci_piix3_xen_ide_unplug(DEVICE(d));
     }
 }
 
diff --git a/include/hw/ide.h b/include/hw/ide.h
index 507e6d3..bc8bd32 100644
--- a/include/hw/ide.h
+++ b/include/hw/ide.h
@@ -17,6 +17,7 @@ void pci_cmd646_ide_init(PCIBus *bus, DriveInfo **hd_table,
 PCIDevice *pci_piix3_xen_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix3_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 PCIDevice *pci_piix4_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
+int pci_piix3_xen_ide_unplug(DeviceState *dev);
 void vt82c686b_ide_init(PCIBus *bus, DriveInfo **hd_table, int devfn);
 
 /* ide-mmio.c */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:17:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:17:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYBf-0001BM-Vr; Thu, 20 Feb 2014 18:17:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGYBe-0001BH-BN
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 18:17:30 +0000
Received: from [193.109.254.147:52096] by server-8.bemta-14.messagelabs.com id
	49/09-18529-9B646035; Thu, 20 Feb 2014 18:17:29 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1392920246!2012622!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1423 invoked from network); 20 Feb 2014 18:17:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:17:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104423979"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 18:17:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:17:25 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGYBZ-0006gs-7x;
	Thu, 20 Feb 2014 18:17:25 +0000
Date: Thu, 20 Feb 2014 18:17:19 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Zoltan Kiss <zoltan.kiss@citrix.com>
In-Reply-To: <53063D2B.1040502@citrix.com>
Message-ID: <alpine.DEB.2.02.1402201746560.15812@kaball.uk.xensource.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
	<alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
	<53063D2B.1040502@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014, Zoltan Kiss wrote:
> On 20/02/14 17:26, Stefano Stabellini wrote:
> > On Mon, 17 Feb 2014, Zoltan Kiss wrote:
> > > On 16/02/14 18:36, Stefano Stabellini wrote:
> > > > On Wed, 12 Feb 2014, Zoltan Kiss wrote:
> > > > > diff --git a/arch/arm/include/asm/xen/page.h
> > > > > b/arch/arm/include/asm/xen/page.h
> > > > > index e0965ab..4eaeb3f 100644
> > > > > --- a/arch/arm/include/asm/xen/page.h
> > > > > +++ b/arch/arm/include/asm/xen/page.h
> > > > > @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long
> > > > > address, unsigned int *level)
> > > > >    	return NULL;
> > > > >    }
> > > > > 
> > > > > -static inline int m2p_add_override(unsigned long mfn, struct page
> > > > > *page,
> > > > > -		struct gnttab_map_grant_ref *kmap_op)
> > > > > -{
> > > > > -	return 0;
> > > > > -}
> > > > > -
> > > > > -static inline int m2p_remove_override(struct page *page, bool
> > > > > clear_pte)
> > > > > -{
> > > > > -	return 0;
> > > > > -}
> > > > > +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref
> > > > > *map_ops,
> > > > > +				   struct gnttab_map_grant_ref
> > > > > *kmap_ops,
> > > > > +				   struct page **pages, unsigned int
> > > > > count,
> > > > > +				   bool m2p_override);
> > > > > +
> > > > > +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
> > > > > *unmap_ops,
> > > > > +				     struct gnttab_map_grant_ref
> > > > > *kmap_ops,
> > > > > +				     struct page **pages, unsigned int
> > > > > count,
> > > > > +				     bool m2p_override);
> > > > 
> > > > Much much better.
> > > > The only comment I have is about this m2p_override boolean parameter.
> > > > m2p_override is now meaningless in this context, what we really want to
> > > > let the arch specific implementation know is whether the mapping is a
> > > > kernel only mapping or a userspace mapping.
> > > > Testing for kmap_ops != NULL might even be enough, but it would not
> > > > improve the interface.
> > > gntdev is the only user of this, the kmap_ops parameter there is:
> > > use_ptemod ? map->kmap_ops + offset : NULL
> > > where:
> > > use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
> > > So I think we can't rely on kmap_ops to decide whether we should
> > > m2p_override
> > > or not.
> > > 
> > > > Is it possible to realize if the mapping is a userspace mapping by
> > > > checking for GNTMAP_application_map in map_ops?
> > > > Otherwise I would keep the boolean and rename it to user_mapping.
> > > Sounds better, but as far as I see gntdev set that flag in
> > > find_grant_ptes,
> > > which is called only
> > > 
> > > if (use_ptemod) {
> > > 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
> > > 				  vma->vm_end - vma->vm_start,
> > > 				  find_grant_ptes, map);
> > > 
> > > So if xen_feature(XENFEAT_auto_translated_physmap), we don't have
> > > kmap_ops,
> > > and GNTMAP_application_map is not set as well, but I guess we still need
> > > m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
> > 
> > If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
> > m2p_override.
> > 
> 
> So it's safe to assume that we need m2p_override only if kmap_ops != NULL, and
> we can avoid the extra bool parameter, is that correct? At least with the
> current users of grant mapping it seems to be true.
> In which case we don't need the wrappers for gnttab_[un]map_refs as well.
> Actually the most of m2p_add/remove_override takes effect only if there is a
> kmap_op parameter, but what about the rest of the code there?

It is safe to assume that we only need the m2p_override if
!xen_feature(XENFEAT_auto_translated_physmap).
I wouldn't make any assumptions on kmap_ops != NULL.

I would remove the bool m2p_override parameter completely and determine
whether we need to call the m2p_override functions from the x86
implementation of set/clear_foreign_p2m_mapping by checking
xen_feature(XENFEAT_auto_translated_physmap).

David, does it seem reasonable to you?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:19:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:19:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYDN-0001Ni-Ls; Thu, 20 Feb 2014 18:19:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WGYDM-0001NU-0v
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 18:19:16 +0000
Received: from [85.158.137.68:27749] by server-3.bemta-3.messagelabs.com id
	A6/80-14520-32746035; Thu, 20 Feb 2014 18:19:15 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392920352!3236738!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5122 invoked from network); 20 Feb 2014 18:19:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:19:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104424703"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 18:19:12 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:19:11 -0500
Message-ID: <5306471D.1080809@citrix.com>
Date: Thu, 20 Feb 2014 18:19:09 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
	<alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
	<53063D2B.1040502@citrix.com>
	<alpine.DEB.2.02.1402201746560.15812@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402201746560.15812@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Jan Beulich <jbeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, Zoltan Kiss <zoltan.kiss@citrix.com>, "H.
	Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 18:17, Stefano Stabellini wrote:
> On Thu, 20 Feb 2014, Zoltan Kiss wrote:
>> On 20/02/14 17:26, Stefano Stabellini wrote:
>>> On Mon, 17 Feb 2014, Zoltan Kiss wrote:
>>>> On 16/02/14 18:36, Stefano Stabellini wrote:
>>>>> On Wed, 12 Feb 2014, Zoltan Kiss wrote:
>>>>>> diff --git a/arch/arm/include/asm/xen/page.h
>>>>>> b/arch/arm/include/asm/xen/page.h
>>>>>> index e0965ab..4eaeb3f 100644
>>>>>> --- a/arch/arm/include/asm/xen/page.h
>>>>>> +++ b/arch/arm/include/asm/xen/page.h
>>>>>> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long
>>>>>> address, unsigned int *level)
>>>>>>    	return NULL;
>>>>>>    }
>>>>>>
>>>>>> -static inline int m2p_add_override(unsigned long mfn, struct page
>>>>>> *page,
>>>>>> -		struct gnttab_map_grant_ref *kmap_op)
>>>>>> -{
>>>>>> -	return 0;
>>>>>> -}
>>>>>> -
>>>>>> -static inline int m2p_remove_override(struct page *page, bool
>>>>>> clear_pte)
>>>>>> -{
>>>>>> -	return 0;
>>>>>> -}
>>>>>> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref
>>>>>> *map_ops,
>>>>>> +				   struct gnttab_map_grant_ref
>>>>>> *kmap_ops,
>>>>>> +				   struct page **pages, unsigned int
>>>>>> count,
>>>>>> +				   bool m2p_override);
>>>>>> +
>>>>>> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
>>>>>> *unmap_ops,
>>>>>> +				     struct gnttab_map_grant_ref
>>>>>> *kmap_ops,
>>>>>> +				     struct page **pages, unsigned int
>>>>>> count,
>>>>>> +				     bool m2p_override);
>>>>>
>>>>> Much much better.
>>>>> The only comment I have is about this m2p_override boolean parameter.
>>>>> m2p_override is now meaningless in this context, what we really want to
>>>>> let the arch specific implementation know is whether the mapping is a
>>>>> kernel only mapping or a userspace mapping.
>>>>> Testing for kmap_ops != NULL might even be enough, but it would not
>>>>> improve the interface.
>>>> gntdev is the only user of this, the kmap_ops parameter there is:
>>>> use_ptemod ? map->kmap_ops + offset : NULL
>>>> where:
>>>> use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>>>> So I think we can't rely on kmap_ops to decide whether we should
>>>> m2p_override
>>>> or not.
>>>>
>>>>> Is it possible to realize if the mapping is a userspace mapping by
>>>>> checking for GNTMAP_application_map in map_ops?
>>>>> Otherwise I would keep the boolean and rename it to user_mapping.
>>>> Sounds better, but as far as I see gntdev set that flag in
>>>> find_grant_ptes,
>>>> which is called only
>>>>
>>>> if (use_ptemod) {
>>>> 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
>>>> 				  vma->vm_end - vma->vm_start,
>>>> 				  find_grant_ptes, map);
>>>>
>>>> So if xen_feature(XENFEAT_auto_translated_physmap), we don't have
>>>> kmap_ops,
>>>> and GNTMAP_application_map is not set as well, but I guess we still need
>>>> m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
>>>
>>> If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
>>> m2p_override.
>>>
>>
>> So it's safe to assume that we need m2p_override only if kmap_ops != NULL, and
>> we can avoid the extra bool parameter, is that correct? At least with the
>> current users of grant mapping it seems to be true.
>> In which case we don't need the wrappers for gnttab_[un]map_refs as well.
>> Actually the most of m2p_add/remove_override takes effect only if there is a
>> kmap_op parameter, but what about the rest of the code there?
> 
> It is safe to assume that we only need the m2p_override if
> !xen_feature(XENFEAT_auto_translated_physmap).
> I wouldn't make any assumptions on kmap_ops != NULL.

I think it is safe -- we only need the m2p override if we have userspace
mappings (where kmap_ops != NULL).
> 
> I would remove the bool m2p_override parameter completely and determine
> whether we need to call the m2p_override functions from the x86
> implementation of set/clear_foreign_p2m_mapping by checking
> xen_feature(XENFEAT_auto_translated_physmap).
> 
> David, does it seem reasonable to you?

That would miss the point of this patch which is to avoid adding to the
m2p_override for kernel only mappings.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>>>>>> -}
>>>>>> -
>>>>>> -static inline int m2p_remove_override(struct page *page, bool
>>>>>> clear_pte)
>>>>>> -{
>>>>>> -	return 0;
>>>>>> -}
>>>>>> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref
>>>>>> *map_ops,
>>>>>> +				   struct gnttab_map_grant_ref
>>>>>> *kmap_ops,
>>>>>> +				   struct page **pages, unsigned int
>>>>>> count,
>>>>>> +				   bool m2p_override);
>>>>>> +
>>>>>> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
>>>>>> *unmap_ops,
>>>>>> +				     struct gnttab_map_grant_ref
>>>>>> *kmap_ops,
>>>>>> +				     struct page **pages, unsigned int
>>>>>> count,
>>>>>> +				     bool m2p_override);
>>>>>
>>>>> Much much better.
>>>>> The only comment I have is about this m2p_override boolean parameter.
>>>>> m2p_override is now meaningless in this context, what we really want to
>>>>> let the arch specific implementation know is whether the mapping is a
>>>>> kernel only mapping or a userspace mapping.
>>>>> Testing for kmap_ops != NULL might even be enough, but it would not
>>>>> improve the interface.
>>>> gntdev is the only user of this, the kmap_ops parameter there is:
>>>> use_ptemod ? map->kmap_ops + offset : NULL
>>>> where:
>>>> use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>>>> So I think we can't rely on kmap_ops to decide whether we should
>>>> m2p_override
>>>> or not.
>>>>
>>>>> Is it possible to realize if the mapping is a userspace mapping by
>>>>> checking for GNTMAP_application_map in map_ops?
>>>>> Otherwise I would keep the boolean and rename it to user_mapping.
>>>> Sounds better, but as far as I see gntdev set that flag in
>>>> find_grant_ptes,
>>>> which is called only
>>>>
>>>> if (use_ptemod) {
>>>> 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
>>>> 				  vma->vm_end - vma->vm_start,
>>>> 				  find_grant_ptes, map);
>>>>
>>>> So if xen_feature(XENFEAT_auto_translated_physmap), we don't have
>>>> kmap_ops,
>>>> and GNTMAP_application_map is not set as well, but I guess we still need
>>>> m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
>>>
>>> If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
>>> m2p_override.
>>>
>>
>> So it's safe to assume that we need m2p_override only if kmap_ops != NULL, and
>> we can avoid the extra bool parameter, is that correct? At least with the
>> current users of grant mapping it seems to be true.
>> In which case we don't need the wrappers for gnttab_[un]map_refs as well.
>> Actually the most of m2p_add/remove_override takes effect only if there is a
>> kmap_op parameter, but what about the rest of the code there?
> 
> It is safe to assume that we only need the m2p_override if
> !xen_feature(XENFEAT_auto_translated_physmap).
> I wouldn't make any assumptions on kmap_ops != NULL.

I think it is -- we only need the m2p override if we have userspace
mappings (i.e. where kmap_ops != NULL).
> 
> I would remove the bool m2p_override parameter completely and determine
> whether we need to call the m2p_override functions from the x86
> implementation of set/clear_foreign_p2m_mapping by checking
> xen_feature(XENFEAT_auto_translated_physmap).
> 
> David, does it seem reasonable to you?

That would miss the point of this patch, which is to avoid adding to the
m2p_override for kernel-only mappings.

David
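[Editor's note: the decision being debated above -- that the m2p override is only needed for userspace mappings on non-auto-translated guests -- can be sketched as a single predicate. This is a simplified, self-contained illustration; `xen_features`, the stub `xen_feature()`, and `need_m2p_override()` are hypothetical stand-ins, not the actual kernel implementation.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel/Xen symbols under discussion. */
#define XENFEAT_auto_translated_physmap 0

static bool xen_features[8];                 /* the real xen_feature() queries the hypervisor */
static bool xen_feature(int f) { return xen_features[f]; }

struct gnttab_map_grant_ref;                 /* opaque here; real layout lives in the Xen headers */

/*
 * The rule David states: the m2p override is only needed for userspace
 * (gntdev-style) mappings, i.e. when the caller supplied kmap_ops, and
 * never on auto-translated guests, which have no M2P table to override.
 */
static bool need_m2p_override(const struct gnttab_map_grant_ref *kmap_ops)
{
	if (xen_feature(XENFEAT_auto_translated_physmap))
		return false;           /* no M2P table at all */
	return kmap_ops != NULL;        /* kernel-only mappings skip the override */
}
```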


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:22:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYGe-0001aT-M1; Thu, 20 Feb 2014 18:22:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WGYGd-0001aO-Tl
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 18:22:40 +0000
Received: from [85.158.143.35:4807] by server-3.bemta-4.messagelabs.com id
	4C/89-11539-FE746035; Thu, 20 Feb 2014 18:22:39 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392920549!7170363!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjM3MDMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8443 invoked from network); 20 Feb 2014 18:22:30 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:22:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; 
	d="asc'?scan'208";a="102732017"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 18:22:28 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:22:27 -0500
Message-ID: <1392920545.32038.826.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Thu, 20 Feb 2014 19:22:25 +0100
In-Reply-To: <53059BB0.1000705@ts.fujitsu.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
	<53039F55.3030901@terremark.com> <1392746781.32038.594.camel@Solace>
	<53059BB0.1000705@ts.fujitsu.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org, Simon
	Martin <furryfuttock@gmail.com>, Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5054450359051415728=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5054450359051415728==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-4I6/d7BIfMZDvSE7i73t"

--=-4I6/d7BIfMZDvSE7i73t
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On gio, 2014-02-20 at 07:07 +0100, Juergen Gross wrote:
> On 18.02.2014 19:06, Dario Faggioli wrote:

> > While this one, although a bit more "boring" than the above, would
> > probably be something quite valuable to have!
> >
> > I can only think of rather expensive ways of implementing it, involving
> > going through all the cpupools and, for each cpupool, through all its
> > cpus and check the topology relationships, but perhaps there are others
> > (I'll think harder).
> >
> Adding some information like this would be nice, indeed. But I think we
> should not limit this to just hyperthreads. There are more levels of shared
> resources, like caches or memory interfaces on the same socket. In case we
> want to add information about potential performance influences due to
> shared resources, we should be more generic.
> 
All true... To the point that I now also wonder what a suitable
interface and a not too verbose output format could be...

> And what about some NUMA information? Wouldn't it be worthwhile to show
> memory locality information as well? This should be considered to be
> displayed by "xl list", too.
> 
Indeed. I actually have a half-baked series doing just that. It's a
bit more complicated (again from an interface point of view), as that
info resides in Xen, and some new hypercall or similar is required to
retrieve it.

I'll post it early in the 4.5 dev cycle.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-4I6/d7BIfMZDvSE7i73t
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMGR+EACgkQk4XaBE3IOsRsegCglnaBiAilnM19XUJh0rwIfr9C
S/YAnjzgEKStoFLyFamPM9luR2R9qr1G
=5KCq
-----END PGP SIGNATURE-----

--=-4I6/d7BIfMZDvSE7i73t--


--===============5054450359051415728==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5054450359051415728==--


From xen-devel-bounces@lists.xen.org Thu Feb 20 18:27:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYKv-0001l6-Ea; Thu, 20 Feb 2014 18:27:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WGYKt-0001ky-QA
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 18:27:04 +0000
Received: from [85.158.137.68:33318] by server-13.bemta-3.messagelabs.com id
	EC/FD-26923-6F846035; Thu, 20 Feb 2014 18:27:02 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1392920820!3237928!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7839 invoked from network); 20 Feb 2014 18:27:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:27:01 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="102733324"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 20 Feb 2014 18:26:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:26:59 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WGYKp-0006pe-7V;
	Thu, 20 Feb 2014 18:26:59 +0000
Date: Thu, 20 Feb 2014 18:26:53 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <5306471D.1080809@citrix.com>
Message-ID: <alpine.DEB.2.02.1402201822550.15812@kaball.uk.xensource.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
	<alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
	<53063D2B.1040502@citrix.com>
	<alpine.DEB.2.02.1402201746560.15812@kaball.uk.xensource.com>
	<5306471D.1080809@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Jan Beulich <jbeulich@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014, David Vrabel wrote:
> On 20/02/14 18:17, Stefano Stabellini wrote:
> > On Thu, 20 Feb 2014, Zoltan Kiss wrote:
> >> On 20/02/14 17:26, Stefano Stabellini wrote:
> >>> On Mon, 17 Feb 2014, Zoltan Kiss wrote:
> >>>> On 16/02/14 18:36, Stefano Stabellini wrote:
> >>>>> On Wed, 12 Feb 2014, Zoltan Kiss wrote:
> >>>>>> diff --git a/arch/arm/include/asm/xen/page.h
> >>>>>> b/arch/arm/include/asm/xen/page.h
> >>>>>> index e0965ab..4eaeb3f 100644
> >>>>>> --- a/arch/arm/include/asm/xen/page.h
> >>>>>> +++ b/arch/arm/include/asm/xen/page.h
> >>>>>> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long
> >>>>>> address, unsigned int *level)
> >>>>>>    	return NULL;
> >>>>>>    }
> >>>>>>
> >>>>>> -static inline int m2p_add_override(unsigned long mfn, struct page
> >>>>>> *page,
> >>>>>> -		struct gnttab_map_grant_ref *kmap_op)
> >>>>>> -{
> >>>>>> -	return 0;
> >>>>>> -}
> >>>>>> -
> >>>>>> -static inline int m2p_remove_override(struct page *page, bool
> >>>>>> clear_pte)
> >>>>>> -{
> >>>>>> -	return 0;
> >>>>>> -}
> >>>>>> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref
> >>>>>> *map_ops,
> >>>>>> +				   struct gnttab_map_grant_ref
> >>>>>> *kmap_ops,
> >>>>>> +				   struct page **pages, unsigned int
> >>>>>> count,
> >>>>>> +				   bool m2p_override);
> >>>>>> +
> >>>>>> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
> >>>>>> *unmap_ops,
> >>>>>> +				     struct gnttab_map_grant_ref
> >>>>>> *kmap_ops,
> >>>>>> +				     struct page **pages, unsigned int
> >>>>>> count,
> >>>>>> +				     bool m2p_override);
> >>>>>
> >>>>> Much much better.
> >>>>> The only comment I have is about this m2p_override boolean parameter.
> >>>>> m2p_override is now meaningless in this context, what we really want to
> >>>>> let the arch specific implementation know is whether the mapping is a
> >>>>> kernel only mapping or a userspace mapping.
> >>>>> Testing for kmap_ops != NULL might even be enough, but it would not
> >>>>> improve the interface.
> >>>> gntdev is the only user of this, the kmap_ops parameter there is:
> >>>> use_ptemod ? map->kmap_ops + offset : NULL
> >>>> where:
> >>>> use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
> >>>> So I think we can't rely on kmap_ops to decide whether we should
> >>>> m2p_override
> >>>> or not.
> >>>>
> >>>>> Is it possible to realize if the mapping is a userspace mapping by
> >>>>> checking for GNTMAP_application_map in map_ops?
> >>>>> Otherwise I would keep the boolean and rename it to user_mapping.
> >>>> Sounds better, but as far as I see gntdev set that flag in
> >>>> find_grant_ptes,
> >>>> which is called only
> >>>>
> >>>> if (use_ptemod) {
> >>>> 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
> >>>> 				  vma->vm_end - vma->vm_start,
> >>>> 				  find_grant_ptes, map);
> >>>>
> >>>> So if xen_feature(XENFEAT_auto_translated_physmap), we don't have
> >>>> kmap_ops,
> >>>> and GNTMAP_application_map is not set as well, but I guess we still need
> >>>> m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
> >>>
> >>> If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
> >>> m2p_override.
> >>>
> >>
> >> So it's safe to assume that we need m2p_override only if kmap_ops != NULL, and
> >> we can avoid the extra bool parameter, is that correct? At least with the
> >> current users of grant mapping it seems to be true.
> >> In which case we don't need the wrappers for gnttab_[un]map_refs as well.
> >> Actually the most of m2p_add/remove_override takes effect only if there is a
> >> kmap_op parameter, but what about the rest of the code there?
> > 
> > It is safe to assume that we only need the m2p_override if
> > !xen_feature(XENFEAT_auto_translated_physmap).
> > I wouldn't make any assumptions on kmap_ops != NULL.
> 
> I think it is -- we only need the m2p override if we have userspace
> mappings (where kmap_ops != 0).
>
> > I would remove the bool m2p_override parameter completely and determine
> > whether we need to call the m2p_override functions from the x86
> > implementation of set/clear_foreign_p2m_mapping by checking
> > xen_feature(XENFEAT_auto_translated_physmap).
> > 
> > David, does it seem reasonable to you?
> 
> That would miss the point of this patch which is to avoid adding to the
> m2p_override for kernel only mappings.

I meant checking 

!xen_feature(XENFEAT_auto_translated_physmap) && kmap_ops != 0

At least this way the "hack" would be entirely self-contained in the
arch-specific code.
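[Editor's note: concretely, Stefano's suggestion moves the combined check inside the x86 implementation, so arch-independent callers never see the override logic. A rough, self-contained sketch follows; the function name, struct layouts, `auto_translated` flag, and `m2p_add_override_stub()` are simplified stand-ins, not the actual kernel code.]

```c
#include <stdbool.h>
#include <stddef.h>

struct gnttab_map_grant_ref { unsigned long mfn; };  /* simplified stand-in */
struct page { int dummy; };

static bool auto_translated;   /* stands in for xen_feature(XENFEAT_auto_translated_physmap) */
static int override_entries;   /* counts override additions in this sketch */

static void m2p_add_override_stub(void) { override_entries++; }

/*
 * Sketch of the x86 side: the override path is taken only when the guest
 * is not auto-translated AND the caller passed kmap_ops (i.e. a userspace
 * mapping), so the "hack" stays entirely inside the arch implementation.
 */
static int set_foreign_p2m_mapping_sketch(struct gnttab_map_grant_ref *map_ops,
					  struct gnttab_map_grant_ref *kmap_ops,
					  struct page **pages, unsigned int count)
{
	unsigned int i;

	for (i = 0; i < count; i++) {
		/* ... set the p2m entry for pages[i] from map_ops[i] here ... */
		if (!auto_translated && kmap_ops != NULL)
			m2p_add_override_stub();  /* userspace mapping: record override */
	}
	(void)map_ops;
	(void)pages;
	return 0;
}
```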

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> > 
> > It is safe to assume that we only need the m2p_override if
> > !xen_feature(XENFEAT_auto_translated_physmap).
> > I wouldn't make any assumptions on kmap_ops != NULL.
> 
> I think it is -- we only need the m2p override if we have userspace
> mappings (where kmap_ops != 0).
>
> > I would remove the bool m2p_override parameter completely and determine
> > whether we need to call the m2p_override functions from the x86
> > implementation of set/clear_foreign_p2m_mapping by checking
> > xen_feature(XENFEAT_auto_translated_physmap).
> > 
> > David, does it seem reasonable to you?
> 
> That would miss the point of this patch which is to avoid adding to the
> m2p_override for kernel only mappings.

I meant checking 

!xen_feature(XENFEAT_auto_translated_physmap) && kmap_ops != 0

At least this way the "hack" would be entirely self-contained in the
arch-specific code.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:28:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYM4-0001pd-Tt; Thu, 20 Feb 2014 18:28:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WGYM3-0001pV-DI
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 18:28:15 +0000
Received: from [193.109.254.147:15692] by server-6.bemta-14.messagelabs.com id
	AC/99-03396-E3946035; Thu, 20 Feb 2014 18:28:14 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392920892!5725725!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25744 invoked from network); 20 Feb 2014 18:28:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 18:28:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104427444"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 18:28:12 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 13:28:11 -0500
Message-ID: <53064939.5070103@citrix.com>
Date: Thu, 20 Feb 2014 18:28:09 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1392238453-26147-1-git-send-email-zoltan.kiss@citrix.com>
	<alpine.DEB.2.02.1402161828380.4307@kaball.uk.xensource.com>
	<5301F74E.3070107@citrix.com>
	<alpine.DEB.2.02.1402201724430.15812@kaball.uk.xensource.com>
	<53063D2B.1040502@citrix.com>
	<alpine.DEB.2.02.1402201746560.15812@kaball.uk.xensource.com>
	<5306471D.1080809@citrix.com>
	<alpine.DEB.2.02.1402201822550.15812@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402201822550.15812@kaball.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Jan Beulich <jbeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ingo Molnar <mingo@redhat.com>, Zoltan Kiss <zoltan.kiss@citrix.com>, "H.
	Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	=?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v9] xen/grant-table: Avoid m2p_override
	during mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 18:26, Stefano Stabellini wrote:
> On Thu, 20 Feb 2014, David Vrabel wrote:
>> On 20/02/14 18:17, Stefano Stabellini wrote:
>>> On Thu, 20 Feb 2014, Zoltan Kiss wrote:
>>>> On 20/02/14 17:26, Stefano Stabellini wrote:
>>>>> On Mon, 17 Feb 2014, Zoltan Kiss wrote:
>>>>>> On 16/02/14 18:36, Stefano Stabellini wrote:
>>>>>>> On Wed, 12 Feb 2014, Zoltan Kiss wrote:
>>>>>>>> diff --git a/arch/arm/include/asm/xen/page.h
>>>>>>>> b/arch/arm/include/asm/xen/page.h
>>>>>>>> index e0965ab..4eaeb3f 100644
>>>>>>>> --- a/arch/arm/include/asm/xen/page.h
>>>>>>>> +++ b/arch/arm/include/asm/xen/page.h
>>>>>>>> @@ -97,16 +97,15 @@ static inline pte_t *lookup_address(unsigned long
>>>>>>>> address, unsigned int *level)
>>>>>>>>    	return NULL;
>>>>>>>>    }
>>>>>>>>
>>>>>>>> -static inline int m2p_add_override(unsigned long mfn, struct page
>>>>>>>> *page,
>>>>>>>> -		struct gnttab_map_grant_ref *kmap_op)
>>>>>>>> -{
>>>>>>>> -	return 0;
>>>>>>>> -}
>>>>>>>> -
>>>>>>>> -static inline int m2p_remove_override(struct page *page, bool
>>>>>>>> clear_pte)
>>>>>>>> -{
>>>>>>>> -	return 0;
>>>>>>>> -}
>>>>>>>> +extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref
>>>>>>>> *map_ops,
>>>>>>>> +				   struct gnttab_map_grant_ref
>>>>>>>> *kmap_ops,
>>>>>>>> +				   struct page **pages, unsigned int
>>>>>>>> count,
>>>>>>>> +				   bool m2p_override);
>>>>>>>> +
>>>>>>>> +extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref
>>>>>>>> *unmap_ops,
>>>>>>>> +				     struct gnttab_map_grant_ref
>>>>>>>> *kmap_ops,
>>>>>>>> +				     struct page **pages, unsigned int
>>>>>>>> count,
>>>>>>>> +				     bool m2p_override);
>>>>>>>
>>>>>>> Much much better.
>>>>>>> The only comment I have is about this m2p_override boolean parameter.
>>>>>>> m2p_override is now meaningless in this context, what we really want to
>>>>>>> let the arch specific implementation know is whether the mapping is a
>>>>>>> kernel only mapping or a userspace mapping.
>>>>>>> Testing for kmap_ops != NULL might even be enough, but it would not
>>>>>>> improve the interface.
>>>>>> gntdev is the only user of this, the kmap_ops parameter there is:
>>>>>> use_ptemod ? map->kmap_ops + offset : NULL
>>>>>> where:
>>>>>> use_ptemod = !xen_feature(XENFEAT_auto_translated_physmap);
>>>>>> So I think we can't rely on kmap_ops to decide whether we should
>>>>>> m2p_override
>>>>>> or not.
>>>>>>
>>>>>>> Is it possible to realize if the mapping is a userspace mapping by
>>>>>>> checking for GNTMAP_application_map in map_ops?
>>>>>>> Otherwise I would keep the boolean and rename it to user_mapping.
>>>>>> Sounds better, but as far as I see gntdev set that flag in
>>>>>> find_grant_ptes,
>>>>>> which is called only
>>>>>>
>>>>>> if (use_ptemod) {
>>>>>> 	err = apply_to_page_range(vma->vm_mm, vma->vm_start,
>>>>>> 				  vma->vm_end - vma->vm_start,
>>>>>> 				  find_grant_ptes, map);
>>>>>>
>>>>>> So if xen_feature(XENFEAT_auto_translated_physmap), we don't have
>>>>>> kmap_ops,
>>>>>> and GNTMAP_application_map is not set as well, but I guess we still need
>>>>>> m2p_override. Or not? I'm a bit confused, maybe because of Monday ...
>>>>>
>>>>> If xen_feature(XENFEAT_auto_translated_physmap) we shouldn't need the
>>>>> m2p_override.
>>>>>
>>>>
>>>> So it's safe to assume that we need m2p_override only if kmap_ops != NULL, and
>>>> we can avoid the extra bool parameter, is that correct? At least with the
>>>> current users of grant mapping it seems to be true.
>>>> In which case we don't need the wrappers for gnttab_[un]map_refs as well.
>>>> Actually the most of m2p_add/remove_override takes effect only if there is a
>>>> kmap_op parameter, but what about the rest of the code there?
>>>
>>> It is safe to assume that we only need the m2p_override if
>>> !xen_feature(XENFEAT_auto_translated_physmap).
>>> I wouldn't make any assumptions on kmap_ops != NULL.
>>
>> I think it is -- we only need the m2p override if we have userspace
>> mappings (where kmap_ops != 0).
>>
>>> I would remove the bool m2p_override parameter completely and determine
>>> whether we need to call the m2p_override functions from the x86
>>> implementation of set/clear_foreign_p2m_mapping by checking
>>> xen_feature(XENFEAT_auto_translated_physmap).
>>>
>>> David, does it seem reasonable to you?
>>
>> That would miss the point of this patch which is to avoid adding to the
>> m2p_override for kernel only mappings.
> 
> I meant checking 
> 
> !xen_feature(XENFEAT_auto_translated_physmap) && kmap_ops != 0
> 
> At least this way the "hack" would be entirely self contained in the
> arch specific code.

Ok. That would work.

David


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:36:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYTr-00027M-Tk; Thu, 20 Feb 2014 18:36:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WGYTq-00027H-W6
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 18:36:19 +0000
Received: from [85.158.143.35:3093] by server-3.bemta-4.messagelabs.com id
	96/D4-11539-22B46035; Thu, 20 Feb 2014 18:36:18 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392921376!7152129!1
X-Originating-IP: [62.142.5.110]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTEwID0+IDkyMjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2814 invoked from network); 20 Feb 2014 18:36:17 -0000
Received: from emh04.mail.saunalahti.fi (HELO emh04.mail.saunalahti.fi)
	(62.142.5.110)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 18:36:17 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh04.mail.saunalahti.fi (Postfix) with ESMTP id 1C6EC1A25E4;
	Thu, 20 Feb 2014 20:36:13 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id D9C5F36C01F; Thu, 20 Feb 2014 20:36:13 +0200 (EET)
Date: Thu, 20 Feb 2014 20:36:13 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Cui, Dexuan" <dexuan.cui@intel.com>
Message-ID: <20140220183613.GJ3200@reaktio.net>
References: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "Tian, Kevin" <kevin.tian@intel.com>, "White,
	Michael L" <michael.l.white@intel.com>, "Cowperthwaite,
	David J" <david.j.cowperthwaite@intel.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"Li, Susie" <susie.li@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Haron,
	Sandra" <sandra.haron@intel.com>
Subject: Re: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
 Graphics Passthrough Solution from Intel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 07:59:04AM +0000, Cui, Dexuan wrote:
> Hi all,
> We're pleased to announce an update to XenGT since its first disclosure in last Sep. XenGT is a full GPU virtualization solution with mediated pass-through, on Intel Processor Graphics. A virtual GPU instance is maintained for each VM, with part of performance critical resources directly assigned. The capability of running native graphics driver inside a VM, without hypervisor intervention in performance critical paths, achieves a good balance among performance, feature, and sharing capability. Though we only support Xen on Intel Processor Graphics so far, the core logic can be easily ported to other hypervisors.
> 
> The update consists of:
> 
> Linux-vgt:
>     Rebased to kernel 3.11.6 
>     Lots of stability fixes
>     Improved sharing quality of render engine and display engine
>     Multi-monitors (clone/extended) support for VGA, HDMI, DP and eDP types
>     Support VMs with different resolutions
>     Improved monitor hotplug handling
>     Preliminary support for GPU recovery
> 
> Xen-vgt:
>     Rebased to Xen 4.3.1
>     >8 bytes MMIO emulation
> 
> Qemu-vgt:
>     Included VT-d GPU pass-through logic for comparison
>     Grub2 graphics mode works now
> 
> Please refer to the attached new setup guide:
> https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf
> It provides step-by-step details about building/configuring/running XenGT.
> 
> The new source codes are available at the updated github repos:
> Linux: https://github.com/01org/XenGT-Preview-kernel.git
> Xen: https://github.com/01org/XenGT-Preview-xen.git
> Qemu: https://github.com/01org/XenGT-Preview-qemu.git
> 
> More information about XenGT's background, architecture, etc can be found at:
> http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7.pdf
> https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt
> 
> Appreciating your comments!
> 

First of all: Very nice, thanks a lot for sharing this! 

Are you going to work on upstreaming this stuff? Xen 4.4 will be released soon(ish),
so the Xen 4.5 development window opens in the near future; hopefully this work can be upstreamed then.

Also: Haswell ("4th generation Intel Core CPU") is listed as a requirement in the Setup Guide PDF.
Will there be support for SNB/IVB GPUs as well?


Thanks,

-- Pasi

> Thanks,
> -- Dexuan
> 
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shan, Haitao
> > Sent: Monday, September 09, 2013 8:46 PM
> > To: xen-devel@lists.xen.org
> > Cc: Tian, Kevin; White, Michael L; Dong, Eddie; Li, Susie; Cowperthwaite,
> > David J; Haron, Sandra
> > Subject: [Xen-devel] [RFC] XenGT - A Mediated Graphics Passthrough
> > Solution from Intel
> > 
> > Hi, Xen Experts,
> > 
> > This email is aimed at a first time disclosure of project XenGT, which is
> > a Graphics virtualization solution based on Xen.
> > 
> > 
> > 
> > As you can see, requirements for GPU to be sharable among virtual machines
> > have been constantly rising. The targeted usage model might be
> > accelerating tasks ranging from gaming, video playback, fancy GUIs,
> > high-performance computing (GPU-based). This trend is observed on both
> > client and cloud. Efficient GPU virtualization is required to address the
> > increasing demands.
> > 
> > 
> > We have developed XenGT - a prototype based on a mediated passthrough
> > architecture. We support running a native graphics driver in multiple VMs
> > to achieve high performance. A specific mediator owns the scheduling and
> > sharing hardware resources among all the virtual machines. By saying
> > mediated pass-through, we actually divide graphics resource to two
> > categories: performance critical and others. We partition performance
> > critical resources for VM's direct access like pass-through, while save
> > and restore others.
> > 
> > 
> > 
> > XenGT implements the mediator in dom0, called vgt driver. This avoids
> > adding complex device knowledge to Xen, and also permits a more flexible
> > release model. In the meantime, we want to have a unified architecture to
> > mediate all the VMs, including dom0. Thus, we developed a deprivileged
> > dom0 mode, which essentially traps Dom0's access to selected resources
> > (graphics resources in XenGT's case) and forwards it to the vgt driver
> > (which is also in Dom0) for processing.
> > 
> > 
> > Right now, we support 4 accelerated VMs: Dom0 + 3 HVM DomUs. We've
> > conducted verifications based on Ubuntu 12.04 and 13.04. Tests conducted
> > in VM include but are not limited to 3D gaming, media playbacks, 2D
> > accelerations.
> > 
> > We believe the architecture itself can be general so that different GPUs
> > can all use this mediated passthrough concept. However, we only developed
> > codes based on Intel 4th Core Processor with integrated Graphics.
> > 
> > If you have interests in trying, you can refer to the attached setup
> > guide, which provides step-by-step details on building/configuring/running
> > XenGT.
> > 
> > Source code is made available at github:
> > Xen: https://github.com/01org/XenGT-Preview-xen.git
> > Linux: https://github.com/01org/XenGT-Preview-kernel.git
> > Qemu: https://github.com/01org/XenGT-Preview-qemu.git
> > 
> > Any comments are welcome!
> > 
> > 
> > Special note: We are making this code available to general public since we
> > take community's involvement and feedbacks seriously. However, while
> > we've
> > tested our solution with various workloads, the code is only at pre-alpha
> > stage. Hangs might happen, so please don't try it on the system that
> > hosting critical data for you.
> > 
> > Shan Haitao
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 18:36:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 18:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYTr-00027M-Tk; Thu, 20 Feb 2014 18:36:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WGYTq-00027H-W6
	for xen-devel@lists.xen.org; Thu, 20 Feb 2014 18:36:19 +0000
Received: from [85.158.143.35:3093] by server-3.bemta-4.messagelabs.com id
	96/D4-11539-22B46035; Thu, 20 Feb 2014 18:36:18 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392921376!7152129!1
X-Originating-IP: [62.142.5.110]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjIuMTQyLjUuMTEwID0+IDkyMjA0\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2814 invoked from network); 20 Feb 2014 18:36:17 -0000
Received: from emh04.mail.saunalahti.fi (HELO emh04.mail.saunalahti.fi)
	(62.142.5.110)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 18:36:17 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh04.mail.saunalahti.fi (Postfix) with ESMTP id 1C6EC1A25E4;
	Thu, 20 Feb 2014 20:36:13 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id D9C5F36C01F; Thu, 20 Feb 2014 20:36:13 +0200 (EET)
Date: Thu, 20 Feb 2014 20:36:13 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: "Cui, Dexuan" <dexuan.cui@intel.com>
Message-ID: <20140220183613.GJ3200@reaktio.net>
References: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: "Tian, Kevin" <kevin.tian@intel.com>, "White,
	Michael L" <michael.l.white@intel.com>, "Cowperthwaite,
	David J" <david.j.cowperthwaite@intel.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"Li, Susie" <susie.li@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Haron,
	Sandra" <sandra.haron@intel.com>
Subject: Re: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
 Graphics Passthrough Solution from Intel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 07:59:04AM +0000, Cui, Dexuan wrote:
> Hi all,
> We're pleased to announce an update to XenGT since its first disclosure last September. XenGT is a full GPU virtualization solution with mediated pass-through for Intel Processor Graphics. A virtual GPU instance is maintained for each VM, with a subset of performance-critical resources directly assigned. The ability to run the native graphics driver inside a VM, without hypervisor intervention on performance-critical paths, achieves a good balance among performance, features, and sharing capability. Though we only support Xen on Intel Processor Graphics so far, the core logic can easily be ported to other hypervisors.
> 
> The update consists of:
> 
> Linux-vgt:
>     Rebased to kernel 3.11.6 
>     Lots of stability fixes
>     Improved sharing quality of render engine and display engine
>     Multi-monitor (clone/extended) support for VGA, HDMI, DP and eDP outputs
>     Support for VMs with different resolutions
>     Improved monitor hotplug handling
>     Preliminary support for GPU recovery
> 
> Xen-vgt:
>     Rebased to Xen 4.3.1
>     MMIO emulation for accesses wider than 8 bytes
> 
> Qemu-vgt:
>     Included VT-d GPU pass-through logic for comparison
>     Grub2 graphics mode works now
> 
> Please refer to the new setup guide:
> https://github.com/01org/XenGT-Preview-kernel/blob/master/XenGT_Setup_Guide.pdf
> It provides step-by-step details on building, configuring, and running XenGT.
> 
> The new source codes are available at the updated github repos:
> Linux: https://github.com/01org/XenGT-Preview-kernel.git
> Xen: https://github.com/01org/XenGT-Preview-xen.git
> Qemu: https://github.com/01org/XenGT-Preview-qemu.git
> 
> More information about XenGT's background, architecture, etc. can be found at:
> http://events.linuxfoundation.org/sites/events/files/slides/XenGT-Xen%20Summit-v7.pdf
> https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt
> 
> We appreciate your comments!
> 

First of all: Very nice, thanks a lot for sharing this! 

Are you going to work on upstreaming this? Xen 4.4 will be released soon(ish),
so the Xen 4.5 development window opens in the near future; hopefully this work can be upstreamed then.

Also: Haswell ("4th generation Intel Core CPU") is listed as a requirement in the Setup Guide PDF.
Will there be support for SNB/IVB GPUs as well?


Thanks,

-- Pasi

> Thanks,
> -- Dexuan
> 
> > -----Original Message-----
> > From: xen-devel-bounces@lists.xen.org
> > [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Shan, Haitao
> > Sent: Monday, September 09, 2013 8:46 PM
> > To: xen-devel@lists.xen.org
> > Cc: Tian, Kevin; White, Michael L; Dong, Eddie; Li, Susie; Cowperthwaite,
> > David J; Haron, Sandra
> > Subject: [Xen-devel] [RFC] XenGT - A Mediated Graphics Passthrough
> > Solution from Intel
> > 
> > Hi, Xen Experts,
> > 
> > This email is the first disclosure of project XenGT, a graphics
> > virtualization solution based on Xen.
> > 
> > 
> > 
> > As you can see, demand for GPUs that can be shared among virtual machines
> > has been rising constantly. Targeted usage models include accelerating
> > tasks ranging from gaming, video playback, and rich GUIs to GPU-based
> > high-performance computing. This trend is observed on both client and
> > cloud. Efficient GPU virtualization is required to address the
> > increasing demand.
> > 
> > 
> > We have developed XenGT - a prototype based on a mediated pass-through
> > architecture. We support running a native graphics driver in multiple VMs
> > to achieve high performance. A dedicated mediator owns the scheduling and
> > sharing of hardware resources among all the virtual machines. By mediated
> > pass-through we mean that graphics resources are divided into two
> > categories: performance-critical and others. Performance-critical
> > resources are partitioned for each VM's direct access, as in
> > pass-through, while the others are saved and restored by the mediator.
> > 
> > 
> > 
> > XenGT implements the mediator in dom0, as the so-called vgt driver. This
> > avoids adding complex device knowledge to Xen, and also permits a more
> > flexible release model. At the same time, we want a unified architecture
> > to mediate all the VMs, including dom0. Thus, we developed a deprivileged
> > dom0 mode, which essentially traps dom0's accesses to selected resources
> > (graphics resources in XenGT's case) and forwards them to the vgt driver
> > (which is also in dom0) for processing.
> > 
> > 
> > Right now, we support 4 accelerated VMs: dom0 + 3 HVM domUs. We've
> > conducted verification based on Ubuntu 12.04 and 13.04. Tests conducted
> > in VMs include, but are not limited to, 3D gaming, media playback, and
> > 2D acceleration.
> > 
> > We believe the architecture itself is general enough that different GPUs
> > can all use this mediated pass-through concept. However, we have only
> > developed code for Intel 4th Generation Core processors with integrated
> > graphics.
> > 
> > If you are interested in trying it, you can refer to the attached setup
> > guide, which provides step-by-step details on building, configuring, and
> > running XenGT.
> > 
> > Source code is made available at github:
> > Xen: https://github.com/01org/XenGT-Preview-xen.git
> > Linux: https://github.com/01org/XenGT-Preview-kernel.git
> > Qemu: https://github.com/01org/XenGT-Preview-qemu.git
> > 
> > Any comments are welcome!
> > 
> > 
> > Special note: We are making this code available to the general public
> > because we take the community's involvement and feedback seriously.
> > However, while we've tested our solution with various workloads, the
> > code is only at a pre-alpha stage. Hangs may happen, so please don't
> > try it on a system hosting critical data.
> > 
> > Shan Haitao
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 19:07:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 19:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGYxy-0003Ao-LB; Thu, 20 Feb 2014 19:07:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1WGYxx-0003Ah-HE
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 19:07:25 +0000
Received: from [85.158.137.68:40018] by server-10.bemta-3.messagelabs.com id
	97/2D-07302-C6256035; Thu, 20 Feb 2014 19:07:24 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392923243!3185317!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28638 invoked from network); 20 Feb 2014 19:07:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-9.tower-31.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 20 Feb 2014 19:07:24 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S1715785AbaBTTHN (ORCPT <rfc822;xen-devel@lists.xenproject.org>);
	Thu, 20 Feb 2014 20:07:13 +0100
Date: Thu, 20 Feb 2014 20:07:13 +0100
From: Daniel Kiper <dkiper@net-space.pl>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140220190713.GA2183@router-fw-old.local.net-space.pl>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
User-Agent: Mutt/1.3.28i
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

On Wed, Feb 19, 2014 at 03:47:10PM +0400, Vasiliy Tolstov wrote:
> 2014-02-17 14:56 GMT+04:00 Daniel Kiper <dkiper@net-space.pl>:
> > They should. IIRC they were targeted for 4.3.
>
>
> Daniel, can you check this
> https://gist.github.com/vtolstov/9090413/raw/dc457631fe0b6df793552494d059eac62fd962e0/gistfile1.txt
> This is a patch that merges the 4 patches from the email thread (because I
> can't get the attachments, and after copy/paste I get errors).
> As far as I can see nothing has changed; I'm using enforce=0 in xl.conf
> and starting the domain with memory=512 and maxmem=1024.

Use "mem_set_enforce_limit=0" in xl.conf instead of "enforce=0".
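For clarity, the settings discussed in this thread would look like this (the option name is taken from the patch series under discussion and may not exist in released Xen versions; the memory values are just the example from this thread):

```
# /etc/xen/xl.conf -- option from the patch series in this thread,
# not necessarily present in released Xen
mem_set_enforce_limit=0

# guest config file -- boot the domain with 512 MiB, allow
# ballooning up to 1 GiB
memory = 512
maxmem = 1024
```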

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 19:14:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 19:14:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGZ4W-0003lp-3g; Thu, 20 Feb 2014 19:14:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGZ4V-0003lj-Bv
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 19:14:11 +0000
Received: from [85.158.143.35:23432] by server-2.bemta-4.messagelabs.com id
	AF/29-10891-20456035; Thu, 20 Feb 2014 19:14:10 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1392923648!7158264!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2962 invoked from network); 20 Feb 2014 19:14:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 19:14:09 -0000
X-IronPort-AV: E=Sophos;i="4.97,513,1389744000"; d="scan'208";a="104444858"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 19:14:07 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 14:14:07 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGZ4Q-0002kR-Oh;
	Thu, 20 Feb 2014 19:14:06 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGZ4Q-0002Pi-K7;
	Thu, 20 Feb 2014 19:14:06 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25152-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 19:14:06 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.2-testing test] 25152: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25152 xen-4.2-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25152/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-i386-i386-xl-qemuu-winxpsp3 14 guest-stop                 fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass

version targeted for testing:
 xen                  0620cc886eef9018d2b2a5fcdc641be70b5ac54b
baseline version:
 xen                  640b31535ab8fe07911d0b90ae4adbe6078026c9

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Campbell <Ian.Campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Keir Fraser <keir@xen.org>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=0620cc886eef9018d2b2a5fcdc641be70b5ac54b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 0620cc886eef9018d2b2a5fcdc641be70b5ac54b
+ branch=xen-4.2-testing
+ revision=0620cc886eef9018d2b2a5fcdc641be70b5ac54b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 0620cc886eef9018d2b2a5fcdc641be70b5ac54b:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   640b315..0620cc8  0620cc886eef9018d2b2a5fcdc641be70b5ac54b -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-i386-i386-xl-qemuu-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.2-testing
+ revision=0620cc886eef9018d2b2a5fcdc641be70b5ac54b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.2-testing 0620cc886eef9018d2b2a5fcdc641be70b5ac54b
+ branch=xen-4.2-testing
+ revision=0620cc886eef9018d2b2a5fcdc641be70b5ac54b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.2-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.2-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.2-testing
++ : daily-cron.xen-4.2-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.2-testing.git
++ : daily-cron.xen-4.2-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.2-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.2-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.2-testing
+ xenversion=xen-4.2
+ xenversion=4.2
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 0620cc886eef9018d2b2a5fcdc641be70b5ac54b:stable-4.2
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   640b315..0620cc8  0620cc886eef9018d2b2a5fcdc641be70b5ac54b -> stable-4.2

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 19:43:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 19:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGZWi-0004VY-8a; Thu, 20 Feb 2014 19:43:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGZWg-0004VT-SV
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 19:43:19 +0000
Received: from [193.109.254.147:45806] by server-4.bemta-14.messagelabs.com id
	F9/45-32066-6DA56035; Thu, 20 Feb 2014 19:43:18 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392925397!5759767!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7085 invoked from network); 20 Feb 2014 19:43:17 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 19:43:17 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so294714eek.27
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 11:43:17 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=OFHtu20F/oAJ+5U9xwBXw+gaaDEABtwo4yTr/zBrPyE=;
	b=Fv9GH9TwMoLDt+Kb8efGlhk+itXMNvOd3hQLjlrTgG44F3tVj2/Jb9eLju/Edx1FYU
	BAODvIdyuR2/zJ4k7TW3e4mEIOQrGLLvnj6F6SnMPcgIADaauzDBMIZUhQa/bEZym+wH
	9kFsvaimzuIdw3LTL2iGlv751fyO3GEXGESgFjU6Je8WT+fcndK0F2oX22ct66T3zSMb
	OfTScMHjBRtruCuXfmy9hll040lhf6GGJaR0qrN/9eAkaOMUphCPLXeHDnyAd6IhnsjE
	yjB5rYgKXNEgbtgb6B88kBT1YmOrk1SQLzjytrxZtt7UW2poVuJ83dU4LZNAR3JWFHty
	x0Aw==
X-Gm-Message-State: ALoCoQnVlxNCBEsmxzY8C8KHcjEdnIm3dyeJEVUUrJEw607g/trsAUVUcyX46fJULdrGg7MLCE4d
X-Received: by 10.15.10.7 with SMTP id f7mr1993634eet.86.1392925396863;
	Thu, 20 Feb 2014 11:43:16 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id k6sm17481299eep.17.2014.02.20.11.43.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 11:43:16 -0800 (PST)
Message-ID: <53065AD3.7090106@linaro.org>
Date: Thu, 20 Feb 2014 19:43:15 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-5-git-send-email-julien.grall@linaro.org>
	<1392812318.29739.31.camel@kazak.uk.xensource.com>
In-Reply-To: <1392812318.29739.31.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor
 specific setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/19/2014 12:18 PM, Ian Campbell wrote:

>> + *
>> + * Julien Grall <julien.grall@linaro.org>
>> + * Copyright (c) 2014 Linaro Limited.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License as published by
>> + * the Free Software Foundation; either version 2 of the License, or
>> + * (at your option) any later version.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + */
>> +#include <asm/procinfo.h>
>> +#include <asm/processor.h>
>> +
>> +static void armv7_vcpu_initialize(struct vcpu *v)
>> +{
>> +    if ( v->domain->max_vcpus > 1 )
>> +        v->arch.actlr |= ACTLR_V7_SMP;
>> +    else
>> +        v->arch.actlr &= ~ACTLR_V7_SMP;
>> +}
>> +
>> +const struct processor armv7_processor = {
> 
> __rodata? (or whatever it is called)

I forgot to answer this part. The compiler will place it in .rodata by
default. Did you mean __initconst? If so, we can't use it, because I
keep a pointer to this structure in arch/arm/processor.c.

If we want to save space, we can copy it into another variable in
processor_setup.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:02:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGZoc-00053A-4S; Thu, 20 Feb 2014 20:01:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGZob-000535-1s
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:01:49 +0000
Received: from [85.158.139.211:3765] by server-3.bemta-5.messagelabs.com id
	60/58-13671-C2F56035; Thu, 20 Feb 2014 20:01:48 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392926507!1324310!1
X-Originating-IP: [209.85.215.43]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23772 invoked from network); 20 Feb 2014 20:01:47 -0000
Received: from mail-la0-f43.google.com (HELO mail-la0-f43.google.com)
	(209.85.215.43)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 20:01:47 -0000
Received: by mail-la0-f43.google.com with SMTP id pv20so1665471lab.16
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 12:01:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=3XKDZPGV3Rarv0GuZPg7XjrMD1veXJK+C3vGlkm78tM=;
	b=udIzk63AoOK+GwGfqaEioIV3r8jF0U7MIhTvUB2l+lCfpZeS4RecrF/GarD/M49re3
	WvwBBz2sOIo5h8PVFxkPKxqD1ubHg1h8f4RafLJzHDe0ga5Amjc7/4NLFZO3G//DjyCw
	r1ZdB+qmy5hb7U0F7hrVEArMgYiMa0bBPuHxPp1O7Mpmw794vK6C2pUbpiGlZ66vgTUy
	TiZhFwot2AzqocBzdCAcqtnmuEOvsFbt4H8xeXD8sVQlgCC15Rzmf1MREEHVH6upI+jy
	ejRuQxMEVWt+baQQHYQekNUi7BHG20YUHOnaB67PAgG7QOqPcw8y1ZBZaq7VFgRDR7f+
	msQQ==
X-Received: by 10.152.229.225 with SMTP id st1mr2297608lac.2.1392926506313;
	Thu, 20 Feb 2014 12:01:46 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Thu, 20 Feb 2014 12:01:25 -0800 (PST)
In-Reply-To: <530600C5.3070107@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<530600C5.3070107@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Thu, 20 Feb 2014 12:01:25 -0800
X-Google-Sender-Auth: f-bF2so28YdqsVa-yD8zcyG0vtU
Message-ID: <CAB=NE6WKhBJyyUO5o8B53J+F4PqF2PHvvbV3=yS3qs0ZDRKH7Q@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 5:19 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> How about this: netback sets the root_block flag and a random MAC by
> default. So the default behaviour won't change, DAD will be happy, and
> userspace doesn't have to do anything unless it's using netback for the STP
> root bridge (I don't think there are too many toolstacks doing that), in which
> case it has to remove the root_block flag instead of setting a random MAC.

:D that's exactly what I ended up proposing too. I mentioned how
xen-netback could do this as well: we'd keep or rename the flag I
added, and then the bridge code would look at it and enable the root
block if the flag is set. Stephen, however, does not like having the
bridge code look at magic flags for this behaviour and would prefer
that we get the tools to ask for the root block. Let's follow up
further on that thread.
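For reference, root blocking can already be requested from userspace on
recent kernels; a sketch of what a toolstack could run (the vif name
here is hypothetical, and both commands need root and BR_ROOT_BLOCK
support, Linux >= 3.4):

```shell
# Block a netback port, vif1.0, from becoming the STP root port.

# Via the per-port sysfs attribute:
echo 1 > /sys/class/net/vif1.0/brport/root_block

# Or with iproute2's bridge(8) tool:
bridge link set dev vif1.0 root_block on
```

Stephen's preference, as described above, amounts to having the tools
issue something like this rather than the bridge code acting on a
device flag.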

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:11:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGZyH-0005WQ-TK; Thu, 20 Feb 2014 20:11:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WGZyG-0005WL-Nj
	for Xen-devel@lists.xen.org; Thu, 20 Feb 2014 20:11:48 +0000
Received: from [193.109.254.147:56999] by server-12.bemta-14.messagelabs.com
	id 98/0B-17220-48166035; Thu, 20 Feb 2014 20:11:48 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392927106!5762981!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19240 invoked from network); 20 Feb 2014 20:11:47 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-14.tower-27.messagelabs.com with SMTP;
	20 Feb 2014 20:11:47 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 805CE9A3;
	Thu, 20 Feb 2014 20:11:45 +0000 (UTC)
Date: Thu, 20 Feb 2014 12:13:12 -0800
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140220201312.GA6067@kroah.com>
References: <5305DE2C.7080502@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5305DE2C.7080502@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
Cc: "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen fixes for 3.10.y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 10:51:24AM +0000, David Vrabel wrote:
> These two changes fix important bugs with 32-bit Xen PV guests.
> 
> 0160676bba69523e8b0ac83f306cce7d342ed7c8 (xen/p2m: check MFN is in range
> before using the m2p table)

Now applied.

> 7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 (xen: Fix possible user space
> selector corruption)

I had to edit this one by hand to get it to apply; can you verify that I got it right?

thanks,

greg k-h

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:11:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGZyH-0005WQ-TK; Thu, 20 Feb 2014 20:11:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WGZyG-0005WL-Nj
	for Xen-devel@lists.xen.org; Thu, 20 Feb 2014 20:11:48 +0000
Received: from [193.109.254.147:56999] by server-12.bemta-14.messagelabs.com
	id 98/0B-17220-48166035; Thu, 20 Feb 2014 20:11:48 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1392927106!5762981!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19240 invoked from network); 20 Feb 2014 20:11:47 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-14.tower-27.messagelabs.com with SMTP;
	20 Feb 2014 20:11:47 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 805CE9A3;
	Thu, 20 Feb 2014 20:11:45 +0000 (UTC)
Date: Thu, 20 Feb 2014 12:13:12 -0800
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140220201312.GA6067@kroah.com>
References: <5305DE2C.7080502@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5305DE2C.7080502@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
Cc: "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen fixes for 3.10.y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 10:51:24AM +0000, David Vrabel wrote:
> These two changes fix important bugs with 32-bit Xen PV guests.
> 
> 0160676bba69523e8b0ac83f306cce7d342ed7c8 (xen/p2m: check MFN is in range
> before using the m2p table)

Now applied.

> 7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 (xen: Fix possible user space
> selector corruption)

I had to edit this by hand to get it to apply; can you verify I got it right?
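
One hedged way to double-check a hand-edited backport against the upstream
commit is to diff the two patch files while filtering out metadata that
legitimately differs between trees (blob "index" lines and hunk "@@" offsets);
the patch file names and the backport SHA below are hypothetical:

```shell
#!/usr/bin/env bash
# compare_patches: diff two patch files, ignoring blob "index" lines
# and hunk "@@" offsets, which are expected to differ between trees.
# Exit status 0 means the actual code changes and context are identical.
compare_patches() {
    diff -u \
        <(grep -v -e '^index ' -e '^@@' "$1") \
        <(grep -v -e '^index ' -e '^@@' "$2")
}

# Example use (backport SHA hypothetical):
#   git diff 7cde9b27e7b3^ 7cde9b27e7b3 > upstream.patch
#   git diff <backport-sha>^ <backport-sha> > backport.patch
#   compare_patches upstream.patch backport.patch
```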

thanks,

greg k-h

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:25:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:25:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaAa-0005vx-8u; Thu, 20 Feb 2014 20:24:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGaAY-0005vs-RS
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:24:31 +0000
Received: from [85.158.139.211:37256] by server-6.bemta-5.messagelabs.com id
	9B/F0-14342-C7466035; Thu, 20 Feb 2014 20:24:28 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392927867!5280480!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16413 invoked from network); 20 Feb 2014 20:24:27 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 20:24:27 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so1702096lan.12
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 12:24:26 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=AqrqcwuU5zAJR3L/y9/u3er5xNXLq6S1MtBVenu0YVQ=;
	b=0M3N52begcVKwwPaf5/l2im7OT5huZJHvGlCIs2D6p7yzj88bHB/SHSpH8jkqdaFm2
	NeOxAHbWmt2RCILy7bKzv4uEZ4sL0/MK8AN870cfWMuMuApFgLifmtEXYIJXR1TjOh+g
	dg+VNnU++vLpGv5mkXpcnoKnx7BFRVHTqE6DElD0AZ5dc2n+VvUcvOp9RgYEaREa61JL
	H3VHj6rEhlnlq+3lJvbHw1U85J9QswaCVuDSp0L00zUbN+YRiX3VYmTh8EhwsRXzG5+v
	zTquTQ8RESPys692zJky6vnhCwfIgBI7JfvwF3sTbfFxinAAMQZ9i8rDgxIGeWOLW2i7
	+V0A==
X-Received: by 10.152.6.101 with SMTP id z5mr2181990laz.53.1392927866787; Thu,
	20 Feb 2014 12:24:26 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Thu, 20 Feb 2014 12:24:06 -0800 (PST)
In-Reply-To: <20140220091958.62a8b444@nehalam.linuxnetplumber.net>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
	<20140220091958.62a8b444@nehalam.linuxnetplumber.net>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Thu, 20 Feb 2014 12:24:06 -0800
X-Google-Sender-Auth: OtBwFC_pfcZ9FKghoEwF8OFUnAs
Message-ID: <CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 9:19 AM, Stephen Hemminger
<stephen@networkplumber.org> wrote:
> On Wed, 19 Feb 2014 09:59:33 -0800 "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>> On Wed, Feb 19, 2014 at 9:08 AM, Stephen Hemminger <stephen@networkplumber.org> wrote:
>> >
>> > Please only use the netlink/sysfs flags fields that already exist
>> > for new features.
>>
>> Sure, but what if we know a driver in most cases wants the root block
>> and we'd want to make it the default, thereby only requiring userspace
>> to toggle it off?
>
> Something in userspace has to put the device into the bridge.
> Fix the port setup in that tool via the netlink or sysfs flags in
> the bridge. It should not have to be handled in the bridge looking
> at magic flags in the device.

Agreed, that's the best strategy, and I'll work on sending patches to
brctl to enable the root_block preference. However, this approach also
requires a userspace upgrade. I'm trying to see if we can get an old,
nasty, cryptic hack removed from the kernel, and prevent future drivers
from using it -- without requiring a userspace upgrade. In this case
the bad practice is using a high static MAC address to mimic a default
root block preference. In order to remove that *without* requiring a
userspace upgrade, the dev->priv_flags approach is the only thing I can
think of. If this went in, we'd replace the high static MAC address
with a random MAC address to prevent IPv6 SLAAC / DAD conflicts. I'd
document this flag and indicate a preference for userspace to be the
one tuning these knobs.

Without this we'd have to keep the high static MAC address in upstream
drivers and let userspace do the randomization once it confirms that
the userspace knob to toggle the root block flag is available. Is the
priv_flags approach worth the compromise to get the root block hack
removed?
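
For reference, the per-port knob being discussed already has a
userspace-visible form; a hedged sketch of both interfaces follows
(interface and bridge names are examples, and exact availability
depends on the kernel and iproute2 versions):

```shell
# Block a bridge port from ever becoming the root port.
# sysfs form: 1 = root block on, 0 = off.
echo 1 > /sys/class/net/xenbr0/brif/vif1.0/root_block

# iproute2 form (newer "bridge" tool):
bridge link set dev vif1.0 root_block on
```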

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:29:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaEx-00063f-3u; Thu, 20 Feb 2014 20:29:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGaEu-00063a-0r
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:29:00 +0000
Received: from [85.158.137.68:57637] by server-17.bemta-3.messagelabs.com id
	D6/C9-22569-B8566035; Thu, 20 Feb 2014 20:28:59 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392928137!1983126!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13810 invoked from network); 20 Feb 2014 20:28:58 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 20:28:58 -0000
Received: by mail-lb0-f177.google.com with SMTP id 10so1658319lbg.22
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 12:28:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=6Pq6i3kvQvSXZ/ZthAL87tb1UoVOiu66agHzyUPH5uw=;
	b=iHZ6oA/wj4HT52ZPL6bYTn9N4/8gYUbrvT54MPxMpt5FBZz+16/aEREjMmydy9qs4R
	YyNdFgj6dxiqIARMIIhiQh5/Q/D0kxN+VjSA/w9lnrTa9Zb8ielIIpJYp4hRfKWlouVx
	jmHnZIBDUBaEJK7D16t1t7/j+aj2ajZ7hFaC6fiTHbdRVALPD/SEiaODOmOOIBELRM4A
	kfmU8TFujJ7BQAwc6uri+I7mWe6XYE8bEFKfViVN4aljorh8/AEllA0xUhwsA0WFzOb6
	2j0DovO2VqmPHAX2TSDCa6fAm0Js+rcu4COUTyJvqry3bL13HLxb9GezCSqh+Y4638ik
	vbQA==
X-Received: by 10.112.26.79 with SMTP id j15mr2077256lbg.73.1392928137567;
	Thu, 20 Feb 2014 12:28:57 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Thu, 20 Feb 2014 12:28:37 -0800 (PST)
In-Reply-To: <5306156B.4070105@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<53024C58.4010900@citrix.com>
	<CAB=NE6XYjOd2vRpQCZOG-S5ZW4xjam+FOPAYzribNQpb50Q5pg@mail.gmail.com>
	<5306156B.4070105@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Thu, 20 Feb 2014 12:28:37 -0800
X-Google-Sender-Auth: nfbVyPGq7YKnjaBDf_M-VjsdcnM
Message-ID: <CAB=NE6U7WsN80SOoDrDYvEj9tYYdmcLriCtS-tvLDVeqGmdfwA@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: kvm@vger.kernel.org, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 6:47 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 19/02/14 16:45, Luis R. Rodriguez wrote:
>
>> You seem to describe a case whereby it can make sense for xen-netback
>> interfaces to end up becoming the root port of a bridge. Can you
>> elaborate a little more on that as it was unclear the use case.
>
> Well, I might be wrong on that, but the scenario I was thinking of: a
> guest (let's say domain 1) can have multiple interfaces on different
> Dom0 (or driver domain) bridges; let's say vif1.0 is plugged into
> xenbr0 and vif1.1 is in xenbr1. If the guest wants to make a bridge
> of these two, then using STP makes sense.

The bridging would happen on the front end in that case, no?

>  I wanted to bring up CloudStack's virtual router as an
> example, but then I realized it probably doesn't do such a thing.
> However, I don't think we should hardcode that a netback interface
> can never be the root port.

My patch did allow for this, but the root block flag that Stephen
mentioned can always be lifted.
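
The guest-side setup Zoltan sketches above would look roughly like this
from inside the guest (interface names are examples; whether STP
election is desirable here is exactly the open question):

```shell
# Inside the guest: bridge the two interfaces, each backed by a vif
# plugged into a different Dom0 bridge, and enable STP so the combined
# topology can elect a root and block the redundant path.
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
brctl stp br0 on
ip link set br0 up
```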

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:29:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaFB-00065y-PZ; Thu, 20 Feb 2014 20:29:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1WGaF9-00065l-V2
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:29:16 +0000
Received: from [193.109.254.147:18813] by server-15.bemta-14.messagelabs.com
	id 38/E8-10839-B9566035; Thu, 20 Feb 2014 20:29:15 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392928153!5762611!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7799 invoked from network); 20 Feb 2014 20:29:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 20:29:14 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1KKSxU9014075
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Feb 2014 20:28:59 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1KKSvEm006388
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 20 Feb 2014 20:28:57 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1KKSuuw027685; Thu, 20 Feb 2014 20:28:56 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 20 Feb 2014 12:28:55 -0800
Date: Thu, 20 Feb 2014 21:28:50 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Rusty Russell <rusty@au1.ibm.com>
Message-ID: <20140220202850.GF3441@olila.local.net-space.pl>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <87ha7ubme0.fsf@rustcorp.com.au>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, Anthony Liguori <anthony@codemonkey.ws>,
	sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

On Thu, Feb 20, 2014 at 12:01:19PM +1030, Rusty Russell wrote:
> Anthony Liguori <anthony@codemonkey.ws> writes:
> > On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> >> Daniel Kiper <daniel.kiper@oracle.com> writes:
> >>> Hi,
> >>>
> >>> Below you could find a summary of work in regards to VIRTIO compatibility with
> >>> different virtualization solutions. It was done mainly from Xen point of view
> >>> but results are quite generic and can be applied to wide spectrum
> >>> of virtualization platforms.
> >>
> >> Hi Daniel,
> >>
> >>         Sorry for the delayed response, I was pondering...  CC changed
> >> to virtio-dev.

Do not worry, it is not a problem. This is not an easy issue.

> >> From a standard POV: It's possible to abstract out where we use
> >> 'physical address' for 'address handle'.  It's also possible to define
> >> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
> >> Xen-PV is a distinct platform from x86.
> >
> > I'll go even further and say that "address handle" doesn't make sense too.
>
> I was trying to come up with a unique term, I wasn't trying to define
> semantics :)
>
> There are three debates here now: (1) what should the standard say, and

Yep.

> (2) how would Linux implement it,

It seems to me that we should think about other common OSes too.

> (3) should we use each platform's PCI IOMMU.

I do not want to emulate any hardware. It seems to me that we should
think about something which fits best in the VIRTIO environment. The
DMA API with relevant backends looks promising, but I also have some
worries about performance. Additionally, it is Linux-kernel-specific
stuff, so maybe we should invent something more generic which will fit
well in other guest OSes too.

[...]

> It's a fundamental assumption of virtio that the host can access all of
> guest memory.  That's paravirt, not a hack.

Why? What if guests would like to limit access to their memory? I
think that will happen sooner or later. Additionally, I think that
your assumption is not hypervisor-agnostic, which limits
implementations of the VIRTIO spec. At least for Xen, your idea will
create difficulties and probably prevent a VIRTIO implementation.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

On Thu, Feb 20, 2014 at 12:01:19PM +1030, Rusty Russell wrote:
> Anthony Liguori <anthony@codemonkey.ws> writes:
> > On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> >> Daniel Kiper <daniel.kiper@oracle.com> writes:
> >>> Hi,
> >>>
> >>> Below you could find a summary of work in regards to VIRTIO compatibility with
> >>> different virtualization solutions. It was done mainly from Xen point of view
> >>> but results are quite generic and can be applied to wide spectrum
> >>> of virtualization platforms.
> >>
> >> Hi Daniel,
> >>
> >>         Sorry for the delayed response, I was pondering...  CC changed
> >> to virtio-dev.

Do not worry, it is not a problem. It is not an easy issue.

> >> From a standard POV: It's possible to abstract out where we use
> >> 'physical address' for 'address handle'.  It's also possible to define
> >> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
> >> Xen-PV is a distinct platform from x86.
> >
> > I'll go even further and say that "address handle" doesn't make sense either.
>
> I was trying to come up with a unique term, I wasn't trying to define
> semantics :)
>
> There are three debates here now: (1) what should the standard say, and

Yep.

> (2) how would Linux implement it,

It seems to me that we should think about other common OSes too.

> (3) should we use each platform's PCI IOMMU.

I do not want to emulate any hardware. It seems to me that we should think about
something which fits best in the VIRTIO environment. The DMA API with relevant backends
looks promising, but I also have some worries about performance. Additionally,
it is Linux kernel specific, so maybe we should invent something more generic
which will also fit other guest OSes.
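
The "address handle" idea being debated in this thread can be pictured with a
toy model (Python, purely illustrative — none of these class or function names
come from the real virtio or Xen code): the ring descriptor carries an opaque
handle, and each platform supplies its own resolver — a pass-through where the
handle simply is the guest physical address, or a grant-table lookup for a
Xen-PV-style platform that only exposes explicitly granted pages.

```python
# Toy model of the "address handle" abstraction: the virtio ring stores
# an opaque handle; the platform decides what it means.  All names here
# are illustrative, not taken from any real virtio or Xen source.

class GuestPhysPlatform:
    """KVM-style platform: the handle *is* the guest physical address."""
    def resolve(self, handle):
        return handle

class XenPVPlatform:
    """Xen-PV-style platform: the handle is a grant reference that the
    backend must look up in a per-guest grant table."""
    def __init__(self):
        self.grant_table = {}   # grant ref -> guest physical address
        self._next_ref = 0

    def grant(self, gpa):
        ref = self._next_ref
        self._next_ref += 1
        self.grant_table[ref] = gpa
        return ref

    def resolve(self, handle):
        if handle not in self.grant_table:
            raise PermissionError("page not granted to this backend")
        return self.grant_table[handle]

def backend_read(platform, handle):
    # The backend only ever sees handles; resolution is platform-specific.
    return platform.resolve(handle)

kvm = GuestPhysPlatform()
assert backend_read(kvm, 0x1000) == 0x1000       # handle == gpa

xen = XenPVPlatform()
ref = xen.grant(0x1000)                          # guest grants one page
assert backend_read(xen, ref) == 0x1000
try:
    backend_read(xen, ref + 1)                   # never granted: rejected
except PermissionError:
    pass
```

The point of the sketch is that the ring layout is identical in both cases;
only the meaning of the address field is defined per platform, which is what
abstracting "physical address" into "address handle" in the spec would allow.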

[...]

> It's a fundamental assumption of virtio that the host can access all of
> guest memory.  That's paravirt, not a hack.

Why? What if guests would like to limit access to their memory? I think
that will happen sooner or later. Additionally, I think that your
assumption is not hypervisor agnostic, which limits implementations of the
VIRTIO spec. At least for Xen, your idea will create difficulties and
probably prevent a VIRTIO implementation.

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:32:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:32:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaHj-0006Ri-CZ; Thu, 20 Feb 2014 20:31:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGaHh-0006RS-GT
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:31:53 +0000
Received: from [85.158.137.68:18869] by server-6.bemta-3.messagelabs.com id
	02/32-09180-83666035; Thu, 20 Feb 2014 20:31:52 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392928311!1983478!1
X-Originating-IP: [209.85.217.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21470 invoked from network); 20 Feb 2014 20:31:52 -0000
Received: from mail-lb0-f170.google.com (HELO mail-lb0-f170.google.com)
	(209.85.217.170)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 20:31:52 -0000
Received: by mail-lb0-f170.google.com with SMTP id u14so1717170lbd.1
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 12:31:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=Z2tQWh2LY0YhmgNEB3z/4cNeAIHHYPjsFr0O7wsbNgs=;
	b=WWWsQ0Que+NHGIr5esQ7ouEymh5A7q9qip9hndbKxwrO0q90A3/16HJxcFv55WEScC
	g/DyWggej0bbvGhH3e9yZZ0Jbrb7ljCOg777E9iEZBF+VjrBvgsB6TandWZscFSlhA0d
	9Oj+OCv1l5Md+cT1y26CTQifcsZZ7hlzXNtNzMTjKSRYVwyksyWxXeSj1o/UgLRSuUy+
	cqdks10M4t/zatiBovjzmuy0uZepGImLiPoIgNMYPQZlQ/Q2OkyoNeqCpycFOwaJhewH
	vaO7kFOA7kkWdLd6aym8pKSV8w0Xy+Moc62ipF3BXKVP0sUSJ+JyUsDBnPqaAHYfxRlN
	2EnQ==
X-Received: by 10.152.19.66 with SMTP id c2mr2298301lae.54.1392928310988; Thu,
	20 Feb 2014 12:31:50 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Thu, 20 Feb 2014 12:31:30 -0800 (PST)
In-Reply-To: <1392857777.22693.14.camel@dcbw.local>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<1392857777.22693.14.camel@dcbw.local>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Thu, 20 Feb 2014 12:31:30 -0800
X-Google-Sender-Auth: nIToQvusOlMqLMEI9lWaYiQYOBs
Message-ID: <CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
To: Dan Williams <dcbw@redhat.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 4:56 PM, Dan Williams <dcbw@redhat.com> wrote:
> Note that there isn't yet a disable_ipv4 knob though, I was
> perhaps-too-subtly trying to get you to send a patch for it, since I can
> use it too :)

Sure, can you describe the use case a little better? I could use
that for the commit log. My only current use case was the xen-netback
case, but Zoltan has noted a few cases where an IPv4 or IPv6 address
*could* be used on the backend interfaces (which I'll still poke at, as
it's unclear to me why they have 'em).

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:37:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:37:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaN7-0006ew-6N; Thu, 20 Feb 2014 20:37:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1WGaN5-0006er-QJ
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:37:28 +0000
Received: from [193.109.254.147:31475] by server-6.bemta-14.messagelabs.com id
	45/26-03396-68766035; Thu, 20 Feb 2014 20:37:26 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392928644!1788531!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20907 invoked from network); 20 Feb 2014 20:37:26 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 20 Feb 2014 20:37:26 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1KKbD4F023335
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 20 Feb 2014 20:37:14 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1KKbBUF027008
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 20 Feb 2014 20:37:12 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1KKbBPK028367; Thu, 20 Feb 2014 20:37:11 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 20 Feb 2014 12:37:10 -0800
Date: Thu, 20 Feb 2014 21:37:04 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Rusty Russell <rusty@au1.ibm.com>
Message-ID: <20140220203704.GG3441@olila.local.net-space.pl>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <8761oab4y7.fsf@rustcorp.com.au>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
> Ian Campbell <Ian.Campbell@citrix.com> writes:
> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
> >> For platforms using EPT, I don't think you want anything but guest
> >> addresses, do you?
> >
> > No, the arguments for preventing unfettered access by backends to
> > frontend RAM apply to EPT as well.
>
> I can see how you'd parse my sentence that way, I think, but the two
> are orthogonal.
>
> AFAICT your grant-table access restrictions are page granularity, though
> you don't use page-aligned data (eg. in xen-netfront).  This level of
> access control is possible using the virtio ring too, but no one has
> implemented such a thing AFAIK.

Could you briefly describe how it should be done? The DMA API is an option, but
if there is a simpler mechanism available in VIRTIO itself, we will be
happy to use it in Xen.
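
Rusty's point about page granularity is easy to picture with a small model
(Python, illustrative only — the helper name is made up for this sketch):
granting access to a sub-page buffer still exposes every 4 KiB page the buffer
touches, so a backend can also see the neighbouring, ungranted data that
happens to share those pages — which is why unaligned data (as in
xen-netfront) weakens page-granularity access control.

```python
PAGE_SIZE = 4096

def pages_spanned(addr, length):
    """Return the page frame numbers a buffer [addr, addr+length) touches.

    Whatever the buffer's size, access control at page granularity must
    expose each of these pages in full to the backend.
    """
    first = addr // PAGE_SIZE
    last = (addr + length - 1) // PAGE_SIZE
    return list(range(first, last + 1))

# A 100-byte buffer in the middle of a page: the guest must grant the
# whole page, so the backend can also see the other 3996 bytes on it.
buf_addr, buf_len = 0x12345, 100
assert pages_spanned(buf_addr, buf_len) == [0x12]

# An unaligned buffer straddling a page boundary needs two full pages
# granted, even though only 20 bytes are actually shared.
assert pages_spanned(PAGE_SIZE - 10, 20) == [0, 1]
```

The same arithmetic applies whether the mechanism is a Xen grant table or a
hypothetical per-descriptor restriction in the virtio ring: the enforcement
unit is the page, not the buffer.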

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaPR-0006vD-OI; Thu, 20 Feb 2014 20:39:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGaPQ-0006v6-0S
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:39:52 +0000
Received: from [193.109.254.147:53208] by server-2.bemta-14.messagelabs.com id
	74/DC-01236-71866035; Thu, 20 Feb 2014 20:39:51 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392928790!5793665!1
X-Originating-IP: [209.85.217.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31867 invoked from network); 20 Feb 2014 20:39:50 -0000
Received: from mail-lb0-f174.google.com (HELO mail-lb0-f174.google.com)
	(209.85.217.174)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 20:39:50 -0000
Received: by mail-lb0-f174.google.com with SMTP id l4so1725091lbv.33
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 12:39:49 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=R3cMzod5EPOgEBcq/UNFrIw7nVHe2m7JY620L0pUFZM=;
	b=hzxFM/j9EG9tN6Po3GW8IVo519yFxE2TOtsPOgt52mzIjaagLG40fnPbqmHXhNFqTK
	3cAOZoYd7ZH+7LDZXqXA4I+Trp3IrOoomwtEseSZSdfnd/to+plSZxUHVrVQElBeKKDM
	H9Qy6bECqAczq5PRNPILvYVJAFHfY8O2PySXlvXrA5rb+TWQCIPSogZQ6NaUQzAf0/nI
	/Wx2BXXMPL7qZXD65Z1zurEkzvANj9Gjl0Atm8wm1/Ag2lSJqvX6OYfjWrC4ZkPNRKab
	NI5338XN5t9nMwAguqYyp9hSmfN58Tc+TuZJK1cwInBmlHHB/ob/+zZAkyJIOi/d5zAg
	TY9A==
X-Received: by 10.112.26.79 with SMTP id j15mr2097933lbg.73.1392928789519;
	Thu, 20 Feb 2014 12:39:49 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Thu, 20 Feb 2014 12:39:29 -0800 (PST)
In-Reply-To: <53050244.1020106@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<53050244.1020106@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Thu, 20 Feb 2014 12:39:29 -0800
X-Google-Sender-Auth: 3V2rZCREbKS3Jo1LIMjRuVp2TMw
Message-ID: <CAB=NE6VswYVF1BOM+vwEh3MaX7sh0gvLqz0U0JoEFDS-EzO9Pg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Dan Williams <dcbw@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 11:13 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 19/02/14 17:20, Luis R. Rodriguez wrote:
>>>> On 19/02/14 17:20, Luis R. Rodriguez also wrote:
>>>> Zoltan has noted though some use cases of IPv4 or IPv6 addresses on
>>>> backends though <...>
>>
>> As discussed in the other threads, though, there *are* some use cases
>> for assigning IPv4 or IPv6 addresses to the backend interfaces:
>> routing them (although it's unclear to me if iptables can be used
>> instead, Zoltan?).
>
> Not with OVS, it steals the packet before netfilter hooks.

Got it, thanks! Can't the route on the host be added using a front-end
IP address instead, though? I just tried that on a Xen system and it
seems to work. Perhaps I'm not understanding the exact topology in the
routing case. In my case the backend has no IPv4 or IPv6 addresses,
while the guest has IPv4 and IPv6 addresses and even a TUN for VPN,
and I can create routes on the host to the front end by using the
front-end target IP instead of the backend device name.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Got it, thanks! Can't the route be added on the host using a front-end
IP address instead, though? I just tried that on a Xen system and it
seems to work. Perhaps I'm not understanding the exact topology in the
routing case. In my setup the backend has no IPv4 or IPv6 addresses,
the guest has IPv4 and IPv6 addresses and even a TUN for VPN, and I can
create routes on the host to the front end by using the front-end
target IP rather than the backend device name.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 20:48:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 20:48:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGaXu-000777-Pk; Thu, 20 Feb 2014 20:48:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGaXs-00076n-LX
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 20:48:36 +0000
Received: from [85.158.137.68:30432] by server-1.bemta-3.messagelabs.com id
	83/58-17293-32A66035; Thu, 20 Feb 2014 20:48:35 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392929314!1671101!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31731 invoked from network); 20 Feb 2014 20:48:35 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 20:48:35 -0000
Received: by mail-ee0-f43.google.com with SMTP id e51so1099709eek.30
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 12:48:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=61IFGl1JrcK6PNnLPmArQFpwmQU+51P1HoeaCrObcLg=;
	b=SAyNv40laIgj0Owhfx/PxtY4RlDrYx7CjFNcburYl0PuAphZqsZjEIcFBVksCOOfKu
	lN1Y4WOEKGSF6z4Lueg4hwZIcqSJALMQ5zCehT6HESOprx9+U3PX+4/L7j9qWCj6aEZ6
	2vmOvzMV/fv5UG7APnovn2mQXSyeGspFP6aDP5Namuft/IapEatgRCuboBunUJycOOZl
	8Cn7EKvHpCU+PTqnQLFp7raj9Lt2Gu5CzLNrULoinuWrapMXepd48Kom4k85+4Yp6EFE
	0ehrO4MZRqp72QEu+L1xwP2BScfjnsmor41a7VxsnkrMPCW1ZR4kd5VtxfP7YSwWacY1
	mKQg==
X-Gm-Message-State: ALoCoQmHgqmvB3H+CCV0gB4AjhY+GLhOHH7Sc1Ev4DDVBrYZXoNCJeMuzjpIVjo1nu/9ocUczooZ
X-Received: by 10.14.216.3 with SMTP id f3mr4193815eep.66.1392929314236;
	Thu, 20 Feb 2014 12:48:34 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	n41sm18103702eeg.16.2014.02.20.12.48.32 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 12:48:33 -0800 (PST)
Message-ID: <53066A1F.8020203@linaro.org>
Date: Thu, 20 Feb 2014 20:48:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
	<1392820736.29739.86.camel@kazak.uk.xensource.com>
	<5304C4E1.2070901@linaro.org>
	<5304D6BC020000780011DC4F@nat28.tlf.novell.com>
In-Reply-To: <5304D6BC020000780011DC4F@nat28.tlf.novell.com>
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
	contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan,

On 02/19/2014 03:07 PM, Jan Beulich wrote:
>>>> On 19.02.14 at 15:51, Julien Grall <julien.grall@linaro.org> wrote:
>> Adding Keir and Jan.
>>
>> On 02/19/2014 02:38 PM, Ian Campbell wrote:
>>> On Wed, 2014-02-19 at 14:35 +0000, Julien Grall wrote:
>>>
>>>>>> -static void gic_irq_enable(struct irq_desc *desc)
>>>>>> +static unsigned int gic_irq_startup(struct irq_desc *desc)
>>>>>
>>>>> unsigned? What are the error codes here going to be?
>>>>
>>>> This is the return type requested by hw_interrupt_type.startup.
>>>>
>>>> It seems that the return is never checked (even in x86 code). Maybe we
>>>> should change the prototype of hw_interrupt_type.startup.
>>>
>>> Worth investigating. I wonder if someone thought this might return the
>>> resulting interrupt number (those are normally unsigned int I think) or
>>> if it actually did used to etc.
>>
>> I think it was copied from Linux which also have unsigned int. I gave a
>> quick look to the code and this callback is only used in 2 places which
>> always return 0.
>>
>> Surprisingly, the wrapper irq_startup (kernel/irq/manage.c) is returning
>> an int...
>>
>> I can create a patch to return void instead of unsigned if everyone is
>> happy with this solution.
> 
> I'd be fine with such a change; I'd like to ask though that if you
> do this, you at the same time do the resulting possible cleanup:
> As an example, xen/arch/x86/msi.c:startup_msi_irq() becomes
> unnecessary then. It will in fact be interesting to see how many
> distinct startup routines actually remain.

Before the cleanup there were 8 distinct startup routines for x86. Now
there are only 2:
  - drivers/passthrough/amd/iommu_init.c: iommu_maskable_msi_startup
  - arch/x86/ioapic.c: startup_edge_ioapic_irq

For the latter, I'm a bit surprised that the function can return 1,
but the result is never used.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 21:30:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 21:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGbBa-0008Bx-TI; Thu, 20 Feb 2014 21:29:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGbBZ-0008Bs-Or
	for xen-devel@lists.xenproject.org; Thu, 20 Feb 2014 21:29:37 +0000
Received: from [193.109.254.147:26149] by server-7.bemta-14.messagelabs.com id
	46/88-23424-0C376035; Thu, 20 Feb 2014 21:29:36 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392931776!5817745!1
X-Originating-IP: [209.85.215.182]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28987 invoked from network); 20 Feb 2014 21:29:36 -0000
Received: from mail-ea0-f182.google.com (HELO mail-ea0-f182.google.com)
	(209.85.215.182)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 21:29:36 -0000
Received: by mail-ea0-f182.google.com with SMTP id r15so1171979ead.41
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 13:29:35 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=9K9UBKntrvbQtmi6wtVJxfZi86pgHUjG6mB2GsyS/Gk=;
	b=dhCG++uKHewvvRsZrDma/E4oXgUJS/UiqTW5jSWOR4rt0nSmi8MrorOBCrAM6zo8hD
	iK5TSMDpqMN+kR6bO0JNPlpiNbGEn5yQIsUBdTB6HSPoMZF1ZpbCzwxv0yhjccPnXaSV
	FlAWDW+Nr1udEBi22WQUnta9z21NFi7f8f8gsXMtVHe2npzMZT0WloDo5k2vml1Wzg5Y
	XjvC0QgW1Gsr3owGW5yVrOlJBVxiDgk7g4ElSdNvYUtipHVlkwxtQLRLdBsWSiHGr9pU
	rdp9sCyToDNDo3anHdAjitP7YzIldW8TVjEhs2owvSBLaaK10lB18Ek6CdOtHpbXb6rJ
	f6bQ==
X-Gm-Message-State: ALoCoQmDxIjE879HlIAeNiIv1qrpcIYS1I6jOpHdFw13O3q0NwDmhur7ru92yKdaIo1scLiSWzV6
X-Received: by 10.15.41.14 with SMTP id r14mr4372545eev.78.1392931775549;
	Thu, 20 Feb 2014 13:29:35 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m8sm8510990eef.14.2014.02.20.13.29.34
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 20 Feb 2014 13:29:34 -0800 (PST)
Message-ID: <530673BD.9010301@linaro.org>
Date: Thu, 20 Feb 2014 21:29:33 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
In-Reply-To: <1392810905.29739.19.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, Keir Fraser <keir@xen.org>,
	stefano.stabellini@citrix.com, patches@linaro.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

On 02/19/2014 11:55 AM, Ian Campbell wrote:
> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>> On ARM, it may happen (eg ARM SMMU) to setup multiple handler for the same
>> interrupt.
> 
> Mention here that you are therefore creating a linked list of actions
> for each interrupt.
> 
> If you use xen/list.h for this then you get a load of helpers and
> iterators which would save you open coding them.

After thinking about it, using xen/list.h won't really remove much open
coding, except for the "action_ptr" walk in release_dt_irq.

release_dt_irq shouldn't be called often on an IRQ with multiple
actions. Therefore, having both prev and next pointers is a waste of
space.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 23:11:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 23:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGclX-0001vd-5p; Thu, 20 Feb 2014 23:10:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGclV-0001vY-G7
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 23:10:49 +0000
Received: from [85.158.139.211:27577] by server-16.bemta-5.messagelabs.com id
	7D/F9-05060-87B86035; Thu, 20 Feb 2014 23:10:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392937846!5244848!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26914 invoked from network); 20 Feb 2014 23:10:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 23:10:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,514,1389744000"; d="scan'208";a="104517932"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 23:10:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 18:10:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGcl8-0003xZ-Kr;
	Thu, 20 Feb 2014 23:10:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGcl8-0002aF-Cg;
	Thu, 20 Feb 2014 23:10:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25153-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 23:10:26 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25153: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25153 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair         7 xen-boot/src_host         fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot           fail blocked in 12557
 test-amd64-i386-rhel6hvm-intel  5 xen-boot                     fail like 12557
 test-amd64-i386-qemuu-rhel6hvm-intel  5 xen-boot               fail like 12557
 test-amd64-amd64-xl-sedf-pin 15 guest-localmigrate/x10    fail REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                e95003c3f9ccbfa7ab9d265e6eb703ee2fa4cfe7
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7059 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2387099 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 20 23:11:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Feb 2014 23:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGclX-0001vd-5p; Thu, 20 Feb 2014 23:10:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGclV-0001vY-G7
	for xen-devel@lists.xensource.com; Thu, 20 Feb 2014 23:10:49 +0000
Received: from [85.158.139.211:27577] by server-16.bemta-5.messagelabs.com id
	7D/F9-05060-87B86035; Thu, 20 Feb 2014 23:10:48 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392937846!5244848!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26914 invoked from network); 20 Feb 2014 23:10:47 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	20 Feb 2014 23:10:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,514,1389744000"; d="scan'208";a="104517932"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 20 Feb 2014 23:10:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 18:10:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGcl8-0003xZ-Kr;
	Thu, 20 Feb 2014 23:10:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGcl8-0002aF-Cg;
	Thu, 20 Feb 2014 23:10:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25153-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 20 Feb 2014 23:10:26 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25153: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25153 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair         7 xen-boot/src_host         fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd  5 xen-boot           fail blocked in 12557
 test-amd64-i386-rhel6hvm-intel  5 xen-boot                     fail like 12557
 test-amd64-i386-qemuu-rhel6hvm-intel  5 xen-boot               fail like 12557
 test-amd64-amd64-xl-sedf-pin 15 guest-localmigrate/x10    fail REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                e95003c3f9ccbfa7ab9d265e6eb703ee2fa4cfe7
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7059 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2387099 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 00:26:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 00:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGdw0-0003kc-Cd; Fri, 21 Feb 2014 00:25:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGdvK-0003kX-7n
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 00:25:34 +0000
Received: from [193.109.254.147:56994] by server-6.bemta-14.messagelabs.com id
	88/A4-03396-DDC96035; Fri, 21 Feb 2014 00:25:01 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392942299!5785847!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6663 invoked from network); 21 Feb 2014 00:25:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 00:25:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,515,1389744000"; d="scan'208";a="102841260"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 00:24:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 19:24:58 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGdvF-0004LU-S1;
	Fri, 21 Feb 2014 00:24:57 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGdvF-0004Gy-K3;
	Fri, 21 Feb 2014 00:24:57 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25155-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 00:24:57 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [qemu-upstream-unstable test] 25155: tolerable FAIL -
	PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25155 qemu-upstream-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25155/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemuu-winxpsp3 17 leak-check/check        fail never pass

version targeted for testing:
 qemuu                65fc9b78ba3d868a26952db0d8e51cecf01d47b4
baseline version:
 qemuu                027c412ff71ad8bff6e335cc7932857f4ea74391

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xend-qemuu-winxpsp3                          fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=qemu-upstream-unstable
+ revision=65fc9b78ba3d868a26952db0d8e51cecf01d47b4
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push qemu-upstream-unstable 65fc9b78ba3d868a26952db0d8e51cecf01d47b4
+ branch=qemu-upstream-unstable
+ revision=65fc9b78ba3d868a26952db0d8e51cecf01d47b4
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=qemuu
+ xenbranch=xen-unstable
+ '[' xqemuu = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.qemu-upstream-unstable
++ : daily-cron.qemu-upstream-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.qemu-upstream-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree qemu-upstream-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/qemu-upstream-unstable
+ git push osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git 65fc9b78ba3d868a26952db0d8e51cecf01d47b4:master
Counting objects: 1   
Counting objects: 13, done.
Compressing objects:  14% (1/7)   
Compressing objects:  28% (2/7)   
Compressing objects:  42% (3/7)   
Compressing objects:  57% (4/7)   
Compressing objects:  71% (5/7)   
Compressing objects:  85% (6/7)   
Compressing objects: 100% (7/7)   
Compressing objects: 100% (7/7), done.
Writing objects:  12% (1/8)   
Writing objects:  25% (2/8)   
Writing objects:  37% (3/8)   
Writing objects:  50% (4/8)   
Writing objects:  62% (5/8)   
Writing objects:  75% (6/8)   
Writing objects:  87% (7/8)   
Writing objects: 100% (8/8)   
Writing objects: 100% (8/8), 2.23 KiB, done.
Total 8 (delta 5), reused 4 (delta 1)
To osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
   027c412..65fc9b7  65fc9b78ba3d868a26952db0d8e51cecf01d47b4 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 00:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 00:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
From xen-devel-bounces@lists.xen.org Fri Feb 21 00:26:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 00:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGdwn-0003lZ-5W; Fri, 21 Feb 2014 00:26:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGdwg-0003l5-Bl
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 00:26:29 +0000
Received: from [193.109.254.147:64018] by server-5.bemta-14.messagelabs.com id
	4D/45-16688-13D96035; Fri, 21 Feb 2014 00:26:25 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1392942383!521927!1
X-Originating-IP: [209.85.220.181]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15312 invoked from network); 21 Feb 2014 00:26:24 -0000
Received: from mail-vc0-f181.google.com (HELO mail-vc0-f181.google.com)
	(209.85.220.181)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 00:26:24 -0000
Received: by mail-vc0-f181.google.com with SMTP id ie18so2619176vcb.12
	for <xen-devel@lists.xen.org>; Thu, 20 Feb 2014 16:26:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=U3UXHIg11lshGi3W0VGSYZyhLR5lVduQ9Rh8gESmeoA=;
	b=fX9wXL5LEFgE5F4gqspPLAyVWCLCEhah2i3G7k0lADrzmEL2ZY4c3Xem9kGQEe2qsg
	kux6ZqLbbESZUFewbkpXODuvQ8IgNFQv8IZQh+wh7NYeFthONk2S5Mc4i3tDTB/nPlD8
	EprvtSr2OtrUxV1Mhorf85jYpAITM6bh++W3DollRcLG+yezzRd4xhQYzdnC2vhdC8zx
	A5rQxicpJCnBIThasbP2nKHjzlJBgQLkg48uW2q34xI1kQn9quByIWrZb2qyn5v8tuBn
	QNRbELkUGT/zFvLfUMPE3DKqeL9S6g4Xys3upn3HRHMSBEY7mS3TjkM3lSNqVvJ++iMY
	XT5w==
MIME-Version: 1.0
X-Received: by 10.221.26.129 with SMTP id rm1mr2891823vcb.80.1392942383302;
	Thu, 20 Feb 2014 16:26:23 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Thu, 20 Feb 2014 16:26:23 -0800 (PST)
In-Reply-To: <1392887748.22494.17.camel@kazak.uk.xensource.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<1392887748.22494.17.camel@kazak.uk.xensource.com>
Date: Thu, 20 Feb 2014 16:26:23 -0800
Message-ID: <CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4229897998886937889=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4229897998886937889==
Content-Type: multipart/alternative; boundary=001a1133984af30b8004f2dfab9c

--001a1133984af30b8004f2dfab9c
Content-Type: text/plain; charset=ISO-8859-1

Hi Ian --

So enabling 'CONFIG_APM=y' in WR kernel for HVM guest should be enough? Let
me try that out.

Thanks,
/Saurabh

On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:
> > config or driver do I need to enable in WR HVM guest such that it
> > accepts 'xl/xm trigger <vm> power'?
>
> Support for ACPI power events.
>
> Ian.
>
>
>

--001a1133984af30b8004f2dfab9c
Content-Type: text/html; charset=ISO-8859-1

<div dir="ltr">Hi Ian --<div><br></div><div class="gmail_extra">So enabling &#39;CONFIG_APM=y&#39; in WR kernel for HVM guest should be enough? Let me try that out.</div><div class="gmail_extra"><br></div><div class="gmail_extra">
Thanks,</div><div class="gmail_extra">/Saurabh<br><br><div class="gmail_quote">On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell <span dir="ltr">&lt;<a href="mailto:Ian.Campbell@citrix.com" target="_blank">Ian.Campbell@citrix.com</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="im">On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:<br>
&gt; config or driver do I need to enable in WR HVM guest such that it<br>
&gt; accepts &#39;xl/xm trigger &lt;vm&gt; power&#39;?<br>
<br>
</div>Support for ACPI power events.<br>
<span class="HOEnZb"><font color="#888888"><br>
Ian.<br>
<br>
<br>
</font></span></blockquote></div><br></div></div>

--001a1133984af30b8004f2dfab9c--


--===============4229897998886937889==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4229897998886937889==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 00:47:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 00:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGeGY-0004Ks-2s; Fri, 21 Feb 2014 00:46:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGeGW-0004Kn-Th
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 00:46:57 +0000
Received: from [85.158.137.68:25274] by server-12.bemta-3.messagelabs.com id
	4F/4A-01674-002A6035; Fri, 21 Feb 2014 00:46:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392943613!3233354!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25137 invoked from network); 21 Feb 2014 00:46:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 00:46:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,515,1389744000"; d="scan'208";a="102845253"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 00:46:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 19:46:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGeGR-0004SD-Iq;
	Fri, 21 Feb 2014 00:46:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGeGP-0001Mt-UY;
	Fri, 21 Feb 2014 00:46:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25157-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 00:46:50 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 25157: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25157 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25157/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl           6 leak-check/basis(6)       fail REGR. vs. 24859
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24859
 test-amd64-amd64-pair    10 leak-check/basis/dst_host(10) fail REGR. vs. 24859
 test-amd64-amd64-pair      9 leak-check/basis/src_host(9) fail REGR. vs. 24859
 test-amd64-amd64-xl-winxpsp3  6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-win7-amd64  6 leak-check/basis(6)     fail REGR. vs. 24859
 test-amd64-amd64-xl-qemut-win7-amd64 6 leak-check/basis(6) fail REGR. vs. 24859
 test-amd64-amd64-xl-qemut-winxpsp3  6 leak-check/basis(6) fail REGR. vs. 24859

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf      6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-pcipt-intel  6 leak-check/basis(6)    fail REGR. vs. 24859
 test-amd64-amd64-xl-sedf-pin  6 leak-check/basis(6)       fail REGR. vs. 24859

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-freebsd10-i386 18 leak-check/check       fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 18 leak-check/check      fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass

version targeted for testing:
 xen                  934858f00267a92bc2a2995a0c634d02d2c60fbd
baseline version:
 xen                  e21b2fa19946806ea27873a8808bc1ace48b7c69

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 934858f00267a92bc2a2995a0c634d02d2c60fbd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 20 08:43:11 2014 +0100

    x86/AMD: work around erratum 793 for 32-bit
    
    The original change went into a 64-bit only code section, thus leaving
    the issue unfixed on 32-bit. Re-order code to address this.
    
    This is part of CVE-2013-6885 / XSA-82.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 01:19:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 01:19:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGem4-0000Zs-OR; Fri, 21 Feb 2014 01:19:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGelt-0000Zn-Te
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 01:19:27 +0000
Received: from [85.158.137.68:59867] by server-1.bemta-3.messagelabs.com id
	5C/BE-17293-899A6035; Fri, 21 Feb 2014 01:19:20 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1392945558!3236380!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2550 invoked from network); 21 Feb 2014 01:19:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 01:19:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,515,1389744000"; d="scan'208";a="104540922"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 01:19:17 +0000
Received: from [10.68.14.48] (10.68.14.48) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 20 Feb 2014 20:19:17 -0500
Message-ID: <5306A993.5010504@citrix.com>
Date: Fri, 21 Feb 2014 01:19:15 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>			
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>		
	<1392743214.23084.38.camel@kazak.uk.xensource.com>		
	<5303C44D.4070500@citrix.com>	
	<1392804319.23084.109.camel@kazak.uk.xensource.com>	
	<53050BF5.1060009@citrix.com>
	<1392888808.22494.21.camel@kazak.uk.xensource.com>
In-Reply-To: <1392888808.22494.21.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.68.14.48]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
	grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 09:33, Ian Campbell wrote:
> On Wed, 2014-02-19 at 19:54 +0000, Zoltan Kiss wrote:
>> On 19/02/14 10:05, Ian Campbell wrote:
>>> On Tue, 2014-02-18 at 20:36 +0000, Zoltan Kiss wrote:
>>>> On 18/02/14 17:06, Ian Campbell wrote:
>>>>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>>>>> This patch contains the new definitions necessary for grant mapping.
>>>>> Is this just adding a bunch of (currently) unused functions? That's a
>>>>> slightly odd way to structure a series. They don't seem to be "generic
>>>>> helpers" or anything so it would be more normal to introduce these as
>>>>> they get used -- it's a bit hard to review them out of context.
>>>> I've created two patches because they are quite huge even now,
>>>> separately. Together they would be a ~500 line change. That was the best
>>>> I could figure out keeping in mind that bisect should work. But as I
>>>> wrote in the first email, I welcome other suggestions. If you and Wei
>>>> prefer this two patch in one big one, I merge them in the next version.
>>> I suppose it is hard to split a change like this up in a sensible way,
>>> but it is rather hard to review something which is split in two parts
>>> sensibly.
>>>
>>> If the combined patch too large to fit on the lists?
>> Well, it's ca. 30 kb, ~500 lines changed. I guess it's possible. It's up
>> to you and Wei, if you would like them to be merged, I can do that.
> 30kb doesn't sound too bad to me.
>
> Patches #1 and #2 are, respectively:
>
>   drivers/net/xen-netback/common.h    |   30 ++++++-
>   drivers/net/xen-netback/interface.c |    1 +
>   drivers/net/xen-netback/netback.c   |  161 +++++++++++++++++++++++++++++++++++
>   3 files changed, 191 insertions(+), 1 deletion(-)
>
>   drivers/net/xen-netback/interface.c |   63 ++++++++-
>   drivers/net/xen-netback/netback.c   |  254 ++++++++++++++---------------------
>   2 files changed, 160 insertions(+), 157 deletions(-)
>
> I don't think combining those would be terrible, although I'm willing to
> be proven wrong ;-)
Ok, if no one comes up with a better argument before I send in the next
version, I'll merge the two patches.
>
>>>>>> +		vif->dealloc_prod++;
>>>>> What happens if the dealloc ring becomes full, will this wrap and cause
>>>>> havoc?
>>>> Nope, if the dealloc ring is full, the value of the last increment won't
>>>> be used to index the dealloc ring again until some space made available.
>>> I don't follow -- what makes this the case?
>> The dealloc ring has the same size as the pending ring, and you can only
>> add slots to it which are already on the pending ring (the pending_idx
>> comes from ubuf->desc), as you are essentially free up slots here on the
>> pending ring.
>> So if the dealloc ring becomes full, vif->dealloc_prod -
>> vif->dealloc_cons will be 256, which would be bad. But the while loop
>> should exit here, as we shouldn't have any more pending slots. And if we
>> dealloc and create free pending slots in dealloc_action, dealloc_cons
>> will also advance.
> OK, so this is limited by the size of the pending array, makes sense,
> assuming that array is itself correctly guarded...
Well, the pending ring works the same as before; the only difference is
that now the slots are released from the dealloc thread as well, not just
from the NAPI instance. That's why we need response_lock. I'll add a
comment on that.
>>>>>> +		}
>>>>>> +
>>>>>> +	} while (dp != vif->dealloc_prod);
>>>>>> +
>>>>>> +	vif->dealloc_cons = dc;
>>>>> No barrier here?
>>>> dealloc_cons only used in the dealloc_thread. dealloc_prod is used by
>>>> the callback and the thread as well, that's why we need mb() in
>>>> previous. Btw. this function comes from classic's net_tx_action_dealloc
>>> Is this code close enough to that code architecturally that you can
>>> infer correctness due to that though?
>> Nope, I've just mentioned it because knowing that old code can help to
>> understand this new, as their logic is very similar some places, like here.
>>
>>> So long as you have considered the barrier semantics in the context of
>>> the current code and you think it is correct to not have one here then
>>> I'm ok. But if you have just assumed it is OK because some older code
>>> didn't have it then I'll have to ask you to consider it again...
>> Nope, as I mentioned above, dealloc_cons only accessed in that funcion,
>> from the same thread. Dealloc_prod is written in the callback and read
>> out here, that's why we need the barrier there.
> OK.
>
> Although this may no longer be true if you added some BUG_ONs as
> discussed above?
Yep, that BUG_ON might see a smaller value of dealloc_cons, but that
should be OK. We will release those slots after grant unmapping, and they
shouldn't be filled up again until then.
>
>>>>>> +				netdev_err(vif->dev,
>>>>>> +					   " host_addr: %llx handle: %x status: %d\n",
>>>>>> +					   gop[i].host_addr,
>>>>>> +					   gop[i].handle,
>>>>>> +					   gop[i].status);
>>>>>> +			}
>>>>>> +			BUG();
>>>>>> +		}
>>>>>> +	}
>>>>>> +
>>>>>> +	for (i = 0; i < gop - vif->tx_unmap_ops; ++i)
>>>>>> +		xenvif_idx_release(vif, pending_idx_release[i],
>>>>>> +				   XEN_NETIF_RSP_OKAY);
>>>>>> +}
>>>>>> +
>>>>>> +
>>>>>>     /* Called after netfront has transmitted */
>>>>>>     int xenvif_tx_action(struct xenvif *vif, int budget)
>>>>>>     {
>>>>>> @@ -1678,6 +1793,25 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>>>>>>     	vif->mmap_pages[pending_idx] = NULL;
>>>>>>     }
>>>>>>
>>>>>> +void xenvif_idx_unmap(struct xenvif *vif, u16 pending_idx)
>>>>> This is a single shot version of the batched xenvif_tx_dealloc_action
>>>>> version? Why not just enqueue the idx to be unmapped later?
>>>> This is called only from the NAPI instance. Using the dealloc ring
>>>> require synchronization with the callback which can increase lock
>>>> contention. On the other hand, if the guest sends small packets
>>>> (<PAGE_SIZE), the TLB flushing can cause performance penalty.
>>> Right. When/How often is this called from the NAPI instance?
>> When grant mapping error detected in xenvif_tx_check_gop, and if a
>> packet smaller than PKT_PROT_LEN is sent. The latter would be removed if
>> we will grant copy such packets entirely.
>>
>>> Is the locking contention from this case so severe that it out weighs
>>> the benefits of batching the unmaps? That would surprise me. After all
>>> the locking contention is there for the zerocopy_callback case too
>>>
>>>>    The above
>>>> mentioned upcoming patch which gntcopy the header can prevent that
>>> So this is only called when doing the pull-up to the linear area?
>> Yes, as mentioned above.
> I'm not sure why you don't just enqueue the dealloc with the other
> normal ones though.
Well, I started off with this approach, as it maintains similarity with
the grant copy way of doing this. Historically we release the slots in
xenvif_tx_check_gop straight away if there is a mapping error in any of
them. I don't know if the guest expects that slots for the same packet
come back at the same time. Then I just reused the same function for
<PKT_PROT_LEN packets instead of writing another one. That will go
away soon anyway.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 01:23:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 01:23:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGepQ-0000hw-CS; Fri, 21 Feb 2014 01:23:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WGepO-0000hn-Db
	for Xen-devel@lists.xensource.com; Fri, 21 Feb 2014 01:22:58 +0000
Received: from [85.158.137.68:12568] by server-16.bemta-3.messagelabs.com id
	84/77-29917-17AA6035; Fri, 21 Feb 2014 01:22:57 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1392945774!1950635!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3957 invoked from network); 21 Feb 2014 01:22:56 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 01:22:56 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1L1MaBV016180
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 01:22:38 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1L1MZX8010034
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 01:22:36 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1L1MZSX005756; Fri, 21 Feb 2014 01:22:35 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 20 Feb 2014 17:22:35 -0800
Date: Thu, 20 Feb 2014 17:22:34 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Julien Grall <julien.grall@linaro.org>
Message-ID: <20140220172234.7b6847ad@mantra.us.oracle.com>
In-Reply-To: <53060806.7040903@linaro.org>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<52FBA5BA.4020301@linaro.org>
	<20140219182227.6a37a33c@mantra.us.oracle.com>
	<53060806.7040903@linaro.org>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	george.dunlap@eu.citrix.com, tim@xen.org, keir.xen@gmail.com,
	Jan Beulich <jbeulich@suse.com>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014 13:49:58 +0000
Julien Grall <julien.grall@linaro.org> wrote:

> On 02/20/2014 02:22 AM, Mukesh Rathor wrote:
> > On Wed, 12 Feb 2014 16:47:54 +0000
> > Julien Grall <julien.grall@linaro.org> wrote:
> > 
> >> Hi Mukesh,
> >>
> >> On 12/17/2013 02:38 AM, Mukesh Rathor wrote:
> >>> In preparation for the next patch, we update xsm_add_to_physmap to
> >>> allow for checking of foreign domain. Thus, the current domain
> >>> must have the right to update the mappings of target domain with
> >>> pages from foreign domain.
> >>>
> >>> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> >>
> >> While I was playing with XSM on ARM, I have noticed that Daniel De
> >> Graff has added xsm_map_gfmn_foreign few months ago (see commit
> >> 0b201e6).
> >>
> >> Would it be suitable to use this XSM instead of extending
> >> xsm_add_to_physmap?
> >>
> >> Regards,
> >>
> > 
> > Not the same thing. add to physmap could be adding to a domain's
> > physmap pages from a foreign domain.
> 
> Let assume you don't modify xsm_add_to_physmap, in this case:
>    - xsm_add_to_physmap checks if the current domain is allowed to
> modify the p2m of a given domain
>    - xsm_map_gfmn_foreign checks if the given domain is allowed to
> have foreign mapping from the foreign domain
> 
> Both XSM are distinct and should be used together. You don't care that

I see, I thought you meant replacing one with the other. I am not a
security expert, so I just followed the suggestions. But looking at the
code, it looks like the above is the way to go, and I can just drop my
xsm_add_to_physmap change patch (which, btw, doesn't check whether the
target has access to foreign mappings, so is probably not correct).
Thanks for noticing.

Mukesh


From xen-devel-bounces@lists.xen.org Fri Feb 21 01:26:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 01:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGesv-0000qv-Kb; Fri, 21 Feb 2014 01:26:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGesu-0000qn-Kp
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 01:26:36 +0000
Received: from [85.158.139.211:10337] by server-16.bemta-5.messagelabs.com id
	AC/E1-05060-B4BA6035; Fri, 21 Feb 2014 01:26:35 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392945967!5264808!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29329 invoked from network); 21 Feb 2014 01:26:35 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-206.messagelabs.com with SMTP;
	21 Feb 2014 01:26:35 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 17:21:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,515,1389772800"; d="scan'208";a="459183966"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 17:26:05 -0800
Received: from fmsmsx116.amr.corp.intel.com (10.18.116.20) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 20 Feb 2014 17:26:05 -0800
Received: from shsmsx103.ccr.corp.intel.com (10.239.110.14) by
	fmsmsx116.amr.corp.intel.com (10.18.116.20) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 20 Feb 2014 17:26:05 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX103.ccr.corp.intel.com ([169.254.4.202]) with mapi id
	14.03.0123.003; Fri, 21 Feb 2014 09:26:02 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>, xen-devel
	<xen-devel@lists.xenproject.org>, "Dong, Eddie" <eddie.dong@intel.com>, 
	"Nakajima, Jun" <jun.nakajima@intel.com>
Thread-Topic: Single step in HVM domU on Intel machine may see wrong DB6
Thread-Index: AQHPLhb/TXz9W/5OvEmwKI7dSE2uP5q+4NwQ
Date: Fri, 21 Feb 2014 01:26:02 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
In-Reply-To: <5305BE9F.2090600@ts.fujitsu.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
	wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Juergen Gross wrote on 2014-02-20:
> Hi,

Hi Juergen,

> 
> I think I've found a bug in debug trap handling in the Xen hypervisor
> in the case of an HVM domU using single stepping:
> 
> Debug registers are restored on a vcpu switch only if db7 has any debug
> events activated or if the debug registers are marked as being used by
> the domU. This leads to problems if the domU uses single stepping and a
> vcpu switch occurs between the single-step trap and the reading of db6 in
> the guest: the db6 contents (the single-step indicator) are lost in this case.
> 
> Jan suggested intercepting the debug trap in the hypervisor and marking
> the debug registers as used by the domU, so that the debug registers are
> saved and restored on a context switch. I used the
> attached patch (applies to Xen 4.2.3) to verify this solution, and it
> worked (without the patch a test was able to reproduce the bug once in
> about 3 hours; with the patch the test ran for more than 12 hours without problems).
> 
> Obviously the patch isn't the final one, as I deactivated the "monitor trap flag"
> feature to avoid any strange dependencies. Jan wanted someone from the
> VMX folks to put together a proper fix to avoid overlooking any corner cases.
> 

Thanks for reporting this issue.
Actually, I don't know the scenario in which you saw this issue. Are you using single-stepping inside the guest, or running gdb to debug the VM remotely?

> 
> Juergen
>


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 02:22:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 02:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGfl7-0002Kn-KA; Fri, 21 Feb 2014 02:22:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dexuan.cui@intel.com>) id 1WGfl5-0002Ki-B7
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 02:22:35 +0000
Received: from [193.109.254.147:47467] by server-5.bemta-14.messagelabs.com id
	FA/8F-16688-A68B6035; Fri, 21 Feb 2014 02:22:34 +0000
X-Env-Sender: dexuan.cui@intel.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392949353!5789862!1
X-Originating-IP: [192.55.52.93]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjkzID0+IDMyNDY2NQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13052 invoked from network); 21 Feb 2014 02:22:33 -0000
Received: from mga11.intel.com (HELO mga11.intel.com) (192.55.52.93)
	by server-16.tower-27.messagelabs.com with SMTP;
	21 Feb 2014 02:22:33 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga102.fm.intel.com with ESMTP; 20 Feb 2014 18:22:32 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,516,1389772800"; d="scan'208";a="478861411"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 20 Feb 2014 18:22:16 -0800
Received: from fmsmsx120.amr.corp.intel.com (10.19.9.29) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 20 Feb 2014 18:22:16 -0800
Received: from shsmsx151.ccr.corp.intel.com (10.239.6.50) by
	fmsmsx120.amr.corp.intel.com (10.19.9.29) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Thu, 20 Feb 2014 18:22:15 -0800
Received: from shsmsx103.ccr.corp.intel.com ([169.254.4.202]) by
	SHSMSX151.ccr.corp.intel.com ([169.254.3.127]) with mapi id
	14.03.0123.003; Fri, 21 Feb 2014 10:22:12 +0800
From: "Cui, Dexuan" <dexuan.cui@intel.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
Thread-Topic: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
	Graphics Passthrough Solution from Intel
Thread-Index: Ac8uCM+UTJUf7TAWTkKmKWKJu37vjgAHsPmAACDtz+A=
Date: Fri, 21 Feb 2014 02:22:11 +0000
Message-ID: <A25F549E4D43CD42B4C02DF47A1913231105A665@SHSMSX103.ccr.corp.intel.com>
References: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
	<20140220183613.GJ3200@reaktio.net>
In-Reply-To: <20140220183613.GJ3200@reaktio.net>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Tian, Kevin" <kevin.tian@intel.com>, "White,
	Michael L" <michael.l.white@intel.com>, "Cowperthwaite,
	David J" <david.j.cowperthwaite@intel.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"Li, Susie" <susie.li@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Haron,
	Sandra" <sandra.haron@intel.com>
Subject: Re: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
 Graphics Passthrough Solution from Intel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pasi Kärkkäinen wrote on 2014-02-21:
> On Thu, Feb 20, 2014 at 07:59:04AM +0000, Cui, Dexuan wrote:
>> Hi all,
>> We're pleased to announce an update to XenGT since its first disclosure
>> last September.
>

> Are you going to work on upstreaming this stuff? Xen 4.4 will be
> released soon(ish), so the Xen 4.5 development window starts in the near
> future, and hopefully this stuff can be upstreamed then.
Hi Pasi,
We do plan to upstream it, but we cannot give a timeframe so far.

> Also: Haswell ("4th generation Intel Core CPU") is listed as a requirement
> in the Setup Guide PDF.
> Will there be support for SNB/IVB GPUs as well?
There is no plan for SNB/IVB.

Thanks,
-- Dexuan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 02:32:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 02:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGfuA-0002cf-7R; Fri, 21 Feb 2014 02:31:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WGfu8-0002ca-2t
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 02:31:56 +0000
Received: from [85.158.143.35:45306] by server-3.bemta-4.messagelabs.com id
	A1/99-11539-B9AB6035; Fri, 21 Feb 2014 02:31:55 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1392949910!7230395!1
X-Originating-IP: [202.81.31.145]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NSA9PiAzMTMyNzY=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20470 invoked from network); 21 Feb 2014 02:31:54 -0000
Received: from e23smtp03.au.ibm.com (HELO e23smtp03.au.ibm.com) (202.81.31.145)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 02:31:54 -0000
Received: from /spool/local
	by e23smtp03.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Fri, 21 Feb 2014 12:31:47 +1000
Received: from d23dlp01.au.ibm.com (202.81.31.203)
	by e23smtp03.au.ibm.com (202.81.31.209) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Fri, 21 Feb 2014 12:31:46 +1000
Received: from d23relay04.au.ibm.com (d23relay04.au.ibm.com [9.190.234.120])
	by d23dlp01.au.ibm.com (Postfix) with ESMTP id 4C2EA2CE805A
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 13:31:45 +1100 (EST)
Received: from d23av01.au.ibm.com (d23av01.au.ibm.com [9.190.234.96])
	by d23relay04.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1L2C68R5964204
	for <xen-devel@lists.xenproject.org>; Fri, 21 Feb 2014 13:12:06 +1100
Received: from d23av01.au.ibm.com (localhost [127.0.0.1])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1L2Vhat030173
	for <xen-devel@lists.xenproject.org>; Fri, 21 Feb 2014 13:31:44 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1L2Vh6B030153; Fri, 21 Feb 2014 13:31:43 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id 4D093A03B2; Fri, 21 Feb 2014 13:31:43 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Daniel Kiper <daniel.kiper@oracle.com>
In-Reply-To: <20140220203704.GG3441@olila.local.net-space.pl>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Fri, 21 Feb 2014 11:24:14 +1030
Message-ID: <8761o99tft.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022102-6102-0000-0000-000004FCE335
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
	different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Daniel Kiper <daniel.kiper@oracle.com> writes:
> Hey,
>
> On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
>> Ian Campbell <Ian.Campbell@citrix.com> writes:
>> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>> >> For platforms using EPT, I don't think you want anything but guest
>> >> addresses, do you?
>> >
>> > No, the arguments for preventing unfettered access by backends to
>> > frontend RAM applies to EPT as well.
>>
>> I can see how you'd parse my sentence that way, I think, but the two
>> are orthogonal.
>>
>> AFAICT your grant-table access restrictions are page-granularity, though
>> you don't use page-aligned data (e.g. in xen-netfront).  This level of
>> access control is possible using the virtio ring too, but no one has
>> implemented such a thing AFAIK.
>
> Could you say in short how it should be done? DMA API is an option but
> if there is a simpler mechanism available in VIRTIO itself we will be
> happy to use it in Xen.

OK, this challenged me to think harder.

The queue itself is effectively a grant table (as long as you don't give
the backend write access to it).  The available ring tells you where the
buffers are and whether they are readable or writable.  The used ring
tells you when they're used.

However, performance would suck due to no caching: you'd end up doing a
map and unmap on every packet.  I'm assuming Xen currently avoids that
somehow?  Seems likely...

On the other hand, if we wanted a more Xen-like setup, it would look
like this:

1) Abstract away the "physical addresses" to "handles" in the standard,
   and allow some platform-specific mapping setup and teardown.

2) In Linux, implement virtio DMA ops which handle the grant table
   stuff for Xen (returning grant table ids + an offset or something?),
   and a no-op for others.  This would be a runtime thing.

3) In Linux, change the drivers to use this API.

Now, Xen will not be able to use vhost to accelerate, but it doesn't now
anyway.

Am I missing anything?

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 02:51:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 02:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGgCe-000354-2M; Fri, 21 Feb 2014 02:51:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1WGgCc-00034z-28
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 02:51:02 +0000
Received: from [85.158.143.35:15408] by server-1.bemta-4.messagelabs.com id
	1C/5A-31661-51FB6035; Fri, 21 Feb 2014 02:51:01 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392951059!7247329!1
X-Originating-IP: [209.85.216.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4992 invoked from network); 21 Feb 2014 02:51:00 -0000
Received: from mail-qc0-f179.google.com (HELO mail-qc0-f179.google.com)
	(209.85.216.179)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 02:51:00 -0000
Received: by mail-qc0-f179.google.com with SMTP id r5so65316qcx.38
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 18:50:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=K+VIrTq2OpganW969W3rHesNDopzf/JymdADEkuf/10=;
	b=bSlloedOggOo6PeycpsrTaL2kRO/KuymxZxaX911+wSWywkl9KH/vglH8pPy0UswhZ
	O0NqwUcpjP0RNcKDt8M4ky5+mc1YU6kL0aPeq3wER7umkK+dVmhzd/fnnWR1yKYSn8t0
	PcnPJX5Bxg0x36+R8GJJkonfO3wrK12k+QvC7GwvijWZif6o10mUxji5BFjac1Saow3o
	fGlEPj/2qZ9S7pdURnvBaSMN69IuB3/i28l34zkZDGzvkIAR9LCtQPhsCYpBNMNYy46s
	zz5NHMtm5DgSB9tBmiCOAVOJq/G/m3Cw++Ij9Wv4rnbI1TqZvf5JJcPVJAAALT6i5vQn
	UOsg==
X-Gm-Message-State: ALoCoQnnlNYO68/cvBqxcqrh7vJ6zYI2hmxcPxv8gy96z4O6emBwZMeeMMhD53F01M/2M4mIH91F
MIME-Version: 1.0
X-Received: by 10.140.96.180 with SMTP id k49mr6050672qge.4.1392951059074;
	Thu, 20 Feb 2014 18:50:59 -0800 (PST)
Received: by 10.140.25.111 with HTTP; Thu, 20 Feb 2014 18:50:59 -0800 (PST)
In-Reply-To: <87ha7ubme0.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
Date: Thu, 20 Feb 2014 18:50:59 -0800
Message-ID: <CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Rusty Russell <rusty@au1.ibm.com>
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> Anthony Liguori <anthony@codemonkey.ws> writes:
>> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>>> Hi,
>>>>
>>>> Below you can find a summary of work regarding VIRTIO compatibility with
>>>> different virtualization solutions. It was done mainly from a Xen point of view
>>>> but the results are quite generic and can be applied to a wide spectrum
>>>> of virtualization platforms.
>>>
>>> Hi Daniel,
>>>
>>>         Sorry for the delayed response, I was pondering...  CC changed
>>> to virtio-dev.
>>>
>>> From a standard POV: It's possible to abstract out where we use
>>> 'physical address' for 'address handle'.  It's also possible to define
>>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
>>> Xen-PV is a distinct platform from x86.
>>
>> I'll go even further and say that "address handle" doesn't make sense either.
>
> I was trying to come up with a unique term, I wasn't trying to define
> semantics :)

Understood, that wasn't really directed at you.

> There are three debates here now: (1) what should the standard say, and

The standard should say, "physical address"

> (2) how would Linux implement it,

Linux should use the PCI DMA API.

> (3) should we use each platform's PCI
> IOMMU.

Just like any other PCI device :-)

>> Just using grant table references is not enough to make virtio work
>> well under Xen.  You really need to use bounce buffers ala persistent
>> grants.
>
> Wait, if you're using bounce buffers, you didn't make it "work well"!

Preaching to the choir, man...  but bounce buffering is proven to be
faster than doing grant mappings on every request.  xen-blk does
bounce buffering by default and I suspect netfront is heading that
direction soon.

It would be a lot easier to simply have a global pool of grant tables
that effectively becomes the DMA pool.  Then the DMA API can bounce
into that pool and those addresses can be placed on the ring.
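The "global pool of grants as a DMA pool" idea above can be sketched roughly as follows. This is a minimal illustration, not real Xen or Linux code: `grant_pool`, `dma_pool_map`, and `dma_pool_unmap` are invented names, and a real implementation would grant the pool pages to the backend once at init and use a proper allocator.

```c
/* Sketch: the guest pre-grants a fixed pool of pages to the backend once;
 * the DMA layer then bounces request data into that pool instead of doing
 * a grant map/unmap per request.  All names here are illustrative. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define POOL_PAGES 256
#define PAGE_SIZE  4096

static uint8_t grant_pool[POOL_PAGES][PAGE_SIZE]; /* pre-granted at init */
static int next_free;                             /* trivial LIFO allocator */

/* "map": copy the buffer into the pre-granted pool and return the pool
 * address; that address is what gets placed on the virtio ring. */
void *dma_pool_map(const void *buf, size_t len)
{
    if (len > PAGE_SIZE || next_free >= POOL_PAGES)
        return NULL;            /* real code would block or chain pages */
    void *slot = grant_pool[next_free++];
    memcpy(slot, buf, len);     /* the bounce */
    return slot;
}

/* "unmap": release the slot once the backend signals completion. */
void dma_pool_unmap(void *slot)
{
    (void)slot;
    next_free--;                /* LIFO free is enough for the sketch */
}
```

The point is that the grant setup cost is paid once for the whole pool, so the per-request cost is a memcpy rather than a hypercall.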

It's a little different for Xen because now the backends have to deal
with physical addresses but the concept is still the same.

>> I think what you ultimately want is virtio using a DMA API (I know
>> benh has scoffed at this but I don't buy his argument at face value)
>> and a DMA layer that bounces requests to a pool of persistent grants.
>
> We can have a virtio DMA API, sure.  It'd be a noop for non-Xen.
>
> But emulating the programming of an IOMMU seems masochistic.  PowerPC
> have made it clear they don't want this.

I don't think the argument is all that clear.  Wouldn't it be nice for
other PCI devices to be faster under Power KVM?  Why not change the
DMA API under Power Linux to detect that it's under KVM and
simply not make any hypercalls?

>  And no one else has come up
> with a compelling reason to want this: virtio passthrough?

So I can run Xen under QEMU and use virtio-blk and virtio-net as the
device model.  Xen PV uses the DMA API to do mfn -> pfn mapping and
since virtio doesn't use it, it's the only PCI device in the QEMU
device model that doesn't actually work when running Xen under QEMU.

Regards,

Anthony Liguori

>>> For platforms using EPT, I don't think you want anything but guest
>>> addresses, do you?
>>>
>>> From an implementation POV:
>>>
>>> On IOMMU, start here for previous Linux discussion:
>>>         http://thread.gmane.org/gmane.linux.kernel.virtualization/14410/focus=14650
>>>
>>> And this is the real problem.  We don't want to use the PCI IOMMU for
>>> PCI devices.  So it's not just a matter of using existing Linux APIs.
>>
>> Is there any data to back up that claim?
>
> Yes, for powerpc.  Implementer gets to measure, as always.  I suspect
> that if you emulate an IOMMU on Intel, your performance will suck too.
>
>> Just because power currently does hypercalls for anything that uses
>> the PCI IOMMU layer doesn't mean this cannot be changed.
>
> Does someone have an implementation of an IOMMU which doesn't use
> hypercalls, or is this theoretical?
>
>>  It's pretty
>> hacky that virtio-pci just happens to work well by accident on power
>> today.  Not all architectures have this limitation.
>
> It's a fundamental assumption of virtio that the host can access all of
> guest memory.  That's paravirt, not a hack.
>
> But tomayto tomatoh aside, it's unclear to me how you'd build an
> efficient IOMMU today.  And it's unclear what benefit you'd gain.  But
> the cost for Power is clear.
>
> So if someone wants to do this for PCI, they need to implement it and
> benchmark.  But this is a little orthogonal to the Xen discussion.
>
> Cheers,
> Rusty.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 02:52:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 02:52:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGgEM-00039K-Ic; Fri, 21 Feb 2014 02:52:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGgEK-00039C-Qp
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 02:52:49 +0000
Received: from [193.109.254.147:11597] by server-14.bemta-14.messagelabs.com
	id 6A/EF-29228-08FB6035; Fri, 21 Feb 2014 02:52:48 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392951166!5792394!1
X-Originating-IP: [209.85.192.41]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15329 invoked from network); 21 Feb 2014 02:52:47 -0000
Received: from mail-qg0-f41.google.com (HELO mail-qg0-f41.google.com)
	(209.85.192.41)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 02:52:47 -0000
Received: by mail-qg0-f41.google.com with SMTP id i50so6092042qgf.0
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 18:52:46 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=CcgpME7T3Q1oUzuUTY5lnkG/8wTObfDNFgfL780bdfE=;
	b=LKvzSegI6DO7BdXem/oJIc1ka4ZCfbqkJV0aiIxbyopBHw2JA5yxcQqGrOI7Zy2S4k
	e/XC8GylNs/mhQq35GsukCxYtU8vjefCbbe3T5tE1uyR1mrNgE0ooSQTC7jyqBr/F6Q0
	tu0tEi0pLOSyrETumeby2gmFxg6shqE+uw+E8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=CcgpME7T3Q1oUzuUTY5lnkG/8wTObfDNFgfL780bdfE=;
	b=C2Eh9WiaFIczFGZaBMdNAhnFns4WvNaZF4HuXh5B5Ysq+P45R1QBepSvvaUypmutb8
	WbRLi5lZENdgtBL/7yZLIySTLLWhuGgZCyq5lP6rQ/wvKTOKnA6jFwl7w3j7lcmv7CHK
	KkPR8QnVARfn2WZpLwWZgrJPIJ2pqWWD89y77RQ70p2DE7FaOiZtQjf8ZtFGI1LYrJqQ
	qLvJkxuCP05nExT+etkVcc3lgboBpbLzjya2Y0KuLRAa91CSm3uDdtRSrO7dpj7Vb2uv
	5Xa9sUWSkmNdfiBFDSAFrw/nZiMb5wgNY53ksQUPDCC3hxDd22TMlgMMpUJh36nWkI7+
	C5Nw==
X-Gm-Message-State: ALoCoQkUOQ2NTGsHwKJlXoV/vNy4VH3s5Pr9bkP5Qoyckjx7e/k1O0AmB7k6iiDUgokUD5VvNfSW
X-Received: by 10.140.84.19 with SMTP id k19mr6103335qgd.98.1392951166071;
	Thu, 20 Feb 2014 18:52:46 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Thu, 20 Feb 2014 18:52:30 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <20140220190713.GA2183@router-fw-old.local.net-space.pl>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 21 Feb 2014 06:52:30 +0400
X-Google-Sender-Auth: PDE5txCChuyfx4sftnwJcDj5hG0
Message-ID: <CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
To: Daniel Kiper <dkiper@net-space.pl>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-20 23:07 GMT+04:00 Daniel Kiper <dkiper@net-space.pl>:
> Use "mem_set_enforce_limit=0" in xl.conf instead of "enforce=0".


I already did that, but nothing changed =(

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 03:00:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 03:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGgM6-0003RE-4C; Fri, 21 Feb 2014 03:00:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1WGgM5-0003R9-9x
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 03:00:49 +0000
Received: from [85.158.143.35:60242] by server-1.bemta-4.messagelabs.com id
	25/8D-31661-061C6035; Fri, 21 Feb 2014 03:00:48 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392951646!7248177!1
X-Originating-IP: [209.85.192.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8872 invoked from network); 21 Feb 2014 03:00:47 -0000
Received: from mail-qg0-f42.google.com (HELO mail-qg0-f42.google.com)
	(209.85.192.42)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 03:00:47 -0000
Received: by mail-qg0-f42.google.com with SMTP id q107so6116036qgd.1
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 19:00:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=La4W2gjYcPHOUr3teFC1D6CnALYdpjWBvyFJa8U/ofg=;
	b=jPD6KNKJybJXz5HeZnGyGQc8xQSUG/WOvUrDjtQC0ZFIxEe2AkW42ZbqV/m9JZbDVi
	G70q9KxwIwl0Sqa6OdgVceheXzj46OqetFH3BsizKcMTOtN5nnMTEqs98F//DV4OIoWA
	/hXBjd2eydrtsGlRuJsj54hUOeF+pBISnfohtbwCivS7ZinqPjnCmfwb5XUibhVwOdBd
	Ky+daDQK7YxXz/9TRqZ4nyvYIVdagOdVLXT4pfQXLIA1ARq7utjcnYR8KtACvC/+IboV
	NzK5IQ+pidvL1dw0s/3KfhUqEkI1MjYf8Yw2r8nnITY9Mtt8zreEADTIxrMTHENw1O7w
	7CEw==
X-Gm-Message-State: ALoCoQnwIudL8xVtR2LOjkI8CKDrtB/Xuz00B584A8hShTGgzsS5ZpPOI46LbD2B10rU1QBZJS8S
MIME-Version: 1.0
X-Received: by 10.140.96.180 with SMTP id k49mr6086626qge.4.1392951645905;
	Thu, 20 Feb 2014 19:00:45 -0800 (PST)
Received: by 10.140.25.111 with HTTP; Thu, 20 Feb 2014 19:00:45 -0800 (PST)
In-Reply-To: <8761o99tft.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
Date: Thu, 20 Feb 2014 19:00:45 -0800
Message-ID: <CA+aC4kt1HcH8dryqDTxcv0rNWBx8Ed4F7VNVjuHCkAFSWC+Ktw@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Rusty Russell <rusty@au1.ibm.com>
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
 different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 4:54 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> Daniel Kiper <daniel.kiper@oracle.com> writes:
>> Hey,
>>
>> On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
>>> Ian Campbell <Ian.Campbell@citrix.com> writes:
>>> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>>> >> For platforms using EPT, I don't think you want anything but guest
>>> >> addresses, do you?
>>> >
>>> > No, the arguments for preventing unfettered access by backends to
>>> > frontend RAM applies to EPT as well.
>>>
>>> I can see how you'd parse my sentence that way, I think, but the two
>>> are orthogonal.
>>>
>>> AFAICT your grant-table access restrictions are page granularity, though
>>> you don't use page-aligned data (eg. in xen-netfront).  This level of
>>> access control is possible using the virtio ring too, but noone has
>>> implemented such a thing AFAIK.
>>
>> Could you say in short how it should be done? DMA API is an option but
>> if there is a simpler mechanism available in VIRTIO itself we will be
>> happy to use it in Xen.
>
> OK, this challenged me to think harder.
>
> The queue itself is effectively a grant table (as long as you don't give
> the backend write access to it).  The available ring tells you where the
> buffers are and whether they are readable or writable.  The used ring
> tells you when they're used.
>
> However, performance would suck due to no caching: you'd end up doing a
> map and unmap on every packet.  I'm assuming Xen currently avoids that
> somehow?  Seems likely...
>
> On the other hand, if we wanted a more Xen-like setup, it would look
> like this:
>
> 1) Abstract away the "physical addresses" to "handles" in the standard,
>    and allow some platform-specific mapping setup and teardown.

At the risk of beating a dead horse, passing handles (grant
references) is going to be slow.  virtio-blk would never be as fast as
xen-blkif.  I don't want to see virtio adopt a bouncing mechanism like
the one blkfront has developed, especially in a way where every driver
has to implement it on its own.

I really think the best paths forward for virtio on Xen are either (1)
reject the memory isolation thing and leave things as is or (2) assume
bounce buffering at the transport layer (by using the PCI DMA API).
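Option (2), bouncing at the transport layer, amounts to the transport (not each driver) deciding per platform whether a buffer must be bounced. A minimal user-space sketch of that dispatch, with invented names (`virtio_map`, `platform_needs_bounce`, `bounce_region`) standing in for whatever the real DMA-ops hook would be:

```c
/* Sketch: drivers always call virtio_map() for buffers they put on the
 * ring.  On hosts that can access all guest memory the call is the
 * identity; on a Xen-like platform it bounces into a pre-shared
 * (e.g. pre-granted) region.  All names are illustrative. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BOUNCE_BYTES (64 * 1024)

static bool platform_needs_bounce;          /* set once at init */
static uint8_t bounce_region[BOUNCE_BYTES]; /* pre-shared with backend */
static size_t bounce_used;

void *virtio_map(void *buf, size_t len)
{
    if (!platform_needs_bounce)
        return buf;                 /* no-op on KVM/EPT-style hosts */
    if (bounce_used + len > BOUNCE_BYTES)
        return NULL;                /* real code would wait or chain */
    void *slot = bounce_region + bounce_used;
    bounce_used += len;
    memcpy(slot, buf, len);         /* the bounce */
    return slot;
}
```

Drivers never learn whether bouncing happened, which is exactly why the cost stays in one place instead of being reimplemented per driver as in blkfront.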

Regards,

Anthony Liguori

> 2) In Linux, implement a virtio DMA ops which handles the grant table
>    stuff for Xen (returning grant table ids + offset or something?),
>    noop for others.  This would be a runtime thing.
>
> 3) In Linux, change the drivers to use this API.
>
> Now, Xen will not be able to use vhost to accelerate, but it doesn't now
> anyway.
>
> Am I missing anything?
>
> Cheers,
> Rusty.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 03:00:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 03:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGgM6-0003RE-4C; Fri, 21 Feb 2014 03:00:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <anthony@codemonkey.ws>) id 1WGgM5-0003R9-9x
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 03:00:49 +0000
Received: from [85.158.143.35:60242] by server-1.bemta-4.messagelabs.com id
	25/8D-31661-061C6035; Fri, 21 Feb 2014 03:00:48 +0000
X-Env-Sender: anthony@codemonkey.ws
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392951646!7248177!1
X-Originating-IP: [209.85.192.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8872 invoked from network); 21 Feb 2014 03:00:47 -0000
Received: from mail-qg0-f42.google.com (HELO mail-qg0-f42.google.com)
	(209.85.192.42)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 03:00:47 -0000
Received: by mail-qg0-f42.google.com with SMTP id q107so6116036qgd.1
	for <xen-devel@lists.xenproject.org>;
	Thu, 20 Feb 2014 19:00:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=La4W2gjYcPHOUr3teFC1D6CnALYdpjWBvyFJa8U/ofg=;
	b=jPD6KNKJybJXz5HeZnGyGQc8xQSUG/WOvUrDjtQC0ZFIxEe2AkW42ZbqV/m9JZbDVi
	G70q9KxwIwl0Sqa6OdgVceheXzj46OqetFH3BsizKcMTOtN5nnMTEqs98F//DV4OIoWA
	/hXBjd2eydrtsGlRuJsj54hUOeF+pBISnfohtbwCivS7ZinqPjnCmfwb5XUibhVwOdBd
	Ky+daDQK7YxXz/9TRqZ4nyvYIVdagOdVLXT4pfQXLIA1ARq7utjcnYR8KtACvC/+IboV
	NzK5IQ+pidvL1dw0s/3KfhUqEkI1MjYf8Yw2r8nnITY9Mtt8zreEADTIxrMTHENw1O7w
	7CEw==
X-Gm-Message-State: ALoCoQnwIudL8xVtR2LOjkI8CKDrtB/Xuz00B584A8hShTGgzsS5ZpPOI46LbD2B10rU1QBZJS8S
MIME-Version: 1.0
X-Received: by 10.140.96.180 with SMTP id k49mr6086626qge.4.1392951645905;
	Thu, 20 Feb 2014 19:00:45 -0800 (PST)
Received: by 10.140.25.111 with HTTP; Thu, 20 Feb 2014 19:00:45 -0800 (PST)
In-Reply-To: <8761o99tft.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
Date: Thu, 20 Feb 2014 19:00:45 -0800
Message-ID: <CA+aC4kt1HcH8dryqDTxcv0rNWBx8Ed4F7VNVjuHCkAFSWC+Ktw@mail.gmail.com>
From: Anthony Liguori <anthony@codemonkey.ws>
To: Rusty Russell <rusty@au1.ibm.com>
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
 different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 4:54 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> Daniel Kiper <daniel.kiper@oracle.com> writes:
>> Hey,
>>
>> On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
>>> Ian Campbell <Ian.Campbell@citrix.com> writes:
>>> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
>>> >> For platforms using EPT, I don't think you want anything but guest
>>> >> addresses, do you?
>>> >
>>> > No, the arguments for preventing unfettered access by backends to
>>> > frontend RAM applies to EPT as well.
>>>
>>> I can see how you'd parse my sentence that way, I think, but the two
>>> are orthogonal.
>>>
>>> AFAICT your grant-table access restrictions are page granularity, though
>>> you don't use page-aligned data (eg. in xen-netfront).  This level of
>>> access control is possible using the virtio ring too, but no one has
>>> implemented such a thing AFAIK.
>>
>> Could you say in short how it should be done? DMA API is an option but
>> if there is a simpler mechanism available in VIRTIO itself we will be
>> happy to use it in Xen.
>
> OK, this challenged me to think harder.
>
> The queue itself is effectively a grant table (as long as you don't give
> the backend write access to it).  The available ring tells you where the
> buffers are and whether they are readable or writable.  The used ring
> tells you when they're used.
>
> However, performance would suck due to no caching: you'd end up doing a
> map and unmap on every packet.  I'm assuming Xen currently avoids that
> somehow?  Seems likely...
>
> On the other hand, if we wanted a more Xen-like setup, it would look
> like this:
>
> 1) Abstract away the "physical addresses" to "handles" in the standard,
>    and allow some platform-specific mapping setup and teardown.

At the risk of beating a dead horse, passing handles (grant
references) is going to be slow.  virtio-blk would never be as fast as
xen-blkif.  I don't want to see virtio adopt a bouncing mechanism like
the one blkfront has developed, especially not in a way that every
driver has to implement on its own.

I really think the best paths forward for virtio on Xen are either (1)
reject the memory isolation thing and leave things as is or (2) assume
bounce buffering at the transport layer (by using the PCI DMA API).

Regards,

Anthony Liguori

> 2) In Linux, implement a virtio DMA ops which handles the grant table
>    stuff for Xen (returning grant table ids + offset or something?),
>    noop for others.  This would be a runtime thing.
>
> 3) In Linux, change the drivers to use this API.
>
> Now, Xen will not be able to use vhost to accelerate, but it doesn't now
> anyway.
>
> Am I missing anything?
>
> Cheers,
> Rusty.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 03:44:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 03:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGh20-0004Mm-3U; Fri, 21 Feb 2014 03:44:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <russell.pavlicek.xen@gmail.com>)
	id 1WGh1u-0004MW-1r; Fri, 21 Feb 2014 03:44:06 +0000
Received: from [193.109.254.147:11657] by server-7.bemta-14.messagelabs.com id
	80/C3-23424-18BC6035; Fri, 21 Feb 2014 03:44:01 +0000
X-Env-Sender: russell.pavlicek.xen@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392954239!541942!1
X-Originating-IP: [209.85.215.53]
X-SpamReason: No, hits=3.0 required=7.0 tests=BODY_RANDOM_LONG,
	RCVD_BY_IP,SUSPICIOUS_RECIPS
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20609 invoked from network); 21 Feb 2014 03:44:00 -0000
Received: from mail-la0-f53.google.com (HELO mail-la0-f53.google.com)
	(209.85.215.53)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 03:44:00 -0000
Received: by mail-la0-f53.google.com with SMTP id e16so1934918lan.26
	for <multiple recipients>; Thu, 20 Feb 2014 19:43:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=Mh/pDeNefzJM5DAgx2fKwF6GdhVHSH1FEwDAgfpq/9Y=;
	b=dydO3WSPpZCYr7FJ/AA7zqT/giCBJFEJVhaKzV//SUG/OXAmT6VnbGSwGQsSSS6L7L
	8pvmT3R6LNqb18E4JLNYqi8p2/IxyU5WKD8e8zZAhIIwovKHOgjAEdeZXfAwbrtvy4gp
	FIdy2JtstQ7fwutypj0yGS31eU/MwDM7rrZIRsI6cQw1azpAZhTD9GGqF/RwfUApTYad
	wpPE8mB/AflUmLZGPrgdAS8DLNifkcrc0XKjXNPZsqFi8GfZxZ46/7PG4S8uzrvhINjL
	JFGJ5ydsaCR+yeo/JRHKq7bmvngYqGe91CIPab31qGBCGHoNqTWEuLjWyMOF9B/zCY4L
	/hBg==
MIME-Version: 1.0
X-Received: by 10.112.43.70 with SMTP id u6mr2826482lbl.30.1392954239659; Thu,
	20 Feb 2014 19:43:59 -0800 (PST)
Received: by 10.112.72.72 with HTTP; Thu, 20 Feb 2014 19:43:59 -0800 (PST)
Date: Thu, 20 Feb 2014 22:43:59 -0500
X-Google-Sender-Auth: kPNxg6BlqvOhla7w-H0hFSFQLJ4
Message-ID: <CAHehzX0fYFeyeLCP1OjOMj81B4yV22SBbmn3JXSDdQMbHJYROA@mail.gmail.com>
From: Russ Pavlicek <russell.pavlicek@xenproject.org>
To: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen-devel@lists.xen.org, xen-api@lists.xen.org, 
	xs-devel@lists.xenserver.org, cl-mirage@lists.cam.ac.uk
Subject: [Xen-devel] Monday Feb 24 is Xen Project Document Day preparing for
	the 4.4 Release
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Monday is our monthly Xen Project Document Day.

However, this Document Day is special -- it is the prep day for our
impending 4.4 release.

We have a good amount of solid documentation for 4.3, but we need to
update to cover 4.4.  The greatest software in the world is worthless
unless people understand how to use it.  If you are still looking for
a way to contribute to the upcoming release, your opportunity has
arrived.

Never participated in a Document Day before?  All the info you'll need is here:

http://wiki.xenproject.org/wiki/Xen_Document_Days

Looking for something that needs attention besides the 4.4 release?
Look at the current TODO list:

http://wiki.xenproject.org/wiki/Xen_Document_Days/TODO

If you haven't requested to be made a Wiki editor, just fill out the form below:

http://xenproject.org/component/content/article/100-misc/145-request-to-be-made-a-wiki-editor.html

Hope to see you in Freenode IRC #xendocs on Monday!

Russ

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 04:32:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 04:32:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGhmX-0005d2-3g; Fri, 21 Feb 2014 04:32:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGhmM-0005cx-Be
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 04:32:09 +0000
Received: from [193.109.254.147:60695] by server-5.bemta-14.messagelabs.com id
	36/09-16688-0C6D6035; Fri, 21 Feb 2014 04:32:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392957118!5800915!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18855 invoked from network); 21 Feb 2014 04:31:59 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 04:31:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,516,1389744000"; d="scan'208";a="102877916"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 04:31:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 20 Feb 2014 23:31:39 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGhly-0005bT-PG;
	Fri, 21 Feb 2014 04:31:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGhlw-0001do-IQ;
	Fri, 21 Feb 2014 04:31:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25163-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 04:31:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 25163: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25163 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25163/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24873
 test-amd64-i386-xl-win7-amd64  7 windows-install          fail REGR. vs. 24873

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24873

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 linux                dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
baseline version:
 linux                a6d2ebcda7cb7467b3f5ca597710be25cc8ad76f

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Andrew Morton <akpm@linux-foundation.org>
  Asias He <asias@redhat.com>
  Avi Kivity <avi@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin LaHaise <bcrl@kvack.org>
  Bojan Smojver <bojan@rexursive.com>
  Dan Rosenberg <dan.j.rosenberg@gmail.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Rientjes <rientjes@google.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jeff Layton <jlayton@redhat.com>
  Jiang Liu <liuj97@gmail.com>
  KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Rafael J. Wysocki <rjw@sisk.pl>
  Roland Dreier <roland@purestorage.com>
  Rusty Russell <rusty@rustcorp.com.au>
  Seth Forshee <seth.forshee@canonical.com>
  Stephen Smalley <sds@tycho.nsa.gov>
  Steven Rostedt <rostedt@goodmis.org>
  Tao Ma <boyu.mt@taobao.com>
  Trond Myklebust <Trond.Myklebust@netapp.com>
  Xishi Qiu <qiuxishi@huawei.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 934 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 934 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 04:35:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 04:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGhpc-0005pk-KO; Fri, 21 Feb 2014 04:35:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGhpZ-0005pU-Nf
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 04:35:22 +0000
Received: from [193.109.254.147:13823] by server-2.bemta-14.messagelabs.com id
	C9/B0-01236-987D6035; Fri, 21 Feb 2014 04:35:21 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-12.tower-27.messagelabs.com!1392957319!2138183!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31731 invoked from network); 21 Feb 2014 04:35:20 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 04:35:20 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so5138276qcx.9
	for <xen-devel@lists.xen.org>; Thu, 20 Feb 2014 20:35:19 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=RRSAzlxgXyJefsNaThzk8jd+N3zUCmBFFX+fIvlpqYU=;
	b=I9MgNj1gSDQUpW9FKME+iiHsH6hSZrataI6je0fxFasaLYZ56fgTanAXI1ISFCV0RV
	J6dknxzwFkwWx7L2d8YOq4d+mmtOvHcgVzN5YICWVm8iphlTCOIGWQ5FuPTxpiFGpI0S
	Ygrx6PgjCadqZCalk1WKuF+mlJ1g5qBbwI/tw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=RRSAzlxgXyJefsNaThzk8jd+N3zUCmBFFX+fIvlpqYU=;
	b=Nj8NB5S+ZidVbcFBHE955OHq+HEZ9Q3oAbi8wObqv9uvYqClYUvrgl40nix/Imyq7f
	ZV2+ear40wmbTAUoZ7Xlv4vAWOCSUbtVxPVpK9RmK1OvCkVgE6LIDbHktyqfI2qHgB20
	FZLsrPTCSLGobZeacsbok6x5WHF8Ll1nfN7nQRa9AlfKORS4yil15DH9KfSva5XZWpu6
	e/DouL4As1VHGSQvSRqL8ZW9s6H6DOaNHI0dpb3bZH4S+O/vRIRwhOwXB36mJd07sVIL
	14Za5gN9AsJgdx8fv4MopVN2Mozg42JNEWsnOtwBI46kEo5I2iSESWdNnUzZYMvLEWLT
	XHAA==
X-Gm-Message-State: ALoCoQl9VuJX5cQLD81GQmO8Xr35x5u3+F91aKEOWxM06bG+rr8uyAqZDXSe+tQ9YRjpCofBK9yO
X-Received: by 10.224.104.9 with SMTP id m9mr6933189qao.18.1392957319350; Thu,
	20 Feb 2014 20:35:19 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Thu, 20 Feb 2014 20:35:04 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 21 Feb 2014 08:35:04 +0400
X-Google-Sender-Auth: O0B42AH6gIxN-EL0L95EAQgoGEs
Message-ID: <CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-20 9:52 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> Hello. I have some problems with swap files in domU - I have SSD disks
> that cache all IO, and if users use swap the SSDs may fail very often.
> Is it possible to use tmem frontswap without a swap file at all, and
> transparently push swap pages to tmem?


Okay, as I see it, that isn't possible.
Another question - is it possible to reserve tmem for domains at a specific size?
For example, I need 20Gb for one domain and 10Gb for another,
but if the second domain is very hungry it must not be able to
eat up all the memory.
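For context, frontswap is a backend hook behind an existing swap device, so the guest still needs a configured swap area for tmem to shadow; pages are offered to tmem first and the disk is only written on fallback. A minimal configuration sketch (assuming a domU kernel with the Xen tmem driver built in, a hypervisor booted with tmem enabled, and module parameter names as found in the Linux xen tmem driver of this era - treat the exact option spellings as assumptions to verify against your kernel/Xen versions):

```shell
# Xen hypervisor command line (e.g. the multiboot line in grub.cfg):
# enable tmem support in the hypervisor.
#   multiboot /boot/xen.gz tmem

# domU kernel command line: enable the Xen tmem driver with frontswap
# (parameter names assumed; check your kernel's xen tmem driver).
#   tmem tmem.frontswap=1

# Inside the guest: frontswap still needs a swap area to register
# against, even if most pages end up in tmem rather than on disk.
dd if=/dev/zero of=/swapfile bs=1M count=1024   # 1 GiB backing file
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
```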

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 05:36:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 05:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGima-0007Er-J7; Fri, 21 Feb 2014 05:36:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGimY-0007Em-2z
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 05:36:18 +0000
Received: from [85.158.143.35:14443] by server-3.bemta-4.messagelabs.com id
	54/B1-11539-1D5E6035; Fri, 21 Feb 2014 05:36:17 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392960974!7255076!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5165 invoked from network); 21 Feb 2014 05:36:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 05:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,516,1389744000"; d="scan'208";a="102889959"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 05:36:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 00:36:13 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGimT-0005uq-9u;
	Fri, 21 Feb 2014 05:36:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGimS-0006EY-HE;
	Fri, 21 Feb 2014 05:36:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25164-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 05:36:12 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 25164: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25164 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25164/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24878
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24878

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass

version targeted for testing:
 linux                a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
baseline version:
 linux                29b5f720990fafc302a034468455426dd662e101

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Antti Palosaari <crope@iki.fi>
  Ben Hutchings <ben@decadent.org.uk>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Mason <clm@fb.com>
  Dave Jones <davej@fedoraproject.org>
  David Rientjes <rientjes@google.com>
  Davidlohr Bueso <davidlohr@hp.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Harald Freudenberger <freude@linux.vnet.ibm.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jason Cooper <jason@lakedaemon.net>
  Josef Bacik <jbacik@fb.com>
  KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Kyle McMartin <kyle@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lior Amsalem <alior@marvell.com>
  Mark Salter <msalter@redhat.com>
  Mauro Carvalho Chehab <m.chehab@samsung.com>
  Mel Gorman <mgorman@suse.de>
  Michael Krufky <mkrufky@linuxtv.org>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathan Lynch <nathan_lynch@mentor.com>
  Paul Moore <pmoore@redhat.com>
  Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
  Roland Dreier <roland@purestorage.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Stefan Becker <schtefan@gmx.net>
  Stephen Smalley <sds@tycho.nsa.gov>
  Stephen Warren <swarren@nvidia.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Tony Prisk <linux@prisktech.co.nz>
  Vinayak Kale <vkale@apm.com>
  Will Deacon <will.deacon@arm.com>
  Xishi Qiu <qiuxishi@huawei.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
+ branch=linux-3.10
+ revision=a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git a43e02cf87d0c1ddce1719d93478f0f6a3a095e8:tested/linux-3.10
Counting objects: 205, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (146/146), 29.16 KiB, done.
Total 146 (delta 119), reused 146 (delta 119)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   29b5f72..a43e02c  a43e02cf87d0c1ddce1719d93478f0f6a3a095e8 -> tested/linux-3.10
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 05:36:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 05:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGimm-0007Fj-5G; Fri, 21 Feb 2014 05:36:32 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WGimk-0007Fa-QP
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 05:36:31 +0000
Received: from [85.158.137.68:16931] by server-10.bemta-3.messagelabs.com id
	70/4F-07302-DD5E6035; Fri, 21 Feb 2014 05:36:29 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1392960988!3242939!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11215 invoked from network); 21 Feb 2014 05:36:29 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 05:36:29 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=OvYKNuHEb4MT2DGwjrBIWRtIPHm7xYsIbzc8guV0WM5dW59DGETuC19E
	fFn/PaYFZtSW4i31Ue6nNrv/sTeDv7zPJVh7zttrU5HrNh6T/FJGmLCgC
	4g7sph7Gf/4MpGDYm+8XXVI6MehQTXdQf41ce3/+dakk9f/GpHUb50ght
	klBuDzSSIcS29HLQgacnRjDWSzeWMCnTGzRouoASSelTmQInQx3DwGs4P
	p0d64irXsVDr9BDe6hnCQqIZZOR89;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392960989; x=1424496989;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=HGPTev8bkSRYH+5bG3EgublWW5Idhyuq6J3WMF8Ogi0=;
	b=FMEQ/6Qfvmy6N0NUabiVIi8izxoMOxIbURkTlrfUVyUtxqB8/V/CQzr2
	MnzkiqZXeWXNvNj5wCfDgcrXKihpdxzJqvo+M2KEdHYXvRR7gNV1EjIdD
	xowdGv8q0x8IIcRg2c3mQM79Ohwus0iAucOpiieOjdBtq0LUwysok6YdY
	TQb12/fgb6p1L88uVWvt5oG1BJhIP/U+UY6v+X38+iJ/TZguJid0hxlUv
	0Bt90z/G0yRIXOavrXPkzK6vfedXd;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,516,1389740400"; d="scan'208";a="159538915"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 21 Feb 2014 06:36:19 +0100
X-IronPort-AV: E=Sophos;i="4.97,516,1389740400"; d="scan'208";a="32026377"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 21 Feb 2014 06:36:20 +0100
Message-ID: <5306E5D3.6000302@ts.fujitsu.com>
Date: Fri, 21 Feb 2014 06:36:19 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, "Dong,
	Eddie" <eddie.dong@intel.com>, Jan Beulich <JBeulich@suse.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21.02.2014 02:26, Zhang, Yang Z wrote:
> Juergen Gross wrote on 2014-02-20:
>> Hi,
>
> Hi, Juergen
>
>>
>> I think I've found a bug in the debug trap handling of the Xen hypervisor
>> in case of an HVM domU using single stepping:
>>
>> Debug registers are restored on vcpu switch only if db7 has any debug
>> events activated or if the debug registers are marked as being used by
>> the domU. This leads to problems if the domU uses single stepping and a
>> vcpu switch occurs between the single step trap and the reading of db6
>> in the guest. The db6 contents (single step indicator) are lost in this case.
>>
>> Jan suggested intercepting the debug trap in the hypervisor and marking
>> the debug registers as used by the domU, to enable saving and
>> restoring of the debug registers in case of a context switch. I used the
>> attached patch (applies to Xen 4.2.3) to verify this solution and it
>> worked (without the patch a test was able to reproduce the bug once in
>> about 3 hours; with the patch the test ran for more than 12 hours without problems).
>>
>> Obviously the patch isn't the final one, as I deactivated the "monitor trap flag"
>> feature to avoid any strange dependencies. Jan wanted someone from the
>> VMX folks to put together a proper fix to avoid overlooking any corner cases.
>>
>
> Thanks for reporting this issue.
> Actually, I'm not sure in which scenario you saw this issue. Are you single-stepping inside the guest, or running gdb to debug the VM remotely?

Single stepping inside the guest:

1. Guest sets the TF flag in the flags image loaded by IRET and executes IRET
2. Debug trap occurs in the guest; the physical DB6 holds the single step indicator
3. A vcpu scheduling event occurs; the debug registers are NOT saved, as they
    are not marked dirty and DB7 has no debug events configured
4. When the guest vcpu is scheduled again, DB6 has lost the single step indicator
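
The race in the four steps above can be modeled in a few lines of Python. This is a hypothetical simulation, not Xen code; all names (VCPU, context_switch, debug_dirty) are invented for illustration:

```python
# Model of the lazy debug-register save logic and the DB6-loss race.
BS = 1 << 14  # single-step ("BS") indicator bit in DR6/DB6

class VCPU:
    def __init__(self):
        self.saved_dr6 = 0      # per-vcpu saved copy of DB6
        self.dr7 = 0            # no breakpoints armed in DB7
        self.debug_dirty = False

def context_switch(phys_dr6, vcpu):
    """Buggy save logic: debug state is only saved when DB7 arms an
    event or the registers are marked dirty; otherwise it is lost."""
    if vcpu.dr7 or vcpu.debug_dirty:
        vcpu.saved_dr6 = phys_dr6
    return 0  # physical DB6 is clobbered by the next vcpu

# Steps 1-2: the single-step trap sets BS in the physical DB6
vcpu = VCPU()
phys_dr6 = BS
# Step 3: the vcpu is descheduled before the guest reads DB6
phys_dr6 = context_switch(phys_dr6, vcpu)
# Step 4: the guest resumes and sees a DB6 without the BS bit
print(hex(vcpu.saved_dr6))   # 0x0 -> indicator lost

# With the suggested fix, the #DB intercept marks the registers dirty,
# so the context switch preserves DB6:
vcpu2 = VCPU()
vcpu2.debug_dirty = True
context_switch(BS, vcpu2)
print(hex(vcpu2.saved_dr6))  # 0x4000 -> indicator preserved
```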


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 05:36:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 05:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGima-0007Er-J7; Fri, 21 Feb 2014 05:36:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGimY-0007Em-2z
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 05:36:18 +0000
Received: from [85.158.143.35:14443] by server-3.bemta-4.messagelabs.com id
	54/B1-11539-1D5E6035; Fri, 21 Feb 2014 05:36:17 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392960974!7255076!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5165 invoked from network); 21 Feb 2014 05:36:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 05:36:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,516,1389744000"; d="scan'208";a="102889959"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 05:36:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 00:36:13 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGimT-0005uq-9u;
	Fri, 21 Feb 2014 05:36:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGimS-0006EY-HE;
	Fri, 21 Feb 2014 05:36:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25164-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 05:36:12 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 25164: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25164 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25164/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install          fail like 24878
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24878

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass

version targeted for testing:
 linux                a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
baseline version:
 linux                29b5f720990fafc302a034468455426dd662e101

------------------------------------------------------------
People who touched revisions under test:
  Andrew Morton <akpm@linux-foundation.org>
  Antti Palosaari <crope@iki.fi>
  Ben Hutchings <ben@decadent.org.uk>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Mason <clm@fb.com>
  Dave Jones <davej@fedoraproject.org>
  David Rientjes <rientjes@google.com>
  Davidlohr Bueso <davidlohr@hp.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Harald Freudenberger <freude@linux.vnet.ibm.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jason Cooper <jason@lakedaemon.net>
  Josef Bacik <jbacik@fb.com>
  KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Kyle McMartin <kyle@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lior Amsalem <alior@marvell.com>
  Mark Salter <msalter@redhat.com>
  Mauro Carvalho Chehab <m.chehab@samsung.com>
  Mel Gorman <mgorman@suse.de>
  Michael Krufky <mkrufky@linuxtv.org>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Nathan Lynch <nathan_lynch@mentor.com>
  Paul Moore <pmoore@redhat.com>
  Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
  Roland Dreier <roland@purestorage.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Stefan Becker <schtefan@gmx.net>
  Stephen Smalley <sds@tycho.nsa.gov>
  Stephen Warren <swarren@nvidia.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
  Tony Prisk <linux@prisktech.co.nz>
  Vinayak Kale <vkale@apm.com>
  Will Deacon <will.deacon@arm.com>
  Xishi Qiu <qiuxishi@huawei.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
+ branch=linux-3.10
+ revision=a43e02cf87d0c1ddce1719d93478f0f6a3a095e8
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git a43e02cf87d0c1ddce1719d93478f0f6a3a095e8:tested/linux-3.10
Counting objects: 205, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (146/146), 29.16 KiB, done.
Total 146 (delta 119), reused 146 (delta 119)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   29b5f72..a43e02c  a43e02cf87d0c1ddce1719d93478f0f6a3a095e8 -> tested/linux-3.10
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 06:31:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjdf-00009D-TD; Fri, 21 Feb 2014 06:31:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WGjde-000098-OY
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 06:31:10 +0000
Received: from [85.158.139.211:41893] by server-13.bemta-5.messagelabs.com id
	2D/E3-18801-BA2F6035; Fri, 21 Feb 2014 06:31:07 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392964266!5339476!1
X-Originating-IP: [80.70.172.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjQ5ID0+IDI5MDUzOA==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6571 invoked from network); 21 Feb 2014 06:31:06 -0000
Received: from dgate10.ts.fujitsu.com (HELO dgate10.ts.fujitsu.com)
	(80.70.172.49)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 06:31:06 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=W+D2PaOUZpVBPHF9azJ/mYKyPWTuKSgAg9RoOBS9ymaBzQjDLT5AcQ2Y
	yIGB9Yey2WxZkbBl2818SnDtYbcha7bq2GZ8h4aUhMl6VhYQpLVmTnV9z
	tP1tEDKsf+ped1dqVKzGlTdJqcUDlLGFVjK9OlTKM5bUis0l6utJ8gg4a
	2eHzSZVPTDxg3bZivyHZwwb1AnNNZytc+Y/kpD4TMg0vKFRkINw79K2qd
	MgD+3Ftrl2UvuviKdyxDOQHZmZ8Gg;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1392964267; x=1424500267;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=bn/5+rJ77o80LaXXelj4207U6p4V6mQCSg98J6EQ878=;
	b=fRElqu2yeNBf/CcDiFCWXi98665MFyrupN+b0SX4phW01XGHz2k98cYI
	205gpHQ5MsOmJBVvmOovKrkE2a6WbU35E0LpnnFF4D8l6UBB6FPg97CsH
	/A2vXRpX7BEkshl3IKOqp568yCj3lEdffmAY9rXXi8kX4jh9S+ndHw+ZT
	W03CNMWAQW4ujIVra1h0wQeuDTIuoxZwFpZNEuoMmWcMnv1FtA+DtcHEP
	9I6FlQsbZdDQl2YWgMBryXG7llz56;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,517,1389740400"; d="scan'208";a="186235237"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate10u.abg.fsc.net with ESMTP; 21 Feb 2014 07:31:06 +0100
X-IronPort-AV: E=Sophos;i="4.97,517,1389740400"; d="scan'208";a="32028490"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 21 Feb 2014 07:31:05 +0100
Message-ID: <5306F2A9.1040503@ts.fujitsu.com>
Date: Fri, 21 Feb 2014 07:31:05 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
	<53039F55.3030901@terremark.com>
	<1392746781.32038.594.camel@Solace>
	<53059BB0.1000705@ts.fujitsu.com>
	<1392920545.32038.826.camel@Solace>
In-Reply-To: <1392920545.32038.826.camel@Solace>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>,
	Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20.02.2014 19:22, Dario Faggioli wrote:
> On gio, 2014-02-20 at 07:07 +0100, Juergen Gross wrote:
>> On 18.02.2014 19:06, Dario Faggioli wrote:
>
>>> While this one, although a bit more "boring" than the above, would
>>> probably be something quite valuable to have!
>>>
>>> I can only think of rather expensive ways of implementing it, involving
>>> going through all the cpupools and, for each cpupool, through all its
>>> cpus and check the topology relationships, but perhaps there are others
>>> (I'll think harder).
>>>
>> Adding some information like this would be nice, indeed. But I think we should
>> not limit this to just hyperthreads. There are more levels of shared resources,
>> like caches or memory interfaces on the same socket. In case we want to add
>> information about potential performance influences due to shared resources, we
>> should be more generic.
>>
> All true... To the point that I now also wonder what a suitable
> interface and a not-too-verbose output format could be...

Well, looking at the available topology information I think it should look like
the following example:

# xl cpupool-list --shareinfo
Name          CPUs   Sched     Active  Domain count  shared resources
Pool-0          1    credit       y         1        core:   lw_pool
lw_pool         1    credit       y         0        core:   Pool-0
bs2_pool        2    credit       y         1        socket: Pool-0,lw_pool

What do you think?
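The brute-force approach mentioned earlier (walking every cpupool and comparing CPU topology) could be sketched roughly as below. This is only an illustration, not Xen code: the pool/topology data structures are invented stand-ins for what real code would obtain from Xen's topology information, and core IDs are assumed to be globally unique for simplicity.

```python
# Hedged sketch: determine which cpupools share a core or a socket.
# "pools" maps pool name -> set of CPU ids; "topo" maps CPU id ->
# (core_id, socket_id). Both are hypothetical inputs for illustration.

def shared_resources(pools, topo):
    """Return {pool: {"core": other_pools, "socket": other_pools}},
    recording only the tightest shared level per CPU pair."""
    result = {name: {"core": set(), "socket": set()} for name in pools}
    names = list(pools)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            for ca in pools[a]:
                for cb in pools[b]:
                    if topo[ca][0] == topo[cb][0]:
                        level = "core"    # same core: hyperthread siblings
                    elif topo[ca][1] == topo[cb][1]:
                        level = "socket"  # same socket: shared cache/memory
                    else:
                        continue
                    result[a][level].add(b)
                    result[b][level].add(a)
    return result
```

With CPUs 0/1 on one core and 2/3 on another core of the same socket, and pools Pool-0={0}, lw_pool={1}, bs2_pool={2,3}, this reproduces the example listing above: Pool-0 and lw_pool share a core, and bs2_pool shares a socket with both. It also shows why the approach is expensive: the cost grows with the product of pool sizes.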

>> And what about some NUMA information? Wouldn't it be worthwhile to show memory
>> locality information as well? This should be considered to be displayed by
>> "xl list", too.
>>
> Indeed. I actually have a half-baked series doing just that. It's a
> bit more complicated (still from an interface point of view), as that
> info resides in Xen, and some new hypercall or similar is required to
> retrieve it.
>
> I'll post it in early 4.5 dev cycle.

Please do!


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Feb 21 06:32:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:32:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjes-0000CY-F9; Fri, 21 Feb 2014 06:32:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <annie.li@oracle.com>) id 1WGjer-0000CO-4d
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 06:32:25 +0000
Received: from [85.158.139.211:22679] by server-7.bemta-5.messagelabs.com id
	0E/97-14867-8F2F6035; Fri, 21 Feb 2014 06:32:24 +0000
X-Env-Sender: annie.li@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392964341!5316614!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22750 invoked from network); 21 Feb 2014 06:32:23 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 06:32:23 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1L6WEbM002137
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 06:32:15 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1L6WCtU007162
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 06:32:14 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1L6WCF4016338; Fri, 21 Feb 2014 06:32:12 GMT
Received: from [10.182.38.215] (/10.182.38.215)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 20 Feb 2014 22:32:12 -0800
Message-ID: <5306F2E8.5090509@oracle.com>
Date: Fri, 21 Feb 2014 14:32:08 +0800
From: annie li <annie.li@oracle.com>
Organization: Oracle Corporation
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:17.0) Gecko/20130328 Thunderbird/17.0.5
MIME-Version: 1.0
To: Sander Eikelenboom <linux@eikelenboom.it>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
In-Reply-To: <587238484.20140220121842@eikelenboom.it>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 2014/2/20 19:18, Sander Eikelenboom wrote:
> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>
>
>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>>> Hi All,
>>>
>>> I'm currently having some network troubles with Xen and recent Linux kernels.
>>>
>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>>
>>>     In the guest:
>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>>     [57539.859610] net eth0: Need more slots
>>>     [58157.675939] net eth0: Need more slots
>>>     [58725.344712] net eth0: Need more slots
>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>>     [61815.849225] net eth0: Need more slots
>> This issue is familiar... and I thought it got fixed.
>>   From my original analysis of a similar issue I hit before, the root cause
>> is that netback still creates a response when the ring is full. I remember
>> a larger MTU could trigger this issue before; what is the MTU size?
> In dom0 both for the physical nics and the guest vif's MTU=1500
> In domU the eth0 also has MTU=1500.
>
> So it's not jumbo frames .. just everywhere the same plain defaults ..
>
> With the patch from Wei that solves the other issue, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch.
> I have extended the "need more slots" warning to also print the cons, slots, max, rx->offset and size; hope that gives some more insight.
> It is indeed the VM where I had similar issues before; the primary thing this VM does is 2 simultaneous rsyncs (one push, one pull) with some gigabytes of data.
>
> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference " as seen below; I don't know whether it's a cause or an effect, though.

The log "grant_table.c:1857:d0 Bad grant reference " was also seen before.
Probably the response overlaps the request, and the grant copy returns an
error when a wrong grant reference is used. Netback then sets resp->status
to XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the
frontend. Would it be possible to print a log in xenvif_rx_action of
netback, to see whether something is wrong with the max slots and used slots?

Thanks
Annie
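As an aside on the numbers in the guest log: a response status of -1 read back through an unsigned 32-bit field comes out as 4294967295. A minimal illustration (not Xen code; the helper name is made up):

```python
# Reinterpreting a negative status as an unsigned 32-bit value explains
# the "size: 4294967295" lines in the frontend log.
XEN_NETIF_RSP_ERROR = -1  # error status value in Xen's netif protocol


def as_u32(value):
    """Reinterpret a (possibly negative) integer as unsigned 32-bit."""
    return value & 0xFFFFFFFF


print(as_u32(XEN_NETIF_RSP_ERROR))  # 4294967295
```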

>
> Will keep you posted when it triggers again with the extra info in the warn.
>
> --
> Sander
>
>
>
>> Thanks
>> Annie
>>>     Xen reports:
>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>>
>>>
>>>
>>> Another issue with networking is when running both dom0 and the domUs with a 3.14-rc3 kernel:
>>>     - I can ping the guests from dom0
>>>     - I can ping dom0 from the guests
>>>     - But I can't ssh or access things by http
>>>     - I don't see any relevant error messages ...
>>>     - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>>>       (that previously worked fine)
>>>
>>> --
>>>
>>> Sander
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@lists.xen.org
>>> http://lists.xen.org/xen-devel
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjvD-0000nP-KC; Fri, 21 Feb 2014 06:49:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjvB-0000mp-KM
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:18 +0000
Received: from [85.158.137.68:64022] by server-14.bemta-3.messagelabs.com id
	39/B7-08196-CE6F6035; Fri, 21 Feb 2014 06:49:16 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!6
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20459 invoked from network); 21 Feb 2014 06:49:15 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:15 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295723"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:12 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:13 +0800
Message-Id: <1392965053-1069-6-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 5/5] xen, gfx passthrough: add opregion mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

The OpRegion shouldn't be mapped 1:1, because the host address can't be
used directly in the guest.

This patch traps read and write accesses to the OpRegion register in
the Intel GPU's config space (offset 0xfc).

The original patch is from Jean Guyader <jean.guyader@eu.citrix.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Jean Guyader <jean.guyader@eu.citrix.com>
---
 hw/xen/xen_pt.h             |    4 ++-
 hw/xen/xen_pt_config_init.c |   45 ++++++++++++++++++++++++++++++++++++++++++-
 hw/xen/xen_pt_graphics.c    |   45 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index 92e4d51..9f7fd4e 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -63,7 +63,7 @@ typedef int (*xen_pt_conf_byte_read)
 #define XEN_PT_BAR_UNMAPPED (-1)
 
 #define PCI_CAP_MAX 48
-
+#define PCI_INTEL_OPREGION 0xfc
 
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
@@ -307,5 +307,7 @@ int intel_pch_init(PCIBus *bus);
 void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
                    uint32_t val, int len);
 uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len);
+uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 
 #endif /* !XEN_PT_H */
diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index 8ccc2e4..30135c1 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -575,6 +575,22 @@ static int xen_pt_exp_rom_bar_reg_write(XenPCIPassthroughState *s,
     return 0;
 }
 
+static int xen_pt_intel_opregion_read(XenPCIPassthroughState *s,
+                                      XenPTReg *cfg_entry,
+                                      uint32_t *value, uint32_t valid_mask)
+{
+    *value = igd_read_opregion(s);
+    return 0;
+}
+
+static int xen_pt_intel_opregion_write(XenPCIPassthroughState *s,
+                                       XenPTReg *cfg_entry, uint32_t *value,
+                                       uint32_t dev_value, uint32_t valid_mask)
+{
+    igd_write_opregion(s, *value);
+    return 0;
+}
+
 /* Header Type0 reg static information table */
 static XenPTRegInfo xen_pt_emu_reg_header0[] = {
     /* Vendor ID reg */
@@ -1438,6 +1454,20 @@ static XenPTRegInfo xen_pt_emu_reg_msix[] = {
     },
 };
 
+static XenPTRegInfo xen_pt_emu_reg_igd_opregion[] = {
+    /* Intel IGFX OpRegion reg */
+    {
+        .offset     = 0x0,
+        .size       = 4,
+        .init_val   = 0,
+        .no_wb      = 1,
+        .u.dw.read   = xen_pt_intel_opregion_read,
+        .u.dw.write  = xen_pt_intel_opregion_write,
+    },
+    {
+        .size = 0,
+    },
+};
 
 /****************************
  * Capabilities
@@ -1675,6 +1705,14 @@ static const XenPTRegGroupInfo xen_pt_emu_reg_grps[] = {
         .size_init   = xen_pt_msix_size_init,
         .emu_regs = xen_pt_emu_reg_msix,
     },
+    /* Intel IGD Opregion group */
+    {
+        .grp_id      = PCI_INTEL_OPREGION,
+        .grp_type    = XEN_PT_GRP_TYPE_EMU,
+        .grp_size    = 0x4,
+        .size_init   = xen_pt_reg_grp_size_init,
+        .emu_regs    = xen_pt_emu_reg_igd_opregion,
+    },
     {
         .grp_size = 0,
     },
@@ -1804,7 +1842,8 @@ int xen_pt_config_init(XenPCIPassthroughState *s)
         uint32_t reg_grp_offset = 0;
         XenPTRegGroup *reg_grp_entry = NULL;
 
-        if (xen_pt_emu_reg_grps[i].grp_id != 0xFF) {
+        if (xen_pt_emu_reg_grps[i].grp_id != 0xFF
+            && xen_pt_emu_reg_grps[i].grp_id != PCI_INTEL_OPREGION) {
             if (xen_pt_hide_dev_cap(&s->real_device,
                                     xen_pt_emu_reg_grps[i].grp_id)) {
                 continue;
@@ -1817,6 +1856,10 @@ int xen_pt_config_init(XenPCIPassthroughState *s)
             }
         }
 
+        if (xen_pt_emu_reg_grps[i].grp_id == PCI_INTEL_OPREGION) {
+            reg_grp_offset = PCI_INTEL_OPREGION;
+        }
+
         reg_grp_entry = g_new0(XenPTRegGroup, 1);
         QLIST_INIT(&reg_grp_entry->reg_tbl_list);
         QLIST_INSERT_HEAD(&s->reg_grps, reg_grp_entry, entries);
diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
index 2a01406..bebfcfd 100644
--- a/hw/xen/xen_pt_graphics.c
+++ b/hw/xen/xen_pt_graphics.c
@@ -6,6 +6,7 @@
 #include "hw/xen/xen_backend.h"
 
 int igd_passthru;
+static int igd_guest_opregion;
 
 /*
  * register VGA resources for the domain with assigned gfx
@@ -360,3 +361,47 @@ err_out:
     return -1;
 }
 
+uint32_t igd_read_opregion(XenPCIPassthroughState *s)
+{
+    uint32_t val = -1;
+
+    if (igd_guest_opregion == 0) {
+        return val;
+    }
+
+    val = igd_guest_opregion;
+
+    XEN_PT_LOG(&s->dev, "Read opregion val=%x\n", val);
+    return val;
+}
+
+void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val)
+{
+    uint32_t host_opregion = 0;
+    int ret;
+
+    if (igd_guest_opregion) {
+        XEN_PT_LOG(&s->dev, "opregion register already been set, ignoring %x\n",
+                   val);
+        return;
+    }
+
+    xen_host_pci_get_block(&s->real_device, PCI_INTEL_OPREGION,
+            (uint8_t *)&host_opregion, 4);
+    igd_guest_opregion = (val & ~0xfff) | (host_opregion & 0xfff);
+
+    ret = xc_domain_memory_mapping(xen_xc, xen_domid,
+            igd_guest_opregion >> XC_PAGE_SHIFT,
+            host_opregion >> XC_PAGE_SHIFT,
+            2,
+            DPCI_ADD_MAPPING);
+
+    if (ret != 0) {
+        XEN_PT_ERR(&s->dev, "Error: Can't map opregion\n");
+        igd_guest_opregion = 0;
+        return;
+    }
+
+    XEN_PT_LOG(&s->dev, "Map OpRegion: %x -> %x\n", host_opregion,
+            igd_guest_opregion);
+}
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:10 +0800
Message-Id: <1392965053-1069-3-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 2/5] xen,
	gfx passthrough: reserve 00:02.0 for INTEL IGD

From: Yang Zhang <yang.z.zhang@Intel.com>

Some VBIOSes and drivers assume the IGD BDF (bus:device:function) is
always 00:02.0, so this patch reserves 00:02.0 for the assigned IGD in
the guest.

The original patch is from Weidong Han <weidong.han@intel.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Weidong Han <weidong.han@intel.com>
---
 hw/pci/pci.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 4e0701d..e81816e 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -808,6 +808,12 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     if (devfn < 0) {
         for(devfn = bus->devfn_min ; devfn < ARRAY_SIZE(bus->devices);
             devfn += PCI_FUNC_MAX) {
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+            /* If gfx_passthru is in use, reserve 00:02.* for the IGD */
+            if (gfx_passthru && devfn == 0x10) {
+                continue;
+            }
+#endif
             if (!bus->devices[devfn])
                 goto found;
         }
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:08 +0800
Message-Id: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 0/5] xen: add Intel IGD passthrough support

From: Yang Zhang <yang.z.zhang@Intel.com>

The following patches, ported from Xen's qemu-traditional branch, add
Intel IGD passthrough support to upstream QEMU.

To pass the IGD through to a guest, add the following lines to the Xen
config file:
gfx_passthru=1
pci=['00:02.0@2']

In addition, since Xen with upstream QEMU requires SeaBIOS, SeaBIOS must
also be rebuilt with CONFIG_OPTIONROMS_DEPLOYED=y for IGD passthrough to
succeed:
1. set CONFIG_OPTIONROMS_DEPLOYED=y in xen/tools/firmware/seabios-config
2. recompile the tools

I have successfully booted Windows 7 and RHEL 6u4 guests with the IGD
assigned, on a Haswell desktop with the latest Xen and upstream QEMU.

Yang Zhang (5):
  xen, gfx passthrough: basic graphics passthrough support
  xen, gfx passthrough: reserve 00:02.0 for INTEL IGD
  xen, gfx passthrough: create intel isa bridge
  xen, gfx passthrough: support Intel IGD passthrough with VT-D
  xen, gfx passthrough: add opregion mapping

 configure                    |    2 +-
 hw/pci-host/piix.c           |   15 ++
 hw/pci/pci.c                 |   19 ++
 hw/xen/Makefile.objs         |    2 +-
 hw/xen/xen-host-pci-device.c |    5 +
 hw/xen/xen-host-pci-device.h |    1 +
 hw/xen/xen_pt.c              |   10 +
 hw/xen/xen_pt.h              |   13 ++-
 hw/xen/xen_pt_config_init.c  |   45 +++++-
 hw/xen/xen_pt_graphics.c     |  407 ++++++++++++++++++++++++++++++++++++++++++
 qemu-options.hx              |    9 +
 vl.c                         |    8 +
 12 files changed, 532 insertions(+), 4 deletions(-)
 create mode 100644 hw/xen/xen_pt_graphics.c



From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:12 +0800
Message-Id: <1392965053-1069-5-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 4/5] xen,
	gfx passthrough: support Intel IGD passthrough with VT-D

From: Yang Zhang <yang.z.zhang@Intel.com>

Some Intel IGD registers are mapped in the host bridge, so accesses to
those registers need to be passed through to the physical host bridge,
because the emulated host bridge in the guest doesn't have these
mappings.

The original patch is from Weidong Han <weidong.han@intel.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Weidong Han <weidong.han@intel.com>
---
 hw/pci-host/piix.c       |   15 ++++++
 hw/pci/pci.c             |   13 +++++
 hw/xen/xen_pt.h          |    5 ++
 hw/xen/xen_pt_graphics.c |  127 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 160 insertions(+), 0 deletions(-)

diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index ffdc853..68cf756 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -34,6 +34,9 @@
 #include "sysemu/sysemu.h"
 #include "hw/i386/ioapic.h"
 #include "qapi/visitor.h"
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+#include "hw/xen/xen_pt.h"
+#endif
 
 /*
  * I440FX chipset data sheet.
@@ -389,6 +392,18 @@ PCIBus *i440fx_init(PCII440FXState **pi440fx_state,
 
     i440fx_update_memory_mappings(f);
 
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+    /*
+     * Some Intel IGD registers are mapped in the host bridge, so accesses
+     * to them need to be passed through to the physical host bridge, because
+     * the emulated host bridge in the guest doesn't have these mappings.
+     */
+    if (intel_pch_init(b) == 0) {
+        d->config_read = igd_pci_read;
+        d->config_write = igd_pci_write;
+    }
+#endif
+
     return b;
 }
 
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index e81816e..7016b71 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -36,6 +36,9 @@
 #include "hw/pci/msix.h"
 #include "exec/address-spaces.h"
 #include "hw/hotplug.h"
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+#include "hw/xen/xen_pt.h"
+#endif
 
 //#define DEBUG_PCI
 #ifdef DEBUG_PCI
@@ -805,6 +808,16 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     PCIConfigWriteFunc *config_write = pc->config_write;
     AddressSpace *dma_as;
 
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+    /*
+     * Some video BIOSes and gfx drivers assume the BDF of the IGD is
+     * 00:02.0, so the user needs to set it to 00:02.0 explicitly in the Xen
+     * config file; otherwise the IGD will fail to work.
+     */
+    if (gfx_passthru && devfn == 0x10)
+        igd_passthru = 1;
+    else
+#endif
     if (devfn < 0) {
         for(devfn = bus->devfn_min ; devfn < ARRAY_SIZE(bus->devices);
             devfn += PCI_FUNC_MAX) {
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index c04bbfd..92e4d51 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -299,8 +299,13 @@ static inline bool xen_pt_has_msix_mapping(XenPCIPassthroughState *s, int bar)
 }
 
 extern int gfx_passthru;
+extern int igd_passthru;
 int register_vga_regions(XenHostPCIDevice *dev);
 int unregister_vga_regions(XenHostPCIDevice *dev);
 int setup_vga_pt(XenHostPCIDevice *dev);
+int intel_pch_init(PCIBus *bus);
+void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
+                   uint32_t val, int len);
+uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len);
 
 #endif /* !XEN_PT_H */
diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
index 54f16cf..2a01406 100644
--- a/hw/xen/xen_pt_graphics.c
+++ b/hw/xen/xen_pt_graphics.c
@@ -5,6 +5,8 @@
 #include "xen-host-pci-device.h"
 #include "hw/xen/xen_backend.h"
 
+int igd_passthru;
+
 /*
  * register VGA resources for the domain with assigned gfx
  */
@@ -233,3 +235,128 @@ static int create_pch_isa_bridge(PCIBus *bus, XenHostPCIDevice *hdev)
     XEN_PT_LOG(dev, "Intel PCH ISA bridge is created.\n");
     return 0;
 }
+
+int intel_pch_init(PCIBus *bus)
+{
+    XenHostPCIDevice hdev;
+    int r = 0;
+
+    if (!gfx_passthru) {
+        return -1;
+    }
+
+    r = xen_host_pci_device_get(&hdev, 0, 0, 0x1f, 0);
+    if (r) {
+        XEN_PT_ERR(NULL, "Fail to find intel PCH in host\n");
+        goto err;
+    }
+
+    if (hdev.vendor_id == PCI_VENDOR_ID_INTEL) {
+        r = create_pch_isa_bridge(bus, &hdev);
+        if (r) {
+            XEN_PT_ERR(NULL, "Fail to create PCH ISA bridge.\n");
+            goto err;
+        }
+    }
+
+    xen_host_pci_device_put(&hdev);
+
+    return  r;
+
+err:
+    return r;
+}
+
+void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
+                   uint32_t val, int len)
+{
+    XenHostPCIDevice dev;
+    int r;
+
+    assert(pci_dev->devfn == 0x00);
+
+    if (!igd_passthru) {
+        goto write_default;
+    }
+
+    switch (config_addr) {
+    case 0x58:      /* PAVPC Offset */
+        break;
+    default:
+        goto write_default;
+    }
+
+    /* Host write */
+    r = xen_host_pci_device_get(&dev, 0, 0, 0, 0);
+    if (r) {
+        XEN_PT_ERR(pci_dev, "Can't get pci_dev_host_bridge\n");
+        abort();
+    }
+
+    r = xen_host_pci_set_block(&dev, config_addr, (uint8_t *)&val, len);
+    if (r) {
+        XEN_PT_ERR(pci_dev, "Can't get pci_dev_host_bridge\n");
+        abort();
+    }
+
+    xen_host_pci_device_put(&dev);
+
+    return;
+
+write_default:
+    pci_default_write_config(pci_dev, config_addr, val, len);
+}
+
+uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len)
+{
+    XenHostPCIDevice dev;
+    uint32_t val;
+    int r;
+
+    assert(pci_dev->devfn == 0x00);
+
+    if (!igd_passthru) {
+        goto read_default;
+    }
+
+    switch (config_addr) {
+    case 0x00:        /* vendor id */
+    case 0x02:        /* device id */
+    case 0x08:        /* revision id */
+    case 0x2c:        /* subsystem vendor id */
+    case 0x2e:        /* subsystem id */
+    case 0x50:        /* SNB: processor graphics control register */
+    case 0x52:        /* processor graphics control register */
+    case 0xa0:        /* top of memory */
+    case 0xb0:        /* ILK: BSM: should read from dev 2 offset 0x5c */
+    case 0x58:        /* SNB: PAVPC Offset */
+    case 0xa4:        /* SNB: graphics base of stolen memory */
+    case 0xa8:        /* SNB: base of GTT stolen memory */
+        break;
+    default:
+        goto read_default;
+    }
+
+    /* Host read */
+    r = xen_host_pci_device_get(&dev, 0, 0, 0, 0);
+    if (r) {
+        goto err_out;
+    }
+
+    r = xen_host_pci_get_block(&dev, config_addr, (uint8_t *)&val, len);
+    if (r) {
+        goto err_out;
+    }
+
+    xen_host_pci_device_put(&dev);
+
+    return val;
+
+read_default:
+    return pci_default_read_config(pci_dev, config_addr, len);
+
+err_out:
+    XEN_PT_ERR(pci_dev, "Can't get pci_dev_host_bridge\n");
+    return -1;
+}
+
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:11 +0800
Message-Id: <1392965053-1069-4-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 3/5] xen,
	gfx passthrough: create intel isa bridge

From: Yang Zhang <yang.z.zhang@Intel.com>

An ISA bridge is needed because the Intel gfx driver probes the device
at Dev31:Fun0 (the PCH ISA bridge) to identify the real hardware
underneath. Exposing an ISA bridge with the host's IDs lets the driver
recognize the platform, which keeps graphics passthrough simple for the
VMM.

The original patch is from Allen Kay <allen.m.kay@intel.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Allen Kay <allen.m.kay@intel.com>
---
 hw/xen/xen_pt_graphics.c |   71 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 71 insertions(+), 0 deletions(-)

diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
index 9ad8a74..54f16cf 100644
--- a/hw/xen/xen_pt_graphics.c
+++ b/hw/xen/xen_pt_graphics.c
@@ -162,3 +162,74 @@ out:
     free(bios);
     return rc;
 }
+
+static uint32_t isa_bridge_read_config(PCIDevice *d, uint32_t addr, int len)
+{
+    uint32_t v;
+
+    v = pci_default_read_config(d, addr, len);
+
+    return v;
+}
+
+static void isa_bridge_write_config(PCIDevice *d, uint32_t addr, uint32_t v,
+                                    int len)
+{
+    pci_default_write_config(d, addr, v, len);
+
+    return;
+}
+
+static void isa_bridge_class_init(ObjectClass *klass, void *data)
+{
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+
+    k->config_read = isa_bridge_read_config;
+    k->config_write = isa_bridge_write_config;
+
+    return;
+};
+
+typedef struct {
+    PCIDevice dev;
+} ISABridgeState;
+
+static TypeInfo isa_bridge_info = {
+    .name          = "inte-pch-isa-bridge",
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(ISABridgeState),
+    .class_init = isa_bridge_class_init,
+};
+
+static void xen_pt_graphics_register_types(void)
+{
+    type_register_static(&isa_bridge_info);
+}
+
+type_init(xen_pt_graphics_register_types)
+
+static int create_pch_isa_bridge(PCIBus *bus, XenHostPCIDevice *hdev)
+{
+    struct PCIDevice *dev;
+
+    char rid;
+
+    dev = pci_create(bus, PCI_DEVFN(0x1f, 0), "inte-pch-isa-bridge");
+    if (!dev) {
+        XEN_PT_LOG(dev, "fail to create PCH ISA bridge.\n");
+        return -1;
+    }
+
+    qdev_init_nofail(&dev->qdev);
+
+    pci_config_set_vendor_id(dev->config, hdev->vendor_id);
+    pci_config_set_device_id(dev->config, hdev->device_id);
+
+    xen_host_pci_get_block(hdev, PCI_REVISION_ID, (uint8_t *)&rid, 1);
+
+    pci_config_set_revision(dev->config, rid);
+    pci_config_set_class(dev->config, PCI_CLASS_BRIDGE_ISA);
+
+    XEN_PT_LOG(dev, "Intel PCH ISA bridge is created.\n");
+    return 0;
+}
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjv4-0000lL-PV; Fri, 21 Feb 2014 06:49:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjv3-0000l8-Fa
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:09 +0000
Received: from [85.158.137.68:63630] by server-16.bemta-3.messagelabs.com id
	89/87-29917-4E6F6035; Fri, 21 Feb 2014 06:49:08 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20123 invoked from network); 21 Feb 2014 06:49:07 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:07 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295679"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:04 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:09 +0800
Message-Id: <1392965053-1069-2-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 1/5] xen,
	gfx passthrough: basic graphics passthrough support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

Add basic gfx passthrough support:
- add a VGA type for gfx passthrough
- retrieve the VGA BIOS from host address 0xC0000 and load it to guest
  address 0xC0000
- register/unregister the legacy VGA I/O ports and MMIO ranges for the
  passed-through gfx device

The original patch is from Weidong Han <weidong.han@intel.com>.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Weidong Han <weidong.han@intel.com>
---
 configure                    |    2 +-
 hw/xen/Makefile.objs         |    2 +-
 hw/xen/xen-host-pci-device.c |    5 ++
 hw/xen/xen-host-pci-device.h |    1 +
 hw/xen/xen_pt.c              |   10 +++
 hw/xen/xen_pt.h              |    4 +
 hw/xen/xen_pt_graphics.c     |  164 ++++++++++++++++++++++++++++++++++++++++++
 qemu-options.hx              |    9 +++
 vl.c                         |    8 ++
 9 files changed, 203 insertions(+), 2 deletions(-)
 create mode 100644 hw/xen/xen_pt_graphics.c

diff --git a/configure b/configure
index 4648117..19525ab 100755
--- a/configure
+++ b/configure
@@ -4608,7 +4608,7 @@ case "$target_name" in
     if test "$xen" = "yes" -a "$target_softmmu" = "yes" ; then
       echo "CONFIG_XEN=y" >> $config_target_mak
       if test "$xen_pci_passthrough" = yes; then
-        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
+        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_host_mak"
       fi
     fi
     ;;
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index ce640c6..350d337 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -3,4 +3,4 @@ common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
 
 obj-$(CONFIG_XEN_I386) += xen_platform.o xen_apic.o xen_pvdevice.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
-obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_msi.o
+obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_msi.o xen_pt_graphics.o
diff --git a/hw/xen/xen-host-pci-device.c b/hw/xen/xen-host-pci-device.c
index 743b37b..a54b7de 100644
--- a/hw/xen/xen-host-pci-device.c
+++ b/hw/xen/xen-host-pci-device.c
@@ -376,6 +376,11 @@ int xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
         goto error;
     }
     d->irq = v;
+    rc = xen_host_pci_get_hex_value(d, "class", &v);
+    if (rc) {
+        goto error;
+    }
+    d->class_code = v;
     d->is_virtfn = xen_host_pci_dev_is_virtfn(d);
 
     return 0;
diff --git a/hw/xen/xen-host-pci-device.h b/hw/xen/xen-host-pci-device.h
index c2486f0..f1e1c30 100644
--- a/hw/xen/xen-host-pci-device.h
+++ b/hw/xen/xen-host-pci-device.h
@@ -25,6 +25,7 @@ typedef struct XenHostPCIDevice {
 
     uint16_t vendor_id;
     uint16_t device_id;
+    uint32_t class_code;
     int irq;
 
     XenHostPCIIORegion io_regions[PCI_NUM_REGIONS - 1];
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index be4220b..5a36902 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -450,6 +450,7 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
                    d->rom.size, d->rom.base_addr);
     }
 
+    register_vga_regions(d);
     return 0;
 }
 
@@ -470,6 +471,8 @@ static void xen_pt_unregister_regions(XenPCIPassthroughState *s)
     if (d->rom.base_addr && d->rom.size) {
         memory_region_destroy(&s->rom);
     }
+
+    unregister_vga_regions(d);
 }
 
 /* region mapping */
@@ -693,6 +696,13 @@ static int xen_pt_initfn(PCIDevice *d)
     /* Handle real device's MMIO/PIO BARs */
     xen_pt_register_regions(s);
 
+    /* Set up the VGA BIOS for the passed-through gfx device */
+    if (setup_vga_pt(&s->real_device) < 0) {
+        XEN_PT_ERR(d, "Setup of VGA BIOS for passed-through gfx failed!\n");
+        xen_host_pci_device_put(&s->real_device);
+        return -1;
+    }
+
     /* reinitialize each config register to be emulated */
     if (xen_pt_config_init(s)) {
         XEN_PT_ERR(d, "PCI Config space initialisation failed.\n");
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index 942dc60..c04bbfd 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -298,5 +298,9 @@ static inline bool xen_pt_has_msix_mapping(XenPCIPassthroughState *s, int bar)
     return s->msix && s->msix->bar_index == bar;
 }
 
+extern int gfx_passthru;
+int register_vga_regions(XenHostPCIDevice *dev);
+int unregister_vga_regions(XenHostPCIDevice *dev);
+int setup_vga_pt(XenHostPCIDevice *dev);
 
 #endif /* !XEN_PT_H */
diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
new file mode 100644
index 0000000..9ad8a74
--- /dev/null
+++ b/hw/xen/xen_pt_graphics.c
@@ -0,0 +1,164 @@
+/*
+ * graphics passthrough
+ */
+#include "xen_pt.h"
+#include "xen-host-pci-device.h"
+#include "hw/xen/xen_backend.h"
+
+/*
+ * register VGA resources for the domain with assigned gfx
+ */
+int register_vga_regions(XenHostPCIDevice *dev)
+{
+    int ret = 0;
+
+    if (!gfx_passthru || ((dev->class_code >> 0x8) != 0x0300)) {
+        return ret;
+    }
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3B0,
+            0x3B0, 0xC, DPCI_ADD_MAPPING);
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3C0,
+            0x3C0, 0x20, DPCI_ADD_MAPPING);
+
+    ret |= xc_domain_memory_mapping(xen_xc, xen_domid,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0x20,
+            DPCI_ADD_MAPPING);
+
+    if (ret != 0) {
+        XEN_PT_ERR(NULL, "VGA region mapping failed\n");
+    }
+
+    return ret;
+}
+
+/*
+ * unregister VGA resources for the domain with assigned gfx
+ */
+int unregister_vga_regions(XenHostPCIDevice *dev)
+{
+    int ret = 0;
+
+    if (!gfx_passthru || ((dev->class_code >> 0x8) != 0x0300)) {
+        return ret;
+    }
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3B0,
+            0x3B0, 0xC, DPCI_REMOVE_MAPPING);
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3C0,
+            0x3C0, 0x20, DPCI_REMOVE_MAPPING);
+
+    ret |= xc_domain_memory_mapping(xen_xc, xen_domid,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0x20,
+            DPCI_REMOVE_MAPPING);
+
+    if (ret != 0) {
+        XEN_PT_ERR(NULL, "VGA region unmapping failed\n");
+    }
+
+    return ret;
+}
+
+static int get_vgabios(unsigned char *buf)
+{
+    int fd;
+    uint32_t bios_size = 0;
+    uint32_t start = 0xC0000;
+    uint16_t magic = 0;
+
+    fd = open("/dev/mem", O_RDONLY);
+    if (fd < 0) {
+        XEN_PT_ERR(NULL, "Can't open /dev/mem: %s\n", strerror(errno));
+        return 0;
+    }
+
+    /*
+     * Check whether this is a real BIOS extension.
+     * The magic number is 0xAA55.
+     */
+    if (start != lseek(fd, start, SEEK_SET)) {
+        goto out;
+    }
+    if (read(fd, &magic, 2) != 2) {
+        goto out;
+    }
+    if (magic != 0xAA55) {
+        goto out;
+    }
+
+    /* Find the size of the rom extension */
+    if (start != lseek(fd, start, SEEK_SET)) {
+        goto out;
+    }
+    if (lseek(fd, 2, SEEK_CUR) != (start + 2)) {
+        goto out;
+    }
+    if (read(fd, &bios_size, 1) != 1) {
+        goto out;
+    }
+
+    /* This size is in 512 bytes */
+    bios_size *= 512;
+
+    /*
+     * Seek back to the beginning of the ROM BIOS
+     * to start the copy.
+     */
+    if (start != lseek(fd, start, SEEK_SET)) {
+        goto out;
+    }
+
+    if (bios_size != read(fd, buf, bios_size)) {
+        bios_size = 0;
+    }
+
+out:
+    close(fd);
+    return bios_size;
+}
+
+int setup_vga_pt(XenHostPCIDevice *dev)
+{
+    unsigned char *bios = NULL;
+    int bios_size = 0;
+    char *c = NULL;
+    char checksum = 0;
+    int rc = 0;
+
+    if (!gfx_passthru || ((dev->class_code >> 0x8) != 0x0300)) {
+        return rc;
+    }
+
+    /* Allocate 64KB for the VGA BIOS */
+    bios = malloc(64 * 1024);
+    if (!bios) {
+        return -1;
+    }
+
+    bios_size = get_vgabios(bios);
+    if (bios_size == 0 || bios_size > 64 * 1024) {
+        XEN_PT_ERR(NULL, "vga bios size (0x%x) is invalid!\n", bios_size);
+        rc = -1;
+        goto out;
+    }
+
+    /* Adjust the bios checksum */
+    for (c = (char *)bios; c < ((char *)bios + bios_size); c++) {
+        checksum += *c;
+    }
+    if (checksum) {
+        bios[bios_size - 1] -= checksum;
+        XEN_PT_LOG(NULL, "vga bios checksum is adjusted!\n");
+    }
+
+    cpu_physical_memory_rw(0xc0000, bios, bios_size, 1);
+out:
+    free(bios);
+    return rc;
+}
diff --git a/qemu-options.hx b/qemu-options.hx
index 56e5fdf..95de002 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1034,6 +1034,15 @@ STEXI
 Rotate graphical output some deg left (only PXA LCD).
 ETEXI
 
+DEF("gfx_passthru", 0, QEMU_OPTION_gfx_passthru,
+    "-gfx_passthru   enable Intel IGD passthrough via Xen\n",
+    QEMU_ARCH_ALL)
+STEXI
+@item -gfx_passthru
+@findex -gfx_passthru
+Enable Intel IGD passthrough via Xen.
+ETEXI
+
 DEF("vga", HAS_ARG, QEMU_OPTION_vga,
     "-vga [std|cirrus|vmware|qxl|xenfb|none]\n"
     "                select video card type\n", QEMU_ARCH_ALL)
diff --git a/vl.c b/vl.c
index 316de54..8a91054 100644
--- a/vl.c
+++ b/vl.c
@@ -215,6 +215,9 @@ static bool tcg_allowed = true;
 bool xen_allowed;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+int gfx_passthru = 0;
+#endif
 static int tcg_tb_size;
 
 static int default_serial = 1;
@@ -3775,6 +3778,11 @@ int main(int argc, char **argv, char **envp)
                 }
                 configure_msg(opts);
                 break;
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+            case QEMU_OPTION_gfx_passthru:
+                gfx_passthru = 1;
+                break;
+#endif
             default:
                 os_parse_cmd_args(popt->index, optarg);
             }
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
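The get_vgabios() function in the patch above reads a PCI expansion ROM header from /dev/mem: a 0xAA55 magic at offset 0 (bytes 0x55, 0xAA on a little-endian x86 host) and a size byte at offset 2 counted in 512-byte units; setup_vga_pt() then adjusts the last byte so the whole image sums to zero modulo 256. A minimal standalone sketch of that parsing and checksum fix on an in-memory buffer (helper names here are illustrative, not from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Return the ROM size in bytes, or 0 if the 0xAA55 magic is absent.
 * Offset 2 of a PCI expansion ROM header holds the size in 512-byte units. */
static size_t vga_rom_size(const uint8_t *rom, size_t len)
{
    if (len < 3 || rom[0] != 0x55 || rom[1] != 0xAA) {
        return 0;
    }
    return (size_t)rom[2] * 512;
}

/* Adjust the last byte so all bytes sum to zero modulo 256,
 * mirroring the checksum fix in setup_vga_pt(). */
static void vga_rom_fix_checksum(uint8_t *rom, size_t size)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < size; i++) {
        sum += rom[i];
    }
    rom[size - 1] -= sum;
}
```

Indexing the magic bytes directly, as above, also sidesteps the endianness assumption that reading two bytes into a uint16_t and comparing against 0xAA55 makes.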

From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjvD-0000nP-KC; Fri, 21 Feb 2014 06:49:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjvB-0000mp-KM
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:18 +0000
Received: from [85.158.137.68:64022] by server-14.bemta-3.messagelabs.com id
	39/B7-08196-CE6F6035; Fri, 21 Feb 2014 06:49:16 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!6
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20459 invoked from network); 21 Feb 2014 06:49:15 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:15 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:54 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295723"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:12 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:13 +0800
Message-Id: <1392965053-1069-6-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 5/5] xen, gfx passthrough: add opregion mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

The OpRegion shouldn't be mapped 1:1 because the address in the host
can't be used in the guest directly.

This patch traps read and write access to the opregion of the Intel
GPU config space (offset 0xfc).

The original patch is from Jean Guyader <jean.guyader@eu.citrix.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Jean Guyader <jean.guyader@eu.citrix.com>
---
 hw/xen/xen_pt.h             |    4 ++-
 hw/xen/xen_pt_config_init.c |   45 ++++++++++++++++++++++++++++++++++++++++++-
 hw/xen/xen_pt_graphics.c    |   45 +++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 92 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index 92e4d51..9f7fd4e 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -63,7 +63,7 @@ typedef int (*xen_pt_conf_byte_read)
 #define XEN_PT_BAR_UNMAPPED (-1)
 
 #define PCI_CAP_MAX 48
-
+#define PCI_INTEL_OPREGION 0xfc
 
 typedef enum {
     XEN_PT_GRP_TYPE_HARDWIRED = 0,  /* 0 Hardwired reg group */
@@ -307,5 +307,7 @@ int intel_pch_init(PCIBus *bus);
 void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
                    uint32_t val, int len);
 uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len);
+uint32_t igd_read_opregion(XenPCIPassthroughState *s);
+void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val);
 
 #endif /* !XEN_PT_H */
diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index 8ccc2e4..30135c1 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -575,6 +575,22 @@ static int xen_pt_exp_rom_bar_reg_write(XenPCIPassthroughState *s,
     return 0;
 }
 
+static int xen_pt_intel_opregion_read(XenPCIPassthroughState *s,
+                                      XenPTReg *cfg_entry,
+                                      uint32_t *value, uint32_t valid_mask)
+{
+    *value = igd_read_opregion(s);
+    return 0;
+}
+
+static int xen_pt_intel_opregion_write(XenPCIPassthroughState *s,
+                                       XenPTReg *cfg_entry, uint32_t *value,
+                                       uint32_t dev_value, uint32_t valid_mask)
+{
+    igd_write_opregion(s, *value);
+    return 0;
+}
+
 /* Header Type0 reg static information table */
 static XenPTRegInfo xen_pt_emu_reg_header0[] = {
     /* Vendor ID reg */
@@ -1438,6 +1454,20 @@ static XenPTRegInfo xen_pt_emu_reg_msix[] = {
     },
 };
 
+static XenPTRegInfo xen_pt_emu_reg_igd_opregion[] = {
+    /* Intel IGFX OpRegion reg */
+    {
+        .offset     = 0x0,
+        .size       = 4,
+        .init_val   = 0,
+        .no_wb      = 1,
+        .u.dw.read   = xen_pt_intel_opregion_read,
+        .u.dw.write  = xen_pt_intel_opregion_write,
+    },
+    {
+        .size = 0,
+    },
+};
 
 /****************************
  * Capabilities
@@ -1675,6 +1705,14 @@ static const XenPTRegGroupInfo xen_pt_emu_reg_grps[] = {
         .size_init   = xen_pt_msix_size_init,
         .emu_regs = xen_pt_emu_reg_msix,
     },
+    /* Intel IGD Opregion group */
+    {
+        .grp_id      = PCI_INTEL_OPREGION,
+        .grp_type    = XEN_PT_GRP_TYPE_EMU,
+        .grp_size    = 0x4,
+        .size_init   = xen_pt_reg_grp_size_init,
+        .emu_regs    = xen_pt_emu_reg_igd_opregion,
+    },
     {
         .grp_size = 0,
     },
@@ -1804,7 +1842,8 @@ int xen_pt_config_init(XenPCIPassthroughState *s)
         uint32_t reg_grp_offset = 0;
         XenPTRegGroup *reg_grp_entry = NULL;
 
-        if (xen_pt_emu_reg_grps[i].grp_id != 0xFF) {
+        if (xen_pt_emu_reg_grps[i].grp_id != 0xFF
+            && xen_pt_emu_reg_grps[i].grp_id != PCI_INTEL_OPREGION) {
             if (xen_pt_hide_dev_cap(&s->real_device,
                                     xen_pt_emu_reg_grps[i].grp_id)) {
                 continue;
@@ -1817,6 +1856,10 @@ int xen_pt_config_init(XenPCIPassthroughState *s)
             }
         }
 
+        if (xen_pt_emu_reg_grps[i].grp_id == PCI_INTEL_OPREGION) {
+            reg_grp_offset = PCI_INTEL_OPREGION;
+        }
+
         reg_grp_entry = g_new0(XenPTRegGroup, 1);
         QLIST_INIT(&reg_grp_entry->reg_tbl_list);
         QLIST_INSERT_HEAD(&s->reg_grps, reg_grp_entry, entries);
diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
index 2a01406..bebfcfd 100644
--- a/hw/xen/xen_pt_graphics.c
+++ b/hw/xen/xen_pt_graphics.c
@@ -6,6 +6,7 @@
 #include "hw/xen/xen_backend.h"
 
 int igd_passthru;
+static int igd_guest_opregion;
 
 /*
  * register VGA resources for the domain with assigned gfx
@@ -360,3 +361,47 @@ err_out:
     return -1;
 }
 
+uint32_t igd_read_opregion(XenPCIPassthroughState *s)
+{
+    uint32_t val = -1;
+
+    if (igd_guest_opregion == 0) {
+        return val;
+    }
+
+    val = igd_guest_opregion;
+
+    XEN_PT_LOG(&s->dev, "Read opregion val=%x\n", val);
+    return val;
+}
+
+void igd_write_opregion(XenPCIPassthroughState *s, uint32_t val)
+{
+    uint32_t host_opregion = 0;
+    int ret;
+
+    if (igd_guest_opregion) {
+        XEN_PT_LOG(&s->dev, "opregion register has already been set, "
+                   "ignoring %x\n", val);
+        return;
+    }
+
+    xen_host_pci_get_block(&s->real_device, PCI_INTEL_OPREGION,
+            (uint8_t *)&host_opregion, 4);
+    igd_guest_opregion = (val & ~0xfff) | (host_opregion & 0xfff);
+
+    ret = xc_domain_memory_mapping(xen_xc, xen_domid,
+            igd_guest_opregion >> XC_PAGE_SHIFT,
+            host_opregion >> XC_PAGE_SHIFT,
+            2,
+            DPCI_ADD_MAPPING);
+
+    if (ret != 0) {
+        XEN_PT_ERR(&s->dev, "Error: Can't map opregion\n");
+        igd_guest_opregion = 0;
+        return;
+    }
+
+    XEN_PT_LOG(&s->dev, "Map OpRegion: %x -> %x\n", host_opregion,
+            igd_guest_opregion);
+}
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
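igd_write_opregion() in the patch above does not map the guest-written OpRegion address verbatim: it keeps the page-aligned part of the guest value and splices in the low 12 bits (the in-page offset) of the host's OpRegion address, then maps two host pages at that guest page. The address merge can be sketched as follows (igd_merge_opregion is an illustrative name, not from the patch):

```c
#include <assert.h>
#include <stdint.h>

#define XC_PAGE_SHIFT 12

/* Number of 4 KiB pages the patch maps for the OpRegion. */
#define IGD_OPREGION_PAGES 2

/* Combine the guest-chosen page with the host's in-page offset,
 * as igd_write_opregion() does before xc_domain_memory_mapping(). */
static uint32_t igd_merge_opregion(uint32_t guest_val, uint32_t host_addr)
{
    return (guest_val & ~0xfffu) | (host_addr & 0xfffu);
}

/* The page frame number actually passed to the hypercall. */
static uint32_t igd_guest_pfn(uint32_t guest_opregion)
{
    return guest_opregion >> XC_PAGE_SHIFT;
}
```

Only whole pages can be remapped by the hypercall, so the in-page offset must follow the host layout while the guest chooses which of its pages the region appears in.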

From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjv3-0000l9-E9; Fri, 21 Feb 2014 06:49:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjv2-0000l3-4u
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:08 +0000
Received: from [85.158.137.68:11517] by server-13.bemta-3.messagelabs.com id
	2D/E5-26923-3E6F6035; Fri, 21 Feb 2014 06:49:07 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20074 invoked from network); 21 Feb 2014 06:49:06 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:06 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:44 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295664"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:02 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:08 +0800
Message-Id: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 0/5] xen: add Intel IGD passthrough support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

The following patches are ported from the Xen qemu-traditional branch; they
add Intel IGD passthrough support to upstream QEMU.

To pass an IGD through to a guest, users need to add the following lines to
the Xen guest config file:
gfx_passthru=1
pci=['00:02.0@2']

Besides, since Xen with upstream QEMU requires SeaBIOS, users also need to
rebuild SeaBIOS with CONFIG_OPTIONROMS_DEPLOYED=y for IGD passthrough to
work:
1. set CONFIG_OPTIONROMS_DEPLOYED=y in the file xen/tools/firmware/seabios-config
2. recompile the tools

I have successfully booted Win 7 and RHEL6u4 guests with an IGD assigned on
a Haswell desktop with the latest Xen and upstream QEMU.

Yang Zhang (5):
  xen, gfx passthrough: basic graphics passthrough support
  xen, gfx passthrough: reserve 00:02.0 for INTEL IGD
  xen, gfx passthrough: create intel isa bridge
  xen, gfx passthrough: support Intel IGD passthrough with VT-D
  xen, gfx passthrough: add opregion mapping

 configure                    |    2 +-
 hw/pci-host/piix.c           |   15 ++
 hw/pci/pci.c                 |   19 ++
 hw/xen/Makefile.objs         |    2 +-
 hw/xen/xen-host-pci-device.c |    5 +
 hw/xen/xen-host-pci-device.h |    1 +
 hw/xen/xen_pt.c              |   10 +
 hw/xen/xen_pt.h              |   13 ++-
 hw/xen/xen_pt_config_init.c  |   45 +++++-
 hw/xen/xen_pt_graphics.c     |  407 ++++++++++++++++++++++++++++++++++++++++++
 qemu-options.hx              |    9 +
 vl.c                         |    8 +
 12 files changed, 532 insertions(+), 4 deletions(-)
 create mode 100644 hw/xen/xen_pt_graphics.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjv8-0000mI-PO; Fri, 21 Feb 2014 06:49:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjv7-0000lp-5r
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:13 +0000
Received: from [85.158.137.68:11737] by server-7.bemta-3.messagelabs.com id
	81/3F-13775-8E6F6035; Fri, 21 Feb 2014 06:49:12 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!4
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20266 invoked from network); 21 Feb 2014 06:49:11 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:11 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:50 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295698"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:08 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:11 +0800
Message-Id: <1392965053-1069-4-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 3/5] xen,
	gfx passthrough: create intel isa bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

An ISA bridge is needed because the Intel gfx driver probes the ISA bridge
at Dev31:Func0 to identify the real hardware. To make graphics device
passthrough easy for the VMM, only this ISA bridge needs to be exposed so
the driver knows the real hardware underneath.

The original patch is from Allen Kay <allen.m.kay@intel.com>.

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Allen Kay <allen.m.kay@intel.com>
---
 hw/xen/xen_pt_graphics.c |   71 ++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 71 insertions(+), 0 deletions(-)

diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
index 9ad8a74..54f16cf 100644
--- a/hw/xen/xen_pt_graphics.c
+++ b/hw/xen/xen_pt_graphics.c
@@ -162,3 +162,74 @@ out:
     free(bios);
     return rc;
 }
+
+static uint32_t isa_bridge_read_config(PCIDevice *d, uint32_t addr, int len)
+{
+    uint32_t v;
+
+    v = pci_default_read_config(d, addr, len);
+
+    return v;
+}
+
+static void isa_bridge_write_config(PCIDevice *d, uint32_t addr, uint32_t v,
+                                    int len)
+{
+    pci_default_write_config(d, addr, v, len);
+
+    return;
+}
+
+static void isa_bridge_class_init(ObjectClass *klass, void *data)
+{
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+
+    k->config_read = isa_bridge_read_config;
+    k->config_write = isa_bridge_write_config;
+
+    return;
+}
+
+typedef struct {
+    PCIDevice dev;
+} ISABridgeState;
+
+static TypeInfo isa_bridge_info = {
+    .name          = "intel-pch-isa-bridge",
+    .parent        = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(ISABridgeState),
+    .class_init = isa_bridge_class_init,
+};
+
+static void xen_pt_graphics_register_types(void)
+{
+    type_register_static(&isa_bridge_info);
+}
+
+type_init(xen_pt_graphics_register_types)
+
+static int create_pch_isa_bridge(PCIBus *bus, XenHostPCIDevice *hdev)
+{
+    struct PCIDevice *dev;
+
+    uint8_t rid;
+
+    dev = pci_create(bus, PCI_DEVFN(0x1f, 0), "intel-pch-isa-bridge");
+    if (!dev) {
+        XEN_PT_ERR(NULL, "Failed to create PCH ISA bridge.\n");
+        return -1;
+    }
+
+    qdev_init_nofail(&dev->qdev);
+
+    pci_config_set_vendor_id(dev->config, hdev->vendor_id);
+    pci_config_set_device_id(dev->config, hdev->device_id);
+
+    xen_host_pci_get_block(hdev, PCI_REVISION_ID, &rid, 1);
+
+    pci_config_set_revision(dev->config, rid);
+    pci_config_set_class(dev->config, PCI_CLASS_BRIDGE_ISA);
+
+    XEN_PT_LOG(dev, "Intel PCH ISA bridge is created.\n");
+    return 0;
+}
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjvB-0000mr-5t; Fri, 21 Feb 2014 06:49:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjv9-0000mZ-Sz
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:16 +0000
Received: from [85.158.137.68:23048] by server-1.bemta-3.messagelabs.com id
	A1/FC-17293-BE6F6035; Fri, 21 Feb 2014 06:49:15 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!5
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20342 invoked from network); 21 Feb 2014 06:49:13 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:13 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:52 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295706"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:10 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:12 +0800
Message-Id: <1392965053-1069-5-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 4/5] xen,
	gfx passthrough: support Intel IGD passthrough with VT-D
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

Some Intel IGD registers are mapped in the host bridge, so these registers
of the physical host bridge need to be passed through to the guest, because
the emulated host bridge in the guest does not have these mappings.

The original patch is from Weidong Han <weidong.han@intel.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Weidong Han <weidong.han@intel.com>
---
 hw/pci-host/piix.c       |   15 ++++++
 hw/pci/pci.c             |   13 +++++
 hw/xen/xen_pt.h          |    5 ++
 hw/xen/xen_pt_graphics.c |  127 ++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 160 insertions(+), 0 deletions(-)

diff --git a/hw/pci-host/piix.c b/hw/pci-host/piix.c
index ffdc853..68cf756 100644
--- a/hw/pci-host/piix.c
+++ b/hw/pci-host/piix.c
@@ -34,6 +34,9 @@
 #include "sysemu/sysemu.h"
 #include "hw/i386/ioapic.h"
 #include "qapi/visitor.h"
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+#include "hw/xen/xen_pt.h"
+#endif
 
 /*
  * I440FX chipset data sheet.
@@ -389,6 +392,18 @@ PCIBus *i440fx_init(PCII440FXState **pi440fx_state,
 
     i440fx_update_memory_mappings(f);
 
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+    /*
+     * Some registers of Intel IGD are mapped in host bridge, so it needs to
+     * passthrough these registers of physical host bridge to guest because
+     * emulated host bridge in guest doesn't have these mappings.
+     */
+    if (intel_pch_init(b) == 0) {
+        d->config_read = igd_pci_read;
+        d->config_write = igd_pci_write;
+    }
+#endif
+
     return b;
 }
 
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index e81816e..7016b71 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -36,6 +36,9 @@
 #include "hw/pci/msix.h"
 #include "exec/address-spaces.h"
 #include "hw/hotplug.h"
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+#include "hw/xen/xen_pt.h"
+#endif
 
 //#define DEBUG_PCI
 #ifdef DEBUG_PCI
@@ -805,6 +808,16 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     PCIConfigWriteFunc *config_write = pc->config_write;
     AddressSpace *dma_as;
 
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+    /*
+     * Some video BIOSes and gfx drivers assume the BDF of the IGD is 00:02.0,
+     * so the user needs to set it to 00:02.0 explicitly in the Xen
+     * configuration file; otherwise the IGD will fail to work.
+     */
+    if (gfx_passthru && devfn == 0x10)
+        igd_passthru = 1;
+    else
+#endif
     if (devfn < 0) {
         for(devfn = bus->devfn_min ; devfn < ARRAY_SIZE(bus->devices);
             devfn += PCI_FUNC_MAX) {
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index c04bbfd..92e4d51 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -299,8 +299,13 @@ static inline bool xen_pt_has_msix_mapping(XenPCIPassthroughState *s, int bar)
 }
 
 extern int gfx_passthru;
+extern int igd_passthru;
 int register_vga_regions(XenHostPCIDevice *dev);
 int unregister_vga_regions(XenHostPCIDevice *dev);
 int setup_vga_pt(XenHostPCIDevice *dev);
+int intel_pch_init(PCIBus *bus);
+void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
+                   uint32_t val, int len);
+uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len);
 
 #endif /* !XEN_PT_H */
diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
index 54f16cf..2a01406 100644
--- a/hw/xen/xen_pt_graphics.c
+++ b/hw/xen/xen_pt_graphics.c
@@ -5,6 +5,8 @@
 #include "xen-host-pci-device.h"
 #include "hw/xen/xen_backend.h"
 
+int igd_passthru;
+
 /*
  * register VGA resources for the domain with assigned gfx
  */
@@ -233,3 +235,128 @@ static int create_pch_isa_bridge(PCIBus *bus, XenHostPCIDevice *hdev)
     XEN_PT_LOG(dev, "Intel PCH ISA bridge is created.\n");
     return 0;
 }
+
+int intel_pch_init(PCIBus *bus)
+{
+    XenHostPCIDevice hdev;
+    int r = 0;
+
+    if (!gfx_passthru) {
+        return -1;
+    }
+
+    r = xen_host_pci_device_get(&hdev, 0, 0, 0x1f, 0);
+    if (r) {
+        XEN_PT_ERR(NULL, "Failed to find Intel PCH on the host\n");
+        goto err;
+    }
+
+    if (hdev.vendor_id == PCI_VENDOR_ID_INTEL) {
+        r = create_pch_isa_bridge(bus, &hdev);
+        if (r) {
+            XEN_PT_ERR(NULL, "Failed to create PCH ISA bridge.\n");
+            goto err;
+        }
+    }
+
+    xen_host_pci_device_put(&hdev);
+
+    return r;
+
+err:
+    return r;
+}
+
+void igd_pci_write(PCIDevice *pci_dev, uint32_t config_addr,
+                   uint32_t val, int len)
+{
+    XenHostPCIDevice dev;
+    int r;
+
+    assert(pci_dev->devfn == 0x00);
+
+    if (!igd_passthru) {
+        goto write_default;
+    }
+
+    switch (config_addr) {
+    case 0x58:      /* PAVPC Offset */
+        break;
+    default:
+        goto write_default;
+    }
+
+    /* Host write */
+    r = xen_host_pci_device_get(&dev, 0, 0, 0, 0);
+    if (r) {
+        XEN_PT_ERR(pci_dev, "Can't get pci_dev_host_bridge\n");
+        abort();
+    }
+
+    r = xen_host_pci_set_block(&dev, config_addr, (uint8_t *)&val, len);
+    if (r) {
+        XEN_PT_ERR(pci_dev, "Can't write to pci_dev_host_bridge\n");
+        abort();
+    }
+
+    xen_host_pci_device_put(&dev);
+
+    return;
+
+write_default:
+    pci_default_write_config(pci_dev, config_addr, val, len);
+}
+
+uint32_t igd_pci_read(PCIDevice *pci_dev, uint32_t config_addr, int len)
+{
+    XenHostPCIDevice dev;
+    uint32_t val;
+    int r;
+
+    assert(pci_dev->devfn == 0x00);
+
+    if (!igd_passthru) {
+        goto read_default;
+    }
+
+    switch (config_addr) {
+    case 0x00:        /* vendor id */
+    case 0x02:        /* device id */
+    case 0x08:        /* revision id */
+    case 0x2c:        /* subsystem vendor id */
+    case 0x2e:        /* subsystem id */
+    case 0x50:        /* SNB: processor graphics control register */
+    case 0x52:        /* processor graphics control register */
+    case 0xa0:        /* top of memory */
+    case 0xb0:        /* ILK: BSM: should read from dev 2 offset 0x5c */
+    case 0x58:        /* SNB: PAVPC Offset */
+    case 0xa4:        /* SNB: graphics base of stolen memory */
+    case 0xa8:        /* SNB: base of GTT stolen memory */
+        break;
+    default:
+        goto read_default;
+    }
+
+    /* Host read */
+    r = xen_host_pci_device_get(&dev, 0, 0, 0, 0);
+    if (r) {
+        goto err_out;
+    }
+
+    r = xen_host_pci_get_block(&dev, config_addr, (uint8_t *)&val, len);
+    if (r) {
+        goto err_out;
+    }
+
+    xen_host_pci_device_put(&dev);
+
+    return val;
+
+read_default:
+    return pci_default_read_config(pci_dev, config_addr, len);
+
+err_out:
+    XEN_PT_ERR(pci_dev, "Can't get pci_dev_host_bridge\n");
+    return -1;
+}
+
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjv6-0000lk-CW; Fri, 21 Feb 2014 06:49:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjv5-0000lK-10
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:11 +0000
Received: from [85.158.137.68:11641] by server-5.bemta-3.messagelabs.com id
	9A/03-04712-6E6F6035; Fri, 21 Feb 2014 06:49:10 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!3
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20192 invoked from network); 21 Feb 2014 06:49:09 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:09 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:48 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295688"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:06 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:10 +0800
Message-Id: <1392965053-1069-3-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 2/5] xen,
	gfx passthrough: reserve 00:02.0 for INTEL IGD
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

Some VBIOSs and drivers assume the IGD BDF (bus:device:function) is
always 00:02.0, so this patch reserves 00:02.0 for assigned IGD in
guest.

The original patch is from Weidong Han <weidong.han@intel.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Weidong Han <weidong.han@intel.com>
---
 hw/pci/pci.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 4e0701d..e81816e 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -808,6 +808,12 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev, PCIBus *bus,
     if (devfn < 0) {
         for(devfn = bus->devfn_min ; devfn < ARRAY_SIZE(bus->devices);
             devfn += PCI_FUNC_MAX) {
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+            /* If gfx_passthru is in use, reserve 00:02.* for the IGD */
+            if (gfx_passthru && devfn == 0x10) {
+                continue;
+            }
+#endif
             if (!bus->devices[devfn])
                 goto found;
         }
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Fri Feb 21 06:49:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 06:49:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGjv4-0000lL-PV; Fri, 21 Feb 2014 06:49:10 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WGjv3-0000l8-Fa
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 06:49:09 +0000
Received: from [85.158.137.68:63630] by server-16.bemta-3.messagelabs.com id
	89/87-29917-4E6F6035; Fri, 21 Feb 2014 06:49:08 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392965345!1724929!2
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20123 invoked from network); 21 Feb 2014 06:49:07 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-2.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 06:49:07 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 20 Feb 2014 22:44:46 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,517,1389772800"; d="scan'208";a="459295679"
Received: from yang-desktopd.sh.intel.com (HELO yang-desktop.sh.intel.com)
	([10.239.47.126])
	by orsmga001.jf.intel.com with ESMTP; 20 Feb 2014 22:49:04 -0800
From: Yang Zhang <yang.z.zhang@intel.com>
To: qemu-devel@nongnu.org
Date: Fri, 21 Feb 2014 14:44:09 +0800
Message-Id: <1392965053-1069-2-git-send-email-yang.z.zhang@intel.com>
X-Mailer: git-send-email 1.7.1.1
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Cc: peter.maydell@linaro.org, xen-devel@lists.xensource.com,
	stefano.stabellini@eu.citrix.com, allen.m.kay@intel.com,
	weidong.han@intel.com, jean.guyader@eu.citrix.com,
	Yang Zhang <yang.z.zhang@Intel.com>, anthony@codemonkey.ws,
	anthony.perard@citrix.com
Subject: [Xen-devel] [PATCH 1/5] xen,
	gfx passthrough: basic graphics passthrough support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Yang Zhang <yang.z.zhang@Intel.com>

Basic gfx passthrough support:
- add a VGA type for gfx passthrough
- retrieve the VGA BIOS from host 0xC0000 and load it to guest 0xC0000
- register/unregister legacy VGA I/O ports and MMIOs for the passed-through gfx

The original patch is from Weidong Han <weidong.han@intel.com>

Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
Cc: Weidong Han <weidong.han@intel.com>
---
 configure                    |    2 +-
 hw/xen/Makefile.objs         |    2 +-
 hw/xen/xen-host-pci-device.c |    5 ++
 hw/xen/xen-host-pci-device.h |    1 +
 hw/xen/xen_pt.c              |   10 +++
 hw/xen/xen_pt.h              |    4 +
 hw/xen/xen_pt_graphics.c     |  164 ++++++++++++++++++++++++++++++++++++++++++
 qemu-options.hx              |    9 +++
 vl.c                         |    8 ++
 9 files changed, 203 insertions(+), 2 deletions(-)
 create mode 100644 hw/xen/xen_pt_graphics.c

diff --git a/configure b/configure
index 4648117..19525ab 100755
--- a/configure
+++ b/configure
@@ -4608,7 +4608,7 @@ case "$target_name" in
     if test "$xen" = "yes" -a "$target_softmmu" = "yes" ; then
       echo "CONFIG_XEN=y" >> $config_target_mak
       if test "$xen_pci_passthrough" = yes; then
-        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_target_mak"
+        echo "CONFIG_XEN_PCI_PASSTHROUGH=y" >> "$config_host_mak"
       fi
     fi
     ;;
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index ce640c6..350d337 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -3,4 +3,4 @@ common-obj-$(CONFIG_XEN_BACKEND) += xen_backend.o xen_devconfig.o
 
 obj-$(CONFIG_XEN_I386) += xen_platform.o xen_apic.o xen_pvdevice.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
-obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_msi.o
+obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_msi.o xen_pt_graphics.o
diff --git a/hw/xen/xen-host-pci-device.c b/hw/xen/xen-host-pci-device.c
index 743b37b..a54b7de 100644
--- a/hw/xen/xen-host-pci-device.c
+++ b/hw/xen/xen-host-pci-device.c
@@ -376,6 +376,11 @@ int xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
         goto error;
     }
     d->irq = v;
+    rc = xen_host_pci_get_hex_value(d, "class", &v);
+    if (rc) {
+        goto error;
+    }
+    d->class_code = v;
     d->is_virtfn = xen_host_pci_dev_is_virtfn(d);
 
     return 0;
diff --git a/hw/xen/xen-host-pci-device.h b/hw/xen/xen-host-pci-device.h
index c2486f0..f1e1c30 100644
--- a/hw/xen/xen-host-pci-device.h
+++ b/hw/xen/xen-host-pci-device.h
@@ -25,6 +25,7 @@ typedef struct XenHostPCIDevice {
 
     uint16_t vendor_id;
     uint16_t device_id;
+    uint32_t class_code;
     int irq;
 
     XenHostPCIIORegion io_regions[PCI_NUM_REGIONS - 1];
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index be4220b..5a36902 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -450,6 +450,7 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s)
                    d->rom.size, d->rom.base_addr);
     }
 
+    register_vga_regions(d);
     return 0;
 }
 
@@ -470,6 +471,8 @@ static void xen_pt_unregister_regions(XenPCIPassthroughState *s)
     if (d->rom.base_addr && d->rom.size) {
         memory_region_destroy(&s->rom);
     }
+
+    unregister_vga_regions(d);
 }
 
 /* region mapping */
@@ -693,6 +696,13 @@ static int xen_pt_initfn(PCIDevice *d)
     /* Handle real device's MMIO/PIO BARs */
     xen_pt_register_regions(s);
 
+    /* Set up the VGA BIOS for the passed-through gfx */
+    if (setup_vga_pt(&s->real_device) < 0) {
+        XEN_PT_ERR(d, "Setup of VGA BIOS for passed-through gfx failed!\n");
+        xen_host_pci_device_put(&s->real_device);
+        return -1;
+    }
+
     /* reinitialize each config register to be emulated */
     if (xen_pt_config_init(s)) {
         XEN_PT_ERR(d, "PCI Config space initialisation failed.\n");
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index 942dc60..c04bbfd 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -298,5 +298,9 @@ static inline bool xen_pt_has_msix_mapping(XenPCIPassthroughState *s, int bar)
     return s->msix && s->msix->bar_index == bar;
 }
 
+extern int gfx_passthru;
+int register_vga_regions(XenHostPCIDevice *dev);
+int unregister_vga_regions(XenHostPCIDevice *dev);
+int setup_vga_pt(XenHostPCIDevice *dev);
 
 #endif /* !XEN_PT_H */
diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
new file mode 100644
index 0000000..9ad8a74
--- /dev/null
+++ b/hw/xen/xen_pt_graphics.c
@@ -0,0 +1,164 @@
+/*
+ * graphics passthrough
+ */
+#include "xen_pt.h"
+#include "xen-host-pci-device.h"
+#include "hw/xen/xen_backend.h"
+
+/*
+ * register VGA resources for the domain with assigned gfx
+ */
+int register_vga_regions(XenHostPCIDevice *dev)
+{
+    int ret = 0;
+
+    if (!gfx_passthru || ((dev->class_code >> 0x8) != 0x0300)) {
+        return ret;
+    }
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3B0,
+            0x3B0, 0xC, DPCI_ADD_MAPPING);
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3C0,
+            0x3C0, 0x20, DPCI_ADD_MAPPING);
+
+    ret |= xc_domain_memory_mapping(xen_xc, xen_domid,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0x20,
+            DPCI_ADD_MAPPING);
+
+    if (ret != 0) {
+        XEN_PT_ERR(NULL, "VGA region mapping failed\n");
+    }
+
+    return ret;
+}
+
+/*
+ * unregister VGA resources for the domain with assigned gfx
+ */
+int unregister_vga_regions(XenHostPCIDevice *dev)
+{
+    int ret = 0;
+
+    if (!gfx_passthru || ((dev->class_code >> 0x8) != 0x0300)) {
+        return ret;
+    }
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3B0,
+            0x3B0, 0xC, DPCI_REMOVE_MAPPING);
+
+    ret |= xc_domain_ioport_mapping(xen_xc, xen_domid, 0x3C0,
+            0x3C0, 0x20, DPCI_REMOVE_MAPPING);
+
+    ret |= xc_domain_memory_mapping(xen_xc, xen_domid,
+            0xa0000 >> XC_PAGE_SHIFT,
+            0xa0000 >> XC_PAGE_SHIFT,
+            20,
+            DPCI_REMOVE_MAPPING);
+
+    if (ret != 0) {
+        XEN_PT_ERR(NULL, "VGA region unmapping failed\n");
+    }
+
+    return ret;
+}
+
+static int get_vgabios(unsigned char *buf)
+{
+    int fd;
+    uint32_t bios_size = 0;
+    uint32_t start = 0xC0000;
+    uint16_t magic = 0;
+
+    fd = open("/dev/mem", O_RDONLY);
+    if (fd < 0) {
+        XEN_PT_ERR(NULL, "Can't open /dev/mem: %s\n", strerror(errno));
+        return 0;
+    }
+
+    /*
+     * Check whether it is a real BIOS extension.
+     * The magic number is 0xAA55.
+     */
+    if (start != lseek(fd, start, SEEK_SET)) {
+        goto out;
+    }
+    if (read(fd, &magic, 2) != 2) {
+        goto out;
+    }
+    if (magic != 0xAA55) {
+        goto out;
+    }
+
+    /* Find the size of the rom extension */
+    if (start != lseek(fd, start, SEEK_SET)) {
+        goto out;
+    }
+    if (lseek(fd, 2, SEEK_CUR) != (start + 2)) {
+        goto out;
+    }
+    if (read(fd, &bios_size, 1) != 1) {
+        goto out;
+    }
+
+    /* This size is in 512 bytes */
+    bios_size *= 512;
+
+    /*
+     * Seek back to the beginning of the ROM BIOS
+     * to start the copy.
+     */
+    if (start != lseek(fd, start, SEEK_SET)) {
+        goto out;
+    }
+
+    if (bios_size != read(fd, buf, bios_size)) {
+        bios_size = 0;
+    }
+
+out:
+    close(fd);
+    return bios_size;
+}
+
+int setup_vga_pt(XenHostPCIDevice *dev)
+{
+    unsigned char *bios = NULL;
+    int bios_size = 0;
+    char *c = NULL;
+    char checksum = 0;
+    int rc = 0;
+
+    if (!gfx_passthru || ((dev->class_code >> 0x8) != 0x0300)) {
+        return rc;
+    }
+
+    /* Allocate 64K for the VGA BIOS */
+    bios = malloc(64 * 1024);
+    if (!bios) {
+        return -1;
+    }
+
+    bios_size = get_vgabios(bios);
+    if (bios_size == 0 || bios_size > 64 * 1024) {
+        XEN_PT_ERR(NULL, "vga bios size (0x%x) is invalid!\n", bios_size);
+        rc = -1;
+        goto out;
+    }
+
+    /* Adjust the bios checksum */
+    for (c = (char *)bios; c < ((char *)bios + bios_size); c++) {
+        checksum += *c;
+    }
+    if (checksum) {
+        bios[bios_size - 1] -= checksum;
+        XEN_PT_LOG(NULL, "vga bios checksum is adjusted!\n");
+    }
+
+    cpu_physical_memory_rw(0xc0000, bios, bios_size, 1);
+out:
+    free(bios);
+    return rc;
+}
diff --git a/qemu-options.hx b/qemu-options.hx
index 56e5fdf..95de002 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -1034,6 +1034,15 @@ STEXI
 Rotate graphical output some deg left (only PXA LCD).
 ETEXI
 
+DEF("gfx_passthru", 0, QEMU_OPTION_gfx_passthru,
+    "-gfx_passthru   enable Intel IGD passthrough by Xen\n",
+    QEMU_ARCH_ALL)
+STEXI
+@item -gfx_passthru
+@findex -gfx_passthru
+Enable Intel IGD passthrough by Xen
+ETEXI
+
 DEF("vga", HAS_ARG, QEMU_OPTION_vga,
     "-vga [std|cirrus|vmware|qxl|xenfb|none]\n"
     "                select video card type\n", QEMU_ARCH_ALL)
diff --git a/vl.c b/vl.c
index 316de54..8a91054 100644
--- a/vl.c
+++ b/vl.c
@@ -215,6 +215,9 @@ static bool tcg_allowed = true;
 bool xen_allowed;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+int gfx_passthru = 0;
+#endif
 static int tcg_tb_size;
 
 static int default_serial = 1;
@@ -3775,6 +3778,11 @@ int main(int argc, char **argv, char **envp)
                 }
                 configure_msg(opts);
                 break;
+#if defined(CONFIG_XEN_PCI_PASSTHROUGH)
+            case QEMU_OPTION_gfx_passthru:
+                gfx_passthru = 1;
+                break;
+#endif
             default:
                 os_parse_cmd_args(popt->index, optarg);
             }
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Fri Feb 21 08:00:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 08:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGl1W-0002xe-CO; Fri, 21 Feb 2014 07:59:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WGkNh-00027h-Hw
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 07:18:46 +0000
Received: from [85.158.139.211:9826] by server-16.bemta-5.messagelabs.com id
	AE/83-05060-4DDF6035; Fri, 21 Feb 2014 07:18:44 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392967120!786887!1
X-Originating-IP: [98.139.212.155]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,
	ML_RADAR_SPEW_LINKS_6,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30788 invoked from network); 21 Feb 2014 07:18:42 -0000
Received: from nm25-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm25-vm1.bullet.mail.bf1.yahoo.com) (98.139.212.155)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 07:18:42 -0000
Received: from [98.139.212.150] by nm25.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 07:18:40 -0000
Received: from [98.139.212.233] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 07:18:40 -0000
Received: from [127.0.0.1] by omp1042.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 07:18:40 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 224117.16432.bm@omp1042.mail.bf1.yahoo.com
Received: (qmail 85559 invoked by uid 60001); 21 Feb 2014 07:18:40 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1392967120; bh=lhhIZ/lXB67azgDXZdwBDVGH0U+JmpRFk8OnW1Vl5Bg=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type;
	b=wmWlV3BQHYk8uOk3TwFQzf6GGRGVmdrQu1TG4OS+pX3KgvFyGyVpDY2u+juyGtRByfHIZfL/ocwbDSn4kNEORm0BIcvngtG4DRCtwm2CUL/GLjqrMNb70Xmky3RvaSOKBO1vFx7PJ6yHhgZNvzu7ya7nWgSwCH7q+ihGVznp8C0=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type;
	b=N1v5k7DsCIKr1XR0M+sqKh+nc9zNo9A0dYDUcbh9m0cKxPmypDKZM27rBraQTuWrVp6FUr8oNrh6P6oShZ93Spqb+t3ax3WFzBR7NOI97OjLADwEXQmHYRt254ud/xsNH220KYE0MiMbCmY4KmT/KJB7VBthy7nr6zizTCQtC1o=;
X-YMail-OSG: Emry.wEVM1nrZSP8xMoHjMM4Df3DGIoJENp0OLeFW.MiB8s
	rN2YnEWOuoNDopakHHiETugwjxcm2azEQPXoK4yeSFBFqk06PiAMYApOHLBl
	KozFzXD7_nC5sgrS9KvkIBIVPWSKeYAtHNi1HgRThUIftRF7hRX1UQeJ.giF
	3H2bXN4bGdZTX4PKH06krfTaJNri1yvTh2Hdc9Q3zmfEyKkMsGzhbukW1ECb
	JMOdbT3yNiCUb3alk_c05FXHfDd2n7G5wIfCFfXK8DjCtbJaC7HcPgKwG2wH
	NPu8kWJf48btmPQ5wk5xE_MnIPVwfwuTNOR2l3Tz8ngZbvAzhgAa9Cv7Bnei
	wbBdhQzbaffJYyAdBLDMLteWBAIgjg9.jb3TZbDLNh1mUcxyudM4LqVs_mmF
	33w9b6bSGCLJQoqrD7bko1THcjfEU7MIpZwJzYLZjljS2OqJO81i.3Lw_yfh
	svCITVXdHKUDlsDzRAbpvjXNq4EAPQx_9ttM2_3N8FwqzhbHVa4BYPXVoC1A
	.qZfMEhXVL6.17DwN72IuAA5beEqs2N1THrWCkm1HAXOWVJSWXn6uFdkJFHz
	p9RkQn9Z36k4awkjyNhgL2AgU4Q6apBqs0426u00saGInnMPO7vGeCqrn1Fr
	vi1MO3Do_6hXK8MCXIeO3OYOlAC59
Received: from [212.50.248.7] by web161805.mail.bf1.yahoo.com via HTTP;
	Thu, 20 Feb 2014 23:18:39 PST
X-Rocket-MIMEInfo: 002.001,
	aSBjaGFuZ2UgY29kZSB4Y19zYXZlLmMgYXMgZm9sbG93czoKCsKgaW50wqAKwqBtYWluKGludCBhcmdjLCBjaGFyICoqYXJndinCoArCoHvCoAotIMKgIMKgdW5zaWduZWQgaW50IG1heGl0LCBtYXhfZjvCoAorIMKgIMKgdW5zaWduZWQgaW50IG1heGl0LCBtYXhfZiwgbGZsYWdzO8KgCsKgIMKgIMKgaW50IGlvX2ZkLCByZXQsIHBvcnQ7wqAKwqAgwqAgwqBzdHJ1Y3Qgc2F2ZV9jYWxsYmFja3MgY2FsbGJhY2tzO8KgCisgwqAgwqB4ZW50b29sbG9nX2xldmVsIGx2bDvCoAorIMKgIMKgeGVudG9vbGxvZ19sb2cBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.177.636
References: <20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140219141207.GA9631@aepfle.de>
Message-ID: <1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
Date: Thu, 20 Feb 2014 23:18:39 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140219141207.GA9631@aepfle.de>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 21 Feb 2014 07:59:53 +0000
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8624475629277257537=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8624475629277257537==
Content-Type: multipart/alternative; boundary="-337026386-468065670-1392967119=:28200"

---337026386-468065670-1392967119=:28200
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

I changed the code of xc_save.c as follows:

 int
 main(int argc, char **argv)
 {
-    unsigned int maxit, max_f;
+    unsigned int maxit, max_f, lflags;
     int io_fd, ret, port;
     struct save_callbacks callbacks;
+    xentoollog_level lvl;
+    xentoollog_logger *l;

     if (argc != 6)
         errx(1, "usage: %s iofd domid maxit maxf flags", argv[0]);

-    si.xch = xc_interface_open(0, 0, 0);
-    if (!si.xch)
-        errx(1, "failed to open control interface");
-
     io_fd = atoi(argv[1]);
     si.domid = atoi(argv[2]);
     maxit = atoi(argv[3]);
@@ -185,6 +183,13 @@ main(int argc, char **argv)

     si.suspend_evtchn = -1;

+    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL;
+    lflags = XTL_STDIOSTREAM_HIDE_PROGRESS;
+    l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags);
+    si.xch = xc_interface_open(l, 0, 0);
+    if (!si.xch)
+        errx(1, "failed to open control interface");
+
     si.xce = xc_evtchn_open(NULL, 0);
     if (si.xce == NULL)
         warnx("failed to open event channel handle");

======================================================================

In this code I changed lflags to lflags = XTL_STDIOSTREAM_HIDE_PROGRESS; and I don't know what the line "@@ -185,6 +183,13 @@ main(int argc, char **argv)" means?!!

So I deleted "@@ -185,6 +183,13 @@ main(int argc, char **argv)" from the code and tested the logger (during live migration of a VM), but again the output in xend.log did not change.

Adel Amani
M.Sc. candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir


On Wednesday, February 19, 2014 5:42 PM, Olaf Hering <olaf@aepfle.de> wrote:

On Tue, Feb 18, Adel Amani wrote:

> Hi
> I know that, despite a lot of output being produced in xen-4.1, there is
> no logger in xc_save.c.
> I changed the code again according to "http://xen.1045712.n5.nabble.com/
> xen-unstable-tools-xc-restore-logging-in-xc-save-td5714324.html"
> But I don't understand the purpose of the "patchbot" line
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> I tested the "patchbot" code again, leaving out
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> and again there was no message :-( ....


All that is very imprecise, so we can not help.
Also read what I wrote: don't drop xen-devel@lists.xen.org

Olaf




> On Saturday, February 8, 2014 1:52 AM, Olaf Hering <olaf@aepfle.de> wrote:
> Please keep xen-devel@lists.xen.org in the CC list.
>
> On Fri, Feb 07, Adel Amani wrote:
>
> > Yes, to print the data, the function print_stats() in xc_domain_save.c
> > should run and work. I read through the function and checked, but I
> > really don't know why it doesn't work!!! :-|... I tested
> > 'fprintf(stderr,"STDERR\n"); fprintf(stdout,"STDOUT\n");'
> > but again no output :'(.....
>
> Please make sure the self-compiled binary is actually used. Try this to
> verify: grep STDERR /usr/lib/xen/bin/xc_save (assuming the fprintf above
> is actually in the compiled code.)
>
> > How do I boot the domU with 'initcall_debug'?! Does it affect total time?!
>
> This is a kernel cmdline option. Please check the documentation about
> how to pass additional kernel parameters to a domU.
>
>
> Olaf
>
>
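As a footnote to the diff above: the logger setup it adds can be written as a minimal standalone program. This is only a sketch, not compile-tested here; it assumes the xentoollog.h/xenctrl.h interfaces as shipped with Xen 4.1-era tools, and uses a local variable in place of the si.xch field:

```c
/* Minimal sketch of the xentoollog setup from the diff above.
 * Assumes Xen 4.1-era tool headers; build against libxc/libxenctrl. */
#include <stdio.h>
#include <err.h>
#include <xentoollog.h>
#include <xenctrl.h>

int main(void)
{
    /* XTL_DEBUG would correspond to the XCFLAGS_DEBUG case in the diff. */
    xentoollog_level lvl = XTL_DETAIL;
    unsigned lflags = XTL_STDIOSTREAM_HIDE_PROGRESS;

    /* Create a logger that routes messages of level >= lvl to stderr. */
    xentoollog_logger *l =
        (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags);
    if (!l)
        errx(1, "failed to create logger");

    /* Pass the logger to libxc so its messages become visible. */
    xc_interface *xch = xc_interface_open(l, NULL, 0);
    if (!xch)
        errx(1, "failed to open control interface");

    /* ... xc_domain_save() etc. would log through 'l' here ... */

    xc_interface_close(xch);
    xtl_logger_destroy(l);
    return 0;
}
```

If messages still do not appear with a setup like this, Olaf's earlier suggestion applies: verify with `grep STDERR /usr/lib/xen/bin/xc_save` that the rebuilt binary is actually the one being run.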
---337026386-468065670-1392967119=:28200--


--===============8624475629277257537==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8624475629277257537==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 08:00:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 08:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGl1W-0002xe-CO; Fri, 21 Feb 2014 07:59:54 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WGkNh-00027h-Hw
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 07:18:46 +0000
Received: from [85.158.139.211:9826] by server-16.bemta-5.messagelabs.com id
	AE/83-05060-4DDF6035; Fri, 21 Feb 2014 07:18:44 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392967120!786887!1
X-Originating-IP: [98.139.212.155]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,
	ML_RADAR_SPEW_LINKS_6,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30788 invoked from network); 21 Feb 2014 07:18:42 -0000
Received: from nm25-vm1.bullet.mail.bf1.yahoo.com (HELO
	nm25-vm1.bullet.mail.bf1.yahoo.com) (98.139.212.155)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 07:18:42 -0000
Received: from [98.139.212.150] by nm25.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 07:18:40 -0000
Received: from [98.139.212.233] by tm7.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 07:18:40 -0000
Received: from [127.0.0.1] by omp1042.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 07:18:40 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 224117.16432.bm@omp1042.mail.bf1.yahoo.com
Received: (qmail 85559 invoked by uid 60001); 21 Feb 2014 07:18:40 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1392967120; bh=lhhIZ/lXB67azgDXZdwBDVGH0U+JmpRFk8OnW1Vl5Bg=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type;
	b=wmWlV3BQHYk8uOk3TwFQzf6GGRGVmdrQu1TG4OS+pX3KgvFyGyVpDY2u+juyGtRByfHIZfL/ocwbDSn4kNEORm0BIcvngtG4DRCtwm2CUL/GLjqrMNb70Xmky3RvaSOKBO1vFx7PJ6yHhgZNvzu7ya7nWgSwCH7q+ihGVznp8C0=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:Cc:In-Reply-To:MIME-Version:Content-Type;
	b=N1v5k7DsCIKr1XR0M+sqKh+nc9zNo9A0dYDUcbh9m0cKxPmypDKZM27rBraQTuWrVp6FUr8oNrh6P6oShZ93Spqb+t3ax3WFzBR7NOI97OjLADwEXQmHYRt254ud/xsNH220KYE0MiMbCmY4KmT/KJB7VBthy7nr6zizTCQtC1o=;
X-YMail-OSG: Emry.wEVM1nrZSP8xMoHjMM4Df3DGIoJENp0OLeFW.MiB8s
	rN2YnEWOuoNDopakHHiETugwjxcm2azEQPXoK4yeSFBFqk06PiAMYApOHLBl
	KozFzXD7_nC5sgrS9KvkIBIVPWSKeYAtHNi1HgRThUIftRF7hRX1UQeJ.giF
	3H2bXN4bGdZTX4PKH06krfTaJNri1yvTh2Hdc9Q3zmfEyKkMsGzhbukW1ECb
	JMOdbT3yNiCUb3alk_c05FXHfDd2n7G5wIfCFfXK8DjCtbJaC7HcPgKwG2wH
	NPu8kWJf48btmPQ5wk5xE_MnIPVwfwuTNOR2l3Tz8ngZbvAzhgAa9Cv7Bnei
	wbBdhQzbaffJYyAdBLDMLteWBAIgjg9.jb3TZbDLNh1mUcxyudM4LqVs_mmF
	33w9b6bSGCLJQoqrD7bko1THcjfEU7MIpZwJzYLZjljS2OqJO81i.3Lw_yfh
	svCITVXdHKUDlsDzRAbpvjXNq4EAPQx_9ttM2_3N8FwqzhbHVa4BYPXVoC1A
	.qZfMEhXVL6.17DwN72IuAA5beEqs2N1THrWCkm1HAXOWVJSWXn6uFdkJFHz
	p9RkQn9Z36k4awkjyNhgL2AgU4Q6apBqs0426u00saGInnMPO7vGeCqrn1Fr
	vi1MO3Do_6hXK8MCXIeO3OYOlAC59
Received: from [212.50.248.7] by web161805.mail.bf1.yahoo.com via HTTP;
	Thu, 20 Feb 2014 23:18:39 PST
X-Rocket-MIMEInfo: 002.001,
	aSBjaGFuZ2UgY29kZSB4Y19zYXZlLmMgYXMgZm9sbG93czoKCsKgaW50wqAKwqBtYWluKGludCBhcmdjLCBjaGFyICoqYXJndinCoArCoHvCoAotIMKgIMKgdW5zaWduZWQgaW50IG1heGl0LCBtYXhfZjvCoAorIMKgIMKgdW5zaWduZWQgaW50IG1heGl0LCBtYXhfZiwgbGZsYWdzO8KgCsKgIMKgIMKgaW50IGlvX2ZkLCByZXQsIHBvcnQ7wqAKwqAgwqAgwqBzdHJ1Y3Qgc2F2ZV9jYWxsYmFja3MgY2FsbGJhY2tzO8KgCisgwqAgwqB4ZW50b29sbG9nX2xldmVsIGx2bDvCoAorIMKgIMKgeGVudG9vbGxvZ19sb2cBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.177.636
References: <20140202100044.GA5898@aepfle.de>
	<1391432170.33697.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140219141207.GA9631@aepfle.de>
Message-ID: <1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
Date: Thu, 20 Feb 2014 23:18:39 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>
In-Reply-To: <20140219141207.GA9631@aepfle.de>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 21 Feb 2014 07:59:53 +0000
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8624475629277257537=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8624475629277257537==
Content-Type: multipart/alternative; boundary="-337026386-468065670-1392967119=:28200"

---337026386-468065670-1392967119=:28200
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

i change code xc_save.c as follows:=0A=0A=A0int=A0=0A=A0main(int argc, char=
 **argv)=A0=0A=A0{=A0=0A- =A0 =A0unsigned int maxit, max_f;=A0=0A+ =A0 =A0u=
nsigned int maxit, max_f, lflags;=A0=0A=A0 =A0 =A0int io_fd, ret, port;=A0=
=0A=A0 =A0 =A0struct save_callbacks callbacks;=A0=0A+ =A0 =A0xentoollog_lev=
el lvl;=A0=0A+ =A0 =A0xentoollog_logger *l;=A0=0A=A0=A0=0A=A0 =A0 =A0if (ar=
gc !=3D 6)=A0=0A=A0 =A0 =A0 =A0 =A0errx(1, "usage: %s iofd domid maxit maxf=
 flags", argv[0]);=A0=0A=A0=A0=0A- =A0 =A0si.xch =3D xc_interface_open(0,0,=
0);=A0=0A- =A0 =A0if (!si.xch)=A0=0A- =A0 =A0 =A0 =A0errx(1, "failed to ope=
n control interface");=A0=0A-=A0=0A=A0 =A0 =A0io_fd =3D atoi(argv[1]);=A0=
=0A=A0 =A0 =A0si.domid =3D atoi(argv[2]);=A0=0A=A0 =A0 =A0maxit =3D atoi(ar=
gv[3]);=A0=0A@@ -185,6 +183,13 @@ main(int argc, char **argv)=A0=0A=A0=A0=
=0A=A0 =A0 =A0si.suspend_evtchn=A0=3D -1;=A0=0A=A0=A0=0A+ =A0 =A0lvl =3D si=
.flags & XCFLAGS_DEBUG ? XTL_DEBUG: XTL_DETAIL;=A0=0A+ =A0=A0=A0lflags =3D =
XTL_STDIOSTREAM_HIDE_PROGRESS;=A0=0A+ =A0 =A0l =3D (xentoollog_logger *)xtl=
_createlogger_stdiostream(stderr, lvl, lflags);=A0=0A+ =A0 =A0si.xch =3D xc=
_interface_open(l, 0, 0);=A0=0A+ =A0 =A0if (!si.xch)=A0=0A+ =A0 =A0 =A0 =A0=
errx(1, "failed to open control interface");=A0=0A+=A0=0A=A0 =A0 =A0si.xce =
=3D xc_evtchn_open(NULL, 0);=A0=0A=A0 =A0 =A0if (si.xce =3D=3D NULL)=A0=0A=
=A0 =A0 =A0 =A0 =A0warnx("failed to open event channel handle");=A0=0A=0A=
=0A=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=
=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=3D=0Ain this code change lflags to=A0=
=A0lflags =3D XTL_STDIOSTREAM_HIDE_PROGRESS;=A0and=A0i don't know what mean=
's=A0@@ -185,6 +183,13 @@ main(int argc, char **argv) =A0 =A0?!!=0A=0A=A0so=
 i delete=A0@@ -185,6 +183,13 @@ main(int argc, char **argv)=A0from code an=
d test logger (to live migration of=A0=A0VM) But again result output in xen=
d.log don't change.=0A=A0=0A=A0=0AAdel Amani=0AM.Sc. Candidate@Computer Eng=
ineering Department, University of Isfahan=0AEmail: A.Amani@eng.ui.ac.ir=0A=
=0A=0A=0AOn Wednesday, February 19, 2014 5:42 PM, Olaf Hering <olaf@aepfle.=
de> wrote:=0A =0AOn Tue, Feb 18, Adel Amani wrote:=0A=0A> Hi=0A> i know bec=
ause a lot of product output in xen-4.1, no logger in xc_save.c=0A> i chang=
e code again according to "http://xen.1045712.n5.nabble.com/=0A> xen-unstab=
le-tools-xc-restore-logging-in-xc-save-td5714324.html"=0A> But i don't know=
 purpose Mr "patchbot" of =0A> @@ -185,6 +183,13 @@ main(int argc, char **a=
rgv)=0A> i test again to code Mr "patchbot" without Consideration =0A> @@ -=
185,6 +183,13 @@ main(int argc, char **argv)=0A> and result again no messag=
e :-( ....=0A=0A=0AAll that is very imprecise, so we can not help.=0AAlso r=
ead what I wrote: dont drop xen-devel@lists.xen.org=0A=0AOlaf=0A=0A=0A=0A=
=0A> On Saturday, February 8, 2014 1:52 AM, Olaf Hering <olaf@aepfle.de> wr=
ote:=0A> Please keep xen-devel@lists.xen.org in CC list.=0A> =0A> On Fri, F=
eb 07, Adel Amani wrote:=0A> =0A> > yes, for print data, function print_sta=
ts() in xc_domain_save.c should run=0A> and=0A> > work. I read in function =
and check.... But i don't know really why this don't=0A> > work!!! :-|... I=
 test 'fprintf(stderr,"STDERR\n"); fprintf(stdout,"STDOUT\=0A> n");'=0A> > =
But again not answer :'(.....=0A> =0A> Please make sure the self-compiled b=
inary is actually used. Try this to=0A> verify: grep STDERR /usr/lib/xen/bi=
n/xc_save (assuming the fprintf above=0A> is actually in the compiled code.=
)=0A> =0A> > how boot the domU with 'initcall_debug'?! Are affect on total =
time?!=0A> =0A> This is a kernel cmdline option. Please check the documenta=
tion about=0A> how to pass additional kernel parameters to a domU.=0A> =0A>=
 =0A> Olaf=0A> =0A> 
---337026386-468065670-1392967119=:28200
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 7bit

i changed the code in xc_save.c as follows:

 int
 main(int argc, char **argv)
 {
-    unsigned int maxit, max_f;
+    unsigned int maxit, max_f, lflags;
     int io_fd, ret, port;
     struct save_callbacks callbacks;
+    xentoollog_level lvl;
+    xentoollog_logger *l;

     if (argc != 6)
         errx(1, "usage: %s iofd domid maxit maxf flags", argv[0]);

-    si.xch = xc_interface_open(0,0,0);
-    if (!si.xch)
-        errx(1, "failed to open control interface");
-
     io_fd = atoi(argv[1]);
     si.domid = atoi(argv[2]);
     maxit = atoi(argv[3]);
@@ -185,6 +183,13 @@ main(int argc, char **argv)

     si.suspend_evtchn = -1;

+    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL;
+    lflags = XTL_STDIOSTREAM_HIDE_PROGRESS;
+    l = (xentoollog_logger *)xtl_createlogger_stdiostream(stderr, lvl, lflags);
+    si.xch = xc_interface_open(l, 0, 0);
+    if (!si.xch)
+        errx(1, "failed to open control interface");
+
     si.xce = xc_evtchn_open(NULL, 0);
     if (si.xce == NULL)
         warnx("failed to open event channel handle");

============================================================

in this code i changed lflags to "lflags = XTL_STDIOSTREAM_HIDE_PROGRESS;" and i don't know what "@@ -185,6 +183,13 @@ main(int argc, char **argv)" means?!!

so i deleted "@@ -185,6 +183,13 @@ main(int argc, char **argv)" from the code and tested the logger (for live migration of a VM). But again the output in xend.log doesn't change.

Adel Amani
M.Sc. Candidate @ Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir

On Wednesday, February 19, 2014 5:42 PM, Olaf Hering <olaf@aepfle.de> wrote:
On Tue, Feb 18, Adel Amani wrote:

> Hi
> i know because of a lot of output in xen-4.1 there is no logger in xc_save.c
> i changed the code again according to
> "http://xen.1045712.n5.nabble.com/xen-unstable-tools-xc-restore-logging-in-xc-save-td5714324.html"
> But i don't understand the purpose Mr "patchbot" had with
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> i tested Mr "patchbot"'s code again without
> @@ -185,6 +183,13 @@ main(int argc, char **argv)
> and again the result is no message :-( ....

All that is very imprecise, so we cannot help.
Also read what I wrote: don't drop xen-devel@lists.xen.org

Olaf

> On Saturday, February 8, 2014 1:52 AM, Olaf Hering <olaf@aepfle.de> wrote:
> Please keep xen-devel@lists.xen.org in the CC list.
>
> On Fri, Feb 07, Adel Amani wrote:
>
> > yes, for print data, function print_stats() in xc_domain_save.c should run
> > and work. I read the function and checked.... But i don't really know why
> > this doesn't work!!! :-|... I tested
> > 'fprintf(stderr,"STDERR\n"); fprintf(stdout,"STDOUT\n");'
> > But again no answer :'(.....
>
> Please make sure the self-compiled binary is actually used. Try this to
> verify: grep STDERR /usr/lib/xen/bin/xc_save (assuming the fprintf above
> is actually in the compiled code.)
>
> > how to boot the domU with 'initcall_debug'?! Does it affect total time?!
>
> This is a kernel cmdline option. Please check the documentation about
> how to pass additional kernel parameters to a domU.
>
>
> Olaf
---337026386-468065670-1392967119=:28200--
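[Archive note] The "@@ -185,6 +183,13 @@" marker the message above asks about is a unified-diff hunk header, not code. A header of the form "@@ -A,B +C,D @@" says the hunk covers B lines starting at line A of the old file and D lines starting at line C of the new file; tools like patch(1) and git apply consume it, and it must never be pasted into the source file itself. A minimal sketch with hypothetical throwaway files:

```shell
# Create two small files that differ in one line, then show the
# hunk header that diff -u emits for that change.
printf 'a\nb\nc\n' > old.txt
printf 'a\nB\nc\n' > new.txt
# The header reads: 3 lines starting at line 1 of old.txt correspond
# to 3 lines starting at line 1 of new.txt.
diff -u old.txt new.txt | grep '^@@'    # prints: @@ -1,3 +1,3 @@
```

So "deleting" the hunk header from a hand-applied patch is harmless; it only locates the change, it does not add or remove code.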


--===============8624475629277257537==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8624475629277257537==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 08:56:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 08:56:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGltY-0004Qk-Vm; Fri, 21 Feb 2014 08:55:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGltX-0004Qf-8Q
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 08:55:43 +0000
Received: from [85.158.139.211:46942] by server-13.bemta-5.messagelabs.com id
	FA/AF-18801-E8417035; Fri, 21 Feb 2014 08:55:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392972941!5318831!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20056 invoked from network); 21 Feb 2014 08:55:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 08:55:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Feb 2014 08:55:41 +0000
Message-Id: <53072299020000780011E259@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 21 Feb 2014 08:55:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
	<1392820736.29739.86.camel@kazak.uk.xensource.com>
	<5304C4E1.2070901@linaro.org>
	<5304D6BC020000780011DC4F@nat28.tlf.novell.com>
	<53066A1F.8020203@linaro.org>
In-Reply-To: <53066A1F.8020203@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
 constraint for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 20.02.14 at 21:48, Julien Grall <julien.grall@linaro.org> wrote:
> Before the clean up there were 8 distinct startup routines for x86. Now
> there are only 2:
>   - drivers/passthrough/amd/iommu_init.c: iommu_maskable_msi_startup
>   - arch/x86/ioapic.c: startup_edge_ioapic_irq
> 
> For the latter one, I'm a bit surprised that the function can return 1,
> but the result is never used.

Which means consumption of the return value was intended, but
never implemented (or lost _very_ long ago). Looking at the Linux
code, the intention apparently would be for the non-zero return
value to propagate into IRQ_PENDING in one very special case we
didn't ever support (auto-probing). Re-sending of an already
pending interrupt is being handled differently there anyway. So if
something like the setting of IRQ_PENDING is needed at some point,
I guess we could as well have the startup routine do this itself. I.e.
I think converting the return value to void is still fine, as long as
you leave some commentary in
arch/x86/ioapic.c:startup_edge_ioapic_irq().

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 09:18:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 09:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGmFE-0004w8-5l; Fri, 21 Feb 2014 09:18:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGmFD-0004w3-5Z
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 09:18:07 +0000
Received: from [193.109.254.147:16738] by server-13.bemta-14.messagelabs.com
	id 48/1A-01226-EC917035; Fri, 21 Feb 2014 09:18:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392974285!1881389!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10343 invoked from network); 21 Feb 2014 09:18:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 09:18:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Feb 2014 09:18:05 +0000
Message-Id: <530727DC020000780011E27C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 21 Feb 2014 09:18:04 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ian.jackson@eu.citrix.com>
References: <osstest-25157-mainreport@xen.org>
In-Reply-To: <osstest-25157-mainreport@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions -
 trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 01:46, xen.org <ian.jackson@eu.citrix.com> wrote:
> flight 25157 xen-4.1-testing real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/25157/ 
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-pv           6 leak-check/basis(6)       fail REGR. vs. 24859
>  test-amd64-amd64-xl           6 leak-check/basis(6)       fail REGR. vs. 24859
>  build-amd64-oldkern           4 xen-build                 fail REGR. vs. 24859
>  test-amd64-amd64-pair    10 leak-check/basis/dst_host(10) fail REGR. vs. 24859
>  test-amd64-amd64-pair      9 leak-check/basis/src_host(9) fail REGR. vs. 24859
>  test-amd64-amd64-xl-winxpsp3  6 leak-check/basis(6)       fail REGR. vs. 24859
>  test-amd64-amd64-xl-win7-amd64  6 leak-check/basis(6)     fail REGR. vs. 24859
>  test-amd64-amd64-xl-qemut-win7-amd64 6 leak-check/basis(6) fail REGR. vs. 24859
>  test-amd64-amd64-xl-qemut-winxpsp3  6 leak-check/basis(6) fail REGR. vs. 24859
> 
> Regressions which are regarded as allowable (not blocking):
>  test-amd64-amd64-xl-sedf      6 leak-check/basis(6)       fail REGR. vs. 24859
>  test-amd64-amd64-xl-pcipt-intel  6 leak-check/basis(6)    fail REGR. vs. 24859
>  test-amd64-amd64-xl-sedf-pin  6 leak-check/basis(6)       fail REGR. vs. 24859

Is this again fallout from some osstest or infrastructure change?
I ask because this single change:

> version targeted for testing:
>  xen                  934858f00267a92bc2a2995a0c634d02d2c60fbd
> baseline version:
>  xen                  e21b2fa19946806ea27873a8808bc1ace48b7c69

can't possibly have caused this afaict.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 09:30:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 09:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGmQy-0005Pp-EK; Fri, 21 Feb 2014 09:30:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WGmQw-0005Pk-PF
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 09:30:15 +0000
Received: from [193.109.254.147:25162] by server-12.bemta-14.messagelabs.com
	id CB/1A-17220-5AC17035; Fri, 21 Feb 2014 09:30:13 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392975013!5907486!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21625 invoked from network); 21 Feb 2014 09:30:13 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 09:30:13 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1392975012; l=245;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=Lw/ruMb5yrEoFOLw0i6MHQjaU6g=;
	b=Rnnieo8iRnesjIRPk4/h8vZWBIPl+x3rzN83UEZ0WEHaLMzGrmTdPKbuwY04EQyjiJX
	2rC3DfY93rwLM+pcP/SF5mJAFYjf0MrBCQaZS2igVhe3s5fgxDZckhIW3xxDZa9/8hSst
	xnhceaE7ZcngmHlVAxNv2Ptq5aZ15UmmmMM=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.27 AUTH) with ESMTPSA id Z03ecfq1L9U9BD6
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate);
	Fri, 21 Feb 2014 10:30:09 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 69A7F50275; Fri, 21 Feb 2014 10:30:09 +0100 (CET)
Date: Fri, 21 Feb 2014 10:30:09 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140221093009.GA3187@aepfle.de>
References: <20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140219141207.GA9631@aepfle.de>
	<1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, Adel Amani wrote:

> +    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG: XTL_DETAIL; 

Please follow the code and check how si.flags gets its values.

The "@@ " markers are from diff(1), so that patch(1) can do its work.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 09:53:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 09:53:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGmne-0005uS-Ib; Fri, 21 Feb 2014 09:53:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGmnd-0005uN-Kz
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 09:53:41 +0000
Received: from [193.109.254.147:14992] by server-11.bemta-14.messagelabs.com
	id A6/61-24604-52227035; Fri, 21 Feb 2014 09:53:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1392976419!5855992!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19189 invoked from network); 21 Feb 2014 09:53:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 09:53:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,517,1389744000"; d="scan'208";a="102932412"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 09:53:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 04:53:37 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGmnZ-0002Jg-K4;
	Fri, 21 Feb 2014 09:53:37 +0000
Date: Fri, 21 Feb 2014 09:53:37 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140221095337.GQ18398@zion.uk.xensource.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 06:52:30AM +0400, Vasiliy Tolstov wrote:
> 2014-02-20 23:07 GMT+04:00 Daniel Kiper <dkiper@net-space.pl>:
> > Use "mem_set_enforce_limit=0" in xl.conf instead of "enforce=0".
> 
> 
> I already did that, but nothing changed =(
> 

Just to confirm, you added that in /etc/xen/xl.conf, right? My vague
memory tells me that it's a global thing. I could be wrong though...

Wei.

> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:00:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGmtj-0006DB-Dc; Fri, 21 Feb 2014 09:59:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGmth-0006D6-Og
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 09:59:57 +0000
Received: from [85.158.139.211:3137] by server-5.bemta-5.messagelabs.com id
	2C/73-32749-D9327035; Fri, 21 Feb 2014 09:59:57 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392976795!1428793!1
X-Originating-IP: [209.85.216.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19155 invoked from network); 21 Feb 2014 09:59:56 -0000
Received: from mail-qc0-f179.google.com (HELO mail-qc0-f179.google.com)
	(209.85.216.179)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 09:59:56 -0000
Received: by mail-qc0-f179.google.com with SMTP id r5so690763qcx.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 01:59:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=cpch5WRM6zT0ObfMDMSMZJHYuSz5dFdK5ObWoXhEYTk=;
	b=EcgR2MG3E5PL/V4pr/F5mapoCb3yHWg2uEuRx7r2YZMZy1ytOccvI0R7TOBMZZdpox
	ytha2BN1fK4/lLUgOLeL0gYkjfPbdJY5B49rpbrdtfbWzaJ6bsnpMoI+/HiSbyUvFH3b
	rGSPZnIceKz/xwoerdJzILczI/foKqAN3jwGk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=cpch5WRM6zT0ObfMDMSMZJHYuSz5dFdK5ObWoXhEYTk=;
	b=bMX9IkRjHWCqAow4JSdAm81yTgw+C9HdjlIMIyKwijlG75BOUbwt9wTiD7CYpSbkiZ
	Allw2vyU/eQ2mPlnXsjAGd6ixUMRKASLeKnrvs8RvyaeE0IPJ8dS02HcSbUndqPT++uV
	Bwhy33/NjR8TYjB8AUaLZsctgGF4XaksE6yxpGMn9zvB7dRnT7fH9yzoXb0CbluMRXnT
	VQtKIqrHdJbqsq/SIZ3x3uO0L/ojS8rNWKIajkp+tpNw1gQ9b+sB4y1Wu8oaiFVPugLG
	tIVH8Ri23ImpiEI18n2AZlqt23PWrwOFK04aUby9MIEUa7Y5U5mOkjTpsR0TkmRMVapV
	b06Q==
X-Gm-Message-State: ALoCoQnGt8mXmVni6MfKZfLjU3LC3pOMnSwZctaQGNOXIYJccDGUpqTXOydPvWRAAkrLicoWjYva
X-Received: by 10.224.63.131 with SMTP id b3mr8461669qai.63.1392976794982;
	Fri, 21 Feb 2014 01:59:54 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 21 Feb 2014 01:59:39 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <20140221095337.GQ18398@zion.uk.xensource.com>
References: <CACaajQvOKLLkXCkXo1F-Y6yg-wQ70ij3grrP2+4gN6-rXUEdPg@mail.gmail.com>
	<20140214163426.GG18398@zion.uk.xensource.com>
	<CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 21 Feb 2014 13:59:39 +0400
X-Google-Sender-Auth: tacwmuUcDVVUECHKc0mpM4EuLfU
Message-ID: <CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org, Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-21 13:53 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> Just to confirm, you added that in /etc/xen/xl.conf, right? My vague
> memory tells me that it's a global thing. I could be wrong though...


Yes,
autoballoon = 0
lockfile = "/var/lock/xl"
vifscript = "/etc/xen/scripts/vif-ospf"
mem_set_enforce_limit = 0
claim_mode = 1
vif.default.script = "/etc/xen/scripts/vif-ospf"


-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:05:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGmyo-0006Pm-Dc; Fri, 21 Feb 2014 10:05:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGmyl-0006Pf-Gm
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 10:05:12 +0000
Received: from [85.158.139.211:11976] by server-15.bemta-5.messagelabs.com id
	37/81-24395-6D427035; Fri, 21 Feb 2014 10:05:10 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1392977108!5379788!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1689 invoked from network); 21 Feb 2014 10:05:09 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:05:09 -0000
X-IronPort-AV: E=Sophos;i="4.97,517,1389744000"; d="scan'208";a="104620143"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 10:05:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 05:05:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGmyg-0002UY-QG;
	Fri, 21 Feb 2014 10:05:06 +0000
Date: Fri, 21 Feb 2014 10:05:06 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Anthony Liguori <anthony@codemonkey.ws>
Message-ID: <20140221100506.GR18398@zion.uk.xensource.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
	<CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Rusty Russell <rusty@au1.ibm.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
> On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> > Anthony Liguori <anthony@codemonkey.ws> writes:
> >> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> >>> Daniel Kiper <daniel.kiper@oracle.com> writes:
> >>>> Hi,
> >>>>
> >>>> Below you could find a summary of work in regards to VIRTIO compatibility with
> >>>> different virtualization solutions. It was done mainly from Xen point of view
> >>>> but results are quite generic and can be applied to wide spectrum
> >>>> of virtualization platforms.
> >>>
> >>> Hi Daniel,
> >>>
> >>>         Sorry for the delayed response, I was pondering...  CC changed
> >>> to virtio-dev.
> >>>
> >>> From a standard POV: It's possible to abstract out the where we use
> >>> 'physical address' for 'address handle'.  It's also possible to define
> >>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
> >>> Xen-PV is a distinct platform from x86.
> >>
> >> I'll go even further and say that "address handle" doesn't make sense either.
> >
> > I was trying to come up with a unique term, I wasn't trying to define
> > semantics :)
> 
> Understood, that wasn't really directed at you.
> 
> > There are three debates here now: (1) what should the standard say, and
> 
> The standard should say, "physical address"
> 
> > (2) how would Linux implement it,
> 
> Linux should use the PCI DMA API.
> 
> > (3) should we use each platform's PCI
> > IOMMU.
> 
> Just like any other PCI device :-)
> 
> >> Just using grant table references is not enough to make virtio work
> >> well under Xen.  You really need to use bounce buffers ala persistent
> >> grants.
> >
> > Wait, if you're using bounce buffers, you didn't make it "work well"!
> 
> Preaching to the choir man...  but bounce buffering is proven to be
> faster than doing grant mappings on every request.  xen-blk does
> bounce buffering by default and I suspect netfront is heading that
> direction soon.
> 

FWIW Annie Li @ Oracle once implemented a persistent map prototype for
netfront and the result was not satisfying.

> It would be a lot easier to simply have a global pool of grant tables
> that effectively becomes the DMA pool.  Then the DMA API can bounce
> into that pool and those addresses can be placed on the ring.
> 
> It's a little different for Xen because now the backends have to deal
> with physical addresses but the concept is still the same.
> 

How would you apply this to Xen's security model? How can the hypervisor
effectively enforce access control? "Handle" and "physical address" are
essentially not the same concept, otherwise you wouldn't have proposed
this change. I'm not saying I'm against this change; it's just that this
description is too vague for me to understand the bigger picture.

But one definite downside is that, if we go with this change, we then
have to maintain two different paths in the backend. However small the
difference is, it is still a burden.
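As a rough illustration of the "global pool of grant tables that effectively becomes the DMA pool" idea quoted above: the whole pool is granted to the backend once at setup, and per-request "mapping" is just a bounce copy into a free slot. This is a minimal user-space sketch; pool_map/pool_unmap and the fixed array are hypothetical stand-ins, not any real kernel API.

```c
/* Sketch of a bounce-buffer pool: the region is shared (granted) to the
 * backend once; mapping a request copies the payload into a free slot
 * and returns the slot's offset, which is the value that would be
 * placed on the ring.  Illustrative only. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SLOT_SIZE 4096
#define NSLOTS    16

static uint8_t pool[NSLOTS * SLOT_SIZE]; /* stands in for the granted region */
static uint8_t slot_used[NSLOTS];

/* Bounce data into a free slot; return its offset in the pool, -1 if full. */
static long pool_map(const void *data, size_t len)
{
    if (len > SLOT_SIZE)
        return -1;
    for (int i = 0; i < NSLOTS; i++) {
        if (!slot_used[i]) {
            slot_used[i] = 1;
            memcpy(&pool[i * SLOT_SIZE], data, len);
            return (long)i * SLOT_SIZE;
        }
    }
    return -1;
}

static void pool_unmap(long off)
{
    slot_used[off / SLOT_SIZE] = 0;
}
```

The backend resolves offsets against its single long-lived mapping of the pool, so no per-request grant map/unmap hypercalls are needed; the cost moves to the memcpy, which is the bounce-buffer trade-off discussed in this thread.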

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:10:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGn42-0006io-Gl; Fri, 21 Feb 2014 10:10:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WGn41-0006ih-HS
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 10:10:37 +0000
Received: from [193.109.254.147:11797] by server-15.bemta-14.messagelabs.com
	id 1A/80-10839-C1627035; Fri, 21 Feb 2014 10:10:36 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392977435!5872813!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5973 invoked from network); 21 Feb 2014 10:10:35 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:10:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,517,1389744000"; 
   d="scan'208";a="9714290"
Received: from unknown (HELO AMSPEX01CL02.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 21 Feb 2014 10:10:35 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Fri, 21 Feb 2014 11:10:34 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>, Andrew Bennieston
	<andrew.bennieston@citrix.com>
Thread-Topic: [PATCH] netif.h: Document xen-net{back,front} multi-queue feature
Thread-Index: AQHPLiuJcMIg2+TyAECWPq+7ueHek5q/eyKg
Date: Fri, 21 Feb 2014 10:10:34 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0250FF5@AMSPEX01CL01.citrite.net>
References: <1392660110-3190-1-git-send-email-andrew.bennieston@citrix.com>
	<1392807985.23084.132.camel@kazak.uk.xensource.com>
	<530499D7.70902@citrix.com>
	<1392811341.29739.24.camel@kazak.uk.xensource.com>
	<5305DFBF.1030909@citrix.com>
	<1392894269.23342.21.camel@kazak.uk.xensource.com>
In-Reply-To: <1392894269.23342.21.camel@kazak.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Wei Liu <wei.liu2@citrix.com>, David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Ian Campbell
> Sent: 20 February 2014 11:04
> To: Andrew Bennieston
> Cc: xen-devel@lists.xenproject.org; Wei Liu; Paul Durrant; David Vrabel
> Subject: Re: [PATCH] netif.h: Document xen-net{back,front} multi-queue
> feature
> 
> On Thu, 2014-02-20 at 10:58 +0000, Andrew Bennieston wrote:
> > As such, I think the docs should, for now, say something along the lines of:
> >    Mapping of packets to queues is considered to be a function of the
> >    transmitting system (backend or frontend) and is not negotiated
> >    between the two. Guests are free to transmit packets on any queue
> >    they choose, provided it has been set up correctly.
> 
> That's the sort of thing I was looking for, thanks.
> 
> (fixing Paul's CC at last...)
> 

That all sounds fine. There will be some changes when we attempt to implement Toeplitz hashing and Windows RSS (Receive Side Scaling)...

Briefly, the Windows stack provides a hash key at start of day which needs to be fed to the backend (which I guess we'll do via xenstore) so that it can use it in its calculation. The stack also provides a hash->queue mapping table which is updated on a fairly infrequent basis (so we can probably still use xenstore for that) to dictate which TCP flows coming from the backend should appear on which queue (and hence get processed by which frontend vCPU). The other complication is that the actual hash value needs to be passed to the stack with each TCP packet, so I suspect we'll have to use a new 'extra segment' type to pass that across.

All of this, as the name suggests, is (guest) receive side. There is no connection with the transmit side, although I believe Windows will ensure that the transmissions of any particular TCP flow will occur on the same CPU that the hash->queue mapping mandates for reception (modulo a bit of mismatch whenever the table is updated).
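For reference, the Toeplitz hash mentioned above works by XOR-ing, for every set bit of the input, a 32-bit window of the key aligned at that bit position. The sketch below is illustrative, not driver code; the 40-byte key is just a sample (in the scheme Paul describes, the Windows stack supplies the real key at start of day).

```c
/* Toeplitz hash sketch: for each input bit that is 1, XOR in the 32-bit
 * window of the key starting at that bit position.  RSS then picks a
 * queue by indexing the hash->queue table with the hash's low bits. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

static const uint8_t sample_key[40] = {
    0x6d, 0x5a, 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2,
    0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, 0x8f, 0xb0,
    0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4,
    0x77, 0xcb, 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c,
    0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, 0x01, 0xfa,
};

/* key must be at least len + 4 bytes long */
static uint32_t toeplitz_hash(const uint8_t *key,
                              const uint8_t *data, size_t len)
{
    uint32_t hash = 0;
    uint32_t window = ((uint32_t)key[0] << 24) | ((uint32_t)key[1] << 16) |
                      ((uint32_t)key[2] << 8)  |  (uint32_t)key[3];

    for (size_t i = 0; i < len; i++) {
        for (int b = 7; b >= 0; b--) {
            if (data[i] & (1u << b))
                hash ^= window;
            /* slide the window one bit, pulling in the next key bit */
            window = (window << 1) | ((key[4 + i] >> b) & 1);
        }
    }
    return hash;
}
```

The input would be the flow tuple (addresses and ports), and queue selection is then something like mapping_table[hash & mask], which is exactly the hash->queue table the frontend would publish via xenstore.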

  Paul

> Ian.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:21:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:21:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnEC-00076M-0o; Fri, 21 Feb 2014 10:21:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGnEA-00076H-A7
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 10:21:06 +0000
Received: from [85.158.143.35:18659] by server-3.bemta-4.messagelabs.com id
	4C/01-11539-19827035; Fri, 21 Feb 2014 10:21:05 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1392978063!7307581!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2191 invoked from network); 21 Feb 2014 10:21:04 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:21:04 -0000
X-IronPort-AV: E=Sophos;i="4.97,517,1389744000"; d="scan'208";a="102937803"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 10:21:03 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 05:21:02 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGnE6-0002ju-PG;
	Fri, 21 Feb 2014 10:21:02 +0000
Date: Fri, 21 Feb 2014 10:21:02 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Rusty Russell <rusty@au1.ibm.com>
Message-ID: <20140221102102.GS18398@zion.uk.xensource.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <8761o99tft.fsf@rustcorp.com.au>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	stefano.stabellini@eu.citrix.com, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
 different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 11:24:14AM +1030, Rusty Russell wrote:
> Daniel Kiper <daniel.kiper@oracle.com> writes:
> > Hey,
> >
> > On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
> >> Ian Campbell <Ian.Campbell@citrix.com> writes:
> >> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
> >> >> For platforms using EPT, I don't think you want anything but guest
> >> >> addresses, do you?
> >> >
> >> > No, the arguments for preventing unfettered access by backends to
> >> > frontend RAM applies to EPT as well.
> >>
> >> I can see how you'd parse my sentence that way, I think, but the two
> >> are orthogonal.
> >>
> >> AFAICT your grant-table access restrictions are page granularity, though
> >> you don't use page-aligned data (e.g. in xen-netfront).  This level of
> >> access control is possible using the virtio ring too, but no one has
> >> implemented such a thing AFAIK.
> >
> > Could you say in short how it should be done? DMA API is an option but
> > if there is a simpler mechanism available in VIRTIO itself we will be
> > happy to use it in Xen.
> 
> OK, this challenged me to think harder.
> 
> The queue itself is effectively a grant table (as long as you don't give
> the backend write access to it).  The available ring tells you where the
> buffers are and whether they are readable or writable.  The used ring
> tells you when they're used.
> 
> However, performance would suck due to no caching: you'd end up doing a
> map and unmap on every packet.  I'm assuming Xen currently avoids that
> somehow?  Seems likely...
> 

If you're talking about Xen drivers in Linux kernel...

At least for the Xen network backend in mainline Linux, it uses copying
instead of mapping.  Zoltan Kiss @ Citrix is working on a mapping network
backend.  He uses batched unmaps to avoid the performance penalty.
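The batching referred to here can be sketched generically: queue grant handles and retire many with a single operation, amortising the per-grant cost. In the sketch below, batch_flush() merely stands in for a multi-entry GNTTABOP_unmap_grant_ref hypercall; all names and the counter are illustrative.

```c
/* Generic batch-unmap sketch: amortise per-grant unmap cost by queueing
 * handles and "flushing" many at once.  nr_flushes counts how many
 * (simulated) hypercalls were issued.  Illustrative only. */
#include <assert.h>
#include <stdint.h>

#define BATCH 32

static uint32_t pending[BATCH];
static int nr_pending;
static int nr_flushes;

static void batch_flush(void)
{
    if (nr_pending == 0)
        return;
    /* a real backend would issue one GNTTABOP_unmap_grant_ref here,
     * covering all nr_pending handles in a single hypercall */
    nr_flushes++;
    nr_pending = 0;
}

static void unmap_grant(uint32_t handle)
{
    pending[nr_pending++] = handle;
    if (nr_pending == BATCH)
        batch_flush();
}
```

With BATCH = 32, unmapping 64 grants costs two flushes instead of 64, which is the effect being relied on to close the gap with the copying backend.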

Wei.

> On the other hand, if we wanted a more Xen-like setup, it would look
> like this:
> 
> 1) Abstract away the "physical addresses" to "handles" in the standard,
>    and allow some platform-specific mapping setup and teardown.
> 
> 2) In Linux, implement a virtio DMA ops which handles the grant table
>    stuff for Xen (returning grant table ids + offset or something?),
>    noop for others.  This would be a runtime thing.
> 
> 3) In Linux, change the drivers to use this API.
> 
> Now, Xen will not be able to use vhost to accelerate, but it doesn't now
> anyway.
> 
> Am I missing anything?
> 
> Cheers,
> Rusty.
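Rusty's three steps amount to putting an ops indirection between the drivers and the ring: drivers ask the platform for a "handle" instead of computing a physical address themselves. A rough sketch follows; every name here is made up for illustration (this is not the actual Linux DMA API, and fake_grant_page() only stands in for something like gnttab_grant_foreign_access()).

```c
/* Sketch of per-platform "handle" ops: on most platforms map() just
 * returns the (pseudo-)physical address; a Xen flavour grants the page
 * and encodes (grant ref, page offset) in the handle. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t vring_handle_t;

struct virtio_dma_ops {
    vring_handle_t (*map)(void *addr, size_t len, int writable);
    void (*unmap)(vring_handle_t h);
};

/* default: the handle is simply the address */
static vring_handle_t noop_map(void *addr, size_t len, int writable)
{
    (void)len; (void)writable;
    return (vring_handle_t)(uintptr_t)addr;
}
static void noop_unmap(vring_handle_t h) { (void)h; }

/* stand-in grant allocator: hands out fake grant refs starting at 1 */
static uint32_t next_gref = 1;
static uint32_t fake_grant_page(void *addr, int writable)
{
    (void)addr; (void)writable;
    return next_gref++;
}

/* Xen flavour: grant ref in the high bits, page offset in the low 12 */
static vring_handle_t xen_map(void *addr, size_t len, int writable)
{
    (void)len;
    uint32_t gref = fake_grant_page(addr, writable);
    return ((vring_handle_t)gref << 12) | ((uintptr_t)addr & 0xfff);
}
static void xen_unmap(vring_handle_t h) { (void)h; /* end grant here */ }

static const struct virtio_dma_ops noop_ops = { noop_map, noop_unmap };
static const struct virtio_dma_ops xen_ops  = { xen_map,  xen_unmap  };
```

Step (3) is then just drivers calling ops->map() when filling the ring; a Xen backend decodes grant references where every other backend reads addresses, which is also why Wei's point about two backend paths applies.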

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:29:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnLj-0007O5-0F; Fri, 21 Feb 2014 10:28:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WGnLh-0007O0-J2
	for Xen-devel@lists.xen.org; Fri, 21 Feb 2014 10:28:53 +0000
Received: from [85.158.139.211:56114] by server-16.bemta-5.messagelabs.com id
	C0/45-05060-46A27035; Fri, 21 Feb 2014 10:28:52 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1392978530!5351574!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25706 invoked from network); 21 Feb 2014 10:28:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:28:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,517,1389744000"; d="scan'208";a="104623989"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 10:28:50 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 05:28:50 -0500
Message-ID: <53072A61.7060907@citrix.com>
Date: Fri, 21 Feb 2014 10:28:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
References: <5305DE2C.7080502@citrix.com> <20140220201312.GA6067@kroah.com>
In-Reply-To: <20140220201312.GA6067@kroah.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen fixes for 3.10.y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 20:13, Greg Kroah-Hartman wrote:
> On Thu, Feb 20, 2014 at 10:51:24AM +0000, David Vrabel wrote:
>> These two changes fix important bugs with 32-bit Xen PV guests.
>>
>> 0160676bba69523e8b0ac83f306cce7d342ed7c8 (xen/p2m: check MFN is in range
>> before using the m2p table)
> 
> Now applied.
> 
>> 7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 (xen: Fix possible user space
>> selector corruption)
> 
> I had to edit this by hand to get it to apply; can you verify I got it right?

I can't find where you put it.  But if it looks like this, it's ok.

--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -245,6 +245,15 @@ static void __init xen_smp_prepare_boot_cpu(void)
 	   old memory can be recycled */
 	make_lowmem_page_readwrite(xen_initial_gdt);

+#ifdef CONFIG_X86_32
+	/*
+	 * Xen starts us with XEN_FLAT_RING1_DS, but linux code
+	 * expects __USER_DS
+	 */
+	loadsegment(ds, __USER_DS);
+	loadsegment(es, __USER_DS);
+#endif
+
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 }

Thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:39:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:39:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnW0-0007j1-60; Fri, 21 Feb 2014 10:39:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGnVy-0007iw-Ms
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 10:39:30 +0000
Received: from [85.158.143.35:55865] by server-1.bemta-4.messagelabs.com id
	CD/AD-31661-0EC27035; Fri, 21 Feb 2014 10:39:28 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392979166!7310245!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32546 invoked from network); 21 Feb 2014 10:39:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:39:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102941209"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 10:39:26 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 05:39:25 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGnVt-0002yx-Nt;
	Fri, 21 Feb 2014 10:39:25 +0000
Date: Fri, 21 Feb 2014 10:39:25 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140221103925.GT18398@zion.uk.xensource.com>
References: <CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 01:59:39PM +0400, Vasiliy Tolstov wrote:
> 2014-02-21 13:53 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> > Just to confirm, you added that in /etc/xen/xl.conf, right? My vague
> > memory tells me that it's a global thing. I could be wrong though...
> 
> 
> Yes,
> autoballoon = 0
> lockfile = "/var/lock/xl"
> vifscript = "/etc/xen/scripts/vif-ospf"
> mem_set_enforce_limit = 0
> claim_mode = 1
> vif.default.script = "/etc/xen/scripts/vif-ospf"
> 

Ah, so Daniel's change only affects the "xl mem-set" command. When building a
domain the maxmem is still capped to target_memory (which is memory= in
your domain config file).

Unfortunately Daniel's patch won't help you much with that.
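For reference, the cap described above comes from the per-domain config file rather than /etc/xen/xl.conf; a minimal sketch (memory= and maxmem= are the standard xl.cfg options, the values are illustrative):

```
# Per-domain config file (not /etc/xen/xl.conf); values are illustrative.
memory = 1024    # build-time target; without maxmem= this also caps the domain
maxmem = 2048    # allows later "xl mem-set" / ballooning up to 2048 MiB
```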

Wei.

> 
> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:46:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGncD-0007tF-1F; Fri, 21 Feb 2014 10:45:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGnbJ-0007sj-Pp
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 10:45:50 +0000
Received: from [85.158.137.68:24078] by server-8.bemta-3.messagelabs.com id
	46/82-16039-D2E27035; Fri, 21 Feb 2014 10:45:01 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-4.tower-31.messagelabs.com!1392979497!2091397!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2308 invoked from network); 21 Feb 2014 10:44:58 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:44:58 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so5694552qcx.9
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 02:44:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=xT3aBACsGGOW5XLQzTplwCB1Xb+dlY8wsnW+7B5oJKg=;
	b=gWrad32JxQbrDh9FOVqeX09ljJOW1RT2Q+zWnRbxOmeNp7KNO9SVpCKzBiaIL20Top
	p+49dgGiZfhpJSlhEVBwBQy7kUc295XLJaT8AVPFoBxuUN8sLwG9+yn16uj9iNDm+PXl
	zQM+zJ08bz4BN31vnGqQxc4EzLkYYM51FEPyk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=xT3aBACsGGOW5XLQzTplwCB1Xb+dlY8wsnW+7B5oJKg=;
	b=cUej49NUB+sh+lqzNYWNhoG6UDWoopkVRlN9BIBEzt1lOIQHkP4fkuKG2408aJpIGg
	oxQa3xpM0JY7fVIdh0meoy3tRUiDx9O+a6nJ02Dr68qpZ+lN5YM9pnqsKeWNsv2KKq1/
	Csc3F+HPU92y4xoW0DO+VplpXKvcFpzOQ/wkB0NK6Q7C73pPeVHiKpl0+5MicsFug07e
	gXD8f83T1ZPCC78cumq1idlqoeio07Fl853Yyz4PtRJr4WlETnmRQ30DdPwD4+xGyHZR
	M9435M2Socr+iK8711zy6KcSHYJrQDVkHYEm2HgsiXqvZErF9MQySBnjgYbQEcMo2xeG
	MILQ==
X-Gm-Message-State: ALoCoQlAd8GbQCZpAP28DFloIbYWQoMafIX+zs6u9P4kzLPG0HUIGphZej7z6+X69CFlomO/lwXe
X-Received: by 10.224.131.135 with SMTP id x7mr8637724qas.15.1392979497123;
	Fri, 21 Feb 2014 02:44:57 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 21 Feb 2014 02:44:41 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <20140221103925.GT18398@zion.uk.xensource.com>
References: <CACaajQtjkL+SdpsUvCT-Ondwj16BovSN_nOdMghP8UeVRa=M3A@mail.gmail.com>
	<CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
	<20140221103925.GT18398@zion.uk.xensource.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 21 Feb 2014 14:44:41 +0400
X-Google-Sender-Auth: -oOF89SzBviNtz-bge9iipnSxjI
Message-ID: <CACaajQteJfCiMYos66WVjWsO0K5FrJ4uAraRzB0VMWC8dJXXuw@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org, Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-21 14:39 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> Ah, so Daniel's change only affects "xl mem-set" command. When building a
> domain the maxmem is still capped to target_memory (which is memory= in
> you domain config file).
>
> Unfortunately Daniel's patch won't help you much with that


=( Very bad. How many changes would be needed to fix that?

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:59:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnp7-0008Ty-E8; Fri, 21 Feb 2014 10:59:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGnp6-0008Tt-3Y
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 10:59:16 +0000
Received: from [85.158.143.35:27949] by server-3.bemta-4.messagelabs.com id
	F1/09-11539-38137035; Fri, 21 Feb 2014 10:59:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392980353!7315804!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28947 invoked from network); 21 Feb 2014 10:59:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:59:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104628786"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 10:59:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 05:59:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGnp1-0007YR-Er;
	Fri, 21 Feb 2014 10:59:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGnp1-0005d6-6q;
	Fri, 21 Feb 2014 10:59:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25205-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 10:59:11 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 25205: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From xen-devel-bounces@lists.xen.org Fri Feb 21 10:59:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 10:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnp7-0008Ty-E8; Fri, 21 Feb 2014 10:59:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGnp6-0008Tt-3Y
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 10:59:16 +0000
Received: from [85.158.143.35:27949] by server-3.bemta-4.messagelabs.com id
	F1/09-11539-38137035; Fri, 21 Feb 2014 10:59:15 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1392980353!7315804!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28947 invoked from network); 21 Feb 2014 10:59:14 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 10:59:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104628786"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 10:59:12 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 05:59:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGnp1-0007YR-Er;
	Fri, 21 Feb 2014 10:59:11 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGnp1-0005d6-6q;
	Fri, 21 Feb 2014 10:59:11 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25205-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 10:59:11 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 25205: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25205 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25205/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl           6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-winxpsp3  6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-pair    10 leak-check/basis/dst_host(10) fail REGR. vs. 24859
 test-amd64-amd64-pair      9 leak-check/basis/src_host(9) fail REGR. vs. 24859
 test-amd64-amd64-xl-win7-amd64  6 leak-check/basis(6)     fail REGR. vs. 24859
 test-amd64-amd64-xl-qemut-win7-amd64 6 leak-check/basis(6) fail REGR. vs. 24859
 test-amd64-amd64-xl-qemut-winxpsp3  6 leak-check/basis(6) fail REGR. vs. 24859

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  6 leak-check/basis(6)    fail REGR. vs. 24859
 test-amd64-amd64-xl-sedf      6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-sedf-pin  6 leak-check/basis(6)       fail REGR. vs. 24859

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-freebsd10-i386 18 leak-check/check       fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 18 leak-check/check      fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass

version targeted for testing:
 xen                  934858f00267a92bc2a2995a0c634d02d2c60fbd
baseline version:
 xen                  e21b2fa19946806ea27873a8808bc1ace48b7c69

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 934858f00267a92bc2a2995a0c634d02d2c60fbd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 20 08:43:11 2014 +0100

    x86/AMD: work around erratum 793 for 32-bit
    
    The original change went into a 64-bit only code section, thus leaving
    the issue unfixed on 32-bit. Re-order code to address this.
    
    This is part of CVE-2013-6885 / XSA-82.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:05:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:05:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnvD-0000Ku-8p; Fri, 21 Feb 2014 11:05:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGnvA-0000Kp-UJ
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 11:05:33 +0000
Received: from [85.158.143.35:38239] by server-3.bemta-4.messagelabs.com id
	97/76-11539-CF237035; Fri, 21 Feb 2014 11:05:32 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392980730!7335594!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23718 invoked from network); 21 Feb 2014 11:05:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 11:05:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102945361"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 11:05:30 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 06:05:29 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGnmi-0003Ed-DR;
	Fri, 21 Feb 2014 10:56:48 +0000
Date: Fri, 21 Feb 2014 10:56:48 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140221105648.GV18398@zion.uk.xensource.com>
References: <20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
	<20140221103925.GT18398@zion.uk.xensource.com>
	<CACaajQteJfCiMYos66WVjWsO0K5FrJ4uAraRzB0VMWC8dJXXuw@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQteJfCiMYos66WVjWsO0K5FrJ4uAraRzB0VMWC8dJXXuw@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 02:44:41PM +0400, Vasiliy Tolstov wrote:
> 2014-02-21 14:39 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> > Ah, so Daniel's change only affects the "xl mem-set" command. When building a
> > domain, the maxmem is still capped to target_memory (which is memory= in
> > your domain config file).
> >
> > Unfortunately, Daniel's patch won't help you much with that.
> 
> 
> =( Very bad. How many changes would be needed to fix that?
> 

DISCLAIMER: xl is designed like that for a reason; I don't claim to know
that particular reason, only how it is implemented. So you need to
evaluate the risk before changing it. I wouldn't call the following
paragraphs a "fix" for this issue.

If you don't care much about altering the behaviour, the change is a
one-liner. Find libxl/libxl_dom.c:libxl__build_pre, then look for
xc_domain_setmaxmem. There's a parameter "info->target_memkb"; change it
to "info->max_memkb".

With that change, the cap is set to maxmem= by default when building
a domain.

Wei.

> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:06:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGnvm-0000Of-Qo; Fri, 21 Feb 2014 11:06:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <luis.henriques@canonical.com>) id 1WGnvl-0000OQ-6i
	for Xen-devel@lists.xen.org; Fri, 21 Feb 2014 11:06:09 +0000
Received: from [85.158.139.211:47574] by server-9.bemta-5.messagelabs.com id
	C8/EF-11237-02337035; Fri, 21 Feb 2014 11:06:08 +0000
X-Env-Sender: luis.henriques@canonical.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392980766!5360002!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29151 invoked from network); 21 Feb 2014 11:06:06 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-9.tower-206.messagelabs.com with SMTP;
	21 Feb 2014 11:06:06 -0000
Received: from bl20-128-115.dsl.telepac.pt ([2.81.128.115] helo=localhost)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.71)
	(envelope-from <luis.henriques@canonical.com>)
	id 1WGnve-0007Cd-2S; Fri, 21 Feb 2014 11:06:02 +0000
Date: Fri, 21 Feb 2014 11:06:00 +0000
From: =?iso-8859-1?Q?Lu=EDs?= Henriques <luis.henriques@canonical.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140221110600.GB2607@hercules>
References: <5305DE2C.7080502@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5305DE2C.7080502@citrix.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	"stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen fixes for 3.10.y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 10:51:24AM +0000, David Vrabel wrote:
> These two changes fix important bugs with 32-bit Xen PV guests.
>
> 0160676bba69523e8b0ac83f306cce7d342ed7c8 (xen/p2m: check MFN is in range
> before using the m2p table)
>
> 7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 (xen: Fix possible user space
> selector corruption)
>
> Please apply to 3.10.y.
>
> Thanks.
>
> David
> --
> To unsubscribe from this list: send the line "unsubscribe stable" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

Thank you David, I'll queue these patches for the 3.11 kernel as well.

Cheers,
--
Luís

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:12:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGo1x-0000pG-Mc; Fri, 21 Feb 2014 11:12:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WGo1v-0000pB-PG
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 11:12:32 +0000
Received: from [193.109.254.147:48306] by server-9.bemta-14.messagelabs.com id
	B4/5E-24895-F9437035; Fri, 21 Feb 2014 11:12:31 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1392981150!5878361!1
X-Originating-IP: [74.125.82.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP,SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2468 invoked from network); 21 Feb 2014 11:12:30 -0000
Received: from mail-wg0-f49.google.com (HELO mail-wg0-f49.google.com)
	(74.125.82.49)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 11:12:30 -0000
Received: by mail-wg0-f49.google.com with SMTP id y10so2453651wgg.16
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 03:12:30 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=UdPLaiwyqE83F56IL72f7WOGzAdXz5hY/dsTOcG5Fqc=;
	b=ZYeXHhE9kiy7xzfWXxa9YTl0K664n5w1J0ZWmAegA2o4pObyK+GHrTxaS2dfeTxoB/
	nPjTudqdzDDRh07DP/zVD1YfIvDIH2D4n+GHFta085rVfb9cLtCyPceo1WGdXcY82aj0
	zrriX2wARhjyGIBaf+N1bNDNKhquVc9naZSiXqzsoVLLm83zDWl9fL0rjwmHEa3XoiC5
	M3uo0uj9jAG3gcrTRxb5n3CzzTJLQpSaE+YZQkbN0CE+e9+SjNM897pnN4hibk5dYNIj
	I2r8sgvDJirNXJeOgbLkzRPb9nk7i/+iWe8UhSZV3PE+Gxa3850SIPDXhDLVy66NKqv2
	aBQQ==
MIME-Version: 1.0
X-Received: by 10.180.165.15 with SMTP id yu15mr2687942wib.28.1392981149914;
	Fri, 21 Feb 2014 03:12:29 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 21 Feb 2014 03:12:29 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1402201447290.15812@kaball.uk.xensource.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
	<5304B501.2010207@linaro.org>
	<1392818038.29739.74.camel@kazak.uk.xensource.com>
	<5304BC73.5090803@eu.citrix.com>
	<alpine.DEB.2.02.1402201447290.15812@kaball.uk.xensource.com>
Date: Fri, 21 Feb 2014 11:12:29 +0000
X-Google-Sender-Auth: xqo0vBeQGtpTHaMD_w5o_MOl9Rw
Message-ID: <CAFLBxZaKEsfTw7+UFuSnVNYQS06MwhWLgu9vZjoZb+CN5BopQQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	Julien Grall <julien.grall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 20, 2014 at 2:52 PM, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> On Wed, 19 Feb 2014, George Dunlap wrote:
>> On 02/19/2014 01:53 PM, Ian Campbell wrote:
>> > On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
>> > > Hi all,
>> > >
>> > > Ping?
>> > No one made a case for a release exception so I put it in my 4.5 pile.
>> >
>> > >   It would be nice to have this patch for Xen 4.4, as the IPI priority
>> > > patch won't be pushed before the release.
>> > >
>> > > The patch is a minor change and won't impact normal use. When dom0 is
>> > > built, Xen always does it on CPU 0.
>> > Right, so whoever is doing otherwise already has a big pile of patches I
>> > presume?
>> >
>> > It's rather late to be making such changes IMHO, but I'll defer to
>> > George.
>>
>> I can't figure out from the description what's the advantage of having it in
>> 4.4.
>
> People that use the default configuration won't see any differences but
> people that manually modify Xen to start a second domain and assign a
> device to it would.
> To give you a concrete example, it fixes a deadlock reported by
> Oleksandr Tyshchenko:
>
> http://marc.info/?l=xen-devel&m=139099606402232

Right -- I think if I had been cc'd when the patch was submitted, I
would have said yes for sure; but at this point I think we just want
to get 4.4.0 out without any more delays if possible.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:31:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGoJV-0001KE-Dq; Fri, 21 Feb 2014 11:30:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGoJT-0001K9-8S
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 11:30:39 +0000
Received: from [85.158.139.211:20851] by server-10.bemta-5.messagelabs.com id
	31/F9-08578-ED837035; Fri, 21 Feb 2014 11:30:38 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-6.tower-206.messagelabs.com!1392982236!5378826!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14411 invoked from network); 21 Feb 2014 11:30:37 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 11:30:37 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so5777363qcx.9
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 03:30:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=oZXczz0JcEp4ei/G8bfPoGtDnec03LQdpQigYS+0if0=;
	b=fPXnkZuN72LIXnusj5bRZhaayIA9clAtsBU346+ZWoE57p4S8ii61AR/pPzDt2+y2d
	wbJ1RL0mAnp2SJlR2QbM+eJa8HDehxR9N0mKuMVouh39GKk2mEhv2hbsQtn9b81XjtHL
	gR+bzVUJTc+9cyj+S7QlqzyFy+GR6dygMVrVo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=oZXczz0JcEp4ei/G8bfPoGtDnec03LQdpQigYS+0if0=;
	b=cFCypZFvvcvTTr8AYZFnirzs8j8bJPaRe6RI6cL94UxIjCrYfvkIZUgtcYT2yLmjSR
	gj38UNpACwsT4egIBbMWbisThxeaMxhY7c91/c2crrmslVPl+f0L8Q0RWsUrOj05NgBT
	0rvjywv1lvYnhv/AnsSmehKOTkFqcBUbAQYNFBelpEHE6hOo8wILRX64spk8aDqXjYgF
	KpZM4D/eorgZ8AYZnWbSxs0ayXsMYis2j6rp+78tQB/u3AR77enS9YNvQ1MA4t7VmNM8
	Gru7tXg91rvftaZv3NYfxp9+Kb4vl4sSaZjXD+GR2O/xsk1TZFjtf68qC+XYMVTh+jjA
	reEg==
X-Gm-Message-State: ALoCoQmiuEyL+1L9dHCgRyQB1uINJRe+zyb4HqR68yIwQbLNS0Ijq3XI+ILOUB/VCQBa4XWZMlfg
X-Received: by 10.224.131.135 with SMTP id x7mr8913887qas.15.1392982236375;
	Fri, 21 Feb 2014 03:30:36 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 21 Feb 2014 03:30:21 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <20140221105648.GV18398@zion.uk.xensource.com>
References: <20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
	<20140221103925.GT18398@zion.uk.xensource.com>
	<CACaajQteJfCiMYos66WVjWsO0K5FrJ4uAraRzB0VMWC8dJXXuw@mail.gmail.com>
	<20140221105648.GV18398@zion.uk.xensource.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 21 Feb 2014 15:30:21 +0400
X-Google-Sender-Auth: ibWccjh38NgRwCAHr_hNJH2r7hU
Message-ID: <CACaajQt7CYi64Q4YtUafdz7N_gYGpvJ8a52T=5B2D6w=Wb-+3g@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org, Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-21 14:56 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> DISCLAIMER: xl is designed like that for a reason, I don't think I know
> this particular reason. I just know how it is implemented. So you need
> to evaluate the risk before changing it. I wouldn't claim the following
> paragraphs as a "fix" to this issue.
>
> If you don't care much about altering the behavior, the change is only a
> one-liner. Find libxl/libxl_dom.c:libxl__build_pre, then look for
> xc_domain_setmaxmem. There's a parameter "info->target_memkb". Change it
> to "info->max_memkb".
>
> With that change, the cap is now set to maxmem= by default when building
> a domain.


Thanks! =) Maybe alter the current setting based on the enforce param from
Daniel's patch?

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:35:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGoOD-0001Rn-4y; Fri, 21 Feb 2014 11:35:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WGoOB-0001Rh-Li
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 11:35:31 +0000
Received: from [193.109.254.147:63474] by server-15.bemta-14.messagelabs.com
	id E6/AA-10839-20A37035; Fri, 21 Feb 2014 11:35:30 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392982529!5875201!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15799 invoked from network); 21 Feb 2014 11:35:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 11:35:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104634921"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 11:35:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 06:35:28 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WGoO7-0003u4-Vk;
	Fri, 21 Feb 2014 11:35:27 +0000
Date: Fri, 21 Feb 2014 11:35:27 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
Message-ID: <20140221113527.GW18398@zion.uk.xensource.com>
References: <20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
	<20140221103925.GT18398@zion.uk.xensource.com>
	<CACaajQteJfCiMYos66WVjWsO0K5FrJ4uAraRzB0VMWC8dJXXuw@mail.gmail.com>
	<20140221105648.GV18398@zion.uk.xensource.com>
	<CACaajQt7CYi64Q4YtUafdz7N_gYGpvJ8a52T=5B2D6w=Wb-+3g@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACaajQt7CYi64Q4YtUafdz7N_gYGpvJ8a52T=5B2D6w=Wb-+3g@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 03:30:21PM +0400, Vasiliy Tolstov wrote:
> 2014-02-21 14:56 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> > DISCLAIMER: xl is designed like that for a reason, I don't think I know
> > this particular reason. I just know how it is implemented. So you need
> > to evaluate the risk before changing it. I wouldn't claim the following
> > paragraphs as a "fix" to this issue.
> >
> > If you don't care much about altering the behavior, the change is only a
> > one-liner. Find libxl/libxl_dom.c:libxl__build_pre, then look for
> > xc_domain_setmaxmem. There's a parameter "info->target_memkb". Change it
> > to "info->max_memkb".
> >
> > With that change, the cap is now set to maxmem= by default when building
> > a domain.
> 
> 
> Thanks! =) Maybe alter the current setting based on the enforce param from
> Daniel's patch?
> 

That would require a fair amount of plumbing. Plus, the parameter Daniel
introduced might not fit this purpose. I don't think we should / can do
anything before reaching a consensus on the list, not without toolstack
maintainers' input. Probably something for the 4.5 cycle.

Wei.

> -- 
> Vasiliy Tolstov,
> e-mail: v.tolstov@selfip.ru
> jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:41:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGoTS-0001mE-VI; Fri, 21 Feb 2014 11:40:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WGoTR-0001m9-C9
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 11:40:57 +0000
Received: from [193.109.254.147:44379] by server-16.bemta-14.messagelabs.com
	id FC/E9-21945-84B37035; Fri, 21 Feb 2014 11:40:56 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392982855!635338!1
X-Originating-IP: [209.85.216.179]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30811 invoked from network); 21 Feb 2014 11:40:56 -0000
Received: from mail-qc0-f179.google.com (HELO mail-qc0-f179.google.com)
	(209.85.216.179)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 11:40:56 -0000
Received: by mail-qc0-f179.google.com with SMTP id r5so870192qcx.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 03:40:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=W6RDAesDNt89Nh/TtlWXA9h7HpGpVPKxrg46IEmIK8A=;
	b=USMHudtWZVhvdMUwbjIYX0Byqoij/v6aqfYTiJPe8725Xzza0vHKAaDBJwZmeNog+Z
	b9q4FSFqxHQIF39QukpqxQ+7V1SHb7cIK+eAMz6oNo14Eag/9/S34dUcH0EvJ4ZviWdn
	YXPNtpXGZMwZorsy34MN9RZFIDkFvtnfpVFG4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=W6RDAesDNt89Nh/TtlWXA9h7HpGpVPKxrg46IEmIK8A=;
	b=Pp+bawW2kc/qyzHyVcSeJkir+EzaIiFp8jpDErdV6fT89tYPsLc0exFqHrcqm2XhDe
	eCAan5k+dlmPxx6XIdxiqbd6FFalsqAjrompzbqSMjU0TBvhSYCmGHUBmdFr+hmJVoyL
	U4VCr8ggqQWA/lpYW/42ix/WOgRKZnWrNp24ZAtkKyaYmDXBBamuOi6Qta4D+te0gQYk
	OKbCGSGbKeD0LtoaISI+i7B19e43RfwZjP4T/rF6CPPfBufj4SOSMs+2AGIaEq63AP0k
	DbyRve4wCKqeNE8Uy9+48KX/ckrZVdSLE1LOfGpc2CR/MpcCl5PPB4Fgnk7YeNNw+GZ8
	BuEg==
X-Gm-Message-State: ALoCoQkSY4M413zZ9H2YuY+vKOy5JE7OUAAYW74JV2nFtwuVSj8FaEkx5gFCpMP/3Qtagjuzisgg
X-Received: by 10.229.171.8 with SMTP id f8mr9016650qcz.13.1392982854951; Fri,
	21 Feb 2014 03:40:54 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Fri, 21 Feb 2014 03:40:39 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <20140221113527.GW18398@zion.uk.xensource.com>
References: <20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
	<20140221103925.GT18398@zion.uk.xensource.com>
	<CACaajQteJfCiMYos66WVjWsO0K5FrJ4uAraRzB0VMWC8dJXXuw@mail.gmail.com>
	<20140221105648.GV18398@zion.uk.xensource.com>
	<CACaajQt7CYi64Q4YtUafdz7N_gYGpvJ8a52T=5B2D6w=Wb-+3g@mail.gmail.com>
	<20140221113527.GW18398@zion.uk.xensource.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Fri, 21 Feb 2014 15:40:39 +0400
X-Google-Sender-Auth: bOQ2nSzSV0R0te4UUEV1wPityYM
Message-ID: <CACaajQuzDeneCoCp0Byjsw_h3_icyo_XjZ3K_TQ+CnbGTEtQ5Q@mail.gmail.com>
To: Wei Liu <wei.liu2@citrix.com>
Cc: xen-devel@lists.xenproject.org, Daniel Kiper <dkiper@net-space.pl>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-21 15:35 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> That would require a fair amount of plumbing. Plus the parameter Daniel
> introduced might not fit this purpose. I don't think we should / can do
> anything before reaching a consensus on the list, not without toolstack
> maintainer's input. Probably something for the 4.5 cycle.


Ok. Anyway, big thanks for the help.

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 11:59:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 11:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGolA-0002HQ-MH; Fri, 21 Feb 2014 11:59:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGol8-0002HL-GJ
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 11:59:14 +0000
Received: from [193.109.254.147:43565] by server-2.bemta-14.messagelabs.com id
	8E/AC-01236-19F37035; Fri, 21 Feb 2014 11:59:13 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-27.messagelabs.com!1392983952!5881857!1
X-Originating-IP: [74.125.83.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13418 invoked from network); 21 Feb 2014 11:59:13 -0000
Received: from mail-ee0-f46.google.com (HELO mail-ee0-f46.google.com)
	(74.125.83.46)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 11:59:13 -0000
Received: by mail-ee0-f46.google.com with SMTP id c13so1550051eek.19
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 03:59:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=fbJ7aHoZA0dCccnDKMtlcXAIFNdY7cN6rr/CiUfaC+I=;
	b=VV9k4dTlU2Z3QG59eEA88X5ixanGVCnGJaI9u+x3CkbEa6ApkZ8D+eCkwgCanRy//M
	CLrI6z+0OhW0XVOnFwlHFbVWq70E0VuGthHEVabXmcbW+LGsLCHpRuoGr6UPayiie8gp
	ZVSrsJzpsvortMu+l/silIODogMy3e6Tlorg1swrxEclX9qR7/bt/5ShPnRr2VDOJTHe
	77+7GbB7zYJFkPHKgqxx6xMyNmJBmZum3QRe4q4nh2cFEQI0Ufft6OapUF+X12/gn4V6
	ovgkJq17gQdEIk3y9m6mxX9eq4HdHvG2Xhs4ahykRUYpVVz7xcYilrJwWgd0YCmS50LB
	SAvA==
X-Gm-Message-State: ALoCoQkviNVRLKFZj/vrUQOz72ZWQRY36bpx8nqxtw/XICwcxwirfbQrlF0Ezi0O8selHSf+s0S8
X-Received: by 10.15.51.196 with SMTP id n44mr7970431eew.27.1392983952645;
	Fri, 21 Feb 2014 03:59:12 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm25735383eet.6.2014.02.21.03.59.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 21 Feb 2014 03:59:09 -0800 (PST)
Message-ID: <53073F8C.3000300@linaro.org>
Date: Fri, 21 Feb 2014 11:59:08 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
	<5304B501.2010207@linaro.org>
	<1392818038.29739.74.camel@kazak.uk.xensource.com>
	<5304BC73.5090803@eu.citrix.com>
	<alpine.DEB.2.02.1402201447290.15812@kaball.uk.xensource.com>
	<CAFLBxZaKEsfTw7+UFuSnVNYQS06MwhWLgu9vZjoZb+CN5BopQQ@mail.gmail.com>
In-Reply-To: <CAFLBxZaKEsfTw7+UFuSnVNYQS06MwhWLgu9vZjoZb+CN5BopQQ@mail.gmail.com>
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/21/2014 11:12 AM, George Dunlap wrote:
> On Thu, Feb 20, 2014 at 2:52 PM, Stefano Stabellini
> <stefano.stabellini@eu.citrix.com> wrote:
>> On Wed, 19 Feb 2014, George Dunlap wrote:
>>> On 02/19/2014 01:53 PM, Ian Campbell wrote:
>>>> On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
>>>>> Hi all,
>>>>>
>>>>> Ping?
>>>> No one made a case for a release exception so I put it in my 4.5 pile.
>>>>
>>>>>   It would be nice to have this patch for Xen 4.4 as IPI priority
>>>>> patch won't be pushed before the release.
>>>>>
>>>>> The patch is a minor change and won't impact normal use. When dom0 is
>>>>> built, Xen always does it on CPU 0.
>>>> Right, so whoever is doing otherwise already has a big pile of patches I
>>>> presume?
>>>>
>>>> It's rather late to be making such changes IMHO, but I'll defer to
>>>> George.
>>>
>>> I can't figure out from the description what's the advantage of having it in
>>> 4.4.
>>
>> People that use the default configuration won't see any differences but
>> people that manually modify Xen to start a second domain and assign a
>> device to it would.
>> To give you a concrete example, it fixes a deadlock reported by
>> Oleksandr Tyshchenko:
>>
>> http://marc.info/?l=xen-devel&m=139099606402232
> 
> Right -- I think if I had been cc'd when the patch was submitted, I
> would have said yes for sure; but at this point I think we just want
> to get 4.4.0 out without any more delays if possible.

You were already CCed from the beginning :).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:02:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:02:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGonp-0002Ve-Ba; Fri, 21 Feb 2014 12:02:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGono-0002VX-74
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:02:00 +0000
Received: from [85.158.143.35:45795] by server-3.bemta-4.messagelabs.com id
	25/EF-11539-73047035; Fri, 21 Feb 2014 12:01:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1392984117!7347153!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22245 invoked from network); 21 Feb 2014 12:01:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:01:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102955292"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 12:01:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 07:01:55 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGonj-0007rG-LJ;
	Fri, 21 Feb 2014 12:01:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGonj-0002mZ-D5;
	Fri, 21 Feb 2014 12:01:55 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21255.16435.69137.96537@mariner.uk.xensource.com>
Date: Fri, 21 Feb 2014 12:01:55 +0000
To: Jan Beulich <JBeulich@suse.com>
In-Reply-To: <530727DC020000780011E27C@nat28.tlf.novell.com>
References: <osstest-25157-mainreport@xen.org>
	<530727DC020000780011E27C@nat28.tlf.novell.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: ian.campbell@eu.citrix.com, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions -
 trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich writes ("Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions - trouble: broken/fail/pass"):
> On 21.02.14 at 01:46, xen.org <ian.jackson@eu.citrix.com> wrote:
> > flight 25157 xen-4.1-testing real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/25157/ 
...
> >  test-amd64-amd64-xl           6 leak-check/basis(6)       fail REGR. vs. 24859

> Is this again fallout from some osstest or infrastructure change?

I think this is because of some changes to osstest but I'm not 100%
sure which ones.  The actual error is:

    xl: error while loading shared libraries: libxlutil.so.1.0: cannot
    open shared object file: No such file or directory

from "xl list".  The relevant change to osstest must be in
250e6d7a701e..76eeba138e8e, i.e. in one of these:

76eeba138e8e ts-xen-build: Apply python workaround in wheezy too
337947384c03 Do not attempt migration tests if the platform doesn't support it
64cc9fd6ea1e Configure the Calxeda fabric on host boot
ac2051504349 freebsd: switch to 10.0-RELEASE (20140116-r260789)
b3e68acc66df make-flight: abolish special-casing of suite for armhf
92c43811c831 Debian: Switch to wheezy
4ca7b8955a47 ts-guests-nbd-mirror: set "oldstyle=true"
79a1a2f01c90 ts-guests-nbd-mirror: add checkaccessible test
d9f536084623 ts-guests-nbd-mirror: purge old packages first
c46c344a3c84 TestSupport: Suppress prompting by apt
5d1d7ccc457b TestSupport: break out target_run_apt
e23940fa9213 ts-xen-install: default the interface to the one in /etc/network/interfaces
c0aa07833272 ts-xen-install: nodhcp: restructure
507fc9cb0770 ts-host-install: set `IPAPPEND 2' (if interface isn't forced)
dfb75f984940 ts-kernel-build: force CONFIG_BLK_DEV_NBD=y
da536cbf2933 ts-xen-build-prep: avoid lvextend segfault (Debian #736173) with wheezy

I suspect "switch to wheezy".  For now I have disabled the 4.1 tests,
while we decide what to do.
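[Aside: this class of failure (a binary whose shared-library dependency the dynamic loader cannot resolve) can be confirmed with ldd. A minimal sketch; the binary path is an assumption, substitute the xl binary on the affected test host, and run ldconfig after reinstalling the tools if the library is present but not in the loader cache.]

```shell
#!/bin/sh
# Sketch: check whether a binary's shared-library dependencies resolve.
# BIN defaults to /bin/ls here purely for illustration; on an affected
# osstest host you would point it at the installed xl binary instead.
BIN=${1:-/bin/ls}
# ldd prints "not found" next to any library the dynamic loader cannot locate.
if ldd "$BIN" | grep -q 'not found'; then
    echo "unresolved libraries for $BIN"
else
    echo "all libraries resolved for $BIN"
fi
```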

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:08:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGotq-0002iC-6Y; Fri, 21 Feb 2014 12:08:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGoto-0002i7-42
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 12:08:12 +0000
Received: from [85.158.139.211:15396] by server-9.bemta-5.messagelabs.com id
	28/AF-11237-BA147035; Fri, 21 Feb 2014 12:08:11 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1392984487!5399309!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9971 invoked from network); 21 Feb 2014 12:08:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:08:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102956653"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 12:07:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:07:40 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGotI-0004LF-Ng;
	Fri, 21 Feb 2014 12:07:40 +0000
Message-ID: <53074186.1010305@eu.citrix.com>
Date: Fri, 21 Feb 2014 12:07:34 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1390844023-23123-1-git-send-email-oleksandr.tyshchenko@globallogic.com>
	<52E69CBC.3090207@linaro.org>
	<CAJEb2DF+qVcvAeOpQUOSoBy_T3iri_MiXkcXFXFdDj916k9ThQ@mail.gmail.com>
	<CAJEb2DGdOArRF5qeEABc8+V77TPvQ2FWe=WfmFCm3Z+KpnK8fw@mail.gmail.com>
	<alpine.DEB.2.02.1401291136340.4373@kaball.uk.xensource.com>
	<alpine.DEB.2.02.1402041615230.4373@kaball.uk.xensource.com>
	<5304B501.2010207@linaro.org>
	<1392818038.29739.74.camel@kazak.uk.xensource.com>
	<5304BC73.5090803@eu.citrix.com>
	<alpine.DEB.2.02.1402201447290.15812@kaball.uk.xensource.com>
	<CAFLBxZaKEsfTw7+UFuSnVNYQS06MwhWLgu9vZjoZb+CN5BopQQ@mail.gmail.com>
	<53073F8C.3000300@linaro.org>
In-Reply-To: <53073F8C.3000300@linaro.org>
X-DLP: MIA1
Cc: Oleksandr Tyshchenko <oleksandr.tyshchenko@globallogic.com>,
	George Dunlap <george.dunlap@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH] xen/arm: route irqs to cpu0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/21/2014 11:59 AM, Julien Grall wrote:
> On 02/21/2014 11:12 AM, George Dunlap wrote:
>> On Thu, Feb 20, 2014 at 2:52 PM, Stefano Stabellini
>> <stefano.stabellini@eu.citrix.com> wrote:
>>> On Wed, 19 Feb 2014, George Dunlap wrote:
>>>> On 02/19/2014 01:53 PM, Ian Campbell wrote:
>>>>> On Wed, 2014-02-19 at 13:43 +0000, Julien Grall wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> Ping?
>>>>> No one made a case for a release exception so I put it in my 4.5 pile.
>>>>>
>>>>>>    It would be nice to have this patch for Xen 4.4 as IPI priority
>>>>>> patch won't be pushed before the release.
>>>>>>
>>>>>> The patch is a minor change and won't impact normal use. When dom0 is
>>>>>> built, Xen always does it on CPU 0.
>>>>> Right, so whoever is doing otherwise already has a big pile of patches I
>>>>> presume?
>>>>>
>>>>> It's rather late to be making such changes IMHO, but I'll defer to
>>>>> George.
>>>> I can't figure out from the description what's the advantage of having it in
>>>> 4.4.
>>> People that use the default configuration won't see any differences but
>>> people that manually modify Xen to start a second domain and assign a
>>> device to it would.
>>> To give you a concrete example, it fixes a deadlock reported by
>>> Oleksandr Tyshchenko:
>>>
>>> http://marc.info/?l=xen-devel&m=139099606402232
>> Right -- I think if I had been cc'd when the patch was submitted, I
>> would have said yes for sure; but at this point I think we just want
>> to get 4.4.0 out without any more delays if possible.
> You were already CCed from the beginning :).

Right. :-)  But as Ian said, no one made a case for a release exception, 
and I got a bit tired of having to ask every time. :-)

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:08:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGou4-0002l7-JW; Fri, 21 Feb 2014 12:08:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WGou3-0002jL-Hu
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:08:27 +0000
Received: from [85.158.143.35:55102] by server-2.bemta-4.messagelabs.com id
	A8/D6-10891-AB147035; Fri, 21 Feb 2014 12:08:26 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1392984505!7358258!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24712 invoked from network); 21 Feb 2014 12:08:26 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:08:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; 
   d="scan'208";a="9717495"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 21 Feb 2014 12:08:26 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL03.citrite.net ([169.254.8.48]) with mapi id 14.02.0342.004;
	Fri, 21 Feb 2014 13:08:25 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V4 net-next 1/5] xen-netback: Factor queue-specific
	data into queue struct.
Thread-Index: AQHPLApeFYeYsdekekiHJlZu2kAQT5q/mzHg
Date: Fri, 21 Feb 2014 12:08:24 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02552E8@AMSPEX01CL01.citrite.net>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-2-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-2-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 1/5] xen-netback: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 17 February 2014 17:58
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
> Vrabel; Andrew Bennieston
> Subject: [PATCH V4 net-next 1/5] xen-netback: Factor queue-specific data
> into queue struct.
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netback, move the
> queue-specific data from struct xenvif into struct xenvif_queue, and
> update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_netdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0 for a single queue and uses
> skb_get_hash() to compute the queue index otherwise.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |   81 ++++--
>  drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
>  drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++------------
> -----
>  drivers/net/xen-netback/xenbus.c    |   87 ++++--
>  4 files changed, 593 insertions(+), 417 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> netback/common.h
> index ae413a2..2550867 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -108,17 +108,36 @@ struct xenvif_rx_meta {
>   */
>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
> XEN_NETIF_RX_RING_SIZE)
> 
> -struct xenvif {
> -	/* Unique identifier for this interface. */
> -	domid_t          domid;
> -	unsigned int     handle;
> +/* Queue name is interface name with "-qNNN" appended */
> +#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
> +

'-qNNN' is only 5 chars. Are you accounting for a NUL terminator too?

> +/* IRQ name is queue name with "-tx" or "-rx" appended */
> +#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
> +

If yes, then you appear to have doubly accounted for it here.

> +struct xenvif;
> +
> +struct xenvif_stats {
> +	/* Stats fields to be updated per-queue.
> +	 * A subset of struct net_device_stats that contains only the
> +	 * fields that are updated in netback.c for each queue.
> +	 */
> +	unsigned int rx_bytes;
> +	unsigned int rx_packets;
> +	unsigned int tx_bytes;
> +	unsigned int tx_packets;
> +};
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int id; /* Queue ID, 0-based */
> +	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
> +	struct xenvif *vif; /* Parent VIF */
> 
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,19 +159,34 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  	RING_IDX rx_last_skb_slots;
> 
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
> 
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> 
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +	/* Statistics */
> +	struct xenvif_stats stats;
> +};
> +
> +struct xenvif {
> +	/* Unique identifier for this interface. */
> +	domid_t          domid;
> +	unsigned int     handle;
> +
>  	u8               fe_dev_addr[6];
> 
>  	/* Frontend feature information. */
> @@ -166,15 +200,12 @@ struct xenvif {
>  	/* Internal feature information. */
>  	u8 can_queue:1;	    /* can queue packets for receiver? */
> 
> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> -	unsigned long   credit_bytes;
> -	unsigned long   credit_usec;
> -	unsigned long   remaining_credit;
> -	struct timer_list credit_timeout;
> -	u64 credit_window_start;
> +	/* Queues */
> +	unsigned int num_queues;
> +	struct xenvif_queue *queues;
> 
>  	/* Statistics */
> -	unsigned long rx_gso_checksum_fixup;
> +	atomic_t rx_gso_checksum_fixup;

Any reason why this is not in xenvif_stats? If it were there then it would not need to be atomic.

  Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:13:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGoyt-00038m-Ce; Fri, 21 Feb 2014 12:13:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WGoyr-00038e-Ed
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:13:25 +0000
Received: from [85.158.139.211:40126] by server-1.bemta-5.messagelabs.com id
	1C/7B-12859-4E247035; Fri, 21 Feb 2014 12:13:24 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392984802!5416328!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4954 invoked from network); 21 Feb 2014 12:13:22 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:13:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; 
   d="scan'208";a="9717603"
Received: from unknown (HELO AMSPEX01CL02.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 21 Feb 2014 12:13:23 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Fri, 21 Feb 2014 13:13:22 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V4 net-next 2/5] xen-netback: Add support for multiple
	queues
Thread-Index: AQHPLApeme3WmVSf6UC0lqNlzm7vm5q/pCMA
Date: Fri, 21 Feb 2014 12:13:21 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 17 February 2014 17:58
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
> Vrabel; Andrew Bennieston
> Subject: [PATCH V4 net-next 2/5] xen-netback: Add support for multiple
> queues
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Builds on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Writes the maximum supported number of queues into XenStore, and reads
> the values written by the frontend to determine how many queues to use.
> 
> Ring references and event channels are read from XenStore on a per-queue
> basis and rings are connected accordingly.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netback/common.h    |    2 +
>  drivers/net/xen-netback/interface.c |    7 +++-
>  drivers/net/xen-netback/netback.c   |    8 ++++
>  drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
>  4 files changed, 82 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 2550867..8180929 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
> 
>  extern bool separate_tx_rx_irq;
> 
> +extern unsigned int xenvif_max_queues;
> +
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index daf93f6..bc7a82d 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	char name[IFNAMSIZ] = {};
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
> +	/* Allocate a netdev with the max. supported number of queues.
> +	 * When the guest selects the desired number, it will be updated
> +	 * via netif_set_real_num_tx_queues().
> +	 */
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> +			      xenvif_max_queues);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
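[Editor's aside: the allocate-for-the-maximum-then-trim pattern above can be modelled in plain userspace C. This is a hedged sketch only; `netdev_model`, `alloc_netdev_model` and `set_real_num_tx_queues` are illustrative stand-ins for the kernel's `alloc_netdev_mq()` and `netif_set_real_num_tx_queues()`, not real kernel API.]

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model: allocate storage for the maximum number of queues
 * up front, then later record how many are actually in use. */
struct netdev_model {
	unsigned int num_tx_queues;      /* allocated at creation (the max) */
	unsigned int real_num_tx_queues; /* currently in use */
};

static struct netdev_model *alloc_netdev_model(unsigned int max_queues)
{
	struct netdev_model *dev = calloc(1, sizeof(*dev));

	if (!dev || max_queues == 0) {
		free(dev);
		return NULL;
	}
	dev->num_tx_queues = max_queues;
	dev->real_num_tx_queues = max_queues;
	return dev;
}

static int set_real_num_tx_queues(struct netdev_model *dev, unsigned int n)
{
	if (n == 0 || n > dev->num_tx_queues)
		return -1; /* cannot exceed what was allocated up front */
	dev->real_num_tx_queues = n;
	return 0;
}
```

The design choice mirrors the comment in the hunk: the expensive allocation happens once with the upper bound, and the guest's chosen count only adjusts the active subset.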
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index 46b2f5b..64d66a1 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -54,6 +54,11 @@
>  bool separate_tx_rx_irq = 1;
>  module_param(separate_tx_rx_irq, bool, 0644);
> 
> +unsigned int xenvif_max_queues;
> +module_param(xenvif_max_queues, uint, 0644);
> +MODULE_PARM_DESC(xenvif_max_queues,
> +		"Maximum number of queues per virtual interface");
> +
>  /*
>   * This is the maximum slots a skb can have. If a guest sends a skb
>   * which exceeds this limit it is considered malicious.
> @@ -1585,6 +1590,9 @@ static int __init netback_init(void)
>  	if (!xen_domain())
>  		return -ENODEV;
> 
> +	/* Allow as many queues as there are CPUs, by default */
> +	xenvif_max_queues = num_online_cpus();
> +
>  	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
>  		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
>  			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index f23ea0a..d11f51e 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -20,6 +20,7 @@
> 
>  #include "common.h"
>  #include <linux/vmalloc.h>
> +#include <linux/rtnetlink.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
>  	if (err)
>  		pr_debug("Error writing feature-split-event-channels\n");
> 
> +	/* Multi-queue support: This is an optional feature. */
> +	err = xenbus_printf(XBT_NIL, dev->nodename,
> +			"multi-queue-max-queues", "%u", xenvif_max_queues);
> +	if (err)
> +		pr_debug("Error writing multi-queue-max-queues\n");
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> @@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
>  	unsigned long credit_bytes, credit_usec;
>  	unsigned int queue_index;
>  	struct xenvif_queue *queue;
> +	unsigned int requested_num_queues;
> +
> +	/* Check whether the frontend requested multiple queues
> +	 * and read the number requested.
> +	 */
> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
> +			"multi-queue-num-queues",
> +			"%u", &requested_num_queues);
> +	if (err < 0) {
> +		requested_num_queues = 1; /* Fall back to single queue */
> +	} else if (requested_num_queues > xenvif_max_queues) {
> +		/* buggy or malicious guest */
> +		xenbus_dev_fatal(dev, err,
> +			"guest requested %u queues, exceeding the maximum of %u.",
> +			requested_num_queues, xenvif_max_queues);
> +		return;
> +	}
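[Editor's aside: the negotiation in the hunk above reduces to a small pure function. A sketch under stated assumptions; `negotiate_num_queues` is a hypothetical helper, not part of the patch, and `scanf_result` mirrors the `xenbus_scanf()` return, where negative means the frontend wrote no multi-queue-num-queues key.]

```c
#include <assert.h>

/* Model of the queue-count negotiation: no key means the frontend has
 * no multi-queue support (fall back to one queue); a request above the
 * backend's limit indicates a buggy or malicious frontend. */
static int negotiate_num_queues(int scanf_result, unsigned int requested,
				unsigned int max_queues,
				unsigned int *num_queues)
{
	if (scanf_result < 0) {
		*num_queues = 1;	/* no frontend support: single queue */
		return 0;
	}
	if (requested > max_queues)
		return -1;		/* reject, as connect() does fatally */
	*num_queues = requested;
	return 0;
}
```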
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
>  	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>  	read_xenbus_vif_flags(be);
> 
> -	be->vif->num_queues = 1;
> +	/* Use the number of queues requested by the frontend */
> +	be->vif->num_queues = requested_num_queues;
>  	be->vif->queues = vzalloc(be->vif->num_queues *
>  			sizeof(struct xenvif_queue));
> +	rtnl_lock();
> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
> +	rtnl_unlock();
> 
>  	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
>  		queue = &be->vif->queues[queue_index];
> @@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be,
> struct xenvif_queue *queue)
>  	unsigned long tx_ring_ref, rx_ring_ref;
>  	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> +	char *xspath = NULL;

I don't think you need the NULL init here. xspath is set in both branches of the if statement below. 

  Paul

> +	size_t xspathsize;
> +	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
> +
> +	/* If the frontend requested 1 queue, or we have fallen back
> +	 * to single queue due to lack of frontend support for multi-
> +	 * queue, expect the remaining XenStore keys in the toplevel
> +	 * directory. Otherwise, expect them in a subdirectory called
> +	 * queue-N.
> +	 */
> +	if (queue->vif->num_queues == 1) {
> +		xspath = (char *)dev->otherend;
> +	} else {
> +		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
> +		if (!xspath) {
> +			xenbus_dev_fatal(dev, -ENOMEM,
> +					"reading ring references");
> +			return -ENOMEM;
> +		}
> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
> +				 queue->id);
> +	}
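[Editor's aside: the 11-byte `xenstore_path_ext_size` above covers "/queue-" (7 characters) plus up to three queue-id digits plus the terminating NUL. A userspace sketch of the same path construction; `build_queue_path` is a hypothetical helper name.]

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build a per-queue XenStore path the way connect_rings() does:
 * size the buffer as strlen(otherend) + 11, then snprintf into it. */
static char *build_queue_path(const char *otherend, unsigned int queue_id)
{
	const size_t xenstore_path_ext_size = 11; /* "/queue-" + NNN + NUL */
	size_t xspathsize = strlen(otherend) + xenstore_path_ext_size;
	char *xspath = calloc(1, xspathsize);

	if (!xspath)
		return NULL;
	snprintf(xspath, xspathsize, "%s/queue-%u", otherend, queue_id);
	return xspath;
}
```

Note the implicit assumption this shares with the patch: queue ids stay below 1000, otherwise `snprintf` would truncate (safely, with NUL termination).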
> 
> -	err = xenbus_gather(XBT_NIL, dev->otherend,
> +	err = xenbus_gather(XBT_NIL, xspath,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
>  			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
>  	if (err) {
>  		xenbus_dev_fatal(dev, err,
>  				 "reading %s/ring-ref",
> -				 dev->otherend);
> -		return err;
> +				 xspath);
> +		goto err;
>  	}
> 
>  	/* Try split event channels first, then single event channel. */
> -	err = xenbus_gather(XBT_NIL, dev->otherend,
> +	err = xenbus_gather(XBT_NIL, xspath,
>  			    "event-channel-tx", "%u", &tx_evtchn,
>  			    "event-channel-rx", "%u", &rx_evtchn, NULL);
>  	if (err < 0) {
> -		err = xenbus_scanf(XBT_NIL, dev->otherend,
> +		err = xenbus_scanf(XBT_NIL, xspath,
>  				   "event-channel", "%u", &tx_evtchn);
>  		if (err < 0) {
>  			xenbus_dev_fatal(dev, err,
>  					 "reading %s/event-channel(-tx/rx)",
> -					 dev->otherend);
> -			return err;
> +					 xspath);
> +			goto err;
>  		}
>  		rx_evtchn = tx_evtchn;
>  	}
> @@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be,
> struct xenvif_queue *queue)
>  				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>  				 tx_ring_ref, rx_ring_ref,
>  				 tx_evtchn, rx_evtchn);
> -		return err;
> +		goto err;
>  	}
> 
> -	return 0;
> +	err = 0;
> +err: /* Regular return falls through with err == 0 */
> +	if (xspath != dev->otherend)
> +		kfree(xspath);
> +
> +	return err;
>  }
> 
>  static int read_xenbus_vif_flags(struct backend_info *be)
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:20:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp5s-0003To-74; Fri, 21 Feb 2014 12:20:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGp5p-0003Th-A6
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:20:37 +0000
Received: from [85.158.137.68:50585] by server-8.bemta-3.messagelabs.com id
	23/A3-16039-49447035; Fri, 21 Feb 2014 12:20:36 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392985234!3388829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4591 invoked from network); 21 Feb 2014 12:20:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:20:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104643999"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 12:20:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:20:33 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGp5l-0004Vx-0N; Fri, 21 Feb 2014 12:20:33 +0000
Message-ID: <53074490.8030806@citrix.com>
Date: Fri, 21 Feb 2014 12:20:32 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 12:13, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 17 February 2014 17:58
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
>> Vrabel; Andrew Bennieston
>> Subject: [PATCH V4 net-next 2/5] xen-netback: Add support for multiple
>> queues
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Builds on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Writes the maximum supported number of queues into XenStore, and reads
>> the values written by the frontend to determine how many queues to use.
>>
>> Ring references and event channels are read from XenStore on a per-queue
>> basis and rings are connected accordingly.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netback/common.h    |    2 +
>>   drivers/net/xen-netback/interface.c |    7 +++-
>>   drivers/net/xen-netback/netback.c   |    8 ++++
>>   drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
>>   4 files changed, 82 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index 2550867..8180929 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
>>
>>   extern bool separate_tx_rx_irq;
>>
>> +extern unsigned int xenvif_max_queues;
>> +
>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index daf93f6..bc7a82d 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent,
>> domid_t domid,
>>   	char name[IFNAMSIZ] = {};
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>> +	/* Allocate a netdev with the max. supported number of queues.
>> +	 * When the guest selects the desired number, it will be updated
>> +	 * via netif_set_real_num_tx_queues().
>> +	 */
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>> +			      xenvif_max_queues);
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 46b2f5b..64d66a1 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -54,6 +54,11 @@
>>   bool separate_tx_rx_irq = 1;
>>   module_param(separate_tx_rx_irq, bool, 0644);
>>
>> +unsigned int xenvif_max_queues;
>> +module_param(xenvif_max_queues, uint, 0644);
>> +MODULE_PARM_DESC(xenvif_max_queues,
>> +		"Maximum number of queues per virtual interface");
>> +
>>   /*
>>    * This is the maximum slots a skb can have. If a guest sends a skb
>>    * which exceeds this limit it is considered malicious.
>> @@ -1585,6 +1590,9 @@ static int __init netback_init(void)
>>   	if (!xen_domain())
>>   		return -ENODEV;
>>
>> +	/* Allow as many queues as there are CPUs, by default */
>> +	xenvif_max_queues = num_online_cpus();
>> +
>>   	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
>>   		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
>>   			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
>> index f23ea0a..d11f51e 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -20,6 +20,7 @@
>>
>>   #include "common.h"
>>   #include <linux/vmalloc.h>
>> +#include <linux/rtnetlink.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
>>   	if (err)
>>   		pr_debug("Error writing feature-split-event-channels\n");
>>
>> +	/* Multi-queue support: This is an optional feature. */
>> +	err = xenbus_printf(XBT_NIL, dev->nodename,
>> +			"multi-queue-max-queues", "%u", xenvif_max_queues);
>> +	if (err)
>> +		pr_debug("Error writing multi-queue-max-queues\n");
>> +
>>   	err = xenbus_switch_state(dev, XenbusStateInitWait);
>>   	if (err)
>>   		goto fail;
>> @@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
>>   	unsigned long credit_bytes, credit_usec;
>>   	unsigned int queue_index;
>>   	struct xenvif_queue *queue;
>> +	unsigned int requested_num_queues;
>> +
>> +	/* Check whether the frontend requested multiple queues
>> +	 * and read the number requested.
>> +	 */
>> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +			"multi-queue-num-queues",
>> +			"%u", &requested_num_queues);
>> +	if (err < 0) {
>> +		requested_num_queues = 1; /* Fall back to single queue */
>> +	} else if (requested_num_queues > xenvif_max_queues) {
>> +		/* buggy or malicious guest */
>> +		xenbus_dev_fatal(dev, err,
>> +			"guest requested %u queues, exceeding the maximum of %u.",
>> +			requested_num_queues, xenvif_max_queues);
>> +		return;
>> +	}
>>
>>   	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>>   	if (err) {
>> @@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
>>   	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>>   	read_xenbus_vif_flags(be);
>>
>> -	be->vif->num_queues = 1;
>> +	/* Use the number of queues requested by the frontend */
>> +	be->vif->num_queues = requested_num_queues;
>>   	be->vif->queues = vzalloc(be->vif->num_queues *
>>   			sizeof(struct xenvif_queue));
>> +	rtnl_lock();
>> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
>> +	rtnl_unlock();
>>
>>   	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
>>   		queue = &be->vif->queues[queue_index];
>> @@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be,
>> struct xenvif_queue *queue)
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>>   	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> +	char *xspath = NULL;
>
> I don't think you need the NULL init here. xspath is set in both branches of the if statement below.

Indeed, but I prefer to initialise things sanely where possible. It 
makes it easier to spot problems with later modifications of the code, 
e.g. if one of those branches changed.

Andrew.
>
>    Paul
>
>> +	size_t xspathsize;
>> +	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-
>> NNN" */
>> +
>> +	/* If the frontend requested 1 queue, or we have fallen back
>> +	 * to single queue due to lack of frontend support for multi-
>> +	 * queue, expect the remaining XenStore keys in the toplevel
>> +	 * directory. Otherwise, expect them in a subdirectory called
>> +	 * queue-N.
>> +	 */
>> +	if (queue->vif->num_queues == 1) {
>> +		xspath = (char *)dev->otherend;
>> +	} else {
>> +		xspathsize = strlen(dev->otherend) +
>> xenstore_path_ext_size;
>> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
>> +		if (!xspath) {
>> +			xenbus_dev_fatal(dev, -ENOMEM,
>> +					"reading ring references");
>> +			return -ENOMEM;
>> +		}
>> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev-
>>> otherend,
>> +				 queue->id);
>> +	}
>>
>> -	err = xenbus_gather(XBT_NIL, dev->otherend,
>> +	err = xenbus_gather(XBT_NIL, xspath,
>>   			    "tx-ring-ref", "%lu", &tx_ring_ref,
>>   			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
>>   	if (err) {
>>   		xenbus_dev_fatal(dev, err,
>>   				 "reading %s/ring-ref",
>> -				 dev->otherend);
>> -		return err;
>> +				 xspath);
>> +		goto err;
>>   	}
>>
>>   	/* Try split event channels first, then single event channel. */
>> -	err = xenbus_gather(XBT_NIL, dev->otherend,
>> +	err = xenbus_gather(XBT_NIL, xspath,
>>   			    "event-channel-tx", "%u", &tx_evtchn,
>>   			    "event-channel-rx", "%u", &rx_evtchn, NULL);
>>   	if (err < 0) {
>> -		err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +		err = xenbus_scanf(XBT_NIL, xspath,
>>   				   "event-channel", "%u", &tx_evtchn);
>>   		if (err < 0) {
>>   			xenbus_dev_fatal(dev, err,
>>   					 "reading %s/event-channel(-tx/rx)",
>> -					 dev->otherend);
>> -			return err;
>> +					 xspath);
>> +			goto err;
>>   		}
>>   		rx_evtchn = tx_evtchn;
>>   	}
>> @@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be,
>> struct xenvif_queue *queue)
>>   				 "mapping shared-frames %lu/%lu port tx %u
>> rx %u",
>>   				 tx_ring_ref, rx_ring_ref,
>>   				 tx_evtchn, rx_evtchn);
>> -		return err;
>> +		goto err;
>>   	}
>>
>> -	return 0;
>> +	err = 0;
>> +err: /* Regular return falls through with err == 0 */
>> +	if (xspath != dev->otherend)
>> +		kfree(xspath);
>> +
>> +	return err;
>>   }
>>
>>   static int read_xenbus_vif_flags(struct backend_info *be)
>> --
>> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:20:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp5s-0003To-74; Fri, 21 Feb 2014 12:20:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGp5p-0003Th-A6
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:20:37 +0000
Received: from [85.158.137.68:50585] by server-8.bemta-3.messagelabs.com id
	23/A3-16039-49447035; Fri, 21 Feb 2014 12:20:36 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392985234!3388829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4591 invoked from network); 21 Feb 2014 12:20:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:20:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104643999"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 12:20:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:20:33 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGp5l-0004Vx-0N; Fri, 21 Feb 2014 12:20:33 +0000
Message-ID: <53074490.8030806@citrix.com>
Date: Fri, 21 Feb 2014 12:20:32 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 12:13, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 17 February 2014 17:58
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
>> Vrabel; Andrew Bennieston
>> Subject: [PATCH V4 net-next 2/5] xen-netback: Add support for multiple
>> queues
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Builds on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Writes the maximum supported number of queues into XenStore, and reads
>> the values written by the frontend to determine how many queues to use.
>>
>> Ring references and event channels are read from XenStore on a per-queue
>> basis and rings are connected accordingly.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netback/common.h    |    2 +
>>   drivers/net/xen-netback/interface.c |    7 +++-
>>   drivers/net/xen-netback/netback.c   |    8 ++++
>>   drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
>>   4 files changed, 82 insertions(+), 11 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
>> index 2550867..8180929 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
>>
>>   extern bool separate_tx_rx_irq;
>>
>> +extern unsigned int xenvif_max_queues;
>> +
>>   #endif /* __XEN_NETBACK__COMMON_H__ */
>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index daf93f6..bc7a82d 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -373,7 +373,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	char name[IFNAMSIZ] = {};
>>
>>   	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
>> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>> +	/* Allocate a netdev with the max. supported number of queues.
>> +	 * When the guest selects the desired number, it will be updated
>> +	 * via netif_set_real_num_tx_queues().
>> +	 */
>> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
>> +			      xenvif_max_queues);
>>   	if (dev == NULL) {
>>   		pr_warn("Could not allocate netdev for %s\n", name);
>>   		return ERR_PTR(-ENOMEM);
>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 46b2f5b..64d66a1 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -54,6 +54,11 @@
>>   bool separate_tx_rx_irq = 1;
>>   module_param(separate_tx_rx_irq, bool, 0644);
>>
>> +unsigned int xenvif_max_queues;
>> +module_param(xenvif_max_queues, uint, 0644);
>> +MODULE_PARM_DESC(xenvif_max_queues,
>> +		"Maximum number of queues per virtual interface");
>> +
>>   /*
>>    * This is the maximum slots a skb can have. If a guest sends a skb
>>    * which exceeds this limit it is considered malicious.
>> @@ -1585,6 +1590,9 @@ static int __init netback_init(void)
>>   	if (!xen_domain())
>>   		return -ENODEV;
>>
>> +	/* Allow as many queues as there are CPUs, by default */
>> +	xenvif_max_queues = num_online_cpus();
>> +
>>   	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
>>   		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
>>   			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
>> index f23ea0a..d11f51e 100644
>> --- a/drivers/net/xen-netback/xenbus.c
>> +++ b/drivers/net/xen-netback/xenbus.c
>> @@ -20,6 +20,7 @@
>>
>>   #include "common.h"
>>   #include <linux/vmalloc.h>
>> +#include <linux/rtnetlink.h>
>>
>>   struct backend_info {
>>   	struct xenbus_device *dev;
>> @@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
>>   	if (err)
>>   		pr_debug("Error writing feature-split-event-channels\n");
>>
>> +	/* Multi-queue support: This is an optional feature. */
>> +	err = xenbus_printf(XBT_NIL, dev->nodename,
>> +			"multi-queue-max-queues", "%u",
>> xenvif_max_queues);
>> +	if (err)
>> +		pr_debug("Error writing multi-queue-max-queues\n");
>> +
>>   	err = xenbus_switch_state(dev, XenbusStateInitWait);
>>   	if (err)
>>   		goto fail;
>> @@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
>>   	unsigned long credit_bytes, credit_usec;
>>   	unsigned int queue_index;
>>   	struct xenvif_queue *queue;
>> +	unsigned int requested_num_queues;
>> +
>> +	/* Check whether the frontend requested multiple queues
>> +	 * and read the number requested.
>> +	 */
>> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +			"multi-queue-num-queues",
>> +			"%u", &requested_num_queues);
>> +	if (err < 0) {
>> +		requested_num_queues = 1; /* Fall back to single queue */
>> +	} else if (requested_num_queues > xenvif_max_queues) {
>> +		/* buggy or malicious guest */
>> +		xenbus_dev_fatal(dev, err,
>> +			"guest requested %u queues, exceeding the
>> maximum of %u.",
>> +			requested_num_queues, xenvif_max_queues);
>> +		return;
>> +	}
>>
>>   	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>>   	if (err) {
>> @@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
>>   	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>>   	read_xenbus_vif_flags(be);
>>
>> -	be->vif->num_queues = 1;
>> +	/* Use the number of queues requested by the frontend */
>> +	be->vif->num_queues = requested_num_queues;
>>   	be->vif->queues = vzalloc(be->vif->num_queues *
>>   			sizeof(struct xenvif_queue));
>> +	rtnl_lock();
>> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
>> +	rtnl_unlock();
>>
>>   	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
>>   		queue = &be->vif->queues[queue_index];
>> @@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>>   	unsigned long tx_ring_ref, rx_ring_ref;
>>   	unsigned int tx_evtchn, rx_evtchn;
>>   	int err;
>> +	char *xspath = NULL;
>
> I don't think you need the NULL init here. xspath is set in both branches of the if statement below.

Indeed, but I prefer to initialise things sanely where possible. It 
makes it easier to spot problems with later modifications of the code, 
e.g. if one of those branches changed.

Andrew.
>
>    Paul
>
>> +	size_t xspathsize;
>> +	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-
>> NNN" */
>> +
>> +	/* If the frontend requested 1 queue, or we have fallen back
>> +	 * to single queue due to lack of frontend support for multi-
>> +	 * queue, expect the remaining XenStore keys in the toplevel
>> +	 * directory. Otherwise, expect them in a subdirectory called
>> +	 * queue-N.
>> +	 */
>> +	if (queue->vif->num_queues == 1) {
>> +		xspath = (char *)dev->otherend;
>> +	} else {
>> +		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
>> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
>> +		if (!xspath) {
>> +			xenbus_dev_fatal(dev, -ENOMEM,
>> +					"reading ring references");
>> +			return -ENOMEM;
>> +		}
>> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
>> +				 queue->id);
>> +	}
>>
>> -	err = xenbus_gather(XBT_NIL, dev->otherend,
>> +	err = xenbus_gather(XBT_NIL, xspath,
>>   			    "tx-ring-ref", "%lu", &tx_ring_ref,
>>   			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
>>   	if (err) {
>>   		xenbus_dev_fatal(dev, err,
>>   				 "reading %s/ring-ref",
>> -				 dev->otherend);
>> -		return err;
>> +				 xspath);
>> +		goto err;
>>   	}
>>
>>   	/* Try split event channels first, then single event channel. */
>> -	err = xenbus_gather(XBT_NIL, dev->otherend,
>> +	err = xenbus_gather(XBT_NIL, xspath,
>>   			    "event-channel-tx", "%u", &tx_evtchn,
>>   			    "event-channel-rx", "%u", &rx_evtchn, NULL);
>>   	if (err < 0) {
>> -		err = xenbus_scanf(XBT_NIL, dev->otherend,
>> +		err = xenbus_scanf(XBT_NIL, xspath,
>>   				   "event-channel", "%u", &tx_evtchn);
>>   		if (err < 0) {
>>   			xenbus_dev_fatal(dev, err,
>>   					 "reading %s/event-channel(-tx/rx)",
>> -					 dev->otherend);
>> -			return err;
>> +					 xspath);
>> +			goto err;
>>   		}
>>   		rx_evtchn = tx_evtchn;
>>   	}
>> @@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>>   				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>>   				 tx_ring_ref, rx_ring_ref,
>>   				 tx_evtchn, rx_evtchn);
>> -		return err;
>> +		goto err;
>>   	}
>>
>> -	return 0;
>> +	err = 0;
>> +err: /* Regular return falls through with err == 0 */
>> +	if (xspath != dev->otherend)
>> +		kfree(xspath);
>> +
>> +	return err;
>>   }
>>
>>   static int read_xenbus_vif_flags(struct backend_info *be)
>> --
>> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:22:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp7X-0003ah-EY; Fri, 21 Feb 2014 12:22:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WGp7V-0003aZ-Rg
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:22:22 +0000
Received: from [193.109.254.147:27545] by server-3.bemta-14.messagelabs.com id
	52/B3-00432-DF447035; Fri, 21 Feb 2014 12:22:21 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1392985340!5954993!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28520 invoked from network); 21 Feb 2014 12:22:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:22:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; 
   d="scan'208";a="9717888"
Received: from unknown (HELO AMSPEX01CL02.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 21 Feb 2014 12:22:20 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Fri, 21 Feb 2014 13:22:20 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V4 net-next 5/5] xen-net{back,front}: Document
	multi-queue feature in netif.h
Thread-Index: AQHPLApfR6zrkA1M9EygmJwD9TVQUZq/pfLg
Date: Fri, 21 Feb 2014 12:22:19 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0255359@AMSPEX01CL01.citrite.net>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 5/5] xen-net{back,
 front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 17 February 2014 17:58
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
> Vrabel; Andrew Bennieston
> Subject: [PATCH V4 net-next 5/5] xen-net{back,front}: Document multi-
> queue feature in netif.h
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Document the multi-queue feature in terms of XenStore keys to be written
> by the backend and by the frontend.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
> 
> diff --git a/include/xen/interface/io/netif.h
> b/include/xen/interface/io/netif.h
> index c50061d..8868c51 100644
> --- a/include/xen/interface/io/netif.h
> +++ b/include/xen/interface/io/netif.h
> @@ -51,6 +51,27 @@
>   */
> 
>  /*
> + * Multiple transmit and receive queues:
> + * If supported, the backend will write "multi-queue-max-queues" and set its
> + * value to the maximum supported number of queues.
> + * Frontends that are aware of this feature and wish to use it can write the
> + * key "multi-queue-num-queues", set to the number they wish to use.
> + *
> + * Queues replicate the shared rings and event channels, and
> + * "feature-split-event-channels" is required when using multiple queues.
> + *

Is it? The code in patch 2 appears to cope with the "event-channel" key as well as the split variants regardless of the number of queues being used. Am I missing some other restriction?

  Paul

> + * For frontends requesting just one queue, the usual event-channel and
> + * ring-ref keys are written as before, simplifying the backend processing
> + * to avoid distinguishing between a frontend that doesn't understand the
> + * multi-queue feature, and one that does, but requested only one queue.
> + *
> + * Frontends requesting two or more queues must not write the toplevel
> + * event-channel and ring-ref keys, instead writing them under sub-keys having
> + * the name "queue-N" where N is the integer ID of the queue to which those
> + * keys belong. Queues are indexed from zero.
> + */
> +
> +/*
>   * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP
> checksum
>   * offload off or on. If it is missing then the feature is assumed to be on.
>   * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP
> checksum
> --
> 1.7.10.4



From xen-devel-bounces@lists.xen.org Fri Feb 21 12:23:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:23:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp8B-0003fF-Sy; Fri, 21 Feb 2014 12:23:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGp88-0003ev-KV
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:23:00 +0000
Received: from [85.158.139.211:34005] by server-3.bemta-5.messagelabs.com id
	AA/CF-13671-32547035; Fri, 21 Feb 2014 12:22:59 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392985369!5418430!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24125 invoked from network); 21 Feb 2014 12:22:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:22:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104644426"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 12:22:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:22:48 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGp7w-0004Y7-Ey; Fri, 21 Feb 2014 12:22:48 +0000
Message-ID: <53074518.5010506@citrix.com>
Date: Fri, 21 Feb 2014 12:22:48 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD02552E8@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD02552E8@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 1/5] xen-netback: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 12:08, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 17 February 2014 17:58
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
>> Vrabel; Andrew Bennieston
>> Subject: [PATCH V4 net-next 1/5] xen-netback: Factor queue-specific data
>> into queue struct.
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> In preparation for multi-queue support in xen-netback, move the
>> queue-specific data from struct xenvif into struct xenvif_queue, and
>> update the rest of the code to use this.
>>
>> Also adds loops over queues where appropriate, even though only one is
>> configured at this point, and uses alloc_netdev_mq() and the
>> corresponding multi-queue netif wake/start/stop functions in preparation
>> for multiple active queues.
>>
>> Finally, implements a trivial queue selection function suitable for
>> ndo_select_queue, which simply returns 0 for a single queue and uses
>> skb_get_hash() to compute the queue index otherwise.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netback/common.h    |   81 ++++--
>>   drivers/net/xen-netback/interface.c |  314 ++++++++++++++-------
>>   drivers/net/xen-netback/netback.c   |  528 ++++++++++++++++++------------
>> -----
>>   drivers/net/xen-netback/xenbus.c    |   87 ++++--
>>   4 files changed, 593 insertions(+), 417 deletions(-)
>>
>> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
>> netback/common.h
>> index ae413a2..2550867 100644
>> --- a/drivers/net/xen-netback/common.h
>> +++ b/drivers/net/xen-netback/common.h
>> @@ -108,17 +108,36 @@ struct xenvif_rx_meta {
>>    */
>>   #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
>> XEN_NETIF_RX_RING_SIZE)
>>
>> -struct xenvif {
>> -	/* Unique identifier for this interface. */
>> -	domid_t          domid;
>> -	unsigned int     handle;
>> +/* Queue name is interface name with "-qNNN" appended */
>> +#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
>> +
>
> '-qNNN' is only 5 chars. Are you accounting for a NUL terminator too?

Almost certainly...

>
>> +/* IRQ name is queue name with "-tx" or "-rx" appended */
>> +#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 4)
>> +
>
> If yes, then you appear to have doubly accounted for it here.

... yup, looks that way. I'll fix this.

>
>> +struct xenvif;
>> +
>> +struct xenvif_stats {
>> +	/* Stats fields to be updated per-queue.
>> +	 * A subset of struct net_device_stats that contains only the
>> +	 * fields that are updated in netback.c for each queue.
>> +	 */
>> +	unsigned int rx_bytes;
>> +	unsigned int rx_packets;
>> +	unsigned int tx_bytes;
>> +	unsigned int tx_packets;
>> +};
>> +
>> +struct xenvif_queue { /* Per-queue data for xenvif */
>> +	unsigned int id; /* Queue ID, 0-based */
>> +	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
>> +	struct xenvif *vif; /* Parent VIF */
>>
>>   	/* Use NAPI for guest TX */
>>   	struct napi_struct napi;
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int tx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
>> +	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
>>   	struct xen_netif_tx_back_ring tx;
>>   	struct sk_buff_head tx_queue;
>>   	struct page *mmap_pages[MAX_PENDING_REQS];
>> @@ -140,19 +159,34 @@ struct xenvif {
>>   	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>>   	unsigned int rx_irq;
>>   	/* Only used when feature-split-event-channels = 1 */
>> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
>> +	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
>>   	struct xen_netif_rx_back_ring rx;
>>   	struct sk_buff_head rx_queue;
>>   	RING_IDX rx_last_skb_slots;
>>
>> -	/* This array is allocated seperately as it is large */
>> -	struct gnttab_copy *grant_copy_op;
>> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
>>
>>   	/* We create one meta structure per ring request we consume, so
>>   	 * the maximum number is the same as the ring size.
>>   	 */
>>   	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
>>
>> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> +	unsigned long   credit_bytes;
>> +	unsigned long   credit_usec;
>> +	unsigned long   remaining_credit;
>> +	struct timer_list credit_timeout;
>> +	u64 credit_window_start;
>> +
>> +	/* Statistics */
>> +	struct xenvif_stats stats;
>> +};
>> +
>> +struct xenvif {
>> +	/* Unique identifier for this interface. */
>> +	domid_t          domid;
>> +	unsigned int     handle;
>> +
>>   	u8               fe_dev_addr[6];
>>
>>   	/* Frontend feature information. */
>> @@ -166,15 +200,12 @@ struct xenvif {
>>   	/* Internal feature information. */
>>   	u8 can_queue:1;	    /* can queue packets for receiver? */
>>
>> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
>> -	unsigned long   credit_bytes;
>> -	unsigned long   credit_usec;
>> -	unsigned long   remaining_credit;
>> -	struct timer_list credit_timeout;
>> -	u64 credit_window_start;
>> +	/* Queues */
>> +	unsigned int num_queues;
>> +	struct xenvif_queue *queues;
>>
>>   	/* Statistics */
>> -	unsigned long rx_gso_checksum_fixup;
>> +	atomic_t rx_gso_checksum_fixup;
>
> Any reason why this is not in xenvif_stats? If  it were there then it would not need to be atomic.
The expectation was that it wouldn't be updated very often, so an atomic 
operation here wouldn't hurt. I can move it to xenvif_stats if you'd 
prefer, though. I think the use of an atomic pre-dated the xenvif_stats 
struct, so it probably makes sense to move it there now.

Andrew.

>
>    Paul
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:23:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp8J-0003gp-9S; Fri, 21 Feb 2014 12:23:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WGp8H-0003gT-Oq
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:23:09 +0000
Received: from [85.158.143.35:17029] by server-2.bemta-4.messagelabs.com id
	98/00-10891-D2547035; Fri, 21 Feb 2014 12:23:09 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1392985387!7343420!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2694 invoked from network); 21 Feb 2014 12:23:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:23:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102960316"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 12:22:42 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:22:41 -0500
Message-ID: <53074510.2040408@citrix.com>
Date: Fri, 21 Feb 2014 12:22:40 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Andrew Bennieston <andrew.bennieston@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
	<53074490.8030806@citrix.com>
In-Reply-To: <53074490.8030806@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> +    char *xspath = NULL;
>>
>> I don't think you need the NULL init here. xspath is set in both
>> branches of the if statement below.
> 
> Indeed, but I prefer to initialise things sanely where possible. It
> makes it easier to spot problems with later modifications of the code,
> e.g. if one of those branches changed.

Kernel style (though probably not documented anywhere) is to initialize
locals only where necessary.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:24:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp9D-0003pN-PI; Fri, 21 Feb 2014 12:24:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGp9C-0003pB-FQ
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:24:06 +0000
Received: from [85.158.139.211:7788] by server-12.bemta-5.messagelabs.com id
	16/6C-15415-56547035; Fri, 21 Feb 2014 12:24:05 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392985442!1468193!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13595 invoked from network); 21 Feb 2014 12:24:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:24:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104644647"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 12:24:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:24:01 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGp97-0004ZT-I8; Fri, 21 Feb 2014 12:24:01 +0000
Message-ID: <53074561.60503@citrix.com>
Date: Fri, 21 Feb 2014 12:24:01 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255359@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0255359@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 5/5] xen-net{back,
 front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 12:22, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 17 February 2014 17:58
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
>> Vrabel; Andrew Bennieston
>> Subject: [PATCH V4 net-next 5/5] xen-net{back,front}: Document multi-
>> queue feature in netif.h
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Document the multi-queue feature in terms of XenStore keys to be written
>> by the backend and by the frontend.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/include/xen/interface/io/netif.h
>> b/include/xen/interface/io/netif.h
>> index c50061d..8868c51 100644
>> --- a/include/xen/interface/io/netif.h
>> +++ b/include/xen/interface/io/netif.h
>> @@ -51,6 +51,27 @@
>>    */
>>
>>   /*
>> + * Multiple transmit and receive queues:
>> + * If supported, the backend will write "multi-queue-max-queues" and set
>> its
>> + * value to the maximum supported number of queues.
>> + * Frontends that are aware of this feature and wish to use it can write the
>> + * key "multi-queue-num-queues", set to the number they wish to use.
>> + *
>> + * Queues replicate the shared rings and event channels, and
>> + * "feature-split-event-channels" is required when using multiple queues.
>> + *
>
> Is it? The code in patch 2 appears to cope with the "event-channel" key as well as the split variants regardless of the number of queues being used. Am I missing some other restriction?
>
Hmm, perhaps I just assumed that limitation. I'll check and update the 
doc accordingly.

Andrew

>    Paul
>
>> + * For frontends requesting just one queue, the usual event-channel and
>> + * ring-ref keys are written as before, simplifying the backend processing
>> + * to avoid distinguishing between a frontend that doesn't understand the
>> + * multi-queue feature, and one that does, but requested only one queue.
>> + *
>> + * Frontends requesting two or more queues must not write the toplevel
>> + * event-channel and ring-ref keys, instead writing them under sub-keys
>> having
>> + * the name "queue-N" where N is the integer ID of the queue for which
>> those
>> + * keys belong. Queues are indexed from zero.
>> + */
>> +
>> +/*
>>    * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP
>> checksum
>>    * offload off or on. If it is missing then the feature is assumed to be on.
>>    * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP
>> checksum
>> --
>> 1.7.10.4
>
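For illustration, the key layout the comment describes might look like this for a frontend negotiating two queues; the paths and values below are hypothetical, and the assumption is that the per-queue keys reuse the usual single-queue key names under each "queue-N" sub-key:

```
# Backend advertises its limit:
/local/domain/0/backend/vif/<frontend-domid>/0/multi-queue-max-queues = "8"

# Frontend requests two queues:
/local/domain/<frontend-domid>/device/vif/0/multi-queue-num-queues = "2"

# Ring and event-channel keys go under "queue-N", indexed from zero,
# instead of at the top level:
/local/domain/<frontend-domid>/device/vif/0/queue-0/tx-ring-ref = "..."
/local/domain/<frontend-domid>/device/vif/0/queue-0/rx-ring-ref = "..."
/local/domain/<frontend-domid>/device/vif/0/queue-0/event-channel-tx = "..."
/local/domain/<frontend-domid>/device/vif/0/queue-0/event-channel-rx = "..."
/local/domain/<frontend-domid>/device/vif/0/queue-1/...
```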


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:24:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp9D-0003pN-PI; Fri, 21 Feb 2014 12:24:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGp9C-0003pB-FQ
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:24:06 +0000
Received: from [85.158.139.211:7788] by server-12.bemta-5.messagelabs.com id
	16/6C-15415-56547035; Fri, 21 Feb 2014 12:24:05 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1392985442!1468193!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13595 invoked from network); 21 Feb 2014 12:24:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:24:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104644647"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 12:24:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:24:01 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGp97-0004ZT-I8; Fri, 21 Feb 2014 12:24:01 +0000
Message-ID: <53074561.60503@citrix.com>
Date: Fri, 21 Feb 2014 12:24:01 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Paul Durrant <Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-6-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255359@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD0255359@AMSPEX01CL01.citrite.net>
X-DLP: MIA1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 5/5] xen-net{back,
 front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 12:22, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
>> Sent: 17 February 2014 17:58
>> To: xen-devel@lists.xenproject.org
>> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; David
>> Vrabel; Andrew Bennieston
>> Subject: [PATCH V4 net-next 5/5] xen-net{back,front}: Document multi-
>> queue feature in netif.h
>>
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Document the multi-queue feature in terms of XenStore keys to be written
>> by the backend and by the frontend.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   include/xen/interface/io/netif.h |   21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
>> index c50061d..8868c51 100644
>> --- a/include/xen/interface/io/netif.h
>> +++ b/include/xen/interface/io/netif.h
>> @@ -51,6 +51,27 @@
>>    */
>>
>>   /*
>> + * Multiple transmit and receive queues:
>> + * If supported, the backend will write "multi-queue-max-queues" and set its
>> + * value to the maximum supported number of queues.
>> + * Frontends that are aware of this feature and wish to use it can write the
>> + * key "multi-queue-num-queues", set to the number they wish to use.
>> + *
>> + * Queues replicate the shared rings and event channels, and
>> + * "feature-split-event-channels" is required when using multiple queues.
>> + *
>
> Is it? The code in patch 2 appears to cope with the "event-channel" key as well as the split variants regardless of the number of queues being used. Am I missing some other restriction?
>
Hmm, perhaps I just assumed that limitation. I'll check and update the 
doc accordingly.

Andrew

>    Paul
>
>> + * For frontends requesting just one queue, the usual event-channel and
>> + * ring-ref keys are written as before, simplifying the backend processing
>> + * to avoid distinguishing between a frontend that doesn't understand the
>> + * multi-queue feature, and one that does, but requested only one queue.
>> + *
>> + * Frontends requesting two or more queues must not write the toplevel
>> + * event-channel and ring-ref keys, instead writing them under sub-keys
>> + * named "queue-N", where N is the integer ID of the queue to which those
>> + * keys belong. Queues are indexed from zero.
>> + */
>> +
>> +/*
>>    * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
>>    * offload off or on. If it is missing then the feature is assumed to be on.
>>    * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
>> --
>> 1.7.10.4
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:24:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp9Z-0003sy-7O; Fri, 21 Feb 2014 12:24:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WGp9X-0003sa-4z
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:24:27 +0000
Received: from [193.109.254.147:23879] by server-12.bemta-14.messagelabs.com
	id E8/96-17220-A7547035; Fri, 21 Feb 2014 12:24:26 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1392985464!647174!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21655 invoked from network); 21 Feb 2014 12:24:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:24:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102960924"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 12:24:24 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 07:24:24 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WGp9T-0004Zi-PT; Fri, 21 Feb 2014 12:24:23 +0000
Message-ID: <53074577.7020403@citrix.com>
Date: Fri, 21 Feb 2014 12:24:23 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
	<53074490.8030806@citrix.com> <53074510.2040408@citrix.com>
In-Reply-To: <53074510.2040408@citrix.com>
X-DLP: MIA2
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 12:22, David Vrabel wrote:
>>>> +    char *xspath = NULL;
>>>
>>> I don't think you need the NULL init here. xspath is set in both
>>> branches of the if statement below.
>>
>> Indeed, but I prefer to initialise things sanely where possible. It
>> makes it easier to spot problems with later modifications of the code,
>> e.g. if one of those branches changed.
>
> Kernel style (but probably not documented anywhere) is to only
> initialize locals where necessary.
>
> David
>
Ok, I'll remove the NULL initialiser then.

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:24:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGp9z-0003yy-Uq; Fri, 21 Feb 2014 12:24:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WGp9y-0003yb-LR
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 12:24:54 +0000
Received: from [85.158.143.35:31465] by server-3.bemta-4.messagelabs.com id
	C8/0A-11539-69547035; Fri, 21 Feb 2014 12:24:54 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1392985493!7359014!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15617 invoked from network); 21 Feb 2014 12:24:53 -0000
Received: from smtp.ctxuk.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 12:24:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; 
   d="scan'208";a="9717954"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 21 Feb 2014 12:24:53 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL03.citrite.net ([169.254.8.48]) with mapi id 14.02.0342.004;
	Fri, 21 Feb 2014 13:24:52 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V4 net-next 1/5] xen-netback: Factor queue-specific
	data into queue struct.
Thread-Index: AQHPLApeFYeYsdekekiHJlZu2kAQT5q/mzHg///7OQCAABEoYA==
Date: Fri, 21 Feb 2014 12:24:52 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD0255399@AMSPEX01CL01.citrite.net>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-2-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD02552E8@AMSPEX01CL01.citrite.net>
	<53074518.5010506@citrix.com>
In-Reply-To: <53074518.5010506@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 1/5] xen-netback: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
[snip]
> >
> > Any reason why this is not in xenvif_stats? If it were there then it would
> > not need to be atomic.
> The expectation was that it wouldn't be used very often, so an atomic
> op. here wouldn't hurt. I can move it to xenvif_stats if you'd prefer,
> though. I think the use of an atomic pre-dated the xenvif_stats struct,
> so maybe it makes sense to move it there now.
> 

I think so. Seems odd to leave it out on its own.

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 12:50:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 12:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGpYN-00054H-Vw; Fri, 21 Feb 2014 12:50:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <luis.henriques@canonical.com>) id 1WGpYN-00054C-2Z
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 12:50:07 +0000
Received: from [85.158.137.68:50505] by server-16.bemta-3.messagelabs.com id
	55/9D-29917-E7B47035; Fri, 21 Feb 2014 12:50:06 +0000
X-Env-Sender: luis.henriques@canonical.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1392987005!3384591!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19476 invoked from network); 21 Feb 2014 12:50:05 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-15.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 12:50:05 -0000
Received: from bl20-128-115.dsl.telepac.pt ([2.81.128.115] helo=localhost)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.71)
	(envelope-from <luis.henriques@canonical.com>)
	id 1WGpYH-0001zG-NZ; Fri, 21 Feb 2014 12:50:02 +0000
From: Luis Henriques <luis.henriques@canonical.com>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org,
	kernel-team@lists.ubuntu.com
Date: Fri, 21 Feb 2014 12:47:43 +0000
Message-Id: <1392986945-9693-40-git-send-email-luis.henriques@canonical.com>
X-Mailer: git-send-email 1.9.0
In-Reply-To: <1392986945-9693-1-git-send-email-luis.henriques@canonical.com>
References: <1392986945-9693-1-git-send-email-luis.henriques@canonical.com>
X-Extended-Stable: 3.11
Cc: Prarit Bhargava <prarit@redhat.com>,
	Luis Henriques <luis.henriques@canonical.com>,
	Richard Cochran <richardcochran@gmail.com>,
	xen-devel@lists.xen.org, John Stultz <john.stultz@linaro.org>,
	David Vrabel <david.vrabel@citrix.com>,
	Sasha Levin <sasha.levin@oracle.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@kernel.org>
Subject: [Xen-devel] [PATCH 3.11 039/121] timekeeping: Fix potential lost pv
	notification of time change
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

3.11.10.5 -stable review patch.  If anyone has any objections, please let me know.

------------------

From: John Stultz <john.stultz@linaro.org>

commit 5258d3f25c76f6ab86e9333abf97a55a877d3870 upstream.

In 780427f0e11 (Indicate that clock was set in the pvclock
gtod notifier), logic was added to pass a CLOCK_WAS_SET
notification to the pvclock notifier chain.

While that patch added an action flag returned from
accumulate_nsecs_to_secs(), it only used the returned value
in one location, and not in the logarithmic accumulation.

This means if a leap second triggered during the logarithmic
accumulation (which is most likely where it would happen),
the notification that the clock was set would not make it to
the pv notifiers.

This patch extends logarithmic_accumulation() to pass down
that action flag so proper notification will occur.

It also renames the variable "action" to "clock_set",
per Ingo's suggestion.

Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: <xen-devel@lists.xen.org>
Signed-off-by: John Stultz <john.stultz@linaro.org>
Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
---
 kernel/time/timekeeping.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
index 22f3ae2..7b96f30 100644
--- a/kernel/time/timekeeping.c
+++ b/kernel/time/timekeeping.c
@@ -1256,7 +1256,7 @@ out_adjust:
 static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 {
 	u64 nsecps = (u64)NSEC_PER_SEC << tk->shift;
-	unsigned int action = 0;
+	unsigned int clock_set = 0;
 
 	while (tk->xtime_nsec >= nsecps) {
 		int leap;
@@ -1279,10 +1279,10 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
 			__timekeeping_set_tai_offset(tk, tk->tai_offset - leap);
 
 			clock_was_set_delayed();
-			action = TK_CLOCK_WAS_SET;
+			clock_set = TK_CLOCK_WAS_SET;
 		}
 	}
-	return action;
+	return clock_set;
 }
 
 /**
@@ -1295,7 +1295,8 @@ static inline unsigned int accumulate_nsecs_to_secs(struct timekeeper *tk)
  * Returns the unconsumed cycles.
  */
 static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
-						u32 shift)
+						u32 shift,
+						unsigned int *clock_set)
 {
 	cycle_t interval = tk->cycle_interval << shift;
 	u64 raw_nsecs;
@@ -1309,7 +1310,7 @@ static cycle_t logarithmic_accumulation(struct timekeeper *tk, cycle_t offset,
 	tk->cycle_last += interval;
 
 	tk->xtime_nsec += tk->xtime_interval << shift;
-	accumulate_nsecs_to_secs(tk);
+	*clock_set |= accumulate_nsecs_to_secs(tk);
 
 	/* Accumulate raw time */
 	raw_nsecs = (u64)tk->raw_interval << shift;
@@ -1367,7 +1368,7 @@ static void update_wall_time(void)
 	struct timekeeper *tk = &shadow_timekeeper;
 	cycle_t offset;
 	int shift = 0, maxshift;
-	unsigned int action;
+	unsigned int clock_set = 0;
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&timekeeper_lock, flags);
@@ -1402,7 +1403,8 @@ static void update_wall_time(void)
 	maxshift = (64 - (ilog2(ntp_tick_length())+1)) - 1;
 	shift = min(shift, maxshift);
 	while (offset >= tk->cycle_interval) {
-		offset = logarithmic_accumulation(tk, offset, shift);
+		offset = logarithmic_accumulation(tk, offset, shift,
+							&clock_set);
 		if (offset < tk->cycle_interval<<shift)
 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
-- 
1.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 			shift--;
 	}
@@ -1420,7 +1422,7 @@ static void update_wall_time(void)
 	 * Finally, make sure that after the rounding
 	 * xtime_nsec isn't larger than NSEC_PER_SEC
 	 */
-	action = accumulate_nsecs_to_secs(tk);
+	clock_set |= accumulate_nsecs_to_secs(tk);
 
 	write_seqcount_begin(&timekeeper_seq);
 	/* Update clock->cycle_last with the new value */
@@ -1436,7 +1438,7 @@ static void update_wall_time(void)
 	 * updating.
 	 */
 	memcpy(real_tk, tk, sizeof(*tk));
-	timekeeping_update(real_tk, action);
+	timekeeping_update(real_tk, clock_set);
 	write_seqcount_end(&timekeeper_seq);
 out:
 	raw_spin_unlock_irqrestore(&timekeeper_lock, flags);
-- 
1.9.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:03:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGpko-0005Sj-EI; Fri, 21 Feb 2014 13:02:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGpkn-0005SV-4d
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:02:57 +0000
Received: from [85.158.143.35:21392] by server-2.bemta-4.messagelabs.com id
	45/71-10891-08E47035; Fri, 21 Feb 2014 13:02:56 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392987773!7366646!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28963 invoked from network); 21 Feb 2014 13:02:55 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:02:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104651946"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:02:38 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 08:02:37 -0500
Message-ID: <53074E6B.6030006@citrix.com>
Date: Fri, 21 Feb 2014 13:02:35 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>, Stephen Hemminger
	<stephen@networkplumber.org>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
	<20140220091958.62a8b444@nehalam.linuxnetplumber.net>
	<CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
In-Reply-To: <CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 20:24, Luis R. Rodriguez wrote:
> On Thu, Feb 20, 2014 at 9:19 AM, Stephen Hemminger
> <stephen@networkplumber.org> wrote:
>> On Wed, 19 Feb 2014 09:59:33 -0800 "Luis R. Rodriguez" <mcgrof@do-not-panic.com> wrote:
>>> On Wed, Feb 19, 2014 at 9:08 AM, Stephen Hemminger <stephen@networkplumber.org> wrote:
>>>>
>>>> Please only use the netlink/sysfs flags fields that already exist
>>>> for new features.
>>>
>>> Sure, but what if we know a driver in most cases wants the root block
>>> and we'd want to make it the default, thereby only requiring userspace
>>> to toggle it off.
>>
>> Something in userspace has to put the device into the bridge.
>> Fix the port setup in that tool via the netlink or sysfs flags in
>> the bridge. It should not have to be handled in the bridge looking
>> at magic flags in the device.
>
> Agreed that's the best strategy and I'll work on sending patches to
> brctl to enable the root_block preference. This approach however also
I don't think brctl should deal with any Xen-specific stuff. I assume 
there is a misunderstanding in this thread: when I (and possibly other 
Xen folks) talk about "userspace" or "toolstack" here, I mean Xen-specific 
tools which use e.g. brctl to set up bridges. Not brctl itself.
> requires a userspace upgrade. I'm trying to see if we can get an
> old-nasty-cryptic-hack practice removed from the kernel and we'd try
> to prevent future drivers from using it -- without requiring userspace
> upgrade. In this case the bad practice is using a high static MAC
> address to mimic a root block default preference. In order to
> remove that *without* requiring a userspace upgrade the dev->priv_flag
> approach is the only thing I can think of. If this would go in we'd
> replace the high static MAC address with a random MAC address to
> prevent IPv6 SLAAC / DAD conflicts. I'd document this flag and
> indicate a preference for userspace to be the one tuning these
> knobs.
>
> Without this we'd have to keep the high static MAC address on upstream
> drivers and let userspace do the randomization if it confirms the
> userspace knob to turn the root block flag is available. Is the
> priv_flag approach worth the compromise to remove the root block hack
> practice?
>
>    Luis
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:03:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGpko-0005Sc-22; Fri, 21 Feb 2014 13:02:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGpkm-0005SP-6R
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:02:56 +0000
Received: from [85.158.143.35:22671] by server-3.bemta-4.messagelabs.com id
	8B/6A-11539-F7E47035; Fri, 21 Feb 2014 13:02:55 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1392987773!7366646!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28854 invoked from network); 21 Feb 2014 13:02:54 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:02:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="104651915"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:02:35 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 08:02:35 -0500
Message-ID: <53074E69.4060206@citrix.com>
Date: Fri, 21 Feb 2014 13:02:33 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<530600C5.3070107@citrix.com>
	<CAB=NE6WKhBJyyUO5o8B53J+F4PqF2PHvvbV3=yS3qs0ZDRKH7Q@mail.gmail.com>
In-Reply-To: <CAB=NE6WKhBJyyUO5o8B53J+F4PqF2PHvvbV3=yS3qs0ZDRKH7Q@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 20:01, Luis R. Rodriguez wrote:
> On Thu, Feb 20, 2014 at 5:19 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> How about this: netback sets the root_block flag and a random MAC by
>> default. So the default behaviour won't change, DAD will be happy, and
>> userspace don't have to do anything unless it's using netback for STP root
>> bridge (I don't think there are too many toolstacks doing that), in which
>> case it has to remove the root_block flag instead of setting a random MAC.
>
> :D that's exactly what I ended up proposing too. I mentioned how
> xen-netback could do this as well, we'd keep or rename the flag I
> added, and then the bridge code would look at it and enable the root
> block if the flag is set. Stephen however does not like having the
> bridge code look at magic flags for this behavior and would prefer for
> us to get the tools to ask for the root block. Let's follow up more on
> that thread.
We don't need that new flag, just forget about it. Set that root_block 
flag from netback device init, around the time you generate the random 
MAC, or at the earliest possible time. Nothing else has to be done from 
the kernel side. If someone wants netback to be a root port, they remove 
root_block from their tools instead of changing the MAC address, as 
happens now.
Another problem with the random addresses, pointed out by Ian earlier, 
is that when adding/removing interfaces, the bridge recalculates its 
MAC address and chooses the lowest one. In the general use case I think 
that's normal, but in the case of Xen networking we would like to keep 
the bridge using the physical interface's MAC, because the local port of 
the bridge is used for Dom0 network traffic, so changing the bridge 
MAC when a netback device has a lower MAC breaks that traffic. I think 
the best is to address this from userspace: if it sets the MAC of the 
bridge explicitly, dev_set_mac_address() does dev->addr_assign_type = 
NET_ADDR_SET;, so br_stp_recalculate_bridge_id() will exit before 
changing anything.
And when I say userspace, I mean Xen-specific tools which do the 
networking configuration, e.g. xapi in the XenServer case. Not brctl; it 
doesn't have to know whether this is a xenbrX device or a bridge used 
for other purposes.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:03:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGpl6-0005WN-RV; Fri, 21 Feb 2014 13:03:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WGpl5-0005Vz-9J
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:03:15 +0000
Received: from [85.158.139.211:5012] by server-7.bemta-5.messagelabs.com id
	C1/29-14867-29E47035; Fri, 21 Feb 2014 13:03:14 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1392987790!5428356!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24856 invoked from network); 21 Feb 2014 13:03:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:03:12 -0000
X-IronPort-AV: E=Sophos;i="4.97,518,1389744000"; d="scan'208";a="102968264"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 13:02:40 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 08:02:39 -0500
Message-ID: <53074E6C.5080702@citrix.com>
Date: Fri, 21 Feb 2014 13:02:36 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<53050244.1020106@citrix.com>
	<CAB=NE6VswYVF1BOM+vwEh3MaX7sh0gvLqz0U0JoEFDS-EzO9Pg@mail.gmail.com>
In-Reply-To: <CAB=NE6VswYVF1BOM+vwEh3MaX7sh0gvLqz0U0JoEFDS-EzO9Pg@mail.gmail.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA2
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Dan Williams <dcbw@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20/02/14 20:39, Luis R. Rodriguez wrote:
> On Wed, Feb 19, 2014 at 11:13 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> On 19/02/14 17:20, Luis R. Rodriguez wrote:
>>>>> On 19/02/14 17:20, Luis R. Rodriguez also wrote:
>>>>> Zoltan has noted though some use cases of IPv4 or IPv6 addresses on
>>>>> backends though <...>
>>>
>>> As discussed in the other threads though there *are* some use cases
>>> of assigning IPv4 or IPv6 addresses to the backend interfaces:
>>> routing them (although it's unclear to me if iptables can be used
>>> instead, Zoltan?).
>>
>> Not with OVS, it steals the packet before netfilter hooks.
>
> Got it, thanks! Can't the route be added using a front-end IP address
> instead on the host though ? I just tried that on a Xen system and it
> seems to work. Perhaps I'm not understanding the exact topology of the
> routing case. So in my case I have the backend without any IPv4 or
> IPv6 interfaces, the guest has IPv4, IPv6 addresses and even a TUN for
> VPN and I can create routes on the host to the front end by not using
> the backend device name but instead using the front-end target IP.
Check how the current Xen scripts do routed networking:

http://wiki.xen.org/wiki/Xen_Networking#Associating_routes_with_virtual_devices

Note, there are no bridges involved here! As the above page says, the 
backend has to have an IP address, though maybe that's no longer true. I'm 
not too familiar with this setup either; I've used it only once.

Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:17:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:17:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGpz3-0006Hd-C1; Fri, 21 Feb 2014 13:17:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <David.Laight@ACULAB.COM>) id 1WGpz1-0006HY-KY
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:17:39 +0000
Received: from [85.158.139.211:22815] by server-1.bemta-5.messagelabs.com id
	2D/B0-12859-2F157035; Fri, 21 Feb 2014 13:17:38 +0000
X-Env-Sender: David.Laight@ACULAB.COM
X-Msg-Ref: server-16.tower-206.messagelabs.com!1392988658!5433481!1
X-Originating-IP: [213.249.233.131]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22547 invoked from network); 21 Feb 2014 13:17:38 -0000
Received: from mx0.aculab.com (HELO mx0.aculab.com) (213.249.233.131)
	by server-16.tower-206.messagelabs.com with SMTP;
	21 Feb 2014 13:17:38 -0000
Received: (qmail 15585 invoked from network); 21 Feb 2014 13:17:37 -0000
Received: from localhost (127.0.0.1)
	by mx0.aculab.com with SMTP; 21 Feb 2014 13:17:37 -0000
Received: from mx0.aculab.com ([127.0.0.1])
	by localhost (mx0.aculab.com [127.0.0.1]) (amavisd-new,
	port 10024) with SMTP
	id 14164-08 for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 13:17:36 +0000 (GMT)
Received: (qmail 15540 invoked by uid 599); 21 Feb 2014 13:17:36 -0000
Received: from unknown (HELO AcuExch.aculab.com) (10.202.163.4)
	by mx0.aculab.com (qpsmtpd/0.28) with ESMTP;
	Fri, 21 Feb 2014 13:17:36 +0000
Received: from ACUEXCH.Aculab.com ([::1]) by AcuExch.aculab.com ([::1]) with
	mapi id 14.03.0123.003; Fri, 21 Feb 2014 13:16:52 +0000
From: David Laight <David.Laight@ACULAB.COM>
To: 'Andrew Bennieston' <andrew.bennieston@citrix.com>, Paul Durrant
	<Paul.Durrant@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V4 net-next 2/5] xen-netback: Add support for multiple
	queues
Thread-Index: AQHPLv8+2dHLEOTD+EOUF3RysPkb7Zq/oKWA
Date: Fri, 21 Feb 2014 13:16:51 +0000
Message-ID: <063D6719AE5E284EB5DD2968C1650D6D0F6C813A@AcuExch.aculab.com>
References: <1392659880-2538-1-git-send-email-andrew.bennieston@citrix.com>
	<1392659880-2538-3-git-send-email-andrew.bennieston@citrix.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD0255316@AMSPEX01CL01.citrite.net>
	<53074490.8030806@citrix.com>
In-Reply-To: <53074490.8030806@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.202.99.200]
MIME-Version: 1.0
X-Virus-Scanned: by iCritical at mx0.aculab.com
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V4 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Andrew Bennieston
> >> +	char *xspath = NULL;
> >
> > I don't think you need the NULL init here. xspath is set in both branches of the if statement below.
> 
> Indeed, but I prefer to initialise things sanely where possible. It
> makes it easier to spot problems with later modifications of the code,
> e.g. if one of those branches changed.

If you don't initialise it, the compiler is likely to detect the
fact that one code path hasn't set it.
If you initialise it, the compiler doesn't.

	David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:19:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGq10-0006Zb-Sn; Fri, 21 Feb 2014 13:19:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WGq10-0006ZS-0P
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:19:42 +0000
Received: from [85.158.139.211:8743] by server-6.bemta-5.messagelabs.com id
	EE/63-14342-B6257035; Fri, 21 Feb 2014 13:19:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392988778!879137!1
X-Originating-IP: [74.125.83.45]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26247 invoked from network); 21 Feb 2014 13:19:39 -0000
Received: from mail-ee0-f45.google.com (HELO mail-ee0-f45.google.com)
	(74.125.83.45)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:19:39 -0000
Received: by mail-ee0-f45.google.com with SMTP id e53so48629eek.4
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 05:19:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=ZaJ+1fZEoxumqz+eSL5e/c1qcyYVh6Y4l0EhMLUDMg8=;
	b=VAagSLHCB6MaDGzqf+diFb95hcd8rPdmeNPi75tJhq5Vz1ame/2lYk2tNduqeD92Z1
	1/QIgafHiOwUfwy+2eP4PHcVoVml+gUd7OYpe+M105/ahhVwlnBt+ZzKJ63N4JWmEAAc
	09vV6v3YYcKxLDyPHAEqkM58aHUhQwE0GszKj9RrEAJN1CRvUVb/dzTP0Z8eqB+wHdrL
	k38/x6d/YCTe7/ycf7b29lfRMvXKCbBg1Noqo8s0l0AOrOuIvKpUDG6Jz3MPwGSR7mLn
	uNnf81vMxzzJISREJo3o2OmIcHvujOT1QJp98NNI795Ic6hT/Pm9GHPLXc9BiSyO0xKF
	jUAA==
X-Gm-Message-State: ALoCoQkNr3uSLaYLncZ/sP9e/Zpk+mBw9hfksjMzeZnZ/DqE0w2AfoOt1ADBE53LMD1c2Naf0OtU
X-Received: by 10.14.174.5 with SMTP id w5mr8566588eel.14.1392988778454;
	Fri, 21 Feb 2014 05:19:38 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	j41sm26450326eey.15.2014.02.21.05.19.37 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 21 Feb 2014 05:19:37 -0800 (PST)
Message-ID: <53075268.9090806@linaro.org>
Date: Fri, 21 Feb 2014 13:19:36 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-7-git-send-email-julien.grall@linaro.org>
	<1392810675.29739.15.camel@kazak.uk.xensource.com>
	<5304C13C.5060505@linaro.org>
	<1392820736.29739.86.camel@kazak.uk.xensource.com>
	<5304C4E1.2070901@linaro.org>
	<5304D6BC020000780011DC4F@nat28.tlf.novell.com>
	<53066A1F.8020203@linaro.org>
	<53072299020000780011E259@nat28.tlf.novell.com>
In-Reply-To: <53072299020000780011E259@nat28.tlf.novell.com>
Cc: "Keir \(Xen.org\)" <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 6/8] xen/arm: IRQ: Add lock
	contrainst for gic_irq_{startup, shutdown}
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/21/2014 08:55 AM, Jan Beulich wrote:
>>>> On 20.02.14 at 21:48, Julien Grall <julien.grall@linaro.org> wrote:
>> Before the clean-up there were 8 distinct startup routines for x86. Now
>> there are only 2:
>>   - drivers/passthrough/amd/iommu_init.c: iommu_maskable_msi_startup
>>   - arch/x86/ioapic.c: startup_edge_ioapic_irq
>>
>> For the latter one, I'm a bit surprised that the function can return 1,
>> but the result is never used.
> 
> Which means consumption of the return value was intended, but
> never implemented (or lost _very_ long ago). Looking at the Linux
> code, the intention apparently would be for the non-zero return
> value to propagate into IRQ_PENDING in one very special case we
> didn't ever support (auto-probing). Re-sending of an already
> pending interrupt is being handled differently there anyway. So if
> needed something like the setting of IRQ_PENDING at some point,
> I guess we could as well have the startup routine do this itself. I.e.
> I think converting the return value to void is still fine, as long as
> you leave some commentary in
> arch/x86/ioapic.c:startup_edge_ioapic_irq().

I will send the patch to change startup prototype separately later.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:37:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqIY-00071N-6B; Fri, 21 Feb 2014 13:37:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqIU-00070j-Q0
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:37:47 +0000
Received: from [193.109.254.147:4859] by server-4.bemta-14.messagelabs.com id
	9E/3B-32066-AA657035; Fri, 21 Feb 2014 13:37:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392989862!5930386!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21073 invoked from network); 21 Feb 2014 13:37:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:37:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663181"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:37:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:37:40 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0008LA-Oh;
	Fri, 21 Feb 2014 13:37:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0006nL-HT;
	Fri, 21 Feb 2014 13:37:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:37:31 +0000
Message-ID: <1392989851-26051-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 3/3] ts-hosts-allocate-Executive:
	Compress debug output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Send the voluminous host allocation debug output to a compressed
logfile "hosts-allocate.debug.gz".  Also send a copy of the logm
output to the debug log, by manipulating $logm_handle.

Remove the bodge which unshifted -D onto the arguments, and then
parsed it.  Now the script takes no options.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 0f967e2..707c6a7 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -29,15 +29,12 @@ tsreadconfig();
 
 open DEBUG, ">/dev/null" or die $!;
 
-unshift @ARGV, '-D';
-
 while (@ARGV and $ARGV[0] =~ m/^-/) {
     $_= shift @ARGV;
     last if m/^--$/;
     while (m/^-./) {
-        if (s/^-D/-/) {
-            open DEBUG, ">&STDERR" or die $!;
-            DEBUG->autoflush(1);
+        if (0) {
+	    # no options
         } else {
             die "$_ ?";
         }
@@ -63,6 +60,17 @@ sub setup () {
 
     $taskid= findtask();
 
+    my $logbase = "hosts-allocate.debug.gz";
+    my $logfh = open_unique_stashfile \$logbase;
+    my $logchild = open DEBUG, "|-";  defined $logchild or die $!;
+    if (!$logchild) {
+	open STDOUT, ">&", $logfh or die $!;
+	exec "gzip" or die $!;
+    }
+    DEBUG->autoflush(1);
+    logm("host allocation debug log in $logbase");
+    $logm_handle = [ $logm_handle, \*DEBUG ];
+
     $fi= $dbh_tests->selectrow_hashref(<<END, {}, $flight);
         SELECT * FROM flights
          WHERE flight = ?
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:37:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqIU-00070o-Pu; Fri, 21 Feb 2014 13:37:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqIT-00070a-Ej
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:37:45 +0000
Received: from [193.109.254.147:4740] by server-6.bemta-14.messagelabs.com id
	C4/61-03396-8A657035; Fri, 21 Feb 2014 13:37:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392989862!5930386!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20920 invoked from network); 21 Feb 2014 13:37:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:37:44 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663177"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:37:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:37:39 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0008L7-Hx;
	Fri, 21 Feb 2014 13:37:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0006nG-Bb;
	Fri, 21 Feb 2014 13:37:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:37:30 +0000
Message-ID: <1392989851-26051-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 2/3] TestSupport: logm: allow
	$logm_handle to be an aref
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If $logm_handle is an array reference, iterate over it.  This allows
calling code to duplicate the messages.
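
The dispatch this adds can be sketched in isolation like so (illustrative Perl; logm_sketch is a made-up name, not the TestSupport.pm source):

```perl
#!/usr/bin/perl -w
# $logm_handle may be a single filehandle or an array ref of
# filehandles; in the aref case every handle receives a copy of each
# message, which is how callers duplicate the log stream.
use strict;
use IO::Handle;

our $logm_handle = \*STDOUT;

sub logm_sketch ($) {
    my ($m) = @_;
    foreach my $h ((ref($logm_handle) eq 'ARRAY')
                   ? @$logm_handle : $logm_handle) {
        print $h "$m\n" or die $!;
        $h->flush or die $!;
    }
}

logm_sketch("to stdout only");            # single handle: as before

open my $extra, ">", "copy.log" or die $!;
$logm_handle = [ \*STDOUT, $extra ];      # aref: messages are duplicated
logm_sketch("to stdout and copy.log");
close $extra or die $!;
```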

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 31748b1..b2f0b22 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -175,11 +175,14 @@ sub ts_get_host_guest { # pass this @ARGV
 sub logm ($) {
     my ($m) = @_;
     my @t = gmtime;
-    printf $logm_handle "%04d-%02d-%02d %02d:%02d:%02d Z %s\n",
-        $t[5]+1900,$t[4]+1,$t[3], $t[2],$t[1],$t[0],
-        $m
-    or die $!;
-    $logm_handle->flush or die $!;
+    my $fm = sprintf "%04d-%02d-%02d %02d:%02d:%02d Z %s\n",
+		$t[5]+1900,$t[4]+1,$t[3], $t[2],$t[1],$t[0],
+		$m;
+    foreach my $h ((ref($logm_handle) eq 'ARRAY')
+		   ? @$logm_handle : $logm_handle) {
+	print $h $fm or die $!;
+	$h->flush or die $!;
+    }
 }
 
 sub fail ($) {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:37:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqIX-000719-7r; Fri, 21 Feb 2014 13:37:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqIU-00070b-CM
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:37:46 +0000
Received: from [193.109.254.147:63666] by server-15.bemta-14.messagelabs.com
	id 69/7F-10839-9A657035; Fri, 21 Feb 2014 13:37:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392989862!5930386!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20981 invoked from network); 21 Feb 2014 13:37:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:37:44 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663170"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:37:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:37:39 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0008L4-C5;
	Fri, 21 Feb 2014 13:37:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0006nB-4b;
	Fri, 21 Feb 2014 13:37:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:37:29 +0000
Message-ID: <1392989851-26051-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 1/3] Executive: support DebugFh xparam
	to alloc_resources
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This redirects some of the more verbose output (the json dumps)
elsewhere.  If DebugFh is not set, this output is suppressed.

ts-hosts-allocate-Executive provides DebugFh pointing to its DEBUG fh.
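
The pattern is a debug-print closure built once from the xparams, so call sites invoke it unconditionally. A minimal sketch (names are illustrative; this is not the Osstest::Executive source):

```perl
#!/usr/bin/perl -w
# Build a closure from an optional DebugFh parameter: if a filehandle
# was supplied, the closure prints to it; otherwise it is a no-op and
# the verbose output is suppressed.
use strict;

sub make_debugm {
    my (%xparams) = @_;
    my $debugfh = $xparams{DebugFh};
    return $debugfh
        ? sub { print $debugfh @_, "\n" or die $!; }  # verbose output
        : sub { };                                    # no DebugFh: drop it
}

# With a DebugFh (here an in-memory handle), output is captured:
open my $buf_fh, ">", \my $buf or die $!;
my $debugm = make_debugm(DebugFh => $buf_fh);
$debugm->("base plan = ", "{...}");
close $buf_fh or die $!;
print $buf;                          # "base plan = {...}\n"

# Without one, the same call is a silent no-op:
make_debugm()->("never seen");
```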

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm        |   11 +++++++++--
 ts-hosts-allocate-Executive |    1 +
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index ac7b734..33f12e4 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -362,6 +362,11 @@ sub alloc_resources {
 
     logm("resource allocation: starting...");
 
+    my $debugfh = $xparams{DebugFh};
+    my $debugm = $debugfh
+	? sub { print $debugfh @_, "\n" or die $!; }
+        : sub { };
+
     my $set_info= sub {
         return if grep { !defined } @_;
         my @s;
@@ -449,7 +454,8 @@ sub alloc_resources {
 		read($qserv, $jplan, $jplanlen) == $jplanlen or die $!;
 		my $jplanprint= $jplan;
 		chomp $jplanprint;
-		logm("resource allocation: base plan $jplanprint");
+		logm("resource allocation: obtained base plan.");
+		$debugm->("base plan = ", $jplanprint);
 		$plan= from_json($jplan);
 	    }, sub {
 		if (!eval {
@@ -465,7 +471,8 @@ sub alloc_resources {
 	    if ($bookinglist && $ok!=-1) {
 		my $jbookings= to_json($bookinglist);
                 chomp($jbookings);
-                logm("resource allocation: booking $jbookings");
+                logm("resource allocation: booking.");
+		$debugm->("bookings = ", $jbookings);
 
 		printf $qserv "book-resources %d\n", length $jbookings
 		    or die $!;
diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 05046e3..0f967e2 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -526,6 +526,7 @@ sub alloc_hosts () {
     alloc_resources(WaitStart =>
                     ($ENV{OSSTEST_RESOURCE_WAITSTART} || $fi->{started}),
                     WaitStartAdjust => $waitstartadjust,
+		    DebugFh => \*DEBUG,
                     \&attempt_allocation);
 
     foreach my $hid (@hids) {
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:37:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqIY-00071N-6B; Fri, 21 Feb 2014 13:37:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqIU-00070j-Q0
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:37:47 +0000
Received: from [193.109.254.147:4859] by server-4.bemta-14.messagelabs.com id
	9E/3B-32066-AA657035; Fri, 21 Feb 2014 13:37:46 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1392989862!5930386!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21073 invoked from network); 21 Feb 2014 13:37:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:37:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663181"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:37:40 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:37:40 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0008LA-Oh;
	Fri, 21 Feb 2014 13:37:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0006nL-HT;
	Fri, 21 Feb 2014 13:37:34 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:37:31 +0000
Message-ID: <1392989851-26051-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 3/3] ts-hosts-allocate-Executive:
	Compress debug output
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Send the voluminous host allocation debug output to a compressed
logfile "hosts-allocate.debug.gz".  Also send a copy of the logm
output to the debug log, by manipulating $logm_handle.

Remove the bodge which unshifted -D onto the arguments, and then
parsed it.  Now the script takes no options.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive |   18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 0f967e2..707c6a7 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -29,15 +29,12 @@ tsreadconfig();
 
 open DEBUG, ">/dev/null" or die $!;
 
-unshift @ARGV, '-D';
-
 while (@ARGV and $ARGV[0] =~ m/^-/) {
     $_= shift @ARGV;
     last if m/^--$/;
     while (m/^-./) {
-        if (s/^-D/-/) {
-            open DEBUG, ">&STDERR" or die $!;
-            DEBUG->autoflush(1);
+        if (0) {
+	    # no options
         } else {
             die "$_ ?";
         }
@@ -63,6 +60,17 @@ sub setup () {
 
     $taskid= findtask();
 
+    my $logbase = "hosts-allocate.debug.gz";
+    my $logfh = open_unique_stashfile \$logbase;
+    my $logchild = open DEBUG, "|-";  defined $logchild or die $!;
+    if (!$logchild) {
+	open STDOUT, ">&", $logfh or die $!;
+	exec "gzip" or die $!;
+    }
+    DEBUG->autoflush(1);
+    logm("host allocation debug log in $logbase");
+    $logm_handle = [ $logm_handle, \*DEBUG ];
+
     $fi= $dbh_tests->selectrow_hashref(<<END, {}, $flight);
         SELECT * FROM flights
          WHERE flight = ?
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:37:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqIN-00070O-RN; Fri, 21 Feb 2014 13:37:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqIM-00070J-HG
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:37:38 +0000
Received: from [85.158.139.211:34413] by server-5.bemta-5.messagelabs.com id
	23/52-32749-1A657035; Fri, 21 Feb 2014 13:37:37 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1392989855!883490!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7264 invoked from network); 21 Feb 2014 13:37:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:37:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="102978878"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 13:37:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:37:34 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqII-0008L1-5D;
	Fri, 21 Feb 2014 13:37:34 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqIH-0006n8-T1;
	Fri, 21 Feb 2014 13:37:33 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:37:28 +0000
Message-ID: <1392989851-26051-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 0/3] Compress host allocation logs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These are voluminous and will compress well.  We do want to keep the
whole log, because it is sometimes necessary to go back into the log
to see where some glitch happened.

 1/3 Executive: support DebugFh xparam to alloc_resources
 2/3 TestSupport: logm: allow $logm_handle to be an aref
 3/3 ts-hosts-allocate-Executive: Compress debug output

I'm going to push these onto my wip branch, which contains changes
which are queued up pending various other more urgent work on osstest.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqKd-0007Sa-OJ; Fri, 21 Feb 2014 13:39:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKc-0007S7-67
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:39:58 +0000
Received: from [193.109.254.147:28532] by server-3.bemta-14.messagelabs.com id
	12/A4-00432-D2757035; Fri, 21 Feb 2014 13:39:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392989995!1953016!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23848 invoked from network); 21 Feb 2014 13:39:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:39:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663982"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:39:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:39:55 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKY-0008M4-PR;
	Fri, 21 Feb 2014 13:39:54 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKY-0006ou-Fo;
	Fri, 21 Feb 2014 13:39:54 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:45 +0000
Message-ID: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 0/6] Prepare for move to new VM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We're trying to move off the creaking server "woking"; as an interim
move, we have a new VM "osstest" on the same Citrix internal network.

 1/6 git: Use "git foo" rather than "git-foo"
 2/6 production-config: use /home/xc_tftpboot, not /tftpboot
 3/6 readglobalconfig: change default DhcpWatchMethod
 4/6 production-config: authorise iwj@osstest key
 5/6 production-config: do not set WebspaceUrl, WebspaceFile
 6/6 Executive.pm: Change default ControlDaemonHost

Following a successful ad hoc test invoking things on the new VM, I have
sent this into the osstest push gate already.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqKz-0007TD-1w; Fri, 21 Feb 2014 13:40:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKc-0007SB-Nt
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:39:58 +0000
Received: from [193.109.254.147:49777] by server-1.bemta-14.messagelabs.com id
	0C/21-15438-E2757035; Fri, 21 Feb 2014 13:39:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392989995!1953016!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23907 invoked from network); 21 Feb 2014 13:39:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:39:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663986"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:39:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:39:56 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0008MA-1q;
	Fri, 21 Feb 2014 13:39:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKY-0006p2-Vm;
	Fri, 21 Feb 2014 13:39:54 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:47 +0000
Message-ID: <1392989991-26151-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <iwj@osstest.cam.xci-test.com>
Subject: [Xen-devel] [OSSTEST PATCH 2/6] production-config: use
	/home/xc_tftpboot, not /tftpboot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <iwj@osstest.cam.xci-test.com>

This is available on both woking and the new osstest VM.

Signed-off-by: Ian Jackson <iwj@osstest.cam.xci-test.com>
---
 production-config |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index 1ee0ba0..695cb81 100644
--- a/production-config
+++ b/production-config
@@ -64,7 +64,7 @@ END
 
 PlanRogueAllocationDuration= 86400*7
 
-TftpPath /tftpboot/pxe/
+TftpPath /home/xc_tftpboot/pxe/
 TftpPlayDir osstest/
 TftpTmpDir osstest/tmp/
 TftpPxeDir /
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqL7-0007Vy-Kw; Fri, 21 Feb 2014 13:40:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKd-0007ST-Hc
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:39:59 +0000
Received: from [193.109.254.147:49821] by server-4.bemta-14.messagelabs.com id
	C8/DD-32066-E2757035; Fri, 21 Feb 2014 13:39:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392989995!1953016!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23969 invoked from network); 21 Feb 2014 13:39:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:39:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663988"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:39:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:39:56 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0008MG-EY;
	Fri, 21 Feb 2014 13:39:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0006pC-8E;
	Fri, 21 Feb 2014 13:39:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:49 +0000
Message-ID: <1392989991-26151-5-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <iwj@osstest.cam.xci-test.com>
Subject: [Xen-devel] [OSSTEST PATCH 4/6] production-config: authorise
	iwj@osstest key
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <iwj@osstest.cam.xci-test.com>

Just like the iwj@woking key.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config |    1 +
 1 file changed, 1 insertion(+)

diff --git a/production-config b/production-config
index 695cb81..9df3ca1 100644
--- a/production-config
+++ b/production-config
@@ -94,4 +94,5 @@ ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq8eHHFJ+XHYgpHxfSdciq0b3tYPdMhHf9CgtwdKGSqCy
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs6FF9nfzWIlLPeYdqNteJBoYJAcgGxQgeNi7FHYDgWNFhoYPlMPXWOuXhgNxA2/vkX9tUMVZaAh+4WTL1iRBW5B/AS/Ek2O7uM2Uq8v68D2aU9/XalLVnIxssr84pewUmKW8hZfjNnRm99RTQ2Knr2BvtwcHqXtdGYdTYCJkel+FPYQ51yXGRU7dS0D59WapkDFU1tH1Y8s+dRZcRZNRJ5f1w/KO1zx1tOrZRkO3fPlEGNZHVUYfpZLPxz0VX8tOeoaOXhKZO8vSp1pD0L/uaD6FOmugMZxbtq9wEjhZciNCq61ynRf2yt2v9DMu4EAzbW/Ws7OBvWtYj/RHcSxKbw== iwj@woking.xci-test.com
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA2m8+FRm8zaCy4+L2ZLsINt3OiRzDu82JE67b4Xyt3O0+IEyflPgw5zgGH69ypOn2GqYTaiBoiYNoAn9bpUksMk71q+co4gsZJ17Acm0256A3NP46ByT6z6/AKTl58vwwNKSCEAzNru53sXTYw2TcCZUN8A4vXY76OeJNJmCmgBDHCNod9fW6+EOn8ZSU1YjFUBV2UmS2ekKmsGNP5ecLAF1bZ8I13KpKUIDIY+UiG0UMwTWDfQY59SNsz6bCxv9NsxSXL29RS2XHFeIQis7t6hJuyZTT4b9YzjEAxvk8kdGzzK6314kwILibm1O1Y8LLyrYsWK1AvnJQFIhcYXF0EQ== iwj@mariner
 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApAkFv1FwknjOoataWvq5SRN/eUHjfQ5gfWnGJpIa4qnT+zAqYuC10BAHu3pHPV6NiedMxud0KcYlu/giQBMVMnYBdb7gWKdK4AQTgxHgvMMWHufa8oTLONLRsvyp1wQADJBzjQSjmo6HHF9faUckZHfJTfRxqLuR/3ENIyl+CRV9G6KfN9fbABejBxdfsbuTHc5ew2JsYxhDJsDFHgMjtrUoHI/d6eBTQDx8GRj8uUor8W+riFpW3whTH9dqloOyrqIke2qGVQlMNmzx5Z04vB1+n95nu9c5SGOZTUT4BQ5FybEANWQsNfJ7b3aMcYgVCVkKuRHSbW8Q4Pyn1Nh31w== ian@liberator
+ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDY+wyHPeVjiqPzmKZxSxA0fxQ8r0zKMN1cYxJrryE68XzrocprAEqrGR8n3LN3JBApt4kf5gNn4DUdDo6BmCrnTuO4p43ydKJ2BDtWjQJAYdm0g5ttvF3C0A0wnog+jP3WZhTXu40LohKWO5a0If4/SBTkZvKBuSGV4v6wihbeA2Y2aEqwIlfvdSeq96jcbppNXlhWC4bB8VIVU1pa422nTQwpLdaD4qdLi31FEWSqPd2Ro/Z5i/w22M/5wvjYMkUXQcQIn6IsajM6BR56aBGgzIxWGwkxp7iQMPCCXJ4/wTpP1A5lU4k3B8FJCkM9nnSM2koPPr+HSnOKewBqwD1V iwj@osstest
 END
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqLA-0007XY-UU; Fri, 21 Feb 2014 13:40:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKd-0007SY-SL
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:40:00 +0000
Received: from [85.158.139.211:49250] by server-17.bemta-5.messagelabs.com id
	8A/BE-31975-F2757035; Fri, 21 Feb 2014 13:39:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1392989997!5442422!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10751 invoked from network); 21 Feb 2014 13:39:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:39:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663989"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:39:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:39:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0008MJ-Kz;
	Fri, 21 Feb 2014 13:39:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0006pH-E1;
	Fri, 21 Feb 2014 13:39:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:50 +0000
Message-ID: <1392989991-26151-6-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <iwj@osstest.cam.xci-test.com>
Subject: [Xen-devel] [OSSTEST PATCH 5/6] production-config: do not set
	WebspaceUrl, WebspaceFile
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <iwj@osstest.cam.xci-test.com>

The defaults,
    $c{WebspaceFile} ||= "$ENV{'HOME'}/public_html/";
    $c{WebspaceUrl} ||= "http://$myfqdn/~$whoami/";
are adequate.  This means the configuration can work on both
woking and the new osstest VM.

Signed-off-by: Ian Jackson <iwj@osstest.cam.xci-test.com>
---
 production-config |    3 ---
 1 file changed, 3 deletions(-)

diff --git a/production-config b/production-config
index 9df3ca1..364976c 100644
--- a/production-config
+++ b/production-config
@@ -29,9 +29,6 @@ Logs /home/xc_osstest/logs
 Results /home/xc_osstest/results
 PubBaseDir /home/xc_osstest
 
-WebspaceFile /export/home/osstest/public_html/
-WebspaceUrl="http://woking.$c{DnsDomain}/~osstest/"
-
 OverlayLocal /export/home/osstest/overlay-local
 
 LogsMinSpaceMby= 10*1e3
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqLF-0007Zm-ME; Fri, 21 Feb 2014 13:40:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKe-0007Sb-7C
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:40:00 +0000
Received: from [85.158.139.211:18549] by server-3.bemta-5.messagelabs.com id
	17/47-13671-F2757035; Fri, 21 Feb 2014 13:39:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392989997!5390240!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20279 invoked from network); 21 Feb 2014 13:39:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:39:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="102979497"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 13:39:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:39:56 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0008MD-8m;
	Fri, 21 Feb 2014 13:39:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0006p7-1I;
	Fri, 21 Feb 2014 13:39:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:48 +0000
Message-ID: <1392989991-26151-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Ian Jackson <iwj@osstest.cam.xci-test.com>
Subject: [Xen-devel] [OSSTEST PATCH 3/6] readglobalconfig: change default
	DhcpWatchMethod
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ian Jackson <iwj@osstest.cam.xci-test.com>

Use the TCP connection to woking, so that it works after we switch
to the new osstest VM.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest.pm |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest.pm b/Osstest.pm
index 8df0c15..4600709 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -92,7 +92,7 @@ sub readglobalconfig () {
     return if $readglobalconfig_done;
     $readglobalconfig_done=1;
 
-    $c{HostProp_DhcpWatchMethod} = 'leases dhcp3 /var/lib/dhcp3/dhcpd.leases';
+    $c{HostProp_DhcpWatchMethod} = 'leases dhcp3 woking.cam.xci-test.com:5556';
     $c{AuthorizedKeysFiles} = '';
     $c{AuthorizedKeysAppend} = '';
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqLI-0007by-Ac; Fri, 21 Feb 2014 13:40:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKi-0007TJ-Jl
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:40:20 +0000
Received: from [85.158.137.68:24222] by server-7.bemta-3.messagelabs.com id
	B2/94-13775-33757035; Fri, 21 Feb 2014 13:40:03 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392990001!2135247!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26407 invoked from network); 21 Feb 2014 13:40:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:40:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104663999"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 13:40:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:40:00 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKZ-0008M7-01;
	Fri, 21 Feb 2014 13:39:55 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKY-0006ox-Or;
	Fri, 21 Feb 2014 13:39:54 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:46 +0000
Message-ID: <1392989991-26151-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 1/6] git: Use "git foo" rather than
	"git-foo"
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

All of our machines now have "git foo" and have done for some time,
and some are going to not have "git-foo".

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 adhoc-revtuple-generator  |    4 ++--
 ap-fetch-version          |    4 ++--
 ap-fetch-version-baseline |    2 +-
 ap-fetch-version-old      |    4 ++--
 cr-for-branches           |   12 ++++++------
 cr-publish-flight-logs    |    2 +-
 cri-common                |    2 +-
 mg-execute-flight         |    4 ++--
 8 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/adhoc-revtuple-generator b/adhoc-revtuple-generator
index b9f9eb9..9efc3dc 100755
--- a/adhoc-revtuple-generator
+++ b/adhoc-revtuple-generator
@@ -106,7 +106,7 @@ END
     print DEBUG "GIT-GEN UPCMD\n$upcmd\n";
     shellcmd($upcmd) if $doupdate;
     my $cmd= "cd $c{Repos}/$treename &&".
-        " git-log --pretty=raw --date-order $tree->{Latest}";
+        " git log --pretty=raw --date-order $tree->{Latest}";
     print DEBUG "GIT-GEN CMD $cmd\n";
     my $fh= new IO::File;
     open $fh, "$cmd |" or die $!;
@@ -305,7 +305,7 @@ sub xu_withtag_generator ($) {
 		die unless $targetqemu =~ m/^[^-]/;
 		$!=0; $?=0;
 		$targetqemu= `cd $c{Repos}/$qemutree &&
-                              git-rev-parse '$targetqemu^0'`;
+                              git rev-parse '$targetqemu^0'`;
 		die "$! $?" if (!defined $targetqemu) or $?;
 		chomp $targetqemu;
 	    }
diff --git a/ap-fetch-version b/ap-fetch-version
index 1f3c6e9..dbd3fb7 100755
--- a/ap-fetch-version
+++ b/ap-fetch-version
@@ -70,8 +70,8 @@ linuxfirmware)
 		$UPSTREAM_TREE_LINUXFIRMWARE master daily-cron.$branch
 	;;
 osstest)
-	git-fetch $HOME/testing.git pretest:ap-fetch >&2
-        git-rev-parse ap-fetch^0
+	git fetch $HOME/testing.git pretest:ap-fetch >&2
+        git rev-parse ap-fetch^0
         ;;
 *)
 	echo >&2 "branch $branch ?"
diff --git a/ap-fetch-version-baseline b/ap-fetch-version-baseline
index 474b90f..e693e16 100755
--- a/ap-fetch-version-baseline
+++ b/ap-fetch-version-baseline
@@ -31,7 +31,7 @@ case "$branch" in
 #linux)
 #	cd $repos/xen
 #	git fetch -f $BASE_TREE_LINUX $BASE_TAG_LINUX:$BASE_LOCALREV_LINUX
-#	git-rev-parse $BASE_LOCALREV_LINUX^0
+#	git rev-parse $BASE_LOCALREV_LINUX^0
 #	;;
 *)
 	exec ./ap-fetch-version-old "$@"
diff --git a/ap-fetch-version-old b/ap-fetch-version-old
index 353a817..d2f8b94 100755
--- a/ap-fetch-version-old
+++ b/ap-fetch-version-old
@@ -74,8 +74,8 @@ linuxfirmware)
 		$TREE_LINUXFIRMWARE master daily-cron-old.$branch
 	;;
 osstest)
-	git-fetch -f $HOME/testing.git incoming:ap-fetch
-        git-rev-parse ap-fetch^0
+	git fetch -f $HOME/testing.git incoming:ap-fetch
+        git rev-parse ap-fetch^0
         ;;
 *)
 	echo >&2 "branch $branch ?"
diff --git a/cr-for-branches b/cr-for-branches
index 4c26a69..8bb1ac5 100755
--- a/cr-for-branches
+++ b/cr-for-branches
@@ -43,8 +43,8 @@ with-lock-ex $fetchwlem data-tree-lock bash -ec '
 	exec >>$LOGFILE
 	date
         printf "%s\n" "$FOR_LOGFILE"
-	git-pull . incoming:master 2>&1 ||:
-	git-checkout HEAD
+	git pull . incoming:master 2>&1 ||:
+	git checkout HEAD
 '
 
 export OSSTEST_TEST_PULLFROM=`pwd`
@@ -91,10 +91,10 @@ for branch in $BRANCHES; do
 
 		log ...
 
-		git-fetch $OSSTEST_TEST_PULLFROM master:incoming 2>&1 ||:
-		git-fetch $OSSTEST_TEST_PULLFROM incoming:incoming 2>&1 ||:
-		git-pull --no-commit . incoming:master 2>&1 ||:
-		git-checkout HEAD
+		git fetch $OSSTEST_TEST_PULLFROM master:incoming 2>&1 ||:
+		git fetch $OSSTEST_TEST_PULLFROM incoming:incoming 2>&1 ||:
+		git pull --no-commit . incoming:master 2>&1 ||:
+		git checkout HEAD
 
 		set +e
 		"$@" 2>&1
diff --git a/cr-publish-flight-logs b/cr-publish-flight-logs
index 56100da..d299329 100755
--- a/cr-publish-flight-logs
+++ b/cr-publish-flight-logs
@@ -43,7 +43,7 @@ if ($push_harness) {
     my $githost= $c{HarnessPublishGitUserHost};
     my $gitdir= $c{HarnessPublishGitRepoDir};
 
-    system_checked("git-push $githost:$gitdir HEAD:flight-$flight");
+    system_checked("git push $githost:$gitdir HEAD:flight-$flight");
     system_checked("ssh $githost 'cd $gitdir && git update-server-info'");
 }
 
diff --git a/cri-common b/cri-common
index 477b013..497d4e3 100644
--- a/cri-common
+++ b/cri-common
@@ -32,7 +32,7 @@ repo_tree_rev_fetch_git () {
 	fi
 	cd $repos/$treename
 	git fetch -f $realurl $remotetag:$localtag >&2
-	git-rev-parse $localtag^0
+	git rev-parse $localtag^0
 }
 
 select_xenbranch () {
diff --git a/mg-execute-flight b/mg-execute-flight
index 20067b3..287caa0 100755
--- a/mg-execute-flight
+++ b/mg-execute-flight
@@ -53,7 +53,7 @@ if [ x"$flight" = x ]; then badusage; fi
 
 set +e
 tty=`exec 2>/dev/null; tty`
-branch=`exec 2>/dev/null; git-branch | sed -n 's/^\* //p'`
+branch=`exec 2>/dev/null; git branch | sed -n 's/^\* //p'`
 set -e
 
 export OSSTEST_RESOURCE_PRIORITY=${OSSTEST_RESOURCE_PRIORITY--8}
@@ -80,7 +80,7 @@ Subject: [adhoc test] $subject
 $OSSTEST_RESOURCE_PREINFO
 END
 
-git-log -n1 --pretty=format:'harness %h: %s' | perl -pe 's/(.{70}).+/$1.../'
+git log -n1 --pretty=format:'harness %h: %s' | perl -pe 's/(.{70}).+/$1.../'
 echo
 
 cat <tmp/$flight.report
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 13:40:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 13:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqLJ-0007dQ-P7; Fri, 21 Feb 2014 13:40:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGqKi-0007TK-OF
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 13:40:20 +0000
Received: from [85.158.139.211:21444] by server-5.bemta-5.messagelabs.com id
	24/F5-32749-43757035; Fri, 21 Feb 2014 13:40:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1392989997!5390240!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20966 invoked from network); 21 Feb 2014 13:40:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 13:40:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="102979516"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 13:40:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 08:40:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0008MM-Sm;
	Fri, 21 Feb 2014 13:39:56 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGqKa-0006pM-KW;
	Fri, 21 Feb 2014 13:39:56 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 13:39:51 +0000
Message-ID: <1392989991-26151-7-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1392989991-26151-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 6/6] Executive.pm: Change default
	ControlDaemonHost
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We are going to move these daemons from woking to the new osstest VM.
So use a DNS alias for now.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index ac7b734..5b1094e 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -88,7 +88,7 @@ our (@all_lock_tables) = qw(flights resources);
 #  Transactional reads must take out locks as if they were modifying
 
 augmentconfigdefaults(
-    ControlDaemonHost => 'woking.cam.xci-test.com',
+    ControlDaemonHost => 'control-daemons.osstest.cam.xci-test.com',
     OwnerDaemonPort => 4031,
     QueueDaemonPort => 4032,
     QueueDaemonRetry => 120, # seconds
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 14:03:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 14:03:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqhF-0000xz-Re; Fri, 21 Feb 2014 14:03:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGqhE-0000xu-M7
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 14:03:20 +0000
Received: from [85.158.137.68:49626] by server-10.bemta-3.messagelabs.com id
	C1/D5-07302-7AC57035; Fri, 21 Feb 2014 14:03:19 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1392991397!7145!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16418 invoked from network); 21 Feb 2014 14:03:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 14:03:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104670062"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 14:03:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 09:03:16 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGqhA-0005uC-J8;
	Fri, 21 Feb 2014 14:03:16 +0000
Message-ID: <53075C9E.9070608@eu.citrix.com>
Date: Fri, 21 Feb 2014 14:03:10 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, Jan Beulich
	<JBeulich@suse.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, Ian
	Campbell <ian.campbell@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-DLP: MIA1
Subject: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

So it's now been several days, and no one has reported any major issues 
with RC4.  There are a handful of ARM-related fixes in the tree.

I propose that we tag RC5 today, and make that the official release 
(whenever we're ready to with the Linux Foundation PR process).

After tagging RC5 this afternoon, I don't see much reason not to branch 
-- I don't think at this point we're going to get much more testing in.

Thoughts?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 14:18:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 14:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqw6-0001NC-GI; Fri, 21 Feb 2014 14:18:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGqw4-0001N7-PR
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 14:18:40 +0000
Received: from [193.109.254.147:62307] by server-9.bemta-14.messagelabs.com id
	C0/86-24895-F3067035; Fri, 21 Feb 2014 14:18:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1392992318!5966057!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11493 invoked from network); 21 Feb 2014 14:18:39 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 14:18:39 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Feb 2014 14:19:54 +0000
Message-Id: <53076E4D020000780011E565@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 21 Feb 2014 14:18:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <george.dunlap@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
In-Reply-To: <53075C9E.9070608@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	IanCampbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 15:03, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> So it's now been several days, and no one has reported any major issues 
> with RC4.  There are a handful of ARM-related fixes in the tree.
> 
> I propose that we tag RC5 today, and make that the official release 
> (whenever we're ready to with the Linux Foundation PR process).
> 
> After tagging RC5 this afternoon, I don't see much reason not to branch 
> -- I don't think at this point we're going to get much more testing in.
> 
> Thoughts?

Fine with me, albeit it seems a bit pointless to me to have another
RC, in particular if we expect this to become the release.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 14:21:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 14:21:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGqz9-0001UF-42; Fri, 21 Feb 2014 14:21:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGqz7-0001U9-TC
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 14:21:50 +0000
Received: from [193.109.254.147:35397] by server-9.bemta-14.messagelabs.com id
	6D/0B-24895-CF067035; Fri, 21 Feb 2014 14:21:48 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1392992507!1963196!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17681 invoked from network); 21 Feb 2014 14:21:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 14:21:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="102991568"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 14:21:46 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 09:21:46 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGqz4-0006CZ-2N;
	Fri, 21 Feb 2014 14:21:46 +0000
Message-ID: <530760F3.9050000@eu.citrix.com>
Date: Fri, 21 Feb 2014 14:21:39 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <53075C9E.9070608@eu.citrix.com>
	<53076E4D020000780011E565@nat28.tlf.novell.com>
In-Reply-To: <53076E4D020000780011E565@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	IanCampbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/21/2014 02:18 PM, Jan Beulich wrote:
>>>> On 21.02.14 at 15:03, George Dunlap <george.dunlap@eu.citrix.com> wrote:
>> So it's now been several days, and no one has reported any major issues
>> with RC4.  There are a handful of ARM-related fixes in the tree.
>>
>> I propose that we tag RC5 today, and make that the official release
>> (whenever we're ready to with the Linux Foundation PR process).
>>
>> After tagging RC5 this afternoon, I don't see much reason not to branch
>> -- I don't think at this point we're going to get much more testing in.
>>
>> Thoughts?
> Fine with me, albeit it seems a bit pointless to me to have another
> RC, in particular if we expect this to become the release.

Isn't that what RC stands for -- release candidate? :-)

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 14:28:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 14:28:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGr5t-0001lc-0i; Fri, 21 Feb 2014 14:28:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1WGr5r-0001kr-6U
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 14:28:47 +0000
Received: from [85.158.139.211:28440] by server-5.bemta-5.messagelabs.com id
	47/14-32749-D9267035; Fri, 21 Feb 2014 14:28:45 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-9.tower-206.messagelabs.com!1392992924!5412393!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3053 invoked from network); 21 Feb 2014 14:28:45 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-9.tower-206.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 21 Feb 2014 14:28:45 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S2608930AbaBUO2j (ORCPT <rfc822;xen-devel@lists.xenproject.org>);
	Fri, 21 Feb 2014 15:28:39 +0100
Date: Fri, 21 Feb 2014 15:28:39 +0100
From: Daniel Kiper <dkiper@net-space.pl>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140221142839.GA15984@router-fw-old.local.net-space.pl>
References: <CACaajQsDwKCzi=O8jdS=Gcu4jLCidL+2DW=MstCEpsgxAgmDDw@mail.gmail.com>
	<20140217101911.GA13058@router-fw-old.local.net-space.pl>
	<CACaajQvvosjSsCdKutinjKGLd32QaTF0NYGAHQ9wdTRRe_KZWw@mail.gmail.com>
	<20140217105641.GA13535@router-fw-old.local.net-space.pl>
	<CACaajQt0T-TCou8J=Xd9E8_HUSvqrjvMEqiWX+33XC4u9O1FUQ@mail.gmail.com>
	<20140220190713.GA2183@router-fw-old.local.net-space.pl>
	<CACaajQuC=txS9+-W1UGpMbzRL-RqOLi_OAVPt3uCbf91XEnBmA@mail.gmail.com>
	<20140221095337.GQ18398@zion.uk.xensource.com>
	<CACaajQudF9yq_z-z-Rz8U-0a4CfWe1SYZGOf=S4C01nU=7BvXw@mail.gmail.com>
	<20140221103925.GT18398@zion.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140221103925.GT18398@zion.uk.xensource.com>
User-Agent: Mutt/1.3.28i
Cc: xen-devel@lists.xenproject.org, Daniel Kiper <dkiper@net-space.pl>,
	Vasiliy Tolstov <v.tolstov@selfip.ru>
Subject: Re: [Xen-devel] xen 4.3.2 rc1 memory ballooning
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 10:39:25AM +0000, Wei Liu wrote:
> On Fri, Feb 21, 2014 at 01:59:39PM +0400, Vasiliy Tolstov wrote:
> > 2014-02-21 13:53 GMT+04:00 Wei Liu <wei.liu2@citrix.com>:
> > > Just to confirm, you added that in /etc/xen/xl.conf, right? My vague
> > > memory tells me that it's a global thing. I could be wrong though...
> >
> >
> > Yes,
> > autoballoon = 0
> > lockfile = "/var/lock/xl"
> > vifscript = "/etc/xen/scripts/vif-ospf"
> > mem_set_enforce_limit = 0
> > claim_mode = 1
> > vif.default.script = "/etc/xen/scripts/vif-ospf"
> >
>
> Ah, so Daniel's change only affects the "xl mem-set" command. When building a
> domain the maxmem is still capped to target_memory (which is memory= in
> your domain config file).

Ugh... I forgot about that. I will take this into account in the next patch
series release. Thanks.
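
For readers following along, the capping behaviour Wei describes can be sketched
as a toy model (pure illustration, not libxl code; all function and key names
below are made up):

```python
# Toy model of the behaviour discussed above -- NOT libxl code; names are
# illustrative only. At build time maxmem ends up capped to the memory=
# target, so ballooning above that target later fails unless the cap is
# raised alongside the target.

def build_domain(memory_mb):
    # build-time behaviour from the thread: maxmem capped to memory=
    return {"target": memory_mb, "maxmem": memory_mb}

def mem_set(dom, target_mb, enforce_limit=True):
    if enforce_limit and target_mb > dom["maxmem"]:
        dom["maxmem"] = target_mb  # xl raises the cap with the target
    if target_mb > dom["maxmem"]:
        raise RuntimeError("target above hypervisor maxmem cap")
    dom["target"] = target_mb

dom = build_domain(2048)
try:
    # with the limit-enforcement skipped, the build-time cap still bites
    mem_set(dom, 4096, enforce_limit=False)
except RuntimeError as err:
    print("mem-set rejected:", err)
```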

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 14:34:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 14:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGrAy-00022j-V1; Fri, 21 Feb 2014 14:34:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WGrAx-00022c-KL
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 14:34:03 +0000
Received: from [85.158.137.68:53982] by server-10.bemta-3.messagelabs.com id
	3B/87-07302-AD367035; Fri, 21 Feb 2014 14:34:02 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392993242!593332!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20978 invoked from network); 21 Feb 2014 14:34:02 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 21 Feb 2014 14:34:02 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:51212 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WGr9m-0000Io-L9; Fri, 21 Feb 2014 15:32:50 +0100
Date: Fri, 21 Feb 2014 15:34:00 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <974010162.20140221153400@eikelenboom.it>
To: Ian Campbell <Ian.Campbell@citrix.com>
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] xen-unstable pci passthrough: bug in accounting
	assigned pci devices when assignment has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian,

It was decided that fixing the bug where domain creation does not fail on non-assignable PCI devices would be deferred to 4.5.
(And fixing it wouldn't prevent this bug anyhow when doing PCI hotplug with xl pci-attach.)

But there seems to be a bug in the error path:

root@creanuc:~# xl pci-assignable-list
0000:02:00.0

Now when I boot a VM with pci=['00:19.0'] in its config file, a device which is not assignable:

root@creanuc:~# xl create /etc/xen/domU/router.hvm
Parsing config from /etc/xen/domU/router.hvm
libxl: error: libxl_pci.c:1060:libxl__device_pci_add: PCI device 0:0:19.0 is not assignable

That looks OK, and the PCI device is not visible or accessible in the guest, but the entry nevertheless still seems to be in xenstore:

root@creanuc:~# xl pci-list router
Vdev Device
00.0 0000:00:19.0
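
The shape of the suspected error path can be sketched as a toy model
(illustrative Python, not libxl's actual code or function names): the device
record is written before the assignability check, and the failure path returns
without removing it.

```python
# Toy model of the suspected error path -- NOT libxl code; names are
# made up for illustration only.

def device_pci_add(xenstore_entries, assignable, bdf):
    xenstore_entries.append(bdf)      # record written up front
    if bdf not in assignable:
        # error path reports failure but forgets to remove the record
        return "PCI device %s is not assignable" % bdf
    return "ok"

entries = []
result = device_pci_add(entries, {"0000:02:00.0"}, "0000:00:19.0")
print(result)   # the add fails ...
print(entries)  # ... yet the record is still accounted for
```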

--
Sander


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 15:01:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 15:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGrbW-0002yF-JP; Fri, 21 Feb 2014 15:01:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGrbV-0002yA-FM
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 15:01:29 +0000
Received: from [85.158.137.68:16286] by server-3.bemta-3.messagelabs.com id
	37/26-14520-84A67035; Fri, 21 Feb 2014 15:01:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392994884!3428561!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25066 invoked from network); 21 Feb 2014 15:01:26 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 15:01:26 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LF1CLC005191
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 15:01:12 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LF19dV009747
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 15:01:10 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LF191Q009734; Fri, 21 Feb 2014 15:01:09 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 07:01:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 09B091C0954; Fri, 21 Feb 2014 10:01:08 -0500 (EST)
Date: Fri, 21 Feb 2014 10:01:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140221150107.GG15905@phenom.dumpdata.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
	<CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
	<20140221100506.GR18398@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140221100506.GR18398@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: virtio-dev@lists.oasis-open.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Rusty Russell <rusty@au1.ibm.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 10:05:06AM +0000, Wei Liu wrote:
> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
> > On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> > > Anthony Liguori <anthony@codemonkey.ws> writes:
> > >> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> > >>> Daniel Kiper <daniel.kiper@oracle.com> writes:
> > >>>> Hi,
> > >>>>
> > >>>> Below you could find a summary of work in regards to VIRTIO compatibility with
> > >>>> different virtualization solutions. It was done mainly from Xen point of view
> > >>>> but results are quite generic and can be applied to wide spectrum
> > >>>> of virtualization platforms.
> > >>>
> > >>> Hi Daniel,
> > >>>
> > >>>         Sorry for the delayed response, I was pondering...  CC changed
> > >>> to virtio-dev.
> > >>>
> > >>> From a standard POV: It's possible to abstract out the where we use
> > >>> 'physical address' for 'address handle'.  It's also possible to define
> > >>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
> > >>> Xen-PV is a distinct platform from x86.
> > >>
> > >> I'll go even further and say that "address handle" doesn't make sense too.
> > >
> > > I was trying to come up with a unique term, I wasn't trying to define
> > > semantics :)
> > 
> > Understood, that wasn't really directed at you.
> > 
> > > There are three debates here now: (1) what should the standard say, and
> > 
> > The standard should say, "physical address"

This conversation is heading towards "the implementation needs it, hence let's
make the design have it". Which I am OK with - but if we are going that
route we might as well call this thing 'my-pony-number', because I think
each hypervisor will have a different view of it.

Some of them might use a physical address with some flag bits on it.
Some might use just a physical address.

And some might want a 32-bit value that has no correlation to physical
or virtual addresses.
> > 
> > > (2) how would Linux implement it,
> > 
> > Linux should use the PCI DMA API.

Aye.
> > 
> > > (3) should we use each platform's PCI
> > > IOMMU.
> > 
> > Just like any other PCI device :-)

Aye.
> > 
> > >> Just using grant table references is not enough to make virtio work
> > >> well under Xen.  You really need to use bounce buffers ala persistent
> > >> grants.
> > >
> > > Wait, if you're using bounce buffers, you didn't make it "work well"!
> > 
> > Preaching to the choir man...  but bounce buffering is proven to be
> > faster than doing grant mappings on every request.  xen-blk does
> > bounce buffering by default and I suspect netfront is heading that
> > direction soon.
> > 
> 
> FWIW Annie Li @ Oracle once implemented a persistent map prototype for
> netfront and the result was not satisfying.

Which could be due to the traffic pattern. There is a lot of back/forth
traffic on a single ring in network (TCP with ACK/SYN).

With block the issue was a bit different and we do more of streaming
workloads.

> 
> > It would be a lot easier to simply have a global pool of grant tables
> > that effectively becomes the DMA pool.  Then the DMA API can bounce
> > into that pool and those addresses can be placed on the ring.
> > 
> > It's a little different for Xen because now the backends have to deal
> > with physical addresses but the concept is still the same.

Rusty, the part below is Xen-specific, so you are welcome to gloss over it.

I presume you would also need some machinery for the hypervisor to give
the backend access to this 64MB (or whatever size) pool (and we could make
grant pages have 2MB granularity, so we would need just 32 grants).
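
For concreteness, the arithmetic behind that pool sizing (the 64MB and 2MB
figures are the example numbers from above; both are tunable):

```python
# Grant count for a bounce pool granted at coarse granularity: a 64 MiB
# pool granted in 2 MiB chunks needs only 32 grant entries, versus 16384
# entries at the usual 4 KiB page granularity.
POOL_MIB = 64
GRANT_MIB = 2
grants_needed = POOL_MIB // GRANT_MIB
print(grants_needed)  # 32
print(POOL_MIB * 1024 // 4)  # 16384 grants at 4 KiB pages
```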

But the backend would have to know the grant entries to at least do the proper
mapping and unmapping (if it chooses to)? And for that it needs
the grant value to make the proper hypercall to map its memory
(backend) to the frontend memory.

Or are you saying - instead of using grant entries just use physical
addresses - and naturally the hypervisor would have to use that as well.
Since it is just a number, why not make it something meaningful, so we
won't need to keep 'grant->physical address' lookup machinery?


> > 
> 
> How would you apply this to Xen's security model? How can hypervisor
> effectively enforce access control? "Handle" and "physical address" are
> essentially not the same concept, otherwise you wouldn't have proposed
> this change. Not saying I'm against this change, just this description
> is too vague for me to understand the bigger picture.
> 
> But a downside for sure is that if we go with this change we then have
> to maintain two different paths in backend. However small the difference
> is it is still a burden.

Or just in the grant machinery. The backend just plucks this number
into its data structures and that is all it cares about.

> 
> Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 15:01:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 15:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGrbW-0002yF-JP; Fri, 21 Feb 2014 15:01:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGrbV-0002yA-FM
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 15:01:29 +0000
Received: from [85.158.137.68:16286] by server-3.bemta-3.messagelabs.com id
	37/26-14520-84A67035; Fri, 21 Feb 2014 15:01:28 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1392994884!3428561!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25066 invoked from network); 21 Feb 2014 15:01:26 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 15:01:26 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LF1CLC005191
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 15:01:12 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LF19dV009747
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 15:01:10 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LF191Q009734; Fri, 21 Feb 2014 15:01:09 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 07:01:09 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 09B091C0954; Fri, 21 Feb 2014 10:01:08 -0500 (EST)
Date: Fri, 21 Feb 2014 10:01:07 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Wei Liu <wei.liu2@citrix.com>
Message-ID: <20140221150107.GG15905@phenom.dumpdata.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
	<CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
	<20140221100506.GR18398@zion.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140221100506.GR18398@zion.uk.xensource.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: virtio-dev@lists.oasis-open.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Rusty Russell <rusty@au1.ibm.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 10:05:06AM +0000, Wei Liu wrote:
> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
> > On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> > > Anthony Liguori <anthony@codemonkey.ws> writes:
> > >> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> > >>> Daniel Kiper <daniel.kiper@oracle.com> writes:
> > >>>> Hi,
> > >>>>
> > >>>> Below you could find a summary of work in regards to VIRTIO compatibility with
> > >>>> different virtualization solutions. It was done mainly from Xen point of view
> > >>>> but results are quite generic and can be applied to wide spectrum
> > >>>> of virtualization platforms.
> > >>>
> > >>> Hi Daniel,
> > >>>
> > >>>         Sorry for the delayed response, I was pondering...  CC changed
> > >>> to virtio-dev.
> > >>>
> > >>> From a standard POV: It's possible to abstract out the where we use
> > >>> 'physical address' for 'address handle'.  It's also possible to define
> > >>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
> > >>> Xen-PV is a distinct platform from x86.
> > >>
> > >> I'll go even further and say that "address handle" doesn't make sense too.
> > >
> > > I was trying to come up with a unique term, I wasn't trying to define
> > > semantics :)
> > 
> > Understood, that wasn't really directed at you.
> > 
> > > There are three debates here now: (1) what should the standard say, and
> > 
> > The standard should say, "physical address"

This conversation is heading towards - implementation needs it - hence lets
make the design have it. Which I am OK with - but if we are going that
route we might as well call this thing 'my-pony-number' because I think
each hypervisor will have a different view of it.

Some of them might use a physical address with some flag bits on it.
Some might use just physical address.

And some might want an 32-bit value that has no correlation to to physical
nor virtual addresses.
> > 
> > > (2) how would Linux implement it,
> > 
> > Linux should use the PCI DMA API.

Aye.
> > 
> > > (3) should we use each platform's PCI
> > > IOMMU.
> > 
> > Just like any other PCI device :-)

Aye.
> > 
> > >> Just using grant table references is not enough to make virtio work
> > >> well under Xen.  You really need to use bounce buffers ala persistent
> > >> grants.
> > >
> > > Wait, if you're using bounce buffers, you didn't make it "work well"!
> > 
> > Preaching to the choir man...  but bounce buffering is proven to be
> > faster than doing grant mappings on every request.  xen-blk does
> > bounce buffering by default and I suspect netfront is heading that
> > direction soon.
> > 
> 
> FWIW Annie Li @ Oracle once implemented a persistent map prototype for
> netfront and the result was not satisfying.

Which could be due to the traffic pattern. There is a lot of back/forth
traffic on a single ring in network (TCP with ACK/SYN).

With block the issue was a bit different and we do more of streaming
workloads.

> 
> > It would be a lot easier to simply have a global pool of grant tables
> > that effectively becomes the DMA pool.  Then the DMA API can bounce
> > into that pool and those addresses can be placed on the ring.
> > 
> > It's a little different for Xen because now the backends have to deal
> > with physical addresses but the concept is still the same.

Rusty, this is a part below is Xen specific - so you are welcome to gloss over it.

I presume you would also need some machinary for the hypervisor to give
access to this 64MB (or whatever size) pool (and we could make grant pages have
2MB granularity - so we just 32 grants) to the backend.

But the backend would have to know the grant entries to at least do the proper
mapping and unmapping (if it choose to)? And for that it needs
the grant value to make the proper hypercall to map its memory
(backend) to the frontend memory.

Or are you saying that instead of grant entries we should just use physical
addresses, which the hypervisor would naturally have to use as well?
Since it is just a number, why not make it something meaningful, so we
won't need to keep 'grant -> physical address' lookup machinery?


> > 
> 
> How would you apply this to Xen's security model? How can hypervisor
> effectively enforce access control? "Handle" and "physical address" are
> essentially not the same concept, otherwise you wouldn't have proposed
> this change. Not saying I'm against this change, just this description
> is too vague for me to understand the bigger picture.
> 
> But a downside for sure is that if we go with this change we then have
> to maintain two different paths in backend. However small the difference
> is it is still a burden.

Or just in the grant machinery. The backend just plucks this number
from the data structures, and that is all it cares about.

> 
> Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 15:12:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 15:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGrmE-0003Jp-3L; Fri, 21 Feb 2014 15:12:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGrmC-0003Jk-Kq
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 15:12:32 +0000
Received: from [193.109.254.147:23907] by server-13.bemta-14.messagelabs.com
	id 5B/30-01226-FDC67035; Fri, 21 Feb 2014 15:12:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1392995549!5952582!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19130 invoked from network); 21 Feb 2014 15:12:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 15:12:30 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LFBnUv018675
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 15:11:50 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LFBmf5010500
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 21 Feb 2014 15:11:49 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LFBlK0003803; Fri, 21 Feb 2014 15:11:47 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 07:11:43 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 7AB591C0954; Fri, 21 Feb 2014 10:11:42 -0500 (EST)
Date: Fri, 21 Feb 2014 10:11:42 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Rusty Russell <rusty@au1.ibm.com>
Message-ID: <20140221151142.GH15905@phenom.dumpdata.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <8761o99tft.fsf@rustcorp.com.au>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: virtio-dev@lists.oasis-open.org, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, stefano.stabellini@eu.citrix.com,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	anthony@codemonkey.ws, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
 different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 11:24:14AM +1030, Rusty Russell wrote:
> Daniel Kiper <daniel.kiper@oracle.com> writes:
> > Hey,
> >
> > On Thu, Feb 20, 2014 at 06:18:00PM +1030, Rusty Russell wrote:
> >> Ian Campbell <Ian.Campbell@citrix.com> writes:
> >> > On Wed, 2014-02-19 at 10:56 +1030, Rusty Russell wrote:
> >> >> For platforms using EPT, I don't think you want anything but guest
> >> >> addresses, do you?
> >> >
> >> > No, the arguments for preventing unfettered access by backends to
> >> > frontend RAM applies to EPT as well.
> >>
> >> I can see how you'd parse my sentence that way, I think, but the two
> >> are orthogonal.
> >>
> >> AFAICT your grant-table access restrictions are page granularity, though
> >> you don't use page-aligned data (eg. in xen-netfront).  This level of
> >> access control is possible using the virtio ring too, but no one has
> >> implemented such a thing AFAIK.
> >
> > Could you say in short how it should be done? DMA API is an option but
> > if there is a simpler mechanism available in VIRTIO itself we will be
> > happy to use it in Xen.
> 
> OK, this challenged me to think harder.
> 
> The queue itself is effectively a grant table (as long as you don't give
> the backend write access to it).  The available ring tells you where the
> buffers are and whether they are readable or writable.  The used ring
> tells you when they're used.
> 
> However, performance would suck due to no caching: you'd end up doing a
> map and unmap on every packet.  I'm assuming Xen currently avoids that
> somehow?  Seems likely...
> 
> On the other hand, if we wanted a more Xen-like setup, it would look
> like this:
> 
> 1) Abstract away the "physical addresses" to "handles" in the standard,
>    and allow some platform-specific mapping setup and teardown.

+1
> 
> 2) In Linux, implement a virtio DMA ops which handles the grant table
>    stuff for Xen (returning grant table ids + offset or something?),
>    noop for others.  This would be a runtime thing.

Or perhaps a KVM-specific DMA ops (which is a no-op) and a Xen one.
Easy enough to implement.
> 
> 3) In Linux, change the drivers to use this API.

+1
> 
> Now, Xen will not be able to use vhost to accelerate, but it doesn't now
> anyway.

Correct. Though one could implement a ring of grant entries that the
frontend and backend share along with the hypervisor.

And when the backend tries to access said memory, thinking it has mapped
the frontend pages (but it has not actually mapped them yet), it traps to
the hypervisor, which then maps the frontend pages for the backend.
A kind of lazy-grant system.

Anyhow, all of that is just implementation details and hand-waving.

If we wanted, we could extend vhost so that when it plucks entries off the
virtq it calls a platform-specific API. For KVM these would all be
no-ops. For Xen it would do a magic pony show or such <more hand-waving>.

> 
> Am I missing anything?

On a bit different topic:

I am unclear about the asynchronous vs. synchronous nature of virtio configuration.
Xen is all about XenBus, which is more of a callback mechanism. Virtio does
its configuration over MMIO and PCI, which are slow but do get you the values.

Can we somehow make it clear that the configuration setup can be asynchronous?
That would also mean that in the future, configuration changes (say, when
migrating) could be conveyed to the virtio frontends via an interrupt mechanism
(or callback) if the new host has something important to say.

> 
> Cheers,
> Rusty.
> 

From xen-devel-bounces@lists.xen.org Fri Feb 21 15:19:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 15:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGrtD-0003qA-LV; Fri, 21 Feb 2014 15:19:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WGrtC-0003q2-JT
	for Xen-devel@lists.xen.org; Fri, 21 Feb 2014 15:19:46 +0000
Received: from [85.158.137.68:33857] by server-5.bemta-3.messagelabs.com id
	BC/9F-04712-09E67035; Fri, 21 Feb 2014 15:19:44 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1392995983!2156937!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24456 invoked from network); 21 Feb 2014 15:19:43 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-5.tower-31.messagelabs.com with SMTP;
	21 Feb 2014 15:19:43 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 942F37B9;
	Fri, 21 Feb 2014 15:19:42 +0000 (UTC)
Date: Fri, 21 Feb 2014 07:22:23 -0800
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140221152223.GB5218@kroah.com>
References: <5305DE2C.7080502@citrix.com> <20140220201312.GA6067@kroah.com>
	<53072A61.7060907@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <53072A61.7060907@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
Cc: "stable@vger.kernel.org" <stable@vger.kernel.org>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen fixes for 3.10.y
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 10:28:49AM +0000, David Vrabel wrote:
> On 20/02/14 20:13, Greg Kroah-Hartman wrote:
> > On Thu, Feb 20, 2014 at 10:51:24AM +0000, David Vrabel wrote:
> >> These two changes fix important bugs with 32-bit Xen PV guests.
> >>
> >> 0160676bba69523e8b0ac83f306cce7d342ed7c8 (xen/p2m: check MFN is in range
> >> before using the m2p table)
> > 
> > Now applied.
> > 
> >> 7cde9b27e7b3a2e09d647bb4f6d94e842698d2d5 (xen: Fix possible user space
> >> selector corruption)
> > 
> > I had to edit this by-hand to get it to apply, can you verify I got it right?
> 
> I can't find where you put it.  But if it looks like this, it's ok.

You should have gotten an email about it.

> --- a/arch/x86/xen/smp.c
> +++ b/arch/x86/xen/smp.c
> @@ -245,6 +245,15 @@ static void __init xen_smp_prepare_boot_cpu(void)
>  	   old memory can be recycled */
>  	make_lowmem_page_readwrite(xen_initial_gdt);
> 
> +#ifdef CONFIG_X86_32
> +	/*
> +	 * Xen starts us with XEN_FLAT_RING1_DS, but linux code
> +	 * expects __USER_DS
> +	 */
> +	loadsegment(ds, __USER_DS);
> +	loadsegment(es, __USER_DS);
> +#endif
> +
>  	xen_filter_cpu_maps();
>  	xen_setup_vcpu_info_placement();
>  }

Yes, that was correct.

greg k-h

From xen-devel-bounces@lists.xen.org Fri Feb 21 15:39:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 15:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsBi-0004aj-Dy; Fri, 21 Feb 2014 15:38:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGsBh-0004aO-8x
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 15:38:53 +0000
Received: from [85.158.137.68:27189] by server-6.bemta-3.messagelabs.com id
	91/E3-09180-C0377035; Fri, 21 Feb 2014 15:38:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392997129!609561!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18480 invoked from network); 21 Feb 2014 15:38:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 15:38:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104702502"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 15:38:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 10:38:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGsBc-0000Yi-2z;
	Fri, 21 Feb 2014 15:38:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGsBb-00050K-O7;
	Fri, 21 Feb 2014 15:38:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25210-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 15:38:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25210: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25210 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25210/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25149

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     11 guest-saverestore           fail pass in 25149

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 15:39:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 15:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsBi-0004aj-Dy; Fri, 21 Feb 2014 15:38:54 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGsBh-0004aO-8x
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 15:38:53 +0000
Received: from [85.158.137.68:27189] by server-6.bemta-3.messagelabs.com id
	91/E3-09180-C0377035; Fri, 21 Feb 2014 15:38:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1392997129!609561!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18480 invoked from network); 21 Feb 2014 15:38:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 15:38:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="104702502"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 15:38:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 10:38:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGsBc-0000Yi-2z;
	Fri, 21 Feb 2014 15:38:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGsBb-00050K-O7;
	Fri, 21 Feb 2014 15:38:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25210-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 15:38:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25210: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25210 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25210/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25149

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     11 guest-saverestore           fail pass in 25149

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:00:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsVu-0005D1-H6; Fri, 21 Feb 2014 15:59:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGsVs-0005Cw-Ny
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 15:59:44 +0000
Received: from [85.158.139.211:11417] by server-13.bemta-5.messagelabs.com id
	C0/9F-18801-FE777035; Fri, 21 Feb 2014 15:59:43 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1392998382!916842!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23698 invoked from network); 21 Feb 2014 15:59:43 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 15:59:43 -0000
Received: by mail-lb0-f177.google.com with SMTP id p9so605724lbv.22
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 07:59:41 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=PynAR7rQ+RPU001NIYbrZ1339i8RFeWVKArCGxhvJIM=;
	b=q/Bj+6JTbidFLNhhi80ukk1Iji4vnX8nX1RrZOYwY82AS7idtgC6B9h95xU0BVHMVI
	C2XnqZx0jl96HOzvKlzqXY6e4VgBIcaPGIMOIAZV6XsHOYp5TjUAwfCgxJSwfJsYn9uP
	qeneSRLkyZBC0qwduhomlvWsCwk1OclaSJU8K42JUcHG9nxE/OzyGw0y62ZUmKDkOevt
	JnMh7ywH5Vvigiiu9dP2bxfqTMzwYF+xaPgIRLpLZSIt0XWOEpdPofQ3o2kbrA6yk51z
	XmCdMIT7CLvo7P5lgQ36XQLR8buE75KdBOZgs8g9H6SQ9qWrQu0tWM1oNEYKM5aMHVJy
	XVjQ==
X-Received: by 10.152.27.193 with SMTP id v1mr4863573lag.4.1392998381630; Fri,
	21 Feb 2014 07:59:41 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Fri, 21 Feb 2014 07:59:20 -0800 (PST)
In-Reply-To: <53074E69.4060206@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<530600C5.3070107@citrix.com>
	<CAB=NE6WKhBJyyUO5o8B53J+F4PqF2PHvvbV3=yS3qs0ZDRKH7Q@mail.gmail.com>
	<53074E69.4060206@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 21 Feb 2014 07:59:20 -0800
X-Google-Sender-Auth: nyC9iEkqBouGXG4SEmKLeD9xVBo
Message-ID: <CAB=NE6Xx1f2zYaTP=h-319_T4UtsXVjn0+ZPrSuCqXJ6hk5Jhw@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 5:02 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> On 20/02/14 20:01, Luis R. Rodriguez wrote:
>>
>> On Thu, Feb 20, 2014 at 5:19 AM, Zoltan Kiss <zoltan.kiss@citrix.com>
>> wrote:
>>>
>>> How about this: netback sets the root_block flag and a random MAC by
>>> default. So the default behaviour won't change, DAD will be happy, and
>>> userspace doesn't have to do anything unless it's using netback for an
>>> STP root bridge (I don't think there are many toolstacks doing that), in
>>> which case it has to remove the root_block flag instead of setting a
>>> random MAC.
>>
>>
>> :D that's exactly what I ended up proposing too. I mentioned how
>> xen-netback could do this as well: we'd keep or rename the flag I
>> added, and then the bridge code would look at it and enable root
>> block if the flag is set. Stephen, however, does not like having the
>> bridge code look at magic flags for this behaviour and would prefer
>> that we get the tools to ask for the root block. Let's follow up more
>> on that thread.
>
> We don't need that new flag, just forget about it.

Unless I'm missing something, the root_block flag is a bridge-port
primitive. This means we can't set it *until* the interface gets added
to a bridge, and even then, it's a knob that is available only to the
bridge.
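
For illustration of that per-port nature (bridge and interface names here
are hypothetical, and the exact sysfs path may vary by kernel version),
the root_block attribute only exists once the interface has been enslaved:

```shell
# root_block is a bridge-port attribute: it does not exist for a bare
# interface, and only appears under brport/ after the vif joins a bridge.
brctl addbr xenbr0
brctl addif xenbr0 vif1.0
echo 1 > /sys/class/net/vif1.0/brport/root_block
```

Until that last step is possible, nothing prevents the port from being
elected root.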

> Another problem with the random addresses, pointed out by Ian earlier, is
> that when adding/removing interfaces, the bridge recalculates its MAC
> address and chooses the lowest one. In the general use case I think that's
> fine, but in the case of Xen networking we would like to keep the bridge
> using the physical interface's MAC, because the local port of the bridge is
> used for Dom0 network traffic; changing the bridge MAC when a netback
> device has a lower MAC breaks that traffic.

This is a good reason, then, to have a general per-interface knob to
tell the bridge that we'd prefer root_block by default. The alternative,
as you point out below, is to have the xen/kvm utilities set the bridge
MAC address statically, but that requires a userspace upgrade. I'm
looking for a kernel solution that is backwards compatible with old
userspace.

> I think the best is to address this from userspace: if it sets the MAC of
> the bridge explicitly, dev_set_mac_address() does dev->addr_assign_type =
> NET_ADDR_SET;, so br_stp_recalculate_bridge_id() will exit before changing
> anything.

That will certainly work for new xen / kvm util userspace.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:01:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsXg-0005rX-TZ; Fri, 21 Feb 2014 16:01:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGsXf-0005rO-Aw
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 16:01:35 +0000
Received: from [85.158.139.211:57326] by server-10.bemta-5.messagelabs.com id
	89/26-08578-E5877035; Fri, 21 Feb 2014 16:01:34 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392998493!5425077!1
X-Originating-IP: [209.85.217.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12816 invoked from network); 21 Feb 2014 16:01:34 -0000
Received: from mail-lb0-f181.google.com (HELO mail-lb0-f181.google.com)
	(209.85.217.181)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:01:34 -0000
Received: by mail-lb0-f181.google.com with SMTP id z11so2444003lbi.26
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 08:01:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=n7hBoHr9WFc5t4zHypljSCl6cT3YgUdBu7ZXzeiU6ek=;
	b=j/5FpA2j5ZfVhBGCpXhsB39RSZh53nuqxiEc3turJZwNfR9vyX1QxUM5vfAOCaeFwH
	yfTIKjpnSpK5o6PXX5kfqNn7kk9kn4NokRy5GpEDa+PNc36Xq8QCPQUXA277z76GEm3Q
	pE+jz/ejgTMjw7lxTD7E5nXHP6RHqDBpE2dGkSmQQZpl1+kRq5nPkEJO2EEtkY0BeCaP
	pFaMtcMEtwWtTGzKUxsUdAzF65v0/BxiskVA70oviqVFDLGDXq0m+Y+tZ6ESpFecpIO2
	8ZWCMpIX/9e98CgzufV6YPdI7h/+PfZkpSjmZEXh3pZHKgl9zCZZmjF/h7D1cv+Wv7PW
	oOww==
X-Received: by 10.112.88.233 with SMTP id bj9mr4624079lbb.10.1392998493272;
	Fri, 21 Feb 2014 08:01:33 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Fri, 21 Feb 2014 08:01:13 -0800 (PST)
In-Reply-To: <53074E6B.6030006@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
	<20140220091958.62a8b444@nehalam.linuxnetplumber.net>
	<CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
	<53074E6B.6030006@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 21 Feb 2014 08:01:13 -0800
X-Google-Sender-Auth: g8flil1XipO98jp74Y9RRz7l_gg
Message-ID: <CAB=NE6XR+FW2X2_nr2JAxgQD+zpm8=Xq7Y4fTf740rLhGCOzEw@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 5:02 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> Agreed that's the best strategy and I'll work on sending patches to
>> brctl to enable the root_block preference. This approach however also
>
> I don't think brctl should deal with any Xen specific stuff. I assume there
> is a misunderstanding in this thread: when I (and possibly other Xen folks)
> talk about "userspace" or "toolstack" here, I mean Xen specific tools which
> use e.g. brctl to set up bridges. Not brctl itself.

I did mean brctl, but as I looked at the code, it doesn't use
rtnl_open(), and I'm not sure Stephen would want that. Additionally,
even if it did handle root_block, the other issue with this strategy is
that, as you noted, upon initialization a bridge without a static MAC
address could end up selecting the backend as the root port, until you
let userspace turn the root_block knob.
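
One way a toolstack could avoid that window, sketched here with
hypothetical device names, is to pin the bridge MAC before enslaving any
interface, so the recalculation described earlier never runs:

```shell
# Pin the bridge MAC to the physical NIC's address *before* adding ports,
# so later vif additions cannot lower the bridge ID (names hypothetical).
brctl addbr xenbr0
ip link set dev xenbr0 address "$(cat /sys/class/net/eth0/address)"
brctl addif xenbr0 eth0
brctl addif xenbr0 vif1.0   # bridge MAC now stays fixed (NET_ADDR_SET)
```

This is the "userspace upgrade" route, of course: old toolstacks that
don't do this still hit the window.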

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:01:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsXg-0005rX-TZ; Fri, 21 Feb 2014 16:01:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WGsXf-0005rO-Aw
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 16:01:35 +0000
Received: from [85.158.139.211:57326] by server-10.bemta-5.messagelabs.com id
	89/26-08578-E5877035; Fri, 21 Feb 2014 16:01:34 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1392998493!5425077!1
X-Originating-IP: [209.85.217.181]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12816 invoked from network); 21 Feb 2014 16:01:34 -0000
Received: from mail-lb0-f181.google.com (HELO mail-lb0-f181.google.com)
	(209.85.217.181)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:01:34 -0000
Received: by mail-lb0-f181.google.com with SMTP id z11so2444003lbi.26
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 08:01:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=n7hBoHr9WFc5t4zHypljSCl6cT3YgUdBu7ZXzeiU6ek=;
	b=j/5FpA2j5ZfVhBGCpXhsB39RSZh53nuqxiEc3turJZwNfR9vyX1QxUM5vfAOCaeFwH
	yfTIKjpnSpK5o6PXX5kfqNn7kk9kn4NokRy5GpEDa+PNc36Xq8QCPQUXA277z76GEm3Q
	pE+jz/ejgTMjw7lxTD7E5nXHP6RHqDBpE2dGkSmQQZpl1+kRq5nPkEJO2EEtkY0BeCaP
	pFaMtcMEtwWtTGzKUxsUdAzF65v0/BxiskVA70oviqVFDLGDXq0m+Y+tZ6ESpFecpIO2
	8ZWCMpIX/9e98CgzufV6YPdI7h/+PfZkpSjmZEXh3pZHKgl9zCZZmjF/h7D1cv+Wv7PW
	oOww==
X-Received: by 10.112.88.233 with SMTP id bj9mr4624079lbb.10.1392998493272;
	Fri, 21 Feb 2014 08:01:33 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Fri, 21 Feb 2014 08:01:13 -0800 (PST)
In-Reply-To: <53074E6B.6030006@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
	<20140220091958.62a8b444@nehalam.linuxnetplumber.net>
	<CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
	<53074E6B.6030006@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 21 Feb 2014 08:01:13 -0800
X-Google-Sender-Auth: g8flil1XipO98jp74Y9RRz7l_gg
Message-ID: <CAB=NE6XR+FW2X2_nr2JAxgQD+zpm8=Xq7Y4fTf740rLhGCOzEw@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 5:02 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>> Agreed that's the best strategy and I'll work on sending patches to
>> brctl to enable the root_block preference. This approach however also
>
> I don't think brctl should deal with any Xen specific stuff. I assume there
> is a misunderstanding in this thread: when I (and possibly other Xen folks)
> talk about "userspace" or "toolstack" here, I mean Xen specific tools which
> use e.g. brctl to set up bridges. Not brctl itself.

I did mean brctl, but as I looked at the code it doesn't use
rtnl_open(), and I'm not sure whether Stephen would want that.
Additionally, even if it did handle root_block, the other issue with
this strategy is that, as you noted, upon initialization the bridge,
without a static MAC address, could end up selecting the backend as
the root port until userspace gets a chance to turn the root_block knob.
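
For context, the root_block knob discussed here is exposed per bridge
port through sysfs on kernels that carry the brport root_block
attribute. A minimal sketch of how a toolstack hotplug script might
flip it (the interface name vif1.0 is a hypothetical example, not
taken from this thread) could look like:

```shell
#!/bin/sh
# Sketch: mark a backend interface as ineligible to become the STP root
# port, assuming the kernel exposes the per-port root_block attribute.
# PORT is a hypothetical example name for a vif backend interface.
PORT="${PORT:-vif1.0}"
SYSFS="/sys/class/net/$PORT/brport/root_block"

if [ -w "$SYSFS" ]; then
    # 1 = block this port from ever being selected as the root port
    echo 1 > "$SYSFS"
    echo "root_block enabled on $PORT"
else
    # Port is not enslaved to a bridge, or the kernel lacks root_block
    echo "no writable $SYSFS; skipping"
fi
```

This only closes the window after the port is enslaved; it does not
address the race Luis describes, where the bridge may elect a root
port before any userspace tool has run.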

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:16:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsm5-0006kp-Ns; Fri, 21 Feb 2014 16:16:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGsm4-0006kY-3o
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:16:28 +0000
Received: from [85.158.143.35:3126] by server-3.bemta-4.messagelabs.com id
	BA/F4-11539-ADB77035; Fri, 21 Feb 2014 16:16:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1392999384!7412879!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28927 invoked from network); 21 Feb 2014 16:16:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:16:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="103031930"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 16:16:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 11:16:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGslz-0000ji-6w;
	Fri, 21 Feb 2014 16:16:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGsly-0007Sq-VG;
	Fri, 21 Feb 2014 16:16:23 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21255.31702.826721.362385@mariner.uk.xensource.com>
Date: Fri, 21 Feb 2014 16:16:22 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <53075C9E.9070608@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Release, RC5, and branching"):
> So it's now been several days, and no one has reported any major issues 
> with RC4.  There are a handful of ARM-related fixes in the tree.
> 
> I propose that we tag RC5 today, and make that the official release 
> (whenever we're ready to with the Linux Foundation PR process).
> 
> After tagging RC5 this afternoon, I don't see much reason not to branch 
> -- I don't think at this point we're going to get much more testing in.

OK.  I'll do the branch after the RC5 tarball is made.

Also we should do the checklist items that involve changes to the
xen.git tree, ASAP after RC5 and branching, so that ideally we get a
test push from the new staging-4.4 branch onto stable-4.4 before the
release.

How sure are we that the qemu trees we will be releasing with are the
ones in RC5?  If we're sure then we could make the release tags now.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:19:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:19:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGsoh-0006uJ-AL; Fri, 21 Feb 2014 16:19:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGsof-0006uD-T9
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 16:19:10 +0000
Received: from [85.158.137.68:43724] by server-7.bemta-3.messagelabs.com id
	FC/C9-13775-D7C77035; Fri, 21 Feb 2014 16:19:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1392999540!3443181!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20142 invoked from network); 21 Feb 2014 16:19:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:19:07 -0000
X-IronPort-AV: E=Sophos;i="4.97,519,1389744000"; d="scan'208";a="103032837"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 16:18:57 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 11:18:56 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGsoR-0000kQ-Mg;
	Fri, 21 Feb 2014 16:18:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGsoR-0007GI-HC;
	Fri, 21 Feb 2014 16:18:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25221-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 16:18:55 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 25221: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25221 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25221/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 24873

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
baseline version:
 linux                a6d2ebcda7cb7467b3f5ca597710be25cc8ad76f

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Andrew Morton <akpm@linux-foundation.org>
  Asias He <asias@redhat.com>
  Avi Kivity <avi@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin LaHaise <bcrl@kvack.org>
  Bojan Smojver <bojan@rexursive.com>
  Dan Rosenberg <dan.j.rosenberg@gmail.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Rientjes <rientjes@google.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jeff Layton <jlayton@redhat.com>
  Jiang Liu <liuj97@gmail.com>
  KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Rafael J. Wysocki <rjw@sisk.pl>
  Roland Dreier <roland@purestorage.com>
  Rusty Russell <rusty@rustcorp.com.au>
  Seth Forshee <seth.forshee@canonical.com>
  Stephen Smalley <sds@tycho.nsa.gov>
  Steven Rostedt <rostedt@goodmis.org>
  Tao Ma <boyu.mt@taobao.com>
  Trond Myklebust <Trond.Myklebust@netapp.com>
  Xishi Qiu <qiuxishi@huawei.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 934 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:30:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:30:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGszN-0007HG-7z; Fri, 21 Feb 2014 16:30:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGszM-0007HB-6p
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:30:12 +0000
Received: from [193.109.254.147:25291] by server-11.bemta-14.messagelabs.com
	id 63/41-24604-31F77035; Fri, 21 Feb 2014 16:30:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393000209!6007845!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31742 invoked from network); 21 Feb 2014 16:30:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:30:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="103036405"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 16:30:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 11:30:08 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGszI-0000o6-Dv;
	Fri, 21 Feb 2014 16:30:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGszI-0008Kj-4G;
	Fri, 21 Feb 2014 16:30:08 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21255.32527.955457.20429@mariner.uk.xensource.com>
Date: Fri, 21 Feb 2014 16:30:07 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>, Jan Beulich <JBeulich@suse.com>,
	Konrad Rzeszutek
	Wilk <konrad.wilk@oracle.com>, Ian Campbell <ian.campbell@citrix.com>, 
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <21255.31702.826721.362385@mariner.uk.xensource.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: Release, RC5, and branching"):
> George Dunlap writes ("Release, RC5, and branching"):
> > After tagging RC5 this afternoon, I don't see much reason not to branch 
> > -- I don't think at this point we're going to get much more testing in.
> 
> OK.  I'll do the branch after the RC5 tarball is made.

The RC5 tags and tarball are in the usual places.  George, do you want
to send an RC announcement?

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:34:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGt3S-0007VG-61; Fri, 21 Feb 2014 16:34:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGt3R-0007V4-1y
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:34:25 +0000
Received: from [193.109.254.147:9863] by server-10.bemta-14.messagelabs.com id
	15/A5-10711-01087035; Fri, 21 Feb 2014 16:34:24 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393000457!2301695!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29904 invoked from network); 21 Feb 2014 16:34:23 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:34:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="103037786"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 16:34:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 11:34:22 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGt3N-0008DO-Ie;
	Fri, 21 Feb 2014 16:34:21 +0000
Message-ID: <53078006.60900@eu.citrix.com>
Date: Fri, 21 Feb 2014 16:34:14 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
In-Reply-To: <21255.31702.826721.362385@mariner.uk.xensource.com>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/21/2014 04:16 PM, Ian Jackson wrote:
> George Dunlap writes ("Release, RC5, and branching"):
>> So it's now been several days, and no one has reported any major issues
>> with RC4.  There are a handful of ARM-related fixes in the tree.
>>
>> I propose that we tag RC5 today, and make that the official release
>> (whenever we're ready to with the Linux Foundation PR process).
>>
>> After tagging RC5 this afternoon, I don't see much reason not to branch
>> -- I don't think at this point we're going to get much more testing in.
> OK.  I'll do the branch after the RC5 tarball is made.
>
> Also we should do the checklist items that involve changes to the
> xen.git tree, ASAP after RC5 and branching, so that ideally we get a
> test push from the new staging-4.4 branch onto stable-4.4 before the
> release.

Yes, please do the branch.

>
> How sure are we that the qemu trees we will be releasing with are the
> ones in RC5 ?  If we're sure then we could make the release tags now.

Is there a cost to waiting for the release tags?  I don't know exactly
how we want to arrange the timing of the release with the press
announcement, but if it wouldn't cause too much hassle / risk of missing
something, it might be better to wait until the day of our official
announcement, probably a week on Monday.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:41:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtAb-0008RD-7p; Fri, 21 Feb 2014 16:41:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1WGtAZ-0008R4-Mf
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:41:47 +0000
Received: from [193.109.254.147:59640] by server-1.bemta-14.messagelabs.com id
	ED/0E-15438-BC187035; Fri, 21 Feb 2014 16:41:47 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393000904!6019237!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20713 invoked from network); 21 Feb 2014 16:41:45 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:41:45 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so6374576qcx.37
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 08:41:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=XAxjdbXHTFj6fAFATbn6dcVwWUMxoYz+QHyFJrQl8kE=;
	b=d+plO+3LGORSpdblJUy+v06ZBWgD7q0wh8F/Sz6RAO9/wqMIpl2qtKXJR6Iht/Z4OQ
	9ksZBexaM4WaXaDo56nlxJIXDHkOxxrL9QS+yB8vFx/yyUp1pTS4yeYhs3iA32jIzviC
	rlJAT6kfHJIl5DslI8EZwbss7mCRceB1acNCYqU/4/Z9gLJZqqPX4zp3mxfd0PjqIyVW
	dmz4wQXCnsI6MN5+UwY+0S4H74LypF+r1bPdFp7r7krQKnZe7T8wGNyfHA2At6CLjLRX
	6H2gsU7ecZlLQIiVogq4llRXaT8nJYIMsPOYoBLQasiOYiSlom9ZYJZLsdhn5auDngdy
	VAFA==
X-Gm-Message-State: ALoCoQk5mTpRmNPpyuILjPUoaIJ2xz0A3Qe6B5vIftbJ7GYFxdsL10V/pEendol+8omGsB1yvIgm
X-Received: by 10.140.23.209 with SMTP id 75mr10719285qgp.89.1393000904561;
	Fri, 21 Feb 2014 08:41:44 -0800 (PST)
Received: from andresmac.gridcentric.ca ([96.45.203.162])
	by mx.google.com with ESMTPSA id r13sm23201443qan.7.2014.02.21.08.41.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 21 Feb 2014 08:41:44 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <mailman.9276.1392977438.24322.xen-devel@lists.xen.org>
Date: Fri, 21 Feb 2014 11:41:42 -0500
Message-Id: <C625F7EE-A8B6-48E4-9ED1-DA935C8A41BD@gridcentric.ca>
References: <mailman.9276.1392977438.24322.xen-devel@lists.xen.org>
To: xen-devel@lists.xen.org
X-Mailer: Apple Mail (2.1510)
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Rusty Russell <rusty@au1.ibm.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
>> On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>> Anthony Liguori <anthony@codemonkey.ws> writes:
>>>> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>>>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>>>>> Hi,
>>>>>> 
>>>>>> Below you could find a summary of work in regards to VIRTIO compatibility with
>>>>>> different virtualization solutions. It was done mainly from Xen point of view
>>>>>> but results are quite generic and can be applied to wide spectrum
>>>>>> of virtualization platforms.
>>>>> 
>>>>> Hi Daniel,
>>>>> 
>>>>>        Sorry for the delayed response, I was pondering...  CC changed
>>>>> to virtio-dev.
>>>>> 
>>>>> From a standard POV: It's possible to abstract out the where we use
>>>>> 'physical address' for 'address handle'.  It's also possible to define
>>>>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
>>>>> Xen-PV is a distinct platform from x86.
>>>> 
>>>> I'll go even further and say that "address handle" doesn't make sense too.
>>> 
>>> I was trying to come up with a unique term, I wasn't trying to define
>>> semantics :)
>> 
>> Understood, that wasn't really directed at you.
>> 
>>> There are three debates here now: (1) what should the standard say, and
>> 
>> The standard should say, "physical address"
>> 
>>> (2) how would Linux implement it,
>> 
>> Linux should use the PCI DMA API.
>> 
>>> (3) should we use each platform's PCI
>>> IOMMU.
>> 
>> Just like any other PCI device :-)
>> 
>>>> Just using grant table references is not enough to make virtio work
>>>> well under Xen.  You really need to use bounce buffers ala persistent
>>>> grants.
>>> 
>>> Wait, if you're using bounce buffers, you didn't make it "work well"!
>> 
>> Preaching to the choir man...  but bounce buffering is proven to be
>> faster than doing grant mappings on every request.  xen-blk does
>> bounce buffering by default and I suspect netfront is heading that
>> direction soon.
>> 
> 
> FWIW Annie Li @ Oracle once implemented a persistent map prototype for
> netfront and the result was not satisfying.
> 
>> It would be a lot easier to simply have a global pool of grant tables
>> that effectively becomes the DMA pool.  Then the DMA API can bounce
>> into that pool and those addresses can be placed on the ring.
>> 
>> It's a little different for Xen because now the backends have to deal
>> with physical addresses but the concept is still the same.
>> 
> 
> How would you apply this to Xen's security model? How can hypervisor
> effectively enforce access control? "Handle" and "physical address" are
> essentially not the same concept, otherwise you wouldn't have proposed
> this change. Not saying I'm against this change, just this description
> is too vague for me to understand the bigger picture.

I might be missing something trivial, but the burden of enforcing visibility of memory only for handles falls on the hypervisor. Taking KVM as an example, the whole RAM of a guest is a vma in the mm of the faulting qemu process; that's KVM's way of doing things. "Handles" could be pfns for all that model cares, and translation+mapping from handles to actual guest RAM addresses is trivially O(1). There is no guest control over RAM visibility, and KVM is happy with that.

Xen, on the other hand, can encode a 64 bit grant handle in the "__u64 addr" field of a virtio descriptor. The negotiation happens up front, the flags field is set to signal the guest is encoding handles in there. Once the Xen virtio backend gets that descriptor out of the ring, what is left is not all that different from what netback/blkback/gntdev do today with a ring request.

I'm obviously glossing over serious details (e.g. negotiation of what the u64 addr means), but what I'm getting at is that I fail to understand why whole-RAM visibility is a requirement for virtio. It seems to me to be a requirement for KVM and other hypervisors, while virtio is a transport and sync mechanism for high(er)-level IO descriptors.

Can someone please clarify why, under Xen, "you really need to use bounce buffers ala persistent grants"? Is that a performance need, to avoid repeated backend-side mapping and TLB flushing? Granted. But why would it be a correctness need? Guest-side grant table work requires no hypercalls in the data path.

If I am rewinding the conversation, feel free to ignore, but I'm not feeling a lot of clarity in the dialogue right now.

Thanks
Andres

> 
> But a downside for sure is that if we go with this change we then have
> to maintain two different paths in backend. However small the difference
> is it is still a burden.


> 
> Wei.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:41:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:41:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtAb-0008RD-7p; Fri, 21 Feb 2014 16:41:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andreslc@gridcentric.ca>) id 1WGtAZ-0008R4-Mf
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:41:47 +0000
Received: from [193.109.254.147:59640] by server-1.bemta-14.messagelabs.com id
	ED/0E-15438-BC187035; Fri, 21 Feb 2014 16:41:47 +0000
X-Env-Sender: andreslc@gridcentric.ca
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393000904!6019237!1
X-Originating-IP: [209.85.216.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20713 invoked from network); 21 Feb 2014 16:41:45 -0000
Received: from mail-qc0-f178.google.com (HELO mail-qc0-f178.google.com)
	(209.85.216.178)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:41:45 -0000
Received: by mail-qc0-f178.google.com with SMTP id m20so6374576qcx.37
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 08:41:44 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:content-type:mime-version:subject:from
	:in-reply-to:date:cc:content-transfer-encoding:message-id:references
	:to; bh=XAxjdbXHTFj6fAFATbn6dcVwWUMxoYz+QHyFJrQl8kE=;
	b=d+plO+3LGORSpdblJUy+v06ZBWgD7q0wh8F/Sz6RAO9/wqMIpl2qtKXJR6Iht/Z4OQ
	9ksZBexaM4WaXaDo56nlxJIXDHkOxxrL9QS+yB8vFx/yyUp1pTS4yeYhs3iA32jIzviC
	rlJAT6kfHJIl5DslI8EZwbss7mCRceB1acNCYqU/4/Z9gLJZqqPX4zp3mxfd0PjqIyVW
	dmz4wQXCnsI6MN5+UwY+0S4H74LypF+r1bPdFp7r7krQKnZe7T8wGNyfHA2At6CLjLRX
	6H2gsU7ecZlLQIiVogq4llRXaT8nJYIMsPOYoBLQasiOYiSlom9ZYJZLsdhn5auDngdy
	VAFA==
X-Gm-Message-State: ALoCoQk5mTpRmNPpyuILjPUoaIJ2xz0A3Qe6B5vIftbJ7GYFxdsL10V/pEendol+8omGsB1yvIgm
X-Received: by 10.140.23.209 with SMTP id 75mr10719285qgp.89.1393000904561;
	Fri, 21 Feb 2014 08:41:44 -0800 (PST)
Received: from andresmac.gridcentric.ca ([96.45.203.162])
	by mx.google.com with ESMTPSA id r13sm23201443qan.7.2014.02.21.08.41.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 21 Feb 2014 08:41:44 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 6.6 \(1510\))
From: Andres Lagar-Cavilla <andreslc@gridcentric.ca>
In-Reply-To: <mailman.9276.1392977438.24322.xen-devel@lists.xen.org>
Date: Fri, 21 Feb 2014 11:41:42 -0500
Message-Id: <C625F7EE-A8B6-48E4-9ED1-DA935C8A41BD@gridcentric.ca>
References: <mailman.9276.1392977438.24322.xen-devel@lists.xen.org>
To: xen-devel@lists.xen.org
X-Mailer: Apple Mail (2.1510)
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Rusty Russell <rusty@au1.ibm.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
>> On Wed, Feb 19, 2014 at 5:31 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>> Anthony Liguori <anthony@codemonkey.ws> writes:
>>>> On Tue, Feb 18, 2014 at 4:26 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>>>>> Daniel Kiper <daniel.kiper@oracle.com> writes:
>>>>>> Hi,
>>>>>> 
>>>>>> Below you could find a summary of work in regards to VIRTIO compatibility with
>>>>>> different virtualization solutions. It was done mainly from Xen point of view
>>>>>> but results are quite generic and can be applied to wide spectrum
>>>>>> of virtualization platforms.
>>>>> 
>>>>> Hi Daniel,
>>>>> 
>>>>>        Sorry for the delayed response, I was pondering...  CC changed
>>>>> to virtio-dev.
>>>>> 
>>>>> From a standard POV: It's possible to abstract out the where we use
>>>>> 'physical address' for 'address handle'.  It's also possible to define
>>>>> this per-platform (ie. Xen-PV vs everyone else).  This is sane, since
>>>>> Xen-PV is a distinct platform from x86.
>>>> 
>>>> I'll go even further and say that "address handle" doesn't make sense too.
>>> 
>>> I was trying to come up with a unique term, I wasn't trying to define
>>> semantics :)
>> 
>> Understood, that wasn't really directed at you.
>> 
>>> There are three debates here now: (1) what should the standard say, and
>> 
>> The standard should say, "physical address"
>> 
>>> (2) how would Linux implement it,
>> 
>> Linux should use the PCI DMA API.
>> 
>>> (3) should we use each platform's PCI
>>> IOMMU.
>> 
>> Just like any other PCI device :-)
>> 
>>>> Just using grant table references is not enough to make virtio work
>>>> well under Xen.  You really need to use bounce buffers ala persistent
>>>> grants.
>>> 
>>> Wait, if you're using bounce buffers, you didn't make it "work well"!
>> 
>> Preaching to the choir man...  but bounce buffering is proven to be
>> faster than doing grant mappings on every request.  xen-blk does
>> bounce buffering by default and I suspect netfront is heading that
>> direction soon.
>> 
> 
> FWIW Annie Li @ Oracle once implemented a persistent map prototype for
> netfront and the result was not satisfying.
> 
>> It would be a lot easier to simply have a global pool of grant tables
>> that effectively becomes the DMA pool.  Then the DMA API can bounce
>> into that pool and those addresses can be placed on the ring.
>> 
>> It's a little different for Xen because now the backends have to deal
>> with physical addresses but the concept is still the same.
>> 
> 
> How would you apply this to Xen's security model? How can the hypervisor
> effectively enforce access control? "Handle" and "physical address" are
> essentially not the same concept, otherwise you wouldn't have proposed
> this change. I'm not saying I'm against this change, just that this
> description is too vague for me to understand the bigger picture.

I might be missing something trivial, but the burden of enforcing that memory is visible only through handles falls on the hypervisor. Taking KVM as an example, the whole RAM of a guest is a vma in the mm of the faulting qemu process. That's KVM's way of doing things. "Handles" could be pfns for all that model cares, and translation+mapping from handles to actual guest RAM addresses is trivially O(1). There's no guest control over RAM visibility, and KVM is happy with that.

Xen, on the other hand, can encode a 64 bit grant handle in the "__u64 addr" field of a virtio descriptor. The negotiation happens up front, the flags field is set to signal the guest is encoding handles in there. Once the Xen virtio backend gets that descriptor out of the ring, what is left is not all that different from what netback/blkback/gntdev do today with a ring request.

I'm obviously glossing over serious details (e.g. negotiation of what u64 addr means), but what I'm getting at is that I fail to understand why whole-RAM visibility is a requirement for virtio. It seems to me to be a requirement for KVM and other hypervisors, while virtio is a transport and sync mechanism for high(er)-level IO descriptors.

Can someone please clarify why "under Xen, you really need to use bounce buffers a la persistent grants"? Is that a performance need, to avoid repeated backend-side mapping and TLB flushing? Granted. But why would it be a correctness need? Guest-side grant table work requires no hypercalls in the data path.

If I am rewinding the conversation, feel free to ignore, but I'm not feeling a lot of clarity in the dialogue right now.

Thanks
Andres

> 
> But a downside for sure is that if we go with this change we then have
> to maintain two different paths in the backend. However small the
> difference is, it is still a burden.


> 
> Wei.
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:50:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtIs-0000TY-Su; Fri, 21 Feb 2014 16:50:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGtIZ-0000TO-9u
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:50:19 +0000
Received: from [85.158.143.35:13042] by server-3.bemta-4.messagelabs.com id
	B6/B3-11539-AB387035; Fri, 21 Feb 2014 16:50:02 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393001400!7414130!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21350 invoked from network); 21 Feb 2014 16:50:02 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:50:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="103042519"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 16:49:59 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 11:49:59 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtIU-0000tk-TJ;
	Fri, 21 Feb 2014 16:49:58 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtIT-0008PG-2m;
	Fri, 21 Feb 2014 16:49:57 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21255.33715.774848.583734@mariner.uk.xensource.com>
Date: Fri, 21 Feb 2014 16:49:55 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <53078006.60900@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<53078006.60900@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: Release, RC5, and branching"):
> On 02/21/2014 04:16 PM, Ian Jackson wrote:
> > How sure are we that the qemu trees we will be releasing with are the
> > ones in RC5 ?  If we're sure then we could make the release tags now.
> 
> Is there a cost to waiting for the release tags?  I don't know exactly 
> how we want to arrange the timing of the release with the press 
> announcement, but if it wouldn't cause too much hassle / risk of missing 
> something, it might be better to wait until the day of our official 
> announcement, probably a week on Monday.

If we wait then we have to either (a) have old tags in Config.mk (e.g.
references to -rc5) or (b) release something that is still in
staging-4.4 and not in stable-4.4 yet.

If you're thinking about a release in a bit over a week we can
probably expect a test pass if we do the Config.mk etc. changes early
next week, but I'm not sure I see a reason to wait.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:51:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtJy-0000YB-72; Fri, 21 Feb 2014 16:51:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGtJw-0000Xw-M5
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:51:28 +0000
Received: from [85.158.137.68:55211] by server-9.bemta-3.messagelabs.com id
	21/01-10184-F0487035; Fri, 21 Feb 2014 16:51:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393001483!2171597!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29200 invoked from network); 21 Feb 2014 16:51:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:51:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104726902"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 16:51:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 11:51:14 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtJi-0000uE-3s;
	Fri, 21 Feb 2014 16:51:14 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtJg-0008PP-6P;
	Fri, 21 Feb 2014 16:51:12 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21255.33790.848491.795999@mariner.uk.xensource.com>
Date: Fri, 21 Feb 2014 16:51:10 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <53078006.60900@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<53078006.60900@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan, I have made the {staging,stable}-4.4 branches in xen.git.

My release checklist says:

  Update new stable tree's MAINTAINERS to contain correct info for this
  stable branch

Will you take care of this?  I'm not sure if it's as simple as
copying the file from staging-4.3.  Please push the update to
staging-4.4.

thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:53:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:53:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtLt-0000ev-UV; Fri, 21 Feb 2014 16:53:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WGtLb-0000ed-LP
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:53:28 +0000
Received: from [85.158.139.211:10229] by server-4.bemta-5.messagelabs.com id
	90/5D-08092-67487035; Fri, 21 Feb 2014 16:53:10 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393001588!5480096!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26955 invoked from network); 21 Feb 2014 16:53:10 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:53:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104727220"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 16:52:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 11:52:04 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WGtKV-0008Uf-N6;
	Fri, 21 Feb 2014 16:52:03 +0000
Message-ID: <53078433.60202@citrix.com>
Date: Fri, 21 Feb 2014 16:52:03 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<21255.32527.955457.20429@mariner.uk.xensource.com>
In-Reply-To: <21255.32527.955457.20429@mariner.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21/02/14 16:30, Ian Jackson wrote:
> Ian Jackson writes ("Re: Release, RC5, and branching"):
>> George Dunlap writes ("Release, RC5, and branching"):
>>> After tagging RC5 this afternoon, I don't see much reason not to branch 
>>> -- I don't think at this point we're going to get much more testing in.
>> OK.  I'll do the branch after the RC5 tarball is made.
> The RC5 tags and tarball are in the usual places.  George, do you want
> to send an RC announcement ?
>
> Ian.

What about changing the build to not be debug by default, and the main
Xen version?  They will need to be done before the release.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 16:55:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtNq-0000o1-Lq; Fri, 21 Feb 2014 16:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGtNo-0000nZ-40
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:55:28 +0000
Received: from [85.158.137.68:22280] by server-13.bemta-3.messagelabs.com id
	CA/78-26923-FF487035; Fri, 21 Feb 2014 16:55:27 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393001724!3394775!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2145 invoked from network); 21 Feb 2014 16:55:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:55:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104728200"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 16:54:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 11:54:28 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGtMp-00005W-6Q;
	Fri, 21 Feb 2014 16:54:27 +0000
Message-ID: <530784BC.7060307@eu.citrix.com>
Date: Fri, 21 Feb 2014 16:54:20 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
From xen-devel-bounces@lists.xen.org Fri Feb 21 16:55:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 16:55:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtNq-0000o1-Lq; Fri, 21 Feb 2014 16:55:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WGtNo-0000nZ-40
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:55:28 +0000
Received: from [85.158.137.68:22280] by server-13.bemta-3.messagelabs.com id
	CA/78-26923-FF487035; Fri, 21 Feb 2014 16:55:27 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393001724!3394775!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2145 invoked from network); 21 Feb 2014 16:55:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 16:55:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104728200"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 16:54:28 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 11:54:28 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WGtMp-00005W-6Q;
	Fri, 21 Feb 2014 16:54:27 +0000
Message-ID: <530784BC.7060307@eu.citrix.com>
Date: Fri, 21 Feb 2014 16:54:20 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<21255.32527.955457.20429@mariner.uk.xensource.com>
	<53078433.60202@citrix.com>
In-Reply-To: <53078433.60202@citrix.com>
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Ian
	Campbell <ian.campbell@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/21/2014 04:52 PM, Andrew Cooper wrote:
> On 21/02/14 16:30, Ian Jackson wrote:
>> Ian Jackson writes ("Re: Release, RC5, and branching"):
>>> George Dunlap writes ("Release, RC5, and branching"):
>>>> After tagging RC5 this afternoon, I don't see much reason not to branch
>>>> -- I don't think at this point we're going to get much more testing in.
>>> OK.  I'll do the branch after the RC5 tarball is made.
>> The RC5 tags and tarball are in the usual places.  George, do you want
>> to send an RC announcement ?
>>
>> Ian.
> What about changing the build to not be debug by default, and the main
> Xen version?  They will need to be done before the release.

Hmm, arguably that should be done once we start RCs, I would have 
thought, so we're actually testing the codepaths we plan on releasing.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 17:04:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtWX-0001O5-Ii; Fri, 21 Feb 2014 17:04:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGtWW-0001Ns-4h
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:04:28 +0000
Received: from [85.158.143.35:26426] by server-2.bemta-4.messagelabs.com id
	AB/0F-10891-B1787035; Fri, 21 Feb 2014 17:04:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393002266!7428521!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23144 invoked from network); 21 Feb 2014 17:04:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 17:04:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Feb 2014 17:04:26 +0000
Message-Id: <53079528020000780011E62D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 21 Feb 2014 17:04:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<53078006.60900@eu.citrix.com>
	<21255.33790.848491.795999@mariner.uk.xensource.com>
In-Reply-To: <21255.33790.848491.795999@mariner.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	IanCampbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 17:51, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> Jan, I have made the {staging,stable}-4.4 branches in xen.git.
> 
> My release checklist says:
> 
>   Update new stable tree's MAINTAINERS to contain correct info for this
>   stable branch
> 
> Will you take care of this ?  I'm not sure if it's just as simple as
> copying the file from staging-4.3.  Please push the update to
> staging-4.4.

Sure, I'll take care of this (but I'm wondering whether I should
rather do this after the release).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 17:05:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:05:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtXZ-0001VO-0t; Fri, 21 Feb 2014 17:05:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WGtXY-0001VE-1T
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:05:32 +0000
Received: from [85.158.143.35:39170] by server-1.bemta-4.messagelabs.com id
	52/06-31661-B5787035; Fri, 21 Feb 2014 17:05:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393002330!7390779!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12223 invoked from network); 21 Feb 2014 17:05:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 17:05:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 21 Feb 2014 17:05:30 +0000
Message-Id: <5307956A020000780011E631@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 21 Feb 2014 17:05:30 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>,
	"George Dunlap" <george.dunlap@eu.citrix.com>,
	"Ian Jackson" <Ian.Jackson@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<21255.32527.955457.20429@mariner.uk.xensource.com>
	<53078433.60202@citrix.com> <530784BC.7060307@eu.citrix.com>
In-Reply-To: <530784BC.7060307@eu.citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	IanCampbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 17:54, George Dunlap <george.dunlap@eu.citrix.com> wrote:
> On 02/21/2014 04:52 PM, Andrew Cooper wrote:
>> On 21/02/14 16:30, Ian Jackson wrote:
>>> Ian Jackson writes ("Re: Release, RC5, and branching"):
>>>> George Dunlap writes ("Release, RC5, and branching"):
>>>>> After tagging RC5 this afternoon, I don't see much reason not to branch
>>>>> -- I don't think at this point we're going to get much more testing in.
>>>> OK.  I'll do the branch after the RC5 tarball is made.
>>> The RC5 tags and tarball are in the usual places.  George, do you want
>>> to send an RC announcement ?
>>>
>>> Ian.
>> What about changing the build to not be debug by default, and the main
>> Xen version?  They will need to be done before the release.
> 
> Hmm, arguably that should be done once we start RCs, I would have 
> thought, so we're actually testing the codepaths we plan on releasing.

I think we changed to non-debug builds pretty late in the process
of all earlier releases.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 17:09:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtbP-0001tY-N1; Fri, 21 Feb 2014 17:09:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WGtbO-0001tQ-Ox
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:09:30 +0000
Received: from [85.158.139.211:19825] by server-9.bemta-5.messagelabs.com id
	A3/70-11237-94887035; Fri, 21 Feb 2014 17:09:29 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393002569!5446874!1
X-Originating-IP: [74.125.82.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3147 invoked from network); 21 Feb 2014 17:09:29 -0000
Received: from mail-we0-f177.google.com (HELO mail-we0-f177.google.com)
	(74.125.82.177)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:09:29 -0000
Received: by mail-we0-f177.google.com with SMTP id t61so2664091wes.8
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 09:09:29 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=9NiDrXGB+DlkRAhVRXGB8xCFPSnNxdC29XWer6NyfnU=;
	b=hNKSNSxCHub3Xuijrmq6F5RaB2sliMs5/WPsPgNpWdxTGqBdN51Crjn0OkCBAJMXLV
	c9Jg3Gnm77AvMi1aRLKWJTPIG119AnI1FOoRf7142a08GfsMVvE5lvMowoLUeTwsNmaT
	m8vH1SdyTaZAgJvoVs9o24u2+FLruMdqUoFdoDys+jQmnYgKQhWWMejnO3rtneH+9z+6
	LeCD5JaOgtjud4XlBqtOog78P0aIMq61wyo+7CA7OpRp+YB9kygdWKeaO+pko13Y7Yzo
	RrSyG9MWdWwQScP8QMiRcspiy1nFVqvJ+9N/N5Gc36SCPQIlf3TM8tkjMJnUMif/bcaO
	Rrlg==
MIME-Version: 1.0
X-Received: by 10.180.185.197 with SMTP id fe5mr4126263wic.56.1393002569316;
	Fri, 21 Feb 2014 09:09:29 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Fri, 21 Feb 2014 09:09:29 -0800 (PST)
Date: Fri, 21 Feb 2014 17:09:29 +0000
X-Google-Sender-Auth: p4yBwtlLoGe-zntvkM2MxgFxtmw
Message-ID: <CAFLBxZaW9w4mZ2XBsoygrjRuHEEd=rHf7dGdZ856s3JUcnFCxA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen 4.4 branched, Xen 4.5-unstable open for development
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

We have now branched 4.4 in preparation for the official release
sometime a week or two hence.  Thus the 4.5 development window is now
open.

Thank you everyone for your hard work this release cycle!

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 17:11:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtdf-00028u-Ch; Fri, 21 Feb 2014 17:11:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WGtdd-00028T-Ve
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:11:50 +0000
Received: from [193.109.254.147:18821] by server-3.bemta-14.messagelabs.com id
	0A/54-00432-5D887035; Fri, 21 Feb 2014 17:11:49 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393002698!6016195!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjU2ODMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11117 invoked from network); 21 Feb 2014 17:11:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:11:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; 
	d="asc'?scan'208";a="104734356"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 17:11:38 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 12:11:37 -0500
Message-ID: <1393002695.32038.831.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Feb 2014 18:11:35 +0100
In-Reply-To: <530784BC.7060307@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<21255.32527.955457.20429@mariner.uk.xensource.com>
	<53078433.60202@citrix.com> <530784BC.7060307@eu.citrix.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3792280216867158623=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3792280216867158623==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-CUhGG8Mk5/PZ3MoBtKPn"

--=-CUhGG8Mk5/PZ3MoBtKPn
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-02-21 at 16:54 +0000, George Dunlap wrote:
> On 02/21/2014 04:52 PM, Andrew Cooper wrote:
> > What about changing the build to not be debug by default, and the main
> > Xen version?  They will need to be done before the release.
>
> Hmm, arguably that should be done once we start RCs, I would have
> thought, so we're actually testing the codepaths we plan on releasing.
>
On one hand, that makes a lot of sense. On the other hand, perhaps,
having debug=y makes it easier to look at bugs reported by people
actually testing the RCs?

Perhaps, for next time of course, switch to debug=n when we feel like
we're ~ in the middle of the RC process?

just my 2 cents,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-CUhGG8Mk5/PZ3MoBtKPn
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMHiMcACgkQk4XaBE3IOsSqMwCcCY+g3TG02y8IZECyZ044R3gD
RekAmgPb0OfCNPRPGHy08JFIOKDkZd8z
=w29b
-----END PGP SIGNATURE-----

--=-CUhGG8Mk5/PZ3MoBtKPn--


--===============3792280216867158623==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3792280216867158623==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 17:11:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtdf-00028u-Ch; Fri, 21 Feb 2014 17:11:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WGtdd-00028T-Ve
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:11:50 +0000
Received: from [193.109.254.147:18821] by server-3.bemta-14.messagelabs.com id
	0A/54-00432-5D887035; Fri, 21 Feb 2014 17:11:49 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393002698!6016195!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjU2ODMgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11117 invoked from network); 21 Feb 2014 17:11:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:11:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; 
	d="asc'?scan'208";a="104734356"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 17:11:38 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 12:11:37 -0500
Message-ID: <1393002695.32038.831.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: George Dunlap <george.dunlap@eu.citrix.com>
Date: Fri, 21 Feb 2014 18:11:35 +0100
In-Reply-To: <530784BC.7060307@eu.citrix.com>
References: <53075C9E.9070608@eu.citrix.com>
	<21255.31702.826721.362385@mariner.uk.xensource.com>
	<21255.32527.955457.20429@mariner.uk.xensource.com>
	<53078433.60202@citrix.com> <530784BC.7060307@eu.citrix.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] Release, RC5, and branching
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3792280216867158623=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3792280216867158623==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-CUhGG8Mk5/PZ3MoBtKPn"

--=-CUhGG8Mk5/PZ3MoBtKPn
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On ven, 2014-02-21 at 16:54 +0000, George Dunlap wrote:
> On 02/21/2014 04:52 PM, Andrew Cooper wrote:
> > What about changing the build to not be debug by default, and the main
> > Xen version?  They will need to be done before the release.
>=20
> Hmm, arguably that should be done once we start RCs, I would have=20
> thought, so we're actually testing the codepaths we plan on releasing.
>=20
On one hand, that makes a lot of sense. On the other hand, perhaps,
having debug=3Dy makes it easier to look at bugs reported by people
actually testing the RCs?

Perhaps, for next time of course, switch to debug=3Dn when we feel like
we're roughly in the middle of the RC process?
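The debug default being discussed is a one-line build knob; as a purely illustrative sketch (the file name "Config.mk" and the "debug ?= y" line are assumptions from this thread, not verified against any particular Xen release), flipping it for the RC phase amounts to:

```python
# Sketch (hypothetical file and knob): flip a "debug ?= y" make-fragment
# default to "debug ?= n", as proposed for the mid-RC switch.
import pathlib
import re
import tempfile

workdir = pathlib.Path(tempfile.mkdtemp())
cfg = workdir / "Config.mk"            # assumed location of the knob
cfg.write_text("# example fragment\ndebug ?= y\n")

# Rewrite only the exact knob line, leaving the rest of the file alone.
cfg.write_text(re.sub(r"(?m)^debug \?= y$", "debug ?= n", cfg.read_text()))
print(cfg.read_text())
```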

just my 2 cents,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-CUhGG8Mk5/PZ3MoBtKPn
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMHiMcACgkQk4XaBE3IOsSqMwCcCY+g3TG02y8IZECyZ044R3gD
RekAmgPb0OfCNPRPGHy08JFIOKDkZd8z
=w29b
-----END PGP SIGNATURE-----

--=-CUhGG8Mk5/PZ3MoBtKPn--


--===============3792280216867158623==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3792280216867158623==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 17:18:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtk3-0002Rj-Bs; Fri, 21 Feb 2014 17:18:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGtk1-0002Re-Bv
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 17:18:25 +0000
Received: from [193.109.254.147:32002] by server-9.bemta-14.messagelabs.com id
	0E/66-24895-06A87035; Fri, 21 Feb 2014 17:18:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393003102!5959727!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32283 invoked from network); 21 Feb 2014 17:18:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:18:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104736430"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 17:18:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 12:18:17 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtjs-00012m-RC;
	Fri, 21 Feb 2014 17:18:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtjr-0000CH-Eg;
	Fri, 21 Feb 2014 17:18:15 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 17:18:10 +0000
Message-ID: <1393003092-704-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 0/2] mg-allocate: Improvements
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'm queueing up these two changes to this admin tool (relevant only
in non-standalone mode) in my wip branch:

 1/2 mg-allocate: fix typo in message
 2/2 mg-allocate: allow alternatives


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 17:18:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtkU-0002WM-Vl; Fri, 21 Feb 2014 17:18:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGtkS-0002VT-1y
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 17:18:52 +0000
Received: from [85.158.143.35:3071] by server-3.bemta-4.messagelabs.com id
	59/F7-11539-B7A87035; Fri, 21 Feb 2014 17:18:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393003128!7419414!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19885 invoked from network); 21 Feb 2014 17:18:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:18:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="103052784"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 17:18:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 12:18:18 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtjt-00012p-N6;
	Fri, 21 Feb 2014 17:18:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtjs-0000CK-D2;
	Fri, 21 Feb 2014 17:18:16 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 17:18:11 +0000
Message-ID: <1393003092-704-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393003092-704-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1393003092-704-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 1/2] mg-allocate: fix typo in message
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 mg-allocate |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mg-allocate b/mg-allocate
index abeae10..883746b 100755
--- a/mg-allocate
+++ b/mg-allocate
@@ -242,7 +242,7 @@ sub plan () {
             foreach my $req (@reqlist) {
                 my ($ok, $shareix) = alloc_1res($req->{Ident});
                 if (!$ok) {
-                    logm("failed to allocated $req->{Ident}!");
+                    logm("failed to allocate $req->{Ident}!");
                     $allok=0;
                 } else {
                     $req->{GotShareix}= $shareix;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 17:18:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:18:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtkX-0002X6-OV; Fri, 21 Feb 2014 17:18:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGtkT-0002Vu-49
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 17:18:53 +0000
Received: from [85.158.143.35:3135] by server-1.bemta-4.messagelabs.com id
	69/B4-31661-C7A87035; Fri, 21 Feb 2014 17:18:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393003128!7419414!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19961 invoked from network); 21 Feb 2014 17:18:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:18:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="103052789"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 17:18:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 12:18:19 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtjv-00012s-21;
	Fri, 21 Feb 2014 17:18:19 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGtjt-0000CP-GD;
	Fri, 21 Feb 2014 17:18:17 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Fri, 21 Feb 2014 17:18:12 +0000
Message-ID: <1393003092-704-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393003092-704-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1393003092-704-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [OSSTEST PATCH 2/2] mg-allocate: allow alternatives
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

---
 mg-allocate |  124 ++++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 101 insertions(+), 23 deletions(-)

diff --git a/mg-allocate b/mg-allocate
index 883746b..00182f7 100755
--- a/mg-allocate
+++ b/mg-allocate
@@ -5,6 +5,10 @@
 #                                     type=='S' means 'shared-host'
 #                                     share defaults to *
 #                                     - means deallocate
+#                                     name=option|option|... means
+#                                       any one of those options
+#                                     option={flag,flag...} means anything
+#                                       with all those flags
 
 # This is part of "osstest", an automated testing framework for Xen.
 # Copyright (C) 2009-2013 Citrix Inc.
@@ -58,15 +62,36 @@ sub parse_1res ($) {
     my $shareix= defined($4) ? $4+0 : '*';
     my $shareixcond = $shareix eq '*' ? '' : "AND shareix = $shareix";
 
-    return ($allocate, $restype, $resname, $shareix, $shareixcond);
+    my @resnames;
+    foreach my $option (split /\|/, $resname) {
+	if ($option =~ m/^\{(.*)\}$/) {
+	    my $q = "SELECT resname FROM resources r WHERE restype = ?";
+	    my @qa = ($restype);
+	    die unless $restype eq 'host';
+	    foreach my $flag (split /\,/, $1) {
+		$q .= "\n AND EXISTS (SELECT 1 FROM hostflags h".
+		    " WHERE h.hostname = r.resname AND h.hostflag = ?)";
+		push @qa, $flag;
+	    }
+	    my $hosts = $dbh_tests->selectcol_arrayref($q, {}, @qa);
+	    logm("for $option possibilities are: @$hosts");
+	    push @resnames, @$hosts;
+	} else {
+	    push @resnames, $option;
+	}
+    }
+    logm("for $resname all possibilities are: @resnames") if @resnames!=1;
+    die "nothing for $resname" unless @resnames;
+
+    return [ map {
+	        [ $allocate, $restype, $_, $shareix, $shareixcond ]
+	     } @resnames ];
 }
 
-sub alloc_1res ($) {
-    my ($res) = @_;
+sub alloc_1rescand ($$) {
+    my ($res, $rescand) = @_;
+    my ($allocate, $restype, $resname, $shareix, $shareixcond) = @$rescand;
 
-    my ($allocate, $restype, $resname, $shareix, $shareixcond) =
-        parse_1res($res);
-    
     my $resq= $dbh_tests->prepare(<<END);
                 SELECT * FROM resources r
                          JOIN tasks t
@@ -184,22 +209,46 @@ END
         $got_shareix= $candrow->{shareix};
         $ok=1; last;
     }
-    return ($ok, $got_shareix);
+    return ($ok, { Allocate => $allocate,
+		   Shareix => $got_shareix,
+		   Info => "$resname ($restype/$resname/$got_shareix)"
+	    });
+}
+
+sub alloc_1res ($) {
+    my ($res) = @_;
+
+    my $rescands = parse_1res($res);
+
+    foreach my $rescand (@$rescands) {
+	my @got = alloc_1rescand($res, $rescand);
+	return @got if $got[0];
+    }
+    return (0,undef);
+}
+
+sub loggot {
+    my @got = @_;
+    logm(($_->{Allocate} ? "ALLOCATED" : "DEALLOCATED").": ".$_->{Info})
+	foreach @got;
 }
 
 sub execute () {
+    my @got;
     db_retry($dbh_tests, \@all_lock_tables, sub {
 
         alloc_prep();
 
         my $allok=1;
+	@got = ();
         foreach my $res (@ARGV) {
-            my ($ok, $shareix) = alloc_1res($res);
+            my ($ok, $got) = alloc_1res($res);
             if (!$ok) {
                 logm("nothing available for $res, sorry");
                 $allok=0;
             } else {
-                logm("processed $res (shareix=$shareix)");
+                logm("processed $res (shareix=$got->{Shareix})");
+		push @got, $got;
             }
         }
 
@@ -207,31 +256,58 @@ sub execute () {
             die "allocation/deallocation unsuccessful\n";
         }
     });
+    loggot(@got);
     logm("done.");
 }
 
 our $duration; # seconds, undef means immediate ad-hoc
 
+sub showposs ($) {
+    my ($poss) = @_;
+    join ' + ', map { $_->{Reso} } @$poss;
+}
+
 sub plan () {
+    my @got;
     alloc_resources(sub {
         my ($plan) = @_;
 
-        my @reqlist;
+	@got = ();
+        my @possmatrix = ([]);
 
         foreach my $res (@ARGV) {
-            my ($allocate, $restype, $resname, $shareix, $shareixcond) =
-                parse_1res($res);
-            die "cannot plan deallocation" unless $allocate;
-            die "cannot plan individual shares" unless $shareix eq '*';
-
-            push @reqlist, {
-                Ident => "$res",
-                Reso => "$restype $resname",
-            };
+	    my $rescands = parse_1res($res);
+	    my @reqlist;
+	    foreach my $rescand (@$rescands) {
+		my ($allocate, $restype, $resname, $shareix, $shareixcond) =
+		    @$rescand;
+		die "cannot plan deallocation" unless $allocate;
+		die "cannot plan individual shares" unless $shareix eq '*';
+		push @reqlist, {
+		    Ident => "$res",
+		    Reso => "$restype $resname",
+		};
+	    }
+	    @possmatrix = map {
+		my $possreqs = $_;
+		map { [ @$possreqs, $_ ] } @reqlist;
+	    } @possmatrix;
         }
 
-        my $planned= plan_search
-            ($plan, sub { print " @_\n"; }, $duration, \@reqlist);
+	my $planned;
+	my @reqlist;
+	foreach my $poss (@possmatrix) {
+	    my $tplanned= plan_search
+		($plan, sub { print " @_\n"; }, $duration, $poss);
+	    printf " possibility Start=%d %s\n",
+	        $tplanned->{Start}, showposs($poss);
+	    if (!$planned || $tplanned->{Start} < $planned->{Start}) {
+		$planned = $tplanned;
+		@reqlist = @$poss;
+	    }
+	}
+	logm("best at $planned->{Start} is ".showposs(\@reqlist));
+	die unless $planned;
 
         my $allok=0;
         if (!$planned->{Start}) {
@@ -240,12 +316,13 @@ sub plan () {
             alloc_prep();
 
             foreach my $req (@reqlist) {
-                my ($ok, $shareix) = alloc_1res($req->{Ident});
+                my ($ok, $got) = alloc_1res($req->{Ident});
                 if (!$ok) {
                     logm("failed to allocate $req->{Ident}!");
                     $allok=0;
                 } else {
-                    $req->{GotShareix}= $shareix;
+                    $req->{GotShareix}= $got->{Shareix};
+		    push @got, $got;
                 }
             }
         }
@@ -275,6 +352,7 @@ sub plan () {
 
         return ($allok, { Bookings => \@bookings });
     });
+    loggot(@got);
 }
 
 while (@ARGV && $ARGV[0] =~ m/^[-0-9]/) {
-- 
1.7.10.4
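The new resource syntax documented at the top of this patch ("name=opt|opt" picks any one option, and an option of the form "{flag,flag,...}" matches anything carrying all those flags, with the planner then trying every combination across the requested resources) can be sketched as follows. This is a Python rendering for illustration only: the patch itself is Perl, osstest's real flag lookup goes through SQL against the hostflags table, and the host data below is made up.

```python
# Illustrative re-sketch of mg-allocate's option/flag expansion and the
# "possmatrix" cross-product over alternatives (names and data invented).
from itertools import product

def expand_options(resname, host_flags):
    """Expand one resource spec "opt|opt|..." into concrete candidates.

    An option written "{flag,flag}" expands to every host whose flag set
    contains all the listed flags; any other option names itself.
    """
    candidates = []
    for option in resname.split("|"):
        if option.startswith("{") and option.endswith("}"):
            flags = set(option[1:-1].split(","))
            candidates += [h for h, f in host_flags.items() if flags <= f]
        else:
            candidates.append(option)
    if not candidates:
        raise ValueError(f"nothing for {resname}")
    return candidates

hosts = {"norwich": {"arch-amd64", "blessed"}, "mariner": {"arch-i386"}}
print(expand_options("mariner|{arch-amd64,blessed}", hosts))

# The planner considers every combination of one candidate per spec,
# analogous to @possmatrix in the patch, and keeps the earliest start.
specs = ["a|b", "{arch-amd64,blessed}"]
for combo in product(*(expand_options(s, hosts) for s in specs)):
    print(combo)
```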


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

---
 mg-allocate |  124 ++++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 file changed, 101 insertions(+), 23 deletions(-)

diff --git a/mg-allocate b/mg-allocate
index 883746b..00182f7 100755
--- a/mg-allocate
+++ b/mg-allocate
@@ -5,6 +5,10 @@
 #                                     type=='S' means 'shared-host'
 #                                     share defaults to *
 #                                     - means deallocate
+#                                     name=option|option|... means
+#                                       any one of those options
+#                                     option={flag,flag...} means anything
+#                                       with all those flags
 
 # This is part of "osstest", an automated testing framework for Xen.
 # Copyright (C) 2009-2013 Citrix Inc.
@@ -58,15 +62,36 @@ sub parse_1res ($) {
     my $shareix= defined($4) ? $4+0 : '*';
     my $shareixcond = $shareix eq '*' ? '' : "AND shareix = $shareix";
 
-    return ($allocate, $restype, $resname, $shareix, $shareixcond);
+    my @resnames;
+    foreach my $option (split /\|/, $resname) {
+	if ($option =~ m/^\{(.*)\}$/) {
+	    my $q = "SELECT resname FROM resources r WHERE restype = ?";
+	    my @qa = ($restype);
+	    die unless $restype eq 'host';
+	    foreach my $flag (split /\,/, $1) {
+		$q .= "\n AND EXISTS (SELECT 1 FROM hostflags h".
+		    " WHERE h.hostname = r.resname AND h.hostflag = ?)";
+		push @qa, $flag;
+	    }
+	    my $hosts = $dbh_tests->selectcol_arrayref($q, {}, @qa);
+	    logm("for $option possibilities are: @$hosts");
+	    push @resnames, @$hosts;
+	} else {
+	    push @resnames, $option;
+	}
+    }
+    logm("for $resname all possibilities are: @resnames") if @resnames!=1;
+    die "nothing for $resname" unless @resnames;
+
+    return [ map {
+	        [ $allocate, $restype, $_, $shareix, $shareixcond ]
+	     } @resnames ];
 }
 
-sub alloc_1res ($) {
-    my ($res) = @_;
+sub alloc_1rescand ($$) {
+    my ($res, $rescand) = @_;
+    my ($allocate, $restype, $resname, $shareix, $shareixcond) = @$rescand;
 
-    my ($allocate, $restype, $resname, $shareix, $shareixcond) =
-        parse_1res($res);
-    
     my $resq= $dbh_tests->prepare(<<END);
                 SELECT * FROM resources r
                          JOIN tasks t
@@ -184,22 +209,46 @@ END
         $got_shareix= $candrow->{shareix};
         $ok=1; last;
     }
-    return ($ok, $got_shareix);
+    return ($ok, { Allocate => $allocate,
+		   Shareix => $got_shareix,
+		   Info => "$resname ($restype/$resname/$got_shareix)"
+	    });
+}
+
+sub alloc_1res ($) {
+    my ($res) = @_;
+
+    my $rescands = parse_1res($res);
+
+    foreach my $rescand (@$rescands) {
+	my @got = alloc_1rescand($res, $rescand);
+	return @got if $got[0];
+    }
+    return (0,undef);
+}
+
+sub loggot {
+    my @got = @_;
+    logm(($_->{Allocate} ? "ALLOCATED" : "DEALLOCATED").": ".$_->{Info})
+	foreach @got;
 }
 
 sub execute () {
+    my @got;
     db_retry($dbh_tests, \@all_lock_tables, sub {
 
         alloc_prep();
 
         my $allok=1;
+	@got = ();
         foreach my $res (@ARGV) {
-            my ($ok, $shareix) = alloc_1res($res);
+            my ($ok, $got) = alloc_1res($res);
             if (!$ok) {
                 logm("nothing available for $res, sorry");
                 $allok=0;
             } else {
-                logm("processed $res (shareix=$shareix)");
+                logm("processed $res (shareix=$got->{Shareix})");
+		push @got, $got;
             }
         }
 
@@ -207,31 +256,58 @@ sub execute () {
             die "allocation/deallocation unsuccessful\n";
         }
     });
+    loggot(@got);
     logm("done.");
 }
 
 our $duration; # seconds, undef means immediate ad-hoc
 
+sub showposs ($) {
+    my ($poss) = @_;
+    join ' + ', map { $_->{Reso} } @$poss;
+}
+
 sub plan () {
+    my @got;
     alloc_resources(sub {
         my ($plan) = @_;
 
-        my @reqlist;
+	@got = ();
+        my @possmatrix = ([]);
 
         foreach my $res (@ARGV) {
-            my ($allocate, $restype, $resname, $shareix, $shareixcond) =
-                parse_1res($res);
-            die "cannot plan deallocation" unless $allocate;
-            die "cannot plan individual shares" unless $shareix eq '*';
-
-            push @reqlist, {
-                Ident => "$res",
-                Reso => "$restype $resname",
-            };
+	    my $rescands = parse_1res($res);
+	    my @reqlist;
+	    foreach my $rescand (@$rescands) {
+		my ($allocate, $restype, $resname, $shareix, $shareixcond) =
+		    @$rescand;
+		die "cannot plan deallocation" unless $allocate;
+		die "cannot plan individual shares" unless $shareix eq '*';
+		push @reqlist, {
+		    Ident => "$res",
+		    Reso => "$restype $resname",
+		};
+	    }
+	    @possmatrix = map {
+		my $possreqs = $_;
+		map { [ @$possreqs, $_ ] } @reqlist;
+	    } @possmatrix;
         }
 
-        my $planned= plan_search
-            ($plan, sub { print " @_\n"; }, $duration, \@reqlist);
+	my $planned;
+	my @reqlist;
+	foreach my $poss (@possmatrix) {
+	    my $tplanned= plan_search
+		($plan, sub { print " @_\n"; }, $duration, $poss);
+	    printf " possibility Start=%d %s\n",
+	        $tplanned->{Start}, showposs($poss);
+	    if (!$planned || $tplanned->{Start} < $planned->{Start}) {
+		$planned = $tplanned;
+		@reqlist = @$poss;
+	    }
+	}
+	logm("best at $planned->{Start} is ".showposs(\@reqlist));
+	die unless $planned;
 
         my $allok=0;
         if (!$planned->{Start}) {
@@ -240,12 +316,13 @@ sub plan () {
             alloc_prep();
 
             foreach my $req (@reqlist) {
-                my ($ok, $shareix) = alloc_1res($req->{Ident});
+                my ($ok, $got) = alloc_1res($req->{Ident});
                 if (!$ok) {
                     logm("failed to allocate $req->{Ident}!");
                     $allok=0;
                 } else {
-                    $req->{GotShareix}= $shareix;
+                    $req->{GotShareix}= $got->{Shareix};
+		    push @got, $got;
                 }
             }
         }
@@ -275,6 +352,7 @@ sub plan () {
 
         return ($allok, { Bookings => \@bookings });
     });
+    loggot(@got);
 }
 
 while (@ARGV && $ARGV[0] =~ m/^[-0-9]/) {
-- 
1.7.10.4
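The alternatives syntax this patch adds ("name=option|option|..." picks any one option; "option={flag,flag,...}" matches anything carrying all those flags) can be illustrated outside the patch. Below is a hypothetical Python re-sketch of the expansion logic (the real code is Perl and queries the hostflags table; HOST_FLAGS and every host name here are invented for illustration):

```python
# Hypothetical Python sketch of the option expansion the patch adds to
# parse_1res/plan: "a|b" means any one of a or b; "{f1,f2}" means any
# host carrying all of flags f1 and f2. HOST_FLAGS stands in for the
# real hostflags DB table; the host names are made up.
from itertools import product

HOST_FLAGS = {
    "arm-host-1": {"arm", "share-build"},
    "arm-host-2": {"arm"},
    "x86-host-1": {"share-build"},
}

def expand_options(spec):
    """Expand one resource spec into its list of candidate names."""
    names = []
    for option in spec.split("|"):
        if option.startswith("{") and option.endswith("}"):
            flags = set(option[1:-1].split(","))
            # keep hosts carrying *all* requested flags, mirroring the
            # chain of AND EXISTS subqueries in the patch
            names.extend(h for h, f in sorted(HOST_FLAGS.items())
                         if flags <= f)
        else:
            names.append(option)
    if not names:
        raise ValueError("nothing for " + spec)
    return names

def possibility_matrix(specs):
    """Cross-product over all specs, as plan() builds @possmatrix."""
    return [list(p) for p in product(*(expand_options(s) for s in specs))]
```

plan() then runs plan_search over each row of the matrix and keeps the possibility with the earliest Start.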



From xen-devel-bounces@lists.xen.org Fri Feb 21 17:24:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGtpm-0002y2-NK; Fri, 21 Feb 2014 17:24:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WGtpk-0002xf-Nx
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:24:20 +0000
Received: from [85.158.143.35:15982] by server-2.bemta-4.messagelabs.com id
	35/A4-10891-4CB87035; Fri, 21 Feb 2014 17:24:20 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393003449!7430908!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjI4MTYgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1918 invoked from network); 21 Feb 2014 17:24:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:24:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; 
	d="asc'?scan'208";a="104737926"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 17:24:09 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 12:24:08 -0500
Message-ID: <1393003446.32038.832.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Date: Fri, 21 Feb 2014 18:24:06 +0100
In-Reply-To: <5306F2A9.1040503@ts.fujitsu.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
	<53039F55.3030901@terremark.com> <1392746781.32038.594.camel@Solace>
	<53059BB0.1000705@ts.fujitsu.com> <1392920545.32038.826.camel@Solace>
	<5306F2A9.1040503@ts.fujitsu.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>,
	Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5295871534986047968=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5295871534986047968==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-ssBlbZCkRlXa8xeEFikI"

--=-ssBlbZCkRlXa8xeEFikI
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2014-02-21 at 07:31 +0100, Juergen Gross wrote:
> On 20.02.2014 19:22, Dario Faggioli wrote:
> > All true... To the point that I now also wonder what a suitable
> > interface and a not too verbose output configuration could be...
>
> Well, looking at the available topology information I think it should
> look like the following example:
>
> # xl cpupool-list --shareinfo
> Name          CPUs   Sched     Active  Domain count  shared resources
> Pool-0          1    credit       y         1        core:   lw_pool
> lw_pool         1    credit       y         0        core:   Pool-0
> bs2_pool        2    credit       y         1        socket: Pool-0,lw_pool
>
> What do you think?
>
Looks reasonable.

> > I'll post it in early 4.5 dev cycle.
>
> Please do!
>
I will. :-)

regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-ssBlbZCkRlXa8xeEFikI
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMHi7YACgkQk4XaBE3IOsSZMwCgmkqFst9gfEbCQKBfCUh3XPRY
CMkAoIIcpEcZrJ0DtgAELw5vbwMmqh0W
=F76C
-----END PGP SIGNATURE-----

--=-ssBlbZCkRlXa8xeEFikI--


--===============5295871534986047968==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--===============5295871534986047968==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 17:43:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGu8D-0003Pn-2e; Fri, 21 Feb 2014 17:43:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WGu8B-0003Pi-UE
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:43:24 +0000
Received: from [85.158.137.68:45356] by server-11.bemta-3.messagelabs.com id
	7D/43-04255-B3097035; Fri, 21 Feb 2014 17:43:23 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393004600!3429797!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	surbl: (ASYNC_NO) c3VyYmxfcmVjaGVja19kZWxheTogMCAoYWJhbmRv
	bmVkOiBhYm91dC5tZS9kYXJpby5mYWdnaW9s\naSk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3909 invoked from network); 21 Feb 2014 17:43:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:43:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; 
	d="asc'?scan'208";a="104743500"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 17:42:59 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 12:42:58 -0500
Message-ID: <1393004577.32038.843.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Charles <xiezhenjiang@foxmail.com>
Date: Fri, 21 Feb 2014 18:42:57 +0100
In-Reply-To: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
References: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] confusions on monitoring VM cpu usage in Xen
 hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8034296615375592205=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8034296615375592205==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-Vt01PuDRkEc8Lpx7ujOS"

--=-Vt01PuDRkEc8Lpx7ujOS
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2014-02-20 at 16:09 +0800, Charles wrote:
> Hi everyone, I'm trying to monitor the VM CPU usage in the Xen
> hypervisor and use the VM CPU usage information in Xen's Credit
> Scheduler.
>
Monitoring how? More importantly, "use the usage information in
Credit" how, and for what purpose?

Providing as much info as possible on the final goal is often very
important for us to be able to help you.

> I googled it, and found that most references on this topic are about
> monitoring the VM CPU usage information in Domain-0 rather than in
> the Xen hypervisor.
>
Provided that I really would like you to define better what you mean
by "VM CPU usage", what I can say is that credit does not have any
load tracking mechanism. Credit2 does, and it would be possible (and
nice!) to abstract that from there and make it available to all
schedulers.

Basically, that means moving some of the code into schedule.c, as well
as adding some callbacks, to be implemented by the various schedulers
(Credit, Credit2, etc).
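The split described here, generic accounting plus a per-scheduler load-tracking callback, might look roughly like the following. This is a hypothetical Python sketch, not actual Xen code: VCPU, SchedulerOps, and the decayed-average formula are all invented for illustration.

```python
# Hypothetical sketch of generic load accounting with a per-scheduler
# callback. Not Xen code: every name and the decay formula are made up.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VCPU:
    vcpu_id: int
    runtime_ns: int = 0      # common bookkeeping kept by generic code

@dataclass
class SchedulerOps:
    name: str
    # scheduler-specific load-tracking hook; None means no tracking
    # (like today's credit1)
    update_load: Optional[Callable[[VCPU, int], None]] = None

def sched_account(ops: SchedulerOps, vcpu: VCPU, delta_ns: int) -> None:
    """Generic code (the schedule.c side): account run time, then let
    the scheduler update its own load statistics via the callback."""
    vcpu.runtime_ns += delta_ns
    if ops.update_load is not None:
        ops.update_load(vcpu, delta_ns)

# a credit2-style callback keeping a crude decayed load average
load_avg: dict = {}
def credit2_update_load(vcpu: VCPU, delta_ns: int) -> None:
    load_avg[vcpu.vcpu_id] = load_avg.get(vcpu.vcpu_id, 0) // 2 + delta_ns
```

A scheduler that registers no callback simply gets the common bookkeeping and no per-scheduler tracking, which is the point of keeping the hook optional.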

This is something I have had on my TODO list for a while, but have not
managed to get to yet. If you think this would be helpful for you too,
and you're up for helping, I'll be happy to discuss/share more.

> I'm now blocked on how to determine the state of a vCPU. My
> understanding is that a vCPU running on a physical CPU doesn't mean
> it really consumes CPU cycles, so how could I determine whether a
> vCPU is consuming CPU cycles or not? Could you please give me some
> advice.
>
Mmm... if a vCPU is _running_ on a pCPU, I would say it is consuming
pCPU cycles... But perhaps I am not understanding what you really mean
here. So, again, please try to describe your final goal; I'm sure it
will help us figure things out better.

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-Vt01PuDRkEc8Lpx7ujOS
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMHkCEACgkQk4XaBE3IOsQP2QCfQy0GF0xqkAmWtfmHV+KSWz6g
FncAn0zSNQqnOxYTL7wwjfHsZqJoNUBP
=WiDK
-----END PGP SIGNATURE-----

--=-Vt01PuDRkEc8Lpx7ujOS--


--===============8034296615375592205==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--===============8034296615375592205==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 17:52:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 17:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGuGT-0003ap-4l; Fri, 21 Feb 2014 17:51:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGuGR-0003ak-71
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 17:51:55 +0000
Received: from [85.158.139.211:62554] by server-11.bemta-5.messagelabs.com id
	8E/06-23886-A3297035; Fri, 21 Feb 2014 17:51:54 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393005112!936663!1
X-Originating-IP: [209.85.220.169]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21909 invoked from network); 21 Feb 2014 17:51:53 -0000
Received: from mail-vc0-f169.google.com (HELO mail-vc0-f169.google.com)
	(209.85.220.169)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 17:51:53 -0000
Received: by mail-vc0-f169.google.com with SMTP id hq11so3626886vcb.0
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 09:51:52 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=TKOTqB3RVno0o6GHzO18JuCYxnvvB2NZiNCDXihE86E=;
	b=RkZ4TIQ4Jw6YWGu4l5scFrxjaEX/R5hc5TypNX9R4ONkUTOMrj1oLtVoyqI31YUlXl
	TVH5seZOQsaE76AAKEhC1hzxx2Y5Rj2qP6tko9HgE78ZLdCpnnKjplJxOPnOCDF3sGYq
	McRQls6sAprYY50nZUGyGgrXGXygCumQH50ZZMLxRYGGwtWjMYaofiSFiQh9h0MYU+e9
	oCZFw2HEJ1jR6saKemYgttvQe3FTo+kdivDZHSGJxW0Bq+pCQSz9ogJN/gaYGn21VQSg
	1xr/4HCiJHksyBZZytWTKxzMlG340Sg6tRshQg/wq+YP4KpZGnPJuZyHMRi99/2CVo1a
	/4sw==
MIME-Version: 1.0
X-Received: by 10.52.69.172 with SMTP id f12mr4713325vdu.58.1393005112344;
	Fri, 21 Feb 2014 09:51:52 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Fri, 21 Feb 2014 09:51:52 -0800 (PST)
In-Reply-To: <CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<1392887748.22494.17.camel@kazak.uk.xensource.com>
	<CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
Date: Fri, 21 Feb 2014 09:51:52 -0800
Message-ID: <CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5639928713055674093=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5639928713055674093==
Content-Type: multipart/alternative; boundary=20cf307cffd6e4469504f2ee46d7

--20cf307cffd6e4469504f2ee46d7
Content-Type: text/plain; charset=ISO-8859-1

Hi --

I tried enabling CONFIG_APM but it still didn't work. Let me know if you
happen to know what needs to be enabled in the HVM guest for ACPI events
to work (xm trigger <vm> power). Since the SuSE HVM VM works with 'xm
trigger <vm> power', I suspect we have not enabled something in our WR
distro.

Thanks,
/Saurabh


On Thu, Feb 20, 2014 at 4:26 PM, Saurabh Mishra <saurabh.globe@gmail.com>wrote:

> Hi Ian --
>
> So enabling 'CONFIG_APM=y' in WR kernel for HVM guest should be enough?
> Let me try that out.
>
> Thanks,
> /Saurabh
>
>
> On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:
>
>> On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:
>> > config or driver do I need to enable in WR HVM guest such that it
>> > accepts 'xl/xm trigger <vm> power'?
>>
>> Support for ACPI power events.
>>
>> Ian.
>>
>>
>>
>

--20cf307cffd6e4469504f2ee46d7--


--===============5639928713055674093==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5639928713055674093==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 18:30:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 18:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGure-0004IE-5Y; Fri, 21 Feb 2014 18:30:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGurc-0004I6-AY
	for xen-devel@lists.xenproject.org; Fri, 21 Feb 2014 18:30:20 +0000
Received: from [85.158.143.35:19241] by server-3.bemta-4.messagelabs.com id
	2A/B3-11539-B3B97035; Fri, 21 Feb 2014 18:30:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393007417!7426695!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14182 invoked from network); 21 Feb 2014 18:30:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 18:30:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104757689"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 18:30:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 13:30:16 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGurY-0001PT-0s;
	Fri, 21 Feb 2014 18:30:16 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WGurW-0000ad-K3;
	Fri, 21 Feb 2014 18:30:14 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21255.39733.80220.360509@mariner.uk.xensource.com>
Date: Fri, 21 Feb 2014 18:30:13 +0000
To: Jan Beulich <JBeulich@suse.com>, <ian.campbell@eu.citrix.com>, xen-devel
	<xen-devel@lists.xenproject.org>
In-Reply-To: <21255.16435.69137.96537@mariner.uk.xensource.com>
References: <osstest-25157-mainreport@xen.org>
	<530727DC020000780011E27C@nat28.tlf.novell.com>
	<21255.16435.69137.96537@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA2
Subject: Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions -
 trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions - trouble: broken/fail/pass"):
> I think this is because of some changes to osstest but I'm not 100%
> sure which ones.  The actual error is:
> 
>     xl: error while loading shared libraries: libxlutil.so.1.0: cannot
>     open shared object file: No such file or directory

libxlutil.so.1.0 had been put in /usr/lib64 by the Xen 4.1 build
system.  That's not on the library search path (at least in Debian
wheezy).

I propose the following change to osstest to fix this problem.  I have
verified in an ad-hoc test that it appears to fix the problem with
osstest and 4.1.

Ian.

>From 0bd6b3de73693ec3113d923761772537c81bc480 Mon Sep 17 00:00:00 2001
From: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 18:27:12 +0000
Subject: [OSSTEST PATCH] ts-xen-build: Set LIBLEAFDIR_x86_64

Xen 4.1 puts things in /usr/lib64 otherwise.  This is wrong for
Debian, and does not work at all on wheezy.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-xen-build |    1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-xen-build b/ts-xen-build
index c8b27b7..7bc7cbc 100755
--- a/ts-xen-build
+++ b/ts-xen-build
@@ -65,6 +65,7 @@ sub checkout () {
 	>.config
 	echo >>.config debug=$debug_build
 	echo >>.config GIT_HTTP=y
+	echo >>.config LIBLEAFDIR_x86_64=lib
 	echo >>.config QEMU_REMOTE='$r{tree_qemu}'
 END
                (nonempty($r{revision_qemu}) ? <<END : '').
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 19:08:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 19:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGvSb-0004oX-Re; Fri, 21 Feb 2014 19:08:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGvSa-0004oK-Kr
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 19:08:33 +0000
Received: from [85.158.139.211:19972] by server-5.bemta-5.messagelabs.com id
	B7/5A-32749-F24A7035; Fri, 21 Feb 2014 19:08:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393009707!5454496!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10560 invoked from network); 21 Feb 2014 19:08:29 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 19:08:29 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LJ8For013262
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 19:08:16 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJ8DS6026551
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 19:08:14 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJ8DeV005503; Fri, 21 Feb 2014 19:08:13 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 11:08:12 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C00701C293E; Fri, 21 Feb 2014 14:08:11 -0500 (EST)
Date: Fri, 21 Feb 2014 14:08:11 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Gordan Bobic <gordan@bobich.net>, jbeulich@suse.com
Message-ID: <20140221190811.GA9232@phenom.dumpdata.com>
References: <52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
	<1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
	<52CBD92F.3050301@citrix.com>
	<7496ae8fbc23e417e234a3cfceee19e6@mail.shatteredsilicon.net>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="YZ5djTAD1cGYuMQK"
Content-Disposition: inline
In-Reply-To: <7496ae8fbc23e417e234a3cfceee19e6@mail.shatteredsilicon.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, Jan 07, 2014 at 10:44:29AM +0000, Gordan Bobic wrote:
> On 2014-01-07 10:38, Andrew Cooper wrote:
> >On 07/01/14 10:35, Gordan Bobic wrote:
> >>On 2014-01-07 03:17, Zhang, Yang Z wrote:
> >>>Konrad Rzeszutek Wilk wrote on 2014-01-07:
> >>>>>Which would look like this:
> >>>>>
> >>>>>C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
> >>>>>on the card
> >>>>>          \--------------> IEEE-1394a
> >>>>>
> >>>>>I am actually wondering if this 07:00.0 device is the one that
> >>>>>reports itself as 08:00.0 (which I think is what you alluding to
> >>>>>Jan)
> >>>>>
> >>>>
> >>>>And to double check that theory I decided to pass in the IEEE-1394a
> >>>>to a guest:
> >>>>
> >>>>           +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
> >>>>TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
> >>>>
> >>>>
> >>>>(XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
> >>>>[VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
> >>>>[VT-D]iommu.c:865: DMAR:[DMA Read] Request device
> >>>>[0000:08:00.0] fault
> >>>>addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
> >>>>02h] Present bit in context entry is clear (XEN) print_vtd_entries:
> >>>>iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
> >>>>root_entry
> >>>>= ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
> >>>>context
> >>>>= ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
> >>>>ctxt_entry[0]
> >>>>not present
> >>>>
> >>>>So, capture card OK - Likely the Tundra bridge has an issue:
> >>>>
> >>>>07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
> >>>>(prog-if 01 [Subtractive decode])
> >>>>        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
> >>>>        ParErr- Stepping- SERR- FastB2B- DisINTx- Status:
> >>>>Cap+ 66MHz-
> >>>>        UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+
> >>>>        >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
> >>>>secondary=08,
> >>>>        subordinate=08, sec-latency=32 Memory behind bridge:
> >>>>        f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
> >>>>        DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
> >>>>BridgeCtl:
> >>>>        Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
> >>>>                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
> >>>>        Capabilities: [60] Subsystem: Super Micro Computer Inc
> >>>>Device 0805
> >>>>        Capabilities: [a0] Power Management version 3
> >>>>                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
> >>>>                PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
> >>>>NoSoftRst+
> >>>>                PME-Enable- DSel=0 DScale=0 PME-
> >>>>
> >>>>or there is some unknown bridge in the motherboard.
> >>>
> >>>According your description above, the upstream Linux should also have
> >>>the same problem. Did you see it with upstream Linux?
> >>
> >>The problem I was seeing with LSI cards (phantom device doing DMA)
> >>does, indeed, also occur in upstream Linux. If I enable intel-iommu on
> >>bare metal Linux, the same problem occurs as with Xen.
> >>
> >>>There may be some buggy device that generate DMA request with
> >>>internal
> >>>BDF but it didn't expose it(not like Phantom device). For those
> >>>devices, I think we need to setup the VT-d page table manually.
> >>
> >>I think what is needed is a pci-phantom style override that tells the
> >>hypervisor to tell the IOMMU to allow DMA traffic from a specific
> >>invisible device ID.
> >>
> >>Gordan
> >
> >There is.  See "pci-phantom" in
> >http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> 
> I thought this was only applicable to phantom _functions_ (number
> after the
> dot) rather than whole phantom _devices_. Is that not the case?

My new Supermicro X10SAE has this issue as well and I wrote a patch to
"fix" this. Just to be clear - the issue with the motherboard
(or rather with its PCI-to-PCI bridge) is that it violates VT-d spec
chapter 3.9.2:
"For devices behind conventional PCI bridges, the source-id in the DMA
requests is the requestor-id of the bridge device."

In my case the bridge is:

06:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)

But all the requests from the devices behind the bridge show up with
a requester-id one bus higher (i.e. 07:00.0):


(XEN) [2014-02-05 22:23:23] [VT-D]iommu.c:880: iommu_fault_status: Fault Overflow
(XEN) [2014-02-05 22:23:23] [VT-D]iommu.c:882: iommu_fault_status: Primary Pending Fault
(XEN) [2014-02-05 22:23:23] [VT-D]iommu.c:860: DMAR:[DMA Read] Request device [0000:07:00.0] fault addr a57fc000, iommu reg = ffff82c000203000
(XEN) [2014-02-05 22:23:23] DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) [2014-02-05 22:23:23] print_vtd_entries: iommu ffff83023948b9c0 dev 0000:07:00.0 gmfn a57fc
(XEN) [2014-02-05 22:23:23]     root_entry = ffff82004000d000
(XEN) [2014-02-05 22:23:23]     root_entry[7] = 1b8eba001
(XEN) [2014-02-05 22:23:23]     context = ffff82004000e000
(XEN) [2014-02-05 22:23:23]     context[0] = 0_0
(XEN) [2014-02-05 22:23:23]     ctxt_entry[0] not present
[   21.305708] firewire_ohci 0000:07:03.0: added OHCI v1.10 device as card 0, 4 IR + 8 IT contexts, quirks 0x2
(XEN) [2014-02-05 22:23:24] [VT-D]iommu.c:880: iommu_fault_status: Fault Overflow
(XEN) [2014-02-05 22:23:24] [VT-D]iommu.c:882: iommu_fault_status: Primary Pending Fault
(XEN) [2014-02-05 22:23:24] [VT-D]iommu.c:860: DMAR:[DMA Read] Request device [0000:07:00.0] fault addr b76cd000, iommu reg = ffff82c000203000
(XEN) [2014-02-05 22:23:24] DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) [2014-02-05 22:23:24] print_vtd_entries: iommu ffff83023948b9c0 dev 0000:07:00.0 gmfn b76cd
(XEN) [2014-02-05 22:23:24]     root_entry = ffff82004000d000
(XEN) [2014-02-05 22:23:24]     root_entry[7] = 1b8eba001
(XEN) [2014-02-05 22:23:24]     context = ffff82004000e000
(XEN) [2014-02-05 22:23:24]     context[0] = 0_0
(XEN) [2014-02-05 22:23:24]     ctxt_entry[0] not present
[   21.386993] firewire_ohci 0000:07:03.0: bad self ID 0/1 (00000000 != ~00000000)
[   21.402892] initcall fw_ohci_init+0x0/0x1000 [firewire_ohci] returned 0 after 254567 usecs


Anyhow, the "solution" was to provide a "link" to the device
being passed in and create a fake device in Xen, so that when
I pass through my firewire device (07:03.0) it will also set up
a context for the 07:00.0 device (which does not exist at all).

Attached is the Linux kernel module I use to let the hypervisor
know about the new device (this could have been written as
a user application too).

Also attached is the patch for the hypervisor, along with some extra
debug patches so that when doing 'xl debug-keys Q' you can get a
better sense of what is what.

The 0004-xen-pci-Introduce-a-way-to-deal-with-buggy-hardware-.patch
is the interesting one.

If there is interest in upstreaming this I can take a look -
but I will need guidance from Jan on how he would like it done.


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="hack.c"


#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/stat.h>
#include <linux/err.h>
#include <linux/ctype.h>
#include <linux/slab.h>
#include <linux/limits.h>
#include <linux/device.h>
#include <linux/pci.h>
#include <xen/interface/xen.h>
#include <xen/interface/physdev.h>

#include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h>

#define LSI_HACK  "0.1"

MODULE_AUTHOR("Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>");
MODULE_DESCRIPTION("lsi hack");
MODULE_LICENSE("GPL");
MODULE_VERSION(LSI_HACK);

/* Want to link to the device passed in the guest */
int bus = 7;
module_param(bus, int, 0644);
MODULE_PARM_DESC(bus, "bus");
int slot = 3;
module_param(slot, int, 0644);
MODULE_PARM_DESC(slot, "slot");
int func = 0;
module_param(func, int, 0644);
MODULE_PARM_DESC(func, "func");

#define XEN_PCI_DEV_LINK 0x8
static int __init lsi_hack_init(void)
{
        int r = 0;

        struct physdev_pci_device_add add = {
			.seg	= 0,
                        .bus    = 0x7,
                        .devfn  = PCI_DEVFN(0,0),
			.physfn.bus	= bus, /* The device we want to copy from */
			.physfn.devfn	= PCI_DEVFN(slot,func),
			.flags = XEN_PCI_DEV_LINK,
                };
	printk("%s: %02x:%02x.%u, %02x:%02x.%u, %x\n",
		__func__, add.bus, PCI_SLOT(add.devfn),
		PCI_FUNC(add.devfn), add.physfn.bus,
		PCI_SLOT(add.physfn.devfn), PCI_FUNC(add.physfn.devfn),
		add.flags);
        r = HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);

        return r;
}

static void __exit lsi_hack_exit(void)
{
        int r = 0;
        struct physdev_manage_pci manage_pci;

        manage_pci.bus = 0x7;
        manage_pci.devfn = PCI_DEVFN(0,0);

        r = HYPERVISOR_physdev_op(PHYSDEVOP_manage_pci_remove,
                &manage_pci);
        if (r)
                printk(KERN_ERR "%s: %d\n", __FUNCTION__, r);
}

module_init(lsi_hack_init);
module_exit(lsi_hack_exit);

--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0004-xen-pci-Introduce-a-way-to-deal-with-buggy-hardware-.patch"

>From cb165429726978952f5b9e75bece1dcb5630667f Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Feb 2014 10:58:19 -0500
Subject: [PATCH 4/6] xen/pci: Introduce a way to deal with buggy hardware with
 "hidden" PCI buses.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/physdev.c              | 14 +++++-
 xen/drivers/passthrough/pci.c       | 89 +++++++++++++++++++++++++++++++++----
 xen/drivers/passthrough/vtd/iommu.c | 31 +++++++++++++
 xen/include/public/physdev.h        |  1 +
 xen/include/xen/pci.h               |  3 ++
 5 files changed, 128 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index bc0634c..f843c49 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -609,7 +609,11 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&add, arg, 1) != 0 )
             break;
 
-        pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
+        if ( add.flags & XEN_PCI_DEV_EXTFN)
+            pdev_info.is_extfn = 1;
+        else
+            pdev_info.is_extfn = 0;
+
         if ( add.flags & XEN_PCI_DEV_VIRTFN )
         {
             pdev_info.is_virtfn = 1;
@@ -618,6 +622,14 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         }
         else
             pdev_info.is_virtfn = 0;
+
+        if ( add.flags & XEN_PCI_DEV_LINK )
+        {
+            pdev_info.is_link = 1;
+            pdev_info.physfn.bus = add.physfn.bus;
+            pdev_info.physfn.devfn = add.physfn.devfn;
+        } else
+            pdev_info.is_link = 0;
         ret = pci_add_device(add.seg, add.bus, add.devfn, &pdev_info);
         break;
     }
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 2a6eaa4..0e59216 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -153,7 +153,8 @@ static void __init parse_phantom_dev(char *str) {
 }
 custom_param("pci-phantom", parse_phantom_dev);
 
-static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
+static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn,
+                                  int link, u8 orig_bus, u8 orig_devfn)
 {
     struct pci_dev *pdev;
 
@@ -169,8 +170,38 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
     *((u8*) &pdev->bus) = bus;
     *((u8*) &pdev->devfn) = devfn;
     pdev->domain = NULL;
+    pdev->link = NULL;
     INIT_LIST_HEAD(&pdev->msi_list);
 
+    if ( link )
+    {
+        struct pci_dev *dev;
+        list_for_each_entry ( dev, &pseg->alldevs_list, alldevs_list )
+        {
+            if ( dev->bus == orig_bus && dev->devfn == orig_devfn )
+            {
+                /* N.B. The 'bus' passed is 'new' one, while 'orig_bus' are
+                 * the ones we expect to exist. We over-write 'bus' and
+                 * 'devfn' with the original one so that this new device
+                 * will be created with the original device properties.
+                 */
+                if ( dev->link )
+                {
+                    xfree (pdev);
+                    return NULL;
+                }
+                bus = dev->bus;
+                devfn = dev->devfn;
+                dev->link = pdev;
+                pdev->link = dev;
+                pdev->info.is_link = 1;
+                printk("%04x:%02x:%02x.%u linked with %02x:%02x.%u\n",
+                       pseg->nr, pdev->bus, PCI_SLOT(pdev->devfn),
+                       PCI_FUNC(pdev->devfn), dev->bus, PCI_SLOT(dev->devfn),
+                       PCI_FUNC(dev->devfn));
+            }
+        }
+    }
     if ( pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                              PCI_CAP_ID_MSIX) )
     {
@@ -201,12 +232,32 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
             sub_bus = pci_conf_read8(pseg->nr, bus, PCI_SLOT(devfn),
                                      PCI_FUNC(devfn), PCI_SUBORDINATE_BUS);
 
+            if ( pdev->info.is_link )
+            {
+                if ( sec_bus >= pdev->bus && pdev->bus <= sub_bus )
+                {
+#if 0
+                    u8 i = sec_bus;
+                    /* We can create an loop in bus2bridge by pointing to ourselves.
+                     * Hence destroy sec_bus up to pdev_bus values */
+                    spin_lock(&pseg->bus2bridge_lock);
+                    for ( ; i <= pdev->bus; i++ )
+                        pseg->bus2bridge[sec_bus].map = 0;
+                    spin_unlock(&pseg->bus2bridge_lock);
+                    /* And increment it so it won't cover us again*/
+                    sec_bus = pdev->bus + 1;
+                    printk("Link corrected [%02x:%02x:%u] spanning %x->%x\n", pdev->bus,
+                           PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),sec_bus, sub_bus);
+#endif
+                    break;
+                }
+            }
             spin_lock(&pseg->bus2bridge_lock);
             for ( ; sec_bus <= sub_bus; sec_bus++ )
             {
                 pseg->bus2bridge[sec_bus].map = 1;
-                pseg->bus2bridge[sec_bus].bus = bus;
-                pseg->bus2bridge[sec_bus].devfn = devfn;
+                pseg->bus2bridge[sec_bus].bus = pdev->bus;
+                pseg->bus2bridge[sec_bus].devfn = pdev->devfn;
             }
             spin_unlock(&pseg->bus2bridge_lock);
             break;
@@ -299,7 +350,7 @@ int __init pci_hide_device(int bus, int devfn)
     int rc = -ENOMEM;
 
     spin_lock(&pcidevs_lock);
-    pdev = alloc_pdev(get_pseg(0), bus, devfn);
+    pdev = alloc_pdev(get_pseg(0), bus, devfn, 0, 0, 0);
     if ( pdev )
     {
         _pci_hide_device(pdev);
@@ -317,7 +368,7 @@ int __init pci_ro_device(int seg, int bus, int devfn)
 
     if ( !pseg )
         return -ENOMEM;
-    pdev = alloc_pdev(pseg, bus, devfn);
+    pdev = alloc_pdev(pseg, bus, devfn, 0, 0, 0);
     if ( !pdev )
         return -ENOMEM;
 
@@ -458,6 +509,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *info)
     struct pci_seg *pseg;
     struct pci_dev *pdev;
     unsigned int slot = PCI_SLOT(devfn), func = PCI_FUNC(devfn);
+    u8 bus_link = 0, devfn_link = 0;
     const char *pdev_type;
     int ret;
 
@@ -474,6 +526,12 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *info)
             pci_add_device(seg, info->physfn.bus, info->physfn.devfn, NULL);
         pdev_type = "virtual function";
     }
+    else if (info->is_link)
+    {
+        bus_link = info->physfn.bus;
+        devfn_link = info->physfn.devfn;
+        pdev_type = "link";
+    }
     else
     {
         info = NULL;
@@ -490,7 +548,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *info)
     pseg = alloc_pseg(seg);
     if ( !pseg )
         goto out;
-    pdev = alloc_pdev(pseg, bus, devfn);
+    pdev = alloc_pdev(pseg, bus, devfn, (info && info->is_link), bus_link, devfn_link);
     if ( !pdev )
         goto out;
 
@@ -604,7 +662,7 @@ out:
 int pci_remove_device(u16 seg, u8 bus, u8 devfn)
 {
     struct pci_seg *pseg = get_pseg(seg);
-    struct pci_dev *pdev;
+    struct pci_dev *pdev, *link = NULL;
     int ret;
 
     ret = xsm_resource_unplug_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
@@ -617,16 +675,29 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
         return -ENODEV;
 
     spin_lock(&pcidevs_lock);
+retry:
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
         {
             ret = iommu_remove_device(pdev);
             if ( pdev->domain )
                 list_del(&pdev->domain_list);
-            pci_cleanup_msi(pdev);
+            if ( !pdev->info.is_link ) /* If we are not the 'fake device' */
+                pci_cleanup_msi(pdev);
+            if ( pdev->link ) {
+                /* Can be NULL if the other device was removed first. */
+                link = pdev->link;
+            }
             free_pdev(pseg, pdev);
             printk(XENLOG_DEBUG "PCI remove device %04x:%02x:%02x.%u\n",
                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            if ( link )
+            {
+                bus = link->bus;
+                devfn = link->devfn;
+                link->link = NULL;
+                goto retry;
+            }
             break;
         }
 
@@ -838,7 +909,7 @@ static int __init _scan_pci_devices(struct pci_seg *pseg, void *arg)
                     continue;
                 }
 
-                pdev = alloc_pdev(pseg, bus, PCI_DEVFN(dev, func));
+                pdev = alloc_pdev(pseg, bus, PCI_DEVFN(dev, func), 0, 0, 0);
                 if ( !pdev )
                 {
                     printk("%s: alloc_pdev failed.\n", __func__);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..a5a4664 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1468,6 +1468,25 @@ static int domain_context_mapping(
         if ( ret )
             break;
 
+        if ( pdev->link )
+        {
+            u8 bus_link, devfn_link;
+            struct pci_dev *link_dev = pdev->link;
+
+            ASSERT ( link_dev );
+
+            bus_link = link_dev->bus;
+            devfn_link = link_dev->devfn;
+
+            if ( iommu_verbose )
+                dprintk(VTDPREFIX, "d%d:PCI: map %04x:%02x:%02x.%u\n",
+                        domain->domain_id, seg, bus_link,
+                        PCI_SLOT(devfn_link), PCI_FUNC(devfn_link));
+
+
+            ret = domain_context_mapping_one(domain, drhd->iommu, bus_link, devfn_link,
+                                             pci_get_pdev(seg, bus, devfn));
+        }
         if ( find_upstream_bridge(seg, &bus, &devfn, &secbus) < 1 )
             break;
 
@@ -1603,6 +1622,18 @@ static int domain_context_unmap(
         if ( ret )
             break;
 
+        if ( pdev->link )
+        {
+            struct pci_dev *link = pdev->link;
+
+            ASSERT(link->link == pdev);
+            tmp_bus = link->bus;
+            tmp_devfn = link->devfn;
+            if ( iommu_verbose )
+                dprintk(VTDPREFIX, "d%d:PCI: unmap %04x:%02x:%02x.%u\n",
+                        domain->domain_id, seg, tmp_bus, PCI_SLOT(tmp_devfn), PCI_FUNC(tmp_devfn));
+            ret = domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn);
+        }
         tmp_bus = bus;
         tmp_devfn = devfn;
         if ( find_upstream_bridge(seg, &tmp_bus, &tmp_devfn, &secbus) < 1 )
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index d547928..c476175 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -281,6 +281,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_pci_mmcfg_reserved_t);
 #define XEN_PCI_DEV_EXTFN              0x1
 #define XEN_PCI_DEV_VIRTFN             0x2
 #define XEN_PCI_DEV_PXM                0x4
+#define XEN_PCI_DEV_LINK               0x8
 
 #define PHYSDEVOP_pci_device_add        25
 struct physdev_pci_device_add {
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index cadb525..b883c28 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -39,6 +39,7 @@ struct pci_dev_info {
         u8 bus;
         u8 devfn;
     } physfn;
+    bool_t is_link;
 };
 
 struct pci_dev {
@@ -75,6 +76,8 @@ struct pci_dev {
 #define PT_FAULT_THRESHOLD 10
     } fault;
     u64 vf_rlen[6];
+
+    struct pci_dev *link;
 };
 
 #define for_each_pdev(domain, pdev) \
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pci-On-PCI-dump-keyhandler-include-bus2bridge-inform.patch"

>From 3edf6e0b1b646a358ae14c64e726ad24350ad510 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 12:54:53 -0500
Subject: [PATCH 1/6] pci: On PCI dump keyhandler include bus2bridge
 information

As it helps in figuring out whether they match reality if the
initial domain has altered the bus topology.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..64dfd73 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -942,6 +942,7 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 {
     struct pci_dev *pdev;
     struct msi_desc *msi;
+    int i;
 
     printk("==== segment %04x ====\n", pseg->nr);
 
@@ -955,6 +956,17 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
                printk("%d ", msi->irq);
         printk(">\n");
     }
+    printk("==== Bus2Bridge %04x ====\n", pseg->nr);
+    spin_lock(&pseg->bus2bridge_lock);
+    for ( i = 0; i < MAX_BUSES; i++)
+    {
+        if ( !pseg->bus2bridge[i].map )
+            continue;
+        printk("%02x -> %02x:%02x.%u\n", i, pseg->bus2bridge[i].bus,
+               PCI_SLOT(pseg->bus2bridge[i].devfn),
+               PCI_FUNC(pseg->bus2bridge[i].devfn));
+    }
+    spin_unlock(&pseg->bus2bridge_lock);
 
     return 0;
 }
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0002-pci-On-PCI-dump-device-keyhandler-include-Device-and.patch"

>From ad49978e083123a1068461cd7f2a8e0c2becca17 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 12:52:35 -0500
Subject: [PATCH 2/6] pci: On PCI dump device keyhandler include Device and
 Vendor ID

As it helps in troubleshooting if the initial domain has
re-numbered the bus numbers and what Xen sees is not the
reality.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 64dfd73..1ad4f17 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -948,9 +948,12 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
     {
-        printk("%04x:%02x:%02x.%u - dom %-3d - MSIs < ",
+        int id = pci_conf_read32(pseg->nr, pdev->bus, PCI_SLOT(pdev->devfn),
+                                 PCI_FUNC(pdev->devfn), 0);
+        printk("%04x:%02x:%02x.%u (%04x:%04x)- dom %-3d - MSIs < ",
                pseg->nr, pdev->bus,
                PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
+               id & 0xffff, (id >> 16) & 0xffff,
                pdev->domain ? pdev->domain->domain_id : -1);
         list_for_each_entry ( msi, &pdev->msi_list, list )
                printk("%d ", msi->irq);
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0003-DEBUG-Include-upstream-bridge-information.patch"

>From fb4cf64dcced21b1f98b1e60fbb9dfa70f61fc3c Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 17:01:42 -0500
Subject: [PATCH 3/6] DEBUG: Include upstream bridge information.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 1ad4f17..2a6eaa4 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -950,6 +950,9 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
     {
         int id = pci_conf_read32(pseg->nr, pdev->bus, PCI_SLOT(pdev->devfn),
                                  PCI_FUNC(pdev->devfn), 0);
+        int rc = 0;
+        u8 bus, devfn, secbus;
+
         printk("%04x:%02x:%02x.%u (%04x:%04x)- dom %-3d - MSIs < ",
                pseg->nr, pdev->bus,
                PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
@@ -957,7 +960,14 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
                pdev->domain ? pdev->domain->domain_id : -1);
         list_for_each_entry ( msi, &pdev->msi_list, list )
                printk("%d ", msi->irq);
-        printk(">\n");
+        bus = pdev->bus;
+        devfn = pdev->devfn;
+
+        rc = find_upstream_bridge( pseg->nr, &bus, &devfn, &secbus );
+        if ( rc < 0)
+            printk(">\n");
+        else
+            printk(">[%02x:%02x.%u]\n", bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
     }
     printk("==== Bus2Bridge %04x ====\n", pseg->nr);
     spin_lock(&pseg->bus2bridge_lock);
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--YZ5djTAD1cGYuMQK--


From xen-devel-bounces@lists.xen.org Fri Feb 21 19:08:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 19:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGvSb-0004oX-Re; Fri, 21 Feb 2014 19:08:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGvSa-0004oK-Kr
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 19:08:33 +0000
Received: from [85.158.139.211:19972] by server-5.bemta-5.messagelabs.com id
	B7/5A-32749-F24A7035; Fri, 21 Feb 2014 19:08:31 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393009707!5454496!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10560 invoked from network); 21 Feb 2014 19:08:29 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 19:08:29 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LJ8For013262
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 19:08:16 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJ8DS6026551
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 19:08:14 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJ8DeV005503; Fri, 21 Feb 2014 19:08:13 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 11:08:12 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id C00701C293E; Fri, 21 Feb 2014 14:08:11 -0500 (EST)
Date: Fri, 21 Feb 2014 14:08:11 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Gordan Bobic <gordan@bobich.net>, jbeulich@suse.com
Message-ID: <20140221190811.GA9232@phenom.dumpdata.com>
References: <52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
	<1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
	<52CBD92F.3050301@citrix.com>
	<7496ae8fbc23e417e234a3cfceee19e6@mail.shatteredsilicon.net>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="YZ5djTAD1cGYuMQK"
Content-Disposition: inline
In-Reply-To: <7496ae8fbc23e417e234a3cfceee19e6@mail.shatteredsilicon.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

On Tue, Jan 07, 2014 at 10:44:29AM +0000, Gordan Bobic wrote:
> On 2014-01-07 10:38, Andrew Cooper wrote:
> >On 07/01/14 10:35, Gordan Bobic wrote:
> >>On 2014-01-07 03:17, Zhang, Yang Z wrote:
> >>>Konrad Rzeszutek Wilk wrote on 2014-01-07:
> >>>>>Which would look like this:
> >>>>>
> >>>>>C220 ---> Tundra Bridge -----> (HB6 PCI bridge -> Brooktree BDFs)
> >>>>>on the card
> >>>>>          \--------------> IEEE-1394a
> >>>>>
> >>>>>I am actually wondering if this 07:00.0 device is the one that
> >>>>>reports itself as 08:00.0 (which I think is what you alluding to
> >>>>>Jan)
> >>>>>
> >>>>
> >>>>And to double check that theory I decided to pass in the IEEE-1394a
> >>>>to a guest:
> >>>>
> >>>>           +-1c.5-[07-08]----00.0-[08]----03.0  Texas Instruments
> >>>>TSB43AB22A IEEE-1394a-2000 Controller (PHY/Link) [iOHCI-Lynx]
> >>>>
> >>>>
> >>>>(XEN) [VT-D]iommu.c:885: iommu_fault_status: Fault Overflow (XEN)
> >>>>[VT-D]iommu.c:887: iommu_fault_status: Primary Pending Fault (XEN)
> >>>>[VT-D]iommu.c:865: DMAR:[DMA Read] Request device
> >>>>[0000:08:00.0] fault
> >>>>addr 370f1000, iommu reg = ffff82c3ffd53000 (XEN) DMAR:[fault reason
> >>>>02h] Present bit in context entry is clear (XEN) print_vtd_entries:
> >>>>iommu ffff83083d4939b0 dev 0000:08:00.0 gmfn 370f1 (XEN)
> >>>>root_entry
> >>>>= ffff83083d47f000 (XEN)     root_entry[8] = 72569b001 (XEN)
> >>>>context
> >>>>= ffff83072569b000 (XEN)     context[0] = 0_0 (XEN)
> >>>>ctxt_entry[0]
> >>>>not present
> >>>>
> >>>>So, capture card OK - Likely the Tundra bridge has an issue:
> >>>>
> >>>>07:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)
> >>>>(prog-if 01 [Subtractive decode])
> >>>>        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop-
> >>>>        ParErr- Stepping- SERR- FastB2B- DisINTx- Status:
> >>>>Cap+ 66MHz-
> >>>>        UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort+
> >>>>        >SERR- <PERR- INTx- Latency: 0 Bus: primary=07,
> >>>>secondary=08,
> >>>>        subordinate=08, sec-latency=32 Memory behind bridge:
> >>>>        f0600000-f06fffff Secondary status: 66MHz+ FastB2B+ ParErr-
> >>>>        DEVSEL=medium TAbort- <TAbort- <MAbort+ <SERR- <PERR-
> >>>>BridgeCtl:
> >>>>        Parity- SERR- NoISA- VGA- MAbort- >Reset- FastB2B-
> >>>>                PriDiscTmr- SecDiscTmr- DiscTmrStat- DiscTmrSERREn-
> >>>>        Capabilities: [60] Subsystem: Super Micro Computer Inc
> >>>>Device 0805
> >>>>        Capabilities: [a0] Power Management version 3
> >>>>                Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA
> >>>>                PME(D0+,D1+,D2+,D3hot+,D3cold+) Status: D0
> >>>>NoSoftRst+
> >>>>                PME-Enable- DSel=0 DScale=0 PME-
> >>>>
> >>>>or there is some unknown bridge in the motherboard.
> >>>
> >>>According your description above, the upstream Linux should also have
> >>>the same problem. Did you see it with upstream Linux?
> >>
> >>The problem I was seeing with LSI cards (phantom device doing DMA)
> >>does, indeed, also occur in upstream Linux. If I enable intel-iommu on
> >>bare metal Linux, the same problem occurs as with Xen.
> >>
> >>>There may be some buggy device that generate DMA request with
> >>>internal
> >>>BDF but it didn't expose it(not like Phantom device). For those
> >>>devices, I think we need to setup the VT-d page table manually.
> >>
> >>I think what is needed is a pci-phantom style override that tells the
> >>hypervisor to tell the IOMMU to allow DMA traffic from a specific
> >>invisible device ID.
> >>
> >>Gordan
> >
> >There is.  See "pci-phantom" in
> >http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html
> 
> I thought this was only applicable to phantom _functions_ (number
> after the
> dot) rather than whole phantom _devices_. Is that not the case?

My new Supermicro X10SAE has this issue as well and I wrote a patch to 
"fix" this. Just to be clear - this is an issue with the motherboard
(or rather the PCI-to-PCI bridge) is that it violates VT-d chapter
3.9.2:
"For devices behind conventional PCI bridges, the source-id in the DMA
requests is the requestor-id of the bridge device.".

In my case the bridge is is:

06:00.0 PCI bridge: Tundra Semiconductor Corp. Device 8113 (rev 01)

But all the requests from the devices behind the bridge show up with
request-id +1 (aka 07:00.0):


(XEN) [2014-02-05 22:23:23] [VT-D]iommu.c:880: iommu_fau3] [VT-D]iommu.c:882: iommu_fault_status: Primary Pending Fault
(XEN) [2014-02-05 22:23:23] [VT-D]iommu.c:860: DMAR:[DMA Read] Request device [0000:07:00.0] fault addr a57fc000, iommu reg = ffff82c000203000
(XEN) [2014-02-05 22:23:23] DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) [2014-02-05 22:23:23] print_vtd_entries: iommu ffff83023948b9c0 dev 0000:07:00.0 gmfn a57fc
(XEN) [2014-02-05 22:23:23]     root_entry = ffff82004000d000
(XEN) [2014-02-05 22:23:23]     root_entry[7] = 1b8eba001
(XEN) [2014-02-05 22:23:23]     context = ffff82004000e000
(XEN) [2014-02-05 22:23:23]     context[0] = 0_0
(XEN) [2014-02-05 22:23:23]     ctxt_entry[0] not present
[   21.305708] firewire_ohci 0000:07:03.0: added OHCI v1.10 device as card 0, 4 IR + 8 IT contexts, quirks 0x2
(XEN) [2014-02-05 22:23:24] [VT-D]iommu.c:880: iommu_fault_status: Fault Overflow
(XEN) [2014-02-05 22:23:24] [VT-D]iommu.c:882: iommu_fault_status: Primary Pending Fault
(XEN) [2014-02-05 22:23:24] [VT-D]iommu.c:860: DMAR:[DMA Read] Request device [0000:07:00.0] fault addr b76cd000, iommu reg = ffff82c000203000
(XEN) [2014-02-05 22:23:24] DMAR:[fault reason 02h] Present bit in context entry is clear
(XEN) [2014-02-05 22:23:24] print_vtd_entries: iommu ffff83023948b9c0 dev 0000:07:00.0 gmfn b76cd
(XEN) [2014-02-05 22:23:24]     root_entry = ffff82004000d000
(XEN) [2014-02-05 22:23:24]     root_entry[7] = 1b8eba001
(XEN) [2014-02-05 22:23:24]     context = ffff82004000e000
(XEN) [2014-02-05 22:23:24]     context[0] = 0_0
(XEN) [2014-02-05 22:23:24]     ctxt_entry[0] not present
[   21.386993] firewire_ohci 0000:07:03.0: bad self ID 0/1 (00000000 != ~00000000)
[   21.402892] initcall fw_ohci_init+0x0/0x1000 [firewire_ohci] returned 0 after 254567 usecs


Anyhow, the "solution" was to provide a "link" to the device
being passed in and create a fake device in Xen, so that when
I pass through my firewire controller (07:03.0) it will also set up
a context for the 07:00.0 device (which does not exist at all).

Attached is the Linux kernel module I use to let the hypervisor
know about the new device (this could have been written as
a user-space application too).

And the patch for the hypervisor, along with some extra debug
patches so when doing 'xl debug-keys Q' you can get a better
sense of what is what.

The 0004-xen-pci-Introduce-a-way-to-deal-with-buggy-hardware-.patch
is the interesting one.

If there is interest in upstreaming this I can take a look -
but I will need guidance from Jan on how he would like to do it.


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="hack.c"


#include <linux/module.h>
#include <linux/string.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/stat.h>
#include <linux/err.h>
#include <linux/ctype.h>
#include <linux/slab.h>
#include <linux/limits.h>
#include <linux/device.h>
#include <linux/pci.h>

#include <xen/interface/xen.h>
#include <xen/interface/physdev.h>

#include <asm/xen/hypervisor.h>
#include <asm/xen/hypercall.h>

#define LSI_HACK  "0.1"

MODULE_AUTHOR("Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>");
MODULE_DESCRIPTION("lsi hack");
MODULE_LICENSE("GPL");
MODULE_VERSION(LSI_HACK);

/* Want to link to the device passed in the guest */
int bus = 7;
module_param(bus, int, 0644);
MODULE_PARM_DESC(bus, "bus");
int slot = 3;
module_param(slot, int, 0644);
MODULE_PARM_DESC(slot, "slot");
int func = 0;
module_param(func, int, 0644);
MODULE_PARM_DESC(func, "func");

#define XEN_PCI_DEV_LINK 0x8
static int __init lsi_hack_init(void)
{
        int r = 0;

        struct physdev_pci_device_add add = {
			.seg	= 0,
                        .bus    = 0x7,
                        .devfn  = PCI_DEVFN(0,0),
			.physfn.bus	= bus, /* The device we want to copy from */
			.physfn.devfn	= PCI_DEVFN(slot,func),
			.flags = XEN_PCI_DEV_LINK,
                };
	printk("%s: %02x:%02x.%u, %02x:%02x.%u, %x\n",
		__func__, add.bus, PCI_SLOT(add.devfn),
		PCI_FUNC(add.devfn), add.physfn.bus,
		PCI_SLOT(add.physfn.devfn), PCI_FUNC(add.physfn.devfn),
		add.flags);
        r = HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_add, &add);

        return r;
}

static void __exit lsi_hack_exit(void)
{
        int r = 0;
        struct physdev_manage_pci manage_pci;

        manage_pci.bus = 0x7;
        manage_pci.devfn = PCI_DEVFN(0,0);

        r = HYPERVISOR_physdev_op(PHYSDEVOP_manage_pci_remove,
                &manage_pci);
        if (r)
                printk(KERN_ERR "%s: %d\n", __FUNCTION__, r);
}

module_init(lsi_hack_init);
module_exit(lsi_hack_exit);

--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0004-xen-pci-Introduce-a-way-to-deal-with-buggy-hardware-.patch"

>From cb165429726978952f5b9e75bece1dcb5630667f Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Feb 2014 10:58:19 -0500
Subject: [PATCH 4/6] xen/pci: Introduce a way to deal with buggy hardware with
 "hidden" PCI buses.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/physdev.c              | 14 +++++-
 xen/drivers/passthrough/pci.c       | 89 +++++++++++++++++++++++++++++++++----
 xen/drivers/passthrough/vtd/iommu.c | 31 +++++++++++++
 xen/include/public/physdev.h        |  1 +
 xen/include/xen/pci.h               |  3 ++
 5 files changed, 128 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index bc0634c..f843c49 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -609,7 +609,11 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&add, arg, 1) != 0 )
             break;
 
-        pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
+        if ( add.flags & XEN_PCI_DEV_EXTFN)
+            pdev_info.is_extfn = 1;
+        else
+            pdev_info.is_extfn = 0;
+
         if ( add.flags & XEN_PCI_DEV_VIRTFN )
         {
             pdev_info.is_virtfn = 1;
@@ -618,6 +622,14 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         }
         else
             pdev_info.is_virtfn = 0;
+
+        if ( add.flags & XEN_PCI_DEV_LINK )
+        {
+            pdev_info.is_link = 1;
+            pdev_info.physfn.bus = add.physfn.bus;
+            pdev_info.physfn.devfn = add.physfn.devfn;
+        } else
+            pdev_info.is_link = 0;
         ret = pci_add_device(add.seg, add.bus, add.devfn, &pdev_info);
         break;
     }
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 2a6eaa4..0e59216 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -153,7 +153,8 @@ static void __init parse_phantom_dev(char *str) {
 }
 custom_param("pci-phantom", parse_phantom_dev);
 
-static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
+static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn,
+                                  int link, u8 orig_bus, u8 orig_devfn)
 {
     struct pci_dev *pdev;
 
@@ -169,8 +170,38 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
     *((u8*) &pdev->bus) = bus;
     *((u8*) &pdev->devfn) = devfn;
     pdev->domain = NULL;
+    pdev->link = NULL;
     INIT_LIST_HEAD(&pdev->msi_list);
 
+    if ( link )
+    {
+        struct pci_dev *dev;
+        list_for_each_entry ( dev, &pseg->alldevs_list, alldevs_list )
+        {
+            if ( dev->bus == orig_bus && dev->devfn == orig_devfn )
+            {
+                /* N.B. The 'bus' passed is 'new' one, while 'orig_bus' are
+                 * the ones we expect to exist. We over-write 'bus' and
+                 * 'devfn' with the original one so that this new device
+                 * will be created with the original device properties.
+                 */
+                if ( dev->link )
+                {
+                    xfree (pdev);
+                    return NULL;
+                }
+                bus = dev->bus;
+                devfn = dev->devfn;
+                dev->link = pdev;
+                pdev->link = dev;
+                pdev->info.is_link = 1;
+                printk("%04x:%02x:%02x.%u linked with %02x:%02x.%u\n",
+                       pseg->nr, pdev->bus, PCI_SLOT(pdev->devfn),
+                       PCI_FUNC(pdev->devfn), dev->bus, PCI_SLOT(dev->devfn),
+                       PCI_FUNC(dev->devfn));
+            }
+        }
+    }
     if ( pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
                              PCI_CAP_ID_MSIX) )
     {
@@ -201,12 +232,32 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
             sub_bus = pci_conf_read8(pseg->nr, bus, PCI_SLOT(devfn),
                                      PCI_FUNC(devfn), PCI_SUBORDINATE_BUS);
 
+            if ( pdev->info.is_link )
+            {
+                if ( pdev->bus >= sec_bus && pdev->bus <= sub_bus )
+                {
+#if 0
+                    u8 i = sec_bus;
+                    /* We can create an loop in bus2bridge by pointing to ourselves.
+                     * Hence destroy sec_bus up to pdev_bus values */
+                    spin_lock(&pseg->bus2bridge_lock);
+                    for ( ; i <= pdev->bus; i++ )
+                        pseg->bus2bridge[sec_bus].map = 0;
+                    spin_unlock(&pseg->bus2bridge_lock);
+                    /* And increment it so it won't cover us again*/
+                    sec_bus = pdev->bus + 1;
+                    printk("Link corrected [%02x:%02x:%u] spanning %x->%x\n", pdev->bus,
+                           PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),sec_bus, sub_bus);
+#endif
+                    break;
+                }
+            }
             spin_lock(&pseg->bus2bridge_lock);
             for ( ; sec_bus <= sub_bus; sec_bus++ )
             {
                 pseg->bus2bridge[sec_bus].map = 1;
-                pseg->bus2bridge[sec_bus].bus = bus;
-                pseg->bus2bridge[sec_bus].devfn = devfn;
+                pseg->bus2bridge[sec_bus].bus = pdev->bus;
+                pseg->bus2bridge[sec_bus].devfn = pdev->devfn;
             }
             spin_unlock(&pseg->bus2bridge_lock);
             break;
@@ -299,7 +350,7 @@ int __init pci_hide_device(int bus, int devfn)
     int rc = -ENOMEM;
 
     spin_lock(&pcidevs_lock);
-    pdev = alloc_pdev(get_pseg(0), bus, devfn);
+    pdev = alloc_pdev(get_pseg(0), bus, devfn, 0, 0, 0);
     if ( pdev )
     {
         _pci_hide_device(pdev);
@@ -317,7 +368,7 @@ int __init pci_ro_device(int seg, int bus, int devfn)
 
     if ( !pseg )
         return -ENOMEM;
-    pdev = alloc_pdev(pseg, bus, devfn);
+    pdev = alloc_pdev(pseg, bus, devfn, 0, 0, 0);
     if ( !pdev )
         return -ENOMEM;
 
@@ -458,6 +509,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *info)
     struct pci_seg *pseg;
     struct pci_dev *pdev;
     unsigned int slot = PCI_SLOT(devfn), func = PCI_FUNC(devfn);
+    u8 bus_link = 0, devfn_link = 0;
     const char *pdev_type;
     int ret;
 
@@ -474,6 +526,12 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *info)
             pci_add_device(seg, info->physfn.bus, info->physfn.devfn, NULL);
         pdev_type = "virtual function";
     }
+    else if (info->is_link)
+    {
+        bus_link = info->physfn.bus;
+        devfn_link = info->physfn.devfn;
+        pdev_type = "link";
+    }
     else
     {
         info = NULL;
@@ -490,7 +548,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn, const struct pci_dev_info *info)
     pseg = alloc_pseg(seg);
     if ( !pseg )
         goto out;
-    pdev = alloc_pdev(pseg, bus, devfn);
+    pdev = alloc_pdev(pseg, bus, devfn, (info && info->is_link), bus_link, devfn_link);
     if ( !pdev )
         goto out;
 
@@ -604,7 +662,7 @@ out:
 int pci_remove_device(u16 seg, u8 bus, u8 devfn)
 {
     struct pci_seg *pseg = get_pseg(seg);
-    struct pci_dev *pdev;
+    struct pci_dev *pdev, *link = NULL;
     int ret;
 
     ret = xsm_resource_unplug_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
@@ -617,16 +675,29 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
         return -ENODEV;
 
     spin_lock(&pcidevs_lock);
+retry:
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
         {
             ret = iommu_remove_device(pdev);
             if ( pdev->domain )
                 list_del(&pdev->domain_list);
-            pci_cleanup_msi(pdev);
+            if ( !pdev->info.is_link ) /* If we are not the 'fake device' */
+                pci_cleanup_msi(pdev);
+            if ( pdev->link ) {
+                /* Can be NULL if the other device was removed first. */
+                link = pdev->link;
+            }
             free_pdev(pseg, pdev);
             printk(XENLOG_DEBUG "PCI remove device %04x:%02x:%02x.%u\n",
                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            if ( link )
+            {
+                bus = link->bus;
+                devfn = link->devfn;
+                link->link = NULL;
+                goto retry;
+            }
             break;
         }
 
@@ -838,7 +909,7 @@ static int __init _scan_pci_devices(struct pci_seg *pseg, void *arg)
                     continue;
                 }
 
-                pdev = alloc_pdev(pseg, bus, PCI_DEVFN(dev, func));
+                pdev = alloc_pdev(pseg, bus, PCI_DEVFN(dev, func), 0, 0, 0);
                 if ( !pdev )
                 {
                     printk("%s: alloc_pdev failed.\n", __func__);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..a5a4664 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1468,6 +1468,25 @@ static int domain_context_mapping(
         if ( ret )
             break;
 
+        if ( pdev->link )
+        {
+            u8 bus_link, devfn_link;
+            struct pci_dev *link_dev = pdev->link;
+
+            ASSERT ( link_dev );
+
+            bus_link = link_dev->bus;
+            devfn_link = link_dev->devfn;
+
+            if ( iommu_verbose )
+                dprintk(VTDPREFIX, "d%d:PCI: map %04x:%02x:%02x.%u\n",
+                        domain->domain_id, seg, bus_link,
+                        PCI_SLOT(devfn_link), PCI_FUNC(devfn_link));
+
+
+            ret = domain_context_mapping_one(domain, drhd->iommu, bus_link, devfn_link,
+                                             pci_get_pdev(seg, bus, devfn));
+        }
         if ( find_upstream_bridge(seg, &bus, &devfn, &secbus) < 1 )
             break;
 
@@ -1603,6 +1622,18 @@ static int domain_context_unmap(
         if ( ret )
             break;
 
+        if ( pdev->link )
+        {
+            struct pci_dev *link = pdev->link;
+
+            ASSERT(link->link == pdev);
+            tmp_bus = link->bus;
+            tmp_devfn = link->devfn;
+            if ( iommu_verbose )
+                dprintk(VTDPREFIX, "d%d:PCI: unmap %04x:%02x:%02x.%u\n",
+                        domain->domain_id, seg, tmp_bus, PCI_SLOT(tmp_devfn), PCI_FUNC(tmp_devfn));
+            ret = domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn);
+        }
         tmp_bus = bus;
         tmp_devfn = devfn;
         if ( find_upstream_bridge(seg, &tmp_bus, &tmp_devfn, &secbus) < 1 )
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index d547928..c476175 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -281,6 +281,7 @@ DEFINE_XEN_GUEST_HANDLE(physdev_pci_mmcfg_reserved_t);
 #define XEN_PCI_DEV_EXTFN              0x1
 #define XEN_PCI_DEV_VIRTFN             0x2
 #define XEN_PCI_DEV_PXM                0x4
+#define XEN_PCI_DEV_LINK               0x8
 
 #define PHYSDEVOP_pci_device_add        25
 struct physdev_pci_device_add {
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index cadb525..b883c28 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -39,6 +39,7 @@ struct pci_dev_info {
         u8 bus;
         u8 devfn;
     } physfn;
+    bool_t is_link;
 };
 
 struct pci_dev {
@@ -75,6 +76,8 @@ struct pci_dev {
 #define PT_FAULT_THRESHOLD 10
     } fault;
     u64 vf_rlen[6];
+
+    struct pci_dev *link;
 };
 
 #define for_each_pdev(domain, pdev) \
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0001-pci-On-PCI-dump-keyhandler-include-bus2bridge-inform.patch"

>From 3edf6e0b1b646a358ae14c64e726ad24350ad510 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 12:54:53 -0500
Subject: [PATCH 1/6] pci: On PCI dump keyhandler include bus2bridge
 information

As it helps in figuring out whether they match reality if the
initial domain has altered the bus topology.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..64dfd73 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -942,6 +942,7 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 {
     struct pci_dev *pdev;
     struct msi_desc *msi;
+    int i;
 
     printk("==== segment %04x ====\n", pseg->nr);
 
@@ -955,6 +956,17 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
                printk("%d ", msi->irq);
         printk(">\n");
     }
+    printk("==== Bus2Bridge %04x ====\n", pseg->nr);
+    spin_lock(&pseg->bus2bridge_lock);
+    for ( i = 0; i < MAX_BUSES; i++)
+    {
+        if ( !pseg->bus2bridge[i].map )
+            continue;
+        printk("%02x -> %02x:%02x.%u\n", i, pseg->bus2bridge[i].bus,
+               PCI_SLOT(pseg->bus2bridge[i].devfn),
+               PCI_FUNC(pseg->bus2bridge[i].devfn));
+    }
+    spin_unlock(&pseg->bus2bridge_lock);
 
     return 0;
 }
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0002-pci-On-PCI-dump-device-keyhandler-include-Device-and.patch"

>From ad49978e083123a1068461cd7f2a8e0c2becca17 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 12:52:35 -0500
Subject: [PATCH 2/6] pci: On PCI dump device keyhandler include Device and
 Vendor ID

As it helps in troubleshooting if the initial domain has
re-numbered the bus numbers and what Xen sees is not the
reality.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 64dfd73..1ad4f17 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -948,9 +948,12 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
     {
-        printk("%04x:%02x:%02x.%u - dom %-3d - MSIs < ",
+        int id = pci_conf_read32(pseg->nr, pdev->bus, PCI_SLOT(pdev->devfn),
+                                 PCI_FUNC(pdev->devfn), 0);
+        printk("%04x:%02x:%02x.%u (%04x:%04x)- dom %-3d - MSIs < ",
                pseg->nr, pdev->bus,
                PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
+               id & 0xffff, (id >> 16) & 0xffff,
                pdev->domain ? pdev->domain->domain_id : -1);
         list_for_each_entry ( msi, &pdev->msi_list, list )
                printk("%d ", msi->irq);
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0003-DEBUG-Include-upstream-bridge-information.patch"

>From fb4cf64dcced21b1f98b1e60fbb9dfa70f61fc3c Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Tue, 4 Feb 2014 17:01:42 -0500
Subject: [PATCH 3/6] DEBUG: Include upstream bridge information.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/passthrough/pci.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 1ad4f17..2a6eaa4 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -950,6 +950,9 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
     {
         int id = pci_conf_read32(pseg->nr, pdev->bus, PCI_SLOT(pdev->devfn),
                                  PCI_FUNC(pdev->devfn), 0);
+        int rc = 0;
+        u8 bus, devfn, secbus;
+
         printk("%04x:%02x:%02x.%u (%04x:%04x)- dom %-3d - MSIs < ",
                pseg->nr, pdev->bus,
                PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn),
@@ -957,7 +960,14 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
                pdev->domain ? pdev->domain->domain_id : -1);
         list_for_each_entry ( msi, &pdev->msi_list, list )
                printk("%d ", msi->irq);
-        printk(">\n");
+        bus = pdev->bus;
+        devfn = pdev->devfn;
+
+        rc = find_upstream_bridge( pseg->nr, &bus, &devfn, &secbus );
+        if ( rc < 0)
+            printk(">\n");
+        else
+            printk(">[%02x:%02x.%u]\n", bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
     }
     printk("==== Bus2Bridge %04x ====\n", pseg->nr);
     spin_lock(&pseg->bus2bridge_lock);
-- 
1.8.3.1


--YZ5djTAD1cGYuMQK
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--YZ5djTAD1cGYuMQK--


From xen-devel-bounces@lists.xen.org Fri Feb 21 19:19:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 19:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGvcf-000503-0W; Fri, 21 Feb 2014 19:18:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGvcc-0004zy-Le
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 19:18:55 +0000
Received: from [85.158.139.211:9711] by server-13.bemta-5.messagelabs.com id
	7E/38-18801-D96A7035; Fri, 21 Feb 2014 19:18:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393010329!5500458!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20657 invoked from network); 21 Feb 2014 19:18:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 19:18:51 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LJIdZo024409
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 19:18:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJIb6G002066
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 19:18:39 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJIbtM029433; Fri, 21 Feb 2014 19:18:37 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 11:18:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3ADE51C293E; Fri, 21 Feb 2014 14:18:33 -0500 (EST)
Date: Fri, 21 Feb 2014 14:18:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140221191833.GA8812@phenom.dumpdata.com>
References: <52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
	<20140124215652.GA18710@phenom.dumpdata.com>
	<20140205200708.GA9278@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Qxx1br4bt0+wmkIi"
Content-Disposition: inline
In-Reply-To: <20140205200708.GA9278@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4 Was:Re:
 [PATCH] x86/msi: Validate the guest-identified PCI devices in
 pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

> But I am not sure if that is the right way of doing it. Anyhow there
> was another assumption made in which 'assign-busses' crippled Xen
> (see second attachment).
> 
> 3). Trap on PCI_SECONDARY_BUS and PCI_SUBORDINATE_BUS writes and
>     fixup the structures.
> 
>     I hadn't attempted that but that could also be done. That way Xen
>     is aware of those changes and can update its PCI structures.
> 

4). Make Xen do the bus re-assignment.

The attached patch is an interesting "solution" to the BIOS
not extending the bus ranges correctly for SR-IOV devices.

I paid good money for this motherboard and it has bugs <sigh>.
(To be fair, I also saw this issue on two other Intel
SandyBridge motherboards.)

Anyhow, with this patch I can finally use SR-IOV cards on this
motherboard, and it basically does what Linux's 'pci=assign-busses' does,
aka I don't get this:

SR-IOV: bus number out of range.

Because it is all done during bootup, it only runs once at boot time.

It does not fix the issue if 'pci=assign-busses' is provided on the
Linux kernel command line, but it makes the need for that parameter obsolete.
A next step would be to actually delete that from the Linux command line,
but that seems evil.

If there is interest in making this upstream in Xen I can do
that - but if we want to do that, I think we need to make Xen's
view of PCI devices similar to what this patch does. That is, have
a 'struct pci_bus' and a 'struct pci_dev' with proper linking
between them. Otherwise it is quite hard to keep all of this
sane.

The patches have a serious case of #ifdef and skanky code,
but ugh - they work for me.


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0005-xen-pci-assign-buses-Renumber-the-bus-if-there-is-a-.patch"

>From abf8a206a73bb037788b31b868102023c081d079 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Feb 2014 17:16:01 -0500
Subject: [PATCH 5/6] xen/pci=assign-buses: Renumber the bus if there is a need
 to.

Xen can re-number the PCI buses if there are SR-IOV devices there
and the BIOS hadn't done its job.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/setup.c          |   2 +
 xen/drivers/passthrough/pci.c | 689 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/pci.h         |   1 +
 3 files changed, 692 insertions(+)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..0c2f9ba 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1294,6 +1294,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     acpi_mmcfg_init();
 
+    early_pci_reassign_busses();
+
     early_msi_init();
 
     iommu_setup();    /* setup iommu if available */
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 0e59216..ba852bd 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -999,6 +999,695 @@ static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
     return 0;
 }
 
+/* Move this to its own file */
+#define DEBUG 1
+static void parse_pci_param(char *s);
+custom_param("pci", parse_pci_param);
+
+struct early_pci_bus;
+
+struct early_pci_dev {
+    struct list_head bus_list;  /* Linked against 'devices */
+    unsigned int is_serial:1;
+    unsigned int is_ehci:1;
+    unsigned int is_sriov:1;
+    unsigned int is_bridge:1;
+    u16 vendor;
+    u16 device;
+    u8 devfn;
+    u16 total_vfs;
+    u16 revision;
+    u16 class;
+    struct early_pci_bus *bus; /* On what bus we are. */
+    struct early_pci_bus *bridge; /* Ourselves if we are a bridge */
+};
+struct early_pci_bus {
+    struct list_head next;
+    struct list_head devices;
+    struct list_head children;
+    struct early_pci_bus *parent; /* Bus upstream of us. */
+    struct early_pci_dev *self; /* The PCI device that controls this bus. */
+    u8 primary; /* The (parent) bus number */
+    u8 number;
+    u8 start;
+    u8 end;
+    u8 new_end; /* To be updated too */
+    u8 new_start;
+    u8 new_primary;
+    u8 old_number;
+};
+
+static unsigned int __initdata assign_busses;
+static struct list_head __initdata early_buses_list;
+static int __initdata verbose;
+
+#define PCI_CLASS_SERIAL_USB_EHCI 0x0c0320
+#if 0
+static __init void print_pci_dev(const char *prefix, u8 bus, u8 devfn)
+{
+    u32 class, id;
+
+    class = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                            PCI_CLASS_REVISION);
+    id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                         PCI_VENDOR_ID);
+    printk("%04x:%02x.%u [%04x:%04x] class %06x [%s]\n", bus, PCI_SLOT(devfn),
+           PCI_FUNC(devfn), id & 0xfff, (id >> 16) & 0xffff, class, prefix);
+}
+#endif
+static __init struct early_pci_dev *early_alloc_pci_dev(struct early_pci_bus *bus,
+                                                        u8 devfn)
+{
+    struct early_pci_dev *dev;
+    u8 type;
+    u16 class_dev, total;
+    u32 class, id;
+    unsigned int pos;
+
+    if ( !bus )
+        return NULL;
+
+    dev = xzalloc(struct early_pci_dev);
+    if ( !dev )
+        return NULL;
+
+    INIT_LIST_HEAD(&dev->bus_list);
+    dev->devfn = devfn;
+    dev->bus = bus;
+    class = pci_conf_read32(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                            PCI_CLASS_REVISION);
+
+    dev->revision = class & 0xff;
+    dev->class = class >> 8;
+    if ( dev->class == PCI_CLASS_SERIAL_USB_EHCI )
+        dev->is_ehci = 1;
+
+    class_dev = pci_conf_read16(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                                PCI_CLASS_DEVICE);
+    switch ( class_dev )
+    {
+        case 0x0700: /* single port serial */
+        case 0x0702: /* multi port serial */
+        case 0x0780: /* other (e.g serial+parallel) */
+            dev->is_serial = 1;
+        default:
+            break;
+    }
+    type = pci_conf_read8(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                          PCI_HEADER_TYPE);
+    switch ( type & 0x7f )
+    {
+        case PCI_HEADER_TYPE_BRIDGE:
+        case PCI_HEADER_TYPE_CARDBUS:
+            dev->is_bridge = 1;
+            break;
+        case PCI_HEADER_TYPE_NORMAL:
+            pos = pci_find_cap_offset(0, bus->number, PCI_SLOT(devfn),
+                                      PCI_FUNC(devfn), PCI_CAP_ID_EXP);
+            if (!pos)   /* Not PCIe */
+                break;
+            pos = pci_find_ext_capability(0, bus->number, devfn,
+                                          PCI_EXT_CAP_ID_SRIOV);
+            if (!pos)   /* Not SR-IOV */
+                break;
+            total = pci_conf_read16(0, bus->number, PCI_SLOT(devfn),
+                                    PCI_FUNC(devfn), pos + PCI_SRIOV_TOTAL_VF);
+            if (!total)
+                break;
+            dev->is_sriov = 1;
+            dev->total_vfs = total;
+            /* Fall through */
+        default:
+            break;
+    }
+    id = pci_conf_read32(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                         PCI_VENDOR_ID);
+    dev->vendor = id & 0xffff;
+    dev->device = (id >> 16) & 0xffff;
+    /* In case MCFG is not configured we have our blacklist */
+    switch ( dev->vendor )
+    {
+        case 0x8086: /* Intel */
+            switch ( dev->device )
+            {
+                case 0x10c9: /* Intel Corporation 82576 Gigabit Network Connection (rev 01) */
+                    if ( dev->is_sriov )
+                        break;
+                    dev->is_sriov = 1;
+                    dev->total_vfs = 8;
+            }
+            break;
+        default:
+            break;
+    }
+    return dev;
+}
+
+static __init struct early_pci_bus *__find_bus(struct early_pci_bus *parent,
+                                               u8 nr)
+{
+    struct early_pci_bus *child, *bus;
+
+    if ( parent->number == nr )
+        return parent;
+
+    list_for_each_entry ( child, &parent->children, next )
+    {
+        if ( child->number == nr )
+            return child;
+        bus = __find_bus(child, nr);
+        if ( bus )
+            return bus;
+    }
+    return NULL;
+}
+
+static __init struct early_pci_bus *find_bus(u8 nr)
+{
+    struct early_pci_bus *bus, *child;
+
+    list_for_each_entry ( bus, &early_buses_list, next )
+    {
+       child = __find_bus(bus, nr);
+       if ( child )
+            return child;
+    }
+    return NULL;
+}
+
+static __init struct early_pci_dev *find_dev(u8 nr, u8 devfn)
+{
+    struct early_pci_bus *bus = NULL;
+
+    bus = find_bus(nr);
+    if ( bus ) {
+        struct early_pci_dev *dev = NULL;
+
+        list_for_each_entry ( dev, &bus->devices, bus_list )
+            if ( dev->devfn == devfn )
+                return dev;
+    }
+    return NULL;
+}
+
+static __init struct early_pci_bus *early_alloc_pci_bus(struct early_pci_dev *dev, u8 nr)
+{
+    struct early_pci_bus *bus;
+
+    bus = xzalloc(struct early_pci_bus);
+    if ( !bus )
+        return NULL;
+
+    INIT_LIST_HEAD(&bus->next);
+    INIT_LIST_HEAD(&bus->devices);
+    INIT_LIST_HEAD(&bus->children);
+    bus->number = nr;
+    bus->old_number = nr;
+    bus->self = dev;
+    if ( dev )
+        if ( !dev->bridge )
+            dev->bridge = bus;
+    return bus;
+}
+
+static void __init early_free_pci_bus(struct early_pci_bus *bus)
+{
+    struct early_pci_dev *dev, *d_tmp;
+    struct early_pci_bus *b, *b_tmp;
+
+    list_for_each_entry_safe ( b, b_tmp, &bus->children, next )
+    {
+        early_free_pci_bus(b);
+        list_del(&b->next);
+        xfree(b);
+    }
+    list_for_each_entry_safe ( dev, d_tmp, &bus->devices, bus_list )
+    {
+        list_del(&dev->bus_list);
+        xfree(dev);
+    }
+}
+
+static void __init early_free_all(void)
+{
+    struct early_pci_bus *bus, *tmp;
+
+    list_for_each_entry_safe( bus, tmp, &early_buses_list, next )
+    {
+        early_free_pci_bus (bus);
+        list_del( &bus->next );
+        xfree(bus);
+    }
+}
+
+unsigned int __init pci_iov_scan(struct early_pci_bus *bus)
+{
+    struct early_pci_dev *dev;
+    unsigned int max = 0;
+    u8 busnr;
+
+    list_for_each_entry ( dev, &bus->devices, bus_list )
+    {
+        if ( !dev->is_sriov )
+            continue;
+        if ( !dev->total_vfs )
+            continue;
+        busnr = (dev->total_vfs) / 8; /* How many buses we will need */
+        if ( busnr > max )
+            max = busnr;
+    }
+    /* Do we have enough space for them ? */
+    if ( (bus->end - bus->start) >= max )
+        return 0;
+    return max;
+}
+
+#ifdef DEBUG
+static __init const char *spaces(unsigned int lvl)
+{
+    if (lvl == 0)
+        return " ";
+    if (lvl == 1)
+        return " +--+";
+    if (lvl == 2)
+        return "    +-+";
+    if (lvl == 3)
+        return "       +-+";
+    return "         +...+";
+}
+
+static void __init print_devs(struct early_pci_bus *parent, int lvl)
+{
+    struct early_pci_dev *dev;
+    struct early_pci_bus *bus;
+
+    list_for_each_entry( dev, &parent->devices, bus_list )
+    {
+        printk("%s%04x:%02x.%u [%04x:%04x] class %06x", spaces(lvl), parent->number,
+               PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), dev->vendor,
+               dev->device, dev->class);
+        if ( dev->is_bridge )
+        {
+            printk(" BRIDGE");
+            if ( dev->bridge )
+            {
+                struct early_pci_bus *bridge = dev->bridge;
+                printk(" to BUS %x [spans %x->%x] up BUS %x", bridge->number, bridge->start, bridge->end, bridge->primary);
+                printk(" (up: %x spans %x->%x)", bridge->new_primary, bridge->new_start, bridge->new_end);
+            }
+        }
+        if ( dev->is_sriov )
+            printk(" sriov: %d", dev->total_vfs);
+        if ( dev->is_ehci )
+            printk(" EHCI DEBUG ");
+        if ( dev->is_serial )
+            printk(" SERIAL ");
+        printk("\n");
+    }
+    list_for_each_entry( bus, &parent->children, next )
+        print_devs(bus, lvl + 1);
+}
+#endif
+
+static void __init print_devices(void)
+{
+#ifdef DEBUG
+    struct early_pci_bus *bus;
+
+    if ( !verbose )
+        return;
+
+    list_for_each_entry( bus, &early_buses_list, next )
+        print_devs(bus, 0);
+#endif
+}
+
+unsigned int pci_scan_bus(struct early_pci_bus *bus);
+static int __init pci_scan_slot(struct early_pci_bus *bus, unsigned int devfn)
+{
+    struct early_pci_dev *dev;
+
+    if ( find_dev(bus->number, devfn) )
+        return 0;
+
+    if ( !pci_device_detect (0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn)) )
+        return 0;
+
+    dev = early_alloc_pci_dev(bus, devfn);
+    if ( !dev )
+        return -ENODEV;
+
+    list_add_tail(&dev->bus_list, &bus->devices);
+    return 0;
+}
+
+static int __init pci_scan_bridge(struct early_pci_bus *bus,
+                                  struct early_pci_dev *dev,
+                                  unsigned int max)
+{
+    struct early_pci_bus *child;
+    u32 buses;
+    u8 primary, secondary, subordinate;
+    unsigned int cmax = 0;
+
+    buses = pci_conf_read32(0, bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                            PCI_PRIMARY_BUS);
+
+    primary = buses & 0xFF;
+    secondary = (buses >> 8) & 0xFF;
+    subordinate = (buses >> 16) & 0xFF;
+
+    if (!primary && (primary != bus->number) && secondary && subordinate) {
+        printk("Primary bus is hard wired to 0\n");
+        primary = bus->number;
+    }
+
+    child = find_bus(secondary);
+    if ( !child )
+    {
+        child = early_alloc_pci_bus(dev, secondary);
+        if ( !child )
+            goto out;
+        /* Add to the parent's bus list */
+        list_add_tail(&child->next, &bus->children);
+        /* The primary is the upstream bus number. */
+        child->primary = primary;
+        child->start = secondary;
+        child->end = subordinate;
+        child->parent = bus;
+    }
+    cmax = pci_scan_bus(child);
+    if ( cmax > max )
+        max = cmax;
+
+    if ( child->end > max )
+        max = child->end;
+out:
+    return max;
+}
+
+unsigned int __init pci_scan_bus( struct early_pci_bus *bus)
+{
+    unsigned int max = 0, devfn;
+    struct early_pci_dev *dev;
+
+    for ( devfn = 0; devfn < 0x100; devfn++ )
+        pci_scan_slot (bus, devfn);
+
+    /* Walk all devices and create the bus structs */
+    list_for_each_entry ( dev, &bus->devices, bus_list )
+    {
+        if ( !dev->is_bridge )
+            continue;
+        if ( verbose )
+            printk("Scanning bridge %04x:%02x.%u [%04x:%04x] class %06x\n", bus->number,
+                   PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), dev->vendor, dev->device,
+                   dev->class);
+        max = pci_scan_bridge(bus, dev, max);
+    }
+    if ( max > bus->end )
+        bus->end = max;
+    return max;
+}
+
+static __init unsigned int adjust_span(struct early_pci_bus *bus,
+                                       unsigned int offset,
+                                       unsigned int adjust_start)
+{
+    struct early_pci_bus *child = NULL, *parent;
+    unsigned int scan;
+
+    scan = pci_iov_scan(bus);
+    offset += scan;
+
+    list_for_each_entry( child, &bus->children, next )
+    {
+        unsigned int new_offset;
+
+        new_offset = adjust_span(child , offset, adjust_start);
+
+        if ( new_offset > offset ) {
+            /* A new contender! */
+            offset = new_offset;
+            /* A child below us found IOV devices, so buses from this
+             * point on must have their start adjusted. */
+            adjust_start = 1;
+        }
+    }
+    bus->new_start = bus->start;
+    bus->new_end = bus->end + offset;
+
+    /* Do not update our new_start if we were the one that discovered it. */
+    if ( scan )
+        adjust_start = 0;
+
+    /* We can't check against scan as the loop might have altered it. */
+    /* N.B. Ignore host bridges. */
+    parent = bus->parent;
+    if ( adjust_start && parent )
+        bus->new_start += offset;
+
+    return offset;
+}
+
+static __init void adjust_primary(struct early_pci_bus *bus,
+                                  unsigned int offset,
+                                  unsigned int adjust_start)
+{
+    struct early_pci_bus *child;
+
+    list_for_each_entry( child, &bus->children, next )
+    {
+        child->new_primary = bus->new_start;
+        adjust_primary(child, offset, adjust_start);
+    }
+}
+
+static void __init pci_disable_forwarding(struct early_pci_bus *parent)
+{
+    struct early_pci_dev *dev;
+    u32 buses;
+
+    list_for_each_entry ( dev, &parent->devices, bus_list )
+    {
+        u8 bus;
+        u16 bctl;
+
+        if ( !dev->is_bridge )
+            continue;
+
+        bus = dev->bus->number;
+        buses = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn),
+                            PCI_FUNC(dev->devfn), PCI_PRIMARY_BUS);
+        if ( verbose )
+            printk("%04x:%02x.%u PCI_PRIMARY_BUS read %x\n", bus,
+                   PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), buses);
+        /* Lifted from Linux but not sure if this MasterAbort masking is
+         * still needed. */
+
+        /* PCI_BRIDGE_CONTROL is a 16-bit register. */
+        bctl = pci_conf_read16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                               PCI_BRIDGE_CONTROL);
+
+        pci_conf_write16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                         PCI_BRIDGE_CONTROL, bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+        /* Disable forwarding */
+        pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                         PCI_PRIMARY_BUS, buses &  ~0xffffff);
+
+        pci_conf_write16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                         PCI_BRIDGE_CONTROL, bctl);
+    }
+}
+
+static void __init __pci_program_bridge(struct early_pci_dev *dev,
+                                        struct early_pci_bus *parent)
+{
+    u16 bctl;
+    u32 buses;
+    u8 bus;
+    struct early_pci_bus *child, *bridges;
+
+    u8 primary, secondary, subordinate;
+
+    bus = parent->number; /* Upstream number . */
+    child = dev->bridge; /* The bridge we are serving. */
+
+    ASSERT( child );
+
+    buses = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn),
+                            PCI_FUNC(dev->devfn), PCI_PRIMARY_BUS);
+    if ( verbose )
+        printk("%04x:%02x.%u PCI_PRIMARY_BUS read %x\n", bus,
+               PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), buses);
+
+    /* Lifted from Linux but not sure if this MasterAbort masking is
+     * still needed. */
+    bctl = pci_conf_read16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                           PCI_BRIDGE_CONTROL);
+    pci_conf_write16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_BRIDGE_CONTROL, bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+    /* PCI_STATUS is a 16-bit register; clear its RW1C error bits. */
+    pci_conf_write16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_STATUS, 0xffff);
+
+    buses = (buses & 0xff000000)
+            | ((unsigned int)child->new_primary << 0)
+            | ((unsigned int)child->new_start   << 8)
+            | ((unsigned int)child->new_end     << 16);
+    if ( verbose )
+        printk("%04x:%02x.%u wrote to PCI_PRIMARY_BUS %x\n",  bus, PCI_SLOT(dev->devfn),
+               PCI_FUNC(dev->devfn), buses);
+
+    pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_PRIMARY_BUS, buses);
+
+    /* Double check that it is correct. */
+    buses = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn),
+                            PCI_FUNC(dev->devfn), PCI_PRIMARY_BUS);
+    if ( verbose )
+        printk("%04x:%02x.%u PCI_PRIMARY_BUS read %x\n", bus,
+               PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), buses);
+
+    primary = buses & 0xFF;
+    secondary = (buses >> 8) & 0xFF;
+    subordinate = (buses >> 16) & 0xFF;
+
+    ASSERT(primary == child->new_primary);
+    ASSERT(secondary == child->new_start);
+    ASSERT(subordinate == child->new_end);
+
+    pci_conf_write16(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_BRIDGE_CONTROL, bctl);
+
+    child->number = child->new_start;
+    child->primary = child->new_primary;
+    child->start = child->new_start;
+    child->end = child->new_end;
+
+    list_for_each_entry ( bridges, &child->children, next )
+        if (bridges->self)
+            __pci_program_bridge(bridges->self, bridges);
+}
+
+static void __init pci_program_bridge(struct early_pci_bus *bus)
+{
+    struct early_pci_dev *dev;
+
+    list_for_each_entry ( dev, &bus->devices, bus_list )
+    {
+        if ( !dev->is_bridge )
+            continue;
+        __pci_program_bridge(dev, bus);
+    }
+}
+
+static void __init update_console_devices(struct early_pci_bus *parent)
+{
+    struct early_pci_dev *dev;
+    struct early_pci_bus *bus;
+
+    list_for_each_entry( dev, &parent->devices, bus_list )
+    {
+        if ( dev->is_ehci || dev->is_serial || dev->is_bridge )
+        {
+            ;/* TODO */
+        }
+    }
+    list_for_each_entry( bus, &parent->children, next )
+        update_console_devices(bus);
+}
+
+static void __init parse_pci_param(char *s)
+{
+    char *ss;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        if ( !strcmp(s, "assign-buses") )
+            assign_busses = 1;
+        else if ( !strcmp(s, "verbose") )
+            verbose = 1;
+
+        if ( ss )
+            s = ss + 1;
+    } while ( ss );
+}
+
+void __init early_pci_reassign_busses(void)
+{
+    unsigned int nr;
+    struct early_pci_bus *bus;
+    unsigned int max = 0, adjust = 0, last_end;
+
+    if ( !assign_busses )
+        return;
+
+    INIT_LIST_HEAD(&early_buses_list);
+    for ( nr = 0; nr < 256; nr++ )
+    {
+        if ( !pci_device_detect (0, nr, 0, 0) )
+            continue;
+        if ( find_bus(nr) )
+            continue;
+        /* Host bridges do not have any parent devices ! */
+        bus = early_alloc_pci_bus(NULL, nr);
+        if ( !bus )
+            goto out;
+        bus->start = nr;
+        bus->primary = 0;   /* Points to host, which is zero */
+        max = pci_scan_bus(bus);
+        list_add_tail(&bus->next, &early_buses_list);
+    }
+    /* Walk all the devices, figure out what will be the _new_
+     * max if any. */
+    last_end = 0;
+    list_for_each_entry( bus, &early_buses_list, next )
+    {
+        unsigned int offset;
+        /* Oh no, the previous end bus number overlaps! */
+        if ( last_end > bus->start )
+        {
+            bus->new_start = last_end;
+            bus->new_end = bus->new_end + last_end;
+        }
+        last_end = bus->end;
+        offset = adjust_span(bus, 0 /* no offset ! */, adjust);
+        if (offset > adjust) {
+            adjust = offset;
+            last_end = bus->new_end;
+        }
+        adjust_primary(bus, 0, 0);
+    }
+
+    print_devices();
+    if ( !adjust )
+    {
+        printk("No need to reassign busses.\n");
+        goto out;
+    }
+    printk("Re-assigning busses to make space for %d bus numbers.\n", adjust);
+
+    /* Walk all the devices, disable serial and ehci */
+    if ( !verbose)
+        serial_suspend();
+
+    /* Walk all the bridges, disable forwarding */
+    list_for_each_entry( bus, &early_buses_list, next )
+        pci_disable_forwarding(bus);
+
+    /* Walk all bridges, reprogram with new primary, secondary and subordinate. */
+    list_for_each_entry( bus, &early_buses_list, next )
+        pci_program_bridge(bus);
+
+    /* Walk all devices, re-enable serial, ehci with new bus number */
+    list_for_each_entry( bus, &early_buses_list, next )
+        update_console_devices(bus);
+
+    if ( !verbose )
+        serial_resume();
+    print_devices();
+out:
+    early_free_all();
+}
+
 void __init setup_dom0_pci_devices(
     struct domain *d, int (*handler)(u8 devfn, struct pci_dev *))
 {
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index b883c28..1750196 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -102,6 +102,7 @@ struct pci_dev *pci_lock_domain_pdev(
 
 void setup_dom0_pci_devices(struct domain *,
                             int (*)(u8 devfn, struct pci_dev *));
+void early_pci_reassign_busses(void);
 void pci_release_devices(struct domain *d);
 int pci_add_segment(u16 seg);
 const unsigned long *pci_get_ro_map(u16 seg);
-- 
1.8.3.1


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0006-pci-assign-buses-Suspend-resume-the-console-device-a.patch"

>From 020692241661be8c445bcd4087cf566e851ae3d5 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 21 Feb 2014 11:43:51 -0500
Subject: [PATCH 6/6] pci/assign-buses: Suspend/resume the console device and
 update bus.

When we suspend and resume the console devices we need the
proper bus number. Since we alter the bus numbers, we must
update the console drivers' view of them, otherwise the console
driver might reprogram the wrong device.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/char/ehci-dbgp.c  | 24 +++++++++++++++++++++++-
 xen/drivers/char/ns16550.c    | 37 +++++++++++++++++++++++++++++++++++++
 xen/drivers/char/serial.c     | 17 +++++++++++++++++
 xen/drivers/passthrough/pci.c | 10 +++++++++-
 xen/include/xen/serial.h      |  7 +++++++
 5 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/ehci-dbgp.c b/xen/drivers/char/ehci-dbgp.c
index b900d60..a85b62e 100644
--- a/xen/drivers/char/ehci-dbgp.c
+++ b/xen/drivers/char/ehci-dbgp.c
@@ -1437,7 +1437,27 @@ static void ehci_dbgp_resume(struct serial_port *port)
     ehci_dbgp_setup_preirq(dbgp);
     ehci_dbgp_setup_postirq(dbgp);
 }
+
+static int __init ehci_dbgp_is_owner(struct serial_port *port, u8 bus, u8 devfn)
+{
+    struct ehci_dbgp *dbgp = port->uart;
 
+    if ( dbgp->bus == bus && dbgp->slot == PCI_SLOT(devfn) &&
+         dbgp->func == PCI_FUNC(devfn) )
+        return 1;
+    return -ENODEV;
+}
+
+static int __init ehci_dbgp_update_bus(struct serial_port *port, u8 old_bus,
+                                       u8 devfn, u8 new_bus)
+{
+    struct ehci_dbgp *dbgp;
+
+    if ( ehci_dbgp_is_owner (port, old_bus, devfn) < 0 )
+        return -ENODEV;
+
+    dbgp = port->uart;
+    dbgp->bus = new_bus;
+    return 1;
+}
 static struct uart_driver __read_mostly ehci_dbgp_driver = {
     .init_preirq  = ehci_dbgp_init_preirq,
     .init_postirq = ehci_dbgp_init_postirq,
@@ -1447,7 +1467,9 @@ static struct uart_driver __read_mostly ehci_dbgp_driver = {
     .tx_ready     = ehci_dbgp_tx_ready,
     .putc         = ehci_dbgp_putc,
     .flush        = ehci_dbgp_flush,
-    .getc         = ehci_dbgp_getc
+    .getc         = ehci_dbgp_getc,
+    .is_owner     = ehci_dbgp_is_owner,
+    .update_bus   = ehci_dbgp_update_bus,
 };
 
 static struct ehci_dbgp ehci_dbgp = { .state = dbgp_unsafe, .phys_port = 1 };
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index e7cb0ba..8820fb9 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -462,7 +462,40 @@ static const struct vuart_info *ns16550_vuart_info(struct serial_port *port)
     return &uart->vuart;
 }
 #endif
+#ifdef HAS_PCI
+static int __init ns16550_is_owner(struct serial_port *port, u8 bus, u8 devfn)
+{
+    struct ns16550 *uart = port->uart;
+
+    if ( uart->ps_bdf_enable )
+    {
+        if ( (bus == uart->ps_bdf[0]) && (uart->ps_bdf[1] == PCI_SLOT(devfn)) &&
+             (uart->ps_bdf[2] == PCI_FUNC(devfn)) )
+            return 1;
+    }
+    if ( uart->pb_bdf_enable )
+    {
+        if ( (bus == uart->pb_bdf[0]) && (uart->pb_bdf[1] == PCI_SLOT(devfn)) &&
+             (uart->pb_bdf[2] == PCI_FUNC(devfn)) )
+            return 1;
+    }
+    return -ENODEV;
+}
+
+static int __init ns16550_update_bus(struct serial_port *port, u8 old_bus,
+                                     u8 devfn, u8 new_bus)
+{
+    struct ns16550 *uart;
 
+    if ( ns16550_is_owner(port, old_bus, devfn) < 0 )
+        return -ENODEV;
+    uart = port->uart;
+    if ( uart->ps_bdf_enable )
+        uart->ps_bdf[0] = new_bus;
+    if ( uart->pb_bdf_enable )
+        uart->pb_bdf[0] = new_bus;
+    return 1;
+}
+#endif
 static struct uart_driver __read_mostly ns16550_driver = {
     .init_preirq  = ns16550_init_preirq,
     .init_postirq = ns16550_init_postirq,
@@ -479,6 +512,10 @@ static struct uart_driver __read_mostly ns16550_driver = {
 #ifdef CONFIG_ARM
     .vuart_info   = ns16550_vuart_info,
 #endif
+#ifdef HAS_PCI
+    .is_owner     = ns16550_is_owner,
+    .update_bus   = ns16550_update_bus,
+#endif
 };
 
 static int __init parse_parity_char(int c)
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 9b006f2..c620352 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -518,6 +518,23 @@ const struct vuart_info *serial_vuart_info(int idx)
     return NULL;
 }
 
+int __init serial_is_owner(u8 bus, u8 devfn)
+{
+    int i;
+
+    /* Check every port: the first port's answer must not mask the rest. */
+    for ( i = 0; i < ARRAY_SIZE(com); i++ )
+        if ( com[i].driver && com[i].driver->is_owner &&
+             com[i].driver->is_owner(&com[i], bus, devfn) > 0 )
+            return 1;
+
+    return -ENODEV;
+}
+
+int __init serial_update_bus(u8 old_bus, u8 devfn, u8 new_bus)
+{
+    int i;
+
+    for ( i = 0; i < ARRAY_SIZE(com); i++ )
+        if ( com[i].driver && com[i].driver->update_bus &&
+             com[i].driver->update_bus(&com[i], old_bus, devfn, new_bus) > 0 )
+            return 1;
+
+    return 0;
+}
 void serial_suspend(void)
 {
     int i;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index ba852bd..e6d7316 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1000,6 +1000,7 @@ static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
 }
 
 /* Move this to its own file */
+#include <xen/serial.h>
 #define DEBUG 1
 static void parse_pci_param(char *s);
 custom_param("pci", parse_pci_param);
@@ -1588,7 +1589,14 @@ static void __init update_console_devices(struct early_pci_bus *parent)
     {
         if ( dev->is_ehci || dev->is_serial || dev->is_bridge )
         {
-            ;/* TODO */
+            int rc = 0;
+            if ( serial_is_owner(parent->old_number, dev->devfn) < 0 )
+                continue;
+            rc = serial_update_bus(parent->old_number, dev->devfn, parent->number);
+            if ( verbose )
+                printk("%02x:%02x.%u bus %x -> %x, rc=%d\n", parent->number,
+                       PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                       parent->old_number, parent->number, rc);
         }
     }
     list_for_each_entry( bus, &parent->children, next )
diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
index f38c9b7..08b68e0 100644
--- a/xen/include/xen/serial.h
+++ b/xen/include/xen/serial.h
@@ -85,6 +85,10 @@ struct uart_driver {
     const struct dt_irq *(*dt_irq_get)(struct serial_port *);
     /* Get serial information */
     const struct vuart_info *(*vuart_info)(struct serial_port *);
+    /* Check whether the given PCI bus/devfn matches this device. */
+    int (*is_owner)(struct serial_port *, u8, u8);
+    /* Update the bus number after renumbering; the devfn stays the same. */
+    int (*update_bus)(struct serial_port *, u8, u8, u8);
 };
 
 /* 'Serial handles' are composed from the following fields. */
@@ -141,6 +145,9 @@ const struct dt_irq *serial_dt_irq(int idx);
 /* Retrieve basic UART information to emulate it (base address, size...) */
 const struct vuart_info* serial_vuart_info(int idx);
 
+int serial_is_owner(u8 bus, u8 devfn);
+int serial_update_bus(u8 old_bus, u8 devfn, u8 bus);
+
 /* Serial suspend/resume. */
 void serial_suspend(void);
 void serial_resume(void);
-- 
1.8.3.1


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--Qxx1br4bt0+wmkIi--


From xen-devel-bounces@lists.xen.org Fri Feb 21 19:19:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 19:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGvcf-000503-0W; Fri, 21 Feb 2014 19:18:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGvcc-0004zy-Le
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 19:18:55 +0000
Received: from [85.158.139.211:9711] by server-13.bemta-5.messagelabs.com id
	7E/38-18801-D96A7035; Fri, 21 Feb 2014 19:18:53 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393010329!5500458!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20657 invoked from network); 21 Feb 2014 19:18:51 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 19:18:51 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LJIdZo024409
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 19:18:40 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJIb6G002066
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 19:18:39 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LJIbtM029433; Fri, 21 Feb 2014 19:18:37 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 11:18:37 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 3ADE51C293E; Fri, 21 Feb 2014 14:18:33 -0500 (EST)
Date: Fri, 21 Feb 2014 14:18:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140221191833.GA8812@phenom.dumpdata.com>
References: <52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
	<20140124215652.GA18710@phenom.dumpdata.com>
	<20140205200708.GA9278@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="Qxx1br4bt0+wmkIi"
Content-Disposition: inline
In-Reply-To: <20140205200708.GA9278@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4 Was:Re:
 [PATCH] x86/msi: Validate the guest-identified PCI devices in
 pci_prepare_msix()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

> But I am not sure if that is the right way of doing it. Anyhow there
> was another assumption made in which 'assign-busses' crippled Xen
> (see second attachment).
> 
> 3). Trap on PCI_SECONDARY_BUS and PCI_SUBORDINATE_BUS writes and
>     fixup the structures.
> 
>     I hadn't attempted that but that could also be done. That way Xen
>     is aware of those changes and can update its PCI structures.
> 

4). Make Xen do the bus re-assignment.

The attached patch is an interesting "solution" to the BIOS
not doing the right bus-extending with SR-IOV devices.

Paid good money for this motherboard and it has bugs <sigh>.
(To be fair, I also saw this issue on two other Intel
SandyBridge motherboards.)

Anyhow, with this patch I can finally use SR-IOV cards on this
motherboard; it basically does what Linux's 'assign-buses' does,
i.e. I no longer get this:

SR-IOV: bus number out of range.

It is all done during bootup, so it only runs at boot time.

It does not fix the issue if 'pci=assign-buses' is provided on the
Linux kernel command line, but it makes that parameter obsolete.
A next step would be to actually delete it from the Linux command
line, but that seems evil.

If there is interest in making this upstream in Xen I can do
that - but if we want to, I think we need to make Xen's
view of PCI devices similar to what this patch does. That is,
have a 'struct pci_bus' and 'struct pci_dev' and proper linking
between them. Otherwise it is quite hard to keep all of this
sane.

The patches have a serious case of #ifdefs and skanky code,
but ugh - they work for me.


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0005-xen-pci-assign-buses-Renumber-the-bus-if-there-is-a-.patch"

>From abf8a206a73bb037788b31b868102023c081d079 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Wed, 19 Feb 2014 17:16:01 -0500
Subject: [PATCH 5/6] xen/pci=assign-buses: Renumber the bus if there is a need
 to.

Xen can re-number the PCI buses if there are SR-IOV devices there
and the BIOS has not done its job.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/arch/x86/setup.c          |   2 +
 xen/drivers/passthrough/pci.c | 689 ++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/pci.h         |   1 +
 3 files changed, 692 insertions(+)

diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..0c2f9ba 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1294,6 +1294,8 @@ void __init __start_xen(unsigned long mbi_p)
 
     acpi_mmcfg_init();
 
+    early_pci_reassign_busses();
+
     early_msi_init();
 
     iommu_setup();    /* setup iommu if available */
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 0e59216..ba852bd 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -999,6 +999,695 @@ static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
     return 0;
 }
 
+/* Move this to its own file */
+#define DEBUG 1
+static void parse_pci_param(char *s);
+custom_param("pci", parse_pci_param);
+
+struct early_pci_bus;
+
+struct early_pci_dev {
+    struct list_head bus_list;  /* Linked on the bus's 'devices' list */
+    unsigned int is_serial:1;
+    unsigned int is_ehci:1;
+    unsigned int is_sriov:1;
+    unsigned int is_bridge:1;
+    u16 vendor;
+    u16 device;
+    u8 devfn;
+    u16 total_vfs;
+    u16 revision;
+    u16 class;
+    struct early_pci_bus *bus; /* On what bus we are. */
+    struct early_pci_bus *bridge; /* Ourselves if we are a bridge */
+};
+struct early_pci_bus {
+    struct list_head next;
+    struct list_head devices;
+    struct list_head children;
+    struct early_pci_bus *parent; /* Bus upstream of us. */
+    struct early_pci_dev *self; /* The PCI device that controls this bus. */
+    u8 primary; /* The (parent) bus number */
+    u8 number;
+    u8 start;
+    u8 end;
+    u8 new_end; /* To be updated too */
+    u8 new_start;
+    u8 new_primary;
+    u8 old_number;
+};
+
+static unsigned int __initdata assign_busses;
+static struct list_head __initdata early_buses_list;
+static int __initdata verbose;
+
+#define PCI_CLASS_SERIAL_USB_EHCI 0x0c0320
+#if 0
+static __init void print_pci_dev(const char *prefix, u8 bus, u8 devfn)
+{
+    u32 class, id;
+
+    class = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                            PCI_CLASS_REVISION);
+    id = pci_conf_read32(0, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                         PCI_VENDOR_ID);
+    printk("%04x:%02x.%u [%04x:%04x] class %06x [%s]\n", bus, PCI_SLOT(devfn),
+           PCI_FUNC(devfn), id & 0xffff, (id >> 16) & 0xffff, class, prefix);
+}
+#endif
+static __init struct early_pci_dev *early_alloc_pci_dev(struct early_pci_bus *bus,
+                                                        u8 devfn)
+{
+    struct early_pci_dev *dev;
+    u8 type;
+    u16 class_dev, total;
+    u32 class, id;
+    unsigned int pos;
+
+    if ( !bus )
+        return NULL;
+
+    dev = xzalloc(struct early_pci_dev);
+    if ( !dev )
+        return NULL;
+
+    INIT_LIST_HEAD(&dev->bus_list);
+    dev->devfn = devfn;
+    dev->bus = bus;
+    class = pci_conf_read32(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                            PCI_CLASS_REVISION);
+
+    dev->revision = class & 0xff;
+    dev->class = class >> 8;
+    if ( dev->class == PCI_CLASS_SERIAL_USB_EHCI )
+        dev->is_ehci = 1;
+
+    class_dev = pci_conf_read16(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                                PCI_CLASS_DEVICE);
+    switch ( class_dev )
+    {
+        case 0x0700: /* single port serial */
+        case 0x0702: /* multi port serial */
+        case 0x0780: /* other (e.g. serial+parallel) */
+            dev->is_serial = 1;
+        default:
+            break;
+    }
+    type = pci_conf_read8(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                          PCI_HEADER_TYPE);
+    switch ( type & 0x7f )
+    {
+        case PCI_HEADER_TYPE_BRIDGE:
+        case PCI_HEADER_TYPE_CARDBUS:
+            dev->is_bridge = 1;
+            break;
+        case PCI_HEADER_TYPE_NORMAL:
+            pos = pci_find_cap_offset(0, bus->number, PCI_SLOT(devfn),
+                                      PCI_FUNC(devfn), PCI_CAP_ID_EXP);
+            if (!pos)   /* Not PCIe */
+                break;
+            pos = pci_find_ext_capability(0, bus->number, devfn,
+                                          PCI_EXT_CAP_ID_SRIOV);
+            if (!pos)   /* Not SR-IOV */
+                break;
+            total = pci_conf_read16(0, bus->number, PCI_SLOT(devfn),
+                                    PCI_FUNC(devfn), pos + PCI_SRIOV_TOTAL_VF);
+            if (!total)
+                break;
+            dev->is_sriov = 1;
+            dev->total_vfs = total;
+            /* Fall through */
+        default:
+            break;
+    }
+    id = pci_conf_read32(0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                         PCI_VENDOR_ID);
+    dev->vendor = id & 0xffff;
+    dev->device = (id >> 16) & 0xffff;
+    /* In case MCFG is not configured we have our blacklist */
+    switch ( dev->vendor )
+    {
+        case 0x8086: /* Intel */
+            switch ( dev->device )
+            {
+                case 0x10c9: /* Intel Corporation 82576 Gigabit Network Connection (rev 01) */
+                    if ( dev->is_sriov )
+                        break;
+                    dev->is_sriov = 1;
+                    dev->total_vfs = 8;
+            }
+        default:
+            break;
+    }
+    return dev;
+}
+
+static __init struct early_pci_bus *__find_bus(struct early_pci_bus *parent,
+                                               u8 nr)
+{
+    struct early_pci_bus *child, *bus;
+
+    if ( parent->number == nr )
+        return parent;
+
+    list_for_each_entry ( child, &parent->children, next )
+    {
+        if ( child->number == nr )
+            return child;
+        bus = __find_bus(child, nr);
+        if ( bus )
+            return bus;
+    }
+    return NULL;
+}
+
+static __init struct early_pci_bus *find_bus(u8 nr)
+{
+    struct early_pci_bus *bus, *child;
+
+    list_for_each_entry ( bus, &early_buses_list, next )
+    {
+       child = __find_bus(bus, nr);
+       if ( child )
+            return child;
+    }
+    return NULL;
+}
+
+static __init struct early_pci_dev *find_dev(u8 nr, u8 devfn)
+{
+    struct early_pci_bus *bus = NULL;
+
+    bus = find_bus(nr);
+    if ( bus ) {
+        struct early_pci_dev *dev = NULL;
+
+        list_for_each_entry ( dev, &bus->devices, bus_list )
+            if ( dev->devfn == devfn )
+                return dev;
+    }
+    return NULL;
+}
+
+static __init struct early_pci_bus *early_alloc_pci_bus(struct early_pci_dev *dev, u8 nr)
+{
+    struct early_pci_bus *bus;
+
+    bus = xzalloc(struct early_pci_bus);
+    if ( !bus )
+        return NULL;
+
+    INIT_LIST_HEAD(&bus->next);
+    INIT_LIST_HEAD(&bus->devices);
+    INIT_LIST_HEAD(&bus->children);
+    bus->number = nr;
+    bus->old_number = nr;
+    bus->self = dev;
+    if ( dev )
+        if ( !dev->bridge )
+            dev->bridge = bus;
+    return bus;
+}
+
+static void __init early_free_pci_bus(struct early_pci_bus *bus)
+{
+    struct early_pci_dev *dev, *d_tmp;
+    struct early_pci_bus *b, *b_tmp;
+
+    list_for_each_entry_safe ( b, b_tmp, &bus->children, next )
+    {
+        early_free_pci_bus (b);
+        list_del ( &b->next );
+    }
+    list_for_each_entry_safe ( dev, d_tmp, &bus->devices, bus_list )
+    {
+        list_del ( &dev->bus_list );
+        xfree ( dev );
+    }
+}
+
+static void __init early_free_all(void)
+{
+    struct early_pci_bus *bus, *tmp;
+
+    list_for_each_entry_safe( bus, tmp, &early_buses_list, next )
+    {
+        early_free_pci_bus (bus);
+        list_del( &bus->next );
+        xfree(bus);
+    }
+}
+
+unsigned int __init pci_iov_scan(struct early_pci_bus *bus)
+{
+    struct early_pci_dev *dev;
+    unsigned int max = 0;
+    u8 busnr;
+
+    list_for_each_entry ( dev, &bus->devices, bus_list )
+    {
+        if ( !dev->is_sriov )
+            continue;
+        if ( !dev->total_vfs )
+            continue;
+        busnr = (dev->total_vfs) / 8; /* How many buses we will need */
+        if ( busnr > max )
+            max = busnr;
+    }
+    /* Do we have enough space for them ? */
+    if ( (bus->end - bus->start) >= max )
+        return 0;
+    return max;
+}
+
+#ifdef DEBUG
+static __init const char *spaces(unsigned int lvl)
+{
+    if (lvl == 0)
+        return " ";
+    if (lvl == 1)
+        return " +--+";
+    if (lvl == 2)
+        return "    +-+";
+    if (lvl == 3)
+        return "       +-+";
+    return "         +...+";
+}
+
+static void __init print_devs(struct early_pci_bus *parent, int lvl)
+{
+    struct early_pci_dev *dev;
+    struct early_pci_bus *bus;
+
+    list_for_each_entry( dev, &parent->devices, bus_list )
+    {
+        printk("%s%04x:%02x:%u [%04x:%04x] class %06x", spaces(lvl), parent->number,
+               PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), dev->vendor,
+               dev->device, dev->class);
+        if ( dev->is_bridge )
+        {
+            printk(" BRIDGE");
+            if ( dev->bridge )
+            {
+                struct early_pci_bus *bridge = dev->bridge;
+                printk(" to BUS %x [spans %x->%x] up BUS %x", bridge->number, bridge->start, bridge->end, bridge->primary);
+                printk(" (up: %x spans %x->%x)", bridge->new_primary, bridge->new_start, bridge->new_end);
+            }
+        }
+        if ( dev->is_sriov )
+            printk(" sriov: %d", dev->total_vfs);
+        if ( dev->is_ehci )
+            printk (" EHCI DEBUG ");
+        if ( dev->is_serial )
+            printk (" SERIAL ");
+        printk("\n");
+    }
+    list_for_each_entry( bus, &parent->children, next )
+        print_devs(bus, lvl + 1);
+}
+#endif
+
+static void __init print_devices(void)
+{
+#ifdef DEBUG
+    struct early_pci_bus *bus;
+
+    if ( !verbose )
+        return;
+
+    list_for_each_entry( bus, &early_buses_list, next )
+        print_devs(bus, 0);
+#endif
+}
+
+unsigned int pci_scan_bus( struct early_pci_bus *bus);
+unsigned int __init pci_scan_slot(struct early_pci_bus *bus, unsigned int devfn)
+{
+    struct early_pci_dev *dev;
+
+    if ( find_dev(bus->number, devfn) )
+        return 0;
+
+    if ( !pci_device_detect (0, bus->number, PCI_SLOT(devfn), PCI_FUNC(devfn)) )
+        return 0;
+
+    dev = early_alloc_pci_dev(bus, devfn);
+    if ( !dev )
+        return -ENODEV;
+
+    list_add_tail(&dev->bus_list, &bus->devices);
+    return 0;
+}
+
+static int __init pci_scan_bridge(struct early_pci_bus *bus,
+                                  struct early_pci_dev *dev,
+                                  unsigned int max)
+{
+    struct early_pci_bus *child;
+    u32 buses;
+    u8 primary, secondary, subordinate;
+    unsigned int cmax = 0;
+
+    buses = pci_conf_read32(0, bus->number, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                            PCI_PRIMARY_BUS);
+
+    primary = buses & 0xFF;
+    secondary = (buses >> 8) & 0xFF;
+    subordinate = (buses >> 16) & 0xFF;
+
+    if (!primary && (primary != bus->number) && secondary && subordinate) {
+        printk("Primary bus is hard wired to 0\n");
+        primary = bus->number;
+    }
+
+    child = find_bus(secondary);
+    if ( !child )
+    {
+        child = early_alloc_pci_bus(dev, secondary);
+        if ( !child )
+            goto out;
+        /* Add to the parent's bus list */
+        list_add_tail(&child->next, &bus->children);
+        /* The primary is the upstream bus number. */
+        child->primary = primary;
+        child->start = secondary;
+        child->end = subordinate;
+        child->parent = bus;
+    }
+    cmax = pci_scan_bus(child);
+    if ( cmax > max )
+        max = cmax;
+
+    if ( child->end > max )
+        max = child->end;
+out:
+    return max;
+}
+
+unsigned int __init pci_scan_bus( struct early_pci_bus *bus)
+{
+    unsigned int max = 0, devfn;
+    struct early_pci_dev *dev;
+
+    for ( devfn = 0; devfn < 0x100; devfn++ )
+        pci_scan_slot (bus, devfn);
+
+    /* Walk all devices and create the bus structs */
+    list_for_each_entry ( dev, &bus->devices, bus_list )
+    {
+        if ( !dev->is_bridge )
+            continue;
+        if ( verbose )
+            printk("Scanning bridge %04x:%02x.%u [%04x:%04x] class %06x\n", bus->number,
+                   PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), dev->vendor, dev->device,
+                   dev->class);
+        max = pci_scan_bridge(bus, dev, max);
+    }
+    if ( max > bus->end )
+        bus->end = max;
+    return max;
+}
+
+static __init unsigned int adjust_span(struct early_pci_bus *bus,
+                                       unsigned int offset,
+                                       unsigned int adjust_start)
+{
+    struct early_pci_bus *child = NULL, *parent;
+    unsigned int scan;
+
+    scan = pci_iov_scan(bus);
+    offset += scan;
+
+    list_for_each_entry( child, &bus->children, next )
+    {
+        unsigned int new_offset;
+
+        new_offset = adjust_span(child , offset, adjust_start);
+
+        if ( new_offset > offset ) {
+            /* A new contender ! */
+            offset = new_offset;
+            /* If we didn't find any IOV devices then we must adjust the
+             * start for all our children from this point on? */
+            adjust_start = 1;
+        }
+    }
+    bus->new_start = bus->start;
+    bus->new_end = bus->end + offset;
+
+    /* Do not update our new_start if we were the one that discovered it. */
+    if ( scan )
+        adjust_start = 0;
+
+    /* We can't check against scan as the loop might have altered it. */
+    /* N.B. Ignore host bridges. */
+    parent = bus->parent;
+    if ( adjust_start && parent )
+        bus->new_start += offset;
+
+    return offset;
+}
+static __init void adjust_primary(struct early_pci_bus *bus,
+                                  unsigned int offset,
+                                  unsigned int adjust_start)
+{
+    struct early_pci_bus *child;
+
+    list_for_each_entry( child, &bus->children, next )
+    {
+        child->new_primary = bus->new_start;
+        adjust_primary(child, offset, adjust_start);
+
+    }
+}
+
+static void __init pci_disable_forwarding(struct early_pci_bus *parent)
+{
+    struct early_pci_dev *dev;
+    u32 buses;
+
+    list_for_each_entry ( dev, &parent->devices, bus_list )
+    {
+        u8 bus;
+        u16 bctl;
+
+        if ( !dev->is_bridge )
+            continue;
+
+        bus = dev->bus->number;
+        buses = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn),
+                            PCI_FUNC(dev->devfn), PCI_PRIMARY_BUS);
+        if ( verbose )
+            printk("%04x:%02x.%u PCI_PRIMARY_BUS read %x\n", bus,
+                   PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), buses);
+        /* Lifted from Linux but not sure if this MasterAbort masking is
+         * still needed. */
+
+        bctl = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                               PCI_BRIDGE_CONTROL);
+
+        pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                         PCI_BRIDGE_CONTROL, bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+        /* Disable forwarding */
+        pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                         PCI_PRIMARY_BUS, buses &  ~0xffffff);
+
+        pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                         PCI_BRIDGE_CONTROL, bctl);
+    }
+}
+
+static void __init __pci_program_bridge(struct early_pci_dev *dev,
+                                        struct early_pci_bus *parent)
+{
+    u16 bctl;
+    u32 buses;
+    u8 bus;
+    struct early_pci_bus *child, *bridges;
+
+    u8 primary, secondary, subordinate;
+
+    bus = parent->number; /* Upstream number . */
+    child = dev->bridge; /* The bridge we are serving. */
+
+    ASSERT( child );
+
+    buses = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn),
+                            PCI_FUNC(dev->devfn), PCI_PRIMARY_BUS);
+    if ( verbose )
+        printk("%04x:%02x.%u PCI_PRIMARY_BUS read %x\n", bus,
+               PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), buses);
+
+    /* Lifted from Linux but not sure if this MasterAbort masking is
+     * still needed. */
+    bctl = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                           PCI_BRIDGE_CONTROL);
+    pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_BRIDGE_CONTROL, bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+    pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_STATUS, 0xffff);
+
+    buses = (buses & 0xff000000)
+                | ((unsigned int)(child->new_primary)     <<  0)
+                | ((unsigned int)(child->new_start)   <<  8)
+                | ((unsigned int)(child->new_end) << 16);
+    if ( verbose )
+        printk("%04x:%02x.%u wrote to PCI_PRIMARY_BUS %x\n",  bus, PCI_SLOT(dev->devfn),
+               PCI_FUNC(dev->devfn), buses);
+
+    pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_PRIMARY_BUS, buses);
+
+    /* Double check that it is correct. */
+    buses = pci_conf_read32(0, bus, PCI_SLOT(dev->devfn),
+                            PCI_FUNC(dev->devfn), PCI_PRIMARY_BUS);
+    if ( verbose )
+        printk("%04x:%02x.%u PCI_PRIMARY_BUS read %x\n", bus,
+               PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn), buses);
+
+    primary = buses & 0xFF;
+    secondary = (buses >> 8) & 0xFF;
+    subordinate = (buses >> 16) & 0xFF;
+
+    ASSERT(primary == child->new_primary);
+    ASSERT(secondary == child->new_start);
+    ASSERT(subordinate == child->new_end);
+
+    pci_conf_write32(0, bus, PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                     PCI_BRIDGE_CONTROL, bctl);
+
+    child->number = child->new_start;
+    child->primary = child->new_primary;
+    child->start = child->new_start;
+    child->end = child->new_end;
+
+    list_for_each_entry ( bridges, &child->children, next )
+        if (bridges->self)
+            __pci_program_bridge(bridges->self, bridges);
+}
+
+static void __init pci_program_bridge(struct early_pci_bus *bus)
+{
+    struct early_pci_dev *dev;
+
+    list_for_each_entry ( dev, &bus->devices, bus_list )
+    {
+        if ( !dev->is_bridge )
+            continue;
+        __pci_program_bridge(dev, bus);
+    }
+}
+static void __init update_console_devices(struct early_pci_bus *parent)
+{
+    struct early_pci_dev *dev;
+    struct early_pci_bus *bus;
+
+    list_for_each_entry( dev, &parent->devices, bus_list )
+    {
+        if ( dev->is_ehci || dev->is_serial || dev->is_bridge )
+        {
+            ;/* TODO */
+        }
+    }
+    list_for_each_entry( bus, &parent->children, next )
+        update_console_devices(bus);
+}
+static void __init parse_pci_param(char *s)
+{
+    char *ss;
+
+    do {
+        ss = strchr(s, ',');
+        if ( ss )
+            *ss = '\0';
+
+        if ( !strcmp(s, "assign-buses") )
+            assign_busses = 1;
+        else if ( !strcmp(s, "verbose") )
+            verbose = 1;
+        s = ss + 1;
+    } while ( ss );
+}
+
+void __init early_pci_reassign_busses(void)
+{
+    unsigned int nr;
+    struct early_pci_bus *bus;
+    unsigned int max = 0, adjust = 0, last_end;
+
+    if ( !assign_busses )
+        return;
+
+    INIT_LIST_HEAD(&early_buses_list);
+    for ( nr = 0; nr < 256; nr++ )
+    {
+        if ( !pci_device_detect (0, nr, 0, 0) )
+            continue;
+        if ( find_bus(nr) )
+            continue;
+        /* Host bridges do not have any parent devices ! */
+        bus = early_alloc_pci_bus(NULL, nr);
+        if ( !bus )
+            goto out;
+        bus->start = nr;
+        bus->primary = 0;   /* Points to host, which is zero */
+        max = pci_scan_bus(bus);
+        list_add_tail(&bus->next, &early_buses_list);
+    }
+    /* Walk all the devices, figure out what will be the _new_
+     * max if any. */
+    last_end = 0;
+    list_for_each_entry( bus, &early_buses_list, next )
+    {
+        unsigned int offset;
+        /* Oh no, the previous end bus number overlaps! */
+        if ( last_end > bus->start )
+        {
+            bus->new_start = last_end;
+            bus->new_end = bus->new_end + last_end;
+        }
+        last_end = bus->end;
+        offset = adjust_span(bus, 0 /* no offset ! */, adjust);
+        if (offset > adjust) {
+            adjust = offset;
+            last_end = bus->new_end;
+        }
+        adjust_primary(bus, 0, 0);
+    }
+
+    print_devices();
+    if ( !adjust )
+    {
+        printk("No need to reassign busses.\n");
+        goto out;
+    }
+    printk("Re-assigning busses to make space for %d bus numbers.\n", adjust);
+
+    /* Walk all the devices, disable serial and ehci */
+    if ( !verbose)
+        serial_suspend();
+
+    /* Walk all the bridges, disable forwarding */
+    list_for_each_entry( bus, &early_buses_list, next )
+        pci_disable_forwarding(bus);
+
+    /* Walk all bridges, reprogram with the new primary, secondary and
+     * subordinate numbers. */
+    list_for_each_entry( bus, &early_buses_list, next )
+        pci_program_bridge(bus);
+
+    /* Walk all devices, re-enable serial, ehci with new bus number */
+    list_for_each_entry( bus, &early_buses_list, next )
+        update_console_devices(bus);
+
+    if ( !verbose )
+        serial_resume();
+    print_devices();
+out:
+    early_free_all();
+}
+
 void __init setup_dom0_pci_devices(
     struct domain *d, int (*handler)(u8 devfn, struct pci_dev *))
 {
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index b883c28..1750196 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -102,6 +102,7 @@ struct pci_dev *pci_lock_domain_pdev(
 
 void setup_dom0_pci_devices(struct domain *,
                             int (*)(u8 devfn, struct pci_dev *));
+void early_pci_reassign_busses(void);
 void pci_release_devices(struct domain *d);
 int pci_add_segment(u16 seg);
 const unsigned long *pci_get_ro_map(u16 seg);
-- 
1.8.3.1


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="0006-pci-assign-buses-Suspend-resume-the-console-device-a.patch"

>From 020692241661be8c445bcd4087cf566e851ae3d5 Mon Sep 17 00:00:00 2001
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Date: Fri, 21 Feb 2014 11:43:51 -0500
Subject: [PATCH 6/6] pci/assign-buses: Suspend/resume the console device and
 update bus.

When we suspend and resume the console devices we need the
proper bus number. Since we alter the bus numbers, we need
to update them as well; otherwise the console driver might
reprogram the wrong device.

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 xen/drivers/char/ehci-dbgp.c  | 24 +++++++++++++++++++++++-
 xen/drivers/char/ns16550.c    | 37 +++++++++++++++++++++++++++++++++++++
 xen/drivers/char/serial.c     | 17 +++++++++++++++++
 xen/drivers/passthrough/pci.c | 10 +++++++++-
 xen/include/xen/serial.h      |  7 +++++++
 5 files changed, 93 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/ehci-dbgp.c b/xen/drivers/char/ehci-dbgp.c
index b900d60..a85b62e 100644
--- a/xen/drivers/char/ehci-dbgp.c
+++ b/xen/drivers/char/ehci-dbgp.c
@@ -1437,7 +1437,27 @@ static void ehci_dbgp_resume(struct serial_port *port)
     ehci_dbgp_setup_preirq(dbgp);
     ehci_dbgp_setup_postirq(dbgp);
 }
+static int __init ehci_dbgp_is_owner(struct serial_port *port, u8 bus, u8 devfn)
+{
+    struct ehci_dbgp *dbgp = port->uart;
 
+    if ( dbgp->bus == bus && dbgp->slot == PCI_SLOT(devfn) &&
+        dbgp->func == PCI_FUNC(devfn))
+        return 1;
+    return -ENODEV;
+}
+static int __init ehci_dbgp_update_bus(struct serial_port *port, u8 old_bus,
+                                       u8 devfn, u8 new_bus)
+{
+    struct ehci_dbgp *dbgp;
+
+    if ( ehci_dbgp_is_owner (port, old_bus, devfn) < 0 )
+        return -ENODEV;
+
+    dbgp = port->uart;
+    dbgp->bus = new_bus;
+    return 1;
+}
 static struct uart_driver __read_mostly ehci_dbgp_driver = {
     .init_preirq  = ehci_dbgp_init_preirq,
     .init_postirq = ehci_dbgp_init_postirq,
@@ -1447,7 +1467,9 @@ static struct uart_driver __read_mostly ehci_dbgp_driver = {
     .tx_ready     = ehci_dbgp_tx_ready,
     .putc         = ehci_dbgp_putc,
     .flush        = ehci_dbgp_flush,
-    .getc         = ehci_dbgp_getc
+    .getc         = ehci_dbgp_getc,
+    .is_owner     = ehci_dbgp_is_owner,
+    .update_bus   = ehci_dbgp_update_bus,
 };
 
 static struct ehci_dbgp ehci_dbgp = { .state = dbgp_unsafe, .phys_port = 1 };
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index e7cb0ba..8820fb9 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -462,7 +462,40 @@ static const struct vuart_info *ns16550_vuart_info(struct serial_port *port)
     return &uart->vuart;
 }
 #endif
+#ifdef HAS_PCI
+static int __init ns16550_is_owner(struct serial_port *port, u8 bus, u8 devfn)
+{
+    struct ns16550 *uart = port->uart;
+
+    if ( uart->ps_bdf_enable )
+    {
+        if ( (bus == uart->ps_bdf[0]) && (uart->ps_bdf[1] == PCI_SLOT(devfn)) &&
+             (uart->ps_bdf[2] == PCI_FUNC(devfn)) )
+            return 1;
+    }
+    if ( uart->pb_bdf_enable )
+    {
+        if ( (bus == uart->pb_bdf[0]) && (uart->pb_bdf[1] == PCI_SLOT(devfn)) &&
+             (uart->pb_bdf[2] == PCI_FUNC(devfn)) )
+            return 1;
+    }
+    return -ENODEV;
+}
+static int __init ns16550_update_bus(struct serial_port *port, u8 old_bus,
+                                      u8 devfn, u8 new_bus)
+{
+    struct ns16550 *uart;
 
+    if ( ns16550_is_owner(port, old_bus, devfn ) < 0 )
+        return -ENODEV;
+    uart = port->uart;
+    if ( uart->ps_bdf_enable )
+        uart->ps_bdf[0]= new_bus;
+    if ( uart->pb_bdf_enable )
+        uart->pb_bdf[0] = new_bus;
+    return 1;
+}
+#endif
 static struct uart_driver __read_mostly ns16550_driver = {
     .init_preirq  = ns16550_init_preirq,
     .init_postirq = ns16550_init_postirq,
@@ -479,6 +512,10 @@ static struct uart_driver __read_mostly ns16550_driver = {
 #ifdef CONFIG_ARM
     .vuart_info   = ns16550_vuart_info,
 #endif
+#ifdef HAS_PCI
+    .is_owner     = ns16550_is_owner,
+    .update_bus   = ns16550_update_bus,
+#endif
 };
 
 static int __init parse_parity_char(int c)
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index 9b006f2..c620352 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -518,6 +518,23 @@ const struct vuart_info *serial_vuart_info(int idx)
     return NULL;
 }
 
+int __init serial_is_owner(u8 bus, u8 devfn)
+{
+    int i;
+    for ( i = 0; i < ARRAY_SIZE(com); i++ )
+        if ( com[i].driver->is_owner )
+            return com[i].driver->is_owner(&com[i], bus, devfn);
+
+    return 0;
+}
+int __init serial_update_bus(u8 old_bus, u8 devfn, u8 new_bus)
+{
+    int i;
+    for ( i = 0; i < ARRAY_SIZE(com); i++ )
+        if ( com[i].driver->update_bus )
+            return com[i].driver->update_bus(&com[i], old_bus, devfn, new_bus);
+    return 0;
+}
 void serial_suspend(void)
 {
     int i;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index ba852bd..e6d7316 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1000,6 +1000,7 @@ static int __init _setup_dom0_pci_devices(struct pci_seg *pseg, void *arg)
 }
 
 /* Move this to its own file */
+#include <xen/serial.h>
 #define DEBUG 1
 static void parse_pci_param(char *s);
 custom_param("pci", parse_pci_param);
@@ -1588,7 +1589,14 @@ static void __init update_console_devices(struct early_pci_bus *parent)
     {
         if ( dev->is_ehci || dev->is_serial || dev->is_bridge )
         {
-            ;/* TODO */
+            int rc = 0;
+            if ( serial_is_owner(parent->old_number , dev->devfn ) < 0 )
+                continue;
+            rc = serial_update_bus(parent->old_number, dev->devfn, parent->number);
+            if ( verbose )
+                printk("%02x:%02x.%u bus %x -> %x, rc=%d\n", parent->number,
+                       PCI_SLOT(dev->devfn), PCI_FUNC(dev->devfn),
+                       parent->old_number, parent->number, rc);
         }
     }
     list_for_each_entry( bus, &parent->children, next )
diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
index f38c9b7..08b68e0 100644
--- a/xen/include/xen/serial.h
+++ b/xen/include/xen/serial.h
@@ -85,6 +85,10 @@ struct uart_driver {
     const struct dt_irq *(*dt_irq_get)(struct serial_port *);
     /* Get serial information */
     const struct vuart_info *(*vuart_info)(struct serial_port *);
+    /* Check if the BDF matches this device */
+    int (*is_owner)(struct serial_port *, u8 , u8);
+    /* Update its BDF due to bus number changing. devfn still same. */
+    int (*update_bus)(struct serial_port *, u8, u8, u8);
 };
 
 /* 'Serial handles' are composed from the following fields. */
@@ -141,6 +145,9 @@ const struct dt_irq *serial_dt_irq(int idx);
 /* Retrieve basic UART information to emulate it (base address, size...) */
 const struct vuart_info* serial_vuart_info(int idx);
 
+int serial_is_owner(u8 bus, u8 devfn);
+int serial_update_bus(u8 old_bus, u8 devfn, u8 bus);
+
 /* Serial suspend/resume. */
 void serial_suspend(void);
 void serial_resume(void);
-- 
1.8.3.1


--Qxx1br4bt0+wmkIi
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--Qxx1br4bt0+wmkIi--


From xen-devel-bounces@lists.xen.org Fri Feb 21 20:14:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 20:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGwUK-0005gg-PX; Fri, 21 Feb 2014 20:14:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WGwUI-0005gb-UU
	for xen-devel@lists.xensource.com; Fri, 21 Feb 2014 20:14:23 +0000
Received: from [85.158.139.211:43863] by server-16.bemta-5.messagelabs.com id
	09/38-05060-E93B7035; Fri, 21 Feb 2014 20:14:22 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393013659!5528407!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14073 invoked from network); 21 Feb 2014 20:14:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 20:14:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="103107332"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 21 Feb 2014 20:14:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 15:14:02 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WGwTx-0001v0-Qp;
	Fri, 21 Feb 2014 20:14:01 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WGwTx-0006Zy-QC;
	Fri, 21 Feb 2014 20:14:01 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25248-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 21 Feb 2014 20:14:01 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25248: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25248 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25248/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    4 xen-build                 fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass

version targeted for testing:
 linux                d158fc7f36a25e19791d25a55da5623399a2644f
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7060 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2387813 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 20:59:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 20:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGxBx-0006G6-7o; Fri, 21 Feb 2014 20:59:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WGxBu-0006G1-VP
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 20:59:27 +0000
Received: from [85.158.137.68:39330] by server-3.bemta-3.messagelabs.com id
	C6/F7-14520-D2EB7035; Fri, 21 Feb 2014 20:59:25 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393016363!3451254!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11133 invoked from network); 21 Feb 2014 20:59:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 20:59:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104802692"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 20:59:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 15:59:22 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WGwuu-0003dk-Uf; Fri, 21 Feb 2014 20:41:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 21 Feb 2014 20:41:51 +0000
Message-ID: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is some coverity-inspired tidying.

Coverity has some grief analysing the call sites of atomic_read().  This is
believed to be a bug in Coverity itself when expanding the nested macros, but
there is no legitimate reason for it to be a macro in the first place.

This patch changes {,_}atomic_{read,set}() from being macros to being static
inline functions, thus gaining some type safety.

One issue which is not immediately obvious is that the non-atomic variants take
their atomic_t at a different level of indirection to the atomic variants.

This is not suitable for _atomic_set() (when used to initialise an atomic_t)
which is converted to take its parameter as a pointer.  One callsite of
_atomic_set() is updated, while the other two callsites are updated to
ATOMIC_INIT().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
CC: Tim Deegan <tim@xen.org>

---

This is compile-tested on arm32 and 64, and functionally tested on x86_64
---
 xen/common/domain.c          |    5 ++---
 xen/include/asm-arm/atomic.h |   22 +++++++++++++++++----
 xen/include/asm-x86/atomic.h |   43 +++++++++++++++++++++++++++++++++++-------
 xen/include/xen/sched.h      |    2 +-
 4 files changed, 57 insertions(+), 15 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 2636fc9..a1483e4 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -759,13 +759,12 @@ static void complete_domain_destroy(struct rcu_head *head)
 void domain_destroy(struct domain *d)
 {
     struct domain **pd;
-    atomic_t      old, new;
+    atomic_t old = ATOMIC_INIT(0);
+    atomic_t new = ATOMIC_INIT(DOMAIN_DESTROYED);
 
     BUG_ON(!d->is_dying);
 
     /* May be already destroyed, or get_domain() can race us. */
-    _atomic_set(old, 0);
-    _atomic_set(new, DOMAIN_DESTROYED);
     old = atomic_compareandswap(old, new, &d->refcnt);
     if ( _atomic_read(old) != 0 )
         return;
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index 69c8f3f..14eacd0 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -83,11 +83,25 @@ typedef struct { int counter; } atomic_t;
  * strex/ldrex monitor on some implementations. The reason we can use it for
  * atomic_set() is the clrex or dummy strex done on every exception return.
  */
-#define _atomic_read(v) ((v).counter)
-#define atomic_read(v)  (*(volatile int *)&(v)->counter)
+static inline int atomic_read(atomic_t *v)
+{
+    return *(volatile int *)&(v)->counter;
+}
+
+static inline int _atomic_read(atomic_t v)
+{
+    return v.counter;
+}
 
-#define _atomic_set(v,i) (((v).counter) = (i))
-#define atomic_set(v,i) (((v)->counter) = (i))
+static inline void atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
+
+static inline void _atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
 
 #if defined(CONFIG_ARM_32)
 # include <asm/arm32/atomic.h>
diff --git a/xen/include/asm-x86/atomic.h b/xen/include/asm-x86/atomic.h
index e476ab5..8972463 100644
--- a/xen/include/asm-x86/atomic.h
+++ b/xen/include/asm-x86/atomic.h
@@ -66,21 +66,50 @@ typedef struct { int counter; } atomic_t;
 /**
  * atomic_read - read atomic variable
  * @v: pointer of type atomic_t
- * 
+ *
  * Atomically reads the value of @v.
  */
-#define _atomic_read(v)  ((v).counter)
-#define atomic_read(v)   read_atomic(&((v)->counter))
+static inline int atomic_read(atomic_t *v)
+{
+    return read_atomic(&v->counter);
+}
+
+/**
+ * _atomic_read - read atomic variable non-atomically
+ * @v atomic_t
+ *
+ * Non-atomically reads the value of @v
+ */
+static inline int _atomic_read(atomic_t v)
+{
+    return v.counter;
+}
+
 
 /**
  * atomic_set - set atomic variable
  * @v: pointer of type atomic_t
  * @i: required value
- * 
+ *
  * Atomically sets the value of @v to @i.
- */ 
-#define _atomic_set(v,i) (((v).counter) = (i))
-#define atomic_set(v,i)  write_atomic(&((v)->counter), (i))
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+    write_atomic(&v->counter, i);
+}
+
+/**
+ * _atomic_set - set atomic variable non-atomically
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Non-atomically sets the value of @v to @i.
+ */
+static inline void _atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
+
 
 /**
  * atomic_add - add integer to atomic variable
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..e4b5cae 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -468,7 +468,7 @@ static always_inline int get_domain(struct domain *d)
         old = seen;
         if ( unlikely(_atomic_read(old) & DOMAIN_DESTROYED) )
             return 0;
-        _atomic_set(new, _atomic_read(old) + 1);
+        _atomic_set(&new, _atomic_read(old) + 1);
         seen = atomic_compareandswap(old, new, &d->refcnt);
     }
     while ( unlikely(_atomic_read(seen) != _atomic_read(old)) );
-- 
1.7.10.4
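
[Editorial note: the conversion above can be sketched in plain C. This is an
illustrative, self-contained approximation, not the actual Xen code:
read_atomic()/write_atomic() are Xen internals, so plain volatile accesses
stand in for them here, and ATOMIC_INIT is assumed to be the usual
brace-initializer. The point it demonstrates is the type safety gained:
with static inlines, passing the wrong level of indirection is a
compile-time error rather than something the preprocessor silently accepts.]

```c
#include <assert.h>

/* Sketch of the atomic_t accessors as static inlines, mirroring the
 * shape of the patch.  Volatile accesses stand in for Xen's
 * read_atomic()/write_atomic(). */
typedef struct { int counter; } atomic_t;
#define ATOMIC_INIT(i) { (i) }

/* Atomic accessors take a pointer to atomic_t. */
static inline int atomic_read(atomic_t *v)
{
    return *(volatile int *)&v->counter;
}

static inline void atomic_set(atomic_t *v, int i)
{
    *(volatile int *)&v->counter = i;
}

/* Non-atomic read takes atomic_t by value, one level of indirection
 * less than atomic_read() -- the asymmetry the commit message notes. */
static inline int _atomic_read(atomic_t v)
{
    return v.counter;
}

/* _atomic_set() is converted to take a pointer, since setting a
 * by-value copy could not affect the caller's object. */
static inline void _atomic_set(atomic_t *v, int i)
{
    v->counter = i;
}
```

With the old macros, `atomic_read(v)` on a non-pointer `v` would expand and
possibly still compile; as inline functions, the compiler rejects any
argument that is not an `atomic_t *` (or `atomic_t` for the by-value
variants).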


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 20:59:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 20:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGxBx-0006G6-7o; Fri, 21 Feb 2014 20:59:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WGxBu-0006G1-VP
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 20:59:27 +0000
Received: from [85.158.137.68:39330] by server-3.bemta-3.messagelabs.com id
	C6/F7-14520-D2EB7035; Fri, 21 Feb 2014 20:59:25 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393016363!3451254!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11133 invoked from network); 21 Feb 2014 20:59:24 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 20:59:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,520,1389744000"; d="scan'208";a="104802692"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 21 Feb 2014 20:59:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 21 Feb 2014 15:59:22 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WGwuu-0003dk-Uf; Fri, 21 Feb 2014 20:41:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 21 Feb 2014 20:41:51 +0000
Message-ID: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is some coverity-inspired tidying.

Coverity has some grief analysing the call sites of atomic_read().  This is
believed to be a bug in Coverity itself when expanding the nested macros, but
there is no legitimate reason for it to be a macro in the first place.

This patch changes {,_}atomic_{read,set}() from being macros to being static
inline functions, thus gaining some type safety.

One issue which is not immediatly obvious is that the non-atomic varients take
their atomic_t at a different level of indirection to the atomic varients.

This is not suitable for _atomic_set() (when used to initialise an atomic_t)
which is converted to take its parameter as a pointer.  One callsite of
_atomic_set() is updated, while the other two callsites are updated to
ATOMIC_INIT().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
CC: Tim Deegan <tim@xen.org>

---

This is compile-tested on arm32 and 64, and functionally tested on x86_64
---
 xen/common/domain.c          |    5 ++---
 xen/include/asm-arm/atomic.h |   22 +++++++++++++++++----
 xen/include/asm-x86/atomic.h |   43 +++++++++++++++++++++++++++++++++++-------
 xen/include/xen/sched.h      |    2 +-
 4 files changed, 57 insertions(+), 15 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 2636fc9..a1483e4 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -759,13 +759,12 @@ static void complete_domain_destroy(struct rcu_head *head)
 void domain_destroy(struct domain *d)
 {
     struct domain **pd;
-    atomic_t      old, new;
+    atomic_t old = ATOMIC_INIT(0);
+    atomic_t new = ATOMIC_INIT(DOMAIN_DESTROYED);
 
     BUG_ON(!d->is_dying);
 
     /* May be already destroyed, or get_domain() can race us. */
-    _atomic_set(old, 0);
-    _atomic_set(new, DOMAIN_DESTROYED);
     old = atomic_compareandswap(old, new, &d->refcnt);
     if ( _atomic_read(old) != 0 )
         return;
diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index 69c8f3f..14eacd0 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -83,11 +83,25 @@ typedef struct { int counter; } atomic_t;
  * strex/ldrex monitor on some implementations. The reason we can use it for
  * atomic_set() is the clrex or dummy strex done on every exception return.
  */
-#define _atomic_read(v) ((v).counter)
-#define atomic_read(v)  (*(volatile int *)&(v)->counter)
+static inline int atomic_read(atomic_t *v)
+{
+    return *(volatile int *)&(v)->counter;
+}
+
+static inline int _atomic_read(atomic_t v)
+{
+    return v.counter;
+}
 
-#define _atomic_set(v,i) (((v).counter) = (i))
-#define atomic_set(v,i) (((v)->counter) = (i))
+static inline void atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
+
+static inline void _atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
 
 #if defined(CONFIG_ARM_32)
 # include <asm/arm32/atomic.h>
diff --git a/xen/include/asm-x86/atomic.h b/xen/include/asm-x86/atomic.h
index e476ab5..8972463 100644
--- a/xen/include/asm-x86/atomic.h
+++ b/xen/include/asm-x86/atomic.h
@@ -66,21 +66,50 @@ typedef struct { int counter; } atomic_t;
 /**
  * atomic_read - read atomic variable
  * @v: pointer of type atomic_t
- * 
+ *
  * Atomically reads the value of @v.
  */
-#define _atomic_read(v)  ((v).counter)
-#define atomic_read(v)   read_atomic(&((v)->counter))
+static inline int atomic_read(atomic_t *v)
+{
+    return read_atomic(&v->counter);
+}
+
+/**
+ * _atomic_read - read atomic variable non-atomically
+ * @v: atomic_t
+ *
+ * Non-atomically reads the value of @v.
+ */
+static inline int _atomic_read(atomic_t v)
+{
+    return v.counter;
+}
+
 
 /**
  * atomic_set - set atomic variable
  * @v: pointer of type atomic_t
  * @i: required value
- * 
+ *
  * Atomically sets the value of @v to @i.
- */ 
-#define _atomic_set(v,i) (((v).counter) = (i))
-#define atomic_set(v,i)  write_atomic(&((v)->counter), (i))
+ */
+static inline void atomic_set(atomic_t *v, int i)
+{
+    write_atomic(&v->counter, i);
+}
+
+/**
+ * _atomic_set - set atomic variable non-atomically
+ * @v: pointer of type atomic_t
+ * @i: required value
+ *
+ * Non-atomically sets the value of @v to @i.
+ */
+static inline void _atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
+
 
 /**
  * atomic_add - add integer to atomic variable
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..e4b5cae 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -468,7 +468,7 @@ static always_inline int get_domain(struct domain *d)
         old = seen;
         if ( unlikely(_atomic_read(old) & DOMAIN_DESTROYED) )
             return 0;
-        _atomic_set(new, _atomic_read(old) + 1);
+        _atomic_set(&new, _atomic_read(old) + 1);
         seen = atomic_compareandswap(old, new, &d->refcnt);
     }
     while ( unlikely(_atomic_read(seen) != _atomic_read(old)) );
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 21:13:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 21:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGxPL-0006fh-7i; Fri, 21 Feb 2014 21:13:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGxPK-0006fa-2y
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 21:13:18 +0000
Received: from [85.158.137.68:42661] by server-2.bemta-3.messagelabs.com id
	AB/C0-06531-D61C7035; Fri, 21 Feb 2014 21:13:17 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393017194!3438149!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8112 invoked from network); 21 Feb 2014 21:13:16 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 21:13:16 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LLDBPB015803
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 21:13:12 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1LLDAr6002298
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 21 Feb 2014 21:13:11 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LLDAbx010560; Fri, 21 Feb 2014 21:13:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 13:13:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 854AF1C3A88; Fri, 21 Feb 2014 16:13:09 -0500 (EST)
Date: Fri, 21 Feb 2014 16:13:09 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Message-ID: <20140221211309.GI26880@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<1392887748.22494.17.camel@kazak.uk.xensource.com>
	<CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
	<CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 09:51:52AM -0800, Saurabh Mishra wrote:
> Hi --
> 
> I tried enabling CONFIG_APM but it still didn't work. Let me know if you
> guys happen to know what all needs to be enabled in guest HVM for ACPI
> events to work (xm trigger <vm> power). Since SuSE HVM VM works with 'xm
> trigger <vm> power', I'm suspecting we have not enabled something in our WR
> distro.

What is 'WR'? Anyhow, you might also need ACPI. Do you get any ACPI
events at all in your guest?

That is, does the acpi line in /proc/interrupts show an increasing count
as you run 'xl trigger <vm> power'?
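A quick way to check this from inside the guest is to compare the summed acpi interrupt counts before and after issuing the trigger. A minimal, self-contained sketch (the sample /proc/interrupts lines below are hypothetical):

```shell
# Sum the per-CPU counts on the acpi line of a /proc/interrupts snapshot.
count_acpi() {
  printf '%s\n' "$1" | awk '/acpi/ {
      s = 0
      for (i = 2; i <= NF && $i ~ /^[0-9]+$/; i++) s += $i
      print s
  }'
}

# In a real check you would capture "$(cat /proc/interrupts)" before and
# after 'xl trigger <vm> power'; these snapshots are illustrative.
before='  9:          3          0   IO-APIC-fasteoi   acpi'
after='  9:          4          0   IO-APIC-fasteoi   acpi'

echo "before=$(count_acpi "$before") after=$(count_acpi "$after")"
# prints: before=3 after=4
```

If the count does not move when the trigger is issued, the guest is not receiving (or not handling) the ACPI power-button event.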

> 
> Thanks,
> /Saurabh
> 
> 
> On Thu, Feb 20, 2014 at 4:26 PM, Saurabh Mishra <saurabh.globe@gmail.com>wrote:
> 
> > Hi Ian --
> >
> > So enabling 'CONFIG_APM=y' in WR kernel for HVM guest should be enough?
> > Let me try that out.
> >
> > Thanks,
> > /Saurabh
> >
> >
> > On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell <Ian.Campbell@citrix.com>wrote:
> >
> >> On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:
> >> > config or driver do I need to enable in WR HVM guest such that it
> >> > accepts 'xl/xm trigger <vm> power'?
> >>
> >> Support for ACPI power events.
> >>
> >> Ian.
> >>
> >>
> >>
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 21:43:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 21:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGxsO-0006yD-Sy; Fri, 21 Feb 2014 21:43:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1WGxsN-0006y8-Fa
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 21:43:19 +0000
Received: from [85.158.143.35:44469] by server-2.bemta-4.messagelabs.com id
	69/5D-10891-678C7035; Fri, 21 Feb 2014 21:43:18 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393018996!7463777!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_60_70,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22613 invoked from network); 21 Feb 2014 21:43:17 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 21:43:17 -0000
Received: from G9W0364.americas.hpqcorp.net (g9w0364.houston.hp.com
	[16.216.193.45]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t3426.houston.hp.com (Postfix) with ESMTPS id 585B115A
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 21:43:16 +0000 (UTC)
Received: from G9W3614.americas.hpqcorp.net (16.216.186.49) by
	G9W0364.americas.hpqcorp.net (16.216.193.45) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 21 Feb 2014 21:41:40 +0000
Received: from G9W0737.americas.hpqcorp.net ([169.254.9.96]) by
	G9W3614.americas.hpqcorp.net ([16.216.186.49]) with mapi id
	14.03.0123.003; Fri, 21 Feb 2014 21:41:40 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: q35 in xen?  vfio in xen?
Thread-Index: Ac8vTML0X0KD0THDRguzpODwr34SVQ==
Date: Fri, 21 Feb 2014 21:41:39 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6@G9W0737.americas.hpqcorp.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
Cc: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Subject: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8071200253540375866=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8071200253540375866==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6G9W0737americas_"

--_000_3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6G9W0737americas_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Hi all,

I am playing with the q35 chipset in qemu (1.6.1). It seems we can't
enable the q35 machine under xen yet. I made a few quick hacks, which
all fail miserably (Linux kernel oops and Windows BSOD). I was wondering
why this hasn't been done (q35 was introduced into qemu in 2009).

Next question: vfio works very well for me in standalone qemu (with the
Linux host handling the iommu), but is it supported under xen? I haven't
tried anything there yet because my gut feeling is that it won't work:
passing a vfio device to qemu can only be done on the qemu command line,
and xen is not aware of the passed-through device, so it cannot make
iommu arrangements for it. Am I on the right track here?

I am interested in implementing both of these features. I'd like to
connect with anyone who's already working on this so we don't duplicate
effort.

Regards/Eniac


--_000_3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6G9W0737americas_--


--===============8071200253540375866==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8071200253540375866==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 21:50:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 21:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGxzE-000782-0A; Fri, 21 Feb 2014 21:50:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGxzC-00077x-Su
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 21:50:23 +0000
Received: from [85.158.139.211:52456] by server-10.bemta-5.messagelabs.com id
	0D/DF-08578-E1AC7035; Fri, 21 Feb 2014 21:50:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393019419!5518997!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5809 invoked from network); 21 Feb 2014 21:50:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 21:50:21 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LLoDrL021670
From xen-devel-bounces@lists.xen.org Fri Feb 21 21:50:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 21:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGxzE-000782-0A; Fri, 21 Feb 2014 21:50:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WGxzC-00077x-Su
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 21:50:23 +0000
Received: from [85.158.139.211:52456] by server-10.bemta-5.messagelabs.com id
	0D/DF-08578-E1AC7035; Fri, 21 Feb 2014 21:50:22 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393019419!5518997!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5809 invoked from network); 21 Feb 2014 21:50:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 21:50:21 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LLoDrL021670
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 21:50:14 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LLoCZJ026704
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 21:50:13 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LLoC8c020642; Fri, 21 Feb 2014 21:50:12 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 13:50:11 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 0EECD1C3A8C; Fri, 21 Feb 2014 16:50:11 -0500 (EST)
Date: Fri, 21 Feb 2014 16:50:11 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Message-ID: <20140221215011.GB16731@phenom.dumpdata.com>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6@G9W0737.americas.hpqcorp.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6@G9W0737.americas.hpqcorp.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> Hi all,
> 
> I am playing with the q35 chipset in qemu (1.6.1).  It seems we can't enable the q35 machine type under xen yet.  I made a few quick hacks, which all fail miserably (Linux kernel oops and Windows BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> 
> Next question: vfio works very well for me in standalone qemu (with the Linux host handling the iommu), but is that supported under xen?  I haven't tried anything there yet because my gut feeling is that it won't work: passing a vfio device to qemu can only be done on the qemu command line, so xen is not aware of the passed-through device and thus cannot make iommu arrangements for it.  Am I on the right track here?

Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses
a different mechanism (and you need to bind the device to pciback).
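
For reference, the binding step usually looks something like the sketch below. The BDF 0000:03:00.0 is just a placeholder, and the commands are echoed rather than executed, so nothing here touches a live host:

```shell
#!/bin/sh
# Sketch of binding a device to xen-pciback ahead of PCI passthrough.
# BDF 0000:03:00.0 is a placeholder; commands are echoed (dry run),
# so nothing here modifies a live host. Run them for real as root in dom0.
BDF="0000:03:00.0"

echo "modprobe xen-pciback"
echo "xl pci-assignable-add $BDF"
# Roughly equivalent low-level sysfs route:
echo "echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind"
echo "echo $BDF > /sys/bus/pci/drivers/pciback/new_slot"
echo "echo $BDF > /sys/bus/pci/drivers/pciback/bind"
```

Dropping the echo prefixes (and running as root in dom0) performs the actual binding; on reasonably recent toolstacks, xl pci-assignable-add wraps the sysfs steps for you.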

> 
> I am interested in implementing both of these features.  I'd like to connect with anyone who's already working on this so we don't duplicate effort.

What do you need Q35 for?

> 
> Regards/Eniac

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 22:01:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGy9R-0007MX-Cq; Fri, 21 Feb 2014 22:00:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1WGy9Q-0007MS-7i
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 22:00:56 +0000
Received: from [85.158.139.211:65250] by server-10.bemta-5.messagelabs.com id
	38/B4-08578-79CC7035; Fri, 21 Feb 2014 22:00:55 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393020051!1563866!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20819 invoked from network); 21 Feb 2014 22:00:53 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 21 Feb 2014 22:00:53 -0000
Received: from G4W6310.americas.hpqcorp.net (g4w6310.houston.hp.com
	[16.210.26.217]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t3426.houston.hp.com (Postfix) with ESMTPS id 5CA991BD;
	Fri, 21 Feb 2014 22:00:51 +0000 (UTC)
Received: from G9W3616.americas.hpqcorp.net (16.216.186.51) by
	G4W6310.americas.hpqcorp.net (16.210.26.217) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Fri, 21 Feb 2014 21:58:59 +0000
Received: from G9W0737.americas.hpqcorp.net ([169.254.9.96]) by
	G9W3616.americas.hpqcorp.net ([16.216.186.51]) with mapi id
	14.03.0123.003; Fri, 21 Feb 2014 21:58:59 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] q35 in xen?  vfio in xen?
Thread-Index: Ac8vTML0X0KD0THDRguzpODwr34SVQAAiHeAAAAREcA=
Date: Fri, 21 Feb 2014 21:58:59 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B7128@G9W0737.americas.hpqcorp.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6@G9W0737.americas.hpqcorp.net>
	<20140221215011.GB16731@phenom.dumpdata.com>
In-Reply-To: <20140221215011.GB16731@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Thanks for your reply.  

Yes, I am aware of pciback.  Unfortunately, it doesn't seem to support PCIe passthrough.  (I could be wrong here.)

There are two reasons I am interested in this.  For one, my project calls for PCIe device passthrough, which can't be accomplished with 440FX chipset emulation.  Secondly, I feel we ought to move on with the technology: 440FX is ancient in computer terms.  Qemu is good and all that, but if it refuses to support PCIe natively, then it's just a matter of time before it becomes obsolete.  The trend is clear that PCIe is taking over the world.
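
As an aside, a quick way to confirm that a given function really is PCIe rather than conventional PCI is to look for an Express capability in "lspci -vv" output. The sketch below parses a captured sample line so it runs anywhere; the BDF 08:00.0 named in the comment is hypothetical:

```shell
#!/bin/sh
# Sketch: detect a PCI Express capability in lspci -vv output.
# "sample" is a captured fragment; on a real host you would instead run:
#   lspci -s 08:00.0 -vv        (08:00.0 is a placeholder BDF)
sample='Capabilities: [a0] Express (v2) Endpoint, MSI 00'
case "$sample" in
  *'Express ('*) kind="pcie" ;;         # an Express capability is present
  *)             kind="conventional" ;; # plain PCI (or capability hidden)
esac
echo "device type: $kind"
```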

Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com] 
Sent: Friday, February 21, 2014 2:50 PM
To: Zhang, Eniac
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] q35 in xen? vfio in xen?

On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> Hi all,
> 
> I am playing with the q35 chipset in qemu (1.6.1).  It seems we can't enable the q35 machine type under xen yet.  I made a few quick hacks, which all fail miserably (Linux kernel oops and Windows BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> 
> Next question: vfio works very well for me in standalone qemu (with the Linux host handling the iommu), but is that supported under xen?  I haven't tried anything there yet because my gut feeling is that it won't work: passing a vfio device to qemu can only be done on the qemu command line, so xen is not aware of the passed-through device and thus cannot make iommu arrangements for it.  Am I on the right track here?

Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).

> 
> I am interested in implementing both of these features.  I'd like to connect with anyone who's already working on this so we don't duplicate effort.

What do you need Q35 for?

> 
> Regards/Eniac

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 22:04:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGyCj-0007Sr-0m; Fri, 21 Feb 2014 22:04:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGyCg-0007Sk-JC
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 22:04:18 +0000
Received: from [193.109.254.147:61806] by server-1.bemta-14.messagelabs.com id
	86/AB-15438-16DC7035; Fri, 21 Feb 2014 22:04:17 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393020255!6058643!1
X-Originating-IP: [209.85.220.179]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24257 invoked from network); 21 Feb 2014 22:04:16 -0000
Received: from mail-vc0-f179.google.com (HELO mail-vc0-f179.google.com)
	(209.85.220.179)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 22:04:16 -0000
Received: by mail-vc0-f179.google.com with SMTP id lh14so3795885vcb.24
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 14:04:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=Gul7VxpvNinKbKaffm+gBC1ZgRGQ09L45TSoT+sm/Lk=;
	b=NOkrL+79tBGIJ0q5kpf8YVoBTay81j7mq6GWlnUz4rbzBBBL88J/YNXnCrQTn+gauS
	CKnkYo/PL7m544LU0TvconAvTgdVGH6n/voiuUKYaIV+g9jtp5ROBLQMU24Q0n3/2Z3I
	8WYjiv6AMz/nYgswMgYz96RfsZ7STKhODIhnvPubR1gH4xOq/oiuwQW1q9HcqEYlbDfp
	EQvs07Yqtgl77LFor8T7js5UPAbLdwfJzAbCSMT2ZWu5HQ7JfdtNDAth+Q+U9Yg4nCmJ
	aFaP0dd5sqKVQIOgatVDC/ALuaPG8Xe7i/OYLfsEPInMWUGKDxJo/TKSplzege6pnIuj
	4fQQ==
MIME-Version: 1.0
X-Received: by 10.220.133.80 with SMTP id e16mr6316792vct.13.1393020255070;
	Fri, 21 Feb 2014 14:04:15 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Fri, 21 Feb 2014 14:04:14 -0800 (PST)
In-Reply-To: <20140221211309.GI26880@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<1392887748.22494.17.camel@kazak.uk.xensource.com>
	<CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
	<CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
	<20140221211309.GI26880@phenom.dumpdata.com>
Date: Fri, 21 Feb 2014 14:04:14 -0800
Message-ID: <CAMnwyJ3KkED8jaOMro1exG5--U8cyCz=VvBUAQDQJc+74ct7qQ@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3498106425075057227=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3498106425075057227==
Content-Type: multipart/alternative; boundary=001a11362cd477f01704f2f1cde5

--001a11362cd477f01704f2f1cde5
Content-Type: text/plain; charset=ISO-8859-1

WR is the Wind River distribution. I don't see any interrupt count going up for
acpi.

   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi

I don't see any ACPI events in the HVM guest either.

root@lc-6:/root>
root@lc-6:/root> ps -eaf | grep acpid
root      3771     1  0 21:57 ?        00:00:00 /usr/sbin/acpid
root      6684  3882  0 22:00 ttyS0    00:00:00 grep acpid
root@lc-6:/root> kill -9 3771
root@lc-6:/root> cat /proc/acpi/event

root@lc-6:/root> service acpid restart
Stopping acpi daemon: [FAILED]
Starting acpi daemon: [  OK  ]
root@lc-6:/root> service acpid restart
Stopping acpi daemon: [  OK  ]
Starting acpi daemon: [  OK  ]
root@lc-6:/root>

I issued trigger like this :-

xm trigger pvm-01-6 power
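
To double-check on the guest side, the per-CPU acpi columns in /proc/interrupts can be totalled before and after the trigger; if the total never moves, the guest is not seeing the ACPI power event at all. The line below is a captured sample with made-up counts, so the summing logic is self-contained:

```shell
#!/bin/sh
# Sketch: total the per-CPU counts on the acpi line of /proc/interrupts.
# On a real guest you would feed it:  grep acpi /proc/interrupts
# The counts below are made up so the script runs anywhere.
line='   9:         12          0          3          0   IO-APIC-fasteoi   acpi'
# Fields: IRQ number, one count per CPU, chip name, action name.
# Sum only the per-CPU count columns (fields 2 .. NF-2).
total=$(echo "$line" | awk '{s=0; for (i=2; i<=NF-2; i++) s+=$i; print s}')
echo "acpi interrupts: $total"
```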


*HVM guest cfg file is :-*

boot = "c"
memory = 8192
vcpus = 4
disk = [ 'file:/root/PSVs/mnt_local_ssd/local_ssd/pvm-6/ssc_pvm_01.img,hda,w', ',hdc:cdrom,r' ]
vif = [ 'model=e1000, mac=00:16:3e:00:05:00, bridge=br0',
        'model=e1000, mac=00:16:3f:00:05:01, bridge=br1' ]
pci = [ '0000:08:00.0=0@0b', '0000:01:00.0=0@0c', '0000:07:11.6=0@1a',
        '0000:07:11.7=0@1b', '0000:88:11.6=0@1c', '0000:88:11.7=0@1d' ]
cpus = [  '36', '37', '38', '39' ]

#
# --- Mandatory config file entries ---
#

# HVM specific
kernel = "hvmloader"
builder = "hvm"
device_model = "qemu-dm"

# Enable ACPI support
acpi = 1

# Enable serial console
serial = "pty"

# Enable VNC
vnc = 1
vnclisten = "0.0.0.0"

pci_msitranslate = 0

# Default behavior for following events
on_reboot = "destroy"

# Enable Xen Platform PCI device for Platform VM
xen_platform_pci=0


Thanks,
/Saurabh

On Fri, Feb 21, 2014 at 1:13 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 21, 2014 at 09:51:52AM -0800, Saurabh Mishra wrote:
> > Hi --
> >
> > I tried enabling CONFIG_APM but it still didn't work. Let me know if you
> > guys happen to know what all needs to be enabled in guest HVM for ACPI
> > events to work (xm trigger <vm> power). Since SuSE HVM VM works with 'xm
> > trigger <vm> power', I'm suspecting we have not enabled something in our
> WR
> > distro.
>
> What is 'WR'? Anyhow, you might also need ACPI. Do you get any ACPI
> events at all in your guest?
>
> Aka, does /proc/interrupts for acpi show an increasing number as you
> do 'xl trigger poweroff' ?
>
> >
> > Thanks,
> > /Saurabh
> >
> >
> > On Thu, Feb 20, 2014 at 4:26 PM, Saurabh Mishra <saurabh.globe@gmail.com
> >wrote:
> >
> > > Hi Ian --
> > >
> > > So enabling 'CONFIG_APM=y' in WR kernel for HVM guest should be enough?
> > > Let me try that out.
> > >
> > > Thanks,
> > > /Saurabh
> > >
> > >
> > > On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell <Ian.Campbell@citrix.com
> >wrote:
> > >
> > >> On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:
> > >> > config or driver do I need to enable in WR HVM guest such that it
> > >> > accepts 'xl/xm trigger <vm> power'?
> > >>
> > >> Support for ACPI power events.
> > >>
> > >> Ian.
> > >>
> > >>
> > >>
> > >
>

--001a11362cd477f01704f2f1cde5--


--===============3498106425075057227==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3498106425075057227==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 22:04:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGyCj-0007Sr-0m; Fri, 21 Feb 2014 22:04:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WGyCg-0007Sk-JC
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 22:04:18 +0000
Received: from [193.109.254.147:61806] by server-1.bemta-14.messagelabs.com id
	86/AB-15438-16DC7035; Fri, 21 Feb 2014 22:04:17 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393020255!6058643!1
X-Originating-IP: [209.85.220.179]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24257 invoked from network); 21 Feb 2014 22:04:16 -0000
Received: from mail-vc0-f179.google.com (HELO mail-vc0-f179.google.com)
	(209.85.220.179)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 22:04:16 -0000
Received: by mail-vc0-f179.google.com with SMTP id lh14so3795885vcb.24
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 14:04:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=Gul7VxpvNinKbKaffm+gBC1ZgRGQ09L45TSoT+sm/Lk=;
	b=NOkrL+79tBGIJ0q5kpf8YVoBTay81j7mq6GWlnUz4rbzBBBL88J/YNXnCrQTn+gauS
	CKnkYo/PL7m544LU0TvconAvTgdVGH6n/voiuUKYaIV+g9jtp5ROBLQMU24Q0n3/2Z3I
	8WYjiv6AMz/nYgswMgYz96RfsZ7STKhODIhnvPubR1gH4xOq/oiuwQW1q9HcqEYlbDfp
	EQvs07Yqtgl77LFor8T7js5UPAbLdwfJzAbCSMT2ZWu5HQ7JfdtNDAth+Q+U9Yg4nCmJ
	aFaP0dd5sqKVQIOgatVDC/ALuaPG8Xe7i/OYLfsEPInMWUGKDxJo/TKSplzege6pnIuj
	4fQQ==
MIME-Version: 1.0
X-Received: by 10.220.133.80 with SMTP id e16mr6316792vct.13.1393020255070;
	Fri, 21 Feb 2014 14:04:15 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Fri, 21 Feb 2014 14:04:14 -0800 (PST)
In-Reply-To: <20140221211309.GI26880@phenom.dumpdata.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<1392887748.22494.17.camel@kazak.uk.xensource.com>
	<CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
	<CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
	<20140221211309.GI26880@phenom.dumpdata.com>
Date: Fri, 21 Feb 2014 14:04:14 -0800
Message-ID: <CAMnwyJ3KkED8jaOMro1exG5--U8cyCz=VvBUAQDQJc+74ct7qQ@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3498106425075057227=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3498106425075057227==
Content-Type: multipart/alternative; boundary=001a11362cd477f01704f2f1cde5

--001a11362cd477f01704f2f1cde5
Content-Type: text/plain; charset=ISO-8859-1

WR is the Wind River distribution. I don't see any interrupt count going up for
acpi.

   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi
   9:          0          0          0          0   IO-APIC-fasteoi   acpi

I don't see any ACPI events in the HVM guest either.

root@lc-6:/root>
root@lc-6:/root> ps -eaf | grep acpid
root      3771     1  0 21:57 ?        00:00:00 /usr/sbin/acpid
root      6684  3882  0 22:00 ttyS0    00:00:00 grep acpid
root@lc-6:/root> kill -9 3771
root@lc-6:/root> cat /proc/acpi/event

root@lc-6:/root> service acpid restart
Stopping acpi daemon: [FAILED]
Starting acpi daemon: [  OK  ]
root@lc-6:/root> service acpid restart
Stopping acpi daemon: [  OK  ]
Starting acpi daemon: [  OK  ]
root@lc-6:/root>

I issued trigger like this :-

xm trigger pvm-01-6 power


*HVM guest cfg file is :-*

boot = "c"
memory = 8192
vcpus = 4
disk = [
'file:/root/PSVs/mnt_local_ssd/local_ssd/pvm-6/ssc_pvm_01.img,hda,w',
',hdc:cdrom,r' ]
vif = [ 'model=e1000, mac=00:16:3e:00:05:00, bridge=br0', 'model=e1000,
mac=00:16:3f:00:05:01, bridge=br1' ]
pci = [  '0000:08:00.0=0@0b', '0000:01:00.0=0@0c', '0000:07:11.6=0@1a',
'0000:07:11.7=0@1b', '0000:88:11.6=0@1c', '0000:88:11.7=0@1d' ]
cpus = [  '36', '37', '38', '39' ]

#
# --- Mandatory config file entries ---
#

# HVM specific
kernel = "hvmloader"
builder = "hvm"
device_model = "qemu-dm"

# Enable ACPI support
acpi = 1

# Enable serial console
serial = "pty"

# Enable VNC
vnc = 1
vnclisten = "0.0.0.0"

pci_msitranslate = 0

# Default behavior for following events
on_reboot = "destroy"

# Enable Xen Platform PCI device for Platform VM
xen_platform_pci=0


Thanks,
/Saurabh

On Fri, Feb 21, 2014 at 1:13 PM, Konrad Rzeszutek Wilk <
konrad.wilk@oracle.com> wrote:

> On Fri, Feb 21, 2014 at 09:51:52AM -0800, Saurabh Mishra wrote:
> > Hi --
> >
> > I tried enabling CONFIG_APM but it still didn't work. Let me know if you
> > guys happen to know what all needs to be enabled in guest HVM for ACPI
> > events to work (xm trigger <vm> power). Since SuSE HVM VM works with 'xm
> > trigger <vm> power', I'm suspecting we have not enabled something in our
> WR
> > distro.
>
> What is 'WR'? Anyhow, you might also need ACPI. Do you get any ACPI
> events at all in your guest?
>
> Aka, does /proc/interrupts for acpi show an increasing number as you
> do 'xl trigger poweroff' ?
>
> >
> > Thanks,
> > /Saurabh
> >
> >
> > On Thu, Feb 20, 2014 at 4:26 PM, Saurabh Mishra <saurabh.globe@gmail.com
> >wrote:
> >
> > > Hi Ian --
> > >
> > > So enabling 'CONFIG_APM=y' in WR kernel for HVM guest should be enough?
> > > Let me try that out.
> > >
> > > Thanks,
> > > /Saurabh
> > >
> > >
> > > On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell <Ian.Campbell@citrix.com
> >wrote:
> > >
> > >> On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:
> > >> > config or driver do I need to enable in WR HVM guest such that it
> > >> > accepts 'xl/xm trigger <vm> power'?
> > >>
> > >> Support for ACPI power events.
> > >>
> > >> Ian.
> > >>
> > >>
> > >>
> > >
>

--001a11362cd477f01704f2f1cde5
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">WR is WindRiver distribution. I don&#39;t see any interrup=
t count going up for acpi.<div><br></div><div><div>=A0 =A09: =A0 =A0 =A0 =
=A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0=
 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =A0 =A0 =A0 =A0 =A00 =A0 =A0 =
=A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-APIC-fasteoi =
=A0 acpi</div>
<div>=A0 =A09: =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =
=A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =
=A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-=
APIC-fasteoi =A0 acpi</div>
<div>=A0 =A09: =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =
=A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =
=A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-=
APIC-fasteoi =A0 acpi</div>
<div>=A0 =A09: =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =
=A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =
=A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-=
APIC-fasteoi =A0 acpi</div>
<div>=A0 =A09: =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =
=A00 =A0 =A0 =A0 =A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div><div>=A0 =A09: =
=A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =A0 =A00 =A0 =A0 =A0 =
=A0 =A00 =A0 IO-APIC-fasteoi =A0 acpi</div></div><div><br></div><div>I don&=
#39;t see any ACPI events in the HVM guest either.</div>
<div><br></div><div><div>root@lc-6:/root&gt;</div><div>root@lc-6:/root&gt; =
ps -eaf | grep acpid</div><div>root =A0 =A0 =A03771 =A0 =A0 1 =A00 21:57 ? =
=A0 =A0 =A0 =A000:00:00 /usr/sbin/acpid</div><div>root =A0 =A0 =A06684 =A03=
882 =A00 22:00 ttyS0 =A0 =A000:00:00 grep acpid</div>
<div>root@lc-6:/root&gt; kill -9 3771</div><div>root@lc-6:/root&gt; cat /pr=
oc/acpi/event</div><div><br></div><div>root@lc-6:/root&gt; service acpid re=
start</div><div>Stopping acpi daemon: [FAILED]</div><div>Starting acpi daem=
on: [ =A0OK =A0]</div>
<div>root@lc-6:/root&gt; service acpid restart</div><div>Stopping acpi daem=
on: [ =A0OK =A0]</div><div>Starting acpi daemon: [ =A0OK =A0]</div><div>roo=
t@lc-6:/root&gt;=A0</div></div><div><br></div><div>I issued trigger like th=
is :-</div>
<div><br></div><div>xm trigger pvm-01-6 power<br></div><div><br></div><div>=
<br></div><div><b><u>HVM guest cfg file is :-</u></b></div><div><br></div><=
div><div>boot =3D &quot;c&quot;</div><div>memory =3D 8192</div><div>vcpus =
=3D 4</div>
<div>disk =3D [ &#39;file:/root/PSVs/mnt_local_ssd/local_ssd/pvm-6/ssc_pvm_=
01.img,hda,w&#39;, &#39;,hdc:cdrom,r&#39; ]</div><div>vif =3D [ &#39;model=
=3De1000, mac=3D00:16:3e:00:05:00, bridge=3Dbr0&#39;, &#39;model=3De1000, m=
ac=3D00:16:3f:00:05:01, bridge=3Dbr1&#39; ]</div>
<div>pci =3D [ =A0&#39;0000:08:00.0=3D0@0b&#39;, &#39;0000:01:00.0=3D0@0c&#=
39;, &#39;0000:07:11.6=3D0@1a&#39;, &#39;0000:07:11.7=3D0@1b&#39;, &#39;000=
0:88:11.6=3D0@1c&#39;, &#39;0000:88:11.7=3D0@1d&#39; ]</div><div>cpus =3D [=
 =A0&#39;36&#39;, &#39;37&#39;, &#39;38&#39;, &#39;39&#39; ]</div>
<div>=A0</div><div>#</div><div># --- Mandatory config file entries ---</div=
><div>#</div><div><br></div><div># HVM specific</div><div>kernel =3D &quot;=
hvmloader&quot;</div><div>builder =3D &quot;hvm&quot;</div><div>device_mode=
l =3D &quot;qemu-dm&quot;</div>
<div><br></div><div># Enable ACPI support</div><div>acpi =3D 1</div><div><b=
r></div><div># Enable serial console</div><div>serial =3D &quot;pty&quot;</=
div><div><br></div><div># Enable VNC</div><div>vnc =3D 1</div><div>vncliste=
n =3D &quot;0.0.0.0&quot;</div>
<div><br></div><div>pci_msitranslate =3D 0<br></div><div><br></div><div># D=
efault behavior for following events</div><div>on_reboot =3D &quot;destroy&=
quot;</div><div>=A0</div><div># Enable Xen Platform PCI device for Platform=
 VM</div>
<div>xen_platform_pci=3D0</div></div><div><br></div><div class=3D"gmail_ext=
ra"><br></div><div class=3D"gmail_extra">Thanks,</div><div class=3D"gmail_e=
xtra">/Saurabh<br><br><div class=3D"gmail_quote">On Fri, Feb 21, 2014 at 1:=
13 PM, Konrad Rzeszutek Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad=
.wilk@oracle.com" target=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span> w=
rote:<br>
<blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex;border-=
left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;p=
adding-left:1ex"><div class=3D"im">On Fri, Feb 21, 2014 at 09:51:52AM -0800=
, Saurabh Mishra wrote:<br>

&gt; Hi --<br>
&gt;<br>
&gt; I tried enabling CONFIG_APM but it still didn&#39;t work. Let me know =
if you<br>
&gt; guys happen to know what all needs to be enabled in guest HVM for ACPI=
<br>
&gt; events to work (xm trigger &lt;vm&gt; power). Since SuSE HVM VM works =
with &#39;xm<br>
&gt; trigger &lt;vm&gt; power&#39;, I&#39;m suspecting we have not enabled =
something in our WR<br>
&gt; distro.<br>
<br>
</div>What is &#39;WR&#39;? Anyhow, you might also need ACPI. Do you get an=
y ACPI<br>
events at all in your guest?<br>
<br>
Aka, does /proc/interrupts for acpi show an increasing number as you<br>
do &#39;xl trigger poweroff&#39; ?<br>
<div class=3D""><div class=3D"h5"><br>
&gt;<br>
&gt; Thanks,<br>
&gt; /Saurabh<br>
&gt;<br>
&gt;<br>
&gt; On Thu, Feb 20, 2014 at 4:26 PM, Saurabh Mishra &lt;<a href=3D"mailto:=
saurabh.globe@gmail.com">saurabh.globe@gmail.com</a>&gt;wrote:<br>
&gt;<br>
&gt; &gt; Hi Ian --<br>
&gt; &gt;<br>
&gt; &gt; So enabling &#39;CONFIG_APM=3Dy&#39; in WR kernel for HVM guest s=
hould be enough?<br>
&gt; &gt; Let me try that out.<br>
&gt; &gt;<br>
&gt; &gt; Thanks,<br>
&gt; &gt; /Saurabh<br>
&gt; &gt;<br>
&gt; &gt;<br>
&gt; &gt; On Thu, Feb 20, 2014 at 1:15 AM, Ian Campbell &lt;<a href=3D"mail=
to:Ian.Campbell@citrix.com">Ian.Campbell@citrix.com</a>&gt;wrote:<br>
&gt; &gt;<br>
&gt; &gt;&gt; On Wed, 2014-02-19 at 11:02 -0800, Saurabh Mishra wrote:<br>
&gt; &gt;&gt; &gt; config or driver do I need to enable in WR HVM guest suc=
h that it<br>
&gt; &gt;&gt; &gt; accepts &#39;xl/xm trigger &lt;vm&gt; power&#39;?<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; Support for ACPI power events.<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt; Ian.<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;&gt;<br>
&gt; &gt;<br>
</div></div></blockquote></div><br></div></div>

--001a11362cd477f01704f2f1cde5--


--===============3498106425075057227==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3498106425075057227==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 22:09:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:09:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGyHt-0007el-QA; Fri, 21 Feb 2014 22:09:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WGyHs-0007ec-Dv
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 22:09:40 +0000
Received: from [85.158.143.35:18499] by server-2.bemta-4.messagelabs.com id
	92/48-10891-3AEC7035; Fri, 21 Feb 2014 22:09:39 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393020578!7469272!1
X-Originating-IP: [209.85.214.48]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20072 invoked from network); 21 Feb 2014 22:09:38 -0000
Received: from mail-bk0-f48.google.com (HELO mail-bk0-f48.google.com)
	(209.85.214.48)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	21 Feb 2014 22:09:38 -0000
Received: by mail-bk0-f48.google.com with SMTP id 6so1218788bkj.35
	for <xen-devel@lists.xen.org>; Fri, 21 Feb 2014 14:09:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=LsU8jVZuZiV5cZPQJrFGmgLJ2IelVuAb6vLj5At+lW0=;
	b=mQwiVW7w/SomRrkG81nDViH8Nf54WU/BSAGCodmdxUS7bQv1h4ida9QrGom/SoWjhU
	aR0fsbnIcPB22hMlRV3be3usb1jajzx0MJFVWmi7cew7zcOEvnHRQcTJMcaI1eTDl9gx
	fU4lBO21BFjvRe1QN8We7FgBZjgS/pErIhxsCLdLamETQIiFWQ7StYKcj3QUN7I9+98Q
	3D11ffViu3KA8c4ux0yx4xNok6UsF6wqMjxytn/+n5AJkHrXP2DaLadKZ651LFxedOVE
	U6oOjGdCi7W2iw/kRFHiLAkhrjg+uNZCww+bPUcQ/tdAxB03of9kFTVw3TSO37+FvoRl
	3PUA==
X-Gm-Message-State: ALoCoQkJ5S5MtX6fcayyYjYh1AXItPgUN5KSSw1MZuAyfPiZ56Kt/thjIdrMJEqdXp6RKhweSSwx
MIME-Version: 1.0
X-Received: by 10.204.170.72 with SMTP id c8mr2225796bkz.34.1393020577932;
	Fri, 21 Feb 2014 14:09:37 -0800 (PST)
Received: by 10.205.96.200 with HTTP; Fri, 21 Feb 2014 14:09:37 -0800 (PST)
X-Originating-IP: [87.0.81.230]
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6@G9W0737.americas.hpqcorp.net>
References: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B70D6@G9W0737.americas.hpqcorp.net>
Date: Fri, 21 Feb 2014 23:09:37 +0100
Message-ID: <CABMPFzhC54dv+AjM29Bnz74Zj4oZ8O8tJY_Wv9YU1BjLR-+CUg@mail.gmail.com>
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5086675808461276185=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5086675808461276185==
Content-Type: multipart/alternative; boundary=bcaec52c68d7b6cc8e04f2f1e00e

--bcaec52c68d7b6cc8e04f2f1e00e
Content-Type: text/plain; charset=ISO-8859-1

2014-02-21 22:41 GMT+01:00 Zhang, Eniac <eniac-xw.zhang@hp.com>:

>  Hi all,
>
>
>
> I am playing with q35 chipset in qemu (1.6.1).  It seems we can't enable
> q35 machine under xen yet.  I made a few quick hacks which all fail
> miserably (linux kernel oops and window BSOD).  I was wondering why this
> hasn't been done (q35 was introduced into qemu in 2009).
>
>
>
> Next question, vfio works very well for me in standalone qemu (with Linux
> host handling iommu), but is that supported under xen?  I haven't tried
> anything there yet because my gut-feeling is that it won't work.  Because
> passing vfio device to qemu can only be done on qemu commandline, and xen
> is not aware of this passing through device, thus not able to make iommu
> arrangement for this device.  Am I on the right track here?
>
>
>
> I am interested in implementing both these two features.  I'd like to
> connect with anyone who's already on this so we don't duplicate the efforts.
>

I did some quick tests with q35 a while ago. One problem I found is that disks
do not work with the old qemu parameters; with the new parameters (-device)
they are visible at boot. But while trying to write a patch using the new qemu
parameters that is also compatible with the old chipset with IDE, I found that
automatic bus selection is buggy. Unfortunately, I have since had neither the
time to continue nor the knowledge to make the needed changes in hvmloader.


>
>
> Regards/Eniac
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
>

--bcaec52c68d7b6cc8e04f2f1e00e
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">2014-02-21 22:41 GMT+01:00 Zhang, Eniac <span dir=3D"ltr">=
&lt;<a href=3D"mailto:eniac-xw.zhang@hp.com" target=3D"_blank">eniac-xw.zha=
ng@hp.com</a>&gt;</span>:<br><div class=3D"gmail_extra"><div class=3D"gmail=
_quote"><blockquote class=3D"gmail_quote" style=3D"margin:0px 0px 0px 0.8ex=
;border-left:1px solid rgb(204,204,204);padding-left:1ex">






<div link=3D"blue" vlink=3D"purple" lang=3D"EN-US">
<div>
<p class=3D"MsoNormal">Hi all,<u></u><u></u></p>
<p class=3D"MsoNormal"><u></u>&nbsp;<u></u></p>
<p class=3D"MsoNormal">I am playing with q35 chipset in qemu (1.6.1).&nbsp;=
 It seems we can&rsquo;t enable q35 machine under xen yet.&nbsp; I made a f=
ew quick hacks which all fail miserably (linux kernel oops and window BSOD)=
.&nbsp; I was wondering why this hasn&rsquo;t been done (q35
 was introduced into qemu in 2009).&nbsp; <u></u><u></u></p>
<p class=3D"MsoNormal"><u></u>&nbsp;<u></u></p>
<p class=3D"MsoNormal">Next question, vfio works very well for me in standa=
lone qemu (with Linux host handling iommu), but is that supported under xen=
?&nbsp; I haven&rsquo;t tried anything there yet because my gut-feeling is =
that it won&rsquo;t work.&nbsp; Because passing vfio device
 to qemu can only be done on qemu commandline, and xen is not aware of this=
 passing through device, thus not able to make iommu arrangement for this d=
evice.&nbsp; Am I on the right track here?&nbsp;
<u></u><u></u></p>
<p class=3D"MsoNormal"><u></u>&nbsp;<u></u></p>
<p class=3D"MsoNormal">I am interested in implementing both these two featu=
res.&nbsp; I&rsquo;d like to connect with anyone who&rsquo;s already on thi=
s so we don&rsquo;t duplicate the efforts.</p></div></div></blockquote><div=
><br></div><div>I di some fast test with q35 time ago, one problem found is=
 disks not working with old qemu parameter, with new parameters (-device) s=
ee them on boot but trying to do a patch with new qemu parameters compatibl=
e also with old cipset with ide I found that automatic bus selection is bug=
ged, <span id=3D"result_box" class=3D"" lang=3D"en"><span class=3D"">unfort=
unately,</span> <span class=3D"">after</span> <span class=3D"">I had</span>=
 <span class=3D"">more time to</span> <span class=3D"">continue </span></sp=
an><span id=3D"result_box" class=3D"" lang=3D"en"><span class=3D"">nor</spa=
n> <span class=3D"">the knowledge to</span> <span class=3D"">make</span> <s=
pan class=3D"">needed changes</span> <span class=3D"">in</span> <span class=
=3D"">hvmloader.</span></span></div>
<div>&nbsp;</div><blockquote class=3D"gmail_quote" style=3D"margin:0px 0px =
0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div lin=
k=3D"blue" vlink=3D"purple" lang=3D"EN-US"><div><p class=3D"MsoNormal"><u><=
/u><u></u></p>

<p class=3D"MsoNormal"><u></u>&nbsp;<u></u></p>
<p class=3D"MsoNormal">Regards/Eniac<u></u><u></u></p>
</div>
</div>

<br>_______________________________________________<br>
Xen-devel mailing list<br>
<a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a><br>
<a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://lists.x=
en.org/xen-devel</a><br>
<br></blockquote></div><br></div></div>

--bcaec52c68d7b6cc8e04f2f1e00e--


--===============5086675808461276185==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5086675808461276185==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 22:20:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:20:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGyS1-0007p7-W8; Fri, 21 Feb 2014 22:20:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WGyRx-0007p2-7q
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 22:20:09 +0000
Received: from [193.109.254.147:28992] by server-8.bemta-14.messagelabs.com id
	84/F1-18529-411D7035; Fri, 21 Feb 2014 22:20:04 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393021202!2038268!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9352 invoked from network); 21 Feb 2014 22:20:03 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 22:20:03 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LMJ9Xe023785
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 22:19:10 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LMJ8qL022582
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 21 Feb 2014 22:19:09 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LMJ8EX022571; Fri, 21 Feb 2014 22:19:08 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 14:19:07 -0800
Date: Fri, 21 Feb 2014 14:19:06 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Charles" <xiezhenjiang@foxmail.com>
Message-ID: <20140221141906.65be3272@mantra.us.oracle.com>
In-Reply-To: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
References: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] confusions on monitoring VM cpu usage in Xen
 hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014 16:09:31 +0800
"Charles" <xiezhenjiang@foxmail.com> wrote:

> Hi everyone, I'm trying to monitor the VM CPU usage in xen hypervisor
> and use the VM CPU usage information in Xen's Credit Scheduler. I
> googled it, and find that most references about this topic is
> monitoring the VM CPU usage information in Domain-0 while not in the
> Xen hypervisor. I'm now blocking at how to determine the state of
> vCPU, My understanding is that vCPU running on a physical CPU doesn't
> mean it really consumes CPU cycles, so I could determine the whether
> vCPU is consuming CPU cycles or not? So could you please give me some
> advice.
>
> Thank you very much!
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


Checkout simple utility called 'xmstat' I wrote a while ago, you might
find it useful as it will tell you how vcpus are bouncing around and
consuming cpus:

http://old-list-archives.xenproject.org/archives/html/xen-devel/2010-08/msg01586.html

thanks
mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 21 22:22:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGyTw-0007vQ-MK; Fri, 21 Feb 2014 22:22:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WGt38-0007Tm-KG
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:34:06 +0000
Received: from [193.109.254.147:22800] by server-7.bemta-14.messagelabs.com id
	1A/B2-23424-CFF77035; Fri, 21 Feb 2014 16:34:04 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393000442!2236416!1
X-Originating-IP: [98.139.212.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19463 invoked from network); 21 Feb 2014 16:34:03 -0000
Received: from nm4.bullet.mail.bf1.yahoo.com (HELO
	nm4.bullet.mail.bf1.yahoo.com) (98.139.212.163)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 16:34:03 -0000
Received: from [66.196.81.173] by nm4.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 16:34:01 -0000
Received: from [98.139.212.237] by tm19.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 16:34:01 -0000
Received: from [127.0.0.1] by omp1046.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 16:34:01 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 834710.28387.bm@omp1046.mail.bf1.yahoo.com
Received: (qmail 61048 invoked by uid 60001); 21 Feb 2014 16:34:01 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1393000441; bh=PaLWo3RygSIPkDjSorhDCYkbF0y1UuZKvNa6iI76u/4=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=hs3wsCPnNJBieM0YqYzhg6aEq1VjxM1REqMThTioK+XOOkyQAy4r8w7NWx0lW7kc1piZsSUerif+pw2xsLaA2c/y1OTa4f9AWSscjaAubXbkNFn8DDGTZ5Ft6MP/W77l1lGc4TTaXCFRMjLSi75uel/sYOj5IGMe0koS8Aq+G/I=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=Hzl2S4k8E8recjWyHlLBe939m/TaFkaPOd9mND8MmoLYHpjJjBYRvRh4Iq6MD/87fNlSjc1RDYscie2BzzCeoC3acGjTQFNkO/LKGLrjBaQMq2ohxga53KEy9FWFIS7CoNSKEGVyNZesZjzJUwUPXpgN1yU2YkEheEuWseXFS14=;
X-YMail-OSG: Z_9JFocVM1nMQeQw2rOplotFaU9hJmfRMf_EwZHxIz4ref9
	ElwFpYfZ0Y0o2jkCo.QYCWngmQysK9KdL7SDgkVcREBlqJ4ZVl1HywueIPb7
	aM3lh2cvmvKwcNI68GAvQpj7o9128HSLR2uxZdl.k8Z32OKovaiO1Y7RwLgp
	TgGWqtfKsurS3_GXLDaPHKq4XWaiQgxyZGdZNVw5K_R0JhFo.5ogfUfSGQ2Z
	hCMt6zu1lgCpz0tLlGq2U4E2fLyCjsYBZ95J.OHzDthub3yqihl8xXIe9LuH
	wm8nJNGUyQPNm5HauXc7GlKqzBJk0TCFPNT_YvBktcbQUpNQ8ox..M2GSeur
	XVgaGKJhrMD7tCFimmgfR9XZLiOPCiWYby58qmH8xj86IIZpGplJRrn62.Fx
	x2GhUhki6xDNY9SglZHJ1v7N5gBufs1RFiTmuQdpPCglw_gAzWZBRFjUF1bv
	y8z.mkrgDlLcpjy6hWtb7x28yPWV_SwbbHPd7pAFar.8.5REWweHan5qafx5
	xi3ovyMWEfZgMRdBh_h4NluIKkWN0tc2rb4_U9wrA8SyxPULhFPTGFgoEaCq
	gU8wLrA--
Received: from [192.227.225.3] by web161802.mail.bf1.yahoo.com via HTTP;
	Fri, 21 Feb 2014 08:34:01 PST
X-Rocket-MIMEInfo: 002.001,
	Y2FuIHlvdSBwbGVhc2UgZXhwbGFpbiBpbiBtb3JlIGRldGFpbHMgYWJvdXQ6wqBUaGUgIkBAICIgbWFya2VycyBhcmUgZnJvbSBkaWZmKDEpLCBzbyB0aGF0IHBhdGNoKDEpIGNhbiBkbyBpdHMgd29yay53aGF0IGxpbmUgY29kZSBpIG5lZWQgdG8gYWRkPyDCoMKgCmRvIGkgbmVlZCB0byBhZGQgdGhlc2UgbGluZXMgb2YgY29kZToKbWF4X2YgPSBhdG9pKGFyZ3ZbNF0pOwrCoCDCoCBzaS5mbGFncyA9IGF0b2koYXJndls1XSk7CsKgCkFkZWwgQW1hbmkKCk0uU2MuIENhbmRpZGF0ZUBDb21wdXRlciBFbmdpbmUBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.177.636
References: <20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140219141207.GA9631@aepfle.de>
	<1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
	<20140221093009.GA3187@aepfle.de> 
Message-ID: <1393000441.22839.YahooMailNeo@web161802.mail.bf1.yahoo.com>
Date: Fri, 21 Feb 2014 08:34:01 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140221093009.GA3187@aepfle.de>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 21 Feb 2014 22:22:07 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0194334277532436951=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0194334277532436951==
Content-Type: multipart/alternative; boundary="-2096837515-990067434-1393000441=:22839"

---2096837515-990067434-1393000441=:22839
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit

can you please explain in more detail about: The "@@ " markers are from diff(1), so that patch(1) can do its work. What line of code do I need to add?
Do I need to add these lines of code:
max_f = atoi(argv[4]);
    si.flags = atoi(argv[5]);

Adel Amani

M.Sc. Candidate@Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir


On Friday, February 21, 2014 1:00 PM, Olaf Hering <olaf@aepfle.de> wrote:

On Thu, Feb 20, Adel Amani wrote:

> +    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG: XTL_DETAIL;

Please follow the code and check how si.flags gets its values.

The "@@ " markers are from diff(1), so that patch(1) can do its work.


Olaf
---2096837515-990067434-1393000441=:22839--


--===============0194334277532436951==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0194334277532436951==--


From xen-devel-bounces@lists.xen.org Fri Feb 21 22:22:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 22:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGyTw-0007vQ-MK; Fri, 21 Feb 2014 22:22:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <adel.amani66@yahoo.com>) id 1WGt38-0007Tm-KG
	for xen-devel@lists.xen.org; Fri, 21 Feb 2014 16:34:06 +0000
Received: from [193.109.254.147:22800] by server-7.bemta-14.messagelabs.com id
	1A/B2-23424-CFF77035; Fri, 21 Feb 2014 16:34:04 +0000
X-Env-Sender: adel.amani66@yahoo.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393000442!2236416!1
X-Originating-IP: [98.139.212.163]
X-SpamReason: No, hits=0.0 required=7.0 tests=HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_12,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19463 invoked from network); 21 Feb 2014 16:34:03 -0000
Received: from nm4.bullet.mail.bf1.yahoo.com (HELO
	nm4.bullet.mail.bf1.yahoo.com) (98.139.212.163)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 16:34:03 -0000
Received: from [66.196.81.173] by nm4.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 16:34:01 -0000
Received: from [98.139.212.237] by tm19.bullet.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 16:34:01 -0000
Received: from [127.0.0.1] by omp1046.mail.bf1.yahoo.com with NNFMP;
	21 Feb 2014 16:34:01 -0000
X-Yahoo-Newman-Property: ymail-3
X-Yahoo-Newman-Id: 834710.28387.bm@omp1046.mail.bf1.yahoo.com
Received: (qmail 61048 invoked by uid 60001); 21 Feb 2014 16:34:01 -0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s1024;
	t=1393000441; bh=PaLWo3RygSIPkDjSorhDCYkbF0y1UuZKvNa6iI76u/4=;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=hs3wsCPnNJBieM0YqYzhg6aEq1VjxM1REqMThTioK+XOOkyQAy4r8w7NWx0lW7kc1piZsSUerif+pw2xsLaA2c/y1OTa4f9AWSscjaAubXbkNFn8DDGTZ5Ft6MP/W77l1lGc4TTaXCFRMjLSi75uel/sYOj5IGMe0koS8Aq+G/I=
DomainKey-Signature: a=rsa-sha1; q=dns; c=nofws; s=s1024; d=yahoo.com;
	h=X-YMail-OSG:Received:X-Rocket-MIMEInfo:X-Mailer:References:Message-ID:Date:From:Reply-To:Subject:To:In-Reply-To:MIME-Version:Content-Type;
	b=Hzl2S4k8E8recjWyHlLBe939m/TaFkaPOd9mND8MmoLYHpjJjBYRvRh4Iq6MD/87fNlSjc1RDYscie2BzzCeoC3acGjTQFNkO/LKGLrjBaQMq2ohxga53KEy9FWFIS7CoNSKEGVyNZesZjzJUwUPXpgN1yU2YkEheEuWseXFS14=;
X-YMail-OSG: Z_9JFocVM1nMQeQw2rOplotFaU9hJmfRMf_EwZHxIz4ref9
	ElwFpYfZ0Y0o2jkCo.QYCWngmQysK9KdL7SDgkVcREBlqJ4ZVl1HywueIPb7
	aM3lh2cvmvKwcNI68GAvQpj7o9128HSLR2uxZdl.k8Z32OKovaiO1Y7RwLgp
	TgGWqtfKsurS3_GXLDaPHKq4XWaiQgxyZGdZNVw5K_R0JhFo.5ogfUfSGQ2Z
	hCMt6zu1lgCpz0tLlGq2U4E2fLyCjsYBZ95J.OHzDthub3yqihl8xXIe9LuH
	wm8nJNGUyQPNm5HauXc7GlKqzBJk0TCFPNT_YvBktcbQUpNQ8ox..M2GSeur
	XVgaGKJhrMD7tCFimmgfR9XZLiOPCiWYby58qmH8xj86IIZpGplJRrn62.Fx
	x2GhUhki6xDNY9SglZHJ1v7N5gBufs1RFiTmuQdpPCglw_gAzWZBRFjUF1bv
	y8z.mkrgDlLcpjy6hWtb7x28yPWV_SwbbHPd7pAFar.8.5REWweHan5qafx5
	xi3ovyMWEfZgMRdBh_h4NluIKkWN0tc2rb4_U9wrA8SyxPULhFPTGFgoEaCq
	gU8wLrA--
Received: from [192.227.225.3] by web161802.mail.bf1.yahoo.com via HTTP;
	Fri, 21 Feb 2014 08:34:01 PST
X-Rocket-MIMEInfo: 002.001,
	Y2FuIHlvdSBwbGVhc2UgZXhwbGFpbiBpbiBtb3JlIGRldGFpbHMgYWJvdXQ6wqBUaGUgIkBAICIgbWFya2VycyBhcmUgZnJvbSBkaWZmKDEpLCBzbyB0aGF0IHBhdGNoKDEpIGNhbiBkbyBpdHMgd29yay53aGF0IGxpbmUgY29kZSBpIG5lZWQgdG8gYWRkPyDCoMKgCmRvIGkgbmVlZCB0byBhZGQgdGhlc2UgbGluZXMgb2YgY29kZToKbWF4X2YgPSBhdG9pKGFyZ3ZbNF0pOwrCoCDCoCBzaS5mbGFncyA9IGF0b2koYXJndls1XSk7CsKgCkFkZWwgQW1hbmkKCk0uU2MuIENhbmRpZGF0ZUBDb21wdXRlciBFbmdpbmUBMAEBAQE-
X-Mailer: YahooMailWebService/0.8.177.636
References: <20140203131144.GA31275@aepfle.de>
	<1391583040.24823.YahooMailNeo@web161802.mail.bf1.yahoo.com>
	<20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140219141207.GA9631@aepfle.de>
	<1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
	<20140221093009.GA3187@aepfle.de> 
Message-ID: <1393000441.22839.YahooMailNeo@web161802.mail.bf1.yahoo.com>
Date: Fri, 21 Feb 2014 08:34:01 -0800 (PST)
From: Adel Amani <adel.amani66@yahoo.com>
To: Olaf Hering <olaf@aepfle.de>, Xen <xen-devel@lists.xen.org>
In-Reply-To: <20140221093009.GA3187@aepfle.de>
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 21 Feb 2014 22:22:07 +0000
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Adel Amani <adel.amani66@yahoo.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0194334277532436951=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0194334277532436951==
Content-Type: multipart/alternative; boundary="-2096837515-990067434-1393000441=:22839"

---2096837515-990067434-1393000441=:22839
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

can you please explain in more details about:=A0The "@@ " markers are from =
diff(1), so that patch(1) can do its work.what line code i need to add? =A0=
=A0=0Ado i need to add these lines of code:=0Amax_f =3D atoi(argv[4]);=0A=
=A0 =A0 si.flags =3D atoi(argv[5]);=0A=A0=0AAdel Amani=0A=0AM.Sc. Candidate=
@Computer Engineering Department, University of Isfahan=0AEmail: A.Amani@en=
g.ui.ac.ir=0A=0A=0A=0AOn Friday, February 21, 2014 1:00 PM, Olaf Hering <ol=
af@aepfle.de> wrote:=0A =0AOn Thu, Feb 20, Adel Amani wrote:=0A=0A> +=A0 =
=A0 lvl =3D si.flags & XCFLAGS_DEBUG ? XTL_DEBUG: XTL_DETAIL; =0A=0APlease =
follow the code and check how si.flags gets its values.=0A=0AThe "@@ " mark=
ers are from diff(1), so that patch(1) can do its work.=0A=0A=0AOlaf
---2096837515-990067434-1393000441=:22839
Content-Type: text/html; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable

<html><body><div style=3D"color:#000; background-color:#fff; font-family:bo=
okman old style, new york, times, serif;font-size:10pt"><div id=3D"yiv38605=
76780"><div><div style=3D"color: rgb(0, 0, 0); background-color: rgb(255, 2=
can you please explain in more detail about: "The "@@ " markers are
from diff(1), so that patch(1) can do its work." What line of code do I
need to add? Do I need to add these lines of code:

    max_f = atoi(argv[4]);
    si.flags = atoi(argv[5]);

Adel Amani
M.Sc. Candidate, Computer Engineering Department, University of Isfahan
Email: A.Amani@eng.ui.ac.ir


On Friday, February 21, 2014 1:00 PM, Olaf Hering &lt;olaf@aepfle.de&gt; wrote:

On Thu, Feb 20, Adel Amani wrote:
> +    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG: XTL_DETAIL;

Please follow the code and check how si.flags gets its values.

The "@@ " markers are from diff(1), so that patch(1) can do its work.

Olaf
---2096837515-990067434-1393000441=:22839--


--===============0194334277532436951==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0194334277532436951==--
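
A note for the archive: the "@@ " markers Olaf mentions are hunk headers that diff -u emits and patch(1) reads to locate where a change applies; they are not lines to type into the source. A made-up hunk (the line numbers and context here are hypothetical) looks like:

```diff
@@ -24,1 +24,3 @@
     max_f = atoi(argv[4]);
+    si.flags = atoi(argv[5]);
+    lvl = si.flags & XCFLAGS_DEBUG ? XTL_DEBUG : XTL_DETAIL;
```

Here `-24,1` means the hunk covers 1 line starting at line 24 of the old file, and `+24,3` means it produces 3 lines starting at line 24 of the new file; lines beginning with `+` are the additions, and lines beginning with a space are unchanged context.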


From xen-devel-bounces@lists.xen.org Fri Feb 21 23:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Feb 2014 23:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WGzuQ-0008Sp-EF; Fri, 21 Feb 2014 23:53:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WGzuP-0008Sk-5A
	for Xen-devel@lists.xensource.com; Fri, 21 Feb 2014 23:53:33 +0000
Received: from [193.109.254.147:64142] by server-9.bemta-14.messagelabs.com id
	0F/20-24895-CF6E7035; Fri, 21 Feb 2014 23:53:32 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393026810!2353133!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2057 invoked from network); 21 Feb 2014 23:53:31 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 21 Feb 2014 23:53:31 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1LNrM5Z008761
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 21 Feb 2014 23:53:23 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1LNrK3Q024413
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 21 Feb 2014 23:53:20 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1LNrKRq012156; Fri, 21 Feb 2014 23:53:20 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 15:53:19 -0800
Date: Fri, 21 Feb 2014 15:53:18 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140221155318.6aaaefde@mantra.us.oracle.com>
In-Reply-To: <20140220172234.7b6847ad@mantra.us.oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>
	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>
	<52FBA5BA.4020301@linaro.org>
	<20140219182227.6a37a33c@mantra.us.oracle.com>
	<53060806.7040903@linaro.org>
	<20140220172234.7b6847ad@mantra.us.oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	george.dunlap@eu.citrix.com,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	keir.xen@gmail.com, Jan Beulich <jbeulich@suse.com>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 20 Feb 2014 17:22:34 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> On Thu, 20 Feb 2014 13:49:58 +0000
> Julien Grall <julien.grall@linaro.org> wrote:
> 
> > On 02/20/2014 02:22 AM, Mukesh Rathor wrote:
> > > On Wed, 12 Feb 2014 16:47:54 +0000
> > > Julien Grall <julien.grall@linaro.org> wrote:
> > > 
> > >> Hi Mukesh,
> > >>
> > >> On 12/17/2013 02:38 AM, Mukesh Rathor wrote:
> > >>> In preparation for the next patch, we update xsm_add_to_physmap
> > >>> to allow for checking of foreign domain. Thus, the current
> > >>> domain must have the right to update the mappings of target
> > >>> domain with pages from foreign domain.
> > >>>
> > >>> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > >>
> > >> While I was playing with XSM on ARM, I noticed that Daniel
> > >> De Graaf added xsm_map_gmfn_foreign a few months ago (see
> > >> commit 0b201e6).
> > >>
> > >> Would it be suitable to use this XSM instead of extending
> > >> xsm_add_to_physmap?
> > >>
> > >> Regards,
> > >>
> > > 
> > > Not the same thing. add to physmap could be adding to a domain's
> > > physmap pages from a foreign domain.
> > 
> > Let's assume you don't modify xsm_add_to_physmap; in this case:
> >    - xsm_add_to_physmap checks if the current domain is allowed to
> > modify the p2m of a given domain
> >    - xsm_map_gfmn_foreign checks if the given domain is allowed to
> > have foreign mapping from the foreign domain
> > 
> > Both XSM are distinct and should be used together. You don't care
> > that
> 
> I see, I thought you meant replacing one with the other. I am not a
> security expert, so I just followed the suggestions. But looking at the
> code, it looks like the above is the way to go, and I can just drop my
> xsm_add_to_physmap change patch (which, btw, doesn't check whether the
> target has access to foreign mappings, so is probably not correct).
> Thanks for noticing.


BTW, in include/xsm/xsm.h, shouldn't 

static inline int xsm_map_gmfn_foreign (struct domain *d, struct domain *t)

be

static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)

Not sure how you were able to compile with XSM enabled on ARM?

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sat Feb 22 00:20:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 00:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH0KN-0000gy-Q4; Sat, 22 Feb 2014 00:20:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WH0KM-0000gt-P8
	for Xen-devel@lists.xensource.com; Sat, 22 Feb 2014 00:20:22 +0000
Received: from [85.158.137.68:57003] by server-10.bemta-3.messagelabs.com id
	F4/F4-07302-54DE7035; Sat, 22 Feb 2014 00:20:21 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393028420!2216516!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28070 invoked from network); 22 Feb 2014 00:20:21 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 00:20:21 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so3062900wes.39
	for <Xen-devel@lists.xensource.com>;
	Fri, 21 Feb 2014 16:20:20 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=LNK7h7vg4AOFhD/Dc7kocT6w3hUeM4hlyY9dSLBA0FE=;
	b=VoD2niq+JJuHM7uNSEas8vXYKElMBQR1YN7FNISuyJ252pcALQUONagde3zefPC6Y2
	NGd94+mlbMPsb9SQxCDiWGDkZohSE/nBKAk9/0jciDPypeUFSc7yknm8T9/bOevmofBk
	XHfjQd0Te/gZ26v06zYNc1MI6GFB/eOy/mZnyBH1J2P+xDAXj2TDdV5WvDESi0bAAi26
	VL9+8x3SyWOIMcWVeyd+avKlCZUWQxLzjtno5vEesuoZ/mF2YWLKGQ0icyEQVulHQtMt
	00PLRlx5JL2gNjCQWWPnMJA2A4kLWElgVNiUv4K/LjOkX31UVnOKlW72wTROGnYDbVMC
	E37g==
X-Gm-Message-State: ALoCoQmqCfKOIv3sZxiD2pQe5V779b1Y39PMeFG1rG61+bquGA+EzVHPuVRumb9zpKUSnFPjKpzf
X-Received: by 10.180.12.43 with SMTP id v11mr5219538wib.33.1393028420477;
	Fri, 21 Feb 2014 16:20:20 -0800 (PST)
Received: from localhost.localdomain
	(cpc8-cmbg15-2-0-cust169.5-4.cable.virginm.net. [86.30.140.170])
	by mx.google.com with ESMTPSA id cm5sm12164306wid.5.2014.02.21.16.20.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 21 Feb 2014 16:20:19 -0800 (PST)
Message-ID: <5307ED42.3070409@linaro.org>
Date: Sat, 22 Feb 2014 00:20:18 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Mukesh Rathor <mukesh.rathor@oracle.com>
References: <1387247911-28846-1-git-send-email-mukesh.rathor@oracle.com>	<1387247911-28846-6-git-send-email-mukesh.rathor@oracle.com>	<52FBA5BA.4020301@linaro.org>	<20140219182227.6a37a33c@mantra.us.oracle.com>	<53060806.7040903@linaro.org>	<20140220172234.7b6847ad@mantra.us.oracle.com>
	<20140221155318.6aaaefde@mantra.us.oracle.com>
In-Reply-To: <20140221155318.6aaaefde@mantra.us.oracle.com>
Cc: Xen-devel@lists.xensource.com, Ian Campbell <ian.campbell@citrix.com>,
	george.dunlap@eu.citrix.com, tim@xen.org, keir.xen@gmail.com,
	Jan Beulich <jbeulich@suse.com>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [V7 PATCH 5/7] pvh: change xsm_add_to_physmap
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 21/02/14 23:53, Mukesh Rathor wrote:
>
> BTW, in include/xsm/xsm.h, shouldn't
>
> static inline int xsm_map_gmfn_foreign (struct domain *d, struct domain *t)
>
> be
>
> static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)
>
> not sure how you were able to compile xsm enabled in arm???

XSM doesn't compile at all on ARM. I'm currently working on a patch 
series for that.

One of the patches fixes xsm_map_gmfn_foreign; see:
http://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=commit;h=8414a0e5873d1dbcb3a7d20c45bcb2e683142dc5

If you want, I can send the patch Monday.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sat Feb 22 00:31:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 00:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH0VP-0000sI-2y; Sat, 22 Feb 2014 00:31:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WH0VN-0000sD-RX
	for xen-devel@lists.xen.org; Sat, 22 Feb 2014 00:31:46 +0000
Received: from [85.158.139.211:49519] by server-3.bemta-5.messagelabs.com id
	52/31-13671-1FFE7035; Sat, 22 Feb 2014 00:31:45 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393029102!970429!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12490 invoked from network); 22 Feb 2014 00:31:44 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Feb 2014 00:31:44 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1M0VdsN005799
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Sat, 22 Feb 2014 00:31:40 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1M0VcIl005056
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Sat, 22 Feb 2014 00:31:39 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1M0Vcxm023037; Sat, 22 Feb 2014 00:31:38 GMT
Message-Id: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
Received: from [192.168.2.114] (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 16:31:38 -0800
Date: Fri, 21 Feb 2014 19:31:35 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
MIME-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
>
> Hi Konrad,
>
> Thanks for your reply.
>
> Yes, I am aware of the pciback.  Unfortunately it doesn't seem to
> support pci-e passthrough. (I could be wrong here)

I just did PCIe pass through of a VF of an SR-IOV device. It certainly
is PCIe.

>
> There are two reasons that I am interested in this.  For one, my
> project calls for pci-e device passthrough, which can't be
> accomplished with 440fx chipset emulation.  Secondly, I feel we ought
> to move on with the technology.  440fx is ancient in computer terms.
> Qemu is good and all that, but if it refuses to support pci-e natively
> then it's just a matter of time that it will become obsoleted.  The
> trend is clear that pci-e is taking over the world.
>

I am not sure what you are saying, but it does not matter whether QEMU
emulates 440fx or q35 for PCI pass through.

> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 2:50 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>
> On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > Hi all,
> >
> > I am playing with q35 chipset in qemu (1.6.1).  It seems we can't
> > enable q35 machine under xen yet.  I made a few quick hacks which
> > all fail miserably (linux kernel oops and window BSOD).  I was
> > wondering why this hasn't been done (q35 was introduced into qemu
> > in 2009).
> >
> > Next question, vfio works very well for me in standalone qemu (with
> > Linux host handling iommu), but is that supported under xen?  I
> > haven't tried anything there yet because my gut-feeling is that it
> > won't work.  Because passing vfio device to qemu can only be done on
> > qemu commandline, and xen is not aware of this passing through
> > device, thus not able to make iommu arrangement for this device.
> > Am I on the right track here?
>
> Yes and no. VFIO won't work - but QEMU does do PCI passthrough under
> Xen. It uses a different mechanism (and you need to bind the device
> to pciback).
>
> >
> > I am interested in implementing both these two features.  I'd like
> > to connect with anyone who's already on this so we don't duplicate
> > the efforts.
>
> What do you need Q35 for?
>
> >
> > Regards/Eniac
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
>
> Hi Konrad,
>
> Thanks for your reply.
>
> Yes, I am aware of the pciback.  Unfortunately it doesn't seem to support pci-e passthrough. (I could be wrong here)

I just did PCIe pass through of a VF of an SR-IOV device. It certainly is PCIe.

>
> There are two reasons that I am interested in this.  For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time before it becomes obsolete.  The trend is clear that pci-e is taking over the world.
>

I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI pass through.

> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 2:50 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>
> On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > Hi all,
> >
> > I am playing with q35 chipset in qemu (1.6.1).  It seems we can't enable q35 machine under xen yet.  I made a few quick hacks which all fail miserably (linux kernel oops and windows BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> >
> > Next question, vfio works very well for me in standalone qemu (with Linux host handling iommu), but is that supported under xen?  I haven't tried anything there yet because my gut-feeling is that it won't work.  Because passing vfio device to qemu can only be done on qemu commandline, and xen is not aware of this passing through device, thus not able to make iommu arrangement for this device.  Am I on the right track here?
>
> Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
>
> >
> > I am interested in implementing both these two features.  I'd like to connect with anyone who's already on this so we don't duplicate the efforts.
>
> What do you need Q35 for?
>
> >
> > Regards/Eniac
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
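For readers following the pciback route Konrad mentions, a minimal sketch of binding a device to pciback and attaching it to a guest is below. This is an illustrative outline, not from the thread: the BDF (0000:04:10.0) and domain name (hvm-guest) are placeholders, and it must run as root in dom0 on a Xen system with xen-pciback available.

```shell
#!/bin/sh
# Sketch (assumed values): hide a PCI device from dom0 via pciback,
# then pass it through to a guest with xl.
BDF="0000:04:10.0"     # placeholder device address
DOMAIN="hvm-guest"     # placeholder guest name

set -e
modprobe xen-pciback 2>/dev/null || true   # pciback may be built in

# Detach the device from its current dom0 driver, then bind it to pciback.
echo "$BDF" > "/sys/bus/pci/devices/$BDF/driver/unbind"
echo "$BDF" > /sys/bus/pci/drivers/pciback/new_slot
echo "$BDF" > /sys/bus/pci/drivers/pciback/bind

# Hot-attach to a running guest; alternatively list the device in the
# guest config, e.g. pci = [ "04:10.0" ], before starting it.
xl pci-attach "$DOMAIN" "$BDF"
```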

From xen-devel-bounces@lists.xen.org Sat Feb 22 00:37:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 00:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH0ag-000109-Se; Sat, 22 Feb 2014 00:37:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WH0af-000102-7C
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 00:37:13 +0000
Received: from [193.109.254.147:25390] by server-15.bemta-14.messagelabs.com
	id 63/CE-10839-831F7035; Sat, 22 Feb 2014 00:37:12 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393029430!2356669!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8275 invoked from network); 22 Feb 2014 00:37:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 00:37:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,522,1389744000"; d="scan'208";a="104845542"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Feb 2014 00:37:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 19:37:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WH0aa-0003E6-JZ;
	Sat, 22 Feb 2014 00:37:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WH0aa-0006oJ-7L;
	Sat, 22 Feb 2014 00:37:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25253-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 00:37:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.1-testing test] 25253: regressions - trouble:
	broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25253 xen-4.1-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25253/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv           6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl           6 leak-check/basis(6)       fail REGR. vs. 24859
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 24859
 test-amd64-amd64-pair    10 leak-check/basis/dst_host(10) fail REGR. vs. 24859
 test-amd64-amd64-pair      9 leak-check/basis/src_host(9) fail REGR. vs. 24859

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-pcipt-intel  6 leak-check/basis(6)    fail REGR. vs. 24859
 test-amd64-amd64-xl-sedf      6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-sedf-pin  6 leak-check/basis(6)       fail REGR. vs. 24859
 test-amd64-amd64-xl-winxpsp3  6 leak-check/basis(6)     fail like 25254-bisect
 test-amd64-amd64-xl-win7-amd64  6 leak-check/basis(6)   fail like 25255-bisect
 test-amd64-amd64-xl-qemut-win7-amd64 6 leak-check/basis(6) fail like 25258-bisect
 test-amd64-amd64-xl-qemut-winxpsp3 6 leak-check/basis(6) fail like 25265-bisect

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-freebsd10-i386 18 leak-check/check       fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-qemuu-freebsd10-amd64 18 leak-check/check      fail never pass
 test-i386-i386-xl-winxpsp3   14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-i386-i386-xl-qemut-winxpsp3 14 guest-stop                 fail never pass

version targeted for testing:
 xen                  934858f00267a92bc2a2995a0c634d02d2c60fbd
baseline version:
 xen                  e21b2fa19946806ea27873a8808bc1ace48b7c69

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Jan Beulich <jbeulich@suse.com>
------------------------------------------------------------

jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-i386-i386-xl                                            pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-freebsd10-amd64                        fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         broken  
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               broken  
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-qemuu-freebsd10-i386                         fail    
 test-amd64-amd64-xl-pcipt-intel                              broken  
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        broken  
 test-amd64-i386-pair                                         pass    
 test-i386-i386-pair                                          pass    
 test-amd64-amd64-xl-sedf-pin                                 broken  
 test-amd64-amd64-pv                                          broken  
 test-amd64-i386-pv                                           pass    
 test-i386-i386-pv                                            pass    
 test-amd64-amd64-xl-sedf                                     broken  
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-i386-i386-xl-qemut-winxpsp3                             fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 broken  
 test-i386-i386-xl-winxpsp3                                   fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 934858f00267a92bc2a2995a0c634d02d2c60fbd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Feb 20 08:43:11 2014 +0100

    x86/AMD: work around erratum 793 for 32-bit
    
    The original change went into a 64-bit only code section, thus leaving
    the issue unfixed on 32-bit. Re-order code to address this.
    
    This is part of CVE-2013-6885 / XSA-82.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 01:39:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 01:39:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH1Yf-0005KG-3d; Sat, 22 Feb 2014 01:39:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WH1Yd-0005KB-6x
	for xen-devel@lists.xenproject.org; Sat, 22 Feb 2014 01:39:11 +0000
Received: from [193.109.254.147:34249] by server-12.bemta-14.messagelabs.com
	id 53/AB-17220-EBFF7035; Sat, 22 Feb 2014 01:39:10 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393033149!6011137!1
X-Originating-IP: [209.85.215.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3754 invoked from network); 22 Feb 2014 01:39:09 -0000
Received: from mail-la0-f44.google.com (HELO mail-la0-f44.google.com)
	(209.85.215.44)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 01:39:09 -0000
Received: by mail-la0-f44.google.com with SMTP id hr13so54902lab.3
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 17:39:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=M/AdDSSCfC7h5lJFxmwOWqeHgCGPrqez2na5463s61E=;
	b=PUjfggrsJ6ilnMtuuc7kY+oCGdoCejFZvdL402l6+xhQqbEiyTO0RtrxVwAjCWU5KP
	q68NOicLNoQnhzNzq4T2gcFdY7UazSM0wx2lml1pi5CMknAYxlCcKKdlDlEZN1Bem5ch
	//ZQW/dzKaHqVRE1uL1afHC1vRztrr5HOdQKxhtDvJN3bRIPXaAjMMjbPNZMdU8UFTEH
	Crf3hwLODM/DVJiZ5cAFHYff4f7XSpjVQffPzTJiRRAX1CiSt3GYyahu3DF+fLXkTZCZ
	70HNegcw3Kb+/OH92QdKD9Z050PBF6B/DX83SBsrMfWioHxid80+5vjAGxHSIo1L4CsB
	oxWA==
X-Received: by 10.153.7.137 with SMTP id dc9mr5926180lad.25.1393033148625;
	Fri, 21 Feb 2014 17:39:08 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Fri, 21 Feb 2014 17:38:48 -0800 (PST)
In-Reply-To: <CAB=NE6XR+FW2X2_nr2JAxgQD+zpm8=Xq7Y4fTf740rLhGCOzEw@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-2-git-send-email-mcgrof@do-not-panic.com>
	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
	<20140220091958.62a8b444@nehalam.linuxnetplumber.net>
	<CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
	<53074E6B.6030006@citrix.com>
	<CAB=NE6XR+FW2X2_nr2JAxgQD+zpm8=Xq7Y4fTf740rLhGCOzEw@mail.gmail.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 21 Feb 2014 17:38:48 -0800
X-Google-Sender-Auth: mXtoea0mNN0QkOjk8vSn8G8ys5s
Message-ID: <CAB=NE6XtiAG0Q_kEUcomwSss2ftXeiL7Y80MLB55rEtEMGnadg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Luis R. Rodriguez" <mcgrof@suse.com>, bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 8:01 AM, Luis R. Rodriguez
<mcgrof@do-not-panic.com> wrote:
> On Fri, Feb 21, 2014 at 5:02 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>>> Agreed that's the best strategy and I'll work on sending patches to
>>> brctl to enable the root_block preference. This approach however also
>>
>> I don't think brctl should deal with any Xen specific stuff. I assume there
>> is a misunderstanding in this thread: when I (and possibly other Xen folks)
>> talk about "userspace" or "toolstack" here, I mean Xen specific tools which
>> use e.g. brctl to set up bridges. Not brctl itself.
>
> I did mean brctl, but as I looked at the code it doesn't use
> rtnl_open(), and I'm not sure if Stephen would want that.

Actually that'd be the wrong tool to extend; iproute2 would be the
new way, with:

ip link add dev xenbr0 type bridge
ip link set dev eth0 master xenbr0
ip link set dev vif1.0 master xenbr0 <root_block>

where root_block would be the new desired argument. This would use
rtnetlink RTM_SETLINK + IFLA_MASTER, which in turn kicks off the
bridge's ndo_add_slave(). Still, this seems to require the eth0
device to actually exist, and as such, from what I can tell, we can't
set the root_block preference until *after* the addition onto the
bridge, which means the bridge could still take the vif1.0 MAC address
momentarily. This is of course only an issue if the link was up during
the additions. This makes me think perhaps nothing is needed then and
scripts could just use:

bridge link set dev vif1.0 root_block on

I also just noticed that if the port that was the bridge root port
gets root_block toggled on, we don't kick off the newly blocked port;
I just verified this. Note that removing the interface from the bridge
does, however, reset the bridge with a proper new root port:

ip link set dev vif1.0 nomaster

For old userspace with brctl and no iproute2 we're out of luck;
that means we can't use root_block (xen-netblock was added in
v2.6.39).
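Putting the above together, the iproute2-only flow discussed in this
thread can be sketched as follows. Interface names are illustrative,
and keeping the vif link down during the addition is one way to avoid
the momentary window described above; this needs an iproute2 with
bridge(8) support and root privileges.

```shell
# Illustrative sequence; xenbr0, eth0 and vif1.0 are example names.
ip link add dev xenbr0 type bridge
ip link set dev eth0 master xenbr0

ip link set dev vif1.0 down               # avoid the window while unconfigured
ip link set dev vif1.0 master xenbr0      # RTM_SETLINK + IFLA_MASTER -> ndo_add_slave()
bridge link set dev vif1.0 root_block on  # this port must never become root port
ip link set dev vif1.0 up
```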

Stephen, given all this, we can still add the priv_flags flag to help
out as proposed, but I'd make it just toggle the new root_block flag;
that'd enable drivers to use this from initialization. Let me know if
you have other suggestions or things I may have missed.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<20140216105754.63738163@nehalam.linuxnetplumber.net>
	<CAB=NE6WwbOXNLO_Mn42i7y62pEsmDOWd35B621eiJk4iaE3Hfg@mail.gmail.com>
	<1392803559.23084.99.camel@kazak.uk.xensource.com>
	<5304C13F.3030802@citrix.com>
	<CAB=NE6X6Vuo3iib0W-c5cxv0QBpnZtCC0sFyuULugQoEZAbRtg@mail.gmail.com>
	<20140219090855.610c0e04@nehalam.linuxnetplumber.net>
	<CAB=NE6UAqoiFAJdckE1d7cgMZnU6_=8X+N=tSc6qH2bgH08M7Q@mail.gmail.com>
	<20140220091958.62a8b444@nehalam.linuxnetplumber.net>
	<CAB=NE6WnMFkH7JPNt+ROiWEvhvZ03vdGr21STf75DV9fRaK=PA@mail.gmail.com>
	<53074E6B.6030006@citrix.com>
	<CAB=NE6XR+FW2X2_nr2JAxgQD+zpm8=Xq7Y4fTf740rLhGCOzEw@mail.gmail.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 21 Feb 2014 17:38:48 -0800
X-Google-Sender-Auth: mXtoea0mNN0QkOjk8vSn8G8ys5s
Message-ID: <CAB=NE6XtiAG0Q_kEUcomwSss2ftXeiL7Y80MLB55rEtEMGnadg@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, kvm@vger.kernel.org,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Luis R. Rodriguez" <mcgrof@suse.com>, bridge@lists.linux-foundation.org,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Stephen Hemminger <stephen@networkplumber.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [RFC v2 1/4] bridge: enable interfaces to opt out
 from becoming the root bridge
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 8:01 AM, Luis R. Rodriguez
<mcgrof@do-not-panic.com> wrote:
> On Fri, Feb 21, 2014 at 5:02 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
>>> Agreed that's the best strategy and I'll work on sending patches to
>>> brctl to enable the root_block preference. This approach however also
>>
>> I don't think brctl should deal with any Xen specific stuff. I assume there
>> is a misunderstanding in this thread: when I (and possibly other Xen folks)
>> talk about "userspace" or "toolstack" here, I mean Xen specific tools which
>> use e.g. brctl to set up bridges. Not brctl itself.
>
> I did mean brctl, but as I looked at the code it doesn't use
> rtnl_open(), and I'm not sure Stephen would want that.

Actually that'd be the wrong tool to extend; iproute2 would be the
new way:

ip link add dev xenbr0 type bridge
ip link set dev eth0 master xenbr0
ip link set dev vif1.0 master xenbr0 <root_block>

where root_block would be the new desired argument. This would use the
rtnetlink RTM_SETLINK + IFLA_MASTER, which in turn kicks off the
bridge's ndo_add_slave(). Still, it seems this requires the eth0
device to actually exist, so from what I can tell we can't set the
root_block preference until *after* the addition onto the bridge,
which means the bridge could still momentarily take the vif1.0 MAC
address. This is of course only an issue if the link was up during the
addition. That makes me think perhaps nothing new is needed and
scripts could just use:

bridge link set dev vif1.0 root_block on
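
For completeness, here is a sketch of the whole sequence. It assumes a
kernel and bridge utility that actually carry the proposed root_block
knob, and it keeps the vif down while enslaving it so the window
described above never opens:

```shell
# Build the bridge and enslave the physical uplink first.
ip link add dev xenbr0 type bridge
ip link set dev eth0 master xenbr0

# Enslave the vif while it is down, so there is no window in which
# the bridge can take its MAC or elect it as root port.
ip link set dev vif1.0 down
ip link set dev vif1.0 master xenbr0
bridge link set dev vif1.0 root_block on    # proposed knob
ip link set dev vif1.0 up
```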

I also just noticed (and verified) that if the port that was the
bridge's root port gets root_block toggled on, we don't kick off the
newly blocked port. Note that removing the interface from the bridge
does, however, reset the bridge with a proper new root port:

ip link set dev vif1.0 nomaster
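
Given that, a script-level workaround would be to bounce the port's
bridge membership after the toggle so a new root port gets elected.
This is only a sketch, again assuming the root_block knob exists, and
it assumes per-port state is lost on nomaster and must be re-applied:

```shell
bridge link set dev vif1.0 root_block on    # current root port is NOT kicked off
ip link set dev vif1.0 nomaster             # leaving the bridge forces re-election
ip link set dev vif1.0 master xenbr0        # rejoin the bridge...
bridge link set dev vif1.0 root_block on    # ...and re-apply (port state was reset)
```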

For old userspace with brctl and no iproute2 we're shit out of luck;
this means we can't use root_block at all (xen-netblock was added in
v2.6.39).

Stephen, given all this, can we add the priv_flags flag to help out as
proposed? I'd make it just toggle the new root_block flag; that would
let drivers use this from initialization. Let me know if you have
other suggestions or things I may have missed.

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 01:41:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 01:41:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH1ap-0005Nt-Q3; Sat, 22 Feb 2014 01:41:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WH1ao-0005Nn-Gn
	for xen-devel@lists.xenproject.org; Sat, 22 Feb 2014 01:41:26 +0000
Received: from [193.109.254.147:15084] by server-15.bemta-14.messagelabs.com
	id FF/FE-10839-54008035; Sat, 22 Feb 2014 01:41:25 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393033284!6029247!1
X-Originating-IP: [209.85.215.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21853 invoked from network); 22 Feb 2014 01:41:25 -0000
Received: from mail-la0-f51.google.com (HELO mail-la0-f51.google.com)
	(209.85.215.51)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 01:41:25 -0000
Received: by mail-la0-f51.google.com with SMTP id c6so2987482lan.24
	for <xen-devel@lists.xenproject.org>;
	Fri, 21 Feb 2014 17:41:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=rSy9t7po+CJq+t8eFpyH31kiX6WWvXRhUpGgIR8FsQw=;
	b=0uyV806JFADKkA4OAQJbURtRnkPpUjTdbJwdIIkcHubmBRRILl/wpL3HUDd0lkLVMR
	JSZlXhqBMmtwdGrWoF83t0n10vaXtkRmOx8x0ctpuA6jPmOoWxV5vxoKpOrMeQ7ydH5f
	uKv+I+4eJY/P6cfdQ0Kjh/J0+g1ZEr2a21RYCI3foP5pF9Ma3NvvbRjL0RmXbuKnDPmt
	sODmyAFqmZ1NEs8RrGoBeGLeViS1mq1qpj3CyTXjXu8ycAx4FxNRdEQDuG8cDnRwUxwd
	CtiY6oqvaVYJwEVCHP6d1GVloFzf2aHqRQRlzgTXV7ZDT5ddlx8FSkHd1oFp23vYIfWX
	D6LQ==
X-Received: by 10.152.229.225 with SMTP id st1mr5985229lac.2.1393033270054;
	Fri, 21 Feb 2014 17:41:10 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Fri, 21 Feb 2014 17:40:50 -0800 (PST)
In-Reply-To: <53074E6C.5080702@citrix.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<53050244.1020106@citrix.com>
	<CAB=NE6VswYVF1BOM+vwEh3MaX7sh0gvLqz0U0JoEFDS-EzO9Pg@mail.gmail.com>
	<53074E6C.5080702@citrix.com>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Fri, 21 Feb 2014 17:40:50 -0800
X-Google-Sender-Auth: sy1FSNnveYeAwZGuNJ_qoHquUb8
Message-ID: <CAB=NE6XBcpOkktvEcGh=c9tTfAJjrnGWGDN9GmG-n+cx83-LLQ@mail.gmail.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Dan Williams <dcbw@redhat.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 5:02 AM, Zoltan Kiss <zoltan.kiss@citrix.com> wrote:
> Check out how the current Xen scripts do routed networking:
>
> http://wiki.xen.org/wiki/Xen_Networking#Associating_routes_with_virtual_devices
>
> Note, there are no bridges involved here! As the above page says, the
> backend has to have an IP address, though maybe that's not true
> anymore. I'm not too familiar with this setup either; I've used it
> only once.

Thanks. In that case I do think adding a bridge, adding the backend
interface to it, and then adding a route to the frontend IP would
suffice to cover that case, but I'm pretty limited on test devices, so
I'd appreciate it if someone with a setup like that could test it as
an alternative. Please recall that the possible gains here should be
pretty significant in terms of simplification. And of course, I still
haven't had the time / systems to test the NAT case...
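
As a sketch of what I have in mind — the 10.0.1.2/32 frontend address
below is purely hypothetical, standing in for whatever the guest is
actually assigned:

```shell
# Bridge holding only the backend vif; no IP address on the vif itself.
ip link add dev xenbr0 type bridge
ip link set dev vif1.0 master xenbr0
ip link set dev vif1.0 up
ip link set dev xenbr0 up

# Route the frontend's address via the bridge instead of the raw vif.
ip route add 10.0.1.2/32 dev xenbr0
```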

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 02:24:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 02:24:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH2Fo-00064M-Ma; Sat, 22 Feb 2014 02:23:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WH2Fl-000645-Cx
	for xen-devel@lists.xenproject.org; Sat, 22 Feb 2014 02:23:45 +0000
Received: from [85.158.139.211:56950] by server-16.bemta-5.messagelabs.com id
	1C/9C-05060-03A08035; Sat, 22 Feb 2014 02:23:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393035822!5533246!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8250 invoked from network); 22 Feb 2014 02:23:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 22 Feb 2014 02:23:43 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1M2Nfel006024
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Sat, 22 Feb 2014 02:23:41 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1M2Netw011271
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Sat, 22 Feb 2014 02:23:40 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1M2Ndrs025497
	for <xen-devel@lists.xenproject.org>; Sat, 22 Feb 2014 02:23:39 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 18:23:39 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 21 Feb 2014 18:23:22 -0800
Message-Id: <1393035803-9483-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Subject: [Xen-devel] [PATCH] pvh bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Some rearrangement in the Linux code causes it to go through certain
hypercalls that will cause corruption in Xen and a crash. I like
having a whitelist for a big feature while it goes through its
adolescence, to catch such bugs, but whatever. Since it affects PVH
dom0 paths only, I didn't think it was necessary for 4.4.

The attached patch adds a check for that. Also, certain paths (IIRC it
was from xentrace) cause a panic in hvm_hap_nested_page_fault, so it
adds a check in there too.

thanks
Mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 02:24:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 02:24:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH2Fn-00064F-F1; Sat, 22 Feb 2014 02:23:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WH2Fl-000646-D4
	for xen-devel@lists.xenproject.org; Sat, 22 Feb 2014 02:23:45 +0000
Received: from [193.109.254.147:34277] by server-9.bemta-14.messagelabs.com id
	89/D8-24895-03A08035; Sat, 22 Feb 2014 02:23:44 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393035822!764514!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15219 invoked from network); 22 Feb 2014 02:23:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Feb 2014 02:23:43 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1M2Nfgh006023
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Sat, 22 Feb 2014 02:23:41 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1M2NewS019616
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL)
	for <xen-devel@lists.xenproject.org>; Sat, 22 Feb 2014 02:23:41 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1M2NeMN011275
	for <xen-devel@lists.xenproject.org>; Sat, 22 Feb 2014 02:23:40 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 21 Feb 2014 18:23:40 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 21 Feb 2014 18:23:23 -0800
Message-Id: <1393035803-9483-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393035803-9483-1-git-send-email-mukesh.rathor@oracle.com>
References: <1393035803-9483-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [PATCH] pvh bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Nested HVM is presently not supported for PVH, so calling
hvm_hap_nested_page_fault on certain paths will crash.

The rearrangement in the Linux code causes it to go through paths that
corrupt hvm_domain structs and make Xen panic for a PVH dom0.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/hvm/hvm.c | 3 +++
 xen/arch/x86/irq.c     | 4 ++--
 xen/arch/x86/physdev.c | 4 ++++
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..a4a3dcf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
     int sharing_enomem = 0;
     mem_event_request_t *req_ptr = NULL;
 
+    if ( is_pvh_vcpu(v) )
+        return 0;
+
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
      * If this fails, inject a nested page fault into the guest.
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index db70077..88444be 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1068,13 +1068,13 @@ bool_t cpu_has_pending_apic_eoi(void)
 
 static inline void set_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         set_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
 static inline void clear_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         clear_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index bc0634c..9f85857 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -339,6 +339,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         unsigned long mfn;
         struct page_info *page;
 
+        ret = -ENOSYS;
+        if ( is_pvh_vcpu(current) )
+            break;
+
         ret = -EFAULT;
         if ( copy_from_guest(&info, arg, 1) != 0 )
             break;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

Nested HVM is presently not supported for PVH. Calling
hvm_hap_nested_page_fault in certain paths will crash.

The rearrangement in the Linux code causes it to go through paths that
corrupt hvm_domain structs and make Xen panic for PVH dom0.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/hvm/hvm.c | 3 +++
 xen/arch/x86/irq.c     | 4 ++--
 xen/arch/x86/physdev.c | 4 ++++
 3 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..a4a3dcf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
     int sharing_enomem = 0;
     mem_event_request_t *req_ptr = NULL;
 
+    if ( is_pvh_vcpu(v) )
+        return 0;
+
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
      * If this fails, inject a nested page fault into the guest.
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index db70077..88444be 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1068,13 +1068,13 @@ bool_t cpu_has_pending_apic_eoi(void)
 
 static inline void set_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         set_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
 static inline void clear_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         clear_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index bc0634c..9f85857 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -339,6 +339,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         unsigned long mfn;
         struct page_info *page;
 
+        ret = -ENOSYS;
+        if ( is_pvh_vcpu(current) )
+            break;
+
         ret = -EFAULT;
         if ( copy_from_guest(&info, arg, 1) != 0 )
             break;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 03:25:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 03:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH3DV-0006Z7-6l; Sat, 22 Feb 2014 03:25:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WH3DT-0006Z2-Ad
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 03:25:27 +0000
Received: from [85.158.137.68:5540] by server-3.bemta-3.messagelabs.com id
	A1/C0-14520-3A818035; Sat, 22 Feb 2014 03:25:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393039520!3452259!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24497 invoked from network); 22 Feb 2014 03:25:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 03:25:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="104861037"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Feb 2014 03:25:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 22:25:18 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WH3DK-00043N-Ju;
	Sat, 22 Feb 2014 03:25:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WH3DK-0003wj-E2;
	Sat, 22 Feb 2014 03:25:18 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25259-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 03:25:18 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 25259: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25259 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25259/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 24873

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass

version targeted for testing:
 linux                dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
baseline version:
 linux                a6d2ebcda7cb7467b3f5ca597710be25cc8ad76f

------------------------------------------------------------
People who touched revisions under test:
  "Theodore Ts'o" <tytso@mit.edu>
  Andrew Morton <akpm@linux-foundation.org>
  Asias He <asias@redhat.com>
  Avi Kivity <avi@redhat.com>
  Ben Hutchings <ben@decadent.org.uk>
  Benjamin LaHaise <bcrl@kvack.org>
  Bojan Smojver <bojan@rexursive.com>
  Dan Rosenberg <dan.j.rosenberg@gmail.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Rientjes <rientjes@google.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  H. Peter Anvin <hpa@zytor.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jeff Layton <jlayton@redhat.com>
  Jiang Liu <liuj97@gmail.com>
  KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
  Li Zefan <lizefan@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Bellinger <nab@linux-iscsi.org>
  Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Paul Moore <pmoore@redhat.com>
  Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Rafael J. Wysocki <rjw@sisk.pl>
  Roland Dreier <roland@purestorage.com>
  Rusty Russell <rusty@rustcorp.com.au>
  Seth Forshee <seth.forshee@canonical.com>
  Stephen Smalley <sds@tycho.nsa.gov>
  Steven Rostedt <rostedt@goodmis.org>
  Tao Ma <boyu.mt@taobao.com>
  Trond Myklebust <Trond.Myklebust@netapp.com>
  Xishi Qiu <qiuxishi@huawei.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
+ branch=linux-3.4
+ revision=dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b:tested/linux-3.4
Counting objects: 199, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (138/138), 34.68 KiB, done.
Total 138 (delta 110), reused 138 (delta 110)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   a6d2ebc..dd12c7c  dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b -> tested/linux-3.4
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b:tested/linux-3.4
Counting objects: 199, done.
Compressing objects: 100% (27/27), done.
Writing objects: 100% (138/138), 34.68 KiB, done.
Total 138 (delta 110), reused 138 (delta 110)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   a6d2ebc..dd12c7c  dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b -> tested/linux-3.4
+ exit 0

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 04:29:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 04:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH4Ce-00070m-H8; Sat, 22 Feb 2014 04:28:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WH4Cd-00070h-GH
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 04:28:39 +0000
Received: from [85.158.139.211:49002] by server-14.bemta-5.messagelabs.com id
	BC/28-27598-67728035; Sat, 22 Feb 2014 04:28:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393043315!985763!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25312 invoked from network); 22 Feb 2014 04:28:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 04:28:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="103187095"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Feb 2014 04:28:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 23:28:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WH4CX-0004M2-Iw;
	Sat, 22 Feb 2014 04:28:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WH4CX-0006pH-3f;
	Sat, 22 Feb 2014 04:28:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25262-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 04:28:33 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing baseline test] 25262: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

"Old" tested version had not actually been tested; therefore in this
flight we test it, rather than a new candidate.  The baseline, if
any, is the most recent actually tested revision.

flight 25262 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25262/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 04:38:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 04:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH4Ln-0007B8-PN; Sat, 22 Feb 2014 04:38:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WH4Lf-0007B3-7R
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 04:37:59 +0000
Received: from [85.158.143.35:13891] by server-3.bemta-4.messagelabs.com id
	B5/F4-11539-6A928035; Sat, 22 Feb 2014 04:37:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393043876!7493050!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9824 invoked from network); 22 Feb 2014 04:37:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 04:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="104866040"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Feb 2014 04:37:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 23:37:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WH4Lb-0004PL-4h;
	Sat, 22 Feb 2014 04:37:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WH4La-0008UC-Ve;
	Sat, 22 Feb 2014 04:37:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25260-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 04:37:55 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25260: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25260 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25260/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=869f5b6deab53bc924798df4dacfae92ee198cb4
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 869f5b6deab53bc924798df4dacfae92ee198cb4
+ branch=xen-unstable
+ revision=869f5b6deab53bc924798df4dacfae92ee198cb4
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 869f5b6deab53bc924798df4dacfae92ee198cb4:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   98b9a9e..869f5b6  869f5b6deab53bc924798df4dacfae92ee198cb4 -> master

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 04:38:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 04:38:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH4Ln-0007B8-PN; Sat, 22 Feb 2014 04:38:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WH4Lf-0007B3-7R
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 04:37:59 +0000
Received: from [85.158.143.35:13891] by server-3.bemta-4.messagelabs.com id
	B5/F4-11539-6A928035; Sat, 22 Feb 2014 04:37:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393043876!7493050!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9824 invoked from network); 22 Feb 2014 04:37:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 04:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="104866040"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Feb 2014 04:37:55 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 21 Feb 2014 23:37:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WH4Lb-0004PL-4h;
	Sat, 22 Feb 2014 04:37:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WH4La-0008UC-Ve;
	Sat, 22 Feb 2014 04:37:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25260-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 04:37:55 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25260: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25260 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25260/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-unstable
+ revision=869f5b6deab53bc924798df4dacfae92ee198cb4
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-unstable 869f5b6deab53bc924798df4dacfae92ee198cb4
+ branch=xen-unstable
+ revision=869f5b6deab53bc924798df4dacfae92ee198cb4
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-unstable
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-unstable
++ : daily-cron.xen-unstable
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.xen-unstable
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-unstable
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 869f5b6deab53bc924798df4dacfae92ee198cb4:master
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   98b9a9e..869f5b6  869f5b6deab53bc924798df4dacfae92ee198cb4 -> master
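The trace above serializes pushes by taking an exclusive lock on the repos
directory and then exec'ing ap-push while holding it (with-lock-ex -w). The
same pattern can be sketched with the more widely available flock(1); the lock
path and command here are placeholders, not osstest's actual ones:

```shell
# Run a command under an exclusive lock, blocking until the lock is free
# (a sketch of the with-lock-ex -w pattern; paths and commands are placeholders).
lock=./repos.lock
run_locked() {
    # -x: exclusive lock, blocks like with-lock-ex -w; -c: command to run under it
    flock -x "$1" -c "$2"
}
run_locked "$lock" "echo pushing under lock"
```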

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 08:11:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 08:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH7fW-0000Uz-Vf; Sat, 22 Feb 2014 08:10:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1WH7fW-0000Uu-5W
	for xen-devel@lists.xen.org; Sat, 22 Feb 2014 08:10:42 +0000
Received: from [193.109.254.147:29816] by server-8.bemta-14.messagelabs.com id
	BA/78-18529-18B58035; Sat, 22 Feb 2014 08:10:41 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393056639!794937!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20383 invoked from network); 22 Feb 2014 08:10:40 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Feb 2014 08:10:40 -0000
Received: from G9W0364.americas.hpqcorp.net (g9w0364.houston.hp.com
	[16.216.193.45]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t3427.houston.hp.com (Postfix) with ESMTPS id 1D59C3F9;
	Sat, 22 Feb 2014 08:10:38 +0000 (UTC)
Received: from G9W3612.americas.hpqcorp.net (16.216.186.47) by
	G9W0364.americas.hpqcorp.net (16.216.193.45) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 22 Feb 2014 08:06:50 +0000
Received: from G9W0737.americas.hpqcorp.net ([169.254.9.96]) by
	G9W3612.americas.hpqcorp.net ([16.216.186.47]) with mapi id
	14.03.0123.003; Sat, 22 Feb 2014 08:06:50 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] q35 in xen?  vfio in xen?
Thread-Index: AQHPL2VwX0KD0THDRguzpODwr34SVZrA6ArA
Date: Sat, 22 Feb 2014 08:06:50 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
In-Reply-To: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [15.201.58.27]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Here's what I see when I start a VM under xen using pciback to pass a pci-e
device into domU.  The device can be seen by the guest, and it also functions
fine.  But it's not seen as a pci-e device; rather, it looks just like an
ordinary pci device, because only the first 0x100 bytes of its configuration
space are accessible.  So if a driver needs to use data in the extended
configuration space for certain features, it will fail.

When you say you "did PCIe pass through of an VF of an SR-IOV device": are you
actually using it as a pci-e device, or have you throttled it back to pci mode
without being aware of the difference?  If you did see the pci-e device in the
guest, can you share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx
output from the guest?

Also, to echo your second comment: I might still be a newbie in the qemu field
(I started working on this 4 months ago).  I thought the chipset limits what
you can see/do in the vm.  I.e., if you have 440fx emulation then you can't
have any pci-e devices (fake or passthru) in the same system.  Is that not
true?

Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
Sent: Friday, February 21, 2014 5:32 PM
To: Zhang, Eniac
Cc: xen-devel@lists.xen.org
Subject: RE: [Xen-devel] q35 in xen? vfio in xen?


On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
>
> Hi Konrad,
>
> Thanks for your reply.
>
> Yes, I am aware of the pciback.  Unfortunately it doesn't seem to support pci-e passthrough. (I could be wrong here)

I just did PCIe pass through of an VF of an SR-IOV device. It certainly is PCIe.

>
> There are two reason that I am interested in this.  For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time that it will become obsoleted.  The trend is clear that pci-e is taking over the world.
>

I am not sure what you are saying but it does not matter whether QEMU emulates 440fx or q35 for PCI pass through.

> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 2:50 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>
> On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > Hi all,
> >
> > I am playing with q35 chipset in qemu (1.6.1).  It seems we can't enable q35 machine under xen yet.  I made a few quick hacks which all fail miserably (linux kernel oops and window BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> >
> > Next question, vfio works very well for me in standalone qemu (with Linux host handling iommu), but is that supported under xen?  I haven't tried anything there yet because my gut-feeling is that it won't work.  Because passing vfio device to qemu can only be done on qemu commandline, and xen is not aware of this passing through device, thus not able to make iommu arrangement for this device.  Am I on the right track here?
>
> Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
>
> >
> > I am interested in implementing both these two features.  I'd like to connect with anyone who's already on this so we don't duplicate the efforts.
>
> What do you need Q35 for?
>
> >
> > Regards/Eniac
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
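The thread above hinges on whether the guest sees the full 4096-byte PCIe
configuration space or only the conventional 256-byte PCI view, and asks for
lspci -xxxx output to decide. One crude way to answer from such a dump is
simply to count the bytes it contains; a minimal sketch (the sample dump
values are made up, not from a real device):

```python
import re

def config_space_bytes(dump: str) -> int:
    """Count config-space bytes in an `lspci -xxxx`-style hex dump.

    256 bytes -> only the conventional PCI space was readable;
    4096 bytes -> the PCIe extended configuration space is visible too.
    """
    total = 0
    for line in dump.splitlines():
        # Dump lines look like "00: 86 80 10 15 ..." (hex offset, then hex
        # bytes); the device description line ("00:04.0 Ethernet ...") and
        # any other text will not match this pattern.
        m = re.match(r"^([0-9a-f]{2,3}): ((?:[0-9a-f]{2} ?)+)$", line.strip())
        if m:
            total += len(m.group(2).split())
    return total

# Two dump lines with made-up values, just to exercise the parser:
sample = ("00: 86 80 10 15 07 04 10 00 01 00 00 02 10 00 00 00\n"
          "10: 0c 00 02 f2 00 00 00 00 00 00 00 00 00 00 00 00")
print(config_space_bytes(sample))  # -> 32
```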

From xen-devel-bounces@lists.xen.org Sat Feb 22 08:11:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 08:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH7fW-0000Uz-Vf; Sat, 22 Feb 2014 08:10:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1WH7fW-0000Uu-5W
	for xen-devel@lists.xen.org; Sat, 22 Feb 2014 08:10:42 +0000
Received: from [193.109.254.147:29816] by server-8.bemta-14.messagelabs.com id
	BA/78-18529-18B58035; Sat, 22 Feb 2014 08:10:41 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393056639!794937!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20383 invoked from network); 22 Feb 2014 08:10:40 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Feb 2014 08:10:40 -0000
Received: from G9W0364.americas.hpqcorp.net (g9w0364.houston.hp.com
	[16.216.193.45]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t3427.houston.hp.com (Postfix) with ESMTPS id 1D59C3F9;
	Sat, 22 Feb 2014 08:10:38 +0000 (UTC)
Received: from G9W3612.americas.hpqcorp.net (16.216.186.47) by
	G9W0364.americas.hpqcorp.net (16.216.193.45) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 22 Feb 2014 08:06:50 +0000
Received: from G9W0737.americas.hpqcorp.net ([169.254.9.96]) by
	G9W3612.americas.hpqcorp.net ([16.216.186.47]) with mapi id
	14.03.0123.003; Sat, 22 Feb 2014 08:06:50 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] q35 in xen?  vfio in xen?
Thread-Index: AQHPL2VwX0KD0THDRguzpODwr34SVZrA6ArA
Date: Sat, 22 Feb 2014 08:06:50 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
In-Reply-To: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [15.201.58.27]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

SGkgS29ucmFkLA0KDQpIZXJlJ3Mgd2hhdCBJIHNlZSB3aGVuIHN0YXJ0IGEgVk0gdW5kZXIgeGVu
IHVzaW5nIHBjaWJhY2sgdG8gcGFzcyBhIHBjaS1lIGRldmljZSBpbnRvIGRvbVUuICBUaGUgZGV2
aWNlIGNhbiBiZSBzZWVuIGJ5IGd1ZXN0LCBhbmQgYWxzbyBmdW5jdGlvbmluZyBmaW5lLiAgQnV0
IGl0J3Mgbm90IHNlZW4gYXMgYSBwY2ktZSBkZXZpY2UsIHJhdGhlciwgaXQgbG9va3MganVzdCBs
aWtlIGFuIG9yZGluYXJ5IHBjaSBkZXZpY2UgYmVjYXVzZSBvbmx5IHRoZSBmaXJzdCAweDEwMCBi
eXRlcyBvZiBpdHMgY29uZmlndXJhdGlvbiBzcGFjZSBpcyBhY2Nlc3NpYmxlLiAgU28gaWYgYSBk
cml2ZXIgbmVlZHMgdG8gdXNlIGRhdGEgaW4gdGhlIGV4dGVuZGVkIGNvbmZpZ3VyYXRpb24gc3Bh
Y2UgZm9yIGNlcnRhaW4gZmVhdHVyZXMsIGl0IHdpbGwgZmFpbC4NCg0KV2hlbiB5b3Ugc2F5IHlv
dSAiZGlkIFBDSWUgcGFzcyB0aHJvdWdoIG9mIGFuIFZGIG9mIGFuIFNSLUlPViBkZXZpY2UiLiAg
QXJlIHlvdSBhY3R1YWxseSB1c2luZyBpdCBhcyBhIHBjaS1lIGRldmljZSBvciBoYXZlIHRocm90
dGxlZCBpdCBiYWNrIHRvIHBjaSBtb2RlIHdpdGhvdXQgYXdhcmUgb2YgdGhlIGRpZmZlcmVuY2U/
ICBJZiB5b3UgZGlkIHNlZSB0aGUgcGNpLWUgZGV2aWNlIGluIGd1ZXN0LCBjYW4geW91IHNoYXJl
IHlvdXIgeGwuY2ZnIGZpbGUgYXMgd2VsbCBhcyBsc3BjaS9sc3BjaSAtdC9sc3BjaSAteHh4eCBv
dXRwdXQgZnJvbSBndWVzdD8NCg0KQWxzbyB0byBlY2hvIHlvdXIgc2Vjb25kIGNvbW1lbnQ6ICBJ
IG1pZ2h0IHN0aWxsIGJlIGEgbmV3YmllIGluIHFlbXUgZmllbGQgKEkgc3RhcnRlZCB3b3JraW5n
IG9uIHRoaXMgNCBtb250aHMgYWdvKS4gIEkgdGhvdWdodCB0aGUgY2hpcHNldCBsaW1pdHMgd2hh
dCB5b3UgY2FuIHNlZS9kbyBpbiB2bS4gIEllLiAgSWYgeW91IGhhdmUgNDQwZnggZW11bGF0aW9u
cyB0aGVuIHlvdSBjYW4ndCBoYXZlIGFueSBwY2ktZSBkZXZpY2VzIChmYWtlIG9yIHBhc3N0aHJ1
KSBpbiB0aGUgc2FtZSBzeXN0ZW0uICBJcyB0aGF0IG5vdCB0cnVlPw0KDQpSZWdhcmRzL0VuaWFj
DQoNCi0tLS0tT3JpZ2luYWwgTWVzc2FnZS0tLS0tDQpGcm9tOiBLb25yYWQgUnplc3p1dGVrIFdp
bGsgW21haWx0bzprb25yYWQud2lsa0BvcmFjbGUuY29tXSANClNlbnQ6IEZyaWRheSwgRmVicnVh
cnkgMjEsIDIwMTQgNTozMiBQTQ0KVG86IFpoYW5nLCBFbmlhYw0KQ2M6IHhlbi1kZXZlbEBsaXN0
cy54ZW4ub3JnDQpTdWJqZWN0OiBSRTogW1hlbi1kZXZlbF0gcTM1IGluIHhlbj8gdmZpbyBpbiB4
ZW4/DQoNCg0KT24gRmViIDIxLCAyMDE0IDQ6NTggUE0sICJaaGFuZywgRW5pYWMiIDxlbmlhYy14
dy56aGFuZ0BocC5jb20+IHdyb3RlOg0KPg0KPiBIaSBLb25yYWQsIA0KPg0KPiBUaGFua3MgZm9y
IHlvdXIgcmVwbHkuwqAgDQo+DQo+IFllcywgSSBhbSBhd2FyZSBvZiB0aGUgcGNpYmFjay7CoCBV
bmZvcnR1bmF0ZWx5IGl0IGRvZXNuJ3Qgc2VlbSB0byBzdXBwb3J0IHBjaS1lIHBhc3N0aHJvdWdo
LiAoSSBjb3VsZCBiZSB3cm9uZyBoZXJlKQ0KDQpJIGp1c3QgZGlkIFBDSWUgcGFzcyB0aHJvdWdo
IG9mIGFuIFZGIG9mIGFuIFNSLUlPViBkZXZpY2UuIEl0IGNlcnRhaW5seSBpcyBQQ0llLg0KDQo+
DQo+IFRoZXJlIGFyZSB0d28gcmVhc29uIHRoYXQgSSBhbSBpbnRlcmVzdGVkIGluIHRoaXMuwqAg
Rm9yIG9uZSwgbXkgcHJvamVjdCBjYWxscyBmb3IgcGNpLWUgZGV2aWNlIHBhc3N0aHJvdWdoLCB3
which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time before it becomes obsolete.  The trend is clear: pci-e is taking over the world.
>

I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI passthrough.

> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 2:50 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>
> On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > Hi all,
> >
> > I am playing with the q35 chipset in qemu (1.6.1).  It seems we can't enable the q35 machine under xen yet.  I made a few quick hacks which all fail miserably (Linux kernel oops and Windows BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> >
> > Next question: vfio works very well for me in standalone qemu (with the Linux host handling the iommu), but is that supported under xen?  I haven't tried anything there yet because my gut feeling is that it won't work.  Passing a vfio device to qemu can only be done on the qemu command line, and xen is not aware of this passed-through device, thus it is not able to make iommu arrangements for it.  Am I on the right track here?
>
> Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
>
> >
> > I am interested in implementing both of these features.  I'd like to connect with anyone who's already on this so we don't duplicate the effort.
>
> What do you need Q35 for?
>
> >
> > Regards/Eniac
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
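Konrad's suggestion above (bind the device to pciback, then let the toolstack hand it to QEMU) would, on a typical Xen host, look roughly like the sketch below. The BDF 0000:03:00.0 and the guest name "guestvm" are placeholder examples, not values from this thread:

    # Make the device assignable: unbind it from its host driver
    # and bind it to the pciback driver.
    xl pci-assignable-add 0000:03:00.0

    # Verify that pciback now owns it.
    xl pci-assignable-list

    # Hot-attach it to a running HVM guest, or instead list it in the
    # domain config as:  pci = [ '03:00.0' ]
    xl pci-attach guestvm 03:00.0

This is the xl/pciback path that replaces VFIO under Xen: the hypervisor (not the Linux host kernel) programs the IOMMU, which is why the device must be visible to the toolstack rather than passed straight to QEMU on its command line.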

From xen-devel-bounces@lists.xen.org Sat Feb 22 10:36:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 10:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH9wA-0001CS-5z; Sat, 22 Feb 2014 10:36:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WH9w8-0001CN-NY
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 10:36:00 +0000
Received: from [85.158.139.211:34689] by server-14.bemta-5.messagelabs.com id
	5D/64-27598-F8D78035; Sat, 22 Feb 2014 10:35:59 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393065358!5533180!1
X-Originating-IP: [82.57.200.101]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6878 invoked from network); 22 Feb 2014 10:35:58 -0000
Received: from smtp205.alice.it (HELO smtp205.alice.it) (82.57.200.101)
	by server-9.tower-206.messagelabs.com with SMTP;
	22 Feb 2014 10:35:58 -0000
Received: from FantuNB.fritz.box (87.0.81.230) by smtp205.alice.it (8.6.060.28)
	id 529A5931136093AA; Sat, 22 Feb 2014 11:35:58 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: xen-devel@lists.xensource.com
Date: Sat, 22 Feb 2014 11:35:54 +0100
Message-Id: <1393065354-11478-1-git-send-email-fabio.fantoni@m2r.biz>
X-Mailer: git-send-email 1.7.9.5
Cc: Fabio Fantoni <fabio.fantoni@m2r.biz>, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, Stefano.Stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH RESEND] tools/libxl: comments cleanup on
	libxl_dm.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Remove some unneeded comment lines.

Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
---
 tools/libxl/libxl_dm.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 76ac9e2..e87f606 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -417,7 +417,6 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
     /*
      * Remove default devices created by qemu. Qemu will create only devices
      * defined by xen, since the devices not defined by xen are not usable.
-     * Remove deleting of empty floppy no more needed with nodefault.
      */
     flexarray_append(dm_args, "-nodefaults");
 
@@ -475,10 +474,6 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
         /* XXX sdl->{display,xauthority} into $DISPLAY/$XAUTHORITY */
     }
 
-    /*if (info->type == LIBXL_DOMAIN_TYPE_PV && !b_info->nographic) {
-        flexarray_vappend(dm_args, "-vga", "xenfb", NULL);
-      } never was possible?*/
-
     if (keymap) {
         flexarray_vappend(dm_args, "-k", keymap, NULL);
     }
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 10:37:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 10:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WH9xP-0001FO-Lt; Sat, 22 Feb 2014 10:37:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WH9xN-0001FA-Ct
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 10:37:17 +0000
Received: from [85.158.139.211:53802] by server-10.bemta-5.messagelabs.com id
	47/C3-08578-CDD78035; Sat, 22 Feb 2014 10:37:16 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393065435!5563701!1
X-Originating-IP: [82.57.200.102]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24666 invoked from network); 22 Feb 2014 10:37:15 -0000
Received: from smtp206.alice.it (HELO smtp206.alice.it) (82.57.200.102)
	by server-8.tower-206.messagelabs.com with SMTP;
	22 Feb 2014 10:37:15 -0000
Received: from FantuNB.fritz.box (87.0.81.230) by smtp206.alice.it (8.6.060.28)
	id 529A678F11753C0E; Sat, 22 Feb 2014 11:37:15 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
To: xen-devel@lists.xensource.com
Date: Sat, 22 Feb 2014 11:37:11 +0100
Message-Id: <1393065431-11802-1-git-send-email-fabio.fantoni@m2r.biz>
X-Mailer: git-send-email 1.7.9.5
Cc: anthony.perard@citrix.com, Fabio Fantoni <fabio.fantoni@m2r.biz>,
	Ian.Jackson@eu.citrix.com, Ian.Campbell@citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH v3 RESEND] libxl: Add none to vga parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Usage:
  vga="none"

Makes it possible to have no emulated VGA on HVM domUs.

Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>

---

Changes in v3:
- Set video_memkb to 0 if vga is none.
- Remove a check on a condition that is no longer needed.

Changes in v2:
- libxl_dm.c:
 If vga is none, on qemu-xen-traditional:
  - add the "-vga none" parameter.
  - do not add the "-videoram" parameter.

---
 docs/man/xl.cfg.pod.5       |    2 +-
 tools/libxl/libxl_create.c  |    6 ++++++
 tools/libxl/libxl_dm.c      |    5 +++++
 tools/libxl/libxl_types.idl |    1 +
 tools/libxl/xl_cmdimpl.c    |    2 ++
 5 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
index e15a49f..2f36143 100644
--- a/docs/man/xl.cfg.pod.5
+++ b/docs/man/xl.cfg.pod.5
@@ -1082,7 +1082,7 @@ This option is deprecated, use vga="stdvga" instead.
 
 =item B<vga="STRING">
 
-Selects the emulated video card (stdvga|cirrus).
+Selects the emulated video card (none|stdvga|cirrus).
 The default is cirrus.
 
 =item B<vnc=BOOLEAN>
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..9110394 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -226,6 +226,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         switch (b_info->device_model_version) {
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
             switch (b_info->u.hvm.vga.kind) {
+            case LIBXL_VGA_INTERFACE_TYPE_NONE:
+                b_info->video_memkb = 0;
+                break;
             case LIBXL_VGA_INTERFACE_TYPE_STD:
                 if (b_info->video_memkb == LIBXL_MEMKB_DEFAULT)
                     b_info->video_memkb = 8 * 1024;
@@ -246,6 +249,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
         default:
             switch (b_info->u.hvm.vga.kind) {
+            case LIBXL_VGA_INTERFACE_TYPE_NONE:
+                b_info->video_memkb = 0;
+                break;
             case LIBXL_VGA_INTERFACE_TYPE_STD:
                 if (b_info->video_memkb == LIBXL_MEMKB_DEFAULT)
                     b_info->video_memkb = 16 * 1024;
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f6f7bbd..761bb61 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -217,6 +217,9 @@ static char ** libxl__build_device_model_args_old(libxl__gc *gc,
             break;
         case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
             break;
+        case LIBXL_VGA_INTERFACE_TYPE_NONE:
+            flexarray_append_pair(dm_args, "-vga", "none");
+            break;
         }
 
         if (b_info->u.hvm.boot) {
@@ -515,6 +518,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
                 GCSPRINTF("vga.vram_size_mb=%d",
                 libxl__sizekb_to_mb(b_info->video_memkb)));
             break;
+        case LIBXL_VGA_INTERFACE_TYPE_NONE:
+            break;
         }
 
         if (b_info->u.hvm.boot) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..b5a8387 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -153,6 +153,7 @@ libxl_shutdown_reason = Enumeration("shutdown_reason", [
 libxl_vga_interface_type = Enumeration("vga_interface_type", [
     (1, "CIRRUS"),
     (2, "STD"),
+    (3, "NONE"),
     ], init_val = 1)
 
 libxl_vendor_device = Enumeration("vendor_device", [
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4fc46eb..4d720b4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1667,6 +1667,8 @@ skip_vfb:
                 b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
             } else if (!strcmp(buf, "cirrus")) {
                 b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
+            } else if (!strcmp(buf, "none")) {
+                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
             } else {
                 fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
                 exit(1);
-- 
1.7.9.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
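With the patch above applied, disabling the emulated VGA becomes a one-line setting in the xl domain configuration. A minimal hypothetical xl.cfg fragment (guest name and memory size are illustrative, not from the patch):

    # HVM guest without an emulated video card
    builder = "hvm"
    name    = "guestvm"
    memory  = 1024
    vga     = "none"   # patch forces video_memkb to 0 for this case
    vnc     = 0        # no VGA to view, so no VNC console either

On qemu-xen-traditional this translates to "-vga none" on the device model command line; on qemu-xen no vga device argument is emitted at all.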

From xen-devel-bounces@lists.xen.org Sat Feb 22 11:54:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 11:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHB9M-0001q3-Uv; Sat, 22 Feb 2014 11:53:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHB9L-0001px-Hj
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 11:53:43 +0000
Received: from [85.158.137.68:32626] by server-5.bemta-3.messagelabs.com id
	57/EC-04712-6CF88035; Sat, 22 Feb 2014 11:53:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393070020!3547733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14905 invoked from network); 22 Feb 2014 11:53:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 11:53:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="103227244"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Feb 2014 11:53:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 06:53:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHB9G-0006Yv-99;
	Sat, 22 Feb 2014 11:53:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHB9G-0002UA-1a;
	Sat, 22 Feb 2014 11:53:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25266-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 11:53:38 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25266: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25266 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25266/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     15 guest-localmigrate/x10    fail REGR. vs. 25262

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.4-testing
+ revision=29f130cfd9aca7ee12deddbbc0217f39d55bec60
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.4-testing 29f130cfd9aca7ee12deddbbc0217f39d55bec60
+ branch=xen-4.4-testing
+ revision=29f130cfd9aca7ee12deddbbc0217f39d55bec60
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.4-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.4-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.4-testing
++ : daily-cron.xen-4.4-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.4-testing.git
++ : daily-cron.xen-4.4-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.4-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.4-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.4-testing
+ xenversion=xen-4.4
+ xenversion=4.4
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 29f130cfd9aca7ee12deddbbc0217f39d55bec60:stable-4.4
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   98b9a9e..29f130c  29f130cfd9aca7ee12deddbbc0217f39d55bec60 -> stable-4.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 11:54:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 11:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHB9M-0001q3-Uv; Sat, 22 Feb 2014 11:53:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHB9L-0001px-Hj
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 11:53:43 +0000
Received: from [85.158.137.68:32626] by server-5.bemta-3.messagelabs.com id
	57/EC-04712-6CF88035; Sat, 22 Feb 2014 11:53:42 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393070020!3547733!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14905 invoked from network); 22 Feb 2014 11:53:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 11:53:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="103227244"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Feb 2014 11:53:39 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 06:53:38 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHB9G-0006Yv-99;
	Sat, 22 Feb 2014 11:53:38 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHB9G-0002UA-1a;
	Sat, 22 Feb 2014 11:53:38 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25266-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 11:53:38 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25266: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25266 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25266/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     15 guest-localmigrate/x10    fail REGR. vs. 25262

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60
baseline version:
 xen                  98b9a9e8d3b12cb3210c8c9edd65019b584704ef

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=xen-4.4-testing
+ revision=29f130cfd9aca7ee12deddbbc0217f39d55bec60
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push xen-4.4-testing 29f130cfd9aca7ee12deddbbc0217f39d55bec60
+ branch=xen-4.4-testing
+ revision=29f130cfd9aca7ee12deddbbc0217f39d55bec60
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=xen
+ xenbranch=xen-4.4-testing
+ '[' xxen = xlinux ']'
+ linuxbranch=
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-4.4-testing.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://xenbits.xen.org/linux-pvops.git = x ']'
++ '[' x = x ']'
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-arm-xen
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.xen-4.4-testing
++ : daily-cron.xen-4.4-testing
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-4.4-testing.git
++ : daily-cron.xen-4.4-testing
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-4.4-testing.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree xen-4.4-testing
+ case $1 in
+ return 1
+ case "$branch" in
+ cd /export/home/osstest/repos/xen
+ xenversion=xen-4.4-testing
+ xenversion=xen-4.4
+ xenversion=4.4
+ git push osstest@xenbits.xensource.com:/home/xen/git/xen.git 29f130cfd9aca7ee12deddbbc0217f39d55bec60:stable-4.4
Total 0 (delta 0), reused 0 (delta 0)
To osstest@xenbits.xensource.com:/home/xen/git/xen.git
   98b9a9e..29f130c  29f130cfd9aca7ee12deddbbc0217f39d55bec60 -> stable-4.4

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 13:41:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 13:41:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHCpb-0002Sz-FU; Sat, 22 Feb 2014 13:41:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WHCpZ-0002Su-VF
	for xen-devel@lists.xen.org; Sat, 22 Feb 2014 13:41:26 +0000
Received: from [85.158.137.68:62603] by server-7.bemta-3.messagelabs.com id
	1A/B4-13775-509A8035; Sat, 22 Feb 2014 13:41:25 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393076483!2276225!1
X-Originating-IP: [62.142.5.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20796 invoked from network); 22 Feb 2014 13:41:24 -0000
Received: from emh02.mail.saunalahti.fi (HELO emh02.mail.saunalahti.fi)
	(62.142.5.108)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 22 Feb 2014 13:41:24 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh02.mail.saunalahti.fi (Postfix) with ESMTP id 840C881892;
	Sat, 22 Feb 2014 15:41:22 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 28CC236C01F; Sat, 22 Feb 2014 15:41:22 +0200 (EET)
Date: Sat, 22 Feb 2014 15:41:21 +0200
From: Pasi Kärkkäinen <pasik@iki.fi>
To: "Cui, Dexuan" <dexuan.cui@intel.com>
Message-ID: <20140222134121.GP3200@reaktio.net>
References: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
	<20140220183613.GJ3200@reaktio.net>
	<A25F549E4D43CD42B4C02DF47A1913231105A665@SHSMSX103.ccr.corp.intel.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <A25F549E4D43CD42B4C02DF47A1913231105A665@SHSMSX103.ccr.corp.intel.com>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: Lars Kurth <lars.kurth@xenproject.org>, "Tian,
	Kevin" <kevin.tian@intel.com>, "White,
	Michael L" <michael.l.white@intel.com>, "Cowperthwaite,
	David J" <david.j.cowperthwaite@intel.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"Li, Susie" <susie.li@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Haron,
	Sandra" <sandra.haron@intel.com>
Subject: Re: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
 Graphics Passthrough Solution from Intel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, 2014 at 02:22:11AM +0000, Cui, Dexuan wrote:
> Pasi Kärkkäinen wrote on 2014-02-21:
> > On Thu, Feb 20, 2014 at 07:59:04AM +0000, Cui, Dexuan wrote:
> >> Hi all,
> >> We're pleased to announce an update to XenGT since its first disclosure
> >> last Sep.
> >
> > Are you going to work on upstreaming this stuff? Xen 4.4 will be
> > released soon(ish), so the Xen 4.5 development window starts in the near
> > future and hopefully this stuff can be upstreamed then..
> Hi Pasi,
> We do plan to upstream, but have not given a timeframe so far.
>

Ok. Would you like to post an article on the Xen blog about XenGT?
I think that would be a good thing to do to get more visibility for this
feature.

> > Also: Haswell ("4th generation Intel core CPU") is listed as a requirement
> > in the Setup Guide PDF..
> > will there be support for SNB/IVB GPUs as well?
> There is no plan for SNB/IVB.
>

Is that because of some hardware limitation on earlier GPU generations,
or just a matter of the amount of work needed to support XenGT on earlier GPUs?

Thanks,

-- Pasi


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 13:44:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 13:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHCs7-0002YD-SZ; Sat, 22 Feb 2014 13:44:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHCs4-0002Y6-GB
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 13:44:00 +0000
Received: from [193.109.254.147:7564] by server-2.bemta-14.messagelabs.com id
	F6/53-01236-F99A8035; Sat, 22 Feb 2014 13:43:59 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393076637!6100190!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9320 invoked from network); 22 Feb 2014 13:43:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 13:43:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="104912894"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Feb 2014 13:43:56 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 08:43:55 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHCrz-000765-HE;
	Sat, 22 Feb 2014 13:43:55 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHCry-0001mo-Rm;
	Sat, 22 Feb 2014 13:43:55 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25267-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 13:43:54 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25267: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25267 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25267/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25260

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   13 guest-saverestore.2         fail pass in 25260

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Date: Sat, 22 Feb 2014 13:43:54 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25267: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25267 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25267/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25260

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-credit2   13 guest-saverestore.2         fail pass in 25260

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 18:35:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 18:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHHPQ-0003jw-1S; Sat, 22 Feb 2014 18:34:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHHPN-0003jr-U5
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 18:34:42 +0000
Received: from [85.158.137.68:33833] by server-6.bemta-3.messagelabs.com id
	39/E0-09180-0CDE8035; Sat, 22 Feb 2014 18:34:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393094078!2303196!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6677 invoked from network); 22 Feb 2014 18:34:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 18:34:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,523,1389744000"; d="scan'208";a="104940108"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 22 Feb 2014 18:34:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 13:34:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHHPI-0008V1-AU;
	Sat, 22 Feb 2014 18:34:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHHPI-0004bO-92;
	Sat, 22 Feb 2014 18:34:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25268-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sat, 22 Feb 2014 18:34:36 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25268: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25268 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25268/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     15 guest-localmigrate/x10    fail REGR. vs. 12557
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install  fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                10527106abec1e72a3b1f06fe58f0162603ed9df
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7063 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2389142 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 20:50:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 20:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHJVy-0004R8-QQ; Sat, 22 Feb 2014 20:49:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WHJVx-0004R3-8l
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 20:49:37 +0000
Received: from [85.158.137.68:12503] by server-7.bemta-3.messagelabs.com id
	E3/6E-13775-06D09035; Sat, 22 Feb 2014 20:49:36 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393102174!2306897!1
X-Originating-IP: [209.85.215.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10346 invoked from network); 22 Feb 2014 20:49:35 -0000
Received: from mail-la0-f51.google.com (HELO mail-la0-f51.google.com)
	(209.85.215.51)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 20:49:35 -0000
Received: by mail-la0-f51.google.com with SMTP id c6so3771864lan.38
	for <xen-devel@lists.xensource.com>;
	Sat, 22 Feb 2014 12:49:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=gmzdv0DlVgl2lloJNj+QDbuOmkC1sgBJX8O9ZMHpszo=;
	b=L3H2vd6vycHamFysvG1HVZdutdyLkcblROPMrcN8sw0RFqxy/7Adx8Y4SX5i25OZY8
	vfCeihTGKBRUNQ9VUpXaCjM/Wrmv2oldy1QNTtU2p/eDqCQzjURnhHpzH/bIsL0UZ/0L
	315D3Nat7cUtdY4Gsun9pbLuCR1mh1NrxqHQKGGCijN/YABBGshqjpKOk4FoPuAa70jR
	6euHx42zx+II2iCVex2DYfd+bRwAu8fQekodMtRb29erf6D9XVXmt5/NuMbs3k0XOhME
	ncFx+hDO9UeXU3NmEorcyl/2ByW/DWUo1PIhbUXdWvmxdxs65V0HafhYkA8yYBgn6mGd
	PQIQ==
X-Received: by 10.152.19.66 with SMTP id c2mr7861623lae.54.1393102174538; Sat,
	22 Feb 2014 12:49:34 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Sat, 22 Feb 2014 12:49:14 -0800 (PST)
In-Reply-To: <osstest-25268-mainreport@xen.org>
References: <osstest-25268-mainreport@xen.org>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Sat, 22 Feb 2014 12:49:14 -0800
X-Google-Sender-Auth: mXl3IVnZ4puhVUYK759MOHht8zQ
Message-ID: <CAB=NE6U96UcnvHvoOU2Z62VkkthGMLNaQrL0rv4+Px8-Li_W=g@mail.gmail.com>
To: "xen.org" <ian.jackson@eu.citrix.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-linus test] 25268: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 22, 2014 at 10:34 AM, xen.org <ian.jackson@eu.citrix.com> wrote:
> Test harness code can be found at
>     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

Is this the most up-to-date page documenting the test harness?

http://wiki.xen.org/wiki/XenTest

That page basically just says there are no docs, but the README seems
to have good material. Is the README reasonably current, or should it
be considered too outdated to use as a starting point?

http://xenbits.xensource.com/gitweb/?p=osstest.git;a=blob;f=README

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 20:50:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 20:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHJVy-0004R8-QQ; Sat, 22 Feb 2014 20:49:39 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WHJVx-0004R3-8l
	for xen-devel@lists.xensource.com; Sat, 22 Feb 2014 20:49:37 +0000
Received: from [85.158.137.68:12503] by server-7.bemta-3.messagelabs.com id
	E3/6E-13775-06D09035; Sat, 22 Feb 2014 20:49:36 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393102174!2306897!1
X-Originating-IP: [209.85.215.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10346 invoked from network); 22 Feb 2014 20:49:35 -0000
Received: from mail-la0-f51.google.com (HELO mail-la0-f51.google.com)
	(209.85.215.51)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 20:49:35 -0000
Received: by mail-la0-f51.google.com with SMTP id c6so3771864lan.38
	for <xen-devel@lists.xensource.com>;
	Sat, 22 Feb 2014 12:49:34 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=gmzdv0DlVgl2lloJNj+QDbuOmkC1sgBJX8O9ZMHpszo=;
	b=L3H2vd6vycHamFysvG1HVZdutdyLkcblROPMrcN8sw0RFqxy/7Adx8Y4SX5i25OZY8
	vfCeihTGKBRUNQ9VUpXaCjM/Wrmv2oldy1QNTtU2p/eDqCQzjURnhHpzH/bIsL0UZ/0L
	315D3Nat7cUtdY4Gsun9pbLuCR1mh1NrxqHQKGGCijN/YABBGshqjpKOk4FoPuAa70jR
	6euHx42zx+II2iCVex2DYfd+bRwAu8fQekodMtRb29erf6D9XVXmt5/NuMbs3k0XOhME
	ncFx+hDO9UeXU3NmEorcyl/2ByW/DWUo1PIhbUXdWvmxdxs65V0HafhYkA8yYBgn6mGd
	PQIQ==
X-Received: by 10.152.19.66 with SMTP id c2mr7861623lae.54.1393102174538; Sat,
	22 Feb 2014 12:49:34 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Sat, 22 Feb 2014 12:49:14 -0800 (PST)
In-Reply-To: <osstest-25268-mainreport@xen.org>
References: <osstest-25268-mainreport@xen.org>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Sat, 22 Feb 2014 12:49:14 -0800
X-Google-Sender-Auth: mXl3IVnZ4puhVUYK759MOHht8zQ
Message-ID: <CAB=NE6U96UcnvHvoOU2Z62VkkthGMLNaQrL0rv4+Px8-Li_W=g@mail.gmail.com>
To: "xen.org" <ian.jackson@eu.citrix.com>
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-linus test] 25268: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 22, 2014 at 10:34 AM, xen.org <ian.jackson@eu.citrix.com> wrote:
> Test harness code can be found at
>     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary

Is this the latest page for the documentation of the test harness?

http://wiki.xen.org/wiki/XenTest

That page basically just says there are no docs, but the README seems
to have good material. Is it reasonably up to date, or should it be
considered too outdated to get started with?

http://xenbits.xensource.com/gitweb/?p=osstest.git;a=blob;f=README

  Luis

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 22:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 22:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHL93-0004o9-Vn; Sat, 22 Feb 2014 22:34:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@schaman.hu>) id 1WHL8z-0004o4-Jm
	for xen-devel@lists.xenproject.org; Sat, 22 Feb 2014 22:34:03 +0000
Received: from [85.158.139.211:27065] by server-5.bemta-5.messagelabs.com id
	C0/6E-32749-8D529035; Sat, 22 Feb 2014 22:34:00 +0000
X-Env-Sender: zoltan.kiss@schaman.hu
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393108439!5581748!1
X-Originating-IP: [209.85.212.180]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24355 invoked from network); 22 Feb 2014 22:33:59 -0000
Received: from mail-wi0-f180.google.com (HELO mail-wi0-f180.google.com)
	(209.85.212.180)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 22:33:59 -0000
Received: by mail-wi0-f180.google.com with SMTP id hm4so1896971wib.7
	for <xen-devel@lists.xenproject.org>;
	Sat, 22 Feb 2014 14:33:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=15H+DX30OadWuwUyF1uD9eQ7oBhqK357u1hk83x3j+4=;
	b=h6KCVTg842fEAR1nN7KnwiSSJNvCYZuZrJQnrSodfchoSALRXUcgoxFehZ7gxxUw3X
	5qWtLP6D/Vqw3GVJK0uA1ZTvxIC5HHdR5EN9Yk6O3QxLiRUVuns0i9GCKtTQW5GQqR3S
	wD6jQMVJqXxSMAQedRuByV9Wicp4L7QesbV1Rof59naEbGzUw6i1JMSGJkapx6MAN0mL
	w4mg3AYlleqdA05Dw0AjefvTAIqWem6yUoI0Txc7X6O0N1XJ0kCm9E2VClhRr4fnJN/0
	C1xSx73N9puBsk1R3kg7N1mRvVGbcoqshSmyrXVPmoDbnaUfBi8pu/59cEmA/+Msit5E
	3xwA==
X-Gm-Message-State: ALoCoQmZIqU279TV+9DTGr5ayi7wjA1LbzKxcTrruzvBaQqcC9zOgJG5nldTqTHGlC4I+T3efXqP
X-Received: by 10.181.12.9 with SMTP id em9mr8311519wid.37.1393108439151;
	Sat, 22 Feb 2014 14:33:59 -0800 (PST)
Received: from [192.168.0.10] (cpc36-slam6-2-0-cust234.2-4.cable.virginm.net.
	[80.6.103.235])
	by mx.google.com with ESMTPSA id ux5sm28146035wjc.6.2014.02.22.14.33.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sat, 22 Feb 2014 14:33:58 -0800 (PST)
Message-ID: <530925D5.4010800@schaman.hu>
Date: Sat, 22 Feb 2014 22:33:57 +0000
From: Zoltan Kiss <zoltan.kiss@schaman.hu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
	<1392745235.23084.60.camel@kazak.uk.xensource.com>
In-Reply-To: <1392745235.23084.60.camel@kazak.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 17:40, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>> This patch changes the grant copy on the TX patch to grant mapping
>
> Both this and the previous patch had a single sentence commit message (I
> count them together since they are split weirdly and are a single
> logical change to my eyes).
>
> Really a change of this magnitude deserves a commit message to match,
> e.g. explaining the approach which is taken by the code at a high level,
> what it is doing, how it is doing it, the rationale for using a kthread
> etc etc.
Ok, I'll improve that.

>> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
>> index f0f0c3d..b3daae2 100644
>> --- a/drivers/net/xen-netback/interface.c
>> +++ b/drivers/net/xen-netback/interface.c
>> @@ -122,7 +122,9 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>>   	BUG_ON(skb->dev != dev);
>>
>>   	/* Drop the packet if vif is not ready */
>> -	if (vif->task == NULL || !xenvif_schedulable(vif))
>> +	if (vif->task == NULL ||
>> +	    vif->dealloc_task == NULL ||
>
> Under what conditions could this be true? Would it not represent a
> rather serious failure?
xenvif_start_xmit can run after xenvif_open, while the threads are only 
created when the ring connects. I haven't checked under what 
circumstances that can happen, but if it worked like that before, 
that's fine. If not, that's the topic of a different patch (series).

>
>> +	    !xenvif_schedulable(vif))
>>   		goto drop;
>>
>>   	/* At best we'll need one slot for the header and one for each
>> @@ -344,8 +346,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>>   	vif->pending_prod = MAX_PENDING_REQS;
>>   	for (i = 0; i < MAX_PENDING_REQS; i++)
>>   		vif->pending_ring[i] = i;
>> -	for (i = 0; i < MAX_PENDING_REQS; i++)
>> -		vif->mmap_pages[i] = NULL;
>> +	spin_lock_init(&vif->dealloc_lock);
>> +	spin_lock_init(&vif->response_lock);
>> +	/* If ballooning is disabled, this will consume real memory, so you
>> +	 * better enable it.
>
> Almost no one who would be affected by this is going to read this
> comment. And it doesn't just require enabling ballooning, but actually
> booting with some maxmem "slack" to leave space.
Where should we document this? I mean, in case David doesn't fix this 
before acceptance of this patch series :)


>> @@ -432,6 +454,18 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
>>
>>   	vif->task = task;
>>
>> +	task = kthread_create(xenvif_dealloc_kthread,
>> +					   (void *)vif,
>> +					   "%s-dealloc",
>> +					   vif->dev->name);
>
> This is separate to the existing kthread that handles rx stuff. If they
> cannot or should not be combined then I think the existing one needs
> renaming, both the function and the thread itself in a precursor patch.
I've explained in another email why they are separate threads. I'll 
rename the existing thread and functions.

>
>> @@ -494,6 +534,23 @@ void xenvif_disconnect(struct xenvif *vif)
>>
>>   void xenvif_free(struct xenvif *vif)
>>   {
>> +	int i, unmap_timeout = 0;
>> +
>> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
>> +		if (vif->grant_tx_handle[i] != NETBACK_INVALID_HANDLE) {
>> +			unmap_timeout++;
>> +			schedule_timeout(msecs_to_jiffies(1000));
>
> What are we waiting for here? Have we taken any action to ensure that it
> is going to happen, like kicking something?
We are waiting for skbs to be freed so we can return the slots. They 
are not owned by us after we send them, and we don't know who owns them. 
As discussed months ago, it is safe to assume that other devices won't 
sit on them indefinitely. If an skb goes to userspace or further up the 
stack to the IP layer, we swap the pages out with local ones. The only 
place where things can go wrong is another netback thread; that's 
handled in patch #8.

>
>> +			if (unmap_timeout > 9 &&
>
> Why 9? Why not rely on net_ratelimit to DTRT? Or is it normal for this
> to fail at least once?
As mentioned earlier, this is quite temporary here; it is improved in 
patch #8.

>
>> +			    net_ratelimit())
>> +				netdev_err(vif->dev,
>
> I thought there was a ratelimited netdev printk which combined the
> limiting and the printing in one function call. Maybe I am mistaken.
There is indeed, net_err_ratelimited and friends. But they call pr_err 
instead of netdev_err, so we lose the vif name from the log entry, which 
could be quite important. If someone introduces a netdev_err_ratelimit 
which calls netdev_err, we can change this, but I would defer that to a 
later patch.


>> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
>> index 195602f..747b428 100644
>> --- a/drivers/net/xen-netback/netback.c
>> +++ b/drivers/net/xen-netback/netback.c
>> @@ -646,9 +646,12 @@ static void xenvif_tx_err(struct xenvif *vif,
>>   			  struct xen_netif_tx_request *txp, RING_IDX end)
>>   {
>>   	RING_IDX cons = vif->tx.req_cons;
>> +	unsigned long flags;
>>
>>   	do {
>> +		spin_lock_irqsave(&vif->response_lock, flags);
>
> Looking at the callers you have added it would seem more natural to
> handle the locking within make_tx_response itself.
>
> What are you locking against here? Is this different to the dealloc
> lock? If the concern is the rx action stuff and the dealloc stuff
> conflicting perhaps a single vif lock would make sense?
I've improved the comment, as mentioned in another email; here it is:

	/* This prevents zerocopy callbacks from racing over dealloc_ring */
	spinlock_t callback_lock;
	/* This prevents the dealloc thread and the NAPI instance from racing
	 * over response creation and pending_ring in xenvif_idx_release. In
	 * xenvif_tx_err it only protects response creation.
	 */

>> @@ -936,18 +879,24 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>   		head = tx_info->head;
>>
>>   		/* Check error status: if okay then remember grant handle. */
>> -		do {
>>   			newerr = (++gop)->status;
>> -			if (newerr)
>> -				break;
>> -			peek = vif->pending_ring[pending_index(++head)];
>> -		} while (!pending_tx_is_head(vif, peek));
>>
>>   		if (likely(!newerr)) {
>> +			if (vif->grant_tx_handle[pending_idx] !=
>> +			    NETBACK_INVALID_HANDLE) {
>> +				netdev_err(vif->dev,
>> +					   "Stale mapped handle! pending_idx %x handle %x\n",
>> +					   pending_idx,
>> +					   vif->grant_tx_handle[pending_idx]);
>> +				BUG();
>> +			}
>
> You had the same thing earlier. Perhaps a helper function would be
> useful?
Makes sense, I'll do that.

>
>> +			vif->grant_tx_handle[pending_idx] = gop->handle;
>>   			/* Had a previous error? Invalidate this fragment. */
>> -			if (unlikely(err))
>> +			if (unlikely(err)) {
>> +				xenvif_idx_unmap(vif, pending_idx);
>>   				xenvif_idx_release(vif, pending_idx,
>>   						   XEN_NETIF_RSP_OKAY);
>
> Would it make sense to unmap and release in a single function? (I
> Haven't looked to see if you ever do one without the other, but the next
> page of diff had two more occurrences of them together)
Yep, it's better to call idx_release from unmap instead of doing it 
separately every time.


>> @@ -960,9 +909,11 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>>
>>   		/* First error: invalidate header and preceding fragments. */
>>   		pending_idx = *((u16 *)skb->data);
>> +		xenvif_idx_unmap(vif, pending_idx);
>>   		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
>>   		for (j = start; j < i; j++) {
>>   			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
>> +			xenvif_idx_unmap(vif, pending_idx);
>>   			xenvif_idx_release(vif, pending_idx,
>>   					   XEN_NETIF_RSP_OKAY);
>>   		}
>
>>   	}
>> +	/* FIXME: __skb_fill_page_desc set this to true because page->pfmemalloc
>> +	 * overlaps with "index", and "mapping" is not set. I think mapping
>> +	 * should be set. If delivered to local stack, it would drop this
>> +	 * skb in sk_filter unless the socket has the right to use it.
>
> What is the plan to fix this?
Probably by not using "index" during grant mapping. Once that is solved 
we can clean this up.

>
> Is this dropping not a significant issue (TBH I'm not sure what "has the
> right to use it" would entail).
It doesn't happen, as we fix it up with this workaround.

>
>> +	 */
>> +	skb->pfmemalloc	= false;
>>   }
>>
>>   static int xenvif_get_extras(struct xenvif *vif,
>> @@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>
>> @@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   		else if (txp->flags & XEN_NETTXF_data_validated)
>>   			skb->ip_summed = CHECKSUM_UNNECESSARY;
>>
>> -		xenvif_fill_frags(vif, skb);
>> +		xenvif_fill_frags(vif,
>> +				  skb,
>> +				  skb_shinfo(skb)->destructor_arg ?
>> +				  pending_idx :
>> +				  INVALID_PENDING_IDX
>
> Couldn't xenvif_fill_frags calculate the 3rd argument itself given that
> it has skb in hand.
We still have to pass pending_idx, as it is no longer in skb->data. I 
have plans (I've already prototyped it, actually) to move that 
pending_idx from skb->data to skb->cb; if that happens, this won't be 
necessary.
On the other hand, it makes more sense to just pass pending_idx, and in 
fill_frags we can decide based on destructor_arg whether we need it or 
not.

>> @@ -1595,6 +1553,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>   		if (checksum_setup(vif, skb)) {
>>   			netdev_dbg(vif->dev,
>>   				   "Can't setup checksum in net_tx_action\n");
>> +			/* We have to set this flag so the dealloc thread can
>> +			 * send the slots back
>
> Wouldn't it be more accurate to say that we need it so that the callback
> happens (which we then use to trigger the dealloc thread)?
Yep, I'll change that.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sat Feb 22 23:19:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Feb 2014 23:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHLqM-00055I-Vw; Sat, 22 Feb 2014 23:18:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WHLqL-00055D-FI
	for xen-devel@lists.xenproject.org; Sat, 22 Feb 2014 23:18:49 +0000
Received: from [85.158.143.35:9611] by server-2.bemta-4.messagelabs.com id
	F4/5F-10891-85039035; Sat, 22 Feb 2014 23:18:48 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393111126!7579761!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24830 invoked from network); 22 Feb 2014 23:18:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	22 Feb 2014 23:18:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,526,1389744000"; d="scan'208";a="103288006"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 22 Feb 2014 23:18:46 +0000
Received: from [10.68.14.49] (10.68.14.49) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Sat, 22 Feb 2014 18:18:45 -0500
Message-ID: <53093051.9040907@citrix.com>
Date: Sat, 22 Feb 2014 23:18:41 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
	<1392745532.23084.65.camel@kazak.uk.xensource.com>
In-Reply-To: <1392745532.23084.65.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.68.14.49]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 17:45, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>
> Re the Subject: change how? Perhaps "handle foreign mapped pages on the
> guest RX path" would be clearer.
Ok, I'll do that.

>
>> The RX path needs to know if the SKB fragments are stored on pages from
>> another domain.
> Does this not need to be done either before the mapping change or at the
> same time? -- otherwise you have a window of a couple of commits where
> things are broken, breaking bisectability.
I can move this to the beginning to keep bisectability. I originally put 
it here because none of these patches makes sense without the previous 
ones.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 00:57:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 00:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHNNp-0005oS-KT; Sun, 23 Feb 2014 00:57:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHNNo-0005oN-CR
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 00:57:28 +0000
Received: from [85.158.139.211:45248] by server-8.bemta-5.messagelabs.com id
	8A/6D-05298-77749035; Sun, 23 Feb 2014 00:57:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393117044!5629367!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21957 invoked from network); 23 Feb 2014 00:57:26 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 00:57:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,527,1389744000"; d="scan'208";a="103295320"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Feb 2014 00:57:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 19:57:22 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHNNi-0001v7-SL;
	Sun, 23 Feb 2014 00:57:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHNNi-00081X-MY;
	Sun, 23 Feb 2014 00:57:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25271-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 00:57:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 25271: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6039779342198392779=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6039779342198392779==
Content-Type: text/plain

flight 25271 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25271/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl          13 guest-saverestore.2       fail REGR. vs. 25259
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. vs. 25259

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     14 guest-localmigrate.2      fail REGR. vs. 25259

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
baseline version:
 linux                dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Andrew Morton <akpm@linux-foundation.org>
  Bjørn Mork <bjorn@mork.no>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Rientjes <rientjes@google.com>
  David Vrabel <david.vrabel@citrix.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hartmut Knaack <knaack.h@gmx.de>
  J. Bruce Fields <bfields@redhat.com>
  Jan Kara <jack@suse.cz>
  Jan Moskyto Matejka <mq@suse.cz>
  Jens Axboe <axboe@fb.com>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Cameron <jic23@kernel.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Lars Poeschel <poeschel@lemonage.de>
  Lars-Peter Clausen <lars@metafoo.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Brown <broonie@linaro.org>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
  Moritz Moeller-Herrmann <moritz-kernel@moeller-herrmann.de>
  NeilBrown <neilb@suse.de>
  Oleg Nesterov <oleg@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Bolle <pebolle@tiscali.nl>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Raymond Wanyoike <raymond.wanyoike@gmail.com>
  Roland Dreier <roland@purestorage.com>
  Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
  Steven Rostedt <rostedt@goodmis.org>
  Thomas Gleixner <tglx@linutronix.de>
  Ulrich Hahn <uhahn@eanco.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 672 lines long.)


--===============6039779342198392779==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6039779342198392779==--

From xen-devel-bounces@lists.xen.org Sun Feb 23 02:52:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 02:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHPAx-0001yN-PO; Sun, 23 Feb 2014 02:52:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHPAv-0001yI-SM
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 02:52:18 +0000
Received: from [85.158.143.35:61771] by server-1.bemta-4.messagelabs.com id
	B7/23-31661-06269035; Sun, 23 Feb 2014 02:52:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393123934!7609387!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21546 invoked from network); 23 Feb 2014 02:52:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 02:52:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,527,1389744000"; d="scan'208";a="104974698"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Feb 2014 02:52:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 21:52:12 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHPAq-0002T5-3N;
	Sun, 23 Feb 2014 02:52:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHPAq-0001Y6-2L;
	Sun, 23 Feb 2014 02:52:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25272-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 02:52:12 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 25272: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7267114013024382540=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7267114013024382540==
Content-Type: text/plain

flight 25272 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25272/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
baseline version:
 linux                a43e02cf87d0c1ddce1719d93478f0f6a3a095e8

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Bjørn Mork <bjorn@mork.no>
  Borislav Petkov <bp@suse.de>
  Christian König <christian.koenig@amd.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Milburn <dmilburn@redhat.com>
  David Rientjes <rientjes@google.com>
  David Vrabel <david.vrabel@citrix.com>
  Doug Anderson <dianders@chromium.org>
  Eliad Peller <eliad@wizery.com>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Frank Mayhar <fmayhar@google.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Grant Likely <grant.likely@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Hartmut Knaack <knaack.h@gmx.de>
  Ingo Molnar <mingo@kernel.org>
  J. Bruce Fields <bfields@redhat.com>
  Jan Kara <jack@suse.cz>
  Jan Moskyto Matejka <mq@suse.cz>
  Jens Axboe <axboe@fb.com>
  Jens Axboe <axboe@kernel.dk>
  Jim Strouth <james.strouth@ge.com>
  Jingoo Han <jg1.han@samsung.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  K. Y. Srinivasan <kys@microsoft.com>
  Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Lars Poeschel <poeschel@lemonage.de>
  Lars-Peter Clausen <lars@metafoo.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marcus Folkesson <marcus.folkesson@gmail.com>
  Mark Brown <broonie@linaro.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Martyn Welch <martyn.welch@ge.com>
  Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
  Moritz Moeller-Herrmann <moritz-kernel@moeller-herrmann.de>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Oleg Nesterov <oleg@redhat.com>
  Oleksij Rempel <linux@rempel-privat.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Bolle <pebolle@tiscali.nl>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Petr Písař <petr.pisar@atlas.cz>
  Prarit Bhargava <prarit@redhat.com>
  Raymond Wanyoike <raymond.wanyoike@gmail.com>
  Rob Herring <robh@kernel.org>
  Roland Dreier <roland@purestorage.com>
  Sarah Sharp <sarah.a.sharp@intel.com>
  Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Steve French <smfrench@gmail.com>
  Steven Noonan <steven@uplinklabs.net>
  Steven Rostedt <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Ulrich Hahn <uhahn@eanco.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
+ branch=linux-3.10
+ revision=61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17:tested/linux-3.10
Counting objects: 525, done.
Compressing objects: 100% (67/67), done.
Writing objects: 100% (382/382), 67.57 KiB, done.
Total 382 (delta 314), reused 382 (delta 314)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   a43e02c..61dde96  61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17 -> tested/linux-3.10
+ exit 0


--===============7267114013024382540==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7267114013024382540==--

From xen-devel-bounces@lists.xen.org Sun Feb 23 02:52:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 02:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHPAx-0001yN-PO; Sun, 23 Feb 2014 02:52:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHPAv-0001yI-SM
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 02:52:18 +0000
Received: from [85.158.143.35:61771] by server-1.bemta-4.messagelabs.com id
	B7/23-31661-06269035; Sun, 23 Feb 2014 02:52:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393123934!7609387!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21546 invoked from network); 23 Feb 2014 02:52:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 02:52:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,527,1389744000"; d="scan'208";a="104974698"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Feb 2014 02:52:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Sat, 22 Feb 2014 21:52:12 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHPAq-0002T5-3N;
	Sun, 23 Feb 2014 02:52:12 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHPAq-0001Y6-2L;
	Sun, 23 Feb 2014 02:52:12 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25272-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 02:52:12 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.10 test] 25272: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7267114013024382540=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7267114013024382540==
Content-Type: text/plain

flight 25272 linux-3.10 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25272/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-armhf-armhf-xl           5 xen-boot                     fail   never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
baseline version:
 linux                a43e02cf87d0c1ddce1719d93478f0f6a3a095e8

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Bjørn Mork <bjorn@mork.no>
  Borislav Petkov <bp@suse.de>
  Christian König <christian.koenig@amd.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Milburn <dmilburn@redhat.com>
  David Rientjes <rientjes@google.com>
  David Vrabel <david.vrabel@citrix.com>
  Doug Anderson <dianders@chromium.org>
  Eliad Peller <eliad@wizery.com>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Frank Mayhar <fmayhar@google.com>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Grant Likely <grant.likely@linaro.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  H Hartley Sweeten <hsweeten@visionengravers.com>
  H. Peter Anvin <hpa@linux.intel.com>
  Hartmut Knaack <knaack.h@gmx.de>
  Ingo Molnar <mingo@kernel.org>
  J. Bruce Fields <bfields@redhat.com>
  Jan Kara <jack@suse.cz>
  Jan Moskyto Matejka <mq@suse.cz>
  Jens Axboe <axboe@fb.com>
  Jens Axboe <axboe@kernel.dk>
  Jim Strouth <james.strouth@ge.com>
  Jingoo Han <jg1.han@samsung.com>
  Johannes Berg <johannes.berg@intel.com>
  John W. Linville <linville@tuxdriver.com>
  Jonathan Cameron <jic23@kernel.org>
  K. Y. Srinivasan <kys@microsoft.com>
  Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Lars Poeschel <poeschel@lemonage.de>
  Lars-Peter Clausen <lars@metafoo.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marcus Folkesson <marcus.folkesson@gmail.com>
  Mark Brown <broonie@linaro.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Martyn Welch <martyn.welch@ge.com>
  Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
  Moritz Moeller-Herrmann <moritz-kernel@moeller-herrmann.de>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  NeilBrown <neilb@suse.de>
  Oleg Nesterov <oleg@redhat.com>
  Oleksij Rempel <linux@rempel-privat.de>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Bolle <pebolle@tiscali.nl>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Petr Písař <petr.pisar@atlas.cz>
  Prarit Bhargava <prarit@redhat.com>
  Raymond Wanyoike <raymond.wanyoike@gmail.com>
  Rob Herring <robh@kernel.org>
  Roland Dreier <roland@purestorage.com>
  Sarah Sharp <sarah.a.sharp@intel.com>
  Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
  Stanislaw Gruszka <sgruszka@redhat.com>
  Stefano Stabellini <stefano.stabellini@eu.citrix.com>
  Steve French <smfrench@gmail.com>
  Steven Noonan <steven@uplinklabs.net>
  Steven Rostedt <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Ulrich Hahn <uhahn@eanco.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.10
+ revision=61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.10 61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
+ branch=linux-3.10
+ revision=61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.10
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.10
++ : daily-cron.linux-3.10
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.10
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.10
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.10.y
+ : linux-3.10.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.10
+ : tested/linux-3.10
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17:tested/linux-3.10
Counting objects: 525, done.
Compressing objects: 100% (67/67), done.
Writing objects: 100% (382/382), 67.57 KiB, done.
Total 382 (delta 314), reused 382 (delta 314)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   a43e02c..61dde96  61dde96f97bb5b1ed4c11caf9a857d55ad8f6e17 -> tested/linux-3.10
+ exit 0


--===============7267114013024382540==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7267114013024382540==--

From xen-devel-bounces@lists.xen.org Sun Feb 23 04:53:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 04:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHR41-0002R6-Ne; Sun, 23 Feb 2014 04:53:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dexuan.cui@intel.com>) id 1WHR40-0002R1-0X
	for xen-devel@lists.xen.org; Sun, 23 Feb 2014 04:53:16 +0000
Received: from [85.158.139.211:24303] by server-7.bemta-5.messagelabs.com id
	17/4C-14867-ABE79035; Sun, 23 Feb 2014 04:53:14 +0000
X-Env-Sender: dexuan.cui@intel.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393131193!1088855!1
X-Originating-IP: [143.182.124.37]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQzLjE4Mi4xMjQuMzcgPT4gMjU3MDM5\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5199 invoked from network); 23 Feb 2014 04:53:14 -0000
Received: from mga14.intel.com (HELO mga14.intel.com) (143.182.124.37)
	by server-11.tower-206.messagelabs.com with SMTP;
	23 Feb 2014 04:53:14 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by azsmga102.ch.intel.com with ESMTP; 22 Feb 2014 20:53:12 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,527,1389772800"; d="scan'208";a="480195943"
Received: from fmsmsx103.amr.corp.intel.com ([10.19.9.34])
	by fmsmga001.fm.intel.com with ESMTP; 22 Feb 2014 20:53:12 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX103.amr.corp.intel.com (10.19.9.34) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Sat, 22 Feb 2014 20:53:12 -0800
Received: from shsmsx103.ccr.corp.intel.com ([169.254.4.202]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Sun, 23 Feb 2014 12:53:09 +0800
From: "Cui, Dexuan" <dexuan.cui@intel.com>
To: Pasi Kärkkäinen <pasik@iki.fi>
Thread-Topic: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
	Graphics Passthrough Solution from Intel
Thread-Index: Ac8uCM+UTJUf7TAWTkKmKWKJu37vjgAHsPmAACDtz+AAOVsWgAAv5MIQ
Date: Sun, 23 Feb 2014 04:53:08 +0000
Message-ID: <A25F549E4D43CD42B4C02DF47A1913231105F908@SHSMSX103.ccr.corp.intel.com>
References: <A25F549E4D43CD42B4C02DF47A19132311058FC7@SHSMSX103.ccr.corp.intel.com>
	<20140220183613.GJ3200@reaktio.net>
	<A25F549E4D43CD42B4C02DF47A1913231105A665@SHSMSX103.ccr.corp.intel.com>
	<20140222134121.GP3200@reaktio.net>
In-Reply-To: <20140222134121.GP3200@reaktio.net>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Lars Kurth <lars.kurth@xenproject.org>, "Tian,
	Kevin" <kevin.tian@intel.com>, "White,
	Michael L" <michael.l.white@intel.com>, "Cowperthwaite,
	David J" <david.j.cowperthwaite@intel.com>,
	"intel-gfx@lists.freedesktop.org" <intel-gfx@lists.freedesktop.org>,
	"Li, Susie" <susie.li@intel.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Haron,
	Sandra" <sandra.haron@intel.com>
Subject: Re: [Xen-devel] [Announcement] Updates to XenGT - a Mediated
 Graphics Passthrough Solution from Intel
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pasi Kärkkäinen wrote on 2014-02-22:
> On Fri, Feb 21, 2014 at 02:22:11AM +0000, Cui, Dexuan wrote:
>> Pasi Kärkkäinen wrote on 2014-02-21:
>>> On Thu, Feb 20, 2014 at 07:59:04AM +0000, Cui, Dexuan wrote:
>>>> Hi all, We're pleased to announce an update to XenGT since its first
>>>> disclosure last Sep.
>>>
>>> Are you going to work on upstreaming this stuff? Xen 4.4 will be
>>> released soon(ish), so the Xen 4.5 development window starts in the near
>>> future and hopefully this stuff can be upstreamed then..
>> Hi Pasi,
>> We do plan to upstream, but have not set a timeframe so far.
>
> Ok. Would you like to post an article on the Xen blog about XenGT?
> I think that would be a good thing to do to get more visibility for this feature.
As I mentioned in the previous mail, we already have a blog article, which
can easily be found by googling xengt: :-)
https://01.org/xen/blogs/srclarkx/2013/graphics-virtualization-xengt

>>> Also: Haswell ("4th generation Intel Core CPU") is listed as a
>>> requirement in the Setup Guide PDF.. will there be support for SNB/IVB
>>> GPUs as well?
>> There is no plan for SNB/IVB.
>
> Is that because of some hardware limitation on earlier GPU generations,
> or just a matter of the amount of work needed to support XenGT on earlier
> GPUs?
> -- Pasi
No HW limitation on earlier GPUs -- actually you can even see some unused
code for earlier generations in the linux-vgt.patch.
At present we'd like to focus on HSW (and newer generations in the future).

-- Dexuan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 08:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 08:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHUau-0003mJ-BX; Sun, 23 Feb 2014 08:39:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHUas-0003mE-0c
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 08:39:26 +0000
Received: from [85.158.139.211:30555] by server-3.bemta-5.messagelabs.com id
	42/55-13671-CB3B9035; Sun, 23 Feb 2014 08:39:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393144761!5622146!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4544 invoked from network); 23 Feb 2014 08:39:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 08:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,528,1389744000"; d="scan'208";a="103328113"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Feb 2014 08:39:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 23 Feb 2014 03:39:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHUac-0004D7-UH;
	Sun, 23 Feb 2014 08:39:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHUac-0007Na-Te;
	Sun, 23 Feb 2014 08:39:10 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25273-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 08:39:10 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 25273: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5332538733474594459=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5332538733474594459==
Content-Type: text/plain

flight 25273 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25273/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     13 guest-saverestore.2         fail pass in 25271
 test-amd64-i386-pair     18 guest-migrate/dst_host/src_host fail pass in 25271
 test-amd64-amd64-xl         13 guest-saverestore.2 fail in 25271 pass in 25273
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 25271 pass in 25273

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 25259
 test-amd64-amd64-xl-sedf 14 guest-localmigrate.2 fail in 25271 REGR. vs. 25259

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop      fail in 25271 never pass

version targeted for testing:
 linux                2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
baseline version:
 linux                dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Andrew Morton <akpm@linux-foundation.org>
  Bjørn Mork <bjorn@mork.no>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Rientjes <rientjes@google.com>
  David Vrabel <david.vrabel@citrix.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hartmut Knaack <knaack.h@gmx.de>
  J. Bruce Fields <bfields@redhat.com>
  Jan Kara <jack@suse.cz>
  Jan Moskyto Matejka <mq@suse.cz>
  Jens Axboe <axboe@fb.com>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Cameron <jic23@kernel.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Lars Poeschel <poeschel@lemonage.de>
  Lars-Peter Clausen <lars@metafoo.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Brown <broonie@linaro.org>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
  Moritz Moeller-Herrmann <moritz-kernel@moeller-herrmann.de>
  NeilBrown <neilb@suse.de>
  Oleg Nesterov <oleg@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Bolle <pebolle@tiscali.nl>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Raymond Wanyoike <raymond.wanyoike@gmail.com>
  Roland Dreier <roland@purestorage.com>
  Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
  Steven Rostedt <rostedt@goodmis.org>
  Thomas Gleixner <tglx@linutronix.de>
  Ulrich Hahn <uhahn@eanco.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
+ branch=linux-3.4
+ revision=2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c:tested/linux-3.4
Counting objects: 202, done.
Compressing objects: 100% (26/26), done.
Writing objects:  94% (133/141)   
Writing objects:  95% (134/141)   
Writing objects:  96% (136/141)   
Writing objects:  97% (137/141)   
Writing objects:  98% (139/141)   
Writing objects:  99% (140/141)   
Writing objects: 100% (141/141)   
Writing objects: 100% (141/141), 25.78 KiB, done.
Total 141 (delta 114), reused 141 (delta 114)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   dd12c7c..2606524  2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c -> tested/linux-3.4
+ exit 0


--===============5332538733474594459==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5332538733474594459==--

From xen-devel-bounces@lists.xen.org Sun Feb 23 08:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 08:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHUau-0003mJ-BX; Sun, 23 Feb 2014 08:39:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHUas-0003mE-0c
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 08:39:26 +0000
Received: from [85.158.139.211:30555] by server-3.bemta-5.messagelabs.com id
	42/55-13671-CB3B9035; Sun, 23 Feb 2014 08:39:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393144761!5622146!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4544 invoked from network); 23 Feb 2014 08:39:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 08:39:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,528,1389744000"; d="scan'208";a="103328113"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 23 Feb 2014 08:39:11 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 23 Feb 2014 03:39:11 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHUac-0004D7-UH;
	Sun, 23 Feb 2014 08:39:10 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHUac-0007Na-Te;
	Sun, 23 Feb 2014 08:39:10 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25273-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 08:39:10 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-3.4 test] 25273: tolerable FAIL - PUSHED
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5332538733474594459=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5332538733474594459==
Content-Type: text/plain

flight 25273 linux-3.4 real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25273/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf     13 guest-saverestore.2         fail pass in 25271
 test-amd64-i386-pair     18 guest-migrate/dst_host/src_host fail pass in 25271
 test-amd64-amd64-xl         13 guest-saverestore.2 fail in 25271 pass in 25273
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 25271 pass in 25273

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 25259
 test-amd64-amd64-xl-sedf 14 guest-localmigrate.2 fail in 25271 REGR. vs. 25259

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop      fail in 25271 never pass

version targeted for testing:
 linux                2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
baseline version:
 linux                dd12c7c4cb2167696bf8bacdcaa94cdeb8f74e3b

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Andrew Morton <akpm@linux-foundation.org>
  Bjørn Mork <bjorn@mork.no>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Rientjes <rientjes@google.com>
  David Vrabel <david.vrabel@citrix.com>
  Geert Uytterhoeven <geert+renesas@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hartmut Knaack <knaack.h@gmx.de>
  J. Bruce Fields <bfields@redhat.com>
  Jan Kara <jack@suse.cz>
  Jan Moskyto Matejka <mq@suse.cz>
  Jens Axboe <axboe@fb.com>
  Johannes Berg <johannes.berg@intel.com>
  Jonathan Cameron <jic23@kernel.org>
  Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
  Krzysztof Kozlowski <k.kozlowski@samsung.com>
  Lars Poeschel <poeschel@lemonage.de>
  Lars-Peter Clausen <lars@metafoo.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Brown <broonie@linaro.org>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Michael Holzheu <holzheu@linux.vnet.ibm.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
  Moritz Moeller-Herrmann <moritz-kernel@moeller-herrmann.de>
  NeilBrown <neilb@suse.de>
  Oleg Nesterov <oleg@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Bolle <pebolle@tiscali.nl>
  Paul Gortmaker <paul.gortmaker@windriver.com>
  Raymond Wanyoike <raymond.wanyoike@gmail.com>
  Roland Dreier <roland@purestorage.com>
  Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
  Steven Rostedt <rostedt@goodmis.org>
  Thomas Gleixner <tglx@linutronix.de>
  Ulrich Hahn <uhahn@eanco.de>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Pushing revision :

+ branch=linux-3.4
+ revision=2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x '!=' x/export/home/osstest/repos/lock ']'
++ OSSTEST_REPOS_LOCK_LOCKED=/export/home/osstest/repos/lock
++ exec with-lock-ex -w /export/home/osstest/repos/lock ./ap-push linux-3.4 2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
+ branch=linux-3.4
+ revision=2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c
+ . cri-lock-repos
++ . cri-common
+++ . cri-getconfig
+++ umask 002
+++ getconfig Repos
+++ perl -e '
                use Osstest;
                readglobalconfig();
                print $c{"Repos"} or die $!;
        '
++ repos=/export/home/osstest/repos
++ repos_lock=/export/home/osstest/repos/lock
++ '[' x/export/home/osstest/repos/lock '!=' x/export/home/osstest/repos/lock ']'
+ . cri-common
++ . cri-getconfig
++ umask 002
+ select_xenbranch
+ case "$branch" in
+ tree=linux
+ xenbranch=xen-unstable
+ '[' xlinux = xlinux ']'
+ linuxbranch=linux-3.4
+ : tested/2.6.39.x
+ . ap-common
++ : osstest@xenbits.xensource.com
++ : git://xenbits.xen.org/xen.git
++ : osstest@xenbits.xensource.com:/home/xen/git/xen.git
++ : git://xenbits.xen.org/staging/qemu-xen-unstable.git
++ : git://git.kernel.org
++ : git://git.kernel.org/pub/scm/linux/kernel/git
++ : git
++ : git://xenbits.xen.org/osstest/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/osstest/ext/linux-firmware.git
++ : git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
++ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
++ : git://xenbits.xen.org/linux-pvops.git
++ : tested/linux-3.4
++ : tested/linux-arm-xen
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ '[' xgit://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git = x ']'
++ : git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
++ : tested/2.6.39.x
++ : daily-cron.linux-3.4
++ : daily-cron.linux-3.4
++ : http://hg.uk.xensource.com/carbon/trunk/linux-2.6.27
++ : git://xenbits.xen.org/staging/qemu-upstream-unstable.git
++ : daily-cron.linux-3.4
+ TREE_LINUX=osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ TREE_QEMU_UPSTREAM=osstest@xenbits.xensource.com:/home/xen/git/qemu-upstream-unstable.git
+ TREE_XEN=osstest@xenbits.xensource.com:/home/xen/git/xen.git
+ info_linux_tree linux-3.4
+ case $1 in
+ : git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
+ : linux-3.4.y
+ : linux-3.4.y
+ : git
+ : git
+ : git://xenbits.xen.org/linux-pvops.git
+ : osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
+ : tested/linux-3.4
+ : tested/linux-3.4
+ return 0
+ cd /export/home/osstest/repos/linux
+ git push osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git 2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c:tested/linux-3.4
Counting objects: 202, done.
Compressing objects: 100% (26/26), done.
Writing objects: 100% (141/141), 25.78 KiB, done.
Total 141 (delta 114), reused 141 (delta 114)
To osstest@xenbits.xensource.com:/home/xen/git/linux-pvops.git
   dd12c7c..2606524  2606524141e4ff9b6a5d0bcbd9d601dfc5a8285c -> tested/linux-3.4
+ exit 0


--===============5332538733474594459==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5332538733474594459==--

From xen-devel-bounces@lists.xen.org Sun Feb 23 12:26:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 12:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHY8G-0004Q1-C1; Sun, 23 Feb 2014 12:26:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHY8F-0004Pw-4u
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 12:26:07 +0000
Received: from [85.158.143.35:44848] by server-2.bemta-4.messagelabs.com id
	D1/3E-10891-DD8E9035; Sun, 23 Feb 2014 12:26:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393158363!7649490!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11914 invoked from network); 23 Feb 2014 12:26:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 12:26:04 -0000
X-IronPort-AV: E=Sophos;i="4.97,529,1389744000"; d="scan'208";a="105014919"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Feb 2014 12:26:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Sun, 23 Feb 2014 07:26:02 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHY8A-0005Iw-4k;
	Sun, 23 Feb 2014 12:26:02 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHY89-0002fI-VD;
	Sun, 23 Feb 2014 12:26:02 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25275-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 12:26:01 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25275: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25275 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25275/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 25267 REGR. vs. 25275

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 13 guest-localmigrate/x10 fail pass in 25267
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install     fail pass in 25267
 test-amd64-i386-xl-credit2  13 guest-saverestore.2 fail in 25267 pass in 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop    fail in 25267 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop    fail in 25267 never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop    fail in 25267 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop    fail in 25267 never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 14:42:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 14:42:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHaFV-0004pJ-RO; Sun, 23 Feb 2014 14:41:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WHaFU-0004pE-LH
	for xen-devel@lists.xen.org; Sun, 23 Feb 2014 14:41:44 +0000
Received: from [85.158.139.211:19413] by server-6.bemta-5.messagelabs.com id
	4B/E6-14342-7A80A035; Sun, 23 Feb 2014 14:41:43 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393166500!5689944!1
X-Originating-IP: [209.85.160.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30303 invoked from network); 23 Feb 2014 14:41:42 -0000
Received: from mail-pb0-f46.google.com (HELO mail-pb0-f46.google.com)
	(209.85.160.46)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 14:41:42 -0000
Received: by mail-pb0-f46.google.com with SMTP id um1so5401374pbc.5
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 06:41:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=BH5FvJWqnXMPGpWlRBnavInxTenpTtJrfBqTJLlj4aQ=;
	b=jpObB0LUooWuZJdQejSSDB3S06kWPsOJK7CSR3uMo/malGqDNSpOG1kfJQCqVpUEVT
	l5ChSPb4aEV0CMLtyLNIQOssv6qpjhzUKOmHU/q8VwT57kyooNjkWAGRxmM+Ycm64sOu
	w7ql2Y5GxhnnCR4Bgtvn4L/Omqozb1GjCSEKpruQ4ht0ehYcBkH2GiWtKy6eJpObyRop
	M6lHasFgshGPpNFrwPcKSTDyTBbJkAxZTOx5ZJR3PBQBwtZYTAkWMkzMlIdCIP+TJUta
	kXGNfpSvGGgfUEIK0mmllUifZbzvYdZtOq1sNEiPLi9wo2uFkoXKtZ3LGBdbrtIdqM0z
	lz7Q==
X-Received: by 10.68.36.41 with SMTP id n9mr19147025pbj.99.1393166500330;
	Sun, 23 Feb 2014 06:41:40 -0800 (PST)
Received: from [192.168.1.102] ([113.247.1.76])
	by mx.google.com with ESMTPSA id
	kc9sm40771550pbc.25.2014.02.23.06.41.35 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 23 Feb 2014 06:41:39 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <1392738894.23084.16.camel@kazak.uk.xensource.com>
Date: Sun, 23 Feb 2014 22:41:26 +0800
Message-Id: <580D6587-ABF8-4AD7-8AB6-2EB700CC0F1D@gmail.com>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
	<1392738894.23084.16.camel@kazak.uk.xensource.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 18, 2014, at 23:54, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Sun, 2014-02-16 at 23:51 +0800, Chen Baozi wrote:
>> Hi all,
>>
>> This comes much later than I expected. I guess it might be helpful
>> to publish my work, though it is still not finished (and might not
>> be finished very soon...).
>>
>> I began porting mini-os to ARM64 last summer. Since 64-bit guest
>> support was not yet mature at that time, the work was on hold for a
>> long while, until two months ago.
>>
>> Though it is still at a very early stage, it can at least be built,
>> set up an early page table for booting, parse the DTB passed by the
>> hypervisor, and be debugged via printk. So I put it on github in
>> case someone might be interested in it. Here is the
>> url: https://github.com/baozich/minios-arm64
>
> Cool. Thank you very much for sharing.
>
>> Right now, there is some trouble getting the GIC to work properly,
>> as I didn't consider mapping the GIC's interface into the address
>> space and followed x86's memory layout, which makes the kernel
>> virtual address start at 0x0. I'll fix it as soon as possible.
>
> Actually, having virtual memory start at 0x0 seems quite reasonable to
> me; what is the problem?

Hmmm, I don't think it is a big problem. I just didn't realise that it is
necessary to map the GIC's interface after the MMU is on, which leads to
an exception when I try to program the GIC via the physical address
found in the DT.

I used to think about making the mini-os kernel address start at
0x80000000 and leaving the addresses below 0x80000000 as a 1:1 mapping,
which seems to make things easier when initialising the GIC.

> Someone somewhere was thinking of making minios run without the MMU
> enabled on ARM -- to save on the overhead, IIRC. But it occurs to me
> here that this would be problematic if we were to move the guest memory
> map around -- which we are planning to do for 4.5. I think this means
> that minios must use the MMU, at least by default.
>
> I wouldn't necessarily object to the presence of an option to build an
> MMU-less variant for specific use cases, so long as it was clear to
> those enabling it that their VMs might only work on a single version of
> Xen.

Actually, I've already enabled the MMU in my current implementation.


Cheers,

Baozi

>
>> Besides, there is still lots of work to be done. So any comments
>> or patches are welcome.
>
> Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 17:33:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 17:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHcuw-0005bp-10; Sun, 23 Feb 2014 17:32:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHcuu-0005bk-L2
	for xen-devel@lists.xensource.com; Sun, 23 Feb 2014 17:32:40 +0000
Received: from [193.109.254.147:12091] by server-8.bemta-14.messagelabs.com id
	83/88-18529-7B03A035; Sun, 23 Feb 2014 17:32:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393176757!6228414!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12822 invoked from network); 23 Feb 2014 17:32:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 17:32:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,529,1389744000"; d="scan'208";a="105040804"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 23 Feb 2014 17:32:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Sun, 23 Feb 2014 12:32:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHcup-0006nf-OU;
	Sun, 23 Feb 2014 17:32:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHcup-0001jT-JB;
	Sun, 23 Feb 2014 17:32:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25278-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Sun, 23 Feb 2014 17:32:35 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25278: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25278 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25278/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pair         7 xen-boot/src_host         fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemut-rhel6hvm-intel  5 xen-boot         fail blocked in 12557
 test-amd64-i386-xl-qemuu-win7-amd64  5 xen-boot          fail blocked in 12557
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install  fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                0f0ca14386e0431fbedaae5efc550d46cf93b9cf
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7063 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2389349 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2389349 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 21:37:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 21:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHgjP-000648-9T; Sun, 23 Feb 2014 21:37:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WHgjM-000643-Mj
	for xen-devel@lists.xen.org; Sun, 23 Feb 2014 21:37:00 +0000
Received: from [85.158.139.211:9222] by server-11.bemta-5.messagelabs.com id
	B0/CF-23886-BF96A035; Sun, 23 Feb 2014 21:36:59 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393191419!5746261!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28639 invoked from network); 23 Feb 2014 21:36:59 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 23 Feb 2014 21:36:59 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s1NLamw0011481
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 21:36:52 GMT
Received: from procyon.dur.ac.uk (procyon.dur.ac.uk [129.234.250.129])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s1NLaif4015176
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 21:36:44 GMT
Received: from procyon.dur.ac.uk (localhost [127.0.0.1])
	by procyon.dur.ac.uk (8.14.3/8.11.1) with ESMTP id s1NLairO002193
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 21:36:44 GMT
Received: from localhost (dcl0may@localhost)
	by procyon.dur.ac.uk (8.14.3/8.14.3/Submit) with ESMTP id
	s1NLaiXZ002187
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 21:36:44 GMT
Date: Sun, 23 Feb 2014 21:36:44 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: xen-devel@lists.xen.org
Message-ID: <alpine.DEB.2.00.1402232115250.18519@procyon.dur.ac.uk>
User-Agent: Alpine 2.00 (DEB 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s1NLamw0011481
Subject: [Xen-devel] [BUG] xl create -c with pygrub hangs in xen 4.4.0-rc5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I was trying xen-4.4.0-rc5 with my standard set up (booting a pv guest 
using pygrub as a bootloader) but this no longer works. I would expect to 
get a boot menu where I could select a kernel, after which the guest would boot, 
but nothing happens. If I examine the bootloader.1.log file I find the pygrub output I 
would expect to see on the console, and the output of xenstore-ls suggests 
pygrub is selecting the default kernel (as it would without input) and 
exiting, but xentop doesn't report any cpu usage on the guest. It seems xl 
has created a child process that doesn't exit, though if I kill the child 
process by hand the boot does continue.

I traced the change in behaviour to the commit 
http://xenbits.xenproject.org/gitweb/?p=xen.git;a=commit;h=5f0c4a78100382972b4d2a71a04b90e015e9fe87
"libxl: fork: Share SIGCHLD handler amongst ctxs". If I revert this then 
I get the expected behaviour again, though I haven't worked out why this 
patch causes the effects I am seeing.

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 21:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 21:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHgmY-0006A3-Sz; Sun, 23 Feb 2014 21:40:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1WHgmW-00069o-SS
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 21:40:17 +0000
Received: from [85.158.137.68:40394] by server-9.bemta-3.messagelabs.com id
	A0/BA-10184-FBA6A035; Sun, 23 Feb 2014 21:40:15 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393191614!2415728!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.3 required=7.0 tests=MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14621 invoked from network); 23 Feb 2014 21:40:14 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 23 Feb 2014 21:40:14 -0000
Received: from localhost ([127.0.0.1] helo=ionos.tec.linutronix.de)
	by Galois.linutronix.de with esmtp (Exim 4.80)
	(envelope-from <tglx@linutronix.de>)
	id 1WHgmM-0003se-IC; Sun, 23 Feb 2014 22:40:06 +0100
Message-Id: <20140223212737.869264085@linutronix.de>
User-Agent: quilt/0.60-1
Date: Sun, 23 Feb 2014 21:40:16 -0000
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
References: <20140223212703.511977310@linutronix.de>
Content-Disposition: inline; filename=x86-xen-use-core-irq-stats-function.patch
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, x86 <x86@kernel.org>
Subject: [Xen-devel] [patch 15/26] x86: xen: Use the core irq stats function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Let the core do the irq_desc resolution.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Xen <xen-devel@lists.xenproject.org>
Cc: x86 <x86@kernel.org>
---
 arch/x86/xen/spinlock.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Index: tip/arch/x86/xen/spinlock.c
===================================================================
--- tip.orig/arch/x86/xen/spinlock.c
+++ tip/arch/x86/xen/spinlock.c
@@ -183,7 +183,7 @@ __visible void xen_lock_spinning(struct
 
 	local_irq_save(flags);
 
-	kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
+	kstat_incr_irq_this_cpu(irq);
 out:
 	cpumask_clear_cpu(cpu, &waiting_cpus);
 	w->lock = NULL;



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 21:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 21:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHgma-0006AH-NJ; Sun, 23 Feb 2014 21:40:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1WHgmX-00069q-Fl
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 21:40:17 +0000
Received: from [85.158.143.35:30216] by server-1.bemta-4.messagelabs.com id
	4C/C0-31661-0CA6A035; Sun, 23 Feb 2014 21:40:16 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393191614!7671194!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25995 invoked from network); 23 Feb 2014 21:40:15 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 23 Feb 2014 21:40:15 -0000
Received: from localhost ([127.0.0.1] helo=ionos.tec.linutronix.de)
	by Galois.linutronix.de with esmtp (Exim 4.80)
	(envelope-from <tglx@linutronix.de>)
	id 1WHgmP-0003tR-Uk; Sun, 23 Feb 2014 22:40:10 +0100
Message-Id: <20140223212738.579581220@linutronix.de>
User-Agent: quilt/0.60-1
Date: Sun, 23 Feb 2014 21:40:20 -0000
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
References: <20140223212703.511977310@linutronix.de>
Content-Disposition: inline;
	filename=xen-get-rid-of-the-last-irq-to-desc-abuse.patch
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>
Subject: [Xen-devel] [patch 21/26] xen: Get rid of the last irq_desc abuse
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I'd prefer to drop that completely but there seems to be some mystic
value to the error printout and the allocation check.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Xen <xen-devel@lists.xenproject.org>
---
 drivers/xen/events/events_base.c |   19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

Index: tip/drivers/xen/events/events_base.c
===================================================================
--- tip.orig/drivers/xen/events/events_base.c
+++ tip/drivers/xen/events/events_base.c
@@ -487,13 +487,6 @@ static void pirq_query_unmask(int irq)
 		info->u.pirq.flags |= PIRQ_NEEDS_EOI;
 }
 
-static bool probing_irq(int irq)
-{
-	struct irq_desc *desc = irq_to_desc(irq);
-
-	return desc && desc->action == NULL;
-}
-
 static void eoi_pirq(struct irq_data *data)
 {
 	int evtchn = evtchn_from_irq(data->irq);
@@ -517,7 +510,7 @@ static void mask_ack_pirq(struct irq_dat
 	eoi_pirq(data);
 }
 
-static unsigned int __startup_pirq(unsigned int irq)
+static unsigned int __startup_pirq(struct irq_data *data, unsigned int irq)
 {
 	struct evtchn_bind_pirq bind_pirq;
 	struct irq_info *info = info_for_irq(irq);
@@ -535,7 +528,7 @@ static unsigned int __startup_pirq(unsig
 					BIND_PIRQ__WILL_SHARE : 0;
 	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind_pirq);
 	if (rc != 0) {
-		if (!probing_irq(irq))
+		if (!data || irqd_irq_has_action(data))
 			pr_info("Failed to obtain physical IRQ %d\n", irq);
 		return 0;
 	}
@@ -562,7 +555,7 @@ out:
 
 static unsigned int startup_pirq(struct irq_data *data)
 {
-	return __startup_pirq(data->irq);
+	return __startup_pirq(data, data->irq);
 }
 
 static void shutdown_pirq(struct irq_data *data)
@@ -769,15 +762,13 @@ error_irq:
 
 int xen_destroy_irq(int irq)
 {
-	struct irq_desc *desc;
 	struct physdev_unmap_pirq unmap_irq;
 	struct irq_info *info = info_for_irq(irq);
 	int rc = -ENOENT;
 
 	mutex_lock(&irq_mapping_update_lock);
 
-	desc = irq_to_desc(irq);
-	if (!desc)
+	if (!irq_is_allocated(irq))
 		goto out;
 
 	if (xen_initial_domain()) {
@@ -1432,7 +1423,7 @@ static void restore_pirqs(void)
 
 		printk(KERN_DEBUG "xen: --> irq=%d, pirq=%d\n", irq, map_irq.pirq);
 
-		__startup_pirq(irq);
+		__startup_pirq(irq_get_irq_data(irq), irq);
 	}
 }
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 21:40:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 21:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHgmc-0006Ak-VB; Sun, 23 Feb 2014 21:40:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1WHgmX-00069p-Co
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 21:40:17 +0000
Received: from [85.158.139.211:31975] by server-15.bemta-5.messagelabs.com id
	A7/CC-24395-0CA6A035; Sun, 23 Feb 2014 21:40:16 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393191614!5721973!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18513 invoked from network); 23 Feb 2014 21:40:15 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 23 Feb 2014 21:40:15 -0000
Received: from localhost ([127.0.0.1] helo=ionos.tec.linutronix.de)
	by Galois.linutronix.de with esmtp (Exim 4.80)
	(envelope-from <tglx@linutronix.de>)
	id 1WHgmO-0003sz-9l; Sun, 23 Feb 2014 22:40:08 +0100
Message-Id: <20140223212738.222412125@linutronix.de>
User-Agent: quilt/0.60-1
Date: Sun, 23 Feb 2014 21:40:18 -0000
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
References: <20140223212703.511977310@linutronix.de>
Content-Disposition: inline; filename=xen-use-the-proper-functions.patch
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>
Subject: [Xen-devel] [patch 18/26] xen: Use the proper irq functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

I really can't understand why people keep adding irq_desc abuse. We
have enough proper interfaces. Delete another 14 lines of hackery.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Xen <xen-devel@lists.xenproject.org>
---
 drivers/xen/events/events_2l.c   |   15 ++++-----------
 drivers/xen/events/events_base.c |    7 ++-----
 drivers/xen/events/events_fifo.c |    8 ++------
 3 files changed, 8 insertions(+), 22 deletions(-)

Index: tip/drivers/xen/events/events_2l.c
===================================================================
--- tip.orig/drivers/xen/events/events_2l.c
+++ tip/drivers/xen/events/events_2l.c
@@ -166,7 +166,6 @@ static void evtchn_2l_handle_events(unsi
 	int start_word_idx, start_bit_idx;
 	int word_idx, bit_idx;
 	int i;
-	struct irq_desc *desc;
 	struct shared_info *s = HYPERVISOR_shared_info;
 	struct vcpu_info *vcpu_info = __this_cpu_read(xen_vcpu);
 
@@ -176,11 +175,8 @@ static void evtchn_2l_handle_events(unsi
 		unsigned int evtchn = evtchn_from_irq(irq);
 		word_idx = evtchn / BITS_PER_LONG;
 		bit_idx = evtchn % BITS_PER_LONG;
-		if (active_evtchns(cpu, s, word_idx) & (1ULL << bit_idx)) {
-			desc = irq_to_desc(irq);
-			if (desc)
-				generic_handle_irq_desc(irq, desc);
-		}
+		if (active_evtchns(cpu, s, word_idx) & (1ULL << bit_idx))
+			generic_handle_irq(irq);
 	}
 
 	/*
@@ -245,11 +241,8 @@ static void evtchn_2l_handle_events(unsi
 			port = (word_idx * BITS_PER_EVTCHN_WORD) + bit_idx;
 			irq = get_evtchn_to_irq(port);
 
-			if (irq != -1) {
-				desc = irq_to_desc(irq);
-				if (desc)
-					generic_handle_irq_desc(irq, desc);
-			}
+			if (irq != -1)
+				generic_handle_irq(irq);
 
 			bit_idx = (bit_idx + 1) % BITS_PER_EVTCHN_WORD;
 
Index: tip/drivers/xen/events/events_base.c
===================================================================
--- tip.orig/drivers/xen/events/events_base.c
+++ tip/drivers/xen/events/events_base.c
@@ -336,9 +336,8 @@ static void bind_evtchn_to_cpu(unsigned
 
 	BUG_ON(irq == -1);
 #ifdef CONFIG_SMP
-	cpumask_copy(irq_to_desc(irq)->irq_data.affinity, cpumask_of(cpu));
+	cpumask_copy(irq_get_irq_data(irq)->affinity, cpumask_of(cpu));
 #endif
-
 	xen_evtchn_port_bind_to_cpu(info, cpu);
 
 	info->cpu = cpu;
@@ -373,10 +372,8 @@ static void xen_irq_init(unsigned irq)
 {
 	struct irq_info *info;
 #ifdef CONFIG_SMP
-	struct irq_desc *desc = irq_to_desc(irq);
-
 	/* By default all event channels notify CPU#0. */
-	cpumask_copy(desc->irq_data.affinity, cpumask_of(0));
+	cpumask_copy(irq_get_irq_data(irq)->affinity, cpumask_of(0));
 #endif
 
 	info = kzalloc(sizeof(*info), GFP_KERNEL);
Index: tip/drivers/xen/events/events_fifo.c
===================================================================
--- tip.orig/drivers/xen/events/events_fifo.c
+++ tip/drivers/xen/events/events_fifo.c
@@ -235,14 +235,10 @@ static uint32_t clear_linked(volatile ev
 static void handle_irq_for_port(unsigned port)
 {
 	int irq;
-	struct irq_desc *desc;
 
 	irq = get_evtchn_to_irq(port);
-	if (irq != -1) {
-		desc = irq_to_desc(irq);
-		if (desc)
-			generic_handle_irq_desc(irq, desc);
-	}
+	if (irq != -1)
+		generic_handle_irq(irq);
 }
 
 static void consume_one_event(unsigned cpu,



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Sun Feb 23 21:40:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 21:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHgmi-0006Co-Kq; Sun, 23 Feb 2014 21:40:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1WHgma-0006AA-HN
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 21:40:20 +0000
Received: from [85.158.143.35:30302] by server-1.bemta-4.messagelabs.com id
	5F/C0-31661-3CA6A035; Sun, 23 Feb 2014 21:40:19 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393191618!7673300!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28096 invoked from network); 23 Feb 2014 21:40:18 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 23 Feb 2014 21:40:18 -0000
Received: from localhost ([127.0.0.1] helo=ionos.tec.linutronix.de)
	by Galois.linutronix.de with esmtp (Exim 4.80)
	(envelope-from <tglx@linutronix.de>)
	id 1WHgmR-0003ti-5I; Sun, 23 Feb 2014 22:40:11 +0100
Message-Id: <20140223212738.808648133@linutronix.de>
User-Agent: quilt/0.60-1
Date: Sun, 23 Feb 2014 21:40:21 -0000
From: Thomas Gleixner <tglx@linutronix.de>
To: LKML <linux-kernel@vger.kernel.org>
References: <20140223212703.511977310@linutronix.de>
Content-Disposition: inline; filename=xen-use-vector-accounting.patch
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>
Subject: [Xen-devel] [patch 23/26] xen: Add proper irq accounting for
	HYPERCALL vector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Xen <xen-devel@lists.xenproject.org>
---
 drivers/xen/events/events_base.c |    1 +
 1 file changed, 1 insertion(+)

Index: tip/drivers/xen/events/events_base.c
===================================================================
--- tip.orig/drivers/xen/events/events_base.c
+++ tip/drivers/xen/events/events_base.c
@@ -1239,6 +1239,7 @@ void xen_evtchn_do_upcall(struct pt_regs
 #ifdef CONFIG_X86
 	exit_idle();
 #endif
+	inc_irq_stat(irq_hv_callback_count);
 
 	__xen_evtchn_do_upcall();
 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLn-0006p4-Jg; Sun, 23 Feb 2014 22:16:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLm-0006oq-Ta
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:43 +0000
Received: from [193.109.254.147:65162] by server-16.bemta-14.messagelabs.com
	id AF/14-21945-A437A035; Sun, 23 Feb 2014 22:16:42 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393193801!2272969!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30740 invoked from network); 23 Feb 2014 22:16:41 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:41 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so1862319eek.41
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=jR7Dtw6vhSEvQMzJIg0vtTUQ9dyTitsrBB9wEfMCnLY=;
	b=B8wpesMB/G8DSbEZzhahL+kA/IDDVPtbx9tdM08ppqqvbVS1eTnlnHmyvPQuPnJVZJ
	/VDBal49Rsd4REu/TbADLi9fEPLFJiGraKGkK5F+NdMhKUd/Q7rwrAB/aETiG3Bwd4tt
	hyd1kGevFWDZASZHOg0p8fSKQluPJg2rixGcyOXn70Bubu3iNVG4ArKBUzZ3IZW0FM+X
	WHEnE97yoG5pXIador+nht0aDUhGta8mowwMqevLB3VU0I/64gjoMv7WHRnZ3ZkUmbIi
	0Gem0HAvBBBmJp9L0zfq5yIyukePYeRCRRahWhlbWOlxLdblgMbJ4saR4CRwyJ/D/E9g
	iNCg==
X-Gm-Message-State: ALoCoQmVSCOeYBMaq3/DYe8Syi7834kxYSURoWE0A2WYpVYfee8X/mLk5B5jiFa9i7oczyDYRAIV
X-Received: by 10.14.2.193 with SMTP id 41mr20691199eef.55.1393193800949;
	Sun, 23 Feb 2014 14:16:40 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:39 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:20 +0000
Message-Id: <1393193792-20008-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 03/15] xen/passthrough: vtd: Don't export
	iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_set_pgd is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 xen/include/xen/iommu.h             |    1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a8d33fc..d5ce5b7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
 /*
  * set VT-d page table directory to EPT table if allowed
  */
-void iommu_set_pgd(struct domain *d)
+static void iommu_set_pgd(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
     mfn_t pgd_mfn;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 8bb0a1d..fcbc432 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_set_pgd(struct domain *d);
 void iommu_domain_teardown(struct domain *d);
 
 void pt_pci_init(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLt-0006rN-Fz; Sun, 23 Feb 2014 22:16:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLs-0006qb-EV
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:48 +0000
Received: from [85.158.137.68:23240] by server-6.bemta-3.messagelabs.com id
	31/D2-09180-F437A035; Sun, 23 Feb 2014 22:16:47 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393193806!2427400!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24872 invoked from network); 23 Feb 2014 22:16:46 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:46 -0000
Received: by mail-ee0-f49.google.com with SMTP id d17so2747429eek.8
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:46 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=+uMH06uuPkcNzd6UdmOAwaefHTF+PBUvUW9111zWnew=;
	b=l8olNbAFi6tC26UbIYDsjrAytdGbmVyqm/Vmhn5rHhqgejFHP9Snn8uv7QcmOz1SLl
	TU+inufe1yCetgHhyESHC7uvdRPqR8xo8+iSyo7bkXJ5rTQK6aR2rPD+6S/FBdh2FKJ+
	pvCSG8z/ZIaAzMtaKNL5cjG21jBmhXhuzg5CpMqScGhfyRB0Pg6FKPE2fGzmAkuNlmBH
	gzSP5DhFQBWT/2Av51cBhXkRkkaeqsDSWZTl6sPd7Mq/XRE19E5A+b0RN0QtPcjMRzy1
	lw7Amk+6wFHqmbSMj/1Vflx3HCuIZBMe8J+bkJ1HVRIT+fcoA2PxNBEd8+bpB6+ysDaX
	bDZA==
X-Gm-Message-State: ALoCoQk53y2h5JulrkyMXUIgW88Q7jQQH7RsN2GxwtY/s021IiF0u0farimjOchhAq/U8ZStleHK
X-Received: by 10.15.21.2 with SMTP id c2mr21063501eeu.77.1393193806441;
	Sun, 23 Feb 2014 14:16:46 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.45
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:45 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:24 +0000
Message-Id: <1393193792-20008-8-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 07/15] xen/passthrough: iommu: Don't need to
	map dom0 page when the PT is shared
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently iommu_dom0_init browses the page list and calls the map_page
callback on each page.

In both the AMD and VT-d drivers, that callback returns immediately if the
page table is shared with the processor, so Xen can safely skip walking
the page list.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/drivers/passthrough/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index b534893..3c63f87 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -156,7 +156,7 @@ void __init iommu_dom0_init(struct domain *d)
 
     register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
-    if ( need_iommu(d) )
+    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
     {
         struct page_info *page;
         unsigned int i = 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently, iommu_dom0_init browses the page list and calls the map_page
callback on each page.

In both the AMD and VT-d drivers, the callback returns immediately if the page
table is shared with the processor, so Xen can safely skip iterating over the
page list.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/drivers/passthrough/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index b534893..3c63f87 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -156,7 +156,7 @@ void __init iommu_dom0_init(struct domain *d)
 
     register_keyhandler('o', &iommu_p2m_table);
     d->need_iommu = !!iommu_dom0_strict;
-    if ( need_iommu(d) )
+    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
     {
         struct page_info *page;
         unsigned int i = 0;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLk-0006ob-Rt; Sun, 23 Feb 2014 22:16:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLj-0006oK-Mz
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:39 +0000
Received: from [85.158.139.211:34533] by server-2.bemta-5.messagelabs.com id
	14/C1-23037-6437A035; Sun, 23 Feb 2014 22:16:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393193797!5730216!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5430 invoked from network); 23 Feb 2014 22:16:38 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:38 -0000
Received: by mail-ea0-f179.google.com with SMTP id q10so2735045ead.38
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=Q3MLBwBKJtz9hvTG5OVtdmMvqfazovDKiwqeeyMV6A0=;
	b=AU8bbkZAdpPAgch0JjiFwhfeO1zf/uZ8wxwr2i7G0d+BOjySzo7H/6ZH61sweMce5a
	DjzZ8Zq4K6Ry/tBfDPRpg6SIsfxB19T1xv8oUHTX5pCCufvxZhPfKom3PJF5kT7UWZCu
	BTxSqc3uJung1RDgO4IyYdnJPK0eqZTJ48SyuNeSmKJPVsfyvNix5NCQWyFk48rdY8Vq
	VUulvDl0jWhSraf3slT/UEx5ZoBak9OFU8n3P4wndIXAaOAn/CrXpvrx7Tqd2ECYl+aZ
	2JjcSgs6P7Vv/1cZ6QEsvTsMBZsALHYRnaqpLYjRcMJh3kCjp0uxl6mOH5jdILpqx7i7
	sp0A==
X-Gm-Message-State: ALoCoQlxY1B0uwLQeOTY6e1JAw71Q83N+1J9auo0gTcZYMM8VbB/1ZSCB+0SeeHTTbBbp874BR98
X-Received: by 10.15.76.135 with SMTP id n7mr5973863eey.36.1393193797762;
	Sun, 23 Feb 2014 14:16:37 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.36
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:37 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:18 +0000
Message-Id: <1393193792-20008-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Keir Fraser <keir@xen.org>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 01/15] xen/common: grant-table: only call
	IOMMU if paging mode translate is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

As far as Xen is concerned, ARM guests are PV guests with paging auto-translate
enabled.

When IOMMU support is added for ARM, mapping a grant ref will always crash
Xen due to the BUG_ON in __gnttab_map_grant_ref.

On x86:
    - PV guests always have paging mode translate disabled
    - PVH and HVM guests always have paging mode translate enabled

This means we can safely replace the check that the domain is a PV guest
with a check that paging mode translate is disabled.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>
---
 xen/common/grant_table.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 107b000..778bdb7 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -721,12 +721,10 @@ __gnttab_map_grant_ref(
 
     double_gt_lock(lgt, rgt);
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        /* Shouldn't happen, because you can't use iommu in a HVM domain. */
-        BUG_ON(paging_mode_translate(ld));
         /* We're not translated, so we know that gmfns and mfns are
            the same things, so the IOMMU entry is always 1-to-1. */
         mapcount(lgt, rd, frame, &wrc, &rdc);
@@ -931,11 +929,10 @@ __gnttab_unmap_common(
             act->pin -= GNTPIN_hstw_inc;
     }
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        BUG_ON(paging_mode_translate(ld));
         mapcount(lgt, rd, op->frame, &wrc, &rdc);
         if ( (wrc + rdc) == 0 )
             err = iommu_unmap_page(ld, op->frame);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLn-0006p4-Jg; Sun, 23 Feb 2014 22:16:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLm-0006oq-Ta
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:43 +0000
Received: from [193.109.254.147:65162] by server-16.bemta-14.messagelabs.com
	id AF/14-21945-A437A035; Sun, 23 Feb 2014 22:16:42 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393193801!2272969!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30740 invoked from network); 23 Feb 2014 22:16:41 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:41 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so1862319eek.41
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:41 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=jR7Dtw6vhSEvQMzJIg0vtTUQ9dyTitsrBB9wEfMCnLY=;
	b=B8wpesMB/G8DSbEZzhahL+kA/IDDVPtbx9tdM08ppqqvbVS1eTnlnHmyvPQuPnJVZJ
	/VDBal49Rsd4REu/TbADLi9fEPLFJiGraKGkK5F+NdMhKUd/Q7rwrAB/aETiG3Bwd4tt
	hyd1kGevFWDZASZHOg0p8fSKQluPJg2rixGcyOXn70Bubu3iNVG4ArKBUzZ3IZW0FM+X
	WHEnE97yoG5pXIador+nht0aDUhGta8mowwMqevLB3VU0I/64gjoMv7WHRnZ3ZkUmbIi
	0Gem0HAvBBBmJp9L0zfq5yIyukePYeRCRRahWhlbWOlxLdblgMbJ4saR4CRwyJ/D/E9g
	iNCg==
X-Gm-Message-State: ALoCoQmVSCOeYBMaq3/DYe8Syi7834kxYSURoWE0A2WYpVYfee8X/mLk5B5jiFa9i7oczyDYRAIV
X-Received: by 10.14.2.193 with SMTP id 41mr20691199eef.55.1393193800949;
	Sun, 23 Feb 2014 14:16:40 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:39 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:20 +0000
Message-Id: <1393193792-20008-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 03/15] xen/passthrough: vtd: Don't export
	iommu_set_pgd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_set_pgd is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 xen/include/xen/iommu.h             |    1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a8d33fc..d5ce5b7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1826,7 +1826,7 @@ static int vtd_ept_page_compatible(struct iommu *iommu)
 /*
  * set VT-d page table directory to EPT table if allowed
  */
-void iommu_set_pgd(struct domain *d)
+static void iommu_set_pgd(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
     mfn_t pgd_mfn;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 8bb0a1d..fcbc432 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -68,7 +68,6 @@ int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
 void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_set_pgd(struct domain *d);
 void iommu_domain_teardown(struct domain *d);
 
 void pt_pci_init(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLm-0006om-7m; Sun, 23 Feb 2014 22:16:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLk-0006oT-M5
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:40 +0000
Received: from [85.158.139.211:60264] by server-14.bemta-5.messagelabs.com id
	E9/CB-27598-8437A035; Sun, 23 Feb 2014 22:16:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393193799!5730217!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5486 invoked from network); 23 Feb 2014 22:16:39 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:39 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so2718712eak.2
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=mlQntBg3Wyiwve9vxUqXVe4wNzwHMZWLNGiLzbsUsaQ=;
	b=dNJR/fC2HD5lYlmkgG6ET3mfLiJDNRn8JsrpOHPBjSHMhUFVZ7ZMTdZtK/Z8N+pTU4
	kJ8u+m2uBMl2RyM2N/wKw0trTHZurGz4bhAeR9hUK76xQ0bCUeoQpYeWbLh1ZmiVE2Nm
	bdRJc7uxBKJby2HUIXSjGVXtmRMLHt0V7yafvzpeh7pROqSLodPNLBIWaFeBcd2SZGuV
	QGN5JINXUdDmhG4gKxddSqLkU7paPtnbiwiq5BMga5gSR1p2dWGe0FCh8+m0Vd0d5C/w
	tQcsgY0Zfx9UF0hXZCvIaZ6zCLHkwHT2Jv10RMMlQSKkp0yAwADI1sVOcsW3CJ2976tc
	0+rw==
X-Gm-Message-State: ALoCoQntAZ3dRs51n7uPfn/fqMjbFuCyetJVrW96GzxcaR/Bmkl2DDAVsVqZqLDMYB0HMCU6bjP1
X-Received: by 10.14.110.68 with SMTP id t44mr21060739eeg.74.1393193799254;
	Sun, 23 Feb 2014 14:16:39 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.37
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:38 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:19 +0000
Message-Id: <1393193792-20008-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
	iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_domain_teardown is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..a8d33fc 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
     return ret;
 }
 
-void iommu_domain_teardown(struct domain *d)
+static void iommu_domain_teardown(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLk-0006oU-Ge; Sun, 23 Feb 2014 22:16:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLi-0006oJ-TX
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:39 +0000
Received: from [85.158.137.68:54882] by server-15.bemta-3.messagelabs.com id
	7E/57-19263-6437A035; Sun, 23 Feb 2014 22:16:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393193797!3704733!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19623 invoked from network); 23 Feb 2014 22:16:37 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:37 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so2770704eek.24
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=69NOsRIgmpVEGJSoK2AVdNajIw7iD5V0zMRCYskVTG8=;
	b=FYBvEoe1dni587+6cuVHOi1iN3JqAKpY8qs2LDSVpvQ4VKoPSFAyoRiPDK6i6RJnXk
	qfztGFcl6I58VFJNtO+vVl178WLLK5a9oehr3YHo4UbwMq3Zcvt4V5WS86JrKDcH+kdL
	eAjSAt1Pd/BOB39auhpqSapNKa9r9L/4JgGAori6SeAnMrki9GH+5wDkAQFy4Hx3z3aZ
	d3FYfu3J3RK/kSBD0L1Pgunw1q+gCnt6Jq3nnkg1Rv+3JiPo+Cbsq//Z9sx/zvAu0Oud
	R+y1QVJnYMvg5/NuSJXkKXIgqsldY/jwYFP9QFOfJnwGQG+fM3F2hAXIFVGlkPtiPzsK
	9RMg==
X-Gm-Message-State: ALoCoQlccMauIOWbl2tE5aW4kSRwueycvyYlfBET8lvzaRI4cbstsbkY7dr5JVCdk9i+BD5c8FDS
X-Received: by 10.14.9.134 with SMTP id 6mr20872611eet.70.1393193796519;
	Sun, 23 Feb 2014 14:16:36 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.35
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:35 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:17 +0000
Message-Id: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 00/15] IOMMU support for ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This is the second version of the patch series adding IOMMU support on ARM.
It also adds an ARM SMMU driver, which is used for instance on Midway.

The IOMMU architecture for ARM relies on the page table being shared between
the processor and each IOMMU.

The patch series is divided as follows:
    - #1: fix grant-table with IOMMU; this will be necessary for ARM later
    - #2-#3: make some VT-d functions static
    - #4-#5: add new device tree functions
    - #6-#9: prepare the IOMMU code for ARM support
    - #10: add basic device tree assignment support
    - #11-#14: add the IOMMU architecture for ARM
    - #15: add the SMMU driver

For now the 1:1 workaround is not removed, because a single platform can have
some DMA-capable devices behind an IOMMU and others not.

Major changes since the RFC:
    - Add basic device tree assignment support
    - Draft a binding to notify DOM0 which devices are protected
    - A couple of fixes for when the IOMMU is disabled or the board has no
      IOMMU support.

This series also depends on:
    - the early printk series:
    http://lists.xen.org/archives/html/xen-devel/2014-01/msg00288.html
    - the interrupt series:
    http://lists.xen.org/archives/html/xen-devel/2014-01/msg02139.html
    - a few bug fixes to the previous series

A working tree can be found here:
    git://xenbits.xen.org/people/julieng/xen-unstable.git branch smmu-v2

Any comments or questions are welcome.

Sincerely yours,

Julien Grall (15):
  xen/common: grant-table: only call IOMMU if paging mode translate is
    disabled
  xen/passthrough: vtd: Don't export iommu_domain_teardown
  xen/passthrough: vtd: Don't export iommu_set_pgd
  xen/dts: Add dt_property_read_bool
  xen/dts: Add dt_parse_phandle_with_args and dt_parse_phandle
  xen/passthrough: rework dom0_pvh_reqs to use it also on ARM
  xen/passthrough: iommu: Don't need to map dom0 page when the PT is
    shared
  xen/passthrough: iommu: Split generic IOMMU code
  xen/passthrough: iommu: Introduce arch specific code
  xen/passthrough: iommu: Basic support of device tree assignment
  xen/passthrough: Introduce IOMMU ARM architecture
  MAINTAINERS: Add drivers/passthrough/arm
  xen/arm: Don't give IOMMU devices to dom0 when iommu is disabled
  xen/arm: Add the property "protected-devices" in the hypervisor node
  drivers/passthrough: arm: Add support for SMMU drivers

 MAINTAINERS                                 |    1 +
 xen/arch/arm/Rules.mk                       |    1 +
 xen/arch/arm/device.c                       |   15 +
 xen/arch/arm/domain.c                       |    7 +
 xen/arch/arm/domain_build.c                 |   78 +-
 xen/arch/arm/kernel.h                       |    3 +
 xen/arch/arm/p2m.c                          |    4 +
 xen/arch/arm/setup.c                        |    2 +
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/common/device_tree.c                    |  161 ++-
 xen/common/grant_table.c                    |    7 +-
 xen/drivers/passthrough/Makefile            |    6 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +-
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +-
 xen/drivers/passthrough/arm/Makefile        |    2 +
 xen/drivers/passthrough/arm/iommu.c         |   65 +
 xen/drivers/passthrough/arm/smmu.c          | 1736 +++++++++++++++++++++++++++
 xen/drivers/passthrough/device_tree.c       |  106 ++
 xen/drivers/passthrough/iommu.c             |  542 +--------
 xen/drivers/passthrough/pci.c               |  437 +++++++
 xen/drivers/passthrough/vtd/iommu.c         |   84 +-
 xen/drivers/passthrough/x86/Makefile        |    1 +
 xen/drivers/passthrough/x86/iommu.c         |  106 ++
 xen/include/asm-arm/device.h                |   13 +-
 xen/include/asm-arm/domain.h                |    2 +
 xen/include/asm-arm/hvm/iommu.h             |   10 +
 xen/include/asm-arm/iommu.h                 |   36 +
 xen/include/asm-x86/hvm/iommu.h             |   29 +
 xen/include/asm-x86/iommu.h                 |   50 +
 xen/include/xen/device_tree.h               |   89 ++
 xen/include/xen/hvm/iommu.h                 |   33 +-
 xen/include/xen/iommu.h                     |   71 +-
 36 files changed, 3148 insertions(+), 679 deletions(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/drivers/passthrough/arm/smmu.c
 create mode 100644 xen/drivers/passthrough/device_tree.c
 create mode 100644 xen/drivers/passthrough/x86/iommu.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h
 create mode 100644 xen/include/asm-x86/iommu.h

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLw-0006sx-UL; Sun, 23 Feb 2014 22:16:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLv-0006rs-8X
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:51 +0000
Received: from [85.158.137.68:55162] by server-15.bemta-3.messagelabs.com id
	A0/77-19263-2537A035; Sun, 23 Feb 2014 22:16:50 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393193808!872284!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13778 invoked from network); 23 Feb 2014 22:16:48 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:48 -0000
Received: by mail-ea0-f176.google.com with SMTP id b10so2718814eae.7
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=Pu2URYnky0IILXluJhG10sBol1MxYrX0INy5vHzafEA=;
	b=MEJG5psujWDMwLrT8dOhjkGdzyyHmBmstdq1XE8PZMzhdScfy2w3TEL0SThEdhxEvC
	wW1HfAdqc43MDSiienYQ47LcW6R3371mno9dqVfsUlwGCPw7yZ6Yz3dyVoZviEKAGhKC
	62qFLbWHNI7L0qEPwI1vwytGqp6cagGhBFbpqD+z+W7+b2XA3wooQ74Iynu8gChdkoUs
	zJb6qi3XUdgvJqTM4ZWFaGU3JxSZm7hY4/Wl8k3cnsxtQ5adnWWIAAkPqrmADDgdrhWN
	eKmaLNOVjjSXTMLNzAdH62bGSVjUoqiY4lmGwcsOxUQVnX/jNEUc6ByN+JqpJiRBfb2s
	TQfQ==
X-Gm-Message-State: ALoCoQmd7vHUH4dMdvMn7IhIe3fP/P38Zc4NO//hrcvrQmZL1JVdBrO8w9SnTAFbVWO+fLihnmGE
X-Received: by 10.15.73.134 with SMTP id h6mr21140747eey.15.1393193808064;
	Sun, 23 Feb 2014 14:16:48 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.46
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:47 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:25 +0000
Message-Id: <1393193792-20008-9-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 08/15] xen/passthrough: iommu: Split generic
	IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
functions specific to x86 and PCI.

Split the framework into three distinct files:
    - iommu.c: generic functions shared between x86 and ARM
               (once ARM support is added)
    - pci.c: functions specific to PCI passthrough
    - x86/iommu.c: functions specific to x86

io.c contains x86 HVM-specific code, so it is now only compiled for x86.

This patch is mostly code movement in new files.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Update commit message
        - Remove a spurious change in drivers/passthrough/vtd/iommu.c
        - Move iommu_x86.c to x86/iommu.c
        - Merge iommu_pci.c into pci.c
        - Introduce iommu_do_pci_domctl
---
 xen/drivers/passthrough/Makefile     |    4 +-
 xen/drivers/passthrough/iommu.c      |  493 ++--------------------------------
 xen/drivers/passthrough/pci.c        |  437 ++++++++++++++++++++++++++++++
 xen/drivers/passthrough/x86/Makefile |    1 +
 xen/drivers/passthrough/x86/iommu.c  |   65 +++++
 xen/include/asm-x86/iommu.h          |   46 ++++
 xen/include/xen/hvm/iommu.h          |    1 +
 xen/include/xen/iommu.h              |   43 ++-
 8 files changed, 597 insertions(+), 493 deletions(-)
 create mode 100644 xen/drivers/passthrough/x86/iommu.c
 create mode 100644 xen/include/asm-x86/iommu.h

diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 7c40fa5..6e08f89 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -3,5 +3,5 @@ subdir-$(x86) += amd
 subdir-$(x86_64) += x86
 
 obj-y += iommu.o
-obj-y += io.o
-obj-y += pci.o
+obj-$(x86) += io.o
+obj-$(HAS_PCI) += pci.o
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 3c63f87..6893cf3 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -24,7 +24,6 @@
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
-static int iommu_populate_page_table(struct domain *d);
 static void iommu_dump_p2m_table(unsigned char key);
 
 /*
@@ -179,86 +178,7 @@ void __init iommu_dom0_init(struct domain *d)
     return hd->platform_ops->dom0_init(d);
 }
 
-int iommu_add_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    int rc;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
-    if ( rc || !pdev->phantom_stride )
-        return rc;
-
-    for ( devfn = pdev->devfn ; ; )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            return 0;
-        rc = hd->platform_ops->add_device(devfn, pdev);
-        if ( rc )
-            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
-                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-    }
-}
-
-int iommu_enable_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops ||
-         !hd->platform_ops->enable_device )
-        return 0;
-
-    return hd->platform_ops->enable_device(pdev);
-}
-
-int iommu_remove_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
-    {
-        int rc;
-
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->remove_device(devfn, pdev);
-        if ( !rc )
-            continue;
-
-        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
-               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-        return rc;
-    }
-
-    return hd->platform_ops->remove_device(pdev->devfn, pdev);
-}
-
-static void iommu_teardown(struct domain *d)
+void iommu_teardown(struct domain *d)
 {
     const struct hvm_iommu *hd = domain_hvm_iommu(d);
 
@@ -267,151 +187,6 @@ static void iommu_teardown(struct domain *d)
     tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
-/*
- * If the device isn't owned by dom0, it means it already
- * has been assigned to other domain, or it doesn't exist.
- */
-static int device_assigned(u16 seg, u8 bus, u8 devfn)
-{
-    struct pci_dev *pdev;
-
-    spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    spin_unlock(&pcidevs_lock);
-
-    return pdev ? 0 : -EBUSY;
-}
-
-static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int rc = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( unlikely(!need_iommu(d) &&
-            (d->arch.hvm_domain.mem_sharing_enabled ||
-             d->mem_event->paging.ring_page)) )
-        return -EXDEV;
-
-    if ( !spin_trylock(&pcidevs_lock) )
-        return -ERESTART;
-
-    if ( need_iommu(d) <= 0 )
-    {
-        if ( !iommu_use_hap_pt(d) )
-        {
-            rc = iommu_populate_page_table(d);
-            if ( rc )
-            {
-                spin_unlock(&pcidevs_lock);
-                return rc;
-            }
-        }
-        d->need_iommu = 1;
-    }
-
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    if ( !pdev )
-    {
-        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
-        goto done;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
-        goto done;
-
-    for ( ; pdev->phantom_stride; rc = 0 )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->assign_device(d, devfn, pdev);
-        if ( rc )
-            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
-                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   rc);
-    }
-
- done:
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-    spin_unlock(&pcidevs_lock);
-
-    return rc;
-}
-
-static int iommu_populate_page_table(struct domain *d)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct page_info *page;
-    int rc = 0, n = 0;
-
-    d->need_iommu = -1;
-
-    this_cpu(iommu_dont_flush_iotlb) = 1;
-    spin_lock(&d->page_alloc_lock);
-
-    if ( unlikely(d->is_dying) )
-        rc = -ESRCH;
-
-    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
-    {
-        if ( is_hvm_domain(d) ||
-            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
-        {
-            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
-            rc = hd->platform_ops->map_page(
-                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
-                IOMMUF_readable|IOMMUF_writable);
-            if ( rc )
-            {
-                page_list_add(page, &d->page_list);
-                break;
-            }
-        }
-        page_list_add_tail(page, &d->arch.relmem_list);
-        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
-             hypercall_preempt_check() )
-            rc = -ERESTART;
-    }
-
-    if ( !rc )
-    {
-        /*
-         * The expectation here is that generally there are many normal pages
-         * on relmem_list (the ones we put there) and only few being in an
-         * offline/broken state. The latter ones are always at the head of the
-         * list. Hence we first move the whole list, and then move back the
-         * first few entries.
-         */
-        page_list_move(&d->page_list, &d->arch.relmem_list);
-        while ( (page = page_list_first(&d->page_list)) != NULL &&
-                (page->count_info & (PGC_state|PGC_broken)) )
-        {
-            page_list_del(page, &d->page_list);
-            page_list_add_tail(page, &d->arch.relmem_list);
-        }
-    }
-
-    spin_unlock(&d->page_alloc_lock);
-    this_cpu(iommu_dont_flush_iotlb) = 0;
-
-    if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
-        iommu_teardown(d);
-
-    return rc;
-}
-
-
 void iommu_domain_destroy(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
@@ -498,53 +273,6 @@ void iommu_iotlb_flush_all(struct domain *d)
     hd->platform_ops->iotlb_flush_all(d);
 }
 
-/* caller should hold the pcidevs_lock */
-int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev = NULL;
-    int ret = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
-    if ( !pdev )
-        return -ENODEV;
-
-    while ( pdev->phantom_stride )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-        if ( !ret )
-            continue;
-
-        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
-               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
-        return ret;
-    }
-
-    devfn = pdev->devfn;
-    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-    if ( ret )
-    {
-        dprintk(XENLOG_G_ERR,
-                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
-                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-        return ret;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-
-    return ret;
-}
-
 int __init iommu_setup(void)
 {
     int rc = -ENODEV;
@@ -585,91 +313,37 @@ int __init iommu_setup(void)
     return rc;
 }
 
-static int iommu_get_device_group(
-    struct domain *d, u16 seg, u8 bus, u8 devfn,
-    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int group_id, sdev_id;
-    u32 bdf;
-    int i = 0;
-    const struct iommu_ops *ops = hd->platform_ops;
-
-    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
-        return 0;
-
-    group_id = ops->get_device_group_id(seg, bus, devfn);
-
-    spin_lock(&pcidevs_lock);
-    for_each_pdev( d, pdev )
-    {
-        if ( (pdev->seg != seg) ||
-             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
-            continue;
-
-        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
-            continue;
-
-        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
-        if ( (sdev_id == group_id) && (i < max_sdevs) )
-        {
-            bdf = 0;
-            bdf |= (pdev->bus & 0xff) << 16;
-            bdf |= (pdev->devfn & 0xff) << 8;
-
-            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
-            {
-                spin_unlock(&pcidevs_lock);
-                return -1;
-            }
-            i++;
-        }
-    }
-    spin_unlock(&pcidevs_lock);
-
-    return i;
-}
-
-void iommu_update_ire_from_apic(
-    unsigned int apic, unsigned int reg, unsigned int value)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    ops->update_ire_from_apic(apic, reg, value);
-}
-
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
+void iommu_resume()
 {
     const struct iommu_ops *ops = iommu_get_ops();
-    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
+    if ( iommu_enabled )
+        ops->resume();
 }
 
-void iommu_read_msi_from_ire(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
+int iommu_do_domctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    const struct iommu_ops *ops = iommu_get_ops();
-    if ( iommu_intremap )
-        ops->read_msi_from_ire(msi_desc, msg);
-}
+    int ret = 0;
 
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->read_apic_from_ire(apic, reg);
-}
+    if ( !iommu_enabled )
+        return -ENOSYS;
 
-int __init iommu_setup_hpet_msi(struct msi_desc *msi)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
-}
+    switch ( domctl->cmd )
+    {
+#ifdef HAS_PCI
+    case XEN_DOMCTL_get_device_group:
+    case XEN_DOMCTL_test_assign_device:
+    case XEN_DOMCTL_assign_device:
+    case XEN_DOMCTL_deassign_device:
+        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
+        break;
+#endif
+    default:
+        ret = -ENOSYS;
+    }
 
-void iommu_resume()
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    if ( iommu_enabled )
-        ops->resume();
+    return ret;
 }
 
 void iommu_suspend()
@@ -695,125 +369,6 @@ void iommu_crash_shutdown(void)
     iommu_enabled = iommu_intremap = 0;
 }
 
-int iommu_do_domctl(
-    struct xen_domctl *domctl, struct domain *d,
-    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
-{
-    u16 seg;
-    u8 bus, devfn;
-    int ret = 0;
-
-    if ( !iommu_enabled )
-        return -ENOSYS;
-
-    switch ( domctl->cmd )
-    {
-    case XEN_DOMCTL_get_device_group:
-    {
-        u32 max_sdevs;
-        XEN_GUEST_HANDLE_64(uint32) sdevs;
-
-        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.get_device_group.machine_sbdf >> 16;
-        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
-        max_sdevs = domctl->u.get_device_group.max_sdevs;
-        sdevs = domctl->u.get_device_group.sdev_array;
-
-        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
-        if ( ret < 0 )
-        {
-            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
-            ret = -EFAULT;
-            domctl->u.get_device_group.num_sdevs = 0;
-        }
-        else
-        {
-            domctl->u.get_device_group.num_sdevs = ret;
-            ret = 0;
-        }
-        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
-            ret = -EFAULT;
-    }
-    break;
-
-    case XEN_DOMCTL_test_assign_device:
-        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        if ( device_assigned(seg, bus, devfn) )
-        {
-            printk(XENLOG_G_INFO
-                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-            ret = -EINVAL;
-        }
-        break;
-
-    case XEN_DOMCTL_assign_device:
-        if ( unlikely(d->is_dying) )
-        {
-            ret = -EINVAL;
-            break;
-        }
-
-        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        ret = device_assigned(seg, bus, devfn) ?:
-              assign_device(d, seg, bus, devfn);
-        if ( ret == -ERESTART )
-            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
-                                                "h", u_domctl);
-        else if ( ret )
-            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
-                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    case XEN_DOMCTL_deassign_device:
-        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        spin_lock(&pcidevs_lock);
-        ret = deassign_device(d, seg, bus, devfn);
-        spin_unlock(&pcidevs_lock);
-        if ( ret )
-            printk(XENLOG_G_ERR
-                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    default:
-        ret = -ENOSYS;
-        break;
-    }
-
-    return ret;
-}
-
 static void iommu_dump_p2m_table(unsigned char key)
 {
     struct domain *d;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..0108f44 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -26,6 +26,9 @@
 #include <asm/hvm/irq.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/event.h>
+#include <xen/guest_access.h>
+#include <xen/paging.h>
 #include <xen/radix-tree.h>
 #include <xen/tasklet.h>
 #include <xsm/xsm.h>
@@ -980,6 +983,440 @@ static int __init setup_dump_pcidevs(void)
 }
 __initcall(setup_dump_pcidevs);
 
+static int iommu_populate_page_table(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct page_info *page;
+    int rc = 0, n = 0;
+
+    d->need_iommu = -1;
+
+    this_cpu(iommu_dont_flush_iotlb) = 1;
+    spin_lock(&d->page_alloc_lock);
+
+    if ( unlikely(d->is_dying) )
+        rc = -ESRCH;
+
+
+    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
+    {
+        if ( is_hvm_domain(d) ||
+            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
+        {
+            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
+            rc = hd->platform_ops->map_page(
+                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
+                IOMMUF_readable|IOMMUF_writable);
+            if ( rc )
+            {
+                page_list_add(page, &d->page_list);
+                break;
+            }
+        }
+        page_list_add_tail(page, &d->arch.relmem_list);
+        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
+             hypercall_preempt_check() )
+            rc = -ERESTART;
+    }
+
+    if ( !rc )
+    {
+        /*
+         * The expectation here is that generally there are many normal pages
+         * on relmem_list (the ones we put there) and only few being in an
+         * offline/broken state. The latter ones are always at the head of the
+         * list. Hence we first move the whole list, and then move back the
+         * first few entries.
+         */
+        page_list_move(&d->page_list, &d->arch.relmem_list);
+        while ( (page = page_list_first(&d->page_list)) != NULL &&
+                (page->count_info & (PGC_state|PGC_broken)) )
+        {
+            page_list_del(page, &d->page_list);
+            page_list_add_tail(page, &d->arch.relmem_list);
+        }
+    }
+
+    spin_unlock(&d->page_alloc_lock);
+    this_cpu(iommu_dont_flush_iotlb) = 0;
+
+    if ( !rc )
+        iommu_iotlb_flush_all(d);
+    else if ( rc != -ERESTART )
+        iommu_teardown(d);
+
+    return rc;
+}
+
+int iommu_add_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    int rc;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
+    if ( rc || !pdev->phantom_stride )
+        return rc;
+
+    for ( devfn = pdev->devfn ; ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            return 0;
+        rc = hd->platform_ops->add_device(devfn, pdev);
+        if ( rc )
+            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+    }
+}
+
+int iommu_enable_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops ||
+         !hd->platform_ops->enable_device )
+        return 0;
+
+    return hd->platform_ops->enable_device(pdev);
+}
+
+int iommu_remove_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
+    {
+        int rc;
+
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->remove_device(devfn, pdev);
+        if ( !rc )
+            continue;
+
+        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
+               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        return rc;
+    }
+
+    return hd->platform_ops->remove_device(pdev->devfn, pdev);
+}
+
+/*
+ * If the device isn't owned by dom0, it means it already
+ * has been assigned to other domain, or it doesn't exist.
+ */
+static int device_assigned(u16 seg, u8 bus, u8 devfn)
+{
+    struct pci_dev *pdev = NULL;
+
+    spin_lock(&pcidevs_lock);
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    spin_unlock(&pcidevs_lock);
+
+    return pdev ? 0 : -EBUSY;
+}
+
+static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int rc = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    /* Prevent device assign if mem paging or mem sharing have been 
+     * enabled for this domain */
+    if ( unlikely(!need_iommu(d) &&
+            (d->arch.hvm_domain.mem_sharing_enabled ||
+             d->mem_event->paging.ring_page)) )
+        return -EXDEV;
+
+    if ( !spin_trylock(&pcidevs_lock) )
+        return -ERESTART;
+
+    if ( need_iommu(d) <= 0 )
+    {
+        if ( !iommu_use_hap_pt(d) )
+        {
+            rc = iommu_populate_page_table(d);
+            if ( rc )
+            {
+                spin_unlock(&pcidevs_lock);
+                return rc;
+            }
+        }
+        d->need_iommu = 1;
+    }
+
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    if ( !pdev )
+    {
+        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
+        goto done;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
+        goto done;
+
+    for ( ; pdev->phantom_stride; rc = 0 )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->assign_device(d, devfn, pdev);
+        if ( rc )
+            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
+                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   rc);
+    }
+
+ done:
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+    spin_unlock(&pcidevs_lock);
+
+    return rc;
+}
+
+/* caller should hold the pcidevs_lock */
+int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev = NULL;
+    int ret = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
+    if ( !pdev )
+        return -ENODEV;
+
+    while ( pdev->phantom_stride )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+        if ( !ret )
+            continue;
+
+        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
+               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        return ret;
+    }
+
+    devfn = pdev->devfn;
+    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+    if ( ret )
+    {
+        dprintk(XENLOG_G_ERR,
+                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
+                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return ret;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+
+    return ret;
+}
+
+static int iommu_get_device_group(
+    struct domain *d, u16 seg, u8 bus, u8 devfn,
+    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int group_id, sdev_id;
+    u32 bdf;
+    int i = 0;
+    const struct iommu_ops *ops = hd->platform_ops;
+
+    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
+        return 0;
+
+    group_id = ops->get_device_group_id(seg, bus, devfn);
+
+    spin_lock(&pcidevs_lock);
+    for_each_pdev( d, pdev )
+    {
+        if ( (pdev->seg != seg) ||
+             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
+            continue;
+
+        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
+            continue;
+
+        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
+        if ( (sdev_id == group_id) && (i < max_sdevs) )
+        {
+            bdf = 0;
+            bdf |= (pdev->bus & 0xff) << 16;
+            bdf |= (pdev->devfn & 0xff) << 8;
+
+            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
+            {
+                spin_unlock(&pcidevs_lock);
+                return -1;
+            }
+            i++;
+        }
+    }
+
+    spin_unlock(&pcidevs_lock);
+
+    return i;
+}
+
+int iommu_do_pci_domctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    u16 seg;
+    u8 bus, devfn;
+    int ret = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_get_device_group:
+    {
+        u32 max_sdevs;
+        XEN_GUEST_HANDLE_64(uint32) sdevs;
+
+        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.get_device_group.machine_sbdf >> 16;
+        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
+        max_sdevs = domctl->u.get_device_group.max_sdevs;
+        sdevs = domctl->u.get_device_group.sdev_array;
+
+        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
+        if ( ret < 0 )
+        {
+            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
+            ret = -EFAULT;
+            domctl->u.get_device_group.num_sdevs = 0;
+        }
+        else
+        {
+            domctl->u.get_device_group.num_sdevs = ret;
+            ret = 0;
+        }
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
+            ret = -EFAULT;
+    }
+    break;
+
+    case XEN_DOMCTL_test_assign_device:
+        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        if ( device_assigned(seg, bus, devfn) )
+        {
+            printk(XENLOG_G_INFO
+                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            ret = -EINVAL;
+        }
+        break;
+
+    case XEN_DOMCTL_assign_device:
+        if ( unlikely(d->is_dying) )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
+        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        ret = device_assigned(seg, bus, devfn) ?:
+              assign_device(d, seg, bus, devfn);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                                "h", u_domctl);
+        else if ( ret )
+            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
+                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    case XEN_DOMCTL_deassign_device:
+        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        spin_lock(&pcidevs_lock);
+        ret = deassign_device(d, seg, bus, devfn);
+        spin_unlock(&pcidevs_lock);
+        if ( ret )
+            printk(XENLOG_G_ERR
+                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index c124a51..a70cf94 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1 +1,2 @@
 obj-y += ats.o
+obj-y += iommu.o
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
new file mode 100644
index 0000000..bd3c23b
--- /dev/null
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -0,0 +1,65 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xsm/xsm.h>
+
+void iommu_update_ire_from_apic(
+    unsigned int apic, unsigned int reg, unsigned int value)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    ops->update_ire_from_apic(apic, reg, value);
+}
+
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
+}
+
+void iommu_read_msi_from_ire(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    if ( iommu_intremap )
+        ops->read_msi_from_ire(msi_desc, msg);
+}
+
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->read_apic_from_ire(apic, reg);
+}
+
+int __init iommu_setup_hpet_msi(struct msi_desc *msi)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
new file mode 100644
index 0000000..34c1896
--- /dev/null
+++ b/xen/include/asm-x86/iommu.h
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+#ifndef __ARCH_X86_IOMMU_H__
+#define __ARCH_X86_IOMMU_H__
+
+#define MAX_IOMMUS 32
+
+#include <asm/msi.h>
+
+void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
+int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
+void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
+int iommu_setup_hpet_msi(struct msi_desc *);
+
+void iommu_share_p2m_table(struct domain *d);
+
+/* While VT-d specific, this must get declared in a generic header. */
+int adjust_vtd_irq_affinities(void);
+void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_supports_eim(void);
+int iommu_enable_x2apic_IR(void);
+void iommu_disable_x2apic_IR(void);
+void iommu_set_dom0_mapping(struct domain *d);
+
+#endif /* !__ARCH_X86_IOMMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 26539e0..2abb4e3 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -21,6 +21,7 @@
 #define __XEN_HVM_IOMMU_H__
 
 #include <xen/iommu.h>
+#include <asm/hvm/iommu.h>
 
 struct g2m_ioport {
     struct list_head list;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index fcbc432..65a37c0 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -25,6 +25,7 @@
 #include <xen/pci.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
+#include <asm/iommu.h>
 
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
@@ -39,17 +40,12 @@ extern bool_t amd_iommu_perdev_intremap;
 
 #define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
 
-#define MAX_IOMMUS 32
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
 #define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
 
 int iommu_setup(void);
-int iommu_supports_eim(void);
-int iommu_enable_x2apic_IR(void);
-void iommu_disable_x2apic_IR(void);
 
 int iommu_add_device(struct pci_dev *pdev);
 int iommu_enable_device(struct pci_dev *pdev);
@@ -59,6 +55,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+/* Internal function; external callers should use iommu_domain_destroy(). */
+void iommu_teardown(struct domain *d);
+
 /* iommu_map_page() takes flags to direct the mapping operation. */
 #define _IOMMUF_readable 0
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
@@ -67,9 +66,8 @@ int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_domain_teardown(struct domain *d);
 
+#ifdef HAS_PCI
 void pt_pci_init(void);
 
 struct pirq;
@@ -84,52 +82,56 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 bool_t pt_irq_need_timer(uint32_t flags);
 
 #define PT_IRQ_TIME_OUT MILLISECS(8)
+#endif /* HAS_PCI */
 
+#ifdef CONFIG_X86
 struct msi_desc;
 struct msi_msg;
+#endif /* CONFIG_X86 */
+
 struct page_info;
 
 struct iommu_ops {
     int (*init)(struct domain *d);
     void (*dom0_init)(struct domain *d);
+#ifdef HAS_PCI
     int (*add_device)(u8 devfn, struct pci_dev *);
     int (*enable_device)(struct pci_dev *pdev);
     int (*remove_device)(u8 devfn, struct pci_dev *);
     int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
+    int (*reassign_device)(struct domain *s, struct domain *t,
+                           u8 devfn, struct pci_dev *);
+    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#endif /* HAS_PCI */
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
-    int (*reassign_device)(struct domain *s, struct domain *t,
-			   u8 devfn, struct pci_dev *);
-    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
     int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
     void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
+    void (*share_p2m)(struct domain *d);
+#endif /* CONFIG_X86 */
     void (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
-int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
-void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
-int iommu_setup_hpet_msi(struct msi_desc *);
-
 void iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 
-void iommu_set_dom0_mapping(struct domain *d);
-void iommu_share_p2m_table(struct domain *d);
+#ifdef HAS_PCI
+int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
+                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
+#endif
 
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
@@ -137,9 +139,6 @@ int iommu_do_domctl(struct xen_domctl *, struct domain *d,
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
 
-/* While VT-d specific, this must get declared in a generic header. */
-int adjust_vtd_irq_affinities(void);
-
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
  * avoid unecessary iotlb_flush in the low level IOMMU code.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:21 +0000
Message-Id: <1393193792-20008-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 04/15] xen/dts: Add dt_property_read_bool

The function checks whether a property exists in a specific node.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
    Changes in v2:
        - Fix typo in commit message
---
 xen/common/device_tree.c      |    6 ++----
 xen/include/xen/device_tree.h |   21 +++++++++++++++++++++
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index c66d1d5..ccdb7ff 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -512,10 +512,8 @@ static void __init *unflatten_dt_alloc(unsigned long *mem, unsigned long size,
 }
 
 /* Find a property with a given name for a given node and return it. */
-static const struct dt_property *
-dt_find_property(const struct dt_device_node *np,
-                 const char *name,
-                 u32 *lenp)
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp)
 {
     const struct dt_property *pp;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 9a8c3de..7c075d9 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #include <xen/init.h>
 #include <xen/string.h>
 #include <xen/types.h>
+#include <xen/stdbool.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -347,6 +348,10 @@ struct dt_device_node *dt_find_compatible_node(struct dt_device_node *from,
 const void *dt_get_property(const struct dt_device_node *np,
                             const char *name, u32 *lenp);
 
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp);
+
+
 /**
  * dt_property_read_u32 - Helper to read a u32 property.
  * @np: node to get the value
@@ -369,6 +374,22 @@ bool_t dt_property_read_u64(const struct dt_device_node *np,
                             const char *name, u64 *out_value);
 
 /**
+ * dt_property_read_bool - Check if a property exists
+ * @np: node to get the value
+ * @name: name of the property
+ *
+ * Search for a property in a device node.
+ * Return true if the property exists, false otherwise.
+ */
+static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
+                                           const char *name)
+{
+    const struct dt_property *prop = dt_find_property(np, name, NULL);
+
+    return prop ? true : false;
+}
+
+/**
  * dt_property_read_string - Find and read a string from a property
  * @np:         Device node from which the property value is to be read
  * @propname:   Name of the property to be searched
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:23 +0000
Message-Id: <1393193792-20008-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 06/15] xen/passthrough: rework dom0_pvh_reqs
	to use it also on ARM

DOM0 on ARM will have the same requirements as DOM0 PVH when the IOMMU is
enabled. Both PVH and ARM guests have the translate paging mode enabled, so Xen
can use it to know whether it needs to check the requirements.

Rename the function and drop the word "pvh" from the panic message.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>

---
    Changes in v2:
        - IOMMU can be disabled on ARM if the platform doesn't have
        IOMMU.
---
 xen/drivers/passthrough/iommu.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 19b0e23..b534893 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -130,13 +130,17 @@ int iommu_domain_init(struct domain *d)
     return hd->platform_ops->init(d);
 }
 
-static __init void check_dom0_pvh_reqs(struct domain *d)
+static __init void check_dom0_reqs(struct domain *d)
 {
-    if ( !iommu_enabled )
+    if ( !paging_mode_translate(d) )
+        return;
+
+    if ( is_pvh_domain(d) && !iommu_enabled )
         panic("Presently, iommu must be enabled for pvh dom0\n");
 
     if ( iommu_passthrough )
-        panic("For pvh dom0, dom0-passthrough must not be enabled\n");
+        panic("Dom0 uses the translate paging mode; dom0-passthrough must not "
+              "be enabled\n");
 
     iommu_dom0_strict = 1;
 }
@@ -145,8 +149,7 @@ void __init iommu_dom0_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    if ( is_pvh_domain(d) )
-        check_dom0_pvh_reqs(d);
+    check_dom0_reqs(d);
 
     if ( !iommu_enabled )
         return;
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:22 +0000
Message-Id: <1393193792-20008-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 05/15] xen/dts: Add
	dt_parse_phandle_with_args and dt_parse_phandle

Code adapted from linux drivers/of/base.c (commit ef42c58).

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Remove hard tabs in dt_parse_phandle
---
 xen/common/device_tree.c      |  151 ++++++++++++++++++++++++++++++++++++++++-
 xen/include/xen/device_tree.h |   54 +++++++++++++++
 2 files changed, 203 insertions(+), 2 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index ccdb7ff..564f2bb 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1090,9 +1090,9 @@ int dt_device_get_address(const struct dt_device_node *dev, int index,
  *
  * Returns a node pointer.
  */
-static const struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
+static struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
 {
-    const struct dt_device_node *np;
+    struct dt_device_node *np;
 
     dt_for_each_device_node(dt_host, np)
         if ( np->phandle == handle )
@@ -1477,6 +1477,153 @@ bool_t dt_device_is_available(const struct dt_device_node *device)
     return 0;
 }
 
+static int __dt_parse_phandle_with_args(const struct dt_device_node *np,
+                                        const char *list_name,
+                                        const char *cells_name,
+                                        int cell_count, int index,
+                                        struct dt_phandle_args *out_args)
+{
+    const __be32 *list, *list_end;
+    int rc = 0, cur_index = 0;
+    u32 size, count = 0;
+    struct dt_device_node *node = NULL;
+    dt_phandle phandle;
+
+    /* Retrieve the phandle list property */
+    list = dt_get_property(np, list_name, &size);
+    if ( !list )
+        return -ENOENT;
+    list_end = list + size / sizeof(*list);
+
+    /* Loop over the phandles until the requested entry is found */
+    while ( list < list_end )
+    {
+        rc = -EINVAL;
+        count = 0;
+
+        /*
+         * If phandle is 0, then it is an empty entry with no
+         * arguments.  Skip forward to the next entry.
+         */
+        phandle = be32_to_cpup(list++);
+        if ( phandle )
+        {
+            /*
+             * Find the provider node and parse the #*-cells
+             * property to determine the argument length.
+             *
+             * This is not needed if the cell count is hard-coded
+             * (i.e. cells_name not set, but cell_count is set),
+             * except when we're going to return the found node
+             * below.
+             */
+            if ( cells_name || cur_index == index )
+            {
+                node = dt_find_node_by_phandle(phandle);
+                if ( !node )
+                {
+                    dt_printk(XENLOG_ERR "%s: could not find phandle\n",
+                              np->full_name);
+                    goto err;
+                }
+            }
+
+            if ( cells_name )
+            {
+                if ( !dt_property_read_u32(node, cells_name, &count) )
+                {
+                    dt_printk("%s: could not get %s for %s\n",
+                              np->full_name, cells_name, node->full_name);
+                    goto err;
+                }
+            }
+            else
+                count = cell_count;
+
+            /*
+             * Make sure that the arguments actually fit in the
+             * remaining property data length
+             */
+            if ( list + count > list_end )
+            {
+                dt_printk(XENLOG_ERR "%s: arguments longer than property\n",
+                          np->full_name);
+                goto err;
+            }
+        }
+
+        /*
+         * All of the error cases above bail out of the loop, so at
+         * this point, the parsing is successful. If the requested
+         * index matches, then fill the out_args structure and return,
+         * or return -ENOENT for an empty entry.
+         */
+        rc = -ENOENT;
+        if ( cur_index == index )
+        {
+            if ( !phandle )
+                goto err;
+
+            if ( out_args )
+            {
+                int i;
+
+                WARN_ON(count > MAX_PHANDLE_ARGS);
+                if (count > MAX_PHANDLE_ARGS)
+                    count = MAX_PHANDLE_ARGS;
+                out_args->np = node;
+                out_args->args_count = count;
+                for ( i = 0; i < count; i++ )
+                    out_args->args[i] = be32_to_cpup(list++);
+            }
+
+            /* Found it! return success */
+            return 0;
+        }
+
+        node = NULL;
+        list += count;
+        cur_index++;
+    }
+
+    /*
+     * Returning result will be one of:
+     * -ENOENT : index is for empty phandle
+     * -EINVAL : parsing error on data
+     * [1..n]  : Number of phandle (count mode; when index = -1)
+     */
+    rc = index < 0 ? cur_index : -ENOENT;
+err:
+    return rc;
+}
+
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name, int index)
+{
+    struct dt_phandle_args args;
+
+    if ( index < 0 )
+        return NULL;
+
+    if ( __dt_parse_phandle_with_args(np, phandle_name, NULL, 0,
+                                      index, &args) )
+        return NULL;
+
+    return args.np;
+}
+
+
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args)
+{
+    if ( index < 0 )
+        return -EINVAL;
+    return __dt_parse_phandle_with_args(np, list_name, cells_name, 0,
+                                        index, out_args);
+}
+
 /**
  * unflatten_dt_node - Alloc and populate a device_node from the flat tree
  * @fdt: The parent device tree blob
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 7c075d9..d429e60 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -112,6 +112,13 @@ struct dt_device_node {
 
 };
 
+#define MAX_PHANDLE_ARGS 16
+struct dt_phandle_args {
+    struct dt_device_node *np;
+    int args_count;
+    uint32_t args[MAX_PHANDLE_ARGS];
+};
+
 /**
  * IRQ line type.
  *
@@ -621,6 +628,53 @@ void dt_set_range(__be32 **cellp, const struct dt_device_node *np,
 void dt_get_range(const __be32 **cellp, const struct dt_device_node *np,
                   u64 *address, u64 *size);
 
+/**
+ * dt_parse_phandle - Resolve a phandle property to a device_node pointer
+ * @np: Pointer to device node holding phandle property
+ * @phandle_name: Name of property holding a phandle value
+ * @index: For properties holding a table of phandles, this is the index into
+ *         the table
+ *
+ * Returns the device_node pointer.
+ */
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name,
+                                        int index);
+
+/**
+ * dt_parse_phandle_with_args() - Find a node pointed by phandle in a list
+ * @np: pointer to a device tree node containing a list
+ * @list_name: property name that contains a list
+ * @cells_name: property name that specifies phandles' arguments count
+ * @index: index of a phandle to parse out
+ * @out_args: optional pointer to output arguments structure (will be filled)
+ *
+ * This function is useful for parsing lists of phandles and their arguments.
+ * Returns 0 on success and fills out_args; on error it returns an appropriate
+ * errno value.
+ *
+ * Example:
+ *
+ * phandle1: node1 {
+ * 	#list-cells = <2>;
+ * }
+ *
+ * phandle2: node2 {
+ * 	#list-cells = <1>;
+ * }
+ *
+ * node3 {
+ * 	list = <&phandle1 1 2 &phandle2 3>;
+ * }
+ *
+ * To get a device_node of the `node2' node you may call this:
+ * dt_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);
+ */
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args);
+
 #endif /* __XEN_DEVICE_TREE_H */
 
 /*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLm-0006om-7m; Sun, 23 Feb 2014 22:16:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLk-0006oT-M5
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:40 +0000
Received: from [85.158.139.211:60264] by server-14.bemta-5.messagelabs.com id
	E9/CB-27598-8437A035; Sun, 23 Feb 2014 22:16:40 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393193799!5730217!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5486 invoked from network); 23 Feb 2014 22:16:39 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:39 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so2718712eak.2
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:39 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=mlQntBg3Wyiwve9vxUqXVe4wNzwHMZWLNGiLzbsUsaQ=;
	b=dNJR/fC2HD5lYlmkgG6ET3mfLiJDNRn8JsrpOHPBjSHMhUFVZ7ZMTdZtK/Z8N+pTU4
	kJ8u+m2uBMl2RyM2N/wKw0trTHZurGz4bhAeR9hUK76xQ0bCUeoQpYeWbLh1ZmiVE2Nm
	bdRJc7uxBKJby2HUIXSjGVXtmRMLHt0V7yafvzpeh7pROqSLodPNLBIWaFeBcd2SZGuV
	QGN5JINXUdDmhG4gKxddSqLkU7paPtnbiwiq5BMga5gSR1p2dWGe0FCh8+m0Vd0d5C/w
	tQcsgY0Zfx9UF0hXZCvIaZ6zCLHkwHT2Jv10RMMlQSKkp0yAwADI1sVOcsW3CJ2976tc
	0+rw==
X-Gm-Message-State: ALoCoQntAZ3dRs51n7uPfn/fqMjbFuCyetJVrW96GzxcaR/Bmkl2DDAVsVqZqLDMYB0HMCU6bjP1
X-Received: by 10.14.110.68 with SMTP id t44mr21060739eeg.74.1393193799254;
	Sun, 23 Feb 2014 14:16:39 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.37
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:38 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:19 +0000
Message-Id: <1393193792-20008-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
	iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

iommu_domain_teardown is only used internally in
xen/drivers/passthrough/vtd/iommu.c

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/drivers/passthrough/vtd/iommu.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 5f10034..a8d33fc 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
     return ret;
 }
 
-void iommu_domain_teardown(struct domain *d)
+static void iommu_domain_teardown(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLk-0006oU-Ge; Sun, 23 Feb 2014 22:16:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLi-0006oJ-TX
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:39 +0000
Received: from [85.158.137.68:54882] by server-15.bemta-3.messagelabs.com id
	7E/57-19263-6437A035; Sun, 23 Feb 2014 22:16:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393193797!3704733!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19623 invoked from network); 23 Feb 2014 22:16:37 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:37 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so2770704eek.24
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:36 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=69NOsRIgmpVEGJSoK2AVdNajIw7iD5V0zMRCYskVTG8=;
	b=FYBvEoe1dni587+6cuVHOi1iN3JqAKpY8qs2LDSVpvQ4VKoPSFAyoRiPDK6i6RJnXk
	qfztGFcl6I58VFJNtO+vVl178WLLK5a9oehr3YHo4UbwMq3Zcvt4V5WS86JrKDcH+kdL
	eAjSAt1Pd/BOB39auhpqSapNKa9r9L/4JgGAori6SeAnMrki9GH+5wDkAQFy4Hx3z3aZ
	d3FYfu3J3RK/kSBD0L1Pgunw1q+gCnt6Jq3nnkg1Rv+3JiPo+Cbsq//Z9sx/zvAu0Oud
	R+y1QVJnYMvg5/NuSJXkKXIgqsldY/jwYFP9QFOfJnwGQG+fM3F2hAXIFVGlkPtiPzsK
	9RMg==
X-Gm-Message-State: ALoCoQlccMauIOWbl2tE5aW4kSRwueycvyYlfBET8lvzaRI4cbstsbkY7dr5JVCdk9i+BD5c8FDS
X-Received: by 10.14.9.134 with SMTP id 6mr20872611eet.70.1393193796519;
	Sun, 23 Feb 2014 14:16:36 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.35
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:35 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:17 +0000
Message-Id: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 00/15] IOMMU support for ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello,

This is the second version of the patch series adding IOMMU support on
ARM. It also adds the ARM SMMU driver, which is used for instance on Midway.

The IOMMU architecture for ARM relies on the page table being shared
between the processor and each IOMMU.

The patch series is divided as follows:
    - #1: Fix grant-table with IOMMU; will be necessary for ARM later
    - #2-#3: Make some vtd functions static
    - #4-#5: Add new device tree functions
    - #6-#9: Prepare the IOMMU code to add support for ARM
    - #10: Add basic device tree assignment support
    - #11-#14: Add the IOMMU architecture for ARM
    - #15: Add the SMMU driver

For now the 1:1 workaround is not removed, because the same platform can have
some DMA-capable devices behind an IOMMU and some not.

Major changes since the RFC:
    - Add basic device tree assignment support
    - Draft a binding to notify DOM0 which devices are protected
    - A couple of fixes for when the IOMMU is disabled or the board has no
    IOMMU support.

This series also depends on:
    - early printk series:
    http://lists.xen.org/archives/html/xen-devel/2014-01/msg00288.html
    - interrupt series:
    http://lists.xen.org/archives/html/xen-devel/2014-01/msg02139.html
    - a few bug fixes on top of the previous series

A working tree can be found here:
    git://xenbits.xen.org/people/julieng/xen-unstable.git branch smmu-v2

Any comments or questions are welcome.

Sincerely yours,

Julien Grall (15):
  xen/common: grant-table: only call IOMMU if paging mode translate is
    disabled
  xen/passthrough: vtd: Don't export iommu_domain_teardown
  xen/passthrough: vtd: Don't export iommu_set_pgd
  xen/dts: Add dt_property_read_bool
  xen/dts: Add dt_parse_phandle_with_args and dt_parse_phandle
  xen/passthrough: rework dom0_pvh_reqs to use it also on ARM
  xen/passthrough: iommu: Don't need to map dom0 page when the PT is
    shared
  xen/passthrough: iommu: Split generic IOMMU code
  xen/passthrough: iommu: Introduce arch specific code
  xen/passthrough: iommu: Basic support of device tree assignment
  xen/passthrough: Introduce IOMMU ARM architecture
  MAINTAINERS: Add drivers/passthrough/arm
  xen/arm: Don't give IOMMU devices to dom0 when iommu is disabled
  xen/arm: Add the property "protected-devices" in the hypervisor node
  drivers/passthrough: arm: Add support for SMMU drivers

 MAINTAINERS                                 |    1 +
 xen/arch/arm/Rules.mk                       |    1 +
 xen/arch/arm/device.c                       |   15 +
 xen/arch/arm/domain.c                       |    7 +
 xen/arch/arm/domain_build.c                 |   78 +-
 xen/arch/arm/kernel.h                       |    3 +
 xen/arch/arm/p2m.c                          |    4 +
 xen/arch/arm/setup.c                        |    2 +
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/common/device_tree.c                    |  161 ++-
 xen/common/grant_table.c                    |    7 +-
 xen/drivers/passthrough/Makefile            |    6 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +-
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +-
 xen/drivers/passthrough/arm/Makefile        |    2 +
 xen/drivers/passthrough/arm/iommu.c         |   65 +
 xen/drivers/passthrough/arm/smmu.c          | 1736 +++++++++++++++++++++++++++
 xen/drivers/passthrough/device_tree.c       |  106 ++
 xen/drivers/passthrough/iommu.c             |  542 +--------
 xen/drivers/passthrough/pci.c               |  437 +++++++
 xen/drivers/passthrough/vtd/iommu.c         |   84 +-
 xen/drivers/passthrough/x86/Makefile        |    1 +
 xen/drivers/passthrough/x86/iommu.c         |  106 ++
 xen/include/asm-arm/device.h                |   13 +-
 xen/include/asm-arm/domain.h                |    2 +
 xen/include/asm-arm/hvm/iommu.h             |   10 +
 xen/include/asm-arm/iommu.h                 |   36 +
 xen/include/asm-x86/hvm/iommu.h             |   29 +
 xen/include/asm-x86/iommu.h                 |   50 +
 xen/include/xen/device_tree.h               |   89 ++
 xen/include/xen/hvm/iommu.h                 |   33 +-
 xen/include/xen/iommu.h                     |   71 +-
 36 files changed, 3148 insertions(+), 679 deletions(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/drivers/passthrough/arm/smmu.c
 create mode 100644 xen/drivers/passthrough/device_tree.c
 create mode 100644 xen/drivers/passthrough/x86/iommu.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h
 create mode 100644 xen/include/asm-x86/iommu.h

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLw-0006sx-UL; Sun, 23 Feb 2014 22:16:52 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLv-0006rs-8X
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:51 +0000
Received: from [85.158.137.68:55162] by server-15.bemta-3.messagelabs.com id
	A0/77-19263-2537A035; Sun, 23 Feb 2014 22:16:50 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393193808!872284!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13778 invoked from network); 23 Feb 2014 22:16:48 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:48 -0000
Received: by mail-ea0-f176.google.com with SMTP id b10so2718814eae.7
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=Pu2URYnky0IILXluJhG10sBol1MxYrX0INy5vHzafEA=;
	b=MEJG5psujWDMwLrT8dOhjkGdzyyHmBmstdq1XE8PZMzhdScfy2w3TEL0SThEdhxEvC
	wW1HfAdqc43MDSiienYQ47LcW6R3371mno9dqVfsUlwGCPw7yZ6Yz3dyVoZviEKAGhKC
	62qFLbWHNI7L0qEPwI1vwytGqp6cagGhBFbpqD+z+W7+b2XA3wooQ74Iynu8gChdkoUs
	zJb6qi3XUdgvJqTM4ZWFaGU3JxSZm7hY4/Wl8k3cnsxtQ5adnWWIAAkPqrmADDgdrhWN
	eKmaLNOVjjSXTMLNzAdH62bGSVjUoqiY4lmGwcsOxUQVnX/jNEUc6ByN+JqpJiRBfb2s
	TQfQ==
X-Gm-Message-State: ALoCoQmd7vHUH4dMdvMn7IhIe3fP/P38Zc4NO//hrcvrQmZL1JVdBrO8w9SnTAFbVWO+fLihnmGE
X-Received: by 10.15.73.134 with SMTP id h6mr21140747eey.15.1393193808064;
	Sun, 23 Feb 2014 14:16:48 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.46
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:47 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:25 +0000
Message-Id: <1393193792-20008-9-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 08/15] xen/passthrough: iommu: Split generic
	IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The generic IOMMU framework code (xen/drivers/passthrough/iommu.c) contains
functions specific to x86 and PCI.

Split the framework into 3 distinct files:
    - iommu.c: contains generic functions shared between x86 and ARM
               (once ARM is supported)
    - pci.c: contains functions specific to PCI passthrough
    - x86/iommu.c: contains functions specific to x86

io.c contains x86 HVM-specific code, so only compile it for x86.

This patch mostly moves code into new files.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Update commit message
        - Removing spurious change in drivers/passthrough/vtd/iommu.c
        - Move iommu_x86.c in x86/iommu.c
        - Merge iommu_pci.c in pci.c
        - Introduce iommu_do_pci_domctl
---
 xen/drivers/passthrough/Makefile     |    4 +-
 xen/drivers/passthrough/iommu.c      |  493 ++--------------------------------
 xen/drivers/passthrough/pci.c        |  437 ++++++++++++++++++++++++++++++
 xen/drivers/passthrough/x86/Makefile |    1 +
 xen/drivers/passthrough/x86/iommu.c  |   65 +++++
 xen/include/asm-x86/iommu.h          |   46 ++++
 xen/include/xen/hvm/iommu.h          |    1 +
 xen/include/xen/iommu.h              |   43 ++-
 8 files changed, 597 insertions(+), 493 deletions(-)
 create mode 100644 xen/drivers/passthrough/x86/iommu.c
 create mode 100644 xen/include/asm-x86/iommu.h

diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 7c40fa5..6e08f89 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -3,5 +3,5 @@ subdir-$(x86) += amd
 subdir-$(x86_64) += x86
 
 obj-y += iommu.o
-obj-y += io.o
-obj-y += pci.o
+obj-$(x86) += io.o
+obj-$(HAS_PCI) += pci.o
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 3c63f87..6893cf3 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -24,7 +24,6 @@
 #include <xsm/xsm.h>
 
 static void parse_iommu_param(char *s);
-static int iommu_populate_page_table(struct domain *d);
 static void iommu_dump_p2m_table(unsigned char key);
 
 /*
@@ -179,86 +178,7 @@ void __init iommu_dom0_init(struct domain *d)
     return hd->platform_ops->dom0_init(d);
 }
 
-int iommu_add_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    int rc;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
-    if ( rc || !pdev->phantom_stride )
-        return rc;
-
-    for ( devfn = pdev->devfn ; ; )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            return 0;
-        rc = hd->platform_ops->add_device(devfn, pdev);
-        if ( rc )
-            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
-                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-    }
-}
-
-int iommu_enable_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops ||
-         !hd->platform_ops->enable_device )
-        return 0;
-
-    return hd->platform_ops->enable_device(pdev);
-}
-
-int iommu_remove_device(struct pci_dev *pdev)
-{
-    struct hvm_iommu *hd;
-    u8 devfn;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    hd = domain_hvm_iommu(pdev->domain);
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
-    {
-        int rc;
-
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->remove_device(devfn, pdev);
-        if ( !rc )
-            continue;
-
-        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
-               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
-        return rc;
-    }
-
-    return hd->platform_ops->remove_device(pdev->devfn, pdev);
-}
-
-static void iommu_teardown(struct domain *d)
+void iommu_teardown(struct domain *d)
 {
     const struct hvm_iommu *hd = domain_hvm_iommu(d);
 
@@ -267,151 +187,6 @@ static void iommu_teardown(struct domain *d)
     tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
-/*
- * If the device isn't owned by dom0, it means it already
- * has been assigned to other domain, or it doesn't exist.
- */
-static int device_assigned(u16 seg, u8 bus, u8 devfn)
-{
-    struct pci_dev *pdev;
-
-    spin_lock(&pcidevs_lock);
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    spin_unlock(&pcidevs_lock);
-
-    return pdev ? 0 : -EBUSY;
-}
-
-static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int rc = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return 0;
-
-    /* Prevent device assign if mem paging or mem sharing have been 
-     * enabled for this domain */
-    if ( unlikely(!need_iommu(d) &&
-            (d->arch.hvm_domain.mem_sharing_enabled ||
-             d->mem_event->paging.ring_page)) )
-        return -EXDEV;
-
-    if ( !spin_trylock(&pcidevs_lock) )
-        return -ERESTART;
-
-    if ( need_iommu(d) <= 0 )
-    {
-        if ( !iommu_use_hap_pt(d) )
-        {
-            rc = iommu_populate_page_table(d);
-            if ( rc )
-            {
-                spin_unlock(&pcidevs_lock);
-                return rc;
-            }
-        }
-        d->need_iommu = 1;
-    }
-
-    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
-    if ( !pdev )
-    {
-        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
-        goto done;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
-        goto done;
-
-    for ( ; pdev->phantom_stride; rc = 0 )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        rc = hd->platform_ops->assign_device(d, devfn, pdev);
-        if ( rc )
-            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
-                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   rc);
-    }
-
- done:
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-    spin_unlock(&pcidevs_lock);
-
-    return rc;
-}
-
-static int iommu_populate_page_table(struct domain *d)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct page_info *page;
-    int rc = 0, n = 0;
-
-    d->need_iommu = -1;
-
-    this_cpu(iommu_dont_flush_iotlb) = 1;
-    spin_lock(&d->page_alloc_lock);
-
-    if ( unlikely(d->is_dying) )
-        rc = -ESRCH;
-
-    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
-    {
-        if ( is_hvm_domain(d) ||
-            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
-        {
-            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
-            rc = hd->platform_ops->map_page(
-                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
-                IOMMUF_readable|IOMMUF_writable);
-            if ( rc )
-            {
-                page_list_add(page, &d->page_list);
-                break;
-            }
-        }
-        page_list_add_tail(page, &d->arch.relmem_list);
-        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
-             hypercall_preempt_check() )
-            rc = -ERESTART;
-    }
-
-    if ( !rc )
-    {
-        /*
-         * The expectation here is that generally there are many normal pages
-         * on relmem_list (the ones we put there) and only few being in an
-         * offline/broken state. The latter ones are always at the head of the
-         * list. Hence we first move the whole list, and then move back the
-         * first few entries.
-         */
-        page_list_move(&d->page_list, &d->arch.relmem_list);
-        while ( (page = page_list_first(&d->page_list)) != NULL &&
-                (page->count_info & (PGC_state|PGC_broken)) )
-        {
-            page_list_del(page, &d->page_list);
-            page_list_add_tail(page, &d->arch.relmem_list);
-        }
-    }
-
-    spin_unlock(&d->page_alloc_lock);
-    this_cpu(iommu_dont_flush_iotlb) = 0;
-
-    if ( !rc )
-        iommu_iotlb_flush_all(d);
-    else if ( rc != -ERESTART )
-        iommu_teardown(d);
-
-    return rc;
-}
-
-
 void iommu_domain_destroy(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
@@ -498,53 +273,6 @@ void iommu_iotlb_flush_all(struct domain *d)
     hd->platform_ops->iotlb_flush_all(d);
 }
 
-/* caller should hold the pcidevs_lock */
-int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev = NULL;
-    int ret = 0;
-
-    if ( !iommu_enabled || !hd->platform_ops )
-        return -EINVAL;
-
-    ASSERT(spin_is_locked(&pcidevs_lock));
-    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
-    if ( !pdev )
-        return -ENODEV;
-
-    while ( pdev->phantom_stride )
-    {
-        devfn += pdev->phantom_stride;
-        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
-            break;
-        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-        if ( !ret )
-            continue;
-
-        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
-               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
-        return ret;
-    }
-
-    devfn = pdev->devfn;
-    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
-    if ( ret )
-    {
-        dprintk(XENLOG_G_ERR,
-                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
-                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-        return ret;
-    }
-
-    pdev->fault.count = 0;
-
-    if ( !has_arch_pdevs(d) && need_iommu(d) )
-        iommu_teardown(d);
-
-    return ret;
-}
-
 int __init iommu_setup(void)
 {
     int rc = -ENODEV;
@@ -585,91 +313,37 @@ int __init iommu_setup(void)
     return rc;
 }
 
-static int iommu_get_device_group(
-    struct domain *d, u16 seg, u8 bus, u8 devfn,
-    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
-{
-    struct hvm_iommu *hd = domain_hvm_iommu(d);
-    struct pci_dev *pdev;
-    int group_id, sdev_id;
-    u32 bdf;
-    int i = 0;
-    const struct iommu_ops *ops = hd->platform_ops;
-
-    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
-        return 0;
-
-    group_id = ops->get_device_group_id(seg, bus, devfn);
-
-    spin_lock(&pcidevs_lock);
-    for_each_pdev( d, pdev )
-    {
-        if ( (pdev->seg != seg) ||
-             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
-            continue;
-
-        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
-            continue;
-
-        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
-        if ( (sdev_id == group_id) && (i < max_sdevs) )
-        {
-            bdf = 0;
-            bdf |= (pdev->bus & 0xff) << 16;
-            bdf |= (pdev->devfn & 0xff) << 8;
-
-            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
-            {
-                spin_unlock(&pcidevs_lock);
-                return -1;
-            }
-            i++;
-        }
-    }
-    spin_unlock(&pcidevs_lock);
-
-    return i;
-}
-
-void iommu_update_ire_from_apic(
-    unsigned int apic, unsigned int reg, unsigned int value)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    ops->update_ire_from_apic(apic, reg, value);
-}
-
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
+void iommu_resume()
 {
     const struct iommu_ops *ops = iommu_get_ops();
-    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
+    if ( iommu_enabled )
+        ops->resume();
 }
 
-void iommu_read_msi_from_ire(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
+int iommu_do_domctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    const struct iommu_ops *ops = iommu_get_ops();
-    if ( iommu_intremap )
-        ops->read_msi_from_ire(msi_desc, msg);
-}
+    int ret = 0;
 
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->read_apic_from_ire(apic, reg);
-}
+    if ( !iommu_enabled )
+        return -ENOSYS;
 
-int __init iommu_setup_hpet_msi(struct msi_desc *msi)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
-}
+    switch ( domctl->cmd )
+    {
+#ifdef HAS_PCI
+    case XEN_DOMCTL_get_device_group:
+    case XEN_DOMCTL_test_assign_device:
+    case XEN_DOMCTL_assign_device:
+    case XEN_DOMCTL_deassign_device:
+        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
+        break;
+#endif
+    default:
+        ret = -ENOSYS;
+    }
 
-void iommu_resume()
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-    if ( iommu_enabled )
-        ops->resume();
+    return ret;
 }
 
 void iommu_suspend()
@@ -695,125 +369,6 @@ void iommu_crash_shutdown(void)
     iommu_enabled = iommu_intremap = 0;
 }
 
-int iommu_do_domctl(
-    struct xen_domctl *domctl, struct domain *d,
-    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
-{
-    u16 seg;
-    u8 bus, devfn;
-    int ret = 0;
-
-    if ( !iommu_enabled )
-        return -ENOSYS;
-
-    switch ( domctl->cmd )
-    {
-    case XEN_DOMCTL_get_device_group:
-    {
-        u32 max_sdevs;
-        XEN_GUEST_HANDLE_64(uint32) sdevs;
-
-        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.get_device_group.machine_sbdf >> 16;
-        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
-        max_sdevs = domctl->u.get_device_group.max_sdevs;
-        sdevs = domctl->u.get_device_group.sdev_array;
-
-        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
-        if ( ret < 0 )
-        {
-            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
-            ret = -EFAULT;
-            domctl->u.get_device_group.num_sdevs = 0;
-        }
-        else
-        {
-            domctl->u.get_device_group.num_sdevs = ret;
-            ret = 0;
-        }
-        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
-            ret = -EFAULT;
-    }
-    break;
-
-    case XEN_DOMCTL_test_assign_device:
-        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        if ( device_assigned(seg, bus, devfn) )
-        {
-            printk(XENLOG_G_INFO
-                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
-            ret = -EINVAL;
-        }
-        break;
-
-    case XEN_DOMCTL_assign_device:
-        if ( unlikely(d->is_dying) )
-        {
-            ret = -EINVAL;
-            break;
-        }
-
-        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        ret = device_assigned(seg, bus, devfn) ?:
-              assign_device(d, seg, bus, devfn);
-        if ( ret == -ERESTART )
-            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
-                                                "h", u_domctl);
-        else if ( ret )
-            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
-                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    case XEN_DOMCTL_deassign_device:
-        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
-        if ( ret )
-            break;
-
-        seg = domctl->u.assign_device.machine_sbdf >> 16;
-        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
-        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
-
-        spin_lock(&pcidevs_lock);
-        ret = deassign_device(d, seg, bus, devfn);
-        spin_unlock(&pcidevs_lock);
-        if ( ret )
-            printk(XENLOG_G_ERR
-                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   d->domain_id, ret);
-
-        break;
-
-    default:
-        ret = -ENOSYS;
-        break;
-    }
-
-    return ret;
-}
-
 static void iommu_dump_p2m_table(unsigned char key)
 {
     struct domain *d;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c5c8344..0108f44 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -26,6 +26,9 @@
 #include <asm/hvm/irq.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/event.h>
+#include <xen/guest_access.h>
+#include <xen/paging.h>
 #include <xen/radix-tree.h>
 #include <xen/tasklet.h>
 #include <xsm/xsm.h>
@@ -980,6 +983,440 @@ static int __init setup_dump_pcidevs(void)
 }
 __initcall(setup_dump_pcidevs);
 
+static int iommu_populate_page_table(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct page_info *page;
+    int rc = 0, n = 0;
+
+    d->need_iommu = -1;
+
+    this_cpu(iommu_dont_flush_iotlb) = 1;
+    spin_lock(&d->page_alloc_lock);
+
+    if ( unlikely(d->is_dying) )
+        rc = -ESRCH;
+
+
+    while ( !rc && (page = page_list_remove_head(&d->page_list)) )
+    {
+        if ( is_hvm_domain(d) ||
+            (page->u.inuse.type_info & PGT_type_mask) == PGT_writable_page )
+        {
+            BUG_ON(SHARED_M2P(mfn_to_gmfn(d, page_to_mfn(page))));
+            rc = hd->platform_ops->map_page(
+                d, mfn_to_gmfn(d, page_to_mfn(page)), page_to_mfn(page),
+                IOMMUF_readable|IOMMUF_writable);
+            if ( rc )
+            {
+                page_list_add(page, &d->page_list);
+                break;
+            }
+        }
+        page_list_add_tail(page, &d->arch.relmem_list);
+        if ( !(++n & 0xff) && !page_list_empty(&d->page_list) &&
+             hypercall_preempt_check() )
+            rc = -ERESTART;
+    }
+
+    if ( !rc )
+    {
+        /*
+         * The expectation here is that generally there are many normal pages
+         * on relmem_list (the ones we put there) and only few being in an
+         * offline/broken state. The latter ones are always at the head of the
+         * list. Hence we first move the whole list, and then move back the
+         * first few entries.
+         */
+        page_list_move(&d->page_list, &d->arch.relmem_list);
+        while ( (page = page_list_first(&d->page_list)) != NULL &&
+                (page->count_info & (PGC_state|PGC_broken)) )
+        {
+            page_list_del(page, &d->page_list);
+            page_list_add_tail(page, &d->arch.relmem_list);
+        }
+    }
+
+    spin_unlock(&d->page_alloc_lock);
+    this_cpu(iommu_dont_flush_iotlb) = 0;
+
+    if ( !rc )
+        iommu_iotlb_flush_all(d);
+    else if ( rc != -ERESTART )
+        iommu_teardown(d);
+
+    return rc;
+}
+
+int iommu_add_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    int rc;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    rc = hd->platform_ops->add_device(pdev->devfn, pdev);
+    if ( rc || !pdev->phantom_stride )
+        return rc;
+
+    for ( devfn = pdev->devfn ; ; )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            return 0;
+        rc = hd->platform_ops->add_device(devfn, pdev);
+        if ( rc )
+            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
+                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+    }
+}
+
+int iommu_enable_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops ||
+         !hd->platform_ops->enable_device )
+        return 0;
+
+    return hd->platform_ops->enable_device(pdev);
+}
+
+int iommu_remove_device(struct pci_dev *pdev)
+{
+    struct hvm_iommu *hd;
+    u8 devfn;
+
+    if ( !pdev->domain )
+        return -EINVAL;
+
+    hd = domain_hvm_iommu(pdev->domain);
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    for ( devfn = pdev->devfn ; pdev->phantom_stride; )
+    {
+        int rc;
+
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->remove_device(devfn, pdev);
+        if ( !rc )
+            continue;
+
+        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
+               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        return rc;
+    }
+
+    return hd->platform_ops->remove_device(pdev->devfn, pdev);
+}
+
+/*
+ * If the device isn't owned by dom0, it has already been assigned
+ * to another domain, or it doesn't exist.
+ */
+static int device_assigned(u16 seg, u8 bus, u8 devfn)
+{
+    struct pci_dev *pdev = NULL;
+
+    spin_lock(&pcidevs_lock);
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    spin_unlock(&pcidevs_lock);
+
+    return pdev ? 0 : -EBUSY;
+}
+
+static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int rc = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return 0;
+
+    /* Prevent device assignment if mem paging or mem sharing has been
+     * enabled for this domain. */
+    if ( unlikely(!need_iommu(d) &&
+            (d->arch.hvm_domain.mem_sharing_enabled ||
+             d->mem_event->paging.ring_page)) )
+        return -EXDEV;
+
+    if ( !spin_trylock(&pcidevs_lock) )
+        return -ERESTART;
+
+    if ( need_iommu(d) <= 0 )
+    {
+        if ( !iommu_use_hap_pt(d) )
+        {
+            rc = iommu_populate_page_table(d);
+            if ( rc )
+            {
+                spin_unlock(&pcidevs_lock);
+                return rc;
+            }
+        }
+        d->need_iommu = 1;
+    }
+
+    pdev = pci_get_pdev_by_domain(dom0, seg, bus, devfn);
+    if ( !pdev )
+    {
+        rc = pci_get_pdev(seg, bus, devfn) ? -EBUSY : -ENODEV;
+        goto done;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( (rc = hd->platform_ops->assign_device(d, devfn, pdev)) )
+        goto done;
+
+    for ( ; pdev->phantom_stride; rc = 0 )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        rc = hd->platform_ops->assign_device(d, devfn, pdev);
+        if ( rc )
+            printk(XENLOG_G_WARNING "d%d: assign %04x:%02x:%02x.%u failed (%d)\n",
+                   d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   rc);
+    }
+
+ done:
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+    spin_unlock(&pcidevs_lock);
+
+    return rc;
+}
+
+/* caller should hold the pcidevs_lock */
+int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev = NULL;
+    int ret = 0;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EINVAL;
+
+    ASSERT(spin_is_locked(&pcidevs_lock));
+    pdev = pci_get_pdev_by_domain(d, seg, bus, devfn);
+    if ( !pdev )
+        return -ENODEV;
+
+    while ( pdev->phantom_stride )
+    {
+        devfn += pdev->phantom_stride;
+        if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
+            break;
+        ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+        if ( !ret )
+            continue;
+
+        printk(XENLOG_G_ERR "d%d: deassign %04x:%02x:%02x.%u failed (%d)\n",
+               d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        return ret;
+    }
+
+    devfn = pdev->devfn;
+    ret = hd->platform_ops->reassign_device(d, dom0, devfn, pdev);
+    if ( ret )
+    {
+        dprintk(XENLOG_G_ERR,
+                "d%d: deassign device (%04x:%02x:%02x.%u) failed\n",
+                d->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        return ret;
+    }
+
+    pdev->fault.count = 0;
+
+    if ( !has_arch_pdevs(d) && need_iommu(d) )
+        iommu_teardown(d);
+
+    return ret;
+}
+
+static int iommu_get_device_group(
+    struct domain *d, u16 seg, u8 bus, u8 devfn,
+    XEN_GUEST_HANDLE_64(uint32) buf, int max_sdevs)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct pci_dev *pdev;
+    int group_id, sdev_id;
+    u32 bdf;
+    int i = 0;
+    const struct iommu_ops *ops = hd->platform_ops;
+
+    if ( !iommu_enabled || !ops || !ops->get_device_group_id )
+        return 0;
+
+    group_id = ops->get_device_group_id(seg, bus, devfn);
+
+    spin_lock(&pcidevs_lock);
+    for_each_pdev( d, pdev )
+    {
+        if ( (pdev->seg != seg) ||
+             ((pdev->bus == bus) && (pdev->devfn == devfn)) )
+            continue;
+
+        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
+            continue;
+
+        sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
+        if ( (sdev_id == group_id) && (i < max_sdevs) )
+        {
+            bdf = 0;
+            bdf |= (pdev->bus & 0xff) << 16;
+            bdf |= (pdev->devfn & 0xff) << 8;
+
+            if ( unlikely(copy_to_guest_offset(buf, i, &bdf, 1)) )
+            {
+                spin_unlock(&pcidevs_lock);
+                return -1;
+            }
+            i++;
+        }
+    }
+
+    spin_unlock(&pcidevs_lock);
+
+    return i;
+}
+
+int iommu_do_pci_domctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    u16 seg;
+    u8 bus, devfn;
+    int ret = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_get_device_group:
+    {
+        u32 max_sdevs;
+        XEN_GUEST_HANDLE_64(uint32) sdevs;
+
+        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.get_device_group.machine_sbdf >> 16;
+        bus = (domctl->u.get_device_group.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.get_device_group.machine_sbdf & 0xff;
+        max_sdevs = domctl->u.get_device_group.max_sdevs;
+        sdevs = domctl->u.get_device_group.sdev_array;
+
+        ret = iommu_get_device_group(d, seg, bus, devfn, sdevs, max_sdevs);
+        if ( ret < 0 )
+        {
+            dprintk(XENLOG_ERR, "iommu_get_device_group() failed!\n");
+            ret = -EFAULT;
+            domctl->u.get_device_group.num_sdevs = 0;
+        }
+        else
+        {
+            domctl->u.get_device_group.num_sdevs = ret;
+            ret = 0;
+        }
+        if ( __copy_field_to_guest(u_domctl, domctl, u.get_device_group) )
+            ret = -EFAULT;
+    }
+    break;
+
+    case XEN_DOMCTL_test_assign_device:
+        ret = xsm_test_assign_device(XSM_HOOK, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        if ( device_assigned(seg, bus, devfn) )
+        {
+            printk(XENLOG_G_INFO
+                   "%04x:%02x:%02x.%u already assigned, or non-existent\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            ret = -EINVAL;
+        }
+        break;
+
+    case XEN_DOMCTL_assign_device:
+        if ( unlikely(d->is_dying) )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
+        ret = xsm_assign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        ret = device_assigned(seg, bus, devfn) ?:
+              assign_device(d, seg, bus, devfn);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                                "h", u_domctl);
+        else if ( ret )
+            printk(XENLOG_G_ERR "XEN_DOMCTL_assign_device: "
+                   "assign %04x:%02x:%02x.%u to dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    case XEN_DOMCTL_deassign_device:
+        ret = xsm_deassign_device(XSM_HOOK, d, domctl->u.assign_device.machine_sbdf);
+        if ( ret )
+            break;
+
+        seg = domctl->u.assign_device.machine_sbdf >> 16;
+        bus = (domctl->u.assign_device.machine_sbdf >> 8) & 0xff;
+        devfn = domctl->u.assign_device.machine_sbdf & 0xff;
+
+        spin_lock(&pcidevs_lock);
+        ret = deassign_device(d, seg, bus, devfn);
+        spin_unlock(&pcidevs_lock);
+        if ( ret )
+            printk(XENLOG_G_ERR
+                   "deassign %04x:%02x:%02x.%u from dom%d failed (%d)\n",
+                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                   d->domain_id, ret);
+
+        break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index c124a51..a70cf94 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1 +1,2 @@
 obj-y += ats.o
+obj-y += iommu.o
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
new file mode 100644
index 0000000..bd3c23b
--- /dev/null
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -0,0 +1,65 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
+#include <xen/event.h>
+#include <xen/softirq.h>
+#include <xsm/xsm.h>
+
+void iommu_update_ire_from_apic(
+    unsigned int apic, unsigned int reg, unsigned int value)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    ops->update_ire_from_apic(apic, reg, value);
+}
+
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
+}
+
+void iommu_read_msi_from_ire(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    if ( iommu_intremap )
+        ops->read_msi_from_ire(msi_desc, msg);
+}
+
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->read_apic_from_ire(apic, reg);
+}
+
+int __init iommu_setup_hpet_msi(struct msi_desc *msi)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
new file mode 100644
index 0000000..34c1896
--- /dev/null
+++ b/xen/include/asm-x86/iommu.h
@@ -0,0 +1,46 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_X86_IOMMU_H__
+#define __ARCH_X86_IOMMU_H__
+
+#define MAX_IOMMUS 32
+
+#include <asm/msi.h>
+
+void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
+int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
+void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
+unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
+int iommu_setup_hpet_msi(struct msi_desc *);
+
+void iommu_share_p2m_table(struct domain *d);
+
+/* While VT-d specific, this must get declared in a generic header. */
+int adjust_vtd_irq_affinities(void);
+void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
+int iommu_supports_eim(void);
+int iommu_enable_x2apic_IR(void);
+void iommu_disable_x2apic_IR(void);
+void iommu_set_dom0_mapping(struct domain *d);
+
+#endif /* !__ARCH_X86_IOMMU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 26539e0..2abb4e3 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -21,6 +21,7 @@
 #define __XEN_HVM_IOMMU_H__
 
 #include <xen/iommu.h>
+#include <asm/hvm/iommu.h>
 
 struct g2m_ioport {
     struct list_head list;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index fcbc432..65a37c0 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -25,6 +25,7 @@
 #include <xen/pci.h>
 #include <public/hvm/ioreq.h>
 #include <public/domctl.h>
+#include <asm/iommu.h>
 
 extern bool_t iommu_enable, iommu_enabled;
 extern bool_t force_iommu, iommu_verbose;
@@ -39,17 +40,12 @@ extern bool_t amd_iommu_perdev_intremap;
 
 #define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
 
-#define MAX_IOMMUS 32
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
 #define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
 
 int iommu_setup(void);
-int iommu_supports_eim(void);
-int iommu_enable_x2apic_IR(void);
-void iommu_disable_x2apic_IR(void);
 
 int iommu_add_device(struct pci_dev *pdev);
 int iommu_enable_device(struct pci_dev *pdev);
@@ -59,6 +55,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+/* Function used internally, use iommu_domain_destroy */
+void iommu_teardown(struct domain *d);
+
 /* iommu_map_page() takes flags to direct the mapping operation. */
 #define _IOMMUF_readable 0
 #define IOMMUF_readable  (1u<<_IOMMUF_readable)
@@ -67,9 +66,8 @@ int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags);
 int iommu_unmap_page(struct domain *d, unsigned long gfn);
-void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte, int order, int present);
-void iommu_domain_teardown(struct domain *d);
 
+#ifdef HAS_PCI
 void pt_pci_init(void);
 
 struct pirq;
@@ -84,52 +82,56 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
 bool_t pt_irq_need_timer(uint32_t flags);
 
 #define PT_IRQ_TIME_OUT MILLISECS(8)
+#endif /* HAS_PCI */
 
+#ifdef CONFIG_X86
 struct msi_desc;
 struct msi_msg;
+#endif /* CONFIG_X86 */
+
 struct page_info;
 
 struct iommu_ops {
     int (*init)(struct domain *d);
     void (*dom0_init)(struct domain *d);
+#ifdef HAS_PCI
     int (*add_device)(u8 devfn, struct pci_dev *);
     int (*enable_device)(struct pci_dev *pdev);
     int (*remove_device)(u8 devfn, struct pci_dev *);
     int (*assign_device)(struct domain *, u8 devfn, struct pci_dev *);
+    int (*reassign_device)(struct domain *s, struct domain *t,
+			   u8 devfn, struct pci_dev *);
+    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#endif /* HAS_PCI */
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
     int (*unmap_page)(struct domain *d, unsigned long gfn);
     void (*free_page_table)(struct page_info *);
-    int (*reassign_device)(struct domain *s, struct domain *t,
-			   u8 devfn, struct pci_dev *);
-    int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
+#ifdef CONFIG_X86
     void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
     int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
     void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
     unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
     int (*setup_hpet_msi)(struct msi_desc *);
+    void (*share_p2m)(struct domain *d);
+#endif /* CONFIG_X86 */
     void (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     void (*iotlb_flush)(struct domain *d, unsigned long gfn, unsigned int page_count);
     void (*iotlb_flush_all)(struct domain *d);
     void (*dump_p2m_table)(struct domain *d);
 };
 
-void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
-int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
-void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
-unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg);
-int iommu_setup_hpet_msi(struct msi_desc *);
-
 void iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 
-void iommu_set_dom0_mapping(struct domain *d);
-void iommu_share_p2m_table(struct domain *d);
+#ifdef HAS_PCI
+int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
+                        XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
+#endif
 
 int iommu_do_domctl(struct xen_domctl *, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
@@ -137,9 +139,6 @@ int iommu_do_domctl(struct xen_domctl *, struct domain *d,
 void iommu_iotlb_flush(struct domain *d, unsigned long gfn, unsigned int page_count);
 void iommu_iotlb_flush_all(struct domain *d);
 
-/* While VT-d specific, this must get declared in a generic header. */
-int adjust_vtd_irq_affinities(void);
-
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
 * avoid unnecessary iotlb_flush in the low level IOMMU code.
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLk-0006ob-Rt; Sun, 23 Feb 2014 22:16:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLj-0006oK-Mz
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:39 +0000
Received: from [85.158.139.211:34533] by server-2.bemta-5.messagelabs.com id
	14/C1-23037-6437A035; Sun, 23 Feb 2014 22:16:38 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393193797!5730216!1
X-Originating-IP: [209.85.215.179]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5430 invoked from network); 23 Feb 2014 22:16:38 -0000
Received: from mail-ea0-f179.google.com (HELO mail-ea0-f179.google.com)
	(209.85.215.179)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:38 -0000
Received: by mail-ea0-f179.google.com with SMTP id q10so2735045ead.38
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=Q3MLBwBKJtz9hvTG5OVtdmMvqfazovDKiwqeeyMV6A0=;
	b=AU8bbkZAdpPAgch0JjiFwhfeO1zf/uZ8wxwr2i7G0d+BOjySzo7H/6ZH61sweMce5a
	DjzZ8Zq4K6Ry/tBfDPRpg6SIsfxB19T1xv8oUHTX5pCCufvxZhPfKom3PJF5kT7UWZCu
	BTxSqc3uJung1RDgO4IyYdnJPK0eqZTJ48SyuNeSmKJPVsfyvNix5NCQWyFk48rdY8Vq
	VUulvDl0jWhSraf3slT/UEx5ZoBak9OFU8n3P4wndIXAaOAn/CrXpvrx7Tqd2ECYl+aZ
	2JjcSgs6P7Vv/1cZ6QEsvTsMBZsALHYRnaqpLYjRcMJh3kCjp0uxl6mOH5jdILpqx7i7
	sp0A==
X-Gm-Message-State: ALoCoQlxY1B0uwLQeOTY6e1JAw71Q83N+1J9auo0gTcZYMM8VbB/1ZSCB+0SeeHTTbBbp874BR98
X-Received: by 10.15.76.135 with SMTP id n7mr5973863eey.36.1393193797762;
	Sun, 23 Feb 2014 14:16:37 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.36
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:37 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:18 +0000
Message-Id: <1393193792-20008-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Keir Fraser <keir@xen.org>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 01/15] xen/common: grant-table: only call
	IOMMU if paging mode translate is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From Xen's point of view, ARM guests are PV guests with paging auto translate
enabled.

When IOMMU support is added for ARM, mapping a grant ref will always crash
Xen due to the BUG_ON in __gnttab_map_grant_ref.

On x86:
    - PV guests always have paging mode translate disabled
    - PVH and HVM guests always have paging mode translate enabled

This means we can safely replace the check that the domain is a PV guest
with a check that the guest has paging mode translate disabled.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>
---
 xen/common/grant_table.c |    7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 107b000..778bdb7 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -721,12 +721,10 @@ __gnttab_map_grant_ref(
 
     double_gt_lock(lgt, rgt);
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        /* Shouldn't happen, because you can't use iommu in a HVM domain. */
-        BUG_ON(paging_mode_translate(ld));
         /* We're not translated, so we know that gmfns and mfns are
            the same things, so the IOMMU entry is always 1-to-1. */
         mapcount(lgt, rd, frame, &wrc, &rdc);
@@ -931,11 +929,10 @@ __gnttab_unmap_common(
             act->pin -= GNTPIN_hstw_inc;
     }
 
-    if ( is_pv_domain(ld) && need_iommu(ld) )
+    if ( !paging_mode_translate(ld) && need_iommu(ld) )
     {
         unsigned int wrc, rdc;
         int err = 0;
-        BUG_ON(paging_mode_translate(ld));
         mapcount(lgt, rd, op->frame, &wrc, &rdc);
         if ( (wrc + rdc) == 0 )
             err = iommu_unmap_page(ld, op->frame);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLr-0006pq-0B; Sun, 23 Feb 2014 22:16:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLp-0006pU-F6
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:45 +0000
Received: from [85.158.137.68:18589] by server-15.bemta-3.messagelabs.com id
	A8/67-19263-C437A035; Sun, 23 Feb 2014 22:16:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393193802!3654091!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4992 invoked from network); 23 Feb 2014 22:16:42 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:42 -0000
Received: by mail-ee0-f44.google.com with SMTP id d49so379504eek.31
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=GSmnRxnF+YykwIEr04izkGFqAGpcNSta6nD/GYqoscw=;
	b=TB+t8Pbnr4OpOlnPBkZhpWQMQ80/2Gk4RyoaleGe1W1EPzZ8SV/fkPgTjWzuhF2WuL
	qZOTTQHK+GiPuKrUTBZRc8gvwDhammhsSDjMfrTgft3O7Ku121J3j34VkikEYTYWj8BN
	X/FlLbda/++Oqw6tgecnhXDAAO33kQyDuK/5aoTRwUdl4en5JoCBJjYqdLObEMzYB1Uz
	I7DBQHTZpOJilgJMHsVZCD7XRnqPyGtOxPW3ZIWvfegztRp3GxLhSj9jTi0v+fIY/q6C
	HPB5dvYvh/Jjlax8narHHtobMQ+zTA9fEbTPb6UnvsBO0kCN42Y2tVglzvRt6Efobk+t
	gwbQ==
X-Gm-Message-State: ALoCoQlTDgHCmyoC0PlOLrB2H1e91yEE+uHes0O8zuXe41eOZSp6iRQa7I8aOwv/hi9Pryigluv/
X-Received: by 10.15.56.8 with SMTP id x8mr20588886eew.83.1393193802345;
	Sun, 23 Feb 2014 14:16:42 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.41
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:41 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:21 +0000
Message-Id: <1393193792-20008-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 04/15] xen/dts: Add dt_property_read_bool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This function checks whether a property exists in a specific node.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
    Changes in v2:
        - Fix typo in commit message
---
 xen/common/device_tree.c      |    6 ++----
 xen/include/xen/device_tree.h |   21 +++++++++++++++++++++
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index c66d1d5..ccdb7ff 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -512,10 +512,8 @@ static void __init *unflatten_dt_alloc(unsigned long *mem, unsigned long size,
 }
 
 /* Find a property with a given name for a given node and return it. */
-static const struct dt_property *
-dt_find_property(const struct dt_device_node *np,
-                 const char *name,
-                 u32 *lenp)
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp)
 {
     const struct dt_property *pp;
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 9a8c3de..7c075d9 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -15,6 +15,7 @@
 #include <xen/init.h>
 #include <xen/string.h>
 #include <xen/types.h>
+#include <xen/stdbool.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -347,6 +348,10 @@ struct dt_device_node *dt_find_compatible_node(struct dt_device_node *from,
 const void *dt_get_property(const struct dt_device_node *np,
                             const char *name, u32 *lenp);
 
+const struct dt_property *dt_find_property(const struct dt_device_node *np,
+                                           const char *name, u32 *lenp);
+
+
 /**
  * dt_property_read_u32 - Helper to read a u32 property.
  * @np: node to get the value
@@ -369,6 +374,22 @@ bool_t dt_property_read_u64(const struct dt_device_node *np,
                             const char *name, u64 *out_value);
 
 /**
+ * dt_property_read_bool - Check if a property exists
+ * @np: node to get the value
+ * @name: name of the property
+ *
+ * Search for a property in a device node.
+ * Return true if the property exists, false otherwise.
+ */
+static inline bool_t dt_property_read_bool(const struct dt_device_node *np,
+                                           const char *name)
+{
+    const struct dt_property *prop = dt_find_property(np, name, NULL);
+
+    return prop ? true : false;
+}
+
+/**
  * dt_property_read_string - Find and read a string from a property
  * @np:         Device node from which the property value is to be read
  * @propname:   Name of the property to be searched
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLs-0006qg-21; Sun, 23 Feb 2014 22:16:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLr-0006pp-68
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:47 +0000
Received: from [193.109.254.147:53375] by server-6.bemta-14.messagelabs.com id
	FF/E0-03396-E437A035; Sun, 23 Feb 2014 22:16:46 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393193805!6238783!1
X-Originating-IP: [74.125.83.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6816 invoked from network); 23 Feb 2014 22:16:45 -0000
Received: from mail-ee0-f48.google.com (HELO mail-ee0-f48.google.com)
	(74.125.83.48)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:45 -0000
Received: by mail-ee0-f48.google.com with SMTP id t10so2718846eei.35
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:45 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=8PcC2TycjATB0VFYCPnInvMaDKm7QZULUuDgO5uVTeo=;
	b=U4DWhlCB4AeOhClsHGk/g4tModK12EAC+P2cNCoC3b6Ehq3dM3y1WZHZ4J2JExCqQV
	5tyY1yhQOQ17J1hKmt0NCCs0hZpMkIsldFdyRn2wDu+pC9ETqeODshts/VZXa1LddqTR
	4JgPxh7+AV1qM/acCxy7L4VUSW436KeaXyv08Uj/uw165BLTAEvs9zAN4fUnxSgdDpht
	CqP9+Yt5cV7lc09KzOHVZo7GN/zNdNQCAW6PVCOSte0Ok63kbjYaXh6cXFVJlOLERBQe
	xnoR2wBcelTXvn2NAy5FR4F3AqEP6t/PECdXlabyvOM4QB4wC53uJvcLe7rm3TmMlqt7
	PooA==
X-Gm-Message-State: ALoCoQkHmORpB6akeTPsBBIYu73dGKpghWRE0HQ3utcO7cmY1c6PA5PAgog+Dlsr/s+d5PC3BoKE
X-Received: by 10.15.23.194 with SMTP id h42mr20856788eeu.32.1393193805189;
	Sun, 23 Feb 2014 14:16:45 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.43
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:44 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:23 +0000
Message-Id: <1393193792-20008-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 06/15] xen/passthrough: rework dom0_pvh_reqs
	to use it also on ARM
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dom0 on ARM will have the same requirements as a PVH dom0 when the IOMMU is
enabled. Both PVH and ARM guests have paging mode translate enabled, so Xen
can use that to decide whether the requirements need to be checked.

Rename the function and remove the word "pvh" from the panic message.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>

---
    Changes in v2:
        - The IOMMU can be disabled on ARM if the platform doesn't have
        one.
---
 xen/drivers/passthrough/iommu.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 19b0e23..b534893 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -130,13 +130,17 @@ int iommu_domain_init(struct domain *d)
     return hd->platform_ops->init(d);
 }
 
-static __init void check_dom0_pvh_reqs(struct domain *d)
+static __init void check_dom0_reqs(struct domain *d)
 {
-    if ( !iommu_enabled )
+    if ( !paging_mode_translate(d) )
+        return;
+
+    if ( is_pvh_domain(d) && !iommu_enabled )
         panic("Presently, iommu must be enabled for pvh dom0\n");
 
     if ( iommu_passthrough )
-        panic("For pvh dom0, dom0-passthrough must not be enabled\n");
+        panic("Dom0 uses translate paging mode, dom0-passthrough must not be "
+              "enabled\n");
 
     iommu_dom0_strict = 1;
 }
@@ -145,8 +149,7 @@ void __init iommu_dom0_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    if ( is_pvh_domain(d) )
-        check_dom0_pvh_reqs(d);
+    check_dom0_reqs(d);
 
     if ( !iommu_enabled )
         return;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLr-0006q7-CI; Sun, 23 Feb 2014 22:16:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLp-0006pY-LL
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:45 +0000
Received: from [85.158.139.211:34693] by server-11.bemta-5.messagelabs.com id
	D2/0C-23886-C437A035; Sun, 23 Feb 2014 22:16:44 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393193803!5717360!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7530 invoked from network); 23 Feb 2014 22:16:43 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:43 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so1879469eek.27
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:43 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=X6qQLfZznS/F5jTc7GhsGRfH310akpKPm7hx0lzFyxw=;
	b=DamfoGANj+ncgbp72hD+oDZ7hdonwZ6oGH6K3qq0xHLdzwRSjOFLTqLZde7mlEYnaH
	ORcBgA7/KOqOxMS8bS5ZAfvQBc0/Bn//MX/XEXktDpitpj0dv6NuzEsGKD1Jm2MZ/h6W
	nXUD7rjyybLmImA/EfB+veavJfJpLgoBkQjYOuCNZPLx30lXcQk7sR2LpxWuOs4zi6rl
	kM5xL907OYUNi5Ys7I7j9wYVUVV2N3Ne+AKQigpMCvtSKTLugzbc8vCL9qwKtoL7C/lO
	pq3NmC4u1qMdJn7BZNP/4LRse0NfmlNqJlrXI29uZ/Wp3KbnGCAX0/mxVImiC0HHkHgQ
	30qg==
X-Gm-Message-State: ALoCoQk2M10UBTKkIYaJWZD/Du6loaNsDC1EGbdr78MHNyGcLFM/hj+W+scWJqivH6wI7ZFntBNQ
X-Received: by 10.15.76.135 with SMTP id n7mr5974163eey.36.1393193803757;
	Sun, 23 Feb 2014 14:16:43 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.42
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:42 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:22 +0000
Message-Id: <1393193792-20008-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 05/15] xen/dts: Add
	dt_parse_phandle_with_args and dt_parse_phandle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Code adapted from linux drivers/of/base.c (commit ef42c58).

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Remove hard tabs in dt_parse_phandle
---
 xen/common/device_tree.c      |  151 ++++++++++++++++++++++++++++++++++++++++-
 xen/include/xen/device_tree.h |   54 +++++++++++++++
 2 files changed, 203 insertions(+), 2 deletions(-)

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index ccdb7ff..564f2bb 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1090,9 +1090,9 @@ int dt_device_get_address(const struct dt_device_node *dev, int index,
  *
  * Returns a node pointer.
  */
-static const struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
+static struct dt_device_node *dt_find_node_by_phandle(dt_phandle handle)
 {
-    const struct dt_device_node *np;
+    struct dt_device_node *np;
 
     dt_for_each_device_node(dt_host, np)
         if ( np->phandle == handle )
@@ -1477,6 +1477,153 @@ bool_t dt_device_is_available(const struct dt_device_node *device)
     return 0;
 }
 
+static int __dt_parse_phandle_with_args(const struct dt_device_node *np,
+                                        const char *list_name,
+                                        const char *cells_name,
+                                        int cell_count, int index,
+                                        struct dt_phandle_args *out_args)
+{
+    const __be32 *list, *list_end;
+    int rc = 0, cur_index = 0;
+    u32 size, count = 0;
+    struct dt_device_node *node = NULL;
+    dt_phandle phandle;
+
+    /* Retrieve the phandle list property */
+    list = dt_get_property(np, list_name, &size);
+    if ( !list )
+        return -ENOENT;
+    list_end = list + size / sizeof(*list);
+
+    /* Loop over the phandles until the requested entry is found */
+    while ( list < list_end )
+    {
+        rc = -EINVAL;
+        count = 0;
+
+        /*
+         * If phandle is 0, then it is an empty entry with no
+         * arguments. Skip forward to the next entry.
+         */
+        phandle = be32_to_cpup(list++);
+        if ( phandle )
+        {
+            /*
+             * Find the provider node and parse the #*-cells
+             * property to determine the argument length.
+             *
+             * This is not needed if the cell count is hard-coded
+             * (i.e. cells_name not set, but cell_count is set),
+             * except when we're going to return the found node
+             * below.
+             */
+            if ( cells_name || cur_index == index )
+            {
+                node = dt_find_node_by_phandle(phandle);
+                if ( !node )
+                {
+                    dt_printk(XENLOG_ERR "%s: could not find phandle\n",
+                              np->full_name);
+                    goto err;
+                }
+            }
+
+            if ( cells_name )
+            {
+                if ( !dt_property_read_u32(node, cells_name, &count) )
+                {
+                    dt_printk("%s: could not get %s for %s\n",
+                              np->full_name, cells_name, node->full_name);
+                    goto err;
+                }
+            }
+            else
+                count = cell_count;
+
+            /*
+             * Make sure that the arguments actually fit in the
+             * remaining property data length
+             */
+            if ( list + count > list_end )
+            {
+                dt_printk(XENLOG_ERR "%s: arguments longer than property\n",
+                          np->full_name);
+                goto err;
+            }
+        }
+
+        /*
+         * All of the error cases above bail out of the loop, so at
+         * this point, the parsing is successful. If the requested
+         * index matches, then fill the out_args structure and return,
+         * or return -ENOENT for an empty entry.
+         */
+        rc = -ENOENT;
+        if ( cur_index == index )
+        {
+            if ( !phandle )
+                goto err;
+
+            if ( out_args )
+            {
+                int i;
+
+                WARN_ON(count > MAX_PHANDLE_ARGS);
+                if ( count > MAX_PHANDLE_ARGS )
+                    count = MAX_PHANDLE_ARGS;
+                out_args->np = node;
+                out_args->args_count = count;
+                for ( i = 0; i < count; i++ )
+                    out_args->args[i] = be32_to_cpup(list++);
+            }
+
+            /* Found it! return success */
+            return 0;
+        }
+
+        node = NULL;
+        list += count;
+        cur_index++;
+    }
+
+    /*
+     * Returning result will be one of:
+     * -ENOENT : index is for empty phandle
+     * -EINVAL : parsing error on data
+     * [1..n]  : Number of phandles (count mode; when index = -1)
+     */
+    rc = index < 0 ? cur_index : -ENOENT;
+err:
+    return rc;
+}
+
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name, int index)
+{
+    struct dt_phandle_args args;
+
+    if ( index < 0 )
+        return NULL;
+
+    if ( __dt_parse_phandle_with_args(np, phandle_name, NULL, 0,
+                                      index, &args) )
+        return NULL;
+
+    return args.np;
+}
+
+
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args)
+{
+    if ( index < 0 )
+        return -EINVAL;
+    return __dt_parse_phandle_with_args(np, list_name, cells_name, 0,
+                                        index, out_args);
+}
+
 /**
  * unflatten_dt_node - Alloc and populate a device_node from the flat tree
  * @fdt: The parent device tree blob
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index 7c075d9..d429e60 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -112,6 +112,13 @@ struct dt_device_node {
 
 };
 
+#define MAX_PHANDLE_ARGS 16
+struct dt_phandle_args {
+    struct dt_device_node *np;
+    int args_count;
+    uint32_t args[MAX_PHANDLE_ARGS];
+};
+
 /**
  * IRQ line type.
  *
@@ -621,6 +628,53 @@ void dt_set_range(__be32 **cellp, const struct dt_device_node *np,
 void dt_get_range(const __be32 **cellp, const struct dt_device_node *np,
                   u64 *address, u64 *size);
 
+/**
+ * dt_parse_phandle - Resolve a phandle property to a device_node pointer
+ * @np: Pointer to device node holding phandle property
+ * @phandle_name: Name of property holding a phandle value
+ * @index: For properties holding a table of phandles, this is the index into
+ *         the table
+ *
+ * Returns the device_node pointer.
+ */
+struct dt_device_node *dt_parse_phandle(const struct dt_device_node *np,
+                                        const char *phandle_name,
+                                        int index);
+
+/**
+ * dt_parse_phandle_with_args() - Find a node pointed by phandle in a list
+ * @np:	pointer to a device tree node containing a list
+ * @list_name: property name that contains a list
+ * @cells_name: property name that specifies phandles' arguments count
+ * @index: index of a phandle to parse out
+ * @out_args: optional pointer to output arguments structure (will be filled)
+ *
+ * This function is useful to parse lists of phandles and their arguments.
+ * Returns 0 on success and fills out_args, on error returns appropriate
+ * errno value.
+ *
+ * Example:
+ *
+ * phandle1: node1 {
+ * 	#list-cells = <2>;
+ * }
+ *
+ * phandle2: node2 {
+ * 	#list-cells = <1>;
+ * }
+ *
+ * node3 {
+ * 	list = <&phandle1 1 2 &phandle2 3>;
+ * }
+ *
+ * To get a device_node of the `node2' node you may call this:
+ * dt_parse_phandle_with_args(node3, "list", "#list-cells", 1, &args);
+ */
+int dt_parse_phandle_with_args(const struct dt_device_node *np,
+                               const char *list_name,
+                               const char *cells_name, int index,
+                               struct dt_phandle_args *out_args);
+
 #endif /* __XEN_DEVICE_TREE_H */
 
 /*
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLz-0006x6-LR; Sun, 23 Feb 2014 22:16:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLx-0006tA-K8
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:54 +0000
Received: from [193.109.254.147:53512] by server-11.bemta-14.messagelabs.com
	id 07/73-24604-4537A035; Sun, 23 Feb 2014 22:16:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393193810!980821!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7257 invoked from network); 23 Feb 2014 22:16:50 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:50 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so2763657eek.10
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=c/MMeN0GFo9+l2UD/3mkwdeIR5mWiH/Duw5rgtUfKWg=;
	b=QCm0t43VeuP7o6KdHoh+WcROa0o3uBFf9bZnlVbjwc/mwc2Sw3xvQ8Xu7D8fLpzTzz
	eoq1DKWJTwpm/KgYL2eba7Tw5X4p+Mf4Llgn4h7T3YrZH9epFSPwlzNEiRX2gt6mjzRb
	Lq5gziUwVQHKRSRrDU350ZAB3V82y8VkuVc3XO9A44GucEcEhHRDNkZ3Y2yQcVp89iql
	rEoXXkpKXqLJvRuMHFqZC+KuebdLpXP1cgLAWmnUTe8iFZBezkJ044/x1OhyodAjJAW3
	yCEQxRpky5MuXGB75ksFF+scFJHY4EY68bULJJPqhTwlwzfGUy9htTzNwoyClFo5lMcG
	51vg==
X-Gm-Message-State: ALoCoQm42NAyRomII+b+cEl45fXIoHKm7vz+Qoi919basbWJ3anh4VPmBepBfjug44tOC6JOeSfZ
X-Received: by 10.15.51.196 with SMTP id n44mr21074858eew.27.1393193810502;
	Sun, 23 Feb 2014 14:16:50 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.48
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:49 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:26 +0000
Message-Id: <1393193792-20008-10-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: Joseph Cihula <joseph.cihula@intel.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, Shane Wang <shane.wang@intel.com>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Gang Wei <gang.wei@intel.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce arch
	specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently the structure hvm_iommu (xen/include/xen/hvm/iommu.h) contains
x86-specific fields.

This patch creates:
    - an arch_hvm_iommu structure, which will contain the
    architecture-dependent fields
    - arch_iommu_domain_{init,destroy} functions to execute arch-specific
    code during domain creation/destruction

Also move iommu_use_hap_pt and domain_hvm_iommu into asm-x86/iommu.h.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Joseph Cihula <joseph.cihula@intel.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: Shane Wang <shane.wang@intel.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +--
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +++++++++----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +++++++++---------
 xen/drivers/passthrough/iommu.c             |   36 +++---------
 xen/drivers/passthrough/vtd/iommu.c         |   80 +++++++++++++--------------
 xen/drivers/passthrough/x86/iommu.c         |   41 ++++++++++++++
 xen/include/asm-x86/hvm/iommu.h             |   29 ++++++++++
 xen/include/asm-x86/iommu.h                 |    4 ++
 xen/include/xen/hvm/iommu.h                 |   26 +--------
 xen/include/xen/iommu.h                     |    8 +--
 14 files changed, 191 insertions(+), 163 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26635ff..e55d9d5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -745,7 +745,7 @@ long arch_do_domctl(
                    "ioport_map:add: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
 
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if (g2m_ioport->mport == fmp )
                 {
                     g2m_ioport->gport = fgp;
@@ -764,7 +764,7 @@ long arch_do_domctl(
                 g2m_ioport->gport = fgp;
                 g2m_ioport->mport = fmp;
                 g2m_ioport->np = np;
-                list_add_tail(&g2m_ioport->list, &hd->g2m_ioport_list);
+                list_add_tail(&g2m_ioport->list, &hd->arch.g2m_ioport_list);
             }
             if ( !ret )
                 ret = ioports_permit_access(d, fmp, fmp + np - 1);
@@ -779,7 +779,7 @@ long arch_do_domctl(
             printk(XENLOG_G_INFO
                    "ioport_map:remove: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if ( g2m_ioport->mport == fmp )
                 {
                     list_del(&g2m_ioport->list);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf6309d..ddb03f8 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -451,7 +451,7 @@ int dpci_ioport_intercept(ioreq_t *p)
     unsigned int s = 0, e = 0;
     int rc;
 
-    list_for_each_entry( g2m_ioport, &hd->g2m_ioport_list, list )
+    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
     {
         s = g2m_ioport->gport;
         e = s + g2m_ioport->np;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index ccde4a0..c40fe12 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,7 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         if ( !is_idle_domain(d) )
         {
             struct hvm_iommu *hd = domain_hvm_iommu(d);
-            update_iommu_mac(&ctx, hd->pgd_maddr, agaw_to_level(hd->agaw));
+            update_iommu_mac(&ctx, hd->arch.pgd_maddr,
+                             agaw_to_level(hd->arch.agaw));
         }
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index d27bd3c..f39bd9d 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -355,7 +355,7 @@ static void _amd_iommu_flush_pages(struct domain *d,
     unsigned long flags;
     struct amd_iommu *iommu;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
-    unsigned int dom_id = hd->domain_id;
+    unsigned int dom_id = hd->arch.domain_id;
 
     /* send INVALIDATE_IOMMU_PAGES command */
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 477de20..bd31bb5 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -60,12 +60,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return domain_hvm_iommu(d)->g_iommu;
+    return domain_hvm_iommu(d)->arch.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return domain_hvm_iommu(v->domain)->g_iommu;
+    return domain_hvm_iommu(v->domain)->arch.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -886,7 +886,7 @@ int guest_iommu_init(struct domain* d)
 
     guest_iommu_reg_init(iommu);
     iommu->domain = d;
-    hd->g_iommu = iommu;
+    hd->arch.g_iommu = iommu;
 
     tasklet_init(&iommu->cmd_buffer_tasklet,
                  guest_iommu_process_command, (unsigned long)d);
@@ -907,7 +907,7 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    domain_hvm_iommu(d)->g_iommu = NULL;
+    domain_hvm_iommu(d)->arch.g_iommu = NULL;
 }
 
 static int guest_iommu_mmio_range(struct vcpu *v, unsigned long addr)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 1294561..be34e90 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -344,7 +344,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     struct hvm_iommu *hd = domain_hvm_iommu(d);
     bool_t ok = 0;
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     next_level = merge_level - 1;
 
@@ -398,7 +398,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     unsigned long first_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(pt_mfn);
     pde = table + pfn_to_pde_idx(gfn, merge_level);
@@ -448,8 +448,8 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     struct page_info *table;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    table = hd->root_table;
-    level = hd->paging_mode;
+    table = hd->arch.root_table;
+    level = hd->arch.paging_mode;
 
     BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 || 
             level > IOMMU_PAGING_MODE_LEVEL_6 );
@@ -557,11 +557,11 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    level = hd->paging_mode;
-    old_root = hd->root_table;
+    level = hd->arch.paging_mode;
+    old_root = hd->arch.root_table;
     offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
 
-    ASSERT(spin_is_locked(&hd->mapping_lock) && is_hvm_domain(d));
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
 
     while ( offset >= PTE_PER_TABLE_SIZE )
     {
@@ -587,8 +587,8 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
     if ( new_root != NULL )
     {
-        hd->paging_mode = level;
-        hd->root_table = new_root;
+        hd->arch.paging_mode = level;
+        hd->arch.root_table = new_root;
 
         if ( !spin_is_locked(&pcidevs_lock) )
             AMD_IOMMU_DEBUG("%s Try to access pdev_list "
@@ -613,9 +613,9 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
                 /* valid = 0 only works for dom0 passthrough mode */
                 amd_iommu_set_root_page_table((u32 *)device_entry,
-                                              page_to_maddr(hd->root_table),
-                                              hd->domain_id,
-                                              hd->paging_mode, 1);
+                                              page_to_maddr(hd->arch.root_table),
+                                              hd->arch.domain_id,
+                                              hd->arch.paging_mode, 1);
 
                 amd_iommu_flush_device(iommu, req_id);
                 bdf += pdev->phantom_stride;
@@ -638,14 +638,14 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     unsigned long pt_mfn[7];
     unsigned int merge_level;
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -653,7 +653,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -662,7 +662,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -684,7 +684,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         amd_iommu_flush_pages(d, gfn, 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
-          merge_level <= hd->paging_mode; merge_level++ )
+          merge_level <= hd->arch.paging_mode; merge_level++ )
     {
         if ( pt_mfn[merge_level] == 0 )
             break;
@@ -697,7 +697,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
                                flags, merge_level) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
                             "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
             domain_crash(d);
@@ -706,7 +706,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     }
 
 out:
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -715,14 +715,14 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     unsigned long pt_mfn[7];
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -730,7 +730,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -739,7 +739,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -747,7 +747,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     /* mark PTE as 'page not present' */
     clear_iommu_pte_present(pt_mfn[1], gfn);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 
     amd_iommu_flush_pages(d, gfn, 0);
 
@@ -792,13 +792,13 @@ void amd_iommu_share_p2m(struct domain *d)
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
     p2m_table = mfn_to_page(mfn_x(pgd_mfn));
 
-    if ( hd->root_table != p2m_table )
+    if ( hd->arch.root_table != p2m_table )
     {
-        free_amd_iommu_pgtable(hd->root_table);
-        hd->root_table = p2m_table;
+        free_amd_iommu_pgtable(hd->arch.root_table);
+        hd->arch.root_table = p2m_table;
 
         /* When sharing p2m with iommu, paging mode = 4 */
-        hd->paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
+        hd->arch.paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
         AMD_IOMMU_DEBUG("Share p2m table with iommu: p2m table = %#lx\n",
                         mfn_x(pgd_mfn));
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c26aabc..0c3cd3e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -120,7 +120,8 @@ static void amd_iommu_setup_domain_device(
 
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
 
-    BUG_ON( !hd->root_table || !hd->paging_mode || !iommu->dev_table.buffer );
+    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+            !iommu->dev_table.buffer );
 
     if ( iommu_passthrough && (domain->domain_id == 0) )
         valid = 0;
@@ -138,8 +139,8 @@ static void amd_iommu_setup_domain_device(
     {
         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            (u32 *)dte, page_to_maddr(hd->root_table), hd->domain_id,
-            hd->paging_mode, valid);
+            (u32 *)dte, page_to_maddr(hd->arch.root_table), hd->arch.domain_id,
+            hd->arch.paging_mode, valid);
 
         if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
@@ -151,8 +152,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->root_table),
-                        hd->domain_id, hd->paging_mode);
+                        page_to_maddr(hd->arch.root_table),
+                        hd->arch.domain_id, hd->arch.paging_mode);
     }
 
     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -225,17 +226,17 @@ int __init amd_iov_detect(void)
 static int allocate_domain_resources(struct hvm_iommu *hd)
 {
     /* allocate root table */
-    spin_lock(&hd->mapping_lock);
-    if ( !hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( !hd->arch.root_table )
     {
-        hd->root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->root_table )
+        hd->arch.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.root_table )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             return -ENOMEM;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -262,18 +263,18 @@ static int amd_iommu_domain_init(struct domain *d)
     /* allocate page directroy */
     if ( allocate_domain_resources(hd) != 0 )
     {
-        if ( hd->root_table )
-            free_domheap_page(hd->root_table);
+        if ( hd->arch.root_table )
+            free_domheap_page(hd->arch.root_table);
         return -ENOMEM;
     }
 
     /* For pv and dom0, stick with get_paging_mode(max_page)
      * For HVM dom0, use 2 level page table at first */
-    hd->paging_mode = is_hvm_domain(d) ?
+    hd->arch.paging_mode = is_hvm_domain(d) ?
                       IOMMU_PAGING_MODE_LEVEL_2 :
                       get_paging_mode(max_page);
 
-    hd->domain_id = d->domain_id;
+    hd->arch.domain_id = d->domain_id;
 
     guest_iommu_init(d);
 
@@ -333,8 +334,8 @@ void amd_iommu_disable_domain_device(struct domain *domain,
 
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
-                        req_id,  domain_hvm_iommu(domain)->domain_id,
-                        domain_hvm_iommu(domain)->paging_mode);
+                        req_id,  domain_hvm_iommu(domain)->arch.domain_id,
+                        domain_hvm_iommu(domain)->arch.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -374,7 +375,7 @@ static int reassign_device(struct domain *source, struct domain *target,
 
     /* IO page tables might be destroyed after pci-detach the last device
      * In this case, we have to re-allocate root table for next pci-attach.*/
-    if ( t->root_table == NULL )
+    if ( t->arch.root_table == NULL )
         allocate_domain_resources(t);
 
     amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
@@ -456,13 +457,13 @@ static void deallocate_iommu_page_tables(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    if ( hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( hd->arch.root_table )
     {
-        deallocate_next_page_table(hd->root_table, hd->paging_mode);
-        hd->root_table = NULL;
+        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
+        hd->arch.root_table = NULL;
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 
@@ -593,11 +594,11 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
 
-    if ( !hd->root_table ) 
+    if ( !hd->arch.root_table ) 
         return;
 
-    printk("p2m table has %d levels\n", hd->paging_mode);
-    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
 }
 
 const struct iommu_ops amd_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 6893cf3..e6a1839 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -117,10 +117,11 @@ static void __init parse_iommu_param(char *s)
 int iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
+    int ret = 0;
 
-    spin_lock_init(&hd->mapping_lock);
-    INIT_LIST_HEAD(&hd->g2m_ioport_list);
-    INIT_LIST_HEAD(&hd->mapped_rmrrs);
+    ret = arch_iommu_domain_init(d);
+    if ( ret )
+        return ret;
 
     if ( !iommu_enabled )
         return 0;
@@ -189,10 +190,7 @@ void iommu_teardown(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    struct hvm_iommu *hd  = domain_hvm_iommu(d);
-    struct list_head *ioport_list, *rmrr_list, *tmp;
-    struct g2m_ioport *ioport;
-    struct mapped_rmrr *mrmrr;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return;
@@ -200,20 +198,8 @@ void iommu_domain_destroy(struct domain *d)
     if ( need_iommu(d) )
         iommu_teardown(d);
 
-    list_for_each_safe ( ioport_list, tmp, &hd->g2m_ioport_list )
-    {
-        ioport = list_entry(ioport_list, struct g2m_ioport, list);
-        list_del(&ioport->list);
-        xfree(ioport);
-    }
-
-    list_for_each_safe ( rmrr_list, tmp, &hd->mapped_rmrrs )
-    {
-        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
-        list_del(&mrmrr->list);
-        xfree(mrmrr);
-    }
-}
+    arch_iommu_domain_destroy(d);
+}
 
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
@@ -353,14 +339,6 @@ void iommu_suspend()
         ops->suspend();
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-
-    if ( iommu_enabled && is_hvm_domain(d) )
-        ops->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d5ce5b7..697b015 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -249,16 +249,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
-    int addr_width = agaw_to_width(hd->agaw);
+    int addr_width = agaw_to_width(hd->arch.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->agaw);
+    int level = agaw_to_level(hd->arch.agaw);
     int offset;
     u64 pte_maddr = 0, maddr;
     u64 *vaddr = NULL;
 
     addr &= (((u64)1) << addr_width) - 1;
-    ASSERT(spin_is_locked(&hd->mapping_lock));
-    if ( hd->pgd_maddr == 0 )
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+    if ( hd->arch.pgd_maddr == 0 )
     {
         /*
          * just get any passthrough device in the domainr - assume user
@@ -266,11 +266,11 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
          */
         pdev = pci_get_pdev_by_domain(domain, -1, -1, -1);
         drhd = acpi_find_matched_drhd_unit(pdev);
-        if ( !alloc || ((hd->pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
+        if ( !alloc || ((hd->arch.pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
             goto out;
     }
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -580,7 +580,7 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     {
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -622,12 +622,12 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     u64 pg_maddr;
     struct mapped_rmrr *mrmrr;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
     pg_maddr = addr_to_dma_page_maddr(domain, addr, 0);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return;
     }
 
@@ -636,13 +636,13 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
 
     if ( !dma_pte_present(*pte) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return;
     }
 
     dma_clear_pte(*pte);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -653,8 +653,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     /* if the cleared address is between mapped RMRR region,
      * remove the mapped RMRR
      */
-    spin_lock(&hd->mapping_lock);
-    list_for_each_entry ( mrmrr, &hd->mapped_rmrrs, list )
+    spin_lock(&hd->arch.mapping_lock);
+    list_for_each_entry ( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( addr >= mrmrr->base && addr <= mrmrr->end )
         {
@@ -663,7 +663,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
             break;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1248,7 +1248,7 @@ static int intel_iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    hd->agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    hd->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
 }
@@ -1345,16 +1345,16 @@ int domain_context_mapping_one(
     }
     else
     {
-        spin_lock(&hd->mapping_lock);
+        spin_lock(&hd->arch.mapping_lock);
 
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->pgd_maddr == 0 )
+        if ( hd->arch.pgd_maddr == 0 )
         {
             addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->pgd_maddr == 0 )
+            if ( hd->arch.pgd_maddr == 0 )
             {
             nomem:
-                spin_unlock(&hd->mapping_lock);
+                spin_unlock(&hd->arch.mapping_lock);
                 spin_unlock(&iommu->lock);
                 unmap_vtd_domain_page(context_entries);
                 return -ENOMEM;
@@ -1362,7 +1362,7 @@ int domain_context_mapping_one(
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->pgd_maddr;
+        pgd_maddr = hd->arch.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1380,7 +1380,7 @@ int domain_context_mapping_one(
         else
             context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
 
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
     }
 
     if ( context_set_domain_id(context, domain, iommu) )
@@ -1406,7 +1406,7 @@ int domain_context_mapping_one(
         iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
     }
 
-    set_bit(iommu->index, &hd->iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
 
@@ -1652,7 +1652,7 @@ static int domain_context_unmap(
         struct hvm_iommu *hd = domain_hvm_iommu(domain);
         int iommu_domid;
 
-        clear_bit(iommu->index, &hd->iommu_bitmap);
+        clear_bit(iommu->index, &hd->arch.iommu_bitmap);
 
         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1711,10 +1711,10 @@ static void iommu_domain_teardown(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    iommu_free_pagetable(hd->pgd_maddr, agaw_to_level(hd->agaw));
-    hd->pgd_maddr = 0;
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
+    hd->arch.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int intel_iommu_map_page(
@@ -1733,12 +1733,12 @@ static int intel_iommu_map_page(
     if ( iommu_passthrough && (d->domain_id == 0) )
         return 0;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -1755,14 +1755,14 @@ static int intel_iommu_map_page(
 
     if ( old.val == new.val )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -1796,7 +1796,7 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -1837,7 +1837,7 @@ static void iommu_set_pgd(struct domain *d)
         return;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    hd->pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    hd->arch.pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
 static int rmrr_identity_mapping(struct domain *d,
@@ -1852,10 +1852,10 @@ static int rmrr_identity_mapping(struct domain *d,
     ASSERT(rmrr->base_address < rmrr->end_address);
 
     /*
-     * No need to acquire hd->mapping_lock, as the only theoretical race is
+     * No need to acquire hd->arch.mapping_lock, as the only theoretical race is
      * with the insertion below (impossible due to holding pcidevs_lock).
      */
-    list_for_each_entry( mrmrr, &hd->mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1880,9 +1880,9 @@ static int rmrr_identity_mapping(struct domain *d,
         return -ENOMEM;
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
-    spin_lock(&hd->mapping_lock);
-    list_add_tail(&mrmrr->list, &hd->mapped_rmrrs);
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    spin_unlock(&hd->arch.mapping_lock);
 
     return 0;
 }
@@ -2427,8 +2427,8 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;
 
     hd = domain_hvm_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
-    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
+    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
 }
 
 const struct iommu_ops intel_iommu_ops = {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index bd3c23b..c137cef 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -55,6 +55,47 @@ int __init iommu_setup_hpet_msi(struct msi_desc *msi)
     return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
 }
 
+void iommu_share_p2m_table(struct domain* d)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+
+    if ( iommu_enabled && is_hvm_domain(d) )
+        ops->share_p2m(d);
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    spin_lock_init(&hd->arch.mapping_lock);
+    INIT_LIST_HEAD(&hd->arch.g2m_ioport_list);
+    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
+
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct list_head *ioport_list, *rmrr_list, *tmp;
+    struct g2m_ioport *ioport;
+    struct mapped_rmrr *mrmrr;
+
+    list_for_each_safe ( ioport_list, tmp, &hd->arch.g2m_ioport_list )
+    {
+        ioport = list_entry(ioport_list, struct g2m_ioport, list);
+        list_del(&ioport->list);
+        xfree(ioport);
+    }
+
+    list_for_each_safe ( rmrr_list, tmp, &hd->arch.mapped_rmrrs )
+    {
+        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
+        list_del(&mrmrr->list);
+        xfree(mrmrr);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/iommu.h b/xen/include/asm-x86/hvm/iommu.h
index d488edf..a3f83d0 100644
--- a/xen/include/asm-x86/hvm/iommu.h
+++ b/xen/include/asm-x86/hvm/iommu.h
@@ -39,4 +39,33 @@ static inline int iommu_hardware_setup(void)
     return 0;
 }
 
+struct g2m_ioport {
+    struct list_head list;
+    unsigned int gport;
+    unsigned int mport;
+    unsigned int np;
+};
+
+struct mapped_rmrr {
+    struct list_head list;
+    u64 base;
+    u64 end;
+};
+
+struct arch_hvm_iommu
+{
+    u64 pgd_maddr;                 /* io page directory machine address */
+    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
+    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
+    /* amd iommu support */
+    int domain_id;
+    int paging_mode;
+    struct page_info *root_table;
+    struct guest_iommu *g_iommu;
+
+    struct list_head g2m_ioport_list;   /* guest to machine ioport mapping */
+    struct list_head mapped_rmrrs;
+    spinlock_t mapping_lock;            /* io page table lock */
+};
+
 #endif /* __ASM_X86_HVM_IOMMU_H__ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 34c1896..021cd80 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -19,6 +19,10 @@
 
 #include <asm/msi.h>
 
+/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
+#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
+#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
+
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
 int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
 void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 2abb4e3..f8f8a93 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -23,32 +23,8 @@
 #include <xen/iommu.h>
 #include <asm/hvm/iommu.h>
 
-struct g2m_ioport {
-    struct list_head list;
-    unsigned int gport;
-    unsigned int mport;
-    unsigned int np;
-};
-
-struct mapped_rmrr {
-    struct list_head list;
-    u64 base;
-    u64 end;
-};
-
 struct hvm_iommu {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;       /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int domain_id;
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    struct arch_hvm_iommu arch;
 
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 65a37c0..5a19c80 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -35,11 +35,6 @@ extern bool_t iommu_hap_pt_share;
 extern bool_t iommu_debug;
 extern bool_t amd_iommu_perdev_intremap;
 
-/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
-#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
-
-#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
@@ -55,6 +50,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+void arch_iommu_domain_destroy(struct domain *d);
+int arch_iommu_domain_init(struct domain *d);
+
 /* Function used internally, use iommu_domain_destroy */
 void iommu_teardown(struct domain *d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhLz-0006x6-LR; Sun, 23 Feb 2014 22:16:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLx-0006tA-K8
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:54 +0000
Received: from [193.109.254.147:53512] by server-11.bemta-14.messagelabs.com
	id 07/73-24604-4537A035; Sun, 23 Feb 2014 22:16:52 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393193810!980821!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7257 invoked from network); 23 Feb 2014 22:16:50 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:50 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so2763657eek.10
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=c/MMeN0GFo9+l2UD/3mkwdeIR5mWiH/Duw5rgtUfKWg=;
	b=QCm0t43VeuP7o6KdHoh+WcROa0o3uBFf9bZnlVbjwc/mwc2Sw3xvQ8Xu7D8fLpzTzz
	eoq1DKWJTwpm/KgYL2eba7Tw5X4p+Mf4Llgn4h7T3YrZH9epFSPwlzNEiRX2gt6mjzRb
	Lq5gziUwVQHKRSRrDU350ZAB3V82y8VkuVc3XO9A44GucEcEhHRDNkZ3Y2yQcVp89iql
	rEoXXkpKXqLJvRuMHFqZC+KuebdLpXP1cgLAWmnUTe8iFZBezkJ044/x1OhyodAjJAW3
	yCEQxRpky5MuXGB75ksFF+scFJHY4EY68bULJJPqhTwlwzfGUy9htTzNwoyClFo5lMcG
	51vg==
X-Gm-Message-State: ALoCoQm42NAyRomII+b+cEl45fXIoHKm7vz+Qoi919basbWJ3anh4VPmBepBfjug44tOC6JOeSfZ
X-Received: by 10.15.51.196 with SMTP id n44mr21074858eew.27.1393193810502;
	Sun, 23 Feb 2014 14:16:50 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.48
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:49 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:26 +0000
Message-Id: <1393193792-20008-10-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: Joseph Cihula <joseph.cihula@intel.com>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, Shane Wang <shane.wang@intel.com>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, Jan Beulich <jbeulich@suse.com>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Gang Wei <gang.wei@intel.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce arch
	specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently the structure hvm_iommu (xen/include/xen/hvm/iommu.h) contains
x86-specific fields.

This patch introduces:
    - an arch_hvm_iommu structure, which contains the
    architecture-dependent fields
    - arch_iommu_domain_{init,destroy} functions, to execute
    arch-specific code during domain creation/destruction

It also moves iommu_use_hap_pt and domain_hvm_iommu to asm-x86/iommu.h.

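To illustrate the pattern the patch applies (not Xen code itself), here is a minimal standalone sketch of the split: the common hvm_iommu keeps only architecture-neutral members plus an embedded arch sub-structure, and the generic iommu_domain_init() delegates per-arch setup to an arch hook. Field names mirror the patch, but the values and the paging_mode default are purely illustrative assumptions.

```c
#include <assert.h>
#include <string.h>

/* Architecture-specific state, as an asm-<arch>/hvm/iommu.h would define it.
 * Fields shown are a subset of what the patch moves for x86. */
struct arch_hvm_iommu {
    unsigned long pgd_maddr;   /* IO page directory machine address */
    int paging_mode;
};

/* Common, architecture-neutral wrapper (xen/hvm/iommu.h after the patch):
 * per-arch fields are reached through hd->arch.<field>. */
struct hvm_iommu {
    struct arch_hvm_iommu arch;
    const void *platform_ops;  /* common fields stay at this level */
};

/* Arch hook: the generic layer no longer knows how to initialise
 * arch-specific members. The level-2 default here is an assumption
 * for the sketch. */
static int arch_iommu_domain_init(struct hvm_iommu *hd)
{
    memset(&hd->arch, 0, sizeof(hd->arch));
    hd->arch.paging_mode = 2;
    return 0;
}

/* Generic init delegates to the arch hook first, as the reworked
 * iommu_domain_init() in the patch does. */
static int iommu_domain_init(struct hvm_iommu *hd)
{
    int ret = arch_iommu_domain_init(hd);

    if ( ret )
        return ret;
    hd->platform_ops = NULL;
    return 0;
}
```

The callers in the diff above change accordingly: every former `hd->field` access to a moved member becomes `hd->arch.field`, while truly common members (such as platform_ops) keep their old spelling.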
Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Joseph Cihula <joseph.cihula@intel.com>
Cc: Gang Wei <gang.wei@intel.com>
Cc: Shane Wang <shane.wang@intel.com>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
---
 xen/arch/x86/domctl.c                       |    6 +-
 xen/arch/x86/hvm/io.c                       |    2 +-
 xen/arch/x86/tboot.c                        |    3 +-
 xen/drivers/passthrough/amd/iommu_cmd.c     |    2 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |    8 +--
 xen/drivers/passthrough/amd/iommu_map.c     |   56 +++++++++----------
 xen/drivers/passthrough/amd/pci_amd_iommu.c |   53 +++++++++---------
 xen/drivers/passthrough/iommu.c             |   36 +++---------
 xen/drivers/passthrough/vtd/iommu.c         |   80 +++++++++++++--------------
 xen/drivers/passthrough/x86/iommu.c         |   41 ++++++++++++++
 xen/include/asm-x86/hvm/iommu.h             |   29 ++++++++++
 xen/include/asm-x86/iommu.h                 |    4 ++
 xen/include/xen/hvm/iommu.h                 |   26 +--------
 xen/include/xen/iommu.h                     |    8 +--
 14 files changed, 191 insertions(+), 163 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 26635ff..e55d9d5 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -745,7 +745,7 @@ long arch_do_domctl(
                    "ioport_map:add: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
 
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if (g2m_ioport->mport == fmp )
                 {
                     g2m_ioport->gport = fgp;
@@ -764,7 +764,7 @@ long arch_do_domctl(
                 g2m_ioport->gport = fgp;
                 g2m_ioport->mport = fmp;
                 g2m_ioport->np = np;
-                list_add_tail(&g2m_ioport->list, &hd->g2m_ioport_list);
+                list_add_tail(&g2m_ioport->list, &hd->arch.g2m_ioport_list);
             }
             if ( !ret )
                 ret = ioports_permit_access(d, fmp, fmp + np - 1);
@@ -779,7 +779,7 @@ long arch_do_domctl(
             printk(XENLOG_G_INFO
                    "ioport_map:remove: dom%d gport=%x mport=%x nr=%x\n",
                    d->domain_id, fgp, fmp, np);
-            list_for_each_entry(g2m_ioport, &hd->g2m_ioport_list, list)
+            list_for_each_entry(g2m_ioport, &hd->arch.g2m_ioport_list, list)
                 if ( g2m_ioport->mport == fmp )
                 {
                     list_del(&g2m_ioport->list);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index bf6309d..ddb03f8 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -451,7 +451,7 @@ int dpci_ioport_intercept(ioreq_t *p)
     unsigned int s = 0, e = 0;
     int rc;
 
-    list_for_each_entry( g2m_ioport, &hd->g2m_ioport_list, list )
+    list_for_each_entry( g2m_ioport, &hd->arch.g2m_ioport_list, list )
     {
         s = g2m_ioport->gport;
         e = s + g2m_ioport->np;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index ccde4a0..c40fe12 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,7 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         if ( !is_idle_domain(d) )
         {
             struct hvm_iommu *hd = domain_hvm_iommu(d);
-            update_iommu_mac(&ctx, hd->pgd_maddr, agaw_to_level(hd->agaw));
+            update_iommu_mac(&ctx, hd->arch.pgd_maddr,
+                             agaw_to_level(hd->arch.agaw));
         }
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index d27bd3c..f39bd9d 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -355,7 +355,7 @@ static void _amd_iommu_flush_pages(struct domain *d,
     unsigned long flags;
     struct amd_iommu *iommu;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
-    unsigned int dom_id = hd->domain_id;
+    unsigned int dom_id = hd->arch.domain_id;
 
     /* send INVALIDATE_IOMMU_PAGES command */
     for_each_amd_iommu ( iommu )
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 477de20..bd31bb5 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -60,12 +60,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return domain_hvm_iommu(d)->g_iommu;
+    return domain_hvm_iommu(d)->arch.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return domain_hvm_iommu(v->domain)->g_iommu;
+    return domain_hvm_iommu(v->domain)->arch.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -886,7 +886,7 @@ int guest_iommu_init(struct domain* d)
 
     guest_iommu_reg_init(iommu);
     iommu->domain = d;
-    hd->g_iommu = iommu;
+    hd->arch.g_iommu = iommu;
 
     tasklet_init(&iommu->cmd_buffer_tasklet,
                  guest_iommu_process_command, (unsigned long)d);
@@ -907,7 +907,7 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    domain_hvm_iommu(d)->g_iommu = NULL;
+    domain_hvm_iommu(d)->arch.g_iommu = NULL;
 }
 
 static int guest_iommu_mmio_range(struct vcpu *v, unsigned long addr)
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 1294561..be34e90 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -344,7 +344,7 @@ static int iommu_update_pde_count(struct domain *d, unsigned long pt_mfn,
     struct hvm_iommu *hd = domain_hvm_iommu(d);
     bool_t ok = 0;
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     next_level = merge_level - 1;
 
@@ -398,7 +398,7 @@ static int iommu_merge_pages(struct domain *d, unsigned long pt_mfn,
     unsigned long first_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    ASSERT( spin_is_locked(&hd->mapping_lock) && pt_mfn );
+    ASSERT( spin_is_locked(&hd->arch.mapping_lock) && pt_mfn );
 
     table = map_domain_page(pt_mfn);
     pde = table + pfn_to_pde_idx(gfn, merge_level);
@@ -448,8 +448,8 @@ static int iommu_pde_from_gfn(struct domain *d, unsigned long pfn,
     struct page_info *table;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    table = hd->root_table;
-    level = hd->paging_mode;
+    table = hd->arch.root_table;
+    level = hd->arch.paging_mode;
 
     BUG_ON( table == NULL || level < IOMMU_PAGING_MODE_LEVEL_1 || 
             level > IOMMU_PAGING_MODE_LEVEL_6 );
@@ -557,11 +557,11 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
     unsigned long old_root_mfn;
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    level = hd->paging_mode;
-    old_root = hd->root_table;
+    level = hd->arch.paging_mode;
+    old_root = hd->arch.root_table;
     offset = gfn >> (PTE_PER_TABLE_SHIFT * (level - 1));
 
-    ASSERT(spin_is_locked(&hd->mapping_lock) && is_hvm_domain(d));
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock) && is_hvm_domain(d));
 
     while ( offset >= PTE_PER_TABLE_SIZE )
     {
@@ -587,8 +587,8 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
     if ( new_root != NULL )
     {
-        hd->paging_mode = level;
-        hd->root_table = new_root;
+        hd->arch.paging_mode = level;
+        hd->arch.root_table = new_root;
 
         if ( !spin_is_locked(&pcidevs_lock) )
             AMD_IOMMU_DEBUG("%s Try to access pdev_list "
@@ -613,9 +613,9 @@ static int update_paging_mode(struct domain *d, unsigned long gfn)
 
                 /* valid = 0 only works for dom0 passthrough mode */
                 amd_iommu_set_root_page_table((u32 *)device_entry,
-                                              page_to_maddr(hd->root_table),
-                                              hd->domain_id,
-                                              hd->paging_mode, 1);
+                                              page_to_maddr(hd->arch.root_table),
+                                              hd->arch.domain_id,
+                                              hd->arch.paging_mode, 1);
 
                 amd_iommu_flush_device(iommu, req_id);
                 bdf += pdev->phantom_stride;
@@ -638,14 +638,14 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     unsigned long pt_mfn[7];
     unsigned int merge_level;
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -653,7 +653,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -662,7 +662,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -684,7 +684,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         amd_iommu_flush_pages(d, gfn, 0);
 
     for ( merge_level = IOMMU_PAGING_MODE_LEVEL_2;
-          merge_level <= hd->paging_mode; merge_level++ )
+          merge_level <= hd->arch.paging_mode; merge_level++ )
     {
         if ( pt_mfn[merge_level] == 0 )
             break;
@@ -697,7 +697,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
         if ( iommu_merge_pages(d, pt_mfn[merge_level], gfn, 
                                flags, merge_level) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Merge iommu page failed at level %d, "
                             "gfn = %lx mfn = %lx\n", merge_level, gfn, mfn);
             domain_crash(d);
@@ -706,7 +706,7 @@ int amd_iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
     }
 
 out:
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -715,14 +715,14 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     unsigned long pt_mfn[7];
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    BUG_ON( !hd->root_table );
+    BUG_ON( !hd->arch.root_table );
 
     if ( iommu_use_hap_pt(d) )
         return 0;
 
     memset(pt_mfn, 0, sizeof(pt_mfn));
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     /* Since HVM domain is initialized with 2 level IO page table,
      * we might need a deeper page table for lager gfn now */
@@ -730,7 +730,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
     {
         if ( update_paging_mode(d, gfn) )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             AMD_IOMMU_DEBUG("Update page mode failed gfn = %lx\n", gfn);
             domain_crash(d);
             return -EFAULT;
@@ -739,7 +739,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     if ( iommu_pde_from_gfn(d, gfn, pt_mfn) || (pt_mfn[1] == 0) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         AMD_IOMMU_DEBUG("Invalid IO pagetable entry gfn = %lx\n", gfn);
         domain_crash(d);
         return -EFAULT;
@@ -747,7 +747,7 @@ int amd_iommu_unmap_page(struct domain *d, unsigned long gfn)
 
     /* mark PTE as 'page not present' */
     clear_iommu_pte_present(pt_mfn[1], gfn);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 
     amd_iommu_flush_pages(d, gfn, 0);
 
@@ -792,13 +792,13 @@ void amd_iommu_share_p2m(struct domain *d)
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
     p2m_table = mfn_to_page(mfn_x(pgd_mfn));
 
-    if ( hd->root_table != p2m_table )
+    if ( hd->arch.root_table != p2m_table )
     {
-        free_amd_iommu_pgtable(hd->root_table);
-        hd->root_table = p2m_table;
+        free_amd_iommu_pgtable(hd->arch.root_table);
+        hd->arch.root_table = p2m_table;
 
         /* When sharing p2m with iommu, paging mode = 4 */
-        hd->paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
+        hd->arch.paging_mode = IOMMU_PAGING_MODE_LEVEL_4;
         AMD_IOMMU_DEBUG("Share p2m table with iommu: p2m table = %#lx\n",
                         mfn_x(pgd_mfn));
     }
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c26aabc..0c3cd3e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -120,7 +120,8 @@ static void amd_iommu_setup_domain_device(
 
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
 
-    BUG_ON( !hd->root_table || !hd->paging_mode || !iommu->dev_table.buffer );
+    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+            !iommu->dev_table.buffer );
 
     if ( iommu_passthrough && (domain->domain_id == 0) )
         valid = 0;
@@ -138,8 +139,8 @@ static void amd_iommu_setup_domain_device(
     {
         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            (u32 *)dte, page_to_maddr(hd->root_table), hd->domain_id,
-            hd->paging_mode, valid);
+            (u32 *)dte, page_to_maddr(hd->arch.root_table), hd->arch.domain_id,
+            hd->arch.paging_mode, valid);
 
         if ( pci_ats_device(iommu->seg, bus, pdev->devfn) &&
              iommu_has_cap(iommu, PCI_CAP_IOTLB_SHIFT) )
@@ -151,8 +152,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->root_table),
-                        hd->domain_id, hd->paging_mode);
+                        page_to_maddr(hd->arch.root_table),
+                        hd->arch.domain_id, hd->arch.paging_mode);
     }
 
     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -225,17 +226,17 @@ int __init amd_iov_detect(void)
 static int allocate_domain_resources(struct hvm_iommu *hd)
 {
     /* allocate root table */
-    spin_lock(&hd->mapping_lock);
-    if ( !hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( !hd->arch.root_table )
     {
-        hd->root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->root_table )
+        hd->arch.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.root_table )
         {
-            spin_unlock(&hd->mapping_lock);
+            spin_unlock(&hd->arch.mapping_lock);
             return -ENOMEM;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     return 0;
 }
 
@@ -262,18 +263,18 @@ static int amd_iommu_domain_init(struct domain *d)
     /* allocate page directroy */
     if ( allocate_domain_resources(hd) != 0 )
     {
-        if ( hd->root_table )
-            free_domheap_page(hd->root_table);
+        if ( hd->arch.root_table )
+            free_domheap_page(hd->arch.root_table);
         return -ENOMEM;
     }
 
     /* For pv and dom0, stick with get_paging_mode(max_page)
      * For HVM dom0, use 2 level page table at first */
-    hd->paging_mode = is_hvm_domain(d) ?
+    hd->arch.paging_mode = is_hvm_domain(d) ?
                       IOMMU_PAGING_MODE_LEVEL_2 :
                       get_paging_mode(max_page);
 
-    hd->domain_id = d->domain_id;
+    hd->arch.domain_id = d->domain_id;
 
     guest_iommu_init(d);
 
@@ -333,8 +334,8 @@ void amd_iommu_disable_domain_device(struct domain *domain,
 
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
-                        req_id,  domain_hvm_iommu(domain)->domain_id,
-                        domain_hvm_iommu(domain)->paging_mode);
+                        req_id,  domain_hvm_iommu(domain)->arch.domain_id,
+                        domain_hvm_iommu(domain)->arch.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -374,7 +375,7 @@ static int reassign_device(struct domain *source, struct domain *target,
 
     /* IO page tables might be destroyed after pci-detach the last device
      * In this case, we have to re-allocate root table for next pci-attach.*/
-    if ( t->root_table == NULL )
+    if ( t->arch.root_table == NULL )
         allocate_domain_resources(t);
 
     amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
@@ -456,13 +457,13 @@ static void deallocate_iommu_page_tables(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    if ( hd->root_table )
+    spin_lock(&hd->arch.mapping_lock);
+    if ( hd->arch.root_table )
     {
-        deallocate_next_page_table(hd->root_table, hd->paging_mode);
-        hd->root_table = NULL;
+        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
+        hd->arch.root_table = NULL;
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 
@@ -593,11 +594,11 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     struct hvm_iommu *hd  = domain_hvm_iommu(d);
 
-    if ( !hd->root_table ) 
+    if ( !hd->arch.root_table ) 
         return;
 
-    printk("p2m table has %d levels\n", hd->paging_mode);
-    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
 }
 
 const struct iommu_ops amd_iommu_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 6893cf3..e6a1839 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -117,10 +117,11 @@ static void __init parse_iommu_param(char *s)
 int iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
+    int ret = 0;
 
-    spin_lock_init(&hd->mapping_lock);
-    INIT_LIST_HEAD(&hd->g2m_ioport_list);
-    INIT_LIST_HEAD(&hd->mapped_rmrrs);
+    ret = arch_iommu_domain_init(d);
+    if ( ret )
+        return ret;
 
     if ( !iommu_enabled )
         return 0;
@@ -189,10 +190,7 @@ void iommu_teardown(struct domain *d)
 
 void iommu_domain_destroy(struct domain *d)
 {
-    struct hvm_iommu *hd  = domain_hvm_iommu(d);
-    struct list_head *ioport_list, *rmrr_list, *tmp;
-    struct g2m_ioport *ioport;
-    struct mapped_rmrr *mrmrr;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
 
     if ( !iommu_enabled || !hd->platform_ops )
         return;
@@ -200,20 +198,8 @@ void iommu_domain_destroy(struct domain *d)
     if ( need_iommu(d) )
         iommu_teardown(d);
 
-    list_for_each_safe ( ioport_list, tmp, &hd->g2m_ioport_list )
-    {
-        ioport = list_entry(ioport_list, struct g2m_ioport, list);
-        list_del(&ioport->list);
-        xfree(ioport);
-    }
-
-    list_for_each_safe ( rmrr_list, tmp, &hd->mapped_rmrrs )
-    {
-        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
-        list_del(&mrmrr->list);
-        xfree(mrmrr);
-    }
-}
+    arch_iommu_domain_destroy(d);
+ }
 
 int iommu_map_page(struct domain *d, unsigned long gfn, unsigned long mfn,
                    unsigned int flags)
@@ -353,14 +339,6 @@ void iommu_suspend()
         ops->suspend();
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    const struct iommu_ops *ops = iommu_get_ops();
-
-    if ( iommu_enabled && is_hvm_domain(d) )
-        ops->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     const struct iommu_ops *ops = iommu_get_ops();
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d5ce5b7..697b015 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -249,16 +249,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
     struct hvm_iommu *hd = domain_hvm_iommu(domain);
-    int addr_width = agaw_to_width(hd->agaw);
+    int addr_width = agaw_to_width(hd->arch.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->agaw);
+    int level = agaw_to_level(hd->arch.agaw);
     int offset;
     u64 pte_maddr = 0, maddr;
     u64 *vaddr = NULL;
 
     addr &= (((u64)1) << addr_width) - 1;
-    ASSERT(spin_is_locked(&hd->mapping_lock));
-    if ( hd->pgd_maddr == 0 )
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+    if ( hd->arch.pgd_maddr == 0 )
     {
         /*
          * just get any passthrough device in the domainr - assume user
@@ -266,11 +266,11 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
          */
         pdev = pci_get_pdev_by_domain(domain, -1, -1, -1);
         drhd = acpi_find_matched_drhd_unit(pdev);
-        if ( !alloc || ((hd->pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
+        if ( !alloc || ((hd->arch.pgd_maddr = alloc_pgtable_maddr(drhd, 1)) == 0) )
             goto out;
     }
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -580,7 +580,7 @@ static void __intel_iommu_iotlb_flush(struct domain *d, unsigned long gfn,
     {
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -622,12 +622,12 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     u64 pg_maddr;
     struct mapped_rmrr *mrmrr;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
     /* get last level pte */
     pg_maddr = addr_to_dma_page_maddr(domain, addr, 0);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return;
     }
 
@@ -636,13 +636,13 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
 
     if ( !dma_pte_present(*pte) )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return;
     }
 
     dma_clear_pte(*pte);
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -653,8 +653,8 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
     /* if the cleared address is between mapped RMRR region,
      * remove the mapped RMRR
      */
-    spin_lock(&hd->mapping_lock);
-    list_for_each_entry ( mrmrr, &hd->mapped_rmrrs, list )
+    spin_lock(&hd->arch.mapping_lock);
+    list_for_each_entry ( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( addr >= mrmrr->base && addr <= mrmrr->end )
         {
@@ -663,7 +663,7 @@ static void dma_pte_clear_one(struct domain *domain, u64 addr)
             break;
         }
     }
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static void iommu_free_pagetable(u64 pt_maddr, int level)
@@ -1248,7 +1248,7 @@ static int intel_iommu_domain_init(struct domain *d)
 {
     struct hvm_iommu *hd = domain_hvm_iommu(d);
 
-    hd->agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    hd->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
 }
@@ -1345,16 +1345,16 @@ int domain_context_mapping_one(
     }
     else
     {
-        spin_lock(&hd->mapping_lock);
+        spin_lock(&hd->arch.mapping_lock);
 
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->pgd_maddr == 0 )
+        if ( hd->arch.pgd_maddr == 0 )
         {
             addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->pgd_maddr == 0 )
+            if ( hd->arch.pgd_maddr == 0 )
             {
             nomem:
-                spin_unlock(&hd->mapping_lock);
+                spin_unlock(&hd->arch.mapping_lock);
                 spin_unlock(&iommu->lock);
                 unmap_vtd_domain_page(context_entries);
                 return -ENOMEM;
@@ -1362,7 +1362,7 @@ int domain_context_mapping_one(
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->pgd_maddr;
+        pgd_maddr = hd->arch.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1380,7 +1380,7 @@ int domain_context_mapping_one(
         else
             context_set_translation_type(*context, CONTEXT_TT_MULTI_LEVEL);
 
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
     }
 
     if ( context_set_domain_id(context, domain, iommu) )
@@ -1406,7 +1406,7 @@ int domain_context_mapping_one(
         iommu_flush_iotlb_dsi(iommu, 0, 1, flush_dev_iotlb);
     }
 
-    set_bit(iommu->index, &hd->iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
 
@@ -1652,7 +1652,7 @@ static int domain_context_unmap(
         struct hvm_iommu *hd = domain_hvm_iommu(domain);
         int iommu_domid;
 
-        clear_bit(iommu->index, &hd->iommu_bitmap);
+        clear_bit(iommu->index, &hd->arch.iommu_bitmap);
 
         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1711,10 +1711,10 @@ static void iommu_domain_teardown(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->mapping_lock);
-    iommu_free_pagetable(hd->pgd_maddr, agaw_to_level(hd->agaw));
-    hd->pgd_maddr = 0;
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
+    hd->arch.pgd_maddr = 0;
+    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int intel_iommu_map_page(
@@ -1733,12 +1733,12 @@ static int intel_iommu_map_page(
     if ( iommu_passthrough && (d->domain_id == 0) )
         return 0;
 
-    spin_lock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
 
     pg_maddr = addr_to_dma_page_maddr(d, (paddr_t)gfn << PAGE_SHIFT_4K, 1);
     if ( pg_maddr == 0 )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
     page = (struct dma_pte *)map_vtd_domain_page(pg_maddr);
@@ -1755,14 +1755,14 @@ static int intel_iommu_map_page(
 
     if ( old.val == new.val )
     {
-        spin_unlock(&hd->mapping_lock);
+        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
     *pte = new;
 
     iommu_flush_cache_entry(pte, sizeof(struct dma_pte));
-    spin_unlock(&hd->mapping_lock);
+    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) )
@@ -1796,7 +1796,7 @@ void iommu_pte_flush(struct domain *d, u64 gfn, u64 *pte,
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
-        if ( !test_bit(iommu->index, &hd->iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = find_ats_dev_drhd(iommu) ? 1 : 0;
@@ -1837,7 +1837,7 @@ static void iommu_set_pgd(struct domain *d)
         return;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    hd->pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    hd->arch.pgd_maddr = pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
 static int rmrr_identity_mapping(struct domain *d,
@@ -1852,10 +1852,10 @@ static int rmrr_identity_mapping(struct domain *d,
     ASSERT(rmrr->base_address < rmrr->end_address);
 
     /*
-     * No need to acquire hd->mapping_lock, as the only theoretical race is
+     * No need to acquire hd->arch.mapping_lock, as the only theoretical race is
      * with the insertion below (impossible due to holding pcidevs_lock).
      */
-    list_for_each_entry( mrmrr, &hd->mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1880,9 +1880,9 @@ static int rmrr_identity_mapping(struct domain *d,
         return -ENOMEM;
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
-    spin_lock(&hd->mapping_lock);
-    list_add_tail(&mrmrr->list, &hd->mapped_rmrrs);
-    spin_unlock(&hd->mapping_lock);
+    spin_lock(&hd->arch.mapping_lock);
+    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    spin_unlock(&hd->arch.mapping_lock);
 
     return 0;
 }
@@ -2427,8 +2427,8 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;
 
     hd = domain_hvm_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->agaw));
-    vtd_dump_p2m_table_level(hd->pgd_maddr, agaw_to_level(hd->agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
+    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
 }
 
 const struct iommu_ops intel_iommu_ops = {
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index bd3c23b..c137cef 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -55,6 +55,47 @@ int __init iommu_setup_hpet_msi(struct msi_desc *msi)
     return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
 }
 
+void iommu_share_p2m_table(struct domain* d)
+{
+    const struct iommu_ops *ops = iommu_get_ops();
+
+    if ( iommu_enabled && is_hvm_domain(d) )
+        ops->share_p2m(d);
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    spin_lock_init(&hd->arch.mapping_lock);
+    INIT_LIST_HEAD(&hd->arch.g2m_ioport_list);
+    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
+
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct list_head *ioport_list, *rmrr_list, *tmp;
+    struct g2m_ioport *ioport;
+    struct mapped_rmrr *mrmrr;
+
+    list_for_each_safe ( ioport_list, tmp, &hd->arch.g2m_ioport_list )
+    {
+        ioport = list_entry(ioport_list, struct g2m_ioport, list);
+        list_del(&ioport->list);
+        xfree(ioport);
+    }
+
+    list_for_each_safe ( rmrr_list, tmp, &hd->arch.mapped_rmrrs )
+    {
+        mrmrr = list_entry(rmrr_list, struct mapped_rmrr, list);
+        list_del(&mrmrr->list);
+        xfree(mrmrr);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/iommu.h b/xen/include/asm-x86/hvm/iommu.h
index d488edf..a3f83d0 100644
--- a/xen/include/asm-x86/hvm/iommu.h
+++ b/xen/include/asm-x86/hvm/iommu.h
@@ -39,4 +39,33 @@ static inline int iommu_hardware_setup(void)
     return 0;
 }
 
+struct g2m_ioport {
+    struct list_head list;
+    unsigned int gport;
+    unsigned int mport;
+    unsigned int np;
+};
+
+struct mapped_rmrr {
+    struct list_head list;
+    u64 base;
+    u64 end;
+};
+
+struct arch_hvm_iommu
+{
+    u64 pgd_maddr;                 /* io page directory machine address */
+    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
+    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
+    /* amd iommu support */
+    int domain_id;
+    int paging_mode;
+    struct page_info *root_table;
+    struct guest_iommu *g_iommu;
+
+    struct list_head g2m_ioport_list;   /* guest to machine ioport mapping */
+    struct list_head mapped_rmrrs;
+    spinlock_t mapping_lock;            /* io page table lock */
+};
+
 #endif /* __ASM_X86_HVM_IOMMU_H__ */
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 34c1896..021cd80 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -19,6 +19,10 @@
 
 #include <asm/msi.h>
 
+/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
+#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
+#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
+
 void iommu_update_ire_from_apic(unsigned int apic, unsigned int reg, unsigned int value);
 int iommu_update_ire_from_msi(struct msi_desc *msi_desc, struct msi_msg *msg);
 void iommu_read_msi_from_ire(struct msi_desc *msi_desc, struct msi_msg *msg);
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index 2abb4e3..f8f8a93 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -23,32 +23,8 @@
 #include <xen/iommu.h>
 #include <asm/hvm/iommu.h>
 
-struct g2m_ioport {
-    struct list_head list;
-    unsigned int gport;
-    unsigned int mport;
-    unsigned int np;
-};
-
-struct mapped_rmrr {
-    struct list_head list;
-    u64 base;
-    u64 end;
-};
-
 struct hvm_iommu {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;       /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int domain_id;
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    struct arch_hvm_iommu arch;
 
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 65a37c0..5a19c80 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -35,11 +35,6 @@ extern bool_t iommu_hap_pt_share;
 extern bool_t iommu_debug;
 extern bool_t amd_iommu_perdev_intremap;
 
-/* Does this domain have a P2M table we can use as its IOMMU pagetable? */
-#define iommu_use_hap_pt(d) (hap_enabled(d) && iommu_hap_pt_share)
-
-#define domain_hvm_iommu(d)     (&d->arch.hvm_domain.hvm_iommu)
-
 #define PAGE_SHIFT_4K       (12)
 #define PAGE_SIZE_4K        (1UL << PAGE_SHIFT_4K)
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
@@ -55,6 +50,9 @@ void iommu_dom0_init(struct domain *d);
 void iommu_domain_destroy(struct domain *d);
 int deassign_device(struct domain *d, u16 seg, u8 bus, u8 devfn);
 
+void arch_iommu_domain_destroy(struct domain *d);
+int arch_iommu_domain_init(struct domain *d);
+
 /* Function used internally, use iommu_domain_destroy */
 void iommu_teardown(struct domain *d);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM1-0006zW-Gp; Sun, 23 Feb 2014 22:16:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhLy-0006uL-V3
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:55 +0000
Received: from [85.158.137.68:23447] by server-13.bemta-3.messagelabs.com id
	64/53-26923-6537A035; Sun, 23 Feb 2014 22:16:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393193813!872290!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13848 invoked from network); 23 Feb 2014 22:16:53 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:53 -0000
Received: by mail-ee0-f44.google.com with SMTP id d49so379558eek.31
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:52 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=kptRHXz/SXXR3o1kQ/HzCpySIYQRN16qFazuTqO2J5o=;
	b=gRj7Aa6GsoeeH/nkJyVfl3R/omFBhAgXGJvhJ3z8hTgFLU1wGF1jak/PKfQER5usrE
	lrHOUy9/P8JDFwgqlzXlCHC68wtbLxVnoLAHN986lEqx1MvMsEw92k566yrpzdBZdQ/1
	nJ4ssHwsFsLERIcK9OMWn1URL3Y4A/XwmHqN6Dz4NPdoczB/dD3vj64taKGZuIfXFThr
	pMXGYFYPmXR3WhhxxdZ6rA4M1gObMU3RApCwmuEu1ybktX8/FIyYZDunYU4rwLyNBxKJ
	kW2ybIfOOhWzM8f4Zg1CBlSP4Hz7Gqd/yic48CuMGz3mFQrgTkZCJGu/hSz9pWiW+ONA
	b8Pw==
X-Gm-Message-State: ALoCoQlu3SuVXVpNYK6jEKRM9Nn0QTy5aKh3JqCz24SwZv83Ng7QaPunaZFSavjTdCS5kS82/xn+
X-Received: by 10.14.176.66 with SMTP id a42mr20730293eem.101.1393193812806;
	Sun, 23 Feb 2014 14:16:52 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.50
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:51 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:27 +0000
Message-Id: <1393193792-20008-11-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 10/15] xen/passthrough: iommu: Basic support
	of device tree assignment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add IOMMU helpers to support device tree node assignment/deassignment. This
patch introduces 2 new fields in dt_device_node:
    - is_protected: whether the device is protected by an IOMMU
    - next_assigned: list head linking the device to the other devices
    assigned to the same domain

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Patch added
---
 xen/common/device_tree.c              |    4 ++
 xen/drivers/passthrough/Makefile      |    1 +
 xen/drivers/passthrough/device_tree.c |  106 +++++++++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu.c       |   10 ++++
 xen/include/xen/device_tree.h         |   14 +++++
 xen/include/xen/hvm/iommu.h           |    6 ++
 xen/include/xen/iommu.h               |   16 +++++
 7 files changed, 157 insertions(+)
 create mode 100644 xen/drivers/passthrough/device_tree.c

diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 564f2bb..7c6b683 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -1695,6 +1695,10 @@ static unsigned long __init unflatten_dt_node(const void *fdt,
         np->full_name = ((char *)np) + sizeof(struct dt_device_node);
         /* By default dom0 owns the device */
         np->used_by = 0;
+        /* By default the device is not protected */
+        np->is_protected = false;
+        INIT_LIST_HEAD(&np->next_assigned);
+
         if ( new_format )
         {
             char *fn = np->full_name;
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 6e08f89..5a0a35e 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -5,3 +5,4 @@ subdir-$(x86_64) += x86
 obj-y += iommu.o
 obj-$(x86) += io.o
 obj-$(HAS_PCI) += pci.o
+obj-$(HAS_DEVICE_TREE) += device_tree.o
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
new file mode 100644
index 0000000..7384e73
--- /dev/null
+++ b/xen/drivers/passthrough/device_tree.c
@@ -0,0 +1,106 @@
+/*
+ * xen/drivers/passthrough/device_tree.c
+ *
+ * Code to pass through a device tree node to a guest
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/sched.h>
+#include <xen/iommu.h>
+#include <xen/device_tree.h>
+
+static spinlock_t dtdevs_lock = SPIN_LOCK_UNLOCKED;
+
+int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev)
+{
+    int rc = -EBUSY;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EINVAL;
+
+    if ( !dt_device_is_protected(dev) )
+        return -EINVAL;
+
+    spin_lock(&dtdevs_lock);
+
+    if ( !list_empty(&dev->next_assigned) )
+        goto fail;
+
+    rc = hd->platform_ops->assign_dt_device(d, dev);
+
+    if ( rc )
+        goto fail;
+
+    list_add(&dev->next_assigned, &hd->dt_devices);
+    dt_device_set_used_by(dev, d->domain_id);
+
+fail:
+    spin_unlock(&dtdevs_lock);
+
+    return rc;
+}
+
+int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    int rc;
+
+    if ( !iommu_enabled || !hd->platform_ops )
+        return -EINVAL;
+
+    if ( !dt_device_is_protected(dev) )
+        return -EINVAL;
+
+    spin_lock(&dtdevs_lock);
+
+    rc = hd->platform_ops->reassign_dt_device(d, dom0, dev);
+    if ( rc )
+        goto fail;
+
+    dt_device_set_used_by(dev, dom0->domain_id);
+
+    list_del(&dev->next_assigned);
+
+fail:
+    spin_unlock(&dtdevs_lock);
+
+    return rc;
+}
+
+int iommu_dt_domain_init(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+
+    INIT_LIST_HEAD(&hd->dt_devices);
+
+    return 0;
+}
+
+void iommu_dt_domain_destroy(struct domain *d)
+{
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    struct dt_device_node *dev, *_dev;
+    int rc;
+
+    list_for_each_entry_safe(dev, _dev, &hd->dt_devices, next_assigned)
+    {
+        rc = iommu_deassign_dt_device(d, dev);
+        if ( rc )
+            dprintk(XENLOG_ERR, "Failed to deassign %s in domain %u\n",
+                    dt_node_full_name(dev), d->domain_id);
+    }
+}
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e6a1839..6c951d0 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -123,6 +123,12 @@ int iommu_domain_init(struct domain *d)
     if ( ret )
         return ret;
 
+#ifdef HAS_DEVICE_TREE
+    ret = iommu_dt_domain_init(d);
+    if ( ret )
+        return ret;
+#endif
+
     if ( !iommu_enabled )
         return 0;
 
@@ -198,6 +204,10 @@ void iommu_domain_destroy(struct domain *d)
     if ( need_iommu(d) )
         iommu_teardown(d);
 
+#ifdef HAS_DEVICE_TREE
+    iommu_dt_domain_destroy(d);
+#endif
+
     arch_iommu_domain_destroy(d);
  }
 
diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
index d429e60..2aae047 100644
--- a/xen/include/xen/device_tree.h
+++ b/xen/include/xen/device_tree.h
@@ -16,6 +16,7 @@
 #include <xen/string.h>
 #include <xen/types.h>
 #include <xen/stdbool.h>
+#include <xen/list.h>
 
 #define DEVICE_TREE_MAX_DEPTH 16
 
@@ -110,6 +111,9 @@ struct dt_device_node {
     struct dt_device_node *next; /* TODO: Remove it. Only use to know the last children */
     struct dt_device_node *allnext;
 
+    /* IOMMU specific fields */
+    bool is_protected; /* Tell if the device is protected by an IOMMU */
+    struct list_head next_assigned;
 };
 
 #define MAX_PHANDLE_ARGS 16
@@ -325,6 +329,16 @@ static inline domid_t dt_device_used_by(const struct dt_device_node *device)
     return device->used_by;
 }
 
+static inline void dt_device_set_protected(struct dt_device_node *device)
+{
+    device->is_protected = true;
+}
+
+static inline bool dt_device_is_protected(const struct dt_device_node *device)
+{
+    return device->is_protected;
+}
+
 static inline bool_t dt_property_name_is_equal(const struct dt_property *pp,
                                                const char *name)
 {
diff --git a/xen/include/xen/hvm/iommu.h b/xen/include/xen/hvm/iommu.h
index f8f8a93..72002e1 100644
--- a/xen/include/xen/hvm/iommu.h
+++ b/xen/include/xen/hvm/iommu.h
@@ -21,6 +21,7 @@
 #define __XEN_HVM_IOMMU_H__
 
 #include <xen/iommu.h>
+#include <xen/list.h>
 #include <asm/hvm/iommu.h>
 
 struct hvm_iommu {
@@ -28,6 +29,11 @@ struct hvm_iommu {
 
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
+
+#ifdef HAS_DEVICE_TREE
+    /* List of DT devices assigned to this domain */
+    struct list_head dt_devices;
+#endif
 };
 
 #endif /* __XEN_HVM_IOMMU_H__ */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 5a19c80..266cd6e 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -82,6 +82,16 @@ bool_t pt_irq_need_timer(uint32_t flags);
 #define PT_IRQ_TIME_OUT MILLISECS(8)
 #endif /* HAS_PCI */
 
+#ifdef HAS_DEVICE_TREE
+#include <xen/device_tree.h>
+
+int iommu_assign_dt_device(struct domain *d, struct dt_device_node *dev);
+int iommu_deassign_dt_device(struct domain *d, struct dt_device_node *dev);
+int iommu_dt_domain_init(struct domain *d);
+void iommu_dt_domain_destroy(struct domain *d);
+
+#endif /* HAS_DEVICE_TREE */
+
 #ifdef CONFIG_X86
 struct msi_desc;
 struct msi_msg;
@@ -101,6 +111,12 @@ struct iommu_ops {
 			   u8 devfn, struct pci_dev *);
     int (*get_device_group_id)(u16 seg, u8 bus, u8 devfn);
 #endif /* HAS_PCI */
+#ifdef HAS_DEVICE_TREE
+    int (*assign_dt_device)(struct domain *d, const struct dt_device_node *dev);
+    int (*reassign_dt_device)(struct domain *s, struct domain *t,
+                              const struct dt_device_node *dev);
+#endif
+
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+                              const struct dt_device_node *dev);
+#endif
+
     void (*teardown)(struct domain *d);
     int (*map_page)(struct domain *d, unsigned long gfn, unsigned long mfn,
                     unsigned int flags);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM2-00070V-35; Sun, 23 Feb 2014 22:16:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM0-0006xg-Id
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:56 +0000
Received: from [85.158.143.35:16637] by server-1.bemta-4.messagelabs.com id
	44/3D-31661-7537A035; Sun, 23 Feb 2014 22:16:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393193814!7716570!1
X-Originating-IP: [74.125.83.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22523 invoked from network); 23 Feb 2014 22:16:54 -0000
Received: from mail-ee0-f48.google.com (HELO mail-ee0-f48.google.com)
	(74.125.83.48)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:54 -0000
Received: by mail-ee0-f48.google.com with SMTP id t10so2720258eei.21
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=sUpbMxiNfeUjkI+Arrdi8GWVkytTso6ry3ivy3JmjsM=;
	b=X4fiFZiDy1toE5NtvUey/q/bFIkHHYkUvaN6Vj1EvJokwuYD09Aofe3f9+jIOGjL2j
	yKTSdxnMjIqumcrV8URdy4CS6UHno3b1+jU3+v69FaL2mgEA16aUYFjszjj+uR/uK+bk
	eYRxQbFPjKqVyF8XSaD10TvbZBKATqof64tWz+C2zUgTdQ9G/khTYFseQ89GXNHnbiFt
	CFIIl8fRTqhwj3gFH5b+A9FWxWHgW5YBadtjmPLWsBUVYcaWXle2EcphbW2uR7kvchkm
	iLFwsGS7k2ZnZ5viQDTcDGFJL7wp9Lg3aZoYGvhzChsgssxUrosaBLRO37y3ti0UWtyF
	ZvqQ==
X-Gm-Message-State: ALoCoQlwwMynI6HNihZMP4Aoicek1iKXyAfU3OhVnqvEHpb2A0D7MPGcbJ2oTUfiW3NtLNcaw27o
X-Received: by 10.14.204.9 with SMTP id g9mr20787748eeo.82.1393193814160;
	Sun, 23 Feb 2014 14:16:54 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.52
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:53 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:28 +0000
Message-Id: <1393193792-20008-12-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 11/15] xen/passthrough: Introduce IOMMU ARM
	architecture
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the architecture code needed to use IOMMUs on ARM. There
are no IOMMU drivers in this patch.

In this implementation, the IOMMU page table is shared with the P2M.

The code runs through the device tree and initializes every IOMMU. It's
possible to have multiple IOMMUs on the same platform, but they must all
be handled by the same driver; for now, there is no support for using
multiple IOMMU drivers at runtime.

Each new IOMMU driver should contain:

static const char * const myiommu_dt_compat[] __initconst =
{
    /* List of devices compatible with the driver. Will be matched
     * against the "compatible" property in the device tree.
     */
    NULL,
};

DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
        .compatible = myiommu_dt_compat,
        .init = myiommu_init,
DT_DEVICE_END

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Fix typos in commit message
        - Remove useless comment in arch/arm/setup.c
        - Update copyright date to 2014
        - Move iommu_dom0_init earlier
        - Call iommu_assign_dt_device in map_device when the device is
        protected by an IOMMU
---
 xen/arch/arm/Rules.mk                |    1 +
 xen/arch/arm/domain.c                |    7 ++++
 xen/arch/arm/domain_build.c          |   19 ++++++++--
 xen/arch/arm/p2m.c                   |    4 +++
 xen/arch/arm/setup.c                 |    2 ++
 xen/drivers/passthrough/Makefile     |    1 +
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/iommu.c  |   65 ++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/device.h         |    3 +-
 xen/include/asm-arm/domain.h         |    2 ++
 xen/include/asm-arm/hvm/iommu.h      |   10 ++++++
 xen/include/asm-arm/iommu.h          |   36 +++++++++++++++++++
 12 files changed, 147 insertions(+), 4 deletions(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 57f2eb1..1703551 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -9,6 +9,7 @@
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
+HAS_PASSTHROUGH := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 84a21d5..f833c93 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -546,6 +546,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     if ( (d->domain_id == 0) && (rc = domain_vuart_init(d)) )
         goto fail;
 
+    if ( (rc = iommu_domain_init(d)) != 0 )
+        goto fail;
+
     return 0;
 
 fail:
@@ -557,6 +560,10 @@ fail:
 
 void arch_domain_destroy(struct domain *d)
 {
+    /* IOMMU page table is shared with P2M, always call
+     * iommu_domain_destroy() before p2m_teardown().
+     */
+    iommu_domain_destroy(d);
     p2m_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c9c3df2..ec304cf 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -669,7 +669,7 @@ static int make_timer_node(const struct domain *d, void *fdt,
 }
 
 /* Map the device in the domain */
-static int map_device(struct domain *d, const struct dt_device_node *dev)
+static int map_device(struct domain *d, struct dt_device_node *dev)
 {
     unsigned int nirq;
     unsigned int naddr;
@@ -684,6 +684,18 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
 
     DPRINT("%s nirq = %d naddr = %u\n", dt_node_full_name(dev), nirq, naddr);
 
+    if ( dt_device_is_protected(dev) )
+    {
+        DPRINT("%s setup iommu\n", dt_node_full_name(dev));
+        res = iommu_assign_dt_device(d, dev);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Failed to setup the IOMMU for %s\n",
+                   dt_node_full_name(dev));
+            return res;
+        }
+    }
+
     /* Map IRQs */
     for ( i = 0; i < nirq; i++ )
     {
@@ -755,7 +767,7 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
 }
 
 static int handle_node(struct domain *d, struct kernel_info *kinfo,
-                       const struct dt_device_node *node)
+                       struct dt_device_node *node)
 {
     static const struct dt_device_match skip_matches[] __initconst =
     {
@@ -776,7 +788,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
         DT_MATCH_TIMER,
         { /* sentinel */ },
     };
-    const struct dt_device_node *child;
+    struct dt_device_node *child;
     int res;
     const char *name;
     const char *path;
@@ -1009,6 +1021,7 @@ int construct_dom0(struct domain *d)
     kinfo.unassigned_mem = dom0_mem;
 
     allocate_memory(d, &kinfo);
+    iommu_dom0_init(d);
 
     rc = kernel_prepare(&kinfo);
     if ( rc < 0 )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d00c882..d8ed0de 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -412,12 +412,16 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( flush )
     {
+        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
+        unsigned long egfn = paddr_to_pfn(end_gpaddr);
+
         /* At the beginning of the function, Xen is updating VTTBR
          * with the domain where the mappings are created. In this
          * case it's only necessary to flush TLBs on every CPUs with
          * the current VMID (our domain).
          */
         flush_tlb();
+        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
     }
 
     if ( op == ALLOCATE || op == INSERT )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5529cb1..7aba6d0 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -728,6 +728,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     local_irq_enable();
     local_abort_enable();
 
+    iommu_setup();
+
     smp_prepare_cpus(cpus);
 
     initialize_keytable();
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 5a0a35e..16e9027 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -1,6 +1,7 @@
 subdir-$(x86) += vtd
 subdir-$(x86) += amd
 subdir-$(x86_64) += x86
+subdir-$(arm) += arm
 
 obj-y += iommu.o
 obj-$(x86) += io.o
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
new file mode 100644
index 0000000..0484b79
--- /dev/null
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -0,0 +1 @@
+obj-y += iommu.o
diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
new file mode 100644
index 0000000..d93b915
--- /dev/null
+++ b/xen/drivers/passthrough/arm/iommu.c
@@ -0,0 +1,65 @@
+/*
+ * xen/drivers/passthrough/arm/iommu.c
+ *
+ * Generic IOMMU framework via the device tree
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/iommu.h>
+#include <xen/device_tree.h>
+#include <asm/device.h>
+
+static const struct iommu_ops *iommu_ops;
+
+const struct iommu_ops *iommu_get_ops(void)
+{
+    return iommu_ops;
+}
+
+void __init iommu_set_ops(const struct iommu_ops *ops)
+{
+    BUG_ON(ops == NULL);
+
+    if ( iommu_ops && iommu_ops != ops )
+        printk("WARNING: IOMMU ops already set to a different value\n");
+
+    iommu_ops = ops;
+}
+
+int __init iommu_hardware_setup(void)
+{
+    struct dt_device_node *np;
+    int rc;
+    unsigned int num_iommus = 0;
+
+    dt_for_each_device_node(dt_host, np)
+    {
+        rc = device_init(np, DEVICE_IOMMU, NULL);
+        if ( !rc )
+            num_iommus++;
+    }
+
+    return ( num_iommus > 0 ) ? 0 : -ENODEV;
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+}
diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
index 9e47ca6..ed04344 100644
--- a/xen/include/asm-arm/device.h
+++ b/xen/include/asm-arm/device.h
@@ -6,7 +6,8 @@
 
 enum device_type
 {
-    DEVICE_SERIAL
+    DEVICE_SERIAL,
+    DEVICE_IOMMU,
 };
 
 struct device_desc {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bc20a15..ad6587a 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -9,6 +9,7 @@
 #include <asm/vfp.h>
 #include <public/hvm/params.h>
 #include <xen/serial.h>
+#include <xen/hvm/iommu.h>
 
 /* Represents state corresponding to a block of 32 interrupts */
 struct vgic_irq_rank {
@@ -72,6 +73,7 @@ struct pending_irq
 struct hvm_domain
 {
     uint64_t              params[HVM_NR_PARAMS];
+    struct hvm_iommu      hvm_iommu;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
new file mode 100644
index 0000000..461c8cf
--- /dev/null
+++ b/xen/include/asm-arm/hvm/iommu.h
@@ -0,0 +1,10 @@
+#ifndef __ASM_ARM_HVM_IOMMU_H_
+#define __ASM_ARM_HVM_IOMMU_H_
+
+struct arch_hvm_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
+#endif /* __ASM_ARM_HVM_IOMMU_H_ */
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
new file mode 100644
index 0000000..81eec83
--- /dev/null
+++ b/xen/include/asm-arm/iommu.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_ARM_IOMMU_H__
+#define __ARCH_ARM_IOMMU_H__
+
+/* Always share P2M Table between the CPU and the IOMMU */
+#define iommu_use_hap_pt(d) (1)
+#define domain_hvm_iommu(d) (&d->arch.hvm_domain.hvm_iommu)
+
+const struct iommu_ops *iommu_get_ops(void);
+void __init iommu_set_ops(const struct iommu_ops *ops);
+
+int __init iommu_hardware_setup(void);
+
+#endif /* __ARCH_ARM_IOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM2-00070V-35; Sun, 23 Feb 2014 22:16:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM0-0006xg-Id
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:56 +0000
Received: from [85.158.143.35:16637] by server-1.bemta-4.messagelabs.com id
	44/3D-31661-7537A035; Sun, 23 Feb 2014 22:16:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393193814!7716570!1
X-Originating-IP: [74.125.83.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22523 invoked from network); 23 Feb 2014 22:16:54 -0000
Received: from mail-ee0-f48.google.com (HELO mail-ee0-f48.google.com)
	(74.125.83.48)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:54 -0000
Received: by mail-ee0-f48.google.com with SMTP id t10so2720258eei.21
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=sUpbMxiNfeUjkI+Arrdi8GWVkytTso6ry3ivy3JmjsM=;
	b=X4fiFZiDy1toE5NtvUey/q/bFIkHHYkUvaN6Vj1EvJokwuYD09Aofe3f9+jIOGjL2j
	yKTSdxnMjIqumcrV8URdy4CS6UHno3b1+jU3+v69FaL2mgEA16aUYFjszjj+uR/uK+bk
	eYRxQbFPjKqVyF8XSaD10TvbZBKATqof64tWz+C2zUgTdQ9G/khTYFseQ89GXNHnbiFt
	CFIIl8fRTqhwj3gFH5b+A9FWxWHgW5YBadtjmPLWsBUVYcaWXle2EcphbW2uR7kvchkm
	iLFwsGS7k2ZnZ5viQDTcDGFJL7wp9Lg3aZoYGvhzChsgssxUrosaBLRO37y3ti0UWtyF
	ZvqQ==
X-Gm-Message-State: ALoCoQlwwMynI6HNihZMP4Aoicek1iKXyAfU3OhVnqvEHpb2A0D7MPGcbJ2oTUfiW3NtLNcaw27o
X-Received: by 10.14.204.9 with SMTP id g9mr20787748eeo.82.1393193814160;
	Sun, 23 Feb 2014 14:16:54 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.52
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:53 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:28 +0000
Message-Id: <1393193792-20008-12-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 11/15] xen/passthrough: Introduce IOMMU ARM
	architecture
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch contains the architecture to use IOMMUs on ARM. There is no
IOMMU drivers on this patch.

In this implementation, IOMMU page table will be shared with the P2M.

The code will run through the device tree and will initialize every IOMMU.
It's possible to have multiple IOMMUs on the same platform, but they must
be handled with the same driver. For now, there is no support for using
multiple iommu drivers at runtime.

Each new IOMMU drivers should contain:

static const char * const myiommu_dt_compat[] __initconst =
{
    /* list of device compatible with the drivers. Will be matched with
     * the "compatible" property on the device tree
     */
    NULL,
};

DT_DEVICE_START(myiommu, "MY IOMMU", DEVICE_IOMMU)
        .compatible = myiommu_compatible,
        .init = myiommu_init,
DT_DEVICE_END

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Fix typoes in commit message
        - Remove useless comment in arch/arm/setup.c
        - Update copyright date to 2014
        - Move iommu_dom0_init earlier
        - Call iommu_assign_dt_device in map_device when the device is
        protected by an IOMMU
---
 xen/arch/arm/Rules.mk                |    1 +
 xen/arch/arm/domain.c                |    7 ++++
 xen/arch/arm/domain_build.c          |   19 ++++++++--
 xen/arch/arm/p2m.c                   |    4 +++
 xen/arch/arm/setup.c                 |    2 ++
 xen/drivers/passthrough/Makefile     |    1 +
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/iommu.c  |   65 ++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/device.h         |    3 +-
 xen/include/asm-arm/domain.h         |    2 ++
 xen/include/asm-arm/hvm/iommu.h      |   10 ++++++
 xen/include/asm-arm/iommu.h          |   36 +++++++++++++++++++
 12 files changed, 147 insertions(+), 4 deletions(-)
 create mode 100644 xen/drivers/passthrough/arm/Makefile
 create mode 100644 xen/drivers/passthrough/arm/iommu.c
 create mode 100644 xen/include/asm-arm/hvm/iommu.h
 create mode 100644 xen/include/asm-arm/iommu.h

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index 57f2eb1..1703551 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -9,6 +9,7 @@
 HAS_DEVICE_TREE := y
 HAS_VIDEO := y
 HAS_ARM_HDLCD := y
+HAS_PASSTHROUGH := y
 
 CFLAGS += -I$(BASEDIR)/include
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 84a21d5..f833c93 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -546,6 +546,9 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     if ( (d->domain_id == 0) && (rc = domain_vuart_init(d)) )
         goto fail;
 
+    if ( (rc = iommu_domain_init(d)) != 0 )
+        goto fail;
+
     return 0;
 
 fail:
@@ -557,6 +560,10 @@ fail:
 
 void arch_domain_destroy(struct domain *d)
 {
+    /* IOMMU page table is shared with P2M, always call
+     * iommu_domain_destroy() before p2m_teardown().
+     */
+    iommu_domain_destroy(d);
     p2m_teardown(d);
     domain_vgic_free(d);
     domain_vuart_free(d);
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index c9c3df2..ec304cf 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -669,7 +669,7 @@ static int make_timer_node(const struct domain *d, void *fdt,
 }
 
 /* Map the device in the domain */
-static int map_device(struct domain *d, const struct dt_device_node *dev)
+static int map_device(struct domain *d, struct dt_device_node *dev)
 {
     unsigned int nirq;
     unsigned int naddr;
@@ -684,6 +684,18 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
 
     DPRINT("%s nirq = %d naddr = %u\n", dt_node_full_name(dev), nirq, naddr);
 
+    if ( dt_device_is_protected(dev) )
+    {
+        DPRINT("%s setup iommu\n", dt_node_full_name(dev));
+        res = iommu_assign_dt_device(d, dev);
+        if ( res )
+        {
+            printk(XENLOG_ERR "Failed to setup the IOMMU for %s\n",
+                   dt_node_full_name(dev));
+            return res;
+        }
+    }
+
     /* Map IRQs */
     for ( i = 0; i < nirq; i++ )
     {
@@ -755,7 +767,7 @@ static int map_device(struct domain *d, const struct dt_device_node *dev)
 }
 
 static int handle_node(struct domain *d, struct kernel_info *kinfo,
-                       const struct dt_device_node *node)
+                       struct dt_device_node *node)
 {
     static const struct dt_device_match skip_matches[] __initconst =
     {
@@ -776,7 +788,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
         DT_MATCH_TIMER,
         { /* sentinel */ },
     };
-    const struct dt_device_node *child;
+    struct dt_device_node *child;
     int res;
     const char *name;
     const char *path;
@@ -1009,6 +1021,7 @@ int construct_dom0(struct domain *d)
     kinfo.unassigned_mem = dom0_mem;
 
     allocate_memory(d, &kinfo);
+    iommu_dom0_init(d);
 
     rc = kernel_prepare(&kinfo);
     if ( rc < 0 )
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index d00c882..d8ed0de 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -412,12 +412,16 @@ static int apply_p2m_changes(struct domain *d,
 
     if ( flush )
     {
+        unsigned long sgfn = paddr_to_pfn(start_gpaddr);
+        unsigned long egfn = paddr_to_pfn(end_gpaddr);
+
         /* At the beginning of the function, Xen is updating VTTBR
          * with the domain where the mappings are created. In this
          * case it's only necessary to flush TLBs on every CPUs with
          * the current VMID (our domain).
          */
         flush_tlb();
+        iommu_iotlb_flush(d, sgfn, egfn - sgfn);
     }
 
     if ( op == ALLOCATE || op == INSERT )
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 5529cb1..7aba6d0 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -728,6 +728,8 @@ void __init start_xen(unsigned long boot_phys_offset,
     local_irq_enable();
     local_abort_enable();
 
+    iommu_setup();
+
     smp_prepare_cpus(cpus);
 
     initialize_keytable();
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 5a0a35e..16e9027 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -1,6 +1,7 @@
 subdir-$(x86) += vtd
 subdir-$(x86) += amd
 subdir-$(x86_64) += x86
+subdir-$(arm) += arm
 
 obj-y += iommu.o
 obj-$(x86) += io.o
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
new file mode 100644
index 0000000..0484b79
--- /dev/null
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -0,0 +1 @@
+obj-y += iommu.o
diff --git a/xen/drivers/passthrough/arm/iommu.c b/xen/drivers/passthrough/arm/iommu.c
new file mode 100644
index 0000000..d93b915
--- /dev/null
+++ b/xen/drivers/passthrough/arm/iommu.c
@@ -0,0 +1,65 @@
+/*
+ * xen/drivers/passthrough/arm/iommu.c
+ *
+ * Generic IOMMU framework via the device tree
+ *
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (c) 2014 Linaro Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/iommu.h>
+#include <xen/device_tree.h>
+#include <asm/device.h>
+
+static const struct iommu_ops *iommu_ops;
+
+const struct iommu_ops *iommu_get_ops(void)
+{
+    return iommu_ops;
+}
+
+void __init iommu_set_ops(const struct iommu_ops *ops)
+{
+    BUG_ON(ops == NULL);
+
+    if ( iommu_ops && iommu_ops != ops )
+        printk("WARNING: IOMMU ops already set to a different value\n");
+
+    iommu_ops = ops;
+}
+
+int __init iommu_hardware_setup(void)
+{
+    struct dt_device_node *np;
+    int rc;
+    unsigned int num_iommus = 0;
+
+    dt_for_each_device_node(dt_host, np)
+    {
+        rc = device_init(np, DEVICE_IOMMU, NULL);
+        if ( !rc )
+            num_iommus++;
+    }
+
+    return ( num_iommus > 0 ) ? 0 : -ENODEV;
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    return 0;
+}
+
+void arch_iommu_domain_destroy(struct domain *d)
+{
+}
diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
index 9e47ca6..ed04344 100644
--- a/xen/include/asm-arm/device.h
+++ b/xen/include/asm-arm/device.h
@@ -6,7 +6,8 @@
 
 enum device_type
 {
-    DEVICE_SERIAL
+    DEVICE_SERIAL,
+    DEVICE_IOMMU,
 };
 
 struct device_desc {
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index bc20a15..ad6587a 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -9,6 +9,7 @@
 #include <asm/vfp.h>
 #include <public/hvm/params.h>
 #include <xen/serial.h>
+#include <xen/hvm/iommu.h>
 
 /* Represents state corresponding to a block of 32 interrupts */
 struct vgic_irq_rank {
@@ -72,6 +73,7 @@ struct pending_irq
 struct hvm_domain
 {
     uint64_t              params[HVM_NR_PARAMS];
+    struct hvm_iommu      hvm_iommu;
 }  __cacheline_aligned;
 
 #ifdef CONFIG_ARM_64
diff --git a/xen/include/asm-arm/hvm/iommu.h b/xen/include/asm-arm/hvm/iommu.h
new file mode 100644
index 0000000..461c8cf
--- /dev/null
+++ b/xen/include/asm-arm/hvm/iommu.h
@@ -0,0 +1,10 @@
+#ifndef __ASM_ARM_HVM_IOMMU_H_
+#define __ASM_ARM_HVM_IOMMU_H_
+
+struct arch_hvm_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
+#endif /* __ASM_ARM_HVM_IOMMU_H_ */
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
new file mode 100644
index 0000000..81eec83
--- /dev/null
+++ b/xen/include/asm-arm/iommu.h
@@ -0,0 +1,36 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+*/
+#ifndef __ARCH_ARM_IOMMU_H__
+#define __ARCH_ARM_IOMMU_H__
+
+/* Always share P2M Table between the CPU and the IOMMU */
+#define iommu_use_hap_pt(d) (1)
+#define domain_hvm_iommu(d) (&d->arch.hvm_domain.hvm_iommu)
+
+const struct iommu_ops *iommu_get_ops(void);
+void __init iommu_set_ops(const struct iommu_ops *ops);
+
+int __init iommu_hardware_setup(void);
+
+#endif /* __ARCH_ARM_IOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
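
[Editorial aside on the iommu_get_ops()/iommu_set_ops() declarations in the patch above: a platform typically exposes one active IOMMU driver, so the pattern is a single ops pointer set once during boot and read back afterwards. Below is a minimal standalone sketch of that pattern; the struct fields and the dummy driver are illustrative stand-ins, not Xen's actual iommu_ops.]

```c
#include <stddef.h>

/* Hypothetical stand-in for Xen's struct iommu_ops: a table of
 * callbacks registered by the single platform IOMMU driver. */
struct iommu_ops {
    int (*init)(void);
    const char *name;
};

/* One global ops pointer: set once at boot by the driver that probed
 * the hardware, then consulted by the generic IOMMU framework. */
static const struct iommu_ops *iommu_ops;

void iommu_set_ops(const struct iommu_ops *ops)
{
    iommu_ops = ops;
}

const struct iommu_ops *iommu_get_ops(void)
{
    /* NULL means no IOMMU driver registered itself. */
    return iommu_ops;
}
```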

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:16:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:16:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM3-00072U-Jr; Sun, 23 Feb 2014 22:16:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM1-0006yj-C3
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:57 +0000
Received: from [193.109.254.147:11333] by server-6.bemta-14.messagelabs.com id
	39/F0-03396-8537A035; Sun, 23 Feb 2014 22:16:56 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393193815!6259273!1
X-Originating-IP: [74.125.83.49]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16180 invoked from network); 23 Feb 2014 22:16:55 -0000
Received: from mail-ee0-f49.google.com (HELO mail-ee0-f49.google.com)
	(74.125.83.49)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:55 -0000
Received: by mail-ee0-f49.google.com with SMTP id d17so2747457eek.8
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:55 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=5QUAHWhbwvA4f76W2wyXxz2TchChA1K/NYkivSSjfxg=;
	b=IwL7uQjMhlSc9bPQZnSMJKOKkSCtu467JMyRUn4UPYPWlKR2TxVvrHdC7RpIQ8+8Vu
	UL09ZtljInTZCVti095fn1b7OHfBa+PcmOLXHpOrTo9N73Fomfbe9FD5Y62ds/M96+U+
	w3qJ0ZoNHx/Qc8qsTwJPgj/co9RosCiS+Wj/m1KexJdcKImvyp+haU/Nxmbt/odh+HHH
	uGn8iRqLW2kshssD2FjznQlibqiDy/S/3d5RyiEhSz8hXu1tiB041miO6qYFkh1bTBGE
	Qe4lHVmLRzvBq4awZDqhS81VapcpdJmTul8xlg82kYUDl+hzV3dFTMT7xBw2T63ngO/s
	/2ag==
X-Gm-Message-State: ALoCoQkZ2fAYvP3CfT05L6BIp1p5gnFPe3wio8zw7p3z25aF/Boo0YNOQYgFPT3EEbxuHTK7hFDG
X-Received: by 10.14.111.5 with SMTP id v5mr15986526eeg.11.1393193815617;
	Sun, 23 Feb 2014 14:16:55 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.54
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:54 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:29 +0000
Message-Id: <1393193792-20008-13-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Keir Fraser <keir@xen.org>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 12/15] MAINTAINERS: Add
	drivers/passthrough/arm
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add the ARM IOMMU directory to the "ARM ARCHITECTURE" section

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Cc: Keir Fraser <keir@xen.org>
---
 MAINTAINERS |    1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 7757cdd..ad6c8a9 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -130,6 +130,7 @@ S:	Supported
 L:	xen-devel@lists.xen.org
 F:	xen/arch/arm/
 F:	xen/include/asm-arm/
+F:	xen/drivers/passthrough/arm
 
 CPU POOLS
 M:	Juergen Gross <juergen.gross@ts.fujitsu.com>
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:17:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:17:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM4-00073U-4X; Sun, 23 Feb 2014 22:17:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM3-00071V-9u
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:16:59 +0000
Received: from [85.158.137.68:18957] by server-11.bemta-3.messagelabs.com id
	E9/9D-04255-A537A035; Sun, 23 Feb 2014 22:16:58 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393193817!3691820!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25156 invoked from network); 23 Feb 2014 22:16:57 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:57 -0000
Received: by mail-ee0-f43.google.com with SMTP id e51so2652238eek.30
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=VOLcSOtsm7FKqE5NcJ0TE6vCswy7Ooqezqr7Kpq28Lw=;
	b=dAP0megbbd2Ty1DVtmvnNlmGfPCY7+NeVgroXHaFWMLhHumgAnix9tvEtJxVxB5tNG
	e9bmQVffIuSVu8+Qts4we/2Wau47wh78eEXGi7Ed5meX9MGBZM44/iYHENBeLVy8T8tG
	k5tsOIQfgrSJQ2jAhU7N4uCNAQamH1kRAYut1OyHQ5OpQSEwuq1NE+4pGgbyfLGkx7tU
	1d7evYqOx4npBGB9IiW77Z7LNXm06VCO6D/IMyEB8nO3LgErB1lV5U/NIpiJ9j4ntQiJ
	jgLTXCaoADyQAMLKHcrOlJ/7uHXNYljOlpWJYj+ZYNlLHLxSunIiMGjJpwLu8yTwfMkY
	cJuw==
X-Gm-Message-State: ALoCoQkJGcxb3s+AEESils5h6xQ5xEKyrxPC11ulh/lqZ8lyt5CNkvUKDsnbs1OaN9OvpgSoi5Fb
X-Received: by 10.14.98.66 with SMTP id u42mr21262721eef.18.1393193817171;
	Sun, 23 Feb 2014 14:16:57 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.55
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:56 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:30 +0000
Message-Id: <1393193792-20008-14-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 13/15] xen/arm: Don't give IOMMU devices to
	dom0 when iommu is disabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When iommu={disable,off,no,false} is given on the Xen command line, the IOMMU
framework won't mark IOMMU devices as devices that shouldn't be passed through
to DOM0, so skip them explicitly when building DOM0.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Patch added
---
 xen/arch/arm/device.c        |   15 +++++++++++++++
 xen/arch/arm/domain_build.c  |   10 ++++++++++
 xen/include/asm-arm/device.h |   10 ++++++++++
 3 files changed, 35 insertions(+)

diff --git a/xen/arch/arm/device.c b/xen/arch/arm/device.c
index f86b2e3..59e94c0 100644
--- a/xen/arch/arm/device.c
+++ b/xen/arch/arm/device.c
@@ -67,6 +67,21 @@ int __init device_init(struct dt_device_node *dev, enum device_type type,
     return -EBADF;
 }
 
+enum device_type device_get_type(const struct dt_device_node *dev)
+{
+    const struct device_desc *desc;
+
+    ASSERT(dev != NULL);
+
+    for ( desc = _sdevice; desc != _edevice; desc++ )
+    {
+        if ( device_is_compatible(desc, dev) )
+            return desc->type;
+    }
+
+    return DEVICE_UNKNOWN;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index ec304cf..9cbdd61 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -11,6 +11,7 @@
 #include <xen/device_tree.h>
 #include <xen/libfdt/libfdt.h>
 #include <xen/guest_access.h>
+#include <asm/device.h>
 #include <asm/setup.h>
 #include <asm/platform.h>
 #include <asm/psci.h>
@@ -823,6 +824,15 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
         return 0;
     }
 
+    /* Even if the IOMMU device is not used by Xen, it should not be
+     * passed through to DOM0
+     */
+    if ( device_get_type(node) == DEVICE_IOMMU )
+    {
+        DPRINT(" IOMMU, skip it\n");
+        return 0;
+    }
+
     /*
      * Some device doesn't need to be mapped in Xen:
      *  - Memory: the guest will see a different view of memory. It will
diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
index ed04344..60109cc 100644
--- a/xen/include/asm-arm/device.h
+++ b/xen/include/asm-arm/device.h
@@ -8,6 +8,8 @@ enum device_type
 {
     DEVICE_SERIAL,
     DEVICE_IOMMU,
+    /* Used to report a lookup failure */
+    DEVICE_UNKNOWN,
 };
 
 struct device_desc {
@@ -32,6 +34,14 @@ struct device_desc {
 int __init device_init(struct dt_device_node *dev, enum device_type type,
                        const void *data);
 
+/**
+ * device_get_type - Get the type of the device
+ * @dev: device to match
+ *
+ * Return the device type on success or DEVICE_UNKNOWN on failure
+ */
+enum device_type device_get_type(const struct dt_device_node *dev);
+
 #define DT_DEVICE_START(_name, _namestr, _type)                     \
 static const struct device_desc __dev_desc_##_name __used           \
 __attribute__((__section__(".dev.info"))) = {                       \
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
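
[Editorial aside on device_get_type() in the patch above: it walks a
descriptor table bounded by the linker-provided symbols _sdevice/_edevice,
returning the type of the first driver whose compatible string matches the
node. A self-contained sketch of the same lookup follows; a plain array
stands in for the linker section, and a single compatible string stands in
for device_is_compatible(). All names and table entries are illustrative.]

```c
#include <string.h>

/* Mirrors the enum extended by the patch, including the error value. */
enum device_type { DEVICE_SERIAL, DEVICE_IOMMU, DEVICE_UNKNOWN };

struct device_desc {
    const char *compatible;   /* DT "compatible" string the driver claims */
    enum device_type type;
};

/* Stand-in for the .dev.info linker section populated by DT_DEVICE_START. */
static const struct device_desc device_table[] = {
    { "arm,pl011",   DEVICE_SERIAL },
    { "arm,smmu-v1", DEVICE_IOMMU  },
};

/* Walk the table the way device_get_type() walks _sdevice.._edevice. */
enum device_type device_get_type(const char *compatible)
{
    const struct device_desc *desc = device_table;
    const struct device_desc *end =
        device_table + sizeof(device_table) / sizeof(device_table[0]);

    for ( ; desc != end; desc++ )
        if ( strcmp(desc->compatible, compatible) == 0 )
            return desc->type;

    return DEVICE_UNKNOWN;  /* no registered driver matches this node */
}
```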

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:17:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM7-00077e-MD; Sun, 23 Feb 2014 22:17:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM4-00072j-6a
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:17:00 +0000
Received: from [85.158.137.68:55375] by server-5.bemta-3.messagelabs.com id
	43/27-04712-B537A035; Sun, 23 Feb 2014 22:16:59 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393193818!2418437!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5867 invoked from network); 23 Feb 2014 22:16:58 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:58 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so2771408eei.40
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=VhH0h2OqJJMm5fZIfaVXUVPTANfSWRXesj78qe4Xhrc=;
	b=m5D1JWpLDOPnY7P/wr6uV53CQNIt0ZKhKofx4kI9kbDZWqPJrfsox+XRSBkVZhfytA
	kGyirB3VEw+NOlW1BuN7vwrz7kZvw9VxlSa2zzL9VfN9eiBrI9A3VtWTuIR2guAp1hMp
	9W4E5HqkLXfkOOQw440acXjmfnvNla+ypQxWMdg57ebsRw4Ec2vsvOiKuBI+ZBh+bX4h
	GumXZNzqIUuYMzTcgS+qBGEwWlmIjqx1SPy02NveOY4HkHNqqb/d/fN9z+tUrOB6LPk3
	bDS5iQ/q1JbBXRLs0+r8hbwjCJJatJ5BjB8ICL4VGlxlxk8IMFY44EC0XQ+WxD0/tO9y
	Fqjg==
X-Gm-Message-State: ALoCoQlivNLu/0LiSvuIO/OeFFTYC507bb8qiDZiJbNY1RTkV2X0YWQ7jLzQfXjJLjzexCzjwhM+
X-Received: by 10.14.205.3 with SMTP id i3mr20714289eeo.23.1393193818307;
	Sun, 23 Feb 2014 14:16:58 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.57
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:57 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:31 +0000
Message-Id: <1393193792-20008-15-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 14/15] xen/arm: Add the property
	"protected-devices" in the hypervisor node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

DOM0 uses the swiotlb to bounce DMA buffers. With IOMMU support in Xen,
protected devices should not use it.

Only Xen is able to know whether an IOMMU protects a device. The new property
"protected-devices" is a list of phandles of the devices protected by an IOMMU.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch *MUST NOT* be applied until we have agreed on a device tree
    binding with the device tree folks. DOM0 can run safely with the swiotlb
    on protected devices as long as LVM is not used for guest disks.

    Changes in v2:
        - Patch added
---
 xen/arch/arm/domain_build.c |   51 ++++++++++++++++++++++++++++++++++++++-----
 xen/arch/arm/kernel.h       |    3 +++
 2 files changed, 48 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9cbdd61..ca7dade 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -324,19 +324,22 @@ static int make_memory_node(const struct domain *d,
     return res;
 }
 
-static int make_hypervisor_node(struct domain *d,
-                                void *fdt, const struct dt_device_node *parent)
+static int make_hypervisor_node(struct domain *d, struct kernel_info *kinfo,
+                                const struct dt_device_node *parent)
 {
     const char compat[] =
         "xen,xen-"__stringify(XEN_VERSION)"."__stringify(XEN_SUBVERSION)"\0"
         "xen,xen";
     __be32 reg[4];
     gic_interrupt_t intr;
-    __be32 *cells;
+    __be32 *cells, *_cells;
     int res;
     int addrcells = dt_n_addr_cells(parent);
     int sizecells = dt_n_size_cells(parent);
     paddr_t gnttab_start, gnttab_size;
+    const struct dt_device_node *dev;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    void *fdt = kinfo->fdt;
 
     DPRINT("Create hypervisor node\n");
 
@@ -384,6 +387,39 @@ static int make_hypervisor_node(struct domain *d,
     if ( res )
         return res;
 
+    if ( kinfo->num_dev_protected )
+    {
+        /* Don't need to take dtdevs_lock here */
+        cells = xmalloc_array(__be32, kinfo->num_dev_protected *
+                              dt_size_to_cells(sizeof(dt_phandle)));
+        if ( !cells )
+            return -FDT_ERR_XEN(ENOMEM);
+
+        _cells = cells;
+
+        DPRINT("  List of protected devices\n");
+        list_for_each_entry( dev, &hd->dt_devices, next_assigned )
+        {
+            DPRINT("    - %s\n", dt_node_full_name(dev));
+            if ( !dev->phandle )
+            {
+                printk(XENLOG_ERR "Unable to handle protected device (%s) "
+                       "with no phandle", dt_node_full_name(dev));
+                xfree(cells);
+                return -FDT_ERR_XEN(EINVAL);
+            }
+            dt_set_cell(&_cells, dt_size_to_cells(sizeof(dt_phandle)),
+                        dev->phandle);
+        }
+
+        res = fdt_property(fdt, "protected-devices", cells,
+                           sizeof (dt_phandle) * kinfo->num_dev_protected);
+
+        xfree(cells);
+        if ( res )
+            return res;
+    }
+
     res = fdt_end_node(fdt);
 
     return res;
@@ -670,7 +706,8 @@ static int make_timer_node(const struct domain *d, void *fdt,
 }
 
 /* Map the device in the domain */
-static int map_device(struct domain *d, struct dt_device_node *dev)
+static int map_device(struct domain *d, struct kernel_info *kinfo,
+                      struct dt_device_node *dev)
 {
     unsigned int nirq;
     unsigned int naddr;
@@ -695,6 +732,7 @@ static int map_device(struct domain *d, struct dt_device_node *dev)
                    dt_node_full_name(dev));
             return res;
         }
+        kinfo->num_dev_protected++;
     }
 
     /* Map IRQs */
@@ -844,7 +882,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
     if ( !dt_device_type_is_equal(node, "memory") &&
          dt_device_is_available(node) )
     {
-        res = map_device(d, node);
+        res = map_device(d, kinfo, node);
 
         if ( res )
             return res;
@@ -875,7 +913,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
 
     if ( node == dt_host )
     {
-        res = make_hypervisor_node(d, kinfo->fdt, node);
+        res = make_hypervisor_node(d, kinfo, node);
         if ( res )
             return res;
 
@@ -1028,6 +1066,7 @@ int construct_dom0(struct domain *d)
 
     d->max_pages = ~0U;
 
+    kinfo.num_dev_protected = 0;
     kinfo.unassigned_mem = dom0_mem;
 
     allocate_memory(d, &kinfo);
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index b48c2c9..3af5c50 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -18,6 +18,9 @@ struct kernel_info {
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct dt_mem_info mem;
 
+    /* Number of devices protected by an IOMMU */
+    unsigned int num_dev_protected;
+
     paddr_t dtb_paddr;
     paddr_t entry;
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:17:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhM7-00077e-MD; Sun, 23 Feb 2014 22:17:03 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM4-00072j-6a
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:17:00 +0000
Received: from [85.158.137.68:55375] by server-5.bemta-3.messagelabs.com id
	43/27-04712-B537A035; Sun, 23 Feb 2014 22:16:59 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393193818!2418437!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5867 invoked from network); 23 Feb 2014 22:16:58 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:16:58 -0000
Received: by mail-ee0-f53.google.com with SMTP id t10so2771408eei.40
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=VhH0h2OqJJMm5fZIfaVXUVPTANfSWRXesj78qe4Xhrc=;
	b=m5D1JWpLDOPnY7P/wr6uV53CQNIt0ZKhKofx4kI9kbDZWqPJrfsox+XRSBkVZhfytA
	kGyirB3VEw+NOlW1BuN7vwrz7kZvw9VxlSa2zzL9VfN9eiBrI9A3VtWTuIR2guAp1hMp
	9W4E5HqkLXfkOOQw440acXjmfnvNla+ypQxWMdg57ebsRw4Ec2vsvOiKuBI+ZBh+bX4h
	GumXZNzqIUuYMzTcgS+qBGEwWlmIjqx1SPy02NveOY4HkHNqqb/d/fN9z+tUrOB6LPk3
	bDS5iQ/q1JbBXRLs0+r8hbwjCJJatJ5BjB8ICL4VGlxlxk8IMFY44EC0XQ+WxD0/tO9y
	Fqjg==
X-Gm-Message-State: ALoCoQlivNLu/0LiSvuIO/OeFFTYC507bb8qiDZiJbNY1RTkV2X0YWQ7jLzQfXjJLjzexCzjwhM+
X-Received: by 10.14.205.3 with SMTP id i3mr20714289eeo.23.1393193818307;
	Sun, 23 Feb 2014 14:16:58 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.57
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:57 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:31 +0000
Message-Id: <1393193792-20008-15-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 14/15] xen/arm: Add the property
	"protected-devices" in the hypervisor node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

DOM0 uses the swiotlb to bounce DMA buffers. With IOMMU support in Xen,
protected devices should not use it.

Only Xen is able to know whether a device is protected by an IOMMU. The new
property "protected-devices" is a list of phandles of the devices protected
by an IOMMU.

Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    This patch *MUST NOT* be applied until we have agreed on a device tree
    binding with the device tree folks. DOM0 can run safely with the swiotlb
    on protected devices as long as LVM is not used for guest disks.

    Changes in v2:
        - Patch added
---
 xen/arch/arm/domain_build.c |   51 ++++++++++++++++++++++++++++++++++++++-----
 xen/arch/arm/kernel.h       |    3 +++
 2 files changed, 48 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 9cbdd61..ca7dade 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -324,19 +324,22 @@ static int make_memory_node(const struct domain *d,
     return res;
 }
 
-static int make_hypervisor_node(struct domain *d,
-                                void *fdt, const struct dt_device_node *parent)
+static int make_hypervisor_node(struct domain *d, struct kernel_info *kinfo,
+                                const struct dt_device_node *parent)
 {
     const char compat[] =
         "xen,xen-"__stringify(XEN_VERSION)"."__stringify(XEN_SUBVERSION)"\0"
         "xen,xen";
     __be32 reg[4];
     gic_interrupt_t intr;
-    __be32 *cells;
+    __be32 *cells, *_cells;
     int res;
     int addrcells = dt_n_addr_cells(parent);
     int sizecells = dt_n_size_cells(parent);
     paddr_t gnttab_start, gnttab_size;
+    const struct dt_device_node *dev;
+    struct hvm_iommu *hd = domain_hvm_iommu(d);
+    void *fdt = kinfo->fdt;
 
     DPRINT("Create hypervisor node\n");
 
@@ -384,6 +387,39 @@ static int make_hypervisor_node(struct domain *d,
     if ( res )
         return res;
 
+    if ( kinfo->num_dev_protected )
+    {
+        /* Don't need to take dtdevs_lock here */
+        cells = xmalloc_array(__be32, kinfo->num_dev_protected *
+                              dt_size_to_cells(sizeof(dt_phandle)));
+        if ( !cells )
+            return -FDT_ERR_XEN(ENOMEM);
+
+        _cells = cells;
+
+        DPRINT("  List of protected devices\n");
+        list_for_each_entry( dev, &hd->dt_devices, next_assigned )
+        {
+            DPRINT("    - %s\n", dt_node_full_name(dev));
+            if ( !dev->phandle )
+            {
+                printk(XENLOG_ERR "Unable to handle protected device (%s) "
+                       "with no phandle\n", dt_node_full_name(dev));
+                xfree(cells);
+                return -FDT_ERR_XEN(EINVAL);
+            }
+            dt_set_cell(&_cells, dt_size_to_cells(sizeof(dt_phandle)),
+                        dev->phandle);
+        }
+
+        res = fdt_property(fdt, "protected-devices", cells,
+                           sizeof(dt_phandle) * kinfo->num_dev_protected);
+
+        xfree(cells);
+        if ( res )
+            return res;
+    }
+
     res = fdt_end_node(fdt);
 
     return res;
@@ -670,7 +706,8 @@ static int make_timer_node(const struct domain *d, void *fdt,
 }
 
 /* Map the device in the domain */
-static int map_device(struct domain *d, struct dt_device_node *dev)
+static int map_device(struct domain *d, struct kernel_info *kinfo,
+                      struct dt_device_node *dev)
 {
     unsigned int nirq;
     unsigned int naddr;
@@ -695,6 +732,7 @@ static int map_device(struct domain *d, struct dt_device_node *dev)
                    dt_node_full_name(dev));
             return res;
         }
+        kinfo->num_dev_protected++;
     }
 
     /* Map IRQs */
@@ -844,7 +882,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
     if ( !dt_device_type_is_equal(node, "memory") &&
          dt_device_is_available(node) )
     {
-        res = map_device(d, node);
+        res = map_device(d, kinfo, node);
 
         if ( res )
             return res;
@@ -875,7 +913,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
 
     if ( node == dt_host )
     {
-        res = make_hypervisor_node(d, kinfo->fdt, node);
+        res = make_hypervisor_node(d, kinfo, node);
         if ( res )
             return res;
 
@@ -1028,6 +1066,7 @@ int construct_dom0(struct domain *d)
 
     d->max_pages = ~0U;
 
+    kinfo.num_dev_protected = 0;
     kinfo.unassigned_mem = dom0_mem;
 
     allocate_memory(d, &kinfo);
diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
index b48c2c9..3af5c50 100644
--- a/xen/arch/arm/kernel.h
+++ b/xen/arch/arm/kernel.h
@@ -18,6 +18,9 @@ struct kernel_info {
     paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
     struct dt_mem_info mem;
 
+    /* Number of devices protected by an IOMMU */
+    unsigned int num_dev_protected;
+
     paddr_t dtb_paddr;
     paddr_t entry;
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Sun Feb 23 22:17:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhMB-0007CH-FA; Sun, 23 Feb 2014 22:17:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM7-00076M-TU
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:17:04 +0000
Received: from [85.158.139.211:35161] by server-15.bemta-5.messagelabs.com id
	88/89-24395-E537A035; Sun, 23 Feb 2014 22:17:02 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393193819!1773911!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 864 invoked from network); 23 Feb 2014 22:17:00 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:17:00 -0000
Received: by mail-ee0-f44.google.com with SMTP id d49so379583eek.31
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=d8lJ57f8Tf86+tca/aBCJQOVJUiHUNg/PIJRjJu+uS8=;
	b=evVLtpY0XNp/EveXV421saV8yNKFYOt/1CyIB2WxJVIzSNsHB3s7R4b48WlrSfFFh+
	oeaIcvaPOUdWx1ogSnnzkz4LOZeaXvGZ2xFOPzIPXKObJI2dVJoiEUlGz+/4KCcHJu0l
	Pb/6fA4/DdEeJZLxQ3VXbrOp0+G0YK3M8maeN7Vxvee4+klUaG1NewdCyTGXJmZusaXr
	+LzVMnOykfoX0SBSe1eTGjXHNRfhy0Kr38s9nOy6KBtxKx0CUivO0AZmuG9R4yUGVjhf
	c0n4f79CQERyBisYRulHyxB4L2tfNdzy4V6vKJNGFpW0GHui1epxk5mDPBfmb2deZOx8
	IdDg==
X-Gm-Message-State: ALoCoQnPb73FzV3dRsVlMAUnOYSw1cUGEXWAAyyp0JT+iZWvbrSjbWT4BcKvDxSQrH189NM83mqi
X-Received: by 10.14.176.66 with SMTP id a42mr20730646eem.101.1393193819692;
	Sun, 23 Feb 2014 14:16:59 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:58 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:32 +0000
Message-Id: <1393193792-20008-16-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 15/15] drivers/passthrough: arm: Add support
	for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the ARM architected SMMU. It's based on the
Linux driver (drivers/iommu/arm-smmu.c) at commit 89ac23cd.

The major differences with the Linux driver are:
    - Fault by default when the SMMU is enabled to translate an
    address (Linux bypasses the SMMU)
    - Use of the P2M page tables instead of creating new ones
    - Dropped stage-1 support
    - Dropped chained-SMMU support for now
    - Reworked device assignment and the related structures

Xen programs each IOMMU by:
    - Using stage-2 translation mode
    - Sharing the page tables with the processor
    - Injecting a fault if the device makes an invalid translation

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Update commit message
        - Update some comments in the code
        - Add new callbacks to assign/reassign DT device
        - Rework init_dom0 and domain_teardown. The
        assignment/deassignment is now done in the generic code
        - Set protected field in dt_device_node when the device is under
        an IOMMU
        - Replace SZ_64K and SZ_4K with the global PAGE_SIZE_{64,4}K in
        xen/iommu.h (SZ_64K was not defined)
---
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/smmu.c   | 1736 ++++++++++++++++++++++++++++++++++
 xen/include/xen/iommu.h              |    3 +
 3 files changed, 1740 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu.c

diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index 0484b79..f4cd26e 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1 +1,2 @@
 obj-y += iommu.o
+obj-y += smmu.o
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
new file mode 100644
index 0000000..25f9d77
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -0,0 +1,1736 @@
+/*
+ * IOMMU API for ARM architected SMMU implementations.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Based on Linux drivers/iommu/arm-smmu.c (commit 89a23cd)
+ * Copyright (C) 2013 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * Xen modification:
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (C) 2014 Linaro Limited.
+ *
+ * This driver currently supports:
+ *  - SMMUv1 and v2 implementations (v2 SMMUs have not been tested)
+ *  - Stream-matching and stream-indexing
+ *  - v7/v8 long-descriptor format
+ *  - Non-secure access to the SMMU
+ *  - 4k pages, p2m shared with the processor
+ *  - Up to 40-bit addressing
+ *  - Context fault reporting
+ */
+
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+
+/* Driver options */
+#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
+
+/* Maximum number of stream IDs assigned to a single device */
+#define MAX_MASTER_STREAMIDS    MAX_PHANDLE_ARGS
+
+/* Maximum stream ID */
+#define SMMU_MAX_STREAMIDS      (PAGE_SIZE_64K - 1)
+
+/* Maximum number of context banks per SMMU */
+#define SMMU_MAX_CBS        128
+
+/* Maximum number of mapping groups per SMMU */
+#define SMMU_MAX_SMRS       128
+
+/* SMMU global address space */
+#define SMMU_GR0(smmu)      ((smmu)->base)
+#define SMMU_GR1(smmu)      ((smmu)->base + (smmu)->pagesize)
+
+/*
+ * SMMU global address space with conditional offset to access secure aliases of
+ * non-secure registers (e.g. nsCR0: 0x400, nsGFSR: 0x448, nsGFSYNR0: 0x450)
+ */
+#define SMMU_GR0_NS(smmu)                                   \
+    ((smmu)->base +                                         \
+     ((smmu->options & SMMU_OPT_SECURE_CONFIG_ACCESS)    \
+        ? 0x400 : 0))
+
+/* Page table bits */
+#define SMMU_PTE_PAGE           (((pteval_t)3) << 0)
+#define SMMU_PTE_CONT           (((pteval_t)1) << 52)
+#define SMMU_PTE_AF             (((pteval_t)1) << 10)
+#define SMMU_PTE_SH_NS          (((pteval_t)0) << 8)
+#define SMMU_PTE_SH_OS          (((pteval_t)2) << 8)
+#define SMMU_PTE_SH_IS          (((pteval_t)3) << 8)
+
+#if PAGE_SIZE == PAGE_SIZE_4K
+#define SMMU_PTE_CONT_ENTRIES   16
+#elif PAGE_SIZE == PAGE_SIZE_64K
+#define SMMU_PTE_CONT_ENTRIES   32
+#else
+#define SMMU_PTE_CONT_ENTRIES   1
+#endif
+
+#define SMMU_PTE_CONT_SIZE      (PAGE_SIZE * SMMU_PTE_CONT_ENTRIES)
+#define SMMU_PTE_CONT_MASK      (~(SMMU_PTE_CONT_SIZE - 1))
+#define SMMU_PTE_HWTABLE_SIZE   (PTRS_PER_PTE * sizeof(pte_t))
+
+/* Stage-1 PTE */
+#define SMMU_PTE_AP_UNPRIV      (((pteval_t)1) << 6)
+#define SMMU_PTE_AP_RDONLY      (((pteval_t)2) << 6)
+#define SMMU_PTE_ATTRINDX_SHIFT 2
+#define SMMU_PTE_nG             (((pteval_t)1) << 11)
+
+/* Stage-2 PTE */
+#define SMMU_PTE_HAP_FAULT      (((pteval_t)0) << 6)
+#define SMMU_PTE_HAP_READ       (((pteval_t)1) << 6)
+#define SMMU_PTE_HAP_WRITE      (((pteval_t)2) << 6)
+#define SMMU_PTE_MEMATTR_OIWB   (((pteval_t)0xf) << 2)
+#define SMMU_PTE_MEMATTR_NC     (((pteval_t)0x5) << 2)
+#define SMMU_PTE_MEMATTR_DEV    (((pteval_t)0x1) << 2)
+
+/* Configuration registers */
+#define SMMU_GR0_sCR0           0x0
+#define SMMU_sCR0_CLIENTPD      (1 << 0)
+#define SMMU_sCR0_GFRE          (1 << 1)
+#define SMMU_sCR0_GFIE          (1 << 2)
+#define SMMU_sCR0_GCFGFRE       (1 << 4)
+#define SMMU_sCR0_GCFGFIE       (1 << 5)
+#define SMMU_sCR0_USFCFG        (1 << 10)
+#define SMMU_sCR0_VMIDPNE       (1 << 11)
+#define SMMU_sCR0_PTM           (1 << 12)
+#define SMMU_sCR0_FB            (1 << 13)
+#define SMMU_sCR0_BSU_SHIFT     14
+#define SMMU_sCR0_BSU_MASK      0x3
+
+/* Identification registers */
+#define SMMU_GR0_ID0            0x20
+#define SMMU_GR0_ID1            0x24
+#define SMMU_GR0_ID2            0x28
+#define SMMU_GR0_ID3            0x2c
+#define SMMU_GR0_ID4            0x30
+#define SMMU_GR0_ID5            0x34
+#define SMMU_GR0_ID6            0x38
+#define SMMU_GR0_ID7            0x3c
+#define SMMU_GR0_sGFSR          0x48
+#define SMMU_GR0_sGFSYNR0       0x50
+#define SMMU_GR0_sGFSYNR1       0x54
+#define SMMU_GR0_sGFSYNR2       0x58
+#define SMMU_GR0_PIDR0          0xfe0
+#define SMMU_GR0_PIDR1          0xfe4
+#define SMMU_GR0_PIDR2          0xfe8
+
+#define SMMU_ID0_S1TS           (1 << 30)
+#define SMMU_ID0_S2TS           (1 << 29)
+#define SMMU_ID0_NTS            (1 << 28)
+#define SMMU_ID0_SMS            (1 << 27)
+#define SMMU_ID0_PTFS_SHIFT     24
+#define SMMU_ID0_PTFS_MASK      0x2
+#define SMMU_ID0_PTFS_V8_ONLY   0x2
+#define SMMU_ID0_CTTW           (1 << 14)
+#define SMMU_ID0_NUMIRPT_SHIFT  16
+#define SMMU_ID0_NUMIRPT_MASK   0xff
+#define SMMU_ID0_NUMSMRG_SHIFT  0
+#define SMMU_ID0_NUMSMRG_MASK   0xff
+
+#define SMMU_ID1_PAGESIZE            (1 << 31)
+#define SMMU_ID1_NUMPAGENDXB_SHIFT   28
+#define SMMU_ID1_NUMPAGENDXB_MASK    7
+#define SMMU_ID1_NUMS2CB_SHIFT       16
+#define SMMU_ID1_NUMS2CB_MASK        0xff
+#define SMMU_ID1_NUMCB_SHIFT         0
+#define SMMU_ID1_NUMCB_MASK          0xff
+
+#define SMMU_ID2_OAS_SHIFT           4
+#define SMMU_ID2_OAS_MASK            0xf
+#define SMMU_ID2_IAS_SHIFT           0
+#define SMMU_ID2_IAS_MASK            0xf
+#define SMMU_ID2_UBS_SHIFT           8
+#define SMMU_ID2_UBS_MASK            0xf
+#define SMMU_ID2_PTFS_4K             (1 << 12)
+#define SMMU_ID2_PTFS_16K            (1 << 13)
+#define SMMU_ID2_PTFS_64K            (1 << 14)
+
+#define SMMU_PIDR2_ARCH_SHIFT        4
+#define SMMU_PIDR2_ARCH_MASK         0xf
+
+/* Global TLB invalidation */
+#define SMMU_GR0_STLBIALL           0x60
+#define SMMU_GR0_TLBIVMID           0x64
+#define SMMU_GR0_TLBIALLNSNH        0x68
+#define SMMU_GR0_TLBIALLH           0x6c
+#define SMMU_GR0_sTLBGSYNC          0x70
+#define SMMU_GR0_sTLBGSTATUS        0x74
+#define SMMU_sTLBGSTATUS_GSACTIVE   (1 << 0)
+#define SMMU_TLB_LOOP_TIMEOUT       1000000 /* 1s! */
+
+/* Stream mapping registers */
+#define SMMU_GR0_SMR(n)             (0x800 + ((n) << 2))
+#define SMMU_SMR_VALID              (1 << 31)
+#define SMMU_SMR_MASK_SHIFT         16
+#define SMMU_SMR_MASK_MASK          0x7fff
+#define SMMU_SMR_ID_SHIFT           0
+#define SMMU_SMR_ID_MASK            0x7fff
+
+#define SMMU_GR0_S2CR(n)        (0xc00 + ((n) << 2))
+#define SMMU_S2CR_CBNDX_SHIFT   0
+#define SMMU_S2CR_CBNDX_MASK    0xff
+#define SMMU_S2CR_TYPE_SHIFT    16
+#define SMMU_S2CR_TYPE_MASK     0x3
+#define SMMU_S2CR_TYPE_TRANS    (0 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_BYPASS   (1 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_FAULT    (2 << SMMU_S2CR_TYPE_SHIFT)
+
+/* Context bank attribute registers */
+#define SMMU_GR1_CBAR(n)                    (0x0 + ((n) << 2))
+#define SMMU_CBAR_VMID_SHIFT                0
+#define SMMU_CBAR_VMID_MASK                 0xff
+#define SMMU_CBAR_S1_MEMATTR_SHIFT          12
+#define SMMU_CBAR_S1_MEMATTR_MASK           0xf
+#define SMMU_CBAR_S1_MEMATTR_WB             0xf
+#define SMMU_CBAR_TYPE_SHIFT                16
+#define SMMU_CBAR_TYPE_MASK                 0x3
+#define SMMU_CBAR_TYPE_S2_TRANS             (0 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_BYPASS   (1 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_FAULT    (2 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_TRANS    (3 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_IRPTNDX_SHIFT             24
+#define SMMU_CBAR_IRPTNDX_MASK              0xff
+
+#define SMMU_GR1_CBA2R(n)                   (0x800 + ((n) << 2))
+#define SMMU_CBA2R_RW64_32BIT               (0 << 0)
+#define SMMU_CBA2R_RW64_64BIT               (1 << 0)
+
+/* Translation context bank */
+#define SMMU_CB_BASE(smmu)                  ((smmu)->base + ((smmu)->size >> 1))
+#define SMMU_CB(smmu, n)                    ((n) * (smmu)->pagesize)
+
+#define SMMU_CB_SCTLR                       0x0
+#define SMMU_CB_RESUME                      0x8
+#define SMMU_CB_TCR2                        0x10
+#define SMMU_CB_TTBR0_LO                    0x20
+#define SMMU_CB_TTBR0_HI                    0x24
+#define SMMU_CB_TCR                         0x30
+#define SMMU_CB_S1_MAIR0                    0x38
+#define SMMU_CB_FSR                         0x58
+#define SMMU_CB_FAR_LO                      0x60
+#define SMMU_CB_FAR_HI                      0x64
+#define SMMU_CB_FSYNR0                      0x68
+#define SMMU_CB_S1_TLBIASID                 0x610
+
+#define SMMU_SCTLR_S1_ASIDPNE               (1 << 12)
+#define SMMU_SCTLR_CFCFG                    (1 << 7)
+#define SMMU_SCTLR_CFIE                     (1 << 6)
+#define SMMU_SCTLR_CFRE                     (1 << 5)
+#define SMMU_SCTLR_E                        (1 << 4)
+#define SMMU_SCTLR_AFE                      (1 << 2)
+#define SMMU_SCTLR_TRE                      (1 << 1)
+#define SMMU_SCTLR_M                        (1 << 0)
+#define SMMU_SCTLR_EAE_SBOP                 (SMMU_SCTLR_AFE | SMMU_SCTLR_TRE)
+
+#define SMMU_RESUME_RETRY                   (0 << 0)
+#define SMMU_RESUME_TERMINATE               (1 << 0)
+
+#define SMMU_TCR_EAE                        (1 << 31)
+
+#define SMMU_TCR_PASIZE_SHIFT               16
+#define SMMU_TCR_PASIZE_MASK                0x7
+
+#define SMMU_TCR_TG0_4K                     (0 << 14)
+#define SMMU_TCR_TG0_64K                    (1 << 14)
+
+#define SMMU_TCR_SH0_SHIFT                  12
+#define SMMU_TCR_SH0_MASK                   0x3
+#define SMMU_TCR_SH_NS                      0
+#define SMMU_TCR_SH_OS                      2
+#define SMMU_TCR_SH_IS                      3
+
+#define SMMU_TCR_ORGN0_SHIFT                10
+#define SMMU_TCR_IRGN0_SHIFT                8
+#define SMMU_TCR_RGN_MASK                   0x3
+#define SMMU_TCR_RGN_NC                     0
+#define SMMU_TCR_RGN_WBWA                   1
+#define SMMU_TCR_RGN_WT                     2
+#define SMMU_TCR_RGN_WB                     3
+
+#define SMMU_TCR_SL0_SHIFT                  6
+#define SMMU_TCR_SL0_MASK                   0x3
+#define SMMU_TCR_SL0_LVL_2                  0
+#define SMMU_TCR_SL0_LVL_1                  1
+
+#define SMMU_TCR_T1SZ_SHIFT                 16
+#define SMMU_TCR_T0SZ_SHIFT                 0
+#define SMMU_TCR_SZ_MASK                    0xf
+
+#define SMMU_TCR2_SEP_SHIFT                 15
+#define SMMU_TCR2_SEP_MASK                  0x7
+
+#define SMMU_TCR2_PASIZE_SHIFT              0
+#define SMMU_TCR2_PASIZE_MASK               0x7
+
+/* Common definitions for PASize and SEP fields */
+#define SMMU_TCR2_ADDR_32                   0
+#define SMMU_TCR2_ADDR_36                   1
+#define SMMU_TCR2_ADDR_40                   2
+#define SMMU_TCR2_ADDR_42                   3
+#define SMMU_TCR2_ADDR_44                   4
+#define SMMU_TCR2_ADDR_48                   5
+
+#define SMMU_TTBRn_HI_ASID_SHIFT            16
+
+#define SMMU_MAIR_ATTR_SHIFT(n)             ((n) << 3)
+#define SMMU_MAIR_ATTR_MASK                 0xff
+#define SMMU_MAIR_ATTR_DEVICE               0x04
+#define SMMU_MAIR_ATTR_NC                   0x44
+#define SMMU_MAIR_ATTR_WBRWA                0xff
+#define SMMU_MAIR_ATTR_IDX_NC               0
+#define SMMU_MAIR_ATTR_IDX_CACHE            1
+#define SMMU_MAIR_ATTR_IDX_DEV              2
+
+#define SMMU_FSR_MULTI                      (1 << 31)
+#define SMMU_FSR_SS                         (1 << 30)
+#define SMMU_FSR_UUT                        (1 << 8)
+#define SMMU_FSR_ASF                        (1 << 7)
+#define SMMU_FSR_TLBLKF                     (1 << 6)
+#define SMMU_FSR_TLBMCF                     (1 << 5)
+#define SMMU_FSR_EF                         (1 << 4)
+#define SMMU_FSR_PF                         (1 << 3)
+#define SMMU_FSR_AFF                        (1 << 2)
+#define SMMU_FSR_TF                         (1 << 1)
+
+#define SMMU_FSR_IGN                        (SMMU_FSR_AFF | SMMU_FSR_ASF |    \
+                                             SMMU_FSR_TLBMCF | SMMU_FSR_TLBLKF)
+#define SMMU_FSR_FAULT                      (SMMU_FSR_MULTI | SMMU_FSR_SS |   \
+                                             SMMU_FSR_UUT | SMMU_FSR_EF |     \
+                                             SMMU_FSR_PF | SMMU_FSR_TF |      \
+                                             SMMU_FSR_IGN)
+
+#define SMMU_FSYNR0_WNR                     (1 << 4)
+
+#define smmu_print(dev, lvl, fmt, ...)                                        \
+    printk(lvl "smmu: %s: " fmt, dt_node_full_name(dev->node), ## __VA_ARGS__)
+
+#define smmu_err(dev, fmt, ...) smmu_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+
+#define smmu_dbg(dev, fmt, ...)                                             \
+    smmu_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
+
+#define smmu_info(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
+
+#define smmu_warn(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
+
+struct arm_smmu_device {
+    const struct dt_device_node *node;
+
+    void __iomem                *base;
+    unsigned long               size;
+    unsigned long               pagesize;
+
+#define SMMU_FEAT_COHERENT_WALK (1 << 0)
+#define SMMU_FEAT_STREAM_MATCH  (1 << 1)
+#define SMMU_FEAT_TRANS_S1      (1 << 2)
+#define SMMU_FEAT_TRANS_S2      (1 << 3)
+#define SMMU_FEAT_TRANS_NESTED  (1 << 4)
+    u32                         features;
+    u32                         options;
+    int                         version;
+
+    u32                         num_context_banks;
+    u32                         num_s2_context_banks;
+    DECLARE_BITMAP(context_map, SMMU_MAX_CBS);
+    atomic_t                    irptndx;
+
+    u32                         num_mapping_groups;
+    DECLARE_BITMAP(smr_map, SMMU_MAX_SMRS);
+
+    unsigned long               input_size;
+    unsigned long               s1_output_size;
+    unsigned long               s2_output_size;
+
+    u32                         num_global_irqs;
+    u32                         num_context_irqs;
+    struct dt_irq               *irqs;
+
+    u32                         smr_mask_mask;
+    u32                         smr_id_mask;
+
+    unsigned long               *sids;
+
+    struct list_head            list;
+    struct rb_root              masters;
+};
+
+struct arm_smmu_smr {
+    u8                          idx;
+    u16                         mask;
+    u16                         id;
+};
+
+#define INVALID_IRPTNDX         0xff
+
+#define SMMU_CB_ASID(cfg)       ((cfg)->cbndx)
+#define SMMU_CB_VMID(cfg)       ((cfg)->cbndx + 1)
+
+struct arm_smmu_domain_cfg {
+    struct arm_smmu_device  *smmu;
+    u8                      cbndx;
+    u8                      irptndx;
+    u32                     cbar;
+    /* Domain associated with this device */
+    struct domain           *domain;
+    /* List of masters which use this structure */
+    struct list_head        masters;
+
+    /* Used to link domain contexts belonging to the same domain */
+    struct list_head        list;
+};
+
+struct arm_smmu_master {
+    const struct dt_device_node *dt_node;
+
+    /*
+     * The following is specific to the master's position in the
+     * SMMU chain.
+     */
+    struct rb_node              node;
+    u32                         num_streamids;
+    u16                         streamids[MAX_MASTER_STREAMIDS];
+    int                         num_s2crs;
+
+    struct arm_smmu_smr         *smrs;
+    struct arm_smmu_domain_cfg  *cfg;
+
+    /* Used to link masters within the same domain context */
+    struct list_head            list;
+};
+
+static LIST_HEAD(arm_smmu_devices);
+
+struct arm_smmu_domain {
+    spinlock_t lock;
+    struct list_head contexts;
+};
+
+struct arm_smmu_option_prop {
+    u32         opt;
+    const char  *prop;
+};
+
+static const struct arm_smmu_option_prop arm_smmu_options [] __initconst =
+{
+    { SMMU_OPT_SECURE_CONFIG_ACCESS, "calxeda,smmu-secure-config-access" },
+    { 0, NULL},
+};
+
+static void __init check_driver_options(struct arm_smmu_device *smmu)
+{
+    int i = 0;
+
+    do {
+        if ( dt_property_read_bool(smmu->node, arm_smmu_options[i].prop) )
+        {
+            smmu->options |= arm_smmu_options[i].opt;
+            smmu_dbg(smmu, "option %s\n", arm_smmu_options[i].prop);
+        }
+    } while ( arm_smmu_options[++i].opt );
+}
+
+static void arm_smmu_context_fault(int irq, void *data,
+                                   struct cpu_user_regs *regs)
+{
+    u32 fsr, far, fsynr;
+    unsigned long iova;
+    struct arm_smmu_domain_cfg *cfg = data;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    fsr = readl_relaxed(cb_base + SMMU_CB_FSR);
+
+    if ( !(fsr & SMMU_FSR_FAULT) )
+        return;
+
+    if ( fsr & SMMU_FSR_IGN )
+        smmu_err(smmu, "Unexpected context fault (fsr 0x%x)\n", fsr);
+
+    fsynr = readl_relaxed(cb_base + SMMU_CB_FSYNR0);
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_LO);
+    iova = far;
+#ifdef CONFIG_ARM_64
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_HI);
+    iova |= ((unsigned long)far << 32);
+#endif
+
+    smmu_err(smmu,
+             "Unhandled context fault: iova=0x%08lx, fsynr=0x%x, cb=%d\n",
+             iova, fsynr, cfg->cbndx);
+
+    /* Clear the faulting FSR */
+    writel(fsr, cb_base + SMMU_CB_FSR);
+
+    /* Terminate any stalled transactions */
+    if ( fsr & SMMU_FSR_SS )
+        writel_relaxed(SMMU_RESUME_TERMINATE, cb_base + SMMU_CB_RESUME);
+}
+
+static void arm_smmu_global_fault(int irq, void *data,
+                                  struct cpu_user_regs *regs)
+{
+    u32 gfsr, gfsynr0, gfsynr1, gfsynr2;
+    struct arm_smmu_device *smmu = data;
+    void __iomem *gr0_base = SMMU_GR0_NS(smmu);
+
+    gfsr = readl_relaxed(gr0_base + SMMU_GR0_sGFSR);
+    gfsynr0 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR0);
+    gfsynr1 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR1);
+    gfsynr2 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR2);
+
+    if ( !gfsr )
+        return;
+
+    smmu_err(smmu, "Unexpected global fault, this could be serious\n");
+    smmu_err(smmu,
+             "\tGFSR 0x%08x, GFSYNR0 0x%08x, GFSYNR1 0x%08x, GFSYNR2 0x%08x\n",
+             gfsr, gfsynr0, gfsynr1, gfsynr2);
+    writel(gfsr, gr0_base + SMMU_GR0_sGFSR);
+}
+
+static struct arm_smmu_master *
+find_smmu_master(struct arm_smmu_device *smmu,
+                 const struct dt_device_node *dev_node)
+{
+    struct rb_node *node = smmu->masters.rb_node;
+
+    while ( node )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+
+        if ( dev_node < master->dt_node )
+            node = node->rb_left;
+        else if ( dev_node > master->dt_node )
+            node = node->rb_right;
+        else
+            return master;
+    }
+
+    return NULL;
+}
+
+static __init int insert_smmu_master(struct arm_smmu_device *smmu,
+                                     struct arm_smmu_master *master)
+{
+    struct rb_node **new, *parent;
+
+    new = &smmu->masters.rb_node;
+    parent = NULL;
+    while ( *new )
+    {
+        struct arm_smmu_master *this;
+
+        this = container_of(*new, struct arm_smmu_master, node);
+
+        parent = *new;
+        if ( master->dt_node < this->dt_node )
+            new = &((*new)->rb_left);
+        else if ( master->dt_node > this->dt_node )
+            new = &((*new)->rb_right);
+        else
+            return -EEXIST;
+    }
+
+    rb_link_node(&master->node, parent, new);
+    rb_insert_color(&master->node, &smmu->masters);
+    return 0;
+}
+
+static __init int register_smmu_master(struct arm_smmu_device *smmu,
+                                       struct dt_phandle_args *masterspec)
+{
+    int i, sid;
+    struct arm_smmu_master *master;
+    int rc = 0;
+
+    smmu_dbg(smmu, "Trying to add master %s\n", masterspec->np->name);
+
+    master = find_smmu_master(smmu, masterspec->np);
+    if ( master )
+    {
+        smmu_err(smmu,
+                 "rejecting multiple registrations for master device %s\n",
+                 masterspec->np->name);
+        return -EBUSY;
+    }
+
+    if ( masterspec->args_count > MAX_MASTER_STREAMIDS )
+    {
+        smmu_err(smmu,
+                 "reached maximum number (%d) of stream IDs for master device %s\n",
+                 MAX_MASTER_STREAMIDS, masterspec->np->name);
+        return -ENOSPC;
+    }
+
+    master = xzalloc(struct arm_smmu_master);
+    if ( !master )
+        return -ENOMEM;
+
+    INIT_LIST_HEAD(&master->list);
+    master->dt_node = masterspec->np;
+    master->num_streamids = masterspec->args_count;
+
+    dt_device_set_protected(masterspec->np);
+
+    for ( i = 0; i < master->num_streamids; ++i )
+    {
+        sid = masterspec->args[i];
+        if ( test_and_set_bit(sid, smmu->sids) )
+        {
+            smmu_err(smmu, "duplicate stream ID (%d)\n", sid);
+            xfree(master);
+            return -EEXIST;
+        }
+        master->streamids[i] = masterspec->args[i];
+    }
+
+    rc = insert_smmu_master(smmu, master);
+    /* Insertion should never fail */
+    ASSERT(rc == 0);
+
+    return 0;
+}
+
+static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
+{
+    int idx;
+
+    do
+    {
+        idx = find_next_zero_bit(map, end, start);
+        if ( idx == end )
+            return -ENOSPC;
+    } while ( test_and_set_bit(idx, map) );
+
+    return idx;
+}
+
+static void __arm_smmu_free_bitmap(unsigned long *map, int idx)
+{
+    clear_bit(idx, map);
+}
+
+static void arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
+{
+    int count = 0;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    writel_relaxed(0, gr0_base + SMMU_GR0_sTLBGSYNC);
+    while ( readl_relaxed(gr0_base + SMMU_GR0_sTLBGSTATUS) &
+            SMMU_sTLBGSTATUS_GSACTIVE )
+    {
+        cpu_relax();
+        if ( ++count == SMMU_TLB_LOOP_TIMEOUT )
+        {
+            smmu_err(smmu, "TLB sync timed out -- SMMU may be deadlocked\n");
+            return;
+        }
+        udelay(1);
+    }
+}
+
+static void arm_smmu_tlb_inv_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *base = SMMU_GR0(smmu);
+
+    writel_relaxed(SMMU_CB_VMID(cfg),
+                   base + SMMU_GR0_TLBIVMID);
+
+    arm_smmu_tlb_sync(smmu);
+}
+
+static void arm_smmu_iotlb_flush_all(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry(cfg, &smmu_domain->contexts, list)
+        arm_smmu_tlb_inv_context(cfg);
+    spin_unlock(&smmu_domain->lock);
+}
+
+static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                 unsigned int page_count)
+{
+    /* The ARM SMMU doesn't support flushing by page, flush the whole context */
+    arm_smmu_iotlb_flush_all(d);
+}
+
+/*
+ * Try to compute a single mask/id pair matching exactly the 2**order
+ * stream IDs starting at index 'start'. For instance, stream IDs
+ * {4, 5, 6, 7} only differ in their two lowest bits, so mask = 0x3 and
+ * id = 0x4 match all four of them. Returns 0 on success, 1 if no
+ * usable mask/id pair exists for this set of stream IDs.
+ */
+static int determine_smr_mask(struct arm_smmu_device *smmu,
+                              struct arm_smmu_master *master,
+                              struct arm_smmu_smr *smr, int start, int order)
+{
+    u16 i, zero_bits_mask, one_bits_mask, const_mask;
+    int nr;
+
+    nr = 1 << order;
+
+    if ( nr == 1 )
+    {
+        /* no mask, use streamid to match and be done with it */
+        smr->mask = 0;
+        smr->id = master->streamids[start];
+        return 0;
+    }
+
+    zero_bits_mask = 0;
+    one_bits_mask = 0xffff;
+    for ( i = start; i < start + nr; i++ )
+    {
+        zero_bits_mask |= master->streamids[i]; /* const 0 bits */
+        one_bits_mask &= master->streamids[i];  /* const 1 bits */
+    }
+    zero_bits_mask = ~zero_bits_mask;
+
+    /* bits having constant values (either 0 or 1) */
+    const_mask = zero_bits_mask | one_bits_mask;
+
+    i = hweight16(~const_mask);
+    if ( (1 << i) == nr )
+    {
+        smr->mask = ~const_mask;
+        smr->id = one_bits_mask;
+    }
+    else
+        /* no usable mask for this set of streamids */
+        return 1;
+
+    if ( ((smr->mask & smmu->smr_mask_mask) != smr->mask) ||
+         ((smr->id & smmu->smr_id_mask) != smr->id) )
+        /* insufficient number of mask/id bits */
+        return 1;
+
+    return 0;
+}
+
+static int determine_smr_mapping(struct arm_smmu_device *smmu,
+                                 struct arm_smmu_master *master,
+                                 struct arm_smmu_smr *smrs, int max_smrs)
+{
+    int nr_sid, nr, i, bit, start;
+
+    /*
+     * This function is called only once -- when a master is added
+     * to a domain. If master->num_s2crs != 0 then this master
+     * was already added to a domain.
+     */
+    BUG_ON(master->num_s2crs);
+
+    start = nr = 0;
+    nr_sid = master->num_streamids;
+    do
+    {
+        /*
+         * largest power-of-2 number of streamids for which to
+         * determine a usable mask/id pair for stream matching
+         */
+        bit = fls(nr_sid);
+        if ( !bit )
+            return 0;
+
+        /*
+         * iterate over power-of-2 numbers to determine
+         * largest possible mask/id pair for stream matching
+         * of next 2**i streamids
+         */
+        for ( i = bit - 1; i >= 0; i-- )
+        {
+            if ( !determine_smr_mask(smmu, master,
+                                     &smrs[master->num_s2crs],
+                                     start, i) )
+                break;
+        }
+
+        if ( i < 0 )
+            goto out;
+
+        nr = 1 << i;
+        nr_sid -= nr;
+        start += nr;
+        master->num_s2crs++;
+    } while ( master->num_s2crs < max_smrs );
+
+out:
+    if ( nr_sid )
+    {
+        /* not enough mapping groups available */
+        master->num_s2crs = 0;
+        return -ENOSPC;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
+                                          struct arm_smmu_master *master)
+{
+    int i, max_smrs, ret;
+    struct arm_smmu_smr *smrs;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    if ( !(smmu->features & SMMU_FEAT_STREAM_MATCH) )
+        return 0;
+
+    if ( master->smrs )
+        return -EEXIST;
+
+    max_smrs = min(smmu->num_mapping_groups, master->num_streamids);
+    smrs = xmalloc_array(struct arm_smmu_smr, max_smrs);
+    if ( !smrs )
+    {
+        smmu_err(smmu, "failed to allocate %d SMRs for master %s\n",
+                 max_smrs, dt_node_name(master->dt_node));
+        return -ENOMEM;
+    }
+
+    ret = determine_smr_mapping(smmu, master, smrs, max_smrs);
+    if ( ret )
+        goto err_free_smrs;
+
+    /* Allocate the SMRs on the root SMMU */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
+                                          smmu->num_mapping_groups);
+        if ( idx < 0 )
+        {
+            smmu_err(smmu, "failed to allocate free SMR\n");
+            goto err_free_bitmap;
+        }
+        smrs[i].idx = idx;
+    }
+
+    /* It worked! Now, poke the actual hardware */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 reg = SMMU_SMR_VALID | smrs[i].id << SMMU_SMR_ID_SHIFT |
+            smrs[i].mask << SMMU_SMR_MASK_SHIFT;
+        smmu_dbg(smmu, "SMR%d: 0x%x\n", smrs[i].idx, reg);
+        writel_relaxed(reg, gr0_base + SMMU_GR0_SMR(smrs[i].idx));
+    }
+
+    master->smrs = smrs;
+    return 0;
+
+err_free_bitmap:
+    while ( --i >= 0 )
+        __arm_smmu_free_bitmap(smmu->smr_map, smrs[i].idx);
+    master->num_s2crs = 0;
+err_free_smrs:
+    xfree(smrs);
+    return -ENOSPC;
+}
+
+/* Forward declaration */
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg);
+
+static int arm_smmu_domain_add_master(struct domain *d,
+                                      struct arm_smmu_domain_cfg *cfg,
+                                      struct arm_smmu_master *master)
+{
+    int i, ret;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    if ( master->cfg )
+        return -EBUSY;
+
+    ret = arm_smmu_master_configure_smrs(smmu, master);
+    if ( ret )
+        return ret;
+
+    /* Now we're at the root, time to point at our context bank */
+    if ( !master->num_s2crs )
+        master->num_s2crs = master->num_streamids;
+
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 idx, s2cr;
+
+        idx = smrs ? smrs[i].idx : master->streamids[i];
+        s2cr = (SMMU_S2CR_TYPE_TRANS << SMMU_S2CR_TYPE_SHIFT) |
+            (cfg->cbndx << SMMU_S2CR_CBNDX_SHIFT);
+        smmu_dbg(smmu, "S2CR%d: 0x%x\n", idx, s2cr);
+        writel_relaxed(s2cr, gr0_base + SMMU_GR0_S2CR(idx));
+    }
+
+    master->cfg = cfg;
+    list_add(&master->list, &cfg->masters);
+
+    return 0;
+}
+
+static void arm_smmu_domain_remove_master(struct arm_smmu_master *master)
+{
+    int i;
+    struct arm_smmu_domain_cfg *cfg = master->cfg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    /*
+     * We *must* clear the S2CR first, because freeing the SMR means
+     * that it can be reallocated immediately
+     */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u16 idx = smrs ? smrs[i].idx : master->streamids[i];
+
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT,
+                       gr0_base + SMMU_GR0_S2CR(idx));
+    }
+
+    /* Invalidate the SMRs before freeing back to the allocator */
+    if ( smrs )
+    {
+        for ( i = 0; i < master->num_s2crs; ++i )
+        {
+            u8 idx = smrs[i].idx;
+
+            writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(idx));
+            __arm_smmu_free_bitmap(smmu->smr_map, idx);
+        }
+    }
+    master->num_s2crs = 0;
+
+    master->smrs = NULL;
+    xfree(smrs);
+
+    master->cfg = NULL;
+    list_del(&master->list);
+    INIT_LIST_HEAD(&master->list);
+}
+
+static void arm_smmu_init_context_bank(struct arm_smmu_domain_cfg *cfg)
+{
+    u32 reg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base, *gr0_base, *gr1_base;
+    paddr_t p2maddr;
+
+    ASSERT(cfg->domain != NULL);
+    p2maddr = page_to_maddr(cfg->domain->arch.p2m.first_level);
+
+    gr0_base = SMMU_GR0(smmu);
+    gr1_base = SMMU_GR1(smmu);
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+
+    /* CBAR */
+    reg = cfg->cbar;
+    if ( smmu->version == 1 )
+        reg |= cfg->irptndx << SMMU_CBAR_IRPTNDX_SHIFT;
+
+    reg |= SMMU_CB_VMID(cfg) << SMMU_CBAR_VMID_SHIFT;
+    writel_relaxed(reg, gr1_base + SMMU_GR1_CBAR(cfg->cbndx));
+
+    if ( smmu->version > 1 )
+    {
+        /* CBA2R */
+#ifdef CONFIG_ARM_64
+        reg = SMMU_CBA2R_RW64_64BIT;
+#else
+        reg = SMMU_CBA2R_RW64_32BIT;
+#endif
+        writel_relaxed(reg, gr1_base + SMMU_GR1_CBA2R(cfg->cbndx));
+    }
+
+    /* TTBR0 */
+    reg = (p2maddr & ((1ULL << 32) - 1));
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_LO);
+    reg = (p2maddr >> 32);
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_HI);
+
+    /*
+     * TCR
+     * We use long descriptor, with inner-shareable WBWA tables in TTBR0.
+     */
+    if ( smmu->version > 1 )
+    {
+        /* 4K Page Table */
+        if ( PAGE_SIZE == PAGE_SIZE_4K )
+            reg = SMMU_TCR_TG0_4K;
+        else
+            reg = SMMU_TCR_TG0_64K;
+
+        switch ( smmu->s2_output_size )
+        {
+        case 32:
+            reg |= (SMMU_TCR2_ADDR_32 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 36:
+            reg |= (SMMU_TCR2_ADDR_36 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 40:
+            reg |= (SMMU_TCR2_ADDR_40 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 42:
+            reg |= (SMMU_TCR2_ADDR_42 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 44:
+            reg |= (SMMU_TCR2_ADDR_44 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 48:
+            reg |= (SMMU_TCR2_ADDR_48 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        }
+    }
+    else
+        reg = 0;
+
+    /* The attribute to walk the page table should be the same as VTCR_EL2 */
+    reg |= SMMU_TCR_EAE |
+        (SMMU_TCR_SH_NS << SMMU_TCR_SH0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_ORGN0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_IRGN0_SHIFT) |
+        (SMMU_TCR_SL0_LVL_1 << SMMU_TCR_SL0_SHIFT);
+    writel_relaxed(reg, cb_base + SMMU_CB_TCR);
+
+    /* SCTLR */
+    reg = SMMU_SCTLR_CFCFG |
+        SMMU_SCTLR_CFIE |
+        SMMU_SCTLR_CFRE |
+        SMMU_SCTLR_M |
+        SMMU_SCTLR_EAE_SBOP;
+
+    writel_relaxed(reg, cb_base + SMMU_CB_SCTLR);
+}
+
+static struct arm_smmu_domain_cfg *
+arm_smmu_alloc_domain_context(struct domain *d,
+                              struct arm_smmu_device *smmu)
+{
+    const struct dt_irq *irq;
+    int ret, start;
+    struct arm_smmu_domain_cfg *cfg;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+
+    cfg = xzalloc(struct arm_smmu_domain_cfg);
+    if ( !cfg )
+        return NULL;
+
+
+    cfg->cbar = SMMU_CBAR_TYPE_S2_TRANS;
+    start = 0;
+
+    ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
+                                  smmu->num_context_banks);
+    if ( ret < 0 )
+        goto out_free_mem;
+
+    cfg->cbndx = ret;
+    if ( smmu->version == 1 )
+    {
+        cfg->irptndx = atomic_inc_return(&smmu->irptndx);
+        cfg->irptndx %= smmu->num_context_irqs;
+    }
+    else
+        cfg->irptndx = cfg->cbndx;
+
+    irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+    ret = request_dt_irq(irq, arm_smmu_context_fault,
+                         "arm-smmu-context-fault", cfg);
+    if ( ret )
+    {
+        smmu_err(smmu, "failed to request context IRQ %d (%u)\n",
+                 cfg->irptndx, irq->irq);
+        cfg->irptndx = INVALID_IRPTNDX;
+        goto out_free_context;
+    }
+
+    cfg->domain = d;
+    cfg->smmu = smmu;
+
+    arm_smmu_init_context_bank(cfg);
+    list_add(&cfg->list, &smmu_domain->contexts);
+    INIT_LIST_HEAD(&cfg->masters);
+
+    return cfg;
+
+out_free_context:
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+out_free_mem:
+    xfree(cfg);
+
+    return NULL;
+}
+
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct domain *d = cfg->domain;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+    const struct dt_irq *irq;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+    BUG_ON(!list_empty(&cfg->masters));
+
+    /* Disable the context bank and nuke the TLB before freeing it */
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+    arm_smmu_tlb_inv_context(cfg);
+
+    if ( cfg->irptndx != INVALID_IRPTNDX )
+    {
+        irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+        release_dt_irq(irq, cfg);
+    }
+
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+    list_del(&cfg->list);
+    xfree(cfg);
+}
+
+static struct arm_smmu_device *
+arm_smmu_find_smmu_by_dev(const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_master *master = NULL;
+
+    list_for_each_entry( smmu, &arm_smmu_devices, list )
+    {
+        master = find_smmu_master(smmu, dev);
+        if ( master )
+            break;
+    }
+
+    if ( !master )
+        return NULL;
+
+    return smmu;
+}
+
+static int arm_smmu_attach_dev(struct domain *d,
+                               const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu = arm_smmu_find_smmu_by_dev(dev);
+    struct arm_smmu_master *master;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg = NULL;
+    struct arm_smmu_domain_cfg *curr;
+    int ret;
+
+    printk(XENLOG_DEBUG "arm-smmu: attach %s to domain %d\n",
+           dt_node_full_name(dev), d->domain_id);
+
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: cannot attach to SMMU, is it on the same bus?\n",
+               dt_node_full_name(dev));
+        return -ENODEV;
+    }
+
+    master = find_smmu_master(smmu, dev);
+    BUG_ON(master == NULL);
+
+    /* Check if the device is already assigned to someone */
+    if ( master->cfg )
+        return -EBUSY;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry( curr, &smmu_domain->contexts, list )
+    {
+        if ( curr->smmu == smmu )
+        {
+            cfg = curr;
+            break;
+        }
+    }
+
+    if ( !cfg )
+    {
+        cfg = arm_smmu_alloc_domain_context(d, smmu);
+        if ( !cfg )
+        {
+            smmu_err(smmu, "unable to allocate context for domain %u\n",
+                     d->domain_id);
+            spin_unlock(&smmu_domain->lock);
+            return -ENOMEM;
+        }
+    }
+    spin_unlock(&smmu_domain->lock);
+
+    ret = arm_smmu_domain_add_master(d, cfg, master);
+    if ( ret )
+    {
+        spin_lock(&smmu_domain->lock);
+        if ( list_empty(&cfg->masters) )
+            arm_smmu_destroy_domain_context(cfg);
+        spin_unlock(&smmu_domain->lock);
+    }
+
+    return ret;
+}
+
+static int arm_smmu_detach_dev(struct domain *d,
+                               const struct dt_device_node *dev)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_master *master;
+    struct arm_smmu_device *smmu = arm_smmu_find_smmu_by_dev(dev);
+    struct arm_smmu_domain_cfg *cfg;
+
+    printk(XENLOG_DEBUG "arm-smmu: detach %s from domain %d\n",
+           dt_node_full_name(dev), d->domain_id);
+
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: cannot find the SMMU, is it on the same bus?\n",
+               dt_node_full_name(dev));
+        return -ENODEV;
+    }
+
+    master = find_smmu_master(smmu, dev);
+    BUG_ON(master == NULL);
+
+    cfg = master->cfg;
+
+    /*
+     * Sanity check to avoid removing a device that doesn't belong to
+     * the domain.
+     */
+    if ( !cfg || cfg->domain != d )
+    {
+        printk(XENLOG_ERR "%s: is not attached to domain %d\n",
+               dt_node_full_name(dev), d->domain_id);
+        return -ESRCH;
+    }
+
+    arm_smmu_domain_remove_master(master);
+
+    spin_lock(&smmu_domain->lock);
+    if ( list_empty(&cfg->masters) )
+        arm_smmu_destroy_domain_context(cfg);
+    spin_unlock(&smmu_domain->lock);
+
+    return 0;
+}
+
+static int arm_smmu_reassign_dt_dev(struct domain *s, struct domain *t,
+                                    const struct dt_device_node *dev)
+{
+    int ret = 0;
+
+    /* Don't allow the device to be reassigned to a domain other than dom0 */
+    if ( t != dom0 )
+        return -EPERM;
+
+    if ( t == s )
+        return 0;
+
+    ret = arm_smmu_detach_dev(s, dev);
+    if ( ret )
+        return ret;
+
+    ret = arm_smmu_attach_dev(t, dev);
+
+    return ret;
+}
+
+static __init int arm_smmu_id_size_to_bits(int size)
+{
+    switch ( size )
+    {
+    case 0:
+        return 32;
+    case 1:
+        return 36;
+    case 2:
+        return 40;
+    case 3:
+        return 42;
+    case 4:
+        return 44;
+    case 5:
+    default:
+        return 48;
+    }
+}
+
+static __init int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
+{
+    unsigned long size;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    u32 id;
+
+    smmu_info(smmu, "probing hardware configuration...\n");
+
+    /*
+     * Primecell ID
+     */
+    id = readl_relaxed(gr0_base + SMMU_GR0_PIDR2);
+    smmu->version = ((id >> SMMU_PIDR2_ARCH_SHIFT) & SMMU_PIDR2_ARCH_MASK) + 1;
+    smmu_info(smmu, "SMMUv%d with:\n", smmu->version);
+
+    /* ID0 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID0);
+#ifndef CONFIG_ARM_64
+    if ( ((id >> SMMU_ID0_PTFS_SHIFT) & SMMU_ID0_PTFS_MASK) ==
+            SMMU_ID0_PTFS_V8_ONLY )
+    {
+        smmu_err(smmu, "\tno v7 descriptor support!\n");
+        return -ENODEV;
+    }
+#endif
+    if ( id & SMMU_ID0_S1TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S1;
+        smmu_info(smmu, "\tstage 1 translation\n");
+    }
+
+    if ( id & SMMU_ID0_S2TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S2;
+        smmu_info(smmu, "\tstage 2 translation\n");
+    }
+
+    if ( id & SMMU_ID0_NTS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_NESTED;
+        smmu_info(smmu, "\tnested translation\n");
+    }
+
+    if ( !(smmu->features &
+           (SMMU_FEAT_TRANS_S1 | SMMU_FEAT_TRANS_S2 |
+            SMMU_FEAT_TRANS_NESTED)) )
+    {
+        smmu_err(smmu, "\tno translation support!\n");
+        return -ENODEV;
+    }
+
+    /* We need at least support for Stage 2 */
+    if ( !(smmu->features & SMMU_FEAT_TRANS_S2) )
+    {
+        smmu_err(smmu, "\tno stage 2 translation!\n");
+        return -ENODEV;
+    }
+
+    if ( id & SMMU_ID0_CTTW )
+    {
+        smmu->features |= SMMU_FEAT_COHERENT_WALK;
+        smmu_info(smmu, "\tcoherent table walk\n");
+    }
+
+    if ( id & SMMU_ID0_SMS )
+    {
+        u32 smr, sid, mask;
+
+        smmu->features |= SMMU_FEAT_STREAM_MATCH;
+        smmu->num_mapping_groups = (id >> SMMU_ID0_NUMSMRG_SHIFT) &
+            SMMU_ID0_NUMSMRG_MASK;
+        if ( smmu->num_mapping_groups == 0 )
+        {
+            smmu_err(smmu,
+                     "stream-matching supported, but no SMRs present!\n");
+            return -ENODEV;
+        }
+
+        smr = SMMU_SMR_MASK_MASK << SMMU_SMR_MASK_SHIFT;
+        smr |= (SMMU_SMR_ID_MASK << SMMU_SMR_ID_SHIFT);
+        writel_relaxed(smr, gr0_base + SMMU_GR0_SMR(0));
+        smr = readl_relaxed(gr0_base + SMMU_GR0_SMR(0));
+
+        mask = (smr >> SMMU_SMR_MASK_SHIFT) & SMMU_SMR_MASK_MASK;
+        sid = (smr >> SMMU_SMR_ID_SHIFT) & SMMU_SMR_ID_MASK;
+        if ( (mask & sid) != sid )
+        {
+            smmu_err(smmu,
+                     "SMR mask bits (0x%x) insufficient for ID field (0x%x)\n",
+                     mask, sid);
+            return -ENODEV;
+        }
+        smmu->smr_mask_mask = mask;
+        smmu->smr_id_mask = sid;
+
+        smmu_info(smmu,
+                  "\tstream matching with %u register groups, mask 0x%x\n",
+                  smmu->num_mapping_groups, mask);
+    }
+
+    /* ID1 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID1);
+    smmu->pagesize = (id & SMMU_ID1_PAGESIZE) ? PAGE_SIZE_64K : PAGE_SIZE_4K;
+
+    /* Check for size mismatch of SMMU address space from mapped region */
+    size = 1 << (((id >> SMMU_ID1_NUMPAGENDXB_SHIFT) &
+                  SMMU_ID1_NUMPAGENDXB_MASK) + 1);
+    size *= (smmu->pagesize << 1);
+    if ( smmu->size != size )
+        smmu_warn(smmu, "SMMU address space size (0x%lx) differs "
+                  "from mapped region size (0x%lx)!\n", size, smmu->size);
+
+    smmu->num_s2_context_banks = (id >> SMMU_ID1_NUMS2CB_SHIFT) &
+        SMMU_ID1_NUMS2CB_MASK;
+    smmu->num_context_banks = (id >> SMMU_ID1_NUMCB_SHIFT) &
+        SMMU_ID1_NUMCB_MASK;
+    if ( smmu->num_s2_context_banks > smmu->num_context_banks )
+    {
+        smmu_err(smmu, "impossible number of S2 context banks!\n");
+        return -ENODEV;
+    }
+    smmu_info(smmu, "\t%u context banks (%u stage-2 only)\n",
+              smmu->num_context_banks, smmu->num_s2_context_banks);
+
+    /* ID2 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID2);
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_IAS_SHIFT) &
+                                    SMMU_ID2_IAS_MASK);
+
+    /*
+     * The stage-1 output size is limited by the stage-2 input size due
+     * to the VTCR_EL2 setup (see setup_virt_paging). The current
+     * maximum output size is 40 bits.
+     */
+    smmu->s1_output_size = min(40UL, size);
+
+    /* The stage-2 output mask is also applied for bypass */
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_OAS_SHIFT) &
+                                    SMMU_ID2_OAS_MASK);
+    smmu->s2_output_size = min((unsigned long)PADDR_BITS, size);
+
+    if ( smmu->version == 1 )
+        smmu->input_size = 32;
+    else
+    {
+#ifdef CONFIG_ARM_64
+        size = (id >> SMMU_ID2_UBS_SHIFT) & SMMU_ID2_UBS_MASK;
+        size = min(39, arm_smmu_id_size_to_bits(size));
+#else
+        size = 32;
+#endif
+        smmu->input_size = size;
+
+        if ( (PAGE_SIZE == PAGE_SIZE_4K && !(id & SMMU_ID2_PTFS_4K) ) ||
+             (PAGE_SIZE == PAGE_SIZE_64K && !(id & SMMU_ID2_PTFS_64K)) ||
+             (PAGE_SIZE != PAGE_SIZE_4K && PAGE_SIZE != PAGE_SIZE_64K) )
+        {
+            smmu_err(smmu, "CPU page size 0x%lx unsupported\n",
+                     PAGE_SIZE);
+            return -ENODEV;
+        }
+    }
+
+    smmu_info(smmu, "\t%lu-bit VA, %lu-bit IPA, %lu-bit PA\n",
+              smmu->input_size, smmu->s1_output_size, smmu->s2_output_size);
+    return 0;
+}
+
+static __init void arm_smmu_device_reset(struct arm_smmu_device *smmu)
+{
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    void __iomem *cb_base;
+    int i = 0;
+    u32 reg;
+
+    smmu_dbg(smmu, "device reset\n");
+
+    /* Clear Global FSR */
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+    writel(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+
+    /* Mark all SMRn as invalid and all S2CRn as fault */
+    for ( i = 0; i < smmu->num_mapping_groups; ++i )
+    {
+        writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(i));
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT, gr0_base + SMMU_GR0_S2CR(i));
+    }
+
+    /* Make sure all context banks are disabled and clear CB_FSR  */
+    for ( i = 0; i < smmu->num_context_banks; ++i )
+    {
+        cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, i);
+        writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+        writel_relaxed(SMMU_FSR_FAULT, cb_base + SMMU_CB_FSR);
+    }
+
+    /* Invalidate the TLB, just in case */
+    writel_relaxed(0, gr0_base + SMMU_GR0_STLBIALL);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLH);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLNSNH);
+
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+
+    /* Enable fault reporting */
+    reg |= (SMMU_sCR0_GFRE | SMMU_sCR0_GFIE |
+            SMMU_sCR0_GCFGFRE | SMMU_sCR0_GCFGFIE);
+
+    /* Disable TLB broadcasting. */
+    reg |= (SMMU_sCR0_VMIDPNE | SMMU_sCR0_PTM);
+
+    /* Enable client access, generate a fault if no mapping is found */
+    reg &= ~(SMMU_sCR0_CLIENTPD);
+    reg |= SMMU_sCR0_USFCFG;
+
+    /* Disable forced broadcasting */
+    reg &= ~SMMU_sCR0_FB;
+
+    /* Don't upgrade barriers */
+    reg &= ~(SMMU_sCR0_BSU_MASK << SMMU_sCR0_BSU_SHIFT);
+
+    /* Push the button */
+    arm_smmu_tlb_sync(smmu);
+    writel_relaxed(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+}
+
+int arm_smmu_iommu_domain_init(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain;
+
+    smmu_domain = xzalloc(struct arm_smmu_domain);
+    if ( !smmu_domain )
+        return -ENOMEM;
+
+    spin_lock_init(&smmu_domain->lock);
+    INIT_LIST_HEAD(&smmu_domain->contexts);
+
+    domain_hvm_iommu(d)->arch.priv = smmu_domain;
+
+    return 0;
+}
+
+void arm_smmu_iommu_dom0_init(struct domain *d)
+{
+}
+
+void arm_smmu_iommu_domain_teardown(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+
+    ASSERT(list_empty(&smmu_domain->contexts));
+    xfree(smmu_domain);
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+    .init = arm_smmu_iommu_domain_init,
+    .dom0_init = arm_smmu_iommu_dom0_init,
+    .teardown = arm_smmu_iommu_domain_teardown,
+    .iotlb_flush = arm_smmu_iotlb_flush,
+    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+    .assign_dt_device = arm_smmu_attach_dev,
+    .reassign_dt_device = arm_smmu_reassign_dt_dev,
+};
+
+static int __init smmu_init(struct dt_device_node *dev,
+                            const void *data)
+{
+    struct arm_smmu_device *smmu;
+    int res;
+    u64 addr, size;
+    unsigned int num_irqs, i;
+    struct dt_phandle_args masterspec;
+    struct rb_node *node;
+
+    /*
+     * Even if the device can't be initialized, we don't want to give
+     * the SMMU device to dom0.
+     */
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    smmu = xzalloc(struct arm_smmu_device);
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: failed to allocate arm_smmu_device\n",
+               dt_node_full_name(dev));
+        return -ENOMEM;
+    }
+
+    smmu->node = dev;
+    check_driver_options(smmu);
+
+    res = dt_device_get_address(smmu->node, 0, &addr, &size);
+    if ( res )
+    {
+        smmu_err(smmu, "unable to retrieve the base address of the SMMU\n");
+        goto out_err;
+    }
+
+    smmu->base = ioremap_nocache(addr, size);
+    if ( !smmu->base )
+    {
+        smmu_err(smmu, "unable to map the SMMU memory\n");
+        goto out_err;
+    }
+
+    smmu->size = size;
+
+    if ( !dt_property_read_u32(smmu->node, "#global-interrupts",
+                               &smmu->num_global_irqs) )
+    {
+        smmu_err(smmu, "missing #global-interrupts\n");
+        goto out_unmap;
+    }
+
+    num_irqs = dt_number_of_irq(smmu->node);
+    if ( num_irqs > smmu->num_global_irqs )
+        smmu->num_context_irqs = num_irqs - smmu->num_global_irqs;
+
+    if ( !smmu->num_context_irqs )
+    {
+        smmu_err(smmu, "found %d interrupts but expected at least %d\n",
+                 num_irqs, smmu->num_global_irqs + 1);
+        goto out_unmap;
+    }
+
+    smmu->irqs = xzalloc_array(struct dt_irq, num_irqs);
+    if ( !smmu->irqs )
+    {
+        smmu_err(smmu, "failed to allocate %d IRQs\n", num_irqs);
+        goto out_unmap;
+    }
+
+    for ( i = 0; i < num_irqs; i++ )
+    {
+        res = dt_device_get_irq(smmu->node, i, &smmu->irqs[i]);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to get irq index %d\n", i);
+            goto out_free_irqs;
+        }
+    }
+
+    smmu->sids = xzalloc_array(unsigned long,
+                               BITS_TO_LONGS(SMMU_MAX_STREAMIDS));
+    if ( !smmu->sids )
+    {
+        smmu_err(smmu, "failed to allocate bitmap for stream ID tracking\n");
+        goto out_free_masters;
+    }
+
+
+    i = 0;
+    smmu->masters = RB_ROOT;
+    while ( !dt_parse_phandle_with_args(smmu->node, "mmu-masters",
+                                        "#stream-id-cells", i, &masterspec) )
+    {
+        res = register_smmu_master(smmu, &masterspec);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to add master %s\n",
+                     masterspec.np->name);
+            goto out_free_masters;
+        }
+        i++;
+    }
+
+    smmu_info(smmu, "registered %d master devices\n", i);
+
+    res = arm_smmu_device_cfg_probe(smmu);
+    if ( res )
+    {
+        smmu_err(smmu, "failed to probe the SMMU\n");
+        goto out_free_masters;
+    }
+
+    if ( smmu->version > 1 &&
+         smmu->num_context_banks != smmu->num_context_irqs )
+    {
+        smmu_err(smmu,
+                 "found only %d context interrupt(s) but %d required\n",
+                 smmu->num_context_irqs, smmu->num_context_banks);
+        goto out_free_masters;
+    }
+
+    smmu_dbg(smmu, "register global IRQ handlers\n");
+
+    for ( i = 0; i < smmu->num_global_irqs; ++i )
+    {
+        smmu_dbg(smmu, "\t- global IRQ %u\n", smmu->irqs[i].irq);
+        res = request_dt_irq(&smmu->irqs[i], arm_smmu_global_fault,
+                             "arm-smmu global fault", smmu);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to request global IRQ %d (%u)\n",
+                     i, smmu->irqs[i].irq);
+            goto out_release_irqs;
+        }
+    }
+
+    INIT_LIST_HEAD(&smmu->list);
+    list_add(&smmu->list, &arm_smmu_devices);
+
+    arm_smmu_device_reset(smmu);
+
+    iommu_set_ops(&arm_smmu_iommu_ops);
+
+    /* The sids bitmap is only needed during initialisation; free it now */
+    xfree(smmu->sids);
+    smmu->sids = NULL;
+
+    return 0;
+
+out_release_irqs:
+    while ( i-- )
+        release_dt_irq(&smmu->irqs[i], smmu);
+
+out_free_masters:
+    for ( node = rb_first(&smmu->masters); node; )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+        node = rb_next(node); /* advance before freeing the node */
+        xfree(master);
+    }
+
+    xfree(smmu->sids);
+
+out_free_irqs:
+    xfree(smmu->irqs);
+
+out_unmap:
+    iounmap(smmu->base);
+
+out_err:
+    xfree(smmu);
+
+    return -ENODEV;
+}
+
+static const char * const smmu_dt_compat[] __initconst =
+{
+    "arm,mmu-400",
+    NULL
+};
+
+DT_DEVICE_START(smmu, "ARM SMMU", DEVICE_IOMMU)
+    .compatible = smmu_dt_compat,
+    .init = smmu_init,
+DT_DEVICE_END
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 266cd6e..854e843 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -40,6 +40,9 @@ extern bool_t amd_iommu_perdev_intremap;
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
 #define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
 
+#define PAGE_SHIFT_64K      (16)
+#define PAGE_SIZE_64K       (1UL << PAGE_SHIFT_64K)
+
 int iommu_setup(void);
 
 int iommu_add_device(struct pci_dev *pdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 22:17:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 22:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHhMB-0007CH-FA; Sun, 23 Feb 2014 22:17:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHhM7-00076M-TU
	for xen-devel@lists.xenproject.org; Sun, 23 Feb 2014 22:17:04 +0000
Received: from [85.158.139.211:35161] by server-15.bemta-5.messagelabs.com id
	88/89-24395-E537A035; Sun, 23 Feb 2014 22:17:02 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393193819!1773911!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 864 invoked from network); 23 Feb 2014 22:17:00 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 22:17:00 -0000
Received: by mail-ee0-f44.google.com with SMTP id d49so379583eek.31
	for <xen-devel@lists.xenproject.org>;
	Sun, 23 Feb 2014 14:16:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=d8lJ57f8Tf86+tca/aBCJQOVJUiHUNg/PIJRjJu+uS8=;
	b=evVLtpY0XNp/EveXV421saV8yNKFYOt/1CyIB2WxJVIzSNsHB3s7R4b48WlrSfFFh+
	oeaIcvaPOUdWx1ogSnnzkz4LOZeaXvGZ2xFOPzIPXKObJI2dVJoiEUlGz+/4KCcHJu0l
	Pb/6fA4/DdEeJZLxQ3VXbrOp0+G0YK3M8maeN7Vxvee4+klUaG1NewdCyTGXJmZusaXr
	+LzVMnOykfoX0SBSe1eTGjXHNRfhy0Kr38s9nOy6KBtxKx0CUivO0AZmuG9R4yUGVjhf
	c0n4f79CQERyBisYRulHyxB4L2tfNdzy4V6vKJNGFpW0GHui1epxk5mDPBfmb2deZOx8
	IdDg==
X-Gm-Message-State: ALoCoQnPb73FzV3dRsVlMAUnOYSw1cUGEXWAAyyp0JT+iZWvbrSjbWT4BcKvDxSQrH189NM83mqi
X-Received: by 10.14.176.66 with SMTP id a42mr20730646eem.101.1393193819692;
	Sun, 23 Feb 2014 14:16:59 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id g1sm55994749eet.6.2014.02.23.14.16.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Sun, 23 Feb 2014 14:16:58 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Sun, 23 Feb 2014 22:16:32 +0000
Message-Id: <1393193792-20008-16-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
Cc: ian.campbell@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, stefano.stabellini@citrix.com,
	Jan Beulich <jbeulich@suse.com>, Xiantao Zhang <xiantao.zhang@intel.com>
Subject: [Xen-devel] [PATCH v2 15/15] drivers/passthrough: arm: Add support
	for SMMU drivers
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds support for the ARM architected SMMU driver. It is based on
the Linux driver (drivers/iommu/arm-smmu.c), commit 89ac23cd.

The major differences from the Linux driver are:
    - Fault by default when the SMMU is unable to translate an
    address (Linux bypasses the SMMU instead)
    - Use the P2M page table instead of creating a new one
    - Dropped stage-1 support
    - Dropped chained SMMUs support for now
    - Reworked device assignment and the related structures

Xen programs each IOMMU to:
    - Use stage-2 translation mode
    - Share the page table with the processor
    - Inject a fault if a device makes an invalid translation

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Xiantao Zhang <xiantao.zhang@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Update commit message
        - Update some comments in the code
        - Add new callbacks to assign/reassign DT devices
        - Rework init_dom0 and domain_teardown. The
        assignment/deassignment is now done in the generic code
        - Set the protected field in dt_device_node when the device is
        under an IOMMU
        - Replace SZ_64K and SZ_4K with the global PAGE_SIZE_{64,4}K
        from xen/iommu.h. The former was not defined.
---
 xen/drivers/passthrough/arm/Makefile |    1 +
 xen/drivers/passthrough/arm/smmu.c   | 1736 ++++++++++++++++++++++++++++++++++
 xen/include/xen/iommu.h              |    3 +
 3 files changed, 1740 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu.c

diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index 0484b79..f4cd26e 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1 +1,2 @@
 obj-y += iommu.o
+obj-y += smmu.o
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
new file mode 100644
index 0000000..25f9d77
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -0,0 +1,1736 @@
+/*
+ * IOMMU API for ARM architected SMMU implementations.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ * Based on Linux drivers/iommu/arm-smmu.c (commit 89a23cd)
+ * Copyright (C) 2013 ARM Limited
+ *
+ * Author: Will Deacon <will.deacon@arm.com>
+ *
+ * Xen modification:
+ * Julien Grall <julien.grall@linaro.org>
+ * Copyright (C) 2014 Linaro Limited.
+ *
+ * This driver currently supports:
+ *  - SMMUv1 and SMMUv2 implementations (not tested on a v2 SMMU)
+ *  - Stream-matching and stream-indexing
+ *  - v7/v8 long-descriptor format
+ *  - Non-secure access to the SMMU
+ *  - 4k pages, p2m shared with the processor
+ *  - Up to 40-bit addressing
+ *  - Context fault reporting
+ */
+
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+
+/* Driver options */
+#define SMMU_OPT_SECURE_CONFIG_ACCESS   (1 << 0)
+
+/* Maximum number of stream IDs assigned to a single device */
+#define MAX_MASTER_STREAMIDS    MAX_PHANDLE_ARGS
+
+/* Maximum stream ID */
+#define SMMU_MAX_STREAMIDS      (PAGE_SIZE_64K - 1)
+
+/* Maximum number of context banks per SMMU */
+#define SMMU_MAX_CBS        128
+
+/* Maximum number of mapping groups per SMMU */
+#define SMMU_MAX_SMRS       128
+
+/* SMMU global address space */
+#define SMMU_GR0(smmu)      ((smmu)->base)
+#define SMMU_GR1(smmu)      ((smmu)->base + (smmu)->pagesize)
+
+/*
+ * SMMU global address space with conditional offset to access secure aliases of
+ * non-secure registers (e.g. nsCR0: 0x400, nsGFSR: 0x448, nsGFSYNR0: 0x450)
+ */
+#define SMMU_GR0_NS(smmu)                                   \
+    ((smmu)->base +                                         \
+     (((smmu)->options & SMMU_OPT_SECURE_CONFIG_ACCESS)  \
+        ? 0x400 : 0))
+
+/* Page table bits */
+#define SMMU_PTE_PAGE           (((pteval_t)3) << 0)
+#define SMMU_PTE_CONT           (((pteval_t)1) << 52)
+#define SMMU_PTE_AF             (((pteval_t)1) << 10)
+#define SMMU_PTE_SH_NS          (((pteval_t)0) << 8)
+#define SMMU_PTE_SH_OS          (((pteval_t)2) << 8)
+#define SMMU_PTE_SH_IS          (((pteval_t)3) << 8)
+
+#if PAGE_SIZE == PAGE_SIZE_4K
+#define SMMU_PTE_CONT_ENTRIES   16
+#elif PAGE_SIZE == PAGE_SIZE_64K
+#define SMMU_PTE_CONT_ENTRIES   32
+#else
+#define SMMU_PTE_CONT_ENTRIES   1
+#endif
+
+#define SMMU_PTE_CONT_SIZE      (PAGE_SIZE * SMMU_PTE_CONT_ENTRIES)
+#define SMMU_PTE_CONT_MASK      (~(SMMU_PTE_CONT_SIZE - 1))
+#define SMMU_PTE_HWTABLE_SIZE   (PTRS_PER_PTE * sizeof(pte_t))
+
+/* Stage-1 PTE */
+#define SMMU_PTE_AP_UNPRIV      (((pteval_t)1) << 6)
+#define SMMU_PTE_AP_RDONLY      (((pteval_t)2) << 6)
+#define SMMU_PTE_ATTRINDX_SHIFT 2
+#define SMMU_PTE_nG             (((pteval_t)1) << 11)
+
+/* Stage-2 PTE */
+#define SMMU_PTE_HAP_FAULT      (((pteval_t)0) << 6)
+#define SMMU_PTE_HAP_READ       (((pteval_t)1) << 6)
+#define SMMU_PTE_HAP_WRITE      (((pteval_t)2) << 6)
+#define SMMU_PTE_MEMATTR_OIWB   (((pteval_t)0xf) << 2)
+#define SMMU_PTE_MEMATTR_NC     (((pteval_t)0x5) << 2)
+#define SMMU_PTE_MEMATTR_DEV    (((pteval_t)0x1) << 2)
+
+/* Configuration registers */
+#define SMMU_GR0_sCR0           0x0
+#define SMMU_sCR0_CLIENTPD      (1 << 0)
+#define SMMU_sCR0_GFRE          (1 << 1)
+#define SMMU_sCR0_GFIE          (1 << 2)
+#define SMMU_sCR0_GCFGFRE       (1 << 4)
+#define SMMU_sCR0_GCFGFIE       (1 << 5)
+#define SMMU_sCR0_USFCFG        (1 << 10)
+#define SMMU_sCR0_VMIDPNE       (1 << 11)
+#define SMMU_sCR0_PTM           (1 << 12)
+#define SMMU_sCR0_FB            (1 << 13)
+#define SMMU_sCR0_BSU_SHIFT     14
+#define SMMU_sCR0_BSU_MASK      0x3
+
+/* Identification registers */
+#define SMMU_GR0_ID0            0x20
+#define SMMU_GR0_ID1            0x24
+#define SMMU_GR0_ID2            0x28
+#define SMMU_GR0_ID3            0x2c
+#define SMMU_GR0_ID4            0x30
+#define SMMU_GR0_ID5            0x34
+#define SMMU_GR0_ID6            0x38
+#define SMMU_GR0_ID7            0x3c
+#define SMMU_GR0_sGFSR          0x48
+#define SMMU_GR0_sGFSYNR0       0x50
+#define SMMU_GR0_sGFSYNR1       0x54
+#define SMMU_GR0_sGFSYNR2       0x58
+#define SMMU_GR0_PIDR0          0xfe0
+#define SMMU_GR0_PIDR1          0xfe4
+#define SMMU_GR0_PIDR2          0xfe8
+
+#define SMMU_ID0_S1TS           (1 << 30)
+#define SMMU_ID0_S2TS           (1 << 29)
+#define SMMU_ID0_NTS            (1 << 28)
+#define SMMU_ID0_SMS            (1 << 27)
+#define SMMU_ID0_PTFS_SHIFT     24
+#define SMMU_ID0_PTFS_MASK      0x2
+#define SMMU_ID0_PTFS_V8_ONLY   0x2
+#define SMMU_ID0_CTTW           (1 << 14)
+#define SMMU_ID0_NUMIRPT_SHIFT  16
+#define SMMU_ID0_NUMIRPT_MASK   0xff
+#define SMMU_ID0_NUMSMRG_SHIFT  0
+#define SMMU_ID0_NUMSMRG_MASK   0xff
+
+#define SMMU_ID1_PAGESIZE            (1 << 31)
+#define SMMU_ID1_NUMPAGENDXB_SHIFT   28
+#define SMMU_ID1_NUMPAGENDXB_MASK    7
+#define SMMU_ID1_NUMS2CB_SHIFT       16
+#define SMMU_ID1_NUMS2CB_MASK        0xff
+#define SMMU_ID1_NUMCB_SHIFT         0
+#define SMMU_ID1_NUMCB_MASK          0xff
+
+#define SMMU_ID2_OAS_SHIFT           4
+#define SMMU_ID2_OAS_MASK            0xf
+#define SMMU_ID2_IAS_SHIFT           0
+#define SMMU_ID2_IAS_MASK            0xf
+#define SMMU_ID2_UBS_SHIFT           8
+#define SMMU_ID2_UBS_MASK            0xf
+#define SMMU_ID2_PTFS_4K             (1 << 12)
+#define SMMU_ID2_PTFS_16K            (1 << 13)
+#define SMMU_ID2_PTFS_64K            (1 << 14)
+
+#define SMMU_PIDR2_ARCH_SHIFT        4
+#define SMMU_PIDR2_ARCH_MASK         0xf
+
+/* Global TLB invalidation */
+#define SMMU_GR0_STLBIALL           0x60
+#define SMMU_GR0_TLBIVMID           0x64
+#define SMMU_GR0_TLBIALLNSNH        0x68
+#define SMMU_GR0_TLBIALLH           0x6c
+#define SMMU_GR0_sTLBGSYNC          0x70
+#define SMMU_GR0_sTLBGSTATUS        0x74
+#define SMMU_sTLBGSTATUS_GSACTIVE   (1 << 0)
+#define SMMU_TLB_LOOP_TIMEOUT       1000000 /* 1s! */
+
+/* Stream mapping registers */
+#define SMMU_GR0_SMR(n)             (0x800 + ((n) << 2))
+#define SMMU_SMR_VALID              (1 << 31)
+#define SMMU_SMR_MASK_SHIFT         16
+#define SMMU_SMR_MASK_MASK          0x7fff
+#define SMMU_SMR_ID_SHIFT           0
+#define SMMU_SMR_ID_MASK            0x7fff
+
+#define SMMU_GR0_S2CR(n)        (0xc00 + ((n) << 2))
+#define SMMU_S2CR_CBNDX_SHIFT   0
+#define SMMU_S2CR_CBNDX_MASK    0xff
+#define SMMU_S2CR_TYPE_SHIFT    16
+#define SMMU_S2CR_TYPE_MASK     0x3
+#define SMMU_S2CR_TYPE_TRANS    (0 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_BYPASS   (1 << SMMU_S2CR_TYPE_SHIFT)
+#define SMMU_S2CR_TYPE_FAULT    (2 << SMMU_S2CR_TYPE_SHIFT)
+
+/* Context bank attribute registers */
+#define SMMU_GR1_CBAR(n)                    (0x0 + ((n) << 2))
+#define SMMU_CBAR_VMID_SHIFT                0
+#define SMMU_CBAR_VMID_MASK                 0xff
+#define SMMU_CBAR_S1_MEMATTR_SHIFT          12
+#define SMMU_CBAR_S1_MEMATTR_MASK           0xf
+#define SMMU_CBAR_S1_MEMATTR_WB             0xf
+#define SMMU_CBAR_TYPE_SHIFT                16
+#define SMMU_CBAR_TYPE_MASK                 0x3
+#define SMMU_CBAR_TYPE_S2_TRANS             (0 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_BYPASS   (1 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_FAULT    (2 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_TYPE_S1_TRANS_S2_TRANS    (3 << SMMU_CBAR_TYPE_SHIFT)
+#define SMMU_CBAR_IRPTNDX_SHIFT             24
+#define SMMU_CBAR_IRPTNDX_MASK              0xff
+
+#define SMMU_GR1_CBA2R(n)                   (0x800 + ((n) << 2))
+#define SMMU_CBA2R_RW64_32BIT               (0 << 0)
+#define SMMU_CBA2R_RW64_64BIT               (1 << 0)
+
+/* Translation context bank */
+#define SMMU_CB_BASE(smmu)                  ((smmu)->base + ((smmu)->size >> 1))
+#define SMMU_CB(smmu, n)                    ((n) * (smmu)->pagesize)
+
+#define SMMU_CB_SCTLR                       0x0
+#define SMMU_CB_RESUME                      0x8
+#define SMMU_CB_TCR2                        0x10
+#define SMMU_CB_TTBR0_LO                    0x20
+#define SMMU_CB_TTBR0_HI                    0x24
+#define SMMU_CB_TCR                         0x30
+#define SMMU_CB_S1_MAIR0                    0x38
+#define SMMU_CB_FSR                         0x58
+#define SMMU_CB_FAR_LO                      0x60
+#define SMMU_CB_FAR_HI                      0x64
+#define SMMU_CB_FSYNR0                      0x68
+#define SMMU_CB_S1_TLBIASID                 0x610
+
+#define SMMU_SCTLR_S1_ASIDPNE               (1 << 12)
+#define SMMU_SCTLR_CFCFG                    (1 << 7)
+#define SMMU_SCTLR_CFIE                     (1 << 6)
+#define SMMU_SCTLR_CFRE                     (1 << 5)
+#define SMMU_SCTLR_E                        (1 << 4)
+#define SMMU_SCTLR_AFE                      (1 << 2)
+#define SMMU_SCTLR_TRE                      (1 << 1)
+#define SMMU_SCTLR_M                        (1 << 0)
+#define SMMU_SCTLR_EAE_SBOP                 (SMMU_SCTLR_AFE | SMMU_SCTLR_TRE)
+
+#define SMMU_RESUME_RETRY                   (0 << 0)
+#define SMMU_RESUME_TERMINATE               (1 << 0)
+
+#define SMMU_TCR_EAE                        (1 << 31)
+
+#define SMMU_TCR_PASIZE_SHIFT               16
+#define SMMU_TCR_PASIZE_MASK                0x7
+
+#define SMMU_TCR_TG0_4K                     (0 << 14)
+#define SMMU_TCR_TG0_64K                    (1 << 14)
+
+#define SMMU_TCR_SH0_SHIFT                  12
+#define SMMU_TCR_SH0_MASK                   0x3
+#define SMMU_TCR_SH_NS                      0
+#define SMMU_TCR_SH_OS                      2
+#define SMMU_TCR_SH_IS                      3
+
+#define SMMU_TCR_ORGN0_SHIFT                10
+#define SMMU_TCR_IRGN0_SHIFT                8
+#define SMMU_TCR_RGN_MASK                   0x3
+#define SMMU_TCR_RGN_NC                     0
+#define SMMU_TCR_RGN_WBWA                   1
+#define SMMU_TCR_RGN_WT                     2
+#define SMMU_TCR_RGN_WB                     3
+
+#define SMMU_TCR_SL0_SHIFT                  6
+#define SMMU_TCR_SL0_MASK                   0x3
+#define SMMU_TCR_SL0_LVL_2                  0
+#define SMMU_TCR_SL0_LVL_1                  1
+
+#define SMMU_TCR_T1SZ_SHIFT                 16
+#define SMMU_TCR_T0SZ_SHIFT                 0
+#define SMMU_TCR_SZ_MASK                    0xf
+
+#define SMMU_TCR2_SEP_SHIFT                 15
+#define SMMU_TCR2_SEP_MASK                  0x7
+
+#define SMMU_TCR2_PASIZE_SHIFT              0
+#define SMMU_TCR2_PASIZE_MASK               0x7
+
+/* Common definitions for PASize and SEP fields */
+#define SMMU_TCR2_ADDR_32                   0
+#define SMMU_TCR2_ADDR_36                   1
+#define SMMU_TCR2_ADDR_40                   2
+#define SMMU_TCR2_ADDR_42                   3
+#define SMMU_TCR2_ADDR_44                   4
+#define SMMU_TCR2_ADDR_48                   5
+
+#define SMMU_TTBRn_HI_ASID_SHIFT            16
+
+#define SMMU_MAIR_ATTR_SHIFT(n)             ((n) << 3)
+#define SMMU_MAIR_ATTR_MASK                 0xff
+#define SMMU_MAIR_ATTR_DEVICE               0x04
+#define SMMU_MAIR_ATTR_NC                   0x44
+#define SMMU_MAIR_ATTR_WBRWA                0xff
+#define SMMU_MAIR_ATTR_IDX_NC               0
+#define SMMU_MAIR_ATTR_IDX_CACHE            1
+#define SMMU_MAIR_ATTR_IDX_DEV              2
+
+#define SMMU_FSR_MULTI                      (1 << 31)
+#define SMMU_FSR_SS                         (1 << 30)
+#define SMMU_FSR_UUT                        (1 << 8)
+#define SMMU_FSR_ASF                        (1 << 7)
+#define SMMU_FSR_TLBLKF                     (1 << 6)
+#define SMMU_FSR_TLBMCF                     (1 << 5)
+#define SMMU_FSR_EF                         (1 << 4)
+#define SMMU_FSR_PF                         (1 << 3)
+#define SMMU_FSR_AFF                        (1 << 2)
+#define SMMU_FSR_TF                         (1 << 1)
+
+#define SMMU_FSR_IGN                        (SMMU_FSR_AFF | SMMU_FSR_ASF |    \
+                                             SMMU_FSR_TLBMCF | SMMU_FSR_TLBLKF)
+#define SMMU_FSR_FAULT                      (SMMU_FSR_MULTI | SMMU_FSR_SS |   \
+                                             SMMU_FSR_UUT | SMMU_FSR_EF |     \
+                                             SMMU_FSR_PF | SMMU_FSR_TF |      \
+                                             SMMU_FSR_IGN)
+
+#define SMMU_FSYNR0_WNR                     (1 << 4)
+
+#define smmu_print(dev, lvl, fmt, ...)                                        \
+    printk(lvl "smmu: %s: " fmt, dt_node_full_name((dev)->node), ## __VA_ARGS__)
+
+#define smmu_err(dev, fmt, ...) smmu_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+
+#define smmu_dbg(dev, fmt, ...)                                             \
+    smmu_print(dev, XENLOG_DEBUG, fmt, ## __VA_ARGS__)
+
+#define smmu_info(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_INFO, fmt, ## __VA_ARGS__)
+
+#define smmu_warn(dev, fmt, ...)                                            \
+    smmu_print(dev, XENLOG_WARNING, fmt, ## __VA_ARGS__)
+
+struct arm_smmu_device {
+    const struct dt_device_node *node;
+
+    void __iomem                *base;
+    unsigned long               size;
+    unsigned long               pagesize;
+
+#define SMMU_FEAT_COHERENT_WALK (1 << 0)
+#define SMMU_FEAT_STREAM_MATCH  (1 << 1)
+#define SMMU_FEAT_TRANS_S1      (1 << 2)
+#define SMMU_FEAT_TRANS_S2      (1 << 3)
+#define SMMU_FEAT_TRANS_NESTED  (1 << 4)
+    u32                         features;
+    u32                         options;
+    int                         version;
+
+    u32                         num_context_banks;
+    u32                         num_s2_context_banks;
+    DECLARE_BITMAP(context_map, SMMU_MAX_CBS);
+    atomic_t                    irptndx;
+
+    u32                         num_mapping_groups;
+    DECLARE_BITMAP(smr_map, SMMU_MAX_SMRS);
+
+    unsigned long               input_size;
+    unsigned long               s1_output_size;
+    unsigned long               s2_output_size;
+
+    u32                         num_global_irqs;
+    u32                         num_context_irqs;
+    struct dt_irq               *irqs;
+
+    u32                         smr_mask_mask;
+    u32                         smr_id_mask;
+
+    unsigned long               *sids;
+
+    struct list_head            list;
+    struct rb_root              masters;
+};
+
+struct arm_smmu_smr {
+    u8                          idx;
+    u16                         mask;
+    u16                         id;
+};
+
+#define INVALID_IRPTNDX         0xff
+
+#define SMMU_CB_ASID(cfg)       ((cfg)->cbndx)
+#define SMMU_CB_VMID(cfg)       ((cfg)->cbndx + 1)
+
+struct arm_smmu_domain_cfg {
+    struct arm_smmu_device  *smmu;
+    u8                      cbndx;
+    u8                      irptndx;
+    u32                     cbar;
+    /* Domain associated with this context */
+    struct domain           *domain;
+    /* List of masters which use this structure */
+    struct list_head        masters;
+
+    /* Used to link domain contexts belonging to the same domain */
+    struct list_head        list;
+};
+
+struct arm_smmu_master {
+    const struct dt_device_node *dt_node;
+
+    /*
+     * The following is specific to the master's position in the
+     * SMMU chain.
+     */
+    struct rb_node              node;
+    u32                         num_streamids;
+    u16                         streamids[MAX_MASTER_STREAMIDS];
+    int                         num_s2crs;
+
+    struct arm_smmu_smr         *smrs;
+    struct arm_smmu_domain_cfg  *cfg;
+
+    /* Used to link masters sharing the same domain context */
+    struct list_head            list;
+};
+
+static LIST_HEAD(arm_smmu_devices);
+
+struct arm_smmu_domain {
+    spinlock_t lock;
+    struct list_head contexts;
+};
+
+struct arm_smmu_option_prop {
+    u32         opt;
+    const char  *prop;
+};
+
+static const struct arm_smmu_option_prop arm_smmu_options[] __initconst =
+{
+    { SMMU_OPT_SECURE_CONFIG_ACCESS, "calxeda,smmu-secure-config-access" },
+    { 0, NULL},
+};
+
+static void __init check_driver_options(struct arm_smmu_device *smmu)
+{
+    int i = 0;
+
+    do {
+        if ( dt_property_read_bool(smmu->node, arm_smmu_options[i].prop) )
+        {
+            smmu->options |= arm_smmu_options[i].opt;
+            smmu_dbg(smmu, "option %s\n", arm_smmu_options[i].prop);
+        }
+    } while ( arm_smmu_options[++i].opt );
+}
+
+static void arm_smmu_context_fault(int irq, void *data,
+                                   struct cpu_user_regs *regs)
+{
+    u32 fsr, far, fsynr;
+    unsigned long iova;
+    struct arm_smmu_domain_cfg *cfg = data;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    fsr = readl_relaxed(cb_base + SMMU_CB_FSR);
+
+    if ( !(fsr & SMMU_FSR_FAULT) )
+        return;
+
+    if ( fsr & SMMU_FSR_IGN )
+        smmu_err(smmu, "Unexpected context fault (fsr 0x%x)\n", fsr);
+
+    fsynr = readl_relaxed(cb_base + SMMU_CB_FSYNR0);
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_LO);
+    iova = far;
+#ifdef CONFIG_ARM_64
+    far = readl_relaxed(cb_base + SMMU_CB_FAR_HI);
+    iova |= ((unsigned long)far << 32);
+#endif
+
+    smmu_err(smmu,
+             "Unhandled context fault: iova=0x%08lx, fsynr=0x%x, cb=%d\n",
+             iova, fsynr, cfg->cbndx);
+
+    /* Clear the faulting FSR */
+    writel(fsr, cb_base + SMMU_CB_FSR);
+
+    /* Terminate any stalled transactions */
+    if ( fsr & SMMU_FSR_SS )
+        writel_relaxed(SMMU_RESUME_TERMINATE, cb_base + SMMU_CB_RESUME);
+}
+
+static void arm_smmu_global_fault(int irq, void *data,
+                                  struct cpu_user_regs *regs)
+{
+    u32 gfsr, gfsynr0, gfsynr1, gfsynr2;
+    struct arm_smmu_device *smmu = data;
+    void __iomem *gr0_base = SMMU_GR0_NS(smmu);
+
+    gfsr = readl_relaxed(gr0_base + SMMU_GR0_sGFSR);
+    gfsynr0 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR0);
+    gfsynr1 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR1);
+    gfsynr2 = readl_relaxed(gr0_base + SMMU_GR0_sGFSYNR2);
+
+    if ( !gfsr )
+        return;
+
+    smmu_err(smmu, "Unexpected global fault, this could be serious\n");
+    smmu_err(smmu,
+             "\tGFSR 0x%08x, GFSYNR0 0x%08x, GFSYNR1 0x%08x, GFSYNR2 0x%08x\n",
+             gfsr, gfsynr0, gfsynr1, gfsynr2);
+    writel(gfsr, gr0_base + SMMU_GR0_sGFSR);
+}
+
+static struct arm_smmu_master *
+find_smmu_master(struct arm_smmu_device *smmu,
+                 const struct dt_device_node *dev_node)
+{
+    struct rb_node *node = smmu->masters.rb_node;
+
+    while ( node )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+
+        if ( dev_node < master->dt_node )
+            node = node->rb_left;
+        else if ( dev_node > master->dt_node )
+            node = node->rb_right;
+        else
+            return master;
+    }
+
+    return NULL;
+}
+
+static __init int insert_smmu_master(struct arm_smmu_device *smmu,
+                                     struct arm_smmu_master *master)
+{
+    struct rb_node **new, *parent;
+
+    new = &smmu->masters.rb_node;
+    parent = NULL;
+    while ( *new )
+    {
+        struct arm_smmu_master *this;
+
+        this = container_of(*new, struct arm_smmu_master, node);
+
+        parent = *new;
+        if ( master->dt_node < this->dt_node )
+            new = &((*new)->rb_left);
+        else if ( master->dt_node > this->dt_node )
+            new = &((*new)->rb_right);
+        else
+            return -EEXIST;
+    }
+
+    rb_link_node(&master->node, parent, new);
+    rb_insert_color(&master->node, &smmu->masters);
+    return 0;
+}
+
+static __init int register_smmu_master(struct arm_smmu_device *smmu,
+                                       struct dt_phandle_args *masterspec)
+{
+    int i, sid;
+    struct arm_smmu_master *master;
+    int rc = 0;
+
+    smmu_dbg(smmu, "Try to add master %s\n", masterspec->np->name);
+
+    master = find_smmu_master(smmu, masterspec->np);
+    if ( master )
+    {
+        smmu_err(smmu,
+                 "rejecting multiple registrations for master device %s\n",
+                 masterspec->np->name);
+        return -EBUSY;
+    }
+
+    if ( masterspec->args_count > MAX_MASTER_STREAMIDS )
+    {
+        smmu_err(smmu,
+            "reached maximum number (%d) of stream IDs for master device %s\n",
+            MAX_MASTER_STREAMIDS, masterspec->np->name);
+        return -ENOSPC;
+    }
+
+    master = xzalloc(struct arm_smmu_master);
+    if ( !master )
+        return -ENOMEM;
+
+    INIT_LIST_HEAD(&master->list);
+    master->dt_node = masterspec->np;
+    master->num_streamids = masterspec->args_count;
+
+    dt_device_set_protected(masterspec->np);
+
+    for ( i = 0; i < master->num_streamids; ++i )
+    {
+        sid = masterspec->args[i];
+        if ( test_and_set_bit(sid, smmu->sids) )
+        {
+            smmu_err(smmu, "duplicate stream ID (%d)\n", sid);
+            xfree(master);
+            return -EEXIST;
+        }
+        master->streamids[i] = masterspec->args[i];
+    }
+
+    rc = insert_smmu_master(smmu, master);
+    /* Insertion should never fail */
+    ASSERT(rc == 0);
+
+    return 0;
+}
+
+static int __arm_smmu_alloc_bitmap(unsigned long *map, int start, int end)
+{
+    int idx;
+
+    do
+    {
+        idx = find_next_zero_bit(map, end, start);
+        if ( idx == end )
+            return -ENOSPC;
+    } while ( test_and_set_bit(idx, map) );
+
+    return idx;
+}
+
+static void __arm_smmu_free_bitmap(unsigned long *map, int idx)
+{
+    clear_bit(idx, map);
+}
+
+static void arm_smmu_tlb_sync(struct arm_smmu_device *smmu)
+{
+    int count = 0;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    writel_relaxed(0, gr0_base + SMMU_GR0_sTLBGSYNC);
+    while ( readl_relaxed(gr0_base + SMMU_GR0_sTLBGSTATUS) &
+            SMMU_sTLBGSTATUS_GSACTIVE )
+    {
+        cpu_relax();
+        if ( ++count == SMMU_TLB_LOOP_TIMEOUT )
+        {
+            smmu_err(smmu, "TLB sync timed out -- SMMU may be deadlocked\n");
+            return;
+        }
+        udelay(1);
+    }
+}
+
+static void arm_smmu_tlb_inv_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *base = SMMU_GR0(smmu);
+
+    writel_relaxed(SMMU_CB_VMID(cfg),
+                   base + SMMU_GR0_TLBIVMID);
+
+    arm_smmu_tlb_sync(smmu);
+}
+
+static void arm_smmu_iotlb_flush_all(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry(cfg, &smmu_domain->contexts, list)
+        arm_smmu_tlb_inv_context(cfg);
+    spin_unlock(&smmu_domain->lock);
+}
+
+static void arm_smmu_iotlb_flush(struct domain *d, unsigned long gfn,
+                                 unsigned int page_count)
+{
+    /* ARM SMMU v1 doesn't have flush by VMA and VMID */
+    arm_smmu_iotlb_flush_all(d);
+}
+
+static int determine_smr_mask(struct arm_smmu_device *smmu,
+                              struct arm_smmu_master *master,
+                              struct arm_smmu_smr *smr, int start, int order)
+{
+    u16 i, zero_bits_mask, one_bits_mask, const_mask;
+    int nr;
+
+    nr = 1 << order;
+
+    if ( nr == 1 )
+    {
+        /* no mask, use streamid to match and be done with it */
+        smr->mask = 0;
+        smr->id = master->streamids[start];
+        return 0;
+    }
+
+    zero_bits_mask = 0;
+    one_bits_mask = 0xffff;
+    for ( i = start; i < start + nr; i++ )
+    {
+        zero_bits_mask |= master->streamids[i];   /* const 0 bits */
+        one_bits_mask &= master->streamids[i]; /* const 1 bits */
+    }
+    zero_bits_mask = ~zero_bits_mask;
+
+    /* bits having constant values (either 0 or 1) */
+    const_mask = zero_bits_mask | one_bits_mask;
+
+    i = hweight16(~const_mask);
+    if ( (1 << i) == nr )
+    {
+        smr->mask = ~const_mask;
+        smr->id = one_bits_mask;
+    }
+    else
+        /* no usable mask for this set of streamids */
+        return 1;
+
+    if ( ((smr->mask & smmu->smr_mask_mask) != smr->mask) ||
+         ((smr->id & smmu->smr_id_mask) != smr->id) )
+        /* insufficient number of mask/id bits */
+        return 1;
+
+    return 0;
+}
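+
+/*
+ * Worked example: for stream IDs {4, 5, 6, 7} (0b100 .. 0b111),
+ * zero_bits_mask = ~(4|5|6|7) = 0xfff8 and one_bits_mask = 4&5&6&7 = 0x4,
+ * so const_mask = 0xfffc. Then ~const_mask = 0x3 has hweight16() == 2 and
+ * 1 << 2 == nr, so a single SMR with mask = 0x3, id = 0x4 matches exactly
+ * these four stream IDs.
+ */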
+
+static int determine_smr_mapping(struct arm_smmu_device *smmu,
+                                 struct arm_smmu_master *master,
+                                 struct arm_smmu_smr *smrs, int max_smrs)
+{
+    int nr_sid, nr, i, bit, start;
+
+    /*
+     * This function is called only once -- when a master is added
+     * to a domain. If master->num_s2crs != 0 then this master
+     * was already added to a domain.
+     */
+    BUG_ON(master->num_s2crs);
+
+    start = nr = 0;
+    nr_sid = master->num_streamids;
+    do
+    {
+        /*
+         * largest power-of-2 number of streamids for which to
+         * determine a usable mask/id pair for stream matching
+         */
+        bit = fls(nr_sid);
+        if ( !bit )
+            return 0;
+
+        /*
+         * iterate over power-of-2 numbers to determine
+         * largest possible mask/id pair for stream matching
+         * of next 2**i streamids
+         */
+        for ( i = bit - 1; i >= 0; i-- )
+        {
+            if ( !determine_smr_mask(smmu, master,
+                                     &smrs[master->num_s2crs],
+                                     start, i) )
+                break;
+        }
+
+        if ( i < 0 )
+            goto out;
+
+        nr = 1 << i;
+        nr_sid -= nr;
+        start += nr;
+        master->num_s2crs++;
+    } while ( master->num_s2crs <= max_smrs );
+
+out:
+    if ( nr_sid )
+    {
+        /* not enough mapping groups available */
+        master->num_s2crs = 0;
+        return -ENOSPC;
+    }
+
+    return 0;
+}
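+
+/*
+ * Worked example: a master with 6 stream IDs is split greedily into
+ * power-of-2 groups. fls(6) = 3, so group sizes of 4, 2 and 1 are tried
+ * in turn; if determine_smr_mask() finds a usable mask for the first 4
+ * IDs and then for the remaining 2, the master needs only
+ * num_s2crs == 2 mapping groups instead of 6 individual entries.
+ */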
+
+static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
+                                          struct arm_smmu_master *master)
+{
+    int i, max_smrs, ret;
+    struct arm_smmu_smr *smrs;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+
+    if ( !(smmu->features & SMMU_FEAT_STREAM_MATCH) )
+        return 0;
+
+    if ( master->smrs )
+        return -EEXIST;
+
+    max_smrs = min(smmu->num_mapping_groups, master->num_streamids);
+    smrs = xmalloc_array(struct arm_smmu_smr, max_smrs);
+    if ( !smrs )
+    {
+        smmu_err(smmu, "failed to allocate %d SMRs for master %s\n",
+                 max_smrs, dt_node_name(master->dt_node));
+        return -ENOMEM;
+    }
+
+    ret = determine_smr_mapping(smmu, master, smrs, max_smrs);
+    if ( ret )
+        goto err_free_smrs;
+
+    /* Allocate the SMRs on the root SMMU */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
+                                          smmu->num_mapping_groups);
+        if ( idx < 0 )
+        {
+            smmu_err(smmu, "failed to allocate free SMR\n");
+            goto err_free_bitmap;
+        }
+        smrs[i].idx = idx;
+    }
+
+    /* It worked! Now, poke the actual hardware */
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 reg = SMMU_SMR_VALID | smrs[i].id << SMMU_SMR_ID_SHIFT |
+            smrs[i].mask << SMMU_SMR_MASK_SHIFT;
+        smmu_dbg(smmu, "SMR%d: 0x%x\n", smrs[i].idx, reg);
+        writel_relaxed(reg, gr0_base + SMMU_GR0_SMR(smrs[i].idx));
+    }
+
+    master->smrs = smrs;
+    return 0;
+
+err_free_bitmap:
+    while ( --i >= 0 )
+        __arm_smmu_free_bitmap(smmu->smr_map, smrs[i].idx);
+    master->num_s2crs = 0;
+err_free_smrs:
+    xfree(smrs);
+    return -ENOSPC;
+}
+
+/* Forward declaration */
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg);
+
+static int arm_smmu_domain_add_master(struct domain *d,
+                                      struct arm_smmu_domain_cfg *cfg,
+                                      struct arm_smmu_master *master)
+{
+    int i, ret;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    if ( master->cfg )
+        return -EBUSY;
+
+    ret = arm_smmu_master_configure_smrs(smmu, master);
+    if ( ret )
+        return ret;
+
+    /* Now we're at the root, time to point at our context bank */
+    if ( !master->num_s2crs )
+        master->num_s2crs = master->num_streamids;
+
+    for ( i = 0; i < master->num_s2crs; ++i )
+    {
+        u32 idx, s2cr;
+
+        idx = smrs ? smrs[i].idx : master->streamids[i];
+        s2cr = (SMMU_S2CR_TYPE_TRANS << SMMU_S2CR_TYPE_SHIFT) |
+            (cfg->cbndx << SMMU_S2CR_CBNDX_SHIFT);
+        smmu_dbg(smmu, "S2CR%d: 0x%x\n", idx, s2cr);
+        writel_relaxed(s2cr, gr0_base + SMMU_GR0_S2CR(idx));
+    }
+
+    master->cfg = cfg;
+    list_add(&master->list, &cfg->masters);
+
+    return 0;
+}
+
+static void arm_smmu_domain_remove_master(struct arm_smmu_master *master)
+{
+    int i;
+    struct arm_smmu_domain_cfg *cfg = master->cfg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    struct arm_smmu_smr *smrs = master->smrs;
+
+    /*
+     * We *must* clear the S2CR first, because freeing the SMR means
+     * that it can be reallocated immediately
+     */
+    for ( i = 0; i < master->num_streamids; ++i )
+    {
+        u16 sid = master->streamids[i];
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT,
+                       gr0_base + SMMU_GR0_S2CR(sid));
+    }
+
+    /* Invalidate the SMRs before freeing them back to the allocator */
+    if ( smrs )
+    {
+        for ( i = 0; i < master->num_s2crs; ++i )
+        {
+            u8 idx = smrs[i].idx;
+
+            writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(idx));
+            __arm_smmu_free_bitmap(smmu->smr_map, idx);
+        }
+
+        master->smrs = NULL;
+        xfree(smrs);
+    }
+
+    /* Allow the master to be re-added to a domain later on */
+    master->num_s2crs = 0;
+
+    master->cfg = NULL;
+    list_del(&master->list);
+    INIT_LIST_HEAD(&master->list);
+}
+
+static void arm_smmu_init_context_bank(struct arm_smmu_domain_cfg *cfg)
+{
+    u32 reg;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base, *gr0_base, *gr1_base;
+    paddr_t p2maddr;
+
+    ASSERT(cfg->domain != NULL);
+    p2maddr = page_to_maddr(cfg->domain->arch.p2m.first_level);
+
+    gr0_base = SMMU_GR0(smmu);
+    gr1_base = SMMU_GR1(smmu);
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+
+    /* CBAR */
+    reg = cfg->cbar;
+    if ( smmu->version == 1 )
+        reg |= cfg->irptndx << SMMU_CBAR_IRPTNDX_SHIFT;
+
+    reg |= SMMU_CB_VMID(cfg) << SMMU_CBAR_VMID_SHIFT;
+    writel_relaxed(reg, gr1_base + SMMU_GR1_CBAR(cfg->cbndx));
+
+    if ( smmu->version > 1 )
+    {
+        /* CBA2R */
+#ifdef CONFIG_ARM_64
+        reg = SMMU_CBA2R_RW64_64BIT;
+#else
+        reg = SMMU_CBA2R_RW64_32BIT;
+#endif
+        writel_relaxed(reg, gr1_base + SMMU_GR1_CBA2R(cfg->cbndx));
+    }
+
+    /* TTBR0 */
+    reg = (p2maddr & ((1ULL << 32) - 1));
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_LO);
+    reg = (p2maddr >> 32);
+    writel_relaxed(reg, cb_base + SMMU_CB_TTBR0_HI);
+
+    /*
+     * TCR
+     * We use long descriptor, with inner-shareable WBWA tables in TTBR0.
+     */
+    if ( smmu->version > 1 )
+    {
+        /* 4K Page Table */
+        if ( PAGE_SIZE == PAGE_SIZE_4K )
+            reg = SMMU_TCR_TG0_4K;
+        else
+            reg = SMMU_TCR_TG0_64K;
+
+        switch ( smmu->s2_output_size )
+        {
+        case 32:
+            reg |= (SMMU_TCR2_ADDR_32 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 36:
+            reg |= (SMMU_TCR2_ADDR_36 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 40:
+            reg |= (SMMU_TCR2_ADDR_40 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 42:
+            reg |= (SMMU_TCR2_ADDR_42 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 44:
+            reg |= (SMMU_TCR2_ADDR_44 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        case 48:
+            reg |= (SMMU_TCR2_ADDR_48 << SMMU_TCR_PASIZE_SHIFT);
+            break;
+        }
+    }
+    else
+        reg = 0;
+
+    /* The attribute to walk the page table should be the same as VTCR_EL2 */
+    reg |= SMMU_TCR_EAE |
+        (SMMU_TCR_SH_NS << SMMU_TCR_SH0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_ORGN0_SHIFT) |
+        (SMMU_TCR_RGN_WBWA << SMMU_TCR_IRGN0_SHIFT) |
+        (SMMU_TCR_SL0_LVL_1 << SMMU_TCR_SL0_SHIFT);
+    writel_relaxed(reg, cb_base + SMMU_CB_TCR);
+
+    /* SCTLR */
+    reg = SMMU_SCTLR_CFCFG |
+        SMMU_SCTLR_CFIE |
+        SMMU_SCTLR_CFRE |
+        SMMU_SCTLR_M |
+        SMMU_SCTLR_EAE_SBOP;
+
+    writel_relaxed(reg, cb_base + SMMU_CB_SCTLR);
+}
+
+static struct arm_smmu_domain_cfg *
+arm_smmu_alloc_domain_context(struct domain *d,
+                              struct arm_smmu_device *smmu)
+{
+    const struct dt_irq *irq;
+    int ret, start;
+    struct arm_smmu_domain_cfg *cfg;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+
+    cfg = xzalloc(struct arm_smmu_domain_cfg);
+    if ( !cfg )
+        return NULL;
+
+    cfg->cbar = SMMU_CBAR_TYPE_S2_TRANS;
+    start = 0;
+
+    ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
+                                  smmu->num_context_banks);
+    if ( ret < 0 )
+        goto out_free_mem;
+
+    cfg->cbndx = ret;
+    if ( smmu->version == 1 )
+    {
+        cfg->irptndx = atomic_inc_return(&smmu->irptndx);
+        cfg->irptndx %= smmu->num_context_irqs;
+    }
+    else
+        cfg->irptndx = cfg->cbndx;
+
+    irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+    ret = request_dt_irq(irq, arm_smmu_context_fault,
+                         "arm-smmu-context-fault", cfg);
+    if ( ret )
+    {
+        smmu_err(smmu, "failed to request context IRQ %d (%u)\n",
+                 cfg->irptndx, irq->irq);
+        cfg->irptndx = INVALID_IRPTNDX;
+        goto out_free_context;
+    }
+
+    cfg->domain = d;
+    cfg->smmu = smmu;
+
+    arm_smmu_init_context_bank(cfg);
+    list_add(&cfg->list, &smmu_domain->contexts);
+    INIT_LIST_HEAD(&cfg->masters);
+
+    return cfg;
+
+out_free_context:
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+out_free_mem:
+    xfree(cfg);
+
+    return NULL;
+}
+
+static void arm_smmu_destroy_domain_context(struct arm_smmu_domain_cfg *cfg)
+{
+    struct domain *d = cfg->domain;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_device *smmu = cfg->smmu;
+    void __iomem *cb_base;
+    const struct dt_irq *irq;
+
+    ASSERT(spin_is_locked(&smmu_domain->lock));
+    BUG_ON(!list_empty(&cfg->masters));
+
+    /* Disable the context bank and nuke the TLB before freeing it */
+    cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, cfg->cbndx);
+    writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+    arm_smmu_tlb_inv_context(cfg);
+
+    if ( cfg->irptndx != INVALID_IRPTNDX )
+    {
+        irq = &smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
+        release_dt_irq(irq, cfg);
+    }
+
+    __arm_smmu_free_bitmap(smmu->context_map, cfg->cbndx);
+    list_del(&cfg->list);
+    xfree(cfg);
+}
+
+static struct arm_smmu_device *
+arm_smmu_find_smmu_by_dev(const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_master *master = NULL;
+
+    list_for_each_entry( smmu, &arm_smmu_devices, list )
+    {
+        master = find_smmu_master(smmu, dev);
+        if ( master )
+            break;
+    }
+
+    if ( !master )
+        return NULL;
+
+    return smmu;
+}
+
+static int arm_smmu_attach_dev(struct domain *d,
+                               const struct dt_device_node *dev)
+{
+    struct arm_smmu_device *smmu = arm_smmu_find_smmu_by_dev(dev);
+    struct arm_smmu_master *master;
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_domain_cfg *cfg = NULL;
+    struct arm_smmu_domain_cfg *curr;
+    int ret;
+
+    printk(XENLOG_DEBUG "arm-smmu: attach %s to domain %d\n",
+           dt_node_full_name(dev), d->domain_id);
+
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: cannot attach to SMMU, is it on the same bus?\n",
+               dt_node_full_name(dev));
+        return -ENODEV;
+    }
+
+    master = find_smmu_master(smmu, dev);
+    BUG_ON(master == NULL);
+
+    /* Check if the device is already assigned to someone */
+    if ( master->cfg )
+        return -EBUSY;
+
+    spin_lock(&smmu_domain->lock);
+    list_for_each_entry( curr, &smmu_domain->contexts, list )
+    {
+        if ( curr->smmu == smmu )
+        {
+            cfg = curr;
+            break;
+        }
+    }
+
+    if ( !cfg )
+    {
+        cfg = arm_smmu_alloc_domain_context(d, smmu);
+        if ( !cfg )
+        {
+            smmu_err(smmu, "unable to allocate context for domain %u\n",
+                     d->domain_id);
+            spin_unlock(&smmu_domain->lock);
+            return -ENOMEM;
+        }
+    }
+    spin_unlock(&smmu_domain->lock);
+
+    ret = arm_smmu_domain_add_master(d, cfg, master);
+    if ( ret )
+    {
+        spin_lock(&smmu_domain->lock);
+        if ( list_empty(&cfg->masters) )
+            arm_smmu_destroy_domain_context(cfg);
+        spin_unlock(&smmu_domain->lock);
+    }
+
+    return ret;
+}
+
+static int arm_smmu_detach_dev(struct domain *d,
+                               const struct dt_device_node *dev)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+    struct arm_smmu_master *master;
+    struct arm_smmu_device *smmu = arm_smmu_find_smmu_by_dev(dev);
+    struct arm_smmu_domain_cfg *cfg;
+
+    printk(XENLOG_DEBUG "arm-smmu: detach %s from domain %d\n",
+           dt_node_full_name(dev), d->domain_id);
+
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: cannot find the SMMU, is it on the same bus?\n",
+               dt_node_full_name(dev));
+        return -ENODEV;
+    }
+
+    master = find_smmu_master(smmu, dev);
+    BUG_ON(master == NULL);
+
+    cfg = master->cfg;
+
+    /*
+     * Sanity check to avoid removing a device that doesn't belong
+     * to the domain.
+     */
+    if ( !cfg || cfg->domain != d )
+    {
+        printk(XENLOG_ERR "%s: is not attached to domain %d\n",
+               dt_node_full_name(dev), d->domain_id);
+        return -ESRCH;
+    }
+
+    arm_smmu_domain_remove_master(master);
+
+    spin_lock(&smmu_domain->lock);
+    if ( list_empty(&cfg->masters) )
+        arm_smmu_destroy_domain_context(cfg);
+    spin_unlock(&smmu_domain->lock);
+
+    return 0;
+}
+
+static int arm_smmu_reassign_dt_dev(struct domain *s, struct domain *t,
+                                    const struct dt_device_node *dev)
+{
+    int ret = 0;
+
+    /* Don't allow the device to be reassigned to a domain other than dom0 */
+    if ( t != dom0 )
+        return -EPERM;
+
+    if ( t == s )
+        return 0;
+
+    ret = arm_smmu_detach_dev(s, dev);
+    if ( ret )
+        return ret;
+
+    ret = arm_smmu_attach_dev(t, dev);
+
+    return ret;
+}
+
+static __init int arm_smmu_id_size_to_bits(int size)
+{
+    switch ( size )
+    {
+    case 0:
+        return 32;
+    case 1:
+        return 36;
+    case 2:
+        return 40;
+    case 3:
+        return 42;
+    case 4:
+        return 44;
+    case 5:
+    default:
+        return 48;
+    }
+}
+
+static __init int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
+{
+    unsigned long size;
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    u32 id;
+
+    smmu_info(smmu, "probing hardware configuration...\n");
+
+    /*
+     * Primecell ID
+     */
+    id = readl_relaxed(gr0_base + SMMU_GR0_PIDR2);
+    smmu->version = ((id >> SMMU_PIDR2_ARCH_SHIFT) & SMMU_PIDR2_ARCH_MASK) + 1;
+    smmu_info(smmu, "SMMUv%d with:\n", smmu->version);
+
+    /* ID0 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID0);
+#ifndef CONFIG_ARM_64
+    if ( ((id >> SMMU_ID0_PTFS_SHIFT) & SMMU_ID0_PTFS_MASK) ==
+            SMMU_ID0_PTFS_V8_ONLY )
+    {
+        smmu_err(smmu, "\tno v7 descriptor support!\n");
+        return -ENODEV;
+    }
+#endif
+    if ( id & SMMU_ID0_S1TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S1;
+        smmu_info(smmu, "\tstage 1 translation\n");
+    }
+
+    if ( id & SMMU_ID0_S2TS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_S2;
+        smmu_info(smmu, "\tstage 2 translation\n");
+    }
+
+    if ( id & SMMU_ID0_NTS )
+    {
+        smmu->features |= SMMU_FEAT_TRANS_NESTED;
+        smmu_info(smmu, "\tnested translation\n");
+    }
+
+    if ( !(smmu->features &
+           (SMMU_FEAT_TRANS_S1 | SMMU_FEAT_TRANS_S2 |
+            SMMU_FEAT_TRANS_NESTED)) )
+    {
+        smmu_err(smmu, "\tno translation support!\n");
+        return -ENODEV;
+    }
+
+    /* We need at least support for Stage 2 */
+    if ( !(smmu->features & SMMU_FEAT_TRANS_S2) )
+    {
+        smmu_err(smmu, "\tno stage 2 translation!\n");
+        return -ENODEV;
+    }
+
+    if ( id & SMMU_ID0_CTTW )
+    {
+        smmu->features |= SMMU_FEAT_COHERENT_WALK;
+        smmu_info(smmu, "\tcoherent table walk\n");
+    }
+
+    if ( id & SMMU_ID0_SMS )
+    {
+        u32 smr, sid, mask;
+
+        smmu->features |= SMMU_FEAT_STREAM_MATCH;
+        smmu->num_mapping_groups = (id >> SMMU_ID0_NUMSMRG_SHIFT) &
+            SMMU_ID0_NUMSMRG_MASK;
+        if ( smmu->num_mapping_groups == 0 )
+        {
+            smmu_err(smmu,
+                     "stream-matching supported, but no SMRs present!\n");
+            return -ENODEV;
+        }
+
+        smr = SMMU_SMR_MASK_MASK << SMMU_SMR_MASK_SHIFT;
+        smr |= (SMMU_SMR_ID_MASK << SMMU_SMR_ID_SHIFT);
+        writel_relaxed(smr, gr0_base + SMMU_GR0_SMR(0));
+        smr = readl_relaxed(gr0_base + SMMU_GR0_SMR(0));
+
+        mask = (smr >> SMMU_SMR_MASK_SHIFT) & SMMU_SMR_MASK_MASK;
+        sid = (smr >> SMMU_SMR_ID_SHIFT) & SMMU_SMR_ID_MASK;
+        if ( (mask & sid) != sid )
+        {
+            smmu_err(smmu,
+                     "SMR mask bits (0x%x) insufficient for ID field (0x%x)\n",
+                     mask, sid);
+            return -ENODEV;
+        }
+        smmu->smr_mask_mask = mask;
+        smmu->smr_id_mask = sid;
+
+        smmu_info(smmu,
+                  "\tstream matching with %u register groups, mask 0x%x\n",
+                  smmu->num_mapping_groups, mask);
+    }
+
+    /* ID1 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID1);
+    smmu->pagesize = (id & SMMU_ID1_PAGESIZE) ? PAGE_SIZE_64K : PAGE_SIZE_4K;
+
+    /* Check for size mismatch of SMMU address space from mapped region */
+    size = 1 << (((id >> SMMU_ID1_NUMPAGENDXB_SHIFT) &
+                  SMMU_ID1_NUMPAGENDXB_MASK) + 1);
+    size *= (smmu->pagesize << 1);
+    if ( smmu->size != size )
+        smmu_warn(smmu, "SMMU address space size (0x%lx) differs "
+                  "from mapped region size (0x%lx)!\n", size, smmu->size);
+
+    smmu->num_s2_context_banks = (id >> SMMU_ID1_NUMS2CB_SHIFT) &
+        SMMU_ID1_NUMS2CB_MASK;
+    smmu->num_context_banks = (id >> SMMU_ID1_NUMCB_SHIFT) &
+        SMMU_ID1_NUMCB_MASK;
+    if ( smmu->num_s2_context_banks > smmu->num_context_banks )
+    {
+        smmu_err(smmu, "impossible number of S2 context banks!\n");
+        return -ENODEV;
+    }
+    smmu_info(smmu, "\t%u context banks (%u stage-2 only)\n",
+              smmu->num_context_banks, smmu->num_s2_context_banks);
+
+    /* ID2 */
+    id = readl_relaxed(gr0_base + SMMU_GR0_ID2);
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_IAS_SHIFT) &
+                                    SMMU_ID2_IAS_MASK);
+
+    /*
+     * The stage-1 output size is limited by the stage-2 input size,
+     * due to the VTCR_EL2 setup (see setup_virt_paging). The current
+     * maximum output size is 40 bits.
+     */
+    smmu->s1_output_size = min(40UL, size);
+
+    /* The stage-2 output mask is also applied for bypass */
+    size = arm_smmu_id_size_to_bits((id >> SMMU_ID2_OAS_SHIFT) &
+                                    SMMU_ID2_OAS_MASK);
+    smmu->s2_output_size = min((unsigned long)PADDR_BITS, size);
+
+    if ( smmu->version == 1 )
+        smmu->input_size = 32;
+    else
+    {
+#ifdef CONFIG_ARM_64
+        size = (id >> SMMU_ID2_UBS_SHIFT) & SMMU_ID2_UBS_MASK;
+        size = min(39, arm_smmu_id_size_to_bits(size));
+#else
+        size = 32;
+#endif
+        smmu->input_size = size;
+
+        if ( (PAGE_SIZE == PAGE_SIZE_4K && !(id & SMMU_ID2_PTFS_4K) ) ||
+             (PAGE_SIZE == PAGE_SIZE_64K && !(id & SMMU_ID2_PTFS_64K)) ||
+             (PAGE_SIZE != PAGE_SIZE_4K && PAGE_SIZE != PAGE_SIZE_64K) )
+        {
+            smmu_err(smmu, "CPU page size 0x%lx unsupported\n",
+                     PAGE_SIZE);
+            return -ENODEV;
+        }
+    }
+
+    smmu_info(smmu, "\t%lu-bit VA, %lu-bit IPA, %lu-bit PA\n",
+              smmu->input_size, smmu->s1_output_size, smmu->s2_output_size);
+    return 0;
+}
+
+static __init void arm_smmu_device_reset(struct arm_smmu_device *smmu)
+{
+    void __iomem *gr0_base = SMMU_GR0(smmu);
+    void __iomem *cb_base;
+    int i = 0;
+    u32 reg;
+
+    smmu_dbg(smmu, "device reset\n");
+
+    /* Clear Global FSR */
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+    writel(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sGFSR);
+
+    /* Mark all SMRn as invalid and all S2CRn as fault */
+    for ( i = 0; i < smmu->num_mapping_groups; ++i )
+    {
+        writel_relaxed(~SMMU_SMR_VALID, gr0_base + SMMU_GR0_SMR(i));
+        writel_relaxed(SMMU_S2CR_TYPE_FAULT, gr0_base + SMMU_GR0_S2CR(i));
+    }
+
+    /* Make sure all context banks are disabled and clear CB_FSR  */
+    for ( i = 0; i < smmu->num_context_banks; ++i )
+    {
+        cb_base = SMMU_CB_BASE(smmu) + SMMU_CB(smmu, i);
+        writel_relaxed(0, cb_base + SMMU_CB_SCTLR);
+        writel_relaxed(SMMU_FSR_FAULT, cb_base + SMMU_CB_FSR);
+    }
+
+    /* Invalidate the TLB, just in case */
+    writel_relaxed(0, gr0_base + SMMU_GR0_STLBIALL);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLH);
+    writel_relaxed(0, gr0_base + SMMU_GR0_TLBIALLNSNH);
+
+    reg = readl_relaxed(SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+
+    /* Enable fault reporting */
+    reg |= (SMMU_sCR0_GFRE | SMMU_sCR0_GFIE |
+            SMMU_sCR0_GCFGFRE | SMMU_sCR0_GCFGFIE);
+
+    /* Disable TLB broadcasting. */
+    reg |= (SMMU_sCR0_VMIDPNE | SMMU_sCR0_PTM);
+
+    /* Enable client access, generate a fault if no mapping is found */
+    reg &= ~(SMMU_sCR0_CLIENTPD);
+    reg |= SMMU_sCR0_USFCFG;
+
+    /* Disable forced broadcasting */
+    reg &= ~SMMU_sCR0_FB;
+
+    /* Don't upgrade barriers */
+    reg &= ~(SMMU_sCR0_BSU_MASK << SMMU_sCR0_BSU_SHIFT);
+
+    /* Push the button */
+    arm_smmu_tlb_sync(smmu);
+    writel_relaxed(reg, SMMU_GR0_NS(smmu) + SMMU_GR0_sCR0);
+}
+
+int arm_smmu_iommu_domain_init(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain;
+
+    smmu_domain = xzalloc(struct arm_smmu_domain);
+    if ( !smmu_domain )
+        return -ENOMEM;
+
+    spin_lock_init(&smmu_domain->lock);
+    INIT_LIST_HEAD(&smmu_domain->contexts);
+
+    domain_hvm_iommu(d)->arch.priv = smmu_domain;
+
+    return 0;
+}
+
+void arm_smmu_iommu_dom0_init(struct domain *d)
+{
+}
+
+void arm_smmu_iommu_domain_teardown(struct domain *d)
+{
+    struct arm_smmu_domain *smmu_domain = domain_hvm_iommu(d)->arch.priv;
+
+    ASSERT(list_empty(&smmu_domain->contexts));
+    xfree(smmu_domain);
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+    .init = arm_smmu_iommu_domain_init,
+    .dom0_init = arm_smmu_iommu_dom0_init,
+    .teardown = arm_smmu_iommu_domain_teardown,
+    .iotlb_flush = arm_smmu_iotlb_flush,
+    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+    .assign_dt_device = arm_smmu_attach_dev,
+    .reassign_dt_device = arm_smmu_reassign_dt_dev,
+};
+
+static int __init smmu_init(struct dt_device_node *dev,
+                            const void *data)
+{
+    struct arm_smmu_device *smmu;
+    int res;
+    u64 addr, size;
+    unsigned int num_irqs, i;
+    struct dt_phandle_args masterspec;
+    struct rb_node *node;
+
+    /*
+     * Even if the device can't be initialized, we don't want to give
+     * the SMMU device to dom0.
+     */
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    smmu = xzalloc(struct arm_smmu_device);
+    if ( !smmu )
+    {
+        printk(XENLOG_ERR "%s: failed to allocate arm_smmu_device\n",
+               dt_node_full_name(dev));
+        return -ENOMEM;
+    }
+
+    smmu->node = dev;
+    check_driver_options(smmu);
+
+    res = dt_device_get_address(smmu->node, 0, &addr, &size);
+    if ( res )
+    {
+        smmu_err(smmu, "unable to retrieve the base address of the SMMU\n");
+        goto out_err;
+    }
+
+    smmu->base = ioremap_nocache(addr, size);
+    if ( !smmu->base )
+    {
+        smmu_err(smmu, "unable to map the SMMU memory\n");
+        goto out_err;
+    }
+
+    smmu->size = size;
+
+    if ( !dt_property_read_u32(smmu->node, "#global-interrupts",
+                               &smmu->num_global_irqs) )
+    {
+        smmu_err(smmu, "missing #global-interrupts\n");
+        goto out_unmap;
+    }
+
+    num_irqs = dt_number_of_irq(smmu->node);
+    if ( num_irqs > smmu->num_global_irqs )
+        smmu->num_context_irqs = num_irqs - smmu->num_global_irqs;
+
+    if ( !smmu->num_context_irqs )
+    {
+        smmu_err(smmu, "found %d interrupts but expected at least %d\n",
+                 num_irqs, smmu->num_global_irqs + 1);
+        goto out_unmap;
+    }
+
+    smmu->irqs = xzalloc_array(struct dt_irq, num_irqs);
+    if ( !smmu->irqs )
+    {
+        smmu_err(smmu, "failed to allocate %d IRQs\n", num_irqs);
+        goto out_unmap;
+    }
+
+    for ( i = 0; i < num_irqs; i++ )
+    {
+        res = dt_device_get_irq(smmu->node, i, &smmu->irqs[i]);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to get irq index %d\n", i);
+            goto out_free_irqs;
+        }
+    }
+
+    smmu->sids = xzalloc_array(unsigned long,
+                               BITS_TO_LONGS(SMMU_MAX_STREAMIDS));
+    if ( !smmu->sids )
+    {
+        smmu_err(smmu, "failed to allocate bitmap for stream ID tracking\n");
+        goto out_free_masters;
+    }
+
+    i = 0;
+    smmu->masters = RB_ROOT;
+    while ( !dt_parse_phandle_with_args(smmu->node, "mmu-masters",
+                                        "#stream-id-cells", i, &masterspec) )
+    {
+        res = register_smmu_master(smmu, &masterspec);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to add master %s\n",
+                     masterspec.np->name);
+            goto out_free_masters;
+        }
+        i++;
+    }
+
+    smmu_info(smmu, "registered %d master devices\n", i);
+
+    res = arm_smmu_device_cfg_probe(smmu);
+    if ( res )
+    {
+        smmu_err(smmu, "failed to probe the SMMU\n");
+        goto out_free_masters;
+    }
+
+    if ( smmu->version > 1 &&
+         smmu->num_context_banks != smmu->num_context_irqs )
+    {
+        smmu_err(smmu,
+                 "found only %d context interrupt(s) but %d required\n",
+                 smmu->num_context_irqs, smmu->num_context_banks);
+        goto out_free_masters;
+    }
+
+    smmu_dbg(smmu, "registering global IRQ handlers\n");
+
+    for ( i = 0; i < smmu->num_global_irqs; ++i )
+    {
+        smmu_dbg(smmu, "\t- global IRQ %u\n", smmu->irqs[i].irq);
+        res = request_dt_irq(&smmu->irqs[i], arm_smmu_global_fault,
+                             "arm-smmu global fault", smmu);
+        if ( res )
+        {
+            smmu_err(smmu, "failed to request global IRQ %d (%u)\n",
+                     i, smmu->irqs[i].irq);
+            goto out_release_irqs;
+        }
+    }
+
+    INIT_LIST_HEAD(&smmu->list);
+    list_add(&smmu->list, &arm_smmu_devices);
+
+    arm_smmu_device_reset(smmu);
+
+    iommu_set_ops(&arm_smmu_iommu_ops);
+
+    /* The sids bitmap is only needed during setup and can be freed now */
+    xfree(smmu->sids);
+    smmu->sids = NULL;
+
+    return 0;
+
+out_release_irqs:
+    while ( i-- )
+        release_dt_irq(&smmu->irqs[i], smmu);
+
+out_free_masters:
+    for ( node = rb_first(&smmu->masters); node; node = rb_next(node) )
+    {
+        struct arm_smmu_master *master;
+
+        master = container_of(node, struct arm_smmu_master, node);
+        xfree(master);
+    }
+
+    xfree(smmu->sids);
+
+out_free_irqs:
+    xfree(smmu->irqs);
+
+out_unmap:
+    iounmap(smmu->base);
+
+out_err:
+    xfree(smmu);
+
+    return -ENODEV;
+}
+
+static const char * const smmu_dt_compat[] __initconst =
+{
+    "arm,mmu-400",
+    NULL
+};
+
+DT_DEVICE_START(smmu, "ARM SMMU", DEVICE_IOMMU)
+    .compatible = smmu_dt_compat,
+    .init = smmu_init,
+DT_DEVICE_END
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 266cd6e..854e843 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -40,6 +40,9 @@ extern bool_t amd_iommu_perdev_intremap;
 #define PAGE_MASK_4K        (((u64)-1) << PAGE_SHIFT_4K)
 #define PAGE_ALIGN_4K(addr) (((addr) + PAGE_SIZE_4K - 1) & PAGE_MASK_4K)
 
+#define PAGE_SHIFT_64K      (16)
+#define PAGE_SIZE_64K       (1UL << PAGE_SHIFT_64K)
+
 int iommu_setup(void);
 
 int iommu_add_device(struct pci_dev *pdev);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Sun Feb 23 23:34:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Feb 2014 23:34:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHiYd-00017i-Ra; Sun, 23 Feb 2014 23:34:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtweaver@hawaii.edu>) id 1WHiYb-00017d-Aa
	for xen-devel@lists.xen.org; Sun, 23 Feb 2014 23:34:02 +0000
Received: from [85.158.139.211:19457] by server-16.bemta-5.messagelabs.com id
	78/7C-05060-8658A035; Sun, 23 Feb 2014 23:34:00 +0000
X-Env-Sender: jtweaver@hawaii.edu
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393198438!5754741!1
X-Originating-IP: [209.85.214.173]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31570 invoked from network); 23 Feb 2014 23:33:59 -0000
Received: from mail-ob0-f173.google.com (HELO mail-ob0-f173.google.com)
	(209.85.214.173)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	23 Feb 2014 23:33:59 -0000
Received: by mail-ob0-f173.google.com with SMTP id uz6so6518695obc.4
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 15:33:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to:cc
	:content-type;
	bh=Q6t4ZMngB4q3fkluRwY+LIea8JEbT32SG/XBhE3H1Lo=;
	b=gu3SRwmGAEOaUOKBTLY4zo9HGRRXvc2UM5VSx8zd6/kMYif9zRblRAvtN6vSZPpMkk
	GZs1cz3KrnGdUPjm+SAhfBV3s2q8QjgVBcRkao1MB4B+Ne0i0C+Vk7ptWFHgY0nRmwVQ
	lNQC14mpXgLQaCXeoNWZPAV9Zx5QfXB+NKi4MWWgEaLElJe4YAkeWGDlgoHWzCkLqvG0
	ns35WcabpiKB/BS8QxvQGICAyUcuqZw1BFCpmyQnTaUowSkufH5dujTdu2Iys2nYW4gI
	744kq6MGH61kFFgdU38LqXRlA7X+EZpl/e7tP6jhIdQN+u+xJR1oqgrweCC6NZ5PMJng
	VIIQ==
X-Gm-Message-State: ALoCoQmPxWIVsYBZAaAzr5ZSK6ihxJ3Q8GKxSkv4gl5nZCMyw+T9JCXIiAYQ+KuVn+bVBL9Utt67
MIME-Version: 1.0
X-Received: by 10.60.80.137 with SMTP id r9mr19286273oex.30.1393198437944;
	Sun, 23 Feb 2014 15:33:57 -0800 (PST)
Received: by 10.182.120.10 with HTTP; Sun, 23 Feb 2014 15:33:57 -0800 (PST)
Date: Sun, 23 Feb 2014 13:33:57 -1000
Message-ID: <CA+o8iRWMGxfPBjvaGrJxm0bCWXfrWHjW73qzR1X2yiqNBRDghg@mail.gmail.com>
From: Justin Weaver <jtweaver@hawaii.edu>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: Marcus.Granado@eu.citrix.com, George Dunlap <george.dunlap@eu.citrix.com>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>, esb@ics.hawaii.edu,
	xen-devel@lists.xen.org, Henri Casanova <henric@hawaii.edu>
Subject: [Xen-devel] Support for guest per-vcpu hard and soft affinity masks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Dario,

Are the vcpu hard and soft affinity masks getting set for guest vcpus?

I'm running xen built from your numa/per-vcpu-affinity-v5 branch.

I added the following code to...
  sched_credit.c in function csched_vcpu_insert
  sched_credit2.c in function runq_assign

    if ( cpumask_full(vc->cpu_hard_affinity) )
        printk("JTW - hard affinity map IS full, dom %d\n",
               vc->domain->domain_id);
    else
        printk("JTW - hard affinity map is NOT full, dom %d\n",
               vc->domain->domain_id);

    if ( cpumask_full(vc->cpu_soft_affinity) )
        printk("JTW - soft affinity map IS full, dom %d\n",
               vc->domain->domain_id);
    else
        printk("JTW - soft affinity map is NOT full, dom %d\n",
               vc->domain->domain_id);

I started up two domains; one is pinned to 0-3 and the other used
automatic placement which xl vcpu-list says set its soft affinity to
8-15.

xl vcpu-list
------------
Name            ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
VM_MatrixMult    1     0    3   -b-       4.2  0-3 / all
VM_Quicksort     2     0    1   -b-       4.1  all / 8-15

Got the same output running credit and credit2 ...
------
(XEN) JTW - hard affinity map IS full, dom 1
(XEN) JTW - soft affinity map IS full, dom 1
(XEN) JTW - hard affinity map IS full, dom 2
(XEN) JTW - soft affinity map IS full, dom 2

I expected the hard map for 1 to be not full, and the soft map for 2
to be not full.

If they are getting set later, when does that happen? Or am I using a
branch that doesn't yet have this functionality? Sorry if I missed
something in the documentation or patch descriptions.

Thank you,
Justin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 01:36:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 01:36:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHkSh-0005j0-71; Mon, 24 Feb 2014 01:36:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <baozich@gmail.com>) id 1WHkSf-0005iv-Ql
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 01:36:02 +0000
Received: from [85.158.143.35:29507] by server-3.bemta-4.messagelabs.com id
	30/93-11539-002AA035; Mon, 24 Feb 2014 01:36:00 +0000
X-Env-Sender: baozich@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393205757!7734592!1
X-Originating-IP: [209.85.220.41]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13287 invoked from network); 24 Feb 2014 01:35:59 -0000
Received: from mail-pa0-f41.google.com (HELO mail-pa0-f41.google.com)
	(209.85.220.41)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 01:35:59 -0000
Received: by mail-pa0-f41.google.com with SMTP id fa1so5843186pad.14
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 17:35:57 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=content-type:mime-version:subject:from:in-reply-to:date:cc
	:content-transfer-encoding:message-id:references:to;
	bh=Rsm1VQ/uFWAaZ6q6vCSu+5L+V/LTSzkHBr4hxKdPOSo=;
	b=yl62LwGnCe6XTr4U4VVm3PmySJmeWDCk/p/JE3adc/mPYLSGOFlhmTuQnnUkuJlO1R
	Igsk6iC4CDP46RsROuUFcksjdIcK+SeVCHn3M3+GHWhmFOeDig7v/mP+OalsSy412EPx
	KS1q1vpxaXEczsybpxdXqExIuL4g7Br8dIgwJ4h36/pzDC0VdAnrmbpqKS4TCZJpcw/d
	ejsDboMBlWCfVn9bDNMHGGkhV6rZnJUQ71PJUV0xLxxnPNek+WIWaLyVL9TTXHjobQLs
	U6Yo1oHzW62a+e4KfAZKyCpOnd4iz2nvAvLK0y+LUUfdAiQCNafhsgoeep6vHzq44e3v
	2Irw==
X-Received: by 10.66.160.225 with SMTP id xn1mr21992349pab.108.1393205757141; 
	Sun, 23 Feb 2014 17:35:57 -0800 (PST)
Received: from [192.168.1.101] ([220.202.153.106])
	by mx.google.com with ESMTPSA id
	oa3sm44288175pbb.15.2014.02.23.17.35.52 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Sun, 23 Feb 2014 17:35:56 -0800 (PST)
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
From: Chen Baozi <baozich@gmail.com>
In-Reply-To: <580D6587-ABF8-4AD7-8AB6-2EB700CC0F1D@gmail.com>
Date: Mon, 24 Feb 2014 09:35:45 +0800
Message-Id: <623EFCB6-221C-4EAA-994C-BAC8CAA9EA36@gmail.com>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
	<1392738894.23084.16.camel@kazak.uk.xensource.com>
	<580D6587-ABF8-4AD7-8AB6-2EB700CC0F1D@gmail.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
X-Mailer: Apple Mail (2.1827)
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 23, 2014, at 22:41, Chen Baozi <baozich@gmail.com> wrote:

> 
> On Feb 18, 2014, at 23:54, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
>> On Sun, 2014-02-16 at 23:51 +0800, Chen Baozi wrote:
>>> Hi all,
>>> 
>>> It is much later than I used to expect. I guess it might be help
>>> to publish my work, though it is still not finished (and might not
>>> be finished very soon...).
>>> 
>>> I began to try to port mini-os to ARM64 since last summer. Since
>>> the 64-bit guest support is not quite well at that time, this
>>> work had been stopped for a long time until two months ago.
>>> 
>>> Though it is still at very early stage, it at least can be built,
>>> setup a early page table for booting, parse the DTB passed by the
>>> hypervisor, and be debugged by printk at present. So I put it
>>> on github in case someone might be interested in it. Here is the
>>> url: https://github.com/baozich/minios-arm64
>> 
>> Cool. Thank you very much for sharing.
>> 
>>> Right now, there are some troubles to make GIC work properly,
>>> as I didn't consider mapping GIC's interface in address space and
>>> follows x86's memory layout which make the kernel virtual address
>>> starts at 0x0. I'll fix it as soon as possible.
>> 
>> Actually, having virtual memory start at 0x0 seems quite reasonable to
>> me, what is the problem?
> 
> Hmmm, I don't think it is a big problem. I just didn't realise it is
> necessary to map GIC's interface after MMU on, which leads a exception
> when I try to program GIC by the physical address populated by DT.
> I used to think about making mini-os kernel address start at 0x80000000
> and leave the address below 0x80000000 to be 1:1 mapping, which
> seems to be able to make things easier when initialising GIC.

Well, I think we need a fixmap region for mini-os on arm/arm64, which is not
included in original x86 version.

> 
>> 
>> Someone somewhere was thinking of making minios run without the MMU
>> enabled on ARM -- to save on the overhead IIRC. But it occurs to me here
>> that this would be problematic if we were to move the guest memory map
>> around -- which we are planning to do for 4.5. I think this means that
>> minios must use the MMU, at least by default.
>> 
>> I wouldn't necessarily object to the presence of an option to build an
>> MMU-less variant for specific use cases, so long as it was clear to
>> those enabling it that there VMs might only work on a single version of
>> Xen.
> 
> Actually, I've already enabled MMU in my current implementation.
> 
> Cheers,
> 
> Baozi
> 
>> 
>>> Besides, there is still lots of work to be done. So any comments
>>> or patches are welcome.
>> 
>> Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 03:33:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 03:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHmHx-0006gv-DJ; Mon, 24 Feb 2014 03:33:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WHmHv-0006gq-Kl
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 03:33:03 +0000
Received: from [85.158.139.211:53267] by server-4.bemta-5.messagelabs.com id
	AA/C9-08092-E6DBA035; Mon, 24 Feb 2014 03:33:02 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393212780!5772852!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6252 invoked from network); 24 Feb 2014 03:33:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 03:33:01 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1O3Wvae000980
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 03:32:58 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1O3Wu4t026526
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 24 Feb 2014 03:32:56 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1O3WumL023355; Mon, 24 Feb 2014 03:32:56 GMT
Received: from [192.168.0.100] (/116.227.152.143)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 23 Feb 2014 19:32:55 -0800
Message-ID: <530ABD5C.10506@oracle.com>
Date: Mon, 24 Feb 2014 11:32:44 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
	<CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
In-Reply-To: <CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 02/21/2014 12:35 PM, Vasiliy Tolstov wrote:
> 2014-02-20 9:52 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
>> Hello. I have some problems with swap files in domU - i have ssd disks
>> that caches all io and if user use swap, ssd may fail very often.
>> Is that possible to use tmem frontswap without swap file at all? And
>> transparently push swap pages to tmem?
> 
> 
> Okay as i see it can;'t be possible.
> Another question - is that possible to reserve tmem to domains at specific size?
> For example i need to get 20Gb for one domain and 10Gb for another.
> But if second domain very hungry it can't
> eaten all memory
> 

Two types of pages can be stored in tmem: persistent pages and ephemeral pages.

Persistent pages are swapped-out pages, whose data can't be dropped by
tmem. The rule for persistent pages is:
'current_domain_pages + persistent_pages has to be smaller than
domain->max_pages'.

Ephemeral pages are clean page cache pages, which already have a copy on
disk.
The number of ephemeral pages is not limited, but the Xen host will
reclaim those pages when under memory pressure.
There is a tmem parameter 'weight' which can be used to control how many
ephemeral pages should be reclaimed from each domain.

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 03:33:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 03:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHmHx-0006gv-DJ; Mon, 24 Feb 2014 03:33:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WHmHv-0006gq-Kl
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 03:33:03 +0000
Received: from [85.158.139.211:53267] by server-4.bemta-5.messagelabs.com id
	AA/C9-08092-E6DBA035; Mon, 24 Feb 2014 03:33:02 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393212780!5772852!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6252 invoked from network); 24 Feb 2014 03:33:01 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 03:33:01 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1O3Wvae000980
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 03:32:58 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1O3Wu4t026526
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 24 Feb 2014 03:32:56 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1O3WumL023355; Mon, 24 Feb 2014 03:32:56 GMT
Received: from [192.168.0.100] (/116.227.152.143)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Sun, 23 Feb 2014 19:32:55 -0800
Message-ID: <530ABD5C.10506@oracle.com>
Date: Mon, 24 Feb 2014 11:32:44 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
	<CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
In-Reply-To: <CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 02/21/2014 12:35 PM, Vasiliy Tolstov wrote:
> 2014-02-20 9:52 GMT+04:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
>> Hello. I have some problems with swap files in domU - I have SSD disks
>> that cache all I/O, and if users use swap, the SSDs may fail very often.
>> Is it possible to use tmem frontswap without a swap file at all, and
>> transparently push swap pages to tmem?
> 
> 
> Okay, as I see it, that isn't possible.
> Another question - is it possible to reserve tmem for domains at a specific size?
> For example, I need to get 20Gb for one domain and 10Gb for another.
> But if the second domain is very hungry, it shouldn't be able to
> eat all the memory.
> 

Two types of page can be stored in tmem: persistent pages and ephemeral pages.

Persistent pages are swapped-out pages whose data can't be dropped by
tmem. The rule for persistent pages is:
'current_domain_pages + persistent_pages have to be smaller than
domain->max_pages'.

Ephemeral pages are clean pagecache pages; they already have a copy on
disk. The number of ephemeral pages is not limited, but the Xen host
will reclaim them when under memory pressure. There is a tmem parameter
'weight' which can be used to control how many ephemeral pages are
reclaimed from each domain.
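
The two rules above can be sketched roughly as follows. This is a
minimal Python illustration of the described behavior only; all function
and field names are hypothetical and not Xen's actual tmem interface:

```python
# Hypothetical sketch of the tmem accounting rules described above;
# names are illustrative, not Xen's real API.

def can_store_persistent(current_pages, persistent_pages, max_pages):
    """Admit one more persistent (swapped-out) page only while the
    domain's resident pages plus its persistent tmem pages stay
    below the domain's max_pages limit."""
    return current_pages + persistent_pages + 1 < max_pages

def plan_ephemeral_reclaim(domains, pages_needed):
    """Ephemeral (clean pagecache) pages are not limited in number,
    but under memory pressure the host reclaims them; split the
    reclaim target across domains in proportion to each domain's
    'weight' parameter."""
    total_weight = sum(d["weight"] for d in domains)
    plan = {}
    for d in domains:
        share = round(pages_needed * d["weight"] / total_weight)
        # Never plan to reclaim more than a domain actually holds.
        plan[d["name"]] = min(share, d["ephemeral"])
    return plan
```

So a persistent page is refused once the domain would exceed its memory
limit, while ephemeral pages are always accepted but reclaimed by weight
under pressure.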

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 03:42:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 03:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHmQr-0006pH-IT; Mon, 24 Feb 2014 03:42:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <machi1271@gmail.com>) id 1WHmQp-0006p7-D4
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 03:42:15 +0000
Received: from [193.109.254.147:47226] by server-5.bemta-14.messagelabs.com id
	3B/79-16688-69FBA035; Mon, 24 Feb 2014 03:42:14 +0000
X-Env-Sender: machi1271@gmail.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393213332!1009285!1
X-Originating-IP: [209.85.160.66]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6565 invoked from network); 24 Feb 2014 03:42:13 -0000
Received: from mail-pb0-f66.google.com (HELO mail-pb0-f66.google.com)
	(209.85.160.66)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 03:42:13 -0000
Received: by mail-pb0-f66.google.com with SMTP id ma3so2870174pbc.1
	for <xen-devel@lists.xen.org>; Sun, 23 Feb 2014 19:42:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=0e60oObphPVCJd0Zyt8dnGJlvrQIw44hlby/XqRQlDs=;
	b=Z/jbhKWEj0zbN/1sI3P2GEsM9PRQ1F3seIHuzV3Cz4lbV6dJXBpjvGmhDXAgW+MfFd
	E5grZQnhCCNbF7D1qdW7tvUByDIpHq6Jk49kdHj7mPNJ7tj9nOKaARzGNVKxp9zcfzyb
	fAIPVrScowcJ+X+iPi2+fvQ/Zi1+D4NoLHqeUkXsHcL3lR49E1x7Jow4Q4U7dhx8i44f
	bydFfd+ZHdMh5XwPmz1YuVETF+JuYonxbZVWtwKxzaFz60nW4ABbHGmVm0AXo+uVWIRe
	EpWYkn9Qr/eLMmdRhQN/er9TuJ3QTIjdwr+XNh1f5tJrwLjI4+NEgv3zkkFrWmkMcucy
	XWYw==
MIME-Version: 1.0
X-Received: by 10.66.243.131 with SMTP id wy3mr11231950pac.32.1393213331495;
	Sun, 23 Feb 2014 19:42:11 -0800 (PST)
Received: by 10.68.152.105 with HTTP; Sun, 23 Feb 2014 19:42:11 -0800 (PST)
Date: Mon, 24 Feb 2014 11:42:11 +0800
Message-ID: <CAMZDhMEFfBzmgEgGvAS2N4kcmbKgrvAGx6qgAVSiStMYO1t6Zg@mail.gmail.com>
From: chi ma <machi1271@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Add e1000 device to guest OS?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6369215367256816316=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6369215367256816316==
Content-Type: multipart/alternative; boundary=047d7b15a133b85f3204f31ec1ac

--047d7b15a133b85f3204f31ec1ac
Content-Type: text/plain; charset=ISO-8859-1

Hi all:

    Has anyone tried adding an Intel e1000 emulated device to a specific
guest OS?
    Do I have to apply some specific configurations?
    I've tried the steps supplied on this web page:

http://www.netservers.co.uk/articles/open-source-howtos/citrix_e1000_gigabit
    but it doesn't work...

Regards!

--047d7b15a133b85f3204f31ec1ac
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">HI ALL:<div><br><div>=A0 =A0 Does anyone try adding an INT=
EL e1000 emulated device to a specific guest OS?</div><div>=A0 =A0 Do I hav=
e to apply some specific configrations?</div><div>=A0 =A0 I&#39;ve tried th=
e steps supplied on this web page:</div>
<div>=A0 =A0 =A0 =A0=A0<a href=3D"http://www.netservers.co.uk/articles/open=
-source-howtos/citrix_e1000_gigabit">http://www.netservers.co.uk/articles/o=
pen-source-howtos/citrix_e1000_gigabit</a></div><div>=A0 =A0 but it doesn&#=
39;t work...</div>
<div><br></div><div>Regards!</div></div></div>

--047d7b15a133b85f3204f31ec1ac--


--===============6369215367256816316==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6369215367256816316==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 08:03:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 08:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHqVf-00086C-Kp; Mon, 24 Feb 2014 08:03:31 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>)
	id 1WHqVd-00085w-Gh; Mon, 24 Feb 2014 08:03:29 +0000
Received: from [85.158.139.211:36260] by server-11.bemta-5.messagelabs.com id
	D4/A5-23886-0DCFA035; Mon, 24 Feb 2014 08:03:28 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393228995!5736210!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNTI1MTEgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12440 invoked from network); 24 Feb 2014 08:03:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 08:03:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,533,1389744000"; 
	d="asc'?scan'208";a="103465112"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 08:03:05 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 03:03:04 -0500
Message-ID: <1393228968.32038.849.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 09:02:48 +0100
Organization: Citrix
X-Mailer: Evolution 3.10.3 (3.10.3-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-users@lists.xen.org" <xen-users@lists.xen.org>,
	xen <xen@lists.fedoraproject.org>, xs-devel@lists.xenserver.org,
	cl-mirage@lists.cam.ac.uk, xen-api@lists.xen.org
Subject: [Xen-devel] TODAY is Xen Document Day!
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4214842719588218478=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4214842719588218478==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-O00XatADD/sOt0Eld6jV"

--=-O00XatADD/sOt0Eld6jV
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Today is the day!=20

Xen Project Document Day is a day to help improve overall Xen=20
documentation, with emphasis on the Xen Project Wiki.=20

However, this Document Day is special -- it is the prep day for our
impending 4.4 release.

We have a good amount of solid documentation for 4.3, but we need to
update to cover 4.4.  The greatest software in the world is worthless
unless people understand how to use it.  If you are still looking for
a way to contribute to the upcoming release, your opportunity has
arrived.

Never participated in a Document Day? All the info you need is here:=20
http://wiki.xenproject.org/wiki/Xen_Document_Days=20

Not sure what needs attention? Here is the current TODO list:=20
http://wiki.xenproject.org/wiki/Xen_Document_Days/TODO=20

Add any documentation deficiencies you have come across while working=20
with Xen. Is there a subject you wrestled with? That's a perfect=20
opportunity for you to help shape the documentation into something=20
more useful for the next person who needs it!=20

If you haven't requested to be made a Wiki editor, just fill out the
form below:=20

http://xenproject.org/component/content/article/100-misc/145-request-to-be-=
made-a-wiki-editor.html=20

See you in #xendocs!=20

Dario=20
--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-O00XatADD/sOt0Eld6jV
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iEYEABECAAYFAlMK/KgACgkQk4XaBE3IOsTcTQCeIGJAR5iOaky2tt6nAr3s7D1i
lSYAn0OMliuyzcxQkErWSyFJu2yemsZm
=qpze
-----END PGP SIGNATURE-----

--=-O00XatADD/sOt0Eld6jV--


--===============4214842719588218478==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4214842719588218478==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 08:39:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 08:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHr3v-0000BU-0Q; Mon, 24 Feb 2014 08:38:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHr3t-0000BO-KL
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 08:38:53 +0000
Received: from [85.158.143.35:19400] by server-2.bemta-4.messagelabs.com id
	31/08-10891-D150B035; Mon, 24 Feb 2014 08:38:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393231131!7781928!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31798 invoked from network); 24 Feb 2014 08:38:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 08:38:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,533,1389744000"; d="scan'208";a="105131679"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 08:38:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 03:38:50 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WHr3p-0007nj-G7;
	Mon, 24 Feb 2014 08:38:49 +0000
Message-ID: <1393231128.22033.60.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 08:38:48 +0000
In-Reply-To: <21255.39733.80220.360509@mariner.uk.xensource.com>
References: <osstest-25157-mainreport@xen.org>
	<530727DC020000780011E27C@nat28.tlf.novell.com>
	<21255.16435.69137.96537@mariner.uk.xensource.com>
	<21255.39733.80220.360509@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions -
 trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-21 at 18:30 +0000, Ian Jackson wrote:
> Xen 4.1 puts things in /usr/lib64 otherwise.  This is wrong for
> Debian, and does not work at all on wheezy.

This is safe on newer, autoconf-using Xens, right? I think this is the
case because with newer Xen:
        $ git grep LIBLEAFDIR
        $

In which case:
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <ian.campbell@citrix.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 09:03:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 09:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHrR9-0000NV-Kx; Mon, 24 Feb 2014 09:02:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WHrR8-0000NN-Gv
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 09:02:54 +0000
Received: from [85.158.139.211:41921] by server-4.bemta-5.messagelabs.com id
	71/22-08092-CBA0B035; Mon, 24 Feb 2014 09:02:52 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393232571!5794969!1
X-Originating-IP: [81.169.146.217]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19455 invoked from network); 24 Feb 2014 09:02:51 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.217)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 09:02:51 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1393232571; l=569;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=WEZlh/Wv5x16dOtFDrUoMroiU9M=;
	b=J5bB9vLBeeGvuopn+Zs1Yq7wTQ/el7uUi7UM24OdaLiKfsoLnv15tizVgE7KcHnFIwW
	Ws3mhqg3vdlzgqifMaUh6nZ8XvRrjtDYv5kUrIcX195tAbqu4FS0lAQYzdcczBkN9MEmj
	MzxaSKSR/IMhkrsx3GW/Ec/qQwh7FiKCEhI=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.27 AUTH) with ESMTPSA id 605272q1O92moh3
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate);
	Mon, 24 Feb 2014 10:02:48 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 4D15050277; Mon, 24 Feb 2014 10:02:48 +0100 (CET)
Date: Mon, 24 Feb 2014 10:02:48 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Adel Amani <adel.amani66@yahoo.com>
Message-ID: <20140224090248.GB15597@aepfle.de>
References: <20140205123908.GA1198@aepfle.de>
	<1391689706.75705.YahooMailNeo@web161806.mail.bf1.yahoo.com>
	<20140206214018.GA14658@aepfle.de>
	<1391807749.53657.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140207222247.GA23234@aepfle.de>
	<1392796739.73315.YahooMailNeo@web161801.mail.bf1.yahoo.com>
	<20140219141207.GA9631@aepfle.de>
	<1392967119.28200.YahooMailNeo@web161805.mail.bf1.yahoo.com>
	<20140221093009.GA3187@aepfle.de>
	<1393000441.22839.YahooMailNeo@web161802.mail.bf1.yahoo.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393000441.22839.YahooMailNeo@web161802.mail.bf1.yahoo.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] function snprintf() in xen_save_domain.c for
	debugged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 21, Adel Amani wrote:

> can you please explain in more detail about: 'The "@@ " markers are from diff
> (1), so that patch(1) can do its work.' What lines of code do I need to add?

http://en.wikipedia.org/wiki/Diff
http://en.wikipedia.org/wiki/Patch_%28Unix%29

> do i need to add these lines of code:
> max_f = atoi(argv[4]);
>     si.flags = atoi(argv[5]);

Counter question: how will si.flags get its value without an assignment?

Please go and find a mentor to get this task done;
this list is the wrong forum for learning how to hack on source code.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 09:16:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 09:16:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHreC-0000Yv-LS; Mon, 24 Feb 2014 09:16:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHrdo-0000Yq-WE
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 09:16:01 +0000
Received: from [193.109.254.147:35649] by server-1.bemta-14.messagelabs.com id
	EA/A7-15438-0DD0B035; Mon, 24 Feb 2014 09:16:00 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393233359!2657984!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28757 invoked from network); 24 Feb 2014 09:15:59 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 09:15:59 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 09:15:58 +0000
Message-Id: <530B1BD3020000780011E9B8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 09:15:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52DFA2200200007800115B70@nat28.tlf.novell.com>
	<52DF9D46.7030904@citrix.com>
	<52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
	<20140124215652.GA18710@phenom.dumpdata.com>
	<20140205200708.GA9278@phenom.dumpdata.com>
	<20140221191833.GA8812@phenom.dumpdata.com>
In-Reply-To: <20140221191833.GA8812@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 20:18, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> 4). Make Xen do the bus re-assignment.
> 
> The attached patch is an interesting "solution" to the BIOS
> not doing the right bus-extending with SR-IOV devices.

Nice that you got this to work, but this is definitely not the route
to go: there's no way we can guarantee to do this re-numbering
on segments other than segment 0, since we can't necessarily
access the config spaces of devices on other segments before
Dom0 tells us the necessary bits of information.

Apart from that, I'd also really like to avoid duplicating code from
Linux into the hypervisor when all that is needed is making that
code work right in an admittedly rather special case.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 09:25:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 09:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHrn4-0000hp-9j; Mon, 24 Feb 2014 09:25:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <juergen.gross@ts.fujitsu.com>) id 1WHrn2-0000hk-22
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 09:25:32 +0000
Received: from [85.158.139.211:5828] by server-8.bemta-5.messagelabs.com id
	18/7E-05298-B001B035; Mon, 24 Feb 2014 09:25:31 +0000
X-Env-Sender: juergen.gross@ts.fujitsu.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393233928!1245205!1
X-Originating-IP: [80.70.172.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogODAuNzAuMTcyLjUxID0+IDIwMzQ5Mg==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2616 invoked from network); 24 Feb 2014 09:25:30 -0000
Received: from dgate20.ts.fujitsu.com (HELO dgate20.ts.fujitsu.com)
	(80.70.172.51)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 09:25:30 -0000
DomainKey-Signature: s=s1536a; d=ts.fujitsu.com; c=nofws; q=dns;
	h=X-SBRSScore:X-IronPort-AV:Received:X-IronPort-AV:
	Received:Message-ID:Date:From:Organization:User-Agent:
	MIME-Version:To:CC:Subject:References:In-Reply-To:
	Content-Type:Content-Transfer-Encoding;
	b=QGFHYXXvNOS6yEVjVTJ9gXv8Ej4W106R8R/swVFi0RG0i7xqMkP2Qx39
	vB0ChiogeWun071WxsnVIVnDI0uNqOhjTP3neKZQocsioPsDgr0BOQCYF
	ICFRYLMOHE+IhHGyWAXV9oo0qb5vuik/JGwjwoCeN/id7oTNJXrhGVQRl
	7bMqF9/uGc1Wf3T/bm6DE/nX530rbqGrTWYHdIh158ZPLr9kiEWesYgli
	RLoWTazou8NZDPevIrUKtXmpMDGSZ;
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
	d=ts.fujitsu.com; i=@ts.fujitsu.com; q=dns/txt;
	s=s1536b; t=1393233928; x=1424769928;
	h=message-id:date:from:mime-version:to:cc:subject:
	references:in-reply-to:content-transfer-encoding;
	bh=/3azxWq+XU8c7ZOFOua6TvIAdLcMiaz8HMTcoNU7xLg=;
	b=WHKQyvf9xjCbo0Hmy8eaLk3LHqguGFNMlMmiHvKtCFybf6Q26VUuiVqo
	srMCVQwRPfSMlTo+STK/zj5Yo1B62fvzZDVLsUbbxcVs0ryTY5urV5RCc
	DuMNNt3Utl28rm2Z7+Zs5wCb2hK3Qn10GNZGZx2k4dAiCA4maL3PpPDoF
	mBY+OkwnSOJ22LAUJj5Y1x1R7HCkWXzbmYTaxAxgux56MyJVKlB3HqxTZ
	EL906/LCNXszU4L3MgjQPd8OiQNqj;
X-SBRSScore: None
X-IronPort-AV: E=Sophos;i="4.97,533,1389740400"; d="scan'208";a="159700014"
Received: from unknown (HELO abgdate50u.abg.fsc.net) ([172.25.138.66])
	by dgate20u.abg.fsc.net with ESMTP; 24 Feb 2014 10:25:26 +0100
X-IronPort-AV: E=Sophos;i="4.97,533,1389740400"; d="scan'208";a="32153488"
Received: from mchverdon.mch.fsc.net (HELO [10.172.102.158]) ([10.172.102.158])
	by abgdate50u.abg.fsc.net with ESMTP; 24 Feb 2014 10:25:28 +0100
Message-ID: <530B1008.3090403@ts.fujitsu.com>
Date: Mon, 24 Feb 2014 10:25:28 +0100
From: Juergen Gross <juergen.gross@ts.fujitsu.com>
Organization: Fujitsu Technology Solutions GmbH
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>
References: <1646915994.20140213165604@gmail.com>
	<1392313015.32038.112.camel@Solace>
	<295276356.20140213222507@gmail.com>
	<6010385428.20140214120238@gmail.com>
	<1392398466.32038.334.camel@Solace>
	<752791084.20140217124616@gmail.com>
	<1392742549.32038.580.camel@Solace>
	<53039F55.3030901@terremark.com>
	<1392746781.32038.594.camel@Solace>
	<53059BB0.1000705@ts.fujitsu.com>
	<1392920545.32038.826.camel@Solace>
	<5306F2A9.1040503@ts.fujitsu.com>
	<1393003446.32038.832.camel@Solace>
In-Reply-To: <1393003446.32038.832.camel@Solace>
Cc: Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Don Slutz <dslutz@verizon.com>, xen-devel@lists.xen.org,
	Simon Martin <furryfuttock@gmail.com>,
	Nate Studer <nate.studer@dornerworks.com>
Subject: Re: [Xen-devel] Strange interdependace between domains
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 21.02.2014 18:24, Dario Faggioli wrote:
> On ven, 2014-02-21 at 07:31 +0100, Juergen Gross wrote:
>> On 20.02.2014 19:22, Dario Faggioli wrote:
>>> All true... To the point that I now also wonder what a suitable
>>> interface and a not too verbose output configuration could be...
>>
>> Well, looking at the available topology information I think it should look like
>> the following example:
>>
>> # xl cpupool-list --shareinfo
>> Name          CPUs   Sched     Active  Domain count  shared resources
>> Pool-0          1    credit       y         1        core:   lw_pool
>> lw_pool         1    credit       y         0        core:   Pool-0
>> bs2_pool        2    credit       y         1        socket: Pool-0,lw_pool
>>
>> What do you think?
>>
> Looks reasonable.

Another solution would be to add a --long option. This would have the advantage
of not having to choose between clobbering the table output and not being able
to show multiple optional information items.

So we could do something like:

# xl cpupool-list --long
Pool-0
   n-cpus:        1
   cpu-list:      0
   scheduler:     credit
   n-domains:     1
   domain-list:   Dom0
   res-share-lvl: core
   res-sharers:   lw_pool
lw_pool
   n-cpus:        1
   cpu-list:      1
   scheduler:     credit
   n-domains:     0
   domain-list:
   res-share-lvl: core
   res-sharers:   Pool-0
bs2_pool
   n-cpus:        2
   cpu-list:      2,3
   scheduler:     credit
   n-domains:     1
   domain-list:   BS2000
   res-share-lvl: socket
   res-sharers:   Pool-0,lw_pool

We could add scheduler parameters, NUMA-information, ... as well.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 62060 2932
Fujitsu                                   e-mail: juergen.gross@ts.fujitsu.com
Mies-van-der-Rohe-Str. 8                Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 09:51:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 09:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsBj-00017N-Mp; Mon, 24 Feb 2014 09:51:03 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHsBh-00017I-IH
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 09:51:03 +0000
Received: from [85.158.143.35:39105] by server-2.bemta-4.messagelabs.com id
	46/B6-10891-4061B035; Mon, 24 Feb 2014 09:51:00 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393235459!7810229!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30647 invoked from network); 24 Feb 2014 09:51:00 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 09:51:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,533,1389744000"; d="scan'208";a="105142310"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 09:50:58 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 04:50:57 -0500
Message-ID: <1393235457.16570.10.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 09:50:57 +0000
In-Reply-To: <osstest-25278-mainreport@xen.org>
References: <osstest-25278-mainreport@xen.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-linus test] 25278: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-23 at 17:32 +0000, xen.org wrote:

>  build-armhf-pvops             4 kernel-build                 fail   never pass 

This is:
        ERROR: "__bad_udelay" [drivers/scsi/bfa/bfa.ko] undefined!
which is a deliberately forced link error due to an over-large udelay
parameter (see also
http://osdir.com/ml/scm-fedora-commits/2014-01/msg14211.html)

I could do this for ARM only if you prefer, but in the first instance
that seemed like unnecessary faff.

8<-----------------------

>From e3d00b001e57ab862032afd5d8d74183e52917a2 Mon Sep 17 00:00:00 2001
From: Ian Campbell <ian.campbell@citrix.com>
Date: Mon, 24 Feb 2014 09:42:28 +0000
Subject: [PATCH] Disable "Brocade BFA Fibre Channel Support"

This driver is broken on ARM:
    ERROR: "__bad_udelay" [drivers/scsi/bfa/bfa.ko] undefined!

I've taken the lazy way out and disabled it on all platforms. I think it isn't
especially likely that any of the current osstest hosts are using Fibre
Channel right now. The code to enable it came from a big batch addition of
drivers in 451f39c6149e.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 ts-kernel-build | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-kernel-build b/ts-kernel-build
index 9b92ffc..05d9e96 100755
--- a/ts-kernel-build
+++ b/ts-kernel-build
@@ -151,7 +151,7 @@ setopt CONFIG_SCSI_DC390T m
 setopt CONFIG_SCSI_NSP32 m
 setopt CONFIG_SCSI_PMCRAID m
 setopt CONFIG_SCSI_SRP m
-setopt CONFIG_SCSI_BFA_FC m
+setopt CONFIG_SCSI_BFA_FC n
 
 setopt CONFIG_MEGARAID_NEWGEN y
 setopt CONFIG_MEGARAID_MM m
-- 
1.8.5.2




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 09:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 09:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsFA-0001FK-CZ; Mon, 24 Feb 2014 09:54:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHsF9-0001FE-6I
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 09:54:35 +0000
Received: from [85.158.143.35:38934] by server-3.bemta-4.messagelabs.com id
	CB/60-11539-AD61B035; Mon, 24 Feb 2014 09:54:34 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393235672!7791362!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26077 invoked from network); 24 Feb 2014 09:54:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 09:54:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,533,1389744000"; d="scan'208";a="105142782"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 09:54:32 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 04:54:31 -0500
Message-ID: <1393235670.16570.12.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Mon, 24 Feb 2014 09:54:30 +0000
In-Reply-To: <CAB=NE6U96UcnvHvoOU2Z62VkkthGMLNaQrL0rv4+Px8-Li_W=g@mail.gmail.com>
References: <osstest-25268-mainreport@xen.org>
	<CAB=NE6U96UcnvHvoOU2Z62VkkthGMLNaQrL0rv4+Px8-Li_W=g@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-linus test] 25268: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, 2014-02-22 at 12:49 -0800, Luis R. Rodriguez wrote:
> On Sat, Feb 22, 2014 at 10:34 AM, xen.org <ian.jackson@eu.citrix.com> wrote:
> > Test harness code can be found at
> >     http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary
> 
> Is this the latest page for the documentation of the test harness?
> 
> http://wiki.xen.org/wiki/XenTest

TBH I think that page should just be archived; it doesn't contain
anything of any use.

> That is basically just saying there are no docs but the README seems
> to have good stuff, is that pretty up to date or should that be
> considered pretty outdated to get started?

The README should be a reasonable starting point, yes. It was written
long after the wiki page.

> 
> http://xenbits.xensource.com/gitweb/?p=osstest.git;a=blob;f=README
> 
>   Luis
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 09:58:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 09:58:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsJ8-0001QD-5n; Mon, 24 Feb 2014 09:58:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHsJ6-0001Q5-BN
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 09:58:40 +0000
Received: from [193.109.254.147:51481] by server-7.bemta-14.messagelabs.com id
	F0/C7-23424-FC71B035; Mon, 24 Feb 2014 09:58:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393235917!2604597!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11637 invoked from network); 24 Feb 2014 09:58:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 09:58:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105143351"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 09:58:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 04:58:36 -0500
Message-ID: <1393235915.16570.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@linaro.org>
Date: Mon, 24 Feb 2014 09:58:35 +0000
In-Reply-To: <53065AD3.7090106@linaro.org>
References: <1392149085-14366-1-git-send-email-julien.grall@linaro.org>
	<1392149085-14366-5-git-send-email-julien.grall@linaro.org>
	<1392812318.29739.31.camel@kazak.uk.xensource.com>
	<53065AD3.7090106@linaro.org>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, tim@xen.org, stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [RFC for-4.5 4/5] xen/arm: Remove processor
 specific setup in vcpu_initialise
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 19:43 +0000, Julien Grall wrote:
> On 02/19/2014 12:18 PM, Ian Campbell wrote:
> 
> >> + *
> >> + * Julien Grall <julien.grall@linaro.org>
> >> + * Copyright (c) 2014 Linaro Limited.
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License as published by
> >> + * the Free Software Foundation; either version 2 of the License, or
> >> + * (at your option) any later version.
> >> + *
> >> + * This program is distributed in the hope that it will be useful,
> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >> + * GNU General Public License for more details.
> >> + */
> >> +#include <asm/procinfo.h>
> >> +#include <asm/processor.h>
> >> +
> >> +static void armv7_vcpu_initialize(struct vcpu *v)
> >> +{
> >> +    if ( v->domain->max_vcpus > 1 )
> >> +        v->arch.actlr |= ACTLR_V7_SMP;
> >> +    else
> >> +        v->arch.actlr &= ~ACTLR_V7_SMP;
> >> +}
> >> +
> >> +const struct processor armv7_processor = {
> > 
> > __rodata? (or whatever it is called)
> 
> I forgot to answer this part. The compiler will put it in rodata by
> default. Did you mean __initconst? If so, we can't, because I use a
> pointer to this structure in arch/arm/processor.c

I meant __initconst yes. Leaving it in rodata is fine.

__init indeed doesn't make sense now you make me think of it.

> If we want to save space, we can copy it in another variable in
> processor_setup.

No need IMHO (at least not yet, maybe if we end up with dozens of
these...).

Thanks.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:00:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsLF-0001dK-DE; Mon, 24 Feb 2014 10:00:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHsLE-0001dC-5i
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 10:00:52 +0000
Received: from [85.158.137.68:5943] by server-16.bemta-3.messagelabs.com id
	BF/3C-29917-3581B035; Mon, 24 Feb 2014 10:00:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393236048!2514586!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17458 invoked from network); 24 Feb 2014 10:00:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 10:00:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105143837"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 10:00:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 05:00:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHsL9-0004dE-PN;
	Mon, 24 Feb 2014 10:00:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHsL9-0005z0-Nr;
	Mon, 24 Feb 2014 10:00:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25281-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 10:00:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25281: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25281 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25281/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:00:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsLF-0001dK-DE; Mon, 24 Feb 2014 10:00:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHsLE-0001dC-5i
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 10:00:52 +0000
Received: from [85.158.137.68:5943] by server-16.bemta-3.messagelabs.com id
	BF/3C-29917-3581B035; Mon, 24 Feb 2014 10:00:51 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393236048!2514586!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17458 invoked from network); 24 Feb 2014 10:00:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 10:00:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105143837"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 10:00:48 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 05:00:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHsL9-0004dE-PN;
	Mon, 24 Feb 2014 10:00:47 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHsL9-0005z0-Nr;
	Mon, 24 Feb 2014 10:00:47 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25281-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 10:00:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25281: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25281 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25281/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:07:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsRL-0001xn-BJ; Mon, 24 Feb 2014 10:07:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lamhaison@gmail.com>) id 1WHru9-0000qZ-J0
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 09:32:53 +0000
Received: from [85.158.139.211:7616] by server-14.bemta-5.messagelabs.com id
	F9/11-27598-4C11B035; Mon, 24 Feb 2014 09:32:52 +0000
X-Env-Sender: lamhaison@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393234371!5805099!1
X-Originating-IP: [209.85.215.66]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32324 invoked from network); 24 Feb 2014 09:32:52 -0000
Received: from mail-la0-f66.google.com (HELO mail-la0-f66.google.com)
	(209.85.215.66)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 09:32:52 -0000
Received: by mail-la0-f66.google.com with SMTP id mc6so950394lab.9
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 01:32:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:content-type; bh=tnR8bTcVCIscDEYuEYinQRR3am8O7fR6JnYz5mo8bGA=;
	b=l3HlI3y+cFPeaJXUj30C0RanlxViXxmnU8uG3/za+QYpgdNPoTLZqBnVrw9WHefucj
	mvEvL42RYGagh/N+i+x29EoZUA5/7vDO9M2rLLpygxd8uoWdINppIHdLdKL+ZUjSulvI
	zOIDk6fYBC1QSUgAp5Qa0s61fmsgFvTyOI+fFtgCyhg4yEhSLqlyj969CqFEJ7aNRr9H
	OiERUGluXreIsRP0g9dex3Qow2l91z5gtdfRnFT+F9+HOTJO9O3uCN277t5AvzkHE++T
	hjGUEj/wl9sKCt9dEu0tqgGIAcw5uqneQqTpgDPFSQilC1aCXCbpuwlRGhJ0DntnZqTM
	IXLw==
MIME-Version: 1.0
X-Received: by 10.152.242.131 with SMTP id wq3mr11660271lac.12.1393234371114; 
	Mon, 24 Feb 2014 01:32:51 -0800 (PST)
Received: by 10.114.0.72 with HTTP; Mon, 24 Feb 2014 01:32:51 -0800 (PST)
In-Reply-To: <CAN90_rkxUvEDkXLrDkbp-tZLj_OmkMiN+XYpPfqqZioHUBgBUA@mail.gmail.com>
References: <CAN90_rkxUvEDkXLrDkbp-tZLj_OmkMiN+XYpPfqqZioHUBgBUA@mail.gmail.com>
Date: Mon, 24 Feb 2014 16:32:51 +0700
Message-ID: <CAN90_r=K2GPMFvNsc4cWzth7BgEoa5rxTCVzn8Uk5_CfOvPT0w@mail.gmail.com>
From: =?UTF-8?B?TMOibSBI4bqjaSBTxqFu?= <lamhaison@gmail.com>
To: xen-devel@lists.xenproject.org
X-Mailman-Approved-At: Mon, 24 Feb 2014 10:07:09 +0000
Subject: [Xen-devel] Fwd: [How to read xen source]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1102228678188664101=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1102228678188664101==
Content-Type: multipart/alternative; boundary=001a11336ba0c77d6704f323a7d6

--001a11336ba0c77d6704f323a7d6
Content-Type: text/plain; charset=ISO-8859-1

Hello!
I'm a student and I'm interested in the Xen source code. I want to modify the
source, but before I do that I need to understand it. My question is: how
should I go about reading the Xen source? Should I use a tool to help me, such
as Source Navigator, and if so, how? As developers, could you also point me to
where the code that restarts and deletes virtual machines lives in Xen?
Thank you for your help!
-- 
LHS




-- 
LHS

--001a11336ba0c77d6704f323a7d6--


--===============1102228678188664101==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1102228678188664101==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 10:13:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:13:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsX8-0002Av-Dn; Mon, 24 Feb 2014 10:13:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pasik@iki.fi>) id 1WHsX7-0002Ao-5Q
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 10:13:09 +0000
Received: from [193.109.254.147:52827] by server-7.bemta-14.messagelabs.com id
	09/D1-23424-43B1B035; Mon, 24 Feb 2014 10:13:08 +0000
X-Env-Sender: pasik@iki.fi
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393236787!2371860!1
X-Originating-IP: [62.142.5.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n,sa_preprocessor: 
	QmFkIElQOiA2Mi4xNDIuNS4xMDggPT4gOTU3MDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29337 invoked from network); 24 Feb 2014 10:13:07 -0000
Received: from emh02.mail.saunalahti.fi (HELO emh02.mail.saunalahti.fi)
	(62.142.5.108)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 10:13:07 -0000
Received: from ydin.reaktio.net (reaktio.net [85.76.255.15])
	by emh02.mail.saunalahti.fi (Postfix) with ESMTP id 585EF81983;
	Mon, 24 Feb 2014 12:13:06 +0200 (EET)
Received: by ydin.reaktio.net (Postfix, from userid 1001)
	id 1445836C01F; Mon, 24 Feb 2014 12:13:06 +0200 (EET)
Date: Mon, 24 Feb 2014 12:13:05 +0200
From: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
To: M A Young <m.a.young@durham.ac.uk>
Message-ID: <20140224101305.GV3200@reaktio.net>
References: <alpine.DEB.2.00.1402232115250.18519@procyon.dur.ac.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.00.1402232115250.18519@procyon.dur.ac.uk>
User-Agent: Mutt/1.5.20 (2009-06-14)
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [BUG] xl create -c with pygrub hangs in xen
 4.4.0-rc5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, Feb 23, 2014 at 09:36:44PM +0000, M A Young wrote:
> I was trying xen-4.4.0-rc5 with my standard set up (booting a pv
> guest using pygrub as a bootloader) but this no longer works. I
> would expect to get a boot menu where I could select a kernel and
> the guest, but nothing happens. If I examine the bootloader.1.log
> file I find the pygrub output I would expect to see on the console,
> and the output of xenstore-ls suggests pygrub is selecting the
> default kernel (as it would without input) and exiting, but xentop
> doesn't report any cpu usage on the guest. It seems xl has created a
> child process that doesn't exit, though if I kill the child process
> by hand the boot does continue.
> 
> I traced the change in behaviour to the commit http://xenbits.xenproject.org/gitweb/?p=xen.git;a=commit;h=5f0c4a78100382972b4d2a71a04b90e015e9fe87
> "libxl: fork: Share SIGCHLD handler amongst ctxs". If I revert this
> then I get the expected behaviour again, though I haven't worked out
> why this patch cause the effects I am seeing.
> 

Uh oh... I wonder whether Ian (the author of the libxl changes in question) has any ideas about why this might be happening.

This is the kind of bug that is not easy to detect even with automated pygrub testing;
interactive testing is pretty much required to notice it.

-- Pasi

> 	Michael Young
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:14:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:14:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsYj-0002H6-UE; Mon, 24 Feb 2014 10:14:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHsYi-0002Gr-5H
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 10:14:48 +0000
Received: from [193.109.254.147:37994] by server-5.bemta-14.messagelabs.com id
	10/6F-16688-79B1B035; Mon, 24 Feb 2014 10:14:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393236144!1083001!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3436 invoked from network); 24 Feb 2014 10:02:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 10:02:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 10:02:24 +0000
Message-Id: <530B26BC020000780011EA65@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 10:02:20 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	StefanoStabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of
	macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 21:41, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> This is some coverity-inspired tidying.
> 
> Coverity has some grief analysing the call sites of atomic_read().  This is
> believed to be a bug in Coverity itself when expanding the nested macros, 
> but
> there is no legitimate reason for it to be a macro in the first place.
> 
> This patch changes {,_}atomic_{read,set}() from being macros to being static
> inline functions, thus gaining some type safety.
> 
> One issue which is not immediately obvious is that the non-atomic variants
> take
> their atomic_t at a different level of indirection to the atomic variants.
> 
> This is not suitable for _atomic_set() (when used to initialise an atomic_t)
> which is converted to take its parameter as a pointer.  One callsite of
> _atomic_set() is updated, while the other two callsites are updated to
> ATOMIC_INIT().

Did you consider leaving these "non-atomic atomic ops" untouched
(as they don't involve macro nesting), altering only the "real" ones?

Jan
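
[The conversion the quoted patch describes can be sketched as below. This is a minimal illustration only, not the actual Xen code: the `atomic_t` layout and `ATOMIC_INIT()` follow the common pattern, and the function bodies are simplified stand-ins.]

```c
#include <assert.h>

typedef struct { int counter; } atomic_t;

#define ATOMIC_INIT(i) { (i) }

/* Old style (kept as comments for contrast): plain macros, so a call
 * site passing the wrong type is silently accepted:
 *
 *   #define atomic_read(v)   ((v)->counter)
 *   #define _atomic_set(v,i) ((v).counter = (i))
 */

/* New style: static inline functions gain type checking on 'v', and
 * _atomic_set() now takes a pointer, matching the "real" atomic ops. */
static inline int atomic_read(const atomic_t *v)
{
    return v->counter;
}

static inline void atomic_set(atomic_t *v, int i)
{
    v->counter = i;
}

static inline void _atomic_set(atomic_t *v, int i)
{
    v->counter = i;
}
```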


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:14:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsYq-0002IV-A5; Mon, 24 Feb 2014 10:14:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHsYp-0002I7-0x
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 10:14:55 +0000
Received: from [85.158.139.211:32551] by server-14.bemta-5.messagelabs.com id
	C1/CD-27598-E9B1B035; Mon, 24 Feb 2014 10:14:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393236893!1260277!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8634 invoked from network); 24 Feb 2014 10:14:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 10:14:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 10:14:53 +0000
Message-Id: <530B29A9020000780011EA79@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 10:14:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com>
References: <52AAF9D7020000780010CF2C@nat28.tlf.novell.com>
	<20131213144319.GK2923@phenom.dumpdata.com>
	<52AB2E18020000780010D13B@nat28.tlf.novell.com>
	<52AB275D.2010401@bobich.net>
	<20140106202621.GA30667@phenom.dumpdata.com>
	<20140106214527.GA31147@phenom.dumpdata.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9A3304@SHSMSX104.ccr.corp.intel.com>
	<1e555bfbd9f1c2eccfb4721d90782d12@mail.shatteredsilicon.net>
	<52CBD92F.3050301@citrix.com>
	<7496ae8fbc23e417e234a3cfceee19e6@mail.shatteredsilicon.net>
	<20140221190811.GA9232@phenom.dumpdata.com>
In-Reply-To: <20140221190811.GA9232@phenom.dumpdata.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Gordan Bobic <gordan@bobich.net>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Multi-bridged PCIe devices (Was: Re: iommuu/vt-d
 issues with LSI MegaSAS (PERC5i))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 21.02.14 at 20:08, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> The 0004-xen-pci-Introduce-a-way-to-deal-with-buggy-hardware-.patch
> is what is interesting.
> 
> If there is an interest in upstreaming this I can take a look -
> but I will need guidance from Jan how he would like to do it.

Well, I don't know. I'm not really in favor of adding such hacks,
since this doesn't scale: What if someone else breaks their
hardware in a different way, and we want to work around that
too? Such workarounds could easily start conflicting with one
another.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:28:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:28:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHslO-0002cy-9i; Mon, 24 Feb 2014 10:27:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHslM-0002ct-Tx
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 10:27:53 +0000
Received: from [85.158.143.35:37000] by server-2.bemta-4.messagelabs.com id
	1E/57-10891-8AE1B035; Mon, 24 Feb 2014 10:27:52 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393237670!7812485!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13845 invoked from network); 24 Feb 2014 10:27:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 10:27:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105148339"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 10:27:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 05:27:49 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHsje-0001Bx-Gj;
	Mon, 24 Feb 2014 10:26:06 +0000
Message-ID: <530B1E3E.6040805@citrix.com>
Date: Mon, 24 Feb 2014 10:26:06 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
	<530B26BC020000780011EA65@nat28.tlf.novell.com>
In-Reply-To: <530B26BC020000780011EA65@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	StefanoStabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of
	macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 10:02, Jan Beulich wrote:
>>>> On 21.02.14 at 21:41, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> This is some coverity-inspired tidying.
>>
>> Coverity has some grief analysing the call sites of atomic_read().  This is
>> believed to be a bug in Coverity itself when expanding the nested macros, 
>> but
>> there is no legitimate reason for it to be a macro in the first place.
>>
>> This patch changes {,_}atomic_{read,set}() from being macros to being static
>> inline functions, thus gaining some type safety.
>>
>> One issue which is not immediately obvious is that the non-atomic variants
>> take
>> their atomic_t at a different level of indirection to the atomic variants.
>>
>> This is not suitable for _atomic_set() (when used to initialise an atomic_t)
>> which is converted to take its parameter as a pointer.  One callsite of
>> _atomic_set() is updated, while the other two callsites are updated to
>> ATOMIC_INIT().
> Did you consider leaving these "non-atomic atomic ops" untouched
> (as they don't involve macro nesting), altering only the "real" ones?
>
> Jan
>

Yes, but for the sake of three updates at callsites, I felt the benefits
outweighed the costs.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:40:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHsx8-0002nU-D4; Mon, 24 Feb 2014 10:40:02 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHsx5-0002nP-VP
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 10:40:00 +0000
Received: from [193.109.254.147:42057] by server-15.bemta-14.messagelabs.com
	id F3/9E-10839-F712B035; Mon, 24 Feb 2014 10:39:59 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393238398!6395607!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25455 invoked from network); 24 Feb 2014 10:39:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 10:39:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 10:39:58 +0000
Message-Id: <530B2F8A020000780011EAA4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 10:39:54 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-9-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-9-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 08/15] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
> +int iommu_do_domctl(
> +    struct xen_domctl *domctl, struct domain *d,
> +    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> -    const struct iommu_ops *ops = iommu_get_ops();
> -    if ( iommu_intremap )
> -        ops->read_msi_from_ire(msi_desc, msg);
> -}
> +    int ret = 0;
>  
> -unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
> -{
> -    const struct iommu_ops *ops = iommu_get_ops();
> -    return ops->read_apic_from_ire(apic, reg);
> -}
> +    if ( !iommu_enabled )
> +        return -ENOSYS;
>  
> -int __init iommu_setup_hpet_msi(struct msi_desc *msi)
> -{
> -    const struct iommu_ops *ops = iommu_get_ops();
> -    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
> -}
> +    switch ( domctl->cmd )
> +    {
> +#ifdef HAS_PCI
> +    case XEN_DOMCTL_get_device_group:
> +    case XEN_DOMCTL_test_assign_device:
> +    case XEN_DOMCTL_assign_device:
> +    case XEN_DOMCTL_deassign_device:
> +        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
> +        break;
> +#endif
> +    default:
> +        ret = -ENOSYS;
> +    }

Please simply have the default case chain to iommu_do_pci_domctl(),
avoiding the need to change two source files when new sub-ops get
added.

Also, last case in the set of case statements or not - the default
case should also have a break statement.
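
[Jan's two suggestions combined would make the dispatch look roughly like this. A hypothetical sketch only: the types, the ENOSYS fallback inside iommu_do_pci_domctl(), and the illustrative command value stand in for the real Xen definitions.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Simplified stand-ins for the real Xen types (hypothetical). */
struct xen_domctl { unsigned int cmd; };
struct domain;

#define XEN_DOMCTL_assign_device 1  /* illustrative value only */

/* Stub: the real function handles the PCI sub-ops and rejects others. */
static int iommu_do_pci_domctl(struct xen_domctl *domctl, struct domain *d)
{
    (void)d;
    return domctl->cmd == XEN_DOMCTL_assign_device ? 0 : -ENOSYS;
}

/* The default case chains to iommu_do_pci_domctl(), so a new PCI
 * sub-op only needs handling in one source file; note that the
 * default case also ends with a break statement. */
static int iommu_do_domctl(struct xen_domctl *domctl, struct domain *d)
{
    int ret;

    switch ( domctl->cmd )
    {
    default:
        ret = iommu_do_pci_domctl(domctl, d);
        break;
    }

    return ret;
}
```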

> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
>...
> @@ -980,6 +983,440 @@ static int __init setup_dump_pcidevs(void)
>  }
>  __initcall(setup_dump_pcidevs);
>  
> +static int iommu_populate_page_table(struct domain *d)
> +{

Now why is this function being moved here? It doesn't appear to
have anything PCI specific at all.

> --- /dev/null
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -0,0 +1,65 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +
> +#include <xen/sched.h>
> +#include <xen/iommu.h>
> +#include <xen/paging.h>
> +#include <xen/guest_access.h>
> +#include <xen/event.h>
> +#include <xen/softirq.h>
> +#include <xsm/xsm.h>
> +
> +void iommu_update_ire_from_apic(
> +    unsigned int apic, unsigned int reg, unsigned int value)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    ops->update_ire_from_apic(apic, reg, value);
> +}

While one might argue that this one is x86-specific (albeit from past
IA64 days we know it isn't entirely), ...

> +
> +int iommu_update_ire_from_msi(
> +    struct msi_desc *msi_desc, struct msi_msg *msg)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
> +}

... this one clearly isn't - I'm sure once you support PCI on ARM, you
will also want/need to support MSI. (The same then of course goes
for the respective functions' declarations.)

> --- /dev/null
> +++ b/xen/include/asm-x86/iommu.h
> @@ -0,0 +1,46 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> +*/
> +#ifndef __ARCH_X86_IOMMU_H__
> +#define __ARCH_X86_IOMMU_H__
> +
> +#define MAX_IOMMUS 32
> +
> +#include <asm/msi.h>

Please don't, if at all possible.

> @@ -84,52 +82,56 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
>  bool_t pt_irq_need_timer(uint32_t flags);
>  
>  #define PT_IRQ_TIME_OUT MILLISECS(8)
> +#endif /* HAS_PCI */
>  
> +#ifdef CONFIG_X86
>  struct msi_desc;
>  struct msi_msg;
> +#endif /* CONFIG_X86 */

Hardly - this again is a direct descendant from PCI.

> +#ifdef CONFIG_X86
>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
>      int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
>      void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
>      unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
>      int (*setup_hpet_msi)(struct msi_desc *);
> +    void (*share_p2m)(struct domain *d);
> +#endif /* CONFIG_X86 */

Is that last one really x86-specific in any way?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
> +int iommu_do_domctl(
> +    struct xen_domctl *domctl, struct domain *d,
> +    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> -    const struct iommu_ops *ops = iommu_get_ops();
> -    if ( iommu_intremap )
> -        ops->read_msi_from_ire(msi_desc, msg);
> -}
> +    int ret = 0;
>  
> -unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
> -{
> -    const struct iommu_ops *ops = iommu_get_ops();
> -    return ops->read_apic_from_ire(apic, reg);
> -}
> +    if ( !iommu_enabled )
> +        return -ENOSYS;
>  
> -int __init iommu_setup_hpet_msi(struct msi_desc *msi)
> -{
> -    const struct iommu_ops *ops = iommu_get_ops();
> -    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
> -}
> +    switch ( domctl->cmd )
> +    {
> +#ifdef HAS_PCI
> +    case XEN_DOMCTL_get_device_group:
> +    case XEN_DOMCTL_test_assign_device:
> +    case XEN_DOMCTL_assign_device:
> +    case XEN_DOMCTL_deassign_device:
> +        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
> +        break;
> +#endif
> +    default:
> +        ret = -ENOSYS;
> +    }

Please simply have the default case chain to iommu_do_pci_domctl(),
avoiding the need to change two source files when new sub-ops get
added.

Also, last case in the set of case statements or not - the default
case should also have a break statement.
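A minimal, self-contained sketch of the shape Jan is asking for — the struct definitions and the sub-op value here are hypothetical stand-ins for the real Xen types, kept just small enough to compile on its own:

```c
#include <errno.h>
#include <stddef.h>

/* Hypothetical stand-ins for the Xen types and sub-op numbers. */
struct xen_domctl { unsigned int cmd; };
struct domain;

#define XEN_DOMCTL_assign_device 1  /* placeholder value */

/* Stub PCI handler: it already returns -ENOSYS for sub-ops it does
 * not recognise, so common code need not enumerate them. */
static int iommu_do_pci_domctl(struct xen_domctl *domctl, struct domain *d)
{
    (void)d;
    return domctl->cmd == XEN_DOMCTL_assign_device ? 0 : -ENOSYS;
}

int iommu_do_domctl(struct xen_domctl *domctl, struct domain *d)
{
    int ret;

    switch ( domctl->cmd )
    {
    /* Bus-agnostic sub-ops would be handled here, each case ending
     * with its own break. */
    default:
        /* Chain to the PCI handler instead of listing its sub-ops,
         * and give the default case a break statement as well. */
        ret = iommu_do_pci_domctl(domctl, d);
        break;
    }

    return ret;
}
```

With this shape, adding a new PCI sub-op only touches `iommu_do_pci_domctl()`, not this dispatcher.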

> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
>...
> @@ -980,6 +983,440 @@ static int __init setup_dump_pcidevs(void)
>  }
>  __initcall(setup_dump_pcidevs);
>  
> +static int iommu_populate_page_table(struct domain *d)
> +{

Now why is this function being moved here? It doesn't appear to
have anything PCI specific at all.

> --- /dev/null
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -0,0 +1,65 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> + */
> +
> +#include <xen/sched.h>
> +#include <xen/iommu.h>
> +#include <xen/paging.h>
> +#include <xen/guest_access.h>
> +#include <xen/event.h>
> +#include <xen/softirq.h>
> +#include <xsm/xsm.h>
> +
> +void iommu_update_ire_from_apic(
> +    unsigned int apic, unsigned int reg, unsigned int value)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    ops->update_ire_from_apic(apic, reg, value);
> +}

While one might argue that this one is x86-specific (albeit from past
IA64 days we know it isn't entirely), ...

> +
> +int iommu_update_ire_from_msi(
> +    struct msi_desc *msi_desc, struct msi_msg *msg)
> +{
> +    const struct iommu_ops *ops = iommu_get_ops();
> +    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
> +}

... this one clearly isn't - I'm sure once you support PCI on ARM, you
will also want/need to support MSI. (The same then of course goes
for the respective functions' declarations.)

> --- /dev/null
> +++ b/xen/include/asm-x86/iommu.h
> @@ -0,0 +1,46 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
> + * Place - Suite 330, Boston, MA 02111-1307 USA.
> +*/
> +#ifndef __ARCH_X86_IOMMU_H__
> +#define __ARCH_X86_IOMMU_H__
> +
> +#define MAX_IOMMUS 32
> +
> +#include <asm/msi.h>

Please don't, if at all possible.
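Jan's objection is to pulling <asm/msi.h> into a widely-included header; forward declarations of the MSI structs are enough for prototypes that only pass the structs by pointer. A self-contained illustration of the idiom (the function and field names are invented for this sketch):

```c
#include <stddef.h>

/* Forward declarations are all a prototype needs when the structs are
 * only ever passed by pointer - no #include <asm/msi.h> required. */
struct msi_desc;
struct msi_msg;

int count_msi_args(struct msi_desc *desc, struct msi_msg *msg);

/* Only the implementation (or a caller that dereferences the structs)
 * needs full definitions; hypothetical minimal ones for this sketch: */
struct msi_desc { int irq; };
struct msi_msg  { unsigned int data; };

int count_msi_args(struct msi_desc *desc, struct msi_msg *msg)
{
    /* Count how many of the two pointers are non-NULL. */
    return (desc != NULL) + (msg != NULL);
}
```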

> @@ -84,52 +82,56 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
>  bool_t pt_irq_need_timer(uint32_t flags);
>  
>  #define PT_IRQ_TIME_OUT MILLISECS(8)
> +#endif /* HAS_PCI */
>  
> +#ifdef CONFIG_X86
>  struct msi_desc;
>  struct msi_msg;
> +#endif /* CONFIG_X86 */

Hardly - this again is a direct descendant from PCI.

> +#ifdef CONFIG_X86
>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
>      int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
>      void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
>      unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
>      int (*setup_hpet_msi)(struct msi_desc *);
> +    void (*share_p2m)(struct domain *d);
> +#endif /* CONFIG_X86 */

Is that last one really x86-specific in any way?

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:44:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHt18-0002uJ-Fv; Mon, 24 Feb 2014 10:44:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHt16-0002uC-VZ
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 10:44:09 +0000
Received: from [193.109.254.147:9322] by server-2.bemta-14.messagelabs.com id
	8D/F1-01236-8722B035; Mon, 24 Feb 2014 10:44:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393238647!6363837!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 874 invoked from network); 24 Feb 2014 10:44:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 10:44:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 10:44:07 +0000
Message-Id: <530B3083020000780011EAB1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 10:44:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-10-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
> --- a/xen/include/xen/hvm/iommu.h
> +++ b/xen/include/xen/hvm/iommu.h
> @@ -23,32 +23,8 @@
>  #include <xen/iommu.h>
>  #include <asm/hvm/iommu.h>
>  
> -struct g2m_ioport {
> -    struct list_head list;
> -    unsigned int gport;
> -    unsigned int mport;
> -    unsigned int np;
> -};
> -
> -struct mapped_rmrr {
> -    struct list_head list;
> -    u64 base;
> -    u64 end;
> -};
> -
>  struct hvm_iommu {
> -    u64 pgd_maddr;                 /* io page directory machine address */
> -    spinlock_t mapping_lock;       /* io page table lock */
> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
> -    struct list_head mapped_rmrrs;
> -
> -    /* amd iommu support */
> -    int domain_id;

At the very least this field doesn't look all that architecture specific,
even if it might only be used on x86/AMD right now.

> -    int paging_mode;

The same might go for this one.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:47:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHt4P-00032P-4r; Mon, 24 Feb 2014 10:47:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHt4O-00032J-62
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 10:47:32 +0000
Received: from [85.158.139.211:30559] by server-6.bemta-5.messagelabs.com id
	F5/6E-14342-3432B035; Mon, 24 Feb 2014 10:47:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393238850!5832239!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11850 invoked from network); 24 Feb 2014 10:47:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 10:47:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 10:47:29 +0000
Message-Id: <530B314D020000780011EAB4@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 10:47:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-11-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-11-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 10/15] xen/passthrough: iommu: Basic
 support of device tree assignment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
> Add IOMMU helpers to support device tree assignment/deassignment. This patch
> introduces 2 new fields in the dt_device_node:
>     - is_protected: whether the device is protected by an IOMMU
>     - next_assigned: Pointer to the next device assigned to the same
>     domain
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> ---
>     Changes in v2:
>         - Patch added
> ---
>  xen/common/device_tree.c              |    4 ++
>  xen/drivers/passthrough/Makefile      |    1 +
>  xen/drivers/passthrough/device_tree.c |  106 +++++++++++++++++++++++++++++++++
>  xen/drivers/passthrough/iommu.c       |   10 ++++
>  xen/include/xen/device_tree.h         |   14 +++++
>  xen/include/xen/hvm/iommu.h           |    6 ++
>  xen/include/xen/iommu.h               |   16 +++++

No matter how small the changes to generic IOMMU code, you should
Cc the maintainers.

> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -123,6 +123,12 @@ int iommu_domain_init(struct domain *d)
>      if ( ret )
>          return ret;
>  
> +#if HAS_DEVICE_TREE
> +    ret = iommu_dt_domain_init(d);
> +    if ( ret )
> +        return ret;
> +#endif

Why can this not be part of arch_iommu_domain_init()?

> @@ -198,6 +204,10 @@ void iommu_domain_destroy(struct domain *d)
>      if ( need_iommu(d) )
>          iommu_teardown(d);
>  
> +#ifdef HAS_DEVICE_TREE
> +    iommu_dt_domain_destroy(d);
> +#endif
> +
>      arch_iommu_domain_destroy(d);

And the former one here part of the latter?
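The restructuring Jan is hinting at would look roughly like this — simplified, runnable stand-ins rather than the real Xen code (the actual hook bodies would live in per-arch files, with empty equivalents on x86): the DT init/teardown moves into `arch_iommu_domain_init()`/`arch_iommu_domain_destroy()`, so the common functions carry no `HAS_DEVICE_TREE` #ifdefs at all.

```c
/* Simplified stand-in for struct domain, with a flag so the control
 * flow can be observed. */
struct domain { int dt_initialised; };

/* DT helpers as in the patch, stubbed out for the sketch. */
static int iommu_dt_domain_init(struct domain *d)
{
    d->dt_initialised = 1;
    return 0;
}

static void iommu_dt_domain_destroy(struct domain *d)
{
    d->dt_initialised = 0;
}

/* Per-arch hooks absorb the DT calls (these bodies would live in an
 * ARM-specific file). */
static int arch_iommu_domain_init(struct domain *d)
{
    return iommu_dt_domain_init(d);
}

static void arch_iommu_domain_destroy(struct domain *d)
{
    iommu_dt_domain_destroy(d);
}

/* Common code stays #ifdef-free. */
int iommu_domain_init(struct domain *d)
{
    return arch_iommu_domain_init(d);
}

void iommu_domain_destroy(struct domain *d)
{
    arch_iommu_domain_destroy(d);
}
```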

> @@ -28,6 +29,11 @@ struct hvm_iommu {
>  
>      /* iommu_ops */
>      const struct iommu_ops *platform_ops;
> +
> +    #ifdef HAS_DEVICE_TREE
> +    /* List of DT devices assigned to this domain */
> +    struct list_head dt_devices;
> +    #endif

Indentation.
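For reference, Xen style (like Linux's) keeps preprocessor directives in column 0 even inside indented blocks. A compilable sketch of the corrected fragment, with stand-in type definitions so it builds on its own:

```c
/* Minimal stand-ins so the fragment compiles standalone. */
struct list_head { struct list_head *next, *prev; };
struct iommu_ops;
#define HAS_DEVICE_TREE 1

struct hvm_iommu {
    /* iommu_ops */
    const struct iommu_ops *platform_ops;

/* Directives start in column 0, not indented with the members. */
#ifdef HAS_DEVICE_TREE
    /* List of DT devices assigned to this domain */
    struct list_head dt_devices;
#endif
};
```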

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:49:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHt6h-0003DD-Ud; Mon, 24 Feb 2014 10:49:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHt6g-0003D8-Fc
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 10:49:54 +0000
Received: from [85.158.139.211:13908] by server-17.bemta-5.messagelabs.com id
	E5/5F-31975-1D32B035; Mon, 24 Feb 2014 10:49:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393238979!5789751!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1856 invoked from network); 24 Feb 2014 10:49:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 10:49:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103492540"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 10:49:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 05:49:38 -0500
Message-ID: <1393238977.16570.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Mon, 24 Feb 2014 10:49:37 +0000
In-Reply-To: <974010162.20140221153400@eikelenboom.it>
References: <974010162.20140221153400@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] xen-unstable pci passthrough: bug in accounting
 assigned pci devices when assignment has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

graft 27 ^
thanks

Adding this info to the bug for the benefit of whoever ends up looking
at it. Thanks.

Ian.

On Fri, 2014-02-21 at 15:34 +0100, Sander Eikelenboom wrote:
> Hi Ian,
> 
> It was decided that fixing the bug where domain creation does not fail on non-assignable pci devices would be deferred to 4.5.
> (and it wouldn't prevent this bug anyhow when doing pci hotplug with xl pci-attach)
> 
> But there seems to be a bug in the error path:
> 
> root@creanuc:~# xl pci-assignable-list
> 0000:02:00.0
> 
> Now when i boot a VM with  pci=['00:19.0'] in it's config file ... which is not assignable:
> 
> root@creanuc:~# xl create /etc/xen/domU/router.hvm
> Parsing config from /etc/xen/domU/router.hvm
> libxl: error: libxl_pci.c:1060:libxl__device_pci_add: PCI device 0:0:19.0 is not assignable
> 
> That looks ok ... and the pci device is not visible / accessible in the guest ...  but it seems the entry is still in xenstore nevertheless:
> 
> root@creanuc:~# xl pci-list router
> Vdev Device
> 00.0 0000:00:19.0
> 
> --
> Sander
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:49:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHt6h-0003DD-Ud; Mon, 24 Feb 2014 10:49:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHt6g-0003D8-Fc
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 10:49:54 +0000
Received: from [85.158.139.211:13908] by server-17.bemta-5.messagelabs.com id
	E5/5F-31975-1D32B035; Mon, 24 Feb 2014 10:49:53 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393238979!5789751!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1856 invoked from network); 24 Feb 2014 10:49:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 10:49:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103492540"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 10:49:39 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 05:49:38 -0500
Message-ID: <1393238977.16570.37.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Date: Mon, 24 Feb 2014 10:49:37 +0000
In-Reply-To: <974010162.20140221153400@eikelenboom.it>
References: <974010162.20140221153400@eikelenboom.it>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] xen-unstable pci passthrough: bug in accounting
 assigned pci devices when assignment has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

graft 27 ^
thanks

Adding this info to the bug for the benefit of whoever ends up looking
at it. Thanks.

Ian.

On Fri, 2014-02-21 at 15:34 +0100, Sander Eikelenboom wrote:
> Hi Ian,
> 
> It was decided that the bug that domain creation does not fail on non-assignable PCI devices was deferred to 4.5.
> (and it wouldn't prevent this bug anyhow when doing PCI hotplug with xl pci-attach)
> 
> But there seems to be a bug in the error path:
> 
> root@creanuc:~# xl pci-assignable-list
> 0000:02:00.0
> 
> Now when I boot a VM with pci=['00:19.0'] in its config file ... which is not assignable:
> 
> root@creanuc:~# xl create /etc/xen/domU/router.hvm
> Parsing config from /etc/xen/domU/router.hvm
> libxl: error: libxl_pci.c:1060:libxl__device_pci_add: PCI device 0:0:19.0 is not assignable
> 
> That looks OK ... and the PCI device is not visible/accessible in the guest ... but it seems the entry is nevertheless still in xenstore:
> 
> root@creanuc:~# xl pci-list router
> Vdev Device
> 00.0 0000:00:19.0
> 
> --
> Sander
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:54:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtBG-0003Ob-Op; Mon, 24 Feb 2014 10:54:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHtBF-0003OW-2A
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 10:54:37 +0000
Received: from [85.158.137.68:59699] by server-14.bemta-3.messagelabs.com id
	6F/01-08196-CE42B035; Mon, 24 Feb 2014 10:54:36 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393239275!3749654!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1521 invoked from network); 24 Feb 2014 10:54:35 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 10:54:35 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 10:54:35 +0000
Message-Id: <530B32F7020000780011EAE0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 10:54:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
	<530B26BC020000780011EA65@nat28.tlf.novell.com>
	<530B1E3E.6040805@citrix.com>
In-Reply-To: <530B1E3E.6040805@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of
	macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 11:26, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 24/02/14 10:02, Jan Beulich wrote:
>>>>> On 21.02.14 at 21:41, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> This is some coverity-inspired tidying.
>>>
>>> Coverity has some grief analysing the call sites of atomic_read().  This is
>>> believed to be a bug in Coverity itself when expanding the nested macros,
>>> but there is no legitimate reason for it to be a macro in the first place.
>>>
>>> This patch changes {,_}atomic_{read,set}() from being macros to being static
>>> inline functions, thus gaining some type safety.
>>>
>>> One issue which is not immediately obvious is that the non-atomic variants
>>> take their atomic_t at a different level of indirection to the atomic
>>> variants.
>>>
>>> This is not suitable for _atomic_set() (when used to initialise an atomic_t)
>>> which is converted to take its parameter as a pointer.  One callsite of
>>> _atomic_set() is updated, while the other two callsites are updated to
>>> ATOMIC_INIT().
>> Did you consider leaving these "non-atomic atomic ops" untouched
>> (as they don't involve macro nesting), altering only the "real" ones?
> 
> Yes, but for the sake of three updates at callsites, I felt the benefits
> outweighed the costs.

Except that I don't really see much of a benefit here - the type-safety
argument doesn't count for all that much, considering that a wrongly
used type would need to have a suitable field named "counter", which
is unlikely enough not to be worth worrying about.
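
[Editorial note: a minimal sketch of the change under discussion, not
Xen's actual atomic header. The `counter` field name matches the thread;
everything else here is illustrative.]

```c
#include <assert.h>

/* A one-word atomic_t, as discussed above. */
typedef struct { int counter; } atomic_t;

/* Initialiser usable at definition time, replacing _atomic_set() there. */
#define ATOMIC_INIT(i) { (i) }

/* Static inlines instead of #define atomic_read(v) ((v)->counter):
 * the compiler now rejects any argument that is not an atomic_t *,
 * which is the type-safety gain the patch description claims. */
static inline int atomic_read(const atomic_t *v)
{
    return v->counter;
}

static inline void atomic_set(atomic_t *v, int i)
{
    v->counter = i;
}
```

With the macro version, any struct with a `counter` field would have been
accepted; with the inlines, passing such a struct is a compile error.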

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 10:55:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 10:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtC2-0003RP-85; Mon, 24 Feb 2014 10:55:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WHtC0-0003RE-Rs
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 10:55:24 +0000
Received: from [85.158.137.68:27509] by server-5.bemta-3.messagelabs.com id
	D6/ED-04712-C152B035; Mon, 24 Feb 2014 10:55:24 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393239322!3794732!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3592 invoked from network); 24 Feb 2014 10:55:23 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 24 Feb 2014 10:55:23 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WHtGU-0001lf-Sa; Mon, 24 Feb 2014 11:00:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1393239602.6797@bugs.xenproject.org>
References: <974010162.20140221153400@eikelenboom.it>
	<1393238977.16570.37.camel@kazak.uk.xensource.com>
In-Reply-To: <1393238977.16570.37.camel@kazak.uk.xensource.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: Ian Campbell <Ian.Campbell@citrix.com>
Date: Mon, 24 Feb 2014 11:00:02 +0000
Subject: [Xen-devel] Processed: Re: xen-unstable pci passthrough: bug in
 accounting assigned pci devices when assignment has failed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> graft 27 ^
Graft `<974010162.20140221153400@eikelenboom.it>' onto #27
> thanks
Finished processing.

Modified/created Bugs:
 - 27: http://bugs.xenproject.org/xen/bug/27

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:01:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:01:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtIA-0003i1-9m; Mon, 24 Feb 2014 11:01:46 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WHtI8-0003hw-Pq
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:01:45 +0000
Received: from [85.158.143.35:62042] by server-1.bemta-4.messagelabs.com id
	34/0E-31661-8962B035; Mon, 24 Feb 2014 11:01:44 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393239702!7823007!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23396 invoked from network); 24 Feb 2014 11:01:43 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:01:43 -0000
Received: by mail-qc0-f172.google.com with SMTP id w7so6649419qcr.3
	for <xen-devel@lists.xen.org>; Mon, 24 Feb 2014 03:01:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=FG1L5E9ppD/fpJyo9IXIGTWR0gSbuQ3TbSmpyGrju4s=;
	b=I7fg1TFhkfGezmJguowwqCaAvl/Kvu/JkhwmvUDZ447Bzt2oZw6jdE5ixwqLXFN4e5
	MKHIv+vjuiinTJ0tLFOZXgQUfmmTu627S9UyV+EdKCSGWrhwFbHEiXWtqwXUi2frAlnp
	GQoVRfNQ3SSJ1LDvmtV/4tba35zq9mshs3pug=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=FG1L5E9ppD/fpJyo9IXIGTWR0gSbuQ3TbSmpyGrju4s=;
	b=Z46IWdlsg5tLWfGjPyFDID8PevawMXPjUHfR/GX2ILGxbC5hi/nXs36C9XMItQ1s4n
	3qKkdRgnaM+G3PEhz+WHj4EMIA8G7qCj4PvTs/M3fqYNM1U6cpvJyU9BRyl/eE6ZrZkM
	rNrxBkKu/Xi8S2ws1sQZNd5R78zvz3pIQCNb81A6FTGxPvi3SxIr6jIy7hThXxLlwQXQ
	YTfvMjYnVhDVLJvgziwkEMHuAz8jZFNrkhYdSXStDrgwXR29/9qEPErOkO0UtKpTH0Y5
	OnMDe9Pyfp2yQcg3qEOSNX336wyFH00I3pG2H0zXAZnQjkTUt3CPGeNEw8ZJnWmEyWVe
	F6Cg==
X-Gm-Message-State: ALoCoQl6FPxGnm/NOMx/7PebmBI32PzkWVS40JV6loo7yX2EWNI2drxc8cxaNjTmiHw6cBH58VbC
X-Received: by 10.140.29.38 with SMTP id a35mr27619040qga.55.1393239702152;
	Mon, 24 Feb 2014 03:01:42 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Mon, 24 Feb 2014 03:01:27 -0800 (PST)
X-Originating-IP: [217.66.157.125]
In-Reply-To: <530ABD5C.10506@oracle.com>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
	<CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
	<530ABD5C.10506@oracle.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Mon, 24 Feb 2014 15:01:27 +0400
X-Google-Sender-Auth: L_F_BAhvt4Vcid0QDlL9PTfoc3I
Message-ID: <CACaajQsGGijsr6a9VP+QxDw31LS63DzGTKyT3yLO6GxhpfZuew@mail.gmail.com>
To: Bob Liu <bob.liu@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-24 7:32 GMT+04:00 Bob Liu <bob.liu@oracle.com>:
> Two types of page can be stored in tmem: persistent_page and ephemeral_page.
>
> Persistent pages are swapped-out pages, whose data can't be dropped by
> tmem. The rule for persistent pages is:
> 'current_domain_pages + persistent_pages have to be smaller than
> domain->max_pages'.
>
> Ephemeral pages are clean pagecache pages; those pages already have a
> copy on disk.
> The number of ephemeral pages is not limited, but the Xen host will
> reclaim those pages under memory pressure.
> There is a tmem parameter 'weight' which can be used to control how many
> ephemeral pages should be reclaimed between domains.
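
[Editorial note: the quoted admission rule for persistent pages can be
sketched as below. The struct and function names are hypothetical, not
Xen's tmem code; only the inequality comes from the quote.]

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-domain page accounting. */
struct dom_mem {
    uint64_t current_pages;     /* pages currently owned by the domain */
    uint64_t persistent_pages;  /* swapped-out pages held in tmem */
    uint64_t max_pages;         /* domain->max_pages */
};

/* The rule quoted above: current_domain_pages + persistent_pages
 * has to be smaller than domain->max_pages for a persistent (swap)
 * page to be accepted. Ephemeral pages bypass this check entirely. */
static bool can_store_persistent(const struct dom_mem *d)
{
    return d->current_pages + d->persistent_pages < d->max_pages;
}
```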


Very good, thanks for the answers! What minimal kernel version is
recommended for frontswap/cleancache in domU (dom0 is Xen 4.3.2)?

-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:05:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtM2-0003uX-Ky; Mon, 24 Feb 2014 11:05:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHtM0-0003uL-Vv
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:05:45 +0000
Received: from [85.158.139.211:65102] by server-8.bemta-5.messagelabs.com id
	6A/EE-05298-8872B035; Mon, 24 Feb 2014 11:05:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393239941!1881623!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20698 invoked from network); 24 Feb 2014 11:05:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:05:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103495324"
From xen-devel-bounces@lists.xen.org Mon Feb 24 11:05:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:05:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtM2-0003uX-Ky; Mon, 24 Feb 2014 11:05:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHtM0-0003uL-Vv
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:05:45 +0000
Received: from [85.158.139.211:65102] by server-8.bemta-5.messagelabs.com id
	6A/EE-05298-8872B035; Mon, 24 Feb 2014 11:05:44 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393239941!1881623!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20698 invoked from network); 24 Feb 2014 11:05:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:05:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103495324"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:05:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:05:40 -0500
Message-ID: <1393239940.16570.45.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Date: Mon, 24 Feb 2014 11:05:40 +0000
In-Reply-To: <CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
References: <CAMnwyJ02t5R=s=AmF_LK7NGAUN4DvFDw3NKe2VVRm5Snzq3mdA@mail.gmail.com>
	<20140219185621.GC12300@phenom.dumpdata.com>
	<CAMnwyJ2DTYtPFYdmkdXRidSPt_0fsZSY8s_6oEHoAemaWyV4bA@mail.gmail.com>
	<1392887748.22494.17.camel@kazak.uk.xensource.com>
	<CAMnwyJ2zMOXwKikNYcCXSK-GBkxxWwm9rVLi1t3vEOHXJpVC8A@mail.gmail.com>
	<CAMnwyJ0cCaF4gkwCF1TCXqqaW9K9=xhoPs-UDEKb4EeQfv6sUg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] xm/xl shutdown does not work with HVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-21 at 09:51 -0800, Saurabh Mishra wrote:
> Hi --
> 
> 
> I tried enabling CONFIG_APM but it still didn't work.

APM != ACPI. They are totally different things.

I'm afraid I don't know specifically what needs to be done in a distro
to support ACPI but it is the same thing as makes a physical system shut
down when you press the power button on the case. At a minimum there is
probably some package which needs to be installed, but I can't advise
more specifically for a WR system.
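
As a general illustration (not WR-specific), the userspace piece that turns
the ACPI power-button event into a clean shutdown on most Linux guests is
acpid. A minimal sketch of the conventional configuration follows; the real
files live under /etc/acpi, but the sketch writes to a scratch directory so
it is runnable anywhere, and the exact event string and paths may vary by
distro:

```shell
# Sketch of the acpid wiring a distro package would normally install.
# Real location: /etc/acpi/events/powerbtn. Using ./demo-etc here so
# this sketch can run without touching the system.
mkdir -p demo-etc/acpi/events

# Event file: match the ACPI power-button event and run a clean
# shutdown, which is what "xl shutdown" of an HVM guest relies on.
cat > demo-etc/acpi/events/powerbtn <<'EOF'
event=button/power.*
action=/sbin/shutdown -h now
EOF
```

acpid itself must be installed and running inside the guest for the event
to be delivered to this handler.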

If I were you I'd be asking WR about their ACPI support.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:13:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:13:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtTB-00049c-MV; Mon, 24 Feb 2014 11:13:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHtT9-00049X-WD
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:13:08 +0000
Received: from [85.158.143.35:6801] by server-3.bemta-4.messagelabs.com id
	81/C3-11539-3492B035; Mon, 24 Feb 2014 11:13:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393240385!7833313!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20661 invoked from network); 24 Feb 2014 11:13:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:13:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103496332"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:12:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:12:44 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHtQc-0002C4-6h;
	Mon, 24 Feb 2014 11:10:30 +0000
Message-ID: <530B28A5.8010903@citrix.com>
Date: Mon, 24 Feb 2014 11:10:29 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
	<530B3083020000780011EAB1@nat28.tlf.novell.com>
In-Reply-To: <530B3083020000780011EAB1@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Julien Grall <julien.grall@linaro.org>, Keir Fraser <keir@xen.org>,
	ian.campbell@citrix.com, Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 10:44, Jan Beulich wrote:
>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>> --- a/xen/include/xen/hvm/iommu.h
>> +++ b/xen/include/xen/hvm/iommu.h
>> @@ -23,32 +23,8 @@
>>  #include <xen/iommu.h>
>>  #include <asm/hvm/iommu.h>
>>  
>> -struct g2m_ioport {
>> -    struct list_head list;
>> -    unsigned int gport;
>> -    unsigned int mport;
>> -    unsigned int np;
>> -};
>> -
>> -struct mapped_rmrr {
>> -    struct list_head list;
>> -    u64 base;
>> -    u64 end;
>> -};
>> -
>>  struct hvm_iommu {
>> -    u64 pgd_maddr;                 /* io page directory machine address */
>> -    spinlock_t mapping_lock;       /* io page table lock */
>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>> -    struct list_head mapped_rmrrs;
>> -
>> -    /* amd iommu support */
>> -    int domain_id;
> At the very least this field doesn't look all that architecture specific,
> even if it might only be used on x86/AMD right now.

Furthermore, it can be found using container_of()

The current size of struct domain is 3584 bytes, which is quite close to
the 1 page limit.  We should certainly be taking opportunities like this
to reduce bloat.
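
The container_of() suggestion can be sketched as follows; the struct names
and field layout here are illustrative stand-ins, not the actual Xen arch
structures:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical embedding: an arch-specific domain struct that contains
 * the hvm_iommu state directly rather than via a pointer plus id. */
struct hvm_iommu {
    int paging_mode;
};

struct arch_hvm_domain {
    long other_state;
    struct hvm_iommu iommu;
};

/* container_of as defined in Xen (and Linux): recover the enclosing
 * structure from a pointer to one of its members by subtracting the
 * member's offset. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Given only the embedded iommu state, find the containing domain,
 * so no domain_id field needs to be stored in struct hvm_iommu. */
struct arch_hvm_domain *iommu_to_domain(struct hvm_iommu *hd)
{
    return container_of(hd, struct arch_hvm_domain, iommu);
}
```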

~Andrew

>
>> -    int paging_mode;
> The same might go for this one.
>
> Jan
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:13:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtTh-0004CJ-4W; Mon, 24 Feb 2014 11:13:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHtTg-0004C9-K4
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:13:40 +0000
Received: from [85.158.137.68:6651] by server-12.bemta-3.messagelabs.com id
	E5/C9-01674-3692B035; Mon, 24 Feb 2014 11:13:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393240417!407563!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22531 invoked from network); 24 Feb 2014 11:13:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:13:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103496490"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:13:37 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:13:36 -0500
Message-ID: <1393240415.16570.46.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Date: Mon, 24 Feb 2014 11:13:35 +0000
In-Reply-To: <5306A993.5010504@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-2-git-send-email-zoltan.kiss@citrix.com>
	<1392743214.23084.38.camel@kazak.uk.xensource.com>
	<5303C44D.4070500@citrix.com>
	<1392804319.23084.109.camel@kazak.uk.xensource.com>
	<53050BF5.1060009@citrix.com>
	<1392888808.22494.21.camel@kazak.uk.xensource.com>
	<5306A993.5010504@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 1/9] xen-netback: Introduce TX
 grant map definitions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-21 at 01:19 +0000, Zoltan Kiss wrote:
> I don't know if the guest expects that slots for the same packet
> come back at the same time.

I don't think the guest is allowed to assume that. In particular they
aren't allowed to assume that the slots will be freed in the order they
were presented on the ring. There used to be a debug patch to
deliberately permute the responses, perhaps it was in the old
netchannel2 tree.
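
The point (a frontend must complete slots by the id carried in each
response, not by ring position) can be illustrated with a minimal sketch;
the structure and field names below are simplified stand-ins, not the
actual netif ring definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified response: the real netif_tx_response also lives in a
 * shared ring, but likewise echoes the id the frontend chose. */
struct tx_response {
    unsigned short id;   /* id the frontend put in the matching request */
    short status;
};

#define RING_SIZE 4

/* Slots with requests currently in flight, indexed by id. */
static bool pending[RING_SIZE];

/* Complete a slot by the id in the response, making no assumption
 * about the order in which responses arrive on the ring. */
static void complete(const struct tx_response *rsp)
{
    assert(rsp->id < RING_SIZE);
    assert(pending[rsp->id]);    /* must match an in-flight request */
    pending[rsp->id] = false;    /* free the slot */
}
```

A backend (or a debug patch like the one mentioned above) is then free to
permute responses without breaking the frontend.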

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:27:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtgn-0004a6-VW; Mon, 24 Feb 2014 11:27:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHtgm-0004a1-N1
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 11:27:12 +0000
Received: from [85.158.139.211:26929] by server-11.bemta-5.messagelabs.com id
	A1/B2-23886-F8C2B035; Mon, 24 Feb 2014 11:27:11 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393241229!5838980!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9131 invoked from network); 24 Feb 2014 11:27:11 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:27:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103498767"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:27:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 06:27:09 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHtgi-0005E3-Om;
	Mon, 24 Feb 2014 11:27:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHtgg-000367-Uf;
	Mon, 24 Feb 2014 11:27:06 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.11401.136857.892381@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 11:27:05 +0000
To: Fabio Fantoni <fabio.fantoni@m2r.biz>
In-Reply-To: <1393065354-11478-1-git-send-email-fabio.fantoni@m2r.biz>
References: <1393065354-11478-1-git-send-email-fabio.fantoni@m2r.biz>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian.Campbell@citrix.com,
	Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH RESEND] tools/libxl: comments cleanup on
	libxl_dm.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fabio Fantoni writes ("[PATCH RESEND] tools/libxl: comments cleanup on libxl_dm.c"):
> Removed some unuseful comments lines.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Committed-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:27:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHthP-0004cb-CW; Mon, 24 Feb 2014 11:27:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHthN-0004cN-Tj
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:27:50 +0000
Received: from [85.158.143.35:46139] by server-1.bemta-4.messagelabs.com id
	52/E3-31661-5BC2B035; Mon, 24 Feb 2014 11:27:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393241267!7835747!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4418 invoked from network); 24 Feb 2014 11:27:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:27:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105158347"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:27:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:27:46 -0500
Message-ID: <1393241265.16570.49.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Chen Baozi <baozich@gmail.com>
Date: Mon, 24 Feb 2014 11:27:45 +0000
In-Reply-To: <580D6587-ABF8-4AD7-8AB6-2EB700CC0F1D@gmail.com>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
	<1392738894.23084.16.camel@kazak.uk.xensource.com>
	<580D6587-ABF8-4AD7-8AB6-2EB700CC0F1D@gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	List Developer Xen <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 2014-02-23 at 22:41 +0800, Chen Baozi wrote:
> On Feb 18, 2014, at 23:54, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> 
> > On Sun, 2014-02-16 at 23:51 +0800, Chen Baozi wrote:
> >> Hi all,
> >> 
> >> It is much later than I used to expect. I guess it might be help
> >> to publish my work, though it is still not finished (and might not
> >> be finished very soon...). 
> >> 
> >> I began to try to port mini-os to ARM64 since last summer. Since
> >> the 64-bit guest support is not quite well at that time, this
> >> work had been stopped for a long time until two months ago.
> >> 
> >> Though it is still at very early stage, it at least can be built,
> >> setup a early page table for booting, parse the DTB passed by the
> >> hypervisor, and be debugged by printk at present. So I put it
> >> on github in case someone might be interested in it. Here is the
> >> url: https://github.com/baozich/minios-arm64
> > 
> > Cool. Thank you very much for sharing.
> > 
> >> Right now, there are some troubles to make GIC work properly,
> >> as I didn’t consider mapping GIC’s interface in address space and
> >> follows x86’s memory layout which make the kernel virtual address
> >> starts at 0x0. I’ll fix it as soon as possible.
> > 
> > Actually, having virtual memory start at 0x0 seems quite reasonable to
> > me, what is the problem?
> 
> Hmmm, I don’t think it is a big problem. I just didn’t realise it is 
> necessary to map GIC’s interface after MMU on, which leads a exception
> when I try to program GIC by the physical address populated by DT. 
> I used to think about making mini-os kernel address start at 0x80000000
> and leave the address below 0x80000000 to be 1:1 mapping, which
> seems to be able to make things easier when initialising GIC.

Remember that we are likely to rework the guest (pseudo)physical address
space in 4.5 and in any case will not make any guarantees about the
layout going forward.

It is not impossible that the GIC might move out of the 0x0-0x80000000
region (e.g. we might move it up to just below 4GB for example).

You will need to parse the DTB in order to figure out the appropriate
address and then create virtual address mappings to those addresses.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:28:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHti8-0004lF-VF; Mon, 24 Feb 2014 11:28:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHti7-0004kP-TV
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:28:36 +0000
Received: from [193.109.254.147:55342] by server-10.bemta-14.messagelabs.com
	id CE/04-10711-3EC2B035; Mon, 24 Feb 2014 11:28:35 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393241313!1106976!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11889 invoked from network); 24 Feb 2014 11:28:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:28:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103498904"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:28:33 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 06:28:32 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHti4-0005EN-Dx;
	Mon, 24 Feb 2014 11:28:32 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHti2-00036N-Ef;
	Mon, 24 Feb 2014 11:28:30 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.11485.66879.950200@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 11:28:29 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1393231128.22033.60.camel@dagon.hellion.org.uk>
References: <osstest-25157-mainreport@xen.org>
	<530727DC020000780011E27C@nat28.tlf.novell.com>
	<21255.16435.69137.96537@mariner.uk.xensource.com>
	<21255.39733.80220.360509@mariner.uk.xensource.com>
	<1393231128.22033.60.camel@dagon.hellion.org.uk>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions -
 trouble: broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [xen-4.1-testing test] 25157: regressions - trouble: broken/fail/pass"):
> On Fri, 2014-02-21 at 18:30 +0000, Ian Jackson wrote:
> > Xen 4.1 puts things in /usr/lib64 otherwise.  This is wrong for
> > Debian, and does not work at all on wheezy.
> 
> This is safe on newer autoconf using Xens, right? I think this is the
> case because with newer Xen:
>         $ git grep LIBLEAFDIR
>         $

Indeed.

> In which case:
> > Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> 
> Acked-by: Ian Campbell <ian.campbell@citrix.com>

Thanks.  It's in the push gate ATM.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:29:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:29:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtin-0004r9-Ds; Mon, 24 Feb 2014 11:29:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHtil-0004qi-Id
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:29:15 +0000
Received: from [193.109.254.147:36823] by server-3.bemta-14.messagelabs.com id
	81/4A-00432-A0D2B035; Mon, 24 Feb 2014 11:29:14 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393241353!6371378!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7497 invoked from network); 24 Feb 2014 11:29:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:29:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103498960"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:29:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:29:12 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHtih-0002YR-Ot;
	Mon, 24 Feb 2014 11:29:11 +0000
Message-ID: <530B2D07.60107@citrix.com>
Date: Mon, 24 Feb 2014 11:29:11 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
	<530B26BC020000780011EA65@nat28.tlf.novell.com>
	<530B1E3E.6040805@citrix.com>
	<530B32F7020000780011EAE0@nat28.tlf.novell.com>
In-Reply-To: <530B32F7020000780011EAE0@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: KeirFraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	StefanoStabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of
	macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 10:54, Jan Beulich wrote:
>>>> On 24.02.14 at 11:26, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 24/02/14 10:02, Jan Beulich wrote:
>>>>>> On 21.02.14 at 21:41, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>> This is some coverity-inspired tidying.
>>>>
>>>> Coverity has some grief analysing the call sites of atomic_read().  This is
>>>> believed to be a bug in Coverity itself when expanding the nested macros, 
>>>> but
>>>> there is no legitimate reason for it to be a macro in the first place.
>>>>
>>>> This patch changes {,_}atomic_{read,set}() from being macros to being static
>>>> inline functions, thus gaining some type safety.
>>>>
>>>> One issue which is not immediately obvious is that the non-atomic variants take
>>>> their atomic_t at a different level of indirection to the atomic variants.
>>>>
>>>> This is not suitable for _atomic_set() (when used to initialise an atomic_t)
>>>> which is converted to take its parameter as a pointer.  One callsite of
>>>> _atomic_set() is updated, while the other two callsites are updated to
>>>> ATOMIC_INIT().
>>> Did you consider leaving these "non-atomic atomic ops" untouched
>>> (as they don't involve macro nesting), altering only the "real" ones?
>> Yes, but for the sake of three updates at callsites, I felt the benefits
>> outweighed the costs.
> Except that I don't really see much of a benefit here - the type safety
> argument doesn't really count all that much, considering that a wrongly
> used type would need to have a suitable field named "counter", which
> is unlikely enough to not worry much.
>
> Jan
>

An error message of "Expected atomic_t *, got <something else>" is
substantially more useful than "<something> doesn't have .counter" or "."
being an invalid operator in context.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:30:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtkN-00055G-8c; Mon, 24 Feb 2014 11:30:55 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHtjh-0004zF-Cv
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:30:13 +0000
Received: from [85.158.143.35:26380] by server-2.bemta-4.messagelabs.com id
	FD/D8-10891-44D2B035; Mon, 24 Feb 2014 11:30:12 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393241410!7836590!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27403 invoked from network); 24 Feb 2014 11:30:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:30:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105158633"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:30:09 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:30:08 -0500
Message-ID: <1393241408.16570.51.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: chi ma <machi1271@gmail.com>
Date: Mon, 24 Feb 2014 11:30:08 +0000
In-Reply-To: <CAMZDhMEFfBzmgEgGvAS2N4kcmbKgrvAGx6qgAVSiStMYO1t6Zg@mail.gmail.com>
References: <CAMZDhMEFfBzmgEgGvAS2N4kcmbKgrvAGx6qgAVSiStMYO1t6Zg@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
X-Mailman-Approved-At: Mon, 24 Feb 2014 11:30:54 +0000
Cc: xen-users <xen-users@lists.xen.org>
Subject: Re: [Xen-devel] Add e1000 device to guest OS?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 11:42 +0800, chi ma wrote:
> HI ALL:
> 
>     Does anyone try adding an INTEL e1000 emulated device to a
> specific guest OS?

This is a user question, redirecting to the user list.

>     Do I have to apply some specific configurations?
>     I've tried the steps supplied on this web page:
> 
>   http://www.netservers.co.uk/articles/open-source-howtos/citrix_e1000_gigabit
>     but it doesn't work...

This is specific to XCP/XenServer. If you are using that then you should
use the xenserver.org lists.

If you are using regular Xen then see the xl.cfg man page for
information on configuring network devices in your cfg file.

Ian.
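
[Editor's note: for a plain xl guest, the emulated NIC model is chosen in
the vif specification of the cfg file. A minimal illustration — the
bridge name is an example, and model= applies to emulated (HVM) NICs;
consult the xl.cfg man page for the options your version supports.]

```
# In the guest's xl cfg file: ask QEMU for an emulated Intel e1000 NIC
# on an HVM guest ("xenbr0" is an example bridge name).
vif = [ 'bridge=xenbr0,model=e1000' ]
```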




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:32:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtlX-0005I4-Ol; Mon, 24 Feb 2014 11:32:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHtlV-0005Hg-Sb
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 11:32:06 +0000
Received: from [85.158.139.211:38630] by server-8.bemta-5.messagelabs.com id
	DF/2F-05298-5BD2B035; Mon, 24 Feb 2014 11:32:05 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393241522!1888805!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18360 invoked from network); 24 Feb 2014 11:32:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:32:04 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105158935"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:32:02 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 06:32:02 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHtlR-0005FC-Sp;
	Mon, 24 Feb 2014 11:32:01 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHtlQ-00038s-4u;
	Mon, 24 Feb 2014 11:32:00 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.11694.831215.652571@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 11:31:58 +0000
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1393235457.16570.10.camel@kazak.uk.xensource.com>
References: <osstest-25278-mainreport@xen.org>
	<1393235457.16570.10.camel@kazak.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, "xen.org" <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [linux-linus test] 25278: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [Xen-devel] [linux-linus test] 25278: regressions - FAIL"):
> On Sun, 2014-02-23 at 17:32 +0000, xen.org wrote:
> >  build-armhf-pvops             4 kernel-build                 fail   never pass 
> 
> This is:
>         ERROR: "__bad_udelay" [drivers/scsi/bfa/bfa.ko] undefined!
> which is a deliberately forced link error due to an over-large udelay
> parameter (see also
> http://osdir.com/ml/scm-fedora-commits/2014-01/msg14211.html)
> 
> I could do this for ARM only if you prefer, but in the first instance
> that seemed like unnecessary faff..

Quite.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

I have queued this in a branch of moderately-urgent stuff and will
push it when the opportunity arises.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:34:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtnk-0005XC-B6; Mon, 24 Feb 2014 11:34:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHtnj-0005X5-U4
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:34:24 +0000
Received: from [85.158.139.211:44488] by server-1.bemta-5.messagelabs.com id
	4C/B4-12859-F3E2B035; Mon, 24 Feb 2014 11:34:23 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393241661!5802076!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9532 invoked from network); 24 Feb 2014 11:34:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:34:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105159595"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:34:20 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 06:34:20 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHtnd-0005Fq-QZ;
	Mon, 24 Feb 2014 11:34:17 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHtnc-00039B-7b;
	Mon, 24 Feb 2014 11:34:16 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.11831.367280.182874@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 11:34:15 +0000
To: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>
In-Reply-To: <20140224101305.GV3200@reaktio.net>
References: <alpine.DEB.2.00.1402232115250.18519@procyon.dur.ac.uk>
	<20140224101305.GV3200@reaktio.net>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xen.org,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [BUG] xl create -c with pygrub hangs in xen
 4.4.0-rc5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Pasi Kärkkäinen writes ("Re: [Xen-devel] [BUG] xl create -c with pygrub hangs in xen 4.4.0-rc5"):
> On Sun, Feb 23, 2014 at 09:36:44PM +0000, M A Young wrote:
> > I was trying xen-4.4.0-rc5 with my standard set up (booting a pv
> > guest using pygrub as a bootloader) but this no longer works. I
> > would expect to get a boot menu where I could select a kernel and
> > the guest, but nothing happens. If I examine the bootloader.1.log
> > file I find the pygrub output I would expect to see on the console,
> > and the output of xenstore-ls suggests pygrub is selecting the
> > default kernel (as it would without input) and exiting, but xentop
> > doesn't report any cpu usage on the guest. It seems xl has created a
> > child process that doesn't exit, though if I kill the child process
> > by hand the boot does continue.
> >
> > I traced the change in behaviour to the commit
> > http://xenbits.xenproject.org/gitweb/?p=xen.git;a=commit;h=5f0c4a78100382972b4d2a71a04b90e015e9fe87
> > "libxl: fork: Share SIGCHLD handler amongst ctxs". If I revert this
> > then I get the expected behaviour again, though I haven't worked out
> > why this patch causes the effects I am seeing.
>
> Uh oh.. I wonder if Ian (author of the libxl changes in question) has
> some ideas why that might be happening..
>
> This is kind of a bug that's not very easy to detect even with
> automated pygrub testing.. interactive testing is pretty much required
> to notice this bug.

Thanks to Michael for reporting this.  I don't immediately see what's
wrong but will think about it.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:34:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHto3-0005Zv-Nv; Mon, 24 Feb 2014 11:34:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHto2-0005ZX-FA
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:34:42 +0000
Received: from [85.158.137.68:29412] by server-8.bemta-3.messagelabs.com id
	20/E2-16039-15E2B035; Mon, 24 Feb 2014 11:34:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393241680!989138!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22116 invoked from network); 24 Feb 2014 11:34:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 11:34:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 11:34:40 +0000
Message-Id: <530B3C5C020000780011EB3D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 11:34:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-3-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-3-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
 iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
> iommu_domain_teardown is only used internally in
> xen/drivers/passthrough/vtd/iommu.c
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> ---
>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
> b/xen/drivers/passthrough/vtd/iommu.c
> index 5f10034..a8d33fc 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
>      return ret;
>  }
>  
> -void iommu_domain_teardown(struct domain *d)
> +static void iommu_domain_teardown(struct domain *d)
>  {
>      struct hvm_iommu *hd = domain_hvm_iommu(d);
>  

Please build-test your changes - this was lacking the removal of
the function's declaration from xen/iommu.h.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:34:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHto3-0005Zv-Nv; Mon, 24 Feb 2014 11:34:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHto2-0005ZX-FA
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:34:42 +0000
Received: from [85.158.137.68:29412] by server-8.bemta-3.messagelabs.com id
	20/E2-16039-15E2B035; Mon, 24 Feb 2014 11:34:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393241680!989138!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22116 invoked from network); 24 Feb 2014 11:34:41 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 11:34:41 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 11:34:40 +0000
Message-Id: <530B3C5C020000780011EB3D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 11:34:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-3-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-3-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
 iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
> iommu_domain_teardown is only used internally in
> xen/drivers/passthrough/vtd/iommu.c
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
> ---
>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
> b/xen/drivers/passthrough/vtd/iommu.c
> index 5f10034..a8d33fc 100644
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
>      return ret;
>  }
>  
> -void iommu_domain_teardown(struct domain *d)
> +static void iommu_domain_teardown(struct domain *d)
>  {
>      struct hvm_iommu *hd = domain_hvm_iommu(d);
>  

Please build-test your changes - this was lacking the removal of
the function's declaration from xen/iommu.h.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:36:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHtpw-0005n5-Ae; Mon, 24 Feb 2014 11:36:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHtpv-0005mp-AC
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 11:36:39 +0000
Received: from [193.109.254.147:65451] by server-2.bemta-14.messagelabs.com id
	13/26-01236-6CE2B035; Mon, 24 Feb 2014 11:36:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393241796!6401995!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21741 invoked from network); 24 Feb 2014 11:36:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:36:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103500436"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 11:36:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:36:35 -0500
Message-ID: <1393241794.16570.53.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 11:36:34 +0000
In-Reply-To: <21259.11694.831215.652571@mariner.uk.xensource.com>
References: <osstest-25278-mainreport@xen.org>
	<1393235457.16570.10.camel@kazak.uk.xensource.com>
	<21259.11694.831215.652571@mariner.uk.xensource.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com
Subject: Re: [Xen-devel] [linux-linus test] 25278: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 11:31 +0000, Ian Jackson wrote:
> Ian Campbell writes ("Re: [Xen-devel] [linux-linus test] 25278: regressions - FAIL"):
> > On Sun, 2014-02-23 at 17:32 +0000, xen.org wrote:
> > >  build-armhf-pvops             4 kernel-build                 fail   never pass 
> > 
> > This is:
> >         ERROR: "__bad_udelay" [drivers/scsi/bfa/bfa.ko] undefined!
> > which is a deliberately forced link error due to over large udelay
> > parameter (see also
> > http://osdir.com/ml/scm-fedora-commits/2014-01/msg14211.html)
> > 
> > I could do this for ARM only if you prefer, but in the first instance
> > that seemed like unnecessary faff..
> 
> Quite.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> I have queued this in a branch of moderately-urgent stuff and will
> push it when the opportunity arises.

Thanks.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:49:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHu1u-00068y-2V; Mon, 24 Feb 2014 11:49:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHu1t-00068t-Ap
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:49:01 +0000
Received: from [85.158.139.211:29779] by server-4.bemta-5.messagelabs.com id
	4F/4F-08092-CA13B035; Mon, 24 Feb 2014 11:49:00 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393242538!5845334!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28675 invoked from network); 24 Feb 2014 11:48:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:48:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105161629"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:48:58 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 06:48:57 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHu1p-0005KE-Ip;
	Mon, 24 Feb 2014 11:48:57 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHu1n-0004EC-Eg;
	Mon, 24 Feb 2014 11:48:55 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.12710.26795.105059@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 11:48:54 +0000
To: Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?= <pasik@iki.fi>, M A Young
	<m.a.young@durham.ac.uk>, <xen-devel@lists.xen.org>, George Dunlap
	<george.dunlap@eu.citrix.com>
In-Reply-To: <21259.11831.367280.182874@mariner.uk.xensource.com>
References: <alpine.DEB.2.00.1402232115250.18519@procyon.dur.ac.uk>
	<20140224101305.GV3200@reaktio.net>
	<21259.11831.367280.182874@mariner.uk.xensource.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Subject: Re: [Xen-devel] [BUG] xl create -c with pygrub hangs in xen
 4.4.0-rc5
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Jackson writes ("Re: [Xen-devel] [BUG] xl create -c with pygrub hangs in xen 4.4.0-rc5"):
> Thanks to Michael for reporting this.  I don't immediately see what's
> wrong but will think about it.

I have reproduced the problem.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:50:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHu3Y-0006DY-JC; Mon, 24 Feb 2014 11:50:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHu3X-0006DQ-9q
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:50:43 +0000
Received: from [193.109.254.147:29728] by server-6.bemta-14.messagelabs.com id
	B9/ED-03396-2123B035; Mon, 24 Feb 2014 11:50:42 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393242640!6364586!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8141 invoked from network); 24 Feb 2014 11:50:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:50:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105162052"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:50:40 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:50:39 -0500
Message-ID: <1393242638.16570.57.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Mon, 24 Feb 2014 11:50:38 +0000
In-Reply-To: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>, Jan
	Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of
	macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-21 at 20:41 +0000, Andrew Cooper wrote:
> This is some coverity-inspired tidying.
> 
> Coverity has some grief analysing the call sites of atomic_read().  This is
> believed to be a bug in Coverity itself when expanding the nested macros, but
> there is no legitimate reason for it to be a macro in the first place.
> 
> This patch changes {,_}atomic_{read,set}() from being macros to being static
> inline functions, thus gaining some type safety.
> 
> One issue which is not immediatly obvious is that the non-atomic varients take
> their atomic_t at a different level of indirection to the atomic varients.

"variants" and "immediately"

> This is not suitable for _atomic_set() (when used to initialise an atomic_t)
> which is converted to take its parameter as a pointer.  One callsite of
> _atomic_set() is updated, while the other two callsites are updated to
> ATOMIC_INIT().
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Ian Campbell <ian.campbell@citrix.com>
> CC: Stefano Stabellini <stefano.stabellini@citrix.com>
> CC: Tim Deegan <tim@xen.org>
> 
> ---
> 
> This is compile-tested on arm32 and 64

Thanks!

> +static inline void atomic_set(atomic_t *v, int i)
> +{
> +    v->counter = i;
> +}
> +
> +static inline void _atomic_set(atomic_t *v, int i)
> +{
> +    v->counter = i;
> +}

Are these now the same on purpose? (previously one took a pointer and
the other the actual variable).

I don't have any especially strong feelings on the patch generally. If
x86 is going to change then I suppose ARM might as well do so for
consistency.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:54:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHu7Q-0006Qb-Dw; Mon, 24 Feb 2014 11:54:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WHu7P-0006QV-LH
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:54:43 +0000
Received: from [193.109.254.147:26145] by server-10.bemta-14.messagelabs.com
	id DA/40-10711-3033B035; Mon, 24 Feb 2014 11:54:43 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393242880!6365839!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10285 invoked from network); 24 Feb 2014 11:54:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:54:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105162585"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:54:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:54:39 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WHu7L-00032x-2Y;
	Mon, 24 Feb 2014 11:54:39 +0000
Date: Mon, 24 Feb 2014 11:54:36 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-15-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1402241152010.4471@kaball.uk.xensource.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-15-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 14/15] xen/arm: Add the property
 "protected-devices" in the hypervisor node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 23 Feb 2014, Julien Grall wrote:
> DOM0 is using the swiotlb to bounce DMA. With the IOMMU support in Xen,
> protected devices should not use it.
> 
> Only Xen is able to know whether an IOMMU protects a device. The new property
> "protected-devices" is a list of phandles of devices protected by an IOMMU.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

You need to send a patch to

Documentation/devicetree/bindings/arm/xen.txt

I would like the commit message of this patch to reference the changes to it.


>     This patch *MUST NOT* be applied until we agree on a device binding with
>     the device tree folks. DOM0 can run safely with the swiotlb on protected
>     devices as long as LVM is not used for guest disks.
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/domain_build.c |   51 ++++++++++++++++++++++++++++++++++++++-----
>  xen/arch/arm/kernel.h       |    3 +++
>  2 files changed, 48 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9cbdd61..ca7dade 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -324,19 +324,22 @@ static int make_memory_node(const struct domain *d,
>      return res;
>  }
>  
> -static int make_hypervisor_node(struct domain *d,
> -                                void *fdt, const struct dt_device_node *parent)
> +static int make_hypervisor_node(struct domain *d, struct kernel_info *kinfo,
> +                                const struct dt_device_node *parent)
>  {
>      const char compat[] =
>          "xen,xen-"__stringify(XEN_VERSION)"."__stringify(XEN_SUBVERSION)"\0"
>          "xen,xen";
>      __be32 reg[4];
>      gic_interrupt_t intr;
> -    __be32 *cells;
> +    __be32 *cells, *_cells;
>      int res;
>      int addrcells = dt_n_addr_cells(parent);
>      int sizecells = dt_n_size_cells(parent);
>      paddr_t gnttab_start, gnttab_size;
> +    const struct dt_device_node *dev;
> +    struct hvm_iommu *hd = domain_hvm_iommu(d);
> +    void *fdt = kinfo->fdt;
>  
>      DPRINT("Create hypervisor node\n");
>  
> @@ -384,6 +387,39 @@ static int make_hypervisor_node(struct domain *d,
>      if ( res )
>          return res;
>  
> +    if ( kinfo->num_dev_protected )
> +    {
> +        /* Don't need to take dtdevs_lock here */
> +        cells = xmalloc_array(__be32, kinfo->num_dev_protected *
> +                              dt_size_to_cells(sizeof(dt_phandle)));
> +        if ( !cells )
> +            return -FDT_ERR_XEN(ENOMEM);
> +
> +        _cells = cells;
> +
> +        DPRINT("  List of protected devices\n");
> +        list_for_each_entry( dev, &hd->dt_devices, next_assigned )
> +        {
> +            DPRINT("    - %s\n", dt_node_full_name(dev));
> +            if ( !dev->phandle )
> +            {
> +                printk(XENLOG_ERR "Unable to handle protected device (%s)"
> +                       "with no phandle", dt_node_full_name(dev));
> +                xfree(cells);
> +                return -FDT_ERR_XEN(EINVAL);
> +            }
> +            dt_set_cell(&_cells, dt_size_to_cells(sizeof(dt_phandle)),
> +                        dev->phandle);
> +        }
> +
> +        res = fdt_property(fdt, "protected-devices", cells,
> +                           sizeof (dt_phandle) * kinfo->num_dev_protected);
> +
> +        xfree(cells);
> +        if ( res )
> +            return res;
> +    }
> +
>      res = fdt_end_node(fdt);
>  
>      return res;
> @@ -670,7 +706,8 @@ static int make_timer_node(const struct domain *d, void *fdt,
>  }
>  
>  /* Map the device in the domain */
> -static int map_device(struct domain *d, struct dt_device_node *dev)
> +static int map_device(struct domain *d, struct kernel_info *kinfo,
> +                      struct dt_device_node *dev)
>  {
>      unsigned int nirq;
>      unsigned int naddr;
> @@ -695,6 +732,7 @@ static int map_device(struct domain *d, struct dt_device_node *dev)
>                     dt_node_full_name(dev));
>              return res;
>          }
> +        kinfo->num_dev_protected++;
>      }
>  
>      /* Map IRQs */
> @@ -844,7 +882,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
>      if ( !dt_device_type_is_equal(node, "memory") &&
>           dt_device_is_available(node) )
>      {
> -        res = map_device(d, node);
> +        res = map_device(d, kinfo, node);
>  
>          if ( res )
>              return res;
> @@ -875,7 +913,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
>  
>      if ( node == dt_host )
>      {
> -        res = make_hypervisor_node(d, kinfo->fdt, node);
> +        res = make_hypervisor_node(d, kinfo, node);
>          if ( res )
>              return res;
>  
> @@ -1028,6 +1066,7 @@ int construct_dom0(struct domain *d)
>  
>      d->max_pages = ~0U;
>  
> +    kinfo.num_dev_protected = 0;
>      kinfo.unassigned_mem = dom0_mem;
>  
>      allocate_memory(d, &kinfo);
> diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
> index b48c2c9..3af5c50 100644
> --- a/xen/arch/arm/kernel.h
> +++ b/xen/arch/arm/kernel.h
> @@ -18,6 +18,9 @@ struct kernel_info {
>      paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
>      struct dt_mem_info mem;
>  
> +    /* Number of devices protected by an IOMMU */
> +    unsigned int num_dev_protected;
> +
>      paddr_t dtb_paddr;
>      paddr_t entry;
>  
> -- 
> 1.7.10.4
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:54:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHu7Q-0006Qb-Dw; Mon, 24 Feb 2014 11:54:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WHu7P-0006QV-LH
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 11:54:43 +0000
Received: from [193.109.254.147:26145] by server-10.bemta-14.messagelabs.com
	id DA/40-10711-3033B035; Mon, 24 Feb 2014 11:54:43 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393242880!6365839!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10285 invoked from network); 24 Feb 2014 11:54:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:54:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105162585"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:54:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:54:39 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WHu7L-00032x-2Y;
	Mon, 24 Feb 2014 11:54:39 +0000
Date: Mon, 24 Feb 2014 11:54:36 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Julien Grall <julien.grall@linaro.org>
In-Reply-To: <1393193792-20008-15-git-send-email-julien.grall@linaro.org>
Message-ID: <alpine.DEB.2.02.1402241152010.4471@kaball.uk.xensource.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-15-git-send-email-julien.grall@linaro.org>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 14/15] xen/arm: Add the property
 "protected-devices" in the hypervisor node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sun, 23 Feb 2014, Julien Grall wrote:
> DOM0 uses the swiotlb to bounce DMA buffers. With the IOMMU support in Xen,
> protected devices should not use it.
> 
> Only Xen is able to know whether an IOMMU protects a device. The new property
> "protected-devices" is a list of phandles of the devices protected by an IOMMU.
> 
> Signed-off-by: Julien Grall <julien.grall@linaro.org>

You need to send a patch to

Documentation/devicetree/bindings/arm/xen.txt

I would like the commit message of this patch to reference those changes.


>     This patch *MUST NOT* be applied until we agree on a device binding with
>     the device tree folks. DOM0 can run safely with the swiotlb on protected
>     devices as long as LVM is not used for a guest disk.
> 
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/domain_build.c |   51 ++++++++++++++++++++++++++++++++++++++-----
>  xen/arch/arm/kernel.h       |    3 +++
>  2 files changed, 48 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 9cbdd61..ca7dade 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -324,19 +324,22 @@ static int make_memory_node(const struct domain *d,
>      return res;
>  }
>  
> -static int make_hypervisor_node(struct domain *d,
> -                                void *fdt, const struct dt_device_node *parent)
> +static int make_hypervisor_node(struct domain *d, struct kernel_info *kinfo,
> +                                const struct dt_device_node *parent)
>  {
>      const char compat[] =
>          "xen,xen-"__stringify(XEN_VERSION)"."__stringify(XEN_SUBVERSION)"\0"
>          "xen,xen";
>      __be32 reg[4];
>      gic_interrupt_t intr;
> -    __be32 *cells;
> +    __be32 *cells, *_cells;
>      int res;
>      int addrcells = dt_n_addr_cells(parent);
>      int sizecells = dt_n_size_cells(parent);
>      paddr_t gnttab_start, gnttab_size;
> +    const struct dt_device_node *dev;
> +    struct hvm_iommu *hd = domain_hvm_iommu(d);
> +    void *fdt = kinfo->fdt;
>  
>      DPRINT("Create hypervisor node\n");
>  
> @@ -384,6 +387,39 @@ static int make_hypervisor_node(struct domain *d,
>      if ( res )
>          return res;
>  
> +    if ( kinfo->num_dev_protected )
> +    {
> +        /* Don't need to take dtdevs_lock here */
> +        cells = xmalloc_array(__be32, kinfo->num_dev_protected *
> +                              dt_size_to_cells(sizeof(dt_phandle)));
> +        if ( !cells )
> +            return -FDT_ERR_XEN(ENOMEM);
> +
> +        _cells = cells;
> +
> +        DPRINT("  List of protected devices\n");
> +        list_for_each_entry( dev, &hd->dt_devices, next_assigned )
> +        {
> +            DPRINT("    - %s\n", dt_node_full_name(dev));
> +            if ( !dev->phandle )
> +            {
> +                printk(XENLOG_ERR "Unable to handle protected device (%s) "
> +                       "with no phandle\n", dt_node_full_name(dev));
> +                xfree(cells);
> +                return -FDT_ERR_XEN(EINVAL);
> +            }
> +            dt_set_cell(&_cells, dt_size_to_cells(sizeof(dt_phandle)),
> +                        dev->phandle);
> +        }
> +
> +        res = fdt_property(fdt, "protected-devices", cells,
> +                           sizeof (dt_phandle) * kinfo->num_dev_protected);
> +
> +        xfree(cells);
> +        if ( res )
> +            return res;
> +    }
> +
>      res = fdt_end_node(fdt);
>  
>      return res;
> @@ -670,7 +706,8 @@ static int make_timer_node(const struct domain *d, void *fdt,
>  }
>  
>  /* Map the device in the domain */
> -static int map_device(struct domain *d, struct dt_device_node *dev)
> +static int map_device(struct domain *d, struct kernel_info *kinfo,
> +                      struct dt_device_node *dev)
>  {
>      unsigned int nirq;
>      unsigned int naddr;
> @@ -695,6 +732,7 @@ static int map_device(struct domain *d, struct dt_device_node *dev)
>                     dt_node_full_name(dev));
>              return res;
>          }
> +        kinfo->num_dev_protected++;
>      }
>  
>      /* Map IRQs */
> @@ -844,7 +882,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
>      if ( !dt_device_type_is_equal(node, "memory") &&
>           dt_device_is_available(node) )
>      {
> -        res = map_device(d, node);
> +        res = map_device(d, kinfo, node);
>  
>          if ( res )
>              return res;
> @@ -875,7 +913,7 @@ static int handle_node(struct domain *d, struct kernel_info *kinfo,
>  
>      if ( node == dt_host )
>      {
> -        res = make_hypervisor_node(d, kinfo->fdt, node);
> +        res = make_hypervisor_node(d, kinfo, node);
>          if ( res )
>              return res;
>  
> @@ -1028,6 +1066,7 @@ int construct_dom0(struct domain *d)
>  
>      d->max_pages = ~0U;
>  
> +    kinfo.num_dev_protected = 0;
>      kinfo.unassigned_mem = dom0_mem;
>  
>      allocate_memory(d, &kinfo);
> diff --git a/xen/arch/arm/kernel.h b/xen/arch/arm/kernel.h
> index b48c2c9..3af5c50 100644
> --- a/xen/arch/arm/kernel.h
> +++ b/xen/arch/arm/kernel.h
> @@ -18,6 +18,9 @@ struct kernel_info {
>      paddr_t unassigned_mem; /* RAM not (yet) assigned to a bank */
>      struct dt_mem_info mem;
>  
> +    /* Number of devices protected by an IOMMU */
> +    unsigned int num_dev_protected;
> +
>      paddr_t dtb_paddr;
>      paddr_t entry;
>  
> -- 
> 1.7.10.4
> 
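For context, the binding under discussion would make the hypervisor node in Dom0's device tree look roughly like this. This is only a hypothetical sketch: the node layout follows the existing xen.txt binding, but the phandle targets (&eth0, &mmc0) and the register/interrupt values are made up, and the exact form of "protected-devices" was still pending agreement with the device-tree maintainers:

```dts
/ {
	hypervisor {
		compatible = "xen,xen-4.4", "xen,xen";
		reg = <0xb0000000 0x20000>;        /* grant table region */
		interrupts = <1 15 0xf08>;         /* event channel PPI */
		/* Phandles of the devices behind an IOMMU; Dom0 could
		 * skip the swiotlb when doing DMA with these devices. */
		protected-devices = <&eth0 &mmc0>;
	};
};
```

Dom0 would then only need to walk this phandle list at boot and mark the referenced devices as not requiring bounce buffering.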

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 11:59:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 11:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuCK-0006aJ-9r; Mon, 24 Feb 2014 11:59:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHuCJ-0006aC-Ab
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 11:59:47 +0000
Received: from [193.109.254.147:31930] by server-14.bemta-14.messagelabs.com
	id 18/E7-29228-2343B035; Mon, 24 Feb 2014 11:59:46 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393243184!2404213!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20514 invoked from network); 24 Feb 2014 11:59:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 11:59:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="105163125"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 11:59:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 06:59:43 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHuCF-00038q-0V;
	Mon, 24 Feb 2014 11:59:43 +0000
Message-ID: <530B342E.6010503@citrix.com>
Date: Mon, 24 Feb 2014 11:59:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1393015311-16167-1-git-send-email-andrew.cooper3@citrix.com>
	<1393242638.16570.57.camel@kazak.uk.xensource.com>
In-Reply-To: <1393242638.16570.57.camel@kazak.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] Xen/atomic: use static inlines instead of
	macros
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 11:50, Ian Campbell wrote:
> On Fri, 2014-02-21 at 20:41 +0000, Andrew Cooper wrote:
>> This is some coverity-inspired tidying.
>>
>> Coverity has some grief analysing the call sites of atomic_read().  This is
>> believed to be a bug in Coverity itself when expanding the nested macros, but
>> there is no legitimate reason for it to be a macro in the first place.
>>
>> This patch changes {,_}atomic_{read,set}() from being macros to being static
>> inline functions, thus gaining some type safety.
>>
>> One issue which is not immediatly obvious is that the non-atomic varients take
>> their atomic_t at a different level of indirection to the atomic varients.
> "variants" and "immediately"

Oops - I did correct these, then sent the previous patch.

>
>> This is not suitable for _atomic_set() (when used to initialise an atomic_t)
>> which is converted to take its parameter as a pointer.  One callsite of
>> _atomic_set() is updated, while the other two callsites are updated to
>> ATOMIC_INIT().
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Ian Campbell <ian.campbell@citrix.com>
>> CC: Stefano Stabellini <stefano.stabellini@citrix.com>
>> CC: Tim Deegan <tim@xen.org>
>>
>> ---
>>
>> This is compile-tested on arm32 and 64
> Thanks!
>
>> +static inline void atomic_set(atomic_t *v, int i)
>> +{
>> +    v->counter = i;
>> +}
>> +
>> +static inline void _atomic_set(atomic_t *v, int i)
>> +{
>> +    v->counter = i;
>> +}
> Are these now the same on purpose? (previously one took a pointer and
> the other the actual variable).
>
> I don't have any especially strong feelings on the patch generally. If
> x86 is going to change then I suppose ARM might as well do so for
> consistency.
>
> Ian.
>
>
>

_atomic_set(stack_var, val) makes it a functional no-op and, crucially,
leaves the caller with an untouched atomic_t.

I had to change the indirection to make x86 work, before checking arm
and finding that the two were identical.  They are used in common code
where the difference is required on x86.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:05:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuHo-0006sV-31; Mon, 24 Feb 2014 12:05:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHuHm-0006sQ-3v
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:05:26 +0000
Received: from [193.109.254.147:30399] by server-13.bemta-14.messagelabs.com
	id D7/30-01226-5853B035; Mon, 24 Feb 2014 12:05:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393243524!6388365!1
X-Originating-IP: [74.125.83.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24172 invoked from network); 24 Feb 2014 12:05:24 -0000
Received: from mail-ee0-f48.google.com (HELO mail-ee0-f48.google.com)
	(74.125.83.48)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:05:24 -0000
Received: by mail-ee0-f48.google.com with SMTP id c13so257506eek.21
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 04:05:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=4GsCYzhdH49CqqHuLZ5bL3Ihwe+BivCJtFQSzu3Fp6A=;
	b=FPtQ1l/x/KJ1bUVTd+M7b654sTYReYQXhlWPJvV81nbZbkfzZYzTU7JN12hh+gmgu9
	jgO6VRj9rqy/yUsvPFPNA1SXS4yeK/bSh8zj38bPDIFoNTqtRyCF6alJvBtdsvLNrN07
	6cn8aNI9R6IDgI46pfsbTf5eeg42xxe145rzcr773Fmgx4cs+Bj0lS/9KDw/ivO8bCMK
	V22eA0pB4SPmdBRnYACulC+p3c+JPiHYxfEnNdfhTOL7EaIVtggAc5ytk3NcgFWC7r3Q
	KPZPK2F+NJpdbfYqwNIPpzA63GSWNd2S9/stzuvq/Asc7BHjPKai5Xeh0pbiL7BhGXl1
	hG9Q==
X-Gm-Message-State: ALoCoQn0Cghw6yzT+s/cAKuFzJqR8K9hK8eUjeZNaXIF1/HdJ5YHfVrk+NY0BIIm2tE+LMflHJ0y
X-Received: by 10.15.22.65 with SMTP id e41mr24041241eeu.5.1393243524444;
	Mon, 24 Feb 2014 04:05:24 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm63001088ees.4.2014.02.24.04.05.22
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 04:05:23 -0800 (PST)
Message-ID: <530B3581.1040602@linaro.org>
Date: Mon, 24 Feb 2014 12:05:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-15-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1402241152010.4471@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402241152010.4471@kaball.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 14/15] xen/arm: Add the property
 "protected-devices" in the hypervisor node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 02/24/2014 11:54 AM, Stefano Stabellini wrote:
> On Sun, 23 Feb 2014, Julien Grall wrote:
>> DOM0 uses the swiotlb to bounce DMA buffers. With the IOMMU support in Xen,
>> protected devices should not use it.
>>
>> Only Xen is able to know whether an IOMMU protects a device. The new property
>> "protected-devices" is a list of phandles of the devices protected by an IOMMU.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> You need to send a patch to

I already sent the patch on Friday: https://patches.linaro.org/25070/.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:05:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:05:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuHo-0006sV-31; Mon, 24 Feb 2014 12:05:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHuHm-0006sQ-3v
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:05:26 +0000
Received: from [193.109.254.147:30399] by server-13.bemta-14.messagelabs.com
	id D7/30-01226-5853B035; Mon, 24 Feb 2014 12:05:25 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393243524!6388365!1
X-Originating-IP: [74.125.83.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24172 invoked from network); 24 Feb 2014 12:05:24 -0000
Received: from mail-ee0-f48.google.com (HELO mail-ee0-f48.google.com)
	(74.125.83.48)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:05:24 -0000
Received: by mail-ee0-f48.google.com with SMTP id c13so257506eek.21
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 04:05:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=4GsCYzhdH49CqqHuLZ5bL3Ihwe+BivCJtFQSzu3Fp6A=;
	b=FPtQ1l/x/KJ1bUVTd+M7b654sTYReYQXhlWPJvV81nbZbkfzZYzTU7JN12hh+gmgu9
	jgO6VRj9rqy/yUsvPFPNA1SXS4yeK/bSh8zj38bPDIFoNTqtRyCF6alJvBtdsvLNrN07
	6cn8aNI9R6IDgI46pfsbTf5eeg42xxe145rzcr773Fmgx4cs+Bj0lS/9KDw/ivO8bCMK
	V22eA0pB4SPmdBRnYACulC+p3c+JPiHYxfEnNdfhTOL7EaIVtggAc5ytk3NcgFWC7r3Q
	KPZPK2F+NJpdbfYqwNIPpzA63GSWNd2S9/stzuvq/Asc7BHjPKai5Xeh0pbiL7BhGXl1
	hG9Q==
X-Gm-Message-State: ALoCoQn0Cghw6yzT+s/cAKuFzJqR8K9hK8eUjeZNaXIF1/HdJ5YHfVrk+NY0BIIm2tE+LMflHJ0y
X-Received: by 10.15.22.65 with SMTP id e41mr24041241eeu.5.1393243524444;
	Mon, 24 Feb 2014 04:05:24 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id 46sm63001088ees.4.2014.02.24.04.05.22
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 04:05:23 -0800 (PST)
Message-ID: <530B3581.1040602@linaro.org>
Date: Mon, 24 Feb 2014 12:05:21 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-15-git-send-email-julien.grall@linaro.org>
	<alpine.DEB.2.02.1402241152010.4471@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402241152010.4471@kaball.uk.xensource.com>
Cc: xen-devel@lists.xenproject.org, tim@xen.org, ian.campbell@citrix.com,
	stefano.stabellini@citrix.com
Subject: Re: [Xen-devel] [PATCH v2 14/15] xen/arm: Add the property
 "protected-devices" in the hypervisor node
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 02/24/2014 11:54 AM, Stefano Stabellini wrote:
> On Sun, 23 Feb 2014, Julien Grall wrote:
>> DOM0 is using the swiotlb to bounce DMA. With the IOMMU support in Xen,
>> protected devices should not use it.
>>
>> Only Xen is able to know if an IOMMU protects the device. The new property
>> "protected-devices" is a list of device phandles protected by an IOMMU.
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> 
> You need to send a patch to

I already sent the patch on Friday: https://patches.linaro.org/25070/.
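
The "protected-devices" property described in the quoted commit message could
look like the following device-tree sketch (the node contents and the phandle
targets &mmc0 and &net0 are illustrative, not taken from the actual patch):

```dts
hypervisor {
	compatible = "xen,xen";
	/* Phandles of devices whose DMA is protected by an
	 * IOMMU that Xen has programmed; DOM0 can skip the
	 * swiotlb bounce buffer for these devices. */
	protected-devices = <&mmc0 &net0>;
};
```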

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:11:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuNA-00073Z-0p; Mon, 24 Feb 2014 12:11:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHuN9-00073T-0x
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:10:59 +0000
Received: from [85.158.137.68:4768] by server-8.bemta-3.messagelabs.com id
	8A/8D-16039-2D63B035; Mon, 24 Feb 2014 12:10:58 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393243857!3815815!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24600 invoked from network); 24 Feb 2014 12:10:57 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:10:57 -0000
Received: by mail-ee0-f47.google.com with SMTP id e49so914090eek.6
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 04:10:57 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=/scf1I3W84Ro60ti30AR9js2Mo8yu7JodORsqP5Mf5Y=;
	b=Y+l06DqDsvlKTf0BTwDppUWppz9H58LPojz/NL/HuW1FHkWCIjIi313g6D2cn/QSFC
	/qBedCDTR5sTcBMtsLTLZWqF6lG/CWSAMgtWIY1rH0xq9b8efRdL2W6x/9bneRY5znBR
	rSP1YuUdF8S1P8CLmIDanai5R8MujclufOv93KixfXQg/j2cYjs7itUj2ojqiA4R4Hgo
	vt5UQJwtn79Qy8uS17Xfr2mlbA7+CTDOHB9blnTe9O+OB+tW1RoA7acfc0Nv1NkxDWux
	wdPYf4kO6a8h0n+BNTp8a1LNT5U4zAgFGvRgMNcVWMIxhGwQJpibRHeqziKRC2tdd12C
	HrQQ==
X-Gm-Message-State: ALoCoQmhL5xHtkE9acyCohCisLmsb/PzV7p4q4Ul7rZ0yGWmwa25ae2AoSwl5aJtDmeXXu7fhAGh
X-Received: by 10.15.81.197 with SMTP id x45mr23725227eey.28.1393243856772;
	Mon, 24 Feb 2014 04:10:56 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249]) by mx.google.com with ESMTPSA id
	o43sm63090930eef.12.2014.02.24.04.10.55 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 04:10:56 -0800 (PST)
Message-ID: <530B36CD.3020302@linaro.org>
Date: Mon, 24 Feb 2014 12:10:53 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-3-git-send-email-julien.grall@linaro.org>
	<530B3C5C020000780011EB3D@nat28.tlf.novell.com>
In-Reply-To: <530B3C5C020000780011EB3D@nat28.tlf.novell.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
 iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 11:34 AM, Jan Beulich wrote:
>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>> iommu_domain_teardown is only used internally in
>> xen/drivers/passthrough/vtd/iommu.c
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>> ---
>>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
>> b/xen/drivers/passthrough/vtd/iommu.c
>> index 5f10034..a8d33fc 100644
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
>>      return ret;
>>  }
>>  
>> -void iommu_domain_teardown(struct domain *d)
>> +static void iommu_domain_teardown(struct domain *d)
>>  {
>>      struct hvm_iommu *hd = domain_hvm_iommu(d);
>>  
> 
> Please build-test your changes - this was lacking the removal of
> the function's declaration from xen/iommu.h.

Actually I did the build test on x86 ... By mistake I removed the
function's declaration in the wrong patch "xen/passthrough: iommu: Split
generic IOMMU code".

I will fix it in the next patch series.

Cheers,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:15:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuR6-0007At-Oh; Mon, 24 Feb 2014 12:15:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHuR5-0007An-BE
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 12:15:03 +0000
Received: from [85.158.137.68:63686] by server-1.bemta-3.messagelabs.com id
	4C/17-17293-6C73B035; Mon, 24 Feb 2014 12:15:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393244100!2554027!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9909 invoked from network); 24 Feb 2014 12:15:01 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:15:01 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103507040"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 12:14:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 07:14:59 -0500
Message-ID: <1393244098.16570.63.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Mon, 24 Feb 2014 12:14:58 +0000
In-Reply-To: <CAFLBxZaW9w4mZ2XBsoygrjRuHEEd=rHf7dGdZ856s3JUcnFCxA@mail.gmail.com>
References: <CAFLBxZaW9w4mZ2XBsoygrjRuHEEd=rHf7dGdZ856s3JUcnFCxA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen 4.4 branched,
 Xen 4.5-unstable open for development
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-21 at 17:09 +0000, George Dunlap wrote:
> We have now branched 4.4 in preparation for the official release
> sometime a week or two hence.

Woohoo.

> Thus the 4.5 development window is now open.

Woohoo!

On that note: I'm at a company-wide meeting all day tomorrow, and on
Wednesday I fly out to Macau for Linaro Connect. I'll be back on 11
March.

I've got an email folder full of patches which were deferred for 4.5 and
it's unlikely I'll start wading through that with any amount of gusto
until I get back. If anything needs rebasing or resending then there's
no harm in doing so over the next week or so though.

> Thank you everyone for your hard work this release cycle!

Yes indeed! Thanks for your RM efforts...

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:18:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuUb-0007Iy-Jd; Mon, 24 Feb 2014 12:18:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHuUZ-0007It-Uk
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:18:40 +0000
Received: from [85.158.143.35:37005] by server-1.bemta-4.messagelabs.com id
	C6/A9-31661-F983B035; Mon, 24 Feb 2014 12:18:39 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393244318!7848391!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17837 invoked from network); 24 Feb 2014 12:18:38 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:18:38 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:18:38 +0000
Message-Id: <530B46AC020000780011EB91@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:18:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-3-git-send-email-julien.grall@linaro.org>
	<530B3C5C020000780011EB3D@nat28.tlf.novell.com>
	<530B36CD.3020302@linaro.org>
In-Reply-To: <530B36CD.3020302@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
 iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 13:10, Julien Grall <julien.grall@linaro.org> wrote:
> On 02/24/2014 11:34 AM, Jan Beulich wrote:
>>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>>> iommu_domain_teardown is only used internally in
>>> xen/drivers/passthrough/vtd/iommu.c
>>>
>>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>> Cc: Xiantao Zhang <xiantao.zhang@intel.com>
>>> ---
>>>  xen/drivers/passthrough/vtd/iommu.c |    2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
>>> b/xen/drivers/passthrough/vtd/iommu.c
>>> index 5f10034..a8d33fc 100644
>>> --- a/xen/drivers/passthrough/vtd/iommu.c
>>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>>> @@ -1701,7 +1701,7 @@ static int reassign_device_ownership(
>>>      return ret;
>>>  }
>>>  
>>> -void iommu_domain_teardown(struct domain *d)
>>> +static void iommu_domain_teardown(struct domain *d)
>>>  {
>>>      struct hvm_iommu *hd = domain_hvm_iommu(d);
>>>  
>> 
>> Please build-test your changes - this was lacking the removal of
>> the function's declaration from xen/iommu.h.
> 
> Actually I did the build test on x86 ... By mistake I removed the
> function's declaration in the wrong patch "xen/passthrough: iommu: Split
> generic IOMMU code".

So I suppose you build tested the whole series, but not each
individual patch...
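
Build-testing each patch of a series individually can be scripted; below is a
minimal sketch (the helper name build_each_commit and the caller-supplied
build command are illustrative, not an actual Xen tool):

```shell
# build_each_commit RANGE CMD
# Check out every commit in RANGE (oldest first) and run the build
# command CMD there; report and stop at the first commit that fails,
# restoring the original HEAD either way.
build_each_commit() {
    range=$1; cmd=$2
    start=$(git rev-parse HEAD) || return 1
    for c in $(git rev-list --reverse "$range"); do
        git checkout -q "$c" || { git checkout -q "$start"; return 1; }
        if ! sh -c "$cmd"; then
            echo "build failed at $c"
            git checkout -q "$start"
            return 1
        fi
    done
    # restore the original HEAD once every commit has built
    git checkout -q "$start"
}
```

For the Xen tree the build command would typically be something like
"make -C xen", e.g. build_each_commit master..HEAD 'make -C xen'.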

> I will fix it in the next patch series.

In the sense that you need to drop the change from the later
patch then, as I applied the trivial three patches from the series
already.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:19:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuVI-0007Lv-98; Mon, 24 Feb 2014 12:19:24 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WHuVE-0007LL-99
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:19:22 +0000
Received: from [85.158.143.35:43612] by server-2.bemta-4.messagelabs.com id
	32/99-10891-7C83B035; Mon, 24 Feb 2014 12:19:19 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393244357!7827065!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6027 invoked from network); 24 Feb 2014 12:19:18 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:19:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,534,1389744000"; d="scan'208";a="103507957"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 12:19:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 07:19:16 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WHuV8-0003QJ-KP;
	Mon, 24 Feb 2014 12:19:14 +0000
Date: Mon, 24 Feb 2014 12:19:11 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1392914159.32657.18.camel@kazak.uk.xensource.com>
Message-ID: <alpine.DEB.2.02.1402241214570.4471@kaball.uk.xensource.com>
References: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
	<1392914159.32657.18.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	Russell King <linux@arm.linux.org.uk>, Pawel Moll <pawel.moll@arm.com>,
	stefano.stabellini@eu.citrix.com,
	"gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>,
	Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, Rob Herring <robh+dt@kernel.org>,
	Rob Landley <rob@landley.net>, Kumar Gala <galak@codeaurora.org>,
	xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
 device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

CC'ing Greg.

On Thu, 20 Feb 2014, Ian Campbell wrote:
> On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> > Only Xen knows whether a device can safely avoid using xen-swiotlb.
> > This patch introduces a new property, "protected-devices", for the
> > hypervisor node, listing the devices for which the IOMMU has been
> > correctly programmed by Xen.
> > 
> > During Linux boot, Xen-specific code will create a hash table containing
> > all these devices. The hash table is used in need_xen_dma_ops to check
> > whether the Xen DMA ops need to be used for the current device.
> 
> Is it out of the question to find a field within struct device itself to
> store this, e.g. in struct device_dma_parameters, and avoid the need for
> a hashtable lookup?
> 
> device->iommu_group might be another option, if we can create our own
> group?

I agree that a field in struct device would be ideal.
Greg, get_maintainer.pl points at you as main maintainer of device.h, do
you have an opinion on this?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:21:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuXR-0007Xd-E4; Mon, 24 Feb 2014 12:21:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHuXP-0007XO-Kf
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:21:35 +0000
Received: from [85.158.139.211:9775] by server-12.bemta-5.messagelabs.com id
	17/33-15415-E493B035; Mon, 24 Feb 2014 12:21:34 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393244494!1902017!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15847 invoked from network); 24 Feb 2014 12:21:34 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:21:34 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so3085814eak.29
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 04:21:34 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=CrXydCyDEs4GoseLgDLzPnTcKOT4SFqEV2TzUrdSCqE=;
	b=kAQLh30r7B9pDZ4h3rxAtYCv7rLVe3RGU7CSQ0IPU+kt8C8KoPkmEAnZsZNv6EF5Iq
	cTLfy2mv7dxKXZaRIrh08t1xk/Ly3/KcZB/jZ0+Ft0pZw61cPxPMc1NzqEZpyFVcszv9
	6tosVGfs+OATtPGCfrZuMb2t+m0yK79iNcwqWayb/KF6BE8Zlgg6boTeDWnP8PzMjap/
	f6KN4XMT298uRqIhDiWKRI51loji4vBk+rW9UeF4x8fJMBAxWXCIurVn9SygsY7FCvSs
	f+GRxm48cverfazq5qYLidsPt0vUA6Of6ehQoaL7/HKl+ycDCTIQINs0Nyetn1ae6SkU
	MlAg==
X-Gm-Message-State: ALoCoQlejtOKHkSepCGNWn5YLRU3+uqkTs/QeXOdYsp5Vulhc2miPn6HWNOvlk5ST7ieqkC+8hij
X-Received: by 10.14.1.68 with SMTP id 44mr24585239eec.0.1393244493795;
	Mon, 24 Feb 2014 04:21:33 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id n48sm42366131eew.0.2014.02.24.04.21.32
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 04:21:33 -0800 (PST)
Message-ID: <530B394B.1040601@linaro.org>
Date: Mon, 24 Feb 2014 12:21:31 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-3-git-send-email-julien.grall@linaro.org>
	<530B3C5C020000780011EB3D@nat28.tlf.novell.com>
	<530B36CD.3020302@linaro.org>
	<530B46AC020000780011EB91@nat28.tlf.novell.com>
In-Reply-To: <530B46AC020000780011EB91@nat28.tlf.novell.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 02/15] xen/passthrough: vtd: Don't export
 iommu_domain_teardown
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 12:18 PM, Jan Beulich wrote:
> So I suppose you build tested the whole series, but not each
> individual patch...

I did on most of the patches, but forgot to do it for the trivial patches.

>> I will fix it in the next patch series.
> 
> In the sense that you need to drop the change from the later
> patch then, as I applied the trivial three patches from the series
> already.

Thanks,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:31:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuh0-0007mk-V8; Mon, 24 Feb 2014 12:31:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHugz-0007mf-Is
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:31:29 +0000
Received: from [85.158.137.68:3289] by server-7.bemta-3.messagelabs.com id
	78/CB-13775-0AB3B035; Mon, 24 Feb 2014 12:31:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393245087!3838523!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21080 invoked from network); 24 Feb 2014 12:31:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:31:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:31:27 +0000
Message-Id: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:31:25 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH RESEND 0/4] x86: enable xsave-based ISA
	extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These are AVX-512 and MPX.

1: xsave: enable support for new ISA extensions
2: MPX IA32_BNDCFGS msr handle
3: generic MSRs save/restore
4: MSR_IA32_BNDCFGS save/restore

Signed-off-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 12:36:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHulm-0007vA-PW; Mon, 24 Feb 2014 12:36:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHulk-0007v3-Sy
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:36:25 +0000
Received: from [193.109.254.147:64626] by server-5.bemta-14.messagelabs.com id
	03/84-16688-8CC3B035; Mon, 24 Feb 2014 12:36:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393245383!6382770!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9786 invoked from network); 24 Feb 2014 12:36:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:36:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:36:23 +0000
Message-Id: <530B4AD3020000780011EBCE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:36:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDFED8AD3.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH RESEND 1/4] x86/xsave: enable support for new
 ISA extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDFED8AD3.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Intel has released a new version of Intel Architecture Instruction Set
Extensions Programming Reference, adding new features like AVX-512,
MPX, etc. Refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf

This patch adds support for these new instruction set extensions
without enabling this support for guest use, yet.

It also adjusts XCR0 validation, at once fixing the definition of
XSTATE_ALL (which is not supposed to include bit 63).

Signed-off-by: Jan Beulich <jbeulich@novell.com>

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -253,7 +253,7 @@ void xstate_free_save_area(struct vcpu *
 /* Collect the information of processor's extended state */
 void xstate_init(bool_t bsp)
 {
-    u32 eax, ebx, ecx, edx, min_size;
+    u32 eax, ebx, ecx, edx;
     u64 feature_mask;
 
     if ( boot_cpu_data.cpuid_level < XSTATE_CPUID )
@@ -269,12 +269,6 @@ void xstate_init(bool_t bsp)
     BUG_ON((eax & XSTATE_YMM) && !(eax & XSTATE_SSE));
     feature_mask = (((u64)edx << 32) | eax) & XCNTXT_MASK;
 
-    /* FP/SSE, XSAVE.HEADER, YMM */
-    min_size =  XSTATE_AREA_MIN_SIZE;
-    if ( eax & XSTATE_YMM )
-        min_size += XSTATE_YMM_SIZE;
-    BUG_ON(ecx < min_size);
-
     /*
      * Set CR4_OSXSAVE and run "cpuid" to get xsave_cntxt_size.
      */
@@ -327,14 +321,38 @@ unsigned int xstate_ctxt_size(u64 xcr0)
     return ebx;
 }
 
+static bool_t valid_xcr0(u64 xcr0)
+{
+    /* FP must be unconditionally set. */
+    if ( !(xcr0 & XSTATE_FP) )
+        return 0;
+
+    /* YMM depends on SSE. */
+    if ( (xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE) )
+        return 0;
+
+    if ( xcr0 & (XSTATE_OPMASK | XSTATE_ZMM | XSTATE_HI_ZMM) )
+    {
+        /* OPMASK, ZMM, and HI_ZMM require YMM. */
+        if ( !(xcr0 & XSTATE_YMM) )
+            return 0;
+
+        /* OPMASK, ZMM, and HI_ZMM must be the same. */
+        if ( ~xcr0 & (XSTATE_OPMASK | XSTATE_ZMM | XSTATE_HI_ZMM) )
+            return 0;
+    }
+
+    /* BNDREGS and BNDCSR must be the same. */
+    return !(xcr0 & XSTATE_BNDREGS) == !(xcr0 & XSTATE_BNDCSR);
+}
+
 int validate_xstate(u64 xcr0, u64 xcr0_accum, u64 xstate_bv, u64 xfeat_mask)
 {
     if ( (xcr0_accum & ~xfeat_mask) ||
          (xstate_bv & ~xcr0_accum) ||
          (xcr0 & ~xcr0_accum) ||
-         !(xcr0 & XSTATE_FP) ||
-         ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE)) ||
-         ((xcr0_accum & XSTATE_YMM) && !(xcr0_accum & XSTATE_SSE)) )
+         !valid_xcr0(xcr0) ||
+         !valid_xcr0(xcr0_accum) )
         return -EINVAL;
 
     if ( xcr0_accum & ~xfeature_mask )
@@ -351,10 +369,7 @@ int handle_xsetbv(u32 index, u64 new_bv)
     if ( index != XCR_XFEATURE_ENABLED_MASK )
         return -EOPNOTSUPP;
 
-    if ( (new_bv & ~xfeature_mask) || !(new_bv & XSTATE_FP) )
-        return -EINVAL;
-
-    if ( (new_bv & XSTATE_YMM) && !(new_bv & XSTATE_SSE) )
+    if ( (new_bv & ~xfeature_mask) || !valid_xcr0(new_bv) )
         return -EINVAL;
 
     if ( !set_xcr0(new_bv) )
@@ -364,6 +379,10 @@ int handle_xsetbv(u32 index, u64 new_bv)
     curr->arch.xcr0 = new_bv;
     curr->arch.xcr0_accum |= new_bv;
 
+    /* LWP sets nonlazy_xstate_used independently. */
+    if ( new_bv & (XSTATE_NONLAZY & ~XSTATE_LWP) )
+        curr->arch.nonlazy_xstate_used = 1;
+
     mask &= curr->fpu_dirtied ? ~XSTATE_FP_SSE : XSTATE_NONLAZY;
     if ( mask )
     {
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -20,18 +20,23 @@
 #define XCR_XFEATURE_ENABLED_MASK 0x00000000  /* index of XCR0 */
 
 #define XSTATE_YMM_SIZE           256
-#define XSTATE_YMM_OFFSET         XSAVE_AREA_MIN_SIZE
 #define XSTATE_AREA_MIN_SIZE      (512 + 64)  /* FP/SSE + XSAVE.HEADER */
 
 #define XSTATE_FP      (1ULL << 0)
 #define XSTATE_SSE     (1ULL << 1)
 #define XSTATE_YMM     (1ULL << 2)
+#define XSTATE_BNDREGS (1ULL << 3)
+#define XSTATE_BNDCSR  (1ULL << 4)
+#define XSTATE_OPMASK  (1ULL << 5)
+#define XSTATE_ZMM     (1ULL << 6)
+#define XSTATE_HI_ZMM  (1ULL << 7)
 #define XSTATE_LWP     (1ULL << 62) /* AMD lightweight profiling */
 #define XSTATE_FP_SSE  (XSTATE_FP | XSTATE_SSE)
-#define XCNTXT_MASK    (XSTATE_FP | XSTATE_SSE | XSTATE_YMM | XSTATE_LWP)
+#define XCNTXT_MASK    (XSTATE_FP | XSTATE_SSE | XSTATE_YMM | XSTATE_OPMASK | \
+                        XSTATE_ZMM | XSTATE_HI_ZMM | XSTATE_NONLAZY)
 
-#define XSTATE_ALL     (~0)
-#define XSTATE_NONLAZY (XSTATE_LWP)
+#define XSTATE_ALL     (~(1ULL << 63))
+#define XSTATE_NONLAZY (XSTATE_LWP | XSTATE_BNDREGS | XSTATE_BNDCSR)
 #define XSTATE_LAZY    (XSTATE_ALL & ~XSTATE_NONLAZY)
 
 extern u64 xfeature_mask;



--=__PartDFED8AD3.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartDFED8AD3.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:36:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:36:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHulm-0007vA-PW; Mon, 24 Feb 2014 12:36:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHulk-0007v3-Sy
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:36:25 +0000
Received: from [193.109.254.147:64626] by server-5.bemta-14.messagelabs.com id
	03/84-16688-8CC3B035; Mon, 24 Feb 2014 12:36:24 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393245383!6382770!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9786 invoked from network); 24 Feb 2014 12:36:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:36:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:36:23 +0000
Message-Id: <530B4AD3020000780011EBCE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:36:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDFED8AD3.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH RESEND 1/4] x86/xsave: enable support for new
 ISA extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDFED8AD3.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Intel has released a new version of Intel Architecture Instruction Set
Extensions Programming Reference, adding new features like AVX-512,
MPX, etc. Refer to
http://download-software.intel.com/sites/default/files/319433-015.pdf

This patch adds support for these new instruction set extensions
without enabling this support for guest use, yet.

It also adjusts XCR0 validation, at once fixing the definition of
XSTATE_ALL (which is not supposed to include bit 63).

Signed-off-by: Jan Beulich <jbeulich@novell.com>

--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -253,7 +253,7 @@ void xstate_free_save_area(struct vcpu *
 /* Collect the information of processor's extended state */
 void xstate_init(bool_t bsp)
 {
-    u32 eax, ebx, ecx, edx, min_size;
+    u32 eax, ebx, ecx, edx;
     u64 feature_mask;
 
     if ( boot_cpu_data.cpuid_level < XSTATE_CPUID )
@@ -269,12 +269,6 @@ void xstate_init(bool_t bsp)
     BUG_ON((eax & XSTATE_YMM) && !(eax & XSTATE_SSE));
     feature_mask = (((u64)edx << 32) | eax) & XCNTXT_MASK;
 
-    /* FP/SSE, XSAVE.HEADER, YMM */
-    min_size =  XSTATE_AREA_MIN_SIZE;
-    if ( eax & XSTATE_YMM )
-        min_size += XSTATE_YMM_SIZE;
-    BUG_ON(ecx < min_size);
-
     /*
      * Set CR4_OSXSAVE and run "cpuid" to get xsave_cntxt_size.
      */
@@ -327,14 +321,38 @@ unsigned int xstate_ctxt_size(u64 xcr0)
     return ebx;
 }
 
+static bool_t valid_xcr0(u64 xcr0)
+{
+    /* FP must be unconditionally set. */
+    if ( !(xcr0 & XSTATE_FP) )
+        return 0;
+
+    /* YMM depends on SSE. */
+    if ( (xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE) )
+        return 0;
+
+    if ( xcr0 & (XSTATE_OPMASK | XSTATE_ZMM | XSTATE_HI_ZMM) )
+    {
+        /* OPMASK, ZMM, and HI_ZMM require YMM. */
+        if ( !(xcr0 & XSTATE_YMM) )
+            return 0;
+
+        /* OPMASK, ZMM, and HI_ZMM must be the same. */
+        if ( ~xcr0 & (XSTATE_OPMASK | XSTATE_ZMM | XSTATE_HI_ZMM) )
+            return 0;
+    }
+
+    /* BNDREGS and BNDCSR must be the same. */
+    return !(xcr0 & XSTATE_BNDREGS) == !(xcr0 & XSTATE_BNDCSR);
+}
+
 int validate_xstate(u64 xcr0, u64 xcr0_accum, u64 xstate_bv, u64 xfeat_mask)
 {
     if ( (xcr0_accum & ~xfeat_mask) ||
          (xstate_bv & ~xcr0_accum) ||
          (xcr0 & ~xcr0_accum) ||
-         !(xcr0 & XSTATE_FP) ||
-         ((xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE)) ||
-         ((xcr0_accum & XSTATE_YMM) && !(xcr0_accum & XSTATE_SSE)) )
+         !valid_xcr0(xcr0) ||
+         !valid_xcr0(xcr0_accum) )
         return -EINVAL;
 
     if ( xcr0_accum & ~xfeature_mask )
@@ -351,10 +369,7 @@ int handle_xsetbv(u32 index, u64 new_bv)
     if ( index != XCR_XFEATURE_ENABLED_MASK )
         return -EOPNOTSUPP;
 
-    if ( (new_bv & ~xfeature_mask) || !(new_bv & XSTATE_FP) )
-        return -EINVAL;
-
-    if ( (new_bv & XSTATE_YMM) && !(new_bv & XSTATE_SSE) )
+    if ( (new_bv & ~xfeature_mask) || !valid_xcr0(new_bv) )
         return -EINVAL;
 
     if ( !set_xcr0(new_bv) )
@@ -364,6 +379,10 @@ int handle_xsetbv(u32 index, u64 new_bv)
     curr->arch.xcr0 = new_bv;
     curr->arch.xcr0_accum |= new_bv;
 
+    /* LWP sets nonlazy_xstate_used independently. */
+    if ( new_bv & (XSTATE_NONLAZY & ~XSTATE_LWP) )
+        curr->arch.nonlazy_xstate_used = 1;
+
     mask &= curr->fpu_dirtied ? ~XSTATE_FP_SSE : XSTATE_NONLAZY;
     if ( mask )
     {
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -20,18 +20,23 @@
 #define XCR_XFEATURE_ENABLED_MASK 0x00000000  /* index of XCR0 */
 
 #define XSTATE_YMM_SIZE           256
-#define XSTATE_YMM_OFFSET         XSAVE_AREA_MIN_SIZE
 #define XSTATE_AREA_MIN_SIZE      (512 + 64)  /* FP/SSE + XSAVE.HEADER */
 
 #define XSTATE_FP      (1ULL << 0)
 #define XSTATE_SSE     (1ULL << 1)
 #define XSTATE_YMM     (1ULL << 2)
+#define XSTATE_BNDREGS (1ULL << 3)
+#define XSTATE_BNDCSR  (1ULL << 4)
+#define XSTATE_OPMASK  (1ULL << 5)
+#define XSTATE_ZMM     (1ULL << 6)
+#define XSTATE_HI_ZMM  (1ULL << 7)
 #define XSTATE_LWP     (1ULL << 62) /* AMD lightweight profiling */
 #define XSTATE_FP_SSE  (XSTATE_FP | XSTATE_SSE)
-#define XCNTXT_MASK    (XSTATE_FP | XSTATE_SSE | XSTATE_YMM | XSTATE_LWP)
+#define XCNTXT_MASK    (XSTATE_FP | XSTATE_SSE | XSTATE_YMM | XSTATE_OPMASK | \
+                        XSTATE_ZMM | XSTATE_HI_ZMM | XSTATE_NONLAZY)
 
-#define XSTATE_ALL     (~0)
-#define XSTATE_NONLAZY (XSTATE_LWP)
+#define XSTATE_ALL     (~(1ULL << 63))
+#define XSTATE_NONLAZY (XSTATE_LWP | XSTATE_BNDREGS | XSTATE_BNDCSR)
 #define XSTATE_LAZY    (XSTATE_ALL & ~XSTATE_NONLAZY)
 
 extern u64 xfeature_mask;



--=__PartDFED8AD3.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:37:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHumY-000803-Cv; Mon, 24 Feb 2014 12:37:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHumW-0007zq-BX
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:37:12 +0000
Received: from [193.109.254.147:24026] by server-15.bemta-14.messagelabs.com
	id 5F/A9-10839-7FC3B035; Mon, 24 Feb 2014 12:37:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393245430!6432652!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24001 invoked from network); 24 Feb 2014 12:37:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:37:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:37:10 +0000
Message-Id: <530B4B03020000780011EBD2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:37:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartEFDDBAE3.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH RESEND 2/4] x86: MPX IA32_BNDCFGS msr handle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=__PartEFDDBAE3.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

When MPX is supported, a new guest-state field for IA32_BNDCFGS
is added to the VMCS. In addition, two new controls are added:
 - a VM-exit control called "clear BNDCFGS"
 - a VM-entry control called "load BNDCFGS"
VM exits always save IA32_BNDCFGS into the BNDCFGS field of the VMCS.

Signed-off-by: Xudong Hao <xudong.hao@intel.com>
Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

Unlikely, but in case VMX support is not available, do not expose
MPX to HVM guests.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2909,6 +2909,12 @@ void hvm_cpuid(unsigned int input, unsig
         if ( (count == 0) && !cpu_has_smep )
             *ebx &= ~cpufeat_mask(X86_FEATURE_SMEP);
 
+        /* Don't expose MPX to hvm when VMX support is not available */
+        if ( (count == 0) &&
+             (!(vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) ||
+              !(vmx_vmentry_control & VM_ENTRY_LOAD_BNDCFGS)) )
+            *ebx &= ~cpufeat_mask(X86_FEATURE_MPX);
+
         /* Don't expose INVPCID to non-hap hvm. */
         if ( (count == 0) && !hap_enabled(d) )
             *ebx &= ~cpufeat_mask(X86_FEATURE_INVPCID);
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -269,7 +269,8 @@ static int vmx_init_vmcs_config(void)
     }
 
     min = VM_EXIT_ACK_INTR_ON_EXIT;
-    opt = VM_EXIT_SAVE_GUEST_PAT | VM_EXIT_LOAD_HOST_PAT;
+    opt = VM_EXIT_SAVE_GUEST_PAT | VM_EXIT_LOAD_HOST_PAT |
+          VM_EXIT_CLEAR_BNDCFGS;
     min |= VM_EXIT_IA32E_MODE;
     _vmx_vmexit_control = adjust_vmx_controls(
         "VMExit Control", min, opt, MSR_IA32_VMX_EXIT_CTLS, &mismatch);
@@ -283,7 +284,7 @@ static int vmx_init_vmcs_config(void)
         _vmx_pin_based_exec_control  &= ~ PIN_BASED_POSTED_INTERRUPT;
 
     min = 0;
-    opt = VM_ENTRY_LOAD_GUEST_PAT;
+    opt = VM_ENTRY_LOAD_GUEST_PAT | VM_ENTRY_LOAD_BNDCFGS;
     _vmx_vmentry_control = adjust_vmx_controls(
         "VMEntry Control", min, opt, MSR_IA32_VMX_ENTRY_CTLS, &mismatch);
 
@@ -955,6 +956,9 @@ static int construct_vmcs(struct vcpu *v
         vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP, MSR_TYPE_R | MSR_TYPE_W);
         if ( paging_mode_hap(d) && (!iommu_enabled || iommu_snoop) )
             vmx_disable_intercept_for_msr(v, MSR_IA32_CR_PAT, MSR_TYPE_R | MSR_TYPE_W);
+        if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
+             (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
+            vmx_disable_intercept_for_msr(v, MSR_IA32_BNDCFGS, MSR_TYPE_R | MSR_TYPE_W);
     }
 
     /* I/O access bitmap. */
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -148,6 +148,7 @@
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
+#define X86_FEATURE_MPX		(7*32+14) /* Memory Protection Extensions */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
 
 #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
@@ -197,6 +198,7 @@
 #define cpu_has_xsave           boot_cpu_has(X86_FEATURE_XSAVE)
 #define cpu_has_avx             boot_cpu_has(X86_FEATURE_AVX)
 #define cpu_has_lwp             boot_cpu_has(X86_FEATURE_LWP)
+#define cpu_has_mpx             boot_cpu_has(X86_FEATURE_MPX)
 
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
 
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -186,6 +186,7 @@ extern u32 vmx_pin_based_exec_control;
 #define VM_EXIT_SAVE_GUEST_EFER         0x00100000
 #define VM_EXIT_LOAD_HOST_EFER          0x00200000
 #define VM_EXIT_SAVE_PREEMPT_TIMER      0x00400000
+#define VM_EXIT_CLEAR_BNDCFGS           0x00800000
 extern u32 vmx_vmexit_control;
 
 #define VM_ENTRY_IA32E_MODE             0x00000200
@@ -194,6 +195,7 @@ extern u32 vmx_vmexit_control;
 #define VM_ENTRY_LOAD_PERF_GLOBAL_CTRL  0x00002000
 #define VM_ENTRY_LOAD_GUEST_PAT         0x00004000
 #define VM_ENTRY_LOAD_GUEST_EFER        0x00008000
+#define VM_ENTRY_LOAD_BNDCFGS           0x00010000
 extern u32 vmx_vmentry_control;
 
 #define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES 0x00000001
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -56,6 +56,8 @@
 #define MSR_IA32_DS_AREA		0x00000600
 #define MSR_IA32_PERF_CAPABILITIES	0x00000345
 
+#define MSR_IA32_BNDCFGS		0x00000D90
+
 #define MSR_MTRRfix64K_00000		0x00000250
 #define MSR_MTRRfix16K_80000		0x00000258
 #define MSR_MTRRfix16K_A0000		0x00000259



--=__PartEFDDBAE3.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:37:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHumY-000803-Cv; Mon, 24 Feb 2014 12:37:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHumW-0007zq-BX
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:37:12 +0000
Received: from [193.109.254.147:24026] by server-15.bemta-14.messagelabs.com
	id 5F/A9-10839-7FC3B035; Mon, 24 Feb 2014 12:37:11 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393245430!6432652!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24001 invoked from network); 24 Feb 2014 12:37:10 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:37:10 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:37:10 +0000
Message-Id: <530B4B03020000780011EBD2@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:37:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartEFDDBAE3.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH RESEND 2/4] x86: MPX IA32_BNDCFGS msr handle
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartEFDDBAE3.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

When MPX is supported, a new guest-state field for IA32_BNDCFGS
is added to the VMCS. In addition, two new controls are added:
 - a VM-exit control called "clear BNDCFGS"
 - a VM-entry control called "load BNDCFGS"
VM exits always save IA32_BNDCFGS into the BNDCFGS field of the VMCS.

Signed-off-by: Xudong Hao <xudong.hao@intel.com>
Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

Unlikely, but in case VMX support for BNDCFGS is not available, do
not expose MPX to HVM guests.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2909,6 +2909,12 @@ void hvm_cpuid(unsigned int input, unsig
         if ( (count =3D=3D 0) && !cpu_has_smep )
             *ebx &=3D ~cpufeat_mask(X86_FEATURE_SMEP);
=20
+        /* Don't expose MPX to hvm when VMX support is not available */
+        if ( (count =3D=3D 0) &&
+             (!(vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) ||
+              !(vmx_vmentry_control & VM_ENTRY_LOAD_BNDCFGS)) )
+            *ebx &=3D ~cpufeat_mask(X86_FEATURE_MPX);
+
         /* Don't expose INVPCID to non-hap hvm. */
         if ( (count =3D=3D 0) && !hap_enabled(d) )
             *ebx &=3D ~cpufeat_mask(X86_FEATURE_INVPCID);
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -269,7 +269,8 @@ static int vmx_init_vmcs_config(void)
     }
=20
     min =3D VM_EXIT_ACK_INTR_ON_EXIT;
-    opt =3D VM_EXIT_SAVE_GUEST_PAT | VM_EXIT_LOAD_HOST_PAT;
+    opt =3D VM_EXIT_SAVE_GUEST_PAT | VM_EXIT_LOAD_HOST_PAT |
+          VM_EXIT_CLEAR_BNDCFGS;
     min |=3D VM_EXIT_IA32E_MODE;
     _vmx_vmexit_control =3D adjust_vmx_controls(
         "VMExit Control", min, opt, MSR_IA32_VMX_EXIT_CTLS, &mismatch);
@@ -283,7 +284,7 @@ static int vmx_init_vmcs_config(void)
         _vmx_pin_based_exec_control  &=3D ~ PIN_BASED_POSTED_INTERRUPT;
=20
     min =3D 0;
-    opt =3D VM_ENTRY_LOAD_GUEST_PAT;
+    opt =3D VM_ENTRY_LOAD_GUEST_PAT | VM_ENTRY_LOAD_BNDCFGS;
     _vmx_vmentry_control =3D adjust_vmx_controls(
         "VMEntry Control", min, opt, MSR_IA32_VMX_ENTRY_CTLS, &mismatch);
=20
@@ -955,6 +956,9 @@ static int construct_vmcs(struct vcpu *v
         vmx_disable_intercept_for_msr(v, MSR_IA32_SYSENTER_EIP, MSR_TYPE_R=
 | MSR_TYPE_W);
         if ( paging_mode_hap(d) && (!iommu_enabled || iommu_snoop) )
             vmx_disable_intercept_for_msr(v, MSR_IA32_CR_PAT, MSR_TYPE_R =
| MSR_TYPE_W);
+        if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
+             (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
+            vmx_disable_intercept_for_msr(v, MSR_IA32_BNDCFGS, MSR_TYPE_R =
| MSR_TYPE_W);
     }
=20
     /* I/O access bitmap. */
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -148,6 +148,7 @@
 #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID =
*/
 #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional =
Memory */
 #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as =
zero */
+#define X86_FEATURE_MPX		(7*32+14) /* Memory Protection =
Extensions */
 #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access =
Prevention */
=20
 #define cpu_has(c, bit)		test_bit(bit, (c)->x86_capability)
@@ -197,6 +198,7 @@
 #define cpu_has_xsave           boot_cpu_has(X86_FEATURE_XSAVE)
 #define cpu_has_avx             boot_cpu_has(X86_FEATURE_AVX)
 #define cpu_has_lwp             boot_cpu_has(X86_FEATURE_LWP)
+#define cpu_has_mpx             boot_cpu_has(X86_FEATURE_MPX)
=20
 #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
=20
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -186,6 +186,7 @@ extern u32 vmx_pin_based_exec_control;
 #define VM_EXIT_SAVE_GUEST_EFER         0x00100000
 #define VM_EXIT_LOAD_HOST_EFER          0x00200000
 #define VM_EXIT_SAVE_PREEMPT_TIMER      0x00400000
+#define VM_EXIT_CLEAR_BNDCFGS           0x00800000
 extern u32 vmx_vmexit_control;
=20
 #define VM_ENTRY_IA32E_MODE             0x00000200
@@ -194,6 +195,7 @@ extern u32 vmx_vmexit_control;
 #define VM_ENTRY_LOAD_PERF_GLOBAL_CTRL  0x00002000
 #define VM_ENTRY_LOAD_GUEST_PAT         0x00004000
 #define VM_ENTRY_LOAD_GUEST_EFER        0x00008000
+#define VM_ENTRY_LOAD_BNDCFGS           0x00010000
 extern u32 vmx_vmentry_control;
=20
 #define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES 0x00000001
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -56,6 +56,8 @@
 #define MSR_IA32_DS_AREA		0x00000600
 #define MSR_IA32_PERF_CAPABILITIES	0x00000345
=20
+#define MSR_IA32_BNDCFGS		0x00000D90
+
 #define MSR_MTRRfix64K_00000		0x00000250
 #define MSR_MTRRfix16K_80000		0x00000258
 #define MSR_MTRRfix16K_A0000		0x00000259



--=__PartEFDDBAE3.1__=
Content-Type: text/plain; name="x86-MPX.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-MPX.patch"

x86: MPX IA32_BNDCFGS msr handle=0A=0AWhen MPX supported, a new guest-state=
 field for IA32_BNDCFGS=0Ais added to the VMCS. In addition, two new =
controls are added:=0A - a VM-exit control called "clear BNDCFGS"=0A - a =
VM-entry control called "load BNDCFGS."=0AVM exits always save IA32_BNDCFGS=
 into BNDCFGS field of VMCS.=0A=0ASigned-off-by: Xudong Hao <xudong.hao@int=
el.com>=0AReviewed-by: Liu Jinsong <jinsong.liu@intel.com>=0A=0AUnlikely, =
but in case VMX support is not available, not expose=0AMPX to hvm =
guest.=0A=0ASuggested-by: Andrew Cooper <andrew.cooper3@citrix.com>=0ASugge=
sted-by: Jan Beulich <jbeulich@suse.com>=0ASigned-off-by: Liu Jinsong =
<jinsong.liu@intel.com>=0AReviewed-by: Jan Beulich <jbeulich@suse.com>=0A=
=0A--- a/xen/arch/x86/hvm/hvm.c=0A+++ b/xen/arch/x86/hvm/hvm.c=0A@@ =
-2909,6 +2909,12 @@ void hvm_cpuid(unsigned int input, unsig=0A         if =
( (count =3D=3D 0) && !cpu_has_smep )=0A             *ebx &=3D ~cpufeat_mas=
k(X86_FEATURE_SMEP);=0A =0A+        /* Don't expose MPX to hvm when VMX =
support is not available */=0A+        if ( (count =3D=3D 0) &&=0A+        =
     (!(vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) ||=0A+              =
!(vmx_vmentry_control & VM_ENTRY_LOAD_BNDCFGS)) )=0A+            *ebx &=3D =
~cpufeat_mask(X86_FEATURE_MPX);=0A+=0A         /* Don't expose INVPCID to =
non-hap hvm. */=0A         if ( (count =3D=3D 0) && !hap_enabled(d) )=0A   =
          *ebx &=3D ~cpufeat_mask(X86_FEATURE_INVPCID);=0A--- a/xen/arch/x8=
6/hvm/vmx/vmcs.c=0A+++ b/xen/arch/x86/hvm/vmx/vmcs.c=0A@@ -269,7 +269,8 @@ =
static int vmx_init_vmcs_config(void)=0A     }=0A =0A     min =3D =
VM_EXIT_ACK_INTR_ON_EXIT;=0A-    opt =3D VM_EXIT_SAVE_GUEST_PAT | =
VM_EXIT_LOAD_HOST_PAT;=0A+    opt =3D VM_EXIT_SAVE_GUEST_PAT | VM_EXIT_LOAD=
_HOST_PAT |=0A+          VM_EXIT_CLEAR_BNDCFGS;=0A     min |=3D VM_EXIT_IA3=
2E_MODE;=0A     _vmx_vmexit_control =3D adjust_vmx_controls(=0A         =
"VMExit Control", min, opt, MSR_IA32_VMX_EXIT_CTLS, &mismatch);=0A@@ =
-283,7 +284,7 @@ static int vmx_init_vmcs_config(void)=0A         =
_vmx_pin_based_exec_control  &=3D ~ PIN_BASED_POSTED_INTERRUPT;=0A =0A     =
min =3D 0;=0A-    opt =3D VM_ENTRY_LOAD_GUEST_PAT;=0A+    opt =3D =
VM_ENTRY_LOAD_GUEST_PAT | VM_ENTRY_LOAD_BNDCFGS;=0A     _vmx_vmentry_contro=
l =3D adjust_vmx_controls(=0A         "VMEntry Control", min, opt, =
MSR_IA32_VMX_ENTRY_CTLS, &mismatch);=0A =0A@@ -955,6 +956,9 @@ static int =
construct_vmcs(struct vcpu *v=0A         vmx_disable_intercept_for_msr(v, =
MSR_IA32_SYSENTER_EIP, MSR_TYPE_R | MSR_TYPE_W);=0A         if ( paging_mod=
e_hap(d) && (!iommu_enabled || iommu_snoop) )=0A             vmx_disable_in=
tercept_for_msr(v, MSR_IA32_CR_PAT, MSR_TYPE_R | MSR_TYPE_W);=0A+        =
if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&=0A+             (vmentry_ctl =
& VM_ENTRY_LOAD_BNDCFGS) )=0A+            vmx_disable_intercept_for_msr(v, =
MSR_IA32_BNDCFGS, MSR_TYPE_R | MSR_TYPE_W);=0A     }=0A =0A     /* I/O =
access bitmap. */=0A--- a/xen/include/asm-x86/cpufeature.h=0A+++ b/xen/incl=
ude/asm-x86/cpufeature.h=0A@@ -148,6 +148,7 @@=0A #define X86_FEATURE_INVPC=
ID	(7*32+10) /* Invalidate Process Context ID */=0A #define X86_FEATUR=
E_RTM 	(7*32+11) /* Restricted Transactional Memory */=0A #define =
X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */=0A+#define=
 X86_FEATURE_MPX		(7*32+14) /* Memory Protection Extensions =
*/=0A #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access =
Prevention */=0A =0A #define cpu_has(c, bit)		test_bit(bit, =
(c)->x86_capability)=0A@@ -197,6 +198,7 @@=0A #define cpu_has_xsave        =
   boot_cpu_has(X86_FEATURE_XSAVE)=0A #define cpu_has_avx             =
boot_cpu_has(X86_FEATURE_AVX)=0A #define cpu_has_lwp             boot_cpu_h=
as(X86_FEATURE_LWP)=0A+#define cpu_has_mpx             boot_cpu_has(X86_FEA=
TURE_MPX)=0A =0A #define cpu_has_arch_perfmon    boot_cpu_has(X86_FEATURE_A=
RCH_PERFMON)=0A =0A--- a/xen/include/asm-x86/hvm/vmx/vmcs.h=0A+++ =
b/xen/include/asm-x86/hvm/vmx/vmcs.h=0A@@ -186,6 +186,7 @@ extern u32 =
vmx_pin_based_exec_control;=0A #define VM_EXIT_SAVE_GUEST_EFER         =
0x00100000=0A #define VM_EXIT_LOAD_HOST_EFER          0x00200000=0A =
#define VM_EXIT_SAVE_PREEMPT_TIMER      0x00400000=0A+#define VM_EXIT_CLEAR=
_BNDCFGS           0x00800000=0A extern u32 vmx_vmexit_control;=0A =0A =
#define VM_ENTRY_IA32E_MODE             0x00000200=0A@@ -194,6 +195,7 @@ =
extern u32 vmx_vmexit_control;=0A #define VM_ENTRY_LOAD_PERF_GLOBAL_CTRL  =
0x00002000=0A #define VM_ENTRY_LOAD_GUEST_PAT         0x00004000=0A =
#define VM_ENTRY_LOAD_GUEST_EFER        0x00008000=0A+#define VM_ENTRY_LOAD=
_BNDCFGS           0x00010000=0A extern u32 vmx_vmentry_control;=0A =0A =
#define SECONDARY_EXEC_VIRTUALIZE_APIC_ACCESSES 0x00000001=0A--- a/xen/incl=
ude/asm-x86/msr-index.h=0A+++ b/xen/include/asm-x86/msr-index.h=0A@@ -56,6 =
+56,8 @@=0A #define MSR_IA32_DS_AREA		0x00000600=0A #define =
MSR_IA32_PERF_CAPABILITIES	0x00000345=0A =0A+#define MSR_IA32_BNDCFGS	=
	0x00000D90=0A+=0A #define MSR_MTRRfix64K_00000		0x00000250=
=0A #define MSR_MTRRfix16K_80000		0x00000258=0A #define =
MSR_MTRRfix16K_A0000		0x00000259=0A
--=__PartEFDDBAE3.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartEFDDBAE3.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:37:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHun7-00084I-RF; Mon, 24 Feb 2014 12:37:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHun6-000842-B5
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:37:48 +0000
Received: from [193.109.254.147:33654] by server-12.bemta-14.messagelabs.com
	id 9D/2E-17220-B1D3B035; Mon, 24 Feb 2014 12:37:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393245466!6392007!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17265 invoked from network); 24 Feb 2014 12:37:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:37:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:37:46 +0000
Message-Id: <530B4B27020000780011EBD6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:37:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0A385F07.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH RESEND 3/4] x86: generic MSRs save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0A385F07.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

This patch introduces a generic MSR save/restore mechanism, so that
in the future save/restore support for new MSRs can be added with a
smaller change than the full-blown addition of a new save/restore type.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
---
v3: Use C99/gcc extensions in public header when available.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1127,10 +1127,117 @@ static int hvm_load_cpu_xsave_states(str
     return 0;
 }
=20
-/* We need variable length data chunk for xsave area, hence customized
- * declaration other than HVM_REGISTER_SAVE_RESTORE.
+#define HVM_CPU_MSR_SIZE(cnt) offsetof(struct hvm_msr, msr[cnt])
+static unsigned int __read_mostly msr_count_max;
+
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+    {
+        struct hvm_msr *ctxt;
+        unsigned int i;
+
+        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+                             HVM_CPU_MSR_SIZE(msr_count_max)) )
+            return 1;
+        ctxt =3D (struct hvm_msr *)&h->data[h->cur];
+        ctxt->count =3D 0;
+
+        if ( hvm_funcs.save_msr )
+            hvm_funcs.save_msr(v, ctxt);
+
+        for ( i =3D 0; i < ctxt->count; ++i )
+            ctxt->msr[i]._rsvd =3D 0;
+
+        if ( ctxt->count )
+            h->cur +=3D HVM_CPU_MSR_SIZE(ctxt->count);
+        else
+            h->cur -=3D sizeof(struct hvm_save_descriptor);
+    }
+
+    return 0;
+}
+
+static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int i, vcpuid =3D hvm_load_instance(h);
+    struct vcpu *v;
+    const struct hvm_save_descriptor *desc;
+    struct hvm_msr *ctxt;
+    int err =3D 0;
+
+    if ( vcpuid >=3D d->max_vcpus || (v =3D d->vcpu[vcpuid]) =3D=3D NULL =
)
+    {
+        dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vcpu%u\n",
+                d->domain_id, vcpuid);
+        return -EINVAL;
+    }
+
+    /* Customized checking for entry since our entry is of variable =
length */
+    desc =3D (struct hvm_save_descriptor *)&h->data[h->cur];
+    if ( sizeof (*desc) > h->size - h->cur)
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore: not enough data left to read MSR =
descriptor\n",
+               d->domain_id, vcpuid);
+        return -ENODATA;
+    }
+    if ( desc->length + sizeof (*desc) > h->size - h->cur)
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore: not enough data left to read %u MSR =
bytes\n",
+               d->domain_id, vcpuid, desc->length);
+        return -ENODATA;
+    }
+    if ( desc->length < HVM_CPU_MSR_SIZE(1) )
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore mismatch: MSR length %u < %zu\n",
+               d->domain_id, vcpuid, desc->length, HVM_CPU_MSR_SIZE(1));
+        return -EINVAL;
+    }
+
+    h->cur +=3D sizeof(*desc);
+    ctxt =3D (struct hvm_msr *)&h->data[h->cur];
+    h->cur +=3D desc->length;
+
+    if ( desc->length !=3D HVM_CPU_MSR_SIZE(ctxt->count) )
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore mismatch: MSR length %u !=3D %zu\n",
+               d->domain_id, vcpuid, desc->length,
+               HVM_CPU_MSR_SIZE(ctxt->count));
+        return -EOPNOTSUPP;
+    }
+
+    for ( i =3D 0; i < ctxt->count; ++i )
+        if ( ctxt->msr[i]._rsvd )
+            return -EOPNOTSUPP;
+    /* Checking finished */
+
+    if ( hvm_funcs.load_msr )
+        err =3D hvm_funcs.load_msr(v, ctxt);
+
+    for ( i =3D 0; !err && i < ctxt->count; ++i )
+    {
+        switch ( ctxt->msr[i].index )
+        {
+        default:
+            if ( !ctxt->msr[i]._rsvd )
+                err =3D -ENXIO;
+            break;
+        }
+    }
+
+    return err;
+}
+
+/* We need variable length data chunks for XSAVE area and MSRs, hence
+ * a custom declaration rather than HVM_REGISTER_SAVE_RESTORE.
  */
-static int __init __hvm_register_CPU_XSAVE_save_and_restore(void)
+static int __init hvm_register_CPU_save_and_restore(void)
 {
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
@@ -1139,9 +1246,22 @@ static int __init __hvm_register_CPU_XSA
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
                         HVMSR_PER_VCPU);
+
+    if ( hvm_funcs.init_msr )
+        msr_count_max +=3D hvm_funcs.init_msr();
+
+    if ( msr_count_max )
+        hvm_register_savevm(CPU_MSR_CODE,
+                            "CPU_MSR",
+                            hvm_save_cpu_msrs,
+                            hvm_load_cpu_msrs,
+                            HVM_CPU_MSR_SIZE(msr_count_max) +
+                                sizeof(struct hvm_save_descriptor),
+                            HVMSR_PER_VCPU);
+
     return 0;
 }
-__initcall(__hvm_register_CPU_XSAVE_save_and_restore);
+__initcall(hvm_register_CPU_save_and_restore);
=20
 int hvm_vcpu_initialise(struct vcpu *v)
 {
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -109,6 +109,10 @@ struct hvm_function_table {
     void (*save_cpu_ctxt)(struct vcpu *v, struct hvm_hw_cpu *ctxt);
     int (*load_cpu_ctxt)(struct vcpu *v, struct hvm_hw_cpu *ctxt);
=20
+    unsigned int (*init_msr)(void);
+    void (*save_msr)(struct vcpu *, struct hvm_msr *);
+    int (*load_msr)(struct vcpu *, struct hvm_msr *);
+
     /* Examine specifics of the guest state. */
     unsigned int (*get_interrupt_shadow)(struct vcpu *v);
     void (*set_interrupt_shadow)(struct vcpu *v, unsigned int intr_shadow)=
;
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -592,9 +592,27 @@ struct hvm_tsc_adjust {
=20
 DECLARE_HVM_SAVE_TYPE(TSC_ADJUST, 19, struct hvm_tsc_adjust);
=20
+
+struct hvm_msr {
+    uint32_t count;
+    struct hvm_one_msr {
+        uint32_t index;
+        uint32_t _rsvd;
+        uint64_t val;
+#if defined(__STDC_VERSION__) && __STDC_VERSION__ >=3D 199901L
+    } msr[];
+#elif defined(__GNUC__)
+    } msr[0];
+#else
+    } msr[1 /* variable size */];
+#endif
+};
+
+#define CPU_MSR_CODE  20
+
 /*=20
  * Largest type-code in use
  */
-#define HVM_SAVE_CODE_MAX 19
+#define HVM_SAVE_CODE_MAX 20
=20
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */



--=__Part0A385F07.1__=
Content-Type: text/plain; name="x86-MSR-sr.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-MSR-sr.patch"

x86: generic MSRs save/restore=0A=0AThis patch introduces a generic MSRs =
save/restore mechanism, so that=0Ain the future new MSRs' save/restore =
could be added w/ smaller change=0Athan the full blown addition of a new =
save/restore type.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0ARe=
viewed-by: Liu Jinsong <jinsong.liu@intel.com>=0A---=0Av3: Use C99/gcc =
extensions in public header when available.=0A=0A--- a/xen/arch/x86/hvm/hvm=
.c=0A+++ b/xen/arch/x86/hvm/hvm.c=0A@@ -1127,10 +1127,117 @@ static int =
hvm_load_cpu_xsave_states(str=0A     return 0;=0A }=0A =0A-/* We need =
variable length data chunk for xsave area, hence customized=0A- * =
declaration other than HVM_REGISTER_SAVE_RESTORE.=0A+#define HVM_CPU_MSR_SI=
ZE(cnt) offsetof(struct hvm_msr, msr[cnt])=0A+static unsigned int =
__read_mostly msr_count_max;=0A+=0A+static int hvm_save_cpu_msrs(struct =
domain *d, hvm_domain_context_t *h)=0A+{=0A+    struct vcpu *v;=0A+=0A+    =
for_each_vcpu ( d, v )=0A+    {=0A+        struct hvm_msr *ctxt;=0A+       =
 unsigned int i;=0A+=0A+        if ( _hvm_init_entry(h, CPU_MSR_CODE, =
v->vcpu_id,=0A+                             HVM_CPU_MSR_SIZE(msr_count_max)=
) )=0A+            return 1;=0A+        ctxt =3D (struct hvm_msr *)&h->data=
[h->cur];=0A+        ctxt->count =3D 0;=0A+=0A+        if ( hvm_funcs.save_=
msr )=0A+            hvm_funcs.save_msr(v, ctxt);=0A+=0A+        for ( i =
=3D 0; i < ctxt->count; ++i )=0A+            ctxt->msr[i]._rsvd =3D =
0;=0A+=0A+        if ( ctxt->count )=0A+            h->cur +=3D HVM_CPU_MSR=
_SIZE(ctxt->count);=0A+        else=0A+            h->cur -=3D sizeof(struc=
t hvm_save_descriptor);=0A+    }=0A+=0A+    return 0;=0A+}=0A+=0A+static =
int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)=0A+{=0A+  =
  unsigned int i, vcpuid =3D hvm_load_instance(h);=0A+    struct vcpu =
*v;=0A+    const struct hvm_save_descriptor *desc;=0A+    struct hvm_msr =
*ctxt;=0A+    int err =3D 0;=0A+=0A+    if ( vcpuid >=3D d->max_vcpus || =
(v =3D d->vcpu[vcpuid]) =3D=3D NULL )=0A+    {=0A+        dprintk(XENLOG_G_=
ERR, "HVM restore: dom%d has no vcpu%u\n",=0A+                d->domain_id,=
 vcpuid);=0A+        return -EINVAL;=0A+    }=0A+=0A+    /* Customized =
checking for entry since our entry is of variable length */=0A+    desc =
=3D (struct hvm_save_descriptor *)&h->data[h->cur];=0A+    if ( sizeof =
(*desc) > h->size - h->cur)=0A+    {=0A+        printk(XENLOG_G_WARNING=0A+=
               "HVM%d.%d restore: not enough data left to read MSR =
descriptor\n",=0A+               d->domain_id, vcpuid);=0A+        return =
-ENODATA;=0A+    }=0A+    if ( desc->length + sizeof (*desc) > h->size - =
h->cur)=0A+    {=0A+        printk(XENLOG_G_WARNING=0A+               =
"HVM%d.%d restore: not enough data left to read %u MSR bytes\n",=0A+       =
        d->domain_id, vcpuid, desc->length);=0A+        return -ENODATA;=0A=
+    }=0A+    if ( desc->length < HVM_CPU_MSR_SIZE(1) )=0A+    {=0A+       =
 printk(XENLOG_G_WARNING=0A+               "HVM%d.%d restore mismatch: MSR =
length %u < %zu\n",=0A+               d->domain_id, vcpuid, desc->length, =
HVM_CPU_MSR_SIZE(1));=0A+        return -EINVAL;=0A+    }=0A+=0A+    =
h->cur +=3D sizeof(*desc);=0A+    ctxt =3D (struct hvm_msr *)&h->data[h->cu=
r];=0A+    h->cur +=3D desc->length;=0A+=0A+    if ( desc->length !=3D =
HVM_CPU_MSR_SIZE(ctxt->count) )=0A+    {=0A+        printk(XENLOG_G_WARNING=
=0A+               "HVM%d.%d restore mismatch: MSR length %u !=3D =
%zu\n",=0A+               d->domain_id, vcpuid, desc->length,=0A+          =
     HVM_CPU_MSR_SIZE(ctxt->count));=0A+        return -EOPNOTSUPP;=0A+    =
}=0A+=0A+    for ( i =3D 0; i < ctxt->count; ++i )=0A+        if ( =
ctxt->msr[i]._rsvd )=0A+            return -EOPNOTSUPP;=0A+    /* Checking =
finished */=0A+=0A+    if ( hvm_funcs.load_msr )=0A+        err =3D =
hvm_funcs.load_msr(v, ctxt);=0A+=0A+    for ( i =3D 0; !err && i < =
ctxt->count; ++i )=0A+    {=0A+        switch ( ctxt->msr[i].index )=0A+   =
     {=0A+        default:=0A+            if ( !ctxt->msr[i]._rsvd )=0A+   =
             err =3D -ENXIO;=0A+            break;=0A+        }=0A+    =
}=0A+=0A+    return err;=0A+}=0A+=0A+/* We need variable length data =
chunks for XSAVE area and MSRs, hence=0A+ * a custom declaration rather =
than HVM_REGISTER_SAVE_RESTORE.=0A  */=0A-static int __init __hvm_register_=
CPU_XSAVE_save_and_restore(void)=0A+static int __init hvm_register_CPU_save=
_and_restore(void)=0A {=0A     hvm_register_savevm(CPU_XSAVE_CODE,=0A      =
                   "CPU_XSAVE",=0A@@ -1139,9 +1246,22 @@ static int __init =
__hvm_register_CPU_XSA=0A                         HVM_CPU_XSAVE_SIZE(xfeatu=
re_mask) +=0A                             sizeof(struct hvm_save_descriptor=
),=0A                         HVMSR_PER_VCPU);=0A+=0A+    if ( hvm_funcs.in=
it_msr )=0A+        msr_count_max +=3D hvm_funcs.init_msr();=0A+=0A+    if =
( msr_count_max )=0A+        hvm_register_savevm(CPU_MSR_CODE,=0A+         =
                   "CPU_MSR",=0A+                            hvm_save_cpu_m=
srs,=0A+                            hvm_load_cpu_msrs,=0A+                 =
           HVM_CPU_MSR_SIZE(msr_count_max) +=0A+                           =
     sizeof(struct hvm_save_descriptor),=0A+                            =
HVMSR_PER_VCPU);=0A+=0A     return 0;=0A }=0A-__initcall(__hvm_register_CPU=
_XSAVE_save_and_restore);=0A+__initcall(hvm_register_CPU_save_and_restore);=
=0A =0A int hvm_vcpu_initialise(struct vcpu *v)=0A {=0A--- a/xen/include/as=
m-x86/hvm/hvm.h=0A+++ b/xen/include/asm-x86/hvm/hvm.h=0A@@ -109,6 +109,10 =
@@ struct hvm_function_table {=0A     void (*save_cpu_ctxt)(struct vcpu =
*v, struct hvm_hw_cpu *ctxt);=0A     int (*load_cpu_ctxt)(struct vcpu *v, =
struct hvm_hw_cpu *ctxt);=0A =0A+    unsigned int (*init_msr)(void);=0A+   =
 void (*save_msr)(struct vcpu *, struct hvm_msr *);=0A+    int (*load_msr)(=
struct vcpu *, struct hvm_msr *);=0A+=0A     /* Examine specifics of the =
guest state. */=0A     unsigned int (*get_interrupt_shadow)(struct vcpu =
*v);=0A     void (*set_interrupt_shadow)(struct vcpu *v, unsigned int =
intr_shadow);=0A--- a/xen/include/public/arch-x86/hvm/save.h=0A+++ =
b/xen/include/public/arch-x86/hvm/save.h=0A@@ -592,9 +592,27 @@ struct =
hvm_tsc_adjust {=0A =0A DECLARE_HVM_SAVE_TYPE(TSC_ADJUST, 19, struct =
hvm_tsc_adjust);=0A =0A+=0A+struct hvm_msr {=0A+    uint32_t count;=0A+    =
struct hvm_one_msr {=0A+        uint32_t index;=0A+        uint32_t =
_rsvd;=0A+        uint64_t val;=0A+#if defined(__STDC_VERSION__) && =
__STDC_VERSION__ >=3D 199901L=0A+    } msr[];=0A+#elif defined(__GNUC__)=0A=
+    } msr[0];=0A+#else=0A+    } msr[1 /* variable size */];=0A+#endif=0A+}=
;=0A+=0A+#define CPU_MSR_CODE  20=0A+=0A /* =0A  * Largest type-code in =
use=0A  */=0A-#define HVM_SAVE_CODE_MAX 19=0A+#define HVM_SAVE_CODE_MAX =
20=0A =0A #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */=0A
--=__Part0A385F07.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0A385F07.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:37:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHun7-00084I-RF; Mon, 24 Feb 2014 12:37:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHun6-000842-B5
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:37:48 +0000
Received: from [193.109.254.147:33654] by server-12.bemta-14.messagelabs.com
	id 9D/2E-17220-B1D3B035; Mon, 24 Feb 2014 12:37:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393245466!6392007!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17265 invoked from network); 24 Feb 2014 12:37:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:37:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:37:46 +0000
Message-Id: <530B4B27020000780011EBD6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:37:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0A385F07.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH RESEND 3/4] x86: generic MSRs save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0A385F07.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

This patch introduces a generic MSRs save/restore mechanism, so that
in the future new MSRs' save/restore could be added w/ smaller change
than the full blown addition of a new save/restore type.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
---
v3: Use C99/gcc extensions in public header when available.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1127,10 +1127,117 @@ static int hvm_load_cpu_xsave_states(str
     return 0;
 }
 
-/* We need variable length data chunk for xsave area, hence customized
- * declaration other than HVM_REGISTER_SAVE_RESTORE.
+#define HVM_CPU_MSR_SIZE(cnt) offsetof(struct hvm_msr, msr[cnt])
+static unsigned int __read_mostly msr_count_max;
+
+static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    struct vcpu *v;
+
+    for_each_vcpu ( d, v )
+    {
+        struct hvm_msr *ctxt;
+        unsigned int i;
+
+        if ( _hvm_init_entry(h, CPU_MSR_CODE, v->vcpu_id,
+                             HVM_CPU_MSR_SIZE(msr_count_max)) )
+            return 1;
+        ctxt = (struct hvm_msr *)&h->data[h->cur];
+        ctxt->count = 0;
+
+        if ( hvm_funcs.save_msr )
+            hvm_funcs.save_msr(v, ctxt);
+
+        for ( i = 0; i < ctxt->count; ++i )
+            ctxt->msr[i]._rsvd = 0;
+
+        if ( ctxt->count )
+            h->cur += HVM_CPU_MSR_SIZE(ctxt->count);
+        else
+            h->cur -= sizeof(struct hvm_save_descriptor);
+    }
+
+    return 0;
+}
+
+static int hvm_load_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int i, vcpuid = hvm_load_instance(h);
+    struct vcpu *v;
+    const struct hvm_save_descriptor *desc;
+    struct hvm_msr *ctxt;
+    int err = 0;
+
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    {
+        dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vcpu%u\n",
+                d->domain_id, vcpuid);
+        return -EINVAL;
+    }
+
+    /* Customized checking for entry since our entry is of variable length */
+    desc = (struct hvm_save_descriptor *)&h->data[h->cur];
+    if ( sizeof (*desc) > h->size - h->cur)
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore: not enough data left to read MSR descriptor\n",
+               d->domain_id, vcpuid);
+        return -ENODATA;
+    }
+    if ( desc->length + sizeof (*desc) > h->size - h->cur)
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore: not enough data left to read %u MSR bytes\n",
+               d->domain_id, vcpuid, desc->length);
+        return -ENODATA;
+    }
+    if ( desc->length < HVM_CPU_MSR_SIZE(1) )
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore mismatch: MSR length %u < %zu\n",
+               d->domain_id, vcpuid, desc->length, HVM_CPU_MSR_SIZE(1));
+        return -EINVAL;
+    }
+
+    h->cur += sizeof(*desc);
+    ctxt = (struct hvm_msr *)&h->data[h->cur];
+    h->cur += desc->length;
+
+    if ( desc->length != HVM_CPU_MSR_SIZE(ctxt->count) )
+    {
+        printk(XENLOG_G_WARNING
+               "HVM%d.%d restore mismatch: MSR length %u != %zu\n",
+               d->domain_id, vcpuid, desc->length,
+               HVM_CPU_MSR_SIZE(ctxt->count));
+        return -EOPNOTSUPP;
+    }
+
+    for ( i = 0; i < ctxt->count; ++i )
+        if ( ctxt->msr[i]._rsvd )
+            return -EOPNOTSUPP;
+    /* Checking finished */
+
+    if ( hvm_funcs.load_msr )
+        err = hvm_funcs.load_msr(v, ctxt);
+
+    for ( i = 0; !err && i < ctxt->count; ++i )
+    {
+        switch ( ctxt->msr[i].index )
+        {
+        default:
+            if ( !ctxt->msr[i]._rsvd )
+                err = -ENXIO;
+            break;
+        }
+    }
+
+    return err;
+}
+
+/* We need variable length data chunks for XSAVE area and MSRs, hence
+ * a custom declaration rather than HVM_REGISTER_SAVE_RESTORE.
  */
-static int __init __hvm_register_CPU_XSAVE_save_and_restore(void)
+static int __init hvm_register_CPU_save_and_restore(void)
 {
     hvm_register_savevm(CPU_XSAVE_CODE,
                         "CPU_XSAVE",
@@ -1139,9 +1246,22 @@ static int __init __hvm_register_CPU_XSA
                         HVM_CPU_XSAVE_SIZE(xfeature_mask) +
                             sizeof(struct hvm_save_descriptor),
                         HVMSR_PER_VCPU);
+
+    if ( hvm_funcs.init_msr )
+        msr_count_max += hvm_funcs.init_msr();
+
+    if ( msr_count_max )
+        hvm_register_savevm(CPU_MSR_CODE,
+                            "CPU_MSR",
+                            hvm_save_cpu_msrs,
+                            hvm_load_cpu_msrs,
+                            HVM_CPU_MSR_SIZE(msr_count_max) +
+                                sizeof(struct hvm_save_descriptor),
+                            HVMSR_PER_VCPU);
+
     return 0;
 }
-__initcall(__hvm_register_CPU_XSAVE_save_and_restore);
+__initcall(hvm_register_CPU_save_and_restore);
 
 int hvm_vcpu_initialise(struct vcpu *v)
 {
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -109,6 +109,10 @@ struct hvm_function_table {
     void (*save_cpu_ctxt)(struct vcpu *v, struct hvm_hw_cpu *ctxt);
     int (*load_cpu_ctxt)(struct vcpu *v, struct hvm_hw_cpu *ctxt);
 
+    unsigned int (*init_msr)(void);
+    void (*save_msr)(struct vcpu *, struct hvm_msr *);
+    int (*load_msr)(struct vcpu *, struct hvm_msr *);
+
     /* Examine specifics of the guest state. */
     unsigned int (*get_interrupt_shadow)(struct vcpu *v);
     void (*set_interrupt_shadow)(struct vcpu *v, unsigned int intr_shadow);
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -592,9 +592,27 @@ struct hvm_tsc_adjust {
 
 DECLARE_HVM_SAVE_TYPE(TSC_ADJUST, 19, struct hvm_tsc_adjust);
 
+
+struct hvm_msr {
+    uint32_t count;
+    struct hvm_one_msr {
+        uint32_t index;
+        uint32_t _rsvd;
+        uint64_t val;
+#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+    } msr[];
+#elif defined(__GNUC__)
+    } msr[0];
+#else
+    } msr[1 /* variable size */];
+#endif
+};
+
+#define CPU_MSR_CODE  20
+
 /*
  * Largest type-code in use
  */
-#define HVM_SAVE_CODE_MAX 19
+#define HVM_SAVE_CODE_MAX 20
 
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */



--=__Part0A385F07.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part0A385F07.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:38:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHunq-0008AF-AG; Mon, 24 Feb 2014 12:38:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHuno-00089w-My
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:38:32 +0000
Received: from [193.109.254.147:5244] by server-9.bemta-14.messagelabs.com id
	50/17-24895-74D3B035; Mon, 24 Feb 2014 12:38:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393245510!6392272!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29549 invoked from network); 24 Feb 2014 12:38:31 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 12:38:31 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 12:38:30 +0000
Message-Id: <530B4B53020000780011EBDA@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 12:38:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part5E6C0B53.1__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Keir Fraser <keir@xen.org>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH RESEND 4/4] x86: MSR_IA32_BNDCFGS save/restore
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part5E6C0B53.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -580,6 +580,55 @@ static int vmx_load_vmcs_ctxt(struct vcp
     return 0;
 }
 
+static unsigned int __init vmx_init_msr(void)
+{
+    return !!cpu_has_mpx;
+}
+
+static void vmx_save_msr(struct vcpu *v, struct hvm_msr *ctxt)
+{
+    vmx_vmcs_enter(v);
+
+    if ( cpu_has_mpx )
+    {
+        __vmread(GUEST_BNDCFGS, &ctxt->msr[ctxt->count].val);
+        if ( ctxt->msr[ctxt->count].val )
+            ctxt->msr[ctxt->count++].index = MSR_IA32_BNDCFGS;
+    }
+
+    vmx_vmcs_exit(v);
+}
+
+static int vmx_load_msr(struct vcpu *v, struct hvm_msr *ctxt)
+{
+    unsigned int i;
+    int err = 0;
+
+    vmx_vmcs_enter(v);
+
+    for ( i = 0; i < ctxt->count; ++i )
+    {
+        switch ( ctxt->msr[i].index )
+        {
+        case MSR_IA32_BNDCFGS:
+            if ( cpu_has_mpx )
+                __vmwrite(GUEST_BNDCFGS, ctxt->msr[i].val);
+            else
+                err = -ENXIO;
+            break;
+        default:
+            continue;
+        }
+        if ( err )
+            break;
+        ctxt->msr[i]._rsvd = 1;
+    }
+
+    vmx_vmcs_exit(v);
+
+    return err;
+}
+
 static void vmx_fpu_enter(struct vcpu *v)
 {
     vcpu_restore_fpu_lazy(v);
@@ -1607,6 +1656,9 @@ static struct hvm_function_table __initd
     .vcpu_destroy         = vmx_vcpu_destroy,
     .save_cpu_ctxt        = vmx_save_vmcs_ctxt,
     .load_cpu_ctxt        = vmx_load_vmcs_ctxt,
+    .init_msr             = vmx_init_msr,
+    .save_msr             = vmx_save_msr,
+    .load_msr             = vmx_load_msr,
     .get_interrupt_shadow = vmx_get_interrupt_shadow,
     .set_interrupt_shadow = vmx_set_interrupt_shadow,
     .guest_x86_mode       = vmx_guest_x86_mode,
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -367,6 +367,8 @@ enum vmcs_field {
     GUEST_PDPTR2_HIGH               = 0x0000280f,
     GUEST_PDPTR3                    = 0x00002810,
     GUEST_PDPTR3_HIGH               = 0x00002811,
+    GUEST_BNDCFGS                   = 0x00002812,
+    GUEST_BNDCFGS_HIGH              = 0x00002813,
     HOST_PAT                        = 0x00002c00,
     HOST_PAT_HIGH                   = 0x00002c01,
     HOST_EFER                       = 0x00002c02,

--=__Part5E6C0B53.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part5E6C0B53.1__=--



From xen-devel-bounces@lists.xen.org Mon Feb 24 12:47:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHuvx-000062-Mm; Mon, 24 Feb 2014 12:46:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHuvv-00005w-7D
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:46:55 +0000
Received: from [85.158.137.68:57781] by server-17.bemta-3.messagelabs.com id
	4F/9F-22569-E3F3B035; Mon, 24 Feb 2014 12:46:54 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393246013!2509037!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20306 invoked from network); 24 Feb 2014 12:46:53 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:46:53 -0000
Received: by mail-ee0-f47.google.com with SMTP id e49so935403eek.20
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 04:46:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=0zexlc00veIprRZ2YtCbijKszWfH86bX3DuWNTZJpkk=;
	b=KdczqyunRkKEzWOk6yPunL4ASmb3Kv0fcO/j9B6iVbxNBnjzNHgv9bVMbdaX/WZWmf
	MOtvg1V7JrXB4dr4csf+G9JPP3lWY7lB7iNNnCgUZWTksjlEwldh951eFAJoNQA2nxen
	/OrHnPz4A9KM0qIseU1JUZtJvsM8uoonTlzM7YRmxCwL7gLjX1v4H0bZV5DVVVUIhyby
	NiZekQvNZK3N2rKIc97TwhjG4sRgXZxsWtiy6I3v+jdeA6E4WK8HQEA84wxzfazqpW1a
	0r1pOLTl1KLn2CojmgYXUTNo22zWDmBEP/0TAXAv0GbeNN+fK3kfmJtkRic/L0RFSJTN
	FRrw==
X-Gm-Message-State: ALoCoQk+ea4Zu34bDsod69VJpdNqTIGgDTLjai94YnmvKCMMSdvhkfEPpT/6G8pq/8FcKecnF84K
X-Received: by 10.15.61.7 with SMTP id h7mr24474890eex.49.1393246012777;
	Mon, 24 Feb 2014 04:46:52 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m8sm53466341eef.14.2014.02.24.04.46.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 04:46:52 -0800 (PST)
Message-ID: <530B3F3A.6000202@linaro.org>
Date: Mon, 24 Feb 2014 12:46:50 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-9-git-send-email-julien.grall@linaro.org>
	<530B2F8A020000780011EAA4@nat28.tlf.novell.com>
In-Reply-To: <530B2F8A020000780011EAA4@nat28.tlf.novell.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 08/15] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan,

On 02/24/2014 10:39 AM, Jan Beulich wrote:
>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>> +int iommu_do_domctl(
>> +    struct xen_domctl *domctl, struct domain *d,
>> +    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>  {
>> -    const struct iommu_ops *ops = iommu_get_ops();
>> -    if ( iommu_intremap )
>> -        ops->read_msi_from_ire(msi_desc, msg);
>> -}
>> +    int ret = 0;
>>  
>> -unsigned int iommu_read_apic_from_ire(unsigned int apic, unsigned int reg)
>> -{
>> -    const struct iommu_ops *ops = iommu_get_ops();
>> -    return ops->read_apic_from_ire(apic, reg);
>> -}
>> +    if ( !iommu_enabled )
>> +        return -ENOSYS;
>>  
>> -int __init iommu_setup_hpet_msi(struct msi_desc *msi)
>> -{
>> -    const struct iommu_ops *ops = iommu_get_ops();
>> -    return ops->setup_hpet_msi ? ops->setup_hpet_msi(msi) : -ENODEV;
>> -}
>> +    switch ( domctl->cmd )
>> +    {
>> +#ifdef HAS_PCI
>> +    case XEN_DOMCTL_get_device_group:
>> +    case XEN_DOMCTL_test_assign_device:
>> +    case XEN_DOMCTL_assign_device:
>> +    case XEN_DOMCTL_deassign_device:
>> +        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
>> +        break;
>> +#endif
>> +    default:
>> +        ret = -ENOSYS;
>> +    }
> 
> Please simply have the default case chain to iommu_do_pci_domctl(),
> avoiding the need to change two source files when new sub-ops get
> added.

I wrote it in this manner because we will soon add "iommu_do_dt_domctl" to
handle DOMCTLs for device tree assignment.

For one of these functions we would still have to deal with this trick. Or
... we can do:

   ret = iommu_do_pci_domctl(...);
   if ( ret != -ENOSYS )
       return ret;
   ret = iommu_do_dt_domctl(...);

IMHO, I would prefer the former solution.

> 
> Also, last case in the set of case statements or not - the default
> case should also have a break statement.

I will add it.

>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> ...
>> @@ -980,6 +983,440 @@ static int __init setup_dump_pcidevs(void)
>>  }
>>  __initcall(setup_dump_pcidevs);
>>  
>> +static int iommu_populate_page_table(struct domain *d)
>> +{
> 
> Now why is this function being moved here? It doesn't appear to
> have anything PCI specific at all.

Because this function is only used in assign_device.

I also remember that this function can't work on ARM (arch.relmem_list
doesn't exist). I will move this code into x86/iommu.c because it seems to
be architecture-specific initialization.

> 
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -0,0 +1,65 @@
>> +/*
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
>> + * Place - Suite 330, Boston, MA 02111-1307 USA.
>> + */
>> +
>> +#include <xen/sched.h>
>> +#include <xen/iommu.h>
>> +#include <xen/paging.h>
>> +#include <xen/guest_access.h>
>> +#include <xen/event.h>
>> +#include <xen/softirq.h>
>> +#include <xsm/xsm.h>
>> +
>> +void iommu_update_ire_from_apic(
>> +    unsigned int apic, unsigned int reg, unsigned int value)
>> +{
>> +    const struct iommu_ops *ops = iommu_get_ops();
>> +    ops->update_ire_from_apic(apic, reg, value);
>> +}
> 
> While one might argue that this one is x86-specific (albeit from past
> IA64 days we know it isn't entirely), ...
> 
>> +
>> +int iommu_update_ire_from_msi(
>> +    struct msi_desc *msi_desc, struct msi_msg *msg)
>> +{
>> +    const struct iommu_ops *ops = iommu_get_ops();
>> +    return iommu_intremap ? ops->update_ire_from_msi(msi_desc, msg) : 0;
>> +}
> 
> ... this one clearly isn't - I'm sure once you support PCI on ARM, you
> will also want/need to support MSI. (The same then of course goes
> for the respective functions' declarations.)

Right, I guess it's the same for iommu_read_msi_from_ire.

>> --- /dev/null
>> +++ b/xen/include/asm-x86/iommu.h
>> @@ -0,0 +1,46 @@
>> +/*
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
>> + * Place - Suite 330, Boston, MA 02111-1307 USA.
>> +*/
>> +#ifndef __ARCH_X86_IOMMU_H__
>> +#define __ARCH_X86_IOMMU_H__
>> +
>> +#define MAX_IOMMUS 32
>> +
>> +#include <asm/msi.h>
> 
> Please don't, if at all possible.

I'm not sure I understand ... what do you mean? Don't include "asm/msi.h"?

>> @@ -84,52 +82,56 @@ void free_hvm_irq_dpci(struct hvm_irq_dpci *dpci);
>>  bool_t pt_irq_need_timer(uint32_t flags);
>>  
>>  #define PT_IRQ_TIME_OUT MILLISECS(8)
>> +#endif /* HAS_PCI */
>>  
>> +#ifdef CONFIG_X86
>>  struct msi_desc;
>>  struct msi_msg;
>> +#endif /* CONFIG_X86 */
> 
> Hardly - this again is a direct descendant from PCI.

I will #ifdef HAS_PCI.

>> +#ifdef CONFIG_X86
>>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
>>      int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
>>      void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
>>      unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
>>      int (*setup_hpet_msi)(struct msi_desc *);
>> +    void (*share_p2m)(struct domain *d);
>> +#endif /* CONFIG_X86 */
> 
> Is that last one really x86-specific in any way?

On ARM, the P2M is shared by default, so you don't need to call this
function explicitly. I think we can safely say it's x86-specific.

Developers won't call this function by mistake.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 24 12:58:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 12:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHv6i-0000H3-4O; Mon, 24 Feb 2014 12:58:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHv6f-0000Gy-Hp
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 12:58:01 +0000
Received: from [85.158.143.35:43797] by server-2.bemta-4.messagelabs.com id
	81/10-10891-8D14B035; Mon, 24 Feb 2014 12:58:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393246680!7864881!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30547 invoked from network); 24 Feb 2014 12:58:00 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 12:58:00 -0000
Received: by mail-ee0-f53.google.com with SMTP id c13so295098eek.12
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 04:58:00 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=MdCHxCubxrWaA7EyLOFSd9Ua9SzPIIMPWCCKVAebkrc=;
	b=ZQnS33/BliPQ4CznHTlOPQgWhDS3W9QaiKRRzBMck5PrXfX3N1XVk4dF0L0Sw6X92f
	2Y7OsCEYcIZb43KwzpXy561F1N+300Vh6iVRWr0HrDQUeoZrtvk1/Ycds/RK4lz1lYtd
	WDeKFg1wFL0bQbxzApRYeAO+LyYU+qDtLz47nvyAft7bFX5IsGQHeAxRmx4AbV9vK1nx
	sWn1TY5FTCd8fgCHhoCDVxetD5MWrlRebBMB8dWtU0Yvh+ZuNZb3Bjl2nPTxFJjD4OjP
	ayPa2ReV6qHyQTb055jk4DQswO+ir1EAcx8J6vhlSrtUBQPp+mXKKqmXGLT2rmi2Cv/6
	Q4ZA==
X-Gm-Message-State: ALoCoQkdS0QoWABbt0+A/ll5ftBwirAZzZd8r8BLhFlgH4DHwiI7+LjqsdLr/L/CT4Iop0GyN+gM
X-Received: by 10.14.108.1 with SMTP id p1mr24461251eeg.97.1393246679810;
	Mon, 24 Feb 2014 04:57:59 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm63497869eeg.5.2014.02.24.04.57.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 04:57:59 -0800 (PST)
Message-ID: <530B41D5.8070803@linaro.org>
Date: Mon, 24 Feb 2014 12:57:57 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
	<530B3083020000780011EAB1@nat28.tlf.novell.com>
In-Reply-To: <530B3083020000780011EAB1@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan,

On 02/24/2014 10:44 AM, Jan Beulich wrote:
>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>> --- a/xen/include/xen/hvm/iommu.h
>> +++ b/xen/include/xen/hvm/iommu.h
>> @@ -23,32 +23,8 @@
>>  #include <xen/iommu.h>
>>  #include <asm/hvm/iommu.h>
>>  
>> -struct g2m_ioport {
>> -    struct list_head list;
>> -    unsigned int gport;
>> -    unsigned int mport;
>> -    unsigned int np;
>> -};
>> -
>> -struct mapped_rmrr {
>> -    struct list_head list;
>> -    u64 base;
>> -    u64 end;
>> -};
>> -
>>  struct hvm_iommu {
>> -    u64 pgd_maddr;                 /* io page directory machine address */
>> -    spinlock_t mapping_lock;       /* io page table lock */
>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>> -    struct list_head mapped_rmrrs;
>> -
>> -    /* amd iommu support */
>> -    int domain_id;
> 
> At the very least this field doesn't look all that architecture specific,
> even if it might only be used on x86/AMD right now.

On ARM, each IOMMU will have its own private data stored in arch.priv.
I don't think domain_id will be used, as the driver can directly use
d->domain_id.

I took a look at the AMD IOMMU driver, and in the same function it mixes
d->domain_id and domain_hvm_iommu(d)->arch.domain_id. The latter is
initialized to d->domain_id in amd_iommu_domain_init.

I think we can even remove this field for x86...

>> -    int paging_mode;
> 
> The same might go for this one.

There is only one paging mode on ARM.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 24 13:05:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvE7-0000Ta-Jy; Mon, 24 Feb 2014 13:05:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHvE6-0000TM-AX
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:05:42 +0000
Received: from [85.158.137.68:27899] by server-2.bemta-3.messagelabs.com id
	AA/43-06531-5A34B035; Mon, 24 Feb 2014 13:05:41 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393247140!2291199!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21020 invoked from network); 24 Feb 2014 13:05:40 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 13:05:40 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 13:05:39 +0000
Message-Id: <530B51B0020000780011EC46@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 13:05:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Eddie Dong" <eddie.dong@intel.com>,
	"Jun Nakajima" <jun.nakajima@intel.com>,
	"Yang Z Zhang" <yang.z.zhang@intel.com>,"Tim Deegan" <tim@xen.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartC3F196B0.1__="
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: [Xen-devel] [RFC] utilizing EPT_MISCONFIG VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartC3F196B0.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

The attached draft patch (which still contains debugging code, and which
assumes other adjustments have been applied beforehand) demonstrates that
we should evaluate whether utilizing the EPT_MISCONFIG VM exit would be
an architecturally clean approach - not only for dealing with the memory
type adjustments here, but for basically everything currently done
through ->change_entry_type_global(). The fundamental idea is to defer
updates to the page tables until the respective entries actually get
used, instead of iterating through all page tables when the change is
requested, thus
- avoiding (here) or eliminating (elsewhere) long lasting operations
  without having to introduce expensive/fragile preemption handling
- leaving unaffected the sharing of the page tables with the IOMMU
  (since the EPT memory type field is available to the programmer on
  the IOMMU side; we obviously can't use the read/write bits without
  affecting the IOMMU)

The main question obviously is whether it is architecturally safe to
use any particular, presently invalid memory type (right now the
patches use type 7, i.e. the value defined for UC- in the PAT MSR
only), or whether such an invalid type could be determined at run
time.

Obviously, if we can go this route on EPT, the goal ought to be to
eliminate ->change_entry_type_global() altogether (i.e. also from the
generic P2M code) by using on-access adjustments instead of on-request
ones. Quite likely that would involve adding an address range to
->memory_type_changed().

Jan


--=__PartC3F196B0.1__=
Content-Type: text/plain; name="EPT-sync-mem-types.patch"
Content-Transfer-Encoding: 8bit
Content-Disposition: attachment; filename="EPT-sync-mem-types.patch"

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -252,6 +252,9 @@ int hvm_set_guest_pat(struct vcpu *v, u6
 
     if ( !hvm_funcs.set_guest_pat(v, guest_pat) )
         v->arch.hvm_vcpu.pat_cr = guest_pat;
+
+    memory_type_changed(v->domain);
+
     return 1;
 }
 
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -444,8 +444,12 @@ bool_t mtrr_def_type_msr_set(struct doma
         return 0;
     }
 
-    m->enabled = enabled;
-    m->def_type = def_type;
+    if ( m->enabled != enabled || m->def_type != def_type )
+    {
+        m->enabled = enabled;
+        m->def_type = def_type;
+        memory_type_changed(d);
+    }
 
     return 1;
 }
@@ -465,6 +469,7 @@ bool_t mtrr_fix_range_msr_set(struct dom
                 return 0;
 
         fixed_range_base[row] = msr_content;
+        memory_type_changed(d);
     }
 
     return 1;
@@ -504,6 +509,8 @@ bool_t mtrr_var_range_msr_set(
 
     m->overlapped = is_var_mtrr_overlapped(m);
 
+    memory_type_changed(d);
+
     return 1;
 }
 
@@ -696,6 +703,12 @@ static int hvm_load_mtrr_msr(struct doma
 HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, hvm_load_mtrr_msr,
                           1, HVMSR_PER_VCPU);
 
+void memory_type_changed(struct domain *d)
+{
+    if ( iommu_enabled && !iommu_snoop && d->vcpu && d->vcpu[0] )
+        p2m_memory_type_changed(d);
+}
+
 uint8_t epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
                            uint8_t *ipat, bool_t direct_mmio)
 {
@@ -735,8 +748,7 @@ uint8_t epte_get_entry_emt(struct domain
         return MTRR_TYPE_WRBACK;
     }
 
-    gmtrr_mtype = is_hvm_domain(d) && v &&
-                  d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] ?
+    gmtrr_mtype = is_hvm_domain(d) && v ?
                   get_mtrr_type(&v->arch.hvm_vcpu.mtrr, (gfn << PAGE_SHIFT)) :
                   MTRR_TYPE_WRBACK;
 
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3006,6 +3006,16 @@ void vmx_vmexit_handler(struct cpu_user_
         break;
     }
 
+    case EXIT_REASON_EPT_MISCONFIG:
+    {
+        paddr_t gpa;
+
+        __vmread(GUEST_PHYSICAL_ADDRESS, &gpa);
+        if ( !ept_handle_misconfig(gpa) )
+            goto exit_and_crash;
+        break;
+    }
+
     case EXIT_REASON_MONITOR_TRAP_FLAG:
         v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MONITOR_TRAP_FLAG;
         vmx_update_cpu_exec_control(v);
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -687,6 +687,168 @@ static void ept_change_entry_type_global
     ept_sync_domain(p2m);
 }
 
+static bool_t ept_invalidate_emt(mfn_t mfn)
+{
+    ept_entry_t *epte = map_domain_page(mfn_x(mfn));
+    unsigned int i;
+    bool_t changed = 0;
+
+    for ( i = 0; i < EPT_PAGETABLE_ENTRIES; i++ )
+    {
+        ept_entry_t e = atomic_read_ept_entry(&epte[i]);
+
+        if ( !is_epte_valid(&e) || !is_epte_present(&e) ||
+             e.emt == MTRR_NUM_TYPES )
+            continue;
+
+        e.emt = MTRR_NUM_TYPES;
+        atomic_write_ept_entry(&epte[i], e);
+        changed = 1;
+    }
+
+    unmap_domain_page(epte);
+
+    return changed;
+}
+
+static unsigned long invcnt, invthr, hmccnt, hmcthr, hmctot;//temp
+static void ept_memory_type_changed(struct p2m_domain *p2m)
+{
+    unsigned long mfn = ept_get_asr(&p2m->ept);
+
+    if ( !mfn )
+        return;
+if(++invcnt > invthr) {//temp
+ invthr |= invcnt;
+ printk("d%d: invalidate\n", p2m->domain->domain_id);
+ hmctot += hmccnt;
+ hmccnt = hmcthr = 0;
+}
+
+    if ( ept_invalidate_emt(_mfn(mfn)) )
+        ept_sync_domain(p2m);
+}
+
+bool_t ept_handle_misconfig(uint64_t gpa)
+{
+    struct vcpu *curr = current;
+    struct p2m_domain *p2m = p2m_get_hostp2m(curr->domain);
+    struct ept_data *ept = &p2m->ept;
+    unsigned int level = ept_get_wl(ept);
+    unsigned long gfn = PFN_DOWN(gpa);
+    unsigned long mfn = ept_get_asr(ept);
+    ept_entry_t *epte;
+    bool_t okay;
+bool_t print = 0;//temp
+static struct {//temp
+ u64 gpa;
+ u16 vcpu;
+ s8 mt[4];
+} log[32];
+static unsigned logidx;//temp
+unsigned l;//temp
+
+    if ( !mfn )
+        return 0;
+
+    p2m_lock(p2m);
+if(++hmccnt > hmcthr) {//temp
+ hmcthr |= hmccnt;
+ print = 1;
+ printk("%pv: miscfg@%09lx [%08lx/%08lx]\n", curr, gfn, hmctot + hmccnt, invcnt);
+}
+l = logidx++ % ARRAY_SIZE(log);//temp
+log[l].gpa = gpa;//temp
+log[l].vcpu = curr->vcpu_id;//temp
+memset(log[l].mt, -1, ARRAY_SIZE(log[l].mt));//temp
+
+    for ( okay = curr->arch.hvm_vmx.ept_spurious_misconfig; ; --level) {
+        ept_entry_t e;
+        unsigned int i;
+
+        epte = map_domain_page(mfn);
+        i = (gfn >> (level * EPT_TABLE_ORDER)) & (EPT_PAGETABLE_ENTRIES - 1);
+        e = atomic_read_ept_entry(&epte[i]);
+log[l].mt[3 - level] = e.emt;//temp
+if(print) printk("L%u[%03x]: %u\n", level, i, e.emt);//temp
+
+        if ( level == 0 || is_epte_superpage(&e) )
+        {
+            uint8_t ipat = 0;
+            struct vcpu *v;
+
+            if ( e.emt != MTRR_NUM_TYPES )
+                break;
+
+            if ( level == 0 )
+            {
+                for ( gfn -= i, i = 0; i < EPT_PAGETABLE_ENTRIES; ++i )
+                {
+                    e = atomic_read_ept_entry(&epte[i]);
+                    if ( e.emt == MTRR_NUM_TYPES )
+                        e.emt = 0;
+                    if ( !is_epte_valid(&e) || !is_epte_present(&e) )
+                        continue;
+                    e.emt = epte_get_entry_emt(p2m->domain, gfn + i,
+                                               _mfn(e.mfn), &ipat,
+                                               e.sa_p2mt == p2m_mmio_direct);
+                    e.ipat = ipat;
+                    atomic_write_ept_entry(&epte[i], e);
+++level;//temp
+                }
+if(print) printk("L0[] -> %u entries adjusted\n", level);//temp
+            }
+            else
+            {
+                e.emt = epte_get_entry_emt(p2m->domain, gfn, _mfn(e.mfn),
+                                           &ipat,
+                                           e.sa_p2mt == p2m_mmio_direct);
+                e.ipat = ipat;
+                atomic_write_ept_entry(&epte[i], e);
+if(print) printk("L%u[%03x] -> %u:%d (%lx)\n", level, i, e.emt, ipat, (long)e.mfn);//temp
+            }
+
+            for_each_vcpu ( curr->domain, v )
+                v->arch.hvm_vmx.ept_spurious_misconfig = 1;
+            okay = 1;
+            break;
+        }
+
+        if ( e.emt == MTRR_NUM_TYPES )
+        {
+            ASSERT(is_epte_present(&e));
+            e.emt = 0;
+            atomic_write_ept_entry(&epte[i], e);
+            unmap_domain_page(epte);
+
+            ept_invalidate_emt(_mfn(e.mfn));
+            okay = 1;
+        }
+        else if ( is_epte_present(&e) && !e.emt )
+            unmap_domain_page(epte);
+        else
+            break;
+
+        mfn = e.mfn;
+    }
+
+    unmap_domain_page(epte);
+    curr->arch.hvm_vmx.ept_spurious_misconfig = 0;
+if(!okay) {//temp
+ printk("%pv: miscfg@%lx fail@%u\n", curr, gfn, level);
+ for(;; l = (l ?: ARRAY_SIZE(log)) - 1) {
+  printk("v%d %012"PRIx64" %d:%d:%d:%d\n",
+         log[l].vcpu, log[l].gpa, log[l].mt[0], log[l].mt[1], log[l].mt[2], log[l].mt[3]);
+  if(l == (logidx % ARRAY_SIZE(log)))
+   break;
+ }
+}
+    ept_sync_domain(p2m);
+    p2m_unlock(p2m);
+
+    return okay;
+}
+
 static void __ept_sync_domain(void *info)
 {
     struct ept_data *ept = &((struct p2m_domain *)info)->ept;
@@ -724,6 +886,7 @@ int ept_p2m_init(struct p2m_domain *p2m)
     p2m->set_entry = ept_set_entry;
     p2m->get_entry = ept_get_entry;
     p2m->change_entry_type_global = ept_change_entry_type_global;
+    p2m->memory_type_changed = ept_memory_type_changed;
     p2m->audit_p2m = NULL;
 
     /* Set the memory type used when accessing EPT paging structures. */
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -200,6 +200,18 @@ void p2m_change_entry_type_global(struct
     p2m_unlock(p2m);
 }
 
+void p2m_memory_type_changed(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    if ( p2m->memory_type_changed )
+    {
+        p2m_lock(p2m);
+        p2m->memory_type_changed(p2m);
+        p2m_unlock(p2m);
+    }
+}
+
 mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
                     p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                     unsigned int *page_order, bool_t locked)
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -124,6 +124,9 @@ struct arch_vmx_struct {
 
     unsigned long        host_cr0;
 
+    /* Do we need to tolerate a spurious EPT_MISCONFIG VM exit? */
+    bool_t               ept_spurious_misconfig;
+
     /* Is the guest in real mode? */
     uint8_t              vmx_realmode;
     /* Are we emulating rather than VMENTERing? */
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -520,6 +520,7 @@ int ept_p2m_init(struct p2m_domain *p2m)
 void ept_p2m_uninit(struct p2m_domain *p2m);
 
 void ept_walk_table(struct domain *d, unsigned long gfn);
+bool_t ept_handle_misconfig(uint64_t gpa);
 void setup_ept_dump(void);
 
 void update_guest_eip(void);
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -233,6 +233,7 @@ struct p2m_domain {
     void               (*change_entry_type_global)(struct p2m_domain *p2m,
                                                    p2m_type_t ot,
                                                    p2m_type_t nt);
+    void               (*memory_type_changed)(struct p2m_domain *p2m);
 
     void               (*write_p2m_entry)(struct p2m_domain *p2m,
                                           unsigned long gfn, l1_pgentry_t *p,
@@ -506,6 +507,9 @@ void p2m_change_type_range(struct domain
 p2m_type_t p2m_change_type(struct domain *d, unsigned long gfn,
                            p2m_type_t ot, p2m_type_t nt);
 
+/* Report a change affecting memory types. */
+void p2m_memory_type_changed(struct domain *d);
+
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
 int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn);
--=__PartC3F196B0.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartC3F196B0.1__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 13:05:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvE6-0000TN-6B; Mon, 24 Feb 2014 13:05:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHvE4-0000TH-69
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:05:40 +0000
Received: from [193.109.254.147:30672] by server-7.bemta-14.messagelabs.com id
	1C/41-23424-3A34B035; Mon, 24 Feb 2014 13:05:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393247138!6390653!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25445 invoked from network); 24 Feb 2014 13:05:38 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 13:05:38 -0000
Received: by mail-ea0-f176.google.com with SMTP id b10so3089698eae.7
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 05:05:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=rn8aaWn6ApmNKAoouhsJ0d/ebxKrS9mid5y+hBZcdMQ=;
	b=grgwbrGYgr4m9TKoc39aTh61+AjCvaCWpaUuqGy+2QNQWW0I10xrhskdAEAfV032Ny
	VxOOFKjpPDN8W0UJ4KVtjB9d9nUsbuXgYx9x1M4FC990ZmcejofflvKtp0+V+CVHLlID
	/on9riidhOVQVJCDs8Sf4zGY9cQirmHUyXWTVPfi6fChCFRnbH0BwmQzl2pd8FX3vlZZ
	+w4gkXR23cxIYWcrmVvm2HfTrtZpiCYagtewmz1rue0CbY8I+kLB48ejN97LieLJ1VFd
	gyEs8M3I0o/9/b6haE9qANJEzxCSgQGCowCvfdXyD8EQkHzJtVQKa5ZYst/rn3tmBlF7
	S8XQ==
X-Gm-Message-State: ALoCoQnKVI8YSrhH6SRLd1FhtEeGVCgjEcsR9NsdRYGF0L7y/CCqjPD7tcWjePOFsDLoA4AlhQMe
X-Received: by 10.15.94.135 with SMTP id bb7mr24293415eeb.48.1393247138320;
	Mon, 24 Feb 2014 05:05:38 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x2sm63577524eeo.8.2014.02.24.05.05.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 05:05:37 -0800 (PST)
Message-ID: <530B439E.7030504@linaro.org>
Date: Mon, 24 Feb 2014 13:05:34 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
	<530B3083020000780011EAB1@nat28.tlf.novell.com>
	<530B28A5.8010903@citrix.com>
In-Reply-To: <530B28A5.8010903@citrix.com>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	Gang Wei <gang.wei@intel.com>, stefano.stabellini@citrix.com,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xenproject.org,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Andrew,

On 02/24/2014 11:10 AM, Andrew Cooper wrote:
> On 24/02/14 10:44, Jan Beulich wrote:
>>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>>> --- a/xen/include/xen/hvm/iommu.h
>>> +++ b/xen/include/xen/hvm/iommu.h
>>> @@ -23,32 +23,8 @@
>>>  #include <xen/iommu.h>
>>>  #include <asm/hvm/iommu.h>
>>>  
>>> -struct g2m_ioport {
>>> -    struct list_head list;
>>> -    unsigned int gport;
>>> -    unsigned int mport;
>>> -    unsigned int np;
>>> -};
>>> -
>>> -struct mapped_rmrr {
>>> -    struct list_head list;
>>> -    u64 base;
>>> -    u64 end;
>>> -};
>>> -
>>>  struct hvm_iommu {
>>> -    u64 pgd_maddr;                 /* io page directory machine address */
>>> -    spinlock_t mapping_lock;       /* io page table lock */
>>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>>> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
>>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>>> -    struct list_head mapped_rmrrs;
>>> -
>>> -    /* amd iommu support */
>>> -    int domain_id;
>> At the very least this field doesn't look all that architecture specific,
>> even if it might only be used on x86/AMD right now.
> 
> Furthermore, it can be found using container_of()

Every AMD IOMMU function starts with:
  hd = domain_hvm_iommu(d)
and then uses hd->arch.domain_id in the code.

The x86/AMD IOMMU code can directly use d->domain_id, as is already done
in some places. There is no need for container_of() :).

I will send a patch to remove domain_id from the structure.

Cheers,

-- 
Julien Grall


miscfg@%lx fail@%u\n", curr, gfn, level);=0A+ for(;; l =3D (l ?: ARRAY_SIZE=
(log)) - 1) {=0A+  printk("v%d %012"PRIx64" %d:%d:%d:%d\n",=0A+         =
log[l].vcpu, log[l].gpa, log[l].mt[0], log[l].mt[1], log[l].mt[2], =
log[l].mt[3]);=0A+  if(l =3D=3D (logidx % ARRAY_SIZE(log)))=0A+   =
break;=0A+ }=0A+}=0A+    ept_sync_domain(p2m);=0A+    p2m_unlock(p2m);=0A+=
=0A+    return okay;=0A+}=0A+=0A static void __ept_sync_domain(void =
*info)=0A {=0A     struct ept_data *ept =3D &((struct p2m_domain *)info)->e=
pt;=0A@@ -724,6 +886,7 @@ int ept_p2m_init(struct p2m_domain *p2m)=0A     =
p2m->set_entry =3D ept_set_entry;=0A     p2m->get_entry =3D ept_get_entry;=
=0A     p2m->change_entry_type_global =3D ept_change_entry_type_global;=0A+=
    p2m->memory_type_changed =3D ept_memory_type_changed;=0A     p2m->audit=
_p2m =3D NULL;=0A =0A     /* Set the memory type used when accessing EPT =
paging structures. */=0A--- a/xen/arch/x86/mm/p2m.c=0A+++ b/xen/arch/x86/mm=
/p2m.c=0A@@ -200,6 +200,18 @@ void p2m_change_entry_type_global(struct=0A  =
   p2m_unlock(p2m);=0A }=0A =0A+void p2m_memory_type_changed(struct domain =
*d)=0A+{=0A+    struct p2m_domain *p2m =3D p2m_get_hostp2m(d);=0A+=0A+    =
if ( p2m->memory_type_changed )=0A+    {=0A+        p2m_lock(p2m);=0A+     =
   p2m->memory_type_changed(p2m);=0A+        p2m_unlock(p2m);=0A+    =
}=0A+}=0A+=0A mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned =
long gfn,=0A                     p2m_type_t *t, p2m_access_t *a, p2m_query_=
t q,=0A                     unsigned int *page_order, bool_t locked)=0A--- =
a/xen/include/asm-x86/hvm/vmx/vmcs.h=0A+++ b/xen/include/asm-x86/hvm/vmx/vm=
cs.h=0A@@ -124,6 +124,9 @@ struct arch_vmx_struct {=0A =0A     unsigned =
long        host_cr0;=0A =0A+    /* Do we need to tolerate a spurious =
EPT_MISCONFIG VM exit? */=0A+    bool_t               ept_spurious_misconfi=
g;=0A+=0A     /* Is the guest in real mode? */=0A     uint8_t              =
vmx_realmode;=0A     /* Are we emulating rather than VMENTERing? */=0A--- =
a/xen/include/asm-x86/hvm/vmx/vmx.h=0A+++ b/xen/include/asm-x86/hvm/vmx/vmx=
.h=0A@@ -520,6 +520,7 @@ int ept_p2m_init(struct p2m_domain *p2m)=0A void =
ept_p2m_uninit(struct p2m_domain *p2m);=0A =0A void ept_walk_table(struct =
domain *d, unsigned long gfn);=0A+bool_t ept_handle_misconfig(uint64_t =
gpa);=0A void setup_ept_dump(void);=0A =0A void update_guest_eip(void);=0A-=
-- a/xen/include/asm-x86/p2m.h=0A+++ b/xen/include/asm-x86/p2m.h=0A@@ =
-233,6 +233,7 @@ struct p2m_domain {=0A     void               (*change_ent=
ry_type_global)(struct p2m_domain *p2m,=0A                                 =
                   p2m_type_t ot,=0A                                       =
             p2m_type_t nt);=0A+    void               (*memory_type_change=
d)(struct p2m_domain *p2m);=0A     =0A     void               (*write_p2m_e=
ntry)(struct p2m_domain *p2m,=0A                                           =
unsigned long gfn, l1_pgentry_t *p,=0A@@ -506,6 +507,9 @@ void p2m_change_t=
ype_range(struct domain=0A p2m_type_t p2m_change_type(struct domain *d, =
unsigned long gfn,=0A                            p2m_type_t ot, p2m_type_t =
nt);=0A =0A+/* Report a change affecting memory types. */=0A+void =
p2m_memory_type_changed(struct domain *d);=0A+=0A /* Set mmio addresses in =
the p2m table (for pass-through) */=0A int set_mmio_p2m_entry(struct =
domain *d, unsigned long gfn, mfn_t mfn);=0A int clear_mmio_p2m_entry(struc=
t domain *d, unsigned long gfn);=0A

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Mon Feb 24 13:05:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvE6-0000TN-6B; Mon, 24 Feb 2014 13:05:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHvE4-0000TH-69
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:05:40 +0000
Received: from [193.109.254.147:30672] by server-7.bemta-14.messagelabs.com id
	1C/41-23424-3A34B035; Mon, 24 Feb 2014 13:05:39 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393247138!6390653!1
X-Originating-IP: [209.85.215.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25445 invoked from network); 24 Feb 2014 13:05:38 -0000
Received: from mail-ea0-f176.google.com (HELO mail-ea0-f176.google.com)
	(209.85.215.176)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 13:05:38 -0000
Received: by mail-ea0-f176.google.com with SMTP id b10so3089698eae.7
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 05:05:38 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=rn8aaWn6ApmNKAoouhsJ0d/ebxKrS9mid5y+hBZcdMQ=;
	b=grgwbrGYgr4m9TKoc39aTh61+AjCvaCWpaUuqGy+2QNQWW0I10xrhskdAEAfV032Ny
	VxOOFKjpPDN8W0UJ4KVtjB9d9nUsbuXgYx9x1M4FC990ZmcejofflvKtp0+V+CVHLlID
	/on9riidhOVQVJCDs8Sf4zGY9cQirmHUyXWTVPfi6fChCFRnbH0BwmQzl2pd8FX3vlZZ
	+w4gkXR23cxIYWcrmVvm2HfTrtZpiCYagtewmz1rue0CbY8I+kLB48ejN97LieLJ1VFd
	gyEs8M3I0o/9/b6haE9qANJEzxCSgQGCowCvfdXyD8EQkHzJtVQKa5ZYst/rn3tmBlF7
	S8XQ==
X-Gm-Message-State: ALoCoQnKVI8YSrhH6SRLd1FhtEeGVCgjEcsR9NsdRYGF0L7y/CCqjPD7tcWjePOFsDLoA4AlhQMe
X-Received: by 10.15.94.135 with SMTP id bb7mr24293415eeb.48.1393247138320;
	Mon, 24 Feb 2014 05:05:38 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x2sm63577524eeo.8.2014.02.24.05.05.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 05:05:37 -0800 (PST)
Message-ID: <530B439E.7030504@linaro.org>
Date: Mon, 24 Feb 2014 13:05:34 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
	<530B3083020000780011EAB1@nat28.tlf.novell.com>
	<530B28A5.8010903@citrix.com>
In-Reply-To: <530B28A5.8010903@citrix.com>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	Gang Wei <gang.wei@intel.com>, stefano.stabellini@citrix.com,
	Jan Beulich <JBeulich@suse.com>, xen-devel@lists.xenproject.org,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Andrew,

On 02/24/2014 11:10 AM, Andrew Cooper wrote:
> On 24/02/14 10:44, Jan Beulich wrote:
>>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>>> --- a/xen/include/xen/hvm/iommu.h
>>> +++ b/xen/include/xen/hvm/iommu.h
>>> @@ -23,32 +23,8 @@
>>>  #include <xen/iommu.h>
>>>  #include <asm/hvm/iommu.h>
>>>  
>>> -struct g2m_ioport {
>>> -    struct list_head list;
>>> -    unsigned int gport;
>>> -    unsigned int mport;
>>> -    unsigned int np;
>>> -};
>>> -
>>> -struct mapped_rmrr {
>>> -    struct list_head list;
>>> -    u64 base;
>>> -    u64 end;
>>> -};
>>> -
>>>  struct hvm_iommu {
>>> -    u64 pgd_maddr;                 /* io page directory machine address */
>>> -    spinlock_t mapping_lock;       /* io page table lock */
>>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>>> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
>>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>>> -    struct list_head mapped_rmrrs;
>>> -
>>> -    /* amd iommu support */
>>> -    int domain_id;
>> At the very least this field doesn't look all that architecture specific,
>> even if it might only be used on x86/AMD right now.
> 
> Furthermore, it can be found using container_of()

Every AMD function starts with:
  hd = domain_hvm_iommu(d)
and then uses hd->arch.domain_id in the code.

The x86/AMD IOMMU code can use d->domain_id directly, as is already done
in some places. No need for container_of() :).
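For illustration only, a minimal sketch of the two approaches being compared. The struct definitions and function names here are simplified stand-ins, not the real Xen declarations:

```c
#include <stddef.h>

/* Hypothetical, trimmed-down stand-ins for the structures under discussion. */
struct hvm_iommu {
    int dummy;
};

struct domain {
    int domain_id;
    struct hvm_iommu hvm_iommu;   /* embedded in the domain, as in Xen */
};

/* container_of(): recover a pointer to the enclosing structure from a
 * pointer to one of its members. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Andrew's suggestion: walk back from the hvm_iommu to its domain. */
static int domid_via_container_of(struct hvm_iommu *hd)
{
    return container_of(hd, struct domain, hvm_iommu)->domain_id;
}

/* Julien's point: when d is already in scope, just read d->domain_id. */
static int domid_direct(struct domain *d)
{
    return d->domain_id;
}
```

Both return the same id for a given domain; the second simply avoids the pointer arithmetic when the domain pointer is already at hand.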

I will send a patch to remove domain_id from the structure.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 24 13:08:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvGW-0000hw-Mj; Mon, 24 Feb 2014 13:08:12 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHvGV-0000hf-AJ
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:08:11 +0000
Received: from [193.109.254.147:64003] by server-12.bemta-14.messagelabs.com
	id 93/A4-17220-A344B035; Mon, 24 Feb 2014 13:08:10 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393247282!1133901!1
X-Originating-IP: [209.85.215.178]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10196 invoked from network); 24 Feb 2014 13:08:02 -0000
Received: from mail-ea0-f178.google.com (HELO mail-ea0-f178.google.com)
	(209.85.215.178)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 13:08:02 -0000
Received: by mail-ea0-f178.google.com with SMTP id a15so3165387eae.9
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 05:08:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=mI1EDEw3eQLhJNgDnbqlSmA2qEGgbyTakswQ1UjpBrM=;
	b=ajATh7Uy2/Ika6nC4IRNWoeH7Lic8xWOVRYpcBWA+EY3pTcz95ZTvWgglAknxAvb+g
	Gw2H1OcUINpLjADp33wwY2z4vjWRji29Qo6H5jPqjv7D6P99+bJMX+VyeeJuMjQlHQU7
	9jhVEIwncHNfkQToIrCU1nC9vA5OJTmE+korN4sZ3J3hF/wgcDWivZfPDfwA2hvlyOKv
	JdT/gJqV3vqbQwUdIKSg4DMplF/iw5rp+SF8Zr5rFlUxMECBm+z5ZgRABquqRslgY1bD
	D7T8GGvODCEpXfZwz1SWRFnkyl0AI5yOo837IsnHvtD9P6hlQXgxg3GpOYZJ/cr+MNR4
	1M9g==
X-Gm-Message-State: ALoCoQnw0Ye8vd/uUCuFalFuwYJt941F4O2WnLXm6r8BOnAq+XDwbjhJWdOG/byMz9lTGpG0ywpP
X-Received: by 10.15.23.66 with SMTP id g42mr17491740eeu.9.1393247281786;
	Mon, 24 Feb 2014 05:08:01 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id u6sm63617913eep.11.2014.02.24.05.08.00
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 05:08:01 -0800 (PST)
Message-ID: <530B442F.7030200@linaro.org>
Date: Mon, 24 Feb 2014 13:07:59 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-11-git-send-email-julien.grall@linaro.org>
	<530B314D020000780011EAB4@nat28.tlf.novell.com>
In-Reply-To: <530B314D020000780011EAB4@nat28.tlf.novell.com>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 10/15] xen/passthrough: iommu: Basic
 support of device tree assignment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Jan,

On 02/24/2014 10:47 AM, Jan Beulich wrote:
>>>> On 23.02.14 at 23:16, Julien Grall <julien.grall@linaro.org> wrote:
>> Add IOMMU helpers to support device tree assignment/deassignment. This patch
>> introduces 2 new fields in the dt_device_node:
>>     - is_protected: Does the device is protected by an IOMMU
>>     - next_assigned: Pointer to the next device assigned to the same
>>     domain
>>
>> Signed-off-by: Julien Grall <julien.grall@linaro.org>
>>
>> ---
>>     Changes in v2:
>>         - Patch added
>> ---
>>  xen/common/device_tree.c              |    4 ++
>>  xen/drivers/passthrough/Makefile      |    1 +
>>  xen/drivers/passthrough/device_tree.c |  106 +++++++++++++++++++++++++++++++++
>>  xen/drivers/passthrough/iommu.c       |   10 ++++
>>  xen/include/xen/device_tree.h         |   14 +++++
>>  xen/include/xen/hvm/iommu.h           |    6 ++
>>  xen/include/xen/iommu.h               |   16 +++++
> 
> No matter how small the changes to generic IOMMU code, you should
> Cc the maintainers.

Oops, sorry.

>> --- a/xen/drivers/passthrough/iommu.c
>> +++ b/xen/drivers/passthrough/iommu.c
>> @@ -123,6 +123,12 @@ int iommu_domain_init(struct domain *d)
>>      if ( ret )
>>          return ret;
>>  
>> +#if HAS_DEVICE_TREE
>> +    ret = iommu_dt_domain_init(d);
>> +    if ( ret )
>> +        return ret;
>> +#endif
> 
> Why can this not be part of arch_iommu_domain_init()?

I hadn't thought of that solution. I will move it into
arch_iommu_domain_init() in the next version.
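A sketch of the agreed refactoring, with hypothetical stub names and a simplified signature (the real arch_iommu_domain_init() takes a struct domain *):

```c
/* Assume a device-tree-capable configuration for this sketch. */
#define HAS_DEVICE_TREE 1

/* Hypothetical stub for the DT-specific initialisation. */
static int iommu_dt_domain_init_stub(void)
{
    return 0;
}

/* The refactored shape: the #ifdef lives inside the arch hook, so the
 * generic iommu_domain_init() stays free of device-tree knowledge. */
static int arch_iommu_domain_init(void)
{
#ifdef HAS_DEVICE_TREE
    int ret = iommu_dt_domain_init_stub();

    if ( ret )
        return ret;
#endif
    return 0;
}
```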

>> @@ -198,6 +204,10 @@ void iommu_domain_destroy(struct domain *d)
>>      if ( need_iommu(d) )
>>          iommu_teardown(d);
>>  
>> +#ifdef HAS_DEVICE_TREE
>> +    iommu_dt_domain_destroy(d);
>> +#endif
>> +
>>      arch_iommu_domain_destroy(d);
> 
> And the former one here part of the latter?

Ok.

>> @@ -28,6 +29,11 @@ struct hvm_iommu {
>>  
>>      /* iommu_ops */
>>      const struct iommu_ops *platform_ops;
>> +
>> +    #ifdef HAS_DEVICE_TREE
>> +    /* List of DT devices assigned to this domain */
>> +    struct list_head dt_devices;
>> +    #endif
> 
> Indentation.

Will do.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 24 13:14:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvM4-0000w5-Kf; Mon, 24 Feb 2014 13:13:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHvM3-0000w0-Bx
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:13:55 +0000
Received: from [85.158.143.35:27921] by server-3.bemta-4.messagelabs.com id
	7C/1A-11539-2954B035; Mon, 24 Feb 2014 13:13:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393247633!7858520!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7495 invoked from network); 24 Feb 2014 13:13:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 13:13:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 13:13:53 +0000
Message-Id: <530B539D020000780011EC83@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 13:13:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-9-git-send-email-julien.grall@linaro.org>
	<530B2F8A020000780011EAA4@nat28.tlf.novell.com>
	<530B3F3A.6000202@linaro.org>
In-Reply-To: <530B3F3A.6000202@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 08/15] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 13:46, Julien Grall <julien.grall@linaro.org> wrote:
>>> +    switch ( domctl->cmd )
>>> +    {
>>> +#ifdef HAS_PCI
>>> +    case XEN_DOMCTL_get_device_group:
>>> +    case XEN_DOMCTL_test_assign_device:
>>> +    case XEN_DOMCTL_assign_device:
>>> +    case XEN_DOMCTL_deassign_device:
>>> +        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
>>> +        break;
>>> +#endif
>>> +    default:
>>> +        ret = -ENOSYS;
>>> +    }
>> 
>> Please simply have the default case chain to iommu_do_pci_domctl(),
>> avoiding the need to change two source files when new sub-ops get
>> added.
> 
> I wrote in this manner because we will add soon "iommu_do_dt_domctl" to
> handle DOMCTL for device tree assignment.
> 
> For one of this function we will have to deal with this trick. Or ... we
> can do:
> 
>    ret = iommu_do_pci_domctl(...)
>    if ( ret != ENOSYS )
>      return ret;
>    ret = iommu_do_dt_domctl(...)
> 
> IHMO, I would prefer the former solution.

While I'd prefer the latter, perhaps slightly adjusted to not as
heavily special case -ENOSYS (i.e. call the second function if any
error was returned from the first, and use the error value that
was not -ENOSYS unless both functions returned it).
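One possible reading of this adjustment, sketched with hypothetical stubs (the real handlers take a domctl, a domain, and a guest handle): try the PCI handler first, fall through to the DT handler on any error, and prefer whichever error code is not -ENOSYS.

```c
#include <errno.h>

/* Hypothetical stubs for the two handlers: here cmd 1 is a "PCI" sub-op
 * and cmd 2 a "device tree" sub-op; anything else is unhandled. */
static int do_pci_domctl(int cmd)
{
    return cmd == 1 ? 0 : -ENOSYS;
}

static int do_dt_domctl(int cmd)
{
    return cmd == 2 ? 0 : -ENOSYS;
}

/* Chain the handlers: on any error from the first, try the second, and
 * keep the more informative (non--ENOSYS) result. */
static int iommu_do_domctl(int cmd)
{
    int ret = do_pci_domctl(cmd);

    if ( ret )
    {
        int ret2 = do_dt_domctl(cmd);

        if ( ret == -ENOSYS || ret2 == 0 )
            ret = ret2;
    }
    return ret;
}
```

With this shape, -ENOSYS is only returned when neither handler recognises the sub-op, and adding a new sub-op touches only the handler that implements it.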

>>> +#ifndef __ARCH_X86_IOMMU_H__
>>> +#define __ARCH_X86_IOMMU_H__
>>> +
>>> +#define MAX_IOMMUS 32
>>> +
>>> +#include <asm/msi.h>
>> 
>> Please don't, if at all possible.
> 
> I'm not sure to understand ... what do you mean? Don't include "asm/msi.h"?

Exactly. All you need in this header are forward declarations of two
struct-s.

>>> +#ifdef CONFIG_X86
>>>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
>>>      int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
>>>      void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
>>>      unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
>>>      int (*setup_hpet_msi)(struct msi_desc *);
>>> +    void (*share_p2m)(struct domain *d);
>>> +#endif /* CONFIG_X86 */
>> 
>> Is that last one really x86-specific in any way?
> 
> On ARM, P2M are share by default. You don't need to call this function
> explicitly. So I think we can safely say it's x86-specific.
> 
> Developper won't call this function by mistake.

But then again this could easily be dealt with in ARM (providing a
no-op stub) or the generic code (checking the pointer to be non-
NULL), allowing future ports (or ARM itself, should it ever need to)
to use it.
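The generic-code variant of this suggestion can be sketched as follows, with a hypothetical trimmed-down iommu_ops and a call counter added purely for demonstration:

```c
#include <stddef.h>

struct domain;                      /* opaque here, as in the real headers */

/* Hypothetical iommu_ops carrying just the hook at issue. */
struct iommu_ops {
    void (*share_p2m)(struct domain *d);
};

static int share_calls;             /* counts invocations, for the demo */

/* Stand-in for an x86 implementation of the hook. */
static void x86_share_p2m(struct domain *d)
{
    (void)d;
    ++share_calls;
}

/* Generic code tolerates a NULL hook, so a port whose p2m is shared
 * implicitly (as on ARM) need not provide an implementation. */
static void iommu_share_p2m(const struct iommu_ops *ops, struct domain *d)
{
    if ( ops->share_p2m )
        ops->share_p2m(d);
    /* else: nothing to do -- the p2m is already shared */
}
```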

Jan



From xen-devel-bounces@lists.xen.org Mon Feb 24 13:14:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvM4-0000w5-Kf; Mon, 24 Feb 2014 13:13:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHvM3-0000w0-Bx
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:13:55 +0000
Received: from [85.158.143.35:27921] by server-3.bemta-4.messagelabs.com id
	7C/1A-11539-2954B035; Mon, 24 Feb 2014 13:13:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393247633!7858520!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7495 invoked from network); 24 Feb 2014 13:13:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 13:13:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 13:13:53 +0000
Message-Id: <530B539D020000780011EC83@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 13:13:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-9-git-send-email-julien.grall@linaro.org>
	<530B2F8A020000780011EAA4@nat28.tlf.novell.com>
	<530B3F3A.6000202@linaro.org>
In-Reply-To: <530B3F3A.6000202@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@citrix.com,
	ian.campbell@citrix.com, Xiantao Zhang <xiantao.zhang@intel.com>,
	tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 08/15] xen/passthrough: iommu: Split
 generic IOMMU code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 13:46, Julien Grall <julien.grall@linaro.org> wrote:
>>> +    switch ( domctl->cmd )
>>> +    {
>>> +#ifdef HAS_PCI
>>> +    case XEN_DOMCTL_get_device_group:
>>> +    case XEN_DOMCTL_test_assign_device:
>>> +    case XEN_DOMCTL_assign_device:
>>> +    case XEN_DOMCTL_deassign_device:
>>> +        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
>>> +        break;
>>> +#endif
>>> +    default:
>>> +        ret = -ENOSYS;
>>> +    }
>> 
>> Please simply have the default case chain to iommu_do_pci_domctl(),
>> avoiding the need to change two source files when new sub-ops get
>> added.
> 
> I wrote it this way because we will soon add "iommu_do_dt_domctl" to
> handle DOMCTLs for device tree assignment.
> 
> For one of these functions we will have to deal with this trick. Or ... we
> can do:
> 
>    ret = iommu_do_pci_domctl(...)
>    if ( ret != -ENOSYS )
>      return ret;
>    ret = iommu_do_dt_domctl(...)
> 
> IMHO, I would prefer the former solution.

While I'd prefer the latter, perhaps slightly adjusted so as not to
special-case -ENOSYS as heavily (i.e. call the second function if any
error was returned from the first, and use the error value that
was not -ENOSYS unless both functions returned it).

>>> +#ifndef __ARCH_X86_IOMMU_H__
>>> +#define __ARCH_X86_IOMMU_H__
>>> +
>>> +#define MAX_IOMMUS 32
>>> +
>>> +#include <asm/msi.h>
>> 
>> Please don't, if at all possible.
> 
> I'm not sure I understand ... what do you mean? Don't include "asm/msi.h"?

Exactly. All you need in this header are forward declarations of two
struct-s.

>>> +#ifdef CONFIG_X86
>>>      void (*update_ire_from_apic)(unsigned int apic, unsigned int reg, unsigned int value);
>>>      int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
>>>      void (*read_msi_from_ire)(struct msi_desc *msi_desc, struct msi_msg *msg);
>>>      unsigned int (*read_apic_from_ire)(unsigned int apic, unsigned int reg);
>>>      int (*setup_hpet_msi)(struct msi_desc *);
>>> +    void (*share_p2m)(struct domain *d);
>>> +#endif /* CONFIG_X86 */
>> 
>> Is that last one really x86-specific in any way?
> 
> On ARM, the P2M is shared by default. You don't need to call this function
> explicitly. So I think we can safely say it's x86-specific.
> 
> Developers won't call this function by mistake.

But then again this could easily be dealt with in ARM (providing a
no-op stub) or in the generic code (checking that the pointer is
non-NULL), allowing future ports (or ARM itself, should it ever need
to) to use it.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 13:16:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvOy-00012u-9R; Mon, 24 Feb 2014 13:16:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHvOw-00012p-Or
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:16:54 +0000
Received: from [85.158.139.211:20852] by server-2.bemta-5.messagelabs.com id
	A2/9A-23037-6464B035; Mon, 24 Feb 2014 13:16:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393247812!5867139!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21098 invoked from network); 24 Feb 2014 13:16:53 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 13:16:53 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 13:16:52 +0000
Message-Id: <530B544F020000780011EC94@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 13:16:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
	<530B3083020000780011EAB1@nat28.tlf.novell.com>
	<530B41D5.8070803@linaro.org>
In-Reply-To: <530B41D5.8070803@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 13:57, Julien Grall <julien.grall@linaro.org> wrote:
>>>  struct hvm_iommu {
>>> -    u64 pgd_maddr;                 /* io page directory machine address */
>>> -    spinlock_t mapping_lock;       /* io page table lock */
>>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>>> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
>>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>>> -    struct list_head mapped_rmrrs;
>>> -
>>> -    /* amd iommu support */
>>> -    int domain_id;
>> 
>> At the very least this field doesn't look all that architecture specific,
>> even if it might only be used on x86/AMD right now.
> 
> On ARM, each IOMMU will have its own private data stored in arch.priv.
> I don't think domain_id will be used, as the driver can directly use
> d->domain_id.
> 
> I had a look at the AMD IOMMU drivers, and in the same function they mix
> d->domain_id and domain_hvm_iommu(d)->arch.domain_id. The latter is
> initialized to d->domain_id in amd_iommu_domain_init.
> 
> I think we can even remove this field for x86...

As Andrew suggested too.

>>> -    int paging_mode;
>> 
>> The same might go for this one.
> 
> There is only one paging mode on ARM.

But please make your changes here not in the spirit of fitting ARM in,
but to make the code reasonably architecture-clean. Which means
that fields having a purpose outside of x86 should remain common,
instead of making x86-specific everything that ARM (currently) has
no use for.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 13:31:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:31:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvcj-0001Jw-3G; Mon, 24 Feb 2014 13:31:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHvci-0001Jr-1r
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:31:08 +0000
Received: from [85.158.143.35:21860] by server-1.bemta-4.messagelabs.com id
	6B/9F-31661-B994B035; Mon, 24 Feb 2014 13:31:07 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393248666!7863815!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4117 invoked from network); 24 Feb 2014 13:31:06 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 13:31:06 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 13:31:06 +0000
Message-Id: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 13:31:02 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartFFCDAA86.0__="
Cc: dgdegra@tycho.nsa.gov, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to subject
	domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartFFCDAA86.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

Not having got any satisfactory suggestions on the inquiry on how to
determine the amount a PoD guest needs to balloon down by (see
http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg01524.html
and the thread following it), expose XENMEM_get_pod_target such that
the guest can use it for this purpose.

Also leverage some cleanup potential resulting from this change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4800,7 +4800,6 @@ long arch_memory_op(int op, XEN_GUEST_HA
     {
         xen_pod_target_t target;
         struct domain *d;
-        struct p2m_domain *p2m;
 
         if ( copy_from_guest(&target, arg, 1) )
             return -EFAULT;
@@ -4810,23 +4809,17 @@ long arch_memory_op(int op, XEN_GUEST_HA
             return -ESRCH;
 
         if ( op == XENMEM_set_pod_target )
-            rc = xsm_set_pod_target(XSM_PRIV, d);
-        else
-            rc = xsm_get_pod_target(XSM_PRIV, d);
-
-        if ( rc != 0 )
-            goto pod_target_out_unlock;
-
-        if ( op == XENMEM_set_pod_target )
         {
-            if ( target.target_pages > d->max_pages )
-            {
+            rc = xsm_set_pod_target(XSM_PRIV, d);
+            if ( rc )
+                /* nothing */;
+            else if ( target.target_pages > d->max_pages )
                 rc = -EINVAL;
-                goto pod_target_out_unlock;
-            }
-
-            rc = p2m_pod_set_mem_target(d, target.target_pages);
+            else
+                rc = p2m_pod_set_mem_target(d, target.target_pages);
         }
+        else
+            rc = xsm_get_pod_target(XSM_TARGET, d);
 
         if ( rc == -EAGAIN )
         {
@@ -4835,19 +4828,16 @@ long arch_memory_op(int op, XEN_GUEST_HA
         }
         else if ( rc >= 0 )
         {
-            p2m = p2m_get_hostp2m(d);
+            const struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
             target.tot_pages       = d->tot_pages;
             target.pod_cache_pages = p2m->pod.count;
             target.pod_entries     = p2m->pod.entry_count;
 
             if ( __copy_to_guest(arg, &target, 1) )
-            {
                 rc= -EFAULT;
-                goto pod_target_out_unlock;
-            }
         }
 
-    pod_target_out_unlock:
         rcu_unlock_domain(d);
         return rc;
     }
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -307,7 +307,7 @@ static XSM_INLINE char *xsm_show_securit
 
 static XSM_INLINE int xsm_get_pod_target(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_TARGET);
     return xsm_default_action(action, current->domain, d);
 }
 




--=__PartFFCDAA86.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartFFCDAA86.0__=--


From xen-devel-bounces@lists.xen.org Mon Feb 24 13:34:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvfN-0001Qi-OF; Mon, 24 Feb 2014 13:33:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHvfM-0001Qc-EG
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:33:52 +0000
Received: from [85.158.137.68:64598] by server-2.bemta-3.messagelabs.com id
	A6/CF-06531-F3A4B035; Mon, 24 Feb 2014 13:33:51 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393248830!3807364!1
X-Originating-IP: [209.85.215.180]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20990 invoked from network); 24 Feb 2014 13:33:50 -0000
Received: from mail-ea0-f180.google.com (HELO mail-ea0-f180.google.com)
	(209.85.215.180)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 13:33:50 -0000
Received: by mail-ea0-f180.google.com with SMTP id o10so3060733eaj.25
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 05:33:50 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Fnb1Gc/R/WNpfT5mox2jipSpioStK2Xl27tMADAfK9Q=;
	b=KzyRBt/mVlj0f2czsv4oJT88LcNhBUo+2Q2KS9YeYsZyPmi/erVI1fgZpzPTe+Nf2h
	b3UUneEDzD45Ok5Eg1upxw/zYYzc6NEIjzVB+C0apc2F9Cbs9P4NSg1Z0LGpsVbw5jkx
	8VB0d7o+xiOb58enK4XtbombEuY//4lXLaqyxSYU0bOyHK3qQ6l9IcHhBXHs1HzojlK7
	UmydqPmfYRARDzrzndcyR9xcyW1iSVrMXdRWQxejU801x0mLHf2dqJyZaj3b4rowhYm9
	QN/g9ToFgWT7zyAF4onEUABz2p4zKOKGW8JsUv8dm3T3aAvz6diJkr0/PBCkuNwKTz4U
	iK3g==
X-Gm-Message-State: ALoCoQnzOQIcwzaUpff8gq8rtn3+MYmtDrstwVC6JDsPso8pJklQO2YE3tOJBtvqVcZqgdF28zJS
X-Received: by 10.14.106.193 with SMTP id m41mr24799673eeg.62.1393248830505;
	Mon, 24 Feb 2014 05:33:50 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id m1sm63841329een.7.2014.02.24.05.33.44
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 05:33:49 -0800 (PST)
Message-ID: <530B4A38.5070008@linaro.org>
Date: Mon, 24 Feb 2014 13:33:44 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393193792-20008-1-git-send-email-julien.grall@linaro.org>
	<1393193792-20008-10-git-send-email-julien.grall@linaro.org>
	<530B3083020000780011EAB1@nat28.tlf.novell.com>
	<530B41D5.8070803@linaro.org>
	<530B544F020000780011EC94@nat28.tlf.novell.com>
In-Reply-To: <530B544F020000780011EC94@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Shane Wang <shane.wang@intel.com>,
	Joseph Cihula <joseph.cihula@intel.com>, tim@xen.org,
	stefano.stabellini@citrix.com,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	xen-devel@lists.xenproject.org, Gang Wei <gang.wei@intel.com>,
	Xiantao Zhang <xiantao.zhang@intel.com>
Subject: Re: [Xen-devel] [PATCH v2 09/15] xen/passthrough: iommu: Introduce
 arch specific code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 01:16 PM, Jan Beulich wrote:
>>>> On 24.02.14 at 13:57, Julien Grall <julien.grall@linaro.org> wrote:
>>>>  struct hvm_iommu {
>>>> -    u64 pgd_maddr;                 /* io page directory machine address */
>>>> -    spinlock_t mapping_lock;       /* io page table lock */
>>>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>>>> -    struct list_head g2m_ioport_list;  /* guest to machine ioport mapping */
>>>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>>>> -    struct list_head mapped_rmrrs;
>>>> -
>>>> -    /* amd iommu support */
>>>> -    int domain_id;
>>>
>>> At the very least this field doesn't look all that architecture specific,
>>> even if it might only be used on x86/AMD right now.
>>
>> On ARM, each IOMMU will have it's own private data stored in arch.priv.
>> I don't think domain_id will be used as the driver can directly use
>> d->domain_id.
>>
>> I gave a look on AMD IOMMU drivers, and in a same function they mixed
>> d->domain_id and domain_hvm_iommu(d)->arch.domain_id. The latter one has
>> been initialized to d->domain_id in amd_iommu_domain_init.
>>
>> I think, we can even remove this field for x86...
> 
> As Andrew suggested too.
> 
>>>> -    int paging_mode;
>>>
>>> The same might go for this one.
>>
>> There is only one paging mode on ARM.
> 
> But please make your changes here not in the spirit of fitting in ARM,
> but to make the code reasonable architecture clean. Which means
> that fields having a purpose outside of x86 should remain common,
> instead of making x86-specific everything that ARM (currently) has
> no use for.

I didn't want to repeat what I said for the previous field. ARM doesn't
have one or two generic IOMMU drivers as x86 does; basically, every
vendor can decide to design its own IOMMU.

To avoid having a big structure with some fields used by only one
driver, every ARM IOMMU driver has its own private structure.

IMHO, every field stored in the generic hvm_iommu structure should be
initialized by generic code, not by a specific driver.
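The split being argued for can be sketched as follows. This is an illustrative C sketch only: the field names and the two init functions are simplified stand-ins, not Xen's actual hvm_iommu/arch_hvm_iommu definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* x86-specific IOMMU state, moved out of the common structure
 * (example fields loosely modelled on the VT-d/AMD ones quoted above). */
struct arch_hvm_iommu {
    uint64_t pgd_maddr;   /* io page directory machine address */
    int      agaw;        /* adjusted guest address width */
    int      paging_mode; /* AMD IOMMU paging mode */
};

/* Common part: only fields every architecture uses; driver-private
 * state (e.g. a per-SMMU structure on ARM) hangs off 'priv'. */
struct hvm_iommu {
    struct arch_hvm_iommu arch;
    void *priv;
};

/* Generic code initializes the common fields... */
static void iommu_domain_init(struct hvm_iommu *hd)
{
    hd->priv = NULL;
}

/* ...while a specific driver only touches its own arch fields
 * (the value 2 here is purely illustrative). */
static void amd_iommu_domain_init(struct hvm_iommu *hd)
{
    hd->arch.paging_mode = 2;
}
```

The point of the split is the ownership rule: generic code never reaches into `arch`, and drivers never initialize the common fields.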

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 24 13:50:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 13:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHvv2-0001fw-Pr; Mon, 24 Feb 2014 13:50:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@schaman.hu>) id 1WHvv1-0001fr-KA
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 13:50:03 +0000
Received: from [193.109.254.147:35831] by server-8.bemta-14.messagelabs.com id
	3F/A2-18529-90E4B035; Mon, 24 Feb 2014 13:50:01 +0000
X-Env-Sender: zoltan.kiss@schaman.hu
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393249799!6399915!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 866 invoked from network); 24 Feb 2014 13:49:59 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 13:49:59 -0000
Received: by mail-we0-f173.google.com with SMTP id x48so4668498wes.18
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 05:49:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=0pCqzDeNdjoo3zl/RLklaqQJKtaATIq2RuxK8SIcYjQ=;
	b=ZOwxZexZsysBCrJZTxZup3vKm8re1AFmTz8PXfCnM6szi+3LOjyd3S4w2c7tfr/P8l
	c6BdMNzG6IhvZNQwHq3i7sBdkh0CAvjX/psbsdRfZ9pa4B5V7PQXtYkskwg4fClzs92G
	DtVDxiDBKeh2DbWQxXHbAB42D5r64KtyUE8PoNnx6eUQNKXFQ/bztlsDHv97SZm6bdud
	kqTTP9I+l31JVCrLRnQhdZz2nXb/DXgr8rpi7dzingonk2SoyvJef3DOH9mWRWO27bNa
	aM9YnG+Vr4fDFRnmMuXKTshZh8CnFEohm0NS2Qgfres8lrwvLDtm66PMlpBnmikFiPZO
	iYXA==
X-Gm-Message-State: ALoCoQmycNOpKrOdim0sL5X5ljwcHtOAKaJAmcj3FSWty2RA7BQbSbt+E7I//tFFQ/ojuedwuvQL
X-Received: by 10.194.109.68 with SMTP id hq4mr19325272wjb.12.1393249799110;
	Mon, 24 Feb 2014 05:49:59 -0800 (PST)
Received: from ?IPv6:2001:630:12:2e1c:bc84:4c22:6fb0:2b1a?
	([2001:630:12:2e1c:bc84:4c22:6fb0:2b1a])
	by mx.google.com with ESMTPSA id f1sm25092101wik.1.2014.02.24.05.49.58
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 05:49:58 -0800 (PST)
Message-ID: <530B4E05.4020900@schaman.hu>
Date: Mon, 24 Feb 2014 13:49:57 +0000
From: Zoltan Kiss <zoltan.kiss@schaman.hu>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
	<1392745532.23084.65.camel@kazak.uk.xensource.com>
	<53093051.9040907@citrix.com>
In-Reply-To: <53093051.9040907@citrix.com>
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/02/14 23:18, Zoltan Kiss wrote:
> On 18/02/14 17:45, Ian Campbell wrote:
>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>
>> Re the Subject: change how? Perhaps "handle foreign mapped pages on the
>> guest RX path" would be clearer.
> Ok, I'll do that.
>
>>
>>> RX path need to know if the SKB fragments are stored on pages from 
>>> another
>>> domain.
>> Does this not need to be done either before the mapping change or at the
>> same time? -- otherwise you have a window of a couple of commits where
>> things are broken, breaking bisectability.
> I can move this to the beginning, to keep bisectability. I've put it 
> here originally because none of these makes sense without the previous 
> patches.
Well, I gave it a close look: to move this to the beginning as a 
separate patch, I would need to move a lot of definitions from the 
first patch here (the ubuf_to_vif helper, xenvif_zerocopy_callback etc.). 
That would be best from a bisect point of view, but even worse than 
now from a patch-review point of view. So the only option I see is to 
merge this with the first two patches, which would make them even 
bigger. On that principle, patches #6 and #8 should be merged there as 
well, as they solve corner cases introduced by the grant mapping.
I don't know how strictly the bisecting requirements are written in 
stone. At the moment all the separate patches compile, but after #2 
there are new problems that are only solved in #4, #6 and #8. If 
someone bisects into the middle of this range and runs into those 
problems, they could quite easily figure out what went wrong by 
looking at the adjacent patches. So I would recommend keeping the 
current order.
What's your opinion?

Zoli


From xen-devel-bounces@lists.xen.org Mon Feb 24 14:09:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwDL-0001yo-2k; Mon, 24 Feb 2014 14:08:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1WHwDK-0001yj-1O
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:08:58 +0000
Received: from [85.158.143.35:59368] by server-1.bemta-4.messagelabs.com id
	95/36-31661-9725B035; Mon, 24 Feb 2014 14:08:57 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393250935!7893678!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 901 invoked from network); 24 Feb 2014 14:08:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:08:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103538047"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:08:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:08:54 -0500
Received: from chilopoda.uk.xensource.com ([10.80.2.139])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<julien.grall@citrix.com>)	id 1WHwDF-0005OT-Mk;
	Mon, 24 Feb 2014 14:08:53 +0000
Message-ID: <530B5275.7010008@citrix.com>
Date: Mon, 24 Feb 2014 14:08:53 +0000
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
	<530673BD.9010301@linaro.org>
In-Reply-To: <530673BD.9010301@linaro.org>
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(Adding Jan for the x86 part.)

On 02/20/2014 09:29 PM, Julien Grall wrote:
> Hi Ian,
> 
> On 02/19/2014 11:55 AM, Ian Campbell wrote:
>> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>>> On ARM, it may happen (eg ARM SMMU) to setup multiple handler for the same
>>> interrupt.
>>
>> Mention here that you are therefore creating a linked list of actions
>> for each interrupt.
>>
>> If you use xen/list.h for this then you get a load of helpers and
>> iterators which would save you open coding them.
> 
> After thinking, using xen/list.h won't really remove open code, except
> removing "action_ptr" in release_dt_irq.
> 
> Calling release_dt_irq to an IRQ with multiple action shouldn't be
> called often. Therefore, having both prev and next is a waste of space.

Jan, as it's common code, do you have any thoughts?
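The trade-off under discussion can be sketched in C: a singly-linked chain of actions per IRQ costs one `next` pointer per action, at the price of an open-coded pointer-to-pointer walk when releasing, which a doubly-linked xen/list.h `list_head` would avoid. Names below are illustrative, not Xen's actual irqaction definitions.

```c
#include <assert.h>
#include <stddef.h>

/* One handler registered on an IRQ; 'next' chains multiple actions. */
struct irqaction {
    void (*handler)(int irq, void *dev_id);
    void *dev_id;
    struct irqaction *next;   /* single forward pointer, no prev */
};

/* Deliver the interrupt to every registered action in turn. */
static void handle_irq_chain(int irq, struct irqaction *head)
{
    struct irqaction *a;

    for ( a = head; a; a = a->next )
        a->handler(irq, a->dev_id);
}

/* Release needs the open-coded pointer-to-pointer walk that a
 * doubly-linked list would avoid -- at the cost of a prev pointer. */
static void release_action(struct irqaction **headp, void *dev_id)
{
    struct irqaction **pp;

    for ( pp = headp; *pp; pp = &(*pp)->next )
        if ( (*pp)->dev_id == dev_id )
        {
            *pp = (*pp)->next;
            return;
        }
}

/* Trivial handler used to demonstrate the chain. */
static int calls;
static void count_handler(int irq, void *dev_id)
{
    (void)irq;
    (void)dev_id;
    calls++;
}
```

If release is rare (as argued above), paying the extra pointer per action just to simplify it is a questionable trade.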

-- 
Julien Grall


From xen-devel-bounces@lists.xen.org Mon Feb 24 14:09:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:09:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwDL-0001yo-2k; Mon, 24 Feb 2014 14:08:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@citrix.com>) id 1WHwDK-0001yj-1O
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:08:58 +0000
Received: from [85.158.143.35:59368] by server-1.bemta-4.messagelabs.com id
	95/36-31661-9725B035; Mon, 24 Feb 2014 14:08:57 +0000
X-Env-Sender: julien.grall@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393250935!7893678!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 901 invoked from network); 24 Feb 2014 14:08:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:08:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103538047"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:08:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:08:54 -0500
Received: from chilopoda.uk.xensource.com ([10.80.2.139])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<julien.grall@citrix.com>)	id 1WHwDF-0005OT-Mk;
	Mon, 24 Feb 2014 14:08:53 +0000
Message-ID: <530B5275.7010008@citrix.com>
Date: Mon, 24 Feb 2014 14:08:53 +0000
From: Julien Grall <julien.grall@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
	<530673BD.9010301@linaro.org>
In-Reply-To: <530673BD.9010301@linaro.org>
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(Adding Jan for x86 part).

On 02/20/2014 09:29 PM, Julien Grall wrote:
> Hi Ian,
> 
> On 02/19/2014 11:55 AM, Ian Campbell wrote:
>> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>>> On ARM, it may be necessary (e.g. for the ARM SMMU) to set up multiple
>>> handlers for the same interrupt.
>>
>> Mention here that you are therefore creating a linked list of actions
>> for each interrupt.
>>
>> If you use xen/list.h for this then you get a load of helpers and
>> iterators which would save you open coding them.
> 
> On reflection, using xen/list.h won't really remove much open-coded logic,
> except for the "action_ptr" handling in release_dt_irq.
> 
> release_dt_irq shouldn't often be called on an IRQ with multiple actions,
> so keeping both prev and next pointers is a waste of space.

Jan, as it's common code, do you have any thoughts?

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:12:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwH8-00025a-Q2; Mon, 24 Feb 2014 14:12:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHwH6-00025V-8r
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 14:12:53 +0000
Received: from [193.109.254.147:22272] by server-16.bemta-14.messagelabs.com
	id 0E/4B-21945-3635B035; Mon, 24 Feb 2014 14:12:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393251170!6404755!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30655 invoked from network); 24 Feb 2014 14:12:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 14:12:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 14:12:49 +0000
Message-Id: <530B616E020000780011ECD5@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 14:12:46 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-2-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1392791564-37170-2-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 1/6] x86: detect and initialize Cache QoS
 Monitoring feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> +struct pqos_cqm __read_mostly *cqm = NULL;

Misplaced __read_mostly (belongs after the *).

> +static void __init init_cqm(void)
> +{
> +    unsigned int rmid;
> +    unsigned int eax, edx;
> +    unsigned int cqm_pages;
> +    unsigned int i;
> +
> +    if ( !opt_cqm_max_rmid )
> +        return;
> +
> +    cqm = xzalloc(struct pqos_cqm);
> +    if ( !cqm )
> +        return;
> +
> +    cpuid_count(0xf, 1, &eax, &cqm->upscaling_factor, &cqm->max_rmid, &edx);
> +    if ( !(edx & QOS_MONITOR_EVTID_L3) )
> +        goto out;
> +
> +    cqm->min_rmid = 1;
> +    cqm->max_rmid = min(opt_cqm_max_rmid, cqm->max_rmid);
> +
> +    cqm->rmid_to_dom = xmalloc_array(domid_t, cqm->max_rmid + 1);
> +    if ( !cqm->rmid_to_dom )
> +        goto out;
> +
> +    /* Reserve RMID 0 for all domains not being monitored */
> +    cqm->rmid_to_dom[0] = DOMID_XEN;
> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> +        cqm->rmid_to_dom[rmid] = DOMID_INVALID;
> +
> +    /* Allocate CQM buffer size in initialization stage */
> +    cqm_pages = ((cqm->max_rmid + 1) * sizeof(domid_t) +
> +                (cqm->max_rmid + 1) * sizeof(uint64_t) * NR_CPUS)/

Does this really need to be NR_CPUS (rather than nr_cpu_ids)?

> +                PAGE_SIZE + 1;
> +    cqm->buffer_size = cqm_pages * PAGE_SIZE;
> +
> +    cqm->buffer = alloc_xenheap_pages(get_order_from_pages(cqm_pages), 0);

And does the allocation really need to be physically contiguous?
If so - did you calculate how much more memory you allocate
(due to the rounding up to the next power of 2), to decide
whether it's worthwhile freeing the unused portion?

> +    if ( !cqm->buffer )
> +    {
> +        xfree(cqm->rmid_to_dom);
> +        goto out;
> +    }
> +    memset(cqm->buffer, 0, cqm->buffer_size);
> +
> +    for ( i = 0; i < cqm_pages; i++ )
> +        share_xen_page_with_privileged_guests(
> +            virt_to_page((void *)((unsigned long)cqm->buffer + i * PAGE_SIZE)),

virt_to_page((void *)cqm->buffer + i * PAGE_SIZE)

> +static void __init init_qos_monitor(void)
> +{
> +    unsigned int qm_features;
> +    unsigned int eax, ebx, ecx;
> +
> +    if ( !(boot_cpu_has(X86_FEATURE_QOSM)) )

Pointless pair of parentheses.

> +        return;
> +
> +    cpuid_count(0xf, 0, &eax, &ebx, &ecx, &qm_features);
> +
> +    if ( opt_cqm && (qm_features & QOS_MONITOR_TYPE_L3) )
> +        init_cqm();
> +}
> +
> +void __init init_platform_qos(void)
> +{
> +    if ( !opt_pqos )
> +        return;
> +
> +    init_qos_monitor();
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index b49256d..639528f 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -48,6 +48,7 @@
>  #include <asm/setup.h>
>  #include <xen/cpu.h>
>  #include <asm/nmi.h>
> +#include <asm/pqos.h>
>  
>  /* opt_nosmp: If true, secondary processors are ignored. */
>  static bool_t __initdata opt_nosmp;
> @@ -1419,6 +1420,8 @@ void __init __start_xen(unsigned long mbi_p)
>  
>      domain_unpause_by_systemcontroller(dom0);
>  
> +    init_platform_qos();
> +
>      reset_stack_and_jump(init_done);
>  }
>  
> diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
> index 1cfaf94..ca59668 100644
> --- a/xen/include/asm-x86/cpufeature.h
> +++ b/xen/include/asm-x86/cpufeature.h
> @@ -147,6 +147,7 @@
>  #define X86_FEATURE_ERMS	(7*32+ 9) /* Enhanced REP MOVSB/STOSB */
>  #define X86_FEATURE_INVPCID	(7*32+10) /* Invalidate Process Context ID */
>  #define X86_FEATURE_RTM 	(7*32+11) /* Restricted Transactional Memory */
> +#define X86_FEATURE_QOSM	(7*32+12) /* Platform QoS monitoring capability */
>  #define X86_FEATURE_NO_FPU_SEL 	(7*32+13) /* FPU CS/DS stored as zero */
>  #define X86_FEATURE_SMAP	(7*32+20) /* Supervisor Mode Access Prevention */
>  
> diff --git a/xen/include/asm-x86/pqos.h b/xen/include/asm-x86/pqos.h
> new file mode 100644
> index 0000000..0a8065c
> --- /dev/null
> +++ b/xen/include/asm-x86/pqos.h
> @@ -0,0 +1,43 @@
> +/*
> + * pqos.h: Platform QoS related service for guest.
> + *
> + * Copyright (c) 2014, Intel Corporation
> + * Author: Jiongxi Li  <jiongxi.li@intel.com>
> + * Author: Dongxiao Xu <dongxiao.xu@intel.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License 
> for
> + * more details.
> + */
> +#ifndef ASM_PQOS_H
> +#define ASM_PQOS_H
> +
> +#include <public/xen.h>
> +#include <xen/spinlock.h>
> +
> +/* QoS Resource Type Enumeration */
> +#define QOS_MONITOR_TYPE_L3            0x2
> +
> +/* QoS Monitoring Event ID */
> +#define QOS_MONITOR_EVTID_L3           0x1
> +
> +struct pqos_cqm {
> +    spinlock_t cqm_lock;
> +    uint64_t *buffer;
> +    unsigned int min_rmid;
> +    unsigned int max_rmid;
> +    unsigned int used_rmid;
> +    unsigned int upscaling_factor;
> +    unsigned int buffer_size;
> +    domid_t *rmid_to_dom;
> +};
> +extern struct pqos_cqm *cqm;
> +
> +void init_platform_qos(void);
> +
> +#endif
> -- 
> 1.7.9.5



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:13:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:13:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwHY-00027m-73; Mon, 24 Feb 2014 14:13:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHwHW-00027c-1E
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:13:18 +0000
Received: from [85.158.143.35:55920] by server-2.bemta-4.messagelabs.com id
	7F/CD-10891-D735B035; Mon, 24 Feb 2014 14:13:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393251195!7864171!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15916 invoked from network); 24 Feb 2014 14:13:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:13:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105193215"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:12:59 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:12:58 -0500
Message-ID: <1393251177.16570.69.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Julien Grall <julien.grall@citrix.com>
Date: Mon, 24 Feb 2014 14:12:57 +0000
In-Reply-To: <530B5275.7010008@citrix.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
	<530673BD.9010301@linaro.org> <530B5275.7010008@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, patches@linaro.org,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 14:08 +0000, Julien Grall wrote:
> (Adding Jan for x86 part).
> 
> On 02/20/2014 09:29 PM, Julien Grall wrote:
> > Hi Ian,
> > 
> > On 02/19/2014 11:55 AM, Ian Campbell wrote:
> >> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
> >>> On ARM, it may be necessary (e.g. for the ARM SMMU) to set up multiple
> >>> handlers for the same interrupt.
> >>
> >> Mention here that you are therefore creating a linked list of actions
> >> for each interrupt.
> >>
> >> If you use xen/list.h for this then you get a load of helpers and
> >> iterators which would save you open coding them.
> > 
> > On reflection, using xen/list.h won't really remove much open-coded logic,
> > except for the "action_ptr" handling in release_dt_irq.

You can use list_for_each*() in do_IRQ too, and in release_dt_irq you get
a well-debugged list deletion implementation instead of reimplementing
your own.

> > release_dt_irq shouldn't often be called on an IRQ with multiple actions,
> > so keeping both prev and next pointers is a waste of space.

If this is a concern, we could import the Linux singly-linked list macros
alongside the existing doubly-linked ones we took from them.

Ian.

> Jan, as it's common code, do you have any thoughts?
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:15:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwJR-0002HX-Qv; Mon, 24 Feb 2014 14:15:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHwJQ-0002HK-DQ
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 14:15:16 +0000
Received: from [193.109.254.147:62374] by server-13.bemta-14.messagelabs.com
	id 62/34-01226-3F35B035; Mon, 24 Feb 2014 14:15:15 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393251314!6425749!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3015 invoked from network); 24 Feb 2014 14:15:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 14:15:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 14:15:14 +0000
Message-Id: <530B61FE020000780011ECD8@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 14:15:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-3-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1392791564-37170-3-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 2/6] x86: dynamically attach/detach CQM
 service for a guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> @@ -1245,6 +1246,33 @@ long arch_do_domctl(
>      }
>      break;
>  
> +    case XEN_DOMCTL_attach_pqos:
> +    {
> +        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
> +            ret = -EINVAL;
> +        else if ( !system_supports_cqm() )
> +            ret = -ENODEV;
> +        else
> +            ret = alloc_cqm_rmid(d);
> +    }

Pointless curly braces.

> +    case XEN_DOMCTL_detach_pqos:
> +    {
> +        if ( !(domctl->u.qos_type.flags & XEN_DOMCTL_pqos_cqm) )
> +            ret = -EINVAL;
> +        else if ( !system_supports_cqm() )
> +            ret = -ENODEV;
> +        else if ( d->arch.pqos_cqm_rmid > 0 )
> +        {
> +            free_cqm_rmid(d);
> +            ret = 0;
> +        }
> +        else
> +            ret = -ENOENT;
> +    }

Again.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:19:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:19:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwNU-0002Ww-Jv; Mon, 24 Feb 2014 14:19:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHwNT-0002Wq-26
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:19:27 +0000
Received: from [85.158.139.211:47930] by server-8.bemta-5.messagelabs.com id
	AB/39-05298-EE45B035; Mon, 24 Feb 2014 14:19:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393251564!5885270!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16091 invoked from network); 24 Feb 2014 14:19:25 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:19:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105194897"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:19:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 09:19:23 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNP-00064N-2H;
	Mon, 24 Feb 2014 14:19:23 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNM-0005qL-TC;
	Mon, 24 Feb 2014 14:19:20 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 24 Feb 2014 14:19:12 +0000
Message-ID: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: [Xen-devel] [PATCH 0/3] libxl: Fix deadlock with pygrub
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Thanks to Michael Young for reporting this, and my apologies for
introducing this bug.

  1/3 libxl: Fix libxl_postfork_child_noexec deadlock etc.
  2/3 libxl: Hold the atfork lock while closing carefd
  3/3 libxl: Fix carefd lock leak in save callout

Patch 1 is the actual bugfix.  The bug is IMO clearly a blocker for
4.4 so the patch should go in if it is correct.  I have checked that
it seems to fix the problem for me.

Patches 2 and 3 are other theoretical locking bugs I discovered while
looking for the cause of the pygrub deadlock.  I haven't tried to
construct test cases that might exercise these bugs; doing so would be
quite difficult.

The atfork carefd race (patch 2) might be relevant to callers which
are multithreaded and also call libxl_postfork_child_noexec.  libvirt
does not make any such call.  xl is single-threaded.  I haven't
investigated other toolstacks, but the race is theoretical rather than
observed.

The carefd lock leak (patch 3) is obvious but theoretical except in
deeply pathological situations.  I don't think it's worth adding risk
to 4.4 to fix it.

So I think patch 1 should go into 4.4.0 after review.
2-3 should wait for 4.4.1 (coming via unstable in the usual way).



From xen-devel-bounces@lists.xen.org Mon Feb 24 14:19:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwNd-0002Xr-0S; Mon, 24 Feb 2014 14:19:37 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHwNb-0002XZ-17
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:19:35 +0000
Received: from [85.158.139.211:7080] by server-16.bemta-5.messagelabs.com id
	B1/20-05060-6F45B035; Mon, 24 Feb 2014 14:19:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393251571!5909438!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8267 invoked from network); 24 Feb 2014 14:19:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:19:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105194945"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:19:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 09:19:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNR-00064Q-HY;
	Mon, 24 Feb 2014 14:19:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNP-0005qQ-UA;
	Mon, 24 Feb 2014 14:19:23 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 24 Feb 2014 14:19:13 +0000
Message-ID: <1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
	deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

libxl_postfork_child_noexec would recursively reacquire the non-recursive
"no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
The result on Linux is that the process always deadlocks before
returning from this function.

This is used by xl's console child.  So, the ultimate effect is that
xl with pygrub does not manage to connect to the pygrub console.
This behaviour was reported by Michael Young in Xen 4.4.0 RC5.

Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
documented to suffice if called only on one ctx.  So deregistering the
ctx it's called on is not sufficient.  Instead, we need a new approach
which discards the whole sigchld_user list and unconditionally removes
our SIGCHLD handler if we had one.

Prompted by this, clarify the semantics of
libxl_postfork_child_noexec.  Specifically, expand on the meaning of
"quickly" by explaining what operations are not permitted; and
document the fact that the function doesn't reclaim the resources in
the ctxs.

And add a comment in libxl_postfork_child_noexec explaining the
internal concurrency situation.

This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
Reported-by: M A Young <m.a.young@durham.ac.uk>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/libxl/libxl_event.h |   16 ++++++++++++++++
 tools/libxl/libxl_fork.c  |   44 +++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_event.h b/tools/libxl/libxl_event.h
index ca43cb9..b5db83c 100644
--- a/tools/libxl/libxl_event.h
+++ b/tools/libxl/libxl_event.h
@@ -601,6 +601,22 @@ void libxl_childproc_sigchld_occurred(libxl_ctx *ctx)
  * this all previously existing libxl_ctx's are invalidated and
  * must not be used - or even freed.  It is harmless to call this
  * postfork function and then exec anyway.
+ *
+ * Until libxl_postfork_child_noexec has returned:
+ *  - No other libxl calls may be made.
+ *  - If any libxl ctx was configured to handle the process's SIGCHLD,
+ *    the child may not create further (grand)child processes, nor
+ *    manipulate SIGCHLD.
+ *
+ * libxl_postfork_child_noexec may not reclaim all the resources
+ * associated with the libxl ctx.  This includes but is not limited
+ * to: ordinary memory; files on disk and in /var/run; file
+ * descriptors; memory mapped into the process from domains being
+ * managed (grant maps); Xen event channels.  Use of libxl in
+ * processes which fork long-lived children is not recommended for
+ * this reason.  libxl_postfork_child_noexec is provided so that
+ * an application can make further libxl calls in a child which
+ * is going to exec or exit soon.
  */
 void libxl_postfork_child_noexec(libxl_ctx *ctx);
 
diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 1d0017b..8421296 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -60,6 +60,9 @@ static void sigchld_removehandler_core(void); /* idempotent */
 static void sigchld_user_remove(libxl_ctx *ctx); /* idempotent */
 static void sigchld_sethandler_raw(void (*handler)(int), struct sigaction *old);
 
+static void defer_sigchld(void);
+static void release_sigchld(void);
+
 static void atfork_lock(void)
 {
     int r = pthread_mutex_lock(&no_forking);
@@ -117,6 +120,19 @@ libxl__carefd *libxl__carefd_opened(libxl_ctx *ctx, int fd)
 
 void libxl_postfork_child_noexec(libxl_ctx *ctx)
 {
+    /*
+     * Anything running without the no_forking lock (atfork_lock)
+     * might be interrupted by fork.  But libxl functions other than
+     * this one are then forbidden to the child.
+     *
+     * Conversely, this function might interrupt any other libxl
+     * operation (even though that other operation has the libxl ctx
+     * lock).  We don't take the lock ourselves, since we are running
+     * in the child and if the lock is held the thread that took it no
+     * longer exists.  To prevent us being interrupted by another call
+     * to ourselves (whether in another thread or by virtue of another
+     * fork) we take the atfork lock ourselves.
+     */
     libxl__carefd *cf, *cf_tmp;
     int r;
 
@@ -134,7 +150,33 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
     }
     LIBXL_LIST_INIT(&carefds);
 
-    sigchld_user_remove(ctx);
+    if (sigchld_installed) {
+        /* We are in theory not at risk of concurrent execution of the
+         * SIGCHLD handler, because the application should call
+         * libxl_postfork_child_noexec before the child forks again.
+         * (If the SIGCHLD was in flight in the parent at the time of
+         * the fork, the thread it was delivered on exists only in the
+         * parent so is not our concern.)
+         *
+         * But in case the application violated this rule (and did so
+         * while multithreaded in the child), we use our deferral
+         * machinery.  The result is that the SIGCHLD may then be lost
+         * (i.e. signaled to the now-defunct libxl ctx(s)).  But at
+         * least we won't execute undefined behaviour (by examining
+         * the list in the signal handler concurrently with clearing
+         * it here), and since we won't actually reap the new children
+         * things will in fact go OK if the application doesn't try to
+         * use SIGCHLD, but instead just waits for the child(ren). */
+        defer_sigchld();
+
+        LIBXL_LIST_INIT(&sigchld_users);
+        /* After this the ->sigchld_user_registered entries in the
+         * now-obsolete contexts may be lies.  But that's OK because
+         * no-one will look at them. */
+
+        release_sigchld();
+        sigchld_removehandler_core();
+    }
 
     atfork_unlock();
 }
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 14:19:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwNg-0002ZG-KC; Mon, 24 Feb 2014 14:19:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHwNf-0002Ym-5k
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:19:39 +0000
Received: from [85.158.143.35:44184] by server-1.bemta-4.messagelabs.com id
	07/DA-31661-AF45B035; Mon, 24 Feb 2014 14:19:38 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393251576!7902398!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19868 invoked from network); 24 Feb 2014 14:19:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:19:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103540594"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:19:35 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 09:19:35 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNW-00064T-6N;
	Mon, 24 Feb 2014 14:19:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNU-0005qY-H8;
	Mon, 24 Feb 2014 14:19:28 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 24 Feb 2014 14:19:14 +0000
Message-ID: <1393251555-22418-3-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: [Xen-devel] [PATCH 2/3] libxl: Hold the atfork lock while closing
	carefd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This avoids the process being forked while a carefd is recorded in the
list but the actual fd has been closed.  If that happened, a
subsequent libxl_postfork_child_noexec would attempt to close the fd
again.  If we are lucky that results in a harmless warning; but if we
are unlucky the fd number has been reused and we close an unrelated
fd.

This race has not been observed anywhere as far as we are aware.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/libxl/libxl_fork.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
index 8421296..fa15095 100644
--- a/tools/libxl/libxl_fork.c
+++ b/tools/libxl/libxl_fork.c
@@ -184,9 +184,9 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
 int libxl__carefd_close(libxl__carefd *cf)
 {
     if (!cf) return 0;
+    atfork_lock();
     int r = cf->fd < 0 ? 0 : close(cf->fd);
     int esave = errno;
-    atfork_lock();
     LIBXL_LIST_REMOVE(cf, entry);
     atfork_unlock();
     free(cf);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:19:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwNi-0002aA-2F; Mon, 24 Feb 2014 14:19:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHwNg-0002ZC-OS
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:19:40 +0000
Received: from [85.158.143.35:54366] by server-3.bemta-4.messagelabs.com id
	EE/8B-11539-CF45B035; Mon, 24 Feb 2014 14:19:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393251576!7902398!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20201 invoked from network); 24 Feb 2014 14:19:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:19:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103540624"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:19:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 09:19:38 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNZ-00064W-0S;
	Mon, 24 Feb 2014 14:19:33 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHwNX-0005qg-Fm;
	Mon, 24 Feb 2014 14:19:31 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 24 Feb 2014 14:19:15 +0000
Message-ID: <1393251555-22418-4-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: [Xen-devel] [PATCH 3/3] libxl: Fix carefd lock leak in save callout
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If libxl_pipe fails we leave the carefd locked, which translates to
the atfork lock remaining held.  This would probably cause the process
to deadlock shortly afterwards.

Of course libxl_pipe is very unlikely to fail unless things are
already going very badly.  This bug has not been observed anywhere as
far as we are aware.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/libxl/libxl_save_callout.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_save_callout.c b/tools/libxl/libxl_save_callout.c
index 6e45b2f..e3bda8f 100644
--- a/tools/libxl/libxl_save_callout.c
+++ b/tools/libxl/libxl_save_callout.c
@@ -185,7 +185,11 @@ static void run_helper(libxl__egc *egc, libxl__save_helper_state *shs,
     for (childfd=0; childfd<2; childfd++) {
         /* Setting up the pipe for the child's fd childfd */
         int fds[2];
-        if (libxl_pipe(CTX,fds)) { rc = ERROR_FAIL; goto out; }
+        if (libxl_pipe(CTX,fds)) {
+            rc = ERROR_FAIL;
+            libxl__carefd_unlock();
+            goto out;
+        }
         int childs_end = childfd==0 ? 0 /*read*/  : 1 /*write*/;
         int our_end    = childfd==0 ? 1 /*write*/ : 0 /*read*/;
         childs_pipes[childfd] = libxl__carefd_record(CTX, fds[childs_end]);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:20:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwOK-0002ke-I7; Mon, 24 Feb 2014 14:20:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHwOJ-0002kE-R2
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:20:19 +0000
Received: from [193.109.254.147:29896] by server-12.bemta-14.messagelabs.com
	id FF/D4-17220-3255B035; Mon, 24 Feb 2014 14:20:19 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393251617!2684924!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19247 invoked from network); 24 Feb 2014 14:20:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:20:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105195234"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:20:17 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:20:16 -0500
Message-ID: <530B551F.2060202@citrix.com>
Date: Mon, 24 Feb 2014 14:20:15 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <20140223212703.511977310@linutronix.de>
	<20140223212737.869264085@linutronix.de>
In-Reply-To: <20140223212737.869264085@linutronix.de>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, x86 <x86@kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [patch 15/26] x86: xen: Use the core irq stats
	function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/02/14 21:40, Thomas Gleixner wrote:
> Let the core do the irq_desc resolution.
> 
> No functional change.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:23:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwRZ-00038j-Cy; Mon, 24 Feb 2014 14:23:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHwRX-00038b-Dy
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 14:23:39 +0000
Received: from [85.158.139.211:59509] by server-7.bemta-5.messagelabs.com id
	76/C6-14867-AE55B035; Mon, 24 Feb 2014 14:23:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393251817!5893719!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6604 invoked from network); 24 Feb 2014 14:23:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 14:23:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 14:23:37 +0000
Message-Id: <530B63F6020000780011ED0D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 14:23:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-4-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1392791564-37170-4-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 3/6] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> +static void read_cqm_data(void *arg)
> +{
> +    uint64_t cqm_data;
> +    unsigned int rmid;
> +    int socket = cpu_to_socket(smp_processor_id());
> +    unsigned long i;
> +
> +    ASSERT(system_supports_cqm());
> +
> +    if ( socket < 0 )
> +        return;
> +
> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> +    {
> +        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
> +            continue;
> +
> +        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
> +        rdmsrl(MSR_IA32_QMC, cqm_data);
> +
> +        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
> +        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
> +            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;

So my earlier comment regarding the NR_CPUS use in the allocation
of this buffer becomes even more relevant with the fact that you're
indexing by socket here, not by CPU - in that case, even nr_cpu_ids
is likely to be a gross overestimation.

> +static void select_socket_cpu(cpumask_t *cpu_bitmap)
> +{
> +    int i;
> +    unsigned int cpu;
> +    int socket, socket_curr = cpu_to_socket(smp_processor_id());
> +    DECLARE_BITMAP(sockets, NR_CPUS);

Please avoid putting a 4095-bit bitmap on the stack.

> +
> +    bitmap_zero(sockets, NR_CPUS);
> +    if (socket_curr >= 0)

Coding style.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:23:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:23:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwRZ-00038j-Cy; Mon, 24 Feb 2014 14:23:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHwRX-00038b-Dy
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 14:23:39 +0000
Received: from [85.158.139.211:59509] by server-7.bemta-5.messagelabs.com id
	76/C6-14867-AE55B035; Mon, 24 Feb 2014 14:23:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393251817!5893719!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6604 invoked from network); 24 Feb 2014 14:23:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 14:23:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 14:23:37 +0000
Message-Id: <530B63F6020000780011ED0D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 14:23:34 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-4-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1392791564-37170-4-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 3/6] x86: collect CQM information from
 all sockets
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> +static void read_cqm_data(void *arg)
> +{
> +    uint64_t cqm_data;
> +    unsigned int rmid;
> +    int socket = cpu_to_socket(smp_processor_id());
> +    unsigned long i;
> +
> +    ASSERT(system_supports_cqm());
> +
> +    if ( socket < 0 )
> +        return;
> +
> +    for ( rmid = cqm->min_rmid; rmid <= cqm->max_rmid; rmid++ )
> +    {
> +        if ( cqm->rmid_to_dom[rmid] == DOMID_INVALID )
> +            continue;
> +
> +        wrmsr(MSR_IA32_QOSEVTSEL, QOS_MONITOR_EVTID_L3, rmid);
> +        rdmsrl(MSR_IA32_QMC, cqm_data);
> +
> +        i = (unsigned long)(cqm->max_rmid + 1) * socket + rmid;
> +        if ( !(cqm_data & IA32_QM_CTR_ERROR_MASK) )
> +            cqm->buffer[i] = cqm_data * cqm->upscaling_factor;

So my earlier comment regarding the NR_CPUS use in the allocation
of this buffer becomes even more relevant given that you're
indexing by socket here, not by CPU - in that case, even nr_cpu_ids
is likely to be a gross overestimate.

> +static void select_socket_cpu(cpumask_t *cpu_bitmap)
> +{
> +    int i;
> +    unsigned int cpu;
> +    int socket, socket_curr = cpu_to_socket(smp_processor_id());
> +    DECLARE_BITMAP(sockets, NR_CPUS);

Please avoid putting a 4095-bit bitmap on the stack.

> +
> +    bitmap_zero(sockets, NR_CPUS);
> +    if (socket_curr >= 0)

Coding style.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:24:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwSZ-0003H3-SM; Mon, 24 Feb 2014 14:24:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHwSX-0003Gt-KR
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:24:41 +0000
Received: from [193.109.254.147:27685] by server-2.bemta-14.messagelabs.com id
	C0/28-01236-8265B035; Mon, 24 Feb 2014 14:24:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393251878!6408436!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5750 invoked from network); 24 Feb 2014 14:24:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:24:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105196365"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:24:38 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:24:38 -0500
Message-ID: <530B5624.8080309@citrix.com>
Date: Mon, 24 Feb 2014 14:24:36 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <20140223212703.511977310@linutronix.de>
	<20140223212738.222412125@linutronix.de>
In-Reply-To: <20140223212738.222412125@linutronix.de>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [patch 18/26] xen: Use the proper irq functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/02/14 21:40, Thomas Gleixner wrote:
> I really can't understand why people keep adding irq_desc abuse. We
> have enough proper interfaces. Delete another 14 lines of hackery.

    generic_handle_irq() already tests for !desc, so use this instead
    of generic_handle_irq_desc().  Use irq_get_irq_data() instead of
    desc->irq_data.

Otherwise,

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:26:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:26:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwU3-0003PM-DO; Mon, 24 Feb 2014 14:26:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHwU2-0003PB-CM
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 14:26:14 +0000
Received: from [85.158.139.211:39426] by server-2.bemta-5.messagelabs.com id
	F9/66-23037-5865B035; Mon, 24 Feb 2014 14:26:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393251972!5894497!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24758 invoked from network); 24 Feb 2014 14:26:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 14:26:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 14:26:12 +0000
Message-Id: <530B6490020000780011ED10@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 14:26:08 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Dongxiao Xu" <dongxiao.xu@intel.com>
References: <1392791564-37170-1-git-send-email-dongxiao.xu@intel.com>
	<1392791564-37170-5-git-send-email-dongxiao.xu@intel.com>
In-Reply-To: <1392791564-37170-5-git-send-email-dongxiao.xu@intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: keir@xen.org, Ian.Campbell@citrix.com, andrew.cooper3@citrix.com,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	xen-devel@lists.xen.org, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH v9 4/6] x86: enable CQM monitoring for each
 domain RMID
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 19.02.14 at 07:32, Dongxiao Xu <dongxiao.xu@intel.com> wrote:
> +void cqm_assoc_rmid(unsigned int rmid)
> +{
> +    uint64_t val;
> +    uint64_t new_val;
> +
> +    rdmsrl(MSR_IA32_PQR_ASSOC, val);
> +    new_val = (val & ~rmid_mask) | (rmid & rmid_mask);
> +    if ( val != new_val )
> +        wrmsrl(MSR_IA32_PQR_ASSOC, new_val);
> +}

Considering that even the addition of two RDMSRs in the context
switch path is relatively expensive, I think you will want to track the
most recently written value in a per-CPU variable.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:31:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwYb-0003gR-8a; Mon, 24 Feb 2014 14:30:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwYZ-0003gI-Op
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 14:30:55 +0000
Received: from [85.158.143.35:50682] by server-3.bemta-4.messagelabs.com id
	4D/62-11539-F975B035; Mon, 24 Feb 2014 14:30:55 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393252254!7896358!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28648 invoked from network); 24 Feb 2014 14:30:54 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:30:54 -0000
Received: by mail-wi0-f171.google.com with SMTP id cc10so3132674wib.4
	for <xen-devel@lists.xen.org>; Mon, 24 Feb 2014 06:30:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=Uxh0xGNWKmt0gYect7nGgMyy9nbDYUis+EEipq4ehUA=;
	b=Z9ZHT+HWXvqVA59zUxRXQ80dSLOM+/qc1uGWHEHld5N30ijZ9CFrBc/qp8vAmJGylh
	G5MRb5zHkjQZxINDhYt1SgmZFhwM3D8acWCQJWiBdY8Znf5mg0cXs08Q8Xv74jInkzhf
	DiNZtV+9XVfblms9xdONAq38vQmV0pBdSsaRD05OJVS+9sGSJCZAf0tfsNAaahlDTFOh
	J+gO8jo4iGetdc8BnsAN6QlVGQNMsPbAmFLy5nifRQTuWMOupSLQyyoA4ZzLUkA8qWN3
	DfkPWOn0qIPQvpuGMhWonxFGfMsfzOymN5RvPyAndQTNknoSquIReePzVOCfxQVVE85a
	N3ZQ==
X-Gm-Message-State: ALoCoQkN1Hrk9QFsnkK+ye80pID5lGVYRRpxAklKOqKg5cg4Hdr/zw2V6669APHediRJPWWu6XNd
X-Received: by 10.180.36.8 with SMTP id m8mr14727425wij.42.1393252253862;
	Mon, 24 Feb 2014 06:30:53 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id n3sm24773531wix.10.2014.02.24.06.30.52
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 06:30:53 -0800 (PST)
Message-ID: <530B579B.70707@linaro.org>
Date: Mon, 24 Feb 2014 14:30:51 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Chen Baozi <baozich@gmail.com>, 
	List Developer Xen <xen-devel@lists.xen.org>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
	<53012314.2050906@linaro.org>
In-Reply-To: <53012314.2050906@linaro.org>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Lars Kurth <lars.kurth@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Somehow, my gmail account claimed it sent the mail with Karim CCed, but
it was not in the thread. Adding Karim back.

Thanks Lars for spotting it!

On 02/16/2014 08:44 PM, Julien Grall wrote:
> On 16/02/14 15:51, Chen Baozi wrote:
>> Hi all,
> 
> Hello Chen,
> 
>> It is much later than I used to expect. I guess it might be help
>> to publish my work, though it is still not finished (and might not
>> be finished very soon...).
>>
>> I began to try to port mini-os to ARM64 since last summer. Since
>> the 64-bit guest support is not quite well at that time, this
>> work had been stopped for a long time until two months ago.
>>
>> Though it is still at very early stage, it at least can be built,
>> setup a early page table for booting, parse the DTB passed by the
>> hypervisor, and be debugged by printk at present. So I put it
>> on github in case someone might be interested in it. Here is the
>> url: https://github.com/baozich/minios-arm64
> 
> Good job!
> 
>> Right now, there are some troubles to make GIC work properly,
>> as I didn't consider mapping GIC's interface in address space and
>> follows x86's memory layout which make the kernel virtual address
>> starts at 0x0. I'll fix it as soon as possible.
> 
> I think you should try to sync up with Karim (in CC). He has started to
> port mini-OS on arm32. Except assembly code (which should be fairly
> small) everything can be shared between the both architecture.
> 
> If I remember correctly, Karim already wrote a GIC support but without
> FDT support.
> 
>> Besides, there is still lots of work to be done. So any comments
>> or patches are welcome.
> 
> Regards,
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:32:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:32:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwa5-0003lG-QU; Mon, 24 Feb 2014 14:32:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHwa3-0003l5-ND
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:32:27 +0000
Received: from [85.158.139.211:16783] by server-11.bemta-5.messagelabs.com id
	49/D4-23886-AF75B035; Mon, 24 Feb 2014 14:32:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1393252345!5854667!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32243 invoked from network); 24 Feb 2014 14:32:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 14:32:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 14:32:25 +0000
Message-Id: <530B6606020000780011ED39@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 14:32:22 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@citrix.com>,
	"Julien Grall" <julien.grall@linaro.org>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
	<530673BD.9010301@linaro.org> <530B5275.7010008@citrix.com>
In-Reply-To: <530B5275.7010008@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 15:08, Julien Grall <julien.grall@citrix.com> wrote:
> (Adding Jan for x86 part).
> 
> On 02/20/2014 09:29 PM, Julien Grall wrote:
>> Hi Ian,
>> 
>> On 02/19/2014 11:55 AM, Ian Campbell wrote:
>>> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>>>> On ARM, it may happen (eg ARM SMMU) to setup multiple handler for the same
>>>> interrupt.
>>>
>>> Mention here that you are therefore creating a linked list of actions
>>> for each interrupt.
>>>
>>> If you use xen/list.h for this then you get a load of helpers and
>>> iterators which would save you open coding them.
>> 
>> After some thought, using xen/list.h won't really remove much open-coded
>> logic, except for the "action_ptr" handling in release_dt_irq.
>> 
>> Calling release_dt_irq on an IRQ with multiple actions should be rare.
>> Therefore, having both prev and next pointers is a waste of space.
> Jan, as it's common code, do you have any thoughts?

In fact I'm not convinced this action chaining is correct in the first
place, as mentioned by Ian too (considering the potential sharing
between hypervisor and guest). Furthermore, if this is really just
about IOMMU handlers, why can't the SMMU code register a single
action and disambiguate by the dev_id argument passed to the
handler?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 24 14:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbO-0003rO-BM; Mon, 24 Feb 2014 14:33:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbN-0003rB-AM
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:49 +0000
Received: from [85.158.137.68:53638] by server-2.bemta-3.messagelabs.com id
	94/81-06531-C485B035; Mon, 24 Feb 2014 14:33:48 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393252426!2592457!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14643 invoked from network); 24 Feb 2014 14:33:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544720"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:45 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwb5-0005kB-Tq;
	Mon, 24 Feb 2014 14:33:31 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwb4-0004ZZ-BD; Mon, 24 Feb 2014 14:33:30 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:28:51 +0000
Message-ID: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com, david.vrabel@citrix.com
Subject: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
	front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

---
V2: Improve documentation based on comments about areas which were unclear.

---
 xen/include/public/io/netif.h |   29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index d7fb771..5d98734 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -69,6 +69,35 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" may be used when using multiple queues.
+ * Each queue consists of one shared ring pair, i.e. there must be the same
+ * number of tx and rx rings.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the toplevel
+ * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
+ * instead writing them under sub-keys named "queue-N", where N is the
+ * integer ID of the queue to which those keys belong. Queues are
+ * indexed from zero.
+ *
+ * Mapping of packets to queues is considered to be a function of the
+ * transmitting system (backend or frontend) and is not negotiated
+ * between the two. Guests are free to transmit packets on any queue
+ * they choose, provided it has been set up correctly.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbP-0003ry-SW; Mon, 24 Feb 2014 14:33:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbO-0003rK-EV
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:50 +0000
Received: from [85.158.137.68:53731] by server-3.bemta-3.messagelabs.com id
	D6/A6-14520-D485B035; Mon, 24 Feb 2014 14:33:49 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393252426!2592457!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14821 invoked from network); 24 Feb 2014 14:33:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544728"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:46 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwb1-0005jR-BG;
	Mon, 24 Feb 2014 14:33:27 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwaz-0004ZK-E8; Mon, 24 Feb 2014 14:33:25 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:07 +0000
Message-ID: <1393252387-17496-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..a375a75 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,35 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" may be used when using multiple queues.
+ * Each queue consists of one shared ring pair, i.e. there must be the same
+ * number of tx and rx rings.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the toplevel
+ * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
+ * instead writing them under sub-keys named "queue-N", where N is the
+ * integer ID of the queue to which those keys belong. Queues are
+ * indexed from zero.
+ *
+ * Mapping of packets to queues is considered to be a function of the
+ * transmitting system (backend or frontend) and is not negotiated
+ * between the two. Guests are free to transmit packets on any queue
+ * they choose, provided it has been set up correctly.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbQ-0003sF-A1; Mon, 24 Feb 2014 14:33:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbO-0003rM-OM
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:51 +0000
Received: from [85.158.143.35:24658] by server-1.bemta-4.messagelabs.com id
	6A/56-31661-E485B035; Mon, 24 Feb 2014 14:33:50 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393252427!7892074!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26846 invoked from network); 24 Feb 2014 14:33:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:49 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544731"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:47 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwb0-0005jO-0A;
	Mon, 24 Feb 2014 14:33:26 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHway-0004ZF-Hq; Mon, 24 Feb 2014 14:33:24 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:06 +0000
Message-ID: <1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 140 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 4f5a431..470d6ed 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+MODULE_PARM_DESC(xennet_max_queues,
+		"Maximum number of queues per virtual interface");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +571,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1) {
+		queue_idx = 0;
+	} else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1329,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1678,6 +1696,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * way for a single queue, or in per-queue subkeys for multiple
+	 * queues.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels; taking into account both shared
+	 * and split event channel scenarios.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1687,10 +1787,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1706,12 +1817,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1749,49 +1861,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1841,8 +1939,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2236,6 +2335,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbO-0003rO-BM; Mon, 24 Feb 2014 14:33:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbN-0003rB-AM
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:49 +0000
Received: from [85.158.137.68:53638] by server-2.bemta-3.messagelabs.com id
	94/81-06531-C485B035; Mon, 24 Feb 2014 14:33:48 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393252426!2592457!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14643 invoked from network); 24 Feb 2014 14:33:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544720"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:45 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:45 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwb5-0005kB-Tq;
	Mon, 24 Feb 2014 14:33:31 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwb4-0004ZZ-BD; Mon, 24 Feb 2014 14:33:30 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:28:51 +0000
Message-ID: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com, david.vrabel@citrix.com
Subject: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
	front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

---
V2: Improve documentation based on comments about areas which were unclear.

---
 xen/include/public/io/netif.h |   29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index d7fb771..5d98734 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -69,6 +69,35 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" may be used when using multiple queues.
+ * Each queue consists of one shared ring pair, i.e. there must be the same
+ * number of tx and rx rings.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
+ * instead writing them under sub-keys named "queue-N", where N is the
+ * integer ID of the queue to which those keys belong. Queues are
+ * indexed from zero.
+ *
+ * Mapping of packets to queues is considered to be a function of the
+ * transmitting system (backend or frontend) and is not negotiated
+ * between the two. Guests are free to transmit packets on any queue
+ * they choose, provided it has been set up correctly.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbQ-0003sF-A1; Mon, 24 Feb 2014 14:33:52 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbO-0003rM-OM
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:51 +0000
Received: from [85.158.143.35:24658] by server-1.bemta-4.messagelabs.com id
	6A/56-31661-E485B035; Mon, 24 Feb 2014 14:33:50 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393252427!7892074!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26846 invoked from network); 24 Feb 2014 14:33:49 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:49 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544731"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:47 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwb0-0005jO-0A;
	Mon, 24 Feb 2014 14:33:26 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHway-0004ZF-Hq; Mon, 24 Feb 2014 14:33:24 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:06 +0000
Message-ID: <1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support for
	multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Build on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Check XenStore for multi-queue support, and set up the rings and event
channels accordingly.

Write ring references and event channels to XenStore in a queue
hierarchy if appropriate, or flat when using only one queue.

Update the xennet_select_queue() function to choose the queue on which
to transmit a packet based on the skb hash result.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
 1 file changed, 140 insertions(+), 38 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 4f5a431..470d6ed 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -57,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+/* Module parameters */
+unsigned int xennet_max_queues;
+module_param(xennet_max_queues, uint, 0644);
+MODULE_PARM_DESC(xennet_max_queues,
+		"Maximum number of queues per virtual interface");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -565,10 +571,22 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
-static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv)
 {
-	/* Stub for later implementation of queue selection */
-	return 0;
+	struct netfront_info *info = netdev_priv(dev);
+	u32 hash;
+	u16 queue_idx;
+
+	/* First, check if there is only one queue */
+	if (info->num_queues == 1) {
+		queue_idx = 0;
+	} else {
+		hash = skb_get_hash(skb);
+		queue_idx = (u16) (((u64)hash * info->num_queues) >> 32);
+	}
+
+	return queue_idx;
 }
 
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -1311,7 +1329,7 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), xennet_max_queues);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
@@ -1678,6 +1696,88 @@ static int xennet_init_queue(struct netfront_queue *queue)
 	return err;
 }
 
+static int write_queue_xenstore_keys(struct netfront_queue *queue,
+			   struct xenbus_transaction *xbt, int write_hierarchical)
+{
+	/* Write the queue-specific keys into XenStore in the traditional
+	 * flat layout for a single queue, or under "queue-N" sub-keys when
+	 * multiple queues are configured.
+	 */
+	struct xenbus_device *dev = queue->info->xbdev;
+	int err;
+	const char *message;
+	char *path;
+	size_t pathsize;
+
+	/* Choose the correct place to write the keys */
+	if (write_hierarchical) {
+		pathsize = strlen(dev->nodename) + 10;
+		path = kzalloc(pathsize, GFP_KERNEL);
+		if (!path) {
+			err = -ENOMEM;
+			message = "out of memory while writing ring references";
+			goto error;
+		}
+		snprintf(path, pathsize, "%s/queue-%u",
+				dev->nodename, queue->id);
+	} else {
+		path = (char *)dev->nodename;
+	}
+
+	/* Write ring references */
+	err = xenbus_printf(*xbt, path, "tx-ring-ref", "%u",
+			queue->tx_ring_ref);
+	if (err) {
+		message = "writing tx-ring-ref";
+		goto error;
+	}
+
+	err = xenbus_printf(*xbt, path, "rx-ring-ref", "%u",
+			queue->rx_ring_ref);
+	if (err) {
+		message = "writing rx-ring-ref";
+		goto error;
+	}
+
+	/* Write event channels, handling both the shared and the
+	 * split event channel configurations.
+	 */
+	if (queue->tx_evtchn == queue->rx_evtchn) {
+		/* Shared event channel */
+		err = xenbus_printf(*xbt, path,
+				"event-channel", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel";
+			goto error;
+		}
+	} else {
+		/* Split event channels */
+		err = xenbus_printf(*xbt, path,
+				"event-channel-tx", "%u", queue->tx_evtchn);
+		if (err) {
+			message = "writing event-channel-tx";
+			goto error;
+		}
+
+		err = xenbus_printf(*xbt, path,
+				"event-channel-rx", "%u", queue->rx_evtchn);
+		if (err) {
+			message = "writing event-channel-rx";
+			goto error;
+		}
+	}
+
+	if (write_hierarchical)
+		kfree(path);
+	return 0;
+
+error:
+	if (write_hierarchical)
+		kfree(path);
+	xenbus_dev_fatal(dev, err, "%s", message);
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1687,10 +1787,21 @@ static int talk_to_netback(struct xenbus_device *dev,
 	int err;
 	unsigned int feature_split_evtchn;
 	unsigned int i = 0;
+	unsigned int max_queues = 0;
 	struct netfront_queue *queue = NULL;
 
 	info->netdev->irq = 0;
 
+	/* Check if backend supports multiple queues */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			"multi-queue-max-queues", "%u", &max_queues);
+	if (err < 0) {
+		max_queues = 1;
+	} else if (max_queues > xennet_max_queues) {
+		/* limit to frontend max if backend max is higher */
+		max_queues = xennet_max_queues;
+	}
+
 	/* Check feature-split-event-channels */
 	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
 			   "feature-split-event-channels", "%u",
@@ -1706,12 +1817,13 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 	/* Allocate array of queues */
-	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	info->queues = kcalloc(max_queues, sizeof(struct netfront_queue), GFP_KERNEL);
 	if (!info->queues) {
 		err = -ENOMEM;
 		goto out;
 	}
-	info->num_queues = 1;
+	info->num_queues = max_queues;
+	netif_set_real_num_tx_queues(info->netdev, info->num_queues);
 
 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < info->num_queues; ++i) {
@@ -1749,49 +1861,35 @@ static int talk_to_netback(struct xenbus_device *dev,
 	}
 
 again:
-	queue = &info->queues[0]; /* Use first queue only */
-
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
 		goto destroy_ring;
 	}
 
-	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    queue->tx_ring_ref);
-	if (err) {
-		message = "writing tx ring-ref";
-		goto abort_transaction;
-	}
-	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    queue->rx_ring_ref);
-	if (err) {
-		message = "writing rx ring-ref";
-		goto abort_transaction;
-	}
-
-	if (queue->tx_evtchn == queue->rx_evtchn) {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", queue->tx_evtchn);
-		if (err) {
-			message = "writing event-channel";
-			goto abort_transaction;
-		}
+	if (info->num_queues == 1) {
+		err = write_queue_xenstore_keys(&info->queues[0], &xbt, 0); /* flat */
+		if (err)
+			goto abort_transaction_no_dev_fatal;
 	} else {
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", queue->tx_evtchn);
+		/* Write the number of queues */
+		err = xenbus_printf(xbt, dev->nodename, "multi-queue-num-queues",
+				"%u", info->num_queues);
 		if (err) {
-			message = "writing event-channel-tx";
-			goto abort_transaction;
+			message = "writing multi-queue-num-queues";
+			goto abort_transaction_no_dev_fatal;
 		}
-		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", queue->rx_evtchn);
-		if (err) {
-			message = "writing event-channel-rx";
-			goto abort_transaction;
+
+		/* Write the keys for each queue */
+		for (i = 0; i < info->num_queues; ++i) {
+			queue = &info->queues[i];
+			err = write_queue_xenstore_keys(queue, &xbt, 1); /* hierarchical */
+			if (err)
+				goto abort_transaction_no_dev_fatal;
 		}
 	}
 
+	/* The remaining keys are not queue-specific */
 	err = xenbus_printf(xbt, dev->nodename, "request-rx-copy", "%u",
 			    1);
 	if (err) {
@@ -1841,8 +1939,9 @@ again:
 	return 0;
 
  abort_transaction:
-	xenbus_transaction_end(xbt, 1);
 	xenbus_dev_fatal(dev, err, "%s", message);
+abort_transaction_no_dev_fatal:
+	xenbus_transaction_end(xbt, 1);
  destroy_ring:
 	xennet_disconnect_backend(info);
 	kfree(info->queues);
@@ -2236,6 +2335,9 @@ static int __init netif_init(void)
 
 	pr_info("Initialising Xen virtual ethernet driver\n");
 
+	/* Allow as many queues as there are CPUs, by default */
+	xennet_max_queues = num_online_cpus();
+
 	return xenbus_register_frontend(&netfront_driver);
 }
 module_init(netif_init);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 14:33:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbP-0003ry-SW; Mon, 24 Feb 2014 14:33:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbO-0003rK-EV
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:50 +0000
Received: from [85.158.137.68:53731] by server-3.bemta-3.messagelabs.com id
	D6/A6-14520-D485B035; Mon, 24 Feb 2014 14:33:49 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393252426!2592457!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14821 invoked from network); 24 Feb 2014 14:33:48 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544728"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:46 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwb1-0005jR-BG;
	Mon, 24 Feb 2014 14:33:27 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwaz-0004ZK-E8; Mon, 24 Feb 2014 14:33:25 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:07 +0000
Message-ID: <1393252387-17496-6-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 5/5] xen-net{back,
	front}: Document multi-queue feature in netif.h
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Document the multi-queue feature in terms of XenStore keys to be written
by the backend and by the frontend.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 include/xen/interface/io/netif.h |   29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/include/xen/interface/io/netif.h b/include/xen/interface/io/netif.h
index c50061d..a375a75 100644
--- a/include/xen/interface/io/netif.h
+++ b/include/xen/interface/io/netif.h
@@ -51,6 +51,35 @@
  */
 
 /*
+ * Multiple transmit and receive queues:
+ * If supported, the backend will write "multi-queue-max-queues" and set its
+ * value to the maximum supported number of queues.
+ * Frontends that are aware of this feature and wish to use it can write the
+ * key "multi-queue-num-queues", set to the number they wish to use.
+ *
+ * Queues replicate the shared rings and event channels, and
+ * "feature-split-event-channels" may be used when using multiple queues.
+ * Each queue consists of one shared ring pair, i.e. there must be the same
+ * number of tx and rx rings.
+ *
+ * For frontends requesting just one queue, the usual event-channel and
+ * ring-ref keys are written as before, simplifying the backend processing
+ * to avoid distinguishing between a frontend that doesn't understand the
+ * multi-queue feature, and one that does, but requested only one queue.
+ *
+ * Frontends requesting two or more queues must not write the top-level
+ * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
+ * instead writing them under sub-keys named "queue-N", where N is the
+ * integer ID of the queue to which those keys belong. Queues are
+ * indexed from zero.
+ *
+ * Mapping of packets to queues is considered to be a function of the
+ * transmitting system (backend or frontend) and is not negotiated
+ * between the two. Guests are free to transmit packets on any queue
+ * they choose, provided it has been set up correctly.
+ */
+
+/*
  * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
  * offload off or on. If it is missing then the feature is assumed to be on.
  * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbT-0003uT-NH; Mon, 24 Feb 2014 14:33:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbS-0003sV-Bb
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:54 +0000
Received: from [85.158.137.68:54018] by server-17.bemta-3.messagelabs.com id
	5F/F5-22569-0585B035; Mon, 24 Feb 2014 14:33:52 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393252430!3859913!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18027 invoked from network); 24 Feb 2014 14:33:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544742"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:49 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwaw-0005jC-7V;
	Mon, 24 Feb 2014 14:33:22 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwau-0004Yy-9O; Mon, 24 Feb 2014 14:33:20 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:02 +0000
Message-ID: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
    netfront respectively, and modify the rest of the code to use these
    as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add the
capability to negotiate not only the choice of hash algorithm, but also
the parameters the frontend supplies to it.

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/..., where N ranges
from 0 to one less than the requested number of queues. If only one
queue is requested, the driver falls back to the flat structure, where
the ring references and event channels are written at the same level as
other vif information.

V5:
- Fix bug in xenvif_free() that could lead to an attempt to transmit an
  skb after the queue structures had been freed.
- Improve the XenStore protocol documentation in netif.h.
- Fix IRQ_NAME_SIZE double-accounting for null terminator.
- Move rx_gso_checksum_fixup stat into struct xenvif_stats (per-queue).
- Don't initialise a local variable that is set in both branches (xspath).

V4:
- Add MODULE_PARM_DESC() for the multi-queue parameters for netback
  and netfront modules.
- Move del_timer_sync() in netfront to after unregister_netdev, which
  restores the order in which these functions were called before applying
  these patches.

V3:
- Further indentation and style fixups.

V2:
- Rebase onto net-next.
- Change queue->number to queue->id.
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu.
- Fixup formatting and style issues.
- XenStore protocol changes documented in netif.h.
- Default max. number of queues to num_online_cpus().
- Check requested number of queues does not exceed maximum.

--
Andrew J. Bennieston


From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbU-0003up-6Z; Mon, 24 Feb 2014 14:33:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbS-0003ti-Dy
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:55 +0000
Received: from [85.158.139.211:45967] by server-16.bemta-5.messagelabs.com id
	6B/40-05060-1585B035; Mon, 24 Feb 2014 14:33:53 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393252430!5892084!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16130 invoked from network); 24 Feb 2014 14:33:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105198661"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:48 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHway-0005jL-VL;
	Mon, 24 Feb 2014 14:33:24 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwax-0004ZA-OL; Mon, 24 Feb 2014 14:33:23 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:05 +0000
Message-ID: <1393252387-17496-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

Add loops over queues where appropriate, even though only one queue is
configured at this point, and use alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  945 ++++++++++++++++++++++++++------------------
 1 file changed, 552 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 2b62d79..4f5a431 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1257,66 +1307,27 @@ static const struct net_device_ops xennet_netdev_ops = {
 
 static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 {
-	int i, err;
+	int err;
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->stats == NULL)
 		goto exit;
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1342,10 +1353,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1403,30 +1410,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1467,100 +1479,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1569,13 +1567,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1583,21 +1581,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1608,17 +1606,78 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_page[i] = NULL;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1626,13 +1685,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1640,34 +1758,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1727,6 +1845,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
@@ -1739,6 +1860,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1759,36 +1882,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1797,14 +1924,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1877,7 +2007,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1909,7 +2039,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1920,6 +2053,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1933,16 +2068,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1952,7 +2090,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1963,6 +2104,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1976,16 +2119,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1995,7 +2141,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2042,6 +2191,8 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
@@ -2051,7 +2202,15 @@ static int xennet_remove(struct xenbus_device *dev)
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
 
 	free_percpu(info->stats);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
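
[Archive note] The index-based free chain that xennet_init_queue() builds over tx_skbs[] in the patch above (skb_entry_set_link(&queue->tx_skbs[i], i+1)) can be sketched in isolation. This is a hedged userspace miniature, not the kernel code: RING_SIZE and all names here are illustrative, and the "link" is stored directly in an index array rather than in a union inside the skb slot.

```c
/* Miniature of the tx_skbs free chain: each free slot stores the
 * index of the next free slot; an index >= RING_SIZE means the
 * chain is exhausted. All names here are illustrative. */
#define RING_SIZE 8

static unsigned short next_free[RING_SIZE];
static unsigned short freelist_head;

static void freelist_init(void)
{
	freelist_head = 0;
	for (unsigned short i = 0; i < RING_SIZE; i++)
		next_free[i] = i + 1;	/* slot i links to slot i + 1 */
}

static int freelist_alloc(void)
{
	if (freelist_head >= RING_SIZE)
		return -1;		/* chain exhausted */
	int id = freelist_head;
	freelist_head = next_free[id];
	return id;
}

static void freelist_free(int id)
{
	next_free[id] = freelist_head;
	freelist_head = id;
}
```

Allocation pops the head of the chain and freeing pushes a slot back, so both are O(1) with no separate bookkeeping structure; this is why the patch can move the whole chain into struct netfront_queue with only an array and one index per queue.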

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:04 +0000
Message-ID: <1393252387-17496-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 2/5] xen-netback: Add support for
	multiple queues

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the values written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    8 ++++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 82 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 4176539..e72bf38 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 0297980..3f623b4 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -381,7 +381,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			      xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index a32abd6..9849b63 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,11 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+MODULE_PARM_DESC(xenvif_max_queues,
+		"Maximum number of queues per virtual interface");
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1590,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..c1ae148 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+			"guest requested %u queues, exceeding the maximum of %u.",
+			requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1) {
+		xspath = (char *)dev->otherend;
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:03 +0000
Message-ID: <1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also add loops over queues where appropriate, even though only one is
configured at this point, and use alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in
preparation for multiple active queues.

Finally, implement a trivial queue selection function suitable for
ndo_select_queue(), which simply returns 0 for a single queue and uses
skb_get_hash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   85 ++++--
 drivers/net/xen-netback/interface.c |  329 ++++++++++++++--------
 drivers/net/xen-netback/netback.c   |  530 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 608 insertions(+), 423 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..4176539 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,39 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+
+	/* Additional stats used by xenvif */
+	unsigned long rx_gso_checksum_fixup;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +162,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +203,9 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
-
-	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..0297980 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv, select_queue_fallback_t fallback)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue to optimise the
+	 * single-queue or old frontend scenario.
+	 */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -232,7 +307,7 @@ static const struct xenvif_stat {
 } xenvif_stats[] = {
 	{
 		"rx_gso_checksum_fixup",
-		offsetof(struct xenvif, rx_gso_checksum_fixup)
+		offsetof(struct xenvif_stats, rx_gso_checksum_fixup)
 	},
 };
 
@@ -249,11 +324,19 @@ static int xenvif_get_sset_count(struct net_device *dev, int string_set)
 static void xenvif_get_ethtool_stats(struct net_device *dev,
 				     struct ethtool_stats *stats, u64 * data)
 {
-	void *vif = netdev_priv(dev);
+	struct xenvif *vif = netdev_priv(dev);
 	int i;
-
-	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+	unsigned int queue_index;
+	struct xenvif_stats *vif_stats;
+
+	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {
+		unsigned long accum = 0;
+		for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+			vif_stats = &vif->queues[queue_index].stats;
+			accum += *(unsigned long *)((char *)vif_stats + xenvif_stats[i].offset);
+		}
+		data[i] = accum;
+	}
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +369,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +379,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +391,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +410,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +419,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +435,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
+	queue->task = task;
 
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
-
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +558,53 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
+
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
 	unregister_netdev(vif->dev);
 
-	vfree(vif->grant_copy_op);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
+
+	/* Free the array of queues */
+	vif->num_queues = 0;
+	vfree(vif->queues);
+	vif->queues = NULL;
+
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..a32abd6 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a miniumum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
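
[Not part of the patch — a sketch of the arithmetic behind `xenvif_rx_ring_slots_available`. The RING_IDX counters are free-running unsigned values, so the subtraction `prod - cons` is wrap-safe: it yields the number of unconsumed request slots even after `req_prod` has numerically wrapped past `req_cons`. The helper name below is hypothetical; the real function also arms `req_event` and re-checks under a memory barrier, which this model omits.]

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint32_t RING_IDX;

/* Wrap-safe count of unconsumed request slots: unsigned subtraction
 * gives the correct distance modulo 2^32 between producer and
 * consumer indices. */
static bool rx_slots_available(RING_IDX prod, RING_IDX cons, int needed)
{
	return prod - cons >= (RING_IDX)needed;
}
```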
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
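
[Not part of the patch — an illustrative standalone reimplementation of the replenish logic in `tx_add_credit` above. The new credit is the size of the next pending request, clamped above by 128kB and below by the configured credit_bytes, and adding it to the remaining credit saturates at ULONG_MAX instead of wrapping to a tiny value. The function name `replenish` is hypothetical.]

```c
#include <limits.h>

/* Compute the new remaining_credit given the current remaining credit,
 * the per-window credit_bytes, and the size of the next tx request. */
static unsigned long replenish(unsigned long remaining,
			       unsigned long credit_bytes,
			       unsigned long next_req_size)
{
	unsigned long max_burst, max_credit;

	/* Allow a burst big enough for a jumbo packet of up to 128kB. */
	max_burst = next_req_size;
	if (max_burst > 131072UL)
		max_burst = 131072UL;
	if (max_burst < credit_bytes)
		max_burst = credit_bytes;

	/* Take care that adding a new chunk of credit doesn't wrap. */
	max_credit = remaining + credit_bytes;
	if (max_credit < remaining)	/* wrapped: clamp to ULONG_MAX */
		max_credit = ULONG_MAX;

	return max_credit < max_burst ? max_credit : max_burst;
}
```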
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1048,7 +1049,7 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
 	return 0;
 }
 
-static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
+static int checksum_setup(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	bool recalculate_partial_csum = false;
 
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		queue->stats.rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
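
[Not part of the patch — a simplified model of the scheduling decision in `tx_credit_exceeded` above, using plain millisecond timestamps instead of jiffies. Credit is replenished once per window; a request larger than the remaining credit is deferred until the next window boundary (the real driver arms `credit_timeout` for that). The struct and helper names are hypothetical, and the caller is expected to subtract the request size on success, as `xenvif_tx_build_gops` does.]

```c
#include <stdint.h>
#include <stdbool.h>

struct credit_state {
	uint64_t window_start;		/* credit_window_start */
	uint64_t window_len;		/* credit_usec / 1000 in the driver */
	unsigned long remaining;	/* remaining_credit */
	unsigned long credit_bytes;	/* credit added per window */
};

static bool credit_exceeded(struct credit_state *cs, uint64_t now,
			    unsigned size)
{
	/* Passed the point where we can replenish credit? */
	if (now >= cs->window_start + cs->window_len) {
		cs->window_start = now;
		cs->remaining += cs->credit_bytes; /* wrap guard elided */
	}

	/* Still too big to send right now? Defer to the next window. */
	if (size > cs->remaining) {
		cs->window_start += cs->window_len;
		return true;
	}

	return false;
}
```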
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:04 +0000
Message-ID: <1393252387-17496-3-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 2/5] xen-netback: Add support for
	multiple queues

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

Builds on the refactoring of the previous patch to implement multiple
queues between xen-netfront and xen-netback.

Writes the maximum supported number of queues into XenStore, and reads
the values written by the frontend to determine how many queues to use.

Ring references and event channels are read from XenStore on a per-queue
basis and rings are connected accordingly.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |    2 +
 drivers/net/xen-netback/interface.c |    7 +++-
 drivers/net/xen-netback/netback.c   |    8 ++++
 drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
 4 files changed, 82 insertions(+), 11 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index 4176539..e72bf38 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
+extern unsigned int xenvif_max_queues;
+
 #endif /* __XEN_NETBACK__COMMON_H__ */
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 0297980..3f623b4 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -381,7 +381,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	char name[IFNAMSIZ] = {};
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
+	/* Allocate a netdev with the max. supported number of queues.
+	 * When the guest selects the desired number, it will be updated
+	 * via netif_set_real_num_tx_queues().
+	 */
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
+			      xenvif_max_queues);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index a32abd6..9849b63 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -54,6 +54,11 @@
 bool separate_tx_rx_irq = 1;
 module_param(separate_tx_rx_irq, bool, 0644);
 
+unsigned int xenvif_max_queues;
+module_param(xenvif_max_queues, uint, 0644);
+MODULE_PARM_DESC(xenvif_max_queues,
+		"Maximum number of queues per virtual interface");
+
 /*
  * This is the maximum slots a skb can have. If a guest sends a skb
  * which exceeds this limit it is considered malicious.
@@ -1585,6 +1590,9 @@ static int __init netback_init(void)
 	if (!xen_domain())
 		return -ENODEV;
 
+	/* Allow as many queues as there are CPUs, by default */
+	xenvif_max_queues = num_online_cpus();
+
 	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
 		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
 			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index f23ea0a..c1ae148 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -20,6 +20,7 @@
 
 #include "common.h"
 #include <linux/vmalloc.h>
+#include <linux/rtnetlink.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
 	if (err)
 		pr_debug("Error writing feature-split-event-channels\n");
 
+	/* Multi-queue support: This is an optional feature. */
+	err = xenbus_printf(XBT_NIL, dev->nodename,
+			"multi-queue-max-queues", "%u", xenvif_max_queues);
+	if (err)
+		pr_debug("Error writing multi-queue-max-queues\n");
+
 	err = xenbus_switch_state(dev, XenbusStateInitWait);
 	if (err)
 		goto fail;
@@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
 	unsigned long credit_bytes, credit_usec;
 	unsigned int queue_index;
 	struct xenvif_queue *queue;
+	unsigned int requested_num_queues;
+
+	/* Check whether the frontend requested multiple queues
+	 * and read the number requested.
+	 */
+	err = xenbus_scanf(XBT_NIL, dev->otherend,
+			"multi-queue-num-queues",
+			"%u", &requested_num_queues);
+	if (err < 0) {
+		requested_num_queues = 1; /* Fall back to single queue */
+	} else if (requested_num_queues > xenvif_max_queues) {
+		/* buggy or malicious guest */
+		xenbus_dev_fatal(dev, err,
+			"guest requested %u queues, exceeding the maximum of %u.",
+			requested_num_queues, xenvif_max_queues);
+		return;
+	}
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
 	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
 	read_xenbus_vif_flags(be);
 
-	be->vif->num_queues = 1;
+	/* Use the number of queues requested by the frontend */
+	be->vif->num_queues = requested_num_queues;
 	be->vif->queues = vzalloc(be->vif->num_queues *
 			sizeof(struct xenvif_queue));
+	rtnl_lock();
+	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
+	rtnl_unlock();
 
 	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
 		queue = &be->vif->queues[queue_index];
@@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 	unsigned long tx_ring_ref, rx_ring_ref;
 	unsigned int tx_evtchn, rx_evtchn;
 	int err;
+	char *xspath;
+	size_t xspathsize;
+	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
+
+	/* If the frontend requested 1 queue, or we have fallen back
+	 * to single queue due to lack of frontend support for multi-
+	 * queue, expect the remaining XenStore keys in the toplevel
+	 * directory. Otherwise, expect them in a subdirectory called
+	 * queue-N.
+	 */
+	if (queue->vif->num_queues == 1) {
+		xspath = (char *)dev->otherend;
+	} else {
+		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
+		xspath = kzalloc(xspathsize, GFP_KERNEL);
+		if (!xspath) {
+			xenbus_dev_fatal(dev, -ENOMEM,
+					"reading ring references");
+			return -ENOMEM;
+		}
+		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
+				 queue->id);
+	}
 
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
 			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
 	if (err) {
 		xenbus_dev_fatal(dev, err,
 				 "reading %s/ring-ref",
-				 dev->otherend);
-		return err;
+				 xspath);
+		goto err;
 	}
 
 	/* Try split event channels first, then single event channel. */
-	err = xenbus_gather(XBT_NIL, dev->otherend,
+	err = xenbus_gather(XBT_NIL, xspath,
 			    "event-channel-tx", "%u", &tx_evtchn,
 			    "event-channel-rx", "%u", &rx_evtchn, NULL);
 	if (err < 0) {
-		err = xenbus_scanf(XBT_NIL, dev->otherend,
+		err = xenbus_scanf(XBT_NIL, xspath,
 				   "event-channel", "%u", &tx_evtchn);
 		if (err < 0) {
 			xenbus_dev_fatal(dev, err,
 					 "reading %s/event-channel(-tx/rx)",
-					 dev->otherend);
-			return err;
+					 xspath);
+			goto err;
 		}
 		rx_evtchn = tx_evtchn;
 	}
@@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 				 "mapping shared-frames %lu/%lu port tx %u rx %u",
 				 tx_ring_ref, rx_ring_ref,
 				 tx_evtchn, rx_evtchn);
-		return err;
+		goto err;
 	}
 
-	return 0;
+	err = 0;
+err: /* Regular return falls through with err == 0 */
+	if (xspath != dev->otherend)
+		kfree(xspath);
+
+	return err;
 }
 
 static int read_xenbus_vif_flags(struct backend_info *be)
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:03 +0000
Message-ID: <1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 1/5] xen-netback: Factor
	queue-specific data into queue struct.

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netback, move the
queue-specific data from struct xenvif into struct xenvif_queue, and
update the rest of the code to use this.

Also adds loops over queues where appropriate, even though only one is
configured at this point, and uses alloc_netdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0 for a single queue and uses
skb_get_hash() to compute the queue index otherwise.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netback/common.h    |   85 ++++--
 drivers/net/xen-netback/interface.c |  329 ++++++++++++++--------
 drivers/net/xen-netback/netback.c   |  530 ++++++++++++++++++-----------------
 drivers/net/xen-netback/xenbus.c    |   87 ++++--
 4 files changed, 608 insertions(+), 423 deletions(-)

diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
index ae413a2..4176539 100644
--- a/drivers/net/xen-netback/common.h
+++ b/drivers/net/xen-netback/common.h
@@ -108,17 +108,39 @@ struct xenvif_rx_meta {
  */
 #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
 
-struct xenvif {
-	/* Unique identifier for this interface. */
-	domid_t          domid;
-	unsigned int     handle;
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
+struct xenvif;
+
+struct xenvif_stats {
+	/* Stats fields to be updated per-queue.
+	 * A subset of struct net_device_stats that contains only the
+	 * fields that are updated in netback.c for each queue.
+	 */
+	unsigned int rx_bytes;
+	unsigned int rx_packets;
+	unsigned int tx_bytes;
+	unsigned int tx_packets;
+
+	/* Additional stats used by xenvif */
+	unsigned long rx_gso_checksum_fixup;
+};
+
+struct xenvif_queue { /* Per-queue data for xenvif */
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct xenvif *vif; /* Parent VIF */
 
 	/* Use NAPI for guest TX */
 	struct napi_struct napi;
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int tx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
 	struct xen_netif_tx_back_ring tx;
 	struct sk_buff_head tx_queue;
 	struct page *mmap_pages[MAX_PENDING_REQS];
@@ -140,19 +162,34 @@ struct xenvif {
 	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
 	unsigned int rx_irq;
 	/* Only used when feature-split-event-channels = 1 */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 	struct xen_netif_rx_back_ring rx;
 	struct sk_buff_head rx_queue;
 	RING_IDX rx_last_skb_slots;
 
-	/* This array is allocated seperately as it is large */
-	struct gnttab_copy *grant_copy_op;
+	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
 
 	/* We create one meta structure per ring request we consume, so
 	 * the maximum number is the same as the ring size.
 	 */
 	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
 
+	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
+	unsigned long   credit_bytes;
+	unsigned long   credit_usec;
+	unsigned long   remaining_credit;
+	struct timer_list credit_timeout;
+	u64 credit_window_start;
+
+	/* Statistics */
+	struct xenvif_stats stats;
+};
+
+struct xenvif {
+	/* Unique identifier for this interface. */
+	domid_t          domid;
+	unsigned int     handle;
+
 	u8               fe_dev_addr[6];
 
 	/* Frontend feature information. */
@@ -166,15 +203,9 @@ struct xenvif {
 	/* Internal feature information. */
 	u8 can_queue:1;	    /* can queue packets for receiver? */
 
-	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
-	unsigned long   credit_bytes;
-	unsigned long   credit_usec;
-	unsigned long   remaining_credit;
-	struct timer_list credit_timeout;
-	u64 credit_window_start;
-
-	/* Statistics */
-	unsigned long rx_gso_checksum_fixup;
+	/* Queues */
+	unsigned int num_queues;
+	struct xenvif_queue *queues;
 
 	/* Miscellaneous private stuff. */
 	struct net_device *dev;
@@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
 			    domid_t domid,
 			    unsigned int handle);
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue);
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn);
 void xenvif_disconnect(struct xenvif *vif);
@@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
 
 int xenvif_schedulable(struct xenvif *vif);
 
-int xenvif_must_stop_queue(struct xenvif *vif);
+int xenvif_must_stop_queue(struct xenvif_queue *queue);
 
 /* (Un)Map communication rings. */
-void xenvif_unmap_frontend_rings(struct xenvif *vif);
-int xenvif_map_frontend_rings(struct xenvif *vif,
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref);
 
 /* Check for SKBs from frontend and schedule backend processing */
-void xenvif_check_rx_xenvif(struct xenvif *vif);
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
 
 /* Prevent the device from generating any further traffic. */
 void xenvif_carrier_off(struct xenvif *vif);
 
-int xenvif_tx_action(struct xenvif *vif, int budget);
+int xenvif_tx_action(struct xenvif_queue *queue, int budget);
 
 int xenvif_kthread(void *data);
-void xenvif_kick_thread(struct xenvif *vif);
+void xenvif_kick_thread(struct xenvif_queue *queue);
 
 /* Determine whether the needed number of slots (req) are available,
  * and set req_event if not.
  */
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
 
-void xenvif_stop_queue(struct xenvif *vif);
+void xenvif_carrier_on(struct xenvif *vif);
 
 extern bool separate_tx_rx_irq;
 
diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
index 7669d49..0297980 100644
--- a/drivers/net/xen-netback/interface.c
+++ b/drivers/net/xen-netback/interface.c
@@ -34,7 +34,6 @@
 #include <linux/ethtool.h>
 #include <linux/rtnetlink.h>
 #include <linux/if_vlan.h>
-#include <linux/vmalloc.h>
 
 #include <xen/events.h>
 #include <asm/xen/hypercall.h>
@@ -42,6 +41,16 @@
 #define XENVIF_QUEUE_LENGTH 32
 #define XENVIF_NAPI_WEIGHT  64
 
+static inline void xenvif_stop_queue(struct xenvif_queue *queue)
+{
+	struct net_device *dev = queue->vif->dev;
+
+	if (!queue->vif->can_queue)
+		return;
+
+	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+}
+
 int xenvif_schedulable(struct xenvif *vif)
 {
 	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
@@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
 
 static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
-		napi_schedule(&vif->napi);
+	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
+		napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
 
-static int xenvif_poll(struct napi_struct *napi, int budget)
+int xenvif_poll(struct napi_struct *napi, int budget)
 {
-	struct xenvif *vif = container_of(napi, struct xenvif, napi);
+	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
 	int work_done;
 
-	work_done = xenvif_tx_action(vif, budget);
+	work_done = xenvif_tx_action(queue, budget);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
@@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
 
 static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
 {
-	struct xenvif *vif = dev_id;
+	struct xenvif_queue *queue = dev_id;
 
-	xenvif_kick_thread(vif);
+	xenvif_kick_thread(queue);
 
 	return IRQ_HANDLED;
 }
@@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
+static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
+			       void *accel_priv, select_queue_fallback_t fallback)
+{
+	struct xenvif *vif = netdev_priv(dev);
+	u32 hash;
+	u16 queue_index;
+
+	/* First, check if there is only one queue, so the common
+	 * single-queue case (and old single-queue frontends) skips
+	 * the hash computation entirely. */
+	if (vif->num_queues == 1) {
+		queue_index = 0;
+	} else {
+		/* Use skb_get_hash to obtain an L4 hash if available */
+		hash = skb_get_hash(skb);
+		queue_index = (u16)(((u64)hash * vif->num_queues) >> 32);
+	}
+
+	return queue_index;
+}
+
 static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	u16 index;
 	int min_slots_needed;
 
 	BUG_ON(skb->dev != dev);
 
+	/* Drop the packet if queues are not set up */
+	if (vif->num_queues < 1)
+		goto drop;
+
+	/* Obtain the queue to be used to transmit this packet */
+	index = skb_get_queue_mapping(skb);
+	if (index >= vif->num_queues)
+		index = 0; /* Fall back to queue 0 if out of range */
+	queue = &vif->queues[index];
+
 	/* Drop the packet if vif is not ready */
-	if (vif->task == NULL || !xenvif_schedulable(vif))
+	if (queue->task == NULL || !xenvif_schedulable(vif))
 		goto drop;
 
 	/* At best we'll need one slot for the header and one for each
@@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * then turn off the queue to give the ring a chance to
 	 * drain.
 	 */
-	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
-		xenvif_stop_queue(vif);
+	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
+		xenvif_stop_queue(queue);
 
-	skb_queue_tail(&vif->rx_queue, skb);
-	xenvif_kick_thread(vif);
+	skb_queue_tail(&queue->rx_queue, skb);
+	xenvif_kick_thread(queue);
 
 	return NETDEV_TX_OK;
 
@@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
 {
 	struct xenvif *vif = netdev_priv(dev);
+	struct xenvif_queue *queue = NULL;
+	unsigned long rx_bytes = 0;
+	unsigned long rx_packets = 0;
+	unsigned long tx_bytes = 0;
+	unsigned long tx_packets = 0;
+	unsigned int index;
+
+	/* Aggregate tx and rx stats from each queue */
+	for (index = 0; index < vif->num_queues; ++index) {
+		queue = &vif->queues[index];
+		rx_bytes += queue->stats.rx_bytes;
+		rx_packets += queue->stats.rx_packets;
+		tx_bytes += queue->stats.tx_bytes;
+		tx_packets += queue->stats.tx_packets;
+	}
+
+	vif->dev->stats.rx_bytes = rx_bytes;
+	vif->dev->stats.rx_packets = rx_packets;
+	vif->dev->stats.tx_bytes = tx_bytes;
+	vif->dev->stats.tx_packets = tx_packets;
+
 	return &vif->dev->stats;
 }
 
 static void xenvif_up(struct xenvif *vif)
 {
-	napi_enable(&vif->napi);
-	enable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		enable_irq(vif->rx_irq);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_enable(&queue->napi);
+		enable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			enable_irq(queue->rx_irq);
+		xenvif_check_rx_xenvif(queue);
+	}
 }
 
 static void xenvif_down(struct xenvif *vif)
 {
-	napi_disable(&vif->napi);
-	disable_irq(vif->tx_irq);
-	if (vif->tx_irq != vif->rx_irq)
-		disable_irq(vif->rx_irq);
-	del_timer_sync(&vif->credit_timeout);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		napi_disable(&queue->napi);
+		disable_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			disable_irq(queue->rx_irq);
+		del_timer_sync(&queue->credit_timeout);
+	}
 }
 
 static int xenvif_open(struct net_device *dev)
@@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_up(vif);
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 	return 0;
 }
 
@@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
 	struct xenvif *vif = netdev_priv(dev);
 	if (netif_carrier_ok(dev))
 		xenvif_down(vif);
-	netif_stop_queue(dev);
+	netif_tx_stop_all_queues(dev);
 	return 0;
 }
 
@@ -232,7 +307,7 @@ static const struct xenvif_stat {
 } xenvif_stats[] = {
 	{
 		"rx_gso_checksum_fixup",
-		offsetof(struct xenvif, rx_gso_checksum_fixup)
+		offsetof(struct xenvif_stats, rx_gso_checksum_fixup)
 	},
 };
 
@@ -249,11 +324,19 @@ static int xenvif_get_sset_count(struct net_device *dev, int string_set)
 static void xenvif_get_ethtool_stats(struct net_device *dev,
 				     struct ethtool_stats *stats, u64 * data)
 {
-	void *vif = netdev_priv(dev);
+	struct xenvif *vif = netdev_priv(dev);
 	int i;
-
-	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
-		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
+	unsigned int queue_index;
+	struct xenvif_stats *vif_stats;
+
+	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {
+		unsigned long accum = 0;
+		for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+			vif_stats = &vif->queues[queue_index].stats;
+			accum += *(unsigned long *)((char *)vif_stats + xenvif_stats[i].offset);
+		}
+		data[i] = accum;
+	}
 }
 
 static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -286,6 +369,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
 	.ndo_fix_features = xenvif_fix_features,
 	.ndo_set_mac_address = eth_mac_addr,
 	.ndo_validate_addr   = eth_validate_addr,
+	.ndo_select_queue = xenvif_select_queue,
 };
 
 struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
@@ -295,10 +379,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	struct net_device *dev;
 	struct xenvif *vif;
 	char name[IFNAMSIZ] = {};
-	int i;
 
 	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
-	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
+	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
 	if (dev == NULL) {
 		pr_warn("Could not allocate netdev for %s\n", name);
 		return ERR_PTR(-ENOMEM);
@@ -308,24 +391,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	vif = netdev_priv(dev);
 
-	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
-				     MAX_GRANT_COPY_OPS);
-	if (vif->grant_copy_op == NULL) {
-		pr_warn("Could not allocate grant copy space for %s\n", name);
-		free_netdev(dev);
-		return ERR_PTR(-ENOMEM);
-	}
-
 	vif->domid  = domid;
 	vif->handle = handle;
 	vif->can_sg = 1;
 	vif->ip_csum = 1;
 	vif->dev = dev;
 
-	vif->credit_bytes = vif->remaining_credit = ~0UL;
-	vif->credit_usec  = 0UL;
-	init_timer(&vif->credit_timeout);
-	vif->credit_window_start = get_jiffies_64();
+	/* Start out with no queues */
+	vif->num_queues = 0;
+	vif->queues = NULL;
 
 	dev->netdev_ops	= &xenvif_netdev_ops;
 	dev->hw_features = NETIF_F_SG |
@@ -336,16 +410,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 
 	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
 
-	skb_queue_head_init(&vif->rx_queue);
-	skb_queue_head_init(&vif->tx_queue);
-
-	vif->pending_cons = 0;
-	vif->pending_prod = MAX_PENDING_REQS;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->pending_ring[i] = i;
-	for (i = 0; i < MAX_PENDING_REQS; i++)
-		vif->mmap_pages[i] = NULL;
-
 	/*
 	 * Initialise a dummy MAC address. We choose the numerically
 	 * largest non-broadcast address to prevent the address getting
@@ -355,8 +419,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	memset(dev->dev_addr, 0xFF, ETH_ALEN);
 	dev->dev_addr[0] &= ~0x01;
 
-	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
-
 	netif_carrier_off(dev);
 
 	err = register_netdev(dev);
@@ -373,85 +435,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
 	return vif;
 }
 
-int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
+void xenvif_init_queue(struct xenvif_queue *queue)
+{
+	int i;
+
+	queue->credit_bytes = queue->remaining_credit = ~0UL;
+	queue->credit_usec  = 0UL;
+	init_timer(&queue->credit_timeout);
+	queue->credit_window_start = get_jiffies_64();
+
+	skb_queue_head_init(&queue->rx_queue);
+	skb_queue_head_init(&queue->tx_queue);
+
+	queue->pending_cons = 0;
+	queue->pending_prod = MAX_PENDING_REQS;
+	for (i = 0; i < MAX_PENDING_REQS; ++i) {
+		queue->pending_ring[i] = i;
+		queue->mmap_pages[i] = NULL;
+	}
+
+	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
+			XENVIF_NAPI_WEIGHT);
+}
+
+void xenvif_carrier_on(struct xenvif *vif)
+{
+	rtnl_lock();
+	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
+		dev_set_mtu(vif->dev, ETH_DATA_LEN);
+	netdev_update_features(vif->dev);
+	netif_carrier_on(vif->dev);
+	if (netif_running(vif->dev))
+		xenvif_up(vif);
+	rtnl_unlock();
+}
+
+int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
 		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
 		   unsigned int rx_evtchn)
 {
 	struct task_struct *task;
 	int err = -ENOMEM;
 
-	BUG_ON(vif->tx_irq);
-	BUG_ON(vif->task);
+	BUG_ON(queue->tx_irq);
+	BUG_ON(queue->task);
 
-	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
+	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
 	if (err < 0)
 		goto err;
 
-	init_waitqueue_head(&vif->wq);
+	init_waitqueue_head(&queue->wq);
 
 	if (tx_evtchn == rx_evtchn) {
 		/* feature-split-event-channels == 0 */
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_interrupt, 0,
-			vif->dev->name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
+			queue->name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = vif->rx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = queue->rx_irq = err;
+		disable_irq(queue->tx_irq);
 	} else {
 		/* feature-split-event-channels == 1 */
-		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
-			 "%s-tx", vif->dev->name);
+		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+			 "%s-tx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
-			vif->tx_irq_name, vif);
+			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
+			queue->tx_irq_name, queue);
 		if (err < 0)
 			goto err_unmap;
-		vif->tx_irq = err;
-		disable_irq(vif->tx_irq);
+		queue->tx_irq = err;
+		disable_irq(queue->tx_irq);
 
-		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
-			 "%s-rx", vif->dev->name);
+		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+			 "%s-rx", queue->name);
 		err = bind_interdomain_evtchn_to_irqhandler(
-			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
-			vif->rx_irq_name, vif);
+			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
+			queue->rx_irq_name, queue);
 		if (err < 0)
 			goto err_tx_unbind;
-		vif->rx_irq = err;
-		disable_irq(vif->rx_irq);
+		queue->rx_irq = err;
+		disable_irq(queue->rx_irq);
 	}
 
 	task = kthread_create(xenvif_kthread,
-			      (void *)vif, "%s", vif->dev->name);
+			      (void *)queue, "%s", queue->name);
 	if (IS_ERR(task)) {
-		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
+		pr_warn("Could not allocate kthread for %s\n", queue->name);
 		err = PTR_ERR(task);
 		goto err_rx_unbind;
 	}
 
-	vif->task = task;
+	queue->task = task;
 
-	rtnl_lock();
-	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
-		dev_set_mtu(vif->dev, ETH_DATA_LEN);
-	netdev_update_features(vif->dev);
-	netif_carrier_on(vif->dev);
-	if (netif_running(vif->dev))
-		xenvif_up(vif);
-	rtnl_unlock();
-
-	wake_up_process(vif->task);
+	wake_up_process(queue->task);
 
 	return 0;
 
 err_rx_unbind:
-	unbind_from_irqhandler(vif->rx_irq, vif);
-	vif->rx_irq = 0;
+	unbind_from_irqhandler(queue->rx_irq, queue);
+	queue->rx_irq = 0;
 err_tx_unbind:
-	unbind_from_irqhandler(vif->tx_irq, vif);
-	vif->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 err_unmap:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 err:
 	module_put(THIS_MODULE);
 	return err;
@@ -470,34 +558,53 @@ void xenvif_carrier_off(struct xenvif *vif)
 
 void xenvif_disconnect(struct xenvif *vif)
 {
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
+
 	if (netif_carrier_ok(vif->dev))
 		xenvif_carrier_off(vif);
 
-	if (vif->task) {
-		kthread_stop(vif->task);
-		vif->task = NULL;
-	}
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
 
-	if (vif->tx_irq) {
-		if (vif->tx_irq == vif->rx_irq)
-			unbind_from_irqhandler(vif->tx_irq, vif);
-		else {
-			unbind_from_irqhandler(vif->tx_irq, vif);
-			unbind_from_irqhandler(vif->rx_irq, vif);
+		if (queue->task) {
+			kthread_stop(queue->task);
+			queue->task = NULL;
 		}
-		vif->tx_irq = 0;
+
+		if (queue->tx_irq) {
+			if (queue->tx_irq == queue->rx_irq)
+				unbind_from_irqhandler(queue->tx_irq, queue);
+			else {
+				unbind_from_irqhandler(queue->tx_irq, queue);
+				unbind_from_irqhandler(queue->rx_irq, queue);
+			}
+			queue->tx_irq = 0;
+		}
+
+		xenvif_unmap_frontend_rings(queue);
 	}
 
-	xenvif_unmap_frontend_rings(vif);
+
 }
 
 void xenvif_free(struct xenvif *vif)
 {
-	netif_napi_del(&vif->napi);
+	struct xenvif_queue *queue = NULL;
+	unsigned int queue_index;
 
 	unregister_netdev(vif->dev);
 
-	vfree(vif->grant_copy_op);
+	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
+		queue = &vif->queues[queue_index];
+		netif_napi_del(&queue->napi);
+	}
+
+	/* Free the array of queues */
+	vif->num_queues = 0;
+	vfree(vif->queues);
+	vif->queues = NULL;
+
 	free_netdev(vif->dev);
 
 	module_put(THIS_MODULE);
diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..a32abd6 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
  * one or more merged tx requests, otherwise it is the continuation of
  * previous tx request.
  */
-static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
+static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
 {
-	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
+	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status);
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st);
 
-static inline int tx_work_todo(struct xenvif *vif);
-static inline int rx_work_todo(struct xenvif *vif);
+static inline int tx_work_todo(struct xenvif_queue *queue);
+static inline int rx_work_todo(struct xenvif_queue *queue);
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags);
 
-static inline unsigned long idx_to_pfn(struct xenvif *vif,
+static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
 				       u16 idx)
 {
-	return page_to_pfn(vif->mmap_pages[idx]);
+	return page_to_pfn(queue->mmap_pages[idx]);
 }
 
-static inline unsigned long idx_to_kaddr(struct xenvif *vif,
+static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
 					 u16 idx)
 {
-	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
+	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
 }
 
 /* This is a miniumum size for the linear area to avoid lots of
@@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
 	return i & (MAX_PENDING_REQS-1);
 }
 
-static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
+static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
 {
 	return MAX_PENDING_REQS -
-		vif->pending_prod + vif->pending_cons;
+		queue->pending_prod + queue->pending_cons;
 }
 
-bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
+bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
 {
 	RING_IDX prod, cons;
 
 	do {
-		prod = vif->rx.sring->req_prod;
-		cons = vif->rx.req_cons;
+		prod = queue->rx.sring->req_prod;
+		cons = queue->rx.req_cons;
 
 		if (prod - cons >= needed)
 			return true;
 
-		vif->rx.sring->req_event = prod + 1;
+		queue->rx.sring->req_event = prod + 1;
 
 		/* Make sure event is visible before we check prod
 		 * again.
 		 */
 		mb();
-	} while (vif->rx.sring->req_prod != prod);
+	} while (queue->rx.sring->req_prod != prod);
 
 	return false;
 }
@@ -208,13 +208,13 @@ struct netrx_pending_operations {
 	grant_ref_t copy_gref;
 };
 
-static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
+static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
 						 struct netrx_pending_operations *npo)
 {
 	struct xenvif_rx_meta *meta;
 	struct xen_netif_rx_request *req;
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 
 	meta = npo->meta + npo->meta_prod++;
 	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
@@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
  * Set up the grant operations for this fragment. If it's a flipping
  * interface, we also set up the unmap request from here.
  */
-static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
+static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
 				 struct netrx_pending_operations *npo,
 				 struct page *page, unsigned long size,
 				 unsigned long offset, int *head)
@@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 			 */
 			BUG_ON(*head);
 
-			meta = get_next_rx_buffer(vif, npo);
+			meta = get_next_rx_buffer(queue, npo);
 		}
 
 		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
@@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
 		copy_gop->source.offset = offset;
 
-		copy_gop->dest.domid = vif->domid;
+		copy_gop->dest.domid = queue->vif->domid;
 		copy_gop->dest.offset = npo->copy_off;
 		copy_gop->dest.u.ref = npo->copy_gref;
 
@@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
 		else
 			gso_type = XEN_NETIF_GSO_TYPE_NONE;
 
-		if (*head && ((1 << gso_type) & vif->gso_mask))
-			vif->rx.req_cons++;
+		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
+			queue->rx.req_cons++;
 
 		*head = 0; /* There must be something in this buffer now. */
 
@@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
  * frontend-side LRO).
  */
 static int xenvif_gop_skb(struct sk_buff *skb,
-			  struct netrx_pending_operations *npo)
+			  struct netrx_pending_operations *npo,
+			  struct xenvif_queue *queue)
 {
 	struct xenvif *vif = netdev_priv(skb->dev);
 	int nr_frags = skb_shinfo(skb)->nr_frags;
@@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 
 	/* Set up a GSO prefix descriptor, if necessary */
 	if ((1 << gso_type) & vif->gso_prefix_mask) {
-		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 		meta = npo->meta + npo->meta_prod++;
 		meta->gso_type = gso_type;
 		meta->gso_size = gso_size;
@@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		meta->id = req->id;
 	}
 
-	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
+	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
 	meta = npo->meta + npo->meta_prod++;
 
 	if ((1 << gso_type) & vif->gso_mask) {
@@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
 		if (data + len > skb_tail_pointer(skb))
 			len = skb_tail_pointer(skb) - data;
 
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     virt_to_page(data), len, offset, &head);
 		data += len;
 	}
 
 	for (i = 0; i < nr_frags; i++) {
-		xenvif_gop_frag_copy(vif, skb, npo,
+		xenvif_gop_frag_copy(queue, skb, npo,
 				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
 				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
 				     skb_shinfo(skb)->frags[i].page_offset,
@@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
 	return status;
 }
 
-static void xenvif_add_frag_responses(struct xenvif *vif, int status,
+static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
 				      struct xenvif_rx_meta *meta,
 				      int nr_meta_slots)
 {
@@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
 			flags = XEN_NETRXF_more_data;
 
 		offset = 0;
-		make_rx_response(vif, meta[i].id, status, offset,
+		make_rx_response(queue, meta[i].id, status, offset,
 				 meta[i].size, flags);
 	}
 }
@@ -459,12 +460,12 @@ struct skb_cb_overlay {
 	int meta_slots_used;
 };
 
-void xenvif_kick_thread(struct xenvif *vif)
+void xenvif_kick_thread(struct xenvif_queue *queue)
 {
-	wake_up(&vif->wq);
+	wake_up(&queue->wq);
 }
 
-static void xenvif_rx_action(struct xenvif *vif)
+static void xenvif_rx_action(struct xenvif_queue *queue)
 {
 	s8 status;
 	u16 flags;
@@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
 	bool need_to_notify = false;
 
 	struct netrx_pending_operations npo = {
-		.copy  = vif->grant_copy_op,
-		.meta  = vif->meta,
+		.copy  = queue->grant_copy_op,
+		.meta  = queue->meta,
 	};
 
 	skb_queue_head_init(&rxq);
 
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
 		RING_IDX max_slots_needed;
 		int i;
 
@@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
 			max_slots_needed++;
 
 		/* If the skb may not fit then bail out now */
-		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
-			skb_queue_head(&vif->rx_queue, skb);
+		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
+			skb_queue_head(&queue->rx_queue, skb);
 			need_to_notify = true;
-			vif->rx_last_skb_slots = max_slots_needed;
+			queue->rx_last_skb_slots = max_slots_needed;
 			break;
 		} else
-			vif->rx_last_skb_slots = 0;
+			queue->rx_last_skb_slots = 0;
 
 		sco = (struct skb_cb_overlay *)skb->cb;
-		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
+		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
 		BUG_ON(sco->meta_slots_used > max_slots_needed);
 
 		__skb_queue_tail(&rxq, skb);
 	}
 
-	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
+	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
 
 	if (!npo.copy_prod)
 		goto done;
 
 	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
-	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
+	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
 
 	while ((skb = __skb_dequeue(&rxq)) != NULL) {
 		sco = (struct skb_cb_overlay *)skb->cb;
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_prefix_mask) {
-			resp = RING_GET_RESPONSE(&vif->rx,
-						 vif->rx.rsp_prod_pvt++);
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_prefix_mask) {
+			resp = RING_GET_RESPONSE(&queue->rx,
+						 queue->rx.rsp_prod_pvt++);
 
 			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
 
-			resp->offset = vif->meta[npo.meta_cons].gso_size;
-			resp->id = vif->meta[npo.meta_cons].id;
+			resp->offset = queue->meta[npo.meta_cons].gso_size;
+			resp->id = queue->meta[npo.meta_cons].id;
 			resp->status = sco->meta_slots_used;
 
 			npo.meta_cons++;
@@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
 		}
 
 
-		vif->dev->stats.tx_bytes += skb->len;
-		vif->dev->stats.tx_packets++;
+		queue->stats.tx_bytes += skb->len;
+		queue->stats.tx_packets++;
 
-		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
+		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
 
 		if (sco->meta_slots_used == 1)
 			flags = 0;
@@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
 			flags |= XEN_NETRXF_data_validated;
 
 		offset = 0;
-		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
+		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
 					status, offset,
-					vif->meta[npo.meta_cons].size,
+					queue->meta[npo.meta_cons].size,
 					flags);
 
-		if ((1 << vif->meta[npo.meta_cons].gso_type) &
-		    vif->gso_mask) {
+		if ((1 << queue->meta[npo.meta_cons].gso_type) &
+		    queue->vif->gso_mask) {
 			struct xen_netif_extra_info *gso =
 				(struct xen_netif_extra_info *)
-				RING_GET_RESPONSE(&vif->rx,
-						  vif->rx.rsp_prod_pvt++);
+				RING_GET_RESPONSE(&queue->rx,
+						  queue->rx.rsp_prod_pvt++);
 
 			resp->flags |= XEN_NETRXF_extra_info;
 
-			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
-			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
+			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
+			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
 			gso->u.gso.pad = 0;
 			gso->u.gso.features = 0;
 
@@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
 			gso->flags = 0;
 		}
 
-		xenvif_add_frag_responses(vif, status,
-					  vif->meta + npo.meta_cons + 1,
+		xenvif_add_frag_responses(queue, status,
+					  queue->meta + npo.meta_cons + 1,
 					  sco->meta_slots_used);
 
-		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
+		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
 
 		need_to_notify |= !!ret;
 
@@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
 
 done:
 	if (need_to_notify)
-		notify_remote_via_irq(vif->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
-void xenvif_check_rx_xenvif(struct xenvif *vif)
+void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
 {
 	int more_to_do;
 
-	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
+	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
 
 	if (more_to_do)
-		napi_schedule(&vif->napi);
+		napi_schedule(&queue->napi);
 }
 
-static void tx_add_credit(struct xenvif *vif)
+static void tx_add_credit(struct xenvif_queue *queue)
 {
 	unsigned long max_burst, max_credit;
 
@@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
 	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
 	 * Otherwise the interface can seize up due to insufficient credit.
 	 */
-	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
+	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
 	max_burst = min(max_burst, 131072UL);
-	max_burst = max(max_burst, vif->credit_bytes);
+	max_burst = max(max_burst, queue->credit_bytes);
 
 	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
-	max_credit = vif->remaining_credit + vif->credit_bytes;
-	if (max_credit < vif->remaining_credit)
+	max_credit = queue->remaining_credit + queue->credit_bytes;
+	if (max_credit < queue->remaining_credit)
 		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
 
-	vif->remaining_credit = min(max_credit, max_burst);
+	queue->remaining_credit = min(max_credit, max_burst);
 }
 
 static void tx_credit_callback(unsigned long data)
 {
-	struct xenvif *vif = (struct xenvif *)data;
-	tx_add_credit(vif);
-	xenvif_check_rx_xenvif(vif);
+	struct xenvif_queue *queue = (struct xenvif_queue *)data;
+	tx_add_credit(queue);
+	xenvif_check_rx_xenvif(queue);
 }
 
-static void xenvif_tx_err(struct xenvif *vif,
+static void xenvif_tx_err(struct xenvif_queue *queue,
 			  struct xen_netif_tx_request *txp, RING_IDX end)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
-		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
+		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
 		if (cons == end)
 			break;
-		txp = RING_GET_REQUEST(&vif->tx, cons++);
+		txp = RING_GET_REQUEST(&queue->tx, cons++);
 	} while (1);
-	vif->tx.req_cons = cons;
+	queue->tx.req_cons = cons;
 }
 
 static void xenvif_fatal_tx_err(struct xenvif *vif)
@@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
 	xenvif_carrier_off(vif);
 }
 
-static int xenvif_count_requests(struct xenvif *vif,
+static int xenvif_count_requests(struct xenvif_queue *queue,
 				 struct xen_netif_tx_request *first,
 				 struct xen_netif_tx_request *txp,
 				 int work_to_do)
 {
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 	int slots = 0;
 	int drop_err = 0;
 	int more_data;
@@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		struct xen_netif_tx_request dropped_tx = { 0 };
 
 		if (slots >= work_to_do) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Asked for %d slots but exceeds this limit\n",
 				   work_to_do);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -ENODATA;
 		}
 
@@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 * considered malicious.
 		 */
 		if (unlikely(slots >= fatal_skb_slots)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Malicious frontend using %d slots, threshold %u\n",
 				   slots, fatal_skb_slots);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -E2BIG;
 		}
 
@@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
 					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
 			drop_err = -E2BIG;
@@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		if (drop_err)
 			txp = &dropped_tx;
 
-		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
+		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
 		       sizeof(*txp));
 
 		/* If the guest submitted a frame >= 64 KiB then
@@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
 		 */
 		if (!drop_err && txp->size > first->size) {
 			if (net_ratelimit())
-				netdev_dbg(vif->dev,
+				netdev_dbg(queue->vif->dev,
 					   "Invalid tx request, slot size %u > remaining size %u\n",
 					   txp->size, first->size);
 			drop_err = -EIO;
@@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
 		slots++;
 
 		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
-			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
+			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
 				 txp->offset, txp->size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
@@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
 	} while (more_data);
 
 	if (drop_err) {
-		xenvif_tx_err(vif, first, cons + slots);
+		xenvif_tx_err(queue, first, cons + slots);
 		return drop_err;
 	}
 
 	return slots;
 }
 
-static struct page *xenvif_alloc_page(struct xenvif *vif,
+static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
 				      u16 pending_idx)
 {
 	struct page *page;
@@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
 	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 	if (!page)
 		return NULL;
-	vif->mmap_pages[pending_idx] = page;
+	queue->mmap_pages[pending_idx] = page;
 
 	return page;
 }
 
-static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
+static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
 					       struct sk_buff *skb,
 					       struct xen_netif_tx_request *txp,
 					       struct gnttab_copy *gop)
@@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 	for (shinfo->nr_frags = slot = start; slot < nr_slots;
 	     shinfo->nr_frags++) {
 		struct pending_tx_info *pending_tx_info =
-			vif->pending_tx_info;
+			queue->pending_tx_info;
 
 		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
 		if (!page)
@@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 			gop->flags = GNTCOPY_source_gref;
 
 			gop->source.u.ref = txp->gref;
-			gop->source.domid = vif->domid;
+			gop->source.domid = queue->vif->domid;
 			gop->source.offset = txp->offset;
 
 			gop->dest.domid = DOMID_SELF;
@@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				gop->len = txp->size;
 				dst_offset += gop->len;
 
-				index = pending_index(vif->pending_cons++);
+				index = pending_index(queue->pending_cons++);
 
-				pending_idx = vif->pending_ring[index];
+				pending_idx = queue->pending_ring[index];
 
 				memcpy(&pending_tx_info[pending_idx].req, txp,
 				       sizeof(*txp));
@@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 				 * fields for head tx req will be set
 				 * to correct values after the loop.
 				 */
-				vif->mmap_pages[pending_idx] = (void *)(~0UL);
+				queue->mmap_pages[pending_idx] = (void *)(~0UL);
 				pending_tx_info[pending_idx].head =
 					INVALID_PENDING_RING_IDX;
 
@@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 		first->req.offset = 0;
 		first->req.size = dst_offset;
 		first->head = start_idx;
-		vif->mmap_pages[head_idx] = page;
+		queue->mmap_pages[head_idx] = page;
 		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
 	}
 
@@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
 err:
 	/* Unwind, freeing all pages and sending error responses. */
 	while (shinfo->nr_frags-- > start) {
-		xenvif_idx_release(vif,
+		xenvif_idx_release(queue,
 				frag_get_pending_idx(&frags[shinfo->nr_frags]),
 				XEN_NETIF_RSP_ERROR);
 	}
 	/* The head too, if necessary. */
 	if (start)
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	return NULL;
 }
 
-static int xenvif_tx_check_gop(struct xenvif *vif,
+static int xenvif_tx_check_gop(struct xenvif_queue *queue,
 			       struct sk_buff *skb,
 			       struct gnttab_copy **gopp)
 {
@@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	/* Check status of header. */
 	err = gop->status;
 	if (unlikely(err))
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 	/* Skip first skb fragment if it is on same page as header fragment. */
 	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
@@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 		pending_ring_idx_t head;
 
 		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
-		tx_info = &vif->pending_tx_info[pending_idx];
+		tx_info = &queue->pending_tx_info[pending_idx];
 		head = tx_info->head;
 
 		/* Check error status: if okay then remember grant handle. */
@@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 			newerr = (++gop)->status;
 			if (newerr)
 				break;
-			peek = vif->pending_ring[pending_index(++head)];
-		} while (!pending_tx_is_head(vif, peek));
+			peek = queue->pending_ring[pending_index(++head)];
+		} while (!pending_tx_is_head(queue, peek));
 
 		if (likely(!newerr)) {
 			/* Had a previous error? Invalidate this fragment. */
 			if (unlikely(err))
-				xenvif_idx_release(vif, pending_idx,
+				xenvif_idx_release(queue, pending_idx,
 						   XEN_NETIF_RSP_OKAY);
 			continue;
 		}
 
 		/* Error on this fragment: respond to client with an error. */
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
 
 		/* Not the first error? Preceding frags already invalidated. */
 		if (err)
@@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 
 		/* First error: invalidate header and preceding fragments. */
 		pending_idx = *((u16 *)skb->data);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 		for (j = start; j < i; j++) {
 			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
 	return err;
 }
 
-static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
+static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
 	int nr_frags = shinfo->nr_frags;
@@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
 
 		pending_idx = frag_get_pending_idx(frag);
 
-		txp = &vif->pending_tx_info[pending_idx].req;
-		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
+		txp = &queue->pending_tx_info[pending_idx].req;
+		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
 		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
 		skb->len += txp->size;
 		skb->data_len += txp->size;
 		skb->truesize += txp->size;
 
 		/* Take an extra reference to offset xenvif_idx_release */
-		get_page(vif->mmap_pages[pending_idx]);
-		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
+		get_page(queue->mmap_pages[pending_idx]);
+		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
 	}
 }
 
-static int xenvif_get_extras(struct xenvif *vif,
+static int xenvif_get_extras(struct xenvif_queue *queue,
 				struct xen_netif_extra_info *extras,
 				int work_to_do)
 {
 	struct xen_netif_extra_info extra;
-	RING_IDX cons = vif->tx.req_cons;
+	RING_IDX cons = queue->tx.req_cons;
 
 	do {
 		if (unlikely(work_to_do-- <= 0)) {
-			netdev_err(vif->dev, "Missing extra info\n");
-			xenvif_fatal_tx_err(vif);
+			netdev_err(queue->vif->dev, "Missing extra info\n");
+			xenvif_fatal_tx_err(queue->vif);
 			return -EBADR;
 		}
 
-		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
+		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
 		       sizeof(extra));
 		if (unlikely(!extra.type ||
 			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
-			vif->tx.req_cons = ++cons;
-			netdev_err(vif->dev,
+			queue->tx.req_cons = ++cons;
+			netdev_err(queue->vif->dev,
 				   "Invalid extra type: %d\n", extra.type);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			return -EINVAL;
 		}
 
 		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
-		vif->tx.req_cons = ++cons;
+		queue->tx.req_cons = ++cons;
 	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	return work_to_do;
@@ -1048,7 +1049,7 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
 	return 0;
 }
 
-static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
+static int checksum_setup(struct xenvif_queue *queue, struct sk_buff *skb)
 {
 	bool recalculate_partial_csum = false;
 
@@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	 * recalculate the partial checksum.
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
-		vif->rx_gso_checksum_fixup++;
+		queue->stats.rx_gso_checksum_fixup++;
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
+static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
 {
 	u64 now = get_jiffies_64();
-	u64 next_credit = vif->credit_window_start +
-		msecs_to_jiffies(vif->credit_usec / 1000);
+	u64 next_credit = queue->credit_window_start +
+		msecs_to_jiffies(queue->credit_usec / 1000);
 
 	/* Timer could already be pending in rare cases. */
-	if (timer_pending(&vif->credit_timeout))
+	if (timer_pending(&queue->credit_timeout))
 		return true;
 
 	/* Passed the point where we can replenish credit? */
 	if (time_after_eq64(now, next_credit)) {
-		vif->credit_window_start = now;
-		tx_add_credit(vif);
+		queue->credit_window_start = now;
+		tx_add_credit(queue);
 	}
 
 	/* Still too big to send right now? Set a callback. */
-	if (size > vif->remaining_credit) {
-		vif->credit_timeout.data     =
-			(unsigned long)vif;
-		vif->credit_timeout.function =
+	if (size > queue->remaining_credit) {
+		queue->credit_timeout.data     =
+			(unsigned long)queue;
+		queue->credit_timeout.function =
 			tx_credit_callback;
-		mod_timer(&vif->credit_timeout,
+		mod_timer(&queue->credit_timeout,
 			  next_credit);
-		vif->credit_window_start = next_credit;
+		queue->credit_window_start = next_credit;
 
 		return true;
 	}
@@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
 	return false;
 }
 
-static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
+static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
+	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
 	struct sk_buff *skb;
 	int ret;
 
-	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 		< MAX_PENDING_REQS) &&
-	       (skb_queue_len(&vif->tx_queue) < budget)) {
+	       (skb_queue_len(&queue->tx_queue) < budget)) {
 		struct xen_netif_tx_request txreq;
 		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
 		struct page *page;
@@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		unsigned int data_len;
 		pending_ring_idx_t index;
 
-		if (vif->tx.sring->req_prod - vif->tx.req_cons >
+		if (queue->tx.sring->req_prod - queue->tx.req_cons >
 		    XEN_NETIF_TX_RING_SIZE) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "Impossible number of requests. "
 				   "req_prod %d, req_cons %d, size %ld\n",
-				   vif->tx.sring->req_prod, vif->tx.req_cons,
+				   queue->tx.sring->req_prod, queue->tx.req_cons,
 				   XEN_NETIF_TX_RING_SIZE);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			continue;
 		}
 
-		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
+		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
 		if (!work_to_do)
 			break;
 
-		idx = vif->tx.req_cons;
+		idx = queue->tx.req_cons;
 		rmb(); /* Ensure that we see the request before we copy it. */
-		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
+		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
 
 		/* Credit-based scheduling. */
-		if (txreq.size > vif->remaining_credit &&
-		    tx_credit_exceeded(vif, txreq.size))
+		if (txreq.size > queue->remaining_credit &&
+		    tx_credit_exceeded(queue, txreq.size))
 			break;
 
-		vif->remaining_credit -= txreq.size;
+		queue->remaining_credit -= txreq.size;
 
 		work_to_do--;
-		vif->tx.req_cons = ++idx;
+		queue->tx.req_cons = ++idx;
 
 		memset(extras, 0, sizeof(extras));
 		if (txreq.flags & XEN_NETTXF_extra_info) {
-			work_to_do = xenvif_get_extras(vif, extras,
+			work_to_do = xenvif_get_extras(queue, extras,
 						       work_to_do);
-			idx = vif->tx.req_cons;
+			idx = queue->tx.req_cons;
 			if (unlikely(work_to_do < 0))
 				break;
 		}
 
-		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
+		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
 		if (unlikely(ret < 0))
 			break;
 
 		idx += ret;
 
 		if (unlikely(txreq.size < ETH_HLEN)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Bad packet size: %d\n", txreq.size);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		/* No crossing a page as the payload mustn't fragment. */
 		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
-			netdev_err(vif->dev,
+			netdev_err(queue->vif->dev,
 				   "txreq.offset: %x, size: %u, end: %lu\n",
 				   txreq.offset, txreq.size,
 				   (txreq.offset&~PAGE_MASK) + txreq.size);
-			xenvif_fatal_tx_err(vif);
+			xenvif_fatal_tx_err(queue->vif);
 			break;
 		}
 
-		index = pending_index(vif->pending_cons);
-		pending_idx = vif->pending_ring[index];
+		index = pending_index(queue->pending_cons);
+		pending_idx = queue->pending_ring[index];
 
 		data_len = (txreq.size > PKT_PROT_LEN &&
 			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
@@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
 				GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(skb == NULL)) {
-			netdev_dbg(vif->dev,
+			netdev_dbg(queue->vif->dev,
 				   "Can't allocate a skb in start_xmit.\n");
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
@@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 			struct xen_netif_extra_info *gso;
 			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
 
-			if (xenvif_set_skb_gso(vif, skb, gso)) {
+			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
 				/* Failure in xenvif_set_skb_gso is fatal. */
 				kfree_skb(skb);
 				break;
@@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 		}
 
 		/* XXX could copy straight to head */
-		page = xenvif_alloc_page(vif, pending_idx);
+		page = xenvif_alloc_page(queue, pending_idx);
 		if (!page) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 
 		gop->source.u.ref = txreq.gref;
-		gop->source.domid = vif->domid;
+		gop->source.domid = queue->vif->domid;
 		gop->source.offset = txreq.offset;
 
 		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
@@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 
 		gop++;
 
-		memcpy(&vif->pending_tx_info[pending_idx].req,
+		memcpy(&queue->pending_tx_info[pending_idx].req,
 		       &txreq, sizeof(txreq));
-		vif->pending_tx_info[pending_idx].head = index;
+		queue->pending_tx_info[pending_idx].head = index;
 		*((u16 *)skb->data) = pending_idx;
 
 		__skb_put(skb, data_len);
@@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
 					     INVALID_PENDING_IDX);
 		}
 
-		vif->pending_cons++;
+		queue->pending_cons++;
 
-		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
+		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
 		if (request_gop == NULL) {
 			kfree_skb(skb);
-			xenvif_tx_err(vif, &txreq, idx);
+			xenvif_tx_err(queue, &txreq, idx);
 			break;
 		}
 		gop = request_gop;
 
-		__skb_queue_tail(&vif->tx_queue, skb);
+		__skb_queue_tail(&queue->tx_queue, skb);
 
-		vif->tx.req_cons = idx;
+		queue->tx.req_cons = idx;
 
-		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
+		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
 			break;
 	}
 
-	return gop - vif->tx_copy_ops;
+	return gop - queue->tx_copy_ops;
 }
 
 
-static int xenvif_tx_submit(struct xenvif *vif)
+static int xenvif_tx_submit(struct xenvif_queue *queue)
 {
-	struct gnttab_copy *gop = vif->tx_copy_ops;
+	struct gnttab_copy *gop = queue->tx_copy_ops;
 	struct sk_buff *skb;
 	int work_done = 0;
 
-	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
+	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
 		struct xen_netif_tx_request *txp;
 		u16 pending_idx;
 		unsigned data_len;
 
 		pending_idx = *((u16 *)skb->data);
-		txp = &vif->pending_tx_info[pending_idx].req;
+		txp = &queue->pending_tx_info[pending_idx].req;
 
 		/* Check the remap error code. */
-		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
-			netdev_dbg(vif->dev, "netback grant failed.\n");
+		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
+			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
 			skb_shinfo(skb)->nr_frags = 0;
 			kfree_skb(skb);
 			continue;
@@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 
 		data_len = skb->len;
 		memcpy(skb->data,
-		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
+		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
 		       data_len);
 		if (data_len < txp->size) {
 			/* Append the packet payload as a fragment. */
@@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
 			txp->size -= data_len;
 		} else {
 			/* Schedule a response immediately. */
-			xenvif_idx_release(vif, pending_idx,
+			xenvif_idx_release(queue, pending_idx,
 					   XEN_NETIF_RSP_OKAY);
 		}
 
@@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
 		else if (txp->flags & XEN_NETTXF_data_validated)
 			skb->ip_summed = CHECKSUM_UNNECESSARY;
 
-		xenvif_fill_frags(vif, skb);
+		xenvif_fill_frags(queue, skb);
 
 		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
 			int target = min_t(int, skb->len, PKT_PROT_LEN);
 			__pskb_pull_tail(skb, target - skb_headlen(skb));
 		}
 
-		skb->dev      = vif->dev;
+		skb->dev      = queue->vif->dev;
 		skb->protocol = eth_type_trans(skb, skb->dev);
 		skb_reset_network_header(skb);
 
-		if (checksum_setup(vif, skb)) {
-			netdev_dbg(vif->dev,
+		if (checksum_setup(queue, skb)) {
+			netdev_dbg(queue->vif->dev,
 				   "Can't setup checksum in net_tx_action\n");
 			kfree_skb(skb);
 			continue;
@@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
 				DIV_ROUND_UP(skb->len - hdrlen, mss);
 		}
 
-		vif->dev->stats.rx_bytes += skb->len;
-		vif->dev->stats.rx_packets++;
+		queue->stats.rx_bytes += skb->len;
+		queue->stats.rx_packets++;
 
 		work_done++;
 
@@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
 }
 
 /* Called after netfront has transmitted */
-int xenvif_tx_action(struct xenvif *vif, int budget)
+int xenvif_tx_action(struct xenvif_queue *queue, int budget)
 {
 	unsigned nr_gops;
 	int work_done;
 
-	if (unlikely(!tx_work_todo(vif)))
+	if (unlikely(!tx_work_todo(queue)))
 		return 0;
 
-	nr_gops = xenvif_tx_build_gops(vif, budget);
+	nr_gops = xenvif_tx_build_gops(queue, budget);
 
 	if (nr_gops == 0)
 		return 0;
 
-	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
+	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
 
-	work_done = xenvif_tx_submit(vif);
+	work_done = xenvif_tx_submit(queue);
 
 	return work_done;
 }
 
-static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
+static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
 			       u8 status)
 {
 	struct pending_tx_info *pending_tx_info;
 	pending_ring_idx_t head;
 	u16 peek; /* peek into next tx request */
 
-	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
+	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
 
 	/* Already complete? */
-	if (vif->mmap_pages[pending_idx] == NULL)
+	if (queue->mmap_pages[pending_idx] == NULL)
 		return;
 
-	pending_tx_info = &vif->pending_tx_info[pending_idx];
+	pending_tx_info = &queue->pending_tx_info[pending_idx];
 
 	head = pending_tx_info->head;
 
-	BUG_ON(!pending_tx_is_head(vif, head));
-	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
+	BUG_ON(!pending_tx_is_head(queue, head));
+	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
 
 	do {
 		pending_ring_idx_t index;
 		pending_ring_idx_t idx = pending_index(head);
-		u16 info_idx = vif->pending_ring[idx];
+		u16 info_idx = queue->pending_ring[idx];
 
-		pending_tx_info = &vif->pending_tx_info[info_idx];
-		make_tx_response(vif, &pending_tx_info->req, status);
+		pending_tx_info = &queue->pending_tx_info[info_idx];
+		make_tx_response(queue, &pending_tx_info->req, status);
 
 		/* Setting any number other than
 		 * INVALID_PENDING_RING_IDX indicates this slot is
@@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
 		 */
 		pending_tx_info->head = 0;
 
-		index = pending_index(vif->pending_prod++);
-		vif->pending_ring[index] = vif->pending_ring[info_idx];
+		index = pending_index(queue->pending_prod++);
+		queue->pending_ring[index] = queue->pending_ring[info_idx];
 
-		peek = vif->pending_ring[pending_index(++head)];
+		peek = queue->pending_ring[pending_index(++head)];
 
-	} while (!pending_tx_is_head(vif, peek));
+	} while (!pending_tx_is_head(queue, peek));
 
-	put_page(vif->mmap_pages[pending_idx]);
-	vif->mmap_pages[pending_idx] = NULL;
+	put_page(queue->mmap_pages[pending_idx]);
+	queue->mmap_pages[pending_idx] = NULL;
 }
 
 
-static void make_tx_response(struct xenvif *vif,
+static void make_tx_response(struct xenvif_queue *queue,
 			     struct xen_netif_tx_request *txp,
 			     s8       st)
 {
-	RING_IDX i = vif->tx.rsp_prod_pvt;
+	RING_IDX i = queue->tx.rsp_prod_pvt;
 	struct xen_netif_tx_response *resp;
 	int notify;
 
-	resp = RING_GET_RESPONSE(&vif->tx, i);
+	resp = RING_GET_RESPONSE(&queue->tx, i);
 	resp->id     = txp->id;
 	resp->status = st;
 
 	if (txp->flags & XEN_NETTXF_extra_info)
-		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
+		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
 
-	vif->tx.rsp_prod_pvt = ++i;
-	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
+	queue->tx.rsp_prod_pvt = ++i;
+	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(vif->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 }
 
-static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
+static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
 					     u16      id,
 					     s8       st,
 					     u16      offset,
 					     u16      size,
 					     u16      flags)
 {
-	RING_IDX i = vif->rx.rsp_prod_pvt;
+	RING_IDX i = queue->rx.rsp_prod_pvt;
 	struct xen_netif_rx_response *resp;
 
-	resp = RING_GET_RESPONSE(&vif->rx, i);
+	resp = RING_GET_RESPONSE(&queue->rx, i);
 	resp->offset     = offset;
 	resp->flags      = flags;
 	resp->id         = id;
@@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
 	if (st < 0)
 		resp->status = (s16)st;
 
-	vif->rx.rsp_prod_pvt = ++i;
+	queue->rx.rsp_prod_pvt = ++i;
 
 	return resp;
 }
 
-static inline int rx_work_todo(struct xenvif *vif)
+static inline int rx_work_todo(struct xenvif_queue *queue)
 {
-	return !skb_queue_empty(&vif->rx_queue) &&
-	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
+	return !skb_queue_empty(&queue->rx_queue) &&
+	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
 }
 
-static inline int tx_work_todo(struct xenvif *vif)
+static inline int tx_work_todo(struct xenvif_queue *queue)
 {
 
-	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
-	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
+	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
+	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
 	     < MAX_PENDING_REQS))
 		return 1;
 
 	return 0;
 }
 
-void xenvif_unmap_frontend_rings(struct xenvif *vif)
+void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
 {
-	if (vif->tx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->tx.sring);
-	if (vif->rx.sring)
-		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
-					vif->rx.sring);
+	if (queue->tx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->tx.sring);
+	if (queue->rx.sring)
+		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
+					queue->rx.sring);
 }
 
-int xenvif_map_frontend_rings(struct xenvif *vif,
+int xenvif_map_frontend_rings(struct xenvif_queue *queue,
 			      grant_ref_t tx_ring_ref,
 			      grant_ref_t rx_ring_ref)
 {
@@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
 
 	int err = -ENOMEM;
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     tx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	txs = (struct xen_netif_tx_sring *)addr;
-	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
-	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
+	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
 				     rx_ring_ref, &addr);
 	if (err)
 		goto err;
 
 	rxs = (struct xen_netif_rx_sring *)addr;
-	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
+	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	return 0;
 
 err:
-	xenvif_unmap_frontend_rings(vif);
+	xenvif_unmap_frontend_rings(queue);
 	return err;
 }
 
-void xenvif_stop_queue(struct xenvif *vif)
+static inline void xenvif_wake_queue(struct xenvif_queue *queue)
 {
-	if (!vif->can_queue)
-		return;
+	struct net_device *dev = queue->vif->dev;
+	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
+}
 
-	netif_stop_queue(vif->dev);
+static void xenvif_start_queue(struct xenvif_queue *queue)
+{
+	if (xenvif_schedulable(queue->vif))
+		xenvif_wake_queue(queue);
 }
 
-static void xenvif_start_queue(struct xenvif *vif)
+static int xenvif_queue_stopped(struct xenvif_queue *queue)
 {
-	if (xenvif_schedulable(vif))
-		netif_wake_queue(vif->dev);
+	struct net_device *dev = queue->vif->dev;
+	unsigned int id = queue->id;
+	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
 }
 
 int xenvif_kthread(void *data)
 {
-	struct xenvif *vif = data;
+	struct xenvif_queue *queue = data;
 	struct sk_buff *skb;
 
 	while (!kthread_should_stop()) {
-		wait_event_interruptible(vif->wq,
-					 rx_work_todo(vif) ||
+		wait_event_interruptible(queue->wq,
+					 rx_work_todo(queue) ||
 					 kthread_should_stop());
 		if (kthread_should_stop())
 			break;
 
-		if (!skb_queue_empty(&vif->rx_queue))
-			xenvif_rx_action(vif);
+		if (!skb_queue_empty(&queue->rx_queue))
+			xenvif_rx_action(queue);
 
-		if (skb_queue_empty(&vif->rx_queue) &&
-		    netif_queue_stopped(vif->dev))
-			xenvif_start_queue(vif);
+		if (skb_queue_empty(&queue->rx_queue) &&
+		    xenvif_queue_stopped(queue))
+			xenvif_start_queue(queue);
 
 		cond_resched();
 	}
 
 	/* Bin any remaining skbs */
-	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
+	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
 		dev_kfree_skb(skb);
 
 	return 0;
diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
index 7a206cf..f23ea0a 100644
--- a/drivers/net/xen-netback/xenbus.c
+++ b/drivers/net/xen-netback/xenbus.c
@@ -19,6 +19,7 @@
 */
 
 #include "common.h"
+#include <linux/vmalloc.h>
 
 struct backend_info {
 	struct xenbus_device *dev;
@@ -34,8 +35,9 @@ struct backend_info {
 	u8 have_hotplug_status_watch:1;
 };
 
-static int connect_rings(struct backend_info *);
-static void connect(struct backend_info *);
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
+static void connect(struct backend_info *be);
+static int read_xenbus_vif_flags(struct backend_info *be);
 static void backend_create_xenvif(struct backend_info *be);
 static void unregister_hotplug_status_watch(struct backend_info *be);
 static void set_backend_state(struct backend_info *be,
@@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
 {
 	int err;
 	struct xenbus_device *dev = be->dev;
-
-	err = connect_rings(be);
-	if (err)
-		return;
+	unsigned long credit_bytes, credit_usec;
+	unsigned int queue_index;
+	struct xenvif_queue *queue;
 
 	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
 	if (err) {
@@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
 		return;
 	}
 
-	xen_net_read_rate(dev, &be->vif->credit_bytes,
-			  &be->vif->credit_usec);
-	be->vif->remaining_credit = be->vif->credit_bytes;
+	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
+	read_xenbus_vif_flags(be);
+
+	be->vif->num_queues = 1;
+	be->vif->queues = vzalloc(be->vif->num_queues *
+			sizeof(struct xenvif_queue));
+
+	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
+		queue = &be->vif->queues[queue_index];
+		queue->vif = be->vif;
+		queue->id = queue_index;
+		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
+				be->vif->dev->name, queue->id);
+
+		xenvif_init_queue(queue);
+
+		queue->remaining_credit = credit_bytes;
+
+		err = connect_rings(be, queue);
+		if (err)
+			goto err;
+	}
+
+	xenvif_carrier_on(be->vif);
 
 	unregister_hotplug_status_watch(be);
 	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
@@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
 	if (!err)
 		be->have_hotplug_status_watch = 1;
 
-	netif_wake_queue(be->vif->dev);
+	netif_tx_wake_all_queues(be->vif->dev);
+
+	return;
+
+err:
+	vfree(be->vif->queues);
+	be->vif->queues = NULL;
+	be->vif->num_queues = 0;
+	return;
 }
 
 
-static int connect_rings(struct backend_info *be)
+static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
 {
-	struct xenvif *vif = be->vif;
 	struct xenbus_device *dev = be->dev;
 	unsigned long tx_ring_ref, rx_ring_ref;
-	unsigned int tx_evtchn, rx_evtchn, rx_copy;
+	unsigned int tx_evtchn, rx_evtchn;
 	int err;
-	int val;
 
 	err = xenbus_gather(XBT_NIL, dev->otherend,
 			    "tx-ring-ref", "%lu", &tx_ring_ref,
@@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
 		rx_evtchn = tx_evtchn;
 	}
 
+	/* Map the shared frame, irq etc. */
+	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
+			     tx_evtchn, rx_evtchn);
+	if (err) {
+		xenbus_dev_fatal(dev, err,
+				 "mapping shared-frames %lu/%lu port tx %u rx %u",
+				 tx_ring_ref, rx_ring_ref,
+				 tx_evtchn, rx_evtchn);
+		return err;
+	}
+
+	return 0;
+}
+
+static int read_xenbus_vif_flags(struct backend_info *be)
+{
+	struct xenvif *vif = be->vif;
+	struct xenbus_device *dev = be->dev;
+	unsigned int rx_copy;
+	int err, val;
+
 	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
 			   &rx_copy);
 	if (err == -ENOENT) {
@@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
 		val = 0;
 	vif->ipv6_csum = !!val;
 
-	/* Map the shared frame, irq etc. */
-	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
-			     tx_evtchn, rx_evtchn);
-	if (err) {
-		xenbus_dev_fatal(dev, err,
-				 "mapping shared-frames %lu/%lu port tx %u rx %u",
-				 tx_ring_ref, rx_ring_ref,
-				 tx_evtchn, rx_evtchn);
-		return err;
-	}
 	return 0;
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbT-0003uT-NH; Mon, 24 Feb 2014 14:33:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbS-0003sV-Bb
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:54 +0000
Received: from [85.158.137.68:54018] by server-17.bemta-3.messagelabs.com id
	5F/F5-22569-0585B035; Mon, 24 Feb 2014 14:33:52 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393252430!3859913!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18027 invoked from network); 24 Feb 2014 14:33:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103544742"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:49 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHwaw-0005jC-7V;
	Mon, 24 Feb 2014 14:33:22 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwau-0004Yy-9O; Mon, 24 Feb 2014 14:33:20 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:02 +0000
Message-ID: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: netdev@vger.kernel.org, paul.durrant@citrix.com, wei.liu2@citrix.com,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 0/5] xen-net{back,
	front}: Multiple transmit and receive queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


This patch series implements multiple transmit and receive queues (i.e.
multiple shared rings) for the xen virtual network interfaces.

The series is split up as follows:
 - Patches 1 and 3 factor out the queue-specific data for netback and
   netfront respectively, and modify the rest of the code to use these
   as appropriate.
 - Patches 2 and 4 introduce new XenStore keys to negotiate and use
   multiple shared rings and event channels, and code to connect these
   as appropriate.
 - Patch 5 documents the XenStore keys required for the new feature
   in include/xen/interface/io/netif.h

All other transmit and receive processing remains unchanged, i.e. there
is a kthread per queue and a NAPI context per queue.

The performance of these patches has been analysed in detail, with
results available at:

http://wiki.xenproject.org/wiki/Xen-netback_and_xen-netfront_multi-queue_performance_testing

To summarise:
  * Using multiple queues allows a VM to transmit at line rate on a 10
    Gbit/s NIC, compared with a maximum aggregate throughput of 6 Gbit/s
    with a single queue.
  * For intra-host VM--VM traffic, eight queues provide 171% of the
    throughput of a single queue; almost 12 Gbit/s instead of 6 Gbit/s.
  * There is a corresponding increase in total CPU usage, i.e. this is a
    scaling out over available resources, not an efficiency improvement.
  * Results depend on the availability of sufficient CPUs, as well as the
    distribution of interrupts and the distribution of TCP streams across
    the queues.

Queue selection is currently achieved via an L4 hash on the packet (i.e.
TCP src/dst port, IP src/dst address) and is not negotiated between the
frontend and backend, since only one option exists. Future patches to
support other frontends (particularly Windows) will need to add some
capability to negotiate not only the hash algorithm selection, but also
allow the frontend to specify some parameters to this.

Note that queue selection is a decision by the transmitting system about
which queue to use for a particular packet. In general, the algorithm
may differ between the frontend and the backend with no adverse effects.

Queue-specific XenStore entries for ring references and event channels
are stored hierarchically, i.e. under .../queue-N/... where N varies
from 0 to one less than the requested number of queues (inclusive). If
only one queue is requested, it falls back to the flat structure where
the ring references and event channels are written at the same level as
other vif information.
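As an illustration, a two-queue vif's frontend keys might look like the sketch below (the paths and domain/device IDs are hypothetical; the authoritative key names are those documented in netif.h by patch 5):

```
/local/domain/1/device/vif/0/multi-queue-num-queues = "2"
/local/domain/1/device/vif/0/queue-0/tx-ring-ref       = "<grant-ref>"
/local/domain/1/device/vif/0/queue-0/rx-ring-ref       = "<grant-ref>"
/local/domain/1/device/vif/0/queue-0/event-channel-tx  = "<port>"
/local/domain/1/device/vif/0/queue-0/event-channel-rx  = "<port>"
/local/domain/1/device/vif/0/queue-1/...               = (as queue-0)
```

With a single queue, the same ring-ref and event-channel keys appear directly under .../vif/0/ instead, preserving compatibility with existing frontends and backends.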

V5:
- Fix bug in xenvif_free() that could lead to an attempt to transmit an
  skb after the queue structures had been freed.
- Improve the XenStore protocol documentation in netif.h.
- Fix IRQ_NAME_SIZE double-accounting for null terminator.
- Move rx_gso_checksum_fixup stat into struct xenvif_stats (per-queue).
- Don't initialise a local variable that is set in both branches (xspath).

V4:
- Add MODULE_PARM_DESC() for the multi-queue parameters for netback
  and netfront modules.
- Move del_timer_sync() in netfront to after unregister_netdev, which
  restores the order in which these functions were called before applying
  these patches.

V3:
- Further indentation and style fixups.

V2:
- Rebase onto net-next.
- Change queue->number to queue->id.
- Add atomic operations around the small number of stats variables that
  are not queue-specific or per-cpu.
- Fixup formatting and style issues.
- XenStore protocol changes documented in netif.h.
- Default max. number of queues to num_online_cpus().
- Check requested number of queues does not exceed maximum.

--
Andrew J. Bennieston


From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbU-0003up-6Z; Mon, 24 Feb 2014 14:33:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)
	id 1WHwbS-0003ti-Dy
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:33:55 +0000
Received: from [85.158.139.211:45967] by server-16.bemta-5.messagelabs.com id
	6B/40-05060-1585B035; Mon, 24 Feb 2014 14:33:53 +0000
X-Env-Sender: andrewbe@dhcp-3-231.uk.xensource.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393252430!5892084!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16130 invoked from network); 24 Feb 2014 14:33:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:33:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105198661"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:48 -0500
Received: from [10.80.3.220] (helo=dhcp-3-231.uk.xensource.com)	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrewbe@dhcp-3-231.uk.xensource.com>)	id 1WHway-0005jL-VL;
	Mon, 24 Feb 2014 14:33:24 +0000
Received: from andrewbe by dhcp-3-231.uk.xensource.com with local (Exim 4.80)
	(envelope-from <andrewbe@dhcp-3-231.uk.xensource.com>)	id
	1WHwax-0004ZA-OL; Mon, 24 Feb 2014 14:33:23 +0000
From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
To: <xen-devel@lists.xenproject.org>
Date: Mon, 24 Feb 2014 14:33:05 +0000
Message-ID: <1393252387-17496-4-git-send-email-andrew.bennieston@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	netdev@vger.kernel.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH V5 net-next 3/5] xen-netfront: Factor
	queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>

In preparation for multi-queue support in xen-netfront, move the
queue-specific data from struct netfront_info to struct netfront_queue,
and update the rest of the code to use this.

It also adds loops over queues where appropriate, even though only one
queue is configured at this point, and uses alloc_etherdev_mq() and the
corresponding multi-queue netif wake/start/stop functions in preparation
for multiple active queues.

Finally, it implements a trivial queue selection function suitable for
ndo_select_queue, which simply returns 0, selecting the first (and
only) queue.

Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
---
 drivers/net/xen-netfront.c |  945 ++++++++++++++++++++++++++------------------
 1 file changed, 552 insertions(+), 393 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 2b62d79..4f5a431 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -73,6 +73,12 @@ struct netfront_cb {
 #define NET_RX_RING_SIZE __CONST_RING_SIZE(xen_netif_rx, PAGE_SIZE)
 #define TX_MAX_TARGET min_t(int, NET_TX_RING_SIZE, 256)
 
+/* Queue name is interface name with "-qNNN" appended */
+#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
+
+/* IRQ name is queue name with "-tx" or "-rx" appended */
+#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
+
 struct netfront_stats {
 	u64			rx_packets;
 	u64			tx_packets;
@@ -81,9 +87,12 @@ struct netfront_stats {
 	struct u64_stats_sync	syncp;
 };
 
-struct netfront_info {
-	struct list_head list;
-	struct net_device *netdev;
+struct netfront_info;
+
+struct netfront_queue {
+	unsigned int id; /* Queue ID, 0-based */
+	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
+	struct netfront_info *info;
 
 	struct napi_struct napi;
 
@@ -93,10 +102,8 @@ struct netfront_info {
 	unsigned int tx_evtchn, rx_evtchn;
 	unsigned int tx_irq, rx_irq;
 	/* Only used when split event channels support is enabled */
-	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
-	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
-
-	struct xenbus_device *xbdev;
+	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
+	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
 
 	spinlock_t   tx_lock;
 	struct xen_netif_tx_front_ring tx;
@@ -140,11 +147,22 @@ struct netfront_info {
 	unsigned long rx_pfn_array[NET_RX_RING_SIZE];
 	struct multicall_entry rx_mcl[NET_RX_RING_SIZE+1];
 	struct mmu_update rx_mmu[NET_RX_RING_SIZE];
+};
+
+struct netfront_info {
+	struct list_head list;
+	struct net_device *netdev;
+
+	struct xenbus_device *xbdev;
+
+	/* Multi-queue support */
+	unsigned int num_queues;
+	struct netfront_queue *queues;
 
 	/* Statistics */
 	struct netfront_stats __percpu *stats;
 
-	unsigned long rx_gso_checksum_fixup;
+	atomic_t rx_gso_checksum_fixup;
 };
 
 struct netfront_rx_info {
@@ -187,21 +205,21 @@ static int xennet_rxidx(RING_IDX idx)
 	return idx & (NET_RX_RING_SIZE - 1);
 }
 
-static struct sk_buff *xennet_get_rx_skb(struct netfront_info *np,
+static struct sk_buff *xennet_get_rx_skb(struct netfront_queue *queue,
 					 RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	struct sk_buff *skb = np->rx_skbs[i];
-	np->rx_skbs[i] = NULL;
+	struct sk_buff *skb = queue->rx_skbs[i];
+	queue->rx_skbs[i] = NULL;
 	return skb;
 }
 
-static grant_ref_t xennet_get_rx_ref(struct netfront_info *np,
+static grant_ref_t xennet_get_rx_ref(struct netfront_queue *queue,
 					    RING_IDX ri)
 {
 	int i = xennet_rxidx(ri);
-	grant_ref_t ref = np->grant_rx_ref[i];
-	np->grant_rx_ref[i] = GRANT_INVALID_REF;
+	grant_ref_t ref = queue->grant_rx_ref[i];
+	queue->grant_rx_ref[i] = GRANT_INVALID_REF;
 	return ref;
 }
 
@@ -221,41 +239,40 @@ static bool xennet_can_sg(struct net_device *dev)
 
 static void rx_refill_timeout(unsigned long data)
 {
-	struct net_device *dev = (struct net_device *)data;
-	struct netfront_info *np = netdev_priv(dev);
-	napi_schedule(&np->napi);
+	struct netfront_queue *queue = (struct netfront_queue *)data;
+	napi_schedule(&queue->napi);
 }
 
-static int netfront_tx_slot_available(struct netfront_info *np)
+static int netfront_tx_slot_available(struct netfront_queue *queue)
 {
-	return (np->tx.req_prod_pvt - np->tx.rsp_cons) <
+	return (queue->tx.req_prod_pvt - queue->tx.rsp_cons) <
 		(TX_MAX_TARGET - MAX_SKB_FRAGS - 2);
 }
 
-static void xennet_maybe_wake_tx(struct net_device *dev)
+static void xennet_maybe_wake_tx(struct netfront_queue *queue)
 {
-	struct netfront_info *np = netdev_priv(dev);
+	struct net_device *dev = queue->info->netdev;
+	struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, queue->id);
 
-	if (unlikely(netif_queue_stopped(dev)) &&
-	    netfront_tx_slot_available(np) &&
+	if (unlikely(netif_tx_queue_stopped(dev_queue)) &&
+	    netfront_tx_slot_available(queue) &&
 	    likely(netif_running(dev)))
-		netif_wake_queue(dev);
+		netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
 }
 
-static void xennet_alloc_rx_buffers(struct net_device *dev)
+static void xennet_alloc_rx_buffers(struct netfront_queue *queue)
 {
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 	struct page *page;
 	int i, batch_target, notify;
-	RING_IDX req_prod = np->rx.req_prod_pvt;
+	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	grant_ref_t ref;
 	unsigned long pfn;
 	void *vaddr;
 	struct xen_netif_rx_request *req;
 
-	if (unlikely(!netif_carrier_ok(dev)))
+	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
 
 	/*
@@ -264,9 +281,10 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 	 * allocator, so should reduce the chance of failed allocation requests
 	 * both for ourself and for other kernel subsystems.
 	 */
-	batch_target = np->rx_target - (req_prod - np->rx.rsp_cons);
-	for (i = skb_queue_len(&np->rx_batch); i < batch_target; i++) {
-		skb = __netdev_alloc_skb(dev, RX_COPY_THRESHOLD + NET_IP_ALIGN,
+	batch_target = queue->rx_target - (req_prod - queue->rx.rsp_cons);
+	for (i = skb_queue_len(&queue->rx_batch); i < batch_target; i++) {
+		skb = __netdev_alloc_skb(queue->info->netdev,
+					 RX_COPY_THRESHOLD + NET_IP_ALIGN,
 					 GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb))
 			goto no_skb;
@@ -279,7 +297,7 @@ static void xennet_alloc_rx_buffers(struct net_device *dev)
 			kfree_skb(skb);
 no_skb:
 			/* Could not allocate any skbuffs. Try again later. */
-			mod_timer(&np->rx_refill_timer,
+			mod_timer(&queue->rx_refill_timer,
 				  jiffies + (HZ/10));
 
 			/* Any skbuffs queued for refill? Force them out. */
@@ -289,44 +307,44 @@ no_skb:
 		}
 
 		skb_add_rx_frag(skb, 0, page, 0, 0, PAGE_SIZE);
-		__skb_queue_tail(&np->rx_batch, skb);
+		__skb_queue_tail(&queue->rx_batch, skb);
 	}
 
 	/* Is the batch large enough to be worthwhile? */
-	if (i < (np->rx_target/2)) {
-		if (req_prod > np->rx.sring->req_prod)
+	if (i < (queue->rx_target/2)) {
+		if (req_prod > queue->rx.sring->req_prod)
 			goto push;
 		return;
 	}
 
 	/* Adjust our fill target if we risked running out of buffers. */
-	if (((req_prod - np->rx.sring->rsp_prod) < (np->rx_target / 4)) &&
-	    ((np->rx_target *= 2) > np->rx_max_target))
-		np->rx_target = np->rx_max_target;
+	if (((req_prod - queue->rx.sring->rsp_prod) < (queue->rx_target / 4)) &&
+	    ((queue->rx_target *= 2) > queue->rx_max_target))
+		queue->rx_target = queue->rx_max_target;
 
  refill:
 	for (i = 0; ; i++) {
-		skb = __skb_dequeue(&np->rx_batch);
+		skb = __skb_dequeue(&queue->rx_batch);
 		if (skb == NULL)
 			break;
 
-		skb->dev = dev;
+		skb->dev = queue->info->netdev;
 
 		id = xennet_rxidx(req_prod + i);
 
-		BUG_ON(np->rx_skbs[id]);
-		np->rx_skbs[id] = skb;
+		BUG_ON(queue->rx_skbs[id]);
+		queue->rx_skbs[id] = skb;
 
-		ref = gnttab_claim_grant_reference(&np->gref_rx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_rx_head);
 		BUG_ON((signed short)ref < 0);
-		np->grant_rx_ref[id] = ref;
+		queue->grant_rx_ref[id] = ref;
 
 		pfn = page_to_pfn(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 		vaddr = page_address(skb_frag_page(&skb_shinfo(skb)->frags[0]));
 
-		req = RING_GET_REQUEST(&np->rx, req_prod + i);
+		req = RING_GET_REQUEST(&queue->rx, req_prod + i);
 		gnttab_grant_foreign_access_ref(ref,
-						np->xbdev->otherend_id,
+						queue->info->xbdev->otherend_id,
 						pfn_to_mfn(pfn),
 						0);
 
@@ -337,72 +355,76 @@ no_skb:
 	wmb();		/* barrier so backend seens requests */
 
 	/* Above is a suitable barrier to ensure backend will see requests. */
-	np->rx.req_prod_pvt = req_prod + i;
+	queue->rx.req_prod_pvt = req_prod + i;
  push:
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->rx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->rx, notify);
 	if (notify)
-		notify_remote_via_irq(np->rx_irq);
+		notify_remote_via_irq(queue->rx_irq);
 }
 
 static int xennet_open(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-
-	napi_enable(&np->napi);
-
-	spin_lock_bh(&np->rx_lock);
-	if (netif_carrier_ok(dev)) {
-		xennet_alloc_rx_buffers(dev);
-		np->rx.sring->rsp_event = np->rx.rsp_cons + 1;
-		if (RING_HAS_UNCONSUMED_RESPONSES(&np->rx))
-			napi_schedule(&np->napi);
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_enable(&queue->napi);
+
+		spin_lock_bh(&queue->rx_lock);
+		if (netif_carrier_ok(dev)) {
+			xennet_alloc_rx_buffers(queue);
+			queue->rx.sring->rsp_event = queue->rx.rsp_cons + 1;
+			if (RING_HAS_UNCONSUMED_RESPONSES(&queue->rx))
+				napi_schedule(&queue->napi);
+		}
+		spin_unlock_bh(&queue->rx_lock);
 	}
-	spin_unlock_bh(&np->rx_lock);
 
-	netif_start_queue(dev);
+	netif_tx_start_all_queues(dev);
 
 	return 0;
 }
 
-static void xennet_tx_buf_gc(struct net_device *dev)
+static void xennet_tx_buf_gc(struct netfront_queue *queue)
 {
 	RING_IDX cons, prod;
 	unsigned short id;
-	struct netfront_info *np = netdev_priv(dev);
 	struct sk_buff *skb;
 
-	BUG_ON(!netif_carrier_ok(dev));
+	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
-		prod = np->tx.sring->rsp_prod;
+		prod = queue->tx.sring->rsp_prod;
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
-		for (cons = np->tx.rsp_cons; cons != prod; cons++) {
+		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
 			struct xen_netif_tx_response *txrsp;
 
-			txrsp = RING_GET_RESPONSE(&np->tx, cons);
+			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
 			if (txrsp->status == XEN_NETIF_RSP_NULL)
 				continue;
 
 			id  = txrsp->id;
-			skb = np->tx_skbs[id].skb;
+			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
-				np->grant_tx_ref[id]) != 0)) {
+				queue->grant_tx_ref[id]) != 0)) {
 				pr_alert("%s: warning -- grant still in use by backend domain\n",
 					 __func__);
 				BUG();
 			}
 			gnttab_end_foreign_access_ref(
-				np->grant_tx_ref[id], GNTMAP_readonly);
+				queue->grant_tx_ref[id], GNTMAP_readonly);
 			gnttab_release_grant_reference(
-				&np->gref_tx_head, np->grant_tx_ref[id]);
-			np->grant_tx_ref[id] = GRANT_INVALID_REF;
-			np->grant_tx_page[id] = NULL;
-			add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, id);
+				&queue->gref_tx_head, queue->grant_tx_ref[id]);
+			queue->grant_tx_ref[id] = GRANT_INVALID_REF;
+			queue->grant_tx_page[id] = NULL;
+			add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, id);
 			dev_kfree_skb_irq(skb);
 		}
 
-		np->tx.rsp_cons = prod;
+		queue->tx.rsp_cons = prod;
 
 		/*
 		 * Set a new event, then check for race with update of tx_cons.
@@ -412,21 +434,20 @@ static void xennet_tx_buf_gc(struct net_device *dev)
 		 * data is outstanding: in such cases notification from Xen is
 		 * likely to be the only kick that we'll get.
 		 */
-		np->tx.sring->rsp_event =
-			prod + ((np->tx.sring->req_prod - prod) >> 1) + 1;
+		queue->tx.sring->rsp_event =
+			prod + ((queue->tx.sring->req_prod - prod) >> 1) + 1;
 		mb();		/* update shared area */
-	} while ((cons == prod) && (prod != np->tx.sring->rsp_prod));
+	} while ((cons == prod) && (prod != queue->tx.sring->rsp_prod));
 
-	xennet_maybe_wake_tx(dev);
+	xennet_maybe_wake_tx(queue);
 }
 
-static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
+static void xennet_make_frags(struct sk_buff *skb, struct netfront_queue *queue,
 			      struct xen_netif_tx_request *tx)
 {
-	struct netfront_info *np = netdev_priv(dev);
 	char *data = skb->data;
 	unsigned long mfn;
-	RING_IDX prod = np->tx.req_prod_pvt;
+	RING_IDX prod = queue->tx.req_prod_pvt;
 	int frags = skb_shinfo(skb)->nr_frags;
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
@@ -443,19 +464,19 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		data += tx->size;
 		offset = 0;
 
-		id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-		np->tx_skbs[id].skb = skb_get(skb);
-		tx = RING_GET_REQUEST(&np->tx, prod++);
+		id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+		queue->tx_skbs[id].skb = skb_get(skb);
+		tx = RING_GET_REQUEST(&queue->tx, prod++);
 		tx->id = id;
-		ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+		ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 		BUG_ON((signed short)ref < 0);
 
 		mfn = virt_to_mfn(data);
-		gnttab_grant_foreign_access_ref(ref, np->xbdev->otherend_id,
+		gnttab_grant_foreign_access_ref(ref, queue->info->xbdev->otherend_id,
 						mfn, GNTMAP_readonly);
 
-		np->grant_tx_page[id] = virt_to_page(data);
-		tx->gref = np->grant_tx_ref[id] = ref;
+		queue->grant_tx_page[id] = virt_to_page(data);
+		tx->gref = queue->grant_tx_ref[id] = ref;
 		tx->offset = offset;
 		tx->size = len;
 		tx->flags = 0;
@@ -487,21 +508,21 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 
 			tx->flags |= XEN_NETTXF_more_data;
 
-			id = get_id_from_freelist(&np->tx_skb_freelist,
-						  np->tx_skbs);
-			np->tx_skbs[id].skb = skb_get(skb);
-			tx = RING_GET_REQUEST(&np->tx, prod++);
+			id = get_id_from_freelist(&queue->tx_skb_freelist,
+						  queue->tx_skbs);
+			queue->tx_skbs[id].skb = skb_get(skb);
+			tx = RING_GET_REQUEST(&queue->tx, prod++);
 			tx->id = id;
-			ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+			ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 			BUG_ON((signed short)ref < 0);
 
 			mfn = pfn_to_mfn(page_to_pfn(page));
 			gnttab_grant_foreign_access_ref(ref,
-							np->xbdev->otherend_id,
+							queue->info->xbdev->otherend_id,
 							mfn, GNTMAP_readonly);
 
-			np->grant_tx_page[id] = page;
-			tx->gref = np->grant_tx_ref[id] = ref;
+			queue->grant_tx_page[id] = page;
+			tx->gref = queue->grant_tx_ref[id] = ref;
 			tx->offset = offset;
 			tx->size = bytes;
 			tx->flags = 0;
@@ -518,7 +539,7 @@ static void xennet_make_frags(struct sk_buff *skb, struct net_device *dev,
 		}
 	}
 
-	np->tx.req_prod_pvt = prod;
+	queue->tx.req_prod_pvt = prod;
 }
 
 /*
@@ -544,6 +565,12 @@ static int xennet_count_skb_frag_slots(struct sk_buff *skb)
 	return pages;
 }
 
+static u16 xennet_select_queue(struct net_device *dev, struct sk_buff *skb)
+{
+	/* Stub for later implementation of queue selection */
+	return 0;
+}
+
 static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	unsigned short id;
@@ -559,6 +586,15 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	unsigned int offset = offset_in_page(data);
 	unsigned int len = skb_headlen(skb);
 	unsigned long flags;
+	struct netfront_queue *queue = NULL;
+	u16 queue_index;
+
+	/* Drop the packet if no queues are set up */
+	if (np->num_queues < 1)
+		goto drop;
+	/* Determine which queue to transmit this SKB on */
+	queue_index = skb_get_queue_mapping(skb);
+	queue = &np->queues[queue_index];
 
 	/* If skb->len is too big for wire format, drop skb and alert
 	 * user about misconfiguration.
@@ -578,30 +614,30 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		goto drop;
 	}
 
-	spin_lock_irqsave(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
 
 	if (unlikely(!netif_carrier_ok(dev) ||
 		     (slots > 1 && !xennet_can_sg(dev)) ||
 		     netif_needs_gso(skb, netif_skb_features(skb)))) {
-		spin_unlock_irqrestore(&np->tx_lock, flags);
+		spin_unlock_irqrestore(&queue->tx_lock, flags);
 		goto drop;
 	}
 
-	i = np->tx.req_prod_pvt;
+	i = queue->tx.req_prod_pvt;
 
-	id = get_id_from_freelist(&np->tx_skb_freelist, np->tx_skbs);
-	np->tx_skbs[id].skb = skb;
+	id = get_id_from_freelist(&queue->tx_skb_freelist, queue->tx_skbs);
+	queue->tx_skbs[id].skb = skb;
 
-	tx = RING_GET_REQUEST(&np->tx, i);
+	tx = RING_GET_REQUEST(&queue->tx, i);
 
 	tx->id   = id;
-	ref = gnttab_claim_grant_reference(&np->gref_tx_head);
+	ref = gnttab_claim_grant_reference(&queue->gref_tx_head);
 	BUG_ON((signed short)ref < 0);
 	mfn = virt_to_mfn(data);
 	gnttab_grant_foreign_access_ref(
-		ref, np->xbdev->otherend_id, mfn, GNTMAP_readonly);
-	np->grant_tx_page[id] = virt_to_page(data);
-	tx->gref = np->grant_tx_ref[id] = ref;
+		ref, queue->info->xbdev->otherend_id, mfn, GNTMAP_readonly);
+	queue->grant_tx_page[id] = virt_to_page(data);
+	tx->gref = queue->grant_tx_ref[id] = ref;
 	tx->offset = offset;
 	tx->size = len;
 
@@ -617,7 +653,7 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		struct xen_netif_extra_info *gso;
 
 		gso = (struct xen_netif_extra_info *)
-			RING_GET_REQUEST(&np->tx, ++i);
+			RING_GET_REQUEST(&queue->tx, ++i);
 
 		tx->flags |= XEN_NETTXF_extra_info;
 
@@ -632,14 +668,14 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 		gso->flags = 0;
 	}
 
-	np->tx.req_prod_pvt = i + 1;
+	queue->tx.req_prod_pvt = i + 1;
 
-	xennet_make_frags(skb, dev, tx);
+	xennet_make_frags(skb, queue, tx);
 	tx->size = skb->len;
 
-	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&np->tx, notify);
+	RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&queue->tx, notify);
 	if (notify)
-		notify_remote_via_irq(np->tx_irq);
+		notify_remote_via_irq(queue->tx_irq);
 
 	u64_stats_update_begin(&stats->syncp);
 	stats->tx_bytes += skb->len;
@@ -647,12 +683,12 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 	u64_stats_update_end(&stats->syncp);
 
 	/* Note: It is not safe to access skb after xennet_tx_buf_gc()! */
-	xennet_tx_buf_gc(dev);
+	xennet_tx_buf_gc(queue);
 
-	if (!netfront_tx_slot_available(np))
-		netif_stop_queue(dev);
+	if (!netfront_tx_slot_available(queue))
+		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
 
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return NETDEV_TX_OK;
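[Editorial aside, not part of the patch: the xmit path above picks a queue via skb_get_queue_mapping() and then takes only that queue's tx_lock. For readers new to multi-queue transmit, the mechanic can be sketched in plain userspace C — all names here (struct nic, select_queue, xmit) are invented for illustration, not the driver's API:]

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the driver's per-queue TX state. */
struct tx_queue {
	unsigned int id;
	unsigned int pending;	/* packets queued but not yet completed */
};

struct nic {
	unsigned int num_queues;
	struct tx_queue queues[4];
};

/* Mimics skb_get_queue_mapping(): the stack has already hashed the
 * flow; here we derive the queue index from a flow hash directly. */
static unsigned int select_queue(const struct nic *nic, unsigned int flow_hash)
{
	return flow_hash % nic->num_queues;
}

/* Mimics the xmit path: drop if no queues are set up, otherwise
 * enqueue on the selected queue only (only its lock would be taken). */
static int xmit(struct nic *nic, unsigned int flow_hash)
{
	struct tx_queue *q;

	if (nic->num_queues < 1)
		return -1;		/* drop */
	q = &nic->queues[select_queue(nic, flow_hash)];
	q->pending++;
	return (int)q->id;
}
```

[Because each packet touches exactly one queue's state, two CPUs transmitting different flows never contend on the same tx_lock — which is the point of the conversion.]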
 
@@ -665,32 +701,37 @@ static int xennet_start_xmit(struct sk_buff *skb, struct net_device *dev)
 static int xennet_close(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	netif_stop_queue(np->netdev);
-	napi_disable(&np->napi);
+	unsigned int i;
+	struct netfront_queue *queue;
+	netif_tx_stop_all_queues(np->netdev);
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		napi_disable(&queue->napi);
+	}
 	return 0;
 }
 
-static void xennet_move_rx_slot(struct netfront_info *np, struct sk_buff *skb,
+static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
-	int new = xennet_rxidx(np->rx.req_prod_pvt);
-
-	BUG_ON(np->rx_skbs[new]);
-	np->rx_skbs[new] = skb;
-	np->grant_rx_ref[new] = ref;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->id = new;
-	RING_GET_REQUEST(&np->rx, np->rx.req_prod_pvt)->gref = ref;
-	np->rx.req_prod_pvt++;
+	int new = xennet_rxidx(queue->rx.req_prod_pvt);
+
+	BUG_ON(queue->rx_skbs[new]);
+	queue->rx_skbs[new] = skb;
+	queue->grant_rx_ref[new] = ref;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->id = new;
+	RING_GET_REQUEST(&queue->rx, queue->rx.req_prod_pvt)->gref = ref;
+	queue->rx.req_prod_pvt++;
 }
 
-static int xennet_get_extras(struct netfront_info *np,
+static int xennet_get_extras(struct netfront_queue *queue,
 			     struct xen_netif_extra_info *extras,
 			     RING_IDX rp)
 
 {
 	struct xen_netif_extra_info *extra;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
 
 	do {
@@ -705,7 +746,7 @@ static int xennet_get_extras(struct netfront_info *np,
 		}
 
 		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 
 		if (unlikely(!extra->type ||
 			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
@@ -718,33 +759,33 @@ static int xennet_get_extras(struct netfront_info *np,
 			       sizeof(*extra));
 		}
 
-		skb = xennet_get_rx_skb(np, cons);
-		ref = xennet_get_rx_ref(np, cons);
-		xennet_move_rx_slot(np, skb, ref);
+		skb = xennet_get_rx_skb(queue, cons);
+		ref = xennet_get_rx_ref(queue, cons);
+		xennet_move_rx_slot(queue, skb, ref);
 	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
-	np->rx.rsp_cons = cons;
+	queue->rx.rsp_cons = cons;
 	return err;
 }
 
-static int xennet_get_responses(struct netfront_info *np,
+static int xennet_get_responses(struct netfront_queue *queue,
 				struct netfront_rx_info *rinfo, RING_IDX rp,
 				struct sk_buff_head *list)
 {
 	struct xen_netif_rx_response *rx = &rinfo->rx;
 	struct xen_netif_extra_info *extras = rinfo->extras;
-	struct device *dev = &np->netdev->dev;
-	RING_IDX cons = np->rx.rsp_cons;
-	struct sk_buff *skb = xennet_get_rx_skb(np, cons);
-	grant_ref_t ref = xennet_get_rx_ref(np, cons);
+	struct device *dev = &queue->info->netdev->dev;
+	RING_IDX cons = queue->rx.rsp_cons;
+	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
+	grant_ref_t ref = xennet_get_rx_ref(queue, cons);
 	int max = MAX_SKB_FRAGS + (rx->status <= RX_COPY_THRESHOLD);
 	int slots = 1;
 	int err = 0;
 	unsigned long ret;
 
 	if (rx->flags & XEN_NETRXF_extra_info) {
-		err = xennet_get_extras(np, extras, rp);
-		cons = np->rx.rsp_cons;
+		err = xennet_get_extras(queue, extras, rp);
+		cons = queue->rx.rsp_cons;
 	}
 
 	for (;;) {
@@ -753,7 +794,7 @@ static int xennet_get_responses(struct netfront_info *np,
 			if (net_ratelimit())
 				dev_warn(dev, "rx->offset: %x, size: %u\n",
 					 rx->offset, rx->status);
-			xennet_move_rx_slot(np, skb, ref);
+			xennet_move_rx_slot(queue, skb, ref);
 			err = -EINVAL;
 			goto next;
 		}
@@ -774,7 +815,7 @@ static int xennet_get_responses(struct netfront_info *np,
 		ret = gnttab_end_foreign_access_ref(ref, 0);
 		BUG_ON(!ret);
 
-		gnttab_release_grant_reference(&np->gref_rx_head, ref);
+		gnttab_release_grant_reference(&queue->gref_rx_head, ref);
 
 		__skb_queue_tail(list, skb);
 
@@ -789,9 +830,9 @@ next:
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&np->rx, cons + slots);
-		skb = xennet_get_rx_skb(np, cons + slots);
-		ref = xennet_get_rx_ref(np, cons + slots);
+		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		skb = xennet_get_rx_skb(queue, cons + slots);
+		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
 	}
 
@@ -802,7 +843,7 @@ next:
 	}
 
 	if (unlikely(err))
-		np->rx.rsp_cons = cons + slots;
+		queue->rx.rsp_cons = cons + slots;
 
 	return err;
 }
@@ -836,17 +877,17 @@ static int xennet_set_skb_gso(struct sk_buff *skb,
 	return 0;
 }
 
-static RING_IDX xennet_fill_frags(struct netfront_info *np,
+static RING_IDX xennet_fill_frags(struct netfront_queue *queue,
 				  struct sk_buff *skb,
 				  struct sk_buff_head *list)
 {
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
-	RING_IDX cons = np->rx.rsp_cons;
+	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
 		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&np->rx, ++cons);
+			RING_GET_RESPONSE(&queue->rx, ++cons);
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
 		if (shinfo->nr_frags == MAX_SKB_FRAGS) {
@@ -879,7 +920,7 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	 */
 	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
 		struct netfront_info *np = netdev_priv(dev);
-		np->rx_gso_checksum_fixup++;
+		atomic_inc(&np->rx_gso_checksum_fixup);
 		skb->ip_summed = CHECKSUM_PARTIAL;
 		recalculate_partial_csum = true;
 	}
@@ -891,11 +932,10 @@ static int checksum_setup(struct net_device *dev, struct sk_buff *skb)
 	return skb_checksum_setup(skb, recalculate_partial_csum);
 }
 
-static int handle_incoming_queue(struct net_device *dev,
+static int handle_incoming_queue(struct netfront_queue *queue,
 				 struct sk_buff_head *rxq)
 {
-	struct netfront_info *np = netdev_priv(dev);
-	struct netfront_stats *stats = this_cpu_ptr(np->stats);
+	struct netfront_stats *stats = this_cpu_ptr(queue->info->stats);
 	int packets_dropped = 0;
 	struct sk_buff *skb;
 
@@ -906,12 +946,12 @@ static int handle_incoming_queue(struct net_device *dev,
 			__pskb_pull_tail(skb, pull_to - skb_headlen(skb));
 
 		/* Ethernet work: Delayed to here as it peeks the header. */
-		skb->protocol = eth_type_trans(skb, dev);
+		skb->protocol = eth_type_trans(skb, queue->info->netdev);
 
-		if (checksum_setup(dev, skb)) {
+		if (checksum_setup(queue->info->netdev, skb)) {
 			kfree_skb(skb);
 			packets_dropped++;
-			dev->stats.rx_errors++;
+			queue->info->netdev->stats.rx_errors++;
 			continue;
 		}
 
@@ -921,7 +961,7 @@ static int handle_incoming_queue(struct net_device *dev,
 		u64_stats_update_end(&stats->syncp);
 
 		/* Pass it up. */
-		napi_gro_receive(&np->napi, skb);
+		napi_gro_receive(&queue->napi, skb);
 	}
 
 	return packets_dropped;
@@ -929,8 +969,8 @@ static int handle_incoming_queue(struct net_device *dev,
 
 static int xennet_poll(struct napi_struct *napi, int budget)
 {
-	struct netfront_info *np = container_of(napi, struct netfront_info, napi);
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = container_of(napi, struct netfront_queue, napi);
+	struct net_device *dev = queue->info->netdev;
 	struct sk_buff *skb;
 	struct netfront_rx_info rinfo;
 	struct xen_netif_rx_response *rx = &rinfo.rx;
@@ -943,29 +983,29 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	unsigned long flags;
 	int err;
 
-	spin_lock(&np->rx_lock);
+	spin_lock(&queue->rx_lock);
 
 	skb_queue_head_init(&rxq);
 	skb_queue_head_init(&errq);
 	skb_queue_head_init(&tmpq);
 
-	rp = np->rx.sring->rsp_prod;
+	rp = queue->rx.sring->rsp_prod;
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
-	i = np->rx.rsp_cons;
+	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&np->rx, i), sizeof(*rx));
+		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
 		memset(extras, 0, sizeof(rinfo.extras));
 
-		err = xennet_get_responses(np, &rinfo, rp, &tmpq);
+		err = xennet_get_responses(queue, &rinfo, rp, &tmpq);
 
 		if (unlikely(err)) {
 err:
 			while ((skb = __skb_dequeue(&tmpq)))
 				__skb_queue_tail(&errq, skb);
 			dev->stats.rx_errors++;
-			i = np->rx.rsp_cons;
+			i = queue->rx.rsp_cons;
 			continue;
 		}
 
@@ -977,7 +1017,7 @@ err:
 
 			if (unlikely(xennet_set_skb_gso(skb, gso))) {
 				__skb_queue_head(&tmpq, skb);
-				np->rx.rsp_cons += skb_queue_len(&tmpq);
+				queue->rx.rsp_cons += skb_queue_len(&tmpq);
 				goto err;
 			}
 		}
@@ -991,7 +1031,7 @@ err:
 		skb->data_len = rx->status;
 		skb->len += rx->status;
 
-		i = xennet_fill_frags(np, skb, &tmpq);
+		i = xennet_fill_frags(queue, skb, &tmpq);
 
 		if (rx->flags & XEN_NETRXF_csum_blank)
 			skb->ip_summed = CHECKSUM_PARTIAL;
@@ -1000,22 +1040,22 @@ err:
 
 		__skb_queue_tail(&rxq, skb);
 
-		np->rx.rsp_cons = ++i;
+		queue->rx.rsp_cons = ++i;
 		work_done++;
 	}
 
 	__skb_queue_purge(&errq);
 
-	work_done -= handle_incoming_queue(dev, &rxq);
+	work_done -= handle_incoming_queue(queue, &rxq);
 
 	/* If we get a callback with very few responses, reduce fill target. */
 	/* NB. Note exponential increase, linear decrease. */
-	if (((np->rx.req_prod_pvt - np->rx.sring->rsp_prod) >
-	     ((3*np->rx_target) / 4)) &&
-	    (--np->rx_target < np->rx_min_target))
-		np->rx_target = np->rx_min_target;
+	if (((queue->rx.req_prod_pvt - queue->rx.sring->rsp_prod) >
+	     ((3*queue->rx_target) / 4)) &&
+	    (--queue->rx_target < queue->rx_min_target))
+		queue->rx_target = queue->rx_min_target;
 
-	xennet_alloc_rx_buffers(dev);
+	xennet_alloc_rx_buffers(queue);
 
 	if (work_done < budget) {
 		int more_to_do = 0;
@@ -1024,14 +1064,14 @@ err:
 
 		local_irq_save(flags);
 
-		RING_FINAL_CHECK_FOR_RESPONSES(&np->rx, more_to_do);
+		RING_FINAL_CHECK_FOR_RESPONSES(&queue->rx, more_to_do);
 		if (!more_to_do)
 			__napi_complete(napi);
 
 		local_irq_restore(flags);
 	}
 
-	spin_unlock(&np->rx_lock);
+	spin_unlock(&queue->rx_lock);
 
 	return work_done;
 }
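[Editorial aside, not part of the patch: the fill-target heuristic at the end of xennet_poll() — shrink the target by one when more than three-quarters of it is still outstanding, while the refill path grows it by doubling — is easy to get backwards. A standalone model of both sides, with invented names (the real fields are rx_target/rx_min_target/rx_max_target and the growth happens in xennet_alloc_rx_buffers()):]

```c
#include <assert.h>

/* Invented stand-in for the queue's RX fill-target state. */
struct rx_state {
	unsigned int req_prod;	 /* requests produced (req_prod_pvt) */
	unsigned int rsp_prod;	 /* responses from backend (sring->rsp_prod) */
	unsigned int target;	 /* current fill target */
	unsigned int min_target; /* floor */
	unsigned int max_target; /* ceiling */
};

/* Mirrors the end-of-poll check: if more than 3/4 of the target is
 * still outstanding, decrease the target linearly, clamped at the floor. */
static void maybe_shrink_target(struct rx_state *s)
{
	if ((s->req_prod - s->rsp_prod) > (3 * s->target) / 4 &&
	    --s->target < s->min_target)
		s->target = s->min_target;
}

/* The refill side grows exponentially (doubling), clamped at the
 * ceiling — hence the "exponential increase, linear decrease" comment. */
static void grow_target(struct rx_state *s)
{
	s->target *= 2;
	if (s->target > s->max_target)
		s->target = s->max_target;
}
```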
@@ -1079,43 +1119,43 @@ static struct rtnl_link_stats64 *xennet_get_stats64(struct net_device *dev,
 	return tot;
 }
 
-static void xennet_release_tx_bufs(struct netfront_info *np)
+static void xennet_release_tx_bufs(struct netfront_queue *queue)
 {
 	struct sk_buff *skb;
 	int i;
 
 	for (i = 0; i < NET_TX_RING_SIZE; i++) {
 		/* Skip over entries which are actually freelist references */
-		if (skb_entry_is_link(&np->tx_skbs[i]))
+		if (skb_entry_is_link(&queue->tx_skbs[i]))
 			continue;
 
-		skb = np->tx_skbs[i].skb;
-		get_page(np->grant_tx_page[i]);
-		gnttab_end_foreign_access(np->grant_tx_ref[i],
+		skb = queue->tx_skbs[i].skb;
+		get_page(queue->grant_tx_page[i]);
+		gnttab_end_foreign_access(queue->grant_tx_ref[i],
 					  GNTMAP_readonly,
-					  (unsigned long)page_address(np->grant_tx_page[i]));
-		np->grant_tx_page[i] = NULL;
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-		add_id_to_freelist(&np->tx_skb_freelist, np->tx_skbs, i);
+					  (unsigned long)page_address(queue->grant_tx_page[i]));
+		queue->grant_tx_page[i] = NULL;
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+		add_id_to_freelist(&queue->tx_skb_freelist, queue->tx_skbs, i);
 		dev_kfree_skb_irq(skb);
 	}
 }
 
-static void xennet_release_rx_bufs(struct netfront_info *np)
+static void xennet_release_rx_bufs(struct netfront_queue *queue)
 {
 	int id, ref;
 
-	spin_lock_bh(&np->rx_lock);
+	spin_lock_bh(&queue->rx_lock);
 
 	for (id = 0; id < NET_RX_RING_SIZE; id++) {
 		struct sk_buff *skb;
 		struct page *page;
 
-		skb = np->rx_skbs[id];
+		skb = queue->rx_skbs[id];
 		if (!skb)
 			continue;
 
-		ref = np->grant_rx_ref[id];
+		ref = queue->grant_rx_ref[id];
 		if (ref == GRANT_INVALID_REF)
 			continue;
 
@@ -1127,21 +1167,27 @@ static void xennet_release_rx_bufs(struct netfront_info *np)
 		get_page(page);
 		gnttab_end_foreign_access(ref, 0,
 					  (unsigned long)page_address(page));
-		np->grant_rx_ref[id] = GRANT_INVALID_REF;
+		queue->grant_rx_ref[id] = GRANT_INVALID_REF;
 
 		kfree_skb(skb);
 	}
 
-	spin_unlock_bh(&np->rx_lock);
+	spin_unlock_bh(&queue->rx_lock);
 }
 
 static void xennet_uninit(struct net_device *dev)
 {
 	struct netfront_info *np = netdev_priv(dev);
-	xennet_release_tx_bufs(np);
-	xennet_release_rx_bufs(np);
-	gnttab_free_grant_references(np->gref_tx_head);
-	gnttab_free_grant_references(np->gref_rx_head);
+	struct netfront_queue *queue;
+	unsigned int i;
+
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		xennet_release_tx_bufs(queue);
+		xennet_release_rx_bufs(queue);
+		gnttab_free_grant_references(queue->gref_tx_head);
+		gnttab_free_grant_references(queue->gref_rx_head);
+	}
 }
 
 static netdev_features_t xennet_fix_features(struct net_device *dev,
@@ -1202,25 +1248,24 @@ static int xennet_set_features(struct net_device *dev,
 
 static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
-	spin_lock_irqsave(&np->tx_lock, flags);
-	xennet_tx_buf_gc(dev);
-	spin_unlock_irqrestore(&np->tx_lock, flags);
+	spin_lock_irqsave(&queue->tx_lock, flags);
+	xennet_tx_buf_gc(queue);
+	spin_unlock_irqrestore(&queue->tx_lock, flags);
 
 	return IRQ_HANDLED;
 }
 
 static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 {
-	struct netfront_info *np = dev_id;
-	struct net_device *dev = np->netdev;
+	struct netfront_queue *queue = dev_id;
+	struct net_device *dev = queue->info->netdev;
 
 	if (likely(netif_carrier_ok(dev) &&
-		   RING_HAS_UNCONSUMED_RESPONSES(&np->rx)))
-			napi_schedule(&np->napi);
+		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
+			napi_schedule(&queue->napi);
 
 	return IRQ_HANDLED;
 }
@@ -1235,7 +1280,11 @@ static irqreturn_t xennet_interrupt(int irq, void *dev_id)
 #ifdef CONFIG_NET_POLL_CONTROLLER
 static void xennet_poll_controller(struct net_device *dev)
 {
-	xennet_interrupt(0, dev);
+	/* Poll each queue */
+	struct netfront_info *info = netdev_priv(dev);
+	unsigned int i;
+	for (i = 0; i < info->num_queues; ++i)
+		xennet_interrupt(0, &info->queues[i]);
 }
 #endif
 
@@ -1250,6 +1299,7 @@ static const struct net_device_ops xennet_netdev_ops = {
 	.ndo_validate_addr   = eth_validate_addr,
 	.ndo_fix_features    = xennet_fix_features,
 	.ndo_set_features    = xennet_set_features,
+	.ndo_select_queue    = xennet_select_queue,
 #ifdef CONFIG_NET_POLL_CONTROLLER
 	.ndo_poll_controller = xennet_poll_controller,
 #endif
@@ -1257,66 +1307,27 @@ static const struct net_device_ops xennet_netdev_ops = {
 
 static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 {
-	int i, err;
+	int err;
 	struct net_device *netdev;
 	struct netfront_info *np;
 
-	netdev = alloc_etherdev(sizeof(struct netfront_info));
+	netdev = alloc_etherdev_mq(sizeof(struct netfront_info), 1);
 	if (!netdev)
 		return ERR_PTR(-ENOMEM);
 
 	np                   = netdev_priv(netdev);
 	np->xbdev            = dev;
 
-	spin_lock_init(&np->tx_lock);
-	spin_lock_init(&np->rx_lock);
-
-	skb_queue_head_init(&np->rx_batch);
-	np->rx_target     = RX_DFL_MIN_TARGET;
-	np->rx_min_target = RX_DFL_MIN_TARGET;
-	np->rx_max_target = RX_MAX_TARGET;
-
-	init_timer(&np->rx_refill_timer);
-	np->rx_refill_timer.data = (unsigned long)netdev;
-	np->rx_refill_timer.function = rx_refill_timeout;
+	np->num_queues = 0;
+	np->queues = NULL;
 
 	err = -ENOMEM;
 	np->stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->stats == NULL)
 		goto exit;
 
-	/* Initialise tx_skbs as a free chain containing every entry. */
-	np->tx_skb_freelist = 0;
-	for (i = 0; i < NET_TX_RING_SIZE; i++) {
-		skb_entry_set_link(&np->tx_skbs[i], i+1);
-		np->grant_tx_ref[i] = GRANT_INVALID_REF;
-	}
-
-	/* Clear out rx_skbs */
-	for (i = 0; i < NET_RX_RING_SIZE; i++) {
-		np->rx_skbs[i] = NULL;
-		np->grant_rx_ref[i] = GRANT_INVALID_REF;
-		np->grant_tx_page[i] = NULL;
-	}
-
-	/* A grant for every tx ring slot */
-	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
-					  &np->gref_tx_head) < 0) {
-		pr_alert("can't alloc tx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_stats;
-	}
-	/* A grant for every rx ring slot */
-	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
-					  &np->gref_rx_head) < 0) {
-		pr_alert("can't alloc rx grant refs\n");
-		err = -ENOMEM;
-		goto exit_free_tx;
-	}
-
 	netdev->netdev_ops	= &xennet_netdev_ops;
 
-	netif_napi_add(netdev, &np->napi, xennet_poll, 64);
 	netdev->features        = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
 				  NETIF_F_GSO_ROBUST;
 	netdev->hw_features	= NETIF_F_SG |
@@ -1342,10 +1353,6 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	return netdev;
 
- exit_free_tx:
-	gnttab_free_grant_references(np->gref_tx_head);
- exit_free_stats:
-	free_percpu(np->stats);
  exit:
 	free_netdev(netdev);
 	return ERR_PTR(err);
@@ -1403,30 +1410,35 @@ static void xennet_end_access(int ref, void *page)
 
 static void xennet_disconnect_backend(struct netfront_info *info)
 {
-	/* Stop old i/f to prevent errors whilst we rebuild the state. */
-	spin_lock_bh(&info->rx_lock);
-	spin_lock_irq(&info->tx_lock);
-	netif_carrier_off(info->netdev);
-	spin_unlock_irq(&info->tx_lock);
-	spin_unlock_bh(&info->rx_lock);
-
-	if (info->tx_irq && (info->tx_irq == info->rx_irq))
-		unbind_from_irqhandler(info->tx_irq, info);
-	if (info->tx_irq && (info->tx_irq != info->rx_irq)) {
-		unbind_from_irqhandler(info->tx_irq, info);
-		unbind_from_irqhandler(info->rx_irq, info);
-	}
-	info->tx_evtchn = info->rx_evtchn = 0;
-	info->tx_irq = info->rx_irq = 0;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
+
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		/* Stop old i/f to prevent errors whilst we rebuild the state. */
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
+		netif_carrier_off(queue->info->netdev);
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+
+		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
+			unbind_from_irqhandler(queue->tx_irq, queue);
+		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
+			unbind_from_irqhandler(queue->tx_irq, queue);
+			unbind_from_irqhandler(queue->rx_irq, queue);
+		}
+		queue->tx_evtchn = queue->rx_evtchn = 0;
+		queue->tx_irq = queue->rx_irq = 0;
 
-	/* End access and free the pages */
-	xennet_end_access(info->tx_ring_ref, info->tx.sring);
-	xennet_end_access(info->rx_ring_ref, info->rx.sring);
+		/* End access and free the pages */
+		xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
+		xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->tx.sring = NULL;
-	info->rx.sring = NULL;
+		queue->tx_ring_ref = GRANT_INVALID_REF;
+		queue->rx_ring_ref = GRANT_INVALID_REF;
+		queue->tx.sring = NULL;
+		queue->rx.sring = NULL;
+	}
 }
 
 /**
@@ -1467,100 +1479,86 @@ static int xen_net_read_mac(struct xenbus_device *dev, u8 mac[])
 	return 0;
 }
 
-static int setup_netfront_single(struct netfront_info *info)
+static int setup_netfront_single(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_interrupt,
-					0, info->netdev->name, info);
+					0, queue->info->netdev->name, queue);
 	if (err < 0)
 		goto bind_fail;
-	info->rx_evtchn = info->tx_evtchn;
-	info->rx_irq = info->tx_irq = err;
+	queue->rx_evtchn = queue->tx_evtchn;
+	queue->rx_irq = queue->tx_irq = err;
 
 	return 0;
 
 bind_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront_split(struct netfront_info *info)
+static int setup_netfront_split(struct netfront_queue *queue)
 {
 	int err;
 
-	err = xenbus_alloc_evtchn(info->xbdev, &info->tx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->tx_evtchn);
 	if (err < 0)
 		goto fail;
-	err = xenbus_alloc_evtchn(info->xbdev, &info->rx_evtchn);
+	err = xenbus_alloc_evtchn(queue->info->xbdev, &queue->rx_evtchn);
 	if (err < 0)
 		goto alloc_rx_evtchn_fail;
 
-	snprintf(info->tx_irq_name, sizeof(info->tx_irq_name),
-		 "%s-tx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->tx_evtchn,
+	snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
+		 "%s-tx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->tx_evtchn,
 					xennet_tx_interrupt,
-					0, info->tx_irq_name, info);
+					0, queue->tx_irq_name, queue);
 	if (err < 0)
 		goto bind_tx_fail;
-	info->tx_irq = err;
+	queue->tx_irq = err;
 
-	snprintf(info->rx_irq_name, sizeof(info->rx_irq_name),
-		 "%s-rx", info->netdev->name);
-	err = bind_evtchn_to_irqhandler(info->rx_evtchn,
+	snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
+		 "%s-rx", queue->name);
+	err = bind_evtchn_to_irqhandler(queue->rx_evtchn,
 					xennet_rx_interrupt,
-					0, info->rx_irq_name, info);
+					0, queue->rx_irq_name, queue);
 	if (err < 0)
 		goto bind_rx_fail;
-	info->rx_irq = err;
+	queue->rx_irq = err;
 
 	return 0;
 
 bind_rx_fail:
-	unbind_from_irqhandler(info->tx_irq, info);
-	info->tx_irq = 0;
+	unbind_from_irqhandler(queue->tx_irq, queue);
+	queue->tx_irq = 0;
 bind_tx_fail:
-	xenbus_free_evtchn(info->xbdev, info->rx_evtchn);
-	info->rx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->rx_evtchn);
+	queue->rx_evtchn = 0;
 alloc_rx_evtchn_fail:
-	xenbus_free_evtchn(info->xbdev, info->tx_evtchn);
-	info->tx_evtchn = 0;
+	xenbus_free_evtchn(queue->info->xbdev, queue->tx_evtchn);
+	queue->tx_evtchn = 0;
 fail:
 	return err;
 }
 
-static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
+static int setup_netfront(struct xenbus_device *dev,
+			struct netfront_queue *queue, unsigned int feature_split_evtchn)
 {
 	struct xen_netif_tx_sring *txs;
 	struct xen_netif_rx_sring *rxs;
 	int err;
-	struct net_device *netdev = info->netdev;
-	unsigned int feature_split_evtchn;
 
-	info->tx_ring_ref = GRANT_INVALID_REF;
-	info->rx_ring_ref = GRANT_INVALID_REF;
-	info->rx.sring = NULL;
-	info->tx.sring = NULL;
-	netdev->irq = 0;
-
-	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
-			   "feature-split-event-channels", "%u",
-			   &feature_split_evtchn);
-	if (err < 0)
-		feature_split_evtchn = 0;
-
-	err = xen_net_read_mac(dev, netdev->dev_addr);
-	if (err) {
-		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
-		goto fail;
-	}
+	queue->tx_ring_ref = GRANT_INVALID_REF;
+	queue->rx_ring_ref = GRANT_INVALID_REF;
+	queue->rx.sring = NULL;
+	queue->tx.sring = NULL;
 
 	txs = (struct xen_netif_tx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!txs) {
@@ -1569,13 +1567,13 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto fail;
 	}
 	SHARED_RING_INIT(txs);
-	FRONT_RING_INIT(&info->tx, txs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->tx, txs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(txs));
 	if (err < 0)
 		goto grant_tx_ring_fail;
+	queue->tx_ring_ref = err;
 
-	info->tx_ring_ref = err;
 	rxs = (struct xen_netif_rx_sring *)get_zeroed_page(GFP_NOIO | __GFP_HIGH);
 	if (!rxs) {
 		err = -ENOMEM;
@@ -1583,21 +1581,21 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 		goto alloc_rx_ring_fail;
 	}
 	SHARED_RING_INIT(rxs);
-	FRONT_RING_INIT(&info->rx, rxs, PAGE_SIZE);
+	FRONT_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
 
 	err = xenbus_grant_ring(dev, virt_to_mfn(rxs));
 	if (err < 0)
 		goto grant_rx_ring_fail;
-	info->rx_ring_ref = err;
+	queue->rx_ring_ref = err;
 
 	if (feature_split_evtchn)
-		err = setup_netfront_split(info);
+		err = setup_netfront_split(queue);
 	/* setup single event channel if
 	 *  a) feature-split-event-channels == 0
 	 *  b) feature-split-event-channels == 1 but failed to setup
 	 */
 	if (!feature_split_evtchn || (feature_split_evtchn && err))
-		err = setup_netfront_single(info);
+		err = setup_netfront_single(queue);
 
 	if (err)
 		goto alloc_evtchn_fail;
@@ -1608,17 +1606,78 @@ static int setup_netfront(struct xenbus_device *dev, struct netfront_info *info)
 	 * granted pages because backend is not accessing it at this point.
 	 */
 alloc_evtchn_fail:
-	gnttab_end_foreign_access_ref(info->rx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->rx_ring_ref, 0);
 grant_rx_ring_fail:
 	free_page((unsigned long)rxs);
 alloc_rx_ring_fail:
-	gnttab_end_foreign_access_ref(info->tx_ring_ref, 0);
+	gnttab_end_foreign_access_ref(queue->tx_ring_ref, 0);
 grant_tx_ring_fail:
 	free_page((unsigned long)txs);
 fail:
 	return err;
 }
 
+/* Queue-specific initialisation
+ * This used to be done in xennet_create_dev() but must now
+ * be run per-queue.
+ */
+static int xennet_init_queue(struct netfront_queue *queue)
+{
+	unsigned short i;
+	int err = 0;
+
+	spin_lock_init(&queue->tx_lock);
+	spin_lock_init(&queue->rx_lock);
+
+	skb_queue_head_init(&queue->rx_batch);
+	queue->rx_target     = RX_DFL_MIN_TARGET;
+	queue->rx_min_target = RX_DFL_MIN_TARGET;
+	queue->rx_max_target = RX_MAX_TARGET;
+
+	init_timer(&queue->rx_refill_timer);
+	queue->rx_refill_timer.data = (unsigned long)queue;
+	queue->rx_refill_timer.function = rx_refill_timeout;
+
+	/* Initialise tx_skbs as a free chain containing every entry. */
+	queue->tx_skb_freelist = 0;
+	for (i = 0; i < NET_TX_RING_SIZE; i++) {
+		skb_entry_set_link(&queue->tx_skbs[i], i+1);
+		queue->grant_tx_ref[i] = GRANT_INVALID_REF;
+	}
+
+	/* Clear out rx_skbs */
+	for (i = 0; i < NET_RX_RING_SIZE; i++) {
+		queue->rx_skbs[i] = NULL;
+		queue->grant_rx_ref[i] = GRANT_INVALID_REF;
+		queue->grant_tx_page[i] = NULL;
+	}
+
+	/* A grant for every tx ring slot */
+	if (gnttab_alloc_grant_references(TX_MAX_TARGET,
+					  &queue->gref_tx_head) < 0) {
+		pr_alert("can't alloc tx grant refs\n");
+		err = -ENOMEM;
+		goto exit;
+	}
+
+	/* A grant for every rx ring slot */
+	if (gnttab_alloc_grant_references(RX_MAX_TARGET,
+					  &queue->gref_rx_head) < 0) {
+		pr_alert("can't alloc rx grant refs\n");
+		err = -ENOMEM;
+		goto exit_free_tx;
+	}
+
+	netif_napi_add(queue->info->netdev, &queue->napi, xennet_poll, 64);
+
+	return 0;
+
+ exit_free_tx:
+	gnttab_free_grant_references(queue->gref_tx_head);
+ exit:
+	return err;
+}
+
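[Editorial aside, not part of the patch: xennet_init_queue() threads tx_skbs into a free chain — each unused slot stores the index of the next free slot, so get_id_from_freelist()/add_id_to_freelist() are O(1) with no extra allocation. A simplified userspace sketch of the idiom (the real driver packs the link into the skb pointer field with a tag bit via skb_entry_set_link(); a plain index array keeps the idea visible):]

```c
#include <assert.h>

#define RING_SIZE 8

/* Simplified free chain: link[i] holds the index of the next free slot. */
struct freelist {
	unsigned int link[RING_SIZE];
	unsigned int head;
};

static void freelist_init(struct freelist *fl)
{
	unsigned int i;

	/* Chain every entry: 0 -> 1 -> 2 -> ... (cf. skb_entry_set_link(i+1)) */
	for (i = 0; i < RING_SIZE; i++)
		fl->link[i] = i + 1;
	fl->head = 0;
}

/* O(1) allocation: pop the head of the chain. */
static unsigned int get_id(struct freelist *fl)
{
	unsigned int id = fl->head;

	fl->head = fl->link[id];
	return id;
}

/* O(1) release: push the id back onto the chain. */
static void add_id(struct freelist *fl, unsigned int id)
{
	fl->link[id] = fl->head;
	fl->head = id;
}
```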
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1626,13 +1685,72 @@ static int talk_to_netback(struct xenbus_device *dev,
 	const char *message;
 	struct xenbus_transaction xbt;
 	int err;
+	unsigned int feature_split_evtchn;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
-	/* Create shared ring, alloc event channel. */
-	err = setup_netfront(dev, info);
-	if (err)
+	info->netdev->irq = 0;
+
+	/* Check feature-split-event-channels */
+	err = xenbus_scanf(XBT_NIL, info->xbdev->otherend,
+			   "feature-split-event-channels", "%u",
+			   &feature_split_evtchn);
+	if (err < 0)
+		feature_split_evtchn = 0;
+
+	/* Read mac addr. */
+	err = xen_net_read_mac(dev, info->netdev->dev_addr);
+	if (err) {
+		xenbus_dev_fatal(dev, err, "parsing %s/mac", dev->nodename);
 		goto out;
+	}
+
+	/* Allocate array of queues */
+	info->queues = kcalloc(1, sizeof(struct netfront_queue), GFP_KERNEL);
+	if (!info->queues) {
+		err = -ENOMEM;
+		goto out;
+	}
+	info->num_queues = 1;
+
+	/* Create shared ring, alloc event channel -- for each queue */
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		queue->id = i;
+		queue->info = info;
+		err = xennet_init_queue(queue);
+		if (err) {
+			/* xennet_init_queue() cleans up after itself on failure,
+			 * but we still have to clean up any previously initialised
+			 * queues. If i > 0, set info->num_queues to i, then goto
+			 * destroy_ring, which calls xennet_disconnect_backend()
+			 * to tidy up.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+		err = setup_netfront(dev, queue, feature_split_evtchn);
+		if (err) {
+			/* As for xennet_init_queue(), setup_netfront() will tidy
+			 * up the current queue on error, but we need to clean up
+			 * those already allocated.
+			 */
+			if (i > 0) {
+				info->num_queues = i;
+				goto destroy_ring;
+			} else {
+				goto out;
+			}
+		}
+	}
 
 again:
+	queue = &info->queues[0]; /* Use first queue only */
+
 	err = xenbus_transaction_start(&xbt);
 	if (err) {
 		xenbus_dev_fatal(dev, err, "starting transaction");
@@ -1640,34 +1758,34 @@ again:
 	}
 
 	err = xenbus_printf(xbt, dev->nodename, "tx-ring-ref", "%u",
-			    info->tx_ring_ref);
+			    queue->tx_ring_ref);
 	if (err) {
 		message = "writing tx ring-ref";
 		goto abort_transaction;
 	}
 	err = xenbus_printf(xbt, dev->nodename, "rx-ring-ref", "%u",
-			    info->rx_ring_ref);
+			    queue->rx_ring_ref);
 	if (err) {
 		message = "writing rx ring-ref";
 		goto abort_transaction;
 	}
 
-	if (info->tx_evtchn == info->rx_evtchn) {
+	if (queue->tx_evtchn == queue->rx_evtchn) {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel", "%u", info->tx_evtchn);
+				    "event-channel", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel";
 			goto abort_transaction;
 		}
 	} else {
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-tx", "%u", info->tx_evtchn);
+				    "event-channel-tx", "%u", queue->tx_evtchn);
 		if (err) {
 			message = "writing event-channel-tx";
 			goto abort_transaction;
 		}
 		err = xenbus_printf(xbt, dev->nodename,
-				    "event-channel-rx", "%u", info->rx_evtchn);
+				    "event-channel-rx", "%u", queue->rx_evtchn);
 		if (err) {
 			message = "writing event-channel-rx";
 			goto abort_transaction;
@@ -1727,6 +1845,9 @@ again:
 	xenbus_dev_fatal(dev, err, "%s", message);
  destroy_ring:
 	xennet_disconnect_backend(info);
+	kfree(info->queues);
+	info->queues = NULL;
+	info->num_queues = 0;
  out:
 	return err;
 }
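[Editorial aside, not part of the patch: the error handling in talk_to_netback() above uses a common kernel pattern — initialise queues 0..n-1 in order, and on failure at index i record how many succeeded (info->num_queues = i) so a single teardown path unwinds exactly those. A self-contained model of the pattern, with invented names and counters standing in for real init/teardown work:]

```c
#include <assert.h>

static int init_calls, teardown_calls;

/* Hypothetical per-resource init: fails at the index given by fail_at. */
static int init_one(int i, int fail_at)
{
	if (i == fail_at)
		return -1;
	init_calls++;
	return 0;
}

static void teardown_one(int i)
{
	(void)i;
	teardown_calls++;
}

/* Initialise up to n resources; on failure, tear down only the first
 * i that were fully set up — mirroring the "num_queues = i" idiom. */
static int init_all(int n, int fail_at)
{
	int i, num_ok = 0;

	init_calls = teardown_calls = 0;
	for (i = 0; i < n; i++) {
		if (init_one(i, fail_at) < 0) {
			num_ok = i;	/* like info->num_queues = i */
			goto destroy;
		}
	}
	return 0;

destroy:
	for (i = 0; i < num_ok; i++)
		teardown_one(i);
	return -1;
}
```

[The key invariant: the count of successfully initialised resources is recorded before jumping to the cleanup label, so the unwind loop never touches a half-built or never-built queue.]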
@@ -1739,6 +1860,8 @@ static int xennet_connect(struct net_device *dev)
 	grant_ref_t ref;
 	struct xen_netif_rx_request *req;
 	unsigned int feature_rx_copy;
+	unsigned int j = 0;
+	struct netfront_queue *queue = NULL;
 
 	err = xenbus_scanf(XBT_NIL, np->xbdev->otherend,
 			   "feature-rx-copy", "%u", &feature_rx_copy);
@@ -1759,36 +1882,40 @@ static int xennet_connect(struct net_device *dev)
 	netdev_update_features(dev);
 	rtnl_unlock();
 
-	spin_lock_bh(&np->rx_lock);
-	spin_lock_irq(&np->tx_lock);
+	/* By now, the queue structures have been set up */
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		spin_lock_bh(&queue->rx_lock);
+		spin_lock_irq(&queue->tx_lock);
 
-	/* Step 1: Discard all pending TX packet fragments. */
-	xennet_release_tx_bufs(np);
+		/* Step 1: Discard all pending TX packet fragments. */
+		xennet_release_tx_bufs(queue);
 
-	/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
-	for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
-		skb_frag_t *frag;
-		const struct page *page;
-		if (!np->rx_skbs[i])
-			continue;
+		/* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
+		for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
+			skb_frag_t *frag;
+			const struct page *page;
+			if (!queue->rx_skbs[i])
+				continue;
 
-		skb = np->rx_skbs[requeue_idx] = xennet_get_rx_skb(np, i);
-		ref = np->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(np, i);
-		req = RING_GET_REQUEST(&np->rx, requeue_idx);
+			skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
+			ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
+			req = RING_GET_REQUEST(&queue->rx, requeue_idx);
 
-		frag = &skb_shinfo(skb)->frags[0];
-		page = skb_frag_page(frag);
-		gnttab_grant_foreign_access_ref(
-			ref, np->xbdev->otherend_id,
-			pfn_to_mfn(page_to_pfn(page)),
-			0);
-		req->gref = ref;
-		req->id   = requeue_idx;
+			frag = &skb_shinfo(skb)->frags[0];
+			page = skb_frag_page(frag);
+			gnttab_grant_foreign_access_ref(
+				ref, queue->info->xbdev->otherend_id,
+				pfn_to_mfn(page_to_pfn(page)),
+				0);
+			req->gref = ref;
+			req->id   = requeue_idx;
 
-		requeue_idx++;
-	}
+			requeue_idx++;
+		}
 
-	np->rx.req_prod_pvt = requeue_idx;
+		queue->rx.req_prod_pvt = requeue_idx;
+	}
 
 	/*
 	 * Step 3: All public and private state should now be sane.  Get
@@ -1797,14 +1924,17 @@ static int xennet_connect(struct net_device *dev)
 	 * packets.
 	 */
 	netif_carrier_on(np->netdev);
-	notify_remote_via_irq(np->tx_irq);
-	if (np->tx_irq != np->rx_irq)
-		notify_remote_via_irq(np->rx_irq);
-	xennet_tx_buf_gc(dev);
-	xennet_alloc_rx_buffers(dev);
-
-	spin_unlock_irq(&np->tx_lock);
-	spin_unlock_bh(&np->rx_lock);
+	for (j = 0; j < np->num_queues; ++j) {
+		queue = &np->queues[j];
+		notify_remote_via_irq(queue->tx_irq);
+		if (queue->tx_irq != queue->rx_irq)
+			notify_remote_via_irq(queue->rx_irq);
+		xennet_tx_buf_gc(queue);
+		xennet_alloc_rx_buffers(queue);
+
+		spin_unlock_irq(&queue->tx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 
 	return 0;
 }
@@ -1877,7 +2007,7 @@ static void xennet_get_ethtool_stats(struct net_device *dev,
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(xennet_stats); i++)
-		data[i] = *(unsigned long *)(np + xennet_stats[i].offset);
+		data[i] = atomic_read((atomic_t *)(np + xennet_stats[i].offset));
 }
 
 static void xennet_get_strings(struct net_device *dev, u32 stringset, u8 * data)
@@ -1909,7 +2039,10 @@ static ssize_t show_rxbuf_min(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_min_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_min_target);
+	else
+		return sprintf(buf, "%u\n", RX_MIN_TARGET);
 }
 
 static ssize_t store_rxbuf_min(struct device *dev,
@@ -1920,6 +2053,8 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i;
+	struct netfront_queue *queue;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1933,16 +2068,19 @@ static ssize_t store_rxbuf_min(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target > np->rx_max_target)
-		np->rx_max_target = target;
-	np->rx_min_target = target;
-	if (target > np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target > queue->rx_max_target)
+			queue->rx_max_target = target;
+		queue->rx_min_target = target;
+		if (target > queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1952,7 +2090,10 @@ static ssize_t show_rxbuf_max(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_max_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_max_target);
+	else
+		return sprintf(buf, "%u\n", RX_MAX_TARGET);
 }
 
 static ssize_t store_rxbuf_max(struct device *dev,
@@ -1963,6 +2104,8 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	struct netfront_info *np = netdev_priv(netdev);
 	char *endp;
 	unsigned long target;
+	unsigned int i = 0;
+	struct netfront_queue *queue = NULL;
 
 	if (!capable(CAP_NET_ADMIN))
 		return -EPERM;
@@ -1976,16 +2119,19 @@ static ssize_t store_rxbuf_max(struct device *dev,
 	if (target > RX_MAX_TARGET)
 		target = RX_MAX_TARGET;
 
-	spin_lock_bh(&np->rx_lock);
-	if (target < np->rx_min_target)
-		np->rx_min_target = target;
-	np->rx_max_target = target;
-	if (target < np->rx_target)
-		np->rx_target = target;
+	for (i = 0; i < np->num_queues; ++i) {
+		queue = &np->queues[i];
+		spin_lock_bh(&queue->rx_lock);
+		if (target < queue->rx_min_target)
+			queue->rx_min_target = target;
+		queue->rx_max_target = target;
+		if (target < queue->rx_target)
+			queue->rx_target = target;
 
-	xennet_alloc_rx_buffers(netdev);
+		xennet_alloc_rx_buffers(queue);
 
-	spin_unlock_bh(&np->rx_lock);
+		spin_unlock_bh(&queue->rx_lock);
+	}
 	return len;
 }
 
@@ -1995,7 +2141,10 @@ static ssize_t show_rxbuf_cur(struct device *dev,
 	struct net_device *netdev = to_net_dev(dev);
 	struct netfront_info *info = netdev_priv(netdev);
 
-	return sprintf(buf, "%u\n", info->rx_target);
+	if (info->num_queues)
+		return sprintf(buf, "%u\n", info->queues[0].rx_target);
+	else
+		return sprintf(buf, "0\n");
 }
 
 static struct device_attribute xennet_attrs[] = {
@@ -2042,6 +2191,8 @@ static const struct xenbus_device_id netfront_ids[] = {
 static int xennet_remove(struct xenbus_device *dev)
 {
 	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	struct netfront_queue *queue = NULL;
+	unsigned int i = 0;
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
@@ -2051,7 +2202,15 @@ static int xennet_remove(struct xenbus_device *dev)
 
 	unregister_netdev(info->netdev);
 
-	del_timer_sync(&info->rx_refill_timer);
+	for (i = 0; i < info->num_queues; ++i) {
+		queue = &info->queues[i];
+		del_timer_sync(&queue->rx_refill_timer);
+	}
+
+	if (info->num_queues) {
+		kfree(info->queues);
+		info->queues = NULL;
+	}
 
 	free_percpu(info->stats);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:34:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwbd-00041p-1s; Mon, 24 Feb 2014 14:34:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHwbb-0003zd-D8
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:34:03 +0000
Received: from [85.158.139.211:46927] by server-9.bemta-5.messagelabs.com id
	F0/C3-11237-A585B035; Mon, 24 Feb 2014 14:34:02 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393252440!1339839!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18077 invoked from network); 24 Feb 2014 14:34:01 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:34:01 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105198717"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:33:59 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:33:59 -0500
Message-ID: <530B5856.7080900@citrix.com>
Date: Mon, 24 Feb 2014 14:33:58 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <20140223212703.511977310@linutronix.de>
	<20140223212738.579581220@linutronix.de>
In-Reply-To: <20140223212738.579581220@linutronix.de>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [patch 21/26] xen: Get rid of the last irq_desc
	abuse
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/02/14 21:40, Thomas Gleixner wrote:
> I'd prefer to drop that completely but there seems to be some mystic
> value to the error printout and the allocation check.

    Warn if any PIRQ cannot be bound to an event channel.  Remove an
    unnecessary test for !desc in xen_destroy_irq() since the only caller
    will only do so if the irq was previously allocated.

> --- tip.orig/drivers/xen/events/events_base.c
> +++ tip/drivers/xen/events/events_base.c
[...]
> @@ -535,7 +528,7 @@ static unsigned int __startup_pirq(unsig
>  					BIND_PIRQ__WILL_SHARE : 0;
>  	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind_pirq);
>  	if (rc != 0) {
> -		if (!probing_irq(irq))
> +		if (!data || irqd_irq_has_action(data))
>  			pr_info("Failed to obtain physical IRQ %d\n", irq);

Remove this if and change the pr_info() to a pr_warn().

This hypercall never fails in practice, but it's still useful to have the
message in case it does on some systems.

>  		return 0;
>  	}
> @@ -769,15 +762,13 @@ error_irq:
>  
>  int xen_destroy_irq(int irq)
>  {
> -	struct irq_desc *desc;
>  	struct physdev_unmap_pirq unmap_irq;
>  	struct irq_info *info = info_for_irq(irq);
>  	int rc = -ENOENT;
>  
>  	mutex_lock(&irq_mapping_update_lock);
>  
> -	desc = irq_to_desc(irq);
> -	if (!desc)
> +	if (!irq_is_allocated(irq))
>  		goto out;

Remove this test.  The only caller of xen_destroy_irq() will only do
so if the irq was previously fully setup.

I think this means you don't need to introduce the irqd_irq_has_action()
and irq_is_allocated() helpers.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:46:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwn6-0005BY-I2; Mon, 24 Feb 2014 14:45:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHwn5-0005BT-2K
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:45:55 +0000
Received: from [193.109.254.147:23336] by server-3.bemta-14.messagelabs.com id
	16/89-00432-22B5B035; Mon, 24 Feb 2014 14:45:54 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393253152!1161958!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16336 invoked from network); 24 Feb 2014 14:45:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:45:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103548864"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:45:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:45:51 -0500
Message-ID: <1393253150.16570.79.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 14:45:50 +0000
In-Reply-To: <1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
	deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 14:19 +0000, Ian Jackson wrote:
> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
> The result on Linux is that the process always deadlocks before
> returning from this function.
> 
> This is used by xl's console child.  So, the ultimate effect is that
> xl with pygrub does not manage to connect to the pygrub console.
> This beahviour was reported by Michael Young in Xen 4.4.0 RC5.

"behaviour".

Michael reported this earlier on -rc2 as well but it fell through the
cracks because I failed to properly appreciate the severity. Sorry.

> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
> documented to suffice if called only on one ctx.  So deregistering the
> ctx it's called on is not sufficient.  Instead, we need a new approach
> which discards the whole sigchld_user list and unconditionally removes
> our SIGCHLD handler if we had one.
> 
> Prompted by this, clarify the semantics of
> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
> "quickly" by explaining what operations are not permitted; and
> document the fact that the function doesn't reclaim the resources in
> the ctxs.
> 
> And add a comment in libxl_postfork_child_noexec explaining the
> internal concurrency situation.
> 
> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Reported-by: M A Young <m.a.young@durham.ac.uk>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> CC: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  tools/libxl/libxl_event.h |   16 ++++++++++++++++
>  tools/libxl/libxl_fork.c  |   44 +++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 59 insertions(+), 1 deletion(-)

Impressive considering the real meat is -1/+6 ;-)

Not that I'm going to complain about lots of docs!

> 
> @@ -134,7 +150,33 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
>      }
>      LIBXL_LIST_INIT(&carefds);
>  
> -    sigchld_user_remove(ctx);
> +    if (sigchld_installed) {
> +        defer_sigchld();
> +
> +        LIBXL_LIST_INIT(&sigchld_users);
> +        /* After this the ->sigchld_user_registered entries in the
> +         * now-obsolete contexts may be lies.  But that's OK because
> +         * no-one will look at them. */
> +
> +        release_sigchld();
> +        sigchld_removehandler_core();
> +    }
>  
>      atfork_unlock();
>  }



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:48:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwpF-0005I9-DB; Mon, 24 Feb 2014 14:48:09 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHwpE-0005I4-Lj
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:48:08 +0000
Received: from [85.158.137.68:12535] by server-11.bemta-3.messagelabs.com id
	F6/D1-04255-7AB5B035; Mon, 24 Feb 2014 14:48:07 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393253285!2590574!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15553 invoked from network); 24 Feb 2014 14:48:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:48:07 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105203012"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:47:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:47:35 -0500
Message-ID: <1393253254.16570.80.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 14:47:34 +0000
In-Reply-To: <1393251555-22418-3-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-3-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 2/3] libxl: Hold the atfork lock while
	closing carefd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 14:19 +0000, Ian Jackson wrote:
> This avoids the process being forked while a carefd is recorded in the
> list but the actual fd has been closed.  If that happened, a
> subsequent libxl_postfork_child_noexec would attempt to close the fd
> again.  If we are lucky that results in a harmless warning; but if we
> are unlucky the fd number has been reused and we close an unrelated
> fd.
> 
> This race has not been observed anywhere as far as we are aware.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> CC: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  tools/libxl/libxl_fork.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/libxl/libxl_fork.c b/tools/libxl/libxl_fork.c
> index 8421296..fa15095 100644
> --- a/tools/libxl/libxl_fork.c
> +++ b/tools/libxl/libxl_fork.c
> @@ -184,9 +184,9 @@ void libxl_postfork_child_noexec(libxl_ctx *ctx)
>  int libxl__carefd_close(libxl__carefd *cf)
>  {
>      if (!cf) return 0;
> +    atfork_lock();
>      int r = cf->fd < 0 ? 0 : close(cf->fd);
>      int esave = errno;
> -    atfork_lock();
>      LIBXL_LIST_REMOVE(cf, entry);
>      atfork_unlock();
>      free(cf);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Mon Feb 24 14:48:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwpO-0005JU-Qs; Mon, 24 Feb 2014 14:48:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwpN-0005JB-7V
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:48:17 +0000
Received: from [85.158.137.68:13472] by server-1.bemta-3.messagelabs.com id
	49/FA-17293-0BB5B035; Mon, 24 Feb 2014 14:48:16 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393253295!466921!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2545 invoked from network); 24 Feb 2014 14:48:15 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:48:15 -0000
Received: by mail-ea0-f174.google.com with SMTP id m10so2858615eaj.5
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:48:15 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=KvrlMxKg7s6+1l55MvhGise/Y/gPAc1dS87KBryFaYU=;
	b=j8ysHCpiNkEWe6Sql8pj5evTexjWn3riQP9QVOmHnnge1zajX7ltK9S0BSTGeMS9UU
	ModQr+2MLvhZxj10vndg3GZxQB+Nxl8khRlsrPbCSRno/i3o0cdX7Ihr+PPF7GtXpKz+
	NSzlmNnDVvTqKP0RUuNxjgo55QuZQk5kou98pS1oUlMvDRXAcI9u128CQipFPqc7fqzz
	pagwA7aFqVgyBoebILpY040M3qRkMURsCCN/Epfkww+udCQgictzeCl1P8HxW8nxH/R0
	beVaKxvx9qw6ncrac9N36z7i9MRiT+jdiNE0ZXwEzvvHe7TDyZguT7MHH1XUG8A+Odvh
	uhkw==
X-Gm-Message-State: ALoCoQkiD1b7G7GBEkuKN/kUGlsaFu1Zd2NtRrAUf4TRUoiIKhRBtWLOouUwLKi2+khBaUji8XfJ
X-Received: by 10.14.8.7 with SMTP id 7mr24751159eeq.56.1393253295328;
	Mon, 24 Feb 2014 06:48:15 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id k6sm64603946eep.17.2014.02.24.06.48.13
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 06:48:14 -0800 (PST)
Message-ID: <530B5BAC.2010100@linaro.org>
Date: Mon, 24 Feb 2014 14:48:12 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1390581822-32624-1-git-send-email-julien.grall@linaro.org>
	<1390581822-32624-8-git-send-email-julien.grall@linaro.org>
	<1392810905.29739.19.camel@kazak.uk.xensource.com>
	<530673BD.9010301@linaro.org> <530B5275.7010008@citrix.com>
	<530B6606020000780011ED39@nat28.tlf.novell.com>
In-Reply-To: <530B6606020000780011ED39@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	patches@linaro.org, tim@xen.org, stefano.stabellini@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH for-4.5 7/8] xen/irq: Handle multiple action
 per IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 02:32 PM, Jan Beulich wrote:
>>>> On 24.02.14 at 15:08, Julien Grall <julien.grall@citrix.com> wrote:
>> (Adding Jan for x86 part).
>>
>> On 02/20/2014 09:29 PM, Julien Grall wrote:
>>> Hi Ian,
>>>
>>> On 02/19/2014 11:55 AM, Ian Campbell wrote:
>>>> On Fri, 2014-01-24 at 16:43 +0000, Julien Grall wrote:
>>>>> On ARM, it may happen (eg ARM SMMU) to setup multiple handler for the same
>>>>> interrupt.
>>>>
>>>> Mention here that you are therefore creating a linked list of actions
>>>> for each interrupt.
>>>>
>>>> If you use xen/list.h for this then you get a load of helpers and
>>>> iterators which would save you open coding them.
>>>
>>> After thinking, using xen/list.h won't really remove open code, except
>>> removing "action_ptr" in release_dt_irq.
>>>
>>> Calling release_dt_irq to an IRQ with multiple action shouldn't be
>>> called often. Therefore, having both prev and next is a waste of space.
>>
>> Jan, as it's common code, do you have any thoughts?
> 
> In fact I'm not convinced this action chaining is correct in the first
> place, as mentioned by Ian too (considering the potential sharing
> between hypervisor and guest). Furthermore, if this is really just
> about IOMMU handlers, why can't the SMMU code register a single
> action and disambiguate by the dev_id argument passed to the
> handler?

Patch #3 of this series protects the IRQ from being shared with the
domain.

I should have removed "eg ARM SMMU" from the description. ARM SMMU is
not the only case: we don't know in advance whether an IRQ will be
shared (short of browsing the DT and checking whether the IRQ is used
by another device...). Other devices may have the same requirement.

The logic is painful to handle internally in the ARM SMMU driver when
we can handle it generically. There is no need to duplicate the code
when a new driver hits the same problem.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:48:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwpt-0005RQ-9y; Mon, 24 Feb 2014 14:48:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHwpr-0005Qa-3w
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:48:47 +0000
Received: from [85.158.139.211:52940] by server-7.bemta-5.messagelabs.com id
	8C/04-14867-ECB5B035; Mon, 24 Feb 2014 14:48:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393253324!1340348!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5632 invoked from network); 24 Feb 2014 14:48:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:48:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105203401"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 14:48:43 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:48:42 -0500
Message-ID: <1393253321.16570.81.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 14:48:41 +0000
In-Reply-To: <1393251555-22418-4-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-4-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 3/3] libxl: Fix carefd lock leak in save
	callout
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 14:19 +0000, Ian Jackson wrote:
> If libxl_pipe fails we leave the carefd locked, which translates to
> the atfork lock remaining held.  This would probably cause the process
> to deadlock shortly afterwards.
> 
> Of course libxl_pipe is very unlikely to fail unless things are
> already going very badly.  This bug has not been observed anywhere as
> far as we are aware.
> 
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

Acked-by: Ian Campbell <Ian.Campbell@citrix.com>

> CC: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  tools/libxl/libxl_save_callout.c |    6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libxl/libxl_save_callout.c b/tools/libxl/libxl_save_callout.c
> index 6e45b2f..e3bda8f 100644
> --- a/tools/libxl/libxl_save_callout.c
> +++ b/tools/libxl/libxl_save_callout.c
> @@ -185,7 +185,11 @@ static void run_helper(libxl__egc *egc, libxl__save_helper_state *shs,
>      for (childfd=0; childfd<2; childfd++) {
>          /* Setting up the pipe for the child's fd childfd */
>          int fds[2];
> -        if (libxl_pipe(CTX,fds)) { rc = ERROR_FAIL; goto out; }
> +        if (libxl_pipe(CTX,fds)) {
> +            rc = ERROR_FAIL;
> +            libxl__carefd_unlock();
> +            goto out;
> +        }
>          int childs_end = childfd==0 ? 0 /*read*/  : 1 /*write*/;
>          int our_end    = childfd==0 ? 1 /*write*/ : 0 /*read*/;
>          childs_pipes[childfd] = libxl__carefd_record(CTX, fds[childs_end]);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:49:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:49:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwq4-0005V2-P1; Mon, 24 Feb 2014 14:49:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHwq4-0005Uf-3f
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:49:00 +0000
Received: from [85.158.137.68:54390] by server-17.bemta-3.messagelabs.com id
	FE/C0-22569-BDB5B035; Mon, 24 Feb 2014 14:48:59 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393253337!3861318!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13317 invoked from network); 24 Feb 2014 14:48:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:48:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103549747"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:48:56 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:48:56 -0500
Message-ID: <530B5BD6.2000301@citrix.com>
Date: Mon, 24 Feb 2014 14:48:54 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Thomas Gleixner <tglx@linutronix.de>
References: <20140223212703.511977310@linutronix.de>
	<20140223212738.808648133@linutronix.de>
In-Reply-To: <20140223212738.808648133@linutronix.de>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [patch 23/26] xen: Add proper irq accounting for
 HYPERCALL vector
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/02/14 21:40, Thomas Gleixner wrote:
> --- tip.orig/drivers/xen/events/events_base.c
> +++ tip/drivers/xen/events/events_base.c
> @@ -1239,6 +1239,7 @@ void xen_evtchn_do_upcall(struct pt_regs
>  #ifdef CONFIG_X86
>  	exit_idle();
>  #endif
> +	inc_irq_stat(irq_hv_callback_count);
>  
>  	__xen_evtchn_do_upcall();

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 23/02/14 21:40, Thomas Gleixner wrote:
> --- tip.orig/drivers/xen/events/events_base.c
> +++ tip/drivers/xen/events/events_base.c
> @@ -1239,6 +1239,7 @@ void xen_evtchn_do_upcall(struct pt_regs
>  #ifdef CONFIG_X86
>  	exit_idle();
>  #endif
> +	inc_irq_stat(irq_hv_callback_count);
>  
>  	__xen_evtchn_do_upcall();

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:49:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwqN-0005bo-Br; Mon, 24 Feb 2014 14:49:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHwqM-0005bB-81
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:49:18 +0000
Received: from [193.109.254.147:37754] by server-11.bemta-14.messagelabs.com
	id 65/4A-24604-DEB5B035; Mon, 24 Feb 2014 14:49:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393253355!6459791!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20478 invoked from network); 24 Feb 2014 14:49:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:49:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103549824"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:49:15 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:49:14 -0500
Message-ID: <1393253353.16570.82.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 14:49:13 +0000
In-Reply-To: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 0/3] libxl: Fix deadlock with pygrub
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 14:19 +0000, Ian Jackson wrote:
> So I think patch 1 should go into 4.4.0 after review.
> 2-3 should wait for 4.4.1 (coming via unstable in the usual way).

I agree with your logic and your conclusion.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:52:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwtA-0005zp-4k; Mon, 24 Feb 2014 14:52:12 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WHwt8-0005zf-Sk
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:52:11 +0000
Received: from [85.158.143.35:35974] by server-3.bemta-4.messagelabs.com id
	5B/3C-11539-A9C5B035; Mon, 24 Feb 2014 14:52:10 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393253524!7890934!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14331 invoked from network); 24 Feb 2014 14:52:05 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:52:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; 
   d="scan'208";a="9756577"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 24 Feb 2014 14:52:04 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL03.citrite.net ([169.254.8.48]) with mapi id 14.02.0342.004;
	Mon, 24 Feb 2014 15:52:03 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V5 net-next 1/5] xen-netback: Factor queue-specific
	data into queue struct.
Thread-Index: AQHPMW1v/GZt1s5x30ilP+CR+n6W4JrEfOuQ
Date: Mon, 24 Feb 2014 14:52:02 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD025788C@AMSPEX01CL01.citrite.net>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5 net-next 1/5] xen-netback: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 24 February 2014 14:33
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; Andrew
> Bennieston
> Subject: [PATCH V5 net-next 1/5] xen-netback: Factor queue-specific data
> into queue struct.
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netback, move the
> queue-specific data from struct xenvif into struct xenvif_queue, and
> update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_netdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0 for a single queue and uses
> skb_get_hash() to compute the queue index otherwise.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
>  drivers/net/xen-netback/common.h    |   85 ++++--
>  drivers/net/xen-netback/interface.c |  329 ++++++++++++++--------
>  drivers/net/xen-netback/netback.c   |  530 ++++++++++++++++++------------
> -----
>  drivers/net/xen-netback/xenbus.c    |   87 ++++--
>  4 files changed, 608 insertions(+), 423 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-
> netback/common.h
> index ae413a2..4176539 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -108,17 +108,39 @@ struct xenvif_rx_meta {
>   */
>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS *
> XEN_NETIF_RX_RING_SIZE)
> 
> -struct xenvif {
> -	/* Unique identifier for this interface. */
> -	domid_t          domid;
> -	unsigned int     handle;
> +/* Queue name is interface name with "-qNNN" appended */
> +#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
> +
> +/* IRQ name is queue name with "-tx" or "-rx" appended */
> +#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
> +
> +struct xenvif;
> +
> +struct xenvif_stats {
> +	/* Stats fields to be updated per-queue.
> +	 * A subset of struct net_device_stats that contains only the
> +	 * fields that are updated in netback.c for each queue.
> +	 */
> +	unsigned int rx_bytes;
> +	unsigned int rx_packets;
> +	unsigned int tx_bytes;
> +	unsigned int tx_packets;
> +
> +	/* Additional stats used by xenvif */
> +	unsigned long rx_gso_checksum_fixup;
> +};
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int id; /* Queue ID, 0-based */
> +	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
> +	struct xenvif *vif; /* Parent VIF */
> 
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,19 +162,34 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  	RING_IDX rx_last_skb_slots;
> 
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
> 
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> 
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +	/* Statistics */
> +	struct xenvif_stats stats;
> +};
> +
> +struct xenvif {
> +	/* Unique identifier for this interface. */
> +	domid_t          domid;
> +	unsigned int     handle;
> +
>  	u8               fe_dev_addr[6];
> 
>  	/* Frontend feature information. */
> @@ -166,15 +203,9 @@ struct xenvif {
>  	/* Internal feature information. */
>  	u8 can_queue:1;	    /* can queue packets for receiver? */
> 
> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> -	unsigned long   credit_bytes;
> -	unsigned long   credit_usec;
> -	unsigned long   remaining_credit;
> -	struct timer_list credit_timeout;
> -	u64 credit_window_start;
> -
> -	/* Statistics */
> -	unsigned long rx_gso_checksum_fixup;
> +	/* Queues */
> +	unsigned int num_queues;
> +	struct xenvif_queue *queues;
> 
>  	/* Miscellaneous private stuff. */
>  	struct net_device *dev;
> @@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
>  			    domid_t domid,
>  			    unsigned int handle);
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue);
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn);
>  void xenvif_disconnect(struct xenvif *vif);
> @@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
> 
>  int xenvif_schedulable(struct xenvif *vif);
> 
> -int xenvif_must_stop_queue(struct xenvif *vif);
> +int xenvif_must_stop_queue(struct xenvif_queue *queue);
> 
>  /* (Un)Map communication rings. */
> -void xenvif_unmap_frontend_rings(struct xenvif *vif);
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref);
> 
>  /* Check for SKBs from frontend and schedule backend processing */
> -void xenvif_check_rx_xenvif(struct xenvif *vif);
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
> 
>  /* Prevent the device from generating any further traffic. */
>  void xenvif_carrier_off(struct xenvif *vif);
> 
> -int xenvif_tx_action(struct xenvif *vif, int budget);
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget);
> 
>  int xenvif_kthread(void *data);
> -void xenvif_kick_thread(struct xenvif *vif);
> +void xenvif_kick_thread(struct xenvif_queue *queue);
> 
>  /* Determine whether the needed number of slots (req) are available,
>   * and set req_event if not.
>   */
> -bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
> +bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int
> needed);
> 
> -void xenvif_stop_queue(struct xenvif *vif);
> +void xenvif_carrier_on(struct xenvif *vif);
> 
>  extern bool separate_tx_rx_irq;
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-
> netback/interface.c
> index 7669d49..0297980 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -34,7 +34,6 @@
>  #include <linux/ethtool.h>
>  #include <linux/rtnetlink.h>
>  #include <linux/if_vlan.h>
> -#include <linux/vmalloc.h>
> 
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> @@ -42,6 +41,16 @@
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> 
> +static inline void xenvif_stop_queue(struct xenvif_queue *queue)
> +{
> +	struct net_device *dev = queue->vif->dev;
> +
> +	if (!queue->vif->can_queue)
> +		return;
> +
> +	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
> +}
> +
>  int xenvif_schedulable(struct xenvif *vif)
>  {
>  	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
> @@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
> 
>  static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
> -		napi_schedule(&vif->napi);
> +	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
> +		napi_schedule(&queue->napi);
> 
>  	return IRQ_HANDLED;
>  }
> 
> -static int xenvif_poll(struct napi_struct *napi, int budget)
> +int xenvif_poll(struct napi_struct *napi, int budget)
>  {
> -	struct xenvif *vif = container_of(napi, struct xenvif, napi);
> +	struct xenvif_queue *queue = container_of(napi, struct
> xenvif_queue, napi);
>  	int work_done;
> 
> -	work_done = xenvif_tx_action(vif, budget);
> +	work_done = xenvif_tx_action(queue, budget);
> 
>  	if (work_done < budget) {
>  		int more_to_do = 0;
> @@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int
> budget)
> 
>  		local_irq_save(flags);
> 
> -		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx,
> more_to_do);
> +		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx,
> more_to_do);
>  		if (!more_to_do)
>  			__napi_complete(napi);
> 
> @@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int
> budget)
> 
>  static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	xenvif_kick_thread(vif);
> +	xenvif_kick_thread(queue);
> 
>  	return IRQ_HANDLED;
>  }
> @@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void
> *dev_id)
>  	return IRQ_HANDLED;
>  }
> 
> +static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff
> *skb,
> +			       void *accel_priv, select_queue_fallback_t
> fallback)
> +{
> +	struct xenvif *vif = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_index;
> +
> +	/* First, check if there is only one queue to optimise the
> +	 * single-queue or old frontend scenario.
> +	 */
> +	if (vif->num_queues == 1) {
> +		queue_index = 0;
> +	} else {
> +		/* Use skb_get_hash to obtain an L4 hash if available */
> +		hash = skb_get_hash(skb);
> +		queue_index = (u16) (((u64)hash * vif->num_queues) >>
> 32);
> +	}
> +
> +	return queue_index;
> +}
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	struct xenvif_queue *queue = NULL;
> +	u16 index;
>  	int min_slots_needed;
> 
>  	BUG_ON(skb->dev != dev);
> 
> +	/* Drop the packet if queues are not set up */
> +	if (vif->num_queues < 1)
> +		goto drop;
> +
> +	/* Obtain the queue to be used to transmit this packet */
> +	index = skb_get_queue_mapping(skb);
> +	if (index >= vif->num_queues)
> +		index = 0; /* Fall back to queue 0 if out of range */
> +	queue = &vif->queues[index];
> +
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (queue->task == NULL || !xenvif_schedulable(vif))
>  		goto drop;
> 
>  	/* At best we'll need one slot for the header and one for each
> @@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
>  	 * then turn off the queue to give the ring a chance to
>  	 * drain.
>  	 */
> -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> -		xenvif_stop_queue(vif);
> +	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
> +		xenvif_stop_queue(queue);
> 
> -	skb_queue_tail(&vif->rx_queue, skb);
> -	xenvif_kick_thread(vif);
> +	skb_queue_tail(&queue->rx_queue, skb);
> +	xenvif_kick_thread(queue);
> 
>  	return NETDEV_TX_OK;
> 
> @@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb,
> struct net_device *dev)
>  static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned long rx_bytes = 0;
> +	unsigned long rx_packets = 0;
> +	unsigned long tx_bytes = 0;
> +	unsigned long tx_packets = 0;
> +	unsigned int index;
> +
> +	/* Aggregate tx and rx stats from each queue */
> +	for (index = 0; index < vif->num_queues; ++index) {
> +		queue = &vif->queues[index];
> +		rx_bytes += queue->stats.rx_bytes;
> +		rx_packets += queue->stats.rx_packets;
> +		tx_bytes += queue->stats.tx_bytes;
> +		tx_packets += queue->stats.tx_packets;
> +	}
> +
> +	vif->dev->stats.rx_bytes = rx_bytes;
> +	vif->dev->stats.rx_packets = rx_packets;
> +	vif->dev->stats.tx_bytes = tx_bytes;
> +	vif->dev->stats.tx_packets = tx_packets;
> +
>  	return &vif->dev->stats;
>  }
> 
>  static void xenvif_up(struct xenvif *vif)
>  {
> -	napi_enable(&vif->napi);
> -	enable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		enable_irq(vif->rx_irq);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
> +	for (queue_index = 0; queue_index < vif->num_queues;
> ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_enable(&queue->napi);
> +		enable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			enable_irq(queue->rx_irq);
> +		xenvif_check_rx_xenvif(queue);
> +	}
>  }
> 
>  static void xenvif_down(struct xenvif *vif)
>  {
> -	napi_disable(&vif->napi);
> -	disable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		disable_irq(vif->rx_irq);
> -	del_timer_sync(&vif->credit_timeout);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
> +	for (queue_index = 0; queue_index < vif->num_queues;
> ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_disable(&queue->napi);
> +		disable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			disable_irq(queue->rx_irq);
> +		del_timer_sync(&queue->credit_timeout);
> +	}
>  }
> 
>  static int xenvif_open(struct net_device *dev)
> @@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_up(vif);
> -	netif_start_queue(dev);
> +	netif_tx_start_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_down(vif);
> -	netif_stop_queue(dev);
> +	netif_tx_stop_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -232,7 +307,7 @@ static const struct xenvif_stat {
>  } xenvif_stats[] = {
>  	{
>  		"rx_gso_checksum_fixup",
> -		offsetof(struct xenvif, rx_gso_checksum_fixup)
> +		offsetof(struct xenvif_stats, rx_gso_checksum_fixup)
>  	},
>  };
> 
> @@ -249,11 +324,19 @@ static int xenvif_get_sset_count(struct net_device
> *dev, int string_set)
>  static void xenvif_get_ethtool_stats(struct net_device *dev,
>  				     struct ethtool_stats *stats, u64 * data)
>  {
> -	void *vif = netdev_priv(dev);
> +	struct xenvif *vif = netdev_priv(dev);
>  	int i;
> -
> -	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
> -		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
> +	unsigned int queue_index;
> +	struct xenvif_stats *vif_stats;
> +
> +	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {
> +		unsigned long accum = 0;
> +		for (queue_index = 0; queue_index < vif->num_queues;
> ++queue_index) {
> +			vif_stats = &vif->queues[queue_index].stats;
> +			accum += *(unsigned long *)(vif_stats +
> xenvif_stats[i].offset);
> +		}
> +		data[i] = accum;
> +	}
>  }
> 
>  static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 *
> data)
> @@ -286,6 +369,7 @@ static const struct net_device_ops
> xenvif_netdev_ops = {
>  	.ndo_fix_features = xenvif_fix_features,
>  	.ndo_set_mac_address = eth_mac_addr,
>  	.ndo_validate_addr   = eth_validate_addr,
> +	.ndo_select_queue = xenvif_select_queue,
>  };
> 
>  struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> @@ -295,10 +379,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	struct net_device *dev;
>  	struct xenvif *vif;
>  	char name[IFNAMSIZ] = {};
> -	int i;
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> @@ -308,24 +391,15 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
> 
>  	vif = netdev_priv(dev);
> 
> -	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
> -				     MAX_GRANT_COPY_OPS);
> -	if (vif->grant_copy_op == NULL) {
> -		pr_warn("Could not allocate grant copy space for %s\n",
> name);
> -		free_netdev(dev);
> -		return ERR_PTR(-ENOMEM);
> -	}
> -
>  	vif->domid  = domid;
>  	vif->handle = handle;
>  	vif->can_sg = 1;
>  	vif->ip_csum = 1;
>  	vif->dev = dev;
> 
> -	vif->credit_bytes = vif->remaining_credit = ~0UL;
> -	vif->credit_usec  = 0UL;
> -	init_timer(&vif->credit_timeout);
> -	vif->credit_window_start = get_jiffies_64();
> +	/* Start out with no queues */
> +	vif->num_queues = 0;
> +	vif->queues = NULL;
> 
>  	dev->netdev_ops	= &xenvif_netdev_ops;
>  	dev->hw_features = NETIF_F_SG |
> @@ -336,16 +410,6 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
> 
>  	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
> 
> -	skb_queue_head_init(&vif->rx_queue);
> -	skb_queue_head_init(&vif->tx_queue);
> -
> -	vif->pending_cons = 0;
> -	vif->pending_prod = MAX_PENDING_REQS;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> -
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
>  	 * largest non-broadcast address to prevent the address getting
> @@ -355,8 +419,6 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	memset(dev->dev_addr, 0xFF, ETH_ALEN);
>  	dev->dev_addr[0] &= ~0x01;
> 
> -	netif_napi_add(dev, &vif->napi, xenvif_poll,
> XENVIF_NAPI_WEIGHT);
> -
>  	netif_carrier_off(dev);
> 
>  	err = register_netdev(dev);
> @@ -373,85 +435,111 @@ struct xenvif *xenvif_alloc(struct device *parent,
> domid_t domid,
>  	return vif;
>  }
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue)
> +{
> +	int i;
> +
> +	queue->credit_bytes = queue->remaining_credit = ~0UL;
> +	queue->credit_usec  = 0UL;
> +	init_timer(&queue->credit_timeout);
> +	queue->credit_window_start = get_jiffies_64();
> +
> +	skb_queue_head_init(&queue->rx_queue);
> +	skb_queue_head_init(&queue->tx_queue);
> +
> +	queue->pending_cons = 0;
> +	queue->pending_prod = MAX_PENDING_REQS;
> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
> +		queue->pending_ring[i] = i;
> +		queue->mmap_pages[i] = NULL;
> +	}
> +
> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
> +			XENVIF_NAPI_WEIGHT);
> +}
> +
> +void xenvif_carrier_on(struct xenvif *vif)
> +{
> +	rtnl_lock();
> +	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> +		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> +	netdev_update_features(vif->dev);
> +	netif_carrier_on(vif->dev);
> +	if (netif_running(vif->dev))
> +		xenvif_up(vif);
> +	rtnl_unlock();
> +}
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn)
>  {
>  	struct task_struct *task;
>  	int err = -ENOMEM;
> 
> -	BUG_ON(vif->tx_irq);
> -	BUG_ON(vif->task);
> +	BUG_ON(queue->tx_irq);
> +	BUG_ON(queue->task);
> 
> -	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
> +	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
>  	if (err < 0)
>  		goto err;
> 
> -	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&queue->wq);
> 
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_interrupt, 0,
> -			vif->dev->name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
> +			queue->name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = vif->rx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = queue->rx_irq = err;
> +		disable_irq(queue->tx_irq);
>  	} else {
>  		/* feature-split-event-channels == 1 */
> -		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
> -			 "%s-tx", vif->dev->name);
> +		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
> +			 "%s-tx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> -			vif->tx_irq_name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> +			queue->tx_irq_name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = err;
> +		disable_irq(queue->tx_irq);
> 
> -		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
> -			 "%s-rx", vif->dev->name);
> +		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
> +			 "%s-rx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> -			vif->rx_irq_name, vif);
> +			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> +			queue->rx_irq_name, queue);
>  		if (err < 0)
>  			goto err_tx_unbind;
> -		vif->rx_irq = err;
> -		disable_irq(vif->rx_irq);
> +		queue->rx_irq = err;
> +		disable_irq(queue->rx_irq);
>  	}
> 
>  	task = kthread_create(xenvif_kthread,
> -			      (void *)vif, "%s", vif->dev->name);
> +			      (void *)queue, "%s", queue->name);
>  	if (IS_ERR(task)) {
> -		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		pr_warn("Could not allocate kthread for %s\n", queue->name);
>  		err = PTR_ERR(task);
>  		goto err_rx_unbind;
>  	}
> 
> -	vif->task = task;
> +	queue->task = task;
> 
> -	rtnl_lock();
> -	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> -		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> -	netdev_update_features(vif->dev);
> -	netif_carrier_on(vif->dev);
> -	if (netif_running(vif->dev))
> -		xenvif_up(vif);
> -	rtnl_unlock();
> -
> -	wake_up_process(vif->task);
> +	wake_up_process(queue->task);
> 
>  	return 0;
> 
>  err_rx_unbind:
> -	unbind_from_irqhandler(vif->rx_irq, vif);
> -	vif->rx_irq = 0;
> +	unbind_from_irqhandler(queue->rx_irq, queue);
> +	queue->rx_irq = 0;
>  err_tx_unbind:
> -	unbind_from_irqhandler(vif->tx_irq, vif);
> -	vif->tx_irq = 0;
> +	unbind_from_irqhandler(queue->tx_irq, queue);
> +	queue->tx_irq = 0;
>  err_unmap:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  err:
>  	module_put(THIS_MODULE);
>  	return err;
> @@ -470,34 +558,53 @@ void xenvif_carrier_off(struct xenvif *vif)
> 
>  void xenvif_disconnect(struct xenvif *vif)
>  {
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
>  	if (netif_carrier_ok(vif->dev))
>  		xenvif_carrier_off(vif);
> 
> -	if (vif->task) {
> -		kthread_stop(vif->task);
> -		vif->task = NULL;
> -	}
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> 
> -	if (vif->tx_irq) {
> -		if (vif->tx_irq == vif->rx_irq)
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -		else {
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -			unbind_from_irqhandler(vif->rx_irq, vif);
> +		if (queue->task) {
> +			kthread_stop(queue->task);
> +			queue->task = NULL;
>  		}
> -		vif->tx_irq = 0;
> +
> +		if (queue->tx_irq) {
> +			if (queue->tx_irq == queue->rx_irq)
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +			else {
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +				unbind_from_irqhandler(queue->rx_irq, queue);
> +			}
> +			queue->tx_irq = 0;
> +		}
> +
> +		xenvif_unmap_frontend_rings(queue);
>  	}
> 
> -	xenvif_unmap_frontend_rings(vif);
> +
>  }
> 
>  void xenvif_free(struct xenvif *vif)
>  {
> -	netif_napi_del(&vif->napi);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> 
>  	unregister_netdev(vif->dev);
> 
> -	vfree(vif->grant_copy_op);
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		netif_napi_del(&queue->napi);
> +	}
> +
> +	/* Free the array of queues */
> +	vif->num_queues = 0;
> +	vfree(vif->queues);
> +	vif->queues = NULL;
> +
>  	free_netdev(vif->dev);
> 
>  	module_put(THIS_MODULE);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index e5284bc..a32abd6 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
>   * one or more merged tx requests, otherwise it is the continuation of
>   * previous tx request.
>   */
> -static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
> +static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
>  {
> -	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
> +	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status);
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st);
> 
> -static inline int tx_work_todo(struct xenvif *vif);
> -static inline int rx_work_todo(struct xenvif *vif);
> +static inline int tx_work_todo(struct xenvif_queue *queue);
> +static inline int rx_work_todo(struct xenvif_queue *queue);
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags);
> 
> -static inline unsigned long idx_to_pfn(struct xenvif *vif,
> +static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
>  				       u16 idx)
>  {
> -	return page_to_pfn(vif->mmap_pages[idx]);
> +	return page_to_pfn(queue->mmap_pages[idx]);
>  }
> 
> -static inline unsigned long idx_to_kaddr(struct xenvif *vif,
> +static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
>  					 u16 idx)
>  {
> -	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
> +	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
>  }
> 
>  /* This is a miniumum size for the linear area to avoid lots of
> @@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
>  	return i & (MAX_PENDING_REQS-1);
>  }
> 
> -static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
> +static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
>  {
>  	return MAX_PENDING_REQS -
> -		vif->pending_prod + vif->pending_cons;
> +		queue->pending_prod + queue->pending_cons;
>  }
> 
> -bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
> +bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
>  {
>  	RING_IDX prod, cons;
> 
>  	do {
> -		prod = vif->rx.sring->req_prod;
> -		cons = vif->rx.req_cons;
> +		prod = queue->rx.sring->req_prod;
> +		cons = queue->rx.req_cons;
> 
>  		if (prod - cons >= needed)
>  			return true;
> 
> -		vif->rx.sring->req_event = prod + 1;
> +		queue->rx.sring->req_event = prod + 1;
> 
>  		/* Make sure event is visible before we check prod
>  		 * again.
>  		 */
>  		mb();
> -	} while (vif->rx.sring->req_prod != prod);
> +	} while (queue->rx.sring->req_prod != prod);
> 
>  	return false;
>  }
> @@ -208,13 +208,13 @@ struct netrx_pending_operations {
>  	grant_ref_t copy_gref;
>  };
> 
> -static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
> +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>  						 struct netrx_pending_operations *npo)
>  {
>  	struct xenvif_rx_meta *meta;
>  	struct xen_netif_rx_request *req;
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
> 
>  	meta = npo->meta + npo->meta_prod++;
>  	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
> @@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>   * Set up the grant operations for this fragment. If it's a flipping
>   * interface, we also set up the unmap request from here.
>   */
> -static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
> +static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
>  				 struct netrx_pending_operations *npo,
>  				 struct page *page, unsigned long size,
>  				 unsigned long offset, int *head)
> @@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  			 */
>  			BUG_ON(*head);
> 
> -			meta = get_next_rx_buffer(vif, npo);
> +			meta = get_next_rx_buffer(queue, npo);
>  		}
> 
>  		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
> @@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
>  		copy_gop->source.offset = offset;
> 
> -		copy_gop->dest.domid = vif->domid;
> +		copy_gop->dest.domid = queue->vif->domid;
>  		copy_gop->dest.offset = npo->copy_off;
>  		copy_gop->dest.u.ref = npo->copy_gref;
> 
> @@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		else
>  			gso_type = XEN_NETIF_GSO_TYPE_NONE;
> 
> -		if (*head && ((1 << gso_type) & vif->gso_mask))
> -			vif->rx.req_cons++;
> +		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
> +			queue->rx.req_cons++;
> 
>  		*head = 0; /* There must be something in this buffer now. */
> 
> @@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>   * frontend-side LRO).
>   */
>  static int xenvif_gop_skb(struct sk_buff *skb,
> -			  struct netrx_pending_operations *npo)
> +			  struct netrx_pending_operations *npo,
> +			  struct xenvif_queue *queue)
>  {
>  	struct xenvif *vif = netdev_priv(skb->dev);
>  	int nr_frags = skb_shinfo(skb)->nr_frags;
> @@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
> 
>  	/* Set up a GSO prefix descriptor, if necessary */
>  	if ((1 << gso_type) & vif->gso_prefix_mask) {
> -		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  		meta = npo->meta + npo->meta_prod++;
>  		meta->gso_type = gso_type;
>  		meta->gso_size = gso_size;
> @@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		meta->id = req->id;
>  	}
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  	meta = npo->meta + npo->meta_prod++;
> 
>  	if ((1 << gso_type) & vif->gso_mask) {
> @@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		if (data + len > skb_tail_pointer(skb))
>  			len = skb_tail_pointer(skb) - data;
> 
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     virt_to_page(data), len, offset, &head);
>  		data += len;
>  	}
> 
>  	for (i = 0; i < nr_frags; i++) {
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>  				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>  				     skb_shinfo(skb)->frags[i].page_offset,
> @@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
>  	return status;
>  }
> 
> -static void xenvif_add_frag_responses(struct xenvif *vif, int status,
> +static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
>  				      struct xenvif_rx_meta *meta,
>  				      int nr_meta_slots)
>  {
> @@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>  			flags = XEN_NETRXF_more_data;
> 
>  		offset = 0;
> -		make_rx_response(vif, meta[i].id, status, offset,
> +		make_rx_response(queue, meta[i].id, status, offset,
>  				 meta[i].size, flags);
>  	}
>  }
> @@ -459,12 +460,12 @@ struct skb_cb_overlay {
>  	int meta_slots_used;
>  };
> 
> -void xenvif_kick_thread(struct xenvif *vif)
> +void xenvif_kick_thread(struct xenvif_queue *queue)
>  {
> -	wake_up(&vif->wq);
> +	wake_up(&queue->wq);
>  }
> 
> -static void xenvif_rx_action(struct xenvif *vif)
> +static void xenvif_rx_action(struct xenvif_queue *queue)
>  {
>  	s8 status;
>  	u16 flags;
> @@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
>  	bool need_to_notify = false;
> 
>  	struct netrx_pending_operations npo = {
> -		.copy  = vif->grant_copy_op,
> -		.meta  = vif->meta,
> +		.copy  = queue->grant_copy_op,
> +		.meta  = queue->meta,
>  	};
> 
>  	skb_queue_head_init(&rxq);
> 
> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
>  		RING_IDX max_slots_needed;
>  		int i;
> 
> @@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
>  			max_slots_needed++;
> 
>  		/* If the skb may not fit then bail out now */
> -		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
> -			skb_queue_head(&vif->rx_queue, skb);
> +		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
> +			skb_queue_head(&queue->rx_queue, skb);
>  			need_to_notify = true;
> -			vif->rx_last_skb_slots = max_slots_needed;
> +			queue->rx_last_skb_slots = max_slots_needed;
>  			break;
>  		} else
> -			vif->rx_last_skb_slots = 0;
> +			queue->rx_last_skb_slots = 0;
> 
>  		sco = (struct skb_cb_overlay *)skb->cb;
> -		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> +		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
>  		BUG_ON(sco->meta_slots_used > max_slots_needed);
> 
>  		__skb_queue_tail(&rxq, skb);
>  	}
> 
> -	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> +	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
> 
>  	if (!npo.copy_prod)
>  		goto done;
> 
>  	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
> -	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
> +	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
> 
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>  		sco = (struct skb_cb_overlay *)skb->cb;
> 
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_prefix_mask) {
> -			resp = RING_GET_RESPONSE(&vif->rx,
> -						 vif->rx.rsp_prod_pvt++);
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_prefix_mask) {
> +			resp = RING_GET_RESPONSE(&queue->rx,
> +						 queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
> 
> -			resp->offset = vif->meta[npo.meta_cons].gso_size;
> -			resp->id = vif->meta[npo.meta_cons].id;
> +			resp->offset = queue->meta[npo.meta_cons].gso_size;
> +			resp->id = queue->meta[npo.meta_cons].id;
>  			resp->status = sco->meta_slots_used;
> 
>  			npo.meta_cons++;
> @@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
>  		}
> 
> 
> -		vif->dev->stats.tx_bytes += skb->len;
> -		vif->dev->stats.tx_packets++;
> +		queue->stats.tx_bytes += skb->len;
> +		queue->stats.tx_packets++;
> 
> -		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
> +		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
> 
>  		if (sco->meta_slots_used == 1)
>  			flags = 0;
> @@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
>  			flags |= XEN_NETRXF_data_validated;
> 
>  		offset = 0;
> -		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
> +		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
>  					status, offset,
> -					vif->meta[npo.meta_cons].size,
> +					queue->meta[npo.meta_cons].size,
>  					flags);
> 
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_mask) {
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_mask) {
>  			struct xen_netif_extra_info *gso =
>  				(struct xen_netif_extra_info *)
> -				RING_GET_RESPONSE(&vif->rx,
> -						  vif->rx.rsp_prod_pvt++);
> +				RING_GET_RESPONSE(&queue->rx,
> +						  queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags |= XEN_NETRXF_extra_info;
> 
> -			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
> -			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
> +			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
> +			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
>  			gso->u.gso.pad = 0;
>  			gso->u.gso.features = 0;
> 
> @@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
>  			gso->flags = 0;
>  		}
> 
> -		xenvif_add_frag_responses(vif, status,
> -					  vif->meta + npo.meta_cons + 1,
> +		xenvif_add_frag_responses(queue, status,
> +					  queue->meta + npo.meta_cons + 1,
>  					  sco->meta_slots_used);
> 
> -		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
> +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
> 
>  		need_to_notify |= !!ret;
> 
> @@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
> 
>  done:
>  	if (need_to_notify)
> -		notify_remote_via_irq(vif->rx_irq);
> +		notify_remote_via_irq(queue->rx_irq);
>  }
> 
> -void xenvif_check_rx_xenvif(struct xenvif *vif)
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
>  {
>  	int more_to_do;
> 
> -	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
> +	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
> 
>  	if (more_to_do)
> -		napi_schedule(&vif->napi);
> +		napi_schedule(&queue->napi);
>  }
> 
> -static void tx_add_credit(struct xenvif *vif)
> +static void tx_add_credit(struct xenvif_queue *queue)
>  {
>  	unsigned long max_burst, max_credit;
> 
> @@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
>  	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
>  	 * Otherwise the interface can seize up due to insufficient credit.
>  	 */
> -	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
> +	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
>  	max_burst = min(max_burst, 131072UL);
> -	max_burst = max(max_burst, vif->credit_bytes);
> +	max_burst = max(max_burst, queue->credit_bytes);
> 
>  	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
> -	max_credit = vif->remaining_credit + vif->credit_bytes;
> -	if (max_credit < vif->remaining_credit)
> +	max_credit = queue->remaining_credit + queue->credit_bytes;
> +	if (max_credit < queue->remaining_credit)
>  		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
> 
> -	vif->remaining_credit = min(max_credit, max_burst);
> +	queue->remaining_credit = min(max_credit, max_burst);
>  }
> 
>  static void tx_credit_callback(unsigned long data)
>  {
> -	struct xenvif *vif = (struct xenvif *)data;
> -	tx_add_credit(vif);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = (struct xenvif_queue *)data;
> +	tx_add_credit(queue);
> +	xenvif_check_rx_xenvif(queue);
>  }
> 
> -static void xenvif_tx_err(struct xenvif *vif,
> +static void xenvif_tx_err(struct xenvif_queue *queue,
>  			  struct xen_netif_tx_request *txp, RING_IDX end)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
> -		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
> +		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
>  		if (cons == end)
>  			break;
> -		txp = RING_GET_REQUEST(&vif->tx, cons++);
> +		txp = RING_GET_REQUEST(&queue->tx, cons++);
>  	} while (1);
> -	vif->tx.req_cons = cons;
> +	queue->tx.req_cons = cons;
>  }
> 
>  static void xenvif_fatal_tx_err(struct xenvif *vif)
> @@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
>  	xenvif_carrier_off(vif);
>  }
> 
> -static int xenvif_count_requests(struct xenvif *vif,
> +static int xenvif_count_requests(struct xenvif_queue *queue,
>  				 struct xen_netif_tx_request *first,
>  				 struct xen_netif_tx_request *txp,
>  				 int work_to_do)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
>  	int slots = 0;
>  	int drop_err = 0;
>  	int more_data;
> @@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		struct xen_netif_tx_request dropped_tx = { 0 };
> 
>  		if (slots >= work_to_do) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Asked for %d slots but exceeds this limit\n",
>  				   work_to_do);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -ENODATA;
>  		}
> 
> @@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 * considered malicious.
>  		 */
>  		if (unlikely(slots >= fatal_skb_slots)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Malicious frontend using %d slots, threshold %u\n",
>  				   slots, fatal_skb_slots);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -E2BIG;
>  		}
> 
> @@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>  					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>  			drop_err = -E2BIG;
> @@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		if (drop_err)
>  			txp = &dropped_tx;
> 
> -		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
> +	memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
>  		       sizeof(*txp));
> 
>  		/* If the guest submitted a frame >= 64 KiB then
> @@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && txp->size > first->size) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Invalid tx request, slot size %u > remaining size %u\n",
>  					   txp->size, first->size);
>  			drop_err = -EIO;
> @@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		slots++;
> 
>  		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
> +			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>  				 txp->offset, txp->size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
> @@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
>  	} while (more_data);
> 
>  	if (drop_err) {
> -		xenvif_tx_err(vif, first, cons + slots);
> +		xenvif_tx_err(queue, first, cons + slots);
>  		return drop_err;
>  	}
> 
>  	return slots;
>  }
> 
> -static struct page *xenvif_alloc_page(struct xenvif *vif,
> +static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
>  				      u16 pending_idx)
>  {
>  	struct page *page;
> @@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  	if (!page)
>  		return NULL;
> -	vif->mmap_pages[pending_idx] = page;
> +	queue->mmap_pages[pending_idx] = page;
> 
>  	return page;
>  }
> 
> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
> +static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
>  					       struct gnttab_copy *gop)
> @@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  	for (shinfo->nr_frags = slot = start; slot < nr_slots;
>  	     shinfo->nr_frags++) {
>  		struct pending_tx_info *pending_tx_info =
> -			vif->pending_tx_info;
> +			queue->pending_tx_info;
> 
>  		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  		if (!page)
> @@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  			gop->flags = GNTCOPY_source_gref;
> 
>  			gop->source.u.ref = txp->gref;
> -			gop->source.domid = vif->domid;
> +			gop->source.domid = queue->vif->domid;
>  			gop->source.offset = txp->offset;
> 
>  			gop->dest.domid = DOMID_SELF;
> @@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				gop->len = txp->size;
>  				dst_offset += gop->len;
> 
> -				index = pending_index(vif->pending_cons++);
> +				index = pending_index(queue->pending_cons++);
> 
> -				pending_idx = vif->pending_ring[index];
> +				pending_idx = queue->pending_ring[index];
> 
>  				memcpy(&pending_tx_info[pending_idx].req, txp,
>  				       sizeof(*txp));
> @@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				 * fields for head tx req will be set
>  				 * to correct values after the loop.
>  				 */
> -				vif->mmap_pages[pending_idx] = (void *)(~0UL);
> +				queue->mmap_pages[pending_idx] = (void *)(~0UL);
>  				pending_tx_info[pending_idx].head =
>  					INVALID_PENDING_RING_IDX;
> 
> @@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  		first->req.offset = 0;
>  		first->req.size = dst_offset;
>  		first->head = start_idx;
> -		vif->mmap_pages[head_idx] = page;
> +		queue->mmap_pages[head_idx] = page;
>  		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>  	}
> 
> @@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  err:
>  	/* Unwind, freeing all pages and sending error responses. */
>  	while (shinfo->nr_frags-- > start) {
> -		xenvif_idx_release(vif,
> +		xenvif_idx_release(queue,
>  				frag_get_pending_idx(&frags[shinfo->nr_frags]),
>  				XEN_NETIF_RSP_ERROR);
>  	}
>  	/* The head too, if necessary. */
>  	if (start)
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	return NULL;
>  }
> 
> -static int xenvif_tx_check_gop(struct xenvif *vif,
> +static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>  			       struct sk_buff *skb,
>  			       struct gnttab_copy **gopp)
>  {
> @@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	/* Check status of header. */
>  	err = gop->status;
>  	if (unlikely(err))
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		pending_ring_idx_t head;
> 
>  		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
> -		tx_info = &vif->pending_tx_info[pending_idx];
> +		tx_info = &queue->pending_tx_info[pending_idx];
>  		head = tx_info->head;
> 
>  		/* Check error status: if okay then remember grant handle. */
> @@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  			newerr = (++gop)->status;
>  			if (newerr)
>  				break;
> -			peek = vif->pending_ring[pending_index(++head)];
> -		} while (!pending_tx_is_head(vif, peek));
> +			peek = queue->pending_ring[pending_index(++head)];
> +		} while (!pending_tx_is_head(queue, peek));
> 
>  		if (likely(!newerr)) {
>  			/* Had a previous error? Invalidate this fragment. */
>  			if (unlikely(err))
> -				xenvif_idx_release(vif, pending_idx,
> +				xenvif_idx_release(queue, pending_idx,
>  						   XEN_NETIF_RSP_OKAY);
>  			continue;
>  		}
> 
>  		/* Error on this fragment: respond to client with an error. */
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  		/* Not the first error? Preceding frags already invalidated. */
>  		if (err)
> @@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
> 
>  		/* First error: invalidate header and preceding fragments. */
>  		pending_idx = *((u16 *)skb->data);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  		for (j = start; j < i; j++) {
>  			pending_idx = frag_get_pending_idx(&shinfo-
> >frags[j]);
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	return err;
>  }
> 
> -static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> +static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	int nr_frags = shinfo->nr_frags;
> @@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> 
>  		pending_idx = frag_get_pending_idx(frag);
> 
> -		txp = &vif->pending_tx_info[pending_idx].req;
> -		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
> +		txp = &queue->pending_tx_info[pending_idx].req;
> +		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
>  		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
>  		skb->len += txp->size;
>  		skb->data_len += txp->size;
>  		skb->truesize += txp->size;
> 
>  		/* Take an extra reference to offset xenvif_idx_release */
> -		get_page(vif->mmap_pages[pending_idx]);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		get_page(queue->mmap_pages[pending_idx]);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  	}
>  }
> 
> -static int xenvif_get_extras(struct xenvif *vif,
> +static int xenvif_get_extras(struct xenvif_queue *queue,
>  				struct xen_netif_extra_info *extras,
>  				int work_to_do)
>  {
>  	struct xen_netif_extra_info extra;
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
>  		if (unlikely(work_to_do-- <= 0)) {
> -			netdev_err(vif->dev, "Missing extra info\n");
> -			xenvif_fatal_tx_err(vif);
> +			netdev_err(queue->vif->dev, "Missing extra info\n");
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EBADR;
>  		}
> 
> -		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
> +		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
>  		       sizeof(extra));
>  		if (unlikely(!extra.type ||
>  			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> -			vif->tx.req_cons = ++cons;
> -			netdev_err(vif->dev,
> +			queue->tx.req_cons = ++cons;
> +			netdev_err(queue->vif->dev,
>  				   "Invalid extra type: %d\n", extra.type);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
>  		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
> -		vif->tx.req_cons = ++cons;
> +		queue->tx.req_cons = ++cons;
>  	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
> 
>  	return work_to_do;
> @@ -1048,7 +1049,7 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
>  	return 0;
>  }
> 
> -static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
> +static int checksum_setup(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
>  	bool recalculate_partial_csum = false;
> 
> @@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>  	 * recalculate the partial checksum.
>  	 */
>  	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
> -		vif->rx_gso_checksum_fixup++;
> +		queue->stats.rx_gso_checksum_fixup++;
>  		skb->ip_summed = CHECKSUM_PARTIAL;
>  		recalculate_partial_csum = true;
>  	}
> @@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>  	return skb_checksum_setup(skb, recalculate_partial_csum);
>  }
> 
> -static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
> +static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
>  {
>  	u64 now = get_jiffies_64();
> -	u64 next_credit = vif->credit_window_start +
> -		msecs_to_jiffies(vif->credit_usec / 1000);
> +	u64 next_credit = queue->credit_window_start +
> +		msecs_to_jiffies(queue->credit_usec / 1000);
> 
>  	/* Timer could already be pending in rare cases. */
> -	if (timer_pending(&vif->credit_timeout))
> +	if (timer_pending(&queue->credit_timeout))
>  		return true;
> 
>  	/* Passed the point where we can replenish credit? */
>  	if (time_after_eq64(now, next_credit)) {
> -		vif->credit_window_start = now;
> -		tx_add_credit(vif);
> +		queue->credit_window_start = now;
> +		tx_add_credit(queue);
>  	}
> 
>  	/* Still too big to send right now? Set a callback. */
> -	if (size > vif->remaining_credit) {
> -		vif->credit_timeout.data     =
> -			(unsigned long)vif;
> -		vif->credit_timeout.function =
> +	if (size > queue->remaining_credit) {
> +		queue->credit_timeout.data     =
> +			(unsigned long)queue;
> +		queue->credit_timeout.function =
>  			tx_credit_callback;
> -		mod_timer(&vif->credit_timeout,
> +		mod_timer(&queue->credit_timeout,
>  			  next_credit);
> -		vif->credit_window_start = next_credit;
> +		queue->credit_window_start = next_credit;
> 
>  		return true;
>  	}
> @@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>  	return false;
>  }
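The credit scheduler above rate-limits a guest to credit_bytes per credit window. A stripped-down user-space model of the replenish/defer decision (abstract ticks instead of jiffies, no timer, illustrative names):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of tx_credit_exceeded(): a queue may send credit_bytes per
 * window; once `now` passes window_start + window_len the allowance
 * is replenished (the kernel does this via tx_add_credit()). */
struct credit {
	uint64_t window_start;
	uint64_t window_len;
	unsigned long credit_bytes;	/* full allowance per window */
	unsigned long remaining;
};

/* Returns true if sending `size` bytes must be deferred. */
static bool credit_exceeded(struct credit *c, uint64_t now, unsigned size)
{
	uint64_t next_credit = c->window_start + c->window_len;

	/* Passed the point where we can replenish credit? */
	if (now >= next_credit) {
		c->window_start = now;
		c->remaining = c->credit_bytes;
	}

	/* Still too big to send right now? The kernel arms a timer
	 * for next_credit here; this model just reports the decision. */
	if (size > c->remaining) {
		c->window_start = next_credit;
		return true;
	}

	c->remaining -= size;	/* done by the caller in the kernel */
	return false;
}
```

When a send is deferred, window_start advances to next_credit, matching the kernel's bookkeeping for the timer it schedules at that instant.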
> 
> -static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> +static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
> +	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
>  	struct sk_buff *skb;
>  	int ret;
> 
> -	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  		< MAX_PENDING_REQS) &&
> -	       (skb_queue_len(&vif->tx_queue) < budget)) {
> +	       (skb_queue_len(&queue->tx_queue) < budget)) {
>  		struct xen_netif_tx_request txreq;
>  		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
>  		struct page *page;
> @@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		unsigned int data_len;
>  		pending_ring_idx_t index;
> 
> -		if (vif->tx.sring->req_prod - vif->tx.req_cons >
> +		if (queue->tx.sring->req_prod - queue->tx.req_cons >
>  		    XEN_NETIF_TX_RING_SIZE) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Impossible number of requests. "
>  				   "req_prod %d, req_cons %d, size %ld\n",
> -				   vif->tx.sring->req_prod, vif->tx.req_cons,
> +				   queue->tx.sring->req_prod, queue->tx.req_cons,
>  				   XEN_NETIF_TX_RING_SIZE);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			continue;
>  		}
> 
> -		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
> +		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
>  		if (!work_to_do)
>  			break;
> 
> -		idx = vif->tx.req_cons;
> +		idx = queue->tx.req_cons;
>  		rmb(); /* Ensure that we see the request before we copy it. */
> -		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
> +		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
> 
>  		/* Credit-based scheduling. */
> -		if (txreq.size > vif->remaining_credit &&
> -		    tx_credit_exceeded(vif, txreq.size))
> +		if (txreq.size > queue->remaining_credit &&
> +		    tx_credit_exceeded(queue, txreq.size))
>  			break;
> 
> -		vif->remaining_credit -= txreq.size;
> +		queue->remaining_credit -= txreq.size;
> 
>  		work_to_do--;
> -		vif->tx.req_cons = ++idx;
> +		queue->tx.req_cons = ++idx;
> 
>  		memset(extras, 0, sizeof(extras));
>  		if (txreq.flags & XEN_NETTXF_extra_info) {
> -			work_to_do = xenvif_get_extras(vif, extras,
> +			work_to_do = xenvif_get_extras(queue, extras,
>  						       work_to_do);
> -			idx = vif->tx.req_cons;
> +			idx = queue->tx.req_cons;
>  			if (unlikely(work_to_do < 0))
>  				break;
>  		}
> 
> -		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
> +		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
>  		if (unlikely(ret < 0))
>  			break;
> 
>  		idx += ret;
> 
>  		if (unlikely(txreq.size < ETH_HLEN)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Bad packet size: %d\n", txreq.size);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		/* No crossing a page as the payload mustn't fragment. */
>  		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "txreq.offset: %x, size: %u, end: %lu\n",
>  				   txreq.offset, txreq.size,
>  				   (txreq.offset&~PAGE_MASK) + txreq.size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			break;
>  		}
> 
> -		index = pending_index(vif->pending_cons);
> -		pending_idx = vif->pending_ring[index];
> +		index = pending_index(queue->pending_cons);
> +		pending_idx = queue->pending_ring[index];
> 
>  		data_len = (txreq.size > PKT_PROT_LEN &&
>  			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
> @@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
>  				GFP_ATOMIC | __GFP_NOWARN);
>  		if (unlikely(skb == NULL)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't allocate a skb in start_xmit.\n");
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
> @@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  			struct xen_netif_extra_info *gso;
>  			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
> 
> -			if (xenvif_set_skb_gso(vif, skb, gso)) {
> +			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
>  				/* Failure in xenvif_set_skb_gso is fatal. */
>  				kfree_skb(skb);
>  				break;
> @@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		}
> 
>  		/* XXX could copy straight to head */
> -		page = xenvif_alloc_page(vif, pending_idx);
> +		page = xenvif_alloc_page(queue, pending_idx);
>  		if (!page) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		gop->source.u.ref = txreq.gref;
> -		gop->source.domid = vif->domid;
> +		gop->source.domid = queue->vif->domid;
>  		gop->source.offset = txreq.offset;
> 
>  		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
> @@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> 
>  		gop++;
> 
> -		memcpy(&vif->pending_tx_info[pending_idx].req,
> +		memcpy(&queue->pending_tx_info[pending_idx].req,
>  		       &txreq, sizeof(txreq));
> -		vif->pending_tx_info[pending_idx].head = index;
> +		queue->pending_tx_info[pending_idx].head = index;
>  		*((u16 *)skb->data) = pending_idx;
> 
>  		__skb_put(skb, data_len);
> @@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  					     INVALID_PENDING_IDX);
>  		}
> 
> -		vif->pending_cons++;
> +		queue->pending_cons++;
> 
> -		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
> +		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
>  		if (request_gop == NULL) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
>  		gop = request_gop;
> 
> -		__skb_queue_tail(&vif->tx_queue, skb);
> +		__skb_queue_tail(&queue->tx_queue, skb);
> 
> -		vif->tx.req_cons = idx;
> +		queue->tx.req_cons = idx;
> 
> -		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
> +		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
>  			break;
>  	}
> 
> -	return gop - vif->tx_copy_ops;
> +	return gop - queue->tx_copy_ops;
>  }
> 
> 
> -static int xenvif_tx_submit(struct xenvif *vif)
> +static int xenvif_tx_submit(struct xenvif_queue *queue)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops;
> +	struct gnttab_copy *gop = queue->tx_copy_ops;
>  	struct sk_buff *skb;
>  	int work_done = 0;
> 
> -	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
> +	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
>  		struct xen_netif_tx_request *txp;
>  		u16 pending_idx;
>  		unsigned data_len;
> 
>  		pending_idx = *((u16 *)skb->data);
> -		txp = &vif->pending_tx_info[pending_idx].req;
> +		txp = &queue->pending_tx_info[pending_idx].req;
> 
>  		/* Check the remap error code. */
> -		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
> -			netdev_dbg(vif->dev, "netback grant failed.\n");
> +		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
> +			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
>  			skb_shinfo(skb)->nr_frags = 0;
>  			kfree_skb(skb);
>  			continue;
> @@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
> 
>  		data_len = skb->len;
>  		memcpy(skb->data,
> -		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
> +		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
>  		       data_len);
>  		if (data_len < txp->size) {
>  			/* Append the packet payload as a fragment. */
> @@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  			txp->size -= data_len;
>  		} else {
>  			/* Schedule a response immediately. */
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
> 
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(queue, skb);
> 
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
>  		}
> 
> -		skb->dev      = vif->dev;
> +		skb->dev      = queue->vif->dev;
>  		skb->protocol = eth_type_trans(skb, skb->dev);
>  		skb_reset_network_header(skb);
> 
> -		if (checksum_setup(vif, skb)) {
> -			netdev_dbg(vif->dev,
> +		if (checksum_setup(queue, skb)) {
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
>  			kfree_skb(skb);
>  			continue;
> @@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  				DIV_ROUND_UP(skb->len - hdrlen, mss);
>  		}
> 
> -		vif->dev->stats.rx_bytes += skb->len;
> -		vif->dev->stats.rx_packets++;
> +		queue->stats.rx_bytes += skb->len;
> +		queue->stats.rx_packets++;
> 
>  		work_done++;
> 
> @@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  }
> 
>  /* Called after netfront has transmitted */
> -int xenvif_tx_action(struct xenvif *vif, int budget)
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget)
>  {
>  	unsigned nr_gops;
>  	int work_done;
> 
> -	if (unlikely(!tx_work_todo(vif)))
> +	if (unlikely(!tx_work_todo(queue)))
>  		return 0;
> 
> -	nr_gops = xenvif_tx_build_gops(vif, budget);
> +	nr_gops = xenvif_tx_build_gops(queue, budget);
> 
>  	if (nr_gops == 0)
>  		return 0;
> 
> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
> +	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
> 
> -	work_done = xenvif_tx_submit(vif);
> +	work_done = xenvif_tx_submit(queue);
> 
>  	return work_done;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status)
>  {
>  	struct pending_tx_info *pending_tx_info;
>  	pending_ring_idx_t head;
>  	u16 peek; /* peek into next tx request */
> 
> -	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
> +	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
> 
>  	/* Already complete? */
> -	if (vif->mmap_pages[pending_idx] == NULL)
> +	if (queue->mmap_pages[pending_idx] == NULL)
>  		return;
> 
> -	pending_tx_info = &vif->pending_tx_info[pending_idx];
> +	pending_tx_info = &queue->pending_tx_info[pending_idx];
> 
>  	head = pending_tx_info->head;
> 
> -	BUG_ON(!pending_tx_is_head(vif, head));
> -	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
> +	BUG_ON(!pending_tx_is_head(queue, head));
> +	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
> 
>  	do {
>  		pending_ring_idx_t index;
>  		pending_ring_idx_t idx = pending_index(head);
> -		u16 info_idx = vif->pending_ring[idx];
> +		u16 info_idx = queue->pending_ring[idx];
> 
> -		pending_tx_info = &vif->pending_tx_info[info_idx];
> -		make_tx_response(vif, &pending_tx_info->req, status);
> +		pending_tx_info = &queue->pending_tx_info[info_idx];
> +		make_tx_response(queue, &pending_tx_info->req, status);
> 
>  		/* Setting any number other than
>  		 * INVALID_PENDING_RING_IDX indicates this slot is
> @@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  		 */
>  		pending_tx_info->head = 0;
> 
> -		index = pending_index(vif->pending_prod++);
> -		vif->pending_ring[index] = vif->pending_ring[info_idx];
> +		index = pending_index(queue->pending_prod++);
> +		queue->pending_ring[index] = queue->pending_ring[info_idx];
> 
> -		peek = vif->pending_ring[pending_index(++head)];
> +		peek = queue->pending_ring[pending_index(++head)];
> 
> -	} while (!pending_tx_is_head(vif, peek));
> +	} while (!pending_tx_is_head(queue, peek));
> 
> -	put_page(vif->mmap_pages[pending_idx]);
> -	vif->mmap_pages[pending_idx] = NULL;
> +	put_page(queue->mmap_pages[pending_idx]);
> +	queue->mmap_pages[pending_idx] = NULL;
>  }
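xenvif_idx_release() above hands completed slots back to the pending ring. The recycling pattern — a fixed pool of slot indices circulating through a power-of-two ring with free-running producer/consumer counters — can be sketched in user space like this (sizes and names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Toy version of the pending ring. */
#define MAX_PENDING 8			/* must be a power of two */
#define pending_index(i) ((i) & (MAX_PENDING - 1))

struct pending_pool {
	uint16_t ring[MAX_PENDING];	/* holds free slot indices */
	unsigned prod, cons;
};

static void pool_init(struct pending_pool *p)
{
	for (unsigned i = 0; i < MAX_PENDING; i++)
		p->ring[i] = (uint16_t)i;	/* every slot starts free */
	p->prod = MAX_PENDING;
	p->cons = 0;
}

static unsigned pool_free(const struct pending_pool *p)
{
	return p->prod - p->cons;	/* unsigned math handles wrap */
}

/* Consume a free slot (the xenvif_tx_build_gops side). */
static uint16_t pool_get(struct pending_pool *p)
{
	return p->ring[pending_index(p->cons++)];
}

/* Return a completed slot (the xenvif_idx_release side). */
static void pool_put(struct pending_pool *p, uint16_t idx)
{
	p->ring[pending_index(p->prod++)] = idx;
}
```

Because prod and cons only ever increase, the mask in pending_index() is what keeps accesses inside the array, exactly as in the driver.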
> 
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st)
>  {
> -	RING_IDX i = vif->tx.rsp_prod_pvt;
> +	RING_IDX i = queue->tx.rsp_prod_pvt;
>  	struct xen_netif_tx_response *resp;
>  	int notify;
> 
> -	resp = RING_GET_RESPONSE(&vif->tx, i);
> +	resp = RING_GET_RESPONSE(&queue->tx, i);
>  	resp->id     = txp->id;
>  	resp->status = st;
> 
>  	if (txp->flags & XEN_NETTXF_extra_info)
> -		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> +		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> 
> -	vif->tx.rsp_prod_pvt = ++i;
> -	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
> +	queue->tx.rsp_prod_pvt = ++i;
> +	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
>  	if (notify)
> -		notify_remote_via_irq(vif->tx_irq);
> +		notify_remote_via_irq(queue->tx_irq);
>  }
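make_tx_response() finishes with RING_PUSH_RESPONSES_AND_CHECK_NOTIFY, which publishes the private producer index and raises an event only if the newly published responses crossed the frontend's rsp_event mark. A user-space model of that notify test (field names loosely follow the shared-ring macros; memory barriers are omitted):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the response path: the backend writes at a private
 * producer index, publishes it, and notifies the frontend only when
 * the new responses include rsp_event (the index one past the last
 * response the frontend has seen). */
struct tx_ring {
	int16_t status[16];	/* response slots (status codes only) */
	unsigned rsp_prod;	/* published producer index */
	unsigned rsp_prod_pvt;	/* backend-private producer index */
	unsigned rsp_event;	/* frontend wants an event past this */
};

/* Queue one response; returns true if the frontend needs an irq. */
static bool push_response(struct tx_ring *r, int16_t st)
{
	unsigned old = r->rsp_prod;

	r->status[r->rsp_prod_pvt % 16] = st;
	r->rsp_prod_pvt++;
	r->rsp_prod = r->rsp_prod_pvt;	/* publish (barriers omitted) */

	/* Same test as RING_PUSH_RESPONSES_AND_CHECK_NOTIFY. */
	return (unsigned)(r->rsp_prod - r->rsp_event) <
	       (unsigned)(r->rsp_prod - old);
}
```

The unsigned subtraction makes the comparison robust to index wrap-around, which is why the shared-ring indices are free-running rather than reduced modulo the ring size.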
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags)
>  {
> -	RING_IDX i = vif->rx.rsp_prod_pvt;
> +	RING_IDX i = queue->rx.rsp_prod_pvt;
>  	struct xen_netif_rx_response *resp;
> 
> -	resp = RING_GET_RESPONSE(&vif->rx, i);
> +	resp = RING_GET_RESPONSE(&queue->rx, i);
>  	resp->offset     = offset;
>  	resp->flags      = flags;
>  	resp->id         = id;
> @@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>  	if (st < 0)
>  		resp->status = (s16)st;
> 
> -	vif->rx.rsp_prod_pvt = ++i;
> +	queue->rx.rsp_prod_pvt = ++i;
> 
>  	return resp;
>  }
> 
> -static inline int rx_work_todo(struct xenvif *vif)
> +static inline int rx_work_todo(struct xenvif_queue *queue)
>  {
> -	return !skb_queue_empty(&vif->rx_queue) &&
> -	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
> +	return !skb_queue_empty(&queue->rx_queue) &&
> +	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
>  }
> 
> -static inline int tx_work_todo(struct xenvif *vif)
> +static inline int tx_work_todo(struct xenvif_queue *queue)
>  {
> 
> -	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
> -	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
> +	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  	     < MAX_PENDING_REQS))
>  		return 1;
> 
>  	return 0;
>  }
> 
> -void xenvif_unmap_frontend_rings(struct xenvif *vif)
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
>  {
> -	if (vif->tx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->tx.sring);
> -	if (vif->rx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->rx.sring);
> +	if (queue->tx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->tx.sring);
> +	if (queue->rx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->rx.sring);
>  }
> 
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref)
>  {
> @@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
> 
>  	int err = -ENOMEM;
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     tx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	txs = (struct xen_netif_tx_sring *)addr;
> -	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     rx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	rxs = (struct xen_netif_rx_sring *)addr;
> -	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
> 
>  	return 0;
> 
>  err:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  	return err;
>  }
> 
> -void xenvif_stop_queue(struct xenvif *vif)
> +static inline void xenvif_wake_queue(struct xenvif_queue *queue)
>  {
> -	if (!vif->can_queue)
> -		return;
> +	struct net_device *dev = queue->vif->dev;
> +	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
> +}
> 
> -	netif_stop_queue(vif->dev);
> +static void xenvif_start_queue(struct xenvif_queue *queue)
> +{
> +	if (xenvif_schedulable(queue->vif))
> +		xenvif_wake_queue(queue);
>  }
> 
> -static void xenvif_start_queue(struct xenvif *vif)
> +static int xenvif_queue_stopped(struct xenvif_queue *queue)
>  {
> -	if (xenvif_schedulable(vif))
> -		netif_wake_queue(vif->dev);
> +	struct net_device *dev = queue->vif->dev;
> +	unsigned int id = queue->id;
> +	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
>  }
> 
>  int xenvif_kthread(void *data)
>  {
> -	struct xenvif *vif = data;
> +	struct xenvif_queue *queue = data;
>  	struct sk_buff *skb;
> 
>  	while (!kthread_should_stop()) {
> -		wait_event_interruptible(vif->wq,
> -					 rx_work_todo(vif) ||
> +		wait_event_interruptible(queue->wq,
> +					 rx_work_todo(queue) ||
>  					 kthread_should_stop());
>  		if (kthread_should_stop())
>  			break;
> 
> -		if (!skb_queue_empty(&vif->rx_queue))
> -			xenvif_rx_action(vif);
> +		if (!skb_queue_empty(&queue->rx_queue))
> +			xenvif_rx_action(queue);
> 
> -		if (skb_queue_empty(&vif->rx_queue) &&
> -		    netif_queue_stopped(vif->dev))
> -			xenvif_start_queue(vif);
> +		if (skb_queue_empty(&queue->rx_queue) &&
> +		    xenvif_queue_stopped(queue))
> +			xenvif_start_queue(queue);
> 
>  		cond_resched();
>  	}
> 
>  	/* Bin any remaining skbs */
> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
>  		dev_kfree_skb(skb);
> 
>  	return 0;
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index 7a206cf..f23ea0a 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -19,6 +19,7 @@
>  */
> 
>  #include "common.h"
> +#include <linux/vmalloc.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -34,8 +35,9 @@ struct backend_info {
>  	u8 have_hotplug_status_watch:1;
>  };
> 
> -static int connect_rings(struct backend_info *);
> -static void connect(struct backend_info *);
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
> +static void connect(struct backend_info *be);
> +static int read_xenbus_vif_flags(struct backend_info *be);
>  static void backend_create_xenvif(struct backend_info *be);
>  static void unregister_hotplug_status_watch(struct backend_info *be);
>  static void set_backend_state(struct backend_info *be,
> @@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
>  {
>  	int err;
>  	struct xenbus_device *dev = be->dev;
> -
> -	err = connect_rings(be);
> -	if (err)
> -		return;
> +	unsigned long credit_bytes, credit_usec;
> +	unsigned int queue_index;
> +	struct xenvif_queue *queue;
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
>  		return;
>  	}
> 
> -	xen_net_read_rate(dev, &be->vif->credit_bytes,
> -			  &be->vif->credit_usec);
> -	be->vif->remaining_credit = be->vif->credit_bytes;
> +	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
> +	read_xenbus_vif_flags(be);
> +
> +	be->vif->num_queues = 1;
> +	be->vif->queues = vzalloc(be->vif->num_queues *
> +			sizeof(struct xenvif_queue));
> +
> +	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
> +		queue = &be->vif->queues[queue_index];
> +		queue->vif = be->vif;
> +		queue->id = queue_index;
> +		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
> +				be->vif->dev->name, queue->id);
> +
> +		xenvif_init_queue(queue);
> +
> +		queue->remaining_credit = credit_bytes;
> +
> +		err = connect_rings(be, queue);
> +		if (err)
> +			goto err;
> +	}
> +
> +	xenvif_carrier_on(be->vif);
> 
>  	unregister_hotplug_status_watch(be);
>  	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
> @@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
>  	if (!err)
>  		be->have_hotplug_status_watch = 1;
> 
> -	netif_wake_queue(be->vif->dev);
> +	netif_tx_wake_all_queues(be->vif->dev);
> +
> +	return;
> +
> +err:
> +	vfree(be->vif->queues);
> +	be->vif->queues = NULL;
> +	be->vif->num_queues = 0;
> +	return;
>  }
> 
> 
> -static int connect_rings(struct backend_info *be)
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  {
> -	struct xenvif *vif = be->vif;
>  	struct xenbus_device *dev = be->dev;
>  	unsigned long tx_ring_ref, rx_ring_ref;
> -	unsigned int tx_evtchn, rx_evtchn, rx_copy;
> +	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> -	int val;
> 
>  	err = xenbus_gather(XBT_NIL, dev->otherend,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
> @@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
>  		rx_evtchn = tx_evtchn;
>  	}
> 
> +	/* Map the shared frame, irq etc. */
> +	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
> +			     tx_evtchn, rx_evtchn);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err,
> +				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> +				 tx_ring_ref, rx_ring_ref,
> +				 tx_evtchn, rx_evtchn);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static int read_xenbus_vif_flags(struct backend_info *be)
> +{
> +	struct xenvif *vif = be->vif;
> +	struct xenbus_device *dev = be->dev;
> +	unsigned int rx_copy;
> +	int err, val;
> +
>  	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
>  			   &rx_copy);
>  	if (err == -ENOENT) {
> @@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
>  		val = 0;
>  	vif->ipv6_csum = !!val;
> 
> -	/* Map the shared frame, irq etc. */
> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
> -			     tx_evtchn, rx_evtchn);
> -	if (err) {
> -		xenbus_dev_fatal(dev, err,
> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> -				 tx_ring_ref, rx_ring_ref,
> -				 tx_evtchn, rx_evtchn);
> -		return err;
> -	}
>  	return 0;
>  }
> 
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:52:19 2014
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V5 net-next 1/5] xen-netback: Factor queue-specific
	data into queue struct.
Thread-Index: AQHPMW1v/GZt1s5x30ilP+CR+n6W4JrEfOuQ
Date: Mon, 24 Feb 2014 14:52:02 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD025788C@AMSPEX01CL01.citrite.net>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5 net-next 1/5] xen-netback: Factor
 queue-specific data into queue struct.

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 24 February 2014 14:33
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; Andrew
> Bennieston
> Subject: [PATCH V5 net-next 1/5] xen-netback: Factor queue-specific data
> into queue struct.
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netback, move the
> queue-specific data from struct xenvif into struct xenvif_queue, and
> update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_netdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0 for a single queue and uses
> skb_get_hash() to compute the queue index otherwise.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
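A sketch of the "trivial queue selection function" the commit message describes: return 0 for a single queue, otherwise reduce the skb_get_hash() value onto the queue range. Here skb_get_hash() is modelled by a caller-supplied flow hash, and the modulo reduction is an assumption for illustration, not necessarily the patch's exact arithmetic:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for an ndo_select_queue-style callback. */
static uint16_t select_queue(uint32_t flow_hash, unsigned num_queues)
{
	if (num_queues == 1)
		return 0;		/* single queue: no hashing needed */
	return flow_hash % num_queues;	/* same flow -> same queue */
}
```

Keying on a flow hash keeps all packets of one connection on one queue, which preserves per-flow ordering across the multiple queues the series introduces.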

> ---
>  drivers/net/xen-netback/common.h    |   85 ++++--
>  drivers/net/xen-netback/interface.c |  329 ++++++++++++++--------
>  drivers/net/xen-netback/netback.c   |  530 ++++++++++++++++++-----------------
>  drivers/net/xen-netback/xenbus.c    |   87 ++++--
>  4 files changed, 608 insertions(+), 423 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index ae413a2..4176539 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -108,17 +108,39 @@ struct xenvif_rx_meta {
>   */
>  #define MAX_GRANT_COPY_OPS (MAX_SKB_FRAGS * XEN_NETIF_RX_RING_SIZE)
> 
> -struct xenvif {
> -	/* Unique identifier for this interface. */
> -	domid_t          domid;
> -	unsigned int     handle;
> +/* Queue name is interface name with "-qNNN" appended */
> +#define QUEUE_NAME_SIZE (IFNAMSIZ + 6)
> +
> +/* IRQ name is queue name with "-tx" or "-rx" appended */
> +#define IRQ_NAME_SIZE (QUEUE_NAME_SIZE + 3)
> +
> +struct xenvif;
> +
> +struct xenvif_stats {
> +	/* Stats fields to be updated per-queue.
> +	 * A subset of struct net_device_stats that contains only the
> +	 * fields that are updated in netback.c for each queue.
> +	 */
> +	unsigned int rx_bytes;
> +	unsigned int rx_packets;
> +	unsigned int tx_bytes;
> +	unsigned int tx_packets;
> +
> +	/* Additional stats used by xenvif */
> +	unsigned long rx_gso_checksum_fixup;
> +};
> +
> +struct xenvif_queue { /* Per-queue data for xenvif */
> +	unsigned int id; /* Queue ID, 0-based */
> +	char name[QUEUE_NAME_SIZE]; /* DEVNAME-qN */
> +	struct xenvif *vif; /* Parent VIF */
> 
>  	/* Use NAPI for guest TX */
>  	struct napi_struct napi;
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int tx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char tx_irq_name[IFNAMSIZ+4]; /* DEVNAME-tx */
> +	char tx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-tx */
>  	struct xen_netif_tx_back_ring tx;
>  	struct sk_buff_head tx_queue;
>  	struct page *mmap_pages[MAX_PENDING_REQS];
> @@ -140,19 +162,34 @@ struct xenvif {
>  	/* When feature-split-event-channels = 0, tx_irq = rx_irq. */
>  	unsigned int rx_irq;
>  	/* Only used when feature-split-event-channels = 1 */
> -	char rx_irq_name[IFNAMSIZ+4]; /* DEVNAME-rx */
> +	char rx_irq_name[IRQ_NAME_SIZE]; /* DEVNAME-qN-rx */
>  	struct xen_netif_rx_back_ring rx;
>  	struct sk_buff_head rx_queue;
>  	RING_IDX rx_last_skb_slots;
> 
> -	/* This array is allocated seperately as it is large */
> -	struct gnttab_copy *grant_copy_op;
> +	struct gnttab_copy grant_copy_op[MAX_GRANT_COPY_OPS];
> 
>  	/* We create one meta structure per ring request we consume, so
>  	 * the maximum number is the same as the ring size.
>  	 */
>  	struct xenvif_rx_meta meta[XEN_NETIF_RX_RING_SIZE];
> 
> +	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> +	unsigned long   credit_bytes;
> +	unsigned long   credit_usec;
> +	unsigned long   remaining_credit;
> +	struct timer_list credit_timeout;
> +	u64 credit_window_start;
> +
> +	/* Statistics */
> +	struct xenvif_stats stats;
> +};
> +
> +struct xenvif {
> +	/* Unique identifier for this interface. */
> +	domid_t          domid;
> +	unsigned int     handle;
> +
>  	u8               fe_dev_addr[6];
> 
>  	/* Frontend feature information. */
> @@ -166,15 +203,9 @@ struct xenvif {
>  	/* Internal feature information. */
>  	u8 can_queue:1;	    /* can queue packets for receiver? */
> 
> -	/* Transmit shaping: allow 'credit_bytes' every 'credit_usec'. */
> -	unsigned long   credit_bytes;
> -	unsigned long   credit_usec;
> -	unsigned long   remaining_credit;
> -	struct timer_list credit_timeout;
> -	u64 credit_window_start;
> -
> -	/* Statistics */
> -	unsigned long rx_gso_checksum_fixup;
> +	/* Queues */
> +	unsigned int num_queues;
> +	struct xenvif_queue *queues;
> 
>  	/* Miscellaneous private stuff. */
>  	struct net_device *dev;
> @@ -189,7 +220,9 @@ struct xenvif *xenvif_alloc(struct device *parent,
>  			    domid_t domid,
>  			    unsigned int handle);
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue);
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn);
>  void xenvif_disconnect(struct xenvif *vif);
> @@ -200,31 +233,31 @@ void xenvif_xenbus_fini(void);
> 
>  int xenvif_schedulable(struct xenvif *vif);
> 
> -int xenvif_must_stop_queue(struct xenvif *vif);
> +int xenvif_must_stop_queue(struct xenvif_queue *queue);
> 
>  /* (Un)Map communication rings. */
> -void xenvif_unmap_frontend_rings(struct xenvif *vif);
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue);
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref);
> 
>  /* Check for SKBs from frontend and schedule backend processing */
> -void xenvif_check_rx_xenvif(struct xenvif *vif);
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue);
> 
>  /* Prevent the device from generating any further traffic. */
>  void xenvif_carrier_off(struct xenvif *vif);
> 
> -int xenvif_tx_action(struct xenvif *vif, int budget);
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget);
> 
>  int xenvif_kthread(void *data);
> -void xenvif_kick_thread(struct xenvif *vif);
> +void xenvif_kick_thread(struct xenvif_queue *queue);
> 
>  /* Determine whether the needed number of slots (req) are available,
>   * and set req_event if not.
>   */
> -bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed);
> +bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed);
> 
> -void xenvif_stop_queue(struct xenvif *vif);
> +void xenvif_carrier_on(struct xenvif *vif);
> 
>  extern bool separate_tx_rx_irq;
> 
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 7669d49..0297980 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -34,7 +34,6 @@
>  #include <linux/ethtool.h>
>  #include <linux/rtnetlink.h>
>  #include <linux/if_vlan.h>
> -#include <linux/vmalloc.h>
> 
>  #include <xen/events.h>
>  #include <asm/xen/hypercall.h>
> @@ -42,6 +41,16 @@
>  #define XENVIF_QUEUE_LENGTH 32
>  #define XENVIF_NAPI_WEIGHT  64
> 
> +static inline void xenvif_stop_queue(struct xenvif_queue *queue)
> +{
> +	struct net_device *dev = queue->vif->dev;
> +
> +	if (!queue->vif->can_queue)
> +		return;
> +
> +	netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
> +}
> +
>  int xenvif_schedulable(struct xenvif *vif)
>  {
>  	return netif_running(vif->dev) && netif_carrier_ok(vif->dev);
> @@ -49,20 +58,20 @@ int xenvif_schedulable(struct xenvif *vif)
> 
>  static irqreturn_t xenvif_tx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	if (RING_HAS_UNCONSUMED_REQUESTS(&vif->tx))
> -		napi_schedule(&vif->napi);
> +	if (RING_HAS_UNCONSUMED_REQUESTS(&queue->tx))
> +		napi_schedule(&queue->napi);
> 
>  	return IRQ_HANDLED;
>  }
> 
> -static int xenvif_poll(struct napi_struct *napi, int budget)
> +int xenvif_poll(struct napi_struct *napi, int budget)
>  {
> -	struct xenvif *vif = container_of(napi, struct xenvif, napi);
> +	struct xenvif_queue *queue = container_of(napi, struct xenvif_queue, napi);
>  	int work_done;
> 
> -	work_done = xenvif_tx_action(vif, budget);
> +	work_done = xenvif_tx_action(queue, budget);
> 
>  	if (work_done < budget) {
>  		int more_to_do = 0;
> @@ -86,7 +95,7 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
> 
>  		local_irq_save(flags);
> 
> -		RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
> +		RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
>  		if (!more_to_do)
>  			__napi_complete(napi);
> 
> @@ -98,9 +107,9 @@ static int xenvif_poll(struct napi_struct *napi, int budget)
> 
>  static irqreturn_t xenvif_rx_interrupt(int irq, void *dev_id)
>  {
> -	struct xenvif *vif = dev_id;
> +	struct xenvif_queue *queue = dev_id;
> 
> -	xenvif_kick_thread(vif);
> +	xenvif_kick_thread(queue);
> 
>  	return IRQ_HANDLED;
>  }
> @@ -113,15 +122,48 @@ static irqreturn_t xenvif_interrupt(int irq, void *dev_id)
>  	return IRQ_HANDLED;
>  }
> 
> +static u16 xenvif_select_queue(struct net_device *dev, struct sk_buff *skb,
> +			       void *accel_priv, select_queue_fallback_t fallback)
> +{
> +	struct xenvif *vif = netdev_priv(dev);
> +	u32 hash;
> +	u16 queue_index;
> +
> +	/* First, check if there is only one queue to optimise the
> +	 * single-queue or old frontend scenario.
> +	 */
> +	if (vif->num_queues == 1) {
> +		queue_index = 0;
> +	} else {
> +		/* Use skb_get_hash to obtain an L4 hash if available */
> +		hash = skb_get_hash(skb);
> +		queue_index = (u16) (((u64)hash * vif->num_queues) >> 32);
> +	}
> +
> +	return queue_index;
> +}
> +
>  static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	struct xenvif_queue *queue = NULL;
> +	u16 index;
>  	int min_slots_needed;
> 
>  	BUG_ON(skb->dev != dev);
> 
> +	/* Drop the packet if queues are not set up */
> +	if (vif->num_queues < 1)
> +		goto drop;
> +
> +	/* Obtain the queue to be used to transmit this packet */
> +	index = skb_get_queue_mapping(skb);
> +	if (index >= vif->num_queues)
> +		index = 0; /* Fall back to queue 0 if out of range */
> +	queue = &vif->queues[index];
> +
>  	/* Drop the packet if vif is not ready */
> -	if (vif->task == NULL || !xenvif_schedulable(vif))
> +	if (queue->task == NULL || !xenvif_schedulable(vif))
>  		goto drop;
> 
>  	/* At best we'll need one slot for the header and one for each
> @@ -140,11 +182,11 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  	 * then turn off the queue to give the ring a chance to
>  	 * drain.
>  	 */
> -	if (!xenvif_rx_ring_slots_available(vif, min_slots_needed))
> -		xenvif_stop_queue(vif);
> +	if (!xenvif_rx_ring_slots_available(queue, min_slots_needed))
> +		xenvif_stop_queue(queue);
> 
> -	skb_queue_tail(&vif->rx_queue, skb);
> -	xenvif_kick_thread(vif);
> +	skb_queue_tail(&queue->rx_queue, skb);
> +	xenvif_kick_thread(queue);
> 
>  	return NETDEV_TX_OK;
> 
> @@ -157,25 +199,58 @@ static int xenvif_start_xmit(struct sk_buff *skb, struct net_device *dev)
>  static struct net_device_stats *xenvif_get_stats(struct net_device *dev)
>  {
>  	struct xenvif *vif = netdev_priv(dev);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned long rx_bytes = 0;
> +	unsigned long rx_packets = 0;
> +	unsigned long tx_bytes = 0;
> +	unsigned long tx_packets = 0;
> +	unsigned int index;
> +
> +	/* Aggregate tx and rx stats from each queue */
> +	for (index = 0; index < vif->num_queues; ++index) {
> +		queue = &vif->queues[index];
> +		rx_bytes += queue->stats.rx_bytes;
> +		rx_packets += queue->stats.rx_packets;
> +		tx_bytes += queue->stats.tx_bytes;
> +		tx_packets += queue->stats.tx_packets;
> +	}
> +
> +	vif->dev->stats.rx_bytes = rx_bytes;
> +	vif->dev->stats.rx_packets = rx_packets;
> +	vif->dev->stats.tx_bytes = tx_bytes;
> +	vif->dev->stats.tx_packets = tx_packets;
> +
>  	return &vif->dev->stats;
>  }
> 
>  static void xenvif_up(struct xenvif *vif)
>  {
> -	napi_enable(&vif->napi);
> -	enable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		enable_irq(vif->rx_irq);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_enable(&queue->napi);
> +		enable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			enable_irq(queue->rx_irq);
> +		xenvif_check_rx_xenvif(queue);
> +	}
>  }
> 
>  static void xenvif_down(struct xenvif *vif)
>  {
> -	napi_disable(&vif->napi);
> -	disable_irq(vif->tx_irq);
> -	if (vif->tx_irq != vif->rx_irq)
> -		disable_irq(vif->rx_irq);
> -	del_timer_sync(&vif->credit_timeout);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		napi_disable(&queue->napi);
> +		disable_irq(queue->tx_irq);
> +		if (queue->tx_irq != queue->rx_irq)
> +			disable_irq(queue->rx_irq);
> +		del_timer_sync(&queue->credit_timeout);
> +	}
>  }
> 
>  static int xenvif_open(struct net_device *dev)
> @@ -183,7 +258,7 @@ static int xenvif_open(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_up(vif);
> -	netif_start_queue(dev);
> +	netif_tx_start_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -192,7 +267,7 @@ static int xenvif_close(struct net_device *dev)
>  	struct xenvif *vif = netdev_priv(dev);
>  	if (netif_carrier_ok(dev))
>  		xenvif_down(vif);
> -	netif_stop_queue(dev);
> +	netif_tx_stop_all_queues(dev);
>  	return 0;
>  }
> 
> @@ -232,7 +307,7 @@ static const struct xenvif_stat {
>  } xenvif_stats[] = {
>  	{
>  		"rx_gso_checksum_fixup",
> -		offsetof(struct xenvif, rx_gso_checksum_fixup)
> +		offsetof(struct xenvif_stats, rx_gso_checksum_fixup)
>  	},
>  };
> 
> @@ -249,11 +324,19 @@ static int xenvif_get_sset_count(struct net_device *dev, int string_set)
>  static void xenvif_get_ethtool_stats(struct net_device *dev,
>  				     struct ethtool_stats *stats, u64 * data)
>  {
> -	void *vif = netdev_priv(dev);
> +	struct xenvif *vif = netdev_priv(dev);
>  	int i;
> -
> -	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++)
> -		data[i] = *(unsigned long *)(vif + xenvif_stats[i].offset);
> +	unsigned int queue_index;
> +	struct xenvif_stats *vif_stats;
> +
> +	for (i = 0; i < ARRAY_SIZE(xenvif_stats); i++) {
> +		unsigned long accum = 0;
> +		for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +			vif_stats = &vif->queues[queue_index].stats;
> +			accum += *(unsigned long *)((void *)vif_stats + xenvif_stats[i].offset);
> +		}
> +		data[i] = accum;
> +	}
>  }
> 
>  static void xenvif_get_strings(struct net_device *dev, u32 stringset, u8 * data)
> @@ -286,6 +369,7 @@ static const struct net_device_ops xenvif_netdev_ops = {
>  	.ndo_fix_features = xenvif_fix_features,
>  	.ndo_set_mac_address = eth_mac_addr,
>  	.ndo_validate_addr   = eth_validate_addr,
> +	.ndo_select_queue = xenvif_select_queue,
>  };
> 
>  struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> @@ -295,10 +379,9 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	struct net_device *dev;
>  	struct xenvif *vif;
>  	char name[IFNAMSIZ] = {};
> -	int i;
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev(sizeof(struct xenvif), name, ether_setup);
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> @@ -308,24 +391,15 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> 
>  	vif = netdev_priv(dev);
> 
> -	vif->grant_copy_op = vmalloc(sizeof(struct gnttab_copy) *
> -				     MAX_GRANT_COPY_OPS);
> -	if (vif->grant_copy_op == NULL) {
> -		pr_warn("Could not allocate grant copy space for %s\n", name);
> -		free_netdev(dev);
> -		return ERR_PTR(-ENOMEM);
> -	}
> -
>  	vif->domid  = domid;
>  	vif->handle = handle;
>  	vif->can_sg = 1;
>  	vif->ip_csum = 1;
>  	vif->dev = dev;
> 
> -	vif->credit_bytes = vif->remaining_credit = ~0UL;
> -	vif->credit_usec  = 0UL;
> -	init_timer(&vif->credit_timeout);
> -	vif->credit_window_start = get_jiffies_64();
> +	/* Start out with no queues */
> +	vif->num_queues = 0;
> +	vif->queues = NULL;
> 
>  	dev->netdev_ops	= &xenvif_netdev_ops;
>  	dev->hw_features = NETIF_F_SG |
> @@ -336,16 +410,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
> 
>  	dev->tx_queue_len = XENVIF_QUEUE_LENGTH;
> 
> -	skb_queue_head_init(&vif->rx_queue);
> -	skb_queue_head_init(&vif->tx_queue);
> -
> -	vif->pending_cons = 0;
> -	vif->pending_prod = MAX_PENDING_REQS;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->pending_ring[i] = i;
> -	for (i = 0; i < MAX_PENDING_REQS; i++)
> -		vif->mmap_pages[i] = NULL;
> -
>  	/*
>  	 * Initialise a dummy MAC address. We choose the numerically
>  	 * largest non-broadcast address to prevent the address getting
> @@ -355,8 +419,6 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	memset(dev->dev_addr, 0xFF, ETH_ALEN);
>  	dev->dev_addr[0] &= ~0x01;
> 
> -	netif_napi_add(dev, &vif->napi, xenvif_poll, XENVIF_NAPI_WEIGHT);
> -
>  	netif_carrier_off(dev);
> 
>  	err = register_netdev(dev);
> @@ -373,85 +435,111 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	return vif;
>  }
> 
> -int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
> +void xenvif_init_queue(struct xenvif_queue *queue)
> +{
> +	int i;
> +
> +	queue->credit_bytes = queue->remaining_credit = ~0UL;
> +	queue->credit_usec  = 0UL;
> +	init_timer(&queue->credit_timeout);
> +	queue->credit_window_start = get_jiffies_64();
> +
> +	skb_queue_head_init(&queue->rx_queue);
> +	skb_queue_head_init(&queue->tx_queue);
> +
> +	queue->pending_cons = 0;
> +	queue->pending_prod = MAX_PENDING_REQS;
> +	for (i = 0; i < MAX_PENDING_REQS; ++i) {
> +		queue->pending_ring[i] = i;
> +		queue->mmap_pages[i] = NULL;
> +	}
> +
> +	netif_napi_add(queue->vif->dev, &queue->napi, xenvif_poll,
> +			XENVIF_NAPI_WEIGHT);
> +}
> +
> +void xenvif_carrier_on(struct xenvif *vif)
> +{
> +	rtnl_lock();
> +	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> +		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> +	netdev_update_features(vif->dev);
> +	netif_carrier_on(vif->dev);
> +	if (netif_running(vif->dev))
> +		xenvif_up(vif);
> +	rtnl_unlock();
> +}
> +
> +int xenvif_connect(struct xenvif_queue *queue, unsigned long tx_ring_ref,
>  		   unsigned long rx_ring_ref, unsigned int tx_evtchn,
>  		   unsigned int rx_evtchn)
>  {
>  	struct task_struct *task;
>  	int err = -ENOMEM;
> 
> -	BUG_ON(vif->tx_irq);
> -	BUG_ON(vif->task);
> +	BUG_ON(queue->tx_irq);
> +	BUG_ON(queue->task);
> 
> -	err = xenvif_map_frontend_rings(vif, tx_ring_ref, rx_ring_ref);
> +	err = xenvif_map_frontend_rings(queue, tx_ring_ref, rx_ring_ref);
>  	if (err < 0)
>  		goto err;
> 
> -	init_waitqueue_head(&vif->wq);
> +	init_waitqueue_head(&queue->wq);
> 
>  	if (tx_evtchn == rx_evtchn) {
>  		/* feature-split-event-channels == 0 */
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_interrupt, 0,
> -			vif->dev->name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_interrupt, 0,
> +			queue->name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = vif->rx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = queue->rx_irq = err;
> +		disable_irq(queue->tx_irq);
>  	} else {
>  		/* feature-split-event-channels == 1 */
> -		snprintf(vif->tx_irq_name, sizeof(vif->tx_irq_name),
> -			 "%s-tx", vif->dev->name);
> +		snprintf(queue->tx_irq_name, sizeof(queue->tx_irq_name),
> +			 "%s-tx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> -			vif->tx_irq_name, vif);
> +			queue->vif->domid, tx_evtchn, xenvif_tx_interrupt, 0,
> +			queue->tx_irq_name, queue);
>  		if (err < 0)
>  			goto err_unmap;
> -		vif->tx_irq = err;
> -		disable_irq(vif->tx_irq);
> +		queue->tx_irq = err;
> +		disable_irq(queue->tx_irq);
> 
> -		snprintf(vif->rx_irq_name, sizeof(vif->rx_irq_name),
> -			 "%s-rx", vif->dev->name);
> +		snprintf(queue->rx_irq_name, sizeof(queue->rx_irq_name),
> +			 "%s-rx", queue->name);
>  		err = bind_interdomain_evtchn_to_irqhandler(
> -			vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> -			vif->rx_irq_name, vif);
> +			queue->vif->domid, rx_evtchn, xenvif_rx_interrupt, 0,
> +			queue->rx_irq_name, queue);
>  		if (err < 0)
>  			goto err_tx_unbind;
> -		vif->rx_irq = err;
> -		disable_irq(vif->rx_irq);
> +		queue->rx_irq = err;
> +		disable_irq(queue->rx_irq);
>  	}
> 
>  	task = kthread_create(xenvif_kthread,
> -			      (void *)vif, "%s", vif->dev->name);
> +			      (void *)queue, "%s", queue->name);
>  	if (IS_ERR(task)) {
> -		pr_warn("Could not allocate kthread for %s\n", vif->dev->name);
> +		pr_warn("Could not allocate kthread for %s\n", queue->name);
>  		err = PTR_ERR(task);
>  		goto err_rx_unbind;
>  	}
> 
> -	vif->task = task;
> +	queue->task = task;
> 
> -	rtnl_lock();
> -	if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
> -		dev_set_mtu(vif->dev, ETH_DATA_LEN);
> -	netdev_update_features(vif->dev);
> -	netif_carrier_on(vif->dev);
> -	if (netif_running(vif->dev))
> -		xenvif_up(vif);
> -	rtnl_unlock();
> -
> -	wake_up_process(vif->task);
> +	wake_up_process(queue->task);
> 
>  	return 0;
> 
>  err_rx_unbind:
> -	unbind_from_irqhandler(vif->rx_irq, vif);
> -	vif->rx_irq = 0;
> +	unbind_from_irqhandler(queue->rx_irq, queue);
> +	queue->rx_irq = 0;
>  err_tx_unbind:
> -	unbind_from_irqhandler(vif->tx_irq, vif);
> -	vif->tx_irq = 0;
> +	unbind_from_irqhandler(queue->tx_irq, queue);
> +	queue->tx_irq = 0;
>  err_unmap:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  err:
>  	module_put(THIS_MODULE);
>  	return err;
> @@ -470,34 +558,53 @@ void xenvif_carrier_off(struct xenvif *vif)
> 
>  void xenvif_disconnect(struct xenvif *vif)
>  {
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> +
>  	if (netif_carrier_ok(vif->dev))
>  		xenvif_carrier_off(vif);
> 
> -	if (vif->task) {
> -		kthread_stop(vif->task);
> -		vif->task = NULL;
> -	}
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> 
> -	if (vif->tx_irq) {
> -		if (vif->tx_irq == vif->rx_irq)
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -		else {
> -			unbind_from_irqhandler(vif->tx_irq, vif);
> -			unbind_from_irqhandler(vif->rx_irq, vif);
> +		if (queue->task) {
> +			kthread_stop(queue->task);
> +			queue->task = NULL;
>  		}
> -		vif->tx_irq = 0;
> +
> +		if (queue->tx_irq) {
> +			if (queue->tx_irq == queue->rx_irq)
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +			else {
> +				unbind_from_irqhandler(queue->tx_irq, queue);
> +				unbind_from_irqhandler(queue->rx_irq, queue);
> +			}
> +			queue->tx_irq = 0;
> +		}
> +
> +		xenvif_unmap_frontend_rings(queue);
>  	}
> 
> -	xenvif_unmap_frontend_rings(vif);
> +
>  }
> 
>  void xenvif_free(struct xenvif *vif)
>  {
> -	netif_napi_del(&vif->napi);
> +	struct xenvif_queue *queue = NULL;
> +	unsigned int queue_index;
> 
>  	unregister_netdev(vif->dev);
> 
> -	vfree(vif->grant_copy_op);
> +	for (queue_index = 0; queue_index < vif->num_queues; ++queue_index) {
> +		queue = &vif->queues[queue_index];
> +		netif_napi_del(&queue->napi);
> +	}
> +
> +	/* Free the array of queues */
> +	vif->num_queues = 0;
> +	vfree(vif->queues);
> +	vif->queues = NULL;
> +
>  	free_netdev(vif->dev);
> 
>  	module_put(THIS_MODULE);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index e5284bc..a32abd6 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -75,38 +75,38 @@ module_param(fatal_skb_slots, uint, 0444);
>   * one or more merged tx requests, otherwise it is the continuation of
>   * previous tx request.
>   */
> -static inline int pending_tx_is_head(struct xenvif *vif, RING_IDX idx)
> +static inline int pending_tx_is_head(struct xenvif_queue *queue, RING_IDX idx)
>  {
> -	return vif->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
> +	return queue->pending_tx_info[idx].head != INVALID_PENDING_RING_IDX;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status);
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st);
> 
> -static inline int tx_work_todo(struct xenvif *vif);
> -static inline int rx_work_todo(struct xenvif *vif);
> +static inline int tx_work_todo(struct xenvif_queue *queue);
> +static inline int rx_work_todo(struct xenvif_queue *queue);
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags);
> 
> -static inline unsigned long idx_to_pfn(struct xenvif *vif,
> +static inline unsigned long idx_to_pfn(struct xenvif_queue *queue,
>  				       u16 idx)
>  {
> -	return page_to_pfn(vif->mmap_pages[idx]);
> +	return page_to_pfn(queue->mmap_pages[idx]);
>  }
> 
> -static inline unsigned long idx_to_kaddr(struct xenvif *vif,
> +static inline unsigned long idx_to_kaddr(struct xenvif_queue *queue,
>  					 u16 idx)
>  {
> -	return (unsigned long)pfn_to_kaddr(idx_to_pfn(vif, idx));
> +	return (unsigned long)pfn_to_kaddr(idx_to_pfn(queue, idx));
>  }
> 
>  /* This is a miniumum size for the linear area to avoid lots of
> @@ -131,30 +131,30 @@ static inline pending_ring_idx_t pending_index(unsigned i)
>  	return i & (MAX_PENDING_REQS-1);
>  }
> 
> -static inline pending_ring_idx_t nr_pending_reqs(struct xenvif *vif)
> +static inline pending_ring_idx_t nr_pending_reqs(struct xenvif_queue *queue)
>  {
>  	return MAX_PENDING_REQS -
> -		vif->pending_prod + vif->pending_cons;
> +		queue->pending_prod + queue->pending_cons;
>  }
> 
> -bool xenvif_rx_ring_slots_available(struct xenvif *vif, int needed)
> +bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue, int needed)
>  {
>  	RING_IDX prod, cons;
> 
>  	do {
> -		prod = vif->rx.sring->req_prod;
> -		cons = vif->rx.req_cons;
> +		prod = queue->rx.sring->req_prod;
> +		cons = queue->rx.req_cons;
> 
>  		if (prod - cons >= needed)
>  			return true;
> 
> -		vif->rx.sring->req_event = prod + 1;
> +		queue->rx.sring->req_event = prod + 1;
> 
>  		/* Make sure event is visible before we check prod
>  		 * again.
>  		 */
>  		mb();
> -	} while (vif->rx.sring->req_prod != prod);
> +	} while (queue->rx.sring->req_prod != prod);
> 
>  	return false;
>  }
> @@ -208,13 +208,13 @@ struct netrx_pending_operations {
>  	grant_ref_t copy_gref;
>  };
> 
> -static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
> +static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif_queue *queue,
>  						 struct netrx_pending_operations *npo)
>  {
>  	struct xenvif_rx_meta *meta;
>  	struct xen_netif_rx_request *req;
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
> 
>  	meta = npo->meta + npo->meta_prod++;
>  	meta->gso_type = XEN_NETIF_GSO_TYPE_NONE;
> @@ -232,7 +232,7 @@ static struct xenvif_rx_meta *get_next_rx_buffer(struct xenvif *vif,
>   * Set up the grant operations for this fragment. If it's a flipping
>   * interface, we also set up the unmap request from here.
>   */
> -static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
> +static void xenvif_gop_frag_copy(struct xenvif_queue *queue, struct sk_buff *skb,
>  				 struct netrx_pending_operations *npo,
>  				 struct page *page, unsigned long size,
>  				 unsigned long offset, int *head)
> @@ -267,7 +267,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  			 */
>  			BUG_ON(*head);
> 
> -			meta = get_next_rx_buffer(vif, npo);
> +			meta = get_next_rx_buffer(queue, npo);
>  		}
> 
>  		if (npo->copy_off + bytes > MAX_BUFFER_OFFSET)
> @@ -281,7 +281,7 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		copy_gop->source.u.gmfn = virt_to_mfn(page_address(page));
>  		copy_gop->source.offset = offset;
> 
> -		copy_gop->dest.domid = vif->domid;
> +		copy_gop->dest.domid = queue->vif->domid;
>  		copy_gop->dest.offset = npo->copy_off;
>  		copy_gop->dest.u.ref = npo->copy_gref;
> 
> @@ -306,8 +306,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>  		else
>  			gso_type = XEN_NETIF_GSO_TYPE_NONE;
> 
> -		if (*head && ((1 << gso_type) & vif->gso_mask))
> -			vif->rx.req_cons++;
> +		if (*head && ((1 << gso_type) & queue->vif->gso_mask))
> +			queue->rx.req_cons++;
> 
>  		*head = 0; /* There must be something in this buffer now. */
> 
> @@ -327,7 +327,8 @@ static void xenvif_gop_frag_copy(struct xenvif *vif, struct sk_buff *skb,
>   * frontend-side LRO).
>   */
>  static int xenvif_gop_skb(struct sk_buff *skb,
> -			  struct netrx_pending_operations *npo)
> +			  struct netrx_pending_operations *npo,
> +			  struct xenvif_queue *queue)
>  {
>  	struct xenvif *vif = netdev_priv(skb->dev);
>  	int nr_frags = skb_shinfo(skb)->nr_frags;
> @@ -355,7 +356,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
> 
>  	/* Set up a GSO prefix descriptor, if necessary */
>  	if ((1 << gso_type) & vif->gso_prefix_mask) {
> -		req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +		req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  		meta = npo->meta + npo->meta_prod++;
>  		meta->gso_type = gso_type;
>  		meta->gso_size = gso_size;
> @@ -363,7 +364,7 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		meta->id = req->id;
>  	}
> 
> -	req = RING_GET_REQUEST(&vif->rx, vif->rx.req_cons++);
> +	req = RING_GET_REQUEST(&queue->rx, queue->rx.req_cons++);
>  	meta = npo->meta + npo->meta_prod++;
> 
>  	if ((1 << gso_type) & vif->gso_mask) {
> @@ -387,13 +388,13 @@ static int xenvif_gop_skb(struct sk_buff *skb,
>  		if (data + len > skb_tail_pointer(skb))
>  			len = skb_tail_pointer(skb) - data;
> 
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     virt_to_page(data), len, offset, &head);
>  		data += len;
>  	}
> 
>  	for (i = 0; i < nr_frags; i++) {
> -		xenvif_gop_frag_copy(vif, skb, npo,
> +		xenvif_gop_frag_copy(queue, skb, npo,
>  				     skb_frag_page(&skb_shinfo(skb)->frags[i]),
>  				     skb_frag_size(&skb_shinfo(skb)->frags[i]),
>  				     skb_shinfo(skb)->frags[i].page_offset,
> @@ -429,7 +430,7 @@ static int xenvif_check_gop(struct xenvif *vif, int nr_meta_slots,
>  	return status;
>  }
> 
> -static void xenvif_add_frag_responses(struct xenvif *vif, int status,
> +static void xenvif_add_frag_responses(struct xenvif_queue *queue, int status,
>  				      struct xenvif_rx_meta *meta,
>  				      int nr_meta_slots)
>  {
> @@ -450,7 +451,7 @@ static void xenvif_add_frag_responses(struct xenvif *vif, int status,
>  			flags = XEN_NETRXF_more_data;
> 
>  		offset = 0;
> -		make_rx_response(vif, meta[i].id, status, offset,
> +		make_rx_response(queue, meta[i].id, status, offset,
>  				 meta[i].size, flags);
>  	}
>  }
> @@ -459,12 +460,12 @@ struct skb_cb_overlay {
>  	int meta_slots_used;
>  };
> 
> -void xenvif_kick_thread(struct xenvif *vif)
> +void xenvif_kick_thread(struct xenvif_queue *queue)
>  {
> -	wake_up(&vif->wq);
> +	wake_up(&queue->wq);
>  }
> 
> -static void xenvif_rx_action(struct xenvif *vif)
> +static void xenvif_rx_action(struct xenvif_queue *queue)
>  {
>  	s8 status;
>  	u16 flags;
> @@ -478,13 +479,13 @@ static void xenvif_rx_action(struct xenvif *vif)
>  	bool need_to_notify = false;
> 
>  	struct netrx_pending_operations npo = {
> -		.copy  = vif->grant_copy_op,
> -		.meta  = vif->meta,
> +		.copy  = queue->grant_copy_op,
> +		.meta  = queue->meta,
>  	};
> 
>  	skb_queue_head_init(&rxq);
> 
> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL) {
>  		RING_IDX max_slots_needed;
>  		int i;
> 
> @@ -505,41 +506,41 @@ static void xenvif_rx_action(struct xenvif *vif)
>  			max_slots_needed++;
> 
>  		/* If the skb may not fit then bail out now */
> -		if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
> -			skb_queue_head(&vif->rx_queue, skb);
> +		if (!xenvif_rx_ring_slots_available(queue, max_slots_needed)) {
> +			skb_queue_head(&queue->rx_queue, skb);
>  			need_to_notify = true;
> -			vif->rx_last_skb_slots = max_slots_needed;
> +			queue->rx_last_skb_slots = max_slots_needed;
>  			break;
>  		} else
> -			vif->rx_last_skb_slots = 0;
> +			queue->rx_last_skb_slots = 0;
> 
>  		sco = (struct skb_cb_overlay *)skb->cb;
> -		sco->meta_slots_used = xenvif_gop_skb(skb, &npo);
> +		sco->meta_slots_used = xenvif_gop_skb(skb, &npo, queue);
>  		BUG_ON(sco->meta_slots_used > max_slots_needed);
> 
>  		__skb_queue_tail(&rxq, skb);
>  	}
> 
> -	BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> +	BUG_ON(npo.meta_prod > ARRAY_SIZE(queue->meta));
> 
>  	if (!npo.copy_prod)
>  		goto done;
> 
>  	BUG_ON(npo.copy_prod > MAX_GRANT_COPY_OPS);
> -	gnttab_batch_copy(vif->grant_copy_op, npo.copy_prod);
> +	gnttab_batch_copy(queue->grant_copy_op, npo.copy_prod);
> 
>  	while ((skb = __skb_dequeue(&rxq)) != NULL) {
>  		sco = (struct skb_cb_overlay *)skb->cb;
> 
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_prefix_mask) {
> -			resp = RING_GET_RESPONSE(&vif->rx,
> -						 vif->rx.rsp_prod_pvt++);
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_prefix_mask) {
> +			resp = RING_GET_RESPONSE(&queue->rx,
> +						 queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags = XEN_NETRXF_gso_prefix | XEN_NETRXF_more_data;
> 
> -			resp->offset = vif->meta[npo.meta_cons].gso_size;
> -			resp->id = vif->meta[npo.meta_cons].id;
> +			resp->offset = queue->meta[npo.meta_cons].gso_size;
> +			resp->id = queue->meta[npo.meta_cons].id;
>  			resp->status = sco->meta_slots_used;
> 
>  			npo.meta_cons++;
> @@ -547,10 +548,10 @@ static void xenvif_rx_action(struct xenvif *vif)
>  		}
> 
> 
> -		vif->dev->stats.tx_bytes += skb->len;
> -		vif->dev->stats.tx_packets++;
> +		queue->stats.tx_bytes += skb->len;
> +		queue->stats.tx_packets++;
> 
> -		status = xenvif_check_gop(vif, sco->meta_slots_used, &npo);
> +		status = xenvif_check_gop(queue->vif, sco->meta_slots_used, &npo);
> 
>  		if (sco->meta_slots_used == 1)
>  			flags = 0;
> @@ -564,22 +565,22 @@ static void xenvif_rx_action(struct xenvif *vif)
>  			flags |= XEN_NETRXF_data_validated;
> 
>  		offset = 0;
> -		resp = make_rx_response(vif, vif->meta[npo.meta_cons].id,
> +		resp = make_rx_response(queue, queue->meta[npo.meta_cons].id,
>  					status, offset,
> -					vif->meta[npo.meta_cons].size,
> +					queue->meta[npo.meta_cons].size,
>  					flags);
> 
> -		if ((1 << vif->meta[npo.meta_cons].gso_type) &
> -		    vif->gso_mask) {
> +		if ((1 << queue->meta[npo.meta_cons].gso_type) &
> +		    queue->vif->gso_mask) {
>  			struct xen_netif_extra_info *gso =
>  				(struct xen_netif_extra_info *)
> -				RING_GET_RESPONSE(&vif->rx,
> -						  vif->rx.rsp_prod_pvt++);
> +				RING_GET_RESPONSE(&queue->rx,
> +						  queue->rx.rsp_prod_pvt++);
> 
>  			resp->flags |= XEN_NETRXF_extra_info;
> 
> -			gso->u.gso.type = vif->meta[npo.meta_cons].gso_type;
> -			gso->u.gso.size = vif->meta[npo.meta_cons].gso_size;
> +			gso->u.gso.type = queue->meta[npo.meta_cons].gso_type;
> +			gso->u.gso.size = queue->meta[npo.meta_cons].gso_size;
>  			gso->u.gso.pad = 0;
>  			gso->u.gso.features = 0;
> 
> @@ -587,11 +588,11 @@ static void xenvif_rx_action(struct xenvif *vif)
>  			gso->flags = 0;
>  		}
> 
> -		xenvif_add_frag_responses(vif, status,
> -					  vif->meta + npo.meta_cons + 1,
> +		xenvif_add_frag_responses(queue, status,
> +					  queue->meta + npo.meta_cons + 1,
>  					  sco->meta_slots_used);
> 
> -		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret);
> +		RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->rx, ret);
> 
>  		need_to_notify |= !!ret;
> 
> @@ -601,20 +602,20 @@ static void xenvif_rx_action(struct xenvif *vif)
> 
>  done:
>  	if (need_to_notify)
> -		notify_remote_via_irq(vif->rx_irq);
> +		notify_remote_via_irq(queue->rx_irq);
>  }
> 
> -void xenvif_check_rx_xenvif(struct xenvif *vif)
> +void xenvif_check_rx_xenvif(struct xenvif_queue *queue)
>  {
>  	int more_to_do;
> 
> -	RING_FINAL_CHECK_FOR_REQUESTS(&vif->tx, more_to_do);
> +	RING_FINAL_CHECK_FOR_REQUESTS(&queue->tx, more_to_do);
> 
>  	if (more_to_do)
> -		napi_schedule(&vif->napi);
> +		napi_schedule(&queue->napi);
>  }
> 
> -static void tx_add_credit(struct xenvif *vif)
> +static void tx_add_credit(struct xenvif_queue *queue)
>  {
>  	unsigned long max_burst, max_credit;
> 
> @@ -622,37 +623,37 @@ static void tx_add_credit(struct xenvif *vif)
>  	 * Allow a burst big enough to transmit a jumbo packet of up to 128kB.
>  	 * Otherwise the interface can seize up due to insufficient credit.
>  	 */
> -	max_burst = RING_GET_REQUEST(&vif->tx, vif->tx.req_cons)->size;
> +	max_burst = RING_GET_REQUEST(&queue->tx, queue->tx.req_cons)->size;
>  	max_burst = min(max_burst, 131072UL);
> -	max_burst = max(max_burst, vif->credit_bytes);
> +	max_burst = max(max_burst, queue->credit_bytes);
> 
>  	/* Take care that adding a new chunk of credit doesn't wrap to zero. */
> -	max_credit = vif->remaining_credit + vif->credit_bytes;
> -	if (max_credit < vif->remaining_credit)
> +	max_credit = queue->remaining_credit + queue->credit_bytes;
> +	if (max_credit < queue->remaining_credit)
>  		max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */
> 
> -	vif->remaining_credit = min(max_credit, max_burst);
> +	queue->remaining_credit = min(max_credit, max_burst);
>  }
> 
>  static void tx_credit_callback(unsigned long data)
>  {
> -	struct xenvif *vif = (struct xenvif *)data;
> -	tx_add_credit(vif);
> -	xenvif_check_rx_xenvif(vif);
> +	struct xenvif_queue *queue = (struct xenvif_queue *)data;
> +	tx_add_credit(queue);
> +	xenvif_check_rx_xenvif(queue);
>  }
> 
> -static void xenvif_tx_err(struct xenvif *vif,
> +static void xenvif_tx_err(struct xenvif_queue *queue,
>  			  struct xen_netif_tx_request *txp, RING_IDX end)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
> -		make_tx_response(vif, txp, XEN_NETIF_RSP_ERROR);
> +		make_tx_response(queue, txp, XEN_NETIF_RSP_ERROR);
>  		if (cons == end)
>  			break;
> -		txp = RING_GET_REQUEST(&vif->tx, cons++);
> +		txp = RING_GET_REQUEST(&queue->tx, cons++);
>  	} while (1);
> -	vif->tx.req_cons = cons;
> +	queue->tx.req_cons = cons;
>  }
> 
>  static void xenvif_fatal_tx_err(struct xenvif *vif)
> @@ -661,12 +662,12 @@ static void xenvif_fatal_tx_err(struct xenvif *vif)
>  	xenvif_carrier_off(vif);
>  }
> 
> -static int xenvif_count_requests(struct xenvif *vif,
> +static int xenvif_count_requests(struct xenvif_queue *queue,
>  				 struct xen_netif_tx_request *first,
>  				 struct xen_netif_tx_request *txp,
>  				 int work_to_do)
>  {
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
>  	int slots = 0;
>  	int drop_err = 0;
>  	int more_data;
> @@ -678,10 +679,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		struct xen_netif_tx_request dropped_tx = { 0 };
> 
>  		if (slots >= work_to_do) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Asked for %d slots but exceeds this limit\n",
>  				   work_to_do);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -ENODATA;
>  		}
> 
> @@ -689,10 +690,10 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 * considered malicious.
>  		 */
>  		if (unlikely(slots >= fatal_skb_slots)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Malicious frontend using %d slots, threshold %u\n",
>  				   slots, fatal_skb_slots);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -E2BIG;
>  		}
> 
> @@ -705,7 +706,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>  					   slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>  			drop_err = -E2BIG;
> @@ -714,7 +715,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		if (drop_err)
>  			txp = &dropped_tx;
> 
> -		memcpy(txp, RING_GET_REQUEST(&vif->tx, cons + slots),
> +		memcpy(txp, RING_GET_REQUEST(&queue->tx, cons + slots),
>  		       sizeof(*txp));
> 
>  		/* If the guest submitted a frame >= 64 KiB then
> @@ -728,7 +729,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		 */
>  		if (!drop_err && txp->size > first->size) {
>  			if (net_ratelimit())
> -				netdev_dbg(vif->dev,
> +				netdev_dbg(queue->vif->dev,
>  					   "Invalid tx request, slot size %u > remaining size %u\n",
>  					   txp->size, first->size);
>  			drop_err = -EIO;
> @@ -738,9 +739,9 @@ static int xenvif_count_requests(struct xenvif *vif,
>  		slots++;
> 
>  		if (unlikely((txp->offset + txp->size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
> +			netdev_err(queue->vif->dev, "Cross page boundary, txp->offset: %x, size: %u\n",
>  				 txp->offset, txp->size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
> @@ -752,14 +753,14 @@ static int xenvif_count_requests(struct xenvif *vif,
>  	} while (more_data);
> 
>  	if (drop_err) {
> -		xenvif_tx_err(vif, first, cons + slots);
> +		xenvif_tx_err(queue, first, cons + slots);
>  		return drop_err;
>  	}
> 
>  	return slots;
>  }
> 
> -static struct page *xenvif_alloc_page(struct xenvif *vif,
> +static struct page *xenvif_alloc_page(struct xenvif_queue *queue,
>  				      u16 pending_idx)
>  {
>  	struct page *page;
> @@ -767,12 +768,12 @@ static struct page *xenvif_alloc_page(struct xenvif *vif,
>  	page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  	if (!page)
>  		return NULL;
> -	vif->mmap_pages[pending_idx] = page;
> +	queue->mmap_pages[pending_idx] = page;
> 
>  	return page;
>  }
> 
> -static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
> +static struct gnttab_copy *xenvif_get_requests(struct xenvif_queue *queue,
>  					       struct sk_buff *skb,
>  					       struct xen_netif_tx_request *txp,
>  					       struct gnttab_copy *gop)
> @@ -803,7 +804,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  	for (shinfo->nr_frags = slot = start; slot < nr_slots;
>  	     shinfo->nr_frags++) {
>  		struct pending_tx_info *pending_tx_info =
> -			vif->pending_tx_info;
> +			queue->pending_tx_info;
> 
>  		page = alloc_page(GFP_ATOMIC|__GFP_COLD);
>  		if (!page)
> @@ -815,7 +816,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  			gop->flags = GNTCOPY_source_gref;
> 
>  			gop->source.u.ref = txp->gref;
> -			gop->source.domid = vif->domid;
> +			gop->source.domid = queue->vif->domid;
>  			gop->source.offset = txp->offset;
> 
>  			gop->dest.domid = DOMID_SELF;
> @@ -840,9 +841,9 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				gop->len = txp->size;
>  				dst_offset += gop->len;
> 
> -				index = pending_index(vif->pending_cons++);
> +				index = pending_index(queue->pending_cons++);
> 
> -				pending_idx = vif->pending_ring[index];
> +				pending_idx = queue->pending_ring[index];
> 
>  				memcpy(&pending_tx_info[pending_idx].req, txp,
>  				       sizeof(*txp));
> @@ -851,7 +852,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  				 * fields for head tx req will be set
>  				 * to correct values after the loop.
>  				 */
> -				vif->mmap_pages[pending_idx] = (void *)(~0UL);
> +				queue->mmap_pages[pending_idx] = (void *)(~0UL);
>  				pending_tx_info[pending_idx].head =
>  					INVALID_PENDING_RING_IDX;
> 
> @@ -871,7 +872,7 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  		first->req.offset = 0;
>  		first->req.size = dst_offset;
>  		first->head = start_idx;
> -		vif->mmap_pages[head_idx] = page;
> +		queue->mmap_pages[head_idx] = page;
>  		frag_set_pending_idx(&frags[shinfo->nr_frags], head_idx);
>  	}
> 
> @@ -881,18 +882,18 @@ static struct gnttab_copy *xenvif_get_requests(struct xenvif *vif,
>  err:
>  	/* Unwind, freeing all pages and sending error responses. */
>  	while (shinfo->nr_frags-- > start) {
> -		xenvif_idx_release(vif,
> +		xenvif_idx_release(queue,
>  				frag_get_pending_idx(&frags[shinfo->nr_frags]),
>  				XEN_NETIF_RSP_ERROR);
>  	}
>  	/* The head too, if necessary. */
>  	if (start)
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	return NULL;
>  }
> 
> -static int xenvif_tx_check_gop(struct xenvif *vif,
> +static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>  			       struct sk_buff *skb,
>  			       struct gnttab_copy **gopp)
>  {
> @@ -907,7 +908,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	/* Check status of header. */
>  	err = gop->status;
>  	if (unlikely(err))
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  	/* Skip first skb fragment if it is on same page as header fragment. */
>  	start = (frag_get_pending_idx(&shinfo->frags[0]) == pending_idx);
> @@ -917,7 +918,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  		pending_ring_idx_t head;
> 
>  		pending_idx = frag_get_pending_idx(&shinfo->frags[i]);
> -		tx_info = &vif->pending_tx_info[pending_idx];
> +		tx_info = &queue->pending_tx_info[pending_idx];
>  		head = tx_info->head;
> 
>  		/* Check error status: if okay then remember grant handle. */
> @@ -925,19 +926,19 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  			newerr = (++gop)->status;
>  			if (newerr)
>  				break;
> -			peek = vif->pending_ring[pending_index(++head)];
> -		} while (!pending_tx_is_head(vif, peek));
> +			peek = queue->pending_ring[pending_index(++head)];
> +		} while (!pending_tx_is_head(queue, peek));
> 
>  		if (likely(!newerr)) {
>  			/* Had a previous error? Invalidate this fragment. */
>  			if (unlikely(err))
> -				xenvif_idx_release(vif, pending_idx,
> +				xenvif_idx_release(queue, pending_idx,
>  						   XEN_NETIF_RSP_OKAY);
>  			continue;
>  		}
> 
>  		/* Error on this fragment: respond to client with an error. */
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_ERROR);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_ERROR);
> 
>  		/* Not the first error? Preceding frags already invalidated. */
>  		if (err)
> @@ -945,10 +946,10 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
> 
>  		/* First error: invalidate header and preceding fragments. */
>  		pending_idx = *((u16 *)skb->data);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  		for (j = start; j < i; j++) {
>  			pending_idx = frag_get_pending_idx(&shinfo->frags[j]);
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -960,7 +961,7 @@ static int xenvif_tx_check_gop(struct xenvif *vif,
>  	return err;
>  }
> 
> -static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> +static void xenvif_fill_frags(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
>  	struct skb_shared_info *shinfo = skb_shinfo(skb);
>  	int nr_frags = shinfo->nr_frags;
> @@ -974,46 +975,46 @@ static void xenvif_fill_frags(struct xenvif *vif, struct sk_buff *skb)
> 
>  		pending_idx = frag_get_pending_idx(frag);
> 
> -		txp = &vif->pending_tx_info[pending_idx].req;
> -		page = virt_to_page(idx_to_kaddr(vif, pending_idx));
> +		txp = &queue->pending_tx_info[pending_idx].req;
> +		page = virt_to_page(idx_to_kaddr(queue, pending_idx));
>  		__skb_fill_page_desc(skb, i, page, txp->offset, txp->size);
>  		skb->len += txp->size;
>  		skb->data_len += txp->size;
>  		skb->truesize += txp->size;
> 
>  		/* Take an extra reference to offset xenvif_idx_release */
> -		get_page(vif->mmap_pages[pending_idx]);
> -		xenvif_idx_release(vif, pending_idx, XEN_NETIF_RSP_OKAY);
> +		get_page(queue->mmap_pages[pending_idx]);
> +		xenvif_idx_release(queue, pending_idx, XEN_NETIF_RSP_OKAY);
>  	}
>  }
> 
> -static int xenvif_get_extras(struct xenvif *vif,
> +static int xenvif_get_extras(struct xenvif_queue *queue,
>  				struct xen_netif_extra_info *extras,
>  				int work_to_do)
>  {
>  	struct xen_netif_extra_info extra;
> -	RING_IDX cons = vif->tx.req_cons;
> +	RING_IDX cons = queue->tx.req_cons;
> 
>  	do {
>  		if (unlikely(work_to_do-- <= 0)) {
> -			netdev_err(vif->dev, "Missing extra info\n");
> -			xenvif_fatal_tx_err(vif);
> +			netdev_err(queue->vif->dev, "Missing extra info\n");
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EBADR;
>  		}
> 
> -		memcpy(&extra, RING_GET_REQUEST(&vif->tx, cons),
> +		memcpy(&extra, RING_GET_REQUEST(&queue->tx, cons),
>  		       sizeof(extra));
>  		if (unlikely(!extra.type ||
>  			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> -			vif->tx.req_cons = ++cons;
> -			netdev_err(vif->dev,
> +			queue->tx.req_cons = ++cons;
> +			netdev_err(queue->vif->dev,
>  				   "Invalid extra type: %d\n", extra.type);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			return -EINVAL;
>  		}
> 
>  		memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
> -		vif->tx.req_cons = ++cons;
> +		queue->tx.req_cons = ++cons;
>  	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
> 
>  	return work_to_do;
> @@ -1048,7 +1049,7 @@ static int xenvif_set_skb_gso(struct xenvif *vif,
>  	return 0;
>  }
> 
> -static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
> +static int checksum_setup(struct xenvif_queue *queue, struct sk_buff *skb)
>  {
>  	bool recalculate_partial_csum = false;
> 
> @@ -1058,7 +1059,7 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>  	 * recalculate the partial checksum.
>  	 */
>  	if (skb->ip_summed != CHECKSUM_PARTIAL && skb_is_gso(skb)) {
> -		vif->rx_gso_checksum_fixup++;
> +		queue->stats.rx_gso_checksum_fixup++;
>  		skb->ip_summed = CHECKSUM_PARTIAL;
>  		recalculate_partial_csum = true;
>  	}
> @@ -1070,31 +1071,31 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
>  	return skb_checksum_setup(skb, recalculate_partial_csum);
>  }
> 
> -static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
> +static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size)
>  {
>  	u64 now = get_jiffies_64();
> -	u64 next_credit = vif->credit_window_start +
> -		msecs_to_jiffies(vif->credit_usec / 1000);
> +	u64 next_credit = queue->credit_window_start +
> +		msecs_to_jiffies(queue->credit_usec / 1000);
> 
>  	/* Timer could already be pending in rare cases. */
> -	if (timer_pending(&vif->credit_timeout))
> +	if (timer_pending(&queue->credit_timeout))
>  		return true;
> 
>  	/* Passed the point where we can replenish credit? */
>  	if (time_after_eq64(now, next_credit)) {
> -		vif->credit_window_start = now;
> -		tx_add_credit(vif);
> +		queue->credit_window_start = now;
> +		tx_add_credit(queue);
>  	}
> 
>  	/* Still too big to send right now? Set a callback. */
> -	if (size > vif->remaining_credit) {
> -		vif->credit_timeout.data     =
> -			(unsigned long)vif;
> -		vif->credit_timeout.function =
> +	if (size > queue->remaining_credit) {
> +		queue->credit_timeout.data     =
> +			(unsigned long)queue;
> +		queue->credit_timeout.function =
>  			tx_credit_callback;
> -		mod_timer(&vif->credit_timeout,
> +		mod_timer(&queue->credit_timeout,
>  			  next_credit);
> -		vif->credit_window_start = next_credit;
> +		queue->credit_window_start = next_credit;
> 
>  		return true;
>  	}
> @@ -1102,15 +1103,15 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
>  	return false;
>  }
> 
> -static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> +static unsigned xenvif_tx_build_gops(struct xenvif_queue *queue, int budget)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops, *request_gop;
> +	struct gnttab_copy *gop = queue->tx_copy_ops, *request_gop;
>  	struct sk_buff *skb;
>  	int ret;
> 
> -	while ((nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	while ((nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  		< MAX_PENDING_REQS) &&
> -	       (skb_queue_len(&vif->tx_queue) < budget)) {
> +	       (skb_queue_len(&queue->tx_queue) < budget)) {
>  		struct xen_netif_tx_request txreq;
>  		struct xen_netif_tx_request txfrags[XEN_NETBK_LEGACY_SLOTS_MAX];
>  		struct page *page;
> @@ -1121,69 +1122,69 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		unsigned int data_len;
>  		pending_ring_idx_t index;
> 
> -		if (vif->tx.sring->req_prod - vif->tx.req_cons >
> +		if (queue->tx.sring->req_prod - queue->tx.req_cons >
>  		    XEN_NETIF_TX_RING_SIZE) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "Impossible number of requests. "
>  				   "req_prod %d, req_cons %d, size %ld\n",
> -				   vif->tx.sring->req_prod, vif->tx.req_cons,
> +				   queue->tx.sring->req_prod, queue->tx.req_cons,
>  				   XEN_NETIF_TX_RING_SIZE);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			continue;
>  		}
> 
> -		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&vif->tx);
> +		work_to_do = RING_HAS_UNCONSUMED_REQUESTS(&queue->tx);
>  		if (!work_to_do)
>  			break;
> 
> -		idx = vif->tx.req_cons;
> +		idx = queue->tx.req_cons;
>  		rmb(); /* Ensure that we see the request before we copy it. */
> -		memcpy(&txreq, RING_GET_REQUEST(&vif->tx, idx), sizeof(txreq));
> +		memcpy(&txreq, RING_GET_REQUEST(&queue->tx, idx), sizeof(txreq));
> 
>  		/* Credit-based scheduling. */
> -		if (txreq.size > vif->remaining_credit &&
> -		    tx_credit_exceeded(vif, txreq.size))
> +		if (txreq.size > queue->remaining_credit &&
> +		    tx_credit_exceeded(queue, txreq.size))
>  			break;
> 
> -		vif->remaining_credit -= txreq.size;
> +		queue->remaining_credit -= txreq.size;
> 
>  		work_to_do--;
> -		vif->tx.req_cons = ++idx;
> +		queue->tx.req_cons = ++idx;
> 
>  		memset(extras, 0, sizeof(extras));
>  		if (txreq.flags & XEN_NETTXF_extra_info) {
> -			work_to_do = xenvif_get_extras(vif, extras,
> +			work_to_do = xenvif_get_extras(queue, extras,
>  						       work_to_do);
> -			idx = vif->tx.req_cons;
> +			idx = queue->tx.req_cons;
>  			if (unlikely(work_to_do < 0))
>  				break;
>  		}
> 
> -		ret = xenvif_count_requests(vif, &txreq, txfrags, work_to_do);
> +		ret = xenvif_count_requests(queue, &txreq, txfrags, work_to_do);
>  		if (unlikely(ret < 0))
>  			break;
> 
>  		idx += ret;
> 
>  		if (unlikely(txreq.size < ETH_HLEN)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Bad packet size: %d\n", txreq.size);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		/* No crossing a page as the payload mustn't fragment. */
>  		if (unlikely((txreq.offset + txreq.size) > PAGE_SIZE)) {
> -			netdev_err(vif->dev,
> +			netdev_err(queue->vif->dev,
>  				   "txreq.offset: %x, size: %u, end: %lu\n",
>  				   txreq.offset, txreq.size,
>  				   (txreq.offset&~PAGE_MASK) + txreq.size);
> -			xenvif_fatal_tx_err(vif);
> +			xenvif_fatal_tx_err(queue->vif);
>  			break;
>  		}
> 
> -		index = pending_index(vif->pending_cons);
> -		pending_idx = vif->pending_ring[index];
> +		index = pending_index(queue->pending_cons);
> +		pending_idx = queue->pending_ring[index];
> 
>  		data_len = (txreq.size > PKT_PROT_LEN &&
>  			    ret < XEN_NETBK_LEGACY_SLOTS_MAX) ?
> @@ -1192,9 +1193,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		skb = alloc_skb(data_len + NET_SKB_PAD + NET_IP_ALIGN,
>  				GFP_ATOMIC | __GFP_NOWARN);
>  		if (unlikely(skb == NULL)) {
> -			netdev_dbg(vif->dev,
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't allocate a skb in start_xmit.\n");
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
> @@ -1205,7 +1206,7 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  			struct xen_netif_extra_info *gso;
>  			gso = &extras[XEN_NETIF_EXTRA_TYPE_GSO - 1];
> 
> -			if (xenvif_set_skb_gso(vif, skb, gso)) {
> +			if (xenvif_set_skb_gso(queue->vif, skb, gso)) {
>  				/* Failure in xenvif_set_skb_gso is fatal. */
>  				kfree_skb(skb);
>  				break;
> @@ -1213,15 +1214,15 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  		}
> 
>  		/* XXX could copy straight to head */
> -		page = xenvif_alloc_page(vif, pending_idx);
> +		page = xenvif_alloc_page(queue, pending_idx);
>  		if (!page) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
> 
>  		gop->source.u.ref = txreq.gref;
> -		gop->source.domid = vif->domid;
> +		gop->source.domid = queue->vif->domid;
>  		gop->source.offset = txreq.offset;
> 
>  		gop->dest.u.gmfn = virt_to_mfn(page_address(page));
> @@ -1233,9 +1234,9 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
> 
>  		gop++;
> 
> -		memcpy(&vif->pending_tx_info[pending_idx].req,
> +		memcpy(&queue->pending_tx_info[pending_idx].req,
>  		       &txreq, sizeof(txreq));
> -		vif->pending_tx_info[pending_idx].head = index;
> +		queue->pending_tx_info[pending_idx].head = index;
>  		*((u16 *)skb->data) = pending_idx;
> 
>  		__skb_put(skb, data_len);
> @@ -1250,45 +1251,45 @@ static unsigned xenvif_tx_build_gops(struct xenvif *vif, int budget)
>  					     INVALID_PENDING_IDX);
>  		}
> 
> -		vif->pending_cons++;
> +		queue->pending_cons++;
> 
> -		request_gop = xenvif_get_requests(vif, skb, txfrags, gop);
> +		request_gop = xenvif_get_requests(queue, skb, txfrags, gop);
>  		if (request_gop == NULL) {
>  			kfree_skb(skb);
> -			xenvif_tx_err(vif, &txreq, idx);
> +			xenvif_tx_err(queue, &txreq, idx);
>  			break;
>  		}
>  		gop = request_gop;
> 
> -		__skb_queue_tail(&vif->tx_queue, skb);
> +		__skb_queue_tail(&queue->tx_queue, skb);
> 
> -		vif->tx.req_cons = idx;
> +		queue->tx.req_cons = idx;
> 
> -		if ((gop-vif->tx_copy_ops) >= ARRAY_SIZE(vif->tx_copy_ops))
> +		if ((gop - queue->tx_copy_ops) >= ARRAY_SIZE(queue->tx_copy_ops))
>  			break;
>  	}
> 
> -	return gop - vif->tx_copy_ops;
> +	return gop - queue->tx_copy_ops;
>  }
> 
> 
> -static int xenvif_tx_submit(struct xenvif *vif)
> +static int xenvif_tx_submit(struct xenvif_queue *queue)
>  {
> -	struct gnttab_copy *gop = vif->tx_copy_ops;
> +	struct gnttab_copy *gop = queue->tx_copy_ops;
>  	struct sk_buff *skb;
>  	int work_done = 0;
> 
> -	while ((skb = __skb_dequeue(&vif->tx_queue)) != NULL) {
> +	while ((skb = __skb_dequeue(&queue->tx_queue)) != NULL) {
>  		struct xen_netif_tx_request *txp;
>  		u16 pending_idx;
>  		unsigned data_len;
> 
>  		pending_idx = *((u16 *)skb->data);
> -		txp = &vif->pending_tx_info[pending_idx].req;
> +		txp = &queue->pending_tx_info[pending_idx].req;
> 
>  		/* Check the remap error code. */
> -		if (unlikely(xenvif_tx_check_gop(vif, skb, &gop))) {
> -			netdev_dbg(vif->dev, "netback grant failed.\n");
> +		if (unlikely(xenvif_tx_check_gop(queue, skb, &gop))) {
> +			netdev_dbg(queue->vif->dev, "netback grant failed.\n");
>  			skb_shinfo(skb)->nr_frags = 0;
>  			kfree_skb(skb);
>  			continue;
> @@ -1296,7 +1297,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
> 
>  		data_len = skb->len;
>  		memcpy(skb->data,
> -		       (void *)(idx_to_kaddr(vif, pending_idx)|txp->offset),
> +		       (void *)(idx_to_kaddr(queue, pending_idx)|txp->offset),
>  		       data_len);
>  		if (data_len < txp->size) {
>  			/* Append the packet payload as a fragment. */
> @@ -1304,7 +1305,7 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  			txp->size -= data_len;
>  		} else {
>  			/* Schedule a response immediately. */
> -			xenvif_idx_release(vif, pending_idx,
> +			xenvif_idx_release(queue, pending_idx,
>  					   XEN_NETIF_RSP_OKAY);
>  		}
> 
> @@ -1313,19 +1314,19 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  		else if (txp->flags & XEN_NETTXF_data_validated)
>  			skb->ip_summed = CHECKSUM_UNNECESSARY;
> 
> -		xenvif_fill_frags(vif, skb);
> +		xenvif_fill_frags(queue, skb);
> 
>  		if (skb_is_nonlinear(skb) && skb_headlen(skb) < PKT_PROT_LEN) {
>  			int target = min_t(int, skb->len, PKT_PROT_LEN);
>  			__pskb_pull_tail(skb, target - skb_headlen(skb));
>  		}
> 
> -		skb->dev      = vif->dev;
> +		skb->dev      = queue->vif->dev;
>  		skb->protocol = eth_type_trans(skb, skb->dev);
>  		skb_reset_network_header(skb);
> 
> -		if (checksum_setup(vif, skb)) {
> -			netdev_dbg(vif->dev,
> +		if (checksum_setup(queue, skb)) {
> +			netdev_dbg(queue->vif->dev,
>  				   "Can't setup checksum in net_tx_action\n");
>  			kfree_skb(skb);
>  			continue;
> @@ -1347,8 +1348,8 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  				DIV_ROUND_UP(skb->len - hdrlen, mss);
>  		}
> 
> -		vif->dev->stats.rx_bytes += skb->len;
> -		vif->dev->stats.rx_packets++;
> +		queue->stats.rx_bytes += skb->len;
> +		queue->stats.rx_packets++;
> 
>  		work_done++;
> 
> @@ -1359,53 +1360,53 @@ static int xenvif_tx_submit(struct xenvif *vif)
>  }
> 
>  /* Called after netfront has transmitted */
> -int xenvif_tx_action(struct xenvif *vif, int budget)
> +int xenvif_tx_action(struct xenvif_queue *queue, int budget)
>  {
>  	unsigned nr_gops;
>  	int work_done;
> 
> -	if (unlikely(!tx_work_todo(vif)))
> +	if (unlikely(!tx_work_todo(queue)))
>  		return 0;
> 
> -	nr_gops = xenvif_tx_build_gops(vif, budget);
> +	nr_gops = xenvif_tx_build_gops(queue, budget);
> 
>  	if (nr_gops == 0)
>  		return 0;
> 
> -	gnttab_batch_copy(vif->tx_copy_ops, nr_gops);
> +	gnttab_batch_copy(queue->tx_copy_ops, nr_gops);
> 
> -	work_done = xenvif_tx_submit(vif);
> +	work_done = xenvif_tx_submit(queue);
> 
>  	return work_done;
>  }
> 
> -static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
> +static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx,
>  			       u8 status)
>  {
>  	struct pending_tx_info *pending_tx_info;
>  	pending_ring_idx_t head;
>  	u16 peek; /* peek into next tx request */
> 
> -	BUG_ON(vif->mmap_pages[pending_idx] == (void *)(~0UL));
> +	BUG_ON(queue->mmap_pages[pending_idx] == (void *)(~0UL));
> 
>  	/* Already complete? */
> -	if (vif->mmap_pages[pending_idx] == NULL)
> +	if (queue->mmap_pages[pending_idx] == NULL)
>  		return;
> 
> -	pending_tx_info = &vif->pending_tx_info[pending_idx];
> +	pending_tx_info = &queue->pending_tx_info[pending_idx];
> 
>  	head = pending_tx_info->head;
> 
> -	BUG_ON(!pending_tx_is_head(vif, head));
> -	BUG_ON(vif->pending_ring[pending_index(head)] != pending_idx);
> +	BUG_ON(!pending_tx_is_head(queue, head));
> +	BUG_ON(queue->pending_ring[pending_index(head)] != pending_idx);
> 
>  	do {
>  		pending_ring_idx_t index;
>  		pending_ring_idx_t idx = pending_index(head);
> -		u16 info_idx = vif->pending_ring[idx];
> +		u16 info_idx = queue->pending_ring[idx];
> 
> -		pending_tx_info = &vif->pending_tx_info[info_idx];
> -		make_tx_response(vif, &pending_tx_info->req, status);
> +		pending_tx_info = &queue->pending_tx_info[info_idx];
> +		make_tx_response(queue, &pending_tx_info->req, status);
> 
>  		/* Setting any number other than
>  		 * INVALID_PENDING_RING_IDX indicates this slot is
> @@ -1413,50 +1414,50 @@ static void xenvif_idx_release(struct xenvif *vif, u16 pending_idx,
>  		 */
>  		pending_tx_info->head = 0;
> 
> -		index = pending_index(vif->pending_prod++);
> -		vif->pending_ring[index] = vif->pending_ring[info_idx];
> +		index = pending_index(queue->pending_prod++);
> +		queue->pending_ring[index] = queue->pending_ring[info_idx];
> 
> -		peek = vif->pending_ring[pending_index(++head)];
> +		peek = queue->pending_ring[pending_index(++head)];
> 
> -	} while (!pending_tx_is_head(vif, peek));
> +	} while (!pending_tx_is_head(queue, peek));
> 
> -	put_page(vif->mmap_pages[pending_idx]);
> -	vif->mmap_pages[pending_idx] = NULL;
> +	put_page(queue->mmap_pages[pending_idx]);
> +	queue->mmap_pages[pending_idx] = NULL;
>  }
> 
> 
> -static void make_tx_response(struct xenvif *vif,
> +static void make_tx_response(struct xenvif_queue *queue,
>  			     struct xen_netif_tx_request *txp,
>  			     s8       st)
>  {
> -	RING_IDX i = vif->tx.rsp_prod_pvt;
> +	RING_IDX i = queue->tx.rsp_prod_pvt;
>  	struct xen_netif_tx_response *resp;
>  	int notify;
> 
> -	resp = RING_GET_RESPONSE(&vif->tx, i);
> +	resp = RING_GET_RESPONSE(&queue->tx, i);
>  	resp->id     = txp->id;
>  	resp->status = st;
> 
>  	if (txp->flags & XEN_NETTXF_extra_info)
> -		RING_GET_RESPONSE(&vif->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> +		RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL;
> 
> -	vif->tx.rsp_prod_pvt = ++i;
> -	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->tx, notify);
> +	queue->tx.rsp_prod_pvt = ++i;
> +	RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify);
>  	if (notify)
> -		notify_remote_via_irq(vif->tx_irq);
> +		notify_remote_via_irq(queue->tx_irq);
>  }
> 
> -static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
> +static struct xen_netif_rx_response *make_rx_response(struct xenvif_queue *queue,
>  					     u16      id,
>  					     s8       st,
>  					     u16      offset,
>  					     u16      size,
>  					     u16      flags)
>  {
> -	RING_IDX i = vif->rx.rsp_prod_pvt;
> +	RING_IDX i = queue->rx.rsp_prod_pvt;
>  	struct xen_netif_rx_response *resp;
> 
> -	resp = RING_GET_RESPONSE(&vif->rx, i);
> +	resp = RING_GET_RESPONSE(&queue->rx, i);
>  	resp->offset     = offset;
>  	resp->flags      = flags;
>  	resp->id         = id;
> @@ -1464,39 +1465,39 @@ static struct xen_netif_rx_response *make_rx_response(struct xenvif *vif,
>  	if (st < 0)
>  		resp->status = (s16)st;
> 
> -	vif->rx.rsp_prod_pvt = ++i;
> +	queue->rx.rsp_prod_pvt = ++i;
> 
>  	return resp;
>  }
> 
> -static inline int rx_work_todo(struct xenvif *vif)
> +static inline int rx_work_todo(struct xenvif_queue *queue)
>  {
> -	return !skb_queue_empty(&vif->rx_queue) &&
> -	       xenvif_rx_ring_slots_available(vif, vif->rx_last_skb_slots);
> +	return !skb_queue_empty(&queue->rx_queue) &&
> +	       xenvif_rx_ring_slots_available(queue, queue->rx_last_skb_slots);
>  }
> 
> -static inline int tx_work_todo(struct xenvif *vif)
> +static inline int tx_work_todo(struct xenvif_queue *queue)
>  {
> 
> -	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&vif->tx)) &&
> -	    (nr_pending_reqs(vif) + XEN_NETBK_LEGACY_SLOTS_MAX
> +	if (likely(RING_HAS_UNCONSUMED_REQUESTS(&queue->tx)) &&
> +	    (nr_pending_reqs(queue) + XEN_NETBK_LEGACY_SLOTS_MAX
>  	     < MAX_PENDING_REQS))
>  		return 1;
> 
>  	return 0;
>  }
> 
> -void xenvif_unmap_frontend_rings(struct xenvif *vif)
> +void xenvif_unmap_frontend_rings(struct xenvif_queue *queue)
>  {
> -	if (vif->tx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->tx.sring);
> -	if (vif->rx.sring)
> -		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(vif),
> -					vif->rx.sring);
> +	if (queue->tx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->tx.sring);
> +	if (queue->rx.sring)
> +		xenbus_unmap_ring_vfree(xenvif_to_xenbus_device(queue->vif),
> +					queue->rx.sring);
>  }
> 
> -int xenvif_map_frontend_rings(struct xenvif *vif,
> +int xenvif_map_frontend_rings(struct xenvif_queue *queue,
>  			      grant_ref_t tx_ring_ref,
>  			      grant_ref_t rx_ring_ref)
>  {
> @@ -1506,67 +1507,72 @@ int xenvif_map_frontend_rings(struct xenvif *vif,
> 
>  	int err = -ENOMEM;
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     tx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	txs = (struct xen_netif_tx_sring *)addr;
> -	BACK_RING_INIT(&vif->tx, txs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->tx, txs, PAGE_SIZE);
> 
> -	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(vif),
> +	err = xenbus_map_ring_valloc(xenvif_to_xenbus_device(queue->vif),
>  				     rx_ring_ref, &addr);
>  	if (err)
>  		goto err;
> 
>  	rxs = (struct xen_netif_rx_sring *)addr;
> -	BACK_RING_INIT(&vif->rx, rxs, PAGE_SIZE);
> +	BACK_RING_INIT(&queue->rx, rxs, PAGE_SIZE);
> 
>  	return 0;
> 
>  err:
> -	xenvif_unmap_frontend_rings(vif);
> +	xenvif_unmap_frontend_rings(queue);
>  	return err;
>  }
> 
> -void xenvif_stop_queue(struct xenvif *vif)
> +static inline void xenvif_wake_queue(struct xenvif_queue *queue)
>  {
> -	if (!vif->can_queue)
> -		return;
> +	struct net_device *dev = queue->vif->dev;
> +	netif_tx_wake_queue(netdev_get_tx_queue(dev, queue->id));
> +}
> 
> -	netif_stop_queue(vif->dev);
> +static void xenvif_start_queue(struct xenvif_queue *queue)
> +{
> +	if (xenvif_schedulable(queue->vif))
> +		xenvif_wake_queue(queue);
>  }
> 
> -static void xenvif_start_queue(struct xenvif *vif)
> +static int xenvif_queue_stopped(struct xenvif_queue *queue)
>  {
> -	if (xenvif_schedulable(vif))
> -		netif_wake_queue(vif->dev);
> +	struct net_device *dev = queue->vif->dev;
> +	unsigned int id = queue->id;
> +	return netif_tx_queue_stopped(netdev_get_tx_queue(dev, id));
>  }
> 
>  int xenvif_kthread(void *data)
>  {
> -	struct xenvif *vif = data;
> +	struct xenvif_queue *queue = data;
>  	struct sk_buff *skb;
> 
>  	while (!kthread_should_stop()) {
> -		wait_event_interruptible(vif->wq,
> -					 rx_work_todo(vif) ||
> +		wait_event_interruptible(queue->wq,
> +					 rx_work_todo(queue) ||
>  					 kthread_should_stop());
>  		if (kthread_should_stop())
>  			break;
> 
> -		if (!skb_queue_empty(&vif->rx_queue))
> -			xenvif_rx_action(vif);
> +		if (!skb_queue_empty(&queue->rx_queue))
> +			xenvif_rx_action(queue);
> 
> -		if (skb_queue_empty(&vif->rx_queue) &&
> -		    netif_queue_stopped(vif->dev))
> -			xenvif_start_queue(vif);
> +		if (skb_queue_empty(&queue->rx_queue) &&
> +		    xenvif_queue_stopped(queue))
> +			xenvif_start_queue(queue);
> 
>  		cond_resched();
>  	}
> 
>  	/* Bin any remaining skbs */
> -	while ((skb = skb_dequeue(&vif->rx_queue)) != NULL)
> +	while ((skb = skb_dequeue(&queue->rx_queue)) != NULL)
>  		dev_kfree_skb(skb);
> 
>  	return 0;
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index 7a206cf..f23ea0a 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -19,6 +19,7 @@
>  */
> 
>  #include "common.h"
> +#include <linux/vmalloc.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -34,8 +35,9 @@ struct backend_info {
>  	u8 have_hotplug_status_watch:1;
>  };
> 
> -static int connect_rings(struct backend_info *);
> -static void connect(struct backend_info *);
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue);
> +static void connect(struct backend_info *be);
> +static int read_xenbus_vif_flags(struct backend_info *be);
>  static void backend_create_xenvif(struct backend_info *be);
>  static void unregister_hotplug_status_watch(struct backend_info *be);
>  static void set_backend_state(struct backend_info *be,
> @@ -485,10 +487,9 @@ static void connect(struct backend_info *be)
>  {
>  	int err;
>  	struct xenbus_device *dev = be->dev;
> -
> -	err = connect_rings(be);
> -	if (err)
> -		return;
> +	unsigned long credit_bytes, credit_usec;
> +	unsigned int queue_index;
> +	struct xenvif_queue *queue;
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -496,9 +497,30 @@ static void connect(struct backend_info *be)
>  		return;
>  	}
> 
> -	xen_net_read_rate(dev, &be->vif->credit_bytes,
> -			  &be->vif->credit_usec);
> -	be->vif->remaining_credit = be->vif->credit_bytes;
> +	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
> +	read_xenbus_vif_flags(be);
> +
> +	be->vif->num_queues = 1;
> +	be->vif->queues = vzalloc(be->vif->num_queues *
> +			sizeof(struct xenvif_queue));
> +
> +	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
> +		queue = &be->vif->queues[queue_index];
> +		queue->vif = be->vif;
> +		queue->id = queue_index;
> +		snprintf(queue->name, sizeof(queue->name), "%s-q%u",
> +				be->vif->dev->name, queue->id);
> +
> +		xenvif_init_queue(queue);
> +
> +		queue->remaining_credit = credit_bytes;
> +
> +		err = connect_rings(be, queue);
> +		if (err)
> +			goto err;
> +	}
> +
> +	xenvif_carrier_on(be->vif);
> 
>  	unregister_hotplug_status_watch(be);
>  	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
> @@ -507,18 +529,24 @@ static void connect(struct backend_info *be)
>  	if (!err)
>  		be->have_hotplug_status_watch = 1;
> 
> -	netif_wake_queue(be->vif->dev);
> +	netif_tx_wake_all_queues(be->vif->dev);
> +
> +	return;
> +
> +err:
> +	vfree(be->vif->queues);
> +	be->vif->queues = NULL;
> +	be->vif->num_queues = 0;
> +	return;
>  }
> 
> 
> -static int connect_rings(struct backend_info *be)
> +static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  {
> -	struct xenvif *vif = be->vif;
>  	struct xenbus_device *dev = be->dev;
>  	unsigned long tx_ring_ref, rx_ring_ref;
> -	unsigned int tx_evtchn, rx_evtchn, rx_copy;
> +	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> -	int val;
> 
>  	err = xenbus_gather(XBT_NIL, dev->otherend,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
> @@ -546,6 +574,27 @@ static int connect_rings(struct backend_info *be)
>  		rx_evtchn = tx_evtchn;
>  	}
> 
> +	/* Map the shared frame, irq etc. */
> +	err = xenvif_connect(queue, tx_ring_ref, rx_ring_ref,
> +			     tx_evtchn, rx_evtchn);
> +	if (err) {
> +		xenbus_dev_fatal(dev, err,
> +				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> +				 tx_ring_ref, rx_ring_ref,
> +				 tx_evtchn, rx_evtchn);
> +		return err;
> +	}
> +
> +	return 0;
> +}
> +
> +static int read_xenbus_vif_flags(struct backend_info *be)
> +{
> +	struct xenvif *vif = be->vif;
> +	struct xenbus_device *dev = be->dev;
> +	unsigned int rx_copy;
> +	int err, val;
> +
>  	err = xenbus_scanf(XBT_NIL, dev->otherend, "request-rx-copy", "%u",
>  			   &rx_copy);
>  	if (err == -ENOENT) {
> @@ -621,16 +670,6 @@ static int connect_rings(struct backend_info *be)
>  		val = 0;
>  	vif->ipv6_csum = !!val;
> 
> -	/* Map the shared frame, irq etc. */
> -	err = xenvif_connect(vif, tx_ring_ref, rx_ring_ref,
> -			     tx_evtchn, rx_evtchn);
> -	if (err) {
> -		xenbus_dev_fatal(dev, err,
> -				 "mapping shared-frames %lu/%lu port tx %u rx %u",
> -				 tx_ring_ref, rx_ring_ref,
> -				 tx_evtchn, rx_evtchn);
> -		return err;
> -	}
>  	return 0;
>  }
> 
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwu0-00066L-Rw; Mon, 24 Feb 2014 14:53:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwtz-000666-Mz
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:03 +0000
Received: from [85.158.139.211:4735] by server-6.bemta-5.messagelabs.com id
	30/B8-14342-ECC5B035; Mon, 24 Feb 2014 14:53:02 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393253582!5852774!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21803 invoked from network); 24 Feb 2014 14:53:02 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:02 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so3143811eek.37
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:02 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=Juzm8YXWU+yhV61V4wobfqR/pUumixuKdy3/IohwLrU=;
	b=H2fcRe1/xcd6yOmCDfzgcSS1NjjuWnqCqpU4oADgSDwWDLGKTYaU76I1m4s1Gg55Sa
	qV3/uOPokwssHcwAErZT7Juiw5wvF71Vv0iprktxyBeAv9+LtKDcKrMRwvhWM4s0aIbM
	kXxc2zw8LXKHdzzlKQ6QxN2dseCuhfSrMCLx/AzLREGsfx76Lp0txpWeOM/OrnKyvusf
	NCf5z2XneuGP9RyWaSW4Dwqq5Zva8iXd1+vGKzZweKhF45KXjIS7Sieg+gRjFirkivVc
	MeAvS0OtH7Jqcl/4YSLU3aWO4Ke7RfC5CVs4AlPBDU+Y5aSNzXcy12Hc3GoKmIHH/UQt
	jBFA==
X-Gm-Message-State: ALoCoQmcrQ2UkMzBbxVOrPVSnI7g61+4I701DjsUbyN9DM8Dg5uW1knHqxRLhkz0l36fwO5dLT/m
X-Received: by 10.14.198.132 with SMTP id v4mr24661799een.43.1393253582092;
	Mon, 24 Feb 2014 06:53:02 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.52.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:01 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:46 +0000
Message-Id: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 0/6] xen/arm: Merge early_printk function in
	console code
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hello all,

This patch series aims to merge the early printk code into the console
code, so that a developer no longer has to care whether a message is
printed before or after the console is initialized.

Sincerely yours,

Julien Grall (6):
  xen/arm: earlyprintk: move early_flush in early_puts
  xen/arm: earlyprintk: export early_puts
  xen/arm: Rename EARLY_PRINTK compile option to CONFIG_EARLY_PRINTK
  xen/console: Add support for early printk
  xen/console: Add noreturn attribute to panic function
  xen/arm: Replace early_{printk,panic} call to {printk,panic} call

 xen/arch/arm/Rules.mk              |    2 +-
 xen/arch/arm/arm32/head.S          |   18 +++++++++---------
 xen/arch/arm/arm64/head.S          |   18 +++++++++---------
 xen/arch/arm/early_printk.c        |   36 ++----------------------------------
 xen/arch/arm/mm.c                  |    5 ++---
 xen/arch/arm/setup.c               |   28 +++++++++++++---------------
 xen/common/device_tree.c           |   36 +++++++++++++-----------------------
 xen/drivers/char/console.c         |   10 ++++++++--
 xen/drivers/char/dt-uart.c         |    9 ++++-----
 xen/drivers/char/exynos4210-uart.c |   13 +++++--------
 xen/drivers/char/omap-uart.c       |   13 ++++++-------
 xen/drivers/char/pl011.c           |   13 ++++++-------
 xen/drivers/video/arm_hdlcd.c      |   29 ++++++++++++++---------------
 xen/include/asm-arm/early_printk.h |   27 ++-------------------------
 xen/include/xen/early_printk.h     |   21 +++++++++++++++++++++
 xen/include/xen/lib.h              |    2 +-
 16 files changed, 116 insertions(+), 164 deletions(-)
 create mode 100644 xen/include/xen/early_printk.h

-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwu6-00067v-8Y; Mon, 24 Feb 2014 14:53:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwu4-00067P-Jf
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:08 +0000
Received: from [85.158.143.35:63030] by server-3.bemta-4.messagelabs.com id
	72/2E-11539-4DC5B035; Mon, 24 Feb 2014 14:53:08 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393253587!7899016!1
X-Originating-IP: [74.125.83.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17791 invoked from network); 24 Feb 2014 14:53:07 -0000
Received: from mail-ee0-f51.google.com (HELO mail-ee0-f51.google.com)
	(74.125.83.51)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:07 -0000
Received: by mail-ee0-f51.google.com with SMTP id b57so3211808eek.24
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=UppToxUrf02vu7PpdsT2QtgDInYWdEY6BUTUj3+BGWo=;
	b=HPRRyWJT0Sct8SzePN1GeFJP/lDqjMtA8QUiPlro01orFCfTPvVYZWjlgEYhEXxBgm
	5Ob44ve4wg+Dd2bu8mIeKN2Y6KX2NygNsEbo0nWIETFLooLS1mFOSKWClRMxUo6RsxWt
	IGjqgB96CMJv/9ccAjxMuMbA/tqvYa8csvqdPzF5qX0AgAQEnye54WHMUe5Nr0G40EEj
	54e6id5agfU0LnV8RI3ATwRqe0AX/p2Dg2DqJaz163S//Tdja/LnxviwDuOuDp3ydC6F
	IAlnylrrsw3GZU0kKkv7/mcc0IAQ0R1QDGS2CBc8Z2uymsZslE9PdRfmVZ8wD56J1VVn
	5N1Q==
X-Gm-Message-State: ALoCoQmQWmbEC+CqVwhXh0pHH1UTUNJ6iJHtFvrLmHjgKR1U3k7EjJVMf3zvwfn9lnOuPs1SHM1Q
X-Received: by 10.14.98.66 with SMTP id u42mr25304313eef.18.1393253586990;
	Mon, 24 Feb 2014 06:53:06 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.03
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:06 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:47 +0000
Message-Id: <1393253572-7157-2-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 1/6] xen/arm: earlyprintk: move early_flush
	in early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The early_puts function will be exported for use in the console code. To
avoid losing characters (see why in commit cafdceb "xen/arm: avoid lost
characters with early_printk"), early_flush needs to be called in this
function.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/early_printk.c |   12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..7143f9e 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -29,12 +29,6 @@ static void __init early_puts(const char *s)
         early_putch(*s);
         s++;
     }
-}
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
 
     /*
      * Wait the UART has finished to transfer all characters before
@@ -43,6 +37,12 @@ static void __init early_vprintk(const char *fmt, va_list args)
     early_flush();
 }
 
+static void __init early_vprintk(const char *fmt, va_list args)
+{
+    vsnprintf(buf, sizeof(buf), fmt, args);
+    early_puts(buf);
+}
+
 void __init early_printk(const char *fmt, ...)
 {
     va_list args;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwu7-00068s-Ml; Mon, 24 Feb 2014 14:53:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwu6-00067q-G8
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:10 +0000
Received: from [85.158.139.211:5400] by server-3.bemta-5.messagelabs.com id
	7A/31-13671-5DC5B035; Mon, 24 Feb 2014 14:53:09 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393253588!5898034!1
X-Originating-IP: [209.85.215.172]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3373 invoked from network); 24 Feb 2014 14:53:09 -0000
Received: from mail-ea0-f172.google.com (HELO mail-ea0-f172.google.com)
	(209.85.215.172)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:09 -0000
Received: by mail-ea0-f172.google.com with SMTP id l9so3148910eaj.17
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=9EBvvPBUQZ9/S26xy373EXGuoDKGDYEOhagRhH8bI0k=;
	b=HNRZ2NpK3phXQtO+NEpg2UTlU5OXJQdHQy3IDVMyXnHlQ3GH1aEjIgZnpGsPy0HPj3
	0NIyuROFx11TOTOam9q4XO1DRfZ4JLDJlsSnEjS2IthAdIU038L+XyEqUEv741xQOGSB
	IotmSP9UKzm0/mOCcs8QFTbJmDlWbImBLJokXGEIyW5gaOIG7g7EGczmv8xccl6jpHq+
	J5xco0xcersaIlJzMKQxk4GKKcG59jOQjM2bL308iSGFhyoOmQWlgNE6S+uy+SxdVQGm
	H4zY6IHgGUuV+quU2XhB1u8a3VoEUP21+9wnj0yfmmdb8BB9Pku0zTfpkPLjRArGgHC+
	N6iw==
X-Gm-Message-State: ALoCoQmRS7PUF1Rwk/PJ5FCDWTlwGoEIst5ze7B1J++i9EW8lEB5jpkBEBHcJbV4uPestfWxxPSE
X-Received: by 10.14.204.9 with SMTP id g9mr24750331eeo.82.1393253588735;
	Mon, 24 Feb 2014 06:53:08 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.07
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:07 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:48 +0000
Message-Id: <1393253572-7157-3-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 2/6] xen/arm: earlyprintk: export early_puts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
    Changes in v2:
        - Fix coding style
---
 xen/arch/arm/early_printk.c        |    2 +-
 xen/include/asm-arm/early_printk.h |    3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 7143f9e..affe424 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -21,7 +21,7 @@ void early_flush(void);
 /* Early printk buffer */
 static char __initdata buf[512];
 
-static void __init early_puts(const char *s)
+void early_puts(const char *s)
 {
     while (*s != '\0') {
         if (*s == '\n')
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..27ac8ce 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -24,6 +24,7 @@
 
 #ifdef EARLY_PRINTK
 
+void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
 void early_panic(const char *fmt, ...) __attribute__((noreturn))
@@ -31,6 +32,8 @@ void early_panic(const char *fmt, ...) __attribute__((noreturn))
 
 #else
 
+static inline void early_puts(const char *s) {}
+
 static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
-- 
1.7.10.4
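The header change above relies on a common C pattern: when the feature is compiled out, a static inline no-op replaces the real declaration, so call sites never need their own #ifdef. A minimal sketch outside the Xen tree (FEATURE_TRACE and trace_puts are made-up names for illustration, not Xen symbols):

```c
/* Sketch of the compiled-out-stub pattern used by the patch above. */

#ifdef FEATURE_TRACE
void trace_puts(const char *s);          /* real implementation elsewhere */
#else
static inline void trace_puts(const char *s)
{
    (void)s;                             /* feature disabled: do nothing */
}
#endif

static int boot_step(void)
{
    trace_puts("entering boot_step\n");  /* always compiles; may be a no-op */
    return 42;
}
```

Without FEATURE_TRACE defined, the stub is inlined away and boot_step() carries no trace overhead, yet its source is identical in both configurations.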


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwuA-0006AW-3u; Mon, 24 Feb 2014 14:53:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwu8-00069X-Q3
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:13 +0000
Received: from [85.158.139.211:49003] by server-15.bemta-5.messagelabs.com id
	00/3F-24395-8DC5B035; Mon, 24 Feb 2014 14:53:12 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393253591!1341578!1
X-Originating-IP: [209.85.215.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6813 invoked from network); 24 Feb 2014 14:53:11 -0000
Received: from mail-ea0-f173.google.com (HELO mail-ea0-f173.google.com)
	(209.85.215.173)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:11 -0000
Received: by mail-ea0-f173.google.com with SMTP id n15so2037467ead.32
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=k8jUWrKzQQGvJYr7NFk6GvbuEP5zovnJ+17O3iKXc80=;
	b=FXOLjDZgP8KtwR/yThm3B6GPtsGc0u2o9z+eWCQ4XDIB2sSHlv9LL3GOwhctZYSWB1
	tvftT9VbcLL5MMfqH5M72Z3dpFYR6bmEcZ2+/I//7G0xGLZjs2vQEbZIMLdRzoG/Z6Gh
	YeEnp+sj7QJTX+S7geHrSyZZBawm51BtoYd+j27fLNSKarER+YB7wTy4bAnmbcMwH+h/
	VOMydNU5B9ugQLJ7DdoYPzKugZUC9RBDD+MDClXS5yA7TCH9ACdo9A2Mcb7dFq0kqPrt
	JO/AwH013yIXnb9xpXLFVischXKTxLzuGO4HX+woYSF9/0ZPoNDSOS83AUagjRYgPW0q
	S9HQ==
X-Gm-Message-State: ALoCoQl9kLNfLnttHCG08JoW5Xoy2Zb+N6bnJVT31WPdnkulIlAv1ylmf3s+Olh3iBSSZCRyjPlE
X-Received: by 10.14.87.129 with SMTP id y1mr25241857eee.38.1393253590895;
	Mon, 24 Feb 2014 06:53:10 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.08
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:09 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:49 +0000
Message-Id: <1393253572-7157-4-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 3/6] xen/arm: Rename EARLY_PRINTK compile
	option to CONFIG_EARLY_PRINTK
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Most common compile options start with CONFIG_. Rename the EARLY_PRINTK option
to CONFIG_EARLY_PRINTK to be consistent.

This option will be used in common code (e.g. the console) later.
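The rename is purely a change of macro spelling; code guarded by the option keeps its shape. A minimal sketch (the #define here stands in for the build system passing -DCONFIG_EARLY_PRINTK; in the real tree the macro comes from Rules.mk, never from a #define in source):

```c
/* Sketch only: simulate the build system's -DCONFIG_EARLY_PRINTK. */
#define CONFIG_EARLY_PRINTK 1

static const char *early_printk_status(void)
{
#ifdef CONFIG_EARLY_PRINTK
    return "enabled";
#else
    return "disabled";
#endif
}
```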

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
---
 xen/arch/arm/Rules.mk              |    2 +-
 xen/arch/arm/arm32/head.S          |   18 +++++++++---------
 xen/arch/arm/arm64/head.S          |   18 +++++++++---------
 xen/include/asm-arm/early_printk.h |    6 +++---
 4 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/xen/arch/arm/Rules.mk b/xen/arch/arm/Rules.mk
index aaa203e..57f2eb1 100644
--- a/xen/arch/arm/Rules.mk
+++ b/xen/arch/arm/Rules.mk
@@ -93,7 +93,7 @@ ifneq ($(EARLY_PRINTK_INC),)
 EARLY_PRINTK := y
 endif
 
-CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK
+CFLAGS-$(EARLY_PRINTK) += -DCONFIG_EARLY_PRINTK
 CFLAGS-$(EARLY_PRINTK_INIT_UART) += -DEARLY_PRINTK_INIT_UART
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_INC=\"debug-$(EARLY_PRINTK_INC).inc\"
 CFLAGS-$(EARLY_PRINTK) += -DEARLY_PRINTK_BAUD=$(EARLY_PRINTK_BAUD)
diff --git a/xen/arch/arm/arm32/head.S b/xen/arch/arm/arm32/head.S
index 96230ac..1b1801b 100644
--- a/xen/arch/arm/arm32/head.S
+++ b/xen/arch/arm/arm32/head.S
@@ -34,7 +34,7 @@
 #define PT_UPPER(x) (PT_##x & 0xf00)
 #define PT_LOWER(x) (PT_##x & 0x0ff)
 
-#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
+#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
 
@@ -59,7 +59,7 @@
  */
 /* Macro to print a string to the UART, if there is one.
  * Clobbers r0-r3. */
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 #define PRINT(_s)       \
         adr   r0, 98f ; \
         bl    puts    ; \
@@ -67,9 +67,9 @@
 98:     .asciz _s     ; \
         .align 2      ; \
 99:
-#else /* EARLY_PRINTK */
+#else /* CONFIG_EARLY_PRINTK */
 #define PRINT(s)
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
         .arm
 
@@ -149,7 +149,7 @@ common_start:
         b     2b
 1:
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
         ldr   r11, =EARLY_UART_BASE_ADDRESS  /* r11 := UART base address */
         teq   r12, #0                /* Boot CPU sets up the UART too */
         bleq  init_uart
@@ -330,7 +330,7 @@ paging:
         /* Now we can install the fixmap and dtb mappings, since we
          * don't need the 1:1 map any more */
         dsb
-#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Non-boot CPUs don't need to rebuild the fixmap itself, just
 	 * the mapping from boot_second to xen_fixmap */
         teq   r12, #0
@@ -492,7 +492,7 @@ ENTRY(relocate_xen)
 
         mov pc, lr
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 /* Bring up the UART.
  * r11: Early UART base address
  * Clobbers r0-r2 */
@@ -537,7 +537,7 @@ putn:
 hex:    .ascii "0123456789abcdef"
         .align 2
 
-#else  /* EARLY_PRINTK */
+#else  /* CONFIG_EARLY_PRINTK */
 
 init_uart:
 .global early_puts
@@ -545,7 +545,7 @@ early_puts:
 puts:
 putn:   mov   pc, lr
 
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
 /*
  * Local variables:
diff --git a/xen/arch/arm/arm64/head.S b/xen/arch/arm/arm64/head.S
index 31afdd0..c97c194 100644
--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -30,7 +30,7 @@
 #define PT_DEV    0xe71 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=0 P=1 */
 #define PT_DEV_L3 0xe73 /* nG=1 AF=1 SH=10 AP=01 NS=1 ATTR=100 T=1 P=1 */
 
-#if (defined (EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
+#if (defined (CONFIG_EARLY_PRINTK)) && (defined (EARLY_PRINTK_INC))
 #include EARLY_PRINTK_INC
 #endif
 
@@ -71,7 +71,7 @@
 
 /* Macro to print a string to the UART, if there is one.
  * Clobbers x0-x3. */
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 #define PRINT(_s)       \
         adr   x0, 98f ; \
         bl    puts    ; \
@@ -79,9 +79,9 @@
 98:     .asciz _s     ; \
         .align 2      ; \
 99:
-#else /* EARLY_PRINTK */
+#else /* CONFIG_EARLY_PRINTK */
 #define PRINT(s)
-#endif /* !EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
         /*.aarch64*/
 
@@ -174,7 +174,7 @@ common_start:
         b     2b
 1:
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
         ldr   x23, =EARLY_UART_BASE_ADDRESS /* x23 := UART base address */
         cbnz  x22, 1f
         bl    init_uart                 /* Boot CPU sets up the UART too */
@@ -343,7 +343,7 @@ paging:
         /* Now we can install the fixmap and dtb mappings, since we
          * don't need the 1:1 map any more */
         dsb   sy
-#if defined(EARLY_PRINTK) /* Fixmap is only used by early printk */
+#if defined(CONFIG_EARLY_PRINTK) /* Fixmap is only used by early printk */
         /* Non-boot CPUs don't need to rebuild the fixmap itself, just
 	 * the mapping from boot_second to xen_fixmap */
         cbnz  x22, 1f
@@ -489,7 +489,7 @@ ENTRY(relocate_xen)
 
         ret
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 /* Bring up the UART.
  * x23: Early UART base address
  * Clobbers x0-x1 */
@@ -536,7 +536,7 @@ putn:
 hex:    .ascii "0123456789abcdef"
         .align 2
 
-#else  /* EARLY_PRINTK */
+#else  /* CONFIG_EARLY_PRINTK */
 
 init_uart:
 .global early_puts
@@ -544,7 +544,7 @@ early_puts:
 puts:
 putn:   ret
 
-#endif /* EARLY_PRINTK */
+#endif /* !CONFIG_EARLY_PRINTK */
 
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 27ac8ce..dd190c9 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -12,7 +12,7 @@
 
 #include <xen/config.h>
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 
 /* need to add the uart address offset in page to the fixmap address */
 #define EARLY_UART_VIRTUAL_ADDRESS \
@@ -22,7 +22,7 @@
 
 #ifndef __ASSEMBLY__
 
-#ifdef EARLY_PRINTK
+#ifdef CONFIG_EARLY_PRINTK
 
 void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
@@ -42,7 +42,7 @@ static inline void  __attribute__((noreturn))
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
-#endif
+#endif /* !CONFIG_EARLY_PRINTK */
 
 #endif	/* __ASSEMBLY__ */
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwuC-0006CI-HJ; Mon, 24 Feb 2014 14:53:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwuA-0006Ak-QX
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:15 +0000
Received: from [85.158.139.211:49219] by server-3.bemta-5.messagelabs.com id
	52/61-13671-ADC5B035; Mon, 24 Feb 2014 14:53:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393253592!5884970!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10365 invoked from network); 24 Feb 2014 14:53:12 -0000
From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwuC-0006CI-HJ; Mon, 24 Feb 2014 14:53:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwuA-0006Ak-QX
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:15 +0000
Received: from [85.158.139.211:49219] by server-3.bemta-5.messagelabs.com id
	52/61-13671-ADC5B035; Mon, 24 Feb 2014 14:53:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393253592!5884970!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10365 invoked from network); 24 Feb 2014 14:53:12 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:12 -0000
Received: by mail-ee0-f43.google.com with SMTP id e51so3048710eek.16
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:12 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=UgsMcxFtNLFNEIcSrr9svus0St3eqFFTEdASqkH+A0U=;
	b=Z92k1a0qlpCVSYd47zeilBb6HQNtunh8Wcrx8M/CLd8gLQBPg/dYMlyjezRwUQNfPn
	a/1jrYSfzegVCBUXXcckq0huQU8+P961/a4w5sDOK1Bf0fzR8ESYQShJ838HmpFigGCm
	VBXwFiGnWJ7HQ+WJvq5/b/4aOGAJvckWHyNRbgfWTCkmbgTD3PrxwOEJOo20ztfZr46I
	o/xt7j6fpIK6+R8boHN+/+/Aie2tqfWI/58CgnCWC6zaOjs88RZOVavENJYKGLPuVxSY
	0c/l7jjVcePk4Q5VjXR9T5V6u6zBnxNlBKGHdZOl4gUAmBbPrQLNnHjy9RHllNbh7P3U
	SSUA==
X-Gm-Message-State: ALoCoQmuHnEOeVGU/AuOteeJC5xgwp6yEm3sdOFTVyH6FSYIJx0OTUTw0NeCtN497XVHRBIX8TpP
X-Received: by 10.14.194.193 with SMTP id m41mr25170875een.76.1393253592430;
	Mon, 24 Feb 2014 06:53:12 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.10
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:11 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:50 +0000
Message-Id: <1393253572-7157-5-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Keir Fraser <keir@xen.org>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 4/6] xen/console: Add support for early printk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On ARM, a function (early_printk) was introduced to output messages before the
serial port is initialized.

This solution is fragile because the developer needs to know whether the serial
port is initialized yet, in order to choose between early_printk and printk.
Moreover, some functions (mainly in common code) only use printk, so messages
printed early are sometimes lost.

Directly call the early printk code (early_puts) from the console code while the
serial port is not yet initialized. For this purpose, use serial_steal_fn.

Cc: Keir Fraser <keir@xen.org>
Signed-off-by: Julien Grall <julien.grall@linaro.org>

---
    Changes in v2:
        - Create xen/early_printk.h
---
 xen/arch/arm/early_printk.c        |    1 +
 xen/drivers/char/console.c         |    6 +++++-
 xen/include/asm-arm/early_printk.h |    3 ---
 xen/include/xen/early_printk.h     |   21 +++++++++++++++++++++
 4 files changed, 27 insertions(+), 4 deletions(-)
 create mode 100644 xen/include/xen/early_printk.h

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index affe424..8f5a94f 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -13,6 +13,7 @@
 #include <xen/lib.h>
 #include <xen/stdarg.h>
 #include <xen/string.h>
+#include <xen/early_printk.h>
 #include <asm/early_printk.h>
 
 void early_putch(char c);
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..cdf23f1 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -28,6 +28,7 @@
 #include <asm/debugger.h>
 #include <asm/div64.h>
 #include <xen/hypercall.h> /* for do_console_io */
+#include <xen/early_printk.h>
 
 /* console: comma-separated list of console outputs. */
 static char __initdata opt_console[30] = OPT_CONSOLE_STR;
@@ -245,7 +246,7 @@ long read_console_ring(struct xen_sysctl_readconsole *op)
 static char serial_rx_ring[SERIAL_RX_SIZE];
 static unsigned int serial_rx_cons, serial_rx_prod;
 
-static void (*serial_steal_fn)(const char *);
+static void (*serial_steal_fn)(const char *) = early_puts;
 
 int console_steal(int handle, void (*fn)(const char *))
 {
@@ -652,7 +653,10 @@ void __init console_init_preirq(void)
         else if ( !strncmp(p, "none", 4) )
             continue;
         else if ( (sh = serial_parse_handle(p)) >= 0 )
+        {
             sercon_handle = sh;
+            serial_steal_fn = NULL;
+        }
         else
         {
             char *q = strchr(p, ',');
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index dd190c9..2e8c18a 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -24,7 +24,6 @@
 
 #ifdef CONFIG_EARLY_PRINTK
 
-void early_puts(const char *s);
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
 void early_panic(const char *fmt, ...) __attribute__((noreturn))
@@ -32,8 +31,6 @@ void early_panic(const char *fmt, ...) __attribute__((noreturn))
 
 #else
 
-static inline void early_puts(const char *) {}
-
 static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
diff --git a/xen/include/xen/early_printk.h b/xen/include/xen/early_printk.h
new file mode 100644
index 0000000..c213d18
--- /dev/null
+++ b/xen/include/xen/early_printk.h
@@ -0,0 +1,21 @@
+/*
+ * printk() for use before the console is initialized
+ */
+#ifndef __XEN_EARLY_PRINTK_H__
+#define __XEN_EARLY_PRINTK_H__
+
+#ifdef CONFIG_EARLY_PRINTK
+void early_puts(const char *s);
+#else
+static inline void early_puts(const char *s) {};
+#endif
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwuG-0006FC-Uk; Mon, 24 Feb 2014 14:53:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwuF-0006De-2C
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:19 +0000
Received: from [85.158.137.68:28776] by server-5.bemta-3.messagelabs.com id
	1E/CE-04712-EDC5B035; Mon, 24 Feb 2014 14:53:18 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393253596!2599436!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17717 invoked from network); 24 Feb 2014 14:53:16 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:16 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so3143959eek.37
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=rmpAFcugRBlx9IzNX6GZBqOiI57feOIXb0oqD12fTEQ=;
	b=WZjQtkJ5c5mC5YlwzEu/6qbiKkEFIW7NwSUMfaKQb2Wch2j21jZCKg9uccGVc1qbUb
	ZJGVr0MwAT3NQgRBvnZ6B6Etwe3pAGHkfJyBy6wSAl/E4BWUuZ+8OwQmyvTMsrtRoi0y
	x2bUDx7E9S+Lp5r0iAbacWD90k0TpFcmm+0sJ/KhQthh+JmtAg/rG98jGSh+TIC2Daf3
	9iHLVRwXxlr+16uHQyUa6texN89IUUQ4/OdpeDUtudhN83Hysc/yPxEuagL14OSEyrpQ
	PdcJ1K7DGDtIPBj1YdnoJfJ9w/kTMwe6OqGlahm/uCd2FXLoorMuqSUKC1BJSWATlUEX
	xPXg==
X-Gm-Message-State: ALoCoQkSCUPcDU3Tc1uaJ7Pcjw2ff3hXI5aU/tRcqP/bS/TnVXIm+RunMfg5Iui8eQ9jPkHcUZzy
X-Received: by 10.14.216.193 with SMTP id g41mr25263766eep.13.1393253595761;
	Mon, 24 Feb 2014 06:53:15 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.14
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:14 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:52 +0000
Message-Id: <1393253572-7157-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 6/6] xen/arm: Replace early_{printk,
	panic} call to {printk, panic} call
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that the console supports early printk, we can get rid of the
early_printk and early_panic calls.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
    Changes in v2:
        - Update commit title to reflect that I also removed early_panic
        - Remove orphan early_panic in arm64 code
        - Remove asm/early_printk.h include where it's unnecessary
---
 xen/arch/arm/early_printk.c        |   33 ---------------------------------
 xen/arch/arm/mm.c                  |    5 ++---
 xen/arch/arm/setup.c               |   28 +++++++++++++---------------
 xen/common/device_tree.c           |   36 +++++++++++++-----------------------
 xen/drivers/char/dt-uart.c         |    9 ++++-----
 xen/drivers/char/exynos4210-uart.c |   13 +++++--------
 xen/drivers/char/omap-uart.c       |   13 ++++++-------
 xen/drivers/char/pl011.c           |   13 ++++++-------
 xen/drivers/video/arm_hdlcd.c      |   29 ++++++++++++++---------------
 xen/include/asm-arm/early_printk.h |   23 -----------------------
 10 files changed, 63 insertions(+), 139 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 8f5a94f..c85db69 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -14,14 +14,10 @@
 #include <xen/stdarg.h>
 #include <xen/string.h>
 #include <xen/early_printk.h>
-#include <asm/early_printk.h>
 
 void early_putch(char c);
 void early_flush(void);
 
-/* Early printk buffer */
-static char __initdata buf[512];
-
 void early_puts(const char *s)
 {
     while (*s != '\0') {
@@ -37,32 +33,3 @@ void early_puts(const char *s)
      */
     early_flush();
 }
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
-}
-
-void __init early_printk(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-}
-
-void __attribute__((noreturn)) __init
-early_panic(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-
-    early_printk("\n\nEarly Panic: Stopping\n");
-
-    while(1);
-}
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 308a798..fba3856 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -37,7 +37,6 @@
 #include <public/memory.h>
 #include <xen/sched.h>
 #include <xen/vmap.h>
-#include <asm/early_printk.h>
 #include <xsm/xsm.h>
 #include <xen/pfn.h>
 
@@ -649,8 +648,8 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
         xenheap_mfn_start = base_mfn;
 
     if ( base_mfn < xenheap_mfn_start )
-        early_panic("cannot add xenheap mapping at %lx below heap start %lx",
-                    base_mfn, xenheap_mfn_start);
+        panic("cannot add xenheap mapping at %lx below heap start %lx",
+              base_mfn, xenheap_mfn_start);
 
     end_mfn = base_mfn + nr_mfns;
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 9480f42..5434784 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -39,7 +39,6 @@
 #include <asm/page.h>
 #include <asm/current.h>
 #include <asm/setup.h>
-#include <asm/early_printk.h>
 #include <asm/gic.h>
 #include <asm/cpufeature.h>
 #include <asm/platform.h>
@@ -346,10 +345,10 @@ static paddr_t __init get_xen_paddr(void)
     }
 
     if ( !paddr )
-        early_panic("Not enough memory to relocate Xen");
+        panic("Not enough memory to relocate Xen");
 
-    early_printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
-                 paddr, paddr + min_size);
+    printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+           paddr, paddr + min_size);
 
     early_info.modules.module[MOD_XEN].start = paddr;
     early_info.modules.module[MOD_XEN].size = min_size;
@@ -371,7 +370,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     void *fdt;
 
     if ( !early_info.mem.nr_banks )
-        early_panic("No memory bank");
+        panic("No memory bank");
 
     /*
      * We are going to accumulate two regions here.
@@ -430,8 +429,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( i != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     i, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               i, early_info.mem.nr_banks);
         early_info.mem.nr_banks = i;
     }
 
@@ -465,14 +464,13 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
 
     if ( ! e )
-        early_panic("Not not enough space for xenheap");
+        panic("Not not enough space for xenheap");
 
     domheap_pages = heap_pages - xenheap_pages;
 
-    early_printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
-                 e - (pfn_to_paddr(xenheap_pages)), e,
-                 xenheap_pages);
-    early_printk("Dom heap: %lu pages\n", domheap_pages);
+    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
+            e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages);
+    printk("Dom heap: %lu pages\n", domheap_pages);
 
     setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
@@ -606,8 +604,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( bank != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     bank, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               bank, early_info.mem.nr_banks);
         early_info.mem.nr_banks = bank;
     }
 
@@ -672,7 +670,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     fdt_size = device_tree_early_init(device_tree_flattened, fdt_paddr);
 
     cmdline = device_tree_bootargs(device_tree_flattened);
-    early_printk("Command line: %s\n", cmdline);
+    printk("Command line: %s\n", cmdline);
     cmdline_parse(cmdline);
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 55716a8..c66d1d5 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -23,7 +23,6 @@
 #include <xen/cpumask.h>
 #include <xen/ctype.h>
 #include <xen/lib.h>
-#include <asm/early_printk.h>
 
 struct dt_early_info __initdata early_info;
 const void *device_tree_flattened;
@@ -54,16 +53,7 @@ struct dt_alias_prop {
 
 static LIST_HEAD(aliases_lookup);
 
-/* Some device tree functions may be called both before and after the
-   console is initialized. */
-#define dt_printk(fmt, ...)                         \
-    do                                              \
-    {                                               \
-        if ( system_state == SYS_STATE_early_boot ) \
-            early_printk(fmt, ## __VA_ARGS__);      \
-        else                                        \
-            printk(fmt, ## __VA_ARGS__);            \
-    } while (0)
+#define dt_printk(fmt, ...) printk(fmt, ## __VA_ARGS__);
 
 // #define DEBUG_DT
 
@@ -316,7 +306,7 @@ static void __init process_memory_node(const void *fdt, int node,
 
     if ( address_cells < 1 || size_cells < 1 )
     {
-        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+        dt_printk("fdt: node `%s': invalid #address-cells or #size-cells",
                      name);
         return;
     }
@@ -324,7 +314,7 @@ static void __init process_memory_node(const void *fdt, int node,
     prop = fdt_get_property(fdt, node, "reg", NULL);
     if ( !prop )
     {
-        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        dt_printk("fdt: node `%s': missing `reg' property\n", name);
         return;
     }
 
@@ -355,16 +345,16 @@ static void __init process_multiboot_node(const void *fdt, int node,
     else if ( fdt_node_check_compatible(fdt, node, "xen,linux-initrd") == 0)
         nr = MOD_INITRD;
     else
-        early_panic("%s not a known xen multiboot type\n", name);
+        panic("%s not a known xen multiboot type\n", name);
 
     mod = &early_info.modules.module[nr];
 
     prop = fdt_get_property(fdt, node, "reg", &len);
     if ( !prop )
-        early_panic("node %s missing `reg' property\n", name);
+        panic("node %s missing `reg' property\n", name);
 
     if ( len < dt_cells_to_size(address_cells + size_cells) )
-        early_panic("fdt: node `%s': `reg` property length is too short\n",
+        panic("fdt: node `%s': `reg` property length is too short\n",
                     name);
 
     cell = (const __be32 *)prop->data;
@@ -375,7 +365,7 @@ static void __init process_multiboot_node(const void *fdt, int node,
     if ( prop )
     {
         if ( len > sizeof(mod->cmdline) )
-            early_panic("module %d command line too long\n", nr);
+            panic("module %d command line too long\n", nr);
 
         safe_strcpy(mod->cmdline, prop->data);
     }
@@ -458,12 +448,12 @@ static void __init early_print_info(void)
     int i, nr_rsvd;
 
     for ( i = 0; i < mi->nr_banks; i++ )
-        early_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
                      mi->bank[i].start,
                      mi->bank[i].start + mi->bank[i].size - 1);
-    early_printk("\n");
+    dt_printk("\n");
     for ( i = 1 ; i < mods->nr_mods + 1; i++ )
-        early_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
+        dt_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
                      i,
                      mods->module[i].start,
                      mods->module[i].start + mods->module[i].size,
@@ -476,10 +466,10 @@ static void __init early_print_info(void)
             continue;
         /* fdt_get_mem_rsv returns length */
         e += s;
-        early_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
                      i, s, e);
     }
-    early_printk("\n");
+    dt_printk("\n");
 }
 
 /**
@@ -495,7 +485,7 @@ size_t __init device_tree_early_init(const void *fdt, paddr_t paddr)
 
     ret = fdt_check_header(fdt);
     if ( ret < 0 )
-        early_panic("No valid device tree\n");
+        panic("No valid device tree\n");
 
     mod = &early_info.modules.module[MOD_FDT];
     mod->start = paddr;
diff --git a/xen/drivers/char/dt-uart.c b/xen/drivers/char/dt-uart.c
index d7204fb..fa92b5c 100644
--- a/xen/drivers/char/dt-uart.c
+++ b/xen/drivers/char/dt-uart.c
@@ -18,7 +18,6 @@
  */
 
 #include <asm/device.h>
-#include <asm/early_printk.h>
 #include <asm/types.h>
 #include <xen/console.h>
 #include <xen/device_tree.h>
@@ -44,7 +43,7 @@ void __init dt_uart_init(void)
 
     if ( !console_has("dtuart") || !strcmp(opt_dtuart, "") )
     {
-        early_printk("No console\n");
+        printk("No console\n");
         return;
     }
 
@@ -54,7 +53,7 @@ void __init dt_uart_init(void)
     else
         options = "";
 
-    early_printk("Looking for UART console %s\n", devpath);
+    printk("Looking for UART console %s\n", devpath);
     if ( *devpath == '/' )
         dev = dt_find_node_by_path(devpath);
     else
@@ -62,12 +61,12 @@ void __init dt_uart_init(void)
 
     if ( !dev )
     {
-        early_printk("Unable to find device \"%s\"\n", devpath);
+        printk("Unable to find device \"%s\"\n", devpath);
         return;
     }
 
     ret = device_init(dev, DEVICE_SERIAL, options);
 
     if ( ret )
-        early_printk("Unable to initialize serial: %d\n", ret);
+        printk("Unable to initialize serial: %d\n", ret);
 }
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 150d49b..d49e1fe 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -24,7 +24,6 @@
 #include <xen/init.h>
 #include <xen/irq.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include <asm/device.h>
 #include <asm/exynos4210-uart.h>
 #include <asm/io.h>
@@ -314,9 +313,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-    {
-        early_printk("WARNING: UART configuration is not supported\n");
-    }
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &exynos4210_com;
 
@@ -329,22 +326,22 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("exynos4210: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the IRQ\n");
+        printk("exynos4210: Unable to retrieve the IRQ\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("exynos4210: Unable to map the UART memory\n");
+        printk("exynos4210: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index b29f610..49ae1a4 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -15,7 +15,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <asm/device.h>
 #include <xen/errno.h>
@@ -301,14 +300,14 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &omap_com;
 
     res = dt_property_read_u32(dev, "clock-frequency", &clkspec);
     if ( !res )
     {
-        early_printk("omap-uart: Unable to retrieve the clock frequency\n");
+        printk("omap-uart: Unable to retrieve the clock frequency\n");
         return -EINVAL;
     }
 
@@ -321,22 +320,22 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("omap-uart: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the IRQ\n");
+        printk("omap-uart: Unable to retrieve the IRQ\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("omap-uart: Unable to map the UART memory\n");
+        printk("omap-uart: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index fe99af6..90bf0c6 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -22,7 +22,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <xen/errno.h>
 #include <asm/device.h>
@@ -107,7 +106,7 @@ static void __init pl011_init_preirq(struct serial_port *port)
         /* Baud rate already set: read it out from the divisor latch. */
         divisor = (pl011_read(uart, IBRD) << 6) | (pl011_read(uart, FBRD));
         if (!divisor)
-            early_panic("pl011: No Baud rate configured\n");
+            panic("pl011: No Baud rate configured\n");
         uart->baud = (uart->clock_hz << 2) / divisor;
     }
     /* This write must follow FBRD and IBRD writes. */
@@ -229,7 +228,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
 
     if ( strcmp(config, "") )
     {
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
     }
 
     uart = &pl011_com;
@@ -243,22 +242,22 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("pl011: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the IRQ\n");
+        printk("pl011: Unable to retrieve the IRQ\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("pl011: Unable to map the UART memory\n");
+        printk("pl011: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
index 647f22c..2a5f72e 100644
--- a/xen/drivers/video/arm_hdlcd.c
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -25,7 +25,6 @@
 #include <xen/libfdt/libfdt.h>
 #include <xen/init.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include "font.h"
 #include "lfb.h"
 #include "modelines.h"
@@ -123,21 +122,21 @@ void __init video_init(void)
 
     if ( !dev )
     {
-        early_printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
+        printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
         return;
     }
 
     res = dt_device_get_address(dev, 0, &hdlcd_start, &hdlcd_size);
     if ( !res )
     {
-        early_printk("HDLCD: Unable to retrieve MMIO base address\n");
+        printk("HDLCD: Unable to retrieve MMIO base address\n");
         return;
     }
 
     cells = dt_get_property(dev, "framebuffer", &lenp);
     if ( !cells )
     {
-        early_printk("HDLCD: Unable to retrieve framebuffer property\n");
+        printk("HDLCD: Unable to retrieve framebuffer property\n");
         return;
     }
 
@@ -146,13 +145,13 @@ void __init video_init(void)
 
     if ( !hdlcd_start )
     {
-        early_printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
         return;
     }
 
     if ( !framebuffer_start )
     {
-        early_printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
         return;
     }
 
@@ -166,13 +165,13 @@ void __init video_init(void)
     else if ( strlen(mode_string) < strlen("800x600@60") ||
             strlen(mode_string) > sizeof(_mode_string) - 1 )
     {
-        early_printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
+        printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
         return;
     } else {
         char *s = strchr(mode_string, '-');
         if ( !s )
         {
-            early_printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+            printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
                          mode_string);
             get_color_masks("32", &c);
             memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
@@ -180,13 +179,13 @@ void __init video_init(void)
         } else {
             if ( strlen(s) < 6 )
             {
-                early_printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
+                printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
                 return;
             }
             s++;
             if ( get_color_masks(s, &c) < 0 )
             {
-                early_printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
+                printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
                 return;
             }
             bytes_per_pixel = simple_strtoll(s, NULL, 10) / 8;
@@ -205,23 +204,23 @@ void __init video_init(void)
     }
     if ( !videomode )
     {
-        early_printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
-                     _mode_string);
+        printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
+               _mode_string);
         return;
     }
 
     if ( framebuffer_size < bytes_per_pixel * videomode->xres * videomode->yres )
     {
-        early_printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
         return;
     }
 
-    early_printk(KERN_INFO "Initializing HDLCD driver\n");
+    printk(KERN_INFO "Initializing HDLCD driver\n");
 
     lfb = ioremap_wc(framebuffer_start, framebuffer_size);
     if ( !lfb )
     {
-        early_printk(KERN_ERR "Couldn't map the framebuffer\n");
+        printk(KERN_ERR "Couldn't map the framebuffer\n");
         return;
     }
     memset(lfb, 0x00, bytes_per_pixel * videomode->xres * videomode->yres);
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 2e8c18a..8c3d6a8 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -18,29 +18,6 @@
 #define EARLY_UART_VIRTUAL_ADDRESS \
     (FIXMAP_ADDR(FIXMAP_CONSOLE) +(EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
 
-#endif
-
-#ifndef __ASSEMBLY__
-
-#ifdef CONFIG_EARLY_PRINTK
-
-void early_printk(const char *fmt, ...)
-    __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
-    __attribute__((format (printf, 1, 2)));
-
-#else
-
-static inline  __attribute__((format (printf, 1, 2))) void
-early_printk(const char *fmt, ...)
-{}
-
-static inline void  __attribute__((noreturn))
-__attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
-{while(1);}
-
 #endif /* !CONFIG_EARLY_PRINTK */
 
-#endif	/* __ASSEMBLY__ */
-
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:53:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwuG-0006FC-Uk; Mon, 24 Feb 2014 14:53:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwuF-0006De-2C
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:53:19 +0000
Received: from [85.158.137.68:28776] by server-5.bemta-3.messagelabs.com id
	1E/CE-04712-EDC5B035; Mon, 24 Feb 2014 14:53:18 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393253596!2599436!1
X-Originating-IP: [74.125.83.50]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17717 invoked from network); 24 Feb 2014 14:53:16 -0000
Received: from mail-ee0-f50.google.com (HELO mail-ee0-f50.google.com)
	(74.125.83.50)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:53:16 -0000
Received: by mail-ee0-f50.google.com with SMTP id d17so3143959eek.37
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=rmpAFcugRBlx9IzNX6GZBqOiI57feOIXb0oqD12fTEQ=;
	b=WZjQtkJ5c5mC5YlwzEu/6qbiKkEFIW7NwSUMfaKQb2Wch2j21jZCKg9uccGVc1qbUb
	ZJGVr0MwAT3NQgRBvnZ6B6Etwe3pAGHkfJyBy6wSAl/E4BWUuZ+8OwQmyvTMsrtRoi0y
	x2bUDx7E9S+Lp5r0iAbacWD90k0TpFcmm+0sJ/KhQthh+JmtAg/rG98jGSh+TIC2Daf3
	9iHLVRwXxlr+16uHQyUa6texN89IUUQ4/OdpeDUtudhN83Hysc/yPxEuagL14OSEyrpQ
	PdcJ1K7DGDtIPBj1YdnoJfJ9w/kTMwe6OqGlahm/uCd2FXLoorMuqSUKC1BJSWATlUEX
	xPXg==
X-Gm-Message-State: ALoCoQkSCUPcDU3Tc1uaJ7Pcjw2ff3hXI5aU/tRcqP/bS/TnVXIm+RunMfg5Iui8eQ9jPkHcUZzy
X-Received: by 10.14.216.193 with SMTP id g41mr25263766eep.13.1393253595761;
	Mon, 24 Feb 2014 06:53:15 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.14
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:14 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:52 +0000
Message-Id: <1393253572-7157-7-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Julien Grall <julien.grall@linaro.org>,
	tim@xen.org, ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 6/6] xen/arm: Replace early_{printk,
	panic} calls with {printk, panic} calls
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Now that the console supports earlyprintk, we can get rid of the
early_printk and early_panic calls.

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Acked-by: Ian Campbell <ian.campbell@citrix.com>

---
    Changes in v2:
        - Update the commit title to reflect that early_panic was also removed
        - Remove orphan early_panic in arm64 code
        - Remove the asm/early_printk.h include where it's unnecessary
---
 xen/arch/arm/early_printk.c        |   33 ---------------------------------
 xen/arch/arm/mm.c                  |    5 ++---
 xen/arch/arm/setup.c               |   28 +++++++++++++---------------
 xen/common/device_tree.c           |   36 +++++++++++++-----------------------
 xen/drivers/char/dt-uart.c         |    9 ++++-----
 xen/drivers/char/exynos4210-uart.c |   13 +++++--------
 xen/drivers/char/omap-uart.c       |   13 ++++++-------
 xen/drivers/char/pl011.c           |   13 ++++++-------
 xen/drivers/video/arm_hdlcd.c      |   29 ++++++++++++++---------------
 xen/include/asm-arm/early_printk.h |   23 -----------------------
 10 files changed, 63 insertions(+), 139 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 8f5a94f..c85db69 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -14,14 +14,10 @@
 #include <xen/stdarg.h>
 #include <xen/string.h>
 #include <xen/early_printk.h>
-#include <asm/early_printk.h>
 
 void early_putch(char c);
 void early_flush(void);
 
-/* Early printk buffer */
-static char __initdata buf[512];
-
 void early_puts(const char *s)
 {
     while (*s != '\0') {
@@ -37,32 +33,3 @@ void early_puts(const char *s)
      */
     early_flush();
 }
-
-static void __init early_vprintk(const char *fmt, va_list args)
-{
-    vsnprintf(buf, sizeof(buf), fmt, args);
-    early_puts(buf);
-}
-
-void __init early_printk(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-}
-
-void __attribute__((noreturn)) __init
-early_panic(const char *fmt, ...)
-{
-    va_list args;
-
-    va_start(args, fmt);
-    early_vprintk(fmt, args);
-    va_end(args);
-
-    early_printk("\n\nEarly Panic: Stopping\n");
-
-    while(1);
-}
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 308a798..fba3856 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -37,7 +37,6 @@
 #include <public/memory.h>
 #include <xen/sched.h>
 #include <xen/vmap.h>
-#include <asm/early_printk.h>
 #include <xsm/xsm.h>
 #include <xen/pfn.h>
 
@@ -649,8 +648,8 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
         xenheap_mfn_start = base_mfn;
 
     if ( base_mfn < xenheap_mfn_start )
-        early_panic("cannot add xenheap mapping at %lx below heap start %lx",
-                    base_mfn, xenheap_mfn_start);
+        panic("cannot add xenheap mapping at %lx below heap start %lx",
+              base_mfn, xenheap_mfn_start);
 
     end_mfn = base_mfn + nr_mfns;
 
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 9480f42..5434784 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -39,7 +39,6 @@
 #include <asm/page.h>
 #include <asm/current.h>
 #include <asm/setup.h>
-#include <asm/early_printk.h>
 #include <asm/gic.h>
 #include <asm/cpufeature.h>
 #include <asm/platform.h>
@@ -346,10 +345,10 @@ static paddr_t __init get_xen_paddr(void)
     }
 
     if ( !paddr )
-        early_panic("Not enough memory to relocate Xen");
+        panic("Not enough memory to relocate Xen");
 
-    early_printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
-                 paddr, paddr + min_size);
+    printk("Placing Xen at 0x%"PRIpaddr"-0x%"PRIpaddr"\n",
+           paddr, paddr + min_size);
 
     early_info.modules.module[MOD_XEN].start = paddr;
     early_info.modules.module[MOD_XEN].size = min_size;
@@ -371,7 +370,7 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     void *fdt;
 
     if ( !early_info.mem.nr_banks )
-        early_panic("No memory bank");
+        panic("No memory bank");
 
     /*
      * We are going to accumulate two regions here.
@@ -430,8 +429,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( i != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     i, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               i, early_info.mem.nr_banks);
         early_info.mem.nr_banks = i;
     }
 
@@ -465,14 +464,13 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
     } while ( xenheap_pages > 128<<(20-PAGE_SHIFT) );
 
     if ( ! e )
-        early_panic("Not not enough space for xenheap");
+        panic("Not enough space for xenheap");
 
     domheap_pages = heap_pages - xenheap_pages;
 
-    early_printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
-                 e - (pfn_to_paddr(xenheap_pages)), e,
-                 xenheap_pages);
-    early_printk("Dom heap: %lu pages\n", domheap_pages);
+    printk("Xen heap: %"PRIpaddr"-%"PRIpaddr" (%lu pages)\n",
+            e - (pfn_to_paddr(xenheap_pages)), e, xenheap_pages);
+    printk("Dom heap: %lu pages\n", domheap_pages);
 
     setup_xenheap_mappings((e >> PAGE_SHIFT) - xenheap_pages, xenheap_pages);
 
@@ -606,8 +604,8 @@ static void __init setup_mm(unsigned long dtb_paddr, size_t dtb_size)
 
     if ( bank != early_info.mem.nr_banks )
     {
-        early_printk("WARNING: only using %d out of %d memory banks\n",
-                     bank, early_info.mem.nr_banks);
+        printk("WARNING: only using %d out of %d memory banks\n",
+               bank, early_info.mem.nr_banks);
         early_info.mem.nr_banks = bank;
     }
 
@@ -672,7 +670,7 @@ void __init start_xen(unsigned long boot_phys_offset,
     fdt_size = device_tree_early_init(device_tree_flattened, fdt_paddr);
 
     cmdline = device_tree_bootargs(device_tree_flattened);
-    early_printk("Command line: %s\n", cmdline);
+    printk("Command line: %s\n", cmdline);
     cmdline_parse(cmdline);
 
     setup_pagetables(boot_phys_offset, get_xen_paddr());
diff --git a/xen/common/device_tree.c b/xen/common/device_tree.c
index 55716a8..c66d1d5 100644
--- a/xen/common/device_tree.c
+++ b/xen/common/device_tree.c
@@ -23,7 +23,6 @@
 #include <xen/cpumask.h>
 #include <xen/ctype.h>
 #include <xen/lib.h>
-#include <asm/early_printk.h>
 
 struct dt_early_info __initdata early_info;
 const void *device_tree_flattened;
@@ -54,16 +53,7 @@ struct dt_alias_prop {
 
 static LIST_HEAD(aliases_lookup);
 
-/* Some device tree functions may be called both before and after the
-   console is initialized. */
-#define dt_printk(fmt, ...)                         \
-    do                                              \
-    {                                               \
-        if ( system_state == SYS_STATE_early_boot ) \
-            early_printk(fmt, ## __VA_ARGS__);      \
-        else                                        \
-            printk(fmt, ## __VA_ARGS__);            \
-    } while (0)
+#define dt_printk(fmt, ...) printk(fmt, ## __VA_ARGS__)
 
 // #define DEBUG_DT
 
@@ -316,7 +306,7 @@ static void __init process_memory_node(const void *fdt, int node,
 
     if ( address_cells < 1 || size_cells < 1 )
     {
-        early_printk("fdt: node `%s': invalid #address-cells or #size-cells",
+        dt_printk("fdt: node `%s': invalid #address-cells or #size-cells",
                      name);
         return;
     }
@@ -324,7 +314,7 @@ static void __init process_memory_node(const void *fdt, int node,
     prop = fdt_get_property(fdt, node, "reg", NULL);
     if ( !prop )
     {
-        early_printk("fdt: node `%s': missing `reg' property\n", name);
+        dt_printk("fdt: node `%s': missing `reg' property\n", name);
         return;
     }
 
@@ -355,16 +345,16 @@ static void __init process_multiboot_node(const void *fdt, int node,
     else if ( fdt_node_check_compatible(fdt, node, "xen,linux-initrd") == 0)
         nr = MOD_INITRD;
     else
-        early_panic("%s not a known xen multiboot type\n", name);
+        panic("%s not a known xen multiboot type\n", name);
 
     mod = &early_info.modules.module[nr];
 
     prop = fdt_get_property(fdt, node, "reg", &len);
     if ( !prop )
-        early_panic("node %s missing `reg' property\n", name);
+        panic("node %s missing `reg' property\n", name);
 
     if ( len < dt_cells_to_size(address_cells + size_cells) )
-        early_panic("fdt: node `%s': `reg` property length is too short\n",
+        panic("fdt: node `%s': `reg` property length is too short\n",
                     name);
 
     cell = (const __be32 *)prop->data;
@@ -375,7 +365,7 @@ static void __init process_multiboot_node(const void *fdt, int node,
     if ( prop )
     {
         if ( len > sizeof(mod->cmdline) )
-            early_panic("module %d command line too long\n", nr);
+            panic("module %d command line too long\n", nr);
 
         safe_strcpy(mod->cmdline, prop->data);
     }
@@ -458,12 +448,12 @@ static void __init early_print_info(void)
     int i, nr_rsvd;
 
     for ( i = 0; i < mi->nr_banks; i++ )
-        early_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk("RAM: %"PRIpaddr" - %"PRIpaddr"\n",
                      mi->bank[i].start,
                      mi->bank[i].start + mi->bank[i].size - 1);
-    early_printk("\n");
+    dt_printk("\n");
     for ( i = 1 ; i < mods->nr_mods + 1; i++ )
-        early_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
+        dt_printk("MODULE[%d]: %"PRIpaddr" - %"PRIpaddr" %s\n",
                      i,
                      mods->module[i].start,
                      mods->module[i].start + mods->module[i].size,
@@ -476,10 +466,10 @@ static void __init early_print_info(void)
             continue;
         /* fdt_get_mem_rsv returns length */
         e += s;
-        early_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
+        dt_printk(" RESVD[%d]: %"PRIpaddr" - %"PRIpaddr"\n",
                      i, s, e);
     }
-    early_printk("\n");
+    dt_printk("\n");
 }
 
 /**
@@ -495,7 +485,7 @@ size_t __init device_tree_early_init(const void *fdt, paddr_t paddr)
 
     ret = fdt_check_header(fdt);
     if ( ret < 0 )
-        early_panic("No valid device tree\n");
+        panic("No valid device tree\n");
 
     mod = &early_info.modules.module[MOD_FDT];
     mod->start = paddr;
diff --git a/xen/drivers/char/dt-uart.c b/xen/drivers/char/dt-uart.c
index d7204fb..fa92b5c 100644
--- a/xen/drivers/char/dt-uart.c
+++ b/xen/drivers/char/dt-uart.c
@@ -18,7 +18,6 @@
  */
 
 #include <asm/device.h>
-#include <asm/early_printk.h>
 #include <asm/types.h>
 #include <xen/console.h>
 #include <xen/device_tree.h>
@@ -44,7 +43,7 @@ void __init dt_uart_init(void)
 
     if ( !console_has("dtuart") || !strcmp(opt_dtuart, "") )
     {
-        early_printk("No console\n");
+        printk("No console\n");
         return;
     }
 
@@ -54,7 +53,7 @@ void __init dt_uart_init(void)
     else
         options = "";
 
-    early_printk("Looking for UART console %s\n", devpath);
+    printk("Looking for UART console %s\n", devpath);
     if ( *devpath == '/' )
         dev = dt_find_node_by_path(devpath);
     else
@@ -62,12 +61,12 @@ void __init dt_uart_init(void)
 
     if ( !dev )
     {
-        early_printk("Unable to find device \"%s\"\n", devpath);
+        printk("Unable to find device \"%s\"\n", devpath);
         return;
     }
 
     ret = device_init(dev, DEVICE_SERIAL, options);
 
     if ( ret )
-        early_printk("Unable to initialize serial: %d\n", ret);
+        printk("Unable to initialize serial: %d\n", ret);
 }
diff --git a/xen/drivers/char/exynos4210-uart.c b/xen/drivers/char/exynos4210-uart.c
index 150d49b..d49e1fe 100644
--- a/xen/drivers/char/exynos4210-uart.c
+++ b/xen/drivers/char/exynos4210-uart.c
@@ -24,7 +24,6 @@
 #include <xen/init.h>
 #include <xen/irq.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include <asm/device.h>
 #include <asm/exynos4210-uart.h>
 #include <asm/io.h>
@@ -314,9 +313,7 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-    {
-        early_printk("WARNING: UART configuration is not supported\n");
-    }
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &exynos4210_com;
 
@@ -329,22 +326,22 @@ static int __init exynos4210_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("exynos4210: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("exynos4210: Unable to retrieve the IRQ\n");
+        printk("exynos4210: Unable to retrieve the IRQ\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("exynos4210: Unable to map the UART memory\n");
+        printk("exynos4210: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
diff --git a/xen/drivers/char/omap-uart.c b/xen/drivers/char/omap-uart.c
index b29f610..49ae1a4 100644
--- a/xen/drivers/char/omap-uart.c
+++ b/xen/drivers/char/omap-uart.c
@@ -15,7 +15,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <asm/device.h>
 #include <xen/errno.h>
@@ -301,14 +300,14 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     u64 addr, size;
 
     if ( strcmp(config, "") )
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
 
     uart = &omap_com;
 
     res = dt_property_read_u32(dev, "clock-frequency", &clkspec);
     if ( !res )
     {
-        early_printk("omap-uart: Unable to retrieve the clock frequency\n");
+        printk("omap-uart: Unable to retrieve the clock frequency\n");
         return -EINVAL;
     }
 
@@ -321,22 +320,22 @@ static int __init omap_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("omap-uart: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("omap-uart: Unable to retrieve the IRQ\n");
+        printk("omap-uart: Unable to retrieve the IRQ\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("omap-uart: Unable to map the UART memory\n");
+        printk("omap-uart: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
diff --git a/xen/drivers/char/pl011.c b/xen/drivers/char/pl011.c
index fe99af6..90bf0c6 100644
--- a/xen/drivers/char/pl011.c
+++ b/xen/drivers/char/pl011.c
@@ -22,7 +22,6 @@
 #include <xen/serial.h>
 #include <xen/init.h>
 #include <xen/irq.h>
-#include <asm/early_printk.h>
 #include <xen/device_tree.h>
 #include <xen/errno.h>
 #include <asm/device.h>
@@ -107,7 +106,7 @@ static void __init pl011_init_preirq(struct serial_port *port)
         /* Baud rate already set: read it out from the divisor latch. */
         divisor = (pl011_read(uart, IBRD) << 6) | (pl011_read(uart, FBRD));
         if (!divisor)
-            early_panic("pl011: No Baud rate configured\n");
+            panic("pl011: No Baud rate configured\n");
         uart->baud = (uart->clock_hz << 2) / divisor;
     }
     /* This write must follow FBRD and IBRD writes. */
@@ -229,7 +228,7 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
 
     if ( strcmp(config, "") )
     {
-        early_printk("WARNING: UART configuration is not supported\n");
+        printk("WARNING: UART configuration is not supported\n");
     }
 
     uart = &pl011_com;
@@ -243,22 +242,22 @@ static int __init pl011_uart_init(struct dt_device_node *dev,
     res = dt_device_get_address(dev, 0, &addr, &size);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the base"
-                     " address of the UART\n");
+        printk("pl011: Unable to retrieve the base"
+               " address of the UART\n");
         return res;
     }
 
     res = dt_device_get_irq(dev, 0, &uart->irq);
     if ( res )
     {
-        early_printk("pl011: Unable to retrieve the IRQ\n");
+        printk("pl011: Unable to retrieve the IRQ\n");
         return res;
     }
 
     uart->regs = ioremap_nocache(addr, size);
     if ( !uart->regs )
     {
-        early_printk("pl011: Unable to map the UART memory\n");
+        printk("pl011: Unable to map the UART memory\n");
         return -ENOMEM;
     }
 
diff --git a/xen/drivers/video/arm_hdlcd.c b/xen/drivers/video/arm_hdlcd.c
index 647f22c..2a5f72e 100644
--- a/xen/drivers/video/arm_hdlcd.c
+++ b/xen/drivers/video/arm_hdlcd.c
@@ -25,7 +25,6 @@
 #include <xen/libfdt/libfdt.h>
 #include <xen/init.h>
 #include <xen/mm.h>
-#include <asm/early_printk.h>
 #include "font.h"
 #include "lfb.h"
 #include "modelines.h"
@@ -123,21 +122,21 @@ void __init video_init(void)
 
     if ( !dev )
     {
-        early_printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
+        printk("HDLCD: Cannot find node compatible with \"arm,hdcld\"\n");
         return;
     }
 
     res = dt_device_get_address(dev, 0, &hdlcd_start, &hdlcd_size);
     if ( !res )
     {
-        early_printk("HDLCD: Unable to retrieve MMIO base address\n");
+        printk("HDLCD: Unable to retrieve MMIO base address\n");
         return;
     }
 
     cells = dt_get_property(dev, "framebuffer", &lenp);
     if ( !cells )
     {
-        early_printk("HDLCD: Unable to retrieve framebuffer property\n");
+        printk("HDLCD: Unable to retrieve framebuffer property\n");
         return;
     }
 
@@ -146,13 +145,13 @@ void __init video_init(void)
 
     if ( !hdlcd_start )
     {
-        early_printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: address missing from device tree, disabling driver\n");
         return;
     }
 
     if ( !framebuffer_start )
     {
-        early_printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
+        printk(KERN_ERR "HDLCD: framebuffer address missing from device tree, disabling driver\n");
         return;
     }
 
@@ -166,13 +165,13 @@ void __init video_init(void)
     else if ( strlen(mode_string) < strlen("800x600@60") ||
             strlen(mode_string) > sizeof(_mode_string) - 1 )
     {
-        early_printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
+        printk(KERN_ERR "HDLCD: invalid modeline=%s\n", mode_string);
         return;
     } else {
         char *s = strchr(mode_string, '-');
         if ( !s )
         {
-            early_printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
+            printk(KERN_INFO "HDLCD: bpp not found in modeline %s, assume 32 bpp\n",
                          mode_string);
             get_color_masks("32", &c);
             memcpy(_mode_string, mode_string, strlen(mode_string) + 1);
@@ -180,13 +179,13 @@ void __init video_init(void)
         } else {
             if ( strlen(s) < 6 )
             {
-                early_printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
+                printk(KERN_ERR "HDLCD: invalid mode %s\n", mode_string);
                 return;
             }
             s++;
             if ( get_color_masks(s, &c) < 0 )
             {
-                early_printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
+                printk(KERN_WARNING "HDLCD: unsupported bpp %s\n", s);
                 return;
             }
             bytes_per_pixel = simple_strtoll(s, NULL, 10) / 8;
@@ -205,23 +204,23 @@ void __init video_init(void)
     }
     if ( !videomode )
     {
-        early_printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
-                     _mode_string);
+        printk(KERN_WARNING "HDLCD: unsupported videomode %s\n",
+               _mode_string);
         return;
     }
 
     if ( framebuffer_size < bytes_per_pixel * videomode->xres * videomode->yres )
     {
-        early_printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
+        printk(KERN_ERR "HDLCD: the framebuffer is too small, disabling the HDLCD driver\n");
         return;
     }
 
-    early_printk(KERN_INFO "Initializing HDLCD driver\n");
+    printk(KERN_INFO "Initializing HDLCD driver\n");
 
     lfb = ioremap_wc(framebuffer_start, framebuffer_size);
     if ( !lfb )
     {
-        early_printk(KERN_ERR "Couldn't map the framebuffer\n");
+        printk(KERN_ERR "Couldn't map the framebuffer\n");
         return;
     }
     memset(lfb, 0x00, bytes_per_pixel * videomode->xres * videomode->yres);
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 2e8c18a..8c3d6a8 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -18,29 +18,6 @@
 #define EARLY_UART_VIRTUAL_ADDRESS \
     (FIXMAP_ADDR(FIXMAP_CONSOLE) +(EARLY_UART_BASE_ADDRESS & ~PAGE_MASK))
 
-#endif
-
-#ifndef __ASSEMBLY__
-
-#ifdef CONFIG_EARLY_PRINTK
-
-void early_printk(const char *fmt, ...)
-    __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
-    __attribute__((format (printf, 1, 2)));
-
-#else
-
-static inline  __attribute__((format (printf, 1, 2))) void
-early_printk(const char *fmt, ...)
-{}
-
-static inline void  __attribute__((noreturn))
-__attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
-{while(1);}
-
 #endif /* !CONFIG_EARLY_PRINTK */
 
-#endif	/* __ASSEMBLY__ */
-
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:54:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwv1-0006aY-On; Mon, 24 Feb 2014 14:54:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <m.a.young@durham.ac.uk>) id 1WHwuv-0006Zb-NY
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:54:06 +0000
Received: from [193.109.254.147:48103] by server-2.bemta-14.messagelabs.com id
	C7/FF-01236-90D5B035; Mon, 24 Feb 2014 14:54:01 +0000
X-Env-Sender: m.a.young@durham.ac.uk
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393253640!6432433!1
X-Originating-IP: [129.234.248.1]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTI5LjIzNC4yNDguMSA9PiAxMjI2NTk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8786 invoked from network); 24 Feb 2014 14:54:00 -0000
Received: from hermes1.dur.ac.uk (HELO hermes1.dur.ac.uk) (129.234.248.1)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 14:54:00 -0000
Received: from smtphost2.dur.ac.uk (smtphost2.dur.ac.uk [129.234.252.2])
	by hermes1.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s1OErfgb024494;
	Mon, 24 Feb 2014 14:53:45 GMT
Received: from algedi.dur.ac.uk (algedi.dur.ac.uk [129.234.2.28])
	by smtphost2.dur.ac.uk (8.14.4/8.14.4) with ESMTP id s1OErbRw030998
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 24 Feb 2014 14:53:37 GMT
Received: from algedi.dur.ac.uk (localhost [127.0.0.1])
	by algedi.dur.ac.uk (8.14.5+Sun/8.11.1) with ESMTP id s1OErbtV016763;
	Mon, 24 Feb 2014 14:53:37 GMT
Received: from localhost (dcl0may@localhost)
	by algedi.dur.ac.uk (8.14.5+Sun/8.14.5/Submit) with ESMTP id
	s1OEraJg016760; Mon, 24 Feb 2014 14:53:36 GMT
Date: Mon, 24 Feb 2014 14:53:36 +0000 (GMT)
From: M A Young <m.a.young@durham.ac.uk>
To: Ian Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <1393253150.16570.79.camel@kazak.uk.xensource.com>
Message-ID: <alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<1393253150.16570.79.camel@kazak.uk.xensource.com>
User-Agent: Alpine 2.00 (GSO 1167 2008-08-23)
MIME-Version: 1.0
X-DurhamAcUk-MailScanner: Found to be clean, Found to be clean
X-DurhamAcUk-MailScanner-ID: s1OErfgb024494
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 24 Feb 2014, Ian Campbell wrote:

> On Mon, 2014-02-24 at 14:19 +0000, Ian Jackson wrote:
>> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
>> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
>> The result on Linux is that the process always deadlocks before
>> returning from this function.
>>
>> This is used by xl's console child.  So, the ultimate effect is that
>> xl with pygrub does not manage to connect to the pygrub console.
>> This beahviour was reported by Michael Young in Xen 4.4.0 RC5.
>
> "behaviour".
>
> Michael reported this earlier on -rc2 as well but it fell through the
> cracks because I failed to properly appreciate the severity. Sorry.

No, that was a different problem (you get some warnings and a messed 
up terminal session on dom0 after using pygrub). This is a new problem, 
probably introduced in rc4 (which I didn't get around to testing).

 	Michael Young

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:54:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwvB-0006ea-6Y; Mon, 24 Feb 2014 14:54:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHwv8-0006dk-Vd
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:54:15 +0000
Received: from [193.109.254.147:43771] by server-7.bemta-14.messagelabs.com id
	62/0F-23424-61D5B035; Mon, 24 Feb 2014 14:54:14 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393253653!6438015!1
X-Originating-IP: [74.125.83.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30976 invoked from network); 24 Feb 2014 14:54:13 -0000
Received: from mail-ee0-f47.google.com (HELO mail-ee0-f47.google.com)
	(74.125.83.47)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:54:13 -0000
Received: by mail-ee0-f47.google.com with SMTP id e49so1017551eek.20
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 06:53:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
	:references;
	bh=y2/5JwZk2I2M/i8dLOJcPLPys0a5bytPxMFofrlr8h8=;
	b=NUyU2c75efybD969KcGyXSggngpuFV1ggF2kS7SA/basyXoaq7zvYMPmzygio4aPuf
	vjBWRHTUx9bx0OyF8aRN1kZGH3AcEpCyE05YOuW/FTcS/JcGcaoZySnogsF99agorp6u
	MnMdnZx8ezRY+4IK7DZKFCc3/cP83jpQwaWY14qLf9JrrfteW5AqPGF110+WlLGYhC2j
	pUXatS+ZzdLzMjQQ8dQjMjhqhPrR/OrAh6hQYpb7KOKMA5ZFGiiXTvXVjFtwHY/FPN97
	1ULQNXVAbovy/jkhIIDxJvZkhloZ7VBIz3sAJwSUc387EO7eDXiSGCt4rX5ettuTBOF5
	mGCQ==
X-Gm-Message-State: ALoCoQkHyO4EA2EDrma1+ceHjmV8ALyJaQJwHOXAr7uhskB6mh0coddtsBE60yGUvTMTy6Gc9sPU
X-Received: by 10.15.34.71 with SMTP id d47mr693936eev.107.1393253594010;
	Mon, 24 Feb 2014 06:53:14 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id x6sm64689130eew.20.2014.02.24.06.53.12
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 06:53:13 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 14:52:51 +0000
Message-Id: <1393253572-7157-6-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
Cc: stefano.stabellini@citrix.com, Keir Fraser <keir@xen.org>,
	Julien Grall <julien.grall@linaro.org>, tim@xen.org,
	ian.campbell@citrix.com
Subject: [Xen-devel] [PATCH v2 5/6] xen/console: Add noreturn attribute to
	panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The panic function never returns. Without this attribute, gcc may emit
warnings in calling functions.

Cc: Keir Fraser <keir@xen.org>
Signed-off-by: Julien Grall <julien.grall@linaro.org>
---
 xen/drivers/char/console.c |    4 +++-
 xen/include/xen/lib.h      |    2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index cdf23f1..f17848b 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1042,7 +1042,7 @@ __initcall(debugtrace_init);
  * **************************************************************
  */
 
-void panic(const char *fmt, ...)
+void __attribute__((noreturn)) panic(const char *fmt, ...)
 {
     va_list args;
     unsigned long flags;
@@ -1085,6 +1085,8 @@ void panic(const char *fmt, ...)
         watchdog_disable();
         machine_restart(5000);
     }
+
+    while ( 1 );
 }
 
 void __bug(char *file, int line)
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..9c3a242 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -88,7 +88,7 @@ extern void printk(const char *format, ...)
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
 extern void panic(const char *format, ...)
-    __attribute__ ((format (printf, 1, 2)));
+    __attribute__ ((format (printf, 1, 2))) __attribute__ ((noreturn));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
 extern int printk_ratelimit(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:55:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwwb-00071n-Nu; Mon, 24 Feb 2014 14:55:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHwwa-00071V-Hv
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 14:55:44 +0000
Received: from [85.158.139.211:44439] by server-10.bemta-5.messagelabs.com id
	F8/F4-08578-F6D5B035; Mon, 24 Feb 2014 14:55:43 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393253741!5903158!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20918 invoked from network); 24 Feb 2014 14:55:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:55:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103551920"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 14:55:41 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 09:55:40 -0500
Message-ID: <1393253739.16570.83.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: M A Young <m.a.young@durham.ac.uk>
Date: Mon, 24 Feb 2014 14:55:39 +0000
In-Reply-To: <alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<1393253150.16570.79.camel@kazak.uk.xensource.com>
	<alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	Ian Jackson <ian.jackson@eu.citrix.com>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
	deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 14:53 +0000, M A Young wrote:
> On Mon, 24 Feb 2014, Ian Campbell wrote:
> 
> > On Mon, 2014-02-24 at 14:19 +0000, Ian Jackson wrote:
> >> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
> >> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
> >> The result on Linux is that the process always deadlocks before
> >> returning from this function.
> >>
> >> This is used by xl's console child.  So, the ultimate effect is that
> >> xl with pygrub does not manage to connect to the pygrub console.
> >> This beahviour was reported by Michael Young in Xen 4.4.0 RC5.
> >
> > "behaviour".
> >
> > Michael reported this earlier on -rc2 as well but it fell through the
> > cracks because I failed to properly appreciate the severity. Sorry.
> 
> No, that was a different problem (you get some warnings and a messed 
> up terminal session on dom0 after using pygrub). This is a new problem, 
> probably introduced in rc4 (which I didn't get around to testing).

OK, thanks for clarifying!

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 14:59:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 14:59:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHwzw-0007NK-E8; Mon, 24 Feb 2014 14:59:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WHwzu-0007NC-6Q
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 14:59:10 +0000
Received: from [85.158.137.68:5116] by server-9.bemta-3.messagelabs.com id
	66/E6-10184-D3E5B035; Mon, 24 Feb 2014 14:59:09 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393253946!3817149!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19303 invoked from network); 24 Feb 2014 14:59:07 -0000
Received: from unknown (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 14:59:07 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; 
   d="scan'208";a="9756778"
Received: from unknown (HELO AMSPEX01CL02.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 24 Feb 2014 14:58:07 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL02.citrite.net ([10.69.46.33]) with mapi id 14.02.0342.004;
	Mon, 24 Feb 2014 15:58:06 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Topic: [PATCH V5 net-next 2/5] xen-netback: Add support for multiple
	queues
Thread-Index: AQHPMW1vsw9WOyIBJ0y1PEV4BSyKUZrEfr1w
Date: Mon, 24 Feb 2014 14:58:05 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD02578E6@AMSPEX01CL01.citrite.net>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-3-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1393252387-17496-3-git-send-email-andrew.bennieston@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: Andrew Bennieston <andrew.bennieston@citrix.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Wei Liu <wei.liu2@citrix.com>, Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH V5 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Andrew J. Bennieston [mailto:andrew.bennieston@citrix.com]
> Sent: 24 February 2014 14:33
> To: xen-devel@lists.xenproject.org
> Cc: Ian Campbell; Wei Liu; Paul Durrant; netdev@vger.kernel.org; Andrew
> Bennieston
> Subject: [PATCH V5 net-next 2/5] xen-netback: Add support for multiple
> queues
> 
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Builds on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Writes the maximum supported number of queues into XenStore, and reads
> the values written by the frontend to determine how many queues to use.
> 
> Ring references and event channels are read from XenStore on a per-queue
> basis and rings are connected accordingly.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> ---
>  drivers/net/xen-netback/common.h    |    2 +
>  drivers/net/xen-netback/interface.c |    7 +++-
>  drivers/net/xen-netback/netback.c   |    8 ++++
>  drivers/net/xen-netback/xenbus.c    |   76 ++++++++++++++++++++++++++++++-----
>  4 files changed, 82 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h
> index 4176539..e72bf38 100644
> --- a/drivers/net/xen-netback/common.h
> +++ b/drivers/net/xen-netback/common.h
> @@ -261,4 +261,6 @@ void xenvif_carrier_on(struct xenvif *vif);
> 
>  extern bool separate_tx_rx_irq;
> 
> +extern unsigned int xenvif_max_queues;
> +
>  #endif /* __XEN_NETBACK__COMMON_H__ */
> diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c
> index 0297980..3f623b4 100644
> --- a/drivers/net/xen-netback/interface.c
> +++ b/drivers/net/xen-netback/interface.c
> @@ -381,7 +381,12 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>  	char name[IFNAMSIZ] = {};
> 
>  	snprintf(name, IFNAMSIZ - 1, "vif%u.%u", domid, handle);
> -	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup, 1);
> +	/* Allocate a netdev with the max. supported number of queues.
> +	 * When the guest selects the desired number, it will be updated
> +	 * via netif_set_real_num_tx_queues().
> +	 */
> +	dev = alloc_netdev_mq(sizeof(struct xenvif), name, ether_setup,
> +			      xenvif_max_queues);
>  	if (dev == NULL) {
>  		pr_warn("Could not allocate netdev for %s\n", name);
>  		return ERR_PTR(-ENOMEM);
> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index a32abd6..9849b63 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -54,6 +54,11 @@
>  bool separate_tx_rx_irq = 1;
>  module_param(separate_tx_rx_irq, bool, 0644);
> 
> +unsigned int xenvif_max_queues;
> +module_param(xenvif_max_queues, uint, 0644);
> +MODULE_PARM_DESC(xenvif_max_queues,
> +		"Maximum number of queues per virtual interface");
> +
>  /*
>   * This is the maximum slots a skb can have. If a guest sends a skb
>   * which exceeds this limit it is considered malicious.
> @@ -1585,6 +1590,9 @@ static int __init netback_init(void)
>  	if (!xen_domain())
>  		return -ENODEV;
> 
> +	/* Allow as many queues as there are CPUs, by default */
> +	xenvif_max_queues = num_online_cpus();
> +
>  	if (fatal_skb_slots < XEN_NETBK_LEGACY_SLOTS_MAX) {
>  		pr_info("fatal_skb_slots too small (%d), bump it to XEN_NETBK_LEGACY_SLOTS_MAX (%d)\n",
>  			fatal_skb_slots, XEN_NETBK_LEGACY_SLOTS_MAX);
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index f23ea0a..c1ae148 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -20,6 +20,7 @@
> 
>  #include "common.h"
>  #include <linux/vmalloc.h>
> +#include <linux/rtnetlink.h>
> 
>  struct backend_info {
>  	struct xenbus_device *dev;
> @@ -159,6 +160,12 @@ static int netback_probe(struct xenbus_device *dev,
>  	if (err)
>  		pr_debug("Error writing feature-split-event-channels\n");
> 
> +	/* Multi-queue support: This is an optional feature. */
> +	err = xenbus_printf(XBT_NIL, dev->nodename,
> +			"multi-queue-max-queues", "%u", xenvif_max_queues);
> +	if (err)
> +		pr_debug("Error writing multi-queue-max-queues\n");
> +
>  	err = xenbus_switch_state(dev, XenbusStateInitWait);
>  	if (err)
>  		goto fail;
> @@ -490,6 +497,23 @@ static void connect(struct backend_info *be)
>  	unsigned long credit_bytes, credit_usec;
>  	unsigned int queue_index;
>  	struct xenvif_queue *queue;
> +	unsigned int requested_num_queues;
> +
> +	/* Check whether the frontend requested multiple queues
> +	 * and read the number requested.
> +	 */
> +	err = xenbus_scanf(XBT_NIL, dev->otherend,
> +			"multi-queue-num-queues",
> +			"%u", &requested_num_queues);
> +	if (err < 0) {
> +		requested_num_queues = 1; /* Fall back to single queue */
> +	} else if (requested_num_queues > xenvif_max_queues) {
> +		/* buggy or malicious guest */
> +		xenbus_dev_fatal(dev, err,
> +			"guest requested %u queues, exceeding the maximum of %u.",
> +			requested_num_queues, xenvif_max_queues);
> +		return;
> +	}
> 
>  	err = xen_net_read_mac(dev, be->vif->fe_dev_addr);
>  	if (err) {
> @@ -500,9 +524,13 @@ static void connect(struct backend_info *be)
>  	xen_net_read_rate(dev, &credit_bytes, &credit_usec);
>  	read_xenbus_vif_flags(be);
> 
> -	be->vif->num_queues = 1;
> +	/* Use the number of queues requested by the frontend */
> +	be->vif->num_queues = requested_num_queues;
>  	be->vif->queues = vzalloc(be->vif->num_queues *
>  			sizeof(struct xenvif_queue));
> +	rtnl_lock();
> +	netif_set_real_num_tx_queues(be->vif->dev, be->vif->num_queues);
> +	rtnl_unlock();
> 
>  	for (queue_index = 0; queue_index < be->vif->num_queues; ++queue_index) {
>  		queue = &be->vif->queues[queue_index];
> @@ -547,29 +575,52 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  	unsigned long tx_ring_ref, rx_ring_ref;
>  	unsigned int tx_evtchn, rx_evtchn;
>  	int err;
> +	char *xspath;
> +	size_t xspathsize;
> +	const size_t xenstore_path_ext_size = 11; /* sufficient for "/queue-NNN" */
> +
> +	/* If the frontend requested 1 queue, or we have fallen back
> +	 * to single queue due to lack of frontend support for multi-
> +	 * queue, expect the remaining XenStore keys in the toplevel
> +	 * directory. Otherwise, expect them in a subdirectory called
> +	 * queue-N.
> +	 */
> +	if (queue->vif->num_queues == 1) {
> +		xspath = (char *)dev->otherend;
> +	} else {
> +		xspathsize = strlen(dev->otherend) + xenstore_path_ext_size;
> +		xspath = kzalloc(xspathsize, GFP_KERNEL);
> +		if (!xspath) {
> +			xenbus_dev_fatal(dev, -ENOMEM,
> +					"reading ring references");
> +			return -ENOMEM;
> +		}
> +		snprintf(xspath, xspathsize, "%s/queue-%u", dev->otherend,
> +				 queue->id);
> +	}
> 
> -	err = xenbus_gather(XBT_NIL, dev->otherend,
> +	err = xenbus_gather(XBT_NIL, xspath,
>  			    "tx-ring-ref", "%lu", &tx_ring_ref,
>  			    "rx-ring-ref", "%lu", &rx_ring_ref, NULL);
>  	if (err) {
>  		xenbus_dev_fatal(dev, err,
>  				 "reading %s/ring-ref",
> -				 dev->otherend);
> -		return err;
> +				 xspath);
> +		goto err;
>  	}
> 
>  	/* Try split event channels first, then single event channel. */
> -	err = xenbus_gather(XBT_NIL, dev->otherend,
> +	err = xenbus_gather(XBT_NIL, xspath,
>  			    "event-channel-tx", "%u", &tx_evtchn,
>  			    "event-channel-rx", "%u", &rx_evtchn, NULL);
>  	if (err < 0) {
> -		err = xenbus_scanf(XBT_NIL, dev->otherend,
> +		err = xenbus_scanf(XBT_NIL, xspath,
>  				   "event-channel", "%u", &tx_evtchn);
>  		if (err < 0) {
>  			xenbus_dev_fatal(dev, err,
>  					 "reading %s/event-channel(-tx/rx)",
> -					 dev->otherend);
> -			return err;
> +					 xspath);
> +			goto err;
>  		}
>  		rx_evtchn = tx_evtchn;
>  	}
> @@ -582,10 +633,15 @@ static int connect_rings(struct backend_info *be, struct xenvif_queue *queue)
>  				 "mapping shared-frames %lu/%lu port tx %u rx %u",
>  				 tx_ring_ref, rx_ring_ref,
>  				 tx_evtchn, rx_evtchn);
> -		return err;
> +		goto err;
>  	}
> 
> -	return 0;
> +	err = 0;
> +err: /* Regular return falls through with err == 0 */
> +	if (xspath != dev->otherend)
> +		kfree(xspath);
> +
> +	return err;
>  }
> 
>  static int read_xenbus_vif_flags(struct backend_info *be)
> --
> 1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:00:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:00:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx0z-0007VT-Ud; Mon, 24 Feb 2014 15:00:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx0y-0007VK-FX
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:00:16 +0000
Received: from [85.158.137.68:33801] by server-11.bemta-3.messagelabs.com id
	10/C8-04255-F7E5B035; Mon, 24 Feb 2014 15:00:15 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393254013!2323351!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25368 invoked from network); 24 Feb 2014 15:00:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:00:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105207234"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:00:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:00:12 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHwpn-00061T-OU; Mon, 24 Feb 2014 14:48:43 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 14:48:42 +0000
Message-ID: <1393253322-31437-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/mce: Reduce boot-time logspam
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

When booting with "no-mce", the user does not need to be told that "MCE
support [was] disabled by bootparam" for each cpu.  Furthermore, a file:line
reference is unnecessary.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/cpu/mcheck/mce.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index b375ef7..d2df334 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -730,7 +730,8 @@ void mcheck_init(struct cpuinfo_x86 *c, bool_t bsp)
     enum mcheck_type inited = mcheck_none;
 
     if (mce_disabled == 1) {
-        dprintk(XENLOG_INFO, "MCE support disabled by bootparam\n");
+        if (bsp)
+            printk(XENLOG_INFO "MCE support disabled by bootparam\n");
         return;
     }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2H-0007ec-FD; Mon, 24 Feb 2014 15:01:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2G-0007dj-5Y
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:36 +0000
Received: from [85.158.143.35:13403] by server-2.bemta-4.messagelabs.com id
	B9/99-10891-FCE5B035; Mon, 24 Feb 2014 15:01:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393254092!7912858!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16114 invoked from network); 24 Feb 2014 15:01:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103554180"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-AM; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:30 +0000
Message-ID: <1393254090-5081-5-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH v3 5/5] xen/x86: Identify reset_stack_and_jump()
	as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

reset_stack_and_jump() is actually a macro, but can effectively become noreturn
by ending its expansion with unreachable().

Propagate the 'noreturn-ness' up through the direct and indirect callers.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/domain.c             |    6 ++----
 xen/arch/x86/hvm/svm/svm.c        |    2 +-
 xen/arch/x86/setup.c              |    2 +-
 xen/include/asm-x86/current.h     |   11 +++++++----
 xen/include/asm-x86/domain.h      |    2 +-
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 +-
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..c42a079 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -133,12 +133,12 @@ void startup_cpu_idle_loop(void)
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_idle_domain(struct vcpu *v)
+static void noreturn continue_idle_domain(struct vcpu *v)
 {
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_nonidle_domain(struct vcpu *v)
+static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
     mark_regs_dirty(guest_cpu_user_regs());
@@ -1521,13 +1521,11 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     update_vcpu_system_time(next);
 
     schedule_tail(next);
-    BUG();
 }
 
 void continue_running(struct vcpu *same)
 {
     schedule_tail(same);
-    BUG();
 }
 
 int __sync_local_execstate(void)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..a1d9320 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -911,7 +911,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
         wrmsrl(MSR_TSC_AUX, hvm_msr_tsc_aux(v));
 }
 
-static void svm_do_resume(struct vcpu *v) 
+static void noreturn svm_do_resume(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     bool_t debug_state = v->domain->debugger_attached;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..addd071 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -541,7 +541,7 @@ static char * __init cmdline_cook(char *p, char *loader_name)
     return p;
 }
 
-void __init __start_xen(unsigned long mbi_p)
+void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
     char *cmdline, *kextra, *loader;
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index c2792ce..4d1f20e 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -59,10 +59,13 @@ static inline struct cpu_info *get_cpu_info(void)
     ((sp & (~(STACK_SIZE-1))) +                 \
      (STACK_SIZE - sizeof(struct cpu_info) - sizeof(unsigned long)))
 
-#define reset_stack_and_jump(__fn)              \
-    __asm__ __volatile__ (                      \
-        "mov %0,%%"__OP"sp; jmp %c1"            \
-        : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" )
+#define reset_stack_and_jump(__fn)                                      \
+    ({                                                                  \
+        __asm__ __volatile__ (                                          \
+            "mov %0,%%"__OP"sp; jmp %c1"                                \
+            : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" );   \
+        unreachable();                                                  \
+    })
 
 #define schedule_tail(vcpu) (((vcpu)->arch.schedule_tail)(vcpu))
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4ff89f0..caaa7bc 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -395,7 +395,7 @@ struct arch_vcpu
 
     unsigned long      flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
+    void (*schedule_tail) (struct vcpu *) noreturn;
 
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 6f6b672..755189d 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -91,7 +91,7 @@ typedef enum {
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
-void vmx_do_resume(struct vcpu *);
+void vmx_do_resume(struct vcpu *) noreturn;
 void vmx_vlapic_msr_changed(struct vcpu *v);
 void vmx_realmode(struct cpu_user_regs *regs);
 void vmx_update_debug_state(struct vcpu *v);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2G-0007dw-JS; Mon, 24 Feb 2014 15:01:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2F-0007db-4Z
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:35 +0000
Received: from [85.158.143.35:13295] by server-3.bemta-4.messagelabs.com id
	51/7E-11539-ECE5B035; Mon, 24 Feb 2014 15:01:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393254092!7906606!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9300 invoked from network); 24 Feb 2014 15:01:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105208003"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-6B; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:27 +0000
Message-ID: <1393254090-5081-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 2/5] x86/crash: Fix up declaration of
	do_nmi_crash()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

... so it can correctly be annotated as noreturn.  Move the declaration of
nmi_crash() to be effectively private in crash.c.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/crash.c            |    3 ++-
 xen/include/asm-x86/processor.h |    2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 01fd906..ec586bd 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -36,7 +36,7 @@ static unsigned int crashing_cpu;
 static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
 /* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
-void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
+void do_nmi_crash(struct cpu_user_regs *regs)
 {
     int cpu = smp_processor_id();
 
@@ -113,6 +113,7 @@ void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
         halt();
 }
 
+void nmi_crash(void);
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c120460..9696acc 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -529,7 +529,6 @@ void do_ ## _name(struct cpu_user_regs *regs)
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
-DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -550,6 +549,7 @@ DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 
 void trap_nop(void);
 void enable_nmis(void);
+void do_nmi_crash(struct cpu_user_regs *regs) noreturn;
 
 void syscall_enter(void);
 void sysenter_entry(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2G-0007e4-Ul; Mon, 24 Feb 2014 15:01:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2F-0007di-VH
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:36 +0000
Received: from [85.158.143.35:20026] by server-3.bemta-4.messagelabs.com id
	C3/7E-11539-ECE5B035; Mon, 24 Feb 2014 15:01:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393254092!7912858!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16034 invoked from network); 24 Feb 2014 15:01:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103554179"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-8e; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:29 +0000
Message-ID: <1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 4/5] xen: Misc cleanup as a result of the
	previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This includes:
 * A stale comment in sh_skip_sync()
 * A dead forever loop in __bug()
 * A prototype for machine_power_off(), which is unimplemented in any architecture
 * Replacing a for(;;); loop with unreachable()

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/efi/boot.c         |    2 +-
 xen/arch/x86/mm/shadow/common.c |    1 -
 xen/drivers/char/console.c      |    1 -
 xen/include/xen/shutdown.h      |    1 -
 4 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index a26e0af..62c4812 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -201,7 +201,7 @@ static void __init noreturn blexit(const CHAR16 *str)
         efi_bs->FreePages(xsm.addr, PFN_UP(xsm.size));
 
     efi_bs->Exit(efi_ih, EFI_SUCCESS, 0, NULL);
-    for( ; ; ); /* not reached */
+    unreachable(); /* not reached */
 }
 
 /* generic routine for printing error messages */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 11c6b62..b400ccb 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
     SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
                  mfn_x(gl1mfn));
     BUG();
-    return 0; /* BUG() is no longer __attribute__((noreturn)). */
 }
 
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..7d4383c 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1089,7 +1089,6 @@ void __bug(char *file, int line)
     printk("Xen BUG at %s:%d\n", file, line);
     dump_execution_state();
     panic("Xen BUG at %s:%d", file, line);
-    for ( ; ; ) ;
 }
 
 void __warn(char *file, int line)
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2b82495..7e449a5 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -10,6 +10,5 @@ void dom0_shutdown(u8 reason) noreturn;
 
 void machine_restart(unsigned int delay_millisecs) noreturn;
 void machine_halt(void) noreturn;
-void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2J-0007fV-9T; Mon, 24 Feb 2014 15:01:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2H-0007eK-MC
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:37 +0000
Received: from [85.158.143.35:13481] by server-3.bemta-4.messagelabs.com id
	8F/7E-11539-0DE5B035; Mon, 24 Feb 2014 15:01:36 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393254092!7912858!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16188 invoked from network); 24 Feb 2014 15:01:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103554181"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-6t; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:28 +0000
Message-ID: <1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
	functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On an x86 build (GCC Debian 4.7.2-5), this reduces the .text size by exactly
4K, and .init.text by 1751 bytes.

Experimentally, even in a non-debug build, GCC uses `call` rather than `jmp`
so there should be no impact on any stack trace generation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v2:
 * Remove redundant uses with publicly declared functions.

I have compile tested arm32 and arm64
---
 xen/arch/arm/shutdown.c         |    2 +-
 xen/arch/x86/cpu/mcheck/mce.h   |    2 +-
 xen/arch/x86/shutdown.c         |    2 +-
 xen/common/shutdown.c           |    2 +-
 xen/include/asm-arm/smp.h       |    2 +-
 xen/include/asm-x86/processor.h |    2 +-
 xen/include/xen/lib.h           |    2 +-
 xen/include/xen/shutdown.h      |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 767cc12..adc0529 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -11,7 +11,7 @@ static void raw_machine_reset(void)
     platform_reset();
 }
 
-static void halt_this_cpu(void *arg)
+static void noreturn halt_this_cpu(void *arg)
 {
     stop_cpu();
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index cbd123d..db791ff 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
 struct mc_info *x86_mcinfo_getptr(void);
-void mc_panic(char *s);
+void mc_panic(char *s) noreturn;
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
 			 uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6143c40..827515d 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -452,7 +452,7 @@ static int __init reboot_init(void)
 }
 __initcall(reboot_init);
 
-static void __machine_restart(void *pdelay)
+static void noreturn __machine_restart(void *pdelay)
 {
     machine_restart(*(unsigned int *)pdelay);
 }
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 9bccd34..fadb69b 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -17,7 +17,7 @@
 bool_t __read_mostly opt_noreboot;
 boolean_param("noreboot", opt_noreboot);
 
-static void maybe_reboot(void)
+static void noreturn maybe_reboot(void)
 {
     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index a1de03c..897d99d 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
 #define raw_smp_processor_id() (get_processor_id())
 
-extern void stop_cpu(void);
+extern void stop_cpu(void) noreturn;
 
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 9696acc..eeea44a 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -514,7 +514,7 @@ void show_registers(struct cpu_user_regs *regs);
 void show_execution_state(struct cpu_user_regs *regs);
 #define dump_execution_state() run_in_exception_handler(show_execution_state)
 void show_page_walk(unsigned long addr);
-void fatal_trap(int trapnr, struct cpu_user_regs *regs);
+void fatal_trap(int trapnr, struct cpu_user_regs *regs) noreturn;
 
 void compat_show_guest_stack(struct vcpu *, struct cpu_user_regs *, int lines);
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 814fcb4..c8b35f1 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -87,7 +87,7 @@ extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
-extern void panic(const char *format, ...)
+extern void panic(const char *format, ...) noreturn
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2bee748..2b82495 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -1,13 +1,15 @@
 #ifndef __XEN_SHUTDOWN_H__
 #define __XEN_SHUTDOWN_H__
 
+#include <xen/compiler.h>
+
 /* opt_noreboot: If true, machine will need manual reset on error. */
 extern bool_t opt_noreboot;
 
-void dom0_shutdown(u8 reason);
+void dom0_shutdown(u8 reason) noreturn;
 
-void machine_restart(unsigned int delay_millisecs);
-void machine_halt(void);
+void machine_restart(unsigned int delay_millisecs) noreturn;
+void machine_halt(void) noreturn;
 void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2H-0007f0-SU; Mon, 24 Feb 2014 15:01:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2G-0007dq-Pu
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:37 +0000
Received: from [85.158.143.35:13380] by server-3.bemta-4.messagelabs.com id
	F6/7E-11539-FCE5B035; Mon, 24 Feb 2014 15:01:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393254092!7906606!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9422 invoked from network); 24 Feb 2014 15:01:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105208004"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-5f; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:26 +0000
Message-ID: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 1/5] xen/compiler: Replace opencoded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make a formal define for noreturn in compiler.h, and fix up opencoded uses of
__attribute__((noreturn)).  This includes removing redundant uses with
function definitions which have a public declaration.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v3:
 * Leave do_nmi_crash() till the subsequent patch
 * Leave the stale comment in sh_skip_sync() till the final patch
---
 xen/arch/arm/early_printk.c        |    2 +-
 xen/arch/x86/efi/boot.c            |    4 ++--
 xen/arch/x86/shutdown.c            |    2 +-
 xen/include/asm-arm/early_printk.h |    4 ++--
 xen/include/xen/compiler.h         |    2 ++
 xen/include/xen/lib.h              |    2 +-
 xen/include/xen/sched.h            |    4 ++--
 7 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..2870a30 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -52,7 +52,7 @@ void __init early_printk(const char *fmt, ...)
     va_end(args);
 }
 
-void __attribute__((noreturn)) __init
+void __init
 early_panic(const char *fmt, ...)
 {
     va_list args;
diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index 0dd935c..a26e0af 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -183,7 +183,7 @@ static bool_t __init match_guid(const EFI_GUID *guid1, const EFI_GUID *guid2)
            !memcmp(guid1->Data4, guid2->Data4, sizeof(guid1->Data4));
 }
 
-static void __init __attribute__((__noreturn__)) blexit(const CHAR16 *str)
+static void __init noreturn blexit(const CHAR16 *str)
 {
     if ( str )
         PrintStr((CHAR16 *)str);
@@ -762,7 +762,7 @@ static void __init relocate_trampoline(unsigned long phys)
         *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
 }
 
-void EFIAPI __init __attribute__((__noreturn__))
+void EFIAPI __init noreturn
 efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     static EFI_GUID __initdata loaded_image_guid = LOADED_IMAGE_PROTOCOL;
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6eba271..6143c40 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -85,7 +85,7 @@ static inline void kb_wait(void)
             break;
 }
 
-static void __attribute__((noreturn)) __machine_halt(void *unused)
+static void noreturn __machine_halt(void *unused)
 {
     local_irq_disable();
     for ( ; ; )
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..3cb8dab 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -26,7 +26,7 @@
 
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
+void early_panic(const char *fmt, ...) noreturn
     __attribute__((format (printf, 1, 2)));
 
 #else
@@ -35,7 +35,7 @@ static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
 
-static inline void  __attribute__((noreturn))
+static inline void noreturn
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 7d6805c..c80398d 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -14,6 +14,8 @@
 #define always_inline __inline__ __attribute__ ((always_inline))
 #define noinline      __attribute__((noinline))
 
+#define noreturn      __attribute__((noreturn))
+
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
 #else
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..814fcb4 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -8,7 +8,7 @@
 #include <xen/string.h>
 #include <asm/bug.h>
 
-void __bug(char *file, int line) __attribute__((noreturn));
+void __bug(char *file, int line) noreturn;
 void __warn(char *file, int line);
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..7910079 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -580,7 +580,7 @@ void __domain_crash(struct domain *d);
  * Mark current domain as crashed and synchronously deschedule from the local
  * processor. This function never returns.
  */
-void __domain_crash_synchronous(void) __attribute__((noreturn));
+void __domain_crash_synchronous(void) noreturn;
 #define domain_crash_synchronous() do {                                   \
     printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__);  \
     __domain_crash_synchronous();                                         \
@@ -591,7 +591,7 @@ void __domain_crash_synchronous(void) __attribute__((noreturn));
  * the crash occured.  If addr is 0, look up address from last extable
  * redirection.
  */
-void asm_domain_crash_synchronous(unsigned long addr) __attribute__((noreturn));
+void asm_domain_crash_synchronous(unsigned long addr) noreturn;
 
 #define set_current_state(_s) do { current->state = (_s); } while (0)
 void scheduler_init(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2H-0007ec-FD; Mon, 24 Feb 2014 15:01:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2G-0007dj-5Y
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:36 +0000
Received: from [85.158.143.35:13403] by server-2.bemta-4.messagelabs.com id
	B9/99-10891-FCE5B035; Mon, 24 Feb 2014 15:01:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393254092!7912858!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16114 invoked from network); 24 Feb 2014 15:01:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103554180"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-AM; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:30 +0000
Message-ID: <1393254090-5081-5-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH v3 5/5] xen/x86: Identify reset_stack_and_jump()
	as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

reset_stack_and_jump() is actually a macro, but can effectively become noreturn
by placing an unreachable() call at the end of its expansion.

Propagate the 'noreturn-ness' up through the direct and indirect callers.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/domain.c             |    6 ++----
 xen/arch/x86/hvm/svm/svm.c        |    2 +-
 xen/arch/x86/setup.c              |    2 +-
 xen/include/asm-x86/current.h     |   11 +++++++----
 xen/include/asm-x86/domain.h      |    2 +-
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 +-
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..c42a079 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -133,12 +133,12 @@ void startup_cpu_idle_loop(void)
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_idle_domain(struct vcpu *v)
+static void noreturn continue_idle_domain(struct vcpu *v)
 {
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_nonidle_domain(struct vcpu *v)
+static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
     mark_regs_dirty(guest_cpu_user_regs());
@@ -1521,13 +1521,11 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     update_vcpu_system_time(next);
 
     schedule_tail(next);
-    BUG();
 }
 
 void continue_running(struct vcpu *same)
 {
     schedule_tail(same);
-    BUG();
 }
 
 int __sync_local_execstate(void)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..a1d9320 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -911,7 +911,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
         wrmsrl(MSR_TSC_AUX, hvm_msr_tsc_aux(v));
 }
 
-static void svm_do_resume(struct vcpu *v) 
+static void noreturn svm_do_resume(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     bool_t debug_state = v->domain->debugger_attached;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..addd071 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -541,7 +541,7 @@ static char * __init cmdline_cook(char *p, char *loader_name)
     return p;
 }
 
-void __init __start_xen(unsigned long mbi_p)
+void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
     char *cmdline, *kextra, *loader;
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index c2792ce..4d1f20e 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -59,10 +59,13 @@ static inline struct cpu_info *get_cpu_info(void)
     ((sp & (~(STACK_SIZE-1))) +                 \
      (STACK_SIZE - sizeof(struct cpu_info) - sizeof(unsigned long)))
 
-#define reset_stack_and_jump(__fn)              \
-    __asm__ __volatile__ (                      \
-        "mov %0,%%"__OP"sp; jmp %c1"            \
-        : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" )
+#define reset_stack_and_jump(__fn)                                      \
+    ({                                                                  \
+        __asm__ __volatile__ (                                          \
+            "mov %0,%%"__OP"sp; jmp %c1"                                \
+            : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" );   \
+        unreachable();                                                  \
+    })
 
 #define schedule_tail(vcpu) (((vcpu)->arch.schedule_tail)(vcpu))
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4ff89f0..caaa7bc 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -395,7 +395,7 @@ struct arch_vcpu
 
     unsigned long      flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
+    void (*schedule_tail) (struct vcpu *) noreturn;
 
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 6f6b672..755189d 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -91,7 +91,7 @@ typedef enum {
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
-void vmx_do_resume(struct vcpu *);
+void vmx_do_resume(struct vcpu *) noreturn;
 void vmx_vlapic_msr_changed(struct vcpu *v);
 void vmx_realmode(struct cpu_user_regs *regs);
 void vmx_update_debug_state(struct vcpu *v);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2H-0007f0-SU; Mon, 24 Feb 2014 15:01:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2G-0007dq-Pu
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:37 +0000
Received: from [85.158.143.35:13380] by server-3.bemta-4.messagelabs.com id
	F6/7E-11539-FCE5B035; Mon, 24 Feb 2014 15:01:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393254092!7906606!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9422 invoked from network); 24 Feb 2014 15:01:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105208004"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-5f; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:26 +0000
Message-ID: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 1/5] xen/compiler: Replace open-coded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make a formal define for noreturn in compiler.h, and fix up open-coded uses of
__attribute__((noreturn)).  This includes removing redundant uses from
function definitions whose public declarations already carry the attribute.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v3:
 * Leave do_nmi_crash() till the subsequent patch
 * Leave the stale comment in sh_skip_sync() till the final patch
---
 xen/arch/arm/early_printk.c        |    2 +-
 xen/arch/x86/efi/boot.c            |    4 ++--
 xen/arch/x86/shutdown.c            |    2 +-
 xen/include/asm-arm/early_printk.h |    4 ++--
 xen/include/xen/compiler.h         |    2 ++
 xen/include/xen/lib.h              |    2 +-
 xen/include/xen/sched.h            |    4 ++--
 7 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..2870a30 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -52,7 +52,7 @@ void __init early_printk(const char *fmt, ...)
     va_end(args);
 }
 
-void __attribute__((noreturn)) __init
+void __init
 early_panic(const char *fmt, ...)
 {
     va_list args;
diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index 0dd935c..a26e0af 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -183,7 +183,7 @@ static bool_t __init match_guid(const EFI_GUID *guid1, const EFI_GUID *guid2)
            !memcmp(guid1->Data4, guid2->Data4, sizeof(guid1->Data4));
 }
 
-static void __init __attribute__((__noreturn__)) blexit(const CHAR16 *str)
+static void __init noreturn blexit(const CHAR16 *str)
 {
     if ( str )
         PrintStr((CHAR16 *)str);
@@ -762,7 +762,7 @@ static void __init relocate_trampoline(unsigned long phys)
         *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
 }
 
-void EFIAPI __init __attribute__((__noreturn__))
+void EFIAPI __init noreturn
 efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     static EFI_GUID __initdata loaded_image_guid = LOADED_IMAGE_PROTOCOL;
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6eba271..6143c40 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -85,7 +85,7 @@ static inline void kb_wait(void)
             break;
 }
 
-static void __attribute__((noreturn)) __machine_halt(void *unused)
+static void noreturn __machine_halt(void *unused)
 {
     local_irq_disable();
     for ( ; ; )
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..3cb8dab 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -26,7 +26,7 @@
 
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
+void early_panic(const char *fmt, ...) noreturn
     __attribute__((format (printf, 1, 2)));
 
 #else
@@ -35,7 +35,7 @@ static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
 
-static inline void  __attribute__((noreturn))
+static inline void noreturn
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 7d6805c..c80398d 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -14,6 +14,8 @@
 #define always_inline __inline__ __attribute__ ((always_inline))
 #define noinline      __attribute__((noinline))
 
+#define noreturn      __attribute__((noreturn))
+
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
 #else
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..814fcb4 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -8,7 +8,7 @@
 #include <xen/string.h>
 #include <asm/bug.h>
 
-void __bug(char *file, int line) __attribute__((noreturn));
+void __bug(char *file, int line) noreturn;
 void __warn(char *file, int line);
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..7910079 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -580,7 +580,7 @@ void __domain_crash(struct domain *d);
  * Mark current domain as crashed and synchronously deschedule from the local
  * processor. This function never returns.
  */
-void __domain_crash_synchronous(void) __attribute__((noreturn));
+void __domain_crash_synchronous(void) noreturn;
 #define domain_crash_synchronous() do {                                   \
     printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__);  \
     __domain_crash_synchronous();                                         \
@@ -591,7 +591,7 @@ void __domain_crash_synchronous(void) __attribute__((noreturn));
  * the crash occured.  If addr is 0, look up address from last extable
  * redirection.
  */
-void asm_domain_crash_synchronous(unsigned long addr) __attribute__((noreturn));
+void asm_domain_crash_synchronous(unsigned long addr) noreturn;
 
 #define set_current_state(_s) do { current->state = (_s); } while (0)
 void scheduler_init(void);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2G-0007dw-JS; Mon, 24 Feb 2014 15:01:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2F-0007db-4Z
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:35 +0000
Received: from [85.158.143.35:13295] by server-3.bemta-4.messagelabs.com id
	51/7E-11539-ECE5B035; Mon, 24 Feb 2014 15:01:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393254092!7906606!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9300 invoked from network); 24 Feb 2014 15:01:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105208003"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-6B; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:27 +0000
Message-ID: <1393254090-5081-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 2/5] x86/crash: Fix up declaration of
	do_nmi_crash()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

... so it can correctly be annotated as noreturn.  Move the declaration of
nmi_crash() to be effectively private to crash.c.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/crash.c            |    3 ++-
 xen/include/asm-x86/processor.h |    2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 01fd906..ec586bd 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -36,7 +36,7 @@ static unsigned int crashing_cpu;
 static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
 /* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
-void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
+void do_nmi_crash(struct cpu_user_regs *regs)
 {
     int cpu = smp_processor_id();
 
@@ -113,6 +113,7 @@ void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
         halt();
 }
 
+void nmi_crash(void);
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c120460..9696acc 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -529,7 +529,6 @@ void do_ ## _name(struct cpu_user_regs *regs)
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
-DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -550,6 +549,7 @@ DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 
 void trap_nop(void);
 void enable_nmis(void);
+void do_nmi_crash(struct cpu_user_regs *regs) noreturn;
 
 void syscall_enter(void);
 void sysenter_entry(void);
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2G-0007e4-Ul; Mon, 24 Feb 2014 15:01:36 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2F-0007di-VH
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:36 +0000
Received: from [85.158.143.35:20026] by server-3.bemta-4.messagelabs.com id
	C3/7E-11539-ECE5B035; Mon, 24 Feb 2014 15:01:34 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393254092!7912858!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16034 invoked from network); 24 Feb 2014 15:01:34 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103554179"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-8e; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:29 +0000
Message-ID: <1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 4/5] xen: Misc cleanup as a result of the
	previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This includes:
 * Removing a stale comment in sh_skip_sync()
 * Removing a dead infinite loop in __bug()
 * Removing the prototype for machine_power_off(), which is not implemented
   by any architecture
 * Replacing a for(;;); loop with unreachable()

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/efi/boot.c         |    2 +-
 xen/arch/x86/mm/shadow/common.c |    1 -
 xen/drivers/char/console.c      |    1 -
 xen/include/xen/shutdown.h      |    1 -
 4 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index a26e0af..62c4812 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -201,7 +201,7 @@ static void __init noreturn blexit(const CHAR16 *str)
         efi_bs->FreePages(xsm.addr, PFN_UP(xsm.size));
 
     efi_bs->Exit(efi_ih, EFI_SUCCESS, 0, NULL);
-    for( ; ; ); /* not reached */
+    unreachable(); /* not reached */
 }
 
 /* generic routine for printing error messages */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 11c6b62..b400ccb 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
     SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
                  mfn_x(gl1mfn));
     BUG();
-    return 0; /* BUG() is no longer __attribute__((noreturn)). */
 }
 
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..7d4383c 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1089,7 +1089,6 @@ void __bug(char *file, int line)
     printk("Xen BUG at %s:%d\n", file, line);
     dump_execution_state();
     panic("Xen BUG at %s:%d", file, line);
-    for ( ; ; ) ;
 }
 
 void __warn(char *file, int line)
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2b82495..7e449a5 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -10,6 +10,5 @@ void dom0_shutdown(u8 reason) noreturn;
 
 void machine_restart(unsigned int delay_millisecs) noreturn;
 void machine_halt(void) noreturn;
-void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Mon Feb 24 15:01:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx2J-0007fV-9T; Mon, 24 Feb 2014 15:01:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHx2H-0007eK-MC
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:01:37 +0000
Received: from [85.158.143.35:13481] by server-3.bemta-4.messagelabs.com id
	8F/7E-11539-0DE5B035; Mon, 24 Feb 2014 15:01:36 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393254092!7912858!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16188 invoked from network); 24 Feb 2014 15:01:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:01:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103554181"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:01:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:01:31 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WHx2B-0006J2-6t; Mon, 24 Feb 2014 15:01:31 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 15:01:28 +0000
Message-ID: <1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
	functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On an x86 build (GCC Debian 4.7.2-5), this reduces the .text size by exactly
4K, and .init.text by 1751 bytes.

Experimentally, even in a non-debug build, GCC uses `call` rather than `jmp`,
so there should be no impact on stack trace generation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v2:
 * Remove redundant uses with publicly declared functions.

I have compile-tested arm32 and arm64.
---
 xen/arch/arm/shutdown.c         |    2 +-
 xen/arch/x86/cpu/mcheck/mce.h   |    2 +-
 xen/arch/x86/shutdown.c         |    2 +-
 xen/common/shutdown.c           |    2 +-
 xen/include/asm-arm/smp.h       |    2 +-
 xen/include/asm-x86/processor.h |    2 +-
 xen/include/xen/lib.h           |    2 +-
 xen/include/xen/shutdown.h      |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 767cc12..adc0529 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -11,7 +11,7 @@ static void raw_machine_reset(void)
     platform_reset();
 }
 
-static void halt_this_cpu(void *arg)
+static void noreturn halt_this_cpu(void *arg)
 {
     stop_cpu();
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index cbd123d..db791ff 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
 struct mc_info *x86_mcinfo_getptr(void);
-void mc_panic(char *s);
+void mc_panic(char *s) noreturn;
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
 			 uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6143c40..827515d 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -452,7 +452,7 @@ static int __init reboot_init(void)
 }
 __initcall(reboot_init);
 
-static void __machine_restart(void *pdelay)
+static void noreturn __machine_restart(void *pdelay)
 {
     machine_restart(*(unsigned int *)pdelay);
 }
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 9bccd34..fadb69b 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -17,7 +17,7 @@
 bool_t __read_mostly opt_noreboot;
 boolean_param("noreboot", opt_noreboot);
 
-static void maybe_reboot(void)
+static void noreturn maybe_reboot(void)
 {
     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index a1de03c..897d99d 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
 #define raw_smp_processor_id() (get_processor_id())
 
-extern void stop_cpu(void);
+extern void stop_cpu(void) noreturn;
 
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 9696acc..eeea44a 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -514,7 +514,7 @@ void show_registers(struct cpu_user_regs *regs);
 void show_execution_state(struct cpu_user_regs *regs);
 #define dump_execution_state() run_in_exception_handler(show_execution_state)
 void show_page_walk(unsigned long addr);
-void fatal_trap(int trapnr, struct cpu_user_regs *regs);
+void fatal_trap(int trapnr, struct cpu_user_regs *regs) noreturn;
 
 void compat_show_guest_stack(struct vcpu *, struct cpu_user_regs *, int lines);
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 814fcb4..c8b35f1 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -87,7 +87,7 @@ extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
-extern void panic(const char *format, ...)
+extern void panic(const char *format, ...) noreturn
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2bee748..2b82495 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -1,13 +1,15 @@
 #ifndef __XEN_SHUTDOWN_H__
 #define __XEN_SHUTDOWN_H__
 
+#include <xen/compiler.h>
+
 /* opt_noreboot: If true, machine will need manual reset on error. */
 extern bool_t opt_noreboot;
 
-void dom0_shutdown(u8 reason);
+void dom0_shutdown(u8 reason) noreturn;
 
-void machine_restart(unsigned int delay_millisecs);
-void machine_halt(void);
+void machine_restart(unsigned int delay_millisecs) noreturn;
+void machine_halt(void) noreturn;
 void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:09:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:09:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHx9N-0008Si-J2; Mon, 24 Feb 2014 15:08:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WHx9L-0008Sd-HD
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:08:55 +0000
Received: from [85.158.139.211:21416] by server-8.bemta-5.messagelabs.com id
	36/F6-05298-6806B035; Mon, 24 Feb 2014 15:08:54 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393254531!5899743!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24590 invoked from network); 24 Feb 2014 15:08:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:08:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103557346"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:08:34 +0000
Received: from [10.68.14.50] (10.68.14.50) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:08:33 -0500
Message-ID: <530B606F.2070902@citrix.com>
Date: Mon, 24 Feb 2014 15:08:31 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@schaman.hu>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
	<1392745532.23084.65.camel@kazak.uk.xensource.com>
	<53093051.9040907@citrix.com> <530B4E05.4020900@schaman.hu>
In-Reply-To: <530B4E05.4020900@schaman.hu>
X-Originating-IP: [10.68.14.50]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 13:49, Zoltan Kiss wrote:
> On 22/02/14 23:18, Zoltan Kiss wrote:
>> On 18/02/14 17:45, Ian Campbell wrote:
>>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>>
>>> Re the Subject: change how? Perhaps "handle foreign mapped pages on the
>>> guest RX path" would be clearer.
>> Ok, I'll do that.
>>
>>>
>>>> The RX path needs to know if the SKB fragments are stored on pages 
>>>> from another
>>>> domain.
>>> Does this not need to be done either before the mapping change or at 
>>> the
>>> same time? -- otherwise you have a window of a couple of commits where
>>> things are broken, breaking bisectability.
>> I can move this to the beginning, to keep bisectability. I've put it 
>> here originally because none of these makes sense without the 
>> previous patches.
> Well, I gave it a close look: to move this to the beginning as a 
> separate patch I would need to move a lot of definitions from the 
> first patch to here (ubuf_to_vif helper, xenvif_zerocopy_callback 
> etc.). That would be the best from bisect point of view, but from 
> patch review point of view even worse than now. So the only option I 
> see is to merge this with the first 2 patches, so it will be even bigger. 
Actually I was stupid: we can move this patch earlier and introduce 
stubs for those 2 functions. But for the other two patches (#6 and #8) 
it is still true that we can't move them earlier, only merge them into 
the main patch, as they heavily rely on it. #6 is necessary for 
Windows frontends, as they are keen to send too many slots. #8 covers 
quite a rare case, which happens only if a guest is wedged or 
malicious and sits on the packet.
So my question still stands: do you prefer perfect bisectability, or 
more segmented patches which are not such a pain to review?

> And based on that principle, patch #6 and #8 should be merged there as 
> well, as they solve corner cases introduced by the grant mapping.
> I don't know how much the bisecting requirements are written in stone. 
> At this moment, all the separate patches compile, but after #2 there 
> are new problems solved in #4, #6 and #8. If someone bisect in the 
> middle of this range and run into these problems, they could quite 
> easily figure out what went wrong looking at the adjacent patches. So 
> I would recommend to keep this current order.
> What's your opinion?
>
> Zoli


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:10:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:10:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxAS-000070-8J; Mon, 24 Feb 2014 15:10:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHxAP-00006n-DQ
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:10:01 +0000
Received: from [193.109.254.147:7420] by server-4.bemta-14.messagelabs.com id
	5C/0D-32066-8C06B035; Mon, 24 Feb 2014 15:10:00 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393254599!6427189!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22873 invoked from network); 24 Feb 2014 15:09:59 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:09:59 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so3148288eak.15
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 07:09:59 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:from:to:cc:subject:date:message-id;
	bh=yef+9SL73RX5SjMlp7ogW7LipUvS9ZtDy5otSQkJYiE=;
	b=PzEyLvRDYSr2EBfsPe4COBCUig/pulAUvWgYHw5YIQrXozqOvbXlRfqMHoDik8y0Fq
	2sbTEKCUHiHvzNGKdt7I0D7t5MpGnb3/uk2o9GYyMAt9UqHMEgBdNQd4A0RejweM21pG
	c+qfBjkG+tmil+ozJuPnu9nPjqzwlb/RVEoX6J5zPSJqfRhJNvAjW31JZvVODqltjmPk
	JBI4u2Ni0Q7i4210M5fCJBjmHEJjr5+h6mSc7KuNegmqPpRpOq4DCq9qwcho2kOagZDK
	nFwh7U2CYJ2olBFZdj8kjvr0s5tkWCcFIIvGtYvZVpTbXPx89ogA23hEgnGsjq5+lUWg
	IE3w==
X-Gm-Message-State: ALoCoQmBegSTzS/xzoRBQXIfwIqxYV37IBbrTTN2+qgXT+GrqFiTOs4U6gkXrbCmT2tIDX/yBqEZ
X-Received: by 10.15.23.194 with SMTP id h42mr24910937eeu.32.1393254598965;
	Mon, 24 Feb 2014 07:09:58 -0800 (PST)
Received: from belegaer.uk.xensource.com. ([185.25.64.249])
	by mx.google.com with ESMTPSA id f45sm64716451eeg.5.2014.02.24.07.09.57
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Mon, 24 Feb 2014 07:09:58 -0800 (PST)
From: Julien Grall <julien.grall@linaro.org>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 15:09:50 +0000
Message-Id: <1393254590-9357-1-git-send-email-julien.grall@linaro.org>
X-Mailer: git-send-email 1.7.10.4
Cc: Julien Grall <julien.grall@linaro.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [Xen-devel] [PATCH] xen/xsm: Fix xsm_map_gmfn_foreign prototype
	when XSM is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Julien Grall <julien.grall@linaro.org>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 xen/include/xsm/xsm.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 1939453..5d35455 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -621,7 +621,7 @@ static inline int xsm_ioport_mapping (xsm_default_t def, struct domain *d, uint3
 #endif /* CONFIG_X86 */
 
 #ifdef CONFIG_ARM
-static inline int xsm_map_gmfn_foreign (struct domain *d, struct domain *t)
+static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)
 {
     return xsm_ops->map_gmfn_foreign(d, t);
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:13:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxE7-0000Ln-3a; Mon, 24 Feb 2014 15:13:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gregkh@linuxfoundation.org>) id 1WHxE5-0000LY-C4
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:13:49 +0000
Received: from [193.109.254.147:7680] by server-14.bemta-14.messagelabs.com id
	97/88-29228-CA16B035; Mon, 24 Feb 2014 15:13:48 +0000
X-Env-Sender: gregkh@linuxfoundation.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393254827!6476317!1
X-Originating-IP: [140.211.169.12]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4323 invoked from network); 24 Feb 2014 15:13:47 -0000
Received: from mail.linuxfoundation.org (HELO mail.linuxfoundation.org)
	(140.211.169.12) by server-3.tower-27.messagelabs.com with SMTP;
	24 Feb 2014 15:13:47 -0000
Received: from localhost (c-76-28-255-20.hsd1.wa.comcast.net [76.28.255.20])
	by mail.linuxfoundation.org (Postfix) with ESMTPSA id 766A787A;
	Mon, 24 Feb 2014 15:13:46 +0000 (UTC)
Date: Mon, 24 Feb 2014 07:16:36 -0800
From: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140224151636.GA13489@kroah.com>
References: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
	<1392914159.32657.18.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402241214570.4471@kaball.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.02.1402241214570.4471@kaball.uk.xensource.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Russell King <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, Pawel Moll <pawel.moll@arm.com>,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, Rob Herring <robh+dt@kernel.org>,
	Rob Landley <rob@landley.net>, Kumar Gala <galak@codeaurora.org>,
	xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
 device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 12:19:11PM +0000, Stefano Stabellini wrote:
> CC'ing Greg.
> 
> On Thu, 20 Feb 2014, Ian Campbell wrote:
> > On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> > > Only Xen is able to know if a device can safely avoid using xen-swiotlb.
> > > This patch introduces a new property "protected-devices" for the hypervisor
> > > node, which lists the devices for which the IOMMU has been correctly programmed by Xen.
> > > 
> > > During Linux boot, Xen-specific code will create a hash table which
> > > contains all these devices. The hash table will be used in need_xen_dma_ops
> > > to check if the Xen DMA ops need to be used for the current device.
> > 
> > Is it out of the question to find a field within struct device itself to
> > store this e.g. in struct device_dma_parameters perhaps and avoid the
> > need for a hashtable lookup.
> > 
> > device->iommu_group might be another option, if we can create our own
> > group?
> 
> I agree that a field in struct device would be ideal.
> Greg, get_maintainer.pl points at you as main maintainer of device.h, do
> you have an opinion on this?

I need a whole lot more context here, please.  A patch would be even
better, so that I know exactly what you are referring to...

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 12:19:11PM +0000, Stefano Stabellini wrote:
> CC'ing Greg.
> 
> On Thu, 20 Feb 2014, Ian Campbell wrote:
> > On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> > > Only Xen knows whether a device can safely avoid using xen-swiotlb.
> > > This patch introduces a new property, "protected-devices", for the
> > > hypervisor node, listing the devices whose IOMMU has been correctly
> > > programmed by Xen.
> > > 
> > > During Linux boot, Xen-specific code will create a hash table
> > > containing all these devices. The hash table will be used in
> > > need_xen_dma_ops to check whether the Xen DMA ops need to be used
> > > for the current device.
> > 
> > Is it out of the question to find a field within struct device itself to
> > store this, e.g. in struct device_dma_parameters, and avoid the need for
> > a hashtable lookup?
> > 
> > device->iommu_group might be another option, if we can create our own
> > group?
> 
> I agree that a field in struct device would be ideal.
> Greg, get_maintainer.pl points at you as main maintainer of device.h, do
> you have an opinion on this?

I need a whole lot more context here, please.  A patch would be even
better, so that I know exactly what you are referring to...
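For illustration, the field-in-struct-device option might look like the sketch below. All names here are hypothetical, not the actual Linux kernel API; the point is simply that a per-device flag turns the need_xen_dma_ops decision into a constant-time field read, with no hash table to build or query.

```c
#include <stdbool.h>

/* Hypothetical sketch only -- not the real kernel structures. */
struct device {
    bool xen_dma_protected;  /* set once if Xen programmed the IOMMU */
};

/* With a flag stored in struct device, the DMA-ops decision becomes a
 * constant-time field read, with no separate lookup structure. */
bool need_xen_dma_ops(const struct device *dev)
{
    /* xen-swiotlb is only needed when the device is NOT behind a
     * Xen-programmed IOMMU. */
    return !dev->xen_dma_protected;
}
```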

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:18:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxIR-0000bW-3h; Mon, 24 Feb 2014 15:18:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WHxIP-0000bQ-7f
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:18:17 +0000
Received: from [193.109.254.147:41474] by server-8.bemta-14.messagelabs.com id
	EF/AE-18529-8B26B035; Mon, 24 Feb 2014 15:18:16 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393255094!6425107!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25001 invoked from network); 24 Feb 2014 15:18:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:18:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105215342"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:17:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:17:55 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WHxI3-0006cM-21;
	Mon, 24 Feb 2014 15:17:55 +0000
Message-ID: <530B62A2.3080901@eu.citrix.com>
Date: Mon, 24 Feb 2014 15:17:54 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
	deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 02:19 PM, Ian Jackson wrote:
> libxl_postfork_child_noexec would recursively reacquire the non-recursive
> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
> The result on Linux is that the process always deadlocks before
> returning from this function.
>
> This is used by xl's console child.  So, the ultimate effect is that
> xl with pygrub does not manage to connect to the pygrub console.
> This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
>
> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
> documented to suffice if called only on one ctx.  So deregistering the
> ctx it's called on is not sufficient.  Instead, we need a new approach
> which discards the whole sigchld_user list and unconditionally removes
> our SIGCHLD handler if we had one.
>
> Prompted by this, clarify the semantics of
> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
> "quickly" by explaining what operations are not permitted; and
> document the fact that the function doesn't reclaim the resources in
> the ctxs.
>
> And add a comment in libxl_postfork_child_noexec explaining the
> internal concurrency situation.
>
> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> Reported-by: M A Young <m.a.young@durham.ac.uk>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>

So it looks like this path gets called from a number of other places in xl:

libxl_postfork_child_noexec() is called by xl.c:postfork().

postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(), 
autoconnect_console(), and do_daemonize().

do_daemonize() is called during "xl create", and "xl devd".

Was this deadlock not triggered for those, or was it triggered and 
nobody noticed?

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:22:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxMN-0000pj-SL; Mon, 24 Feb 2014 15:22:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHxMM-0000pc-77
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:22:22 +0000
Received: from [85.158.143.35:4097] by server-2.bemta-4.messagelabs.com id
	43/71-10891-DA36B035; Mon, 24 Feb 2014 15:22:21 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393255340!7906188!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3425 invoked from network); 24 Feb 2014 15:22:21 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:22:21 -0000
Received: by mail-wg0-f42.google.com with SMTP id k14so2893981wgh.5
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 07:22:20 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type:content-transfer-encoding;
	bh=1Y/NUE62/2T17ef2m5mJNrTkYCmuBOovqeYIouaXUCc=;
	b=cvV5u5lawQHL1ieyxVmTzkHmUWNb99D38fCOP/1kc6SFFcp2BFOaVUcgMavZPk70yF
	j1Oysu/+HkbLMXZTAxsqRWHh+1yNAXpSqyrrCynFoP4JKS5tf+lxByySWL38uClGL8Ai
	r7F/nAWDQtFOcRd7IOrXMPhHCEX5sWFGZzlkbL/6UH4/ZtPlD8knQAOW12FKjlgiq6aD
	FoxGtvxg/X6tI5jEs/kQ/KpEx7TAWP0vj8Dru8636SDJd0qBmnVS9ll15PMWKT6sQyXP
	ZwvwTU2t0v1cTDgGZFFSk5mgtEMIaHy0kJUShLYV87Op3YL3Ci3D5DrPTwLCUIU8P6f9
	XrcQ==
MIME-Version: 1.0
X-Received: by 10.180.75.105 with SMTP id b9mr14799755wiw.6.1393255340474;
	Mon, 24 Feb 2014 07:22:20 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 07:22:20 -0800 (PST)
In-Reply-To: <CAN90_r=K2GPMFvNsc4cWzth7BgEoa5rxTCVzn8Uk5_CfOvPT0w@mail.gmail.com>
References: <CAN90_rkxUvEDkXLrDkbp-tZLj_OmkMiN+XYpPfqqZioHUBgBUA@mail.gmail.com>
	<CAN90_r=K2GPMFvNsc4cWzth7BgEoa5rxTCVzn8Uk5_CfOvPT0w@mail.gmail.com>
Date: Mon, 24 Feb 2014 15:22:20 +0000
X-Google-Sender-Auth: rKO37cQ5QHeyrnprqrHTh-5e3fI
Message-ID: <CAFLBxZZXnbtLSjyxV+xr7VG5Wx-MvXeDK660ff=si0PRYFYNOQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: =?UTF-8?B?TMOibSBI4bqjaSBTxqFu?= <lamhaison@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] Fwd: [How to read xen source]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 9:32 AM, Lâm Hải Sơn <lamhaison@gmail.com> wrote:
> Hello!
> I'm a student, and i interst to source xen. I want to modify source but
> Before I do it I must understand this code. I have a question, How to read
> xen source, I have to use tool which help me, such as source navigator, and
> How to do it. I think you are a developer, I want to question, where does
> code use restart, delete virtual machine on xen. Thank you!. Help me

A really handy "rune" for following threads around the source code is
the following:

find . -name "*.[cSh]" | xargs grep -H [string]

If you want to include all files, you can replace the -name argument
with "-type f".

Also, look into cscope or GNU global.  They usually have good
integration into vi and emacs.

Good luck,
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:24:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxOJ-0000xm-3B; Mon, 24 Feb 2014 15:24:23 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHxOI-0000xd-9v
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:24:22 +0000
Received: from [193.109.254.147:5000] by server-13.bemta-14.messagelabs.com id
	F6/45-01226-5246B035; Mon, 24 Feb 2014 15:24:21 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393255460!6426883!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8098 invoked from network); 24 Feb 2014 15:24:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 15:24:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 15:24:20 +0000
Message-Id: <530B7230020000780011EE18@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 15:24:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Julien Grall" <julien.grall@linaro.org>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
	<1393253572-7157-6-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393253572-7157-6-git-send-email-julien.grall@linaro.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, Keir Fraser <keir@xen.org>,
	stefano.stabellini@citrix.com, ian.campbell@citrix.com, tim@xen.org
Subject: Re: [Xen-devel] [PATCH v2 5/6] xen/console: Add noreturn attribute
 to panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 15:52, Julien Grall <julien.grall@linaro.org> wrote:
> -void panic(const char *fmt, ...)
> +void __attribute__((noreturn)) panic(const char *fmt, ...)
>  {
>      va_list args;
>      unsigned long flags;
> @@ -1085,6 +1085,8 @@ void panic(const char *fmt, ...)
>          watchdog_disable();
>          machine_restart(5000);
>      }
> +
> +    while ( 1 );

The canonical thing here would be "for ( ; ; );", since some compilers
warn about the constant literal 1 in what you propose.

Jan
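A standalone sketch of the suggested pattern follows; my_panic is a hypothetical stand-in for Xen's panic(), and exit() stands in for machine_restart() so the demo can terminate.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* The trailing unconditional loop satisfies the noreturn contract in
 * case the restart path ever falls through.  "for ( ; ; );" is used
 * because some compilers warn about the constant condition in
 * "while ( 1 );". */
void __attribute__((noreturn)) my_panic(const char *fmt, ...)
{
    va_list args;

    va_start(args, fmt);
    vprintf(fmt, args);
    va_end(args);

    exit(0);      /* stand-in for machine_restart(5000) in this demo */

    for ( ; ; )
        ;         /* unreachable here, but needed in the real panic() */
}
```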


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:26:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:26:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxQD-00016u-Nc; Mon, 24 Feb 2014 15:26:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHxQB-00016l-Uf
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:26:20 +0000
Received: from [85.158.137.68:27169] by server-8.bemta-3.messagelabs.com id
	20/E6-16039-B946B035; Mon, 24 Feb 2014 15:26:19 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393255576!3881802!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19413 invoked from network); 24 Feb 2014 15:26:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:26:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105218392"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 15:26:16 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 10:26:15 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHxQ2-0006kK-U6;
	Mon, 24 Feb 2014 15:26:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHxQ1-0000ue-1Z;
	Mon, 24 Feb 2014 15:26:09 +0000
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Mon, 24 Feb 2014 15:26:04 +0000
Message-ID: <1393255564-3474-1-git-send-email-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
References: <alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
MIME-Version: 1.0
X-DLP: MIA2
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: [Xen-devel] [PATCH] tools/console: reset tty when xenconsole fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If xenconsole (the client program) fails, it calls err.  This would
previously neglect to reset the user's terminal to sanity.  Use atexit
to do so.

This routinely happens in Xen 4.4 RC5 with pygrub because something
writes the value "" to the tty xenstore key when using xenconsole.
The cause of this is not yet known, but after this patch it just
results in a harmless error message.

Reported-by: M A Young <m.a.young@durham.ac.uk>
Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: George Dunlap <george.dunlap@eu.citrix.com>
---
 tools/console/client/main.c |   11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index 3242008..add3313 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -258,6 +258,13 @@ typedef enum {
        CONSOLE_SERIAL,
 } console_type;
 
+static struct termios stdin_old_attr;
+
+static void restore_term_stdin(void)
+{
+	restore_term(STDIN_FILENO, &stdin_old_attr);
+}
+
 int main(int argc, char **argv)
 {
 	struct termios attr;
@@ -384,9 +391,9 @@ int main(int argc, char **argv)
 	}
 
 	init_term(spty, &attr);
-	init_term(STDIN_FILENO, &attr);
+	init_term(STDIN_FILENO, &stdin_old_attr);
+        atexit(restore_term_stdin); /* if this fails, oh dear */
 	console_loop(spty, xs, path);
-	restore_term(STDIN_FILENO, &attr);
 
 	free(path);
 	free(dom_path);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:31:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxVI-0001Q8-J1; Mon, 24 Feb 2014 15:31:36 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WHxVG-0001Pc-Ga
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:31:34 +0000
Received: from [193.109.254.147:25973] by server-7.bemta-14.messagelabs.com id
	A1/83-23424-5D56B035; Mon, 24 Feb 2014 15:31:33 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393255880!2706378!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21946 invoked from network); 24 Feb 2014 15:31:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:31:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103565992"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:31:20 +0000
Received: from [10.68.14.50] (10.68.14.50) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:31:19 -0500
Message-ID: <530B65C5.4040201@citrix.com>
Date: Mon, 24 Feb 2014 15:31:17 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1392803434.23084.97.camel@kazak.uk.xensource.com>
In-Reply-To: <1392803434.23084.97.camel@kazak.uk.xensource.com>
X-Originating-IP: [10.68.14.50]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 0/9] xen-netback: TX grant
 mapping with SKBTX_DEV_ZEROCOPY instead of copy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 19/02/14 09:50, Ian Campbell wrote:
> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>> A long known problem of the upstream netback implementation that on the TX
>> path (from guest to Dom0) it copies the whole packet from guest memory into
>> Dom0. That simply became a bottleneck with 10Gb NICs, and generally it's a
>> huge performance penalty. The classic kernel version of netback used grant
>> mapping, and to get notified when the page can be unmapped, it used page
>> destructors. Unfortunately that destructor is not an upstreamable solution.
>> Ian Campbell's skb fragment destructor patch series [1] tried to solve this
>> problem, however it seems to be very invasive on the network stack's code,
>> and therefore haven't progressed very well.
>> This patch series use SKBTX_DEV_ZEROCOPY flags to tell the stack it needs to
>> know when the skb is freed up. That is the way KVM solved the same problem,
>> and based on my initial tests it can do the same for us. Avoiding the extra
>> copy boosted up TX throughput from 6.8 Gbps to 7.9 (I used a slower
>> Interlagos box, both Dom0 and guest on upstream kernel, on the same NUMA node,
>> running iperf 2.0.5, and the remote end was a bare metal box on the same 10Gb
>> switch)
>> Based on my investigations the packet gets only copied if it is delivered to
>> Dom0 stack,
> This is not quite complete/accurate since you previously told me that it
> is copied in the NAT/routed rather than bridged network topologies.
>
> Please can you cover that aspect here too.
Ok.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:43:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:43:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxg8-0001eZ-S5; Mon, 24 Feb 2014 15:42:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHxg8-0001eU-0k
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:42:48 +0000
Received: from [85.158.143.35:21296] by server-1.bemta-4.messagelabs.com id
	EC/C5-31661-7786B035; Mon, 24 Feb 2014 15:42:47 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393256565!7911396!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4980 invoked from network); 24 Feb 2014 15:42:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:42:46 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103570084"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:42:44 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 10:42:44 -0500
Message-ID: <1393256563.16570.88.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 15:42:43 +0000
In-Reply-To: <1393255564-3474-1-git-send-email-ian.jackson@eu.citrix.com>
References: <alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
	<1393255564-3474-1-git-send-email-ian.jackson@eu.citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, xen-devel@lists.xensource.com,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH] tools/console: reset tty when xenconsole
	fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 15:26 +0000, Ian Jackson wrote:
> If xenconsole (the client program) fails, it calls err.  This would
> previously neglect to reset the user's terminal to sanity.  Use atexit
> to do so.
> 
> This routinely happens in Xen 4.4 RC5 with pygrub because something
> writes the value "" to the tty xenstore key when using xenconsole.
> The cause of this is not yet known, but after this patch it just
> results in a harmless error message.
> 
> Reported-by: M A Young <m.a.young@durham.ac.uk>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: George Dunlap <george.dunlap@eu.citrix.com>
> ---
>  tools/console/client/main.c |   11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/console/client/main.c b/tools/console/client/main.c
> index 3242008..add3313 100644
> --- a/tools/console/client/main.c
> +++ b/tools/console/client/main.c
> @@ -258,6 +258,13 @@ typedef enum {
>         CONSOLE_SERIAL,
>  } console_type;
>  
> +static struct termios stdin_old_attr;
> +
> +static void restore_term_stdin(void)
> +{
> +	restore_term(STDIN_FILENO, &stdin_old_attr);
> +}
> +
>  int main(int argc, char **argv)
>  {
>  	struct termios attr;
> @@ -384,9 +391,9 @@ int main(int argc, char **argv)
>  	}
>  
>  	init_term(spty, &attr);
> -	init_term(STDIN_FILENO, &attr);
> +	init_term(STDIN_FILENO, &stdin_old_attr);
> +        atexit(restore_term_stdin); /* if this fails, oh dear */

Looks like some sort of tab/space damage, but otherwise:
Acked-by: Ian Campbell <ian.campbell@citrix.com>

>  	console_loop(spty, xs, path);
> -	restore_term(STDIN_FILENO, &attr);
>  
>  	free(path);
>  	free(dom_path);



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:47:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxkv-0001qe-Qs; Mon, 24 Feb 2014 15:47:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHxkt-0001qY-TX
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:47:44 +0000
Received: from [193.109.254.147:11253] by server-14.bemta-14.messagelabs.com
	id 3A/21-29228-F996B035; Mon, 24 Feb 2014 15:47:43 +0000
From xen-devel-bounces@lists.xen.org Mon Feb 24 15:47:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxkv-0001qe-Qs; Mon, 24 Feb 2014 15:47:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHxkt-0001qY-TX
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:47:44 +0000
Received: from [193.109.254.147:11253] by server-14.bemta-14.messagelabs.com
	id 3A/21-29228-F996B035; Mon, 24 Feb 2014 15:47:43 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393256862!6484071!1
X-Originating-IP: [74.125.82.180]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8323 invoked from network); 24 Feb 2014 15:47:42 -0000
Received: from mail-we0-f180.google.com (HELO mail-we0-f180.google.com)
	(74.125.82.180)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:47:42 -0000
Received: by mail-we0-f180.google.com with SMTP id u57so4828910wes.39
	for <xen-devel@lists.xensource.com>;
	Mon, 24 Feb 2014 07:47:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=jv1lF5OJYVthd8bvWQxDWu+aXl/kvjvSZakCeNR8rzY=;
	b=M5vccbj65wPQrYbwrJNLaM1TipPHcQSlEglOX5ouSvTGHO5scivCLxwF1ss1IDvfnT
	TD9ZZaYu1ZLv6jIDLCR3AJ/4VpILTtFeKOXmq1qjFigZ8Ytv8UFlM3pFaLXYvaeuDQdr
	ofXrwvCBtlt8P3GIBOhLDrdcoMOYks/71wbsrUfGkHV26shcMRvfbLnl2Zgn2ZdZ/8MR
	SfzDIxIpiHeZpLkQC678KMkmA8tnuo7rUBan/PuFxUlfNNgeTHRdhXjYaVJLQyGxhY9P
	1AOfSZ/GNvSO1qBAU7tQi3a5wKC+cQK0IlmxGZxrm4TRaDC/Pu2S0BlXdmChYdiVm0Ly
	mpQw==
MIME-Version: 1.0
X-Received: by 10.180.19.130 with SMTP id f2mr14907213wie.6.1393256861952;
	Mon, 24 Feb 2014 07:47:41 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 07:47:41 -0800 (PST)
In-Reply-To: <530B62A2.3080901@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
Date: Mon, 24 Feb 2014 15:47:41 +0000
X-Google-Sender-Auth: Ng_Bj6P8wkrpkrLteYAALKKZzzk
Message-ID: <CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>, 
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Cc: Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 3:17 PM, George Dunlap
<george.dunlap@eu.citrix.com> wrote:
> On 02/24/2014 02:19 PM, Ian Jackson wrote:
>>
>> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
>> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
>> The result on Linux is that the process always deadlocks before
>> returning from this function.
>>
>> This is used by xl's console child.  So, the ultimate effect is that
>> xl with pygrub does not manage to connect to the pygrub console.
>> This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
>>
>> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
>> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
>> documented to suffice if called only on one ctx.  So deregistering the
>> ctx it's called on is not sufficient.  Instead, we need a new approach
>> which discards the whole sigchld_user list and unconditionally removes
>> our SIGCHLD handler if we had one.
>>
>> Prompted by this, clarify the semantics of
>> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
>> "quickly" by explaining what operations are not permitted; and
>> document the fact that the function doesn't reclaim the resources in
>> the ctxs.
>>
>> And add a comment in libxl_postfork_child_noexec explaining the
>> internal concurrency situation.
>>
>> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
>>
>> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
>> Reported-by: M A Young <m.a.young@durham.ac.uk>
>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>
>
> So it looks like this path gets called from a number of other places in xl:
>
> libxl_postfork_child_noexec() is called by xl.c:postfork().
>
> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(),
> autoconnect_console(), and do_daemonize().
>
> do_daemonize() is called during "xl create", and "xl devd".
>
> Was this deadlock not triggered for those, or was it triggered and nobody
> noticed?

In any case, I do think we need to fix this; the main question is, do
we need to delay the release a bit further to make sure it gets
sufficient testing?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:49:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:49:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxmJ-00022Y-D8; Mon, 24 Feb 2014 15:49:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHxmH-00022P-EC
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:49:09 +0000
Received: from [85.158.137.68:36492] by server-9.bemta-3.messagelabs.com id
	C3/B8-10184-4F96B035; Mon, 24 Feb 2014 15:49:08 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393256947!3888050!1
X-Originating-IP: [74.125.83.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31088 invoked from network); 24 Feb 2014 15:49:07 -0000
Received: from mail-ee0-f43.google.com (HELO mail-ee0-f43.google.com)
	(74.125.83.43)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:49:07 -0000
Received: by mail-ee0-f43.google.com with SMTP id e51so3102946eek.2
	for <xen-devel@lists.xen.org>; Mon, 24 Feb 2014 07:49:07 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=0bgxwpSqOYk9jQ0p3CgfWRH3YYh5tq/5FH3v5ZVJ7S0=;
	b=VHV+fgZgUGVld8Go4cMmFnzfT3/2TEhShM5JjqBUZ2RlPb4yaDm7EHwJDs8xqhMRYo
	VANEayMQ50NX2DaJ1O/1kDMbvYH23j3G4iG64dnX1yfz+HJsbj/fRH9PMMQTdyhvwmwF
	64NhK1rMf3HhPfBOBRCk75GBxHD4w5lk83xq/YxMBa6UetMXZyPjjzYe5g/vw73nnP8l
	5pl0qb0CVPLcR3E1XMlOC5KUIL91dhFikd7KaTx/QDYLqj/ZqGzATy8IOc5ajJSe+yGl
	qky3PgEiMGCvZOVyBD8pgxZtm4Fe3HtTB/vCDNiZVDrtCKBCm6DD11G3uoh2DrNLlm/Z
	2ckw==
X-Gm-Message-State: ALoCoQmSrk4IaOpZQYYjppLb5uH3TLnzqJ9w0/B3Ok4y/bMAddwH1rtCyg7ijXyfleeyGEBVadPG
X-Received: by 10.14.88.131 with SMTP id a3mr25668223eef.64.1393256947313;
	Mon, 24 Feb 2014 07:49:07 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id u6sm65133571eep.11.2014.02.24.07.49.05
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 07:49:06 -0800 (PST)
Message-ID: <530B69F0.3020904@linaro.org>
Date: Mon, 24 Feb 2014 15:49:04 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: "karim.allah.ahmed@gmail.com" <karim.allah.ahmed@gmail.com>
References: <A531A2CE-5CC3-4B36-8F15-A3CB173EFC8C@gmail.com>
	<53012314.2050906@linaro.org>
	<CAOTdubvBf9Ce=MZJQs639sOt+718CsgDv2D+oO1_B7XXu+SgGw@mail.gmail.com>
In-Reply-To: <CAOTdubvBf9Ce=MZJQs639sOt+718CsgDv2D+oO1_B7XXu+SgGw@mail.gmail.com>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Chen Baozi <baozich@gmail.com>, Ian Campbell <ian.campbell@citrix.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Initial Mini-OS port to ARM64
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

You forgot to reply-all.

On 02/24/2014 03:30 PM, karim.allah.ahmed@gmail.com wrote:
> On Sun, Feb 16, 2014 at 8:44 PM, Julien Grall <julien.grall@linaro.org> wrote:
>> On 16/02/14 15:51, Chen Baozi wrote:
>>>
>>> Hi all,
>>
>>
>> Hello Chen,
>>
>>
>>> This is much later than I expected. I guess it might help
>>> to publish my work, though it is still not finished (and might not
>>> be finished very soon...).
>>>
>>> I began trying to port mini-os to ARM64 last summer. Since
>>> 64-bit guest support was not in good shape at that time, this
>>> work was stalled for a long time, until two months ago.
>>>
>>> Though it is still at a very early stage, it can at least be built,
>>> set up an early page table for booting, parse the DTB passed by the
>>> hypervisor, and be debugged via printk at present. So I put it
>>> on github in case someone might be interested in it. Here is the
>>> url: https://github.com/baozich/minios-arm64
>>
>>
>> Good job!
>>
>>
>>> Right now, there is some trouble making the GIC work properly,
>>> as I didn't consider mapping the GIC's interface into the address
>>> space and followed x86's memory layout, which makes the kernel
>>> virtual address start at 0x0. I'll fix it as soon as possible.
>>
>>
>> I think you should try to sync up with Karim (in CC). He has started to port
>> mini-OS to arm32. Except for the assembly code (which should be fairly small),
>> everything can be shared between the two architectures.
> 
> +1
> 
> I totally agree with Julien.
> 
> I don't know how far along you are at the moment, but would it be easy to
> rebase your work on top of mine (or the other way around?).
> Let me know what I can do to sync up with your work.
> 
>>
>> If I remember correctly, Karim has already written GIC support, but without
>> FDT support.
> 
> Yes, that's correct. All addresses (not just for the GIC) are fixed at the
> moment, so once the memory layout changes, everything will crash :)
> 
>>
>>
>>> Besides, there is still a lot of work to be done. So any comments
>>> or patches are welcome.
>>
>>
>> Regards,
>>
>> --
>> Julien Grall
> 
> 
> 


-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:49:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxmt-00026p-Re; Mon, 24 Feb 2014 15:49:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHxms-00026b-9Q
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:49:46 +0000
Received: from [193.109.254.147:59081] by server-13.bemta-14.messagelabs.com
	id 16/52-01226-91A6B035; Mon, 24 Feb 2014 15:49:45 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393256982!1181236!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21368 invoked from network); 24 Feb 2014 15:49:43 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:49:43 -0000
Received: by mail-wi0-f169.google.com with SMTP id e4so3179028wiv.0
	for <xen-devel@lists.xensource.com>;
	Mon, 24 Feb 2014 07:49:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=RaE9NLJZm2PfpuT/2gYkmb8Vjjc5zqpiweMuBtTDBSs=;
	b=G5zOSUiG+Pkgqtci1/+o3FZ/szthWopOzbrzYg0Tl1wG7bM8lpOSTkzh6LicQWgAW1
	tff+JhJJGi7Ml3xFDIjkg0GhS5yph928YcoT40N6ZK2ThWGReoo/y2wyfocGcvVpaMuh
	cnvc4Ixno9juY1Zgge1FdsMo48thiWI1QO1o3ZWLrNrI+zzHgGFakyGihf+vF7Ylwp4f
	xpqy+7RKwcpOGaAVHwWvnPMxADc0GWKfEw0nUzQX68VWf4wpuxDofUZVAic+gF/pNeoI
	3sUoeFLXpWxEKvkPJw9kxjxnzRoZipubfR0zPbezqnhxxNKyqP7Ovoa3r3fUZjYYD6te
	hNmA==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr19378547wjy.57.1393256982694;
	Mon, 24 Feb 2014 07:49:42 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 07:49:42 -0800 (PST)
In-Reply-To: <CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
Date: Mon, 24 Feb 2014 15:49:42 +0000
X-Google-Sender-Auth: emcmI595Y84QR9HfLHmWnTQQ89A
Message-ID: <CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>, 
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 3:47 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> On Mon, Feb 24, 2014 at 3:17 PM, George Dunlap
> <george.dunlap@eu.citrix.com> wrote:
>> On 02/24/2014 02:19 PM, Ian Jackson wrote:
>>>
>>> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
>>> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
>>> The result on Linux is that the process always deadlocks before
>>> returning from this function.
>>>
>>> This is used by xl's console child.  So, the ultimate effect is that
>>> xl with pygrub does not manage to connect to the pygrub console.
>>> This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
>>>
>>> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
>>> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
>>> documented to suffice if called only on one ctx.  So deregistering the
>>> ctx it's called on is not sufficient.  Instead, we need a new approach
>>> which discards the whole sigchld_user list and unconditionally removes
>>> our SIGCHLD handler if we had one.
>>>
>>> Prompted by this, clarify the semantics of
>>> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
>>> "quickly" by explaining what operations are not permitted; and
>>> document the fact that the function doesn't reclaim the resources in
>>> the ctxs.
>>>
>>> And add a comment in libxl_postfork_child_noexec explaining the
>>> internal concurrency situation.
>>>
>>> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
>>>
>>> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>> Reported-by: M A Young <m.a.young@durham.ac.uk>
>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>
>>
>> So it looks like this path gets called from a number of other places in xl:
>>
>> libxl_postfork_child_noexec() is called by xl.c:postfork().
>>
>> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(),
>> autoconnect_console(), and do_daemonize().
>>
>> do_daemonize() is called during "xl create", and "xl devd".
>>
>> Was this deadlock not triggered for those, or was it triggered and nobody
>> noticed?
>
> In any case, I do think we need to fix this; the main question is, do
> we need to delay the release a bit further to make sure it gets
> sufficient testing?

Also, it would be nice to get a Tested-by: from someone using it with
libvirt (before the release at least, if not before the check-in).

Jim / Dario?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:49:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:49:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxmt-00026p-Re; Mon, 24 Feb 2014 15:49:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHxms-00026b-9Q
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:49:46 +0000
Received: from [193.109.254.147:59081] by server-13.bemta-14.messagelabs.com
	id 16/52-01226-91A6B035; Mon, 24 Feb 2014 15:49:45 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393256982!1181236!1
X-Originating-IP: [209.85.212.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21368 invoked from network); 24 Feb 2014 15:49:43 -0000
Received: from mail-wi0-f169.google.com (HELO mail-wi0-f169.google.com)
	(209.85.212.169)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:49:43 -0000
Received: by mail-wi0-f169.google.com with SMTP id e4so3179028wiv.0
	for <xen-devel@lists.xensource.com>;
	Mon, 24 Feb 2014 07:49:42 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=RaE9NLJZm2PfpuT/2gYkmb8Vjjc5zqpiweMuBtTDBSs=;
	b=G5zOSUiG+Pkgqtci1/+o3FZ/szthWopOzbrzYg0Tl1wG7bM8lpOSTkzh6LicQWgAW1
	tff+JhJJGi7Ml3xFDIjkg0GhS5yph928YcoT40N6ZK2ThWGReoo/y2wyfocGcvVpaMuh
	cnvc4Ixno9juY1Zgge1FdsMo48thiWI1QO1o3ZWLrNrI+zzHgGFakyGihf+vF7Ylwp4f
	xpqy+7RKwcpOGaAVHwWvnPMxADc0GWKfEw0nUzQX68VWf4wpuxDofUZVAic+gF/pNeoI
	3sUoeFLXpWxEKvkPJw9kxjxnzRoZipubfR0zPbezqnhxxNKyqP7Ovoa3r3fUZjYYD6te
	hNmA==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr19378547wjy.57.1393256982694;
	Mon, 24 Feb 2014 07:49:42 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 07:49:42 -0800 (PST)
In-Reply-To: <CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
Date: Mon, 24 Feb 2014 15:49:42 +0000
X-Google-Sender-Auth: emcmI595Y84QR9HfLHmWnTQQ89A
Message-ID: <CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>, 
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>, Jim Fehlig <jfehlig@suse.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 3:47 PM, George Dunlap
<George.Dunlap@eu.citrix.com> wrote:
> On Mon, Feb 24, 2014 at 3:17 PM, George Dunlap
> <george.dunlap@eu.citrix.com> wrote:
>> On 02/24/2014 02:19 PM, Ian Jackson wrote:
>>>
>>> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
>>> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
>>> The result on Linux is that the process always deadlocks before
>>> returning from this function.
>>>
>>> This is used by xl's console child.  So, the ultimate effect is that
>>> xl with pygrub does not manage to connect to the pygrub console.
>>> This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
>>>
>>> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
>>> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
>>> documented to suffice if called only on one ctx.  So deregistering the
>>> ctx it's called on is not sufficient.  Instead, we need a new approach
>>> which discards the whole sigchld_user list and unconditionally removes
>>> our SIGCHLD handler if we had one.
>>>
>>> Prompted by this, clarify the semantics of
>>> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
>>> "quickly" by explaining what operations are not permitted; and
>>> document the fact that the function doesn't reclaim the resources in
>>> the ctxs.
>>>
>>> And add a comment in libxl_postfork_child_noexec explaining the
>>> internal concurrency situation.
>>>
>>> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
>>>
>>> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>> Reported-by: M A Young <m.a.young@durham.ac.uk>
>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>
>>
>> So it looks like this path gets called from a number of other places in xl:
>>
>> libxl_postfork_child_noexec() is called by xl.c:postfork().
>>
>> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(),
>> autoconnect_console(), and do_daemonize().
>>
>> do_daemonize() is called during "xl create", and "xl devd".
>>
>> Was this deadlock not triggered for those, or was it triggered and nobody
>> noticed?
>
> In any case, I do think we need to fix this; the main question is, do
> we need to delay the release a bit further to make sure it gets
> sufficient testing?

Also, it would be nice to get a Tested-by: from someone using it with
libvirt (before the release at least, if not before the check-in).

Jim / Dario?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:55:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxs9-0002Mh-Lm; Mon, 24 Feb 2014 15:55:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WHxs8-0002Mc-8H
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 15:55:12 +0000
Received: from [193.109.254.147:30340] by server-4.bemta-14.messagelabs.com id
	D0/DA-32066-F5B6B035; Mon, 24 Feb 2014 15:55:11 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393257310!2775484!1
X-Originating-IP: [74.125.83.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9831 invoked from network); 24 Feb 2014 15:55:11 -0000
Received: from mail-ee0-f53.google.com (HELO mail-ee0-f53.google.com)
	(74.125.83.53)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:55:11 -0000
Received: by mail-ee0-f53.google.com with SMTP id c13so414059eek.40
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 07:55:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=LJCzZA7ebUno1MV05xvGnTo7AZKPJrHiFIKFadJfe4A=;
	b=GLYb7sVZA16BirjFY2dQ461K/ygvvMk1Fe7xlUR04r2yPvVf6e2+S0aD82EHA3SzSB
	u0RrpgrZUGkmIDgsG//ptOw75V5JBRPzs+/ZFzGTjP505nOcUGIjtqhBtyBI6R/ckv7p
	2BZyYF/dy7c6h8AOaj73QP/KkT2ykvmrcouIGpyStsaJOZZHkpHxkw2lDVyB3M1gfvK7
	8H7FITTfnxbzNliKH2fsYFP5r7Ffi6NQwvrzzMUbXCq+syHRxMALqt1uXA6zr4NjuBKs
	KniWahXV0kPfJPWhN/L+wdPrTHE6xYQjdeVGinamcmGJJ8Iu0b96Q888873TAY5wWsrc
	+fZw==
X-Gm-Message-State: ALoCoQk7OrzceHMyutCgv2a6dOJjpvULgOz5QVIMK4xbzjKA8zVQYSm2iQixDogxn5sGA1jpo859
X-Received: by 10.14.98.136 with SMTP id v8mr18468243eef.24.1393257310287;
	Mon, 24 Feb 2014 07:55:10 -0800 (PST)
Received: from [10.80.2.139] ([185.25.64.249])
	by mx.google.com with ESMTPSA id x2sm65158605eeo.8.2014.02.24.07.55.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 07:55:09 -0800 (PST)
Message-ID: <530B6B5B.1010706@linaro.org>
Date: Mon, 24 Feb 2014 15:55:07 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux i686;
	rv:17.0) Gecko/20131104 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393253572-7157-1-git-send-email-julien.grall@linaro.org>
	<1393253572-7157-6-git-send-email-julien.grall@linaro.org>
	<530B7230020000780011EE18@nat28.tlf.novell.com>
In-Reply-To: <530B7230020000780011EE18@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, ian.campbell@citrix.com,
	Andrew Cooper <andrew.cooper3@citrix.com>, tim@xen.org,
	stefano.stabellini@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH v2 5/6] xen/console: Add noreturn attribute
 to panic function
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 03:24 PM, Jan Beulich wrote:
>>>> On 24.02.14 at 15:52, Julien Grall <julien.grall@linaro.org> wrote:
>> -void panic(const char *fmt, ...)
>> +void __attribute__((noreturn)) panic(const char *fmt, ...)
>>  {
>>      va_list args;
>>      unsigned long flags;
>> @@ -1085,6 +1085,8 @@ void panic(const char *fmt, ...)
>>          watchdog_disable();
>>          machine_restart(5000);
>>      }
>> +
>> +    while ( 1 );
> 
> The canonical thing here would be "for ( ; ; );", since there are
> compilers warning about the literal 1 in what you propose.

With Andrew's patch series I don't need this patch anymore. I will drop it.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 15:55:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:55:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxsF-0002N6-2H; Mon, 24 Feb 2014 15:55:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <viktor.kleinik@globallogic.com>) id 1WHxsD-0002Mv-Kw
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 15:55:17 +0000
Received: from [193.109.254.147:58017] by server-3.bemta-14.messagelabs.com id
	55/E1-00432-56B6B035; Mon, 24 Feb 2014 15:55:17 +0000
X-Env-Sender: viktor.kleinik@globallogic.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393257313!6455467!1
X-Originating-IP: [64.18.0.184]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29348 invoked from network); 24 Feb 2014 15:55:15 -0000
Received: from exprod5og107.obsmtp.com (HELO exprod5og107.obsmtp.com)
	(64.18.0.184)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 15:55:15 -0000
Received: from mail-wi0-f173.google.com ([209.85.212.173]) (using TLSv1) by
	exprod5ob107.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwtrYRXl2s365gF+nNGHK44VhU+2kge2@postini.com;
	Mon, 24 Feb 2014 07:55:15 PST
Received: by mail-wi0-f173.google.com with SMTP id bs8so3252663wib.0
	for <xen-devel@lists.xen.org>; Mon, 24 Feb 2014 07:55:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=zdc/RdKVHVc8eZQjMKJfAo8d3iu57dC/0gX0pH/55HI=;
	b=MdxPtldWVBiVSyyu2agyKPm2u33lHAsMn7+/rKuPsdjYTSV8yMmrxq8WLfoXSNbc+v
	TN3wfL4vlTImXcWq8vneAq32HyKhLfoDyqbZ0ra3coGkHiQq/moUOo0sCqTgquXg4Yxs
	VXoBO6gQVkmENIbx0n6C8Ct2zMGF57eraM9pM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=zdc/RdKVHVc8eZQjMKJfAo8d3iu57dC/0gX0pH/55HI=;
	b=JrWGUqXp1okYTWAAOxwQCXbGvYyD6ADWtarb7LDDOu+p3SHlsGuJA4IeVNpT6AuKKH
	4UTTyhalRAM/XLWJmzETxjVj+0OGs5EYuAggCj5wWRgXmEVGPvefcT2MIcUg33qfBKsN
	sErzcPBN05XY42YL5HCjCs2FfZlx8Ak7TeiBK6pWenn6OHWPW0kXG/K7EQpgfCQviR/B
	/8ehWAgwH77SC2WoUHeW50Kg4IU8aphUN1QXMr5NulFBQTdzCDwjUD3poiIeWIR8A73I
	XmbkqDFqeF77DQtDrhI0D021Bzm/CJBZABTrste7HxZCQIWBplRbQxOC6O2NvI4a3pWM
	UOyg==
X-Gm-Message-State: ALoCoQlwCUbSSbnWwQGOZwN9lsyTDqEtcgbZtq6DIEl6bsd63DhmqXlJO/9ntyghkBTnv822DCwRARpsQVYlZp6AZicmVs5z+tfS8088SCyXxDDS3bNxUXIatDZxd9HAFFD10GKawsYQKvkxJ+dRrVfaVb2qKVf1XVz0Ry8KXI3+WLHm9f2s35E=
X-Received: by 10.194.203.200 with SMTP id ks8mr19523094wjc.61.1393257312474; 
	Mon, 24 Feb 2014 07:55:12 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.194.203.200 with SMTP id ks8mr19523085wjc.61.1393257312374; 
	Mon, 24 Feb 2014 07:55:12 -0800 (PST)
Received: by 10.216.174.202 with HTTP; Mon, 24 Feb 2014 07:55:12 -0800 (PST)
Date: Mon, 24 Feb 2014 15:55:12 +0000
Message-ID: <CAM=aOxghm++oVRMoJDB7J+CH14jpNJTdynDwQ7ffWgR1x0+4sQ@mail.gmail.com>
From: Viktor Kleinik <viktor.kleinik@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Domain configuration for DomU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5803981878580862006=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5803981878580862006==
Content-Type: multipart/alternative; boundary=047d7bb70ae42f655a04f328ff8d

--047d7bb70ae42f655a04f328ff8d
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

We are using Linux as Dom0 and Android as DomU on the OMAP5 processor
family. The Xen version is 4.4-rc2. We have our own device tree
configuration for DomU: the device tree blob is appended to the zImage
using the CONFIG_ARM_APPENDED_DTB option in the DomU kernel config. As I
understand it, Xen can detect that a DTB is appended to the DomU zImage,
but it still generates its own device tree via the
libxl__arch_domain_configure() call made from libxl__build_pv() in
tools/libxl/libxl_dom.c, which is wasted work in that case. At the same
time, we have some platform-specific configuration that differs from the
generic ARM configuration Xen generates for us. Why does Xen generate
its own device tree even when an appended DTB can be detected? Should we
expect changes to the domain configuration functions for cases where a
DomU needs more platform-specific configuration? Please share your
thoughts.

Regards,
Victor

--047d7bb70ae42f655a04f328ff8d--


--===============5803981878580862006==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5803981878580862006==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 15:56:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 15:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxtS-0002V7-I3; Mon, 24 Feb 2014 15:56:34 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHxtS-0002Uz-1Y
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 15:56:34 +0000
Received: from [85.158.143.35:50431] by server-2.bemta-4.messagelabs.com id
	26/1E-10891-1BB6B035; Mon, 24 Feb 2014 15:56:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393257391!7910295!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2796 invoked from network); 24 Feb 2014 15:56:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 15:56:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103574804"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 15:56:31 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 10:56:30 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHxtO-0006tm-Aa;
	Mon, 24 Feb 2014 15:56:30 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHxtM-00016s-FN;
	Mon, 24 Feb 2014 15:56:28 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.27563.255108.357028@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 15:56:27 +0000
To: George Dunlap <george.dunlap@eu.citrix.com>
In-Reply-To: <530B62A2.3080901@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Ian Campbell <Ian.Campbell@citrix.com>,
	M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
	deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec deadlock etc."):
> So it looks like this path gets called from a number of other places in xl:
> 
> libxl_postfork_child_noexec() is called by xl.c:postfork().

In the old code the deadlock only happens when
ctx->sigchld_user_registered (for the ctx passed to
libxl_postfork_child_noexec).

Because xl uses .chldowner = libxl_sigchld_owner_libxl rather than
libxl_sigchld_owner_libxl_always, sigchld_user_registered is only true
when libxl actually has a child process.

In the single-threaded synchronous model used by xl for its
long-running operations (i.e. always passing ao_how==0), libxl only has
children _during_ the libxl call, when libxl calls back to the
application with a progress callback.

That's what happens in the pygrub case: xl needs to restart the
console viewer.

> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(), 
> autoconnect_console(), and do_daemonize().

osstest doesn't use `xl create -c' so they weren't tested anyway.
But it also works just fine in the non-pygrub case.

Likewise `xl create -V' (autoconnect_vncviewer) works too.

> do_daemonize() is called during "xl create", and "xl devd".

These work without the bugfix.

> Was this deadlock not triggered for those, or was it triggered and 
> nobody noticed?

Mostly, the former.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec deadlock etc."):
> So it looks like this path gets called from a number of other places in xl:
> 
> libxl_postfork_child_noexec() is called by xl.c:postfork().

In the old code the deadlock only happens when
ctx->sigchld_user_registered is set (for the ctx passed to
libxl_postfork_child_noexec).

Because xl uses .chldowner = libxl_sigchld_owner_libxl rather than
libxl_sigchld_owner_libxl_always, sigchld_user_registered is only true
when libxl actually has a child process.

In the single-threaded synchronous model used by xl for its
long-running operations (i.e. always passing ao_how==0), libxl only has
children _during_ the libxl call, when libxl calls back to the
application with a progress callback.

That's what happens in the pygrub case: xl needs to restart the
console viewer.

> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(), 
> autoconnect_console(), and do_daemonize().

osstest doesn't use `xl create -c' so they weren't tested anyway.
But it also works just fine in the non-pygrub case.

Likewise `xl create -V' (autoconnect_vncviewer) works too.

> do_daemonize() is called during "xl create", and "xl devd".

These work without the bugfix.

> Was this deadlock not triggered for those, or was it triggered and 
> nobody noticed?

Mostly, the former.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:01:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxyT-0003JL-Hb; Mon, 24 Feb 2014 16:01:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordongong0350@gmail.com>) id 1WHxyR-0003JD-Js
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 16:01:43 +0000
Received: from [85.158.137.68:30163] by server-1.bemta-3.messagelabs.com id
	89/F3-17293-6EC6B035; Mon, 24 Feb 2014 16:01:42 +0000
X-Env-Sender: gordongong0350@gmail.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393257700!3889588!1
X-Originating-IP: [209.85.214.196]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24218 invoked from network); 24 Feb 2014 16:01:42 -0000
Received: from mail-ob0-f196.google.com (HELO mail-ob0-f196.google.com)
	(209.85.214.196)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:01:42 -0000
Received: by mail-ob0-f196.google.com with SMTP id wo20so722350obc.11
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 08:01:40 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=g+9T2Ib8qz+iwaHc7aqmXgRQ6uzDrwSS9BGOf1ZcVWQ=;
	b=ZjZRp/ISDB81N8fo913UmmjToE5mK9Ov+El17Ds+pDLkyMQ/564wHmfNLRtKECnZmd
	qybGWokJN64h3r/9K15TFxlhJPZE4wVCnrYwJJ87Squf/q0jX9UB7XT8LB8gROC8PWxX
	7vchVF5mccvikgAdqt5CjM5UF7ur8Nw2WU3WG8FEcUXRzKm3p9HZFQpg7ndy437wRyhs
	3cK0zGZwEVL2eSZvnzjrFavrE5VORGTvN3qAiL2k8Gge2AOkgDwiKJ2JgEKcuSo5hSDD
	gxBQVUrQNuVXcXtVVPOJBcshm6twqlbRqyFQ6Pd5PzkNOkpQDE8tA8rgv3gZehYWnJUJ
	6yGg==
MIME-Version: 1.0
X-Received: by 10.60.123.10 with SMTP id lw10mr22322267oeb.24.1393257700411;
	Mon, 24 Feb 2014 08:01:40 -0800 (PST)
Received: by 10.182.166.102 with HTTP; Mon, 24 Feb 2014 08:01:40 -0800 (PST)
Date: Tue, 25 Feb 2014 00:01:40 +0800
Message-ID: <CAL9N-M1GvV77VBgH2fG-PsUMVauwKLhVs+=BYO5CVqdcvgS2=g@mail.gmail.com>
From: Gordon Gong <gordongong0350@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [Xen-devel] bug report - the response of EIO in VHD with tapdisk2
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5239253698657979673=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5239253698657979673==
Content-Type: multipart/alternative; boundary=047d7b5d49ce504ddc04f3291627

--047d7b5d49ce504ddc04f3291627
Content-Type: text/plain; charset=GB2312
Content-Transfer-Encoding: quoted-printable

Hi,

My environment: Xen version is 4.0.4, Domain 0 kernel is 3.0.101, DomU
kernel is 2.6.32.57, PVHVM mode, 8 GB system disk and 200 GB data disk
in VHD via blktap2.

The problem is an EIO error code when I stress it with "./iozone -I -s
2G -r 512k -r 1m -r 2m -r 4m -i 0 -i 1 -f /data/iozone_test.tmp".

Sector 4108367 (vreq->sec, but random each time) is in the failed queue
and is its only element (list_head next->prev == next->next in gdb). It
failed in alloc_vhd_request 100 times in a row, so it was labelled with
BLKIF_RSP_ERROR and sent to the complete queue. The log follows;
2810252557 is a TSC timestamp.

Feb 22 18:53:42 dom0_119_225 tapdisk2: [schedule_data_write
2810252557]:sector 4108367
Feb 22 18:53:42 dom0_119_225 tapdisk2: [schedule_data_write
2810531657]:sector 4108367
Feb 22 18:53:42 dom0_119_225 tapdisk2: [schedule_data_write
2810786838]:sector 4108367
Feb 22 18:53:42 dom0_119_225 tapdisk2: [schedule_data_write
2811063513]:sector 4108367
Feb 22 18:53:42 dom0_119_225 tapdisk2: [schedule_data_write
2811316410]:sector 4108367

Has anybody else seen this problem?

thx.

--047d7b5d49ce504ddc04f3291627--


--===============5239253698657979673==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5239253698657979673==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 16:03:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHxzo-0003Ng-0p; Mon, 24 Feb 2014 16:03:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHxzl-0003NW-Tb
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:03:06 +0000
Received: from [85.158.143.35:23127] by server-3.bemta-4.messagelabs.com id
	D9/94-11539-93D6B035; Mon, 24 Feb 2014 16:03:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393257784!7895610!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24281 invoked from network); 24 Feb 2014 16:03:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 16:03:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 16:03:04 +0000
Message-Id: <530B7B42020000780011EE5E@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 16:02:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
 functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

This patch shows a somewhat undesirable inconsistency (having been
present in, I think, less obvious ways in earlier patches too):

> --- a/xen/arch/arm/shutdown.c
> +++ b/xen/arch/arm/shutdown.c
> @@ -11,7 +11,7 @@ static void raw_machine_reset(void)
>      platform_reset();
>  }
>  
> -static void halt_this_cpu(void *arg)
> +static void noreturn halt_this_cpu(void *arg)

For function definitions you place the attribute where I personally
would expect it to be (iirc it can't go between the closing paren
after the parameter declarations and the opening brace of the
function body), yet ...

> --- a/xen/arch/x86/cpu/mcheck/mce.h
> +++ b/xen/arch/x86/cpu/mcheck/mce.h
> @@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
>  unsigned int mce_firstbank(struct cpuinfo_x86 *c);
>  /* Helper functions used for collecting error telemetry */
>  struct mc_info *x86_mcinfo_getptr(void);
> -void mc_panic(char *s);
> +void mc_panic(char *s) noreturn;

... on function declarations you put it at the end when it could go
equally well in the same place where it sits in function definitions.

This isn't meant to say that I don't approve of the patch, but I'd
like to ask you to consider being more consistent in the future.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:06:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHy2s-0003Xx-Kb; Mon, 24 Feb 2014 16:06:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHy2r-0003Xq-9R
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:06:17 +0000
Received: from [85.158.137.68:37375] by server-5.bemta-3.messagelabs.com id
	5A/46-04712-8FD6B035; Mon, 24 Feb 2014 16:06:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393257975!3890933!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22374 invoked from network); 24 Feb 2014 16:06:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 16:06:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 16:06:15 +0000
Message-Id: <530B7C01020000780011EE6F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 16:06:09 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/5] xen: Misc cleanup as a result of the
 previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
>      SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
>                   mfn_x(gl1mfn));
>      BUG();
> -    return 0; /* BUG() is no longer __attribute__((noreturn)). */
>  }

Did you check that this is fine with the whole range of compiler
versions we support? Also, s/no longer/now/ ?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:12:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:12:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHy8z-0003nu-FJ; Mon, 24 Feb 2014 16:12:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WHy8y-0003np-I1
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:12:36 +0000
Received: from [85.158.137.68:56175] by server-6.bemta-3.messagelabs.com id
	45/68-09180-37F6B035; Mon, 24 Feb 2014 16:12:35 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393258353!3859207!1
X-Originating-IP: [216.32.181.186]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21023 invoked from network); 24 Feb 2014 16:12:35 -0000
Received: from ch1ehsobe006.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.186)
	by server-8.tower-31.messagelabs.com with AES128-SHA encrypted SMTP;
	24 Feb 2014 16:12:35 -0000
Received: from mail216-ch1-R.bigfish.com (10.43.68.233) by
	CH1EHSOBE001.bigfish.com (10.43.70.51) with Microsoft SMTP Server id
	14.1.225.22; Mon, 24 Feb 2014 16:12:33 +0000
Received: from mail216-ch1 (localhost [127.0.0.1])	by
	mail216-ch1-R.bigfish.com (Postfix) with ESMTP id 541DA460141;
	Mon, 24 Feb 2014 16:12:33 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: -5
X-BigFish: VPS-5(z579ehzbb2dI98dI9371I1432I4015Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24ach24d7h2516h2545h255eh1155h)
Received: from mail216-ch1 (localhost.localdomain [127.0.0.1]) by mail216-ch1
	(MessageSwitch) id 1393258351162318_31616;
	Mon, 24 Feb 2014 16:12:31 +0000 (UTC)
Received: from CH1EHSMHS013.bigfish.com (snatpool1.int.messaging.microsoft.com
	[10.43.68.248])	by mail216-ch1.bigfish.com (Postfix) with ESMTP id
	22E1C48006A;	Mon, 24 Feb 2014 16:12:31 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by CH1EHSMHS013.bigfish.com
	(10.43.70.13) with Microsoft SMTP Server id 14.16.227.3;
	Mon, 24 Feb 2014 16:12:30 +0000
X-WSS-ID: 0N1ID0T-07-4IJ-02
X-M-MSG: 
Received: from satlvexedge01.amd.com (satlvexedge01.amd.com [10.177.96.28])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2FE3C12C0084;	Mon, 24 Feb 2014 10:12:28 -0600 (CST)
Received: from SATLEXDAG01.amd.com (10.181.40.3) by satlvexedge01.amd.com
	(10.177.96.28) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 24 Feb 2014 10:12:44 -0600
Received: from [127.0.0.1] (10.180.168.240) by SATLEXDAG01.amd.com
	(10.181.40.3) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 24 Feb 2014 11:12:28 -0500
Message-ID: <530B6F66.8070201@amd.com>
Date: Mon, 24 Feb 2014 10:12:22 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, George Dunlap <george.dunlap@eu.citrix.com>
References: <1386283126-2045-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52A19BC8020000780010ABFC@nat28.tlf.novell.com>
	<52A1F2B9.2070504@amd.com> <52A1F4AC.6020506@eu.citrix.com>
	<52A23410.6070906@amd.com>
	<52A58BCF020000780010B3F4@nat28.tlf.novell
	<52AF0E35.7070000@citrix.com>
In-Reply-To: <52AF0E35.7070000@citrix.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: "Lendacky, Thomas" <Thomas.Lendacky@amd.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"shurd@broadcom.com" <shurd@broadcom.com>, "Suthikulpanit,
	Suravee" <Suravee.Suthikulpanit@amd.com>, "Hurwitz,
	Sherry" <sherry.hurwitz@amd.com>
Subject: Re: [Xen-devel] [PATCH V8] ns16550: Add support for UART present in
 Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 12/16/2013 8:29 AM, Andrew Cooper wrote:
>
> It turns out that we have some of this hardware in our testing pool.
>
> Therefore,
>
> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> (by way of backport
> to 4.3)
>
> I would however agree that on the whole, it is probably too high-risk /
> low-reward  for inclusion in 4.4, but it should be fine for accepting as
> soon as the 4.5 window opens.
>
>

Ping..

Jan (or George), could you please confirm that, assuming there are no 
other concerns regarding the patch, it will be picked up automatically 
for 4.5?


Thanks,
-Aravind.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:15:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:15:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyBt-0003uK-3m; Mon, 24 Feb 2014 16:15:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WHyBr-0003uF-Du
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:15:35 +0000
Received: from [85.158.137.68:45524] by server-15.bemta-3.messagelabs.com id
	E8/F8-19263-6207B035; Mon, 24 Feb 2014 16:15:34 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393258532!3887938!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19643 invoked from network); 24 Feb 2014 16:15:34 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 16:15:34 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1OGFR6N014719
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 16:15:28 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OGFQ4g003387
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 24 Feb 2014 16:15:26 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OGFPNQ003364; Mon, 24 Feb 2014 16:15:26 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 08:15:25 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id B80A81C0FE0; Mon, 24 Feb 2014 11:15:24 -0500 (EST)
Date: Mon, 24 Feb 2014 11:15:24 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140224161524.GJ816@phenom.dumpdata.com>
References: <52DFC2DA0200007800115C79@nat28.tlf.novell.com>
	<20140122214034.GB9460@phenom.dumpdata.com>
	<52E0DFBB0200007800116041@nat28.tlf.novell.com>
	<20140124150128.GF12946@phenom.dumpdata.com>
	<52E2A0930200007800116CAE@nat28.tlf.novell.com>
	<20140124174349.GA15472@phenom.dumpdata.com>
	<20140124215652.GA18710@phenom.dumpdata.com>
	<20140205200708.GA9278@phenom.dumpdata.com>
	<20140221191833.GA8812@phenom.dumpdata.com>
	<530B1BD3020000780011E9B8@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530B1BD3020000780011E9B8@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Is: pci=assign-busses blows up Xen 4.4
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 09:15:47AM +0000, Jan Beulich wrote:
> >>> On 21.02.14 at 20:18, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> wrote:
> > 4). Make Xen do the bus re-assignment.
> > 
> > The attached patch is an interesting "solution" to the BIOS
> > not doing the right bus-extending with SR-IOV devices.
> 
> Nice that you got this to work, but this is definitely not the route
> to go: There's no way we can guarantee to do this re-numbering

<nods>

> on segments other than segment 0 (since we can't necessarily
> access the config spaces of the devices on other segments
> before Dom0 telling us necessary bits of information).
>
Correct.
 
> Apart from that, I'd also really like to avoid duplicating code from
> Linux into the hypervisor when all that is needed is making that
> code work right in an admittedly rather special case.

<nods>
> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:17:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyDY-00040F-J0; Mon, 24 Feb 2014 16:17:20 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WHyDX-000407-Is
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 16:17:20 +0000
Received: from [193.109.254.147:60416] by server-1.bemta-14.messagelabs.com id
	67/12-15438-E807B035; Mon, 24 Feb 2014 16:17:18 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393258637!6491425!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23631 invoked from network); 24 Feb 2014 16:17:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:17:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105237968"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 16:17:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 11:17:12 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WHyDP-0007ZY-Pd;
	Mon, 24 Feb 2014 16:17:11 +0000
Message-ID: <530B7086.2010309@eu.citrix.com>
Date: Mon, 24 Feb 2014 16:17:10 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Jackson <ian.jackson@eu.citrix.com>, <xen-devel@lists.xensource.com>
References: <alpine.GSO.2.00.1402241447430.16271@algedi.dur.ac.uk>
	<1393255564-3474-1-git-send-email-ian.jackson@eu.citrix.com>
In-Reply-To: <1393255564-3474-1-git-send-email-ian.jackson@eu.citrix.com>
X-DLP: MIA2
Cc: Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH] tools/console: reset tty when xenconsole
	fails
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 03:26 PM, Ian Jackson wrote:
> If xenconsole (the client program) fails, it calls err.  This would
> previously neglect to reset the user's terminal to sanity.  Use atexit
> to do so.
>
> This routinely happens in Xen 4.4 RC5 with pygrub because something
> writes the value "" to the tty xenstore key when using xenconsole.
> The cause of this is not yet known, but after this patch it just
> results in a harmless error message.
>
> Reported-by: M A Young <m.a.young@durham.ac.uk>
> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>

I'm pretty sure this has been here for quite a while -- possibly since 
4.3.  I've been working around it for some time, at any rate. :-)  So I 
think at this point we should save this for 4.4.1.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:20:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyG1-0004Fx-56; Mon, 24 Feb 2014 16:19:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHyFz-0004Fl-Pn
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:19:51 +0000
Received: from [85.158.143.35:59927] by server-2.bemta-4.messagelabs.com id
	83/78-10891-7217B035; Mon, 24 Feb 2014 16:19:51 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393258789!7903102!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11696 invoked from network); 24 Feb 2014 16:19:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:19:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103584520"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 16:19:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 11:19:46 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHyFp-0007jm-Nu;
	Mon, 24 Feb 2014 16:19:41 +0000
Message-ID: <530B711D.2080408@citrix.com>
Date: Mon, 24 Feb 2014 16:19:41 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
	<530B7B42020000780011EE5E@nat28.tlf.novell.com>
In-Reply-To: <530B7B42020000780011EE5E@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
 functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 16:02, Jan Beulich wrote:
>>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> This patch shows a somewhat undesirable inconsistency (having been
> present in, I think, less obvious ways in earlier patches too):
>
>> --- a/xen/arch/arm/shutdown.c
>> +++ b/xen/arch/arm/shutdown.c
>> @@ -11,7 +11,7 @@ static void raw_machine_reset(void)
>>      platform_reset();
>>  }
>>  
>> -static void halt_this_cpu(void *arg)
>> +static void noreturn halt_this_cpu(void *arg)
> For function definitions you place the attribute where I personally
> would expect it to be (iirc it can't go between the closing paren
> after the parameter declarations and the opening brace of the
> function body), yet ...

Hmm - I thought I had fixed all of these - I shall audit and respin.  I
certainly did intend to be consistent.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:23:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:23:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyJp-0004PS-Qd; Mon, 24 Feb 2014 16:23:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHyJn-0004PH-TT
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:23:48 +0000
Received: from [85.158.137.68:29332] by server-7.bemta-3.messagelabs.com id
	09/BE-13775-3127B035; Mon, 24 Feb 2014 16:23:47 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393259023!3853508!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13228 invoked from network); 24 Feb 2014 16:23:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:23:44 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105240052"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 16:23:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 11:23:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHyIu-0007nq-8C;
	Mon, 24 Feb 2014 16:22:52 +0000
Message-ID: <530B71DC.7090002@citrix.com>
Date: Mon, 24 Feb 2014 16:22:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
	<530B7C01020000780011EE6F@nat28.tlf.novell.com>
In-Reply-To: <530B7C01020000780011EE6F@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/5] xen: Misc cleanup as a result of the
 previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 16:06, Jan Beulich wrote:
>>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/mm/shadow/common.c
>> +++ b/xen/arch/x86/mm/shadow/common.c
>> @@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
>>      SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
>>                   mfn_x(gl1mfn));
>>      BUG();
>> -    return 0; /* BUG() is no longer __attribute__((noreturn)). */
>>  }
> Did you check that this is fine with the whole range of compiler
> versions we support? Also, s/no longer/now/ ?
>
> Jan
>

It was tested on GCC 4.1.1 from the old XenServer build system, which I
believe is the oldest GCC we support.

BUG() for a while lost noreturn, then had it reintroduced by Tim in
d2fd6f2b01ed0e "x86: mark BUG()s and assertion failures as terminal."

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:24:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyKK-0004To-Db; Mon, 24 Feb 2014 16:24:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WHyKI-0004TQ-9o
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:24:18 +0000
Received: from [85.158.137.68:35763] by server-10.bemta-3.messagelabs.com id
	E6/07-07302-1327B035; Mon, 24 Feb 2014 16:24:17 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393259054!2567042!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11847 invoked from network); 24 Feb 2014 16:24:16 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:24:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105240435"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 16:24:12 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 11:24:12 -0500
Message-ID: <1393259051.16570.101.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Viktor Kleinik <viktor.kleinik@globallogic.com>
Date: Mon, 24 Feb 2014 16:24:11 +0000
In-Reply-To: <CAM=aOxghm++oVRMoJDB7J+CH14jpNJTdynDwQ7ffWgR1x0+4sQ@mail.gmail.com>
References: <CAM=aOxghm++oVRMoJDB7J+CH14jpNJTdynDwQ7ffWgR1x0+4sQ@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Domain configuration for DomU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 15:55 +0000, Viktor Kleinik wrote:
> We are using Linux as Dom0 and Android as DomU on the OMAP5 processor 
> family. The Xen version is 4.4-rc2. We have our own devtree configuration 
> file for DomU. In our case the device tree binary is appended to the 
> zImage using the CONFIG_ARM_APPENDED_DTB option in the DomU kernel 
> config. As I understand it, Xen can detect that a DTB is appended to the 
> zImage for DomU, but it still generates its own devtree configuration via 
> the libxl__arch_domain_configure() call from libxl__build_pv() in 
> tools/libxl/libxl_dom.c, which is redundant work in that case.

This is somewhat similar to real hardware too -- the appended DTB would
override anything passed from the bootloader.

> At the same time we have some platform-specific configuration which 
> differs from the generic ARM configuration that Xen generates for us. 
> Why does Xen try to generate its own devtree even when a DTB appended 
> to the zImage can be detected?

I think it's just that nobody thought to stop it doing so. I'm not sure
how easy it would be to arrange anyway -- the component that knows about
the kernel and its appended DTB is a different library from the one that
creates the dtb.

> Should we expect modifications to the domain configuration 
> functions for cases where a DomU requires more platform-specific 
> configuration? Please share your thoughts.

I think that as features such as device passthrough get wired up properly
in the toolstack, the generated DTB should reflect this, if that is what
you are asking.

Appending a DTB really ought to become the exception not the rule.

Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:26:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyMZ-0004fg-VI; Mon, 24 Feb 2014 16:26:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHyMY-0004fW-Eq
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:26:38 +0000
Received: from [85.158.139.211:44836] by server-14.bemta-5.messagelabs.com id
	B7/79-27598-DB27B035; Mon, 24 Feb 2014 16:26:37 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393259196!5927312!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14996 invoked from network); 24 Feb 2014 16:26:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 16:26:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 16:26:36 +0000
Message-Id: <530B80C5020000780011EEBE@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 16:26:29 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1386283126-2045-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52A19BC8020000780010ABFC@nat28.tlf.novell.com>
	<52A1F2B9.2070504@amd.com>
	<52A1F4AC.6020506@eu.citrix.com> <52A23410.6070906@amd.com>
	<52A58BCF020000780010B3F4@nat28.tlf.novell
	<52AF0E35.7070000@citrix.com> <530B6F66.8070201@amd.com>
In-Reply-To: <530B6F66.8070201@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>,
	Sherry Hurwitz <sherry.hurwitz@amd.com>,
	"shurd@broadcom.com" <shurd@broadcom.com>
Subject: Re: [Xen-devel] [PATCH V8] ns16550: Add support for UART present in
 Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 17:12, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> On 12/16/2013 8:29 AM, Andrew Cooper wrote:
>>
>> It turns out that we have some of this hardware in our testing pool.
>>
>> Therefore,
>>
>> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> (by way of backport
>> to 4.3)
>>
>> I would however agree that on the whole, it is probably too high-risk /
>> low-reward for inclusion in 4.4, but it should be fine to accept as
>> soon as the 4.5 window opens.
>>
>>
> 
> Ping..
> 
> Jan (or George), could you please confirm that, if (hopefully) there 
> are no other concerns regarding the patch, it will be picked up 
> automatically for 4.5?

It's been well over two months since you posted this, and it still
doesn't have Keir's ack. The best course therefore is for you to re-post,
with Keir properly Cc-ed, and with Andrew's Tested-by added.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Subject: Re: [Xen-devel] [PATCH V8] ns16550: Add support for UART present in
 Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 17:12, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> On 12/16/2013 8:29 AM, Andrew Cooper wrote:
>>
>> It turns out that we have some of this hardware in our testing pool.
>>
>> Therefore,
>>
>> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com> (by way of backport
>> to 4.3)
>>
>> I would however agree that on the whole, it is probably too high-risk /
>> low-reward  for inclusion in 4.4, but it should be fine for accepting as
>> soon as the 4.5 window opens.
>>
>>
> 
> Ping..
> 
> Jan (or George), could you please clarify if (hopefully) there are no 
> other concerns regarding the patch, it would be picked up automatically 
> for 4.5?

It's been well over two months since you posted this, and it's still not
having Keir's ack. The best way therefore is for you to re-post, with
Keir properly Cc-ed, and with Andrew's Tested-by added.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:28:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:28:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyOk-0004p1-Ii; Mon, 24 Feb 2014 16:28:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHyOi-0004ot-T4
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 16:28:53 +0000
Received: from [85.158.143.35:18438] by server-3.bemta-4.messagelabs.com id
	CD/F3-11539-4437B035; Mon, 24 Feb 2014 16:28:52 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393259331!7920706!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3958 invoked from network); 24 Feb 2014 16:28:51 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:28:51 -0000
Received: by mail-wi0-f171.google.com with SMTP id cc10so3298664wib.4
	for <xen-devel@lists.xensource.com>;
	Mon, 24 Feb 2014 08:28:51 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=Rs2VP0+g4C2V6CWv3nAVqwvq7YCAfcPoKGEjP2WRR2o=;
	b=MhWlhR4oMmv3Dv0TMgww0zAETbK81a6vZ/lmuudOtAJHvJWz4DD4u6GI+Pl8Z9sk9q
	eoxtVexfKEj1YwK0R4SaPoGB6TbctiH/Wr84CgFwKDJLtKfHav/YmhVd0qPkZWEw3iiI
	cVE9kUb/Sq/7s4y+AKUL7UdI3S2rqiDzUR8iHFpT2B7AswhXVckhfpRUYFOCUO16sbk0
	H4AehY6XFw7uCV67+s/nKv5KNJ6Y7qhcrWOWwVREw3Ye5Z8af0L8fRpOppOId4OmCxKi
	jIxDotIWapxQ2tKqKHZLa7WmSOv8ID8hlB7/GFN3zDBTIKugqEybUcAZJB9KATpYUbjY
	m06Q==
MIME-Version: 1.0
X-Received: by 10.194.109.68 with SMTP id hq4mr20107046wjb.12.1393259331141;
	Mon, 24 Feb 2014 08:28:51 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 08:28:51 -0800 (PST)
In-Reply-To: <21259.27563.255108.357028@mariner.uk.xensource.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
	<21259.27563.255108.357028@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 16:28:51 +0000
X-Google-Sender-Auth: jrP6HKdmsADGFwp8gkpQvdCORNc
Message-ID: <CAFLBxZZV64KTF6ZU493bjamSiDYD1hfnr+niVWtSA4XNyU1piQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 3:56 PM, Ian Jackson <Ian.Jackson@eu.citrix.com> wrote:
> George Dunlap writes ("Re: [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec deadlock etc."):
>> So it looks like this path gets called from a number of other places in xl:
>>
>> libxl_postfork_child_noexec() is called by xl.c:postfork().
>
> In the old code the deadlock only happens when
> ctx->sigchld_user_registered (for the ctx passed to
> libxl_postfork_child_noexec).
>
> Because xl uses .chldowner = libxl_sigchld_owner_libxl rather than
> libxl_sigchld_owner_libxl_always, sigchld_user_registered is only true
> when libxl actually has a child process.
>
> In the single-threaded synchronous model used by xl for its
> long_running operations (ie always passing ao_how==0), libxl only has
> children _during_ the libxl call, when libxl calls back to the
> application with a progress callback.
>
> That's what happens in the pygrub case: xl needs to restart the
> console viewer.

Right.  Having chatted f2f with Ian, and looked through the old and
new code, I'm reasonably convinced that only the broken case can
actually be affected.

So:

Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>

We're also going to do an RC6 -- please do a smoke-test if you get a
chance.  (I'll send out a separate e-mail to the list as well.)

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:29:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyP1-0004qw-VT; Mon, 24 Feb 2014 16:29:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WHyP0-0004qf-4e
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:29:10 +0000
Received: from [85.158.137.68:36372] by server-2.bemta-3.messagelabs.com id
	8B/DB-06531-5537B035; Mon, 24 Feb 2014 16:29:09 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393259347!3841199!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1558 invoked from network); 24 Feb 2014 16:29:07 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 16:29:07 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Mon, 24 Feb 2014 16:29:07 +0000
Message-Id: <530B815D020000780011EEC1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Mon, 24 Feb 2014 16:29:01 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
	<530B7C01020000780011EE6F@nat28.tlf.novell.com>
	<530B71DC.7090002@citrix.com>
In-Reply-To: <530B71DC.7090002@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/5] xen: Misc cleanup as a result of the
 previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 17:22, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 24/02/14 16:06, Jan Beulich wrote:
>>>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> --- a/xen/arch/x86/mm/shadow/common.c
>>> +++ b/xen/arch/x86/mm/shadow/common.c
>>> @@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
>>>      SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
>>>                   mfn_x(gl1mfn));
>>>      BUG();
>>> -    return 0; /* BUG() is no longer __attribute__((noreturn)). */
>>>  }
>> Did you check that this is fine with the whole range of compiler
>> versions we support? Also, s/no longer/now/ ?
> 
> It was tested on GCC 4.1.1 from the old XenServer build system, which I
> believe is the oldest GCC we support.

Good, thanks.

> BUG() for a while lost noreturn, then had it reintroduced by Tim in
> d2fd6f2b01ed0e "x86: mark BUG()s and assertion failures as terminal."

Right. And I mistook the bogus comment to be added, when in fact
you remove it. I'm sorry.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:40:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:40:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyZo-0005M0-AI; Mon, 24 Feb 2014 16:40:20 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WHyZm-0005Lv-Jw
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 16:40:18 +0000
Received: from [85.158.139.211:45464] by server-8.bemta-5.messagelabs.com id
	76/89-05298-1F57B035; Mon, 24 Feb 2014 16:40:17 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393260015!5930294!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23102 invoked from network); 24 Feb 2014 16:40:16 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 16:40:16 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1OGeB7a016457
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 16:40:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OGeB03004178
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 24 Feb 2014 16:40:11 GMT
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1OGeAhO006944; Mon, 24 Feb 2014 16:40:10 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 08:40:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8CF9D1C0FE0; Mon, 24 Feb 2014 11:40:04 -0500 (EST)
Date: Mon, 24 Feb 2014 11:40:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Message-ID: <20140224164004.GO816@phenom.dumpdata.com>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
> Hi Konrad,
>
> Here's what I see when start a VM under xen using pciback to pass a pci-e device into domU.  The device can be seen by guest, and also functioning fine.  But it's not seen as a pci-e device, rather, it looks just like an ordinary pci device because only the first 0x100 bytes of its configuration space is accessible.  So if a driver needs to use data in the extended configuration space for certain features, it will fail.
>
> When you say you "did PCIe pass through of an VF of an SR-IOV device".  Are you actually using it as a pci-e device or have throttled it back to pci mode without aware of the difference?  If you did see the pci-e device in guest, can you share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output from guest?

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Cirrus Logic GD 5446
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
# lspci -t
-[0000:00]-+-00.0
           +-01.0
           +-01.1
           +-01.3
           +-02.0
           +-03.0
           \-04.0
# lspci -s 00:04.0 -xxxxx
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

-bash-4.1# more /vm-pci.cfg
builder='hvm'
disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
memory = 2048
boot="d"
maxvcpus=32
vcpus=1
serial='pty'
vnclisten="0.0.0.0"
name="latest"
pci = ["0000:02:10.0"]

>
> Also to echo your second comment:  I might still be a newbie in qemu field (I started working on this 4 months ago).  I thought the chipset limits what you can see/do in vm.  Ie.  If you have 440fx emulations then you can't have any pci-e devices (fake or passthru) in the same system.  Is that not true?
>
> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 5:32 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
>
>
> On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
> >
> > Hi Konrad,
> >
> > Thanks for your reply.
> >
> > Yes, I am aware of the pciback.  Unfortunately it doesn't seem to support pci-e passthrough. (I could be wrong here)
>
> I just did PCIe pass through of an VF of an SR-IOV device. It certainly is PCIe.
>
> >
> > There are two reason that I am interested in this.  For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time that it will become obsoleted.  The trend is clear that pci-e is taking over the world.
> >
>
> I am not sure what you are saying but it does not matter whether QEMU emulates 440fx or q35 for PCI pass through.
>
> > Regards/Eniac
> >
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, February 21, 2014 2:50 PM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> >
> > On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > > Hi all,
> > >
> > > I am playing with q35 chipset in qemu (1.6.1).  It seems we can't enable q35 machine under xen yet.  I made a few quick hacks which all fail miserably (linux kernel oops and window BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> > >
> > > Next question, vfio works very well for me in standalone qemu (with Linux host handling iommu), but is that supported under xen?  I haven't tried anything there yet because my gut-feeling is that it won't work.  Because passing vfio device to qemu can only be done on qemu commandline, and xen is not aware of this passing through device, thus not able to make iommu arrangement for this device.  Am I on the right track here?
> >
> > Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
> >
> > >
> > > I am interested in implementing both these two features.  I'd like to connect with anyone who's already on this so we don't duplicate the efforts.
> >
> > What do you need Q35 for?
> >
> > >
> > > Regards/Eniac
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 08:40:10 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 8CF9D1C0FE0; Mon, 24 Feb 2014 11:40:04 -0500 (EST)
Date: Mon, 24 Feb 2014 11:40:04 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
Message-ID: <20140224164004.GO816@phenom.dumpdata.com>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
> Hi Konrad,
>
> Here's what I see when I start a VM under xen using pciback to pass a
> pci-e device into domU. The device can be seen by the guest, and it is
> also functioning fine. But it's not seen as a pci-e device; rather, it
> looks just like an ordinary pci device, because only the first 0x100
> bytes of its configuration space are accessible. So if a driver needs
> data in the extended configuration space for certain features, it will
> fail.
>
> When you say you "did PCIe pass through of a VF of an SR-IOV device",
> are you actually using it as a pci-e device, or have you throttled it
> back to pci mode without being aware of the difference? If you did see
> the pci-e device in the guest, can you share your xl.cfg file as well
> as lspci / lspci -t / lspci -xxxx output from the guest?

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Cirrus Logic GD 5446
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
# lspci -t
-[0000:00]-+-00.0
           +-01.0
           +-01.1
           +-01.3
           +-02.0
           +-03.0
           \-04.0
# lspci -s 00:04.0 -xxxxx
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
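[Editor's note: the dump above covers only the first 0x100 bytes, yet the standard capability list inside that window already advertises a PCI Express capability (ID 0x10 at offset 0xa0), reached from the capabilities pointer at offset 0x34 via the MSI-X capability at 0x70. A minimal sketch, not part of the thread, that walks that chain; only the bytes relevant to the 82576 VF dump above are filled in.]

```python
# Walk the standard PCI capability list from an `lspci -xxxx` style dump.
# Byte 0x34 holds the capabilities pointer; each capability starts with
# (cap_id, next_ptr). A next_ptr of 0 terminates the chain.

CAP_NAMES = {0x11: "MSI-X", 0x10: "PCI Express"}

def walk_caps(cfg: bytes):
    """Yield capability IDs in chain order."""
    ptr = cfg[0x34]                        # capabilities pointer
    while ptr:
        cap_id, ptr = cfg[ptr], cfg[ptr + 1]
        yield cap_id

# Sparse reconstruction of the dump above (everything else is zero).
cfg = bytearray(256)
cfg[0x34] = 0x70                           # first capability at 0x70
cfg[0x70:0x72] = bytes([0x11, 0xa0])       # MSI-X, next at 0xa0
cfg[0xa0:0xa2] = bytes([0x10, 0x00])       # PCI Express, end of chain

print([CAP_NAMES.get(c, hex(c)) for c in walk_caps(bytes(cfg))])
# ['MSI-X', 'PCI Express']
```

So even with extended config space hidden, the guest-visible function still identifies itself as PCI Express through the ordinary capability list.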

-bash-4.1# more /vm-pci.cfg
builder='hvm'
disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
memory = 2048
boot="d"
maxvcpus=32
vcpus=1
serial='pty'
vnclisten="0.0.0.0"
name="latest"
pci = ["0000:02:10.0"]
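[Editor's note: for the pci = [...] line above to work, the device must first be made assignable in dom0, i.e. unbound from its host driver and bound to xen-pciback. A hedged sketch, assuming the standard Linux sysfs layout and the xen-pciback driver's `new_slot` attribute; it only computes the paths involved, it does not touch sysfs.]

```python
# Illustrative sketch (not from the thread): given the BDF used in the
# xl config above, compute the sysfs files a dom0 admin (or a tool like
# `xl pci-assignable-add`) writes the BDF string into to rebind the
# device to pciback. Assumes xen-pciback is loaded as driver "pciback".

def pciback_rebind_paths(bdf: str) -> dict:
    dom, bus, slotfn = bdf.split(":")
    slot, func = slotfn.split(".")
    # Sanity-check the dddd:bb:ss.f format before building paths.
    assert len(dom) == 4 and len(bus) == 2 and len(slot) == 2 and len(func) == 1
    return {
        "unbind":   f"/sys/bus/pci/devices/{bdf}/driver/unbind",
        "new_slot": "/sys/bus/pci/drivers/pciback/new_slot",
        "bind":     "/sys/bus/pci/drivers/pciback/bind",
    }

paths = pciback_rebind_paths("0000:02:10.0")
print(paths["unbind"])  # /sys/bus/pci/devices/0000:02:10.0/driver/unbind
```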

>
> Also, to echo your second comment: I might still be a newbie in the
> qemu field (I started working on this 4 months ago). I thought the
> chipset limits what you can see/do in the vm, i.e. if you have 440fx
> emulation then you can't have any pci-e devices (fake or passthrough)
> in the same system. Is that not true?
>
> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 5:32 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
>
>
> On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
> >
> > Hi Konrad,
> >
> > Thanks for your reply.
> >
> > Yes, I am aware of pciback. Unfortunately it doesn't seem to support
> > pci-e passthrough. (I could be wrong here.)
>
> I just did PCIe pass through of a VF of an SR-IOV device. It certainly
> is PCIe.
>
> >
> > There are two reasons I am interested in this. For one, my project
> > calls for pci-e device passthrough, which can't be accomplished with
> > 440fx chipset emulation. Secondly, I feel we ought to move on with
> > the technology; 440fx is ancient in computer terms. Qemu is good and
> > all that, but if it refuses to support pci-e natively then it's just
> > a matter of time before it becomes obsolete. The trend is clear that
> > pci-e is taking over the world.
> >
>
> I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI pass through.
>
> > Regards/Eniac
> >
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, February 21, 2014 2:50 PM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> >
> > On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > > Hi all,
> > >
> > > I am playing with the q35 chipset in qemu (1.6.1). It seems we can't enable the q35 machine under xen yet. I made a few quick hacks, which all failed miserably (Linux kernel oops and Windows BSOD). I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> > >
> > > Next question: vfio works very well for me in standalone qemu (with the Linux host handling the iommu), but is it supported under xen? I haven't tried anything there yet because my gut feeling is that it won't work: passing a vfio device to qemu can only be done on the qemu command line, and xen is not aware of the passed-through device, so it cannot make iommu arrangements for it. Am I on the right track here?
> >
> > Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
> >
> > >
> > > I am interested in implementing both of these features. I'd like to connect with anyone who's already working on this so we don't duplicate efforts.
> >
> > What do you need Q35 for?
> >
> > >
> > > Regards/Eniac
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 16:56:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 16:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHypH-0005bH-Uy; Mon, 24 Feb 2014 16:56:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WHypG-0005bC-A9
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 16:56:18 +0000
Received: from [193.109.254.147:51731] by server-2.bemta-14.messagelabs.com id
	80/C3-01236-1B97B035; Mon, 24 Feb 2014 16:56:17 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393260970!6454368!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 578 invoked from network); 24 Feb 2014 16:56:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 16:56:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105252587"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 16:56:09 +0000
Received: from [10.68.14.50] (10.68.14.50) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 11:56:08 -0500
Message-ID: <530B79A5.7010600@citrix.com>
Date: Mon, 24 Feb 2014 16:56:05 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Zoltan Kiss <zoltan.kiss@schaman.hu>, Ian Campbell
	<Ian.Campbell@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>	
	<1390253069-25507-3-git-send-email-zoltan.kiss@citrix.com>
	<1392745235.23084.60.camel@kazak.uk.xensource.com>
	<530925D5.4010800@schaman.hu>
In-Reply-To: <530925D5.4010800@schaman.hu>
X-Originating-IP: [10.68.14.50]
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	wei.liu2@citrix.com, linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 2/9] xen-netback: Change TX path
 from grant copy to mapping
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/02/14 22:33, Zoltan Kiss wrote:
> On 18/02/14 17:40, Ian Campbell wrote:
>> +     */
>>> +    skb->pfmemalloc    = false;
>>>   }
>>>
>>>   static int xenvif_get_extras(struct xenvif *vif,
>>> @@ -1372,7 +1341,7 @@ static bool tx_credit_exceeded(struct xenvif 
>>> *vif, unsigned size)
>>
>>> @@ -1581,7 +1535,11 @@ static int xenvif_tx_submit(struct xenvif *vif)
>>>           else if (txp->flags & XEN_NETTXF_data_validated)
>>>               skb->ip_summed = CHECKSUM_UNNECESSARY;
>>>
>>> -        xenvif_fill_frags(vif, skb);
>>> +        xenvif_fill_frags(vif,
>>> +                  skb,
>>> +                  skb_shinfo(skb)->destructor_arg ?
>>> +                  pending_idx :
>>> +                  INVALID_PENDING_IDX
>>
>> Couldn't xenvif_fill_frags calculate the 3rd argument itself, given
>> that it has the skb in hand?
> We still have to pass pending_idx, as it is no longer in skb->data. I
> have plans (I've already prototyped it, actually) to move that
> pending_idx from skb->data to skb->cb; if that happens, this won't be
> necessary.
> On the other hand, it makes more sense to just pass pending_idx, and
> in fill_frags we can decide based on destructor_arg whether we need it
> or not.
Actually, I've just moved the skb->cb patch to the beginning of this 
series, so we can completely omit that new parameter from fill_frags.

Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:03:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHywB-0005rF-Ro; Mon, 24 Feb 2014 17:03:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHyw9-0005rA-Sl
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 17:03:26 +0000
Received: from [193.109.254.147:55662] by server-11.bemta-14.messagelabs.com
	id F9/60-24604-D5B7B035; Mon, 24 Feb 2014 17:03:25 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393261402!6479197!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26646 invoked from network); 24 Feb 2014 17:03:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:03:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="103601884"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 17:03:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 12:03:13 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WHyvx-0007E6-6q;
	Mon, 24 Feb 2014 17:03:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WHyvx-0003yI-0b;
	Mon, 24 Feb 2014 17:03:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25284-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 17:03:13 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25284: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25284 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25284/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair          7 xen-boot/src_host         fail REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                cfbf8d4857c26a8a307fb7cd258074c9dcd8c691
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7066 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2391534 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2391534 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:07:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHyzX-0005zv-Lb; Mon, 24 Feb 2014 17:06:55 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WHyzW-0005zp-1t
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 17:06:54 +0000
Received: from [85.158.139.211:28016] by server-10.bemta-5.messagelabs.com id
	5F/6B-08578-D2C7B035; Mon, 24 Feb 2014 17:06:53 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393261602!5937242!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: ,
	async_handler: YXN5bmNfZGVsYXk6IDcwNjYzMjIgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11613 invoked from network); 24 Feb 2014 17:06:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:06:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; 
	d="asc'?scan'208";a="105256608"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:06:42 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:06:41 -0500
Message-ID: <1393261600.16485.74.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Charles <xiezhenjiang@foxmail.com>
Date: Mon, 24 Feb 2014 18:06:40 +0100
In-Reply-To: <tencent_1F08A5736F1E5981265AE0C8@qq.com>
References: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
	<1393004577.32038.843.camel@Solace>
	<tencent_1F08A5736F1E5981265AE0C8@qq.com>
Organization: Citrix
X-Mailer: Evolution 3.10.4 (3.10.4-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA2
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] confusions on monitoring VM cpu usage in
 Xenhypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7221696248721840474=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============7221696248721840474==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-JTG4NKrmmLQLUIc3+EtX"

--=-JTG4NKrmmLQLUIc3+EtX
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

[please, don't drop the list. Re-added]

On lun, 2014-02-24 at 10:30 +0800, Charles wrote:
> Thanks for your reply.
> 1. The VM usage I mean here is similar to the result of the command
> top in the VM.
>
Ok, but still, when running top inside the VM, which part are you
interested in? How busy the various vCPUs are, or what (as in what
process/OS component) is actually keeping them busy?

That matters because how busy they are is something that you can, to
some extent, see in Xen, as it is at least bound to how much the various
vCPUs want to run on the host's pCPUs, and that's the hypervisor
scheduler's job!

If you want to know what process they're running, when it started, what
its "priority" (inside the VM) is, etc., then this is something I don't
think you can easily make Xen aware of, unless you introduce some kind
of scheduler paravirtualization.

> 2. "use the usage information in Credit" means I want to use the usage
> information to guess the workload type running in the VM
>
Still too little info (see above). xentop already tells you whether a
particular vCPU is getting, say, 5% of pCPU time. Having something like
that inside Xen should not be that hard.

What xentop does not tell you is whether a particular vCPU, despite
getting 5%, is asking for more and only getting that much for whatever
reason. That has to do with the estimation of the system load that I was
mentioning, which is embedded in credit2 right now but could be
generalized.
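By way of a hedged illustration (the exact `xentop -b` column layout
differs across Xen versions, and the sample line below is invented, not
real output), that per-domain CPU percentage can be sampled
non-interactively and parsed along these lines:

```python
# Sketch: pull a domain's CPU% out of one line of `xentop -b` output.
# Assumptions: whitespace-separated columns with NAME first and CPU(%)
# fourth, as in common xentop builds; "guest1" and its numbers are
# made-up sample data, not real measurements.
def parse_xentop_line(line):
    fields = line.split()
    return fields[0], float(fields[3])  # (domain name, CPU percent)

sample = "guest1 --b--- 1234 5.2 524288 12.5 1048576 25.0 1 2"
name, cpu_pct = parse_xentop_line(sample)
```

In practice one would run something like `xentop -b -i 1` and feed each
data line through such a parser; but whether the 5% a vCPU gets is all
it wanted is, as said, not visible there.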

So, which one, if any, of the above are you after?

> To the best of my knowledge, in a physical OS the CPU may be idle or
> busy. Then the OS CPU usage can be computed as
> (busy_time / (idle_time + busy_time)).
> If the vCPU is running on the pCPU and it's consuming pCPU cycles all
> the time, does that mean idle_time = 0? Then how do I compute the CPU
> usage?
>
Err... from Xen's perspective, if a vCPU is always running, then
idle_time = 0, so (busy_time / (idle_time + busy_time)) = 1, which
matches the concept of being "always running".
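A minimal sketch of that arithmetic (hypothetical function and variable
names; where busy_time and idle_time actually come from depends on the
accounting source you use):

```python
# Sketch of the usage formula discussed above:
#   usage = busy_time / (idle_time + busy_time)
# A vCPU that never idles has idle_time == 0, giving usage == 1.0.
def cpu_usage(busy_time, idle_time):
    total = busy_time + idle_time
    return busy_time / total if total else 0.0  # guard: no time elapsed yet
```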

I'm sure I'm missing something of what you meant here...

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-JTG4NKrmmLQLUIc3+EtX
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEABECAAYFAlMLfCAACgkQk4XaBE3IOsTMdgCeLRQNGnhan93aVFkAPKasIWa6
MvsAoITAfTDk7R9WGf/jakdoZtUP2WWf
=taif
-----END PGP SIGNATURE-----

--=-JTG4NKrmmLQLUIc3+EtX--


--===============7221696248721840474==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7221696248721840474==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 17:09:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:09:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHz21-00066T-CK; Mon, 24 Feb 2014 17:09:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WHz1z-00066L-MT
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 17:09:27 +0000
Received: from [193.109.254.147:29792] by server-13.bemta-14.messagelabs.com
	id BE/71-01226-6CC7B035; Mon, 24 Feb 2014 17:09:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393261765!6455381!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3666 invoked from network); 24 Feb 2014 17:09:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:09:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105257616"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:09:24 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 12:09:24 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHz1w-0007GU-6o;
	Mon, 24 Feb 2014 17:09:24 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WHz1u-0001UF-Ux;
	Mon, 24 Feb 2014 17:09:22 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21259.31937.554787.159299@mariner.uk.xensource.com>
Date: Mon, 24 Feb 2014 17:09:21 +0000
To: <stefano.stabellini@eu.citrix.com>
In-Reply-To: <CAFLBxZZV64KTF6ZU493bjamSiDYD1hfnr+niVWtSA4XNyU1piQ@mail.gmail.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
	<21259.27563.255108.357028@mariner.uk.xensource.com>
	<CAFLBxZZV64KTF6ZU493bjamSiDYD1hfnr+niVWtSA4XNyU1piQ@mail.gmail.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap writes ("Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec deadlock etc."):
> We're also going to do an RC6 -- please do a smoke-test if you get a
> chance.  (I'll send out a separate e-mail to the list as well.)

Stefano, I wrote on IRC:

16:35 <Diziet> stefano_s: Can you make a tag for 4.4.0-rc6 at the same
               point as -rc5 in qemu-xen-upstream-4.4-testing.git (or
               whatever it's called) please ?
16:36 <Diziet> (and push the tag to both the master and staging/ of that tree)

It seems you're busy on a conference call so I'm going to do this
myself.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:24:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzGB-0006VB-0M; Mon, 24 Feb 2014 17:24:07 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WHzG9-0006V5-Di
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 17:24:05 +0000
Received: from [85.158.139.211:38842] by server-3.bemta-5.messagelabs.com id
	66/14-13671-3308B035; Mon, 24 Feb 2014 17:24:03 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393262642!5897028!1
X-Originating-IP: [74.125.82.170]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25099 invoked from network); 24 Feb 2014 17:24:02 -0000
Received: from mail-we0-f170.google.com (HELO mail-we0-f170.google.com)
	(74.125.82.170)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:24:02 -0000
Received: by mail-we0-f170.google.com with SMTP id w62so5010562wes.1
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 09:24:02 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=/Vf7JeNmonDVz7HBLSga/hsvkX6F7igT6pGo/DeTi/k=;
	b=BU63EjXVYngC18C1emVhrCWJbji66FV59s11GWySpLNRNTvUeWG3hs9o6bafDpPXS2
	o4dRIhAUnsrC6opAszqE79CJkcYmyUqkorYIBapEI6/EtlYrLWVzGS3JmxUkZCjXC0IK
	Mms0JjAtaSM1L4EarNUtENA45zFaKkK/4R6HPtlrSfRTw/RCywR0ZFWOUnqfMgfUMnSn
	nvuR4LVihE7kOomNdH4VDDE1JnplUWPatVMa1PPK3T99LTe1vTPcr4ZhbUUlrjsGQgzw
	3WX3H0xk6F1Jg6E7gblBQ88lOjPNaDhZqU0ujoBMpU65HYUBck37wTCSYju714w48VqW
	KnHg==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr19834339wjy.57.1393262641972;
	Mon, 24 Feb 2014 09:24:01 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 09:24:01 -0800 (PST)
In-Reply-To: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
References: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
Date: Mon, 24 Feb 2014 17:24:01 +0000
X-Google-Sender-Auth: cUwHrHQKgIrTrTNJOEces0K62ZM
Message-ID: <CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to
	subject domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 1:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> Not having got any satisfactory suggestions on the inquiry on how to
> determine the amount a PoD guest needs to balloon down by (see
> http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg01524.html
> and the thread following it), expose XENMEM_get_pod_target such that
> the guest can use it for this purpose.

So in theory the bug you're seeing has nothing to do with PoD -- it
just has to do with a different interpretation that the balloon driver
and Xen may have as to what "target" means.  Is that right?  The only
difference is that in the PoD case, not ballooning down enough can be
deadly to the domain; whereas in the non-PoD case, the worst that can
happen is that the toolstack has less memory left over on the host
than it may have expected.

I don't like the idea of exposing specific PoD information to the
guest -- PoD should be completely transparent to the guest.  If we
make it PoD-specific, we may end up with a different sized domain
depending on whether we booted with PoD mode or not.

Is the real problem that there's no way to determine the number of
potentially non-empty pfn ranges?  If we exposed the number of
non-empty p2m ranges (either ram or PoD), then the guest could compare
that to the target and balloon down as necessary, no?

> Also leverage some cleanup potential resulting from this change.

Cleanup should generally be done in separate patches, so that one
change can be reviewed at a time.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Date: Mon, 24 Feb 2014 17:24:01 +0000
X-Google-Sender-Auth: cUwHrHQKgIrTrTNJOEces0K62ZM
Message-ID: <CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
From: George Dunlap <dunlapg@umich.edu>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to
	subject domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 1:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
> Not having got any satisfactory suggestions on the inquiry on how to
> determine the amount a PoD guest needs to balloon down by (see
> http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg01524.html
> and the thread following it), expose XENMEM_get_pod_target such that
> the guest can use it for this purpose.

So in theory the bug you're seeing has nothing to do with PoD -- it
just has to do with a different interpretation that the balloon driver
and Xen may have as to what "target" means.  Is that right?  The only
difference is that in the PoD case, not ballooning down enough can be
deadly to the domain; whereas in the non-PoD case, the worst that can
happen is that the toolstack has less memory left over on the host
than it may have expected.

I don't like the idea of exposing specific PoD information to the
guest -- PoD should be completely transparent to the guest.  If we
make it PoD-specific, we may end up with a different sized domain
depending on whether we booted with PoD mode or not.

Is the real problem that there's no way to determine the number of
potentially non-empty pfn ranges?  If we exposed the number of
non-empty p2m ranges (either ram or PoD), then the guest could compare
that to the target and balloon down as necessary, no?

> Also leverage some cleanup potential resulting from this change.

Cleanup should generally be done in separate patches, so that one
change can be reviewed at a time.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:32:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:32:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzNu-0006jf-0h; Mon, 24 Feb 2014 17:32:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WHzNr-0006ja-B9
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 17:32:04 +0000
Received: from [85.158.139.211:56515] by server-9.bemta-5.messagelabs.com id
	8E/32-11237-2128B035; Mon, 24 Feb 2014 17:32:02 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393263119!5942084!1
X-Originating-IP: [209.85.220.43]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2500 invoked from network); 24 Feb 2014 17:32:01 -0000
Received: from mail-pa0-f43.google.com (HELO mail-pa0-f43.google.com)
	(209.85.220.43)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:32:01 -0000
Received: by mail-pa0-f43.google.com with SMTP id rd3so6850005pab.2
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 09:31:59 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=9XnoyyATQWY40/mqFFdPLf70XZ8d51syF7aUPTA/yq0=;
	b=PhL+7SBEZn/mXu7pCuuPqeFwIk6UpToS2cUYk+PWL+sMuDXfYF3/H5LfZOVKRAmBtD
	nA9HL+HmhdE8GST+55rirF8W4tAcb/NTSsvp0i4fD1dG0O3HrMwve0CjiJH9tXSweHJL
	44Yk69wadstiA5pb9Mk+XXdauDZ5YDUB2QwiKUNDOpoUS/O74Ow4/Y5tXuSbjcYlAIuO
	FmQMWAITMNY39fVjidTh9mFOfybtZmrXqbKoPHooPK0HHdRU7HUt7f/ZMfdaMxUQp0la
	Hr3INvvdc015LRRd9QDkBZ3j6ZVU3G6XbSsMvwAVqz49foFaSU9xdBXxDUaSZWomNkcu
	LciA==
MIME-Version: 1.0
X-Received: by 10.68.33.106 with SMTP id q10mr982148pbi.132.1393263119471;
	Mon, 24 Feb 2014 09:31:59 -0800 (PST)
Received: by 10.70.30.5 with HTTP; Mon, 24 Feb 2014 09:31:59 -0800 (PST)
In-Reply-To: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
References: <530B49AD020000780011EBBE@nat28.tlf.novell.com>
Date: Mon, 24 Feb 2014 17:31:59 +0000
X-Google-Sender-Auth: c4_PcdraR0gn6yNV0b95QmzdsBQ
Message-ID: <CAHqL=acJ_xJPENe1+E+yzmrBzda3juZHO2ZVM_Rg+O9nnOkgWA@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Cc: Jinsong Liu <jinsong.liu@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH RESEND 0/4] x86: enable xsave-based ISA
	extensions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2660679897349257623=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2660679897349257623==
Content-Type: multipart/alternative; boundary=bcaec520e913509ac404f32a5980

--bcaec520e913509ac404f32a5980
Content-Type: text/plain; charset=ISO-8859-1

On Mon, Feb 24, 2014 at 12:31 PM, Jan Beulich <JBeulich@suse.com> wrote:

> These are AVX-512 and MPX.
>
> 1: xsave: enable support for new ISA extensions
> 2: MPX IA32_BNDCFGS msr handle
> 3: generic MSRs save/restore
> 4: MSR_IA32_BNDCFGS save/restore
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
>
Acked-by: Keir Fraser <keir@xen.org>

--bcaec520e913509ac404f32a5980--


--===============2660679897349257623==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2660679897349257623==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVE-0006zr-Nq; Mon, 24 Feb 2014 17:39:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVC-0006yG-7b
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:38 +0000
Received: from [193.109.254.147:49034] by server-5.bemta-14.messagelabs.com id
	B0/3C-16688-9D38B035; Mon, 24 Feb 2014 17:39:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29256 invoked from network); 24 Feb 2014 17:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267927"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-Ak;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:19 +0000
Message-ID: <1393263564-14038-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 3/8] x86/xen: compactly store large identity
	ranges in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Large (multi-GB) identity ranges currently require a unique middle page
(filled with p2m_identity entries) per 1 GB region.

Similar to the common p2m_mid_missing middle page for large missing
regions, introduce a p2m_mid_identity page (filled with p2m_identity
entries) which can be used instead.

set_phys_range_identity() thus only needs to allocate new middle pages
at the beginning and end of the range.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |  155 +++++++++++++++++++++++++++++++++++-----------------
 1 files changed, 105 insertions(+), 50 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 4ed9138..66222c0 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -36,7 +36,7 @@
  *  pfn_to_mfn(0xc0000)=0xc0000
  *
  * The benefit of this is, that we can assume for non-RAM regions (think
- * PCI BARs, or ACPI spaces), we can create mappings easily b/c we
+ * PCI BARs, or ACPI spaces), we can create mappings easily because we
  * get the PFN value to match the MFN.
  *
  * For this to work efficiently we have one new page p2m_identity and
@@ -60,7 +60,7 @@
  * There is also a digram of the P2M at the end that can help.
  * Imagine your E820 looking as so:
  *
- *                    1GB                                           2GB
+ *                    1GB                                           2GB    4GB
  * /-------------------+---------\/----\         /----------\    /---+-----\
  * | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
  * \-------------------+---------/\----/         \----------/    \---+-----/
@@ -77,9 +77,8 @@
  * of the PFN and the end PFN (263424 and 512256 respectively). The first step
  * is to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
  * covers 512^2 of page estate (1GB) and in case the start or end PFN is not
- * aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn
- * to end pfn.  We reserve_brk top leaf pages if they are missing (means they
- * point to p2m_mid_missing).
+ * aligned on 512^2*PAGE_SIZE (1GB) we reserve_brk new middle and leaf pages as
+ * required to split any existing p2m_mid_missing middle pages.
  *
  * With the E820 example above, 263424 is not 1GB aligned so we allocate a
  * reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
@@ -88,7 +87,7 @@
  * Next stage is to determine if we need to do a more granular boundary check
  * on the 4MB (or 2MB depending on architecture) off the start and end pfn's.
  * We check if the start pfn and end pfn violate that boundary check, and if
- * so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
+ * so reserve_brk a (p2m[x][y]) leaf page. This way we have a much finer
  * granularity of setting which PFNs are missing and which ones are identity.
  * In our example 263424 and 512256 both fail the check so we reserve_brk two
  * pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing"
@@ -102,9 +101,10 @@
  *
  * The next step is to walk from the start pfn to the end pfn setting
  * the IDENTITY_FRAME_BIT on each PFN. This is done in set_phys_range_identity.
- * If we find that the middle leaf is pointing to p2m_missing we can swap it
- * over to p2m_identity - this way covering 4MB (or 2MB) PFN space.  At this
- * point we do not need to worry about boundary aligment (so no need to
+ * If we find that the middle entry is pointing to p2m_missing we can swap it
+ * over to p2m_identity - this way covering 4MB (or 2MB) PFN space (and
+ * similarly swapping p2m_mid_missing for p2m_mid_identity for larger regions).
+ * At this point we do not need to worry about boundary alignment (so no need to
  * reserve_brk a middle page, figure out which PFNs are "missing" and which
  * ones are identity), as that has been done earlier.  If we find that the
  * middle leaf is not occupied by p2m_identity or p2m_missing, we dereference
@@ -118,6 +118,9 @@
  * considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
  * contain the INVALID_P2M_ENTRY value and are considered "missing."
  *
+ * Finally, the region beyond the end of the E820 (4 GB in this example)
+ * is set to be identity (in case there are MMIO regions placed here).
+ *
  * This is what the p2m ends up looking (for the E820 above) with this
  * fabulous drawing:
  *
@@ -129,21 +132,27 @@
  *  |-----|    \                      | [p2m_identity]+\\    | ....            |
  *  |  2  |--\  \-------------------->|  ...          | \\   \----------------/
  *  |-----|   \                       \---------------/  \\
- *  |  3  |\   \                                          \\  p2m_identity
- *  |-----| \   \-------------------->/---------------\   /-----------------\
- *  | ..  +->+                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
- *  \-----/ /                         | [p2m_identity]+-->| ..., ~0         |
- *         / /---------------\        | ....          |   \-----------------/
- *        /  | IDENTITY[@0]  |      /-+-[x], ~0, ~0.. |
- *       /   | IDENTITY[@256]|<----/  \---------------/
- *      /    | ~0, ~0, ....  |
- *     |     \---------------/
- *     |
- *   p2m_mid_missing           p2m_missing
- * /-----------------\     /------------\
- * | [p2m_missing]   +---->| ~0, ~0, ~0 |
- * | [p2m_missing]   +---->| ..., ~0    |
- * \-----------------/     \------------/
+ *  |  3  |-\  \                                          \\  p2m_identity [1]
+ *  |-----|  \  \-------------------->/---------------\   /-----------------\
+ *  | ..  |\  |                       | [p2m_identity]+-->| ~0, ~0, ~0, ... |
+ *  \-----/ | |                       | [p2m_identity]+-->| ..., ~0         |
+ *          | |                       | ....          |   \-----------------/
+ *          | |                       +-[x], ~0, ~0.. +\
+ *          | |                       \---------------/ \
+ *          | |                                          \-> /---------------\
+ *          | V  p2m_mid_missing       p2m_missing           | IDENTITY[@0]  |
+ *          | /-----------------\     /------------\         | IDENTITY[@256]|
+ *          | | [p2m_missing]   +---->| ~0, ~0, ...|         | ~0, ~0, ....  |
+ *          | | [p2m_missing]   +---->| ..., ~0    |         \---------------/
+ *          | | ...             |     \------------/
+ *          | \-----------------/
+ *          |
+ *          |     p2m_mid_identity 
+ *          |   /-----------------\     
+ *          \-->| [p2m_identity]  +---->[1]
+ *              | [p2m_identity]  +---->[1]
+ *              | ...             |
+ *              \-----------------/
  *
  * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
  */
@@ -187,13 +196,15 @@ static RESERVE_BRK_ARRAY(unsigned long, p2m_top_mfn, P2M_TOP_PER_PAGE);
 static RESERVE_BRK_ARRAY(unsigned long *, p2m_top_mfn_p, P2M_TOP_PER_PAGE);
 
 static RESERVE_BRK_ARRAY(unsigned long, p2m_identity, P2M_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_identity, P2M_MID_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long, p2m_mid_identity_mfn, P2M_MID_PER_PAGE);
 
 RESERVE_BRK(p2m_mid, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 
 /* We might hit two boundary violations at the start and end, at max each
  * boundary violation will require three middle nodes. */
-RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
+RESERVE_BRK(p2m_mid_extra, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
@@ -242,20 +253,20 @@ static void p2m_top_mfn_p_init(unsigned long **top)
 		top[i] = p2m_mid_missing_mfn;
 }
 
-static void p2m_mid_init(unsigned long **mid)
+static void p2m_mid_init(unsigned long **mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = p2m_missing;
+		mid[i] = leaf;
 }
 
-static void p2m_mid_mfn_init(unsigned long *mid)
+static void p2m_mid_mfn_init(unsigned long *mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = virt_to_mfn(p2m_missing);
+		mid[i] = virt_to_mfn(leaf);
 }
 
 static void p2m_init(unsigned long *p2m)
@@ -286,7 +297,9 @@ void __ref xen_build_mfn_list_list(void)
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_identity_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 
 		p2m_top_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
 		p2m_top_mfn_p_init(p2m_top_mfn_p);
@@ -295,7 +308,8 @@ void __ref xen_build_mfn_list_list(void)
 		p2m_top_mfn_init(p2m_top_mfn);
 	} else {
 		/* Reinitialise, mfn's all change after migration */
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 	}
 
 	for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += P2M_PER_PAGE) {
@@ -327,7 +341,7 @@ void __ref xen_build_mfn_list_list(void)
 			 * it too late.
 			 */
 			mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_mfn_init(mid_mfn_p);
+			p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 			p2m_top_mfn_p[topidx] = mid_mfn_p;
 		}
@@ -365,16 +379,17 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_init(p2m_missing);
+	p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_init(p2m_identity);
 
 	p2m_mid_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_mid_init(p2m_mid_missing);
+	p2m_mid_init(p2m_mid_missing, p2m_missing);
+	p2m_mid_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_mid_init(p2m_mid_identity, p2m_identity);
 
 	p2m_top = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_top_init(p2m_top);
 
-	p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_init(p2m_identity);
-
 	/*
 	 * The domain builder gives us a pre-constructed p2m array in
 	 * mfn_list for all the pages initially given to us, so we just
@@ -386,7 +401,7 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 		if (p2m_top[topidx] == p2m_mid_missing) {
 			unsigned long **mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_init(mid);
+			p2m_mid_init(mid, p2m_missing);
 
 			p2m_top[topidx] = mid;
 		}
@@ -545,7 +560,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid)
 			return false;
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		if (cmpxchg(top_p, p2m_mid_missing, mid) != p2m_mid_missing)
 			free_p2m_page(mid);
@@ -565,7 +580,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid_mfn)
 			return false;
 
-		p2m_mid_mfn_init(mid_mfn);
+		p2m_mid_mfn_init(mid_mfn, p2m_missing);
 
 		missing_mfn = virt_to_mfn(p2m_mid_missing_mfn);
 		mid_mfn_mfn = virt_to_mfn(mid_mfn);
@@ -649,7 +664,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	if (mid == p2m_mid_missing) {
 		mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		p2m_top[topidx] = mid;
 
@@ -658,7 +673,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	/* And the save/restore P2M tables.. */
 	if (mid_mfn_p == p2m_mid_missing_mfn) {
 		mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(mid_mfn_p);
+		p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
@@ -769,6 +784,24 @@ bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	return true;
 }
+
+static void __init early_split_p2m(unsigned long pfn)
+{
+	unsigned long mididx, idx;
+
+	mididx = p2m_mid_index(pfn);
+	idx = p2m_index(pfn);
+
+	/*
+	 * Allocate new middle and leaf pages if this pfn lies in the
+	 * middle of one.
+	 */
+	if (mididx || idx)
+		early_alloc_p2m_middle(pfn);
+	if (idx)
+		early_alloc_p2m(pfn, false);
+}
+
 unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 				      unsigned long pfn_e)
 {
@@ -786,19 +819,27 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_e > MAX_P2M_PFN)
 		pfn_e = MAX_P2M_PFN;
 
-	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
-		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
-		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-	{
-		WARN_ON(!early_alloc_p2m(pfn));
-	}
+	early_split_p2m(pfn_s);
+	early_split_p2m(pfn_e);
 
-	early_alloc_p2m_middle(pfn_s, true);
-	early_alloc_p2m_middle(pfn_e, true);
+	for (pfn = pfn_s; pfn < pfn_e;) {
+		unsigned topidx = p2m_top_index(pfn);
+		unsigned mididx = p2m_mid_index(pfn);
 
-	for (pfn = pfn_s; pfn < pfn_e; pfn++)
 		if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
 			break;
+		pfn++;
+
+		/*
+		 * If the PFN was set to a middle or leaf identity
+		 * page the remainder must also be identity, so skip
+		 * ahead to the next middle or leaf entry.
+		 */
+		if (p2m_top[topidx] == p2m_mid_identity)
+			pfn = ALIGN(pfn, P2M_MID_PER_PAGE * P2M_PER_PAGE);
+		else if (p2m_top[topidx][mididx] == p2m_identity)
+			pfn = ALIGN(pfn, P2M_PER_PAGE);
+	}
 
 	if (!WARN((pfn - pfn_s) != (pfn_e - pfn_s),
 		"Identity mapping failed. We are %ld short of 1-1 mappings!\n",
@@ -828,8 +869,22 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	/* For sparse holes were the p2m leaf has real PFN along with
 	 * PCI holes, stick in the PFN as the MFN value.
+	 *
+	 * set_phys_range_identity() will have allocated new middle
+	 * and leaf pages as required, so an existing p2m_mid_missing
+	 * or p2m_missing means that the whole range will be identity,
+	 * and these can be switched to p2m_mid_identity or p2m_identity.
 	 */
 	if (mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)) {
+		if (p2m_top[topidx] == p2m_mid_identity)
+			return true;
+
+		if (p2m_top[topidx] == p2m_mid_missing) {
+			WARN_ON(cmpxchg(&p2m_top[topidx], p2m_mid_missing,
+					p2m_mid_identity) != p2m_mid_missing);
+			return true;
+		}
+
 		if (p2m_top[topidx][mididx] == p2m_identity)
 			return true;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVB-0006y8-CP; Mon, 24 Feb 2014 17:39:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVA-0006xp-4k
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:36 +0000
Received: from [193.109.254.147:48843] by server-5.bemta-14.messagelabs.com id
	89/1C-16688-7D38B035; Mon, 24 Feb 2014 17:39:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29043 invoked from network); 24 Feb 2014 17:39:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267921"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-Cd;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:22 +0000
Message-ID: <1393263564-14038-7-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 6/8] x86/xen: do not use _PAGE_IOMAP in
	xen_remap_domain_mfn_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

_PAGE_IOMAP is used in xen_remap_domain_mfn_range() to prevent the
pfn_pte() call in remap_area_mfn_pte_fn() from using the p2m to translate
the MFN.  If mfn_pte() is used instead, the p2m lookup is avoided and
the use of _PAGE_IOMAP is no longer needed.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 256282e..b36116e 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2523,7 +2523,7 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(mfn_pte(rmd->mfn++, rmd->prot));
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2548,8 +2548,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVF-00070o-VL; Mon, 24 Feb 2014 17:39:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVD-0006yk-Ae
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:39 +0000
Received: from [85.158.143.35:14348] by server-1.bemta-4.messagelabs.com id
	22/48-31661-AD38B035; Mon, 24 Feb 2014 17:39:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393263574!7937107!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25116 invoked from network); 24 Feb 2014 17:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267926"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-By;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:21 +0000
Message-ID: <1393263564-14038-6-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 5/8] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM, so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |    9 +++++++++
 2 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 66222c0..bd348aa 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -507,7 +507,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2afe55e..210426a 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -469,6 +469,15 @@ char * __init xen_memory_setup(void)
 	}
 
 	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(map[i-1].addr / PAGE_SIZE, ~0ul);
+
+	/*
 	 * In domU, the ISA region is normal, usable memory, but we
 	 * reserve ISA memory anyway because too many things poke
 	 * about in there.
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVB-0006y1-13; Mon, 24 Feb 2014 17:39:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzV9-0006xk-7a
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:35 +0000
Received: from [193.109.254.147:3085] by server-5.bemta-14.messagelabs.com id
	87/1C-16688-6D38B035; Mon, 24 Feb 2014 17:39:34 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28972 invoked from network); 24 Feb 2014 17:39:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267920"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-7h;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:16 +0000
Message-ID: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCHv5 0/8] x86/xen: fixes for mapping high MMIO
	regions (and remove _PAGE_IOMAP)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[ x86 maintainers, this is predominantly a Xen series but the end
result is that the _PAGE_IOMAP PTE flag is removed. See patch #8. ]

This is a fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were specifying
_PAGE_IOMAP, which meant no valid MFN could be found and the resulting
PTEs would be set as not present, causing subsequent faults.

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
region after the end of the E820 map and the region beyond the end of
the p2m.  Ballooned frames are still marked as missing in the p2m as
before.

As a follow-on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
not use the _PAGE_IOMAP PTE flag, and MFN-to-PFN and PFN-to-MFN
translations will now do the right thing for all I/O regions.  This
means the Xen-specific _PAGE_IOMAP can be removed.

This series has been tested (in dom0) on all unique machines we have
in our test lab (~100 machines), some of which have PCI devices with
BARs above the end of RAM.

Note this does not fix a 32-bit dom0 trying to access BARs above 16 TB,
as that is caused by MFNs/PFNs being limited to 32 bits (unsigned
long).

You may find it useful to apply patch #3 to more easily review the
updated p2m diagram.

Changes in v5:
- improve performance of set_phys_range_identity() by not iterating
  over all pages if p2m_mid_identity or p2m_identity mid or leaves are
  already present. (Thanks to Andrew Cooper for reporting this and
  Frediano Ziglio for providing a fix.)

Changes in v4:
- fix p2m_mid_identity initialization.

Changes in v3 (not posted):
- use correct end of e820
- fix xen_remap_domain_mfn_range()

Changes in v2:
- fix to actually set end-of-RAM to 512 GiB region as 1:1.
- introduce p2m_mid_identity to efficiently store large 1:1 regions.
- Split the _PAGE_IOMAP patch into Xen and generic x86 halves.

David



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVE-0006zW-Ay; Mon, 24 Feb 2014 17:39:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVC-0006yH-BT
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:38 +0000
Received: from [85.158.143.35:14369] by server-3.bemta-4.messagelabs.com id
	35/9A-11539-9D38B035; Mon, 24 Feb 2014 17:39:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393263575!7948369!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8875 invoked from network); 24 Feb 2014 17:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267930"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-DZ;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:23 +0000
Message-ID: <1393263564-14038-8-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 7/8] x86/xen: do not use _PAGE_IOMAP PTE flag
	for I/O mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Since mfn_to_pfn() returns the correct PFN for identity mappings (as
used for MMIO regions), the use of _PAGE_IOMAP is not required in
pte_mfn_to_pfn().

Do not set the _PAGE_IOMAP flag in pte_pfn_to_mfn() and do not use it
in pte_mfn_to_pfn().

This will allow _PAGE_IOMAP to be removed, making it available for
future use.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |   48 ++++--------------------------------------------
 1 files changed, 4 insertions(+), 44 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b36116e..2916662 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -399,38 +399,14 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
 			mfn = 0;
 			flags = 0;
-		} else {
-			/*
-			 * Paramount to do this test _after_ the
-			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
-			 * IDENTITY_FRAME_BIT resolves to true.
-			 */
-			mfn &= ~FOREIGN_FRAME_BIT;
-			if (mfn & IDENTITY_FRAME_BIT) {
-				mfn &= ~IDENTITY_FRAME_BIT;
-				flags |= _PAGE_IOMAP;
-			}
-		}
+		} else
+			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
-static pteval_t iomap_pte(pteval_t val)
-{
-	if (val & _PAGE_PRESENT) {
-		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
-		pteval_t flags = val & PTE_FLAGS_MASK;
-
-		/* We assume the pte frame number is a MFN, so
-		   just use it as-is. */
-		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
-	}
-
-	return val;
-}
-
 __visible pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
@@ -441,9 +417,6 @@ __visible pteval_t xen_pte_val(pte_t pte)
 		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
 	}
 #endif
-	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
-		return pteval;
-
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
 
 __visible pte_t xen_make_pte(pteval_t pte)
 {
-	phys_addr_t addr = (pte & PTE_PFN_MASK);
 #if 0
 	/* If Linux is trying to set a WC pte, then map to the Xen WC.
 	 * If _PAGE_PAT is set, then it probably means it is really
@@ -496,19 +468,7 @@ __visible pte_t xen_make_pte(pteval_t pte)
 			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
 	}
 #endif
-	/*
-	 * Unprivileged domains are allowed to do IOMAPpings for
-	 * PCI passthrough, but not map ISA space.  The ISA
-	 * mappings are just dummy local mappings to keep other
-	 * parts of the kernel happy.
-	 */
-	if (unlikely(pte & _PAGE_IOMAP) &&
-	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
-		pte = iomap_pte(pte);
-	} else {
-		pte &= ~_PAGE_IOMAP;
-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2096,7 +2056,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVE-0006zr-Nq; Mon, 24 Feb 2014 17:39:40 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVC-0006yG-7b
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:38 +0000
Received: from [193.109.254.147:49034] by server-5.bemta-14.messagelabs.com id
	B0/3C-16688-9D38B035; Mon, 24 Feb 2014 17:39:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!5
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29256 invoked from network); 24 Feb 2014 17:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267927"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-Ak;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:19 +0000
Message-ID: <1393263564-14038-4-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 3/8] x86/xen: compactly store large identity
	ranges in the p2m
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Large (multi-GB) identity ranges currently require a unique middle page
(filled with p2m_identity entries) per 1 GB region.

Similar to the common p2m_mid_missing middle page for large missing
regions, introduce a p2m_mid_identity page (filled with p2m_identity
entries) which can be used instead.

set_phys_range_identity() thus only needs to allocate new middle pages
at the beginning and end of the range.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |  155 +++++++++++++++++++++++++++++++++++-----------------
 1 files changed, 105 insertions(+), 50 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 4ed9138..66222c0 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -36,7 +36,7 @@
  *  pfn_to_mfn(0xc0000)=0xc0000
  *
  * The benefit of this is, that we can assume for non-RAM regions (think
- * PCI BARs, or ACPI spaces), we can create mappings easily b/c we
+ * PCI BARs, or ACPI spaces), we can create mappings easily because we
  * get the PFN value to match the MFN.
  *
  * For this to work efficiently we have one new page p2m_identity and
@@ -60,7 +60,7 @@
  * There is also a digram of the P2M at the end that can help.
  * Imagine your E820 looking as so:
  *
- *                    1GB                                           2GB
+ *                    1GB                                           2GB    4GB
  * /-------------------+---------\/----\         /----------\    /---+-----\
  * | System RAM        | Sys RAM ||ACPI|         | reserved |    | Sys RAM |
  * \-------------------+---------/\----/         \----------/    \---+-----/
@@ -77,9 +77,8 @@
  * of the PFN and the end PFN (263424 and 512256 respectively). The first step
  * is to reserve_brk a top leaf page if the p2m[1] is missing. The top leaf page
  * covers 512^2 of page estate (1GB) and in case the start or end PFN is not
- * aligned on 512^2*PAGE_SIZE (1GB) we loop on aligned 1GB PFNs from start pfn
- * to end pfn.  We reserve_brk top leaf pages if they are missing (means they
- * point to p2m_mid_missing).
+ * aligned on 512^2*PAGE_SIZE (1GB) we reserve_brk new middle and leaf pages as
+ * required to split any existing p2m_mid_missing middle pages.
  *
  * With the E820 example above, 263424 is not 1GB aligned so we allocate a
  * reserve_brk page which will cover the PFNs estate from 0x40000 to 0x80000.
@@ -88,7 +87,7 @@
  * Next stage is to determine if we need to do a more granular boundary check
  * on the 4MB (or 2MB depending on architecture) off the start and end pfn's.
  * We check if the start pfn and end pfn violate that boundary check, and if
- * so reserve_brk a middle (p2m[x][y]) leaf page. This way we have a much finer
+ * so reserve_brk a (p2m[x][y]) leaf page. This way we have a much finer
  * granularity of setting which PFNs are missing and which ones are identity.
  * In our example 263424 and 512256 both fail the check so we reserve_brk two
  * pages. Populate them with INVALID_P2M_ENTRY (so they both have "missing"
@@ -102,9 +101,10 @@
  *
  * The next step is to walk from the start pfn to the end pfn setting
  * the IDENTITY_FRAME_BIT on each PFN. This is done in set_phys_range_identity.
- * If we find that the middle leaf is pointing to p2m_missing we can swap it
- * over to p2m_identity - this way covering 4MB (or 2MB) PFN space.  At this
- * point we do not need to worry about boundary aligment (so no need to
+ * If we find that the middle entry is pointing to p2m_missing we can swap it
+ * over to p2m_identity - this way covering 4MB (or 2MB) PFN space (and
+ * similarly swapping p2m_mid_missing for p2m_mid_identity for larger regions).
+ * At this point we do not need to worry about boundary alignment (so no need to
  * reserve_brk a middle page, figure out which PFNs are "missing" and which
  * ones are identity), as that has been done earlier.  If we find that the
  * middle leaf is not occupied by p2m_identity or p2m_missing, we dereference
@@ -118,6 +118,9 @@
  * considered missing). In our case, p2m[1][2][0->255] and p2m[1][488][257->511]
  * contain the INVALID_P2M_ENTRY value and are considered "missing."
  *
+ * Finally, the region beyond the end of the E820 (4 GB in this example)
+ * is set to be identity (in case there are MMIO regions placed here).
+ *
  * This is what the p2m ends up looking (for the E820 above) with this
  * fabulous drawing:
  *
@@ -129,21 +132,27 @@
  *  |-----|    \                      | [p2m_identity]+\\    | ....            |
  *  |  2  |--\  \-------------------->|  ...          | \\   \----------------/
  *  |-----|   \                       \---------------/  \\
- *  |  3  |\   \                                          \\  p2m_identity
- *  |-----| \   \-------------------->/---------------\   /-----------------\
- *  | ..  +->+                        | [p2m_identity]+-->| ~0, ~0, ~0, ... |
- *  \-----/ /                         | [p2m_identity]+-->| ..., ~0         |
- *         / /---------------\        | ....          |   \-----------------/
- *        /  | IDENTITY[@0]  |      /-+-[x], ~0, ~0.. |
- *       /   | IDENTITY[@256]|<----/  \---------------/
- *      /    | ~0, ~0, ....  |
- *     |     \---------------/
- *     |
- *   p2m_mid_missing           p2m_missing
- * /-----------------\     /------------\
- * | [p2m_missing]   +---->| ~0, ~0, ~0 |
- * | [p2m_missing]   +---->| ..., ~0    |
- * \-----------------/     \------------/
+ *  |  3  |-\  \                                          \\  p2m_identity [1]
+ *  |-----|  \  \-------------------->/---------------\   /-----------------\
+ *  | ..  |\  |                       | [p2m_identity]+-->| ~0, ~0, ~0, ... |
+ *  \-----/ | |                       | [p2m_identity]+-->| ..., ~0         |
+ *          | |                       | ....          |   \-----------------/
+ *          | |                       +-[x], ~0, ~0.. +\
+ *          | |                       \---------------/ \
+ *          | |                                          \-> /---------------\
+ *          | V  p2m_mid_missing       p2m_missing           | IDENTITY[@0]  |
+ *          | /-----------------\     /------------\         | IDENTITY[@256]|
+ *          | | [p2m_missing]   +---->| ~0, ~0, ...|         | ~0, ~0, ....  |
+ *          | | [p2m_missing]   +---->| ..., ~0    |         \---------------/
+ *          | | ...             |     \------------/
+ *          | \-----------------/
+ *          |
+ *          |     p2m_mid_identity 
+ *          |   /-----------------\     
+ *          \-->| [p2m_identity]  +---->[1]
+ *              | [p2m_identity]  +---->[1]
+ *              | ...             |
+ *              \-----------------/
  *
  * where ~0 is INVALID_P2M_ENTRY. IDENTITY is (PFN | IDENTITY_BIT)
  */
@@ -187,13 +196,15 @@ static RESERVE_BRK_ARRAY(unsigned long, p2m_top_mfn, P2M_TOP_PER_PAGE);
 static RESERVE_BRK_ARRAY(unsigned long *, p2m_top_mfn_p, P2M_TOP_PER_PAGE);
 
 static RESERVE_BRK_ARRAY(unsigned long, p2m_identity, P2M_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long *, p2m_mid_identity, P2M_MID_PER_PAGE);
+static RESERVE_BRK_ARRAY(unsigned long, p2m_mid_identity_mfn, P2M_MID_PER_PAGE);
 
 RESERVE_BRK(p2m_mid, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 RESERVE_BRK(p2m_mid_mfn, PAGE_SIZE * (MAX_DOMAIN_PAGES / (P2M_PER_PAGE * P2M_MID_PER_PAGE)));
 
 /* We might hit two boundary violations at the start and end, at max each
  * boundary violation will require three middle nodes. */
-RESERVE_BRK(p2m_mid_identity, PAGE_SIZE * 2 * 3);
+RESERVE_BRK(p2m_mid_extra, PAGE_SIZE * 2 * 3);
 
 /* When we populate back during bootup, the amount of pages can vary. The
  * max we have is seen is 395979, but that does not mean it can't be more.
@@ -242,20 +253,20 @@ static void p2m_top_mfn_p_init(unsigned long **top)
 		top[i] = p2m_mid_missing_mfn;
 }
 
-static void p2m_mid_init(unsigned long **mid)
+static void p2m_mid_init(unsigned long **mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = p2m_missing;
+		mid[i] = leaf;
 }
 
-static void p2m_mid_mfn_init(unsigned long *mid)
+static void p2m_mid_mfn_init(unsigned long *mid, unsigned long *leaf)
 {
 	unsigned i;
 
 	for (i = 0; i < P2M_MID_PER_PAGE; i++)
-		mid[i] = virt_to_mfn(p2m_missing);
+		mid[i] = virt_to_mfn(leaf);
 }
 
 static void p2m_init(unsigned long *p2m)
@@ -286,7 +297,9 @@ void __ref xen_build_mfn_list_list(void)
 	/* Pre-initialize p2m_top_mfn to be completely missing */
 	if (p2m_top_mfn == NULL) {
 		p2m_mid_missing_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_identity_mfn = extend_brk(PAGE_SIZE, PAGE_SIZE);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 
 		p2m_top_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
 		p2m_top_mfn_p_init(p2m_top_mfn_p);
@@ -295,7 +308,8 @@ void __ref xen_build_mfn_list_list(void)
 		p2m_top_mfn_init(p2m_top_mfn);
 	} else {
 		/* Reinitialise, mfn's all change after migration */
-		p2m_mid_mfn_init(p2m_mid_missing_mfn);
+		p2m_mid_mfn_init(p2m_mid_missing_mfn, p2m_missing);
+		p2m_mid_mfn_init(p2m_mid_identity_mfn, p2m_identity);
 	}
 
 	for (pfn = 0; pfn < xen_max_p2m_pfn; pfn += P2M_PER_PAGE) {
@@ -327,7 +341,7 @@ void __ref xen_build_mfn_list_list(void)
 			 * it too late.
 			 */
 			mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_mfn_init(mid_mfn_p);
+			p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 			p2m_top_mfn_p[topidx] = mid_mfn_p;
 		}
@@ -365,16 +379,17 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 	p2m_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_init(p2m_missing);
+	p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_init(p2m_identity);
 
 	p2m_mid_missing = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_mid_init(p2m_mid_missing);
+	p2m_mid_init(p2m_mid_missing, p2m_missing);
+	p2m_mid_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
+	p2m_mid_init(p2m_mid_identity, p2m_identity);
 
 	p2m_top = extend_brk(PAGE_SIZE, PAGE_SIZE);
 	p2m_top_init(p2m_top);
 
-	p2m_identity = extend_brk(PAGE_SIZE, PAGE_SIZE);
-	p2m_init(p2m_identity);
-
 	/*
 	 * The domain builder gives us a pre-constructed p2m array in
 	 * mfn_list for all the pages initially given to us, so we just
@@ -386,7 +401,7 @@ void __init xen_build_dynamic_phys_to_machine(void)
 
 		if (p2m_top[topidx] == p2m_mid_missing) {
 			unsigned long **mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
-			p2m_mid_init(mid);
+			p2m_mid_init(mid, p2m_missing);
 
 			p2m_top[topidx] = mid;
 		}
@@ -545,7 +560,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid)
 			return false;
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		if (cmpxchg(top_p, p2m_mid_missing, mid) != p2m_mid_missing)
 			free_p2m_page(mid);
@@ -565,7 +580,7 @@ static bool alloc_p2m(unsigned long pfn)
 		if (!mid_mfn)
 			return false;
 
-		p2m_mid_mfn_init(mid_mfn);
+		p2m_mid_mfn_init(mid_mfn, p2m_missing);
 
 		missing_mfn = virt_to_mfn(p2m_mid_missing_mfn);
 		mid_mfn_mfn = virt_to_mfn(mid_mfn);
@@ -649,7 +664,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	if (mid == p2m_mid_missing) {
 		mid = extend_brk(PAGE_SIZE, PAGE_SIZE);
 
-		p2m_mid_init(mid);
+		p2m_mid_init(mid, p2m_missing);
 
 		p2m_top[topidx] = mid;
 
@@ -658,7 +673,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn)
 	/* And the save/restore P2M tables.. */
 	if (mid_mfn_p == p2m_mid_missing_mfn) {
 		mid_mfn_p = extend_brk(PAGE_SIZE, PAGE_SIZE);
-		p2m_mid_mfn_init(mid_mfn_p);
+		p2m_mid_mfn_init(mid_mfn_p, p2m_missing);
 
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
@@ -769,6 +784,24 @@ bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	return true;
 }
+
+static void __init early_split_p2m(unsigned long pfn)
+{
+	unsigned long mididx, idx;
+
+	mididx = p2m_mid_index(pfn);
+	idx = p2m_index(pfn);
+
+	/*
+	 * Allocate new middle and leaf pages if this pfn lies in the
+	 * middle of one.
+	 */
+	if (mididx || idx)
+		early_alloc_p2m_middle(pfn);
+	if (idx)
+		early_alloc_p2m(pfn, false);
+}
+
 unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 				      unsigned long pfn_e)
 {
@@ -786,19 +819,27 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_e > MAX_P2M_PFN)
 		pfn_e = MAX_P2M_PFN;
 
-	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
-		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
-		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-	{
-		WARN_ON(!early_alloc_p2m(pfn));
-	}
+	early_split_p2m(pfn_s);
+	early_split_p2m(pfn_e);
 
-	early_alloc_p2m_middle(pfn_s, true);
-	early_alloc_p2m_middle(pfn_e, true);
+	for (pfn = pfn_s; pfn < pfn_e;) {
+		unsigned topidx = p2m_top_index(pfn);
+		unsigned mididx = p2m_mid_index(pfn);
 
-	for (pfn = pfn_s; pfn < pfn_e; pfn++)
 		if (!__set_phys_to_machine(pfn, IDENTITY_FRAME(pfn)))
 			break;
+		pfn++;
+
+		/*
+		 * If the PFN was set to a middle or leaf identity
+		 * page the remainder must also be identity, so skip
+		 * ahead to the next middle or leaf entry.
+		 */
+		if (p2m_top[topidx] == p2m_mid_identity)
+			pfn = ALIGN(pfn, P2M_MID_PER_PAGE * P2M_PER_PAGE);
+		else if (p2m_top[topidx][mididx] == p2m_identity)
+			pfn = ALIGN(pfn, P2M_PER_PAGE);
+	}
 
 	if (!WARN((pfn - pfn_s) != (pfn_e - pfn_s),
 		"Identity mapping failed. We are %ld short of 1-1 mappings!\n",
@@ -828,8 +869,22 @@ bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 
 	/* For sparse holes were the p2m leaf has real PFN along with
 	 * PCI holes, stick in the PFN as the MFN value.
+	 *
+	 * set_phys_range_identity() will have allocated new middle
+	 * and leaf pages as required, so an existing p2m_mid_missing
+	 * or p2m_missing means that the whole range will be identity,
+	 * so these can be switched to p2m_mid_identity or p2m_identity.
 	 */
 	if (mfn != INVALID_P2M_ENTRY && (mfn & IDENTITY_FRAME_BIT)) {
+		if (p2m_top[topidx] == p2m_mid_identity)
+			return true;
+
+		if (p2m_top[topidx] == p2m_mid_missing) {
+			WARN_ON(cmpxchg(&p2m_top[topidx], p2m_mid_missing,
+					p2m_mid_identity) != p2m_mid_missing);
+			return true;
+		}
+
 		if (p2m_top[topidx][mididx] == p2m_identity)
 			return true;
 
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
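
For reference, the three-level p2m walk that the patch above extends with
shared identity pages can be sketched in user space as follows. Table
sizes, and the p2m_missing/p2m_mid_missing names, mirror the comment in
the patch but the dimensions here are toy values, not the kernel's
(which derives roughly 512 entries per level from PAGE_SIZE):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the three-level p2m: a top page of pointers to middle
 * pages, middle pages of pointers to leaves, and leaves of MFNs.
 * Unpopulated paths all funnel into one shared "missing" leaf whose
 * entries are INVALID_P2M_ENTRY (~0).
 */
#define TOP_ENTRIES  8
#define MID_ENTRIES  8
#define LEAF_ENTRIES 8
#define INVALID_P2M_ENTRY (~0UL)

static unsigned long p2m_missing[LEAF_ENTRIES];      /* all ~0 */
static unsigned long *p2m_mid_missing[MID_ENTRIES];  /* all -> p2m_missing */
static unsigned long **p2m_top[TOP_ENTRIES];         /* all -> p2m_mid_missing */

static void p2m_build(void)
{
    for (size_t i = 0; i < LEAF_ENTRIES; i++)
        p2m_missing[i] = INVALID_P2M_ENTRY;
    for (size_t i = 0; i < MID_ENTRIES; i++)
        p2m_mid_missing[i] = p2m_missing;
    for (size_t i = 0; i < TOP_ENTRIES; i++)
        p2m_top[i] = p2m_mid_missing;
}

/* pfn -> mfn walk: top index, then middle index, then leaf index. */
static unsigned long pfn_to_mfn(unsigned long pfn)
{
    unsigned long topidx = pfn / (MID_ENTRIES * LEAF_ENTRIES);
    unsigned long mididx = (pfn / LEAF_ENTRIES) % MID_ENTRIES;
    unsigned long idx    = pfn % LEAF_ENTRIES;

    return p2m_top[topidx][mididx][idx];
}
```

In this sketch every pfn resolves to INVALID_P2M_ENTRY until a real leaf
is installed; the patch adds an analogous shared p2m_mid_identity /
p2m_identity pair so that large identity ranges can be represented
without allocating a leaf page per 2 MB of address space.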

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVF-00070T-Gd; Mon, 24 Feb 2014 17:39:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVC-0006yI-Kb
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:39 +0000
Received: from [85.158.143.35:23786] by server-2.bemta-4.messagelabs.com id
	CE/E9-10891-9D38B035; Mon, 24 Feb 2014 17:39:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393263574!7937107!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25166 invoked from network); 24 Feb 2014 17:39:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267931"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-FS;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:24 +0000
Message-ID: <1393263564-14038-9-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 8/8] x86: remove the Xen-specific _PAGE_IOMAP
	PTE flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
that were used to map I/O regions that are 1:1 in the p2m.  This
allowed Xen to obtain the correct PFN when converting the MFNs read
from a PTE back to their PFN.

Xen guests no longer use _PAGE_IOMAP for this. Instead, mfn_to_pfn()
returns the correct PFN by using a combination of the m2p and p2m to
determine if an MFN corresponds to a 1:1 mapping in the p2m.

Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
future uses of the PTE flag.
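
The m2p/p2m round trip mentioned above can be illustrated with a small
sketch. This is a toy model, not the kernel's implementation: the table
sizes are made up and the tables are plain arrays rather than the real
machine-to-phys table and three-level p2m:

```c
#include <assert.h>

/*
 * Toy sketch: m2p[] is the machine-wide mfn -> pfn table, p2m[] is the
 * guest's pfn -> mfn table.  For a normally granted page the two agree;
 * for a 1:1 (identity / I/O) region they do not, which is how the MFN
 * is recognised as its own PFN without any PTE flag.
 */
#define MAX_FRAMES 16

static unsigned long p2m[MAX_FRAMES]; /* pfn -> mfn */
static unsigned long m2p[MAX_FRAMES]; /* mfn -> pfn */

static unsigned long mfn_to_pfn(unsigned long mfn)
{
    unsigned long pfn = (mfn < MAX_FRAMES) ? m2p[mfn] : mfn;

    /* If the p2m does not map pfn back to this mfn, the frame is not
     * one of ours: treat it as identity-mapped and return the MFN. */
    if (pfn >= MAX_FRAMES || p2m[pfn] != mfn)
        return mfn;
    return pfn;
}
```

With this check in the translation path, PTEs covering I/O regions need
no special marker bit, which is what lets the flag be retired.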

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
--
This depends on the preceding Xen changes, so I think it would be
best if this was acked by an x86 maintainer and merged via the Xen
tree.
---
 arch/x86/include/asm/pgtable_types.h |   12 ++++++------
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/mm/init_64.c                |    2 +-
 arch/x86/pci/i386.c                  |    2 --
 arch/x86/xen/enlighten.c             |    2 --
 5 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 1aa9ccd..b821f37 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -17,7 +17,7 @@
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
 #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
-#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
+#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
 #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
 #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
@@ -41,7 +41,7 @@
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
-#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
+#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
 #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
@@ -164,10 +164,10 @@
 #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
-#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
+#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
+#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
+#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
+#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index e395048..af7259c 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -537,7 +537,7 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 /* user-defined highmem size */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index f35c66c..9e6fa6d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
  * around without checking the pgd every time.
  */
 
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
+pteval_t __supported_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 int force_personality32;
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index db6b1ab..1f642d6 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		 */
 		prot |= _PAGE_CACHE_UC_MINUS;
 
-	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
-
 	vma->vm_page_prot = __pgprot(prot);
 
 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 201d09a..c5e21e3 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1555,8 +1555,6 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
-	__supported_pte_mask |= _PAGE_IOMAP;
-
 	/*
 	 * Prevent page tables from being allocated in highmem, even
 	 * if CONFIG_HIGHPTE is enabled.
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVD-0006yn-Dc; Mon, 24 Feb 2014 17:39:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVB-0006xz-5s
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:37 +0000
Received: from [193.109.254.147:48962] by server-3.bemta-14.messagelabs.com id
	BF/44-00432-8D38B035; Mon, 24 Feb 2014 17:39:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29180 invoked from network); 24 Feb 2014 17:39:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267925"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-BN;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:20 +0000
Message-ID: <1393263564-14038-5-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 4/8] x86/xen: only warn once if bad MFNs are
	found during setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

In xen_add_extra_mem(), if the WARN() checks for bad MFNs trigger, it
is likely that they will trigger a lot, spamming the log.

Use WARN_ONCE() instead.
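
The difference is only in how often the message is emitted; like WARN(),
WARN_ONCE() still evaluates and returns its condition every time. A
user-space analogue (the counter is purely for illustration; the kernel
version uses a per-callsite static flag in asm-generic/bug.h):

```c
#include <stdbool.h>
#include <stdio.h>

/*
 * User-space sketch of WARN_ONCE(): returns the condition on every
 * call, but prints the message only the first time it is true.
 */
static int warnings_emitted = 0;

static bool warn_once(bool condition, const char *msg)
{
    static bool warned = false;

    if (condition && !warned) {
        warned = true;
        warnings_emitted++;
        fprintf(stderr, "WARNING: %s\n", msg);
    }
    return condition;
}
```

So in the loop below, a platform with thousands of bad MFNs produces one
log line instead of thousands, while the `continue` logic that depends on
the returned condition is unchanged.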

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/setup.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0982233..2afe55e 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -89,10 +89,10 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
 
-		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
+		if (WARN_ONCE(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
 			continue;
-		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
-			pfn, mfn);
+		WARN_ONCE(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
+			  pfn, mfn);
 
 		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 	}
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVC-0006yM-Ge; Mon, 24 Feb 2014 17:39:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVA-0006xq-GW
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:36 +0000
Received: from [193.109.254.147:48903] by server-7.bemta-14.messagelabs.com id
	0B/B7-23424-7D38B035; Mon, 24 Feb 2014 17:39:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29107 invoked from network); 24 Feb 2014 17:39:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267922"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-AB;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:18 +0000
Message-ID: <1393263564-14038-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 2/8] x86/xen: fix set_phys_range_identity() if
	pfn_e > MAX_P2M_PFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Allow set_phys_range_identity() to work with a range that overlaps
MAX_P2M_PFN by clamping pfn_e to MAX_P2M_PFN.
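
The intended behaviour can be sketched as follows, with a made-up
MAX_PFN standing in for MAX_P2M_PFN: a range that starts past the limit
is rejected outright, while a range that merely ends past it is clamped
and partially processed.

```c
#include <assert.h>

#define MAX_PFN 1000UL /* stand-in for MAX_P2M_PFN */

/* Returns the number of pfns that would actually be processed. */
static unsigned long clamp_identity_range(unsigned long pfn_s,
                                          unsigned long pfn_e)
{
    if (pfn_s >= MAX_PFN)   /* starts past the limit: nothing to do */
        return 0;
    if (pfn_s > pfn_e)      /* empty or inverted range */
        return 0;
    if (pfn_e > MAX_PFN)    /* overlaps the limit: clamp the end */
        pfn_e = MAX_PFN;
    return pfn_e - pfn_s;
}
```

Before the patch, the `pfn_e >= MAX_P2M_PFN` test rejected the overlap
case entirely, losing the identity mappings below the limit.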

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 7402709..4ed9138 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -774,7 +774,7 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 {
 	unsigned long pfn;
 
-	if (unlikely(pfn_s >= MAX_P2M_PFN || pfn_e >= MAX_P2M_PFN))
+	if (unlikely(pfn_s >= MAX_P2M_PFN))
 		return 0;
 
 	if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
@@ -783,6 +783,9 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_s > pfn_e)
 		return 0;
 
+	if (pfn_e > MAX_P2M_PFN)
+		pfn_e = MAX_P2M_PFN;
+
 	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
 		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
 		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVB-0006y8-CP; Mon, 24 Feb 2014 17:39:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVA-0006xp-4k
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:36 +0000
Received: from [193.109.254.147:48843] by server-5.bemta-14.messagelabs.com id
	89/1C-16688-7D38B035; Mon, 24 Feb 2014 17:39:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29043 invoked from network); 24 Feb 2014 17:39:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267921"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-Cd;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:22 +0000
Message-ID: <1393263564-14038-7-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 6/8] x86/xen: do not use _PAGE_IOMAP in
	xen_remap_domain_mfn_range()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

_PAGE_IOMAP is used in xen_remap_domain_mfn_range() to prevent the
pfn_pte() call in remap_area_mfn_pte_fn() from using the p2m to translate
the MFN.  If mfn_pte() is used instead, the p2m lookup is avoided and
the use of _PAGE_IOMAP is no longer needed.
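
The distinction between the two PTE constructors can be shown with a toy
model; the shift, table size and contents here are illustrative only,
not the kernel's definitions:

```c
#include <assert.h>

/*
 * Toy illustration: pfn_pte() translates through the pfn -> mfn table
 * first, while mfn_pte() installs the machine frame number directly.
 */
#define PTE_PFN_SHIFT 12
#define NR_PFNS 16

static unsigned long p2m[NR_PFNS]; /* toy pfn -> mfn table */

static unsigned long pfn_pte(unsigned long pfn, unsigned long prot)
{
    return (p2m[pfn] << PTE_PFN_SHIFT) | prot; /* translated via p2m */
}

static unsigned long mfn_pte(unsigned long mfn, unsigned long prot)
{
    return (mfn << PTE_PFN_SHIFT) | prot;      /* frame used as-is */
}
```

Since xen_remap_domain_mfn_range() already holds an MFN, mfn_pte() is
the natural constructor, and no PTE flag is needed to suppress the
translation.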

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index 256282e..b36116e 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -2523,7 +2523,7 @@ static int remap_area_mfn_pte_fn(pte_t *ptep, pgtable_t token,
 				 unsigned long addr, void *data)
 {
 	struct remap_data *rmd = data;
-	pte_t pte = pte_mkspecial(pfn_pte(rmd->mfn++, rmd->prot));
+	pte_t pte = pte_mkspecial(mfn_pte(rmd->mfn++, rmd->prot));
 
 	rmd->mmu_update->ptr = virt_to_machine(ptep).maddr;
 	rmd->mmu_update->val = pte_val_ma(pte);
@@ -2548,8 +2548,6 @@ int xen_remap_domain_mfn_range(struct vm_area_struct *vma,
 	if (xen_feature(XENFEAT_auto_translated_physmap))
 		return -EINVAL;
 
-	prot = __pgprot(pgprot_val(prot) | _PAGE_IOMAP);
-
 	BUG_ON(!((vma->vm_flags & (VM_PFNMAP | VM_IO)) == (VM_PFNMAP | VM_IO)));
 
 	rmd.mfn = mfn;
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVF-00070C-4W; Mon, 24 Feb 2014 17:39:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVB-0006y0-8N
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:38 +0000
Received: from [85.158.143.35:23688] by server-1.bemta-4.messagelabs.com id
	1E/38-31661-8D38B035; Mon, 24 Feb 2014 17:39:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393263574!7937107!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25044 invoked from network); 24 Feb 2014 17:39:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267924"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-8I;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:17 +0000
Message-ID: <1393263564-14038-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 1/8] x86/xen: rename early_alloc_p2m() and
	early_alloc_p2m_middle()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

early_alloc_p2m_middle() allocates a new leaf page and
early_alloc_p2m() allocates a new middle page.  This is confusing.

Swap the names so they match what the functions actually do.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..7402709 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -596,7 +596,7 @@ static bool alloc_p2m(unsigned long pfn)
 	return true;
 }
 
-static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary)
+static bool __init early_alloc_p2m(unsigned long pfn, bool check_boundary)
 {
 	unsigned topidx, mididx, idx;
 	unsigned long *p2m;
@@ -638,7 +638,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary
 	return true;
 }
 
-static bool __init early_alloc_p2m(unsigned long pfn)
+static bool __init early_alloc_p2m_middle(unsigned long pfn)
 {
 	unsigned topidx = p2m_top_index(pfn);
 	unsigned long *mid_mfn_p;
@@ -663,7 +663,7 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
 		/* Note: we don't set mid_mfn_p[midix] here,
-		 * look in early_alloc_p2m_middle */
+		 * look in early_alloc_p2m() */
 	}
 	return true;
 }
@@ -739,7 +739,7 @@ found:
 
 	/* This shouldn't happen */
 	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
-		early_alloc_p2m(set_pfn);
+		early_alloc_p2m_middle(set_pfn);
 
 	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
 		return false;
@@ -754,13 +754,13 @@ found:
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
-		if (!early_alloc_p2m(pfn))
+		if (!early_alloc_p2m_middle(pfn))
 			return false;
 
 		if (early_can_reuse_p2m_middle(pfn, mfn))
 			return __set_phys_to_machine(pfn, mfn);
 
-		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
+		if (!early_alloc_p2m(pfn, false /* boundary crossover OK!*/))
 			return false;
 
 		if (!__set_phys_to_machine(pfn, mfn))
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVB-0006y1-13; Mon, 24 Feb 2014 17:39:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzV9-0006xk-7a
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:35 +0000
Received: from [193.109.254.147:3085] by server-5.bemta-14.messagelabs.com id
	87/1C-16688-6D38B035; Mon, 24 Feb 2014 17:39:34 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28972 invoked from network); 24 Feb 2014 17:39:33 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:33 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267920"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-7h;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:16 +0000
Message-ID: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCHv5 0/8] x86/xen: fixes for mapping high MMIO
	regions (and remove _PAGE_IOMAP)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

[ x86 maintainers, this is predominantly a Xen series, but the end
result is that the _PAGE_IOMAP PTE flag is removed.  See patch #8. ]

This is a fix for the problems with mapping high MMIO regions in
certain cases (e.g., the RDMA drivers): not all mappers were
specifying _PAGE_IOMAP, which meant no valid MFN could be found and
the resulting PTEs would be set as not present, causing subsequent
faults.

It assumes that anything that isn't RAM (whether ballooned out or not)
is an I/O region and thus should be 1:1 in the p2m.  Specifically, the
region after the end of the E820 map and the region beyond the end of
the p2m.  Ballooned frames are still marked as missing in the p2m as
before.
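
The "1:1 in the p2m" assumption can be sketched in user space with the
frame encoding the p2m uses.  The bit positions below follow the
kernel's asm/xen/page.h, but treat the whole snippet as illustrative
rather than authoritative:

```c
#include <assert.h>
#include <limits.h>

/* Marker bits stored in p2m entries (illustrative positions). */
#define BITS_PER_LONG		(sizeof(unsigned long) * CHAR_BIT)
#define FOREIGN_FRAME_BIT	(1UL << (BITS_PER_LONG - 1))
#define IDENTITY_FRAME_BIT	(1UL << (BITS_PER_LONG - 2))
#define IDENTITY_FRAME(m)	((m) | IDENTITY_FRAME_BIT)
#define INVALID_P2M_ENTRY	(~0UL)

/*
 * An entry read back from the p2m maps 1:1 iff the identity bit is
 * set and the entry is not the invalid sentinel (INVALID_P2M_ENTRY
 * has all bits set, so it would otherwise test as identity too).
 */
static int p2m_is_identity(unsigned long entry)
{
	return entry != INVALID_P2M_ENTRY && (entry & IDENTITY_FRAME_BIT);
}

/* Strip the marker bits to recover the raw frame number. */
static unsigned long p2m_frame(unsigned long entry)
{
	return entry & ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
}
```

For a PFN past the end of RAM the p2m stores IDENTITY_FRAME(pfn), so
stripping the marker bits yields the PFN itself, i.e. MFN == PFN.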

As a follow-on, pte_mfn_to_pfn() and pte_pfn_to_mfn() are modified to
not use the _PAGE_IOMAP PTE flag, and MFN-to-PFN and PFN-to-MFN
translations now do the right thing for all I/O regions.  This means
the Xen-specific _PAGE_IOMAP flag can be removed.

This series has been tested (in dom0) on all unique machines we have
in our test lab (~100 machines), some of which have PCI devices with
BARs above the end of RAM.

Note this does not fix a 32-bit dom0 trying to access BARs above 16 TB,
as this is caused by MFNs/PFNs being limited to 32 bits (unsigned
long).

You may find it useful to apply patch #3 to more easily review the
updated p2m diagram.

Changes in v5:
- improve performance of set_phys_range_identity() by not iterating
  over all pages if p2m_mid_identity or p2m_identity mid or leaves are
  already present. (Thanks to Andrew Cooper for reporting this and
  Frediano Ziglio for providing a fix.)

Changes in v4:
- fix p2m_mid_identity initialization.

Changes in v3 (not posted):
- use correct end of e820
- fix xen_remap_domain_mfn_range()

Changes in v2:
- fix to actually set end-of-RAM to 512 GiB region as 1:1.
- introduce p2m_mid_identity to efficiently store large 1:1 regions.
- Split the _PAGE_IOMAP patch into Xen and generic x86 halves.

David



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVF-00070o-VL; Mon, 24 Feb 2014 17:39:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVD-0006yk-Ae
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:39 +0000
Received: from [85.158.143.35:14348] by server-1.bemta-4.messagelabs.com id
	22/48-31661-AD38B035; Mon, 24 Feb 2014 17:39:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393263574!7937107!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25116 invoked from network); 24 Feb 2014 17:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267926"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-By;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:21 +0000
Message-ID: <1393263564-14038-6-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 5/8] x86/xen: set regions above the end of RAM
	as 1:1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

PCI devices may have BARs located above the end of RAM, so mark such
frames as identity frames in the p2m (instead of the default of
missing).

PFNs outside the p2m (above MAX_P2M_PFN) are also considered to be
identity frames for the same reason.
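
The boundary behavior this patch introduces can be sketched in user
space as follows.  MAX_P2M_PFN's value and the in-range table walk are
stand-ins, not the real kernel code:

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG		(sizeof(unsigned long) * CHAR_BIT)
#define IDENTITY_FRAME_BIT	(1UL << (BITS_PER_LONG - 2))
#define IDENTITY_FRAME(p)	((p) | IDENTITY_FRAME_BIT)
#define INVALID_P2M_ENTRY	(~0UL)
#define MAX_P2M_PFN		(1UL << 20)	/* illustrative limit */

/*
 * A lookup past the end of the p2m now reports the frame as
 * identity-mapped rather than invalid; in-range lookups would walk
 * the real p2m tables instead of the placeholder below.
 */
static unsigned long sketch_get_phys_to_machine(unsigned long pfn)
{
	if (pfn >= MAX_P2M_PFN)
		return IDENTITY_FRAME(pfn);	/* was: INVALID_P2M_ENTRY */
	return INVALID_P2M_ENTRY;	/* stand-in for the table walk */
}
```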

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c   |    2 +-
 arch/x86/xen/setup.c |    9 +++++++++
 2 files changed, 10 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 66222c0..bd348aa 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -507,7 +507,7 @@ unsigned long get_phys_to_machine(unsigned long pfn)
 	unsigned topidx, mididx, idx;
 
 	if (unlikely(pfn >= MAX_P2M_PFN))
-		return INVALID_P2M_ENTRY;
+		return IDENTITY_FRAME(pfn);
 
 	topidx = p2m_top_index(pfn);
 	mididx = p2m_mid_index(pfn);
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 2afe55e..210426a 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -469,6 +469,15 @@ char * __init xen_memory_setup(void)
 	}
 
 	/*
+	 * Set the rest as identity mapped, in case PCI BARs are
+	 * located here.
+	 *
+	 * PFNs above MAX_P2M_PFN are considered identity mapped as
+	 * well.
+	 */
+	set_phys_range_identity(map[i-1].addr / PAGE_SIZE, ~0ul);
+
+	/*
 	 * In domU, the ISA region is normal, usable memory, but we
 	 * reserve ISA memory anyway because too many things poke
 	 * about in there.
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVE-0006zW-Ay; Mon, 24 Feb 2014 17:39:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVC-0006yH-BT
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:38 +0000
Received: from [85.158.143.35:14369] by server-3.bemta-4.messagelabs.com id
	35/9A-11539-9D38B035; Mon, 24 Feb 2014 17:39:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393263575!7948369!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8875 invoked from network); 24 Feb 2014 17:39:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267930"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-DZ;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:23 +0000
Message-ID: <1393263564-14038-8-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 7/8] x86/xen: do not use _PAGE_IOMAP PTE flag
	for I/O mappings
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Since mfn_to_pfn() returns the correct PFN for identity mappings (as
used for MMIO regions), the use of _PAGE_IOMAP is not required in
pte_mfn_to_pfn().

Do not set the _PAGE_IOMAP flag in pte_pfn_to_mfn() and do not use it
in pte_mfn_to_pfn().

This will allow _PAGE_IOMAP to be removed, making it available for
future use.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/mmu.c |   48 ++++--------------------------------------------
 1 files changed, 4 insertions(+), 44 deletions(-)

diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
index b36116e..2916662 100644
--- a/arch/x86/xen/mmu.c
+++ b/arch/x86/xen/mmu.c
@@ -399,38 +399,14 @@ static pteval_t pte_pfn_to_mfn(pteval_t val)
 		if (unlikely(mfn == INVALID_P2M_ENTRY)) {
 			mfn = 0;
 			flags = 0;
-		} else {
-			/*
-			 * Paramount to do this test _after_ the
-			 * INVALID_P2M_ENTRY as INVALID_P2M_ENTRY &
-			 * IDENTITY_FRAME_BIT resolves to true.
-			 */
-			mfn &= ~FOREIGN_FRAME_BIT;
-			if (mfn & IDENTITY_FRAME_BIT) {
-				mfn &= ~IDENTITY_FRAME_BIT;
-				flags |= _PAGE_IOMAP;
-			}
-		}
+		} else
+			mfn &= ~(FOREIGN_FRAME_BIT | IDENTITY_FRAME_BIT);
 		val = ((pteval_t)mfn << PAGE_SHIFT) | flags;
 	}
 
 	return val;
 }
 
-static pteval_t iomap_pte(pteval_t val)
-{
-	if (val & _PAGE_PRESENT) {
-		unsigned long pfn = (val & PTE_PFN_MASK) >> PAGE_SHIFT;
-		pteval_t flags = val & PTE_FLAGS_MASK;
-
-		/* We assume the pte frame number is a MFN, so
-		   just use it as-is. */
-		val = ((pteval_t)pfn << PAGE_SHIFT) | flags;
-	}
-
-	return val;
-}
-
 __visible pteval_t xen_pte_val(pte_t pte)
 {
 	pteval_t pteval = pte.pte;
@@ -441,9 +417,6 @@ __visible pteval_t xen_pte_val(pte_t pte)
 		pteval = (pteval & ~_PAGE_PAT) | _PAGE_PWT;
 	}
 #endif
-	if (xen_initial_domain() && (pteval & _PAGE_IOMAP))
-		return pteval;
-
 	return pte_mfn_to_pfn(pteval);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_pte_val);
@@ -481,7 +454,6 @@ void xen_set_pat(u64 pat)
 
 __visible pte_t xen_make_pte(pteval_t pte)
 {
-	phys_addr_t addr = (pte & PTE_PFN_MASK);
 #if 0
 	/* If Linux is trying to set a WC pte, then map to the Xen WC.
 	 * If _PAGE_PAT is set, then it probably means it is really
@@ -496,19 +468,7 @@ __visible pte_t xen_make_pte(pteval_t pte)
 			pte = (pte & ~(_PAGE_PCD | _PAGE_PWT)) | _PAGE_PAT;
 	}
 #endif
-	/*
-	 * Unprivileged domains are allowed to do IOMAPpings for
-	 * PCI passthrough, but not map ISA space.  The ISA
-	 * mappings are just dummy local mappings to keep other
-	 * parts of the kernel happy.
-	 */
-	if (unlikely(pte & _PAGE_IOMAP) &&
-	    (xen_initial_domain() || addr >= ISA_END_ADDRESS)) {
-		pte = iomap_pte(pte);
-	} else {
-		pte &= ~_PAGE_IOMAP;
-		pte = pte_pfn_to_mfn(pte);
-	}
+	pte = pte_pfn_to_mfn(pte);
 
 	return native_make_pte(pte);
 }
@@ -2096,7 +2056,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	default:
 		/* By default, set_fixmap is used for hardware mappings */
-		pte = mfn_pte(phys, __pgprot(pgprot_val(prot) | _PAGE_IOMAP));
+		pte = mfn_pte(phys, prot);
 		break;
 	}
 
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVD-0006yn-Dc; Mon, 24 Feb 2014 17:39:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVB-0006xz-5s
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:37 +0000
Received: from [193.109.254.147:48962] by server-3.bemta-14.messagelabs.com id
	BF/44-00432-8D38B035; Mon, 24 Feb 2014 17:39:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29180 invoked from network); 24 Feb 2014 17:39:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267925"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-BN;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:20 +0000
Message-ID: <1393263564-14038-5-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 4/8] x86/xen: only warn once if bad MFNs are
	found during setup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

In xen_add_extra_mem(), if the WARN() checks for bad MFNs trigger, it
is likely that they will trigger a lot, spamming the log.

Use WARN_ONCE() instead.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/setup.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 0982233..2afe55e 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -89,10 +89,10 @@ static void __init xen_add_extra_mem(u64 start, u64 size)
 	for (pfn = PFN_DOWN(start); pfn < xen_max_p2m_pfn; pfn++) {
 		unsigned long mfn = pfn_to_mfn(pfn);
 
-		if (WARN(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
+		if (WARN_ONCE(mfn == pfn, "Trying to over-write 1-1 mapping (pfn: %lx)\n", pfn))
 			continue;
-		WARN(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
-			pfn, mfn);
+		WARN_ONCE(mfn != INVALID_P2M_ENTRY, "Trying to remove %lx which has %lx mfn!\n",
+			  pfn, mfn);
 
 		__set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
 	}
-- 
1.7.2.5



From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVF-00070T-Gd; Mon, 24 Feb 2014 17:39:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVC-0006yI-Kb
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:39 +0000
Received: from [85.158.143.35:23786] by server-2.bemta-4.messagelabs.com id
	CE/E9-10891-9D38B035; Mon, 24 Feb 2014 17:39:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393263574!7937107!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25166 invoked from network); 24 Feb 2014 17:39:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267931"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-FS;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:24 +0000
Message-ID: <1393263564-14038-9-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 8/8] x86: remove the Xen-specific _PAGE_IOMAP
	PTE flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
that were used to map I/O regions that are 1:1 in the p2m.  This
allowed Xen to obtain the correct PFN when converting the MFNs read
from a PTE back to their PFN.

Xen guests no longer use _PAGE_IOMAP for this.  Instead, mfn_to_pfn()
returns the correct PFN by using a combination of the m2p and p2m to
determine if an MFN corresponds to a 1:1 mapping in the p2m.
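
The m2p/p2m round trip described above can be sketched in user space
as follows.  The tables, sizes, and the MISSING sentinel are toy
stand-ins for the real kernel structures:

```c
#include <assert.h>

/*
 * Toy tables standing in for the machine-to-phys (m2p) and
 * phys-to-machine (p2m) maps; all indices and values are
 * illustrative.
 */
#define TABLE_SIZE	16
#define MISSING		(~0UL)

static unsigned long m2p[TABLE_SIZE] = { [3] = 5, [7] = MISSING };	/* mfn -> pfn */
static unsigned long p2m[TABLE_SIZE] = { [5] = 3 };			/* pfn -> mfn */

/*
 * Treat an MFN as a 1:1 (identity) mapping when the m2p has no usable
 * entry for it, or when the p2m entry for the candidate PFN does not
 * map back to the same MFN.
 */
static unsigned long sketch_mfn_to_pfn(unsigned long mfn)
{
	unsigned long pfn = m2p[mfn];

	if (pfn == MISSING || pfn >= TABLE_SIZE || p2m[pfn] != mfn)
		return mfn;	/* identity mapping: PFN == MFN */
	return pfn;
}
```

MFN 3 round-trips through both tables and yields its true PFN, while
MFN 7 has no m2p entry and so is reported as identity-mapped.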

Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
future uses of the PTE flag.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
--
This depends on the preceding Xen changes, so I think it would be
best if this was acked by an x86 maintainer and merged via the Xen
tree.
---
 arch/x86/include/asm/pgtable_types.h |   12 ++++++------
 arch/x86/mm/init_32.c                |    2 +-
 arch/x86/mm/init_64.c                |    2 +-
 arch/x86/pci/i386.c                  |    2 --
 arch/x86/xen/enlighten.c             |    2 --
 5 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 1aa9ccd..b821f37 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -17,7 +17,7 @@
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
 #define _PAGE_BIT_UNUSED1	9	/* available for programmer */
-#define _PAGE_BIT_IOMAP		10	/* flag used to indicate IO mapping */
+#define _PAGE_BIT_UNUSED2	10	/* available for programmer */
 #define _PAGE_BIT_HIDDEN	11	/* hidden by kmemcheck */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
 #define _PAGE_BIT_SPECIAL	_PAGE_BIT_UNUSED1
@@ -41,7 +41,7 @@
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_UNUSED1	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED1)
-#define _PAGE_IOMAP	(_AT(pteval_t, 1) << _PAGE_BIT_IOMAP)
+#define _PAGE_UNUSED2	(_AT(pteval_t, 1) << _PAGE_BIT_UNUSED2)
 #define _PAGE_PAT	(_AT(pteval_t, 1) << _PAGE_BIT_PAT)
 #define _PAGE_PAT_LARGE (_AT(pteval_t, 1) << _PAGE_BIT_PAT_LARGE)
 #define _PAGE_SPECIAL	(_AT(pteval_t, 1) << _PAGE_BIT_SPECIAL)
@@ -164,10 +164,10 @@
 #define __PAGE_KERNEL_LARGE_NOCACHE	(__PAGE_KERNEL | _PAGE_CACHE_UC | _PAGE_PSE)
 #define __PAGE_KERNEL_LARGE_EXEC	(__PAGE_KERNEL_EXEC | _PAGE_PSE)
 
-#define __PAGE_KERNEL_IO		(__PAGE_KERNEL | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS | _PAGE_IOMAP)
-#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC | _PAGE_IOMAP)
+#define __PAGE_KERNEL_IO		(__PAGE_KERNEL)
+#define __PAGE_KERNEL_IO_NOCACHE	(__PAGE_KERNEL_NOCACHE)
+#define __PAGE_KERNEL_IO_UC_MINUS	(__PAGE_KERNEL_UC_MINUS)
+#define __PAGE_KERNEL_IO_WC		(__PAGE_KERNEL_WC)
 
 #define PAGE_KERNEL			__pgprot(__PAGE_KERNEL)
 #define PAGE_KERNEL_RO			__pgprot(__PAGE_KERNEL_RO)
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index e395048..af7259c 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -537,7 +537,7 @@ static void __init pagetable_init(void)
 	permanent_kmaps_init(pgd_base);
 }
 
-pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL | _PAGE_IOMAP);
+pteval_t __supported_pte_mask __read_mostly = ~(_PAGE_NX | _PAGE_GLOBAL);
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 /* user-defined highmem size */
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index f35c66c..9e6fa6d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -151,7 +151,7 @@ early_param("gbpages", parse_direct_gbpages_on);
  * around without checking the pgd every time.
  */
 
-pteval_t __supported_pte_mask __read_mostly = ~_PAGE_IOMAP;
+pteval_t __supported_pte_mask __read_mostly = ~0;
 EXPORT_SYMBOL_GPL(__supported_pte_mask);
 
 int force_personality32;
diff --git a/arch/x86/pci/i386.c b/arch/x86/pci/i386.c
index db6b1ab..1f642d6 100644
--- a/arch/x86/pci/i386.c
+++ b/arch/x86/pci/i386.c
@@ -433,8 +433,6 @@ int pci_mmap_page_range(struct pci_dev *dev, struct vm_area_struct *vma,
 		 */
 		prot |= _PAGE_CACHE_UC_MINUS;
 
-	prot |= _PAGE_IOMAP;	/* creating a mapping for IO */
-
 	vma->vm_page_prot = __pgprot(prot);
 
 	if (io_remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 201d09a..c5e21e3 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1555,8 +1555,6 @@ asmlinkage void __init xen_start_kernel(void)
 #endif
 		__supported_pte_mask &= ~(_PAGE_PWT | _PAGE_PCD);
 
-	__supported_pte_mask |= _PAGE_IOMAP;
-
 	/*
 	 * Prevent page tables from being allocated in highmem, even
 	 * if CONFIG_HIGHPTE is enabled.
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVC-0006yM-Ge; Mon, 24 Feb 2014 17:39:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVA-0006xq-GW
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:36 +0000
Received: from [193.109.254.147:48903] by server-7.bemta-14.messagelabs.com id
	0B/B7-23424-7D38B035; Mon, 24 Feb 2014 17:39:35 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393263572!6463053!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29107 invoked from network); 24 Feb 2014 17:39:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267922"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:31 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-AB;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:18 +0000
Message-ID: <1393263564-14038-3-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 2/8] x86/xen: fix set_phys_range_identity() if
	pfn_e > MAX_P2M_PFN
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Allow set_phys_range_identity() to work with a range that overlaps
MAX_P2M_PFN by clamping pfn_e to MAX_P2M_PFN.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 7402709..4ed9138 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -774,7 +774,7 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 {
 	unsigned long pfn;
 
-	if (unlikely(pfn_s >= MAX_P2M_PFN || pfn_e >= MAX_P2M_PFN))
+	if (unlikely(pfn_s >= MAX_P2M_PFN))
 		return 0;
 
 	if (unlikely(xen_feature(XENFEAT_auto_translated_physmap)))
@@ -783,6 +783,9 @@ unsigned long __init set_phys_range_identity(unsigned long pfn_s,
 	if (pfn_s > pfn_e)
 		return 0;
 
+	if (pfn_e > MAX_P2M_PFN)
+		pfn_e = MAX_P2M_PFN;
+
 	for (pfn = (pfn_s & ~(P2M_MID_PER_PAGE * P2M_PER_PAGE - 1));
 		pfn < ALIGN(pfn_e, (P2M_MID_PER_PAGE * P2M_PER_PAGE));
 		pfn += P2M_MID_PER_PAGE * P2M_PER_PAGE)
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVF-00070C-4W; Mon, 24 Feb 2014 17:39:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WHzVB-0006y0-8N
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:38 +0000
Received: from [85.158.143.35:23688] by server-1.bemta-4.messagelabs.com id
	1E/38-31661-8D38B035; Mon, 24 Feb 2014 17:39:36 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393263574!7937107!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25044 invoked from network); 24 Feb 2014 17:39:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105267924"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:32 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WHzV4-0000Vq-8I;
	Mon, 24 Feb 2014 17:39:30 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Mon, 24 Feb 2014 17:39:17 +0000
Message-ID: <1393263564-14038-2-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
In-Reply-To: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: [Xen-devel] [PATCH 1/8] x86/xen: rename early_p2m_alloc() and
	early_p2m_alloc_middle()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

early_p2m_alloc_middle() allocates a new leaf page and
early_p2m_alloc() allocates a new middle page.  This is confusing.

Swap the names so they match what the functions actually do.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
 arch/x86/xen/p2m.c |   12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..7402709 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -596,7 +596,7 @@ static bool alloc_p2m(unsigned long pfn)
 	return true;
 }
 
-static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary)
+static bool __init early_alloc_p2m(unsigned long pfn, bool check_boundary)
 {
 	unsigned topidx, mididx, idx;
 	unsigned long *p2m;
@@ -638,7 +638,7 @@ static bool __init early_alloc_p2m_middle(unsigned long pfn, bool check_boundary
 	return true;
 }
 
-static bool __init early_alloc_p2m(unsigned long pfn)
+static bool __init early_alloc_p2m_middle(unsigned long pfn)
 {
 	unsigned topidx = p2m_top_index(pfn);
 	unsigned long *mid_mfn_p;
@@ -663,7 +663,7 @@ static bool __init early_alloc_p2m(unsigned long pfn)
 		p2m_top_mfn_p[topidx] = mid_mfn_p;
 		p2m_top_mfn[topidx] = virt_to_mfn(mid_mfn_p);
 		/* Note: we don't set mid_mfn_p[midix] here,
-		 * look in early_alloc_p2m_middle */
+		 * look in early_alloc_p2m() */
 	}
 	return true;
 }
@@ -739,7 +739,7 @@ found:
 
 	/* This shouldn't happen */
 	if (WARN_ON(p2m_top[topidx] == p2m_mid_missing))
-		early_alloc_p2m(set_pfn);
+		early_alloc_p2m_middle(set_pfn);
 
 	if (WARN_ON(p2m_top[topidx][mididx] != p2m_missing))
 		return false;
@@ -754,13 +754,13 @@ found:
 bool __init early_set_phys_to_machine(unsigned long pfn, unsigned long mfn)
 {
 	if (unlikely(!__set_phys_to_machine(pfn, mfn)))  {
-		if (!early_alloc_p2m(pfn))
+		if (!early_alloc_p2m_middle(pfn))
 			return false;
 
 		if (early_can_reuse_p2m_middle(pfn, mfn))
 			return __set_phys_to_machine(pfn, mfn);
 
-		if (!early_alloc_p2m_middle(pfn, false /* boundary crossover OK!*/))
+		if (!early_alloc_p2m(pfn, false /* boundary crossover OK!*/))
 			return false;
 
 		if (!__set_phys_to_machine(pfn, mfn))
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:39:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzVS-00078d-MG; Mon, 24 Feb 2014 17:39:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WHzVQ-00074J-NC
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:39:52 +0000
Received: from [193.109.254.147:10069] by server-9.bemta-14.messagelabs.com id
	DB/F3-24895-6E38B035; Mon, 24 Feb 2014 17:39:50 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393263588!2736742!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5144 invoked from network); 24 Feb 2014 17:39:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 17:39:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,535,1389744000"; d="scan'208";a="105268011"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 24 Feb 2014 17:39:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 12:39:48 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WHzVL-0000Wz-K4;
	Mon, 24 Feb 2014 17:39:47 +0000
Message-ID: <530B83E3.1020603@citrix.com>
Date: Mon, 24 Feb 2014 17:39:47 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
	<530B7B42020000780011EE5E@nat28.tlf.novell.com>
	<530B711D.2080408@citrix.com>
In-Reply-To: <530B711D.2080408@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
 functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 16:19, Andrew Cooper wrote:
> On 24/02/14 16:02, Jan Beulich wrote:
>>>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> This patch shows a somewhat undesirable inconsistency (having been
>> present in I think less obvious ways in earlier patches too):
>>
>>> --- a/xen/arch/arm/shutdown.c
>>> +++ b/xen/arch/arm/shutdown.c
>>> @@ -11,7 +11,7 @@ static void raw_machine_reset(void)
>>>      platform_reset();
>>>  }
>>> 
>>> -static void halt_this_cpu(void *arg)
>>> +static void noreturn halt_this_cpu(void *arg)
>> For function definitions you place the attribute where I personally
>> would expect it to be (iirc it can't go between the closing paren
>> after the parameter declarations and the opening brace of the
>> function body), yet ...
> Hmm - I thought I had fixed all of these - I shall audit and respin.  I
> certainly did intend to be consistent.
>
> ~Andrew

And now I remember why it is strictly this way around.

It is a compile error to have the noreturn after the arguments on a
static function.

shutdown.c:15:1: error: expected ',' or ';' before '{' token
{
^
shutdown.c:14:13: error: 'halt_this_cpu' used but never defined [-Werror]
static void halt_this_cpu(void *arg) noreturn
^

but fine to have the attributes between the return type and name.

I could standardise on the other way around, to be the same as __init &
friends ?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 17:51:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 17:51:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzgc-0008Qc-H9; Mon, 24 Feb 2014 17:51:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hpa@zytor.com>) id 1WHzgb-0008QN-0c
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 17:51:25 +0000
Received: from [85.158.139.211:60564] by server-7.bemta-5.messagelabs.com id
	E4/DD-14867-C968B035; Mon, 24 Feb 2014 17:51:24 +0000
X-Env-Sender: hpa@zytor.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393264281!5945025!1
X-Originating-IP: [198.137.202.10]
X-SpamReason: No, hits=0.3 required=7.0 tests=MAILTO_TO_SPAM_ADDR
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30125 invoked from network); 24 Feb 2014 17:51:23 -0000
Received: from terminus.zytor.com (HELO mail.zytor.com) (198.137.202.10)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 17:51:23 -0000
Received: from tazenda.hos.anvin.org
	([IPv6:2601:9:3340:50:e269:95ff:fe35:9f3c]) (authenticated bits=0)
	by mail.zytor.com (8.14.7/8.14.5) with ESMTP id s1OHp8B9023986
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES128-SHA bits=128 verify=NO);
	Mon, 24 Feb 2014 09:51:09 -0800
Message-ID: <530B8687.6040406@zytor.com>
Date: Mon, 24 Feb 2014 09:51:03 -0800
From: "H. Peter Anvin" <hpa@zytor.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>, xen-devel@lists.xen.org
References: <1393263564-14038-1-git-send-email-david.vrabel@citrix.com>
	<1393263564-14038-9-git-send-email-david.vrabel@citrix.com>
In-Reply-To: <1393263564-14038-9-git-send-email-david.vrabel@citrix.com>
X-Enigmail-Version: 1.6
Cc: x86@kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Ingo Molnar <mingo@redhat.com>, Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] [PATCH 8/8] x86: remove the Xen-specific
	_PAGE_IOMAP PTE flag
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 09:39 AM, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> The _PAGE_IOMAP PTE flag was only used by Xen PV guests to mark PTEs
> that were used to map I/O regions that are 1:1 in the p2m.  This
> allowed Xen to obtain the correct PFN when converting the MFNs read
> from a PTE back to their PFN.
> 
> Xen guests no longer use _PAGE_IOMAP for this. Instead mfn_to_pfn()
> returns the correct PFN by using a combination of the m2p and p2m to
> determine if an MFN corresponds to a 1:1 mapping in the p2m.
> 
> Remove _PAGE_IOMAP, replacing it with _PAGE_UNUSED2 to allow for
> future uses of the PTE flag.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: x86@kernel.org
> --
> This depends on the preceding Xen changes, so I think it would be
> best if this was acked by an x86 maintainer and merged via the Xen
> tree.

Acked-by: H. Peter Anvin <hpa@zytor.com>

Yay, a bit back!

	-hpa



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 18:09:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WHzxg-0000H6-IP; Mon, 24 Feb 2014 18:09:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WHzxf-0000H1-1U
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 18:09:03 +0000
Received: from [85.158.139.211:25418] by server-11.bemta-5.messagelabs.com id
	5D/69-23886-EBA8B035; Mon, 24 Feb 2014 18:09:02 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393265316!1389760!1
X-Originating-IP: [216.32.181.181]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12679 invoked from network); 24 Feb 2014 18:09:01 -0000
Received: from ch1ehsobe001.messaging.microsoft.com (HELO
	ch1outboundpool.messaging.microsoft.com) (216.32.181.181)
	by server-11.tower-206.messagelabs.com with AES128-SHA encrypted SMTP;
	24 Feb 2014 18:09:01 -0000
Received: from mail100-ch1-R.bigfish.com (10.43.68.237) by
	CH1EHSOBE003.bigfish.com (10.43.70.53) with Microsoft SMTP Server id
	14.1.225.22; Mon, 24 Feb 2014 18:08:35 +0000
Received: from mail100-ch1 (localhost [127.0.0.1])	by
	mail100-ch1-R.bigfish.com (Postfix) with ESMTP id 905E820166;
	Mon, 24 Feb 2014 18:08:35 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.222; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp02.amd.com; RD:none; EFVD:NLI
X-SpamScore: -4
X-BigFish: VPS-4(z579ehzbb2dI98dI9371I1432Izz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzzz2dh839h947hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h1765h18e1h190ch1946h19b4h19c3h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dfeh1dffh1f5fh1fe8h1ff5h209eh22d0h2336h2438h2461h2487h24ach24d7h2516h2545h255eh1155h)
Received: from mail100-ch1 (localhost.localdomain [127.0.0.1]) by mail100-ch1
	(MessageSwitch) id 1393265310120764_29664;
	Mon, 24 Feb 2014 18:08:30 +0000 (UTC)
Received: from CH1EHSMHS001.bigfish.com (snatpool2.int.messaging.microsoft.com
	[10.43.68.237])	by mail100-ch1.bigfish.com (Postfix) with ESMTP id
	0E5B02C016C;	Mon, 24 Feb 2014 18:08:30 +0000 (UTC)
Received: from atltwp02.amd.com (165.204.84.222) by CH1EHSMHS001.bigfish.com
	(10.43.70.1) with Microsoft SMTP Server id 14.16.227.3; Mon, 24 Feb 2014
	18:08:28 +0000
X-WSS-ID: 0N1IIE0-08-CF5-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp02.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	2CEEFD1605F;	Mon, 24 Feb 2014 12:08:24 -0600 (CST)
Received: from SATLEXDAG06.amd.com (10.181.40.13) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 24 Feb 2014 12:08:43 -0600
Received: from [127.0.0.1] (10.180.168.240) by satlexdag06.amd.com
	(10.181.40.13) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Mon, 24 Feb 2014 13:08:27 -0500
Message-ID: <530B8A99.8030909@amd.com>
Date: Mon, 24 Feb 2014 12:08:25 -0600
From: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1386283126-2045-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52A19BC8020000780010ABFC@nat28.tlf.novell.com>
	<52A1F2B9.2070504@amd.com> <52A1F4AC.6020506@eu.citrix.com>
	<52A23410.6070906@amd.com>
	<52A58BCF020000780010B3F4@nat28.tlf.novell
	<52AF0E35.7070000@citrix.com> <530B6F66.8070201@amd.com>
	<530B80C5020000780011EEBE@nat28.tlf.novell.com>
In-Reply-To: <530B80C5020000780011EEBE@nat28.tlf.novell.com>
X-Originating-IP: [10.180.168.240]
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>,
	Sherry Hurwitz <sherry.hurwitz@amd.com>,
	"shurd@broadcom.com" <shurd@broadcom.com>
Subject: Re: [Xen-devel] [PATCH V8] ns16550: Add support for UART present in
 Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/24/2014 10:26 AM, Jan Beulich wrote:
>>>   
>>> It's been well over two months since you posted this, and it's still not
>>> having Keir's ack. The best way therefore is for you to re-post, with
>>> Keir properly Cc-ed, and with Andrew's Tested-by added.
>>>
>>> Jan
>>>
>>>

Ok, will do.

Shall I retain the version number or start from scratch?

-Aravind.



From xen-devel-bounces@lists.xen.org Mon Feb 24 18:19:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI07v-0000Xd-V7; Mon, 24 Feb 2014 18:19:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WI07u-0000XY-N1
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 18:19:38 +0000
Received: from [85.158.139.211:47610] by server-11.bemta-5.messagelabs.com id
	F7/B4-23886-A3D8B035; Mon, 24 Feb 2014 18:19:38 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393265975!5895195!1
X-Originating-IP: [74.125.82.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17250 invoked from network); 24 Feb 2014 18:19:35 -0000
Received: from mail-we0-f172.google.com (HELO mail-we0-f172.google.com)
	(74.125.82.172)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 18:19:35 -0000
Received: by mail-we0-f172.google.com with SMTP id u56so5090166wes.17
	for <xen-devel@lists.xen.org>; Mon, 24 Feb 2014 10:19:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:date:message-id:subject:from:to:content-type;
	bh=x7mUhpGuwOFAQqV5XVPpfzBb7ALKMbx7/76QSmCYQlw=;
	b=nKRNW4SNLHjUvRn8uq1Ln4mW/bf0XrldPUp+HCuELzT5tQMBvSgWQYa5Gccelc9Kmi
	8BJQlAIBbmxpVmkQboVxtBAgOAQOVS8BhVq5GGToqMxFGl/5hN00qfwl33MPorKnm0rD
	b2jgFh1fnab2MruGqIqcb/FPM7gXx/4Q1c5MydAkLXjseUKiPG42Tm3aToAxEg7sWWaw
	5g2r7SUBeFfwFVULLQwm17LgObebuUcrwSoPwDz8uDGzlehZQWWEa6ZBaXlh1JTa+KJH
	gLEqR8Q/UgaHovbgsZp7ZdI9gxgIFo8wwY/iA7nZxPGPeYLDozt0Sx04uwn36JXmKrBv
	/d9Q==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr20059530wjy.57.1393265975567;
	Mon, 24 Feb 2014 10:19:35 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Mon, 24 Feb 2014 10:19:35 -0800 (PST)
Date: Mon, 24 Feb 2014 18:19:35 +0000
X-Google-Sender-Auth: js1gB9EO170J9Vl7wkSMPdFDReo
Message-ID: <CAFLBxZaS8R+Q_1KmZak-hbrKe523iF4At+_+mfaa5Aere3Gmhg@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] Xen development update: RC6 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

In response to a last-minute bug found in libxl that caused a hang
when running "xl create -c" with pygrub, we've made a patch and
tagged another RC.

Looking at the code, we're reasonably confident that only the
already-broken path is affected by the change, so we still plan to
move forward with the release.  However, if you have the bandwidth to
do one more test, we'd appreciate it.

Tarballs below.

Thanks!
 -George

http://bits.xensource.com/oss-xen/release/4.4.0-rc6/xen-4.4.0-rc6.tar.gz
http://bits.xensource.com/oss-xen/release/4.4.0-rc6/xen-4.4.0-rc6.tar.gz.sig


From xen-devel-bounces@lists.xen.org Mon Feb 24 18:20:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:20:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI08w-0000ac-Eq; Mon, 24 Feb 2014 18:20:42 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WI08t-0000aU-Bi
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 18:20:39 +0000
Received: from [85.158.143.35:19884] by server-2.bemta-4.messagelabs.com id
	50/F1-10891-67D8B035; Mon, 24 Feb 2014 18:20:38 +0000
X-Env-Sender: dcbw@redhat.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393266037!7968473!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16982 invoked from network); 24 Feb 2014 18:20:37 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-21.messagelabs.com with SMTP;
	24 Feb 2014 18:20:37 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1OIKUUZ006127
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 13:20:30 -0500
Received: from [10.3.237.153] (vpn-237-153.phx2.redhat.com [10.3.237.153])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s1OIKRoE014615; Mon, 24 Feb 2014 13:20:28 -0500
Message-ID: <1393266120.8041.19.camel@dcbw.local>
From: Dan Williams <dcbw@redhat.com>
To: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Mon, 24 Feb 2014 12:22:00 -0600
In-Reply-To: <CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<1392857777.22693.14.camel@dcbw.local>
	<CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-20 at 12:31 -0800, Luis R. Rodriguez wrote:
> On Wed, Feb 19, 2014 at 4:56 PM, Dan Williams <dcbw@redhat.com> wrote:
> > Note that there isn't yet a disable_ipv4 knob though, I was
> > perhaps-too-subtly trying to get you to send a patch for it, since I can
> > use it too :)
> 
> Sure, can you describe a little better the use case, as I could use
> that for the commit log. My only current use case was the xen-netback
> case but Zoltan has noted a few cases where an IPv4 or IPv6 address
> *could* be used on the backend interfaces (which I'll still poke as
> it's unclear to me why they have 'em).

My use-case would simply be to have an analogue of the disable_ipv6
knob.  In the future I expect more people will want to disable IPv4 as
they move to IPv6.  If you don't have something like disable_ipv4, then
there's no way to ensure that some random program doesn't set up IPv4
state that you don't want.

Same thing for IPv6; some people really don't want IPv6 enabled on an
interface no matter what; they don't want an IPv6LL address assigned,
they don't want kernel SLAAC, they want to ensure that *nothing*
IPv6-related gets done for that interface.  The same can be true for
IPv4, but we don't have a way of doing that right now.
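[Editor's sketch: the existing per-interface IPv6 knob Dan refers to is a
sysctl; a minimal sysctl.conf-style fragment is shown below.  Note the
disable_ipv4 analogue is only being *proposed* in this thread; the second
entry is hypothetical and did not exist at the time.]

```ini
# Existing per-interface knob: disable all IPv6 processing (IPv6LL address,
# kernel SLAAC, router solicitation) on eth0
net.ipv6.conf.eth0.disable_ipv6 = 1

# Hypothetical analogue discussed in this thread -- this sysctl did NOT
# exist at the time of writing; shown only to illustrate the symmetry:
# net.ipv4.conf.eth0.disable_ipv4 = 1
```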

Dan



From xen-devel-bounces@lists.xen.org Mon Feb 24 18:40:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI0S3-0000x7-DD; Mon, 24 Feb 2014 18:40:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1WI0S2-0000x2-1q
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 18:40:26 +0000
Received: from [85.158.137.68:16971] by server-16.bemta-3.messagelabs.com id
	DA/B9-29917-9129B035; Mon, 24 Feb 2014 18:40:25 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393267222!3921939!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9953 invoked from network); 24 Feb 2014 18:40:23 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 18:40:23 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s1OIdeFG026873
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Mon, 24 Feb 2014 13:39:40 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s1OIddRp026871;
	Mon, 24 Feb 2014 13:39:39 -0500
Date: Mon, 24 Feb 2014 14:39:39 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Paul Bolle <pebolle@tiscali.nl>, phcoder@gmail.com
Message-ID: <20140224183939.GA26794@andromeda.dapyr.net>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392718467.30073.12.camel@x220>
User-Agent: Mutt/1.5.9i
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	Richard Weinberger <richard@nod.at>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 11:14:27AM +0100, Paul Bolle wrote:
> On Mon, 2014-02-17 at 09:43 -0500, Konrad Rzeszutek Wilk wrote:
> > On Mon, Feb 17, 2014 at 02:03:17PM +0100, Paul Bolle wrote:
> > > On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
> > > > Please look in the grub git tree. They have fixed their code to not do
> > > > this anymore. This should be reflected in the patch description.
> > > 
> > > Thanks, I didn't know that. That turned out to be grub commit
> > > ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
> > > use it to implement generating of config"), see
> > > http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059
> 
> And that commit was reverted a week later in grub commit
> faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 ("Revert grub-file usage in
> grub-mkconfig."), see
> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 .
> 
> That commit has no explanation (other than its one line summary). So
> we're left guessing why this was done. Luckily, it doesn't matter here,
> because the test for CONFIG_XEN_PRIVILEGED_GUEST is superfluous.

How about we ask Vladimir?

Vladimir - could you shed some light on it? Thanks!

> 
> Anyhow, I hope to submit a second version of this patch later today.
> 
> 
> Paul Bolle
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 18:40:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI0S3-0000x7-DD; Mon, 24 Feb 2014 18:40:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <darnok@68k.org>) id 1WI0S2-0000x2-1q
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 18:40:26 +0000
Received: from [85.158.137.68:16971] by server-16.bemta-3.messagelabs.com id
	DA/B9-29917-9129B035; Mon, 24 Feb 2014 18:40:25 +0000
X-Env-Sender: darnok@68k.org
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393267222!3921939!1
X-Originating-IP: [206.212.254.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9953 invoked from network); 24 Feb 2014 18:40:23 -0000
Received: from andromeda.dapyr.net (HELO andromeda.dapyr.net) (206.212.254.10)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 24 Feb 2014 18:40:23 -0000
Received: from andromeda.dapyr.net (darnok@localhost [127.0.0.1])
	by andromeda.dapyr.net (8.13.4/8.13.4/Debian-3sarge3) with ESMTP id
	s1OIdeFG026873
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NOT);
	Mon, 24 Feb 2014 13:39:40 -0500
Received: (from darnok@localhost)
	by andromeda.dapyr.net (8.13.4/8.13.4/Submit) id s1OIddRp026871;
	Mon, 24 Feb 2014 13:39:39 -0500
Date: Mon, 24 Feb 2014 14:39:39 -0400
From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Paul Bolle <pebolle@tiscali.nl>, phcoder@gmail.com
Message-ID: <20140224183939.GA26794@andromeda.dapyr.net>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392718467.30073.12.camel@x220>
User-Agent: Mutt/1.5.9i
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	Richard Weinberger <richard@nod.at>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 18, 2014 at 11:14:27AM +0100, Paul Bolle wrote:
> On Mon, 2014-02-17 at 09:43 -0500, Konrad Rzeszutek Wilk wrote:
> > On Mon, Feb 17, 2014 at 02:03:17PM +0100, Paul Bolle wrote:
> > > On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
> > > > On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
> > > > Please look in the grub git tree. They have fixed their code to not do
> > > > this anymore. This should be reflected in the patch description.
> > > 
> > > Thanks, I didn't know that. That turned out to be grub commit
> > > ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
> > > use it to implement generating of config"), see
> > > http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059
> 
> And that commit was reverted a week later in grub commit
> faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 ("Revert grub-file usage in
> grub-mkconfig."), see
> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 .
> 
> That commit has no explanation (other than its one line summary). So
> we're left guessing why this was done. Luckily, it doesn't matter here,
> because the test for CONFIG_XEN_PRIVILEGED_GUEST is superfluous.

How about we ask Vladimir?

Vladimir - could you shed some light on it? Thanks!

> 
> Anyhow, I hope to submit a second version of this patch later today.
> 
> 
> Paul Bolle
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 18:51:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI0cx-00018X-A3; Mon, 24 Feb 2014 18:51:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WI0cu-00018S-Vc
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 18:51:41 +0000
Received: from [85.158.139.211:56862] by server-15.bemta-5.messagelabs.com id
	2D/6E-24395-CB49B035; Mon, 24 Feb 2014 18:51:40 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1393267898!5912203!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3472 invoked from network); 24 Feb 2014 18:51:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 18:51:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,536,1389744000"; d="scan'208";a="103637010"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 18:51:37 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 13:51:37 -0500
Message-ID: <530B94B7.5080602@citrix.com>
Date: Mon, 24 Feb 2014 18:51:35 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Paul Bolle <pebolle@tiscali.nl>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>	<1392642197.13000.20.camel@x220>	<20140217144307.GB28658@localhost.localdomain>	<1392718467.30073.12.camel@x220>
	<1392728861.5144.10.camel@x220>
In-Reply-To: <1392728861.5144.10.camel@x220>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Richard Weinberger <richard@nod.at>,
	Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter
	Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Thomas Gleixner <tglx@linutronix.de>
Subject: Re: [Xen-devel] [PATCH v2] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 18/02/14 13:07, Paul Bolle wrote:
> This patch removes the Kconfig symbol XEN_PRIVILEGED_GUEST which is
> used nowhere in the tree.
> 
> We do know grub2 has a script that greps kernel configuration files for
> its macro. It shouldn't do that. As Linus summarized:
>     This is a grub bug. It really is that simple. Treat it as one.
> 
> Besides, grub2's grepping for that macro is actually superfluous. See,
> that script currently contains this test (simplified):
>     grep -x CONFIG_XEN_DOM0=y $config || grep -x CONFIG_XEN_PRIVILEGED_GUEST=y $config
> 
> But since XEN_DOM0 and XEN_PRIVILEGED_GUEST are by definition equal,
> removing XEN_PRIVILEGED_GUEST cannot influence this test.
> 
> So there's no reason to not remove this symbol, like we do with all
> unused Kconfig symbols.
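
The simplified test quoted above can be sketched as a standalone shell
check; the temporary file below is a stand-in for the real kernel .config
path, which is an assumption of this sketch:

```shell
#!/bin/sh
# Sketch of the simplified grub2 test quoted above.
# $config here is a hypothetical stand-in for a kernel .config file.
set -e
config=$(mktemp)
printf 'CONFIG_XEN_DOM0=y\nCONFIG_XEN_PRIVILEGED_GUEST=y\n' > "$config"

# grep -x matches whole lines only; the two greps are OR-ed, so the
# second runs only when the first finds nothing.
if grep -qx 'CONFIG_XEN_DOM0=y' "$config" || \
   grep -qx 'CONFIG_XEN_PRIVILEGED_GUEST=y' "$config"; then
	echo "dom0-capable"
fi

# Drop XEN_PRIVILEGED_GUEST, as the patch does: the first grep still
# matches, so the overall result of the test is unchanged.
printf 'CONFIG_XEN_DOM0=y\n' > "$config"
if grep -qx 'CONFIG_XEN_DOM0=y' "$config" || \
   grep -qx 'CONFIG_XEN_PRIVILEGED_GUEST=y' "$config"; then
	echo "dom0-capable"
fi
rm -f "$config"
```

Because XEN_DOM0=y always accompanies XEN_PRIVILEGED_GUEST=y, the second
grep can never flip the OR's outcome.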

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 18:57:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI0in-0001HB-TU; Mon, 24 Feb 2014 18:57:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WI0il-0001Gt-HV; Mon, 24 Feb 2014 18:57:43 +0000
Received: from [85.158.139.211:30820] by server-7.bemta-5.messagelabs.com id
	11/E1-14867-6269B035; Mon, 24 Feb 2014 18:57:42 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393268259!5903854!1
X-Originating-IP: [209.85.216.41]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29910 invoked from network); 24 Feb 2014 18:57:40 -0000
Received: from mail-qa0-f41.google.com (HELO mail-qa0-f41.google.com)
	(209.85.216.41)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 18:57:40 -0000
Received: by mail-qa0-f41.google.com with SMTP id w8so6864584qac.0
	for <multiple recipients>; Mon, 24 Feb 2014 10:57:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=C5Q/sQM3qD0x0LCzXl30UOOzqvFkZFQR0xrj/4mDE8M=;
	b=qIGFohBEkl5XrLhGBQX6t5hyuS0HQ66b97JlycdPTo6P2GPwZptr9ym29gWDjcU/3V
	Q3om4BoQEF5QaVKEHXdR9bBJgZ4fUBANjauhk94/r0JlxTQdxwM4JesERgvfk5M7MdJU
	E8T/kH4OJBnBff9C/Zfiw6aVY83b+L53QSh/xJXGtn2oSNGuFlZd2SpvXXu8blJCxw9v
	MurmMpiStJ/tQtctjbsB/uWE6oBiel7+YWB4kKsNE9QELigFvpGgtp4JmRDk67khe0NV
	d7OyBsaH+soHr47zp7P2NLhgPY4Pi295aZk0J/F5xhFS2clL96rx8yezd54QpMcOfX2X
	6wiw==
MIME-Version: 1.0
X-Received: by 10.140.95.179 with SMTP id i48mr30973503qge.1.1393268259343;
	Mon, 24 Feb 2014 10:57:39 -0800 (PST)
Received: by 10.140.89.47 with HTTP; Mon, 24 Feb 2014 10:57:39 -0800 (PST)
Date: Mon, 24 Feb 2014 18:57:39 +0000
Message-ID: <CAOqnZH6gx62Jx76=_5H-ux=XBCmhXSZ5x0Qi3kKFHpix5Q_UWA@mail.gmail.com>
From: Lars Kurth <lars.kurth.xen@gmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>, 
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
Cc: Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: [Xen-devel] Fwd: [Xen Project] Your organization application has
	been accepted.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1251797512365626616=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1251797512365626616==
Content-Type: multipart/alternative; boundary=001a11c16dfcacd2c504f32b8ba7

--001a11c16dfcacd2c504f32b8ba7
Content-Type: text/plain; charset=ISO-8859-1

Good news everyone: now the real work awaits.

Cleaning up the project list, etc. and making sure that mentors are signed
up via https://www.google-melange.com/gsoc/homepage/google/gsoc2014, etc. -
Ian Jackson will receive mails from GSoC while I am away and may have to
forward key bits of information to you all.

I would suggest a conference call that includes mentors to tidy the project
list pretty soon. Maybe Russell can act as GSoC master, arrange the call
and chase mentors while I am away. We need to make sure that the project
list is in good shape before the 10th. Ideally by next week.

Also our home page in Melange may need fixing. @Russell, please read the
mail below and let's discuss tomorrow before I leave.

24 February, 19:00 UTC: List of accepted mentoring organizations published
on the *Google Summer of Code* 2014 site.
Interim Period: Would-be students discuss project ideas with potential
mentoring organizations.
28 February, 16:00 UTC: IRC meeting with rejected mentoring organizations.
10 March, 19:00 UTC: Student application period opens.

IMPORTANT: in the application I stated that we encourage students to work
on bite size projects before the application (in the same way as it works
for OPW). Maybe OPW mentors want to share their experiences!

This will come your way in the next few weeks and you really should use
those 2-3 weeks to find out which students can work with you.

Best Regards
Lars

---------- Forwarded message ----------
From: <no-reply@google-melange.appspotmail.com>
Date: 24 Feb 2014 18:41
Subject: [Xen Project] Your organization application has been accepted.
To: <lars.kurth@xenproject.org>
Cc:

Congratulations!

Your Organization Application for Xen Project to Google Summer of Code 2014
has been accepted. Your organization's information will be auto-populated
from your Organization profile onto our Participating Organizations
page. Please click
http://www.google-melange.com/gsoc/org/profile/edit/google/gsoc2014/xen_project
if you want to modify your Organization profile. We recommend you take
time to review your profile carefully and add relevant information and
links for the students where applicable.

If you have questions about items on this profile we would recommend you
read over the Melange User's Guide's section titled "Our organization was
accepted! How do I create my organization
homepage?<http://en.flossmanuals.net/melange/org-application-period/>
"

Please be aware that your organization's information is now available to
students, so you should expect to get contact from potential applicants
very soon. If you have any questions, please email Carol Smith directly at
carols@google.com, but generally, you are off to the races. Student
applications formally open on 10 March at 19:00 UTC.

Best regards,

--001a11c16dfcacd2c504f32b8ba7
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div class=3D"gmail_quote">Good news everyone: now the re=
al work awaits.</div><div class=3D"gmail_quote"><br></div><div class=3D"gma=
il_quote">Cleaning up the project list, etc. and making sure that mentors a=
re signed up via=A0<a href=3D"https://www.google-melange.com/gsoc/homepage/=
google/gsoc2014">https://www.google-melange.com/gsoc/homepage/google/gsoc20=
14</a>, etc. - Ian Jackson will receive mails from GSoC while I am away and=
 may have to forward key bits of information to you all.</div>
<div class=3D"gmail_quote"><br></div><div class=3D"gmail_quote">I would sug=
gest a conference call that includes mentors to tidy the project list prett=
y soon. Maybe Russell can act as GSoC master, arrange the call and chase me=
ntors while I am away. We need to make sure that the project list is in goo=
d shape before the 10th. Ideally by next week.</div>
<div class=3D"gmail_quote"><br></div><div class=3D"gmail_quote">Also our ho=
me page in Melange may need fixing. @Russell, please read the mail below a=
nd lets discuss tomorrow before I leave.</div><div class=3D"gmail_quote"><b=
r>
</div><div class=3D"gmail_quote"><table style=3D"margin:0px;padding:0px;bor=
der:none;outline:0px;font-size:13px;vertical-align:baseline;background-colo=
r:rgb(246,246,246);border-collapse:collapse;border-spacing:0px;color:rgb(0,=
0,0);font-family:Arial,&#39;Helvetica Neue&#39;,Helvetica,sans-serif;line-h=
eight:19.5px">
<tbody style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertical-alig=
n:baseline;background-color:transparent"><tr style=3D"margin:0px;padding:0p=
x;border:0px;outline:0px;vertical-align:baseline;background-color:transpare=
nt;height:0px">
<td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-a=
lign:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin:0pt =
6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:baseli=
ne;background-color:transparent;line-height:1.5em;color:rgb(35,139,210)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">24 February:</span></h3><h3 dir=3D"ltr" style=3D"margin:=
0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:ba=
seline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210)=
">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">19:00 UTC</span></h3></td><td style=3D"padding:7px;borde=
r:1px solid rgb(0,0,0);outline:0px;vertical-align:top;background-color:tran=
sparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">List of accepted mento=
ring organizations published on the <em style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;vertical-align:baseline;background-color:transparent">G=
oogle Summer of Code</em> 2014 site.</span></h3>
</td></tr><tr style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertic=
al-align:baseline;background-color:transparent;height:0px"><td style=3D"pad=
ding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-align:top;backgro=
und-color:transparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">Interim Period:</span>=
</h3>
</td><td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;verti=
cal-align:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin=
:0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:b=
aseline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210=
)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">Would-be students discuss project ideas with potential m=
entoring organizations.</span></h3>
</td></tr><tr style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertic=
al-align:baseline;background-color:transparent;height:0px"><td style=3D"pad=
ding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-align:top;backgro=
und-color:transparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">28 February 16:00 UTC:=
</span></h3>
</td><td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;verti=
cal-align:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin=
:0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:b=
aseline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210=
)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">IRC meeting with rejected mentoring organizations.</=
span></h3>
</td></tr><tr style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertic=
al-align:baseline;background-color:transparent;height:0px"><td style=3D"pad=
ding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-align:top;backgro=
und-color:transparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">10 March:</span></h3>
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">19:00 UTC</span></h3>
</td><td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;verti=
cal-align:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin=
:0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:b=
aseline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210=
)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">Student application period opens.</span></h3></td></tr>
</tbody></table></div><div class=3D"gmail_quote"><br></div><div class=3D"gm=
ail_quote">IMPORTANT: in the application I stated that we encourage student=
s to work on bite size projects before the application (in the same way as =
it works for OPW). Maybe OPW mentors want to share their experiences!</div>
<div class=3D"gmail_quote"><br></div><div class=3D"gmail_quote">This will c=
ome your way in the next few weeks and you really should use those 2-3 week=
s to find out which students can work with you.=A0</div><div class=3D"gmail=
_quote">
<br></div><div class=3D"gmail_quote">Best Regards</div><div class=3D"gmail_=
quote">Lars</div><div class=3D"gmail_quote"><br></div><div class=3D"gmail_q=
uote">---------- Forwarded message ----------<br>From:  &lt;<a href=3D"mail=
to:no-reply@google-melange.appspotmail.com" target=3D"_blank">no-reply@goog=
le-melange.appspotmail.com</a>&gt;<br>
Date: 24 Feb 2014 18:41<br>Subject: [Xen Project] Your organization applica=
tion has been accepted.<br>
To:  &lt;<a href=3D"mailto:lars.kurth@xenproject.org" target=3D"_blank">lar=
s.kurth@xenproject.org</a>&gt;<br>Cc: <br><br type=3D"attribution"><p><span=
 style=3D"font-size:small">Congratulations!</span></p>
<p><span style=3D"font-size:small">Your Organization Application for Xen Pr=
oject to Google Summer of Code 2014 has been accepted.=A0</span><span style=
=3D"font-size:small">Your organization&#39;s information will be auto-popul=
ated from your Organization profile onto our Participating Organizations pa=
ge.=A0</span><span style=3D"font-size:small">Please click <a href=3D"http:/=
/www.google-melange.com/gsoc/org/profile/edit/google/gsoc2014/xen_project" =
target=3D"_blank">http://www.google-melange.com/gsoc/org/profile/edit/googl=
e/gsoc2014/xen_project</a> =A0if you want to modify your Organization profi=
le. We recommend you take time to review your profile carefully and add rel=
evant information and links for the students where applicable.=A0</span></p=
>


<p><span style=3D"font-size:small">If you have questions about items on thi=
s profile we would recommend you read over the Melange User&#39;s Guide&#39=
;s section titled &quot;<a href=3D"http://en.flossmanuals.net/melange/org-a=
pplication-period/" target=3D"_blank">Our organization was accepted! How do=
 I create my organization homepage?</a>&quot;</span></p>


<p><span style=3D"font-size:small">Please be aware the your organization&#3=
9;s information is now available to students, so you should expect to get c=
ontact from potential applicants very soon. If you have any questions, plea=
se email Carol Smith directly at <a href=3D"mailto:carols@google.com" targe=
t=3D"_blank">carols@google.com</a>, but generally, you are off to the races=
. Student applications formally open on 10 March at 19:00 UTC.</span></p>


<p><span style=3D"font-size:small">Best regards,</span></p>
<p><span style=3D"font-size:small"></span></p></div>
</div>

--001a11c16dfcacd2c504f32b8ba7--


--===============1251797512365626616==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1251797512365626616==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 18:57:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 18:57:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI0in-0001HB-TU; Mon, 24 Feb 2014 18:57:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WI0il-0001Gt-HV; Mon, 24 Feb 2014 18:57:43 +0000
Received: from [85.158.139.211:30820] by server-7.bemta-5.messagelabs.com id
	11/E1-14867-6269B035; Mon, 24 Feb 2014 18:57:42 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393268259!5903854!1
X-Originating-IP: [209.85.216.41]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29910 invoked from network); 24 Feb 2014 18:57:40 -0000
Received: from mail-qa0-f41.google.com (HELO mail-qa0-f41.google.com)
	(209.85.216.41)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 18:57:40 -0000
Received: by mail-qa0-f41.google.com with SMTP id w8so6864584qac.0
	for <multiple recipients>; Mon, 24 Feb 2014 10:57:39 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=C5Q/sQM3qD0x0LCzXl30UOOzqvFkZFQR0xrj/4mDE8M=;
	b=qIGFohBEkl5XrLhGBQX6t5hyuS0HQ66b97JlycdPTo6P2GPwZptr9ym29gWDjcU/3V
	Q3om4BoQEF5QaVKEHXdR9bBJgZ4fUBANjauhk94/r0JlxTQdxwM4JesERgvfk5M7MdJU
	E8T/kH4OJBnBff9C/Zfiw6aVY83b+L53QSh/xJXGtn2oSNGuFlZd2SpvXXu8blJCxw9v
	MurmMpiStJ/tQtctjbsB/uWE6oBiel7+YWB4kKsNE9QELigFvpGgtp4JmRDk67khe0NV
	d7OyBsaH+soHr47zp7P2NLhgPY4Pi295aZk0J/F5xhFS2clL96rx8yezd54QpMcOfX2X
	6wiw==
MIME-Version: 1.0
X-Received: by 10.140.95.179 with SMTP id i48mr30973503qge.1.1393268259343;
	Mon, 24 Feb 2014 10:57:39 -0800 (PST)
Received: by 10.140.89.47 with HTTP; Mon, 24 Feb 2014 10:57:39 -0800 (PST)
Date: Mon, 24 Feb 2014 18:57:39 +0000
Message-ID: <CAOqnZH6gx62Jx76=_5H-ux=XBCmhXSZ5x0Qi3kKFHpix5Q_UWA@mail.gmail.com>
From: Lars Kurth <lars.kurth.xen@gmail.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>, 
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
Cc: Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: [Xen-devel] Fwd: [Xen Project] Your organization application has
	been accepted.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1251797512365626616=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1251797512365626616==
Content-Type: multipart/alternative; boundary=001a11c16dfcacd2c504f32b8ba7

--001a11c16dfcacd2c504f32b8ba7
Content-Type: text/plain; charset=ISO-8859-1

Good news, everyone: now the real work awaits.

Cleaning up the project list, making sure that mentors are signed up via
https://www.google-melange.com/gsoc/homepage/google/gsoc2014, and so on.
Ian Jackson will receive mails from GSoC while I am away and may have to
forward key bits of information to you all.

I would suggest a conference call that includes mentors to tidy the project
list pretty soon. Maybe Russell can act as GSoC master, arrange the call
and chase mentors while I am away. We need to make sure that the project
list is in good shape before the 10th, ideally by next week.

Also our home page in Melange may need fixing. @Russell, please read the
mail below and let's discuss tomorrow before I leave.

24 February, 19:00 UTC: List of accepted mentoring organizations published
on the *Google Summer of Code* 2014 site.
Interim period: Would-be students discuss project ideas with potential
mentoring organizations.
28 February, 16:00 UTC: IRC meeting with rejected mentoring organizations.
10 March, 19:00 UTC: Student application period opens.

IMPORTANT: In the application I stated that we encourage students to work
on bite-size projects before applying (in the same way as it works
for OPW). Maybe OPW mentors want to share their experiences!

This will come your way in the next few weeks and you really should use
those 2-3 weeks to find out which students can work with you.

Best Regards
Lars

---------- Forwarded message ----------
From: <no-reply@google-melange.appspotmail.com>
Date: 24 Feb 2014 18:41
Subject: [Xen Project] Your organization application has been accepted.
To: <lars.kurth@xenproject.org>
Cc:

Congratulations!

Your Organization Application for Xen Project to Google Summer of Code 2014
has been accepted. Your organization's information will be auto-populated
from your Organization profile onto our Participating Organizations
page. Please
click
http://www.google-melange.com/gsoc/org/profile/edit/google/gsoc2014/xen_project
if you want to modify your Organization profile. We recommend you take
time to review your profile carefully and add relevant information and
links for the students where applicable.

If you have questions about items on this profile we would recommend you
read over the Melange User's Guide's section titled "Our organization was
accepted! How do I create my organization
homepage?<http://en.flossmanuals.net/melange/org-application-period/>
"

Please be aware that your organization's information is now available to
students, so you should expect to be contacted by potential applicants
very soon. If you have any questions, please email Carol Smith directly at
carols@google.com, but generally, you are off to the races. Student
applications formally open on 10 March at 19:00 UTC.

Best regards,

--001a11c16dfcacd2c504f32b8ba7
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div class=3D"gmail_quote">Good news, everyone: now the re=
al work awaits.</div><div class=3D"gmail_quote"><br></div><div class=3D"gma=
il_quote">Cleaning up the project list, etc. and making sure that mentors a=
re signed up via=A0<a href=3D"https://www.google-melange.com/gsoc/homepage/=
google/gsoc2014">https://www.google-melange.com/gsoc/homepage/google/gsoc20=
14</a>, etc. - Ian Jackson will receive mails from GSoC while I am away and=
 may have to forward key bits of information to you all.</div>
<div class=3D"gmail_quote"><br></div><div class=3D"gmail_quote">I would sug=
gest a conference call that includes mentors to tidy the project list prett=
y soon. Maybe Russell can act as GSoC master, arrange the call and chase me=
ntors while I am away. We need to make sure that the project list is in goo=
d shape before the 10th. Ideally by next week.</div>
<div class=3D"gmail_quote"><br></div><div class=3D"gmail_quote">Also our ho=
me page in Melange may need fixing. @Russell, please read the mail below a=
nd let's discuss tomorrow before I leave.</div><div class=3D"gmail_quote"><b=
r>
</div><div class=3D"gmail_quote"><table style=3D"margin:0px;padding:0px;bor=
der:none;outline:0px;font-size:13px;vertical-align:baseline;background-colo=
r:rgb(246,246,246);border-collapse:collapse;border-spacing:0px;color:rgb(0,=
0,0);font-family:Arial,&#39;Helvetica Neue&#39;,Helvetica,sans-serif;line-h=
eight:19.5px">
<tbody style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertical-alig=
n:baseline;background-color:transparent"><tr style=3D"margin:0px;padding:0p=
x;border:0px;outline:0px;vertical-align:baseline;background-color:transpare=
nt;height:0px">
<td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-a=
lign:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin:0pt =
6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:baseli=
ne;background-color:transparent;line-height:1.5em;color:rgb(35,139,210)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">24 February:</span></h3><h3 dir=3D"ltr" style=3D"margin:=
0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:ba=
seline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210)=
">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">19:00 UTC</span></h3></td><td style=3D"padding:7px;borde=
r:1px solid rgb(0,0,0);outline:0px;vertical-align:top;background-color:tran=
sparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">List of accepted mento=
ring organizations published on the <em style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;vertical-align:baseline;background-color:transparent">G=
oogle Summer of Code</em> 2014 site.</span></h3>
</td></tr><tr style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertic=
al-align:baseline;background-color:transparent;height:0px"><td style=3D"pad=
ding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-align:top;backgro=
und-color:transparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">Interim Period:</span>=
</h3>
</td><td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;verti=
cal-align:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin=
:0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:b=
aseline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210=
)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">Would-be students discuss project ideas with potential m=
entoring organizations.</span></h3>
</td></tr><tr style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertic=
al-align:baseline;background-color:transparent;height:0px"><td style=3D"pad=
ding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-align:top;backgro=
und-color:transparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">28 February 16:00 UTC:=
</span></h3>
</td><td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;verti=
cal-align:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin=
:0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:b=
aseline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210=
)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">IRC meeting with rejected mentoring organizations.</=
span></h3>
</td></tr><tr style=3D"margin:0px;padding:0px;border:0px;outline:0px;vertic=
al-align:baseline;background-color:transparent;height:0px"><td style=3D"pad=
ding:7px;border:1px solid rgb(0,0,0);outline:0px;vertical-align:top;backgro=
und-color:transparent">
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">10 March:</span></h3>
<h3 dir=3D"ltr" style=3D"margin:0pt 6pt;padding:0px;border:0px;outline:0px;=
font-size:13px;vertical-align:baseline;background-color:transparent;line-he=
ight:1.5em;color:rgb(35,139,210)"><span style=3D"margin:0px;padding:0px;bor=
der:0px;outline:0px;font-size:16px;vertical-align:baseline;background-color=
:transparent;font-family:Arial;white-space:pre-wrap">19:00 UTC</span></h3>
</td><td style=3D"padding:7px;border:1px solid rgb(0,0,0);outline:0px;verti=
cal-align:top;background-color:transparent"><h3 dir=3D"ltr" style=3D"margin=
:0pt 6pt;padding:0px;border:0px;outline:0px;font-size:13px;vertical-align:b=
aseline;background-color:transparent;line-height:1.5em;color:rgb(35,139,210=
)">
<span style=3D"margin:0px;padding:0px;border:0px;outline:0px;font-size:16px=
;vertical-align:baseline;background-color:transparent;font-family:Arial;whi=
te-space:pre-wrap">Student application period opens.</span></h3></td></tr>
</tbody></table></div><div class=3D"gmail_quote"><br></div><div class=3D"gm=
ail_quote">IMPORTANT: in the application I stated that we encourage student=
s to work on bite size projects before the application (in the same way as =
it works for OPW). Maybe OPW mentors want to share their experiences!</div>
<div class=3D"gmail_quote"><br></div><div class=3D"gmail_quote">This will c=
ome your way in the next few weeks and you really should use those 2-3 week=
s to find out which students can work with you.=A0</div><div class=3D"gmail=
_quote">
<br></div><div class=3D"gmail_quote">Best Regards</div><div class=3D"gmail_=
quote">Lars</div><div class=3D"gmail_quote"><br></div><div class=3D"gmail_q=
uote">---------- Forwarded message ----------<br>From:  &lt;<a href=3D"mail=
to:no-reply@google-melange.appspotmail.com" target=3D"_blank">no-reply@goog=
le-melange.appspotmail.com</a>&gt;<br>
Date: 24 Feb 2014 18:41<br>Subject: [Xen Project] Your organization applica=
tion has been accepted.<br>
To:  &lt;<a href=3D"mailto:lars.kurth@xenproject.org" target=3D"_blank">lar=
s.kurth@xenproject.org</a>&gt;<br>Cc: <br><br type=3D"attribution"><p><span=
 style=3D"font-size:small">Congratulations!</span></p>
<p><span style=3D"font-size:small">Your Organization Application for Xen Pr=
oject to Google Summer of Code 2014 has been accepted.=A0</span><span style=
=3D"font-size:small">Your organization&#39;s information will be auto-popul=
ated from your Organization profile onto our Participating Organizations pa=
ge.=A0</span><span style=3D"font-size:small">Please click <a href=3D"http:/=
/www.google-melange.com/gsoc/org/profile/edit/google/gsoc2014/xen_project" =
target=3D"_blank">http://www.google-melange.com/gsoc/org/profile/edit/googl=
e/gsoc2014/xen_project</a> =A0if you want to modify your Organization profi=
le. We recommend you take time to review your profile carefully and add rel=
evant information and links for the students where applicable.=A0</span></p=
>


<p><span style=3D"font-size:small">If you have questions about items on thi=
s profile we would recommend you read over the Melange User&#39;s Guide&#39=
;s section titled &quot;<a href=3D"http://en.flossmanuals.net/melange/org-a=
pplication-period/" target=3D"_blank">Our organization was accepted! How do=
 I create my organization homepage?</a>&quot;</span></p>


<p><span style=3D"font-size:small">Please be aware that your organization&#3=
9;s information is now available to students, so you should expect to be c=
ontacted by potential applicants very soon. If you have any questions, plea=
se email Carol Smith directly at <a href=3D"mailto:carols@google.com" targe=
t=3D"_blank">carols@google.com</a>, but generally, you are off to the races=
. Student applications formally open on 10 March at 19:00 UTC.</span></p>


<p><span style=3D"font-size:small">Best regards,</span></p>
<p><span style=3D"font-size:small"></span></p></div>
</div>

--001a11c16dfcacd2c504f32b8ba7--


--===============1251797512365626616==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1251797512365626616==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 19:18:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI12k-0001eG-FE; Mon, 24 Feb 2014 19:18:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WI12i-0001eB-9q
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 19:18:20 +0000
Received: from [85.158.143.35:10899] by server-3.bemta-4.messagelabs.com id
	A4/D1-11539-BFA9B035; Mon, 24 Feb 2014 19:18:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393269496!7938281!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12104 invoked from network); 24 Feb 2014 19:18:17 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 19:18:17 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1OJIEnW026154
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 19:18:15 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OJIDWY003500
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 24 Feb 2014 19:18:14 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OJIBUW026216; Mon, 24 Feb 2014 19:18:11 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 11:18:11 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2F5531C02F0; Mon, 24 Feb 2014 14:18:10 -0500 (EST)
Date: Mon, 24 Feb 2014 14:18:10 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140224191810.GA7023@phenom.dumpdata.com>
References: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 13, 2014 at 04:59:27PM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> Hypercalls submitted by user space tools via the privcmd driver can
> take a long time (potentially many 10s of seconds) if the hypercall
> has many sub-operations.
> 
> A fully preemptible kernel may deschedule such a task in any upcall
> called from a hypercall continuation.
> 
> However, in a kernel with voluntary or no preemption, hypercall
> continuations in Xen allow event handlers to be run but the task
> issuing the hypercall will not be descheduled until the hypercall is
> complete and the ioctl returns to user space.  These long running
> tasks may also trigger the kernel's soft lockup detection.
> 
> Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
> bracket hypercalls that may be preempted.  Use these in the privcmd
> driver.
> 
> When returning from an upcall, call preempt_schedule_irq() if the
> current task was within a preemptible hypercall.
> 
> Since preempt_schedule_irq() can move the task to a different CPU,
> clear and set xen_in_preemptible_hcall around the call.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> Changes in v2:
> - Use per-cpu variable to mark preemptible regions
> - Call preempt_schedule_irq() from the correct place in
>   xen_hypervisor_callback


12929 ERROR: "xen_in_preemptible_hcall" [drivers/xen/xen-privcmd.ko] undefined!


Attached is the config file.
#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.14.0-rc3 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION="upstream"
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
# CONFIG_TICK_CPU_ACCOUNTING is not set
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_MEMCG is not set
# CONFIG_CGROUP_HUGETLB is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
CONFIG_RT_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
# CONFIG_USER_NS is not set
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_SYSFS_DEPRECATED_V2 is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_PCI_QUIRKS=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR is not set
CONFIG_CC_STACKPROTECTOR_NONE=y
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
# CONFIG_SYSTEM_TRUSTED_KEYRING is not set
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
# CONFIG_BLK_CMDLINE_PARSER is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
# CONFIG_X86_INTEL_LPSS is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_XEN_PVH=y
CONFIG_KVM_GUEST=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=y
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_MICROCODE_INTEL_EARLY=y
CONFIG_MICROCODE_AMD_EARLY=y
CONFIG_MICROCODE_EARLY=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
# CONFIG_MOVABLE_NODE is not set
# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_NEED_BOUNCE_POOL=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
# CONFIG_CMA is not set
# CONFIG_ZBUD is not set
# CONFIG_ZSWAP is not set
CONFIG_ZSMALLOC=y
# CONFIG_PGTABLE_MAPPING is not set
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
# CONFIG_EFI_STUB is not set
CONFIG_SECCOMP=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
# CONFIG_KEXEC_JUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
CONFIG_ACPI_DEBUG=y
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_INTEL_PSTATE is not set
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
CONFIG_X86_SPEEDSTEP_CENTRINO=m
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIE_ECRC=y
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_IOAPIC is not set
CONFIG_PCI_LABEL=y

#
# PCI host controller drivers
#
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set
CONFIG_X86_SYSFB=y

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=y
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
# CONFIG_IPV6_VTI is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_ADVANCED is not set

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_IRC=y
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
# CONFIG_NF_NAT_AMANDA is not set
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
# CONFIG_NF_NAT_TFTP is not set
# CONFIG_NF_TABLES is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_LOG=m
# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
CONFIG_IP_NF_MANGLE=y
# CONFIG_IP_NF_RAW is not set

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
# CONFIG_IP6_NF_RAW is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_HAVE_NET_DSA=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
# CONFIG_VSOCKETS is not set
# CONFIG_NETLINK_MMAP is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_NET_MPLS_GSO is not set
# CONFIG_HSR is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
# CONFIG_CGROUP_NET_CLASSID is not set
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_DMA_SHARED_BUFFER=y

#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_ZRAM=y
# CONFIG_ZRAM_DEBUG is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_VIRTIO_BLK=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_RSXX is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
# CONFIG_VMWARE_VMCI is not set

#
# Intel MIC Host Driver
#
# CONFIG_INTEL_MIC_HOST is not set

#
# Intel MIC Card Driver
#
# CONFIG_INTEL_MIC_CARD is not set
# CONFIG_GENWQE is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
CONFIG_BLK_DEV_3W_XXXX_RAID=m
# CONFIG_SCSI_HPSA is not set
CONFIG_SCSI_3W_9XXX=m
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
# CONFIG_SCSI_MVUMI is not set
CONFIG_SCSI_DPT_I2O=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
# CONFIG_SCSI_ESAS2R is not set
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS_LOGGING=y
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
CONFIG_SCSI_FLASHPOINT=y
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
# CONFIG_FCOE_FNIC is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_EATA=m
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=m
CONFIG_SCSI_GDTH=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
# CONFIG_SCSI_IPR_TRACE is not set
# CONFIG_SCSI_IPR_DUMP is not set
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
# CONFIG_TCM_QLA2XXX is not set
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_DC390T=m
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_SRP=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_SATA_INIC162X=m
# CONFIG_SATA_ACARD_AHCI is not set
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_HIGHBANK is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
# CONFIG_SATA_RCAR is not set
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARASAN_CF is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
CONFIG_PATA_EFAR=m
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MARVELL=m
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
CONFIG_PATA_SCH=m
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=m
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
CONFIG_PATA_PCMCIA=m
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_DM_THIN_PROVISIONING is not set
# CONFIG_DM_CACHE is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_LOG_USERSPACE is not set
# CONFIG_DM_RAID is not set
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_UEVENT is not set
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=m
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=m
# CONFIG_FIREWIRE_SBP2 is not set
# CONFIG_FIREWIRE_NET is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
# CONFIG_VXLAN is not set
CONFIG_NETCONSOLE=m
# CONFIG_NETCONSOLE_DYNAMIC is not set
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
CONFIG_SUNGEM_PHY=m
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#
CONFIG_VHOST_NET=y
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_RING=y
CONFIG_VHOST=y

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
# CONFIG_PCMCIA_3C574 is not set
# CONFIG_PCMCIA_3C589 is not set
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
CONFIG_ATL1C=m
# CONFIG_ALX is not set
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
CONFIG_IGBVF=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=y
# CONFIG_I40E is not set
# CONFIG_I40EVF is not set
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
# CONFIG_MLX5_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=m
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
CONFIG_NET_PACKET_ENGINE=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
# CONFIG_SH_ETH is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y

#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
CONFIG_MARVELL_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_RTL8152 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_POLLDEV is not set
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_WACOM is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=16
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_RSA is not set
# CONFIG_SERIAL_8250_DW is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_KGDB_NMI is not set
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_TPM=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set

#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=m
# CONFIG_TCG_TIS_I2C_ATMEL is not set
# CONFIG_TCG_TIS_I2C_INFINEON is not set
# CONFIG_TCG_TIS_I2C_NUVOTON is not set
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HTU21 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_INTEL_POWERCLAMP is not set
CONFIG_X86_PKG_TEMP_THERMAL=m
# CONFIG_ACPI_INT3403_THERMAL is not set

#
# Texas Instruments thermal drivers
#
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_LPC_ICH is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_TTM=m

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=m
# CONFIG_DRM_RADEON_UMS is not set
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set
# CONFIG_DRM_I810 is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_I915_FBDEV=y
# CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT is not set
# CONFIG_DRM_I915_UMS is not set
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_QXL is not set
# CONFIG_DRM_BOCHS is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_HDMI=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
# CONFIG_FB_BOOT_VESA_SUPPORT is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=y
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_GOLDFISH is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_FB_AUO_K190X is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_EXYNOS_VIDEO is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_GENERIC=y
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LP855X is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_LOGO is not set
# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=m
CONFIG_HIDRAW=y
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_HUION is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=m
# CONFIG_HID_ICADE is not set
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
# CONFIG_HID_MAGICMOUSE is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=m
# CONFIG_HID_ORTEK is not set
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=m
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEELSERIES is not set
CONFIG_HID_SUNPLUS=m
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=m
# CONFIG_HID_THINGM is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_WACOM is not set
# CONFIG_HID_WIIMOTE is not set
# CONFIG_HID_XINMO is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
# CONFIG_USB_FUSBH200_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
CONFIG_USB_PRINTER=y
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HSIC_USB3503 is not set

#
# USB Physical Layer drivers
#
# CONFIG_USB_PHY is not set
# CONFIG_USB_OTG_FSM is not set
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_SAMSUNG_USB2PHY is not set
# CONFIG_SAMSUNG_USB3PHY is not set
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_LP5523 is not set
# CONFIG_LEDS_LP5562 is not set
# CONFIG_LEDS_LP8501 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA9685 is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_DELL_NETBOOKS is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_OT200 is not set
# CONFIG_LEDS_BLINKM is not set

#
# LED Triggers
#
# CONFIG_LEDS_TRIGGERS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=y
# CONFIG_EDAC_MCE_INJ is not set
# CONFIG_EDAC_MM_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12057 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_MOXART is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
# CONFIG_INTEL_MID_DMAC is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_DW_DMAC_CORE is not set
# CONFIG_DW_DMAC is not set
# CONFIG_DW_DMAC_PCI is not set
# CONFIG_TIMB_DMA is not set
# CONFIG_PCH_DMA is not set
CONFIG_DMA_ENGINE=y
CONFIG_DMA_ACPI=y

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y
CONFIG_DCA=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y

#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=m
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_STAGING=y
# CONFIG_ET131X is not set
# CONFIG_SLICOSS is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_ECHO is not set
# CONFIG_COMEDI is not set
# CONFIG_RTS5139 is not set
# CONFIG_RTS5208 is not set
# CONFIG_TRANZPORT is not set
# CONFIG_IDE_PHISON is not set
# CONFIG_DX_SEP is not set
# CONFIG_FB_SM7XX is not set
# CONFIG_CRYSTALHD is not set
# CONFIG_FB_XGI is not set
# CONFIG_ACPI_QUICKSTART is not set
# CONFIG_USB_ENESTORAGE is not set
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_USB_WPAN_HCD is not set
# CONFIG_WIMAX_GDM72XX is not set
# CONFIG_LTE_GDM724X is not set
CONFIG_NET_VENDOR_SILICOM=y
# CONFIG_SBYPASS is not set
# CONFIG_BPCTL is not set
# CONFIG_CED1401 is not set
# CONFIG_DGRP is not set
# CONFIG_FIREWIRE_SERIAL is not set
# CONFIG_LUSTRE_FS is not set
# CONFIG_XILLYBUS is not set
# CONFIG_DGNC is not set
# CONFIG_DGAP is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_DELL_WMI is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WIRELESS is not set
# CONFIG_HP_WMI is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_ASUS_WMI is not set
CONFIG_ACPI_WMI=m
# CONFIG_MSI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_MXM_WMI=m
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_APPLE_GMUX is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set
# CONFIG_PVPANIC is not set
# CONFIG_CHROME_PLATFORMS is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
# CONFIG_FMC is not set

#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_PHY_EXYNOS_MIPI_VIDEO is not set
# CONFIG_POWERCAP is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_VARS_PSTORE=y
# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
CONFIG_EFI_RUNTIME_MAP=y
CONFIG_UEFI_CPER=y

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=m
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
# CONFIG_FUSE_FS is not set

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_F2FS_FS is not set
# CONFIG_EFIVAR_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_DYNAMIC_DEBUG is not set

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_DEBUG_KERNEL=y

#
# Memory Debugging
#
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
# CONFIG_DEBUG_KMEMLEAK_TEST is not set
# CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is not set
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_VM=y
# CONFIG_DEBUG_VM_RB is not set
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_DEBUG_SHIRQ is not set

#
# Debug Lockups and Hangs
#
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=1
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=21
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_FTRACE_SYSCALLS is not set
# CONFIG_TRACER_SNAPSHOT is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
# CONFIG_UPROBE_EVENT is not set
CONFIG_PROBE_EVENTS=y
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set

#
# Runtime Testing
#
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_TEST_MODULE is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_LOW_LEVEL_TRAP is not set
CONFIG_KGDB_KDB=y
# CONFIG_KDB_KEYBOARD is not set
CONFIG_KDB_CONTINUE_CATASTROPHIC=0
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_EFI is not set
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
CONFIG_DOUBLEFAULT=y
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_STATIC_CPU_HAS is not set

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_BIG_KEYS is not set
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# CONFIG_SECURITY_NETWORK_XFRM is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65534
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_IMA is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
# CONFIG_CRYPTO_CMAC is not set
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C_INTEL=m
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_CRC32_PCLMUL is not set
CONFIG_CRYPTO_CRCT10DIF=m
# CONFIG_CRYPTO_CRCT10DIF_PCLMUL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256_SSSE3 is not set
# CONFIG_CRYPTO_SHA512_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_X86_64 is not set
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=y
# CONFIG_CRYPTO_LZO is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
# CONFIG_CRYPTO_DEV_CCP is not set
# CONFIG_ASYMMETRIC_KEY_TYPE is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_KVM_DEVICE_ASSIGNMENT=y
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_T10DIF=m
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
# CONFIG_CRC8 is not set
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
# CONFIG_XZ_DEC_POWERPC is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_AVERAGE=y
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_FONT_SUPPORT=m
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 19:18:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI12k-0001eG-FE; Mon, 24 Feb 2014 19:18:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WI12i-0001eB-9q
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 19:18:20 +0000
Received: from [85.158.143.35:10899] by server-3.bemta-4.messagelabs.com id
	A4/D1-11539-BFA9B035; Mon, 24 Feb 2014 19:18:19 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393269496!7938281!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12104 invoked from network); 24 Feb 2014 19:18:17 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 19:18:17 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1OJIEnW026154
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 19:18:15 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OJIDWY003500
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Mon, 24 Feb 2014 19:18:14 GMT
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OJIBUW026216; Mon, 24 Feb 2014 19:18:11 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 11:18:11 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2F5531C02F0; Mon, 24 Feb 2014 14:18:10 -0500 (EST)
Date: Mon, 24 Feb 2014 14:18:10 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140224191810.GA7023@phenom.dumpdata.com>
References: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 13, 2014 at 04:59:27PM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> Hypercalls submitted by user space tools via the privcmd driver can
> take a long time (potentially many 10s of seconds) if the hypercall
> has many sub-operations.
> 
> A fully preemptible kernel may deschedule such a task in any upcall
> called from a hypercall continuation.
> 
> However, in a kernel with voluntary or no preemption, hypercall
> continuations in Xen allow event handlers to be run but the task
> issuing the hypercall will not be descheduled until the hypercall is
> complete and the ioctl returns to user space.  These long running
> tasks may also trigger the kernel's soft lockup detection.
> 
> Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
> bracket hypercalls that may be preempted.  Use these in the privcmd
> driver.
> 
> When returning from an upcall, call preempt_schedule_irq() if the
> current task was within a preemptible hypercall.
> 
> Since preempt_schedule_irq() can move the task to a different CPU,
> clear and set xen_in_preemptible_hcall around the call.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> Changes in v2:
> - Use per-cpu variable to mark preemptible regions
> - Call preempt_schedule_irq() from the correct place in
>   xen_hypervisor_callback
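
The bracketing described above can be sketched in plain C roughly as follows. This is a hypothetical, simplified model of the mechanism (a thread-local flag stands in for the kernel's per-cpu variable, and the clear/set around the reschedule point mirrors the note about preempt_schedule_irq() moving the task to another CPU); it is not the actual patch:

```c
#include <stdbool.h>

/* Stand-in for the per-cpu xen_in_preemptible_hcall flag. */
static __thread bool in_preemptible_hcall;

/* Mark entry/exit of a hypercall region that may be preempted. */
static inline void preemptible_hcall_begin(void) { in_preemptible_hcall = true; }
static inline void preemptible_hcall_end(void)   { in_preemptible_hcall = false; }

/* Model of the upcall-return path: only reschedule if the interrupted
 * task was inside a preemptible hypercall.  The flag is cleared before
 * and restored after, because the task may resume on a different CPU
 * (i.e. a different per-cpu instance of the flag). */
static int upcall_return(void (*reschedule)(void))
{
    if (!in_preemptible_hcall)
        return 0;                 /* not in a preemptible region */
    in_preemptible_hcall = false; /* clear before possible migration */
    reschedule();                 /* models preempt_schedule_irq() */
    in_preemptible_hcall = true;  /* set again on the (new) CPU */
    return 1;                     /* a reschedule was attempted */
}
```

A privcmd-style ioctl would then wrap the long-running hypercall in a `preemptible_hcall_begin()` / `preemptible_hcall_end()` pair, so upcalls taken during the continuation can deschedule the task instead of tripping the soft-lockup detector.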


12929 ERROR: "xen_in_preemptible_hcall" [drivers/xen/xen-privcmd.ko] undefined!


Attached is the config file.
#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.14.0-rc3 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION="upstream"
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
# CONFIG_TICK_CPU_ACCOUNTING is not set
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_MEMCG is not set
# CONFIG_CGROUP_HUGETLB is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
CONFIG_RT_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
# CONFIG_USER_NS is not set
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_SYSFS_DEPRECATED_V2 is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_PCI_QUIRKS=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR is not set
CONFIG_CC_STACKPROTECTOR_NONE=y
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
# CONFIG_SYSTEM_TRUSTED_KEYRING is not set
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
# CONFIG_BLK_CMDLINE_PARSER is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
# CONFIG_X86_INTEL_LPSS is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_XEN_PVH=y
CONFIG_KVM_GUEST=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=y
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_MICROCODE_INTEL_EARLY=y
CONFIG_MICROCODE_AMD_EARLY=y
CONFIG_MICROCODE_EARLY=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
# CONFIG_MOVABLE_NODE is not set
# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_NEED_BOUNCE_POOL=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
# CONFIG_CMA is not set
# CONFIG_ZBUD is not set
# CONFIG_ZSWAP is not set
CONFIG_ZSMALLOC=y
# CONFIG_PGTABLE_MAPPING is not set
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
# CONFIG_EFI_STUB is not set
CONFIG_SECCOMP=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
# CONFIG_KEXEC_JUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
CONFIG_ACPI_DEBUG=y
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_INTEL_PSTATE is not set
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
CONFIG_X86_SPEEDSTEP_CENTRINO=m
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIE_ECRC=y
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_IOAPIC is not set
CONFIG_PCI_LABEL=y

#
# PCI host controller drivers
#
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set
CONFIG_X86_SYSFB=y

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=y
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
# CONFIG_IPV6_VTI is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_ADVANCED is not set

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_IRC=y
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
# CONFIG_NF_NAT_AMANDA is not set
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
# CONFIG_NF_NAT_TFTP is not set
# CONFIG_NF_TABLES is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_LOG=m
# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
CONFIG_IP_NF_MANGLE=y
# CONFIG_IP_NF_RAW is not set

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
# CONFIG_IP6_NF_RAW is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_HAVE_NET_DSA=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
# CONFIG_VSOCKETS is not set
# CONFIG_NETLINK_MMAP is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_NET_MPLS_GSO is not set
# CONFIG_HSR is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
# CONFIG_CGROUP_NET_CLASSID is not set
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_DMA_SHARED_BUFFER=y

#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_ZRAM=y
# CONFIG_ZRAM_DEBUG is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_VIRTIO_BLK=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_RSXX is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
# CONFIG_VMWARE_VMCI is not set

#
# Intel MIC Host Driver
#
# CONFIG_INTEL_MIC_HOST is not set

#
# Intel MIC Card Driver
#
# CONFIG_INTEL_MIC_CARD is not set
# CONFIG_GENWQE is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
CONFIG_BLK_DEV_3W_XXXX_RAID=m
# CONFIG_SCSI_HPSA is not set
CONFIG_SCSI_3W_9XXX=m
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
# CONFIG_SCSI_MVUMI is not set
CONFIG_SCSI_DPT_I2O=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
# CONFIG_SCSI_ESAS2R is not set
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS_LOGGING=y
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
CONFIG_SCSI_FLASHPOINT=y
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
# CONFIG_FCOE_FNIC is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_EATA=m
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=m
CONFIG_SCSI_GDTH=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
# CONFIG_SCSI_IPR_TRACE is not set
# CONFIG_SCSI_IPR_DUMP is not set
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
# CONFIG_TCM_QLA2XXX is not set
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_DC390T=m
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_SRP=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_SATA_INIC162X=m
# CONFIG_SATA_ACARD_AHCI is not set
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_HIGHBANK is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
# CONFIG_SATA_RCAR is not set
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARASAN_CF is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
CONFIG_PATA_EFAR=m
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MARVELL=m
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
CONFIG_PATA_SCH=m
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=m
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
CONFIG_PATA_PCMCIA=m
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_DM_THIN_PROVISIONING is not set
# CONFIG_DM_CACHE is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_LOG_USERSPACE is not set
# CONFIG_DM_RAID is not set
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_UEVENT is not set
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=m
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=m
# CONFIG_FIREWIRE_SBP2 is not set
# CONFIG_FIREWIRE_NET is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
# CONFIG_VXLAN is not set
CONFIG_NETCONSOLE=m
# CONFIG_NETCONSOLE_DYNAMIC is not set
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
CONFIG_SUNGEM_PHY=m
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#
CONFIG_VHOST_NET=y
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_RING=y
CONFIG_VHOST=y

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
# CONFIG_PCMCIA_3C574 is not set
# CONFIG_PCMCIA_3C589 is not set
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
CONFIG_ATL1C=m
# CONFIG_ALX is not set
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
CONFIG_IGBVF=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=y
# CONFIG_I40E is not set
# CONFIG_I40EVF is not set
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
# CONFIG_MLX5_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=m
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
CONFIG_NET_PACKET_ENGINE=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
# CONFIG_SH_ETH is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y

#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
CONFIG_MARVELL_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_RTL8152 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_POLLDEV is not set
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_WACOM is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=16
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_RSA is not set
# CONFIG_SERIAL_8250_DW is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_KGDB_NMI is not set
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_TPM=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set

#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=m
# CONFIG_TCG_TIS_I2C_ATMEL is not set
# CONFIG_TCG_TIS_I2C_INFINEON is not set
# CONFIG_TCG_TIS_I2C_NUVOTON is not set
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HTU21 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_INTEL_POWERCLAMP is not set
CONFIG_X86_PKG_TEMP_THERMAL=m
# CONFIG_ACPI_INT3403_THERMAL is not set

#
# Texas Instruments thermal drivers
#
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_LPC_ICH is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_TTM=m

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=m
# CONFIG_DRM_RADEON_UMS is not set
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set
# CONFIG_DRM_I810 is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_I915_FBDEV=y
# CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT is not set
# CONFIG_DRM_I915_UMS is not set
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_QXL is not set
# CONFIG_DRM_BOCHS is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_HDMI=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
# CONFIG_FB_BOOT_VESA_SUPPORT is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=y
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_GOLDFISH is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_FB_AUO_K190X is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_EXYNOS_VIDEO is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_GENERIC=y
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LP855X is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_LOGO is not set
# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=m
CONFIG_HIDRAW=y
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_HUION is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=m
# CONFIG_HID_ICADE is not set
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
# CONFIG_HID_MAGICMOUSE is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=m
# CONFIG_HID_ORTEK is not set
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=m
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEELSERIES is not set
CONFIG_HID_SUNPLUS=m
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=m
# CONFIG_HID_THINGM is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_WACOM is not set
# CONFIG_HID_WIIMOTE is not set
# CONFIG_HID_XINMO is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
# CONFIG_USB_FUSBH200_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
CONFIG_USB_PRINTER=y
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HSIC_USB3503 is not set

#
# USB Physical Layer drivers
#
# CONFIG_USB_PHY is not set
# CONFIG_USB_OTG_FSM is not set
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_SAMSUNG_USB2PHY is not set
# CONFIG_SAMSUNG_USB3PHY is not set
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_LP5523 is not set
# CONFIG_LEDS_LP5562 is not set
# CONFIG_LEDS_LP8501 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA9685 is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_DELL_NETBOOKS is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_OT200 is not set
# CONFIG_LEDS_BLINKM is not set

#
# LED Triggers
#
# CONFIG_LEDS_TRIGGERS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=y
# CONFIG_EDAC_MCE_INJ is not set
# CONFIG_EDAC_MM_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12057 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_MOXART is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
# CONFIG_INTEL_MID_DMAC is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_DW_DMAC_CORE is not set
# CONFIG_DW_DMAC is not set
# CONFIG_DW_DMAC_PCI is not set
# CONFIG_TIMB_DMA is not set
# CONFIG_PCH_DMA is not set
CONFIG_DMA_ENGINE=y
CONFIG_DMA_ACPI=y

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y
CONFIG_DCA=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y

#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=m
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_STAGING=y
# CONFIG_ET131X is not set
# CONFIG_SLICOSS is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_ECHO is not set
# CONFIG_COMEDI is not set
# CONFIG_RTS5139 is not set
# CONFIG_RTS5208 is not set
# CONFIG_TRANZPORT is not set
# CONFIG_IDE_PHISON is not set
# CONFIG_DX_SEP is not set
# CONFIG_FB_SM7XX is not set
# CONFIG_CRYSTALHD is not set
# CONFIG_FB_XGI is not set
# CONFIG_ACPI_QUICKSTART is not set
# CONFIG_USB_ENESTORAGE is not set
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_USB_WPAN_HCD is not set
# CONFIG_WIMAX_GDM72XX is not set
# CONFIG_LTE_GDM724X is not set
CONFIG_NET_VENDOR_SILICOM=y
# CONFIG_SBYPASS is not set
# CONFIG_BPCTL is not set
# CONFIG_CED1401 is not set
# CONFIG_DGRP is not set
# CONFIG_FIREWIRE_SERIAL is not set
# CONFIG_LUSTRE_FS is not set
# CONFIG_XILLYBUS is not set
# CONFIG_DGNC is not set
# CONFIG_DGAP is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_DELL_WMI is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WIRELESS is not set
# CONFIG_HP_WMI is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_ASUS_WMI is not set
CONFIG_ACPI_WMI=m
# CONFIG_MSI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_MXM_WMI=m
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_APPLE_GMUX is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set
# CONFIG_PVPANIC is not set
# CONFIG_CHROME_PLATFORMS is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
# CONFIG_FMC is not set

#
# PHY Subsystem
#
# CONFIG_GENERIC_PHY is not set
# CONFIG_PHY_EXYNOS_MIPI_VIDEO is not set
# CONFIG_POWERCAP is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_VARS_PSTORE=y
# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
CONFIG_EFI_RUNTIME_MAP=y
CONFIG_UEFI_CPER=y

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=m
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
# CONFIG_FUSE_FS is not set

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_F2FS_FS is not set
# CONFIG_EFIVAR_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_DYNAMIC_DEBUG is not set

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_DEBUG_KERNEL=y

#
# Memory Debugging
#
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
# CONFIG_DEBUG_KMEMLEAK_TEST is not set
# CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is not set
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_VM=y
# CONFIG_DEBUG_VM_RB is not set
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_DEBUG_SHIRQ is not set

#
# Debug Lockups and Hangs
#
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=1
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=21
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_FTRACE_SYSCALLS is not set
# CONFIG_TRACER_SNAPSHOT is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
# CONFIG_UPROBE_EVENT is not set
CONFIG_PROBE_EVENTS=y
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set

#
# Runtime Testing
#
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_TEST_MODULE is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_LOW_LEVEL_TRAP is not set
CONFIG_KGDB_KDB=y
# CONFIG_KDB_KEYBOARD is not set
CONFIG_KDB_CONTINUE_CATASTROPHIC=0
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_EFI is not set
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
CONFIG_DOUBLEFAULT=y
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_STATIC_CPU_HAS is not set

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_BIG_KEYS is not set
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# CONFIG_SECURITY_NETWORK_XFRM is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65534
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_IMA is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
# CONFIG_CRYPTO_CMAC is not set
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C_INTEL=m
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_CRC32_PCLMUL is not set
CONFIG_CRYPTO_CRCT10DIF=m
# CONFIG_CRYPTO_CRCT10DIF_PCLMUL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256_SSSE3 is not set
# CONFIG_CRYPTO_SHA512_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_X86_64 is not set
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=y
# CONFIG_CRYPTO_LZO is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
# CONFIG_CRYPTO_DEV_CCP is not set
# CONFIG_ASYMMETRIC_KEY_TYPE is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_KVM_DEVICE_ASSIGNMENT=y
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_T10DIF=m
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
# CONFIG_CRC8 is not set
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
# CONFIG_XZ_DEC_POWERPC is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_AVERAGE=y
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_FONT_SUPPORT=m
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 19:30:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI1EA-0001ra-Du; Mon, 24 Feb 2014 19:30:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1WI1E9-0001rV-1e
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 19:30:09 +0000
Received: from [85.158.139.211:42777] by server-16.bemta-5.messagelabs.com id
	12/4A-05060-0CD9B035; Mon, 24 Feb 2014 19:30:08 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393270206!5945740!1
X-Originating-IP: [15.201.208.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20857 invoked from network); 24 Feb 2014 19:30:07 -0000
Received: from g4t3425.houston.hp.com (HELO g4t3425.houston.hp.com)
	(15.201.208.53)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 19:30:07 -0000
Received: from G4W6310.americas.hpqcorp.net (g4w6310.houston.hp.com
	[16.210.26.217]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t3425.houston.hp.com (Postfix) with ESMTPS id A3BAFDA;
	Mon, 24 Feb 2014 19:30:05 +0000 (UTC)
Received: from G9W3612.americas.hpqcorp.net (16.216.186.47) by
	G4W6310.americas.hpqcorp.net (16.210.26.217) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 24 Feb 2014 19:28:33 +0000
Received: from G9W0737.americas.hpqcorp.net ([169.254.9.96]) by
	G9W3612.americas.hpqcorp.net ([16.216.186.47]) with mapi id
	14.03.0123.003; Mon, 24 Feb 2014 19:28:33 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] q35 in xen?  vfio in xen?
Thread-Index: AQHPL2VwX0KD0THDRguzpODwr34SVZrA6ArAgAO3TQCAAA2vYA==
Date: Mon, 24 Feb 2014 19:28:32 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
	<20140224164004.GO816@phenom.dumpdata.com>
In-Reply-To: <20140224164004.GO816@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Thanks for the info.  Your guest sees the virtual function as a pci device, just like I had suspected.  Unfortunately that won't work for me.  I guess I have to take a hard look at implementing pci-e passthrough using pciback then.

Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
Sent: Monday, February 24, 2014 9:40 AM
To: Zhang, Eniac
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] q35 in xen? vfio in xen?

On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
> Hi Konrad,
>
> Here's what I see when I start a VM under xen using pciback to pass a pci-e device into domU.  The device can be seen by the guest and functions fine.  But it is not seen as a pci-e device; rather, it looks just like an ordinary pci device, because only the first 0x100 bytes of its configuration space are accessible.  So if a driver needs data in the extended configuration space for certain features, it will fail.
>
> When you say you "did PCIe pass through of an VF of an SR-IOV device": are you actually using it as a pci-e device, or has it been throttled back to pci mode without your being aware of the difference?  If you did see the pci-e device in the guest, can you share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output from the guest?
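A minimal sketch of the distinction described above: a device whose visible configuration space stops at 0x100 exposes a 256-byte sysfs `config` file in the guest, while a device with visible PCIe extended space exposes 4096 bytes (the 0000:00:04.0 slot in the comment is assumed from the lspci output below):

```python
def config_space_kind(size_bytes):
    """Classify a device by the size of its sysfs 'config' file.

    pciback exposes only the first 0x100 bytes of config space, so such
    a device shows a 256-byte file in the guest; a device whose PCIe
    extended config space is visible shows a 4096-byte file.
    """
    return "pcie" if size_bytes >= 0x1000 else "pci"

# On a real guest, feed it the actual file size, e.g.
# (slot assumed from the lspci output below):
#   import os
#   config_space_kind(
#       os.path.getsize("/sys/bus/pci/devices/0000:00:04.0/config"))
print(config_space_kind(0x100))   # -> pci
print(config_space_kind(0x1000))  # -> pcie
```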

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Cirrus Logic GD 5446
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
# lspci -t
-[0000:00]-+-00.0
           +-01.0
           +-01.1
           +-01.3
           +-02.0
           +-03.0
           \-04.0
# lspci -s 00:04.0 -xxxxx
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

-bash-4.1# more /vm-pci.cfg
builder='hvm'
disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
memory = 2048
boot="d"
maxvcpus=32
vcpus=1
serial='pty'
vnclisten="0.0.0.0"
name="latest"
pci = ["0000:02:10.0"]
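The capability list in the hex dump above can also be decoded offline. A short sketch (the dump is reproduced verbatim from the lspci -xxxxx output above) that walks the pointer at offset 0x34 shows the guest is in fact presented an MSI-X capability (ID 0x11) followed by a PCI Express capability (ID 0x10), even though extended config space at 0x100 and above remains out of reach:

```python
# The 256-byte config space of the VF as seen by the guest, copied from
# the lspci -xxxxx output above.
DUMP = """\
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
"""

def parse_dump(text):
    """Turn the hex dump into a 256-byte config-space image."""
    cfg = bytearray(256)
    for line in text.strip().splitlines():
        off_text, _, byte_text = line.partition(":")
        off = int(off_text, 16)
        for i, b in enumerate(byte_text.split()):
            cfg[off + i] = int(b, 16)
    return bytes(cfg)

def walk_caps(cfg):
    """Follow the capability list: head pointer at 0x34, entries are (ID, next)."""
    caps, ptr = [], cfg[0x34]
    while ptr:
        caps.append(cfg[ptr])
        ptr = cfg[ptr + 1]
    return caps

print([hex(c) for c in walk_caps(parse_dump(DUMP))])  # -> ['0x11', '0x10']
```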

>
> Also to echo your second comment:  I might still be a newbie in the qemu field (I started working on this 4 months ago).  I thought the chipset limits what you can see/do in a vm.  I.e., if you have 440fx emulation then you can't have any pci-e devices (fake or passthrough) in the same system.  Is that not true?
>
> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 5:32 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
>
>
> On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
> >
> > Hi Konrad,
> >
> > Thanks for your reply.
> >
> > Yes, I am aware of the pciback.  Unfortunately it doesn't seem to
> > support pci-e passthrough. (I could be wrong here)
>
> I just did PCIe pass through of a VF of an SR-IOV device. It certainly is PCIe.
>
> >
> > There are two reasons that I am interested in this.  For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time before it becomes obsolete.  The trend is clear that pci-e is taking over the world.
> >
>
> I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI passthrough.
>
> > Regards/Eniac
> >
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, February 21, 2014 2:50 PM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> >
> > On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > > Hi all,
> > >
> > > I am playing with the q35 chipset in qemu (1.6.1).  It seems we can't enable the q35 machine under xen yet.  I made a few quick hacks, which all failed miserably (linux kernel oops and windows BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
> > >
> > > Next question: vfio works very well for me in standalone qemu (with the Linux host handling the iommu), but is it supported under xen?  I haven't tried anything there yet, because my gut feeling is that it won't work:  passing a vfio device to qemu can only be done on the qemu command line, so xen is not aware of the passed-through device and thus cannot make the iommu arrangements for it.  Am I on the right track here?
> >
> > Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
> >
> > > I am interested in implementing both of these features.  I'd like to connect with anyone who's already working on this, so we don't duplicate the efforts.
> >
> > What do you need Q35 for?
> >
> > > Regards/Eniac
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 19:30:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI1EA-0001ra-Du; Mon, 24 Feb 2014 19:30:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eniac-xw.zhang@hp.com>) id 1WI1E9-0001rV-1e
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 19:30:09 +0000
Received: from [85.158.139.211:42777] by server-16.bemta-5.messagelabs.com id
	12/4A-05060-0CD9B035; Mon, 24 Feb 2014 19:30:08 +0000
X-Env-Sender: eniac-xw.zhang@hp.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393270206!5945740!1
X-Originating-IP: [15.201.208.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20857 invoked from network); 24 Feb 2014 19:30:07 -0000
Received: from g4t3425.houston.hp.com (HELO g4t3425.houston.hp.com)
	(15.201.208.53)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 19:30:07 -0000
Received: from G4W6310.americas.hpqcorp.net (g4w6310.houston.hp.com
	[16.210.26.217]) (using TLSv1 with cipher AES128-SHA (128/128 bits))
	(No client certificate requested)
	by g4t3425.houston.hp.com (Postfix) with ESMTPS id A3BAFDA;
	Mon, 24 Feb 2014 19:30:05 +0000 (UTC)
Received: from G9W3612.americas.hpqcorp.net (16.216.186.47) by
	G4W6310.americas.hpqcorp.net (16.210.26.217) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Mon, 24 Feb 2014 19:28:33 +0000
Received: from G9W0737.americas.hpqcorp.net ([169.254.9.96]) by
	G9W3612.americas.hpqcorp.net ([16.216.186.47]) with mapi id
	14.03.0123.003; Mon, 24 Feb 2014 19:28:33 +0000
From: "Zhang, Eniac" <eniac-xw.zhang@hp.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Thread-Topic: [Xen-devel] q35 in xen?  vfio in xen?
Thread-Index: AQHPL2VwX0KD0THDRguzpODwr34SVZrA6ArAgAO3TQCAAA2vYA==
Date: Mon, 24 Feb 2014 19:28:32 +0000
Message-ID: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
	<20140224164004.GO816@phenom.dumpdata.com>
In-Reply-To: <20140224164004.GO816@phenom.dumpdata.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [16.210.48.30]
MIME-Version: 1.0
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Konrad,

Thanks for the info.  Your guest sees the virtual function as  pci device j=
ust like I had suspected.  Unfortunately that won't work for me.  I guess I=
 have to take a hard look at implement pci-e passthrough using pciback then.

Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
Sent: Monday, February 24, 2014 9:40 AM
To: Zhang, Eniac
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] q35 in xen? vfio in xen?

On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
> Hi Konrad,
>
> Here's what I see when I start a VM under Xen using pciback to pass a
> PCIe device into domU.  The device can be seen by the guest, and is
> also functioning fine.  But it's not seen as a PCIe device; rather, it
> looks just like an ordinary PCI device, because only the first 0x100
> bytes of its configuration space are accessible.  So if a driver needs
> data in the extended configuration space for certain features, it will
> fail.
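The 0x100-byte boundary is easy to check from inside the guest: Linux
exposes each device's config space as /sys/bus/pci/devices/<BDF>/config,
whose size is 256 bytes for a conventional-PCI view and 4096 bytes when
the PCIe extended space is reachable. A minimal sketch (the helper names
and default BDF are illustrative, not from this thread):

```python
import os

def classify_config_space(size: int) -> str:
    """Map a config-space size to the view the guest gets:
    4096 bytes means the PCIe extended space is reachable,
    256 bytes means only the conventional PCI space is visible."""
    if size >= 4096:
        return "pcie-extended"
    if size >= 256:
        return "pci-conventional"
    return "unknown"

def guest_view(bdf: str = "0000:00:04.0") -> str:
    """Classify the config space Linux exposes for a device (hypothetical BDF)."""
    return classify_config_space(
        os.path.getsize(f"/sys/bus/pci/devices/{bdf}/config"))
```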
>
> When you say you "did PCIe pass through of an VF of an SR-IOV
> device": are you actually using it as a PCIe device, or have you
> throttled it back to PCI mode without being aware of the difference?
> If you did see the PCIe device in the guest, can you share your xl.cfg
> file as well as lspci / lspci -t / lspci -xxxx output from the guest?

# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Cirrus Logic GD 5446
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
# lspci -t
-[0000:00]-+-00.0
           +-01.0
           +-01.1
           +-01.3
           +-02.0
           +-03.0
           \-04.0
# lspci -s 00:04.0 -xxxxx
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
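For what it's worth, the dump above can be decoded by walking the
standard capability list: the capabilities pointer at offset 0x34 is
0x70, where an MSI-X capability (ID 0x11) chains to 0xa0, which holds a
PCI Express capability (ID 0x10). So the VF does advertise PCIe within
the first 0x100 bytes even though the extended space beyond them is
hidden. A rough sketch of that walk (a hypothetical helper, not part of
any tool in this thread):

```python
def find_capability(cfg: bytes, cap_id: int):
    """Walk the standard PCI capability list in the first 256 bytes of
    config space and return the offset of cap_id, or None if absent."""
    ptr = cfg[0x34] & 0xFC            # capabilities pointer (dword aligned)
    seen = set()
    while ptr and ptr not in seen:    # 'seen' guards against malformed loops
        seen.add(ptr)
        if cfg[ptr] == cap_id:
            return ptr
        ptr = cfg[ptr + 1] & 0xFC     # next-capability pointer
    return None

# Reconstruct just the bytes the walk touches from the lspci -xxxxx dump.
cfg = bytearray(256)
cfg[0x34] = 0x70                      # capabilities pointer
cfg[0x70], cfg[0x71] = 0x11, 0xA0     # MSI-X capability, next = 0xa0
cfg[0xA0], cfg[0xA1] = 0x10, 0x00     # PCI Express capability, end of list

assert find_capability(bytes(cfg), 0x10) == 0xA0   # PCIe capability found
```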

-bash-4.1# more /vm-pci.cfg
builder='hvm'
disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
memory = 2048
boot="d"
maxvcpus=32
vcpus=1
serial='pty'
vnclisten="0.0.0.0"
name="latest"
pci = ["0000:02:10.0"]

>
> Also, to echo your second comment: I might still be a newbie in the
> QEMU field (I started working on this 4 months ago).  I thought the
> chipset limits what you can see/do in the VM.  I.e., if you have 440FX
> emulation then you can't have any PCIe devices (fake or passthrough)
> in the same system.  Is that not true?
>
> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Friday, February 21, 2014 5:32 PM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
>
>
> On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
> >
> > Hi Konrad,
> >
> > Thanks for your reply.
> >
> > Yes, I am aware of pciback.  Unfortunately it doesn't seem to
> > support PCIe passthrough. (I could be wrong here)
>
> I just did PCIe passthrough of a VF of an SR-IOV device. It certainly
> is PCIe.
>
> >
> > There are two reasons that I am interested in this.  For one, my
> > project calls for PCIe device passthrough, which can't be
> > accomplished with 440FX chipset emulation.  Secondly, I feel we ought
> > to move on with the technology.  440FX is ancient in computer terms.
> > QEMU is good and all that, but if it refuses to support PCIe natively
> > then it's just a matter of time before it becomes obsolete.  The
> > trend is clear that PCIe is taking over the world.
> >
>
> I am not sure what you are saying, but it does not matter whether QEMU
> emulates 440FX or Q35 for PCI passthrough.
>
> > Regards/Eniac
> >
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, February 21, 2014 2:50 PM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> >
> > On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > > Hi all,
> > >
> > > I am playing with the q35 chipset in QEMU (1.6.1).  It seems we
> > > can't enable the q35 machine type under Xen yet.  I made a few
> > > quick hacks, which all fail miserably (Linux kernel oops and
> > > Windows BSOD).  I was wondering why this hasn't been done (q35 was
> > > introduced into QEMU in 2009).
> > >
> > > Next question: VFIO works very well for me in standalone QEMU
> > > (with the Linux host handling the IOMMU), but is that supported
> > > under Xen?  I haven't tried anything there yet, because my gut
> > > feeling is that it won't work: passing a VFIO device to QEMU can
> > > only be done on the QEMU command line, and Xen is not aware of the
> > > passed-through device, so it cannot make IOMMU arrangements for
> > > it.  Am I on the right track here?
> >
> > Yes and no. VFIO won't work, but QEMU does do PCI passthrough under
> > Xen. It uses a different mechanism (and you need to bind the device
> > to pciback).
> >
> > >
> > > I am interested in implementing both of these features.  I'd like
> > > to connect with anyone who's already working on this so we don't
> > > duplicate effort.
> >
> > What do you need Q35 for?
> >
> > >
> > > Regards/Eniac
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@lists.xen.org
> > > http://lists.xen.org/xen-devel
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 19:38:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI1MQ-00023O-NL; Mon, 24 Feb 2014 19:38:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <phcoder@gmail.com>) id 1WI1MO-00023C-CH
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 19:38:40 +0000
Received: from [85.158.139.211:21520] by server-12.bemta-5.messagelabs.com id
	94/B1-15415-FBF9B035; Mon, 24 Feb 2014 19:38:39 +0000
X-Env-Sender: phcoder@gmail.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393270718!1997786!1
X-Originating-IP: [209.85.215.171]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25264 invoked from network); 24 Feb 2014 19:38:38 -0000
Received: from mail-ea0-f171.google.com (HELO mail-ea0-f171.google.com)
	(209.85.215.171)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 19:38:38 -0000
Received: by mail-ea0-f171.google.com with SMTP id f15so3337495eak.2
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 11:38:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=vQtxgaWFAoEWm4KBT7fQ8WedFAxG/ap6sT6s9WDb/tU=;
	b=DHwa89uG7YpJHSYCuv1RM1qD2dUhEqSULqQk3sKTlyW0w4HbPG0QgCPZ+E9fUaaUIs
	PbWth8vCejkvfd9JcQcX2HVm2KHVxwEkljJUqDcr/4CPjTK7zdlRhvoPQ6LM9N1jeDOL
	cIKOnMpavxc5T0kkxC9gti52ih26JK3a8QkTzUlBahrizsMS4QT5qstTu+/HxCe20d11
	Qao6Kg91SZcZ9dqcSyu9d3sj5pj8LCrZ5PwL4LfxURa5xY8pUnzflxTOdGqw++UQ+7aK
	Hi4ER07atjFeCXpV93Uu/i/1OOTxVs/qZMNnxLdiG27jYGrYDS0AvRboBzPMilGenXh8
	9oTA==
X-Received: by 10.15.21.136 with SMTP id d8mr17266603eeu.91.1393270718345;
	Mon, 24 Feb 2014 11:38:38 -0800 (PST)
Received: from [192.168.1.16] (85-188.196-178.cust.bluewin.ch.
	[178.196.188.85])
	by mx.google.com with ESMTPSA id 46sm67312763ees.4.2014.02.24.11.38.31
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Mon, 24 Feb 2014 11:38:35 -0800 (PST)
Message-ID: <530B9FB6.6000909@gmail.com>
Date: Mon, 24 Feb 2014 20:38:30 +0100
From: =?UTF-8?B?VmxhZGltaXIgJ8+GLWNvZGVyL3BoY29kZXInIFNlcmJpbmVua28=?=
	<phcoder@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Icedove/24.3.0
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad@darnok.org>, Paul Bolle <pebolle@tiscali.nl>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220>
	<20140224183939.GA26794@andromeda.dapyr.net>
In-Reply-To: <20140224183939.GA26794@andromeda.dapyr.net>
X-Enigmail-Version: 1.6
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	Richard Weinberger <richard@nod.at>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>,
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2321070298719627783=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============2321070298719627783==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="LQf9i8dCwxhQ225O9gpBO8F1we7xlHuSU"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--LQf9i8dCwxhQ225O9gpBO8F1we7xlHuSU
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 24.02.2014 19:39, Konrad Rzeszutek Wilk wrote:
> On Tue, Feb 18, 2014 at 11:14:27AM +0100, Paul Bolle wrote:
>> On Mon, 2014-02-17 at 09:43 -0500, Konrad Rzeszutek Wilk wrote:
>>> On Mon, Feb 17, 2014 at 02:03:17PM +0100, Paul Bolle wrote:
>>>> On Mon, 2014-02-17 at 07:23 -0500, Konrad Rzeszutek Wilk wrote:
>>>>> On Feb 16, 2014 3:07 PM, Paul Bolle <pebolle@tiscali.nl> wrote:
>>>>> Please look in the grub git tree. They have fixed their code to not do
>>>>> this anymore. This should be reflected in the patch description.
>>>>
>>>> Thanks, I didn't know that. That turned out to be grub commit
>>>> ec824e0f2a399ce2ab3a2e3353d372a236595059 ("Implement grub_file tool and
>>>> use it to implement generating of config"), see
>>>> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=ec824e0f2a399ce2ab3a2e3353d372a236595059
>>
>> And that commit was reverted a week later in grub commit
>> faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 ("Revert grub-file usage in
>> grub-mkconfig."), see
>> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 .
>>
>> That commit has no explanation (other than its one line summary). So
>> we're left guessing why this was done. Luckily, it doesn't matter here,
>> because the test for CONFIG_XEN_PRIVILEGED_GUEST is superfluous.
>
> How about we ask Vladimir?
>
> Vladimir - could you shed some light on it? Thanks!
>
CONFIG_XEN_PRIVILEGED_GUEST is not present on Linux even though it
should be. The test was removed to accommodate this.
The usage of grub-file was removed because it wasn't release-ready.
>>
>> Anyhow, I hope to submit a second version of this patch later this day.
>>
>>
>> Paul Bolle
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>



--LQf9i8dCwxhQ225O9gpBO8F1we7xlHuSU
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Using GnuPG with Icedove - http://www.enigmail.net/

iF4EAREKAAYFAlMLn7YACgkQmBXlbbo5nOv3vAEAhYBDk5smbGM/jU0PN1s8AeqY
7wx2lLEzfo9PQQb1un0A/RRsx0iyQr64+f7mliQolZwGN4uG8ioP/TD4lMh/LZox
=Ziei
-----END PGP SIGNATURE-----

--LQf9i8dCwxhQ225O9gpBO8F1we7xlHuSU--


--===============2321070298719627783==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2321070298719627783==--


From xen-devel-bounces@lists.xen.org Mon Feb 24 19:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI1Nd-00029x-8r; Mon, 24 Feb 2014 19:39:57 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WI1Nc-00029o-Ah
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 19:39:56 +0000
Received: from [193.109.254.147:2861] by server-16.bemta-14.messagelabs.com id
	34/D5-21945-B00AB035; Mon, 24 Feb 2014 19:39:55 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393270794!6478165!1
X-Originating-IP: [63.239.67.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30744 invoked from network); 24 Feb 2014 19:39:54 -0000
Received: from emvm-gh1-uea08.nsa.gov (HELO nsa.gov) (63.239.67.9)
	by server-16.tower-27.messagelabs.com with SMTP;
	24 Feb 2014 19:39:54 -0000
X-TM-IMSS-Message-ID: <e17ee447000dea05@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.9]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id e17ee447000dea05 ;
	Mon, 24 Feb 2014 14:38:42 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1OJdpjC001553; 
	Mon, 24 Feb 2014 14:39:51 -0500
Message-ID: <530BA000.3040302@tycho.nsa.gov>
Date: Mon, 24 Feb 2014 14:39:44 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Julien Grall <julien.grall@linaro.org>, xen-devel@lists.xenproject.org
References: <1393254590-9357-1-git-send-email-julien.grall@linaro.org>
In-Reply-To: <1393254590-9357-1-git-send-email-julien.grall@linaro.org>
Subject: Re: [Xen-devel] [PATCH] xen/xsm: Fix xsm_map_gfmn_foreign prototype
 when XSM is enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/2014 10:09 AM, Julien Grall wrote:
> Signed-off-by: Julien Grall <julien.grall@linaro.org>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>

Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

> ---
>   xen/include/xsm/xsm.h |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> index 1939453..5d35455 100644
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -621,7 +621,7 @@ static inline int xsm_ioport_mapping (xsm_default_t def, struct domain *d, uint3
>   #endif /* CONFIG_X86 */
>
>   #ifdef CONFIG_ARM
> -static inline int xsm_map_gmfn_foreign (struct domain *d, struct domain *t)
> +static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)
>   {
>       return xsm_ops->map_gmfn_foreign(d, t);
>   }
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 19:43:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 19:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI1Qo-0002Nq-1k; Mon, 24 Feb 2014 19:43:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WI1Qm-0002Nf-1F
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 19:43:12 +0000
Received: from [85.158.139.211:63102] by server-6.bemta-5.messagelabs.com id
	C7/3D-14342-FC0AB035; Mon, 24 Feb 2014 19:43:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393270989!5952911!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16605 invoked from network); 24 Feb 2014 19:43:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-4.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 19:43:10 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1OJh4BW024292
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Mon, 24 Feb 2014 19:43:04 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1OJh3wS003689
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Mon, 24 Feb 2014 19:43:04 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1OJh2JV016665; Mon, 24 Feb 2014 19:43:03 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 11:43:02 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 68FF31C02F0; Mon, 24 Feb 2014 14:43:01 -0500 (EST)
Date: Mon, 24 Feb 2014 14:43:01 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: "Zhang, Eniac" <eniac-xw.zhang@hp.com>, anthony.perard@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Message-ID: <20140224194301.GA8089@phenom.dumpdata.com>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
	<20140224164004.GO816@phenom.dumpdata.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 07:28:32PM +0000, Zhang, Eniac wrote:
> Hi Konrad,
>
> Thanks for the info.  Your guest sees the virtual function as a pci
> device, just like I had suspected.  Unfortunately that won't work for
> me.  I guess I have to take a hard look at implementing pci-e
> passthrough using pciback then.

You won't have to do it with pciback. Keep in mind that pciback just "holds"
the device so that other drivers (like ixgbevf) don't use it.

'xl' ends up doing the proper hypercall to assign the device to
the guest. And QEMU also does its own set of calls to set up the
BARs and interrupts, deal with MSI-X, etc.

What you are going to have to look at is QEMU - and how to make it
work with the newer emulated chipset.

Stefano (CC-ed) here is the maintainer of QEMU Xen pieces. Anthony
(CC-ed as well) had backported the proper pieces in QEMU to do
PCI passthrough.

Looking forward to your patches and we will be more than happy
to help you upstream them!
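[Editor's note: the flow described above — pciback holding the device, then xl doing the assignment — can be sketched as dom0 shell commands. This is an illustrative dry run, not the authoritative procedure: the `run` helper only prints each command, and the BDF and domain name are assumptions taken from the xl.cfg quoted later in this thread.]

```shell
# Dry-run sketch of handing a device to pciback and then to a guest.
# Swap the echo in run() for "$@" to actually execute (needs root in dom0).
run() { echo "+ $*"; }

BDF="0000:02:10.0"   # illustrative VF address (from the xl.cfg below)

# 1. pciback "holds" the device so dom0 drivers (e.g. ixgbevf) leave it alone.
run xl pci-assignable-add "$BDF"

# 2. xl performs the assignment hypercall; QEMU then sets up BARs/MSI-X.
run xl pci-attach latest "$BDF"
```

Afterwards, `xl pci-assignable-list` shows which devices pciback currently holds.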

>
> Regards/Eniac
>
> -----Original Message-----
> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> Sent: Monday, February 24, 2014 9:40 AM
> To: Zhang, Eniac
> Cc: xen-devel@lists.xen.org
> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>
> On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
> > Hi Konrad,
> >
> > Here's what I see when starting a VM under xen using pciback to pass
> > a pci-e device into domU.  The device can be seen by the guest, and
> > is also functioning fine.  But it's not seen as a pci-e device;
> > rather, it looks just like an ordinary pci device because only the
> > first 0x100 bytes of its configuration space is accessible.  So if a
> > driver needs to use data in the extended configuration space for
> > certain features, it will fail.
> >
> > When you say you "did PCIe pass through of a VF of an SR-IOV
> > device": are you actually using it as a pci-e device, or has it been
> > throttled back to pci mode without your being aware of the
> > difference?  If you did see the pci-e device in the guest, can you
> > share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output
> > from the guest?
>
> # lspci
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> 00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
> 00:03.0 VGA compatible controller: Cirrus Logic GD 5446
> 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
> # lspci -t
> -[0000:00]-+-00.0
>            +-01.0
>            +-01.1
>            +-01.3
>            +-02.0
>            +-03.0
>            \-04.0
> # lspci -s 00:04.0 -xxxxx
> 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
> 00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
> 10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
> 20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
> 30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
> 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
> 80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> 90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
> b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
> d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>
> -bash-4.1# more /vm-pci.cfg
> builder='hvm'
> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> memory = 2048
> boot="d"
> maxvcpus=32
> vcpus=1
> serial='pty'
> vnclisten="0.0.0.0"
> name="latest"
> pci = ["0000:02:10.0"]
>
> >
> > Also to echo your second comment: I might still be a newbie in the
> > qemu field (I started working on this 4 months ago).  I thought the
> > chipset limits what you can see/do in a vm.  I.e., if you have 440fx
> > emulation then you can't have any pci-e devices (fake or passthru)
> > in the same system.  Is that not true?
> >
> > Regards/Eniac
> >
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Friday, February 21, 2014 5:32 PM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
> >
> >
> > On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
> > >
> > > Hi Konrad,
> > >
> > > Thanks for your reply.
> > >
> > > Yes, I am aware of the pciback.  Unfortunately it doesn't seem to
> > > support pci-e passthrough. (I could be wrong here)
> >
> > I just did PCIe pass through of a VF of an SR-IOV device. It
> > certainly is PCIe.
> >
> > >
> > > There are two reasons I am interested in this.  For one, my
> > > project calls for pci-e device passthrough, which can't be
> > > accomplished with 440fx chipset emulation.  Secondly, I feel we
> > > ought to move on with the technology.  440fx is ancient in
> > > computer terms.  Qemu is good and all that, but if it refuses to
> > > support pci-e natively then it's just a matter of time before it
> > > becomes obsolete.  The trend is clear that pci-e is taking over
> > > the world.

> > >
> >
> > I am not sure what you are saying, but it does not matter whether
> > QEMU emulates 440fx or q35 for PCI pass through.
> >
> > > Regards/Eniac
> > >
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > Sent: Friday, February 21, 2014 2:50 PM
> > > To: Zhang, Eniac
> > > Cc: xen-devel@lists.xen.org
> > > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> > >
> > > On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > > > Hi all,
> > > >
> > > > I am playing with the q35 chipset in qemu (1.6.1).  It seems we
> > > > can't enable the q35 machine under xen yet.  I made a few quick
> > > > hacks which all fail miserably (linux kernel oops and windows
> > > > BSOD).  I was wondering why this hasn't been done (q35 was
> > > > introduced into qemu in 2009).

> > > >
> > > > Next question: vfio works very well for me in standalone qemu
> > > > (with the Linux host handling the iommu), but is that supported
> > > > under xen?  I haven't tried anything there yet because my
> > > > gut-feeling is that it won't work: passing a vfio device to qemu
> > > > can only be done on the qemu commandline, and xen is not aware
> > > > of the passed-through device, thus not able to make iommu
> > > > arrangements for it.  Am I on the right track here?

> > >
> > > Yes and no. VFIO won't work - but QEMU does do PCI passthrough
> > > under Xen. It uses a different mechanism (and you need to bind the
> > > device to pciback).

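[Editor's note: for contrast, the standalone-QEMU VFIO route being ruled out here is the familiar `vfio-pci` device option. The sketch below only prints the command rather than executing it, since the point of this exchange is that it won't work under Xen; the host BDF is illustrative.]

```shell
# The non-Xen invocation being described: KVM + vfio-pci, with the host
# kernel's IOMMU doing the isolation work.
QEMU_VFIO_CMD="qemu-system-x86_64 -machine q35,accel=kvm -m 2048 \
  -device vfio-pci,host=0000:02:10.0"
echo "$QEMU_VFIO_CMD"
```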
> > >
> > > >
> > > > I am interested in implementing both of these features.  I'd
> > > > like to connect with anyone who's already on this so we don't
> > > > duplicate the effort.

> > >
> > > What do you need Q35 for?
> > >
> > > >
> > > > Regards/Eniac
> > >
> > > > _______________________________________________
> > > > Xen-devel mailing list
> > > > Xen-devel@lists.xen.org
> > > > http://lists.xen.org/xen-devel
> > >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 20:34:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 20:34:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI2Dx-0002p9-7K; Mon, 24 Feb 2014 20:34:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mcgrof@gmail.com>) id 1WI2Du-0002p4-Pg
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 20:33:59 +0000
Received: from [85.158.137.68:54531] by server-11.bemta-3.messagelabs.com id
	1F/3F-04255-5BCAB035; Mon, 24 Feb 2014 20:33:57 +0000
X-Env-Sender: mcgrof@gmail.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393274036!3892708!1
X-Originating-IP: [209.85.215.49]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21807 invoked from network); 24 Feb 2014 20:33:56 -0000
Received: from mail-la0-f49.google.com (HELO mail-la0-f49.google.com)
	(209.85.215.49)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 20:33:56 -0000
Received: by mail-la0-f49.google.com with SMTP id mc6so2989811lab.36
	for <xen-devel@lists.xenproject.org>;
	Mon, 24 Feb 2014 12:33:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=kFhabBmHYWc/F7COBab2wAyvEbuE88o6RPjtfuw005M=;
	b=vAuJsnYTvH7vH5aPuqASji8Yhre5Lcss1EAbItR3pBwqCuZKM93k57dSZH+VvJxHJm
	M6qYR/Cw5/ldgirB/FI2NuF34UOBhvTIP/hQrqM6BWGR7BrXPtBzH3HfC+NBS9D9KvKJ
	tnsY1TDbPd/3boqjSZM//ig/klTB5j9BtPrnNpHw3svQEYQXLutDmhIL52WTvUrUCbRT
	c30GDh51jRE9OScG+tOvU1Or1cLaWi2iuBp8CfYZiBLFXHOx30JRL1Xg8E4aYv8S4JVh
	sP/DFFWNUgmEVxh+BMJyY9EIo+XfnS/xzZtdjIJNT3+ycCCxG/fGpWQWgecJ37XzK+BQ
	E0xA==
X-Received: by 10.153.7.137 with SMTP id dc9mr13609738lad.25.1393274035498;
	Mon, 24 Feb 2014 12:33:55 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.149.195 with HTTP; Mon, 24 Feb 2014 12:33:35 -0800 (PST)
In-Reply-To: <1393266120.8041.19.camel@dcbw.local>
References: <1392433180-16052-1-git-send-email-mcgrof@do-not-panic.com>
	<1392433180-16052-3-git-send-email-mcgrof@do-not-panic.com>
	<1392668638.21106.5.camel@dcbw.local>
	<CAB=NE6XGBp1RwO63W0ac6tTr9JcRM9TB9sap4pa+8PMWwjxu6g@mail.gmail.com>
	<1392828325.21976.6.camel@dcbw.local>
	<CAB=NE6VpUqwNNY4pgbS6du2VsZ6dH0TzQYXmiw6GGN-EiOsV6g@mail.gmail.com>
	<1392857777.22693.14.camel@dcbw.local>
	<CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
	<1393266120.8041.19.camel@dcbw.local>
From: "Luis R. Rodriguez" <mcgrof@do-not-panic.com>
Date: Mon, 24 Feb 2014 12:33:35 -0800
X-Google-Sender-Auth: TUN7TrnKUkqBii9Sr230e-SNypY
Message-ID: <CAB=NE6Uf63XAAi_Sy5iSy2sjfNdWd9QsPECMBVBrUX9z4_L5-g@mail.gmail.com>
To: Dan Williams <dcbw@redhat.com>
Cc: kvm@vger.kernel.org, Patrick McHardy <kaber@trash.net>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	James Morris <jmorris@namei.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>, xen-devel@lists.xenproject.org,
	"David S. Miller" <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 10:22 AM, Dan Williams <dcbw@redhat.com> wrote:
> My use-case would simply be to have an analogue for the disable_ipv6
> case.  In the future I expect more people will want to disable IPv4 as
> they move to IPv6.  If you don't have something like disable_ipv4, then
> there's no way to ensure that some random program or something doesn't
> set up IPv4 stuff that you don't want.
>
> Same thing for IPv6; some people really don't want IPv6 enabled on an
> interface no matter what; they don't want an IPv6LL address assigned,
> they don't want kernel SLAAC, they want to ensure that *nothing*
> IPv6-related gets done for that interface.  The same can be true for
> IPv4, but we don't have a way of doing that right now.

I'll add this to my queue.

  Luis
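[Editor's note: for reference, the per-interface IPv6 knob being described already exists as a sysctl; the RFC series under discussion proposes an IPv4 mirror of it. A sketch of /etc/sysctl.conf entries — the `disable_ipv4` key is the *proposed* knob from this series, not something present in mainline at the time of this thread, and the interface name is illustrative.]

```
# Existing: keep IPv6 entirely off eth0 (no IPv6LL address, no SLAAC).
net.ipv6.conf.eth0.disable_ipv6 = 1

# Proposed analogue (this RFC series): the same guarantee for IPv4.
net.ipv4.conf.eth0.disable_ipv4 = 1
```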

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 20:50:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 20:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI2TO-00030d-Po; Mon, 24 Feb 2014 20:49:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WI2TO-00030Y-6V
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 20:49:58 +0000
Received: from [85.158.143.35:6163] by server-1.bemta-4.messagelabs.com id
	7B/E5-31661-570BB035; Mon, 24 Feb 2014 20:49:57 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393274995!7977545!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14154 invoked from network); 24 Feb 2014 20:49:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 20:49:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,536,1389744000"; d="scan'208";a="103675990"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 20:49:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Mon, 24 Feb 2014 15:49:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WI2TI-0003HV-QY;
	Mon, 24 Feb 2014 20:49:52 +0000
Date: Mon, 24 Feb 2014 20:49:49 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "gregkh@linuxfoundation.org" <gregkh@linuxfoundation.org>
In-Reply-To: <20140224151636.GA13489@kroah.com>
Message-ID: <alpine.DEB.2.02.1402242036490.31489@kaball.uk.xensource.com>
References: <1392913301-25524-1-git-send-email-julien.grall@linaro.org>
	<1392914159.32657.18.camel@kazak.uk.xensource.com>
	<alpine.DEB.2.02.1402241214570.4471@kaball.uk.xensource.com>
	<20140224151636.GA13489@kroah.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Mark Rutland <mark.rutland@arm.com>, devicetree@vger.kernel.org,
	Ian Campbell <ijc+devicetree@hellion.org.uk>,
	Russell King <linux@arm.linux.org.uk>,
	Ian Campbell <Ian.Campbell@citrix.com>, Pawel Moll <pawel.moll@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Julien Grall <julien.grall@linaro.org>,
	linux-kernel@vger.kernel.org, Rob Herring <robh+dt@kernel.org>,
	Rob Landley <rob@landley.net>, Kumar Gala <galak@codeaurora.org>,
	xen-devel@lists.xenproject.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [PATCH 2/2] arm/xen: Don't use xen DMA ops when the
 device is protected by an IOMMU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 24 Feb 2014, gregkh@linuxfoundation.org wrote:
> On Mon, Feb 24, 2014 at 12:19:11PM +0000, Stefano Stabellini wrote:
> > CC'ing Greg.
> > 
> > On Thu, 20 Feb 2014, Ian Campbell wrote:
> > > On Thu, 2014-02-20 at 16:21 +0000, Julien Grall wrote:
> > > > Only Xen knows whether a device can safely avoid using xen-swiotlb.
> > > > This patch introduces a new property "protected-devices" for the hypervisor
> > > > node, which lists the devices whose IOMMU has been correctly programmed by Xen.
> > > > 
> > > > During Linux boot, Xen-specific code will create a hash table which
> > > > contains all these devices. The hash table will be used in need_xen_dma_ops
> > > > to check whether the Xen DMA ops need to be used for the current device.
> > > 
> > > Is it out of the question to find a field within struct device itself to
> > > store this, e.g. in struct device_dma_parameters, and avoid the
> > > need for a hash table lookup?
> > > 
> > > device->iommu_group might be another option, if we can create our own
> > > group?
> > 
> > I agree that a field in struct device would be ideal.
> > Greg, get_maintainer.pl points at you as main maintainer of device.h, do
> > you have an opinion on this?
> 
> I need a whole lot more context here please.  With a patch would be even
> better so that I know exactly what you are referring to...

The Xen hypervisor tells Linux which devices are protected by an SMMU,
preprogrammed by Xen, so that Linux can avoid the swiotlb and bounce
buffers for DMA requests involving them.
The information is present in the device tree and parsed at boot time by
Linux.

Julien is proposing to store the list of "safe" devices in a hash table
in the Xen-specific code (in arch/arm/xen/enlighten.c, see
http://marc.info/?l=linux-kernel&m=139291370526082&w=2).
Whenever Linux is about to do DMA, we would check the hash table to
figure out whether we need to go through the swiotlb or can simply
use the native dma_ops.

Ian and I were thinking that it would be much easier and faster to have
a "xen_safe_device" parameter in struct device and just check for that.
It doesn't actually need to be in struct device; it could simply be a
flag in struct device_dma_parameters, as Ian was suggesting.

Julien, could you please come up with a simple patch to demonstrate the
concept?
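
To make the comparison concrete, here is a minimal user-space sketch of the flag-based approach being discussed. All names, including the xen_safe_device field and need_xen_dma_ops(), mirror the discussion above but are illustrative, not the actual kernel definitions:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model: a "xen_safe_device" bit in struct
 * device_dma_parameters, set at boot for devices that Xen reports as
 * protected by a preprogrammed SMMU. */
struct device_dma_parameters {
	unsigned int max_segment_size;
	bool xen_safe_device;	/* hypothetical new flag */
};

struct device {
	struct device_dma_parameters *dma_parms;
};

/* Use the Xen swiotlb DMA ops unless Xen marked the device as safe. */
static bool need_xen_dma_ops(const struct device *dev)
{
	if (dev->dma_parms && dev->dma_parms->xen_safe_device)
		return false;	/* SMMU programmed by Xen: native dma_ops */
	return true;		/* default: bounce through xen-swiotlb */
}
```

Compared with the hash table, the check is a couple of pointer dereferences with no lookup cost, at the price of one bit in a core structure.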

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 21:12:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 21:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI2pB-0003k0-LJ; Mon, 24 Feb 2014 21:12:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1WI2p9-0003jr-QK
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 21:12:28 +0000
Received: from [85.158.137.68:35309] by server-9.bemta-3.messagelabs.com id
	8C/64-10184-AB5BB035; Mon, 24 Feb 2014 21:12:26 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393276346!3897323!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16546 invoked from network); 24 Feb 2014 21:12:26 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 24 Feb 2014 21:12:26 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:256) (Exim 4.80)
	(envelope-from <tglx@linutronix.de>)
	id 1WI2p3-0006Ld-0g; Mon, 24 Feb 2014 22:12:21 +0100
Date: Mon, 24 Feb 2014 22:12:30 +0100 (CET)
From: Thomas Gleixner <tglx@linutronix.de>
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <530B5856.7080900@citrix.com>
Message-ID: <alpine.DEB.2.02.1402242210470.21251@ionos.tec.linutronix.de>
References: <20140223212703.511977310@linutronix.de>
	<20140223212738.579581220@linutronix.de>
	<530B5856.7080900@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [patch 21/26] xen: Get rid of the last irq_desc
 abuse
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 24 Feb 2014, David Vrabel wrote:
> On 23/02/14 21:40, Thomas Gleixner wrote:
> > I'd prefer to drop that completely but there seems to be some mystic
> > value to the error printout and the allocation check.
> 
>     Warn if any PIRQ cannot be bound to an event channel.  Remove an
>     unnecessary test for !desc in xen_destroy_irq() since the only caller
>     will only do so if the irq was previously allocated.
> 
> > --- tip.orig/drivers/xen/events/events_base.c
> > +++ tip/drivers/xen/events/events_base.c
> [...]
> > @@ -535,7 +528,7 @@ static unsigned int __startup_pirq(unsig
> >  					BIND_PIRQ__WILL_SHARE : 0;
> >  	rc = HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, &bind_pirq);
> >  	if (rc != 0) {
> > -		if (!probing_irq(irq))
> > +		if (!data || irqd_irq_has_action(data))
> >  			pr_info("Failed to obtain physical IRQ %d\n", irq);
> 
> Remove this if and change the pr_info() to a pr_warn().
> 
> This hypercall never fails in practice, but it's still useful to have the
> message in case it does on some systems.

Sure, I understood the value of the printk, but I failed to see the
reason for probing_irq(). Will change it.
 
> >  		return 0;
> >  	}
> > @@ -769,15 +762,13 @@ error_irq:
> >  
> >  int xen_destroy_irq(int irq)
> >  {
> > -	struct irq_desc *desc;
> >  	struct physdev_unmap_pirq unmap_irq;
> >  	struct irq_info *info = info_for_irq(irq);
> >  	int rc = -ENOENT;
> >  
> >  	mutex_lock(&irq_mapping_update_lock);
> >  
> > -	desc = irq_to_desc(irq);
> > -	if (!desc)
> > +	if (!irq_is_allocated(irq))
> >  		goto out;
> 
> Remove this test.  The only caller of xen_destroy_irq() will only do
> so if the irq was previously fully set up.

I was not sure about that, but thanks for confirming.
 
> I think this means you don't need to introduce the irqd_irq_has_action()
> and irq_is_allocated() helpers.

I just invented them in case xen really needs those tests.

Thanks,

	tglx
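
The simplification agreed on above can be modeled in a small user-space sketch: the bind failure is always reported, with no probing_irq()/has-action guard around the message. The stub names (bind_pirq, stub_bind_pirq_rc, startup_pirq) are hypothetical stand-ins for the real hypercall and __startup_pirq():

```c
#include <stdio.h>

/* Injected result for the stub hypercall, so both paths can be exercised. */
static int stub_bind_pirq_rc;

/* Stands in for HYPERVISOR_event_channel_op(EVTCHNOP_bind_pirq, ...). */
static int bind_pirq(int pirq)
{
	(void)pirq;
	return stub_bind_pirq_rc;
}

/* Returns nonzero on success, 0 on failure, as __startup_pirq() does. */
static unsigned int startup_pirq(int irq, int pirq)
{
	int rc = bind_pirq(pirq);

	if (rc != 0) {
		/* Unconditional warning: the old guard is gone. */
		fprintf(stderr, "Failed to obtain physical IRQ %d\n", irq);
		return 0;
	}
	return 1;	/* remainder of the success path elided */
}
```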

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 21:13:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 21:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI2qA-0003nu-5x; Mon, 24 Feb 2014 21:13:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tglx@linutronix.de>) id 1WI2q9-0003nk-9A
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 21:13:29 +0000
Received: from [193.109.254.147:24647] by server-10.bemta-14.messagelabs.com
	id EE/25-10711-8F5BB035; Mon, 24 Feb 2014 21:13:28 +0000
X-Env-Sender: tglx@linutronix.de
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393276407!6490305!1
X-Originating-IP: [62.245.132.108]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28513 invoked from network); 24 Feb 2014 21:13:28 -0000
Received: from www.linutronix.de (HELO Galois.linutronix.de) (62.245.132.108)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 24 Feb 2014 21:13:28 -0000
Received: from localhost ([127.0.0.1]) by Galois.linutronix.de with esmtps
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:256) (Exim 4.80)
	(envelope-from <tglx@linutronix.de>)
	id 1WI2q5-0006Mv-Mm; Mon, 24 Feb 2014 22:13:25 +0100
Date: Mon, 24 Feb 2014 22:13:35 +0100 (CET)
From: Thomas Gleixner <tglx@linutronix.de>
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <530B5624.8080309@citrix.com>
Message-ID: <alpine.DEB.2.02.1402242213030.21251@ionos.tec.linutronix.de>
References: <20140223212703.511977310@linutronix.de>
	<20140223212738.222412125@linutronix.de>
	<530B5624.8080309@citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-Linutronix-Spam-Score: -1.0
X-Linutronix-Spam-Level: -
X-Linutronix-Spam-Status: No , -1.0 points, 5.0 required, ALL_TRUSTED=-1,
	SHORTCIRCUIT=-0.0001
Cc: Peter Zijlstra <peterz@infradead.org>, Xen <xen-devel@lists.xenproject.org>,
	Ingo Molnar <mingo@elte.hu>, LKML <linux-kernel@vger.kernel.org>
Subject: Re: [Xen-devel] [patch 18/26] xen: Use the proper irq functions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 24 Feb 2014, David Vrabel wrote:

> On 23/02/14 21:40, Thomas Gleixner wrote:
> > I really can't understand why people keep adding irq_desc abuse. We
> > have enough proper interfaces. Delete another 14 lines of hackery.
> 
>     generic_handle_irq() already tests for !desc, so use this instead
>     of generic_handle_irq_desc().  Use irq_get_irq_data() instead of
>     desc->irq_data.

Fair enough. I was not really in the mood to come up with proper
changelogs :)

> Otherwise,
> 
> Reviewed-by: David Vrabel <david.vrabel@citrix.com>
> 
> David
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 22:09:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 22:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI3hb-00048w-Bv; Mon, 24 Feb 2014 22:08:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WI3hZ-00048r-Fq
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 22:08:41 +0000
Received: from [85.158.137.68:24327] by server-1.bemta-3.messagelabs.com id
	4B/33-17293-8E2CB035; Mon, 24 Feb 2014 22:08:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393279717!2615145!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12046 invoked from network); 24 Feb 2014 22:08:39 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 22:08:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,537,1389744000"; d="scan'208";a="103701080"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 22:08:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 17:08:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WI3hU-0000K3-8o;
	Mon, 24 Feb 2014 22:08:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WI3hU-0003fu-7d;
	Mon, 24 Feb 2014 22:08:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25286-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 22:08:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25286: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25286 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25286/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install    fail REGR. vs. 25281

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf     15 guest-localmigrate/x10    fail REGR. vs. 25281

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  9b51591c330672765d866a5ed4ed8e40c75bb1cf
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9b51591c330672765d866a5ed4ed8e40c75bb1cf
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:33:00 2014 +0100

    iommu: don't need to map dom0 page when the PT is shared
    
    Currently iommu_init_dom0 walks the page list and calls the map_page
    callback on each page.
    
    In both the AMD and VT-d drivers, the function returns immediately if the
    page table is shared with the processor, so Xen can safely skip walking
    the page list.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 2608662379d50e69b3bba4e6827fc910db9f64f8
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:32:00 2014 +0100

    vtd: don't export iommu_set_pgd
    
    iommu_set_pgd is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit b0566814a400f436056faac286d2804edb5791c0
Merge: 24e0a36... 21eaf5b...
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:31:28 2014 +0100

    Merge branch 'staging' of xenbits.xen.org:/home/xen/git/xen into staging

commit 24e0a36d63a6bac1e2777b7265c8add3f7895e58
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:21:54 2014 +0100

    vtd: don't export iommu_domain_teardown
    
    iommu_domain_teardown is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 21eaf5b06a41c787e1441523f7b22e57565bb292
Author: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date:   Sat Feb 22 11:35:54 2014 +0100

    libxl: comments cleanup on libxl_dm.c
    
    Removed some useless comment lines.
    
    Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 9625f4ef05031773a52ebe3ca84db64af68956d6
Author: Xudong Hao <xudong.hao@intel.com>
Date:   Mon Feb 24 12:11:53 2014 +0100

    x86: expose RDSEED, ADX, and PREFETCHW to dom0
    
    This patch explicitly exposes new Intel features to dom0, including
    RDSEED and ADX. PREFETCHW does not need explicit exposing.
    
    Signed-off-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 5d160d913e03b581bdddde73535c18ac670cf0a9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:11:01 2014 +0100

    x86/MSI: don't risk division by zero
    
    The check in question is redundant with the one in the immediately
    following if(), where dividing by zero gets carefully avoided.
    
    Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit fd1864f48d8914fb8eeb6841cd08c2c09b368909
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Mon Feb 24 12:09:52 2014 +0100

    Nested VMX: update nested paging mode on vmexit
    
    Since SVM and VMX use different mechanisms to emulate virtual-vmentry
    and virtual-vmexit, it is hard to update the nested paging mode correctly
    in common code, so it must be updated in their respective code paths.
    SVM already updates the nested paging mode on vmexit; this patch adds the
    same logic on the VMX side.
    
    Previous discussion is here:
    http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Christoph Egger <chegger@amazon.de>

commit 199a0878195f3a271394d126c66e8030c461ebd3
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Date:   Mon Feb 24 12:09:14 2014 +0100

    vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
    
    The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
    thresholding registers. But due to this statement:
    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
    we were wrongly masking off the top two bits, which meant the register
    accesses never made it to the vmce_amd_* functions.
    
    Corrected this by modifying the mask so that the AMD thresholding
    registers fall through to the 'default' case, which in turn lets the
    vmce_amd_* functions handle accesses to those registers.
    
    While at it, remove some clutter in the vmce_amd_* functions, retaining
    the current policy of returning zero for reads and ignoring writes.
    
    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

commit 60ea3a3ac3d2bcd8e85b250fdbfc46b3b9dc7085
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Mon Feb 24 12:07:41 2014 +0100

    x86/MCE: Fix race condition in mctelem_reserve
    
    These lines (in mctelem_reserve)
    
            newhead = oldhead->mcte_next;
            if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
    
    are racy. After you read the newhead pointer, another flow (a thread or
    a recursive invocation) can change the entire list yet leave the head
    with the same value (the classic ABA problem). So oldhead still equals
    *freelp, but you are setting a new head that could point to any element
    (even one already in use).
    
    This patch instead uses a bit array and atomic bit operations.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 23:04:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 23:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI4Zh-0004O9-QG; Mon, 24 Feb 2014 23:04:37 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1WI4Zf-0004O4-UI
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 23:04:36 +0000
Received: from [85.158.143.35:59565] by server-2.bemta-4.messagelabs.com id
	17/9D-10891-FFFCB035; Mon, 24 Feb 2014 23:04:31 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393283070!8002486!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15177 invoked from network); 24 Feb 2014 23:04:30 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-3.tower-21.messagelabs.com with SMTP;
	24 Feb 2014 23:04:30 -0000
Received: from localhost (cpe-66-65-72-221.nyc.res.rr.com [66.65.72.221])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id E2438586552;
	Mon, 24 Feb 2014 15:04:28 -0800 (PST)
Date: Mon, 24 Feb 2014 18:04:26 -0500 (EST)
Message-Id: <20140224.180426.411052665068255886.davem@davemloft.net>
To: dcbw@redhat.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1393266120.8041.19.camel@dcbw.local>
References: <1392857777.22693.14.camel@dcbw.local>
	<CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
	<1393266120.8041.19.camel@dcbw.local>
X-Mailer: Mew version 6.5 on Emacs 24.3 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.7
	(shards.monkeyblade.net [149.20.54.216]);
	Mon, 24 Feb 2014 15:04:29 -0800 (PST)
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, yoshfuji@linux-ipv6.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Dan Williams <dcbw@redhat.com>
Date: Mon, 24 Feb 2014 12:22:00 -0600

> In the future I expect more people will want to disable IPv4 as
> they move to IPv6.

I definitely don't.

I've been lightly following this conversation and I have to say
a few things.

disable_ipv6 was added because people wanted to make sure their
machines didn't generate any ipv6 traffic because "ipv6 is not
mature", "we don't have our firewalls configured to handle that
kind of traffic" etc.

None of these things apply to ipv4.

And if you think people will go to ipv6 only, you are dreaming.

Name a provider of a major web site who will switch to providing
only an ipv6 facing site?

Only an idiot who wanted to lose significant numbers of page views
and traffic would do that, so ipv4 based connectivity will be
universally necessary forever.

I think disable_ipv4 is absolutely a non-starter.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

a few things.

disable_ipv6 was added because people wanted to make sure their
machines didn't generate any ipv6 traffic because "ipv6 is not
mature", "we don't have our firewalls configured to handle that
kind of traffic" etc.

None of these things apply to ipv4.

And if you think people will go to ipv6 only, you are dreaming.

Name a provider of a major web site who will go to strictly only
providing an ipv6-facing site?

Only an idiot who wanted to lose significant numbers of page views
and traffic would do that, so ipv4 based connectivity will be
universally necessary forever.

I think disable_ipv4 is absolutely a non-starter.
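For reference, the existing disable_ipv6 knob discussed in this thread is a per-interface Linux sysctl. A minimal illustration of how administrators typically set it persistently (the file name under /etc/sysctl.d/ is arbitrary; the knob names are the standard ones documented in the kernel's ip-sysctl documentation):

```
# /etc/sysctl.d/99-no-ipv6.conf -- illustrative example only
# Disable IPv6 on all current interfaces and on interfaces created later
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```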


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Mon Feb 24 23:11:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 23:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI4fx-0004YA-Up; Mon, 24 Feb 2014 23:11:05 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI4fw-0004Y3-KS
	for xen-devel@lists.xenproject.org; Mon, 24 Feb 2014 23:11:04 +0000
Received: from [193.109.254.147:55786] by server-13.bemta-14.messagelabs.com
	id 7D/8A-01226-881DB035; Mon, 24 Feb 2014 23:11:04 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393283459!6502404!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5323 invoked from network); 24 Feb 2014 23:11:00 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 23:11:00 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1ONAvNd032681
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Mon, 24 Feb 2014 23:10:58 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1ONAuWG000764
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL)
	for <xen-devel@lists.xenproject.org>; Mon, 24 Feb 2014 23:10:57 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1ONAuWT023452
	for <xen-devel@lists.xenproject.org>; Mon, 24 Feb 2014 23:10:56 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 15:10:56 -0800
Date: Mon, 24 Feb 2014 15:10:55 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140224151055.6b929170@mantra.us.oracle.com>
In-Reply-To: <1393035803-9483-1-git-send-email-mukesh.rathor@oracle.com>
References: <1393035803-9483-1-git-send-email-mukesh.rathor@oracle.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH] pvh bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 21 Feb 2014 18:23:22 -0800
Mukesh Rathor <mukesh.rathor@oracle.com> wrote:

> Some rearrangement in the linux code causes it to go through certain
> hypercalls that will cause corruption in xen and a crash. I like having
> a whitelist for a big feature while it goes through its adolescence, to
> catch such bugs, but whatever. Since it affects pvh dom0 paths only,
> I didn't think it was necessary for 4.4.
> 
> The attached patch adds a check for it; also, certain paths (iirc it was
> from xentrace) cause a panic in hvm_hap_nested_page_fault, so a check is
> added in there too.
> 
> thanks
> Mukesh
> 

Please ignore this. Resending with separate patches instead of one.



From xen-devel-bounces@lists.xen.org Mon Feb 24 23:13:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 23:13:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI4iM-0004eI-IM; Mon, 24 Feb 2014 23:13:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WI4iL-0004eA-9J
	for xen-devel@lists.xensource.com; Mon, 24 Feb 2014 23:13:33 +0000
Received: from [85.158.139.211:19301] by server-5.bemta-5.messagelabs.com id
	A7/4B-32749-C12DB035; Mon, 24 Feb 2014 23:13:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393283610!5982808!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15189 invoked from network); 24 Feb 2014 23:13:31 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	24 Feb 2014 23:13:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,537,1389744000"; d="scan'208";a="103719226"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 24 Feb 2014 23:13:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Mon, 24 Feb 2014 18:13:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WI4iG-0000dT-83;
	Mon, 24 Feb 2014 23:13:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WI4iG-0002we-1a;
	Mon, 24 Feb 2014 23:13:28 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25287-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Mon, 24 Feb 2014 23:13:28 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25287: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25287 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25287/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25266
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xen.org Mon Feb 24 23:53:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Feb 2014 23:53:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI5LB-0005A8-Im; Mon, 24 Feb 2014 23:53:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WI5LA-0005A1-De
	for xen-devel@lists.xen.org; Mon, 24 Feb 2014 23:53:40 +0000
Received: from [85.158.143.35:48643] by server-3.bemta-4.messagelabs.com id
	0B/49-11539-38BDB035; Mon, 24 Feb 2014 23:53:39 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393286018!7969408!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4022 invoked from network); 24 Feb 2014 23:53:39 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 24 Feb 2014 23:53:39 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by fldsmtpe04.verizon.com with ESMTP; 24 Feb 2014 23:53:37 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,537,1389744000"; d="scan'208";a="679615002"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.3.143])
	by fldsmtpi01.verizon.com with ESMTP; 24 Feb 2014 23:53:37 +0000
Message-ID: <530BDB81.9070209@terremark.com>
Date: Mon, 24 Feb 2014 18:53:37 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>, 
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <CAFLBxZaS8R+Q_1KmZak-hbrKe523iF4At+_+mfaa5Aere3Gmhg@mail.gmail.com>
In-Reply-To: <CAFLBxZaS8R+Q_1KmZak-hbrKe523iF4At+_+mfaa5Aere3Gmhg@mail.gmail.com>
Subject: Re: [Xen-devel] Xen development update: RC6 tagged
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/24/14 13:19, George Dunlap wrote:
> In response to a last-minute bug found in libxl that caused a hang
> when running "xl create -c" when using pygrub, we've made a
> last-minute patch and tagged another RC.

Same quick tests on CentOS 5.10 and Fedora 17.  No change, RC6 is looking good.

    -Don Slutz

> Looking at the code, we're reasonably confident that only the broken
> path can be affected by the change, so we will still plan on moving
> forward with the release.  However, if you have the bandwidth to do
> one more test, we'd appreciate it.
>
> Tarballs below.
>
> Thanks!
>   -George
>
> http://bits.xensource.com/oss-xen/release/4.4.0-rc6/xen-4.4.0-rc6.tar.gz
> http://bits.xensource.com/oss-xen/release/4.4.0-rc6/xen-4.4.0-rc6.tar.gz.sig
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 00:02:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 00:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI5Tl-0005zk-PB; Tue, 25 Feb 2014 00:02:33 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben@decadent.org.uk>) id 1WI5Tk-0005zd-CD
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 00:02:32 +0000
Received: from [85.158.139.211:56646] by server-2.bemta-5.messagelabs.com id
	3A/54-23037-79DDB035; Tue, 25 Feb 2014 00:02:31 +0000
X-Env-Sender: ben@decadent.org.uk
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393286549!5935152!1
X-Originating-IP: [88.96.1.126]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16106 invoked from network); 25 Feb 2014 00:02:30 -0000
Received: from shadbolt.e.decadent.org.uk (HELO shadbolt.e.decadent.org.uk)
	(88.96.1.126)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 25 Feb 2014 00:02:30 -0000
Received: from [192.168.4.242] (helo=deadeye.wl.decadent.org.uk)
	by shadbolt.decadent.org.uk with esmtps
	(TLS1.2:DHE_RSA_AES_128_CBC_SHA1:128) (Exim 4.80)
	(envelope-from <ben@decadent.org.uk>)
	id 1WI5TT-0004Kv-Ri; Tue, 25 Feb 2014 00:02:15 +0000
Received: from ben by deadeye.wl.decadent.org.uk with local (Exim 4.82)
	(envelope-from <ben@decadent.org.uk>)
	id 1WI5TT-00051x-HN; Tue, 25 Feb 2014 00:02:15 +0000
Message-ID: <1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
From: Ben Hutchings <ben@decadent.org.uk>
To: David Miller <davem@davemloft.net>
Date: Tue, 25 Feb 2014 00:02:00 +0000
In-Reply-To: <20140224.180426.411052665068255886.davem@davemloft.net>
References: <1392857777.22693.14.camel@dcbw.local>
	<CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
	<1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
X-Mailer: Evolution 3.8.5-2+b2 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 192.168.4.242
X-SA-Exim-Mail-From: ben@decadent.org.uk
X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk);
	SAEximRunCond expanded to false
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6068053860945399328=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6068053860945399328==
Content-Type: multipart/signed; micalg="pgp-sha512";
	protocol="application/pgp-signature"; boundary="=-q/kcLpXQr5co7KvsyRKe"


--=-q/kcLpXQr5co7KvsyRKe
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-02-24 at 18:04 -0500, David Miller wrote:
> From: Dan Williams <dcbw@redhat.com>
> Date: Mon, 24 Feb 2014 12:22:00 -0600
>=20
> > In the future I expect more people will want to disable IPv4 as
> > they move to IPv6.
>=20
> I definitely don't.
>=20
> I've been lightly following this conversation and I have to say
> a few things.
>=20
> disable_ipv6 was added because people wanted to make sure their
> machines didn't generate any ipv6 traffic because "ipv6 is not
> mature", "we don't have our firewalls configured to handle that
> kind of traffic" etc.
>=20
> None of these things apply to ipv4.
>=20
> And if you think people will go to ipv6 only, you are dreaming.
>
> Name a provider of a major web site who will go to strictly only
> providing an ipv6 facing site?
>
> Only an idiot who wanted to lose significant numbers of page views
> and traffic would do that,

That's obviously true for public-facing servers, but that doesn't mean
it's not useful to anyone.

> so ipv4 based connectivity will be universally necessary forever.

You can run an internal network, or access network, as v6-only with
NAT64 and DNS64 at the border.  I believe some mobile networks are doing
this; it was also done on the main FOSDEM wireless network this year.

Ben.

> I think disable_ipv4 is absolutely a non-starter.


--=20
Ben Hutchings
Beware of bugs in the above code;
I have only proved it correct, not tried it. - Donald Knuth

--=-q/kcLpXQr5co7KvsyRKe
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUAUwvdeOe/yOyVhhEJAQplZA//QAOw2FY8BWwaK/c447vv7/VYuzUNjLOL
Ea5KRTemhsIXOddkKnNggp/S3gyTEN4xMGGf4+/DvdhRnRpWT/4os65OU/yR2lD6
sUUhvP2ALHFNWxOu/k8eAB+P5nc92ulBXy1kGHnIoPZxC7PEj8O5zNHPQvyhTPBM
rrUGp3XkZ704uf3EA8pA2NE/o4e8f8uQC0uq97J0vfIZYSZ4opFT9wdsU92Q0UR5
1Z3E2n1f3ZRD0n5+CKV7pNPvuD52HOhknai6Rcg86b4/YDd+4mgSYjA68CJTAyeQ
EzlPnT6LyP119eylvkfpwvkO0YBfJJ1UBlBsNPPn+ki1CcZ+X4aomcvo9UxTVYye
X8N1SJzTm4h53fFRDoLR5WRBL67rGezSWz/ZUfxy92O8PQUTAjPNUHGnuwSdh2A8
p20dt8NlQAlbYMwwklbNQtgEuH+mod1ntljx6MJOA4RELN3mQxgBTAfHBHWiNfS6
BJYUUcDWjygc9vLIhYOx1ioJArNSnMw/hjP0uD/jvigrtfdD6DYjQf8XBGwvQqyr
ziKk7pj8CfhLdo1YwS8EFmX6Mh3Dhf8xE+2+O7L+dyXLCItjOeg/xE3ZrXudb0qB
9fMGr5W3o2Ot6ICvKxAY2r3N87N9s1gqgoK2kKx/tB6RA0DkN1YdnLvsCEfnhnuv
KLuqk8+mKes=
=iTMC
-----END PGP SIGNATURE-----

--=-q/kcLpXQr5co7KvsyRKe--


--===============6068053860945399328==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6068053860945399328==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 00:13:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 00:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI5de-0006BV-68; Tue, 25 Feb 2014 00:12:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1WI5dd-0006BQ-Dg
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 00:12:45 +0000
Received: from [85.158.139.211:49306] by server-17.bemta-5.messagelabs.com id
	90/D6-31975-CFFDB035; Tue, 25 Feb 2014 00:12:44 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393287161!2025562!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11682 invoked from network); 25 Feb 2014 00:12:42 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-10.tower-206.messagelabs.com with SMTP;
	25 Feb 2014 00:12:42 -0000
Received: from localhost (cpe-66-65-72-221.nyc.res.rr.com [66.65.72.221])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id E5F2B58656D;
	Mon, 24 Feb 2014 16:12:39 -0800 (PST)
Date: Mon, 24 Feb 2014 19:12:38 -0500 (EST)
Message-Id: <20140224.191238.921310808350170272.davem@davemloft.net>
To: ben@decadent.org.uk
From: David Miller <davem@davemloft.net>
In-Reply-To: <1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
X-Mailer: Mew version 6.5 on Emacs 24.3 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.7
	(shards.monkeyblade.net [149.20.54.216]);
	Mon, 24 Feb 2014 16:12:41 -0800 (PST)
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Ben Hutchings <ben@decadent.org.uk>
Date: Tue, 25 Feb 2014 00:02:00 +0000

> You can run an internal network, or access network, as v6-only with
> NAT64 and DNS64 at the border.  I believe some mobile networks are doing
> this; it was also done on the main FOSDEM wireless network this year.

This seems to be bloating up the networking headers of the internal
network, for what purpose?

For mobile that's doubly inadvisable.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TR-0001sI-KS; Tue, 25 Feb 2014 01:06:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TQ-0001rs-9R
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:16 +0000
Received: from [193.109.254.147:45176] by server-6.bemta-14.messagelabs.com id
	F2/80-03396-78CEB035; Tue, 25 Feb 2014 01:06:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393290373!6563550!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20337 invoked from network); 25 Feb 2014 01:06:14 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:06:14 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16B5T015791
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16AAB019979
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1P16Add009604
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:10 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:54 -0800
Message-Id: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [V1 PATCH 0/3] pvh: misc bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Resend of the previous patch, broken into multiple patches as
suggested. Please note, this is for 4.5, as these fixes seem to affect
pvh dom0 paths only.

Based on c/s: 5224dcca4ac222477d751120fe63c1ed251ad4b1

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TR-0001sP-Vi; Tue, 25 Feb 2014 01:06:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TQ-0001rx-PR
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:17 +0000
Received: from [85.158.137.68:19895] by server-10.bemta-3.messagelabs.com id
	8C/B7-07302-78CEB035; Tue, 25 Feb 2014 01:06:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393290373!2412586!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8692 invoked from network); 25 Feb 2014 01:06:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:06:15 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16CkG008812
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:13 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BeG008270
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BQc017454
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:56 -0800
Message-Id: <1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [V1 PATCH 2/3] pvh: fix pirq path for pvh
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Just like hvm, the pirq eoi shared page does not exist for pvh. pvh
should not touch any pv_domain fields.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/irq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index db70077..88444be 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1068,13 +1068,13 @@ bool_t cpu_has_pending_apic_eoi(void)
 
 static inline void set_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         set_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
 static inline void clear_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         clear_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TS-0001sf-HO; Tue, 25 Feb 2014 01:06:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TR-0001s2-5g
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:17 +0000
Received: from [85.158.139.211:23954] by server-2.bemta-5.messagelabs.com id
	E5/E7-23037-88CEB035; Tue, 25 Feb 2014 01:06:16 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393290373!5972080!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27988 invoked from network); 25 Feb 2014 01:06:15 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 01:06:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16C8x015795
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:13 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BM7020009
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BGG008272
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:11 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:57 -0800
Message-Id: <1393290237-28427-4-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [V1 PATCH 3/3] pvh: disallow
	PHYSDEVOP_pirq_eoi_gmfn_v2/v1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A call to do_physdev_op with PHYSDEVOP_pirq_eoi_gmfn_v2/v1 will crash
xen later. Disallow it. Currently, such a path exists for linux
dom0 pvh.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/physdev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index bc0634c..9f85857 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -339,6 +339,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         unsigned long mfn;
         struct page_info *page;
 
+        ret = -ENOSYS;
+        if ( is_pvh_vcpu(current) )
+            break;
+
         ret = -EFAULT;
         if ( copy_from_guest(&info, arg, 1) != 0 )
             break;
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TR-0001s7-8z; Tue, 25 Feb 2014 01:06:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TP-0001rr-KM
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:15 +0000
Received: from [85.158.143.35:32706] by server-1.bemta-4.messagelabs.com id
	51/11-31661-78CEB035; Tue, 25 Feb 2014 01:06:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393290372!8002629!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3169 invoked from network); 25 Feb 2014 01:06:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:06:14 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16Bl0008809
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1P16Bq4009630
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1P16AZ1009619
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:55 -0800
Message-Id: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [V1 PATCH 1/3] pvh: early return from
	hvm_hap_nested_page_fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

pvh does not support nested hvm at present. As such, return early for pvh vcpus.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/hvm/hvm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..a4a3dcf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
     int sharing_enomem = 0;
     mem_event_request_t *req_ptr = NULL;
 
+    if ( is_pvh_vcpu(v) )
+        return 0;
+
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
      * If this fails, inject a nested page fault into the guest.
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TR-0001s7-8z; Tue, 25 Feb 2014 01:06:17 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TP-0001rr-KM
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:15 +0000
Received: from [85.158.143.35:32706] by server-1.bemta-4.messagelabs.com id
	51/11-31661-78CEB035; Tue, 25 Feb 2014 01:06:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393290372!8002629!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3169 invoked from network); 25 Feb 2014 01:06:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:06:14 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16Bl0008809
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1P16Bq4009630
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1P16AZ1009619
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:55 -0800
Message-Id: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] [V1 PATCH 1/3] pvh: early return from
	hvm_hap_nested_page_fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

pvh does not support nested hvm at present. As such, return if pvh.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/hvm/hvm.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 69f7e74..a4a3dcf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
     int sharing_enomem = 0;
     mem_event_request_t *req_ptr = NULL;
 
+    if ( is_pvh_vcpu(v) )
+        return 0;
+
     /* On Nested Virtualization, walk the guest page table.
      * If this succeeds, all is fine.
      * If this fails, inject a nested page fault into the guest.
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TR-0001sP-Vi; Tue, 25 Feb 2014 01:06:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TQ-0001rx-PR
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:17 +0000
Received: from [85.158.137.68:19895] by server-10.bemta-3.messagelabs.com id
	8C/B7-07302-78CEB035; Tue, 25 Feb 2014 01:06:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393290373!2412586!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8692 invoked from network); 25 Feb 2014 01:06:15 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:06:15 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16CkG008812
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:13 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BeG008270
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BQc017454
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:56 -0800
Message-Id: <1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Subject: [Xen-devel] [V1 PATCH 2/3] pvh: fix pirq path for pvh
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Just like hvm, pirq eoi shared page is not there for pvh. pvh should
not touch any pv_domain fields.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/irq.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index db70077..88444be 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1068,13 +1068,13 @@ bool_t cpu_has_pending_apic_eoi(void)
 
 static inline void set_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         set_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
 static inline void clear_pirq_eoi(struct domain *d, unsigned int irq)
 {
-    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
+    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
         clear_bit(irq, d->arch.pv_domain.pirq_eoi_map);
 }
 
-- 
1.8.3.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TR-0001sI-KS; Tue, 25 Feb 2014 01:06:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TQ-0001rs-9R
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:16 +0000
Received: from [193.109.254.147:45176] by server-6.bemta-14.messagelabs.com id
	F2/80-03396-78CEB035; Tue, 25 Feb 2014 01:06:15 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393290373!6563550!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20337 invoked from network); 25 Feb 2014 01:06:14 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:06:14 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16B5T015791
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16AAB019979
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1P16Add009604
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:10 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:10 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:54 -0800
Message-Id: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [V1 PATCH 0/3] pvh: misc bug fixes
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Resend of previous patch, but broken into multiple patches as
suggested. Please note, this for 4.5, as they seem to affect pvh dom0 
paths only.

Based on c/s: 5224dcca4ac222477d751120fe63c1ed251ad4b1

thanks
mukesh


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6TS-0001sf-HO; Tue, 25 Feb 2014 01:06:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WI6TR-0001s2-5g
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:06:17 +0000
Received: from [85.158.139.211:23954] by server-2.bemta-5.messagelabs.com id
	E5/E7-23037-88CEB035; Tue, 25 Feb 2014 01:06:16 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393290373!5972080!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27988 invoked from network); 25 Feb 2014 01:06:15 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 01:06:15 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P16C8x015795
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:13 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BM7020009
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO)
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:12 GMT
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P16BGG008272
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 01:06:11 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 17:06:11 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: xen-devel@lists.xenproject.org
Date: Mon, 24 Feb 2014 17:03:57 -0800
Message-Id: <1393290237-28427-4-git-send-email-mukesh.rathor@oracle.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Subject: [Xen-devel] [V1 PATCH 3/3] pvh: disallow
	PHYSDEVOP_pirq_eoi_gmfn_v2/v1
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A call to do_physdev_op with PHYSDEVOP_pirq_eoi_gmfn_v2/v1 will crash
xen later. Disallow that. Currently, such a path exists for linux
dom0 pvh.

Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
---
 xen/arch/x86/physdev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index bc0634c..9f85857 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -339,6 +339,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         unsigned long mfn;
         struct page_info *page;
 
+        ret = -ENOSYS;
+        if ( is_pvh_vcpu(current) )
+            break;
+
         ret = -EFAULT;
         if ( copy_from_guest(&info, arg, 1) != 0 )
             break;
-- 
1.8.3.1
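The guard added by this patch can be modeled in a small self-contained
sketch. This is Python standing in for the hypervisor's C dispatch path;
the function name and bare arguments are invented for illustration and
are not Xen's actual code:

```python
# Model of the do_physdev_op() guard for PHYSDEVOP_pirq_eoi_gmfn_v2/v1.
# Error numbers match the usual ENOSYS and EFAULT values.
ENOSYS, EFAULT = 38, 14

def pirq_eoi_gmfn_op(is_pvh_vcpu, guest_buf):
    # The patch rejects PVH vcpus up front, before the guest buffer
    # is copied in, so the later crash path is never reached.
    if is_pvh_vcpu:
        return -ENOSYS
    # Stand-in for copy_from_guest() failing on a bad handle.
    if guest_buf is None:
        return -EFAULT
    return 0  # the PV path continues as before

assert pirq_eoi_gmfn_op(True, b"info") == -ENOSYS
assert pirq_eoi_gmfn_op(False, None) == -EFAULT
assert pirq_eoi_gmfn_op(False, b"info") == 0
```

Note the ordering mirrors the diff: the vcpu-type check runs before any
guest memory is touched.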


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:38:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6xx-0002U6-6t; Tue, 25 Feb 2014 01:37:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WI6xv-0002Tq-E8
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:37:47 +0000
Received: from [85.158.137.68:3473] by server-6.bemta-3.messagelabs.com id
	4E/CB-09180-AE3FB035; Tue, 25 Feb 2014 01:37:46 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393292261!3965602!1
X-Originating-IP: [202.81.31.144]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0NCA9PiAzMTY3NTc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3118 invoked from network); 25 Feb 2014 01:37:45 -0000
Received: from e23smtp02.au.ibm.com (HELO e23smtp02.au.ibm.com) (202.81.31.144)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:37:45 -0000
Received: from /spool/local
	by e23smtp02.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Tue, 25 Feb 2014 11:37:40 +1000
Received: from d23dlp02.au.ibm.com (202.81.31.213)
	by e23smtp02.au.ibm.com (202.81.31.208) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Tue, 25 Feb 2014 11:37:38 +1000
Received: from d23relay03.au.ibm.com (d23relay03.au.ibm.com [9.190.235.21])
	by d23dlp02.au.ibm.com (Postfix) with ESMTP id DE5662BB0052
	for <xen-devel@lists.xenproject.org>;
	Tue, 25 Feb 2014 12:37:37 +1100 (EST)
Received: from d23av01.au.ibm.com (d23av01.au.ibm.com [9.190.234.96])
	by d23relay03.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1P1bOEV9568530
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 12:37:24 +1100
Received: from d23av01.au.ibm.com (localhost [127.0.0.1])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1P1bari021113
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 12:37:37 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av01.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1P1baSH021110; Tue, 25 Feb 2014 12:37:36 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id 2572FA039D; Tue, 25 Feb 2014 12:37:36 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Anthony Liguori <anthony@codemonkey.ws>
In-Reply-To: <CA+aC4kt1HcH8dryqDTxcv0rNWBx8Ed4F7VNVjuHCkAFSWC+Ktw@mail.gmail.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
	<CA+aC4kt1HcH8dryqDTxcv0rNWBx8Ed4F7VNVjuHCkAFSWC+Ktw@mail.gmail.com>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Tue, 25 Feb 2014 11:10:24 +1030
Message-ID: <87vbw458jr.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022501-5490-0000-0000-00000504BFB9
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
	different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Anthony Liguori <anthony@codemonkey.ws> writes:
> On Thu, Feb 20, 2014 at 4:54 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
>> On the other hand, if we wanted a more Xen-like setup, it would look
>> like this:
>>
>> 1) Abstract away the "physical addresses" to "handles" in the standard,
>>    and allow some platform-specific mapping setup and teardown.
>
> At the risk of beating a dead horse, passing handles (grant
> references) is going to be slow.
...
> I really think the best paths forward for virtio on Xen are either (1)
> reject the memory isolation thing and leave things as is or (2) assume
> bounce buffering at the transport layer (by using the PCI DMA API).

Xen can get memory isolation back by doing the copy in the hypervisor.
I've always liked that approach because it doesn't alter the guest
semantics, but it's very different from what Xen does now.
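The two addressing models being weighed here can be made concrete with a
toy sketch. This is Python with invented class and method names; it is
neither the virtio spec's descriptor layout nor Xen's real grant-table
API:

```python
# Hypothetical sketch of the two addressing models under discussion:
# a descriptor either carries a guest-physical address directly, or an
# opaque handle that the platform maps and unmaps around each transfer.

class GuestPhysTransport:
    """KVM-style: descriptors carry guest-physical addresses."""
    def __init__(self, guest_memory):
        self.mem = guest_memory          # dict: gpa -> bytes

    def read(self, gpa):
        return self.mem[gpa]             # host reads guest memory directly

class HandleTransport:
    """Xen-style: descriptors carry opaque handles (grant references)."""
    def __init__(self):
        self.grants = {}                 # handle -> bytes
        self._next = 0

    def grant(self, data):               # guest-side mapping setup
        h = self._next
        self._next += 1
        self.grants[h] = data
        return h

    def read(self, handle):              # host-side: resolve the handle
        return self.grants[handle]

    def revoke(self, handle):            # teardown restores isolation
        del self.grants[handle]

g = GuestPhysTransport({0x1000: b"pkt"})
assert g.read(0x1000) == b"pkt"

t = HandleTransport()
h = t.grant(b"pkt")
assert t.read(h) == b"pkt"
t.revoke(h)
```

The extra grant()/revoke() round trip per buffer is exactly where
Anthony expects the handle model to lose performance.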

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 01:38:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 01:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI6xw-0002Tz-Ry; Tue, 25 Feb 2014 01:37:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rusty@au1.ibm.com>) id 1WI6xv-0002Tp-E7
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 01:37:47 +0000
Received: from [85.158.137.68:54830] by server-13.bemta-3.messagelabs.com id
	8B/72-26923-AE3FB035; Tue, 25 Feb 2014 01:37:46 +0000
X-Env-Sender: rusty@au1.ibm.com
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393292261!2683624!1
X-Originating-IP: [202.81.31.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0MiA9PiAyMzYyNTE=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11339 invoked from network); 25 Feb 2014 01:37:45 -0000
Received: from e23smtp09.au.ibm.com (HELO e23smtp09.au.ibm.com) (202.81.31.142)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 01:37:45 -0000
Received: from /spool/local
	by e23smtp09.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <rusty@au1.ibm.com>;
	Tue, 25 Feb 2014 11:37:40 +1000
Received: from d23dlp03.au.ibm.com (202.81.31.214)
	by e23smtp09.au.ibm.com (202.81.31.206) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Tue, 25 Feb 2014 11:37:38 +1000
Received: from d23relay04.au.ibm.com (d23relay04.au.ibm.com [9.190.234.120])
	by d23dlp03.au.ibm.com (Postfix) with ESMTP id 404E0357805B
	for <xen-devel@lists.xenproject.org>;
	Tue, 25 Feb 2014 12:37:38 +1100 (EST)
Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139])
	by d23relay04.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1P1HsqZ8978830
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 12:17:54 +1100
Received: from d23av04.au.ibm.com (localhost [127.0.0.1])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1P1baj2010093
	for <xen-devel@lists.xenproject.org>; Tue, 25 Feb 2014 12:37:37 +1100
Received: from ozlabs.au.ibm.com (ozlabs.au.ibm.com [9.190.163.12])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1P1bahR010084; Tue, 25 Feb 2014 12:37:36 +1100
Received: by ozlabs.au.ibm.com (Postfix, from userid 1011)
	id 2C471A03B2; Tue, 25 Feb 2014 12:37:36 +1100 (EST)
From: Rusty Russell <rusty@au1.ibm.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <20140221150107.GG15905@phenom.dumpdata.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
	<CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
	<20140221100506.GR18398@zion.uk.xensource.com>
	<20140221150107.GG15905@phenom.dumpdata.com>
User-Agent: Notmuch/0.15.2 (http://notmuchmail.org) Emacs/23.4.1
	(x86_64-pc-linux-gnu)
Date: Tue, 25 Feb 2014 11:03:24 +1030
Message-ID: <87y51058vf.fsf@rustcorp.com.au>
MIME-Version: 1.0
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022501-3568-0000-0000-000004FEC9CB
Cc: virtio-dev@lists.oasis-open.org, Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> writes:
> On Fri, Feb 21, 2014 at 10:05:06AM +0000, Wei Liu wrote:
>> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
>> > The standard should say, "physical address"
>
> This conversation is heading towards - implementation needs it - hence let's
> make the design have it. Which I am OK with - but if we are going that
> route we might as well call this thing 'my-pony-number' because I think
> each hypervisor will have a different view of it.
>
> Some of them might use a physical address with some flag bits on it.
> Some might use just physical address.
>
> And some might want a 32-bit value that has no correlation to physical
> or virtual addresses.

True, but if the standard doesn't define what it is, it's not a standard
worth anything.  Xen is special because it's already requiring guest
changes; it's a platform in itself and so can be different from
everything else.  But it still needs to be defined.

At the moment, anything but guest-phys would not be compliant.  That's a
Good Thing if we simply don't know the best answer for Xen; we'll adjust
the standard when we do.
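Konrad's "physical address with some flag bits" variant is easy to make
concrete. A minimal sketch in Python; the mask width and the flag value
are invented for illustration and are not part of any hypervisor ABI:

```python
# Pack per-buffer flags into the low bits of a page-aligned
# guest-physical address, as one hypervisor might choose to do.
PAGE_SHIFT = 12
FLAG_MASK = (1 << PAGE_SHIFT) - 1   # low bits are free on aligned buffers
FLAG_RO = 0x1                       # invented flag: read-only mapping

def pack(gpa, flags):
    assert gpa & FLAG_MASK == 0, "buffer must be page-aligned"
    assert flags & ~FLAG_MASK == 0
    return gpa | flags

def unpack(value):
    return value & ~FLAG_MASK, value & FLAG_MASK

h = pack(0x7f000, FLAG_RO)
assert unpack(h) == (0x7f000, FLAG_RO)
```

This only works when buffers are page-aligned, which is one reason a
standard that says nothing beyond "guest-phys" leaves such schemes
non-compliant.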

Cheers,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 02:02:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 02:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI7Li-00038h-3i; Tue, 25 Feb 2014 02:02:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben@decadent.org.uk>) id 1WI7Lg-00038c-I0
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 02:02:20 +0000
Received: from [85.158.143.35:16387] by server-2.bemta-4.messagelabs.com id
	EE/5A-10891-BA9FB035; Tue, 25 Feb 2014 02:02:19 +0000
X-Env-Sender: ben@decadent.org.uk
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393293739!8017372!1
X-Originating-IP: [88.96.1.126]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 424 invoked from network); 25 Feb 2014 02:02:19 -0000
Received: from shadbolt.e.decadent.org.uk (HELO shadbolt.e.decadent.org.uk)
	(88.96.1.126)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 25 Feb 2014 02:02:19 -0000
Received: from [192.168.4.242] (helo=deadeye.wl.decadent.org.uk)
	by shadbolt.decadent.org.uk with esmtps
	(TLS1.2:DHE_RSA_AES_128_CBC_SHA1:128) (Exim 4.80)
	(envelope-from <ben@decadent.org.uk>)
	id 1WI7LW-0000mf-US; Tue, 25 Feb 2014 02:02:11 +0000
Received: from ben by deadeye.wl.decadent.org.uk with local (Exim 4.82)
	(envelope-from <ben@decadent.org.uk>)
	id 1WI7LW-0005KQ-GF; Tue, 25 Feb 2014 02:02:10 +0000
Message-ID: <1393293719.6823.148.camel@deadeye.wl.decadent.org.uk>
From: Ben Hutchings <ben@decadent.org.uk>
To: David Miller <davem@davemloft.net>
Date: Tue, 25 Feb 2014 02:01:59 +0000
In-Reply-To: <20140224.191238.921310808350170272.davem@davemloft.net>
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
	<20140224.191238.921310808350170272.davem@davemloft.net>
X-Mailer: Evolution 3.8.5-2+b2 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 192.168.4.242
X-SA-Exim-Mail-From: ben@decadent.org.uk
X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk);
	SAEximRunCond expanded to false
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3922592485332296777=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============3922592485332296777==
Content-Type: multipart/signed; micalg="pgp-sha512";
	protocol="application/pgp-signature"; boundary="=-YYrQ2oDAg0I7CCTJmDT8"


--=-YYrQ2oDAg0I7CCTJmDT8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-02-24 at 19:12 -0500, David Miller wrote:
> From: Ben Hutchings <ben@decadent.org.uk>
> Date: Tue, 25 Feb 2014 00:02:00 +0000
> 
> > You can run an internal network, or access network, as v6-only with
> > NAT64 and DNS64 at the border.  I believe some mobile networks are doing
> > this; it was also done on the main FOSDEM wireless network this year.
> 
> This seems to be bloating up the networking headers of the internal
> network, for what purpose?
> 
> For mobile that's doubly inadvisable.

I don't know what the reasoning is for the mobile network operators.
They're forced to do NAT for v4 somewhere, and maybe v6-only makes the
access network easier to manage.

I doubt the extra header length hurts that much on a 3G or 4G network.

Ben.

-- 
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                           - Albert Einstein

--=-YYrQ2oDAg0I7CCTJmDT8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUAUwv5l+e/yOyVhhEJAQpXUA/9HUG2G2LIDPmo09FoOGCUKQ3ak6vf9ZgB
WCkn4sqbfANNxloU8AZT/rnWzcaU5GwlTm4b3XCEmiU9pDCOfZDDa81a2j20MnqI
MHY61c19Vg7SIeQ46TbVGlUOvn13XkHeqYulQIfgW6EvUnVugVog1/MHdagsTpjM
wJSHN1y8NegK07aO9KkEMM16APT6nE1bZ8tVTB5iwaX3b23ZBY/Z5P7Zk70gcbLt
4VoaNmzv76O5SQJ82KQjbpuI+ZNYYXstG6ZIfRUEcFbBEZc0+6LQoEkEK9LOJcSk
4khGycBNemZlQleyhs35WdYde9ZCHkOhHOBzYmmBiuVdy8wwm+rYSEr0RNpBnMji
6cltOYeIX0pXC0FVf7ooeQLrZhYhEmo2AM0qCOqsOt6waTQ54zBmb/PNK2aJUKhJ
Gvkb/5aNQrAiOV5XWkrzT+jXRAVd8JYHQYE/nSBHwtdk+mdukXLdlBz4FaB8o5I/
peG/3Xvo+J74WEHLOGJhvKRgS6N6WZ7rurLUX4O4EiSZD90HoSdQBp7yhaasjfEw
65x2pf+ZM99XwS+41dVj9TCc14KjRdbt13FeJaSbz2GD+WpcOIRRdR2/0LL7/Swl
lYpVB21XTFhrERBL9qRt1X43sHnqCHgDL1+UfJLmCZ9JXHaQ1vqdkTv9BTTgkeee
zdXHXl0KAsg=
=rbWU
-----END PGP SIGNATURE-----

--=-YYrQ2oDAg0I7CCTJmDT8--


--===============3922592485332296777==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3922592485332296777==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 02:02:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 02:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WI7Li-00038h-3i; Tue, 25 Feb 2014 02:02:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ben@decadent.org.uk>) id 1WI7Lg-00038c-I0
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 02:02:20 +0000
Received: from [85.158.143.35:16387] by server-2.bemta-4.messagelabs.com id
	EE/5A-10891-BA9FB035; Tue, 25 Feb 2014 02:02:19 +0000
X-Env-Sender: ben@decadent.org.uk
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393293739!8017372!1
X-Originating-IP: [88.96.1.126]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 424 invoked from network); 25 Feb 2014 02:02:19 -0000
Received: from shadbolt.e.decadent.org.uk (HELO shadbolt.e.decadent.org.uk)
	(88.96.1.126)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 25 Feb 2014 02:02:19 -0000
Received: from [192.168.4.242] (helo=deadeye.wl.decadent.org.uk)
	by shadbolt.decadent.org.uk with esmtps
	(TLS1.2:DHE_RSA_AES_128_CBC_SHA1:128) (Exim 4.80)
	(envelope-from <ben@decadent.org.uk>)
	id 1WI7LW-0000mf-US; Tue, 25 Feb 2014 02:02:11 +0000
Received: from ben by deadeye.wl.decadent.org.uk with local (Exim 4.82)
	(envelope-from <ben@decadent.org.uk>)
	id 1WI7LW-0005KQ-GF; Tue, 25 Feb 2014 02:02:10 +0000
Message-ID: <1393293719.6823.148.camel@deadeye.wl.decadent.org.uk>
From: Ben Hutchings <ben@decadent.org.uk>
To: David Miller <davem@davemloft.net>
Date: Tue, 25 Feb 2014 02:01:59 +0000
In-Reply-To: <20140224.191238.921310808350170272.davem@davemloft.net>
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
	<20140224.191238.921310808350170272.davem@davemloft.net>
X-Mailer: Evolution 3.8.5-2+b2 
Mime-Version: 1.0
X-SA-Exim-Connect-IP: 192.168.4.242
X-SA-Exim-Mail-From: ben@decadent.org.uk
X-SA-Exim-Scanned: No (on shadbolt.decadent.org.uk);
	SAEximRunCond expanded to false
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3922592485332296777=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============3922592485332296777==
Content-Type: multipart/signed; micalg="pgp-sha512";
	protocol="application/pgp-signature"; boundary="=-YYrQ2oDAg0I7CCTJmDT8"


--=-YYrQ2oDAg0I7CCTJmDT8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2014-02-24 at 19:12 -0500, David Miller wrote:
> From: Ben Hutchings <ben@decadent.org.uk>
> Date: Tue, 25 Feb 2014 00:02:00 +0000
> 
> > You can run an internal network, or access network, as v6-only with
> > NAT64 and DNS64 at the border.  I believe some mobile networks are doing
> > this; it was also done on the main FOSDEM wireless network this year.
> 
> This seems to be bloating up the networking headers of the internal
> network, for what purpose?
> 
> For mobile that's doubly inadvisable.

I don't know what the reasoning is for the mobile network operators.
They're forced to do NAT for v4 somewhere, and maybe v6-only makes the
access network easier to manage.

I doubt the extra header length hurts that much on a 3G or 4G network.
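For scale, the per-packet cost Ben is estimating can be sketched with standard header sizes (illustrative arithmetic only; real overhead also depends on link-layer framing and packet size mix):

```python
# Compare usable payload of a full-size Ethernet frame (MTU 1500) when
# carried over IPv4 vs IPv6, using the fixed base header sizes.
MTU = 1500
TCP_HDR = 20            # TCP header without options
IPV4_HDR, IPV6_HDR = 20, 40

payload_v4 = MTU - IPV4_HDR - TCP_HDR   # 1460 bytes of application data
payload_v6 = MTU - IPV6_HDR - TCP_HDR   # 1440 bytes

loss = (payload_v4 - payload_v6) / payload_v4
print(f"IPv6 carries {loss:.1%} less payload per full-size packet")
```

On full-size packets the difference is roughly 1.4%, which is consistent with the point that the extra header length is unlikely to matter much on a 3G/4G access network.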

Ben.

-- 
Ben Hutchings
Everything should be made as simple as possible, but not simpler.
                                                           - Albert Einstein

--=-YYrQ2oDAg0I7CCTJmDT8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQIVAwUAUwv5l+e/yOyVhhEJAQpXUA/9HUG2G2LIDPmo09FoOGCUKQ3ak6vf9ZgB
WCkn4sqbfANNxloU8AZT/rnWzcaU5GwlTm4b3XCEmiU9pDCOfZDDa81a2j20MnqI
MHY61c19Vg7SIeQ46TbVGlUOvn13XkHeqYulQIfgW6EvUnVugVog1/MHdagsTpjM
wJSHN1y8NegK07aO9KkEMM16APT6nE1bZ8tVTB5iwaX3b23ZBY/Z5P7Zk70gcbLt
4VoaNmzv76O5SQJ82KQjbpuI+ZNYYXstG6ZIfRUEcFbBEZc0+6LQoEkEK9LOJcSk
4khGycBNemZlQleyhs35WdYde9ZCHkOhHOBzYmmBiuVdy8wwm+rYSEr0RNpBnMji
6cltOYeIX0pXC0FVf7ooeQLrZhYhEmo2AM0qCOqsOt6waTQ54zBmb/PNK2aJUKhJ
Gvkb/5aNQrAiOV5XWkrzT+jXRAVd8JYHQYE/nSBHwtdk+mdukXLdlBz4FaB8o5I/
peG/3Xvo+J74WEHLOGJhvKRgS6N6WZ7rurLUX4O4EiSZD90HoSdQBp7yhaasjfEw
65x2pf+ZM99XwS+41dVj9TCc14KjRdbt13FeJaSbz2GD+WpcOIRRdR2/0LL7/Swl
lYpVB21XTFhrERBL9qRt1X43sHnqCHgDL1+UfJLmCZ9JXHaQ1vqdkTv9BTTgkeee
zdXHXl0KAsg=
=rbWU
-----END PGP SIGNATURE-----

--=-YYrQ2oDAg0I7CCTJmDT8--


--===============3922592485332296777==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3922592485332296777==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 04:52:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 04:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIA0J-0003jz-Fv; Tue, 25 Feb 2014 04:52:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WIA0I-0003ju-Cy
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 04:52:26 +0000
Received: from [85.158.139.211:13207] by server-16.bemta-5.messagelabs.com id
	C2/F7-05060-9812C035; Tue, 25 Feb 2014 04:52:25 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393303943!5991197!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21434 invoked from network); 25 Feb 2014 04:52:24 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 04:52:24 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P4qMJA002051
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 25 Feb 2014 04:52:22 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P4qKrQ012879
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 25 Feb 2014 04:52:21 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P4qK9Y012874; Tue, 25 Feb 2014 04:52:20 GMT
Received: from [192.168.0.100] (/116.227.152.143)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 20:52:20 -0800
Message-ID: <530C2180.5060508@oracle.com>
Date: Tue, 25 Feb 2014 12:52:16 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
	<CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
	<530ABD5C.10506@oracle.com>
	<CACaajQsGGijsr6a9VP+QxDw31LS63DzGTKyT3yLO6GxhpfZuew@mail.gmail.com>
In-Reply-To: <CACaajQsGGijsr6a9VP+QxDw31LS63DzGTKyT3yLO6GxhpfZuew@mail.gmail.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 02/24/2014 07:01 PM, Vasiliy Tolstov wrote:
> 2014-02-24 7:32 GMT+04:00 Bob Liu <bob.liu@oracle.com>:
>> Two types of page can be stored in tmem: persistent_page and ephemeral_page.
>>
>> Persistent pages are swapped-out pages, whose data can't be dropped by
>> tmem. The rule for persistent pages is:
>> 'current_domain_pages + persistent_pages have to be smaller than
>> domain->max_pages'.
>>
>> Ephemeral pages are clean pagecache pages; those pages already have a
>> copy on disk.
>> The number of ephemeral pages is not limited, but the Xen host will
>> reclaim those pages when under memory pressure.
>> There is a tmem parameter 'weight' which can be used to control how many
>> ephemeral_pages should be reclaimed from each domain.
> 
> 
> Very good, thanks for the answers! What minimal kernel version do you
> recommend for frontswap/cleancache in domU (dom0 is Xen 4.3.2)?
> 

Any version from v3.5 onward should be okay; I'd recommend versions
after v3.10, since there have been hardly any commits since then.

I also suggest you apply this patch (which hasn't been merged into
Linus' tree yet):
https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/commit/?h=stable/for-linus-3.14&id=bc1b0df59e3fc4573f92bc1aab9652047a0aeaa7
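The admission rule quoted above for persistent pages can be sketched as a simple check (the function name and the exact boundary handling are illustrative, not Xen's actual tmem code):

```python
def persistent_put_allowed(current_pages: int, persistent_pages: int,
                           max_pages: int) -> bool:
    """Would one more persistent (frontswap) page fit under the domain cap?

    Mirrors the rule quoted above: current_domain_pages plus
    persistent_pages must stay within domain->max_pages.  Ephemeral
    (cleancache) pages are not counted here -- they are unlimited but
    reclaimable under host memory pressure.
    """
    return current_pages + persistent_pages + 1 <= max_pages

# A domain capped at 1 GiB of 4 KiB pages, nearly full:
max_pages = 1 << 18   # 262144 pages = 1 GiB
print(persistent_put_allowed(260000, 2000, max_pages))   # room left
print(persistent_put_allowed(260000, 2144, max_pages))   # over the cap
```

This is why a guest can always push clean pagecache into ephemeral tmem, while swap pages are rejected once the domain's total footprint would exceed its allocation.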

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 05:05:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 05:05:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIACu-00045n-BQ; Tue, 25 Feb 2014 05:05:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vase@selfip.ru>) id 1WIACs-00045i-0J
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 05:05:26 +0000
Received: from [85.158.143.35:12070] by server-1.bemta-4.messagelabs.com id
	B9/10-31661-5942C035; Tue, 25 Feb 2014 05:05:25 +0000
X-Env-Sender: vase@selfip.ru
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393304723!8015563!1
X-Originating-IP: [209.85.216.172]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8935 invoked from network); 25 Feb 2014 05:05:24 -0000
Received: from mail-qc0-f172.google.com (HELO mail-qc0-f172.google.com)
	(209.85.216.172)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 05:05:24 -0000
Received: by mail-qc0-f172.google.com with SMTP id w7so8018533qcr.17
	for <xen-devel@lists.xen.org>; Mon, 24 Feb 2014 21:05:23 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=selfip.ru; s=google;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=Y5k+NX9aRWcByTCvHLFg/AKCN44cY8lP24wkCgVJkYg=;
	b=AE0j+beNlB3KLFFcYlgoB2TEx0lAFA1nZFjei7ZmXRDd2WlYhMnNFNOZPiS6Nb8YAZ
	+BdvD5dtuYI6YBJFPPj3qnHbv4eCEPeyqldWVkr3rVspl4g76mFEsysP/JgA+XzzQviZ
	Qx7gEEPhY9JpOyLVsJ3dzCtrjwXdUpAwY967c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=Y5k+NX9aRWcByTCvHLFg/AKCN44cY8lP24wkCgVJkYg=;
	b=TPxU86AiyCA1LqAcERJ9x5S3X2PY8kllp6p0xO9z8wryNWtXyPj7e7CiNlsoNCCEg1
	AlUBlYtYSqmvbNNMzltkYrjQDxi3wblbyo3pGcsgITwNzSF0mL6zXFgXRl0kQVnVxt+t
	A+AXFRxdAtd6s9+qylzq0Tk5tehdZ5pEqFtQrbWrqQia4I5+Dlje48Ejlp0zYw3v9U4z
	70Z08Q3DQmsir2Dk6F70WC3LK0L2pWyXzaXCn3BWHOkjYNSXkGDR+LAmTKefJcqcV9ET
	IvGbjTwPj/yIEzau/md4ZYtv/dGEpVigyIlwJ8KhFBP1Zwy8dmM6OjmxpWPPze1va1HN
	paWQ==
X-Gm-Message-State: ALoCoQn7v7pJJbDAGyEVtimG7Jo8BuU4llJx1xkIXW9IljoRMD2FeozgLz0qPErimmH/V+wrywni
X-Received: by 10.140.42.138 with SMTP id c10mr33538572qga.24.1393304723233;
	Mon, 24 Feb 2014 21:05:23 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.110.133 with HTTP; Mon, 24 Feb 2014 21:05:08 -0800 (PST)
X-Originating-IP: [85.143.161.18]
In-Reply-To: <530C2180.5060508@oracle.com>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
	<CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
	<530ABD5C.10506@oracle.com>
	<CACaajQsGGijsr6a9VP+QxDw31LS63DzGTKyT3yLO6GxhpfZuew@mail.gmail.com>
	<530C2180.5060508@oracle.com>
From: Vasiliy Tolstov <v.tolstov@selfip.ru>
Date: Tue, 25 Feb 2014 09:05:08 +0400
X-Google-Sender-Auth: lXV3YxP9HwAnamvzHBB0PYjOk5w
Message-ID: <CACaajQvw3oGCJG9ghtdAP7vLXLbS25oJf+ra0rGn5qosQrb+PA@mail.gmail.com>
To: Bob Liu <bob.liu@oracle.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

2014-02-25 8:52 GMT+04:00 Bob Liu <bob.liu@oracle.com>:
> Any version from v3.5 onward should be okay; I'd recommend versions
> after v3.10, since there have been hardly any commits since then.
>

Okay. Big thanks!

> I also suggest you apply this patch (which hasn't been merged into
> Linus' tree yet):
> https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/commit/?h=stable/for-linus-3.14&id=bc1b0df59e3fc4573f92bc1aab9652047a0aeaa7

I don't need that - I balloon from userspace, because from userspace I
have more control over how fast memory ballooning proceeds. And not all
kernel versions have xen-selfballoon.


-- 
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 05:18:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 05:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIAP3-0004HF-Ih; Tue, 25 Feb 2014 05:18:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WIAP2-0004HA-CD
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 05:18:00 +0000
Received: from [85.158.137.68:61654] by server-8.bemta-3.messagelabs.com id
	C0/D2-16039-7872C035; Tue, 25 Feb 2014 05:17:59 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393305476!3973635!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11050 invoked from network); 25 Feb 2014 05:17:58 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 05:17:58 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Mon, 24 Feb 2014 22:17:47 -0700
Message-ID: <530C2779.20502@suse.com>
Date: Mon, 24 Feb 2014 22:17:45 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>	<530B62A2.3080901@eu.citrix.com>	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
	<CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
In-Reply-To: <CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote:
> On Mon, Feb 24, 2014 at 3:47 PM, George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>   
>> On Mon, Feb 24, 2014 at 3:17 PM, George Dunlap
>> <george.dunlap@eu.citrix.com> wrote:
>>     
>>> On 02/24/2014 02:19 PM, Ian Jackson wrote:
>>>       
>>>> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
>>>> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
>>>> The result on Linux is that the process always deadlocks before
>>>> returning from this function.
>>>>
>>>> This is used by xl's console child.  So, the ultimate effect is that
>>>> xl with pygrub does not manage to connect to the pygrub console.
>>>> This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
>>>>
>>>> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
>>>> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
>>>> documented to suffice if called only on one ctx.  So deregistering the
>>>> ctx it's called on is not sufficient.  Instead, we need a new approach
>>>> which discards the whole sigchld_user list and unconditionally removes
>>>> our SIGCHLD handler if we had one.
>>>>
>>>> Prompted by this, clarify the semantics of
>>>> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
>>>> "quickly" by explaining what operations are not permitted; and
>>>> document the fact that the function doesn't reclaim the resources in
>>>> the ctxs.
>>>>
>>>> And add a comment in libxl_postfork_child_noexec explaining the
>>>> internal concurrency situation.
>>>>
>>>> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
>>>>
>>>> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>> Reported-by: M A Young <m.a.young@durham.ac.uk>
>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>         
>>> So it looks like this path gets called from a number of other places in xl:
>>>
>>> libxl_postfork_child_noexec() is called by xl.c:postfork().
>>>
>>> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(),
>>> autoconnect_console(), and do_daemonize().
>>>
>>> do_daemonize() is called during "xl create", and "xl devd".
>>>
>>> Was this deadlock not triggered for those, or was it triggered and nobody
>>> noticed?
>>>       
>> In any case, I do think we need to fix this; the main question is, do
>> we need to delay the release a bit further to make sure it gets
>> sufficient testing?
>>     
>
> Also,  it would be nice to get a Tested-by: from someone using it with
> libvirt (before the release at least, if not before the check-in).
>
> Jim / Dario?
>   

I'll update my test system to rc6 tomorrow and restart my tests.

FYI, the tests were running over the weekend on rc5 + libvirt 1.2.2
rc1.  Over 25,000 domains started, shut down, created, saved, restored,
etc. with no problems noted.

Regards,
Jim
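The deadlock Ian's patch fixes -- the same thread re-acquiring a non-recursive mutex -- can be reproduced in miniature. A Python stand-in for the pthread mutex (using a timeout so the demonstration doesn't actually hang):

```python
import threading

plain = threading.Lock()       # non-recursive, like libxl's "no_forking" mutex
plain.acquire()                # first acquire, e.g. inside atfork_lock
# A nested acquire from the same thread can never succeed; the timeout
# lets us observe the would-be deadlock instead of blocking forever.
deadlocks = not plain.acquire(timeout=0.1)
print("non-recursive re-acquire deadlocks:", deadlocks)
plain.release()

reentrant = threading.RLock()  # a recursive lock tolerates nesting
reentrant.acquire()
nested_ok = reentrant.acquire(blocking=False)
print("recursive re-acquire succeeds:", nested_ok)
reentrant.release()
reentrant.release()
```

The fix described above avoids the nesting altogether rather than switching to a recursive mutex, which is the usual preference when the nesting indicates a layering bug.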


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 05:18:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 05:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIAP3-0004HF-Ih; Tue, 25 Feb 2014 05:18:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WIAP2-0004HA-CD
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 05:18:00 +0000
Received: from [85.158.137.68:61654] by server-8.bemta-3.messagelabs.com id
	C0/D2-16039-7872C035; Tue, 25 Feb 2014 05:17:59 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393305476!3973635!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11050 invoked from network); 25 Feb 2014 05:17:58 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 05:17:58 -0000
Received: from [192.168.0.13] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Mon, 24 Feb 2014 22:17:47 -0700
Message-ID: <530C2779.20502@suse.com>
Date: Mon, 24 Feb 2014 22:17:45 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.18 (X11/20081112)
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>	<530B62A2.3080901@eu.citrix.com>	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
	<CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
In-Reply-To: <CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

George Dunlap wrote:
> On Mon, Feb 24, 2014 at 3:47 PM, George Dunlap
> <George.Dunlap@eu.citrix.com> wrote:
>   
>> On Mon, Feb 24, 2014 at 3:17 PM, George Dunlap
>> <george.dunlap@eu.citrix.com> wrote:
>>     
>>> On 02/24/2014 02:19 PM, Ian Jackson wrote:
>>>       
>>>> libxl_postfork_child_noexec would nestedly reacquire the non-recursive
>>>> "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
>>>> The result on Linux is that the process always deadlocks before
>>>> returning from this function.
>>>>
>>>> This is used by xl's console child.  So, the ultimate effect is that
>>>> xl with pygrub does not manage to connect to the pygrub console.
>>>> This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
>>>>
>>>> Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
>>>> not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
>>>> documented to suffice if called only on one ctx.  So deregistering the
>>>> ctx it's called on is not sufficient.  Instead, we need a new approach
>>>> which discards the whole sigchld_user list and unconditionally removes
>>>> our SIGCHLD handler if we had one.
>>>>
>>>> Prompted by this, clarify the semantics of
>>>> libxl_postfork_child_noexec.  Specifically, expand on the meaning of
>>>> "quickly" by explaining what operations are not permitted; and
>>>> document the fact that the function doesn't reclaim the resources in
>>>> the ctxs.
>>>>
>>>> And add a comment in libxl_postfork_child_noexec explaining the
>>>> internal concurrency situation.
>>>>
>>>> This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
>>>>
>>>> Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
>>>> Reported-by: M A Young <m.a.young@durham.ac.uk>
>>>> CC: Ian Campbell <Ian.Campbell@citrix.com>
>>>> CC: George Dunlap <george.dunlap@eu.citrix.com>
>>>>         
>>> So it looks like this path gets called from a number of other places in xl:
>>>
>>> libxl_postfork_child_noexec() is called by xl.c:postfork().
>>>
>>> postfork() is called in xl_cmdimpl.c by autoconnect_vncviewer(),
>>> autoconnect_console(), and do_daemonize().
>>>
>>> do_daemonize() is called during "xl create", and "xl devd".
>>>
>>> Was this deadlock not triggered for those, or was it triggered and nobody
>>> noticed?
>>>       
>> In any case, I do think we need to fix this; the main question is, do
>> we need to delay the release a bit further to make sure it gets
>> sufficient testing?
>>     
>
> Also, it would be nice to get a Tested-by: from someone using it with
> libvirt (before the release at least, if not before the check-in).
>
> Jim / Dario?
>   

I'll update my test system to rc6 tomorrow and restart my tests.

FYI, the tests were running over the weekend on rc5 + libvirt 1.2.2
rc1.  Over 25,000 domains started, shut down, created, saved, restored,
etc., with no problems noted.

Regards,
Jim


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 05:27:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 05:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIAXi-0004Qy-AA; Tue, 25 Feb 2014 05:26:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <bob.liu@oracle.com>) id 1WIAXg-0004Qt-K8
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 05:26:56 +0000
Received: from [85.158.137.68:12904] by server-4.bemta-3.messagelabs.com id
	93/70-04858-F992C035; Tue, 25 Feb 2014 05:26:55 +0000
X-Env-Sender: bob.liu@oracle.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393306013!3942991!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6788 invoked from network); 25 Feb 2014 05:26:55 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 05:26:55 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1P5Qo34001604
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 25 Feb 2014 05:26:51 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P5Qnhu022876
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 25 Feb 2014 05:26:50 GMT
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1P5QnFT022868; Tue, 25 Feb 2014 05:26:49 GMT
Received: from [192.168.0.100] (/116.227.152.143)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Mon, 24 Feb 2014 21:26:49 -0800
Message-ID: <530C2996.3050005@oracle.com>
Date: Tue, 25 Feb 2014 13:26:46 +0800
From: Bob Liu <bob.liu@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130308 Thunderbird/17.0.4
MIME-Version: 1.0
To: Vasiliy Tolstov <v.tolstov@selfip.ru>
References: <CACaajQuZxhs7HB7Gd40gVT2oRryN2VoMQyt_CoxEmvQOvnn3XA@mail.gmail.com>
	<CACaajQvU5KKiHMH66UL+XDTEdx-VLNY-UggQOAnBGuQkM7coFg@mail.gmail.com>
	<530ABD5C.10506@oracle.com>
	<CACaajQsGGijsr6a9VP+QxDw31LS63DzGTKyT3yLO6GxhpfZuew@mail.gmail.com>
	<530C2180.5060508@oracle.com>
	<CACaajQvw3oGCJG9ghtdAP7vLXLbS25oJf+ra0rGn5qosQrb+PA@mail.gmail.com>
In-Reply-To: <CACaajQvw3oGCJG9ghtdAP7vLXLbS25oJf+ra0rGn5qosQrb+PA@mail.gmail.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] tmem frontswap without swap file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On 02/25/2014 01:05 PM, Vasiliy Tolstov wrote:
> 2014-02-25 8:52 GMT+04:00 Bob Liu <bob.liu@oracle.com>:
>> Any version starting from v3.5 should be okay; I'd recommend versions
>> after v3.10, since there have been hardly any commits since that version.
>>
> 
> Okay. Big thanks!
> 
>> And I suggest you also apply this patch (which hasn't been merged to
>> Linus' git tree yet):
>> https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/commit/?h=stable/for-linus-3.14&id=bc1b0df59e3fc4573f92bc1aab9652047a0aeaa7
> 
> I don't need that - I'm ballooning from userspace, because there I
> have more control over how fast the memory ballooning proceeds. And
> not all kernel versions have xen-selfballoon.
> 

I see; any feedback is welcome!

-- 
Regards,
-Bob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 07:14:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 07:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WICD1-00051m-Mk; Tue, 25 Feb 2014 07:13:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WICD0-00051h-9V
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 07:13:42 +0000
Received: from [85.158.137.68:29044] by server-6.bemta-3.messagelabs.com id
	03/B7-09180-5A24C035; Tue, 25 Feb 2014 07:13:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393312418!2725353!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9307 invoked from network); 25 Feb 2014 07:13:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 07:13:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,538,1389744000"; d="scan'208";a="103793878"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 07:13:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 02:13:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WICCu-00034A-SZ;
	Tue, 25 Feb 2014 07:13:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WICCu-0003zM-S6;
	Tue, 25 Feb 2014 07:13:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25290-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 07:13:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25290: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25290 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25290/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu 11 guest-saverestore         fail REGR. vs. 25281
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25281

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  d6ac84ca0db28b99073d4364815e0f71600c5780
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d6ac84ca0db28b99073d4364815e0f71600c5780
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    (cherry picked from commit 5be1e95318147855713709094e6847e3104ae910)

commit 9b51591c330672765d866a5ed4ed8e40c75bb1cf
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:33:00 2014 +0100

    iommu: don't need to map dom0 page when the PT is shared
    
    Currently iommu_init_dom0 walks the page list and calls the map_page
    callback on each page.
    
    In both the AMD and VTD drivers, the function returns immediately if the
    page table is shared with the processor, so Xen can safely avoid running
    through the page list.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 2608662379d50e69b3bba4e6827fc910db9f64f8
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:32:00 2014 +0100

    vtd: don't export iommu_set_pgd
    
    iommu_set_pgd is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Xiantoa Zhang <xiantao.zhang@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit b0566814a400f436056faac286d2804edb5791c0
Merge: 24e0a36... 21eaf5b...
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:31:28 2014 +0100

    Merge branch 'staging' of xenbits.xen.org:/home/xen/git/xen into staging

commit 24e0a36d63a6bac1e2777b7265c8add3f7895e58
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:21:54 2014 +0100

    vtd: don't export iommu_domain_teardown
    
    iommu_domain_teardown is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 21eaf5b06a41c787e1441523f7b22e57565bb292
Author: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date:   Sat Feb 22 11:35:54 2014 +0100

    libxl: comments cleanup on libxl_dm.c
    
    Removed some unhelpful comment lines.
    
    Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 9625f4ef05031773a52ebe3ca84db64af68956d6
Author: Xudong Hao <xudong.hao@intel.com>
Date:   Mon Feb 24 12:11:53 2014 +0100

    x86: expose RDSEED, ADX, and PREFETCHW to dom0
    
    This patch explicitly exposes Intel new features to dom0, including
    RDSEED and ADX. As for PREFETCHW, it doesn't need explicit exposing.
    
    Signed-off-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 5d160d913e03b581bdddde73535c18ac670cf0a9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:11:01 2014 +0100

    x86/MSI: don't risk division by zero
    
    The check in question is redundant with the one in the immediately
    following if(), where dividing by zero gets carefully avoided.
    
    Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit fd1864f48d8914fb8eeb6841cd08c2c09b368909
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Mon Feb 24 12:09:52 2014 +0100

    Nested VMX: update nested paging mode on vmexit
    
    Since SVM and VMX use different mechanisms to emulate virtual-vmentry
    and virtual-vmexit, it's hard to update the nested paging mode correctly in
    common code, so we need to update the nested paging mode in their respective
    code paths.
    SVM already updates the nested paging mode on vmexit. This patch adds the
    same logic on the VMX side.
    
    Previous discussion is here:
    http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Christoph Egger <chegger@amazon.de>

commit 199a0878195f3a271394d126c66e8030c461ebd3
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Date:   Mon Feb 24 12:09:14 2014 +0100

    vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
    
    vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
    registers. But due to this statement:
    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
    we were wrongly masking off the top two bits, which meant the register
    accesses never made it to the vmce_amd_* functions.
    
    This patch corrects the problem by modifying the mask so that AMD
    thresholding registers fall through to the 'default' case, which in turn
    allows the vmce_amd_* functions to handle accesses to those registers.
    
    While at it, remove some clutter in the vmce_amd* functions. Retained
    current policy of returning zero for reads and ignoring writes.
    
    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

commit 60ea3a3ac3d2bcd8e85b250fdbfc46b3b9dc7085
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Mon Feb 24 12:07:41 2014 +0100

    x86/MCE: Fix race condition in mctelem_reserve
    
    These lines (in mctelem_reserve)
    
            newhead = oldhead->mcte_next;
            if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
    
    are racy. After you read the newhead pointer, it can happen that another
    flow (a thread or a recursive invocation) changes the whole list but
    leaves the head with the same value. So oldhead is the same as *freelp,
    but you are setting a new head that could point to any element (even one
    already in use).
    
    This patch instead uses a bit array and atomic bit operations.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  d6ac84ca0db28b99073d4364815e0f71600c5780
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d6ac84ca0db28b99073d4364815e0f71600c5780
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would reacquire the non-recursive
    "no_forking" mutex in a nested fashion: atfork_lock uses it, as does
    sigchld_user_remove.  The result on Linux is that the process always
    deadlocks before returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    (cherry picked from commit 5be1e95318147855713709094e6847e3104ae910)
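    The deadlock pattern described above (the same thread re-taking a
    non-recursive mutex through a nested call) can be sketched as follows.
    This is a minimal illustration, not libxl's actual code; the comments
    mapping the two lock calls to atfork_lock and sigchld_user_remove are
    an analogy.  An error-checking mutex is used so the self-deadlock shows
    up as EDEADLK instead of hanging.

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_mutex_t m;
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* A default (non-recursive) mutex would simply hang on relock;
         * ERRORCHECK makes the bug observable. */
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(&m, &attr);

        assert(pthread_mutex_lock(&m) == 0); /* outer caller, like atfork_lock */
        int r = pthread_mutex_lock(&m);      /* nested caller, like sigchld_user_remove */
        assert(r == EDEADLK);                /* self-deadlock detected */
        printf("nested relock detected as EDEADLK\n");

        pthread_mutex_unlock(&m);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }
    ```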

commit 9b51591c330672765d866a5ed4ed8e40c75bb1cf
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:33:00 2014 +0100

    iommu: don't need to map dom0 page when the PT is shared
    
    Currently iommu_init_dom0 walks the page list and calls the map_page
    callback on each page.
    
    On both the AMD and VT-d drivers, the function returns immediately if
    the page table is shared with the processor.  So Xen can safely avoid
    running through the page list.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 2608662379d50e69b3bba4e6827fc910db9f64f8
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:32:00 2014 +0100

    vtd: don't export iommu_set_pgd
    
    iommu_set_pgd is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit b0566814a400f436056faac286d2804edb5791c0
Merge: 24e0a36... 21eaf5b...
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:31:28 2014 +0100

    Merge branch 'staging' of xenbits.xen.org:/home/xen/git/xen into staging

commit 24e0a36d63a6bac1e2777b7265c8add3f7895e58
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:21:54 2014 +0100

    vtd: don't export iommu_domain_teardown
    
    iommu_domain_teardown is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 21eaf5b06a41c787e1441523f7b22e57565bb292
Author: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date:   Sat Feb 22 11:35:54 2014 +0100

    libxl: comments cleanup on libxl_dm.c
    
    Removed some unhelpful comment lines.
    
    Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 9625f4ef05031773a52ebe3ca84db64af68956d6
Author: Xudong Hao <xudong.hao@intel.com>
Date:   Mon Feb 24 12:11:53 2014 +0100

    x86: expose RDSEED, ADX, and PREFETCHW to dom0
    
    This patch explicitly exposes new Intel features to dom0, including
    RDSEED and ADX.  As for PREFETCHW, it doesn't need explicit exposing.
    
    Signed-off-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 5d160d913e03b581bdddde73535c18ac670cf0a9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:11:01 2014 +0100

    x86/MSI: don't risk division by zero
    
    The check in question is redundant with the one in the immediately
    following if(), where dividing by zero gets carefully avoided.
    
    Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit fd1864f48d8914fb8eeb6841cd08c2c09b368909
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Mon Feb 24 12:09:52 2014 +0100

    Nested VMX: update nested paging mode on vmexit
    
    Since SVM and VMX use different mechanisms to emulate the virtual-vmentry
    and virtual-vmexit, it's hard to update the nested paging mode correctly
    in common code.  So we need to update the nested paging mode in their
    respective code paths.
    SVM already updates the nested paging mode on vmexit.  This patch adds
    the same logic on the VMX side.
    
    Previous discussion is here:
    http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Christoph Egger <chegger@amazon.de>

commit 199a0878195f3a271394d126c66e8030c461ebd3
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Date:   Mon Feb 24 12:09:14 2014 +0100

    vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
    
    The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
    thresholding registers.  But this statement:
    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
    wrongly masks off the top bits, which meant the register accesses
    never made it to the vmce_amd_* functions.
    
    Corrected this problem by modifying the mask in this patch to allow
    AMD thresholding registers to fall to 'default' case which in turn
    allows vmce_amd_* functions to handle access to the registers.
    
    While at it, remove some clutter in the vmce_amd* functions. Retained
    current policy of returning zero for reads and ignoring writes.
    
    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
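    The masking problem described in this commit message can be shown
    numerically.  The sketch below is an illustration under stated
    assumptions: MSR_IA32_MC0_CTL (0x400) is the architectural value, the
    thresholding MSR address 0xc0000408 follows Xen's MSR_F10_MC4_MISC1
    definition, and the "corrected" mask is my reading of the fix, not a
    quote of the patch.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MSR_IA32_MC0_CTL  0x00000400u
    #define MSR_F10_MC4_MISC1 0xc0000408u  /* an AMD thresholding MSR */

    int main(void)
    {
        /* Old mask: (MSR_IA32_MC0_CTL | 3) == 0x403 discards the high bits,
         * so the thresholding MSR aliases into the MC0 bank case and never
         * reaches the switch's 'default' arm. */
        uint32_t old_mask = MSR_IA32_MC0_CTL | 3;
        assert((MSR_F10_MC4_MISC1 & old_mask) == MSR_IA32_MC0_CTL);

        /* A mask that keeps the high bits (assumed form of the fix) leaves
         * the thresholding MSR distinct from every MC<n> case, so it falls
         * through to 'default' and on to the vmce_amd_* handlers. */
        uint32_t new_mask = -MSR_IA32_MC0_CTL | 3;
        assert((MSR_F10_MC4_MISC1 & new_mask) != MSR_IA32_MC0_CTL);

        printf("thresholding MSR no longer aliases into the bank cases\n");
        return 0;
    }
    ```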

commit 60ea3a3ac3d2bcd8e85b250fdbfc46b3b9dc7085
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Mon Feb 24 12:07:41 2014 +0100

    x86/MCE: Fix race condition in mctelem_reserve
    
    These lines (in mctelem_reserve)
    
            newhead = oldhead->mcte_next;
            if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
    
    are racy.  After you read the newhead pointer, another flow (thread or
    recursive invocation) can change the whole list yet leave the head with
    the same value.  So oldhead is still equal to *freelp, but you are
    installing a new head that may point to any element (even one already
    in use).
    
    This patch uses a bit array and atomic bit operations instead.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
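    The interleaving described above is the classic ABA problem.  The
    single-threaded sketch below replays the steps in order (names like
    `elem` are illustrative, not Xen's mctelem structures): the head reads
    back unchanged, so the simulated cmpxchg "succeeds" and installs a
    stale next pointer.

    ```c
    #include <assert.h>
    #include <stddef.h>
    #include <stdio.h>

    struct elem { struct elem *next; };

    int main(void)
    {
        struct elem a, b, c;
        struct elem *head;

        /* Initial free list: A -> B -> C */
        a.next = &b; b.next = &c; c.next = NULL;
        head = &a;

        /* Reserver reads oldhead = A and newhead = A->next = B ... */
        struct elem *oldhead = head, *newhead = oldhead->next;

        /* ... then another flow pops A and B, keeps B in use, and pushes
         * A back.  The list is now A -> C, but head again equals A. */
        head = &c;    /* pop A, then pop B */
        a.next = &c;  /* push A back on top of C */
        head = &a;

        /* cmpxchg(freelp, oldhead, newhead) compares only the head pointer,
         * so it would succeed here and install B -- an in-use element. */
        assert(head == oldhead);  /* comparison passes despite the churn */
        head = newhead;           /* simulated "successful" cmpxchg */
        assert(head == &b);       /* stale head: B is no longer free */
        printf("ABA: stale element installed as list head\n");
        return 0;
    }
    ```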
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 07:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 07:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WICpn-0005D0-UQ; Tue, 25 Feb 2014 07:53:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WICpm-0005Cq-R3
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 07:53:46 +0000
Received: from [85.158.139.211:14252] by server-15.bemta-5.messagelabs.com id
	8E/B1-24395-90C4C035; Tue, 25 Feb 2014 07:53:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393314825!6015701!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23639 invoked from network); 25 Feb 2014 07:53:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 07:53:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 07:53:44 +0000
Message-Id: <530C5A11020000780011F055@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 07:53:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1386283126-2045-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52A19BC8020000780010ABFC@nat28.tlf.novell.com>
	<52A1F2B9.2070504@amd.com>
	<52A1F4AC.6020506@eu.citrix.com> <52A23410.6070906@amd.com>
	<52A58BCF020000780010B3F4@nat28.tlf.novell
	<52AF0E35.7070000@citrix.com> <530B6F66.8070201@amd.com>
	<530B80C5020000780011EEBE@nat28.tlf.novell.com>
	<530B8A99.8030909@amd.com>
In-Reply-To: <530B8A99.8030909@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>,
	Sherry Hurwitz <sherry.hurwitz@amd.com>,
	"shurd@broadcom.com" <shurd@broadcom.com>
Subject: Re: [Xen-devel] [PATCH V8] ns16550: Add support for UART present in
 Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 19:08, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> On 2/24/2014 10:26 AM, Jan Beulich wrote:
>>>>   
>>>> It's been well over two months since you posted this, and it's still not
>>>> having Keir's ack. The best way therefore is for you to re-post, with
>>>> Keir properly Cc-ed, and with Andrew's Tested-by added.
> 
> Ok, will do..
> 
> Shall I retain the version number or start from scratch?

If there's no change, retain the version number and add "RESEND".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 07:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 07:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WICpp-0005D7-AU; Tue, 25 Feb 2014 07:53:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WICpn-0005Cr-CI
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 07:53:47 +0000
Received: from [85.158.137.68:58257] by server-6.bemta-3.messagelabs.com id
	8B/E1-09180-A0C4C035; Tue, 25 Feb 2014 07:53:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393314825!2732666!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17332 invoked from network); 25 Feb 2014 07:53:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 07:53:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 07:53:44 +0000
Message-Id: <530C59E2020000780011F052@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 07:52:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
	<530B7B42020000780011EE5E@nat28.tlf.novell.com>
	<530B711D.2080408@citrix.com> <530B83E3.1020603@citrix.com>
In-Reply-To: <530B83E3.1020603@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
 functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 18:39, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 24/02/14 16:19, Andrew Cooper wrote:
>> On 24/02/14 16:02, Jan Beulich wrote:
>>>>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> This patch shows a somewhat undesirable inconsistency (having been
>>> present in I think less obvious ways in earlier patches too):
>>>
>>>> --- a/xen/arch/arm/shutdown.c
>>>> +++ b/xen/arch/arm/shutdown.c
>>>> @@ -11,7 +11,7 @@ static void raw_machine_reset(void)
>>>>      platform_reset();
>>>>  }
>>>>  
>>>> -static void halt_this_cpu(void *arg)
>>>> +static void noreturn halt_this_cpu(void *arg)
>>> For function definitions you place the attribute where I personally
>>> would expect it to be (iirc it can't go between the closing paren
>>> after the parameter declarations and the opening brace of the
>>> function body), yet ...
>> Hmm - I thought I had fixed all of these - I shall audit and respin.  I
>> certainly did intend to be consistent.
>>
>> ~Andrew
> 
> And now I remember why it is strictly this way around.
> 
> It is a compile error to have the noreturn after the arguments on a
> static function.
> 
> shutdown.c:15:1: error: expected ‘,’ or ‘;’ before ‘{’ token
> {
> ^
> shutdown.c:14:13: error: ‘halt_this_cpu’ used but never defined [-Werror]
> static void halt_this_cpu(void *arg) noreturn
> ^
> 
> but fine to have the attributes between the return type and name.

I.e. precisely like I said I remember things to be.

> I could standardise on the other way around, to be the same as __init &
> friends ?

That's exactly what I was asking for.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 07:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 07:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WICpn-0005D0-UQ; Tue, 25 Feb 2014 07:53:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WICpm-0005Cq-R3
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 07:53:46 +0000
Received: from [85.158.139.211:14252] by server-15.bemta-5.messagelabs.com id
	8E/B1-24395-90C4C035; Tue, 25 Feb 2014 07:53:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393314825!6015701!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23639 invoked from network); 25 Feb 2014 07:53:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 07:53:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 07:53:44 +0000
Message-Id: <530C5A11020000780011F055@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 07:53:37 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Aravind Gopalakrishnan" <aravind.gopalakrishnan@amd.com>
References: <1386283126-2045-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
	<52A19BC8020000780010ABFC@nat28.tlf.novell.com>
	<52A1F2B9.2070504@amd.com>
	<52A1F4AC.6020506@eu.citrix.com> <52A23410.6070906@amd.com>
	<52A58BCF020000780010B3F4@nat28.tlf.novell
	<52AF0E35.7070000@citrix.com> <530B6F66.8070201@amd.com>
	<530B80C5020000780011EEBE@nat28.tlf.novell.com>
	<530B8A99.8030909@amd.com>
In-Reply-To: <530B8A99.8030909@amd.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>,
	George Dunlap <george.dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>,
	Sherry Hurwitz <sherry.hurwitz@amd.com>,
	"shurd@broadcom.com" <shurd@broadcom.com>
Subject: Re: [Xen-devel] [PATCH V8] ns16550: Add support for UART present in
 Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 19:08, Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com> wrote:
> On 2/24/2014 10:26 AM, Jan Beulich wrote:
>>>>   
>>>> It's been well over two months since you posted this, and it's still not
>>>> having Keir's ack. The best way therefore is for you to re-post, with
>>>> Keir properly Cc-ed, and with Andrew's Tested-by added.
> 
> Ok, will do..
> 
> Shall I retain the version number or start from scratch?

If there's no change, retain the version number and add "RESEND".

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 07:54:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 07:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WICpp-0005D7-AU; Tue, 25 Feb 2014 07:53:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WICpn-0005Cr-CI
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 07:53:47 +0000
Received: from [85.158.137.68:58257] by server-6.bemta-3.messagelabs.com id
	8B/E1-09180-A0C4C035; Tue, 25 Feb 2014 07:53:46 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393314825!2732666!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17332 invoked from network); 25 Feb 2014 07:53:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 07:53:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 07:53:44 +0000
Message-Id: <530C59E2020000780011F052@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 07:52:50 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-3-git-send-email-andrew.cooper3@citrix.com>
	<530B7B42020000780011EE5E@nat28.tlf.novell.com>
	<530B711D.2080408@citrix.com> <530B83E3.1020603@citrix.com>
In-Reply-To: <530B83E3.1020603@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 3/5] xen: Identify panic and reboot/halt
 functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 18:39, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 24/02/14 16:19, Andrew Cooper wrote:
>> On 24/02/14 16:02, Jan Beulich wrote:
>>>>>> On 24.02.14 at 16:01, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> This patch shows a somewhat undesirable inconsistency (having been
>>> present in I think less obvious ways in earlier patches too):
>>>
>>>> --- a/xen/arch/arm/shutdown.c
>>>> +++ b/xen/arch/arm/shutdown.c
>>>> @@ -11,7 +11,7 @@ static void raw_machine_reset(void)
>>>>      platform_reset();
>>>>  }
>>>>  
>>>> -static void halt_this_cpu(void *arg)
>>>> +static void noreturn halt_this_cpu(void *arg)
>>> For function definitions you place the attribute where I personally
>>> would expect it to be (iirc it can't go between the closing paren
>>> after the parameter declarations and the opening brace of the
>>> function body), yet ...
>> Hmm - I thought I had fixed all of these - I shall audit and respin.  I
>> certainly did intend to be consistent.
>>
>> ~Andrew
> 
> And now I remember why it is strictly this way around.
> 
> It is a compile error to have the noreturn after the arguments on a
> static function.
> 
> shutdown.c:15:1: error: expected ‘,’ or ‘;’ before ‘{’ token
> {
> ^
> shutdown.c:14:13: error: ‘halt_this_cpu’ used but never defined [-Werror]
> static void halt_this_cpu(void *arg) noreturn
> ^
> 
> but fine to have the attributes between the return type and name.

I.e. precisely like I said I remember things to be.

> I could standardise on the other way around, to be the same as __init &
> friends ?

That's exactly what I was asking for.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
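
[Editorial note: the placement rule discussed in the message above can be sketched in plain GCC C. This is an illustrative sketch, not Xen code: `halt_demo` and `value_after` are hypothetical names, and GCC's `__attribute__((noreturn))` is spelled out rather than Xen's `noreturn` macro.]

```c
#include <stdlib.h>

/* The attribute between the return type and the name: GCC accepts this
 * placement on both declarations and definitions, which is why the
 * patch standardises on it. */
static void __attribute__((noreturn)) halt_demo(void)
{
    exit(7); /* never returns */
}

/* By contrast, putting the attribute after the parameter list of a
 * *definition*, i.e.
 *     static void halt_demo2(void) __attribute__((noreturn)) { ... }
 * triggers the syntax error quoted in the thread ("expected ',' or ';'
 * before '{' token"); that position is only legal on a pure
 * declaration. */

int value_after(void)
{
    (void)halt_demo; /* reference the function to silence -Wunused in this sketch */
    return 42;
}
```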

From xen-devel-bounces@lists.xen.org Tue Feb 25 08:08:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 08:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WID3W-0005y4-Bs; Tue, 25 Feb 2014 08:07:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WID3T-0005xz-QG
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 08:07:56 +0000
Received: from [85.158.137.68:51393] by server-17.bemta-3.messagelabs.com id
	85/95-22569-A5F4C035; Tue, 25 Feb 2014 08:07:54 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393315674!608724!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32069 invoked from network); 25 Feb 2014 08:07:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 08:07:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 08:07:53 +0000
Message-Id: <530C5C60020000780011F076@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 08:03:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "George Dunlap" <dunlapg@umich.edu>
References: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
	<CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
In-Reply-To: <CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to
 subject domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 18:24, George Dunlap <dunlapg@umich.edu> wrote:
> On Mon, Feb 24, 2014 at 1:31 PM, Jan Beulich <JBeulich@suse.com> wrote:
>> Not having got any satisfactory suggestions on the inquiry on how to
>> determine the amount a PoD guest needs to balloon down by (see
>> http://lists.xenproject.org/archives/html/xen-devel/2014-01/msg01524.html 
>> and the thread following it), expose XENMEM_get_pod_target such that
>> the guest can use it for this purpose.
> 
> So in theory the bug you're seeing has nothing to do with PoD -- it
> just has to do with a different interpretation that the balloon driver
> and Xen may have as to what "target" means.  Is that right?  The only
> difference is that in the PoD case, not ballooning down enough can be
> deadly to the domain; whereas in the non-PoD case, the worst that can
> happen is that the toolstack has less memory left over on the host
> than it may have expected.

To me this is very much a dependency on whether PoD is in use,
precisely because of the deadliness of ballooning out too little in
that case.

> I don't like the idea of exposing specific PoD information to the
> guest -- PoD should be completely transparent to the guest.  If we
> make it PoD-specific, we may end up with a different sized domain
> depending on whether we booted with PoD mode or not.
> 
> Is the real problem that there's no way to determine the number of
> potentially non-empty pfn ranges?  If we exposed the number of
> non-empty p2m ranges (either ram or PoD), then the guest could compare
> that to the target and balloon down as necessary, no?

The only thing the guest really cares about from this hypercall is
the result of

.tot_pages + .pod_entries - .pod_cache_pages

so exposing anything to the guest that allows it to calculate this
value would be sufficient. It just seems odd to me to invent
something new if we already have what we need, as long as
exposing that information as individual pieces (instead of the
accumulated result) is not a security risk.
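
[Editorial note: a guest-side sketch of the computation Jan describes. The struct here merely mirrors the three fields named in the message; it is illustrative, not the actual XENMEM_get_pod_target interface.]

```c
#include <stdint.h>

/* Illustrative mirror of the three fields the guest needs from the
 * hypercall; not the real xen_pod_target structure. */
struct pod_info {
    uint64_t tot_pages;       /* pages currently allocated to the domain */
    uint64_t pod_entries;     /* outstanding populate-on-demand entries */
    uint64_t pod_cache_pages; /* pages held in the PoD cache */
};

/* The value the guest cares about, per the discussion:
 * tot_pages + pod_entries - pod_cache_pages.  Each PoD entry not
 * covered by the cache must be given up by the balloon driver. */
static uint64_t pod_effective_target(const struct pod_info *p)
{
    return p->tot_pages + p->pod_entries - p->pod_cache_pages;
}
```

Any exposure that lets the guest compute this single value, whether as the accumulated result or as the individual pieces, would satisfy the need described above.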

Anyway, I also consider it odd to complain about this now when
the referenced discussion has happened weeks ago, with no
useful result.

>> Also leverage some cleanup potential resulting from this change.
> 
> Cleanup should generally be done in separate patches, so that one
> change can be reviewed at a time.

Honestly I think that's a matter of taste - I personally dislike leaving
unclean code in place when the cleanup isn't harmful to the actual
change's understandability. Of course larger cleanup actions should
go by themselves.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 08:12:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 08:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WID7P-00065p-3v; Tue, 25 Feb 2014 08:11:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WID7N-00065k-Ip
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 08:11:57 +0000
Received: from [193.109.254.147:42645] by server-7.bemta-14.messagelabs.com id
	02/E0-23424-C405C035; Tue, 25 Feb 2014 08:11:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393315914!2603360!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23865 invoked from network); 25 Feb 2014 08:11:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 08:11:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,538,1389744000"; d="scan'208";a="103802160"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 08:11:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 03:11:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WID7J-0003Lr-56;
	Tue, 25 Feb 2014 08:11:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WID7J-0007VC-2t;
	Tue, 25 Feb 2014 08:11:53 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25291-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 08:11:53 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25291: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25291 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25291/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25266
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 08:12:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 08:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WID7P-00065p-3v; Tue, 25 Feb 2014 08:11:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WID7N-00065k-Ip
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 08:11:57 +0000
Received: from [193.109.254.147:42645] by server-7.bemta-14.messagelabs.com id
	02/E0-23424-C405C035; Tue, 25 Feb 2014 08:11:56 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393315914!2603360!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23865 invoked from network); 25 Feb 2014 08:11:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 08:11:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,538,1389744000"; d="scan'208";a="103802160"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 08:11:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 03:11:53 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WID7J-0003Lr-56;
	Tue, 25 Feb 2014 08:11:53 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WID7J-0007VC-2t;
	Tue, 25 Feb 2014 08:11:53 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25291-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 08:11:53 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25291: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25291 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25291/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25266
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 08:12:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 08:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WID85-000696-Sn; Tue, 25 Feb 2014 08:12:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hannes@stressinduktion.org>) id 1WI7ft-0003LQ-Fb
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 02:23:13 +0000
Received: from [85.158.137.68:38134] by server-4.bemta-3.messagelabs.com id
	24/81-04858-09EFB035; Tue, 25 Feb 2014 02:23:12 +0000
X-Env-Sender: hannes@stressinduktion.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393294991!2695220!1
X-Originating-IP: [87.106.68.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17723 invoked from network); 25 Feb 2014 02:23:11 -0000
Received: from order.stressinduktion.org (HELO order.stressinduktion.org)
	(87.106.68.36)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 02:23:11 -0000
Received: by order.stressinduktion.org (Postfix, from userid 500)
	id EB6571A0C2D8; Tue, 25 Feb 2014 03:23:10 +0100 (CET)
Date: Tue, 25 Feb 2014 03:23:10 +0100
From: Hannes Frederic Sowa <hannes@stressinduktion.org>
To: Ben Hutchings <ben@decadent.org.uk>
Message-ID: <20140225022310.GH6598@order.stressinduktion.org>
Mail-Followup-To: Ben Hutchings <ben@decadent.org.uk>,
	David Miller <davem@davemloft.net>, dcbw@redhat.com,
	mcgrof@do-not-panic.com, zoltan.kiss@citrix.com,
	netdev@vger.kernel.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	kuznet@ms2.inr.ac.ru, jmorris@namei.org, yoshfuji@linux-ipv6.org,
	kaber@trash.net
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
	<20140224.191238.921310808350170272.davem@davemloft.net>
	<1393293719.6823.148.camel@deadeye.wl.decadent.org.uk>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393293719.6823.148.camel@deadeye.wl.decadent.org.uk>
X-Mailman-Approved-At: Tue, 25 Feb 2014 08:12:40 +0000
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru, kaber@trash.net,
	xen-devel@lists.xenproject.org, David Miller <davem@davemloft.net>
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 02:01:59AM +0000, Ben Hutchings wrote:
> On Mon, 2014-02-24 at 19:12 -0500, David Miller wrote:
> > From: Ben Hutchings <ben@decadent.org.uk>
> > Date: Tue, 25 Feb 2014 00:02:00 +0000
> > 
> > > You can run an internal network, or access network, as v6-only with
> > > NAT64 and DNS64 at the border.  I believe some mobile networks are doing
> > > this; it was also done on the main FOSDEM wireless network this year.
> > 
> > This seems to be bloating up the networking headers of the internal
> > network, for what purpose?
> > 
> > For mobile that's doubly inadvisable.
> 
> I don't know what the reasoning is for the mobile network operators.
> They're forced to do NAT for v4 somewhere, and maybe v6-only makes the
> access network easier to manage.

Yes, it seems the way to go:
<http://www.dslreports.com/shownews/TMobile-Goes-IPv6-Only-on-Android-44-Devices-126506>

I can't comment on 464xlat much because I haven't looked at an
implementation yet, but it may well be that it still needs IPv4
on the outgoing interface; I don't know (from the spec's point of view
it doesn't look like it).

Greetings,

  Hannes


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 08:12:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 08:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WID85-00068x-HR; Tue, 25 Feb 2014 08:12:41 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xiezhenjiang@foxmail.com>) id 1WI6wY-0002TS-Mg
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 01:36:22 +0000
Received: from [85.158.143.35:49443] by server-2.bemta-4.messagelabs.com id
	62/8F-10891-693FB035; Tue, 25 Feb 2014 01:36:22 +0000
X-Env-Sender: xiezhenjiang@foxmail.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393292170!8007477!1
X-Originating-IP: [54.206.34.216]
X-SpamReason: No, hits=1.0 required=7.0 tests=FROM_EXCESS_BASE64,
	MIME_BASE64_TEXT,ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_8,
	spamassassin: ,received_headers: No Received headers,async_handler: 
	YXN5bmNfZGVsYXk6IDcwNDgwODkgKHRpbWVvdXQp\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11579 invoked from network); 25 Feb 2014 01:36:12 -0000
Received: from smtpbgau2.qq.com (HELO smtpbgau2.qq.com) (54.206.34.216)
	by server-13.tower-21.messagelabs.com with SMTP;
	25 Feb 2014 01:36:12 -0000
X-QQ-SSF: 00000000000000F000000000000000S
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 115.156.145.101
In-Reply-To: <1393261600.16485.74.camel@Solace>
References: <tencent_3EEDE73632631C143BD5D9FF@qq.com>
	<1393004577.32038.843.camel@Solace>
	<tencent_1F08A5736F1E5981265AE0C8@qq.com>
	<1393261600.16485.74.camel@Solace>
X-QQ-STYLE: 
X-QQ-mid: webmail642t1393292161t6024214
From: "=?ISO-8859-1?B?Q2hhcmxlcw==?=" <xiezhenjiang@foxmail.com>
To: "=?ISO-8859-1?B?eGVuLWRldmVs?=" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Date: Tue, 25 Feb 2014 09:36:01 +0800
X-Priority: 3
Message-ID: <tencent_6625940576F5CF986B0CCBD9@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-QQ-ReplyHash: 2056981814
X-QQ-SENDSIZE: 520
X-QQ-Bgrelay: 1
X-Mailman-Approved-At: Tue, 25 Feb 2014 08:12:40 +0000
Subject: Re: [Xen-devel] confusions on monitoring VM cpu usage
	inXenhypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

1. I want CPU usage information like this:

Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

2. From the VM's perspective, I want to work out the sum of busy_time and idle_time.
My understanding is that when a vCPU is scheduled on a pCPU, its (busy_time + idle_time),
seen from the vCPU's perspective, equals the percentage of CPU it gets from the hypervisor
(because, while scheduled on a pCPU, the situation is like a physical OS running, with both
busy_time and idle_time). So how can I know the busy_time of a vCPU from the VM's
perspective, given that you have noted a vCPU scheduled on a pCPU keeps consuming CPU cycles?

[please, don't drop the list. Re-added]

On lun, 2014-02-24 at 10:30 +0800, Charles wrote:
> Thanks for your reply. 
> 1.The VM usage I mean here is similar to the result of command top in the VM. 
>
Ok, but still, when running top inside the VM, which part are you
interested in? How busy the various vCPUs are, or what (as in which
process/OS component) is actually keeping them busy?

That matters because how busy they are is something that you can, to
some extent, see in Xen, as it is at least bound to how much the various
vCPUs want to run on the host's pCPUs, and that's the hypervisor
scheduler's job!

If you want to know what process they're running, when it started,
what its "priority" is (inside the VM), etc., then this is something I
don't think you can easily make Xen aware of, unless you introduce some
kind of scheduler paravirtualization.

> 2."use the usage information in Credit" means I want to use the usage information to guess the workload type running in the VM
> 
Still too little info (see above). xentop already tells you whether a
particular vCPU is getting, say, 5% of pCPU time. Having something like
that inside Xen should not be that hard.

What xentop does not tell you is whether a particular vCPU, despite
getting 5%, is asking for more and only getting that much for whatever
reason. That has to do with the estimation of system load that I
mentioned, which is embedded in credit2 right now but can be
generalized.

So, which one, if any, of the above are you after?

> To the best of my knowledge, in a physical OS the CPU may be idle or busy, so the OS CPU usage can be computed as (busy_time / (idle_time + busy_time)).
> If the vCPU is running on the pCPU and consuming pCPU cycles all the time, does that mean idle_time = 0? Then how do I compute the CPU usage?
> 
Err... from Xen's perspective, if a vCPU is always running, then
idle_time=0, then (busy_time/(idle_time+busy_time))=1, which matches
with the concept of being "always running".

I'm sure I'm missing something of what you meant here...

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 09:26:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 09:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIEHD-0006gK-Pg; Tue, 25 Feb 2014 09:26:11 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIEHC-0006gF-0Y
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 09:26:10 +0000
Received: from [85.158.137.68:60059] by server-11.bemta-3.messagelabs.com id
	51/AA-04255-0B16C035; Tue, 25 Feb 2014 09:26:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393320364!2758800!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32292 invoked from network); 25 Feb 2014 09:26:04 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 09:26:04 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 09:26:04 +0000
Message-Id: <530C6FB9020000780011F0D6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 09:26:01 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 1/3] pvh: early return from
 hvm_hap_nested_page_fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 02:03, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> pvh does not support nested hvm at present. As such, return if pvh.
> 
> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  xen/arch/x86/hvm/hvm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..a4a3dcf 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>      int sharing_enomem = 0;
>      mem_event_request_t *req_ptr = NULL;
>  
> +    if ( is_pvh_vcpu(v) )
> +        return 0;
> +

Afaict the "nested" in the function name means "nested paging",
not "nested virtualization", i.e. the function here handles more
than just nested HVM cases. With that, the change appears to be
wrong. What's the motivation for putting such a check here
anyway?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 09:28:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 09:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIEJS-0006lh-Ap; Tue, 25 Feb 2014 09:28:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIEJR-0006lP-57
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 09:28:29 +0000
Received: from [85.158.137.68:28908] by server-9.bemta-3.messagelabs.com id
	C8/14-10184-C326C035; Tue, 25 Feb 2014 09:28:28 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393320507!2477185!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22936 invoked from network); 25 Feb 2014 09:28:27 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 09:28:27 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 09:28:27 +0000
Message-Id: <530C7048020000780011F0D9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 09:28:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
In-Reply-To: <1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 2/3] pvh: fix pirq path for pvh
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 02:03, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> Just like hvm, pirq eoi shared page is not there for pvh. pvh should
> not touch any pv_domain fields.

While the latter is true, wasn't it that, IRQ-handling-wise, PVH uses
PV mechanisms? In which case the EOI map page would be of
interest, and rather than guarding the accesses you ought to move
the field out of pv_domain.

Jan
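
For reference, the guarded-access pattern under discussion can be sketched in plain C (simplified, hypothetical types; the real code uses Xen's set_bit()/clear_bit() on d->arch.pv_domain.pirq_eoi_map):

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

/* Simplified model of the patch's guards: the EOI map only exists for
 * guests using the PV interrupt path, so every access checks both the
 * guest type and the map's presence. Types here are hypothetical. */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

struct domain {
    bool is_pv;
    unsigned long *pirq_eoi_map;   /* NULL until the guest registers one */
};

static void set_pirq_eoi(struct domain *d, unsigned int irq)
{
    if ( d->is_pv && d->pirq_eoi_map )
        d->pirq_eoi_map[irq / BITS_PER_LONG] |= 1UL << (irq % BITS_PER_LONG);
}

static void clear_pirq_eoi(struct domain *d, unsigned int irq)
{
    if ( d->is_pv && d->pirq_eoi_map )
        d->pirq_eoi_map[irq / BITS_PER_LONG] &= ~(1UL << (irq % BITS_PER_LONG));
}
```

Jan's point is that if PVH keeps the PV interrupt path, the right fix may be relaxing the `is_pv` half of the guard (by relocating the field) rather than tightening it.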

> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  xen/arch/x86/irq.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index db70077..88444be 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -1068,13 +1068,13 @@ bool_t cpu_has_pending_apic_eoi(void)
>  
>  static inline void set_pirq_eoi(struct domain *d, unsigned int irq)
>  {
> -    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
> +    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
>          set_bit(irq, d->arch.pv_domain.pirq_eoi_map);
>  }
>  
>  static inline void clear_pirq_eoi(struct domain *d, unsigned int irq)
>  {
> -    if ( !is_hvm_domain(d) && d->arch.pv_domain.pirq_eoi_map )
> +    if ( is_pv_domain(d) && d->arch.pv_domain.pirq_eoi_map )
>          clear_bit(irq, d->arch.pv_domain.pirq_eoi_map);
>  }
>  
> -- 
> 1.8.3.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 10:00:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIEoQ-00075L-CO; Tue, 25 Feb 2014 10:00:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIEoO-00075G-DR
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:00:28 +0000
Received: from [85.158.143.35:18747] by server-2.bemta-4.messagelabs.com id
	1B/CE-10891-AB96C035; Tue, 25 Feb 2014 10:00:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393322425!8091530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15596 invoked from network); 25 Feb 2014 10:00:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:00:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:00:25 +0000
Message-Id: <530C77C8020000780011F114@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:00:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartF6C4A4A8.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

... in a simplified and consistent way.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/docs/misc/printk-formats.txt
+++ b/docs/misc/printk-formats.txt
@@ -15,3 +15,6 @@ Symbol/Function pointers:
 
       In the case that an appropriate symbol name can't be found, %p[sS] will
       fall back to '%p' and print the address in hex.
+
+       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
+               "d<domid>v<vcpuid>")
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
     if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
-                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
+                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
                 has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
-                v->domain->domain_id, v->vcpu_id,
-                guest_mcg_cap & ~MCG_CAP_COUNT);
+                v, guest_mcg_cap & ~MCG_CAP_COUNT);
         return -EPERM;
     }
 
@@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
               guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
              !test_and_set_bool(v->mce_pending) )
         {
-            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
-                       d->domain_id, v->vcpu_id);
+            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
             vcpu_kick(v);
             ret = 0;
         }
         else
         {
-            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
-                       d->domain_id, v->vcpu_id);
+            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
             ret = -EBUSY;
             break;
         }
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
         if ( !warned )
         {
             warned = 1;
-            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
+            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
                    "(If you see this outside of debugging activity,"
                    " please report to xen-devel@lists.xenproject.org)\n",
-                   v->domain->domain_id, v->vcpu_id);
+                   v);
         }
         memset(reg, 0, sizeof(*reg));
         return;
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct 
         if ( !cpu_has(c, X86_FEATURE_DTES64) )
         {
             printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
             goto func_out;
         }
         vpmu_set(vpmu, VPMU_CPU_HAS_DS);
@@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct 
             /* If BTS_UNAVAIL is set reset the DS feature. */
             vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
             printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
         }
         else
         {
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu 
     mfn_t *oos;
     struct domain *d = v->domain;
 
-    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
-                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn));
+    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
 
     for_each_vcpu(d, v)
     {
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
 static void reserved_bit_page_fault(
     unsigned long addr, struct cpu_user_regs *regs)
 {
-    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
-           current->domain->domain_id, current->vcpu_id, regs->error_code);
+    printk("%pv: reserved bit in page table (ec=%04X)\n",
+           current, regs->error_code);
     show_page_walk(addr);
     show_execution_state(regs);
 }
@@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
         tb->flags |= TBF_INTERRUPT;
     if ( unlikely(null_trap_bounce(v, tb)) )
     {
-        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
-               v->domain->domain_id, v->vcpu_id, error_code);
+        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
         show_page_walk(addr);
     }
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
 
 vcpu_info_t dummy_vcpu_info;
 
-int current_domain_id(void)
-{
-    return current->domain->domain_id;
-}
-
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
 
     if ( !is_idle_vcpu(current) )
     {
-        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
-               smp_processor_id(), current->domain->domain_id,
-               current->vcpu_id);
+        printk("*** Dumping CPU%u guest state (%pv): ***\n",
+               smp_processor_id(), current);
         show_execution_state(guest_cpu_user_regs());
         printk("\n");
     }
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
     struct list_head *iter;
     int pos = 0;
 
-    d2printk("rqi d%dv%d\n",
-           svc->vcpu->domain->domain_id,
-           svc->vcpu->vcpu_id);
+    d2printk("rqi %pv\n", svc->vcpu);
 
     BUG_ON(&svc->rqd->runq != runq);
     /* Idle vcpus not allowed on the runqueue anymore */
@@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
 
         if ( svc->credit > iter_svc->credit )
         {
-            d2printk(" p%d d%dv%d\n",
-                   pos,
-                   iter_svc->vcpu->domain->domain_id,
-                   iter_svc->vcpu->vcpu_id);
+            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
             break;
         }
         pos++;
@@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
     cpumask_t mask;
     struct csched_vcpu * cur;
 
-    d2printk("rqt d%dv%d cd%dv%d\n",
-             new->vcpu->domain->domain_id,
-             new->vcpu->vcpu_id,
-             current->domain->domain_id,
-             current->vcpu_id);
+    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
 
     BUG_ON(new->vcpu->processor != cpu);
     BUG_ON(new->rqd != rqd);
@@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
         t2c_update(rqd, delta, svc);
         svc->start_time = now;
 
-        d2printk("b d%dv%d c%d\n",
-                 svc->vcpu->domain->domain_id,
-                 svc->vcpu->vcpu_id,
-                 svc->credit);
+        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
     } else {
         d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
                __func__, now, svc->start_time);
@@ -871,11 +859,9 @@ static void
 csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu *svc = vc->sched_priv;
-    struct domain * const dom = vc->domain;
     struct csched_dom * const sdom = svc->sdom;
 
-    printk("%s: Inserting d%dv%d\n",
-           __func__, dom->domain_id, vc->vcpu_id);
+    printk("%s: Inserting %pv\n", __func__, vc);
 
     /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
      * been called for that cpu.
@@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler 
 
     /* Schedule lock should be held at this point. */
 
-    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
+    d2printk("w %pv\n", vc);
 
     BUG_ON( is_idle_vcpu(vc) );
 
@@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops, 
     {
         if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
         {
-            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
+            d2printk("%pv -\n", svc->vcpu);
             clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
         }
         /* Leave it where it is for now.  When we actually pay attention
@@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops, 
         }
         else
         {
-            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
+            d2printk("%pv +\n", svc->vcpu);
             new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
             goto out_up;
         }
@@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
 {
     if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
     {
-        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
-                 svc->rqd->id, trqd->id);
+        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
         /* It's running; mark it to migrate. */
         svc->migrate_rqd = trqd;
         set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
@@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
     {
         int on_runq=0;
         /* It's not running; just move it */
-        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
-                 svc->rqd->id, trqd->id);
+        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
         if ( __vcpu_on_runq(svc) )
         {
             __runq_remove(svc);
@@ -1662,11 +1646,7 @@ csched_schedule(
     SCHED_STAT_CRANK(schedule);
     CSCHED_VCPU_CHECK(current);
 
-    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
-             cpu,
-             scurr->vcpu->domain->domain_id,
-             scurr->vcpu->vcpu_id,
-             now);
+    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
 
     BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
 
@@ -1693,12 +1673,11 @@ csched_schedule(
                 }
             }
         }
-        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
+        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
               "pcpu %d rq %d!\n",
               __func__,
               cpu, this_rqi,
-               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
-               scurr->vcpu->processor, other_rqi);
+               scurr->vcpu, scurr->vcpu->processor, other_rqi);
     }
     BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
 
@@ -1755,12 +1734,8 @@ csched_schedule(
             __runq_remove(snext);
             if ( snext->vcpu->is_running )
             {
-                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
-                       cpu,
-                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
-                       snext->vcpu->processor,
-                       scurr->vcpu->domain->domain_id,
-                       scurr->vcpu->vcpu_id);
+                printk("p%d: snext %pv running on p%d! scurr %pv\n",
+                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
                 BUG();
             }
             set_bit(__CSFLAG_scheduled, &snext->flags);
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
 
         if ( v->affinity_broken )
         {
-            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
-                   d->domain_id, v->vcpu_id);
+            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
             cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
             v->affinity_broken = 0;
         }
@@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
             if ( cpumask_empty(&online_affinity) &&
                  cpumask_test_cpu(cpu, v->cpu_affinity) )
             {
-                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
-                        d->domain_id, v->vcpu_id);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
 
                 if (system_state == SYS_STATE_suspend)
                 {
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -19,6 +19,7 @@
 #include <xen/ctype.h>
 #include <xen/symbols.h>
 #include <xen/lib.h>
+#include <xen/sched.h>
 #include <asm/div64.h>
 #include <asm/page.h>
 
@@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
 
         return str;
     }
+
+    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
+    {
+        const struct vcpu *v = arg;
+
+        ++*fmt_ptr;
+        if ( str <= end )
+            *str = 'd';
+        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
+        if ( str <= end )
+            *str = 'v';
+        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
+    }
     }
 
     if ( field_width == -1 )
--- a/xen/include/xen/config.h
+++ b/xen/include/xen/config.h
@@ -74,12 +74,11 @@
 
 #ifndef __ASSEMBLY__
 
-int current_domain_id(void);
 #define dprintk(_l, _f, _a...)                              \
     printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
 #define gdprintk(_l, _f, _a...)                             \
-    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
-           __LINE__, current_domain_id() , ## _a )
+    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
+           __LINE__, current, ## _a )
 
 #endif /* !__ASSEMBLY__ */
 
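
The observable behaviour of the new specifier can be sketched outside the hypervisor with an ordinary snprintf wrapper (a hypothetical helper for illustration, not Xen's vsprintf internals): %pv consumes a struct vcpu pointer and emits "d<domid>v<vcpuid>".

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Minimal stand-ins for the Xen structures the patch reads. */
struct domain { int domain_id; };
struct vcpu  { int vcpu_id; struct domain *domain; };

/* Hypothetical helper mirroring what the %pv case produces. */
static int format_vcpu(char *buf, size_t size, const struct vcpu *v)
{
    return snprintf(buf, size, "d%dv%d", v->domain->domain_id, v->vcpu_id);
}
```

So a printk of "%pv" for vCPU 3 of domain 7 comes out as "d7v3", matching the documented "d<domid>v<vcpuid>" form.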



vc->domain->domain_id, vc->vcpu_id);=0A+    d2printk("w %pv\n", vc);=0A =
=0A     BUG_ON( is_idle_vcpu(vc) );=0A =0A@@ -1074,7 +1060,7 @@ choose_cpu(=
const struct scheduler *ops, =0A     {=0A         if ( test_and_clear_bit(_=
_CSFLAG_runq_migrate_request, &svc->flags) )=0A         {=0A-            =
d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);=
=0A+            d2printk("%pv -\n", svc->vcpu);=0A             clear_bit(__=
CSFLAG_runq_migrate_request, &svc->flags);=0A         }=0A         /* =
Leave it where it is for now.  When we actually pay attention=0A@@ -1094,7 =
+1080,7 @@ choose_cpu(const struct scheduler *ops, =0A         }=0A        =
 else=0A         {=0A-            d2printk("d%dv%d +\n", svc->vcpu->domain-=
>domain_id, svc->vcpu->vcpu_id);=0A+            d2printk("%pv +\n", =
svc->vcpu);=0A             new_cpu =3D cpumask_cycle(vc->processor, =
&svc->migrate_rqd->active);=0A             goto out_up;=0A         }=0A@@ =
-1203,8 +1189,7 @@ void migrate(const struct scheduler *ops=0A {=0A     if =
( test_bit(__CSFLAG_scheduled, &svc->flags) )=0A     {=0A-        =
d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_=
id,=0A-                 svc->rqd->id, trqd->id);=0A+        d2printk("%pv =
%d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);=0A         /* It's =
running; mark it to migrate. */=0A         svc->migrate_rqd =3D trqd;=0A   =
      set_bit(_VPF_migrating, &svc->vcpu->pause_flags);=0A@@ -1214,8 =
+1199,7 @@ void migrate(const struct scheduler *ops=0A     {=0A         =
int on_runq=3D0;=0A         /* It's not running; just move it */=0A-       =
 d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu=
_id,=0A-                 svc->rqd->id, trqd->id);=0A+        d2printk("%pv =
%d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);=0A         if ( __vcpu_on_r=
unq(svc) )=0A         {=0A             __runq_remove(svc);=0A@@ -1662,11 =
+1646,7 @@ csched_schedule(=0A     SCHED_STAT_CRANK(schedule);=0A     =
CSCHED_VCPU_CHECK(current);=0A =0A-    d2printk("sc p%d c d%dv%d now =
%"PRI_stime"\n",=0A-             cpu,=0A-             scurr->vcpu->domain->=
domain_id,=0A-             scurr->vcpu->vcpu_id,=0A-             now);=0A+ =
   d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);=0A =
=0A     BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));=0A =
=0A@@ -1693,12 +1673,11 @@ csched_schedule(=0A                 }=0A        =
     }=0A         }=0A-        printk("%s: pcpu %d rq %d, but scurr d%dv%d =
assigned to "=0A+        printk("%s: pcpu %d rq %d, but scurr %pv assigned =
to "=0A                "pcpu %d rq %d!\n",=0A                __func__,=0A  =
              cpu, this_rqi,=0A-               scurr->vcpu->domain->domain_=
id, scurr->vcpu->vcpu_id,=0A-               scurr->vcpu->processor, =
other_rqi);=0A+               scurr->vcpu, scurr->vcpu->processor, =
other_rqi);=0A     }=0A     BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd=
 !=3D rqd);=0A =0A@@ -1755,12 +1734,8 @@ csched_schedule(=0A             =
__runq_remove(snext);=0A             if ( snext->vcpu->is_running )=0A     =
        {=0A-                printk("p%d: snext d%dv%d running on p%d! =
scurr d%dv%d\n",=0A-                       cpu,=0A-                       =
snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,=0A-                  =
     snext->vcpu->processor,=0A-                       scurr->vcpu->domain-=
>domain_id,=0A-                       scurr->vcpu->vcpu_id);=0A+           =
     printk("p%d: snext %pv running on p%d! scurr %pv\n",=0A+              =
         cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);=0A        =
         BUG();=0A             }=0A             set_bit(__CSFLAG_scheduled,=
 &snext->flags);=0A--- a/xen/common/schedule.c=0A+++ b/xen/common/schedule.=
c=0A@@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain=0A =0A    =
     if ( v->affinity_broken )=0A         {=0A-            printk(XENLOG_DE=
BUG "Restoring affinity for d%dv%d\n",=0A-                   d->domain_id, =
v->vcpu_id);=0A+            printk(XENLOG_DEBUG "Restoring affinity for =
%pv\n", v);=0A             cpumask_copy(v->cpu_affinity, v->cpu_affinity_sa=
ved);=0A             v->affinity_broken =3D 0;=0A         }=0A@@ -608,8 =
+607,7 @@ int cpu_disable_scheduler(unsigned int c=0A             if ( =
cpumask_empty(&online_affinity) &&=0A                  cpumask_test_cpu(cpu=
, v->cpu_affinity) )=0A             {=0A-                printk(XENLOG_DEBU=
G "Breaking affinity for d%dv%d\n",=0A-                        d->domain_id=
, v->vcpu_id);=0A+                printk(XENLOG_DEBUG "Breaking affinity =
for %pv\n", v);=0A =0A                 if (system_state =3D=3D SYS_STATE_su=
spend)=0A                 {=0A--- a/xen/common/vsprintf.c=0A+++ b/xen/commo=
n/vsprintf.c=0A@@ -19,6 +19,7 @@=0A #include <xen/ctype.h>=0A #include =
<xen/symbols.h>=0A #include <xen/lib.h>=0A+#include <xen/sched.h>=0A =
#include <asm/div64.h>=0A #include <asm/page.h>=0A =0A@@ -301,6 +302,19 @@ =
static char *pointer(char *str, char *en=0A =0A         return str;=0A     =
}=0A+=0A+    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */=0A+ =
   {=0A+        const struct vcpu *v =3D arg;=0A+=0A+        ++*fmt_ptr;=0A=
+        if ( str <=3D end )=0A+            *str =3D 'd';=0A+        str =
=3D number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);=0A+        =
if ( str <=3D end )=0A+            *str =3D 'v';=0A+        return =
number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);=0A+    }=0A     }=0A =0A  =
   if ( field_width =3D=3D -1 )=0A--- a/xen/include/xen/config.h=0A+++ =
b/xen/include/xen/config.h=0A@@ -74,12 +74,11 @@=0A =0A #ifndef __ASSEMBLY_=
_=0A =0A-int current_domain_id(void);=0A #define dprintk(_l, _f, _a...)    =
                          \=0A     printk(_l "%s:%d: " _f, __FILE__ , =
__LINE__ , ## _a )=0A #define gdprintk(_l, _f, _a...)                      =
       \=0A-    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       =
\=0A-           __LINE__, current_domain_id() , ## _a )=0A+    printk(XENLO=
G_GUEST _l "%s:%d:%pv " _f, __FILE__,       \=0A+           __LINE__, =
current, ## _a )=0A =0A #endif /* !__ASSEMBLY__ */=0A =0A
--=__PartF6C4A4A8.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartF6C4A4A8.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:00:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIEoQ-00075L-CO; Tue, 25 Feb 2014 10:00:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIEoO-00075G-DR
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:00:28 +0000
Received: from [85.158.143.35:18747] by server-2.bemta-4.messagelabs.com id
	1B/CE-10891-AB96C035; Tue, 25 Feb 2014 10:00:26 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393322425!8091530!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15596 invoked from network); 25 Feb 2014 10:00:25 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:00:25 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:00:25 +0000
Message-Id: <530C77C8020000780011F114@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:00:24 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartF6C4A4A8.1__="
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


... in a simplified and consistent way.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/docs/misc/printk-formats.txt
+++ b/docs/misc/printk-formats.txt
@@ -15,3 +15,6 @@ Symbol/Function pointers:
 
        In the case that an appropriate symbol name can't be found, %p[sS] will
        fall back to '%p' and print the address in hex.
+
+       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
+               "d<domid>v<vcpuid>")
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
     if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
-                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
+                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
                 has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
-                v->domain->domain_id, v->vcpu_id,
-                guest_mcg_cap & ~MCG_CAP_COUNT);
+                v, guest_mcg_cap & ~MCG_CAP_COUNT);
         return -EPERM;
     }
 
@@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
              guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
              !test_and_set_bool(v->mce_pending) )
         {
-            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
-                       d->domain_id, v->vcpu_id);
+            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
             vcpu_kick(v);
             ret = 0;
         }
         else
         {
-            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
-                       d->domain_id, v->vcpu_id);
+            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
             ret = -EBUSY;
             break;
         }
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
         if ( !warned )
         {
             warned = 1;
-            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
+            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
                    "(If you see this outside of debugging activity,"
                    " please report to xen-devel@lists.xenproject.org)\n",
-                   v->domain->domain_id, v->vcpu_id);
+                   v);
         }
         memset(reg, 0, sizeof(*reg));
         return;
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct
         if ( !cpu_has(c, X86_FEATURE_DTES64) )
         {
             printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
             goto func_out;
         }
         vpmu_set(vpmu, VPMU_CPU_HAS_DS);
@@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct
             /* If BTS_UNAVAIL is set reset the DS feature. */
             vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
             printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v->domain->domain_id, v->vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
         }
         else
         {
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu
     mfn_t *oos;
     struct domain *d = v->domain;
 
-    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
-                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn));
+    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
 
     for_each_vcpu(d, v)
     {
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
 static void reserved_bit_page_fault(
     unsigned long addr, struct cpu_user_regs *regs)
 {
-    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
-           current->domain->domain_id, current->vcpu_id, regs->error_code);
+    printk("%pv: reserved bit in page table (ec=%04X)\n",
+           current, regs->error_code);
     show_page_walk(addr);
     show_execution_state(regs);
 }
@@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
         tb->flags |= TBF_INTERRUPT;
     if ( unlikely(null_trap_bounce(v, tb)) )
     {
-        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
-               v->domain->domain_id, v->vcpu_id, error_code);
+        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
         show_page_walk(addr);
     }
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
 
 vcpu_info_t dummy_vcpu_info;
 
-int current_domain_id(void)
-{
-    return current->domain->domain_id;
-}
-
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
 
     if ( !is_idle_vcpu(current) )
     {
-        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
-               smp_processor_id(), current->domain->domain_id,
-               current->vcpu_id);
+        printk("*** Dumping CPU%u guest state (%pv): ***\n",
+               smp_processor_id(), current);
         show_execution_state(guest_cpu_user_regs());
         printk("\n");
     }
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
     struct list_head *iter;
     int pos = 0;
 
-    d2printk("rqi d%dv%d\n",
-           svc->vcpu->domain->domain_id,
-           svc->vcpu->vcpu_id);
+    d2printk("rqi %pv\n", svc->vcpu);
 
     BUG_ON(&svc->rqd->runq != runq);
     /* Idle vcpus not allowed on the runqueue anymore */
@@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
 
         if ( svc->credit > iter_svc->credit )
         {
-            d2printk(" p%d d%dv%d\n",
-                   pos,
-                   iter_svc->vcpu->domain->domain_id,
-                   iter_svc->vcpu->vcpu_id);
+            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
             break;
         }
         pos++;
@@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
     cpumask_t mask;
     struct csched_vcpu * cur;
 
-    d2printk("rqt d%dv%d cd%dv%d\n",
-             new->vcpu->domain->domain_id,
-             new->vcpu->vcpu_id,
-             current->domain->domain_id,
-             current->vcpu_id);
+    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
 
     BUG_ON(new->vcpu->processor != cpu);
     BUG_ON(new->rqd != rqd);
@@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
         t2c_update(rqd, delta, svc);
         svc->start_time = now;
 
-        d2printk("b d%dv%d c%d\n",
-                 svc->vcpu->domain->domain_id,
-                 svc->vcpu->vcpu_id,
-                 svc->credit);
+        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
     } else {
         d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
                __func__, now, svc->start_time);
@@ -871,11 +859,9 @@ static void
 csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu *svc = vc->sched_priv;
-    struct domain * const dom = vc->domain;
     struct csched_dom * const sdom = svc->sdom;
 
-    printk("%s: Inserting d%dv%d\n",
-           __func__, dom->domain_id, vc->vcpu_id);
+    printk("%s: Inserting %pv\n", __func__, vc);
 
     /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
      * been called for that cpu.
@@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler
 
     /* Schedule lock should be held at this point. */
 
-    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
+    d2printk("w %pv\n", vc);
 
     BUG_ON( is_idle_vcpu(vc) );
 
@@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops,
     {
         if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
         {
-            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
+            d2printk("%pv -\n", svc->vcpu);
             clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
         }
         /* Leave it where it is for now.  When we actually pay attention
@@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops,
         }
         else
         {
-            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
+            d2printk("%pv +\n", svc->vcpu);
             new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
             goto out_up;
         }
@@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
 {
     if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
     {
-        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
-                 svc->rqd->id, trqd->id);
+        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
         /* It's running; mark it to migrate. */
         svc->migrate_rqd = trqd;
         set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
@@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
     {
         int on_runq=0;
         /* It's not running; just move it */
-        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
-                 svc->rqd->id, trqd->id);
+        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
         if ( __vcpu_on_runq(svc) )
         {
             __runq_remove(svc);
@@ -1662,11 +1646,7 @@ csched_schedule(
     SCHED_STAT_CRANK(schedule);
     CSCHED_VCPU_CHECK(current);
 
-    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
-             cpu,
-             scurr->vcpu->domain->domain_id,
-             scurr->vcpu->vcpu_id,
-             now);
+    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
 
     BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
 
@@ -1693,12 +1673,11 @@ csched_schedule(
                 }
             }
         }
-        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
+        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
                "pcpu %d rq %d!\n",
                __func__,
                cpu, this_rqi,
-               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
-               scurr->vcpu->processor, other_rqi);
+               scurr->vcpu, scurr->vcpu->processor, other_rqi);
     }
     BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
 
@@ -1755,12 +1734,8 @@ csched_schedule(
             __runq_remove(snext);
             if ( snext->vcpu->is_running )
             {
-                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
-                       cpu,
-                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
-                       snext->vcpu->processor,
-                       scurr->vcpu->domain->domain_id,
-                       scurr->vcpu->vcpu_id);
+                printk("p%d: snext %pv running on p%d! scurr %pv\n",
+                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
                 BUG();
             }
             set_bit(__CSFLAG_scheduled, &snext->flags);
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
 
         if ( v->affinity_broken )
         {
-            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
-                   d->domain_id, v->vcpu_id);
+            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
             cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
             v->affinity_broken = 0;
         }
@@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
             if ( cpumask_empty(&online_affinity) &&
                  cpumask_test_cpu(cpu, v->cpu_affinity) )
             {
-                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
-                        d->domain_id, v->vcpu_id);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
 
                 if (system_state == SYS_STATE_suspend)
                 {
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -19,6 +19,7 @@
 #include <xen/ctype.h>
 #include <xen/symbols.h>
 #include <xen/lib.h>
+#include <xen/sched.h>
 #include <asm/div64.h>
 #include <asm/page.h>
 
@@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
 
         return str;
     }
+
+    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
+    {
+        const struct vcpu *v = arg;
+
+        ++*fmt_ptr;
+        if ( str <= end )
+            *str = 'd';
+        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
+        if ( str <= end )
+            *str = 'v';
+        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
+    }
     }
 
     if ( field_width == -1 )
--- a/xen/include/xen/config.h
+++ b/xen/include/xen/config.h
@@ -74,12 +74,11 @@
 
 #ifndef __ASSEMBLY__
=20
-int current_domain_id(void);
 #define dprintk(_l, _f, _a...)                              \
     printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
 #define gdprintk(_l, _f, _a...)                             \
-    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
-           __LINE__, current_domain_id() , ## _a )
+    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
+           __LINE__, current, ## _a )
 
 #endif /* !__ASSEMBLY__ */
%pv\n", __func__, vc);=0A =0A     /* NB: On boot, idle vcpus are inserted =
before alloc_pdata() has=0A      * been called for that cpu.=0A@@ -965,7 =
+951,7 @@ csched_vcpu_wake(const struct scheduler =0A =0A     /* Schedule =
lock should be held at this point. */=0A =0A-    d2printk("w d%dv%d\n", =
vc->domain->domain_id, vc->vcpu_id);=0A+    d2printk("w %pv\n", vc);=0A =
=0A     BUG_ON( is_idle_vcpu(vc) );=0A =0A@@ -1074,7 +1060,7 @@ choose_cpu(=
const struct scheduler *ops, =0A     {=0A         if ( test_and_clear_bit(_=
_CSFLAG_runq_migrate_request, &svc->flags) )=0A         {=0A-            =
d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);=
=0A+            d2printk("%pv -\n", svc->vcpu);=0A             clear_bit(__=
CSFLAG_runq_migrate_request, &svc->flags);=0A         }=0A         /* =
Leave it where it is for now.  When we actually pay attention=0A@@ -1094,7 =
+1080,7 @@ choose_cpu(const struct scheduler *ops, =0A         }=0A        =
 else=0A         {=0A-            d2printk("d%dv%d +\n", svc->vcpu->domain-=
>domain_id, svc->vcpu->vcpu_id);=0A+            d2printk("%pv +\n", =
svc->vcpu);=0A             new_cpu =3D cpumask_cycle(vc->processor, =
&svc->migrate_rqd->active);=0A             goto out_up;=0A         }=0A@@ =
-1203,8 +1189,7 @@ void migrate(const struct scheduler *ops=0A {=0A     if =
( test_bit(__CSFLAG_scheduled, &svc->flags) )=0A     {=0A-        =
d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_=
id,=0A-                 svc->rqd->id, trqd->id);=0A+        d2printk("%pv =
%d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);=0A         /* It's =
running; mark it to migrate. */=0A         svc->migrate_rqd =3D trqd;=0A   =
      set_bit(_VPF_migrating, &svc->vcpu->pause_flags);=0A@@ -1214,8 =
+1199,7 @@ void migrate(const struct scheduler *ops=0A     {=0A         =
int on_runq=3D0;=0A         /* It's not running; just move it */=0A-       =
 d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu=
_id,=0A-                 svc->rqd->id, trqd->id);=0A+        d2printk("%pv =
%d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);=0A         if ( __vcpu_on_r=
unq(svc) )=0A         {=0A             __runq_remove(svc);=0A@@ -1662,11 =
+1646,7 @@ csched_schedule(=0A     SCHED_STAT_CRANK(schedule);=0A     =
CSCHED_VCPU_CHECK(current);=0A =0A-    d2printk("sc p%d c d%dv%d now =
%"PRI_stime"\n",=0A-             cpu,=0A-             scurr->vcpu->domain->=
domain_id,=0A-             scurr->vcpu->vcpu_id,=0A-             now);=0A+ =
   d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);=0A =
=0A     BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));=0A =
=0A@@ -1693,12 +1673,11 @@ csched_schedule(=0A                 }=0A        =
     }=0A         }=0A-        printk("%s: pcpu %d rq %d, but scurr d%dv%d =
assigned to "=0A+        printk("%s: pcpu %d rq %d, but scurr %pv assigned =
to "=0A                "pcpu %d rq %d!\n",=0A                __func__,=0A  =
              cpu, this_rqi,=0A-               scurr->vcpu->domain->domain_=
id, scurr->vcpu->vcpu_id,=0A-               scurr->vcpu->processor, =
other_rqi);=0A+               scurr->vcpu, scurr->vcpu->processor, =
other_rqi);=0A     }=0A     BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd=
 !=3D rqd);=0A =0A@@ -1755,12 +1734,8 @@ csched_schedule(=0A             =
__runq_remove(snext);=0A             if ( snext->vcpu->is_running )=0A     =
        {=0A-                printk("p%d: snext d%dv%d running on p%d! =
scurr d%dv%d\n",=0A-                       cpu,=0A-                       =
snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,=0A-                  =
     snext->vcpu->processor,=0A-                       scurr->vcpu->domain-=
>domain_id,=0A-                       scurr->vcpu->vcpu_id);=0A+           =
     printk("p%d: snext %pv running on p%d! scurr %pv\n",=0A+              =
         cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);=0A        =
         BUG();=0A             }=0A             set_bit(__CSFLAG_scheduled,=
 &snext->flags);=0A--- a/xen/common/schedule.c=0A+++ b/xen/common/schedule.=
c=0A@@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain=0A =0A    =
     if ( v->affinity_broken )=0A         {=0A-            printk(XENLOG_DE=
BUG "Restoring affinity for d%dv%d\n",=0A-                   d->domain_id, =
v->vcpu_id);=0A+            printk(XENLOG_DEBUG "Restoring affinity for =
%pv\n", v);=0A             cpumask_copy(v->cpu_affinity, v->cpu_affinity_sa=
ved);=0A             v->affinity_broken =3D 0;=0A         }=0A@@ -608,8 =
+607,7 @@ int cpu_disable_scheduler(unsigned int c=0A             if ( =
cpumask_empty(&online_affinity) &&=0A                  cpumask_test_cpu(cpu=
, v->cpu_affinity) )=0A             {=0A-                printk(XENLOG_DEBU=
G "Breaking affinity for d%dv%d\n",=0A-                        d->domain_id=
, v->vcpu_id);=0A+                printk(XENLOG_DEBUG "Breaking affinity =
for %pv\n", v);=0A =0A                 if (system_state =3D=3D SYS_STATE_su=
spend)=0A                 {=0A--- a/xen/common/vsprintf.c=0A+++ b/xen/commo=
n/vsprintf.c=0A@@ -19,6 +19,7 @@=0A #include <xen/ctype.h>=0A #include =
<xen/symbols.h>=0A #include <xen/lib.h>=0A+#include <xen/sched.h>=0A =
#include <asm/div64.h>=0A #include <asm/page.h>=0A =0A@@ -301,6 +302,19 @@ =
static char *pointer(char *str, char *en=0A =0A         return str;=0A     =
}=0A+=0A+    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */=0A+ =
   {=0A+        const struct vcpu *v =3D arg;=0A+=0A+        ++*fmt_ptr;=0A=
+        if ( str <=3D end )=0A+            *str =3D 'd';=0A+        str =
=3D number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);=0A+        =
if ( str <=3D end )=0A+            *str =3D 'v';=0A+        return =
number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);=0A+    }=0A     }=0A =0A  =
   if ( field_width =3D=3D -1 )=0A--- a/xen/include/xen/config.h=0A+++ =
b/xen/include/xen/config.h=0A@@ -74,12 +74,11 @@=0A =0A #ifndef __ASSEMBLY_=
_=0A =0A-int current_domain_id(void);=0A #define dprintk(_l, _f, _a...)    =
                          \=0A     printk(_l "%s:%d: " _f, __FILE__ , =
__LINE__ , ## _a )=0A #define gdprintk(_l, _f, _a...)                      =
       \=0A-    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       =
\=0A-           __LINE__, current_domain_id() , ## _a )=0A+    printk(XENLO=
G_GUEST _l "%s:%d:%pv " _f, __FILE__,       \=0A+           __LINE__, =
current, ## _a )=0A =0A #endif /* !__ASSEMBLY__ */=0A =0A
--=__PartF6C4A4A8.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartF6C4A4A8.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:21:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIF8p-0007H4-4I; Tue, 25 Feb 2014 10:21:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIF8n-0007Gw-Fd
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:21:33 +0000
Received: from [85.158.137.68:53742] by server-8.bemta-3.messagelabs.com id
	C7/62-16039-CAE6C035; Tue, 25 Feb 2014 10:21:32 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393323677!4050835!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8600 invoked from network); 25 Feb 2014 10:21:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:21:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:21:17 +0000
Message-Id: <530C7CAB020000780011F121@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:21:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 0/5] x86: EPT/MTRR interaction adjustments and
	cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

1: x86/hvm: refine the judgment on IDENT_PT for EMT
2: x86/HVM: fix memory type merging in epte_get_entry_emt()
3: x86/HVM: consolidate passthrough handling in epte_get_entry_emt()
4: x86/HVM: use manifest constants / enumerators for memory types
5: x86/HVM: adjust data definitions in mtrr.c

With this series in place (or actually just the first three patches, as
the rest is cleanup), apart from the need to fully drop the dependency
on HVM_PARAM_IDENT_PT (see the discussion started at
http://lists.xenproject.org/archives/html/xen-devel/2014-02/msg02150.html),
the other main question is whether the dependency on iommu_snoop is
really correct: I don't see why the IOMMU's snooping capability would
affect the cachability of memory accesses. In the GPU passthrough case
especially, RAM pages may need mapping as UC/WC if the GPU is permitted
direct access to them, so uniformly using WB here seems to be asking
for problems.

Signed-off-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xen.org Tue Feb 25 10:29:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFFo-0007PU-4P; Tue, 25 Feb 2014 10:28:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFFm-0007PO-47
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:28:46 +0000
Received: from [193.109.254.147:27365] by server-5.bemta-14.messagelabs.com id
	36/20-16688-D507C035; Tue, 25 Feb 2014 10:28:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393324124!6663000!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27897 invoked from network); 25 Feb 2014 10:28:44 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:28:44 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:28:44 +0000
Message-Id: <530C7E6B020000780011F131@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:28:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C7CAB020000780011F121@nat28.tlf.novell.com>
In-Reply-To: <530C7CAB020000780011F121@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part0C3E5E4B.1__="
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 1/5] x86/hvm: refine the judgment on IDENT_PT
	for EMT
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part0C3E5E4B.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

When trying to get the EPT EMT type, the judgment on
HVM_PARAM_IDENT_PT is not correct: it always returns the WB type if
the parameter is not set. Remove the related code.

Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>

We can't fully drop the dependency yet, but we should certainly avoid
overriding cases already properly handled. The reason for this is that
the guest setting up its MTRRs happens _after_ the EPT tables got
already constructed, and no code is in place to propagate this to the
EPT code. Without this check we're forcing the guest to run with all of
its memory uncachable until something happens to re-write every single
EPT entry. But of course this has to be just a temporary solution.

In the same spirit we should defer the "very early" (when the guest is
still being constructed and has no vCPU yet) override to the last
possible point.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -689,13 +689,8 @@ uint8_t epte_get_entry_emt(struct domain
 
     *ipat = 0;
 
-    if ( (current->domain != d) &&
-         ((d->vcpu == NULL) || ((v = d->vcpu[0]) == NULL)) )
-        return MTRR_TYPE_WRBACK;
-
-    if ( !is_pvh_vcpu(v) &&
-         !v->domain->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] )
-        return MTRR_TYPE_WRBACK;
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
 
     if ( !mfn_valid(mfn_x(mfn)) )
         return MTRR_TYPE_UNCACHABLE;
@@ -718,7 +713,8 @@ uint8_t epte_get_entry_emt(struct domain
         return MTRR_TYPE_WRBACK;
     }
 
-    gmtrr_mtype = is_hvm_vcpu(v) ?
+    gmtrr_mtype = is_hvm_domain(d) && v &&
+                  d->arch.hvm_domain.params[HVM_PARAM_IDENT_PT] ?
                   get_mtrr_type(&v->arch.hvm_vcpu.mtrr, (gfn << PAGE_SHIFT)) :
                   MTRR_TYPE_WRBACK;
 




--=__Part0C3E5E4B.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--=__Part0C3E5E4B.1__=--



From xen-devel-bounces@lists.xen.org Tue Feb 25 10:29:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:29:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFGP-0007Rt-Hw; Tue, 25 Feb 2014 10:29:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFGO-0007Rg-Ay
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:29:24 +0000
Received: from [85.158.137.68:27988] by server-7.bemta-3.messagelabs.com id
	06/BB-13775-3807C035; Tue, 25 Feb 2014 10:29:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393324162!4047403!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31144 invoked from network); 25 Feb 2014 10:29:22 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:29:22 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:29:22 +0000
Message-Id: <530C7E91020000780011F134@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:29:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C7CAB020000780011F121@nat28.tlf.novell.com>
In-Reply-To: <530C7CAB020000780011F121@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 2/5] x86/HVM: fix memory type merging in
 epte_get_entry_emt()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Using the minimum numeric value of the guest and host specified memory
types is too simplistic - it works correctly only for a subset of
types. It is in particular the WT/WP combination that needs conversion
to UC if the two types conflict.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -719,5 +719,35 @@ uint8_t epte_get_entry_emt(struct domain
                   MTRR_TYPE_WRBACK;
 
     hmtrr_mtype = get_mtrr_type(&mtrr_state, (mfn_x(mfn) << PAGE_SHIFT));
-    return ((gmtrr_mtype <= hmtrr_mtype) ? gmtrr_mtype : hmtrr_mtype);
+
+    /* If both types match we're fine. */
+    if ( likely(gmtrr_mtype == hmtrr_mtype) )
+        return hmtrr_mtype;
+
+    /* If either type is UC, we have to go with that one. */
+    if ( gmtrr_mtype == MTRR_TYPE_UNCACHABLE ||
+         hmtrr_mtype == MTRR_TYPE_UNCACHABLE )
+        return MTRR_TYPE_UNCACHABLE;
+
+    /* If either type is WB, we have to go with the other one. */
+    if ( gmtrr_mtype == MTRR_TYPE_WRBACK )
+        return hmtrr_mtype;
+    if ( hmtrr_mtype == MTRR_TYPE_WRBACK )
+        return gmtrr_mtype;
+
+    /*
+     * At this point we have disagreeing WC, WT, or WP types. The only
+     * combination that can be cleanly resolved is WT:WP. The ones involving
+     * WC need to be converted to UC, both due to the memory ordering
+     * differences and because WC disallows reads to be cached (WT and WP
+     * permit this), while WT and WP require writes to go straight to memory
+     * (WC can buffer them).
+     */
+    if ( (gmtrr_mtype == MTRR_TYPE_WRTHROUGH &&
+          hmtrr_mtype == MTRR_TYPE_WRPROT) ||
+         (gmtrr_mtype == MTRR_TYPE_WRPROT &&
+          hmtrr_mtype == MTRR_TYPE_WRTHROUGH) )
+        return MTRR_TYPE_WRPROT;
+
+    return MTRR_TYPE_UNCACHABLE;
 }





From xen-devel-bounces@lists.xen.org Tue Feb 25 10:30:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:30:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFH6-0007Xd-11; Tue, 25 Feb 2014 10:30:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFH4-0007XN-M4
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:30:06 +0000
Received: from [85.158.137.68:58933] by server-10.bemta-3.messagelabs.com id
	D3/D6-07302-DA07C035; Tue, 25 Feb 2014 10:30:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393324204!4021922!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28965 invoked from network); 25 Feb 2014 10:30:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:30:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:30:04 +0000
Message-Id: <530C7EBB020000780011F138@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:30:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C7CAB020000780011F121@nat28.tlf.novell.com>
In-Reply-To: <530C7CAB020000780011F121@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartFCCEAEBB.1__="
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 3/5] x86/HVM: consolidate passthrough handling
 in epte_get_entry_emt()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartFCCEAEBB.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

It is inconsistent to depend on iommu_enabled alone: For a guest
without devices passed through to it, it is of no concern whether the
IOMMU is enabled.

There's one rather special case to take care of: VMX code marks the
LAPIC access page as MMIO. The added assertion needs to take this into
consideration, and the subsequent handling of the direct MMIO case was
inconsistent too: That page would have been WB in the absence of an
IOMMU, but UC in the presence of it, while in fact the cachability of
this page is entirely unrelated to an IOMMU being in use.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2090,9 +2090,9 @@ static int vmx_alloc_vlapic_mapping(stru
     if ( apic_va == NULL )
         return -ENOMEM;
     share_xen_page_with_guest(virt_to_page(apic_va), d, XENSHARE_writable);
+    d->arch.hvm_domain.vmx.apic_access_mfn = virt_to_mfn(apic_va);
     set_mmio_p2m_entry(d, paddr_to_pfn(APIC_DEFAULT_PHYS_BASE),
         _mfn(virt_to_mfn(apic_va)));
-    d->arch.hvm_domain.vmx.apic_access_mfn = virt_to_mfn(apic_va);
 
     return 0;
 }
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -698,14 +698,20 @@ uint8_t epte_get_entry_emt(struct domain
     if ( hvm_get_mem_pinned_cacheattr(d, gfn, &type) )
         return type;
 
-    if ( !iommu_enabled )
+    if ( !iommu_enabled ||
+         (rangeset_is_empty(d->iomem_caps) &&
+          rangeset_is_empty(d->arch.ioport_caps) &&
+          !has_arch_pdevs(d)) )
     {
+        ASSERT(!direct_mmio ||
+               mfn_x(mfn) == d->arch.hvm_domain.vmx.apic_access_mfn);
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
     }
 
     if ( direct_mmio )
-        return MTRR_TYPE_UNCACHABLE;
+        return mfn_x(mfn) != d->arch.hvm_domain.vmx.apic_access_mfn
+               ? MTRR_TYPE_UNCACHABLE : MTRR_TYPE_WRBACK;
 
     if ( iommu_snoop )
     {




--=__PartFCCEAEBB.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline


--=__PartFCCEAEBB.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:30:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFHp-0007eM-In; Tue, 25 Feb 2014 10:30:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFHo-0007du-4S
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:30:52 +0000
Received: from [85.158.137.68:18773] by server-16.bemta-3.messagelabs.com id
	3B/B2-29917-BD07C035; Tue, 25 Feb 2014 10:30:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393324250!4050739!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23273 invoked from network); 25 Feb 2014 10:30:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:30:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:30:49 +0000
Message-Id: <530C7EE9020000780011F13C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:30:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C7CAB020000780011F121@nat28.tlf.novell.com>
In-Reply-To: <530C7CAB020000780011F121@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part8EBCDCC9.1__="
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 4/5] x86/HVM: use manifest constants /
 enumerators for memory types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--=__Part8EBCDCC9.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... instead of literal numbers, thus making it possible for the reader
to understand the code without having to look up the meaning of these
numbers.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -235,9 +235,16 @@ int hvm_set_guest_pat(struct vcpu *v, u6
     uint8_t *value = (uint8_t *)&guest_pat;
 
     for ( i = 0; i < 8; i++ )
-        if ( unlikely(!(value[i] == 0 || value[i] == 1 ||
-                        value[i] == 4 || value[i] == 5 ||
-                        value[i] == 6 || value[i] == 7)) ) {
+        switch ( value[i] )
+        {
+        case PAT_TYPE_UC_MINUS:
+        case PAT_TYPE_UNCACHABLE:
+        case PAT_TYPE_WRBACK:
+        case PAT_TYPE_WRCOMB:
+        case PAT_TYPE_WRPROT:
+        case PAT_TYPE_WRTHROUGH:
+            break;
+        default:
             HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid guest PAT: %"PRIx64"\n",
                        guest_pat);
             return 0;
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -42,22 +42,28 @@ static const uint8_t pat_entry_2_pte_fla
 
 /* Effective mm type lookup table, according to MTRR and PAT. */
 static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][PAT_TYPE_NUMS] = {
-/********PAT(UC,WC,RS,RS,WT,WP,WB,UC-)*/
-/* RS means reserved type(2,3), and type is hardcoded here */
- /*MTRR(UC):(UC,WC,RS,RS,UC,UC,UC,UC)*/
-            {0, 1, 2, 2, 0, 0, 0, 0},
- /*MTRR(WC):(UC,WC,RS,RS,UC,UC,WC,WC)*/
-            {0, 1, 2, 2, 0, 0, 1, 1},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(WT):(UC,WC,RS,RS,WT,WP,WT,UC)*/
-            {0, 1, 2, 2, 4, 5, 4, 0},
- /*MTRR(WP):(UC,WC,RS,RS,WT,WP,WP,WC)*/
-            {0, 1, 2, 2, 4, 5, 5, 1},
- /*MTRR(WB):(UC,WC,RS,RS,WT,WP,WB,UC)*/
-            {0, 1, 2, 2, 4, 5, 6, 0}
+#define RS MEMORY_NUM_TYPES
+#define UC MTRR_TYPE_UNCACHABLE
+#define WB MTRR_TYPE_WRBACK
+#define WC MTRR_TYPE_WRCOMB
+#define WP MTRR_TYPE_WRPROT
+#define WT MTRR_TYPE_WRTHROUGH
+
+/*          PAT(UC, WC, RS, RS, WT, WP, WB, UC-) */
+/* MTRR(UC) */ {UC, WC, RS, RS, UC, UC, UC, UC},
+/* MTRR(WC) */ {UC, WC, RS, RS, UC, UC, WC, WC},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(WT) */ {UC, WC, RS, RS, WT, WP, WT, UC},
+/* MTRR(WP) */ {UC, WC, RS, RS, WT, WP, WP, WC},
+/* MTRR(WB) */ {UC, WC, RS, RS, WT, WP, WB, UC}
+
+#undef UC
+#undef WC
+#undef WT
+#undef WP
+#undef WB
+#undef RS
 };
 
 /*
@@ -403,13 +409,26 @@ uint32_t get_pat_flags(struct vcpu *v,
     return pat_type_2_pte_flags(pat_entry_value);
 }
 
+static inline bool_t valid_mtrr_type(uint8_t type)
+{
+    switch ( type )
+    {
+    case MTRR_TYPE_UNCACHABLE:
+    case MTRR_TYPE_WRBACK:
+    case MTRR_TYPE_WRCOMB:
+    case MTRR_TYPE_WRPROT:
+    case MTRR_TYPE_WRTHROUGH:
+        return 1;
+    }
+    return 0;
+}
+
 bool_t mtrr_def_type_msr_set(struct mtrr_state *m, uint64_t msr_content)
 {
     uint8_t def_type = msr_content & 0xff;
     uint8_t enabled = (msr_content >> 10) & 0x3;
 
-    if ( unlikely(!(def_type == 0 || def_type == 1 || def_type == 4 ||
-                    def_type == 5 || def_type == 6)) )
+    if ( unlikely(!valid_mtrr_type(def_type)) )
     {
          HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid MTRR def type:%x\n", def_type);
          return 0;
@@ -436,15 +455,11 @@ bool_t mtrr_fix_range_msr_set(struct mtr
     if ( fixed_range_base[row] != msr_content )
     {
         uint8_t *range = (uint8_t*)&msr_content;
-        int32_t i, type;
+        unsigned int i;
 
         for ( i = 0; i < 8; i++ )
-        {
-            type = range[i];
-            if ( unlikely(!(type == 0 || type == 1 ||
-                            type == 4 || type == 5 || type == 6)) )
+            if ( unlikely(!valid_mtrr_type(range[i])) )
                 return 0;
-        }
 
         fixed_range_base[row] = msr_content;
     }
@@ -455,7 +470,7 @@ bool_t mtrr_fix_range_msr_set(struct mtr
 bool_t mtrr_var_range_msr_set(
     struct domain *d, struct mtrr_state *m, uint32_t msr, uint64_t msr_content)
 {
-    uint32_t index, type, phys_addr, eax, ebx, ecx, edx;
+    uint32_t index, phys_addr, eax, ebx, ecx, edx;
     uint64_t msr_mask;
     uint64_t *var_range_base = (uint64_t*)m->var_ranges;
 
@@ -463,9 +478,7 @@ bool_t mtrr_var_range_msr_set(
     if ( var_range_base[index] =3D=3D msr_content )
         return 1;
=20
-    type =3D (uint8_t)msr_content;
-    if ( unlikely(!(type =3D=3D 0 || type =3D=3D 1 ||
-                    type =3D=3D 4 || type =3D=3D 5 || type =3D=3D 6)) )
+    if ( unlikely(!valid_mtrr_type((uint8_t)msr_content)) )
         return 0;
=20
     phys_addr =3D 36;
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -20,7 +20,6 @@
 enum {
     PAT_TYPE_UNCACHABLE=3D0,
     PAT_TYPE_WRCOMB=3D1,
-    PAT_TYPE_RESERVED=3D2,
     PAT_TYPE_WRTHROUGH=3D4,
     PAT_TYPE_WRPROT=3D5,
     PAT_TYPE_WRBACK=3D6,



--=__Part8EBCDCC9.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part8EBCDCC9.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:30:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:30:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFHp-0007eM-In; Tue, 25 Feb 2014 10:30:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFHo-0007du-4S
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:30:52 +0000
Received: from [85.158.137.68:18773] by server-16.bemta-3.messagelabs.com id
	3B/B2-29917-BD07C035; Tue, 25 Feb 2014 10:30:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393324250!4050739!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23273 invoked from network); 25 Feb 2014 10:30:50 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:30:50 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:30:49 +0000
Message-Id: <530C7EE9020000780011F13C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:30:49 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C7CAB020000780011F121@nat28.tlf.novell.com>
In-Reply-To: <530C7CAB020000780011F121@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part8EBCDCC9.1__="
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 4/5] x86/HVM: use manifest constants /
 enumerators for memory types
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part8EBCDCC9.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... instead of literal numbers, thus making it possible for the reader
to understand the code without having to look up the meaning of these
numbers.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -235,9 +235,16 @@ int hvm_set_guest_pat(struct vcpu *v, u6
     uint8_t *value = (uint8_t *)&guest_pat;
 
     for ( i = 0; i < 8; i++ )
-        if ( unlikely(!(value[i] == 0 || value[i] == 1 ||
-                        value[i] == 4 || value[i] == 5 ||
-                        value[i] == 6 || value[i] == 7)) ) {
+        switch ( value[i] )
+        {
+        case PAT_TYPE_UC_MINUS:
+        case PAT_TYPE_UNCACHABLE:
+        case PAT_TYPE_WRBACK:
+        case PAT_TYPE_WRCOMB:
+        case PAT_TYPE_WRPROT:
+        case PAT_TYPE_WRTHROUGH:
+            break;
+        default:
             HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid guest PAT: %"PRIx64"\n",
                         guest_pat);
             return 0;
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -42,22 +42,28 @@ static const uint8_t pat_entry_2_pte_fla
 
 /* Effective mm type lookup table, according to MTRR and PAT. */
 static const uint8_t mm_type_tbl[MTRR_NUM_TYPES][PAT_TYPE_NUMS] = {
-/********PAT(UC,WC,RS,RS,WT,WP,WB,UC-)*/
-/* RS means reserved type(2,3), and type is hardcoded here */
- /*MTRR(UC):(UC,WC,RS,RS,UC,UC,UC,UC)*/
-            {0, 1, 2, 2, 0, 0, 0, 0},
- /*MTRR(WC):(UC,WC,RS,RS,UC,UC,WC,WC)*/
-            {0, 1, 2, 2, 0, 0, 1, 1},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(RS):(RS,RS,RS,RS,RS,RS,RS,RS)*/
-            {2, 2, 2, 2, 2, 2, 2, 2},
- /*MTRR(WT):(UC,WC,RS,RS,WT,WP,WT,UC)*/
-            {0, 1, 2, 2, 4, 5, 4, 0},
- /*MTRR(WP):(UC,WC,RS,RS,WT,WP,WP,WC)*/
-            {0, 1, 2, 2, 4, 5, 5, 1},
- /*MTRR(WB):(UC,WC,RS,RS,WT,WP,WB,UC)*/
-            {0, 1, 2, 2, 4, 5, 6, 0}
+#define RS MEMORY_NUM_TYPES
+#define UC MTRR_TYPE_UNCACHABLE
+#define WB MTRR_TYPE_WRBACK
+#define WC MTRR_TYPE_WRCOMB
+#define WP MTRR_TYPE_WRPROT
+#define WT MTRR_TYPE_WRTHROUGH
+
+/*          PAT(UC, WC, RS, RS, WT, WP, WB, UC-) */
+/* MTRR(UC) */ {UC, WC, RS, RS, UC, UC, UC, UC},
+/* MTRR(WC) */ {UC, WC, RS, RS, UC, UC, WC, WC},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(RS) */ {RS, RS, RS, RS, RS, RS, RS, RS},
+/* MTRR(WT) */ {UC, WC, RS, RS, WT, WP, WT, UC},
+/* MTRR(WP) */ {UC, WC, RS, RS, WT, WP, WP, WC},
+/* MTRR(WB) */ {UC, WC, RS, RS, WT, WP, WB, UC}
+
+#undef UC
+#undef WC
+#undef WT
+#undef WP
+#undef WB
+#undef RS
 };
 
 /*
@@ -403,13 +409,26 @@ uint32_t get_pat_flags(struct vcpu *v,
     return pat_type_2_pte_flags(pat_entry_value);
 }
 
+static inline bool_t valid_mtrr_type(uint8_t type)
+{
+    switch ( type )
+    {
+    case MTRR_TYPE_UNCACHABLE:
+    case MTRR_TYPE_WRBACK:
+    case MTRR_TYPE_WRCOMB:
+    case MTRR_TYPE_WRPROT:
+    case MTRR_TYPE_WRTHROUGH:
+        return 1;
+    }
+    return 0;
+}
+
 bool_t mtrr_def_type_msr_set(struct mtrr_state *m, uint64_t msr_content)
 {
     uint8_t def_type = msr_content & 0xff;
     uint8_t enabled = (msr_content >> 10) & 0x3;
 
-    if ( unlikely(!(def_type == 0 || def_type == 1 || def_type == 4 ||
-                    def_type == 5 || def_type == 6)) )
+    if ( unlikely(!valid_mtrr_type(def_type)) )
     {
          HVM_DBG_LOG(DBG_LEVEL_MSR, "invalid MTRR def type:%x\n", def_type);
          return 0;
@@ -436,15 +455,11 @@ bool_t mtrr_fix_range_msr_set(struct mtr
     if ( fixed_range_base[row] != msr_content )
     {
         uint8_t *range = (uint8_t*)&msr_content;
-        int32_t i, type;
+        unsigned int i;
 
         for ( i = 0; i < 8; i++ )
-        {
-            type = range[i];
-            if ( unlikely(!(type == 0 || type == 1 ||
-                            type == 4 || type == 5 || type == 6)) )
+            if ( unlikely(!valid_mtrr_type(range[i])) )
                 return 0;
-        }
 
         fixed_range_base[row] = msr_content;
     }
@@ -455,7 +470,7 @@ bool_t mtrr_fix_range_msr_set(struct mtr
 bool_t mtrr_var_range_msr_set(
     struct domain *d, struct mtrr_state *m, uint32_t msr, uint64_t msr_content)
 {
-    uint32_t index, type, phys_addr, eax, ebx, ecx, edx;
+    uint32_t index, phys_addr, eax, ebx, ecx, edx;
     uint64_t msr_mask;
     uint64_t *var_range_base = (uint64_t*)m->var_ranges;
 
@@ -463,9 +478,7 @@ bool_t mtrr_var_range_msr_set(
     if ( var_range_base[index] == msr_content )
         return 1;
 
-    type = (uint8_t)msr_content;
-    if ( unlikely(!(type == 0 || type == 1 ||
-                    type == 4 || type == 5 || type == 6)) )
+    if ( unlikely(!valid_mtrr_type((uint8_t)msr_content)) )
         return 0;
 
     phys_addr = 36;
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -20,7 +20,6 @@
 enum {
     PAT_TYPE_UNCACHABLE=0,
     PAT_TYPE_WRCOMB=1,
-    PAT_TYPE_RESERVED=2,
    PAT_TYPE_WRTHROUGH=4,
     PAT_TYPE_WRPROT=5,
     PAT_TYPE_WRBACK=6,




--=__Part8EBCDCC9.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:31:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFIQ-0007lK-9e; Tue, 25 Feb 2014 10:31:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFIO-0007ks-2h
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:31:28 +0000
Received: from [85.158.137.68:34729] by server-13.bemta-3.messagelabs.com id
	07/E2-26923-FF07C035; Tue, 25 Feb 2014 10:31:27 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393324286!2780027!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21006 invoked from network); 25 Feb 2014 10:31:26 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:31:26 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:31:26 +0000
Message-Id: <530C7F0B020000780011F140@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:31:23 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C7CAB020000780011F121@nat28.tlf.novell.com>
In-Reply-To: <530C7CAB020000780011F121@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartAC9EFEEB.1__="
Cc: Yang Z Zhang <yang.z.zhang@intel.com>, Keir Fraser <keir@xen.org>,
	Dongxiao Xu <dongxiao.xu@intel.com>, Eddie Dong <eddie.dong@intel.com>,
	Jun Nakajima <jun.nakajima@intel.com>
Subject: [Xen-devel] [PATCH 5/5] x86/HVM: adjust data definitions in mtrr.c
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartAC9EFEEB.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

- use proper section attributes
- use initializers where possible
- clean up pat_type_2_pte_flags()

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -70,10 +70,14 @@ static const uint8_t mm_type_tbl[MTRR_NU
  * Reverse lookup table, to find a pat type according to MTRR and effective
  * memory type. This table is dynamically generated.
  */
-static uint8_t mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES];
+static uint8_t __read_mostly mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES] =
+    { [0 ... MTRR_NUM_TYPES-1] =
+        { [0 ... MEMORY_NUM_TYPES-1] = INVALID_MEM_TYPE }
+    };
 
 /* Lookup table for PAT entry of a given PAT value in host PAT. */
-static uint8_t pat_entry_tbl[PAT_TYPE_NUMS];
+static uint8_t __read_mostly pat_entry_tbl[PAT_TYPE_NUMS] =
+    { [0 ... PAT_TYPE_NUMS-1] = INVALID_MEM_TYPE };
 
 static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
                            uint64_t *base, uint64_t *end)
@@ -149,23 +153,21 @@ bool_t is_var_mtrr_overlapped(struct mtr
 #define MTRRphysBase_MSR(reg) (0x200 + 2 * (reg))
 #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
 
-static int hvm_mtrr_pat_init(void)
+static int __init hvm_mtrr_pat_init(void)
 {
     unsigned int i, j, phys_addr;
 
-    memset(&mtrr_epat_tbl, INVALID_MEM_TYPE, sizeof(mtrr_epat_tbl));
     for ( i = 0; i < MTRR_NUM_TYPES; i++ )
     {
         for ( j = 0; j < PAT_TYPE_NUMS; j++ )
         {
-            int32_t tmp = mm_type_tbl[i][j];
-            if ( (tmp >= 0) && (tmp < MEMORY_NUM_TYPES) )
+            unsigned int tmp = mm_type_tbl[i][j];
+
+            if ( tmp < MEMORY_NUM_TYPES )
                 mtrr_epat_tbl[i][tmp] = j;
         }
     }
 
-    memset(&pat_entry_tbl, INVALID_MEM_TYPE,
-           PAT_TYPE_NUMS * sizeof(pat_entry_tbl[0]));
     for ( i = 0; i < PAT_TYPE_NUMS; i++ )
     {
         for ( j = 0; j < PAT_TYPE_NUMS; j++ )
@@ -190,16 +192,16 @@ __initcall(hvm_mtrr_pat_init);
 
 uint8_t pat_type_2_pte_flags(uint8_t pat_type)
 {
-    int32_t pat_entry = pat_entry_tbl[pat_type];
+    unsigned int pat_entry = pat_entry_tbl[pat_type];
 
-    /* INVALID_MEM_TYPE, means doesn't find the pat_entry in host pat for
-     * a given pat_type. If host pat covers all the pat types,
-     * it can't happen.
+    /*
+     * INVALID_MEM_TYPE, means doesn't find the pat_entry in host PAT for a
+     * given pat_type. If host PAT covers all the PAT types, it can't happen.
      */
-    if ( likely(pat_entry != INVALID_MEM_TYPE) )
-        return pat_entry_2_pte_flags[pat_entry];
+    if ( unlikely(pat_entry == INVALID_MEM_TYPE) )
+        pat_entry = pat_entry_tbl[PAT_TYPE_UNCACHABLE];
 
-    return pat_entry_2_pte_flags[pat_entry_tbl[PAT_TYPE_UNCACHABLE]];
+    return pat_entry_2_pte_flags[pat_entry];
 }
 
 int hvm_vcpu_cacheattr_init(struct vcpu *v)





--=__PartAC9EFEEB.1__=--


+++ b/xen/arch/x86/hvm/mtrr.c
@@ -70,10 +70,14 @@ static const uint8_t mm_type_tbl[MTRR_NU
  * Reverse lookup table, to find a pat type according to MTRR and =
effective
  * memory type. This table is dynamically generated.
  */
-static uint8_t mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES];
+static uint8_t __read_mostly mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPE=
S] =3D
+    { [0 ... MTRR_NUM_TYPES-1] =3D
+        { [0 ... MEMORY_NUM_TYPES-1] =3D INVALID_MEM_TYPE }
+    };
=20
 /* Lookup table for PAT entry of a given PAT value in host PAT. */
-static uint8_t pat_entry_tbl[PAT_TYPE_NUMS];
+static uint8_t __read_mostly pat_entry_tbl[PAT_TYPE_NUMS] =3D
+    { [0 ... PAT_TYPE_NUMS-1] =3D INVALID_MEM_TYPE };
=20
 static void get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,
                            uint64_t *base, uint64_t *end)
@@ -149,23 +153,21 @@ bool_t is_var_mtrr_overlapped(struct mtr
 #define MTRRphysBase_MSR(reg) (0x200 + 2 * (reg))
 #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)
=20
-static int hvm_mtrr_pat_init(void)
+static int __init hvm_mtrr_pat_init(void)
 {
     unsigned int i, j, phys_addr;
=20
-    memset(&mtrr_epat_tbl, INVALID_MEM_TYPE, sizeof(mtrr_epat_tbl));
     for ( i =3D 0; i < MTRR_NUM_TYPES; i++ )
     {
         for ( j =3D 0; j < PAT_TYPE_NUMS; j++ )
         {
-            int32_t tmp =3D mm_type_tbl[i][j];
-            if ( (tmp >=3D 0) && (tmp < MEMORY_NUM_TYPES) )
+            unsigned int tmp =3D mm_type_tbl[i][j];
+
+            if ( tmp < MEMORY_NUM_TYPES )
                 mtrr_epat_tbl[i][tmp] =3D j;
         }
     }
=20
-    memset(&pat_entry_tbl, INVALID_MEM_TYPE,
-           PAT_TYPE_NUMS * sizeof(pat_entry_tbl[0]));
     for ( i =3D 0; i < PAT_TYPE_NUMS; i++ )
     {
         for ( j =3D 0; j < PAT_TYPE_NUMS; j++ )
@@ -190,16 +192,16 @@ __initcall(hvm_mtrr_pat_init);
=20
 uint8_t pat_type_2_pte_flags(uint8_t pat_type)
 {
-    int32_t pat_entry =3D pat_entry_tbl[pat_type];
+    unsigned int pat_entry =3D pat_entry_tbl[pat_type];
=20
-    /* INVALID_MEM_TYPE, means doesn't find the pat_entry in host pat for
-     * a given pat_type. If host pat covers all the pat types,
-     * it can't happen.
+    /*
+     * INVALID_MEM_TYPE, means doesn't find the pat_entry in host PAT for =
a
+     * given pat_type. If host PAT covers all the PAT types, it can't =
happen.
      */
-    if ( likely(pat_entry !=3D INVALID_MEM_TYPE) )
-        return pat_entry_2_pte_flags[pat_entry];
+    if ( unlikely(pat_entry =3D=3D INVALID_MEM_TYPE) )
+        pat_entry =3D pat_entry_tbl[PAT_TYPE_UNCACHABLE];
=20
-    return pat_entry_2_pte_flags[pat_entry_tbl[PAT_TYPE_UNCACHABLE]];
+    return pat_entry_2_pte_flags[pat_entry];
 }
=20
 int hvm_vcpu_cacheattr_init(struct vcpu *v)




--=__PartAC9EFEEB.1__=
Content-Type: text/plain; name="x86-HVM-MTRR-sections.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-HVM-MTRR-sections.patch"

x86/HVM: adjust data definitions in mtrr.c=0A=0A- use proper section =
attributes=0A- use initializers where possible=0A- clean up pat_type_2_pte_=
flags()=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com>=0A=0A--- =
a/xen/arch/x86/hvm/mtrr.c=0A+++ b/xen/arch/x86/hvm/mtrr.c=0A@@ -70,10 =
+70,14 @@ static const uint8_t mm_type_tbl[MTRR_NU=0A  * Reverse lookup =
table, to find a pat type according to MTRR and effective=0A  * memory =
type. This table is dynamically generated.=0A  */=0A-static uint8_t =
mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES];=0A+static uint8_t =
__read_mostly mtrr_epat_tbl[MTRR_NUM_TYPES][MEMORY_NUM_TYPES] =3D=0A+    { =
[0 ... MTRR_NUM_TYPES-1] =3D=0A+        { [0 ... MEMORY_NUM_TYPES-1] =3D =
INVALID_MEM_TYPE }=0A+    };=0A =0A /* Lookup table for PAT entry of a =
given PAT value in host PAT. */=0A-static uint8_t pat_entry_tbl[PAT_TYPE_NU=
MS];=0A+static uint8_t __read_mostly pat_entry_tbl[PAT_TYPE_NUMS] =3D=0A+  =
  { [0 ... PAT_TYPE_NUMS-1] =3D INVALID_MEM_TYPE };=0A =0A static void =
get_mtrr_range(uint64_t base_msr, uint64_t mask_msr,=0A                    =
        uint64_t *base, uint64_t *end)=0A@@ -149,23 +153,21 @@ bool_t =
is_var_mtrr_overlapped(struct mtr=0A #define MTRRphysBase_MSR(reg) (0x200 =
+ 2 * (reg))=0A #define MTRRphysMask_MSR(reg) (0x200 + 2 * (reg) + 1)=0A =
=0A-static int hvm_mtrr_pat_init(void)=0A+static int __init hvm_mtrr_pat_in=
it(void)=0A {=0A     unsigned int i, j, phys_addr;=0A =0A-    memset(&mtrr_=
epat_tbl, INVALID_MEM_TYPE, sizeof(mtrr_epat_tbl));=0A     for ( i =3D 0; =
i < MTRR_NUM_TYPES; i++ )=0A     {=0A         for ( j =3D 0; j < PAT_TYPE_N=
UMS; j++ )=0A         {=0A-            int32_t tmp =3D mm_type_tbl[i][j];=
=0A-            if ( (tmp >=3D 0) && (tmp < MEMORY_NUM_TYPES) )=0A+        =
    unsigned int tmp =3D mm_type_tbl[i][j];=0A+=0A+            if ( tmp < =
MEMORY_NUM_TYPES )=0A                 mtrr_epat_tbl[i][tmp] =3D j;=0A      =
   }=0A     }=0A =0A-    memset(&pat_entry_tbl, INVALID_MEM_TYPE,=0A-      =
     PAT_TYPE_NUMS * sizeof(pat_entry_tbl[0]));=0A     for ( i =3D 0; i < =
PAT_TYPE_NUMS; i++ )=0A     {=0A         for ( j =3D 0; j < PAT_TYPE_NUMS; =
j++ )=0A@@ -190,16 +192,16 @@ __initcall(hvm_mtrr_pat_init);=0A =0A =
uint8_t pat_type_2_pte_flags(uint8_t pat_type)=0A {=0A-    int32_t =
pat_entry =3D pat_entry_tbl[pat_type];=0A+    unsigned int pat_entry =3D =
pat_entry_tbl[pat_type];=0A =0A-    /* INVALID_MEM_TYPE, means doesn't =
find the pat_entry in host pat for=0A-     * a given pat_type. If host pat =
covers all the pat types,=0A-     * it can't happen.=0A+    /*=0A+     * =
INVALID_MEM_TYPE, means doesn't find the pat_entry in host PAT for a=0A+   =
  * given pat_type. If host PAT covers all the PAT types, it can't =
happen.=0A      */=0A-    if ( likely(pat_entry !=3D INVALID_MEM_TYPE) =
)=0A-        return pat_entry_2_pte_flags[pat_entry];=0A+    if ( =
unlikely(pat_entry =3D=3D INVALID_MEM_TYPE) )=0A+        pat_entry =3D =
pat_entry_tbl[PAT_TYPE_UNCACHABLE];=0A =0A-    return pat_entry_2_pte_flags=
[pat_entry_tbl[PAT_TYPE_UNCACHABLE]];=0A+    return pat_entry_2_pte_flags[p=
at_entry];=0A }=0A =0A int hvm_vcpu_cacheattr_init(struct vcpu *v)=0A
--=__PartAC9EFEEB.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartAC9EFEEB.1__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:37:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFNy-000884-D7; Tue, 25 Feb 2014 10:37:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFNx-00087z-4H
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:37:13 +0000
Received: from [193.109.254.147:52332] by server-14.bemta-14.messagelabs.com
	id 73/43-29228-8527C035; Tue, 25 Feb 2014 10:37:12 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393324629!6631873!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28384 invoked from network); 25 Feb 2014 10:37:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 10:37:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; 
	d="scan'208,217";a="103828950"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 10:37:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 05:37:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIFNr-0005gX-I7;
	Tue, 25 Feb 2014 10:37:07 +0000
Message-ID: <530C7253.6040504@citrix.com>
Date: Tue, 25 Feb 2014 10:37:07 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
In-Reply-To: <530C77C8020000780011F114@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2922351061638859674=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2922351061638859674==
Content-Type: multipart/alternative;
	boundary="------------030401060001040204030404"

--------------030401060001040204030404
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 25/02/14 10:00, Jan Beulich wrote:
> ... in a simplified and consistent way.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

That is a remarkable reduction in code.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/docs/misc/printk-formats.txt
> +++ b/docs/misc/printk-formats.txt
> @@ -15,3 +15,6 @@ Symbol/Function pointers:
>  
>         In the case that an appropriate symbol name can't be found, %p[sS] will
>         fall back to '%p' and print the address in hex.
> +
> +       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
> +               "d<domid>v<vcpuid>")
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
>      if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
>      {
>          dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
> -                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
> +                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
>                  has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
> -                v->domain->domain_id, v->vcpu_id,
> -                guest_mcg_cap & ~MCG_CAP_COUNT);
> +                v, guest_mcg_cap & ~MCG_CAP_COUNT);
>          return -EPERM;
>      }
>  
> @@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
>                guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
>               !test_and_set_bool(v->mce_pending) )
>          {
> -            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
>              vcpu_kick(v);
>              ret = 0;
>          }
>          else
>          {
> -            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
>              ret = -EBUSY;
>              break;
>          }
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
>          if ( !warned )
>          {
>              warned = 1;
> -            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
> +            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
>                     "(If you see this outside of debugging activity,"
>                     " please report to xen-devel@lists.xenproject.org)\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   v);
>          }
>          memset(reg, 0, sizeof(*reg));
>          return;
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct 
>          if ( !cpu_has(c, X86_FEATURE_DTES64) )
>          {
>              printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>              goto func_out;
>          }
>          vpmu_set(vpmu, VPMU_CPU_HAS_DS);
> @@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct 
>              /* If BTS_UNAVAIL is set reset the DS feature. */
>              vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
>              printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>          }
>          else
>          {
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu 
>      mfn_t *oos;
>      struct domain *d = v->domain;
>  
> -    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
> -                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn)); 
> +    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
>  
>      for_each_vcpu(d, v) 
>      {
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
>  static void reserved_bit_page_fault(
>      unsigned long addr, struct cpu_user_regs *regs)
>  {
> -    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
> -           current->domain->domain_id, current->vcpu_id, regs->error_code);
> +    printk("%pv: reserved bit in page table (ec=%04X)\n",
> +           current, regs->error_code);
>      show_page_walk(addr);
>      show_execution_state(regs);
>  }
> @@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
>          tb->flags |= TBF_INTERRUPT;
>      if ( unlikely(null_trap_bounce(v, tb)) )
>      {
> -        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> -               v->domain->domain_id, v->vcpu_id, error_code);
> +        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
>          show_page_walk(addr);
>      }
>  
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
>  
>  vcpu_info_t dummy_vcpu_info;
>  
> -int current_domain_id(void)
> -{
> -    return current->domain->domain_id;
> -}
> -
>  static void __domain_finalise_shutdown(struct domain *d)
>  {
>      struct vcpu *v;
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
>  
>      if ( !is_idle_vcpu(current) )
>      {
> -        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
> -               smp_processor_id(), current->domain->domain_id,
> -               current->vcpu_id);
> +        printk("*** Dumping CPU%u guest state (%pv): ***\n",
> +               smp_processor_id(), current);
>          show_execution_state(guest_cpu_user_regs());
>          printk("\n");
>      }
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
>      struct list_head *iter;
>      int pos = 0;
>  
> -    d2printk("rqi d%dv%d\n",
> -           svc->vcpu->domain->domain_id,
> -           svc->vcpu->vcpu_id);
> +    d2printk("rqi %pv\n", svc->vcpu);
>  
>      BUG_ON(&svc->rqd->runq != runq);
>      /* Idle vcpus not allowed on the runqueue anymore */
> @@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
>  
>          if ( svc->credit > iter_svc->credit )
>          {
> -            d2printk(" p%d d%dv%d\n",
> -                   pos,
> -                   iter_svc->vcpu->domain->domain_id,
> -                   iter_svc->vcpu->vcpu_id);
> +            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
>              break;
>          }
>          pos++;
> @@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
>      cpumask_t mask;
>      struct csched_vcpu * cur;
>  
> -    d2printk("rqt d%dv%d cd%dv%d\n",
> -             new->vcpu->domain->domain_id,
> -             new->vcpu->vcpu_id,
> -             current->domain->domain_id,
> -             current->vcpu_id);
> +    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
>  
>      BUG_ON(new->vcpu->processor != cpu);
>      BUG_ON(new->rqd != rqd);
> @@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
>          t2c_update(rqd, delta, svc);
>          svc->start_time = now;
>  
> -        d2printk("b d%dv%d c%d\n",
> -                 svc->vcpu->domain->domain_id,
> -                 svc->vcpu->vcpu_id,
> -                 svc->credit);
> +        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
>      } else {
>          d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
>                 __func__, now, svc->start_time);
> @@ -871,11 +859,9 @@ static void
>  csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>  {
>      struct csched_vcpu *svc = vc->sched_priv;
> -    struct domain * const dom = vc->domain;
>      struct csched_dom * const sdom = svc->sdom;
>  
> -    printk("%s: Inserting d%dv%d\n",
> -           __func__, dom->domain_id, vc->vcpu_id);
> +    printk("%s: Inserting %pv\n", __func__, vc);
>  
>      /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
>       * been called for that cpu.
> @@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler 
>  
>      /* Schedule lock should be held at this point. */
>  
> -    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
> +    d2printk("w %pv\n", vc);
>  
>      BUG_ON( is_idle_vcpu(vc) );
>  
> @@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops, 
>      {
>          if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
>          {
> -            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv -\n", svc->vcpu);
>              clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
>          }
>          /* Leave it where it is for now.  When we actually pay attention
> @@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops, 
>          }
>          else
>          {
> -            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv +\n", svc->vcpu);
>              new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
>              goto out_up;
>          }
> @@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
>  {
>      if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
>      {
> -        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
>          /* It's running; mark it to migrate. */
>          svc->migrate_rqd = trqd;
>          set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
> @@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
>      {
>          int on_runq=0;
>          /* It's not running; just move it */
> -        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
>          if ( __vcpu_on_runq(svc) )
>          {
>              __runq_remove(svc);
> @@ -1662,11 +1646,7 @@ csched_schedule(
>      SCHED_STAT_CRANK(schedule);
>      CSCHED_VCPU_CHECK(current);
>  
> -    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
> -             cpu,
> -             scurr->vcpu->domain->domain_id,
> -             scurr->vcpu->vcpu_id,
> -             now);
> +    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
>  
>      BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
>  
> @@ -1693,12 +1673,11 @@ csched_schedule(
>                  }
>              }
>          }
> -        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
> +        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
>                 "pcpu %d rq %d!\n",
>                 __func__,
>                 cpu, this_rqi,
> -               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
> -               scurr->vcpu->processor, other_rqi);
> +               scurr->vcpu, scurr->vcpu->processor, other_rqi);
>      }
>      BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
>  
> @@ -1755,12 +1734,8 @@ csched_schedule(
>              __runq_remove(snext);
>              if ( snext->vcpu->is_running )
>              {
> -                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
> -                       cpu,
> -                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
> -                       snext->vcpu->processor,
> -                       scurr->vcpu->domain->domain_id,
> -                       scurr->vcpu->vcpu_id);
> +                printk("p%d: snext %pv running on p%d! scurr %pv\n",
> +                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
>                  BUG();
>              }
>              set_bit(__CSFLAG_scheduled, &snext->flags);
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
>  
>          if ( v->affinity_broken )
>          {
> -            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
> -                   d->domain_id, v->vcpu_id);
> +            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
>              cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
>              v->affinity_broken = 0;
>          }
> @@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
>              if ( cpumask_empty(&online_affinity) &&
>                   cpumask_test_cpu(cpu, v->cpu_affinity) )
>              {
> -                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
> -                        d->domain_id, v->vcpu_id);
> +                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
>  
>                  if (system_state == SYS_STATE_suspend)
>                  {
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -19,6 +19,7 @@
>  #include <xen/ctype.h>
>  #include <xen/symbols.h>
>  #include <xen/lib.h>
> +#include <xen/sched.h>
>  #include <asm/div64.h>
>  #include <asm/page.h>
>  
> @@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
>  
>          return str;
>      }
> +
> +    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
> +    {
> +        const struct vcpu *v = arg;
> +
> +        ++*fmt_ptr;
> +        if ( str <= end )
> +            *str = 'd';
> +        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
> +        if ( str <= end )
> +            *str = 'v';
> +        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
> +    }
>      }
>  
>      if ( field_width == -1 )
> --- a/xen/include/xen/config.h
> +++ b/xen/include/xen/config.h
> @@ -74,12 +74,11 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -int current_domain_id(void);
>  #define dprintk(_l, _f, _a...)                              \
>      printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
>  #define gdprintk(_l, _f, _a...)                             \
> -    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
> -           __LINE__, current_domain_id() , ## _a )
> +    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
> +           __LINE__, current, ## _a )
>  
>  #endif /* !__ASSEMBLY__ */
>  
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------030401060001040204030404
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 25/02/14 10:00, Jan Beulich wrote:<br>
    </div>
    <blockquote cite="mid:530C77C8020000780011F114@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">... in a simplified and consistent way.

Signed-off-by: Jan Beulich <a class="moz-txt-link-rfc2396E" href="mailto:jbeulich@suse.com">&lt;jbeulich@suse.com&gt;</a></pre>
    </blockquote>
    <br>
    That is a remarkable reduction in code.<br>
    <br>
    Reviewed-by: Andrew Cooper <a class="moz-txt-link-rfc2396E" href="mailto:andrew.cooper3@citrix.com">&lt;andrew.cooper3@citrix.com&gt;</a><br>
    <br>
    <blockquote cite="mid:530C77C8020000780011F114@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">

--- a/docs/misc/printk-formats.txt
+++ b/docs/misc/printk-formats.txt
@@ -15,3 +15,6 @@ Symbol/Function pointers:
 
        In the case that an appropriate symbol name can't be found, %p[sS] will
        fall back to '%p' and print the address in hex.
+
+       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
+               "d&lt;domid&gt;v&lt;vcpuid&gt;")
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
     if ( ctxt-&gt;caps &amp; ~guest_mcg_cap &amp; ~MCG_CAP_COUNT &amp; ~MCG_CTL_P )
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
-                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
+                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
                 has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt-&gt;caps,
-                v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id,
-                guest_mcg_cap &amp; ~MCG_CAP_COUNT);
+                v, guest_mcg_cap &amp; ~MCG_CAP_COUNT);
         return -EPERM;
     }
 
@@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
               guest_has_trap_callback(d, v-&gt;vcpu_id, TRAP_machine_check)) &amp;&amp;
              !test_and_set_bool(v-&gt;mce_pending) )
         {
-            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
-                       d-&gt;domain_id, v-&gt;vcpu_id);
+            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
             vcpu_kick(v);
             ret = 0;
         }
         else
         {
-            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
-                       d-&gt;domain_id, v-&gt;vcpu_id);
+            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
             ret = -EBUSY;
             break;
         }
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
         if ( !warned )
         {
             warned = 1;
-            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
+            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
                    "(If you see this outside of debugging activity,"
                    " please report to <a class="moz-txt-link-abbreviated" href="mailto:xen-devel@lists.xenproject.org">xen-devel@lists.xenproject.org</a>)\n",
-                   v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id);
+                   v);
         }
         memset(reg, 0, sizeof(*reg));
         return;
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct 
         if ( !cpu_has(c, X86_FEATURE_DTES64) )
         {
             printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
             goto func_out;
         }
         vpmu_set(vpmu, VPMU_CPU_HAS_DS);
@@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct 
             /* If BTS_UNAVAIL is set reset the DS feature. */
             vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
             printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
         }
         else
         {
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu 
     mfn_t *oos;
     struct domain *d = v-&gt;domain;
 
-    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
-                  v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id, mfn_x(gmfn)); 
+    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
 
     for_each_vcpu(d, v) 
     {
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
 static void reserved_bit_page_fault(
     unsigned long addr, struct cpu_user_regs *regs)
 {
-    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
-           current-&gt;domain-&gt;domain_id, current-&gt;vcpu_id, regs-&gt;error_code);
+    printk("%pv: reserved bit in page table (ec=%04X)\n",
+           current, regs-&gt;error_code);
     show_page_walk(addr);
     show_execution_state(regs);
 }
@@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
         tb-&gt;flags |= TBF_INTERRUPT;
     if ( unlikely(null_trap_bounce(v, tb)) )
     {
-        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
-               v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id, error_code);
+        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
         show_page_walk(addr);
     }
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
 
 vcpu_info_t dummy_vcpu_info;
 
-int current_domain_id(void)
-{
-    return current-&gt;domain-&gt;domain_id;
-}
-
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
 
     if ( !is_idle_vcpu(current) )
     {
-        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
-               smp_processor_id(), current-&gt;domain-&gt;domain_id,
-               current-&gt;vcpu_id);
+        printk("*** Dumping CPU%u guest state (%pv): ***\n",
+               smp_processor_id(), current);
         show_execution_state(guest_cpu_user_regs());
         printk("\n");
     }
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
     struct list_head *iter;
     int pos = 0;
 
-    d2printk("rqi d%dv%d\n",
-           svc-&gt;vcpu-&gt;domain-&gt;domain_id,
-           svc-&gt;vcpu-&gt;vcpu_id);
+    d2printk("rqi %pv\n", svc-&gt;vcpu);
 
     BUG_ON(&amp;svc-&gt;rqd-&gt;runq != runq);
     /* Idle vcpus not allowed on the runqueue anymore */
@@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
 
         if ( svc-&gt;credit &gt; iter_svc-&gt;credit )
         {
-            d2printk(" p%d d%dv%d\n",
-                   pos,
-                   iter_svc-&gt;vcpu-&gt;domain-&gt;domain_id,
-                   iter_svc-&gt;vcpu-&gt;vcpu_id);
+            d2printk(" p%d %pv\n", pos, iter_svc-&gt;vcpu);
             break;
         }
         pos++;
@@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
     cpumask_t mask;
     struct csched_vcpu * cur;
 
-    d2printk("rqt d%dv%d cd%dv%d\n",
-             new-&gt;vcpu-&gt;domain-&gt;domain_id,
-             new-&gt;vcpu-&gt;vcpu_id,
-             current-&gt;domain-&gt;domain_id,
-             current-&gt;vcpu_id);
+    d2printk("rqt %pv curr %pv\n", new-&gt;vcpu, current);
 
     BUG_ON(new-&gt;vcpu-&gt;processor != cpu);
     BUG_ON(new-&gt;rqd != rqd);
@@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
         t2c_update(rqd, delta, svc);
         svc-&gt;start_time = now;
 
-        d2printk("b d%dv%d c%d\n",
-                 svc-&gt;vcpu-&gt;domain-&gt;domain_id,
-                 svc-&gt;vcpu-&gt;vcpu_id,
-                 svc-&gt;credit);
+        d2printk("b %pv c%d\n", svc-&gt;vcpu, svc-&gt;credit);
     } else {
         d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
                __func__, now, svc-&gt;start_time);
@@ -871,11 +859,9 @@ static void
 csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu *svc = vc-&gt;sched_priv;
-    struct domain * const dom = vc-&gt;domain;
     struct csched_dom * const sdom = svc-&gt;sdom;
 
-    printk("%s: Inserting d%dv%d\n",
-           __func__, dom-&gt;domain_id, vc-&gt;vcpu_id);
+    printk("%s: Inserting %pv\n", __func__, vc);
 
     /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
      * been called for that cpu.
@@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler 
 
     /* Schedule lock should be held at this point. */
 
-    d2printk("w d%dv%d\n", vc-&gt;domain-&gt;domain_id, vc-&gt;vcpu_id);
+    d2printk("w %pv\n", vc);
 
     BUG_ON( is_idle_vcpu(vc) );
 
@@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops, 
     {
         if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &amp;svc-&gt;flags) )
         {
-            d2printk("d%dv%d -\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id);
+            d2printk("%pv -\n", svc-&gt;vcpu);
             clear_bit(__CSFLAG_runq_migrate_request, &amp;svc-&gt;flags);
         }
         /* Leave it where it is for now.  When we actually pay attention
@@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops, 
         }
         else
         {
-            d2printk("d%dv%d +\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id);
+            d2printk("%pv +\n", svc-&gt;vcpu);
             new_cpu = cpumask_cycle(vc-&gt;processor, &amp;svc-&gt;migrate_rqd-&gt;active);
             goto out_up;
         }
@@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
 {
     if ( test_bit(__CSFLAG_scheduled, &amp;svc-&gt;flags) )
     {
-        d2printk("d%dv%d %d-%d a\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id,
-                 svc-&gt;rqd-&gt;id, trqd-&gt;id);
+        d2printk("%pv %d-%d a\n", svc-&gt;vcpu, svc-&gt;rqd-&gt;id, trqd-&gt;id);
         /* It's running; mark it to migrate. */
         svc-&gt;migrate_rqd = trqd;
         set_bit(_VPF_migrating, &amp;svc-&gt;vcpu-&gt;pause_flags);
@@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
     {
         int on_runq=0;
         /* It's not running; just move it */
-        d2printk("d%dv%d %d-%d i\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id,
-                 svc-&gt;rqd-&gt;id, trqd-&gt;id);
+        d2printk("%pv %d-%d i\n", svc-&gt;vcpu, svc-&gt;rqd-&gt;id, trqd-&gt;id);
         if ( __vcpu_on_runq(svc) )
         {
             __runq_remove(svc);
@@ -1662,11 +1646,7 @@ csched_schedule(
     SCHED_STAT_CRANK(schedule);
     CSCHED_VCPU_CHECK(current);
 
-    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
-             cpu,
-             scurr-&gt;vcpu-&gt;domain-&gt;domain_id,
-             scurr-&gt;vcpu-&gt;vcpu_id,
-             now);
+    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr-&gt;vcpu, now);
 
     BUG_ON(!cpumask_test_cpu(cpu, &amp;CSCHED_PRIV(ops)-&gt;initialized));
 
@@ -1693,12 +1673,11 @@ csched_schedule(
                 }
             }
         }
-        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
+        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
                "pcpu %d rq %d!\n",
                __func__,
                cpu, this_rqi,
-               scurr-&gt;vcpu-&gt;domain-&gt;domain_id, scurr-&gt;vcpu-&gt;vcpu_id,
-               scurr-&gt;vcpu-&gt;processor, other_rqi);
+               scurr-&gt;vcpu, scurr-&gt;vcpu-&gt;processor, other_rqi);
     }
     BUG_ON(!is_idle_vcpu(scurr-&gt;vcpu) &amp;&amp; scurr-&gt;rqd != rqd);
 
@@ -1755,12 +1734,8 @@ csched_schedule(
             __runq_remove(snext);
             if ( snext-&gt;vcpu-&gt;is_running )
             {
-                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
-                       cpu,
-                       snext-&gt;vcpu-&gt;domain-&gt;domain_id, snext-&gt;vcpu-&gt;vcpu_id,
-                       snext-&gt;vcpu-&gt;processor,
-                       scurr-&gt;vcpu-&gt;domain-&gt;domain_id,
-                       scurr-&gt;vcpu-&gt;vcpu_id);
+                printk("p%d: snext %pv running on p%d! scurr %pv\n",
+                       cpu, snext-&gt;vcpu, snext-&gt;vcpu-&gt;processor, scurr-&gt;vcpu);
                 BUG();
             }
             set_bit(__CSFLAG_scheduled, &amp;snext-&gt;flags);
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
 
         if ( v-&gt;affinity_broken )
         {
-            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
-                   d-&gt;domain_id, v-&gt;vcpu_id);
+            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
             cpumask_copy(v-&gt;cpu_affinity, v-&gt;cpu_affinity_saved);
             v-&gt;affinity_broken = 0;
         }
@@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
             if ( cpumask_empty(&amp;online_affinity) &amp;&amp;
                  cpumask_test_cpu(cpu, v-&gt;cpu_affinity) )
             {
-                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
-                        d-&gt;domain_id, v-&gt;vcpu_id);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
 
                 if (system_state == SYS_STATE_suspend)
                 {
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -19,6 +19,7 @@
 #include &lt;xen/ctype.h&gt;
 #include &lt;xen/symbols.h&gt;
 #include &lt;xen/lib.h&gt;
+#include &lt;xen/sched.h&gt;
 #include &lt;asm/div64.h&gt;
 #include &lt;asm/page.h&gt;
 
@@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
 
         return str;
     }
+
+    case 'v': /* d&lt;domain-id&gt;v&lt;vcpu-id&gt; from a struct vcpu */
+    {
+        const struct vcpu *v = arg;
+
+        ++*fmt_ptr;
+        if ( str &lt;= end )
+            *str = 'd';
+        str = number(str + 1, end, v-&gt;domain-&gt;domain_id, 10, -1, -1, 0);
+        if ( str &lt;= end )
+            *str = 'v';
+        return number(str + 1, end, v-&gt;vcpu_id, 10, -1, -1, 0);
+    }
     }
 
     if ( field_width == -1 )
--- a/xen/include/xen/config.h
+++ b/xen/include/xen/config.h
@@ -74,12 +74,11 @@
 
 #ifndef __ASSEMBLY__
 
-int current_domain_id(void);
 #define dprintk(_l, _f, _a...)                              \
     printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
 #define gdprintk(_l, _f, _a...)                             \
-    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
-           __LINE__, current_domain_id() , ## _a )
+    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
+           __LINE__, current, ## _a )
 
 #endif /* !__ASSEMBLY__ */
 


</pre>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>

--------------030401060001040204030404--


--===============2922351061638859674==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2922351061638859674==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:37:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFNy-000884-D7; Tue, 25 Feb 2014 10:37:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFNx-00087z-4H
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:37:13 +0000
Received: from [193.109.254.147:52332] by server-14.bemta-14.messagelabs.com
	id 73/43-29228-8527C035; Tue, 25 Feb 2014 10:37:12 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393324629!6631873!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28384 invoked from network); 25 Feb 2014 10:37:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 10:37:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; 
	d="scan'208,217";a="103828950"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 10:37:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 05:37:07 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIFNr-0005gX-I7;
	Tue, 25 Feb 2014 10:37:07 +0000
Message-ID: <530C7253.6040504@citrix.com>
Date: Tue, 25 Feb 2014 10:37:07 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
In-Reply-To: <530C77C8020000780011F114@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2922351061638859674=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2922351061638859674==
Content-Type: multipart/alternative;
	boundary="------------030401060001040204030404"

--------------030401060001040204030404
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 25/02/14 10:00, Jan Beulich wrote:
> ... in a simplified and consistent way.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

That is a remarkable reduction in code.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/docs/misc/printk-formats.txt
> +++ b/docs/misc/printk-formats.txt
> @@ -15,3 +15,6 @@ Symbol/Function pointers:
>  
>         In the case that an appropriate symbol name can't be found, %p[sS] will
>         fall back to '%p' and print the address in hex.
> +
> +       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
> +               "d<domid>v<vcpuid>")
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
>      if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
>      {
>          dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
> -                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
> +                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
>                  has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
> -                v->domain->domain_id, v->vcpu_id,
> -                guest_mcg_cap & ~MCG_CAP_COUNT);
> +                v, guest_mcg_cap & ~MCG_CAP_COUNT);
>          return -EPERM;
>      }
>  
> @@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
>                guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
>               !test_and_set_bool(v->mce_pending) )
>          {
> -            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
>              vcpu_kick(v);
>              ret = 0;
>          }
>          else
>          {
> -            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
>              ret = -EBUSY;
>              break;
>          }
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
>          if ( !warned )
>          {
>              warned = 1;
> -            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
> +            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
>                     "(If you see this outside of debugging activity,"
>                     " please report to xen-devel@lists.xenproject.org)\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   v);
>          }
>          memset(reg, 0, sizeof(*reg));
>          return;
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct 
>          if ( !cpu_has(c, X86_FEATURE_DTES64) )
>          {
>              printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>              goto func_out;
>          }
>          vpmu_set(vpmu, VPMU_CPU_HAS_DS);
> @@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct 
>              /* If BTS_UNAVAIL is set reset the DS feature. */
>              vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
>              printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>          }
>          else
>          {
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu 
>      mfn_t *oos;
>      struct domain *d = v->domain;
>  
> -    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
> -                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn)); 
> +    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
>  
>      for_each_vcpu(d, v) 
>      {
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
>  static void reserved_bit_page_fault(
>      unsigned long addr, struct cpu_user_regs *regs)
>  {
> -    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
> -           current->domain->domain_id, current->vcpu_id, regs->error_code);
> +    printk("%pv: reserved bit in page table (ec=%04X)\n",
> +           current, regs->error_code);
>      show_page_walk(addr);
>      show_execution_state(regs);
>  }
> @@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
>          tb->flags |= TBF_INTERRUPT;
>      if ( unlikely(null_trap_bounce(v, tb)) )
>      {
> -        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> -               v->domain->domain_id, v->vcpu_id, error_code);
> +        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
>          show_page_walk(addr);
>      }
>  
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
>  
>  vcpu_info_t dummy_vcpu_info;
>  
> -int current_domain_id(void)
> -{
> -    return current->domain->domain_id;
> -}
> -
>  static void __domain_finalise_shutdown(struct domain *d)
>  {
>      struct vcpu *v;
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
>  
>      if ( !is_idle_vcpu(current) )
>      {
> -        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
> -               smp_processor_id(), current->domain->domain_id,
> -               current->vcpu_id);
> +        printk("*** Dumping CPU%u guest state (%pv): ***\n",
> +               smp_processor_id(), current);
>          show_execution_state(guest_cpu_user_regs());
>          printk("\n");
>      }
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
>      struct list_head *iter;
>      int pos = 0;
>  
> -    d2printk("rqi d%dv%d\n",
> -           svc->vcpu->domain->domain_id,
> -           svc->vcpu->vcpu_id);
> +    d2printk("rqi %pv\n", svc->vcpu);
>  
>      BUG_ON(&svc->rqd->runq != runq);
>      /* Idle vcpus not allowed on the runqueue anymore */
> @@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
>  
>          if ( svc->credit > iter_svc->credit )
>          {
> -            d2printk(" p%d d%dv%d\n",
> -                   pos,
> -                   iter_svc->vcpu->domain->domain_id,
> -                   iter_svc->vcpu->vcpu_id);
> +            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
>              break;
>          }
>          pos++;
> @@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
>      cpumask_t mask;
>      struct csched_vcpu * cur;
>  
> -    d2printk("rqt d%dv%d cd%dv%d\n",
> -             new->vcpu->domain->domain_id,
> -             new->vcpu->vcpu_id,
> -             current->domain->domain_id,
> -             current->vcpu_id);
> +    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
>  
>      BUG_ON(new->vcpu->processor != cpu);
>      BUG_ON(new->rqd != rqd);
> @@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
>          t2c_update(rqd, delta, svc);
>          svc->start_time = now;
>  
> -        d2printk("b d%dv%d c%d\n",
> -                 svc->vcpu->domain->domain_id,
> -                 svc->vcpu->vcpu_id,
> -                 svc->credit);
> +        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
>      } else {
>          d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
>                 __func__, now, svc->start_time);
> @@ -871,11 +859,9 @@ static void
>  csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>  {
>      struct csched_vcpu *svc = vc->sched_priv;
> -    struct domain * const dom = vc->domain;
>      struct csched_dom * const sdom = svc->sdom;
>  
> -    printk("%s: Inserting d%dv%d\n",
> -           __func__, dom->domain_id, vc->vcpu_id);
> +    printk("%s: Inserting %pv\n", __func__, vc);
>  
>      /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
>       * been called for that cpu.
> @@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler 
>  
>      /* Schedule lock should be held at this point. */
>  
> -    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
> +    d2printk("w %pv\n", vc);
>  
>      BUG_ON( is_idle_vcpu(vc) );
>  
> @@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops, 
>      {
>          if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
>          {
> -            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv -\n", svc->vcpu);
>              clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
>          }
>          /* Leave it where it is for now.  When we actually pay attention
> @@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops, 
>          }
>          else
>          {
> -            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv +\n", svc->vcpu);
>              new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
>              goto out_up;
>          }
> @@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
>  {
>      if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
>      {
> -        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
>          /* It's running; mark it to migrate. */
>          svc->migrate_rqd = trqd;
>          set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
> @@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
>      {
>          int on_runq=0;
>          /* It's not running; just move it */
> -        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
>          if ( __vcpu_on_runq(svc) )
>          {
>              __runq_remove(svc);
> @@ -1662,11 +1646,7 @@ csched_schedule(
>      SCHED_STAT_CRANK(schedule);
>      CSCHED_VCPU_CHECK(current);
>  
> -    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
> -             cpu,
> -             scurr->vcpu->domain->domain_id,
> -             scurr->vcpu->vcpu_id,
> -             now);
> +    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
>  
>      BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
>  
> @@ -1693,12 +1673,11 @@ csched_schedule(
>                  }
>              }
>          }
> -        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
> +        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
>                 "pcpu %d rq %d!\n",
>                 __func__,
>                 cpu, this_rqi,
> -               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
> -               scurr->vcpu->processor, other_rqi);
> +               scurr->vcpu, scurr->vcpu->processor, other_rqi);
>      }
>      BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
>  
> @@ -1755,12 +1734,8 @@ csched_schedule(
>              __runq_remove(snext);
>              if ( snext->vcpu->is_running )
>              {
> -                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
> -                       cpu,
> -                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
> -                       snext->vcpu->processor,
> -                       scurr->vcpu->domain->domain_id,
> -                       scurr->vcpu->vcpu_id);
> +                printk("p%d: snext %pv running on p%d! scurr %pv\n",
> +                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
>                  BUG();
>              }
>              set_bit(__CSFLAG_scheduled, &snext->flags);
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
>  
>          if ( v->affinity_broken )
>          {
> -            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
> -                   d->domain_id, v->vcpu_id);
> +            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
>              cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
>              v->affinity_broken = 0;
>          }
> @@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
>              if ( cpumask_empty(&online_affinity) &&
>                   cpumask_test_cpu(cpu, v->cpu_affinity) )
>              {
> -                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
> -                        d->domain_id, v->vcpu_id);
> +                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
>  
>                  if (system_state == SYS_STATE_suspend)
>                  {
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -19,6 +19,7 @@
>  #include <xen/ctype.h>
>  #include <xen/symbols.h>
>  #include <xen/lib.h>
> +#include <xen/sched.h>
>  #include <asm/div64.h>
>  #include <asm/page.h>
>  
> @@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
>  
>          return str;
>      }
> +
> +    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
> +    {
> +        const struct vcpu *v = arg;
> +
> +        ++*fmt_ptr;
> +        if ( str <= end )
> +            *str = 'd';
> +        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
> +        if ( str <= end )
> +            *str = 'v';
> +        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
> +    }
>      }
>  
>      if ( field_width == -1 )
> --- a/xen/include/xen/config.h
> +++ b/xen/include/xen/config.h
> @@ -74,12 +74,11 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -int current_domain_id(void);
>  #define dprintk(_l, _f, _a...)                              \
>      printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
>  #define gdprintk(_l, _f, _a...)                             \
> -    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
> -           __LINE__, current_domain_id() , ## _a )
> +    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
> +           __LINE__, current, ## _a )
>  
>  #endif /* !__ASSEMBLY__ */
>  
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------030401060001040204030404
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 25/02/14 10:00, Jan Beulich wrote:<br>
    </div>
    <blockquote cite="mid:530C77C8020000780011F114@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">... in a simplified and consistent way.

Signed-off-by: Jan Beulich <a class="moz-txt-link-rfc2396E" href="mailto:jbeulich@suse.com">&lt;jbeulich@suse.com&gt;</a></pre>
    </blockquote>
    <br>
    That is a remarkable reduction in code.<br>
    <br>
    Reviewed-by: Andrew Cooper <a class="moz-txt-link-rfc2396E" href="mailto:andrew.cooper3@citrix.com">&lt;andrew.cooper3@citrix.com&gt;</a><br>
    <br>
    <blockquote cite="mid:530C77C8020000780011F114@nat28.tlf.novell.com"
      type="cite">
      <pre wrap="">

--- a/docs/misc/printk-formats.txt
+++ b/docs/misc/printk-formats.txt
@@ -15,3 +15,6 @@ Symbol/Function pointers:
 
        In the case that an appropriate symbol name can't be found, %p[sS] will
        fall back to '%p' and print the address in hex.
+
+       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
+               "d&lt;domid&gt;v&lt;vcpuid&gt;")
--- a/xen/arch/x86/cpu/mcheck/vmce.c
+++ b/xen/arch/x86/cpu/mcheck/vmce.c
@@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
     if ( ctxt-&gt;caps &amp; ~guest_mcg_cap &amp; ~MCG_CAP_COUNT &amp; ~MCG_CTL_P )
     {
         dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
-                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
+                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
                 has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt-&gt;caps,
-                v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id,
-                guest_mcg_cap &amp; ~MCG_CAP_COUNT);
+                v, guest_mcg_cap &amp; ~MCG_CAP_COUNT);
         return -EPERM;
     }
 
@@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
               guest_has_trap_callback(d, v-&gt;vcpu_id, TRAP_machine_check)) &amp;&amp;
              !test_and_set_bool(v-&gt;mce_pending) )
         {
-            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
-                       d-&gt;domain_id, v-&gt;vcpu_id);
+            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
             vcpu_kick(v);
             ret = 0;
         }
         else
         {
-            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
-                       d-&gt;domain_id, v-&gt;vcpu_id);
+            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
             ret = -EBUSY;
             break;
         }
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
         if ( !warned )
         {
             warned = 1;
-            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
+            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
                    "(If you see this outside of debugging activity,"
                    " please report to <a class="moz-txt-link-abbreviated" href="mailto:xen-devel@lists.xenproject.org">xen-devel@lists.xenproject.org</a>)\n",
-                   v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id);
+                   v);
         }
         memset(reg, 0, sizeof(*reg));
         return;
--- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
+++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
@@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct 
         if ( !cpu_has(c, X86_FEATURE_DTES64) )
         {
             printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
             goto func_out;
         }
         vpmu_set(vpmu, VPMU_CPU_HAS_DS);
@@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct 
             /* If BTS_UNAVAIL is set reset the DS feature. */
             vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
             printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
-                   " - Debug Store disabled for d%d:v%d\n",
-                   v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id);
+                   " - Debug Store disabled for %pv\n",
+                   v);
         }
         else
         {
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu 
     mfn_t *oos;
     struct domain *d = v-&gt;domain;
 
-    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
-                  v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id, mfn_x(gmfn)); 
+    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
 
     for_each_vcpu(d, v) 
     {
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
 static void reserved_bit_page_fault(
     unsigned long addr, struct cpu_user_regs *regs)
 {
-    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
-           current-&gt;domain-&gt;domain_id, current-&gt;vcpu_id, regs-&gt;error_code);
+    printk("%pv: reserved bit in page table (ec=%04X)\n",
+           current, regs-&gt;error_code);
     show_page_walk(addr);
     show_execution_state(regs);
 }
@@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
         tb-&gt;flags |= TBF_INTERRUPT;
     if ( unlikely(null_trap_bounce(v, tb)) )
     {
-        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
-               v-&gt;domain-&gt;domain_id, v-&gt;vcpu_id, error_code);
+        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
         show_page_walk(addr);
     }
 
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
 
 vcpu_info_t dummy_vcpu_info;
 
-int current_domain_id(void)
-{
-    return current-&gt;domain-&gt;domain_id;
-}
-
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
 
     if ( !is_idle_vcpu(current) )
     {
-        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
-               smp_processor_id(), current-&gt;domain-&gt;domain_id,
-               current-&gt;vcpu_id);
+        printk("*** Dumping CPU%u guest state (%pv): ***\n",
+               smp_processor_id(), current);
         show_execution_state(guest_cpu_user_regs());
         printk("\n");
     }
--- a/xen/common/sched_credit2.c
+++ b/xen/common/sched_credit2.c
@@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
     struct list_head *iter;
     int pos = 0;
 
-    d2printk("rqi d%dv%d\n",
-           svc-&gt;vcpu-&gt;domain-&gt;domain_id,
-           svc-&gt;vcpu-&gt;vcpu_id);
+    d2printk("rqi %pv\n", svc-&gt;vcpu);
 
     BUG_ON(&amp;svc-&gt;rqd-&gt;runq != runq);
     /* Idle vcpus not allowed on the runqueue anymore */
@@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
 
         if ( svc-&gt;credit &gt; iter_svc-&gt;credit )
         {
-            d2printk(" p%d d%dv%d\n",
-                   pos,
-                   iter_svc-&gt;vcpu-&gt;domain-&gt;domain_id,
-                   iter_svc-&gt;vcpu-&gt;vcpu_id);
+            d2printk(" p%d %pv\n", pos, iter_svc-&gt;vcpu);
             break;
         }
         pos++;
@@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
     cpumask_t mask;
     struct csched_vcpu * cur;
 
-    d2printk("rqt d%dv%d cd%dv%d\n",
-             new-&gt;vcpu-&gt;domain-&gt;domain_id,
-             new-&gt;vcpu-&gt;vcpu_id,
-             current-&gt;domain-&gt;domain_id,
-             current-&gt;vcpu_id);
+    d2printk("rqt %pv curr %pv\n", new-&gt;vcpu, current);
 
     BUG_ON(new-&gt;vcpu-&gt;processor != cpu);
     BUG_ON(new-&gt;rqd != rqd);
@@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
         t2c_update(rqd, delta, svc);
         svc-&gt;start_time = now;
 
-        d2printk("b d%dv%d c%d\n",
-                 svc-&gt;vcpu-&gt;domain-&gt;domain_id,
-                 svc-&gt;vcpu-&gt;vcpu_id,
-                 svc-&gt;credit);
+        d2printk("b %pv c%d\n", svc-&gt;vcpu, svc-&gt;credit);
     } else {
         d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
                __func__, now, svc-&gt;start_time);
@@ -871,11 +859,9 @@ static void
 csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
 {
     struct csched_vcpu *svc = vc-&gt;sched_priv;
-    struct domain * const dom = vc-&gt;domain;
     struct csched_dom * const sdom = svc-&gt;sdom;
 
-    printk("%s: Inserting d%dv%d\n",
-           __func__, dom-&gt;domain_id, vc-&gt;vcpu_id);
+    printk("%s: Inserting %pv\n", __func__, vc);
 
     /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
      * been called for that cpu.
@@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler 
 
     /* Schedule lock should be held at this point. */
 
-    d2printk("w d%dv%d\n", vc-&gt;domain-&gt;domain_id, vc-&gt;vcpu_id);
+    d2printk("w %pv\n", vc);
 
     BUG_ON( is_idle_vcpu(vc) );
 
@@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops, 
     {
         if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &amp;svc-&gt;flags) )
         {
-            d2printk("d%dv%d -\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id);
+            d2printk("%pv -\n", svc-&gt;vcpu);
             clear_bit(__CSFLAG_runq_migrate_request, &amp;svc-&gt;flags);
         }
         /* Leave it where it is for now.  When we actually pay attention
@@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops, 
         }
         else
         {
-            d2printk("d%dv%d +\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id);
+            d2printk("%pv +\n", svc-&gt;vcpu);
             new_cpu = cpumask_cycle(vc-&gt;processor, &amp;svc-&gt;migrate_rqd-&gt;active);
             goto out_up;
         }
@@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
 {
     if ( test_bit(__CSFLAG_scheduled, &amp;svc-&gt;flags) )
     {
-        d2printk("d%dv%d %d-%d a\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id,
-                 svc-&gt;rqd-&gt;id, trqd-&gt;id);
+        d2printk("%pv %d-%d a\n", svc-&gt;vcpu, svc-&gt;rqd-&gt;id, trqd-&gt;id);
         /* It's running; mark it to migrate. */
         svc-&gt;migrate_rqd = trqd;
         set_bit(_VPF_migrating, &amp;svc-&gt;vcpu-&gt;pause_flags);
@@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
     {
         int on_runq=0;
         /* It's not running; just move it */
-        d2printk("d%dv%d %d-%d i\n", svc-&gt;vcpu-&gt;domain-&gt;domain_id, svc-&gt;vcpu-&gt;vcpu_id,
-                 svc-&gt;rqd-&gt;id, trqd-&gt;id);
+        d2printk("%pv %d-%d i\n", svc-&gt;vcpu, svc-&gt;rqd-&gt;id, trqd-&gt;id);
         if ( __vcpu_on_runq(svc) )
         {
             __runq_remove(svc);
@@ -1662,11 +1646,7 @@ csched_schedule(
     SCHED_STAT_CRANK(schedule);
     CSCHED_VCPU_CHECK(current);
 
-    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
-             cpu,
-             scurr-&gt;vcpu-&gt;domain-&gt;domain_id,
-             scurr-&gt;vcpu-&gt;vcpu_id,
-             now);
+    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr-&gt;vcpu, now);
 
     BUG_ON(!cpumask_test_cpu(cpu, &amp;CSCHED_PRIV(ops)-&gt;initialized));
 
@@ -1693,12 +1673,11 @@ csched_schedule(
                 }
             }
         }
-        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
+        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
                "pcpu %d rq %d!\n",
                __func__,
                cpu, this_rqi,
-               scurr-&gt;vcpu-&gt;domain-&gt;domain_id, scurr-&gt;vcpu-&gt;vcpu_id,
-               scurr-&gt;vcpu-&gt;processor, other_rqi);
+               scurr-&gt;vcpu, scurr-&gt;vcpu-&gt;processor, other_rqi);
     }
     BUG_ON(!is_idle_vcpu(scurr-&gt;vcpu) &amp;&amp; scurr-&gt;rqd != rqd);
 
@@ -1755,12 +1734,8 @@ csched_schedule(
             __runq_remove(snext);
             if ( snext-&gt;vcpu-&gt;is_running )
             {
-                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
-                       cpu,
-                       snext-&gt;vcpu-&gt;domain-&gt;domain_id, snext-&gt;vcpu-&gt;vcpu_id,
-                       snext-&gt;vcpu-&gt;processor,
-                       scurr-&gt;vcpu-&gt;domain-&gt;domain_id,
-                       scurr-&gt;vcpu-&gt;vcpu_id);
+                printk("p%d: snext %pv running on p%d! scurr %pv\n",
+                       cpu, snext-&gt;vcpu, snext-&gt;vcpu-&gt;processor, scurr-&gt;vcpu);
                 BUG();
             }
             set_bit(__CSFLAG_scheduled, &amp;snext-&gt;flags);
--- a/xen/common/schedule.c
+++ b/xen/common/schedule.c
@@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
 
         if ( v-&gt;affinity_broken )
         {
-            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
-                   d-&gt;domain_id, v-&gt;vcpu_id);
+            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
             cpumask_copy(v-&gt;cpu_affinity, v-&gt;cpu_affinity_saved);
             v-&gt;affinity_broken = 0;
         }
@@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
             if ( cpumask_empty(&amp;online_affinity) &amp;&amp;
                  cpumask_test_cpu(cpu, v-&gt;cpu_affinity) )
             {
-                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
-                        d-&gt;domain_id, v-&gt;vcpu_id);
+                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
 
                 if (system_state == SYS_STATE_suspend)
                 {
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -19,6 +19,7 @@
 #include &lt;xen/ctype.h&gt;
 #include &lt;xen/symbols.h&gt;
 #include &lt;xen/lib.h&gt;
+#include &lt;xen/sched.h&gt;
 #include &lt;asm/div64.h&gt;
 #include &lt;asm/page.h&gt;
 
@@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
 
         return str;
     }
+
+    case 'v': /* d&lt;domain-id&gt;v&lt;vcpu-id&gt; from a struct vcpu */
+    {
+        const struct vcpu *v = arg;
+
+        ++*fmt_ptr;
+        if ( str &lt;= end )
+            *str = 'd';
+        str = number(str + 1, end, v-&gt;domain-&gt;domain_id, 10, -1, -1, 0);
+        if ( str &lt;= end )
+            *str = 'v';
+        return number(str + 1, end, v-&gt;vcpu_id, 10, -1, -1, 0);
+    }
     }
 
     if ( field_width == -1 )
--- a/xen/include/xen/config.h
+++ b/xen/include/xen/config.h
@@ -74,12 +74,11 @@
 
 #ifndef __ASSEMBLY__
 
-int current_domain_id(void);
 #define dprintk(_l, _f, _a...)                              \
     printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
 #define gdprintk(_l, _f, _a...)                             \
-    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
-           __LINE__, current_domain_id() , ## _a )
+    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
+           __LINE__, current, ## _a )
 
 #endif /* !__ASSEMBLY__ */
 


</pre>
      <br>
      <fieldset class="mimeAttachmentHeader"></fieldset>
      <br>
      <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
    </blockquote>
    <br>
  </body>
</html>


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:42:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:42:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFT8-0008Hw-EG; Tue, 25 Feb 2014 10:42:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFT7-0008Hq-4E
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:42:33 +0000
Received: from [85.158.139.211:30943] by server-2.bemta-5.messagelabs.com id
	AB/40-23037-6937C035; Tue, 25 Feb 2014 10:42:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393324949!6046339!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23534 invoked from network); 25 Feb 2014 10:42:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:42:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:42:29 +0000
Message-Id: <530C81A4020000780011F182@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:42:28 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 0/4] xsm/flask: more XSA-84 follow-ups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

1: flask: add compat mode guest support
2: flask: use xzalloc()
3: xsm: use # printk format modifier
4: xsm: streamline xsm_default_action()

Signed-off-by: Jan Beulich <jbeulich@suse.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 10:44:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFUv-0008Nx-0J; Tue, 25 Feb 2014 10:44:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFUt-0008Np-ES
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:44:23 +0000
Received: from [85.158.139.211:50301] by server-11.bemta-5.messagelabs.com id
	A9/0B-23886-6047C035; Tue, 25 Feb 2014 10:44:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393325061!6098167!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12450 invoked from network); 25 Feb 2014 10:44:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:44:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:44:21 +0000
Message-Id: <530C8213020000780011F18D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:44:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part50620213.0__="
Cc: dgdegra@tycho.nsa.gov, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 1/4] flask: add compat mode guest support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--=__Part50620213.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... which has been missing since the introduction of the new interface
in the 4.2 development cycle.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
         .quad compat_vcpu_op
         .quad compat_ni_hypercall       /* 25 */
         .quad compat_mmuext_op
-        .quad do_xsm_op
+        .quad compat_xsm_op
         .quad compat_nmi_op
         .quad compat_sched_op
         .quad compat_callback_op        /* 30 */
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
 headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
+headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h
 cppflags-$(CONFIG_X86)    += -m32
@@ -69,7 +70,9 @@ compat/xlat.h: xlat.lst $(filter-out com
 	export PYTHON=$(PYTHON); \
 	grep -v '^[	 ]*#' xlat.lst | \
 	while read what name hdr; do \
-		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g') || exit $$?; \
+		hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \
+		echo '$(headers-y)' | grep -q "$$hdr" || continue; \
+		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr || exit $$?; \
 	done >$@.new
 	mv -f $@.new $@
 
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -99,3 +99,16 @@
 !	vcpu_set_singleshot_timer	vcpu.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
+?	flask_access			xsm/flask_op.h
+!	flask_boolean			xsm/flask_op.h
+?	flask_cache_stats		xsm/flask_op.h
+?	flask_hash_stats		xsm/flask_op.h
+!	flask_load			xsm/flask_op.h
+?	flask_ocontext			xsm/flask_op.h
+?	flask_peersid			xsm/flask_op.h
+?	flask_relabel			xsm/flask_op.h
+?	flask_setavc_threshold		xsm/flask_op.h
+?	flask_setenforce		xsm/flask_op.h
+!	flask_sid_context		xsm/flask_op.h
+?	flask_transition		xsm/flask_op.h
+!	flask_userlist			xsm/flask_op.h
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
     return -ENOSYS;
 }
 
+#ifdef CONFIG_COMPAT
+static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+#endif
+
 static XSM_INLINE char *xsm_show_irq_sid(int irq)
 {
     return NULL;
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -129,6 +129,9 @@ struct xsm_operations {
     int (*tmem_control)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#ifdef CONFIG_COMPAT
+    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#endif
 
     int (*hvm_param) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
@@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
     return xsm_ops->do_xsm_op(op);
 }
 
+#ifdef CONFIG_COMPAT
+static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_ops->do_compat_op(op);
+}
+#endif
+
 static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
 {
     return xsm_ops->hvm_param(d, op);
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, hvm_param_nested);
 
     set_to_dummy_if_null(ops, do_xsm_op);
+#ifdef CONFIG_COMPAT
+    set_to_dummy_if_null(ops, do_compat_op);
+#endif
 
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -7,7 +7,7 @@
  *  it under the terms of the GNU General Public License version 2,
  *  as published by the Free Software Foundation.
  */
-
+#ifndef COMPAT
 #include <xen/errno.h>
 #include <xen/event.h>
 #include <xsm/xsm.h>
@@ -20,6 +20,10 @@
 #include <objsec.h>
 #include <conditional.h>
 
+#define ret_t long
+#define _copy_to_guest copy_to_guest
+#define _copy_from_guest copy_from_guest
+
 #ifdef FLASK_DEVELOP
 int flask_enforcing = 0;
 integer_param("flask_enforcing", flask_enforcing);
@@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_user(struct xen_flask_userlist *arg)
 {
     char *user;
@@ -119,7 +125,7 @@ static int flask_security_user(struct xe
 
     arg->size = nsids;
 
-    if ( copy_to_guest(arg->u.sids, sids, nsids) )
+    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
         rv = -EFAULT;
 
     xfree(sids);
@@ -128,6 +134,8 @@ static int flask_security_user(struct xe
     return rv;
 }
 
+#ifndef COMPAT
+
 static int flask_security_relabel(struct xen_flask_transition *arg)
 {
     int rv;
@@ -208,6 +216,8 @@ static int flask_security_setenforce(str
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_context(struct xen_flask_sid_context *arg)
 {
     int rv;
@@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
 
     arg->size = len;
 
-    if ( !rv && copy_to_guest(arg->context, context, len) )
+    if ( !rv && _copy_to_guest(arg->context, context, len) )
         rv = -EFAULT;
 
     xfree(context);
@@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
     return rv;
 }
 
+#ifndef COMPAT
+
 int flask_disable(void)
 {
     static int flask_disabled = 0;
@@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
     return rv;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
 {
     char *name;
@@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
     return rv;
 }
 
-static int flask_security_commit_bools(void)
-{
-    int rv;
-
-    spin_lock(&sel_sem);
-
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
-    if ( rv )
-        goto out;
-
-    if ( bool_pending_values )
-        rv = security_set_bools(bool_num, bool_pending_values);
-    
- out:
-    spin_unlock(&sel_sem);
-    return rv;
-}
-
 static int flask_security_get_bool(struct xen_flask_boolean *arg)
 {
     int rv;
@@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
             rv = -ERANGE;
         arg->size = nameout_len;
  
-        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
+        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
             rv = -EFAULT;
         xfree(nameout);
     }
@@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
     return rv;
 }
 
+#ifndef COMPAT
+
+static int flask_security_commit_bools(void)
+{
+    int rv;
+
+    spin_lock(&sel_sem);
+
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    if ( rv )
+        goto out;
+
+    if ( bool_pending_values )
+        rv = security_set_bools(bool_num, bool_pending_values);
+
+ out:
+    spin_unlock(&sel_sem);
+    return rv;
+}
+
 static int flask_security_make_bools(void)
 {
     int ret = 0;
@@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
 }
 
 #endif
+#endif /* COMPAT */
 
 static int flask_security_load(struct xen_flask_load *load)
 {
@@ -501,7 +518,7 @@ static int flask_security_load(struct xe
     if ( !buf )
         return -ENOMEM;
 
-    if ( copy_from_guest(buf, load->buffer, load->size) )
+    if ( _copy_from_guest(buf, load->buffer, load->size) )
     {
         ret = -EFAULT;
         goto out_free;
@@ -524,6 +541,8 @@ static int flask_security_load(struct xe
     return ret;
 }
 
+#ifndef COMPAT
+
 static int flask_ocontext_del(struct xen_flask_ocontext *arg)
 {
     int rv;
@@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
     return rc;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
+#endif /* !COMPAT */
+
+ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
@@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
  out:
     return rv;
 }
+
+#ifndef COMPAT
+#undef _copy_to_guest
+#define _copy_to_guest copy_to_compat
+#undef _copy_from_guest
+#define _copy_from_guest copy_from_compat
+
+#include <compat/event_channel.h>
+#include <compat/xsm/flask_op.h>
+
+CHECK_flask_access;
+CHECK_flask_cache_stats;
+CHECK_flask_hash_stats;
+CHECK_flask_ocontext;
+CHECK_flask_peersid;
+CHECK_flask_relabel;
+CHECK_flask_setavc_threshold;
+CHECK_flask_setenforce;
+CHECK_flask_transition;
+
+#define COMPAT
+#define flask_copyin_string(ch, pb, sz, mx) ({ \
+	XEN_GUEST_HANDLE_PARAM(char) gh; \
+	guest_from_compat_handle(gh, ch); \
+	flask_copyin_string(gh, pb, sz, mx); \
+})
+
+#define xen_flask_load compat_flask_load
+#define flask_security_load compat_security_load
+
+#define xen_flask_userlist compat_flask_userlist
+#define flask_security_user compat_security_user
+
+#define xen_flask_sid_context compat_flask_sid_context
+#define flask_security_context compat_security_context
+#define flask_security_sid compat_security_sid
+
+#define xen_flask_boolean compat_flask_boolean
+#define flask_security_resolve_bool compat_security_resolve_bool
+#define flask_security_get_bool compat_security_get_bool
+#define flask_security_set_bool compat_security_set_bool
+
+#define xen_flask_op_t compat_flask_op_t
+#undef ret_t
+#define ret_t int
+#define do_flask_op compat_flask_op
+
+#include "flask_op.c"
+#endif
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1464,6 +1464,7 @@ static int flask_map_gmfn_foreign(struct
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
@@ -1538,6 +1539,9 @@ static struct xsm_operations flask_ops =
     .hvm_param_nested = flask_hvm_param_nested,
 
     .do_xsm_op = do_flask_op,
+#ifdef CONFIG_COMPAT
+    .do_compat_op = compat_flask_op,
+#endif
 
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
     return xsm_do_xsm_op(op);
 }
=20
-
+#ifdef CONFIG_COMPAT
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_do_compat_op(op);
+}
+#endif



--=__Part50620213.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part50620213.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:44:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFUv-0008Nx-0J; Tue, 25 Feb 2014 10:44:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFUt-0008Np-ES
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:44:23 +0000
Received: from [85.158.139.211:50301] by server-11.bemta-5.messagelabs.com id
	A9/0B-23886-6047C035; Tue, 25 Feb 2014 10:44:22 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393325061!6098167!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12450 invoked from network); 25 Feb 2014 10:44:21 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:44:21 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:44:21 +0000
Message-Id: <530C8213020000780011F18D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:44:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part50620213.0__="
Cc: dgdegra@tycho.nsa.gov, Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH 1/4] flask: add compat mode guest support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part50620213.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

... which has been missing since the introduction of the new interface
in the 4.2 development cycle.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
         .quad compat_vcpu_op
         .quad compat_ni_hypercall       /* 25 */
         .quad compat_mmuext_op
-        .quad do_xsm_op
+        .quad compat_xsm_op
         .quad compat_nmi_op
         .quad compat_sched_op
         .quad compat_callback_op        /* 30 */
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
 headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
+headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h
 cppflags-$(CONFIG_X86)    += -m32
@@ -69,7 +70,9 @@ compat/xlat.h: xlat.lst $(filter-out com
 	export PYTHON=$(PYTHON); \
 	grep -v '^[	 ]*#' xlat.lst | \
 	while read what name hdr; do \
-		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g') || exit $$?; \
+		hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \
+		echo '$(headers-y)' | grep -q "$$hdr" || continue; \
+		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr || exit $$?; \
 	done >$@.new
 	mv -f $@.new $@
 
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -99,3 +99,16 @@
 !	vcpu_set_singleshot_timer	vcpu.h
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
+?	flask_access			xsm/flask_op.h
+!	flask_boolean			xsm/flask_op.h
+?	flask_cache_stats		xsm/flask_op.h
+?	flask_hash_stats		xsm/flask_op.h
+!	flask_load			xsm/flask_op.h
+?	flask_ocontext			xsm/flask_op.h
+?	flask_peersid			xsm/flask_op.h
+?	flask_relabel			xsm/flask_op.h
+?	flask_setavc_threshold		xsm/flask_op.h
+?	flask_setenforce		xsm/flask_op.h
+!	flask_sid_context		xsm/flask_op.h
+?	flask_transition		xsm/flask_op.h
+!	flask_userlist			xsm/flask_op.h
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
     return -ENOSYS;
 }
 
+#ifdef CONFIG_COMPAT
+static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return -ENOSYS;
+}
+#endif
+
 static XSM_INLINE char *xsm_show_irq_sid(int irq)
 {
     return NULL;
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -129,6 +129,9 @@ struct xsm_operations {
     int (*tmem_control)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#ifdef CONFIG_COMPAT
+    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
+#endif
 
     int (*hvm_param) (struct domain *d, unsigned long op);
     int (*hvm_param_nested) (struct domain *d);
@@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
     return xsm_ops->do_xsm_op(op);
 }
 
+#ifdef CONFIG_COMPAT
+static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_ops->do_compat_op(op);
+}
+#endif
+
 static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
 {
     return xsm_ops->hvm_param(d, op);
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
     set_to_dummy_if_null(ops, hvm_param_nested);
 
     set_to_dummy_if_null(ops, do_xsm_op);
+#ifdef CONFIG_COMPAT
+    set_to_dummy_if_null(ops, do_compat_op);
+#endif
 
     set_to_dummy_if_null(ops, add_to_physmap);
     set_to_dummy_if_null(ops, remove_from_physmap);
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -7,7 +7,7 @@
  *  it under the terms of the GNU General Public License version 2,
  *  as published by the Free Software Foundation.
  */
-
+#ifndef COMPAT
 #include <xen/errno.h>
 #include <xen/event.h>
 #include <xsm/xsm.h>
@@ -20,6 +20,10 @@
 #include <objsec.h>
 #include <conditional.h>
 
+#define ret_t long
+#define _copy_to_guest copy_to_guest
+#define _copy_from_guest copy_from_guest
+
 #ifdef FLASK_DEVELOP
 int flask_enforcing = 0;
 integer_param("flask_enforcing", flask_enforcing);
@@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_user(struct xen_flask_userlist *arg)
 {
     char *user;
@@ -119,7 +125,7 @@ static int flask_security_user(struct xe
 
     arg->size = nsids;
 
-    if ( copy_to_guest(arg->u.sids, sids, nsids) )
+    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
         rv = -EFAULT;
 
     xfree(sids);
@@ -128,6 +134,8 @@ static int flask_security_user(struct xe
     return rv;
 }
 
+#ifndef COMPAT
+
 static int flask_security_relabel(struct xen_flask_transition *arg)
 {
     int rv;
@@ -208,6 +216,8 @@ static int flask_security_setenforce(str
     return 0;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_context(struct xen_flask_sid_context *arg)
 {
     int rv;
@@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
 
     arg->size = len;
 
-    if ( !rv && copy_to_guest(arg->context, context, len) )
+    if ( !rv && _copy_to_guest(arg->context, context, len) )
         rv = -EFAULT;
 
     xfree(context);
@@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
     return rv;
 }
 
+#ifndef COMPAT
+
 int flask_disable(void)
 {
     static int flask_disabled = 0;
@@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
     return rv;
 }
 
+#endif /* COMPAT */
+
 static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
 {
     char *name;
@@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
     return rv;
 }
 
-static int flask_security_commit_bools(void)
-{
-    int rv;
-
-    spin_lock(&sel_sem);
-
-    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
-    if ( rv )
-        goto out;
-
-    if ( bool_pending_values )
-        rv = security_set_bools(bool_num, bool_pending_values);
-    
- out:
-    spin_unlock(&sel_sem);
-    return rv;
-}
-
 static int flask_security_get_bool(struct xen_flask_boolean *arg)
 {
     int rv;
@@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
             rv = -ERANGE;
         arg->size = nameout_len;
  
-        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
+        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
             rv =3D -EFAULT;
         xfree(nameout);
     }
@@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
     return rv;
 }
 
+#ifndef COMPAT
+
+static int flask_security_commit_bools(void)
+{
+    int rv;
+
+    spin_lock(&sel_sem);
+
+    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
+    if ( rv )
+        goto out;
+
+    if ( bool_pending_values )
+        rv = security_set_bools(bool_num, bool_pending_values);
+
+ out:
+    spin_unlock(&sel_sem);
+    return rv;
+}
+
 static int flask_security_make_bools(void)
 {
     int ret = 0;
@@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
 }
 
 #endif
+#endif /* COMPAT */
 
 static int flask_security_load(struct xen_flask_load *load)
 {
@@ -501,7 +518,7 @@ static int flask_security_load(struct xe
     if ( !buf )
         return -ENOMEM;
 
-    if ( copy_from_guest(buf, load->buffer, load->size) )
+    if ( _copy_from_guest(buf, load->buffer, load->size) )
     {
         ret =3D -EFAULT;
         goto out_free;
@@ -524,6 +541,8 @@ static int flask_security_load(struct xe
     return ret;
 }
 
+#ifndef COMPAT
+
 static int flask_ocontext_del(struct xen_flask_ocontext *arg)
 {
     int rv;
@@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
     return rc;
 }
 
-long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
+#endif /* !COMPAT */
+
+ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
 {
     xen_flask_op_t op;
     int rv;
@@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
  out:
     return rv;
 }
+
+#ifndef COMPAT
+#undef _copy_to_guest
+#define _copy_to_guest copy_to_compat
+#undef _copy_from_guest
+#define _copy_from_guest copy_from_compat
+
+#include <compat/event_channel.h>
+#include <compat/xsm/flask_op.h>
+
+CHECK_flask_access;
+CHECK_flask_cache_stats;
+CHECK_flask_hash_stats;
+CHECK_flask_ocontext;
+CHECK_flask_peersid;
+CHECK_flask_relabel;
+CHECK_flask_setavc_threshold;
+CHECK_flask_setenforce;
+CHECK_flask_transition;
+
+#define COMPAT
+#define flask_copyin_string(ch, pb, sz, mx) ({ \
+	XEN_GUEST_HANDLE_PARAM(char) gh; \
+	guest_from_compat_handle(gh, ch); \
+	flask_copyin_string(gh, pb, sz, mx); \
+})
+
+#define xen_flask_load compat_flask_load
+#define flask_security_load compat_security_load
+
+#define xen_flask_userlist compat_flask_userlist
+#define flask_security_user compat_security_user
+
+#define xen_flask_sid_context compat_flask_sid_context
+#define flask_security_context compat_security_context
+#define flask_security_sid compat_security_sid
+
+#define xen_flask_boolean compat_flask_boolean
+#define flask_security_resolve_bool compat_security_resolve_bool
+#define flask_security_get_bool compat_security_get_bool
+#define flask_security_set_bool compat_security_set_bool
+
+#define xen_flask_op_t compat_flask_op_t
+#undef ret_t
+#define ret_t int
+#define do_flask_op compat_flask_op
+
+#include "flask_op.c"
+#endif
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1464,6 +1464,7 @@ static int flask_map_gmfn_foreign(struct
 #endif
 
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
 static struct xsm_operations flask_ops = {
     .security_domaininfo = flask_security_domaininfo,
@@ -1538,6 +1539,9 @@ static struct xsm_operations flask_ops =
     .hvm_param_nested = flask_hvm_param_nested,
 
     .do_xsm_op = do_flask_op,
+#ifdef CONFIG_COMPAT
+    .do_compat_op = compat_flask_op,
+#endif
 
     .add_to_physmap = flask_add_to_physmap,
     .remove_from_physmap = flask_remove_from_physmap,
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
     return xsm_do_xsm_op(op);
 }
 
-
+#ifdef CONFIG_COMPAT
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
+{
+    return xsm_do_compat_op(op);
+}
+#endif



return rv;=0A+}=0A+=0A static int flask_security_make_bools(void)=0A {=0A  =
   int ret =3D 0;=0A@@ -484,6 +500,7 @@ static int flask_security_avc_cache=
stats=0A }=0A =0A #endif=0A+#endif /* COMPAT */=0A =0A static int =
flask_security_load(struct xen_flask_load *load)=0A {=0A@@ -501,7 +518,7 =
@@ static int flask_security_load(struct xe=0A     if ( !buf )=0A         =
return -ENOMEM;=0A =0A-    if ( copy_from_guest(buf, load->buffer, =
load->size) )=0A+    if ( _copy_from_guest(buf, load->buffer, load->size) =
)=0A     {=0A         ret =3D -EFAULT;=0A         goto out_free;=0A@@ =
-524,6 +541,8 @@ static int flask_security_load(struct xe=0A     return =
ret;=0A }=0A =0A+#ifndef COMPAT=0A+=0A static int flask_ocontext_del(struct=
 xen_flask_ocontext *arg)=0A {=0A     int rv;=0A@@ -636,7 +655,9 @@ static =
int flask_relabel_domain(struct x=0A     return rc;=0A }=0A =0A-long =
do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)=0A+#endif /* =
!COMPAT */=0A+=0A+ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) =
u_flask_op)=0A {=0A     xen_flask_op_t op;=0A     int rv;=0A@@ -763,3 =
+784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(=0A  out:=0A     return =
rv;=0A }=0A+=0A+#ifndef COMPAT=0A+#undef _copy_to_guest=0A+#define =
_copy_to_guest copy_to_compat=0A+#undef _copy_from_guest=0A+#define =
_copy_from_guest copy_from_compat=0A+=0A+#include <compat/event_channel.h>=
=0A+#include <compat/xsm/flask_op.h>=0A+=0A+CHECK_flask_access;=0A+CHECK_fl=
ask_cache_stats;=0A+CHECK_flask_hash_stats;=0A+CHECK_flask_ocontext;=0A+CHE=
CK_flask_peersid;=0A+CHECK_flask_relabel;=0A+CHECK_flask_setavc_threshold;=
=0A+CHECK_flask_setenforce;=0A+CHECK_flask_transition;=0A+=0A+#define =
COMPAT=0A+#define flask_copyin_string(ch, pb, sz, mx) ({ \=0A+	XEN_GUEST_H=
ANDLE_PARAM(char) gh; \=0A+	guest_from_compat_handle(gh, ch); \=0A+	=
flask_copyin_string(gh, pb, sz, mx); \=0A+})=0A+=0A+#define xen_flask_load =
compat_flask_load=0A+#define flask_security_load compat_security_load=0A+=
=0A+#define xen_flask_userlist compat_flask_userlist=0A+#define flask_secur=
ity_user compat_security_user=0A+=0A+#define xen_flask_sid_context =
compat_flask_sid_context=0A+#define flask_security_context compat_security_=
context=0A+#define flask_security_sid compat_security_sid=0A+=0A+#define =
xen_flask_boolean compat_flask_boolean=0A+#define flask_security_resolve_bo=
ol compat_security_resolve_bool=0A+#define flask_security_get_bool =
compat_security_get_bool=0A+#define flask_security_set_bool compat_security=
_set_bool=0A+=0A+#define xen_flask_op_t compat_flask_op_t=0A+#undef =
ret_t=0A+#define ret_t int=0A+#define do_flask_op compat_flask_op=0A+=0A+#i=
nclude "flask_op.c"=0A+#endif=0A--- a/xen/xsm/flask/hooks.c=0A+++ =
b/xen/xsm/flask/hooks.c=0A@@ -1464,6 +1464,7 @@ static int flask_map_gmfn_f=
oreign(struct=0A #endif=0A =0A long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_=
op_t) u_flask_op);=0A+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) =
u_flask_op);=0A =0A static struct xsm_operations flask_ops =3D {=0A     =
.security_domaininfo =3D flask_security_domaininfo,=0A@@ -1538,6 +1539,9 =
@@ static struct xsm_operations flask_ops =3D=0A     .hvm_param_nested =3D =
flask_hvm_param_nested,=0A =0A     .do_xsm_op =3D do_flask_op,=0A+#ifdef =
CONFIG_COMPAT=0A+    .do_compat_op =3D compat_flask_op,=0A+#endif=0A =0A   =
  .add_to_physmap =3D flask_add_to_physmap,=0A     .remove_from_physmap =
=3D flask_remove_from_physmap,=0A--- a/xen/xsm/xsm_core.c=0A+++ b/xen/xsm/x=
sm_core.c=0A@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x=0A=
     return xsm_do_xsm_op(op);=0A }=0A =0A-=0A+#ifdef CONFIG_COMPAT=0A+int =
compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)=0A+{=0A+    return =
xsm_do_compat_op(op);=0A+}=0A+#endif=0A
--=__Part50620213.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part50620213.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:44:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFVG-0008QQ-DW; Tue, 25 Feb 2014 10:44:46 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFVE-0008QA-Vt
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 10:44:45 +0000
Received: from [193.109.254.147:10581] by server-12.bemta-14.messagelabs.com
	id 99/C9-17220-C147C035; Tue, 25 Feb 2014 10:44:44 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393325082!6616192!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17624 invoked from network); 25 Feb 2014 10:44:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 10:44:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="105475072"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 10:44:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 05:44:41 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIFVA-0005lr-Qy; Tue, 25 Feb 2014 10:44:40 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 10:44:39 +0000
Message-ID: <1393325079-28303-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/time: Remove redundant RTC REG_B read
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

RTC_ALWAYS_BCD is always defined by default, meaning that we will
unconditionally enter the if statement.  Reordering the condition allows
short-circuit evaluation to remove a redundant CMOS read.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/time.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 6e31e1f..82492c1 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -661,7 +661,7 @@ static unsigned long __get_cmos_time(void)
     mon  = CMOS_READ(RTC_MONTH);
     year = CMOS_READ(RTC_YEAR);
     
-    if ( !(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD )
+    if ( RTC_ALWAYS_BCD || !(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) )
     {
         BCD_TO_BIN(sec);
         BCD_TO_BIN(min);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 10:45:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:45:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFVZ-0008UH-QY; Tue, 25 Feb 2014 10:45:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFVW-0008Tc-Is
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:45:04 +0000
Received: from [85.158.139.211:20024] by server-6.bemta-5.messagelabs.com id
	0D/25-14342-D247C035; Tue, 25 Feb 2014 10:45:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393325100!6077589!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14615 invoked from network); 25 Feb 2014 10:45:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:45:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:45:00 +0000
Message-Id: <530C823A020000780011F191@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:44:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part794B2B3A.0__="
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 2/4] flask: use xzalloc()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part794B2B3A.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -360,11 +360,10 @@ static struct avc_node *avc_alloc_node(v
 {
     struct avc_node *node;
 
-    node = xmalloc(struct avc_node);
+    node = xzalloc(struct avc_node);
     if (!node)
         goto out;
 
-    memset(node, 0, sizeof(*node));
     INIT_RCU_HEAD(&node->rhead);
     INIT_HLIST_NODE(&node->list);
     avc_cache_stats_incr(allocations);
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -132,13 +132,10 @@ static int flask_domain_alloc_security(s
 {
     struct domain_security_struct *dsec;
 
-    dsec = xmalloc(struct domain_security_struct);
-
+    dsec = xzalloc(struct domain_security_struct);
     if ( !dsec )
         return -ENOMEM;
 
-    memset(dsec, 0, sizeof(struct domain_security_struct));
-
     switch ( d->domain_id )
     {
     case DOMID_IDLE:
@@ -294,13 +291,10 @@ static int flask_alloc_security_evtchn(s
 {
     struct evtchn_security_struct *esec;
 
-    esec = xmalloc(struct evtchn_security_struct);
-
+    esec = xzalloc(struct evtchn_security_struct);
     if ( !esec )
         return -ENOMEM;
 
-    memset(esec, 0, sizeof(struct evtchn_security_struct));
-
     esec->sid = SECINITSID_UNLABELED;
 
     chn->ssid = esec;
--- a/xen/xsm/flask/ss/avtab.c
+++ b/xen/xsm/flask/ss/avtab.c
@@ -38,11 +38,10 @@ static struct avtab_node* avtab_insert_n
     struct avtab_node * prev, struct avtab_node * cur, struct avtab_key *key, 
                                                     struct avtab_datum *datum)
 {
-    struct avtab_node * newnode;
-    newnode = xmalloc(struct avtab_node);
+    struct avtab_node *newnode = xzalloc(struct avtab_node);
+
     if ( newnode == NULL )
         return NULL;
-    memset(newnode, 0, sizeof(struct avtab_node));
     newnode->key = *key;
     newnode->datum = *datum;
     if ( prev )
--- a/xen/xsm/flask/ss/conditional.c
+++ b/xen/xsm/flask/ss/conditional.c
@@ -228,10 +228,9 @@ int cond_read_bool(struct policydb *p, s
     u32 len;
     int rc;
 
-    booldatum = xmalloc(struct cond_bool_datum);
+    booldatum = xzalloc(struct cond_bool_datum);
     if ( !booldatum )
         return -1;
-    memset(booldatum, 0, sizeof(struct cond_bool_datum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -343,10 +342,9 @@ static int cond_insertf(struct avtab *a,
         goto err;
     }
 
-    list = xmalloc(struct cond_av_list);
+    list = xzalloc(struct cond_av_list);
     if ( !list )
         goto err;
-    memset(list, 0, sizeof(*list));
 
     list->node = node_ptr;
     if ( !data->head )
@@ -441,12 +439,9 @@ static int cond_read_node(struct policyd
         if ( rc < 0 )
             goto err;
 
-        expr = xmalloc(struct cond_expr);
+        expr = xzalloc(struct cond_expr);
         if ( !expr )
-        {
             goto err;
-        }
-        memset(expr, 0, sizeof(struct cond_expr));
 
         expr->expr_type = le32_to_cpu(buf[0]);
         expr->bool = le32_to_cpu(buf[1]);
@@ -494,10 +489,9 @@ int cond_read_list(struct policydb *p, v
 
     for ( i = 0; i < len; i++ )
     {
-        node = xmalloc(struct cond_node);
+        node = xzalloc(struct cond_node);
         if ( !node )
             goto err;
-        memset(node, 0, sizeof(struct cond_node));
 
         if ( cond_read_node(p, node, fp) != 0 )
             goto err;
--- a/xen/xsm/flask/ss/ebitmap.c
+++ b/xen/xsm/flask/ss/ebitmap.c
@@ -50,13 +50,12 @@ int ebitmap_cpy(struct ebitmap *dst, str
     prev = NULL;
     while ( n )
     {
-        new = xmalloc(struct ebitmap_node);
+        new = xzalloc(struct ebitmap_node);
         if ( !new )
         {
             ebitmap_destroy(dst);
             return -ENOMEM;
         }
-        memset(new, 0, sizeof(*new));
         new->startbit = n->startbit;
         memcpy(new->maps, n->maps, EBITMAP_SIZE / 8);
         new->next = NULL;
@@ -176,10 +175,9 @@ int ebitmap_set_bit(struct ebitmap *e, u
     if ( !value )
         return 0;
 
-    new = xmalloc(struct ebitmap_node);
+    new = xzalloc(struct ebitmap_node);
     if ( !new )
         return -ENOMEM;
-    memset(new, 0, sizeof(*new));
 
     new->startbit = bit - (bit % EBITMAP_SIZE);
     ebitmap_node_set_bit(new, bit);
@@ -284,8 +282,8 @@ int ebitmap_read(struct ebitmap *e, void
 
         if ( !n || startbit >= n->startbit + EBITMAP_SIZE )
         {
-            struct ebitmap_node *tmp;
-            tmp = xmalloc(struct ebitmap_node);
+            struct ebitmap_node *tmp = xzalloc(struct ebitmap_node);
+
             if ( !tmp )
             {
                 printk(KERN_ERR
@@ -293,7 +291,6 @@ int ebitmap_read(struct ebitmap *e, void
                 rc = -ENOMEM;
                 goto bad;
             }
-            memset(tmp, 0, sizeof(*tmp));
             /* round down */
             tmp->startbit = startbit - (startbit % EBITMAP_SIZE);
             if ( n )
--- a/xen/xsm/flask/ss/hashtab.c
+++ b/xen/xsm/flask/ss/hashtab.c
@@ -16,28 +16,21 @@ struct hashtab *hashtab_create(u32 (*has
             int (*keycmp)(struct hashtab *h, const void *key1,
 			  const void *key2), u32 size)
 {
-    struct hashtab *p;
-    u32 i;
+    struct hashtab *p = xzalloc(struct hashtab);
 
-    p = xmalloc(struct hashtab);
     if ( p == NULL )
         return p;
 
-    memset(p, 0, sizeof(*p));
     p->size = size;
-    p->nel = 0;
     p->hash_value = hash_value;
     p->keycmp = keycmp;
-    p->htable = xmalloc_array(struct hashtab_node *, size);
+    p->htable = xzalloc_array(struct hashtab_node *, size);
     if ( p->htable == NULL )
     {
         xfree(p);
         return NULL;
     }
 
-    for ( i = 0; i < size; i++ )
-        p->htable[i] = NULL;
-
     return p;
 }
 
@@ -61,10 +54,9 @@ int hashtab_insert(struct hashtab *h, vo
     if ( cur && (h->keycmp(h, key, cur->key) == 0) )
         return -EEXIST;
 
-    newnode = xmalloc(struct hashtab_node);
+    newnode = xzalloc(struct hashtab_node);
     if ( newnode == NULL )
         return -ENOMEM;
-    memset(newnode, 0, sizeof(*newnode));
     newnode->key = key;
     newnode->datum = datum;
     if ( prev )
--- a/xen/xsm/flask/ss/policydb.c
+++ b/xen/xsm/flask/ss/policydb.c
@@ -166,13 +166,12 @@ static int roles_init(struct policydb *p
     int rc;
     struct role_datum *role;
 
-    role = xmalloc(struct role_datum);
+    role = xzalloc(struct role_datum);
     if ( !role )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(role, 0, sizeof(*role));
     role->value = ++p->p_roles.nprim;
     if ( role->value != OBJECT_R_VAL )
     {
@@ -950,13 +949,12 @@ static int perm_read(struct policydb *p,
     __le32 buf[2];
     u32 len;
 
-    perdatum = xmalloc(struct perm_datum);
+    perdatum = xzalloc(struct perm_datum);
     if ( !perdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(perdatum, 0, sizeof(*perdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -994,13 +992,12 @@ static int common_read(struct policydb *
     u32 len, nel;
     int i, rc;
 
-    comdatum = xmalloc(struct common_datum);
+    comdatum = xzalloc(struct common_datum);
     if ( !comdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(comdatum, 0, sizeof(*comdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -1055,10 +1052,9 @@ static int read_cons_helper(struct const
     lc = NULL;
     for ( i = 0; i < ncons; i++ )
     {
-        c = xmalloc(struct constraint_node);
+        c = xzalloc(struct constraint_node);
         if ( !c )
             return -ENOMEM;
-        memset(c, 0, sizeof(*c));
 
         if ( lc )
         {
@@ -1078,10 +1074,9 @@ static int read_cons_helper(struct const
         depth = -1;
         for ( j = 0; j < nexpr; j++ )
         {
-            e = xmalloc(struct constraint_expr);
+            e = xzalloc(struct constraint_expr);
             if ( !e )
                 return -ENOMEM;
-            memset(e, 0, sizeof(*e));
 
             if ( le )
                 le->next = e;
@@ -1142,13 +1137,12 @@ static int class_read(struct policydb *p
     u32 len, len2, ncons, nel;
     int i, rc;
 
-    cladatum = xmalloc(struct class_datum);
+    cladatum = xzalloc(struct class_datum);
     if ( !cladatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(cladatum, 0, sizeof(*cladatum));
 
     rc = next_entry(buf, fp, sizeof(u32)*6);
     if ( rc < 0 )
@@ -1226,13 +1220,12 @@ static int role_read(struct policydb *p,
     __le32 buf[3];
     u32 len;
 
-    role = xmalloc(struct role_datum);
+    role = xzalloc(struct role_datum);
     if ( !role )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(role, 0, sizeof(*role));
 
     if ( p->policyvers >= POLICYDB_VERSION_BOUNDARY )
         rc = next_entry(buf, fp, sizeof(buf[0]) * 3);
@@ -1297,13 +1290,12 @@ static int type_read(struct policydb *p,
     __le32 buf[4];
     u32 len;
 
-    typdatum = xmalloc(struct type_datum);
+    typdatum = xzalloc(struct type_datum);
     if ( !typdatum )
     {
         rc = -ENOMEM;
         return rc;
     }
-    memset(typdatum, 0, sizeof(*typdatum));
 
     if ( p->policyvers >= POLICYDB_VERSION_BOUNDARY )
         rc = next_entry(buf, fp, sizeof(buf[0]) * 4);
@@ -1391,13 +1383,12 @@ static int user_read(struct policydb *p,
     __le32 buf[3];
     u32 len;
 
-    usrdatum = xmalloc(struct user_datum);
+    usrdatum = xzalloc(struct user_datum);
     if ( !usrdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(usrdatum, 0, sizeof(*usrdatum));
 
     if ( p->policyvers >= POLICYDB_VERSION_BOUNDARY )
         rc = next_entry(buf, fp, sizeof(buf[0]) * 3);
@@ -1455,13 +1446,12 @@ static int sens_read(struct policydb *p,
     __le32 buf[2];
     u32 len;
 
-    levdatum = xmalloc(struct level_datum);
+    levdatum = xzalloc(struct level_datum);
     if ( !levdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(levdatum, 0, sizeof(*levdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -1511,13 +1501,12 @@ static int cat_read(struct policydb *p,
     __le32 buf[3];
     u32 len;
 
-    catdatum = xmalloc(struct cat_datum);
+    catdatum = xzalloc(struct cat_datum);
     if ( !catdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(catdatum, 0, sizeof(*catdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -1875,13 +1864,12 @@ int policydb_read(struct policydb *p, vo
     ltr = NULL;
     for ( i = 0; i < nel; i++ )
     {
-        tr = xmalloc(struct role_trans);
+        tr = xzalloc(struct role_trans);
         if ( !tr )
         {
             rc = -ENOMEM;
             goto bad;
         }
-        memset(tr, 0, sizeof(*tr));
         if ( ltr )
             ltr->next = tr;
         else
@@ -1909,13 +1897,12 @@ int policydb_read(struct policydb *p, vo
     lra = NULL;
     for ( i = 0; i < nel; i++ )
     {
-        ra = xmalloc(struct role_allow);
+        ra = xzalloc(struct role_allow);
         if ( !ra )
         {
             rc = -ENOMEM;
             goto bad;
         }
-        memset(ra, 0, sizeof(*ra));
         if ( lra )
             lra->next = ra;
         else
@@ -1951,13 +1938,12 @@ int policydb_read(struct policydb *p, vo
         l = NULL;
         for ( j = 0; j < nel; j++ )
         {
-            c = xmalloc(struct ocontext);
+            c = xzalloc(struct ocontext);
             if ( !c )
             {
                 rc = -ENOMEM;
                 goto bad;
             }
-            memset(c, 0, sizeof(*c));
             if ( l )
                 l->next = c;
             else
@@ -2067,13 +2053,12 @@ int policydb_read(struct policydb *p, vo
         lrt = NULL;
         for ( i = 0; i < nel; i++ )
         {
-            rt = xmalloc(struct range_trans);
+            rt = xzalloc(struct range_trans);
             if ( !rt )
             {
                 rc = -ENOMEM;
                 goto bad;
             }
-            memset(rt, 0, sizeof(*rt));
             if ( lrt )
                 lrt->next = rt;
             else
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -1771,13 +1771,12 @@ int security_get_user_sids(u32 fromsid,
     }
     usercon.user = user->value;
 
-    mysids = xmalloc_array(u32, maxnel);
+    mysids = xzalloc_array(u32, maxnel);
     if ( !mysids )
     {
         rc = -ENOMEM;
         goto out_unlock;
     }
-    memset(mysids, 0, maxnel*sizeof(*mysids));
 
     ebitmap_for_each_positive_bit(&user->roles, rnode, i)
     {
@@ -1808,14 +1807,13 @@ int security_get_user_sids(u32 fromsid,
             else
             {
                 maxnel += SIDS_NEL;
-                mysids2 = xmalloc_array(u32, maxnel);
+                mysids2 = xzalloc_array(u32, maxnel);
                 if ( !mysids2 )
                 {
                     rc = -ENOMEM;
                     xfree(mysids);
                     goto out_unlock;
                 }
-                memset(mysids2, 0, maxnel*sizeof(*mysids2));
                 memcpy(mysids2, mysids, mynel * sizeof(*mysids2));
                 xfree(mysids);
                 mysids = mysids2;
@@ -1868,14 +1866,14 @@ int security_get_bools(int *len, char **
         goto out;
     }
 
-    if ( names ) {
-        *names = (char**)xmalloc_array(char*, *len);
+    if ( names )
+    {
+        *names = xzalloc_array(char *, *len);
         if ( !*names )
             goto err;
-        memset(*names, 0, sizeof(char*) * *len);
     }
 
-    *values = (int*)xmalloc_array(int, *len);
+    *values = xmalloc_array(int, *len);
     if ( !*values )
         goto err;
=20
@@ -2059,9 +2057,8 @@ int security_ocontext_add( u32 ocon, uns
     struct ocontext *prev;
     struct ocontext *add;
=20
-    if ( (add = xmalloc(struct ocontext)) == NULL )
+    if ( (add = xzalloc(struct ocontext)) == NULL )
         return -ENOMEM;
-    memset(add, 0, sizeof(struct ocontext));
     add->sid[0] = sid;
 
     POLICY_WRLOCK;



--=__Part794B2B3A.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part794B2B3A.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:45:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:45:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFVZ-0008UH-QY; Tue, 25 Feb 2014 10:45:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFVW-0008Tc-Is
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:45:04 +0000
Received: from [85.158.139.211:20024] by server-6.bemta-5.messagelabs.com id
	0D/25-14342-D247C035; Tue, 25 Feb 2014 10:45:01 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393325100!6077589!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14615 invoked from network); 25 Feb 2014 10:45:01 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:45:01 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:45:00 +0000
Message-Id: <530C823A020000780011F191@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:44:58 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part794B2B3A.0__="
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 2/4] flask: use xzalloc()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part794B2B3A.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>
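The whole series is a mechanical conversion: each xmalloc() followed by a
memset(..., 0, ...) collapses into a single xzalloc() (likewise
xmalloc_array()+memset() into xzalloc_array()), which hands back zero-filled
memory. A minimal userspace sketch of the idea, with calloc() standing in for
the hypervisor allocator and a cut-down struct ocontext invented purely for
the demo:

```c
#include <stdlib.h>
#include <assert.h>

/* Hypothetical userspace stand-ins for Xen's allocator helpers; the real
 * xzalloc()/xzalloc_array() live in the hypervisor headers, and calloc()
 * here merely models their zero-filling behaviour. */
#define xzalloc(type)          ((type *)calloc(1, sizeof(type)))
#define xzalloc_array(type, n) ((type *)calloc((n), sizeof(type)))
#define xfree(p)               free(p)

/* Cut-down illustration of the allocated object, invented for this demo. */
struct ocontext {
    unsigned int sid[2];
    struct ocontext *next;
};

/* Returns 1 if the allocation came back fully zeroed, -1 on OOM. */
int demo(void)
{
    /* Before the patch: add = xmalloc(struct ocontext);
     *                   memset(add, 0, sizeof(struct ocontext));
     * After:            one call, memory already zero-filled. */
    struct ocontext *add = xzalloc(struct ocontext);

    if ( add == NULL )
        return -1;

    int zeroed = (add->sid[0] == 0 && add->sid[1] == 0 && add->next == NULL);
    xfree(add);
    return zeroed;
}
```

Besides saving a call, zero-filled allocation also lets the patch drop
open-coded clearing loops, e.g. the per-slot NULL assignments removed from
hashtab_create() below.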

--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -360,11 +360,10 @@ static struct avc_node *avc_alloc_node(v
 {
     struct avc_node *node;
 
-    node = xmalloc(struct avc_node);
+    node = xzalloc(struct avc_node);
     if (!node)
         goto out;
 
-    memset(node, 0, sizeof(*node));
     INIT_RCU_HEAD(&node->rhead);
     INIT_HLIST_NODE(&node->list);
     avc_cache_stats_incr(allocations);
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -132,13 +132,10 @@ static int flask_domain_alloc_security(s
 {
     struct domain_security_struct *dsec;
 
-    dsec = xmalloc(struct domain_security_struct);
-
+    dsec = xzalloc(struct domain_security_struct);
     if ( !dsec )
         return -ENOMEM;
 
-    memset(dsec, 0, sizeof(struct domain_security_struct));
-
     switch ( d->domain_id )
     {
     case DOMID_IDLE:
@@ -294,13 +291,10 @@ static int flask_alloc_security_evtchn(s
 {
     struct evtchn_security_struct *esec;
 
-    esec = xmalloc(struct evtchn_security_struct);
-
+    esec = xzalloc(struct evtchn_security_struct);
     if ( !esec )
         return -ENOMEM;
 
-    memset(esec, 0, sizeof(struct evtchn_security_struct));
-
     esec->sid = SECINITSID_UNLABELED;
 
     chn->ssid = esec;
--- a/xen/xsm/flask/ss/avtab.c
+++ b/xen/xsm/flask/ss/avtab.c
@@ -38,11 +38,10 @@ static struct avtab_node* avtab_insert_n
     struct avtab_node * prev, struct avtab_node * cur, struct avtab_key *key, 
                                                     struct avtab_datum *datum)
 {
-    struct avtab_node * newnode;
-    newnode = xmalloc(struct avtab_node);
+    struct avtab_node *newnode = xzalloc(struct avtab_node);
+
     if ( newnode == NULL )
         return NULL;
-    memset(newnode, 0, sizeof(struct avtab_node));
     newnode->key = *key;
     newnode->datum = *datum;
     if ( prev )
--- a/xen/xsm/flask/ss/conditional.c
+++ b/xen/xsm/flask/ss/conditional.c
@@ -228,10 +228,9 @@ int cond_read_bool(struct policydb *p, s
     u32 len;
     int rc;
 
-    booldatum = xmalloc(struct cond_bool_datum);
+    booldatum = xzalloc(struct cond_bool_datum);
     if ( !booldatum )
         return -1;
-    memset(booldatum, 0, sizeof(struct cond_bool_datum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -343,10 +342,9 @@ static int cond_insertf(struct avtab *a,
         goto err;
     }
 
-    list = xmalloc(struct cond_av_list);
+    list = xzalloc(struct cond_av_list);
     if ( !list )
         goto err;
-    memset(list, 0, sizeof(*list));
 
     list->node = node_ptr;
     if ( !data->head )
@@ -441,12 +439,9 @@ static int cond_read_node(struct policyd
         if ( rc < 0 )
             goto err;
 
-        expr = xmalloc(struct cond_expr);
+        expr = xzalloc(struct cond_expr);
         if ( !expr )
-        {
             goto err;
-        }
-        memset(expr, 0, sizeof(struct cond_expr));
 
         expr->expr_type = le32_to_cpu(buf[0]);
         expr->bool = le32_to_cpu(buf[1]);
@@ -494,10 +489,9 @@ int cond_read_list(struct policydb *p, v
 
     for ( i = 0; i < len; i++ )
     {
-        node = xmalloc(struct cond_node);
+        node = xzalloc(struct cond_node);
         if ( !node )
             goto err;
-        memset(node, 0, sizeof(struct cond_node));
 
         if ( cond_read_node(p, node, fp) != 0 )
             goto err;
--- a/xen/xsm/flask/ss/ebitmap.c
+++ b/xen/xsm/flask/ss/ebitmap.c
@@ -50,13 +50,12 @@ int ebitmap_cpy(struct ebitmap *dst, str
     prev = NULL;
     while ( n )
     {
-        new = xmalloc(struct ebitmap_node);
+        new = xzalloc(struct ebitmap_node);
         if ( !new )
         {
             ebitmap_destroy(dst);
             return -ENOMEM;
         }
-        memset(new, 0, sizeof(*new));
         new->startbit = n->startbit;
         memcpy(new->maps, n->maps, EBITMAP_SIZE / 8);
         new->next = NULL;
@@ -176,10 +175,9 @@ int ebitmap_set_bit(struct ebitmap *e, u
     if ( !value )
         return 0;
 
-    new = xmalloc(struct ebitmap_node);
+    new = xzalloc(struct ebitmap_node);
     if ( !new )
         return -ENOMEM;
-    memset(new, 0, sizeof(*new));
 
     new->startbit = bit - (bit % EBITMAP_SIZE);
     ebitmap_node_set_bit(new, bit);
@@ -284,8 +282,8 @@ int ebitmap_read(struct ebitmap *e, void
 
         if ( !n || startbit >= n->startbit + EBITMAP_SIZE )
         {
-            struct ebitmap_node *tmp;
-            tmp = xmalloc(struct ebitmap_node);
+            struct ebitmap_node *tmp = xzalloc(struct ebitmap_node);
+
             if ( !tmp )
             {
                 printk(KERN_ERR
@@ -293,7 +291,6 @@ int ebitmap_read(struct ebitmap *e, void
                 rc = -ENOMEM;
                 goto bad;
             }
-            memset(tmp, 0, sizeof(*tmp));
             /* round down */
             tmp->startbit = startbit - (startbit % EBITMAP_SIZE);
             if ( n )
--- a/xen/xsm/flask/ss/hashtab.c
+++ b/xen/xsm/flask/ss/hashtab.c
@@ -16,28 +16,21 @@ struct hashtab *hashtab_create(u32 (*has
             int (*keycmp)(struct hashtab *h, const void *key1,
 			  const void *key2), u32 size)
 {
-    struct hashtab *p;
-    u32 i;
+    struct hashtab *p = xzalloc(struct hashtab);
 
-    p = xmalloc(struct hashtab);
     if ( p == NULL )
         return p;
 
-    memset(p, 0, sizeof(*p));
     p->size = size;
-    p->nel = 0;
     p->hash_value = hash_value;
     p->keycmp = keycmp;
-    p->htable = xmalloc_array(struct hashtab_node *, size);
+    p->htable = xzalloc_array(struct hashtab_node *, size);
     if ( p->htable == NULL )
     {
         xfree(p);
         return NULL;
     }
 
-    for ( i = 0; i < size; i++ )
-        p->htable[i] = NULL;
-
     return p;
 }
 
@@ -61,10 +54,9 @@ int hashtab_insert(struct hashtab *h, vo
     if ( cur && (h->keycmp(h, key, cur->key) == 0) )
         return -EEXIST;
 
-    newnode = xmalloc(struct hashtab_node);
+    newnode = xzalloc(struct hashtab_node);
     if ( newnode == NULL )
         return -ENOMEM;
-    memset(newnode, 0, sizeof(*newnode));
     newnode->key = key;
     newnode->datum = datum;
     if ( prev )
--- a/xen/xsm/flask/ss/policydb.c
+++ b/xen/xsm/flask/ss/policydb.c
@@ -166,13 +166,12 @@ static int roles_init(struct policydb *p
     int rc;
     struct role_datum *role;
 
-    role = xmalloc(struct role_datum);
+    role = xzalloc(struct role_datum);
     if ( !role )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(role, 0, sizeof(*role));
     role->value = ++p->p_roles.nprim;
     if ( role->value != OBJECT_R_VAL )
     {
@@ -950,13 +949,12 @@ static int perm_read(struct policydb *p,
     __le32 buf[2];
     u32 len;
 
-    perdatum = xmalloc(struct perm_datum);
+    perdatum = xzalloc(struct perm_datum);
     if ( !perdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(perdatum, 0, sizeof(*perdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -994,13 +992,12 @@ static int common_read(struct policydb *
     u32 len, nel;
     int i, rc;
 
-    comdatum = xmalloc(struct common_datum);
+    comdatum = xzalloc(struct common_datum);
     if ( !comdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(comdatum, 0, sizeof(*comdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -1055,10 +1052,9 @@ static int read_cons_helper(struct const
     lc = NULL;
     for ( i = 0; i < ncons; i++ )
     {
-        c = xmalloc(struct constraint_node);
+        c = xzalloc(struct constraint_node);
         if ( !c )
             return -ENOMEM;
-        memset(c, 0, sizeof(*c));
 
         if ( lc )
         {
@@ -1078,10 +1074,9 @@ static int read_cons_helper(struct const
         depth = -1;
         for ( j = 0; j < nexpr; j++ )
         {
-            e = xmalloc(struct constraint_expr);
+            e = xzalloc(struct constraint_expr);
             if ( !e )
                 return -ENOMEM;
-            memset(e, 0, sizeof(*e));
 
             if ( le )
                 le->next = e;
@@ -1142,13 +1137,12 @@ static int class_read(struct policydb *p
     u32 len, len2, ncons, nel;
     int i, rc;
 
-    cladatum = xmalloc(struct class_datum);
+    cladatum = xzalloc(struct class_datum);
     if ( !cladatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(cladatum, 0, sizeof(*cladatum));
 
     rc = next_entry(buf, fp, sizeof(u32)*6);
     if ( rc < 0 )
@@ -1226,13 +1220,12 @@ static int role_read(struct policydb *p,
     __le32 buf[3];
     u32 len;
 
-    role = xmalloc(struct role_datum);
+    role = xzalloc(struct role_datum);
     if ( !role )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(role, 0, sizeof(*role));
 
     if ( p->policyvers >= POLICYDB_VERSION_BOUNDARY )
         rc = next_entry(buf, fp, sizeof(buf[0]) * 3);
@@ -1297,13 +1290,12 @@ static int type_read(struct policydb *p,
     __le32 buf[4];
     u32 len;
 
-    typdatum = xmalloc(struct type_datum);
+    typdatum = xzalloc(struct type_datum);
     if ( !typdatum )
     {
         rc = -ENOMEM;
         return rc;
     }
-    memset(typdatum, 0, sizeof(*typdatum));
 
     if ( p->policyvers >= POLICYDB_VERSION_BOUNDARY )
         rc = next_entry(buf, fp, sizeof(buf[0]) * 4);
@@ -1391,13 +1383,12 @@ static int user_read(struct policydb *p,
     __le32 buf[3];
     u32 len;
 
-    usrdatum = xmalloc(struct user_datum);
+    usrdatum = xzalloc(struct user_datum);
     if ( !usrdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(usrdatum, 0, sizeof(*usrdatum));
 
     if ( p->policyvers >= POLICYDB_VERSION_BOUNDARY )
         rc = next_entry(buf, fp, sizeof(buf[0]) * 3);
@@ -1455,13 +1446,12 @@ static int sens_read(struct policydb *p,
     __le32 buf[2];
     u32 len;
 
-    levdatum = xmalloc(struct level_datum);
+    levdatum = xzalloc(struct level_datum);
     if ( !levdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(levdatum, 0, sizeof(*levdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -1511,13 +1501,12 @@ static int cat_read(struct policydb *p, 
     __le32 buf[3];
     u32 len;
 
-    catdatum = xmalloc(struct cat_datum);
+    catdatum = xzalloc(struct cat_datum);
     if ( !catdatum )
     {
         rc = -ENOMEM;
         goto out;
     }
-    memset(catdatum, 0, sizeof(*catdatum));
 
     rc = next_entry(buf, fp, sizeof buf);
     if ( rc < 0 )
@@ -1875,13 +1864,12 @@ int policydb_read(struct policydb *p, vo
     ltr = NULL;
     for ( i = 0; i < nel; i++ )
     {
-        tr = xmalloc(struct role_trans);
+        tr = xzalloc(struct role_trans);
         if ( !tr )
        {
             rc = -ENOMEM;
             goto bad;
         }
-        memset(tr, 0, sizeof(*tr));
         if ( ltr )
             ltr->next = tr;
         else
@@ -1909,13 +1897,12 @@ int policydb_read(struct policydb *p, vo
     lra = NULL;
     for ( i = 0; i < nel; i++ )
     {
-        ra = xmalloc(struct role_allow);
+        ra = xzalloc(struct role_allow);
         if ( !ra )
         {
             rc = -ENOMEM;
             goto bad;
         }
-        memset(ra, 0, sizeof(*ra));
         if ( lra )
             lra->next = ra;
         else
@@ -1951,13 +1938,12 @@ int policydb_read(struct policydb *p, vo
         l = NULL;
         for ( j = 0; j < nel; j++ )
         {
-            c = xmalloc(struct ocontext);
+            c = xzalloc(struct ocontext);
             if ( !c )
             {
                 rc = -ENOMEM;
                 goto bad;
             }
-            memset(c, 0, sizeof(*c));
             if ( l )
                 l->next = c;
             else
@@ -2067,13 +2053,12 @@ int policydb_read(struct policydb *p, vo
         lrt = NULL;
         for ( i = 0; i < nel; i++ )
         {
-            rt = xmalloc(struct range_trans);
+            rt = xzalloc(struct range_trans);
             if ( !rt )
             {
                 rc = -ENOMEM;
                 goto bad;
             }
-            memset(rt, 0, sizeof(*rt));
             if ( lrt )
                 lrt->next = rt;
             else
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -1771,13 +1771,12 @@ int security_get_user_sids(u32 fromsid, 
     }
     usercon.user = user->value;
 
-    mysids = xmalloc_array(u32, maxnel);
+    mysids = xzalloc_array(u32, maxnel);
     if ( !mysids )
     {
         rc = -ENOMEM;
         goto out_unlock;
     }
-    memset(mysids, 0, maxnel*sizeof(*mysids));
 
     ebitmap_for_each_positive_bit(&user->roles, rnode, i)
     {
@@ -1808,14 +1807,13 @@ int security_get_user_sids(u32 fromsid, 
             else
             {
                 maxnel += SIDS_NEL;
-                mysids2 = xmalloc_array(u32, maxnel);
+                mysids2 = xzalloc_array(u32, maxnel);
                 if ( !mysids2 )
                 {
                     rc = -ENOMEM;
                     xfree(mysids);
                     goto out_unlock;
                 }
-                memset(mysids2, 0, maxnel*sizeof(*mysids2));
                 memcpy(mysids2, mysids, mynel * sizeof(*mysids2));
                 xfree(mysids);
                 mysids = mysids2;
@@ -1868,14 +1866,14 @@ int security_get_bools(int *len, char **
         goto out;
     }
 
-    if ( names ) {
-        *names = (char**)xmalloc_array(char*, *len);
+    if ( names )
+    {
+        *names = xzalloc_array(char *, *len);
         if ( !*names )
             goto err;
-        memset(*names, 0, sizeof(char*) * *len);
     }
 
-    *values = (int*)xmalloc_array(int, *len);
+    *values = xmalloc_array(int, *len);
     if ( !*values )
         goto err;
 
@@ -2059,9 +2057,8 @@ int security_ocontext_add( u32 ocon, uns
     struct ocontext *prev;
     struct ocontext *add;
 
-    if ( (add = xmalloc(struct ocontext)) == NULL )
+    if ( (add = xzalloc(struct ocontext)) == NULL )
         return -ENOMEM;
-    memset(add, 0, sizeof(struct ocontext));
     add->sid[0] = sid;
 
     POLICY_WRLOCK;



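For readers unfamiliar with the helper this patch introduces: in Xen, xzalloc(type) hands back one zero-filled object of the given type, so each open-coded xmalloc() + memset() pair above collapses to a single call. A minimal userspace sketch of the same pattern, using calloc(); the names my_zalloc, role_allow_sketch, and alloc_role_allow are invented for illustration and are not Xen identifiers:

```c
#include <assert.h>   /* used by the checks below */
#include <stdlib.h>

/* Userspace sketch of Xen's xzalloc(type): allocate one object of the
 * given type already zeroed, replacing the open-coded
 * xmalloc(type) + memset(p, 0, sizeof(*p)) pair in the patch above.
 * my_zalloc and role_allow_sketch are illustrative names, not Xen's. */
#define my_zalloc(type) ((type *)calloc(1, sizeof(type)))

struct role_allow_sketch {
    unsigned int role;
    unsigned int new_role;
    struct role_allow_sketch *next;
};

static struct role_allow_sketch *alloc_role_allow(void)
{
    /* No memset() needed: calloc() already zeroed every field. */
    return my_zalloc(struct role_allow_sketch);
}
```

Besides being shorter, the zeroing form removes the window between allocation and memset() in which a field could be used uninitialized, and it cannot get the cleared size wrong the way a hand-written memset() can.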
--=__Part794B2B3A.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part794B2B3A.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:45:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFW8-0000Aa-Lb; Tue, 25 Feb 2014 10:45:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFW6-0000A4-PD
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:45:39 +0000
Received: from [85.158.139.211:18937] by server-12.bemta-5.messagelabs.com id
	60/01-15415-2547C035; Tue, 25 Feb 2014 10:45:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393325136!6098618!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22524 invoked from network); 25 Feb 2014 10:45:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:45:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:45:36 +0000
Message-Id: <530C825F020000780011F195@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:45:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part1C2E4E5F.0__="
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 3/4] xsm: use # printk format modifier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part1C2E4E5F.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -197,7 +197,7 @@ static void avc_dump_av(struct avc_dump_
     }
 
     if ( av )
-        avc_printk(buf, " 0x%x", av);
+        avc_printk(buf, " %#x", av);
 
     avc_printk(buf, " }");
 }
@@ -591,16 +591,16 @@ void avc_audit(u32 ssid, u32 tsid, u16 t
         avc_printk(&buf, "domid=%d ", cdom->domain_id);
     switch ( a ? a->type : 0 ) {
     case AVC_AUDIT_DATA_DEV:
-        avc_printk(&buf, "device=0x%lx ", a->device);
+        avc_printk(&buf, "device=%#lx ", a->device);
         break;
     case AVC_AUDIT_DATA_IRQ:
         avc_printk(&buf, "irq=%d ", a->irq);
         break;
     case AVC_AUDIT_DATA_RANGE:
-        avc_printk(&buf, "range=0x%lx-0x%lx ", a->range.start, a->range.end);
+        avc_printk(&buf, "range=%#lx-%#lx ", a->range.start, a->range.end);
         break;
     case AVC_AUDIT_DATA_MEMORY:
-        avc_printk(&buf, "pte=0x%lx mfn=0x%lx ", a->memory.pte, a->memory.mfn);
+        avc_printk(&buf, "pte=%#lx mfn=%#lx ", a->memory.pte, a->memory.mfn);
         break;
     }
 
--- a/xen/xsm/flask/ss/policydb.c
+++ b/xen/xsm/flask/ss/policydb.c
@@ -1716,8 +1716,8 @@ int policydb_read(struct policydb *p, vo
 
     if ( le32_to_cpu(buf[0]) != POLICYDB_MAGIC )
     {
-        printk(KERN_ERR "Flask:  policydb magic number 0x%x does "
-               "not match expected magic number 0x%x\n",
+        printk(KERN_ERR "Flask:  policydb magic number %#x does "
+               "not match expected magic number %#x\n",
               le32_to_cpu(buf[0]), POLICYDB_MAGIC);
        goto bad;
    }
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -2111,7 +2111,7 @@ int security_ocontext_add( u32 ocon, uns
                c->u.ioport.high_ioport == high && c->sid[0] == sid)
                break;
 
-            printk("%s: IO Port overlap with entry 0x%x - 0x%x\n",
+            printk("%s: IO Port overlap with entry %#x - %#x\n",
                   __FUNCTION__, c->u.ioport.low_ioport,
                   c->u.ioport.high_ioport);
            ret = -EEXIST;
@@ -2145,7 +2145,7 @@ int security_ocontext_add( u32 ocon, uns
                c->u.iomem.high_iomem == high && c->sid[0] == sid)
                break;
 
-            printk("%s: IO Memory overlap with entry 0x%x - 0x%x\n",
+            printk("%s: IO Memory overlap with entry %#x - %#x\n",
                   __FUNCTION__, c->u.iomem.low_iomem,
                   c->u.iomem.high_iomem);
            ret = -EEXIST;
@@ -2177,7 +2177,7 @@ int security_ocontext_add( u32 ocon, uns
                if ( c->sid[0] == sid )
                    break;
 
-                printk("%s: Duplicate PCI Device 0x%x\n", __FUNCTION__,
+                printk("%s: Duplicate PCI Device %#x\n", __FUNCTION__,
                        add->u.device);
                ret = -EEXIST;
                break;
@@ -2257,7 +2257,7 @@ int security_ocontext_del( u32 ocon, uns
            }
        }
 
-        printk("%s: ocontext not found: ioport 0x%x - 0x%x\n", __FUNCTION__,
+        printk("%s: ocontext not found: ioport %#x - %#x\n", __FUNCTION__,
                low, high);
        ret = -ENOENT;
        break;
@@ -2284,7 +2284,7 @@ int security_ocontext_del( u32 ocon, uns
            }
        }
 
-        printk("%s: ocontext not found: iomem 0x%x - 0x%x\n", __FUNCTION__,
+        printk("%s: ocontext not found: iomem %#x - %#x\n", __FUNCTION__,
                low, high);
        ret = -ENOENT;
        break;
@@ -2310,7 +2310,7 @@ int security_ocontext_del( u32 ocon, uns
            }
        }
 
-        printk("%s: ocontext not found: pcidevice 0x%x\n", __FUNCTION__, low);
+        printk("%s: ocontext not found: pcidevice %#x\n", __FUNCTION__, low);
        ret = -ENOENT;
        break;
 
--- a/xen/xsm/xsm_policy.c
+++ b/xen/xsm/xsm_policy.c
@@ -52,7 +52,7 @@ int __init xsm_policy_init(unsigned long
            policy_buffer = (char *)_policy_start;
            policy_size = _policy_len;
 
-            printk("Policy len  0x%lx, start at %p.\n",
+            printk("Policy len %#lx, start at %p.\n",
                   _policy_len,_policy_start);
 
            __clear_bit(i, module_map);



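The `#` used throughout the patch is the standard printf alternate-form flag: with `%#x` a nonzero value is rendered with a `0x` prefix, matching the old literal `"0x%x"` format, but (in ISO C's printf family at least) zero prints as plain `0` with no prefix. A small sketch with standard snprintf; the helper name fmt_hex is invented for illustration and is not Xen code:

```c
#include <assert.h>   /* used by the checks below */
#include <stdio.h>
#include <string.h>

/* The '#' (alternate-form) flag: for %x conversions it prefixes nonzero
 * results with "0x", so "%#x" replaces a literal "0x%x" format -- with
 * the caveat that, per ISO C, zero prints as "0", not "0x0".
 * fmt_hex is an illustrative helper, not a Xen function. */
static void fmt_hex(char *buf, size_t n, unsigned int v)
{
    snprintf(buf, n, "%#x", v);
}
```

The zero caveat above is a property of ISO C's printf; a kernel's own vsnprintf implementation may differ, so it is worth checking before relying on the exact output for zero values.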
--=__Part1C2E4E5F.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part1C2E4E5F.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:45:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFW8-0000Aa-Lb; Tue, 25 Feb 2014 10:45:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFW6-0000A4-PD
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:45:39 +0000
Received: from [85.158.139.211:18937] by server-12.bemta-5.messagelabs.com id
	60/01-15415-2547C035; Tue, 25 Feb 2014 10:45:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393325136!6098618!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22524 invoked from network); 25 Feb 2014 10:45:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:45:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:45:36 +0000
Message-Id: <530C825F020000780011F195@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:45:35 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part1C2E4E5F.0__="
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 3/4] xsm: use # printk format modifier
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part1C2E4E5F.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/xsm/flask/avc.c
+++ b/xen/xsm/flask/avc.c
@@ -197,7 +197,7 @@ static void avc_dump_av(struct avc_dump_
     }
 
     if ( av )
-        avc_printk(buf, " 0x%x", av);
+        avc_printk(buf, " %#x", av);
 
     avc_printk(buf, " }");
 }
@@ -591,16 +591,16 @@ void avc_audit(u32 ssid, u32 tsid, u16 t
         avc_printk(&buf, "domid=%d ", cdom->domain_id);
     switch ( a ? a->type : 0 ) {
     case AVC_AUDIT_DATA_DEV:
-        avc_printk(&buf, "device=0x%lx ", a->device);
+        avc_printk(&buf, "device=%#lx ", a->device);
         break;
     case AVC_AUDIT_DATA_IRQ:
         avc_printk(&buf, "irq=%d ", a->irq);
         break;
     case AVC_AUDIT_DATA_RANGE:
-        avc_printk(&buf, "range=0x%lx-0x%lx ", a->range.start, a->range.end);
+        avc_printk(&buf, "range=%#lx-%#lx ", a->range.start, a->range.end);
         break;
     case AVC_AUDIT_DATA_MEMORY:
-        avc_printk(&buf, "pte=0x%lx mfn=0x%lx ", a->memory.pte, a->memory.mfn);
+        avc_printk(&buf, "pte=%#lx mfn=%#lx ", a->memory.pte, a->memory.mfn);
         break;
     }
 
--- a/xen/xsm/flask/ss/policydb.c
+++ b/xen/xsm/flask/ss/policydb.c
@@ -1716,8 +1716,8 @@ int policydb_read(struct policydb *p, vo
 
     if ( le32_to_cpu(buf[0]) != POLICYDB_MAGIC )
     {
-        printk(KERN_ERR "Flask:  policydb magic number 0x%x does "
-               "not match expected magic number 0x%x\n",
+        printk(KERN_ERR "Flask:  policydb magic number %#x does "
+               "not match expected magic number %#x\n",
                le32_to_cpu(buf[0]), POLICYDB_MAGIC);
         goto bad;
     }
--- a/xen/xsm/flask/ss/services.c
+++ b/xen/xsm/flask/ss/services.c
@@ -2111,7 +2111,7 @@ int security_ocontext_add( u32 ocon, uns
                 c->u.ioport.high_ioport == high && c->sid[0] == sid)
                 break;
 
-            printk("%s: IO Port overlap with entry 0x%x - 0x%x\n",
+            printk("%s: IO Port overlap with entry %#x - %#x\n",
                    __FUNCTION__, c->u.ioport.low_ioport,
                    c->u.ioport.high_ioport);
             ret = -EEXIST;
@@ -2145,7 +2145,7 @@ int security_ocontext_add( u32 ocon, uns
                 c->u.iomem.high_iomem == high && c->sid[0] == sid)
                 break;
 
-            printk("%s: IO Memory overlap with entry 0x%x - 0x%x\n",
+            printk("%s: IO Memory overlap with entry %#x - %#x\n",
                    __FUNCTION__, c->u.iomem.low_iomem,
                    c->u.iomem.high_iomem);
             ret = -EEXIST;
@@ -2177,7 +2177,7 @@ int security_ocontext_add( u32 ocon, uns
                 if ( c->sid[0] == sid )
                     break;
 
-                printk("%s: Duplicate PCI Device 0x%x\n", __FUNCTION__,
+                printk("%s: Duplicate PCI Device %#x\n", __FUNCTION__,
                         add->u.device);
                 ret = -EEXIST;
                 break;
@@ -2257,7 +2257,7 @@ int security_ocontext_del( u32 ocon, uns
             }
         }
 
-        printk("%s: ocontext not found: ioport 0x%x - 0x%x\n", __FUNCTION__,
+        printk("%s: ocontext not found: ioport %#x - %#x\n", __FUNCTION__,
                 low, high);
         ret = -ENOENT;
         break;
@@ -2284,7 +2284,7 @@ int security_ocontext_del( u32 ocon, uns
             }
         }
 
-        printk("%s: ocontext not found: iomem 0x%x - 0x%x\n", __FUNCTION__,
+        printk("%s: ocontext not found: iomem %#x - %#x\n", __FUNCTION__,
                 low, high);
         ret = -ENOENT;
         break;
@@ -2310,7 +2310,7 @@ int security_ocontext_del( u32 ocon, uns
             }
         }
 
-        printk("%s: ocontext not found: pcidevice 0x%x\n", __FUNCTION__, low);
+        printk("%s: ocontext not found: pcidevice %#x\n", __FUNCTION__, low);
         ret = -ENOENT;
         break;
 
--- a/xen/xsm/xsm_policy.c
+++ b/xen/xsm/xsm_policy.c
@@ -52,7 +52,7 @@ int __init xsm_policy_init(unsigned long
             policy_buffer = (char *)_policy_start;
             policy_size = _policy_len;
 
-            printk("Policy len  0x%lx, start at %p.\n",
+            printk("Policy len %#lx, start at %p.\n",
                    _policy_len,_policy_start);
 
             __clear_bit(i, module_map);



--=__Part1C2E4E5F.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part1C2E4E5F.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:46:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:46:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFWh-0000Ir-4e; Tue, 25 Feb 2014 10:46:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFWf-0000IQ-Pv
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 10:46:14 +0000
Received: from [85.158.139.211:9027] by server-15.bemta-5.messagelabs.com id
	EC/61-24395-5747C035; Tue, 25 Feb 2014 10:46:13 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393325172!2128048!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1169 invoked from network); 25 Feb 2014 10:46:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:46:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:46:12 +0000
Message-Id: <530C8282020000780011F199@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:46:10 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part21137362.0__="
Cc: dgdegra@tycho.nsa.gov
Subject: [Xen-devel] [PATCH 4/4] xsm: streamline xsm_default_action()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part21137362.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

That the privilege levels are strongly ordered is better reflected by
using fall-through within the respective switch statement.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -59,20 +59,14 @@ static always_inline int xsm_default_act
     switch ( action ) {
     case XSM_HOOK:
         return 0;
-    case XSM_DM_PRIV:
-        if ( src->is_privileged )
-            return 0;
-        if ( target && src->target == target )
-            return 0;
-        return -EPERM;
     case XSM_TARGET:
         if ( src == target )
             return 0;
-        if ( src->is_privileged )
-            return 0;
+        /* fall through */
+    case XSM_DM_PRIV:
         if ( target && src->target == target )
             return 0;
-        return -EPERM;
+        /* fall through */
     case XSM_PRIV:
         if ( src->is_privileged )
             return 0;




--=__Part21137362.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part21137362.0__=--


From xen-devel-bounces@lists.xen.org Tue Feb 25 10:52:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFci-0000h2-2Z; Tue, 25 Feb 2014 10:52:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1WIFcg-0000gx-On
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 10:52:26 +0000
Received: from [85.158.139.211:54074] by server-15.bemta-5.messagelabs.com id
	0D/FF-24395-AE57C035; Tue, 25 Feb 2014 10:52:26 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393325544!6079620!1
X-Originating-IP: [209.85.217.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28048 invoked from network); 25 Feb 2014 10:52:25 -0000
Received: from mail-lb0-f178.google.com (HELO mail-lb0-f178.google.com)
	(209.85.217.178)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 10:52:25 -0000
Received: by mail-lb0-f178.google.com with SMTP id s7so244389lbd.23
	for <xen-devel@lists.xensource.com>;
	Tue, 25 Feb 2014 02:52:24 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=dvIH/GbD4T1KCPZ3JPteTGMPtu8vbiVvsG1wr0a5zo8=;
	b=ZaqMhaAb+ClvvbzXxKZg3VOgHzEIw7pHd8V8Kp9YI+gRF1VxECM/kILJzGhqRQyie9
	62Tz8G3ytKexpGupzulLOMEgIyhyD4jOq4I0i3R89A0ZsC4rMrGkde8KFeSXdxecXGRI
	CtCzREVktAv+DWhKfcjiEh2XPs6FBP9eB50ueb3uUKBnDtLs3DnwWAMV49eFb4esHJGQ
	RycLVjHsnMqYwNmCcm/fSn+0k6Vly9fEqogPJE8kcE/+EQKvJjHC/0XF0cYUtrxVS5TB
	WrtbecHngDMe1jNFsebOHXiAgtsygXIenSkAM8K3Vo5ptkm/cxqAWqE+3ntVcH6m+gut
	br6w==
X-Gm-Message-State: ALoCoQm1ZMnoYPVClu+9ZxYKp1ccJR0Qkm84K486BhQwpCY1daXhr7WZXdTnnhJyK4WtDAdoDPtx
X-Received: by 10.152.234.202 with SMTP id ug10mr684745lac.28.1393325544197;
	Tue, 25 Feb 2014 02:52:24 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.118.34 with HTTP; Tue, 25 Feb 2014 02:52:03 -0800 (PST)
In-Reply-To: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402201759490.15812@kaball.uk.xensource.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 25 Feb 2014 10:52:03 +0000
Message-ID: <CAFEAcA_-uT3js+FyoEsOazG0Jdhdu98tzvOktZ+DMhcH=Bz4fg@mail.gmail.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Cc: Olaf Hering <olaf@aepfle.de>,
	"xen-devel@lists.xensource.com Devel" <xen-devel@lists.xensource.com>,
	QEMU Developers <qemu-devel@nongnu.org>,
	Anthony Liguori <anthony@codemonkey.ws>,
	Anthony PERARD <Anthony.Perard@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Xen-devel] [PULL 0/2] xen-140220
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 20 February 2014 18:12, Stefano Stabellini
<stefano.stabellini@eu.citrix.com> wrote:
> The following changes since commit 2ca92bb993991d6dcb8f68751aca9fc2ec2b8867:
>
>   Merge remote-tracking branch 'remotes/kraxel/tags/pull-usb-3' into staging (2014-02-20 15:25:05 +0000)
>
> are available in the git repository at:
>
>
>   git://xenbits.xen.org/people/sstabellini/qemu-dm.git xen-140220
>
> for you to fetch changes up to 58da5b1e01a586eb5a52ba3eec342d6828269839:
>
>   xen_disk: fix io accounting (2014-02-20 17:57:13 +0000)

Applied, thanks.

-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 10:54:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFeZ-0000nd-1Q; Tue, 25 Feb 2014 10:54:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIFeW-0000nC-V7
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 10:54:21 +0000
Received: from [85.158.139.211:24062] by server-11.bemta-5.messagelabs.com id
	04/72-23886-C567C035; Tue, 25 Feb 2014 10:54:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393325659!6035712!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18864 invoked from network); 25 Feb 2014 10:54:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 10:54:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 10:54:18 +0000
Message-Id: <530C8468020000780011F1D1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 10:54:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andres Lagar-Cavilla" <andres@lagarcavilla.org>,
	"Tim Deegan" <tim@xen.org>
References: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
In-Reply-To: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Tamas K Lengyel <tamas.lengyel@zentific.com>, keir@xen.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mem_event: Return previous value of
 CR0/CR3/CR4 on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 30.01.14 at 22:34, Tamas K Lengyel <tamas.lengyel@zentific.com> wrote:
> This patch extends the information returned for CR0/CR3/CR4 register write
> events with the previous value of the register. The old value was already
> passed to the trap processing function, but was never placed into the
> returned request. By returning this value, applications subscribing to CR
> events obtain additional context about the event.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Tim, Andres,

This seems to fall into your area - any thoughts?

Thanks, Jan

> ---
>  xen/arch/x86/hvm/hvm.c         |    4 ++++
>  xen/include/public/mem_event.h |    6 +++---
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..d46abf2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4682,6 +4682,10 @@ static int hvm_memory_event_traps(long p, uint32_t 
> reason,
>          req.gla = gla;
>          req.gla_valid = 1;
>      }
> +    else
> +    {
> +        req.gla = old;
> +    }
>      
>      mem_event_put_request(d, &d->mem_event->access, &req);
>      
> diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
> index c9ed546..3831b41 100644
> --- a/xen/include/public/mem_event.h
> +++ b/xen/include/public/mem_event.h
> @@ -40,9 +40,9 @@
>  /* Reasons for the memory event request */
>  #define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
>  #define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is 
> address */
> -#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is CR0 value 
> */
> -#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is CR3 value 
> */
> -#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value 
> */
> +#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 
> value, gla is previous */
> +#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 
> value, gla is previous */
> +#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 
> value, gla is previous */
>  #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP 
> */
>  #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: 
> gla/gfn are RIP */
>  #define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, 
> gla is MSR address;
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 
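As context for the patch above: once the previous CR value rides in the gla
field, a subscriber can diff the old value against the new one. The sketch
below is a hypothetical stand-in (demo_request_t and DEMO_REASON_CR3 are
invented for illustration; the real layout is mem_event_request_t in
xen/include/public/mem_event.h):

```c
#include <stdint.h>

/* Illustrative stand-in for the fields this patch touches; the real
 * structure is mem_event_request_t in xen/include/public/mem_event.h. */
typedef struct {
    uint32_t reason;   /* MEM_EVENT_REASON_* style code */
    uint64_t gfn;      /* new CR value for CR0/CR3/CR4 events */
    uint64_t gla;      /* previous CR value, per this patch */
} demo_request_t;

#define DEMO_REASON_CR3 3

/* The extra context the patch provides: which bits of the register
 * actually changed between the old and new values. */
static uint64_t cr_changed_bits(const demo_request_t *req)
{
    return req->gfn ^ req->gla;
}
```

A consumer could, for example, treat a CR3 event where cr_changed_bits() is
non-zero as a genuine address-space switch rather than a redundant reload.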




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 10:54:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFeX-0000nN-Kn; Tue, 25 Feb 2014 10:54:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFeW-0000nA-Bm
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 10:54:20 +0000
Received: from [85.158.143.35:29182] by server-2.bemta-4.messagelabs.com id
	3D/12-10891-B567C035; Tue, 25 Feb 2014 10:54:19 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393325657!8108030!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3058 invoked from network); 25 Feb 2014 10:54:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 10:54:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="105476590"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 10:54:17 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 05:54:16 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIFeS-0005uS-Cs; Tue, 25 Feb 2014 10:54:16 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 10:54:14 +0000
Message-ID: <1393325654-28994-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Liu Jinsong <jinsong.liu@intel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel] [PATCH] tools/xen-mceinj: Fix dependency for the install
	rule
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: Liu Jinsong <jinsong.liu@intel.com>
---
 tools/tests/mce-test/tools/Makefile |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/tests/mce-test/tools/Makefile b/tools/tests/mce-test/tools/Makefile
index 8a23b83..5ee001f 100644
--- a/tools/tests/mce-test/tools/Makefile
+++ b/tools/tests/mce-test/tools/Makefile
@@ -10,7 +10,7 @@ CFLAGS += $(CFLAGS_xeninclude)
 .PHONY: all
 all: xen-mceinj
 
-install: 
+install: xen-mceinj
 	$(INSTALL_PROG) xen-mceinj $(DESTDIR)$(SBINDIR)
 
 .PHONY: clean
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 10:57:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 10:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFhO-00013P-LW; Tue, 25 Feb 2014 10:57:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFhN-00013I-Re
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 10:57:18 +0000
Received: from [85.158.137.68:51193] by server-16.bemta-3.messagelabs.com id
	66/C9-29917-D077C035; Tue, 25 Feb 2014 10:57:17 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393325834!4062228!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12454 invoked from network); 25 Feb 2014 10:57:16 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 10:57:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="103832383"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 10:57:14 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 05:57:13 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIFhI-0005ww-Ti; Tue, 25 Feb 2014 10:57:12 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 10:57:11 +0000
Message-ID: <1393325831-29215-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v2] common/kexec: Identify which cpu the kexec
	image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A patch to this effect has been in XenServer for a little while, and has
proved to be a useful debugging aid on servers whose behaviour differs
depending on whether the crash occurs on a non-bootstrap processor.

Moving the printk() from kexec_crash() to one_cpu_only() means that it is
only printed by the cpu which wins the race along the kexec path.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: David Vrabel <david.vrabel@citrix.com>

---

Changes in v2:
 * Tweak wording as it moves onto a common path with kexec_reboot
---
 xen/common/kexec.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 481b0c2..23d964e 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -265,6 +265,8 @@ static int noinline one_cpu_only(void)
     }
 
     set_bit(KEXEC_FLAG_IN_PROGRESS, &kexec_flags);
+    printk("Executing kexec image on cpu%u\n", cpu);
+
     return 0;
 }
 
@@ -340,8 +342,6 @@ void kexec_crash(void)
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
 
-    printk("Executing crash image\n");
-
     kexecing = TRUE;
 
     if ( kexec_common_shutdown() != 0 )
-- 
1.7.10.4
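The reason moving the printk() into one_cpu_only() yields exactly one message
is the first-caller-wins race that function arbitrates on kexec_flags. A
minimal sketch of that pattern using C11 atomics (Xen's actual code uses its
own bit-operation primitives on kexec_flags; the names here are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for the KEXEC_FLAG_IN_PROGRESS bit in kexec_flags. */
static atomic_flag demo_in_progress = ATOMIC_FLAG_INIT;

/* The first caller returns true and proceeds to execute the kexec
 * image; every later caller (another cpu racing down the crash path)
 * sees the flag already set and returns false. Logging inside this
 * function therefore happens exactly once, on the winning cpu. */
static bool demo_one_cpu_only(void)
{
    return !atomic_flag_test_and_set(&demo_in_progress);
}
```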


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:02:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:02:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFmK-0001J3-Nk; Tue, 25 Feb 2014 11:02:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFmJ-0001Iy-Eh
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:02:23 +0000
Received: from [193.109.254.147:37064] by server-1.bemta-14.messagelabs.com id
	B5/CC-15438-E387C035; Tue, 25 Feb 2014 11:02:22 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393326140!6638338!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28584 invoked from network); 25 Feb 2014 11:02:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 11:02:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="105478305"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 11:02:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 06:02:19 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIFmE-00060z-HE; Tue, 25 Feb 2014 11:02:18 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 11:02:17 +0000
Message-ID: <1393326137-29397-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/faulting: Use formal defines instead of
	opencoded bits
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/cpu/intel.c        |    7 ++++---
 xen/include/asm-x86/msr-index.h |    5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 27fe762..808ed8d 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -21,7 +21,8 @@
 static unsigned int probe_intel_cpuid_faulting(void)
 {
 	uint64_t x;
-	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
+	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
+            (x & PLATFORM_INFO_CPUID_FAULTING);
 }
 
 static DEFINE_PER_CPU(bool_t, cpuid_faulting_enabled);
@@ -34,9 +35,9 @@ void set_cpuid_faulting(bool_t enable)
 		return;
 
 	rdmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
-	lo &= ~1;
+	lo &= ~MISC_FEATURES_CPUID_FAULTING;
 	if (enable)
-		lo |= 1;
+		lo |= MISC_FEATURES_CPUID_FAULTING;
 	wrmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
 
 	this_cpu(cpuid_faulting_enabled) = enable;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 86b7d64..15a7e13 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -486,7 +486,12 @@
 
 /* Intel cpuid faulting MSRs */
 #define MSR_INTEL_PLATFORM_INFO		0x000000ce
+#define _PLATFORM_INFO_CPUID_FAULTING	31
+#define PLATFORM_INFO_CPUID_FAULTING	(1ULL << _PLATFORM_INFO_CPUID_FAULTING)
+
 #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
+#define _MISC_FEATURES_CPUID_FAULTING	0
+#define MISC_FEATURES_CPUID_FAULTING	(1ULL << _MISC_FEATURES_CPUID_FAULTING)
 
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
-- 
1.7.10.4
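To illustrate what the new defines buy over the open-coded (1u<<31) and
~1/|1: the bit being tested now carries a name at the use site. A
self-contained sketch (the defines are copied from the diff above; the MSR
access itself is omitted, so platform_info and lo are plain parameters here):

```c
#include <stdbool.h>
#include <stdint.h>

/* Defines as introduced by the patch. */
#define _PLATFORM_INFO_CPUID_FAULTING 31
#define PLATFORM_INFO_CPUID_FAULTING  (1ULL << _PLATFORM_INFO_CPUID_FAULTING)
#define _MISC_FEATURES_CPUID_FAULTING 0
#define MISC_FEATURES_CPUID_FAULTING  (1ULL << _MISC_FEATURES_CPUID_FAULTING)

/* Mirrors the test in probe_intel_cpuid_faulting(), minus the
 * rdmsr_safe() call: the named constant documents which bit of
 * MSR_INTEL_PLATFORM_INFO signals cpuid faulting support. */
static bool demo_has_cpuid_faulting(uint64_t platform_info)
{
    return (platform_info & PLATFORM_INFO_CPUID_FAULTING) != 0;
}

/* Mirrors set_cpuid_faulting()'s update of the low MSR word. */
static uint32_t demo_set_enable(uint32_t lo, bool enable)
{
    lo &= ~MISC_FEATURES_CPUID_FAULTING;
    if (enable)
        lo |= MISC_FEATURES_CPUID_FAULTING;
    return lo;
}
```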


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/cpu/intel.c        |    7 ++++---
 xen/include/asm-x86/msr-index.h |    5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 27fe762..808ed8d 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -21,7 +21,8 @@
 static unsigned int probe_intel_cpuid_faulting(void)
 {
 	uint64_t x;
-	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
+	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
+            (x & PLATFORM_INFO_CPUID_FAULTING);
 }
 
 static DEFINE_PER_CPU(bool_t, cpuid_faulting_enabled);
@@ -34,9 +35,9 @@ void set_cpuid_faulting(bool_t enable)
 		return;
 
 	rdmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
-	lo &= ~1;
+	lo &= ~MISC_FEATURES_CPUID_FAULTING;
 	if (enable)
-		lo |= 1;
+		lo |= MISC_FEATURES_CPUID_FAULTING;
 	wrmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
 
 	this_cpu(cpuid_faulting_enabled) = enable;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 86b7d64..15a7e13 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -486,7 +486,12 @@
 
 /* Intel cpuid faulting MSRs */
 #define MSR_INTEL_PLATFORM_INFO		0x000000ce
+#define _PLATFORM_INFO_CPUID_FAULTING	31
+#define PLATFORM_INFO_CPUID_FAULTING	(1ULL << _PLATFORM_INFO_CPUID_FAULTING)
+
 #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
+#define _MISC_FEATURES_CPUID_FAULTING	0
+#define MISC_FEATURES_CPUID_FAULTING	(1ULL << _MISC_FEATURES_CPUID_FAULTING)
 
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:06:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIFqW-0001QY-NI; Tue, 25 Feb 2014 11:06:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIFqV-0001QP-Hf
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:06:43 +0000
Received: from [85.158.137.68:38066] by server-7.bemta-3.messagelabs.com id
	49/7B-13775-2497C035; Tue, 25 Feb 2014 11:06:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393326400!665891!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1614 invoked from network); 25 Feb 2014 11:06:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 11:06:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="105479054"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 11:06:02 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 06:06:02 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIFpo-00063U-O8; Tue, 25 Feb 2014 11:06:00 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 11:05:59 +0000
Message-ID: <1393326359-29845-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Keir Fraser <keir@xen.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/cpu: Store extended cpuid level in
	cpuinfo_x86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

To save finding it repeatedly with cpuid instructions.  The name
"extended_cpuid_level" is chosen to match Linux.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
---
 xen/arch/x86/cpu/amd.c                  |    4 ++--
 xen/arch/x86/cpu/common.c               |   22 ++++++++++------------
 xen/arch/x86/hvm/svm/svm.c              |    2 +-
 xen/arch/x86/oprofile/op_model_athlon.c |    5 +----
 xen/include/asm-x86/processor.h         |    1 +
 5 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index 44087fa..904ad2e 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -419,11 +419,11 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
 
 	display_cacheinfo(c);
 
-	if (cpuid_eax(0x80000000) >= 0x80000008) {
+	if (c->extended_cpuid_level >= 0x80000008) {
 		c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
 	}
 
-	if (cpuid_eax(0x80000000) >= 0x80000007) {
+	if (c->extended_cpuid_level >= 0x80000007) {
 		c->x86_power = cpuid_edx(0x80000007);
 		if (c->x86_power & (1<<8)) {
 			set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 32ca458..2c128e5 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -74,7 +74,7 @@ int __cpuinit get_model_name(struct cpuinfo_x86 *c)
 	unsigned int *v;
 	char *p, *q;
 
-	if (cpuid_eax(0x80000000) < 0x80000004)
+	if (c->extended_cpuid_level < 0x80000004)
 		return 0;
 
 	v = (unsigned int *) c->x86_model_id;
@@ -101,11 +101,9 @@ int __cpuinit get_model_name(struct cpuinfo_x86 *c)
 
 void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
 {
-	unsigned int n, dummy, ecx, edx, l2size;
+	unsigned int dummy, ecx, edx, l2size;
 
-	n = cpuid_eax(0x80000000);
-
-	if (n >= 0x80000005) {
+	if (c->extended_cpuid_level >= 0x80000005) {
 		cpuid(0x80000005, &dummy, &dummy, &ecx, &edx);
 		if (opt_cpu_info)
 			printk("CPU: L1 I cache %dK (%d bytes/line),"
@@ -114,7 +112,7 @@ void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
 		c->x86_cache_size=(ecx>>24)+(edx>>24);	
 	}
 
-	if (n < 0x80000006)	/* Some chips just has a large L1. */
+	if (c->extended_cpuid_level < 0x80000006)	/* Some chips just has a large L1. */
 		return;
 
 	ecx = cpuid_ecx(0x80000006);
@@ -195,7 +193,7 @@ static void __init early_cpu_detect(void)
 
 static void __cpuinit generic_identify(struct cpuinfo_x86 *c)
 {
-	u32 tfms, xlvl, capability, excap, ebx;
+	u32 tfms, capability, excap, ebx;
 
 	/* Get vendor name */
 	cpuid(0x00000000, &c->cpuid_level,
@@ -222,15 +220,15 @@ static void __cpuinit generic_identify(struct cpuinfo_x86 *c)
 		c->x86_clflush_size = ((ebx >> 8) & 0xff) * 8;
 
 	/* AMD-defined flags: level 0x80000001 */
-	xlvl = cpuid_eax(0x80000000);
-	if ( (xlvl & 0xffff0000) == 0x80000000 ) {
-		if ( xlvl >= 0x80000001 ) {
+	c->extended_cpuid_level = cpuid_eax(0x80000000);
+	if ( (c->extended_cpuid_level & 0xffff0000) == 0x80000000 ) {
+		if ( c->extended_cpuid_level >= 0x80000001 ) {
 			c->x86_capability[1] = cpuid_edx(0x80000001);
 			c->x86_capability[6] = cpuid_ecx(0x80000001);
 		}
-		if ( xlvl >= 0x80000004 )
+		if ( c->extended_cpuid_level >= 0x80000004 )
 			get_model_name(c); /* Default name */
-		if ( xlvl >= 0x80000008 )
+		if ( c->extended_cpuid_level >= 0x80000008 )
 			paddr_bits = cpuid_eax(0x80000008) & 0xff;
 	}
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..61b1ec8 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1251,7 +1251,7 @@ const struct hvm_function_table * __init start_svm(void)
 
     setup_vmcb_dump();
 
-    svm_feature_flags = ((cpuid_eax(0x80000000) >= 0x8000000A) ?
+    svm_feature_flags = (current_cpu_data.extended_cpuid_level >= 0x8000000A ?
                          cpuid_edx(0x8000000A) : 0);
 
     printk("SVM: Supported advanced features:\n");
diff --git a/xen/arch/x86/oprofile/op_model_athlon.c b/xen/arch/x86/oprofile/op_model_athlon.c
index e784018..ad84768 100644
--- a/xen/arch/x86/oprofile/op_model_athlon.c
+++ b/xen/arch/x86/oprofile/op_model_athlon.c
@@ -497,14 +497,11 @@ static int __init init_ibs_nmi(void)
 
 static void __init get_ibs_caps(void)
 {
-	unsigned int max_level;
-
 	if (!boot_cpu_has(X86_FEATURE_IBS))
 		return;
 
     /* check IBS cpuid feature flags */
-	max_level = cpuid_eax(0x80000000);
-	if (max_level >= IBS_CPUID_FEATURES)
+	if (current_cpu_data.extended_cpuid_level >= IBS_CPUID_FEATURES)
 		ibs_caps = cpuid_eax(IBS_CPUID_FEATURES);
 	if (!(ibs_caps & IBS_CAPS_AVAIL))
 		/* cpuid flags not valid */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c120460..f1cccff 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -162,6 +162,7 @@ struct cpuinfo_x86 {
     __u8 x86_model;
     __u8 x86_mask;
     int  cpuid_level;    /* Maximum supported CPUID level, -1=no CPUID */
+    __u32 extended_cpuid_level; /* Maximum supported CPUID extended level */
     unsigned int x86_capability[NCAPINTS];
     char x86_vendor_id[16];
     char x86_model_id[64];
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:22:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:22:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIG56-0001qD-Ok; Tue, 25 Feb 2014 11:21:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIG55-0001q6-UM
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:21:48 +0000
Received: from [85.158.139.211:23404] by server-13.bemta-5.messagelabs.com id
	7F/85-18801-BCC7C035; Tue, 25 Feb 2014 11:21:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393327306!6079398!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5000 invoked from network); 25 Feb 2014 11:21:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 11:21:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 11:21:46 +0000
Message-Id: <530C8AD8020000780011F20D@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 11:21:44 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393326137-29397-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393326137-29397-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/faulting: Use formal defines instead of
 opencoded bits
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 12:02, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -21,7 +21,8 @@
>  static unsigned int probe_intel_cpuid_faulting(void)
>  {
>  	uint64_t x;
> -	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
> +	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
> +            (x & PLATFORM_INFO_CPUID_FAULTING);

Indentation (a single hard tab ought to come first at least).

> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -486,7 +486,12 @@
>  
>  /* Intel cpuid faulting MSRs */
>  #define MSR_INTEL_PLATFORM_INFO		0x000000ce
> +#define _PLATFORM_INFO_CPUID_FAULTING	31
> +#define PLATFORM_INFO_CPUID_FAULTING	(1ULL << _PLATFORM_INFO_CPUID_FAULTING)
> +
>  #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
> +#define _MISC_FEATURES_CPUID_FAULTING	0
> +#define MISC_FEATURES_CPUID_FAULTING	(1ULL << _MISC_FEATURES_CPUID_FAULTING)

I wonder whether, from a name-space point of view, it wouldn't be better
if these new constants carried at least an additional MSR_ prefix. Both
are rather generic without...

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:23:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:23:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIG72-0001wI-B7; Tue, 25 Feb 2014 11:23:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIG70-0001w9-LP
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:23:46 +0000
Received: from [193.109.254.147:39619] by server-14.bemta-14.messagelabs.com
	id 76/16-29228-14D7C035; Tue, 25 Feb 2014 11:23:45 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393327424!1378282!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31621 invoked from network); 25 Feb 2014 11:23:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 11:23:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="103838159"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 11:23:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 06:23:43 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIG6w-0006HO-M5;
	Tue, 25 Feb 2014 11:23:42 +0000
Message-ID: <530C7D3E.4040301@citrix.com>
Date: Tue, 25 Feb 2014 11:23:42 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393326137-29397-1-git-send-email-andrew.cooper3@citrix.com>
	<530C8AD8020000780011F20D@nat28.tlf.novell.com>
In-Reply-To: <530C8AD8020000780011F20D@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/faulting: Use formal defines instead of
 opencoded bits
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 25/02/14 11:21, Jan Beulich wrote:
>>>> On 25.02.14 at 12:02, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/cpu/intel.c
>> +++ b/xen/arch/x86/cpu/intel.c
>> @@ -21,7 +21,8 @@
>>  static unsigned int probe_intel_cpuid_faulting(void)
>>  {
>>  	uint64_t x;
>> -	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
>> +	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
>> +            (x & PLATFORM_INFO_CPUID_FAULTING);
> Indentation (a single hard tab ought to come first at least).
>
>> --- a/xen/include/asm-x86/msr-index.h
>> +++ b/xen/include/asm-x86/msr-index.h
>> @@ -486,7 +486,12 @@
>>  
>>  /* Intel cpuid faulting MSRs */
>>  #define MSR_INTEL_PLATFORM_INFO		0x000000ce
>> +#define _PLATFORM_INFO_CPUID_FAULTING	31
>> +#define PLATFORM_INFO_CPUID_FAULTING	(1ULL << _PLATFORM_INFO_CPUID_FAULTING)
>> +
>>  #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
>> +#define _MISC_FEATURES_CPUID_FAULTING	0
>> +#define MISC_FEATURES_CPUID_FAULTING	(1ULL << _MISC_FEATURES_CPUID_FAULTING)
> I wonder whether, from a name space pov, it wouldn't be better
> if these new constants had at least MSR_ as additional prefix. Both
> are rather generic without...
>
> Jan
>

How about MSR_INTEL_ to match their MSR number names?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:32:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:32:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIGFW-0002AR-L8; Tue, 25 Feb 2014 11:32:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <martin@xn--hrling-vxa.se>) id 1WIGEs-0002A3-H3
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:31:54 +0000
Received: from [85.158.139.211:61464] by server-10.bemta-5.messagelabs.com id
	68/97-08578-92F7C035; Tue, 25 Feb 2014 11:31:53 +0000
X-Env-Sender: martin@xn--hrling-vxa.se
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393327912!1544804!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31410 invoked from network); 25 Feb 2014 11:31:52 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 11:31:52 -0000
Received: by mail-we0-f176.google.com with SMTP id p61so248786wes.35
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 03:31:52 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=/0Ibz5KaJ0h/1ltPOk0uNSrNskk2/lPvaVeed8VDTkM=;
	b=g3v1JyMoVlJQk18v3+XQJxqac8IRYU0Y/zYVjxDpqueAzP2n1sMpfH5MW2u6ZJ1tFl
	StDn7WCY1XrYNEHFqXGESDQIzW01ld0HgIdXLEaDjvwfuuhpM6+S/ezHfkHYK6QxDMZm
	99fvtBTecWj3IzNvRPlQlXSTG5wVVdFWbeshb8dZMRhAtPIO3SyNjgCte1ce3o0vYb/u
	8bOXKENA+UQnlwbyoq7uT3TPjmaPM1hLr2kl4HRHFoCzgqN7z8uKIv+KtYPx3hB0/eTp
	zO3jPrI5AWZMQvAiuV5s0xUvswkJoFTJhOj2vSCeQtL83e8an/ZaCYzeDe3i1mhghAqW
	kqfw==
X-Gm-Message-State: ALoCoQnE1LcGZxK5xn/aIGbtxFO5G6FGl29WJ92TZMjjxswSICSQ4Ztfg/gE7p8K8Ib+JTCAEli3
MIME-Version: 1.0
X-Received: by 10.180.102.97 with SMTP id fn1mr2389180wib.15.1393327911970;
	Tue, 25 Feb 2014 03:31:51 -0800 (PST)
Received: by 10.216.70.204 with HTTP; Tue, 25 Feb 2014 03:31:51 -0800 (PST)
X-Originating-IP: [83.140.123.6]
In-Reply-To: <20140204195551.GA14122@phenom.dumpdata.com>
References: <CABg547VOFCYPiENWX7EBuoou1A2=wV7_TYjFA+-1FjiQ=PVHfw@mail.gmail.com>
	<20140204195551.GA14122@phenom.dumpdata.com>
Date: Tue, 25 Feb 2014 12:31:51 +0100
Message-ID: <CABg547WKmeo8A0BPqVM87dKctxtrWShJKeXU8xxwy07eXhvJ5w@mail.gmail.com>
From: =?ISO-8859-1?Q?Martin_=D6hrling?= <martin@xn--hrling-vxa.se>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
X-Mailman-Approved-At: Tue, 25 Feb 2014 11:32:33 +0000
Cc: linux@eikelenboom.it, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Inconsistent use of libxl__device_pci_reset() when
 adding/removing all functions of a device
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6344270903717969622=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6344270903717969622==
Content-Type: multipart/alternative; boundary=f46d0444ef1d3fbc6304f3396f69

--f46d0444ef1d3fbc6304f3396f69
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi Konrad,

I finally got some time to dig into why I had issues with the last patch.
I added a lot of printks to find out what happened. My conclusions,
possibly incorrect, are as follows:

Each device bound to the pciback module will be reset twice during boot.
The done-counter
in the completion struct will be set to 2 since no calls have been made to
wait_for_completion.

The counter in the completion struct will be decremented to 1 at the first
pcistub_get_pci_dev()
or pcistub_get_pci_dev_by_slot() call.

Something bad happens when pcistub_put_pci_dev() is called during shutdown
of the domU. It appears to me as if the function is expected to block until
the reset is completed. Code from the patch file:
+	schedule_work(&found_psdev->reset_work);
+
+	/* Wait .. wait */
+	wait_for_completion(&found_psdev->reset_done);
+

The function will not wait since the done-counter is set to 1, so
pcistub_put_pci_dev() will return before the work item is executed. I do
not believe that this was intended. The completion struct's done member is
decremented to zero but once again set to 1 when the reset work item has
been executed.

It is possible to boot the DomU a second time (done-counter in completion
struct goes down to zero).
The problem is that pcistub_put_pci_dev() will leave the done-counter at
zero when the domU is shut
down for the second time. Trying to boot the domU a third time will stall.

The get-functions expect to be able to decrement the completion counter
without initiating work that increments the counter, while the put-function
initiates the same number of reset works as it makes calls to
wait_for_completion(). That doesn't seem right. It could also explain why
unbound devices hang: their completion struct is not likely to be
initialized with a count of 2 after startup.

Best Regards,
/Martin




2014-02-04 20:55 GMT+01:00 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>:

> On Tue, Feb 04, 2014 at 07:19:16PM +0100, Martin Öhrling wrote:
> > I tried to use vga passthrough with an AMD card and ran into the issues
> > with dom0 crash on domU reboot. My intention is to debug the issue and I
> > have started to read up on the code for pci passthrough. The following
> > observations may not be an error but perhaps someone could shed some
> light
> > over the design intentions.
> >
> > My first impression was that libxl__device_pci_reset() is a function
> level
> > reset function. It is called each time a single function of a device is
> > added or removed. A device reset would break other domU:s if other
> > functions of the device are passed through to these domU:s.
> >
> > The strange thing is that there is only a single
> libxl__device_pci_reset()
> > call when pcidev->vfunc_mask is set to LIBXL_PCI_FUNC_ALL. I'm getting
> the
> > impression that the function is assumed to be a device reset function.
> >
> > Is the attempt to reset a function, when all other functions belongs to
> > pciback, detected and replaced by a device reset?
>
> Yes with these patches:
>
> https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/?h=devel/xen-pciback.slot_and_bus.v0
>
> But the last one seems to hang pciback when the device is unbound.
>
> >
> > Best Regards,
> > Martin
>
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
>
>

--f46d0444ef1d3fbc6304f3396f69--


--===============6344270903717969622==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6344270903717969622==--


div class=3D"gmail_extra">
<br><br><div class=3D"gmail_quote">2014-02-04 20:55 GMT+01:00 Konrad Rzeszu=
tek Wilk <span dir=3D"ltr">&lt;<a href=3D"mailto:konrad.wilk@oracle.com" ta=
rget=3D"_blank">konrad.wilk@oracle.com</a>&gt;</span>:<br><blockquote class=
=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;border-left:1px #ccc solid;padd=
ing-left:1ex">
<div class=3D"HOEnZb"><div class=3D"h5">On Tue, Feb 04, 2014 at 07:19:16PM =
+0100, Martin =D6hrling wrote:<br>
&gt; I tried to use vga passthrough with an AMD card and ran into the issue=
s<br>
&gt; with dom0 crash on domU reboot. My intention is to debug the issue and=
 I<br>
&gt; have started to read up on the code for pci passthrough. The following=
<br>
&gt; observations may not be an error but perhaps someone could shed some l=
ight<br>
&gt; over the design intentions.<br>
&gt;<br>
&gt; My first impression was that libxl__device_pci_reset() is a function l=
evel<br>
&gt; reset function. It is called each time a single function of a device i=
s<br>
&gt; added or removed. A device reset would break other domU:s if other<br>
&gt; functions of the device are passed through to these domU:s.<br>
&gt;<br>
&gt; The strange thing is that there is only a single libxl__device_pci_res=
et()<br>
&gt; call when pcidev-&gt;vfunc_mask is set to LIBXL_PCI_FUNC_ALL. I&#39;m =
getting the<br>
&gt; impression that the function is assumed to be a device reset function.=
<br>
&gt;<br>
&gt; Is the attempt to reset a function, when all other functions belongs t=
o<br>
&gt; pciback, detected and replaced by a device reset?<br>
<br>
</div></div>Yes with these patches:<br>
<a href=3D"https://git.kernel.org/cgit/linux/kernel/git/konrad/xen.git/log/=
?h=3Ddevel/xen-pciback.slot_and_bus.v0" target=3D"_blank">https://git.kerne=
l.org/cgit/linux/kernel/git/konrad/xen.git/log/?h=3Ddevel/xen-pciback.slot_=
and_bus.v0</a><br>

<br>
But the last one seems to hang pciback when the device is unbound.<br>
<br>
&gt;<br>
&gt; Best Regards,<br>
&gt; Martin<br>
<br>
&gt; _______________________________________________<br>
&gt; Xen-devel mailing list<br>
&gt; <a href=3D"mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>=
<br>
&gt; <a href=3D"http://lists.xen.org/xen-devel" target=3D"_blank">http://li=
sts.xen.org/xen-devel</a><br>
<br>
</blockquote></div><br></div>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel



From xen-devel-bounces@lists.xen.org Tue Feb 25 11:34:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIGHT-0002Gh-C8; Tue, 25 Feb 2014 11:34:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIGHS-0002Ga-2I
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:34:34 +0000
Received: from [85.158.143.35:52375] by server-3.bemta-4.messagelabs.com id
	BE/63-11539-9CF7C035; Tue, 25 Feb 2014 11:34:33 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393328072!8138718!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11199 invoked from network); 25 Feb 2014 11:34:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 11:34:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 11:34:32 +0000
Message-Id: <530C8DD7020000780011F226@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 11:34:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393326137-29397-1-git-send-email-andrew.cooper3@citrix.com>
	<530C8AD8020000780011F20D@nat28.tlf.novell.com>
	<530C7D3E.4040301@citrix.com>
In-Reply-To: <530C7D3E.4040301@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/faulting: Use formal defines instead of
 opencoded bits
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 12:23, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> On 25/02/14 11:21, Jan Beulich wrote:
>>>>> On 25.02.14 at 12:02, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>> --- a/xen/arch/x86/cpu/intel.c
>>> +++ b/xen/arch/x86/cpu/intel.c
>>> @@ -21,7 +21,8 @@
>>>  static unsigned int probe_intel_cpuid_faulting(void)
>>>  {
>>>  	uint64_t x;
>>> -	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
>>> +	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
>>> +            (x & PLATFORM_INFO_CPUID_FAULTING);
>> Indentation (a single hard tab ought to come first at least).
>>
>>> --- a/xen/include/asm-x86/msr-index.h
>>> +++ b/xen/include/asm-x86/msr-index.h
>>> @@ -486,7 +486,12 @@
>>>  
>>>  /* Intel cpuid faulting MSRs */
>>>  #define MSR_INTEL_PLATFORM_INFO		0x000000ce
>>> +#define _PLATFORM_INFO_CPUID_FAULTING	31
>>> +#define PLATFORM_INFO_CPUID_FAULTING	(1ULL << _PLATFORM_INFO_CPUID_FAULTING)
>>> +
>>>  #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
>>> +#define _MISC_FEATURES_CPUID_FAULTING	0
>>> +#define MISC_FEATURES_CPUID_FAULTING	(1ULL << _MISC_FEATURES_CPUID_FAULTING)
>> I wonder whether, from a name space pov, it wouldn't be better
>> if these new constants had at least MSR_ as additional prefix. Both
>> are rather generic without...
> 
> How about MSR_INTEL_ to match their MSR number names?

I'd be fine with that. I merely didn't require it to be the full name
because it gets rather long. But with only a single use site that's
probably acceptable.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:35:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIGIQ-0002MO-RD; Tue, 25 Feb 2014 11:35:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WIGIP-0002ME-Dj
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:35:33 +0000
Received: from [193.109.254.147:56903] by server-7.bemta-14.messagelabs.com id
	4D/B1-23424-4008C035; Tue, 25 Feb 2014 11:35:32 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393328131!6683076!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7766 invoked from network); 25 Feb 2014 11:35:32 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 11:35:32 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WIGI4-000Ljt-6C; Tue, 25 Feb 2014 11:35:12 +0000
Date: Tue, 25 Feb 2014 12:35:12 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140225113512.GB76936@deinos.phlegethon.org>
References: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
	<530C8468020000780011F1D1@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530C8468020000780011F1D1@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Tamas K Lengyel <tamas.lengyel@zentific.com>, keir@xen.org,
	Andres Lagar-Cavilla <andres@lagarcavilla.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mem_event: Return previous value of
 CR0/CR3/CR4 on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 10:54 +0000 on 25 Feb (1393322055), Jan Beulich wrote:
> >>> On 30.01.14 at 22:34, Tamas K Lengyel <tamas.lengyel@zentific.com> wrote:
> > This patch extends the information returned for CR0/CR3/CR4 register
> > write events with the previous value of the register. The old value was
> > already passed to the trap processing function, just never placed into
> > the returned request. By returning this value, applications subscribing
> > to the CR events obtain additional context about the event.
> > 
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
> 
> Tim, Andres,
> 
> this seems to fall in your area - any thoughts?

I already acked this; I was intending to apply it on Thursday.

Tim.

> > ---
> >  xen/arch/x86/hvm/hvm.c         |    4 ++++
> >  xen/include/public/mem_event.h |    6 +++---
> >  2 files changed, 7 insertions(+), 3 deletions(-)
> > 
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 69f7e74..d46abf2 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -4682,6 +4682,10 @@ static int hvm_memory_event_traps(long p, uint32_t 
> > reason,
> >          req.gla = gla;
> >          req.gla_valid = 1;
> >      }
> > +    else
> > +    {
> > +        req.gla = old;
> > +    }
> >      
> >      mem_event_put_request(d, &d->mem_event->access, &req);
> >      
> > diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
> > index c9ed546..3831b41 100644
> > --- a/xen/include/public/mem_event.h
> > +++ b/xen/include/public/mem_event.h
> > @@ -40,9 +40,9 @@
> >  /* Reasons for the memory event request */
> >  #define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
> >  #define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is 
> > address */
> > -#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is CR0 value 
> > */
> > -#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is CR3 value 
> > */
> > -#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value 
> > */
> > +#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 
> > value, gla is previous */
> > +#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 
> > value, gla is previous */
> > +#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 
> > value, gla is previous */
> >  #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP 
> > */
> >  #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: 
> > gla/gfn are RIP */
> >  #define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, 
> > gla is MSR address;
> > -- 
> > 1.7.10.4
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org 
> > http://lists.xen.org/xen-devel 
> 
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 11:39:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 11:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIGLw-0002Z8-HB; Tue, 25 Feb 2014 11:39:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIGLv-0002Z2-Qi
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 11:39:12 +0000
Received: from [85.158.137.68:17634] by server-11.bemta-3.messagelabs.com id
	01/C5-04255-FD08C035; Tue, 25 Feb 2014 11:39:11 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393328347!2746290!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18233 invoked from network); 25 Feb 2014 11:39:09 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 11:39:09 -0000
X-IronPort-AV: E=Sophos;i="4.97,539,1389744000"; d="scan'208";a="103841013"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 11:39:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 06:39:06 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIGLp-0006Sc-T1; Tue, 25 Feb 2014 11:39:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 11:39:04 +0000
Message-ID: <1393328344-27597-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <530C8DD7020000780011F226@nat28.tlf.novell.com>
References: <530C8DD7020000780011F226@nat28.tlf.novell.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v2] x86/faulting: Use formal defines instead of
	opencoded bits
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>

---

Changes in v2:
 * Use MSR_ prefix for names
---
 xen/arch/x86/cpu/intel.c        |    7 ++++---
 xen/include/asm-x86/msr-index.h |    5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 27fe762..9e02cf6 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -21,7 +21,8 @@
 static unsigned int probe_intel_cpuid_faulting(void)
 {
 	uint64_t x;
-	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
+	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
+		(x & MSR_PLATFORM_INFO_CPUID_FAULTING);
 }
 
 static DEFINE_PER_CPU(bool_t, cpuid_faulting_enabled);
@@ -34,9 +35,9 @@ void set_cpuid_faulting(bool_t enable)
 		return;
 
 	rdmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
-	lo &= ~1;
+	lo &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
 	if (enable)
-		lo |= 1;
+		lo |= MSR_MISC_FEATURES_CPUID_FAULTING;
 	wrmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
 
 	this_cpu(cpuid_faulting_enabled) = enable;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 86b7d64..3269c43 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -486,7 +486,12 @@
 
 /* Intel cpuid faulting MSRs */
 #define MSR_INTEL_PLATFORM_INFO		0x000000ce
+#define _MSR_PLATFORM_INFO_CPUID_FAULTING	31
+#define MSR_PLATFORM_INFO_CPUID_FAULTING	(1ULL << _MSR_PLATFORM_INFO_CPUID_FAULTING)
+
 #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
+#define _MSR_MISC_FEATURES_CPUID_FAULTING	0
+#define MSR_MISC_FEATURES_CPUID_FAULTING	(1ULL << _MSR_MISC_FEATURES_CPUID_FAULTING)
 
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>

---

Changes in v2:
 * Use MSR_ prefix for names
---
 xen/arch/x86/cpu/intel.c        |    7 ++++---
 xen/include/asm-x86/msr-index.h |    5 +++++
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 27fe762..9e02cf6 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -21,7 +21,8 @@
 static unsigned int probe_intel_cpuid_faulting(void)
 {
 	uint64_t x;
-	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) && (x & (1u<<31));
+	return !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, x) &&
+		(x & MSR_PLATFORM_INFO_CPUID_FAULTING);
 }
 
 static DEFINE_PER_CPU(bool_t, cpuid_faulting_enabled);
@@ -34,9 +35,9 @@ void set_cpuid_faulting(bool_t enable)
 		return;
 
 	rdmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
-	lo &= ~1;
+	lo &= ~MSR_MISC_FEATURES_CPUID_FAULTING;
 	if (enable)
-		lo |= 1;
+		lo |= MSR_MISC_FEATURES_CPUID_FAULTING;
 	wrmsr(MSR_INTEL_MISC_FEATURES_ENABLES, lo, hi);
 
 	this_cpu(cpuid_faulting_enabled) = enable;
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 86b7d64..3269c43 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -486,7 +486,12 @@
 
 /* Intel cpuid faulting MSRs */
 #define MSR_INTEL_PLATFORM_INFO		0x000000ce
+#define _MSR_PLATFORM_INFO_CPUID_FAULTING	31
+#define MSR_PLATFORM_INFO_CPUID_FAULTING	(1ULL << _MSR_PLATFORM_INFO_CPUID_FAULTING)
+
 #define MSR_INTEL_MISC_FEATURES_ENABLES	0x00000140
+#define _MSR_MISC_FEATURES_CPUID_FAULTING	0
+#define MSR_MISC_FEATURES_CPUID_FAULTING	(1ULL << _MSR_MISC_FEATURES_CPUID_FAULTING)
 
 /* Geode defined MSRs */
 #define MSR_GEODE_BUSCONT_CONF0		0x00001900
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2x-0002zn-Ji; Tue, 25 Feb 2014 12:23:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2w-0002zE-5o
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:38 +0000
Received: from [85.158.143.35:21599] by server-3.bemta-4.messagelabs.com id
	12/86-11539-94B8C035; Tue, 25 Feb 2014 12:23:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393331015!8154808!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11523 invoked from network); 25 Feb 2014 12:23:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852562"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-H0; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:27 +0000
Message-ID: <1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v4 1/5] xen/compiler: Replace opencoded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make a formal define for noreturn in compiler.h, and fix up opencoded uses of
__attribute__((noreturn)).  This includes removing redundant uses with
function definitions which have a public declaration.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/arm/early_printk.c        |    2 +-
 xen/arch/x86/efi/boot.c            |    4 ++--
 xen/arch/x86/shutdown.c            |    2 +-
 xen/include/asm-arm/early_printk.h |    4 ++--
 xen/include/xen/compiler.h         |    2 ++
 xen/include/xen/lib.h              |    2 +-
 xen/include/xen/sched.h            |    4 ++--
 7 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..2870a30 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -52,7 +52,7 @@ void __init early_printk(const char *fmt, ...)
     va_end(args);
 }
 
-void __attribute__((noreturn)) __init
+void __init
 early_panic(const char *fmt, ...)
 {
     va_list args;
diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index 0dd935c..a26e0af 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -183,7 +183,7 @@ static bool_t __init match_guid(const EFI_GUID *guid1, const EFI_GUID *guid2)
            !memcmp(guid1->Data4, guid2->Data4, sizeof(guid1->Data4));
 }
 
-static void __init __attribute__((__noreturn__)) blexit(const CHAR16 *str)
+static void __init noreturn blexit(const CHAR16 *str)
 {
     if ( str )
         PrintStr((CHAR16 *)str);
@@ -762,7 +762,7 @@ static void __init relocate_trampoline(unsigned long phys)
         *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
 }
 
-void EFIAPI __init __attribute__((__noreturn__))
+void EFIAPI __init noreturn
 efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     static EFI_GUID __initdata loaded_image_guid = LOADED_IMAGE_PROTOCOL;
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6eba271..6143c40 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -85,7 +85,7 @@ static inline void kb_wait(void)
             break;
 }
 
-static void __attribute__((noreturn)) __machine_halt(void *unused)
+static void noreturn __machine_halt(void *unused)
 {
     local_irq_disable();
     for ( ; ; )
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..3cb8dab 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -26,7 +26,7 @@
 
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
+void early_panic(const char *fmt, ...) noreturn
     __attribute__((format (printf, 1, 2)));
 
 #else
@@ -35,7 +35,7 @@ static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
 
-static inline void  __attribute__((noreturn))
+static inline void noreturn
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 7d6805c..c80398d 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -14,6 +14,8 @@
 #define always_inline __inline__ __attribute__ ((always_inline))
 #define noinline      __attribute__((noinline))
 
+#define noreturn      __attribute__((noreturn))
+
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
 #else
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..0d1a5d3 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -8,7 +8,7 @@
 #include <xen/string.h>
 #include <asm/bug.h>
 
-void __bug(char *file, int line) __attribute__((noreturn));
+void noreturn __bug(char *file, int line);
 void __warn(char *file, int line);
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..00f0eba 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -580,7 +580,7 @@ void __domain_crash(struct domain *d);
  * Mark current domain as crashed and synchronously deschedule from the local
  * processor. This function never returns.
  */
-void __domain_crash_synchronous(void) __attribute__((noreturn));
+void noreturn __domain_crash_synchronous(void);
 #define domain_crash_synchronous() do {                                   \
     printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__);  \
     __domain_crash_synchronous();                                         \
@@ -591,7 +591,7 @@ void __domain_crash_synchronous(void) __attribute__((noreturn));
  * the crash occured.  If addr is 0, look up address from last extable
  * redirection.
  */
-void asm_domain_crash_synchronous(unsigned long addr) __attribute__((noreturn));
+void noreturn asm_domain_crash_synchronous(unsigned long addr);
 
 #define set_current_state(_s) do { current->state = (_s); } while (0)
 void scheduler_init(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2w-0002zV-SB; Tue, 25 Feb 2014 12:23:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2u-0002z3-VJ
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:37 +0000
Received: from [193.109.254.147:38087] by server-11.bemta-14.messagelabs.com
	id BA/29-24604-84B8C035; Tue, 25 Feb 2014 12:23:36 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393331013!6646629!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23377 invoked from network); 25 Feb 2014 12:23:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852560"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-Iz; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:28 +0000
Message-ID: <1393331011-22240-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v4 2/5] x86/crash: Fix up declaration of
	do_nmi_crash()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

... so it can correctly be annotated as noreturn.  Move the declaration of
nmi_crash() to be effectively private in crash.c.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/x86/crash.c            |    3 ++-
 xen/include/asm-x86/processor.h |    2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 01fd906..ec586bd 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -36,7 +36,7 @@ static unsigned int crashing_cpu;
 static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
 /* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
-void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
+void do_nmi_crash(struct cpu_user_regs *regs)
 {
     int cpu = smp_processor_id();
 
@@ -113,6 +113,7 @@ void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
         halt();
 }
 
+void nmi_crash(void);
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c120460..1d1dee6 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -529,7 +529,6 @@ void do_ ## _name(struct cpu_user_regs *regs)
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
-DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -550,6 +549,7 @@ DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 
 void trap_nop(void);
 void enable_nmis(void);
+void noreturn do_nmi_crash(struct cpu_user_regs *regs);
 
 void syscall_enter(void);
 void sysenter_entry(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2y-00030B-Cg; Tue, 25 Feb 2014 12:23:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2w-0002zM-ME
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:38 +0000
Received: from [85.158.143.35:13187] by server-2.bemta-4.messagelabs.com id
	FF/C6-10891-A4B8C035; Tue, 25 Feb 2014 12:23:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393331015!8154808!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11604 invoked from network); 25 Feb 2014 12:23:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852564"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-Mr; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:31 +0000
Message-ID: <1393331011-22240-6-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH v4 5/5] xen/x86: Identify reset_stack_and_jump()
	as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

reset_stack_and_jump() is actually a macro, but can effectively become noreturn
by giving it an unreachable() declaration.

Propagate the 'noreturn-ness' up through the direct and indirect callers.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/x86/domain.c             |    6 ++----
 xen/arch/x86/hvm/svm/svm.c        |    2 +-
 xen/arch/x86/setup.c              |    2 +-
 xen/include/asm-x86/current.h     |   11 +++++++----
 xen/include/asm-x86/domain.h      |    2 +-
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 +-
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..c42a079 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -133,12 +133,12 @@ void startup_cpu_idle_loop(void)
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_idle_domain(struct vcpu *v)
+static void noreturn continue_idle_domain(struct vcpu *v)
 {
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_nonidle_domain(struct vcpu *v)
+static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
     mark_regs_dirty(guest_cpu_user_regs());
@@ -1521,13 +1521,11 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     update_vcpu_system_time(next);
 
     schedule_tail(next);
-    BUG();
 }
 
 void continue_running(struct vcpu *same)
 {
     schedule_tail(same);
-    BUG();
 }
 
 int __sync_local_execstate(void)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..a1d9320 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -911,7 +911,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
         wrmsrl(MSR_TSC_AUX, hvm_msr_tsc_aux(v));
 }
 
-static void svm_do_resume(struct vcpu *v) 
+static void noreturn svm_do_resume(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     bool_t debug_state = v->domain->debugger_attached;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..addd071 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -541,7 +541,7 @@ static char * __init cmdline_cook(char *p, char *loader_name)
     return p;
 }
 
-void __init __start_xen(unsigned long mbi_p)
+void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
     char *cmdline, *kextra, *loader;
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index c2792ce..4d1f20e 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -59,10 +59,13 @@ static inline struct cpu_info *get_cpu_info(void)
     ((sp & (~(STACK_SIZE-1))) +                 \
      (STACK_SIZE - sizeof(struct cpu_info) - sizeof(unsigned long)))
 
-#define reset_stack_and_jump(__fn)              \
-    __asm__ __volatile__ (                      \
-        "mov %0,%%"__OP"sp; jmp %c1"            \
-        : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" )
+#define reset_stack_and_jump(__fn)                                      \
+    ({                                                                  \
+        __asm__ __volatile__ (                                          \
+            "mov %0,%%"__OP"sp; jmp %c1"                                \
+            : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" );   \
+        unreachable();                                                  \
+    })
 
 #define schedule_tail(vcpu) (((vcpu)->arch.schedule_tail)(vcpu))
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4ff89f0..49f7c0c 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -395,7 +395,7 @@ struct arch_vcpu
 
     unsigned long      flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
+    void noreturn (*schedule_tail) (struct vcpu *);
 
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 6f6b672..827c97e 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -91,7 +91,7 @@ typedef enum {
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
-void vmx_do_resume(struct vcpu *);
+void noreturn vmx_do_resume(struct vcpu *);
 void vmx_vlapic_msr_changed(struct vcpu *v);
 void vmx_realmode(struct cpu_user_regs *regs);
 void vmx_update_debug_state(struct vcpu *v);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2x-0002zg-8C; Tue, 25 Feb 2014 12:23:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2v-0002z8-LS
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:37 +0000
Received: from [193.109.254.147:3433] by server-9.bemta-14.messagelabs.com id
	48/9D-24895-94B8C035; Tue, 25 Feb 2014 12:23:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393331013!6646629!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23464 invoked from network); 25 Feb 2014 12:23:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852561"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-LD; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:30 +0000
Message-ID: <1393331011-22240-5-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v4 4/5] xen: Misc cleanup as a result of the
	previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This includes:
 * A stale comment in sh_skip_sync()
 * A dead infinite loop in __bug()
 * A prototype for machine_power_off(), which is not implemented by any architecture
 * Replacing a for(;;); loop with unreachable()

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/efi/boot.c         |    2 +-
 xen/arch/x86/mm/shadow/common.c |    1 -
 xen/drivers/char/console.c      |    1 -
 xen/include/xen/shutdown.h      |    1 -
 4 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index a26e0af..62c4812 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -201,7 +201,7 @@ static void __init noreturn blexit(const CHAR16 *str)
         efi_bs->FreePages(xsm.addr, PFN_UP(xsm.size));
 
     efi_bs->Exit(efi_ih, EFI_SUCCESS, 0, NULL);
-    for( ; ; ); /* not reached */
+    unreachable(); /* not reached */
 }
 
 /* generic routine for printing error messages */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 11c6b62..b400ccb 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
     SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
                  mfn_x(gl1mfn));
     BUG();
-    return 0; /* BUG() is no longer __attribute__((noreturn)). */
 }
 
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..7d4383c 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1089,7 +1089,6 @@ void __bug(char *file, int line)
     printk("Xen BUG at %s:%d\n", file, line);
     dump_execution_state();
     panic("Xen BUG at %s:%d", file, line);
-    for ( ; ; ) ;
 }
 
 void __warn(char *file, int line)
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index f04905b..a00bfef 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -10,6 +10,5 @@ void noreturn dom0_shutdown(u8 reason);
 
 void noreturn machine_restart(unsigned int delay_millisecs);
 void noreturn machine_halt(void);
-void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2x-0002zu-VV; Tue, 25 Feb 2014 12:23:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2w-0002zL-GN
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:38 +0000
Received: from [193.109.254.147:38202] by server-4.bemta-14.messagelabs.com id
	B9/91-32066-94B8C035; Tue, 25 Feb 2014 12:23:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393331013!6646629!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23534 invoked from network); 25 Feb 2014 12:23:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852563"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-JV; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:29 +0000
Message-ID: <1393331011-22240-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v4 3/5] xen: Identify panic and reboot/halt
	functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On an x86 build (GCC Debian 4.7.2-5), this substantially reduces the size of
the .text and .init.text sections.

Experimentally, even in a non-debug build, GCC uses `call` rather than `jmp`
when calling noreturn functions, so there should be no impact on stack trace
generation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/arm/shutdown.c         |    2 +-
 xen/arch/x86/cpu/mcheck/mce.h   |    2 +-
 xen/arch/x86/shutdown.c         |    2 +-
 xen/common/shutdown.c           |    2 +-
 xen/include/asm-arm/smp.h       |    2 +-
 xen/include/asm-x86/processor.h |    2 +-
 xen/include/xen/lib.h           |    2 +-
 xen/include/xen/shutdown.h      |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 767cc12..adc0529 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -11,7 +11,7 @@ static void raw_machine_reset(void)
     platform_reset();
 }
 
-static void halt_this_cpu(void *arg)
+static void noreturn halt_this_cpu(void *arg)
 {
     stop_cpu();
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index cbd123d..33bd1ab 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
 struct mc_info *x86_mcinfo_getptr(void);
-void mc_panic(char *s);
+void noreturn mc_panic(char *s);
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
 			 uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6143c40..827515d 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -452,7 +452,7 @@ static int __init reboot_init(void)
 }
 __initcall(reboot_init);
 
-static void __machine_restart(void *pdelay)
+static void noreturn __machine_restart(void *pdelay)
 {
     machine_restart(*(unsigned int *)pdelay);
 }
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 9bccd34..fadb69b 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -17,7 +17,7 @@
 bool_t __read_mostly opt_noreboot;
 boolean_param("noreboot", opt_noreboot);
 
-static void maybe_reboot(void)
+static void noreturn maybe_reboot(void)
 {
     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index a1de03c..91b1e52 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
 #define raw_smp_processor_id() (get_processor_id())
 
-extern void stop_cpu(void);
+extern void noreturn stop_cpu(void);
 
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 1d1dee6..58fc917 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -514,7 +514,7 @@ void show_registers(struct cpu_user_regs *regs);
 void show_execution_state(struct cpu_user_regs *regs);
 #define dump_execution_state() run_in_exception_handler(show_execution_state)
 void show_page_walk(unsigned long addr);
-void fatal_trap(int trapnr, struct cpu_user_regs *regs);
+void noreturn fatal_trap(int trapnr, struct cpu_user_regs *regs);
 
 void compat_show_guest_stack(struct vcpu *, struct cpu_user_regs *, int lines);
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 0d1a5d3..1369b2b 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -87,7 +87,7 @@ extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
-extern void panic(const char *format, ...)
+extern void noreturn panic(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2bee748..f04905b 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -1,13 +1,15 @@
 #ifndef __XEN_SHUTDOWN_H__
 #define __XEN_SHUTDOWN_H__
 
+#include <xen/compiler.h>
+
 /* opt_noreboot: If true, machine will need manual reset on error. */
 extern bool_t opt_noreboot;
 
-void dom0_shutdown(u8 reason);
+void noreturn dom0_shutdown(u8 reason);
 
-void machine_restart(unsigned int delay_millisecs);
-void machine_halt(void);
+void noreturn machine_restart(unsigned int delay_millisecs);
+void noreturn machine_halt(void);
 void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2x-0002zn-Ji; Tue, 25 Feb 2014 12:23:39 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2w-0002zE-5o
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:38 +0000
Received: from [85.158.143.35:21599] by server-3.bemta-4.messagelabs.com id
	12/86-11539-94B8C035; Tue, 25 Feb 2014 12:23:37 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393331015!8154808!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11523 invoked from network); 25 Feb 2014 12:23:36 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852562"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-H0; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:27 +0000
Message-ID: <1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v4 1/5] xen/compiler: Replace opencoded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make a formal define for noreturn in compiler.h, and fix up open-coded uses of
__attribute__((noreturn)).  This includes removing redundant annotations from
function definitions which already have a public declaration.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/arm/early_printk.c        |    2 +-
 xen/arch/x86/efi/boot.c            |    4 ++--
 xen/arch/x86/shutdown.c            |    2 +-
 xen/include/asm-arm/early_printk.h |    4 ++--
 xen/include/xen/compiler.h         |    2 ++
 xen/include/xen/lib.h              |    2 +-
 xen/include/xen/sched.h            |    4 ++--
 7 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..2870a30 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -52,7 +52,7 @@ void __init early_printk(const char *fmt, ...)
     va_end(args);
 }
 
-void __attribute__((noreturn)) __init
+void __init
 early_panic(const char *fmt, ...)
 {
     va_list args;
diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index 0dd935c..a26e0af 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -183,7 +183,7 @@ static bool_t __init match_guid(const EFI_GUID *guid1, const EFI_GUID *guid2)
            !memcmp(guid1->Data4, guid2->Data4, sizeof(guid1->Data4));
 }
 
-static void __init __attribute__((__noreturn__)) blexit(const CHAR16 *str)
+static void __init noreturn blexit(const CHAR16 *str)
 {
     if ( str )
         PrintStr((CHAR16 *)str);
@@ -762,7 +762,7 @@ static void __init relocate_trampoline(unsigned long phys)
         *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
 }
 
-void EFIAPI __init __attribute__((__noreturn__))
+void EFIAPI __init noreturn
 efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     static EFI_GUID __initdata loaded_image_guid = LOADED_IMAGE_PROTOCOL;
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6eba271..6143c40 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -85,7 +85,7 @@ static inline void kb_wait(void)
             break;
 }
 
-static void __attribute__((noreturn)) __machine_halt(void *unused)
+static void noreturn __machine_halt(void *unused)
 {
     local_irq_disable();
     for ( ; ; )
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..3cb8dab 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -26,7 +26,7 @@
 
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
+void early_panic(const char *fmt, ...) noreturn
     __attribute__((format (printf, 1, 2)));
 
 #else
@@ -35,7 +35,7 @@ static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
 
-static inline void  __attribute__((noreturn))
+static inline void noreturn
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 7d6805c..c80398d 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -14,6 +14,8 @@
 #define always_inline __inline__ __attribute__ ((always_inline))
 #define noinline      __attribute__((noinline))
 
+#define noreturn      __attribute__((noreturn))
+
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
 #else
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..0d1a5d3 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -8,7 +8,7 @@
 #include <xen/string.h>
 #include <asm/bug.h>
 
-void __bug(char *file, int line) __attribute__((noreturn));
+void noreturn __bug(char *file, int line);
 void __warn(char *file, int line);
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..00f0eba 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -580,7 +580,7 @@ void __domain_crash(struct domain *d);
  * Mark current domain as crashed and synchronously deschedule from the local
  * processor. This function never returns.
  */
-void __domain_crash_synchronous(void) __attribute__((noreturn));
+void noreturn __domain_crash_synchronous(void);
 #define domain_crash_synchronous() do {                                   \
     printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__);  \
     __domain_crash_synchronous();                                         \
@@ -591,7 +591,7 @@ void __domain_crash_synchronous(void) __attribute__((noreturn));
  * the crash occured.  If addr is 0, look up address from last extable
  * redirection.
  */
-void asm_domain_crash_synchronous(unsigned long addr) __attribute__((noreturn));
+void noreturn asm_domain_crash_synchronous(unsigned long addr);
 
 #define set_current_state(_s) do { current->state = (_s); } while (0)
 void scheduler_init(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2v-0002z9-Fk; Tue, 25 Feb 2014 12:23:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2u-0002yy-Bx
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:36 +0000
Received: from [193.109.254.147:38014] by server-3.bemta-14.messagelabs.com id
	9F/A6-00432-74B8C035; Tue, 25 Feb 2014 12:23:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393331013!6646629!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23323 invoked from network); 25 Feb 2014 12:23:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852559"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:32 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-FH; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:26 +0000
Message-ID: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel]  [PATCH v4 0/5] Improvements with noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make better use of noreturn.  It allows optimising compilers to produce more
efficient code.

Each patch is compile-tested on each architecture, and the result is
functionally tested on x86 and compile-tested with GCC 4.1.1 to verify that
older compilers are happy.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Date: Tue, 25 Feb 2014 12:23:29 +0000
Message-ID: <1393331011-22240-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Keir Fraser <keir@xen.org>, Ian Campbell <ian.campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v4 3/5] xen: Identify panic and reboot/halt
	functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On an x86 build (GCC Debian 4.7.2-5), this substantially reduces the size of
.text and .init.text sections.

Experimentally, even in a non-debug build, GCC uses `call` rather than `jmp`
so there should be no impact on any stack trace generation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/arm/shutdown.c         |    2 +-
 xen/arch/x86/cpu/mcheck/mce.h   |    2 +-
 xen/arch/x86/shutdown.c         |    2 +-
 xen/common/shutdown.c           |    2 +-
 xen/include/asm-arm/smp.h       |    2 +-
 xen/include/asm-x86/processor.h |    2 +-
 xen/include/xen/lib.h           |    2 +-
 xen/include/xen/shutdown.h      |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 767cc12..adc0529 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -11,7 +11,7 @@ static void raw_machine_reset(void)
     platform_reset();
 }
 
-static void halt_this_cpu(void *arg)
+static void noreturn halt_this_cpu(void *arg)
 {
     stop_cpu();
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index cbd123d..33bd1ab 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
 struct mc_info *x86_mcinfo_getptr(void);
-void mc_panic(char *s);
+void noreturn mc_panic(char *s);
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
 			 uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6143c40..827515d 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -452,7 +452,7 @@ static int __init reboot_init(void)
 }
 __initcall(reboot_init);
 
-static void __machine_restart(void *pdelay)
+static void noreturn __machine_restart(void *pdelay)
 {
     machine_restart(*(unsigned int *)pdelay);
 }
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 9bccd34..fadb69b 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -17,7 +17,7 @@
 bool_t __read_mostly opt_noreboot;
 boolean_param("noreboot", opt_noreboot);
 
-static void maybe_reboot(void)
+static void noreturn maybe_reboot(void)
 {
     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index a1de03c..91b1e52 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
 #define raw_smp_processor_id() (get_processor_id())
 
-extern void stop_cpu(void);
+extern void noreturn stop_cpu(void);
 
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 1d1dee6..58fc917 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -514,7 +514,7 @@ void show_registers(struct cpu_user_regs *regs);
 void show_execution_state(struct cpu_user_regs *regs);
 #define dump_execution_state() run_in_exception_handler(show_execution_state)
 void show_page_walk(unsigned long addr);
-void fatal_trap(int trapnr, struct cpu_user_regs *regs);
+void noreturn fatal_trap(int trapnr, struct cpu_user_regs *regs);
 
 void compat_show_guest_stack(struct vcpu *, struct cpu_user_regs *, int lines);
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 0d1a5d3..1369b2b 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -87,7 +87,7 @@ extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
-extern void panic(const char *format, ...)
+extern void noreturn panic(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2bee748..f04905b 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -1,13 +1,15 @@
 #ifndef __XEN_SHUTDOWN_H__
 #define __XEN_SHUTDOWN_H__
 
+#include <xen/compiler.h>
+
 /* opt_noreboot: If true, machine will need manual reset on error. */
 extern bool_t opt_noreboot;
 
-void dom0_shutdown(u8 reason);
+void noreturn dom0_shutdown(u8 reason);
 
-void machine_restart(unsigned int delay_millisecs);
-void machine_halt(void);
+void noreturn machine_restart(unsigned int delay_millisecs);
+void noreturn machine_halt(void);
 void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2v-0002z9-Fk; Tue, 25 Feb 2014 12:23:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2u-0002yy-Bx
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:36 +0000
Received: from [193.109.254.147:38014] by server-3.bemta-14.messagelabs.com id
	9F/A6-00432-74B8C035; Tue, 25 Feb 2014 12:23:35 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393331013!6646629!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23323 invoked from network); 25 Feb 2014 12:23:35 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852559"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:32 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-FH; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:26 +0000
Message-ID: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel]  [PATCH v4 0/5] Improvements with noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make better use of noreturn.  It allows optimising compilers to produce more
efficient code.

Each patch is compile tested on each architecture, and the result is
functionally tested on x86 and compile tested on GCC 4.1.1 to verify that
older compilers are happy.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 12:23:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 12:23:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIH2y-00030B-Cg; Tue, 25 Feb 2014 12:23:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIH2w-0002zM-ME
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 12:23:38 +0000
Received: from [85.158.143.35:13187] by server-2.bemta-4.messagelabs.com id
	FF/C6-10891-A4B8C035; Tue, 25 Feb 2014 12:23:38 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393331015!8154808!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11604 invoked from network); 25 Feb 2014 12:23:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 12:23:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103852564"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 12:23:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 07:23:33 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIH2q-00072M-Mr; Tue, 25 Feb 2014 12:23:32 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 12:23:31 +0000
Message-ID: <1393331011-22240-6-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH v4 5/5] xen/x86: Identify reset_stack_and_jump()
	as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

reset_stack_and_jump() is actually a macro, but can effectively become noreturn
by ending its expansion with unreachable().

Propagate the 'noreturn-ness' up through the direct and indirect callers.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>

---

Changes in v4:
 * Standardise on noreturn before the function name
---
 xen/arch/x86/domain.c             |    6 ++----
 xen/arch/x86/hvm/svm/svm.c        |    2 +-
 xen/arch/x86/setup.c              |    2 +-
 xen/include/asm-x86/current.h     |   11 +++++++----
 xen/include/asm-x86/domain.h      |    2 +-
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 +-
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..c42a079 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -133,12 +133,12 @@ void startup_cpu_idle_loop(void)
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_idle_domain(struct vcpu *v)
+static void noreturn continue_idle_domain(struct vcpu *v)
 {
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_nonidle_domain(struct vcpu *v)
+static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
     mark_regs_dirty(guest_cpu_user_regs());
@@ -1521,13 +1521,11 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     update_vcpu_system_time(next);
 
     schedule_tail(next);
-    BUG();
 }
 
 void continue_running(struct vcpu *same)
 {
     schedule_tail(same);
-    BUG();
 }
 
 int __sync_local_execstate(void)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 406d394..a1d9320 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -911,7 +911,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
         wrmsrl(MSR_TSC_AUX, hvm_msr_tsc_aux(v));
 }
 
-static void svm_do_resume(struct vcpu *v) 
+static void noreturn svm_do_resume(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     bool_t debug_state = v->domain->debugger_attached;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..addd071 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -541,7 +541,7 @@ static char * __init cmdline_cook(char *p, char *loader_name)
     return p;
 }
 
-void __init __start_xen(unsigned long mbi_p)
+void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
     char *cmdline, *kextra, *loader;
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index c2792ce..4d1f20e 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -59,10 +59,13 @@ static inline struct cpu_info *get_cpu_info(void)
     ((sp & (~(STACK_SIZE-1))) +                 \
      (STACK_SIZE - sizeof(struct cpu_info) - sizeof(unsigned long)))
 
-#define reset_stack_and_jump(__fn)              \
-    __asm__ __volatile__ (                      \
-        "mov %0,%%"__OP"sp; jmp %c1"            \
-        : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" )
+#define reset_stack_and_jump(__fn)                                      \
+    ({                                                                  \
+        __asm__ __volatile__ (                                          \
+            "mov %0,%%"__OP"sp; jmp %c1"                                \
+            : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" );   \
+        unreachable();                                                  \
+    })
 
 #define schedule_tail(vcpu) (((vcpu)->arch.schedule_tail)(vcpu))
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4ff89f0..49f7c0c 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -395,7 +395,7 @@ struct arch_vcpu
 
     unsigned long      flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
+    void noreturn (*schedule_tail) (struct vcpu *);
 
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 6f6b672..827c97e 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -91,7 +91,7 @@ typedef enum {
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
-void vmx_do_resume(struct vcpu *);
+void noreturn vmx_do_resume(struct vcpu *);
 void vmx_vlapic_msr_changed(struct vcpu *v);
 void vmx_realmode(struct cpu_user_regs *regs);
 void vmx_update_debug_state(struct vcpu *v);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 13:03:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIHfT-0003vf-Tl; Tue, 25 Feb 2014 13:03:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dkiper@net-space.pl>) id 1WIHfS-0003va-8e
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 13:03:26 +0000
Received: from [85.158.137.68:57921] by server-17.bemta-3.messagelabs.com id
	51/54-22569-D949C035; Tue, 25 Feb 2014 13:03:25 +0000
X-Env-Sender: dkiper@net-space.pl
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393333403!4091368!1
X-Originating-IP: [89.174.63.77]
X-SpamReason: No, hits=0.0 required=7.0 tests=UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3211 invoked from network); 25 Feb 2014 13:03:24 -0000
Received: from router-fw.net-space.pl (HELO router-fw.net-space.pl)
	(89.174.63.77)
	by server-14.tower-31.messagelabs.com with EDH-RSA-DES-CBC3-SHA
	encrypted SMTP; 25 Feb 2014 13:03:24 -0000
Received: (from localhost user: 'dkiper' uid#4000 fake: STDIN
	(dkiper@router-fw.net-space.pl)) by router-fw-old.local.net-space.pl
	id S2612132AbaBYNDP (ORCPT <rfc822;xen-devel@lists.xen.org>);
	Tue, 25 Feb 2014 14:03:15 +0100
Date: Tue, 25 Feb 2014 14:03:15 +0100
From: Daniel Kiper <dkiper@net-space.pl>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140225130315.GA13528@router-fw-old.local.net-space.pl>
References: <1393325831-29215-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393325831-29215-1-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.3.28i
Cc: daniel.kiper@oracle.com, Keir Fraser <keir@xen.org>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] common/kexec: Identify which cpu the
	kexec image is being executed on.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 10:57:11AM +0000, Andrew Cooper wrote:
> A patch to this effect has been in XenServer for a little while, and has
> proved to be a useful debugging point for servers which have different
> behaviours depending when crashing on the non-bootstrap processor.
>
> Moving the printk() from kexec_panic() to one_cpu_only() means that it will
> only be printed for the cpu which wins the race along the kexec path.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: David Vrabel <david.vrabel@citrix.com>

Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 13:33:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WII83-00046Z-3A; Tue, 25 Feb 2014 13:32:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WII81-00046U-Q3
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 13:32:57 +0000
Received: from [193.109.254.147:15540] by server-7.bemta-14.messagelabs.com id
	72/3E-23424-78B9C035; Tue, 25 Feb 2014 13:32:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393335173!1411056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16348 invoked from network); 25 Feb 2014 13:32:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 13:32:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103872229"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 13:32:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 08:32:52 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WII7w-0004tI-CR;
	Tue, 25 Feb 2014 13:32:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WII7v-0008Fv-0B;
	Tue, 25 Feb 2014 13:32:51 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21260.39809.593686.724414@mariner.uk.xensource.com>
Date: Tue, 25 Feb 2014 13:32:49 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <530C2779.20502@suse.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
	<CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
	<530C2779.20502@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec deadlock etc."):
> I'll update my test system to rc6 tomorrow and restart my tests.

Thanks.  To be honest, I'm very confident that the change between rc5
and rc6 won't break libvirt, because the changes to libxl are entirely
confined to libxl_postfork_child_noexec, which is not called by
libvirt.

> FYI, the tests were running over the weekend on rc5 + libvirt 1.2.2
> rc1.  Over 25,000 domains started, shutdown, created, saved, restored,
> etc. with no problems noted.

Great.  Thanks a lot for your testing and bug reports (and sorry for
the bugs).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 13:33:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WII83-00046Z-3A; Tue, 25 Feb 2014 13:32:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WII81-00046U-Q3
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 13:32:57 +0000
Received: from [193.109.254.147:15540] by server-7.bemta-14.messagelabs.com id
	72/3E-23424-78B9C035; Tue, 25 Feb 2014 13:32:55 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393335173!1411056!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16348 invoked from network); 25 Feb 2014 13:32:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 13:32:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="103872229"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 13:32:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 08:32:52 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WII7w-0004tI-CR;
	Tue, 25 Feb 2014 13:32:52 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WII7v-0008Fv-0B;
	Tue, 25 Feb 2014 13:32:51 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21260.39809.593686.724414@mariner.uk.xensource.com>
Date: Tue, 25 Feb 2014 13:32:49 +0000
To: Jim Fehlig <jfehlig@suse.com>
In-Reply-To: <530C2779.20502@suse.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>
	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>
	<530B62A2.3080901@eu.citrix.com>
	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>
	<CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
	<530C2779.20502@suse.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig writes ("Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec deadlock etc."):
> I'll update my test system to rc6 tomorrow and restart my tests.

Thanks.  To be honest, I'm very confident that the change between rc5
and rc6 won't break libvirt because the changes to libxl are entirely
confined to libxl_postfork_child_noexec which is not called by
libvirt.

> FYI, the tests were running over the weekend on rc5 + libvirt 1.2.2
> rc1.  Over 25,000 domains started, shutdown, created, saved, restored,
> etc. with no problems noted.

Great.  Thanks a lot for your testing and bug reports (and sorry for
the bugs).

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 13:36:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIIBN-0004DB-PI; Tue, 25 Feb 2014 13:36:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIIBM-0004D5-Af
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 13:36:24 +0000
Received: from [193.109.254.147:13375] by server-11.bemta-14.messagelabs.com
	id E4/08-24604-75C9C035; Tue, 25 Feb 2014 13:36:23 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393335382!6671331!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17769 invoked from network); 25 Feb 2014 13:36:23 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 13:36:23 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 13:36:22 +0000
Message-Id: <530CAA63020000780011F300@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 13:36:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
	<1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 1/5] xen/compiler: Replace opencoded
 __attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 13:23, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Changes in v4:
>  * Standardise on noreturn before the function name

Almost:

> --- a/xen/include/asm-arm/early_printk.h
> +++ b/xen/include/asm-arm/early_printk.h
> @@ -26,7 +26,7 @@
>  
>  void early_printk(const char *fmt, ...)
>      __attribute__((format (printf, 1, 2)));
> -void early_panic(const char *fmt, ...) __attribute__((noreturn))
> +void early_panic(const char *fmt, ...) noreturn
>      __attribute__((format (printf, 1, 2)));

Nevertheless, no need to re-submit afaic.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 13:39:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIIDo-0004LB-KF; Tue, 25 Feb 2014 13:38:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WIIDn-0004Kg-LL; Tue, 25 Feb 2014 13:38:55 +0000
Received: from [85.158.143.35:13959] by server-1.bemta-4.messagelabs.com id
	45/BD-31661-EEC9C035; Tue, 25 Feb 2014 13:38:54 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393335533!8144321!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25738 invoked from network); 25 Feb 2014 13:38:53 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 13:38:53 -0000
Received: by mail-wg0-f51.google.com with SMTP id a1so409326wgh.10
	for <multiple recipients>; Tue, 25 Feb 2014 05:38:53 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:content-type;
	bh=fVGCtSX+ZqWy2KgGF82KYXLUK8P0LSuk7J92T5HqbB8=;
	b=luZYsb1F+Nu6Wkc5PlD0+SajGzYe7Inx+sAGgS1bhnVNtd/9TXiAHoyH/rK6/Gb8XW
	dSjTJF0czpqZtdZDZ1BAE7d5+pQLtOBFyC+Sljc50uqGnsAeCv0EDJyCFY7mvJvZB1Pr
	drFBslxc0C0ODWccJR2JagAq1lYuTHdQ2Ad/k1pnqzeGrYrdbq5FMPyNgTmFOurY4D8R
	DSnKRI0fCaBgMhwEFGTdr+vMALzLCk2/ZWB0lG281mYP/mNc9S/EqqreajGC1gHOXfz5
	LTZkaRnLQsb+8EcnvPdC57Z+t/h3v9q4XM7+RpnqL8d3eJWYVIuaEhUur8LddvLBn8YC
	yv3g==
X-Received: by 10.194.71.116 with SMTP id t20mr2699924wju.51.1393335506220;
	Tue, 25 Feb 2014 05:38:26 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id fo6sm293464wib.7.2014.02.25.05.38.24
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 25 Feb 2014 05:38:25 -0800 (PST)
Message-ID: <530C9CD0.9070508@xen.org>
Date: Tue, 25 Feb 2014 13:38:24 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
Cc: Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: [Xen-devel] Dates and Location Xen Project Hackathon,
 Xen Project Developer Summit & Meeting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8229170172161860135=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============8229170172161860135==
Content-Type: multipart/alternative;
 boundary="------------040001070605010308090103"

This is a multi-part message in MIME format.
--------------040001070605010308090103
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

Before I go on holiday, just a quick note on the 2014 event schedule.
This only just came together, but you may want to look into getting
travel approvals.

== Xen Project Hackathon, London, late May (exact dates TBD) ==
Location: London, at Rackspace's offices
Dates: exact dates TBD, but we are targeting a Thu/Fri at the end of May
URL: http://wiki.xenproject.org/wiki/Hackathon/May2014

== Xen Project Developer Summit ==
Location: Chicago, IL, USA
Dates: August 18-19
CFP: open now, closes May 2, 2014 - see 
http://events.linuxfoundation.org/events/xen-project-developer-summit/program/cfp
URL: http://events.linuxfoundation.org/events/xen-project-developer-summit

I am looking for Program Management Committee members.

== Xen Project Developer Meeting ==
Location: Chicago, IL, USA
Dates: August 20
More details to follow. We have the option to do a half day, a full day,
or a half day with a hacker space.

Best Regards
Lars
P.S.: Apologies to those who were looking for a Hackathon in Amsterdam (-:

--------------040001070605010308090103
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Hi all,<br>
    <br>
    Before I go on holiday, just a quick note on the 2014 event
    schedule. This only just came together, but you may want to look
    into getting travel approvals.<br>
    <br>
    == Xen Project Hackathon, London, late May (exact dates TBD) ==<br>
    Location: London, at Rackspace's offices<br>
    Dates: exact dates TBD, but we are targeting a Thu/Fri at the end
    of May<br>
    URL: <a class="moz-txt-link-freetext" href="http://wiki.xenproject.org/wiki/Hackathon/May2014">http://wiki.xenproject.org/wiki/Hackathon/May2014</a><br>
    <br>
    == Xen Project Developer Summit == <br>
    Location: Chicago, IL, USA<br>
    Dates: August 18-19<br>
    CFP: open now, closes <strong><em>May 2, 2014</em></strong> - see
<a class="moz-txt-link-freetext" href="http://events.linuxfoundation.org/events/xen-project-developer-summit/program/cfp">http://events.linuxfoundation.org/events/xen-project-developer-summit/program/cfp</a><br>
    URL:
    <a class="moz-txt-link-freetext" href="http://events.linuxfoundation.org/events/xen-project-developer-summit">http://events.linuxfoundation.org/events/xen-project-developer-summit</a><br>
    <br>
    I am looking for Program Management Committee members.<br>
    <br>
    == Xen Project Developer Meeting == <br>
    Location: Chicago, IL, USA<br>
    Dates: August 20<br>
    More details to follow. We have the option to do a half day, a full
    day, or a half day with a hacker space<br>
    <br>
    Best Regards<br>
    Lars<br>
    P.S.: Apologies to those who were looking for a Hackathon in
    Amsterdam (-:<br>
  </body>
</html>

--------------040001070605010308090103--


--===============8229170172161860135==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8229170172161860135==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 13:40:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:40:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIIFL-0004Uy-2T; Tue, 25 Feb 2014 13:40:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIIFJ-0004Uq-OV
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 13:40:29 +0000
Received: from [193.109.254.147:59529] by server-13.bemta-14.messagelabs.com
	id 8F/93-01226-D4D9C035; Tue, 25 Feb 2014 13:40:29 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393335628!6684991!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11979 invoked from network); 25 Feb 2014 13:40:28 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 25 Feb 2014 13:40:28 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Tue, 25 Feb 2014 13:40:28 +0000
Message-Id: <530CAB5A020000780011F311@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Tue, 25 Feb 2014 13:40:26 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 0/5] Improvements with noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 13:23, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Make better use of noreturn.  It allows optimising compilers to produce more
> efficient code.
> 
> Each patch is compile tested on each architecture, and the result is
> functionally tested on x86 and compile tested on GCC 4.1.1 to verify that
> older compilers are happy.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 13:48:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 13:48:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIINH-0004ks-89; Tue, 25 Feb 2014 13:48:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIINF-0004kn-R1
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 13:48:42 +0000
Received: from [85.158.143.35:54078] by server-2.bemta-4.messagelabs.com id
	41/B3-10891-93F9C035; Tue, 25 Feb 2014 13:48:41 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393336119!8171468!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20950 invoked from network); 25 Feb 2014 13:48:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 13:48:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,540,1389744000"; d="scan'208";a="105523358"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 13:48:38 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 08:48:38 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIINB-00085L-P2;
	Tue, 25 Feb 2014 13:48:37 +0000
Message-ID: <530C9F35.2020101@citrix.com>
Date: Tue, 25 Feb 2014 13:48:37 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
	<1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
	<530CAA63020000780011F300@nat28.tlf.novell.com>
In-Reply-To: <530CAA63020000780011F300@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 1/5] xen/compiler: Replace opencoded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 25/02/14 13:36, Jan Beulich wrote:
>>>> On 25.02.14 at 13:23, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> Changes in v4:
>>  * Standardise on noreturn before the function name
> Almost:
>
>> --- a/xen/include/asm-arm/early_printk.h
>> +++ b/xen/include/asm-arm/early_printk.h
>> @@ -26,7 +26,7 @@
>>  
>>  void early_printk(const char *fmt, ...)
>>      __attribute__((format (printf, 1, 2)));
>> -void early_panic(const char *fmt, ...) __attribute__((noreturn))
>> +void early_panic(const char *fmt, ...) noreturn
>>      __attribute__((format (printf, 1, 2)));
> Nevertheless, no need to re-submit afaic.
>
> Jan
>

Almost!  But as Julien's patch removes early_panic(), this is probably
safe to leave (and would prevent him needing to respin the series)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 14:27:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 14:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIIya-00052W-O3; Tue, 25 Feb 2014 14:27:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <viktor.kleinik@globallogic.com>) id 1WIIyX-00052R-Rw
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 14:27:14 +0000
Received: from [85.158.143.35:60401] by server-3.bemta-4.messagelabs.com id
	50/4E-11539-148AC035; Tue, 25 Feb 2014 14:27:13 +0000
X-Env-Sender: viktor.kleinik@globallogic.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393338429!8161204!1
X-Originating-IP: [64.18.0.26]
X-SpamReason: No, hits=1.6 required=7.0 tests=HOT_NASTY,HTML_50_60,
	HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30843 invoked from network); 25 Feb 2014 14:27:11 -0000
Received: from exprod5og113.obsmtp.com (HELO exprod5og113.obsmtp.com)
	(64.18.0.26)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 14:27:11 -0000
Received: from mail-wi0-f171.google.com ([209.85.212.171]) (using TLSv1) by
	exprod5ob113.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUwyoPUP6bExOxNQRPMni7YDfaIdTrqT5@postini.com;
	Tue, 25 Feb 2014 06:27:11 PST
Received: by mail-wi0-f171.google.com with SMTP id cc10so4550311wib.4
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 06:27:08 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=qa4jTT6aZWtQM8MEZ0ETL3j4TAie420sPf4znGjtbYo=;
	b=eLMU+4hXeltm2YmyKJhjf8K6lTaF3axm4f3xfrSaAUanWtyH/BPGZcLvI3b+BDRwZ6
	UayMNQf1d1e/cJ/pFcWeDAt16cDZ+6ldTuf7is+MCs4ejtHA8yZD0oRoMQdPEMIkZ0fs
	BVQP3e0jf/mzjHZNzAHbG+AxEN/gm+vydtVSQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=qa4jTT6aZWtQM8MEZ0ETL3j4TAie420sPf4znGjtbYo=;
	b=m4gkzY2XgpgjfuIlJ+zTcpp1T1UmjEjWdr6bwqI7m5dawICx9BnZ/CXr1IdevOq8xR
	Nq88nRRD/CWLWgxideM34UueCA7XPTXF+aTFZxQreFYtRVUUt4Z9LhzaAX0WQiFZwniS
	ePlJMO7KBJ2JindzYzs4VKvHdPCnVTC5mMHQ1/TQ2K5st8cGew05pbsZxwlbchfH+oaK
	iLG3I6PP3G7q3KGP4cY6aQMUP70X6nEq9zMl41mk9BO0uaLEmPkVaSa4pjs3VCYuOdKm
	d26+yZalXqhMHeA7Gdc0tcV9pcYvigs/wk41MC7LDz6j/iKigid5SjD9njjUmVIcgv+H
	SqtQ==
X-Gm-Message-State: ALoCoQlD4KHAAljF+0XGAHr/1XQS9IN9fmgkzNfwamjobVLJtmZzjPBjDi12e5DAb9nSEN27vUTf/A5m3RtAYAcYT635Bh4wlWQrmz4w2FmZ3bJI250zBES0CoKzR97jWKgs53lXLdkWmDCnvEKWcRpxsKhWqgXVa3dwk6U89kfS6joT5j/DUYs=
X-Received: by 10.180.187.16 with SMTP id fo16mr297679wic.26.1393338428026;
	Tue, 25 Feb 2014 06:27:08 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.180.187.16 with SMTP id fo16mr297670wic.26.1393338427848;
	Tue, 25 Feb 2014 06:27:07 -0800 (PST)
Received: by 10.216.174.202 with HTTP; Tue, 25 Feb 2014 06:27:07 -0800 (PST)
In-Reply-To: <1393259051.16570.101.camel@kazak.uk.xensource.com>
References: <CAM=aOxghm++oVRMoJDB7J+CH14jpNJTdynDwQ7ffWgR1x0+4sQ@mail.gmail.com>
	<1393259051.16570.101.camel@kazak.uk.xensource.com>
Date: Tue, 25 Feb 2014 14:27:07 +0000
Message-ID: <CAM=aOxgvuyfsT8ySbqEA_3bDM+Z_K_QDcHRCVqdDDvtNyvdVYA@mail.gmail.com>
From: Viktor Kleinik <viktor.kleinik@globallogic.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Domain configuration for DomU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4843704183487294318=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4843704183487294318==
Content-Type: multipart/alternative; boundary=001a11c266c40b538d04f33be2f0

--001a11c266c40b538d04f33be2f0
Content-Type: text/plain; charset=ISO-8859-1

Ian,

Thank you for your response.

> I'm not sure how easy it would be to arrange it anyway -- the thing
> which knows about the kernel/appending is a different library to
> what creates the dtb.

We will try to implement some alert mechanism between those
two things.

Regards,
Victor


On Mon, Feb 24, 2014 at 4:24 PM, Ian Campbell <Ian.Campbell@citrix.com>wrote:

> On Mon, 2014-02-24 at 15:55 +0000, Viktor Kleinik wrote:
> > We are using Linux as Dom0 and Android as DomU on OMAP5 processors
> > family. Xen version is 4.4-rc2. We have our own devtree configuration
> file for
> > DomU. In our case device tree binary is appended to zImage by using
> > CONFIG_ARM_APPENDED_DTB option in kernel config for DomU. As I
> > understood, Xen can determine that there is appended to zImage DTB for
> > DomU. But it tries to generate its own devtree configuration file via
> > libxl__arch_domain_configure() function call thru libxl__build_pv()
> function in
> > tools/libxl/libxl_dom.c file. Which is dummy work in that case.
>
> This is somewhat similar to real hardware too -- the appended DTB would
> override anything passed from the bootloader.
>
> > In the same time we have some platform specific configuration which is
> different from
> >
> > generic ARM configuration that Xen tries to generate for us. Why Xen
> tries
> > to generate its own devtree even if appended to zImage DTB can be
> > detected?
>
> I think just nobody thought to stop it doing so. I'm not sure how easy
> it would be to arrange it anyway -- the thing which knows about the
> kernel/appending is a different library to what creates the dtb.
>
> >  Should we expect some modifications in domain configuration
> > functions for those cases when DomU requires more platform specific
> > configuration? Please share your thoughts.
>
> I think as features such as device passthrough get wired up properly in
> the toolstack then the generated DTB should reflect this, if that is
> what you are asking.
>
> Appending a DTB really ought to become the exception not the rule.
>
> Ian.
>
>
>
>
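
For context, the CONFIG_ARM_APPENDED_DTB workflow Viktor describes amounts to concatenating the compiled DTB onto the end of the zImage. The file names below are illustrative, and the stand-in files exist only to keep the sketch self-contained:

```shell
# Stand-in inputs so the sketch runs anywhere; in practice these are
# the real compressed kernel and the compiled device tree blob.
printf 'zImage-bytes' > zImage
printf 'dtb-bytes' > myboard.dtb

# With CONFIG_ARM_APPENDED_DTB=y the kernel's decompressor looks for a
# DTB placed immediately after the zImage, so the bootable image is
# produced by simple concatenation:
cat zImage myboard.dtb > zImage-dtb
```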


-- 

Name | Title
GlobalLogic
P +x.xxx.xxx.xxxx  M +x.xxx.xxx.xxxx  S skype
www.globallogic.com
<http://www.globallogic.com/>
http://www.globallogic.com/email_disclaimer.txt

--001a11c266c40b538d04f33be2f0--


--===============4843704183487294318==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4843704183487294318==--


;text-decoration:underline;vertical-align:baseline">http://www.globallogic.=
com/email_disclaimer.txt</span></a><span style=3D"vertical-align:baseline;f=
ont-variant:normal;font-style:normal;font-size:11px;background-color:transp=
arent;text-decoration:none;font-family:Arial;font-weight:normal"></span></f=
ont>
</div>

--001a11c266c40b538d04f33be2f0--


--===============4843704183487294318==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4843704183487294318==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 14:53:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 14:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIJNq-0005IQ-0C; Tue, 25 Feb 2014 14:53:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WIJNo-0005IL-Pl
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 14:53:21 +0000
Received: from [85.158.143.35:35022] by server-3.bemta-4.messagelabs.com id
	4C/24-11539-F5EAC035; Tue, 25 Feb 2014 14:53:19 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393339997!8192072!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24445 invoked from network); 25 Feb 2014 14:53:19 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 14:53:19 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1PErE2Q019160
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 25 Feb 2014 14:53:14 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1PErB2r003669
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 25 Feb 2014 14:53:12 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1PErAVF001726; Tue, 25 Feb 2014 14:53:11 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 25 Feb 2014 06:53:10 -0800
Message-ID: <530CAEC5.6090304@oracle.com>
Date: Tue, 25 Feb 2014 09:55:01 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1393326359-29845-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393326359-29845-1-git-send-email-andrew.cooper3@citrix.com>
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Keir Fraser <keir@xen.org>,
	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
	Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/cpu: Store extended cpuid level in
	cpuinfo_x86
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/25/2014 06:05 AM, Andrew Cooper wrote:
> To save finding it repeatedly with cpuid instructions.  The name
> "extended_cpuid_level" is chosen to match Linux.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
> ---
>   xen/arch/x86/cpu/amd.c                  |    4 ++--
>   xen/arch/x86/cpu/common.c               |   22 ++++++++++------------
>   xen/arch/x86/hvm/svm/svm.c              |    2 +-
>   xen/arch/x86/oprofile/op_model_athlon.c |    5 +----
>   xen/include/asm-x86/processor.h         |    1 +
>   5 files changed, 15 insertions(+), 19 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
> index 44087fa..904ad2e 100644
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -419,11 +419,11 @@ static void __devinit init_amd(struct cpuinfo_x86 *c)
>   
>   	display_cacheinfo(c);
>   
> -	if (cpuid_eax(0x80000000) >= 0x80000008) {
> +	if (c->extended_cpuid_level >= 0x80000008) {
>   		c->x86_max_cores = (cpuid_ecx(0x80000008) & 0xff) + 1;
>   	}
>   
> -	if (cpuid_eax(0x80000000) >= 0x80000007) {
> +	if (c->extended_cpuid_level >= 0x80000007) {
>   		c->x86_power = cpuid_edx(0x80000007);
>   		if (c->x86_power & (1<<8)) {
>   			set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
> diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
> index 32ca458..2c128e5 100644
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -74,7 +74,7 @@ int __cpuinit get_model_name(struct cpuinfo_x86 *c)
>   	unsigned int *v;
>   	char *p, *q;
>   
> -	if (cpuid_eax(0x80000000) < 0x80000004)
> +	if (c->extended_cpuid_level < 0x80000004)
>   		return 0;
>   
>   	v = (unsigned int *) c->x86_model_id;
> @@ -101,11 +101,9 @@ int __cpuinit get_model_name(struct cpuinfo_x86 *c)
>   
>   void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
>   {
> -	unsigned int n, dummy, ecx, edx, l2size;
> +	unsigned int dummy, ecx, edx, l2size;
>   
> -	n = cpuid_eax(0x80000000);
> -
> -	if (n >= 0x80000005) {
> +	if (c->extended_cpuid_level >= 0x80000005) {
>   		cpuid(0x80000005, &dummy, &dummy, &ecx, &edx);
>   		if (opt_cpu_info)
>   			printk("CPU: L1 I cache %dK (%d bytes/line),"
> @@ -114,7 +112,7 @@ void __cpuinit display_cacheinfo(struct cpuinfo_x86 *c)
>   		c->x86_cache_size=(ecx>>24)+(edx>>24);	
>   	}
>   
> -	if (n < 0x80000006)	/* Some chips just has a large L1. */
> +	if (c->extended_cpuid_level < 0x80000006)	/* Some chips just has a large L1. */
>   		return;
>   
>   	ecx = cpuid_ecx(0x80000006);
> @@ -195,7 +193,7 @@ static void __init early_cpu_detect(void)
>   
>   static void __cpuinit generic_identify(struct cpuinfo_x86 *c)
>   {
> -	u32 tfms, xlvl, capability, excap, ebx;
> +	u32 tfms, capability, excap, ebx;
>   
>   	/* Get vendor name */
>   	cpuid(0x00000000, &c->cpuid_level,
> @@ -222,15 +220,15 @@ static void __cpuinit generic_identify(struct cpuinfo_x86 *c)
>   		c->x86_clflush_size = ((ebx >> 8) & 0xff) * 8;
>   
>   	/* AMD-defined flags: level 0x80000001 */
> -	xlvl = cpuid_eax(0x80000000);
> -	if ( (xlvl & 0xffff0000) == 0x80000000 ) {
> -		if ( xlvl >= 0x80000001 ) {
> +	c->extended_cpuid_level = cpuid_eax(0x80000000);


This should probably be moved out from under the AMD-specific comment, since it is 
common with Intel. (And Linux apparently makes the same mistake.)

I also see a couple more instances that might be replaced: 
efi_start() and mtrr_top_of_ram() (although the latter is probably more 
trouble than it's worth).

-boris


> +	if ( (c->extended_cpuid_level & 0xffff0000) == 0x80000000 ) {
> +		if ( c->extended_cpuid_level >= 0x80000001 ) {
>   			c->x86_capability[1] = cpuid_edx(0x80000001);
>   			c->x86_capability[6] = cpuid_ecx(0x80000001);
>   		}
> -		if ( xlvl >= 0x80000004 )
> +		if ( c->extended_cpuid_level >= 0x80000004 )
>   			get_model_name(c); /* Default name */
> -		if ( xlvl >= 0x80000008 )
> +		if ( c->extended_cpuid_level >= 0x80000008 )
>   			paddr_bits = cpuid_eax(0x80000008) & 0xff;
>   	}
>   
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 406d394..61b1ec8 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1251,7 +1251,7 @@ const struct hvm_function_table * __init start_svm(void)
>   
>       setup_vmcb_dump();
>   
> -    svm_feature_flags = ((cpuid_eax(0x80000000) >= 0x8000000A) ?
> +    svm_feature_flags = (current_cpu_data.extended_cpuid_level >= 0x8000000A ?
>                            cpuid_edx(0x8000000A) : 0);
>   
>       printk("SVM: Supported advanced features:\n");
> diff --git a/xen/arch/x86/oprofile/op_model_athlon.c b/xen/arch/x86/oprofile/op_model_athlon.c
> index e784018..ad84768 100644
> --- a/xen/arch/x86/oprofile/op_model_athlon.c
> +++ b/xen/arch/x86/oprofile/op_model_athlon.c
> @@ -497,14 +497,11 @@ static int __init init_ibs_nmi(void)
>   
>   static void __init get_ibs_caps(void)
>   {
> -	unsigned int max_level;
> -
>   	if (!boot_cpu_has(X86_FEATURE_IBS))
>   		return;
>   
>       /* check IBS cpuid feature flags */
> -	max_level = cpuid_eax(0x80000000);
> -	if (max_level >= IBS_CPUID_FEATURES)
> +	if (current_cpu_data.extended_cpuid_level >= IBS_CPUID_FEATURES)
>   		ibs_caps = cpuid_eax(IBS_CPUID_FEATURES);
>   	if (!(ibs_caps & IBS_CAPS_AVAIL))
>   		/* cpuid flags not valid */
> diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
> index c120460..f1cccff 100644
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -162,6 +162,7 @@ struct cpuinfo_x86 {
>       __u8 x86_model;
>       __u8 x86_mask;
>       int  cpuid_level;    /* Maximum supported CPUID level, -1=no CPUID */
> +    __u32 extended_cpuid_level; /* Maximum supported CPUID extended level */
>       unsigned int x86_capability[NCAPINTS];
>       char x86_vendor_id[16];
>       char x86_model_id[64];


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 15:50:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 15:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIKGp-0005cs-If; Tue, 25 Feb 2014 15:50:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vdham@uva.nl>) id 1WIK0e-0005aJ-43
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 15:33:28 +0000
Received: from [193.109.254.147:35771] by server-16.bemta-14.messagelabs.com
	id D4/1D-21945-7C7BC035; Tue, 25 Feb 2014 15:33:27 +0000
X-Env-Sender: vdham@uva.nl
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393342406!1458601!1
X-Originating-IP: [94.142.246.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21795 invoked from network); 25 Feb 2014 15:33:26 -0000
Received: from positron.dckd.nl (HELO positron.dckd.nl) (94.142.246.99)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 15:33:26 -0000
Received: from wcw-staff-145-18-171-175.wireless.uva.nl
	(wcw-staff-145-18-171-175.wireless.uva.nl [145.18.171.175])
	(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by positron.dckd.nl (Postfix) with ESMTPSA id B0A48F80BF
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 16:33:25 +0100 (CET)
From: Jeroen van der Ham <vdham@uva.nl>
Message-Id: <5F9A4C4C-1797-4E4B-9C33-C01CCB59B78D@uva.nl>
Date: Tue, 25 Feb 2014 16:33:25 +0100
To: xen-devel <xen-devel@lists.xen.org>
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
X-Mailman-Approved-At: Tue, 25 Feb 2014 15:50:09 +0000
Subject: [Xen-devel] [BUG] xen-create-image memory vs config file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I'm trying to use xen-create-image to create an installed Xen machine. If I do the following:

# sudo xen-create-image --hostname=foobar --memory=2048 --swap=1024M --size=10G --lvm=VolumeGroupXen --dist=wheezy --fs=ext3 --mirror=http://ftp.nl.debian.org/debian --genpass=1 --vcpus=2
ERROR: 'memory' argument takes a suffixed integer.

This does not work, because the memory argument lacks the required size suffix. So I adapt it:

# sudo xen-create-image --hostname=foobar --memory=2048M --swap=1024M --size=10G --lvm=VolumeGroupXen --dist=wheezy --fs=ext3 --mirror=http://ftp.nl.debian.org/debian --genpass=1 --vcpus=2

This installs the machine successfully. If I then continue:

# sudo xl create /etc/xen/foobar.cfg
Parsing config from /etc/xen/foobar.cfg
/etc/xen/foobar.cfg:13: warning: parameter `memory' is not a valid number

This means I have to edit the configuration file and remove the "M" from the memory number.

It seems to me that xen-create-image and xl need to agree on how memory sizes are specified.

Regards,
Jeroen.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Tue Feb 25 16:07:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 16:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIKXQ-0006Ck-UH; Tue, 25 Feb 2014 16:07:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WIKXP-0006CM-1s; Tue, 25 Feb 2014 16:07:19 +0000
Received: from [85.158.137.68:11349] by server-9.bemta-3.messagelabs.com id
	F7/CB-10184-5BFBC035; Tue, 25 Feb 2014 16:07:17 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393344435!2879688!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19191 invoked from network); 25 Feb 2014 16:07:16 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 16:07:16 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so578259wes.37
	for <multiple recipients>; Tue, 25 Feb 2014 08:07:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:content-type;
	bh=T77tpPpQD/gB2hLfPb1wFw3Ytmfqk8qRwBttDBh3y10=;
	b=RbN8uUEqE45VUWhWKm7ibuywzOE/+sWGJNSdv8SWhRtKFvkYjO6+Nqv3D5KPvm6pgh
	vEi9RNBo7SCkp2XiuZag6gu3MCS9PoO66VWzKT7GEM5Z4OSZH7DchRmiXmXa3QlpIWX4
	ZvYKthCk+XhW9izJgHvdDkGgSazdHXsLtvtB7jpYuAAsdKLd9+SJx3MeuB5vPkXqZVRM
	Pqh0mLmqFNgz87rpShd1B0cFrPpxgPO0NrKSp4w+1qB3bBw7zeBdkuqiiLY7h+wdOrAQ
	epgTo0spI/cFLk2nxICAADsE1TKAd26jQokzKUVinb/MQ3QE9jsSw7Kfx0GRyD6rs8iE
	/eVg==
X-Received: by 10.194.90.177 with SMTP id bx17mr764921wjb.91.1393344435616;
	Tue, 25 Feb 2014 08:07:15 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id r1sm1410116wia.5.2014.02.25.08.07.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 25 Feb 2014 08:07:14 -0800 (PST)
Message-ID: <530CBFB1.5040907@xen.org>
Date: Tue, 25 Feb 2014 16:07:13 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
Cc: Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: [Xen-devel] Next steps for GSoC and OPW mentors
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6203124538202805123=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============6203124538202805123==
Content-Type: multipart/alternative;
 boundary="------------040201090704040905090500"

This is a multi-part message in MIME format.
--------------040201090704040905090500
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

you may have seen the announcement that the Xen Project is participating 
in Google Summer of Code, and that we are also participating in round 8 
of the Outreach Program for Women. The portals for both programs are here:
* http://wiki.xenproject.org/wiki/GSoc_2014
* http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8

Both programs have a very similar structure; the main differences are 
the application deadlines and forms. As I will be on holiday in the 
coming weeks, Russell volunteered to help coordinate activities related 
to both programs. He also has administrator rights for the Google GSoC 
system.

The other main difference is that OPW requires applicants to make a code 
contribution *before* applying. This means working with a mentor and 
referencing the contribution in the application. The contribution can be 
a bug fix, code refactoring, a test case, etc., and is designed to get 
interns familiar with the tooling and culture of the project.

Although we do not require GSoC students to submit code before they 
apply, we *strongly encourage* them to do so, and the application form is 
designed to encourage references to a student's first contributions 
(bugs, wiki, etc.) and public communications on mailing lists. See 
http://wiki.xenproject.org/wiki/GSoc_2014#Finding_a_project_that_fits_you

== Program Management Committee for Mentors ==
To manage both programs, I propose that we set up a committee made up of 
3-4 experienced GSoC and OPW mentors and former GSoC students to sanity 
check and review project proposals and provide advice to other mentors. 
Better project proposals = happier mentors = happier students/interns = 
a better outcome for the Xen Project and the students.

Russell volunteered to reach out to candidates for the committee and to 
organize the first meetings (ideally sometime next week). Once we have 
all applications in and mentors signed up, the committee would become a 
regular meeting for all mentors across GSoC and OPW who are mentoring 
students. We also need to decide which students to accept for both 
programs by April 18.

For mentors: please make sure that you take on only one intern or 
student. Mentoring more than one is too hard.

== OPW ==
Intern application deadline: March 18th
Close of Intern Selection: April 18
Accepted participants announced: April 21
Internship Period: May 19 - August 18

Request to mentors:
* please update your projects on 
http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8
* please hang out on the #xen-opw channel in the coming weeks, if you 
can, and engage with interns
* please give interns small, achievable tasks (on a public list) that 
help us identify whether the intern can complete a project, prior to 
the application. This can be a bug fix, code refactoring, writing a test 
case, etc., and is designed to get interns familiar with the tooling and 
culture of the project.

== Google Summer of Code ==
Student application deadline: March 21st
Close of Student Selection: April 18
Accepted participants announced: April 21
Coding Period: May 19 - August 18

Request to mentors:
* please update your projects on http://wiki.xenproject.org/wiki/GSoc_2014
* please sign up on the GSoC website, i.e.
** create an account on 
https://www.google-melange.com/gsoc/homepage/google/gsoc2014
** sign in
** fill out 
https://www.google-melange.com/gsoc/connection/pick/google/gsoc2014 - 
"choose Xen Project"
* please hang out on the #xen-opw channel in the coming weeks, if you 
can, and engage with students
* please give students small, achievable tasks (on a public list) that 
help us identify whether the student can complete a project, prior to 
the application. This can be a bug fix, code refactoring, writing a test 
case, etc., and is designed to get students familiar with the tooling 
and culture of the project.

Best Regards
Lars

--------------040201090704040905090500--


--===============6203124538202805123==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6203124538202805123==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 16:07:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 16:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIKXQ-0006Ck-UH; Tue, 25 Feb 2014 16:07:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WIKXP-0006CM-1s; Tue, 25 Feb 2014 16:07:19 +0000
Received: from [85.158.137.68:11349] by server-9.bemta-3.messagelabs.com id
	F7/CB-10184-5BFBC035; Tue, 25 Feb 2014 16:07:17 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393344435!2879688!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19191 invoked from network); 25 Feb 2014 16:07:16 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 16:07:16 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so578259wes.37
	for <multiple recipients>; Tue, 25 Feb 2014 08:07:15 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:content-type;
	bh=T77tpPpQD/gB2hLfPb1wFw3Ytmfqk8qRwBttDBh3y10=;
	b=RbN8uUEqE45VUWhWKm7ibuywzOE/+sWGJNSdv8SWhRtKFvkYjO6+Nqv3D5KPvm6pgh
	vEi9RNBo7SCkp2XiuZag6gu3MCS9PoO66VWzKT7GEM5Z4OSZH7DchRmiXmXa3QlpIWX4
	ZvYKthCk+XhW9izJgHvdDkGgSazdHXsLtvtB7jpYuAAsdKLd9+SJx3MeuB5vPkXqZVRM
	Pqh0mLmqFNgz87rpShd1B0cFrPpxgPO0NrKSp4w+1qB3bBw7zeBdkuqiiLY7h+wdOrAQ
	epgTo0spI/cFLk2nxICAADsE1TKAd26jQokzKUVinb/MQ3QE9jsSw7Kfx0GRyD6rs8iE
	/eVg==
X-Received: by 10.194.90.177 with SMTP id bx17mr764921wjb.91.1393344435616;
	Tue, 25 Feb 2014 08:07:15 -0800 (PST)
Received: from [172.16.25.10] ([2.122.219.75])
	by mx.google.com with ESMTPSA id r1sm1410116wia.5.2014.02.25.08.07.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 25 Feb 2014 08:07:14 -0800 (PST)
Message-ID: <530CBFB1.5040907@xen.org>
Date: Tue, 25 Feb 2014 16:07:13 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
Cc: Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: [Xen-devel] Next steps for GSoC and OPW mentors
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6203124538202805123=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============6203124538202805123==
Content-Type: multipart/alternative;
 boundary="------------040201090704040905090500"

This is a multi-part message in MIME format.
--------------040201090704040905090500
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

you may have seen the announcement that the Xen Project and that we are 
also participating in round 8 of the Outreach Program for Women. The 
portals for both programs are here:
* http://wiki.xenproject.org/wiki/GSoc_2014
* http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8

Both programs have a very similar structure: the main difference are 
differences in application deadlines and forms. As I will be on holidays 
in the coming weeks, Russell volunteered to help coordinate activities 
related to both programs. He also has administrator rights for the 
Google GSoC system

The other main difference is that OPW requires applicants to make a code 
contribution *before* the application. This requires working with the 
mentor and a reference to the contribution in the application. This can 
be a bug fix, code refactoring, writing a test case, etc. and is 
designed to get interns familiar with the tooling and culture in the 
project.

Although we do not require GSoC students to submit code before they 
apply, we *strongly encourage* them to do so and the application form is 
designed to encourage references to a students first contributions 
(bugs, wiki, etc.) and public communications on mailing lists. See 
http://wiki.xenproject.org/wiki/GSoc_2014#Finding_a_project_that_fits_you

== Program Mangement Committee for Mentors ==
To manage both programs, I propose that we set up a Commitee made up of 
3-4 experienced GSoc and OPW mentors and former GSoC students to sanity 
check and review project proposals and provide advice to other mentors. 
Better the project proposals = happier mentors = happier 
students/interns = better outcome for the Xen Project and students.

Russell volunteered to reach out to candidates for the committee and to 
organize the first meetings (ideally sometimes next week). Once we have 
all applications in and mentors signed up, the committe would become a 
regular meeting for all mentors across GSoC and OPW who are mentoring 
students. We also need to decide which students to accept for both 
programs by April 18.

For mentors: please make sure that you only take on one intern or 
student. Mentoring more than one student is too hard.

== OPW ==
Intern application deadline: March 18th
Close of Intern Selection: April 18
Accepted participants announced: April 21
Internship Period: May 19 - August 18

Request to mentors:
* please update your projects on 
http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8
* please hang out on the #xen-opw channel in the coming weeks, if you 
can and engage with interns
* please give interns small achievable tasks (on a public list) that 
help us identify whether the intern can can complete a project, prior to 
the application. This can be a bug fix, code refactoring, writing a test 
case, etc. and is designed to get interns familiar with the tooling and 
culture in the project.

== Google Summer of Code ==
Student application deadline: March 21th
Close of Student Selection: April 18
Accepted participants announced: April 21
Coding Period: May 19 - August 18

Request to mentors:
* please update your projects on http://wiki.xenproject.org/wiki/GSoc_2014
* please sign up to the GSoC website, aka
** create an account on 
https://www.google-melange.com/gsoc/homepage/google/gsoc2014
** sign in
** fill out 
https://www.google-melange.com/gsoc/connection/pick/google/gsoc2014 - 
"chose Xen Project"
* please hang out on the #xen-opw channel in the coming weeks, if you 
can and engage with students
* please give students small achievable tasks (on a public list) that 
help us identify whether the student can complete a project, prior to 
the application. This can be a bug fix, code refactoring, writing a test 
case, etc. and is designed to get students familiar with the tooling and 
culture in the project.

Best Regards
Lars

--------------040201090704040905090500
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    Hi all,<br>
    <br>
    you may have seen the announcement that the Xen Project and that we
    are also participating in round 8 of the Outreach Program for Women.
    The portals for both programs are here:<br>
    * <a class="moz-txt-link-freetext" href="http://wiki.xenproject.org/wiki/GSoc_2014">http://wiki.xenproject.org/wiki/GSoc_2014</a><br>
    * <a class="moz-txt-link-freetext" href="http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8">http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8</a><br>
    <br>
    Both programs have a very similar structure: the main difference are
    differences in application deadlines and forms. As I will be on
    holidays in the coming weeks, Russell volunteered to help coordinate
    activities related to both programs. He also has administrator
    rights for the Google GSoC system<br>
    <br>
    The other main difference is that OPW requires applicants to make a
    code contribution *before* the application. This requires working
    with the mentor and a reference to the contribution in the
    application. This can be a bug fix, code refactoring, writing a test
    case, etc. and is designed to get interns familiar with the tooling
    and culture in the project.<br>
    <br>
    Although we do not require GSoC students to submit code before they
    apply, we *strongly encourage* them to do so and the application
    form is designed to encourage references to a students first
    contributions (bugs, wiki, etc.) and public communications on
    mailing lists. See
    <a class="moz-txt-link-freetext" href="http://wiki.xenproject.org/wiki/GSoc_2014#Finding_a_project_that_fits_you">http://wiki.xenproject.org/wiki/GSoc_2014#Finding_a_project_that_fits_you</a>
    <br>
    <br>
    == Program Mangement Committee for Mentors ==<br>
    To manage both programs, I propose that we set up a Commitee made up
    of 3-4 experienced GSoc and OPW mentors and former GSoC students to
    sanity check and review project proposals and provide advice to
    other mentors. Better the project proposals = happier mentors =
    happier students/interns = better outcome for the Xen Project and
    students. <br>
    <br>
    Russell volunteered to reach out to candidates for the committee and
    to organize the first meetings (ideally sometimes next week). Once
    we have all applications in and mentors signed up, the committe
    would become a regular meeting for all mentors across GSoC and OPW
    who are mentoring students. We also need to decide which students to
    accept for both programs by April 18.<br>
    <br>
    For mentors: please make sure that you only take on one intern or
    student. Mentoring more than one student is too hard.<br>
    <br>
    == OPW ==<br>
    Intern application deadline: March 18th<br>
    Close of Intern Selection: <span style="color: rgb(0, 0, 0);
      font-family: Cantarell, sans-serif; font-size: medium; font-style:
      normal; font-variant: normal; font-weight: normal; letter-spacing:
      normal; line-height: normal; orphans: auto; text-align: left;
      text-indent: 0px; text-transform: none; white-space: normal;
      widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;
      background-color: rgb(255, 255, 255); display: inline !important;
      float: none;">April 18</span><br>
    <span style="color: rgb(0, 0, 0); font-family: Cantarell,
      sans-serif; font-size: medium; font-style: normal; font-variant:
      normal; font-weight: normal; letter-spacing: normal; line-height:
      normal; orphans: auto; text-align: left; text-indent: 0px;
      text-transform: none; white-space: normal; widows: auto;
      word-spacing: 0px; -webkit-text-stroke-width: 0px;
      background-color: rgb(255, 255, 255); display: inline !important;
      float: none;"><span style="color: rgb(0, 0, 0); font-family:
        Cantarell, sans-serif; font-size: medium; font-style: normal;
        font-variant: normal; font-weight: normal; letter-spacing:
        normal; line-height: normal; orphans: auto; text-align: left;
        text-indent: 0px; text-transform: none; white-space: normal;
        widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;
        background-color: rgb(255, 255, 255); display: inline
        !important; float: none;">Accepted participants announced<span
          class="Apple-converted-space"> : </span></span>April 21</span><br>
    <span style="color: rgb(0, 0, 0); font-family: Cantarell,
      sans-serif; font-size: medium; font-style: normal; font-variant:
      normal; font-weight: normal; letter-spacing: normal; line-height:
      normal; orphans: auto; text-align: left; text-indent: 0px;
      text-transform: none; white-space: normal; widows: auto;
      word-spacing: 0px; -webkit-text-stroke-width: 0px;
      background-color: rgb(255, 255, 255); display: inline !important;
      float: none;">Internship Period: May 19 - August 18<br>
      <br>
    </span>Request to mentors: <br>
    * please update your projects on
    <a class="moz-txt-link-freetext" href="http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8">http://wiki.xenproject.org/wiki/OutreachProgramForWomen/Round8</a><br>
    * please hang out on the #xen-opw channel in the coming weeks, if
    you can and engage with interns<br>
    * please give interns small achievable tasks (on a public list) that
    help us identify whether the intern can can complete a project,
    prior to the application. This can be a bug fix, code refactoring,
    writing a test case, etc. and is designed to get interns familiar
    with the tooling and culture in the project.<br>
    <br>
    == Google Summer of Code ==<br>
    Student application deadline: March 21th<br>
    Close of Student Selection: April 18<br>
    <span style="color: rgb(0, 0, 0); font-family: Cantarell,
      sans-serif; font-size: medium; font-style: normal; font-variant:
      normal; font-weight: normal; letter-spacing: normal; line-height:
      normal; orphans: auto; text-align: left; text-indent: 0px;
      text-transform: none; white-space: normal; widows: auto;
      word-spacing: 0px; -webkit-text-stroke-width: 0px;
      background-color: rgb(255, 255, 255); display: inline !important;
      float: none;"><span style="color: rgb(0, 0, 0); font-family:
        Cantarell, sans-serif; font-size: medium; font-style: normal;
        font-variant: normal; font-weight: normal; letter-spacing:
        normal; line-height: normal; orphans: auto; text-align: left;
        text-indent: 0px; text-transform: none; white-space: normal;
        widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;
        background-color: rgb(255, 255, 255); display: inline
        !important; float: none;">Accepted participants announced<span
          class="Apple-converted-space"> : </span></span>April 21<br>
    </span><span style="color: rgb(0, 0, 0); font-family: Cantarell,
      sans-serif; font-size: medium; font-style: normal; font-variant:
      normal; font-weight: normal; letter-spacing: normal; line-height:
      normal; orphans: auto; text-align: left; text-indent: 0px;
      text-transform: none; white-space: normal; widows: auto;
      word-spacing: 0px; -webkit-text-stroke-width: 0px;
      background-color: rgb(255, 255, 255); display: inline !important;
      float: none;"><span style="color: rgb(0, 0, 0); font-family:
        Cantarell, sans-serif; font-size: medium; font-style: normal;
        font-variant: normal; font-weight: normal; letter-spacing:
        normal; line-height: normal; orphans: auto; text-align: left;
        text-indent: 0px; text-transform: none; white-space: normal;
        widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;
        background-color: rgb(255, 255, 255); display: inline
        !important; float: none;">Coding Period: May 19 - August 18<br>
        <br>
      </span></span>Request to mentors: <br>
    * please update your projects on
    <a class="moz-txt-link-freetext" href="http://wiki.xenproject.org/wiki/GSoc_2014">http://wiki.xenproject.org/wiki/GSoc_2014</a><br>
    * please sign up to the GSoC website, aka<br>
    ** create an account on <a class="moz-txt-link-freetext"
      href="https://www.google-melange.com/gsoc/homepage/google/gsoc2014">https://www.google-melange.com/gsoc/homepage/google/gsoc2014</a>
    <br>
    ** sign in<br>
    ** fill out <a class="moz-txt-link-freetext"
href="https://www.google-melange.com/gsoc/connection/pick/google/gsoc2014">https://www.google-melange.com/gsoc/connection/pick/google/gsoc2014</a>
    - "choose Xen Project"
    <br>
    * please hang out on the #xen-opw channel in the coming weeks, if
    you can, and engage with students<br>
    * please give students small, achievable tasks (on a public list),
    prior to the application, that help us identify whether a student
    can complete a project. This can be a bug fix, code refactoring,
    writing a test case, etc., and is designed to get students familiar
    with the project's tooling and culture.<br>
    <br>
    Best Regards
    <br>
    Lars
  </body>
</html>

--------------040201090704040905090500--


--===============6203124538202805123==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6203124538202805123==--


From xen-devel-bounces@lists.xen.org Tue Feb 25 16:37:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 16:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIL0K-0006VZ-6Z; Tue, 25 Feb 2014 16:37:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIL0I-0006VU-Hl
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 16:37:11 +0000
Received: from [85.158.137.68:24422] by server-5.bemta-3.messagelabs.com id
	C0/22-04712-5B6CC035; Tue, 25 Feb 2014 16:37:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393346225!4156575!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3338 invoked from network); 25 Feb 2014 16:37:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 16:37:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,541,1389744000"; d="scan'208";a="105606557"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 16:36:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 11:36:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIKzV-0005m9-B2;
	Tue, 25 Feb 2014 16:36:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIKzV-0007BH-1v;
	Tue, 25 Feb 2014 16:36:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25296-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 16:36:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25296: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25296 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 25290 REGR. vs. 25281

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl           11 guest-saverestore           fail pass in 25290
 test-amd64-i386-pair     18 guest-migrate/dst_host/src_host fail pass in 25290
 test-amd64-i386-xl-multivcpu 11 guest-saverestore  fail in 25290 pass in 25296

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  d6ac84ca0db28b99073d4364815e0f71600c5780
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d6ac84ca0db28b99073d4364815e0f71600c5780
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    (cherry picked from commit 5be1e95318147855713709094e6847e3104ae910)

commit 9b51591c330672765d866a5ed4ed8e40c75bb1cf
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:33:00 2014 +0100

    iommu: don't need to map dom0 page when the PT is shared
    
    Currently iommu_init_dom0 browses the page list and calls the map_page
    callback on each page.
    
    In both the AMD and VT-d drivers, the callback returns immediately if the
    page table is shared with the processor, so Xen can safely skip walking
    the page list.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 2608662379d50e69b3bba4e6827fc910db9f64f8
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:32:00 2014 +0100

    vtd: don't export iommu_set_pgd
    
    iommu_set_pgd is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Xiantao Zhang <xiantao.zhang@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit b0566814a400f436056faac286d2804edb5791c0
Merge: 24e0a36... 21eaf5b...
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:31:28 2014 +0100

    Merge branch 'staging' of xenbits.xen.org:/home/xen/git/xen into staging

commit 24e0a36d63a6bac1e2777b7265c8add3f7895e58
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:21:54 2014 +0100

    vtd: don't export iommu_domain_teardown
    
    iommu_domain_teardown is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 21eaf5b06a41c787e1441523f7b22e57565bb292
Author: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date:   Sat Feb 22 11:35:54 2014 +0100

    libxl: comments cleanup on libxl_dm.c
    
    Removed some unhelpful comment lines.
    
    Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 9625f4ef05031773a52ebe3ca84db64af68956d6
Author: Xudong Hao <xudong.hao@intel.com>
Date:   Mon Feb 24 12:11:53 2014 +0100

    x86: expose RDSEED, ADX, and PREFETCHW to dom0
    
    This patch explicitly exposes new Intel features to dom0, namely
    RDSEED and ADX. PREFETCHW does not need explicit exposing.
    
    Signed-off-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 5d160d913e03b581bdddde73535c18ac670cf0a9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:11:01 2014 +0100

    x86/MSI: don't risk division by zero
    
    The check in question is redundant with the one in the immediately
    following if(), where dividing by zero gets carefully avoided.
    
    Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit fd1864f48d8914fb8eeb6841cd08c2c09b368909
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Mon Feb 24 12:09:52 2014 +0100

    Nested VMX: update nested paging mode on vmexit
    
    Since SVM and VMX use different mechanisms to emulate virtual vmentry
    and virtual vmexit, it is hard to update the nested paging mode correctly
    in common code, so it must be updated in each side's own code path.
    SVM already updates the nested paging mode on vmexit. This patch adds the
    same logic on the VMX side.
    
    Previous discussion is here:
    http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Christoph Egger <chegger@amazon.de>

commit 199a0878195f3a271394d126c66e8030c461ebd3
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Date:   Mon Feb 24 12:09:14 2014 +0100

    vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
    
    The vmce_amd_[rd|wr]msr functions can handle accesses to AMD thresholding
    registers. But due to this statement:
    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
    we were wrongly masking off the top two bits, which meant the register
    accesses never made it to the vmce_amd_* functions.
    
    Correct this by modifying the mask so that the AMD thresholding registers
    fall through to the 'default' case, which in turn allows the vmce_amd_*
    functions to handle accesses to those registers.
    
    While at it, remove some clutter in the vmce_amd_* functions. The current
    policy of returning zero for reads and ignoring writes is retained.
    
    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

commit 60ea3a3ac3d2bcd8e85b250fdbfc46b3b9dc7085
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Mon Feb 24 12:07:41 2014 +0100

    x86/MCE: Fix race condition in mctelem_reserve
    
    These lines (in mctelem_reserve)
    
            newhead = oldhead->mcte_next;
            if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
    
    are racy. After newhead is read, another flow (thread or recursive
    invocation) can change the whole list yet leave the head with the same
    value (the classic ABA problem). oldhead then still equals *freelp, but
    the new head being installed may point to any element, even one already
    in use.
    
    This patch instead uses a bit array and atomic bit operations.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 16:37:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 16:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIL0K-0006VZ-6Z; Tue, 25 Feb 2014 16:37:12 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIL0I-0006VU-Hl
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 16:37:11 +0000
Received: from [85.158.137.68:24422] by server-5.bemta-3.messagelabs.com id
	C0/22-04712-5B6CC035; Tue, 25 Feb 2014 16:37:09 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393346225!4156575!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3338 invoked from network); 25 Feb 2014 16:37:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 16:37:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,541,1389744000"; d="scan'208";a="105606557"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 16:36:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 11:36:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIKzV-0005m9-B2;
	Tue, 25 Feb 2014 16:36:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIKzV-0007BH-1v;
	Tue, 25 Feb 2014 16:36:21 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25296-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 16:36:21 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25296: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25296 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-oldkern            4 xen-build        fail in 25290 REGR. vs. 25281

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl           11 guest-saverestore           fail pass in 25290
 test-amd64-i386-pair     18 guest-migrate/dst_host/src_host fail pass in 25290
 test-amd64-i386-xl-multivcpu 11 guest-saverestore  fail in 25290 pass in 25296

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  d6ac84ca0db28b99073d4364815e0f71600c5780
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          pass    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d6ac84ca0db28b99073d4364815e0f71600c5780
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reaquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
    
    (cherry picked from commit 5be1e95318147855713709094e6847e3104ae910)

commit 9b51591c330672765d866a5ed4ed8e40c75bb1cf
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:33:00 2014 +0100

    iommu: don't need to map dom0 page when the PT is shared
    
    Currently iommu_init_dom0 is browsing the page list and call map_page callback
    on each page.
    
    On both AMD and VTD drivers, the function will directly return if the page
    table is shared with the processor. So Xen can safely avoid to run through
    the page list.
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 2608662379d50e69b3bba4e6827fc910db9f64f8
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:32:00 2014 +0100

    vtd: don't export iommu_set_pgd
    
    iommu_set_pgd is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Xiantoa Zhang <xiantao.zhang@intel.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>

commit b0566814a400f436056faac286d2804edb5791c0
Merge: 24e0a36... 21eaf5b...
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:31:28 2014 +0100

    Merge branch 'staging' of xenbits.xen.org:/home/xen/git/xen into staging

commit 24e0a36d63a6bac1e2777b7265c8add3f7895e58
Author: Julien Grall <julien.grall@linaro.org>
Date:   Mon Feb 24 12:21:54 2014 +0100

    vtd: don't export iommu_domain_teardown
    
    iommu_domain_teardown is only used internally in
    xen/drivers/passthrough/vtd/iommu.c
    
    Signed-off-by: Julien Grall <julien.grall@linaro.org>
    Acked-by: Ian Cambell <ian.campbell@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 21eaf5b06a41c787e1441523f7b22e57565bb292
Author: Fabio Fantoni <fabio.fantoni@m2r.biz>
Date:   Sat Feb 22 11:35:54 2014 +0100

    libxl: comments cleanup on libxl_dm.c
    
    Removed some useless comment lines.
    
    Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 9625f4ef05031773a52ebe3ca84db64af68956d6
Author: Xudong Hao <xudong.hao@intel.com>
Date:   Mon Feb 24 12:11:53 2014 +0100

    x86: expose RDSEED, ADX, and PREFETCHW to dom0
    
    This patch explicitly exposes new Intel features to dom0, namely
    RDSEED and ADX.  PREFETCHW does not need explicit exposing.
    
    Signed-off-by: Xudong Hao <xudong.hao@intel.com>
    Signed-off-by: Liu Jinsong <jinsong.liu@intel.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 5d160d913e03b581bdddde73535c18ac670cf0a9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Feb 24 12:11:01 2014 +0100

    x86/MSI: don't risk division by zero
    
    The check in question is redundant with the one in the immediately
    following if(), where dividing by zero gets carefully avoided.
    
    Spotted-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit fd1864f48d8914fb8eeb6841cd08c2c09b368909
Author: Yang Zhang <yang.z.zhang@Intel.com>
Date:   Mon Feb 24 12:09:52 2014 +0100

    Nested VMX: update nested paging mode on vmexit
    
    Since SVM and VMX use different mechanisms to emulate virtual vmentry
    and virtual vmexit, it is hard to update the nested paging mode correctly
    in common code, so we need to update it in their respective code paths.
    SVM already updates the nested paging mode on vmexit.  This patch adds
    the same logic on the VMX side.
    
    Previous discussion is here:
    http://lists.xen.org/archives/html/xen-devel/2013-12/msg01759.html
    
    Signed-off-by: Yang Zhang <yang.z.zhang@Intel.com>
    Reviewed-by: Christoph Egger <chegger@amazon.de>

commit 199a0878195f3a271394d126c66e8030c461ebd3
Author: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
Date:   Mon Feb 24 12:09:14 2014 +0100

    vmce: Allow vmce_amd_* functions to handle AMD thresholding MSRs
    
    The vmce_amd_[rd|wr]msr functions can handle accesses to the AMD
    thresholding registers.  But due to this statement:
    switch ( msr & (MSR_IA32_MC0_CTL | 3) )
    we were wrongly masking off the top two bits, which meant the register
    accesses never made it to the vmce_amd_* functions.
    
    Correct this by modifying the mask so that the AMD thresholding
    registers fall through to the 'default' case, which in turn allows the
    vmce_amd_* functions to handle accesses to those registers.
    
    While at it, remove some clutter in the vmce_amd_* functions.  The
    current policy of returning zero for reads and ignoring writes is
    retained.
    
    Signed-off-by: Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>

commit 60ea3a3ac3d2bcd8e85b250fdbfc46b3b9dc7085
Author: Frediano Ziglio <frediano.ziglio@citrix.com>
Date:   Mon Feb 24 12:07:41 2014 +0100

    x86/MCE: Fix race condition in mctelem_reserve
    
    These lines (in mctelem_reserve)
    
            newhead = oldhead->mcte_next;
            if (cmpxchgptr(freelp, oldhead, newhead) == oldhead) {
    
    are racy.  After you read the newhead pointer, another flow (thread or
    recursive invocation) can change the whole list but set the head back
    to the same value.  So oldhead is the same as *freelp, but you are
    setting a new head that could point to any element (even one already
    in use).
    
    This patch uses a bit array and atomic bit operations instead.
    
    Signed-off-by: Frediano Ziglio <frediano.ziglio@citrix.com>
    Reviewed-by: Liu Jinsong <jinsong.liu@intel.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 17:46:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 17:46:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIM4i-0006ti-F0; Tue, 25 Feb 2014 17:45:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIM4g-0006tY-Gb
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 17:45:46 +0000
Received: from [85.158.143.35:32821] by server-1.bemta-4.messagelabs.com id
	65/04-31661-9C6DC035; Tue, 25 Feb 2014 17:45:45 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393350343!8259451!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 451 invoked from network); 25 Feb 2014 17:45:44 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 17:45:44 -0000
X-IronPort-AV: E=Sophos;i="4.97,541,1389744000"; d="scan'208";a="105635051"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 17:45:43 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 12:45:42 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIM4b-000672-TH;
	Tue, 25 Feb 2014 17:45:41 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIM4b-0005h4-GT;
	Tue, 25 Feb 2014 17:45:41 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25297-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 17:45:41 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25297: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25297 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-oldkern            4 xen-build        fail in 25291 REGR. vs. 25266

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair        19 guest-stop/src_host         fail pass in 25291

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 18:01:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 18:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIMJo-00078R-A8; Tue, 25 Feb 2014 18:01:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WIMJn-00078M-DG
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 18:01:23 +0000
Received: from [85.158.137.68:35541] by server-1.bemta-3.messagelabs.com id
	AF/2C-17293-27ADC035; Tue, 25 Feb 2014 18:01:22 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393351281!4166971!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16921 invoked from network); 25 Feb 2014 18:01:21 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 18:01:21 -0000
Received: by mail-we0-f176.google.com with SMTP id p61so700123wes.21
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 10:01:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=E6d/f6WoRi0QFxeCTDMR4X+MPNg+3mv4Jycbf1jQeFM=;
	b=zPl3x1BVSLbdHHwh22IkqaMBU63hwuM4TT/SEj+bFQrxZklVk2zC/hcjC4bhbBBtCX
	BVKxywlAXsc3smOYsHvv3b3q22k6OnsmtgB1WnyZkmPANOf9wZY+jO2AjzeuUyDA8P/h
	04o0v78ZXsZ3zM92A3q8XMJlvpEoOlsyEEvHJIvFT7vn2HGje24bL+DDXFOPp0Kq5RPj
	Ye4+5JtYDNn5Um2owENJRzcJv9ztxS5BY3roE9Acwkm0GAU7iqRGckrRyHHWeAhUxvGl
	BI+wqTtvRTrYnKMz+yEDJ3Bf8izfM22AQMS405EF6XqbI5vayQbpfYeyLvgXS/Laf/v2
	2S0w==
X-Received: by 10.194.219.132 with SMTP id po4mr26044040wjc.7.1393351281510;
	Tue, 25 Feb 2014 10:01:21 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id pm2sm36793399wic.0.2014.02.25.10.01.19
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 25 Feb 2014 10:01:20 -0800 (PST)
Message-ID: <530CDA6E.3070804@gmail.com>
Date: Tue, 25 Feb 2014 18:01:18 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <530C8DD7020000780011F226@nat28.tlf.novell.com>
	<1393328344-27597-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393328344-27597-1-git-send-email-andrew.cooper3@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2] x86/faulting: Use formal defines instead
 of opencoded bits
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper<andrew.cooper3@citrix.com>
> CC: Keir Fraser<keir@xen.org>
> CC: Jan Beulich<JBeulich@suse.com>

Acked-by: Keir Fraser <keir@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 18:46:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 18:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIN1J-0007LI-3k; Tue, 25 Feb 2014 18:46:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIN1I-0007LD-6M
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 18:46:20 +0000
Received: from [85.158.137.68:31990] by server-10.bemta-3.messagelabs.com id
	E4/CB-07302-BF4EC035; Tue, 25 Feb 2014 18:46:19 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393353977!1360902!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32564 invoked from network); 25 Feb 2014 18:46:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 18:46:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,541,1389744000"; d="scan'208";a="105660016"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 25 Feb 2014 18:46:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Tue, 25 Feb 2014 13:46:16 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIN1D-0003KY-SS; Tue, 25 Feb 2014 18:46:15 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Tue, 25 Feb 2014 18:46:14 +0000
Message-ID: <1393353974-4057-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, David
	Scott <dave.scott@eu.citrix.com>
Subject: [Xen-devel] [PATCH] tools/ocaml: Ignore more OCaml test binaries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
CC: David Scott <dave.scott@eu.citrix.com>
---
 .gitignore |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/.gitignore b/.gitignore
index db3b083..eb210ca 100644
--- a/.gitignore
+++ b/.gitignore
@@ -390,6 +390,8 @@ tools/ocaml/xenstored/oxenstored
 tools/ocaml/test/xtl
 tools/ocaml/test/send_debug_keys
 tools/ocaml/test/list_domains
+tools/ocaml/test/dmesg
+tools/ocaml/test/raise_exception
 tools/debugger/kdd/kdd
 tools/firmware/etherboot/ipxe.tar.gz
 tools/firmware/etherboot/ipxe/
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 20:45:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 20:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIOsF-00083C-HZ; Tue, 25 Feb 2014 20:45:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WIOsE-000837-Mv
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 20:45:06 +0000
Received: from [85.158.143.35:36073] by server-3.bemta-4.messagelabs.com id
	FE/4E-11539-1D00D035; Tue, 25 Feb 2014 20:45:05 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393361103!8245664!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8982 invoked from network); 25 Feb 2014 20:45:05 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 25 Feb 2014 20:45:05 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1PKj2vU032725
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 25 Feb 2014 20:45:03 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1PKj1E8022006
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 25 Feb 2014 20:45:02 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1PKj1tV007538; Tue, 25 Feb 2014 20:45:01 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 25 Feb 2014 12:44:59 -0800
Date: Tue, 25 Feb 2014 12:44:59 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: "Jan Beulich" <JBeulich@suse.com>
Message-ID: <20140225124459.792471cf@mantra.us.oracle.com>
In-Reply-To: <530C7048020000780011F0D9@nat28.tlf.novell.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
	<530C7048020000780011F0D9@nat28.tlf.novell.com>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 2/3] pvh: fix pirq path for pvh
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 25 Feb 2014 09:28:24 +0000
"Jan Beulich" <JBeulich@suse.com> wrote:

> >>> On 25.02.14 at 02:03, Mukesh Rathor <mukesh.rathor@oracle.com>
> >>> wrote:
> > Just like hvm, pirq eoi shared page is not there for pvh. pvh should
> > not touch any pv_domain fields.
> 
> While the latter is true, wasn't it that IRQ handling wise PVH is
> using PV mechanisms? In which case the EOI map page would be of
> interest, and rather than guarding the accesses you ought to move
> the field out of pv_domain.

It would be, but my proposals to move the fields out in earlier patches
had been turned down by you. So, while I work on another solution, this
would help. It's on my list to work on. As soon as dom0 pvh patches
are in, I intend to work on fixmes. I don't understand very well whats
going on with pirq eoi, so need to study it both in xen and linux, 
and focussing on one thing at a time really helps mortals like me :) ... 

thanks,
Mukesh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 21:05:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 21:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIPCA-0008FO-4r; Tue, 25 Feb 2014 21:05:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dcbw@redhat.com>) id 1WIPC8-0008FJ-6B
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 21:05:40 +0000
Received: from [193.109.254.147:48906] by server-16.bemta-14.messagelabs.com
	id B0/EB-21945-3A50D035; Tue, 25 Feb 2014 21:05:39 +0000
X-Env-Sender: dcbw@redhat.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393362338!6813825!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16454 invoked from network); 25 Feb 2014 21:05:38 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-13.tower-27.messagelabs.com with SMTP;
	25 Feb 2014 21:05:38 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1PL5TQP003886
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Tue, 25 Feb 2014 16:05:30 -0500
Received: from [10.3.237.247] (vpn-237-247.phx2.redhat.com [10.3.237.247])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1PL5RQm006412; Tue, 25 Feb 2014 16:05:28 -0500
Message-ID: <1393362420.3032.8.camel@dcbw.local>
From: Dan Williams <dcbw@redhat.com>
To: David Miller <davem@davemloft.net>
Date: Tue, 25 Feb 2014 15:07:00 -0600
In-Reply-To: <20140224.180426.411052665068255886.davem@davemloft.net>
References: <1392857777.22693.14.camel@dcbw.local>
	<CAB=NE6Xvqcdje5OSxjzTRa3wv-vZHr+MJHY86ZCfFXU2Lpzi8Q@mail.gmail.com>
	<1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
Mime-Version: 1.0
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, yoshfuji@linux-ipv6.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 18:04 -0500, David Miller wrote:
> From: Dan Williams <dcbw@redhat.com>
> Date: Mon, 24 Feb 2014 12:22:00 -0600
> 
> > In the future I expect more people will want to disable IPv4 as
> > they move to IPv6.
> 
> I definitely don't.
> 
> I've been lightly following this conversation and I have to say
> a few things.
> 
> disable_ipv6 was added because people wanted to make sure their
> machines didn't generate any ipv6 traffic because "ipv6 is not
> mature", "we don't have our firewalls configured to handle that
> kind of traffic" etc.
> 
> None of these things apply to ipv4.
> 
> And if you think people will go to ipv6 only, you are dreaming.
> 
> Name a provider of a major web site who will go to strictly only
> providing an ipv6 facing site?
> 
> Only an idiot who wanted to lose significant numbers of page views
> and traffic would do that, so ipv4 based connectivity will be
> universally necessary forever.
> 
> I think disable_ipv4 is absolutely a non-starter.

Also, disable_ipv4 signals *intent*, which is distinct from current
state.

Does an interface without an IPv4 address mean that the user wished it
not to have one?

Or does it mean that DHCP hasn't started yet (but is supposed to), or
failed, or something hasn't gotten around to assigning an address yet?

disable_ipv4 lets you distinguish between these two cases, the same way
disable_ipv6 does.
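
As an illustrative sketch only (the disable_ipv4 knob is the proposal under
discussion in this RFC, not an existing mainline sysctl; the interface name
eth0 is a placeholder), assuming it mirrored the existing per-interface
disable_ipv6 knob, the configuration would read:

```
# /etc/sysctl.d/ fragment -- per-interface address-family knobs
# Existing knob: tell the kernel not to bring up IPv6 on eth0 at all
net.ipv6.conf.eth0.disable_ipv6 = 1

# Proposed analogue (hypothetical): declare eth0 intentionally
# IPv4-free, so "no IPv4 address" means "by design" rather than
# "DHCP failed or has not run yet"
net.ipv4.conf.eth0.disable_ipv4 = 1
```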

Dan



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 21:09:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 21:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIPFX-0008Mf-RR; Tue, 25 Feb 2014 21:09:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Aravind.Gopalakrishnan@amd.com>) id 1WIPFW-0008MY-CC
	for xen-devel@lists.xen.org; Tue, 25 Feb 2014 21:09:10 +0000
Received: from [85.158.143.35:16947] by server-3.bemta-4.messagelabs.com id
	E9/4D-11539-5760D035; Tue, 25 Feb 2014 21:09:09 +0000
X-Env-Sender: Aravind.Gopalakrishnan@amd.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393362548!8268026!1
X-Originating-IP: [213.199.154.205]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 979 invoked from network); 25 Feb 2014 21:09:08 -0000
Received: from am1ehsobe002.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.205)
	by server-7.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	25 Feb 2014 21:09:08 -0000
Received: from mail12-am1-R.bigfish.com (10.3.201.243) by
	AM1EHSOBE004.bigfish.com (10.3.204.24) with Microsoft SMTP Server id
	14.1.225.22; Tue, 25 Feb 2014 21:09:08 +0000
Received: from mail12-am1 (localhost [127.0.0.1])	by mail12-am1-R.bigfish.com
	(Postfix) with ESMTP id 352DB4C00E8;
	Tue, 25 Feb 2014 21:09:08 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:165.204.84.221; KIP:(null); UIP:(null);
	IPV:NLI; H:atltwp01.amd.com; RD:none; EFVD:NLI
X-SpamScore: 0
X-BigFish: VPS0(zzzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2dh839he5bhf0ah1288h12a5h12a9h12bdh12e5h137ah139eh13b6h1441h14ddh1504h1537h162dh1631h1758h1898h18e1h1946h19b5h1ad9h1b0ah2222h224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1e23h1fe8h1ff5h2218h2216h226dh22d0h24afh2327h2336h2438h2461h2487h24d7h2516h2545h255eh25cch1155h)
Received: from mail12-am1 (localhost.localdomain [127.0.0.1]) by mail12-am1
	(MessageSwitch) id 1393362546158087_20141;
	Tue, 25 Feb 2014 21:09:06 +0000 (UTC)
Received: from AM1EHSMHS008.bigfish.com (unknown [10.3.201.234])	by
	mail12-am1.bigfish.com (Postfix) with ESMTP id 22ABD16004E;
	Tue, 25 Feb 2014 21:09:06 +0000 (UTC)
Received: from atltwp01.amd.com (165.204.84.221) by AM1EHSMHS008.bigfish.com
	(10.3.207.108) with Microsoft SMTP Server id 14.16.227.3;
	Tue, 25 Feb 2014 21:09:05 +0000
X-WSS-ID: 0N1KLF2-07-57Z-02
X-M-MSG: 
Received: from satlvexedge02.amd.com (satlvexedge02.amd.com [10.177.96.29])
	(using TLSv1 with cipher AES128-SHA (128/128 bits))	(No client
	certificate
	requested)	by atltwp01.amd.com (Axway MailGate 5.2.1) with ESMTPS id
	27D8C12C0073;	Tue, 25 Feb 2014 15:09:01 -0600 (CST)
Received: from SATLEXDAG04.amd.com (10.181.40.9) by SATLVEXEDGE02.amd.com
	(10.177.96.29) with Microsoft SMTP Server (TLS) id 14.2.328.9;
	Tue, 25 Feb 2014 15:09:19 -0600
Received: from autotest-xen-lamar.amd.com (10.180.168.240) by
	satlexdag04.amd.com (10.181.40.9) with Microsoft SMTP Server id
	14.2.328.9; Tue, 25 Feb 2014 16:09:02 -0500
From: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
To: <xen-devel@lists.xen.org>, <jbeulich@suse.com>
Date: Tue, 25 Feb 2014 15:11:27 -0600
Message-ID: <1393362687-6530-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
X-Mailer: git-send-email 1.8.1.2
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>, keir@xen.org,
	andrew.cooper3@citrix.com,
	Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
	Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>,
	sherry.hurwitz@amd.com, shurd@broadcom.com
Subject: [Xen-devel] [PATCH V8 RESEND] ns16550: Add support for UART present
	in Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The UART on Broadcom TruManage capable NetXtreme chips is an MMIO device,
so the code has been modified to accept MMIO-based devices as well. MMIO
device settings are populated in the 'uart_config' table. The device also
advertises a 64-bit BAR, so the code is reworked to account for 64-bit
BARs and 64-bit MMIO lengths.

There are two more quirks: the register offset must be shifted by a
device-specific value, and both the UART_LSR_THRE and UART_LSR_TEMT bits
must be verified before transmitting data.

To test, include com1=115200,8n1,pci,0 on the Xen command line and observe
console output via SoL (Serial over LAN).
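
For reference, on a GRUB2 based system that option goes on Xen's multiboot
line; the file paths below are placeholders, not taken from the patch:

```shell
# Hypothetical GRUB2 menu entry fragment for serial console testing.
multiboot /boot/xen.gz com1=115200,8n1,pci,0 console=com1
module /boot/vmlinuz-dom0 root=/dev/sda1 ro console=hvc0
```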

Changes from V7:
  - per Jan's comments:
    - Moved pci_ro_device to ns16550_init_postirq() so that either
      pci_hide_device or pci_ro_device is invoked from a single place
    - Removed the leading '0' from the printk, as an absent segment
      identifier implies zero anyway.
  - per Ian's comments:
    - Fixed issues that caused his build to fail.
    - Cross-compiled for arm32 and arm64 after applying the patch;
      the build succeeded on a local machine.

Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
Signed-off-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
---
 xen/drivers/char/ns16550.c | 187 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 166 insertions(+), 21 deletions(-)

diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 9c2cded..932d643 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -46,13 +46,14 @@ string_param("com2", opt_com2);
 static struct ns16550 {
     int baud, clock_hz, data_bits, parity, stop_bits, fifo_size, irq;
     u64 io_base;   /* I/O port or memory-mapped I/O address. */
-    u32 io_size;
+    u64 io_size;
     int reg_shift; /* Bits to shift register offset by */
     int reg_width; /* Size of access to use, the registers
                     * themselves are still bytes */
     char __iomem *remapped_io_base;  /* Remapped virtual address of MMIO. */
     /* UART with IRQ line: interrupt-driven I/O. */
     struct irqaction irqaction;
+    u8 lsr_mask;
 #ifdef CONFIG_ARM
     struct vuart_info vuart;
 #endif
@@ -69,14 +70,50 @@ static struct ns16550 {
     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
     u32 bar;
+    u32 bar64;
     u16 cr;
     u8 bar_idx;
+    bool_t enable_ro; /* Make MMIO devices read only to Dom0 */
 #endif
 #ifdef HAS_DEVICE_TREE
     struct dt_irq dt_irq;
 #endif
 } ns16550_com[2] = { { 0 } };
 
+/* Defining uart config options for MMIO devices */
+struct ns16550_config_mmio {
+    u16 vendor_id;
+    u16 dev_id;
+    unsigned int reg_shift;
+    unsigned int reg_width;
+    unsigned int fifo_size;
+    u8 lsr_mask;
+    unsigned int max_bars;
+};
+
+
+#ifdef HAS_PCI
+/*
+ * Create lookup tables for specific MMIO devices..
+ * It is assumed that if the device found is MMIO,
+ * then you have indexed it here. Else, the driver
+ * does nothing.
+ */
+static struct ns16550_config_mmio __initdata uart_config[] =
+{
+    /* Broadcom TruManage device */
+    {
+        .vendor_id = 0x14e4,
+        .dev_id = 0x160a,
+        .reg_shift = 2,
+        .reg_width = 1,
+        .fifo_size = 16,
+        .lsr_mask = (UART_LSR_THRE | UART_LSR_TEMT),
+        .max_bars = 1,
+    },
+};
+#endif
+
 static void ns16550_delayed_resume(void *data);
 
 static char ns_read_reg(struct ns16550 *uart, int reg)
@@ -134,7 +171,7 @@ static void ns16550_interrupt(
     while ( !(ns_read_reg(uart, UART_IIR) & UART_IIR_NOINT) )
     {
         char lsr = ns_read_reg(uart, UART_LSR);
-        if ( lsr & UART_LSR_THRE )
+        if ( (lsr & uart->lsr_mask) == uart->lsr_mask )
             serial_tx_interrupt(port, regs);
         if ( lsr & UART_LSR_DR )
             serial_rx_interrupt(port, regs);
@@ -160,7 +197,7 @@ static void __ns16550_poll(struct cpu_user_regs *regs)
         serial_rx_interrupt(port, regs);
     }
 
-    if ( ns_read_reg(uart, UART_LSR) & UART_LSR_THRE )
+    if ( ( ns_read_reg(uart, UART_LSR) & uart->lsr_mask ) == uart->lsr_mask )
         serial_tx_interrupt(port, regs);
 
 out:
@@ -183,7 +220,9 @@ static int ns16550_tx_ready(struct serial_port *port)
 
     if ( ns16550_ioport_invalid(uart) )
         return -EIO;
-    return ns_read_reg(uart, UART_LSR) & UART_LSR_THRE ? uart->fifo_size : 0;
+
+    return ( (ns_read_reg(uart, UART_LSR) &
+              uart->lsr_mask ) == uart->lsr_mask ) ? uart->fifo_size : 0;
 }
 
 static void ns16550_putc(struct serial_port *port, char c)
@@ -354,8 +393,24 @@ static void __init ns16550_init_postirq(struct serial_port *port)
 
 #ifdef HAS_PCI
     if ( uart->bar || uart->ps_bdf_enable )
-        pci_hide_device(uart->ps_bdf[0], PCI_DEVFN(uart->ps_bdf[1],
-                                                   uart->ps_bdf[2]));
+    {
+        if ( !uart->enable_ro )
+            pci_hide_device(uart->ps_bdf[0], PCI_DEVFN(uart->ps_bdf[1],
+                            uart->ps_bdf[2]));
+        else
+        {
+            if ( rangeset_add_range(mmio_ro_ranges,
+                                    uart->io_base,
+                                    uart->io_base + uart->io_size - 1) )
+                printk(XENLOG_INFO "Error while adding MMIO range of device to mmio_ro_ranges\n");
+
+            if ( pci_ro_device(0, uart->ps_bdf[0],
+                               PCI_DEVFN(uart->ps_bdf[1], uart->ps_bdf[2])) )
+                printk(XENLOG_INFO "Could not mark config space of %02x:%02x.%u read-only.\n",
+                                    uart->ps_bdf[0], uart->ps_bdf[1],
+                                    uart->ps_bdf[2]);
+        }
+    }
 #endif
 }
 
@@ -381,6 +436,13 @@ static void _ns16550_resume(struct serial_port *port)
     {
        pci_conf_write32(0, uart->ps_bdf[0], uart->ps_bdf[1], uart->ps_bdf[2],
                         PCI_BASE_ADDRESS_0 + uart->bar_idx*4, uart->bar);
+
+        /* If 64 bit BAR, write higher 32 bits to BAR+4 */
+        if ( uart->bar & PCI_BASE_ADDRESS_MEM_TYPE_64 )
+            pci_conf_write32(0, uart->ps_bdf[0],
+                        uart->ps_bdf[1], uart->ps_bdf[2],
+                        PCI_BASE_ADDRESS_0 + (uart->bar_idx+1)*4, uart->bar64);
+
        pci_conf_write16(0, uart->ps_bdf[0], uart->ps_bdf[1], uart->ps_bdf[2],
                         PCI_COMMAND, uart->cr);
     }
@@ -546,11 +608,13 @@ static int __init check_existence(struct ns16550 *uart)
 }
 
 #ifdef HAS_PCI
-static int
+static int __init
 pci_uart_config (struct ns16550 *uart, int skip_amt, int bar_idx)
 {
-    uint32_t bar, len;
-    int b, d, f, nextf;
+    uint32_t bar, bar_64 = 0, len, len_64;
+    u64 size, mask;
+    unsigned int b, d, f, nextf, i;
+    u16 vendor, device;
 
     /* NB. Start at bus 1 to avoid AMT: a plug-in card cannot be on bus 0. */
     for ( b = skip_amt ? 1 : 0; b < 0x100; b++ )
@@ -579,24 +643,98 @@ pci_uart_config (struct ns16550 *uart, int skip_amt, int bar_idx)
                 bar = pci_conf_read32(0, b, d, f,
                                       PCI_BASE_ADDRESS_0 + bar_idx*4);
 
-                /* Not IO */
+                /* MMIO based */
                 if ( !(bar & PCI_BASE_ADDRESS_SPACE_IO) )
-                    continue;
-
-                pci_conf_write32(0, b, d, f, PCI_BASE_ADDRESS_0, ~0u);
-                len = pci_conf_read32(0, b, d, f, PCI_BASE_ADDRESS_0);
-                pci_conf_write32(0, b, d, f, PCI_BASE_ADDRESS_0 + bar_idx*4, bar);
-
-                /* Not 8 bytes */
-                if ( (len & 0xffff) != 0xfff9 )
-                    continue;
+                {
+                    vendor = pci_conf_read16(0, b, d, f, PCI_VENDOR_ID);
+                    device = pci_conf_read16(0, b, d, f, PCI_DEVICE_ID);
+
+                    pci_conf_write32(0, b, d, f,
+                                     PCI_BASE_ADDRESS_0 + bar_idx*4, ~0u);
+                    len = pci_conf_read32(0, b, d, f, PCI_BASE_ADDRESS_0 + bar_idx*4);
+                    pci_conf_write32(0, b, d, f,
+                                     PCI_BASE_ADDRESS_0 + bar_idx*4, bar);
+
+                    /* Handle 64 bit BAR if found */
+                    if ( bar & PCI_BASE_ADDRESS_MEM_TYPE_64 )
+                    {
+                        bar_64 = pci_conf_read32(0, b, d, f,
+                                      PCI_BASE_ADDRESS_0 + (bar_idx+1)*4);
+                        pci_conf_write32(0, b, d, f,
+                                    PCI_BASE_ADDRESS_0 + (bar_idx+1)*4, ~0u);
+                        len_64 = pci_conf_read32(0, b, d, f,
+                                    PCI_BASE_ADDRESS_0 + (bar_idx+1)*4);
+                        pci_conf_write32(0, b, d, f,
+                                    PCI_BASE_ADDRESS_0 + (bar_idx+1)*4, bar_64);
+                        mask = ((u64)~0 << 32) | PCI_BASE_ADDRESS_MEM_MASK;
+                        size = (((u64)len_64 << 32) | len) & mask;
+                    }
+                    else
+                        size = len & PCI_BASE_ADDRESS_MEM_MASK;
+
+                    size &= -size;
+
+                    /* Check for quirks in uart_config lookup table */
+                    for ( i = 0; i < ARRAY_SIZE(uart_config); i++)
+                    {
+                        if ( uart_config[i].vendor_id != vendor )
+                            continue;
+
+                        if ( uart_config[i].dev_id != device )
+                            continue;
+
+                        /*
+                         * Force length of mmio region to be at least
+                         * 8 bytes times (1 << reg_shift)
+                         */
+                        if ( size < (0x8 * (1 << uart_config[i].reg_shift)) )
+                            continue;
+
+                        if ( bar_idx >= uart_config[i].max_bars )
+                            continue;
+
+                        if ( uart_config[i].fifo_size )
+                            uart->fifo_size = uart_config[i].fifo_size;
+
+                        uart->reg_shift = uart_config[i].reg_shift;
+                        uart->reg_width = uart_config[i].reg_width;
+                        uart->lsr_mask = uart_config[i].lsr_mask;
+                        uart->io_base = ((u64)bar_64 << 32) |
+                                        (bar & PCI_BASE_ADDRESS_MEM_MASK);
+                        /* Set device and MMIO region read only to Dom0 */
+                        uart->enable_ro = 1;
+                        break;
+                    }
+
+                    /* If we have an io_base, then we succeeded in the lookup */
+                    if ( !uart->io_base )
+                        continue;
+                }
+                /* IO based */
+                else
+                {
+                    pci_conf_write32(0, b, d, f,
+                                     PCI_BASE_ADDRESS_0 + bar_idx*4, ~0u);
+                    len = pci_conf_read32(0, b, d, f, PCI_BASE_ADDRESS_0);
+                    pci_conf_write32(0, b, d, f,
+                                     PCI_BASE_ADDRESS_0 + bar_idx*4, bar);
+                    size = len & PCI_BASE_ADDRESS_IO_MASK;
+                    size &= -size;
+
+                    /* Not 8 bytes */
+                    if ( size != 0x8 )
+                        continue;
+
+                    uart->io_base = bar & ~PCI_BASE_ADDRESS_SPACE_IO;
+                }
 
                 uart->ps_bdf[0] = b;
                 uart->ps_bdf[1] = d;
                 uart->ps_bdf[2] = f;
-                uart->bar = bar;
                 uart->bar_idx = bar_idx;
-                uart->io_base = bar & ~PCI_BASE_ADDRESS_SPACE_IO;
+                uart->bar = bar;
+                uart->bar64 = bar_64;
+                uart->io_size = size;
                 uart->irq = pci_conf_read8(0, b, d, f, PCI_INTERRUPT_PIN) ?
                     pci_conf_read8(0, b, d, f, PCI_INTERRUPT_LINE) : 0;
 
@@ -743,9 +881,16 @@ void __init ns16550_init(int index, struct ns16550_defaults *defaults)
     uart->reg_width = 1;
     uart->reg_shift = 0;
 
+#ifdef HAS_PCI
+    uart->enable_ro = 0;
+#endif
+
     /* Default is no transmit FIFO. */
     uart->fifo_size = 1;
 
+    /* Default lsr_mask = UART_LSR_THRE */
+    uart->lsr_mask = UART_LSR_THRE;
+
     ns16550_parse_port_config(uart, (index == 0) ? opt_com1 : opt_com2);
 }
 
-- 
1.8.1.2



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 21:18:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 21:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIPOV-00006a-Gs; Tue, 25 Feb 2014 21:18:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <davem@davemloft.net>) id 1WIPOU-00006V-1c
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 21:18:26 +0000
Received: from [85.158.137.68:9022] by server-17.bemta-3.messagelabs.com id
	7C/24-22569-0A80D035; Tue, 25 Feb 2014 21:18:24 +0000
X-Env-Sender: davem@davemloft.net
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393363103!4191806!1
X-Originating-IP: [149.20.54.216]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13779 invoked from network); 25 Feb 2014 21:18:23 -0000
Received: from shards.monkeyblade.net (HELO shards.monkeyblade.net)
	(149.20.54.216) by server-3.tower-31.messagelabs.com with SMTP;
	25 Feb 2014 21:18:23 -0000
Received: from localhost (nat-pool-rdu-t.redhat.com [66.187.233.202])
	(Authenticated sender: davem-davemloft)
	by shards.monkeyblade.net (Postfix) with ESMTPSA id CB6AB58078D;
	Tue, 25 Feb 2014 13:18:20 -0800 (PST)
Date: Tue, 25 Feb 2014 16:18:17 -0500 (EST)
Message-Id: <20140225.161817.1623503840238501415.davem@davemloft.net>
To: dcbw@redhat.com
From: David Miller <davem@davemloft.net>
In-Reply-To: <1393362420.3032.8.camel@dcbw.local>
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393362420.3032.8.camel@dcbw.local>
X-Mailer: Mew version 6.5 on Emacs 24.1 / Mule 6.0 (HANACHIRUSATO)
Mime-Version: 1.0
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.7
	(shards.monkeyblade.net [149.20.54.216]);
	Tue, 25 Feb 2014 13:18:22 -0800 (PST)
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, yoshfuji@linux-ipv6.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Dan Williams <dcbw@redhat.com>
Date: Tue, 25 Feb 2014 15:07:00 -0600

> Also, disable_ipv4 signals *intent*, which is distinct from current
> state.
> 
> Does an interface without an IPv4 address mean that the user wished it
> not to have one?
> 
> Or does it mean that DHCP hasn't started yet (but is supposed to), or
> failed, or something hasn't gotten around to assigning an address yet?
> 
> disable_ipv4 lets you distinguish between these two cases, the same way
> disable_ipv6 does.

Intent only matters on the kernel side if the kernel automatically
assigns addresses to interfaces which have been brought up, as ipv6
does.

Since it does not do this for ipv4, this can be handled entirely in
userspace.

It is not a valid argument to say that a rogue dhcp might run on
the machine and configure an ipv4 address.  That's the admin's
responsibility, and still a user side problem.  A "rogue" program
could just as easily turn the theoretical disable_ipv4 off too.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 21:44:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 21:44:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIPnc-0000Wz-3Z; Tue, 25 Feb 2014 21:44:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIPna-0000Ws-BY
	for xen-devel@lists.xensource.com; Tue, 25 Feb 2014 21:44:22 +0000
Received: from [193.109.254.147:18825] by server-2.bemta-14.messagelabs.com id
	5D/26-01236-5BE0D035; Tue, 25 Feb 2014 21:44:21 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393364659!6795009!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26015 invoked from network); 25 Feb 2014 21:44:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 21:44:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,542,1389744000"; d="scan'208";a="104083300"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 25 Feb 2014 21:44:18 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 16:44:18 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIPnW-0007Io-4n;
	Tue, 25 Feb 2014 21:44:18 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIPnV-0008Va-7L;
	Tue, 25 Feb 2014 21:44:17 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25300-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Tue, 25 Feb 2014 21:44:17 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25300: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25300 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25300/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-freebsd10-amd64  5 xen-boot              fail blocked in 12557
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install   fail blocked in 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                7472e009a3f1e8b80c65aefd51716afc2e3bab8a
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7066 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2391648 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Tue Feb 25 21:54:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Feb 2014 21:54:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIPxF-0000hZ-At; Tue, 25 Feb 2014 21:54:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pmarks@google.com>) id 1WIO1t-0007sN-OO
	for xen-devel@lists.xenproject.org; Tue, 25 Feb 2014 19:51:02 +0000
Received: from [85.158.137.68:57074] by server-14.bemta-3.messagelabs.com id
	46/BF-08196-424FC035; Tue, 25 Feb 2014 19:51:00 +0000
X-Env-Sender: pmarks@google.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393357858!2640874!1
X-Originating-IP: [209.85.216.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 734 invoked from network); 25 Feb 2014 19:50:59 -0000
Received: from mail-qa0-f44.google.com (HELO mail-qa0-f44.google.com)
	(209.85.216.44)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	25 Feb 2014 19:50:59 -0000
Received: by mail-qa0-f44.google.com with SMTP id f11so854722qae.3
	for <xen-devel@lists.xenproject.org>;
	Tue, 25 Feb 2014 11:50:58 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20120113;
	h=mime-version:in-reply-to:references:from:date:message-id:subject:to
	:cc:content-type;
	bh=lI0r06YtguqRnaLcmjlW4yBNS+Y58SFqmQZRiTFjPRI=;
	b=CYVGv5zXiPISnmi5yDBd9WBDD4UiKs1+ryowuuobjRjrQZ14s8orlROE4cTpTGCyXt
	bV2HtXerEhDa7B4GdqLCOqcGk0v5Ulp1mOjUa8uIxZgsNYB81FtrCo4FwFj06mUX2AV6
	GfkHg0B76qiTMS0up/qoR9GR9y+B81pfp7SaBLyhUXwcKpup0GTfjVt3mVFS0O7gET+v
	SDMh6EO+zwau8EjKTiQ+JHyrtUsocfVaXLWmBhp0AIebRqkyPCqixzyOhjVpmln1x8+u
	+0vznvjD3QiTvFgrX2OHfKntuuepzFHT0ERdAuvFC/akxzbeDisuEGn1e2JS6iY/xfzG
	7GgA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=lI0r06YtguqRnaLcmjlW4yBNS+Y58SFqmQZRiTFjPRI=;
	b=bhOJ5wy7mwjSC/8jcOkobw/Vm9O2TBAKKz1od2rQDZeh1q2xfkrWVdCtcb4BHuADwH
	9+PlrZ8zgFfu5gnsGmnuPyAezAi17lcumM6jsNC4TbP+QmssB+DufGxk2g/Kw5BCTivw
	Od90q+fJuUrcG926AORJdMSCAutRQbyQ27xdFoTQO8Vh+1Z213qzsIsO/R5fX+Upxf/a
	QzQ/hoegYvTm+Vmq9GxGG6juJIgXC7pId+Q4Y3eve96ke16QU2rJ6XAOPkQfYRCj7qqR
	DLho8g/t+hysdYRdYnc5Qj1FNl9Ob1KsMLphpjrgUXWpL2Itg1BLxcPxvK5vCZlNQPGO
	A/JA==
X-Gm-Message-State: ALoCoQmcMPH/DirFxy7NORLXiGWtxfwtFPmoGze36TPBT0kStKaegr8wq+Kp7ZWKARapfan5YEZI9bW9F9cq6Chqb1iq+J2L/UveBxBMwI8FPY7isXkPms9Pc2ThCN8K6OArEe3C99LPCm4egeW/xVmVIGRSfEQao9QYOF8ZKvTTB0o0Dgye9AN9fAFb8V+n1Mtu1etId8hs6sLAkueFn04Lj44kw7Pqpw==
X-Received: by 10.140.29.38 with SMTP id a35mr2278632qga.55.1393357858532;
	Tue, 25 Feb 2014 11:50:58 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.114.196 with HTTP; Tue, 25 Feb 2014 11:50:38 -0800 (PST)
In-Reply-To: <20140224.191238.921310808350170272.davem@davemloft.net>
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
	<20140224.191238.921310808350170272.davem@davemloft.net>
From: Paul Marks <pmarks@google.com>
Date: Tue, 25 Feb 2014 11:50:38 -0800
Message-ID: <CAHaKRvKFwXSucvyrZVqxAZ8sa9tNBofPtJUK4hMRavTcZK9JZw@mail.gmail.com>
To: David Miller <davem@davemloft.net>
X-Mailman-Approved-At: Tue, 25 Feb 2014 21:54:19 +0000
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, ben@decadent.org.uk, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 4:12 PM, David Miller <davem@davemloft.net> wrote:
> From: Ben Hutchings <ben@decadent.org.uk>
> Date: Tue, 25 Feb 2014 00:02:00 +0000
>
>> You can run an internal network, or access network, as v6-only with
>> NAT64 and DNS64 at the border.  I believe some mobile networks are doing
>> this; it was also done on the main FOSDEM wireless network this year.
>
> This seems to be bloating up the networking headers of the internal
> network, for what purpose?

The primary purpose of IPv6 is to bloat up network headers, because
the IPv4 headers were too small to address all the endpoints.

NAT64 is an intriguing solution to the problem of "I have too many
customers for 10.0.0.0/8".  Here are some slides on the topic from
this week's APNIC conference:
https://conference.apnic.net/data/37/464xlat-apricot-2014_1393236641.pdf

A kernel with disable_ipv4 would be fairly usable on such a network
today, as long as you avoid AF_INET-specific apps.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	DLho8g/t+hysdYRdYnc5Qj1FNl9Ob1KsMLphpjrgUXWpL2Itg1BLxcPxvK5vCZlNQPGO
	A/JA==
X-Gm-Message-State: ALoCoQmcMPH/DirFxy7NORLXiGWtxfwtFPmoGze36TPBT0kStKaegr8wq+Kp7ZWKARapfan5YEZI9bW9F9cq6Chqb1iq+J2L/UveBxBMwI8FPY7isXkPms9Pc2ThCN8K6OArEe3C99LPCm4egeW/xVmVIGRSfEQao9QYOF8ZKvTTB0o0Dgye9AN9fAFb8V+n1Mtu1etId8hs6sLAkueFn04Lj44kw7Pqpw==
X-Received: by 10.140.29.38 with SMTP id a35mr2278632qga.55.1393357858532;
	Tue, 25 Feb 2014 11:50:58 -0800 (PST)
MIME-Version: 1.0
Received: by 10.229.114.196 with HTTP; Tue, 25 Feb 2014 11:50:38 -0800 (PST)
In-Reply-To: <20140224.191238.921310808350170272.davem@davemloft.net>
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393286520.6823.123.camel@deadeye.wl.decadent.org.uk>
	<20140224.191238.921310808350170272.davem@davemloft.net>
From: Paul Marks <pmarks@google.com>
Date: Tue, 25 Feb 2014 11:50:38 -0800
Message-ID: <CAHaKRvKFwXSucvyrZVqxAZ8sa9tNBofPtJUK4hMRavTcZK9JZw@mail.gmail.com>
To: David Miller <davem@davemloft.net>
X-Mailman-Approved-At: Tue, 25 Feb 2014 21:54:19 +0000
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, ben@decadent.org.uk, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 4:12 PM, David Miller <davem@davemloft.net> wrote:
> From: Ben Hutchings <ben@decadent.org.uk>
> Date: Tue, 25 Feb 2014 00:02:00 +0000
>
>> You can run an internal network, or access network, as v6-only with
>> NAT64 and DNS64 at the border.  I believe some mobile networks are doing
>> this; it was also done on the main FOSDEM wireless network this year.
>
> This seems to be bloating up the networking headers of the internal
> network, for what purpose?

The primary purpose of IPv6 is to bloat up network headers, because
the IPv4 headers were too small to address all the endpoints.

NAT64 is an intriguing solution to the problem of "I have too many
customers for 10.0.0.0/8".  Here are some slides on the topic from
this week's APNIC conference:
https://conference.apnic.net/data/37/464xlat-apricot-2014_1393236641.pdf

A kernel with disable_ipv4 would be fairly usable on such a network
today, as long as you avoid AF_INET-specific apps.
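For context, NAT64 pairs with DNS64, which synthesizes IPv6 addresses for v4-only destinations by embedding the 32-bit IPv4 address in a /96 prefix; RFC 6052 defines the well-known prefix 64:ff9b::/96 for this. A minimal sketch of that synthesis (the function name here is ours, not any standard API):

```python
import ipaddress

# RFC 6052 well-known NAT64/DNS64 prefix.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize_nat64(ipv4: str,
                     prefix: ipaddress.IPv6Network = WKP) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix,
    as a DNS64 resolver would when fabricating a AAAA record."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(prefix.network_address) | int(v4))

# e.g. 192.0.2.1 behind the well-known prefix -> 64:ff9b::c000:201
```

A v6-only client then talks to that synthesized address, and the NAT64 box at the border translates back to IPv4.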

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 00:02:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 00:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIRw8-0001xQ-VX; Wed, 26 Feb 2014 00:01:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jtd@galois.com>) id 1WIRw7-0001xL-Ap
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 00:01:19 +0000
Received: from [85.158.143.35:63096] by server-3.bemta-4.messagelabs.com id
	9C/62-11539-ECE2D035; Wed, 26 Feb 2014 00:01:18 +0000
X-Env-Sender: jtd@galois.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393372876!8270154!1
X-Originating-IP: [66.193.37.198]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22877 invoked from network); 26 Feb 2014 00:01:17 -0000
Received: from quintic.galois.com (HELO mail.galois.com) (66.193.37.198)
	by server-6.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 00:01:17 -0000
Received: from hurricane.galois.com (hurricane.galois.com
	[IPv6:2001:4870:e08e:200:5054:ff:fefa:ce41])
	by mail.galois.com (8.14.4/8.14.4) with ESMTP id s1Q01Bv2007031
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256
	verify=OK); Tue, 25 Feb 2014 16:01:15 -0800
Received: from galois.com ([IPv6:2001:4870:e08e:201:ad17:382f:3d12:706f])
	(authenticated bits=0)
	by hurricane.galois.com (8.14.4/8.14.4) with ESMTP id s1Q01BnW007495
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Tue, 25 Feb 2014 16:01:11 -0800
Date: Tue, 25 Feb 2014 16:00:53 -0800
From: Jonathan Daugherty <jtd@galois.com>
To: Xen Development List <xen-devel@lists.xen.org>
Message-ID: <20140226000053.GE45200@galois.com>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.22 (2013-10-16)
X-Spam-Status: No, score=-0.5 required=4.5 tests=BAYES_00,RP_MATCHES_RCVD
	shortcircuit=no autolearn=ham version=3.3.1
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on quintic.galois.com
Cc: Adam Wick <awick@galois.com>
Subject: [Xen-devel] Xen on ARM: domU ramdisk support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

I'm attempting to load a Linux domU on an Arndale board, but the
'ramdisk' setting in my domain configuration file appears to have no
impact on the Linux kernel I'm booting.  What is the status of support
for domU initrds in Xen on ARM at the moment?

I took a look at libxl and found that perhaps libxl_arm.c should be
doing something to set up the FDT nodes describing the initrd start/end,
but I see no evidence of it there.
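For reference, the FDT nodes in question are the standard /chosen properties that Linux consults to locate an initrd; a minimal sketch, with hypothetical load addresses:

```dts
/ {
	chosen {
		/* Hypothetical addresses; Linux reads these two
		   properties to find the initrd image in memory. */
		linux,initrd-start = <0x48000000>;
		linux,initrd-end   = <0x48400000>;
	};
};
```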

I may have time to write a patch if this is something that needs doing.

Thanks!

-- 
  Jonathan Daugherty
  Software Engineer
  Galois, Inc.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 00:09:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 00:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIS3x-00026G-0J; Wed, 26 Feb 2014 00:09:25 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liuw@liuw.name>) id 1WIS3v-00026B-Pn
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 00:09:24 +0000
Received: from [85.158.139.211:18297] by server-1.bemta-5.messagelabs.com id
	8A/40-12859-3B03D035; Wed, 26 Feb 2014 00:09:23 +0000
X-Env-Sender: liuw@liuw.name
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393373362!6244786!1
X-Originating-IP: [74.125.82.178]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22749 invoked from network); 26 Feb 2014 00:09:22 -0000
Received: from mail-we0-f178.google.com (HELO mail-we0-f178.google.com)
	(74.125.82.178)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 00:09:22 -0000
Received: by mail-we0-f178.google.com with SMTP id q59so1045996wes.37
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 16:09:21 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type:content-transfer-encoding;
	bh=+lwXxnum9OZIAMLUXMX+KY06amADR3KcyBiD71fcB3U=;
	b=YUNQSPBPml+/c2Al+PQAyWIFo4FDyJ8cVfD84J1TLPx31kTEXa5Auf+8Ji0dOWjs4N
	UgHAv09e9/2w8NhnOK8+L/shlOW8FurX2hUdSdr+v1329tYCZtCPGucDPX1xL6RYkisV
	QkZ0XLPIAWVbzKZBgm9IwdnXYQl2xCG577Jb7iLc7+2ugCBeHwmn1Loqz5qtY9cQ2iSf
	9FjuJs9fyklwoDOjhhV/YSMV89K1uBfR+nHgIimW70gj1uXXhAPh7wg49u96OR7RmURF
	3c9tb6WPcYjMWgCpxdEgtIODHe9Jyqwcy69Bphfz6yBiFb+66F4uR+VvlvGZ6aXdXMfq
	QbdA==
X-Gm-Message-State: ALoCoQmgHyVdF0aqnCGkQFJGgFkQ17tcTjnYBVYDBtjjSV4YcTqwKwG5zannQM0UDXpq/5F1TaGY
X-Received: by 10.180.164.106 with SMTP id yp10mr2373179wib.48.1393373361666; 
	Tue, 25 Feb 2014 16:09:21 -0800 (PST)
MIME-Version: 1.0
Received: by 10.227.12.199 with HTTP; Tue, 25 Feb 2014 16:08:51 -0800 (PST)
In-Reply-To: <5F9A4C4C-1797-4E4B-9C33-C01CCB59B78D@uva.nl>
References: <5F9A4C4C-1797-4E4B-9C33-C01CCB59B78D@uva.nl>
From: Wei Liu <liuw@liuw.name>
Date: Wed, 26 Feb 2014 00:08:51 +0000
Message-ID: <CAOsiSVWLBGs+mZ24zHKLoMvTnZ1SOhbAFmDKRJExrm5r_RsMJA@mail.gmail.com>
To: Jeroen van der Ham <vdham@uva.nl>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [BUG] xen-create-image memory vs config file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 3:33 PM, Jeroen van der Ham <vdham@uva.nl> wrote:
> Hi,
>
> I'm trying to use xen-create-image to create an installed Xen machine. If I do the following:
>

xen-create-image is part of xen-tools, which is not developed by core
Xen developers. So this one should be reported to the xen-tools
maintainer(s).

> # sudo xen-create-image --hostname=foobar --memory=2048 --swap=1024M --size=10G --lvm=VolumeGroupXen --dist=wheezy --fs=ext3 --mirror=http://ftp.nl.debian.org/debian --genpass=1 --vcpus=2
> ERROR: 'memory' argument takes a suffixed integer.
>
> This does not work, because the memory argument is incorrect. So I adapt it:
>
> # sudo xen-create-image --hostname=foobar --memory=2048M --swap=1024M --size=10G --lvm=VolumeGroupXen --dist=wheezy --fs=ext3 --mirror=http://ftp.nl.debian.org/debian --genpass=1 --vcpus=2
>

I remember some xen-tools versions have a bug regarding memory size. Are
you using Debian's package version? Could you try the latest
xen-tools from upstream? Did changing "M" to "Mb" help (it should work
for both Squeeze and Wheezy)?

> This installs the machine successfully. If I then continue:
>
> # sudo xl create /etc/xen/foobar.cfg
> Parsing config from /etc/xen/foobar.cfg
> /etc/xen/foobar.cfg:13: warning: parameter `memory' is not a valid number
>
> This means I have to edit the configuration file and remove the "M" from the memory number.
>
> Seems to me like xen-create-image and xl need to have a friendly agreement on how to handle memory numbers.
>

I wrote a patch some time ago to make the parser handle memory suffixes,
but it was not upstreamed. You're welcome to take that and upstream it,
or implement some cleaner solution.

Wei.

> Regards,
> Jeroen.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
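The message above concerns parsing memory sizes with unit suffixes (e.g. "2048M" vs a bare "2048"). A hedged sketch of such a parser, normalising to MiB; this is a hypothetical helper of our own, not xen-tools or xl code:

```python
import re

# Multipliers relative to MiB; a bare number is taken as MiB,
# matching xen-create-image's apparent default unit (an assumption).
_SUFFIX_MIB = {"": 1, "k": 1 / 1024, "m": 1, "g": 1024, "t": 1024 * 1024}

def parse_mem_mib(value: str) -> int:
    """Parse '512', '2048M', '2048Mb', '2G', etc. into whole MiB."""
    m = re.fullmatch(r"(\d+)\s*([kKmMgGtT]?)[bB]?", value.strip())
    if not m:
        raise ValueError(f"not a valid memory size: {value!r}")
    return int(int(m.group(1)) * _SUFFIX_MIB[m.group(2).lower()])
```

With a parser like this on both sides, "2048M" on the command line and in the generated config file would mean the same thing.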

From xen-devel-bounces@lists.xen.org Wed Feb 26 00:15:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 00:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIS9L-0002Dt-QH; Wed, 26 Feb 2014 00:14:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WIS9K-0002Do-1T
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 00:14:58 +0000
Received: from [85.158.139.211:32911] by server-8.bemta-5.messagelabs.com id
	79/EF-05298-1023D035; Wed, 26 Feb 2014 00:14:57 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393373694!6253170!1
X-Originating-IP: [209.85.220.176]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20834 invoked from network); 26 Feb 2014 00:14:55 -0000
Received: from mail-vc0-f176.google.com (HELO mail-vc0-f176.google.com)
	(209.85.220.176)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 00:14:55 -0000
Received: by mail-vc0-f176.google.com with SMTP id la4so166187vcb.21
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 16:14:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Gy6s4233zbPYD9Q8puYpi+UFhTOqUjYaCQ82p5am5K8=;
	b=Ht6oxRV0R4KxMlyGWpGdhscXTIA5qNJXM+GyHbQ/f0HwGhAddiyCeOJHqolgbvA6MK
	FDx/K6T+rQomRuU4g3OSNOfiTbdDidLG7PH0nJOiEEo2WjFrmb+ISvk2xK3ug01bfALK
	Yn497BSGyCurCptBkderRVEk4JRJBTlfVcSWLufj8VzZcy4OzXOu8uq9AoI/BwR2BhbU
	ACYzphr6f7pbAeeCpidoDAHkl+MQMkUOeHYgppnLJZP+Axrs5CmApxuj/s5h23aXp9BQ
	viRhYrfzmeCfmgNyCEFnum8HTBNKJJhhcBjPsJH/v7Nj163rvoGl+d6fuINURDXn4LgN
	Q+4w==
MIME-Version: 1.0
X-Received: by 10.58.100.100 with SMTP id ex4mr3399091veb.2.1393373694022;
	Tue, 25 Feb 2014 16:14:54 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Tue, 25 Feb 2014 16:14:53 -0800 (PST)
Date: Tue, 25 Feb 2014 16:14:53 -0800
Message-ID: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen hypervisor panic in SuSE 11 SP2 (Xen 4.1.2_14-0.5.5)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0289617559933607328=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0289617559933607328==
Content-Type: multipart/alternative; boundary=047d7b33c70612594c04f34418a2

--047d7b33c70612594c04f34418a2
Content-Type: text/plain; charset=ISO-8859-1

Hi,

We are using SuSE 11 SP2 Xen, which has 'Xen 4.1.2_14-0.5.5', and we have
noticed that Xen hypervisor 4.1.2 panics very frequently when we are about
to halt/power off the board. We have two VMs which we shut down gracefully.

Moreover, even though the command is to power off/halt the board, the Xen
hypervisor decides to reboot instead when it panics. I think this is a bug.
I'm not sure whether it's addressed/fixed in the Xen 4.2.2 release.

What are our options here assuming upgrading to Xen 4.2.2 (SuSE 11 SP3) is
the only possibility?


Stopping udevd:                                                       done

Set Hardware Clock to the current System Time                         done

Sending all processes the TERM signal...                              done

Sending all processes the KILL signal...                              done

The system will be halted immediately.



/dev/sda:



/dev/sda:

issuing standby command

(XEN) irq.c:1723: dom0: forcing unbind of pirq 313

(XEN) irq.c:1723: dom0: forcing unbind of pirq 314

(XEN) irq.c:1723: dom0: forcing unbind of pirq 315

(XEN) irq.c:1723: dom0: forcing unbind of pirq 316

(XEN) irq.c:1723: dom0: forcing unbind of pirq 317

(XEN) irq.c:1723: dom0: forcing unbind of pirq 318

(XEN) irq.c:1723: dom0: forcing unbind of pirq 319

(XEN) irq.c:1723: dom0: forcing unbind of pirq 320

(XEN) irq.c:1723: dom0: forcing unbind of pirq 312

[ 1092.811272] Power down.

(XEN) Disabling non-boot CPUs ...

(XEN) Broke affinity for irq 8

(XEN) Broke affinity for irq 8

(XEN) ----[ Xen-4.1.2_14-0.5.5  x86_64  debug=n  Not tainted ]----

(XEN) CPU:    0

(XEN) RIP:    e008:[<ffff82c480125247>] gmtime+0x9c7/0xb90

(XEN) RFLAGS: 0000000000010087   CONTEXT: hypervisor

(XEN) rax: ffff83102940bec0   rbx: 0000000000000000   rcx: 0000000000000000

(XEN) rdx: ffff831033a69220   rsi: ffff831033a69220   rdi: ffff82c4804d0f20

(XEN) rbp: 0000000000000000   rsp: ffff82c48047fd90   r8:  0000000000000000

(XEN) r9:  ffff831029804040   r10: 0000010e1aeca268   r11: ffff83007f7d6060

(XEN) r12: ffff831033a69200   r13: ffff82c4804d0f00   r14: 0000000000000000

(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000001426f0

(XEN) cr3: 000000006e08d000   cr2: 0000000000000008

(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008

(XEN) Xen stack trace from rsp=ffff82c48047fd90:

(XEN)    0000000000000003 0000000000008008 0000000000000003 ffff82c48043d208

(XEN)    ffff82c48043d200 ffff82c48043cdc8 0000000000008000 ffff82c48011460d

(XEN)    0000000000000000 0000000000000000 ffff83007f7e815c 0000000000000000

(XEN)    0000000000000000 0000000000000003 0000010c12190d0a ffff82c4804d0d80

(XEN)    ffff82c4804d0d60 ffff82c4801025f0 ffff82c4804d0d60 ffff82c48043cdc0

(XEN)    0000000000000000 0000000000000003 ffff82c4804d0d08 ffff82c48010287c

(XEN)    0000000000000005 0000000000000005 0000000000000000 ffff82c48019696b

(XEN)    0000000000000000 ffff82c4804d0d80 ffff82c4804d0d60 ffff83201953e910

(XEN)    ffff83007f7e8000 ffff82c4804d0d08 0000010c12190d0a ffff82c4804d0d80

(XEN)    ffff82c4804d0d60 ffff82c480105dbe ffff83007f7e81c0 0000000000000000

(XEN)    ffff82c4804d0e10 ffff82c480124580 ffff82c4804d0df0 ffff82c480124788

(XEN)    ffff82c48047ff18 ffff83007edfc000 ffff82c4804d0d60 ffff82c480153813

(XEN)    ffff83007edfc000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 ffffffff8070df60 ffff8801eeefc010 0000000000000246

(XEN)    0000000000007ff0 0000000000000000 ffff8801f7f86d00 0000000000000000

(XEN)    ffffffff8000330a 0000000000000000 0000000000000026 0000000000000002

(XEN)    0000010000000000 ffffffff8000330a 000000000000e033 0000000000000246

(XEN)    ffff8801eeefdf18 000000000000e02b 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff83007edfc000

(XEN)    0000000000000000 0000000000000000

(XEN) Xen call trace:

(XEN)    [<ffff82c480125247>] gmtime+0x9c7/0xb90

(XEN)    [<ffff82c48011460d>] notifier_call_chain+0x7d/0x1250

(XEN)    [<ffff82c4801025f0>] cpu_down+0xf0/0x150

(XEN)    [<ffff82c48010287c>] disable_nonboot_cpus+0xac/0x140

(XEN)    [<ffff82c48019696b>] acpi_enter_sleep_state+0x27b/0x500

(XEN)    [<ffff82c480105dbe>] vcpu_unpause+0x10e/0x120

(XEN)    [<ffff82c480124580>] do_sysctl+0x920/0xa00

(XEN)    [<ffff82c480124788>] do_tasklet+0x78/0xb0

(XEN)    [<ffff82c480153813>] idle_loop+0x23/0x50

(XEN)

(XEN) Pagetable walk from 0000000000000008:

(XEN)  L4[0x000] = 000000103ffed063 5555555555555555

(XEN)  L3[0x000] = 000000103ffec063 5555555555555555

(XEN)  L2[0x000] = 000000103ffeb063 5555555555555555

(XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff

(XEN)

(XEN) ****************************************

(XEN) Panic on CPU 0:

(XEN) FATAL PAGE FAULT

(XEN) [error_code=0002]

(XEN) Faulting linear address: 0000000000000008

(XEN) ****************************************

(XEN)

*(XEN) Reboot in five seconds...*



Thanks

/Saurabh

--047d7b33c70612594c04f34418a2--


--===============0289617559933607328==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0289617559933607328==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 00:15:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 00:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIS9L-0002Dt-QH; Wed, 26 Feb 2014 00:14:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <saurabh.globe@gmail.com>) id 1WIS9K-0002Do-1T
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 00:14:58 +0000
Received: from [85.158.139.211:32911] by server-8.bemta-5.messagelabs.com id
	79/EF-05298-1023D035; Wed, 26 Feb 2014 00:14:57 +0000
X-Env-Sender: saurabh.globe@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393373694!6253170!1
X-Originating-IP: [209.85.220.176]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20834 invoked from network); 26 Feb 2014 00:14:55 -0000
Received: from mail-vc0-f176.google.com (HELO mail-vc0-f176.google.com)
	(209.85.220.176)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 00:14:55 -0000
Received: by mail-vc0-f176.google.com with SMTP id la4so166187vcb.21
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 16:14:54 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=Gy6s4233zbPYD9Q8puYpi+UFhTOqUjYaCQ82p5am5K8=;
	b=Ht6oxRV0R4KxMlyGWpGdhscXTIA5qNJXM+GyHbQ/f0HwGhAddiyCeOJHqolgbvA6MK
	FDx/K6T+rQomRuU4g3OSNOfiTbdDidLG7PH0nJOiEEo2WjFrmb+ISvk2xK3ug01bfALK
	Yn497BSGyCurCptBkderRVEk4JRJBTlfVcSWLufj8VzZcy4OzXOu8uq9AoI/BwR2BhbU
	ACYzphr6f7pbAeeCpidoDAHkl+MQMkUOeHYgppnLJZP+Axrs5CmApxuj/s5h23aXp9BQ
	viRhYrfzmeCfmgNyCEFnum8HTBNKJJhhcBjPsJH/v7Nj163rvoGl+d6fuINURDXn4LgN
	Q+4w==
MIME-Version: 1.0
X-Received: by 10.58.100.100 with SMTP id ex4mr3399091veb.2.1393373694022;
	Tue, 25 Feb 2014 16:14:54 -0800 (PST)
Received: by 10.58.18.136 with HTTP; Tue, 25 Feb 2014 16:14:53 -0800 (PST)
Date: Tue, 25 Feb 2014 16:14:53 -0800
Message-ID: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
From: Saurabh Mishra <saurabh.globe@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen hypervisor panic in SuSE 11 SP2 (Xen 4.1.2_14-0.5.5)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0289617559933607328=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0289617559933607328==
Content-Type: multipart/alternative; boundary=047d7b33c70612594c04f34418a2

--047d7b33c70612594c04f34418a2
Content-Type: text/plain; charset=ISO-8859-1

Hi,

We are using SuSE 11 SP2 Xen, which ships 'Xen 4.1.2_14-0.5.5', and we have
noticed that the Xen 4.1.2 hypervisor panics very frequently when we are
about to halt/power off the board. We have two VMs, which we shut down
gracefully beforehand.

Moreover, even though the command issued is to power off/halt the board, the
Xen hypervisor decides to reboot instead when it panics. I think this is a
bug. I'm not sure whether it's addressed/fixed in the Xen 4.2.2 release.

What are our options here, assuming upgrading to Xen 4.2.2 (SuSE 11 SP3) is
the only possibility?


Stopping udevd:                                                       done

Set Hardware Clock to the current System Time                         done

Sending all processes the TERM signal...                              done

Sending all processes the KILL signal...                              done

*The system will be halted immediately.*



/dev/sda:



/dev/sda:

issuing standby command

(XEN) irq.c:1723: dom0: forcing unbind of pirq 313

(XEN) irq.c:1723: dom0: forcing unbind of pirq 314

(XEN) irq.c:1723: dom0: forcing unbind of pirq 315

(XEN) irq.c:1723: dom0: forcing unbind of pirq 316

(XEN) irq.c:1723: dom0: forcing unbind of pirq 317

(XEN) irq.c:1723: dom0: forcing unbind of pirq 318

(XEN) irq.c:1723: dom0: forcing unbind of pirq 319

(XEN) irq.c:1723: dom0: forcing unbind of pirq 320

(XEN) irq.c:1723: dom0: forcing unbind of pirq 312

[ 1092.811272] Power down.

(XEN) Disabling non-boot CPUs ...

(XEN) Broke affinity for irq 8

(XEN) Broke affinity for irq 8

(XEN) ----[ Xen-4.1.2_14-0.5.5  x86_64  debug=n  Not tainted ]----

(XEN) CPU:    0

(XEN) RIP:    e008:[<ffff82c480125247>] gmtime+0x9c7/0xb90

(XEN) RFLAGS: 0000000000010087   CONTEXT: hypervisor

(XEN) rax: ffff83102940bec0   rbx: 0000000000000000   rcx: 0000000000000000

(XEN) rdx: ffff831033a69220   rsi: ffff831033a69220   rdi: ffff82c4804d0f20

(XEN) rbp: 0000000000000000   rsp: ffff82c48047fd90   r8:  0000000000000000

(XEN) r9:  ffff831029804040   r10: 0000010e1aeca268   r11: ffff83007f7d6060

(XEN) r12: ffff831033a69200   r13: ffff82c4804d0f00   r14: 0000000000000000

(XEN) r15: 0000000000000000   cr0: 000000008005003b   cr4: 00000000001426f0

(XEN) cr3: 000000006e08d000   cr2: 0000000000000008

(XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008

(XEN) Xen stack trace from rsp=ffff82c48047fd90:

(XEN)    0000000000000003 0000000000008008 0000000000000003 ffff82c48043d208

(XEN)    ffff82c48043d200 ffff82c48043cdc8 0000000000008000 ffff82c48011460d

(XEN)    0000000000000000 0000000000000000 ffff83007f7e815c 0000000000000000

(XEN)    0000000000000000 0000000000000003 0000010c12190d0a ffff82c4804d0d80

(XEN)    ffff82c4804d0d60 ffff82c4801025f0 ffff82c4804d0d60 ffff82c48043cdc0

(XEN)    0000000000000000 0000000000000003 ffff82c4804d0d08 ffff82c48010287c

(XEN)    0000000000000005 0000000000000005 0000000000000000 ffff82c48019696b

(XEN)    0000000000000000 ffff82c4804d0d80 ffff82c4804d0d60 ffff83201953e910

(XEN)    ffff83007f7e8000 ffff82c4804d0d08 0000010c12190d0a ffff82c4804d0d80

(XEN)    ffff82c4804d0d60 ffff82c480105dbe ffff83007f7e81c0 0000000000000000

(XEN)    ffff82c4804d0e10 ffff82c480124580 ffff82c4804d0df0 ffff82c480124788

(XEN)    ffff82c48047ff18 ffff83007edfc000 ffff82c4804d0d60 ffff82c480153813

(XEN)    ffff83007edfc000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 ffffffff8070df60 ffff8801eeefc010 0000000000000246

(XEN)    0000000000007ff0 0000000000000000 ffff8801f7f86d00 0000000000000000

(XEN)    ffffffff8000330a 0000000000000000 0000000000000026 0000000000000002

(XEN)    0000010000000000 ffffffff8000330a 000000000000e033 0000000000000246

(XEN)    ffff8801eeefdf18 000000000000e02b 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 ffff83007edfc000

(XEN)    0000000000000000 0000000000000000

(XEN) Xen call trace:

(XEN)    [<ffff82c480125247>] gmtime+0x9c7/0xb90

(XEN)    [<ffff82c48011460d>] notifier_call_chain+0x7d/0x1250

(XEN)    [<ffff82c4801025f0>] cpu_down+0xf0/0x150

(XEN)    [<ffff82c48010287c>] disable_nonboot_cpus+0xac/0x140

(XEN)    [<ffff82c48019696b>] acpi_enter_sleep_state+0x27b/0x500

(XEN)    [<ffff82c480105dbe>] vcpu_unpause+0x10e/0x120

(XEN)    [<ffff82c480124580>] do_sysctl+0x920/0xa00

(XEN)    [<ffff82c480124788>] do_tasklet+0x78/0xb0

(XEN)    [<ffff82c480153813>] idle_loop+0x23/0x50

(XEN)

(XEN) Pagetable walk from 0000000000000008:

(XEN)  L4[0x000] = 000000103ffed063 5555555555555555

(XEN)  L3[0x000] = 000000103ffec063 5555555555555555

(XEN)  L2[0x000] = 000000103ffeb063 5555555555555555

(XEN)  L1[0x000] = 0000000000000000 ffffffffffffffff

(XEN)

(XEN) ****************************************

(XEN) Panic on CPU 0:

(XEN) FATAL PAGE FAULT

(XEN) [error_code=0002]

(XEN) Faulting linear address: 0000000000000008

(XEN) ****************************************

(XEN)

*(XEN) Reboot in five seconds...*



Thanks

/Saurabh

--047d7b33c70612594c04f34418a2--


--===============0289617559933607328==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0289617559933607328==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 00:40:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 00:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WISXz-0002TH-Pp; Wed, 26 Feb 2014 00:40:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dgdegra@tycho.nsa.gov>) id 1WISXy-0002TC-Ed
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 00:40:26 +0000
Received: from [85.158.137.68:23556] by server-5.bemta-3.messagelabs.com id
	CE/FA-04712-9F73D035; Wed, 26 Feb 2014 00:40:25 +0000
X-Env-Sender: dgdegra@tycho.nsa.gov
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393375224!4215809!1
X-Originating-IP: [63.239.67.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26786 invoked from network); 26 Feb 2014 00:40:24 -0000
Received: from emvm-gh1-uea09.nsa.gov (HELO nsa.gov) (63.239.67.10)
	by server-14.tower-31.messagelabs.com with SMTP;
	26 Feb 2014 00:40:24 -0000
X-TM-IMSS-Message-ID: <c438cd15000ddad8@nsa.gov>
Received: from tarius.tycho.ncsc.mil ([144.51.242.1]) by nsa.gov
	([63.239.67.10]) with ESMTP (TREND IMSS SMTP Service 7.1;
	TLSv1/SSLv3 DHE-RSA-AES256-SHA (256/256)) id c438cd15000ddad8 ;
	Tue, 25 Feb 2014 19:42:10 -0500
Received: from moss-nexus.epoch.ncsc.mil (moss-nexus [192.168.25.48])
	by tarius.tycho.ncsc.mil (8.14.4/8.14.4) with ESMTP id s1Q0eIBN018987; 
	Tue, 25 Feb 2014 19:40:19 -0500
Message-ID: <530D37F2.3000303@tycho.nsa.gov>
Date: Tue, 25 Feb 2014 19:40:18 -0500
From: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Organization: National Security Agency
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
In-Reply-To: <530C81A4020000780011F182@nat28.tlf.novell.com>
Subject: Re: [Xen-devel] [PATCH 0/4] xsm/flask: more XSA-84 follow-ups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/25/2014 05:42 AM, Jan Beulich wrote:
> 1: flask: add compat mode guest support
> 2: flask: use xzalloc()
> 3: xsm: use # printk format modifier
> 4: xsm: streamline xsm_default_action()
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

For all four patches:
Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 02:38:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 02:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIUNn-0007AF-Ff; Wed, 26 Feb 2014 02:38:03 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WIUNk-0007AA-P2
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 02:38:01 +0000
Received: from [85.158.139.211:28859] by server-14.bemta-5.messagelabs.com id
	68/70-27598-7835D035; Wed, 26 Feb 2014 02:37:59 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393382275!6225947!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19590 invoked from network); 26 Feb 2014 02:37:57 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-15.tower-206.messagelabs.com with SMTP;
	26 Feb 2014 02:37:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,544,1389715200"; 
   d="scan'208";a="9605315"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 26 Feb 2014 10:33:38 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1Q2bFf3002196;
	Wed, 26 Feb 2014 10:37:22 +0800
Received: from [10.167.226.103] ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014022610265010-224563 ;
	Wed, 26 Feb 2014 10:26:50 +0800 
Message-ID: <530D521D.6070904@cn.fujitsu.com>
Date: Wed, 26 Feb 2014 10:31:57 +0800
From: Lai Jiangshan <laijs@cn.fujitsu.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.2.9) Gecko/20100921 Fedora/3.1.4-1.fc14 Thunderbird/3.1.4
MIME-Version: 1.0
To: Lai Jiangshan <laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/26 10:26:50,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/26 10:35:08,
	Serialize complete at 2014/02/26 10:35:08
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xen.org,
	Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH 00/10 V7] Remus/Libxl: Network buffering
	support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Ian Campbell, Ian Jackson,

Ping.
This patch set is ready for 4.5-unstable.
It has been through several rounds of review and comments.

Could you merge it for 4.5-unstable? Or are there any further comments?

Thanks,
Lai

On 02/10/2014 05:19 PM, Lai Jiangshan wrote:
> This patch series adds support for network buffering in the Remus
> codebase in libxl. 
> 
> Changes in V7:
>   Applied missing comments (by IanJ).
>   Applied Shriram's comments.
> 
>   Merged the tangled netbuffering setup/teardown code into one patch.
>   (2/6/8 in V6 => 5 in V7; 9/10 in V6 => 7 in V7)
> 
> Changes in V6:
>   Applied Ian Jackson's comments on the V5 series.
>   [PATCH 2/4 V5] is split into smaller functional pieces.
> 
>   [PATCH 4/4 V5] --> [PATCH 13/13]: netbuffer is enabled by default.
> 
> Changes in V5:
> 
> Merge hotplug script patch (2/5) and hotplug script setup/teardown
> patch (3/5) into a single patch.
> 
> Changes in V4:
> 
> [1/5] Remove check for libnl command line utils in autoconf checks
> 
> [2/5] minor nits
> 
> [3/5] define LIBXL_HAVE_REMUS_NETBUF in libxl.h
> 
> [4/5] clean ups. Make the usleep in checkpoint callback asynchronous
> 
> [5/5] minor nits
> 
> Changes in V3:
> [1/5] Fix redundant checks in configure scripts
>       (based on Ian Campbell's suggestions)
> 
> [2/5] Introduce locking in the script, during IFB setup.
>       Add xenstore paths used by netbuf scripts
>       to xenstore-paths.markdown
> 
> [3/5] Hotplug script setup/teardown invocations are now asynchronous,
>       following IanJ's feedback.  However, the invocations are still
>       sequential.
> 
> [5/5] Allow per-domain specification of netbuffer scripts in the xl remus
>       command.
> 
> And minor nits throughout the series based on feedback from
> the last version
> 
> Changes in V2:
> [1/5] Configure script will automatically enable/disable network
>       buffer support depending on the availability of the appropriate
>       libnl3 version. [If libnl3 is unavailable, a warning message will be
>       printed to let the user know that the feature has been disabled.]
> 
>       use macros from pkg.m4 instead of pkg-config commands
>       removed redundant checks for libnl3 libraries.
> 
> [3,4/5] - Minor nits.
> 
> Version 1:
> 
> [1/5] Changes to autoconf scripts to check for libnl3. Add linker flags
>       to libxl Makefile.
> 
> [2/5] External script to setup/teardown network buffering using libnl3's
>       CLI. This script will be invoked by libxl before starting Remus.
>       The script's main job is to bring up an IFB device with plug qdisc
>       attached to it.  It then re-routes egress traffic from the guest's
>       vif to the IFB device.
> 
> [3/5] Libxl code to invoke the external setup script, followed by netlink
>       related setup to obtain a handle on the output buffers attached
>       to each vif.
> 
> [4/5] Libxl interaction with network buffer module in the kernel via
>       libnl3 API.
> 
> [5/5] xl cmdline switch to explicitly enable network buffering when
>       starting remus.
> 
> 
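The IFB/plug-qdisc setup that the hotplug script in [2/5] performs can be sketched as a plain shell outline. This is a hypothetical illustration, not the actual tools/hotplug/Linux/remus-netbuf-setup script: it substitutes iproute2's ip/tc commands for libnl3's CLI tools, and the interface names (vif1.0, ifb0) and the run()/DRY_RUN wrapper are assumptions made for the sketch.

```shell
# Hypothetical sketch of the Remus network-buffering setup (NOT the real
# remus-netbuf-setup script, which drives libnl3's CLI instead of ip/tc).

run() {
    # Print instead of executing when DRY_RUN is set, so the logic can be
    # inspected without root privileges or an actual Xen vif.
    if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi
}

setup_netbuf() {
    vif="$1"    # guest's backend network interface in dom0 (illustrative name)
    ifb="$2"    # IFB device that will hold the buffered egress packets

    # Bring up an IFB device with a plug qdisc attached to it.
    run ip link add name "$ifb" type ifb
    run ip link set dev "$ifb" up
    run tc qdisc add dev "$ifb" root plug

    # Guest egress traffic arrives on the vif as ingress from dom0's point
    # of view; redirect it to the IFB device, where the plug qdisc buffers
    # it until the checkpoint is acknowledged.
    run tc qdisc add dev "$vif" ingress
    run tc filter add dev "$vif" parent ffff: protocol ip prio 10 \
        u32 match u32 0 0 action mirred egress redirect dev "$ifb"
}

# Demo: print the commands that a setup invocation would run.
DRY_RUN=1 setup_netbuf vif1.0 ifb0
```

With DRY_RUN=1 the function only prints the commands it would issue, which makes the traffic-redirection logic inspectable on any machine.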
>   A few things to note (by Shriram):
> 
>     a) Based on previous email discussions, the setup/teardown task has
>     been moved to a hotplug-style shell script which can be customized as
>     desired, instead of implementing it as C code inside libxl.
> 
>     b) Libnl3 is not available on NetBSD, nor is it available on CentOS
>     (Linux).  So I have made network buffering support an optional feature
>     so that it can be disabled if desired.
> 
>     c) Since libnl3 is not available on NetBSD, I have put the setup script
>     under the tools/hotplug/Linux folder.
> 
> thanks
> Lai
> 
> 
> 
> Shriram Rajagopalan (8):
>   remus: add libnl3 dependency to autoconf scripts
>   tools/libxl: update libxl_domain_remus_info
>   tools/libxl: introduce a new structure libxl__remus_state
>   remus: introduce a function to check whether network buffering is
>     enabled
>   remus: Remus network buffering core and APIs to setup/teardown
>   remus: implement the APIs to buffer/release packages
>   libxl: use the APIs to setup/teardown network buffering
>   libxl: rename remus_failover_cb() to remus_replication_failure_cb()
>   libxl: control network buffering in remus callbacks
>   libxl: network buffering cmdline switch
> 
>  README                                 |    4 +
>  config/Tools.mk.in                     |    3 +
>  docs/man/xl.conf.pod.5                 |    6 +
>  docs/man/xl.pod.1                      |   11 +-
>  docs/misc/xenstore-paths.markdown      |    4 +
>  tools/configure.ac                     |   15 +
>  tools/hotplug/Linux/Makefile           |    1 +
>  tools/hotplug/Linux/remus-netbuf-setup |  183 +++++++++++
>  tools/libxl/Makefile                   |   11 +
>  tools/libxl/libxl.c                    |   48 ++-
>  tools/libxl/libxl.h                    |   13 +
>  tools/libxl/libxl_dom.c                |  118 ++++++--
>  tools/libxl/libxl_internal.h           |   54 +++-
>  tools/libxl/libxl_netbuffer.c          |  561 ++++++++++++++++++++++++++++++++
>  tools/libxl/libxl_nonetbuffer.c        |   56 ++++
>  tools/libxl/libxl_remus.c              |   64 ++++
>  tools/libxl/libxl_types.idl            |    2 +
>  tools/libxl/xl.c                       |    4 +
>  tools/libxl/xl.h                       |    1 +
>  tools/libxl/xl_cmdimpl.c               |   28 ++-
>  tools/libxl/xl_cmdtable.c              |    3 +
>  tools/remus/README                     |    6 +
>  22 files changed, 1155 insertions(+), 41 deletions(-)
>  create mode 100644 tools/hotplug/Linux/remus-netbuf-setup
>  create mode 100644 tools/libxl/libxl_netbuffer.c
>  create mode 100644 tools/libxl/libxl_nonetbuffer.c
>  create mode 100644 tools/libxl/libxl_remus.c
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 02:48:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 02:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIUYA-0007Jy-T7; Wed, 26 Feb 2014 02:48:46 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIUY9-0007Jt-SG
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 02:48:46 +0000
Received: from [85.158.137.68:32910] by server-1.bemta-3.messagelabs.com id
	40/03-17293-C065D035; Wed, 26 Feb 2014 02:48:44 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393382921!2959046!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25604 invoked from network); 26 Feb 2014 02:48:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 02:48:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,544,1389744000"; d="scan'208";a="104154431"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 02:48:09 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 21:48:08 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIUXY-0000ND-Qd;
	Wed, 26 Feb 2014 02:48:08 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIUXY-00051t-GR;
	Wed, 26 Feb 2014 02:48:08 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25302-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 02:48:08 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25302: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25302 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25302/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  4c2502450ef1cc6fa1fd6daab8dbed8af1b88a29
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 351 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  4c2502450ef1cc6fa1fd6daab8dbed8af1b88a29
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 351 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 02:51:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 02:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIUag-0007Ph-FE; Wed, 26 Feb 2014 02:51:22 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <laijs@cn.fujitsu.com>) id 1WIUae-0007PZ-G8
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 02:51:20 +0000
Received: from [85.158.139.211:37810] by server-2.bemta-5.messagelabs.com id
	45/7C-23037-7A65D035; Wed, 26 Feb 2014 02:51:19 +0000
X-Env-Sender: laijs@cn.fujitsu.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393383074!6276936!1
X-Originating-IP: [222.73.24.84]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12293 invoked from network); 26 Feb 2014 02:51:16 -0000
Received: from cn.fujitsu.com (HELO song.cn.fujitsu.com) (222.73.24.84)
	by server-16.tower-206.messagelabs.com with SMTP;
	26 Feb 2014 02:51:16 -0000
X-IronPort-AV: E=Sophos;i="4.97,544,1389715200"; 
   d="scan'208";a="9605400"
Received: from unknown (HELO tang.cn.fujitsu.com) ([10.167.250.3])
	by song.cn.fujitsu.com with ESMTP; 26 Feb 2014 10:47:22 +0800
Received: from fnstmail02.fnst.cn.fujitsu.com (tang.cn.fujitsu.com [127.0.0.1])
	by tang.cn.fujitsu.com (8.14.3/8.13.1) with ESMTP id s1Q2p9cC003225;
	Wed, 26 Feb 2014 10:51:11 +0800
Received: from G08FNSTD100631.fnst.cn.fujitsu.com ([10.167.226.103])
	by fnstmail02.fnst.cn.fujitsu.com (Lotus Domino Release 8.5.3)
	with ESMTP id 2014022610484960-225134 ;
	Wed, 26 Feb 2014 10:48:49 +0800 
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: xen-devel@lists.xen.org
Date: Wed, 26 Feb 2014 10:53:29 +0800
Message-Id: <1393383209-4449-1-git-send-email-laijs@cn.fujitsu.com>
X-Mailer: git-send-email 1.7.4.4
In-Reply-To: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
References: <1392023972-24675-1-git-send-email-laijs@cn.fujitsu.com>
X-MIMETrack: Itemize by SMTP Server on mailserver/fnst(Release 8.5.3|September
	15, 2011) at 2014/02/26 10:48:49,
	Serialize by Router on mailserver/fnst(Release 8.5.3|September 15,
	2011) at 2014/02/26 10:48:52,
	Serialize complete at 2014/02/26 10:48:52
Cc: Ian Campbell <ian.campbell@citrix.com>,
	FNST-Wen Congyang <wency@cn.fujitsu.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jiang Yunhong <yunhong.jiang@intel.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Lai Jiangshan <laijs@cn.fujitsu.com>, Dong Eddie <eddie.dong@intel.com>,
	Shriram Rajagopalan <rshriram@cs.ubc.ca>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH RFC] remus: implement remus replicated
	checkpointing disk
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch implements the Remus replicated checkpointing disk.
It includes two parts:
  a generic framework for Remus replicated checkpointing disks
  a DRBD backend for replicated checkpointing disks
They will be split into separate files in the next round.
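For illustration only (not part of the patch): the framework's backend
dispatch -- try each registered disk type until one claims the disk -- can be
sketched as a small standalone program. All names here (rdisk, rdisk_type,
rdisk_setup, the 'r'-prefix probe rule) are hypothetical stand-ins for the
libxl__remus_disk_type machinery, not real libxl API.

```c
#include <stddef.h>

/* Hypothetical miniature of the pluggable disk-type vtable scheme. */
struct rdisk;

struct rdisk_type {
    const char *name;
    /* returns nonzero if this backend can handle the disk path */
    int (*probe)(const char *pdev_path);
    int (*postsuspend)(struct rdisk *d);
};

struct rdisk {
    const struct rdisk_type *type;
    const char *pdev_path;
};

/* Toy probe rule: pretend DRBD resources start with 'r'. */
static int drbd_probe(const char *p) { return p && p[0] == 'r'; }
static int drbd_postsuspend(struct rdisk *d) { (void)d; return 0; }

static const struct rdisk_type drbd_type = {
    "drbd", drbd_probe, drbd_postsuspend,
};

static const struct rdisk_type *types[] = { &drbd_type };

/* Try each registered type in order; bind the first that claims the disk. */
static int rdisk_setup(struct rdisk *d, const char *pdev_path)
{
    size_t j;

    for (j = 0; j < sizeof(types) / sizeof(types[0]); j++) {
        if (types[j]->probe(pdev_path)) {
            d->type = types[j];
            d->pdev_path = pdev_path;
            return 0;
        }
    }
    return -1; /* no backend supports this disk */
}
```

A disk that no backend claims is a setup failure, which mirrors how the
patch tears everything down when a type cannot be bound.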

The patch is still simple because the disk setup/teardown script is
still being implemented. I need to use libxl_ao to implement it, but
libxl_ao is hard to use: the work sequence has to be split, rather
awkwardly, into several callbacks, as in device_hotplug().

And because the remus disk script is unimplemented, drbd_setup() cannot
check the disk yet; it simply assumes the user has configured the disk
correctly.
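For illustration only (not part of the patch): the per-checkpoint call order
the patch wires into libxl_dom.c -- postsuspend after the domain suspends,
preresume before it resumes, commit once the device-model state is saved --
sketched as a standalone program. apply_all mirrors the early-out loop used
by the libxl__remus_disks_* helpers; all names here are hypothetical.

```c
#include <assert.h>

/* Hypothetical miniature of the libxl__remus_disks_* iteration helpers:
 * apply one callback to every disk, stopping at the first failure. */
typedef int (*disk_op)(int disk_id);

static int apply_all(disk_op op, int nr_disks)
{
    int i, rc = 0;

    for (i = 0; rc == 0 && i < nr_disks; i++)
        rc = op(i);
    return rc;
}

static int ok_op(int id)   { (void)id; return 0; }
static int fail_op(int id) { return id == 1 ? -1 : 0; }

/* One Remus epoch, in the order the patch drives it. Any failing stage
 * aborts the epoch, matching how a nonzero rc terminates Remus. */
static int one_epoch(disk_op postsuspend, disk_op preresume,
                     disk_op commit, int nr_disks)
{
    if (apply_all(postsuspend, nr_disks)) return -1;
    if (apply_all(preresume, nr_disks))   return -1;
    return apply_all(commit, nr_disks);
}
```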

This patch is *UNTESTED*.
(There is a problem with xl & drbd (without Remus) on my boxes.)

I would welcome as many *comments* as possible.

Thanks,
Lai

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
 tools/libxl/Makefile                      |    1 +
 tools/libxl/libxl_dom.c                   |   19 +++-
 tools/libxl/libxl_internal.h              |   10 ++
 tools/libxl/libxl_remus.c                 |    2 +
 tools/libxl/libxl_remus_replicated_disk.c |  219 +++++++++++++++++++++++++++++
 5 files changed, 249 insertions(+), 2 deletions(-)
 create mode 100644 tools/libxl/libxl_remus_replicated_disk.c

diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 218f55e..dbf5dd9 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -53,6 +53,7 @@ LIBXL_OBJS-y += libxl_nonetbuffer.o
 endif
 
 LIBXL_OBJS-y += libxl_remus.o
+LIBXL_OBJS-y += libxl_remus_replicated_disk.o
 
 LIBXL_OBJS-$(CONFIG_X86) += libxl_cpuid.o libxl_x86.o
 LIBXL_OBJS-$(CONFIG_ARM) += libxl_nocpuid.o libxl_arm.o
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index a4ffdfd..858f5be 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1251,9 +1251,14 @@ static int libxl__remus_domain_suspend_callback(void *data)
 
     STATE_AO_GC(dss->ao);
 
-    /* REMUS TODO: Issue disk checkpoint reqs. */
     int ok = libxl__domain_suspend_common_callback(data);
 
+    /* Issue disk checkpoint reqs. */
+    if (libxl__remus_disks_postsuspend(remus_state)) {
+        ok = 0;
+        goto out;
+    }
+
     if (!remus_state->netbuf_state || !ok) goto out;
 
     /* The domain was suspended successfully. Start a new network
@@ -1279,7 +1284,10 @@ static int libxl__remus_domain_resume_callback(void *data)
     if (libxl__domain_resume(gc, dss->domid, /* Fast Suspend */1))
         return 0;
 
-    /* REMUS TODO: Deal with disk. */
+    /* Deal with disk. */
+    if (libxl__remus_disks_preresume(dss->remus_state))
+        return 0;
+
     return 1;
 }
 
@@ -1326,6 +1334,13 @@ static void remus_checkpoint_dm_saved(libxl__egc *egc,
         goto out;
     }
 
+    rc = libxl__remus_disks_commit(remus_state);
+    if (rc) {
+        LOG(ERROR, "Failed to commit disks state."
+            " Terminating Remus.");
+        goto out;
+    }
+
     if (remus_state->netbuf_state) {
         rc = libxl__remus_netbuf_release_prev_epoch(gc, dss->domid,
                                                     remus_state);
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 1bd2bba..8933e5f 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2309,6 +2309,10 @@ typedef struct libxl__remus_state {
     void *netbuf_state;
     libxl__ev_time timeout;
     libxl__ev_child child;
+
+    /* remus disks state */
+    uint32_t nr_disks;
+    struct libxl__remus_disk **disks;
 } libxl__remus_state;
 
 _hidden int libxl__netbuffer_enabled(libxl__gc *gc);
@@ -2336,6 +2340,12 @@ _hidden int libxl__remus_netbuf_start_new_epoch(libxl__gc *gc, uint32_t domid,
 _hidden int libxl__remus_netbuf_release_prev_epoch(libxl__gc *gc, uint32_t domid,
                                                   libxl__remus_state *remus_state);
 
+_hidden int libxl__remus_disks_postsuspend(libxl__remus_state *state);
+_hidden int libxl__remus_disks_preresume(libxl__remus_state *state);
+_hidden int libxl__remus_disks_commit(libxl__remus_state *state);
+_hidden int libxl__remus_disks_setup(libxl__egc *egc, libxl__domain_suspend_state *dss);
+_hidden void libxl__remus_disks_teardown(libxl__remus_state *state);
+
 _hidden void libxl__remus_setup_initiate(libxl__egc *egc,
                                          libxl__domain_suspend_state *dss);
 
diff --git a/tools/libxl/libxl_remus.c b/tools/libxl/libxl_remus.c
index cdc1c16..92eb36a 100644
--- a/tools/libxl/libxl_remus.c
+++ b/tools/libxl/libxl_remus.c
@@ -23,6 +23,7 @@ void libxl__remus_setup_initiate(libxl__egc *egc,
                                  libxl__domain_suspend_state *dss)
 {
     libxl__ev_time_init(&dss->remus_state->timeout);
+    libxl__remus_disks_setup(egc, dss);
     if (!dss->remus_state->netbufscript)
         libxl__remus_setup_done(egc, dss, 0);
     else
@@ -51,6 +52,7 @@ void libxl__remus_teardown_initiate(libxl__egc *egc,
     /* stash rc somewhere before invoking teardown ops. */
     dss->remus_state->saved_rc = rc;
 
+    libxl__remus_disks_teardown(dss->remus_state);
     if (!dss->remus_state->netbuf_state)
         libxl__remus_teardown_done(egc, dss);
     else
diff --git a/tools/libxl/libxl_remus_replicated_disk.c b/tools/libxl/libxl_remus_replicated_disk.c
new file mode 100644
index 0000000..4b16403
--- /dev/null
+++ b/tools/libxl/libxl_remus_replicated_disk.c
@@ -0,0 +1,219 @@
+/*
+ * Copyright (C) 2013
+ * Author Lai Jiangshan <laijs@cn.fujitsu.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_osdeps.h" /* must come before any other headers */
+
+#include "libxl_internal.h"
+
+typedef struct libxl__remus_disk
+{
+    const struct libxl_device_disk *disk;
+    const struct libxl__remus_disk_type *type;
+
+    /* ao callbacks for setup & teardown script */
+    int (*setup_cb)(struct libxl__remus_disk *d);
+    int (*teardown_cb)(struct libxl__remus_disk *d);
+} libxl__remus_disk;
+
+typedef struct libxl__remus_disk_type
+{
+    /* checkpointing */
+    int (*postsuspend)(libxl__remus_disk *d);
+    int (*preresume)(libxl__remus_disk *d);
+    int (*commit)(libxl__remus_disk *d);
+
+    /* setup & teardown */
+    libxl__remus_disk *(*setup)(libxl__gc *gc, libxl_device_disk *disk);
+    void (*teardown)(libxl__remus_disk *d);
+} libxl__remus_disk_type;
+
+
+/*** drbd implementation ***/
+const int DRBD_SEND_CHECKPOINT = 20;
+const int DRBD_WAIT_CHECKPOINT_ACK = 30;
+typedef struct libxl__remus_drbd_disk
+{
+    libxl__remus_disk remus_disk;
+    int ctl_fd;
+    int ackwait;
+} libxl__remus_drbd_disk;
+
+static int drbd_postsuspend(libxl__remus_disk *d)
+{
+    struct libxl__remus_drbd_disk *drbd = CONTAINER_OF(d, *drbd, remus_disk);
+
+    if (!drbd->ackwait) {
+        if (ioctl(drbd->ctl_fd, DRBD_SEND_CHECKPOINT, 0) <= 0)
+            drbd->ackwait = 1;
+    }
+
+    return 0;
+}
+
+static int drbd_preresume(libxl__remus_disk *d)
+{
+    struct libxl__remus_drbd_disk *drbd = CONTAINER_OF(d, *drbd, remus_disk);
+
+    if (drbd->ackwait) {
+        ioctl(drbd->ctl_fd, DRBD_WAIT_CHECKPOINT_ACK, 0);
+        drbd->ackwait = 0;
+    }
+
+    return 0;
+}
+
+static int drbd_commit(libxl__remus_disk *d)
+{
+    /* Nothing to do; all the work is done by DRBD's protocol D. */
+    return 0;
+}
+
+static libxl__remus_disk *drbd_setup(libxl__gc *gc, libxl_device_disk *disk)
+{
+    libxl__remus_drbd_disk *drbd;
+    //if (!(drbd && protocol-D)) // TODO: need to run the script asynchronously to check
+    //  return NULL
+
+    GCNEW(drbd);
+
+    drbd->ctl_fd = open(GCSPRINTF("/dev/drbd/by-res/%s", disk->pdev_path), O_RDONLY);
+    drbd->ackwait = 0;
+
+    if (drbd->ctl_fd < 0)
+        return NULL;
+
+    return &drbd->remus_disk;
+}
+
+static void drbd_teardown(libxl__remus_disk *d)
+{
+    struct libxl__remus_drbd_disk *drbd = CONTAINER_OF(d, *drbd, remus_disk);
+
+    close(drbd->ctl_fd);
+}
+
+static const libxl__remus_disk_type drbd_disk_type = {
+  .postsuspend = drbd_postsuspend,
+  .preresume = drbd_preresume,
+  .commit = drbd_commit,
+  .setup = drbd_setup,
+  .teardown = drbd_teardown,
+};
+
+/*** checkpoint disks states and callbacks ***/
+static const libxl__remus_disk_type *remus_disk_types[] =
+{
+    &drbd_disk_type,
+};
+
+int libxl__remus_disks_postsuspend(libxl__remus_state *state)
+{
+    int i;
+    int rc = 0;
+
+    for (i = 0; rc == 0 && i < state->nr_disks; i++)
+        rc = state->disks[i]->type->postsuspend(state->disks[i]);
+
+    return rc;
+}
+
+int libxl__remus_disks_preresume(libxl__remus_state *state)
+{
+    int i;
+    int rc = 0;
+
+    for (i = 0; rc == 0 && i < state->nr_disks; i++)
+        rc = state->disks[i]->type->preresume(state->disks[i]);
+
+    return rc;
+}
+
+int libxl__remus_disks_commit(libxl__remus_state *state)
+{
+    int i;
+    int rc = 0;
+
+    for (i = 0; rc == 0 && i < state->nr_disks; i++)
+        rc = state->disks[i]->type->commit(state->disks[i]);
+
+    return rc;
+}
+
+#if 0
+/* TODO: implement disk setup/teardown script */
+static void disk_exec_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
+                                      const struct timeval *requested_abs)
+{
+    libxl__remus_disks_state *state = CONTAINER_OF(ev, *state, timeout);
+    STATE_AO_GC(state->ao);
+
+    libxl__ev_time_deregister(gc, &state->timeout);
+
+    assert(libxl__ev_child_inuse(&state->child));
+    if (kill(state->child.pid, SIGKILL)) {
+    }
+
+    return;
+}
+
+int libxl__remus_disks_exec_script(libxl__gc *gc,
+    libxl__remus_disks_state *state)
+{
+}
+#endif
+
+int libxl__remus_disks_setup(libxl__egc *egc, libxl__domain_suspend_state *dss)
+{
+    libxl__remus_state *remus_state = dss->remus_state;
+    int i, j, nr_disks;
+    libxl_device_disk *disks;
+    libxl__remus_disk *remus_disk;
+    const libxl__remus_disk_type *type;
+
+    STATE_AO_GC(dss->ao);
+    disks = libxl_device_disk_list(CTX, dss->domid, &nr_disks);
+    remus_state->nr_disks = nr_disks;
+    GCNEW_ARRAY(remus_state->disks, nr_disks);
+
+    for (i = 0; i < nr_disks; i++) {
+        remus_disk = NULL;
+        for (j = 0; j < ARRAY_SIZE(remus_disk_types); j++) {
+            type = remus_disk_types[j];
+            remus_disk = type->setup(gc, &disks[i]);
+            if (!remus_disk)
+                continue; /* this type cannot handle the disk; try the next */
+            remus_state->disks[i] = remus_disk;
+            remus_disk->disk = &disks[i];
+            remus_disk->type = type;
+            break; /* this type claimed the disk */
+        }
+        if (!remus_disk) {
+            remus_state->nr_disks = i;
+            libxl__remus_disks_teardown(remus_state);
+            return -1;
+        }
+    }
+    return 0;
+}
+
+void libxl__remus_disks_teardown(libxl__remus_state *state)
+{
+    int i;
+
+    for (i = 0; i < state->nr_disks; i++)
+        state->disks[i]->type->teardown(state->disks[i]);
+    state->nr_disks = 0;
+}
+
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

+
+    if (drbd->ackwait) {
+        ioctl(drbd->ctl_fd, DRBD_WAIT_CHECKPOINT_ACK, 0);
+        drbd->ackwait = 0;
+    }
+
+    return 0;
+}
+
+static int drbd_commit(libxl__remus_disk *d)
+{
+    /* nothing to do, all work is done by DRBD's protocol-D. */
+    return 0;
+}
+
+static libxl__remus_disk *drbd_setup(libxl__gc *gc, libxl_device_disk *disk)
+{
+    libxl__remus_drbd_disk *drbd;
+    //if (!(drbd && protocol-D)) // TODO: need to run script async to check
+    //  return NULL
+
+    GCNEW(drbd);
+
+    drbd->ctl_fd = open(GCSPRINTF("/dev/drbd/by-res/%s", disk->pdev_path),
+                        O_RDONLY);
+    drbd->ackwait = 0;
+
+    if (drbd->ctl_fd < 0)
+        return NULL;
+
+    return &drbd->remus_disk;
+}
+
+static void drbd_teardown(libxl__remus_disk *d)
+{
+    struct libxl__remus_drbd_disk *drbd = CONTAINER_OF(d, *drbd, remus_disk);
+
+    close(drbd->ctl_fd);
+}
+
+static const libxl__remus_disk_type drbd_disk_type = {
+  .postsuspend = drbd_postsuspend,
+  .preresume = drbd_preresume,
+  .commit = drbd_commit,
+  .setup = drbd_setup,
+  .teardown = drbd_teardown,
+};
+
+/*** checkpoint disks states and callbacks ***/
+static const libxl__remus_disk_type *remus_disk_types[] =
+{
+    &drbd_disk_type,
+};
+
+int libxl__remus_disks_postsuspend(libxl__remus_state *state)
+{
+    int i;
+    int rc = 0;
+
+    for (i = 0; rc == 0 && i < state->nr_disks; i++)
+        rc = state->disks[i]->type->postsuspend(state->disks[i]);
+
+    return rc;
+}
+
+int libxl__remus_disks_preresume(libxl__remus_state *state)
+{
+    int i;
+    int rc = 0;
+
+    for (i = 0; rc == 0 && i < state->nr_disks; i++)
+        rc = state->disks[i]->type->preresume(state->disks[i]);
+
+    return rc;
+}
+
+int libxl__remus_disks_commit(libxl__remus_state *state)
+{
+    int i;
+    int rc = 0;
+
+    for (i = 0; rc == 0 && i < state->nr_disks; i++)
+        rc = state->disks[i]->type->commit(state->disks[i]);
+
+    return rc;
+}
+
+#if 0
+/* TODO: implement disk setup/teardown script */
+static void disk_exec_timeout_cb(libxl__egc *egc, libxl__ev_time *ev,
+                                      const struct timeval *requested_abs)
+{
+    libxl__remus_disks_state *state = CONTAINER_OF(ev, *state, timeout);
+    STATE_AO_GC(state->ao);
+
+    libxl__ev_time_deregister(gc, &state->timeout);
+
+    assert(libxl__ev_child_inuse(&state->child));
+    if (kill(state->child.pid, SIGKILL)) {
+    }
+
+    return;
+}
+
+int libxl__remus_disks_exec_script(libxl__gc *gc,
+    libxl__remus_disks_state *state)
+{
+}
+#endif
+
+int libxl__remus_disks_setup(libxl__egc *egc, libxl__domain_suspend_state *dss)
+{
+    libxl__remus_state *remus_state = dss->remus_state;
+    int i, j, nr_disks;
+    libxl_device_disk *disks;
+    libxl__remus_disk *remus_disk;
+    const libxl__remus_disk_type *type;
+
+    STATE_AO_GC(dss->ao);
+    disks = libxl_device_disk_list(CTX, dss->domid, &nr_disks);
+    remus_state->nr_disks = nr_disks;
+    GCNEW_ARRAY(remus_state->disks, nr_disks);
+
+    for (i = 0; i < nr_disks; i++) {
+        remus_disk = NULL;
+        for (j = 0; j < ARRAY_SIZE(remus_disk_types); j++) {
+            type = remus_disk_types[j];
+            remus_disk = type->setup(gc, &disks[i]);
+            if (!remus_disk)
+                continue; /* this type cannot handle the disk; try the next */
+
+            remus_state->disks[i] = remus_disk;
+            remus_disk->disk = &disks[i];
+            remus_disk->type = type;
+            break;
+        }
+        if (!remus_disk) {
+            remus_state->nr_disks = i;
+            libxl__remus_disks_teardown(remus_state);
+            return ERROR_FAIL;
+        }
+    }
+    return 0;
+}
+
+void libxl__remus_disks_teardown(libxl__remus_state *state)
+{
+    int i;
+
+    for (i = 0; i < state->nr_disks; i++)
+        state->disks[i]->type->teardown(state->disks[i]);
+    state->nr_disks = 0;
+}
+
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 03:39:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 03:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIVKU-0007jC-VE; Wed, 26 Feb 2014 03:38:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIVKS-0007j7-L8
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 03:38:40 +0000
Received: from [85.158.139.211:29915] by server-17.bemta-5.messagelabs.com id
	36/AC-31975-FB16D035; Wed, 26 Feb 2014 03:38:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393385916!6270639!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11607 invoked from network); 26 Feb 2014 03:38:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 03:38:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,544,1389744000"; d="scan'208";a="105799344"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 03:38:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Tue, 25 Feb 2014 22:38:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIVKN-0000d4-9w;
	Wed, 26 Feb 2014 03:38:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIVKM-0001bs-EQ;
	Wed, 26 Feb 2014 03:38:34 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25303-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 03:38:34 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25303: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25303 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25303/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25266

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  7 debian-install              fail pass in 25297
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 9 guest-saverestore fail pass in 25297
 test-amd64-amd64-pair       19 guest-stop/src_host fail in 25297 pass in 25303

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail in 25297 never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 04:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 04:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIVuq-0007yY-KC; Wed, 26 Feb 2014 04:16:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WIVuo-0007yT-U9
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 04:16:15 +0000
Received: from [85.158.143.35:42371] by server-3.bemta-4.messagelabs.com id
	3E/23-11539-E8A6D035; Wed, 26 Feb 2014 04:16:14 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393388171!8331334!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29854 invoked from network); 26 Feb 2014 04:16:12 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-15.tower-21.messagelabs.com with SMTP;
	26 Feb 2014 04:16:12 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 25 Feb 2014 20:16:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,544,1389772800"; d="scan'208";a="481874562"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga001.fm.intel.com with ESMTP; 25 Feb 2014 20:16:09 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 20:16:09 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 20:16:08 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 26 Feb 2014 12:16:05 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: XSA-59
Thread-Index: Ac8yqXZIfP12xxI1QoqdncUFRoqz8g==
Date: Wed, 26 Feb 2014 04:16:04 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_"
MIME-Version: 1.0
Cc: "Bulygin, Yuriy" <yuriy.bulygin@intel.com>, "Mallick,
	Asit K" <asit.k.mallick@intel.com>, "Li,
	Susie" <susie.li@intel.com>, "Wang,
	Yong Y" <yong.y.wang@intel.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Auld, Will" <will.auld@intel.com>
Subject: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Jan,

For XSA-59, I have collected some questions (coming from your patches and emails) and consulted the relevant groups inside Intel.

Below are the questions and replies (Q2 has not been answered yet):
Q1: Is the PCI ID list (0x3400 ...) for root ports complete? Jan got it from a disclosure that Intel made to him well over two years ago --> any update to the list?
[Asit]: There is no update to this list. It was provided in 2011 and included the IDs prior to being fixed.


Q2: Is the PCI ID list (0x100 ...) for host bridges complete? It comes from Yuriy, but Jan needs double confirmation that it is 'a complete list'.


Q3: The "... without AER capability?" warning triggers on Jan's systems --> is it an issue, and how should it be handled properly?
[Asit] The BIOS can have an option to not expose the AER capability. It would be good to check the BIOS setup options. The error reporting should be masked, so no action is needed.
[Yuriy] I expanded the answer to Q3 vs. what's in the attached email after we found out that when the root port is operating in DMI mode, the AER extended capability is not in the chain of extended capability headers. Please use this one instead.
Answer to Q3:
On Romley systems (DID 0x3c00 ... 0x3c0b), for host bridge BDF=00:00.0, when the root port is operating as DMI, the AER extended capability is defined in the VSHDR (Vendor Specific Header) configuration register (offset 0x148). It should have the value 0x0004.
If pci_find_ext_capability did not find the AER capability, then for BDF=00:00.0 the patch would need to check whether the VSHDR register has the value 0x0004 in bits [15:0].
Below I've provided an example fix for this case:
    case 0x3c00 ... 0x3c0b:
        pos = pci_find_ext_capability(seg, bus, pdev->devfn,
                                      PCI_EXT_CAP_ID_ERR);
        if ( !pos )
        {
            if ( 0 == bus && 0 == pdev->devfn )
            {
                /* DMI-specific AER capability ID lives in VSHDR. */
                dmi_aer_cap_id = pci_conf_read16(seg, 0, 0, 0, 0x148);
                if ( 0x0004 != dmi_aer_cap_id )
                {
                    printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
                           seg, bus, dev, func);
                    break;
                }
            }
            else
            {
                printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
                       seg, bus, dev, func);
                break;
            }
        }


Q4: The patches have no way of handling future chipsets (yet we also have no indication that future chipsets would not exhibit the same bad behavior) --> thoughts?
[Jinsong] IMHO, handle future chipsets case by case.


Q5: Please clarify whether masking the reporting of unsupported requests is really an appropriate thing to do: after all, this masks not only maliciously created instances of them, but also any possibly resulting from malfunctioning hardware.
[Yuriy] Most client systems don't have SERR enabled (it is not a recommended configuration). For server systems, I don't know the answer to this question, as our team didn't work on the issue and its workaround when it was defined. You'd need to ask Rajesh.


BTW, some other information from Yuriy:
VT-d-mask-UR-host-bridge.patch:
1. The workaround is only applicable to the host bridge device 00:00.0 (DMIBAR does not exist for other devices). The patch is written generically for any PCIe device/bridge.
2. The workaround is only needed when SERR is enabled. As only a fraction of client systems will have SERR enabled, it might be worthwhile to apply it only when SERRE is 1:
   val = pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);
   if ( val & PCI_COMMAND_SERR )
     apply this workaround


Thank you all,
Jinsong

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_
Content-Type: message/rfc822
Content-Disposition: attachment;
	creation-date="Wed, 26 Feb 2014 04:16:03 GMT";
	modification-date="Wed, 26 Feb 2014 04:16:03 GMT"

From: "Bulygin, Yuriy" <yuriy.bulygin@intel.com>
To: "Auld, Will" <will.auld@intel.com>, "Sankaran, Rajesh"
	<rajesh.sankaran@intel.com>, "Monroe, Bruce" <bruce.monroe@intel.com>,
	"Dugger, Donald D" <donald.d.dugger@intel.com>
Subject: RE: Outstanding Xen Security issue
Thread-Topic: Outstanding Xen Security issue
Thread-Index: Ac7iL2nFDh09wY2ESEWl8KUTgYfDuACA2KbA
Date: Mon, 18 Nov 2013 08:52:50 +0000
References: <96EC5A4F3149B74492D2D9B9B1602C27348339B6@ORSMSX105.amr.corp.intel.com>
In-Reply-To: <96EC5A4F3149B74492D2D9B9B1602C27348339B6@ORSMSX105.amr.corp.intel.com>
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
Content-Type: multipart/mixed;
	boundary="_004_6776757180686879707075717576698073676573806968677578777_"
MIME-Version: 1.0

--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Patches are attached.

Responses:

> > Please clarify that aspect as well as confirm that the used list of
> > PCI IDs is complete regarding all your chipsets affected by the issue
> > (or provide a list of necessary additions).

The list contains all device IDs we've confirmed observing this behavior, both server and client.

The workaround in patch "VT-d-mask-UR-host-bridge.patch" is only needed when SERR is enabled, while the majority of client systems shouldn't have SERR enabled (not a recommended configuration).

Might be worth only applying the workaround when SERRE is 1:

   val = pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);
   if ( val & PCI_COMMAND_SERR )
     apply this workaround





> Additionally, I discovered an issue needing dealing with on my Romley
> system. The patches result in
>
> (XEN) 0000:00:00.0 without AER capability?
> (XEN) Masked UR signaling on 0000:00:01.0
> (XEN) Masked UR signaling on 0000:00:01.1
> (XEN) Masked UR signaling on 0000:00:02.0
> (XEN) Masked UR signaling on 0000:00:02.2
> (XEN) Masked UR signaling on 0000:00:03.0
> (XEN) Masked UR signaling on 0000:00:03.2
> (XEN) Masked UR signaling on 0000:80:00.0
> (XEN) Masked UR signaling on 0000:80:01.0
> (XEN) Masked UR signaling on 0000:80:02.0
> (XEN) Masked UR signaling on 0000:80:02.2
> (XEN) Masked UR signaling on 0000:80:03.0
> (XEN) Masked UR signaling on 0000:80:03.2
>
> and obviously the first of these messages is of concern. The device in
> question is
>
> 00:00.0 Host bridge [0600]: Intel Corporation Sandy Bridge DMI2
> [8086:3c00] (rev 07)
>
> i.e. falling into the group of PCI IDs needing to be handled on a PCIe
> root port basis (but calls itself a host bridge). Can anyone shed light
> on why this doesn't have the expected AER capability, and - perhaps
> connected - why it's not a root port? Should we perhaps filter out host
> bridges here? Or was the device ID listed together with the other valid
> ones in error?





On Romley systems (DID 0x3c00 ... 0x3c0b), the host bridge (DMI root port BDF=00:00.0), when operated in DMI mode, has extended capability ID value 0x0004, which is the Intel-specific ID for AER.

    case 0x3c00 ... 0x3c0b:
        pos = pci_find_ext_capability(seg, bus, pdev->devfn,
                                      PCI_EXT_CAP_ID_ERR);

pci_find_ext_capability in the patch should check for either the 0x0001 or the 0x0004 value of PCI_EXT_CAP before applying the workaround.





From: Auld, Will
Sent: Friday, November 15, 2013 10:21 AM
To: Sankaran, Rajesh; Monroe, Bruce; Dugger, Donald D; Bulygin, Yuriy
Cc: Auld, Will
Subject: Outstanding Xen Security issue

Attendees:
Don Dugger, Bruce Monroe, Yuriy Bulygin, Will Auld

Two patches for client systems, one for server. Do these cover all the needed systems? Yes for shipping client systems (Broadwell will need it as well). The server patch is OK but has a bug.

Yuriy will send all three patches, with details of the server patch bug, along with the higher-level details for Don to communicate to SuSE. Yuriy will send this to Don by Monday morning, 8 AM Pacific.

Thanks everyone,

Will


--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: application/octet-stream; name="VT-d-mask-UR-root-port.patch"
Content-Description: VT-d-mask-UR-root-port.patch
Content-Disposition: attachment; filename="VT-d-mask-UR-root-port.patch";
	size=2396; creation-date="Thu, 22 Aug 2013 12:06:45 GMT";
	modification-date="Thu, 22 Aug 2013 12:06:45 GMT"
Content-Transfer-Encoding: base64

LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3F1aXJrcy5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xdWlya3MuYwpAQCAtMzkwLDEyICszOTAsNDQgQEAgdm9pZCBf
X2luaXQgcGNpX3Z0ZF9xdWlyayhzdHJ1Y3QgcGNpX2RldgogICAgIGludCBidXMgPSBwZGV2LT5i
dXM7CiAgICAgaW50IGRldiA9IFBDSV9TTE9UKHBkZXYtPmRldmZuKTsKICAgICBpbnQgZnVuYyA9
IFBDSV9GVU5DKHBkZXYtPmRldmZuKTsKLSAgICBpbnQgaWQsIHZhbDsKKyAgICBpbnQgcG9zOwor
ICAgIHUzMiB2YWw7CiAKLSAgICBpZCA9IHBjaV9jb25mX3JlYWQzMihzZWcsIGJ1cywgZGV2LCBm
dW5jLCAwKTsKLSAgICBpZiAoIGlkID09IDB4MzQyZTgwODYgfHwgaWQgPT0gMHgzYzI4ODA4NiAp
CisgICAgaWYgKCBwY2lfY29uZl9yZWFkMTYoc2VnLCBidXMsIGRldiwgZnVuYywgUENJX1ZFTkRP
Ul9JRCkgIT0gMHg4MDg2ICkKKyAgICAgICAgcmV0dXJuOworCisgICAgc3dpdGNoICggcGNpX2Nv
bmZfcmVhZDE2KHNlZywgYnVzLCBkZXYsIGZ1bmMsIFBDSV9ERVZJQ0VfSUQpICkKICAgICB7Cisg
ICAgY2FzZSAweDM0MmU6CisgICAgY2FzZSAweDNjMjg6CiAgICAgICAgIHZhbCA9IHBjaV9jb25m
X3JlYWQzMihzZWcsIGJ1cywgZGV2LCBmdW5jLCAweDFBQyk7CiAgICAgICAgIHBjaV9jb25mX3dy
aXRlMzIoc2VnLCBidXMsIGRldiwgZnVuYywgMHgxQUMsIHZhbCB8ICgxIDw8IDMxKSk7CisgICAg
ICAgIGJyZWFrOworCisgICAgY2FzZSAweDM0MDA6IGNhc2UgMHgzNDA4IC4uLiAweDM0MTE6IGNh
c2UgMHgzNDIwOgorICAgIGNhc2UgMHgzYzAwIC4uLiAweDNjMGI6CisgICAgICAgIHBvcyA9IHBj
aV9maW5kX2V4dF9jYXBhYmlsaXR5KHNlZywgYnVzLCBwZGV2LT5kZXZmbiwKKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgUENJX0VYVF9DQVBfSURfRVJSKTsKKyAgICAgICAg
aWYgKCAhcG9zICkKKyAgICAgICAgeworICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5H
ICIlMDR4OiUwMng6JTAyeC4ldSB3aXRob3V0IEFFUiBjYXBhYmlsaXR5P1xuIiwKKyAgICAgICAg
ICAgICAgICAgICBzZWcsIGJ1cywgZGV2LCBmdW5jKTsKKyAgICAgICAgICAgIGJyZWFrOworICAg
ICAgICB9CisKKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVhZDMyKHNlZywgYnVzLCBkZXYsIGZ1
bmMsIHBvcyArIFBDSV9FUlJfVU5DT1JfTUFTSyk7CisgICAgICAgIHBjaV9jb25mX3dyaXRlMzIo
c2VnLCBidXMsIGRldiwgZnVuYywgcG9zICsgUENJX0VSUl9VTkNPUl9NQVNLLAorICAgICAgICAg
ICAgICAgICAgICAgICAgIHZhbCB8IFBDSV9FUlJfVU5DX1VOU1VQKTsKKyAgICAgICAgdmFsID0g
cGNpX2NvbmZfcmVhZDMyKHNlZywgYnVzLCBkZXYsIGZ1bmMsIHBvcyArIFBDSV9FUlJfQ09SX01B
U0spOworICAgICAgICBwY2lfY29uZl93cml0ZTMyKHNlZywgYnVzLCBkZXYsIGZ1bmMsIHBvcyAr
IFBDSV9FUlJfQ09SX01BU0ssCisgICAgICAgICAgICAgICAgICAgICAgICAgdmFsIHwgUENJX0VS
Ul9DT1JfQURWX05GQVQpOworCisgICAgICAgIC8qIFhQVU5DRVJSTVNLIFNlbmQgQ29tcGxldGlv
biB3aXRoIFVuc3VwcG9ydGVkIFJlcXVlc3QgKi8KKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVh
ZDMyKHNlZywgYnVzLCBkZXYsIGZ1bmMsIDB4MjBjKTsKKyAgICAgICAgcGNpX2NvbmZfd3JpdGUz
MihzZWcsIGJ1cywgZGV2LCBmdW5jLCAweDIwYywgdmFsIHwgKDEgPDwgNCkpOworCisgICAgICAg
IHByaW50ayhYRU5MT0dfSU5GTyAiTWFza2VkIFVSIHNpZ25hbGluZyBvbiAlMDR4OiUwMng6JTAy
eC4ldVxuIiwKKyAgICAgICAgICAgICAgIHNlZywgYnVzLCBkZXYsIGZ1bmMpOworICAgICAgICBi
cmVhazsKICAgICB9CiB9Ci0tLSBhL3hlbi9pbmNsdWRlL3hlbi9wY2lfcmVncy5oCisrKyBiL3hl
bi9pbmNsdWRlL3hlbi9wY2lfcmVncy5oCkBAIC00NTksNiArNDU5LDcgQEAKICNkZWZpbmUgIFBD
SV9FUlJfQ09SX0JBRF9ETExQCTB4MDAwMDAwODAJLyogQmFkIERMTFAgU3RhdHVzICovCiAjZGVm
aW5lICBQQ0lfRVJSX0NPUl9SRVBfUk9MTAkweDAwMDAwMTAwCS8qIFJFUExBWV9OVU0gUm9sbG92
ZXIgKi8KICNkZWZpbmUgIFBDSV9FUlJfQ09SX1JFUF9USU1FUgkweDAwMDAxMDAwCS8qIFJlcGxh
eSBUaW1lciBUaW1lb3V0ICovCisjZGVmaW5lICBQQ0lfRVJSX0NPUl9BRFZfTkZBVAkweDAwMDAy
MDAwCS8qIEFkdmlzb3J5IE5vbi1GYXRhbCAqLwogI2RlZmluZSBQQ0lfRVJSX0NPUl9NQVNLCTIw
CS8qIENvcnJlY3RhYmxlIEVycm9yIE1hc2sgKi8KIAkvKiBTYW1lIGJpdHMgYXMgYWJvdmUgKi8K
ICNkZWZpbmUgUENJX0VSUl9DQVAJCTI0CS8qIEFkdmFuY2VkIEVycm9yIENhcGFiaWxpdGllcyAq
Lwo=

--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: application/octet-stream;
	name="VT-d-mask-UR-host-bridge.patch"
Content-Description: VT-d-mask-UR-host-bridge.patch
Content-Disposition: attachment; filename="VT-d-mask-UR-host-bridge.patch";
	size=1623; creation-date="Thu, 22 Aug 2013 12:06:45 GMT";
	modification-date="Thu, 22 Aug 2013 12:06:45 GMT"
Content-Transfer-Encoding: base64

LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3F1aXJrcy5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xdWlya3MuYwpAQCAtMzkyLDYgKzM5Miw4IEBAIHZvaWQgX19p
bml0IHBjaV92dGRfcXVpcmsoc3RydWN0IHBjaV9kZXYKICAgICBpbnQgZnVuYyA9IFBDSV9GVU5D
KHBkZXYtPmRldmZuKTsKICAgICBpbnQgcG9zOwogICAgIHUzMiB2YWw7CisgICAgdTY0IGJhcjsK
KyAgICBwYWRkcl90IHBhOwogCiAgICAgaWYgKCBwY2lfY29uZl9yZWFkMTYoc2VnLCBidXMsIGRl
diwgZnVuYywgUENJX1ZFTkRPUl9JRCkgIT0gMHg4MDg2ICkKICAgICAgICAgcmV0dXJuOwpAQCAt
NDI5LDUgKzQzMSwzMyBAQCB2b2lkIF9faW5pdCBwY2lfdnRkX3F1aXJrKHN0cnVjdCBwY2lfZGV2
CiAgICAgICAgIHByaW50ayhYRU5MT0dfSU5GTyAiTWFza2VkIFVSIHNpZ25hbGluZyBvbiAlMDR4
OiUwMng6JTAyeC4ldVxuIiwKICAgICAgICAgICAgICAgIHNlZywgYnVzLCBkZXYsIGZ1bmMpOwog
ICAgICAgICBicmVhazsKKworICAgIGNhc2UgMHgxMDA6IGNhc2UgMHgxMDQ6IGNhc2UgMHgxMDg6
CisgICAgY2FzZSAweDE1MDogY2FzZSAweDE1NDogY2FzZSAweDE1ODoKKyAgICBjYXNlIDB4YTA0
OgorICAgIGNhc2UgMHhjMDA6IGNhc2UgMHhjMDQ6IGNhc2UgMHhjMDg6CisgICAgICAgIGJhciA9
IHBjaV9jb25mX3JlYWQzMihzZWcsIGJ1cywgZGV2LCBmdW5jLCAweDZjKTsKKyAgICAgICAgYmFy
ID0gKGJhciA8PCAzMikgfCBwY2lfY29uZl9yZWFkMzIoc2VnLCBidXMsIGRldiwgZnVuYywgMHg2
OCk7CisgICAgICAgIHBhID0gYmFyICYgMHg3ZmZmZmYwMDA7IC8qIGJpdHMgMTIuLi4zOCAqLwor
ICAgICAgICBpZiAoIChiYXIgJiAxKSAmJiBwYSAmJgorICAgICAgICAgICAgIHBhZ2VfaXNfcmFt
X3R5cGUocGFkZHJfdG9fcGZuKHBhKSwgUkFNX1RZUEVfUkVTRVJWRUQpICkKKyAgICAgICAgewor
ICAgICAgICAgICAgdTMyIF9faW9tZW0gKnZhID0gaW9yZW1hcChwYSwgUEFHRV9TSVpFKTsKKwor
ICAgICAgICAgICAgaWYgKCB2YSApCisgICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgX19z
ZXRfYml0KDB4MWM4ICogOCArIDIwLCB2YSk7CisgICAgICAgICAgICAgICAgaW91bm1hcCh2YSk7
CisgICAgICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19JTkZPICJNYXNrZWQgVVIgc2lnbmFsaW5n
IG9uICUwNHg6JTAyeDolMDJ4LiV1XG4iLAorICAgICAgICAgICAgICAgICAgICAgICBzZWcsIGJ1
cywgZGV2LCBmdW5jKTsKKyAgICAgICAgICAgIH0KKyAgICAgICAgICAgIGVsc2UKKyAgICAgICAg
ICAgICAgICBwcmludGsoWEVOTE9HX0VSUiAiQ291bGQgbm90IG1hcCAlIlBSSXBhZGRyIiBmb3Ig
JTA0eDolMDJ4OiUwMnguJXVcbiIsCisgICAgICAgICAgICAgICAgICAgICAgIHBhLCBzZWcsIGJ1
cywgZGV2LCBmdW5jKTsKKyAgICAgICAgfQorICAgICAgICBlbHNlCisgICAgICAgICAgICBwcmlu
dGsoWEVOTE9HX1dBUk5JTkcgIkJvZ3VzIERNSUJBUiAlIyJQUkl4NjQiIG9uICUwNHg6JTAyeDol
MDJ4LiV1XG4iLAorICAgICAgICAgICAgICAgICAgIGJhciwgc2VnLCBidXMsIGRldiwgZnVuYyk7
CisgICAgICAgIGJyZWFrOwogICAgIH0KIH0K

--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: application/octet-stream; name="pt-suppress-SERR.patch"
Content-Description: pt-suppress-SERR.patch
Content-Disposition: attachment; filename="pt-suppress-SERR.patch"; size=4750;
	creation-date="Thu, 22 Aug 2013 12:06:45 GMT";
	modification-date="Thu, 22 Aug 2013 12:06:45 GMT"
Content-Transfer-Encoding: base64

LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvcGNpLmMKKysrIGIveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvcGNpLmMKQEAgLTE2Miw2ICsxNjIsMTEyIEBAIHN0YXRpYyB2b2lkIF9faW5pdCBw
YXJzZV9waGFudG9tX2RldihjaGEKIH0KIGN1c3RvbV9wYXJhbSgicGNpLXBoYW50b20iLCBwYXJz
ZV9waGFudG9tX2Rldik7CiAKK3N0YXRpYyB1MTYgX19yZWFkX21vc3RseSBjb21tYW5kX21hc2s7
CitzdGF0aWMgdTE2IF9fcmVhZF9tb3N0bHkgYnJpZGdlX2N0bF9tYXNrOworCisvKgorICogVGhl
ICdwY2knIHBhcmFtZXRlciBjb250cm9scyBjZXJ0YWluIFBDSSBkZXZpY2UgYXNwZWN0cy4KKyAq
IE9wdGlvbmFsIGNvbW1hIHNlcGFyYXRlZCB2YWx1ZSBtYXkgY29udGFpbjoKKyAqCisgKiAgIHNl
cnIgICAgICAgICAgICAgICAgICAgICAgIGRvbid0IHN1cHByZXNzIHN5c3RlbSBlcnJvcnMgKGRl
ZmF1bHQpCisgKiAgIG5vLXNlcnIgICAgICAgICAgICAgICAgICAgIHN1cHByZXNzIHN5c3RlbSBl
cnJvcnMKKyAqICAgcGVyciAgICAgICAgICAgICAgICAgICAgICAgZG9uJ3Qgc3VwcHJlc3MgcGFy
aXR5IGVycm9ycyAoZGVmYXVsdCkKKyAqICAgbm8tcGVyciAgICAgICAgICAgICAgICAgICAgc3Vw
cHJlc3MgcGFyaXR5IGVycm9ycworICovCitzdGF0aWMgdm9pZCBfX2luaXQgcGFyc2VfcGNpX3Bh
cmFtKGNoYXIgKnMpCit7CisgICAgY2hhciAqc3M7CisKKyAgICBkbyB7CisgICAgICAgIGJvb2xf
dCBvbiA9ICEhc3RybmNtcChzLCAibm8tIiwgMyk7CisgICAgICAgIHUxNiBjbWRfbWFzayA9IDAs
IGJyY3RsX21hc2sgPSAwOworCisgICAgICAgIGlmICggIW9uICkKKyAgICAgICAgICAgIHMgKz0g
MzsKKworICAgICAgICBzcyA9IHN0cmNocihzLCAnLCcpOworICAgICAgICBpZiAoIHNzICkKKyAg
ICAgICAgICAgICpzcyA9ICdcMCc7CisKKyAgICAgICAgaWYgKCAhc3RyY21wKHMsICJzZXJyIikg
KQorICAgICAgICB7CisgICAgICAgICAgICBjbWRfbWFzayA9IFBDSV9DT01NQU5EX1NFUlI7Cisg
ICAgICAgICAgICBicmN0bF9tYXNrID0gUENJX0JSSURHRV9DVExfU0VSUiB8IFBDSV9CUklER0Vf
Q1RMX0RUTVJfU0VSUjsKKyAgICAgICAgfQorICAgICAgICBlbHNlIGlmICggIXN0cmNtcChzLCAi
cGVyciIpICkKKyAgICAgICAgeworICAgICAgICAgICAgY21kX21hc2sgPSBQQ0lfQ09NTUFORF9Q
QVJJVFk7CisgICAgICAgICAgICBicmN0bF9tYXNrID0gUENJX0JSSURHRV9DVExfUEFSSVRZOwor
ICAgICAgICB9CisKKyAgICAgICAgaWYgKCBvbiApCisgICAgICAgIHsKKyAgICAgICAgICAgIGNv
bW1hbmRfbWFzayAmPSB+Y21kX21hc2s7CisgICAgICAgICAgICBicmlkZ2VfY3RsX21hc2sgJj0g
fmJyY3RsX21hc2s7CisgICAgICAgIH0KKyAgICAgICAgZWxzZQorICAgICAgICB7CisgICAgICAg
ICAgICBjb21tYW5kX21hc2sgfD0gY21kX21hc2s7CisgICAgICAgICAgICBicmlkZ2VfY3RsX21h
c2sgfD0gYnJjdGxfbWFzazsKKyAgICAgICAgfQorCisgICAgICAgIHMgPSBzcyArIDE7CisgICAg
fSB3aGlsZSAoIHNzICk7Cit9CitjdXN0b21fcGFyYW0oInBjaSIsIHBhcnNlX3BjaV9wYXJhbSk7
CisKK3N0YXRpYyB2b2lkIGNoZWNrX3BkZXYoY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYpCit7
CisjZGVmaW5lIFBDSV9TVEFUVVNfQ0hFQ0sgXAorICAgIChQQ0lfU1RBVFVTX1BBUklUWSB8IFBD
SV9TVEFUVVNfU0lHX1RBUkdFVF9BQk9SVCB8IFwKKyAgICAgUENJX1NUQVRVU19SRUNfVEFSR0VU
X0FCT1JUIHwgUENJX1NUQVRVU19SRUNfTUFTVEVSX0FCT1JUIHwgXAorICAgICBQQ0lfU1RBVFVT
X1NJR19TWVNURU1fRVJST1IgfCBQQ0lfU1RBVFVTX0RFVEVDVEVEX1BBUklUWSkKKyAgICB1MTYg
c2VnID0gcGRldi0+c2VnOworICAgIHU4IGJ1cyA9IHBkZXYtPmJ1czsKKyAgICB1OCBkZXYgPSBQ
Q0lfU0xPVChwZGV2LT5kZXZmbik7CisgICAgdTggZnVuYyA9IFBDSV9GVU5DKHBkZXYtPmRldmZu
KTsKKyAgICB1MTYgdmFsOworCisgICAgaWYgKCBjb21tYW5kX21hc2sgKQorICAgIHsKKyAgICAg
ICAgdmFsID0gcGNpX2NvbmZfcmVhZDE2KHNlZywgYnVzLCBkZXYsIGZ1bmMsIFBDSV9DT01NQU5E
KTsKKyAgICAgICAgaWYgKCB2YWwgJiBjb21tYW5kX21hc2sgKQorICAgICAgICAgICAgcGNpX2Nv
bmZfd3JpdGUxNihzZWcsIGJ1cywgZGV2LCBmdW5jLCBQQ0lfQ09NTUFORCwKKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgdmFsICYgfmNvbW1hbmRfbWFzayk7CisgICAgICAgIHZhbCA9IHBj
aV9jb25mX3JlYWQxNihzZWcsIGJ1cywgZGV2LCBmdW5jLCBQQ0lfU1RBVFVTKTsKKyAgICAgICAg
aWYgKCB2YWwgJiBQQ0lfU1RBVFVTX0NIRUNLICkKKyAgICAgICAgeworICAgICAgICAgICAgcHJp
bnRrKFhFTkxPR19JTkZPICIlMDR4OiUwMng6JTAyeC4ldSBzdGF0dXMgJTA0eFxuIiwKKyAgICAg
ICAgICAgICAgICAgICBzZWcsIGJ1cywgZGV2LCBmdW5jLCB2YWwpOworICAgICAgICAgICAgcGNp
X2NvbmZfd3JpdGUxNihzZWcsIGJ1cywgZGV2LCBmdW5jLCBQQ0lfU1RBVFVTLCB2YWwpOworICAg
ICAgICB9CisgICAgfQorCisgICAgc3dpdGNoICggcGNpX2NvbmZfcmVhZDgoc2VnLCBidXMsIGRl
diwgZnVuYywgUENJX0hFQURFUl9UWVBFKSAmIDB4N2YgKQorICAgIHsKKyAgICBjYXNlIFBDSV9I
RUFERVJfVFlQRV9CUklER0U6CisgICAgICAgIGlmICggIWJyaWRnZV9jdGxfbWFzayApCisgICAg
ICAgICAgICBicmVhazsKKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVhZDE2KHNlZywgYnVzLCBk
ZXYsIGZ1bmMsIFBDSV9CUklER0VfQ09OVFJPTCk7CisgICAgICAgIGlmICggdmFsICYgYnJpZGdl
X2N0bF9tYXNrICkKKyAgICAgICAgICAgIHBjaV9jb25mX3dyaXRlMTYoc2VnLCBidXMsIGRldiwg
ZnVuYywgUENJX0JSSURHRV9DT05UUk9MLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICB2
YWwgJiB+YnJpZGdlX2N0bF9tYXNrKTsKKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVhZDE2KHNl
ZywgYnVzLCBkZXYsIGZ1bmMsIFBDSV9TRUNfU1RBVFVTKTsKKyAgICAgICAgaWYgKCB2YWwgJiBQ
Q0lfU1RBVFVTX0NIRUNLICkKKyAgICAgICAgeworICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19J
TkZPICIlMDR4OiUwMng6JTAyeC4ldSBzZWNvbmRhcnkgc3RhdHVzICUwNHhcbiIsCisgICAgICAg
ICAgICAgICAgICAgc2VnLCBidXMsIGRldiwgZnVuYywgdmFsKTsKKyAgICAgICAgICAgIHBjaV9j
b25mX3dyaXRlMTYoc2VnLCBidXMsIGRldiwgZnVuYywgUENJX1NFQ19TVEFUVVMsIHZhbCk7Cisg
ICAgICAgIH0KKyAgICAgICAgYnJlYWs7CisKKyAgICBjYXNlIFBDSV9IRUFERVJfVFlQRV9DQVJE
QlVTOgorICAgICAgICAvKiBUT0RPICovCisgICAgICAgIGJyZWFrOworICAgIH0KKyN1bmRlZiBQ
Q0lfU1RBVFVTX0NIRUNLCit9CisKIHN0YXRpYyBzdHJ1Y3QgcGNpX2RldiAqYWxsb2NfcGRldihz
dHJ1Y3QgcGNpX3NlZyAqcHNlZywgdTggYnVzLCB1OCBkZXZmbikKIHsKICAgICBzdHJ1Y3QgcGNp
X2RldiAqcGRldjsKQEAgLTI2MSw2ICszNjcsOCBAQCBzdGF0aWMgc3RydWN0IHBjaV9kZXYgKmFs
bG9jX3BkZXYoc3RydWN0CiAgICAgICAgICAgICBicmVhazsKICAgICB9CiAKKyAgICBjaGVja19w
ZGV2KHBkZXYpOworCiAgICAgcmV0dXJuIHBkZXY7CiB9CiAKQEAgLTU3NSw2ICs2ODMsOCBAQCBp
bnQgcGNpX2FkZF9kZXZpY2UodTE2IHNlZywgdTggYnVzLCB1OCBkCiAgICAgICAgICAgICAgICAg
ICAgc2VnLCBidXMsIHNsb3QsIGZ1bmMsIGN0cmwpOwogICAgIH0KIAorICAgIGNoZWNrX3BkZXYo
cGRldik7CisKICAgICByZXQgPSAwOwogICAgIGlmICggIXBkZXYtPmRvbWFpbiApCiAgICAgewot
LS0gYS94ZW4vaW5jbHVkZS94ZW4vcGNpX3JlZ3MuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4vcGNp
X3JlZ3MuaApAQCAtMTI1LDcgKzEyNSw3IEBACiAjZGVmaW5lICBQQ0lfSU9fUkFOR0VfVFlQRV8x
NgkweDAwCiAjZGVmaW5lICBQQ0lfSU9fUkFOR0VfVFlQRV8zMgkweDAxCiAjZGVmaW5lICBQQ0lf
SU9fUkFOR0VfTUFTSwkofjB4MGZVTCkKLSNkZWZpbmUgUENJX1NFQ19TVEFUVVMJCTB4MWUJLyog
U2Vjb25kYXJ5IHN0YXR1cyByZWdpc3Rlciwgb25seSBiaXQgMTQgdXNlZCAqLworI2RlZmluZSBQ
Q0lfU0VDX1NUQVRVUwkJMHgxZQkvKiBTZWNvbmRhcnkgc3RhdHVzIHJlZ2lzdGVyICovCiAjZGVm
aW5lIFBDSV9NRU1PUllfQkFTRQkJMHgyMAkvKiBNZW1vcnkgcmFuZ2UgYmVoaW5kICovCiAjZGVm
aW5lIFBDSV9NRU1PUllfTElNSVQJMHgyMgogI2RlZmluZSAgUENJX01FTU9SWV9SQU5HRV9UWVBF
X01BU0sgMHgwZlVMCkBAIC0xNTIsNiArMTUyLDcgQEAKICNkZWZpbmUgIFBDSV9CUklER0VfQ1RM
X01BU1RFUl9BQk9SVAkweDIwICAvKiBSZXBvcnQgbWFzdGVyIGFib3J0cyAqLwogI2RlZmluZSAg
UENJX0JSSURHRV9DVExfQlVTX1JFU0VUCTB4NDAJLyogU2Vjb25kYXJ5IGJ1cyByZXNldCAqLwog
I2RlZmluZSAgUENJX0JSSURHRV9DVExfRkFTVF9CQUNLCTB4ODAJLyogRmFzdCBCYWNrMkJhY2sg
ZW5hYmxlZCBvbiBzZWNvbmRhcnkgaW50ZXJmYWNlICovCisjZGVmaW5lICBQQ0lfQlJJREdFX0NU
TF9EVE1SX1NFUlIJMHg4MDAJLyogU0VSUiB1cG9uIGRpc2NhcmQgdGltZXIgZXhwaXJ5ICovCiAK
IC8qIEhlYWRlciB0eXBlIDIgKENhcmRCdXMgYnJpZGdlcykgKi8KICNkZWZpbmUgUENJX0NCX0NB
UEFCSUxJVFlfTElTVAkweDE0Cg==

--_004_6776757180686879707075717576698073676573806968677578777_--

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_--


From xen-devel-bounces@lists.xen.org Wed Feb 26 04:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 04:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIVuq-0007yY-KC; Wed, 26 Feb 2014 04:16:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WIVuo-0007yT-U9
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 04:16:15 +0000
Received: from [85.158.143.35:42371] by server-3.bemta-4.messagelabs.com id
	3E/23-11539-E8A6D035; Wed, 26 Feb 2014 04:16:14 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393388171!8331334!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29854 invoked from network); 26 Feb 2014 04:16:12 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-15.tower-21.messagelabs.com with SMTP;
	26 Feb 2014 04:16:12 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 25 Feb 2014 20:16:10 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,544,1389772800"; d="scan'208";a="481874562"
Received: from fmsmsx108.amr.corp.intel.com ([10.19.9.228])
	by fmsmga001.fm.intel.com with ESMTP; 25 Feb 2014 20:16:09 -0800
Received: from fmsmsx152.amr.corp.intel.com (10.19.17.221) by
	FMSMSX108.amr.corp.intel.com (10.19.9.228) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 20:16:09 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx152.amr.corp.intel.com (10.19.17.221) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 20:16:08 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 26 Feb 2014 12:16:05 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: XSA-59
Thread-Index: Ac8yqXZIfP12xxI1QoqdncUFRoqz8g==
Date: Wed, 26 Feb 2014 04:16:04 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
Content-Type: multipart/mixed;
	boundary="_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_"
MIME-Version: 1.0
Cc: "Bulygin, Yuriy" <yuriy.bulygin@intel.com>, "Mallick,
	Asit K" <asit.k.mallick@intel.com>, "Li,
	Susie" <susie.li@intel.com>, "Wang,
	Yong Y" <yong.y.wang@intel.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Auld, Will" <will.auld@intel.com>
Subject: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Jan,

For XSA-59, I list some questions (coming from your patches and emails) and=
 consult Intel inside from different groups.

Below are the questions and replies (Q2 still not got answer):
Q1: is the PCI IDs list (0x3400 ...) of root port a complete list? Jan got =
it from a disclosure that Intel made to him meanwhile well over-two-years-a=
go --> Any update about the list?
[Asit]: There is not update to this list. This was provided in 2011 and inc=
luded the Ids prior to being fixed.


Q2: is the PCI IDs list (0x100 ...) of host bridge a complete list? it come=
s from Yuriy but Jan need double confirm 'a complete list'.


Q3: the "...  without AER capability?" warning triggers on Jan's systems --=
> is it an issue? or, how to handle it properly?
[Asit] BIOS can have option to not expose AER capability. It will be good t=
o check the BIOS setup options. The error reporting should be masked so not=
 action needed.
[Yuriy] I expanded the answer to Q3 vs. what's in the attached email after =
we found out that when root port is operating in DMI mode, AER ext. capabil=
ity is not in the chain of ext. capability headers. Please use this one ins=
tead.
Answer to Q3:
On Romley system (DID 0x3c00 ... 0x3c0b), for Host bridge BDF=3D00:00.0, wh=
en the root port is operating as DMI, AER extended capability is defined in=
 VSHDR (Vendor Specific Header) configuration register (offset 0x148). It s=
hould have value 0x0004.
After pci_find_ext_capability, if it didn't find AER capability, for BDF=3D=
00:00.0 the patch would need to check if VSHDR register has value 0x0004 in=
 bits [15:0].
Below I've provided example fix for this case:
    case 0x3c00 ... 0x3c0b:
        pos =3D pci_find_ext_capability(seg, bus, pdev->devfn,
                                      PCI_EXT_CAP_ID_ERR);
        if ( !pos )
        {
            if ( 0 =3D=3D bus && 0 =3D=3D pdev->devfn )
            {
                dmi_aer_cap_id =3D pci_conf_read16(seg, 0, 0, 0, 0x148) // =
DMI Specific AER Capability ID
                if ( 0x0004 !=3D dmi_aer_cap_id )
                {
                    printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER ca=
pability?\n", seg, bus, dev, func);
                    break;          =20
                }
            }
            else
            {
                printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capabi=
lity?\n", seg, bus, dev, func);
                break;
            }
        }


Q4: the patches have no way of handling future chipsets (yet we also have n=
o indication that future chipsets would not exhibit the same bad behavior) =
--> thoughts?
[Jinsong] IMHO handle future chipset case by case.


Q5: please clarify whether masking the reporting of unsupported requests is=
 really an appropriate thing to do: After all, this masks not only maliciou=
sly created instances of them, but also any ones possibly resulting from ma=
lfunctioning hardware.
[Yuriy] Most of the client systems don't have SERR enabled (not recommended=
 configuration). For server systems, I don't know the answer to this questi=
on as our team didn't work on the issue and workaround for it when it was d=
efined. You'd need to ask Rajesh.


BTW, some other infromation from Yuriy:
VT-d-mask-UR-host-bridge.patch:
1. The workaround is only applicable to the host bridge device 00:00.0 (DMI=
BAR does not exist for other devices). The patch is written generically for=
 any PCIe device/bridge.
2. The workaround is only needed when SERR is enabled. As there will be onl=
y a fraction of client systems with SERR enabled, it might be worthwhile to=
 only apply it when SERRE is 1.
   val =3D pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);
   if ( val & PCI_COMMAND_SERR )
     apply this workaround


Thank you all,
Jinsong=

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_
Content-Type: message/rfc822
Content-Disposition: attachment;
	creation-date="Wed, 26 Feb 2014 04:16:03 GMT";
	modification-date="Wed, 26 Feb 2014 04:16:03 GMT"

From: "Bulygin, Yuriy" <yuriy.bulygin@intel.com>
To: "Auld, Will" <will.auld@intel.com>, "Sankaran, Rajesh"
	<rajesh.sankaran@intel.com>, "Monroe, Bruce" <bruce.monroe@intel.com>,
	"Dugger, Donald D" <donald.d.dugger@intel.com>
Subject: RE: Outstanding Xen Security issue
Thread-Topic: Outstanding Xen Security issue
Thread-Index: Ac7iL2nFDh09wY2ESEWl8KUTgYfDuACA2KbA
Date: Mon, 18 Nov 2013 08:52:50 +0000
References: <96EC5A4F3149B74492D2D9B9B1602C27348339B6@ORSMSX105.amr.corp.intel.com>
In-Reply-To: <96EC5A4F3149B74492D2D9B9B1602C27348339B6@ORSMSX105.amr.corp.intel.com>
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
Content-Type: multipart/mixed;
	boundary="_004_6776757180686879707075717576698073676573806968677578777_"
MIME-Version: 1.0

--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

Patches are attached.







Responses:





> > Please clarify that aspect as well as confirm that the used list of

> > PCI IDs is complete regarding all your chipsets affected by the issue

> > (or provide a list of necessary additions).



The list contains all device IDs we've confirmed observing this behavior, b=
oth server and client.



The workaround in patch "VT-d-mask-UR-host-bridge.patch" is only needed whe=
n SERR is enabled while majority of client systems shouldn't have SERR enab=
led (not recommended configuration).

Might be worth only applying workaround when SERRE is 1.

   val =3D pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);

   if ( val & PCI_COMMAND_SERR )

     apply this workaround





> Additionally, I discovered an issue needing dealing with on my Romley

> system. The patches result in

>

> (XEN) 0000:00:00.0 without AER capability?

> (XEN) Masked UR signaling on 0000:00:01.0

> (XEN) Masked UR signaling on 0000:00:01.1

> (XEN) Masked UR signaling on 0000:00:02.0

> (XEN) Masked UR signaling on 0000:00:02.2

> (XEN) Masked UR signaling on 0000:00:03.0

> (XEN) Masked UR signaling on 0000:00:03.2

> (XEN) Masked UR signaling on 0000:80:00.0

> (XEN) Masked UR signaling on 0000:80:01.0

> (XEN) Masked UR signaling on 0000:80:02.0

> (XEN) Masked UR signaling on 0000:80:02.2

> (XEN) Masked UR signaling on 0000:80:03.0

> (XEN) Masked UR signaling on 0000:80:03.2

>

> and obviously the first of these messages is of concern. The device in

> question is

>

> 00:00.0 Host bridge [0600]: Intel Corporation Sandy Bridge DMI2

> [8086:3c00] (rev 07)

>

> i.e. falling into the group of PCI IDs needing to be handled on a PCIe

> root port basis (but calls itself a host bridge). Can anyone shed light

> on why this doesn't have the expected AER capability, and - perhaps

> connected - why it's not a root port? Should we perhaps filter out host

> bridges here? Or was the device ID listed together with the other valid

> ones in error?





On Romley systems (DID 0x3c00 ... 0x3c0b), the host bridge (DMI root port, BDF=00:00.0), when operated in DMI mode, has an extended capability ID value of 0x0004, which is the Intel-specific ID for AER.



    case 0x3c00 ... 0x3c0b:

        pos = pci_find_ext_capability(seg, bus, pdev->devfn,

                                      PCI_EXT_CAP_ID_ERR);



pci_find_ext_capability in the patch should check for either the 0x0001 or the 0x0004 value of PCI_EXT_CAP before applying the workaround.





From: Auld, Will
Sent: Friday, November 15, 2013 10:21 AM
To: Sankaran, Rajesh; Monroe, Bruce; Dugger, Donald D; Bulygin, Yuriy
Cc: Auld, Will
Subject: Outstanding Xen Security issue



Attendees:

Don Dugger, Bruce Monroe, Yuriy Bulygin, Will Auld



Two patches are for client systems, one for server. Do these cover all the needed systems? Yes for shipping client systems (Broadwell will need it as well). The server patch is OK but has a bug.



Yuriy will send all three patches with details of the server patch bug, along with the higher-level details for Don to communicate to SuSE. Yuriy will send this to Don by Monday morning, 8 AM Pacific.



Thanks everyone,



Will






--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: application/octet-stream; name="VT-d-mask-UR-root-port.patch"
Content-Description: VT-d-mask-UR-root-port.patch
Content-Disposition: attachment; filename="VT-d-mask-UR-root-port.patch";
	size=2396; creation-date="Thu, 22 Aug 2013 12:06:45 GMT";
	modification-date="Thu, 22 Aug 2013 12:06:45 GMT"
Content-Transfer-Encoding: base64

LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3F1aXJrcy5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xdWlya3MuYwpAQCAtMzkwLDEyICszOTAsNDQgQEAgdm9pZCBf
X2luaXQgcGNpX3Z0ZF9xdWlyayhzdHJ1Y3QgcGNpX2RldgogICAgIGludCBidXMgPSBwZGV2LT5i
dXM7CiAgICAgaW50IGRldiA9IFBDSV9TTE9UKHBkZXYtPmRldmZuKTsKICAgICBpbnQgZnVuYyA9
IFBDSV9GVU5DKHBkZXYtPmRldmZuKTsKLSAgICBpbnQgaWQsIHZhbDsKKyAgICBpbnQgcG9zOwor
ICAgIHUzMiB2YWw7CiAKLSAgICBpZCA9IHBjaV9jb25mX3JlYWQzMihzZWcsIGJ1cywgZGV2LCBm
dW5jLCAwKTsKLSAgICBpZiAoIGlkID09IDB4MzQyZTgwODYgfHwgaWQgPT0gMHgzYzI4ODA4NiAp
CisgICAgaWYgKCBwY2lfY29uZl9yZWFkMTYoc2VnLCBidXMsIGRldiwgZnVuYywgUENJX1ZFTkRP
Ul9JRCkgIT0gMHg4MDg2ICkKKyAgICAgICAgcmV0dXJuOworCisgICAgc3dpdGNoICggcGNpX2Nv
bmZfcmVhZDE2KHNlZywgYnVzLCBkZXYsIGZ1bmMsIFBDSV9ERVZJQ0VfSUQpICkKICAgICB7Cisg
ICAgY2FzZSAweDM0MmU6CisgICAgY2FzZSAweDNjMjg6CiAgICAgICAgIHZhbCA9IHBjaV9jb25m
X3JlYWQzMihzZWcsIGJ1cywgZGV2LCBmdW5jLCAweDFBQyk7CiAgICAgICAgIHBjaV9jb25mX3dy
aXRlMzIoc2VnLCBidXMsIGRldiwgZnVuYywgMHgxQUMsIHZhbCB8ICgxIDw8IDMxKSk7CisgICAg
ICAgIGJyZWFrOworCisgICAgY2FzZSAweDM0MDA6IGNhc2UgMHgzNDA4IC4uLiAweDM0MTE6IGNh
c2UgMHgzNDIwOgorICAgIGNhc2UgMHgzYzAwIC4uLiAweDNjMGI6CisgICAgICAgIHBvcyA9IHBj
aV9maW5kX2V4dF9jYXBhYmlsaXR5KHNlZywgYnVzLCBwZGV2LT5kZXZmbiwKKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgUENJX0VYVF9DQVBfSURfRVJSKTsKKyAgICAgICAg
aWYgKCAhcG9zICkKKyAgICAgICAgeworICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19XQVJOSU5H
ICIlMDR4OiUwMng6JTAyeC4ldSB3aXRob3V0IEFFUiBjYXBhYmlsaXR5P1xuIiwKKyAgICAgICAg
ICAgICAgICAgICBzZWcsIGJ1cywgZGV2LCBmdW5jKTsKKyAgICAgICAgICAgIGJyZWFrOworICAg
ICAgICB9CisKKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVhZDMyKHNlZywgYnVzLCBkZXYsIGZ1
bmMsIHBvcyArIFBDSV9FUlJfVU5DT1JfTUFTSyk7CisgICAgICAgIHBjaV9jb25mX3dyaXRlMzIo
c2VnLCBidXMsIGRldiwgZnVuYywgcG9zICsgUENJX0VSUl9VTkNPUl9NQVNLLAorICAgICAgICAg
ICAgICAgICAgICAgICAgIHZhbCB8IFBDSV9FUlJfVU5DX1VOU1VQKTsKKyAgICAgICAgdmFsID0g
cGNpX2NvbmZfcmVhZDMyKHNlZywgYnVzLCBkZXYsIGZ1bmMsIHBvcyArIFBDSV9FUlJfQ09SX01B
U0spOworICAgICAgICBwY2lfY29uZl93cml0ZTMyKHNlZywgYnVzLCBkZXYsIGZ1bmMsIHBvcyAr
IFBDSV9FUlJfQ09SX01BU0ssCisgICAgICAgICAgICAgICAgICAgICAgICAgdmFsIHwgUENJX0VS
Ul9DT1JfQURWX05GQVQpOworCisgICAgICAgIC8qIFhQVU5DRVJSTVNLIFNlbmQgQ29tcGxldGlv
biB3aXRoIFVuc3VwcG9ydGVkIFJlcXVlc3QgKi8KKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVh
ZDMyKHNlZywgYnVzLCBkZXYsIGZ1bmMsIDB4MjBjKTsKKyAgICAgICAgcGNpX2NvbmZfd3JpdGUz
MihzZWcsIGJ1cywgZGV2LCBmdW5jLCAweDIwYywgdmFsIHwgKDEgPDwgNCkpOworCisgICAgICAg
IHByaW50ayhYRU5MT0dfSU5GTyAiTWFza2VkIFVSIHNpZ25hbGluZyBvbiAlMDR4OiUwMng6JTAy
eC4ldVxuIiwKKyAgICAgICAgICAgICAgIHNlZywgYnVzLCBkZXYsIGZ1bmMpOworICAgICAgICBi
cmVhazsKICAgICB9CiB9Ci0tLSBhL3hlbi9pbmNsdWRlL3hlbi9wY2lfcmVncy5oCisrKyBiL3hl
bi9pbmNsdWRlL3hlbi9wY2lfcmVncy5oCkBAIC00NTksNiArNDU5LDcgQEAKICNkZWZpbmUgIFBD
SV9FUlJfQ09SX0JBRF9ETExQCTB4MDAwMDAwODAJLyogQmFkIERMTFAgU3RhdHVzICovCiAjZGVm
aW5lICBQQ0lfRVJSX0NPUl9SRVBfUk9MTAkweDAwMDAwMTAwCS8qIFJFUExBWV9OVU0gUm9sbG92
ZXIgKi8KICNkZWZpbmUgIFBDSV9FUlJfQ09SX1JFUF9USU1FUgkweDAwMDAxMDAwCS8qIFJlcGxh
eSBUaW1lciBUaW1lb3V0ICovCisjZGVmaW5lICBQQ0lfRVJSX0NPUl9BRFZfTkZBVAkweDAwMDAy
MDAwCS8qIEFkdmlzb3J5IE5vbi1GYXRhbCAqLwogI2RlZmluZSBQQ0lfRVJSX0NPUl9NQVNLCTIw
CS8qIENvcnJlY3RhYmxlIEVycm9yIE1hc2sgKi8KIAkvKiBTYW1lIGJpdHMgYXMgYWJvdmUgKi8K
ICNkZWZpbmUgUENJX0VSUl9DQVAJCTI0CS8qIEFkdmFuY2VkIEVycm9yIENhcGFiaWxpdGllcyAq
Lwo=

--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: application/octet-stream;
	name="VT-d-mask-UR-host-bridge.patch"
Content-Description: VT-d-mask-UR-host-bridge.patch
Content-Disposition: attachment; filename="VT-d-mask-UR-host-bridge.patch";
	size=1623; creation-date="Thu, 22 Aug 2013 12:06:45 GMT";
	modification-date="Thu, 22 Aug 2013 12:06:45 GMT"
Content-Transfer-Encoding: base64

LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3F1aXJrcy5jCisrKyBiL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9xdWlya3MuYwpAQCAtMzkyLDYgKzM5Miw4IEBAIHZvaWQgX19p
bml0IHBjaV92dGRfcXVpcmsoc3RydWN0IHBjaV9kZXYKICAgICBpbnQgZnVuYyA9IFBDSV9GVU5D
KHBkZXYtPmRldmZuKTsKICAgICBpbnQgcG9zOwogICAgIHUzMiB2YWw7CisgICAgdTY0IGJhcjsK
KyAgICBwYWRkcl90IHBhOwogCiAgICAgaWYgKCBwY2lfY29uZl9yZWFkMTYoc2VnLCBidXMsIGRl
diwgZnVuYywgUENJX1ZFTkRPUl9JRCkgIT0gMHg4MDg2ICkKICAgICAgICAgcmV0dXJuOwpAQCAt
NDI5LDUgKzQzMSwzMyBAQCB2b2lkIF9faW5pdCBwY2lfdnRkX3F1aXJrKHN0cnVjdCBwY2lfZGV2
CiAgICAgICAgIHByaW50ayhYRU5MT0dfSU5GTyAiTWFza2VkIFVSIHNpZ25hbGluZyBvbiAlMDR4
OiUwMng6JTAyeC4ldVxuIiwKICAgICAgICAgICAgICAgIHNlZywgYnVzLCBkZXYsIGZ1bmMpOwog
ICAgICAgICBicmVhazsKKworICAgIGNhc2UgMHgxMDA6IGNhc2UgMHgxMDQ6IGNhc2UgMHgxMDg6
CisgICAgY2FzZSAweDE1MDogY2FzZSAweDE1NDogY2FzZSAweDE1ODoKKyAgICBjYXNlIDB4YTA0
OgorICAgIGNhc2UgMHhjMDA6IGNhc2UgMHhjMDQ6IGNhc2UgMHhjMDg6CisgICAgICAgIGJhciA9
IHBjaV9jb25mX3JlYWQzMihzZWcsIGJ1cywgZGV2LCBmdW5jLCAweDZjKTsKKyAgICAgICAgYmFy
ID0gKGJhciA8PCAzMikgfCBwY2lfY29uZl9yZWFkMzIoc2VnLCBidXMsIGRldiwgZnVuYywgMHg2
OCk7CisgICAgICAgIHBhID0gYmFyICYgMHg3ZmZmZmYwMDA7IC8qIGJpdHMgMTIuLi4zOCAqLwor
ICAgICAgICBpZiAoIChiYXIgJiAxKSAmJiBwYSAmJgorICAgICAgICAgICAgIHBhZ2VfaXNfcmFt
X3R5cGUocGFkZHJfdG9fcGZuKHBhKSwgUkFNX1RZUEVfUkVTRVJWRUQpICkKKyAgICAgICAgewor
ICAgICAgICAgICAgdTMyIF9faW9tZW0gKnZhID0gaW9yZW1hcChwYSwgUEFHRV9TSVpFKTsKKwor
ICAgICAgICAgICAgaWYgKCB2YSApCisgICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgX19z
ZXRfYml0KDB4MWM4ICogOCArIDIwLCB2YSk7CisgICAgICAgICAgICAgICAgaW91bm1hcCh2YSk7
CisgICAgICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19JTkZPICJNYXNrZWQgVVIgc2lnbmFsaW5n
IG9uICUwNHg6JTAyeDolMDJ4LiV1XG4iLAorICAgICAgICAgICAgICAgICAgICAgICBzZWcsIGJ1
cywgZGV2LCBmdW5jKTsKKyAgICAgICAgICAgIH0KKyAgICAgICAgICAgIGVsc2UKKyAgICAgICAg
ICAgICAgICBwcmludGsoWEVOTE9HX0VSUiAiQ291bGQgbm90IG1hcCAlIlBSSXBhZGRyIiBmb3Ig
JTA0eDolMDJ4OiUwMnguJXVcbiIsCisgICAgICAgICAgICAgICAgICAgICAgIHBhLCBzZWcsIGJ1
cywgZGV2LCBmdW5jKTsKKyAgICAgICAgfQorICAgICAgICBlbHNlCisgICAgICAgICAgICBwcmlu
dGsoWEVOTE9HX1dBUk5JTkcgIkJvZ3VzIERNSUJBUiAlIyJQUkl4NjQiIG9uICUwNHg6JTAyeDol
MDJ4LiV1XG4iLAorICAgICAgICAgICAgICAgICAgIGJhciwgc2VnLCBidXMsIGRldiwgZnVuYyk7
CisgICAgICAgIGJyZWFrOwogICAgIH0KIH0K

--_004_6776757180686879707075717576698073676573806968677578777_
Content-Type: application/octet-stream; name="pt-suppress-SERR.patch"
Content-Description: pt-suppress-SERR.patch
Content-Disposition: attachment; filename="pt-suppress-SERR.patch"; size=4750;
	creation-date="Thu, 22 Aug 2013 12:06:45 GMT";
	modification-date="Thu, 22 Aug 2013 12:06:45 GMT"
Content-Transfer-Encoding: base64

LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvcGNpLmMKKysrIGIveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvcGNpLmMKQEAgLTE2Miw2ICsxNjIsMTEyIEBAIHN0YXRpYyB2b2lkIF9faW5pdCBw
YXJzZV9waGFudG9tX2RldihjaGEKIH0KIGN1c3RvbV9wYXJhbSgicGNpLXBoYW50b20iLCBwYXJz
ZV9waGFudG9tX2Rldik7CiAKK3N0YXRpYyB1MTYgX19yZWFkX21vc3RseSBjb21tYW5kX21hc2s7
CitzdGF0aWMgdTE2IF9fcmVhZF9tb3N0bHkgYnJpZGdlX2N0bF9tYXNrOworCisvKgorICogVGhl
ICdwY2knIHBhcmFtZXRlciBjb250cm9scyBjZXJ0YWluIFBDSSBkZXZpY2UgYXNwZWN0cy4KKyAq
IE9wdGlvbmFsIGNvbW1hIHNlcGFyYXRlZCB2YWx1ZSBtYXkgY29udGFpbjoKKyAqCisgKiAgIHNl
cnIgICAgICAgICAgICAgICAgICAgICAgIGRvbid0IHN1cHByZXNzIHN5c3RlbSBlcnJvcnMgKGRl
ZmF1bHQpCisgKiAgIG5vLXNlcnIgICAgICAgICAgICAgICAgICAgIHN1cHByZXNzIHN5c3RlbSBl
cnJvcnMKKyAqICAgcGVyciAgICAgICAgICAgICAgICAgICAgICAgZG9uJ3Qgc3VwcHJlc3MgcGFy
aXR5IGVycm9ycyAoZGVmYXVsdCkKKyAqICAgbm8tcGVyciAgICAgICAgICAgICAgICAgICAgc3Vw
cHJlc3MgcGFyaXR5IGVycm9ycworICovCitzdGF0aWMgdm9pZCBfX2luaXQgcGFyc2VfcGNpX3Bh
cmFtKGNoYXIgKnMpCit7CisgICAgY2hhciAqc3M7CisKKyAgICBkbyB7CisgICAgICAgIGJvb2xf
dCBvbiA9ICEhc3RybmNtcChzLCAibm8tIiwgMyk7CisgICAgICAgIHUxNiBjbWRfbWFzayA9IDAs
IGJyY3RsX21hc2sgPSAwOworCisgICAgICAgIGlmICggIW9uICkKKyAgICAgICAgICAgIHMgKz0g
MzsKKworICAgICAgICBzcyA9IHN0cmNocihzLCAnLCcpOworICAgICAgICBpZiAoIHNzICkKKyAg
ICAgICAgICAgICpzcyA9ICdcMCc7CisKKyAgICAgICAgaWYgKCAhc3RyY21wKHMsICJzZXJyIikg
KQorICAgICAgICB7CisgICAgICAgICAgICBjbWRfbWFzayA9IFBDSV9DT01NQU5EX1NFUlI7Cisg
ICAgICAgICAgICBicmN0bF9tYXNrID0gUENJX0JSSURHRV9DVExfU0VSUiB8IFBDSV9CUklER0Vf
Q1RMX0RUTVJfU0VSUjsKKyAgICAgICAgfQorICAgICAgICBlbHNlIGlmICggIXN0cmNtcChzLCAi
cGVyciIpICkKKyAgICAgICAgeworICAgICAgICAgICAgY21kX21hc2sgPSBQQ0lfQ09NTUFORF9Q
QVJJVFk7CisgICAgICAgICAgICBicmN0bF9tYXNrID0gUENJX0JSSURHRV9DVExfUEFSSVRZOwor
ICAgICAgICB9CisKKyAgICAgICAgaWYgKCBvbiApCisgICAgICAgIHsKKyAgICAgICAgICAgIGNv
bW1hbmRfbWFzayAmPSB+Y21kX21hc2s7CisgICAgICAgICAgICBicmlkZ2VfY3RsX21hc2sgJj0g
fmJyY3RsX21hc2s7CisgICAgICAgIH0KKyAgICAgICAgZWxzZQorICAgICAgICB7CisgICAgICAg
ICAgICBjb21tYW5kX21hc2sgfD0gY21kX21hc2s7CisgICAgICAgICAgICBicmlkZ2VfY3RsX21h
c2sgfD0gYnJjdGxfbWFzazsKKyAgICAgICAgfQorCisgICAgICAgIHMgPSBzcyArIDE7CisgICAg
fSB3aGlsZSAoIHNzICk7Cit9CitjdXN0b21fcGFyYW0oInBjaSIsIHBhcnNlX3BjaV9wYXJhbSk7
CisKK3N0YXRpYyB2b2lkIGNoZWNrX3BkZXYoY29uc3Qgc3RydWN0IHBjaV9kZXYgKnBkZXYpCit7
CisjZGVmaW5lIFBDSV9TVEFUVVNfQ0hFQ0sgXAorICAgIChQQ0lfU1RBVFVTX1BBUklUWSB8IFBD
SV9TVEFUVVNfU0lHX1RBUkdFVF9BQk9SVCB8IFwKKyAgICAgUENJX1NUQVRVU19SRUNfVEFSR0VU
X0FCT1JUIHwgUENJX1NUQVRVU19SRUNfTUFTVEVSX0FCT1JUIHwgXAorICAgICBQQ0lfU1RBVFVT
X1NJR19TWVNURU1fRVJST1IgfCBQQ0lfU1RBVFVTX0RFVEVDVEVEX1BBUklUWSkKKyAgICB1MTYg
c2VnID0gcGRldi0+c2VnOworICAgIHU4IGJ1cyA9IHBkZXYtPmJ1czsKKyAgICB1OCBkZXYgPSBQ
Q0lfU0xPVChwZGV2LT5kZXZmbik7CisgICAgdTggZnVuYyA9IFBDSV9GVU5DKHBkZXYtPmRldmZu
KTsKKyAgICB1MTYgdmFsOworCisgICAgaWYgKCBjb21tYW5kX21hc2sgKQorICAgIHsKKyAgICAg
ICAgdmFsID0gcGNpX2NvbmZfcmVhZDE2KHNlZywgYnVzLCBkZXYsIGZ1bmMsIFBDSV9DT01NQU5E
KTsKKyAgICAgICAgaWYgKCB2YWwgJiBjb21tYW5kX21hc2sgKQorICAgICAgICAgICAgcGNpX2Nv
bmZfd3JpdGUxNihzZWcsIGJ1cywgZGV2LCBmdW5jLCBQQ0lfQ09NTUFORCwKKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgdmFsICYgfmNvbW1hbmRfbWFzayk7CisgICAgICAgIHZhbCA9IHBj
aV9jb25mX3JlYWQxNihzZWcsIGJ1cywgZGV2LCBmdW5jLCBQQ0lfU1RBVFVTKTsKKyAgICAgICAg
aWYgKCB2YWwgJiBQQ0lfU1RBVFVTX0NIRUNLICkKKyAgICAgICAgeworICAgICAgICAgICAgcHJp
bnRrKFhFTkxPR19JTkZPICIlMDR4OiUwMng6JTAyeC4ldSBzdGF0dXMgJTA0eFxuIiwKKyAgICAg
ICAgICAgICAgICAgICBzZWcsIGJ1cywgZGV2LCBmdW5jLCB2YWwpOworICAgICAgICAgICAgcGNp
X2NvbmZfd3JpdGUxNihzZWcsIGJ1cywgZGV2LCBmdW5jLCBQQ0lfU1RBVFVTLCB2YWwpOworICAg
ICAgICB9CisgICAgfQorCisgICAgc3dpdGNoICggcGNpX2NvbmZfcmVhZDgoc2VnLCBidXMsIGRl
diwgZnVuYywgUENJX0hFQURFUl9UWVBFKSAmIDB4N2YgKQorICAgIHsKKyAgICBjYXNlIFBDSV9I
RUFERVJfVFlQRV9CUklER0U6CisgICAgICAgIGlmICggIWJyaWRnZV9jdGxfbWFzayApCisgICAg
ICAgICAgICBicmVhazsKKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVhZDE2KHNlZywgYnVzLCBk
ZXYsIGZ1bmMsIFBDSV9CUklER0VfQ09OVFJPTCk7CisgICAgICAgIGlmICggdmFsICYgYnJpZGdl
X2N0bF9tYXNrICkKKyAgICAgICAgICAgIHBjaV9jb25mX3dyaXRlMTYoc2VnLCBidXMsIGRldiwg
ZnVuYywgUENJX0JSSURHRV9DT05UUk9MLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICB2
YWwgJiB+YnJpZGdlX2N0bF9tYXNrKTsKKyAgICAgICAgdmFsID0gcGNpX2NvbmZfcmVhZDE2KHNl
ZywgYnVzLCBkZXYsIGZ1bmMsIFBDSV9TRUNfU1RBVFVTKTsKKyAgICAgICAgaWYgKCB2YWwgJiBQ
Q0lfU1RBVFVTX0NIRUNLICkKKyAgICAgICAgeworICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19J
TkZPICIlMDR4OiUwMng6JTAyeC4ldSBzZWNvbmRhcnkgc3RhdHVzICUwNHhcbiIsCisgICAgICAg
ICAgICAgICAgICAgc2VnLCBidXMsIGRldiwgZnVuYywgdmFsKTsKKyAgICAgICAgICAgIHBjaV9j
b25mX3dyaXRlMTYoc2VnLCBidXMsIGRldiwgZnVuYywgUENJX1NFQ19TVEFUVVMsIHZhbCk7Cisg
ICAgICAgIH0KKyAgICAgICAgYnJlYWs7CisKKyAgICBjYXNlIFBDSV9IRUFERVJfVFlQRV9DQVJE
QlVTOgorICAgICAgICAvKiBUT0RPICovCisgICAgICAgIGJyZWFrOworICAgIH0KKyN1bmRlZiBQ
Q0lfU1RBVFVTX0NIRUNLCit9CisKIHN0YXRpYyBzdHJ1Y3QgcGNpX2RldiAqYWxsb2NfcGRldihz
dHJ1Y3QgcGNpX3NlZyAqcHNlZywgdTggYnVzLCB1OCBkZXZmbikKIHsKICAgICBzdHJ1Y3QgcGNp
X2RldiAqcGRldjsKQEAgLTI2MSw2ICszNjcsOCBAQCBzdGF0aWMgc3RydWN0IHBjaV9kZXYgKmFs
bG9jX3BkZXYoc3RydWN0CiAgICAgICAgICAgICBicmVhazsKICAgICB9CiAKKyAgICBjaGVja19w
ZGV2KHBkZXYpOworCiAgICAgcmV0dXJuIHBkZXY7CiB9CiAKQEAgLTU3NSw2ICs2ODMsOCBAQCBp
bnQgcGNpX2FkZF9kZXZpY2UodTE2IHNlZywgdTggYnVzLCB1OCBkCiAgICAgICAgICAgICAgICAg
ICAgc2VnLCBidXMsIHNsb3QsIGZ1bmMsIGN0cmwpOwogICAgIH0KIAorICAgIGNoZWNrX3BkZXYo
cGRldik7CisKICAgICByZXQgPSAwOwogICAgIGlmICggIXBkZXYtPmRvbWFpbiApCiAgICAgewot
LS0gYS94ZW4vaW5jbHVkZS94ZW4vcGNpX3JlZ3MuaAorKysgYi94ZW4vaW5jbHVkZS94ZW4vcGNp
X3JlZ3MuaApAQCAtMTI1LDcgKzEyNSw3IEBACiAjZGVmaW5lICBQQ0lfSU9fUkFOR0VfVFlQRV8x
NgkweDAwCiAjZGVmaW5lICBQQ0lfSU9fUkFOR0VfVFlQRV8zMgkweDAxCiAjZGVmaW5lICBQQ0lf
SU9fUkFOR0VfTUFTSwkofjB4MGZVTCkKLSNkZWZpbmUgUENJX1NFQ19TVEFUVVMJCTB4MWUJLyog
U2Vjb25kYXJ5IHN0YXR1cyByZWdpc3Rlciwgb25seSBiaXQgMTQgdXNlZCAqLworI2RlZmluZSBQ
Q0lfU0VDX1NUQVRVUwkJMHgxZQkvKiBTZWNvbmRhcnkgc3RhdHVzIHJlZ2lzdGVyICovCiAjZGVm
aW5lIFBDSV9NRU1PUllfQkFTRQkJMHgyMAkvKiBNZW1vcnkgcmFuZ2UgYmVoaW5kICovCiAjZGVm
aW5lIFBDSV9NRU1PUllfTElNSVQJMHgyMgogI2RlZmluZSAgUENJX01FTU9SWV9SQU5HRV9UWVBF
X01BU0sgMHgwZlVMCkBAIC0xNTIsNiArMTUyLDcgQEAKICNkZWZpbmUgIFBDSV9CUklER0VfQ1RM
X01BU1RFUl9BQk9SVAkweDIwICAvKiBSZXBvcnQgbWFzdGVyIGFib3J0cyAqLwogI2RlZmluZSAg
UENJX0JSSURHRV9DVExfQlVTX1JFU0VUCTB4NDAJLyogU2Vjb25kYXJ5IGJ1cyByZXNldCAqLwog
I2RlZmluZSAgUENJX0JSSURHRV9DVExfRkFTVF9CQUNLCTB4ODAJLyogRmFzdCBCYWNrMkJhY2sg
ZW5hYmxlZCBvbiBzZWNvbmRhcnkgaW50ZXJmYWNlICovCisjZGVmaW5lICBQQ0lfQlJJREdFX0NU
TF9EVE1SX1NFUlIJMHg4MDAJLyogU0VSUiB1cG9uIGRpc2NhcmQgdGltZXIgZXhwaXJ5ICovCiAK
IC8qIEhlYWRlciB0eXBlIDIgKENhcmRCdXMgYnJpZGdlcykgKi8KICNkZWZpbmUgUENJX0NCX0NB
UEFCSUxJVFlfTElTVAkweDE0Cg==

--_004_6776757180686879707075717576698073676573806968677578777_--

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--_002_DE8DF0795D48FD4CA783C40EC8292335015154D4SHSMSX101ccrcor_--


From xen-devel-bounces@lists.xen.org Wed Feb 26 04:23:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 04:23:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIW2A-00089h-2Q; Wed, 26 Feb 2014 04:23:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WIW28-00089c-4T
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 04:23:48 +0000
Received: from [193.109.254.147:38115] by server-3.bemta-14.messagelabs.com id
	20/A0-00432-35C6D035; Wed, 26 Feb 2014 04:23:47 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393388626!2854536!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9190 invoked from network); 26 Feb 2014 04:23:46 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-9.tower-27.messagelabs.com with SMTP;
	26 Feb 2014 04:23:46 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 25 Feb 2014 20:23:45 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,544,1389772800"; d="scan'208";a="489887779"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by orsmga002.jf.intel.com with ESMTP; 25 Feb 2014 20:23:30 -0800
Received: from fmsmsx116.amr.corp.intel.com (10.18.116.20) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 20:23:30 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx116.amr.corp.intel.com (10.18.116.20) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 20:23:29 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 26 Feb 2014 12:23:25 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "xen-devel@lists.xen.org"
	<xen-devel@lists.xen.org>
Thread-Topic: XSA-59
Thread-Index: Ac8yqXZIfP12xxI1QoqdncUFRoqz8gAAM8MA
Date: Wed, 26 Feb 2014 04:23:25 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC8292335015154FD@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "Bulygin, Yuriy" <yuriy.bulygin@intel.com>, "Mallick,
	Asit K" <asit.k.mallick@intel.com>, "Li,
	Susie" <susie.li@intel.com>, "Wang,
	Yong Y" <yong.y.wang@intel.com>, "Dugger,
	Donald D" <donald.d.dugger@intel.com>, "Auld,
	Will" <will.auld@intel.com>, "Sankaran, Rajesh" <rajesh.sankaran@intel.com>
Subject: Re: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Liu, Jinsong wrote:
> Jan,
> 
> For XSA-59, I have listed some questions (coming from your patches and
> emails) and consulted different groups inside Intel.
> 
> Below are the questions and replies (Q2 has not been answered yet):
> Q1: is the PCI IDs list (0x3400 ...) of root port a complete list?
> Jan got it from a disclosure that Intel made to him well over two
> years ago --> Any update about the list? [Asit]: There is no update
> to this list. This was provided in 2011 and included the IDs prior
> to being fixed.
> 
> 
> Q2: is the PCI IDs list (0x100 ...) of host bridge a complete list?
> It comes from Yuriy, but Jan needs to double-confirm that it is 'a
> complete list'.
> 
> 
> Q3: the "...  without AER capability?" warning triggers on Jan's
> systems --> is it an issue? or, how to handle it properly? [Asit]
> The BIOS can have an option to not expose the AER capability. It
> would be good to check the BIOS setup options. The error reporting
> should be masked, so no action is needed. [Yuriy] I expanded the
> answer to Q3 versus what's in the attached email after we found out
> that when the root port is operating in DMI mode, the AER extended
> capability is not in the chain of extended capability headers.
> Please use this one instead.
> Answer to Q3:
> On Romley systems (DID 0x3c00 ... 0x3c0b), for the host bridge
> BDF=00:00.0, when the root port is operating as DMI, the AER
> extended capability is defined in the VSHDR (Vendor Specific Header)
> configuration register (offset 0x148). It should have value 0x0004.
> If pci_find_ext_capability doesn't find the AER capability, for
> BDF=00:00.0 the patch would need to check whether the VSHDR register
> has value 0x0004 in bits [15:0].
> Below I've provided an example fix for this case:
>     case 0x3c00 ... 0x3c0b:
>         pos = pci_find_ext_capability(seg, bus, pdev->devfn,
>                                       PCI_EXT_CAP_ID_ERR);
>         if ( !pos )
>         {
>             if ( 0 == bus && 0 == pdev->devfn )
>             {
>                 /* DMI Specific AER Capability ID */
>                 dmi_aer_cap_id = pci_conf_read16(seg, 0, 0, 0, 0x148);
>                 if ( 0x0004 != dmi_aer_cap_id )
>                 {
>                     printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
>                            seg, bus, dev, func);
>                     break;
>                 }
>             }
>             else
>             {
>                 printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
>                        seg, bus, dev, func);
>                 break;
>             }
>         }
> 
> 
> Q4: the patches have no way of handling future chipsets (yet we also
> have no indication that future chipsets would not exhibit the same
> bad behavior) --> thoughts? [Jinsong] IMHO, handle future chipsets
> case by case.
> 
> 
> Q5: please clarify whether masking the reporting of unsupported
> requests is really an appropriate thing to do: After all, this masks
> not only maliciously created instances of them, but also any ones
> possibly resulting from malfunctioning hardware. [Yuriy] Most of the
> client systems don't have SERR enabled (not recommended
> configuration). For server systems, I don't know the answer to this
> question as our team didn't work on the issue and workaround for it
> when it was defined. You'd need to ask Rajesh.  

Sorry, cc Rajesh.

Thanks,
Jinsong
    
> 
> 
> BTW, some other information from Yuriy:
> VT-d-mask-UR-host-bridge.patch:
> 1. The workaround is only applicable to the host bridge device
> 00:00.0 (DMIBAR does not exist for other devices). The patch is
> written generically for any PCIe device/bridge.  
> 2. The workaround is only needed when SERR is enabled. As there will
>    be only a fraction of client systems with SERR enabled, it might
>    be worthwhile to only apply it when SERRE is 1:
>        val = pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);
>        if ( val & PCI_COMMAND_SERR )
>            apply this workaround
> 
> 
> Thank you all,
> Jinsong


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 05:15:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 05:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIWqA-00007a-AC; Wed, 26 Feb 2014 05:15:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WIWq7-00007V-UR
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 05:15:28 +0000
Received: from [85.158.139.211:12120] by server-12.bemta-5.messagelabs.com id
	C4/4A-15415-F687D035; Wed, 26 Feb 2014 05:15:27 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393391725!6279288!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29562 invoked from network); 26 Feb 2014 05:15:25 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-4.tower-206.messagelabs.com with SMTP;
	26 Feb 2014 05:15:25 -0000
Received: from orsmga001.jf.intel.com ([10.7.209.18])
	by orsmga102.jf.intel.com with ESMTP; 25 Feb 2014 21:11:01 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,544,1389772800"; d="scan'208";a="462375132"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga001.jf.intel.com with ESMTP; 25 Feb 2014 21:15:23 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.18.116.10) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 21:15:23 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.18.116.10) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Tue, 25 Feb 2014 21:15:23 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Wed, 26 Feb 2014 13:15:18 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Juergen Gross <juergen.gross@ts.fujitsu.com>
Thread-Topic: [Xen-devel] Single step in HVM domU on Intel machine may see
	wrong DB6
Thread-Index: AQHPLhb/TXz9W/5OvEmwKI7dSE2uP5q+4NwQ///Ki4CACFahcA==
Date: Wed, 26 Feb 2014 05:15:18 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
	<5306E5D3.6000302@ts.fujitsu.com>
In-Reply-To: <5306E5D3.6000302@ts.fujitsu.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>, "Dong,
	Eddie" <eddie.dong@intel.com>, Jan Beulich <JBeulich@suse.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Juergen Gross wrote on 2014-02-21:
> On 21.02.2014 02:26, Zhang, Yang Z wrote:
>> Juergen Gross wrote on 2014-02-20:
>>> Hi,
>> 
>> Hi, Juergen
>> 
>>> 
>>> I think I've found a bug in debug trap handling in the Xen hypervisor
>>> in case of an HVM domU using single stepping:
>>> 
>>> Debug registers are restored on vcpu switch only if db7 has any debug
>>> events activated or if the debug registers are marked to be used by
>>> the domU. This leads to problems if the domU uses single stepping and
>>> vcpu switch occurs between the single step trap and reading of db6 in
>>> the guest. db6 contents (single step indicator) are lost in this case.
>>> 
>>> Jan suggested to intercept the debug trap in the hypervisor and mark
>>> the debug registers to be used by the domU to enable saving and
>>> restoring the debug registers in case of a context switch. I used the
>>> attached patch (applies to Xen 4.2.3) to verify this solution and it
>>> worked (without the patch a test was able to reproduce the bug once in
>>> about 3 hours, with the patch the test ran for more than 12 hours
>>> without problem).
>>> 
>>> Obviously the patch isn't the final one, as I deactivated the "monitor
>>> trap flag" feature to avoid any strange dependencies. Jan wanted
>>> someone from the VMX folks to put together a proper fix to avoid
>>> overlooking some corner case.
>>> 
>> 
>> Thanks for reporting this issue.
>> Actually, I'm not sure of the scenario in which you saw this issue. Are you using
> single stepping inside the guest, or running gdb to debug the VM remotely?
> 
> Single step inside guest:
> 
> 1. Guest sets TF flag in status loaded by IRET and does IRET
> 2. Debug trap in guest occurs, physical DB6 holds single step indicator
> 3. vcpu scheduling event occurs, debug registers are NOT saved as not marked
>     being dirty and DB7 has no debug events configured
> 4. when guest vcpu is scheduled again, DB6 has lost single step indicator

How about the following patch? It is untested because I don't have the environment.
After setting trap_debug in the guest exception bitmap, the vmexit for trap_debug is used not only by gdb, but also by the guest itself. On such a vmexit, we need to restore the debug registers and inject a debug trap exception into the guest.

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b128e81..113a313 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1171,8 +1171,6 @@ void vmx_update_debug_state(struct vcpu *v)
     unsigned long mask;
 
     mask = 1u << TRAP_int3;
-    if ( !cpu_has_monitor_trap_flag )
-        mask |= 1u << TRAP_debug;
 
     if ( v->arch.hvm_vcpu.debug_state_latch )
         v->arch.hvm_vmx.exception_bitmap |= mask;
@@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             __vmread(EXIT_QUALIFICATION, &exit_qualification);
             HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
             write_debugreg(6, exit_qualification | 0xffff0ff0);
-            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
-                goto exit_and_crash;
-            domain_pause_for_debugger();
+            if ( v->domain->debugger_attached )
+                domain_pause_for_debugger();
+            else 
+            {
+                __restore_debug_registers(v);
+                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
+            }
             break;
         case TRAP_int3: 
         {
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index dcc3483..0d76d8c 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -378,7 +378,8 @@ static inline int hvm_event_pending(struct vcpu *v)
         (cpu_has_xsave ? X86_CR4_OSXSAVE : 0))))
 
 /* These exceptions must always be intercepted. */
-#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op))
+#define HVM_TRAP_MASK ((1U << TRAP_machine_check) | (1U << TRAP_invalid_op) |\
+                       (1U << TRAP_debug))
 
 /*
  * x86 event types. This enumeration is valid for:


BTW: I also think we should clear the CPU_BASED_MOV_DR_EXITING bit in __restore_debug_registers(). After restoring the debug registers, we should not trap any DR access unless the VCPU is scheduled out again. Not sure whether I am wrong about this.

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index b128e81..56a3140 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -394,6 +394,8 @@ static void __restore_debug_registers(struct vcpu *v)
     write_debugreg(3, v->arch.debugreg[3]);
     write_debugreg(6, v->arch.debugreg[6]);
     /* DR7 is loaded from the VMCS. */
+    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MOV_DR_EXITING;
+    vmx_update_cpu_exec_control(v);
 }
 
 /*

> 
> 
> Juergen
>


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 07:10:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 07:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIYdG-0000in-Vg; Wed, 26 Feb 2014 07:10:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WIYdF-0000ii-Br
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 07:10:17 +0000
Received: from [193.109.254.147:11317] by server-4.bemta-14.messagelabs.com id
	33/C6-32066-8539D035; Wed, 26 Feb 2014 07:10:16 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393398615!6834868!1
X-Originating-IP: [74.125.82.46]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14931 invoked from network); 26 Feb 2014 07:10:15 -0000
Received: from mail-wg0-f46.google.com (HELO mail-wg0-f46.google.com)
	(74.125.82.46)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 07:10:15 -0000
Received: by mail-wg0-f46.google.com with SMTP id z12so1321013wgg.29
	for <xen-devel@lists.xen.org>; Tue, 25 Feb 2014 23:09:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=mgI6YzSC0LBk7jWsWVoabd6qATh0yyS3OvZqcL3DENg=;
	b=Icz1F6YHoGPj4xYza9ONrROIvnOisTJjcIiqa01x5tcntFkOHqJw34MLilLQKeTVZv
	98lGKsHdRAO8C7FkLO3x+M27c9dr+DELuoKmz4JUSLsTSGZs5+wqTP2w0uam7Sx6xH+w
	nKKhvwou6tsXvm+G3yaJwgi/JXU+DydPIDyzz5cJxtYWEZlToalaOo1me5APaTtp3cBD
	mNAsf+XxWbOpUtmlJwL86nTr0ceDBcPRLNSfVOHEMUCbuowNUqSyco34r58XBsFlTRjN
	5/dqRyafufcb+x3AwzZ9eev0j80GKVf55gaIi4OEP2h83NQDfvsWwiBCdiRy32U97uhp
	RqmQ==
X-Received: by 10.180.185.232 with SMTP id ff8mr6470746wic.25.1393398556471;
	Tue, 25 Feb 2014 23:09:16 -0800 (PST)
Received: from keirs-mbp-2.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id de3sm19094wjb.8.2014.02.25.23.09.14
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Tue, 25 Feb 2014 23:09:15 -0800 (PST)
Message-ID: <530D9317.4070600@gmail.com>
Date: Wed, 26 Feb 2014 07:09:11 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
References: <1393362687-6530-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
In-Reply-To: <1393362687-6530-1-git-send-email-Aravind.Gopalakrishnan@amd.com>
Cc: Thomas Lendacky <Thomas.Lendacky@amd.com>, jbeulich@suse.com,
	andrew.cooper3@citrix.com, xen-devel@lists.xen.org,
	Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>,
	sherry.hurwitz@amd.com, shurd@broadcom.com
Subject: Re: [Xen-devel] [PATCH V8 RESEND] ns16550: Add support for UART
 present in Broadcom TruManage capable NetXtreme chips
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Aravind Gopalakrishnan wrote:
> Since it is an MMIO device, the code has been modified to accept MMIO based
> devices as well. MMIO device settings are populated in the 'uart_config' table.
> It also advertises 64 bit BAR. Therefore, code is reworked to account for 64
> bit BAR and 64 bit MMIO lengths.
>
> Some more quirks are - the need to shift the register offset by a specific
> value and we also need to verify (UART_LSR_THRE && UART_LSR_TEMT) bits before
> transmitting data.
>
> While testing, include com1=115200,8n1,pci,0 on the xen cmdline to observe
> output on console using SoL.
>
> Changes from V7:
>    - per Jan's comments:
>      - Moving pci_ro_device to ns16550_init_postirq() so that either
>        one of pci_hide_device or pci_ro_device is done at one place
>      - remove leading '0' from printk as absent segment identifier
>        implies zero anyway.
>    - per Ian's comments:
>      - fixed issues that casued his build to fail.
>      - cross-compiled for arm32 and arm64 after applying patch and
>        build was successful on local machine.
>
> Signed-off-by: Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>
> Signed-off-by: Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
> Signed-off-by: Thomas Lendacky <Thomas.Lendacky@amd.com>
Acked-by: Keir Fraser <keir@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 07:57:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 07:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIZMW-0000uT-C1; Wed, 26 Feb 2014 07:57:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hannes@stressinduktion.org>) id 1WITJZ-0006jD-CW
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 01:29:37 +0000
Received: from [85.158.137.68:3825] by server-1.bemta-3.messagelabs.com id
	20/70-17293-0834D035; Wed, 26 Feb 2014 01:29:36 +0000
X-Env-Sender: hannes@stressinduktion.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393378175!4197247!1
X-Originating-IP: [87.106.68.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31913 invoked from network); 26 Feb 2014 01:29:35 -0000
Received: from order.stressinduktion.org (HELO order.stressinduktion.org)
	(87.106.68.36)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 01:29:35 -0000
Received: by order.stressinduktion.org (Postfix, from userid 500)
	id 12D611A0C27C; Wed, 26 Feb 2014 02:29:34 +0100 (CET)
Date: Wed, 26 Feb 2014 02:29:34 +0100
From: Hannes Frederic Sowa <hannes@stressinduktion.org>
To: David Miller <davem@davemloft.net>
Message-ID: <20140226012934.GA24855@order.stressinduktion.org>
Mail-Followup-To: David Miller <davem@davemloft.net>, dcbw@redhat.com,
	mcgrof@do-not-panic.com, zoltan.kiss@citrix.com,
	netdev@vger.kernel.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	kuznet@ms2.inr.ac.ru, jmorris@namei.org, yoshfuji@linux-ipv6.org,
	kaber@trash.net
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393362420.3032.8.camel@dcbw.local>
	<20140225.161817.1623503840238501415.davem@davemloft.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140225.161817.1623503840238501415.davem@davemloft.net>
X-Mailman-Approved-At: Wed, 26 Feb 2014 07:57:02 +0000
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 04:18:17PM -0500, David Miller wrote:
> From: Dan Williams <dcbw@redhat.com>
> Date: Tue, 25 Feb 2014 15:07:00 -0600
> 
> > Also, disable_ipv4 signals *intent*, which is distinct from current
> > state.
> > 
> > Does an interface without an IPv4 address mean that the user wished it
> > not to have one?
> > 
> > Or does it mean that DHCP hasn't started yet (but is supposed to), or
> > failed, or something hasn't gotten around to assigning an address yet?
> > 
> > disable_ipv4 lets you distinguish between these two cases, the same way
> > disable_ipv6 does.
> 
> Intent only matters on the kernel side if the kernel automatically
> assigns addresses to interfaces which have been brought up like ipv6
> does.
> 
> Since it does not do this for ipv4, this can be handled entirely in
> userspace.
> 
> It is not a valid argument to say that a rogue dhcp might run on
> the machine and configure an ipv4 address.  That's the admin's
> responsibility, and still a user side problem.  A "rogue" program
> could just as equally turn the theoretical disable_ipv4 off too.

Week end model strikes again. :)

Currently one would need to set arp_filter and arp_ignore and have no
ip address on the interface to isolate it from the ipv4 network.

IFF_NOARP is of no use here as it also disables neighbour discovery.

I am not sure we completely tear down igmp processing on that interface
if no ip address is available. Maybe there are some special cases with
forwarding, too.

Such a "silent" mode could come in handy for intrusion detection systems,
where one would ensure that no ip processing takes place, but it could also
be realized with nftables/netfilter/arpfilter, I think.

Bye,

  Hannes


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 07:57:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 07:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIZMW-0000uT-C1; Wed, 26 Feb 2014 07:57:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hannes@stressinduktion.org>) id 1WITJZ-0006jD-CW
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 01:29:37 +0000
Received: from [85.158.137.68:3825] by server-1.bemta-3.messagelabs.com id
	20/70-17293-0834D035; Wed, 26 Feb 2014 01:29:36 +0000
X-Env-Sender: hannes@stressinduktion.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393378175!4197247!1
X-Originating-IP: [87.106.68.36]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31913 invoked from network); 26 Feb 2014 01:29:35 -0000
Received: from order.stressinduktion.org (HELO order.stressinduktion.org)
	(87.106.68.36)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 01:29:35 -0000
Received: by order.stressinduktion.org (Postfix, from userid 500)
	id 12D611A0C27C; Wed, 26 Feb 2014 02:29:34 +0100 (CET)
Date: Wed, 26 Feb 2014 02:29:34 +0100
From: Hannes Frederic Sowa <hannes@stressinduktion.org>
To: David Miller <davem@davemloft.net>
Message-ID: <20140226012934.GA24855@order.stressinduktion.org>
Mail-Followup-To: David Miller <davem@davemloft.net>, dcbw@redhat.com,
	mcgrof@do-not-panic.com, zoltan.kiss@citrix.com,
	netdev@vger.kernel.org, xen-devel@lists.xenproject.org,
	kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
	kuznet@ms2.inr.ac.ru, jmorris@namei.org, yoshfuji@linux-ipv6.org,
	kaber@trash.net
References: <1393266120.8041.19.camel@dcbw.local>
	<20140224.180426.411052665068255886.davem@davemloft.net>
	<1393362420.3032.8.camel@dcbw.local>
	<20140225.161817.1623503840238501415.davem@davemloft.net>
Mime-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140225.161817.1623503840238501415.davem@davemloft.net>
X-Mailman-Approved-At: Wed, 26 Feb 2014 07:57:02 +0000
Cc: kvm@vger.kernel.org, mcgrof@do-not-panic.com, dcbw@redhat.com,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	jmorris@namei.org, yoshfuji@linux-ipv6.org,
	zoltan.kiss@citrix.com, kuznet@ms2.inr.ac.ru,
	xen-devel@lists.xenproject.org, kaber@trash.net
Subject: Re: [Xen-devel] [RFC v2 2/4] net: enables interface option to skip
	IP
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 04:18:17PM -0500, David Miller wrote:
> From: Dan Williams <dcbw@redhat.com>
> Date: Tue, 25 Feb 2014 15:07:00 -0600
> 
> > Also, disable_ipv4 signals *intent*, which is distinct from current
> > state.
> > 
> > Does an interface without an IPv4 address mean that the user wished it
> > not to have one?
> > 
> > Or does it mean that DHCP hasn't started yet (but is supposed to), or
> > failed, or something hasn't gotten around to assigning an address yet?
> > 
> > disable_ipv4 lets you distinguish between these two cases, the same way
> > disable_ipv6 does.
> 
> Intent only matters on the kernel side if the kernel automatically
> assigns addresses to interfaces which have been brought up like ipv6
> does.
> 
> Since it does not do this for ipv4, this can be handled entirely in
> userspace.
> 
> It is not a valid argument to say that a rogue dhcp might run on
> the machine and configure an ipv4 address.  That's the admin's
> responsibility, and still a user side problem.  A "rogue" program
> could just as equally turn the theoretical disable_ipv4 off too.

Weekend mode strikes again. :)

Currently one would need to set arp_filter and arp_ignore and have no
ip address on the interface to isolate it from the ipv4 network.
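
The workaround above could be sketched roughly as follows. The interface
name "eth1" is purely illustrative, and the sysctl values shown are one
possible choice (see Documentation/networking/ip-sysctl.txt for the full
semantics of arp_filter and arp_ignore):

```shell
# Sketch of the isolation workaround described above (requires root).
# "eth1" is a hypothetical interface name; adjust to your setup.

# Ensure no ipv4 address is configured on the interface
ip -4 addr flush dev eth1

# Restrict ARP behaviour on that interface; these are illustrative
# values, not the only valid combination
sysctl -w net.ipv4.conf.eth1.arp_filter=1
sysctl -w net.ipv4.conf.eth1.arp_ignore=1
```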

IFF_NOARP is of no use here as it also disables neighbour discovery.

I am not sure we completely tear down igmp processing on that interface
if no ip address is available. Maybe there are some special cases with
forwarding, too.

Such a "silent" mode could come in handy for intrusion detection
systems, where one would ensure that no ip processing takes place, but
it could also be realized with nftables/netfilter/arpfilter, I think.

Bye,

  Hannes


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 08:01:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 08:01:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIZQd-0001UV-NW; Wed, 26 Feb 2014 08:01:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WIZQc-0001UP-1S
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 08:01:18 +0000
Received: from [193.109.254.147:48935] by server-3.bemta-14.messagelabs.com id
	A0/12-00432-D4F9D035; Wed, 26 Feb 2014 08:01:17 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393401676!6845891!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24621 invoked from network); 26 Feb 2014 08:01:16 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 08:01:16 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1393401676; l=308;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=ZmYuTmxlmYIxLydUJaVSUpX1/Dw=;
	b=ow49V2kPKDzA3CY8V5RY89e+TeRRG9MqX1pP+/SAR4mqGuaoYfdSXLXIB2sxuEZSjTL
	VnNZxh6Cp2edR37za68Y2hCqrtsl4hDrynzN6zolKbd8aOEezAzH2BbilGSYq7yUwDw0m
	knNphLeytHMWhok0sDdm6s0BqJXlizrvm/o=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.27 AUTH) with ESMTPSA id e01e9fq1Q81GKFo
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate);
	Wed, 26 Feb 2014 09:01:16 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 6284C5026B; Wed, 26 Feb 2014 09:01:15 +0100 (CET)
Date: Wed, 26 Feb 2014 09:01:15 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Message-ID: <20140226080115.GA12384@aepfle.de>
References: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen hypervisor panic in SuSE 11 SP2 (Xen
 4.1.2_14-0.5.5)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, Saurabh Mishra wrote:

> What are our options here assuming upgrading to Xen 4.2.2 (SuSE 11 SP3) is the
> only possibility?

It is safe to install xen.rpm, xen-libs.rpm and xen-tools.rpm from the
SP3 Update channel on your SP2 system to check if this particular bug is
fixed.
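
A rough sketch of that check, assuming the three packages have already
been fetched from the SP3 Update channel into the current directory
(the file names and versions below are illustrative, not actual SP3
package names):

```shell
# Install the downloaded SP3 Xen packages on the SP2 system (requires
# root). Version strings are illustrative; use the actual files from
# the SP3 Update channel.
rpm -Uvh xen-4.2.2*.rpm xen-libs-4.2.2*.rpm xen-tools-4.2.2*.rpm
```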

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 08:39:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 08:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIa1X-0001hb-2T; Wed, 26 Feb 2014 08:39:27 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIa1V-0001hW-Gz
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 08:39:25 +0000
Received: from [85.158.139.211:14463] by server-17.bemta-5.messagelabs.com id
	68/6E-31975-C38AD035; Wed, 26 Feb 2014 08:39:24 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393403962!6311637!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23783 invoked from network); 26 Feb 2014 08:39:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 08:39:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,546,1389744000"; d="scan'208";a="105842772"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 08:39:21 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 03:39:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIa1Q-00027S-TS;
	Wed, 26 Feb 2014 08:39:20 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIa1Q-0000eV-Po;
	Wed, 26 Feb 2014 08:39:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25305-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 08:39:20 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25305: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25305 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25305/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275
 build-i386                    4 xen-build                 fail REGR. vs. 25281

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  4c2502450ef1cc6fa1fd6daab8dbed8af1b88a29
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 351 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:15:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIaZl-0001xa-Uw; Wed, 26 Feb 2014 09:14:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WIaZj-0001xV-T0
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 09:14:48 +0000
Received: from [193.109.254.147:23832] by server-16.bemta-14.messagelabs.com
	id B2/30-21945-780BD035; Wed, 26 Feb 2014 09:14:47 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393406085!2905701!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15351 invoked from network); 26 Feb 2014 09:14:45 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 09:14:45 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:49413 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WIaY6-0004QL-Op; Wed, 26 Feb 2014 10:13:06 +0100
Date: Wed, 26 Feb 2014 10:14:42 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <824074181.20140226101442@eikelenboom.it>
To: annie li <annie.li@oracle.com>
In-Reply-To: <5306F2E8.5090509@oracle.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
MIME-Version: 1.0
Cc: Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, February 21, 2014, 7:32:08 AM, you wrote:


> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>>
>>
>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>>>> Hi All,
>>>>
>>>> I'm currently having some network troubles with Xen and recent linux kernels.
>>>>
>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>>>
>>>>     In the guest:
>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>>>     [57539.859610] net eth0: Need more slots
>>>>     [58157.675939] net eth0: Need more slots
>>>>     [58725.344712] net eth0: Need more slots
>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>>>     [61815.849225] net eth0: Need more slots
>>> This issue is familiar... and I thought it got fixed.
>>>   From my original analysis of a similar issue I hit before, the root cause
>>> is that netback still creates a response when the ring is full. I remember
>>> a larger MTU could trigger this issue before; what is the MTU size?
>> In dom0 both for the physical nics and the guest vif's MTU=1500
>> In domU the eth0 also has MTU=1500.
>>
>> So it's not jumbo frames .. just everywhere the same plain defaults ..
>>
>> With the patch from Wei that solves the other issue applied, I'm still seeing the "Need more slots" issue on 3.14-rc3.
>> I have extended the "Need more slots" warning to also print cons, slots, max, rx->offset and size; hopefully that gives some more insight.
>> It is indeed the VM where I had similar issues before; the primary thing this VM does is two simultaneous rsyncs (one push, one pull) of some gigabytes of data.
>>
>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; I don't know whether it's a cause or an effect though.

> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
> Probably the response overlaps the request, and the grant copy returns an
> error because of the wrong grant reference. Netback then sets resp->status to
> XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the frontend.
> Would it be possible to add logging in xenvif_rx_action of netback to see
> whether something is wrong with the max and used slots?

> Thanks
> Annie

Looking more closely, these are perhaps two different issues ... the bad grant references do not happen
at the same time as the netfront messages in the guest.

I added some debug patches to the kernel netback, netfront and Xen grant-table code (see below).
One of the changes simplifies the code behind the debug key that prints the grant tables; the present
code takes too long to execute and brings down the box due to stalls and NMIs. So it now only prints
the number of entries per domain.


Issue 1: grant_table.c:1858:d0 Bad grant reference

After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
The maptrack size also seems to increase quite fast, and the number of grant entries seems to have gone up quickly as well.

Most domains have just one disk (blkfront/blkback) and one NIC; a few have a second disk.
The block drivers use persistent grants, so I would assume they would reuse those and not increase the count (by much).

Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 sometime during the night.
Domain 7 is the domain that happens to give the netfront messages.

I also don't get why it is reporting "Bad grant reference" for domain 0, which seems to have 0 active entries.
Also, is this number of grant entries "normal", or could there be a leak somewhere?

(XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (4) to (5) frames.
(XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (5) to (6) frames.
(XEN) [2014-02-26 00:00:38] grant_table.c:290:d0 Increased maptrack size to 13/256 frames
(XEN) [2014-02-26 00:01:13] grant_table.c:290:d0 Increased maptrack size to 14/256 frames
(XEN) [2014-02-26 04:02:55] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
(XEN) [2014-02-26 04:15:33] grant_table.c:290:d0 Increased maptrack size to 15/256 frames
(XEN) [2014-02-26 04:15:53] grant_table.c:290:d0 Increased maptrack size to 16/256 frames
(XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 17/256 frames
(XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 18/256 frames
(XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 19/256 frames
(XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 20/256 frames
(XEN) [2014-02-26 04:15:59] grant_table.c:290:d0 Increased maptrack size to 21/256 frames
(XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 22/256 frames
(XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 23/256 frames
(XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 24/256 frames
(XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 25/256 frames
(XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 26/256 frames
(XEN) [2014-02-26 04:16:17] grant_table.c:290:d0 Increased maptrack size to 27/256 frames
(XEN) [2014-02-26 04:16:20] grant_table.c:290:d0 Increased maptrack size to 28/256 frames
(XEN) [2014-02-26 04:16:56] grant_table.c:290:d0 Increased maptrack size to 29/256 frames
(XEN) [2014-02-26 05:15:04] grant_table.c:290:d0 Increased maptrack size to 30/256 frames
(XEN) [2014-02-26 05:15:05] grant_table.c:290:d0 Increased maptrack size to 31/256 frames
(XEN) [2014-02-26 05:21:15] grant_table.c:1858:d0 Bad grant reference 107085839 | 2048 | 1 | 0
(XEN) [2014-02-26 05:29:47] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
(XEN) [2014-02-26 07:53:20] gnttab_usage_print_all [ key 'g' pressed
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 active entries: 0
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1) nr_grant_entries: 3072
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 active entries: 2117
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 active entries: 1061
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 active entries: 1045
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 active entries: 709
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 active entries: 163
(XEN) [2014-02-26 07:53:20] gnttab_usage_print_all ] done
(XEN) [2014-02-26 07:55:09] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
(XEN) [2014-02-26 08:37:16] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0



Issue 2: net eth0: rx->offset: 0, size: xxxxxxxxxx

In the guest (domain 7):

Feb 26 08:55:09 backup kernel: [39258.090375] net eth0: rx->offset: 0, size: 4294967295
Feb 26 08:55:09 backup kernel: [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
Feb 26 08:55:09 backup kernel: [39258.090401] net eth0: rx->offset: 0, size: 4294967295
Feb 26 08:55:09 backup kernel: [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
Feb 26 08:55:09 backup kernel: [39258.090415] net eth0: rx->offset: 0, size: 4294967295
Feb 26 08:55:09 backup kernel: [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571

In dom0 I don't see any specific netback warnings related to this domain at these specific times. The printk's I added do trigger quite often, but those are probably not
erroneous; they seem to occur only on the vif of domain 7 (probably the only domain that is swamping the network by doing rsync and WebDAV, causing some fragmented packets).

Feb 26 08:53:20 serveerstertje kernel: [39324.917255] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15101115 cons:15101112 j:8
Feb 26 08:53:56 serveerstertje kernel: [39361.001436] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15127649 cons:15127648 j:13
Feb 26 08:54:00 serveerstertje kernel: [39364.725613] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15130263 cons:15130261 j:2
Feb 26 08:54:04 serveerstertje kernel: [39368.739504] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15133143 cons:15133141 j:0
Feb 26 08:54:20 serveerstertje kernel: [39384.665044] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15144113 cons:15144112 j:0
Feb 26 08:54:29 serveerstertje kernel: [39393.569871] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15150203 cons:15150200 j:0
Feb 26 08:54:40 serveerstertje kernel: [39404.586566] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15157706 cons:15157704 j:12
Feb 26 08:54:56 serveerstertje kernel: [39420.759769] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15168839 cons:15168835 j:0
Feb 26 08:54:56 serveerstertje kernel: [39421.001372] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15169002 cons:15168999 j:8
Feb 26 08:55:00 serveerstertje kernel: [39424.515073] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15171450 cons:15171447 j:0
Feb 26 08:55:10 serveerstertje kernel: [39435.154510] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15178773 cons:15178770 j:1
Feb 26 08:56:19 serveerstertje kernel: [39504.195908] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15227444 cons:15227444 j:0
Feb 26 08:57:39 serveerstertje kernel: [39583.799392] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15283346 cons:15283344 j:8
Feb 26 08:57:55 serveerstertje kernel: [39599.517673] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15293937 cons:15293935 j:0
Feb 26 08:58:07 serveerstertje kernel: [39612.156622] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15302891 cons:15302889 j:19
Feb 26 08:58:07 serveerstertje kernel: [39612.400907] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15303034 cons:15303033 j:0
Feb 26 08:58:18 serveerstertje kernel: [39623.439383] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15310915 cons:15310911 j:0
Feb 26 08:58:39 serveerstertje kernel: [39643.521808] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15324769 cons:15324766 j:1

Feb 26 09:27:07 serveerstertje kernel: [41351.622501] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16502932 cons:16502932 j:8
Feb 26 09:27:19 serveerstertje kernel: [41363.541003] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16510837 cons:16510834 j:7
Feb 26 09:27:23 serveerstertje kernel: [41368.133306] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16513940 cons:16513937 j:0
Feb 26 09:27:43 serveerstertje kernel: [41388.025147] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16527870 cons:16527868 j:0
Feb 26 09:27:47 serveerstertje kernel: [41391.530802] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16530437 cons:16530437 j:7
Feb 26 09:27:51 serveerstertje kernel: [41395.521166] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533320 cons:16533317 j:6
Feb 26 09:27:51 serveerstertje kernel: [41395.767066] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533469 cons:16533469 j:0
Feb 26 09:27:51 serveerstertje kernel: [41395.802319] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533533 cons:16533533 j:24
Feb 26 09:27:51 serveerstertje kernel: [41395.837456] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533534 cons:16533534 j:1
Feb 26 09:27:51 serveerstertje kernel: [41395.872587] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533597 cons:16533596 j:25
Feb 26 09:27:51 serveerstertje kernel: [41396.192784] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533833 cons:16533832 j:3
Feb 26 09:27:51 serveerstertje kernel: [41396.235611] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533890 cons:16533890 j:30
Feb 26 09:27:51 serveerstertje kernel: [41396.271047] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533898 cons:16533896 j:3


--
Sander





>>
>> Will keep you posted when it triggers again with the extra info in the warn.
>>
>> --
>> Sander
>>
>>
>>
>>> Thanks
>>> Annie
>>>>     Xen reports:
>>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>>>
>>>>
>>>>
>>>> Another issue with networking is when running both dom0 and domU's with a 3.14-rc3 kernel:
>>>>     - i can ping the guests from dom0
>>>>     - i can ping dom0 from the guests
>>>>     - But i can't ssh or access things by http
>>>>     - I don't see any relevant error messages ...
>>>>     - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>>>>       (that previously worked fine)
>>>>
>>>> --
>>>>
>>>> Sander
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>
>>



diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4fc46eb..4d720b4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1667,6 +1667,8 @@ skip_vfb:
                 b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
             } else if (!strcmp(buf, "cirrus")) {
                 b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
+            } else if (!strcmp(buf, "none")) {
+                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
             } else {
                 fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
                 exit(1);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 107b000..ab56927 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -265,9 +265,10 @@ get_maptrack_handle(
     while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
     {
         nr_frames = nr_maptrack_frames(lgt);
-        if ( nr_frames >= max_nr_maptrack_frames() )
+        if ( nr_frames >= max_nr_maptrack_frames() ){
+                 gdprintk(XENLOG_INFO, "Already at max maptrack size: %u/%u frames\n",nr_frames, max_nr_maptrack_frames());
             break;
-
+       }
         new_mt = alloc_xenheap_page();
         if ( !new_mt )
             break;
@@ -285,8 +286,8 @@ get_maptrack_handle(
         smp_wmb();
         lgt->maptrack_limit      = new_mt_limit;

-        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
-                 nr_frames + 1);
+        gdprintk(XENLOG_INFO, "Increased maptrack size to %u/%u frames\n",
+                 nr_frames + 1, max_nr_maptrack_frames());
     }

     spin_unlock(&lgt->lock);
@@ -1854,7 +1855,7 @@ __acquire_grant_for_copy(

     if ( unlikely(gref >= nr_grant_entries(rgt)) )
         PIN_FAIL(unlock_out, GNTST_bad_gntref,
-                 "Bad grant reference %ld\n", gref);
+                 "Bad grant reference %ld | %d | %d | %d \n", gref, nr_grant_entries(rgt), rgt->gt_version, ldom);

     act = &active_entry(rgt, gref);
     shah = shared_entry_header(rgt, gref);
@@ -2830,15 +2831,19 @@ static void gnttab_usage_print(struct domain *rd)
     int first = 1;
     grant_ref_t ref;
     struct grant_table *gt = rd->grant_table;
-
+    unsigned int active=0;
+/*
     printk("      -------- active --------       -------- shared --------\n");
     printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
-
+*/
     spin_lock(&gt->lock);

     if ( gt->gt_version == 0 )
         goto out;

+    printk("grant-table for remote domain:%5d (v%d) nr_grant_entries: %d\n",
+                   rd->domain_id, gt->gt_version, nr_grant_entries(gt));
+
     for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
     {
         struct active_grant_entry *act;
@@ -2875,19 +2880,22 @@ static void gnttab_usage_print(struct domain *rd)
                    rd->domain_id, gt->gt_version);
             first = 0;
         }
-
+        active++;
         /*      [ddd]    ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
-        printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
-               ref, act->domid, act->frame, act->pin,
-               sha->domid, frame, status);
+        /* printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n", ref, act->domid, act->frame, act->pin, sha->domid, frame, status); */
     }

  out:
     spin_unlock(&gt->lock);

+    printk("grant-table for remote domain:%5d active entries: %d\n",
+                   rd->domain_id, active);
+/*
     if ( first )
         printk("grant-table for remote domain:%5d ... "
                "no active grant table entries\n", rd->domain_id);
+*/
+
 }

 static void gnttab_usage_print_all(unsigned char key)






diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..6d93358 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -482,20 +482,23 @@ static void xenvif_rx_action(struct xenvif *vif)
                .meta  = vif->meta,
        };

+       int j=0;
+
        skb_queue_head_init(&rxq);

        while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
                RING_IDX max_slots_needed;
                int i;
+               int nr_frags;

                /* We need a cheap worse case estimate for the number of
                 * slots we'll use.
                 */

                max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
-                                               skb_headlen(skb),
-                                               PAGE_SIZE);
-               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+                                               skb_headlen(skb), PAGE_SIZE);
+               nr_frags = skb_shinfo(skb)->nr_frags;
+               for (i = 0; i < nr_frags; i++) {
                        unsigned int size;
                        size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
                        max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
@@ -508,6 +511,9 @@ static void xenvif_rx_action(struct xenvif *vif)
                if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
                        skb_queue_head(&vif->rx_queue, skb);
                        need_to_notify = true;
+                       if (net_ratelimit())
+                               netdev_err(vif->dev, "!?!?!?! skb may not fit .. bail out now max_slots_needed:%d GSO:%d vif->rx_last_skb_slots:%d nr_frags:%d prod:%d cons:%d j:%d\n",
+                                       max_slots_needed, (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ? 1 : 0, vif->rx_last_skb_slots, nr_frags,vif->rx.sring->req_prod,vif->rx.req_cons,j);
                        vif->rx_last_skb_slots = max_slots_needed;
                        break;
                } else
@@ -518,6 +524,7 @@ static void xenvif_rx_action(struct xenvif *vif)
                BUG_ON(sco->meta_slots_used > max_slots_needed);

                __skb_queue_tail(&rxq, skb);
+               j++;
        }

        BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
@@ -541,7 +548,7 @@ static void xenvif_rx_action(struct xenvif *vif)
                        resp->offset = vif->meta[npo.meta_cons].gso_size;
                        resp->id = vif->meta[npo.meta_cons].id;
                        resp->status = sco->meta_slots_used;
-
+
                        npo.meta_cons++;
                        sco->meta_slots_used--;
                }
@@ -705,7 +712,7 @@ static int xenvif_count_requests(struct xenvif *vif,
                 */
                if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
                        if (net_ratelimit())
-                               netdev_dbg(vif->dev,
+                               netdev_err(vif->dev,
                                           "Too many slots (%d) exceeding limit (%d), dropping packet\n",
                                           slots, XEN_NETBK_LEGACY_SLOTS_MAX);
                        drop_err = -E2BIG;
@@ -728,7 +735,7 @@ static int xenvif_count_requests(struct xenvif *vif,
                 */
                if (!drop_err && txp->size > first->size) {
                        if (net_ratelimit())
-                               netdev_dbg(vif->dev,
+                               netdev_err(vif->dev,
                                           "Invalid tx request, slot size %u > remaining size %u\n",
                                           txp->size, first->size);
                        drop_err = -EIO;



diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f9daa9e..67d5221 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -753,6 +753,9 @@ static int xennet_get_responses(struct netfront_info *np,
-                       if (net_ratelimit())
+                       if (net_ratelimit()) {
                                dev_warn(dev, "rx->offset: %x, size: %u\n",
                                         rx->offset, rx->status);
+                               dev_warn(dev, "me here .. cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%u\n", cons, slots, rp, max, err, rx->id, rx->offset, rx->status, ref);
+                       }
                        xennet_move_rx_slot(np, skb, ref);
                        err = -EINVAL;
                        goto next;
@@ -784,7 +785,7 @@ next:

                if (cons + slots == rp) {
                        if (net_ratelimit())
-                               dev_warn(dev, "Need more slots\n");
+                               dev_warn(dev, "Need more slots cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%u\n", cons, slots, rp, max, err, rx->id, rx->offset, rx->status, ref);
                        err = -ENOENT;
                        break;
                }
@@ -803,7 +804,6 @@ next:

        if (unlikely(err))
                np->rx.rsp_cons = cons + slots;
-
        return err;
 }

@@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,

                /* Ethernet work: Delayed to here as it peeks the header. */
                skb->protocol = eth_type_trans(skb, dev);
+               skb_reset_network_header(skb);

                if (checksum_setup(dev, skb)) {
                        kfree_skb(skb);


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:15:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:15:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIaZl-0001xa-Uw; Wed, 26 Feb 2014 09:14:49 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WIaZj-0001xV-T0
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 09:14:48 +0000
Received: from [193.109.254.147:23832] by server-16.bemta-14.messagelabs.com
	id B2/30-21945-780BD035; Wed, 26 Feb 2014 09:14:47 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393406085!2905701!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15351 invoked from network); 26 Feb 2014 09:14:45 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 09:14:45 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:49413 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WIaY6-0004QL-Op; Wed, 26 Feb 2014 10:13:06 +0100
Date: Wed, 26 Feb 2014 10:14:42 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <824074181.20140226101442@eikelenboom.it>
To: annie li <annie.li@oracle.com>
In-Reply-To: <5306F2E8.5090509@oracle.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
MIME-Version: 1.0
Cc: Paul Durrant <Paul.Durrant@citrix.com>, Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Friday, February 21, 2014, 7:32:08 AM, you wrote:


> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>>
>>
>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>>>> Hi All,
>>>>
>>>> I'm currently having some network troubles with Xen and recent Linux kernels.
>>>>
>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>>>
>>>>     In the guest:
>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>>>     [57539.859610] net eth0: Need more slots
>>>>     [58157.675939] net eth0: Need more slots
>>>>     [58725.344712] net eth0: Need more slots
>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>>>     [61815.849225] net eth0: Need more slots
>>> This issue is familiar... and I thought it got fixed.
>>>   From my original analysis of a similar issue I hit before, the root cause
>>> is that netback still creates responses when the ring is full. I remember a
>>> larger MTU could trigger this issue before; what is the MTU size?
>> In dom0 both for the physical nics and the guest vif's MTU=1500
>> In domU the eth0 also has MTU=1500.
>>
>> So it's not jumbo frames .. just everywhere the same plain defaults ..
>>
>> With the patch from Wei that solves the other issue applied, I'm still seeing the "Need more slots" issue on 3.14-rc3.
>> I have extended the "need more slots" warning to also print cons, slots, max, rx->offset and size; hopefully that gives some more insight.
>> It is indeed the VM where I had similar issues before; the primary thing this VM does is two simultaneous rsyncs (one push, one pull) of some gigabytes of data.
>>
>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; I don't know if it's a cause or an effect though.

> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
> Probably the response overlaps the request and the grant copy returns an
> error because of the bad grant reference; netback then sets resp->status to
> XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the frontend.
> Would it be possible to print a log in xenvif_rx_action of netback to see
> whether something is wrong with the max and used slot counts?

> Thanks
> Annie

Looking more closely, these are perhaps two different issues: the bad grant references do not happen
at the same time as the netfront messages in the guest.

I added some debug patches to the kernel netback, netfront and Xen grant-table code (see below).
One of the changes simplifies the debug-key code that prints the grant tables; the present code
takes too long to execute and brings down the box due to stalls and NMIs, so it now only prints
the number of entries per domain.


Issue 1: grant_table.c:1858:d0 Bad grant reference

After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
The maptrack size also seems to increase quite fast, and the number of grant entries has gone up quickly as well.

Most domains have just one disk (blkfront/blkback) and one NIC; a few have a second disk.
The blk drivers use persistent grants, so I would assume they reuse those and do not increase the count (by much).

Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 sometime during the night.
Domain 7 is the domain that happens to produce the netfront messages.

I also don't get why it is reporting "Bad grant reference" for domain 0, which seems to have 0 active entries.
Also, is this number of grant entries "normal", or could it be a leak somewhere?

(XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (4) to (5) frames.
(XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (5) to (6) frames.
(XEN) [2014-02-26 00:00:38] grant_table.c:290:d0 Increased maptrack size to 13/256 frames
(XEN) [2014-02-26 00:01:13] grant_table.c:290:d0 Increased maptrack size to 14/256 frames
(XEN) [2014-02-26 04:02:55] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
(XEN) [2014-02-26 04:15:33] grant_table.c:290:d0 Increased maptrack size to 15/256 frames
(XEN) [2014-02-26 04:15:53] grant_table.c:290:d0 Increased maptrack size to 16/256 frames
(XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 17/256 frames
(XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 18/256 frames
(XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 19/256 frames
(XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 20/256 frames
(XEN) [2014-02-26 04:15:59] grant_table.c:290:d0 Increased maptrack size to 21/256 frames
(XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 22/256 frames
(XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 23/256 frames
(XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 24/256 frames
(XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 25/256 frames
(XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 26/256 frames
(XEN) [2014-02-26 04:16:17] grant_table.c:290:d0 Increased maptrack size to 27/256 frames
(XEN) [2014-02-26 04:16:20] grant_table.c:290:d0 Increased maptrack size to 28/256 frames
(XEN) [2014-02-26 04:16:56] grant_table.c:290:d0 Increased maptrack size to 29/256 frames
(XEN) [2014-02-26 05:15:04] grant_table.c:290:d0 Increased maptrack size to 30/256 frames
(XEN) [2014-02-26 05:15:05] grant_table.c:290:d0 Increased maptrack size to 31/256 frames
(XEN) [2014-02-26 05:21:15] grant_table.c:1858:d0 Bad grant reference 107085839 | 2048 | 1 | 0
(XEN) [2014-02-26 05:29:47] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
(XEN) [2014-02-26 07:53:20] gnttab_usage_print_all [ key 'g' pressed
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 active entries: 0
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1) nr_grant_entries: 3072
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 active entries: 2117
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 active entries: 1061
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 active entries: 1045
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 active entries: 1060
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 active entries: 709
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1) nr_grant_entries: 2048
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1)
(XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 active entries: 163
(XEN) [2014-02-26 07:53:20] gnttab_usage_print_all ] done
(XEN) [2014-02-26 07:55:09] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
(XEN) [2014-02-26 08:37:16] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0



Issue 2: net eth0: rx->offset: 0, size: xxxxxxxxxx

In the guest (domain 7):

Feb 26 08:55:09 backup kernel: [39258.090375] net eth0: rx->offset: 0, size: 4294967295
Feb 26 08:55:09 backup kernel: [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
Feb 26 08:55:09 backup kernel: [39258.090401] net eth0: rx->offset: 0, size: 4294967295
Feb 26 08:55:09 backup kernel: [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
Feb 26 08:55:09 backup kernel: [39258.090415] net eth0: rx->offset: 0, size: 4294967295
Feb 26 08:55:09 backup kernel: [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571

In dom0 I don't see any specific netback warnings related to this domain at these specific times. The printks I added do trigger quite often, but these are probably not
erroneous; they seem to occur only on the vif of domain 7 (probably the only domain that swamps the network, doing rsync and WebDAV transfers that cause some fragmented packets).

Feb 26 08:53:20 serveerstertje kernel: [39324.917255] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15101115 cons:15101112 j:8
Feb 26 08:53:56 serveerstertje kernel: [39361.001436] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15127649 cons:15127648 j:13
Feb 26 08:54:00 serveerstertje kernel: [39364.725613] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15130263 cons:15130261 j:2
Feb 26 08:54:04 serveerstertje kernel: [39368.739504] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15133143 cons:15133141 j:0
Feb 26 08:54:20 serveerstertje kernel: [39384.665044] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15144113 cons:15144112 j:0
Feb 26 08:54:29 serveerstertje kernel: [39393.569871] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15150203 cons:15150200 j:0
Feb 26 08:54:40 serveerstertje kernel: [39404.586566] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15157706 cons:15157704 j:12
Feb 26 08:54:56 serveerstertje kernel: [39420.759769] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15168839 cons:15168835 j:0
Feb 26 08:54:56 serveerstertje kernel: [39421.001372] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15169002 cons:15168999 j:8
Feb 26 08:55:00 serveerstertje kernel: [39424.515073] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15171450 cons:15171447 j:0
Feb 26 08:55:10 serveerstertje kernel: [39435.154510] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15178773 cons:15178770 j:1
Feb 26 08:56:19 serveerstertje kernel: [39504.195908] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15227444 cons:15227444 j:0
Feb 26 08:57:39 serveerstertje kernel: [39583.799392] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15283346 cons:15283344 j:8
Feb 26 08:57:55 serveerstertje kernel: [39599.517673] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15293937 cons:15293935 j:0
Feb 26 08:58:07 serveerstertje kernel: [39612.156622] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15302891 cons:15302889 j:19
Feb 26 08:58:07 serveerstertje kernel: [39612.400907] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15303034 cons:15303033 j:0
Feb 26 08:58:18 serveerstertje kernel: [39623.439383] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15310915 cons:15310911 j:0
Feb 26 08:58:39 serveerstertje kernel: [39643.521808] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15324769 cons:15324766 j:1

Feb 26 09:27:07 serveerstertje kernel: [41351.622501] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16502932 cons:16502932 j:8
Feb 26 09:27:19 serveerstertje kernel: [41363.541003] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16510837 cons:16510834 j:7
Feb 26 09:27:23 serveerstertje kernel: [41368.133306] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16513940 cons:16513937 j:0
Feb 26 09:27:43 serveerstertje kernel: [41388.025147] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16527870 cons:16527868 j:0
Feb 26 09:27:47 serveerstertje kernel: [41391.530802] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16530437 cons:16530437 j:7
Feb 26 09:27:51 serveerstertje kernel: [41395.521166] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533320 cons:16533317 j:6
Feb 26 09:27:51 serveerstertje kernel: [41395.767066] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533469 cons:16533469 j:0
Feb 26 09:27:51 serveerstertje kernel: [41395.802319] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533533 cons:16533533 j:24
Feb 26 09:27:51 serveerstertje kernel: [41395.837456] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533534 cons:16533534 j:1
Feb 26 09:27:51 serveerstertje kernel: [41395.872587] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533597 cons:16533596 j:25
Feb 26 09:27:51 serveerstertje kernel: [41396.192784] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533833 cons:16533832 j:3
Feb 26 09:27:51 serveerstertje kernel: [41396.235611] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533890 cons:16533890 j:30
Feb 26 09:27:51 serveerstertje kernel: [41396.271047] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533898 cons:16533896 j:3


--
Sander





>>
>> Will keep you posted when it triggers again with the extra info in the warn.
>>
>> --
>> Sander
>>
>>
>>
>>> Thanks
>>> Annie
>>>>     Xen reports:
>>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>>>
>>>>
>>>>
>>>> Another issue with networking occurs when running both dom0 and the domUs with a 3.14-rc3 kernel:
>>>>     - I can ping the guests from dom0
>>>>     - I can ping dom0 from the guests
>>>>     - But I can't ssh or access things over http
>>>>     - I don't see any relevant error messages ...
>>>>     - This is with the same system and kernel config as the 3.14 and 3.13 combination above
>>>>       (which previously worked fine)
>>>>
>>>> --
>>>>
>>>> Sander
>>>>
>>>>
>>>> _______________________________________________
>>>> Xen-devel mailing list
>>>> Xen-devel@lists.xen.org
>>>> http://lists.xen.org/xen-devel
>>
>>



diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4fc46eb..4d720b4 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1667,6 +1667,8 @@ skip_vfb:
                 b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
             } else if (!strcmp(buf, "cirrus")) {
                 b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
+            } else if (!strcmp(buf, "none")) {
+                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
             } else {
                 fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
                 exit(1);
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 107b000..ab56927 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -265,9 +265,10 @@ get_maptrack_handle(
     while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
     {
         nr_frames = nr_maptrack_frames(lgt);
-        if ( nr_frames >= max_nr_maptrack_frames() )
+        if ( nr_frames >= max_nr_maptrack_frames() ) {
+            gdprintk(XENLOG_INFO, "Already at max maptrack size: %u/%u frames\n", nr_frames, max_nr_maptrack_frames());
             break;
-
+        }
         new_mt = alloc_xenheap_page();
         if ( !new_mt )
             break;
@@ -285,8 +286,8 @@ get_maptrack_handle(
         smp_wmb();
         lgt->maptrack_limit      = new_mt_limit;

-        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
-                 nr_frames + 1);
+        gdprintk(XENLOG_INFO, "Increased maptrack size to %u/%u frames\n",
+                 nr_frames + 1, max_nr_maptrack_frames());
     }

     spin_unlock(&lgt->lock);
@@ -1854,7 +1855,7 @@ __acquire_grant_for_copy(

     if ( unlikely(gref >= nr_grant_entries(rgt)) )
         PIN_FAIL(unlock_out, GNTST_bad_gntref,
-                 "Bad grant reference %ld\n", gref);
+                 "Bad grant reference %ld | %d | %d | %d \n", gref, nr_grant_entries(rgt), rgt->gt_version, ldom);

     act = &active_entry(rgt, gref);
     shah = shared_entry_header(rgt, gref);
@@ -2830,15 +2831,19 @@ static void gnttab_usage_print(struct domain *rd)
     int first = 1;
     grant_ref_t ref;
     struct grant_table *gt = rd->grant_table;
-
+    unsigned int active=0;
+/*
     printk("      -------- active --------       -------- shared --------\n");
     printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
-
+*/
     spin_lock(&gt->lock);

     if ( gt->gt_version == 0 )
         goto out;

+    printk("grant-table for remote domain:%5d (v%d) nr_grant_entries: %d\n",
+                   rd->domain_id, gt->gt_version, nr_grant_entries(gt));
+
     for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
     {
         struct active_grant_entry *act;
@@ -2875,19 +2880,22 @@ static void gnttab_usage_print(struct domain *rd)
                    rd->domain_id, gt->gt_version);
             first = 0;
         }
-
+        active++;
         /*      [ddd]    ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
-        printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
-               ref, act->domid, act->frame, act->pin,
-               sha->domid, frame, status);
+        /* printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n", ref, act->domid, act->frame, act->pin, sha->domid, frame, status); */
     }

  out:
     spin_unlock(&gt->lock);

+    printk("grant-table for remote domain:%5d active entries: %d\n",
+                   rd->domain_id, active);
+/*
     if ( first )
         printk("grant-table for remote domain:%5d ... "
                "no active grant table entries\n", rd->domain_id);
+*/
+
 }

 static void gnttab_usage_print_all(unsigned char key)






diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
index e5284bc..6d93358 100644
--- a/drivers/net/xen-netback/netback.c
+++ b/drivers/net/xen-netback/netback.c
@@ -482,20 +482,23 @@ static void xenvif_rx_action(struct xenvif *vif)
                .meta  = vif->meta,
        };

+       int j=0;
+
        skb_queue_head_init(&rxq);

        while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
                RING_IDX max_slots_needed;
                int i;
+               int nr_frags;

                /* We need a cheap worse case estimate for the number of
                 * slots we'll use.
                 */

                max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
-                                               skb_headlen(skb),
-                                               PAGE_SIZE);
-               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+                                               skb_headlen(skb), PAGE_SIZE);
+               nr_frags = skb_shinfo(skb)->nr_frags;
+               for (i = 0; i < nr_frags; i++) {
                        unsigned int size;
                        size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
                        max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
@@ -508,6 +511,9 @@ static void xenvif_rx_action(struct xenvif *vif)
                if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
                        skb_queue_head(&vif->rx_queue, skb);
                        need_to_notify = true;
+                       if (net_ratelimit())
+                               netdev_err(vif->dev, "!?!?!?! skb may not fit .. bail out now max_slots_needed:%d GSO:%d vif->rx_last_skb_slots:%d nr_frags:%d prod:%d cons:%d j:%d\n",
+                                       max_slots_needed, (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ? 1 : 0, vif->rx_last_skb_slots, nr_frags, vif->rx.sring->req_prod, vif->rx.req_cons, j);
                        vif->rx_last_skb_slots = max_slots_needed;
                        break;
                } else
@@ -518,6 +524,7 @@ static void xenvif_rx_action(struct xenvif *vif)
                BUG_ON(sco->meta_slots_used > max_slots_needed);

                __skb_queue_tail(&rxq, skb);
+               j++;
        }

        BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
@@ -541,7 +548,7 @@ static void xenvif_rx_action(struct xenvif *vif)
                        resp->offset = vif->meta[npo.meta_cons].gso_size;
                        resp->id = vif->meta[npo.meta_cons].id;
                        resp->status = sco->meta_slots_used;
-
+
                        npo.meta_cons++;
                        sco->meta_slots_used--;
                }
@@ -705,7 +712,7 @@ static int xenvif_count_requests(struct xenvif *vif,
                 */
                if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
                        if (net_ratelimit())
-                               netdev_dbg(vif->dev,
+                               netdev_err(vif->dev,
                                           "Too many slots (%d) exceeding limit (%d), dropping packet\n",
                                           slots, XEN_NETBK_LEGACY_SLOTS_MAX);
                        drop_err = -E2BIG;
@@ -728,7 +735,7 @@ static int xenvif_count_requests(struct xenvif *vif,
                 */
                if (!drop_err && txp->size > first->size) {
                        if (net_ratelimit())
-                               netdev_dbg(vif->dev,
+                               netdev_err(vif->dev,
                                           "Invalid tx request, slot size %u > remaining size %u\n",
                                           txp->size, first->size);
                        drop_err = -EIO;



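The worst-case slot estimate in the netback hunk above can be sketched as a standalone function. This is a hedged illustration, not the in-tree code: `max_slots_estimate`, `head_offset`, and `frag_sizes` are hypothetical names standing in for `offset_in_page(skb->data)`, `skb_headlen(skb)`, and the `skb_shinfo(skb)->frags[]` sizes. Each ring slot carries at most one page, so the linear area and every frag are rounded up to whole pages independently, deliberately over-estimating rather than under-estimating.

```c
#include <stddef.h>

/* Sketch only: mirrors the shape of the estimate in xenvif_rx_action(),
 * outside the kernel.  PAGE_SIZE fixed at 4 KiB for illustration. */
#define SKETCH_PAGE_SIZE 4096u
#define SKETCH_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static unsigned int max_slots_estimate(unsigned int head_offset,
                                       unsigned int head_len,
                                       const unsigned int *frag_sizes,
                                       int nr_frags)
{
        unsigned int slots;
        int i;

        /* Linear (header) area: its page offset may push it across a
         * page boundary, so the offset is included before rounding. */
        slots = SKETCH_DIV_ROUND_UP(head_offset + head_len, SKETCH_PAGE_SIZE);

        /* Each frag rounded up separately -- a cheap upper bound. */
        for (i = 0; i < nr_frags; i++)
                slots += SKETCH_DIV_ROUND_UP(frag_sizes[i], SKETCH_PAGE_SIZE);

        return slots;
}
```

A 4096-byte linear area at offset 100 already needs two slots, which is why the debug patch above logs `max_slots_needed` together with the producer/consumer indices when the ring cannot satisfy it.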
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f9daa9e..67d5221 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -753,6 +753,9 @@ static int xennet_get_responses(struct netfront_info *np,
-                       if (net_ratelimit())
+                       if (net_ratelimit()) {
                                dev_warn(dev, "rx->offset: %x, size: %u\n",
                                         rx->offset, rx->status);
+                               dev_warn(dev, "state: cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%ld\n",
+                                        cons, slots, rp, max, err, rx->id, rx->offset, rx->status, ref);
+                       }
                        xennet_move_rx_slot(np, skb, ref);
                        err = -EINVAL;
                        goto next;
@@ -784,7 +785,7 @@ next:

                if (cons + slots == rp) {
                        if (net_ratelimit())
-                               dev_warn(dev, "Need more slots\n");
+                               dev_warn(dev, "Need more slots: cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%ld\n", cons, slots, rp, max, err, rx->id, rx->offset, rx->status, ref);
                        err = -ENOENT;
                        break;
                }
@@ -803,7 +804,6 @@ next:

        if (unlikely(err))
                np->rx.rsp_cons = cons + slots;
-
        return err;
 }

@@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,

                /* Ethernet work: Delayed to here as it peeks the header. */
                skb->protocol = eth_type_trans(skb, dev);
+               skb_reset_network_header(skb);

                if (checksum_setup(dev, skb)) {
                        kfree_skb(skb);


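The "Need more slots" condition instrumented in the netfront hunk above can be restated as a small predicate. This is a sketch under stated assumptions, not the in-tree code: `need_more_slots` is a hypothetical name; `cons` stands for the consumer index walking the packet's slots, `slots` for how many have been examined so far, and `rp` for the response producer index. When the consumer catches up to the producer mid-packet, the backend has not yet published the remaining responses, and the function returns -ENOENT rather than reading past valid ring entries.

```c
/* Sketch only: the ring-exhaustion test from xennet_get_responses().
 * Indices are free-running unsigned counters, so equality (not <)
 * is the meaningful comparison between consumer and producer. */
static int need_more_slots(unsigned int cons, unsigned int slots,
                           unsigned int rp)
{
        /* All published responses consumed before the packet
         * completed: the caller must wait for the backend. */
        return cons + slots == rp;
}
```

The extra fields in the rate-limited warning (cons, slots, rp, err, ref) capture exactly the inputs of this predicate at the moment it fires.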
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:15:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIaap-00021q-IV; Wed, 26 Feb 2014 09:15:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jinsong.liu@intel.com>) id 1WIaao-00021d-GV
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 09:15:54 +0000
Received: from [193.109.254.147:61955] by server-8.bemta-14.messagelabs.com id
	47/0D-18529-9C0BD035; Wed, 26 Feb 2014 09:15:53 +0000
X-Env-Sender: jinsong.liu@intel.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393406152!3206094!1
X-Originating-IP: [134.134.136.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjQgPT4gMzkwOTcx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17886 invoked from network); 26 Feb 2014 09:15:53 -0000
Received: from mga09.intel.com (HELO mga09.intel.com) (134.134.136.24)
	by server-12.tower-27.messagelabs.com with SMTP;
	26 Feb 2014 09:15:53 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga102.jf.intel.com with ESMTP; 26 Feb 2014 01:11:23 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,546,1389772800"; d="scan'208";a="489995699"
Received: from fmsmsx107.amr.corp.intel.com ([10.19.9.54])
	by orsmga002.jf.intel.com with ESMTP; 26 Feb 2014 01:15:06 -0800
Received: from fmsmsx114.amr.corp.intel.com (10.18.116.8) by
	FMSMSX107.amr.corp.intel.com (10.19.9.54) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 01:15:06 -0800
Received: from shsmsx104.ccr.corp.intel.com (10.239.4.70) by
	FMSMSX114.amr.corp.intel.com (10.18.116.8) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 01:15:06 -0800
Received: from shsmsx101.ccr.corp.intel.com ([169.254.1.26]) by
	SHSMSX104.ccr.corp.intel.com ([169.254.5.227]) with mapi id
	14.03.0123.003; Wed, 26 Feb 2014 17:15:04 +0800
From: "Liu, Jinsong" <jinsong.liu@intel.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	<xen-devel@lists.xen.org>
Thread-Topic: [PATCH] tools/xen-mceinj: Fix depency for the install rule
Thread-Index: AQHPMhfwpzKxmlyCYkCx7L2xlS8iCZrHQg5A
Date: Wed, 26 Feb 2014 09:15:03 +0000
Message-ID: <DE8DF0795D48FD4CA783C40EC829233501516190@SHSMSX101.ccr.corp.intel.com>
References: <1393325654-28994-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393325654-28994-1-git-send-email-andrew.cooper3@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: Re: [Xen-devel] [PATCH] tools/xen-mceinj: Fix depency for the
	install rule
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Ian Campbell <Ian.Campbell@citrix.com>
> CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
> CC: Liu Jinsong <jinsong.liu@intel.com>
> ---
>  tools/tests/mce-test/tools/Makefile |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/tests/mce-test/tools/Makefile
> b/tools/tests/mce-test/tools/Makefile index 8a23b83..5ee001f 100644
> --- a/tools/tests/mce-test/tools/Makefile
> +++ b/tools/tests/mce-test/tools/Makefile
> @@ -10,7 +10,7 @@ CFLAGS += $(CFLAGS_xeninclude)
>  .PHONY: all
>  all: xen-mceinj
> 
> -install:
> +install: xen-mceinj
>  	$(INSTALL_PROG) xen-mceinj $(DESTDIR)$(SBINDIR)
> 
>  .PHONY: clean

Acked-by: Liu Jinsong <jinsong.liu@intel.com>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:24:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIail-0002G6-LL; Wed, 26 Feb 2014 09:24:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <vdham@uva.nl>) id 1WIaYQ-0001xD-4I
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 09:13:26 +0000
Received: from [85.158.137.68:58197] by server-4.bemta-3.messagelabs.com id
	94/33-04858-530BD035; Wed, 26 Feb 2014 09:13:25 +0000
X-Env-Sender: vdham@uva.nl
X-Msg-Ref: server-13.tower-31.messagelabs.com!1393406004!3008658!1
X-Originating-IP: [94.142.246.99]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2495 invoked from network); 26 Feb 2014 09:13:24 -0000
Received: from positron.dckd.nl (HELO positron.dckd.nl) (94.142.246.99)
	by server-13.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 09:13:24 -0000
Received: from [IPv6:2001:610:158:1000:b061:20a5:4ef0:cc71] (unknown
	[IPv6:2001:610:158:1000:b061:20a5:4ef0:cc71])
	(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by positron.dckd.nl (Postfix) with ESMTPSA id 61763F8044;
	Wed, 26 Feb 2014 10:13:23 +0100 (CET)
Mime-Version: 1.0 (Mac OS X Mail 7.2 \(1874\))
From: Jeroen van der Ham <vdham@uva.nl>
In-Reply-To: <CAOsiSVWLBGs+mZ24zHKLoMvTnZ1SOhbAFmDKRJExrm5r_RsMJA@mail.gmail.com>
Date: Wed, 26 Feb 2014 10:13:22 +0100
Message-Id: <72CD20E6-9F26-42A8-8AAA-8D87D5B636E1@uva.nl>
References: <5F9A4C4C-1797-4E4B-9C33-C01CCB59B78D@uva.nl>
	<CAOsiSVWLBGs+mZ24zHKLoMvTnZ1SOhbAFmDKRJExrm5r_RsMJA@mail.gmail.com>
To: Wei Liu <liuw@liuw.name>
X-Mailer: Apple Mail (2.1874)
X-Mailman-Approved-At: Wed, 26 Feb 2014 09:24:06 +0000
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [BUG] xen-create-image memory vs config file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi,

On 26 Feb 2014, at 01:08, Wei Liu <liuw@liuw.name> wrote:
> xen-create-image is part of xen-tools, which is not developed by core
> Xen developers. So this one should be reported to xen-tools
> maintainer(s).

Okay, will do.

> I remember some xen-tools version has a bug regarding memory size. Are
> you using Debian's package version? Could you try the latest
> xen-tools from upstream? Did changing "M" to "Mb" help (it should work
> for both Squeeze and Wheezy)?

That did the trick indeed. (But I'll still report it as a bug, because this is not obvious)

> I wrote a patch some time ago to make the parser handle memory suffix
> but was not upstreamed. You're welcome to take that and upstream it,
> or implement some cleaner solution.

You mean for the xen config files?

Jeroen.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:25:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIajv-0002KY-3k; Wed, 26 Feb 2014 09:25:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIajt-0002KQ-2f
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 09:25:17 +0000
Received: from [85.158.137.68:21560] by server-16.bemta-3.messagelabs.com id
	BD/89-29917-CF2BD035; Wed, 26 Feb 2014 09:25:16 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393406714!4284817!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2981 invoked from network); 26 Feb 2014 09:25:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 09:25:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,546,1389744000"; d="scan'208";a="104215606"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 09:25:13 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 04:25:12 -0500
Message-ID: <1393406711.6506.15.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 26 Feb 2014 09:25:11 +0000
In-Reply-To: <530C5C60020000780011F076@nat28.tlf.novell.com>
References: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
	<CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
	<530C5C60020000780011F076@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	George Dunlap <dunlapg@umich.edu>, Keir Fraser <keir@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to
 subject domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-25 at 08:03 +0000, Jan Beulich wrote:
> Anyway, I also consider it odd to complain about this now when
> the referenced discussion has happened weeks ago, with no
> useful result.

IIRC George was on vacation and/or travelling for a conference back then
(or maybe he was just back in town and buried in email, with the same
effect)

Anyway, since George knows PoD better than any of us I think it is worth
revisiting things with his input.

I would certainly prefer a solution which exposed some more semantically
meaningful information (from the guest's PoV) which lets it do the right
thing rather than just exposing the PoD internal accounting directly,
since that seems like rather an implementation detail to me. e.g. what
if we decide to make the PoD cache shared between a bunch of related
guests (totally hypothetical) or back it with the normal page sharing or
paging mechanisms? We don't want those sorts of changes to create a
visible guest ABI difference.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:26:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIalR-0002R7-Jq; Wed, 26 Feb 2014 09:26:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIalQ-0002Qt-Dt
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 09:26:52 +0000
Received: from [85.158.137.68:45404] by server-11.bemta-3.messagelabs.com id
	8E/67-04255-B53BD035; Wed, 26 Feb 2014 09:26:51 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393406809!4263733!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3439 invoked from network); 26 Feb 2014 09:26:50 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 09:26:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,546,1389744000"; d="scan'208";a="105852818"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 09:26:48 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 04:26:47 -0500
Message-ID: <1393406807.6506.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 26 Feb 2014 09:26:47 +0000
In-Reply-To: <530C77C8020000780011F114@nat28.tlf.novell.com>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-25 at 10:00 +0000, Jan Beulich wrote:
> ... in a simplified and consistent way.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I don't see any ARM changes here. Is that because you looked and didn't
find any, or does one of the ARM folks need to do a pass over the ARM
code in a follow-up patch?

Ian.

> 
> --- a/docs/misc/printk-formats.txt
> +++ b/docs/misc/printk-formats.txt
> @@ -15,3 +15,6 @@ Symbol/Function pointers:
>  
>         In the case that an appropriate symbol name can't be found, %p[sS] will
>         fall back to '%p' and print the address in hex.
> +
> +       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
> +               "d<domid>v<vcpuid>")
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
>      if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
>      {
>          dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
> -                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
> +                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
>                  has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
> -                v->domain->domain_id, v->vcpu_id,
> -                guest_mcg_cap & ~MCG_CAP_COUNT);
> +                v, guest_mcg_cap & ~MCG_CAP_COUNT);
>          return -EPERM;
>      }
>  
> @@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
>                guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
>               !test_and_set_bool(v->mce_pending) )
>          {
> -            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
>              vcpu_kick(v);
>              ret = 0;
>          }
>          else
>          {
> -            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
>              ret = -EBUSY;
>              break;
>          }
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
>          if ( !warned )
>          {
>              warned = 1;
> -            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
> +            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
>                     "(If you see this outside of debugging activity,"
>                     " please report to xen-devel@lists.xenproject.org)\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   v);
>          }
>          memset(reg, 0, sizeof(*reg));
>          return;
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct 
>          if ( !cpu_has(c, X86_FEATURE_DTES64) )
>          {
>              printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>              goto func_out;
>          }
>          vpmu_set(vpmu, VPMU_CPU_HAS_DS);
> @@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct 
>              /* If BTS_UNAVAIL is set reset the DS feature. */
>              vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
>              printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>          }
>          else
>          {
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu 
>      mfn_t *oos;
>      struct domain *d = v->domain;
>  
> -    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
> -                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn)); 
> +    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
>  
>      for_each_vcpu(d, v) 
>      {
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
>  static void reserved_bit_page_fault(
>      unsigned long addr, struct cpu_user_regs *regs)
>  {
> -    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
> -           current->domain->domain_id, current->vcpu_id, regs->error_code);
> +    printk("%pv: reserved bit in page table (ec=%04X)\n",
> +           current, regs->error_code);
>      show_page_walk(addr);
>      show_execution_state(regs);
>  }
> @@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
>          tb->flags |= TBF_INTERRUPT;
>      if ( unlikely(null_trap_bounce(v, tb)) )
>      {
> -        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> -               v->domain->domain_id, v->vcpu_id, error_code);
> +        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
>          show_page_walk(addr);
>      }
>  
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
>  
>  vcpu_info_t dummy_vcpu_info;
>  
> -int current_domain_id(void)
> -{
> -    return current->domain->domain_id;
> -}
> -
>  static void __domain_finalise_shutdown(struct domain *d)
>  {
>      struct vcpu *v;
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
>  
>      if ( !is_idle_vcpu(current) )
>      {
> -        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
> -               smp_processor_id(), current->domain->domain_id,
> -               current->vcpu_id);
> +        printk("*** Dumping CPU%u guest state (%pv): ***\n",
> +               smp_processor_id(), current);
>          show_execution_state(guest_cpu_user_regs());
>          printk("\n");
>      }
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
>      struct list_head *iter;
>      int pos = 0;
>  
> -    d2printk("rqi d%dv%d\n",
> -           svc->vcpu->domain->domain_id,
> -           svc->vcpu->vcpu_id);
> +    d2printk("rqi %pv\n", svc->vcpu);
>  
>      BUG_ON(&svc->rqd->runq != runq);
>      /* Idle vcpus not allowed on the runqueue anymore */
> @@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
>  
>          if ( svc->credit > iter_svc->credit )
>          {
> -            d2printk(" p%d d%dv%d\n",
> -                   pos,
> -                   iter_svc->vcpu->domain->domain_id,
> -                   iter_svc->vcpu->vcpu_id);
> +            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
>              break;
>          }
>          pos++;
> @@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
>      cpumask_t mask;
>      struct csched_vcpu * cur;
>  
> -    d2printk("rqt d%dv%d cd%dv%d\n",
> -             new->vcpu->domain->domain_id,
> -             new->vcpu->vcpu_id,
> -             current->domain->domain_id,
> -             current->vcpu_id);
> +    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
>  
>      BUG_ON(new->vcpu->processor != cpu);
>      BUG_ON(new->rqd != rqd);
> @@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
>          t2c_update(rqd, delta, svc);
>          svc->start_time = now;
>  
> -        d2printk("b d%dv%d c%d\n",
> -                 svc->vcpu->domain->domain_id,
> -                 svc->vcpu->vcpu_id,
> -                 svc->credit);
> +        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
>      } else {
>          d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
>                 __func__, now, svc->start_time);
> @@ -871,11 +859,9 @@ static void
>  csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>  {
>      struct csched_vcpu *svc = vc->sched_priv;
> -    struct domain * const dom = vc->domain;
>      struct csched_dom * const sdom = svc->sdom;
>  
> -    printk("%s: Inserting d%dv%d\n",
> -           __func__, dom->domain_id, vc->vcpu_id);
> +    printk("%s: Inserting %pv\n", __func__, vc);
>  
>      /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
>       * been called for that cpu.
> @@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler 
>  
>      /* Schedule lock should be held at this point. */
>  
> -    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
> +    d2printk("w %pv\n", vc);
>  
>      BUG_ON( is_idle_vcpu(vc) );
>  
> @@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops, 
>      {
>          if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
>          {
> -            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv -\n", svc->vcpu);
>              clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
>          }
>          /* Leave it where it is for now.  When we actually pay attention
> @@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops, 
>          }
>          else
>          {
> -            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv +\n", svc->vcpu);
>              new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
>              goto out_up;
>          }
> @@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
>  {
>      if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
>      {
> -        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
>          /* It's running; mark it to migrate. */
>          svc->migrate_rqd = trqd;
>          set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
> @@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
>      {
>          int on_runq=0;
>          /* It's not running; just move it */
> -        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
>          if ( __vcpu_on_runq(svc) )
>          {
>              __runq_remove(svc);
> @@ -1662,11 +1646,7 @@ csched_schedule(
>      SCHED_STAT_CRANK(schedule);
>      CSCHED_VCPU_CHECK(current);
>  
> -    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
> -             cpu,
> -             scurr->vcpu->domain->domain_id,
> -             scurr->vcpu->vcpu_id,
> -             now);
> +    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
>  
>      BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
>  
> @@ -1693,12 +1673,11 @@ csched_schedule(
>                  }
>              }
>          }
> -        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
> +        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
>                 "pcpu %d rq %d!\n",
>                 __func__,
>                 cpu, this_rqi,
> -               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
> -               scurr->vcpu->processor, other_rqi);
> +               scurr->vcpu, scurr->vcpu->processor, other_rqi);
>      }
>      BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
>  
> @@ -1755,12 +1734,8 @@ csched_schedule(
>              __runq_remove(snext);
>              if ( snext->vcpu->is_running )
>              {
> -                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
> -                       cpu,
> -                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
> -                       snext->vcpu->processor,
> -                       scurr->vcpu->domain->domain_id,
> -                       scurr->vcpu->vcpu_id);
> +                printk("p%d: snext %pv running on p%d! scurr %pv\n",
> +                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
>                  BUG();
>              }
>              set_bit(__CSFLAG_scheduled, &snext->flags);
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
>  
>          if ( v->affinity_broken )
>          {
> -            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
> -                   d->domain_id, v->vcpu_id);
> +            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
>              cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
>              v->affinity_broken = 0;
>          }
> @@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
>              if ( cpumask_empty(&online_affinity) &&
>                   cpumask_test_cpu(cpu, v->cpu_affinity) )
>              {
> -                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
> -                        d->domain_id, v->vcpu_id);
> +                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
>  
>                  if (system_state == SYS_STATE_suspend)
>                  {
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -19,6 +19,7 @@
>  #include <xen/ctype.h>
>  #include <xen/symbols.h>
>  #include <xen/lib.h>
> +#include <xen/sched.h>
>  #include <asm/div64.h>
>  #include <asm/page.h>
>  
> @@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
>  
>          return str;
>      }
> +
> +    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
> +    {
> +        const struct vcpu *v = arg;
> +
> +        ++*fmt_ptr;
> +        if ( str <= end )
> +            *str = 'd';
> +        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
> +        if ( str <= end )
> +            *str = 'v';
> +        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
> +    }
>      }
>  
>      if ( field_width == -1 )
> --- a/xen/include/xen/config.h
> +++ b/xen/include/xen/config.h
> @@ -74,12 +74,11 @@
>  
>  #ifndef __ASSEMBLY__
>  
> -int current_domain_id(void);
>  #define dprintk(_l, _f, _a...)                              \
>      printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
>  #define gdprintk(_l, _f, _a...)                             \
> -    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
> -           __LINE__, current_domain_id() , ## _a )
> +    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
> +           __LINE__, current, ## _a )
>  
>  #endif /* !__ASSEMBLY__ */
>  
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

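[Archive note: the %pv handler in the patch above emits "d<domid>v<vcpuid>" for a 'struct vcpu *'. The following is a standalone sketch of that output format using ordinary snprintf and hypothetical stand-in structs, not Xen's actual vsprintf internals or type definitions.]

```c
#include <stdio.h>

/* Hypothetical stand-ins for Xen's struct domain / struct vcpu,
 * reduced to the two fields the %pv specifier prints. */
struct domain { int domain_id; };
struct vcpu   { const struct domain *domain; int vcpu_id; };

/* Format a vcpu as "d<domid>v<vcpuid>", the layout %pv produces.
 * Returns the would-be length (excluding the NUL), truncating like
 * snprintf when the buffer is too small. */
static int format_vcpu(char *buf, size_t size, const struct vcpu *v)
{
    return snprintf(buf, size, "d%dv%d", v->domain->domain_id, v->vcpu_id);
}
```

So a message such as the one in vmce.c would render "inject vMCE to d3v1" for vcpu 1 of domain 3, instead of the old hand-rolled "d%d:v%d" spellings, which varied between files ("d%d:v%d", "d%dv%d", "D%dV%d").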
From xen-devel-bounces@lists.xen.org Wed Feb 26 09:30:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIaoZ-0002co-86; Wed, 26 Feb 2014 09:30:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIaoY-0002ch-6a
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 09:30:06 +0000
Received: from [85.158.137.68:30812] by server-14.bemta-3.messagelabs.com id
	B9/45-08196-D14BD035; Wed, 26 Feb 2014 09:30:05 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393407001!4287942!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31996 invoked from network); 26 Feb 2014 09:30:04 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 09:30:04 -0000
X-IronPort-AV: E=Sophos;i="4.97,546,1389744000"; d="scan'208";a="105853347"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 09:30:01 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 04:30:00 -0500
Message-ID: <1393406999.6506.18.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Viktor Kleinik <viktor.kleinik@globallogic.com>
Date: Wed, 26 Feb 2014 09:29:59 +0000
In-Reply-To: <CAM=aOxgvuyfsT8ySbqEA_3bDM+Z_K_QDcHRCVqdDDvtNyvdVYA@mail.gmail.com>
References: <CAM=aOxghm++oVRMoJDB7J+CH14jpNJTdynDwQ7ffWgR1x0+4sQ@mail.gmail.com>
	<1393259051.16570.101.camel@kazak.uk.xensource.com>
	<CAM=aOxgvuyfsT8ySbqEA_3bDM+Z_K_QDcHRCVqdDDvtNyvdVYA@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Domain configuration for DomU
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-25 at 14:27 +0000, Viktor Kleinik wrote:
> Ian,
> 
> 
> Thank you for your response.
> 
> 
> > I'm not sure how easy it would be to arrange it anyway -- the thing 
> > which knows about the kernel/appending is a different library to 
> > what creates the dtb.
> 

> 
> We will try to implement some alert mechanism between those 
> two things.

Ultimately we should consider exposing, via the toolstack in a
first-class way, whatever it is that causes you to need to write a
custom DTB in the first place. I suppose it is passthrough-related?

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:39:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIaxQ-0002rr-I4; Wed, 26 Feb 2014 09:39:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIaxP-0002rm-5Y
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 09:39:15 +0000
Received: from [85.158.143.35:9072] by server-3.bemta-4.messagelabs.com id
	DC/F6-11539-246BD035; Wed, 26 Feb 2014 09:39:14 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393407552!8380078!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16665 invoked from network); 26 Feb 2014 09:39:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 09:39:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,546,1389744000"; d="scan'208";a="105854993"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 09:38:51 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 04:38:51 -0500
Message-ID: <1393407529.18730.3.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Rusty Russell <rusty@au1.ibm.com>
Date: Wed, 26 Feb 2014 09:38:49 +0000
In-Reply-To: <87vbw458jr.fsf@rustcorp.com.au>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
	<CA+aC4kt1HcH8dryqDTxcv0rNWBx8Ed4F7VNVjuHCkAFSWC+Ktw@mail.gmail.com>
	<87vbw458jr.fsf@rustcorp.com.au>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	ian@bromium.com, Anthony Liguori <anthony@codemonkey.ws>,
	sasha.levin@oracle.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
 different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-25 at 11:10 +1030, Rusty Russell wrote:
> Anthony Liguori <anthony@codemonkey.ws> writes:
> > On Thu, Feb 20, 2014 at 4:54 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> >> On the other hand, if we wanted a more Xen-like setup, it would look
> >> like this:
> >>
> >> 1) Abstract away the "physical addresses" to "handles" in the standard,
> >>    and allow some platform-specific mapping setup and teardown.
> >
> > At the risk of beating a dead horse, passing handles (grant
> > references) is going to be slow.
> ...
> > I really think the best paths forward for virtio on Xen are either (1)
> > reject the memory isolation thing and leave things as is or (2) assume
> > bounce buffering at the transport layer (by using the PCI DMA API).
> 
> Xen can get memory isolation back by doing the copy in the hypervisor.
> I've always liked that approach because it doesn't alter the guest
> semantics, but it's very different from what Xen does now.

Doing the copy in the hypervisor still uses grant references, since the
hypervisor needs to know what the source domain is permitting access to
for the target domain (or vice versa if you do the copy the other way)
and grant tables are the mechanism which achieves this. See the already
existing GNTTABOP_copy[0] for example, it is used in the existing Xen PV
driver pairs (e.g. network receive into domU).

Ian.

[0]
http://xenbits.xen.org/docs/unstable/hypercall/x86_64/include,public,grant_table.h.html#EnumVal_GNTTABOP_copy



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:44:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:44:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIb24-0002zu-CA; Wed, 26 Feb 2014 09:44:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIb23-0002zp-FH
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 09:44:03 +0000
Received: from [85.158.137.68:58453] by server-16.bemta-3.messagelabs.com id
	B6/4C-29917-267BD035; Wed, 26 Feb 2014 09:44:02 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393407841!4290682!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13573 invoked from network); 26 Feb 2014 09:44:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 09:44:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 09:44:01 +0000
Message-Id: <530DC57D020000780011F6D3@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 09:44:13 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Mukesh Rathor" <mukesh.rathor@oracle.com>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-3-git-send-email-mukesh.rathor@oracle.com>
	<530C7048020000780011F0D9@nat28.tlf.novell.com>
	<20140225124459.792471cf@mantra.us.oracle.com>
In-Reply-To: <20140225124459.792471cf@mantra.us.oracle.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 2/3] pvh: fix pirq path for pvh
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 21:44, Mukesh Rathor <mukesh.rathor@oracle.com> wrote:
> On Tue, 25 Feb 2014 09:28:24 +0000
> "Jan Beulich" <JBeulich@suse.com> wrote:
> 
>> >>> On 25.02.14 at 02:03, Mukesh Rathor <mukesh.rathor@oracle.com>
>> >>> wrote:
>> > Just like hvm, pirq eoi shared page is not there for pvh. pvh should
>> > not touch any pv_domain fields.
>> 
>> While the latter is true, wasn't it that IRQ handling wise PVH is
>> using PV mechanisms? In which case the EOI map page would be of
>> interest, and rather than guarding the accesses you ought to move
>> the field out of pv_domain.
> 
> It would be, but my proposals to move the fields out in earlier patches
> had been turned down by you

I don't think I intentionally turned down any movement of fields
provably useful for PVH. I'm not immune to making mistakes
during patch review, so I'm sorry if I indeed prevented this from
being done right away.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:46:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:46:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIb3t-000367-Tx; Wed, 26 Feb 2014 09:45:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIb3s-000360-8c
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 09:45:56 +0000
Received: from [85.158.137.68:42525] by server-12.bemta-3.messagelabs.com id
	31/14-01674-3D7BD035; Wed, 26 Feb 2014 09:45:55 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393407954!903965!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19160 invoked from network); 26 Feb 2014 09:45:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 09:45:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 09:45:54 +0000
Message-Id: <530DC5EF020000780011F6D6@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 09:46:07 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"George Dunlap" <dunlapg@umich.edu>
References: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
	<CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
	<530C5C60020000780011F076@nat28.tlf.novell.com>
	<1393406711.6506.15.camel@kazak.uk.xensource.com>
In-Reply-To: <1393406711.6506.15.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to
 subject domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 10:25, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-25 at 08:03 +0000, Jan Beulich wrote:
>> Anyway, I also consider it odd to complain about this now when
>> the referenced discussion has happened weeks ago, with no
>> useful result.
> 
> IIRC George was on vacation and/or travelling for a conference back then
> (or maybe he was just back in town and buried in email, with the same
> effect)
> 
> Anyway, since George knows PoD better than any of us I think it is worth
> revisiting things with his input.
> 
> I would certainly prefer a solution which exposed some more semantically
> meaningful information (from the guest's PoV) which lets it do the right
> thing rather than just exposing the PoD internal accounting directly,
> since that seems like rather an implementation detail to me. e.g. what
> if we decide to make the PoD cache shared between a bunch of related
> guests (totally hypothetical) or back it with the normal page sharing or
> paging mechanisms? We don't want those sorts of changes to create a
> visible guest ABI difference.

So what's the counter proposal then?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 10:25, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-25 at 08:03 +0000, Jan Beulich wrote:
>> Anyway, I also consider it odd to complain about this now when
>> the referenced discussion has happened weeks ago, with no
>> useful result.
> 
> IIRC George was on vacation and/or travelling for a conference back then
> (or maybe he was just back in town and buried in email, with the same
> effect)
> 
> Anyway, since George knows PoD better than any of us I think it is worth
> revisiting things with his input.
> 
> I would certainly prefer a solution which exposed some more semantically
> meaningful information (from the guest's PoV) which lets it do the right
> thing rather than just exposing the PoD internal accounting directly,
> since that seems like rather an implementation detail to me. e.g. what
> if we decide to make the PoD cache shared between a bunch of related
> guests (totally hypothetical) or back it with the normal page sharing or
> paging mechanisms? We don't want those sorts of changes to create a
> visible guest ABI difference.

So what's the counter proposal then?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:48:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:48:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIb6Q-0003LC-4A; Wed, 26 Feb 2014 09:48:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIb6P-0003Ky-8P
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 09:48:33 +0000
Received: from [85.158.137.68:31380] by server-12.bemta-3.messagelabs.com id
	7B/E8-01674-F68BD035; Wed, 26 Feb 2014 09:48:31 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393408109!4284835!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1953 invoked from network); 26 Feb 2014 09:48:30 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 09:48:30 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 09:48:29 +0000
Message-Id: <530DC689020000780011F6D9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 09:48:41 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
	<1393406807.6506.17.camel@kazak.uk.xensource.com>
In-Reply-To: <1393406807.6506.17.camel@kazak.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 10:26, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> On Tue, 2014-02-25 at 10:00 +0000, Jan Beulich wrote:
>> ... in a simplified and consistent way.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I don't see any ARM changes here. Is that because you looked and didn't
> find any or does one of the ARM folks need to do a pass over the ARM
> code in a follow up patch?

It's been a while since I did this, so I'm not entirely certain which
way it was; I think I avoided looking at the ARM side so as not to make
the dependency chain of necessary acks larger than needed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 09:52:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 09:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIb9v-0003Zu-PR; Wed, 26 Feb 2014 09:52:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIb9t-0003Zn-P5
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 09:52:09 +0000
Received: from [85.158.139.211:29716] by server-12.bemta-5.messagelabs.com id
	AB/94-15415-949BD035; Wed, 26 Feb 2014 09:52:09 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393408327!1786313!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25750 invoked from network); 26 Feb 2014 09:52:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 09:52:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,546,1389744000"; d="scan'208";a="104220578"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 09:52:06 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 04:52:04 -0500
Message-ID: <1393408323.18730.13.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jonathan Daugherty <jtd@galois.com>
Date: Wed, 26 Feb 2014 09:52:03 +0000
In-Reply-To: <20140226000053.GE45200@galois.com>
References: <20140226000053.GE45200@galois.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: Adam Wick <awick@galois.com>,
	Xen Development List <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Xen on ARM: domU ramdisk support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-25 at 16:00 -0800, Jonathan Daugherty wrote:
> Hi,
> 
> I'm attempting to load a Linux domU on an Arndale board, but the
> 'ramdisk' setting in my domain configuration file appears to have no
> impact on the Linux kernel I'm booting.  What is the status of support
> for domU initrds in Xen on ARM at the moment?

Still TBD I'm afraid :-(

> I took a look at libxl and found that perhaps libxl_arm.c should be
> doing something to set up the FDT nodes describing the initrd start/end,
> but I see no evidence of it there.

Right, I think that is one piece which is missing.

The other bit is that xc_dom_arm.c:arch_setup_meminit (probably that's
the right place) needs to pick a suitable address and set it in
ramdisk_seg. The selected address will need to conform to the
constraints documented in linux/Documentation/arm64/booting.txt and
linux/Documentation/arm/Booting. The address can either meet both sets
of constraints, as xen/arch/arm/kernel.c does for dom0, or be chosen
conditionally on the guest type.

Once ramdisk_seg is populated I *think* the generic libxc code will load
it into RAM automatically and then it just needs the reference from the
DTB nodes.

> I may have time to write a patch if this is something that needs doing.

It is, and please do; it would be much appreciated.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 10:24:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 10:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIbeZ-00040b-24; Wed, 26 Feb 2014 10:23:51 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sgruszka@redhat.com>) id 1WIbeY-00040W-79
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 10:23:50 +0000
Received: from [193.109.254.147:33298] by server-11.bemta-14.messagelabs.com
	id 4D/79-24604-5B0CD035; Wed, 26 Feb 2014 10:23:49 +0000
X-Env-Sender: sgruszka@redhat.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393410228!6905146!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19788 invoked from network); 26 Feb 2014 10:23:48 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-14.tower-27.messagelabs.com with SMTP;
	26 Feb 2014 10:23:48 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1QANiKN004497
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 26 Feb 2014 05:23:44 -0500
Received: from localhost (vpn1-7-246.ams2.redhat.com [10.36.7.246])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1QANhGS004812; Wed, 26 Feb 2014 05:23:43 -0500
Date: Wed, 26 Feb 2014 11:26:21 +0100
From: Stanislaw Gruszka <sgruszka@redhat.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140226102620.GA4686@redhat.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
	<20140131160140.GC23648@phenom.dumpdata.com>
	<20140203101215.GA1725@redhat.com>
	<20140203141429.GD3400@phenom.dumpdata.com>
	<20140210143727.GA15771@phenom.dumpdata.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140210143727.GA15771@phenom.dumpdata.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, David Rientjes <rientjes@google.com>,
	boris.ostrovsky@oracle.com
Subject: Re: [Xen-devel] [BUG?] Interrupts enabled after
 xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 10, 2014 at 09:37:27AM -0500, Konrad Rzeszutek Wilk wrote:
> > > But I'm not sure that is a good solution. It creates some unnecessary
> > > sysfs directories and files. Additionally, it can restore CPU C-states
> > > after some other drivers resume, which perhaps require proper C-states.
> > 
> > Yes.
> > > 
> > > Hence maybe adding a direct notify from the xen core resume path would
> > > be a better idea (proposed patch below). Please let me know what you
> > > think; I'll provide whichever solution you choose to the bug reporters
> > > for testing.
> > 
> > Let me think about it for a day or so.
> 
> Sorry for the delay. I think this is fine.

I'm sorry for the delay too. I provided a test kernel with the patch to
the bug reporter, but have not got any answer for two weeks now.

I'll post the patch in the next email; I hope someone can test it (I'm
not a xen user). I'm pretty sure it fixes the problem, but not sure that
it doesn't cause some crash. To test, it is enough to perform a
suspend/resume cycle (though I'm not sure whether just the guest or the
whole system including the hypervisor should be suspended).

Thanks
Stanislaw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 10:25:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 10:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIbgY-00046x-Km; Wed, 26 Feb 2014 10:25:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIbgX-00046l-2e
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 10:25:53 +0000
Received: from [193.109.254.147:63815] by server-12.bemta-14.messagelabs.com
	id 29/5F-17220-031CD035; Wed, 26 Feb 2014 10:25:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393410350!6941081!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3509 invoked from network); 26 Feb 2014 10:25:51 -0000
From xen-devel-bounces@lists.xen.org Wed Feb 26 10:25:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 10:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIbgY-00046x-Km; Wed, 26 Feb 2014 10:25:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIbgX-00046l-2e
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 10:25:53 +0000
Received: from [193.109.254.147:63815] by server-12.bemta-14.messagelabs.com
	id 29/5F-17220-031CD035; Wed, 26 Feb 2014 10:25:52 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393410350!6941081!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3509 invoked from network); 26 Feb 2014 10:25:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 10:25:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104226481"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 10:25:49 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 05:25:49 -0500
Message-ID: <1393410348.18730.16.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Date: Wed, 26 Feb 2014 10:25:48 +0000
In-Reply-To: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
References: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen hypervisor panic in SuSE 11 SP2 (Xen
 4.1.2_14-0.5.5)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, 2014-02-25 at 16:14 -0800, Saurabh Mishra wrote:

> We are using SuSE 11 SP2 Xen

This list focuses on the development of upstream Xen. Distro-specific
issues/bugs are best targeted at the appropriate distro resource in the
first instance (i.e. SuSE bugzilla and/or mailing lists in this case).

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 10:28:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 10:28:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIbia-0004DX-5Q; Wed, 26 Feb 2014 10:28:00 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <sgruszka@redhat.com>) id 1WIbiX-0004DO-Ts
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 10:27:58 +0000
Received: from [193.109.254.147:13416] by server-14.bemta-14.messagelabs.com
	id B4/86-29228-DA1CD035; Wed, 26 Feb 2014 10:27:57 +0000
X-Env-Sender: sgruszka@redhat.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393410475!1637028!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7987 invoked from network); 26 Feb 2014 10:27:56 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	26 Feb 2014 10:27:56 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1QARrK0019398
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 26 Feb 2014 05:27:53 -0500
Received: from localhost (vpn1-7-246.ams2.redhat.com [10.36.7.246])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1QARqQM010215; Wed, 26 Feb 2014 05:27:53 -0500
Date: Wed, 26 Feb 2014 11:30:30 +0100
From: Stanislaw Gruszka <sgruszka@redhat.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Message-ID: <20140226103030.GB4686@redhat.com>
References: <20140128150848.GA1428@redhat.com>
	<20140128160404.GA5732@phenom.dumpdata.com>
	<alpine.DEB.2.02.1401282120180.20167@chino.kir.corp.google.com>
	<20140129082521.GA1362@redhat.com>
	<20140131160140.GC23648@phenom.dumpdata.com>
	<20140203101215.GA1725@redhat.com>
	<20140203141429.GD3400@phenom.dumpdata.com>
	<20140210143727.GA15771@phenom.dumpdata.com>
	<20140226102620.GA4686@redhat.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140226102620.GA4686@redhat.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: Ben Guthro <benjamin.guthro@citrix.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	linux-kernel@vger.kernel.org, xen-devel@lists.xensource.com,
	david.vrabel@citrix.com, David Rientjes <rientjes@google.com>,
	boris.ostrovsky@oracle.com
Subject: [Xen-devel] [PATCH] xen/acpi-processor: fix enabling interrupts on
	syscore_resume
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The syscore->resume() callback is expected not to enable interrupts;
otherwise it generates a warning like the one below:

[ 9386.365390] WARNING: CPU: 0 PID: 6733 at drivers/base/syscore.c:104 syscore_resume+0x9a/0xe0()
[ 9386.365403] Interrupts enabled after xen_acpi_processor_resume+0x0/0x34 [xen_acpi_processor]
...
[ 9386.365429] Call Trace:
[ 9386.365434]  [<ffffffff81667a8b>] dump_stack+0x45/0x56
[ 9386.365437]  [<ffffffff8106921d>] warn_slowpath_common+0x7d/0xa0
[ 9386.365439]  [<ffffffff8106928c>] warn_slowpath_fmt+0x4c/0x50
[ 9386.365442]  [<ffffffffa0261bb0>] ? xen_upload_processor_pm_data+0x300/0x300 [xen_acpi_processor]
[ 9386.365443]  [<ffffffff814055fa>] syscore_resume+0x9a/0xe0
[ 9386.365445]  [<ffffffff810aef42>] suspend_devices_and_enter+0x402/0x470
[ 9386.365447]  [<ffffffff810af128>] pm_suspend+0x178/0x260

In xen_acpi_processor_resume() we call various procedures which are not
atomic and can enable interrupts. To prevent this issue, introduce a
separate resume notifier, called after interrupts are enabled on resume
and before the other drivers' resume callbacks are invoked.

Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
---
Patch was only compiled, not tested otherwise.

 drivers/xen/manage.c             | 17 +++++++++++++++++
 drivers/xen/xen-acpi-processor.c | 15 ++++++++-------
 include/xen/xen-ops.h            |  3 +++
 3 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index 624e8dc..96e4173 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -13,6 +13,7 @@
 #include <linux/freezer.h>
 #include <linux/syscore_ops.h>
 #include <linux/export.h>
+#include <linux/notifier.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -46,6 +47,20 @@ struct suspend_info {
 	void (*post)(int cancelled);
 };
 
+static RAW_NOTIFIER_HEAD(xen_resume_notifier);
+
+void xen_resume_notifier_register(struct notifier_block *nb)
+{
+	raw_notifier_chain_register(&xen_resume_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(xen_resume_notifier_register);
+
+void xen_resume_notifier_unregister(struct notifier_block *nb)
+{
+	raw_notifier_chain_unregister(&xen_resume_notifier, nb);
+}
+EXPORT_SYMBOL_GPL(xen_resume_notifier_unregister);
+
 #ifdef CONFIG_HIBERNATE_CALLBACKS
 static void xen_hvm_post_suspend(int cancelled)
 {
@@ -152,6 +167,8 @@ static void do_suspend(void)
 
 	err = stop_machine(xen_suspend, &si, cpumask_of(0));
 
+	raw_notifier_call_chain(&xen_resume_notifier, 0, NULL);
+
 	dpm_resume_start(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
 
 	if (err) {
diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
index 7231859..82358d1 100644
--- a/drivers/xen/xen-acpi-processor.c
+++ b/drivers/xen/xen-acpi-processor.c
@@ -27,10 +27,10 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/types.h>
-#include <linux/syscore_ops.h>
 #include <linux/acpi.h>
 #include <acpi/processor.h>
 #include <xen/xen.h>
+#include <xen/xen-ops.h>
 #include <xen/interface/platform.h>
 #include <asm/xen/hypercall.h>
 
@@ -495,14 +495,15 @@ static int xen_upload_processor_pm_data(void)
 	return rc;
 }
 
-static void xen_acpi_processor_resume(void)
+static int xen_acpi_processor_resume(struct notifier_block *nb,
+				     unsigned long action, void *data)
 {
 	bitmap_zero(acpi_ids_done, nr_acpi_bits);
-	xen_upload_processor_pm_data();
+	return xen_upload_processor_pm_data();
 }
 
-static struct syscore_ops xap_syscore_ops = {
-	.resume	= xen_acpi_processor_resume,
+struct notifier_block xen_acpi_processor_resume_nb = {
+	.notifier_call = xen_acpi_processor_resume,
 };
 
 static int __init xen_acpi_processor_init(void)
@@ -555,7 +556,7 @@ static int __init xen_acpi_processor_init(void)
 	if (rc)
 		goto err_unregister;
 
-	register_syscore_ops(&xap_syscore_ops);
+	xen_resume_notifier_register(&xen_acpi_processor_resume_nb);
 
 	return 0;
 err_unregister:
@@ -574,7 +575,7 @@ static void __exit xen_acpi_processor_exit(void)
 {
 	int i;
 
-	unregister_syscore_ops(&xap_syscore_ops);
+	xen_resume_notifier_unregister(&xen_acpi_processor_resume_nb);
 	kfree(acpi_ids_done);
 	kfree(acpi_id_present);
 	kfree(acpi_id_cst_present);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6412358 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -16,6 +16,9 @@ void xen_mm_unpin_all(void);
 void xen_timer_resume(void);
 void xen_arch_resume(void);
 
+void xen_resume_notifier_register(struct notifier_block *nb);
+void xen_resume_notifier_unregister(struct notifier_block *nb);
+
 int xen_setup_shutdown_event(void);
 
 extern unsigned long *xen_contiguous_bitmap;
-- 
1.7.11.7


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 10:29:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 10:29:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIbjt-0004Lr-Qh; Wed, 26 Feb 2014 10:29:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIbjs-0004Lk-8J
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 10:29:20 +0000
Received: from [85.158.137.68:14815] by server-14.bemta-3.messagelabs.com id
	DB/7D-08196-FF1CD035; Wed, 26 Feb 2014 10:29:19 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393410556!4252595!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13656 invoked from network); 26 Feb 2014 10:29:18 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 10:29:18 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="105863847"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 10:28:47 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 05:28:46 -0500
Message-ID: <1393410525.18730.17.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Wed, 26 Feb 2014 10:28:45 +0000
In-Reply-To: <530DC5EF020000780011F6D6@nat28.tlf.novell.com>
References: <530B57A7020000780011ECAC@nat28.tlf.novell.com>
	<CAFLBxZYo8CPzNMw-ERPi8jNUBZe43ZKSdM3gTwMcPXmQe+3VLg@mail.gmail.com>
	<530C5C60020000780011F076@nat28.tlf.novell.com>
	<1393406711.6506.15.camel@kazak.uk.xensource.com>
	<530DC5EF020000780011F6D6@nat28.tlf.novell.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	George Dunlap <dunlapg@umich.edu>, Keir Fraser <keir@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [Xen-devel] [PATCH] x86: expose XENMEM_get_pod_target to
 subject domain
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 09:46 +0000, Jan Beulich wrote:
> >>> On 26.02.14 at 10:25, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> > On Tue, 2014-02-25 at 08:03 +0000, Jan Beulich wrote:
> >> Anyway, I also consider it odd to complain about this now when
> >> the referenced discussion has happened weeks ago, with no
> >> useful result.
> > 
> > IIRC George was on vacation and/or travelling for a conference back then
> > (or maybe he was just back in town and buried in email, with the same
> > effect)
> > 
> > Anyway, since George knows PoD better than any of us I think it is worth
> > revisiting things with his input.
> > 
> > I would certainly prefer a solution which exposed some more semantically
> > meaningful information (from the guest's PoV) which lets it do the right
> > thing rather than just exposing the PoD internal accounting directly,
> > since that seems like rather an implementation detail to me. e.g. what
> > if we decide to make the PoD cache shared between a bunch of related
> > guests (totally hypothetical) or back it with the normal page sharing or
> > paging mechanisms? We don't want those sorts of changes to create a
> > visible guest ABI difference.
> 
> So what's the counter proposal then?

My expectation was that was why George is asking questions... Since he
knows PoD I'm hoping he will have some ideas.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 11:08:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 11:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIcLx-0005Hf-Hz; Wed, 26 Feb 2014 11:08:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <liuw@liuw.name>) id 1WIcLv-0005HZ-Pb
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 11:08:40 +0000
Received: from [85.158.137.68:35948] by server-5.bemta-3.messagelabs.com id
	5A/45-04712-63BCD035; Wed, 26 Feb 2014 11:08:38 +0000
X-Env-Sender: liuw@liuw.name
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393412917!1501683!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27154 invoked from network); 26 Feb 2014 11:08:38 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 11:08:38 -0000
Received: by mail-wg0-f42.google.com with SMTP id x13so1495745wgg.1
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 03:08:37 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type:content-transfer-encoding;
	bh=qGiHLyBQIfxA52qbRuWPTmKSKEHf2Yu3c/FZbW4XYqY=;
	b=epuayWTYSaWgqnEpYK2SzlMMNQs0RuVQFERLexyF42rQDnRKfffq7Zyx0r74qg9mAm
	E4idqh4jVTSXXfQ9wHDVSeZlYoZvNaVKQIkhjher7El+khrHloyRThp10WBt6TBjsH1y
	v3jhLsqIhPvIPtVG6D1qctJ2s4yolH+6g2TzrVyPA9IEe+ngIhMLq+dmzSPD+2FsWBM9
	FOiZOEu0Q3UaUL7EPaAhMVukvOfDjtOVpNAMF0ZCO/C3s3Qa9TvxA9l5zQYNDEpz06K6
	ZKznLtSjjNn76wJALqiaw/tm09mNRdj0f0fjeEvKlaorykvCMchuZeJlF6u7rL1ZJj+u
	0k+Q==
X-Gm-Message-State: ALoCoQk7f7NOgOyZlqzgcX369bTox0MUk88QSK1l7rgaAEtzucgyDWJ0Ph3saq1F21hGpcccnBO0
X-Received: by 10.194.89.33 with SMTP id bl1mr1289511wjb.64.1393412917650;
	Wed, 26 Feb 2014 03:08:37 -0800 (PST)
MIME-Version: 1.0
Received: by 10.227.12.199 with HTTP; Wed, 26 Feb 2014 03:08:07 -0800 (PST)
In-Reply-To: <72CD20E6-9F26-42A8-8AAA-8D87D5B636E1@uva.nl>
References: <5F9A4C4C-1797-4E4B-9C33-C01CCB59B78D@uva.nl>
	<CAOsiSVWLBGs+mZ24zHKLoMvTnZ1SOhbAFmDKRJExrm5r_RsMJA@mail.gmail.com>
	<72CD20E6-9F26-42A8-8AAA-8D87D5B636E1@uva.nl>
From: Wei Liu <liuw@liuw.name>
Date: Wed, 26 Feb 2014 11:08:07 +0000
Message-ID: <CAOsiSVW83a9+4BUFi+-Je5ieXgXObJoa72WTZ44+nxd-iDy7zA@mail.gmail.com>
To: Jeroen van der Ham <vdham@uva.nl>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [BUG] xen-create-image memory vs config file
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 9:13 AM, Jeroen van der Ham <vdham@uva.nl> wrote:
> Hi,
>
> On 26 Feb 2014, at 01:08, Wei Liu <liuw@liuw.name> wrote:
>> xen-create-image is part of xen-tools, which is not developed by core
>> Xen developers. So this one should be reported to the xen-tools
>> maintainer(s).
>
> Okay, will do.
>

You'd better check whether it is a bug in the package you're using (I
presume you used a package from your distro) or a bug upstream.
Probably the problem is already fixed upstream.

>> I remember some xen-tools version has a bug regarding memory size. Are
>> you using Debian's packaged version? Could you try the latest
>> xen-tools from upstream? Did changing "M" to "Mb" help (it should work
>> for both Squeeze and Wheezy)?
>
> That did the trick indeed. (But I’ll still report it as a bug, because this is not obvious.)
>
>> I wrote a patch some time ago to make the parser handle memory suffixes,
>> but it was not upstreamed. You're welcome to take that and upstream it,
>> or implement some cleaner solution.
>
> You mean for the xen config files?
>

The parser in xl, which is used to parse your config file.

Look for "libxl: support for using 'MmGg' as memory size suffix" in
the archive. Maintainers already gave feedback. I just didn't have the
time to revisit it.

Wei.

> Jeroen.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 11:35:17 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 11:35:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIclQ-0005Us-Ug; Wed, 26 Feb 2014 11:35:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIclP-0005UP-Ov
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 11:35:00 +0000
Received: from [85.158.137.68:18969] by server-3.bemta-3.messagelabs.com id
	3B/24-14520-261DD035; Wed, 26 Feb 2014 11:34:58 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393414496!4315794!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15111 invoked from network); 26 Feb 2014 11:34:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 11:34:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104239752"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 11:34:34 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 06:34:33 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIckz-0002xD-P7;
	Wed, 26 Feb 2014 11:34:33 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIckz-0004oG-K3;
	Wed, 26 Feb 2014 11:34:33 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25306-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 11:34:33 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25306: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25306 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25306/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xend              4 xen-build                 fail REGR. vs. 25266
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-pv           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             fail    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          blocked 
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 11:47:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 11:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIcxG-0005go-Ck; Wed, 26 Feb 2014 11:47:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WIcxE-0005gj-Hl
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 11:47:12 +0000
Received: from [193.109.254.147:18569] by server-5.bemta-14.messagelabs.com id
	98/BD-16688-F34DD035; Wed, 26 Feb 2014 11:47:11 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393415230!6928624!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12138 invoked from network); 26 Feb 2014 11:47:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 11:47:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="105878549"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 11:47:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 06:47:08 -0500
Received: from dt47.uk.xensource.com ([10.80.229.47]
	helo=dt47.uk.xensource.com.)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <wei.liu2@citrix.com>)	id 1WIcxA-0007YA-8F;
	Wed, 26 Feb 2014 11:47:08 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 26 Feb 2014 11:47:07 +0000
Message-ID: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Wei Liu <wei.liu2@citrix.com>,
	Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH] mm: ensure useful progress in
	decrease_reservation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

During my fun time playing with the balloon driver I found that the
hypervisor's preemption check kept decrease_reservation from doing any
useful work for 32-bit guests, resulting in the guests hanging.

As Andrew suggested, we can force the check to fail for the first
iteration to ensure progress. We already did this in d3a55d7d9 "x86/mm:
Ensure useful progress in alloc_l2_table()".

After this change I can no longer see the hang caused by the
continuation logic.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Cc: Jan Beulich <JBeulich@suse.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/common/memory.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 5a0efd5..9d0d32e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -268,7 +268,7 @@ static void decrease_reservation(struct memop_args *a)
 
     for ( i = a->nr_done; i < a->nr_extents; i++ )
     {
-        if ( hypercall_preempt_check() )
+        if ( hypercall_preempt_check() && i != a->nr_done )
         {
             a->preempted = 1;
             goto out;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 11:52:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 11:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WId2e-0005os-6g; Wed, 26 Feb 2014 11:52:48 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <qin.l.li@oracle.com>) id 1WId2c-0005om-OS
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 11:52:47 +0000
Received: from [85.158.143.35:20952] by server-2.bemta-4.messagelabs.com id
	C0/1C-04779-D85DD035; Wed, 26 Feb 2014 11:52:45 +0000
X-Env-Sender: qin.l.li@oracle.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393415563!8421293!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 767 invoked from network); 26 Feb 2014 11:52:45 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 11:52:45 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1QBqbLI020610
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 26 Feb 2014 11:52:37 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1QBqZvc025429
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Wed, 26 Feb 2014 11:52:35 GMT
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1QBqYE2012361; Wed, 26 Feb 2014 11:52:35 GMT
Received: from [10.0.0.246] (/180.184.99.37)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Wed, 26 Feb 2014 03:52:34 -0800
Message-ID: <530DD57A.8010709@oracle.com>
Date: Wed, 26 Feb 2014 19:52:26 +0800
From: Qin Li <qin.l.li@oracle.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <52D6566C.5050302@oracle.com>
	<alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6448543515923308066=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============6448543515923308066==
Content-Type: multipart/alternative;
 boundary="------------090008050007090005050903"

This is a multi-part message in MIME format.
--------------090008050007090005050903
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit


On 2014/1/16 20:17, Stefano Stabellini wrote:
>> For a PV-on-HVM guest OS, the "shared_info->vcpu_info->vcpu_time_info"
>> is already visible. Does the guest OS still need to take any action to ask
>> the hypervisor to update this piece of memory periodically?
> I don't think you need to ask the hypervisor to update vcpu_time_info
> periodically, what gave you that idea?
Hi Stefano,

Now I see that it is the hypervisor that updates vcpu_time_info, but 
another thing confuses me:
HVM guests have a time drift issue because the TSCs on different vCPUs 
can be out of sync, especially after domain suspend/resume.
But how does pvclock actually fix this issue? Let's look at how the 
FreeBSD port calculates the system time:

==================
static uint64_t
get_nsec_offset(struct vcpu_time_info *tinfo)
{

     return (scale_delta(rdtsc() - tinfo->tsc_timestamp,
         tinfo->tsc_to_system_mul, tinfo->tsc_shift));
}

/**
  * \brief Get the current time, in nanoseconds, since the hypervisor booted.
  *
  * \note This function returns the current CPU's idea of this value, unless
  *       it happens to be less than another CPU's previously determined value.
  */
static uint64_t
xen_fetch_vcpu_time(void)
{
     struct vcpu_time_info dst;
     struct vcpu_time_info *src;
     uint32_t pre_version;
     uint64_t now;
     volatile uint64_t last;
     struct vcpu_info *vcpu = DPCPU_GET(vcpu_info);

     src = &vcpu->time;

     critical_enter();
     do {
         pre_version = xen_fetch_vcpu_tinfo(&dst, src);
         barrier();
         now = dst.system_time + get_nsec_offset(&dst);
         barrier();
     } while (pre_version != src->version);

     /*
      * Enforce a monotonically increasing clock time across all
      * VCPUs.  If our time is too old, use the last time and return.
      * Otherwise, try to update the last time.
      */
     do {
         last = last_time;
         if (last > now) {
             now = last;
             break;
         }
     } while (!atomic_cmpset_64(&last_time, last, now));

     critical_exit();

     return (now);
}
==================================

I guess the Linux guest will do the same thing: rdtsc() fetches the 
current timestamp from the currently running vCPU, so the TSC 
out-of-sync issue is still there.
It seems to me that pvclock finally fixes the time drift issue only 
because of the workaround enforced above, right?

Thanks,
Michael



--------------090008050007090005050903--


--===============6448543515923308066==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6448543515923308066==--




From xen-devel-bounces@lists.xen.org Wed Feb 26 11:58:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 11:58:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WId86-00060N-6B; Wed, 26 Feb 2014 11:58:26 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WId84-00060I-HH
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 11:58:24 +0000
Received: from [193.109.254.147:59169] by server-15.bemta-14.messagelabs.com
	id 11/65-10839-FD6DD035; Wed, 26 Feb 2014 11:58:23 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393415901!6915547!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20786 invoked from network); 26 Feb 2014 11:58:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 11:58:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104243798"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 11:58:21 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 06:58:19 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WId7z-0007gQ-3H;
	Wed, 26 Feb 2014 11:58:19 +0000
Date: Wed, 26 Feb 2014 11:58:14 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
In-Reply-To: <20140224194301.GA8089@phenom.dumpdata.com>
Message-ID: <alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>
	<20140224164004.GO816@phenom.dumpdata.com>
	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>
	<20140224194301.GA8089@phenom.dumpdata.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-882013886-1393415737=:31489"
Content-ID: <alpine.DEB.2.02.1402261156020.31489@kaball.uk.xensource.com>
X-DLP: MIA2
Cc: anthony.perard@citrix.com,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, "Zhang,
	Eniac" <eniac-xw.zhang@hp.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-882013886-1393415737=:31489
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE
Content-ID: <alpine.DEB.2.02.1402261156021.31489@kaball.uk.xensource.com>

A little while ago Anthony made the Q35 emulation in QEMU work with Xen:

http://marc.info/?l=qemu-devel&m=137813513713296

He might be able to give you some pointers on where to start.


On Mon, 24 Feb 2014, Konrad Rzeszutek Wilk wrote:
> On Mon, Feb 24, 2014 at 07:28:32PM +0000, Zhang, Eniac wrote:
> > Hi Konrad,
> >
> > Thanks for the info.  Your guest sees the virtual function as a PCI
> > device, just like I had suspected.  Unfortunately that won't work for
> > me.  I guess I have to take a hard look at implementing PCIe
> > passthrough using pciback then.
>
> You won't have to do it with pciback. Keep in mind that pciback just
> "holds" the device so that other drivers (like ixgbevf) don't use it.
>
> 'xl' ends up doing the proper hypercall to assign the device to
> the guest. And QEMU also does its set of calls to set up the
> BARs, interrupts, deal with MSI-X, etc.
>
> What you are going to have to look at is QEMU - and how to make it
> work with the newer emulated chipset.
>
> Stefano (CC-ed) here is the maintainer of the QEMU Xen pieces. Anthony
> (CC-ed as well) backported the proper pieces in QEMU to do
> PCI passthrough.
>
> Looking forward to your patches and we will be more than happy
> to help you upstream them!
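[For reference, the dom0-side sequence described above - pciback "holding" the device, then xl assigning it - looks roughly like this. This is a sketch only; the module name and xl subcommands assume a 2014-era Xen 4.x toolstack with the upstream xen-pciback driver, and the BDF is the one from the config further down the thread.]

```shell
# Make the pciback backend available in dom0 (it may be built in).
modprobe xen-pciback

# Detach the device from its native driver and let pciback "hold" it.
xl pci-assignable-add 0000:02:10.0

# Confirm the device is now assignable.
xl pci-assignable-list

# Either hot-plug it into a running guest...
xl pci-attach latest 0000:02:10.0

# ...or list it in the guest config before boot:
#   pci = ["0000:02:10.0"]
```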
>
> >
> > Regards/Eniac
> >
> > -----Original Message-----
> > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > Sent: Monday, February 24, 2014 9:40 AM
> > To: Zhang, Eniac
> > Cc: xen-devel@lists.xen.org
> > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> >
> > On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
> > > Hi Konrad,
> > >
> > > Here's what I see when I start a VM under Xen using pciback to pass
> > > a PCIe device into domU.  The device can be seen by the guest, and
> > > it is also functioning fine.  But it's not seen as a PCIe device;
> > > rather, it looks just like an ordinary PCI device, because only the
> > > first 0x100 bytes of its configuration space are accessible.  So if
> > > a driver needs to use data in the extended configuration space for
> > > certain features, it will fail.
> > >
> > > When you say you "did PCIe passthrough of a VF of an SR-IOV
> > > device", are you actually using it as a PCIe device, or have you
> > > throttled it back to PCI mode without being aware of the
> > > difference?  If you did see the PCIe device in the guest, can you
> > > share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output
> > > from the guest?
> >
> > # lspci
> > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> > 00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
> > 00:03.0 VGA compatible controller: Cirrus Logic GD 5446
> > 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
> > # lspci -t
> > -[0000:00]-+-00.0
> >            +-01.0
> >            +-01.1
> >            +-01.3
> >            +-02.0
> >            +-03.0
> >            \-04.0
> > # lspci -s 00:04.0 -xxxxx
> > 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
> > 00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
> > 10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
> > 20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
> > 30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
> > 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > 60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > 70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
> > 80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > 90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
> > b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
> > d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> >
> > -bash-4.1# more /vm-pci.cfg
> > builder='hvm'
> > disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
> > memory = 2048
> > boot="d"
> > maxvcpus=32
> > vcpus=1
> > serial='pty'
> > vnclisten="0.0.0.0"
> > name="latest"
> > pci = ["0000:02:10.0"]
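[As a side note, the question of whether the VF presents itself as PCIe can be answered from the `lspci -xxxxx` dump above by walking the standard capability list. A minimal sketch, in plain Python over the dump quoted in this mail; the capability pointer at offset 0x34 and the (id, next-pointer) layout come from the standard PCI configuration header.]

```python
# Config-space dump of the 82576 VF, as quoted above (lspci -xxxxx).
DUMP = """
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
"""

CAP_NAMES = {0x05: "MSI", 0x10: "PCI Express", 0x11: "MSI-X"}

def parse_dump(text):
    """Turn the lspci hex dump into a 256-byte config-space image."""
    cfg = bytearray(256)
    for line in text.strip().splitlines():
        off_s, _, rest = line.partition(":")
        off = int(off_s, 16)
        for i, byte in enumerate(rest.split()):
            cfg[off + i] = int(byte, 16)
    return bytes(cfg)

def walk_caps(cfg):
    """Follow the capability chain starting at the pointer in 0x34."""
    caps, ptr = [], cfg[0x34]
    while ptr:
        caps.append((ptr, cfg[ptr]))   # (offset, capability id)
        ptr = cfg[ptr + 1]             # next-capability pointer
    return caps

for off, cap_id in walk_caps(parse_dump(DUMP)):
    print("cap 0x%02x at 0x%02x: %s" % (cap_id, off, CAP_NAMES.get(cap_id, "?")))
```

[The walk finds MSI-X at 0x70 and a PCI Express capability at 0xa0, so the VF does advertise itself as PCIe even though the extended (>0x100) configuration space is not reachable through pciback here.]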
> >
> > >
> > > Also to echo your second comment: I might still be a newbie in the
> > > QEMU field (I started working on this 4 months ago).  I thought the
> > > chipset limits what you can see/do in the VM, i.e. if you have
> > > 440FX emulation then you can't have any PCIe devices (fake or
> > > passthrough) in the same system.  Is that not true?
> > >
> > > Regards/Eniac
> > >
> > > -----Original Message-----
> > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > Sent: Friday, February 21, 2014 5:32 PM
> > > To: Zhang, Eniac
> > > Cc: xen-devel@lists.xen.org
> > > Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
> > >
> > >
> > > On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
> > > >
> > > > Hi Konrad,
> > > >
> > > > Thanks for your reply.
> > > >
> > > > Yes, I am aware of pciback.  Unfortunately it doesn't seem to
> > > > support PCIe passthrough. (I could be wrong here)
> > >
> > > I just did PCIe passthrough of a VF of an SR-IOV device. It
> > > certainly is PCIe.
> > >
> > > >
> > > > There are two reasons that I am interested in this.  For one, my
> > > > project calls for PCIe device passthrough, which can't be
> > > > accomplished with 440FX chipset emulation.  Secondly, I feel we
> > > > ought to move on with the technology.  440FX is ancient in
> > > > computer terms.  QEMU is good and all that, but if it refuses to
> > > > support PCIe natively then it's just a matter of time before it
> > > > becomes obsolete.  The trend is clear: PCIe is taking over the
> > > > world.
> > > >
> > >
> > > I am not sure what you are saying, but it does not matter whether
> > > QEMU emulates 440FX or Q35 for PCI passthrough.
> > >
> > > > Regards/Eniac
> > > >
> > > > -----Original Message-----
> > > > From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
> > > > Sent: Friday, February 21, 2014 2:50 PM
> > > > To: Zhang, Eniac
> > > > Cc: xen-devel@lists.xen.org
> > > > Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
> > > >
> > > > On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
> > > > > Hi all,
> > > > >
> > > > > I am playing with the Q35 chipset in QEMU (1.6.1).  It seems we
> > > > > can't enable the Q35 machine under Xen yet.  I made a few quick
> > > > > hacks which all fail miserably (Linux kernel oops and Windows
> > > > > BSOD).  I was wondering why this hasn't been done (q35 was
> > > > > introduced into qemu in 2009).
> > > > >
> > > > > Next question: VFIO works very well for me in standalone QEMU
> > > > > (with the Linux host handling the IOMMU), but is that supported
> > > > > under Xen?  I haven't tried anything there yet because my gut
> > > > > feeling is that it won't work: passing a VFIO device to QEMU can
> > > > > only be done on the QEMU command line, and Xen is not aware of
> > > > > this passed-through device, and thus not able to make IOMMU
> > > > > arrangements for it.  Am I on the right track here?
> > > >
> > > > Yes and no. VFIO won't work - but QEMU does do PCI passthrough
> > > > under Xen. It uses a different mechanism (and you need to bind
> > > > the device to pciback).
> > > >
> > > > >
> > > > > I am interested in implementing both of these features.  I'd
> > > > > like to connect with anyone who's already on this so we don't
> > > > > duplicate the effort.
> > > >
> > > > What do you need Q35 for?
> > > >
> > > > >
> > > > > Regards/Eniac
> > > >
> > > > > _______________________________________________
> > > > > Xen-devel mailing list
> > > > > Xen-devel@lists.xen.org
> > > > > http://lists.xen.org/xen-devel
> > > >
>
--1342847746-882013886-1393415737=:31489
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-882013886-1393415737=:31489--




From xen-devel-bounces@lists.xen.org Wed Feb 26 12:01:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdAk-0006Cf-45; Wed, 26 Feb 2014 12:01:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <souravsaha.work@gmail.com>) id 1WIdAj-0006CZ-1Z
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:01:09 +0000
Received: from [193.109.254.147:45284] by server-16.bemta-14.messagelabs.com
	id DF/26-21945-487DD035; Wed, 26 Feb 2014 12:01:08 +0000
X-Env-Sender: souravsaha.work@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393416066!6933172!1
X-Originating-IP: [209.85.220.196]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26844 invoked from network); 26 Feb 2014 12:01:07 -0000
Received: from mail-vc0-f196.google.com (HELO mail-vc0-f196.google.com)
	(209.85.220.196)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:01:07 -0000
Received: by mail-vc0-f196.google.com with SMTP id lf12so239708vcb.7
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 04:01:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=i0pn7G+Xxhau6/C2gb2NUHVVHOU4optpVI5ywCS/xw4=;
	b=BUF182ra8vTKHUfUSaEhxs0IqqJ/x3aQkl1DRM4JcwNHCcXLQ1sbrTHg9KKjBLFWve
	aM4vaKoTkoRw7/lHlIgjhgW8qVbAXelVJJEcWltiz5j3/ybjY+k9via8QKnicHt595Fp
	WJDa4y2QWyNI5bP9Lx2eSDM0VkFhWKiQovoRgnRjI45HbYwiIrv1iutG0wyQCVamsFhN
	FGSBvqvqsO92o4eebTCRRkU4csi2yLOYxNDyPjV2zVO74Le4UNDNMY3gPg9Yz7iY8uuX
	9jGxJ69CeAk25abwNKA7hCeebuzKr98kALte956+JeNyI47UnFnJn+8UPQruTydy/w/B
	On9w==
MIME-Version: 1.0
X-Received: by 10.58.100.211 with SMTP id fa19mr836607veb.14.1393416061185;
	Wed, 26 Feb 2014 04:01:01 -0800 (PST)
Received: by 10.58.90.133 with HTTP; Wed, 26 Feb 2014 04:01:01 -0800 (PST)
Date: Wed, 26 Feb 2014 17:31:01 +0530
Message-ID: <CADkf8UkxTF9DYt4i1VT+Y2Aprej8pujkAc_zcziTQ_vNuK8+oA@mail.gmail.com>
From: Sourav Saha <souravsaha.work@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen PV Guest VM is not coming up on Ubuntu 12.04: Bad
	archive.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1956365709182816799=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1956365709182816799==
Content-Type: multipart/alternative; boundary=089e01229cbc5a467d04f34df559

--089e01229cbc5a467d04f34df559
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi,

I have installed Xen on Ubuntu 12.04 as per the
https://help.ubuntu.com/community/Xen instructions and created the
volume group and the logical & physical volumes.

I changed the /etc/network/interfaces file to do the network
configuration, and wrote the VM config file.

But when the installer asked me to choose a mirror of the Ubuntu
archive, I selected United States and got the below error (bad archive).

screen-shot 1:
-----------------------
[!] Choose a mirror of the Ubuntu archive

  Please select an Ubuntu archive mirror. You should use a mirror in
  your country or region if you do not know which mirror has the best
  Internet connection to you.

  Usually, <your country code>.archive.ubuntu.com is a good choice.

  Ubuntu archive mirror:

                        us.archive.ubuntu.com

      <Go Back>

screen-shot 2:
-----------------------
[!!] Choose a mirror of the Ubuntu archive

                          Bad archive mirror

  An error has been detected while trying to use the specified Ubuntu
  archive mirror.

  Possible reasons for the error are: incorrect mirror specified;
  mirror is not available (possibly due to an unreliable network
  connection); mirror is broken (for example because an invalid Release
  file was found); mirror does not support the correct Ubuntu version.

  Additional details may be available in /var/log/syslog or on virtual
  console 4.

  Please check the specified mirror or try a different one.

      <Go Back>                                          <Continue>

I tried other mirrors as well but am blocked in the same state. I am
not able to find the root cause or how to solve this; I hope my network
config is fine.

I want to participate in the Xen project but am badly blocked at this
point :(

Thanks,
Sourav Saha
gOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUmDxicj4NCjxicj5zY3JlZW4tc2hvdCAy
Ojxicj4tLS0tLS0tLS0tLS0tLS0tLS0tLS0tPGJyPuKUjOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKU
gOKUgOKUgOKUgOKUgOKUpCBbISFdIENob29zZSBhIG1pcnJvciBvZiB0aGUgVWJ1bnR1IGFyY2hp
dmUg4pSc4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSQPGJyPsKgwqAg
4pSCwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg4pSCPGJyPsKgwqAg4pSCwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgQmFkIGFyY2hpdmUgbWlycm9y
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCDilII8
YnI+DQrCoMKgIOKUgiBBbiBlcnJvciBoYXMgYmVlbiBkZXRlY3RlZCB3aGlsZSB0cnlpbmcgdG8g
dXNlIHRoZSBzcGVjaWZpZWQgVWJ1bnR1wqDCoCDilII8YnI+wqDCoCDilIIgYXJjaGl2ZSBtaXJy
b3IuwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIOKU
gjxicj7CoMKgIOKUgsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIOKUgjxicj4NCsKgwqAg4pSC
IFBvc3NpYmxlIHJlYXNvbnMgZm9yIHRoZSBlcnJvciBhcmU6IGluY29ycmVjdCBtaXJyb3Igc3Bl
Y2lmaWVkO8KgwqDCoMKgwqDCoCDilII8YnI+wqDCoCDilIIgbWlycm9yIGlzIG5vdCBhdmFpbGFi
bGUgKHBvc3NpYmx5IGR1ZSB0byBhbiB1bnJlbGlhYmxlIG5ldHdvcmvCoMKgwqDCoMKgwqDCoCDi
lII8YnI+wqDCoCDilIIgY29ubmVjdGlvbik7IG1pcnJvciBpcyBicm9rZW4gKGZvciBleGFtcGxl
IGJlY2F1c2UgYW4gaW52YWxpZCBSZWxlYXNlIOKUgjxicj4NCsKgwqAg4pSCIGZpbGUgd2FzIGZv
dW5kKTsgbWlycm9yIGRvZXMgbm90IHN1cHBvcnQgdGhlIGNvcnJlY3QgVWJ1bnR1IHZlcnNpb24u
wqAg4pSCPGJyPsKgwqAg4pSCwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg4pSCPGJyPsKgwqAg
4pSCIEFkZGl0aW9uYWwgZGV0YWlscyBtYXkgYmUgYXZhaWxhYmxlIGluIC92YXIvbG9nL3N5c2xv
ZyBvciBvbiB2aXJ0dWFswqAg4pSCPGJyPg0KwqDCoCDilIIgY29uc29sZSA0LsKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg4pSCPGJy
PsKgwqAg4pSCwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg4pSCPGJyPsKgwqAg4pSCIFBsZWFz
ZSBjaGVjayB0aGUgc3BlY2lmaWVkIG1pcnJvciBvciB0cnkgYSBkaWZmZXJlbnQgb25lLsKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoCDilII8YnI+DQrCoMKgIOKUgsKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgIOKUgjxicj7CoMKgIOKUgsKgwqDCoMKgICZsdDtHbyBCYWNrJmd0O8KgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgJmx0O0NvbnRpbnVlJmd0O8KgwqDCoMKgIOKUgjxicj7CoMKgIOKUgsKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIOKUgjxicj4NCsKgwqAg4pSU4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSYPGJyPjxicj48YnI+PC9kaXY+PGRpdj48c3Bh
biBzdHlsZT0iY29sb3I6cmdiKDExLDgzLDE0OCkiPkkgdHJpZWQgZm9yIG90aGVyIG1pcnJvcnMg
YWxzbyBidXQgYmxvY2tlZCBpbiBzYW1lIHN0YXRlLiA8c3BhbiBzdHlsZT0iY29sb3I6cmdiKDI1
NSwwLDApIj5Ob3QgYWJsZSB0byBmaW5kIHRoZSByb290IGNhdXNlIG9mIHRoaXMgYW5kIGhvdyB0
byBzb2x2ZSB0aGlzPC9zcGFuPiwgaG9wZSBteSBuZXR3b3JrIGNvbmZpZyBpcyBmaW5lLjxicj4N
Cjxicj48L3NwYW4+PC9kaXY+PGRpdj48c3BhbiBzdHlsZT0iY29sb3I6cmdiKDExLDgzLDE0OCki
PldhbnQgdG8gcGFydGljaXBhdGUgaW4gWGVuIHByb2plY3QgYnV0IGJsb2NrZWQgaW4gdGhpcyBz
dGF0ZSBiYWRseSA6KDxicj48L3NwYW4+PC9kaXY+PGRpdj48c3BhbiBzdHlsZT0iY29sb3I6cmdi
KDExLDgzLDE0OCkiPjxicj48L3NwYW4+PC9kaXY+PGRpdj48c3BhbiBzdHlsZT0iY29sb3I6cmdi
KDExLDgzLDE0OCkiPlRoYW5rcyw8YnI+DQo8L3NwYW4+PC9kaXY+PGRpdj48c3BhbiBzdHlsZT0i
Y29sb3I6cmdiKDExLDgzLDE0OCkiPlNvdXJhdiBTYWhhPC9zcGFuPjxicj48L2Rpdj48L2Rpdj4N
Cg==
--089e01229cbc5a467d04f34df559--


--===============1956365709182816799==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1956365709182816799==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 12:01:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdAk-0006Cf-45; Wed, 26 Feb 2014 12:01:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <souravsaha.work@gmail.com>) id 1WIdAj-0006CZ-1Z
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:01:09 +0000
Received: from [193.109.254.147:45284] by server-16.bemta-14.messagelabs.com
	id DF/26-21945-487DD035; Wed, 26 Feb 2014 12:01:08 +0000
X-Env-Sender: souravsaha.work@gmail.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393416066!6933172!1
X-Originating-IP: [209.85.220.196]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26844 invoked from network); 26 Feb 2014 12:01:07 -0000
Received: from mail-vc0-f196.google.com (HELO mail-vc0-f196.google.com)
	(209.85.220.196)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:01:07 -0000
Received: by mail-vc0-f196.google.com with SMTP id lf12so239708vcb.7
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 04:01:06 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=i0pn7G+Xxhau6/C2gb2NUHVVHOU4optpVI5ywCS/xw4=;
	b=BUF182ra8vTKHUfUSaEhxs0IqqJ/x3aQkl1DRM4JcwNHCcXLQ1sbrTHg9KKjBLFWve
	aM4vaKoTkoRw7/lHlIgjhgW8qVbAXelVJJEcWltiz5j3/ybjY+k9via8QKnicHt595Fp
	WJDa4y2QWyNI5bP9Lx2eSDM0VkFhWKiQovoRgnRjI45HbYwiIrv1iutG0wyQCVamsFhN
	FGSBvqvqsO92o4eebTCRRkU4csi2yLOYxNDyPjV2zVO74Le4UNDNMY3gPg9Yz7iY8uuX
	9jGxJ69CeAk25abwNKA7hCeebuzKr98kALte956+JeNyI47UnFnJn+8UPQruTydy/w/B
	On9w==
MIME-Version: 1.0
X-Received: by 10.58.100.211 with SMTP id fa19mr836607veb.14.1393416061185;
	Wed, 26 Feb 2014 04:01:01 -0800 (PST)
Received: by 10.58.90.133 with HTTP; Wed, 26 Feb 2014 04:01:01 -0800 (PST)
Date: Wed, 26 Feb 2014 17:31:01 +0530
Message-ID: <CADkf8UkxTF9DYt4i1VT+Y2Aprej8pujkAc_zcziTQ_vNuK8+oA@mail.gmail.com>
From: Sourav Saha <souravsaha.work@gmail.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] Xen PV Guest VM is not coming up on Ubuntu 12.04: Bad
	archive.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1956365709182816799=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============1956365709182816799==
Content-Type: multipart/alternative; boundary=089e01229cbc5a467d04f34df559

--089e01229cbc5a467d04f34df559
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: base64

SGksDQoNCkkgaGF2ZSBpbnN0YWxsZWQgWGVuIG9uIFVidW50dSAxMi4wNCBhcyBwZXINCmh0dHBz
Oi8vaGVscC51YnVudHUuY29tL2NvbW11bml0eS9YZW4gaW5zdHJ1Y3Rpb25zIGFuZCBjcmVhdGVk
IHRoZQ0Kdm9sdW1lLWdyb3VwLCBsb2dpY2FsICYgcGh5c2ljYWwgdm9sdW1lLg0KDQpDaGFuZ2Vk
IHRoZSAvZXRjL25ldHdvcmsvaW50ZXJmYWNlcyBmaWxlIHRvIGRvIHRoZSBuZXR3b3JrIGNvbmZp
Z3VyYXRpb24NCmFuZCBWTSBjb25maWcgZmlsZS4NCg0KQnV0IHdoZW4gaXQgYXNrZWQgbWUgdG8g
Y2hvb3NlIGEgbWlycm9yIG9mIHRoZSB1YnVudHUgYXJjaGl2ZSBkdXJpbmcNCmluc3RhbGxhdGlv
biwgSSBzZWxlY3RlZCBVbml0ZWQgU3RhdGVzIGFuZCBnb3QgdGhlIGJlbG93IGVycm9yKGJhZCBh
cmNoaXZlKS4NCg0Kc2NyZWVuLXNob3QgMToNCi0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tDQrilIzi
lIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilKQgWyFdIENob29zZSBh
IG1pcnJvciBvZiB0aGUgVWJ1bnR1IGFyY2hpdmUg4pSc4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSQDQogIOKUgg0K4pSCDQogIOKUgiBQbGVhc2Ugc2VsZWN0IGFu
IFVidW50dSBhcmNoaXZlIG1pcnJvci4gWW91IHNob3VsZCB1c2UgYSBtaXJyb3IgaW4NCuKUgg0K
ICDilIIgeW91ciBjb3VudHJ5IG9yIHJlZ2lvbiBpZiB5b3UgZG8gbm90IGtub3cgd2hpY2ggbWly
cm9yIGhhcyB0aGUgYmVzdA0K4pSCDQogIOKUgiBJbnRlcm5ldCBjb25uZWN0aW9uIHRvIHlvdS4N
CuKUgg0KICDilIINCuKUgg0KICDilIIgVXN1YWxseSwgPHlvdXIgY291bnRyeSBjb2RlPi5hcmNo
aXZlLnVidW50dS5jb20gaXMgYSBnb29kIGNob2ljZS4NCuKUgg0KICDilIINCuKUgg0KICDilIIg
VWJ1bnR1IGFyY2hpdmUgbWlycm9yOg0K4pSCDQogIOKUgg0K4pSCDQogIOKUgiAgICAgICAgICAg
ICAgICAgICAgICAgIHVzLmFyY2hpdmUudWJ1bnR1LmNvbQ0K4pSCDQogIOKUgg0K4pSCDQogIOKU
giAgICAgPEdvIEJhY2s+DQrilIINCiAg4pSCDQrilIINCg0K4pSU4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA
4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSA4pSYDQoNCnNjcmVlbi1zaG90IDI6DQotLS0t
LS0tLS0tLS0tLS0tLS0tLS0tDQrilIzilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDi
lIDilKQgWyEhXSBDaG9vc2UgYSBtaXJyb3Igb2YgdGhlIFVidW50dSBhcmNoaXZlIOKUnOKUgOKU
gOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUgOKUkA0KICAg4pSCICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICDilIINCiAgIOKUgiAgICAgICAgICAgICAgICAgICAgICAgICAgQmFkIGFyY2hpdmUgbWlycm9y
ICAgICAgICAgICAgICAgICAgICAgICAgICAg4pSCDQogICDilIIgQW4gZXJyb3IgaGFzIGJlZW4g
ZGV0ZWN0ZWQgd2hpbGUgdHJ5aW5nIHRvIHVzZSB0aGUgc3BlY2lmaWVkIFVidW50dSAgIOKUgg0K
ICAg4pSCIGFyY2hpdmUgbWlycm9yLiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICDilIINCiAgIOKUgiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg4pSCDQogICDilIIg
UG9zc2libGUgcmVhc29ucyBmb3IgdGhlIGVycm9yIGFyZTogaW5jb3JyZWN0IG1pcnJvciBzcGVj
aWZpZWQ7ICAgICAgIOKUgg0KICAg4pSCIG1pcnJvciBpcyBub3QgYXZhaWxhYmxlIChwb3NzaWJs
eSBkdWUgdG8gYW4gdW5yZWxpYWJsZSBuZXR3b3JrICAgICAgICDilIINCiAgIOKUgiBjb25uZWN0
aW9uKTsgbWlycm9yIGlzIGJyb2tlbiAoZm9yIGV4YW1wbGUgYmVjYXVzZSBhbiBpbnZhbGlkIFJl
bGVhc2Ug4pSCDQogICDilIIgZmlsZSB3YXMgZm91bmQpOyBtaXJyb3IgZG9lcyBub3Qgc3VwcG9y
dCB0aGUgY29ycmVjdCBVYnVudHUgdmVyc2lvbi4gIOKUgg0KICAg4pSCICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICDi
lIINCiAgIOKUgiBBZGRpdGlvbmFsIGRldGFpbHMgbWF5IGJlIGF2YWlsYWJsZSBpbiAvdmFyL2xv
Zy9zeXNsb2cgb3Igb24gdmlydHVhbCAg4pSCDQogICDilIIgY29uc29sZSA0LiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIOKUgg0KICAg
4pSCICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICDilIINCiAgIOKUgiBQbGVhc2UgY2hlY2sgdGhlIHNwZWNpZmllZCBt
aXJyb3Igb3IgdHJ5IGEgZGlmZmVyZW50IG9uZS4gICAgICAgICAgICAg4pSCDQogICDilIIgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIOKUgg0KICAg4pSCICAgICA8R28gQmFjaz4gICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICA8Q29udGludWU+ICAgICDilIINCiAgIOKUgiAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAg4pSCDQogICDilJTilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDi
lIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDi
lIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDi
lIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDilIDi
lJgNCg0KDQpJIHRyaWVkIGZvciBvdGhlciBtaXJyb3JzIGFsc28gYnV0IGJsb2NrZWQgaW4gc2Ft
ZSBzdGF0ZS4gTm90IGFibGUgdG8gZmluZA0KdGhlIHJvb3QgY2F1c2Ugb2YgdGhpcyBhbmQgaG93
IHRvIHNvbHZlIHRoaXMsIGhvcGUgbXkgbmV0d29yayBjb25maWcgaXMNCmZpbmUuDQoNCldhbnQg
dG8gcGFydGljaXBhdGUgaW4gWGVuIHByb2plY3QgYnV0IGJsb2NrZWQgaW4gdGhpcyBzdGF0ZSBi
YWRseSA6KA0KDQpUaGFua3MsDQpTb3VyYXYgU2FoYQ0K
--089e01229cbc5a467d04f34df559--


--===============1956365709182816799==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1956365709182816799==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 12:11:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdKF-0006SM-Fp; Wed, 26 Feb 2014 12:10:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIdKE-0006SH-3J
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:10:58 +0000
Received: from [85.158.139.211:22872] by server-17.bemta-5.messagelabs.com id
	D9/DC-31975-1D9DD035; Wed, 26 Feb 2014 12:10:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393416655!6389810!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15343 invoked from network); 26 Feb 2014 12:10:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:10:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104247269"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 12:10:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 07:10:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIdK9-0007qW-57;
	Wed, 26 Feb 2014 12:10:53 +0000
Message-ID: <530DD9CC.3020008@citrix.com>
Date: Wed, 26 Feb 2014 12:10:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mm: ensure useful progress in
	decrease_reservation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/14 11:47, Wei Liu wrote:
> During my fun time playing with balloon driver I found that hypervisor's
> preemption check kept decrease_reservation from doing any useful work
> for 32 bit guests, resulting in hanging the guests.
>
> As Andrew suggested, we can force the check to fail for the first
> iteration to ensure progress. We did this in d3a55d7d9 "x86/mm: Ensure
> useful progress in alloc_l2_table()" already.
>
> After this change I cannot see the hang caused by continuation logic
> anymore.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Keir Fraser <keir@xen.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

As discussed on IRC, this issue was reliably seen with 32bit HVM guests
only.  The suspicion is that the compat layer is sufficiently long that
there is always something pending by the time decrease_reservation()
gets called.

This highlights that the fix for long-running hypercalls (starting with
XSA-45) is almost as bad as the long-running hypercalls themselves.

In XenServer, we have noticed that toolstack operations for
creating/migrating/destroying domains have started failing with 22
second softlockup timeouts, meaning that individual batched hypercalls
(and their continuations) now exceed 22 seconds of wallclock time.

In this case, 32bit HVM guests are reliably being locked up by Xen:
for the duration of the vcpu being scheduled, Xen is consistently
bouncing in and out of non-root mode, and running the compat layer over
the hypercall parameters.

Even with the fix in place, 32bit HVM guests will decrease their
reservation by only a single page for each bounce in and out of non-root
mode and the compat layer, which is a staggering overhead and
substantially worse than a bit of time-skew.

The only solution I can see is for there to be an absolute minimum
amount of work Xen will do before even considering a continuation, and
for that minimum to be rather higher than it is at the moment.

~Andrew

> ---
>  xen/common/memory.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 5a0efd5..9d0d32e 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -268,7 +268,7 @@ static void decrease_reservation(struct memop_args *a)
>  
>      for ( i = a->nr_done; i < a->nr_extents; i++ )
>      {
> -        if ( hypercall_preempt_check() )
> +        if ( hypercall_preempt_check() && i != a->nr_done )
>          {
>              a->preempted = 1;
>              goto out;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 12:11:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdKF-0006SM-Fp; Wed, 26 Feb 2014 12:10:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIdKE-0006SH-3J
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:10:58 +0000
Received: from [85.158.139.211:22872] by server-17.bemta-5.messagelabs.com id
	D9/DC-31975-1D9DD035; Wed, 26 Feb 2014 12:10:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393416655!6389810!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15343 invoked from network); 26 Feb 2014 12:10:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:10:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104247269"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 12:10:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 07:10:53 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIdK9-0007qW-57;
	Wed, 26 Feb 2014 12:10:53 +0000
Message-ID: <530DD9CC.3020008@citrix.com>
Date: Wed, 26 Feb 2014 12:10:52 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
In-Reply-To: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mm: ensure useful progress in
	decrease_reservation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/14 11:47, Wei Liu wrote:
> During my fun time playing with the balloon driver I found that the
> hypervisor's preemption check kept decrease_reservation from doing any
> useful work for 32 bit guests, resulting in hanging the guests.
>
> As Andrew suggested, we can force the check to fail for the first
> iteration to ensure progress. We did this in d3a55d7d9 "x86/mm: Ensure
> useful progress in alloc_l2_table()" already.
>
> After this change I cannot see the hang caused by continuation logic
> anymore.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Keir Fraser <keir@xen.org>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

As discussed on IRC, this issue was reliably seen with 32bit HVM guests
only.  The suspicion is that the compat layer is sufficiently long that
there is always something pending by the time decrease_reservation()
gets called.

This highlights that the fix for long-running hypercalls (starting with
XSA-45) is almost as bad as the long-running hypercalls themselves.

In XenServer, we have noticed that toolstack operations for
creating/migrating/destroying domains have started failing with 22 second
softlockup timeouts, meaning that individual batched hypercalls (and
their continuations) now exceed 22 seconds of wallclock time.

In this case, 32bit HVM guests are reliably being locked up by Xen,
meaning that for the duration of the vcpu being scheduled, Xen is
consistently bouncing in and out of non-root mode and running the
compat layer over the hypercall parameters.

Even with the fix in place, 32bit HVM guests will decrease their
reservation by only a single page for each bounce in and out of non-root
mode and the compat layer, which is a staggering overhead and
substantially worse than a bit of time-skew.

The only solution I can see is for there to be an absolute minimum
amount of work Xen will do before even considering a continuation, and
for that minimum to be rather higher than it is at the moment.

~Andrew

> ---
>  xen/common/memory.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 5a0efd5..9d0d32e 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -268,7 +268,7 @@ static void decrease_reservation(struct memop_args *a)
>  
>      for ( i = a->nr_done; i < a->nr_extents; i++ )
>      {
> -        if ( hypercall_preempt_check() )
> +        if ( hypercall_preempt_check() && i != a->nr_done )
>          {
>              a->preempted = 1;
>              goto out;


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 12:13:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:13:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdMJ-0006X4-0h; Wed, 26 Feb 2014 12:13:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIdMH-0006Ws-H4
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:13:05 +0000
Received: from [85.158.137.68:20588] by server-2.bemta-3.messagelabs.com id
	7F/33-06531-05ADD035; Wed, 26 Feb 2014 12:13:04 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393416781!4343234!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28683 invoked from network); 26 Feb 2014 12:13:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:13:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104247654"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 12:13:01 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 07:13:00 -0500
Received: from drall.uk.xensource.com ([10.80.16.71]
	helo=drall.uk.xensource.com.)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <ian.campbell@citrix.com>)	id 1WIdMC-000398-NY;
	Wed, 26 Feb 2014 12:13:00 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 26 Feb 2014 12:13:00 +0000
Message-ID: <1393416780-10912-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com, julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH] libxl: arm: do not create /chosen/bootargs in
	DTB if no cmdline is specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Otherwise we dereference a NULL pointer.

I saw this while experimenting with libvirt on Xen on ARM; xl already checks
that the command line is non-NULL and provides "" as a default.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Cc: george.dunlap@citrix.com>
---
This is a pretty obvious fix and would be nice to have if we are taking any
more fixes for other stuff, but otherwise I think we can leave it to 4.4.1
quite happily.
---
 tools/libxl/libxl_arm.c |    6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 0a1c8c5..0cfd0cf 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -164,8 +164,10 @@ static int make_chosen_node(libxl__gc *gc, void *fdt,
     res = fdt_begin_node(fdt, "chosen");
     if (res) return res;
 
-    res = fdt_property_string(fdt, "bootargs", info->u.pv.cmdline);
-    if (res) return res;
+    if (info->u.pv.cmdline) {
+        res = fdt_property_string(fdt, "bootargs", info->u.pv.cmdline);
+        if (res) return res;
+    }
 
     res = fdt_end_node(fdt);
     if (res) return res;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 12:14:37 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdNj-0006dA-Gg; Wed, 26 Feb 2014 12:14:35 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIdNi-0006cv-3d
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:14:34 +0000
Received: from [85.158.139.211:43381] by server-17.bemta-5.messagelabs.com id
	88/45-31975-9AADD035; Wed, 26 Feb 2014 12:14:33 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393416871!6347973!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28266 invoked from network); 26 Feb 2014 12:14:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:14:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="105884763"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 12:14:03 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 07:14:03 -0500
Message-ID: <1393416841.18730.38.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Wed, 26 Feb 2014 12:14:01 +0000
In-Reply-To: <1393416780-10912-1-git-send-email-ian.campbell@citrix.com>
References: <1393416780-10912-1-git-send-email-ian.campbell@citrix.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: George Dunlap <george.dunlap@eu.citrix.com>, ian.jackson@eu.citrix.com,
	julien.grall@linaro.org, tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] libxl: arm: do not create /chosen/bootargs
 in DTB if no cmdline is specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 12:13 +0000, Ian Campbell wrote:
> Otherwise we dereference a NULL pointer.
> 
> I saw this while experimenting with libvirt on Xen on ARM; xl already checks
> that the command line is non-NULL and provides "" as a default.
> 
> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
> Cc: george.dunlap@citrix.com>

Typo (extra ">") so George's CC got missed out...

> ---
> This is a pretty obvious fix and would be nice to have if we are taking any
> more fixes for other stuff, but otherwise I think we can leave it to 4.4.1
> quite happily.
> ---
>  tools/libxl/libxl_arm.c |    6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 0a1c8c5..0cfd0cf 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -164,8 +164,10 @@ static int make_chosen_node(libxl__gc *gc, void *fdt,
>      res = fdt_begin_node(fdt, "chosen");
>      if (res) return res;
>  
> -    res = fdt_property_string(fdt, "bootargs", info->u.pv.cmdline);
> -    if (res) return res;
> +    if (info->u.pv.cmdline) {
> +        res = fdt_property_string(fdt, "bootargs", info->u.pv.cmdline);
> +        if (res) return res;
> +    }
>  
>      res = fdt_end_node(fdt);
>      if (res) return res;




From xen-devel-bounces@lists.xen.org Wed Feb 26 12:34:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdgu-0006v2-F8; Wed, 26 Feb 2014 12:34:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIdgt-0006ux-Go
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:34:23 +0000
Received: from [85.158.137.68:55139] by server-15.bemta-3.messagelabs.com id
	0A/40-19263-E4FDD035; Wed, 26 Feb 2014 12:34:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393418059!953834!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16302 invoked from network); 26 Feb 2014 12:34:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:34:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104251920"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 12:34:19 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 07:34:18 -0500
Received: from marilith-n13-p0.uk.xensource.com ([10.80.229.115]
	helo=marilith-n13.uk.xensource.com.)	by norwich.cam.xci-test.com with
	esmtp (Exim 4.72)	(envelope-from <ian.campbell@citrix.com>)	id
	1WIdgo-0003FB-Q4; Wed, 26 Feb 2014 12:34:18 +0000
From: Ian Campbell <ian.campbell@citrix.com>
To: <libvir-list@redhat.com>, <xen-devel@lists.xen.org>, Jim Fehlig
	<jfehlig@suse.com>
Date: Wed, 26 Feb 2014 12:34:17 +0000
Message-ID: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@linaro.org, tim@xen.org,
	Ian Campbell <ian.campbell@citrix.com>, stefano.stabellini@eu.citrix.com
Subject: [Xen-devel] [PATCH LIBVIRT] libxl: Recognise ARM architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Only tested on v7 but the v8 equivalent seems pretty obvious.

XEN_CAP_REGEX already accepts more than it should (e.g. x86_64p or x86_32be)
but I have stuck with the existing pattern.

With this I can create a guest from:
  <domain type='xen'>
    <name>libvirt-test</name>
    <uuid>6343998e-9eda-11e3-98f6-77252a7d02f3</uuid>
    <memory>393216</memory>
    <currentMemory>393216</currentMemory>
    <vcpu>1</vcpu>
    <os>
      <type arch='armv7l' machine='xenpv'>linux</type>
      <kernel>/boot/vmlinuz-arm-native</kernel>
      <cmdline>console=hvc0 earlyprintk debug root=/dev/xvda1</cmdline>
    </os>
    <clock offset='utc'/>
    <on_poweroff>destroy</on_poweroff>
    <on_reboot>restart</on_reboot>
    <on_crash>destroy</on_crash>
    <devices>
      <disk type='block' device='disk'>
        <source dev='/dev/marilith-n0/debian-disk'/>
        <target dev='xvda1'/>
      </disk>
      <interface type='bridge'>
        <mac address='8e:a7:8e:3c:f4:f6'/>
        <source bridge='xenbr0'/>
      </interface>
    </devices>
  </domain>

I created it using virsh create, and I can destroy it too.

Currently virsh console fails with:
  Connected to domain libvirt-test
  Escape character is ^]
  error: internal error: cannot find character device <null>

I haven't investigated yet.

Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
---
 src/libxl/libxl_conf.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index 4cefadf..7ed692d 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -61,7 +61,7 @@ struct guest_arch {
     int ia64_be;
 };
 
-#define XEN_CAP_REGEX "(xen|hvm)-[[:digit:]]+\\.[[:digit:]]+-(x86_32|x86_64|ia64|powerpc64)(p|be)?"
+#define XEN_CAP_REGEX "(xen|hvm)-[[:digit:]]+\\.[[:digit:]]+-(aarch64|armv7l|x86_32|x86_64|ia64|powerpc64)(p|be)?"
 
 
 static virClassPtr libxlDriverConfigClass;
@@ -319,8 +319,11 @@ libxlCapsInitGuests(libxl_ctx *ctx, virCapsPtr caps)
             }
             else if (STRPREFIX(&token[subs[2].rm_so], "powerpc64")) {
                 arch = VIR_ARCH_PPC64;
+            } else if (STRPREFIX(&token[subs[2].rm_so], "armv7l")) {
+                arch = VIR_ARCH_ARMV7L;
+            } else if (STRPREFIX(&token[subs[2].rm_so], "aarch64")) {
+                arch = VIR_ARCH_AARCH64;
             } else {
-                /* XXX arm ? */
                 continue;
             }
 
-- 
1.7.10.4



From xen-devel-bounces@lists.xen.org Wed Feb 26 12:37:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdjh-00072v-8d; Wed, 26 Feb 2014 12:37:17 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <berrange@redhat.com>) id 1WIdjf-00072m-8Z
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:37:15 +0000
Received: from [85.158.137.68:22448] by server-12.bemta-3.messagelabs.com id
	3F/08-01674-AFFDD035; Wed, 26 Feb 2014 12:37:14 +0000
X-Env-Sender: berrange@redhat.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393418232!4344323!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15596 invoked from network); 26 Feb 2014 12:37:13 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-31.messagelabs.com with SMTP;
	26 Feb 2014 12:37:13 -0000
Received: from int-mx12.intmail.prod.int.phx2.redhat.com
	(int-mx12.intmail.prod.int.phx2.redhat.com [10.5.11.25])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1QCb7fY004331
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 26 Feb 2014 07:37:07 -0500
Received: from redhat.com (vpn1-7-218.ams2.redhat.com [10.36.7.218])
	by int-mx12.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1QCb3a3008817
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO);
	Wed, 26 Feb 2014 07:37:05 -0500
Date: Wed, 26 Feb 2014 12:37:03 +0000
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140226123703.GC29185@redhat.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.25
Cc: stefano.stabellini@eu.citrix.com, libvir-list@redhat.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
	architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: "Daniel P. Berrange" <berrange@redhat.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
> Only tested on v7 but the v8 equivalent seems pretty obvious.
> 
> XEN_CAP_REGEX already accepts more than it should (e.g. x86_64p or x86_32be)
> but I have stuck with the existing pattern.
> 
> With this I can create a guest from:
>   <domain type='xen'>
>     <name>libvirt-test</name>
>     <uuid>6343998e-9eda-11e3-98f6-77252a7d02f3</uuid>
>     <memory>393216</memory>
>     <currentMemory>393216</currentMemory>
>     <vcpu>1</vcpu>
>     <os>
>       <type arch='armv7l' machine='xenpv'>linux</type>
>       <kernel>/boot/vmlinuz-arm-native</kernel>
>       <cmdline>console=hvc0 earlyprintk debug root=/dev/xvda1</cmdline>
>     </os>
>     <clock offset='utc'/>
>     <on_poweroff>destroy</on_poweroff>
>     <on_reboot>restart</on_reboot>
>     <on_crash>destroy</on_crash>
>     <devices>
>       <disk type='block' device='disk'>
>         <source dev='/dev/marilith-n0/debian-disk'/>
>         <target dev='xvda1'/>
>       </disk>
>       <interface type='bridge'>
>         <mac address='8e:a7:8e:3c:f4:f6'/>
>         <source bridge='xenbr0'/>
>       </interface>
>     </devices>
>   </domain>
> 
> Using virsh create and I can destroy it too.
> 
> Currently virsh console fails with:
>   Connected to domain libvirt-test
>   Escape character is ^]
>   error: internal error: cannot find character device <null>
> 
> I haven't investigated yet.

That'll be because no <console> or <serial> device is
listed in your config above I expect. Also looks like
bogus error handling in the console API, not checking
for <null>.
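For reference, the usual shape of such a device in libvirt domain XML is something like the following (a sketch of the general domain XML format only; whether the libxl driver of this era wires it up correctly is exactly what is in question here):

```xml
<devices>
  <console type='pty'>
    <target type='xen' port='0'/>
  </console>
</devices>
```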


ACK

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 12:42:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 12:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIdow-0007EM-1f; Wed, 26 Feb 2014 12:42:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIdot-0007EG-T5
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 12:42:40 +0000
Received: from [85.158.139.211:17383] by server-16.bemta-5.messagelabs.com id
	69/DF-05060-F31ED035; Wed, 26 Feb 2014 12:42:39 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393418556!2440063!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16143 invoked from network); 26 Feb 2014 12:42:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 12:42:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,547,1389744000"; d="scan'208";a="104253500"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 12:42:36 +0000
Received: from [10.80.2.80] (10.80.2.80) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 07:42:35 -0500
Message-ID: <1393418554.18730.41.camel@kazak.uk.xensource.com>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: "Daniel P. Berrange" <berrange@redhat.com>
Date: Wed, 26 Feb 2014 12:42:34 +0000
In-Reply-To: <20140226123703.GC29185@redhat.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-3 
MIME-Version: 1.0
X-Originating-IP: [10.80.2.80]
X-DLP: MIA1
Cc: stefano.stabellini@eu.citrix.com, libvir-list@redhat.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
	architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 12:37 +0000, Daniel P. Berrange wrote:
> On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
> > Currently virsh console fails with:
> >   Connected to domain libvirt-test
> >   Escape character is ^]
> >   error: internal error: cannot find character device <null>
> > 
> > I haven't investigated yet.
> 
> That'll be because no <console> or <serial> device is
> listed in your config above I expect. Also looks like
> bogus error handling in the console API, not checking
> for <null>.

Thanks. I've just tried (inside <devices>...</...>):
         <console tty='/dev/pts/5'/>
and
     <console type='pty'>
      <target port='0'/>
    </console>
which I gleaned from http://libvirt.org/drvxen.html, but neither seems to
do the trick (and the first one's use of an explicit pts looks odd to
me...).

> ACK

Thanks.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 13:25:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 13:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIeUC-0007VA-AH; Wed, 26 Feb 2014 13:25:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WIeUA-0007V5-7l
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 13:25:18 +0000
Received: from [85.158.137.68:17039] by server-11.bemta-3.messagelabs.com id
	4A/10-04255-D3BED035; Wed, 26 Feb 2014 13:25:17 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393421115!4356050!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9151 invoked from network); 26 Feb 2014 13:25:15 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 13:25:15 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so529549eek.27
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 05:25:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type;
	bh=HZODnFL1+qIBwLkNF4FcAUWXSgDCTdHQeUH18P29hZ4=;
	b=MVrOyOkm5U7sg9PkRwZ+HIBNkeqIgmm/4it1Mi09IV4on9QrTcwmpLTVlOxPkZ0Sfa
	XoBbCSz3/ZGESWzyorew7CzmRVmNclXmBBdA9OFs/4uUZpYHCpIeAwOn9tl8qlOz/An+
	CN1xEh4DQ07keORl5xBn5EWiCucDDWjO9pKOyg6bXZvRo2VvzO1icFTkq+kctBF9VrAd
	pFzUPc/0yjCW1fShJBwXIdZWVDBF6xA75BSnjSd8ivk20OoILUzwKU20c3olWTJSFBnQ
	TktUO6OuOqEzwzvn9UlslQ5G3nOFjIOcFqJ6/ZMpqWsKD6arywRnqcjDI7DW3lvxVby4
	FDXA==
X-Gm-Message-State: ALoCoQl+7Qs3HuOZOjg2L6sZAUwpl9M1EBbPgoMX6+lcxFeuGuTZj0SfczbAcqpQSSsTgXbe2qV2
X-Received: by 10.14.179.129 with SMTP id h1mr6965491eem.26.1393421114799;
	Wed, 26 Feb 2014 05:25:14 -0800 (PST)
Received: from [192.168.1.15] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id f45sm3722710eeg.5.2014.02.26.05.25.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 05:25:13 -0800 (PST)
Message-ID: <530DEB36.5010305@m2r.biz>
Date: Wed, 26 Feb 2014 14:25:10 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>	<20140224164004.GO816@phenom.dumpdata.com>	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>	<20140224194301.GA8089@phenom.dumpdata.com>
	<alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com>
Cc: anthony.perard@citrix.com, "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8428466112131771469=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============8428466112131771469==
Content-Type: multipart/alternative;
 boundary="------------060507060107090802030404"

This is a multi-part message in MIME format.
--------------060507060107090802030404
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 26/02/2014 12:58, Stefano Stabellini wrote:
> A little while ago Anthony made the Q35 emulation in QEMU work with Xen:
>
> http://marc.info/?l=qemu-devel&m=137813513713296
>
> He might be able to give you some pointers on where to start.

AFAIK that was only a patch which makes the RAM initialisation of 
qemu 1.6 work with Xen again.

I commented out an assert in hvmloader which prevents domUs from 
starting with the q35 chipset, and then added some qemu parameters 
manually, for example:
device_model_args_hvm=["-machine","q35,accel=xen","-device","i82801b11-bridge","-drive","file=/mnt/vm/disks/W7-q35.disk1.xm,if=none,id=drive-sata-disk0,format=raw,cache=writeback","-device","ide-hd,drive=drive-sata-disk0,id=sata0"]

I can't find my previous mail with the details of my q35 tests right 
now, but I believe the main things needed to start testing were the 
ones mentioned above:
-machine q35,accel=xen to select the q35 chipset
-device i82801b11-bridge to make the old PCI bus available, since the 
default PCI-e bus probably needs hvmloader changes to work.
I also found that with the current qemu parameters the disks were not 
visible on boot; they were visible when using the new-style qemu 
parameters (-device).
I tried to write a patch using the new qemu parameters that was also 
compatible with the old chipset and its IDE controller, but I found 
that automatic bus selection is buggy. Unfortunately I did not have 
more time to continue, nor the knowledge to make the needed changes in 
hvmloader to get q35 fully working.

>
>
> On Mon, 24 Feb 2014, Konrad Rzeszutek Wilk wrote:
>> On Mon, Feb 24, 2014 at 07:28:32PM +0000, Zhang, Eniac wrote:
>>> Hi Konrad,
>>>
>>> Thanks for the info.  Your guest sees the virtual function as a pci device just like I had suspected.  Unfortunately that won't work for me.  I guess I have to take a hard look at implementing pci-e passthrough using pciback then.
>> You won't have to do it with pciback. Keep in mind that pciback just "holds"
>> the device so that other drivers (like ixgbevf) don't use it.
>>
>> 'xl' ends up doing the proper hypercall to assign the device to
>> the guest. And QEMU also does it set of calls to setup the
>> BARS, interrupts, deal with MSI-X, etc.
>>
>> What you are going to have to look at is QEMU - and how to make it
>> work with the newer emulated chipset.
>>
>> Stefano (CC-ed) here is the maintainer of QEMU Xen pieces. Anthony
>> (CC-ed as well) had backported the proper pieces in QEMU to do
>> PCI passthrough.
>>
>> Looking forward to your patches and we will be more than happy
>> to help you upstream them!
>>
>>> Regards/Eniac
>>>
>>> -----Original Message-----
>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>>> Sent: Monday, February 24, 2014 9:40 AM
>>> To: Zhang, Eniac
>>> Cc: xen-devel@lists.xen.org
>>> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>>>
>>> On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
>>>> Hi Konrad,
>>>>
>>>> Here's what I see when start a VM under xen using pciback to pass a pci-e device into domU.  The device can be seen by guest, and also functioning fine.  But it's not seen as a pci-e device, rather, it looks just like an ordinary pci device because only the first 0x100 bytes of its configuration space is accessible.  So if a driver needs to use data in the extended configuration space for certain features, it will fail.
>>>>
>>>> When you say you "did PCIe pass through of an VF of an SR-IOV device".  Are you actually using it as a pci-e device or have throttled it back to pci mode without aware of the difference?  If you did see the pci-e device in guest, can you share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output from guest?
>>> # lspci
>>> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>>> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>>> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>> 00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
>>> 00:03.0 VGA compatible controller: Cirrus Logic GD 5446
>>> 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) # lspci -t
>>> -[0000:00]-+-00.0
>>>             +-01.0
>>>             +-01.1
>>>             +-01.3
>>>             +-02.0
>>>             +-03.0
>>>             \-04.0
>>> # lspci -s 00:04.0 -xxxxx
>>> 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
>>> 00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
>>> 10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
>>> 20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
>>> 30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
>>> 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
>>> 80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
>>> b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
>>> d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>>
>>> -bash-4.1# more /vm-pci.cfg
>>> builder='hvm'
>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>> memory = 2048
>>> boot="d"
>>> maxvcpus=32
>>> vcpus=1
>>> serial='pty'
>>> vnclisten="0.0.0.0"
>>> name="latest"
>>> pci = ["0000:02:10.0"]
>>>
>>>> Also to echo your second comment:  I might still be a newbie in qemu field (I started working on this 4 months ago).  I thought the chipset limits what you can see/do in vm.  Ie.  If you have 440fx emulations then you can't have any pci-e devices (fake or passthru) in the same system.  Is that not true?
>>>>
>>>> Regards/Eniac
>>>>
>>>> -----Original Message-----
>>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>>>> Sent: Friday, February 21, 2014 5:32 PM
>>>> To: Zhang, Eniac
>>>> Cc: xen-devel@lists.xen.org
>>>> Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
>>>>
>>>>
>>>> On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
>>>>> Hi Konrad,
>>>>>
>>>>> Thanks for your reply.
>>>>>
>>>>> Yes, I am aware of the pciback.  Unfortunately it doesn't seem to
>>>>> support pci-e passthrough. (I could be wrong here)
>>>> I just did PCIe pass through of a VF of an SR-IOV device. It certainly is PCIe.
>>>>
>>>>> There are two reason that I am interested in this.  For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time that it will become obsoleted.  The trend is clear that pci-e is taking over the world.
>>>>>
>>>> I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI pass through.
>>>>
>>>>> Regards/Eniac
>>>>>
>>>>> -----Original Message-----
>>>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>>>>> Sent: Friday, February 21, 2014 2:50 PM
>>>>> To: Zhang, Eniac
>>>>> Cc: xen-devel@lists.xen.org
>>>>> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>>>>>
>>>>> On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> I am playing with q35 chipset in qemu (1.6.1).  It seems we can't enable q35 machine under xen yet.  I made a few quick hacks which all fail miserably (linux kernel oops and window BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
>>>>>>
>>>>>> Next question, vfio works very well for me in standalone qemu (with Linux host handling iommu), but is that supported under xen?  I haven't tried anything there yet because my gut-feeling is that it won't work.  Because passing vfio device to qemu can only be done on qemu commandline, and xen is not aware of this passing through device, thus not able to make iommu arrangement for this device.  Am I on the right track here?
>>>>> Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
>>>>>
>>>>>> I am interested in implementing both these two features.  I'd like to connect with anyone who's already on this so we don't duplicate the efforts.
>>>>> What do you need Q35 for?
>>>>>
>>>>>> Regards/Eniac
>>>>>> _______________________________________________
>>>>>> Xen-devel mailing list
>>>>>> Xen-devel@lists.xen.org
>>>>>> http://lists.xen.org/xen-devel
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


--------------060507060107090802030404--


--===============8428466112131771469==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8428466112131771469==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 13:25:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 13:25:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIeUC-0007VA-AH; Wed, 26 Feb 2014 13:25:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WIeUA-0007V5-7l
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 13:25:18 +0000
Received: from [85.158.137.68:17039] by server-11.bemta-3.messagelabs.com id
	4A/10-04255-D3BED035; Wed, 26 Feb 2014 13:25:17 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393421115!4356050!1
X-Originating-IP: [74.125.83.54]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9151 invoked from network); 26 Feb 2014 13:25:15 -0000
Received: from mail-ee0-f54.google.com (HELO mail-ee0-f54.google.com)
	(74.125.83.54)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 13:25:15 -0000
Received: by mail-ee0-f54.google.com with SMTP id c41so529549eek.27
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 05:25:14 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type;
	bh=HZODnFL1+qIBwLkNF4FcAUWXSgDCTdHQeUH18P29hZ4=;
	b=MVrOyOkm5U7sg9PkRwZ+HIBNkeqIgmm/4it1Mi09IV4on9QrTcwmpLTVlOxPkZ0Sfa
	XoBbCSz3/ZGESWzyorew7CzmRVmNclXmBBdA9OFs/4uUZpYHCpIeAwOn9tl8qlOz/An+
	CN1xEh4DQ07keORl5xBn5EWiCucDDWjO9pKOyg6bXZvRo2VvzO1icFTkq+kctBF9VrAd
	pFzUPc/0yjCW1fShJBwXIdZWVDBF6xA75BSnjSd8ivk20OoILUzwKU20c3olWTJSFBnQ
	TktUO6OuOqEzwzvn9UlslQ5G3nOFjIOcFqJ6/ZMpqWsKD6arywRnqcjDI7DW3lvxVby4
	FDXA==
X-Gm-Message-State: ALoCoQl+7Qs3HuOZOjg2L6sZAUwpl9M1EBbPgoMX6+lcxFeuGuTZj0SfczbAcqpQSSsTgXbe2qV2
X-Received: by 10.14.179.129 with SMTP id h1mr6965491eem.26.1393421114799;
	Wed, 26 Feb 2014 05:25:14 -0800 (PST)
Received: from [192.168.1.15] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id f45sm3722710eeg.5.2014.02.26.05.25.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 05:25:13 -0800 (PST)
Message-ID: <530DEB36.5010305@m2r.biz>
Date: Wed, 26 Feb 2014 14:25:10 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <201402220031.s1M0Vcxm023037@aserz7021.oracle.com>	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B7AA0@G9W0737.americas.hpqcorp.net>	<20140224164004.GO816@phenom.dumpdata.com>	<3B22ECA2D19A3D408C83F4F15A9CB7D4507B945A@G9W0737.americas.hpqcorp.net>	<20140224194301.GA8089@phenom.dumpdata.com>
	<alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com>
Cc: anthony.perard@citrix.com, "Zhang, Eniac" <eniac-xw.zhang@hp.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] q35 in xen?  vfio in xen?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8428466112131771469=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============8428466112131771469==
Content-Type: multipart/alternative;
 boundary="------------060507060107090802030404"

This is a multi-part message in MIME format.
--------------060507060107090802030404
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 26/02/2014 12:58, Stefano Stabellini wrote:
> A little while ago Anthony made the Q35 emulation in QEMU work with Xen:
>
> http://marc.info/?l=qemu-devel&m=137813513713296
>
> He might be able to give you some pointers on where to start.

AFAIK that was only a patch which made the RAM initialisation of 
qemu 1.6 work again with Xen.

I commented out an assert in hvmloader which prevented domUs from 
starting with the q35 chipset, and then added some qemu parameters 
manually, for example:
device_model_args_hvm=["-machine","q35,accel=xen","-device","i82801b11-bridge","-drive","file=/mnt/vm/disks/W7-q35.disk1.xm,if=none,id=drive-sata-disk0,format=raw,cache=writeback","-device","ide-hd,drive=drive-sata-disk0,id=sata0"]

At the moment I can't find my previous mail with the details of my q35 
tests, but I believe the main things needed to start testing were the 
ones mentioned above:
-machine q35,accel=xen to select the q35 chipset
-device i82801b11-bridge to make the old PCI bus available, since the 
default PCI-e bus probably needs hvmloader changes to work.
I also found that with the current qemu parameters the disks were not 
visible on boot, while they were visible using the new qemu parameters 
(-device).
I tried to write a patch using the new qemu parameters that was also 
compatible with the old chipset and its IDE controller, but I found that 
the automatic bus selection is buggy. Unfortunately I did not have more 
time to continue, nor the knowledge to make the needed changes in 
hvmloader to get q35 fully working.
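
For context, a hypothetical sketch of how those arguments might sit in a complete xl cfg file; only the device_model_args_hvm list comes from the tests above, everything else (name, memory, vcpus) is made up for illustration and untested:

```
# Hypothetical xl cfg sketch; only device_model_args_hvm is from the
# tests described above, the remaining keys are illustrative.
name = "w7-q35-test"
builder = "hvm"
memory = 2048
vcpus = 2
device_model_args_hvm = [
    "-machine", "q35,accel=xen",
    "-device", "i82801b11-bridge",
    "-drive", "file=/mnt/vm/disks/W7-q35.disk1.xm,if=none,id=drive-sata-disk0,format=raw,cache=writeback",
    "-device", "ide-hd,drive=drive-sata-disk0,id=sata0",
]
```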

>
>
> On Mon, 24 Feb 2014, Konrad Rzeszutek Wilk wrote:
>> On Mon, Feb 24, 2014 at 07:28:32PM +0000, Zhang, Eniac wrote:
>>> Hi Konrad,
>>>
>>> Thanks for the info.  Your guest sees the virtual function as a pci device, just like I had suspected.  Unfortunately that won't work for me.  I guess I have to take a hard look at implementing pci-e passthrough using pciback then.
>> You won't have to do it with pciback. Keep in mind that pciback just "holds"
>> the device so that other drivers (like ixgbevf) don't use it.
>>
>> 'xl' ends up doing the proper hypercall to assign the device to
>> the guest. And QEMU also does its set of calls to set up the
>> BARs, interrupts, deal with MSI-X, etc.
>>
>> What you are going to have to look at is QEMU - and how to make it
>> work with the newer emulated chipset.
>>
>> Stefano (CC-ed) here is the maintainer of QEMU Xen pieces. Anthony
>> (CC-ed as well) had backported the proper pieces in QEMU to do
>> PCI passthrough.
>>
>> Looking forward to your patches and we will be more than happy
>> to help you upstream them!
>>
>>> Regards/Eniac
>>>
>>> -----Original Message-----
>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>>> Sent: Monday, February 24, 2014 9:40 AM
>>> To: Zhang, Eniac
>>> Cc: xen-devel@lists.xen.org
>>> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>>>
>>> On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
>>>> Hi Konrad,
>>>>
>>>> Here's what I see when starting a VM under xen using pciback to pass a pci-e device into domU.  The device can be seen by the guest, and also functions fine.  But it's not seen as a pci-e device; rather, it looks just like an ordinary pci device because only the first 0x100 bytes of its configuration space are accessible.  So if a driver needs to use data in the extended configuration space for certain features, it will fail.
>>>>
>>>> When you say you "did PCIe pass through of a VF of an SR-IOV device": are you actually using it as a pci-e device, or have you throttled it back to pci mode without being aware of the difference?  If you did see the pci-e device in the guest, can you share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output from the guest?
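
One way to check this distinction from inside a Linux guest: the size of a device's sysfs `config` file reflects how much configuration space is accessible (256 bytes for the conventional-PCI view, 4096 for full PCIe extended config space). A hedged sketch; `device_view` is a hypothetical helper, not something from this thread:

```python
import os

# A device whose sysfs "config" file is 4096 bytes exposes the PCIe
# extended configuration space; 256 bytes means only the conventional
# PCI-compatible region (the first 0x100 bytes) is visible.
def config_space_view(size_bytes):
    return "pcie" if size_bytes >= 4096 else "pci-compatible"

def device_view(bdf):
    # e.g. device_view("0000:02:10.0"); requires a Linux guest with sysfs
    return config_space_view(
        os.path.getsize(f"/sys/bus/pci/devices/{bdf}/config"))
```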
>>> # lspci
>>> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>>> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
>>> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
>>> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
>>> 00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
>>> 00:03.0 VGA compatible controller: Cirrus Logic GD 5446
>>> 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01) # lspci -t
>>> -[0000:00]-+-00.0
>>>             +-01.0
>>>             +-01.1
>>>             +-01.3
>>>             +-02.0
>>>             +-03.0
>>>             \-04.0
>>> # lspci -s 00:04.0 -xxxxx
>>> 00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
>>> 00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
>>> 10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
>>> 20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
>>> 30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
>>> 40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
>>> 80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> 90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
>>> b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
>>> d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>> f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>>>
>>> -bash-4.1# more /vm-pci.cfg
>>> builder='hvm'
>>> disk = [ 'file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r']
>>> memory = 2048
>>> boot="d"
>>> maxvcpus=32
>>> vcpus=1
>>> serial='pty'
>>> vnclisten="0.0.0.0"
>>> name="latest"
>>> pci = ["0000:02:10.0"]
>>>
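
Whether the guest-visible VF advertises a PCIe capability can be read straight out of the hex dump above by walking the standard PCI capability list (the byte at offset 0x34 is the first capability pointer). A minimal sketch, with the hex rows copied verbatim from that dump:

```python
# Walk the standard PCI capability list in the lspci -xxxx dump of 00:04.0.
rows = """
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
"""

cfg = bytes(int(b, 16) for line in rows.split("\n") if line
            for b in line.split(":")[1].split())

def capabilities(cfg):
    caps, ptr = [], cfg[0x34]   # capability pointer lives at offset 0x34
    while ptr:
        caps.append(cfg[ptr])   # capability ID
        ptr = cfg[ptr + 1]      # next-capability pointer (0 ends the list)
    return caps

print([hex(c) for c in capabilities(cfg)])  # → ['0x11', '0x10']
```

Capability 0x11 is MSI-X and 0x10 is the PCI Express capability, so this VF does advertise PCIe within the first 0x100 bytes; what the guest is missing is only the extended (offset 0x100 and above) configuration space.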
>>>> Also to echo your second comment:  I might still be a newbie in the qemu field (I started working on this 4 months ago).  I thought the chipset limits what you can see/do in the VM.  I.e. if you have 440fx emulation then you can't have any pci-e devices (fake or passthrough) in the same system.  Is that not true?
>>>>
>>>> Regards/Eniac
>>>>
>>>> -----Original Message-----
>>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>>>> Sent: Friday, February 21, 2014 5:32 PM
>>>> To: Zhang, Eniac
>>>> Cc: xen-devel@lists.xen.org
>>>> Subject: RE: [Xen-devel] q35 in xen? vfio in xen?
>>>>
>>>>
>>>> On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <eniac-xw.zhang@hp.com> wrote:
>>>>> Hi Konrad,
>>>>>
>>>>> Thanks for your reply.
>>>>>
>>>>> Yes, I am aware of the pciback.  Unfortunately it doesn't seem to
>>>>> support pci-e passthrough. (I could be wrong here)
>>>> I just did PCIe pass through of a VF of an SR-IOV device. It certainly is PCIe.
>>>>
>>>>> There are two reasons that I am interested in this.  For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.  Secondly, I feel we ought to move on with the technology.  440fx is ancient in computer terms.  Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time before it becomes obsolete.  The trend is clear that pci-e is taking over the world.
>>>>>
>>>> I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI passthrough.
>>>>
>>>>> Regards/Eniac
>>>>>
>>>>> -----Original Message-----
>>>>> From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@oracle.com]
>>>>> Sent: Friday, February 21, 2014 2:50 PM
>>>>> To: Zhang, Eniac
>>>>> Cc: xen-devel@lists.xen.org
>>>>> Subject: Re: [Xen-devel] q35 in xen? vfio in xen?
>>>>>
>>>>> On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote:
>>>>>> Hi all,
>>>>>>
>>>>>> I am playing with the q35 chipset in qemu (1.6.1).  It seems we can't enable the q35 machine under xen yet.  I made a few quick hacks which all fail miserably (linux kernel oops and Windows BSOD).  I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).
>>>>>>
>>>>>> Next question, vfio works very well for me in standalone qemu (with Linux host handling iommu), but is that supported under xen?  I haven't tried anything there yet because my gut-feeling is that it won't work.  Because passing vfio device to qemu can only be done on qemu commandline, and xen is not aware of this passing through device, thus not able to make iommu arrangement for this device.  Am I on the right track here?
>>>>> Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback).
>>>>>
>>>>>> I am interested in implementing both these two features.  I'd like to connect with anyone who's already on this so we don't duplicate the efforts.
>>>>> What do you need Q35 for?
>>>>>
>>>>>> Regards/Eniac
>>>>>> _______________________________________________
>>>>>> Xen-devel mailing list
>>>>>> Xen-devel@lists.xen.org
>>>>>> http://lists.xen.org/xen-devel
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel


--------------060507060107090802030404
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">Il 26/02/2014 12:58, Stefano Stabellini
      ha scritto:<br>
    </div>
    <blockquote
      cite="mid:alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com"
      type="cite">
      <pre wrap="">A little while ago Anthony made the Q35 emulation in QEMU work with Xen:

<a class="moz-txt-link-freetext" href="http://marc.info/?l=qemu-devel&amp;m=137813513713296">http://marc.info/?l=qemu-devel&amp;m=137813513713296</a>

He might be able to give you some pointers on where to start.</pre>
    </blockquote>
    <br>
    AFAIK that was only a patch which made the RAM initialisation of
    qemu 1.6 work again with Xen.<br>
    <br>
    I commented out an assert in hvmloader which prevented domUs from
    starting with the q35 chipset, and then added some qemu parameters
    manually, for example:<br>
device_model_args_hvm=["-machine","q35,accel=xen","-device","i82801b11-bridge","-drive","file=/mnt/vm/disks/W7-q35.disk1.xm,if=none,id=drive-sata-disk0,format=raw,cache=writeback","-device","ide-hd,drive=drive-sata-disk0,id=sata0"]<br>
    <br>
    At the moment I can't find my previous mail with the details of my
    q35 tests, but I believe the main things needed to start testing
    were the ones mentioned above:<br>
    -machine q35,accel=xen to select the q35 chipset<br>
    -device i82801b11-bridge to make the old PCI bus available, since
    the default PCI-e bus probably needs hvmloader changes to work.<br>
    I also found that with the current qemu parameters the disks were
    not visible on boot, while they were visible using the new qemu
    parameters (-device).<br>
    I tried to write a patch using the new qemu parameters that was
    also compatible with the old chipset and its IDE controller, but I
    found that the automatic bus selection is buggy. Unfortunately I
    did not have more time to continue, nor the knowledge to make the
    needed changes in hvmloader to get q35 fully working.<br>
    <br>
    <blockquote
      cite="mid:alpine.DEB.2.02.1402261155320.31489@kaball.uk.xensource.com"
      type="cite">
      <pre wrap="">


On Mon, 24 Feb 2014, Konrad Rzeszutek Wilk wrote:
</pre>
      <blockquote type="cite">
        <pre wrap="">On Mon, Feb 24, 2014 at 07:28:32PM +0000, Zhang, Eniac wrote:
</pre>
        <blockquote type="cite">
          <pre wrap="">Hi Konrad,

Thanks for the info.  Your guest sees the virtual function as  pci device just like I had suspected.  Unfortunately that won't work for me.  I guess I have to take a hard look at implement pci-e passthrough using pciback then.
</pre>
        </blockquote>
        <pre wrap="">
You won't have to do it with pciback. Keep in mind that pciback just "holds"
the device so that other drivers (like ixbgvf) don't use it.

'xl' ends up doing the proper hypercall to assign the device to
the guest. And QEMU also does it set of calls to setup the
BARS, interrupts, deal with MSI-X, etc.

What you are going to have to look at is QEMU - and how to make it
work with the newer emulated chipset.

Stefano (CC-ed) here is the maintainer of QEMU Xen pieces. Anthony
(CC-ed as well) had backported the proper pieces in QEMU to do
PCI passthrough.

Looking forward to your patches and we will be more than happy
to help you upstream them!

</pre>
        <blockquote type="cite">
          <pre wrap="">
Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [<a class="moz-txt-link-freetext" href="mailto:konrad.wilk@oracle.com">mailto:konrad.wilk@oracle.com</a>] 
Sent: Monday, February 24, 2014 9:40 AM
To: Zhang, Eniac
Cc: <a class="moz-txt-link-abbreviated" href="mailto:xen-devel@lists.xen.org">xen-devel@lists.xen.org</a>
Subject: Re: [Xen-devel] q35 in xen? vfio in xen?

On Sat, Feb 22, 2014 at 08:06:50AM +0000, Zhang, Eniac wrote:
</pre>
          <blockquote type="cite">
            <pre wrap="">Hi Konrad,

Here's what I see when start a VM under xen using pciback to pass a pci-e device into domU.  The device can be seen by guest, and also functioning fine.  But it's not seen as a pci-e device, rather, it looks just like an ordinary pci device because only the first 0x100 bytes of its configuration space is accessible.  So if a driver needs to use data in the extended configuration space for certain features, it will fail.

When you say you "did PCIe pass through of an VF of an SR-IOV device".  Are you actually using it as a pci-e device or have throttled it back to pci mode without aware of the difference?  If you did see the pci-e device in guest, can you share your xl.cfg file as well as lspci/lspci -t/lspci -xxxx output from guest?
</pre>
          </blockquote>
          <pre wrap="">
# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 Class ff80: XenSource, Inc. Xen Platform Device (rev 01)
00:03.0 VGA compatible controller: Cirrus Logic GD 5446
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
# lspci -t
-[0000:00]-+-00.0
           +-01.0
           +-01.1
           +-01.3
           +-02.0
           +-03.0
           \-04.0
# lspci -s 00:04.0 -xxxxx
00:04.0 Ethernet controller: Intel Corporation 82576 Virtual Function (rev 01)
00: 86 80 ca 10 06 04 10 00 01 00 00 02 00 00 00 00
10: 04 00 01 f3 00 00 00 00 00 00 00 00 04 40 01 f3
20: 00 00 00 00 00 00 00 00 00 00 00 00 86 80 3c a0
30: 00 00 00 00 70 00 00 00 00 00 00 00 00 00 00 00
40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
70: 11 a0 02 80 03 00 00 00 03 20 00 00 00 00 00 00
80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
a0: 10 00 02 00 c2 8c 00 00 10 28 00 00 41 6c 03 00
b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
c0: 00 00 00 00 1f 00 00 00 00 00 00 00 00 00 00 00
d0: 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

-bash-4.1# more /vm-pci.cfg
builder='hvm'
disk = [ '<a class="moz-txt-link-freetext" href="file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r">file:/mnt/lab/latest/root_image.iso,hdc:cdrom,r</a>']
memory = 2048
boot="d"
maxvcpus=32
vcpus=1
serial='pty'
vnclisten="0.0.0.0"
name="latest"
pci = ["0000:02:10.0"]
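For completeness, the pci = [...] line above only takes effect once the device is bound to pciback in dom0; a sketch of the usual steps (`emit_pci_line` is just an illustrative helper here, not an xl command):

```shell
# Make the VF assignable before starting the guest (run in dom0):
#   modprobe xen-pciback
#   xl pci-assignable-add 0000:02:10.0
#   xl pci-assignable-list          # should now list 0000:02:10.0
# Sanity-check the BDF format before handing it to xl / the cfg file:
emit_pci_line() {
  case "$1" in
    [0-9a-f][0-9a-f][0-9a-f][0-9a-f]:[0-9a-f][0-9a-f]:[0-9a-f][0-9a-f].[0-9a-f])
      echo "pci = [ \"$1\" ]" ;;
    *)
      echo "malformed BDF: $1" >&2; return 1 ;;
  esac
}
emit_pci_line 0000:02:10.0
```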

</pre>
          <blockquote type="cite">
            <pre wrap="">
Also to echo your second comment: I might still be a newbie in the qemu field (I started working on this 4 months ago).  I thought the chipset limits what you can see/do in a VM, i.e. if you have 440fx emulation then you can't have any pci-e devices (fake or passthru) in the same system.  Is that not true?

Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [<a class="moz-txt-link-freetext" href="mailto:konrad.wilk@oracle.com">mailto:konrad.wilk@oracle.com</a>]
Sent: Friday, February 21, 2014 5:32 PM
To: Zhang, Eniac
Cc: <a class="moz-txt-link-abbreviated" href="mailto:xen-devel@lists.xen.org">xen-devel@lists.xen.org</a>
Subject: RE: [Xen-devel] q35 in xen? vfio in xen?


On Feb 21, 2014 4:58 PM, "Zhang, Eniac" <a class="moz-txt-link-rfc2396E" href="mailto:eniac-xw.zhang@hp.com">&lt;eniac-xw.zhang@hp.com&gt;</a> wrote:
</pre>
            <blockquote type="cite">
              <pre wrap="">
Hi Konrad,

Thanks for your reply.

Yes, I am aware of pciback.&nbsp; Unfortunately it doesn't seem to 
support pci-e passthrough. (I could be wrong here)
</pre>
            </blockquote>
            <pre wrap="">
I just did PCIe passthrough of a VF of an SR-IOV device. It certainly is PCIe.

</pre>
            <blockquote type="cite">
              <pre wrap="">
There are two reasons that I am interested in this.&nbsp; For one, my project calls for pci-e device passthrough, which can't be accomplished with 440fx chipset emulation.&nbsp; Secondly, I feel we ought to move on with the technology.&nbsp; 440fx is ancient in computer terms.&nbsp; Qemu is good and all that, but if it refuses to support pci-e natively then it's just a matter of time before it becomes obsolete.&nbsp; The trend is clear that pci-e is taking over the world.

</pre>
            </blockquote>
            <pre wrap="">
I am not sure what you are saying, but it does not matter whether QEMU emulates 440fx or q35 for PCI passthrough.

</pre>
            <blockquote type="cite">
              <pre wrap="">Regards/Eniac

-----Original Message-----
From: Konrad Rzeszutek Wilk [<a class="moz-txt-link-freetext" href="mailto:konrad.wilk@oracle.com">mailto:konrad.wilk@oracle.com</a>]
Sent: Friday, February 21, 2014 2:50 PM
To: Zhang, Eniac
Cc: <a class="moz-txt-link-abbreviated" href="mailto:xen-devel@lists.xen.org">xen-devel@lists.xen.org</a>
Subject: Re: [Xen-devel] q35 in xen? vfio in xen? 

On Fri, Feb 21, 2014 at 09:41:39PM +0000, Zhang, Eniac wrote: 
</pre>
              <blockquote type="cite">
                <pre wrap="">Hi all,

I am playing with the q35 chipset in qemu (1.6.1).&nbsp; It seems we can't enable the q35 machine under xen yet.&nbsp; I made a few quick hacks which all fail miserably (linux kernel oops and Windows BSOD).&nbsp; I was wondering why this hasn't been done (q35 was introduced into qemu in 2009).

Next question: vfio works very well for me in standalone qemu (with the Linux host handling the iommu), but is that supported under xen?&nbsp; I haven't tried anything there yet because my gut feeling is that it won't work, because passing a vfio device to qemu can only be done on the qemu command line, and xen is not aware of this passed-through device and thus cannot make iommu arrangements for it.&nbsp; Am I on the right track here?
</pre>
              </blockquote>
              <pre wrap="">
Yes and no. VFIO won't work - but QEMU does do PCI passthrough under Xen. It uses a different mechanism (and you need to bind the device to pciback). 

</pre>
              <blockquote type="cite">
                <pre wrap="">
I am interested in implementing both of these features.&nbsp; I'd like to connect with anyone who's already on this so we don't duplicate effort.
</pre>
              </blockquote>
              <pre wrap="">
What do you need Q35 for? 

</pre>
              <blockquote type="cite">
                <pre wrap="">
Regards/Eniac
</pre>
              </blockquote>
              <pre wrap="">
</pre>
              <blockquote type="cite">
                <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
              </blockquote>
              <pre wrap="">
</pre>
            </blockquote>
          </blockquote>
        </blockquote>
        <br>
        <fieldset class="mimeAttachmentHeader"></fieldset>
        <br>
        <pre wrap="">_______________________________________________
Xen-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Xen-devel@lists.xen.org">Xen-devel@lists.xen.org</a>
<a class="moz-txt-link-freetext" href="http://lists.xen.org/xen-devel">http://lists.xen.org/xen-devel</a>
</pre>
      </blockquote>
    </blockquote>
    <br>
  </body>
</html>

--------------060507060107090802030404--


--===============8428466112131771469==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8428466112131771469==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 13:34:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 13:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIedR-0007gU-LE; Wed, 26 Feb 2014 13:34:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <eblake@redhat.com>) id 1WIedQ-0007gP-AC
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 13:34:52 +0000
Received: from [85.158.143.35:6642] by server-2.bemta-4.messagelabs.com id
	97/9A-04779-B7DED035; Wed, 26 Feb 2014 13:34:51 +0000
X-Env-Sender: eblake@redhat.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393421690!8458407!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10677 invoked from network); 26 Feb 2014 13:34:50 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-21.messagelabs.com with SMTP;
	26 Feb 2014 13:34:50 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1QDYhcd012563
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 26 Feb 2014 08:34:44 -0500
Received: from [10.3.113.107] (ovpn-113-107.phx2.redhat.com [10.3.113.107])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1QDYgp0007048; Wed, 26 Feb 2014 08:34:42 -0500
Message-ID: <530DED71.4020604@redhat.com>
Date: Wed, 26 Feb 2014 06:34:41 -0700
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "Daniel P. Berrange" <berrange@redhat.com>,
	Ian Campbell <ian.campbell@citrix.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
In-Reply-To: <20140226123703.GC29185@redhat.com>
X-Enigmail-Version: 1.6
OpenPGP: url=http://people.redhat.com/eblake/eblake.gpg
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: libvir-list@redhat.com, xen-devel@lists.xen.org, julien.grall@linaro.org,
	tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
	architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5224595353611818087=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============5224595353611818087==
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="XnSsANLTMjlg5TW4rSSUhrbMrPQhAqwgq"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--XnSsANLTMjlg5TW4rSSUhrbMrPQhAqwgq
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 02/26/2014 05:37 AM, Daniel P. Berrange wrote:
> On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
>> Only tested on v7 but the v8 equivalent seems pretty obvious.
>>
>> XEN_CAP_REGEX already accepts more than it should (e.g. x86_64p or x86_32be)
>> but I have stuck with the existing pattern.
>>
>> With this I can create a guest from:
>>   <domain type='xen'>
>>     <name>libvirt-test</name>
>>     <uuid>6343998e-9eda-11e3-98f6-77252a7d02f3</uuid>
>>     <memory>393216</memory>
>>     <currentMemory>393216</currentMemory>
>>     <vcpu>1</vcpu>
>>     <os>
>>       <type arch='armv7l' machine='xenpv'>linux</type>
>>       <kernel>/boot/vmlinuz-arm-native</kernel>
>>       <cmdline>console=hvc0 earlyprintk debug root=/dev/xvda1</cmdline>
>>     </os>

>
> ACK

I've gone ahead and pushed the patch.


-- 
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org


--XnSsANLTMjlg5TW4rSSUhrbMrPQhAqwgq
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Public key at http://people.redhat.com/eblake/eblake.gpg
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBCAAGBQJTDe1xAAoJEKeha0olJ0NqCSAH/1g9E+WsBN0v7bRQ9OrmnwuR
VuWmclJje6ZMzcfRjXYPYFOyXhbSYZwfRumm54y++D+/njw1st0bKvLBOG5u2QxR
7ljPT5eUdHkG6lLdYrQUEr1dOTB7WGdWZwFur8BOUGkFhSDkQ4dsUmyFfalAnMtv
gXkj8e+h/Dy1+AbWJsMIFvR7nDeDFSmuIwf6OkD7gqXA1y3P4wmSAMMj7mKs0xGm
OdMEcwM4xthVFdL9BYGFkFTMzv70dIuJE+mlhveDHI/Bva/Qd0XQVafVJOuofhnz
WddTg+/aUocqnce7isy7W8thAzgDpY9iWN89C4WW6ZMU9gbDw5UPFF1anW5932E=
=XylM
-----END PGP SIGNATURE-----

--XnSsANLTMjlg5TW4rSSUhrbMrPQhAqwgq--


--===============5224595353611818087==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5224595353611818087==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 14:01:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 14:01:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIf2Y-0007wu-5y; Wed, 26 Feb 2014 14:00:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WIf2V-0007wp-MQ
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 14:00:48 +0000
Received: from [85.158.137.68:25250] by server-15.bemta-3.messagelabs.com id
	E6/E4-19263-E83FD035; Wed, 26 Feb 2014 14:00:46 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393423245!4359049!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_32,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5955 invoked from network); 26 Feb 2014 14:00:46 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-3.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 14:00:46 -0000
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; t=1393423245; l=964;
	s=domk; d=aepfle.de;
	h=In-Reply-To:Content-Type:MIME-Version:References:Subject:Cc:To:From:
	Date:X-RZG-CLASS-ID:X-RZG-AUTH;
	bh=wUeeErVPpP7X/5o9wIR8xSu+SI4=;
	b=V0B63Aiw89rPXkHFVvlR2uZaMiu6KxEnvKC2iT1A90BxXUBGzO9+LZJ+GxhDJ7ivCfK
	y6jMfj5twN3UnC+Fp7lRt9ubVvu3KhDJraMIZ1biu+LT6aFxbcTr3H2IoyZcJUAb1c8ZD
	FcltFOcN2xkIkvZICscInEPS9dUHWJN8sM8=
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.27 AUTH) with ESMTPSA id 903b7dq1QE0YPcF
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate);
	Wed, 26 Feb 2014 15:00:34 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id D55995026B; Wed, 26 Feb 2014 15:00:33 +0100 (CET)
Date: Wed, 26 Feb 2014 15:00:33 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140226140033.GA15045@aepfle.de>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393418554.18730.41.camel@kazak.uk.xensource.com>
User-Agent: Mutt/1.5.22.rev6346 (2013-10-29)
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
	stefano.stabellini@eu.citrix.com, libvir-list@redhat.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, Ian Campbell wrote:

> On Wed, 2014-02-26 at 12:37 +0000, Daniel P. Berrange wrote:
> > On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
> > > Currently virsh console fails with:
> > >   Connected to domain libvirt-test
> > >   Escape character is ^]
> > >   error: internal error: cannot find character device <null>
> > That'll be because no <console> or <serial> device is
> > listed in your config above I expect. Also looks like
> > bogus error handling in the console API, not checking
> > for <null>.
> Thanks. I've just tried (inside <devices>...</...>):
>          <console tty='/dev/pts/5'/>
> and
>      <console type='pty'>
>       <target port='0'/>
>     </console>
> which I gleaned from http://libvirt.org/drvxen.html but neither seem to
> do the trick (and the first ones use of an explicit pts looks odd to
> me...).

I learned yesterday that this should be "serial" instead of "console";
maybe it fixes your case as well.

Olaf
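For reference, the kind of stanza Olaf is suggesting looks roughly like this in libvirt domain XML (a sketch; port 0 is the usual first serial device):

```xml
<serial type='pty'>
  <target port='0'/>
</serial>
```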

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 14:55:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 14:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIftJ-0000Z3-WB; Wed, 26 Feb 2014 14:55:21 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIftI-0000Yw-Ms
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 14:55:20 +0000
Received: from [85.158.143.35:26508] by server-1.bemta-4.messagelabs.com id
	85/08-31661-8500E035; Wed, 26 Feb 2014 14:55:20 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393426518!8505637!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13745 invoked from network); 26 Feb 2014 14:55:19 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 14:55:19 -0000
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="105931099"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 14:55:16 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 09:55:16 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WIftC-0001XN-U0;
	Wed, 26 Feb 2014 14:55:15 +0000
Message-ID: <1393426513.9640.18.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
Date: Wed, 26 Feb 2014 14:55:13 +0000
In-Reply-To: <20140226140033.GA15045@aepfle.de>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
	<20140226140033.GA15045@aepfle.de>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
	stefano.stabellini@eu.citrix.com, libvir-list@redhat.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 15:00 +0100, Olaf Hering wrote:
> On Wed, Feb 26, Ian Campbell wrote:
> 
> > On Wed, 2014-02-26 at 12:37 +0000, Daniel P. Berrange wrote:
> > > On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
> > > > Currently virsh console fails with:
> > > >   Connected to domain libvirt-test
> > > >   Escape character is ^]
> > > >   error: internal error: cannot find character device <null>
> > > That'll be because no <console> or <serial> device is
> > > listed in your config above I expect. Also looks like
> > > bogus error handling in the console API, not checking
> > > for <null>.
> > Thanks. I've just tried (inside <devices>...</...>):
> >          <console tty='/dev/pts/5'/>
> > and
> >      <console type='pty'>
> >       <target port='0'/>
> >     </console>
> > which I gleaned from http://libvirt.org/drvxen.html but neither seems to
> > do the trick (and the first one's use of an explicit pts looks odd to
> > me...).
> 
> I learned yesterday this should be "serial" instead of "console"; maybe
> it also fixes your case.

My understanding was that this was for HVM guests, whereas ARM has a PV
console, but I suppose I should try it anyway. I've got a flight to catch,
so it'll be a few days I expect.

Thanks,

Ian.
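Olaf's suggestion, spelled out as a <devices> fragment (a sketch based on the libvirt domain XML format; untested on ARM here):

```xml
<devices>
  <!-- "serial" instead of "console", per Olaf's suggestion -->
  <serial type='pty'>
    <target port='0'/>
  </serial>
</devices>
```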


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:01:24 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIfz2-0000m7-Q8; Wed, 26 Feb 2014 15:01:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <berrange@redhat.com>) id 1WIfz1-0000m2-VE
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 15:01:16 +0000
Received: from [193.109.254.147:5395] by server-13.bemta-14.messagelabs.com id
	2C/4B-01226-BB10E035; Wed, 26 Feb 2014 15:01:15 +0000
X-Env-Sender: berrange@redhat.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393426872!1721024!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11338 invoked from network); 26 Feb 2014 15:01:13 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	26 Feb 2014 15:01:13 -0000
Received: from int-mx02.intmail.prod.int.phx2.redhat.com
	(int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1QF14Yh012543
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Wed, 26 Feb 2014 10:01:05 -0500
Received: from redhat.com (dhcp-1-236.lcy.redhat.com [10.32.224.236])
	by int-mx02.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s1QF10IO019079
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO);
	Wed, 26 Feb 2014 10:01:02 -0500
Date: Wed, 26 Feb 2014 15:01:00 +0000
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Message-ID: <20140226150100.GC6046@redhat.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
	<20140226140033.GA15045@aepfle.de>
	<1393426513.9640.18.camel@dagon.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393426513.9640.18.camel@dagon.hellion.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.12
Cc: Olaf Hering <olaf@aepfle.de>, stefano.stabellini@eu.citrix.com,
	libvir-list@redhat.com, julien.grall@linaro.org, tim@xen.org,
	xen-devel@lists.xen.org, Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: "Daniel P. Berrange" <berrange@redhat.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 02:55:13PM +0000, Ian Campbell wrote:
> On Wed, 2014-02-26 at 15:00 +0100, Olaf Hering wrote:
> > On Wed, Feb 26, Ian Campbell wrote:
> > 
> > > On Wed, 2014-02-26 at 12:37 +0000, Daniel P. Berrange wrote:
> > > > On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
> > > > > Currently virsh console fails with:
> > > > >   Connected to domain libvirt-test
> > > > >   Escape character is ^]
> > > > >   error: internal error: cannot find character device <null>
> > > > That'll be because no <console> or <serial> device is
> > > > listed in your config above I expect. Also looks like
> > > > bogus error handling in the console API, not checking
> > > > for <null>.
> > > Thanks. I've just tried (inside <devices>...</...>):
> > >          <console tty='/dev/pts/5'/>
> > > and
> > >      <console type='pty'>
> > >       <target port='0'/>
> > >     </console>
> > > which I gleaned from http://libvirt.org/drvxen.html but neither seem to
> > > do the trick (and the first ones use of an explicit pts looks odd to
> > > me...).
> > 
> > I learned yesterday this should be "serial" instead of "console", maybe
> > it fixes also your case.
> 
> My understanding was that this was for HVM guests, whereas ARM has a PV
> console but I suppose I should try it anyway. I've got a flight to catch
> so it'll be a few days I expect.

Yep, if ARM has a PV console, then we'd need to add a tiny bit to the XML
to allow us to configure that explicitly, similar to how we do for KVM's
virtio-console support.
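For comparison, KVM's virtio console is selected in the domain XML via the console target type (an existing libvirt construct; a PV-console marker for Xen would presumably be analogous):

```xml
<console type='pty'>
  <!-- the target type is what distinguishes a virtio console on KVM -->
  <target type='virtio' port='0'/>
</console>
```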


Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:11:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIg94-0000ze-08; Wed, 26 Feb 2014 15:11:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WIg92-0000zZ-4k
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 15:11:36 +0000
Received: from [85.158.137.68:34553] by server-6.bemta-3.messagelabs.com id
	9A/91-09180-5240E035; Wed, 26 Feb 2014 15:11:33 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393427491!4392775!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30837 invoked from network); 26 Feb 2014 15:11:31 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 15:11:31 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53650 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WIg7H-0007UY-Bp; Wed, 26 Feb 2014 16:09:47 +0100
Date: Wed, 26 Feb 2014 16:11:23 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <59358334.20140226161123@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <824074181.20140226101442@eikelenboom.it>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
MIME-Version: 1.0
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, February 26, 2014, 10:14:42 AM, you wrote:


> Friday, February 21, 2014, 7:32:08 AM, you wrote:


>> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>>>
>>>
>>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>>>>> Hi All,
>>>>>
>>>>> I'm currently having some network troubles with Xen and recent linux kernels.
>>>>>
>>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>>>>
>>>>>     In the guest:
>>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [57539.859610] net eth0: Need more slots
>>>>>     [58157.675939] net eth0: Need more slots
>>>>>     [58725.344712] net eth0: Need more slots
>>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [61815.849225] net eth0: Need more slots
>>>> This issue is familiar... and I thought it got fixed.
>>>> From the original analysis of a similar issue I hit before, the root cause
>>>> is netback still creating responses when the ring is full. I remember a
>>>> larger MTU could trigger this issue before; what is the MTU size?
>>> In dom0 both for the physical nics and the guest vif's MTU=1500
>>> In domU the eth0 also has MTU=1500.
>>>
>>> So it's not jumbo frames .. just everywhere the same plain defaults ..
>>>
>>> With the patch from Wei that solves the other issue, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch now.
>>> I have extended the "need more slots" warning to also print the cons, slots, max, rx->offset and size; hope that gives some more insight.
>>> But it is indeed the VM where I had similar issues before; the primary thing this VM does is 2 simultaneous rsync's (one push, one pull) with some gigabytes of data.
>>>
>>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; don't know if it's a cause or an effect though.

>> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
>> Probably the response overlaps the request and grantcopy returns an error
>> when using a wrong grant reference; netback returns resp->status with
>> XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above from the frontend.
>> Would it be possible to print a log in xenvif_rx_action of netback to see
>> whether something is wrong with max slots and used slots?

>> Thanks
>> Annie
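The 4294967295 Annie mentions is just XEN_NETIF_RSP_ERROR (-1) from resp->status reinterpreted as an unsigned 32-bit value when netfront prints the size; a minimal illustration (a Python sketch, not the actual frontend code):

```python
XEN_NETIF_RSP_ERROR = -1  # error status netback writes into resp->status

# netfront logs the status as an unsigned size, so -1 comes out as 2**32 - 1
size = XEN_NETIF_RSP_ERROR & 0xFFFFFFFF
print(size)  # 4294967295
```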

> Looking more closely, these are perhaps 2 different issues ... the bad grant references do not happen
> at the same time as the netfront messages in the guest.

> I added some debug patches to the kernel netback, netfront and xen grant-table code (see below).
> One of the things was to simplify the code for the debug key that prints the grant tables; the present code
> takes too long to execute and brings down the box due to stalls and NMIs. So it now only prints
> the nr of entries per domain.


> Issue 1: grant_table.c:1858:d0 Bad grant reference

> After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
> The maptrack also seems to increase quite fast, and the number of entries seems to have gone up quite fast as well.

> Most domains have just one disk (blkfront/blkback) and one nic; a few have a second disk.
> The blk drivers use persistent grants, so I would assume it would reuse those and not increase the count (by much).

> Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 somewhere this night.
> Domain 7 is the domain that happens to give the netfront messages.

> I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries ..
> Also, is this amount of grant entries "normal", or could it be a leak somewhere?

> (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (4) to (5) frames.
> (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (5) to (6) frames.
> (XEN) [2014-02-26 00:00:38] grant_table.c:290:d0 Increased maptrack size to 13/256 frames
> (XEN) [2014-02-26 00:01:13] grant_table.c:290:d0 Increased maptrack size to 14/256 frames
> (XEN) [2014-02-26 04:02:55] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> (XEN) [2014-02-26 04:15:33] grant_table.c:290:d0 Increased maptrack size to 15/256 frames
> (XEN) [2014-02-26 04:15:53] grant_table.c:290:d0 Increased maptrack size to 16/256 frames
> (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 17/256 frames
> (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 18/256 frames
> (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 19/256 frames
> (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 20/256 frames
> (XEN) [2014-02-26 04:15:59] grant_table.c:290:d0 Increased maptrack size to 21/256 frames
> (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 22/256 frames
> (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 23/256 frames
> (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 24/256 frames
> (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 25/256 frames
> (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 26/256 frames
> (XEN) [2014-02-26 04:16:17] grant_table.c:290:d0 Increased maptrack size to 27/256 frames
> (XEN) [2014-02-26 04:16:20] grant_table.c:290:d0 Increased maptrack size to 28/256 frames
> (XEN) [2014-02-26 04:16:56] grant_table.c:290:d0 Increased maptrack size to 29/256 frames
> (XEN) [2014-02-26 05:15:04] grant_table.c:290:d0 Increased maptrack size to 30/256 frames
> (XEN) [2014-02-26 05:15:05] grant_table.c:290:d0 Increased maptrack size to 31/256 frames
> (XEN) [2014-02-26 05:21:15] grant_table.c:1858:d0 Bad grant reference 107085839 | 2048 | 1 | 0
> (XEN) [2014-02-26 05:29:47] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
> (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all [ key 'g' pressed
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 active entries: 0
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1) nr_grant_entries: 3072
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 active entries: 2117
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 active entries: 1061
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 active entries: 1045
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 active entries: 709
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 active entries: 163
> (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all ] done
> (XEN) [2014-02-26 07:55:09] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> (XEN) [2014-02-26 08:37:16] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
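Reading Sander's debug line as "ref | nr_grant_entries | ...", the rejected references are far outside the table, and their hex forms look more like corrupted values than plausible refs; a quick check, assuming the usual ref < nr_grant_entries validity test the hypervisor applies (a sketch, not the grant_table.c code):

```python
nr_grant_entries = 2048  # from the log lines above

# refs reported as "Bad grant reference" in the log
for ref in (4325377, 107085839, 268435460):
    # anything >= nr_grant_entries is rejected as a bad grant reference
    print(hex(ref), ref >= nr_grant_entries)  # all True: wildly out of range
```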



> Issue 2: net eth0: rx->offset: 0, size: xxxxxxxxxx

> In the guest (domain 7):

> Feb 26 08:55:09 backup kernel: [39258.090375] net eth0: rx->offset: 0, size: 4294967295
> Feb 26 08:55:09 backup kernel: [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
> Feb 26 08:55:09 backup kernel: [39258.090401] net eth0: rx->offset: 0, size: 4294967295
> Feb 26 08:55:09 backup kernel: [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
> Feb 26 08:55:09 backup kernel: [39258.090415] net eth0: rx->offset: 0, size: 4294967295
> Feb 26 08:55:09 backup kernel: [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571

> In dom0 I don't see any specific netback warnings related to this domain at these specific times. The printk's I added do trigger quite a few times, but these are probably
> not erroneous; they seem to only occur on the vif of domain 7 (probably the only domain that is swamping the network by doing rsync and webdav, which causes some fragmented packets).

Another addition ... the guest doesn't shut down anymore on "xl shutdown" .. it just does nothing (tried multiple times).
After that I ssh'ed into the guest and did a "halt -p" ... the guest shut down, but it remained in "xl list" in blocked state ..
Doing an "xl console" shows:

[30024.559656] net eth0: me here .. cons:8713451 slots:1 rp:8713462 max:18 err:0 rx->id:234 rx->offset:0 size:4294967295 ref:-131941395332550
[30024.559666] net eth0: rx->offset: 0, size: 4294967295
[30024.559671] net eth0: me here .. cons:8713451 slots:2 rp:8713462 max:18 err:-22 rx->id:236 rx->offset:0 size:4294967295 ref:-131941395332504
[30024.559680] net eth0: rx->offset: 0, size: 4294967295
[30024.559686] net eth0: me here .. cons:8713451 slots:3 rp:8713462 max:18 err:-22 rx->id:1 rx->offset:0 size:4294967295 ref:-131941395332390
[30536.665135] net eth0: Need more slots cons:9088533 slots:6 rp:9088539 max:17 err:0 rx-id:26 rx->offset:0 size:0 ref:687
[39258.090375] net eth0: rx->offset: 0, size: 4294967295
[39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
[39258.090401] net eth0: rx->offset: 0, size: 4294967295
[39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
[39258.090415] net eth0: rx->offset: 0, size: 4294967295
[39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571
INIT: Switching to runlevel: 0
INIT: Sending processes the TERM signal
[info] Using makefile-style concurrent boot in runlevel 0.
Stopping openntpd: ntpd.
[ ok ] Stopping mail-transfer-agent: nullmailer.
[ ok ] Stopping web server: apache2 ... waiting .
[ ok ] Asking all remaining processes to terminate...done.
[ ok ] All processes ended within 2 seconds...done.
[ ok ] Stopping enhanced syslogd: rsyslogd.
[ ok ] Deconfiguring network interfaces...done.
[ ok ] Deactivating swap...done.
[65015.958259] EXT4-fs (xvda1): re-mounted. Opts: (null)
[info] Will now halt.
[65018.166546] vif vif-0: 5 starting transaction
[65160.490419] INFO: task halt:4846 blocked for more than 120 seconds.
[65160.490464]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
[65160.490485] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[65160.490510] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000
[65280.490470] INFO: task halt:4846 blocked for more than 120 seconds.
[65280.490517]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
[65280.490540] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[65280.490564] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000


Especially the "[65018.166546] vif vif-0: 5 starting transaction" message after the halt surprises me ..

--
Sander

> Feb 26 08:53:20 serveerstertje kernel: [39324.917255] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15101115 cons:15101112 j:8
> Feb 26 08:53:56 serveerstertje kernel: [39361.001436] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15127649 cons:15127648 j:13
> Feb 26 08:54:00 serveerstertje kernel: [39364.725613] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15130263 cons:15130261 j:2
> Feb 26 08:54:04 serveerstertje kernel: [39368.739504] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15133143 cons:15133141 j:0
> Feb 26 08:54:20 serveerstertje kernel: [39384.665044] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15144113 cons:15144112 j:0
> Feb 26 08:54:29 serveerstertje kernel: [39393.569871] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15150203 cons:15150200 j:0
> Feb 26 08:54:40 serveerstertje kernel: [39404.586566] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15157706 cons:15157704 j:12
> Feb 26 08:54:56 serveerstertje kernel: [39420.759769] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15168839 cons:15168835 j:0
> Feb 26 08:54:56 serveerstertje kernel: [39421.001372] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15169002 cons:15168999 j:8
> Feb 26 08:55:00 serveerstertje kernel: [39424.515073] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15171450 cons:15171447 j:0
> Feb 26 08:55:10 serveerstertje kernel: [39435.154510] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15178773 cons:15178770 j:1
> Feb 26 08:56:19 serveerstertje kernel: [39504.195908] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15227444 cons:15227444 j:0
> Feb 26 08:57:39 serveerstertje kernel: [39583.799392] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15283346 cons:15283344 j:8
> Feb 26 08:57:55 serveerstertje kernel: [39599.517673] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15293937 cons:15293935 j:0
> Feb 26 08:58:07 serveerstertje kernel: [39612.156622] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15302891 cons:15302889 j:19
> Feb 26 08:58:07 serveerstertje kernel: [39612.400907] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15303034 cons:15303033 j:0
> Feb 26 08:58:18 serveerstertje kernel: [39623.439383] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15310915 cons:15310911 j:0
> Feb 26 08:58:39 serveerstertje kernel: [39643.521808] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15324769 cons:15324766 j:1

> Feb 26 09:27:07 serveerstertje kernel: [41351.622501] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16502932 cons:16502932 j:8
> Feb 26 09:27:19 serveerstertje kernel: [41363.541003] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16510837 cons:16510834 j:7
> Feb 26 09:27:23 serveerstertje kernel: [41368.133306] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16513940 cons:16513937 j:0
> Feb 26 09:27:43 serveerstertje kernel: [41388.025147] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16527870 cons:16527868 j:0
> Feb 26 09:27:47 serveerstertje kernel: [41391.530802] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16530437 cons:16530437 j:7
> Feb 26 09:27:51 serveerstertje kernel: [41395.521166] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533320 cons:16533317 j:6
> Feb 26 09:27:51 serveerstertje kernel: [41395.767066] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533469 cons:16533469 j:0
> Feb 26 09:27:51 serveerstertje kernel: [41395.802319] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533533 cons:16533533 j:24
> Feb 26 09:27:51 serveerstertje kernel: [41395.837456] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533534 cons:16533534 j:1
> Feb 26 09:27:51 serveerstertje kernel: [41395.872587] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533597 cons:16533596 j:25
> Feb 26 09:27:51 serveerstertje kernel: [41396.192784] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533833 cons:16533832 j:3
> Feb 26 09:27:51 serveerstertje kernel: [41396.235611] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533890 cons:16533890 j:30
> Feb 26 09:27:51 serveerstertje kernel: [41396.271047] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533898 cons:16533896 j:3


> --
> Sander





>>>
>>> Will keep you posted when it triggers again with the extra info in the warn.
>>>
>>> --
>>> Sander
>>>
>>>
>>>
>>>> Thanks
>>>> Annie
>>>>>     Xen reports:
>>>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>>>>
>>>>>
>>>>>
>>>>> Another issue with networking, when running both dom0 and domU's with a 3.14-rc3 kernel:
>>>>>     - I can ping the guests from dom0
>>>>>     - I can ping dom0 from the guests
>>>>>     - But I can't ssh or access things by http
>>>>>     - I don't see any relevant error messages ...
>>>>>     - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>>>>>       (that previously worked fine)
>>>>>
>>>>> --
>>>>>
>>>>> Sander
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Xen-devel mailing list
>>>>> Xen-devel@lists.xen.org
>>>>> http://lists.xen.org/xen-devel
>>>
>>>



> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 4fc46eb..4d720b4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1667,6 +1667,8 @@ skip_vfb:
>                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
>              } else if (!strcmp(buf, "cirrus")) {
>                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
> +            } else if (!strcmp(buf, "none")) {
> +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
>              } else {
>                  fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
>                  exit(1);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 107b000..ab56927 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -265,9 +265,10 @@ get_maptrack_handle(
>      while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
>      {
>          nr_frames = nr_maptrack_frames(lgt);
> -        if ( nr_frames >= max_nr_maptrack_frames() )
> +        if ( nr_frames >= max_nr_maptrack_frames() ){
> +                 gdprintk(XENLOG_INFO, "Already at max maptrack size: %u/%u frames\n",nr_frames, max_nr_maptrack_frames());
>              break;
> -
> +       }
>          new_mt = alloc_xenheap_page();
>          if ( !new_mt )
>              break;
> @@ -285,8 +286,8 @@ get_maptrack_handle(
>          smp_wmb();
>          lgt->maptrack_limit      = new_mt_limit;

> -        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
> -                 nr_frames + 1);
> +        gdprintk(XENLOG_INFO, "Increased maptrack size to %u/%u frames\n",
> +                 nr_frames + 1, max_nr_maptrack_frames());
>      }

>      spin_unlock(&lgt->lock);
> @@ -1854,7 +1855,7 @@ __acquire_grant_for_copy(

>      if ( unlikely(gref >= nr_grant_entries(rgt)) )
>          PIN_FAIL(unlock_out, GNTST_bad_gntref,
> -                 "Bad grant reference %ld\n", gref);
> +                 "Bad grant reference %ld | %d | %d | %d \n", gref, nr_grant_entries(rgt), rgt->gt_version, ldom);

>      act = &active_entry(rgt, gref);
>      shah = shared_entry_header(rgt, gref);
> @@ -2830,15 +2831,19 @@ static void gnttab_usage_print(struct domain *rd)
>      int first = 1;
>      grant_ref_t ref;
>      struct grant_table *gt = rd->grant_table;
> -
> +    unsigned int active=0;
> +/*
>      printk("      -------- active --------       -------- shared --------\n");
>      printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
> -
> +*/
>      spin_lock(&gt->lock);

>      if ( gt->gt_version == 0 )
>          goto out;

> +    printk("grant-table for remote domain:%5d (v%d) nr_grant_entries: %d\n",
> +                   rd->domain_id, gt->gt_version, nr_grant_entries(gt));
> +
>      for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
>      {
>          struct active_grant_entry *act;
> @@ -2875,19 +2880,22 @@ static void gnttab_usage_print(struct domain *rd)
>                     rd->domain_id, gt->gt_version);
>              first = 0;
>          }
> -
> +        active++;
>          /*      [ddd]    ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
> -        printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
> -               ref, act->domid, act->frame, act->pin,
> -               sha->domid, frame, status);
> +        /* printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n", ref, act->domid, act->frame, act->pin, sha->domid, frame, status); */
>      }

>   out:
>      spin_unlock(&gt->lock);

> +    printk("grant-table for remote domain:%5d active entries: %d\n",
> +                   rd->domain_id, active);
> +/*
>      if ( first )
>          printk("grant-table for remote domain:%5d ... "
>                 "no active grant table entries\n", rd->domain_id);
> +*/
> +
>  }

>  static void gnttab_usage_print_all(unsigned char key)






> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index e5284bc..6d93358 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -482,20 +482,23 @@ static void xenvif_rx_action(struct xenvif *vif)
>                 .meta  = vif->meta,
>         };

> +       int j=0;
> +
>         skb_queue_head_init(&rxq);

>         while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
>                 RING_IDX max_slots_needed;
>                 int i;
> +               int nr_frags;

>                 /* We need a cheap worse case estimate for the number of
>                  * slots we'll use.
>                  */

>                 max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
> -                                               skb_headlen(skb),
> -                                               PAGE_SIZE);
> -               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> +                                               skb_headlen(skb), PAGE_SIZE);
> +               nr_frags = skb_shinfo(skb)->nr_frags;
> +               for (i = 0; i < nr_frags; i++) {
>                         unsigned int size;
>                         size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
>                         max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
> @@ -508,6 +511,9 @@ static void xenvif_rx_action(struct xenvif *vif)
>                 if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
>                         skb_queue_head(&vif->rx_queue, skb);
>                         need_to_notify = true;
> +                       if (net_ratelimit())
> +                               netdev_err(vif->dev, "!?!?!?! skb may not fit .. bail out now max_slots_needed:%d GSO:%d vif->rx_last_skb_slots:%d nr_frags:%d prod:%d cons:%d j:%d\n",
> +                                       max_slots_needed, (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ? 1 : 0, vif->rx_last_skb_slots, nr_frags,vif->rx.sring->req_prod,vif->rx.req_cons,j);
>                         vif->rx_last_skb_slots = max_slots_needed;
>                         break;
>                 } else
> @@ -518,6 +524,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>                 BUG_ON(sco->meta_slots_used > max_slots_needed);

>                 __skb_queue_tail(&rxq, skb);
> +               j++;
>         }

>         BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> @@ -541,7 +548,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>                         resp->offset = vif->meta[npo.meta_cons].gso_size;
>                         resp->id = vif->meta[npo.meta_cons].id;
>                         resp->status = sco->meta_slots_used;
> -
> +
>                         npo.meta_cons++;
>                         sco->meta_slots_used--;
>                 }
> @@ -705,7 +712,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>                  */
>                 if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>                         if (net_ratelimit())
> -                               netdev_dbg(vif->dev,
> +                               netdev_err(vif->dev,
>                                            "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>                                            slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>                         drop_err = -E2BIG;
> @@ -728,7 +735,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>                  */
>                 if (!drop_err && txp->size > first->size) {
>                         if (net_ratelimit())
> -                               netdev_dbg(vif->dev,
> +                               netdev_err(vif->dev,
>                                            "Invalid tx request, slot size %u > remaining size %u\n",
>                                            txp->size, first->size);
>                         drop_err = -EIO;



> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index f9daa9e..67d5221 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -753,6 +753,7 @@ static int xennet_get_responses(struct netfront_info *np,
>                         if (net_ratelimit())
>                                 dev_warn(dev, "rx->offset: %x, size: %u\n",
>                                          rx->offset, rx->status);
> +                               dev_warn(dev, "me here .. cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
>                         xennet_move_rx_slot(np, skb, ref);
>                         err = -EINVAL;
>                         goto next;
> @@ -784,7 +785,7 @@ next:

>                 if (cons + slots == rp) {
>                         if (net_ratelimit())
> -                               dev_warn(dev, "Need more slots\n");
> +                               dev_warn(dev, "Need more slots cons:%d slots:%d rp:%d max:%d err:%d rx-id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
>                         err = -ENOENT;
>                         break;
>                 }
> @@ -803,7 +804,6 @@ next:

>         if (unlikely(err))
>                 np->rx.rsp_cons = cons + slots;
> -
>         return err;
>  }

> @@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,

>                 /* Ethernet work: Delayed to here as it peeks the header. */
>                 skb->protocol = eth_type_trans(skb, dev);
> +               skb_reset_network_header(skb);

>                 if (checksum_setup(dev, skb)) {
>                         kfree_skb(skb);





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:11:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIg94-0000ze-08; Wed, 26 Feb 2014 15:11:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WIg92-0000zZ-4k
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 15:11:36 +0000
Received: from [85.158.137.68:34553] by server-6.bemta-3.messagelabs.com id
	9A/91-09180-5240E035; Wed, 26 Feb 2014 15:11:33 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393427491!4392775!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30837 invoked from network); 26 Feb 2014 15:11:31 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 15:11:31 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53650 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WIg7H-0007UY-Bp; Wed, 26 Feb 2014 16:09:47 +0100
Date: Wed, 26 Feb 2014 16:11:23 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <59358334.20140226161123@eikelenboom.it>
To: Sander Eikelenboom <linux@eikelenboom.it>
In-Reply-To: <824074181.20140226101442@eikelenboom.it>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
MIME-Version: 1.0
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Wednesday, February 26, 2014, 10:14:42 AM, you wrote:


> Friday, February 21, 2014, 7:32:08 AM, you wrote:


>> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>>>
>>>
>>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>>>>> Hi All,
>>>>>
>>>>> I'm currently having some network troubles with Xen and recent linux kernels.
>>>>>
>>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>>>>
>>>>>     In the guest:
>>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [57539.859610] net eth0: Need more slots
>>>>>     [58157.675939] net eth0: Need more slots
>>>>>     [58725.344712] net eth0: Need more slots
>>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>>>>     [61815.849225] net eth0: Need more slots
>>>> This issue is familiar... and I thought it got fixed.
>>>>   From my original analysis of a similar issue I hit before, the root cause
>>>> is that netback still creates a response when the ring is full. I remember a
>>>> larger MTU could trigger this issue before; what is the MTU size?
>>> In dom0 both for the physical nics and the guest vif's MTU=1500
>>> In domU the eth0 also has MTU=1500.
>>>
>>> So it's not jumbo frames .. just everywhere the same plain defaults ..
>>>
>>> With the patch from Wei that solves the other issue, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch.
>>> I have extended the "need more slots" warning to also print cons, slots, max, rx->offset and size; hope that gives some more insight.
>>> But it is indeed the VM where I had similar issues before; the primary thing this VM does is 2 simultaneous rsync's (one push, one pull) with some gigabytes of data.
>>>
>>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; I don't know if it's a cause or an effect though.

>> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
>> Probably the response overlaps the request and the grant copy returns an error
>> when using a wrong grant reference; netback then returns resp->status with
>> XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the frontend.
>> Would it be possible to print a log in xenvif_rx_action of netback to see
>> whether something is wrong with the max slots and used slots?

>> Thanks
>> Annie

> Looking more closely, these are perhaps 2 different issues ... the bad grant references do not happen
> at the same time as the netfront messages in the guest.

> I added some debug patches to the kernel netback, netfront and Xen grant-table code (see below).
> One of the things was to simplify the code for the debug key that prints the grant tables; the present code
> takes too long to execute and brings down the box due to stalls and NMIs. So it now only prints
> the number of entries per domain.


> Issue 1: grant_table.c:1858:d0 Bad grant reference

> After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
> The maptrack also seems to grow quite fast, and the number of entries seems to have gone up quite fast as well.

> Most domains have just one disk (blkfront/blkback) and one nic; a few have a second disk.
> The blk drivers use persistent grants, so I would assume those get reused and the count would not increase (by much).

> Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 somewhere during the night.
> Domain 7 is the domain that happens to give the netfront messages.

> I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries ..
> Also, is this amount of grant entries "normal"? Or could it be a leak somewhere?

> (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (4) to (5) frames.
> (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (5) to (6) frames.
> (XEN) [2014-02-26 00:00:38] grant_table.c:290:d0 Increased maptrack size to 13/256 frames
> (XEN) [2014-02-26 00:01:13] grant_table.c:290:d0 Increased maptrack size to 14/256 frames
> (XEN) [2014-02-26 04:02:55] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> (XEN) [2014-02-26 04:15:33] grant_table.c:290:d0 Increased maptrack size to 15/256 frames
> (XEN) [2014-02-26 04:15:53] grant_table.c:290:d0 Increased maptrack size to 16/256 frames
> (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 17/256 frames
> (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 18/256 frames
> (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 19/256 frames
> (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 20/256 frames
> (XEN) [2014-02-26 04:15:59] grant_table.c:290:d0 Increased maptrack size to 21/256 frames
> (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 22/256 frames
> (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 23/256 frames
> (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 24/256 frames
> (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 25/256 frames
> (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 26/256 frames
> (XEN) [2014-02-26 04:16:17] grant_table.c:290:d0 Increased maptrack size to 27/256 frames
> (XEN) [2014-02-26 04:16:20] grant_table.c:290:d0 Increased maptrack size to 28/256 frames
> (XEN) [2014-02-26 04:16:56] grant_table.c:290:d0 Increased maptrack size to 29/256 frames
> (XEN) [2014-02-26 05:15:04] grant_table.c:290:d0 Increased maptrack size to 30/256 frames
> (XEN) [2014-02-26 05:15:05] grant_table.c:290:d0 Increased maptrack size to 31/256 frames
> (XEN) [2014-02-26 05:21:15] grant_table.c:1858:d0 Bad grant reference 107085839 | 2048 | 1 | 0
> (XEN) [2014-02-26 05:29:47] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
> (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all [ key 'g' pressed
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 active entries: 0
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1) nr_grant_entries: 3072
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 active entries: 2117
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 active entries: 1061
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 active entries: 1045
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 active entries: 1060
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 active entries: 709
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1) nr_grant_entries: 2048
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1)
> (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 active entries: 163
> (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all ] done
> (XEN) [2014-02-26 07:55:09] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> (XEN) [2014-02-26 08:37:16] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0



> Issue 2: net eth0: rx->offset: 0, size: xxxxxxxxxx

> In the guest (domain 7):

> Feb 26 08:55:09 backup kernel: [39258.090375] net eth0: rx->offset: 0, size: 4294967295
> Feb 26 08:55:09 backup kernel: [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
> Feb 26 08:55:09 backup kernel: [39258.090401] net eth0: rx->offset: 0, size: 4294967295
> Feb 26 08:55:09 backup kernel: [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
> Feb 26 08:55:09 backup kernel: [39258.090415] net eth0: rx->offset: 0, size: 4294967295
> Feb 26 08:55:09 backup kernel: [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571

> In dom0 I don't see any netback warnings specific to this domain at these times. The printk's I added do trigger quite often, but those are probably not
> erroneous; they seem to occur only on the vif of domain 7 (probably the only domain that is swamping the network, doing rsync and webdavs, which causes some fragmented packets).

Another addition ... the guest doesn't shut down anymore on "xl shutdown" .. it just does .. erhmm nothing .. (tried multiple times)
After that I ssh'ed into the guest and did a "halt -p" ... the guest shut down, but it remained in "xl list" in blocked state ..
Doing an "xl console" shows:

[30024.559656] net eth0: me here .. cons:8713451 slots:1 rp:8713462 max:18 err:0 rx->id:234 rx->offset:0 size:4294967295 ref:-131941395332550
[30024.559666] net eth0: rx->offset: 0, size: 4294967295
[30024.559671] net eth0: me here .. cons:8713451 slots:2 rp:8713462 max:18 err:-22 rx->id:236 rx->offset:0 size:4294967295 ref:-131941395332504
[30024.559680] net eth0: rx->offset: 0, size: 4294967295
[30024.559686] net eth0: me here .. cons:8713451 slots:3 rp:8713462 max:18 err:-22 rx->id:1 rx->offset:0 size:4294967295 ref:-131941395332390
[30536.665135] net eth0: Need more slots cons:9088533 slots:6 rp:9088539 max:17 err:0 rx-id:26 rx->offset:0 size:0 ref:687
[39258.090375] net eth0: rx->offset: 0, size: 4294967295
[39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
[39258.090401] net eth0: rx->offset: 0, size: 4294967295
[39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
[39258.090415] net eth0: rx->offset: 0, size: 4294967295
[39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571
INIT: Switching to runlevel: 0
INIT: Sending processes the TERM signal
[info] Using makefile-style concurrent boot in runlevel 0.
Stopping openntpd: ntpd.
[ ok ] Stopping mail-transfer-agent: nullmailer.
[ ok ] Stopping web server: apache2 ... waiting .
[ ok ] Asking all remaining processes to terminate...done.
[ ok ] All processes ended within 2 seconds...done.
[ ok ] Stopping enhanced syslogd: rsyslogd.
[ ok ] Deconfiguring network interfaces...done.
[ ok ] Deactivating swap...done.
[65015.958259] EXT4-fs (xvda1): re-mounted. Opts: (null)
[info] Will now halt.
[65018.166546] vif vif-0: 5 starting transaction
[65160.490419] INFO: task halt:4846 blocked for more than 120 seconds.
[65160.490464]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
[65160.490485] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[65160.490510] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000
[65280.490470] INFO: task halt:4846 blocked for more than 120 seconds.
[65280.490517]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
[65280.490540] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[65280.490564] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000


Especially the "[65018.166546] vif vif-0: 5 starting transaction" after the halt surprises me ..

--
Sander

> Feb 26 08:53:20 serveerstertje kernel: [39324.917255] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15101115 cons:15101112 j:8
> Feb 26 08:53:56 serveerstertje kernel: [39361.001436] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15127649 cons:15127648 j:13
> Feb 26 08:54:00 serveerstertje kernel: [39364.725613] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15130263 cons:15130261 j:2
> Feb 26 08:54:04 serveerstertje kernel: [39368.739504] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15133143 cons:15133141 j:0
> Feb 26 08:54:20 serveerstertje kernel: [39384.665044] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15144113 cons:15144112 j:0
> Feb 26 08:54:29 serveerstertje kernel: [39393.569871] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15150203 cons:15150200 j:0
> Feb 26 08:54:40 serveerstertje kernel: [39404.586566] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15157706 cons:15157704 j:12
> Feb 26 08:54:56 serveerstertje kernel: [39420.759769] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15168839 cons:15168835 j:0
> Feb 26 08:54:56 serveerstertje kernel: [39421.001372] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15169002 cons:15168999 j:8
> Feb 26 08:55:00 serveerstertje kernel: [39424.515073] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15171450 cons:15171447 j:0
> Feb 26 08:55:10 serveerstertje kernel: [39435.154510] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15178773 cons:15178770 j:1
> Feb 26 08:56:19 serveerstertje kernel: [39504.195908] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15227444 cons:15227444 j:0
> Feb 26 08:57:39 serveerstertje kernel: [39583.799392] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15283346 cons:15283344 j:8
> Feb 26 08:57:55 serveerstertje kernel: [39599.517673] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15293937 cons:15293935 j:0
> Feb 26 08:58:07 serveerstertje kernel: [39612.156622] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15302891 cons:15302889 j:19
> Feb 26 08:58:07 serveerstertje kernel: [39612.400907] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15303034 cons:15303033 j:0
> Feb 26 08:58:18 serveerstertje kernel: [39623.439383] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15310915 cons:15310911 j:0
> Feb 26 08:58:39 serveerstertje kernel: [39643.521808] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15324769 cons:15324766 j:1

> Feb 26 09:27:07 serveerstertje kernel: [41351.622501] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16502932 cons:16502932 j:8
> Feb 26 09:27:19 serveerstertje kernel: [41363.541003] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16510837 cons:16510834 j:7
> Feb 26 09:27:23 serveerstertje kernel: [41368.133306] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16513940 cons:16513937 j:0
> Feb 26 09:27:43 serveerstertje kernel: [41388.025147] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16527870 cons:16527868 j:0
> Feb 26 09:27:47 serveerstertje kernel: [41391.530802] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16530437 cons:16530437 j:7
> Feb 26 09:27:51 serveerstertje kernel: [41395.521166] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533320 cons:16533317 j:6
> Feb 26 09:27:51 serveerstertje kernel: [41395.767066] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533469 cons:16533469 j:0
> Feb 26 09:27:51 serveerstertje kernel: [41395.802319] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533533 cons:16533533 j:24
> Feb 26 09:27:51 serveerstertje kernel: [41395.837456] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533534 cons:16533534 j:1
> Feb 26 09:27:51 serveerstertje kernel: [41395.872587] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533597 cons:16533596 j:25
> Feb 26 09:27:51 serveerstertje kernel: [41396.192784] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533833 cons:16533832 j:3
> Feb 26 09:27:51 serveerstertje kernel: [41396.235611] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533890 cons:16533890 j:30
> Feb 26 09:27:51 serveerstertje kernel: [41396.271047] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533898 cons:16533896 j:3


> --
> Sander





>>>
>>> Will keep you posted when it triggers again with the extra info in the warn.
>>>
>>> --
>>> Sander
>>>
>>>
>>>
>>>> Thanks
>>>> Annie
>>>>>     Xen reports:
>>>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
>>>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
>>>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
>>>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
>>>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
>>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
>>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
>>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
>>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
>>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
>>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
>>>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
>>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
>>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
>>>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
>>>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
>>>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
>>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
>>>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
>>>>>
>>>>>
>>>>>
>>>>> Another issue with networking is when running both dom0 and domU's with a 3.14-rc3 kernel:
>>>>>     - i can ping the guests from dom0
>>>>>     - i can ping dom0 from the guests
>>>>>     - But i can't ssh or access things by http
>>>>>     - I don't see any relevant error messages ...
>>>>>     - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
>>>>>       (that previously worked fine)
>>>>>
>>>>> --
>>>>>
>>>>> Sander
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Xen-devel mailing list
>>>>> Xen-devel@lists.xen.org
>>>>> http://lists.xen.org/xen-devel
>>>
>>>



> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 4fc46eb..4d720b4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1667,6 +1667,8 @@ skip_vfb:
>                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
>              } else if (!strcmp(buf, "cirrus")) {
>                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
> +            } else if (!strcmp(buf, "none")) {
> +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
>              } else {
>                  fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
>                  exit(1);
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 107b000..ab56927 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -265,9 +265,10 @@ get_maptrack_handle(
>      while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
>      {
>          nr_frames = nr_maptrack_frames(lgt);
> -        if ( nr_frames >= max_nr_maptrack_frames() )
> +        if ( nr_frames >= max_nr_maptrack_frames() ){
> +                 gdprintk(XENLOG_INFO, "Already at max maptrack size: %u/%u frames\n",nr_frames, max_nr_maptrack_frames());
>              break;
> -
> +       }
>          new_mt = alloc_xenheap_page();
>          if ( !new_mt )
>              break;
> @@ -285,8 +286,8 @@ get_maptrack_handle(
>          smp_wmb();
>          lgt->maptrack_limit      = new_mt_limit;

> -        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
> -                 nr_frames + 1);
> +        gdprintk(XENLOG_INFO, "Increased maptrack size to %u/%u frames\n",
> +                 nr_frames + 1, max_nr_maptrack_frames());
>      }

>      spin_unlock(&lgt->lock);
> @@ -1854,7 +1855,7 @@ __acquire_grant_for_copy(

>      if ( unlikely(gref >= nr_grant_entries(rgt)) )
>          PIN_FAIL(unlock_out, GNTST_bad_gntref,
> -                 "Bad grant reference %ld\n", gref);
> +                 "Bad grant reference %ld | %d | %d | %d \n", gref, nr_grant_entries(rgt), rgt->gt_version, ldom);

>      act = &active_entry(rgt, gref);
>      shah = shared_entry_header(rgt, gref);
> @@ -2830,15 +2831,19 @@ static void gnttab_usage_print(struct domain *rd)
>      int first = 1;
>      grant_ref_t ref;
>      struct grant_table *gt = rd->grant_table;
> -
> +    unsigned int active=0;
> +/*
>      printk("      -------- active --------       -------- shared --------\n");
>      printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
> -
> +*/
>      spin_lock(&gt->lock);

>      if ( gt->gt_version == 0 )
>          goto out;

> +    printk("grant-table for remote domain:%5d (v%d) nr_grant_entries: %d\n",
> +                   rd->domain_id, gt->gt_version, nr_grant_entries(gt));
> +
>      for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
>      {
>          struct active_grant_entry *act;
> @@ -2875,19 +2880,22 @@ static void gnttab_usage_print(struct domain *rd)
>                     rd->domain_id, gt->gt_version);
>              first = 0;
>          }
> -
> +        active++;
>          /*      [ddd]    ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
> -        printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
> -               ref, act->domid, act->frame, act->pin,
> -               sha->domid, frame, status);
> +        /* printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n", ref, act->domid, act->frame, act->pin, sha->domid, frame, status); */
>      }

>   out:
>      spin_unlock(&gt->lock);

> +    printk("grant-table for remote domain:%5d active entries: %d\n",
> +                   rd->domain_id, active);
> +/*
>      if ( first )
>          printk("grant-table for remote domain:%5d ... "
>                 "no active grant table entries\n", rd->domain_id);
> +*/
> +
>  }

>  static void gnttab_usage_print_all(unsigned char key)






> diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> index e5284bc..6d93358 100644
> --- a/drivers/net/xen-netback/netback.c
> +++ b/drivers/net/xen-netback/netback.c
> @@ -482,20 +482,23 @@ static void xenvif_rx_action(struct xenvif *vif)
>                 .meta  = vif->meta,
>         };

> +       int j=0;
> +
>         skb_queue_head_init(&rxq);

>         while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
>                 RING_IDX max_slots_needed;
>                 int i;
> +               int nr_frags;

>                 /* We need a cheap worse case estimate for the number of
>                  * slots we'll use.
>                  */

>                 max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
> -                                               skb_headlen(skb),
> -                                               PAGE_SIZE);
> -               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> +                                               skb_headlen(skb), PAGE_SIZE);
> +               nr_frags = skb_shinfo(skb)->nr_frags;
> +               for (i = 0; i < nr_frags; i++) {
>                         unsigned int size;
>                         size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
>                         max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
> @@ -508,6 +511,9 @@ static void xenvif_rx_action(struct xenvif *vif)
>                 if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
>                         skb_queue_head(&vif->rx_queue, skb);
>                         need_to_notify = true;
> +                       if (net_ratelimit())
> +                               netdev_err(vif->dev, "!?!?!?! skb may not fit .. bail out now max_slots_needed:%d GSO:%d vif->rx_last_skb_slots:%d nr_frags:%d prod:%d cons:%d j:%d\n",
> +                                       max_slots_needed, (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ? 1 : 0, vif->rx_last_skb_slots, nr_frags,vif->rx.sring->req_prod,vif->rx.req_cons,j);
>                         vif->rx_last_skb_slots = max_slots_needed;
>                         break;
>                 } else
> @@ -518,6 +524,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>                 BUG_ON(sco->meta_slots_used > max_slots_needed);

>                 __skb_queue_tail(&rxq, skb);
> +               j++;
>         }

>         BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> @@ -541,7 +548,7 @@ static void xenvif_rx_action(struct xenvif *vif)
>                         resp->offset = vif->meta[npo.meta_cons].gso_size;
>                         resp->id = vif->meta[npo.meta_cons].id;
>                         resp->status = sco->meta_slots_used;
> -
> +
>                         npo.meta_cons++;
>                         sco->meta_slots_used--;
>                 }
> @@ -705,7 +712,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>                  */
>                 if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
>                         if (net_ratelimit())
> -                               netdev_dbg(vif->dev,
> +                               netdev_err(vif->dev,
>                                            "Too many slots (%d) exceeding limit (%d), dropping packet\n",
>                                            slots, XEN_NETBK_LEGACY_SLOTS_MAX);
>                         drop_err = -E2BIG;
> @@ -728,7 +735,7 @@ static int xenvif_count_requests(struct xenvif *vif,
>                  */
>                 if (!drop_err && txp->size > first->size) {
>                         if (net_ratelimit())
> -                               netdev_dbg(vif->dev,
> +                               netdev_err(vif->dev,
>                                            "Invalid tx request, slot size %u > remaining size %u\n",
>                                            txp->size, first->size);
>                         drop_err = -EIO;



> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index f9daa9e..67d5221 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -753,6 +753,7 @@ static int xennet_get_responses(struct netfront_info *np,
>                         if (net_ratelimit())
>                                 dev_warn(dev, "rx->offset: %x, size: %u\n",
>                                          rx->offset, rx->status);
> +                               dev_warn(dev, "me here .. cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
>                         xennet_move_rx_slot(np, skb, ref);
>                         err = -EINVAL;
>                         goto next;
> @@ -784,7 +785,7 @@ next:

>                 if (cons + slots == rp) {
>                         if (net_ratelimit())
> -                               dev_warn(dev, "Need more slots\n");
> +                               dev_warn(dev, "Need more slots cons:%d slots:%d rp:%d max:%d err:%d rx-id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
>                         err = -ENOENT;
>                         break;
>                 }
> @@ -803,7 +804,6 @@ next:

>         if (unlikely(err))
>                 np->rx.rsp_cons = cons + slots;
> -
>         return err;
>  }

> @@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,

>                 /* Ethernet work: Delayed to here as it peeks the header. */
>                 skb->protocol = eth_type_trans(skb, dev);
> +               skb_reset_network_header(skb);

>                 if (checksum_setup(dev, skb)) {
>                         kfree_skb(skb);





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDn-0001Bf-BD; Wed, 26 Feb 2014 15:16:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCz-00018e-9s
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:41 +0000
Received: from [85.158.143.35:55407] by server-2.bemta-4.messagelabs.com id
	08/85-04779-C150E035; Wed, 26 Feb 2014 15:15:40 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393427738!8499268!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6183 invoked from network); 26 Feb 2014 15:15:39 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:39 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id 01574263;
	Wed, 26 Feb 2014 15:15:38 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 88B5B71;
	Wed, 26 Feb 2014 15:15:35 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:28 -0500
Message-Id: <1393427668-60228-9-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 8/8] pvqspinlock,
	x86: Enable KVM to use qspinlock's PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch enables KVM to use the queue spinlock's PV support code
when the PARAVIRT_SPINLOCKS kernel config option is set. However,
PV support for Xen is not ready yet and so the queue spinlock will
still have to be disabled when PARAVIRT_SPINLOCKS config option is
on with Xen.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/Kconfig.locks  |    2 +-
 2 files changed, 55 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index f318e78..3ddc436 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -568,6 +568,7 @@ static void kvm_kick_cpu(int cpu)
 	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
 }
 
+#ifndef CONFIG_QUEUE_SPINLOCK
 enum kvm_contention_stat {
 	TAKEN_SLOW,
 	TAKEN_SLOW_PICKUP,
@@ -795,6 +796,55 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 		}
 	}
 }
+#else /* !CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_KVM_DEBUG_FS
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+static u32 lh_kick_stats;	/* Lock holder kick count */
+static u32 qh_kick_stats;	/* Queue head kick count  */
+static u32 nn_kick_stats;	/* Next node kick count   */
+
+static int __init kvm_spinlock_debugfs(void)
+{
+	d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
+	if (!d_kvm_debug) {
+		printk(KERN_WARNING
+		       "Could not create 'kvm' debugfs directory\n");
+		return -ENOMEM;
+	}
+	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
+
+	debugfs_create_u32("lh_kick_stats", 0644, d_spin_debug, &lh_kick_stats);
+	debugfs_create_u32("qh_kick_stats", 0644, d_spin_debug, &qh_kick_stats);
+	debugfs_create_u32("nn_kick_stats", 0644, d_spin_debug, &nn_kick_stats);
+
+	return 0;
+}
+
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+	if (type == PV_KICK_LOCK_HOLDER)
+		add_smp(&lh_kick_stats, 1);
+	else if (type == PV_KICK_QUEUE_HEAD)
+		add_smp(&qh_kick_stats, 1);
+	else
+		add_smp(&nn_kick_stats, 1);
+}
+fs_initcall(kvm_spinlock_debugfs);
+
+#else /* CONFIG_KVM_DEBUG_FS */
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+}
+#endif /* CONFIG_KVM_DEBUG_FS */
+
+static void kvm_kick_cpu_type(int cpu, enum pv_kick_type type)
+{
+	kvm_kick_cpu(cpu);
+	inc_kick_stats(type);
+}
+#endif /* !CONFIG_QUEUE_SPINLOCK */
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -807,8 +857,12 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }
 
 static __init int kvm_spinlock_init_jump(void)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index f185584..a70fdeb 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
 
 config QUEUE_SPINLOCK
 	def_bool y if ARCH_USE_QUEUE_SPINLOCK
-	depends on SMP && !PARAVIRT_SPINLOCKS
+	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDo-0001CM-FF; Wed, 26 Feb 2014 15:16:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgD8-00019j-4e
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:50 +0000
Received: from [193.109.254.147:52968] by server-14.bemta-14.messagelabs.com
	id 6C/1D-29228-5250E035; Wed, 26 Feb 2014 15:15:49 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393427747!6981985!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20958 invoked from network); 26 Feb 2014 15:15:48 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:48 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id F0D5F24B;
	Wed, 26 Feb 2014 15:15:32 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 80D2572;
	Wed, 26 Feb 2014 15:15:30 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:26 -0500
Message-Id: <1393427668-60228-7-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 6/8] pvqspinlock,
	x86: Rename paravirt_ticketlocks_enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/spinlock.h      |    4 ++--
 arch/x86/kernel/kvm.c                |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
 arch/x86/xen/spinlock.c              |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6e6de1f..283f2cf 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,7 +40,7 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
 #ifdef CONFIG_QUEUE_SPINLOCK
@@ -151,7 +151,7 @@ static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
-	    static_key_false(&paravirt_ticketlocks_enabled)) {
+	    static_key_false(&paravirt_spinlocks_enabled)) {
 		arch_spinlock_t prev;
 
 		prev = *lock;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a489140..f318e78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -818,7 +818,7 @@ static __init int kvm_spinlock_init_jump(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
 
 	return 0;
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index a50032a..8c67cbe 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -17,8 +17,8 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
 #endif
 
 #ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 581521c..06f4a64 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -290,7 +290,7 @@ static __init int xen_init_spinlocks_jump(void)
 	if (!xen_pvspin)
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	return 0;
 }
 early_initcall(xen_init_spinlocks_jump);
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDn-0001Bo-Ms; Wed, 26 Feb 2014 15:16:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgD1-00018z-L0
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:44 +0000
Received: from [193.109.254.147:2365] by server-1.bemta-14.messagelabs.com id
	9A/73-15438-E150E035; Wed, 26 Feb 2014 15:15:42 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393427735!7037747!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14899 invoked from network); 26 Feb 2014 15:15:41 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:41 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3426.houston.hp.com (Postfix) with ESMTP id 6679A2B8;
	Wed, 26 Feb 2014 15:15:25 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id E8E566B;
	Wed, 26 Feb 2014 15:15:22 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:23 -0500
Message-Id: <1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 3/8] qspinlock,
	x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A major problem with the queue spinlock patch is its performance at
low contention levels (2-4 contending tasks), where it is slower than
the corresponding ticket spinlock code path. The following table shows
the execution time (in ms) of a micro-benchmark in which 5M iterations
of the lock/unlock cycle were run on a 10-core Westmere-EX CPU with
two different types of loads - standalone (lock and protected data in
different cachelines) and embedded (lock and protected data in the
same cacheline).

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  135/111	 135/102	  0%/-8%
       2	  732/950	1315/1573	+80%/+66%
       3	 1827/1783	2372/2428	+30%/+36%
       4	 2689/2725	2934/2934	 +9%/+8%
       5	 3736/3748	3658/3652	 -2%/-3%
       6	 4942/4984	4434/4428	-10%/-11%
       7	 6304/6319	5176/5163	-18%/-18%
       8	 7736/7629	5955/5944	-23%/-22%

It can be seen that the performance degradation is particularly bad
with 2 and 3 contending tasks. To reduce that performance deficit at
low contention levels, a special x86-specific optimized code path for
2 contending tasks was added. This special code path is only activated
with fewer than 16K configured CPUs, because it uses a byte in the
32-bit lock word to hold a waiting bit for the 2nd contending task
instead of queuing the waiting task in the queue.
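The waiting-bit mechanism described above can be sketched with C11
atomics. This is an illustrative single-word model, not the kernel
code itself: the constant and function names are mine, and it assumes
a little-endian 32-bit lock word whose low two bytes are the lock and
waiting bytes.

```c
#include <stdatomic.h>
#include <stdint.h>

#define QLOCKED  0x001u   /* lock byte (byte 0 of the word)    */
#define QWAITING 0x100u   /* waiting byte (byte 1 of the word) */

/* One attempt of the 2-task quick path on the combined 16-bit
 * lock+wait halfword. Returns 1 if the lock was acquired outright,
 * 0 if the caller must keep spinning on the lock byte or fall back
 * to the regular MCS-style queuing path. */
static int quick_path_claim(_Atomic uint16_t *lock_wait)
{
    /* Atomically set both bits and inspect what was there before. */
    uint16_t old = atomic_exchange_explicit(lock_wait,
                                            QWAITING | QLOCKED,
                                            memory_order_acquire);
    if (old == 0) {
        /* Word was free: we hold the lock; clear the waiting bit. */
        atomic_store_explicit(lock_wait, QLOCKED, memory_order_release);
        return 1;
    }
    if (old == QWAITING) {
        /* Only the waiting bit was set: the patch treats this as a
         * successful (slightly unfair) acquisition as well. */
        return 1;
    }
    return 0;   /* lock held, and possibly a waiter: spin or queue */
}
```

A second caller that finds the word at QLOCKED or QWAITING|QLOCKED
gets 0 back and, in the real patch, spins on the lock byte before
retrying with cmpxchg.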

With the change, the performance data became:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       2	  732/950	 523/528	-29%/-44%
       3	 1827/1783	2366/2384	+30%/+34%

The queue spinlock code path is now a bit faster than the ticket lock
with 2 contending tasks. There is also a very slight improvement over
the unoptimized queue lock numbers with 3 contending tasks.

The performance of the optimized code path can vary depending on
which of the several different code paths is taken. It is also not as
fair as the ticket spinlock, and there is some variation in the
execution times of the 2 contending tasks. Testing with different
pairs of cores within the same CPU shows an execution time that
varies from 400ms to 1194ms. The ticket spinlock code also shows a
variation of 718-1146ms, which is probably due to the CPU topology
within a socket.

In a multi-socket server, the optimized code path also seems to
produce a big performance improvement in cross-node contention
traffic at low contention levels. The table below shows the
performance with 1 contending task per node:

		[Standalone]
  # of nodes	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	   135		 135		  0%
       2	  4452		 528		-88%
       3	 10767		2369		-78%
       4	 20835		2921		-86%

The micro-benchmark was also run on a 4-core Ivy-Bridge PC. The table
below shows the collected performance data:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  197/178	  181/150	 -8%/-16%
       2	 1109/928    435-1417/697-2125
       3	 1836/1702  1372-3112/1379-3138
       4	 2717/2429  1842-4158/1846-4170

The performance of the queue lock patch varied from run to run,
whereas the performance of the ticket lock was more consistent. The
queue lock figures above give the range of values that were reported.

This optimization can also easily be used by other architectures as
long as they support 8-bit and 16-bit atomic operations.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/qspinlock.h      |   20 ++++-
 include/asm-generic/qspinlock_types.h |    8 ++-
 kernel/locking/qspinlock.c            |  192 ++++++++++++++++++++++++++++++++-
 3 files changed, 215 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 44cefee..98db42e 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,12 +7,30 @@
 
 #define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
 
+#define smp_u8_store_release(p, v)	\
+do {					\
+	barrier();			\
+	ACCESS_ONCE(*p) = (v);		\
+} while (0)
+
+/*
+ * As the qcode will be accessed as a 16-bit word, no offset is needed
+ */
+#define _QCODE_VAL_OFFSET	0
+
 /*
  * x86-64 specific queue spinlock union structure
+ * Besides the slock and lock fields, the other fields are only
+ * valid with less than 16K CPUs.
  */
 union arch_qspinlock {
 	struct qspinlock slock;
-	u8		 lock;	/* Lock bit	*/
+	struct {
+		u8  lock;	/* Lock bit	*/
+		u8  wait;	/* Waiting bit	*/
+		u16 qcode;	/* Queue code	*/
+	};
+	u16 lock_wait;		/* Lock and wait bits */
 };
 
 #define	queue_spin_unlock queue_spin_unlock
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index df981d0..3a02a9e 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -48,7 +48,13 @@ typedef struct qspinlock {
 	atomic_t	qlcode;	/* Lock + queue code */
 } arch_spinlock_t;
 
-#define _QCODE_OFFSET		8
+#if CONFIG_NR_CPUS >= (1 << 14)
+# define _Q_MANY_CPUS
+# define _QCODE_OFFSET	8
+#else
+# define _QCODE_OFFSET	16
+#endif
+
 #define _QSPINLOCK_LOCKED	1U
 #define	_QSPINLOCK_LOCK_MASK	0xff
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ed5efa7..22a63fa 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -109,8 +109,11 @@ static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
  *  2) A smp_u8_store_release() macro for byte size store operation	*
  *  3) A "union arch_qspinlock" structure that include the individual	*
  *     fields of the qspinlock structure, including:			*
- *      o slock - the qspinlock structure				*
- *      o lock  - the lock byte						*
+ *      o slock     - the qspinlock structure				*
+ *      o lock      - the lock byte					*
+ *      o wait      - the waiting byte					*
+ *      o qcode     - the queue node code				*
+ *      o lock_wait - the combined lock and waiting bytes		*
  *									*
  ************************************************************************
  */
@@ -129,6 +132,176 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 		return 1;
 	return 0;
 }
+
+#ifndef _Q_MANY_CPUS
+/*
+ * With less than 16K CPUs, the following optimizations are possible with
+ * the x86 architecture:
+ *  1) The 2nd byte of the 32-bit lock word can be used as a pending bit
+ *     for waiting lock acquirer so that it won't need to go through the
+ *     MCS style locking queuing which has a higher overhead.
+ *  2) The 16-bit queue code can be accessed or modified directly as a
+ *     16-bit short value without disturbing the first 2 bytes.
+ */
+#define	_QSPINLOCK_WAITING	0x100U	/* Waiting bit in 2nd byte   */
+#define	_QSPINLOCK_LWMASK	0xffff	/* Mask for lock & wait bits */
+
+#define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+
+#define queue_spin_trylock_quick queue_spin_trylock_quick
+/**
+ * queue_spin_trylock_quick - fast spinning on the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Old queue spinlock value
+ * Return: 1 if lock acquired, 0 if failed
+ *
+ * This is an optimized contention path for 2 contending tasks. It
+ * should only be entered if no task is waiting in the queue. This
+ * optimized path is not as fair as the ticket spinlock, but it offers
+ * slightly better performance. The regular MCS locking path for 3 or
+ * more contending tasks, however, is fair.
+ *
+ * Depending on the exact timing, there are several different paths that
+ * a contending task can take. The actual contention performance depends
+ * on which path is taken. So it can be faster or slower than the
+ * corresponding ticket spinlock path. On average, it is probably on par
+ * with ticket spinlock.
+ */
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+	u16		     old;
+
+	/*
+	 * Fall into the quick spinning code path only if no one is waiting
+	 * or the lock is available.
+	 */
+	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
+		     (qsval != _QSPINLOCK_WAITING)))
+		return 0;
+
+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
+
+	if (old == 0) {
+		/*
+		 * Got the lock, can clear the waiting bit now
+		 */
+		smp_u8_store_release(&qlock->wait, 0);
+		return 1;
+	} else if (old == _QSPINLOCK_LOCKED) {
+try_again:
+		/*
+		 * Wait until the lock byte is cleared to get the lock
+		 */
+		do {
+			cpu_relax();
+		} while (ACCESS_ONCE(qlock->lock));
+		/*
+		 * Set the lock bit & clear the waiting bit
+		 */
+		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
+			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+			return 1;
+		/*
+		 * Someone has stolen the lock, so wait again
+		 */
+		goto try_again;
+	} else if (old == _QSPINLOCK_WAITING) {
+		/*
+		 * Another task was already waiting while we stole the lock.
+		 * A bit of unfairness here won't change the big picture.
+		 * So just take the lock and return.
+		 */
+		return 1;
+	}
+	/*
+	 * Nothing needs to be done if the old value is
+	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
+	 */
+	return 0;
+}
+
+#define queue_code_xchg queue_code_xchg
+/**
+ * queue_code_xchg - exchange a queue code value
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: New queue code to be exchanged
+ * Return: The original qcode value in the queue spinlock
+ */
+static inline u32 queue_code_xchg(struct qspinlock *lock, u32 qcode)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	return (u32)xchg(&qlock->qcode, (u16)qcode);
+}
+
+#define queue_spin_trylock_and_clr_qcode queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	qcode <<= _QCODE_OFFSET;
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+
+#define queue_get_lock_qcode queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value
+ * Return : > 0 if lock is not available
+ *	   = 0 if lock is free
+ *	   < 0 if lock is taken & can return after cleanup
+ *
+ * It is considered locked when either the lock bit or the wait bit is set.
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	u32 qlcode;
+
+	qlcode = (u32)atomic_read(&lock->qlcode);
+	/*
+	 * In the special case that qlcode contains only _QSPINLOCK_LOCKED
+	 * and mycode, try to transition back to the quick spinning code
+	 * path by clearing the qcode and setting the _QSPINLOCK_WAITING
+	 * bit.
+	 */
+	if (qlcode == (_QSPINLOCK_LOCKED | (mycode << _QCODE_OFFSET))) {
+		u32 old = qlcode;
+
+		qlcode = atomic_cmpxchg(&lock->qlcode, old,
+				_QSPINLOCK_LOCKED|_QSPINLOCK_WAITING);
+		if (qlcode == old) {
+			union arch_qspinlock *slock =
+				(union arch_qspinlock *)lock;
+try_again:
+			/*
+			 * Wait until the lock byte is cleared
+			 */
+			do {
+				cpu_relax();
+			} while (ACCESS_ONCE(slock->lock));
+			/*
+			 * Set the lock bit & clear the waiting bit
+			 */
+			if (cmpxchg(&slock->lock_wait, _QSPINLOCK_WAITING,
+				    _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+				return -1;	/* Got the lock */
+			goto try_again;
+		}
+	}
+	*qcode = qlcode >> _QCODE_OFFSET;
+	return qlcode & _QSPINLOCK_LWMASK;
+}
+#endif /* _Q_MANY_CPUS */
+
 #else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
 /*
  * Generic functions for architectures that do not support atomic
@@ -144,7 +317,7 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 	int qlcode = atomic_read(lock->qlcode);
 
 	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
-		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
 			return 1;
 	return 0;
 }
@@ -156,6 +329,10 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
  * that may get superseded by a more optimized version.			*
  ************************************************************************
  */
+#ifndef queue_spin_trylock_quick
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{ return 0; }
+#endif
 
 #ifndef queue_get_lock_qcode
 /**
@@ -266,6 +443,11 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	u32 prev_qcode, my_qcode;
 
 	/*
+	 * Try the quick spinning code path
+	 */
+	if (queue_spin_trylock_quick(lock, qsval))
+		return;
+	/*
 	 * Get the queue node
 	 */
 	cpu_nr = smp_processor_id();
@@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		return;
 	}
 
+#ifdef queue_code_xchg
+	prev_qcode = queue_code_xchg(lock, my_qcode);
+#else
 	/*
 	 * Exchange current copy of the queue node code
 	 */
@@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	} else
 		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
 	my_qcode &= ~_QSPINLOCK_LOCKED;
+#endif /* queue_code_xchg */
 
 	if (prev_qcode) {
 		/*
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDm-0001BB-6s; Wed, 26 Feb 2014 15:16:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCm-00017X-8d
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:28 +0000
Received: from [85.158.137.68:13211] by server-9.bemta-3.messagelabs.com id
	37/8E-10184-F050E035; Wed, 26 Feb 2014 15:15:27 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393427724!4399743!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5196 invoked from network); 26 Feb 2014 15:15:25 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:25 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3426.houston.hp.com (Postfix) with ESMTP id 7C0D82DE;
	Wed, 26 Feb 2014 15:15:20 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id CEC7471;
	Wed, 26 Feb 2014 15:15:17 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:21 -0500
Message-Id: <1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte queue
	spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces a new queue spinlock implementation that can
serve as an alternative to the default ticket spinlock. Compared with
the ticket spinlock, this queue spinlock is almost as fair, has about
the same speed in the single-threaded case, and can be much faster in
high contention situations. Only in light to moderate contention,
where the average queue depth is around 1-3, may this queue spinlock
be a bit slower due to the higher slowpath overhead.

This queue spinlock is especially suited to NUMA machines with a
large number of cores, as the chance of spinlock contention is much
higher on those machines. The cost of contention is also higher
because of slower inter-node memory traffic.

The idea behind this spinlock implementation is the fact that
spinlocks are acquired with preemption disabled. In other words, the
process will not be migrated to another CPU while it is trying to get
a spinlock. Ignoring interrupt handling, a CPU can only be contending
on one spinlock at any one time. Of course, an interrupt handler can
try to acquire one spinlock while the interrupted user process is in
the process of getting another spinlock. By allocating a set of
per-cpu queue nodes and using them to form a waiting queue, we can
encode the queue node address into a much smaller 16-bit value.
Together with the 1-byte lock bit, this queue spinlock implementation
needs only 4 bytes to hold all the information it needs.

The current queue node address encoding of the 4-byte word is as
follows:
Bits 0-7  : the locked byte
Bits 8-9  : queue node index in the per-cpu array (4 entries)
Bits 10-31: cpu number + 1 (max cpus = 4M - 1)
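
The layout above corresponds to the queue_encode_qcode()/xlate_qcode()
pair in the patch below. As a rough userspace sketch of just the bit
manipulation (macro names here are simplified stand-ins):

```c
#include <stdint.h>

#define QCODE_OFFSET	8	/* queue code starts above the lock byte */
#define QLOCKED		1u	/* the locked bit */

/* Pack cpu number and per-cpu node index into the 32-bit word;
 * the cpu number is biased by +1 so that a qcode of 0 means "no waiter". */
static uint32_t encode_qcode(uint32_t cpu_nr, uint32_t qn_idx)
{
	return ((cpu_nr + 1) << (QCODE_OFFSET + 2)) |
	       (qn_idx << QCODE_OFFSET) | QLOCKED;
}

static uint32_t qcode_cpu(uint32_t qcode)
{
	return (qcode >> (QCODE_OFFSET + 2)) - 1;	/* undo the +1 bias */
}

static uint32_t qcode_idx(uint32_t qcode)
{
	return (qcode >> QCODE_OFFSET) & 3;		/* 2 bits -> 4 nodes */
}
```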

In the extremely unlikely case that all the queue node entries are
used up, the current code falls back to busy spinning without waiting
in a queue, with a warning message.

For single-thread performance (no contention), a 256K lock/unlock
loop was run on a 2.4 GHz Westmere x86-64 CPU.  The following table
shows the average time (in ns) for a single lock/unlock sequence
(including the looping and timing overhead):

  Lock Type			Time (ns)
  ---------			---------
  Ticket spinlock		  14.1
  Queue spinlock		   8.8

So the queue spinlock is much faster than the ticket spinlock, even
though the overhead of locking and unlocking should be pretty small
when there is no contention. The performance advantage is mainly due
to the fact that the ticket spinlock performs a read-modify-write
(add) instruction at unlock time, whereas the queue spinlock does
only a simple write, which can be much faster on a pipelined CPU.
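
The difference can be illustrated with a rough userspace sketch using
C11 atomics (the structures and field names here are invented for
illustration, not the kernel's actual types):

```c
#include <stdatomic.h>

/* Illustrative only: a ticket unlock must do an atomic read-modify-write,
 * while a queue unlock needs just a release store of the lock byte. */
struct ticket_lock { atomic_ushort head, tail; };
struct queue_lock  { atomic_uchar  locked; };

static void ticket_unlock(struct ticket_lock *l)
{
	/* fetch-add: read the old head, add 1, write it back atomically */
	atomic_fetch_add_explicit(&l->head, 1, memory_order_release);
}

static void queue_unlock(struct queue_lock *l)
{
	/* simple store: no dependency on the old value */
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```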

The AIM7 benchmark was run on an 8-socket 80-core DL980 with Westmere
x86-64 CPUs, with an XFS filesystem on a ramdisk and HT off, to
evaluate the performance impact of this patch on a 3.13 kernel.

  +------------+----------+-----------------+---------+
  | Kernel     | 3.13 JPM |    3.13 with    | %Change |
  |            |          | qspinlock patch |	      |
  +------------+----------+-----------------+---------+
  |		      10-100 users		      |
  +------------+----------+-----------------+---------+
  |custom      |   357459 |      363109     |  +1.58% |
  |dbase       |   496847 |      498801	    |  +0.39% |
  |disk        |  2925312 |     2771387     |  -5.26% |
  |five_sec    |   166612 |      169215     |  +1.56% |
  |fserver     |   382129 |      383279     |  +0.30% |
  |high_systime|    16356 |       16380     |  +0.15% |
  |short       |  4521978 |     4257363     |  -5.85% |
  +------------+----------+-----------------+---------+
  |		     200-1000 users		      |
  +------------+----------+-----------------+---------+
  |custom      |   449070 |      447711     |  -0.30% |
  |dbase       |   845029 |      853362	    |  +0.99% |
  |disk        |  2725249 |     4892907     | +79.54% |
  |five_sec    |   169410 |      170638     |  +0.72% |
  |fserver     |   489662 |      491828     |  +0.44% |
  |high_systime|   142823 |      143790     |  +0.68% |
  |short       |  7435288 |     9016171     | +21.26% |
  +------------+----------+-----------------+---------+
  |		     1100-2000 users		      |
  +------------+----------+-----------------+---------+
  |custom      |   432470 |      432570     |  +0.02% |
  |dbase       |   889289 |      890026	    |  +0.08% |
  |disk        |  2565138 |     5008732     | +95.26% |
  |five_sec    |   169141 |      170034     |  +0.53% |
  |fserver     |   498569 |      500701     |  +0.43% |
  |high_systime|   229913 |      245866     |  +6.94% |
  |short       |  8496794 |     8281918     |  -2.53% |
  +------------+----------+-----------------+---------+

The workload with the most gain was the disk workload. Without the
patch, the perf profile at 1500 users looked like:

 26.19%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--47.28%-- evict
              |--46.87%-- inode_sb_list_add
              |--1.24%-- xlog_cil_insert_items
              |--0.68%-- __remove_inode_hash
              |--0.67%-- inode_wait_for_writeback
               --3.26%-- [...]
 22.96%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  5.56%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  4.87%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.04%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.30%    reaim  [kernel.kallsyms]  [k] memcpy
  1.08%    reaim  [unknown]          [.] 0x0000003c52009447

There was pretty high spinlock contention on the inode_sb_list_lock
and maybe the inode's i_lock.

With the patch, the perf profile at 1500 users became:

 26.82%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  4.66%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  3.97%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.40%    reaim  [kernel.kallsyms]  [k] queue_spin_lock_slowpath
              |--88.31%-- _raw_spin_lock
              |          |--36.02%-- inode_sb_list_add
              |          |--35.09%-- evict
              |          |--16.89%-- xlog_cil_insert_items
              |          |--6.30%-- try_to_wake_up
              |          |--2.20%-- _xfs_buf_find
              |          |--0.75%-- __remove_inode_hash
              |          |--0.72%-- __mutex_lock_slowpath
              |          |--0.53%-- load_balance
              |--6.02%-- _raw_spin_lock_irqsave
              |          |--74.75%-- down_trylock
              |          |--9.69%-- rcu_check_quiescent_state
              |          |--7.47%-- down
              |          |--3.57%-- up
              |          |--1.67%-- rwsem_wake
              |          |--1.00%-- remove_wait_queue
              |          |--0.56%-- pagevec_lru_move_fn
              |--5.39%-- _raw_spin_lock_irq
              |          |--82.05%-- rwsem_down_read_failed
              |          |--10.48%-- rwsem_down_write_failed
              |          |--4.24%-- __down
              |          |--2.74%-- __schedule
               --0.28%-- [...]
  2.20%    reaim  [kernel.kallsyms]  [k] memcpy
  1.84%    reaim  [unknown]          [.] 0x000000000041517b
  1.77%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--21.08%-- xlog_cil_insert_items
              |--10.14%-- xfs_icsb_modify_counters
              |--7.20%-- xfs_iget_cache_hit
              |--6.56%-- inode_sb_list_add
              |--5.49%-- _xfs_buf_find
              |--5.25%-- evict
              |--5.03%-- __remove_inode_hash
              |--4.64%-- __mutex_lock_slowpath
              |--3.78%-- selinux_inode_free_security
              |--2.95%-- xfs_inode_is_filestream
              |--2.35%-- try_to_wake_up
              |--2.07%-- xfs_inode_set_reclaim_tag
              |--1.52%-- list_lru_add
              |--1.16%-- xfs_inode_clear_eofblocks_tag
		  :
  1.30%    reaim  [kernel.kallsyms]  [k] effective_load
  1.27%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.10%    reaim  [kernel.kallsyms]  [k] security_compute_sid

On the ext4 filesystem, the disk workload improved from 416281 JPM
to 899101 JPM (+116%) with the patch. In this case, the contended
spinlock is the mb_cache_spinlock.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 include/asm-generic/qspinlock.h       |  122 ++++++++++
 include/asm-generic/qspinlock_types.h |   55 +++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  393 +++++++++++++++++++++++++++++++++
 5 files changed, 578 insertions(+), 0 deletions(-)
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..08da60f
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,122 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/*
+ * External function declarations
+ */
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval);
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & _QSPINLOCK_LOCKED;
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !(atomic_read(&lock.qlcode) & _QSPINLOCK_LOCKED);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & ~_QSPINLOCK_LOCK_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->qlcode) &&
+	   (atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	int qsval;
+
+	/*
+	 * To reduce memory access to only once for the cold cache case,
+	 * a direct cmpxchg() is performed in the fastpath to optimize the
+	 * uncontended case. The contended performance, however, may suffer
+	 * a bit because of that.
+	 */
+	qsval = atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED);
+	if (likely(qsval == 0))
+		return;
+	queue_spin_lock_slowpath(lock, qsval);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * Use an atomic subtraction to clear the lock bit.
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_sub(_QSPINLOCK_LOCKED, &lock->qlcode);
+}
+#endif
+
+/*
+ * Initializer
+ */
+#define	__ARCH_SPIN_LOCK_UNLOCKED	{ ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l)		queue_spin_is_locked(l)
+#define arch_spin_is_contended(l)	queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	queue_spin_value_unlocked(l)
+#define arch_spin_lock(l)		queue_spin_lock(l)
+#define arch_spin_trylock(l)		queue_spin_trylock(l)
+#define arch_spin_unlock(l)		queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f)	queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..df981d0
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,55 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file inclusion via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here in this case.
+ */
+#ifdef CONFIG_PARAVIRT
+# include <asm/paravirt_types.h>
+#else
+# include <linux/types.h>
+# include <linux/atomic.h>
+#endif
+
+/*
+ * The queue spinlock data structure - a 32-bit word
+ *
+ * For NR_CPUS >= 16K, the bit assignments are:
+ *   Bit  0   : Set if locked
+ *   Bits 1-7 : Not used
+ *   Bits 8-31: Queue code
+ *
+ * For NR_CPUS < 16K, the bit assignments are:
+ *   Bit   0   : Set if locked
+ *   Bits  1-7 : Not used
+ *   Bits  8-15: Reserved for architecture specific optimization
+ *   Bits 16-31: Queue code
+ */
+typedef struct qspinlock {
+	atomic_t	qlcode;	/* Lock + queue code */
+} arch_spinlock_t;
+
+#define _QCODE_OFFSET		8
+#define _QSPINLOCK_LOCKED	1U
+#define	_QSPINLOCK_LOCK_MASK	0xff
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
 config MUTEX_SPIN_ON_OWNER
 	def_bool y
 	depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+	bool
+
+config QUEUE_SPINLOCK
+	def_bool y if ARCH_USE_QUEUE_SPINLOCK
+	depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index baab8e5..e3b3293 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
 endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..ed5efa7
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,393 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock with twists
+ * to make it fit the following constraints:
+ * 1. A max spinlock size of 4 bytes
+ * 2. Good fastpath performance
+ * 3. No change in the locking APIs
+ *
+ * The queue spinlock fastpath is as simple as it can get, all the heavy
+ * lifting is done in the lock slowpath. The main idea behind this queue
+ * spinlock implementation is to keep the spinlock size at 4 bytes while
+ * at the same time implement a queue structure to queue up the waiting
+ * lock spinners.
+ *
+ * Since preemption is disabled before getting the lock, a given CPU will
+ * only need to use one queue node structure in a non-interrupt context.
+ * A percpu queue node structure will be allocated for this purpose and the
+ * cpu number will be put into the queue spinlock structure to indicate the
+ * tail of the queue.
+ *
+ * To handle spinlock acquisition at interrupt context (softirq or hardirq),
+ * the queue node structure is actually an array for supporting nested spin
+ * locking operations in interrupt handlers. If all the entries in the
+ * array are used up, a warning message will be printed (as that shouldn't
+ * happen in normal circumstances) and the lock spinner will fall back to
+ * busy spinning instead of waiting in a queue.
+ */
+
+/*
+ * The 24-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
+ *
+ * The 16-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-15: CPU number + 1   (16K - 1 CPUs)
+ *
+ * A queue node code of 0 indicates that no one is waiting for the lock.
+ * Since the value 0 cannot be used as a valid CPU number, we need to add
+ * 1 to the CPU number before putting it into the queue code.
+ */
+#define MAX_QNODES		4
+#ifndef _QCODE_VAL_OFFSET
+#define _QCODE_VAL_OFFSET	_QCODE_OFFSET
+#endif
+
+/*
+ * The queue node structure
+ *
+ * This structure is essentially the same as the mcs_spinlock structure
+ * in the mcs_spinlock.h file. This structure is retained for future extension
+ * where new fields may be added.
+ */
+struct qnode {
+	u32		 wait;		/* Waiting flag		*/
+	struct qnode	*next;		/* Next queue node addr */
+};
+
+struct qnode_set {
+	struct qnode	nodes[MAX_QNODES];
+	int		node_idx;	/* Current node to use */
+};
+
+/*
+ * Per-CPU queue node structures
+ */
+static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
+
+/*
+ ************************************************************************
+ * The following optimized codes are for architectures that support:	*
+ *  1) Atomic byte and short data write					*
+ *  2) Byte and short data exchange and compare-exchange instructions	*
+ *									*
+ * For those architectures, their asm/qspinlock.h header file should	*
+ * define the followings in order to use the optimized codes.		*
+ *  1) The _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS macro			*
+ *  2) A smp_u8_store_release() macro for byte size store operation	*
+ *  3) A "union arch_qspinlock" structure that include the individual	*
+ *     fields of the qspinlock structure, including:			*
+ *      o slock - the qspinlock structure				*
+ *      o lock  - the lock byte						*
+ *									*
+ ************************************************************************
+ */
+#ifdef _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!ACCESS_ONCE(qlock->lock) &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+#else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
+/*
+ * Generic functions for architectures that do not support atomic
+ * byte or short data types.
+ */
+/**
+ *_queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+			return 1;
+	return 0;
+}
+#endif /* _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS */
+
+/*
+ ************************************************************************
+ * Inline functions used by the queue_spin_lock_slowpath() function	*
+ * that may get superseded by a more optimized version.			*
+ ************************************************************************
+ */
+
+#ifndef queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value (not used)
+ * Return : > 0 if lock is not available, = 0 if lock is free
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	*qcode = qlcode;
+	return qlcode & _QSPINLOCK_LOCKED;
+}
+#endif /* queue_get_lock_qcode */
+
+#ifndef queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+#endif /* queue_spin_trylock_and_clr_qcode */
+
+#ifndef queue_encode_qcode
+/**
+ * queue_encode_qcode - Encode the CPU number & node index into a qnode code
+ * @cpu_nr: CPU number
+ * @qn_idx: Queue node index
+ * Return : A qnode code that can be saved into the qspinlock structure
+ *
+ * The lock bit is set in the encoded 32-bit value, since the need to
+ * encode a qnode implies that the lock has been taken.
+ */
+static u32 queue_encode_qcode(u32 cpu_nr, u8 qn_idx)
+{
+	return ((cpu_nr + 1) << (_QCODE_VAL_OFFSET + 2)) |
+		(qn_idx << _QCODE_VAL_OFFSET) | _QSPINLOCK_LOCKED;
+}
+#endif /* queue_encode_qcode */
+
+/*
+ ************************************************************************
+ * Other inline functions needed by the queue_spin_lock_slowpath()	*
+ * function.								*
+ ************************************************************************
+ */
+
+/**
+ * xlate_qcode - translate the queue code into the queue node address
+ * @qcode: Queue code to be translated
+ * Return: The corresponding queue node address
+ */
+static inline struct qnode *xlate_qcode(u32 qcode)
+{
+	u32 cpu_nr = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+	u8  qn_idx = (qcode >> _QCODE_VAL_OFFSET) & 3;
+
+	return per_cpu_ptr(&qnset.nodes[qn_idx], cpu_nr);
+}
+
+/**
+ * get_qnode - Get a queue node address
+ * @qn_idx: Pointer to queue node index [out]
+ * Return : queue node address & queue node index in qn_idx, or NULL if
+ *	    no free queue node available.
+ */
+static struct qnode *get_qnode(unsigned int *qn_idx)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+	int i;
+
+	if (unlikely(qset->node_idx >= MAX_QNODES))
+		return NULL;
+	i = qset->node_idx++;
+	*qn_idx = i;
+	return &qset->nodes[i];
+}
+
+/**
+ * put_qnode - Return a queue node to the pool
+ */
+static void put_qnode(void)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+
+	qset->node_idx--;
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Current value of the queue spinlock 32-bit word
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
+{
+	unsigned int cpu_nr, qn_idx;
+	struct qnode *node, *next;
+	u32 prev_qcode, my_qcode;
+
+	/*
+	 * Get the queue node
+	 */
+	cpu_nr = smp_processor_id();
+	node   = get_qnode(&qn_idx);
+
+	/*
+	 * It should never happen that all the queue nodes are being used.
+	 */
+	BUG_ON(!node);
+
+	/*
+	 * Set up the new cpu code to be exchanged
+	 */
+	my_qcode = queue_encode_qcode(cpu_nr, qn_idx);
+
+	/*
+	 * Initialize the queue node
+	 */
+	node->wait = true;
+	node->next = NULL;
+
+	/*
+	 * The lock may be available at this point, try again if no task was
+	 * waiting in the queue.
+	 */
+	if (!(qsval >> _QCODE_OFFSET) && queue_spin_trylock(lock)) {
+		put_qnode();
+		return;
+	}
+
+	/*
+	 * Exchange current copy of the queue node code
+	 */
+	prev_qcode = atomic_xchg(&lock->qlcode, my_qcode);
+	/*
+	 * It is possible that we may accidentally steal the lock. If this is
+	 * the case, we need to either release it if not the head of the queue
+	 * or get the lock and be done with it.
+	 */
+	if (unlikely(!(prev_qcode & _QSPINLOCK_LOCKED))) {
+		if (prev_qcode == 0) {
+			/*
+			 * Got the lock since it is at the head of the queue
+			 * Now try to atomically clear the queue code.
+			 */
+			if (atomic_cmpxchg(&lock->qlcode, my_qcode,
+					  _QSPINLOCK_LOCKED) == my_qcode)
+				goto release_node;
+			/*
+			 * The cmpxchg fails only if one or more tasks
+			 * are added to the queue. In this case, we need to
+			 * notify the next one to be the head of the queue.
+			 */
+			goto notify_next;
+		}
+		/*
+		 * Accidentally steal the lock, release the lock and
+		 * let the queue head get it.
+		 */
+		queue_spin_unlock(lock);
+	} else
+		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
+	my_qcode &= ~_QSPINLOCK_LOCKED;
+
+	if (prev_qcode) {
+		/*
+		 * Not at the queue head, get the address of the previous node
+		 * and set up the "next" field of that node.
+		 */
+		struct qnode *prev = xlate_qcode(prev_qcode);
+
+		ACCESS_ONCE(prev->next) = node;
+		/*
+		 * Wait until the waiting flag is off
+		 */
+		while (smp_load_acquire(&node->wait))
+			arch_mutex_cpu_relax();
+	}
+
+	/*
+	 * At the head of the wait queue now
+	 */
+	while (true) {
+		u32 qcode;
+		int retval;
+
+		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
+		if (retval > 0)
+			;	/* Lock not available yet */
+		else if (retval < 0)
+			/* Lock taken, can release the node & return */
+			goto release_node;
+		else if (qcode != my_qcode) {
+			/*
+			 * Just get the lock with other spinners waiting
+			 * in the queue.
+			 */
+			if (queue_spin_setlock(lock))
+				goto notify_next;
+		} else {
+			/*
+			 * Get the lock & clear the queue code simultaneously
+			 */
+			if (queue_spin_trylock_and_clr_qcode(lock, qcode))
+				/* No need to notify the next one */
+				goto release_node;
+		}
+		arch_mutex_cpu_relax();
+	}
+
+notify_next:
+	/*
+	 * Wait, if needed, until the next one in queue sets up the next field
+	 */
+	while (!(next = ACCESS_ONCE(node->next)))
+		arch_mutex_cpu_relax();
+	/*
+	 * The next one in queue is now at the head
+	 */
+	smp_store_release(&next->wait, false);
+
+release_node:
+	put_qnode();
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDm-0001BV-VP; Wed, 26 Feb 2014 15:16:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCx-00018X-7S
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:39 +0000
Received: from [85.158.139.211:18801] by server-3.bemta-5.messagelabs.com id
	0C/ED-13671-A150E035; Wed, 26 Feb 2014 15:15:38 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393427735!6400660!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17698 invoked from network); 26 Feb 2014 15:15:37 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:37 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id 79792D3;
	Wed, 26 Feb 2014 15:15:35 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 0BCA366;
	Wed, 26 Feb 2014 15:15:32 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:27 -0500
Message-Id: <1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
	x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds para-virtualization support to the queue spinlock code
by enabling the queue head to kick the lock holder CPU, if known,
when the lock isn't released for a certain amount of time. It also
enables mutual monitoring between the queue head CPU and the CPU of
the following node in the queue to make sure that both stay
scheduled in.
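
A minimal single-threaded simulation of the kicking idea described
above (the spin threshold, the kick_cpu() behavior, and all names
other than the pv_kick_type enum from the patch are assumptions made
for illustration):

```c
#include <stdbool.h>

enum pv_kick_type {
	PV_KICK_LOCK_HOLDER,
	PV_KICK_QUEUE_HEAD,
	PV_KICK_NEXT_NODE
};

#define SPIN_THRESHOLD	64	/* assumed; the real value is tunable */

static bool lock_held = true;	/* simulated lock state */
static int  kicks;		/* how many kicks were issued */

/* Stand-in for the hypervisor kick: in this simulation the kicked
 * holder "runs again" and immediately releases the lock. */
static void kick_cpu(int cpu, enum pv_kick_type type)
{
	(void)cpu; (void)type;
	kicks++;
	lock_held = false;
}

/* Queue head loop: spin for a while, then kick the (known) holder. */
static int pv_wait_for_lock(int holder_cpu)
{
	int loops = 0;

	while (lock_held) {
		if (++loops >= SPIN_THRESHOLD) {
			kick_cpu(holder_cpu, PV_KICK_LOCK_HOLDER);
			loops = 0;
		}
	}
	return kicks;
}
```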

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/paravirt.h       |    9 ++-
 arch/x86/include/asm/paravirt_types.h |   12 +++
 arch/x86/include/asm/pvqspinlock.h    |  176 +++++++++++++++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c  |    4 +
 kernel/locking/qspinlock.c            |   41 +++++++-
 5 files changed, 235 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cd6e161..06d3279 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 }
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
+{
+	PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
+}
+#else
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
-
+#endif
 #endif
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7549b8b..87f8836 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -333,9 +333,21 @@ struct arch_spinlock;
 typedef u16 __ticket_t;
 #endif
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+enum pv_kick_type {
+	PV_KICK_LOCK_HOLDER,
+	PV_KICK_QUEUE_HEAD,
+	PV_KICK_NEXT_NODE
+};
+#endif
+
 struct pv_lock_ops {
+#ifdef CONFIG_QUEUE_SPINLOCK
+	void (*kick_cpu)(int cpu, enum pv_kick_type);
+#else
 	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
new file mode 100644
index 0000000..45aae39
--- /dev/null
+++ b/arch/x86/include/asm/pvqspinlock.h
@@ -0,0 +1,176 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ *	Queue Spinlock Para-Virtualization Support
+ *
+ *	+------+	    +-----+ nxtcpu_p1  +----+
+ *	| Lock |	    |Queue|----------->|Next|
+ *	|Holder|<-----------|Head |<-----------|Node|
+ *	+------+ prev_qcode +-----+ prev_qcode +----+
+ *
+ * As long as the current lock holder passes through the slowpath, the queue
+ * head CPU will have the lock holder's CPU number stored in prev_qcode. The
+ * situation is the same for the node next to the queue head.
+ *
+ * The next node, while setting up the next pointer in the queue head, can
+ * also store its own CPU number there. With that change, the queue head
+ * will have the CPU numbers of both its upstream and downstream neighbors.
+ *
+ * To make forward progress in lock acquisition and release, it is necessary
+ * that both the lock holder and the queue head virtual CPUs are present.
+ * The queue head can monitor the lock holder, but the lock holder can't
+ * monitor the queue head back. As a result, the next node is also brought
+ * into the picture to monitor the queue head. In the above diagram, all the
+ * 3 virtual CPUs should be present with the queue head and next node
+ * monitoring each other to make sure they are both present.
+ *
+ * Heartbeat counters are used to track if a neighbor is active. There are
+ * 3 different sets of heartbeat counter monitoring going on:
+ * 1) The queue head will wait until the number of loop iterations exceeds a
+ *    certain threshold (HEAD_SPIN_THRESHOLD). In that case, it will send
+ *    a kick-cpu signal to the lock holder if it has the CPU number available.
+ *    The kick-cpu signal will be sent only once, as the real lock holder
+ *    may not be the same as what the queue head thinks it is.
+ * 2) The queue head will periodically clear the active flag of the next node.
+ *    It will then check to see if the active flag remains cleared at the end
+ *    of the cycle. If it is, the next node CPU may have been scheduled out,
+ *    so it sends a kick-cpu signal to make sure that it remains active.
+ * 3) The next node CPU will monitor its own active flag to see if it gets
+ *    cleared periodically. If it does not, the queue head CPU may have been
+ *    scheduled out, so it will then send the kick-cpu signal to the queue head.
+ */
+
+/*
+ * Loop thresholds
+ */
+#define	HEAD_SPIN_THRESHOLD	(1<<12)	/* Threshold to kick lock holder  */
+#define	CLEAR_ACTIVE_THRESHOLD	(1<<8)	/* Threshold for clearing active flag */
+#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)
+
+/*
+ * PV macros
+ */
+#define PV_SET_VAR(type, var, val)	type var = val
+#define PV_VAR(var)			var
+#define	PV_GET_NXTCPU(node)		(node)->pv.nxtcpu_p1
+
+/*
+ * Additional fields to be added to the qnode structure
+ *
+ * Try to cram the PV fields into 32 bits so that they won't increase the
+ * qnode size on x86-64.
+ */
+#if CONFIG_NR_CPUS >= (1 << 16)
+#define _cpuid_t	u32
+#else
+#define _cpuid_t	u16
+#endif
+
+struct pv_qvars {
+	u8	 active;	/* Set if CPU active		*/
+	u8	 prehead;	/* Set if next to queue head	*/
+	_cpuid_t nxtcpu_p1;	/* CPU number of next node + 1	*/
+};
+
+/**
+ * pv_init_vars - initialize fields in struct pv_qvars
+ * @pv: pointer to struct pv_qvars
+ */
+static __always_inline void pv_init_vars(struct pv_qvars *pv)
+{
+	pv->active    = false;
+	pv->prehead   = false;
+	pv->nxtcpu_p1 = 0;
+}
+
+/**
+ * pv_head_spin_check - perform para-virtualization checks for queue head
+ * @count : loop count
+ * @qcode : queue code of the supposed lock holder
+ * @nxtcpu: CPU number of next node + 1
+ * @next  : pointer to the next node
+ * @offset: offset of the pv_qvars within the qnode
+ *
+ * 4 checks will be done:
+ * 1) See if it is time to kick the lock holder
+ * 2) Set the prehead flag of the next node
+ * 3) Clear the active flag of the next node periodically
+ * 4) If the active flag is not set after a while, assume the CPU of the
+ *    next-in-line node is offline and kick it back up again.
+ */
+static __always_inline void
+pv_head_spin_check(int *count, u32 qcode, int nxtcpu, void *next, int offset)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if ((++(*count) == HEAD_SPIN_THRESHOLD) && qcode) {
+		/*
+		 * Get the CPU number of the lock holder & kick it
+		 * The lock may have been stolen by another CPU
+		 * if PARAVIRT_UNFAIR_LOCKS is set, so the computed
+		 * CPU number may not be that of the actual lock holder.
+		 */
+		int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+		__queue_kick_cpu(cpu, PV_KICK_LOCK_HOLDER);
+	}
+	if (next) {
+		struct pv_qvars *pv = (struct pv_qvars *)
+				      ((char *)next + offset);
+
+		if (!pv->prehead)
+			pv->prehead = true;
+		if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
+			pv->active = false;
+		if (((*count & CLEAR_ACTIVE_MASK) == 0) &&
+			!pv->active && nxtcpu)
+			/*
+			 * The CPU of the next node doesn't seem to be
+			 * active; kick it to make sure that it is
+			 * ready to take over as the queue head.
+			 */
+			__queue_kick_cpu(nxtcpu - 1, PV_KICK_NEXT_NODE);
+	}
+}
+
+/**
+ * pv_queue_spin_check - perform para-virtualization checks for queue member
+ * @pv   : pointer to struct pv_qvars
+ * @count: loop count
+ * @qcode: queue code of the previous node (queue head if pv->prehead set)
+ *
+ * Set the active flag if it is next to the queue head
+ */
+static __always_inline void
+pv_queue_spin_check(struct pv_qvars *pv, int *count, u32 qcode)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if (ACCESS_ONCE(pv->prehead)) {
+		if (pv->active == false) {
+			*count = 0;	/* Reset counter */
+			pv->active = true;
+		}
+		if ((++(*count) >= 4 * CLEAR_ACTIVE_THRESHOLD) && qcode) {
+			/*
+			 * The queue head hasn't cleared the active flag
+			 * for too long; kick it in case it was scheduled out.
+			 */
+			int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+			__queue_kick_cpu(cpu, PV_KICK_QUEUE_HEAD);
+			*count = 0;
+		}
+	}
+}
+
+/**
+ * pv_set_cpu - set CPU # in the given pv_qvars structure
+ * @pv : pointer to struct pv_qvars to be set
+ * @cpu: cpu number to be set
+ */
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)
+{
+	pv->nxtcpu_p1 = cpu + 1;
+}
+
+#endif /* _ASM_X86_PVQSPINLOCK_H */
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 8c67cbe..30d76f5 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -11,9 +11,13 @@
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+	.kick_cpu = paravirt_nop,
+#else
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
+#endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 22a63fa..f10446e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -58,6 +58,26 @@
  */
 
 /*
+ * Para-virtualized queue spinlock support
+ */
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/pvqspinlock.h>
+#else
+
+#define PV_SET_VAR(type, var, val)
+#define PV_VAR(var)			0
+#define PV_GET_NXTCPU(node)		0
+
+struct pv_qvars {};
+static __always_inline void pv_init_vars(struct pv_qvars *pv)		{}
+static __always_inline void pv_head_spin_check(int *count, u32 qcode,
+				int nxtcpu, void *next, int offset)	{}
+static __always_inline void pv_queue_spin_check(struct pv_qvars *pv,
+				int *count, u32 qcode)			{}
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)	{}
+#endif
+
+/*
  * The 24-bit queue node code is divided into the following 2 fields:
  * Bits 0-1 : queue node index (4 nodes)
  * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
@@ -77,15 +97,13 @@
 
 /*
  * The queue node structure
- *
- * This structure is essentially the same as the mcs_spinlock structure
- * in mcs_spinlock.h file. This structure is retained for future extension
- * where new fields may be added.
  */
 struct qnode {
 	u32		 wait;		/* Waiting flag		*/
+	struct pv_qvars	 pv;		/* Para-virtualization  */
 	struct qnode	*next;		/* Next queue node addr */
 };
+#define PV_OFFSET	offsetof(struct qnode, pv)
 
 struct qnode_set {
 	struct qnode	nodes[MAX_QNODES];
@@ -441,6 +459,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	unsigned int cpu_nr, qn_idx;
 	struct qnode *node, *next;
 	u32 prev_qcode, my_qcode;
+	PV_SET_VAR(int, hcnt, 0);
 
 	/*
 	 * Try the quick spinning code path
@@ -468,6 +487,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	 */
 	node->wait = true;
 	node->next = NULL;
+	pv_init_vars(&node->pv);
 
 	/*
 	 * The lock may be available at this point, try again if no task was
@@ -522,13 +542,22 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		 * and set up the "next" fields of that node.
 		 */
 		struct qnode *prev = xlate_qcode(prev_qcode);
+		PV_SET_VAR(int, qcnt, 0);
 
 		ACCESS_ONCE(prev->next) = node;
 		/*
+		 * Set current CPU number into the previous node
+		 */
+		pv_set_cpu(&prev->pv, cpu_nr);
+
+		/*
 		 * Wait until the waiting flag is off
 		 */
-		while (smp_load_acquire(&node->wait))
+		while (smp_load_acquire(&node->wait)) {
 			arch_mutex_cpu_relax();
+			pv_queue_spin_check(&node->pv, PV_VAR(&qcnt),
+					    prev_qcode);
+		}
 	}
 
 	/*
@@ -560,6 +589,8 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 				goto release_node;
 		}
 		arch_mutex_cpu_relax();
+		pv_head_spin_check(PV_VAR(&hcnt), prev_qcode,
+				PV_GET_NXTCPU(node), node->next, PV_OFFSET);
 	}
 
 notify_next:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDo-0001C1-3b; Wed, 26 Feb 2014 15:16:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgD6-00019d-5w
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:48 +0000
Received: from [193.109.254.147:52571] by server-13.bemta-14.messagelabs.com
	id 59/B9-01226-3250E035; Wed, 26 Feb 2014 15:15:47 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393427745!6998216!1
X-Originating-IP: [15.201.208.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21299 invoked from network); 26 Feb 2014 15:15:46 -0000
Received: from g4t3425.houston.hp.com (HELO g4t3425.houston.hp.com)
	(15.201.208.53)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:46 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3425.houston.hp.com (Postfix) with ESMTP id 7BCA3292;
	Wed, 26 Feb 2014 15:15:30 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id EF5F967;
	Wed, 26 Feb 2014 15:15:27 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:25 -0500
Message-Id: <1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
	x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a KVM init function to activate the unfair queue
spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 713f1b3..a489140 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
 early_initcall(kvm_spinlock_init_jump);
 
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/*
+ * Enable unfair lock if running in a real para-virtualized environment
+ */
+static __init int kvm_unfair_locks_init_jump(void)
+{
+	if (!kvm_para_available())
+		return 0;
+
+	static_key_slow_inc(&paravirt_unfairlocks_enabled);
+	printk(KERN_INFO "KVM setup unfair spinlock\n");
+
+	return 0;
+}
+early_initcall(kvm_unfair_locks_init_jump);
+#endif
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDn-0001Bo-Ms; Wed, 26 Feb 2014 15:16:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgD1-00018z-L0
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:44 +0000
Received: from [193.109.254.147:2365] by server-1.bemta-14.messagelabs.com id
	9A/73-15438-E150E035; Wed, 26 Feb 2014 15:15:42 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393427735!7037747!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14899 invoked from network); 26 Feb 2014 15:15:41 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-3.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:41 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3426.houston.hp.com (Postfix) with ESMTP id 6679A2B8;
	Wed, 26 Feb 2014 15:15:25 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id E8E566B;
	Wed, 26 Feb 2014 15:15:22 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:23 -0500
Message-Id: <1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 3/8] qspinlock,
	x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A major problem with the queue spinlock patch is its performance at
low contention level (2-4 contending tasks) where it is slower than
the corresponding ticket spinlock code path. The following table shows
the execution time (in ms) of a micro-benchmark where 5M iterations
of the lock/unlock cycles were run on a 10-core Westmere-EX CPU with
2 different types of loads - standalone (lock and protected data in
different cachelines) and embedded (lock and protected data in the
same cacheline).

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  135/111	 135/102	  0%/-8%
       2	  732/950	1315/1573	+80%/+66%
       3	 1827/1783	2372/2428	+30%/+36%
       4	 2689/2725	2934/2934	 +9%/+8%
       5	 3736/3748	3658/3652	 -2%/-3%
       6	 4942/4984	4434/4428	-10%/-11%
       7	 6304/6319	5176/5163	-18%/-18%
       8	 7736/7629	5955/5944	-23%/-22%

It can be seen that the performance degradation is particularly bad
with 2 and 3 contending tasks. To reduce that performance deficit
at low contention level, a special x86-specific optimized code path
for 2 contending tasks was added. This special code path will only
be activated with fewer than 16K configured CPUs, because it uses
a byte in the 32-bit lock word to hold a waiting bit for the 2nd
contending task instead of queuing the waiting task in the queue.

With the change, the performance data became:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       2	  732/950	 523/528	-29%/-44%
       3	 1827/1783	2366/2384	+30%/+34%

The queue spinlock code path is now a bit faster with 2 contending
tasks.  There is also a very slight improvement with 3 contending
tasks.

The performance of the optimized code path can vary depending on which
of the several different code paths is taken. It is also not as fair as
the ticket spinlock, and there can be some variation in the execution
times of the 2 contending tasks.  Testing with different pairs of cores
within the same CPU shows an execution time that varies from 400ms to
1194ms. The ticket spinlock code also shows a variation of 718-1146ms,
which is probably due to the CPU topology within a socket.

In a multi-socket server, the optimized code path also seems to
produce a big performance improvement in cross-node contention traffic
at low contention levels. The table below shows the performance with
1 contending task per node:

		[Standalone]
  # of nodes	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	   135		 135		  0%
       2	  4452		 528		-88%
       3	 10767		2369		-78%
       4	 20835		2921		-86%

The micro-benchmark was also run on a 4-core Ivy-Bridge PC. The table
below shows the collected performance data:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  197/178	  181/150	 -8%/-16%
       2	 1109/928    435-1417/697-2125
       3	 1836/1702  1372-3112/1379-3138
       4	 2717/2429  1842-4158/1846-4170

The performance of the queue lock patch varied from run to run whereas
the performance of the ticket lock was more consistent. The queue
lock figures above were the range of values that were reported.

This optimization can also be easily used by other architectures as
long as they support 8-bit and 16-bit atomic operations.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/qspinlock.h      |   20 ++++-
 include/asm-generic/qspinlock_types.h |    8 ++-
 kernel/locking/qspinlock.c            |  192 ++++++++++++++++++++++++++++++++-
 3 files changed, 215 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 44cefee..98db42e 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,12 +7,30 @@
 
 #define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
 
+#define smp_u8_store_release(p, v)	\
+do {					\
+	barrier();			\
+	ACCESS_ONCE(*p) = (v);		\
+} while (0)
+
+/*
+ * As the qcode will be accessed as a 16-bit word, no offset is needed
+ */
+#define _QCODE_VAL_OFFSET	0
+
 /*
  * x86-64 specific queue spinlock union structure
+ * Besides the slock and lock fields, the other fields are only
+ * valid with fewer than 16K CPUs.
  */
 union arch_qspinlock {
 	struct qspinlock slock;
-	u8		 lock;	/* Lock bit	*/
+	struct {
+		u8  lock;	/* Lock bit	*/
+		u8  wait;	/* Waiting bit	*/
+		u16 qcode;	/* Queue code	*/
+	};
+	u16 lock_wait;		/* Lock and wait bits */
 };
 
 #define	queue_spin_unlock queue_spin_unlock
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index df981d0..3a02a9e 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -48,7 +48,13 @@ typedef struct qspinlock {
 	atomic_t	qlcode;	/* Lock + queue code */
 } arch_spinlock_t;
 
-#define _QCODE_OFFSET		8
+#if CONFIG_NR_CPUS >= (1 << 14)
+# define _Q_MANY_CPUS
+# define _QCODE_OFFSET	8
+#else
+# define _QCODE_OFFSET	16
+#endif
+
 #define _QSPINLOCK_LOCKED	1U
 #define	_QSPINLOCK_LOCK_MASK	0xff
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ed5efa7..22a63fa 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -109,8 +109,11 @@ static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
  *  2) A smp_u8_store_release() macro for byte size store operation	*
  *  3) A "union arch_qspinlock" structure that include the individual	*
  *     fields of the qspinlock structure, including:			*
- *      o slock - the qspinlock structure				*
- *      o lock  - the lock byte						*
+ *      o slock     - the qspinlock structure				*
+ *      o lock      - the lock byte					*
+ *      o wait      - the waiting byte					*
+ *      o qcode     - the queue node code				*
+ *      o lock_wait - the combined lock and waiting bytes		*
  *									*
  ************************************************************************
  */
@@ -129,6 +132,176 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 		return 1;
 	return 0;
 }
+
+#ifndef _Q_MANY_CPUS
+/*
+ * With less than 16K CPUs, the following optimizations are possible with
+ * the x86 architecture:
+ *  1) The 2nd byte of the 32-bit lock word can be used as a pending bit
+ *     for a waiting lock acquirer so that it won't need to go through the
+ *     MCS-style locking queue, which has a higher overhead.
+ *  2) The 16-bit queue code can be accessed or modified directly as a
+ *     16-bit short value without disturbing the first 2 bytes.
+ */
+#define	_QSPINLOCK_WAITING	0x100U	/* Waiting bit in 2nd byte   */
+#define	_QSPINLOCK_LWMASK	0xffff	/* Mask for lock & wait bits */
+
+#define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+
+#define queue_spin_trylock_quick queue_spin_trylock_quick
+/**
+ * queue_spin_trylock_quick - fast spinning on the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Old queue spinlock value
+ * Return: 1 if lock acquired, 0 if failed
+ *
+ * This is an optimized contention path for 2 contending tasks. It
+ * should only be entered if no task is waiting in the queue. This
+ * optimized path is not as fair as the ticket spinlock, but it offers
+ * slightly better performance. The regular MCS locking path for 3 or
+ * more contending tasks, however, is fair.
+ *
+ * Depending on the exact timing, there are several different paths that
+ * a contending task can take. The actual contention performance depends
+ * on which path is taken. So it can be faster or slower than the
+ * corresponding ticket spinlock path. On average, it is probably on par
+ * with ticket spinlock.
+ */
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+	u16		     old;
+
+	/*
+	 * Fall into the quick spinning code path only if no one is waiting
+	 * or the lock is available.
+	 */
+	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
+		     (qsval != _QSPINLOCK_WAITING)))
+		return 0;
+
+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
+
+	if (old == 0) {
+		/*
+		 * Got the lock, can clear the waiting bit now
+		 */
+		smp_u8_store_release(&qlock->wait, 0);
+		return 1;
+	} else if (old == _QSPINLOCK_LOCKED) {
+try_again:
+		/*
+		 * Wait until the lock byte is cleared to get the lock
+		 */
+		do {
+			cpu_relax();
+		} while (ACCESS_ONCE(qlock->lock));
+		/*
+		 * Set the lock bit & clear the waiting bit
+		 */
+		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
+			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+			return 1;
+		/*
+		 * Someone has stolen the lock, so wait again
+		 */
+		goto try_again;
+	} else if (old == _QSPINLOCK_WAITING) {
+		/*
+		 * Another task was already waiting, and this exchange just
+		 * stole the lock from under it. A bit of unfairness here
+		 * won't change the big picture, so just take the lock and return.
+		 */
+		return 1;
+	}
+	/*
+	 * Nothing needs to be done if the old value is
+	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
+	 */
+	return 0;
+}
+
+#define queue_code_xchg queue_code_xchg
+/**
+ * queue_code_xchg - exchange a queue code value
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: New queue code to be exchanged
+ * Return: The original qcode value in the queue spinlock
+ */
+static inline u32 queue_code_xchg(struct qspinlock *lock, u32 qcode)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	return (u32)xchg(&qlock->qcode, (u16)qcode);
+}
+
+#define queue_spin_trylock_and_clr_qcode queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	qcode <<= _QCODE_OFFSET;
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+
+#define queue_get_lock_qcode queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value
+ * Return : > 0 if lock is not available
+ *	   = 0 if lock is free
+ *	   < 0 if lock is taken & can return after cleanup
+ *
+ * It is considered locked when either the lock bit or the wait bit is set.
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	u32 qlcode;
+
+	qlcode = (u32)atomic_read(&lock->qlcode);
+	/*
+	 * In the special case where qlcode contains only _QSPINLOCK_LOCKED
+	 * and mycode, try to transition back to the quick spinning
+	 * code path by clearing the qcode and setting the _QSPINLOCK_WAITING
+	 * bit.
+	 */
+	if (qlcode == (_QSPINLOCK_LOCKED | (mycode << _QCODE_OFFSET))) {
+		u32 old = qlcode;
+
+		qlcode = atomic_cmpxchg(&lock->qlcode, old,
+				_QSPINLOCK_LOCKED|_QSPINLOCK_WAITING);
+		if (qlcode == old) {
+			union arch_qspinlock *slock =
+				(union arch_qspinlock *)lock;
+try_again:
+			/*
+			 * Wait until the lock byte is cleared
+			 */
+			do {
+				cpu_relax();
+			} while (ACCESS_ONCE(slock->lock));
+			/*
+			 * Set the lock bit & clear the waiting bit
+			 */
+			if (cmpxchg(&slock->lock_wait, _QSPINLOCK_WAITING,
+				    _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+				return -1;	/* Got the lock */
+			goto try_again;
+		}
+	}
+	*qcode = qlcode >> _QCODE_OFFSET;
+	return qlcode & _QSPINLOCK_LWMASK;
+}
+#endif /* _Q_MANY_CPUS */
+
 #else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
 /*
  * Generic functions for architectures that do not support atomic
@@ -144,7 +317,7 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 	int qlcode = atomic_read(lock->qlcode);
 
 	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
-		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
 			return 1;
 	return 0;
 }
@@ -156,6 +329,10 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
  * that may get superseded by a more optimized version.			*
  ************************************************************************
  */
+#ifndef queue_spin_trylock_quick
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{ return 0; }
+#endif
 
 #ifndef queue_get_lock_qcode
 /**
@@ -266,6 +443,11 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	u32 prev_qcode, my_qcode;
 
 	/*
+	 * Try the quick spinning code path
+	 */
+	if (queue_spin_trylock_quick(lock, qsval))
+		return;
+	/*
 	 * Get the queue node
 	 */
 	cpu_nr = smp_processor_id();
@@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		return;
 	}
 
+#ifdef queue_code_xchg
+	prev_qcode = queue_code_xchg(lock, my_qcode);
+#else
 	/*
 	 * Exchange current copy of the queue node code
 	 */
@@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	} else
 		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
 	my_qcode &= ~_QSPINLOCK_LOCKED;
+#endif /* queue_code_xchg */
 
 	if (prev_qcode) {
 		/*
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDl-0001B4-Qu; Wed, 26 Feb 2014 15:16:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCl-00017R-6S
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:27 +0000
Received: from [85.158.139.211:11633] by server-9.bemta-5.messagelabs.com id
	8E/C5-11237-E050E035; Wed, 26 Feb 2014 15:15:26 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393427724!6400590!1
X-Originating-IP: [15.201.208.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15468 invoked from network); 26 Feb 2014 15:15:25 -0000
Received: from g4t3425.houston.hp.com (HELO g4t3425.houston.hp.com)
	(15.201.208.53)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:25 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3425.houston.hp.com (Postfix) with ESMTP id 154C6272;
	Wed, 26 Feb 2014 15:15:22 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 6B5FA66;
	Wed, 26 Feb 2014 15:15:20 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:22 -0500
Message-Id: <1393427668-60228-3-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 2/8] qspinlock,
	x86: Enable x86-64 to use queue spinlock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of queue spinlock for x86-64. As x86-32
machines are typically not multi-socket, the benefit of queue spinlock
may not be apparent, so it is not enabled there.

Currently, there are some incompatibilities between the para-virtualized
spinlock code (which hard-codes the use of ticket spinlock) and the
queue spinlock. Therefore, the use of queue spinlock is disabled when
the para-virtualized spinlock is enabled.

The arch/x86/include/asm/qspinlock.h header file includes some
x86-specific optimizations which make the queue spinlock code perform
better than the generic implementation.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/Kconfig                      |    1 +
 arch/x86/include/asm/qspinlock.h      |   41 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/spinlock.h       |    5 ++++
 arch/x86/include/asm/spinlock_types.h |    4 +++
 4 files changed, 51 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/include/asm/qspinlock.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1b4ff87..5bf70ab 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -17,6 +17,7 @@ config X86_64
 	depends on 64BIT
 	select X86_DEV_DMA_OPS
 	select ARCH_USE_CMPXCHG_LOCKREF
+	select ARCH_USE_QUEUE_SPINLOCK
 
 ### Arch settings
 config X86
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
new file mode 100644
index 0000000..44cefee
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,41 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+
+/*
+ * x86-64 specific queue spinlock union structure
+ */
+union arch_qspinlock {
+	struct qspinlock slock;
+	u8		 lock;	/* Lock bit	*/
+};
+
+#define	queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+ * No special memory barrier other than a compiler one is needed for the
+ * x86 architecture. A compiler barrier is added at the end to make sure
+ * that clearing the lock bit is done ASAP without artificial delay
+ * due to compiler optimization.
+ */
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	barrier();
+	ACCESS_ONCE(qlock->lock) = 0;
+	barrier();
+}
+
+#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index bf156de..6e6de1f 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -43,6 +43,10 @@
 extern struct static_key paravirt_ticketlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm/qspinlock.h>
+#else
+
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -181,6 +185,7 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 {
 	arch_spin_lock(lock);
 }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 4f1bea1..7960268 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -23,6 +23,9 @@ typedef u32 __ticketpair_t;
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm-generic/qspinlock_types.h>
+#else
 typedef struct arch_spinlock {
 	union {
 		__ticketpair_t head_tail;
@@ -33,6 +36,7 @@ typedef struct arch_spinlock {
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 #include <asm/rwlock.h>
 
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDm-0001BI-JL; Wed, 26 Feb 2014 15:16:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCp-00017z-PB
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:32 +0000
Received: from [85.158.143.35:44080] by server-3.bemta-4.messagelabs.com id
	FA/50-11539-3150E035; Wed, 26 Feb 2014 15:15:31 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393427728!8494969!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25377 invoked from network); 26 Feb 2014 15:15:29 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:29 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id E50AE17F;
	Wed, 26 Feb 2014 15:15:27 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 724AC66;
	Wed, 26 Feb 2014 15:15:25 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:24 -0500
Message-Id: <1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
	x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Locking is always an issue in a virtualized environment as the virtual
CPU that is waiting on a lock may get scheduled out and hence block
any progress in lock acquisition even when the lock has been freed.

One solution to this problem is to allow unfair locks in a
para-virtualized environment. In this case, a new lock acquirer can
come in and steal the lock if the next-in-line CPU to get the lock is
scheduled out. An unfair lock in a native environment is generally not
a good idea, as there is a possibility of lock starvation for a heavily
contended lock.

This patch adds a new configuration option for the x86
architecture to enable the use of unfair queue spinlocks
(PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
(paravirt_unfairlocks_enabled) is used to switch between the fair and
the unfair versions of the spinlock code. This jump label will only be
enabled in a real PV guest.

Enabling this configuration feature decreases the performance of an
uncontended lock-unlock operation by about 1-2%.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/Kconfig                     |   11 +++++
 arch/x86/include/asm/qspinlock.h     |   74 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/Makefile             |    1 +
 arch/x86/kernel/paravirt-spinlocks.c |    7 +++
 4 files changed, 93 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5bf70ab..8d7c941 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -645,6 +645,17 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer Y.
 
+config PARAVIRT_UNFAIR_LOCKS
+	bool "Enable unfair locks in a para-virtualized guest"
+	depends on PARAVIRT && SMP && QUEUE_SPINLOCK
+	depends on !X86_OOSTORE && !X86_PPRO_FENCE
+	---help---
+	  This changes the kernel to use unfair locks in a real
+	  para-virtualized guest system. This will help performance
+	  in most cases. However, there is a possibility of lock
+	  starvation on a heavily contended lock especially in a
+	  large guest with many virtual CPUs.
+
 source "arch/x86/xen/Kconfig"
 
 config KVM_GUEST
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 98db42e..c278aed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -56,4 +56,78 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
 
 #include <asm-generic/qspinlock.h>
 
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/**
+ * queue_spin_lock_unfair - acquire a queue spinlock unfairly
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (likely(cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return;
+	/*
+	 * Since the lock is now unfair, there is no need to activate
+	 * the 2-task quick spinning code path.
+	 */
+	queue_spin_lock_slowpath(lock, -1);
+}
+
+/**
+ * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!qlock->lock &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/*
+ * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
+ * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
+ * is true.
+ */
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_lock_flags
+
+extern struct static_key paravirt_unfairlocks_enabled;
+
+/**
+ * arch_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static inline void arch_spin_lock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		queue_spin_lock_unfair(lock);
+		return;
+	}
+	queue_spin_lock(lock);
+}
+
+/**
+ * arch_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static inline int arch_spin_trylock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		return queue_spin_trylock_unfair(lock);
+	}
+	return queue_spin_trylock(lock);
+}
+
+#define arch_spin_lock_flags(l, f)	arch_spin_lock(l)
+
+#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+
 #endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index cb648c8..1107a20 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
 obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 
 obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..a50032a 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,6 +8,7 @@
 
 #include <asm/paravirt.h>
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
@@ -18,3 +19,9 @@ EXPORT_SYMBOL(pv_lock_ops);
 
 struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
 EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+#endif
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
+#endif
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDn-0001Bf-BD; Wed, 26 Feb 2014 15:16:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCz-00018e-9s
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:41 +0000
Received: from [85.158.143.35:55407] by server-2.bemta-4.messagelabs.com id
	08/85-04779-C150E035; Wed, 26 Feb 2014 15:15:40 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393427738!8499268!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6183 invoked from network); 26 Feb 2014 15:15:39 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:39 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id 01574263;
	Wed, 26 Feb 2014 15:15:38 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 88B5B71;
	Wed, 26 Feb 2014 15:15:35 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:28 -0500
Message-Id: <1393427668-60228-9-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 8/8] pvqspinlock,
	x86: Enable KVM to use qspinlock's PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch enables KVM to use the queue spinlock's PV support code
when the PARAVIRT_SPINLOCKS kernel config option is set. However,
PV support for Xen is not ready yet, so the queue spinlock still
has to be disabled when the PARAVIRT_SPINLOCKS config option is
enabled together with Xen.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/Kconfig.locks  |    2 +-
 2 files changed, 55 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index f318e78..3ddc436 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -568,6 +568,7 @@ static void kvm_kick_cpu(int cpu)
 	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
 }
 
+#ifndef CONFIG_QUEUE_SPINLOCK
 enum kvm_contention_stat {
 	TAKEN_SLOW,
 	TAKEN_SLOW_PICKUP,
@@ -795,6 +796,55 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 		}
 	}
 }
+#else /* !CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_KVM_DEBUG_FS
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+static u32 lh_kick_stats;	/* Lock holder kick count */
+static u32 qh_kick_stats;	/* Queue head kick count  */
+static u32 nn_kick_stats;	/* Next node kick count   */
+
+static int __init kvm_spinlock_debugfs(void)
+{
+	d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
+	if (!d_kvm_debug) {
+		printk(KERN_WARNING
+		       "Could not create 'kvm-guest' debugfs directory\n");
+		return -ENOMEM;
+	}
+	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
+
+	debugfs_create_u32("lh_kick_stats", 0644, d_spin_debug, &lh_kick_stats);
+	debugfs_create_u32("qh_kick_stats", 0644, d_spin_debug, &qh_kick_stats);
+	debugfs_create_u32("nn_kick_stats", 0644, d_spin_debug, &nn_kick_stats);
+
+	return 0;
+}
+
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+	if (type == PV_KICK_LOCK_HOLDER)
+		add_smp(&lh_kick_stats, 1);
+	else if (type == PV_KICK_QUEUE_HEAD)
+		add_smp(&qh_kick_stats, 1);
+	else
+		add_smp(&nn_kick_stats, 1);
+}
+fs_initcall(kvm_spinlock_debugfs);
+
+#else /* CONFIG_KVM_DEBUG_FS */
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+}
+#endif /* CONFIG_KVM_DEBUG_FS */
+
+static void kvm_kick_cpu_type(int cpu, enum pv_kick_type type)
+{
+	kvm_kick_cpu(cpu);
+	inc_kick_stats(type);
+}
+#endif /* !CONFIG_QUEUE_SPINLOCK */
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -807,8 +857,12 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }
 
 static __init int kvm_spinlock_init_jump(void)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index f185584..a70fdeb 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
 
 config QUEUE_SPINLOCK
 	def_bool y if ARCH_USE_QUEUE_SPINLOCK
-	depends on SMP && !PARAVIRT_SPINLOCKS
+	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDm-0001BV-VP; Wed, 26 Feb 2014 15:16:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCx-00018X-7S
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:39 +0000
Received: from [85.158.139.211:18801] by server-3.bemta-5.messagelabs.com id
	0C/ED-13671-A150E035; Wed, 26 Feb 2014 15:15:38 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393427735!6400660!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17698 invoked from network); 26 Feb 2014 15:15:37 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:37 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id 79792D3;
	Wed, 26 Feb 2014 15:15:35 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 0BCA366;
	Wed, 26 Feb 2014 15:15:32 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:27 -0500
Message-Id: <1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
	x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds para-virtualization support to the queue spinlock code
by enabling the queue head to kick the lock holder CPU, if known,
when the lock isn't released for a certain amount of time. It
also enables mutual monitoring between the queue head CPU and the
next node CPU in the queue to make sure that both of their CPUs
stay scheduled in.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/paravirt.h       |    9 ++-
 arch/x86/include/asm/paravirt_types.h |   12 +++
 arch/x86/include/asm/pvqspinlock.h    |  176 +++++++++++++++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c  |    4 +
 kernel/locking/qspinlock.c            |   41 +++++++-
 5 files changed, 235 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cd6e161..06d3279 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 }
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
+{
+	PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
+}
+#else
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
-
+#endif
 #endif
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7549b8b..87f8836 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -333,9 +333,21 @@ struct arch_spinlock;
 typedef u16 __ticket_t;
 #endif
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+enum pv_kick_type {
+	PV_KICK_LOCK_HOLDER,
+	PV_KICK_QUEUE_HEAD,
+	PV_KICK_NEXT_NODE
+};
+#endif
+
 struct pv_lock_ops {
+#ifdef CONFIG_QUEUE_SPINLOCK
+	void (*kick_cpu)(int cpu, enum pv_kick_type);
+#else
 	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
new file mode 100644
index 0000000..45aae39
--- /dev/null
+++ b/arch/x86/include/asm/pvqspinlock.h
@@ -0,0 +1,176 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ *	Queue Spinlock Para-Virtualization Support
+ *
+ *	+------+	    +-----+ nxtcpu_p1  +----+
+ *	| Lock |	    |Queue|----------->|Next|
+ *	|Holder|<-----------|Head |<-----------|Node|
+ *	+------+ prev_qcode +-----+ prev_qcode +----+
+ *
+ * As long as the current lock holder passes through the slowpath, the queue
+ * head CPU will have its CPU number stored in prev_qcode. The situation is
+ * the same for the node next to the queue head.
+ *
+ * The next node, while setting up the next pointer in the queue head, can
+ * also store its CPU number in that node. With that change, the queue head
+ * will have the CPU numbers of both its upstream and downstream neighbors.
+ *
+ * To make forward progress in lock acquisition and release, it is necessary
+ * that both the lock holder and the queue head virtual CPUs are present.
+ * The queue head can monitor the lock holder, but the lock holder can't
+ * monitor the queue head back. As a result, the next node is also brought
+ * into the picture to monitor the queue head. In the above diagram, all the
+ * 3 virtual CPUs should be present with the queue head and next node
+ * monitoring each other to make sure they are both present.
+ *
+ * Heartbeat counters are used to track if a neighbor is active. There are
+ * 3 different sets of heartbeat counter monitoring going on:
+ * 1) The queue head will wait until the number of loop iterations exceeds a
+ *    certain threshold (HEAD_SPIN_THRESHOLD). In that case, it will send
+ *    a kick-cpu signal to the lock holder if it has the CPU number available.
+ *    The kick-cpu signal will be sent only once, as the real lock holder
+ *    may not be the same as what the queue head thinks it is.
+ * 2) The queue head will periodically clear the active flag of the next node.
+ *    It will then check to see if the active flag remains cleared at the end
+ *    of the cycle. If it is, the next node CPU may be scheduled out, so it
+ *    sends a kick-cpu signal to make sure the next node CPU remains active.
+ * 3) The next node CPU will monitor its own active flag to see if it gets
+ *    cleared periodically. If it does not, the queue head CPU may be
+ *    scheduled out. It will then send the kick-cpu signal to the queue head.
+ */
+
+/*
+ * Loop thresholds
+ */
+#define	HEAD_SPIN_THRESHOLD	(1<<12)	/* Threshold to kick lock holder  */
+#define	CLEAR_ACTIVE_THRESHOLD	(1<<8)	/* Threshold for clearing active flag */
+#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)
+
+/*
+ * PV macros
+ */
+#define PV_SET_VAR(type, var, val)	type var = val
+#define PV_VAR(var)			var
+#define	PV_GET_NXTCPU(node)		(node)->pv.nxtcpu_p1
+
+/*
+ * Additional fields to be added to the qnode structure
+ *
+ * Try to cram the PV fields into 32 bits so that they won't increase the
+ * qnode size on x86-64.
+ */
+#if CONFIG_NR_CPUS >= (1 << 16)
+#define _cpuid_t	u32
+#else
+#define _cpuid_t	u16
+#endif
+
+struct pv_qvars {
+	u8	 active;	/* Set if CPU active		*/
+	u8	 prehead;	/* Set if next to queue head	*/
+	_cpuid_t nxtcpu_p1;	/* CPU number of next node + 1	*/
+};
+
+/**
+ * pv_init_vars - initialize fields in struct pv_qvars
+ * @pv: pointer to struct pv_qvars
+ */
+static __always_inline void pv_init_vars(struct pv_qvars *pv)
+{
+	pv->active    = false;
+	pv->prehead   = false;
+	pv->nxtcpu_p1 = 0;
+}
+
+/**
+ * pv_head_spin_check - perform para-virtualization checks for queue head
+ * @count : loop count
+ * @qcode : queue code of the supposed lock holder
+ * @nxtcpu: CPU number of next node + 1
+ * @next  : pointer to the next node
+ * @offset: offset of the pv_qvars within the qnode
+ *
+ * 4 checks will be done:
+ * 1) See if it is time to kick the lock holder
+ * 2) Set the prehead flag of the next node
+ * 3) Clear the active flag of the next node periodically
+ * 4) If the active flag is not set after a while, assume the CPU of the
+ *    next-in-line node is offline and kick it back up again.
+ */
+static __always_inline void
+pv_head_spin_check(int *count, u32 qcode, int nxtcpu, void *next, int offset)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if ((++(*count) == HEAD_SPIN_THRESHOLD) && qcode) {
+		/*
+		 * Get the CPU number of the lock holder & kick it
+		 * The lock may have been stolen by another CPU
+		 * if PARAVIRT_UNFAIR_LOCKS is set, so the computed
+		 * CPU number may not be the actual lock holder.
+		 */
+		int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+		__queue_kick_cpu(cpu, PV_KICK_LOCK_HOLDER);
+	}
+	if (next) {
+		struct pv_qvars *pv = (struct pv_qvars *)
+				      ((char *)next + offset);
+
+		if (!pv->prehead)
+			pv->prehead = true;
+		if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
+			pv->active = false;
+		if (((*count & CLEAR_ACTIVE_MASK) == 0) &&
+			!pv->active && nxtcpu)
+			/*
+			 * The CPU of the next node doesn't seem to be
+			 * active, need to kick it to make sure that
+			 * it is ready to be transitioned to queue head.
+			 */
+			__queue_kick_cpu(nxtcpu - 1, PV_KICK_NEXT_NODE);
+	}
+}
+
+/**
+ * pv_queue_spin_check - perform para-virtualization checks for queue member
+ * @pv   : pointer to struct pv_qvars
+ * @count: loop count
+ * @qcode: queue code of the previous node (queue head if pv->prehead set)
+ *
+ * Set the active flag if it is next to the queue head
+ */
+static __always_inline void
+pv_queue_spin_check(struct pv_qvars *pv, int *count, u32 qcode)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if (ACCESS_ONCE(pv->prehead)) {
+		if (pv->active == false) {
+			*count = 0;	/* Reset counter */
+			pv->active = true;
+		}
+		if ((++(*count) >= 4 * CLEAR_ACTIVE_THRESHOLD) && qcode) {
+			/*
+			 * The queue head isn't clearing the active flag for
+			 * too long. Need to kick it.
+			 */
+			int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+			__queue_kick_cpu(cpu, PV_KICK_QUEUE_HEAD);
+			*count = 0;
+		}
+	}
+}
+
+/**
+ * pv_set_cpu - set CPU # in the given pv_qvars structure
+ * @pv : pointer to struct pv_qvars to be set
+ * @cpu: cpu number to be set
+ */
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)
+{
+	pv->nxtcpu_p1 = cpu + 1;
+}
+
+#endif /* _ASM_X86_PVQSPINLOCK_H */
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 8c67cbe..30d76f5 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -11,9 +11,13 @@
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+	.kick_cpu = paravirt_nop,
+#else
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
+#endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 22a63fa..f10446e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -58,6 +58,26 @@
  */
 
 /*
+ * Para-virtualized queue spinlock support
+ */
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/pvqspinlock.h>
+#else
+
+#define PV_SET_VAR(type, var, val)
+#define PV_VAR(var)			0
+#define PV_GET_NXTCPU(node)		0
+
+struct pv_qvars {};
+static __always_inline void pv_init_vars(struct pv_qvars *pv)		{}
+static __always_inline void pv_head_spin_check(int *count, u32 qcode,
+				int nxtcpu, void *next, int offset)	{}
+static __always_inline void pv_queue_spin_check(struct pv_qvars *pv,
+				int *count, u32 qcode)			{}
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)	{}
+#endif
+
+/*
  * The 24-bit queue node code is divided into the following 2 fields:
  * Bits 0-1 : queue node index (4 nodes)
  * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
@@ -77,15 +97,13 @@
 
 /*
  * The queue node structure
- *
- * This structure is essentially the same as the mcs_spinlock structure
- * in mcs_spinlock.h file. This structure is retained for future extension
- * where new fields may be added.
  */
 struct qnode {
 	u32		 wait;		/* Waiting flag		*/
+	struct pv_qvars	 pv;		/* Para-virtualization  */
 	struct qnode	*next;		/* Next queue node addr */
 };
+#define PV_OFFSET	offsetof(struct qnode, pv)
 
 struct qnode_set {
 	struct qnode	nodes[MAX_QNODES];
@@ -441,6 +459,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	unsigned int cpu_nr, qn_idx;
 	struct qnode *node, *next;
 	u32 prev_qcode, my_qcode;
+	PV_SET_VAR(int, hcnt, 0);
 
 	/*
 	 * Try the quick spinning code path
@@ -468,6 +487,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	 */
 	node->wait = true;
 	node->next = NULL;
+	pv_init_vars(&node->pv);
 
 	/*
 	 * The lock may be available at this point, try again if no task was
@@ -522,13 +542,22 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		 * and set up the "next" fields of the that node.
 		 */
 		struct qnode *prev = xlate_qcode(prev_qcode);
+		PV_SET_VAR(int, qcnt, 0);
 
 		ACCESS_ONCE(prev->next) = node;
 		/*
+		 * Set current CPU number into the previous node
+		 */
+		pv_set_cpu(&prev->pv, cpu_nr);
+
+		/*
 		 * Wait until the waiting flag is off
 		 */
-		while (smp_load_acquire(&node->wait))
+		while (smp_load_acquire(&node->wait)) {
 			arch_mutex_cpu_relax();
+			pv_queue_spin_check(&node->pv, PV_VAR(&qcnt),
+					    prev_qcode);
+		}
 	}
 
 	/*
@@ -560,6 +589,8 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 				goto release_node;
 		}
 		arch_mutex_cpu_relax();
+		pv_head_spin_check(PV_VAR(&hcnt), prev_qcode,
+				PV_GET_NXTCPU(node), node->next, PV_OFFSET);
 	}
 
 notify_next:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDo-0001C1-3b; Wed, 26 Feb 2014 15:16:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgD6-00019d-5w
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:48 +0000
Received: from [193.109.254.147:52571] by server-13.bemta-14.messagelabs.com
	id 59/B9-01226-3250E035; Wed, 26 Feb 2014 15:15:47 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393427745!6998216!1
X-Originating-IP: [15.201.208.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21299 invoked from network); 26 Feb 2014 15:15:46 -0000
Received: from g4t3425.houston.hp.com (HELO g4t3425.houston.hp.com)
	(15.201.208.53)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:46 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3425.houston.hp.com (Postfix) with ESMTP id 7BCA3292;
	Wed, 26 Feb 2014 15:15:30 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id EF5F967;
	Wed, 26 Feb 2014 15:15:27 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:25 -0500
Message-Id: <1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
	x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a KVM init function to activate the unfair queue
spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 713f1b3..a489140 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
 early_initcall(kvm_spinlock_init_jump);
 
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/*
+ * Enable unfair lock if running in a real para-virtualized environment
+ */
+static __init int kvm_unfair_locks_init_jump(void)
+{
+	if (!kvm_para_available())
+		return 0;
+
+	static_key_slow_inc(&paravirt_unfairlocks_enabled);
+	printk(KERN_INFO "KVM setup unfair spinlock\n");
+
+	return 0;
+}
+early_initcall(kvm_unfair_locks_init_jump);
+#endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDo-0001CY-ST; Wed, 26 Feb 2014 15:16:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgDg-0001Ah-9e
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:16:24 +0000
Received: from [193.109.254.147:27101] by server-15.bemta-14.messagelabs.com
	id DF/C8-10839-7450E035; Wed, 26 Feb 2014 15:16:23 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393427781!7005198!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29878 invoked from network); 26 Feb 2014 15:16:22 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:16:22 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id E46D9281;
	Wed, 26 Feb 2014 15:15:17 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id D032F67;
	Wed, 26 Feb 2014 15:15:10 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:20 -0500
Message-Id: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with
	PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better
 - Add an x86 version of asm/qspinlock.h for holding x86 specific
   optimization.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low contention performance.

v2->v3:
 - Simplify the code by using the numerous-CPU mode only, without an
   unfair option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs
 - Add a configuration option to allow lock stealing which can further
   improve performance in many cases.
 - Enable wakeup of queue head CPU at unlock time for non-numerous
   CPU mode.

This patch set has 3 different sections:
 1) Patches 1-3: Introduce a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed a spinlock won't increase in size or
    break data alignment.
 2) Patches 4 and 5: Enables the use of unfair queue spinlock in a
    real para-virtualized execution environment. This can resolve
    some of the locking related performance issues due to the fact
    that the next CPU to get the lock may have been scheduled out
    for a period of time.
 3) Patches 6-8: Enable qspinlock para-virtualization support by making
    sure that the lock holder and the queue head stay alive as long as
    possible.

Patches 1-3 are fully tested and ready for production. Patches 4-8, on
the other hand, are not fully tested. They have undergone compilation
tests with various combinations of kernel config settings and boot-up
tests in a non-virtualized setting. Further tests and performance
characterization still need to be done in a KVM guest, so comments
on them are welcome. Suggestions or recommendations on how to add PV
support in the Xen environment are also needed.

The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case. Its performance can be much better
with moderate to heavy contention.  This patch set has the potential
to improve the performance of all workloads that have moderate to
heavy spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably won't
show up on machines with fewer than 4 sockets.

The purpose of this patch set is not to solve any particular spinlock
contention problem. Those need to be solved by refactoring the code
to use the lock more efficiently or by switching to finer-grained
locks. The main purpose is to make lock contention problems more
tolerable until someone can spend the time and effort to fix them
properly.

Waiman Long (8):
  qspinlock: Introducing a 4-byte queue spinlock implementation
  qspinlock, x86: Enable x86-64 to use queue spinlock
  qspinlock, x86: Add x86 specific optimization for 2 contending tasks
  pvqspinlock, x86: Allow unfair spinlock in a real PV environment
  pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
  pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
  pvqspinlock, x86: Add qspinlock para-virtualization support
  pvqspinlock, x86: Enable KVM to use qspinlock's PV support

 arch/x86/Kconfig                      |   12 +
 arch/x86/include/asm/paravirt.h       |    9 +-
 arch/x86/include/asm/paravirt_types.h |   12 +
 arch/x86/include/asm/pvqspinlock.h    |  176 ++++++++++
 arch/x86/include/asm/qspinlock.h      |  133 +++++++
 arch/x86/include/asm/spinlock.h       |    9 +-
 arch/x86/include/asm/spinlock_types.h |    4 +
 arch/x86/kernel/Makefile              |    1 +
 arch/x86/kernel/kvm.c                 |   73 ++++-
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +-
 arch/x86/xen/spinlock.c               |    2 +-
 include/asm-generic/qspinlock.h       |  122 +++++++
 include/asm-generic/qspinlock_types.h |   61 ++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  610 +++++++++++++++++++++++++++++++++
 16 files changed, 1239 insertions(+), 8 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h
 create mode 100644 arch/x86/include/asm/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDo-0001CM-FF; Wed, 26 Feb 2014 15:16:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgD8-00019j-4e
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:50 +0000
Received: from [193.109.254.147:52968] by server-14.bemta-14.messagelabs.com
	id 6C/1D-29228-5250E035; Wed, 26 Feb 2014 15:15:49 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393427747!6981985!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20958 invoked from network); 26 Feb 2014 15:15:48 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:48 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id F0D5F24B;
	Wed, 26 Feb 2014 15:15:32 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 80D2572;
	Wed, 26 Feb 2014 15:15:30 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:26 -0500
Message-Id: <1393427668-60228-7-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 6/8] pvqspinlock,
	x86: Rename paravirt_ticketlocks_enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/spinlock.h      |    4 ++--
 arch/x86/kernel/kvm.c                |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
 arch/x86/xen/spinlock.c              |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6e6de1f..283f2cf 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,7 +40,7 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
 #ifdef CONFIG_QUEUE_SPINLOCK
@@ -151,7 +151,7 @@ static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
-	    static_key_false(&paravirt_ticketlocks_enabled)) {
+	    static_key_false(&paravirt_spinlocks_enabled)) {
 		arch_spinlock_t prev;
 
 		prev = *lock;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a489140..f318e78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -818,7 +818,7 @@ static __init int kvm_spinlock_init_jump(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
 
 	return 0;
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index a50032a..8c67cbe 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -17,8 +17,8 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
 #endif
 
 #ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 581521c..06f4a64 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -290,7 +290,7 @@ static __init int xen_init_spinlocks_jump(void)
 	if (!xen_pvspin)
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	return 0;
 }
 early_initcall(xen_init_spinlocks_jump);
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDm-0001BB-6s; Wed, 26 Feb 2014 15:16:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCm-00017X-8d
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:28 +0000
Received: from [85.158.137.68:13211] by server-9.bemta-3.messagelabs.com id
	37/8E-10184-F050E035; Wed, 26 Feb 2014 15:15:27 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393427724!4399743!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5196 invoked from network); 26 Feb 2014 15:15:25 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:25 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3426.houston.hp.com (Postfix) with ESMTP id 7C0D82DE;
	Wed, 26 Feb 2014 15:15:20 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id CEC7471;
	Wed, 26 Feb 2014 15:15:17 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:21 -0500
Message-Id: <1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte queue
	spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces a new queue spinlock implementation that can
serve as an alternative to the default ticket spinlock. The queue
spinlock is almost as fair as the ticket spinlock: it has about the
same speed in the single-thread case and can be much faster under
heavy contention. Only under light to moderate contention, where the
average queue depth is around 1-3, may the queue spinlock be a bit
slower due to the higher slowpath overhead.

This queue spinlock is especially suited to NUMA machines with a
large number of cores, as the chance of spinlock contention is much
higher on those machines. The cost of contention is also higher
because of slower inter-node memory traffic.

The idea behind this spinlock implementation is the fact that
spinlocks are acquired with preemption disabled. In other words, the
process will not be migrated to another CPU while it is trying to
get a spinlock. Ignoring interrupt handling, a CPU can only be
contending for one spinlock at any one time. Of course, an interrupt
handler can try to acquire one spinlock while the interrupted user
process is in the middle of getting another one. By allocating a set
of per-cpu queue nodes and using them to form a waiting queue, we
can encode the queue node address into a much smaller 16-bit value.
Together with the 1-byte lock bit, this queue spinlock implementation
only needs 4 bytes to hold all the information it needs.

The current queue node address encoding of the 4-byte word is as
follows:
Bits 0-7  : the locked byte
Bits 8-9  : queue node index in the per-cpu array (4 entries)
Bits 10-31: cpu number + 1 (max cpus = 4M -1)

In the extremely unlikely case that all the queue node entries are
used up, the current code will fall back to busy spinning without
waiting in a queue, and a warning message will be printed.

For single-thread performance (no contention), a 256K-iteration
lock/unlock loop was run on a 2.4GHz Westmere x86-64 CPU. The
following table shows the average time (in ns) for a single
lock/unlock sequence (including the looping and timing overhead):

  Lock Type			Time (ns)
  ---------			---------
  Ticket spinlock		  14.1
  Queue spinlock		   8.8

So the queue spinlock is much faster than the ticket spinlock, even
though the overhead of locking and unlocking should be pretty small
when there is no contention. The performance advantage is mainly due
to the fact that the ticket spinlock does a read-modify-write (add)
instruction in unlock, whereas the queue spinlock only does a simple
write, which can be much faster on a pipelined CPU.

The AIM7 benchmark was run on an 8-socket 80-core DL980 with Westmere
x86-64 CPUs, using an XFS filesystem on a ramdisk and HT off, to
evaluate the performance impact of this patch on a 3.13 kernel.

  +------------+----------+-----------------+---------+
  | Kernel     | 3.13 JPM |    3.13 with    | %Change |
  |            |          | qspinlock patch |         |
  +------------+----------+-----------------+---------+
  |                   10-100 users                    |
  +------------+----------+-----------------+---------+
  |custom      |   357459 |      363109     |  +1.58% |
  |dbase       |   496847 |      498801     |  +0.39% |
  |disk        |  2925312 |     2771387     |  -5.26% |
  |five_sec    |   166612 |      169215     |  +1.56% |
  |fserver     |   382129 |      383279     |  +0.30% |
  |high_systime|    16356 |       16380     |  +0.15% |
  |short       |  4521978 |     4257363     |  -5.85% |
  +------------+----------+-----------------+---------+
  |                  200-1000 users                   |
  +------------+----------+-----------------+---------+
  |custom      |   449070 |      447711     |  -0.30% |
  |dbase       |   845029 |      853362     |  +0.99% |
  |disk        |  2725249 |     4892907     | +79.54% |
  |five_sec    |   169410 |      170638     |  +0.72% |
  |fserver     |   489662 |      491828     |  +0.44% |
  |high_systime|   142823 |      143790     |  +0.68% |
  |short       |  7435288 |     9016171     | +21.26% |
  +------------+----------+-----------------+---------+
  |                 1100-2000 users                   |
  +------------+----------+-----------------+---------+
  |custom      |   432470 |      432570     |  +0.02% |
  |dbase       |   889289 |      890026     |  +0.08% |
  |disk        |  2565138 |     5008732     | +95.26% |
  |five_sec    |   169141 |      170034     |  +0.53% |
  |fserver     |   498569 |      500701     |  +0.43% |
  |high_systime|   229913 |      245866     |  +6.94% |
  |short       |  8496794 |     8281918     |  -2.53% |
  +------------+----------+-----------------+---------+

The workload with the most gain was the disk workload. Without the
patch, the perf profile at 1500 users looked like:

 26.19%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--47.28%-- evict
              |--46.87%-- inode_sb_list_add
              |--1.24%-- xlog_cil_insert_items
              |--0.68%-- __remove_inode_hash
              |--0.67%-- inode_wait_for_writeback
               --3.26%-- [...]
 22.96%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  5.56%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  4.87%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.04%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.30%    reaim  [kernel.kallsyms]  [k] memcpy
  1.08%    reaim  [unknown]          [.] 0x0000003c52009447

There was pretty high spinlock contention on the inode_sb_list_lock
and maybe the inode's i_lock.

With the patch, the perf profile at 1500 users became:

 26.82%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  4.66%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  3.97%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.40%    reaim  [kernel.kallsyms]  [k] queue_spin_lock_slowpath
              |--88.31%-- _raw_spin_lock
              |          |--36.02%-- inode_sb_list_add
              |          |--35.09%-- evict
              |          |--16.89%-- xlog_cil_insert_items
              |          |--6.30%-- try_to_wake_up
              |          |--2.20%-- _xfs_buf_find
              |          |--0.75%-- __remove_inode_hash
              |          |--0.72%-- __mutex_lock_slowpath
              |          |--0.53%-- load_balance
              |--6.02%-- _raw_spin_lock_irqsave
              |          |--74.75%-- down_trylock
              |          |--9.69%-- rcu_check_quiescent_state
              |          |--7.47%-- down
              |          |--3.57%-- up
              |          |--1.67%-- rwsem_wake
              |          |--1.00%-- remove_wait_queue
              |          |--0.56%-- pagevec_lru_move_fn
              |--5.39%-- _raw_spin_lock_irq
              |          |--82.05%-- rwsem_down_read_failed
              |          |--10.48%-- rwsem_down_write_failed
              |          |--4.24%-- __down
              |          |--2.74%-- __schedule
               --0.28%-- [...]
  2.20%    reaim  [kernel.kallsyms]  [k] memcpy
  1.84%    reaim  [unknown]          [.] 0x000000000041517b
  1.77%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--21.08%-- xlog_cil_insert_items
              |--10.14%-- xfs_icsb_modify_counters
              |--7.20%-- xfs_iget_cache_hit
              |--6.56%-- inode_sb_list_add
              |--5.49%-- _xfs_buf_find
              |--5.25%-- evict
              |--5.03%-- __remove_inode_hash
              |--4.64%-- __mutex_lock_slowpath
              |--3.78%-- selinux_inode_free_security
              |--2.95%-- xfs_inode_is_filestream
              |--2.35%-- try_to_wake_up
              |--2.07%-- xfs_inode_set_reclaim_tag
              |--1.52%-- list_lru_add
              |--1.16%-- xfs_inode_clear_eofblocks_tag
		  :
  1.30%    reaim  [kernel.kallsyms]  [k] effective_load
  1.27%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.10%    reaim  [kernel.kallsyms]  [k] security_compute_sid

On the ext4 filesystem, the disk workload improved from 416281 JPM
to 899101 JPM (+116%) with the patch. In this case, the contended
spinlock is the mb_cache_spinlock.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 include/asm-generic/qspinlock.h       |  122 ++++++++++
 include/asm-generic/qspinlock_types.h |   55 +++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  393 +++++++++++++++++++++++++++++++++
 5 files changed, 578 insertions(+), 0 deletions(-)
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..08da60f
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,122 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/*
+ * External function declarations
+ */
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval);
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & _QSPINLOCK_LOCKED;
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !(atomic_read(&lock.qlcode) & _QSPINLOCK_LOCKED);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & ~_QSPINLOCK_LOCK_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->qlcode) &&
+	   (atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	int qsval;
+
+	/*
+	 * To reduce memory access to only once for the cold cache case,
+	 * a direct cmpxchg() is performed in the fastpath to optimize the
+	 * uncontended case. The contended performance, however, may suffer
+	 * a bit because of that.
+	 */
+	qsval = atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED);
+	if (likely(qsval == 0))
+		return;
+	queue_spin_lock_slowpath(lock, qsval);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * Use an atomic subtraction to clear the lock bit.
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_sub(_QSPINLOCK_LOCKED, &lock->qlcode);
+}
+#endif
+
+/*
+ * Initializer
+ */
+#define	__ARCH_SPIN_LOCK_UNLOCKED	{ ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l)		queue_spin_is_locked(l)
+#define arch_spin_is_contended(l)	queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	queue_spin_value_unlocked(l)
+#define arch_spin_lock(l)		queue_spin_lock(l)
+#define arch_spin_trylock(l)		queue_spin_trylock(l)
+#define arch_spin_unlock(l)		queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f)	queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..df981d0
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,55 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file inclusion via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here in this case.
+ */
+#ifdef CONFIG_PARAVIRT
+# include <asm/paravirt_types.h>
+#else
+# include <linux/types.h>
+# include <linux/atomic.h>
+#endif
+
+/*
+ * The queue spinlock data structure - a 32-bit word
+ *
+ * For NR_CPUS >= 16K, the bit assignment is:
+ *   Bit  0   : Set if locked
+ *   Bits 1-7 : Not used
+ *   Bits 8-31: Queue code
+ *
+ * For NR_CPUS < 16K, the bit assignment is:
+ *   Bit   0   : Set if locked
+ *   Bits  1-7 : Not used
+ *   Bits  8-15: Reserved for architecture specific optimization
+ *   Bits 16-31: Queue code
+ */
+typedef struct qspinlock {
+	atomic_t	qlcode;	/* Lock + queue code */
+} arch_spinlock_t;
+
+#define _QCODE_OFFSET		8
+#define _QSPINLOCK_LOCKED	1U
+#define	_QSPINLOCK_LOCK_MASK	0xff
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
 config MUTEX_SPIN_ON_OWNER
 	def_bool y
 	depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+	bool
+
+config QUEUE_SPINLOCK
+	def_bool y if ARCH_USE_QUEUE_SPINLOCK
+	depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index baab8e5..e3b3293 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
 endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..ed5efa7
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,393 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock with twists
+ * to make it fit the following constraints:
+ * 1. A max spinlock size of 4 bytes
+ * 2. Good fastpath performance
+ * 3. No change in the locking APIs
+ *
+ * The queue spinlock fastpath is as simple as it can get, all the heavy
+ * lifting is done in the lock slowpath. The main idea behind this queue
+ * spinlock implementation is to keep the spinlock size at 4 bytes while
+ * at the same time implement a queue structure to queue up the waiting
+ * lock spinners.
+ *
+ * Since preemption is disabled before getting the lock, a given CPU will
+ * only need to use one queue node structure in a non-interrupt context.
+ * A percpu queue node structure will be allocated for this purpose and the
+ * cpu number will be put into the queue spinlock structure to indicate the
+ * tail of the queue.
+ *
+ * To handle spinlock acquisition at interrupt context (softirq or hardirq),
+ * the queue node structure is actually an array for supporting nested spin
+ * locking operations in interrupt handlers. If all the entries in the
+ * array are used up, a warning message will be printed (as that shouldn't
+ * happen in normal circumstances) and the lock spinner will fall back to
+ * busy spinning instead of waiting in a queue.
+ */
+
+/*
+ * The 24-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
+ *
+ * The 16-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-15: CPU number + 1   (16K - 1 CPUs)
+ *
+ * A queue node code of 0 indicates that no one is waiting for the lock.
+ * As the value 0 cannot then be used for a valid CPU number, we add 1
+ * to the CPU number before putting it into the queue code.
+ */
+#define MAX_QNODES		4
+#ifndef _QCODE_VAL_OFFSET
+#define _QCODE_VAL_OFFSET	_QCODE_OFFSET
+#endif
+
+/*
+ * The queue node structure
+ *
+ * This structure is essentially the same as the mcs_spinlock structure
+ * in the mcs_spinlock.h file. It is retained for future extensions
+ * where new fields may be added.
+ */
+struct qnode {
+	u32		 wait;		/* Waiting flag		*/
+	struct qnode	*next;		/* Next queue node addr */
+};
+
+struct qnode_set {
+	struct qnode	nodes[MAX_QNODES];
+	int		node_idx;	/* Current node to use */
+};
+
+/*
+ * Per-CPU queue node structures
+ */
+static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
+
+/*
+ ************************************************************************
+ * The following optimized code is for architectures that support:	*
+ *  1) Atomic byte and short data writes				*
+ *  2) Byte and short data exchange and compare-exchange instructions	*
+ *									*
+ * For those architectures, their asm/qspinlock.h header file should	*
+ * define the following in order to use the optimized code.		*
+ *  1) The _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS macro			*
+ *  2) A smp_u8_store_release() macro for byte size store operation	*
+ *  3) A "union arch_qspinlock" structure that includes the individual	*
+ *     fields of the qspinlock structure, including:			*
+ *      o slock - the qspinlock structure				*
+ *      o lock  - the lock byte						*
+ *									*
+ ************************************************************************
+ */
+#ifdef _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!ACCESS_ONCE(qlock->lock) &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+#else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
+/*
+ * Generic functions for architectures that do not support atomic
+ * byte or short data types.
+ */
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+			return 1;
+	return 0;
+}
+#endif /* _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS */
+
+/*
+ ************************************************************************
+ * Inline functions used by the queue_spin_lock_slowpath() function	*
+ * that may get superseded by a more optimized version.			*
+ ************************************************************************
+ */
+
+#ifndef queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value (not used)
+ * Return : > 0 if lock is not available, = 0 if lock is free
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	*qcode = qlcode;
+	return qlcode & _QSPINLOCK_LOCKED;
+}
+#endif /* queue_get_lock_qcode */
+
+#ifndef queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+#endif /* queue_spin_trylock_and_clr_qcode */
+
+#ifndef queue_encode_qcode
+/**
+ * queue_encode_qcode - Encode the CPU number & node index into a qnode code
+ * @cpu_nr: CPU number
+ * @qn_idx: Queue node index
+ * Return : A qnode code that can be saved into the qspinlock structure
+ *
+ * The lock bit is set in the encoded 32-bit value because the need to
+ * encode a qnode means that the lock must have been taken.
+ */
+static u32 queue_encode_qcode(u32 cpu_nr, u8 qn_idx)
+{
+	return ((cpu_nr + 1) << (_QCODE_VAL_OFFSET + 2)) |
+		(qn_idx << _QCODE_VAL_OFFSET) | _QSPINLOCK_LOCKED;
+}
+#endif /* queue_encode_qcode */
+
+/*
+ ************************************************************************
+ * Other inline functions needed by the queue_spin_lock_slowpath()	*
+ * function.								*
+ ************************************************************************
+ */
+
+/**
+ * xlate_qcode - translate the queue code into the queue node address
+ * @qcode: Queue code to be translated
+ * Return: The corresponding queue node address
+ */
+static inline struct qnode *xlate_qcode(u32 qcode)
+{
+	u32 cpu_nr = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+	u8  qn_idx = (qcode >> _QCODE_VAL_OFFSET) & 3;
+
+	return per_cpu_ptr(&qnset.nodes[qn_idx], cpu_nr);
+}
+
+/**
+ * get_qnode - Get a queue node address
+ * @qn_idx: Pointer to queue node index [out]
+ * Return : queue node address & queue node index in qn_idx, or NULL if
+ *	    no free queue node available.
+ */
+static struct qnode *get_qnode(unsigned int *qn_idx)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+	int i;
+
+	if (unlikely(qset->node_idx >= MAX_QNODES))
+		return NULL;
+	i = qset->node_idx++;
+	*qn_idx = i;
+	return &qset->nodes[i];
+}
+
+/**
+ * put_qnode - Return a queue node to the pool
+ */
+static void put_qnode(void)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+
+	qset->node_idx--;
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Current value of the queue spinlock 32-bit word
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
+{
+	unsigned int cpu_nr, qn_idx;
+	struct qnode *node, *next;
+	u32 prev_qcode, my_qcode;
+
+	/*
+	 * Get the queue node
+	 */
+	cpu_nr = smp_processor_id();
+	node   = get_qnode(&qn_idx);
+
+	/*
+	 * It should never happen that all the queue nodes are being used.
+	 */
+	BUG_ON(!node);
+
+	/*
+	 * Set up the new cpu code to be exchanged
+	 */
+	my_qcode = queue_encode_qcode(cpu_nr, qn_idx);
+
+	/*
+	 * Initialize the queue node
+	 */
+	node->wait = true;
+	node->next = NULL;
+
+	/*
+	 * The lock may be available at this point, try again if no task was
+	 * waiting in the queue.
+	 */
+	if (!(qsval >> _QCODE_OFFSET) && queue_spin_trylock(lock)) {
+		put_qnode();
+		return;
+	}
+
+	/*
+	 * Exchange current copy of the queue node code
+	 */
+	prev_qcode = atomic_xchg(&lock->qlcode, my_qcode);
+	/*
+	 * It is possible that we may accidentally steal the lock. If this is
+	 * the case, we need to either release it if not the head of the queue
+	 * or get the lock and be done with it.
+	 */
+	if (unlikely(!(prev_qcode & _QSPINLOCK_LOCKED))) {
+		if (prev_qcode == 0) {
+			/*
+			 * Got the lock since it is at the head of the queue
+			 * Now try to atomically clear the queue code.
+			 */
+			if (atomic_cmpxchg(&lock->qlcode, my_qcode,
+					  _QSPINLOCK_LOCKED) == my_qcode)
+				goto release_node;
+			/*
+			 * The cmpxchg fails only if one or more tasks
+			 * are added to the queue. In this case, we need to
+			 * notify the next one to be the head of the queue.
+			 */
+			goto notify_next;
+		}
+		/*
+		 * Accidentally steal the lock, release the lock and
+		 * let the queue head get it.
+		 */
+		queue_spin_unlock(lock);
+	} else
+		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
+	my_qcode &= ~_QSPINLOCK_LOCKED;
+
+	if (prev_qcode) {
+		/*
+		 * Not at the queue head, get the address of the previous node
+	 * and set up the "next" field of that node.
+		 */
+		struct qnode *prev = xlate_qcode(prev_qcode);
+
+		ACCESS_ONCE(prev->next) = node;
+		/*
+		 * Wait until the waiting flag is off
+		 */
+		while (smp_load_acquire(&node->wait))
+			arch_mutex_cpu_relax();
+	}
+
+	/*
+	 * At the head of the wait queue now
+	 */
+	while (true) {
+		u32 qcode;
+		int retval;
+
+		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
+		if (retval > 0)
+			;	/* Lock not available yet */
+		else if (retval < 0)
+			/* Lock taken, can release the node & return */
+			goto release_node;
+		else if (qcode != my_qcode) {
+			/*
+			 * Just get the lock with other spinners waiting
+			 * in the queue.
+			 */
+			if (queue_spin_setlock(lock))
+				goto notify_next;
+		} else {
+			/*
+			 * Get the lock & clear the queue code simultaneously
+			 */
+			if (queue_spin_trylock_and_clr_qcode(lock, qcode))
+				/* No need to notify the next one */
+				goto release_node;
+		}
+		arch_mutex_cpu_relax();
+	}
+
+notify_next:
+	/*
+	 * Wait, if needed, until the next one in the queue sets up the next field
+	 */
+	while (!(next = ACCESS_ONCE(node->next)))
+		arch_mutex_cpu_relax();
+	/*
+	 * The next one in queue is now at the head
+	 */
+	smp_store_release(&next->wait, false);
+
+release_node:
+	put_qnode();
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDo-0001CY-ST; Wed, 26 Feb 2014 15:16:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgDg-0001Ah-9e
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:16:24 +0000
Received: from [193.109.254.147:27101] by server-15.bemta-14.messagelabs.com
	id DF/C8-10839-7450E035; Wed, 26 Feb 2014 15:16:23 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393427781!7005198!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29878 invoked from network); 26 Feb 2014 15:16:22 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:16:22 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id E46D9281;
	Wed, 26 Feb 2014 15:15:17 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id D032F67;
	Wed, 26 Feb 2014 15:15:10 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:20 -0500
Message-Id: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with
	PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better
 - Add an x86 version of asm/qspinlock.h for holding x86 specific
   optimization.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low contention performance.

v2->v3:
 - Simplify the code by using the numerous CPU mode only, without an
   unfair option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs
 - Add a configuration option to allow lock stealing which can further
   improve performance in many cases.
 - Enable wakeup of queue head CPU at unlock time for non-numerous
   CPU mode.

This patch set has 3 different sections:
 1) Patches 1-3: Introduces a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed a spinlock won't increase in size or
    break data alignment.
 2) Patches 4 and 5: Enables the use of unfair queue spinlock in a
    real para-virtualized execution environment. This can resolve
    some of the locking related performance issues due to the fact
    that the next CPU to get the lock may have been scheduled out
    for a period of time.
 3) Patches 6-8: Enable qspinlock para-virtualization support by making
    sure that the lock holder and the queue head stay alive as long as
    possible.

Patches 1-3 are fully tested and ready for production. Patches 4-8, on
the other hand, are not fully tested. They have undergone compilation
tests with various combinations of kernel config settings and boot-up
tests in a non-virtualized setting. Further tests and performance
characterization still need to be done in a KVM guest, so comments
on them are welcome. Suggestions or recommendations on how to add PV
support in the Xen environment are also needed.

The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case. Its performance can be much better
with moderate to heavy contention.  This patch set has the potential
to improve the performance of all workloads that have moderate to
heavy spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably won't
show up on machines with fewer than 4 sockets.

The purpose of this patch set is not to solve any particular spinlock
contention problems. Those need to be solved by refactoring the code
to make more efficient use of the lock or by using finer-grained
locks. The
main purpose is to make the lock contention problems more tolerable
until someone can spend the time and effort to fix them.

Waiman Long (8):
  qspinlock: Introducing a 4-byte queue spinlock implementation
  qspinlock, x86: Enable x86-64 to use queue spinlock
  qspinlock, x86: Add x86 specific optimization for 2 contending tasks
  pvqspinlock, x86: Allow unfair spinlock in a real PV environment
  pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
  pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
  pvqspinlock, x86: Add qspinlock para-virtualization support
  pvqspinlock, x86: Enable KVM to use qspinlock's PV support

 arch/x86/Kconfig                      |   12 +
 arch/x86/include/asm/paravirt.h       |    9 +-
 arch/x86/include/asm/paravirt_types.h |   12 +
 arch/x86/include/asm/pvqspinlock.h    |  176 ++++++++++
 arch/x86/include/asm/qspinlock.h      |  133 +++++++
 arch/x86/include/asm/spinlock.h       |    9 +-
 arch/x86/include/asm/spinlock_types.h |    4 +
 arch/x86/kernel/Makefile              |    1 +
 arch/x86/kernel/kvm.c                 |   73 ++++-
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +-
 arch/x86/xen/spinlock.c               |    2 +-
 include/asm-generic/qspinlock.h       |  122 +++++++
 include/asm-generic/qspinlock_types.h |   61 ++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  610 +++++++++++++++++++++++++++++++++
 16 files changed, 1239 insertions(+), 8 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h
 create mode 100644 arch/x86/include/asm/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:16:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgDm-0001BI-JL; Wed, 26 Feb 2014 15:16:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIgCp-00017z-PB
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:15:32 +0000
Received: from [85.158.143.35:44080] by server-3.bemta-4.messagelabs.com id
	FA/50-11539-3150E035; Wed, 26 Feb 2014 15:15:31 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393427728!8494969!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25377 invoked from network); 26 Feb 2014 15:15:29 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:15:29 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id E50AE17F;
	Wed, 26 Feb 2014 15:15:27 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 724AC66;
	Wed, 26 Feb 2014 15:15:25 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 10:14:24 -0500
Message-Id: <1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
X-Mailman-Approved-At: Wed, 26 Feb 2014 15:16:29 +0000
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
	x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Locking is always an issue in a virtualized environment as the virtual
CPU that is waiting on a lock may get scheduled out and hence block
any progress in lock acquisition even when the lock has been freed.

One solution to this problem is to allow unfair locks in a
para-virtualized environment. In this case, a new lock acquirer can
come in and steal the lock if the next-in-line CPU to get the lock is
scheduled out. An unfair lock in a native environment is generally not
a good idea, as there is a possibility of lock starvation for a heavily
contended lock.

This patch adds a new configuration option for the x86
architecture to enable the use of unfair queue spinlock
(PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
(paravirt_unfairlocks_enabled) is used to switch between a fair and
an unfair version of the spinlock code. This jump label will only be
enabled in a real PV guest.

Enabling this configuration feature decreases the performance of an
uncontended lock-unlock operation by about 1-2%.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/Kconfig                     |   11 +++++
 arch/x86/include/asm/qspinlock.h     |   74 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/Makefile             |    1 +
 arch/x86/kernel/paravirt-spinlocks.c |    7 +++
 4 files changed, 93 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5bf70ab..8d7c941 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -645,6 +645,17 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer Y.
 
+config PARAVIRT_UNFAIR_LOCKS
+	bool "Enable unfair locks in a para-virtualized guest"
+	depends on PARAVIRT && SMP && QUEUE_SPINLOCK
+	depends on !X86_OOSTORE && !X86_PPRO_FENCE
+	---help---
+	  This changes the kernel to use unfair locks in a real
+	  para-virtualized guest system. This will help performance
+	  in most cases. However, there is a possibility of lock
+	  starvation on a heavily contended lock especially in a
+	  large guest with many virtual CPUs.
+
 source "arch/x86/xen/Kconfig"
 
 config KVM_GUEST
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 98db42e..c278aed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -56,4 +56,78 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
 
 #include <asm-generic/qspinlock.h>
 
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/**
+ * queue_spin_lock_unfair - acquire a queue spinlock unfairly
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (likely(cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return;
+	/*
+	 * Since the lock is now unfair, there is no need to activate
+	 * the 2-task quick spinning code path.
+	 */
+	queue_spin_lock_slowpath(lock, -1);
+}
+
+/**
+ * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!qlock->lock &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/*
+ * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
+ * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
+ * is true.
+ */
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_lock_flags
+
+extern struct static_key paravirt_unfairlocks_enabled;
+
+/**
+ * arch_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static inline void arch_spin_lock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		queue_spin_lock_unfair(lock);
+		return;
+	}
+	queue_spin_lock(lock);
+}
+
+/**
+ * arch_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static inline int arch_spin_trylock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		return queue_spin_trylock_unfair(lock);
+	}
+	return queue_spin_trylock(lock);
+}
+
+#define arch_spin_lock_flags(l, f)	arch_spin_lock(l)
+
+#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+
 #endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index cb648c8..1107a20 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
 obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 
 obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..a50032a 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,6 +8,7 @@
 
 #include <asm/paravirt.h>
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
@@ -18,3 +19,9 @@ EXPORT_SYMBOL(pv_lock_ops);
 
 struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
 EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+#endif
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
+#endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:19:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgG8-00023L-17; Wed, 26 Feb 2014 15:18:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WIgG7-000230-0h
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 15:18:55 +0000
Received: from [85.158.143.35:38306] by server-1.bemta-4.messagelabs.com id
	3A/28-31661-ED50E035; Wed, 26 Feb 2014 15:18:54 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393427932!8507893!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1344 invoked from network); 26 Feb 2014 15:18:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 15:18:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="105941922"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 15:18:52 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 10:18:51 -0500
Message-ID: <530E05DA.6070500@citrix.com>
Date: Wed, 26 Feb 2014 16:18:50 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: <drbd-dev@lists.linbit.com>
References: <1392911976-37506-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1392911976-37506-1-git-send-email-roger.pau@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, drbd-user@lists.linbit.com
Subject: Re: [Xen-devel] [PATCH] block-drbd: type is "phy" for drbd backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


I've realized I haven't Cc'ed the drbd developers lists, so adding it now.

On 20/02/14 16:59, Roger Pau Monne wrote:
> The type written to xenstore by libxl when attaching a drbd backend is
> "phy", not "drbd", so handle this case also.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  scripts/block-drbd |    4 ++--
>  1 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/scripts/block-drbd b/scripts/block-drbd
> index 5563ccb..975802b 100755
> --- a/scripts/block-drbd
> +++ b/scripts/block-drbd
> @@ -250,7 +250,7 @@ case "$command" in
>      fi
>  
>      case $t in 
> -      drbd)
> +      drbd|phy)
>          drbd_resource=$p
>          drbd_role="$(drbdadm role $drbd_resource)"
>          drbd_lrole="${drbd_role%%/*}"
> @@ -278,7 +278,7 @@ case "$command" in
>  
>    remove)
>      case $t in 
> -      drbd)
> +      drbd|phy)
>          p=$(xenstore_read "$XENBUS_PATH/params")
>          drbd_resource=$p
>          drbd_role="$(drbdadm role $drbd_resource)"
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
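
[Archive note] The patch in this message simply widens the script's `case` pattern so both device type strings take the DRBD attach/remove path. A standalone sketch of that shell idiom (the `classify` helper and its output strings are hypothetical, not part of block-drbd):

```shell
#!/bin/sh
# Sketch of the "drbd|phy" case alternation from the patch above.
# classify() stands in for the attach logic; $1 plays the role of the
# device type string the real script reads from xenstore.
classify() {
    case $1 in
        drbd|phy)
            echo "drbd-handled"
            ;;
        *)
            echo "unhandled"
            ;;
    esac
}

classify drbd   # -> drbd-handled
classify phy    # -> drbd-handled
classify tap    # -> unhandled
```

In the real script the matched branch goes on to resolve the DRBD resource and its role via drbdadm.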

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:28:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:28:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgPh-0002or-3B; Wed, 26 Feb 2014 15:28:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WIgPf-0002oi-OO
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 15:28:47 +0000
Received: from [85.158.139.211:34341] by server-2.bemta-5.messagelabs.com id
	E2/B7-23037-E280E035; Wed, 26 Feb 2014 15:28:46 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393428524!6441715!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 585 invoked from network); 26 Feb 2014 15:28:45 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 15:28:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="104313189"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 15:28:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 10:28:43 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17] helo=[127.0.0.1])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<Ian.Campbell@citrix.com>)	id 1WIgPa-0001xu-I0;
	Wed, 26 Feb 2014 15:28:42 +0000
Message-ID: <1393428521.9640.19.camel@dagon.hellion.org.uk>
From: Ian Campbell <Ian.Campbell@citrix.com>
To: Eric Blake <eblake@redhat.com>
Date: Wed, 26 Feb 2014 15:28:41 +0000
In-Reply-To: <530DED71.4020604@redhat.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com> <530DED71.4020604@redhat.com>
Organization: Citrix Systems, Inc.
X-Mailer: Evolution 3.4.4-4+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: "Daniel P. Berrange" <berrange@redhat.com>,
	stefano.stabellini@eu.citrix.com, libvir-list@redhat.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
	architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 06:34 -0700, Eric Blake wrote:
> On 02/26/2014 05:37 AM, Daniel P. Berrange wrote:
> > On Wed, Feb 26, 2014 at 12:34:17PM +0000, Ian Campbell wrote:
> >> Only tested on v7 but the v8 equivalent seems pretty obvious.
> >>
> >> XEN_CAP_REGEX already accepts more than it should (e.g. x86_64p or x86_32be)
> >> but I have stuck with the existing pattern.
> >>
> >> With this I can create a guest from:
> >>   <domain type='xen'>
> >>     <name>libvirt-test</name>
> >>     <uuid>6343998e-9eda-11e3-98f6-77252a7d02f3</uuid>
> >>     <memory>393216</memory>
> >>     <currentMemory>393216</currentMemory>
> >>     <vcpu>1</vcpu>
> >>     <os>
> >>       <type arch='armv7l' machine='xenpv'>linux</type>
> >>       <kernel>/boot/vmlinuz-arm-native</kernel>
> >>       <cmdline>console=hvc0 earlyprintk debug root=/dev/xvda1</cmdline>
> >>     </os>
> 
> > 
> > ACK
> 
> I've gone ahead and pushed the patch.

Wow, that was quick, thanks!

Ian.
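
[Archive note] The quoted commit message says XEN_CAP_REGEX already accepts looser capability strings than it should (e.g. x86_64p). A rough way to see that behaviour with a pattern of the same general shape (the regex below is a simplified stand-in for illustration, not libvirt's actual XEN_CAP_REGEX):

```shell
#!/bin/sh
# Simplified stand-in for a xen capability-string pattern of the kind
# discussed above; NOT libvirt's actual XEN_CAP_REGEX definition.
CAP_RE='^(xen|hvm)-[0-9]+\.[0-9]+-(x86_32|x86_64|armv7l|aarch64)(p|be)?$'

check() {
    if echo "$1" | grep -Eq "$CAP_RE"; then
        echo "accepted"
    else
        echo "rejected"
    fi
}

check xen-3.0-armv7l    # -> accepted (the new ARM case)
check xen-3.0-x86_64p   # -> accepted (looser than it should be)
check xen-3.0-mips      # -> rejected
```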



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 15:44:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 15:44:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgeK-00037Y-Ra; Wed, 26 Feb 2014 15:43:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <viktor.kleinik@globallogic.com>) id 1WIgeI-00037T-Ob
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 15:43:55 +0000
Received: from [85.158.137.68:64562] by server-1.bemta-3.messagelabs.com id
	DA/8F-17293-9BB0E035; Wed, 26 Feb 2014 15:43:53 +0000
X-Env-Sender: viktor.kleinik@globallogic.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393429430!4378788!1
X-Originating-IP: [64.18.0.26]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4950 invoked from network); 26 Feb 2014 15:43:52 -0000
Received: from exprod5og113.obsmtp.com (HELO exprod5og113.obsmtp.com)
	(64.18.0.26)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 15:43:52 -0000
Received: from mail-wi0-f170.google.com ([209.85.212.170]) (using TLSv1) by
	exprod5ob113.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUw4LtUdh+Q4cN5BqT0LsOXfdijOOH7bb@postini.com;
	Wed, 26 Feb 2014 07:43:52 PST
Received: by mail-wi0-f170.google.com with SMTP id hi5so5786307wib.3
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 07:43:48 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=2q90SzX7OliAC/C2kTNeyYshEL4GgSWHT28SxtRsmts=;
	b=UvgRg59RV5MN8j1VpPF49F4xy4iJjY+3Wkxdw9Tb7z7rapcu2rWBJAU0AfIAmAn60P
	wnaIaVMYwN/KQTxluEDJ0zXfGxUYbLd8Akz6rp333+vD8oBRt9qC4k9orBDgx7K9soAJ
	DHamvhEirzaWToU8aqXT3xJEPinEClGVZhFmQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:date:message-id:subject:from:to
	:content-type;
	bh=2q90SzX7OliAC/C2kTNeyYshEL4GgSWHT28SxtRsmts=;
	b=Lbtb9jGruuNT5M0faHjEs7Dp2Xb9NvfS5sIAFsM0gRaBJctWLE+hM5wtMgL86jGQW6
	w8k4Y906oCGvsmazNc28ROFOU07yAAuC1u6mU3ox0KcBDdhPV6YSxDomNmqbthKXLJuA
	LvjntDBQGSh9mZcQG2DIhDRQy1bhuvI6WauVpMIR0i1oruizONo3glHo3FKuPOk6dtqe
	m6vJQD8/+t/7ek+Iy5IP5amHnygaHmCKBWItV22S5LZgUPvzDmrq/Az1u59YLfQLdLZn
	18YsIwNxHGWT7UVctzDIee//Cu5Xc59TAHTr6dAxox3hNEP+MJfPRCFNOIlGueKoobLu
	Kj2w==
X-Gm-Message-State: ALoCoQneajAXwnhI6cL/5FmNgA5JDv0l0yGjZy3WLKlj/nWEsroMLSu+Ggb1QdFr5K8dRQoToc3MWwbltWJKjw8wzj4BmedPsUIXmqb4432rHZnTOIhg6sZHmN8PdfyyoZpdkP2rdXD0OogbXQKxXdUNmm0qHxwuOeT47D/BkVim06KsRAokD2M=
X-Received: by 10.180.12.43 with SMTP id v11mr5201995wib.33.1393429428711;
	Wed, 26 Feb 2014 07:43:48 -0800 (PST)
MIME-Version: 1.0
X-Received: by 10.180.12.43 with SMTP id v11mr5201992wib.33.1393429428615;
	Wed, 26 Feb 2014 07:43:48 -0800 (PST)
Received: by 10.216.174.202 with HTTP; Wed, 26 Feb 2014 07:43:48 -0800 (PST)
Date: Wed, 26 Feb 2014 15:43:48 +0000
Message-ID: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
From: Viktor Kleinik <viktor.kleinik@globallogic.com>
To: xen-devel@lists.xen.org
Subject: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8785435034796965010=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8785435034796965010==
Content-Type: multipart/alternative; boundary=001a11c225901cd0c904f3511229

--001a11c225901cd0c904f3511229
Content-Type: text/plain; charset=ISO-8859-1

Hi all,

Does anyone know anything about future plans to implement
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?

Is their implementation expected in the near future?

Regards,
Victor

--001a11c225901cd0c904f3511229--


--===============8785435034796965010==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8785435034796965010==--


From xen-devel-bounces@lists.xen.org Wed Feb 26 16:00:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgty-0003hT-L3; Wed, 26 Feb 2014 16:00:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIgtw-0003Um-RE
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:00:05 +0000
Received: from [85.158.139.211:4836] by server-5.bemta-5.messagelabs.com id
	EF/6E-32749-48F0E035; Wed, 26 Feb 2014 16:00:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393430403!1904880!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31190 invoked from network); 26 Feb 2014 16:00:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 16:00:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 16:00:06 +0000
Message-Id: <530E1D9E020000780011F938@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 16:00:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
	<5306E5D3.6000302@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 06:15, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> @@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>              write_debugreg(6, exit_qualification | 0xffff0ff0);
> -            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
> -                goto exit_and_crash;
> -            domain_pause_for_debugger();
> +            if ( v->domain->debugger_attached )
> +                domain_pause_for_debugger();
> +            else 
> +            {
> +                __restore_debug_registers(v);
> +                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
> +            }

I suppose you need to set DR6.BS after restoring the registers?

Also, the change looks rather simple - is that really correct for both
cpu_has_monitor_trap_flag and !cpu_has_monitor_trap_flag cases?
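
[Archive note] DR6.BS is bit 14; the quoted hunk ORs the exit qualification with 0xffff0ff0, the DR6 bits that always read as 1. The bit arithmetic can be sanity-checked with plain shell arithmetic (a quick illustration, not Xen code):

```shell
#!/bin/sh
# DR6.BS is bit 14 (0x4000). 0xffff0ff0 is the mask of DR6 bits that
# always read as 1, as used in the quoted write_debugreg(6, ...) hunk;
# BS is not part of that mask, so it has to be set explicitly.
BS=$(( 1 << 14 ))
printf 'BS        = 0x%x\n' "$BS"                      # 0x4000
printf 'mask | BS = 0x%x\n' $(( 0xffff0ff0 | BS ))     # 0xffff4ff0
```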

> BTW: I also think we should clear the CPU_BASED_MOV_DR_EXITING bit in 
> __restore_debug_registers(). After restore the debug register, we should not 
> trap any DR access unless the VCPU is scheduled out again. Not sure whether I 
> am wrong.
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index b128e81..56a3140 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -394,6 +394,8 @@ static void __restore_debug_registers(struct vcpu *v)
>      write_debugreg(3, v->arch.debugreg[3]);
>      write_debugreg(6, v->arch.debugreg[6]);
>      /* DR7 is loaded from the VMCS. */
> +    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MOV_DR_EXITING;
> +    vmx_update_cpu_exec_control(v);
>  }
>  
>  /*

That's being done by at least one of its callers (vmx_dr_access())
already, and I think it was purposefully not done in other cases.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:00:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIgty-0003hT-L3; Wed, 26 Feb 2014 16:00:06 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIgtw-0003Um-RE
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:00:05 +0000
Received: from [85.158.139.211:4836] by server-5.bemta-5.messagelabs.com id
	EF/6E-32749-48F0E035; Wed, 26 Feb 2014 16:00:04 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393430403!1904880!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31190 invoked from network); 26 Feb 2014 16:00:03 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 16:00:03 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 16:00:06 +0000
Message-Id: <530E1D9E020000780011F938@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 16:00:14 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
	<5306E5D3.6000302@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 06:15, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> @@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>              write_debugreg(6, exit_qualification | 0xffff0ff0);
> -            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
> -                goto exit_and_crash;
> -            domain_pause_for_debugger();
> +            if ( v->domain->debugger_attached )
> +                domain_pause_for_debugger();
> +            else 
> +            {
> +                __restore_debug_registers(v);
> +                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
> +            }

I suppose you need to set DR6.BS after restoring the registers?

Also, the change looks rather simple - is that really correct for both
cpu_has_monitor_trap_flag and !cpu_has_monitor_trap_flag cases?

> BTW: I also think we should clear the CPU_BASED_MOV_DR_EXITING bit in 
> __restore_debug_registers(). After restoring the debug registers, we should 
> not trap any DR access unless the VCPU is scheduled out again. Not sure 
> whether I am wrong.
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index b128e81..56a3140 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -394,6 +394,8 @@ static void __restore_debug_registers(struct vcpu *v)
>      write_debugreg(3, v->arch.debugreg[3]);
>      write_debugreg(6, v->arch.debugreg[6]);
>      /* DR7 is loaded from the VMCS. */
> +    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MOV_DR_EXITING;
> +    vmx_update_cpu_exec_control(v);
>  }
>  
>  /*

That's being done by at least one of its callers (vmx_dr_access())
already, and I think it was purposefully not done in other cases.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:13:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIh7A-0003yL-Av; Wed, 26 Feb 2014 16:13:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIh79-0003yE-AY
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 16:13:43 +0000
Received: from [193.109.254.147:29296] by server-10.bemta-14.messagelabs.com
	id 5D/90-10711-6B21E035; Wed, 26 Feb 2014 16:13:42 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393431221!3337368!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18252 invoked from network); 26 Feb 2014 16:13:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 16:13:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 16:13:48 +0000
Message-Id: <530E20D3020000780011F952@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 16:13:55 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Saurabh Mishra" <saurabh.globe@gmail.com>
References: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
In-Reply-To: <CAMnwyJ1C+YPde9VU0QxVY5ne52LzuGtDmOmzNYyNEnQsqaws4A@mail.gmail.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen hypervisor panic in SuSE 11 SP2 (Xen
 4.1.2_14-0.5.5)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 01:14, Saurabh Mishra <saurabh.globe@gmail.com> wrote:
> What are our options here assuming upgrading to Xen 4.2.2 (SuSE 11 SP3) is
> the only possibility?

I don't see why, first of all, you wouldn't want to update to the most
recent SP2 version, which is 4.1.5 based.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:21:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhEs-00049n-CI; Wed, 26 Feb 2014 16:21:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WIhEq-00049i-V1
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:21:41 +0000
Received: from [85.158.137.68:54526] by server-13.bemta-3.messagelabs.com id
	A0/06-26923-4941E035; Wed, 26 Feb 2014 16:21:40 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393431698!3144147!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21002 invoked from network); 26 Feb 2014 16:21:39 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 16:21:39 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=laptop)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WIhEB-0005Xq-55; Wed, 26 Feb 2014 16:20:59 +0000
Received: by laptop (Postfix, from userid 1000)
	id 523D5100A8205; Wed, 26 Feb 2014 17:20:57 +0100 (CET)
Date: Wed, 26 Feb 2014 17:20:57 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226162057.GW6835@laptop.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


You don't happen to have a proper state diagram for this thing, do you?

I suppose I'm going to have to make one; this is all getting a bit
unwieldy, and those xchg() + fixup things are hard to read.

On Wed, Feb 26, 2014 at 10:14:23AM -0500, Waiman Long wrote:
> +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
> +{
> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +	u16		     old;
> +
> +	/*
> +	 * Fall into the quick spinning code path only if no one is waiting
> +	 * or the lock is available.
> +	 */
> +	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
> +		     (qsval != _QSPINLOCK_WAITING)))
> +		return 0;
> +
> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
> +
> +	if (old == 0) {
> +		/*
> +		 * Got the lock, can clear the waiting bit now
> +		 */
> +		smp_u8_store_release(&qlock->wait, 0);


So we just did an atomic op, and now you're trying to optimize this
write. Why do you need a whole byte for that?

Surely a cmpxchg loop with the right atomic op can't be _that_ much
slower? It's far more readable and likely avoids that steal failure below
as well.

> +		return 1;
> +	} else if (old == _QSPINLOCK_LOCKED) {
> +try_again:
> +		/*
> +		 * Wait until the lock byte is cleared to get the lock
> +		 */
> +		do {
> +			cpu_relax();
> +		} while (ACCESS_ONCE(qlock->lock));
> +		/*
> +		 * Set the lock bit & clear the waiting bit
> +		 */
> +		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
> +			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
> +			return 1;
> +		/*
> +		 * Someone has stolen the lock, so wait again
> +		 */
> +		goto try_again;

That's just a fail.. steals should not ever be allowed. It's a fair lock
after all.

> +	} else if (old == _QSPINLOCK_WAITING) {
> +		/*
> +		 * Another task is already waiting while it steals the lock.
> +		 * A bit of unfairness here won't change the big picture.
> +		 * So just take the lock and return.
> +		 */
> +		return 1;
> +	}
> +	/*
> +	 * Nothing needs to be done if the old value is
> +	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
> +	 */
> +	return 0;
> +}




> @@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  		return;
>  	}
>  
> +#ifdef queue_code_xchg
> +	prev_qcode = queue_code_xchg(lock, my_qcode);
> +#else
>  	/*
>  	 * Exchange current copy of the queue node code
>  	 */
> @@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  	} else
>  		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
>  	my_qcode &= ~_QSPINLOCK_LOCKED;
> +#endif /* queue_code_xchg */
>  
>  	if (prev_qcode) {
>  		/*

That's just horrible.. please just make the entire #else branch another
version of that same queue_code_xchg() function.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:23:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:23:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhGJ-0004DR-Fd; Wed, 26 Feb 2014 16:23:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WIhG8-0004DJ-Bb
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:23:00 +0000
Received: from [85.158.143.35:36835] by server-2.bemta-4.messagelabs.com id
	07/1E-04779-3E41E035; Wed, 26 Feb 2014 16:22:59 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393431778!8515160!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13202 invoked from network); 26 Feb 2014 16:22:58 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 16:22:58 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=laptop)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WIhFt-0005gM-JI; Wed, 26 Feb 2014 16:22:45 +0000
Received: by laptop (Postfix, from userid 1000)
	id B5AB1100A8209; Wed, 26 Feb 2014 17:22:43 +0100 (CET)
Date: Wed, 26 Feb 2014 17:22:43 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226162243.GX6835@laptop.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte
 queue spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:21AM -0500, Waiman Long wrote:

> +struct qnode {
> +	u32		 wait;		/* Waiting flag		*/
> +	struct qnode	*next;		/* Next queue node addr */
> +};
> +
> +struct qnode_set {
> +	struct qnode	nodes[MAX_QNODES];
> +	int		node_idx;	/* Current node to use */
> +};
> +
> +/*
> + * Per-CPU queue node structures
> + */
> +static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };

So I've not yet wrapped my head around any of this; and I see a later
patch adds some paravirt gunk to this, but it does blow that you can't
keep it to a single cacheline for the sane case.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:23:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:23:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhGJ-0004DR-Fd; Wed, 26 Feb 2014 16:23:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WIhG8-0004DJ-Bb
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:23:00 +0000
Received: from [85.158.143.35:36835] by server-2.bemta-4.messagelabs.com id
	07/1E-04779-3E41E035; Wed, 26 Feb 2014 16:22:59 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393431778!8515160!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13202 invoked from network); 26 Feb 2014 16:22:58 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 16:22:58 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=laptop)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WIhFt-0005gM-JI; Wed, 26 Feb 2014 16:22:45 +0000
Received: by laptop (Postfix, from userid 1000)
	id B5AB1100A8209; Wed, 26 Feb 2014 17:22:43 +0100 (CET)
Date: Wed, 26 Feb 2014 17:22:43 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226162243.GX6835@laptop.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte
 queue spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:21AM -0500, Waiman Long wrote:

> +struct qnode {
> +	u32		 wait;		/* Waiting flag		*/
> +	struct qnode	*next;		/* Next queue node addr */
> +};
> +
> +struct qnode_set {
> +	struct qnode	nodes[MAX_QNODES];
> +	int		node_idx;	/* Current node to use */
> +};
> +
> +/*
> + * Per-CPU queue node structures
> + */
> +static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };

So I've not yet wrapped my head around any of this; and I see a later
patch adds some paravirt gunk to this, but it does blow that you can't
keep it to a single cacheline for the sane case.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:25:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:25:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhIA-0004LD-9E; Wed, 26 Feb 2014 16:25:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WIhI9-0004L4-5f
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:25:05 +0000
Received: from [85.158.137.68:28187] by server-10.bemta-3.messagelabs.com id
	B5/3E-07302-0651E035; Wed, 26 Feb 2014 16:25:04 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393431902!3145116!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13797 invoked from network); 26 Feb 2014 16:25:03 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 16:25:03 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=laptop)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WIhHq-0005mC-37; Wed, 26 Feb 2014 16:24:46 +0000
Received: by laptop (Postfix, from userid 1000)
	id 20287100A8205; Wed, 26 Feb 2014 17:24:40 +0100 (CET)
Date: Wed, 26 Feb 2014 17:24:40 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226162440.GY6835@laptop.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte
 queue spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:21AM -0500, Waiman Long wrote:
> +static void put_qnode(void)
> +{
> +	struct qnode_set *qset = this_cpu_ptr(&qnset);
> +
> +	qset->node_idx--;
> +}

That very much wants to be: this_cpu_dec().

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:32:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:32:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhPR-0004cM-Rj; Wed, 26 Feb 2014 16:32:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.ellenberg@linbit.com>) id 1WIhNZ-0004bn-3r
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 16:30:41 +0000
Received: from [193.109.254.147:27765] by server-1.bemta-14.messagelabs.com id
	B5/0E-15438-0B61E035; Wed, 26 Feb 2014 16:30:40 +0000
X-Env-Sender: lars.ellenberg@linbit.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393432239!7017296!1
X-Originating-IP: [212.69.166.240]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4912 invoked from network); 26 Feb 2014 16:30:39 -0000
Received: from zimbra13.linbit.com (HELO zimbra13.linbit.com) (212.69.166.240)
	by server-8.tower-27.messagelabs.com with SMTP;
	26 Feb 2014 16:30:39 -0000
Received: from localhost (localhost [127.0.0.1])
	by zimbra13.linbit.com (Postfix) with ESMTP id DF751321763;
	Wed, 26 Feb 2014 17:30:38 +0100 (CET)
Received: from zimbra13.linbit.com ([127.0.0.1])
	by localhost (zimbra13.linbit.com [127.0.0.1]) (amavisd-new, port 10032)
	with ESMTP id SifhcoXO_i3h; Wed, 26 Feb 2014 17:30:38 +0100 (CET)
Received: from localhost (localhost [127.0.0.1])
	by zimbra13.linbit.com (Postfix) with ESMTP id 4493632176C;
	Wed, 26 Feb 2014 17:30:37 +0100 (CET)
X-Virus-Scanned: amavisd-new at linbit.com
Received: from zimbra13.linbit.com ([127.0.0.1])
	by localhost (zimbra13.linbit.com [127.0.0.1]) (amavisd-new, port 10026)
	with ESMTP id nBroO_SOUrzi; Wed, 26 Feb 2014 17:30:37 +0100 (CET)
Received: from soda.linbit (tuerlsteher.linbit.com [86.59.100.100])
	by zimbra13.linbit.com (Postfix) with ESMTPS id A35AE321763;
	Wed, 26 Feb 2014 17:30:36 +0100 (CET)
Date: Wed, 26 Feb 2014 17:30:36 +0100
From: Lars Ellenberg <lars.ellenberg@linbit.com>
To: drbd-dev@lists.linbit.com, xen-devel@lists.xenproject.org,
	drbd-user@lists.linbit.com
Message-ID: <20140226163036.GE3154@soda.linbit>
Mail-Followup-To: drbd-dev@lists.linbit.com, xen-devel@lists.xenproject.org,
	drbd-user@lists.linbit.com
References: <1392911976-37506-1-git-send-email-roger.pau@citrix.com>
	<530E05DA.6070500@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530E05DA.6070500@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Wed, 26 Feb 2014 16:32:36 +0000
Subject: Re: [Xen-devel] [Drbd-dev] [PATCH] block-drbd: type is "phy" for
	drbd backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 04:18:50PM +0100, Roger Pau Monné wrote:
> 
> I've realized I haven't Cc'ed the drbd developers lists, so adding it now.

We do read the user's list just as well.
Our -dev is actually mainly intended for "coordination".

> On 20/02/14 16:59, Roger Pau Monne wrote:
> > The type written to xenstore by libxl when attaching a drbd backend is
> > "phy", not "drbd", so handle this case also.

If you say so; you should know ;-)
Did this change at some point?
I personally have not used this script in a long while...

Do you ("citrix") have this in some
regression/integration testing yourself?

> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  scripts/block-drbd |    4 ++--
> >  1 files changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/scripts/block-drbd b/scripts/block-drbd
> > index 5563ccb..975802b 100755
> > --- a/scripts/block-drbd
> > +++ b/scripts/block-drbd
> > @@ -250,7 +250,7 @@ case "$command" in
> >      fi
> >  
> >      case $t in
> > -      drbd)
> > +      drbd|phy)
> >          drbd_resource=$p
> >          drbd_role="$(drbdadm role $drbd_resource)"
> >          drbd_lrole="${drbd_role%%/*}"
> > @@ -278,7 +278,7 @@ case "$command" in
> >  
> >    remove)
> >      case $t in
> > -      drbd)
> > +      drbd|phy)
> >          p=$(xenstore_read "$XENBUS_PATH/params")
> >          drbd_resource=$p
> >          drbd_role="$(drbdadm role $drbd_resource)"

-- 

: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
__
please don't Cc me, but send to list   --   I'm subscribed

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:41:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhXn-0004nd-2Y; Wed, 26 Feb 2014 16:41:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIhXl-0004nY-Oi
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 16:41:14 +0000
Received: from [85.158.139.211:16323] by server-16.bemta-5.messagelabs.com id
	E9/EA-05060-8291E035; Wed, 26 Feb 2014 16:41:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393432872!6467485!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18540 invoked from network); 26 Feb 2014 16:41:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 16:41:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 16:41:26 +0000
Message-Id: <530E2741020000780011F981@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 16:41:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yuriy Bulygin <yuriy.bulygin@intel.com>,
	Asit K Mallick <asit.k.mallick@intel.com>, Susie Li <susie.li@intel.com>,
	Yong Y Wang <yong.y.wang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Will Auld <will.auld@intel.com>
Subject: Re: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 05:16, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Q3: the "...  without AER capability?" warning triggers on Jan's systems --> is 
> it an issue? or, how to handle it properly?
> [Asit] The BIOS can have an option to not expose the AER capability. It would be 
> good to check the BIOS setup options. The error reporting should be masked, so 
> no action is needed.
> [Yuriy] I expanded the answer to Q3 vs. what's in the attached email after 
> we found out that when root port is operating in DMI mode, AER ext. 
> capability is not in the chain of ext. capability headers. Please use this 
> one instead.
> Answer to Q3:
> On Romley system (DID 0x3c00 ... 0x3c0b), for Host bridge BDF=00:00.0, when 
> the root port is operating as DMI, AER extended capability is defined in 
> VSHDR (Vendor Specific Header) configuration register (offset 0x148). It 
> should have value 0x0004.
> If pci_find_ext_capability() does not find the AER capability, then for 
> BDF=00:00.0 the patch would need to check whether the VSHDR register has the 
> value 0x0004 in bits [15:0].
> Below I've provided example fix for this case:
>     case 0x3c00 ... 0x3c0b:
>         pos = pci_find_ext_capability(seg, bus, pdev->devfn,
>                                       PCI_EXT_CAP_ID_ERR);
>         if ( !pos )
>         {
>             if ( 0 == bus && 0 == pdev->devfn )
>             {
> >                 dmi_aer_cap_id = pci_conf_read16(seg, 0, 0, 0, 0x148); // DMI Specific AER Capability ID
> >                 if ( 0x0004 != dmi_aer_cap_id )
> >                 {
> >                     printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n", seg, bus, dev, func);
> >                     break;
> >                 }


If we get here, pos needs to have a proper value - what would that
be?

> Q4: the patches have no way of handling future chipsets (yet we also have no 
> indication that future chipsets would not exhibit the same bad behavior) --> 
> thoughts?
> [Jinsong] IMHO handle future chipset case by case.

Ugly. And it's hard to believe that, over the extremely long time it
took you (as a company) to come back, there were no new
additions to either of the two sets of PCI IDs that need dealing
with.

> BTW, some other information from Yuriy:
> VT-d-mask-UR-host-bridge.patch:
> 1. The workaround is only applicable to the host bridge device 00:00.0 
> (DMIBAR does not exist for other devices). The patch is written generically 
> for any PCIe device/bridge.

Does that mean that all other devices would have the respective
signal filtered by the host bridge if that one has UR signaling
masked?

That would reduce the number of PCI IDs needing dealing with,
wouldn't it? In which case we'd need to know which ones to care
about (I'd guess just 0x3400 and 0x3c00), and whether we in
fact need to check bdf (see the older workaround dealing with
0x342e and 0x3c28, which don't check where in the topology they
live either).

> 2. The workaround is only needed when SERR is enabled. As there will be only 
> a fraction of client systems with SERR enabled, it might be worthwhile to 
> only apply it when SERRE is 1.
>    val = pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);
>    if ( val & PCI_COMMAND_SERR )
>      apply this workaround

But SERR may get enabled subsequently by the Dom0 kernel. I'd
rather not leave that case un-addressed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:41:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhXn-0004nd-2Y; Wed, 26 Feb 2014 16:41:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIhXl-0004nY-Oi
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 16:41:14 +0000
Received: from [85.158.139.211:16323] by server-16.bemta-5.messagelabs.com id
	E9/EA-05060-8291E035; Wed, 26 Feb 2014 16:41:12 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393432872!6467485!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18540 invoked from network); 26 Feb 2014 16:41:12 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 16:41:12 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 16:41:26 +0000
Message-Id: <530E2741020000780011F981@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Wed, 26 Feb 2014 16:41:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Yuriy Bulygin <yuriy.bulygin@intel.com>,
	Asit K Mallick <asit.k.mallick@intel.com>, Susie Li <susie.li@intel.com>,
	Yong Y Wang <yong.y.wang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Will Auld <will.auld@intel.com>
Subject: Re: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 05:16, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Q3: the "...  without AER capability?" warning triggers on Jan's systems --> is 
> it an issue? or, how to handle it properly?
> [Asit] BIOS can have option to not expose AER capability. It will be good to 
> check the BIOS setup options. The error reporting should be masked so not 
> action needed.
> [Yuriy] I expanded the answer to Q3 vs. what's in the attached email after 
> we found out that when root port is operating in DMI mode, AER ext. 
> capability is not in the chain of ext. capability headers. Please use this 
> one instead.
> Answer to Q3:
> On Romley system (DID 0x3c00 ... 0x3c0b), for Host bridge BDF=00:00.0, when 
> the root port is operating as DMI, AER extended capability is defined in 
> VSHDR (Vendor Specific Header) configuration register (offset 0x148). It 
> should have value 0x0004.
> After pci_find_ext_capability, if it didn't find AER capability, for 
> BDF=00:00.0 the patch would need to check if VSHDR register has value 0x0004 
> in bits [15:0].
> Below I've provided example fix for this case:
>     case 0x3c00 ... 0x3c0b:
>         pos = pci_find_ext_capability(seg, bus, pdev->devfn,
>                                       PCI_EXT_CAP_ID_ERR);
>         if ( !pos )
>         {
>             if ( 0 == bus && 0 == pdev->devfn )
>             {
>                 dmi_aer_cap_id = pci_conf_read16(seg, 0, 0, 0, 0x148) // DMI Specific AER Capability ID
>                 if ( 0x0004 != dmi_aer_cap_id )
>                 {
>                     printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n", seg, bus, dev, func);
>                     break;           
>                 }


If we get here, pos needs to have a proper value - what would that
be?

> Q4: the patches have no way of handling future chipsets (yet we also have no 
> indication that future chipsets would not exhibit the same bad behavior) --> 
> thoughts?
> [Jinsong] IMHO handle future chipset case by case.

Ugly. And it is hard to believe that, over the extremely long time
it took you (as a company) to come back, there were no new
additions to either of the two sets of PCI IDs that need dealing
with.

> BTW, some other information from Yuriy:
> VT-d-mask-UR-host-bridge.patch:
> 1. The workaround is only applicable to the host bridge device 00:00.0 
> (DMIBAR does not exist for other devices). The patch is written generically 
> for any PCIe device/bridge.

Does that mean that all other devices would have the respective
signal filtered by the host bridge if that one has UR signaling
masked?

That would reduce the number of PCI IDs needing to be dealt with,
wouldn't it? In which case we'd need to know which ones to care
about (I'd guess just 0x3400 and 0x3c00), and whether we in
fact need to check bdf (see the older workaround dealing with
0x342e and 0x3c28, which don't check where in the topology they
live either).

> 2. The workaround is only needed when SERR is enabled. As there will be only 
> a fraction of client systems with SERR enabled, it might be worthwhile to 
> only apply it when SERRE is 1.
>    val = pci_conf_read16(seg, bus, dev, func, PCI_COMMAND);
>    if ( val & PCI_COMMAND_SERR )
>      apply this workaround

But SERR may get enabled subsequently by the Dom0 kernel. I'd
rather not leave that case un-addressed.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 16:43:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 16:43:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhaE-0004so-Nd; Wed, 26 Feb 2014 16:43:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WIhaC-0004se-47
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 16:43:44 +0000
Received: from [85.158.139.211:49319] by server-7.bemta-5.messagelabs.com id
	42/1D-14867-FB91E035; Wed, 26 Feb 2014 16:43:43 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393433020!6455802!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15645 invoked from network); 26 Feb 2014 16:43:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 16:43:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="105977861"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 16:43:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 11:43:39 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=localhost.localdomain)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WIhHt-0002gg-Ck;
	Wed, 26 Feb 2014 16:24:49 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>, <linux-kernel@vger.kernel.org>
Date: Wed, 26 Feb 2014 17:24:44 +0100
Message-ID: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

QWRkIHN1cHBvcnQgZm9yIE1TSSBtZXNzYWdlIGdyb3VwcyBmb3IgWGVuIERvbTAgdXNpbmcgdGhl
Ck1BUF9QSVJRX1RZUEVfTVVMVElfTVNJIHBpcnEgbWFwIHR5cGUuCgpJbiBvcmRlciB0byBrZWVw
IHRyYWNrIG9mIHdoaWNoIHBpcnEgaXMgdGhlIGZpcnN0IG9uZSBpbiB0aGUgZ3JvdXAgYWxsCnBp
cnFzIGluIHRoZSBNU0kgZ3JvdXAgZXhjZXB0IGZvciB0aGUgZmlyc3Qgb25lIGhhdmUgdGhlIG5l
d2x5CmludHJvZHVjZWQgUElSUV9NU0lfR1JPVVAgZmxhZyBzZXQuIFRoaXMgcHJldmVudHMgY2Fs
bGluZwpQSFlTREVWT1BfdW5tYXBfcGlycSBvbiB0aGVtLCBzaW5jZSB0aGUgdW5tYXAgbXVzdCBi
ZSBkb25lIHdpdGggdGhlCmZpcnN0IHBpcnEgaW4gdGhlIGdyb3VwLgoKU2lnbmVkLW9mZi1ieTog
Um9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+Ci0tLQpUZXN0ZWQgd2l0aCBh
biBJbnRlbCBJQ0g4IEFIQ0kgU0FUQSBjb250cm9sbGVyLgotLS0KIGFyY2gveDg2L3BjaS94ZW4u
YyAgICAgICAgICAgICAgICAgICB8ICAgMjkgKysrKysrKysrKysrKystLS0tLS0KIGRyaXZlcnMv
eGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jICAgICB8ICAgNDcgKysrKysrKysrKysrKysrKysrKysr
KystLS0tLS0tLS0tLQogZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19pbnRlcm5hbC5oIHwgICAg
MSArCiBpbmNsdWRlL3hlbi9ldmVudHMuaCAgICAgICAgICAgICAgICAgfCAgICAyICstCiBpbmNs
dWRlL3hlbi9pbnRlcmZhY2UvcGh5c2Rldi5oICAgICAgfCAgICAxICsKIDUgZmlsZXMgY2hhbmdl
ZCwgNTUgaW5zZXJ0aW9ucygrKSwgMjUgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvYXJjaC94
ODYvcGNpL3hlbi5jIGIvYXJjaC94ODYvcGNpL3hlbi5jCmluZGV4IDEwM2U3MDIuLjkwNTk1NmYg
MTAwNjQ0Ci0tLSBhL2FyY2gveDg2L3BjaS94ZW4uYworKysgYi9hcmNoL3g4Ni9wY2kveGVuLmMK
QEAgLTE3OCw2ICsxNzgsNyBAQCBzdGF0aWMgaW50IHhlbl9zZXR1cF9tc2lfaXJxcyhzdHJ1Y3Qg
cGNpX2RldiAqZGV2LCBpbnQgbnZlYywgaW50IHR5cGUpCiAJaSA9IDA7CiAJbGlzdF9mb3JfZWFj
aF9lbnRyeShtc2lkZXNjLCAmZGV2LT5tc2lfbGlzdCwgbGlzdCkgewogCQlpcnEgPSB4ZW5fYmlu
ZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLCB2W2ldLAorCQkJCQkgICAgICAgKHR5cGUg
PT0gUENJX0NBUF9JRF9NU0kpID8gbnZlYyA6IDEsCiAJCQkJCSAgICAgICAodHlwZSA9PSBQQ0lf
Q0FQX0lEX01TSVgpID8KIAkJCQkJICAgICAgICJwY2lmcm9udC1tc2kteCIgOgogCQkJCQkgICAg
ICAgInBjaWZyb250LW1zaSIsCkBAIC0yNDUsNiArMjQ2LDcgQEAgc3RhdGljIGludCB4ZW5faHZt
X3NldHVwX21zaV9pcnFzKHN0cnVjdCBwY2lfZGV2ICpkZXYsIGludCBudmVjLCBpbnQgdHlwZSkK
IAkJCQkieGVuOiBtc2kgYWxyZWFkeSBib3VuZCB0byBwaXJxPSVkXG4iLCBwaXJxKTsKIAkJfQog
CQlpcnEgPSB4ZW5fYmluZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLCBwaXJxLAorCQkJ
CQkgICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0kpID8gbnZlYyA6IDEsCiAJCQkJCSAgICAg
ICAodHlwZSA9PSBQQ0lfQ0FQX0lEX01TSVgpID8KIAkJCQkJICAgICAgICJtc2kteCIgOiAibXNp
IiwKIAkJCQkJICAgICAgIERPTUlEX1NFTEYpOwpAQCAtMjY5LDkgKzI3MSw2IEBAIHN0YXRpYyBp
bnQgeGVuX2luaXRkb21fc2V0dXBfbXNpX2lycXMoc3RydWN0IHBjaV9kZXYgKmRldiwgaW50IG52
ZWMsIGludCB0eXBlKQogCWludCByZXQgPSAwOwogCXN0cnVjdCBtc2lfZGVzYyAqbXNpZGVzYzsK
IAotCWlmICh0eXBlID09IFBDSV9DQVBfSURfTVNJICYmIG52ZWMgPiAxKQotCQlyZXR1cm4gMTsK
LQogCWxpc3RfZm9yX2VhY2hfZW50cnkobXNpZGVzYywgJmRldi0+bXNpX2xpc3QsIGxpc3QpIHsK
IAkJc3RydWN0IHBoeXNkZXZfbWFwX3BpcnEgbWFwX2lycTsKIAkJZG9taWRfdCBkb21pZDsKQEAg
LTI5MSw3ICsyOTAsMTAgQEAgc3RhdGljIGludCB4ZW5faW5pdGRvbV9zZXR1cF9tc2lfaXJxcyhz
dHJ1Y3QgcGNpX2RldiAqZGV2LCBpbnQgbnZlYywgaW50IHR5cGUpCiAJCQkgICAgICAocGNpX2Rv
bWFpbl9ucihkZXYtPmJ1cykgPDwgMTYpOwogCQltYXBfaXJxLmRldmZuID0gZGV2LT5kZXZmbjsK
IAotCQlpZiAodHlwZSA9PSBQQ0lfQ0FQX0lEX01TSVgpIHsKKwkJaWYgKHR5cGUgPT0gUENJX0NB
UF9JRF9NU0kgJiYgbnZlYyA+IDEpIHsKKwkJCW1hcF9pcnEudHlwZSA9IE1BUF9QSVJRX1RZUEVf
TVVMVElfTVNJOworCQkJbWFwX2lycS5lbnRyeV9uciA9IG52ZWM7CisJCX0gZWxzZSBpZiAodHlw
ZSA9PSBQQ0lfQ0FQX0lEX01TSVgpIHsKIAkJCWludCBwb3M7CiAJCQl1MzIgdGFibGVfb2Zmc2V0
LCBiaXI7CiAKQEAgLTMwOCw2ICszMTAsMTYgQEAgc3RhdGljIGludCB4ZW5faW5pdGRvbV9zZXR1
cF9tc2lfaXJxcyhzdHJ1Y3QgcGNpX2RldiAqZGV2LCBpbnQgbnZlYywgaW50IHR5cGUpCiAJCWlm
IChwY2lfc2VnX3N1cHBvcnRlZCkKIAkJCXJldCA9IEhZUEVSVklTT1JfcGh5c2Rldl9vcChQSFlT
REVWT1BfbWFwX3BpcnEsCiAJCQkJCQkgICAgJm1hcF9pcnEpOworCQlpZiAodHlwZSA9PSBQQ0lf
Q0FQX0lEX01TSSAmJiBudmVjID4gMSAmJiByZXQpIHsKKwkJCS8qCisJCQkgKiBJZiBNQVBfUElS
UV9UWVBFX01VTFRJX01TSSBpcyBub3QgYXZhaWxhYmxlCisJCQkgKiB0aGVyZSdzIG5vdGhpbmcg
ZWxzZSB3ZSBjYW4gZG8gaW4gdGhpcyBjYXNlLgorCQkJICogSnVzdCBzZXQgcmV0ID4gMCBzbyBk
cml2ZXIgY2FuIHJldHJ5IHdpdGgKKwkJCSAqIHNpbmdsZSBNU0kuCisJCQkgKi8KKwkJCXJldCA9
IDE7CisJCQlnb3RvIG91dDsKKwkJfQogCQlpZiAocmV0ID09IC1FSU5WQUwgJiYgIXBjaV9kb21h
aW5fbnIoZGV2LT5idXMpKSB7CiAJCQltYXBfaXJxLnR5cGUgPSBNQVBfUElSUV9UWVBFX01TSTsK
IAkJCW1hcF9pcnEuaW5kZXggPSAtMTsKQEAgLTMyNCwxMSArMzM2LDEwIEBAIHN0YXRpYyBpbnQg
eGVuX2luaXRkb21fc2V0dXBfbXNpX2lycXMoc3RydWN0IHBjaV9kZXYgKmRldiwgaW50IG52ZWMs
IGludCB0eXBlKQogCQkJZ290byBvdXQ7CiAJCX0KIAotCQlyZXQgPSB4ZW5fYmluZF9waXJxX21z
aV90b19pcnEoZGV2LCBtc2lkZXNjLAotCQkJCQkgICAgICAgbWFwX2lycS5waXJxLAotCQkJCQkg
ICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSA/Ci0JCQkJCSAgICAgICAibXNpLXgiIDog
Im1zaSIsCi0JCQkJCQlkb21pZCk7CisJCXJldCA9IHhlbl9iaW5kX3BpcnFfbXNpX3RvX2lycShk
ZXYsIG1zaWRlc2MsIG1hcF9pcnEucGlycSwKKwkJICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJKSA/IG52ZWMgOiAxLAorCQkgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSA/ICJtc2kteCIgOiAi
bXNpIiwKKwkJICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGRvbWlkKTsKIAkJaWYgKHJl
dCA8IDApCiAJCQlnb3RvIG91dDsKIAl9CmRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi9ldmVudHMv
ZXZlbnRzX2Jhc2UuYyBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCmluZGV4IGY0
YTllMzMuLmZmMjBhZTIgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFz
ZS5jCisrKyBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCkBAIC0zOTEsMTAgKzM5
MSwxMCBAQCBzdGF0aWMgdm9pZCB4ZW5faXJxX2luaXQodW5zaWduZWQgaXJxKQogCWxpc3RfYWRk
X3RhaWwoJmluZm8tPmxpc3QsICZ4ZW5faXJxX2xpc3RfaGVhZCk7CiB9CiAKLXN0YXRpYyBpbnQg
X19tdXN0X2NoZWNrIHhlbl9hbGxvY2F0ZV9pcnFfZHluYW1pYyh2b2lkKQorc3RhdGljIGludCBf
X211c3RfY2hlY2sgeGVuX2FsbG9jYXRlX2lycXNfZHluYW1pYyhpbnQgbnZlYykKIHsKIAlpbnQg
Zmlyc3QgPSAwOwotCWludCBpcnE7CisJaW50IGksIGlycTsKIAogI2lmZGVmIENPTkZJR19YODZf
SU9fQVBJQwogCS8qCkBAIC00MDgsMTQgKzQwOCwyMiBAQCBzdGF0aWMgaW50IF9fbXVzdF9jaGVj
ayB4ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkKIAkJZmlyc3QgPSBnZXRfbnJfaXJxc19n
c2koKTsKICNlbmRpZgogCi0JaXJxID0gaXJxX2FsbG9jX2Rlc2NfZnJvbShmaXJzdCwgLTEpOwor
CWlycSA9IGlycV9hbGxvY19kZXNjc19mcm9tKGZpcnN0LCBudmVjLCAtMSk7CiAKLQlpZiAoaXJx
ID49IDApCi0JCXhlbl9pcnFfaW5pdChpcnEpOworCWlmIChpcnEgPj0gMCkgeworCQlmb3IgKGkg
PSAwOyBpIDwgbnZlYzsgaSsrKQorCQkJeGVuX2lycV9pbml0KGlycSArIGkpOworCX0KIAogCXJl
dHVybiBpcnE7CiB9CiAKK3N0YXRpYyBpbmxpbmUgaW50IF9fbXVzdF9jaGVjayB4ZW5fYWxsb2Nh
dGVfaXJxX2R5bmFtaWModm9pZCkKK3sKKworCXJldHVybiB4ZW5fYWxsb2NhdGVfaXJxc19keW5h
bWljKDEpOworfQorCiBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayB4ZW5fYWxsb2NhdGVfaXJxX2dz
aSh1bnNpZ25lZCBnc2kpCiB7CiAJaW50IGlycTsKQEAgLTc0MSwyMiArNzQ5LDI1IEBAIGludCB4
ZW5fYWxsb2NhdGVfcGlycV9tc2koc3RydWN0IHBjaV9kZXYgKmRldiwgc3RydWN0IG1zaV9kZXNj
ICptc2lkZXNjKQogfQogCiBpbnQgeGVuX2JpbmRfcGlycV9tc2lfdG9faXJxKHN0cnVjdCBwY2lf
ZGV2ICpkZXYsIHN0cnVjdCBtc2lfZGVzYyAqbXNpZGVzYywKLQkJCSAgICAgaW50IHBpcnEsIGNv
bnN0IGNoYXIgKm5hbWUsIGRvbWlkX3QgZG9taWQpCisJCQkgICAgIGludCBwaXJxLCBpbnQgbnZl
YywgY29uc3QgY2hhciAqbmFtZSwgZG9taWRfdCBkb21pZCkKIHsKLQlpbnQgaXJxLCByZXQ7CisJ
aW50IGksIGlycSwgcmV0OwogCiAJbXV0ZXhfbG9jaygmaXJxX21hcHBpbmdfdXBkYXRlX2xvY2sp
OwogCi0JaXJxID0geGVuX2FsbG9jYXRlX2lycV9keW5hbWljKCk7CisJaXJxID0geGVuX2FsbG9j
YXRlX2lycXNfZHluYW1pYyhudmVjKTsKIAlpZiAoaXJxIDwgMCkKIAkJZ290byBvdXQ7CiAKLQlp
cnFfc2V0X2NoaXBfYW5kX2hhbmRsZXJfbmFtZShpcnEsICZ4ZW5fcGlycV9jaGlwLCBoYW5kbGVf
ZWRnZV9pcnEsCi0JCQluYW1lKTsKKwlmb3IgKGkgPSAwOyBpIDwgbnZlYzsgaSsrKSB7CisJCWly
cV9zZXRfY2hpcF9hbmRfaGFuZGxlcl9uYW1lKGlycSArIGksICZ4ZW5fcGlycV9jaGlwLCBoYW5k
bGVfZWRnZV9pcnEsIG5hbWUpOworCisJCXJldCA9IHhlbl9pcnFfaW5mb19waXJxX3NldHVwKGly
cSArIGksIDAsIHBpcnEgKyBpLCAwLCBkb21pZCwKKwkJCQkJICAgICAgaSA9PSAwID8gMCA6IFBJ
UlFfTVNJX0dST1VQKTsKKwkJaWYgKHJldCA8IDApCisJCQlnb3RvIGVycm9yX2lycTsKKwl9CiAK
LQlyZXQgPSB4ZW5faXJxX2luZm9fcGlycV9zZXR1cChpcnEsIDAsIHBpcnEsIDAsIGRvbWlkLCAw
KTsKLQlpZiAocmV0IDwgMCkKLQkJZ290byBlcnJvcl9pcnE7CiAJcmV0ID0gaXJxX3NldF9tc2lf
ZGVzYyhpcnEsIG1zaWRlc2MpOwogCWlmIChyZXQgPCAwKQogCQlnb3RvIGVycm9yX2lycTsKQEAg
LTc2NCw3ICs3NzUsOCBAQCBvdXQ6CiAJbXV0ZXhfdW5sb2NrKCZpcnFfbWFwcGluZ191cGRhdGVf
bG9jayk7CiAJcmV0dXJuIGlycTsKIGVycm9yX2lycToKLQlfX3VuYmluZF9mcm9tX2lycShpcnEp
OworCWZvciAoOyBpID49IDA7IGktLSkKKwkJX191bmJpbmRfZnJvbV9pcnEoaXJxICsgaSk7CiAJ
bXV0ZXhfdW5sb2NrKCZpcnFfbWFwcGluZ191cGRhdGVfbG9jayk7CiAJcmV0dXJuIHJldDsKIH0K
QEAgLTc4Myw3ICs3OTUsMTIgQEAgaW50IHhlbl9kZXN0cm95X2lycShpbnQgaXJxKQogCWlmICgh
ZGVzYykKIAkJZ290byBvdXQ7CiAKLQlpZiAoeGVuX2luaXRpYWxfZG9tYWluKCkpIHsKKwkvKgor
CSAqIElmIHRyeWluZyB0byByZW1vdmUgYSB2ZWN0b3IgaW4gYSBNU0kgZ3JvdXAgZGlmZmVyZW50
CisJICogdGhhbiB0aGUgZmlyc3Qgb25lIHNraXAgdGhlIFBJUlEgdW5tYXAgdW5sZXNzIHRoaXMg
dmVjdG9yCisJICogaXMgdGhlIGZpcnN0IG9uZSBpbiB0aGUgZ3JvdXAuCisJICovCisJaWYgKHhl
bl9pbml0aWFsX2RvbWFpbigpICYmICEoaW5mby0+dS5waXJxLmZsYWdzICYgUElSUV9NU0lfR1JP
VVApKSB7CiAJCXVubWFwX2lycS5waXJxID0gaW5mby0+dS5waXJxLnBpcnE7CiAJCXVubWFwX2ly
cS5kb21pZCA9IGluZm8tPnUucGlycS5kb21pZDsKIAkJcmMgPSBIWVBFUlZJU09SX3BoeXNkZXZf
b3AoUEhZU0RFVk9QX3VubWFwX3BpcnEsICZ1bm1hcF9pcnEpOwpkaWZmIC0tZ2l0IGEvZHJpdmVy
cy94ZW4vZXZlbnRzL2V2ZW50c19pbnRlcm5hbC5oIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50
c19pbnRlcm5hbC5oCmluZGV4IDY3N2Y0MWEuLjUwYzIwNTBhIDEwMDY0NAotLS0gYS9kcml2ZXJz
L3hlbi9ldmVudHMvZXZlbnRzX2ludGVybmFsLmgKKysrIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c19pbnRlcm5hbC5oCkBAIC01Myw2ICs1Myw3IEBAIHN0cnVjdCBpcnFfaW5mbyB7CiAKICNk
ZWZpbmUgUElSUV9ORUVEU19FT0kJKDEgPDwgMCkKICNkZWZpbmUgUElSUV9TSEFSRUFCTEUJKDEg
PDwgMSkKKyNkZWZpbmUgUElSUV9NU0lfR1JPVVAJKDEgPDwgMikKIAogc3RydWN0IGV2dGNobl9v
cHMgewogCXVuc2lnbmVkICgqbWF4X2NoYW5uZWxzKSh2b2lkKTsKZGlmZiAtLWdpdCBhL2luY2x1
ZGUveGVuL2V2ZW50cy5oIGIvaW5jbHVkZS94ZW4vZXZlbnRzLmgKaW5kZXggYzljODVjZi4uMmFl
N2UwMyAxMDA2NDQKLS0tIGEvaW5jbHVkZS94ZW4vZXZlbnRzLmgKKysrIGIvaW5jbHVkZS94ZW4v
ZXZlbnRzLmgKQEAgLTEwMiw3ICsxMDIsNyBAQCBpbnQgeGVuX2JpbmRfcGlycV9nc2lfdG9faXJx
KHVuc2lnbmVkIGdzaSwKIGludCB4ZW5fYWxsb2NhdGVfcGlycV9tc2koc3RydWN0IHBjaV9kZXYg
KmRldiwgc3RydWN0IG1zaV9kZXNjICptc2lkZXNjKTsKIC8qIEJpbmQgYW4gUFNJIHBpcnEgdG8g
YW4gaXJxLiAqLwogaW50IHhlbl9iaW5kX3BpcnFfbXNpX3RvX2lycShzdHJ1Y3QgcGNpX2RldiAq
ZGV2LCBzdHJ1Y3QgbXNpX2Rlc2MgKm1zaWRlc2MsCi0JCQkgICAgIGludCBwaXJxLCBjb25zdCBj
aGFyICpuYW1lLCBkb21pZF90IGRvbWlkKTsKKwkJCSAgICAgaW50IHBpcnEsIGludCBudmVjLCBj
b25zdCBjaGFyICpuYW1lLCBkb21pZF90IGRvbWlkKTsKICNlbmRpZgogCiAvKiBEZS1hbGxvY2F0
ZXMgdGhlIGFib3ZlIG1lbnRpb25lZCBwaHlzaWNhbCBpbnRlcnJ1cHQuICovCmRpZmYgLS1naXQg
YS9pbmNsdWRlL3hlbi9pbnRlcmZhY2UvcGh5c2Rldi5oIGIvaW5jbHVkZS94ZW4vaW50ZXJmYWNl
L3BoeXNkZXYuaAppbmRleCA0MjcyMWQxLi5lYjEzMzI2ZCAxMDA2NDQKLS0tIGEvaW5jbHVkZS94
ZW4vaW50ZXJmYWNlL3BoeXNkZXYuaAorKysgYi9pbmNsdWRlL3hlbi9pbnRlcmZhY2UvcGh5c2Rl
di5oCkBAIC0xMzEsNiArMTMxLDcgQEAgc3RydWN0IHBoeXNkZXZfaXJxIHsKICNkZWZpbmUgTUFQ
X1BJUlFfVFlQRV9HU0kJCTB4MQogI2RlZmluZSBNQVBfUElSUV9UWVBFX1VOS05PV04JCTB4Mgog
I2RlZmluZSBNQVBfUElSUV9UWVBFX01TSV9TRUcJCTB4MworI2RlZmluZSBNQVBfUElSUV9UWVBF
X01VTFRJX01TSQkJMHg0CiAKICNkZWZpbmUgUEhZU0RFVk9QX21hcF9waXJxCQkxMwogc3RydWN0
IHBoeXNkZXZfbWFwX3BpcnEgewotLSAKMS43LjcuNSAoQXBwbGUgR2l0LTI2KQoKCl9fX19fX19f
X19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fClhlbi1kZXZlbCBtYWlsaW5n
IGxpc3QKWGVuLWRldmVsQGxpc3RzLnhlbi5vcmcKaHR0cDovL2xpc3RzLnhlbi5vcmcveGVuLWRl
dmVsCg==

From xen-devel-bounces@lists.xen.org Wed Feb 26 17:02:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 17:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIhrx-0005Oy-Sz; Wed, 26 Feb 2014 17:02:05 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WIhrw-0005Ok-BL
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 17:02:04 +0000
Received: from [85.158.137.68:26989] by server-14.bemta-3.messagelabs.com id
	37/EE-08196-B0E1E035; Wed, 26 Feb 2014 17:02:03 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393434121!4428121!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11758 invoked from network); 26 Feb 2014 17:02:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 17:02:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="105986959"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 17:02:00 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 12:02:00 -0500
Message-ID: <530E1E07.4090007@citrix.com>
Date: Wed, 26 Feb 2014 18:01:59 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: <drbd-dev@lists.linbit.com>, <xen-devel@lists.xenproject.org>,
	<drbd-user@lists.linbit.com>
References: <1392911976-37506-1-git-send-email-roger.pau@citrix.com>	<530E05DA.6070500@citrix.com>
	<20140226163036.GE3154@soda.linbit>
In-Reply-To: <20140226163036.GE3154@soda.linbit>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Subject: Re: [Xen-devel] [Drbd-dev] [PATCH] block-drbd: type is "phy" for
 drbd backends
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/14 17:30, Lars Ellenberg wrote:
> On Wed, Feb 26, 2014 at 04:18:50PM +0100, Roger Pau Monné wrote:
>>
>> I've realized I haven't Cc'ed the drbd developers lists, so adding it now.
>
> We do read the user's list just as well.
> Our -dev is actually mainly intended for "coordination".
> =

>> On 20/02/14 16:59, Roger Pau Monne wrote:
>>> The type written to xenstore by libxl when attaching a drbd backend is
>>> "phy", not "drbd", so handle this case also.
> =

> If you say so; you should know ;-)
> Did this change at some point?

I'm sure libxl (the new Xen toolstack) uses "phy" and not "drbd" as
type. Probably the old xend toolstack was setting type to "drbd", and
that's why the script checked for drbd.

IMHO, setting the type to phy is the right thing to do, since drbd ends
up presenting a block device which is handled by blkback.

> I personally have not used this script in a long while...
>
> Do you ("citrix") have this in some
> regression/integration testing yourself?

Not in the Xen OSS Test suite [0], although I have no idea if other
products based on Xen perform any kind of tests with drbd.

[0] http://xenbits.xen.org/gitweb/?p=osstest.git;a=summary

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 17:46:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 17:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIiYw-0006OE-7P; Wed, 26 Feb 2014 17:46:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIiYu-0006O9-HP
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 17:46:28 +0000
Received: from [85.158.143.35:11333] by server-1.bemta-4.messagelabs.com id
	AB/D1-31661-3782E035; Wed, 26 Feb 2014 17:46:27 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393436785!8548276!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31174 invoked from network); 26 Feb 2014 17:46:26 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 17:46:26 -0000
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="106002849"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 17:46:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 12:46:24 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIiYq-0004kD-9P;
	Wed, 26 Feb 2014 17:46:24 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIiYp-0004Vv-Tm;
	Wed, 26 Feb 2014 17:46:24 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25308-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 17:46:23 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25308: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25308 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25308/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 25302

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install        fail like 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop     fail in 25302 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop    fail in 25302 never pass

version targeted for testing:
 xen                  4c2502450ef1cc6fa1fd6daab8dbed8af1b88a29
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 351 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:01:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIimw-0006hJ-JO; Wed, 26 Feb 2014 18:00:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WIimv-0006h7-6e
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 18:00:57 +0000
Received: from [85.158.139.211:15324] by server-8.bemta-5.messagelabs.com id
	18/E2-05298-7DB2E035; Wed, 26 Feb 2014 18:00:55 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393437653!6477825!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5172 invoked from network); 26 Feb 2014 18:00:55 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 18:00:55 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 26 Feb 2014 18:00:52 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="681508989"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi01.verizon.com with ESMTP; 26 Feb 2014 18:00:52 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Wed, 26 Feb 2014 13:00:47 -0500
Message-Id: <1393437647-16694-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Don Slutz <dslutz@verizon.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently, in 32 bit mode, the routine hpet_set_timer() will convert a
time in the past to a time in the future.  This is done by the uint32_t
cast of diff.

Even without this issue, hpet_tick_to_ns() does not support past
times.

Real hardware does not support past times.

So just do the same thing in 32 bit mode as 64 bit mode.

Without this change it is possible for an HVM guest running Linux to
get the message:

..MP-BIOS bug: 8254 timer not connected to IO-APIC

on the guest console(s), after which the guest will panic.

The Xen hypervisor console will also be flooded with:

vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/hvm/hpet.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 4324b52..14b1a39 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -197,10 +197,6 @@ static void hpet_stop_timer(HPETState *h, unsigned int tn)
     hpet_get_comparator(h, tn);
 }
 
-/* the number of HPET tick that stands for
- * 1/(2^10) second, namely, 0.9765625 milliseconds */
-#define  HPET_TINY_TIME_SPAN  ((h->stime_freq >> 10) / STIME_PER_HPET_TICK)
-
 static void hpet_set_timer(HPETState *h, unsigned int tn)
 {
     uint64_t tn_cmp, cur_tick, diff;
@@ -231,14 +227,11 @@ static void hpet_set_timer(HPETState *h, unsigned int tn)
     diff = tn_cmp - cur_tick;
 
     /*
-     * Detect time values set in the past. This is hard to do for 32-bit
-     * comparators as the timer does not have to be set that far in the future
-     * for the counter difference to wrap a 32-bit signed integer. We fudge
-     * by looking for a 'small' time value in the past.
+     * Detect time values set in the past. Since hpet_tick_to_ns() does
+     * not handle this, use 0 for both 64 and 32 bit mode.
      */
     if ( (int64_t)diff < 0 )
-        diff = (timer_is_32bit(h, tn) && (-diff > HPET_TINY_TIME_SPAN))
-            ? (uint32_t)diff : 0;
+        diff = 0;
 
     if ( (tn <= 1) && (h->hpet.config & HPET_CFG_LEGACY) )
         /* if LegacyReplacementRoute bit is set, HPET specification requires
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:01:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIimv-0006h8-4M; Wed, 26 Feb 2014 18:00:57 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WIimu-0006h2-6X
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 18:00:56 +0000
Received: from [85.158.139.211:16994] by server-8.bemta-5.messagelabs.com id
	C4/E2-05298-7DB2E035; Wed, 26 Feb 2014 18:00:55 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393437653!6477825!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5117 invoked from network); 26 Feb 2014 18:00:54 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 18:00:54 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 26 Feb 2014 18:00:52 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="681508967"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi01.verizon.com with ESMTP; 26 Feb 2014 18:00:51 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Wed, 26 Feb 2014 13:00:46 -0500
Message-Id: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Keir Fraser <keir@xen.org>, Don Slutz <dslutz@verizon.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH 0/1] Prevent one cause of "MP-BIOS bug: 8254
	timer"... message from linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Based on the proposed fix in QEMU:

http://marc.info/?l=qemu-devel&m=139304386331192&w=2

That was provided for:

http://marc.info/?l=qemu-devel&m=139295851107140&w=2

Which is very close to a bug I have been looking into and asked some
questions about in:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg01787.html

I came up with this change.  It does look like vpt supports starting
a timer in the past; what happens then depends on the timer_mode
setting.  Since real hardware does not support this, and this is the
simpler change, I went this way.

Don Slutz (1):
  hpet: Act more like real hardware

 xen/arch/x86/hvm/hpet.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:01:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIimw-0006hJ-JO; Wed, 26 Feb 2014 18:00:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WIimv-0006h7-6e
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 18:00:57 +0000
Received: from [85.158.139.211:15324] by server-8.bemta-5.messagelabs.com id
	18/E2-05298-7DB2E035; Wed, 26 Feb 2014 18:00:55 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393437653!6477825!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5172 invoked from network); 26 Feb 2014 18:00:55 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 18:00:55 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi01.verizon.com) ([166.68.71.143])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 26 Feb 2014 18:00:52 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,548,1389744000"; d="scan'208";a="681508989"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi01.verizon.com with ESMTP; 26 Feb 2014 18:00:52 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Wed, 26 Feb 2014 13:00:47 -0500
Message-Id: <1393437647-16694-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
Cc: Keir Fraser <keir@xen.org>, Don Slutz <dslutz@verizon.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Currently in 32 bit mode the routine hpet_set_timer() will convert a
time in the past to a time in the future.  This is done by the
uint32_t cast of diff.

Even without this issue, hpet_tick_to_ns() does not support past
times.

Real hardware does not support past times.

So just do the same thing in 32 bit mode as 64 bit mode.

Without this change it is possible for an HVM guest running linux to
get the message:

..MP-BIOS bug: 8254 timer not connected to IO-APIC

on the guest console(s); the guest will then panic.

Also, the Xen hypervisor console will be flooded with:

vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7
vioapic.c:352:d1 Unsupported delivery mode 7

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 xen/arch/x86/hvm/hpet.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index 4324b52..14b1a39 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -197,10 +197,6 @@ static void hpet_stop_timer(HPETState *h, unsigned int tn)
     hpet_get_comparator(h, tn);
 }
 
-/* the number of HPET tick that stands for
- * 1/(2^10) second, namely, 0.9765625 milliseconds */
-#define  HPET_TINY_TIME_SPAN  ((h->stime_freq >> 10) / STIME_PER_HPET_TICK)
-
 static void hpet_set_timer(HPETState *h, unsigned int tn)
 {
     uint64_t tn_cmp, cur_tick, diff;
@@ -231,14 +227,11 @@ static void hpet_set_timer(HPETState *h, unsigned int tn)
     diff = tn_cmp - cur_tick;
 
     /*
-     * Detect time values set in the past. This is hard to do for 32-bit
-     * comparators as the timer does not have to be set that far in the future
-     * for the counter difference to wrap a 32-bit signed integer. We fudge
-     * by looking for a 'small' time value in the past.
+     * Detect time values set in the past. Since hpet_tick_to_ns() does
+     * not handle this, use 0 for both 64 and 32 bit mode.
      */
     if ( (int64_t)diff < 0 )
-        diff = (timer_is_32bit(h, tn) && (-diff > HPET_TINY_TIME_SPAN))
-            ? (uint32_t)diff : 0;
+        diff = 0;
 
     if ( (tn <= 1) && (h->hpet.config & HPET_CFG_LEGACY) )
         /* if LegacyReplacementRoute bit is set, HPET specification requires
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:33:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjID-0007U0-D3; Wed, 26 Feb 2014 18:33:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WIjIB-0007Tv-LB
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 18:33:15 +0000
Received: from [193.109.254.147:61412] by server-7.bemta-14.messagelabs.com id
	32/7B-23424-B633E035; Wed, 26 Feb 2014 18:33:15 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393439592!7040713!1
X-Originating-IP: [140.108.26.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDIgPT4gMzE1NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28756 invoked from network); 26 Feb 2014 18:33:14 -0000
Received: from fldsmtpe03.verizon.com (HELO fldsmtpe03.verizon.com)
	(140.108.26.142)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 18:33:14 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe03.verizon.com with ESMTP; 26 Feb 2014 18:33:11 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="662373626"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by fldsmtpi03.verizon.com with ESMTP; 26 Feb 2014 18:33:11 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Wed, 26 Feb 2014 13:32:16 -0500
Message-ID: <530E332F.3060901@terremark.com>
Date: Wed, 26 Feb 2014 13:32:15 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <5304CA45.7050406@terremark.com>
	<5305D1BC020000780011DEF7@nat28.tlf.novell.com>
In-Reply-To: <5305D1BC020000780011DEF7@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Questions on vpic, vlapic,
 vioapic and line 0 (aka timer)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/20/14 03:58, Jan Beulich wrote:
>>>> On 19.02.14 at 16:14, Don Slutz <dslutz@verizon.com> wrote:
>> For some TBD reason (very very rarely) the routine timer_irq_works() in linux
>> is reporting that the timer IRQ does not work:
>>
>> ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
>> ..MP-BIOS bug: 8254 timer not connected to IO-APIC
>> ...trying to set up timer (IRQ0) through the 8259A ...
>> ..... (found apic 0 pin 2) ...
>> ....... failed.
>> ...trying to set up timer as Virtual Wire IRQ...
>> ..... failed.
>> ...trying to set up timer as ExtINT IRQ...
>>
>> hangs and xen's console is spewing:
>>
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> ...
>>
>> I have determined that the lines (in linux):
>>
>> #ifdef CONFIG_X86_IO_APIC
>>           no_timer_check = 1;
>> #endif
>>
>> that KVM has are missing for Xen.  The simple workaround is to specify
>> "no_timer_check" on the kernel command args.
>>
>> The reason for the email is that I have found the routine
>> __vlapic_accept_pic_intr:
>>
>>       /* We deliver 8259 interrupts to the appropriate CPU as follows. */
>>       return ((/* IOAPIC pin0 is unmasked and routing to this LAPIC? */
>>                ((redir0.fields.delivery_mode == dest_ExtINT) &&
>>                 !redir0.fields.mask &&
>>                 redir0.fields.dest_id == VLAPIC_ID(vlapic) &&
>>                 !vlapic_disabled(vlapic)) ||
>>                /* LAPIC has LVT0 unmasked for ExtInts? */
>>                ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_EXTINT) ||
>>                /* LAPIC is fully disabled? */
>>                vlapic_hw_disabled(vlapic)));
>> }
>>
>>
>> Which looks to imply that the vioapic supports "delivery mode 7"
>> (dest_ExtINT), but this case is missing (the message logged above).
> Not really - the code above suggests the LAPIC emulation is
> prepared for the IOAPIC emulation to support that mode, not
> that the IOAPIC one supports it. At present only LAPIC LVT0
> really supports that delivery mode.

Well, the linux kernel that I have been testing with only sets LAPIC LVT0 to delivery mode 7:


         init_8259A(0);
         make_8259A_irq(0);
         apic_write(APIC_LVT0, APIC_DM_EXTINT);


And that agrees with xen-hvmctx:

dcs-xen-54:~/xen>sudo xen-hvmctx 8 | grep -A7 LINT0
       LVT LINT0  : 00000700
         mask     = 0
         trigger  = edge
         rem irr  = 0
         polarty  = active hi
         status   = idle
         delivry  = ExtINT
         vector   = 00

(Patch in the works for this decode).

And the ioapic does NOT have this mode set:

     IOAPIC: base_address 0xfec00000, ioregsel 0x20 id 0
         pin dsm dsa msk trig rir polarity dls  dlm    vec raw
         00: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         01: phys 00  0  edge   0 activehi idle Fixed   49 0x0000000000000031
         02: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         03: phys 00  0  edge   0 activehi idle Fixed   51 0x0000000000000033
         04: phys 00  0  edge   0 activehi idle Fixed   52 0x0000000000000034
         05: phys 00  1  level  0 activelo idle Fixed   53 0x000000000001a035
         06: phys 00  0  edge   0 activehi idle Fixed   54 0x0000000000000036
         07: phys 00  0  edge   0 activehi idle Fixed   55 0x0000000000000037
         08: phys 00  0  edge   0 activehi idle Fixed   56 0x0000000000000038
         09: phys 00  1  level  0 activelo idle Fixed   57 0x000000000001a039
         10: phys 00  1  level  0 activelo idle Fixed   58 0x000000000001a03a
         11: phys 00  1  level  0 activelo idle Fixed   59 0x000000000001a03b
         12: phys 00  0  edge   0 activehi idle Fixed   60 0x000000000000003c
         13: phys 00  0  edge   0 activehi idle Fixed   61 0x000000000000003d
         14: phys 00  0  edge   0 activehi idle Fixed   62 0x000000000000003e
         15: phys 00  0  edge   0 activehi idle Fixed   63 0x000000000000003f
         16: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         17: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         18: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         19: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         20: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         21: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         22: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         23: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         24: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         25: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         26: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         27: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         28: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         29: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         30: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         31: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         32: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         33: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         34: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         35: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         36: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         37: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         38: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         39: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         40: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         41: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         42: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         43: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         44: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         45: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         46: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         47: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000

So I am now even more confused as to why vioapic.c is reporting this message.


>> Also linux and xen both set "LAPIC has LVT0" to APIC_DM_FIXED for "Virtual
>> Wire IRQ" usage, but this code only allows for APIC_DM_EXTINT.  I have been
>> able to get "Virtual Wire IRQ" usage to work by adding:
>>
>>                /* LAPIC has LVT0 unmasked for Fixed? */
>>                ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_FIXED) ||
>>
>> It is not clear to me if it should be added or just changed.
> Adding this to __vlapic_accept_pic_intr() would be contrary to
> the purpose of the function afaict - fixed delivery mode is
> unrelated to delivering PIC interrupts.

So where would be a better place to add this kind of check?

Or are the PIT and HPET not "connected" to LINTIN0 of the LAPIC?

>> This code looks to state that:
>>
>> ...trying to set up timer (IRQ0) through the 8259A ...
>>
>> is expected to fail.  Is this by design?  (QEMU does allow this case.)
> But in the end I think you're barking up the wrong tree: All of this
> code would be of no interest at all if Linux didn't for some reason go
> into that fallback code. And with that fallback code existing mainly
> (if not exclusively) to deal with (half-)broken hardware/firmware, we
> should really make sure our emulation isn't broken (i.e. the fallback
> never being made use of). So finding the "TBD reason" is what is
> really needed.
>
> Jan
>

I am now sure that I found the base reason in my case.  See:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg02408.html

for my proposed fix.

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:33:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjID-0007U0-D3; Wed, 26 Feb 2014 18:33:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WIjIB-0007Tv-LB
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 18:33:15 +0000
Received: from [193.109.254.147:61412] by server-7.bemta-14.messagelabs.com id
	32/7B-23424-B633E035; Wed, 26 Feb 2014 18:33:15 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393439592!7040713!1
X-Originating-IP: [140.108.26.142]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDIgPT4gMzE1NzU3\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28756 invoked from network); 26 Feb 2014 18:33:14 -0000
Received: from fldsmtpe03.verizon.com (HELO fldsmtpe03.verizon.com)
	(140.108.26.142)
	by server-8.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 18:33:14 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by fldsmtpe03.verizon.com with ESMTP; 26 Feb 2014 18:33:11 +0000
From: Don Slutz <dslutz@verizon.com>
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="662373626"
Received: from unknown (HELO MIA20725CAS892.apps.tmrk.corp) ([162.47.0.51])
	by fldsmtpi03.verizon.com with ESMTP; 26 Feb 2014 18:33:11 +0000
Received: from don-760.CloudSwitch.com (10.25.89.94) by
	MIA20725CAS892.apps.tmrk.corp (10.1.3.224) with Microsoft SMTP Server
	(TLS) id 14.2.318.1; Wed, 26 Feb 2014 13:32:16 -0500
Message-ID: <530E332F.3060901@terremark.com>
Date: Wed, 26 Feb 2014 13:32:15 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <5304CA45.7050406@terremark.com>
	<5305D1BC020000780011DEF7@nat28.tlf.novell.com>
In-Reply-To: <5305D1BC020000780011DEF7@nat28.tlf.novell.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Questions on vpic, vlapic,
 vioapic and line 0 (aka timer)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/20/14 03:58, Jan Beulich wrote:
>>>> On 19.02.14 at 16:14, Don Slutz <dslutz@verizon.com> wrote:
>> For some TBD reason (very very rarely) the routine timer_irq_works() in linux
>> is reporting that the timer IRQ does not work:
>>
>> ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
>> ..MP-BIOS bug: 8254 timer not connected to IO-APIC
>> ...trying to set up timer (IRQ0) through the 8259A ...
>> ..... (found apic 0 pin 2) ...
>> ....... failed.
>> ...trying to set up timer as Virtual Wire IRQ...
>> ..... failed.
>> ...trying to set up timer as ExtINT IRQ...
>>
>> hangs and xen's console is spewing:
>>
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> ...
>>
>> I have determined that the lines (in linux):
>>
>> #ifdef CONFIG_X86_IO_APIC
>>           no_timer_check = 1;
>> #endif
>>
>> that KVM has are missing for Xen.  The simple workaround is to specify
>> "no_timer_check" on the kernel command args.
>>
>> The reason for the email is that I have found the routine
>> __vlapic_accept_pic_intr:
>>
>>       /* We deliver 8259 interrupts to the appropriate CPU as follows. */
>>       return ((/* IOAPIC pin0 is unmasked and routing to this LAPIC? */
>>                ((redir0.fields.delivery_mode == dest_ExtINT) &&
>>                 !redir0.fields.mask &&
>>                 redir0.fields.dest_id == VLAPIC_ID(vlapic) &&
>>                 !vlapic_disabled(vlapic)) ||
>>                /* LAPIC has LVT0 unmasked for ExtInts? */
>>                ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_EXTINT) ||
>>                /* LAPIC is fully disabled? */
>>                vlapic_hw_disabled(vlapic)));
>> }
>>
>>
>> Which looks to imply that the vioapic supports "delivery mode 7"
>> (dest_ExtINT), but this case is missing (the message logged above).
> Not really - the code above suggests the LAPIC emulation is
> prepared for the IOAPIC emulation to support that mode, not
> that the IOAPIC one supports it. At present only LAPIC LVT0
> really supports that delivery mode.

Well, the linux kernel that I have been testing with only sets LAPIC LVT0 to 7:


         init_8259A(0);
         make_8259A_irq(0);
         apic_write(APIC_LVT0, APIC_DM_EXTINT);


And that agrees with xen-hvmctx:

dcs-xen-54:~/xen>sudo xen-hvmctx 8 | grep -A7 LINT0
       LVT LINT0  : 00000700
         mask     = 0
         trigger  = edge
         rem irr  = 0
         polarty  = active hi
         status   = idle
         delivry  = ExtINT
         vector   = 00

(Patch in the works for this decode).

And the ioapic does NOT have this mode set:

     IOAPIC: base_address 0xfec00000, ioregsel 0x20 id 0
         pin dsm dsa msk trig rir polarity dls  dlm    vec raw
         00: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         01: phys 00  0  edge   0 activehi idle Fixed   49 0x0000000000000031
         02: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         03: phys 00  0  edge   0 activehi idle Fixed   51 0x0000000000000033
         04: phys 00  0  edge   0 activehi idle Fixed   52 0x0000000000000034
         05: phys 00  1  level  0 activelo idle Fixed   53 0x000000000001a035
         06: phys 00  0  edge   0 activehi idle Fixed   54 0x0000000000000036
         07: phys 00  0  edge   0 activehi idle Fixed   55 0x0000000000000037
         08: phys 00  0  edge   0 activehi idle Fixed   56 0x0000000000000038
         09: phys 00  1  level  0 activelo idle Fixed   57 0x000000000001a039
         10: phys 00  1  level  0 activelo idle Fixed   58 0x000000000001a03a
         11: phys 00  1  level  0 activelo idle Fixed   59 0x000000000001a03b
         12: phys 00  0  edge   0 activehi idle Fixed   60 0x000000000000003c
         13: phys 00  0  edge   0 activehi idle Fixed   61 0x000000000000003d
         14: phys 00  0  edge   0 activehi idle Fixed   62 0x000000000000003e
         15: phys 00  0  edge   0 activehi idle Fixed   63 0x000000000000003f
         16: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         17: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         18: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         19: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         20: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         21: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         22: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         23: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         24: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         25: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         26: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         27: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         28: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         29: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         30: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         31: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         32: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         33: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         34: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         35: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         36: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         37: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         38: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         39: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         40: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         41: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         42: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         43: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         44: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         45: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         46: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000
         47: phys 00  1  edge   0 activehi idle Fixed    0 0x0000000000010000

So I am now even more confused as to why vioapic.c is reporting this message.


>> Also linux and xen both set "LAPIC has LVT0" to APIC_DM_FIXED for "Virtual
>> Wire IRQ" usage, but this code only allows for APIC_DM_EXTINT.  I have been
>> able to get "Virtual Wire IRQ" usage to work by adding:
>>
>>                /* LAPIC has LVT0 unmasked for Fixed? */
>>                ((lvt0 & (APIC_MODE_MASK|APIC_LVT_MASKED)) == APIC_DM_FIXED) ||
>>
>> It is not clear to me if it should be added or just changed.
> Adding this to __vlapic_accept_pic_intr() would be contrary to
> the purpose of the function afaict - fixed delivery mode is
> unrelated to delivering PIC interrupts.

So where would be a better place to add this kind of check?

Or are the PIT and HPET not "connected" to LINTIN0 of the LAPIC?
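
For concreteness, the quoted C condition can be rendered in Python.  This
is only an illustrative sketch, not the Xen source; the constant values
follow the standard Local APIC register layout (delivery-mode field in
bits 8-10, LVT mask bit 16):

```python
# Illustrative sketch of the quoted LVT0 check, not the actual Xen code.
# Constant values follow the standard Local APIC register layout.
APIC_MODE_MASK  = 0x00700   # delivery-mode field of an LVT entry
APIC_LVT_MASKED = 1 << 16   # LVT mask bit
APIC_DM_FIXED   = 0x00000   # Fixed delivery mode
APIC_DM_EXTINT  = 0x00700   # ExtINT delivery mode

def lvt0_unmasked_fixed(lvt0: int) -> bool:
    """True when LVT0 is unmasked and programmed for Fixed delivery."""
    return (lvt0 & (APIC_MODE_MASK | APIC_LVT_MASKED)) == APIC_DM_FIXED

def lvt0_unmasked_extint(lvt0: int) -> bool:
    """The case the existing __vlapic_accept_pic_intr() check allows for."""
    return (lvt0 & (APIC_MODE_MASK | APIC_LVT_MASKED)) == APIC_DM_EXTINT
```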

>> This code looks to state that:
>>
>> ...trying to set up timer (IRQ0) through the 8259A ...
>>
>> is expected to fail.  Is this by design?  (QEMU does allow this case.)
> But in the end I think you're barking up the wrong tree: All of this
> code would be of no interest at all if Linux didn't for some reason go
> into that fallback code. And with that fallback code existing mainly
> (if not exclusively) to deal with (half-)broken hardware/firmware, we
> should really make sure our emulation isn't broken (i.e. the fallback
> never being made use of). So finding the "TBD reason" is what is
> really needed.
>
> Jan
>

I am now sure that I have found the root cause in my case.  See my
proposed fix at:

http://lists.xen.org/archives/html/xen-devel/2014-02/msg02408.html

    -Don Slutz

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:35:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:35:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjK7-0007bE-UB; Wed, 26 Feb 2014 18:35:15 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WIjK6-0007b4-SU
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 18:35:15 +0000
Received: from [85.158.139.211:59001] by server-11.bemta-5.messagelabs.com id
	09/46-23886-2E33E035; Wed, 26 Feb 2014 18:35:14 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393439710!6486772!1
X-Originating-IP: [209.85.220.46]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_32,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8145 invoked from network); 26 Feb 2014 18:35:12 -0000
Received: from mail-pa0-f46.google.com (HELO mail-pa0-f46.google.com)
	(209.85.220.46)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:35:12 -0000
Received: by mail-pa0-f46.google.com with SMTP id rd3so1341155pab.33
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 10:35:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:mime-version
	:content-type:content-disposition:user-agent;
	bh=5SzRlQKhY9GUV8MqE1wGE6ORN8CrdwRLp1y8TWr2idI=;
	b=l3UXtsW8Cfbw+AAWRWyegjW1yY8XHlaQSaeoqQy31n9PPCzSILZoZ6xKHHwzwCXwCU
	3nzqonwh5FKDyvnhDSKI/UljtVI5tiueypy2Ne+bg8dvDVv1M+7PH4MX7kw4RjRMT2PI
	SWmpVC/ec/XDLuHWDTMbp8LNTVO6K+4Pkj0fcq0xZ75WHBeSo0LIRFREppEGqaEmiZCT
	8HPMbMQnsTD5OC6527sD3cj3L58VoMsARQtbosBFhs47WTZWjyKPY/2tiL6AUF7HFhBH
	motYXQqNi2Eex43tULk9iG7yNL8jkP9MSX0mSYbrGLnH8ZHeDEihzILILbmSU/jhutwW
	yzCQ==
X-Gm-Message-State: ALoCoQl2LdAAKimuciwVybnqvsJ2UZRAqstlluwDecIxjP5sEWviomVQrl8EuRDiJtuH4H1Xnwbq
X-Received: by 10.68.223.193 with SMTP id qw1mr8664665pbc.16.1393439709346;
	Wed, 26 Feb 2014 10:35:09 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id db3sm5691596pbb.10.2014.02.26.10.34.53
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 10:34:55 -0800 (PST)
Date: Wed, 26 Feb 2014 10:34:54 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: cross-distro@lists.linaro.org, "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>, 
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, xen-devel@lists.xen.org
Message-ID: <20140226183454.GA14639@cbox>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>
Subject: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

ARM VM System Specification
===========================

Goal
----
The goal of this spec is to allow suitably-built OS images to run on
all ARM virtualization solutions, such as KVM or Xen.

Recommendations in this spec are valid for aarch32 and aarch64 alike, and
they aim to be hypervisor agnostic.

Note that simply adhering to the SBSA [2] is not a valid approach,
for example because the SBSA mandates EL2, which will not be available
for VMs.  Further, the SBSA mandates peripherals like the pl011, which
may be controversial for some ARM VM implementations to support.
This spec also covers the aarch32 execution mode, not covered in the
SBSA.


Image format
------------
The image format, as presented to the VM, needs to be well-defined in
order for prepared disk images to be bootable across various
virtualization implementations.

The raw disk format as presented to the VM must be partitioned with a
GUID Partition Table (GPT).  The bootable software must be placed in the
EFI System Partition (ESP), using the UEFI removable media path, and
must be an EFI application complying with the UEFI Specification 2.4
Revision A [6].

The ESP partition's GPT entry's partition type GUID must be
C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
formatted as FAT32/vfat as per Section 12.3.1.1 in [6].

The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
state.
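
To make the two requirements above concrete, here is a small Python
sketch (helper names are invented for illustration).  Note that the type
GUID is stored in the GPT partition entry in mixed-endian form, which
Python's `uuid.UUID.bytes_le` produces directly:

```python
import uuid

# ESP partition type GUID from the spec.  GPT stores it in mixed-endian
# form (first three fields little-endian), matching uuid's bytes_le.
ESP_TYPE_GUID = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def is_esp_entry(entry: bytes) -> bool:
    """Check the type GUID of a 128-byte GPT partition entry."""
    return entry[0:16] == ESP_TYPE_GUID.bytes_le

def removable_media_path(execution_state: str) -> str:
    """Default boot path inside the ESP for each execution state."""
    return {"aarch32": "\\EFI\\BOOT\\BOOTARM.EFI",
            "aarch64": "\\EFI\\BOOT\\BOOTAA64.EFI"}[execution_state]
```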

This ensures that tools for both Xen and KVM can load a binary UEFI
firmware which can read and boot the EFI application in the disk image.

A typical scenario will be GRUB2 packaged as an EFI application, which
mounts the system boot partition and boots Linux.


Virtual Firmware
----------------
The VM system must be able to boot the EFI application in the ESP.  It
is recommended that this is achieved by loading a UEFI binary as the
first software executed by the VM, which then executes the EFI
application.  The UEFI implementation should be compliant with UEFI
Specification 2.4 Revision A [6] or later.

This document strongly recommends that the VM implementation support
persistent environment storage for its virtual firmware, in order to
enable likely use cases such as adding additional disk images to a VM
or running installers to perform upgrades.

The binary UEFI firmware implementation should not be distributed as
part of the VM image, but is specific to the VM implementation.


Hardware Description
--------------------
The Linux kernel's proper entry point always takes a pointer to an FDT,
regardless of the boot mechanism, firmware, and hardware description
method.  Even on real hardware which only supports ACPI and UEFI, the kernel
entry point will still receive a pointer to a simple FDT, generated by
the Linux kernel UEFI stub, containing a pointer to the UEFI system
table.  The kernel can then discover ACPI from the system tables.  The
presence of ACPI vs. FDT is therefore always itself discoverable,
through the FDT.

Therefore, the VM implementation must provide, through its UEFI
implementation, either:

  a complete FDT which describes the entire VM system and will boot
  mainline kernels driven by device tree alone, or

  no FDT.  In this case, the VM implementation must provide ACPI, and
  the OS must be able to locate the ACPI root pointer through the UEFI
  system table.

For more information about the arm and arm64 boot conventions, see
Documentation/arm/Booting and Documentation/arm64/booting.txt in the
Linux kernel source tree.

For more information about UEFI and ACPI booting, see [4] and [5].


VM Platform
-----------
The specification does not mandate any specific memory map.  The guest
OS must be able to enumerate all processing elements, devices, and
memory through HW description data (FDT, ACPI) or a bus-specific
mechanism such as PCI.

The virtual platform must support at least one of the following ARM
execution states:
  (1) aarch32 virtual CPUs on aarch32 physical CPUs
  (2) aarch32 virtual CPUs on aarch64 physical CPUs
  (3) aarch64 virtual CPUs on aarch64 physical CPUs

It is recommended to support both (2) and (3) on aarch64 capable
physical systems.

The virtual hardware platform must provide a number of mandatory
peripherals:

  Serial console:  The platform should provide a console,
  based on an emulated pl011, a virtio-console, or a Xen PV console.

  An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
  limits the number of virtual CPUs to 8; newer GIC versions remove
  this limitation.

  The ARM virtual timer and counter should be available to the VM as
  per the ARM Generic Timers specification in the ARM ARM [1].

  A hotpluggable bus to support hotplug of at least block and network
  devices.  Suitable buses include a virtual PCIe bus and the Xen PV
  bus.


We make the following recommendations for the guest OS kernel:

  The guest OS must include support for GICv2 and any available newer
  version of the GIC architecture to maintain compatibility with older
  VM implementations.

  It is strongly recommended to include support for all available
  (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
  drivers in the guest OS kernel or initial ramdisk.


Other common peripherals for block devices, networking, and more can
(and typically will) be provided, but OS software written and compiled
to run on ARM VMs cannot make any assumptions about which variations
of these should exist or which implementation they use (e.g. VirtIO or
Xen PV).  See "Hardware Description" above.

Note that this platform specification is separate from the Linux kernel
concept of mach-virt, which merely specifies a machine model driven
purely from device tree, but does not mandate any peripherals or have any
mention of ACPI.


References
----------
[1]: The ARM Architecture Reference Manual, ARMv8, Issue A.b
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html

[2]: ARM Server Base System Architecture
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html

[3]: The ARM Generic Interrupt Controller Architecture Specifications v2.0
http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ihi0048b/index.html

[4]: http://www.secretlab.ca/archives/27

[5]: https://git.linaro.org/people/leif.lindholm/linux.git/blob/refs/heads/uefi-for-upstream:/Documentation/arm/uefi.txt

[6]: UEFI Specification 2.4 Revision A
http://www.uefi.org/sites/default/files/resources/2_4_Errata_A.pdf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:39:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjOW-0007qH-Mi; Wed, 26 Feb 2014 18:39:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOV-0007qB-Ey
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:39:47 +0000
Received: from [85.158.137.68:39046] by server-17.bemta-3.messagelabs.com id
	DE/1B-22569-2F43E035; Wed, 26 Feb 2014 18:39:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393439984!4444525!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10063 invoked from network); 26 Feb 2014 18:39:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:39:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="106023990"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 18:39:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:39:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOQ-0004Ru-Lk;
	Wed, 26 Feb 2014 18:39:42 +0000
Date: Wed, 26 Feb 2014 18:39:37 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, jtd@galois.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 0/12] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series removes any need for maintenance interrupts for both
hardware and software interrupts in Xen.
It achieves the goal by using the GICH_LR_HW bit for hardware interrupts
and by checking the status of the GICH_LR registers on return to guest,
clearing the registers that are invalid and handling the lifecycle of
the corresponding interrupts in Xen data structures.
It also improves priority handling, keeping the highest priority
outstanding interrupts in the GICH_LR registers.
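
As a rough illustration of the approach (a Python model with invented
names and data layout, not the series' actual C code): on return to the
guest, LRs that have gone invalid are reclaimed, and the freed slots are
refilled with the highest-priority outstanding interrupts.

```python
# Rough Python model of the LR handling described above; names and data
# layout are invented for illustration, not taken from the Xen series.
LR_INVALID, LR_PENDING, LR_ACTIVE = 0, 1, 2

def refill_lrs(lrs, outstanding):
    """Reclaim invalid LRs, then load the highest-priority pending irqs.

    lrs: list of dicts with a "state" key (one of the LR_* values).
    outstanding: list of dicts with "irq" and "priority" (lower = higher,
    as in the GIC).
    """
    free = [i for i, lr in enumerate(lrs) if lr["state"] == LR_INVALID]
    outstanding.sort(key=lambda p: p["priority"])
    for slot in free:
        if not outstanding:
            break
        irq = outstanding.pop(0)
        lrs[slot] = {"state": LR_PENDING, "irq": irq["irq"],
                     "priority": irq["priority"]}
    return lrs
```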


Changes in v3:
- add "no need to set HCR_VI when using the vgic to inject irqs";
- add "s/gic_set_guest_irq/gic_raise_guest_irq";
- add "xen/arm: call gic_clear_lrs on entry to the hypervisor";
- do not use the PENDING and ACTIVE state for HW interrupts;
- unify the inflight and non-inflight code paths in
vgic_vcpu_inject_irq;
- remove "avoid taking unconditionally the vgic.lock in gic_clear_lrs";
- add "xen/arm: gic_events_need_delivery and irq priorities";
- use spin_lock_irqsave and spin_unlock_irqrestore in gic_dump_info.

Changes in v2:
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr;
- simplify gic_clear_lrs;
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1);
- add a patch to keep track of the LR number in pending_irq;
- add a patch to set GICH_LR_PENDING to inject a second irq while the
first one is still active;
- add a patch to simplify and reduce the usage of gic.lock;
- add a patch to reduce the usage of vgic.lock;
- add a patch to use GICH_ELSR[01] to avoid reading all the GICH_LRs in
gic_clear_lrs;
- add a debug patch to print more info in gic_dump_info.


Stefano Stabellini (12):
      xen/arm: no need to set HCR_VI when using the vgic to inject irqs
      xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
      xen/arm: support HW interrupts in gic_set_lr
      xen/arm: do not request maintenance_interrupts
      xen/arm: set GICH_HCR_UIE if all the LRs are in use
      xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq
      xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq
      xen/arm: call gic_clear_lrs on entry to the hypervisor
      xen/arm: second irq injection while the first irq is still inflight
      xen/arm: don't protect GICH and lr_queue accesses with gic.lock
      xen/arm: gic_events_need_delivery and irq priorities
      xen/arm: print more info in gic_dump_info, keep gic_lr sync'ed

 xen/arch/arm/domain.c        |    2 +-
 xen/arch/arm/gic.c           |  281 ++++++++++++++++++++++++------------------
 xen/arch/arm/irq.c           |    2 +-
 xen/arch/arm/time.c          |    2 +-
 xen/arch/arm/traps.c         |   10 ++
 xen/arch/arm/vgic.c          |   50 ++++----
 xen/arch/arm/vtimer.c        |    4 +-
 xen/include/asm-arm/domain.h |    6 +-
 xen/include/asm-arm/gic.h    |   10 +-
 9 files changed, 214 insertions(+), 153 deletions(-)


git://xenbits.xen.org/people/sstabellini/xen-unstable.git no_maintenance_interrupts-v3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:39:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjOW-0007qH-Mi; Wed, 26 Feb 2014 18:39:48 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOV-0007qB-Ey
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:39:47 +0000
Received: from [85.158.137.68:39046] by server-17.bemta-3.messagelabs.com id
	DE/1B-22569-2F43E035; Wed, 26 Feb 2014 18:39:46 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393439984!4444525!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10063 invoked from network); 26 Feb 2014 18:39:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:39:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="106023990"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 18:39:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:39:43 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOQ-0004Ru-Lk;
	Wed, 26 Feb 2014 18:39:42 +0000
Date: Wed, 26 Feb 2014 18:39:37 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: <xen-devel@lists.xensource.com>
Message-ID: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>, jtd@galois.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 0/12] remove maintenance interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi all,
this patch series removes the need for maintenance interrupts for both
hardware and software interrupts in Xen.
It achieves the goal by using the GICH_LR_HW bit for hardware interrupts
and by checking the status of the GICH_LR registers on return to guest,
clearing the registers that are invalid and handling the lifecycle of
the corresponding interrupts in Xen data structures.
It also improves priority handling, keeping the highest priority
outstanding interrupts in the GICH_LR registers.
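
To illustrate the approach, here is a minimal sketch of how an LR value
could be assembled following the bit layout used by gic_set_lr in this
series. The LR_* constants and field positions below are illustrative
stand-ins, not the actual GICH_LR_* definitions from
xen/include/asm-arm/gic.h:

```c
#include <stdint.h>

/* Illustrative stand-ins for the GICH_LR_* field definitions */
#define LR_STATE_PENDING   (1u << 28)
#define LR_HW              (1u << 31)  /* hardware interrupt: hw EOI, no maintenance irq */
#define LR_PRIORITY_SHIFT  23
#define LR_PHYSICAL_SHIFT  10
#define LR_PHYSICAL_MASK   0x3ffu
#define LR_VIRTUAL_MASK    0x3ffu

/* Build an LR register value: virtual irq, state and priority always;
 * for a hardware interrupt (desc != NULL in the real code) also set
 * LR_HW and the physical irq number, so the EOI is handled in hardware. */
static uint32_t make_lr(unsigned int virq, int is_hw, unsigned int pirq,
                        unsigned int state, unsigned int priority)
{
    uint32_t lr = state |
        ((priority >> 3) << LR_PRIORITY_SHIFT) |
        (virq & LR_VIRTUAL_MASK);
    if (is_hw)
        lr |= LR_HW | ((pirq & LR_PHYSICAL_MASK) << LR_PHYSICAL_SHIFT);
    return lr;
}
```

With LR_HW set, the GIC deactivates the physical interrupt when the guest
EOIs the virtual one, which is what lets the series drop the software EOI
path and the per-LR maintenance interrupt.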


Changes in v3:
- add "no need to set HCR_VI when using the vgic to inject irqs";
- add "s/gic_set_guest_irq/gic_raise_guest_irq";
- add "xen/arm: call gic_clear_lrs on entry to the hypervisor";
- do not use the PENDING and ACTIVE state for HW interrupts;
- unify the inflight and non-inflight code paths in
vgic_vcpu_inject_irq;
- remove "avoid taking unconditionally the vgic.lock in gic_clear_lrs";
- add "xen/arm: gic_events_need_delivery and irq priorities";
- use spin_lock_irqsave and spin_unlock_irqrestore in gic_dump_info.

Changes in v2:
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr;
- simplify gic_clear_lrs;
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1);
- add a patch to keep track of the LR number in pending_irq;
- add a patch to set GICH_LR_PENDING to inject a second irq while the
first one is still active;
- add a patch to simplify and reduce the usage of gic.lock;
- add a patch to reduce the usage of vgic.lock;
- add a patch to use GICH_ELSR[01] to avoid reading all the GICH_LRs in
gic_clear_lrs;
- add a debug patch to print more info in gic_dump_info.


Stefano Stabellini (12):
      xen/arm: no need to set HCR_VI when using the vgic to inject irqs
      xen/arm: remove unused virtual parameter from vgic_vcpu_inject_irq
      xen/arm: support HW interrupts in gic_set_lr
      xen/arm: do not request maintenance_interrupts
      xen/arm: set GICH_HCR_UIE if all the LRs are in use
      xen/arm: keep track of the GICH_LR used for the irq in struct pending_irq
      xen/arm: s/gic_set_guest_irq/gic_raise_guest_irq
      xen/arm: call gic_clear_lrs on entry to the hypervisor
      xen/arm: second irq injection while the first irq is still inflight
      xen/arm: don't protect GICH and lr_queue accesses with gic.lock
      xen/arm: gic_events_need_delivery and irq priorities
      xen/arm: print more info in gic_dump_info, keep gic_lr sync'ed

 xen/arch/arm/domain.c        |    2 +-
 xen/arch/arm/gic.c           |  281 ++++++++++++++++++++++++------------------
 xen/arch/arm/irq.c           |    2 +-
 xen/arch/arm/time.c          |    2 +-
 xen/arch/arm/traps.c         |   10 ++
 xen/arch/arm/vgic.c          |   50 ++++----
 xen/arch/arm/vtimer.c        |    4 +-
 xen/include/asm-arm/domain.h |    6 +-
 xen/include/asm-arm/gic.h    |   10 +-
 9 files changed, 214 insertions(+), 153 deletions(-)


git://xenbits.xen.org/people/sstabellini/xen-unstable.git no_maintenance_interrupts-v3

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjOy-0007u5-9j; Wed, 26 Feb 2014 18:40:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOw-0007ta-AL
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:14 +0000
Received: from [85.158.143.35:14742] by server-1.bemta-4.messagelabs.com id
	D9/46-31661-D053E035; Wed, 26 Feb 2014 18:40:13 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28553 invoked from network); 26 Feb 2014 18:40:12 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:12 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388717"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-8E;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:51 +0000
Message-ID: <1393439997-26936-6-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 06/12] xen/arm: keep track of the GICH_LR
	used for the irq in struct pending_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c           |    6 ++++--
 xen/include/asm-arm/domain.h |    1 +
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 371ebd8..9293c16 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -643,6 +643,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    p->lr = lr;
 }
 
 static inline void gic_add_to_lr_pending(struct vcpu *v, unsigned int irq,
@@ -723,6 +724,7 @@ static void gic_clear_lrs(struct vcpu *v)
             if ( p->desc != NULL )
                 p->desc->status &= ~IRQ_INPROGRESS;
             clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+            p->lr = nr_lrs;
             if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                     test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
             {
@@ -965,12 +967,12 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d\n", p->irq);
+        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
     {
-        printk("Pending irq=%d\n", p->irq);
+        printk("Pending irq=%d lr=%u\n", p->irq, p->lr);
     }
 
 }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index af8c64b..7b636c8 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -59,6 +59,7 @@ struct pending_irq
 #define GIC_IRQ_GUEST_VISIBLE  1
 #define GIC_IRQ_GUEST_ENABLED  2
     unsigned long status;
+    uint8_t lr;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
     uint8_t priority;
     /* inflight is used to append instances of pending_irq to
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP5-0007ve-IE; Wed, 26 Feb 2014 18:40:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOx-0007tj-9c
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:15 +0000
Received: from [85.158.143.35:7528] by server-1.bemta-4.messagelabs.com id
	9D/46-31661-E053E035; Wed, 26 Feb 2014 18:40:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28595 invoked from network); 26 Feb 2014 18:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388721"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-7i;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:50 +0000
Message-ID: <1393439997-26936-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 05/12] xen/arm: set GICH_HCR_UIE if all
	the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On return to guest, if there are no free LRs and we still have more
interrupts to inject, set GICH_HCR_UIE so that we receive a maintenance
interrupt when no pending interrupts are present in the LR registers.
The maintenance interrupt handler no longer does anything itself, but
receiving the interrupt causes gic_inject to be called on the next
return to guest, which clears the old LRs and injects new interrupts.
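
The condition the patch adds to gic_inject can be sketched as a small
predicate (names are illustrative, not the actual Xen symbols): enable
the underflow interrupt only when queued vIRQs remain and the per-cpu LR
mask shows every LR slot occupied.

```c
/* lr_mask has one bit per LR currently in use; a full mask of nr_lrs
 * bits means there is no free LR left, so if the lr_pending queue is
 * non-empty we want a maintenance interrupt when the LRs drain. */
static int should_set_uie(unsigned long lr_mask, unsigned int nr_lrs,
                          int lr_pending_nonempty)
{
    return lr_pending_nonempty && lr_mask == ((1UL << nr_lrs) - 1);
}
```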

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1).
---
 xen/arch/arm/gic.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 15e5f91..371ebd8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -789,6 +789,12 @@ void gic_inject(void)
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
+
+    if ( !list_empty(&current->arch.vgic.lr_pending) &&
+         this_cpu(lr_mask) == ((1 << nr_lrs) - 1) )
+        GICH[GICH_HCR] |= GICH_HCR_UIE;
+    else
+        GICH[GICH_HCR] &= ~GICH_HCR_UIE;
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP6-0007w8-H2; Wed, 26 Feb 2014 18:40:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007tt-3i
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:16 +0000
Received: from [85.158.139.211:33302] by server-5.bemta-5.messagelabs.com id
	D4/6A-32749-F053E035; Wed, 26 Feb 2014 18:40:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393440013!6451951!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9442 invoked from network); 26 Feb 2014 18:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388723"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-4A;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:48 +0000
Message-ID: <1393439997-26936-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 03/12] xen/arm: support HW interrupts in
	gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW.

Remove the code to EOI a physical interrupt on behalf of the guest
because it has become unnecessary.

Also add a struct vcpu* parameter to gic_set_lr.

This patch needs the following patch to work correctly. It has been sent
separately to make it easier to review.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c |   52 +++++++++++++++++-----------------------------------
 1 file changed, 17 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6f27630..fd42922 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -620,20 +620,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     return rc;
 }
 
-static inline void gic_set_lr(int lr, unsigned int virtual_irq,
+static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
-    struct pending_irq *p = irq_to_pending(current, virtual_irq);
+    struct pending_irq *p = irq_to_pending(v, irq);
+    uint32_t lr_reg;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
+    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
         ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
-        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+        ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_reg |= GICH_LR_HW |
+            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -668,7 +672,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock(&gic.lock);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
+void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
@@ -681,12 +685,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, virtual_irq, state, priority);
+            gic_set_lr(v, i, irq, state, priority);
             goto out;
         }
     }
 
-    gic_add_to_lr_pending(v, virtual_irq, priority);
+    gic_add_to_lr_pending(v, irq, priority);
 
 out:
     spin_unlock_irqrestore(&gic.lock, flags);
@@ -705,7 +709,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         if ( i >= nr_lrs ) return;
 
         spin_lock_irqsave(&gic.lock, flags);
-        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
+        gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
         spin_unlock_irqrestore(&gic.lock, flags);
@@ -886,15 +890,9 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
+    int i = 0, virq;
     uint32_t lr;
     struct vcpu *v = current;
     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
@@ -902,10 +900,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
     while ((i = find_next_bit((const long unsigned int *) &eisr,
                               64, i)) < 64) {
         struct pending_irq *p, *p2;
-        int cpu;
         bool_t inflight;
 
-        cpu = -1;
         inflight = 0;
 
         spin_lock_irq(&gic.lock);
@@ -915,12 +911,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
         clear_bit(i, &this_cpu(lr_mask));
 
         p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
+        if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
@@ -932,7 +924,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
 
         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
             p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2->irq, GICH_LR_PENDING, p2->priority);
+            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
             list_del_init(&p2->lr_queue);
             set_bit(i, &this_cpu(lr_mask));
         }
@@ -945,16 +937,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
             spin_unlock_irq(&v->arch.vgic.lock);
         }
 
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
         i++;
     }
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP6-0007w8-H2; Wed, 26 Feb 2014 18:40:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007tt-3i
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:16 +0000
Received: from [85.158.139.211:33302] by server-5.bemta-5.messagelabs.com id
	D4/6A-32749-F053E035; Wed, 26 Feb 2014 18:40:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393440013!6451951!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9442 invoked from network); 26 Feb 2014 18:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388723"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-4A;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:48 +0000
Message-ID: <1393439997-26936-3-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 03/12] xen/arm: support HW interrupts in
	gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

If the irq to be injected is a hardware irq (p->desc != NULL), set
GICH_LR_HW.

Remove the code to EOI a physical interrupt on behalf of the guest
because it has become unnecessary.

Also add a struct vcpu* parameter to gic_set_lr.

This patch needs the following patch to work correctly. It has been sent
separately to make it easier to review.
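
The LR composition described above can be sketched as follows. This is an
illustrative model only: the LR_* bit positions below are placeholders, not
the real GICH_LR layout, and make_lr is a hypothetical stand-in for the
patched gic_set_lr.

```c
#include <stdint.h>

/* Placeholder bit positions, NOT the real GICH_LR register layout. */
#define LR_HW              (1u << 31)
#define LR_PRIORITY_SHIFT  23
#define LR_PHYSICAL_SHIFT  10
#define LR_PHYSICAL_MASK   0x3ffu
#define LR_VIRTUAL_MASK    0x3ffu

/* phys_irq < 0 models p->desc == NULL (a purely virtual interrupt);
 * phys_irq >= 0 models a hardware interrupt: the HW bit tells the GIC
 * to deactivate the physical irq when the guest EOIs the virtual one,
 * which is why Xen no longer needs to EOI it on the guest's behalf. */
static uint32_t make_lr(unsigned int virq, int phys_irq,
                        unsigned int state, unsigned int priority)
{
    uint32_t lr = state |
        ((priority >> 3) << LR_PRIORITY_SHIFT) |
        (virq & LR_VIRTUAL_MASK);

    if (phys_irq >= 0)
        lr |= LR_HW |
            (((uint32_t)phys_irq & LR_PHYSICAL_MASK) << LR_PHYSICAL_SHIFT);
    return lr;
}
```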

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- remove the EOI code, now unnecessary;
- do not assume physical IRQ == virtual IRQ;
- refactor gic_set_lr.
---
 xen/arch/arm/gic.c |   52 +++++++++++++++++-----------------------------------
 1 file changed, 17 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 6f27630..fd42922 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -620,20 +620,24 @@ int __init setup_dt_irq(const struct dt_irq *irq, struct irqaction *new)
     return rc;
 }
 
-static inline void gic_set_lr(int lr, unsigned int virtual_irq,
+static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
-    int maintenance_int = GICH_LR_MAINTENANCE_IRQ;
-    struct pending_irq *p = irq_to_pending(current, virtual_irq);
+    struct pending_irq *p = irq_to_pending(v, irq);
+    uint32_t lr_reg;
 
     BUG_ON(lr >= nr_lrs);
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    GICH[GICH_LR + lr] = state |
-        maintenance_int |
+    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
         ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
-        ((virtual_irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+        ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
+    if ( p->desc != NULL )
+        lr_reg |= GICH_LR_HW |
+            ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
+
+    GICH[GICH_LR + lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -668,7 +672,7 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock(&gic.lock);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
+void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         unsigned int state, unsigned int priority)
 {
     int i;
@@ -681,12 +685,12 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int virtual_irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(i, virtual_irq, state, priority);
+            gic_set_lr(v, i, irq, state, priority);
             goto out;
         }
     }
 
-    gic_add_to_lr_pending(v, virtual_irq, priority);
+    gic_add_to_lr_pending(v, irq, priority);
 
 out:
     spin_unlock_irqrestore(&gic.lock, flags);
@@ -705,7 +709,7 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         if ( i >= nr_lrs ) return;
 
         spin_lock_irqsave(&gic.lock, flags);
-        gic_set_lr(i, p->irq, GICH_LR_PENDING, p->priority);
+        gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
         spin_unlock_irqrestore(&gic.lock, flags);
@@ -886,15 +890,9 @@ int gicv_setup(struct domain *d)
 
 }
 
-static void gic_irq_eoi(void *info)
-{
-    int virq = (uintptr_t) info;
-    GICC[GICC_DIR] = virq;
-}
-
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq, pirq = -1;
+    int i = 0, virq;
     uint32_t lr;
     struct vcpu *v = current;
     uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
@@ -902,10 +900,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
     while ((i = find_next_bit((const long unsigned int *) &eisr,
                               64, i)) < 64) {
         struct pending_irq *p, *p2;
-        int cpu;
         bool_t inflight;
 
-        cpu = -1;
         inflight = 0;
 
         spin_lock_irq(&gic.lock);
@@ -915,12 +911,8 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
         clear_bit(i, &this_cpu(lr_mask));
 
         p = irq_to_pending(v, virq);
-        if ( p->desc != NULL ) {
+        if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
-            /* Assume only one pcpu needs to EOI the irq */
-            cpu = p->desc->arch.eoi_cpu;
-            pirq = p->desc->irq;
-        }
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
         {
@@ -932,7 +924,7 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
 
         if ( !list_empty(&v->arch.vgic.lr_pending) ) {
             p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(i, p2->irq, GICH_LR_PENDING, p2->priority);
+            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
             list_del_init(&p2->lr_queue);
             set_bit(i, &this_cpu(lr_mask));
         }
@@ -945,16 +937,6 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
             spin_unlock_irq(&v->arch.vgic.lock);
         }
 
-        if ( p->desc != NULL ) {
-            /* this is not racy because we can't receive another irq of the
-             * same type until we EOI it.  */
-            if ( cpu == smp_processor_id() )
-                gic_irq_eoi((void*)(uintptr_t)pirq);
-            else
-                on_selected_cpus(cpumask_of(cpu),
-                                 gic_irq_eoi, (void*)(uintptr_t)pirq, 0);
-        }
-
         i++;
     }
 }
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP4-0007vD-U1; Wed, 26 Feb 2014 18:40:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOw-0007tb-Nu
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:14 +0000
Received: from [85.158.143.35:14777] by server-3.bemta-4.messagelabs.com id
	55/DE-11539-E053E035; Wed, 26 Feb 2014 18:40:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28571 invoked from network); 26 Feb 2014 18:40:13 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388718"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:10 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-38;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:46 +0000
Message-ID: <1393439997-26936-1-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 01/12] xen/arm: no need to set HCR_VI
	when using the vgic to inject irqs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It can actually cause spurious interrupt injections into the guest.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c |   20 --------------------
 1 file changed, 20 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 1484e1f..f860194 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -725,22 +725,6 @@ void gic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&gic.lock, flags);
 }
 
-static void gic_inject_irq_start(void)
-{
-    register_t hcr = READ_SYSREG(HCR_EL2);
-    WRITE_SYSREG(hcr | HCR_VI, HCR_EL2);
-    isb();
-}
-
-static void gic_inject_irq_stop(void)
-{
-    register_t hcr = READ_SYSREG(HCR_EL2);
-    if (hcr & HCR_VI) {
-        WRITE_SYSREG(hcr & ~HCR_VI, HCR_EL2);
-        isb();
-    }
-}
-
 int gic_events_need_delivery(void)
 {
     return (!list_empty(&current->arch.vgic.lr_pending) ||
@@ -753,10 +737,6 @@ void gic_inject(void)
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
 
     gic_restore_pending_irqs(current);
-    if (!gic_events_need_delivery())
-        gic_inject_irq_stop();
-    else
-        gic_inject_irq_start();
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP7-0007wY-E6; Wed, 26 Feb 2014 18:40:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007tu-6F
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:16 +0000
Received: from [85.158.143.35:14827] by server-2.bemta-4.messagelabs.com id
	C9/2E-04779-F053E035; Wed, 26 Feb 2014 18:40:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28652 invoked from network); 26 Feb 2014 18:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388724"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-5v;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:49 +0000
Message-ID: <1393439997-26936-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 04/12] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not set GICH_LR_MAINTENANCE_IRQ for every interrupt set in the
GICH_LR registers.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding interrupts
from the inflight queue if appropriate. It also adds the interrupt back to
lr_pending if GIC_IRQ_GUEST_PENDING is still set.

Call gic_clear_lrs from gic_restore_state and on return to guest
(gic_inject).

In vgic_vcpu_inject_irq, if the target is a vcpu running on another cpu,
send an SGI to interrupt it and force it to clear the old LRs.
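
The scan described above can be modelled roughly as below. Everything here
is a hypothetical toy (toy_* names, bit values, the requeued array): the
real gic_clear_lrs reads GICH registers and manipulates pending_irq lists
under gic.lock and the vgic lock.

```c
#include <stdint.h>
#include <stddef.h>

#define TOY_NR_LRS       4
#define TOY_LR_PENDING   (1u << 28)   /* placeholder state bits */
#define TOY_LR_ACTIVE    (1u << 29)

struct toy_state {
    uint32_t lr[TOY_NR_LRS];
    unsigned long lr_mask;        /* bit i set => lr[i] in use */
    int requeued[TOY_NR_LRS];     /* 1 => interrupt put back on lr_pending */
};

/* Walk the in-use LRs; any LR that is neither pending nor active is
 * invalid: free it, and if the interrupt became pending again while it
 * was visible to the guest, put it back on the software queue. */
static void toy_clear_lrs(struct toy_state *s,
                          const int still_pending[TOY_NR_LRS])
{
    for (size_t i = 0; i < TOY_NR_LRS; i++) {
        if (!(s->lr_mask & (1ul << i)))
            continue;
        if (s->lr[i] & (TOY_LR_PENDING | TOY_LR_ACTIVE))
            continue;                 /* guest still owns this LR */
        s->lr[i] = 0;                 /* free the LR */
        s->lr_mask &= ~(1ul << i);
        if (still_pending[i])
            s->requeued[i] = 1;       /* back onto lr_pending */
    }
}
```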

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- simplify gic_clear_lrs.
---
 xen/arch/arm/gic.c  |   99 ++++++++++++++++++++++++++-------------------------
 xen/arch/arm/vgic.c |    3 +-
 2 files changed, 51 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index fd42922..15e5f91 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,6 +67,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_clear_lrs(struct vcpu *v);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -128,6 +130,7 @@ void gic_restore_state(struct vcpu *v)
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
+    gic_clear_lrs(v);
     gic_restore_pending_irqs(v);
 }
 
@@ -630,8 +633,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
-        ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+    lr_reg = state | ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
         ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
     if ( p->desc != NULL )
         lr_reg |= GICH_LR_HW |
@@ -697,6 +699,50 @@ out:
     return;
 }
 
+static void gic_clear_lrs(struct vcpu *v)
+{
+    struct pending_irq *p;
+    int i = 0, irq;
+    uint32_t lr;
+    bool_t inflight;
+
+    ASSERT(!local_irq_is_enabled());
+
+    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        lr = GICH[GICH_LR + i];
+        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+        {
+            inflight = 0;
+            GICH[GICH_LR + i] = 0;
+            clear_bit(i, &this_cpu(lr_mask));
+
+            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+            spin_lock(&gic.lock);
+            p = irq_to_pending(v, irq);
+            if ( p->desc != NULL )
+                p->desc->status &= ~IRQ_INPROGRESS;
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+            {
+                inflight = 1;
+                gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            }
+            spin_unlock(&gic.lock);
+            if ( !inflight )
+            {
+                spin_lock(&v->arch.vgic.lock);
+                list_del_init(&p->inflight);
+                spin_unlock(&v->arch.vgic.lock);
+            }
+
+        }
+
+        i++;
+    }
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -737,6 +783,8 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
+    gic_clear_lrs(current);
+
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
@@ -892,53 +940,6 @@ int gicv_setup(struct domain *d)
 
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        bool_t inflight;
-
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL )
-            p->desc->status &= ~IRQ_INPROGRESS;
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, virq, p->priority);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        i++;
-    }
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d10227..da15f4d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -699,8 +699,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP5-0007ve-IE; Wed, 26 Feb 2014 18:40:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOx-0007tj-9c
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:15 +0000
Received: from [85.158.143.35:7528] by server-1.bemta-4.messagelabs.com id
	9D/46-31661-E053E035; Wed, 26 Feb 2014 18:40:14 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28595 invoked from network); 26 Feb 2014 18:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388721"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-7i;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:50 +0000
Message-ID: <1393439997-26936-5-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 05/12] xen/arm: set GICH_HCR_UIE if all
	the LRs are in use
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On return to guest, if there are no free LRs and we still have more
interrupts to inject, set GICH_HCR_UIE so that we receive a
maintenance interrupt when no pending interrupts are present in the LR
registers.
The maintenance interrupt handler doesn't do anything anymore, but
receiving the interrupt causes gic_inject to be called on return to
guest; gic_inject clears the old LRs and injects new interrupts.
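
The enable/disable decision above boils down to a small predicate, sketched
below. The bit value and the toy_update_uie helper are illustrative
assumptions, not the real GICH_HCR layout or the patched gic_inject.

```c
#include <stdint.h>

#define TOY_HCR_UIE (1u << 1)   /* placeholder bit position */

/* Request an underflow maintenance interrupt only when software still
 * has interrupts queued (lr_pending non-empty) AND every LR is in use
 * (lr_mask full); otherwise make sure the bit is clear. */
static uint32_t toy_update_uie(uint32_t hcr, int queue_nonempty,
                               unsigned long lr_mask, unsigned int nr_lrs)
{
    if (queue_nonempty && lr_mask == ((1ul << nr_lrs) - 1))
        return hcr | TOY_HCR_UIE;
    return hcr & ~TOY_HCR_UIE;
}
```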

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- disable/enable the GICH_HCR_UIE bit in GICH_HCR;
- only enable GICH_HCR_UIE if this_cpu(lr_mask) == ((1 << nr_lrs) - 1).
---
 xen/arch/arm/gic.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 15e5f91..371ebd8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -789,6 +789,12 @@ void gic_inject(void)
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
+
+    if ( !list_empty(&current->arch.vgic.lr_pending) &&
+         this_cpu(lr_mask) == ((1 << nr_lrs) - 1) )
+        GICH[GICH_HCR] |= GICH_HCR_UIE;
+    else
+        GICH[GICH_HCR] &= ~GICH_HCR_UIE;
 }
 
 int gic_route_irq_to_guest(struct domain *d, const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP7-0007wY-E6; Wed, 26 Feb 2014 18:40:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007tu-6F
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:16 +0000
Received: from [85.158.143.35:14827] by server-2.bemta-4.messagelabs.com id
	C9/2E-04779-F053E035; Wed, 26 Feb 2014 18:40:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!4
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28652 invoked from network); 26 Feb 2014 18:40:14 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:14 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388724"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-5v;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:49 +0000
Message-ID: <1393439997-26936-4-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 04/12] xen/arm: do not request
	maintenance_interrupts
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Do not set GICH_LR_MAINTENANCE_IRQ for every interrupt written to the
GICH_LR registers.

Introduce a new function, gic_clear_lrs, that goes over the GICH_LR
registers, clears the invalid ones and frees the corresponding interrupts
from the inflight queue if appropriate. Add the interrupt to lr_pending
if GIC_IRQ_GUEST_PENDING is still set.

Call gic_clear_lrs from gic_restore_state and on return to guest
(gic_inject).

In vgic_vcpu_inject_irq, if the target is a vcpu running on another cpu,
send an SGI to it to interrupt it and force it to clear the old LRs.
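The recycling logic described above can be followed in isolation with a small standalone model. Everything below is a simplified stand-in, not Xen's actual definitions: the LR array, the bit layout, and the `pend` struct are hypothetical, and the irq-to-LR mapping is collapsed to one index. The real implementation is in the diff that follows.

```c
#include <stdint.h>

/* Hypothetical stand-ins for the GICH_LR state bits and per-cpu lr_mask. */
#define LR_PENDING (1u << 28)
#define LR_ACTIVE  (1u << 29)
#define NR_LRS 4

struct pend {
    int guest_pending;  /* models GIC_IRQ_GUEST_PENDING */
    int guest_enabled;  /* models GIC_IRQ_GUEST_ENABLED */
    int inflight;       /* models membership of the inflight list */
};

static uint32_t lrs[NR_LRS];
static unsigned long lr_mask;
static struct pend pending[NR_LRS];

/* Sketch of the gic_clear_lrs loop: walk the in-use LRs, recycle the
 * ones the guest has finished with, and re-inject or retire the irq. */
static void clear_lrs(void)
{
    for ( int i = 0; i < NR_LRS; i++ )
    {
        if ( !(lr_mask & (1ul << i)) )
            continue;                     /* LR not in use */
        if ( lrs[i] & (LR_PENDING | LR_ACTIVE) )
            continue;                     /* still in use by the guest */

        lrs[i] = 0;                       /* invalid: recycle the LR */
        lr_mask &= ~(1ul << i);

        if ( pending[i].guest_pending && pending[i].guest_enabled )
        {
            lrs[i] = LR_PENDING;          /* re-inject the interrupt */
            lr_mask |= 1ul << i;
        }
        else
            pending[i].inflight = 0;      /* retire from the inflight list */
    }
}
```

In the real code the loop runs under the gic and vgic locks and reads the irq number back out of the LR; the sketch omits both to keep the control flow visible.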

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---

Changes in v2:
- simplify gic_clear_lrs.
---
 xen/arch/arm/gic.c  |   99 ++++++++++++++++++++++++++-------------------------
 xen/arch/arm/vgic.c |    3 +-
 2 files changed, 51 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index fd42922..15e5f91 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,6 +67,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void gic_clear_lrs(struct vcpu *v);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -128,6 +130,7 @@ void gic_restore_state(struct vcpu *v)
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
+    gic_clear_lrs(v);
     gic_restore_pending_irqs(v);
 }
 
@@ -630,8 +633,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
     BUG_ON(lr < 0);
     BUG_ON(state & ~(GICH_LR_STATE_MASK<<GICH_LR_STATE_SHIFT));
 
-    lr_reg = state | GICH_LR_MAINTENANCE_IRQ |
-        ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
+    lr_reg = state | ((priority >> 3) << GICH_LR_PRIORITY_SHIFT) |
         ((irq & GICH_LR_VIRTUAL_MASK) << GICH_LR_VIRTUAL_SHIFT);
     if ( p->desc != NULL )
         lr_reg |= GICH_LR_HW |
@@ -697,6 +699,50 @@ out:
     return;
 }
 
+static void gic_clear_lrs(struct vcpu *v)
+{
+    struct pending_irq *p;
+    int i = 0, irq;
+    uint32_t lr;
+    bool_t inflight;
+
+    ASSERT(!local_irq_is_enabled());
+
+    while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
+                              nr_lrs, i)) < nr_lrs) {
+        lr = GICH[GICH_LR + i];
+        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
+        {
+            inflight = 0;
+            GICH[GICH_LR + i] = 0;
+            clear_bit(i, &this_cpu(lr_mask));
+
+            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+            spin_lock(&gic.lock);
+            p = irq_to_pending(v, irq);
+            if ( p->desc != NULL )
+                p->desc->status &= ~IRQ_INPROGRESS;
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+            {
+                inflight = 1;
+                gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            }
+            spin_unlock(&gic.lock);
+            if ( !inflight )
+            {
+                spin_lock(&v->arch.vgic.lock);
+                list_del_init(&p->inflight);
+                spin_unlock(&v->arch.vgic.lock);
+            }
+
+        }
+
+        i++;
+    }
+}
+
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
     int i;
@@ -737,6 +783,8 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
+    gic_clear_lrs(current);
+
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
@@ -892,53 +940,6 @@ int gicv_setup(struct domain *d)
 
 static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
-    int i = 0, virq;
-    uint32_t lr;
-    struct vcpu *v = current;
-    uint64_t eisr = GICH[GICH_EISR0] | (((uint64_t) GICH[GICH_EISR1]) << 32);
-
-    while ((i = find_next_bit((const long unsigned int *) &eisr,
-                              64, i)) < 64) {
-        struct pending_irq *p, *p2;
-        bool_t inflight;
-
-        inflight = 0;
-
-        spin_lock_irq(&gic.lock);
-        lr = GICH[GICH_LR + i];
-        virq = lr & GICH_LR_VIRTUAL_MASK;
-        GICH[GICH_LR + i] = 0;
-        clear_bit(i, &this_cpu(lr_mask));
-
-        p = irq_to_pending(v, virq);
-        if ( p->desc != NULL )
-            p->desc->status &= ~IRQ_INPROGRESS;
-        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-        {
-            inflight = 1;
-            gic_add_to_lr_pending(v, virq, p->priority);
-        }
-
-        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-
-        if ( !list_empty(&v->arch.vgic.lr_pending) ) {
-            p2 = list_entry(v->arch.vgic.lr_pending.next, typeof(*p2), lr_queue);
-            gic_set_lr(v, i, p2->irq, GICH_LR_PENDING, p2->priority);
-            list_del_init(&p2->lr_queue);
-            set_bit(i, &this_cpu(lr_mask));
-        }
-        spin_unlock_irq(&gic.lock);
-
-        if ( !inflight )
-        {
-            spin_lock_irq(&v->arch.vgic.lock);
-            list_del_init(&p->inflight);
-            spin_unlock_irq(&v->arch.vgic.lock);
-        }
-
-        i++;
-    }
 }
 
 void gic_dump_info(struct vcpu *v)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 7d10227..da15f4d 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -699,8 +699,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
         if ( (irq != current->domain->arch.evtchn_irq) ||
              (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
             set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
-        return;
+        goto out;
     }
 
     /* vcpu offline */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP9-0007yL-0r; Wed, 26 Feb 2014 18:40:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007u0-Ld
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:16 +0000
Received: from [193.109.254.147:16942] by server-16.bemta-14.messagelabs.com
	id B1/41-21945-F053E035; Wed, 26 Feb 2014 18:40:15 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393440013!3366978!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15800 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388727"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-3f;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:47 +0000
Message-ID: <1393439997-26936-2-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 02/12] xen/arm: remove unused virtual
	parameter from vgic_vcpu_inject_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>
---
 xen/arch/arm/domain.c     |    2 +-
 xen/arch/arm/gic.c        |    2 +-
 xen/arch/arm/irq.c        |    2 +-
 xen/arch/arm/time.c       |    2 +-
 xen/arch/arm/vgic.c       |    4 ++--
 xen/arch/arm/vtimer.c     |    4 ++--
 xen/include/asm-arm/gic.h |    2 +-
 7 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 635a9a4..244738d 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -791,7 +791,7 @@ void vcpu_mark_events_pending(struct vcpu *v)
     if ( already_pending )
         return;
 
-    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq, 1);
+    vgic_vcpu_inject_irq(v, v->domain->arch.evtchn_irq);
 }
 
 /*
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index f860194..6f27630 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -734,7 +734,7 @@ int gic_events_need_delivery(void)
 void gic_inject(void)
 {
     if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq, 1);
+        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
     gic_restore_pending_irqs(current);
 }
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index 3e326b0..5daa269 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -159,7 +159,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
         desc->arch.eoi_cpu = smp_processor_id();
 
         /* XXX: inject irq into all guest vcpus */
-        vgic_vcpu_inject_irq(d->vcpu[0], irq, 0);
+        vgic_vcpu_inject_irq(d->vcpu[0], irq);
         goto out_no_end;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index 68b939d..0548201 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -215,7 +215,7 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
     WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
-    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq, 1);
+    vgic_vcpu_inject_irq(current, current->arch.virt_timer.irq);
 }
 
 /* Route timer's IRQ on this CPU */
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 90e9707..7d10227 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -455,7 +455,7 @@ static int vgic_to_sgi(struct vcpu *v, register_t sgir)
                      sgir, vcpu_mask);
             continue;
         }
-        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq, 1);
+        vgic_vcpu_inject_irq(d->vcpu[vcpuid], virtual_irq);
     }
     return 1;
 }
@@ -683,7 +683,7 @@ void vgic_clear_pending_irqs(struct vcpu *v)
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
-void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
+void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 {
     int idx = irq >> 2, byte = irq & 0x3;
     uint8_t priority;
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index e325f78..87be11e 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -34,14 +34,14 @@ static void phys_timer_expired(void *data)
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_PENDING;
     if ( !(t->ctl & CNTx_CTL_MASK) )
-        vgic_vcpu_inject_irq(t->v, t->irq, 1);
+        vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 static void virt_timer_expired(void *data)
 {
     struct vtimer *t = data;
     t->ctl |= CNTx_CTL_MASK;
-    vgic_vcpu_inject_irq(t->v, t->irq, 1);
+    vgic_vcpu_inject_irq(t->v, t->irq);
 }
 
 int vcpu_domain_init(struct domain *d)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 071280b..6fce5c2 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -162,7 +162,7 @@ extern void domain_vgic_free(struct domain *d);
 
 extern int vcpu_vgic_init(struct vcpu *v);
 
-extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq,int virtual);
+extern void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq);
 extern void vgic_clear_pending_irqs(struct vcpu *v);
 extern struct pending_irq *irq_to_pending(struct vcpu *v, unsigned int irq);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP9-0007zI-Vj; Wed, 26 Feb 2014 18:40:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007u4-Ui
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:17 +0000
Received: from [85.158.139.211:33322] by server-6.bemta-5.messagelabs.com id
	20/E4-14342-0153E035; Wed, 26 Feb 2014 18:40:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393440013!6451951!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9470 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388728"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-B8;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:53 +0000
Message-ID: <1393439997-26936-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 08/12] xen/arm: call gic_clear_lrs on
	entry to the hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This change is needed by later patches in the series. It ensures that
Xen's calculation of the highest-priority interrupt currently inflight
is accurate and not based on stale data.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c        |   12 +++++-------
 xen/arch/arm/traps.c      |   10 ++++++++++
 xen/include/asm-arm/gic.h |    1 +
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index d1e7ed3..0e429c8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,8 +67,6 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
-static void gic_clear_lrs(struct vcpu *v);
-
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -130,7 +128,6 @@ void gic_restore_state(struct vcpu *v)
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
-    gic_clear_lrs(v);
     gic_restore_pending_irqs(v);
 }
 
@@ -700,14 +697,15 @@ out:
     return;
 }
 
-static void gic_clear_lrs(struct vcpu *v)
+void gic_clear_lrs(struct vcpu *v)
 {
     struct pending_irq *p;
     int i = 0, irq;
     uint32_t lr;
     bool_t inflight;
+    unsigned long flags;
 
-    ASSERT(!local_irq_is_enabled());
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
     while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
                               nr_lrs, i)) < nr_lrs) {
@@ -743,6 +741,8 @@ static void gic_clear_lrs(struct vcpu *v)
 
         i++;
     }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 static void gic_restore_pending_irqs(struct vcpu *v)
@@ -785,8 +785,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    gic_clear_lrs(current);
-
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea77cb8..7663114 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -70,6 +70,7 @@ static int debug_stack_lines = 40;
 
 integer_param("debug_stack_lines", debug_stack_lines);
 
+static void enter_hypervisor_head(void);
 
 void __cpuinit init_traps(void)
 {
@@ -1701,6 +1702,8 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 {
     union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
 
+    enter_hypervisor_head();
+
     switch (hsr.ec) {
     case HSR_EC_WFI_WFE:
         if ( !check_conditional_instr(regs, hsr) )
@@ -1778,11 +1781,13 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 0);
 }
 
 asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 1);
 }
 
@@ -1800,6 +1805,11 @@ asmlinkage void leave_hypervisor_tail(void)
     }
 }
 
+static void enter_hypervisor_head(void)
+{
+    gic_clear_lrs(current);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 4834cd6..5a9dc77 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -220,6 +220,7 @@ extern unsigned int gic_number_lines(void);
 /* IRQ translation function for the device tree */
 int gic_irq_xlate(const u32 *intspec, unsigned int intsize,
                   unsigned int *out_hwirq, unsigned int *out_type);
+void gic_clear_lrs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjP9-0007zI-Vj; Wed, 26 Feb 2014 18:40:28 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOy-0007u4-Ui
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:17 +0000
Received: from [85.158.139.211:33322] by server-6.bemta-5.messagelabs.com id
	20/E4-14342-0153E035; Wed, 26 Feb 2014 18:40:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393440013!6451951!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9470 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388728"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-B8;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:53 +0000
Message-ID: <1393439997-26936-8-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 08/12] xen/arm: call gic_clear_lrs on
	entry to the hypervisor
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This change is needed by later patches. It ensures that Xen's
calculation of the highest priority interrupt currently inflight is
accurate and not based on stale data.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c        |   12 +++++-------
 xen/arch/arm/traps.c      |   10 ++++++++++
 xen/include/asm-arm/gic.h |    1 +
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index d1e7ed3..0e429c8 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,8 +67,6 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
-static void gic_clear_lrs(struct vcpu *v);
-
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -130,7 +128,6 @@ void gic_restore_state(struct vcpu *v)
     GICH[GICH_HCR] = GICH_HCR_EN;
     isb();
 
-    gic_clear_lrs(v);
     gic_restore_pending_irqs(v);
 }
 
@@ -700,14 +697,15 @@ out:
     return;
 }
 
-static void gic_clear_lrs(struct vcpu *v)
+void gic_clear_lrs(struct vcpu *v)
 {
     struct pending_irq *p;
     int i = 0, irq;
     uint32_t lr;
     bool_t inflight;
+    unsigned long flags;
 
-    ASSERT(!local_irq_is_enabled());
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
     while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
                               nr_lrs, i)) < nr_lrs) {
@@ -743,6 +741,8 @@ static void gic_clear_lrs(struct vcpu *v)
 
         i++;
     }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 static void gic_restore_pending_irqs(struct vcpu *v)
@@ -785,8 +785,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    gic_clear_lrs(current);
-
     if ( vcpu_info(current, evtchn_upcall_pending) )
         vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
 
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ea77cb8..7663114 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -70,6 +70,7 @@ static int debug_stack_lines = 40;
 
 integer_param("debug_stack_lines", debug_stack_lines);
 
+static void enter_hypervisor_head(void);
 
 void __cpuinit init_traps(void)
 {
@@ -1701,6 +1702,8 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 {
     union hsr hsr = { .bits = READ_SYSREG32(ESR_EL2) };
 
+    enter_hypervisor_head();
+
     switch (hsr.ec) {
     case HSR_EC_WFI_WFE:
         if ( !check_conditional_instr(regs, hsr) )
@@ -1778,11 +1781,13 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
 
 asmlinkage void do_trap_irq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 0);
 }
 
 asmlinkage void do_trap_fiq(struct cpu_user_regs *regs)
 {
+    enter_hypervisor_head();
     gic_interrupt(regs, 1);
 }
 
@@ -1800,6 +1805,11 @@ asmlinkage void leave_hypervisor_tail(void)
     }
 }
 
+static void enter_hypervisor_head(void)
+{
+    gic_clear_lrs(current);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 4834cd6..5a9dc77 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -220,6 +220,7 @@ extern unsigned int gic_number_lines(void);
 /* IRQ translation function for the device tree */
 int gic_irq_xlate(const u32 *intspec, unsigned int intsize,
                   unsigned int *out_hwirq, unsigned int *out_type);
+void gic_clear_lrs(struct vcpu *v);
 
 #endif /* __ASSEMBLY__ */
 #endif
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjPB-00080e-CY; Wed, 26 Feb 2014 18:40:29 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjOz-0007uH-P6
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:18 +0000
Received: from [193.109.254.147:16975] by server-9.bemta-14.messagelabs.com id
	46/A1-24895-0153E035; Wed, 26 Feb 2014 18:40:16 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393440013!3366978!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15883 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388738"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-DN;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:55 +0000
Message-ID: <1393439997-26936-10-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 10/12] xen/arm: don't protect GICH and
	lr_queue accesses with gic.lock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

GICH is banked, so protect accesses to it by disabling interrupts.
Protect lr_queue accesses with the vgic.lock only.
gic.lock now only protects accesses to the GICD.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c  |   22 +++-------------------
 xen/arch/arm/vgic.c |   12 ++++++++++--
 2 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 2dc6386..296d9a7 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -668,17 +668,14 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
 {
     struct pending_irq *p = irq_to_pending(v, virtual_irq);
 
-    spin_lock(&gic.lock);
     if ( !list_empty(&p->lr_queue) )
         list_del_init(&p->lr_queue);
-    spin_unlock(&gic.lock);
 }
 
 void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
                          unsigned int priority)
 {
     int i;
-    unsigned long flags;
     struct pending_irq *n = irq_to_pending(v, irq);
 
     if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
@@ -688,23 +685,17 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
         return;
     }
 
-    spin_lock_irqsave(&gic.lock, flags);
-
     if ( v == current && list_empty(&v->arch.vgic.lr_pending) )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
             gic_set_lr(v, i, irq, GICH_LR_PENDING, priority);
-            goto out;
+            return;
         }
     }
 
     gic_add_to_lr_pending(v, irq, priority);
-
-out:
-    spin_unlock_irqrestore(&gic.lock, flags);
-    return;
 }
 
 static void _gic_clear_lr(struct vcpu *v, int i)
@@ -726,8 +717,6 @@ static void _gic_clear_lr(struct vcpu *v, int i)
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
-        spin_lock(&gic.lock);
-
         GICH[GICH_LR + i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
@@ -741,8 +730,6 @@ static void _gic_clear_lr(struct vcpu *v, int i)
             gic_raise_guest_irq(v, irq, p->priority);
         } else
             list_del_init(&p->inflight);
-
-        spin_unlock(&gic.lock);
     }
 }
 
@@ -772,11 +759,11 @@ static void gic_restore_pending_irqs(struct vcpu *v)
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if ( i >= nr_lrs ) return;
 
-        spin_lock_irqsave(&gic.lock, flags);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&gic.lock, flags);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     }
 
 }
@@ -784,13 +771,10 @@ static void gic_restore_pending_irqs(struct vcpu *v)
 void gic_clear_pending_irqs(struct vcpu *v)
 {
     struct pending_irq *p, *t;
-    unsigned long flags;
 
-    spin_lock_irqsave(&gic.lock, flags);
     v->arch.lr_mask = 0;
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
         list_del_init(&p->lr_queue);
-    spin_unlock_irqrestore(&gic.lock, flags);
 }
 
 int gic_events_need_delivery(void)
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 981db6c..e9464b2 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -365,12 +365,15 @@ static void vgic_disable_irqs(struct vcpu *v, uint32_t r, int n)
     struct pending_irq *p;
     unsigned int irq;
     int i = 0;
+    unsigned long flags;
 
     while ( (i = find_next_bit((const long unsigned int *) &r, 32, i)) < 32 ) {
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
+        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         gic_remove_from_queues(v, irq);
+        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
         if ( p->desc != NULL )
             p->desc->handler->disable(p->desc);
         i++;
@@ -391,8 +394,13 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
              vcpu_info(current, evtchn_upcall_pending) &&
              list_empty(&p->inflight) )
             vgic_vcpu_inject_irq(v, irq);
-        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_raise_guest_irq(v, irq, p->priority);
+        else {
+            unsigned long flags;
+            spin_lock_irqsave(&v->arch.vgic.lock, flags);
+            if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+                gic_raise_guest_irq(v, irq, p->priority);
+            spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+        }
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjPC-000826-Dn; Wed, 26 Feb 2014 18:40:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjP2-0007uR-3L
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:20 +0000
Received: from [85.158.139.211:50621] by server-17.bemta-5.messagelabs.com id
	09/54-31975-1153E035; Wed, 26 Feb 2014 18:40:17 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393440013!6451951!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9495 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388742"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-Be;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:54 +0000
Message-ID: <1393439997-26936-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 09/12] xen/arm: second irq injection
	while the first irq is still inflight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
while the first one is still active.
If the first irq is already pending (not active), just clear
GIC_IRQ_GUEST_PENDING, because the irq has already been injected and is
already visible to the guest.
If the irq has already been EOI'ed, clear the GICH_LR right away and
move the interrupt to lr_pending so that it is reinjected by
gic_restore_pending_irqs on return to the guest.

If the target cpu is not the current cpu, set GIC_IRQ_GUEST_PENDING and
send an SGI. The target cpu is going to be interrupted and call
gic_clear_lrs, which takes the same actions.

Unify the inflight and non-inflight code paths in vgic_vcpu_inject_irq.

Do not call vgic_vcpu_inject_irq from gic_inject if
evtchn_upcall_pending is set. With that call removed, we no longer need
to special-case evtchn_irq in vgic_vcpu_inject_irq.
We also need to force the first injection of evtchn_irq (call
vgic_vcpu_inject_irq) from vgic_enable_irqs, because
evtchn_upcall_pending is already set by common code on vcpu creation.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---
Changes in v3:
- do not use the PENDING and ACTIVE state for HW interrupts,
- check that p->lr is valid in gic_set_clear_lr.
---
 xen/arch/arm/gic.c  |   89 +++++++++++++++++++++++++++++----------------------
 xen/arch/arm/vgic.c |   33 ++++++++++---------
 2 files changed, 68 insertions(+), 54 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0e429c8..2dc6386 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,6 +67,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void _gic_clear_lr(struct vcpu *v, int i);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -677,6 +679,14 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
 {
     int i;
     unsigned long flags;
+    struct pending_irq *n = irq_to_pending(v, irq);
+
+    if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
+    {
+        if ( v == current )
+            _gic_clear_lr(v, n->lr);
+        return;
+    }
 
     spin_lock_irqsave(&gic.lock, flags);
 
@@ -697,51 +707,57 @@ out:
     return;
 }
 
-void gic_clear_lrs(struct vcpu *v)
+static void _gic_clear_lr(struct vcpu *v, int i)
 {
-    struct pending_irq *p;
-    int i = 0, irq;
+    int irq;
     uint32_t lr;
-    bool_t inflight;
+    struct pending_irq *p;
+
+    lr = GICH[GICH_LR + i];
+    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+    p = irq_to_pending(v, irq);
+    if ( lr & GICH_LR_ACTIVE )
+    {
+        /* HW interrupts cannot be ACTIVE and PENDING */
+        if ( p->desc == NULL &&
+             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+    } else if ( lr & GICH_LR_PENDING ) {
+        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    } else {
+        spin_lock(&gic.lock);
+
+        GICH[GICH_LR + i] = 0;
+        clear_bit(i, &this_cpu(lr_mask));
+
+        if ( p->desc != NULL )
+            p->desc->status &= ~IRQ_INPROGRESS;
+        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        p->lr = nr_lrs;
+        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+        {
+            gic_raise_guest_irq(v, irq, p->priority);
+        } else
+            list_del_init(&p->inflight);
+
+        spin_unlock(&gic.lock);
+    }
+}
+
+void gic_clear_lrs(struct vcpu *v)
+{
+    int i = 0;
     unsigned long flags;
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
-
     while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
                               nr_lrs, i)) < nr_lrs) {
-        lr = GICH[GICH_LR + i];
-        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
-        {
-            inflight = 0;
-            GICH[GICH_LR + i] = 0;
-            clear_bit(i, &this_cpu(lr_mask));
-
-            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
-            spin_lock(&gic.lock);
-            p = irq_to_pending(v, irq);
-            if ( p->desc != NULL )
-                p->desc->status &= ~IRQ_INPROGRESS;
-            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-            p->lr = nr_lrs;
-            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-            {
-                inflight = 1;
-                gic_raise_guest_irq(v, irq, p->priority);
-            }
-            spin_unlock(&gic.lock);
-            if ( !inflight )
-            {
-                spin_lock(&v->arch.vgic.lock);
-                list_del_init(&p->inflight);
-                spin_unlock(&v->arch.vgic.lock);
-            }
-
-        }
 
+        _gic_clear_lr(v, i);
         i++;
     }
-
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
@@ -785,9 +801,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
-
     gic_restore_pending_irqs(current);
 
     if ( !list_empty(&current->arch.vgic.lr_pending) &&
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index b003f29..981db6c 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -387,7 +387,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        if ( irq == v->domain->arch.evtchn_irq &&
+             vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            vgic_vcpu_inject_irq(v, irq);
+        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
@@ -694,14 +698,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
-    if ( !list_empty(&n->inflight) )
-    {
-        if ( (irq != current->domain->arch.evtchn_irq) ||
-             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
-            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        goto out;
-    }
-
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
@@ -713,21 +709,26 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     n->irq = irq;
     set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-    n->priority = priority;
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
         gic_raise_guest_irq(v, irq, priority);
 
-    list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
+    if ( list_empty(&n->inflight) )
     {
-        if ( iter->priority > priority )
+        n->priority = priority;
+        list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
         {
-            list_add_tail(&n->inflight, &iter->inflight);
-            goto out;
+            if ( iter->priority > priority )
+            {
+                list_add_tail(&n->inflight, &iter->inflight);
+                goto out;
+            }
         }
-    }
-    list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+        list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+    } else if ( n->priority != priority )
+        gdprintk(XENLOG_WARNING, "Changing priority of an inflight interrupt is not supported\n");
+
 out:
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjPC-000826-Dn; Wed, 26 Feb 2014 18:40:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjP2-0007uR-3L
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:20 +0000
Received: from [85.158.139.211:50621] by server-17.bemta-5.messagelabs.com id
	09/54-31975-1153E035; Wed, 26 Feb 2014 18:40:17 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393440013!6451951!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9495 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388742"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-Be;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:54 +0000
Message-ID: <1393439997-26936-9-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 09/12] xen/arm: second irq injection
	while the first irq is still inflight
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Set GICH_LR_PENDING in the corresponding GICH_LR to inject a second irq
while the first one is still active.
If the first irq is already pending (not active), just clear
GIC_IRQ_GUEST_PENDING, because the irq has already been injected and is
already visible to the guest.
If the irq has already been EOI'ed, then clear the GICH_LR right away
and move the interrupt to lr_pending so that it will be reinjected by
gic_restore_pending_irqs on return to the guest.

If the target cpu is not the current cpu, then set GIC_IRQ_GUEST_PENDING
and send an SGI. The target cpu will be interrupted and will call
gic_clear_lrs, which takes the same actions.

Unify the inflight and non-inflight code paths in vgic_vcpu_inject_irq.

Do not call vgic_vcpu_inject_irq from gic_inject if
evtchn_upcall_pending is set. If we remove that call, we don't need to
special case evtchn_irq in vgic_vcpu_inject_irq anymore.
We also need to force the first injection of evtchn_irq (by calling
vgic_vcpu_inject_irq) from vgic_enable_irqs, because
evtchn_upcall_pending is already set by common code on vcpu creation.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---
Changes in v3:
- do not use the PENDING and ACTIVE state for HW interrupts,
- check that p->lr is valid in gic_set_clear_lr.
---
 xen/arch/arm/gic.c  |   89 +++++++++++++++++++++++++++++----------------------
 xen/arch/arm/vgic.c |   33 ++++++++++---------
 2 files changed, 68 insertions(+), 54 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 0e429c8..2dc6386 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -67,6 +67,8 @@ static DEFINE_PER_CPU(u8, gic_cpu_id);
 /* Maximum cpu interface per GIC */
 #define NR_GIC_CPU_IF 8
 
+static void _gic_clear_lr(struct vcpu *v, int i);
+
 static unsigned int gic_cpu_mask(const cpumask_t *cpumask)
 {
     unsigned int cpu;
@@ -677,6 +679,14 @@ void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
 {
     int i;
     unsigned long flags;
+    struct pending_irq *n = irq_to_pending(v, irq);
+
+    if ( test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status))
+    {
+        if ( v == current )
+            _gic_clear_lr(v, n->lr);
+        return;
+    }
 
     spin_lock_irqsave(&gic.lock, flags);
 
@@ -697,51 +707,57 @@ out:
     return;
 }
 
-void gic_clear_lrs(struct vcpu *v)
+static void _gic_clear_lr(struct vcpu *v, int i)
 {
-    struct pending_irq *p;
-    int i = 0, irq;
+    int irq;
     uint32_t lr;
-    bool_t inflight;
+    struct pending_irq *p;
+
+    lr = GICH[GICH_LR + i];
+    irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
+    p = irq_to_pending(v, irq);
+    if ( lr & GICH_LR_ACTIVE )
+    {
+        /* HW interrupts cannot be ACTIVE and PENDING */
+        if ( p->desc == NULL &&
+             test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
+             test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+            GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+    } else if ( lr & GICH_LR_PENDING ) {
+        clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
+    } else {
+        spin_lock(&gic.lock);
+
+        GICH[GICH_LR + i] = 0;
+        clear_bit(i, &this_cpu(lr_mask));
+
+        if ( p->desc != NULL )
+            p->desc->status &= ~IRQ_INPROGRESS;
+        clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        p->lr = nr_lrs;
+        if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
+                test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
+        {
+            gic_raise_guest_irq(v, irq, p->priority);
+        } else
+            list_del_init(&p->inflight);
+
+        spin_unlock(&gic.lock);
+    }
+}
+
+void gic_clear_lrs(struct vcpu *v)
+{
+    int i = 0;
     unsigned long flags;
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
-
     while ((i = find_next_bit((const long unsigned int *) &this_cpu(lr_mask),
                               nr_lrs, i)) < nr_lrs) {
-        lr = GICH[GICH_LR + i];
-        if ( !(lr & (GICH_LR_PENDING|GICH_LR_ACTIVE)) )
-        {
-            inflight = 0;
-            GICH[GICH_LR + i] = 0;
-            clear_bit(i, &this_cpu(lr_mask));
-
-            irq = (lr >> GICH_LR_VIRTUAL_SHIFT) & GICH_LR_VIRTUAL_MASK;
-            spin_lock(&gic.lock);
-            p = irq_to_pending(v, irq);
-            if ( p->desc != NULL )
-                p->desc->status &= ~IRQ_INPROGRESS;
-            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
-            p->lr = nr_lrs;
-            if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
-                    test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
-            {
-                inflight = 1;
-                gic_raise_guest_irq(v, irq, p->priority);
-            }
-            spin_unlock(&gic.lock);
-            if ( !inflight )
-            {
-                spin_lock(&v->arch.vgic.lock);
-                list_del_init(&p->inflight);
-                spin_unlock(&v->arch.vgic.lock);
-            }
-
-        }
 
+        _gic_clear_lr(v, i);
         i++;
     }
-
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
@@ -785,9 +801,6 @@ int gic_events_need_delivery(void)
 
 void gic_inject(void)
 {
-    if ( vcpu_info(current, evtchn_upcall_pending) )
-        vgic_vcpu_inject_irq(current, current->domain->arch.evtchn_irq);
-
     gic_restore_pending_irqs(current);
 
     if ( !list_empty(&current->arch.vgic.lr_pending) &&
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index b003f29..981db6c 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -387,7 +387,11 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         irq = i + (32 * n);
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
-        if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
+        if ( irq == v->domain->arch.evtchn_irq &&
+             vcpu_info(current, evtchn_upcall_pending) &&
+             list_empty(&p->inflight) )
+            vgic_vcpu_inject_irq(v, irq);
+        else if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
             gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
@@ -694,14 +698,6 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     spin_lock_irqsave(&v->arch.vgic.lock, flags);
 
-    if ( !list_empty(&n->inflight) )
-    {
-        if ( (irq != current->domain->arch.evtchn_irq) ||
-             (!test_bit(GIC_IRQ_GUEST_VISIBLE, &n->status)) )
-            set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-        goto out;
-    }
-
     /* vcpu offline */
     if ( test_bit(_VPF_down, &v->pause_flags) )
     {
@@ -713,21 +709,26 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     n->irq = irq;
     set_bit(GIC_IRQ_GUEST_PENDING, &n->status);
-    n->priority = priority;
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
         gic_raise_guest_irq(v, irq, priority);
 
-    list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
+    if ( list_empty(&n->inflight) )
     {
-        if ( iter->priority > priority )
+        n->priority = priority;
+        list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
         {
-            list_add_tail(&n->inflight, &iter->inflight);
-            goto out;
+            if ( iter->priority > priority )
+            {
+                list_add_tail(&n->inflight, &iter->inflight);
+                goto out;
+            }
         }
-    }
-    list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+        list_add_tail(&n->inflight, &v->arch.vgic.inflight_irqs);
+    } else if ( n->priority != priority )
+        gdprintk(XENLOG_WARNING, "Changing priority of an inflight interrupt is not supported\n");
+
 out:
     spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
     /* we have a new higher priority irq, inject it into the guest */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:40:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjPD-00083r-Mh; Wed, 26 Feb 2014 18:40:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjP1-0007ua-NU
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:40:20 +0000
Received: from [85.158.143.35:7725] by server-3.bemta-4.messagelabs.com id
	32/EE-11539-1153E035; Wed, 26 Feb 2014 18:40:17 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393440011!8519574!5
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28761 invoked from network); 26 Feb 2014 18:40:15 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:40:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="104388734"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 18:40:12 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:40:10 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-9B;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:52 +0000
Message-ID: <1393439997-26936-7-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 07/12] xen/arm:
	s/gic_set_guest_irq/gic_raise_guest_irq
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Rename gic_set_guest_irq to gic_raise_guest_irq and remove the state
parameter.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c        |    8 ++++----
 xen/arch/arm/vgic.c       |    4 ++--
 xen/include/asm-arm/gic.h |    4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 9293c16..d1e7ed3 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -675,8 +675,8 @@ void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq)
     spin_unlock(&gic.lock);
 }
 
-void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
-        unsigned int state, unsigned int priority)
+void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
+                         unsigned int priority)
 {
     int i;
     unsigned long flags;
@@ -688,7 +688,7 @@ void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
         if (i < nr_lrs) {
             set_bit(i, &this_cpu(lr_mask));
-            gic_set_lr(v, i, irq, state, priority);
+            gic_set_lr(v, i, irq, GICH_LR_PENDING, priority);
             goto out;
         }
     }
@@ -729,7 +729,7 @@ static void gic_clear_lrs(struct vcpu *v)
                     test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
             {
                 inflight = 1;
-                gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+                gic_raise_guest_irq(v, irq, p->priority);
             }
             spin_unlock(&gic.lock);
             if ( !inflight )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index da15f4d..b003f29 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -388,7 +388,7 @@ static void vgic_enable_irqs(struct vcpu *v, uint32_t r, int n)
         p = irq_to_pending(v, irq);
         set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
         if ( !list_empty(&p->inflight) && !test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status) )
-            gic_set_guest_irq(v, irq, GICH_LR_PENDING, p->priority);
+            gic_raise_guest_irq(v, irq, p->priority);
         if ( p->desc != NULL )
             p->desc->handler->enable(p->desc);
         i++;
@@ -717,7 +717,7 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq)
 
     /* the irq is enabled */
     if ( test_bit(GIC_IRQ_GUEST_ENABLED, &n->status) )
-        gic_set_guest_irq(v, irq, GICH_LR_PENDING, priority);
+        gic_raise_guest_irq(v, irq, priority);
 
     list_for_each_entry ( iter, &v->arch.vgic.inflight_irqs, inflight )
     {
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 6fce5c2..4834cd6 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -178,8 +178,8 @@ extern void gic_clear_pending_irqs(struct vcpu *v);
 extern int gic_events_need_delivery(void);
 
 extern void __cpuinit init_maintenance_interrupt(void);
-extern void gic_set_guest_irq(struct vcpu *v, unsigned int irq,
-        unsigned int state, unsigned int priority);
+extern void gic_raise_guest_irq(struct vcpu *v, unsigned int irq,
+        unsigned int priority);
 extern void gic_remove_from_queues(struct vcpu *v, unsigned int virtual_irq);
 extern int gic_route_irq_to_guest(struct domain *d,
                                   const struct dt_irq *irq,
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:55:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjds-00012l-Fn; Wed, 26 Feb 2014 18:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjdq-00012g-Vi
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:55:39 +0000
Received: from [85.158.143.35:29276] by server-1.bemta-4.messagelabs.com id
	DA/E2-31661-AA83E035; Wed, 26 Feb 2014 18:55:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393440935!8548275!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4520 invoked from network); 26 Feb 2014 18:55:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="106029837"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 18:55:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:55:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-Fx;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:57 +0000
Message-ID: <1393439997-26936-12-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 12/12] xen/arm: print more info in
	gic_dump_info, keep gic_lr sync'ed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For each inflight and pending irq, print the GIC_IRQ_GUEST_ENABLED,
GIC_IRQ_GUEST_PENDING and GIC_IRQ_GUEST_VISIBLE bits.

In order to get consistent information from gic_dump_info, we need to
take the vgic lock and disable interrupts before walking the inflight
and lr_pending lists.

We also need to keep v->arch.gic_lr in sync with GICH_LR registers.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---
Changes in v3:
- use spin_lock_irqsave and spin_unlock_irqrestore in gic_dump_info.
---
 xen/arch/arm/gic.c |   20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index aaba8b0..7de4443 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -639,6 +639,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
             ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
 
     GICH[GICH_LR + lr] = lr_reg;
+    v->arch.gic_lr[lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -714,11 +715,15 @@ static void _gic_clear_lr(struct vcpu *v, int i)
         if ( p->desc == NULL &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+        {
             GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+            v->arch.gic_lr[i] = lr | GICH_LR_PENDING;
+        }
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
         GICH[GICH_LR + i] = 0;
+        v->arch.gic_lr[i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
         if ( p->desc != NULL )
@@ -1006,8 +1011,11 @@ void gic_dump_info(struct vcpu *v)
 {
     int i;
     struct pending_irq *p;
+    unsigned long flags;
 
     printk("GICH_LRs (vcpu %d) mask=%"PRIx64"\n", v->vcpu_id, v->arch.lr_mask);
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
     if ( v == current )
     {
         for ( i = 0; i < nr_lrs; i++ )
@@ -1019,14 +1027,20 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Inflight irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
     {
-        printk("Pending irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Pending irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
-
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void __cpuinit init_maintenance_interrupt(void)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:55:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjdw-000132-SF; Wed, 26 Feb 2014 18:55:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjdu-00012s-To
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:55:43 +0000
Received: from [85.158.143.35:29490] by server-3.bemta-4.messagelabs.com id
	E4/1C-11539-EA83E035; Wed, 26 Feb 2014 18:55:42 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393440935!8548275!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4663 invoked from network); 26 Feb 2014 18:55:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:55:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="106029919"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 18:55:35 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:55:34 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-E2;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:56 +0000
Message-ID: <1393439997-26936-11-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 11/12] xen/arm: gic_events_need_delivery
	and irq priorities
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

gic_events_need_delivery should only return a positive value if an
outstanding pending irq has a higher priority than both the currently
active irq and the priority mask.
Rewrite the function to walk the priority-ordered inflight and lr_queue
lists.
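The resulting delivery check can be sketched as a standalone function
(a minimal sketch with hypothetical names, not the patch itself; the
`>> 3` reflects that the GICH_VMCR priority mask field holds only the
upper 5 bits of an 8-bit GIC priority):

```c
#include <stdint.h>
#include <stdbool.h>

/* Sketch of the delivery decision, assuming 8-bit GIC priorities where
 * a lower numeric value means a higher priority.  An event needs
 * delivery only when the best pending priority beats both the best
 * active priority and the guest's VMCR priority mask. */
static bool events_need_delivery(uint8_t max_pending_priority,
                                 uint8_t active_priority,
                                 uint8_t vmcr_priority_mask)
{
    return max_pending_priority < active_priority &&
           (max_pending_priority >> 3) < vmcr_priority_mask;
}
```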

In gic_restore_pending_irqs, replace lower-priority pending (and not
active) irqs in GICH_LRs with higher-priority irqs if no more GICH_LRs
are available.
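The eviction scan can be modelled in isolation (a simplified sketch
with invented names and a plain array standing in for the in-flight
list; the real code walks a linked list under the vgic lock): when all
list registers are in use, search from the lowest-priority end for an
entry that is merely pending (visible but not yet active), since only
such entries may safely give up their slot.

```c
#include <stdint.h>

/* Hypothetical simplified model of one in-flight interrupt. */
struct entry {
    uint8_t priority;   /* lower value = higher priority */
    int visible;        /* currently occupying a list register */
    int active;         /* guest has acked it; must not be evicted */
    int lr;             /* which list register it occupies */
};

/* inflight[] is assumed sorted by ascending priority value, so we scan
 * from the tail (lowest priority) toward the head and return the index
 * of the first evictable entry, or -1 if every LR holds an active irq. */
static int find_victim(const struct entry *inflight, int n)
{
    for (int i = n - 1; i >= 0; i--)
        if (inflight[i].visible && !inflight[i].active)
            return i;
    return -1;
}
```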

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 xen/arch/arm/gic.c           |   71 +++++++++++++++++++++++++++++++++++++-----
 xen/include/asm-arm/domain.h |    5 +--
 xen/include/asm-arm/gic.h    |    3 ++
 3 files changed, 70 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 296d9a7..aaba8b0 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -709,6 +709,7 @@ static void _gic_clear_lr(struct vcpu *v, int i)
     p = irq_to_pending(v, irq);
     if ( lr & GICH_LR_ACTIVE )
     {
+        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         /* HW interrupts cannot be ACTIVE and PENDING */
         if ( p->desc == NULL &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
@@ -723,6 +724,7 @@ static void _gic_clear_lr(struct vcpu *v, int i)
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         p->lr = nr_lrs;
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
@@ -750,22 +752,47 @@ void gic_clear_lrs(struct vcpu *v)
 
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
-    int i;
-    struct pending_irq *p, *t;
+    int i = 0, lrs = nr_lrs;
+    struct pending_irq *p, *t, *p_r;
     unsigned long flags;
 
+    if ( list_empty(&v->arch.vgic.lr_pending) )
+        return;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    p_r = list_entry(v->arch.vgic.inflight_irqs.prev,
+                         typeof(*p_r), inflight);
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
-        if ( i >= nr_lrs ) return;
+        if ( i >= nr_lrs )
+        {
+            while ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) ||
+                    test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
+            {
+                p_r = list_entry(p_r->inflight.prev, typeof(*p_r), inflight);
+                if ( &p_r->inflight == p->inflight.next )
+                    goto out;
+            }
+            i = p_r->lr;
+            p_r->lr = nr_lrs;
+            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
+            gic_add_to_lr_pending(v, p_r->irq, p_r->priority);
+        }
 
-        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+        lrs--;
+        if ( lrs == 0 )
+            break;
     }
 
+out:
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void gic_clear_pending_irqs(struct vcpu *v)
@@ -779,8 +806,38 @@ void gic_clear_pending_irqs(struct vcpu *v)
 
 int gic_events_need_delivery(void)
 {
-    return (!list_empty(&current->arch.vgic.lr_pending) ||
-            this_cpu(lr_mask));
+    int mask_priority, lrs = nr_lrs;
+    int max_priority = 0xff, active_priority = 0xff;
+    struct vcpu *v = current;
+    struct pending_irq *p;
+    unsigned long flags;
+
+    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
+    {
+        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
+        {
+            if ( p->priority < active_priority )
+                active_priority = p->priority;
+        } else {
+            if ( p->priority < max_priority )
+                max_priority = p->priority;
+        }
+        lrs--;
+        if ( lrs == 0 )
+            break;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+    if ( max_priority < active_priority &&
+         (max_priority >> 3) < mask_priority )
+        return 1;
+    else
+        return 0;
 }
 
 void gic_inject(void)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7b636c8..86cb361 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -56,8 +56,9 @@ struct pending_irq
      *
      */
 #define GIC_IRQ_GUEST_PENDING  0
-#define GIC_IRQ_GUEST_VISIBLE  1
-#define GIC_IRQ_GUEST_ENABLED  2
+#define GIC_IRQ_GUEST_ACTIVE   1
+#define GIC_IRQ_GUEST_VISIBLE  2
+#define GIC_IRQ_GUEST_ENABLED  3
     unsigned long status;
     uint8_t lr;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 5a9dc77..5d8f7f1 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -129,6 +129,9 @@
 #define GICH_LR_CPUID_SHIFT     9
 #define GICH_VTR_NRLRGS         0x3f
 
+#define GICH_VMCR_PRIORITY_MASK   0x1f
+#define GICH_VMCR_PRIORITY_SHIFT  27
+
 /*
  * The minimum GICC_BPR is required to be in the range 0-3. We set
  * GICC_BPR to 0 but we must expect that it might be 3. This means we
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 18:55:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 18:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIjds-00012l-Fn; Wed, 26 Feb 2014 18:55:40 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIjdq-00012g-Vi
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 18:55:39 +0000
Received: from [85.158.143.35:29276] by server-1.bemta-4.messagelabs.com id
	DA/E2-31661-AA83E035; Wed, 26 Feb 2014 18:55:38 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393440935!8548275!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4520 invoked from network); 26 Feb 2014 18:55:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 18:55:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="106029837"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 18:55:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 13:55:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIjOm-0004S3-Fx;
	Wed, 26 Feb 2014 18:40:04 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
To: <xen-devel@lists.xensource.com>
Date: Wed, 26 Feb 2014 18:39:57 +0000
Message-ID: <1393439997-26936-12-git-send-email-stefano.stabellini@eu.citrix.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: julien.grall@citrix.com, jtd@galois.com, Ian.Campbell@citrix.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH-4.5 v3 12/12] xen/arm: print more info in
	gic_dump_info, keep gic_lr sync'ed
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For each inflight and pending irq, print the GIC_IRQ_GUEST_ENABLED,
GIC_IRQ_GUEST_PENDING and GIC_IRQ_GUEST_VISIBLE status bits.

In order to get consistent information from gic_dump_info, we need to
take the vgic.lock and disable interrupts before walking the inflight
and lr_pending lists.

We also need to keep v->arch.gic_lr in sync with GICH_LR registers.
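The sync requirement amounts to a shadow-register pattern, sketched
below with invented names (`hw_lr` stands in for the GICH_LR registers,
`shadow_lr` for v->arch.gic_lr): routing every store through one helper
keeps the software copy from drifting out of sync with the hardware,
which is the invariant this patch restores by mirroring each GICH_LR
write.

```c
#include <stdint.h>

#define NR_LRS 4

/* Hypothetical stand-ins for the hardware list registers and the
 * software shadow copy kept in the vcpu. */
static uint32_t hw_lr[NR_LRS];      /* stands in for GICH[GICH_LR + i] */
static uint32_t shadow_lr[NR_LRS];  /* stands in for v->arch.gic_lr[i] */

/* Every write goes through this helper so hw_lr[i] and shadow_lr[i]
 * can never disagree. */
static void write_lr(int i, uint32_t val)
{
    hw_lr[i] = val;
    shadow_lr[i] = val;
}
```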

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

---
Changes in v3:
- use spin_lock_irqsave and spin_unlock_irqrestore in gic_dump_info.
---
 xen/arch/arm/gic.c |   20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index aaba8b0..7de4443 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -639,6 +639,7 @@ static inline void gic_set_lr(struct vcpu *v, int lr, unsigned int irq,
             ((p->desc->irq & GICH_LR_PHYSICAL_MASK) << GICH_LR_PHYSICAL_SHIFT);
 
     GICH[GICH_LR + lr] = lr_reg;
+    v->arch.gic_lr[lr] = lr_reg;
 
     set_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
     clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
@@ -714,11 +715,15 @@ static void _gic_clear_lr(struct vcpu *v, int i)
         if ( p->desc == NULL &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
              test_and_clear_bit(GIC_IRQ_GUEST_PENDING, &p->status) )
+        {
             GICH[GICH_LR + i] = lr | GICH_LR_PENDING;
+            v->arch.gic_lr[i] = lr | GICH_LR_PENDING;
+        }
     } else if ( lr & GICH_LR_PENDING ) {
         clear_bit(GIC_IRQ_GUEST_PENDING, &p->status);
     } else {
         GICH[GICH_LR + i] = 0;
+        v->arch.gic_lr[i] = 0;
         clear_bit(i, &this_cpu(lr_mask));
 
         if ( p->desc != NULL )
@@ -1006,8 +1011,11 @@ void gic_dump_info(struct vcpu *v)
 {
     int i;
     struct pending_irq *p;
+    unsigned long flags;
 
     printk("GICH_LRs (vcpu %d) mask=%"PRIx64"\n", v->vcpu_id, v->arch.lr_mask);
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
     if ( v == current )
     {
         for ( i = 0; i < nr_lrs; i++ )
@@ -1019,14 +1027,20 @@ void gic_dump_info(struct vcpu *v)
 
     list_for_each_entry ( p, &v->arch.vgic.inflight_irqs, inflight )
     {
-        printk("Inflight irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Inflight irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
 
     list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
     {
-        printk("Pending irq=%d lr=%u\n", p->irq, p->lr);
+        printk("Pending irq=%d lr=%u enable=%d pending=%d visible=%d\n",
+                p->irq, p->lr, test_bit(GIC_IRQ_GUEST_ENABLED, &p->status),
+                test_bit(GIC_IRQ_GUEST_PENDING, &p->status),
+                test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status));
     }
-
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void __cpuinit init_maintenance_interrupt(void)
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:28:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIk8z-0001UJ-5O; Wed, 26 Feb 2014 19:27:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1WIk8x-0001UE-F8
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 19:27:47 +0000
Received: from [85.158.137.68:31519] by server-15.bemta-3.messagelabs.com id
	8F/4B-19263-2304E035; Wed, 26 Feb 2014 19:27:46 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393442863!4419822!1
X-Originating-IP: [198.145.11.231]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28538 invoked from network); 26 Feb 2014 19:27:45 -0000
Received: from smtp.codeaurora.org (HELO smtp.codeaurora.org) (198.145.11.231)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 19:27:45 -0000
Received: from smtp.codeaurora.org (localhost [127.0.0.1])
	by smtp.codeaurora.org (Postfix) with ESMTP id 7EDB713ED83;
	Wed, 26 Feb 2014 19:27:42 +0000 (UTC)
Received: by smtp.codeaurora.org (Postfix, from userid 486)
	id 6F91813EFFC; Wed, 26 Feb 2014 19:27:42 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-caf-smtp.dmz.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-2.9 required=2.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham version=3.3.1
Received: from [10.228.82.110] (rrcs-67-52-130-30.west.biz.rr.com
	[67.52.130.30])
	(using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: cov@smtp.codeaurora.org)
	by smtp.codeaurora.org (Postfix) with ESMTPSA id EBCDC13ED83;
	Wed, 26 Feb 2014 19:27:40 +0000 (UTC)
Message-ID: <530E402C.9040502@codeaurora.org>
Date: Wed, 26 Feb 2014 14:27:40 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130106 Thunderbird/17.0.2
MIME-Version: 1.0
To: Christoffer Dall <christoffer.dall@linaro.org>
References: <20140226183454.GA14639@cbox>
In-Reply-To: <20140226183454.GA14639@cbox>
X-Virus-Scanned: ClamAV using ClamSMTP
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Christoffer,

On 02/26/2014 01:34 PM, Christoffer Dall wrote:
> ARM VM System Specification
> ===========================
> 
> Goal
> ----
> The goal of this spec is to allow suitably-built OS images to run on
> all ARM virtualization solutions, such as KVM or Xen.

Would you consider including simulators/emulators as well, such as QEMU in TCG
mode?

> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> they aim to be hypervisor agnostic.
> 
> Note that simply adhering to the SBSA [2] is not a valid approach,
> for example because the SBSA mandates EL2, which will not be available
> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> may be controversial for some ARM VM implementations to support.
> This spec also covers the aarch32 execution mode, not covered in the
> SBSA.
> 
> 
> Image format
> ------------
> The image format, as presented to the VM, needs to be well-defined in
> order for prepared disk images to be bootable across various
> virtualization implementations.
> 
> The raw disk format as presented to the VM must be partitioned with a
> GUID Partition Table (GPT).  The bootable software must be placed in the
> EFI System Partition (ESP), using the UEFI removable media path, and
> must be an EFI application complying to the UEFI Specification 2.4
> Revision A [6].
> 
> The ESP partition's GPT entry's partition type GUID must be
> C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
> 
> The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> state.
> 
> This ensures that tools for both Xen and KVM can load a binary UEFI
> firmware which can read and boot the EFI application in the disk image.
> 
> A typical scenario will be GRUB2 packaged as an EFI application, which
> mounts the system boot partition and boots Linux.
> 
> 
> Virtual Firmware
> ----------------
> The VM system must be able to boot the EFI application in the ESP.  It
> is recommended that this is achieved by loading a UEFI binary as the
> first software executed by the VM, which then executes the EFI
> application.  The UEFI implementation should be compliant with UEFI
> Specification 2.4 Revision A [6] or later.
> 
> This document strongly recommends that the VM implementation support
> persistent environment storage for the virtual firmware implementation
> in order to enable likely use cases such as adding additional disk
> images to a VM or running installers to perform upgrades.
> 
> The binary UEFI firmware implementation should not be distributed as
> part of the VM image, but is specific to the VM implementation.

Can you elaborate on the motivation for requiring that the kernel be stuffed
into a disk image and for requiring such a heavyweight bootloader/firmware? By
doing so you would seem to exclude those requiring an optimized boot process.

> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.
> 
> Therefore, the VM implementation must provide through its UEFI
> implementation, either:
> 
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
> 
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.
> 
> For more information about the arm and arm64 boot conventions, see
> Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> Linux kernel source tree.
> 
> For more information about UEFI and ACPI booting, see [4] and [5].
> 
> 
> VM Platform
> -----------
> The specification does not mandate any specific memory map.  The guest
> OS must be able to enumerate all processing elements, devices, and
> memory through HW description data (FDT, ACPI) or a bus-specific
> mechanism such as PCI.
> 
> The virtual platform must support at least one of the following ARM
> execution states:
>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> 
> It is recommended to support both (2) and (3) on aarch64 capable
> physical systems.
> 
> The virtual hardware platform must provide a number of mandatory
> peripherals:
> 
>   Serial console:  The platform should provide a console,
>   based on an emulated pl011, a virtio-console, or a Xen PV console.
> 
>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>   limits the the number of virtual CPUs to 8 cores, newer GIC versions
>   removes this limitation.
> 
>   The ARM virtual timer and counter should be available to the VM as
>   per the ARM Generic Timers specification in the ARM ARM [1].
> 
>   A hotpluggable bus to support hotplug of at least block and network
>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>   bus.

Is VirtIO hotplug capable? Over PCI or MMIO transports or both?

> We make the following recommendations for the guest OS kernel:
> 
>   The guest OS must include support for GICv2 and any available newer
>   version of the GIC architecture to maintain compatibility with older
>   VM implementations.
> 
>   It is strongly recommended to include support for all available
>   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
>   drivers in the guest OS kernel or initial ramdisk.

I would love to eventually see some defconfigs for this sort of thing.

> Other common peripherals for block devices, networking, and more can
> (and typically will) be provided, but OS software written and compiled
> to run on ARM VMs cannot make any assumptions about which variations
> of these should exist or which implementation they use (e.g. VirtIO or
> Xen PV).  See "Hardware Description" above.
> 
> Note that this platform specification is separate from the Linux kernel
> concept of mach-virt, which merely specifies a machine model driven
> purely from device tree, but does not mandate any peripherals or have any
> mention of ACPI.

Well, the commit message for it said it mandated a GIC and architected timers.

Regards,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:28:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIk8z-0001UJ-5O; Wed, 26 Feb 2014 19:27:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1WIk8x-0001UE-F8
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 19:27:47 +0000
Received: from [85.158.137.68:31519] by server-15.bemta-3.messagelabs.com id
	8F/4B-19263-2304E035; Wed, 26 Feb 2014 19:27:46 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393442863!4419822!1
X-Originating-IP: [198.145.11.231]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28538 invoked from network); 26 Feb 2014 19:27:45 -0000
Received: from smtp.codeaurora.org (HELO smtp.codeaurora.org) (198.145.11.231)
	by server-8.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 19:27:45 -0000
Received: from smtp.codeaurora.org (localhost [127.0.0.1])
	by smtp.codeaurora.org (Postfix) with ESMTP id 7EDB713ED83;
	Wed, 26 Feb 2014 19:27:42 +0000 (UTC)
Received: by smtp.codeaurora.org (Postfix, from userid 486)
	id 6F91813EFFC; Wed, 26 Feb 2014 19:27:42 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-caf-smtp.dmz.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-2.9 required=2.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham version=3.3.1
Received: from [10.228.82.110] (rrcs-67-52-130-30.west.biz.rr.com
	[67.52.130.30])
	(using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: cov@smtp.codeaurora.org)
	by smtp.codeaurora.org (Postfix) with ESMTPSA id EBCDC13ED83;
	Wed, 26 Feb 2014 19:27:40 +0000 (UTC)
Message-ID: <530E402C.9040502@codeaurora.org>
Date: Wed, 26 Feb 2014 14:27:40 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130106 Thunderbird/17.0.2
MIME-Version: 1.0
To: Christoffer Dall <christoffer.dall@linaro.org>
References: <20140226183454.GA14639@cbox>
In-Reply-To: <20140226183454.GA14639@cbox>
X-Virus-Scanned: ClamAV using ClamSMTP
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Christoffer,

On 02/26/2014 01:34 PM, Christoffer Dall wrote:
> ARM VM System Specification
> ===========================
> 
> Goal
> ----
> The goal of this spec is to allow suitably-built OS images to run on
> all ARM virtualization solutions, such as KVM or Xen.

Would you consider including simulators/emulators as well, such as QEMU in TCG
mode?

> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> they aim to be hypervisor agnostic.
> 
> Note that simply adhering to the SBSA [2] is not a valid approach,
> for example because the SBSA mandates EL2, which will not be available
> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> may be controversial for some ARM VM implementations to support.
> This spec also covers the aarch32 execution mode, not covered in the
> SBSA.
> 
> 
> Image format
> ------------
> The image format, as presented to the VM, needs to be well-defined in
> order for prepared disk images to be bootable across various
> virtualization implementations.
> 
> The raw disk format as presented to the VM must be partitioned with a
> GUID Partition Table (GPT).  The bootable software must be placed in the
> EFI System Partition (ESP), using the UEFI removable media path, and
> must be an EFI application complying to the UEFI Specification 2.4
> Revision A [6].
> 
> The ESP partition's GPT entry's partition type GUID must be
> C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
> 
> The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> state.
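The removable-media convention above is mechanical enough to sketch in a
few lines of C; the helper name is mine, not from the spec:

```c
#include <assert.h>
#include <string.h>

/* Illustrative helper: map the VM's execution state to the UEFI 2.4
 * removable-media boot path quoted above.  Backslashes are doubled
 * only because these are C string literals. */
static const char *removable_media_path(int aarch64)
{
    return aarch64 ? "\\EFI\\BOOT\\BOOTAA64.EFI"
                   : "\\EFI\\BOOT\\BOOTARM.EFI";
}
```

A firmware implementation would hand this path to its FAT driver when no
explicit boot entry exists.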
> 
> This ensures that tools for both Xen and KVM can load a binary UEFI
> firmware which can read and boot the EFI application in the disk image.
> 
> A typical scenario will be GRUB2 packaged as an EFI application, which
> mounts the system boot partition and boots Linux.
> 
> 
> Virtual Firmware
> ----------------
> The VM system must be able to boot the EFI application in the ESP.  It
> is recommended that this is achieved by loading a UEFI binary as the
> first software executed by the VM, which then executes the EFI
> application.  The UEFI implementation should be compliant with UEFI
> Specification 2.4 Revision A [6] or later.
> 
> This document strongly recommends that the VM implementation support
> persistent environment storage for the virtual firmware implementation,
> in order to support probable use cases such as adding additional disk
> images to a VM or running installers to perform upgrades.
> 
> The binary UEFI firmware implementation should not be distributed as
> part of the VM image, but is specific to the VM implementation.

Can you elaborate on the motivation for requiring that the kernel be stuffed
into a disk image and for requiring such a heavyweight bootloader/firmware? By
doing so you would seem to exclude those requiring an optimized boot process.

> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.
> 
> Therefore, the VM implementation must provide, through its UEFI
> implementation, either:
> 
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
> 
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.
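The second option above relies on the OS walking the UEFI configuration
tables to find the ACPI root pointer (RSDP).  Here is a mocked-up C
sketch of that walk; the struct layouts are simplified stand-ins of my
own, and only the ACPI 2.0 table GUID value is taken from the UEFI spec:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-in for EFI_GUID: 16 raw bytes in memory order. */
typedef struct { unsigned char b[16]; } efi_guid;

/* Simplified stand-in for one EFI configuration table entry. */
typedef struct {
    efi_guid guid;
    void *table;        /* vendor table pointer, e.g. the RSDP */
} efi_config_entry;

/* EFI_ACPI_20_TABLE_GUID: 8868E871-E4F1-11D3-BC22-0080C73C8881,
 * laid out with the first three fields little-endian. */
static const efi_guid acpi20_guid = {{
    0x71, 0xe8, 0x68, 0x88, 0xf1, 0xe4, 0xd3, 0x11,
    0xbc, 0x22, 0x00, 0x80, 0xc7, 0x3c, 0x88, 0x81
}};

static void *find_acpi_rsdp(const efi_config_entry *tbl, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (memcmp(&tbl[i].guid, &acpi20_guid, sizeof(efi_guid)) == 0)
            return tbl[i].table;
    return NULL;    /* no ACPI: the VM must then provide a full FDT */
}
```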
> 
> For more information about the arm and arm64 boot conventions, see
> Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> Linux kernel source tree.
> 
> For more information about UEFI and ACPI booting, see [4] and [5].
> 
> 
> VM Platform
> -----------
> The specification does not mandate any specific memory map.  The guest
> OS must be able to enumerate all processing elements, devices, and
> memory through HW description data (FDT, ACPI) or a bus-specific
> mechanism such as PCI.
> 
> The virtual platform must support at least one of the following ARM
> execution states:
>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> 
> It is recommended to support both (2) and (3) on aarch64 capable
> physical systems.
> 
> The virtual hardware platform must provide a number of mandatory
> peripherals:
> 
>   Serial console:  The platform should provide a console,
>   based on an emulated pl011, a virtio-console, or a Xen PV console.
> 
>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>   limits the number of virtual CPUs to 8; newer GIC versions remove
>   this limitation.
> 
>   The ARM virtual timer and counter should be available to the VM as
>   per the ARM Generic Timers specification in the ARM ARM [1].
> 
>   A hotpluggable bus to support hotplug of at least block and network
>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>   bus.

Is VirtIO hotplug capable? Over PCI or MMIO transports or both?

> We make the following recommendations for the guest OS kernel:
> 
>   The guest OS must include support for GICv2 and any available newer
>   version of the GIC architecture to maintain compatibility with older
>   VM implementations.
> 
>   It is strongly recommended to include support for all available
>   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
>   drivers in the guest OS kernel or initial ramdisk.

I would love to eventually see some defconfigs for this sort of thing.
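As a starting point, a guest config fragment enabling the drivers listed
above might look like this; the option names are from mainline Kconfig,
but the exact set needed is my guess:

```
# virtio transports and device drivers
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_VIRTIO_BALLOON=y
# Xen PV frontends
CONFIG_XEN=y
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_HVC_XEN=y
# pl011 serial console (GIC and arch timer come with arch support)
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
```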

> Other common peripherals for block devices, networking, and more can
> (and typically will) be provided, but OS software written and compiled
> to run on ARM VMs cannot make any assumptions about which variations
> of these should exist or which implementation they use (e.g. VirtIO or
> Xen PV).  See "Hardware Description" above.
> 
> Note that this platform specification is separate from the Linux kernel
> concept of mach-virt, which merely specifies a machine model driven
> purely from device tree, but does not mandate any peripherals or have any
> mention of ACPI.

Well, the commit message for it said it mandated a GIC and architected timers.

Regards,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:37:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkHm-0001eh-CC; Wed, 26 Feb 2014 19:36:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WIkHk-0001ec-QZ
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 19:36:53 +0000
Received: from [85.158.143.35:9462] by server-3.bemta-4.messagelabs.com id
	D3/4D-11539-4524E035; Wed, 26 Feb 2014 19:36:52 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393443409!8550907!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12434 invoked from network); 26 Feb 2014 19:36:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 19:36:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,549,1389744000"; d="scan'208";a="106046541"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 19:36:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Wed, 26 Feb 2014 14:36:49 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WIkHg-00059U-5E;
	Wed, 26 Feb 2014 19:36:48 +0000
Date: Wed, 26 Feb 2014 19:36:43 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
In-Reply-To: <1393439997-26936-11-git-send-email-stefano.stabellini@eu.citrix.com>
Message-ID: <alpine.DEB.2.02.1402261926420.31489@kaball.uk.xensource.com>
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
	<1393439997-26936-11-git-send-email-stefano.stabellini@eu.citrix.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: julien.grall@citrix.com, jtd@galois.com, xen-devel@lists.xensource.com,
	Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v3 11/12] xen/arm:
 gic_events_need_delivery and irq priorities
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 26 Feb 2014, Stefano Stabellini wrote:
> gic_events_need_delivery should only return positive if an outstanding
> pending irq has a higher priority than the currently active irq and the
> priority mask.
> Rewrite the function by going through the priority ordered inflight and
> lr_queue lists.
> 
> In gic_restore_pending_irqs replace lower priority pending (and not
> active) irqs in GICH_LRs with higher priority irqs if no more GICH_LRs
> are available.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

[snip]

> @@ -779,8 +806,38 @@ void gic_clear_pending_irqs(struct vcpu *v)
>  
>  int gic_events_need_delivery(void)
>  {
> -    return (!list_empty(&current->arch.vgic.lr_pending) ||
> -            this_cpu(lr_mask));
> +    int mask_priority, lrs = nr_lrs;
> +    int max_priority = 0xff, active_priority = 0xff;
> +    struct vcpu *v = current;
> +    struct pending_irq *p;
> +    unsigned long flags;
> +
> +    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
> +
> +    spin_lock_irqsave(&v->arch.vgic.lock, flags);
> +
> +    list_for_each_entry( p, &v->arch.vgic.lr_pending, lr_queue )
> +    {

Unfortunately I sent the wrong version of this patch. We should be going
through the inflight list here, not the lr_pending list:

list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )

I also added a check for GIC_IRQ_GUEST_ENABLED.
I pushed the full series with this small fix to:

git://xenbits.xen.org/people/sstabellini/xen-unstable.git no_maintenance_interrupts-v3.1

I am also appending the newer version here.


commit cfddc7eb3a7d4649ceacdf89609a86d826773e9e
Author: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date:   Wed Feb 26 19:21:35 2014 +0000

    xen/arm: gic_events_need_delivery and irq priorities
    
    gic_events_need_delivery should only return positive if an outstanding
    pending irq has a higher priority than the currently active irq and the
    priority mask.
    Rewrite the function by going through the priority ordered inflight and
    lr_queue lists.
    
    In gic_restore_pending_irqs replace lower priority pending (and not
    active) irqs in GICH_LRs with higher priority irqs if no more GICH_LRs
    are available.
    
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    
    ---
    Changes in v4:
    - in gic_events_need_delivery go through inflight_irqs and only consider
    enabled irqs.

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 296d9a7..d2e23a9 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -709,6 +709,7 @@ static void _gic_clear_lr(struct vcpu *v, int i)
     p = irq_to_pending(v, irq);
     if ( lr & GICH_LR_ACTIVE )
     {
+        set_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         /* HW interrupts cannot be ACTIVE and PENDING */
         if ( p->desc == NULL &&
              test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) &&
@@ -723,6 +724,7 @@ static void _gic_clear_lr(struct vcpu *v, int i)
         if ( p->desc != NULL )
             p->desc->status &= ~IRQ_INPROGRESS;
         clear_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
+        clear_bit(GIC_IRQ_GUEST_ACTIVE, &p->status);
         p->lr = nr_lrs;
         if ( test_bit(GIC_IRQ_GUEST_PENDING, &p->status) &&
                 test_bit(GIC_IRQ_GUEST_ENABLED, &p->status))
@@ -750,22 +752,47 @@ void gic_clear_lrs(struct vcpu *v)
 
 static void gic_restore_pending_irqs(struct vcpu *v)
 {
-    int i;
-    struct pending_irq *p, *t;
+    int i = 0, lrs = nr_lrs;
+    struct pending_irq *p, *t, *p_r;
     unsigned long flags;
 
+    if ( list_empty(&v->arch.vgic.lr_pending) )
+        return;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    p_r = list_entry(v->arch.vgic.inflight_irqs.prev,
+                         typeof(*p_r), inflight);
     list_for_each_entry_safe ( p, t, &v->arch.vgic.lr_pending, lr_queue )
     {
         i = find_first_zero_bit(&this_cpu(lr_mask), nr_lrs);
-        if ( i >= nr_lrs ) return;
+        if ( i >= nr_lrs )
+        {
+            while ( !test_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status) ||
+                    test_bit(GIC_IRQ_GUEST_ACTIVE, &p_r->status) )
+            {
+                p_r = list_entry(p_r->inflight.prev, typeof(*p_r), inflight);
+                if ( &p_r->inflight == p->inflight.next )
+                    goto out;
+            }
+            i = p_r->lr;
+            p_r->lr = nr_lrs;
+            set_bit(GIC_IRQ_GUEST_PENDING, &p_r->status);
+            clear_bit(GIC_IRQ_GUEST_VISIBLE, &p_r->status);
+            gic_add_to_lr_pending(v, p_r->irq, p_r->priority);
+        }
 
-        spin_lock_irqsave(&v->arch.vgic.lock, flags);
         gic_set_lr(v, i, p->irq, GICH_LR_PENDING, p->priority);
         list_del_init(&p->lr_queue);
         set_bit(i, &this_cpu(lr_mask));
-        spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+        lrs--;
+        if ( lrs == 0 )
+            break;
     }
 
+out:
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
 }
 
 void gic_clear_pending_irqs(struct vcpu *v)
@@ -779,8 +806,38 @@ void gic_clear_pending_irqs(struct vcpu *v)
 
 int gic_events_need_delivery(void)
 {
-    return (!list_empty(&current->arch.vgic.lr_pending) ||
-            this_cpu(lr_mask));
+    int mask_priority, lrs = nr_lrs;
+    int max_priority = 0xff, active_priority = 0xff;
+    struct vcpu *v = current;
+    struct pending_irq *p;
+    unsigned long flags;
+
+    mask_priority = (GICH[GICH_VMCR] >> GICH_VMCR_PRIORITY_SHIFT) & GICH_VMCR_PRIORITY_MASK;
+
+    spin_lock_irqsave(&v->arch.vgic.lock, flags);
+
+    list_for_each_entry( p, &v->arch.vgic.inflight_irqs, inflight )
+    {
+        if ( test_bit(GIC_IRQ_GUEST_ACTIVE, &p->status) )
+        {
+            if ( p->priority < active_priority )
+                active_priority = p->priority;
+        } else if ( test_bit(GIC_IRQ_GUEST_ENABLED, &p->status) ) {
+            if ( p->priority < max_priority )
+                max_priority = p->priority;
+        }
+        lrs--;
+        if ( lrs == 0 )
+            break;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+    if ( max_priority < active_priority &&
+         (max_priority >> 3) < mask_priority )
+        return 1;
+    else
+        return 0;
 }
 
 void gic_inject(void)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7b636c8..86cb361 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -56,8 +56,9 @@ struct pending_irq
      *
      */
 #define GIC_IRQ_GUEST_PENDING  0
-#define GIC_IRQ_GUEST_VISIBLE  1
-#define GIC_IRQ_GUEST_ENABLED  2
+#define GIC_IRQ_GUEST_ACTIVE   1
+#define GIC_IRQ_GUEST_VISIBLE  2
+#define GIC_IRQ_GUEST_ENABLED  3
     unsigned long status;
     uint8_t lr;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 5a9dc77..5d8f7f1 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -129,6 +129,9 @@
 #define GICH_LR_CPUID_SHIFT     9
 #define GICH_VTR_NRLRGS         0x3f
 
+#define GICH_VMCR_PRIORITY_MASK   0x1f
+#define GICH_VMCR_PRIORITY_SHIFT  27
+
 /*
  * The minimum GICC_BPR is required to be in the range 0-3. We set
  * GICC_BPR to 0 but we must expect that it might be 3. This means we


+            if ( p->priority < max_priority )
+                max_priority = p->priority;
+        }
+        lrs--;
+        if ( lrs == 0 )
+            break;
+    }
+
+    spin_unlock_irqrestore(&v->arch.vgic.lock, flags);
+
+    if ( max_priority < active_priority &&
+         (max_priority >> 3) < mask_priority )
+        return 1;
+    else
+        return 0;
 }
 
 void gic_inject(void)
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 7b636c8..86cb361 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -56,8 +56,9 @@ struct pending_irq
      *
      */
 #define GIC_IRQ_GUEST_PENDING  0
-#define GIC_IRQ_GUEST_VISIBLE  1
-#define GIC_IRQ_GUEST_ENABLED  2
+#define GIC_IRQ_GUEST_ACTIVE   1
+#define GIC_IRQ_GUEST_VISIBLE  2
+#define GIC_IRQ_GUEST_ENABLED  3
     unsigned long status;
     uint8_t lr;
     struct irq_desc *desc; /* only set it the irq corresponds to a physical irq */
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 5a9dc77..5d8f7f1 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -129,6 +129,9 @@
 #define GICH_LR_CPUID_SHIFT     9
 #define GICH_VTR_NRLRGS         0x3f
 
+#define GICH_VMCR_PRIORITY_MASK   0x1f
+#define GICH_VMCR_PRIORITY_SHIFT  27
+
 /*
  * The minimum GICC_BPR is required to be in the range 0-3. We set
  * GICC_BPR to 0 but we must expect that it might be 3. This means we

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:50:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkUz-0001xX-4m; Wed, 26 Feb 2014 19:50:33 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WIkUx-0001xS-U9
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 19:50:32 +0000
Received: from [193.109.254.147:10385] by server-3.bemta-14.messagelabs.com id
	1C/D9-00432-7854E035; Wed, 26 Feb 2014 19:50:31 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393444230!7079319!1
X-Originating-IP: [209.85.212.171]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG, SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7846 invoked from network); 26 Feb 2014 19:50:30 -0000
Received: from mail-wi0-f171.google.com (HELO mail-wi0-f171.google.com)
	(209.85.212.171)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 19:50:30 -0000
Received: by mail-wi0-f171.google.com with SMTP id cc10so6358082wib.4
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Feb 2014 11:50:30 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=iDfWtQ1oDbytkS6RIQxqKEm77mKfYa7R0lcL+0CyIGs=;
	b=m2vxPMVutUyPDn9w7UtrA+VANtfCUQdLCFYwj9O833JVsdaE1ctIJX52KtWtKZ3Z0R
	9VPO0OoMwQdbCBDCV/XQMZ8kiCzmLGyatvuaGD9NZimJzPrbuwL7yJp3mke1DMNiplb3
	HEsgElglgUTFZGc3yhhCE1CcEqQbL0gUXKS+FMWVC3n+hsybhAjGcDaPsLKZHFgGbPet
	LKT5gbB5j2ikDQM5abKjSbmXBZMrmOrMiEU2lzF+Zyt33BOasWiare2PefsjrLvfuFVs
	R2Ovw/H0xY3EDMqx6MglcFlV7d981axGFgbOKPPsrfxU4529dj7QkWMRNByfRrWnptcT
	LUsg==
X-Gm-Message-State: ALoCoQkdZMwMK581zisCYfBajZHHnOBaBR8xQJkSH5jyQLl3jek61ZEvuStp7oTOKRGHdrepCsuO
X-Received: by 10.180.210.171 with SMTP id mv11mr9464884wic.44.1393444230302; 
	Wed, 26 Feb 2014 11:50:30 -0800 (PST)
Received: from localhost.localdomain ([31.221.87.87])
	by mx.google.com with ESMTPSA id di9sm8269274wid.6.2014.02.26.11.50.29
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 11:50:29 -0800 (PST)
Message-ID: <530E4584.3080602@linaro.org>
Date: Wed, 26 Feb 2014 19:50:28 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
	<1393439997-26936-1-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1393439997-26936-1-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: jtd@galois.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v3 01/12] xen/arm: no need to set HCR_VI
 when using the vgic to inject irqs
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 26/02/14 18:39, Stefano Stabellini wrote:
> It can actually cause spurious interrupt injections into the guest.
>

I would explain what the current behaviour with HCR_VI is, e.g. the
guest entering interrupt mode...

Reading only the documentation and your commit message, this patch 
seems wrong (I know that's not the case ;)).

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:51:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkVc-00020E-IT; Wed, 26 Feb 2014 19:51:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WIkVa-000200-TE
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 19:51:11 +0000
Received: from [85.158.139.211:53934] by server-3.bemta-5.messagelabs.com id
	A1/04-13671-EA54E035; Wed, 26 Feb 2014 19:51:10 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393444266!1946713!1
X-Originating-IP: [209.85.192.181]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5570 invoked from network); 26 Feb 2014 19:51:08 -0000
Received: from mail-pd0-f181.google.com (HELO mail-pd0-f181.google.com)
	(209.85.192.181)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 19:51:08 -0000
Received: by mail-pd0-f181.google.com with SMTP id p10so1365506pdj.26
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 11:51:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=+RUVR6/BWQC6latIGP3zdJIe1r5QtHJejPO+SHhF2J4=;
	b=lnvyPCVn0T1jwIxCtooajyZztUPUkVmM7HmyNt8lePMGBQZXqH17qs6i1rlN5w/hoh
	X0YCOdMMWYLVa5WMgoQ7tT/MeulOmVdnRNnLtqBfvmiPgU9nCWNBj/f4rJQb94gZVUA0
	4pLWOWrwZzMbkN3/L/B9SI2CxCwAOBex5YeQJEYQ8Ce38n15NbVCt4bcgP6QXra9OyY6
	+6O2DIKJeyAEb4P6p4S312WVz7mHDjZD5gC/kqyHhQ7hXQj0F36h406XJii15rwwi5Nx
	+rLspjXOyxszhp8EB+EzsON5YON7rlxSABsjkUM9dBjuMlQan8fz35/WZ3D+0Ir4U03K
	k+Og==
X-Gm-Message-State: ALoCoQmhtri9fkxh9JJXrcCs1gTTIiDxqbWkBZ3NmaFAao3B6DIhbdLYlpDv2ugywb76l1Q2KRsP
X-Received: by 10.69.31.43 with SMTP id kj11mr8712502pbd.67.1393444266445;
	Wed, 26 Feb 2014 11:51:06 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221]) by mx.google.com with ESMTPSA id
	yg4sm13507065pab.19.2014.02.26.11.51.03 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 11:51:05 -0800 (PST)
Date: Wed, 26 Feb 2014 11:51:11 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Christopher Covington <cov@codeaurora.org>
Message-ID: <20140226195111.GB16149@cbox>
References: <20140226183454.GA14639@cbox>
 <530E402C.9040502@codeaurora.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530E402C.9040502@codeaurora.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 02:27:40PM -0500, Christopher Covington wrote:
> Hi Christoffer,
> 
> On 02/26/2014 01:34 PM, Christoffer Dall wrote:
> > ARM VM System Specification
> > ===========================
> > 
> > Goal
> > ----
> > The goal of this spec is to allow suitably-built OS images to run on
> > all ARM virtualization solutions, such as KVM or Xen.
> 
> Would you consider including simulators/emulators as well, such as QEMU in TCG
> mode?
> 

Yes, though I think KVM and Xen are the most common use cases for this.
In fact, for KVM, most of the work to support this would be in QEMU
anyhow, and whether you choose to enable KVM or not shouldn't make any
difference.

> > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > they aim to be hypervisor agnostic.
> > 
> > Note that simply adhering to the SBSA [2] is not a valid approach,
> > for example because the SBSA mandates EL2, which will not be available
> > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > may be controversial for some ARM VM implementations to support.
> > This spec also covers the aarch32 execution mode, not covered in the
> > SBSA.
> > 
> > 
> > Image format
> > ------------
> > The image format, as presented to the VM, needs to be well-defined in
> > order for prepared disk images to be bootable across various
> > virtualization implementations.
> > 
> > The raw disk format as presented to the VM must be partitioned with a
> > GUID Partition Table (GPT).  The bootable software must be placed in the
> > EFI System Partition (ESP), using the UEFI removable media path, and
> > must be an EFI application complying with the UEFI Specification 2.4
> > Revision A [6].
> > 
> > The ESP partition's GPT entry's partition type GUID must be
> > C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> > formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
> > 
> > The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> > execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> > state.
> > 
> > This ensures that tools for both Xen and KVM can load a binary UEFI
> > firmware which can read and boot the EFI application in the disk image.
> > 
> > A typical scenario will be GRUB2 packaged as an EFI application, which
> > mounts the system boot partition and boots Linux.
> > 
> > 
> > Virtual Firmware
> > ----------------
> > The VM system must be able to boot the EFI application in the ESP.  It
> > is recommended that this is achieved by loading a UEFI binary as the
> > first software executed by the VM, which then executes the EFI
> > application.  The UEFI implementation should be compliant with UEFI
> > Specification 2.4 Revision A [6] or later.
> > 
> > This document strongly recommends that the VM implementation supports
> > persistent environment storage for virtual firmware implementation in
> > order to ensure probable use cases such as adding additional disk images
> > to a VM or running installers to perform upgrades.
> > 
> > The binary UEFI firmware implementation should not be distributed as
> > part of the VM image, but is specific to the VM implementation.
> 
> Can you elaborate on the motivation for requiring that the kernel be stuffed
> into a disk image and for requiring such a heavyweight bootloader/firmware? By
> doing so you would seem to exclude those requiring an optimized boot process.
> 

What's the alternative?  Shipping kernels externally and loading them
externally?  Sure you can do that, but then distros can't upgrade the
kernel themselves, and you have to come up with a convention for how to
ship kernels, initrds, etc.

This works well on x86 today and will reflect how most people expect
ARM server hardware to behave as well.

> > Hardware Description
> > --------------------
> > The Linux kernel's proper entry point always takes a pointer to an FDT,
> > regardless of the boot mechanism, firmware, and hardware description
> > method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> > entry point will still receive a pointer to a simple FDT, generated by
> > the Linux kernel UEFI stub, containing a pointer to the UEFI system
> > table.  The kernel can then discover ACPI from the system tables.  The
> > presence of ACPI vs. FDT is therefore always itself discoverable,
> > through the FDT.
> > 
> > Therefore, the VM implementation must provide through its UEFI
> > implementation, either:
> > 
> >   a complete FDT which describes the entire VM system and will boot
> >   mainline kernels driven by device tree alone, or
> > 
> >   no FDT.  In this case, the VM implementation must provide ACPI, and
> >   the OS must be able to locate the ACPI root pointer through the UEFI
> >   system table.
> > 
> > For more information about the arm and arm64 boot conventions, see
> > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > Linux kernel source tree.
> > 
> > For more information about UEFI and ACPI booting, see [4] and [5].
> > 
> > 
> > VM Platform
> > -----------
> > The specification does not mandate any specific memory map.  The guest
> > OS must be able to enumerate all processing elements, devices, and
> > memory through HW description data (FDT, ACPI) or a bus-specific
> > mechanism such as PCI.
> > 
> > The virtual platform must support at least one of the following ARM
> > execution states:
> >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> > 
> > It is recommended to support both (2) and (3) on aarch64 capable
> > physical systems.
> > 
> > The virtual hardware platform must provide a number of mandatory
> > peripherals:
> > 
> >   Serial console:  The platform should provide a console,
> >   based on an emulated pl011, a virtio-console, or a Xen PV console.
> > 
> >   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
> >   limits the number of virtual CPUs to 8 cores; newer GIC versions
> >   remove this limitation.
> > 
> >   The ARM virtual timer and counter should be available to the VM as
> >   per the ARM Generic Timers specification in the ARM ARM [1].
> > 
> >   A hotpluggable bus to support hotplug of at least block and network
> >   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
> >   bus.
> 
> Is VirtIO hotplug capable? Over PCI or MMIO transports or both?
> 

VirtIO devices attached to a PCIe bus are hotpluggable; the emulated
PCIe bus itself would not have anything to do with virtio, except that
virtio devices can hang off of it.  AFAIU.

> > We make the following recommendations for the guest OS kernel:
> > 
> >   The guest OS must include support for GICv2 and any available newer
> >   version of the GIC architecture to maintain compatibility with older
> >   VM implementations.
> > 
> >   It is strongly recommended to include support for all available
> >   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
> >   drivers in the guest OS kernel or initial ramdisk.
> 
> I would love to eventually see some defconfigs for this sort of thing.
> 

Agreed, I think it's beyond the scope of this spec though.

> > Other common peripherals for block devices, networking, and more can
> > (and typically will) be provided, but OS software written and compiled
> > to run on ARM VMs cannot make any assumptions about which variations
> > of these should exist or which implementation they use (e.g. VirtIO or
> > Xen PV).  See "Hardware Description" above.
> > 
> > Note that this platform specification is separate from the Linux kernel
> > concept of mach-virt, which merely specifies a machine model driven
> > purely from device tree, but does not mandate any peripherals or have any
> > mention of ACPI.
> 
> Well, the commit message for it said it mandated a GIC and architected timers.
> 
Haven't we been down that road before?  I think everyone pretty much
agrees this is the definition of mach-virt today, but if this note
causes people to start splitting hairs, I can remove the paragraph.

Thanks,
-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:51:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkVc-00020E-IT; Wed, 26 Feb 2014 19:51:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WIkVa-000200-TE
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 19:51:11 +0000
Received: from [85.158.139.211:53934] by server-3.bemta-5.messagelabs.com id
	A1/04-13671-EA54E035; Wed, 26 Feb 2014 19:51:10 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393444266!1946713!1
X-Originating-IP: [209.85.192.181]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5570 invoked from network); 26 Feb 2014 19:51:08 -0000
Received: from mail-pd0-f181.google.com (HELO mail-pd0-f181.google.com)
	(209.85.192.181)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 19:51:08 -0000
Received: by mail-pd0-f181.google.com with SMTP id p10so1365506pdj.26
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 11:51:06 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=+RUVR6/BWQC6latIGP3zdJIe1r5QtHJejPO+SHhF2J4=;
	b=lnvyPCVn0T1jwIxCtooajyZztUPUkVmM7HmyNt8lePMGBQZXqH17qs6i1rlN5w/hoh
	X0YCOdMMWYLVa5WMgoQ7tT/MeulOmVdnRNnLtqBfvmiPgU9nCWNBj/f4rJQb94gZVUA0
	4pLWOWrwZzMbkN3/L/B9SI2CxCwAOBex5YeQJEYQ8Ce38n15NbVCt4bcgP6QXra9OyY6
	+6O2DIKJeyAEb4P6p4S312WVz7mHDjZD5gC/kqyHhQ7hXQj0F36h406XJii15rwwi5Nx
	+rLspjXOyxszhp8EB+EzsON5YON7rlxSABsjkUM9dBjuMlQan8fz35/WZ3D+0Ir4U03K
	k+Og==
X-Gm-Message-State: ALoCoQmhtri9fkxh9JJXrcCs1gTTIiDxqbWkBZ3NmaFAao3B6DIhbdLYlpDv2ugywb76l1Q2KRsP
X-Received: by 10.69.31.43 with SMTP id kj11mr8712502pbd.67.1393444266445;
	Wed, 26 Feb 2014 11:51:06 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221]) by mx.google.com with ESMTPSA id
	yg4sm13507065pab.19.2014.02.26.11.51.03 for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 11:51:05 -0800 (PST)
Date: Wed, 26 Feb 2014 11:51:11 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Christopher Covington <cov@codeaurora.org>
Message-ID: <20140226195111.GB16149@cbox>
References: <20140226183454.GA14639@cbox>
 <530E402C.9040502@codeaurora.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530E402C.9040502@codeaurora.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 02:27:40PM -0500, Christopher Covington wrote:
> Hi Christoffer,
> 
> On 02/26/2014 01:34 PM, Christoffer Dall wrote:
> > ARM VM System Specification
> > ===========================
> > 
> > Goal
> > ----
> > The goal of this spec is to allow suitably-built OS images to run on
> > all ARM virtualization solutions, such as KVM or Xen.
> 
> Would you consider including simulators/emulators as well, such as QEMU in TCG
> mode?
> 

Yes, but I think KVM or Xen is the most common use cases for this, but
in fact, for KVM, most of the work to support this would be in QEMU
anyhow and whether you choose to enable KVM or not shouldn't make any
difference.

> > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > they aim to be hypervisor agnostic.
> > 
> > Note that simply adhering to the SBSA [2] is not a valid approach,
> > for example because the SBSA mandates EL2, which will not be available
> > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > may be controversial for some ARM VM implementations to support.
> > This spec also covers the aarch32 execution mode, not covered in the
> > SBSA.
> > 
> > 
> > Image format
> > ------------
> > The image format, as presented to the VM, needs to be well-defined in
> > order for prepared disk images to be bootable across various
> > virtualization implementations.
> > 
> > The raw disk format as presented to the VM must be partitioned with a
> > GUID Partition Table (GPT).  The bootable software must be placed in the
> > EFI System Partition (ESP), using the UEFI removable media path, and
> > must be an EFI application complying to the UEFI Specification 2.4
> > Revision A [6].
> > 
> > The ESP partition's GPT entry's partition type GUID must be
> > C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> > formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
> > 
> > The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> > execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> > state.
> > 
> > This ensures that tools for both Xen and KVM can load a binary UEFI
> > firmware which can read and boot the EFI application in the disk image.
> > 
> > A typical scenario will be GRUB2 packaged as an EFI application, which
> > mounts the system boot partition and boots Linux.
> > 
> > 
> > Virtual Firmware
> > ----------------
> > The VM system must be able to boot the EFI application in the ESP.  It
> > is recommended that this is achieved by loading a UEFI binary as the
> > first software executed by the VM, which then executes the EFI
> > application.  The UEFI implementation should be compliant with UEFI
> > Specification 2.4 Revision A [6] or later.
> > 
> > This document strongly recommends that the VM implementation support
> > persistent environment storage for the virtual firmware implementation,
> > in order to enable likely use cases such as adding additional disk
> > images to a VM or running installers to perform upgrades.
> > 
> > The binary UEFI firmware implementation should not be distributed as
> > part of the VM image, but is specific to the VM implementation.
> 
> Can you elaborate on the motivation for requiring that the kernel be stuffed
> into a disk image and for requiring such a heavyweight bootloader/firmware? By
> doing so you would seem to exclude those requiring an optimized boot process.
> 

What's the alternative?  Shipping kernels externally and loading them
externally?  Sure you can do that, but then distros can't upgrade the
kernel themselves, and you have to come up with a convention for how to
ship kernels, initrds, etc.

This works well on x86 today and reflects how most people expect ARM
server hardware to behave as well.

> > Hardware Description
> > --------------------
> > The Linux kernel's proper entry point always takes a pointer to an FDT,
> > regardless of the boot mechanism, firmware, and hardware description
> > method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> > entry point will still receive a pointer to a simple FDT, generated by
> > the Linux kernel UEFI stub, containing a pointer to the UEFI system
> > table.  The kernel can then discover ACPI from the system tables.  The
> > presence of ACPI vs. FDT is therefore always itself discoverable,
> > through the FDT.
> > 
> > Therefore, the VM implementation must provide through its UEFI
> > implementation, either:
> > 
> >   a complete FDT which describes the entire VM system and will boot
> >   mainline kernels driven by device tree alone, or
> > 
> >   no FDT.  In this case, the VM implementation must provide ACPI, and
> >   the OS must be able to locate the ACPI root pointer through the UEFI
> >   system table.
> > 
> > For more information about the arm and arm64 boot conventions, see
> > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > Linux kernel source tree.
> > 
> > For more information about UEFI and ACPI booting, see [4] and [5].
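[Editorial note: the EFI-stub handoff described above can be made concrete with a sketch of the minimal FDT the Linux EFI stub generates. The /chosen property names below match the mainline kernel's UEFI documentation (Documentation/arm/uefi.txt); the values are placeholders, not real addresses:

```dts
/ {
	chosen {
		/* Physical address of the UEFI system table, placed here
		 * by the kernel's EFI stub so the kernel proper can find
		 * UEFI and, through it, the ACPI root pointer. */
		linux,uefi-system-table = <0x0 0xf0000000>;

		/* The UEFI memory map handed over to the kernel. */
		linux,uefi-mmap-start = <0x0 0xf1000000>;
		linux,uefi-mmap-size = <0x4000>;
		linux,uefi-mmap-desc-size = <0x30>;
		linux,uefi-mmap-desc-ver = <0x1>;
	};
};
```

A kernel that finds these properties knows it was booted via UEFI; a kernel that instead finds a complete device tree with no UEFI properties proceeds with FDT-only discovery.]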
> > 
> > 
> > VM Platform
> > -----------
> > The specification does not mandate any specific memory map.  The guest
> > OS must be able to enumerate all processing elements, devices, and
> > memory through HW description data (FDT, ACPI) or a bus-specific
> > mechanism such as PCI.
> > 
> > The virtual platform must support at least one of the following ARM
> > execution states:
> >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> > 
> > It is recommended to support both (2) and (3) on aarch64 capable
> > physical systems.
> > 
> > The virtual hardware platform must provide a number of mandatory
> > peripherals:
> > 
> >   Serial console:  The platform should provide a console,
> >   based on an emulated pl011, a virtio-console, or a Xen PV console.
> > 
> >   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
> >   limits the number of virtual CPUs to 8 cores; newer GIC versions
> >   remove this limitation.
> > 
> >   The ARM virtual timer and counter should be available to the VM as
> >   per the ARM Generic Timers specification in the ARM ARM [1].
> > 
> >   A hotpluggable bus to support hotplug of at least block and network
> >   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
> >   bus.
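[Editorial note: the 8-vCPU limit mentioned for GICv2 above comes from the architecture's GICD_ITARGETSR registers, which give each shared interrupt an 8-bit CPU-targets field with one bit per CPU interface. A hypothetical Python helper, purely illustrative, showing why a ninth CPU cannot be targeted:

```python
# GICv2: each shared peripheral interrupt has one byte in GICD_ITARGETSR,
# one bit per CPU interface, so at most 8 CPUs can be interrupt targets.
GICV2_MAX_CPUS = 8

def itargetsr_byte(cpus):
    """Build the per-interrupt CPU-targets byte for a set of target CPUs."""
    byte = 0
    for cpu in cpus:
        if not 0 <= cpu < GICV2_MAX_CPUS:
            raise ValueError(f"GICv2 cannot target CPU {cpu}: only 8 target bits")
        byte |= 1 << cpu
    return byte
```

GICv3 replaces this byte with affinity-based routing, which is what removes the limit.]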
> 
> Is VirtIO hotplug capable? Over PCI or MMIO transports or both?
> 

VirtIO devices attached to a PCIe bus are hotpluggable.  The emulated
PCIe bus itself would not have anything to do with virtio, except that
virtio devices can hang off of it.  AFAIU.
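[Editorial note: as an illustration of the point above, with QEMU/KVM the hot-plug capability lives in the PCIe root port, and virtio is just one kind of device that can be plugged into it. The device and option names below are QEMU-specific conventions, not anything this spec mandates; file names are placeholders:

```shell
# Start a guest with a hot-plug capable PCIe root port (illustrative).
qemu-system-aarch64 -M virt -cpu cortex-a57 -m 1024 \
    -device pcie-root-port,id=rp0 \
    -monitor stdio disk.img

# Later, from the QEMU monitor, hot-add a virtio block device behind it:
#   (qemu) drive_add 0 if=none,id=d1,file=extra.img
#   (qemu) device_add virtio-blk-pci,drive=d1,bus=rp0
```

The same device_add mechanism works for e1000 or other PCIe devices; nothing about hot plug is virtio-specific.]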

> > We make the following recommendations for the guest OS kernel:
> > 
> >   The guest OS must include support for GICv2 and any available newer
> >   version of the GIC architecture to maintain compatibility with older
> >   VM implementations.
> > 
> >   It is strongly recommended to include support for all available
> >   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
> >   drivers in the guest OS kernel or initial ramdisk.
> 
> I would love to eventually see some defconfigs for this sort of thing.
> 

Agreed, I think it's beyond the scope of this spec though.

> > Other common peripherals for block devices, networking, and more can
> > (and typically will) be provided, but OS software written and compiled
> > to run on ARM VMs cannot make any assumptions about which variations
> > of these should exist or which implementation they use (e.g. VirtIO or
> > Xen PV).  See "Hardware Description" above.
> > 
> > Note that this platform specification is separate from the Linux kernel
> > concept of mach-virt, which merely specifies a machine model driven
> > purely from device tree, but does not mandate any peripherals or have any
> > mention of ACPI.
> 
> Well, the commit message for it said it mandated a GIC and architected timers.
> 
Haven't we been down that road before?  I think everyone pretty much
agrees this is the definition of mach-virt today, but if this note
causes people to start splitting hairs, I can remove the paragraph.

Thanks,
-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:51:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkWF-00024z-0R; Wed, 26 Feb 2014 19:51:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WIkWD-00024k-DK
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 19:51:49 +0000
Received: from [85.158.137.68:61602] by server-6.bemta-3.messagelabs.com id
	78/44-09180-4D54E035; Wed, 26 Feb 2014 19:51:48 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393444308!4384016!1
X-Originating-IP: [74.125.82.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26510 invoked from network); 26 Feb 2014 19:51:48 -0000
Received: from mail-wg0-f42.google.com (HELO mail-wg0-f42.google.com)
	(74.125.82.42)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 19:51:48 -0000
Received: by mail-wg0-f42.google.com with SMTP id x13so2052317wgg.25
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Feb 2014 11:51:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=RHTbIiaWNP4V5wDIoSqZf1ky1AO20A7zhnE8Rv8RtwY=;
	b=VoFtf2b4aVVY6EiOvgwNxJZpw3SZt25D5j27wyyv2UxgaMKqRlDwc+ZwYTBQhqTMqG
	EMjX0STRwz5UJGqY/pJJl/DTwIdeAQrdC1upVdFditEPR7l/LkYBWzWdS183i89HZy+N
	D7c/HHvwGE4Sh1ltE+nhUo1guVWK1yZUOFCLKZ/3AbE607VgFhPx3KeuzxThkNX77fXq
	2e2g/bxJXFft4EA2Ewe4CUG+LuABFQBpDR7CgoTG0GKusiyhWLOXmiiM5efXtsgKHbE9
	klcxTGIrJ4zisI1Bh0A4UDzGI8xr1j5xjsquQvk7LQziPSngnnyL2mv2jRZmwv+N1FSd
	lQ5w==
X-Gm-Message-State: ALoCoQmDTwRFgYVMQVIQopwW05Rd1/5uxDbK5NX4l1WTWZVbAanjyIXq1Efe4eWu5Fzys422LCDT
X-Received: by 10.194.118.228 with SMTP id kp4mr24277wjb.94.1393444307509;
	Wed, 26 Feb 2014 11:51:47 -0800 (PST)
Received: from localhost.localdomain ([31.221.87.87])
	by mx.google.com with ESMTPSA id jd2sm8247873wic.9.2014.02.26.11.51.46
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 11:51:46 -0800 (PST)
Message-ID: <530E45D1.4080009@linaro.org>
Date: Wed, 26 Feb 2014 19:51:45 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
	<1393439997-26936-3-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1393439997-26936-3-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: Jonathan Daugherty <jtd@galois.com>, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v3 03/12] xen/arm: support HW interrupts
 in gic_set_lr
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 26/02/14 18:39, Stefano Stabellini wrote:
> If the irq to be injected is a hardware irq (p->desc != NULL), set
> GICH_LR_HW.
>
> Remove the code to EOI a physical interrupt on behalf of the guest
> because it has become unnecessary.
>
> Also add a struct vcpu* parameter to gic_set_lr.
>
> This patch needs the following patch to work correctly. It has been sent
> separately to make it easier to review.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 19:56:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 19:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkaP-0002KP-N5; Wed, 26 Feb 2014 19:56:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WIkaN-0002KH-EB
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 19:56:07 +0000
Received: from [85.158.143.35:27455] by server-2.bemta-4.messagelabs.com id
	16/2A-04779-6D64E035; Wed, 26 Feb 2014 19:56:06 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393444565!8552679!1
X-Originating-IP: [212.227.126.130]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1576 invoked from network); 26 Feb 2014 19:56:05 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.130)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 19:56:05 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue003) with ESMTP (Nemesis)
	id 0MPKo0-1WNAR508KO-004Qd0; Wed, 26 Feb 2014 20:55:59 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: linux-arm-kernel@lists.infradead.org
Date: Wed, 26 Feb 2014 20:55:58 +0100
Message-ID: <5553754.0b4gMg5OS7@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <20140226183454.GA14639@cbox>
References: <20140226183454.GA14639@cbox>
MIME-Version: 1.0
X-Provags-ID: V02:K0:YcwJeSqkJ+JvJWUqxrwlhlDeOlP0VNJlZQMMCI4RGbH
	LDwiaKd3MAix3Kzs8w8VuR+tnSko+N13lSaMlxtMVLPNNougFm
	C2sqE4pHi5CaygXNf/2tfceABWwXJKVXRKhSp+kKWg+2Ij/zJL
	ZzWFfAE/lI5F6zlkn+vb9gDegf5/RbYVAqofngl37wMOHi98Tn
	hPushETX4RKV+5P00gjSkoVC+Ynzb0UOBbkxtyGqIB/Al/nvec
	p8OzRGbBVPgJKTplynfP74r7q870WBOwoTkTXQUPkX8BnymfZ0
	wwD7GQJSz5JBsMG6wE7W+1ZhdbtIMdOifyTEPwL+l7JBb5vSY8
	4xA6tUg7u2j7RwdQL5p4=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> ARM VM System Specification
> ===========================
> 
> Goal
> ----
> The goal of this spec is to allow suitably-built OS images to run on
> all ARM virtualization solutions, such as KVM or Xen.
> 
> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> they aim to be hypervisor agnostic.
> 
> Note that simply adhering to the SBSA [2] is not a valid approach,
> for example because the SBSA mandates EL2, which will not be available
> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> may be controversial for some ARM VM implementations to support.
> This spec also covers the aarch32 execution mode, not covered in the
> SBSA.

I would prefer if we can stay as close as possible to SBSA for individual
hardware components, and only stray from it when there is a strong reason.
pl011-subset doesn't sound like a significant problem to implement,
especially as SBSA makes the DMA part of that optional. Can you
elaborate on what hypervisor would have a problem with that?

> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.
> 
> Therefore, the VM implementation must provide through its UEFI
> implementation, either:
> 
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
> 
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.
> 
> For more information about the arm and arm64 boot conventions, see
> Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> Linux kernel source tree.
> 
> For more information about UEFI and ACPI booting, see [4] and [5].

What's the point of having ACPI in a virtual machine? You wouldn't
need to abstract any of the hardware in AML since you already know
what the virtual hardware is, so I can't see how this would help
anyone.

However, as ACPI will not be supported by arm32, not having the
complete FDT will prevent you from running a 32-bit guest on
a 64-bit hypervisor, which I consider an important use case.

> VM Platform
> -----------
> The specification does not mandate any specific memory map.  The guest
> OS must be able to enumerate all processing elements, devices, and
> memory through HW description data (FDT, ACPI) or a bus-specific
> mechanism such as PCI.
> 
> The virtual platform must support at least one of the following ARM
> execution states:
>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> 
> It is recommended to support both (2) and (3) on aarch64 capable
> physical systems.

Isn't this more of a CPU capabilities question? Or maybe you
should just add 'if aarch32 mode is supported by the host CPU'.

> 
> The virtual hardware platform must provide a number of mandatory
> peripherals:
> 
>   Serial console:  The platform should provide a console,
>   based on an emulated pl011, a virtio-console, or a Xen PV console.
> 
>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>   limits the number of virtual CPUs to 8 cores; newer GIC versions
>   remove this limitation.
> 
>   The ARM virtual timer and counter should be available to the VM as
>   per the ARM Generic Timers specification in the ARM ARM [1].
> 
>   A hotpluggable bus to support hotplug of at least block and network
>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>   bus.

I think you should specify exactly what you want PCIe to look like,
if present. Otherwise you can get wildly incompatible bus discovery.

> Note that this platform specification is separate from the Linux kernel
> concept of mach-virt, which merely specifies a machine model driven
> purely from device tree, but does not mandate any peripherals or have any
> mention of ACPI.

Did you notice we are removing mach-virt now? Probably no point
mentioning it here.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 20:05:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 20:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkja-0002a3-ST; Wed, 26 Feb 2014 20:05:38 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WIkjZ-0002Zx-0V
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 20:05:37 +0000
Received: from [85.158.137.68:30688] by server-6.bemta-3.messagelabs.com id
	E0/9D-09180-0194E035; Wed, 26 Feb 2014 20:05:36 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393445133!1057740!1
X-Originating-IP: [209.85.220.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26148 invoked from network); 26 Feb 2014 20:05:35 -0000
Received: from mail-pa0-f52.google.com (HELO mail-pa0-f52.google.com)
	(209.85.220.52)
	by server-2.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 20:05:35 -0000
Received: by mail-pa0-f52.google.com with SMTP id fb1so1437302pad.39
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 12:05:32 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=wSsgY4iQqxmB1sjWl+56bdtkrlq4HrKxrVCW+yKBLvA=;
	b=dvVoX2OuD78zSqMJ3Jt7ueq574rYPVEG4su/KwXd0VbsYmcRxxa8bvfqhBQ2N+KQ+S
	RLIJyTnDbU7wVfMTIknjco6jBExZ40AozwkMQQAB8Gub1yYmVVYHM5qbsupYzMqeHGVS
	vQcZWAUNM6zJfgIKW+oPT1hA35ymgIeLUmS3oGf4RL34bkguIvdi0EyQWX1i6sGQkKUc
	Z7UUwSzPzi7cOGCOElP+UEORMdapaEHNzSzEAn7QXX2QI/rQq7myTSksalGg7PN4fJSr
	D3QD8+uMqh4eH6PC0TIvBJafpNvA7FFKui1V7BwCmTfliBhbsWNXev1P1RUrTr3X6bu8
	KkTw==
X-Gm-Message-State: ALoCoQmafaAhprEHPVOamLiT8VkDROn/nHAbGEr8haeRyOfIMxBbN5mDfNaVVbLcHOngx4blmfvc
X-Received: by 10.66.41.106 with SMTP id e10mr10807308pal.109.1393445132363;
	Wed, 26 Feb 2014 12:05:32 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id ki3sm6189379pbc.6.2014.02.26.12.05.29
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 12:05:30 -0800 (PST)
Date: Wed, 26 Feb 2014 12:05:37 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Arnd Bergmann <arnd@arndb.de>
Message-ID: <20140226200537.GC16149@cbox>
References: <20140226183454.GA14639@cbox>
 <5553754.0b4gMg5OS7@wuerfel>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5553754.0b4gMg5OS7@wuerfel>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> > ARM VM System Specification
> > ===========================
> > 
> > Goal
> > ----
> > The goal of this spec is to allow suitably-built OS images to run on
> > all ARM virtualization solutions, such as KVM or Xen.
> > 
> > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > they aim to be hypervisor agnostic.
> > 
> > Note that simply adhering to the SBSA [2] is not a valid approach,
> > for example because the SBSA mandates EL2, which will not be available
> > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > may be controversial for some ARM VM implementations to support.
> > This spec also covers the aarch32 execution mode, not covered in the
> > SBSA.
> 
> I would prefer if we can stay as close as possible to SBSA for individual
> hardware components, and only stray from it when there is a strong reason.
> pl011-subset doesn't sound like a significant problem to implement,
> especially as SBSA makes the DMA part of that optional. Can you
> elaborate on what hypervisor would have a problem with that?
> 

The Xen guys are hard-set on not supporting a pl011.  If we can convince
them or force it upon them, I'm ok with that.

I agree we should stay close to the SBSA, but I think there are
considerations for VMs beyond the SBSA that warrant this spec.

> > Hardware Description
> > --------------------
> > The Linux kernel's proper entry point always takes a pointer to an FDT,
> > regardless of the boot mechanism, firmware, and hardware description
> > method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> > entry point will still receive a pointer to a simple FDT, generated by
> > the Linux kernel UEFI stub, containing a pointer to the UEFI system
> > table.  The kernel can then discover ACPI from the system tables.  The
> > presence of ACPI vs. FDT is therefore always itself discoverable,
> > through the FDT.
> > 
> > Therefore, the VM implementation must provide through its UEFI
> > implementation, either:
> > 
> >   a complete FDT which describes the entire VM system and will boot
> >   mainline kernels driven by device tree alone, or
> > 
> >   no FDT.  In this case, the VM implementation must provide ACPI, and
> >   the OS must be able to locate the ACPI root pointer through the UEFI
> >   system table.
> > 
> > For more information about the arm and arm64 boot conventions, see
> > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > Linux kernel source tree.
> > 
> > For more information about UEFI and ACPI booting, see [4] and [5].
> 
> What's the point of having ACPI in a virtual machine? You wouldn't
> need to abstract any of the hardware in AML since you already know
> what the virtual hardware is, so I can't see how this would help
> anyone.

The most common response I've been getting so far is that people
generally want their VMs to look close to the real thing, but I'm not
sure how valid an argument that is.

Some people feel strongly about this and seem to think that ARMv8
kernels will only work with ACPI in the future...

Another case is that it's a good development platform.  I know nothing
of developing and testing ACPI, so I won't judge one way or the other.
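
As background for the FDT-vs-ACPI discovery described in the quoted spec text: an OS or test harness can recognize a flattened device tree by its fixed 40-byte big-endian header, per the devicetree specification. A minimal sketch in Python follows; the blob parsed here is a hand-built demo header, not a real VM's FDT:

```python
import struct

FDT_MAGIC = 0xD00DFEED  # per the devicetree spec; stored big-endian

def parse_fdt_header(blob):
    """Parse the fixed 40-byte FDT header; return its fields, or None
    if the blob does not start with a valid FDT magic."""
    if len(blob) < 40:
        return None
    fields = struct.unpack(">10I", blob[:40])  # ten big-endian u32s
    names = ("magic", "totalsize", "off_dt_struct", "off_dt_strings",
             "off_mem_rsvmap", "version", "last_comp_version",
             "boot_cpuid_phys", "size_dt_strings", "size_dt_struct")
    header = dict(zip(names, fields))
    return header if header["magic"] == FDT_MAGIC else None

# Hand-built header for illustration: DTB version 17, 0x48 bytes total.
demo = struct.pack(">10I", FDT_MAGIC, 0x48, 0x38, 0x40, 0x28,
                   17, 16, 0, 0x8, 0x8)
print(parse_fdt_header(demo)["version"])  # 17
print(parse_fdt_header(b"\x00" * 40))     # None (not an FDT)
```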

> 
> However, as ACPI will not be supported by arm32, not having the
> complete FDT will prevent you from running a 32-bit guest on
> a 64-bit hypervisor, which I consider an important use case.
> 

Agreed, I didn't appreciate that fact.  Hmmm, we need to consider that
case.

> > VM Platform
> > -----------
> > The specification does not mandate any specific memory map.  The guest
> > OS must be able to enumerate all processing elements, devices, and
> > memory through HW description data (FDT, ACPI) or a bus-specific
> > mechanism such as PCI.
> > 
> > The virtual platform must support at least one of the following ARM
> > execution states:
> >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> > 
> > It is recommended to support both (2) and (3) on aarch64 capable
> > physical systems.
> 
> Isn't this more of a CPU capabilities question? Or maybe you
> > should just add 'if aarch32 mode is supported by
> the host CPU'.
> 

The recommendation is to tell people to actually have an -aarch32 option
(or whatever it would be called) that works in their VM implementation.
This can certainly be reworded.

> > 
> > The virtual hardware platform must provide a number of mandatory
> > peripherals:
> > 
> >   Serial console:  The platform should provide a console,
> >   based on an emulated pl011, a virtio-console, or a Xen PV console.
> > 
> >   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
> >   limits the number of virtual CPUs to 8; newer GIC versions remove
> >   this limitation.
> > 
> >   The ARM virtual timer and counter should be available to the VM as
> >   per the ARM Generic Timers specification in the ARM ARM [1].
> > 
> >   A hotpluggable bus to support hotplug of at least block and network
> >   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
> >   bus.
> 
> I think you should specify exactly what you want PCIe to look like,
> if present. Otherwise you can get wildly incompatible bus discovery.
> 

As soon as there is more clarity on what it will actually look like,
I'll be happy to add this.  I'm afraid my PCIe understanding is too
piecemeal to fully grasp this, so concrete suggestions for the text
would be much appreciated.
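
On the GICv2 8-vCPU limit quoted above: the cap comes from the 3-bit CPUNumber field in the distributor's GICD_TYPER register, which encodes the number of implemented CPU interfaces minus one. A small sketch decoding that register; the field layout follows the GICv2 architecture specification, but the sample register value is made up:

```python
def decode_gicd_typer(typer):
    """Decode a GICv2 GICD_TYPER value.  CPUNumber is a 3-bit field
    (bits [7:5]), which is why GICv2 tops out at 8 CPU interfaces."""
    it_lines = typer & 0x1F                # ITLinesNumber, bits [4:0]
    n_cpus = ((typer >> 5) & 0x7) + 1      # CPUNumber + 1, bits [7:5]
    # 32 * (ITLinesNumber + 1) interrupt IDs, capped at 1020 usable,
    # of which the first 32 are SGIs/PPIs rather than shared SPIs.
    n_spis = min(32 * (it_lines + 1), 1020) - 32
    return {"cpu_interfaces": n_cpus, "spis": n_spis}

# Hypothetical value: CPUNumber = 7 (8 interfaces), ITLinesNumber = 4.
print(decode_gicd_typer((0x7 << 5) | 0x4))  # {'cpu_interfaces': 8, 'spis': 128}
```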

> > Note that this platform specification is separate from the Linux kernel
> > concept of mach-virt, which merely specifies a machine model driven
> > purely from device tree, but does not mandate any peripherals or have any
> > mention of ACPI.
> 
> Did you notice we are removing mach-virt now? Probably no point
> mentioning it here.
> 
Yes, I'm aware.  I've just heard people say "why do we need this, isn't
mach-virt all we need", and therefore I added the note.

I can definitely get rid of this paragraph in the future if it causes
more harm than good.

Thanks!
-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 20:17:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 20:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkug-0002mj-HA; Wed, 26 Feb 2014 20:17:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WIkuf-0002me-AP
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 20:17:05 +0000
Received: from [85.158.137.68:34277] by server-17.bemta-3.messagelabs.com id
	D8/CF-22569-0CB4E035; Wed, 26 Feb 2014 20:17:04 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393445823!4395506!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3076 invoked from network); 26 Feb 2014 20:17:04 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 20:17:04 -0000
Received: by mail-we0-f173.google.com with SMTP id x48so2075138wes.18
	for <xen-devel@lists.xensource.com>;
	Wed, 26 Feb 2014 12:17:03 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=NsSHsG1VW47KgKcz2MYCkQlaaiWV227j9mXCftNeYSg=;
	b=lW2sPD9OzuyD8vMPSkvDpCmjL/V0+C3R0RWypb9G6Y3AIZNO+CSHoE8/G+8qPGw/pC
	sJTLHPM6yhbty5tCa+fmYtpULAHZi+yCv8pTkszCBTdaB6w5+5dtBBm0FBrbiWSrznIX
	fywEV3omWLXu+fNQZQbWMbXkcGmt6KP2B/yMQ0Qp+mR4WfUSfEr9yEOEHDk9jUpdwdL0
	yeRKTIe+utPN55kbr7sukJ2YrHgR/Fr8TLMiM5PMhRva4SUBvMlQhpJBv9yJkQEkFCup
	dJ8dYB8lope9I3US3FEWzoof1poDZvs4H9YYd1vtxKIVuzs0RxwEob3sAylWwN1/SKV/
	m3bA==
X-Gm-Message-State: ALoCoQlWx8dT5eCuclQFn75+cq6AthX2MKn7SDIPXhc01FHhMa6uSSGF+ti/1fx7sY9zrNyZ0yp5
X-Received: by 10.180.165.238 with SMTP id zb14mr6259371wib.51.1393445823682; 
	Wed, 26 Feb 2014 12:17:03 -0800 (PST)
Received: from localhost.localdomain ([31.221.87.87])
	by mx.google.com with ESMTPSA id dk9sm5211801wjb.4.2014.02.26.12.17.02
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 12:17:02 -0800 (PST)
Message-ID: <530E4BBD.4010703@linaro.org>
Date: Wed, 26 Feb 2014 20:17:01 +0000
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, 
	xen-devel@lists.xensource.com
References: <alpine.DEB.2.02.1402261517510.31489@kaball.uk.xensource.com>
	<1393439997-26936-10-git-send-email-stefano.stabellini@eu.citrix.com>
In-Reply-To: <1393439997-26936-10-git-send-email-stefano.stabellini@eu.citrix.com>
Cc: jtd@galois.com, Ian.Campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH-4.5 v3 10/12] xen/arm: don't protect GICH
 and lr_queue accesses with gic.lock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Stefano,

On 26/02/14 18:39, Stefano Stabellini wrote:
> GICH is banked, protect accesses by disabling interrupts.
> Protect lr_queue accesses with the vgic.lock only.

When the interrupt is an SPI, the lr_queue is shared between every VCPU. 
Using only vgic.lock seems wrong to me.

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 20:19:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 20:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkwm-0002ue-3n; Wed, 26 Feb 2014 20:19:16 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1WIkwk-0002uW-07
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 20:19:14 +0000
Received: from [85.158.143.35:22423] by server-3.bemta-4.messagelabs.com id
	1A/CB-11539-14C4E035; Wed, 26 Feb 2014 20:19:13 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393445951!8572783!1
X-Originating-IP: [209.85.216.42]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30869 invoked from network); 26 Feb 2014 20:19:12 -0000
Received: from mail-qa0-f42.google.com (HELO mail-qa0-f42.google.com)
	(209.85.216.42)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 20:19:12 -0000
Received: by mail-qa0-f42.google.com with SMTP id k4so3070609qaq.1
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 12:19:11 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=k010/yrHd1hzraHWhmwFE/kpdQqY60IOYL9weDZADRk=;
	b=VYmPXkKTZ90Nq3u6QRfNVNvXPKmHbsKo1HjWn8g9qzn5QIU2+auO9ycXmfgEnbR6j7
	pjF71dMaBwyWV7EnA0QEHLoBqZC9Ub98BIv9IgjksGUsjHqkHQgSavkkY0/MwEo9MFuj
	zGGd6HV7KlfoFJQTU1U/bRJt/I6s70iB9b4CyeF28TA9C4aelWrpL3HYYZ93cIMI15O2
	hFh5xaUYE3BXIWL/UzcSK/8adYnhUv9+uVNMSbK6YDzzf1iwBc/l+5Ueth+doXlDhTUh
	IO+2KecrJKNV24oEzXaOl7OXoC3qZ8O1WYYMsiOb971khyLWlsePY3QR7XlRHi2kOEJr
	cACg==
X-Received: by 10.140.93.111 with SMTP id c102mr1917601qge.53.1393445951359;
	Wed, 26 Feb 2014 12:19:11 -0800 (PST)
Received: from yakj.usersys.redhat.com
	(net-37-117-154-249.cust.vodafonedsl.it. [37.117.154.249])
	by mx.google.com with ESMTPSA id v92sm1400552qge.6.2014.02.26.12.19.06
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 12:19:10 -0800 (PST)
Message-ID: <530E4C39.9010804@redhat.com>
Date: Wed, 26 Feb 2014 21:19:05 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Arnd Bergmann <arnd@arndb.de>, linux-arm-kernel@lists.infradead.org
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
In-Reply-To: <5553754.0b4gMg5OS7@wuerfel>
X-Enigmail-Version: 1.6
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 26/02/2014 20:55, Arnd Bergmann ha scritto:
>> > For more information about UEFI and ACPI booting, see [4] and [5].
> What's the point of having ACPI in a virtual machine? You wouldn't
> need to abstract any of the hardware in AML since you already know
> what the virtual hardware is, so I can't see how this would help
> anyone.

In x86 land it has certainly been helpful to abstract hotplug 
capabilities.  For ARM it could be the same.  Not so much for PCI (ARM 
can probably use native PCIe hotplug and standard hotplug controllers; 
on x86 we started with PCI and also have to deal with Windows's lack of 
support for SHPC), but it could help for CPU and memory hotplug.

> Did you notice we are removing mach-virt now? Probably no point
> mentioning it here.

Peter, do we still want mach-virt support in QEMU then?

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 20:21:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 20:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIkyi-00032D-NQ; Wed, 26 Feb 2014 20:21:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1WIkyh-000324-5M
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 20:21:15 +0000
Received: from [193.109.254.147:53004] by server-6.bemta-14.messagelabs.com id
	82/85-03396-ABC4E035; Wed, 26 Feb 2014 20:21:14 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393446073!7083446!1
X-Originating-IP: [209.85.215.44]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11224 invoked from network); 26 Feb 2014 20:21:13 -0000
Received: from mail-la0-f44.google.com (HELO mail-la0-f44.google.com)
	(209.85.215.44)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 20:21:13 -0000
Received: by mail-la0-f44.google.com with SMTP id hr13so1017108lab.31
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 12:21:13 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=8moQOnrn3Z79mgFWP3gRm9mSumBjK2CEzTZFEMoeTWw=;
	b=JmxYUE6QX+FbcTRPfee140Hndc82ibECc7tsjfyTRJ/joy02BcYkNa+YsAHERgSjQZ
	6tj7cv3PkE0cNQ3NdPc+fd2s0/TujSlhnpjfYNoGa2d+lpuV610GGxoPQMH898vusGPJ
	hOSnWCAVGTrDPIa0d42kg9t7jzrfFtS3anbkMsXNMioKiwmZLbuZQIktKhjuIDY6IAEY
	yUUNMW89ACz29HYbDksqzZQn1t0N2mrEgXsgZwle2aQ/tA+BNZX3hyrn8vNLcHxMfmWB
	0AH46gyoF9t3S+l5cD5Y/yaQO4RoXLpk3l+O05IEsAEP37ne6JzNPFX/hRDnS7PvcQXP
	1rzQ==
X-Gm-Message-State: ALoCoQk4+3moNbqj+J7X7lcPQoPTcM0UAu6dQzVLahhYboolRHjRwB+/Ugh1bJvZvfmofuROpee5
X-Received: by 10.152.6.199 with SMTP id d7mr3747852laa.22.1393446072997; Wed,
	26 Feb 2014 12:21:12 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.118.34 with HTTP; Wed, 26 Feb 2014 12:20:52 -0800 (PST)
In-Reply-To: <530E4C39.9010804@redhat.com>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<530E4C39.9010804@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 26 Feb 2014 20:20:52 +0000
Message-ID: <CAFEAcA_ojkfaud3QGc7MA=2DRvcynsPz9F9yZ7MxmqENzpQz9g@mail.gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Arnd Bergmann <arnd@arndb.de>, "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	arm-mail-list <linux-arm-kernel@lists.infradead.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26 February 2014 20:19, Paolo Bonzini <pbonzini@redhat.com> wrote:
> Il 26/02/2014 20:55, Arnd Bergmann ha scritto:
>> Did you notice we are removing mach-virt now? Probably no point
>> mentioning it here.
>
>
> Peter, do we still want mach-virt support in QEMU then?

Yes, it will be about the only thing that supports PCI-e :-)
When the kernel guys say "removing mach-virt" they mean
removing a specific lump of kernel code, not removing support
for having a kernel that can handle a machine described
only by the device tree.
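
(As an aside for readers of the archive: a guest in this model is described entirely by the FDT the hypervisor hands to the kernel. A minimal sketch of such a device tree follows; the node names, addresses, and sizes are illustrative choices, not taken from any real QEMU or kvmtool output.)

```dts
/dts-v1/;

/ {
    /* The "dummy-virt" compatible marks a machine with no board code;
       everything the kernel needs is enumerated below or via discoverable
       buses. */
    compatible = "linux,dummy-virt";
    #address-cells = <2>;
    #size-cells = <2>;

    cpus {
        #address-cells = <1>;
        #size-cells = <0>;
        cpu@0 {
            device_type = "cpu";
            compatible = "arm,cortex-a15";
            reg = <0>;
        };
    };

    memory@40000000 {
        device_type = "memory";
        /* 256 MiB at an illustrative base address */
        reg = <0x0 0x40000000 0x0 0x10000000>;
    };

    intc: interrupt-controller@8000000 {
        compatible = "arm,cortex-a15-gic";
        interrupt-controller;
        #interrupt-cells = <3>;
        /* GIC distributor and CPU interface regions (example layout) */
        reg = <0x0 0x08000000 0x0 0x10000>,
              <0x0 0x08010000 0x0 0x10000>;
    };
};
```

The kernel binds drivers purely by `compatible` strings here, which is why removing the mach-virt board code does not remove support for this kind of machine.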

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 20:22:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 20:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIl0H-00038v-7W; Wed, 26 Feb 2014 20:22:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WIl0F-00038j-8K
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 20:22:51 +0000
Received: from [85.158.143.35:36692] by server-2.bemta-4.messagelabs.com id
	4E/2C-04779-A1D4E035; Wed, 26 Feb 2014 20:22:50 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393446169!8570395!1
X-Originating-IP: [212.227.126.130]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6763 invoked from network); 26 Feb 2014 20:22:49 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.130)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 20:22:49 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue005) with ESMTP (Nemesis)
	id 0Lr6Jj-1WwEgb3jWt-00efWL; Wed, 26 Feb 2014 21:22:35 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Christoffer Dall <christoffer.dall@linaro.org>
Date: Wed, 26 Feb 2014 21:22:33 +0100
Message-ID: <6873065.LH15I057sX@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <20140226200537.GC16149@cbox>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226200537.GC16149@cbox>
MIME-Version: 1.0
X-Provags-ID: V02:K0:9YIa8zOJLkzi2NdVc4R5vPpaKwQWaCKr0OrktY15npS
	+PIcxHiWtPH9krbcSTeexzX8taKMgflie3ri85kUJRkPmW2uq+
	b5BS/q0R8IZOzvOrAxgQGR1ddgm5KaNBWAb9FNCb+SuCKSJNDD
	QFME0wsHTXQGmYmUnFFlIhuB9/ud45UZFzll0rqvVch9sxJcl5
	5OUDmEr3iYTp+uGGBjWR3poB8wMd9EyfUNxqWrKx1NBDs2Bv5C
	DckBIs9+F5NANeK48se4FVHWZQUkwghqftU8ZbwaToDx9L5SVT
	ygLEz4bNwR1ex2x/looQTjn+vye7B2fOVUjBqJagYQl89cxXmm
	b4i83cU1uxqNIu77Yxbo=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wednesday 26 February 2014 12:05:37 Christoffer Dall wrote:
> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> 
> The most common response I've been getting so far is that people
> generally want their VMs to look close to the real thing, but I'm not
> sure how valid an argument that is.
> 
> Some people feel strongly about this and seem to think that ARMv8
> kernels will only work with ACPI in the future...

That is certainly a misconception that has caused a lot of trouble.
We will keep supporting FDT boot on ARMv8 indefinitely,
and I expect that most systems will not use ACPI at all.

The case for ACPI is really SBSA compliant servers, where ACPI serves
to abstract the hardware differences to let you boot an OS that does
not know about the system details for things like power management.

For embedded systems that are not SBSA compliant, using ACPI doesn't
gain you anything and causes a lot of headache, so we won't do that.

> Another case is that it's a good development platform.  I know nothing
> of developing and testing ACPI, so I won't judge one way or the other.

The interesting aspects of developing and testing ACPI are all related
to the hardware specific parts. Testing ACPI on a trivial virtual
machine doesn't gain you much once the basic support is there.

> > > VM Platform
> > > -----------
> > > The specification does not mandate any specific memory map.  The guest
> > > OS must be able to enumerate all processing elements, devices, and
> > > memory through HW description data (FDT, ACPI) or a bus-specific
> > > mechanism such as PCI.
> > > 
> > > The virtual platform must support at least one of the following ARM
> > > execution states:
> > >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> > >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> > >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> > > 
> > > It is recommended to support both (2) and (3) on aarch64 capable
> > > physical systems.
> > 
> > Isn't this more of a CPU capabilities question? Or maybe you
> > should just add 'if aarch32 mode is supported by
> > the host CPU'.
> 
> The recommendation is to tell people to actually have a -aarch32 (or
> whatever it would be called) that works in their VM implementation.
> This can certainly be reworded.

Yes, the intention is good, it just won't work on a few systems
when the CPU designers took the shortcut to implement only the
64-bit instructions. Apparently those systems will exist, but I
expect them to be the exception, and I don't know if they will
support virtualization.

> > I think you should specify exactly what you want PCIe to look like,
> > if present. Otherwise you can get wildly incompatible bus discovery.
> > 
> 
> As soon as there is more clarity on what it will actually look like,
> I'll be happy to add this.  I'm afraid my PCIe understanding is too
> piecemeal to fully grasp this, so concrete suggestions for the text
> would be much appreciated.

Will Deacon is currently prototyping a PCI model using kvmtool, trying
to make it as simple as possible and compliant with SBSA. How about
we wait for the next version of that to see if we're happy with it,
and then figure out whether all the hypervisors we care about can use
the same interface?

> > > Note that this platform specification is separate from the Linux kernel
> > > concept of mach-virt, which merely specifies a machine model driven
> > > purely from device tree, but does not mandate any peripherals or have any
> > > mention of ACPI.
> > 
> > Did you notice we are removing mach-virt now? Probably no point
> > mentioning it here.
> > 
> Yes, I'm aware.  I've just heard people say "why do we need this, isn't
> mach-virt all we need", and therefore I added the note.
> 
> I can definitely get rid of this paragraph in the future if it causes
> more harm than good.

Maybe replace it with something like this:

| On both arm32 and arm64, no platform specific kernel code must be required,
| and all device detection must happen through the device description
| passed from the hypervisor or discoverable buses.

A harder question is what peripherals we should list as mandatory or
optional beyond what you have already. We could try coming up with
an exhaustive list of devices that are supported by mainline Linux-3.10
(the current longterm release) and implemented by any of the hypervisors,
but we probably want to leave open the possibility to extend it later
as we implement new virtual devices.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 20:22:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 20:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIl0H-00038v-7W; Wed, 26 Feb 2014 20:22:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WIl0F-00038j-8K
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 20:22:51 +0000
Received: from [85.158.143.35:36692] by server-2.bemta-4.messagelabs.com id
	4E/2C-04779-A1D4E035; Wed, 26 Feb 2014 20:22:50 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393446169!8570395!1
X-Originating-IP: [212.227.126.130]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6763 invoked from network); 26 Feb 2014 20:22:49 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.130)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 26 Feb 2014 20:22:49 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue005) with ESMTP (Nemesis)
	id 0Lr6Jj-1WwEgb3jWt-00efWL; Wed, 26 Feb 2014 21:22:35 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Christoffer Dall <christoffer.dall@linaro.org>
Date: Wed, 26 Feb 2014 21:22:33 +0100
Message-ID: <6873065.LH15I057sX@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <20140226200537.GC16149@cbox>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226200537.GC16149@cbox>
MIME-Version: 1.0
X-Provags-ID: V02:K0:9YIa8zOJLkzi2NdVc4R5vPpaKwQWaCKr0OrktY15npS
	+PIcxHiWtPH9krbcSTeexzX8taKMgflie3ri85kUJRkPmW2uq+
	b5BS/q0R8IZOzvOrAxgQGR1ddgm5KaNBWAb9FNCb+SuCKSJNDD
	QFME0wsHTXQGmYmUnFFlIhuB9/ud45UZFzll0rqvVch9sxJcl5
	5OUDmEr3iYTp+uGGBjWR3poB8wMd9EyfUNxqWrKx1NBDs2Bv5C
	DckBIs9+F5NANeK48se4FVHWZQUkwghqftU8ZbwaToDx9L5SVT
	ygLEz4bNwR1ex2x/looQTjn+vye7B2fOVUjBqJagYQl89cxXmm
	b4i83cU1uxqNIu77Yxbo=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wednesday 26 February 2014 12:05:37 Christoffer Dall wrote:
> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> 
> The most common response I've been getting so far is that people
> generally want their VMs to look close to the real thing, but not sure
> how valid an argument that is.
> 
> Some people feel strongly about this and seem to think that ARMv8
> kernels will only work with ACPI in the future...

That is certainly a misconception that has caused a lot of trouble.
We will certainly keep supporting FDT boot in ARMv8 indefinitely,
and I expect that most systems will not use ACPI at all.

The case for ACPI is really SBSA compliant servers, where ACPI serves
to abstract the hardware differences to let you boot an OS that does
not know about the system details for things like power management.

For embedded systems that are not SBSA compliant, using ACPI doesn't
gain you anything and causes a lot of headache, so we won't do that.

> Another case is that it's a good development platform.  I know nothing
> of developing and testing ACPI, so I won't judge one way or the other.

The interesting aspects of developing and testing ACPI are all related
to the hardware specific parts. Testing ACPI on a trivial virtual
machine doesn't gain you much once the basic support is there.

> > > VM Platform
> > > -----------
> > > The specification does not mandate any specific memory map.  The guest
> > > OS must be able to enumerate all processing elements, devices, and
> > > memory through HW description data (FDT, ACPI) or a bus-specific
> > > mechanism such as PCI.
> > > 
> > > The virtual platform must support at least one of the following ARM
> > > execution states:
> > >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> > >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> > >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> > > 
> > > It is recommended to support both (2) and (3) on aarch64 capable
> > > physical systems.
> > 
> > Isn't this more of a CPU capabilities question? Or maybe you
> > should just add 'if aarch32 mode is supported by
> > the host CPU'.
> 
> The recommendation is to tell people to actually have a -aarch32 (or
> whatever it would be called) that works in their VM implementation.
> This can certainly be reworded.

Yes, the intention is good, it just won't work on a few systems
where the CPU designers took the shortcut of implementing only the
64-bit instructions. Apparently those systems will exist, but I
expect them to be the exception, and I don't know if they will
support virtualization.

> > I think you should specify exactly what you want PCIe to look like,
> > if present. Otherwise you can get wildly incompatible bus discovery.
> > 
> 
> As soon as there is more clarity on what it will actually look like,
> I'll be happy to add this.  I'm afraid my PCIe understanding is too
> piecemeal to fully grasp this, so concrete suggestions for the text
> would be much appreciated.

Will Deacon is currently prototyping a PCI model using kvmtool, trying
to make it as simple as possible and compliant with the SBSA. How about
we wait for the next version of that to see if we're happy with it,
and then figure out if all hypervisors we care about can use the
same interface.

> > > Note that this platform specification is separate from the Linux kernel
> > > concept of mach-virt, which merely specifies a machine model driven
> > > purely from device tree, but does not mandate any peripherals or have any
> > > mention of ACPI.
> > 
> > Did you notice we are removing mach-virt now? Probably no point
> > mentioning it here.
> > 
> Yes, I'm aware.  I've just heard people say "why do we need this, isn't
> mach-virt all we need", and therefore I added the note.
> 
> I can definitely get rid of this paragraph in the future if it causes
> more harm than good.

Maybe replace it with something like this:

| On both arm32 and arm64, platform-specific kernel code must not be required,
| and all device detection must happen through the device description
| passed from the hypervisor or discoverable buses.

A harder question is what peripherals we should list as mandatory or
optional beyond what you have already. We could try coming up with
an exhaustive list of devices that are supported by mainline Linux-3.10
(the current longterm release) and implemented by any of the hypervisors,
but we probably want to leave open the possibility to extend it later
as we implement new virtual devices.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 21:08:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 21:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIliI-0003Zh-DP; Wed, 26 Feb 2014 21:08:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WIliG-0003Zc-HU
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 21:08:20 +0000
Received: from [85.158.143.35:51814] by server-1.bemta-4.messagelabs.com id
	BC/4F-31661-3C75E035; Wed, 26 Feb 2014 21:08:19 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393448897!8539494!1
X-Originating-IP: [209.85.220.44]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4971 invoked from network); 26 Feb 2014 21:08:18 -0000
Received: from mail-pa0-f44.google.com (HELO mail-pa0-f44.google.com)
	(209.85.220.44)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 21:08:18 -0000
Received: by mail-pa0-f44.google.com with SMTP id bj1so1512560pad.3
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 13:08:16 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=k4EoDYwjZQR92YL8HWWnN+YZUQcjIQT7kxOFkgN6Csk=;
	b=gWcZ0zUPChb6o1lBngJ28TkS3Tdtgbjhfkb5cSrMIMMuSjRfzYOXJQVMcxVmiqCEKc
	PP9uYNF761f0eBfbZoD176KTP9AEVp2LR5J6g9t43O8nVJSfypiWn2p8L00OhFw1gonc
	RODXPG/I32qDBk7P2zDp18WLrN40Gp4On/tptgOWVy/4XFtImdSErQzVB2M2VN7KZLlC
	L9RCESxU2spSTK0uCpBYFleI4CkvARF7+dq2RBB09AwT9SiMyGBT3oSOzalYNnmmjz7V
	TmVJ3gJu0kKOyp//5vCc5F5MZvsy7LU8Zg0nDXck0reELhsOxctYlMMbCy2h62trsbW3
	+1ng==
X-Gm-Message-State: ALoCoQlltO+OENXJn4ZB1+V+T4mN6dFPIwVC9pz3+wjzey2I8dZ5tb+YqFKHV9kuKLE8M0HKeNjs
X-Received: by 10.68.159.228 with SMTP id xf4mr9364903pbb.74.1393448896589;
	Wed, 26 Feb 2014 13:08:16 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id e3sm6462781pbc.17.2014.02.26.13.08.13
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 13:08:15 -0800 (PST)
Date: Wed, 26 Feb 2014 13:08:21 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Michael Hudson-Doyle <michael.hudson@linaro.org>
Message-ID: <20140226210821.GD16149@cbox>
References: <20140226183454.GA14639@cbox>
 <87k3chh9fi.fsf@canonical.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <87k3chh9fi.fsf@canonical.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 10:05:05AM +1300, Michael Hudson-Doyle wrote:
> Christoffer Dall <christoffer.dall@linaro.org> writes:
> 
> > Hardware Description
> > --------------------
> > The Linux kernel's proper entry point always takes a pointer to an FDT,
> > regardless of the boot mechanism, firmware, and hardware description
> > method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> > entry point will still receive a pointer to a simple FDT, generated by
> > the Linux kernel UEFI stub, containing a pointer to the UEFI system
> > table.  The kernel can then discover ACPI from the system tables.  The
> > presence of ACPI vs. FDT is therefore always itself discoverable,
> > through the FDT.
> >
> > Therefore, the VM implementation must provide through its UEFI
> > implementation, either:
> >
> >   a complete FDT which describes the entire VM system and will boot
> >   mainline kernels driven by device tree alone, or
> >
> >   no FDT.  In this case, the VM implementation must provide ACPI, and
> >   the OS must be able to locate the ACPI root pointer through the UEFI
> >   system table.
> 
> Maybe I'm missing something, but should this last bit say "a trivial
> FDT" instead of "no FDT"?  If not, I don't understand the first
> paragraph :-)
> 
That trivial FDT would be generated by the EFI stub in the kernel - not
provided by the VM implementation.

-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 21:47:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 21:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImJd-0003ri-D0; Wed, 26 Feb 2014 21:46:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <leif.lindholm@linaro.org>) id 1WImJb-0003rd-Sl
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 21:46:56 +0000
Received: from [85.158.143.35:19622] by server-2.bemta-4.messagelabs.com id
	30/2D-04779-FC06E035; Wed, 26 Feb 2014 21:46:55 +0000
X-Env-Sender: leif.lindholm@linaro.org
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393451214!8564770!1
X-Originating-IP: [74.125.82.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21739 invoked from network); 26 Feb 2014 21:46:54 -0000
Received: from mail-wg0-f51.google.com (HELO mail-wg0-f51.google.com)
	(74.125.82.51)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 21:46:54 -0000
Received: by mail-wg0-f51.google.com with SMTP id a1so2155343wgh.34
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 13:46:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=/rFflfHHVAXIkQUvi2PoBuqn43xVCgTxzhbNj+eTbH4=;
	b=m7yTtawqRijSBzsNuRjkf07fSNv6EtRiZp5VDi6FDRFWIcRsVluCqj3YF15nvKpSNL
	UdqlKAAZNlqdHREgEcnIsg3E6RMQEvsqT5H/P8iJn+gvcMJHyt9RIKkAuFW9AvvyfhMd
	0YreM4TrwCMEEV2ZGBAW+TIG2N8rTs2O4baCyEqAb6yK8ZH7bMmJ0zR0dCmdXDXLM8He
	294nQNtAk6ZFgnSz57WO2f9cczNMOvUBTIRqSvejWQxMs99Qa8bS7+8cwKF4VxKRhdHc
	CnzkBfxpcwVCjWS4P3CZr+HrQsmbTtE8bFid7WXh+yGZtcp/C5QdQ81kHTEin+ftmw1B
	Aiow==
X-Gm-Message-State: ALoCoQng7K3an34EVXWTg1fLMpeQE2a0pJ9mVz84bnKHNBy+f+Zp9EguqNxF/U1cM53VroANx7/Z
X-Received: by 10.180.87.162 with SMTP id az2mr9711646wib.23.1393451213836;
	Wed, 26 Feb 2014 13:46:53 -0800 (PST)
Received: from bivouac.eciton.net (bivouac.eciton.net. [46.235.226.95])
	by mx.google.com with ESMTPSA id fb6sm13834981wib.2.2014.02.26.13.46.51
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 13:46:52 -0800 (PST)
Date: Wed, 26 Feb 2014 21:48:43 +0000
From: Leif Lindholm <leif.lindholm@linaro.org>
To: Arnd Bergmann <arnd@arndb.de>
Message-ID: <20140226214843.GD12169@bivouac.eciton.net>
References: <20140226183454.GA14639@cbox>
 <5553754.0b4gMg5OS7@wuerfel>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5553754.0b4gMg5OS7@wuerfel>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: cross-distro@lists.linaro.org, Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	linux-arm-kernel@lists.infradead.org,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> > ARM VM System Specification
> > ===========================
> > 
> > Goal
> > ----
> > The goal of this spec is to allow suitably-built OS images to run on
> > all ARM virtualization solutions, such as KVM or Xen.
> > 
> > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > they aim to be hypervisor agnostic.
> > 
> > Note that simply adhering to the SBSA [2] is not a valid approach,
> > for example because the SBSA mandates EL2, which will not be available
> > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > may be controversial for some ARM VM implementations to support.
> > This spec also covers the aarch32 execution mode, not covered in the
> > SBSA.
> 
> I would prefer if we can stay as close as possible to SBSA for individual
> hardware components, and only stray from it when there is a strong reason.
> pl011-subset doesn't sound like a significant problem to implement,
> especially as SBSA makes the DMA part of that optional. Can you
> elaborate on what hypervisor would have a problem with that?

I believe it comes down to how much extra overhead pl011-access-trap
would be over virtio-console. If low, then sure.
(Since there are certain things we cannot provide in an SBSA-compliant
way in the guest anyway, I wouldn't consider lack of pl011 a big issue.)

> >   no FDT.  In this case, the VM implementation must provide ACPI, and
> >   the OS must be able to locate the ACPI root pointer through the UEFI
> >   system table.
> > 
> > For more information about the arm and arm64 boot conventions, see
> > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > Linux kernel source tree.
> > 
> > For more information about UEFI and ACPI booting, see [4] and [5].
> 
> What's the point of having ACPI in a virtual machine? You wouldn't
> need to abstract any of the hardware in AML since you already know
> what the virtual hardware is, so I can't see how this would help
> anyone.

The point is that if we need to share any real hw then we need to use
whatever the host has.

> However, as ACPI will not be supported by arm32, not having the
> complete FDT will prevent you from running a 32-bit guest on
> a 64-bit hypervisor, which I consider an important use case.

In which case we would be making an active call to ban anything other
than virtio/xen-pv devices for 32-bit guests on hardware without DT.

However, I see the case of a 32-bit guest on 64-bit hypervisor as less
likely in the server space than in mobile, and ACPI in mobile as
unlikely, so it may end up not being a big issue.

/
    Leif

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:06:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImce-00048i-Ta; Wed, 26 Feb 2014 22:06:36 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WImcc-00048Z-Td
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 22:06:35 +0000
Received: from [85.158.137.68:43669] by server-6.bemta-3.messagelabs.com id
	4E/67-09180-9656E035; Wed, 26 Feb 2014 22:06:33 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393452390!4461367!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 481 invoked from network); 26 Feb 2014 22:06:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:06:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,550,1389744000"; d="scan'208";a="104467847"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 26 Feb 2014 22:06:14 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 17:06:13 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WImcH-00060V-M4;
	Wed, 26 Feb 2014 22:06:13 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WImcH-0004dn-AU;
	Wed, 26 Feb 2014 22:06:13 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25311-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 22:06:13 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25311: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25311 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25311/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-xend               4 xen-build                 fail REGR. vs. 25266
 build-amd64-xend              4 xen-build        fail in 25306 REGR. vs. 25266

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf-pin  9 guest-start                 fail pass in 25306

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-pv           1 xen-build-check(1)        blocked in 25306 n/a

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:21:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImql-0004Ly-Ex; Wed, 26 Feb 2014 22:21:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rob.herring@linaro.org>) id 1WImSm-00043v-Pr
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 21:56:24 +0000
Received: from [193.109.254.147:43013] by server-1.bemta-14.messagelabs.com id
	46/63-15438-8036E035; Wed, 26 Feb 2014 21:56:24 +0000
X-Env-Sender: rob.herring@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393451782!7106442!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6951 invoked from network); 26 Feb 2014 21:56:23 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 21:56:23 -0000
Received: by mail-lb0-f177.google.com with SMTP id z11so1078386lbi.36
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 13:56:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=0cvz6gXgdwP4ZyPDcgNIjqThEZayRgyFrg6VTfmHLpc=;
	b=Gi9+JRFpCffqv+nsgNbslunCiNKkuU2ebPKCvuz30HJV4jZShG/TR0gjRmxJ7lWGPS
	Eix0qnuMxKNH/e9Uxr/zyN8WSNbKU4ICRWJk3hCSUJLRNA9aqtARur9ZhjBtzwAQLHgX
	reQcWKTqL8yYPBHBGJvH0A7+GrI7fejK6LlR5fjoaGMtCNBxTbNw41z6JYj9Z/FUjUxt
	BLrt8tOGNw/ZP2dRTD22e+uBS7MH+LLGx0gKewsmCgMXiID5VX/nWpkatf+Xw+c533SQ
	dAV+kg9u+cvrRoDGAMSyQm09ecU8Eo2SR+yJ4ShC6LhHACqwxACQ5PYl2FuAoGelt/1u
	IFMA==
X-Gm-Message-State: ALoCoQmhaawD5Q0hRenXuuKXROJe/py0b7WxqKbq8alohY24uiycDksDA1wN8MnKRky9/wkbDL+1
X-Received: by 10.152.26.135 with SMTP id l7mr3281821lag.43.1393451782585;
	Wed, 26 Feb 2014 13:56:22 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.6.169 with HTTP; Wed, 26 Feb 2014 13:56:02 -0800 (PST)
In-Reply-To: <6873065.LH15I057sX@wuerfel>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226200537.GC16149@cbox> <6873065.LH15I057sX@wuerfel>
From: Rob Herring <rob.herring@linaro.org>
Date: Wed, 26 Feb 2014 15:56:02 -0600
Message-ID: <CABGGisxHOVqLcG7hVAuAzdeic41KWSLLBSjQLSJQcjTXLhNCow@mail.gmail.com>
To: Arnd Bergmann <arnd@arndb.de>
X-Mailman-Approved-At: Wed, 26 Feb 2014 22:21:09 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Christoffer Dall <christoffer.dall@linaro.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 2:22 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wednesday 26 February 2014 12:05:37 Christoffer Dall wrote:
>> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
>> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
>>
>> The most common response I've been getting so far is that people
>> generally want their VMs to look close to the real thing, but not sure
>> how valid an argument that is.
>>
>> Some people feel strongly about this and seem to think that ARMv8
>> kernels will only work with ACPI in the future...
>
> That is certainly a misconception that has caused a lot of trouble.
> We will certainly keep supporting FDT boot in ARMv8 indefinitely,
> and I expect that most systems will not use ACPI at all.

Furthermore, even enterprise distros will boot DT-based kernels,
assuming the h/w support is mainlined, despite statements to the
contrary. It is a requirement in mainline kernels that DT and ACPI
support coexist. Distros are not going to go out of their way to
undo/break that. And since the boot interface is DT, you can't simply
turn off DT. :)

Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:21:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImqj-0004Lr-S5; Wed, 26 Feb 2014 22:21:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <michael.hudson@canonical.com>) id 1WIlfP-0003ZB-4j
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 21:05:23 +0000
Received: from [85.158.143.35:21360] by server-3.bemta-4.messagelabs.com id
	64/09-11539-2175E035; Wed, 26 Feb 2014 21:05:22 +0000
X-Env-Sender: michael.hudson@canonical.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393448721!8562149!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9844 invoked from network); 26 Feb 2014 21:05:21 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-12.tower-21.messagelabs.com with SMTP;
	26 Feb 2014 21:05:21 -0000
Received: from [118.149.151.178] (helo=narsil)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
	(envelope-from <michael.hudson@canonical.com>)
	id 1WIlfD-0002uT-Gm; Wed, 26 Feb 2014 21:05:12 +0000
Received: by narsil (Postfix, from userid 1000)
	id 75CCC5263CF; Thu, 27 Feb 2014 10:05:05 +1300 (NZDT)
From: Michael Hudson-Doyle <michael.hudson@linaro.org>
To: Christoffer Dall <christoffer.dall@linaro.org>,
	cross-distro@lists.linaro.org, "linux-arm-kernel\@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"kvmarm\@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"kvm\@vger.kernel.org" <kvm@vger.kernel.org>, xen-devel@lists.xen.org
In-Reply-To: <20140226183454.GA14639@cbox>
References: <20140226183454.GA14639@cbox>
User-Agent: Notmuch/0.17~rc1+4~g7f07bfd (http://notmuchmail.org)
	Emacs/24.3.50.2 (x86_64-pc-linux-gnu)
Date: Thu, 27 Feb 2014 10:05:05 +1300
Message-ID: <87k3chh9fi.fsf@canonical.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 26 Feb 2014 22:21:09 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Rob Herring <rob.herring@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoffer Dall <christoffer.dall@linaro.org> writes:

> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.
>
> Therefore, the VM implementation must provide through its UEFI
> implementation, either:
>
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
>
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.

Maybe I'm missing something, but should this last bit say "a trivial
FDT" instead of "no FDT"?  If not, I don't understand the first
paragraph :-)

Cheers,
mwh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:21:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImql-0004Ly-Ex; Wed, 26 Feb 2014 22:21:11 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <rob.herring@linaro.org>) id 1WImSm-00043v-Pr
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 21:56:24 +0000
Received: from [193.109.254.147:43013] by server-1.bemta-14.messagelabs.com id
	46/63-15438-8036E035; Wed, 26 Feb 2014 21:56:24 +0000
X-Env-Sender: rob.herring@linaro.org
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393451782!7106442!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6951 invoked from network); 26 Feb 2014 21:56:23 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 21:56:23 -0000
Received: by mail-lb0-f177.google.com with SMTP id z11so1078386lbi.36
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 13:56:22 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=0cvz6gXgdwP4ZyPDcgNIjqThEZayRgyFrg6VTfmHLpc=;
	b=Gi9+JRFpCffqv+nsgNbslunCiNKkuU2ebPKCvuz30HJV4jZShG/TR0gjRmxJ7lWGPS
	Eix0qnuMxKNH/e9Uxr/zyN8WSNbKU4ICRWJk3hCSUJLRNA9aqtARur9ZhjBtzwAQLHgX
	reQcWKTqL8yYPBHBGJvH0A7+GrI7fejK6LlR5fjoaGMtCNBxTbNw41z6JYj9Z/FUjUxt
	BLrt8tOGNw/ZP2dRTD22e+uBS7MH+LLGx0gKewsmCgMXiID5VX/nWpkatf+Xw+c533SQ
	dAV+kg9u+cvrRoDGAMSyQm09ecU8Eo2SR+yJ4ShC6LhHACqwxACQ5PYl2FuAoGelt/1u
	IFMA==
X-Gm-Message-State: ALoCoQmhaawD5Q0hRenXuuKXROJe/py0b7WxqKbq8alohY24uiycDksDA1wN8MnKRky9/wkbDL+1
X-Received: by 10.152.26.135 with SMTP id l7mr3281821lag.43.1393451782585;
	Wed, 26 Feb 2014 13:56:22 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.6.169 with HTTP; Wed, 26 Feb 2014 13:56:02 -0800 (PST)
In-Reply-To: <6873065.LH15I057sX@wuerfel>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226200537.GC16149@cbox> <6873065.LH15I057sX@wuerfel>
From: Rob Herring <rob.herring@linaro.org>
Date: Wed, 26 Feb 2014 15:56:02 -0600
Message-ID: <CABGGisxHOVqLcG7hVAuAzdeic41KWSLLBSjQLSJQcjTXLhNCow@mail.gmail.com>
To: Arnd Bergmann <arnd@arndb.de>
X-Mailman-Approved-At: Wed, 26 Feb 2014 22:21:09 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Christoffer Dall <christoffer.dall@linaro.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 2:22 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wednesday 26 February 2014 12:05:37 Christoffer Dall wrote:
>> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
>> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
>>
>> The most common response I've been getting so far is that people
>> generally want their VMs to look close to the real thing, but not sure
>> how valid an argument that is.
>>
>> Some people feel strongly about this and seem to think that ARMv8
>> kernels will only work with ACPI in the future...
>
> That is certainly a misconception that has caused a lot of trouble.
> We will certainly keep supporting FDT boot in ARMv8 indefinitely,
> and I expect that most systems will not use ACPI at all.

Furthermore, despite statements to the contrary, even enterprise distro
kernels will boot DT-based systems, assuming the h/w support is
mainlined. It is a requirement in mainline kernels that DT and ACPI
support coexist. Distros are not going to go out of their way to
undo/break that. And since the boot interface is DT, you can't simply
turn off DT. :)

Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:21:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImqj-0004Lr-S5; Wed, 26 Feb 2014 22:21:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <michael.hudson@canonical.com>) id 1WIlfP-0003ZB-4j
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 21:05:23 +0000
Received: from [85.158.143.35:21360] by server-3.bemta-4.messagelabs.com id
	64/09-11539-2175E035; Wed, 26 Feb 2014 21:05:22 +0000
X-Env-Sender: michael.hudson@canonical.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393448721!8562149!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9844 invoked from network); 26 Feb 2014 21:05:21 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-12.tower-21.messagelabs.com with SMTP;
	26 Feb 2014 21:05:21 -0000
Received: from [118.149.151.178] (helo=narsil)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71)
	(envelope-from <michael.hudson@canonical.com>)
	id 1WIlfD-0002uT-Gm; Wed, 26 Feb 2014 21:05:12 +0000
Received: by narsil (Postfix, from userid 1000)
	id 75CCC5263CF; Thu, 27 Feb 2014 10:05:05 +1300 (NZDT)
From: Michael Hudson-Doyle <michael.hudson@linaro.org>
To: Christoffer Dall <christoffer.dall@linaro.org>,
	cross-distro@lists.linaro.org, "linux-arm-kernel\@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"kvmarm\@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"kvm\@vger.kernel.org" <kvm@vger.kernel.org>, xen-devel@lists.xen.org
In-Reply-To: <20140226183454.GA14639@cbox>
References: <20140226183454.GA14639@cbox>
User-Agent: Notmuch/0.17~rc1+4~g7f07bfd (http://notmuchmail.org)
	Emacs/24.3.50.2 (x86_64-pc-linux-gnu)
Date: Thu, 27 Feb 2014 10:05:05 +1300
Message-ID: <87k3chh9fi.fsf@canonical.com>
MIME-Version: 1.0
X-Mailman-Approved-At: Wed, 26 Feb 2014 22:21:09 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Rob Herring <rob.herring@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Christoffer Dall <christoffer.dall@linaro.org> writes:

> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.
>
> Therefore, the VM implementation must provide through its UEFI
> implementation, either:
>
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
>
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.
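(Editorial note: for illustration, the "simple FDT" described in the quoted
paragraph amounts to little more than a /chosen node carrying the UEFI
handover pointers. A sketch follows; the property names match what the Linux
EFI stub places in /chosen, while the addresses and sizes are purely
illustrative.)

```dts
/dts-v1/;

/ {
	chosen {
		/* Physical address of the UEFI system table, from which
		 * the OS can go on to discover ACPI. */
		linux,uefi-system-table = <0x0 0xf0000000>;

		/* The UEFI memory map handed over to the kernel. */
		linux,uefi-mmap-start = <0x0 0xf0010000>;
		linux,uefi-mmap-size = <0x1000>;
		linux,uefi-mmap-desc-size = <0x30>;
		linux,uefi-mmap-desc-ver = <0x1>;
	};
};
```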

Maybe I'm missing something, but should this last bit say "a trivial
FDT" instead of "no FDT"?  If not, I don't understand the first
paragraph :-)

Cheers,
mwh

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:21:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImrM-0004QS-Hd; Wed, 26 Feb 2014 22:21:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WImrK-0004Q0-B3
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:21:46 +0000
Received: from [85.158.139.211:6835] by server-4.bemta-5.messagelabs.com id
	35/24-08092-9F86E035; Wed, 26 Feb 2014 22:21:45 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393453302!6536857!1
X-Originating-IP: [209.85.160.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7456 invoked from network); 26 Feb 2014 22:21:44 -0000
Received: from mail-pb0-f48.google.com (HELO mail-pb0-f48.google.com)
	(209.85.160.48)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:21:44 -0000
Received: by mail-pb0-f48.google.com with SMTP id md12so1607589pbc.35
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:21:42 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=hkF3S9mpE1umUwOK+TZbV8MFbHOXYHvSrXAxSA2p5ys=;
	b=CK+yiVSiT7b5+vkcA4Y9hxE6UUVZG/PFAGWFUlubzkAxXtJ+Dw1UrtvCUK0o8BQeG5
	6eeAwWimqQTWPYKs7aIjMkhbOBtuxgJl9M68jpyPoHV4SUjAUoS6Eo9rx3/l3Ro1gTmN
	7p99vp2wVcpmkhrI29aNoUZDBNrKPHF/GdE2/IJpoyY6wJkwJtKRamentTGktodmFiAh
	ieddCmCALBnChm555kRbkCgF+fU/81PhmzIfIp28DFSCkCqEzChw0uhEilekDa7yfoEJ
	t4SA7ZtGBpFgYlWcEhkEL2jS3Z5WBKkDfLdMZ//iuvdisuPJ7h5L5WLhCk+76lVXKaRb
	AVmA==
X-Gm-Message-State: ALoCoQntWqTZj5t/POVfJnAprNwQU50GaDXPeD6xZG1+UNs1EXnPWXrYiROuZVgUPDd4ydy+3ez1
X-Received: by 10.66.243.131 with SMTP id wy3mr11689845pac.32.1393453302097;
	Wed, 26 Feb 2014 14:21:42 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id om6sm6736076pbc.43.2014.02.26.14.21.39
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 14:21:40 -0800 (PST)
Date: Wed, 26 Feb 2014 14:21:47 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Rob Herring <rob.herring@linaro.org>
Message-ID: <20140226222147.GE16149@cbox>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226200537.GC16149@cbox> <6873065.LH15I057sX@wuerfel>
	<CABGGisxHOVqLcG7hVAuAzdeic41KWSLLBSjQLSJQcjTXLhNCow@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CABGGisxHOVqLcG7hVAuAzdeic41KWSLLBSjQLSJQcjTXLhNCow@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>, Arnd Bergmann <arnd@arndb.de>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 03:56:02PM -0600, Rob Herring wrote:
> On Wed, Feb 26, 2014 at 2:22 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> > On Wednesday 26 February 2014 12:05:37 Christoffer Dall wrote:
> >> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> >> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> >>
> >> The most common response I've been getting so far is that people
> >> generally want their VMs to look close to the real thing, but not sure
> >> how valid an argument that is.
> >>
> >> Some people feel strongly about this and seem to think that ARMv8
> >> kernels will only work with ACPI in the future...
> >
> > That is certainly a misconception that has caused a lot of trouble.
> > We will certainly keep supporting FDT boot in ARMv8 indefinitely,
> > and I expect that most systems will not use ACPI at all.
> 
> Furthermore, despite statements to the contrary, even enterprise distro
> kernels will boot DT-based systems, assuming the h/w support is
> mainlined. It is a requirement in mainline kernels that DT and ACPI
> support coexist. Distros are not going to go out of their way to
> undo/break that. And since the boot interface is DT, you can't simply
> turn off DT. :)
> 

Personally I'm all for simplicity, so I don't want to push any agenda
for ACPI in VMs.

Note that the spec does not mandate the use of ACPI; it just tells you
how to do it if you wish to.

But we can change the spec to require a full FDT description of the
system, unless of course some of the ACPI-in-VM supporters manage to
convince the rest.

-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:25:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImvJ-0004hz-U9; Wed, 26 Feb 2014 22:25:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WImvH-0004ho-PU
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:25:52 +0000
Received: from [85.158.143.35:41423] by server-1.bemta-4.messagelabs.com id
	78/B6-31661-FE96E035; Wed, 26 Feb 2014 22:25:51 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393453548!8592225!1
X-Originating-IP: [209.85.160.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26740 invoked from network); 26 Feb 2014 22:25:50 -0000
Received: from mail-pb0-f46.google.com (HELO mail-pb0-f46.google.com)
	(209.85.160.46)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:25:50 -0000
Received: by mail-pb0-f46.google.com with SMTP id rq2so536687pbb.5
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:25:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=aL/hRaT0/S/KeI/xHb6oA6yokStrwj4MUOpSCDzJw8E=;
	b=TtyLTzMwYn77aYoGMajbOpSUym52d49QAzOL2ZDLB2Syr6SO2wwRPwJOaT2CgEdkeJ
	UbzQ0VnLo4T1Z2IyGLr8Xvp63QzuJ5XuDpr8lkJW8RSNCCRvy6eXz4TwLkP59M7sZKvT
	dbt8Pl7INV7RyUorFOjQJI6A6owRU8rXr2OpmRphDPn/NM/EvbvGFuA6fKkZPEVs4FjP
	6H+V9nY9mdnd+F3GX/Xq07Pcuhjv6nEkjBHeajEBGwAS7lEbjXv/khb7biJo8abF4z9k
	ELpDOrQUsZnys/JybhBJFsGgRWSgKPTG+EbSifXeDzJJunWsrbf8O0XqqeOGpUPPZjZV
	8v+A==
X-Gm-Message-State: ALoCoQm/RZyQnS7Ys/iELMLH9i/LYcuj0dmCvGJqA35Dr7scTI3m2wh8GMQDQ7xiqKMCUXaaMYGu
X-Received: by 10.68.66.1 with SMTP id b1mr9548049pbt.43.1393453547984;
	Wed, 26 Feb 2014 14:25:47 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id op3sm6757253pbc.40.2014.02.26.14.25.45
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 14:25:46 -0800 (PST)
Date: Wed, 26 Feb 2014 14:25:53 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Leif Lindholm <leif.lindholm@linaro.org>
Message-ID: <20140226222553.GF16149@cbox>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226214843.GD12169@bivouac.eciton.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140226214843.GD12169@bivouac.eciton.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: cross-distro@lists.linaro.org, Arnd Bergmann <arnd@arndb.de>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 09:48:43PM +0000, Leif Lindholm wrote:
> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> > > ARM VM System Specification
> > > ===========================
> > > 
> > > Goal
> > > ----
> > > The goal of this spec is to allow suitably-built OS images to run on
> > > all ARM virtualization solutions, such as KVM or Xen.
> > > 
> > > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > > they aim to be hypervisor agnostic.
> > > 
> > > Note that simply adhering to the SBSA [2] is not a valid approach,
> > > for example because the SBSA mandates EL2, which will not be available
> > > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > > may be controversial for some ARM VM implementations to support.
> > > This spec also covers the aarch32 execution mode, not covered in the
> > > SBSA.
> > 
> > I would prefer if we can stay as close as possible to SBSA for individual
> > hardware components, and only stray from it when there is a strong reason.
> > pl011-subset doesn't sound like a significant problem to implement,
> > especially as SBSA makes the DMA part of that optional. Can you
> > elaborate on what hypervisor would have a problem with that?
> 
> I believe it comes down to how much extra overhead pl011-access-trap
> would be over virtio-console. If low, then sure.
> (Since there are certain things we cannot provide SBSA-compliant in
> the guest anyway, I wouldn't consider lack of pl011 to be a big issue.)
> 

I don't think it's about overhead. Sure, the pl011 may be slower, but
it's a serial port; does anyone care about the performance of a console?
The pl011 should be good enough.

I think the issue is that Xen does no real device emulation today at
all; it doesn't use QEMU, etc.  That said, adding a pl011 emulation in
the Xen tools doesn't seem like the worst idea in the world, but I will
let the Xen folks chime in on why they are opposed to it.

> > >   no FDT.  In this case, the VM implementation must provide ACPI, and
> > >   the OS must be able to locate the ACPI root pointer through the UEFI
> > >   system table.
> > > 
> > > For more information about the arm and arm64 boot conventions, see
> > > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > > Linux kernel source tree.
> > > 
> > > For more information about UEFI and ACPI booting, see [4] and [5].
> > 
> > What's the point of having ACPI in a virtual machine? You wouldn't
> > need to abstract any of the hardware in AML since you already know
> > what the virtual hardware is, so I can't see how this would help
> > anyone.
> 
> The point is that if we need to share any real hw then we need to use
> whatever the host has.
> 
> > However, as ACPI will not be supported by arm32, not having the
> > complete FDT will prevent you from running a 32-bit guest on
> > a 64-bit hypervisor, which I consider an important use case.
> 
> In which case we would be making an active call to ban anything other
> than virtio/xen-pv devices for 32-bit guests on hardware without DT.
> 
> However, I see the case of a 32-bit guest on 64-bit hypervisor as less
> likely in the server space than in mobile, and ACPI in mobile as
> unlikely, so it may end up not being a big issue.
> 
But you may want, for example, a networking appliance built on 64-bit
ARM hardware, with a 64-bit ARM hypervisor on it, and you wish to run
some 32-bit system in a VM.  If the hardware is ACPI-based, it would be
a shame if the VM implementation (or the virtual firmware) couldn't
provide a usable 32-bit FDT.  We may want to call that case out in this
document?

Thanks,
-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:25:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImvJ-0004hz-U9; Wed, 26 Feb 2014 22:25:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WImvH-0004ho-PU
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:25:52 +0000
Received: from [85.158.143.35:41423] by server-1.bemta-4.messagelabs.com id
	78/B6-31661-FE96E035; Wed, 26 Feb 2014 22:25:51 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393453548!8592225!1
X-Originating-IP: [209.85.160.46]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26740 invoked from network); 26 Feb 2014 22:25:50 -0000
Received: from mail-pb0-f46.google.com (HELO mail-pb0-f46.google.com)
	(209.85.160.46)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:25:50 -0000
Received: by mail-pb0-f46.google.com with SMTP id rq2so536687pbb.5
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:25:48 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=aL/hRaT0/S/KeI/xHb6oA6yokStrwj4MUOpSCDzJw8E=;
	b=TtyLTzMwYn77aYoGMajbOpSUym52d49QAzOL2ZDLB2Syr6SO2wwRPwJOaT2CgEdkeJ
	UbzQ0VnLo4T1Z2IyGLr8Xvp63QzuJ5XuDpr8lkJW8RSNCCRvy6eXz4TwLkP59M7sZKvT
	dbt8Pl7INV7RyUorFOjQJI6A6owRU8rXr2OpmRphDPn/NM/EvbvGFuA6fKkZPEVs4FjP
	6H+V9nY9mdnd+F3GX/Xq07Pcuhjv6nEkjBHeajEBGwAS7lEbjXv/khb7biJo8abF4z9k
	ELpDOrQUsZnys/JybhBJFsGgRWSgKPTG+EbSifXeDzJJunWsrbf8O0XqqeOGpUPPZjZV
	8v+A==
X-Gm-Message-State: ALoCoQm/RZyQnS7Ys/iELMLH9i/LYcuj0dmCvGJqA35Dr7scTI3m2wh8GMQDQ7xiqKMCUXaaMYGu
X-Received: by 10.68.66.1 with SMTP id b1mr9548049pbt.43.1393453547984;
	Wed, 26 Feb 2014 14:25:47 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id op3sm6757253pbc.40.2014.02.26.14.25.45
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 14:25:46 -0800 (PST)
Date: Wed, 26 Feb 2014 14:25:53 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Leif Lindholm <leif.lindholm@linaro.org>
Message-ID: <20140226222553.GF16149@cbox>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226214843.GD12169@bivouac.eciton.net>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140226214843.GD12169@bivouac.eciton.net>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: cross-distro@lists.linaro.org, Arnd Bergmann <arnd@arndb.de>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 09:48:43PM +0000, Leif Lindholm wrote:
> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
> > On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
> > > ARM VM System Specification
> > > ===========================
> > > 
> > > Goal
> > > ----
> > > The goal of this spec is to allow suitably-built OS images to run on
> > > all ARM virtualization solutions, such as KVM or Xen.
> > > 
> > > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > > they aim to be hypervisor agnostic.
> > > 
> > > Note that simply adhering to the SBSA [2] is not a valid approach,
> > > for example because the SBSA mandates EL2, which will not be available
> > > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > > may be controversial for some ARM VM implementations to support.
> > > This spec also covers the aarch32 execution mode, not covered in the
> > > SBSA.
> > 
> > I would prefer if we can stay as close as possible to SBSA for individual
> > hardware components, and only stray from it when there is a strong reason.
> > pl011-subset doesn't sound like a significant problem to implement,
> > especially as SBSA makes the DMA part of that optional. Can you
> > elaborate on what hypervisor would have a problem with that?
> 
> I believe it comes down to how much extra overhead pl011-access-trap
> would be over virtio-console. If low, then sure.
> (Since there are certain things we cannot provide SBSA-compliant in
> the guest anyway, I wouldn't consider lack of pl011 to be a big issue.)
> 

I don't think it's about overhead; sure, pl011 may be slower, but it's a
serial port, and does anyone care about the performance of a console? pl011
should certainly be good enough.

I think the issue is that Xen does no real device emulation at all today;
they don't use QEMU, etc.  That being said, adding pl011 emulation to
the Xen tools doesn't seem like the worst idea in the world, but I will
let them chime in on why they are opposed to it.

> > >   no FDT.  In this case, the VM implementation must provide ACPI, and
> > >   the OS must be able to locate the ACPI root pointer through the UEFI
> > >   system table.
> > > 
> > > For more information about the arm and arm64 boot conventions, see
> > > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > > Linux kernel source tree.
> > > 
> > > For more information about UEFI and ACPI booting, see [4] and [5].
> > 
> > What's the point of having ACPI in a virtual machine? You wouldn't
> > need to abstract any of the hardware in AML since you already know
> > what the virtual hardware is, so I can't see how this would help
> > anyone.
> 
> The point is that if we need to share any real hw then we need to use
> whatever the host has.
> 
> > However, as ACPI will not be supported by arm32, not having the
> > complete FDT will prevent you from running a 32-bit guest on
> > a 64-bit hypervisor, which I consider an important use case.
> 
> In which case we would be making an active call to ban anything other
> than virtio/xen-pv devices for 32-bit guests on hardware without DT.
> 
> However, I see the case of a 32-bit guest on 64-bit hypervisor as less
> likely in the server space than in mobile, and ACPI in mobile as
> unlikely, so it may end up not being a big issue.
> 
But you may want a networking appliance, for example, built on ARM
64-bit hardware, with an ARM 64-bit hypervisor on it, and you wish to
run some 32-bit system in a VM.  If the hardware is ACPI-based, it would
be a shame if the VM implementation (or the virtual firmware) couldn't
provide a usable 32-bit FDT.  We may want to call that case out in this
document?
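For readers wondering what "providing a usable 32-bit FDT" minimally involves: a flattened device tree is just a small binary blob that the VM implementation or its firmware could synthesize on the fly. The Python sketch below is an illustration only, not anything from the spec or the kernel: real implementations would use libfdt, and the "linux,dummy-virt" compatible string is a made-up placeholder. It packs a version-17 DTB with an empty memory reservation map and a single root node carrying one property:

```python
import struct

def make_minimal_fdt(compatible=b"linux,dummy-virt\0"):
    """Build a minimal flattened device tree blob (DTB, spec version 17)."""
    FDT_BEGIN_NODE, FDT_END_NODE, FDT_PROP, FDT_END = 1, 2, 3, 9

    def pad4(b):
        # Structure-block entries are padded to 4-byte boundaries.
        return b + b"\0" * (-len(b) % 4)

    strings = b"compatible\0"                 # strings block: property names
    structs = b"".join([
        struct.pack(">I", FDT_BEGIN_NODE), pad4(b"\0"),   # root node, empty name
        struct.pack(">III", FDT_PROP, len(compatible), 0),  # nameoff 0 -> "compatible"
        pad4(compatible),
        struct.pack(">I", FDT_END_NODE),
        struct.pack(">I", FDT_END),
    ])
    rsvmap = struct.pack(">QQ", 0, 0)         # empty memory reserve map terminator
    header_size = 10 * 4
    off_rsvmap = header_size
    off_struct = off_rsvmap + len(rsvmap)
    off_strings = off_struct + len(structs)
    totalsize = off_strings + len(strings)
    header = struct.pack(">10I",
        0xD00DFEED, totalsize, off_struct, off_strings, off_rsvmap,
        17, 16,                               # version / last compatible version
        0,                                    # boot CPU id
        len(strings), len(structs))
    return header + rsvmap + structs + strings
```

All DTB header fields are big-endian (hence the ">"-prefixed struct formats), and the magic is 0xd00dfeed; the point is only that generating such a blob is cheap enough that an ACPI-based host need not prevent a DT-based 32-bit guest.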

Thanks,
-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:26:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WImwG-0004pa-Ac; Wed, 26 Feb 2014 22:26:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paulmck@linux.vnet.ibm.com>) id 1WImwD-0004pF-V3
	for xen-devel@lists.xenproject.org; Wed, 26 Feb 2014 22:26:50 +0000
Received: from [85.158.139.211:23708] by server-6.bemta-5.messagelabs.com id
	02/EF-14342-92A6E035; Wed, 26 Feb 2014 22:26:49 +0000
X-Env-Sender: paulmck@linux.vnet.ibm.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393453606!6454988!1
X-Originating-IP: [32.97.110.149]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMzIuOTcuMTEwLjE0OSA9PiAyMjk1ODM=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28580 invoked from network); 26 Feb 2014 22:26:48 -0000
Received: from e31.co.us.ibm.com (HELO e31.co.us.ibm.com) (32.97.110.149)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 22:26:48 -0000
Received: from /spool/local
	by e31.co.us.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use Only!
	Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from <paulmck@linux.vnet.ibm.com>; 
	Wed, 26 Feb 2014 15:26:46 -0700
Received: from d03dlp03.boulder.ibm.com (9.17.202.179)
	by e31.co.us.ibm.com (192.168.1.131) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Wed, 26 Feb 2014 15:26:44 -0700
Received: from b03cxnp08027.gho.boulder.ibm.com
	(b03cxnp08027.gho.boulder.ibm.com [9.17.130.19])
	by d03dlp03.boulder.ibm.com (Postfix) with ESMTP id D157719D8048
	for <xen-devel@lists.xenproject.org>;
	Wed, 26 Feb 2014 15:26:41 -0700 (MST)
Received: from d03av06.boulder.ibm.com (d03av06.boulder.ibm.com [9.17.195.245])
	by b03cxnp08027.gho.boulder.ibm.com (8.13.8/8.13.8/NCO v10.0) with
	ESMTP id s1QMQH7H9371994
	for <xen-devel@lists.xenproject.org>; Wed, 26 Feb 2014 23:26:17 +0100
Received: from d03av06.boulder.ibm.com (loopback [127.0.0.1])
	by d03av06.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVout) with ESMTP
	id s1QMU8Wi029530
	for <xen-devel@lists.xenproject.org>; Wed, 26 Feb 2014 15:30:11 -0700
Received: from paulmck-ThinkPad-W500 ([9.70.82.174])
	by d03av06.boulder.ibm.com (8.14.4/8.13.1/NCO v10.0 AVin) with ESMTP id
	s1QMU844029517; Wed, 26 Feb 2014 15:30:08 -0700
Received: by paulmck-ThinkPad-W500 (Postfix, from userid 1000)
	id 32917381472; Wed, 26 Feb 2014 14:26:40 -0800 (PST)
Date: Wed, 26 Feb 2014 14:26:40 -0800
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226222640.GN8264@linux.vnet.ibm.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022622-8236-0000-0000-0000003C0127
Cc: Jeremy Fitzhardinge <jeremy@goop.org>, x86@kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Ingo Molnar <mingo@redhat.com>, Scott J Norton <scott.norton@hp.com>,
	xen-devel@lists.xenproject.org, Alexander Fyodorov <halcy@yandex.ru>,
	Arnd Bergmann <arnd@arndb.de>, Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock
	with PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: paulmck@linux.vnet.ibm.com
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:20AM -0500, Waiman Long wrote:

This series passes a short locktorture test when based on top of current
tip/core/locking.  This is for both the first three patches and for the
full set, though in the latter case it took me an embarrassingly large
number of tries to get PARAVIRT_UNFAIR_LOCKS set properly.

Again, don't read too much into this.  This was in an 8-CPU KVM guest
on x86 (though with an interfering kernel build running on the host),
and as noted earlier, locktorture is still a bit on the lame side.

							Thanx, Paul

> v4->v5:
>  - Move the optimized 2-task contending code to the generic file to
>    enable more architectures to use it without code duplication.
>  - Address some of the style-related comments by PeterZ.
>  - Allow the use of unfair queue spinlock in a real para-virtualized
>    execution environment.
>  - Add para-virtualization support to the qspinlock code by ensuring
>    that the lock holder and queue head stay alive as much as possible.
> 
> v3->v4:
>  - Remove debugging code and fix a configuration error
>  - Simplify the qspinlock structure and streamline the code to make it
>    perform a bit better
>  - Add an x86 version of asm/qspinlock.h for holding x86 specific
>    optimization.
>  - Add an optimized x86 code path for 2 contending tasks to improve
>    low contention performance.
> 
> v2->v3:
>  - Simplify the code by using numerous mode only without an unfair option.
>  - Use the latest smp_load_acquire()/smp_store_release() barriers.
>  - Move the queue spinlock code to kernel/locking.
>  - Make the use of queue spinlock the default for x86-64 without user
>    configuration.
>  - Additional performance tuning.
> 
> v1->v2:
>  - Add some more comments to document what the code does.
>  - Add a numerous CPU mode to support >= 16K CPUs
>  - Add a configuration option to allow lock stealing which can further
>    improve performance in many cases.
>  - Enable wakeup of queue head CPU at unlock time for non-numerous
>    CPU mode.
> 
> This patch set has 3 different sections:
>  1) Patches 1-3: Introduce a queue-based spinlock implementation that
>     can replace the default ticket spinlock without increasing the
>     size of the spinlock data structure. As a result, critical kernel
>     data structures that embed a spinlock won't increase in size or
>     break data alignment.
>  2) Patches 4 and 5: Enable the use of unfair queue spinlocks in a
>     real para-virtualized execution environment. This can resolve
>     some of the locking-related performance issues due to the fact
>     that the next CPU to get the lock may have been scheduled out
>     for a period of time.
>  3) Patches 6-8: Enable qspinlock para-virtualization support by making
>     sure that the lock holder and the queue head stay alive as long as
>     possible.
> 
> Patches 1-3 are fully tested and ready for production. Patches 4-8, on
> the other hand, are not fully tested. They have undergone compilation
> tests with various combinations of kernel config settings and boot-up
> tests in a non-virtualized setting. Further tests and performance
> characterization still need to be done in a KVM guest, so
> comments on them are welcome. Suggestions or recommendations on how
> to add PV support in the Xen environment are also needed.
> 
> The queue spinlock has slightly better performance than the ticket
> spinlock in the uncontended case. Its performance can be much better
> under moderate to heavy contention.  This patch set has the potential
> to improve the performance of all workloads that have moderate to
> heavy spinlock contention.
> 
> The queue spinlock is especially suitable for NUMA machines with at
> least 2 sockets, though noticeable performance benefit probably won't
> show up in machines with less than 4 sockets.
> 
> The purpose of this patch set is not to solve any particular spinlock
> contention problem. Those need to be solved by refactoring the code
> to make more efficient use of the lock, or by using finer-grained
> locks. The main purpose is to make lock contention problems more
> tolerable until someone can spend the time and effort to fix them.
> 
> Waiman Long (8):
>   qspinlock: Introducing a 4-byte queue spinlock implementation
>   qspinlock, x86: Enable x86-64 to use queue spinlock
>   qspinlock, x86: Add x86 specific optimization for 2 contending tasks
>   pvqspinlock, x86: Allow unfair spinlock in a real PV environment
>   pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
>   pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
>   pvqspinlock, x86: Add qspinlock para-virtualization support
>   pvqspinlock, x86: Enable KVM to use qspinlock's PV support
> 
>  arch/x86/Kconfig                      |   12 +
>  arch/x86/include/asm/paravirt.h       |    9 +-
>  arch/x86/include/asm/paravirt_types.h |   12 +
>  arch/x86/include/asm/pvqspinlock.h    |  176 ++++++++++
>  arch/x86/include/asm/qspinlock.h      |  133 +++++++
>  arch/x86/include/asm/spinlock.h       |    9 +-
>  arch/x86/include/asm/spinlock_types.h |    4 +
>  arch/x86/kernel/Makefile              |    1 +
>  arch/x86/kernel/kvm.c                 |   73 ++++-
>  arch/x86/kernel/paravirt-spinlocks.c  |   15 +-
>  arch/x86/xen/spinlock.c               |    2 +-
>  include/asm-generic/qspinlock.h       |  122 +++++++
>  include/asm-generic/qspinlock_types.h |   61 ++++
>  kernel/Kconfig.locks                  |    7 +
>  kernel/locking/Makefile               |    1 +
>  kernel/locking/qspinlock.c            |  610 +++++++++++++++++++++++++++++++++
>  16 files changed, 1239 insertions(+), 8 deletions(-)
>  create mode 100644 arch/x86/include/asm/pvqspinlock.h
>  create mode 100644 arch/x86/include/asm/qspinlock.h
>  create mode 100644 include/asm-generic/qspinlock.h
>  create mode 100644 include/asm-generic/qspinlock_types.h
>  create mode 100644 kernel/locking/qspinlock.c
> 
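For readers new to queue spinlocks, the queueing idea described in the cover letter above (each waiter spins on its own node, and the lock is handed directly to its successor) can be sketched with a toy MCS-style lock. This Python sketch is an illustration only, not Waiman's code: the real qspinlock packs the queue tail and lock byte into one 4-byte word and uses hardware xchg/cmpxchg, for which a helper threading.Lock merely stands in here:

```python
import threading
import time

class MCSNode:
    """Per-acquirer queue node; each waiter spins only on its own node."""
    __slots__ = ("locked", "next")
    def __init__(self):
        self.locked = False
        self.next = None

class MCSLock:
    def __init__(self):
        self._tail = None
        self._atomic = threading.Lock()   # stand-in for atomic xchg/cmpxchg

    def _xchg_tail(self, node):
        with self._atomic:
            prev, self._tail = self._tail, node
            return prev

    def _cmpxchg_tail(self, expect, new):
        with self._atomic:
            if self._tail is expect:
                self._tail = new
                return True
            return False

    def acquire(self, node):
        node.locked = True
        node.next = None
        prev = self._xchg_tail(node)      # atomically join the queue
        if prev is not None:              # someone ahead of us: link in, wait
            prev.next = node
            while node.locked:
                time.sleep(0)             # spin (yield) on our own node only

    def release(self, node):
        if node.next is None:
            # No visible successor: try to mark the queue empty.
            if self._cmpxchg_tail(node, None):
                return
            while node.next is None:      # successor is still linking itself in
                time.sleep(0)
        node.next.locked = False          # hand the lock to the next waiter
```

Strict FIFO hand-off is exactly what becomes a problem under paravirtualization when the successor vCPU has been preempted, which is the motivation for the unfair-lock option in patches 4 and 5.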


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:48:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInGh-00058d-ID; Wed, 26 Feb 2014 22:47:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WInGg-00058Y-9e
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:47:58 +0000
Received: from [85.158.137.68:34233] by server-8.bemta-3.messagelabs.com id
	F5/51-16039-D1F6E035; Wed, 26 Feb 2014 22:47:57 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393454874!4441827!1
X-Originating-IP: [209.85.192.179]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_32,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21789 invoked from network); 26 Feb 2014 22:47:55 -0000
Received: from mail-pd0-f179.google.com (HELO mail-pd0-f179.google.com)
	(209.85.192.179)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:47:55 -0000
Received: by mail-pd0-f179.google.com with SMTP id w10so1566979pde.38
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:47:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=FMT7SJUu+EzImJi5HM3d6P7exCmpCW6M0KS0rduz2tk=;
	b=VNlV4Ty7OM1C8/khnCs2dmCDoL/7pf/TTfS94RqL08bJDVQb0A5M9suhBAo/R+N2sW
	2SVU1yxbXHYslVqC37Yivyv3sqoPFW0n+ws/VADqE8Ierfyp1pZJ/j2jHb9yI9qkqSQs
	Z6JSy3j3g+KoXzdIPfbR0P6Jsg4Pzr53RX877R1D7RMVw0PjVQbKPyu8rIG+BBMwZlm9
	D0SWAPlgpWClpuheDnS3GPNjSFfLPMX15MsoRX24o0K20ellAjK5AZKtStfNkz7cEv9T
	vGFGA3yMDE3E4veEnhpVWgrMnJSVN9ONK1w2+gYFJEqpAAaMgTaqZGMxbFiLb9+2yDZ1
	ZCaQ==
X-Gm-Message-State: ALoCoQkrgISphfXidcVB3H6b65weA6TuXuDW5ih8lJukmBAE8n+vvxnmqXogis/rDA+yuC+3MY2E
X-Received: by 10.68.203.135 with SMTP id kq7mr9604489pbc.85.1393454873694;
	Wed, 26 Feb 2014 14:47:53 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id ha2sm6945296pbb.8.2014.02.26.14.47.51
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 14:47:52 -0800 (PST)
Date: Wed, 26 Feb 2014 14:47:59 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Grant Likely <grant.likely@linaro.org>
Message-ID: <20140226224759.GI16149@cbox>
References: <20140226183454.GA14639@cbox>
	<CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	cross-distro@lists.linaro.org,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Grant,

Thanks for the comments,

On Wed, Feb 26, 2014 at 10:35:54PM +0000, Grant Likely wrote:
> 
> On 26 Feb 2014 18:35, "Christoffer Dall" <christoffer.dall@linaro.org>
> wrote:
> >
> > ARM VM System Specification
> > ===========================
> >
> > Goal
> > ----
> > The goal of this spec is to allow suitably-built OS images to run on
> > all ARM virtualization solutions, such as KVM or Xen.
> >
> > Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> > they aim to be hypervisor agnostic.
> >
> > Note that simply adhering to the SBSA [2] is not a valid approach,
> > for example because the SBSA mandates EL2, which will not be available
> > for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> > may be controversial for some ARM VM implementations to support.
> > This spec also covers the aarch32 execution mode, not covered in the
> > SBSA.
> >
> >
> > Image format
> > ------------
> > The image format, as presented to the VM, needs to be well-defined in
> > order for prepared disk images to be bootable across various
> > virtualization implementations.
> >
> > The raw disk format as presented to the VM must be partitioned with a
> > GUID Partition Table (GPT).  The bootable software must be placed in the
> > EFI System Partition (ESP), using the UEFI removable media path, and
> > must be an EFI application complying with the UEFI Specification 2.4
> > Revision A [6].
> >
> > The ESP partition's GPT entry's partition type GUID must be
> > C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> > formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
> >
> > The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> > execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> > state.
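The removable-media convention above is mechanical enough to sketch in code. The following is an illustrative Python fragment (not part of the spec; the function name is ours) encoding the ESP partition type GUID and the per-execution-state boot path:

```python
# Illustrative sketch of the UEFI removable-media boot convention
# described above; helper names are ours, not from the spec.

# Partition type GUID identifying the EFI System Partition (ESP).
ESP_TYPE_GUID = "C12A7328-F81F-11D2-BA4B-00A0C93EC93B"

# Default (removable media) boot application path per execution state.
REMOVABLE_MEDIA_PATH = {
    "aarch32": "\\EFI\\BOOT\\BOOTARM.EFI",
    "aarch64": "\\EFI\\BOOT\\BOOTAA64.EFI",
}

def default_boot_path(execution_state: str) -> str:
    """Return the UEFI removable-media path for an execution state."""
    try:
        return REMOVABLE_MEDIA_PATH[execution_state]
    except KeyError:
        raise ValueError("unknown execution state: %s" % execution_state)

print(default_boot_path("aarch64"))  # \EFI\BOOT\BOOTAA64.EFI
```

A conforming firmware with no explicit boot entries would look up exactly this path on the FAT filesystem in the ESP.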
> >
> > This ensures that tools for both Xen and KVM can load a binary UEFI
> > firmware which can read and boot the EFI application in the disk image.
> >
> > A typical scenario will be GRUB2 packaged as an EFI application, which
> > mounts the system boot partition and boots Linux.
> >
> >
> > Virtual Firmware
> > ----------------
> > The VM system must be able to boot the EFI application in the ESP.  It
> > is recommended that this is achieved by loading a UEFI binary as the
> > first software executed by the VM, which then executes the EFI
> > application.  The UEFI implementation should be compliant with UEFI
> > Specification 2.4 Revision A [6] or later.
> >
> > This document strongly recommends that the VM implementation support
> > persistent environment storage for the virtual firmware implementation,
> > in order to accommodate probable use cases such as adding additional
> > disk images to a VM or running installers to perform upgrades.
> >
> > The binary UEFI firmware implementation should not be distributed as
> > part of the VM image, but is specific to the VM implementation.
> >
> >
> > Hardware Description
> > --------------------
> > The Linux kernel's proper entry point always takes a pointer to an FDT,
> > regardless of the boot mechanism, firmware, and hardware description
> > method.  Even on real hardware which only supports ACPI and UEFI, the
> kernel
> > entry point will still receive a pointer to a simple FDT, generated by
> > the Linux kernel UEFI stub, containing a pointer to the UEFI system
> > table.  The kernel can then discover ACPI from the system tables.  The
> > presence of ACPI vs. FDT is therefore always itself discoverable,
> > through the FDT.
> 
> I would drop pretty much all of the above detail of the kernel entry point.
> The spec should specify UEFI compliance and stop there.

That probably makes sense.  The discussion started out as "should it be
DTB or ACPI" and moved on from there; interestingly, we more or less
found out that it doesn't really matter in terms of how to package the
kernel, hence this text.  But looking over it now, it doesn't add to the
clarity of the spec.

> 
> What is relevant is the allowance for the UEFI implementation to provide an
> FDT and/or ACPI via the configuration table.
> 
> >
> > Therefore, the VM implementation must provide through its UEFI
> > implementation, either:
> >
> >   a complete FDT which describes the entire VM system and will boot
> >   mainline kernels driven by device tree alone, or
> >
> >   no FDT.  In this case, the VM implementation must provide ACPI, and
> >   the OS must be able to locate the ACPI root pointer through the UEFI
> >   system table.
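For reference, "locating the ACPI root pointer through the UEFI system table" amounts to scanning the UEFI configuration table, an array of (vendor GUID, pointer) pairs, for the ACPI table GUID. A minimal sketch follows; the data modelling is ours, while the two GUIDs are the standard EFI ACPI table GUIDs from the UEFI specification:

```python
# Sketch of how an OS finds the ACPI root pointer (RSDP) through the
# UEFI system table: the configuration table is an array of
# (vendor GUID, pointer) pairs, matched by GUID.

ACPI_20_TABLE_GUID = "8868e871-e4f1-11d3-bc22-0080c73c8881"  # ACPI 2.0+
ACPI_10_TABLE_GUID = "eb9d2d30-2d88-11d3-9a16-0090273fc14d"  # ACPI 1.0

def find_rsdp(config_table):
    """config_table: iterable of (guid_str, pointer); prefer ACPI 2.0."""
    entries = {guid.lower(): ptr for guid, ptr in config_table}
    for guid in (ACPI_20_TABLE_GUID, ACPI_10_TABLE_GUID):
        if guid in entries:
            return entries[guid]
    return None  # no ACPI: the VM must then provide a complete FDT

# A VM exposing ACPI 2.0 at an illustrative address:
table = [("8868E871-E4F1-11D3-BC22-0080C73C8881", 0x7FF0000)]
print(hex(find_rsdp(table)))  # 0x7ff0000
```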
> 
> It is actually valid for the VM to provide both ACPI and FDT. In that
> scenario it is up to the OS to chose which it will use.
> 

ok, "either" becomes "at least one of" with the appropriate adjustments.

> > For more information about the arm and arm64 boot conventions, see
> > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > Linux kernel source tree.
> >
> > For more information about UEFI and ACPI booting, see [4] and [5].
> >
> >
> > VM Platform
> > -----------
> > The specification does not mandate any specific memory map.  The guest
> > OS must be able to enumerate all processing elements, devices, and
> > memory through HW description data (FDT, ACPI) or a bus-specific
> > mechanism such as PCI.
> >
> > The virtual platform must support at least one of the following ARM
> > execution states:
> >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> >
> > It is recommended to support both (2) and (3) on aarch64 capable
> > physical systems.
> >
> > The virtual hardware platform must provide a number of mandatory
> > peripherals:
> >
> >   Serial console:  The platform should provide a console,
> >   based on an emulated pl011, a virtio-console, or a Xen PV console.
> 
> For portable disk image, can Xen PV be dropped from the list? pl011 is part
> of SBSA, and virtio is getting standardised, but Xen PV is implementation
> specific.
> 

It would certainly be easier if everyone just used virtio and we
mandated a pl011, but I don't want to preclude Xen from this spec, and
the Xen folks don't seem likely to implement virtio or a pl011 based on
this spec.

What's the problem with the current recommendation?  Are you concerned
that people will build kernels without virtio-console and Xen PV
console?

> >
> >   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
> >   limits the number of virtual CPUs to 8 cores; newer GIC versions
> >   remove this limitation.
> >
> >   The ARM virtual timer and counter should be available to the VM as
> >   per the ARM Generic Timers specification in the ARM ARM [1].
> >
> >   A hotpluggable bus to support hotplug of at least block and network
> >   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
> >   bus.
> >
> >
> > We make the following recommendations for the guest OS kernel:
> >
> >   The guest OS must include support for GICv2 and any available newer
> >   version of the GIC architecture to maintain compatibility with older
> >   VM implementations.
> >
> >   It is strongly recommended to include support for all available
> >   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
> >   drivers in the guest OS kernel or initial ramdisk.
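For a Linux guest, that recommendation corresponds roughly to the following kernel configuration fragment (an illustrative, non-exhaustive selection of upstream Kconfig symbols; exact option names may vary by kernel version):

```
# virtio transports and devices
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_VIRTIO_BALLOON=y
# Xen PV frontends
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_HVC_XEN=y
# pl011 serial console
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
```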
> >
> >
> > Other common peripherals for block devices, networking, and more can
> > (and typically will) be provided, but OS software written and compiled
> > to run on ARM VMs cannot make any assumptions about which variations
> > of these should exist or which implementation they use (e.g. VirtIO or
> > Xen PV).  See "Hardware Description" above.
> >
> > Note that this platform specification is separate from the Linux kernel
> > concept of mach-virt, which merely specifies a machine model driven
> > purely from device tree, but does not mandate any peripherals or have any
> > mention of ACPI.
> >
> >
> > References
> > ----------
> > [1]: The ARM Architecture Reference Manual, ARMv8, Issue A.b
> >
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html
> >
> > [2]: ARM Server Base System Architecture
> >
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html
> >
> > [3]: The ARM Generic Interrupt Controller Architecture Specifications v2.0
> >
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html
> >
> > [4]: http://www.secretlab.ca/archives/27
> >
> > [5]:
> https://git.linaro.org/people/leif.lindholm/linux.git/blob/refs/heads/uefi-for-upstream:/Documentation/arm/uefi.txt
> >
> > [6]: UEFI Specification 2.4 Revision A
> > http://www.uefi.org/sites/default/files/resources/2_4_Errata_A.pdf

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:49:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInHn-0005EJ-23; Wed, 26 Feb 2014 22:49:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robherring2@gmail.com>) id 1WInHk-0005EC-MK
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:49:04 +0000
Received: from [85.158.143.35:26832] by server-2.bemta-4.messagelabs.com id
	61/7A-04779-F5F6E035; Wed, 26 Feb 2014 22:49:03 +0000
X-Env-Sender: robherring2@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393454941!8572209!1
X-Originating-IP: [209.85.220.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22848 invoked from network); 26 Feb 2014 22:49:02 -0000
Received: from mail-vc0-f174.google.com (HELO mail-vc0-f174.google.com)
	(209.85.220.174)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:49:02 -0000
Received: by mail-vc0-f174.google.com with SMTP id im17so1658795vcb.19
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:49:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=S1Qx8faCCoEpBpZmb1wJnKZlm9Q/tyHM4FTi5D4ZT48=;
	b=N7r4bYSp+pbeFO461qPCUFrrwZ+dmRRHN/+cMjM3jeyXfP3DNiknu1uCZUr2yeyia5
	GcsOLwo7Xm/5h5zDmFArIuSgONV0mdwSVo8ASl1mUHga7ygrDE+4dIJGiYHLHlx8Nv6n
	IMOVveBzx4o7BAhqYKth8j05xrShoJigiGW4msUjkz3MwXvi4q4WNIj1RhvoV+onC0eK
	YUhc73dK437/O/HpGHUExRG0e8dzQuCtixtNmket4W2Ypv32jY/V+7lpW8cl1VfOLNC+
	7xKP+6Yv4zxCw7dcfLd6gVyGNlrVLjfAST6aKqvytjqXVXJrlF9EsXUaUHjH/QZicVEX
	zsZA==
MIME-Version: 1.0
X-Received: by 10.58.94.195 with SMTP id de3mr182936veb.39.1393454941576; Wed,
	26 Feb 2014 14:49:01 -0800 (PST)
Received: by 10.221.28.8 with HTTP; Wed, 26 Feb 2014 14:49:01 -0800 (PST)
In-Reply-To: <5553754.0b4gMg5OS7@wuerfel>
References: <20140226183454.GA14639@cbox>
	<5553754.0b4gMg5OS7@wuerfel>
Date: Wed, 26 Feb 2014 16:49:01 -0600
Message-ID: <CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
From: Rob Herring <robherring2@gmail.com>
To: Arnd Bergmann <arnd@arndb.de>
Cc: cross-distro@lists.linaro.org, Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 1:55 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
>> ARM VM System Specification
>> ===========================
>>
>> Goal
>> ----
>> The goal of this spec is to allow suitably-built OS images to run on
>> all ARM virtualization solutions, such as KVM or Xen.
>>
>> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
>> they aim to be hypervisor agnostic.
>>
>> Note that simply adhering to the SBSA [2] is not a valid approach,
>> for example because the SBSA mandates EL2, which will not be available
>> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
>> may be controversial for some ARM VM implementations to support.
>> This spec also covers the aarch32 execution mode, not covered in the
>> SBSA.
>
> I would prefer if we can stay as close as possible to SBSA for individual
> hardware components, and only stray from it when there is a strong reason.
> pl011-subset doesn't sound like a significant problem to implement,
> especially as SBSA makes the DMA part of that optional. Can you
> elaborate on what hypervisor would have a problem with that?

The SBSA only specifies a very minimal pl011 subset which is only
suitable for early serial output. Not only is there no DMA, but there
are also no interrupts and possibly no input. I think it also assumes
the UART is already enabled and configured by firmware. It is all
somewhat pointless because the location is still not known or
discoverable by early code. Just mandating a real pl011 would have been
better, but I guess UART IP is value-add for some. There is also a
downside to the pl011: its tty name differs from that of x86 UARTs, and
the name gets exposed to users and to things like libvirt.
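To make the naming concern concrete, here is a small sketch of the device names a Linux guest typically ends up with for each console option (illustrative common defaults; actual names depend on kernel configuration and probe order):

```python
# Common Linux device names for the consoles under discussion.
# Illustrative defaults, not mandated by anything in this thread.
CONSOLE_TTY = {
    "pl011": "/dev/ttyAMA0",        # amba-pl011 driver
    "8250": "/dev/ttyS0",           # what x86-oriented tooling expects
    "virtio-console": "/dev/hvc0",
    "xen-pv-console": "/dev/hvc0",
}

def console_arg(console: str) -> str:
    """Kernel command-line console= argument for a given console type."""
    dev = CONSOLE_TTY[console][len("/dev/"):]
    return "console=" + dev

print(console_arg("pl011"))  # console=ttyAMA0
```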

I think the VM image just has to support the pl011, virtio-console, and
the Xen console. Arguably, an 8250 should also be included in the
interest of making things just work.

Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:49:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInHn-0005EJ-23; Wed, 26 Feb 2014 22:49:07 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robherring2@gmail.com>) id 1WInHk-0005EC-MK
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:49:04 +0000
Received: from [85.158.143.35:26832] by server-2.bemta-4.messagelabs.com id
	61/7A-04779-F5F6E035; Wed, 26 Feb 2014 22:49:03 +0000
X-Env-Sender: robherring2@gmail.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393454941!8572209!1
X-Originating-IP: [209.85.220.174]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22848 invoked from network); 26 Feb 2014 22:49:02 -0000
Received: from mail-vc0-f174.google.com (HELO mail-vc0-f174.google.com)
	(209.85.220.174)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:49:02 -0000
Received: by mail-vc0-f174.google.com with SMTP id im17so1658795vcb.19
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:49:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=S1Qx8faCCoEpBpZmb1wJnKZlm9Q/tyHM4FTi5D4ZT48=;
	b=N7r4bYSp+pbeFO461qPCUFrrwZ+dmRRHN/+cMjM3jeyXfP3DNiknu1uCZUr2yeyia5
	GcsOLwo7Xm/5h5zDmFArIuSgONV0mdwSVo8ASl1mUHga7ygrDE+4dIJGiYHLHlx8Nv6n
	IMOVveBzx4o7BAhqYKth8j05xrShoJigiGW4msUjkz3MwXvi4q4WNIj1RhvoV+onC0eK
	YUhc73dK437/O/HpGHUExRG0e8dzQuCtixtNmket4W2Ypv32jY/V+7lpW8cl1VfOLNC+
	7xKP+6Yv4zxCw7dcfLd6gVyGNlrVLjfAST6aKqvytjqXVXJrlF9EsXUaUHjH/QZicVEX
	zsZA==
MIME-Version: 1.0
X-Received: by 10.58.94.195 with SMTP id de3mr182936veb.39.1393454941576; Wed,
	26 Feb 2014 14:49:01 -0800 (PST)
Received: by 10.221.28.8 with HTTP; Wed, 26 Feb 2014 14:49:01 -0800 (PST)
In-Reply-To: <5553754.0b4gMg5OS7@wuerfel>
References: <20140226183454.GA14639@cbox>
	<5553754.0b4gMg5OS7@wuerfel>
Date: Wed, 26 Feb 2014 16:49:01 -0600
Message-ID: <CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
From: Rob Herring <robherring2@gmail.com>
To: Arnd Bergmann <arnd@arndb.de>
Cc: cross-distro@lists.linaro.org, Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 1:55 PM, Arnd Bergmann <arnd@arndb.de> wrote:
> On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
>> ARM VM System Specification
>> ===========================
>>
>> Goal
>> ----
>> The goal of this spec is to allow suitably-built OS images to run on
>> all ARM virtualization solutions, such as KVM or Xen.
>>
>> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
>> they aim to be hypervisor agnostic.
>>
>> Note that simply adhering to the SBSA [2] is not a valid approach,
>> for example because the SBSA mandates EL2, which will not be available
>> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
>> may be controversial for some ARM VM implementations to support.
>> This spec also covers the aarch32 execution mode, not covered in the
>> SBSA.
>
> I would prefer if we can stay as close as possible to SBSA for individual
> hardware components, and only stray from it when there is a strong reason.
> pl011-subset doesn't sound like a significant problem to implement,
> especially as SBSA makes the DMA part of that optional. Can you
> elaborate on what hypervisor would have a problem with that?

The SBSA only specs a very minimal pl011 subset which is only
suitable for early serial output. Not only is there no DMA, but there
are no interrupts and maybe no input. I think it also assumes the uart
is already enabled and configured by firmware. It is all somewhat
pointless because the location is still not known or discoverable by
early code. Just mandating a real pl011 would have been better, but I
guess uart IP is value-add for some. There is also a downside to the
pl011: its tty name differs from that of x86 uarts, and the name gets
exposed to users and to things like libvirt.

I think the VM image just has to support pl011, virtio-console, and
xen console. Arguably, an 8250 should also be included in the interest
of making things just work.

Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:52:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:52:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInL8-0005Pq-NK; Wed, 26 Feb 2014 22:52:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WInL7-0005Ph-3m
	for xen-devel@lists.xensource.com; Wed, 26 Feb 2014 22:52:33 +0000
Received: from [85.158.139.211:20180] by server-7.bemta-5.messagelabs.com id
	F5/F3-14867-0307E035; Wed, 26 Feb 2014 22:52:32 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393455149!6457105!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6489 invoked from network); 26 Feb 2014 22:52:31 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:52:31 -0000
X-IronPort-AV: E=Sophos;i="4.97,550,1389744000"; d="scan'208";a="106111538"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 26 Feb 2014 22:52:29 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 17:52:28 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WInL2-0006Fw-HP;
	Wed, 26 Feb 2014 22:52:28 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WInL2-0001TW-Db;
	Wed, 26 Feb 2014 22:52:28 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25309-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Wed, 26 Feb 2014 22:52:28 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25309: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25309 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25309/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-pv          15 guest-localmigrate/x10    fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                6dba6ecba7d937e9b04b46f6cdff25e574f64857
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7066 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          fail    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2392134 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 22:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 22:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInN2-0005Yr-F3; Wed, 26 Feb 2014 22:54:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1WInN0-0005Yk-Db
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:54:30 +0000
Received: from [193.109.254.147:41142] by server-7.bemta-14.messagelabs.com id
	2C/CA-23424-5A07E035; Wed, 26 Feb 2014 22:54:29 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393455268!7072424!1
X-Originating-IP: [209.85.215.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4626 invoked from network); 26 Feb 2014 22:54:29 -0000
Received: from mail-la0-f47.google.com (HELO mail-la0-f47.google.com)
	(209.85.215.47)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:54:28 -0000
Received: by mail-la0-f47.google.com with SMTP id y1so1120933lam.6
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:54:28 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=PNxJz7buxcGSQCfBtPepApv9f3DMnKfULTeNU4hoNl4=;
	b=KntS9gknvJRJssrhUGJHWIiPTbyb0gD4g+hq2XfknkdBikFTTpXW+K9gAAJSxXX/RJ
	8s/EiQrtJmOLyLWq39JLHfCNYO6KmggBPcOsDwELd3BId9tdJ8N/p/IQuzcsyrSaNkKF
	da5LYHUbdxhjni5JnpZNiYoZHfkjGKCwv6R1L3HrU1cpR2h2SLh8EqLo2MsvpeuymVNZ
	vKy/bFVxHkpQZycprGlwq3AOxBU3jMrySPpYzqW43GqW2IV8gmA3AIxdP+syiprHT7Yd
	Sbs3uyRobxysRULVy9uOklmY0LFfSd8FI1YzfduesCX8Fc/vLX839gBKz+nEEhjaK0xd
	s5JQ==
X-Gm-Message-State: ALoCoQntgOtAPU4h1O2Qijfb6sUt3bSnNRon56PeJ+pgGkVHrdWlj/jsIiF8hrvZ7rXKx/sGPvlR
X-Received: by 10.152.19.166 with SMTP id g6mr4081473lae.21.1393455268269;
	Wed, 26 Feb 2014 14:54:28 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.118.34 with HTTP; Wed, 26 Feb 2014 14:54:08 -0800 (PST)
In-Reply-To: <CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 26 Feb 2014 22:54:08 +0000
Message-ID: <CAFEAcA8u6u8Hh=qn8XE2Evc93_-pkZPSDSShm-eiSn-vcsYgKA@mail.gmail.com>
To: Rob Herring <robherring2@gmail.com>
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26 February 2014 22:49, Rob Herring <robherring2@gmail.com> wrote:
> The SBSA only spec's a very minimal pl011 subset which is only
> suitable for early serial output. Not only is there no DMA, but there
> are no interrupts and maybe no input.

No interrupts on a UART in a VM (especially an emulated one)
is a good way to spend all your time bouncing around in I/O
emulation of the "hey can we send another byte yet?" register...
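
[Editor's note: the cost Peter describes can be sketched in a few lines of C. This is an illustrative model only, not real driver code: the register offsets (DR at 0x00, FR at 0x18) and the TXFF bit follow the PL011 TRM, but the struct-based access and the function name are assumptions made for the example.]

```c
#include <stdint.h>

/* Minimal PL011 register view: only what a polled early console needs.
   Offsets (DR at 0x00, FR at 0x18) follow the PL011 TRM; modelling the
   device as a plain struct is purely for illustration. */
struct pl011 {
    volatile uint32_t dr;        /* 0x00: data register */
    uint32_t pad[5];             /* 0x04..0x14: unused here */
    volatile uint32_t fr;        /* 0x18: flag register */
};
#define PL011_FR_TXFF (1u << 5)  /* transmit FIFO full */

/* Write one byte, spinning until the TX FIFO has room. In a VM with no
   interrupts, every read of fr below is an MMIO trap to the hypervisor. */
static void pl011_putc(struct pl011 *uart, char c)
{
    while (uart->fr & PL011_FR_TXFF)
        ;  /* the "can we send another byte yet?" poll */
    uart->dr = (uint32_t)(unsigned char)c;
}
```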

> I think it also assumes the uart
> is enabled and configured already by firmware. It is all somewhat
> pointless because the location is still not known or discoverable by
> early code.

This sounds like we should specify and implement something
so we can provide this information in the device tree. Telling
the kernel where the hardware is is exactly what DT is for, right?
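
[Editor's note: concretely, that description would be a UART node in the guest's device tree. The fragment below is a hypothetical example; the base address, interrupt number and clock phandles are illustrative (loosely modelled on QEMU's "virt" machine), not mandated by any spec.]

```dts
/* Hypothetical guest DT node advertising a PL011 to the kernel.
   Address, interrupt and clocks here are illustrative only. */
uart0: pl011@9000000 {
        compatible = "arm,pl011", "arm,primecell";
        reg = <0x0 0x09000000 0x0 0x1000>;
        interrupts = <0 1 4>;   /* SPI 1, level-triggered, high */
        clocks = <&apb_pclk>, <&apb_pclk>;
        clock-names = "uartclk", "apb_pclk";
};
```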

> I think the VM image just has to support pl011, virtio-console, and
> xen console. Arguably, an 8250 should also be included in interest of
> making things just work.

What does the 8250 have to recommend it over just providing
the PL011?

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 23:00:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 23:00:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInSQ-0005o9-Ki; Wed, 26 Feb 2014 23:00:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carnold@suse.com>) id 1WInSO-0005o4-RN
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 23:00:05 +0000
Received: from [85.158.137.68:47346] by server-2.bemta-3.messagelabs.com id
	C2/BE-06531-3F17E035; Wed, 26 Feb 2014 23:00:03 +0000
X-Env-Sender: carnold@suse.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393455601!4466132!1
X-Originating-IP: [137.65.248.74]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5176 invoked from network); 26 Feb 2014 23:00:03 -0000
Received: from prv-mh.provo.novell.com (HELO prv-mh.provo.novell.com)
	(137.65.248.74)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 26 Feb 2014 23:00:03 -0000
Received: from INET-PRV-MTA by prv-mh.provo.novell.com
	with Novell_GroupWise; Wed, 26 Feb 2014 16:00:00 -0700
Message-Id: <530E0F7E02000091000B8C20@prv-mh.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 14.0.0 
Date: Wed, 26 Feb 2014 15:59:58 -0700
From: "Charles Arnold" <carnold@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Subject: [Xen-devel] Support for btrfs in pv-grub
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Code to support btrfs in pv-grub was added almost two years ago (c/s 25154),
but building it doesn't seem to be enabled in stubdom/grub/Makefile.
Was this functionality ever intended to work?
Does anyone know the status of this code?



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 23:09:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 23:09:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInak-0005xp-Vu; Wed, 26 Feb 2014 23:08:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robherring2@gmail.com>) id 1WInaj-0005xk-DA
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 23:08:41 +0000
Received: from [85.158.139.211:29240] by server-8.bemta-5.messagelabs.com id
	6D/A6-05298-8F37E035; Wed, 26 Feb 2014 23:08:40 +0000
X-Env-Sender: robherring2@gmail.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393456118!6512254!1
X-Originating-IP: [209.85.220.176]
X-SpamReason: No, hits=0.3 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22288 invoked from network); 26 Feb 2014 23:08:39 -0000
Received: from mail-vc0-f176.google.com (HELO mail-vc0-f176.google.com)
	(209.85.220.176)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 23:08:39 -0000
Received: by mail-vc0-f176.google.com with SMTP id la4so1671574vcb.21
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 15:08:38 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=1HF2p768nhPmDJ0wALzHdHNPHAnQ/Kfaexc50mU4r9E=;
	b=jEUA/56LMSMPrAxpOyl0J9toGebRP17FYvj0k9xz/91bVZAc5naRtLlLBH/KEOy869
	KKZkLYRpZLhaSdEvsvTiK8FL8a5WIee3Dj0Rnr8tc9gFWFHBFKnpJrZFoHYHhwQheG6W
	G0SK5hTWGTJK9UwDtXMTqjCYXGjekiUm7q129bcsE1qa3HRBI8OKugCUe3i4kbNYx4+n
	h+xghekPQZhzAAhdkeb7hN07sJ5W0goTzyT73FTYoyYXPK5V43Z7ywNLA/09NW7Ix14a
	of73g9GieSY9OxrL5oE/93thMKvOFpjIBWpldXKMT5qdnZQowyegl7hn7rxNoWS/x51c
	/+SQ==
MIME-Version: 1.0
X-Received: by 10.220.161.132 with SMTP id r4mr28630vcx.29.1393456118416; Wed,
	26 Feb 2014 15:08:38 -0800 (PST)
Received: by 10.221.28.8 with HTTP; Wed, 26 Feb 2014 15:08:38 -0800 (PST)
In-Reply-To: <CAFEAcA8u6u8Hh=qn8XE2Evc93_-pkZPSDSShm-eiSn-vcsYgKA@mail.gmail.com>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
	<CAFEAcA8u6u8Hh=qn8XE2Evc93_-pkZPSDSShm-eiSn-vcsYgKA@mail.gmail.com>
Date: Wed, 26 Feb 2014 17:08:38 -0600
Message-ID: <CAL_JsqJ+mXadxLHqFw6uXvM9=juC-emKT7G-d-_PLYbXNSX7Ew@mail.gmail.com>
From: Rob Herring <robherring2@gmail.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 4:54 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 26 February 2014 22:49, Rob Herring <robherring2@gmail.com> wrote:
>> The SBSA only spec's a very minimal pl011 subset which is only
>> suitable for early serial output. Not only is there no DMA, but there
>> are no interrupts and maybe no input.
>
> No interrupts on a UART in a VM (especially an emulated one)
> is a good way to spend all your time bouncing around in I/O
> emulation of the "hey can we send another byte yet?" register...
>
>> I think it also assumes the uart
>> is enabled and configured already by firmware. It is all somewhat
>> pointless because the location is still not known or discoverable by
>> early code.
>
> This sounds like we should specify and implement something
> so we can provide this information in the device tree. Telling
> the kernel where the hardware is is exactly what DT is for, right?

Yes, I'm looking into that, but that's not really a concern for this
doc as early output is a debug feature.

>> I think the VM image just has to support pl011, virtio-console, and
>> xen console. Arguably, an 8250 should also be included in interest of
>> making things just work.
>
> What does the 8250 have to recommend it over just providing
> the PL011?

As I mentioned, it will just work for anything that expects the serial
port to be ttyS0 as on x86 rather than ttyAMA0. Really, I'd like to
see ttyAMA go away, but evidently that's not an easily fixed issue and
it is an ABI.

Rob

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 23:13:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 23:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInfT-00068K-Qq; Wed, 26 Feb 2014 23:13:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1WInfS-00068D-LR
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 23:13:34 +0000
Received: from [85.158.143.35:30449] by server-3.bemta-4.messagelabs.com id
	8C/1A-11539-E157E035; Wed, 26 Feb 2014 23:13:34 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393456411!8597926!1
X-Originating-IP: [198.145.11.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13369 invoked from network); 26 Feb 2014 23:13:32 -0000
Received: from smtp.codeaurora.org (HELO smtp.codeaurora.org) (198.145.11.231)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 26 Feb 2014 23:13:32 -0000
Received: from smtp.codeaurora.org (localhost [127.0.0.1])
	by smtp.codeaurora.org (Postfix) with ESMTP id BD4F313EF78;
	Wed, 26 Feb 2014 23:13:29 +0000 (UTC)
Received: by smtp.codeaurora.org (Postfix, from userid 486)
	id AC30213F01B; Wed, 26 Feb 2014 23:13:29 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-caf-smtp.dmz.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-2.9 required=2.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham version=3.3.1
Received: from [10.228.82.110] (rrcs-67-52-130-30.west.biz.rr.com
	[67.52.130.30])
	(using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: cov@smtp.codeaurora.org)
	by smtp.codeaurora.org (Postfix) with ESMTPSA id 1C09B13F126;
	Wed, 26 Feb 2014 23:13:28 +0000 (UTC)
Message-ID: <530E7517.2010806@codeaurora.org>
Date: Wed, 26 Feb 2014 18:13:27 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130106 Thunderbird/17.0.2
MIME-Version: 1.0
To: Rob Herring <robherring2@gmail.com>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
In-Reply-To: <CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
X-Virus-Scanned: ClamAV using ClamSMTP
Cc: cross-distro@lists.linaro.org, Ian Campbell <ian.campbell@citrix.com>,
	Arnd Bergmann <arnd@arndb.de>, Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 05:49 PM, Rob Herring wrote:
> On Wed, Feb 26, 2014 at 1:55 PM, Arnd Bergmann <arnd@arndb.de> wrote:
>> On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
>>> ARM VM System Specification
>>> ===========================
>>>
>>> Goal
>>> ----
>>> The goal of this spec is to allow suitably-built OS images to run on
>>> all ARM virtualization solutions, such as KVM or Xen.
>>>
>>> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
>>> they aim to be hypervisor agnostic.
>>>
>>> Note that simply adhering to the SBSA [2] is not a valid approach,
>>> for example because the SBSA mandates EL2, which will not be available
>>> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
>>> may be controversial for some ARM VM implementations to support.
>>> This spec also covers the aarch32 execution mode, not covered in the
>>> SBSA.
>>
>> I would prefer if we can stay as close as possible to SBSA for individual
>> hardware components, and only stray from it when there is a strong reason.
>> pl011-subset doesn't sound like a significant problem to implement,
>> especially as SBSA makes the DMA part of that optional. Can you
>> elaborate on what hypervisor would have a problem with that?
> 
> The SBSA only spec's a very minimal pl011 subset which is only
> suitable for early serial output. Not only is there no DMA, but there
> are no interrupts and maybe no input. I think it also assumes the uart
> is enabled and configured already by firmware. It is all somewhat
> pointless because the location is still not known or discoverable by
> early code. Just mandating a real pl011 would have been better, but I
> guess uart IP is value add for some. There is a downside to the pl011
> which is the tty name is different from x86 uarts which gets exposed
> to users and things like libvirt.

Can you just use /dev/console? That's what I use in my init scripts for
portability from PL011 to DCC.

Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 23:15:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 23:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WInhB-0006EX-EL; Wed, 26 Feb 2014 23:15:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1WInh9-0006EQ-9d
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 23:15:19 +0000
Received: from [193.109.254.147:46882] by server-8.bemta-14.messagelabs.com id
	67/E5-18529-6857E035; Wed, 26 Feb 2014 23:15:18 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393456517!7077716!1
X-Originating-IP: [209.85.215.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8316 invoked from network); 26 Feb 2014 23:15:18 -0000
Received: from mail-la0-f42.google.com (HELO mail-la0-f42.google.com)
	(209.85.215.42)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 23:15:18 -0000
Received: by mail-la0-f42.google.com with SMTP id ec20so1158830lab.29
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 15:15:17 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=Av33mDz2TMqaLzoajzEnQQhq2eAhuB2llnjlCQ/p81c=;
	b=LLfPVtJiJRZf1JEfA3WGopVhRzILpplJNuTCAAFEI1vMJNrvlhbqJGk5typqNkXahy
	ZBGaOSEeeMj80j45Ad+yCR51JnIoTeeFFcgMIflKrQLMk5qC1I1yY59QEEHjrx+ccKRd
	+EbF8iLtb9hsAErIcMuTvTFZF63nnIcEsAw38XtZCMIt0166zHAJQuDASSIVRCDRiod6
	yY/kgkVfZMIhCr9c8lP6NTd/WHWHBeZVJOErUNQrK1tKJvj28yYCar/Wfwk/Qd4IGZUv
	fGfH42OEyjCyHtEpGOQjnT/RRAU76y4ms3X9IVHRrkITi3ulZsPduDGx/gmgrUm8PWx4
	n/8w==
X-Gm-Message-State: ALoCoQkN83zGcUaFi1ZRw9v+v3AhBP/pDbpYDxySCa8YNovApddKCRmdTykUBzbKvp2/W2uT8Ava
X-Received: by 10.112.140.202 with SMTP id ri10mr3941895lbb.9.1393456517138;
	Wed, 26 Feb 2014 15:15:17 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.118.34 with HTTP; Wed, 26 Feb 2014 15:14:56 -0800 (PST)
In-Reply-To: <CAL_JsqJ+mXadxLHqFw6uXvM9=juC-emKT7G-d-_PLYbXNSX7Ew@mail.gmail.com>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
	<CAFEAcA8u6u8Hh=qn8XE2Evc93_-pkZPSDSShm-eiSn-vcsYgKA@mail.gmail.com>
	<CAL_JsqJ+mXadxLHqFw6uXvM9=juC-emKT7G-d-_PLYbXNSX7Ew@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Wed, 26 Feb 2014 23:14:56 +0000
Message-ID: <CAFEAcA9nq_V4VGG=j6FWFatJBnTuYrU9-9U9eGXcQz0zimrasQ@mail.gmail.com>
To: Rob Herring <robherring2@gmail.com>
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26 February 2014 23:08, Rob Herring <robherring2@gmail.com> wrote:
> On Wed, Feb 26, 2014 at 4:54 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
>> What does the 8250 have to recommend it over just providing
>> the PL011?
>
> As I mentioned, it will just work for anything that expects the serial
> port to be ttyS0 as on x86 rather than ttyAMA0.

This doesn't seem very compelling to me. Either userspace
should just be fixed or the kernel should implement a namespace
for serial ports which doesn't randomly change just because the
particular h/w driver doing the implementation is different.
We shouldn't be papering over other peoples' problems in the VM
spec IMHO.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

To: Rob Herring <robherring2@gmail.com>
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26 February 2014 23:08, Rob Herring <robherring2@gmail.com> wrote:
> On Wed, Feb 26, 2014 at 4:54 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
>> What does the 8250 have to recommend it over just providing
>> the PL011?
>
> As I mentioned, it will just work for anything that expects the serial
> port to be ttyS0 as on x86 rather than ttyAMA0.

This doesn't seem very compelling to me. Either userspace
should just be fixed or the kernel should implement a namespace
for serial ports which doesn't randomly change just because the
particular h/w driver doing the implementation is different.
We shouldn't be papering over other people's problems in the VM
spec IMHO.
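A stable userspace-visible name of the sort described above can already be approximated with udev today: a rule like the following (an illustrative sketch only; the "vmconsole" alias name is made up) gives either UART a fixed alias, so userspace need not care whether the underlying device is ttyS0 or ttyAMA0.

```
# /etc/udev/rules.d/90-vm-console.rules -- illustrative; "vmconsole" is a made-up alias
SUBSYSTEM=="tty", KERNEL=="ttyS0",   SYMLINK+="vmconsole"
SUBSYSTEM=="tty", KERNEL=="ttyAMA0", SYMLINK+="vmconsole"
```

Applications would then open /dev/vmconsole regardless of which serial driver the VM presents.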

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Wed Feb 26 23:35:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Feb 2014 23:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIo0R-0006WO-3Q; Wed, 26 Feb 2014 23:35:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <myselfdushyantbehl@gmail.com>) id 1WIo0P-0006WJ-6v
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 23:35:13 +0000
Received: from [85.158.143.35:46054] by server-2.bemta-4.messagelabs.com id
	AE/BF-04779-03A7E035; Wed, 26 Feb 2014 23:35:12 +0000
X-Env-Sender: myselfdushyantbehl@gmail.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393457710!8596807!1
X-Originating-IP: [209.85.214.194]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_10_20,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15069 invoked from network); 26 Feb 2014 23:35:11 -0000
Received: from mail-ob0-f194.google.com (HELO mail-ob0-f194.google.com)
	(209.85.214.194)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 23:35:11 -0000
Received: by mail-ob0-f194.google.com with SMTP id vb8so506650obc.5
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 15:35:09 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:cc:content-type;
	bh=laaNzlohWZq50n59A+PUo2fYGfpU9O0KHMJ9lSPd51o=;
	b=LU0sETE+GLCmHDjAxQiSnU5XaPNQY9dsXOphOFiItIaVAT91yY/JWs+2HEZ5vnPD6R
	kwuuhp9zbQs5ibYY2djFKM1Z73rPWh3qt2rehiLXPuJqOYcpy5X0DU53zNkJVzErrs5b
	lmJn2sI/ukfIn7Qq7K1p/dXLecglfCcqHYjVIuwYeRmUfeB+kiLtbH+IT5U1wBU1P8Tr
	wYuPUGRMb3Hjaw1Og5KuWc9dKFe+dsRdDQdhu8rF1LQzIJAX5HcBZLV8VJnYWir+zQk4
	W2V/LaHxuJMIFhG3TnLxDnAoL5eUBsOagnv547fb9Sa8cPpLk0yb7MZtKuOBDimJdALS
	xFng==
MIME-Version: 1.0
X-Received: by 10.60.246.10 with SMTP id xs10mr5390750oec.18.1393457709716;
	Wed, 26 Feb 2014 15:35:09 -0800 (PST)
Received: by 10.76.189.108 with HTTP; Wed, 26 Feb 2014 15:35:09 -0800 (PST)
Date: Thu, 27 Feb 2014 05:05:09 +0530
Message-ID: <CAHF350KPKzu4s6xETtptKby0a=bukRCn0jJA9sA-vnMDARFrag@mail.gmail.com>
From: Dushyant Behl <myselfdushyantbehl@gmail.com>
To: george.dunlap@eu.citrix.com, stefano.stabellini@eu.citrix.com
Cc: xen-devel@lists.xen.org
Subject: [Xen-devel] GSOC 2014 - Project Queries
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4490805181157981408=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4490805181157981408==
Content-Type: multipart/alternative; boundary=001a1136989acc1add04f357a79f

--001a1136989acc1add04f357a79f
Content-Type: text/plain; charset=ISO-8859-1

Hi All,

I'm very sorry if I have mailed the wrong mailing list.
I'm a master's student in computer science, very enthusiastic about
virtualization and hypervisors, and I would like to become part of the
open source community by taking part in the GSoC 2014 program with Xen. I
have looked at the Xen project ideas and would be interested in working on
them; I am specifically curious about two projects: *allowing guests to
boot with a passed-through GPU as the primary display* and *VM snapshots*.
I have started by looking into the Xen architecture, and it would be great
to get some starting points or to get in touch with someone regarding the
project. Any suggestions regarding the same would be appreciated.

Also, I wanted to know whether there is a separate IRC channel for GSoC; I'm
available on the ##xen channel under the nick *theVoodooChild*.

Looking forward to any reply.

Thanks,
Dushyant

--001a1136989acc1add04f357a79f--


--===============4490805181157981408==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4490805181157981408==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 00:04:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 00:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIoSY-0007GS-IU; Thu, 27 Feb 2014 00:04:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WIoSX-0007GN-Aj
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 00:04:17 +0000
Received: from [85.158.139.211:11433] by server-1.bemta-5.messagelabs.com id
	8E/BE-12859-0018E035; Thu, 27 Feb 2014 00:04:16 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393459453!6463144!1
X-Originating-IP: [209.85.220.47]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17234 invoked from network); 27 Feb 2014 00:04:15 -0000
Received: from mail-pa0-f47.google.com (HELO mail-pa0-f47.google.com)
	(209.85.220.47)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 00:04:15 -0000
Received: by mail-pa0-f47.google.com with SMTP id kp14so1683454pab.6
	for <xen-devel@lists.xenproject.org>;
	Wed, 26 Feb 2014 16:04:13 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=mnFxQPLewIeniPRtL1tUSV8P9eQTo5aNI5DJiBGNdRI=;
	b=tFbcrDp/mT93kZkyCPJ9zykCY2DYgFXva+IgD5PerjeCOZ56LbcCgTXGjqd4L8WlPR
	sVV+gBdOyToaLlR7xZ4Tk/S/VizmuxDqewwkucGhdhFAx7AriDuCRgYOz75JuOVkvpDc
	ml1TlBcrZl04SJuqZtwMU0AjiY3yPcXw5+7mck/vPA0+/e3/hYdGTbRdWfKM0x3Jl4gg
	kr86UA7kdnCqcS0iv/V+YqqjEK8TUOuV3P/NkC+uZU8EsxfkJoHWsETPyrTO1DpQUux1
	NqwjOQ4LA2NONV7myQJLf5k0XJQenDRGU1kLWUv7p1VXVVl15He6QTv+Wb8wP+x6Fv3C
	eZXw==
X-Received: by 10.66.163.2 with SMTP id ye2mr11855258pab.110.1393459453170;
	Wed, 26 Feb 2014 16:04:13 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-186.amazon.com. [54.240.196.186])
	by mx.google.com with ESMTPSA id yd4sm7290512pbc.13.2014.02.26.16.04.10
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 16:04:11 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Wed, 26 Feb 2014 16:04:07 -0800
Date: Wed, 26 Feb 2014 16:04:07 -0800
From: Matt Wilson <msw@linux.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
	<527A113C02000078000FFF99@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <527A113C02000078000FFF99@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Zhu Yanhai <gaoyang.zyh@taobao.com>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Charles Wang <muming.wq@taobao.com>, Shen Yiben <zituan@taobao.com>,
	David Vrabel <david.vrabel@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Zhu Yanhai <zhu.yanhai@gmail.com>, Wan Jia <jia.wanj@alibaba-inc.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Nov 06, 2013 at 08:51:56AM +0000, Jan Beulich wrote:
> >>> On 06.11.13 at 07:41, Zhu Yanhai <zhu.yanhai@gmail.com> wrote:
> > As we know Intel X86's CR0.TS is a sticky bit, which means once set
> > it remains set until cleared by some software routines, in other words,
> > the exception handler expects the bit is set when it starts to execute.
> 
> Since when would that be the case? CR0.TS is entirely unaffected
> by exception invocations according to all I know. All that is known
> here is that #NM wouldn't have occurred in the first place if CR0.TS
> was clear.
> 
> > However, Xen doesn't simulate this behavior quite faithfully for PV guests:
> > vcpu_restore_fpu_lazy() clears CR0.TS unconditionally at the very beginning,
> > so the guest kernel's #NM handler runs with CR0.TS cleared. Generally
> > speaking, that's fine, since the Linux kernel executes the exception
> > handler with interrupts disabled, and a sane #NM handler will clear the
> > bit anyway before it exits. But there's a catch: if this is the first FPU
> > trap for the process, the Linux kernel must allocate a piece of slab
> > memory in which to save the FPU registers, which opens a scheduling
> > window, as the memory allocation might sleep -- with CR0.TS kept clear!
> > 
> > [see the code below in linux kernel,
> 
> You're apparently referring to the pvops kernel.
> 
> > void math_state_restore(void)
> > {
> >     struct task_struct *tsk = current;
> > 
> >     if (!tsk_used_math(tsk)) {
> >         local_irq_enable();
> >         /*
> >          * does a slab alloc which can sleep
> >          */
> >         if (init_fpu(tsk)) {                 <<<< Here it might open a schedule window
> >             /*
> >              * ran out of memory!
> >              */
> >             do_group_exit(SIGKILL);
> >             return;
> >         }
> >         local_irq_disable();
> >     }
> > 
> >     __thread_fpu_begin(tsk);    <<<< Here the process gets marked as a 'fpu user'
> >                                          after the schedule window
> > 
> >     /*
> >      * Paranoid restore. send a SIGSEGV if we fail to restore the state.
> >      */
> >     if (unlikely(restore_fpu_checking(tsk))) {
> >         drop_init_fpu(tsk);
> >         force_sig(SIGSEGV, tsk);
> >         return;
> >     }
> > 
> >     tsk->fpu_counter++;
> > }
> > ]
> 
> May I direct your attention to the XenoLinux one:
> 
> asmlinkage void math_state_restore(void)
> {
> 	struct task_struct *me = current;
> 
> 	/* NB. 'clts' is done for us by Xen during virtual trap. */
> 	__get_cpu_var(xen_x86_cr0) &= ~X86_CR0_TS;
> 	if (!used_math())
> 		init_fpu(me);
> 	restore_fpu_checking(&me->thread.i387.fxsave);
> 	task_thread_info(me)->status |= TS_USEDFPU;
> }
> 
> Note the comment close to the beginning - the fact that CR0.TS
> is clear at exception handler entry is actually part of the PV ABI,
> i.e. by altering hypervisor behavior here you break all forward
> ported kernels.
> 
> Nevertheless I agree that there is an issue, but this needs to be
> fixed on the Linux side (hence adding the Linux maintainers to Cc);
> this issue was introduced way back in 2.6.26 (before that there
> was no allocation on that path). It's not clear though whether
> using GFP_ATOMIC for the allocation would be preferable over
> stts() before calling the allocation function (and clts() if it
> succeeded), or whether perhaps to defer the stts() until we
> actually know the task is being switched out. It's going to be an
> ugly, Xen-specific hack in any event.

Was there ever a resolution to this problem? I never saw a comment
from the Linux Xen PV maintainers.

--msw


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 01:32:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 01:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIpox-0003Ph-GW; Thu, 27 Feb 2014 01:31:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WIpow-0003Pc-7w
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 01:31:30 +0000
Received: from [85.158.137.68:21567] by server-14.bemta-3.messagelabs.com id
	3A/38-08196-1759E035; Thu, 27 Feb 2014 01:31:29 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393464687!4428195!1
X-Originating-IP: [192.55.52.88]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTkyLjU1LjUyLjg4ID0+IDM3NDcyNQ==\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6980 invoked from network); 27 Feb 2014 01:31:28 -0000
Received: from mga01.intel.com (HELO mga01.intel.com) (192.55.52.88)
	by server-6.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 01:31:28 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by fmsmga101.fm.intel.com with ESMTP; 26 Feb 2014 17:31:27 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,551,1389772800"; d="scan'208";a="482557796"
Received: from fmsmsx104.amr.corp.intel.com ([10.19.9.35])
	by fmsmga001.fm.intel.com with ESMTP; 26 Feb 2014 17:31:15 -0800
Received: from FMSMSX110.amr.corp.intel.com (10.18.116.10) by
	FMSMSX104.amr.corp.intel.com (10.19.9.35) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 17:31:15 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	fmsmsx110.amr.corp.intel.com (10.18.116.10) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 17:31:15 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Thu, 27 Feb 2014 09:31:10 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Thread-Topic: [Xen-devel] Single step in HVM domU on Intel machine may see
	wrong DB6
Thread-Index: AQHPLhb/TXz9W/5OvEmwKI7dSE2uP5q+4NwQ///Ki4CACFahcIAAM1kAgAEcpdA=
Date: Thu, 27 Feb 2014 01:31:09 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F7F31@SHSMSX104.ccr.corp.intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
	<5306E5D3.6000302@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
	<530E1D9E020000780011F938@nat28.tlf.novell.com>
In-Reply-To: <530E1D9E020000780011F938@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>, "Dong,
	Eddie" <eddie.dong@intel.com>, "Nakajima, Jun" <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-27:
>>>> On 26.02.14 at 06:15, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>> @@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>> -            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>> -                goto exit_and_crash;
>> -            domain_pause_for_debugger();
>> +            if ( v->domain->debugger_attached )
>> +                domain_pause_for_debugger();
>> +            else
>> +            {
>> +                __restore_debug_registers(v);
>> +                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
>> +            }
> 
> I suppose you need to set DR6.BS after restoring the registers?

Right, but that alone is not enough. If flag_dr_dirty is set, we need to read the registers back from hardware. Otherwise, we restore them from arch.debugreg and set DR6 to the exit qualification.

> 
> Also, the change looks rather simple - is that really correct for both
> cpu_has_monitor_trap_flag and !cpu_has_monitor_trap_flag cases?
> 

Currently, MTF is what gdb uses for single-stepping, and with MTF enabled we should never reach this path, so these changes should not affect the MTF case. I also used gdb to debug a guest via the MTF approach with this patch applied, and everything seems to work well.
It still needs more testing from Juergen, though.

>> BTW: I also think we should clear the CPU_BASED_MOV_DR_EXITING bit
>> in __restore_debug_registers(). After restoring the debug registers, we
>> should not trap any DR access until the VCPU is scheduled out again.
>> Not sure whether I am wrong.
>> 
>> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
>> index b128e81..56a3140 100644
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -394,6 +394,8 @@ static void __restore_debug_registers(struct vcpu *v)
>>      write_debugreg(3, v->arch.debugreg[3]);
>>      write_debugreg(6, v->arch.debugreg[6]);
>>      /* DR7 is loaded from the VMCS. */
>> +    v->arch.hvm_vmx.exec_control &= ~CPU_BASED_MOV_DR_EXITING;
>> +    vmx_update_cpu_exec_control(v);
>>  }
>>  
>>  /*
> 
> That's being done by at least one of its callers (vmx_dr_access())
> already, and I think it was purposefully not done in other cases.

Yes, the question is why only one caller does this. Per my understanding, after restoring the debug registers the hypervisor should allow the guest to access them until the VCPU is scheduled out again and vmx_save_dr() is called.
Take injecting a debug trap into the guest as an example: the hypervisor restores the debug registers to hardware, so after returning to the guest we should allow it to read the DRs without trapping again. But currently this still causes a DR-access VM exit, the hypervisor restores the registers a second time, and only then clears the DR-exiting bit in that vmexit handler. Is this the behavior we expect?

> 
> Jan


Best regards,
Yang


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 03:10:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 03:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIrMO-0004Sp-Mx; Thu, 27 Feb 2014 03:10:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WIrMM-0004Sk-Ry
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 03:10:07 +0000
Received: from [85.158.137.68:45987] by server-9.bemta-3.messagelabs.com id
	79/2F-10184-D8CAE035; Thu, 27 Feb 2014 03:10:05 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393470604!4488002!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21377 invoked from network); 27 Feb 2014 03:10:05 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 03:10:05 -0000
Received: from orsmga002.jf.intel.com ([10.7.209.21])
	by orsmga101.jf.intel.com with ESMTP; 26 Feb 2014 19:10:03 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,551,1389772800"; d="scan'208";a="490613949"
Received: from fmsmsx106.amr.corp.intel.com ([10.19.9.37])
	by orsmga002.jf.intel.com with ESMTP; 26 Feb 2014 19:10:03 -0800
Received: from fmsmsx102.amr.corp.intel.com (10.19.9.53) by
	FMSMSX106.amr.corp.intel.com (10.19.9.37) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 19:10:02 -0800
Received: from shsmsx102.ccr.corp.intel.com (10.239.4.154) by
	FMSMSX102.amr.corp.intel.com (10.19.9.53) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 19:10:01 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX102.ccr.corp.intel.com ([10.239.4.154]) with mapi id
	14.03.0123.003; Thu, 27 Feb 2014 11:09:58 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: Jan Beulich <JBeulich@suse.com>, "Dong, Eddie" <eddie.dong@intel.com>,
	"Nakajima, Jun" <jun.nakajima@intel.com>, Tim Deegan <tim@xen.org>
Thread-Topic: [RFC] utilizing EPT_MISCONFIG VM exit
Thread-Index: AQHPMWEiZSXiSq08kU2i88ZePtWRLZrIaQJg
Date: Thu, 27 Feb 2014 03:09:58 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F80E0@SHSMSX104.ccr.corp.intel.com>
References: <530B51B0020000780011EC46@nat28.tlf.novell.com>
In-Reply-To: <530B51B0020000780011EC46@nat28.tlf.novell.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC] utilizing EPT_MISCONFIG VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jan Beulich wrote on 2014-02-24:
> Attached draft patch (still having debugging code in it, and assuming
> other adjustments having happened before) demonstrates that we
> should evaluate whether - not only for dealing with the memory type
> adjustments here, but for basically everything currently done through
> ->change_entry_type_global() - utilizing the EPT_MISCONFIG VM exit
> would be an architecturally clean approach. The fundamental idea
> here is to defer updates to the page tables until the respective
> entries actually get used, instead of iterating through all page tables
> when the change is being requested, thus
> - avoiding (here) or eliminating (elsewhere) long lasting operations
>   without having to introduce expensive/fragile preemption handling
> - leaving unaffected the sharing of the page tables with the IOMMU
>   (since the EPT memory type field is available to the programmer on the
>   IOMMU side; we obviously can't use the read/write bits without
>   affecting the IOMMU)
> The main question obviously is whether it is architecturally safe to use
> any particular, presently invalid memory type (right now the patches use
> type 7, i.e. the value defined for UC- in the PAT MSR only), or whether
> such an invalid type could be determined at run time.
> 
> Obviously if on EPT we can go this route, the goal ought to be to
> eliminate ->change_entry_type_global() altogether (i.e. also from
> the generic P2M code) by using on-access adjustments instead of
> on-request ones. Quite likely that would involve adding an address
> range to ->memory_type_changed().
> 

Nice idea. Yes, the only concern is the use of a reserved/invalid memory type in the EPT entry. I will forward this internally to ask for help.

> Jan


Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:01:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIs9W-0004ss-I3; Thu, 27 Feb 2014 04:00:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIs9U-0004sk-Uc
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 04:00:53 +0000
Received: from [85.158.143.35:21836] by server-2.bemta-4.messagelabs.com id
	80/02-04779-478BE035; Thu, 27 Feb 2014 04:00:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393473649!8621704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4063 invoked from network); 27 Feb 2014 04:00:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 04:00:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,551,1389744000"; d="scan'208";a="104533010"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 04:00:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 23:00:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIs9Q-0007kM-7h;
	Thu, 27 Feb 2014 04:00:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIs9P-00059R-LD;
	Thu, 27 Feb 2014 04:00:48 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25312-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 04:00:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25312: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25312 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25312/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  c9f8e0aee507bec25104ca5535fde38efae6c6bc
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 407 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:01:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIs9W-0004ss-I3; Thu, 27 Feb 2014 04:00:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIs9U-0004sk-Uc
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 04:00:53 +0000
Received: from [85.158.143.35:21836] by server-2.bemta-4.messagelabs.com id
	80/02-04779-478BE035; Thu, 27 Feb 2014 04:00:52 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393473649!8621704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4063 invoked from network); 27 Feb 2014 04:00:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 04:00:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,551,1389744000"; d="scan'208";a="104533010"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 04:00:49 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Wed, 26 Feb 2014 23:00:48 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIs9Q-0007kM-7h;
	Thu, 27 Feb 2014 04:00:48 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIs9P-00059R-LD;
	Thu, 27 Feb 2014 04:00:48 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25312-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 04:00:47 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25312: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25312 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25312/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  c9f8e0aee507bec25104ca5535fde38efae6c6bc
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 407 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:06:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsEg-00050H-DF; Thu, 27 Feb 2014 04:06:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <nicolas.pitre@linaro.org>) id 1WIsEf-00050B-In
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 04:06:13 +0000
Received: from [193.109.254.147:22556] by server-9.bemta-14.messagelabs.com id
	E8/17-24895-4B9BE035; Thu, 27 Feb 2014 04:06:12 +0000
X-Env-Sender: nicolas.pitre@linaro.org
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393473968!7105090!1
X-Originating-IP: [209.85.192.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31718 invoked from network); 27 Feb 2014 04:06:09 -0000
Received: from mail-qg0-f48.google.com (HELO mail-qg0-f48.google.com)
	(209.85.192.48)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 04:06:09 -0000
Received: by mail-qg0-f48.google.com with SMTP id a108so4119815qge.7
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 20:06:08 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:in-reply-to:message-id
	:references:user-agent:mime-version:content-type;
	bh=dmwiTBZ+PdZM8E6CSOXZ5SrIA2cmu+cYT51MYjjQf9Y=;
	b=lFEd1jxPl4YdGWhRjLNfMRptqwpMkK74HWe8UAk1ehqZu2fXf9I2VGJJbL7nbwJrQx
	L2dKmbI29JBbltDZhLLstzYvHdBm1GlDhJKe02i/oC4YwrLvjm7AOM5cNjtCuJZeTmpE
	wBlS0ib4X3704P5APgpixhLYaStsj8lQfGPUcb//Co5LW6fY0SLfnufZij74eCM0YM0V
	O3jb2Mb7+BZcA9e2NPJOD0pzZRhDN4OQlsNBYqbB7AUpx2oLhrIYwXtQiHsRSR2YknfN
	tbs6yF/Ay0VO9LCCZFT1P0++205J0dAGmZRuPuAAwkwdMPL0qnObvxzNC54KCi0rRPJY
	VS7g==
X-Gm-Message-State: ALoCoQnuNCcNSBeSVJqyp3qDK7tYaspoXTr/NyDlZgy1qwtEqfVrFuTJ2cnkMRT1FDaqhWGhc46n
X-Received: by 10.224.80.201 with SMTP id u9mr13133120qak.5.1393473968527;
	Wed, 26 Feb 2014 20:06:08 -0800 (PST)
Received: from xanadu.home (modemcable177.143-130-66.mc.videotron.ca.
	[66.130.143.177])
	by mx.google.com with ESMTPSA id k1sm9553050qat.16.2014.02.26.20.06.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 20:06:07 -0800 (PST)
Date: Wed, 26 Feb 2014 23:06:06 -0500 (EST)
From: Nicolas Pitre <nicolas.pitre@linaro.org>
To: Peter Maydell <peter.maydell@linaro.org>
In-Reply-To: <CAFEAcA9nq_V4VGG=j6FWFatJBnTuYrU9-9U9eGXcQz0zimrasQ@mail.gmail.com>
Message-ID: <alpine.LFD.2.11.1402262259290.17677@knanqh.ubzr>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
	<CAFEAcA8u6u8Hh=qn8XE2Evc93_-pkZPSDSShm-eiSn-vcsYgKA@mail.gmail.com>
	<CAL_JsqJ+mXadxLHqFw6uXvM9=juC-emKT7G-d-_PLYbXNSX7Ew@mail.gmail.com>
	<CAFEAcA9nq_V4VGG=j6FWFatJBnTuYrU9-9U9eGXcQz0zimrasQ@mail.gmail.com>
User-Agent: Alpine 2.11 (LFD 23 2013-08-11)
MIME-Version: 1.0
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Rob Herring <robherring2@gmail.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 26 Feb 2014, Peter Maydell wrote:

> On 26 February 2014 23:08, Rob Herring <robherring2@gmail.com> wrote:
> > On Wed, Feb 26, 2014 at 4:54 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
> >> What does the 8250 have to recommend it over just providing
> >> the PL011?
> >
> > As I mentioned, it will just work for anything that expects the serial
> > port to be ttyS0 as on x86 rather than ttyAMA0.
> 
> This doesn't seem very compelling to me. Either userspace
> should just be fixed or the kernel should implement a namespace
> for serial ports which doesn't randomly change just because the
> particular h/w driver doing the implementation is different.
> We shouldn't be papering over other peoples' problems in the VM
> spec IMHO.

Indeed.

This is already causing us trouble in other contexts.  Please have a 
look at this patch for a solution:

http://article.gmane.org/gmane.linux.kernel.samsung-soc/27222


Nicolas

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfI-0005Fo-9A; Thu, 27 Feb 2014 04:33:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfE-0005FL-OJ
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:41 +0000
Received: from [193.109.254.147:2381] by server-2.bemta-14.messagelabs.com id
	55/C4-01236-320CE035; Thu, 27 Feb 2014 04:33:39 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393475616!7140481!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4709 invoked from network); 27 Feb 2014 04:33:37 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:37 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id 57828149;
	Thu, 27 Feb 2014 04:33:34 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id DA83346;
	Thu, 27 Feb 2014 04:33:31 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:40 -0500
Message-Id: <1393475567-4453-2-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte queue
	spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces a new queue spinlock implementation that can
serve as an alternative to the default ticket spinlock. The queue
spinlock is almost as fair as the ticket spinlock, has about the same
speed in the single-thread case, and can be much faster in high
contention situations. Only in light to moderate contention, where the
average queue depth is around 1-3, may the queue spinlock be slightly
slower due to its higher slowpath overhead.

This queue spinlock is especially suited to NUMA machines with a large
number of cores, as the chance of spinlock contention is much higher
on those machines. The cost of contention is also higher because of
slower inter-node memory traffic.

The idea behind this spinlock implementation is the fact that spinlocks
are acquired with preemption disabled: a process will not be migrated
to another CPU while it is trying to get a spinlock. Ignoring interrupt
handling, a CPU can therefore be contending for only one spinlock at
any one time. (An interrupt handler can, of course, try to acquire one
spinlock while the interrupted process is in the middle of getting
another.) By allocating a set of per-CPU queue nodes and using them to
form a waiting queue, the queue node address can be encoded into the
upper 24 bits of the lock word. Together with the 1-byte locked field,
this queue spinlock implementation only needs 4 bytes to hold all the
information that it needs.

The current queue node address encoding of the 4-byte word is as
follows:
Bits 0-7  : the locked byte
Bits 8-9  : queue node index in the per-cpu array (4 entries)
Bits 10-31: cpu number + 1 (max cpus = 4M -1)

In the extremely unlikely case that all the queue node entries are
used up, the current code falls back to busy spinning without queuing,
and emits a warning message.
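
The word layout above can be sketched in C as follows. This is an
illustrative sketch only: the macro and function names are invented
here and are not the identifiers used by the patch itself.

```c
#include <stdint.h>

/* Layout of the 4-byte lock word, per the description above:
 *   bits 0-7  : locked byte
 *   bits 8-9  : queue node index in the per-cpu array (4 entries)
 *   bits 10-31: cpu number + 1
 * All names below are hypothetical. */
#define Q_LOCKED_MASK  0xffu
#define Q_IDX_SHIFT    8
#define Q_IDX_MASK     (0x3u << Q_IDX_SHIFT)
#define Q_CPU_SHIFT    10

static uint32_t q_encode(uint32_t cpu, uint32_t idx, uint32_t locked)
{
    /* cpu is stored biased by one so that 0 can mean "no queue tail" */
    return ((cpu + 1) << Q_CPU_SHIFT) | (idx << Q_IDX_SHIFT) | locked;
}

static uint32_t q_decode_cpu(uint32_t val)    { return (val >> Q_CPU_SHIFT) - 1; }
static uint32_t q_decode_idx(uint32_t val)    { return (val & Q_IDX_MASK) >> Q_IDX_SHIFT; }
static uint32_t q_decode_locked(uint32_t val) { return val & Q_LOCKED_MASK; }
```

Biasing the cpu number by one lets an all-zero upper 24 bits mean
"queue empty", which is why the maximum cpu count is 4M - 1 rather
than 4M.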

For single-thread performance (no contention), a 256K lock/unlock
loop was run on a 2.4 GHz Westmere x86-64 CPU.  The following table
shows the average time (in ns) for a single lock/unlock sequence
(including the looping and timing overhead):

  Lock Type			Time (ns)
  ---------			---------
  Ticket spinlock		  14.1
  Queue spinlock		   8.8

So the queue spinlock is much faster than the ticket spinlock, even
though the overhead of locking and unlocking should be pretty small
when there is no contention. The performance advantage comes mainly
from the fact that the ticket spinlock performs a read-modify-write
(add) instruction in its unlock path, whereas the queue spinlock only
does a simple write, which can be much faster on a pipelined CPU.
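
For illustration only, the contrast between the two unlock paths can
be sketched with C11 atomics. The structs and function names below are
invented for this sketch and are not the kernel's actual
implementation (which uses its own atomic primitives):

```c
#include <stdatomic.h>

struct ticket_lock {
    _Atomic unsigned short head;  /* ticket now being served */
    _Atomic unsigned short tail;  /* next ticket to hand out */
};

struct queue_lock {
    _Atomic unsigned char locked; /* the low locked byte of the word */
    unsigned char idx;            /* queue node index bits */
    unsigned short cpu;           /* cpu + 1 bits (narrowed for the sketch) */
};

static void ticket_unlock(struct ticket_lock *l)
{
    /* read-modify-write: must atomically advance the owner ticket */
    atomic_fetch_add_explicit(&l->head, 1, memory_order_release);
}

static void queue_unlock(struct queue_lock *l)
{
    /* simple write: just clear the locked byte */
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

The store in queue_unlock needs no round trip through the cache line's
previous value, which is what makes it cheaper on a pipelined CPU.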

The AIM7 benchmark was run on an 8-socket 80-core DL980 with Westmere
x86-64 CPUs, with an XFS filesystem on a ramdisk and HT off, to
evaluate the performance impact of this patch on a 3.13 kernel.

  +------------+----------+-----------------+---------+
  | Kernel     | 3.13 JPM |    3.13 with    | %Change |
  |            |          | qspinlock patch |         |
  +------------+----------+-----------------+---------+
  |                   10-100 users                    |
  +------------+----------+-----------------+---------+
  |custom      |   357459 |      363109     |  +1.58% |
  |dbase       |   496847 |      498801     |  +0.39% |
  |disk        |  2925312 |     2771387     |  -5.26% |
  |five_sec    |   166612 |      169215     |  +1.56% |
  |fserver     |   382129 |      383279     |  +0.30% |
  |high_systime|    16356 |       16380     |  +0.15% |
  |short       |  4521978 |     4257363     |  -5.85% |
  +------------+----------+-----------------+---------+
  |                  200-1000 users                   |
  +------------+----------+-----------------+---------+
  |custom      |   449070 |      447711     |  -0.30% |
  |dbase       |   845029 |      853362     |  +0.99% |
  |disk        |  2725249 |     4892907     | +79.54% |
  |five_sec    |   169410 |      170638     |  +0.72% |
  |fserver     |   489662 |      491828     |  +0.44% |
  |high_systime|   142823 |      143790     |  +0.68% |
  |short       |  7435288 |     9016171     | +21.26% |
  +------------+----------+-----------------+---------+
  |                 1100-2000 users                   |
  +------------+----------+-----------------+---------+
  |custom      |   432470 |      432570     |  +0.02% |
  |dbase       |   889289 |      890026     |  +0.08% |
  |disk        |  2565138 |     5008732     | +95.26% |
  |five_sec    |   169141 |      170034     |  +0.53% |
  |fserver     |   498569 |      500701     |  +0.43% |
  |high_systime|   229913 |      245866     |  +6.94% |
  |short       |  8496794 |     8281918     |  -2.53% |
  +------------+----------+-----------------+---------+

The workload with the most gain was the disk workload. Without the
patch, the perf profile at 1500 users looked like:

 26.19%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--47.28%-- evict
              |--46.87%-- inode_sb_list_add
              |--1.24%-- xlog_cil_insert_items
              |--0.68%-- __remove_inode_hash
              |--0.67%-- inode_wait_for_writeback
               --3.26%-- [...]
 22.96%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  5.56%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  4.87%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.04%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.30%    reaim  [kernel.kallsyms]  [k] memcpy
  1.08%    reaim  [unknown]          [.] 0x0000003c52009447

There was pretty high spinlock contention on the inode_sb_list_lock
and maybe the inode's i_lock.

With the patch, the perf profile at 1500 users became:

 26.82%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  4.66%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  3.97%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.40%    reaim  [kernel.kallsyms]  [k] queue_spin_lock_slowpath
              |--88.31%-- _raw_spin_lock
              |          |--36.02%-- inode_sb_list_add
              |          |--35.09%-- evict
              |          |--16.89%-- xlog_cil_insert_items
              |          |--6.30%-- try_to_wake_up
              |          |--2.20%-- _xfs_buf_find
              |          |--0.75%-- __remove_inode_hash
              |          |--0.72%-- __mutex_lock_slowpath
              |          |--0.53%-- load_balance
              |--6.02%-- _raw_spin_lock_irqsave
              |          |--74.75%-- down_trylock
              |          |--9.69%-- rcu_check_quiescent_state
              |          |--7.47%-- down
              |          |--3.57%-- up
              |          |--1.67%-- rwsem_wake
              |          |--1.00%-- remove_wait_queue
              |          |--0.56%-- pagevec_lru_move_fn
              |--5.39%-- _raw_spin_lock_irq
              |          |--82.05%-- rwsem_down_read_failed
              |          |--10.48%-- rwsem_down_write_failed
              |          |--4.24%-- __down
              |          |--2.74%-- __schedule
               --0.28%-- [...]
  2.20%    reaim  [kernel.kallsyms]  [k] memcpy
  1.84%    reaim  [unknown]          [.] 0x000000000041517b
  1.77%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--21.08%-- xlog_cil_insert_items
              |--10.14%-- xfs_icsb_modify_counters
              |--7.20%-- xfs_iget_cache_hit
              |--6.56%-- inode_sb_list_add
              |--5.49%-- _xfs_buf_find
              |--5.25%-- evict
              |--5.03%-- __remove_inode_hash
              |--4.64%-- __mutex_lock_slowpath
              |--3.78%-- selinux_inode_free_security
              |--2.95%-- xfs_inode_is_filestream
              |--2.35%-- try_to_wake_up
              |--2.07%-- xfs_inode_set_reclaim_tag
              |--1.52%-- list_lru_add
              |--1.16%-- xfs_inode_clear_eofblocks_tag
		  :
  1.30%    reaim  [kernel.kallsyms]  [k] effective_load
  1.27%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.10%    reaim  [kernel.kallsyms]  [k] security_compute_sid

On the ext4 filesystem, the disk workload improved from 416281 JPM
to 899101 JPM (+116%) with the patch. In this case, the contended
spinlock was the mb_cache_spinlock.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 include/asm-generic/qspinlock.h       |  122 ++++++++++
 include/asm-generic/qspinlock_types.h |   55 +++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  393 +++++++++++++++++++++++++++++++++
 5 files changed, 578 insertions(+), 0 deletions(-)
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..08da60f
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,122 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/*
+ * External function declarations
+ */
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval);
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & _QSPINLOCK_LOCKED;
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !(atomic_read(&lock.qlcode) & _QSPINLOCK_LOCKED);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & ~_QSPINLOCK_LOCK_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->qlcode) &&
+	   (atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	int qsval;
+
+	/*
+	 * To reduce memory access to only once for the cold cache case,
+	 * a direct cmpxchg() is performed in the fastpath to optimize the
+	 * uncontended case. The contended performance, however, may suffer
+	 * a bit because of that.
+	 */
+	qsval = atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED);
+	if (likely(qsval == 0))
+		return;
+	queue_spin_lock_slowpath(lock, qsval);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * Use an atomic subtraction to clear the lock bit.
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_sub(_QSPINLOCK_LOCKED, &lock->qlcode);
+}
+#endif
+
+/*
+ * Initializer
+ */
+#define	__ARCH_SPIN_LOCK_UNLOCKED	{ ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l)		queue_spin_is_locked(l)
+#define arch_spin_is_contended(l)	queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	queue_spin_value_unlocked(l)
+#define arch_spin_lock(l)		queue_spin_lock(l)
+#define arch_spin_trylock(l)		queue_spin_trylock(l)
+#define arch_spin_unlock(l)		queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f)	queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..df981d0
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,55 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file inclusion via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here in this case.
+ */
+#ifdef CONFIG_PARAVIRT
+# include <asm/paravirt_types.h>
+#else
+# include <linux/types.h>
+# include <linux/atomic.h>
+#endif
+
+/*
+ * The queue spinlock data structure - a 32-bit word
+ *
+ * For NR_CPUS >= 16K, the bit assignment is:
+ *   Bit  0   : Set if locked
+ *   Bits 1-7 : Not used
+ *   Bits 8-31: Queue code
+ *
+ * For NR_CPUS < 16K, the bit assignment is:
+ *   Bit   0   : Set if locked
+ *   Bits  1-7 : Not used
+ *   Bits  8-15: Reserved for architecture specific optimization
+ *   Bits 16-31: Queue code
+ */
+typedef struct qspinlock {
+	atomic_t	qlcode;	/* Lock + queue code */
+} arch_spinlock_t;
+
+#define _QCODE_OFFSET		8
+#define _QSPINLOCK_LOCKED	1U
+#define	_QSPINLOCK_LOCK_MASK	0xff
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
 config MUTEX_SPIN_ON_OWNER
 	def_bool y
 	depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+	bool
+
+config QUEUE_SPINLOCK
+	def_bool y if ARCH_USE_QUEUE_SPINLOCK
+	depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index baab8e5..e3b3293 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
 endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..ed5efa7
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,393 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock with twists
+ * to make it fit the following constraints:
+ * 1. A max spinlock size of 4 bytes
+ * 2. Good fastpath performance
+ * 3. No change in the locking APIs
+ *
+ * The queue spinlock fastpath is as simple as it can get; all the heavy
+ * lifting is done in the lock slowpath. The main idea behind this queue
+ * spinlock implementation is to keep the spinlock size at 4 bytes while
+ * at the same time implement a queue structure to queue up the waiting
+ * lock spinners.
+ *
+ * Since preemption is disabled before getting the lock, a given CPU will
+ * only need to use one queue node structure in a non-interrupt context.
+ * A percpu queue node structure will be allocated for this purpose and the
+ * cpu number will be put into the queue spinlock structure to indicate the
+ * tail of the queue.
+ *
+ * To handle spinlock acquisition at interrupt context (softirq or hardirq),
+ * the queue node structure is actually an array for supporting nested spin
+ * locking operations in interrupt handlers. If all the entries in the
+ * array are used up, a warning message will be printed (as that shouldn't
+ * happen in normal circumstances) and the lock spinner will fall back to
+ * busy spinning instead of waiting in a queue.
+ */
+
+/*
+ * The 24-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
+ *
+ * The 16-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-15: CPU number + 1   (16K - 1 CPUs)
+ *
+ * A queue node code of 0 indicates that no one is waiting for the lock.
+ * Since the value 0 cannot be used as a valid CPU number, we need to
+ * add 1 to it before putting it into the queue code.
+ */
+#define MAX_QNODES		4
+#ifndef _QCODE_VAL_OFFSET
+#define _QCODE_VAL_OFFSET	_QCODE_OFFSET
+#endif
+
+/*
+ * The queue node structure
+ *
+ * This structure is essentially the same as the mcs_spinlock structure
+ * in mcs_spinlock.h file. This structure is retained for future extension
+ * where new fields may be added.
+ */
+struct qnode {
+	u32		 wait;		/* Waiting flag		*/
+	struct qnode	*next;		/* Next queue node addr */
+};
+
+struct qnode_set {
+	struct qnode	nodes[MAX_QNODES];
+	int		node_idx;	/* Current node to use */
+};
+
+/*
+ * Per-CPU queue node structures
+ */
+static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
+
+/*
+ ************************************************************************
+ * The following optimized code is for architectures that support:	*
+ *  1) Atomic byte and short data write					*
+ *  2) Byte and short data exchange and compare-exchange instructions	*
+ *									*
+ * For those architectures, their asm/qspinlock.h header file should	*
+ * define the following in order to use the optimized code.		*
+ *  1) The _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS macro			*
+ *  2) A smp_u8_store_release() macro for byte size store operation	*
+ *  3) A "union arch_qspinlock" structure that includes the individual	*
+ *     fields of the qspinlock structure, including:			*
+ *      o slock - the qspinlock structure				*
+ *      o lock  - the lock byte						*
+ *									*
+ ************************************************************************
+ */
+#ifdef _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!ACCESS_ONCE(qlock->lock) &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+#else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
+/*
+ * Generic functions for architectures that do not support atomic
+ * byte or short data types.
+ */
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+			return 1;
+	return 0;
+}
+#endif /* _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS */
+
+/*
+ ************************************************************************
+ * Inline functions used by the queue_spin_lock_slowpath() function	*
+ * that may get superseded by a more optimized version.			*
+ ************************************************************************
+ */
+
+#ifndef queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value (not used)
+ * Return : > 0 if lock is not available, = 0 if lock is free
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	*qcode = qlcode;
+	return qlcode & _QSPINLOCK_LOCKED;
+}
+#endif /* queue_get_lock_qcode */
+
+#ifndef queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+#endif /* queue_spin_trylock_and_clr_qcode */
+
+#ifndef queue_encode_qcode
+/**
+ * queue_encode_qcode - Encode the CPU number & node index into a qnode code
+ * @cpu_nr: CPU number
+ * @qn_idx: Queue node index
+ * Return : A qnode code that can be saved into the qspinlock structure
+ *
+ * The lock bit is set in the encoded 32-bit value as the need to encode
+ * a qnode means that the lock should have been taken.
+ */
+static u32 queue_encode_qcode(u32 cpu_nr, u8 qn_idx)
+{
+	return ((cpu_nr + 1) << (_QCODE_VAL_OFFSET + 2)) |
+		(qn_idx << _QCODE_VAL_OFFSET) | _QSPINLOCK_LOCKED;
+}
+#endif /* queue_encode_qcode */
+
+/*
+ ************************************************************************
+ * Other inline functions needed by the queue_spin_lock_slowpath()	*
+ * function.								*
+ ************************************************************************
+ */
+
+/**
+ * xlate_qcode - translate the queue code into the queue node address
+ * @qcode: Queue code to be translated
+ * Return: The corresponding queue node address
+ */
+static inline struct qnode *xlate_qcode(u32 qcode)
+{
+	u32 cpu_nr = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+	u8  qn_idx = (qcode >> _QCODE_VAL_OFFSET) & 3;
+
+	return per_cpu_ptr(&qnset.nodes[qn_idx], cpu_nr);
+}
+
+/**
+ * get_qnode - Get a queue node address
+ * @qn_idx: Pointer to queue node index [out]
+ * Return : queue node address & queue node index in qn_idx, or NULL if
+ *	    no free queue node available.
+ */
+static struct qnode *get_qnode(unsigned int *qn_idx)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+	int i;
+
+	if (unlikely(qset->node_idx >= MAX_QNODES))
+		return NULL;
+	i = qset->node_idx++;
+	*qn_idx = i;
+	return &qset->nodes[i];
+}
+
+/**
+ * put_qnode - Return a queue node to the pool
+ */
+static void put_qnode(void)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+
+	qset->node_idx--;
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Current value of the queue spinlock 32-bit word
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
+{
+	unsigned int cpu_nr, qn_idx;
+	struct qnode *node, *next;
+	u32 prev_qcode, my_qcode;
+
+	/*
+	 * Get the queue node
+	 */
+	cpu_nr = smp_processor_id();
+	node   = get_qnode(&qn_idx);
+
+	/*
+	 * It should never happen that all the queue nodes are being used.
+	 */
+	BUG_ON(!node);
+
+	/*
+	 * Set up the new cpu code to be exchanged
+	 */
+	my_qcode = queue_encode_qcode(cpu_nr, qn_idx);
+
+	/*
+	 * Initialize the queue node
+	 */
+	node->wait = true;
+	node->next = NULL;
+
+	/*
+	 * The lock may be available at this point, try again if no task was
+	 * waiting in the queue.
+	 */
+	if (!(qsval >> _QCODE_OFFSET) && queue_spin_trylock(lock)) {
+		put_qnode();
+		return;
+	}
+
+	/*
+	 * Exchange current copy of the queue node code
+	 */
+	prev_qcode = atomic_xchg(&lock->qlcode, my_qcode);
+	/*
+	 * It is possible that we may accidentally steal the lock. If this is
+	 * the case, we need to either release it if not the head of the queue
+	 * or get the lock and be done with it.
+	 */
+	if (unlikely(!(prev_qcode & _QSPINLOCK_LOCKED))) {
+		if (prev_qcode == 0) {
+			/*
+			 * Got the lock since it is at the head of the queue
+			 * Now try to atomically clear the queue code.
+			 */
+			if (atomic_cmpxchg(&lock->qlcode, my_qcode,
+					  _QSPINLOCK_LOCKED) == my_qcode)
+				goto release_node;
+			/*
+			 * The cmpxchg fails only if one or more tasks
+			 * are added to the queue. In this case, we need to
+			 * notify the next one to be the head of the queue.
+			 */
+			goto notify_next;
+		}
+		/*
+		 * We accidentally stole the lock; release it and let
+		 * the queue head take it.
+		 */
+		queue_spin_unlock(lock);
+	} else
+		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
+	my_qcode &= ~_QSPINLOCK_LOCKED;
+
+	if (prev_qcode) {
+		/*
+		 * Not at the queue head; get the address of the previous node
+		 * and set up the "next" field of that node.
+		 */
+		struct qnode *prev = xlate_qcode(prev_qcode);
+
+		ACCESS_ONCE(prev->next) = node;
+		/*
+		 * Wait until the waiting flag is off
+		 */
+		while (smp_load_acquire(&node->wait))
+			arch_mutex_cpu_relax();
+	}
+
+	/*
+	 * At the head of the wait queue now
+	 */
+	while (true) {
+		u32 qcode;
+		int retval;
+
+		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
+		if (retval > 0)
+			;	/* Lock not available yet */
+		else if (retval < 0)
+			/* Lock taken, can release the node & return */
+			goto release_node;
+		else if (qcode != my_qcode) {
+			/*
+			 * Just get the lock with other spinners waiting
+			 * in the queue.
+			 */
+			if (queue_spin_setlock(lock))
+				goto notify_next;
+		} else {
+			/*
+			 * Get the lock & clear the queue code simultaneously
+			 */
+			if (queue_spin_trylock_and_clr_qcode(lock, qcode))
+				/* No need to notify the next one */
+				goto release_node;
+		}
+		arch_mutex_cpu_relax();
+	}
+
+notify_next:
+	/*
+	 * Wait, if needed, until the next one in the queue sets up the next field
+	 */
+	while (!(next = ACCESS_ONCE(node->next)))
+		arch_mutex_cpu_relax();
+	/*
+	 * The next one in queue is now at the head
+	 */
+	smp_store_release(&next->wait, false);
+
+release_node:
+	put_qnode();
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfR-0005Hc-KP; Thu, 27 Feb 2014 04:33:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfN-0005Gj-VK
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:50 +0000
Received: from [193.109.254.147:36727] by server-1.bemta-14.messagelabs.com id
	B2/6B-15438-D20CE035; Thu, 27 Feb 2014 04:33:49 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393475627!1832514!1
X-Originating-IP: [15.217.128.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30340 invoked from network); 27 Feb 2014 04:33:48 -0000
Received: from g2t2352.austin.hp.com (HELO g2t2352.austin.hp.com)
	(15.217.128.51)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:48 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2352.austin.hp.com (Postfix) with ESMTP id DC1E5284;
	Thu, 27 Feb 2014 04:33:46 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 7D26146;
	Thu, 27 Feb 2014 04:33:44 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:45 -0500
Message-Id: <1393475567-4453-7-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 6/8] pvqspinlock,
	x86: Rename paravirt_ticketlocks_enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/spinlock.h      |    4 ++--
 arch/x86/kernel/kvm.c                |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
 arch/x86/xen/spinlock.c              |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6e6de1f..283f2cf 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,7 +40,7 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
 #ifdef CONFIG_QUEUE_SPINLOCK
@@ -151,7 +151,7 @@ static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
-	    static_key_false(&paravirt_ticketlocks_enabled)) {
+	    static_key_false(&paravirt_spinlocks_enabled)) {
 		arch_spinlock_t prev;
 
 		prev = *lock;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a489140..f318e78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -818,7 +818,7 @@ static __init int kvm_spinlock_init_jump(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
 
 	return 0;
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index a50032a..8c67cbe 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -17,8 +17,8 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
 #endif
 
 #ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 581521c..06f4a64 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -290,7 +290,7 @@ static __init int xen_init_spinlocks_jump(void)
 	if (!xen_pvspin)
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	return 0;
 }
 early_initcall(xen_init_spinlocks_jump);
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfH-0005Fg-7C; Thu, 27 Feb 2014 04:33:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfE-0005FM-MP
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:40 +0000
Received: from [193.109.254.147:55566] by server-9.bemta-14.messagelabs.com id
	98/70-24895-320CE035; Thu, 27 Feb 2014 04:33:39 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393475617!7087850!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7742 invoked from network); 27 Feb 2014 04:33:38 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:38 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id CDF0625C;
	Thu, 27 Feb 2014 04:33:36 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 68F554C;
	Thu, 27 Feb 2014 04:33:34 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:41 -0500
Message-Id: <1393475567-4453-3-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 2/8] qspinlock,
	x86: Enable x86-64 to use queue spinlock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of queue spinlock for x86-64. As x86-32 machines
are typically not multi-socket, the benefit of queue spinlock may not be
apparent there, so queue spinlock is not enabled for x86-32.

Currently, there are some incompatibilities between the para-virtualized
spinlock code (which hard-codes the use of ticket spinlock) and the
queue spinlock. Therefore, the use of queue spinlock is disabled when
the para-virtualized spinlock is enabled.

The arch/x86/include/asm/qspinlock.h header file includes some x86-specific
optimizations which make the queue spinlock code perform better than
the generic implementation.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/Kconfig                      |    1 +
 arch/x86/include/asm/qspinlock.h      |   41 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/spinlock.h       |    5 ++++
 arch/x86/include/asm/spinlock_types.h |    4 +++
 4 files changed, 51 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/include/asm/qspinlock.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1b4ff87..5bf70ab 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -17,6 +17,7 @@ config X86_64
 	depends on 64BIT
 	select X86_DEV_DMA_OPS
 	select ARCH_USE_CMPXCHG_LOCKREF
+	select ARCH_USE_QUEUE_SPINLOCK
 
 ### Arch settings
 config X86
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
new file mode 100644
index 0000000..44cefee
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,41 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+
+/*
+ * x86-64 specific queue spinlock union structure
+ */
+union arch_qspinlock {
+	struct qspinlock slock;
+	u8		 lock;	/* Lock bit	*/
+};
+
+#define	queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+ * No special memory barrier other than a compiler one is needed for the
+ * x86 architecture. A compiler barrier is added at the end to make sure
+ * that clearing the lock bit is done ASAP without artificial delay
+ * due to compiler optimization.
+ */
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	barrier();
+	ACCESS_ONCE(qlock->lock) = 0;
+	barrier();
+}
+
+#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index bf156de..6e6de1f 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -43,6 +43,10 @@
 extern struct static_key paravirt_ticketlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm/qspinlock.h>
+#else
+
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -181,6 +185,7 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 {
 	arch_spin_lock(lock);
 }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 4f1bea1..7960268 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -23,6 +23,9 @@ typedef u32 __ticketpair_t;
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm-generic/qspinlock_types.h>
+#else
 typedef struct arch_spinlock {
 	union {
 		__ticketpair_t head_tail;
@@ -33,6 +36,7 @@ typedef struct arch_spinlock {
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 #include <asm/rwlock.h>
 
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfM-0005GY-Mh; Thu, 27 Feb 2014 04:33:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfI-0005Fm-96
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:44 +0000
Received: from [193.109.254.147:36556] by server-11.bemta-14.messagelabs.com
	id 17/05-24604-720CE035; Thu, 27 Feb 2014 04:33:43 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393475621!7087859!1
X-Originating-IP: [15.217.128.51]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8147 invoked from network); 27 Feb 2014 04:33:42 -0000
Received: from g2t2352.austin.hp.com (HELO g2t2352.austin.hp.com)
	(15.217.128.51)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:42 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2352.austin.hp.com (Postfix) with ESMTP id 52F9F28B;
	Thu, 27 Feb 2014 04:33:39 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id DFE6349;
	Thu, 27 Feb 2014 04:33:36 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:42 -0500
Message-Id: <1393475567-4453-4-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 3/8] qspinlock,
	x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A major problem with the queue spinlock patch is its performance at
low contention levels (2-4 contending tasks), where it is slower than
the corresponding ticket spinlock code path. The following table shows
the execution time (in ms) of a micro-benchmark where 5M iterations
of the lock/unlock cycle were run on a 10-core Westmere-EX CPU with
2 different types of loads - standalone (lock and protected data in
different cachelines) and embedded (lock and protected data in the
same cacheline).

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  135/111	 135/102	  0%/-8%
       2	  732/950	1315/1573	+80%/+66%
       3	 1827/1783	2372/2428	+30%/+36%
       4	 2689/2725	2934/2934	 +9%/+8%
       5	 3736/3748	3658/3652	 -2%/-3%
       6	 4942/4984	4434/4428	-10%/-11%
       7	 6304/6319	5176/5163	-18%/-18%
       8	 7736/7629	5955/5944	-23%/-22%

It can be seen that the performance degradation is particularly bad
with 2 and 3 contending tasks. To reduce that performance deficit
at low contention levels, a special x86-specific optimized code path
for 2 contending tasks was added. This special code path will only
be activated with fewer than 16K configured CPUs, because it uses
a byte in the 32-bit lock word to hold a waiting bit for the 2nd
contending task instead of queuing the waiting task in the queue.
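
The xchg() on the combined lock/wait halfword has three interesting
outcomes, which can be illustrated with a single-threaded model (the bit
values below are taken from the patch; this is a sketch of the state
transitions, not the kernel code):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout of the low 16 bits of the lock word (little-endian):
 * byte 0 = lock bit, byte 1 = waiting bit. */
#define LOCKED	0x0001u
#define WAITING	0x0100u

/* Model of xchg(&qlock->lock_wait, WAITING|LOCKED): atomically set both
 * bits and return the previous value, which tells the caller which of
 * the quick-path cases applies. */
static uint16_t model_xchg(uint16_t *lock_wait)
{
	uint16_t old = *lock_wait;

	*lock_wait = WAITING | LOCKED;
	return old;
}
```

Old value 0 means the lock was free (take it, then clear the waiting bit);
old == LOCKED means we become the single waiter; old == WAITING means the
previous waiter just stole the lock, so we own the waiting-turned-lock slot.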

With the change, the performance data became:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       2	  732/950	 523/528	-29%/-44%
       3	 1827/1783	2366/2384	+30%/+34%

The queue spinlock code path is now a bit faster with 2 contending
tasks.  There is also a very slight improvement with 3 contending
tasks.

The performance of the optimized code path can vary depending on which
of the several different code paths is taken. It is also not as fair as
the ticket spinlock, and there can be some variation in the execution
times of the 2 contending tasks. Testing with different pairs of cores
within the same CPU shows an execution time that varies from 400ms to
1194ms. The ticket spinlock code also shows a variation of 718-1146ms,
which is probably due to the CPU topology within a socket.

In a multi-socket server, the optimized code path also seems to
produce a big performance improvement in cross-node contention traffic
at low contention levels. The table below shows the performance with
1 contending task per node:

		[Standalone]
  # of nodes	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	   135		 135		  0%
       2	  4452		 528		-88%
       3	 10767		2369		-78%
       4	 20835		2921		-86%

The micro-benchmark was also run on a 4-core Ivy-Bridge PC. The table
below shows the collected performance data:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  197/178	  181/150	 -8%/-16%
       2	 1109/928    435-1417/697-2125
       3	 1836/1702  1372-3112/1379-3138
       4	 2717/2429  1842-4158/1846-4170

The performance of the queue lock patch varied from run to run, whereas
the performance of the ticket lock was more consistent. The queue
lock figures above show the range of values that were observed.

This optimization can also be easily used by other architectures as
long as they support 8- and 16-bit atomic operations.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/qspinlock.h      |   20 ++++-
 include/asm-generic/qspinlock_types.h |    8 ++-
 kernel/locking/qspinlock.c            |  192 ++++++++++++++++++++++++++++++++-
 3 files changed, 215 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 44cefee..98db42e 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,12 +7,30 @@
 
 #define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
 
+#define smp_u8_store_release(p, v)	\
+do {					\
+	barrier();			\
+	ACCESS_ONCE(*p) = (v);		\
+} while (0)
+
+/*
+ * As the qcode will be accessed as a 16-bit word, no offset is needed
+ */
+#define _QCODE_VAL_OFFSET	0
+
 /*
  * x86-64 specific queue spinlock union structure
+ * Besides the slock and lock fields, the other fields are only
+ * valid with fewer than 16K CPUs.
  */
 union arch_qspinlock {
 	struct qspinlock slock;
-	u8		 lock;	/* Lock bit	*/
+	struct {
+		u8  lock;	/* Lock bit	*/
+		u8  wait;	/* Waiting bit	*/
+		u16 qcode;	/* Queue code	*/
+	};
+	u16 lock_wait;		/* Lock and wait bits */
 };
 
 #define	queue_spin_unlock queue_spin_unlock
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index df981d0..3a02a9e 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -48,7 +48,13 @@ typedef struct qspinlock {
 	atomic_t	qlcode;	/* Lock + queue code */
 } arch_spinlock_t;
 
-#define _QCODE_OFFSET		8
+#if CONFIG_NR_CPUS >= (1 << 14)
+# define _Q_MANY_CPUS
+# define _QCODE_OFFSET	8
+#else
+# define _QCODE_OFFSET	16
+#endif
+
 #define _QSPINLOCK_LOCKED	1U
 #define	_QSPINLOCK_LOCK_MASK	0xff
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ed5efa7..22a63fa 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -109,8 +109,11 @@ static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
  *  2) A smp_u8_store_release() macro for byte size store operation	*
  *  3) A "union arch_qspinlock" structure that include the individual	*
  *     fields of the qspinlock structure, including:			*
- *      o slock - the qspinlock structure				*
- *      o lock  - the lock byte						*
+ *      o slock     - the qspinlock structure				*
+ *      o lock      - the lock byte					*
+ *      o wait      - the waiting byte					*
+ *      o qcode     - the queue node code				*
+ *      o lock_wait - the combined lock and waiting bytes		*
  *									*
  ************************************************************************
  */
@@ -129,6 +132,176 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 		return 1;
 	return 0;
 }
+
+#ifndef _Q_MANY_CPUS
+/*
+ * With fewer than 16K CPUs, the following optimizations are possible with
+ * the x86 architecture:
+ *  1) The 2nd byte of the 32-bit lock word can be used as a pending bit
+ *     for waiting lock acquirer so that it won't need to go through the
+ *     MCS style locking queuing which has a higher overhead.
+ *  2) The 16-bit queue code can be accessed or modified directly as a
+ *     16-bit short value without disturbing the first 2 bytes.
+ */
+#define	_QSPINLOCK_WAITING	0x100U	/* Waiting bit in 2nd byte   */
+#define	_QSPINLOCK_LWMASK	0xffff	/* Mask for lock & wait bits */
+
+#define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+
+#define queue_spin_trylock_quick queue_spin_trylock_quick
+/**
+ * queue_spin_trylock_quick - fast spinning on the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Old queue spinlock value
+ * Return: 1 if lock acquired, 0 if failed
+ *
+ * This is an optimized contention path for 2 contending tasks. It
+ * should only be entered if no task is waiting in the queue. This
+ * optimized path is not as fair as the ticket spinlock, but it offers
+ * slightly better performance. The regular MCS locking path for 3 or
+ * more contending tasks, however, is fair.
+ *
+ * Depending on the exact timing, there are several different paths that
+ * a contending task can take. The actual contention performance depends
+ * on which path is taken. So it can be faster or slower than the
+ * corresponding ticket spinlock path. On average, it is probably on par
+ * with ticket spinlock.
+ */
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+	u16		     old;
+
+	/*
+	 * Fall into the quick spinning code path only if no one is waiting
+	 * or the lock is available.
+	 */
+	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
+		     (qsval != _QSPINLOCK_WAITING)))
+		return 0;
+
+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
+
+	if (old == 0) {
+		/*
+		 * Got the lock, can clear the waiting bit now
+		 */
+		smp_u8_store_release(&qlock->wait, 0);
+		return 1;
+	} else if (old == _QSPINLOCK_LOCKED) {
+try_again:
+		/*
+		 * Wait until the lock byte is cleared to get the lock
+		 */
+		do {
+			cpu_relax();
+		} while (ACCESS_ONCE(qlock->lock));
+		/*
+		 * Set the lock bit & clear the waiting bit
+		 */
+		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
+			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+			return 1;
+		/*
+		 * Someone has stolen the lock, so wait again
+		 */
+		goto try_again;
+	} else if (old == _QSPINLOCK_WAITING) {
+		/*
+		 * Another task was already waiting when we stole the lock.
+		 * A bit of unfairness here won't change the big picture.
+		 * So just take the lock and return.
+		 */
+		return 1;
+	}
+	/*
+	 * Nothing needs to be done if the old value is
+	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
+	 */
+	return 0;
+}
+
+#define queue_code_xchg queue_code_xchg
+/**
+ * queue_code_xchg - exchange a queue code value
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: New queue code to be exchanged
+ * Return: The original qcode value in the queue spinlock
+ */
+static inline u32 queue_code_xchg(struct qspinlock *lock, u32 qcode)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	return (u32)xchg(&qlock->qcode, (u16)qcode);
+}
+
+#define queue_spin_trylock_and_clr_qcode queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	qcode <<= _QCODE_OFFSET;
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+
+#define queue_get_lock_qcode queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value
+ * Return : > 0 if lock is not available
+ *	   = 0 if lock is free
+ *	   < 0 if lock is taken & can return after cleanup
+ *
+ * It is considered locked when either the lock bit or the wait bit is set.
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	u32 qlcode;
+
+	qlcode = (u32)atomic_read(&lock->qlcode);
+	/*
+	 * With the special case that qlcode contains only _QSPINLOCK_LOCKED
+	 * and mycode. It will try to transition back to the quick spinning
+	 * code by clearing the qcode and setting the _QSPINLOCK_WAITING
+	 * bit.
+	 */
+	if (qlcode == (_QSPINLOCK_LOCKED | (mycode << _QCODE_OFFSET))) {
+		u32 old = qlcode;
+
+		qlcode = atomic_cmpxchg(&lock->qlcode, old,
+				_QSPINLOCK_LOCKED|_QSPINLOCK_WAITING);
+		if (qlcode == old) {
+			union arch_qspinlock *slock =
+				(union arch_qspinlock *)lock;
+try_again:
+			/*
+			 * Wait until the lock byte is cleared
+			 */
+			do {
+				cpu_relax();
+			} while (ACCESS_ONCE(slock->lock));
+			/*
+			 * Set the lock bit & clear the waiting bit
+			 */
+			if (cmpxchg(&slock->lock_wait, _QSPINLOCK_WAITING,
+				    _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+				return -1;	/* Got the lock */
+			goto try_again;
+		}
+	}
+	*qcode = qlcode >> _QCODE_OFFSET;
+	return qlcode & _QSPINLOCK_LWMASK;
+}
+#endif /* _Q_MANY_CPUS */
+
 #else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
 /*
  * Generic functions for architectures that do not support atomic
@@ -144,7 +317,7 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 	int qlcode = atomic_read(lock->qlcode);
 
 	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
-		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
 			return 1;
 	return 0;
 }
@@ -156,6 +329,10 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
  * that may get superseded by a more optimized version.			*
  ************************************************************************
  */
+#ifndef queue_spin_trylock_quick
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{ return 0; }
+#endif
 
 #ifndef queue_get_lock_qcode
 /**
@@ -266,6 +443,11 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	u32 prev_qcode, my_qcode;
 
 	/*
+	 * Try the quick spinning code path
+	 */
+	if (queue_spin_trylock_quick(lock, qsval))
+		return;
+	/*
 	 * Get the queue node
 	 */
 	cpu_nr = smp_processor_id();
@@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		return;
 	}
 
+#ifdef queue_code_xchg
+	prev_qcode = queue_code_xchg(lock, my_qcode);
+#else
 	/*
 	 * Exchange current copy of the queue node code
 	 */
@@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	} else
 		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
 	my_qcode &= ~_QSPINLOCK_LOCKED;
+#endif /* queue_code_xchg */
 
 	if (prev_qcode) {
 		/*
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfN-0005Gt-Pk; Thu, 27 Feb 2014 04:33:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfJ-0005Fw-Kb
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:45 +0000
Received: from [85.158.139.211:56590] by server-3.bemta-5.messagelabs.com id
	E7/44-13671-820CE035; Thu, 27 Feb 2014 04:33:44 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393475622!6538926!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29552 invoked from network); 27 Feb 2014 04:33:43 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:43 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id DF848261;
	Thu, 27 Feb 2014 04:33:41 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 661D54C;
	Thu, 27 Feb 2014 04:33:39 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:43 -0500
Message-Id: <1393475567-4453-5-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
	x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Locking is always an issue in a virtualized environment as the virtual
CPU that is waiting on a lock may get scheduled out and hence block
any progress in lock acquisition even when the lock has been freed.

One solution to this problem is to allow unfair locks in a
para-virtualized environment. In this case, a new lock acquirer can
come in and steal the lock if the next-in-line CPU to get the lock is
scheduled out. An unfair lock in a native environment is generally not
a good idea, as there is a possibility of lock starvation for a heavily
contended lock.
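
The lock-stealing behaviour described above can be sketched in user space
with a plain test-and-set lock (GCC atomic builtins, hypothetical names;
this is an illustration of the unfair-lock idea, not the kernel
implementation):

```c
#include <pthread.h>

/* Minimal user-space sketch of an "unfair" spinlock: any acquirer may
 * grab the lock the moment it is free, with no queue and no FIFO
 * ordering, which is exactly what lets a running vCPU steal the lock
 * from a preempted one. */
static int unfair_lock;		/* 0 = free, 1 = held */
static long counter;		/* data protected by the lock */

static void unfair_spin_lock(void)
{
	/* Spin until the compare-and-swap from 0 to 1 succeeds; whichever
	 * CPU wins the race gets the lock, regardless of arrival order. */
	while (!__sync_bool_compare_and_swap(&unfair_lock, 0, 1))
		;	/* busy-wait; a real lock would cpu_relax() here */
}

static void unfair_spin_unlock(void)
{
	__sync_lock_release(&unfair_lock);	/* release store of 0 */
}

static void *worker(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100000; i++) {
		unfair_spin_lock();
		counter++;
		unfair_spin_unlock();
	}
	return NULL;
}
```

Mutual exclusion still holds; what is lost relative to a queue or ticket
lock is any fairness guarantee between the spinning threads.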

This patch adds a new configuration option for the x86
architecture to enable the use of unfair queue spinlocks
(PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
(paravirt_unfairlocks_enabled) is used to switch between a fair and
an unfair version of the spinlock code. This jump label will only be
enabled in a real PV guest.

Enabling this configuration feature decreases the performance of an
uncontended lock-unlock operation by about 1-2%.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/Kconfig                     |   11 +++++
 arch/x86/include/asm/qspinlock.h     |   74 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/Makefile             |    1 +
 arch/x86/kernel/paravirt-spinlocks.c |    7 +++
 4 files changed, 93 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5bf70ab..8d7c941 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -645,6 +645,17 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer Y.
 
+config PARAVIRT_UNFAIR_LOCKS
+	bool "Enable unfair locks in a para-virtualized guest"
+	depends on PARAVIRT && SMP && QUEUE_SPINLOCK
+	depends on !X86_OOSTORE && !X86_PPRO_FENCE
+	---help---
+	  This changes the kernel to use unfair locks in a real
+	  para-virtualized guest system. This will help performance
+	  in most cases. However, there is a possibility of lock
+	  starvation on a heavily contended lock especially in a
+	  large guest with many virtual CPUs.
+
 source "arch/x86/xen/Kconfig"
 
 config KVM_GUEST
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 98db42e..c278aed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -56,4 +56,78 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
 
 #include <asm-generic/qspinlock.h>
 
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/**
+ * queue_spin_lock_unfair - acquire a queue spinlock unfairly
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (likely(cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return;
+	/*
+	 * Since the lock is now unfair, there is no need to activate
+	 * the 2-task quick spinning code path.
+	 */
+	queue_spin_lock_slowpath(lock, -1);
+}
+
+/**
+ * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!qlock->lock &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/*
+ * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
+ * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
+ * is true.
+ */
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_lock_flags
+
+extern struct static_key paravirt_unfairlocks_enabled;
+
+/**
+ * arch_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static inline void arch_spin_lock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		queue_spin_lock_unfair(lock);
+		return;
+	}
+	queue_spin_lock(lock);
+}
+
+/**
+ * arch_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static inline int arch_spin_trylock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		return queue_spin_trylock_unfair(lock);
+	}
+	return queue_spin_trylock(lock);
+}
+
+#define arch_spin_lock_flags(l, f)	arch_spin_lock(l)
+
+#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+
 #endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index cb648c8..1107a20 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
 obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 
 obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..a50032a 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,6 +8,7 @@
 
 #include <asm/paravirt.h>
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
@@ -18,3 +19,9 @@ EXPORT_SYMBOL(pv_lock_ops);
 
 struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
 EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+#endif
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
+#endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfT-0005IF-D5; Thu, 27 Feb 2014 04:33:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfR-0005HQ-5N
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:53 +0000
Received: from [85.158.137.68:36756] by server-4.bemta-3.messagelabs.com id
	89/DA-04858-030CE035; Thu, 27 Feb 2014 04:33:52 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393475629!1675450!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12071 invoked from network); 27 Feb 2014 04:33:51 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:51 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id 6D9B026B;
	Thu, 27 Feb 2014 04:33:49 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id EB2CE4E;
	Thu, 27 Feb 2014 04:33:46 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:46 -0500
Message-Id: <1393475567-4453-8-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
	x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds para-virtualization support to the queue spinlock code
by enabling the queue head to kick the lock holder CPU, if known,
when the lock isn't released for a certain amount of time. It
also enables mutual monitoring between the queue head CPU and the
following node CPU in the queue to make sure that both CPUs
stay scheduled in.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/paravirt.h       |    9 ++-
 arch/x86/include/asm/paravirt_types.h |   12 +++
 arch/x86/include/asm/pvqspinlock.h    |  176 +++++++++++++++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c  |    4 +
 kernel/locking/qspinlock.c            |   41 +++++++-
 5 files changed, 235 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cd6e161..06d3279 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 }
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
+{
+	PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
+}
+#else
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
-
+#endif
 #endif
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7549b8b..87f8836 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -333,9 +333,21 @@ struct arch_spinlock;
 typedef u16 __ticket_t;
 #endif
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+enum pv_kick_type {
+	PV_KICK_LOCK_HOLDER,
+	PV_KICK_QUEUE_HEAD,
+	PV_KICK_NEXT_NODE
+};
+#endif
+
 struct pv_lock_ops {
+#ifdef CONFIG_QUEUE_SPINLOCK
+	void (*kick_cpu)(int cpu, enum pv_kick_type);
+#else
 	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
new file mode 100644
index 0000000..45aae39
--- /dev/null
+++ b/arch/x86/include/asm/pvqspinlock.h
@@ -0,0 +1,176 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ *	Queue Spinlock Para-Virtualization Support
+ *
+ *	+------+	    +-----+ nxtcpu_p1  +----+
+ *	| Lock |	    |Queue|----------->|Next|
+ *	|Holder|<-----------|Head |<-----------|Node|
+ *	+------+ prev_qcode +-----+ prev_qcode +----+
+ *
+ * As long as the current lock holder passes through the slowpath, the queue
+ * head CPU will have its CPU number stored in prev_qcode. The situation is
+ * the same for the node next to the queue head.
+ *
+ * The next node, while setting up the next pointer in the queue head, can
+ * also store its CPU number in that node. With that change, the queue head
+ * will have the CPU numbers of both its upstream and downstream neighbors.
+ *
+ * To make forward progress in lock acquisition and release, it is necessary
+ * that both the lock holder and the queue head virtual CPUs are present.
+ * The queue head can monitor the lock holder, but the lock holder can't
+ * monitor the queue head back. As a result, the next node is also brought
+ * into the picture to monitor the queue head. In the above diagram, all
+ * 3 virtual CPUs should be present, with the queue head and next node
+ * monitoring each other to make sure they are both present.
+ *
+ * Heartbeat counters are used to track if a neighbor is active. There are
+ * 3 different sets of heartbeat counter monitoring going on:
+ * 1) The queue head will wait until the number of loop iterations exceeds a
+ *    certain threshold (HEAD_SPIN_THRESHOLD). In that case, it will send
+ *    a kick-cpu signal to the lock holder if it has the CPU number available.
+ *    The kick-cpu signal will be sent only once, as the real lock holder
+ *    may not be the same as what the queue head thinks it is.
+ * 2) The queue head will periodically clear the active flag of the next node.
+ *    It will then check to see if the active flag remains cleared at the end
+ *    of the cycle. If it does, the next node CPU may be scheduled out, so it
+ *    sends a kick-cpu signal to make sure the next node CPU remains active.
+ * 3) The next node CPU will monitor its own active flag to see if it gets
+ *    cleared periodically. If it does not, the queue head CPU may be
+ *    scheduled out, so it sends a kick-cpu signal to the queue head CPU.
+ */
+
+/*
+ * Loop thresholds
+ */
+#define	HEAD_SPIN_THRESHOLD	(1<<12)	/* Threshold to kick lock holder  */
+#define	CLEAR_ACTIVE_THRESHOLD	(1<<8)	/* Threshold for clearing active flag */
+#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)
+
+/*
+ * PV macros
+ */
+#define PV_SET_VAR(type, var, val)	type var = val
+#define PV_VAR(var)			var
+#define	PV_GET_NXTCPU(node)		(node)->pv.nxtcpu_p1
+
+/*
+ * Additional fields to be added to the qnode structure
+ *
+ * Try to cram the PV fields into 32 bits so that they won't increase the
+ * qnode size on x86-64.
+ */
+#if CONFIG_NR_CPUS >= (1 << 16)
+#define _cpuid_t	u32
+#else
+#define _cpuid_t	u16
+#endif
+
+struct pv_qvars {
+	u8	 active;	/* Set if CPU active		*/
+	u8	 prehead;	/* Set if next to queue head	*/
+	_cpuid_t nxtcpu_p1;	/* CPU number of next node + 1	*/
+};
+
+/**
+ * pv_init_vars - initialize fields in struct pv_qvars
+ * @pv: pointer to struct pv_qvars
+ */
+static __always_inline void pv_init_vars(struct pv_qvars *pv)
+{
+	pv->active    = false;
+	pv->prehead   = false;
+	pv->nxtcpu_p1 = 0;
+}
+
+/**
+ * pv_head_spin_check - perform para-virtualization checks for queue head
+ * @count : loop count
+ * @qcode : queue code of the supposed lock holder
+ * @nxtcpu: CPU number of next node + 1
+ * @next  : pointer to the next node
+ * @offset: offset of the pv_qvars within the qnode
+ *
+ * 4 checks will be done:
+ * 1) See if it is time to kick the lock holder
+ * 2) Set the prehead flag of the next node
+ * 3) Clear the active flag of the next node periodically
+ * 4) If the active flag is not set after a while, assume the CPU of the
+ *    next-in-line node is offline and kick it back up again.
+ */
+static __always_inline void
+pv_head_spin_check(int *count, u32 qcode, int nxtcpu, void *next, int offset)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if ((++(*count) == HEAD_SPIN_THRESHOLD) && qcode) {
+		/*
+		 * Get the CPU number of the lock holder & kick it.
+		 * The lock may have been stolen by another CPU
+		 * if PARAVIRT_UNFAIR_LOCKS is set, so the computed
+		 * CPU number may not be that of the actual lock holder.
+		 */
+		int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+		__queue_kick_cpu(cpu, PV_KICK_LOCK_HOLDER);
+	}
+	if (next) {
+		struct pv_qvars *pv = (struct pv_qvars *)
+				      ((char *)next + offset);
+
+		if (!pv->prehead)
+			pv->prehead = true;
+		if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
+			pv->active = false;
+		if (((*count & CLEAR_ACTIVE_MASK) == 0) &&
+			!pv->active && nxtcpu)
+			/*
+			 * The CPU of the next node doesn't seem to be
+			 * active, need to kick it to make sure that
+			 * it is ready to be transitioned to queue head.
+			 */
+			__queue_kick_cpu(nxtcpu - 1, PV_KICK_NEXT_NODE);
+	}
+}
+
+/**
+ * pv_queue_spin_check - perform para-virtualization checks for queue member
+ * @pv   : pointer to struct pv_qvars
+ * @count: loop count
+ * @qcode: queue code of the previous node (queue head if pv->prehead set)
+ *
+ * Set the active flag if it is next to the queue head
+ */
+static __always_inline void
+pv_queue_spin_check(struct pv_qvars *pv, int *count, u32 qcode)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if (ACCESS_ONCE(pv->prehead)) {
+		if (pv->active == false) {
+			*count = 0;	/* Reset counter */
+			pv->active = true;
+		}
+		if ((++(*count) >= 4 * CLEAR_ACTIVE_THRESHOLD) && qcode) {
+			/*
+			 * The queue head isn't clearing the active flag for
+			 * too long. Need to kick it.
+			 */
+			int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+			__queue_kick_cpu(cpu, PV_KICK_QUEUE_HEAD);
+			*count = 0;
+		}
+	}
+}
+
+/**
+ * pv_set_cpu - set CPU # in the given pv_qvars structure
+ * @pv : pointer to struct pv_qvars to be set
+ * @cpu: cpu number to be set
+ */
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)
+{
+	pv->nxtcpu_p1 = cpu + 1;
+}
+
+#endif /* _ASM_X86_PVQSPINLOCK_H */
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 8c67cbe..30d76f5 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -11,9 +11,13 @@
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+	.kick_cpu = paravirt_nop,
+#else
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
+#endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 22a63fa..f10446e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -58,6 +58,26 @@
  */
 
 /*
+ * Para-virtualized queue spinlock support
+ */
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/pvqspinlock.h>
+#else
+
+#define PV_SET_VAR(type, var, val)
+#define PV_VAR(var)			0
+#define PV_GET_NXTCPU(node)		0
+
+struct pv_qvars {};
+static __always_inline void pv_init_vars(struct pv_qvars *pv)		{}
+static __always_inline void pv_head_spin_check(int *count, u32 qcode,
+				int nxtcpu, void *next, int offset)	{}
+static __always_inline void pv_queue_spin_check(struct pv_qvars *pv,
+				int *count, u32 qcode)			{}
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)	{}
+#endif
+
+/*
  * The 24-bit queue node code is divided into the following 2 fields:
  * Bits 0-1 : queue node index (4 nodes)
  * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
@@ -77,15 +97,13 @@
 
 /*
  * The queue node structure
- *
- * This structure is essentially the same as the mcs_spinlock structure
- * in mcs_spinlock.h file. This structure is retained for future extension
- * where new fields may be added.
  */
 struct qnode {
 	u32		 wait;		/* Waiting flag		*/
+	struct pv_qvars	 pv;		/* Para-virtualization  */
 	struct qnode	*next;		/* Next queue node addr */
 };
+#define PV_OFFSET	offsetof(struct qnode, pv)
 
 struct qnode_set {
 	struct qnode	nodes[MAX_QNODES];
@@ -441,6 +459,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	unsigned int cpu_nr, qn_idx;
 	struct qnode *node, *next;
 	u32 prev_qcode, my_qcode;
+	PV_SET_VAR(int, hcnt, 0);
 
 	/*
 	 * Try the quick spinning code path
@@ -468,6 +487,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	 */
 	node->wait = true;
 	node->next = NULL;
+	pv_init_vars(&node->pv);
 
 	/*
 	 * The lock may be available at this point, try again if no task was
@@ -522,13 +542,22 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		 * and set up the "next" fields of that node.
 		 */
 		struct qnode *prev = xlate_qcode(prev_qcode);
+		PV_SET_VAR(int, qcnt, 0);
 
 		ACCESS_ONCE(prev->next) = node;
 		/*
+		 * Set current CPU number into the previous node
+		 */
+		pv_set_cpu(&prev->pv, cpu_nr);
+
+		/*
 		 * Wait until the waiting flag is off
 		 */
-		while (smp_load_acquire(&node->wait))
+		while (smp_load_acquire(&node->wait)) {
 			arch_mutex_cpu_relax();
+			pv_queue_spin_check(&node->pv, PV_VAR(&qcnt),
+					    prev_qcode);
+		}
 	}
 
 	/*
@@ -560,6 +589,8 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 				goto release_node;
 		}
 		arch_mutex_cpu_relax();
+		pv_head_spin_check(PV_VAR(&hcnt), prev_qcode,
+				PV_GET_NXTCPU(node), node->next, PV_OFFSET);
 	}
 
 notify_next:
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfG-0005FZ-JA; Thu, 27 Feb 2014 04:33:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfD-0005FK-On
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:40 +0000
Received: from [85.158.137.68:8638] by server-12.bemta-3.messagelabs.com id
	03/68-01674-220CE035; Thu, 27 Feb 2014 04:33:38 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393475616!3171741!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30016 invoked from network); 27 Feb 2014 04:33:37 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:37 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id E8174EE;
	Thu, 27 Feb 2014 04:33:31 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id EE29552;
	Thu, 27 Feb 2014 04:33:24 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:39 -0500
Message-Id: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with
	PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better
 - Add an x86 version of asm/qspinlock.h for holding x86 specific
   optimization.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low contention performance.

v2->v3:
 - Simplify the code by using numerous mode only without an unfair option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs
 - Add a configuration option to allow lock stealing which can further
   improve performance in many cases.
 - Enable wakeup of queue head CPU at unlock time for non-numerous
   CPU mode.

This patch set has 3 different sections:
 1) Patches 1-3: Introduces a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed spinlocks won't increase in size or
    break data alignment.
 2) Patches 4 and 5: Enables the use of unfair queue spinlock in a
    real para-virtualized execution environment. This can resolve
    some of the locking related performance issues due to the fact
    that the next CPU to get the lock may have been scheduled out
    for a period of time.
 3) Patches 6-8: Enable qspinlock para-virtualization support by making
    sure that the lock holder and the queue head stay alive as long as
    possible.

Patches 1-3 are fully tested and ready for production. Patches
4-8, on the other hand, are not fully tested. They have undergone
compilation tests with various combinations of kernel config settings,
boot-up tests on bare metal, as well as a simple performance test on
a KVM guest. Further tests and performance characterization are still
needed, so comments on them are welcome. Suggestions or recommendations
on how to add PV support in the Xen environment are also needed.

The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case. Its performance can be much better
with moderate to heavy contention.  This patch has the potential of
improving the performance of all the workloads that have moderate to
heavy spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably won't
show up on machines with fewer than 4 sockets.

The purpose of this patch set is not to solve any particular spinlock
contention problem. Those need to be solved by refactoring the code
to make more efficient use of the lock, or by using finer-grained
locks. The main purpose is to make lock contention problems more
tolerable until someone can spend the time and effort to fix them.

The performance data in bare metal were discussed in the patch
descriptions. For PV support, some simple performance test was
performed on a 2-node 20-CPU KVM guest running 3.14-rc4 kernel in a
larger 8-node machine.  The disk workload of the AIM7 benchmark was
run on both ext4 and xfs RAM disks at 2000 users. The JPM (jobs/minute)
data of the test run were:

  kernel			XFS FS	%change	ext4 FS %change
  ------			------	-------	------- -------
  PV ticketlock (baseline) 	2390438	   -	1366743    -
  qspinlock			1775148	  -26%	1336303  -2.2%
  PV qspinlock			2264151	 -5.3%	1351351  -1.1%
  unfair qspinlock		2404810	 +0.6%	1612903	  +18%
  unfair + PV qspinlock		2419355	 +1.2%	1612903   +18%

The XFS test had moderate spinlock contention of 1.6% whereas the ext4
test had heavy spinlock contention of 15.4% as reported by perf. It
seems like the PV qspinlock support still has room for improvement
compared with the current PV ticketlock implementation.

Waiman Long (8):
  qspinlock: Introducing a 4-byte queue spinlock implementation
  qspinlock, x86: Enable x86-64 to use queue spinlock
  qspinlock, x86: Add x86 specific optimization for 2 contending tasks
  pvqspinlock, x86: Allow unfair spinlock in a real PV environment
  pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
  pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
  pvqspinlock, x86: Add qspinlock para-virtualization support
  pvqspinlock, x86: Enable KVM to use qspinlock's PV support

 arch/x86/Kconfig                      |   12 +
 arch/x86/include/asm/paravirt.h       |    9 +-
 arch/x86/include/asm/paravirt_types.h |   12 +
 arch/x86/include/asm/pvqspinlock.h    |  176 ++++++++++
 arch/x86/include/asm/qspinlock.h      |  133 +++++++
 arch/x86/include/asm/spinlock.h       |    9 +-
 arch/x86/include/asm/spinlock_types.h |    4 +
 arch/x86/kernel/Makefile              |    1 +
 arch/x86/kernel/kvm.c                 |   73 ++++-
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +-
 arch/x86/xen/spinlock.c               |    2 +-
 include/asm-generic/qspinlock.h       |  122 +++++++
 include/asm-generic/qspinlock_types.h |   61 ++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  610 +++++++++++++++++++++++++++++++++
 16 files changed, 1239 insertions(+), 8 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h
 create mode 100644 arch/x86/include/asm/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfP-0005HI-Na; Thu, 27 Feb 2014 04:33:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfL-0005GI-Jf
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:47 +0000
Received: from [193.109.254.147:55779] by server-15.bemta-14.messagelabs.com
	id FD/D1-10839-B20CE035; Thu, 27 Feb 2014 04:33:47 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393475624!7132842!1
X-Originating-IP: [15.217.128.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18285 invoked from network); 27 Feb 2014 04:33:45 -0000
Received: from g2t2352.austin.hp.com (HELO g2t2352.austin.hp.com)
	(15.217.128.51)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:45 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2352.austin.hp.com (Postfix) with ESMTP id 6D60E158;
	Thu, 27 Feb 2014 04:33:44 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id EEDDF49;
	Thu, 27 Feb 2014 04:33:41 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:44 -0500
Message-Id: <1393475567-4453-6-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
	x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a KVM init function to activate the unfair queue
spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 713f1b3..a489140 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
 early_initcall(kvm_spinlock_init_jump);
 
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/*
+ * Enable unfair lock if running in a real para-virtualized environment
+ */
+static __init int kvm_unfair_locks_init_jump(void)
+{
+	if (!kvm_para_available())
+		return 0;
+
+	static_key_slow_inc(&paravirt_unfairlocks_enabled);
+	printk(KERN_INFO "KVM setup unfair spinlock\n");
+
+	return 0;
+}
+early_initcall(kvm_unfair_locks_init_jump);
+#endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfM-0005GY-Mh; Thu, 27 Feb 2014 04:33:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfI-0005Fm-96
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:44 +0000
Received: from [193.109.254.147:36556] by server-11.bemta-14.messagelabs.com
	id 17/05-24604-720CE035; Thu, 27 Feb 2014 04:33:43 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393475621!7087859!1
X-Originating-IP: [15.217.128.51]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8147 invoked from network); 27 Feb 2014 04:33:42 -0000
Received: from g2t2352.austin.hp.com (HELO g2t2352.austin.hp.com)
	(15.217.128.51)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:42 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2352.austin.hp.com (Postfix) with ESMTP id 52F9F28B;
	Thu, 27 Feb 2014 04:33:39 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id DFE6349;
	Thu, 27 Feb 2014 04:33:36 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:42 -0500
Message-Id: <1393475567-4453-4-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 3/8] qspinlock,
	x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

A major problem with the queue spinlock patch is its performance at
low contention levels (2-4 contending tasks), where it is slower than
the corresponding ticket spinlock code path. The following table shows
the execution time (in ms) of a micro-benchmark in which 5M iterations
of the lock/unlock cycle were run on a 10-core Westmere-EX CPU with
two different types of loads - standalone (lock and protected data in
different cachelines) and embedded (lock and protected data in the
same cacheline).

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  135/111	 135/102	  0%/-8%
       2	  732/950	1315/1573	+80%/+66%
       3	 1827/1783	2372/2428	+30%/+36%
       4	 2689/2725	2934/2934	 +9%/+8%
       5	 3736/3748	3658/3652	 -2%/-3%
       6	 4942/4984	4434/4428	-10%/-11%
       7	 6304/6319	5176/5163	-18%/-18%
       8	 7736/7629	5955/5944	-23%/-22%

It can be seen that the performance degradation is particularly bad
with 2 and 3 contending tasks. To reduce that performance deficit at
low contention levels, a special x86-specific optimized code path for
2 contending tasks was added. This special code path is only activated
with fewer than 16K configured CPUs, because it uses a byte in the
32-bit lock word to hold a waiting bit for the 2nd contending task
instead of putting the waiting task into the queue.

With the change, the performance data became:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       2	  732/950	 523/528	-29%/-44%
       3	 1827/1783	2366/2384	+30%/+34%

The queue spinlock code path is now a bit faster with 2 contending
tasks.  There is also a very slight improvement with 3 contending
tasks.

The performance of the optimized code path can vary depending on which
of the several different code paths is taken. It is also not as fair
as the ticket spinlock, and there is some variation in the execution
times of the 2 contending tasks. Testing with different pairs of cores
within the same CPU shows an execution time that varies from 400ms to
1194ms. The ticket spinlock code also shows a variation of 718-1146ms,
which is probably due to the CPU topology within a socket.

In a multi-socket server, the optimized code path also seems to
produce a big performance improvement in cross-node contention traffic
at low contention levels. The table below shows the performance with
1 contending task per node:

		[Standalone]
  # of nodes	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	   135		 135		  0%
       2	  4452		 528		-88%
       3	 10767		2369		-78%
       4	 20835		2921		-86%

The micro-benchmark was also run on a 4-core Ivy-Bridge PC. The table
below shows the collected performance data:

		  [Standalone/Embedded]
  # of tasks	Ticket lock	Queue lock	%Change
  ----------	-----------	----------	-------
       1	  197/178	  181/150	 -8%/-16%
       2	 1109/928    435-1417/697-2125
       3	 1836/1702  1372-3112/1379-3138
       4	 2717/2429  1842-4158/1846-4170

The performance of the queue lock patch varied from run to run whereas
the performance of the ticket lock was more consistent. The queue
lock figures above were the range of values that were reported.

This optimization can also be easily used by other architectures as
long as they support 8-bit and 16-bit atomic operations.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/qspinlock.h      |   20 ++++-
 include/asm-generic/qspinlock_types.h |    8 ++-
 kernel/locking/qspinlock.c            |  192 ++++++++++++++++++++++++++++++++-
 3 files changed, 215 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 44cefee..98db42e 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,12 +7,30 @@
 
 #define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
 
+#define smp_u8_store_release(p, v)	\
+do {					\
+	barrier();			\
+	ACCESS_ONCE(*p) = (v);		\
+} while (0)
+
+/*
+ * As the qcode will be accessed as a 16-bit word, no offset is needed
+ */
+#define _QCODE_VAL_OFFSET	0
+
 /*
  * x86-64 specific queue spinlock union structure
+ * Besides the slock and lock fields, the other fields are only
+ * valid with less than 16K CPUs.
  */
 union arch_qspinlock {
 	struct qspinlock slock;
-	u8		 lock;	/* Lock bit	*/
+	struct {
+		u8  lock;	/* Lock bit	*/
+		u8  wait;	/* Waiting bit	*/
+		u16 qcode;	/* Queue code	*/
+	};
+	u16 lock_wait;		/* Lock and wait bits */
 };
 
 #define	queue_spin_unlock queue_spin_unlock
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index df981d0..3a02a9e 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -48,7 +48,13 @@ typedef struct qspinlock {
 	atomic_t	qlcode;	/* Lock + queue code */
 } arch_spinlock_t;
 
-#define _QCODE_OFFSET		8
+#if CONFIG_NR_CPUS >= (1 << 14)
+# define _Q_MANY_CPUS
+# define _QCODE_OFFSET	8
+#else
+# define _QCODE_OFFSET	16
+#endif
+
 #define _QSPINLOCK_LOCKED	1U
 #define	_QSPINLOCK_LOCK_MASK	0xff
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ed5efa7..22a63fa 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -109,8 +109,11 @@ static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
  *  2) A smp_u8_store_release() macro for byte size store operation	*
  *  3) A "union arch_qspinlock" structure that include the individual	*
  *     fields of the qspinlock structure, including:			*
- *      o slock - the qspinlock structure				*
- *      o lock  - the lock byte						*
+ *      o slock     - the qspinlock structure				*
+ *      o lock      - the lock byte					*
+ *      o wait      - the waiting byte					*
+ *      o qcode     - the queue node code				*
+ *      o lock_wait - the combined lock and waiting bytes		*
  *									*
  ************************************************************************
  */
@@ -129,6 +132,176 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 		return 1;
 	return 0;
 }
+
+#ifndef _Q_MANY_CPUS
+/*
+ * With fewer than 16K CPUs, the following optimizations are possible on
+ * the x86 architecture:
+ *  1) The 2nd byte of the 32-bit lock word can be used as a pending bit
+ *     for a waiting lock acquirer so that it won't need to go through the
+ *     MCS-style locking queue, which has a higher overhead.
+ *  2) The 16-bit queue code can be accessed or modified directly as a
+ *     16-bit short value without disturbing the first 2 bytes.
+ */
+#define	_QSPINLOCK_WAITING	0x100U	/* Waiting bit in 2nd byte   */
+#define	_QSPINLOCK_LWMASK	0xffff	/* Mask for lock & wait bits */
+
+#define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+
+#define queue_spin_trylock_quick queue_spin_trylock_quick
+/**
+ * queue_spin_trylock_quick - fast spinning on the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Old queue spinlock value
+ * Return: 1 if lock acquired, 0 if failed
+ *
+ * This is an optimized contention path for 2 contending tasks. It
+ * should only be entered if no task is waiting in the queue. This
+ * optimized path is not as fair as the ticket spinlock, but it offers
+ * slightly better performance. The regular MCS locking path for 3 or
+ * more contending tasks, however, is fair.
+ *
+ * Depending on the exact timing, there are several different paths that
+ * a contending task can take. The actual contention performance depends
+ * on which path is taken. So it can be faster or slower than the
+ * corresponding ticket spinlock path. On average, it is probably on par
+ * with the ticket spinlock.
+ */
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+	u16		     old;
+
+	/*
+	 * Fall into the quick spinning code path only if no one is waiting
+	 * or the lock is available.
+	 */
+	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
+		     (qsval != _QSPINLOCK_WAITING)))
+		return 0;
+
+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
+
+	if (old == 0) {
+		/*
+		 * Got the lock, can clear the waiting bit now
+		 */
+		smp_u8_store_release(&qlock->wait, 0);
+		return 1;
+	} else if (old == _QSPINLOCK_LOCKED) {
+try_again:
+		/*
+		 * Wait until the lock byte is cleared to get the lock
+		 */
+		do {
+			cpu_relax();
+		} while (ACCESS_ONCE(qlock->lock));
+		/*
+		 * Set the lock bit & clear the waiting bit
+		 */
+		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
+			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+			return 1;
+		/*
+		 * Someone has stolen the lock, so wait again
+		 */
+		goto try_again;
+	} else if (old == _QSPINLOCK_WAITING) {
+		/*
+		 * Another task is already waiting; we are stealing the lock from it.
+		 * A bit of unfairness here won't change the big picture.
+		 * So just take the lock and return.
+		 */
+		return 1;
+	}
+	/*
+	 * Nothing needs to be done if the old value is
+	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
+	 */
+	return 0;
+}
+
+#define queue_code_xchg queue_code_xchg
+/**
+ * queue_code_xchg - exchange a queue code value
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: New queue code to be exchanged
+ * Return: The original qcode value in the queue spinlock
+ */
+static inline u32 queue_code_xchg(struct qspinlock *lock, u32 qcode)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	return (u32)xchg(&qlock->qcode, (u16)qcode);
+}
+
+#define queue_spin_trylock_and_clr_qcode queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	qcode <<= _QCODE_OFFSET;
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+
+#define queue_get_lock_qcode queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value
+ * Return : > 0 if lock is not available
+ *	   = 0 if lock is free
+ *	   < 0 if lock is taken & can return after cleanup
+ *
+ * It is considered locked when either the lock bit or the wait bit is set.
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	u32 qlcode;
+
+	qlcode = (u32)atomic_read(&lock->qlcode);
+	/*
+	 * In the special case that qlcode contains only _QSPINLOCK_LOCKED
+	 * and mycode, try to transition back to the quick spinning code
+	 * path by clearing the qcode and setting the _QSPINLOCK_WAITING
+	 * bit.
+	 */
+	if (qlcode == (_QSPINLOCK_LOCKED | (mycode << _QCODE_OFFSET))) {
+		u32 old = qlcode;
+
+		qlcode = atomic_cmpxchg(&lock->qlcode, old,
+				_QSPINLOCK_LOCKED|_QSPINLOCK_WAITING);
+		if (qlcode == old) {
+			union arch_qspinlock *slock =
+				(union arch_qspinlock *)lock;
+try_again:
+			/*
+			 * Wait until the lock byte is cleared
+			 */
+			do {
+				cpu_relax();
+			} while (ACCESS_ONCE(slock->lock));
+			/*
+			 * Set the lock bit & clear the waiting bit
+			 */
+			if (cmpxchg(&slock->lock_wait, _QSPINLOCK_WAITING,
+				    _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+				return -1;	/* Got the lock */
+			goto try_again;
+		}
+	}
+	*qcode = qlcode >> _QCODE_OFFSET;
+	return qlcode & _QSPINLOCK_LWMASK;
+}
+#endif /* _Q_MANY_CPUS */
+
 #else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
 /*
  * Generic functions for architectures that do not support atomic
@@ -144,7 +317,7 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
 	int qlcode = atomic_read(lock->qlcode);
 
 	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
-		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
 			return 1;
 	return 0;
 }
@@ -156,6 +329,10 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
  * that may get superseded by a more optimized version.			*
  ************************************************************************
  */
+#ifndef queue_spin_trylock_quick
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{ return 0; }
+#endif
 
 #ifndef queue_get_lock_qcode
 /**
@@ -266,6 +443,11 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	u32 prev_qcode, my_qcode;
 
 	/*
+	 * Try the quick spinning code path
+	 */
+	if (queue_spin_trylock_quick(lock, qsval))
+		return;
+	/*
 	 * Get the queue node
 	 */
 	cpu_nr = smp_processor_id();
@@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		return;
 	}
 
+#ifdef queue_code_xchg
+	prev_qcode = queue_code_xchg(lock, my_qcode);
+#else
 	/*
 	 * Exchange current copy of the queue node code
 	 */
@@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	} else
 		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
 	my_qcode &= ~_QSPINLOCK_LOCKED;
+#endif /* queue_code_xchg */
 
 	if (prev_qcode) {
 		/*
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfH-0005Fg-7C; Thu, 27 Feb 2014 04:33:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfE-0005FM-MP
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:40 +0000
Received: from [193.109.254.147:55566] by server-9.bemta-14.messagelabs.com id
	98/70-24895-320CE035; Thu, 27 Feb 2014 04:33:39 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393475617!7087850!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7742 invoked from network); 27 Feb 2014 04:33:38 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:38 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id CDF0625C;
	Thu, 27 Feb 2014 04:33:36 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 68F554C;
	Thu, 27 Feb 2014 04:33:34 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:41 -0500
Message-Id: <1393475567-4453-3-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 2/8] qspinlock,
	x86: Enable x86-64 to use queue spinlock
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch makes the necessary changes at the x86 architecture-specific
layer to enable the use of the queue spinlock for x86-64. As x86-32
machines are typically not multi-socket, the benefit of the queue
spinlock may not be apparent, so it is not enabled there.

Currently, there are some incompatibilities between the para-virtualized
spinlock code (which hard-codes the use of the ticket spinlock) and the
queue spinlock. Therefore, the use of the queue spinlock is disabled when
the para-virtualized spinlock is enabled.

The arch/x86/include/asm/qspinlock.h header file includes some
x86-specific optimizations that make the queue spinlock code perform
better than the generic implementation.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 arch/x86/Kconfig                      |    1 +
 arch/x86/include/asm/qspinlock.h      |   41 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/spinlock.h       |    5 ++++
 arch/x86/include/asm/spinlock_types.h |    4 +++
 4 files changed, 51 insertions(+), 0 deletions(-)
 create mode 100644 arch/x86/include/asm/qspinlock.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 1b4ff87..5bf70ab 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -17,6 +17,7 @@ config X86_64
 	depends on 64BIT
 	select X86_DEV_DMA_OPS
 	select ARCH_USE_CMPXCHG_LOCKREF
+	select ARCH_USE_QUEUE_SPINLOCK
 
 ### Arch settings
 config X86
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
new file mode 100644
index 0000000..44cefee
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,41 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+
+/*
+ * x86-64 specific queue spinlock union structure
+ */
+union arch_qspinlock {
+	struct qspinlock slock;
+	u8		 lock;	/* Lock bit	*/
+};
+
+#define	queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+ * No special memory barrier other than a compiler one is needed for the
+ * x86 architecture. A compiler barrier is added at the end to make sure
+ * that clearing the lock bit is done ASAP without artificial delay
+ * due to compiler optimization.
+ */
+static inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	barrier();
+	ACCESS_ONCE(qlock->lock) = 0;
+	barrier();
+}
+
+#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index bf156de..6e6de1f 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -43,6 +43,10 @@
 extern struct static_key paravirt_ticketlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm/qspinlock.h>
+#else
+
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 
 static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
@@ -181,6 +185,7 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
 {
 	arch_spin_lock(lock);
 }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index 4f1bea1..7960268 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -23,6 +23,9 @@ typedef u32 __ticketpair_t;
 
 #define TICKET_SHIFT	(sizeof(__ticket_t) * 8)
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+#include <asm-generic/qspinlock_types.h>
+#else
 typedef struct arch_spinlock {
 	union {
 		__ticketpair_t head_tail;
@@ -33,6 +36,7 @@ typedef struct arch_spinlock {
 } arch_spinlock_t;
 
 #define __ARCH_SPIN_LOCK_UNLOCKED	{ { 0 } }
+#endif /* CONFIG_QUEUE_SPINLOCK */
 
 #include <asm/rwlock.h>
 
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfI-0005Fo-9A; Thu, 27 Feb 2014 04:33:44 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfE-0005FL-OJ
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:41 +0000
Received: from [193.109.254.147:2381] by server-2.bemta-14.messagelabs.com id
	55/C4-01236-320CE035; Thu, 27 Feb 2014 04:33:39 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-10.tower-27.messagelabs.com!1393475616!7140481!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4709 invoked from network); 27 Feb 2014 04:33:37 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-10.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:37 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id 57828149;
	Thu, 27 Feb 2014 04:33:34 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id DA83346;
	Thu, 27 Feb 2014 04:33:31 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:40 -0500
Message-Id: <1393475567-4453-2-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte queue
	spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch introduces a new queue spinlock implementation that can
serve as an alternative to the default ticket spinlock. Compared with
the ticket spinlock, this queue spinlock should be almost as fair. It
has about the same speed in the single-thread case and can be much
faster in high contention situations. Only in light to moderate
contention, where the average queue depth is around 1-3, may this
queue spinlock be a bit slower due to the higher slowpath overhead.

This queue spinlock is especially suited to NUMA machines with a large
number of cores, as the chance of spinlock contention is much higher
on those machines. The cost of contention is also higher because of
slower inter-node memory traffic.

The idea behind this spinlock implementation is the fact that spinlocks
are acquired with preemption disabled. In other words, a process
will not be migrated to another CPU while it is trying to get a
spinlock. Ignoring interrupt handling, a CPU can only be contending
on one spinlock at any one time. Of course, an interrupt handler can
try to acquire one spinlock while the interrupted user process is in
the process of getting another spinlock. By allocating a set of per-cpu
queue nodes and using them to form a waiting queue, we can encode the
queue node address into a much smaller 16-bit value. Together with
the 1-byte lock bit, this queue spinlock implementation needs only
4 bytes to hold all the information it requires.

The current queue node address encoding of the 4-byte word is as
follows:
Bits 0-7  : the locked byte
Bits 8-9  : queue node index in the per-cpu array (4 entries)
Bits 10-31: cpu number + 1 (max cpus = 4M -1)

In the extremely unlikely case that all the queue node entries are
used up, the current code will fall back to busy spinning without
waiting in the queue, and a warning message will be printed.

For single-thread performance (no contention), a 256K lock/unlock
loop was run on a 2.4GHz Westmere x86-64 CPU.  The following table
shows the average time (in ns) for a single lock/unlock sequence
(including the looping and timing overhead):

  Lock Type			Time (ns)
  ---------			---------
  Ticket spinlock		  14.1
  Queue spinlock		   8.8

So the queue spinlock is much faster than the ticket spinlock, even
though the overhead of locking and unlocking should be pretty small
when there is no contention. The performance advantage comes mainly
from the fact that the ticket spinlock does a read-modify-write (add)
instruction in unlock, whereas the queue spinlock only does a simple
write, which can be much faster on a pipelined CPU.
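That difference can be seen in a userspace sketch (C11 atomics with
hypothetical lock types; this is not the kernel code itself):

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative contrast: a ticket unlock must read-modify-write the
 * head counter, while an uncontended queue-spinlock unlock is just a
 * release store that clears the lock word. */
typedef struct { _Atomic uint16_t head, tail; } ticket_lock_t;
typedef struct { _Atomic uint32_t qlcode; } qspinlock_t;

static void ticket_unlock(ticket_lock_t *l)
{
	/* RMW: bump head so the next ticket holder may proceed */
	atomic_fetch_add_explicit(&l->head, 1, memory_order_release);
}

static void queue_unlock(qspinlock_t *l)
{
	/* Simple store, assuming no queue code is set; the contended
	 * path in the patch clears only the lock byte. */
	atomic_store_explicit(&l->qlcode, 0, memory_order_release);
}
```

On most pipelined CPUs the plain store retires without the implicit
load and ALU work that the fetch-add requires.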

The AIM7 benchmark was run on an 8-socket 80-core DL980 with Westmere
x86-64 CPUs, with an XFS filesystem on a ramdisk and HT off, to
evaluate the performance impact of this patch on a 3.13 kernel.

  +------------+----------+-----------------+---------+
  | Kernel     | 3.13 JPM |    3.13 with    | %Change |
  |            |          | qspinlock patch |         |
  +------------+----------+-----------------+---------+
  |                  10-100 users                     |
  +------------+----------+-----------------+---------+
  |custom      |   357459 |      363109     |  +1.58% |
  |dbase       |   496847 |      498801     |  +0.39% |
  |disk        |  2925312 |     2771387     |  -5.26% |
  |five_sec    |   166612 |      169215     |  +1.56% |
  |fserver     |   382129 |      383279     |  +0.30% |
  |high_systime|    16356 |       16380     |  +0.15% |
  |short       |  4521978 |     4257363     |  -5.85% |
  +------------+----------+-----------------+---------+
  |                 200-1000 users                    |
  +------------+----------+-----------------+---------+
  |custom      |   449070 |      447711     |  -0.30% |
  |dbase       |   845029 |      853362     |  +0.99% |
  |disk        |  2725249 |     4892907     | +79.54% |
  |five_sec    |   169410 |      170638     |  +0.72% |
  |fserver     |   489662 |      491828     |  +0.44% |
  |high_systime|   142823 |      143790     |  +0.68% |
  |short       |  7435288 |     9016171     | +21.26% |
  +------------+----------+-----------------+---------+
  |                1100-2000 users                    |
  +------------+----------+-----------------+---------+
  |custom      |   432470 |      432570     |  +0.02% |
  |dbase       |   889289 |      890026     |  +0.08% |
  |disk        |  2565138 |     5008732     | +95.26% |
  |five_sec    |   169141 |      170034     |  +0.53% |
  |fserver     |   498569 |      500701     |  +0.43% |
  |high_systime|   229913 |      245866     |  +6.94% |
  |short       |  8496794 |     8281918     |  -2.53% |
  +------------+----------+-----------------+---------+

The workload with the most gain was the disk workload. Without the
patch, the perf profile at 1500 users looked like:

 26.19%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--47.28%-- evict
              |--46.87%-- inode_sb_list_add
              |--1.24%-- xlog_cil_insert_items
              |--0.68%-- __remove_inode_hash
              |--0.67%-- inode_wait_for_writeback
               --3.26%-- [...]
 22.96%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  5.56%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  4.87%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.04%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.30%    reaim  [kernel.kallsyms]  [k] memcpy
  1.08%    reaim  [unknown]          [.] 0x0000003c52009447

There was pretty high spinlock contention on the inode_sb_list_lock
and maybe the inode's i_lock.

With the patch, the perf profile at 1500 users became:

 26.82%  swapper  [kernel.kallsyms]  [k] cpu_idle_loop
  4.66%    reaim  [kernel.kallsyms]  [k] mutex_spin_on_owner
  3.97%    reaim  [kernel.kallsyms]  [k] update_cfs_rq_blocked_load
  2.40%    reaim  [kernel.kallsyms]  [k] queue_spin_lock_slowpath
              |--88.31%-- _raw_spin_lock
              |          |--36.02%-- inode_sb_list_add
              |          |--35.09%-- evict
              |          |--16.89%-- xlog_cil_insert_items
              |          |--6.30%-- try_to_wake_up
              |          |--2.20%-- _xfs_buf_find
              |          |--0.75%-- __remove_inode_hash
              |          |--0.72%-- __mutex_lock_slowpath
              |          |--0.53%-- load_balance
              |--6.02%-- _raw_spin_lock_irqsave
              |          |--74.75%-- down_trylock
              |          |--9.69%-- rcu_check_quiescent_state
              |          |--7.47%-- down
              |          |--3.57%-- up
              |          |--1.67%-- rwsem_wake
              |          |--1.00%-- remove_wait_queue
              |          |--0.56%-- pagevec_lru_move_fn
              |--5.39%-- _raw_spin_lock_irq
              |          |--82.05%-- rwsem_down_read_failed
              |          |--10.48%-- rwsem_down_write_failed
              |          |--4.24%-- __down
              |          |--2.74%-- __schedule
               --0.28%-- [...]
  2.20%    reaim  [kernel.kallsyms]  [k] memcpy
  1.84%    reaim  [unknown]          [.] 0x000000000041517b
  1.77%    reaim  [kernel.kallsyms]  [k] _raw_spin_lock
              |--21.08%-- xlog_cil_insert_items
              |--10.14%-- xfs_icsb_modify_counters
              |--7.20%-- xfs_iget_cache_hit
              |--6.56%-- inode_sb_list_add
              |--5.49%-- _xfs_buf_find
              |--5.25%-- evict
              |--5.03%-- __remove_inode_hash
              |--4.64%-- __mutex_lock_slowpath
              |--3.78%-- selinux_inode_free_security
              |--2.95%-- xfs_inode_is_filestream
              |--2.35%-- try_to_wake_up
              |--2.07%-- xfs_inode_set_reclaim_tag
              |--1.52%-- list_lru_add
              |--1.16%-- xfs_inode_clear_eofblocks_tag
		  :
  1.30%    reaim  [kernel.kallsyms]  [k] effective_load
  1.27%    reaim  [kernel.kallsyms]  [k] mspin_lock
  1.10%    reaim  [kernel.kallsyms]  [k] security_compute_sid

On the ext4 filesystem, the disk workload improved from 416281 JPM
to 899101 JPM (+116%) with the patch. In this case, the contended
spinlock is the mb_cache_spinlock.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Acked-by: Rik van Riel <riel@redhat.com>
---
 include/asm-generic/qspinlock.h       |  122 ++++++++++
 include/asm-generic/qspinlock_types.h |   55 +++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  393 +++++++++++++++++++++++++++++++++
 5 files changed, 578 insertions(+), 0 deletions(-)
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
new file mode 100644
index 0000000..08da60f
--- /dev/null
+++ b/include/asm-generic/qspinlock.h
@@ -0,0 +1,122 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_H
+#define __ASM_GENERIC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+/*
+ * External function declarations
+ */
+extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval);
+
+/**
+ * queue_spin_is_locked - is the spinlock locked?
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if it is locked, 0 otherwise
+ */
+static __always_inline int queue_spin_is_locked(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & _QSPINLOCK_LOCKED;
+}
+
+/**
+ * queue_spin_value_unlocked - is the spinlock structure unlocked?
+ * @lock: queue spinlock structure
+ * Return: 1 if it is unlocked, 0 otherwise
+ */
+static __always_inline int queue_spin_value_unlocked(struct qspinlock lock)
+{
+	return !(atomic_read(&lock.qlcode) & _QSPINLOCK_LOCKED);
+}
+
+/**
+ * queue_spin_is_contended - check if the lock is contended
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock contended, 0 otherwise
+ */
+static __always_inline int queue_spin_is_contended(struct qspinlock *lock)
+{
+	return atomic_read(&lock->qlcode) & ~_QSPINLOCK_LOCK_MASK;
+}
+/**
+ * queue_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock(struct qspinlock *lock)
+{
+	if (!atomic_read(&lock->qlcode) &&
+	   (atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/**
+ * queue_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock(struct qspinlock *lock)
+{
+	int qsval;
+
+	/*
+	 * To reduce memory access to only once for the cold cache case,
+	 * a direct cmpxchg() is performed in the fastpath to optimize the
+	 * uncontended case. The contended performance, however, may suffer
+	 * a bit because of that.
+	 */
+	qsval = atomic_cmpxchg(&lock->qlcode, 0, _QSPINLOCK_LOCKED);
+	if (likely(qsval == 0))
+		return;
+	queue_spin_lock_slowpath(lock, qsval);
+}
+
+#ifndef queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * Use an atomic subtraction to clear the lock bit.
+	 */
+	smp_mb__before_atomic_dec();
+	atomic_sub(_QSPINLOCK_LOCKED, &lock->qlcode);
+}
+#endif
+
+/*
+ * Initializer
+ */
+#define	__ARCH_SPIN_LOCK_UNLOCKED	{ ATOMIC_INIT(0) }
+
+/*
+ * Remapping spinlock architecture specific functions to the corresponding
+ * queue spinlock functions.
+ */
+#define arch_spin_is_locked(l)		queue_spin_is_locked(l)
+#define arch_spin_is_contended(l)	queue_spin_is_contended(l)
+#define arch_spin_value_unlocked(l)	queue_spin_value_unlocked(l)
+#define arch_spin_lock(l)		queue_spin_lock(l)
+#define arch_spin_trylock(l)		queue_spin_trylock(l)
+#define arch_spin_unlock(l)		queue_spin_unlock(l)
+#define arch_spin_lock_flags(l, f)	queue_spin_lock(l)
+
+#endif /* __ASM_GENERIC_QSPINLOCK_H */
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
new file mode 100644
index 0000000..df981d0
--- /dev/null
+++ b/include/asm-generic/qspinlock_types.h
@@ -0,0 +1,55 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H
+#define __ASM_GENERIC_QSPINLOCK_TYPES_H
+
+/*
+ * Including atomic.h with PARAVIRT on will cause compilation errors because
+ * of recursive header file inclusion via paravirt_types.h. A workaround is
+ * to include paravirt_types.h here in this case.
+ */
+#ifdef CONFIG_PARAVIRT
+# include <asm/paravirt_types.h>
+#else
+# include <linux/types.h>
+# include <linux/atomic.h>
+#endif
+
+/*
+ * The queue spinlock data structure - a 32-bit word
+ *
+ * For NR_CPUS >= 16K, the bit assignment is:
+ *   Bit  0   : Set if locked
+ *   Bits 1-7 : Not used
+ *   Bits 8-31: Queue code
+ *
+ * For NR_CPUS < 16K, the bit assignment is:
+ *   Bit   0   : Set if locked
+ *   Bits  1-7 : Not used
+ *   Bits  8-15: Reserved for architecture specific optimization
+ *   Bits 16-31: Queue code
+ */
+typedef struct qspinlock {
+	atomic_t	qlcode;	/* Lock + queue code */
+} arch_spinlock_t;
+
+#define _QCODE_OFFSET		8
+#define _QSPINLOCK_LOCKED	1U
+#define	_QSPINLOCK_LOCK_MASK	0xff
+
+#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index d2b32ac..f185584 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -223,3 +223,10 @@ endif
 config MUTEX_SPIN_ON_OWNER
 	def_bool y
 	depends on SMP && !DEBUG_MUTEXES
+
+config ARCH_USE_QUEUE_SPINLOCK
+	bool
+
+config QUEUE_SPINLOCK
+	def_bool y if ARCH_USE_QUEUE_SPINLOCK
+	depends on SMP && !PARAVIRT_SPINLOCKS
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index baab8e5..e3b3293 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_LOCKDEP) += lockdep_proc.o
 endif
 obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
+obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o
 obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
new file mode 100644
index 0000000..ed5efa7
--- /dev/null
+++ b/kernel/locking/qspinlock.c
@@ -0,0 +1,393 @@
+/*
+ * Queue spinlock
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P.
+ *
+ * Authors: Waiman Long <waiman.long@hp.com>
+ */
+#include <linux/smp.h>
+#include <linux/bug.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mutex.h>
+#include <linux/spinlock.h>
+
+/*
+ * The basic principle of a queue-based spinlock can best be understood
+ * by studying a classic queue-based spinlock implementation called the
+ * MCS lock. The paper below provides a good description for this kind
+ * of lock.
+ *
+ * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
+ *
+ * This queue spinlock implementation is based on the MCS lock with twists
+ * to make it fit the following constraints:
+ * 1. A max spinlock size of 4 bytes
+ * 2. Good fastpath performance
+ * 3. No change in the locking APIs
+ *
+ * The queue spinlock fastpath is as simple as it can get, all the heavy
+ * lifting is done in the lock slowpath. The main idea behind this queue
+ * spinlock implementation is to keep the spinlock size at 4 bytes while
+ * at the same time implement a queue structure to queue up the waiting
+ * lock spinners.
+ *
+ * Since preemption is disabled before getting the lock, a given CPU will
+ * only need to use one queue node structure in a non-interrupt context.
+ * A percpu queue node structure will be allocated for this purpose and the
+ * cpu number will be put into the queue spinlock structure to indicate the
+ * tail of the queue.
+ *
+ * To handle spinlock acquisition in interrupt context (softirq or hardirq),
+ * the queue node structure is actually an array for supporting nested spin
+ * locking operations in interrupt handlers. If all the entries in the
+ * array are used up, a warning message will be printed (as that shouldn't
+ * happen in normal circumstances) and the lock spinner will fall back to
+ * busy spinning instead of waiting in a queue.
+ */
+
+/*
+ * The 24-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
+ *
+ * The 16-bit queue node code is divided into the following 2 fields:
+ * Bits 0-1 : queue node index (4 nodes)
+ * Bits 2-15: CPU number + 1   (16K - 1 CPUs)
+ *
+ * A queue node code of 0 indicates that no one is waiting for the lock.
+ * Since the value 0 cannot be used as a valid CPU number, we add 1 to
+ * the CPU number before putting it into the queue code.
+ */
+#define MAX_QNODES		4
+#ifndef _QCODE_VAL_OFFSET
+#define _QCODE_VAL_OFFSET	_QCODE_OFFSET
+#endif
+
+/*
+ * The queue node structure
+ *
+ * This structure is essentially the same as the mcs_spinlock structure
+ * in mcs_spinlock.h file. This structure is retained for future extension
+ * where new fields may be added.
+ */
+struct qnode {
+	u32		 wait;		/* Waiting flag		*/
+	struct qnode	*next;		/* Next queue node addr */
+};
+
+struct qnode_set {
+	struct qnode	nodes[MAX_QNODES];
+	int		node_idx;	/* Current node to use */
+};
+
+/*
+ * Per-CPU queue node structures
+ */
+static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
+
+/*
+ ************************************************************************
+ * The following optimized code is for architectures that support:	*
+ *  1) Atomic byte and short data write					*
+ *  2) Byte and short data exchange and compare-exchange instructions	*
+ *									*
+ * For those architectures, their asm/qspinlock.h header file should	*
+ * define the following in order to use the optimized code.		*
+ *  1) The _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS macro			*
+ *  2) A smp_u8_store_release() macro for byte size store operation	*
+ * 3) A "union arch_qspinlock" structure that includes the individual	*
+ *     fields of the qspinlock structure, including:			*
+ *      o slock - the qspinlock structure				*
+ *      o lock  - the lock byte						*
+ *									*
+ ************************************************************************
+ */
+#ifdef _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!ACCESS_ONCE(qlock->lock) &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+#else /*  _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS  */
+/*
+ * Generic functions for architectures that do not support atomic
+ * byte or short data types.
+ */
+/**
+ * queue_spin_setlock - try to acquire the lock by setting the lock bit
+ * @lock: Pointer to queue spinlock structure
+ * Return: 1 if lock bit set successfully, 0 if failed
+ */
+static __always_inline int queue_spin_setlock(struct qspinlock *lock)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	if (!(qlcode & _QSPINLOCK_LOCKED) && (atomic_cmpxchg(&lock->qlcode,
+		qlcode, qlcode|_QSPINLOCK_LOCKED) == qlcode))
+			return 1;
+	return 0;
+}
+#endif /* _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS */
+
+/*
+ ************************************************************************
+ * Inline functions used by the queue_spin_lock_slowpath() function	*
+ * that may get superseded by a more optimized version.			*
+ ************************************************************************
+ */
+
+#ifndef queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock  : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value (not used)
+ * Return : > 0 if lock is not available, = 0 if lock is free
+ */
+static inline int
+queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
+{
+	int qlcode = atomic_read(&lock->qlcode);
+
+	*qcode = qlcode;
+	return qlcode & _QSPINLOCK_LOCKED;
+}
+#endif /* queue_get_lock_qcode */
+
+#ifndef queue_spin_trylock_and_clr_qcode
+/**
+ * queue_spin_trylock_and_clr_qcode - Try to lock & clear qcode simultaneously
+ * @lock : Pointer to queue spinlock structure
+ * @qcode: The supposedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+	return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+#endif /* queue_spin_trylock_and_clr_qcode */
+
+#ifndef queue_encode_qcode
+/**
+ * queue_encode_qcode - Encode the CPU number & node index into a qnode code
+ * @cpu_nr: CPU number
+ * @qn_idx: Queue node index
+ * Return : A qnode code that can be saved into the qspinlock structure
+ *
+ * The lock bit is set in the encoded 32-bit value because the need to
+ * encode a qnode implies that the lock has been taken.
+ */
+static u32 queue_encode_qcode(u32 cpu_nr, u8 qn_idx)
+{
+	return ((cpu_nr + 1) << (_QCODE_VAL_OFFSET + 2)) |
+		(qn_idx << _QCODE_VAL_OFFSET) | _QSPINLOCK_LOCKED;
+}
+#endif /* queue_encode_qcode */
+
+/*
+ ************************************************************************
+ * Other inline functions needed by the queue_spin_lock_slowpath()	*
+ * function.								*
+ ************************************************************************
+ */
+
+/**
+ * xlate_qcode - translate the queue code into the queue node address
+ * @qcode: Queue code to be translated
+ * Return: The corresponding queue node address
+ */
+static inline struct qnode *xlate_qcode(u32 qcode)
+{
+	u32 cpu_nr = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+	u8  qn_idx = (qcode >> _QCODE_VAL_OFFSET) & 3;
+
+	return per_cpu_ptr(&qnset.nodes[qn_idx], cpu_nr);
+}
+
+/**
+ * get_qnode - Get a queue node address
+ * @qn_idx: Pointer to queue node index [out]
+ * Return : queue node address & queue node index in qn_idx, or NULL if
+ *	    no free queue node available.
+ */
+static struct qnode *get_qnode(unsigned int *qn_idx)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+	int i;
+
+	if (unlikely(qset->node_idx >= MAX_QNODES))
+		return NULL;
+	i = qset->node_idx++;
+	*qn_idx = i;
+	return &qset->nodes[i];
+}
+
+/**
+ * put_qnode - Return a queue node to the pool
+ */
+static void put_qnode(void)
+{
+	struct qnode_set *qset = this_cpu_ptr(&qnset);
+
+	qset->node_idx--;
+}
+
+/**
+ * queue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Current value of the queue spinlock 32-bit word
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
+{
+	unsigned int cpu_nr, qn_idx;
+	struct qnode *node, *next;
+	u32 prev_qcode, my_qcode;
+
+	/*
+	 * Get the queue node
+	 */
+	cpu_nr = smp_processor_id();
+	node   = get_qnode(&qn_idx);
+
+	/*
+	 * It should never happen that all the queue nodes are being used.
+	 */
+	BUG_ON(!node);
+
+	/*
+	 * Set up the new cpu code to be exchanged
+	 */
+	my_qcode = queue_encode_qcode(cpu_nr, qn_idx);
+
+	/*
+	 * Initialize the queue node
+	 */
+	node->wait = true;
+	node->next = NULL;
+
+	/*
+	 * The lock may be available at this point, try again if no task was
+	 * waiting in the queue.
+	 */
+	if (!(qsval >> _QCODE_OFFSET) && queue_spin_trylock(lock)) {
+		put_qnode();
+		return;
+	}
+
+	/*
+	 * Exchange current copy of the queue node code
+	 */
+	prev_qcode = atomic_xchg(&lock->qlcode, my_qcode);
+	/*
+	 * It is possible that we may accidentally steal the lock. If this is
+	 * the case, we need to either release it if not the head of the queue
+	 * or get the lock and be done with it.
+	 */
+	if (unlikely(!(prev_qcode & _QSPINLOCK_LOCKED))) {
+		if (prev_qcode == 0) {
+			/*
+			 * Got the lock since it is at the head of the queue
+			 * Now try to atomically clear the queue code.
+			 */
+			if (atomic_cmpxchg(&lock->qlcode, my_qcode,
+					  _QSPINLOCK_LOCKED) == my_qcode)
+				goto release_node;
+			/*
+			 * The cmpxchg fails only if one or more tasks
+			 * are added to the queue. In this case, we need to
+			 * notify the next one to be the head of the queue.
+			 */
+			goto notify_next;
+		}
+		/*
+		 * Accidentally steal the lock, release the lock and
+		 * let the queue head get it.
+		 */
+		queue_spin_unlock(lock);
+	} else
+		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
+	my_qcode &= ~_QSPINLOCK_LOCKED;
+
+	if (prev_qcode) {
+		/*
+		 * Not at the queue head; get the address of the previous
+		 * node and set up the "next" field of that node.
+		 */
+		struct qnode *prev = xlate_qcode(prev_qcode);
+
+		ACCESS_ONCE(prev->next) = node;
+		/*
+		 * Wait until the waiting flag is off
+		 */
+		while (smp_load_acquire(&node->wait))
+			arch_mutex_cpu_relax();
+	}
+
+	/*
+	 * At the head of the wait queue now
+	 */
+	while (true) {
+		u32 qcode;
+		int retval;
+
+		retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
+		if (retval > 0)
+			;	/* Lock not available yet */
+		else if (retval < 0)
+			/* Lock taken, can release the node & return */
+			goto release_node;
+		else if (qcode != my_qcode) {
+			/*
+			 * Just get the lock with other spinners waiting
+			 * in the queue.
+			 */
+			if (queue_spin_setlock(lock))
+				goto notify_next;
+		} else {
+			/*
+			 * Get the lock & clear the queue code simultaneously
+			 */
+			if (queue_spin_trylock_and_clr_qcode(lock, qcode))
+				/* No need to notify the next one */
+				goto release_node;
+		}
+		arch_mutex_cpu_relax();
+	}
+
+notify_next:
+	/*
+	 * Wait, if needed, until the next one in queue sets up the next field
+	 */
+	while (!(next = ACCESS_ONCE(node->next)))
+		arch_mutex_cpu_relax();
+	/*
+	 * The next one in queue is now at the head
+	 */
+	smp_store_release(&next->wait, false);
+
+release_node:
+	put_qnode();
+}
+EXPORT_SYMBOL(queue_spin_lock_slowpath);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfT-0005IF-D5; Thu, 27 Feb 2014 04:33:55 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfR-0005HQ-5N
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:53 +0000
Received: from [85.158.137.68:36756] by server-4.bemta-3.messagelabs.com id
	89/DA-04858-030CE035; Thu, 27 Feb 2014 04:33:52 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393475629!1675450!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12071 invoked from network); 27 Feb 2014 04:33:51 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:51 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id 6D9B026B;
	Thu, 27 Feb 2014 04:33:49 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id EB2CE4E;
	Thu, 27 Feb 2014 04:33:46 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:46 -0500
Message-Id: <1393475567-4453-8-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
	x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds para-virtualization support to the queue spinlock
code by enabling the queue head to kick the lock holder CPU, if known,
when the lock isn't released for a certain amount of time. It also
enables mutual monitoring between the queue head CPU and the next node
CPU in the queue, to make sure that both of their vCPUs stay
scheduled.
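A minimal sketch of the queue-head side of this scheme, with
hypothetical names (kick_cpu() here stands in for the
pv_lock_ops.kick_cpu hook added by this patch, and the threshold
value is purely illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Spin up to a threshold; if the lock still isn't released, ask the
 * hypervisor to kick the (possibly preempted) lock holder's vCPU. */
#define HEAD_SPIN_THRESHOLD	1024

static int last_kicked = -1;

static void kick_cpu(int cpu)
{
	last_kicked = cpu;	/* real code would hypercall here */
}

/* Returns true if the lock was seen released, false after kicking. */
static bool head_spin_or_kick(_Atomic int *locked, int holder_cpu)
{
	for (int loops = 0; loops < HEAD_SPIN_THRESHOLD; loops++)
		if (!atomic_load_explicit(locked, memory_order_acquire))
			return true;
	if (holder_cpu >= 0)
		kick_cpu(holder_cpu);	/* PV_KICK_LOCK_HOLDER */
	return false;
}
```

The same pattern applies in the other direction: the next node
monitors the queue head and kicks it if it appears to have been
descheduled.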

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/paravirt.h       |    9 ++-
 arch/x86/include/asm/paravirt_types.h |   12 +++
 arch/x86/include/asm/pvqspinlock.h    |  176 +++++++++++++++++++++++++++++++++
 arch/x86/kernel/paravirt-spinlocks.c  |    4 +
 kernel/locking/qspinlock.c            |   41 +++++++-
 5 files changed, 235 insertions(+), 7 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cd6e161..06d3279 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
 }
 
 #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
+{
+	PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
+}
+#else
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
 							__ticket_t ticket)
 {
@@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
 {
 	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
-
+#endif
 #endif
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 7549b8b..87f8836 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -333,9 +333,21 @@ struct arch_spinlock;
 typedef u16 __ticket_t;
 #endif
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+enum pv_kick_type {
+	PV_KICK_LOCK_HOLDER,
+	PV_KICK_QUEUE_HEAD,
+	PV_KICK_NEXT_NODE
+};
+#endif
+
 struct pv_lock_ops {
+#ifdef CONFIG_QUEUE_SPINLOCK
+	void (*kick_cpu)(int cpu, enum pv_kick_type);
+#else
 	struct paravirt_callee_save lock_spinning;
 	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
+#endif
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
new file mode 100644
index 0000000..45aae39
--- /dev/null
+++ b/arch/x86/include/asm/pvqspinlock.h
@@ -0,0 +1,176 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ *	Queue Spinlock Para-Virtualization Support
+ *
+ *	+------+	    +-----+ nxtcpu_p1  +----+
+ *	| Lock |	    |Queue|----------->|Next|
+ *	|Holder|<-----------|Head |<-----------|Node|
+ *	+------+ prev_qcode +-----+ prev_qcode +----+
+ *
+ * As long as the current lock holder passes through the slowpath, the queue
+ * head CPU will have its CPU number stored in prev_qcode. The situation is
+ * the same for the node next to the queue head.
+ *
+ * The next node, while setting up the next pointer in the queue head, can
+ * also store its CPU number in that node. With that change, the queue head
+ * will have the CPU numbers of both its upstream and downstream neighbors.
+ *
+ * To make forward progress in lock acquisition and release, it is necessary
+ * that both the lock holder and the queue head virtual CPUs are present.
+ * The queue head can monitor the lock holder, but the lock holder can't
+ * monitor the queue head back. As a result, the next node is also brought
+ * into the picture to monitor the queue head. In the above diagram, all
+ * three virtual CPUs should be present, with the queue head and next node
+ * monitoring each other to make sure they are both present.
+ *
+ * Heartbeat counters are used to track if a neighbor is active. There are
+ * 3 different sets of heartbeat counter monitoring going on:
+ * 1) The queue head will wait until the number of loop iterations exceeds a
+ *    certain threshold (HEAD_SPIN_THRESHOLD). In that case, it will send
+ *    a kick-cpu signal to the lock holder if it has the CPU number available.
+ *    The kick-cpu signal will be sent only once, as the real lock holder
+ *    may not be the same as what the queue head thinks it is.
+ * 2) The queue head will periodically clear the active flag of the next node.
+ *    It will then check to see if the active flag remains cleared at the end
+ *    of the cycle. If it is, the next node CPU may be scheduled out, so it
+ *    sends a kick-cpu signal to make sure that the next node CPU remains active.
+ * 3) The next node CPU will monitor its own active flag to see if it gets
+ *    cleared periodically. If it does not, the queue head CPU may be scheduled
+ *    out. It will then send the kick-cpu signal to the queue head CPU.
+ */
+
+/*
+ * Loop thresholds
+ */
+#define	HEAD_SPIN_THRESHOLD	(1<<12)	/* Threshold to kick lock holder  */
+#define	CLEAR_ACTIVE_THRESHOLD	(1<<8)	/* Threshold for clearing active flag */
+#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)
+
+/*
+ * PV macros
+ */
+#define PV_SET_VAR(type, var, val)	type var = val
+#define PV_VAR(var)			var
+#define	PV_GET_NXTCPU(node)		(node)->pv.nxtcpu_p1
+
+/*
+ * Additional fields to be added to the qnode structure
+ *
+ * Try to cram the PV fields into 32 bits so that they won't increase the
+ * qnode size on x86-64.
+ */
+#if CONFIG_NR_CPUS >= (1 << 16)
+#define _cpuid_t	u32
+#else
+#define _cpuid_t	u16
+#endif
+
+struct pv_qvars {
+	u8	 active;	/* Set if CPU active		*/
+	u8	 prehead;	/* Set if next to queue head	*/
+	_cpuid_t nxtcpu_p1;	/* CPU number of next node + 1	*/
+};
+
+/**
+ * pv_init_vars - initialize fields in struct pv_qvars
+ * @pv: pointer to struct pv_qvars
+ */
+static __always_inline void pv_init_vars(struct pv_qvars *pv)
+{
+	pv->active    = false;
+	pv->prehead   = false;
+	pv->nxtcpu_p1 = 0;
+}
+
+/**
+ * head_spin_check - perform para-virtualization checks for queue head
+ * @count : loop count
+ * @qcode : queue code of the supposed lock holder
+ * @nxtcpu: CPU number of next node + 1
+ * @next  : pointer to the next node
+ * @offset: offset of the pv_qvars within the qnode
+ *
+ * 4 checks will be done:
+ * 1) See if it is time to kick the lock holder
+ * 2) Set the prehead flag of the next node
+ * 3) Clear the active flag of the next node periodically
+ * 4) If the active flag is not set after a while, assume the CPU of the
+ *    next-in-line node is offline and kick it back up again.
+ */
+static __always_inline void
+pv_head_spin_check(int *count, u32 qcode, int nxtcpu, void *next, int offset)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if ((++(*count) == HEAD_SPIN_THRESHOLD) && qcode) {
+		/*
+		 * Get the CPU number of the lock holder & kick it
+		 * The lock may have been stolen by another CPU
+		 * if PARAVIRT_UNFAIR_LOCKS is set, so the computed
+		 * CPU number may not be the actual lock holder.
+		 */
+		int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+		__queue_kick_cpu(cpu, PV_KICK_LOCK_HOLDER);
+	}
+	if (next) {
+		struct pv_qvars *pv = (struct pv_qvars *)
+				      ((char *)next + offset);
+
+		if (!pv->prehead)
+			pv->prehead = true;
+		if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
+			pv->active = false;
+		if (((*count & CLEAR_ACTIVE_MASK) == 0) &&
+			!pv->active && nxtcpu)
+			/*
+			 * The CPU of the next node doesn't seem to be
+			 * active, need to kick it to make sure that
+			 * it is ready to be transitioned to queue head.
+			 */
+			__queue_kick_cpu(nxtcpu - 1, PV_KICK_NEXT_NODE);
+	}
+}
+
+/**
+ * queue_spin_check - perform para-virtualization checks for queue member
+ * @pv   : pointer to struct pv_qvars
+ * @count: loop count
+ * @qcode: queue code of the previous node (queue head if pv->prehead set)
+ *
+ * Set the active flag if it is next to the queue head
+ */
+static __always_inline void
+pv_queue_spin_check(struct pv_qvars *pv, int *count, u32 qcode)
+{
+	if (!static_key_false(&paravirt_spinlocks_enabled))
+		return;
+	if (ACCESS_ONCE(pv->prehead)) {
+		if (pv->active == false) {
+			*count = 0;	/* Reset counter */
+			pv->active = true;
+		}
+		if ((++(*count) >= 4 * CLEAR_ACTIVE_THRESHOLD) && qcode) {
+			/*
+			 * The queue head hasn't cleared the active flag
+			 * for a long time. Need to kick it.
+			 */
+			int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
+			__queue_kick_cpu(cpu, PV_KICK_QUEUE_HEAD);
+			*count = 0;
+		}
+	}
+}
+
+/**
+ * pv_set_cpu - set CPU # in the given pv_qvars structure
+ * @pv : pointer to struct pv_qvars to be set
+ * @cpu: cpu number to be set
+ */
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)
+{
+	pv->nxtcpu_p1 = cpu + 1;
+}
+
+#endif /* _ASM_X86_PVQSPINLOCK_H */
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index 8c67cbe..30d76f5 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -11,9 +11,13 @@
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
+#ifdef CONFIG_QUEUE_SPINLOCK
+	.kick_cpu = paravirt_nop,
+#else
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
 	.unlock_kick = paravirt_nop,
 #endif
+#endif
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 22a63fa..f10446e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -58,6 +58,26 @@
  */
 
 /*
+ * Para-virtualized queue spinlock support
+ */
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#include <asm/pvqspinlock.h>
+#else
+
+#define PV_SET_VAR(type, var, val)
+#define PV_VAR(var)			0
+#define PV_GET_NXTCPU(node)		0
+
+struct pv_qvars {};
+static __always_inline void pv_init_vars(struct pv_qvars *pv)		{}
+static __always_inline void pv_head_spin_check(int *count, u32 qcode,
+				int nxtcpu, void *next, int offset)	{}
+static __always_inline void pv_queue_spin_check(struct pv_qvars *pv,
+				int *count, u32 qcode)			{}
+static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)	{}
+#endif
+
+/*
  * The 24-bit queue node code is divided into the following 2 fields:
  * Bits 0-1 : queue node index (4 nodes)
  * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
@@ -77,15 +97,13 @@
 
 /*
  * The queue node structure
- *
- * This structure is essentially the same as the mcs_spinlock structure
- * in mcs_spinlock.h file. This structure is retained for future extension
- * where new fields may be added.
  */
 struct qnode {
 	u32		 wait;		/* Waiting flag		*/
+	struct pv_qvars	 pv;		/* Para-virtualization  */
 	struct qnode	*next;		/* Next queue node addr */
 };
+#define PV_OFFSET	offsetof(struct qnode, pv)
 
 struct qnode_set {
 	struct qnode	nodes[MAX_QNODES];
@@ -441,6 +459,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	unsigned int cpu_nr, qn_idx;
 	struct qnode *node, *next;
 	u32 prev_qcode, my_qcode;
+	PV_SET_VAR(int, hcnt, 0);
 
 	/*
 	 * Try the quick spinning code path
@@ -468,6 +487,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 	 */
 	node->wait = true;
 	node->next = NULL;
+	pv_init_vars(&node->pv);
 
 	/*
 	 * The lock may be available at this point, try again if no task was
@@ -522,13 +542,22 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 		 * and set up the "next" fields of the that node.
 		 */
 		struct qnode *prev = xlate_qcode(prev_qcode);
+		PV_SET_VAR(int, qcnt, 0);
 
 		ACCESS_ONCE(prev->next) = node;
 		/*
+		 * Set current CPU number into the previous node
+		 */
+		pv_set_cpu(&prev->pv, cpu_nr);
+
+		/*
 		 * Wait until the waiting flag is off
 		 */
-		while (smp_load_acquire(&node->wait))
+		while (smp_load_acquire(&node->wait)) {
 			arch_mutex_cpu_relax();
+			pv_queue_spin_check(&node->pv, PV_VAR(&qcnt),
+					    prev_qcode);
+		}
 	}
 
 	/*
@@ -560,6 +589,8 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
 				goto release_node;
 		}
 		arch_mutex_cpu_relax();
+		pv_head_spin_check(PV_VAR(&hcnt), prev_qcode,
+				PV_GET_NXTCPU(node), node->next, PV_OFFSET);
 	}
 
 notify_next:
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfN-0005Gt-Pk; Thu, 27 Feb 2014 04:33:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfJ-0005Fw-Kb
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:45 +0000
Received: from [85.158.139.211:56590] by server-3.bemta-5.messagelabs.com id
	E7/44-13671-820CE035; Thu, 27 Feb 2014 04:33:44 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393475622!6538926!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29552 invoked from network); 27 Feb 2014 04:33:43 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:43 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id DF848261;
	Thu, 27 Feb 2014 04:33:41 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 661D54C;
	Thu, 27 Feb 2014 04:33:39 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:43 -0500
Message-Id: <1393475567-4453-5-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
	x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Locking is always an issue in a virtualized environment as the virtual
CPU that is waiting on a lock may get scheduled out and hence block
any progress in lock acquisition even when the lock has been freed.

One solution to this problem is to allow unfair locks in a
para-virtualized environment. In this case, a new lock acquirer can
come in and steal the lock if the next-in-line CPU to get the lock is
scheduled out. An unfair lock in a native environment is generally not
a good idea, as there is a possibility of lock starvation for a heavily
contended lock.

This patch adds a new configuration option for the x86
architecture to enable the use of unfair queue spinlock
(PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
(paravirt_unfairlocks_enabled) is used to switch between a fair and
an unfair version of the spinlock code. This jump label will only be
enabled in a real PV guest.

Enabling this configuration feature decreases the performance of an
uncontended lock-unlock operation by about 1-2%.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/Kconfig                     |   11 +++++
 arch/x86/include/asm/qspinlock.h     |   74 ++++++++++++++++++++++++++++++++++
 arch/x86/kernel/Makefile             |    1 +
 arch/x86/kernel/paravirt-spinlocks.c |    7 +++
 4 files changed, 93 insertions(+), 0 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 5bf70ab..8d7c941 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -645,6 +645,17 @@ config PARAVIRT_SPINLOCKS
 
 	  If you are unsure how to answer this question, answer Y.
 
+config PARAVIRT_UNFAIR_LOCKS
+	bool "Enable unfair locks in a para-virtualized guest"
+	depends on PARAVIRT && SMP && QUEUE_SPINLOCK
+	depends on !X86_OOSTORE && !X86_PPRO_FENCE
+	---help---
+	  This changes the kernel to use unfair locks in a real
+	  para-virtualized guest system. This will help performance
+	  in most cases. However, there is a possibility of lock
+	  starvation on a heavily contended lock especially in a
+	  large guest with many virtual CPUs.
+
 source "arch/x86/xen/Kconfig"
 
 config KVM_GUEST
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 98db42e..c278aed 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -56,4 +56,78 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
 
 #include <asm-generic/qspinlock.h>
 
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/**
+ * queue_spin_lock_unfair - acquire a queue spinlock unfairly
+ * @lock: Pointer to queue spinlock structure
+ */
+static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (likely(cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return;
+	/*
+	 * Since the lock is now unfair, there is no need to activate
+	 * the 2-task quick spinning code path.
+	 */
+	queue_spin_lock_slowpath(lock, -1);
+}
+
+/**
+ * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
+{
+	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
+
+	if (!qlock->lock &&
+	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
+		return 1;
+	return 0;
+}
+
+/*
+ * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
+ * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
+ * is true.
+ */
+#undef arch_spin_lock
+#undef arch_spin_trylock
+#undef arch_spin_lock_flags
+
+extern struct static_key paravirt_unfairlocks_enabled;
+
+/**
+ * arch_spin_lock - acquire a queue spinlock
+ * @lock: Pointer to queue spinlock structure
+ */
+static inline void arch_spin_lock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		queue_spin_lock_unfair(lock);
+		return;
+	}
+	queue_spin_lock(lock);
+}
+
+/**
+ * arch_spin_trylock - try to acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * Return: 1 if lock acquired, 0 if failed
+ */
+static inline int arch_spin_trylock(struct qspinlock *lock)
+{
+	if (static_key_false(&paravirt_unfairlocks_enabled)) {
+		return queue_spin_trylock_unfair(lock);
+	}
+	return queue_spin_trylock(lock);
+}
+
+#define arch_spin_lock_flags(l, f)	arch_spin_lock(l)
+
+#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
+
 #endif /* _ASM_X86_QSPINLOCK_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index cb648c8..1107a20 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
 obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
 obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
 obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
+obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
 obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
 
 obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index bbb6c73..a50032a 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -8,6 +8,7 @@
 
 #include <asm/paravirt.h>
 
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
 	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
@@ -18,3 +19,9 @@ EXPORT_SYMBOL(pv_lock_ops);
 
 struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
 EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+#endif
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
+#endif
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfV-0005Im-V7; Thu, 27 Feb 2014 04:33:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfU-0005IN-A2
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:56 +0000
Received: from [85.158.137.68:9131] by server-17.bemta-3.messagelabs.com id
	FD/7C-22569-330CE035; Thu, 27 Feb 2014 04:33:55 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393475632!1675456!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12202 invoked from network); 27 Feb 2014 04:33:54 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:54 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id E9CAE252;
	Thu, 27 Feb 2014 04:33:51 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 7C18A46;
	Thu, 27 Feb 2014 04:33:49 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:47 -0500
Message-Id: <1393475567-4453-9-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 8/8] pvqspinlock,
	x86: Enable KVM to use qspinlock's PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch enables KVM to use the queue spinlock's PV support code
when the PARAVIRT_SPINLOCKS kernel config option is set. However,
PV support for Xen is not ready yet, so the queue spinlock still
has to be disabled when the PARAVIRT_SPINLOCKS config option is
enabled together with Xen.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/Kconfig.locks  |    2 +-
 2 files changed, 55 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index f318e78..3ddc436 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -568,6 +568,7 @@ static void kvm_kick_cpu(int cpu)
 	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
 }
 
+#ifndef CONFIG_QUEUE_SPINLOCK
 enum kvm_contention_stat {
 	TAKEN_SLOW,
 	TAKEN_SLOW_PICKUP,
@@ -795,6 +796,55 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 		}
 	}
 }
+#else /* !CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_KVM_DEBUG_FS
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+static u32 lh_kick_stats;	/* Lock holder kick count */
+static u32 qh_kick_stats;	/* Queue head kick count  */
+static u32 nn_kick_stats;	/* Next node kick count   */
+
+static int __init kvm_spinlock_debugfs(void)
+{
+	d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
+	if (!d_kvm_debug) {
+		printk(KERN_WARNING
+		       "Could not create 'kvm' debugfs directory\n");
+		return -ENOMEM;
+	}
+	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
+
+	debugfs_create_u32("lh_kick_stats", 0644, d_spin_debug, &lh_kick_stats);
+	debugfs_create_u32("qh_kick_stats", 0644, d_spin_debug, &qh_kick_stats);
+	debugfs_create_u32("nn_kick_stats", 0644, d_spin_debug, &nn_kick_stats);
+
+	return 0;
+}
+
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+	if (type == PV_KICK_LOCK_HOLDER)
+		add_smp(&lh_kick_stats, 1);
+	else if (type == PV_KICK_QUEUE_HEAD)
+		add_smp(&qh_kick_stats, 1);
+	else
+		add_smp(&nn_kick_stats, 1);
+}
+fs_initcall(kvm_spinlock_debugfs);
+
+#else /* CONFIG_KVM_DEBUG_FS */
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+}
+#endif /* CONFIG_KVM_DEBUG_FS */
+
+static void kvm_kick_cpu_type(int cpu, enum pv_kick_type type)
+{
+	kvm_kick_cpu(cpu);
+	inc_kick_stats(type);
+}
+#endif /* !CONFIG_QUEUE_SPINLOCK */
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -807,8 +857,12 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }
 
 static __init int kvm_spinlock_init_jump(void)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index f185584..a70fdeb 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
 
 config QUEUE_SPINLOCK
 	def_bool y if ARCH_USE_QUEUE_SPINLOCK
-	depends on SMP && !PARAVIRT_SPINLOCKS
+	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
-- 
1.7.1



From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfR-0005Hc-KP; Thu, 27 Feb 2014 04:33:53 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfN-0005Gj-VK
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:50 +0000
Received: from [193.109.254.147:36727] by server-1.bemta-14.messagelabs.com id
	B2/6B-15438-D20CE035; Thu, 27 Feb 2014 04:33:49 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393475627!1832514!1
X-Originating-IP: [15.217.128.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30340 invoked from network); 27 Feb 2014 04:33:48 -0000
Received: from g2t2352.austin.hp.com (HELO g2t2352.austin.hp.com)
	(15.217.128.51)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:48 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2352.austin.hp.com (Postfix) with ESMTP id DC1E5284;
	Thu, 27 Feb 2014 04:33:46 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 7D26146;
	Thu, 27 Feb 2014 04:33:44 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:45 -0500
Message-Id: <1393475567-4453-7-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 6/8] pvqspinlock,
	x86: Rename paravirt_ticketlocks_enabled
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/include/asm/spinlock.h      |    4 ++--
 arch/x86/kernel/kvm.c                |    2 +-
 arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
 arch/x86/xen/spinlock.c              |    2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6e6de1f..283f2cf 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,7 +40,7 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD	(1 << 15)
 
-extern struct static_key paravirt_ticketlocks_enabled;
+extern struct static_key paravirt_spinlocks_enabled;
 static __always_inline bool static_key_false(struct static_key *key);
 
 #ifdef CONFIG_QUEUE_SPINLOCK
@@ -151,7 +151,7 @@ static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
-	    static_key_false(&paravirt_ticketlocks_enabled)) {
+	    static_key_false(&paravirt_spinlocks_enabled)) {
 		arch_spinlock_t prev;
 
 		prev = *lock;
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a489140..f318e78 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -818,7 +818,7 @@ static __init int kvm_spinlock_init_jump(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
 
 	return 0;
diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
index a50032a..8c67cbe 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -17,8 +17,8 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
-struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
-EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
+struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
+EXPORT_SYMBOL(paravirt_spinlocks_enabled);
 #endif
 
 #ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 581521c..06f4a64 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -290,7 +290,7 @@ static __init int xen_init_spinlocks_jump(void)
 	if (!xen_pvspin)
 		return 0;
 
-	static_key_slow_inc(&paravirt_ticketlocks_enabled);
+	static_key_slow_inc(&paravirt_spinlocks_enabled);
 	return 0;
 }
 early_initcall(xen_init_spinlocks_jump);
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfP-0005HI-Na; Thu, 27 Feb 2014 04:33:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfL-0005GI-Jf
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:47 +0000
Received: from [193.109.254.147:55779] by server-15.bemta-14.messagelabs.com
	id FD/D1-10839-B20CE035; Thu, 27 Feb 2014 04:33:47 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393475624!7132842!1
X-Originating-IP: [15.217.128.51]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18285 invoked from network); 27 Feb 2014 04:33:45 -0000
Received: from g2t2352.austin.hp.com (HELO g2t2352.austin.hp.com)
	(15.217.128.51)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:45 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2352.austin.hp.com (Postfix) with ESMTP id 6D60E158;
	Thu, 27 Feb 2014 04:33:44 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id EEDDF49;
	Thu, 27 Feb 2014 04:33:41 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:44 -0500
Message-Id: <1393475567-4453-6-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
	x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch adds a KVM init function to activate the unfair queue
spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
option is selected.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   17 +++++++++++++++++
 1 files changed, 17 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 713f1b3..a489140 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
 early_initcall(kvm_spinlock_init_jump);
 
 #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
+
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+/*
+ * Enable unfair lock if running in a real para-virtualized environment
+ */
+static __init int kvm_unfair_locks_init_jump(void)
+{
+	if (!kvm_para_available())
+		return 0;
+
+	static_key_slow_inc(&paravirt_unfairlocks_enabled);
+	printk(KERN_INFO "KVM setup unfair spinlock\n");
+
+	return 0;
+}
+early_initcall(kvm_unfair_locks_init_jump);
+#endif
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfG-0005FZ-JA; Thu, 27 Feb 2014 04:33:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfD-0005FK-On
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:40 +0000
Received: from [85.158.137.68:8638] by server-12.bemta-3.messagelabs.com id
	03/68-01674-220CE035; Thu, 27 Feb 2014 04:33:38 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393475616!3171741!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30016 invoked from network); 27 Feb 2014 04:33:37 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:37 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id E8174EE;
	Thu, 27 Feb 2014 04:33:31 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id EE29552;
	Thu, 27 Feb 2014 04:33:24 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:39 -0500
Message-Id: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with
	PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue head stay alive as much as possible.

v3->v4:
 - Remove debugging code and fix a configuration error
 - Simplify the qspinlock structure and streamline the code to make it
   perform a bit better
 - Add an x86 version of asm/qspinlock.h for holding x86-specific
   optimizations.
 - Add an optimized x86 code path for 2 contending tasks to improve
   low contention performance.

v2->v3:
 - Simplify the code by using the numerous-CPU mode only, without an
   unfair option.
 - Use the latest smp_load_acquire()/smp_store_release() barriers.
 - Move the queue spinlock code to kernel/locking.
 - Make the use of queue spinlock the default for x86-64 without user
   configuration.
 - Additional performance tuning.

v1->v2:
 - Add some more comments to document what the code does.
 - Add a numerous CPU mode to support >= 16K CPUs.
 - Add a configuration option to allow lock stealing which can further
   improve performance in many cases.
 - Enable wakeup of queue head CPU at unlock time for non-numerous
   CPU mode.

This patch set has 3 different sections:
 1) Patches 1-3: Introduce a queue-based spinlock implementation that
    can replace the default ticket spinlock without increasing the
    size of the spinlock data structure. As a result, critical kernel
    data structures that embed a spinlock won't grow in size or have
    their data alignment broken.
 2) Patches 4 and 5: Enable the use of the unfair queue spinlock in a
    real para-virtualized execution environment. This can resolve
    some locking-related performance issues caused by the next CPU
    in line for the lock having been scheduled out for a period
    of time.
 3) Patches 6-8: Enable qspinlock para-virtualization support by making
    sure that the lock holder and the queue head stay alive as long as
    possible.
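
To make the queueing idea in patches 1-3 concrete for readers who have
not seen the code: the design descends from the classic MCS lock, in
which each waiter spins on its own cache line instead of on the shared
lock word. The sketch below is a textbook userspace MCS lock in C11,
not the actual qspinlock implementation; qspinlock's contribution is
encoding the queue tail into a single 4-byte word so the lock itself
stays the size of a ticket lock.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Textbook MCS queue lock: waiters form a linked list and each one
 * spins locally on its own node, so an unlock touches only the
 * successor's cache line.  Unlike qspinlock, this sketch keeps a
 * full pointer-sized tail for clarity. */
struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;	/* true while this waiter must spin */
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *me)
{
	struct mcs_node *prev;

	atomic_store(&me->next, NULL);
	atomic_store(&me->locked, false);
	/* Enqueue ourselves at the tail of the waiter list. */
	prev = atomic_exchange(&lock->tail, me);
	if (prev) {
		/* Someone is ahead of us: link in, spin on our flag. */
		atomic_store(&me->locked, true);
		atomic_store(&prev->next, me);
		while (atomic_load(&me->locked))
			;	/* local spinning, no line bouncing */
	}
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *me)
{
	struct mcs_node *next = atomic_load(&me->next);

	if (!next) {
		/* No visible successor: try to empty the queue. */
		struct mcs_node *expected = me;
		if (atomic_compare_exchange_strong(&lock->tail,
						   &expected, NULL))
			return;
		/* A successor is mid-enqueue; wait for the link. */
		while (!(next = atomic_load(&me->next)))
			;
	}
	atomic_store(&next->locked, false);	/* hand the lock over */
}
```

In the kernel the per-waiter node lives in per-CPU data rather than on
an arbitrary caller's stack, and the pointer-sized tail shown here is
what qspinlock compresses into the 4-byte lock word.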

Patches 1-3 are fully tested and ready for production. Patches 4-8,
on the other hand, are not fully tested. They have undergone
compilation tests with various combinations of kernel config settings,
boot-up tests on bare metal, and a simple performance test on a KVM
guest. Further testing and performance characterization still need to
be done, so comments on them are welcome. Suggestions or
recommendations on how to add PV support in the Xen environment are
also welcome.

The queue spinlock performs slightly better than the ticket spinlock
in the uncontended case, and can perform much better under moderate
to heavy contention. This patch set therefore has the potential to
improve the performance of any workload with moderate to heavy
spinlock contention.

The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably
won't show up on machines with fewer than 4 sockets.

The purpose of this patch set is not to solve any particular spinlock
contention problem. Those need to be solved by refactoring the code
to make more efficient use of the lock, or by switching to
finer-grained locks. The main purpose is to make lock contention
problems more tolerable until someone can spend the time and effort
to fix them.

The bare-metal performance data were discussed in the individual
patch descriptions. For PV support, a simple performance test was
performed on a 2-node, 20-CPU KVM guest running a 3.14-rc4 kernel
inside a larger 8-node machine. The disk workload of the AIM7
benchmark was run on both ext4 and xfs RAM disks at 2000 users. The
JPM (jobs/minute) figures for the test runs were:

  kernel			XFS FS	%change	ext4 FS %change
  ------			------	-------	------- -------
  PV ticketlock (baseline) 	2390438	   -	1366743    -
  qspinlock			1775148	  -26%	1336303  -2.2%
  PV qspinlock			2264151	 -5.3%	1351351  -1.1%
  unfair qspinlock		2404810	 +0.6%	1612903	  +18%
  unfair + PV qspinlock		2419355	 +1.2%	1612903   +18%

The XFS test had moderate spinlock contention of 1.6% whereas the
ext4 test had heavy spinlock contention of 15.4%, as reported by
perf. It seems that the PV qspinlock support still has room for
improvement compared with the current PV ticketlock implementation.

Waiman Long (8):
  qspinlock: Introducing a 4-byte queue spinlock implementation
  qspinlock, x86: Enable x86-64 to use queue spinlock
  qspinlock, x86: Add x86 specific optimization for 2 contending tasks
  pvqspinlock, x86: Allow unfair spinlock in a real PV environment
  pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
  pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
  pvqspinlock, x86: Add qspinlock para-virtualization support
  pvqspinlock, x86: Enable KVM to use qspinlock's PV support

 arch/x86/Kconfig                      |   12 +
 arch/x86/include/asm/paravirt.h       |    9 +-
 arch/x86/include/asm/paravirt_types.h |   12 +
 arch/x86/include/asm/pvqspinlock.h    |  176 ++++++++++
 arch/x86/include/asm/qspinlock.h      |  133 +++++++
 arch/x86/include/asm/spinlock.h       |    9 +-
 arch/x86/include/asm/spinlock_types.h |    4 +
 arch/x86/kernel/Makefile              |    1 +
 arch/x86/kernel/kvm.c                 |   73 ++++-
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +-
 arch/x86/xen/spinlock.c               |    2 +-
 include/asm-generic/qspinlock.h       |  122 +++++++
 include/asm-generic/qspinlock_types.h |   61 ++++
 kernel/Kconfig.locks                  |    7 +
 kernel/locking/Makefile               |    1 +
 kernel/locking/qspinlock.c            |  610 +++++++++++++++++++++++++++++++++
 16 files changed, 1239 insertions(+), 8 deletions(-)
 create mode 100644 arch/x86/include/asm/pvqspinlock.h
 create mode 100644 arch/x86/include/asm/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock.h
 create mode 100644 include/asm-generic/qspinlock_types.h
 create mode 100644 kernel/locking/qspinlock.c


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:34:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsfV-0005Im-V7; Thu, 27 Feb 2014 04:33:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Waiman.Long@hp.com>) id 1WIsfU-0005IN-A2
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 04:33:56 +0000
Received: from [85.158.137.68:9131] by server-17.bemta-3.messagelabs.com id
	FD/7C-22569-330CE035; Thu, 27 Feb 2014 04:33:55 +0000
X-Env-Sender: Waiman.Long@hp.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393475632!1675456!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12202 invoked from network); 27 Feb 2014 04:33:54 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 04:33:54 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id E9CAE252;
	Thu, 27 Feb 2014 04:33:51 +0000 (UTC)
Received: from RHEL65.localdomain (unknown [16.213.96.232])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 7C18A46;
	Thu, 27 Feb 2014 04:33:49 +0000 (UTC)
From: Waiman Long <Waiman.Long@hp.com>
To: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	"H. Peter Anvin" <hpa@zytor.com>, Arnd Bergmann <arnd@arndb.de>,
	Peter Zijlstra <peterz@infradead.org>
Date: Wed, 26 Feb 2014 23:32:47 -0500
Message-Id: <1393475567-4453-9-git-send-email-Waiman.Long@hp.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	kvm@vger.kernel.org, virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: [Xen-devel] [PATCH RFC v5 8/8] pvqspinlock,
	x86: Enable KVM to use qspinlock's PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This patch enables KVM to use the queue spinlock's PV support code
when the PARAVIRT_SPINLOCKS kernel config option is set. However,
PV support for Xen is not ready yet, so the queue spinlock still has
to be disabled when the PARAVIRT_SPINLOCKS config option is enabled
together with Xen.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
 arch/x86/kernel/kvm.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++
 kernel/Kconfig.locks  |    2 +-
 2 files changed, 55 insertions(+), 1 deletions(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index f318e78..3ddc436 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -568,6 +568,7 @@ static void kvm_kick_cpu(int cpu)
 	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
 }
 
+#ifndef CONFIG_QUEUE_SPINLOCK
 enum kvm_contention_stat {
 	TAKEN_SLOW,
 	TAKEN_SLOW_PICKUP,
@@ -795,6 +796,55 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
 		}
 	}
 }
+#else /* !CONFIG_QUEUE_SPINLOCK */
+
+#ifdef CONFIG_KVM_DEBUG_FS
+static struct dentry *d_spin_debug;
+static struct dentry *d_kvm_debug;
+static u32 lh_kick_stats;	/* Lock holder kick count */
+static u32 qh_kick_stats;	/* Queue head kick count  */
+static u32 nn_kick_stats;	/* Next node kick count   */
+
+static int __init kvm_spinlock_debugfs(void)
+{
+	d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
+	if (!d_kvm_debug) {
+		printk(KERN_WARNING
+		       "Could not create 'kvm' debugfs directory\n");
+		return -ENOMEM;
+	}
+	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
+
+	debugfs_create_u32("lh_kick_stats", 0644, d_spin_debug, &lh_kick_stats);
+	debugfs_create_u32("qh_kick_stats", 0644, d_spin_debug, &qh_kick_stats);
+	debugfs_create_u32("nn_kick_stats", 0644, d_spin_debug, &nn_kick_stats);
+
+	return 0;
+}
+
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+	if (type == PV_KICK_LOCK_HOLDER)
+		add_smp(&lh_kick_stats, 1);
+	else if (type == PV_KICK_QUEUE_HEAD)
+		add_smp(&qh_kick_stats, 1);
+	else
+		add_smp(&nn_kick_stats, 1);
+}
+fs_initcall(kvm_spinlock_debugfs);
+
+#else /* CONFIG_KVM_DEBUG_FS */
+static inline void inc_kick_stats(enum pv_kick_type type)
+{
+}
+#endif /* CONFIG_KVM_DEBUG_FS */
+
+static void kvm_kick_cpu_type(int cpu, enum pv_kick_type type)
+{
+	kvm_kick_cpu(cpu);
+	inc_kick_stats(type);
+}
+#endif /* !CONFIG_QUEUE_SPINLOCK */
 
 /*
  * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
@@ -807,8 +857,12 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
+#ifdef CONFIG_QUEUE_SPINLOCK
+	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
+#else
 	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
 	pv_lock_ops.unlock_kick = kvm_unlock_kick;
+#endif
 }
 
 static __init int kvm_spinlock_init_jump(void)
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index f185584..a70fdeb 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
 
 config QUEUE_SPINLOCK
 	def_bool y if ARCH_USE_QUEUE_SPINLOCK
-	depends on SMP && !PARAVIRT_SPINLOCKS
+	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
-- 
1.7.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 04:48:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsta-0006bf-8S; Thu, 27 Feb 2014 04:48:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gerard.spivey@gmail.com>) id 1WIstY-0006Ze-2b
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 04:48:28 +0000
Received: from [193.109.254.147:33889] by server-6.bemta-14.messagelabs.com id
	0D/73-03396-B93CE035; Thu, 27 Feb 2014 04:48:27 +0000
X-Env-Sender: gerard.spivey@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393476505!3132158!1
X-Originating-IP: [209.85.216.176]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 700 invoked from network); 27 Feb 2014 04:48:26 -0000
Received: from mail-qc0-f176.google.com (HELO mail-qc0-f176.google.com)
	(209.85.216.176)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 04:48:26 -0000
Received: by mail-qc0-f176.google.com with SMTP id r5so2686211qcx.7
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 20:48:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:date:message-id:cc:to:mime-version;
	bh=qHalxca3e6tkDayl5jWkOtTZGyKIpOIKCMo3p9yXrdM=;
	b=HSEa+AbApeipQkzW8mRU7Mq9yy9QnQVWIctjjUie8395YhjNUg9SCMDm1EtePqbCRy
	Et8LxNJImG+++VQ7ctLVT2PNL86Indvrt8oB7R15RkzR6P6CQV7DqaLlzlaiHne7BKOC
	pNEqyfQrnW1qFuaUaF6iomRBm4yzvgiL61zurW5Udv6Hj9qpM3CF5ph29OxNviEImT9i
	w8N7UZ5S4xS706NCOl8ObKrHbOohO6/o8f+2YYsjhvkuM1bgjgVOk83EgSNFE+XWwXjC
	ggTOIkCpdSpAUd4nZZ4o4c9IFodPIEn2hmaE5jTyWKeK8RC4HWOoQ/LrQ/jqQQIX+eKk
	v+GQ==
X-Received: by 10.140.21.201 with SMTP id 67mr4380578qgl.78.1393476504854;
	Wed, 26 Feb 2014 20:48:24 -0800 (PST)
Received: from [192.168.0.104] (c-76-21-163-195.hsd1.md.comcast.net.
	[76.21.163.195])
	by mx.google.com with ESMTPSA id p67sm3138878qgd.8.2014.02.26.20.48.24
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 20:48:24 -0800 (PST)
From: Gerard Spivey <gerard.spivey@gmail.com>
Date: Wed, 26 Feb 2014 23:48:23 -0500
Message-Id: <9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Cc: andrew.cooper3@citrix.com
Subject: [Xen-devel] CPU/RAM/PCI diagram tool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6131681968797116773=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6131681968797116773==
Content-Type: multipart/alternative; boundary="Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D"


--Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=windows-1252

Hello All,

My name is Gerard.
I am new to the Xen development community and would like to get started.
I saw the Xen development projects list and will start on the
CPU/RAM/PCI diagram tool as my first project.

During my day job I'm a software developer writing networking
applications for a variety of computing architectures (x86, Tilera,
and more), using technologies similar to and including PF_RING and
DPDK. Due to the work I've been doing the past few years, I've taken
an interest in the Xen project.

I'm looking forward to working on the project and engaging with the
Xen developer community.
I'm on the ##xen channel as SpiffySpivey.

Thanks!


Gerard

--Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D--


--===============6131681968797116773==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6131681968797116773==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 04:48:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 04:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIsta-0006bf-8S; Thu, 27 Feb 2014 04:48:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gerard.spivey@gmail.com>) id 1WIstY-0006Ze-2b
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 04:48:28 +0000
Received: from [193.109.254.147:33889] by server-6.bemta-14.messagelabs.com id
	0D/73-03396-B93CE035; Thu, 27 Feb 2014 04:48:27 +0000
X-Env-Sender: gerard.spivey@gmail.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393476505!3132158!1
X-Originating-IP: [209.85.216.176]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 700 invoked from network); 27 Feb 2014 04:48:26 -0000
Received: from mail-qc0-f176.google.com (HELO mail-qc0-f176.google.com)
	(209.85.216.176)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 04:48:26 -0000
Received: by mail-qc0-f176.google.com with SMTP id r5so2686211qcx.7
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 20:48:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=from:content-type:subject:date:message-id:cc:to:mime-version;
	bh=qHalxca3e6tkDayl5jWkOtTZGyKIpOIKCMo3p9yXrdM=;
	b=HSEa+AbApeipQkzW8mRU7Mq9yy9QnQVWIctjjUie8395YhjNUg9SCMDm1EtePqbCRy
	Et8LxNJImG+++VQ7ctLVT2PNL86Indvrt8oB7R15RkzR6P6CQV7DqaLlzlaiHne7BKOC
	pNEqyfQrnW1qFuaUaF6iomRBm4yzvgiL61zurW5Udv6Hj9qpM3CF5ph29OxNviEImT9i
	w8N7UZ5S4xS706NCOl8ObKrHbOohO6/o8f+2YYsjhvkuM1bgjgVOk83EgSNFE+XWwXjC
	ggTOIkCpdSpAUd4nZZ4o4c9IFodPIEn2hmaE5jTyWKeK8RC4HWOoQ/LrQ/jqQQIX+eKk
	v+GQ==
X-Received: by 10.140.21.201 with SMTP id 67mr4380578qgl.78.1393476504854;
	Wed, 26 Feb 2014 20:48:24 -0800 (PST)
Received: from [192.168.0.104] (c-76-21-163-195.hsd1.md.comcast.net.
	[76.21.163.195])
	by mx.google.com with ESMTPSA id p67sm3138878qgd.8.2014.02.26.20.48.24
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 20:48:24 -0800 (PST)
From: Gerard Spivey <gerard.spivey@gmail.com>
Date: Wed, 26 Feb 2014 23:48:23 -0500
Message-Id: <9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com>
To: xen-devel@lists.xen.org
Mime-Version: 1.0 (Mac OS X Mail 7.1 \(1827\))
X-Mailer: Apple Mail (2.1827)
Cc: andrew.cooper3@citrix.com
Subject: [Xen-devel] CPU/RAM/PCI diagram tool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6131681968797116773=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============6131681968797116773==
Content-Type: multipart/alternative; boundary="Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D"


--Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=windows-1252

Hello All,

My name is Gerard.
I am new to the Xen development community and would like to get started.
I saw the Xen development projects list and will start on the =
CPU/RAM/PCI diagram tool as my first project.

During my day job I'm a software developer writing networking =
applications for a variety of computing architectures (x86, Tilera, and =
more),
using technologies similar to and including PF_RING and DPDK. Due to =
the work I've been doing the past few years I've taken an interest in =
the Xen project.

I'm looking forward to working on the project and engaging with the =
Xen developer community.
I'm on the ##xen channel as SpiffySpivey.

Thanks!


Gerard=

--Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html;
	charset=windows-1252

<html><head><meta http-equiv=3D"Content-Type" content=3D"text/html =
charset=3Dwindows-1252"></head><body style=3D"word-wrap: break-word; =
-webkit-nbsp-mode: space; -webkit-line-break: after-white-space;"><div =
style=3D"font-size: 13px;"><font face=3D"Arial, sans-serif"><span =
style=3D"font-size: 15px; line-height: 22px;">Hello =
All,</span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;"><br></span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;">My name is Gerard.</span></font></div><div style=3D"font-size: =
13px;"><font face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; =
line-height: 22px;">I am new to the Xen development community and would =
like to get started.</span></font></div><div style=3D"font-size: =
13px;"><font face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; =
line-height: 22px;">I saw the Xen development projects list and will =
start on the CPU/RAM/PCI diagram tool as my first =
project.</span></font></div><div style=3D"font-size: 13px;"><span =
style=3D"font-size: 15px; line-height: 22px; font-family: Arial, =
sans-serif;"><br></span></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;">During my day job&nbsp;I=92m a software developer writing =
networking applications for a variety of computing architectures (x86, =
Tilera, and more),</span></font></div><div style=3D"font-size: =
13px;"><font face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; =
line-height: 22px;">using technologies&nbsp;similar to and including =
PF_RING and DPDK. Due to the work&nbsp;I=92ve been doing the past few =
years I=92ve taken an&nbsp;interest in the Xen =
project.</span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;"><br></span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;">I=92m looking forward to working on the project =
and&nbsp;engaging with the Xen developer =
community.</span></font></div><div><font face=3D"Arial, =
sans-serif"><span style=3D"font-size: 15px; line-height: 22px;">I=92m on =
the ##xen channel as SpiffySpivey.</span></font></div><div =
style=3D"font-size: 13px;"><font face=3D"Arial, sans-serif"><span =
style=3D"font-size: 15px; line-height: =
22px;"><br></span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;">Thanks!</span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;"><br></span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;"><br></span></font></div><div style=3D"font-size: 13px;"><font =
face=3D"Arial, sans-serif"><span style=3D"font-size: 15px; line-height: =
22px;">Gerard</span></font></div></body></html>=

--Apple-Mail=_BA737449-5175-498F-9C35-73C07A6FEA3D--


--===============6131681968797116773==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6131681968797116773==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 05:38:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 05:38:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WItft-00075x-AJ; Thu, 27 Feb 2014 05:38:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <yang.z.zhang@intel.com>) id 1WItfr-00075s-AI
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 05:38:23 +0000
Received: from [85.158.143.35:55086] by server-3.bemta-4.messagelabs.com id
	BE/67-11539-E4FCE035; Thu, 27 Feb 2014 05:38:22 +0000
X-Env-Sender: yang.z.zhang@intel.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393479501!8620844!1
X-Originating-IP: [134.134.136.20]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTM0LjEzNC4xMzYuMjAgPT4gMzU1MzU4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21483 invoked from network); 27 Feb 2014 05:38:21 -0000
Received: from mga02.intel.com (HELO mga02.intel.com) (134.134.136.20)
	by server-11.tower-21.messagelabs.com with SMTP;
	27 Feb 2014 05:38:21 -0000
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
	by orsmga101.jf.intel.com with ESMTP; 26 Feb 2014 21:38:20 -0800
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="4.97,552,1389772800"; d="scan'208";a="482681100"
Received: from fmsmsx105.amr.corp.intel.com ([10.19.9.36])
	by fmsmga001.fm.intel.com with ESMTP; 26 Feb 2014 21:38:19 -0800
Received: from shsmsx101.ccr.corp.intel.com (10.239.4.153) by
	FMSMSX105.amr.corp.intel.com (10.19.9.36) with Microsoft SMTP Server
	(TLS) id 14.3.123.3; Wed, 26 Feb 2014 21:38:19 -0800
Received: from shsmsx104.ccr.corp.intel.com ([169.254.5.227]) by
	SHSMSX101.ccr.corp.intel.com ([169.254.1.26]) with mapi id
	14.03.0123.003; Thu, 27 Feb 2014 13:38:17 +0800
From: "Zhang, Yang Z" <yang.z.zhang@intel.com>
To: "qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Thread-Topic: [PATCH 0/5] xen: add Intel IGD passthrough support
Thread-Index: AQHPLtEIUw9opk8tEkuLznOoJ6J1gZrInhjQ
Date: Thu, 27 Feb 2014 05:38:17 +0000
Message-ID: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F8289@SHSMSX104.ccr.corp.intel.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
In-Reply-To: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.239.127.40]
MIME-Version: 1.0
Cc: "peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"Kay, Allen M" <allen.m.kay@intel.com>,
	"anthony@codemonkey.ws" <anthony@codemonkey.ws>,
	"anthony.perard@citrix.com" <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] xen: add Intel IGD passthrough support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Zhang, Yang Z wrote on 2014-02-21:
> From: Yang Zhang <yang.z.zhang@Intel.com>
> 
> The following patches are ported from the Xen qemu-traditional branch
> and add Intel IGD passthrough support to upstream QEMU.
> 
> To pass IGD through to a guest, the user needs to add the following
> lines to the Xen config file: gfx_passthru=1 pci=['00:02.0@2']
> 
> Besides, since Xen + upstream QEMU requires SeaBIOS, the user also
> needs to recompile SeaBIOS with CONFIG_OPTIONROMS_DEPLOYED=y for IGD
> passthrough to work:
> 1. set CONFIG_OPTIONROMS_DEPLOYED=y in the file
>    xen/tools/firmware/seabios-config
> 2. recompile the tools
> 
> I have successfully booted Win 7 and RHEL6u4 guests with IGD assigned
> on a Haswell desktop with the latest Xen + upstream QEMU.
> 
> Yang Zhang (5):
>   xen, gfx passthrough: basic graphics passthrough support
>   xen, gfx passthrough: reserve 00:02.0 for INTEL IGD
>   xen, gfx passthrough: create intel isa bridge
>   xen, gfx passthrough: support Intel IGD passthrough with VT-D
>   xen, gfx passthrough: add opregion mapping
>  configure                    |    2 +-
>  hw/pci-host/piix.c           |   15 ++
>  hw/pci/pci.c                 |   19 ++
>  hw/xen/Makefile.objs         |    2 +-
>  hw/xen/xen-host-pci-device.c |    5 +
>  hw/xen/xen-host-pci-device.h |    1 +
>  hw/xen/xen_pt.c              |   10 +
>  hw/xen/xen_pt.h              |   13 ++-
>  hw/xen/xen_pt_config_init.c  |   45 +++++-
>  hw/xen/xen_pt_graphics.c     |  407 ++++++++++++++++++++++++++++++++++++++++++
>  qemu-options.hx              |    9 +
>  vl.c                         |    8 +
>  12 files changed, 532 insertions(+), 4 deletions(-)
>  create mode 100644 hw/xen/xen_pt_graphics.c
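
For context, the two settings quoted above would sit in an ordinary HVM
guest configuration file; the sketch below is illustrative only, with the
guest name, memory size, and disk path being placeholders -- only the
gfx_passthru and pci lines come from the patch description:

```
# Illustrative HVM guest config; name, memory, vcpus, and disk are
# placeholder values, not part of the patch description.
builder      = "hvm"
name         = "hvm-igd-guest"
memory       = 4096
vcpus        = 2
disk         = [ "phy:/dev/vg0/guest,hda,w" ]

# IGD passthrough settings from the patch description:
# pass host device 00:02.0 through, appearing at guest slot 2.
gfx_passthru = 1
pci          = [ "00:02.0@2" ]
```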

Ping.

Best regards,
Yang



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 07:31:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 07:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIvQy-0007kF-Kb; Thu, 27 Feb 2014 07:31:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WIvQx-0007k7-Lc
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 07:31:07 +0000
Received: from [85.158.143.35:13408] by server-2.bemta-4.messagelabs.com id
	28/17-04779-AB9EE035; Thu, 27 Feb 2014 07:31:06 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393486266!8657942!1
X-Originating-IP: [212.227.17.13]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9580 invoked from network); 27 Feb 2014 07:31:06 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.17.13)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 07:31:06 -0000
Received: from klappe2.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue102) with ESMTP (Nemesis)
	id 0Lx6wD-1XKlMK1c9x-016c81; Thu, 27 Feb 2014 08:30:21 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Christoffer Dall <christoffer.dall@linaro.org>
Date: Thu, 27 Feb 2014 08:30:16 +0100
User-Agent: KMail/1.12.2 (Linux/3.8.0-22-generic; KDE/4.3.2; x86_64; ; )
References: <20140226183454.GA14639@cbox>
	<CABGGisxHOVqLcG7hVAuAzdeic41KWSLLBSjQLSJQcjTXLhNCow@mail.gmail.com>
	<20140226222147.GE16149@cbox>
In-Reply-To: <20140226222147.GE16149@cbox>
MIME-Version: 1.0
Message-Id: <201402270830.16903.arnd@arndb.de>
X-Provags-ID: V02:K0:47EZe2xFs+ZtkE5qDIynC24+OoffIt+z65tlMZl/Rml
	x2e6I+pq1zXmBQhAIaDNjv54S+JU1D66TuS8EVd/R/0+u15GmX
	irxay0tOaHQ8wh1B7iIfTDzkW2hyzOcTMXfXtL64f1zjQbGYml
	bJLQwdGC2ciwWP4sJmA3ZdWoRInGODKxnaMS+gtSrsEgv4X6C9
	ZXRqGXznNSf3SOy3Fa7Cwpv4tsMdrNpqmrT7BwaivW+blsP8RD
	/zF+XvJ3vbxi6TZAlP98NjFTpfNLZ4kgFeHUBgEQUdqP0KuI+4
	9S4Uj4lkwjunn9OZlTgh640md+WJ9hJMWp4ity/BGd53K4hbZd
	qBZAdr0t7wKtEUVIwy5Y=
Cc: Rob Herring <rob.herring@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	Peter Maydell <peter.maydell@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>, xen-devel@lists.xen.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Robie Basak <robie.basak@canonical.com>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org,
	Grant Likely <grant.likely@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wednesday 26 February 2014, Christoffer Dall wrote:
> Personally I'm all for simplicity so I don't want to push any agenda for
> ACPI in VMs.
> 
> Note that the spec does not mandate the use of ACPI, it just tells you
> how to do it if you wish to.
> 
> But, we can change the spec to require full FDT description of the
> system, unless of course some of the ACPI-in-VM supporters manage to
> convince the rest.

I guess the real question is whether we are interested in running Windows RT
in VM guests. I don't personally expect MS to come out with a port for
this spec, no matter what we do, but some of you may have information I don't.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 07:32:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 07:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIvSi-0007oN-4W; Thu, 27 Feb 2014 07:32:56 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIvSg-0007oF-VQ
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 07:32:55 +0000
Received: from [193.109.254.147:27701] by server-14.bemta-14.messagelabs.com
	id 38/29-29228-62AEE035; Thu, 27 Feb 2014 07:32:54 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393486371!7108704!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22577 invoked from network); 27 Feb 2014 07:32:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 07:32:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,552,1389744000"; d="scan'208";a="104562583"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 07:32:22 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 02:32:21 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIvS9-0000P1-36;
	Thu, 27 Feb 2014 07:32:21 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIvS8-00031G-RN;
	Thu, 27 Feb 2014 07:32:20 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25314-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 07:32:20 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25314: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25314 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25314/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-xend               4 xen-build        fail in 25311 REGR. vs. 25266
 build-i386-oldkern            4 xen-build        fail in 25303 REGR. vs. 25266

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64 13 guest-localmigrate/x10    fail pass in 25311
 test-amd64-i386-xend-qemut-winxpsp3 14 guest-stop           fail pass in 25303
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 25311
 test-amd64-amd64-xl-sedf-pin  9 guest-start        fail in 25311 pass in 25314
 test-amd64-amd64-xl-sedf-pin  7 debian-install     fail in 25303 pass in 25314
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 9 guest-saverestore fail in 25303 pass in 25314

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-pv            1 xen-build-check(1)        blocked in 25311 n/a
 test-amd64-amd64-xl-win7-amd64 14 guest-stop          fail in 25311 never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1) blocked in 25311 n/a
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)       blocked in 25311 n/a
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25311 never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check fail in 25303 never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 08:01:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:01:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIvtg-0008Ox-RM; Thu, 27 Feb 2014 08:00:48 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIvte-0008Os-Ix
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 08:00:46 +0000
Received: from [85.158.139.211:22528] by server-10.bemta-5.messagelabs.com id
	FF/C4-08578-DA0FE035; Thu, 27 Feb 2014 08:00:45 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393488044!6169310!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16508 invoked from network); 27 Feb 2014 08:00:45 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 08:00:45 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 08:00:43 +0000
Message-Id: <530EFEC8020000780011FB81@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 08:00:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Matt Wilson" <msw@linux.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
	<527A113C02000078000FFF99@nat28.tlf.novell.com>
	<20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Charles Wang <muming.wq@taobao.com>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Zhu Yanhai <gaoyang.zyh@taobao.com>, Shen Yiben <zituan@taobao.com>,
	David Vrabel <david.vrabel@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Zhu Yanhai <zhu.yanhai@gmail.com>, Wan Jia <jia.wanj@alibaba-inc.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 01:04, Matt Wilson <msw@linux.com> wrote:
> On Wed, Nov 06, 2013 at 08:51:56AM +0000, Jan Beulich wrote:
>> Nevertheless I agree that there is an issue, but this needs to be
>> fixed on the Linux side (hence adding the Linux maintainers to Cc);
>> this issue was introduced way back in 2.6.26 (before that there
>> was no allocation on that path). It's not clear though whether
>> using GFP_ATOMIC for the allocation would be preferable over
>> stts() before calling the allocation function (and clts() if it
>> succeeded), or whether perhaps to defer the stts() until we
>> actually know the task is being switched out. It's going to be an
>> ugly, Xen-specific hack in any event.
> 
> Was there ever a resolution to this problem? I never saw a comment
> from the Linux Xen PV maintainers.

Neither did I, so no, I'm not aware of a solution.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 08:09:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:09:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIw1u-0000EE-2Y; Thu, 27 Feb 2014 08:09:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIw1t-0000E9-8M
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 08:09:17 +0000
Received: from [85.158.139.211:30620] by server-14.bemta-5.messagelabs.com id
	45/A0-27598-CA2FE035; Thu, 27 Feb 2014 08:09:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393488555!6578068!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21695 invoked from network); 27 Feb 2014 08:09:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 08:09:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 08:09:15 +0000
Message-Id: <530F00C7020000780011FBA1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 08:09:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
	<5306E5D3.6000302@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
	<530E1D9E020000780011F938@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F7F31@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F7F31@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 02:31, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2014-02-27:
>>>>> On 26.02.14 at 06:15, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> @@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>>>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>>>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>>> -            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>>> -                goto exit_and_crash;
>>> -            domain_pause_for_debugger();
>>> +            if ( v->domain->debugger_attached )
>>> +                domain_pause_for_debugger();
>>> +            else
>>> +            {
>>> +                __restore_debug_registers(v);
>>> +                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
>>> +            }
>> 
>> I suppose you need to set DR6.BS after restoring the reigsters?
> 
> Right but is not enough. If flag_dr_dirty is set, we need to restore 
> register from hardware. Conversely, restore is from debugreg and set DR6 to 
> exit_qualification.

After some more thought, I in fact doubt that restoring the debug
registers is in line with the current model: We should simply set
DR6.BS in the in-memory copy when the debug registers aren't
live yet (and it doesn't hurt to always do that). And since DR6
bits generally are sticky, I think exit_qualification actually needs
to be or-ed into the in-memory copy.

And presumably we would be better off if we dropped the
interception of TRAP_debug once we restored the debug
registers.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 08:09:27 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:09:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIw1u-0000EE-2Y; Thu, 27 Feb 2014 08:09:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIw1t-0000E9-8M
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 08:09:17 +0000
Received: from [85.158.139.211:30620] by server-14.bemta-5.messagelabs.com id
	45/A0-27598-CA2FE035; Thu, 27 Feb 2014 08:09:16 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393488555!6578068!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21695 invoked from network); 27 Feb 2014 08:09:15 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 08:09:15 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 08:09:15 +0000
Message-Id: <530F00C7020000780011FBA1@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 08:09:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Yang Z Zhang" <yang.z.zhang@intel.com>
References: <5305BE9F.2090600@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F1C91@SHSMSX104.ccr.corp.intel.com>
	<5306E5D3.6000302@ts.fujitsu.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F6FC2@SHSMSX104.ccr.corp.intel.com>
	<530E1D9E020000780011F938@nat28.tlf.novell.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F7F31@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F7F31@SHSMSX104.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <juergen.gross@ts.fujitsu.com>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] Single step in HVM domU on Intel machine may see
 wrong DB6
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 02:31, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
> Jan Beulich wrote on 2014-02-27:
>>>>> On 26.02.14 at 06:15, "Zhang, Yang Z" <yang.z.zhang@intel.com> wrote:
>>> @@ -2690,9 +2688,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>              __vmread(EXIT_QUALIFICATION, &exit_qualification);
>>>              HVMTRACE_1D(TRAP_DEBUG, exit_qualification);
>>>              write_debugreg(6, exit_qualification | 0xffff0ff0);
>>> -            if ( !v->domain->debugger_attached || cpu_has_monitor_trap_flag )
>>> -                goto exit_and_crash;
>>> -            domain_pause_for_debugger();
>>> +            if ( v->domain->debugger_attached )
>>> +                domain_pause_for_debugger();
>>> +            else
>>> +            {
>>> +                __restore_debug_registers(v);
>>> +                hvm_inject_hw_exception(TRAP_debug, HVM_DELIVER_NO_ERROR_CODE);
>>> +            }
>> 
>> I suppose you need to set DR6.BS after restoring the registers?
> 
> Right, but that alone is not enough. If flag_dr_dirty is set, we need to
> restore the registers from hardware; otherwise, restore them from debugreg
> and set DR6 to exit_qualification.

After some more thought, I in fact doubt that restoring the debug
registers is in line with the current model: We should simply set
DR6.BS in the in-memory copy when the debug registers aren't
live yet (and it doesn't hurt to always do that). And since DR6
bits generally are sticky, I think exit_qualification actually needs
to be or-ed into the in-memory copy.

And presumably we would be better off if we dropped the
interception of TRAP_debug once we restored the debug
registers.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 08:23:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIwFO-0000O5-Tf; Thu, 27 Feb 2014 08:23:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <glikely@secretlab.ca>) id 1WIn54-00055C-KA
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:35:59 +0000
Received: from [193.109.254.147:8626] by server-14.bemta-14.messagelabs.com id
	01/3E-29228-D4C6E035; Wed, 26 Feb 2014 22:35:57 +0000
X-Env-Sender: glikely@secretlab.ca
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393454155!3396458!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_32, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27368 invoked from network); 26 Feb 2014 22:35:56 -0000
Received: from mail-ig0-f173.google.com (HELO mail-ig0-f173.google.com)
	(209.85.213.173)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:35:56 -0000
Received: by mail-ig0-f173.google.com with SMTP id r1so11251252igi.0
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:35:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=71NUAEgqUcc3KV8mujUjRDxxi2LjuxgoBnN/Wu6MIIg=;
	b=fS5WAeaGRF8f/QBryS9PuN0ecN6Uyt+zn5zPwlKLm75ydhaEvLqPCqUOTQRG4odZ64
	3Fvcj9kX8ru68bMl0ln77p4l0/lHddIlF1jsNCZ+YkFXyHfSlR0nx4RdSFSKjZ09sGe8
	T9yS6zm4s5sSk8QGQC2UuZ/5q4z85uZLm1uRZ5vGgMzzJT4BtC9CHFcdhNpW1ZiDBNEL
	Ol0yw1xTzkMa8nLGYMtqkieQTuQ4xuySxPm4qUJwJ9a+3R0s0g5rqZh+o/llO/mEFcC+
	ISFkg6wsAOAT47M92wrhBJHHvMIVYAXIjuXzl7A+A5HfMjrXL9ZTNzRqbPKzTISdbXuI
	rW+g==
X-Gm-Message-State: ALoCoQmqnrXv0iWy00mppQVF39cilhV7rvWkYEEPpTzNKaEqYKSpY0PDLCV/GMaRGFoDR6171+j3
MIME-Version: 1.0
X-Received: by 10.50.62.211 with SMTP id a19mr2252276igs.46.1393454154652;
	Wed, 26 Feb 2014 14:35:54 -0800 (PST)
Received: by 10.64.81.79 with HTTP; Wed, 26 Feb 2014 14:35:54 -0800 (PST)
Received: by 10.64.81.79 with HTTP; Wed, 26 Feb 2014 14:35:54 -0800 (PST)
In-Reply-To: <20140226183454.GA14639@cbox>
References: <20140226183454.GA14639@cbox>
Date: Wed, 26 Feb 2014 22:35:54 +0000
X-Google-Sender-Auth: 1Oeob5fl8PIwVZGaEDnoQch8sfs
Message-ID: <CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
From: Grant Likely <grant.likely@linaro.org>
To: Christoffer Dall <christoffer.dall@linaro.org>
X-Mailman-Approved-At: Thu, 27 Feb 2014 08:23:13 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	cross-distro@lists.linaro.org,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3241241134938229303=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3241241134938229303==
Content-Type: multipart/alternative; boundary=047d7bd75920e636cd04f356d358

--047d7bd75920e636cd04f356d358
Content-Type: text/plain; charset=ISO-8859-1

Hi Christoffer,

Comments below...

On 26 Feb 2014 18:35, "Christoffer Dall" <christoffer.dall@linaro.org>
wrote:
>
> ARM VM System Specification
> ===========================
>
> Goal
> ----
> The goal of this spec is to allow suitably-built OS images to run on
> all ARM virtualization solutions, such as KVM or Xen.
>
> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> they aim to be hypervisor agnostic.
>
> Note that simply adhering to the SBSA [2] is not a valid approach,
> for example because the SBSA mandates EL2, which will not be available
> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> may be controversial for some ARM VM implementations to support.
> This spec also covers the aarch32 execution mode, not covered in the
> SBSA.
>
>
> Image format
> ------------
> The image format, as presented to the VM, needs to be well-defined in
> order for prepared disk images to be bootable across various
> virtualization implementations.
>
> The raw disk format as presented to the VM must be partitioned with a
> GUID Partition Table (GPT).  The bootable software must be placed in the
> EFI System Partition (ESP), using the UEFI removable media path, and
> must be an EFI application complying to the UEFI Specification 2.4
> Revision A [6].
>
> The ESP partition's GPT entry's partition type GUID must be
> C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
>
> The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> state.
>
> This ensures that tools for both Xen and KVM can load a binary UEFI
> firmware which can read and boot the EFI application in the disk image.
>
> A typical scenario will be GRUB2 packaged as an EFI application, which
> mounts the system boot partition and boots Linux.
>
>
> Virtual Firmware
> ----------------
> The VM system must be able to boot the EFI application in the ESP.  It
> is recommended that this is achieved by loading a UEFI binary as the
> first software executed by the VM, which then executes the EFI
> application.  The UEFI implementation should be compliant with UEFI
> Specification 2.4 Revision A [6] or later.
>
> This document strongly recommends that the VM implementation supports
> persistent environment storage for the virtual firmware implementation, in
> order to support likely use cases such as adding additional disk images
> to a VM or running installers to perform upgrades.
>
> The binary UEFI firmware implementation should not be distributed as
> part of the VM image, but is specific to the VM implementation.
>
>
> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the
> kernel entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.

I would drop pretty much all of the above detail of the kernel entry point.
The spec should specify UEFI compliance and stop there.

What is relevant is the allowance for the UEFI implementation to provide an
FDT and/or ACPI via the configuration table.

>
> Therefore, the VM implementation must provide through its UEFI
> implementation, either:
>
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
>
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.

It is actually valid for the VM to provide both ACPI and FDT. In that
scenario it is up to the OS to choose which it will use.

> For more information about the arm and arm64 boot conventions, see
> Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> Linux kernel source tree.
>
> For more information about UEFI and ACPI booting, see [4] and [5].
>
>
> VM Platform
> -----------
> The specification does not mandate any specific memory map.  The guest
> OS must be able to enumerate all processing elements, devices, and
> memory through HW description data (FDT, ACPI) or a bus-specific
> mechanism such as PCI.
>
> The virtual platform must support at least one of the following ARM
> execution states:
>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
>
> It is recommended to support both (2) and (3) on aarch64 capable
> physical systems.
>
> The virtual hardware platform must provide a number of mandatory
> peripherals:
>
>   Serial console:  The platform should provide a console,
>   based on an emulated pl011, a virtio-console, or a Xen PV console.

For a portable disk image, can Xen PV be dropped from the list? The pl011 is
part of the SBSA, and virtio is being standardised, but Xen PV is
implementation-specific.

>
>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>   limits the number of virtual CPUs to 8 cores; newer GIC versions
>   remove this limitation.
>
>   The ARM virtual timer and counter should be available to the VM as
>   per the ARM Generic Timers specification in the ARM ARM [1].
>
>   A hotpluggable bus to support hotplug of at least block and network
>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>   bus.
>
>
> We make the following recommendations for the guest OS kernel:
>
>   The guest OS must include support for GICv2 and any available newer
>   version of the GIC architecture to maintain compatibility with older
>   VM implementations.
>
>   It is strongly recommended to include support for all available
>   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
>   drivers in the guest OS kernel or initial ramdisk.
>
>
> Other common peripherals for block devices, networking, and more can
> (and typically will) be provided, but OS software written and compiled
> to run on ARM VMs cannot make any assumptions about which variations
> of these should exist or which implementation they use (e.g. VirtIO or
> Xen PV).  See "Hardware Description" above.
>
> Note that this platform specification is separate from the Linux kernel
> concept of mach-virt, which merely specifies a machine model driven
> purely from device tree, but does not mandate any peripherals or have any
> mention of ACPI.
>
>
> References
> ----------
> [1]: The ARM Architecture Reference Manual, ARMv8, Issue A.b
>
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html
>
> [2]: ARM Server Base System Architecture
>
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html
>
> [3]: The ARM Generic Interrupt Controller Architecture Specifications v2.0
>
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html
>
> [4]: http://www.secretlab.ca/archives/27
>
> [5]:
> https://git.linaro.org/people/leif.lindholm/linux.git/blob/refs/heads/uefi-for-upstream:/Documentation/arm/uefi.txt
>
> [6]: UEFI Specification 2.4 Revision A
> http://www.uefi.org/sites/default/files/resources/2_4_Errata_A.pdf

--047d7bd75920e636cd04f356d358--


--===============3241241134938229303==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3241241134938229303==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 08:23:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIwFP-0000OC-9L; Thu, 27 Feb 2014 08:23:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <blibbet@gmail.com>) id 1WIp2i-0007eG-4l
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 00:41:40 +0000
Received: from [85.158.137.68:38031] by server-1.bemta-3.messagelabs.com id
	F7/99-17293-3C98E035; Thu, 27 Feb 2014 00:41:39 +0000
X-Env-Sender: blibbet@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393461696!4464440!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=1.4 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	NO_FORMS,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14352 invoked from network); 27 Feb 2014 00:41:38 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 00:41:38 -0000
Received: by mail-pb0-f41.google.com with SMTP id jt11so1756424pbb.0
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 16:41:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=q8KDyXh58Z3nMCkQ/PTbfu30SOBOj0BKSoMYoo59IH4=;
	b=AowZQGaY+4/Uhbnvcfw1jn8oqkFrFj8Ci+YW4kYJCJTMvjCQ4rWqYvyGAE9cQykXn2
	eYMyleYL6amd8ZexCEgXAdTTQz9X7c6c5nDvQA1HabjzeXuiqe29vVgXlWV3eol8DPpj
	LzkJrBGGkEzzNYglMLEXnfDlmzKIggOO2pYDZokLBpAXFUomHmaM9RzRd4pmaK/P2ezT
	QzWhsQq68y/RfAWf1YGlL4w+PLUuEovc9Nvc7G8hyqJONDAVV6KLzplUr2cYQQ63zvIs
	R9ykj1e5d/K4toY2/gsf2AAudP9G/ryo/kihQoOh+PntZKbfK6MFvESwmmNVOlXcDXOd
	LEHw==
X-Received: by 10.68.129.5 with SMTP id ns5mr9983790pbb.147.1393461696314;
	Wed, 26 Feb 2014 16:41:36 -0800 (PST)
Received: from box.local ([184.11.139.108])
	by mx.google.com with ESMTPSA id tu3sm17845504pab.1.2014.02.26.16.41.35
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Wed, 26 Feb 2014 16:41:35 -0800 (PST)
Message-ID: <530E89B1.90604@gmail.com>
Date: Wed, 26 Feb 2014 16:41:21 -0800
From: Blibbet <blibbet@gmail.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.5;
	rv:16.0) Gecko/20121005 Thunderbird/16.0
MIME-Version: 1.0
To: Christoffer Dall <christoffer.dall@linaro.org>
References: <20140226183454.GA14639@cbox>
In-Reply-To: <20140226183454.GA14639@cbox>
X-Mailman-Approved-At: Thu, 27 Feb 2014 08:23:13 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2/26/14 10:34 AM, Christoffer Dall wrote:
> ARM VM System Specification
> ===========================

See also the thread forked off to the EFI dev list, about using existing 
EFI ByteCode (EBC) for this new purpose, especially the informative 
reply from Andrew Fish of Apple.com:

http://sourceforge.net/p/edk2/mailman/message/32031943/

EBC was created by Intel and today targets Intel's three platforms, but 
no ARM platforms yet. Existing EFI implementations already include an 
EBC "VM", and a BSD-licensed implementation exists. EBC's goal was to 
let IHVs share Option ROM style drivers rather than ship multiple 
per-architecture ones. Having ARM and Intel use the same VM/bytecode 
would be even better for IHVs. I'm unclear on your non-EFI use cases, so 
it may not be useful outside EFI. IMO the main issue with EBC is that 
only the commercial Intel and Microsoft compilers support it, not GCC or 
Clang. IP clarity of this Intel creation would also be an issue, but 
apparently the UEFI Forum owns the spec.

There are too many lists CC'ed already, but if this becomes a valid 
option, the linux-efi list on kernel.org needs to get invited. :-)

Thanks,
Lee


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 08:23:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIwFO-0000O5-Tf; Thu, 27 Feb 2014 08:23:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <glikely@secretlab.ca>) id 1WIn54-00055C-KA
	for xen-devel@lists.xen.org; Wed, 26 Feb 2014 22:35:59 +0000
Received: from [193.109.254.147:8626] by server-14.bemta-14.messagelabs.com id
	01/3E-29228-D4C6E035; Wed, 26 Feb 2014 22:35:57 +0000
X-Env-Sender: glikely@secretlab.ca
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393454155!3396458!1
X-Originating-IP: [209.85.213.173]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_32, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27368 invoked from network); 26 Feb 2014 22:35:56 -0000
Received: from mail-ig0-f173.google.com (HELO mail-ig0-f173.google.com)
	(209.85.213.173)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	26 Feb 2014 22:35:56 -0000
Received: by mail-ig0-f173.google.com with SMTP id r1so11251252igi.0
	for <xen-devel@lists.xen.org>; Wed, 26 Feb 2014 14:35:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=71NUAEgqUcc3KV8mujUjRDxxi2LjuxgoBnN/Wu6MIIg=;
	b=fS5WAeaGRF8f/QBryS9PuN0ecN6Uyt+zn5zPwlKLm75ydhaEvLqPCqUOTQRG4odZ64
	3Fvcj9kX8ru68bMl0ln77p4l0/lHddIlF1jsNCZ+YkFXyHfSlR0nx4RdSFSKjZ09sGe8
	T9yS6zm4s5sSk8QGQC2UuZ/5q4z85uZLm1uRZ5vGgMzzJT4BtC9CHFcdhNpW1ZiDBNEL
	Ol0yw1xTzkMa8nLGYMtqkieQTuQ4xuySxPm4qUJwJ9a+3R0s0g5rqZh+o/llO/mEFcC+
	ISFkg6wsAOAT47M92wrhBJHHvMIVYAXIjuXzl7A+A5HfMjrXL9ZTNzRqbPKzTISdbXuI
	rW+g==
X-Gm-Message-State: ALoCoQmqnrXv0iWy00mppQVF39cilhV7rvWkYEEPpTzNKaEqYKSpY0PDLCV/GMaRGFoDR6171+j3
MIME-Version: 1.0
X-Received: by 10.50.62.211 with SMTP id a19mr2252276igs.46.1393454154652;
	Wed, 26 Feb 2014 14:35:54 -0800 (PST)
Received: by 10.64.81.79 with HTTP; Wed, 26 Feb 2014 14:35:54 -0800 (PST)
Received: by 10.64.81.79 with HTTP; Wed, 26 Feb 2014 14:35:54 -0800 (PST)
In-Reply-To: <20140226183454.GA14639@cbox>
References: <20140226183454.GA14639@cbox>
Date: Wed, 26 Feb 2014 22:35:54 +0000
X-Google-Sender-Auth: 1Oeob5fl8PIwVZGaEDnoQch8sfs
Message-ID: <CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
From: Grant Likely <grant.likely@linaro.org>
To: Christoffer Dall <christoffer.dall@linaro.org>
X-Mailman-Approved-At: Thu, 27 Feb 2014 08:23:13 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	cross-distro@lists.linaro.org,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3241241134938229303=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3241241134938229303==
Content-Type: multipart/alternative; boundary=047d7bd75920e636cd04f356d358

--047d7bd75920e636cd04f356d358
Content-Type: text/plain; charset=ISO-8859-1

Hi Christoffer,

Comments below...

On 26 Feb 2014 18:35, "Christoffer Dall" <christoffer.dall@linaro.org>
wrote:
>
> ARM VM System Specification
> ===========================
>
> Goal
> ----
> The goal of this spec is to allow suitably-built OS images to run on
> all ARM virtualization solutions, such as KVM or Xen.
>
> Recommendations in this spec are valid for aarch32 and aarch64 alike, and
> they aim to be hypervisor agnostic.
>
> Note that simply adhering to the SBSA [2] is not a valid approach,
> for example because the SBSA mandates EL2, which will not be available
> for VMs.  Further, the SBSA mandates peripherals like the pl011, which
> may be controversial for some ARM VM implementations to support.
> This spec also covers the aarch32 execution mode, not covered in the
> SBSA.
>
>
> Image format
> ------------
> The image format, as presented to the VM, needs to be well-defined in
> order for prepared disk images to be bootable across various
> virtualization implementations.
>
> The raw disk format as presented to the VM must be partitioned with a
> GUID Partition Table (GPT).  The bootable software must be placed in the
> EFI System Partition (ESP), using the UEFI removable media path, and
> must be an EFI application complying to the UEFI Specification 2.4
> Revision A [6].
>
> The ESP partition's GPT entry's partition type GUID must be
> C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
> formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
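
A practical note for anyone validating images against this requirement:
the type GUID above is stored on disk in GPT's mixed-endian layout, so
a naive byte comparison against the textual form fails. A minimal
Python sketch (the helper name is mine, not from the spec):

```python
import uuid

# ESP partition type GUID, as given in the spec text above.
ESP_TYPE_GUID = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def is_esp(on_disk_type_field: bytes) -> bool:
    """Check a 16-byte GPT partition-entry type field against the ESP GUID.

    GPT stores GUIDs mixed-endian (first three fields little-endian,
    last two big-endian); Python's bytes_le matches that layout.
    """
    return uuid.UUID(bytes_le=on_disk_type_field) == ESP_TYPE_GUID

# The on-disk byte order differs from the textual form:
print(ESP_TYPE_GUID.bytes_le.hex())  # starts with 28732ac1, not c12a7328
```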
>
> The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
> execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
> state.
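
For image-building tooling, that path choice reduces to a two-entry
table keyed by execution state. A hypothetical helper (names are mine,
not from the spec):

```python
# UEFI removable media paths named in the spec, keyed by the VM's
# execution state.  Hypothetical helper for image-building tools.
REMOVABLE_MEDIA_PATH = {
    "aarch32": "\\EFI\\BOOT\\BOOTARM.EFI",
    "aarch64": "\\EFI\\BOOT\\BOOTAA64.EFI",
}

def default_boot_path(execution_state: str) -> str:
    """Return the removable media path UEFI firmware will try by default."""
    try:
        return REMOVABLE_MEDIA_PATH[execution_state]
    except KeyError as err:
        raise ValueError(f"unknown execution state: {execution_state}") from err
```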
>
> This ensures that tools for both Xen and KVM can load a binary UEFI
> firmware which can read and boot the EFI application in the disk image.
>
> A typical scenario will be GRUB2 packaged as an EFI application, which
> mounts the system boot partition and boots Linux.
>
>
> Virtual Firmware
> ----------------
> The VM system must be able to boot the EFI application in the ESP.  It
> is recommended that this is achieved by loading a UEFI binary as the
> first software executed by the VM, which then executes the EFI
> application.  The UEFI implementation should be compliant with UEFI
> Specification 2.4 Revision A [6] or later.
>
> This document strongly recommends that the VM implementation support
> persistent environment storage for its virtual firmware, in order to
> support likely use cases such as adding additional disk images to a VM
> or running installers to perform upgrades.
>
> The binary UEFI firmware implementation should not be distributed as
> part of the VM image, but is specific to the VM implementation.
>
>
> Hardware Description
> --------------------
> The Linux kernel's proper entry point always takes a pointer to an FDT,
> regardless of the boot mechanism, firmware, and hardware description
> method.  Even on real hardware which only supports ACPI and UEFI, the kernel
> entry point will still receive a pointer to a simple FDT, generated by
> the Linux kernel UEFI stub, containing a pointer to the UEFI system
> table.  The kernel can then discover ACPI from the system tables.  The
> presence of ACPI vs. FDT is therefore always itself discoverable,
> through the FDT.
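
To make that discovery chain concrete: everything handed to the kernel
this way begins with the standard flattened-device-tree header, so even
a stub-generated table is identifiable by its magic word. A minimal
sketch (assuming the usual devicetree header layout):

```python
import struct

FDT_MAGIC = 0xD00DFEED  # big-endian magic word opening every flattened device tree

def looks_like_fdt(blob: bytes) -> bool:
    """Sanity-check the start of a devicetree blob: magic plus a sane totalsize."""
    if len(blob) < 8:
        return False
    magic, totalsize = struct.unpack(">II", blob[:8])
    return magic == FDT_MAGIC and totalsize >= 8

# A header-only example: magic word followed by a claimed total size.
header = struct.pack(">II", FDT_MAGIC, 40)
```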

I would drop pretty much all of the above detail of the kernel entry point.
The spec should specify UEFI compliance and stop there.

What is relevant is the allowance for the UEFI implementation to provide an
FDT and/or ACPI via the configuration table.

>
> Therefore, the VM implementation must provide through its UEFI
> implementation, either:
>
>   a complete FDT which describes the entire VM system and will boot
>   mainline kernels driven by device tree alone, or
>
>   no FDT.  In this case, the VM implementation must provide ACPI, and
>   the OS must be able to locate the ACPI root pointer through the UEFI
>   system table.

It is actually valid for the VM to provide both ACPI and FDT. In that
scenario it is up to the OS to choose which it will use.
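
Concretely, both show up as entries in the EFI configuration table,
which the OS scans by vendor GUID. A sketch against a mock table; the
GUID values are the well-known ACPI 2.0 and devicetree table GUIDs as I
recall them from EDK2, so treat them as assumptions:

```python
import uuid

# Vendor GUIDs assumed from EDK2 headers (EFI_ACPI_20_TABLE_GUID and
# the devicetree configuration table GUID) -- verify against the spec.
ACPI_20_TABLE_GUID = uuid.UUID("8868e871-e4f1-11d3-bc22-0080c73c8881")
DEVICE_TREE_GUID = uuid.UUID("b1b621d5-f19c-41a5-830b-d9152c69aae0")

def find_config_table(config_table, vendor_guid):
    """Return the pointer paired with vendor_guid, or None if absent.

    Mirrors the linear scan an OS does over the UEFI configuration
    table, modeled here as a list of (GUID, pointer) pairs.
    """
    for guid, pointer in config_table:
        if guid == vendor_guid:
            return pointer
    return None

# A VM exposing both ACPI and an FDT; the OS picks whichever it wants.
mock_table = [(ACPI_20_TABLE_GUID, 0x4000_0000),
              (DEVICE_TREE_GUID, 0x4800_0000)]
```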

> For more information about the arm and arm64 boot conventions, see
> Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> Linux kernel source tree.
>
> For more information about UEFI and ACPI booting, see [4] and [5].
>
>
> VM Platform
> -----------
> The specification does not mandate any specific memory map.  The guest
> OS must be able to enumerate all processing elements, devices, and
> memory through HW description data (FDT, ACPI) or a bus-specific
> mechanism such as PCI.
>
> The virtual platform must support at least one of the following ARM
> execution states:
>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
>
> It is recommended to support both (2) and (3) on aarch64 capable
> physical systems.
>
> The virtual hardware platform must provide a number of mandatory
> peripherals:
>
>   Serial console:  The platform should provide a console,
>   based on an emulated pl011, a virtio-console, or a Xen PV console.

For a portable disk image, can Xen PV be dropped from the list? The pl011 is
part of the SBSA, and virtio is being standardised, but Xen PV is
implementation-specific.

>
>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>   limits the number of virtual CPUs to 8 cores; newer GIC versions
>   remove this limitation.
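
The 8-vCPU cap is an artifact of GICv2's 8-bit CPU-targets fields (one
bit per target CPU in the distributor's GICD_ITARGETSRn registers);
GICv3's affinity routing removes it. A hypothetical sanity-check helper,
not from the spec:

```python
# Per-GIC-version vCPU caps; GICv2's cap follows from its 8-bit
# ITARGETSR target fields.  Newer GICs have no comparable limit here.
GIC_MAX_VCPUS = {"gicv2": 8}

def check_vcpu_count(gic_version: str, vcpus: int) -> bool:
    """Return True if a VM with this many vCPUs fits the given GIC."""
    limit = GIC_MAX_VCPUS.get(gic_version)
    return limit is None or vcpus <= limit

def itargets_mask(target_cpus) -> int:
    """Encode a set of target CPUs into a GICv2 8-bit ITARGETSR field."""
    mask = 0
    for cpu in target_cpus:
        if not 0 <= cpu < 8:
            raise ValueError("GICv2 can only target CPUs 0-7")
        mask |= 1 << cpu
    return mask
```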
>
>   The ARM virtual timer and counter should be available to the VM as
>   per the ARM Generic Timers specification in the ARM ARM [1].
>
>   A hotpluggable bus to support hotplug of at least block and network
>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>   bus.
>
>
> We make the following recommendations for the guest OS kernel:
>
>   The guest OS must include support for GICv2 and any available newer
>   version of the GIC architecture to maintain compatibility with older
>   VM implementations.
>
>   It is strongly recommended to include support for all available
>   (block, network, console, balloon) virtio-pci, virtio-mmio, and Xen PV
>   drivers in the guest OS kernel or initial ramdisk.
>
>
> Other common peripherals for block devices, networking, and more can
> (and typically will) be provided, but OS software written and compiled
> to run on ARM VMs cannot make any assumptions about which variations
> of these should exist or which implementation they use (e.g. VirtIO or
> Xen PV).  See "Hardware Description" above.
>
> Note that this platform specification is separate from the Linux kernel
> concept of mach-virt, which merely specifies a machine model driven
> purely from device tree, but does not mandate any peripherals or have any
> mention of ACPI.
>
>
> References
> ----------
> [1]: The ARM Architecture Reference Manual, ARMv8, Issue A.b
>
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html
>
> [2]: ARM Server Base System Architecture
>
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.den0029/index.html
>
> [3]: The ARM Generic Interrupt Controller Architecture Specifications v2.0
>
> http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.b/index.html
>
> [4]: http://www.secretlab.ca/archives/27
>
> [5]:
> https://git.linaro.org/people/leif.lindholm/linux.git/blob/refs/heads/uefi-for-upstream:/Documentation/arm/uefi.txt
>
> [6]: UEFI Specification 2.4 Revision A
> http://www.uefi.org/sites/default/files/resources/2_4_Errata_A.pdf

--047d7bd75920e636cd04f356d358--


--===============3241241134938229303==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3241241134938229303==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 08:38:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIwTZ-0000in-8x; Thu, 27 Feb 2014 08:37:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WIwTX-0000ih-Pa
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 08:37:51 +0000
Received: from [85.158.139.211:14352] by server-12.bemta-5.messagelabs.com id
	C9/23-15415-E59FE035; Thu, 27 Feb 2014 08:37:50 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393490269!6581311!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22824 invoked from network); 27 Feb 2014 08:37:50 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 08:37:50 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=twins)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WIwT1-0005Rj-4v; Thu, 27 Feb 2014 08:37:19 +0000
Received: by twins (Postfix, from userid 1000)
	id 6C85C82787BE; Thu, 27 Feb 2014 09:37:15 +0100 (CET)
Date: Thu, 27 Feb 2014 09:37:15 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140227083715.GY9987@twins.programming.kicks-ass.net>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>, kvm@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	Ingo Molnar <mingo@redhat.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Arnd Bergmann <arnd@arndb.de>, Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>,
	Chegu Vinod <chegu_vinod@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock
	with PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



Is this the same 8 patches you sent yesterday?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 08:58:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 08:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIwnX-0000v6-MN; Thu, 27 Feb 2014 08:58:31 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WIwnW-0000ul-3H
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 08:58:30 +0000
Received: from [85.158.137.68:16746] by server-4.bemta-3.messagelabs.com id
	EF/24-04858-53EFE035; Thu, 27 Feb 2014 08:58:29 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393491506!3259108!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17714 invoked from network); 27 Feb 2014 08:58:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 08:58:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,553,1389744000"; d="scan'208";a="104577395"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 08:58:25 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 03:58:25 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WIwnR-0000pG-55;
	Thu, 27 Feb 2014 08:58:25 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WIwnQ-00049m-Tv;
	Thu, 27 Feb 2014 08:58:25 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25315-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 08:58:24 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25315: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25315 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25315/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275
 build-i386                    4 xen-build                 fail REGR. vs. 25281

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  3 host-install(3)     broken pass in 25312

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop fail in 25312 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop      fail in 25312 never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop           fail in 25312 never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25312 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop      fail in 25312 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail in 25312 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop     fail in 25312 never pass

version targeted for testing:
 xen                  c9f8e0aee507bec25104ca5535fde38efae6c6bc
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           broken  
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 407 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 09:20:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 09:20:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIx8f-000191-7y; Thu, 27 Feb 2014 09:20:21 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIx8e-00018w-9Q
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 09:20:20 +0000
Received: from [85.158.137.68:52831] by server-14.bemta-3.messagelabs.com id
	AD/D3-08196-3530F035; Thu, 27 Feb 2014 09:20:19 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393492818!1724400!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12661 invoked from network); 27 Feb 2014 09:20:18 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-12.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 09:20:18 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 09:20:18 +0000
Message-Id: <530F116F020000780011FC45@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 09:20:31 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
	<1393437647-16694-2-git-send-email-dslutz@verizon.com>
In-Reply-To: <1393437647-16694-2-git-send-email-dslutz@verizon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 26.02.14 at 19:00, Don Slutz <dslutz@verizon.com> wrote:
> Currently in 32 bit mode the routine hpet_set_timer() will convert a
> time in the past to a time in the future.  This is done by the uint32_t
> cast of diff.
> 
> Even without this issue, hpet_tick_to_ns() does not support past
> times.
> 
> Real hardware does not support past times.
> 
> So just do the same thing in 32 bit mode as 64 bit mode.

While the change looks valid at first glance, what I'm missing
is an explanation of how the problem that the introduction of this
code fixed (c/s 13495:e2539ab3580a "[HVM] Fix slow wallclock in x64
Vista") is now being taken care of (or why it is of no concern).
That's pretty relevant considering how long this code has been
there without causing (known) problems for anyone.

Jan

> Without this change it is possible for an HVM guest running Linux to
> get the message:
> 
> ..MP-BIOS bug: 8254 timer not connected to IO-APIC
> 
> on the guest console(s), and the guest will panic.
> 
> Also, the Xen hypervisor console will be flooded with:
> 
> vioapic.c:352:d1 Unsupported delivery mode 7
> vioapic.c:352:d1 Unsupported delivery mode 7
> vioapic.c:352:d1 Unsupported delivery mode 7
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> ---
>  xen/arch/x86/hvm/hpet.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index 4324b52..14b1a39 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -197,10 +197,6 @@ static void hpet_stop_timer(HPETState *h, unsigned int tn)
>      hpet_get_comparator(h, tn);
>  }
>  
> -/* the number of HPET tick that stands for
> - * 1/(2^10) second, namely, 0.9765625 milliseconds */
> -#define  HPET_TINY_TIME_SPAN  ((h->stime_freq >> 10) / STIME_PER_HPET_TICK)
> -
>  static void hpet_set_timer(HPETState *h, unsigned int tn)
>  {
>      uint64_t tn_cmp, cur_tick, diff;
> @@ -231,14 +227,11 @@ static void hpet_set_timer(HPETState *h, unsigned int tn)
>      diff = tn_cmp - cur_tick;
>  
>      /*
> -     * Detect time values set in the past. This is hard to do for 32-bit
> -     * comparators as the timer does not have to be set that far in the future
> -     * for the counter difference to wrap a 32-bit signed integer. We fudge
> -     * by looking for a 'small' time value in the past.
> +     * Detect time values set in the past. Since hpet_tick_to_ns() does
> +     * not handle this, use 0 for both 64 and 32 bit mode.
>       */
>      if ( (int64_t)diff < 0 )
> -        diff = (timer_is_32bit(h, tn) && (-diff > HPET_TINY_TIME_SPAN))
> -            ? (uint32_t)diff : 0;
> +        diff = 0;
>  
>      if ( (tn <= 1) && (h->hpet.config & HPET_CFG_LEGACY) )
>          /* if LegacyReplacementRoute bit is set, HPET specification requires
> -- 
> 1.8.4




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 09:33:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 09:33:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIxKy-0001Ip-Or; Thu, 27 Feb 2014 09:33:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WIxKx-0001Ih-CU
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 09:33:03 +0000
Received: from [85.158.137.68:55538] by server-13.bemta-3.messagelabs.com id
	C3/83-26923-E460F035; Thu, 27 Feb 2014 09:33:02 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393493581!3269864!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17497 invoked from network); 27 Feb 2014 09:33:01 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-5.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 09:33:01 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1R9W1No018919
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 04:32:02 -0500
Received: from yakj.usersys.redhat.com (dhcp-176-198.mxp.redhat.com
	[10.32.176.198])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1R9Vqw6009029; Thu, 27 Feb 2014 04:31:52 -0500
Message-ID: <530F0607.1020705@redhat.com>
Date: Thu, 27 Feb 2014 10:31:51 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Waiman Long <Waiman.Long@hp.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Arnd Bergmann <arnd@arndb.de>, Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-9-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1393427668-60228-9-git-send-email-Waiman.Long@hp.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 8/8] pvqspinlock,
 x86: Enable KVM to use qspinlock's PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/2014 16:14, Waiman Long wrote:
> This patch enables KVM to use the queue spinlock's PV support code
> when the PARAVIRT_SPINLOCKS kernel config option is set. However,
> PV support for Xen is not ready yet and so the queue spinlock will
> still have to be disabled when PARAVIRT_SPINLOCKS config option is
> on with Xen.
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/kernel/kvm.c |   54 +++++++++++++++++++++++++++++++++++++++++++++++++
>  kernel/Kconfig.locks  |    2 +-
>  2 files changed, 55 insertions(+), 1 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index f318e78..3ddc436 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -568,6 +568,7 @@ static void kvm_kick_cpu(int cpu)
>  	kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
>  }
>  
> +#ifndef CONFIG_QUEUE_SPINLOCK
>  enum kvm_contention_stat {
>  	TAKEN_SLOW,
>  	TAKEN_SLOW_PICKUP,
> @@ -795,6 +796,55 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
>  		}
>  	}
>  }
> +#else /* !CONFIG_QUEUE_SPINLOCK */
> +
> +#ifdef CONFIG_KVM_DEBUG_FS
> +static struct dentry *d_spin_debug;
> +static struct dentry *d_kvm_debug;
> +static u32 lh_kick_stats;	/* Lock holder kick count */
> +static u32 qh_kick_stats;	/* Queue head kick count  */
> +static u32 nn_kick_stats;	/* Next node kick count   */
> +
> +static int __init kvm_spinlock_debugfs(void)
> +{
> +	d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
> +	if (!d_kvm_debug) {
> +		printk(KERN_WARNING
> +		       "Could not create 'kvm' debugfs directory\n");
> +		return -ENOMEM;
> +	}
> +	d_spin_debug = debugfs_create_dir("spinlocks", d_kvm_debug);
> +
> +	debugfs_create_u32("lh_kick_stats", 0644, d_spin_debug, &lh_kick_stats);
> +	debugfs_create_u32("qh_kick_stats", 0644, d_spin_debug, &qh_kick_stats);
> +	debugfs_create_u32("nn_kick_stats", 0644, d_spin_debug, &nn_kick_stats);
> +
> +	return 0;
> +}
> +
> +static inline void inc_kick_stats(enum pv_kick_type type)
> +{
> +	if (type == PV_KICK_LOCK_HOLDER)
> +		add_smp(&lh_kick_stats, 1);
> +	else if (type == PV_KICK_QUEUE_HEAD)
> +		add_smp(&qh_kick_stats, 1);
> +	else
> +		add_smp(&nn_kick_stats, 1);
> +}
> +fs_initcall(kvm_spinlock_debugfs);
> +
> +#else /* CONFIG_KVM_DEBUG_FS */
> +static inline void inc_kick_stats(enum pv_kick_type type)
> +{
> +}
> +#endif /* CONFIG_KVM_DEBUG_FS */
> +
> +static void kvm_kick_cpu_type(int cpu, enum pv_kick_type type)
> +{
> +	kvm_kick_cpu(cpu);
> +	inc_kick_stats(type);
> +}
> +#endif /* !CONFIG_QUEUE_SPINLOCK */
>  
>  /*
>   * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present.
> @@ -807,8 +857,12 @@ void __init kvm_spinlock_init(void)
>  	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>  		return;
>  
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +	pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
> +#else
>  	pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
>  	pv_lock_ops.unlock_kick = kvm_unlock_kick;
> +#endif
>  }
>  
>  static __init int kvm_spinlock_init_jump(void)
> diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
> index f185584..a70fdeb 100644
> --- a/kernel/Kconfig.locks
> +++ b/kernel/Kconfig.locks
> @@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
>  
>  config QUEUE_SPINLOCK
>  	def_bool y if ARCH_USE_QUEUE_SPINLOCK
> -	depends on SMP && !PARAVIRT_SPINLOCKS
> +	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
> 

Should this rather be

    def_bool y if ARCH_USE_QUEUE_SPINLOCK && (!PARAVIRT_SPINLOCKS || !XEN)

?

PARAVIRT_SPINLOCKS + XEN + QUEUE_SPINLOCK + PARAVIRT_UNFAIR_LOCKS is a
valid combination, but it's impossible to choose PARAVIRT_UNFAIR_LOCKS
if QUEUE_SPINLOCK is unavailable.
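
[Editor's note] The two Kconfig spellings under discussion can be sketched side by side. This is a hedged sketch only: the symbols are those from the patch, but exactly how PARAVIRT_UNFAIR_LOCKS is wired to QUEUE_SPINLOCK is an assumption, not shown in the quoted diff:

```kconfig
# Patch version: under PARAVIRT_SPINLOCKS && XEN the QUEUE_SPINLOCK
# symbol itself becomes unavailable, so any option that needs it
# (e.g. PARAVIRT_UNFAIR_LOCKS, assumed to depend on or select it)
# cannot be enabled either.
config QUEUE_SPINLOCK
	def_bool y if ARCH_USE_QUEUE_SPINLOCK
	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)

# Suggested version: the symbol stays available on SMP; only its
# *default* is gated, so it defaults to n under PARAVIRT_SPINLOCKS &&
# XEN but can still be brought in by another option.
config QUEUE_SPINLOCK
	def_bool y if ARCH_USE_QUEUE_SPINLOCK && (!PARAVIRT_SPINLOCKS || !XEN)
	depends on SMP
```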

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 09:36:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 09:36:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIxNt-0001R7-Ja; Thu, 27 Feb 2014 09:36:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <catalin.marinas@gmail.com>) id 1WIxNs-0001R1-LN
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 09:36:05 +0000
Received: from [85.158.139.211:47663] by server-1.bemta-5.messagelabs.com id
	CE/77-12859-3070F035; Thu, 27 Feb 2014 09:36:03 +0000
X-Env-Sender: catalin.marinas@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393493761!6583619!1
X-Originating-IP: [209.85.220.47]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15083 invoked from network); 27 Feb 2014 09:36:03 -0000
Received: from mail-pa0-f47.google.com (HELO mail-pa0-f47.google.com)
	(209.85.220.47)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 09:36:03 -0000
Received: by mail-pa0-f47.google.com with SMTP id lj1so487740pab.20
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 01:36:01 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:from:date:message-id
	:subject:to:cc:content-type;
	bh=oDNzAov0BcwqwYZLfbQxKeAi/2FO+GgFCvUG5JS/ySU=;
	b=Akj76orDjapGa7NJU6cXmfYTrSWKbzPb47h0K8XM1UouhG/lmegTMBDkI6eqlI7ZEV
	9DHlWQy5Zp5cGvG02l6GPsFEXVRSBIVsbRe1/S6eAELbzmtHcN4oSFpTiXHf/y2lBk/W
	9WsiKPkvkbRV0MvxPgZbBk+YCGyCCcD9I2QrVBe3XjM9RafBviUIlnHcIGTfNWm4LI9j
	Ah9fg3bjVYtH2ZX+ip2FNLKfLM9EzJIkkKCyovGX70evtleC0Rp5fWK5uh7jpl7w3k4p
	G40byUNqr1Fzgt48PegVy3fqcfETpFKbj49Y1dArTF6XwCJ/dDXAybS3wVHftyidHpO0
	e27g==
X-Received: by 10.66.136.229 with SMTP id qd5mr14129063pab.118.1393493760903; 
	Thu, 27 Feb 2014 01:36:00 -0800 (PST)
MIME-Version: 1.0
Received: by 10.70.135.228 with HTTP; Thu, 27 Feb 2014 01:35:40 -0800 (PST)
In-Reply-To: <20140226200537.GC16149@cbox>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226200537.GC16149@cbox>
From: Catalin Marinas <catalin.marinas@arm.com>
Date: Thu, 27 Feb 2014 09:35:40 +0000
X-Google-Sender-Auth: -hWfgSZ2IJuw34LepOTln1IjwCQ
Message-ID: <CAHkRjk57LyJGPZ4UdLVFpLrE2VeaY4doCwKo8UjbpzK52jTbVA@mail.gmail.com>
To: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	Arnd Bergmann <arnd@arndb.de>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26 February 2014 20:05, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> On Wed, Feb 26, 2014 at 08:55:58PM +0100, Arnd Bergmann wrote:
>> On Wednesday 26 February 2014 10:34:54 Christoffer Dall wrote:
>> > For more information about UEFI and ACPI booting, see [4] and [5].
>>
>> What's the point of having ACPI in a virtual machine? You wouldn't
>> need to abstract any of the hardware in AML since you already know
>> what the virtual hardware is, so I can't see how this would help
>> anyone.
>
> The most common response I've been getting so far is that people
> generally want their VMs to look close to the real thing, but not sure
> how valid an argument that is.
>
> Some people feel strongly about this and seem to think that ARMv8
> kernels will only work with ACPI in the future...

My strong feeling is that AArch64 kernels *may* support ACPI in the future ;).

On a more serious note, both FDT and ACPI will be first-class citizens
on AArch64 and I have no intention whatsoever of dropping FDT.

-- 
Catalin

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 09:42:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 09:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIxU2-0001cJ-HT; Thu, 27 Feb 2014 09:42:26 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WIxU0-0001c8-Q3
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 09:42:25 +0000
Received: from [85.158.139.211:41861] by server-2.bemta-5.messagelabs.com id
	9F/FC-23037-F780F035; Thu, 27 Feb 2014 09:42:23 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393494141!6585371!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1432 invoked from network); 27 Feb 2014 09:42:22 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-16.tower-206.messagelabs.com with SMTP;
	27 Feb 2014 09:42:22 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1R9fI3Q009496
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 04:41:19 -0500
Received: from yakj.usersys.redhat.com (dhcp-176-198.mxp.redhat.com
	[10.32.176.198])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1R9f7Aq025158; Thu, 27 Feb 2014 04:41:07 -0500
Message-ID: <530F0832.50205@redhat.com>
Date: Thu, 27 Feb 2014 10:41:06 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Waiman Long <Waiman.Long@hp.com>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Arnd Bergmann <arnd@arndb.de>, Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, Michel Lespinasse <walken@google.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	linux-arch@vger.kernel.org, x86@kernel.org,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
 x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 26/02/2014 16:14, Waiman Long ha scritto:
> This patch adds a KVM init function to activate the unfair queue
> spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> option is selected.
>
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/kernel/kvm.c |   17 +++++++++++++++++
>  1 files changed, 17 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 713f1b3..a489140 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
>  early_initcall(kvm_spinlock_init_jump);
>
>  #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +/*
> + * Enable unfair lock if running in a real para-virtualized environment
> + */
> +static __init int kvm_unfair_locks_init_jump(void)
> +{
> +	if (!kvm_para_available())
> +		return 0;
> +
> +	static_key_slow_inc(&paravirt_unfairlocks_enabled);
> +	printk(KERN_INFO "KVM setup unfair spinlock\n");
> +
> +	return 0;
> +}
> +early_initcall(kvm_unfair_locks_init_jump);
> +#endif
>

I think this should apply to all paravirt implementations, unless 
pv_lock_ops.kick_cpu != NULL.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 10:05:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 10:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIxq9-0001rg-L0; Thu, 27 Feb 2014 10:05:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <paolo.bonzini@gmail.com>) id 1WIxq7-0001rb-JR
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 10:05:15 +0000
Received: from [85.158.139.211:34537] by server-6.bemta-5.messagelabs.com id
	D0/D5-14342-ADD0F035; Thu, 27 Feb 2014 10:05:14 +0000
X-Env-Sender: paolo.bonzini@gmail.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393495513!6592711!1
X-Originating-IP: [209.85.216.44]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19077 invoked from network); 27 Feb 2014 10:05:14 -0000
Received: from mail-qa0-f44.google.com (HELO mail-qa0-f44.google.com)
	(209.85.216.44)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 10:05:14 -0000
Received: by mail-qa0-f44.google.com with SMTP id f11so3712910qae.31
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 02:05:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=k56qA/sy6liO2zTtXsiWA+1oLuTJwpulhtlQ9aAwBWQ=;
	b=IzezdAk96kt7C3z6Yy60t8EGwAtxEcANDXFoLELsawa4sPiiiDmo/gt3e0Gq90UNR1
	Lef00djJqNCbrVGjpzQgnjaivGWWPvd2/4OHR7EW0dykonf+Qqr0K/h5sFFM2PC9xcqG
	3k8C34YwYuhMDROCuKsAT7M1la402f9gMDU8q3ZvI+pvaKL6BYIzAELdcKQ+B5lsncO7
	kCKyd7PFinJ/gcPRciPEEIx7M3MW2vRBoDKK3EH0FTIf1zowLJivaZ401K6hAFopc6Ad
	+fM1v/4j3WcfQrSCw5uqqPsZV8vmKzZTnuBVst60aHp7Om35o7QT+RV+VADufeYnFM+V
	JZOg==
X-Received: by 10.224.19.199 with SMTP id c7mr15078632qab.78.1393495512768;
	Thu, 27 Feb 2014 02:05:12 -0800 (PST)
Received: from yakj.usersys.redhat.com (nat-pool-mxp-t.redhat.com.
	[209.132.186.18])
	by mx.google.com with ESMTPSA id z18sm12142910qab.5.2014.02.27.02.05.07
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 02:05:11 -0800 (PST)
Message-ID: <530F0DD2.5040402@redhat.com>
Date: Thu, 27 Feb 2014 11:05:06 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Arnd Bergmann <arnd@arndb.de>, 
	Christoffer Dall <christoffer.dall@linaro.org>
References: <20140226183454.GA14639@cbox>
	<CABGGisxHOVqLcG7hVAuAzdeic41KWSLLBSjQLSJQcjTXLhNCow@mail.gmail.com>
	<20140226222147.GE16149@cbox> <201402270830.16903.arnd@arndb.de>
In-Reply-To: <201402270830.16903.arnd@arndb.de>
X-Enigmail-Version: 1.6
Cc: Rob Herring <rob.herring@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	Peter Maydell <peter.maydell@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>, xen-devel@lists.xen.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Robie Basak <robie.basak@canonical.com>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org,
	Grant Likely <grant.likely@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 27/02/2014 08:30, Arnd Bergmann ha scritto:
> I guess the real question is whether we are interested in running Windows RT
> in VM guests. I don't personally expect MS to come out with a port for
> this spec, no matter what we do, but some of you may have information I don't.

Given enough firmware and driver support there's no reason why Windows 
and Linux should be any different---they certainly aren't on x86.

Right now Windows on ARM is just tablets and hence RT; but things might 
change for server-based ARM.  So Windows support as a VM should 
definitely be on the table, but so is writing custom drivers: you 
certainly will be able at some point to build virtio drivers for ARM, 
and use them with this spec.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 10:14:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 10:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIxzB-00021A-Sq; Thu, 27 Feb 2014 10:14:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WIxzA-000215-BK
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 10:14:36 +0000
Received: from [193.109.254.147:12371] by server-10.bemta-14.messagelabs.com
	id CF/58-10711-B001F035; Thu, 27 Feb 2014 10:14:35 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393496073!7215637!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n,
	ML_RADAR_SPEW_LINKS_8, spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12119 invoked from network); 27 Feb 2014 10:14:34 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 10:14:34 -0000
X-IronPort-AV: E=Sophos;i="4.97,553,1389744000"; 
	d="asc'?scan'208";a="106218743"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 10:14:32 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 05:14:32 -0500
Message-ID: <1393496069.3921.14.camel@Solace>
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Viktor Kleinik <viktor.kleinik@globallogic.com>
Date: Thu, 27 Feb 2014 11:14:29 +0100
In-Reply-To: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
Organization: Citrix
X-Mailer: Evolution 3.10.4 (3.10.4-1.fc20) 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Julien Grall <julien.grall@citrix.com>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5363317140000469027=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5363317140000469027==
Content-Type: multipart/signed; micalg=pgp-sha1;
	protocol="application/pgp-signature"; boundary="=-AtIgmIKWl7s4teTK/iYn"

--=-AtIgmIKWl7s4teTK/iYn
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> Hi all,
>=20
Hi,

> Does anyone know anything about future plans to implement
> xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
>
I think Arianna is working on an implementation of the former
(XEN_DOMCTL_memory_mapping), and she should be sending patches to this
list soon, isn't it so, Arianna?

Regards,
Dario

--=20
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


--=-AtIgmIKWl7s4teTK/iYn
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2.0.22 (GNU/Linux)

iEYEABECAAYFAlMPEAUACgkQk4XaBE3IOsS9igCffxF8EWRDzqFn/Vr7qSbeg/bx
O7MAoJWG9v2hkjMyeQXJRb/I7rk/1A8n
=9pSk
-----END PGP SIGNATURE-----

--=-AtIgmIKWl7s4teTK/iYn--


--===============5363317140000469027==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5363317140000469027==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 10:34:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 10:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIyIT-0002CW-Ex; Thu, 27 Feb 2014 10:34:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raghavendra.kt@linux.vnet.ibm.com>)
	id 1WIyIR-0002CR-VO
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 10:34:32 +0000
Received: from [85.158.137.68:55650] by server-14.bemta-3.messagelabs.com id
	D1/8D-08196-6B41F035; Thu, 27 Feb 2014 10:34:30 +0000
X-Env-Sender: raghavendra.kt@linux.vnet.ibm.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393497265!3021495!1
X-Originating-IP: [202.81.31.140]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0MCA9PiAzNjUwMjc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 693 invoked from network); 27 Feb 2014 10:34:29 -0000
Received: from e23smtp07.au.ibm.com (HELO e23smtp07.au.ibm.com) (202.81.31.140)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 10:34:29 -0000
Received: from /spool/local
	by e23smtp07.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<raghavendra.kt@linux.vnet.ibm.com>; Thu, 27 Feb 2014 20:34:24 +1000
Received: from d23dlp01.au.ibm.com (202.81.31.203)
	by e23smtp07.au.ibm.com (202.81.31.204) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Thu, 27 Feb 2014 20:34:22 +1000
Received: from d23relay05.au.ibm.com (d23relay05.au.ibm.com [9.190.235.152])
	by d23dlp01.au.ibm.com (Postfix) with ESMTP id 26E262CE8060
	for <xen-devel@lists.xenproject.org>;
	Thu, 27 Feb 2014 21:34:16 +1100 (EST)
Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139])
	by d23relay05.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1RAEJ7S65798292
	for <xen-devel@lists.xenproject.org>; Thu, 27 Feb 2014 21:14:20 +1100
Received: from d23av04.au.ibm.com (localhost [127.0.0.1])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1RAYBQU021853
	for <xen-devel@lists.xenproject.org>; Thu, 27 Feb 2014 21:34:14 +1100
Received: from [9.124.158.79] (codeblue.in.ibm.com [9.124.158.79])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1RAY4v0021780; Thu, 27 Feb 2014 21:34:05 +1100
Message-ID: <530F1605.9040001@linux.vnet.ibm.com>
Date: Thu, 27 Feb 2014 16:10:05 +0530
From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Organization: IBM
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Waiman Long <Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022710-0260-0000-0000-0000047172E8
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
 x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 08:44 PM, Waiman Long wrote:
> This patch adds a KVM init function to activate the unfair queue
> spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> option is selected.
>
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>   arch/x86/kernel/kvm.c |   17 +++++++++++++++++
>   1 files changed, 17 insertions(+), 0 deletions(-)
>
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 713f1b3..a489140 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
>   early_initcall(kvm_spinlock_init_jump);
>
>   #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +/*
> + * Enable unfair lock if running in a real para-virtualized environment
> + */
> +static __init int kvm_unfair_locks_init_jump(void)
> +{
> +	if (!kvm_para_available())
> +		return 0;
> +

kvm_kick_cpu_type() in patch 8 assumes that the host supports the kick
hypercall (KVM_HC_KICK_CPU).

I think we need an explicit check for
kvm_para_has_feature(KVM_FEATURE_PV_UNHALT) here;

otherwise things may break in the (unlikely) case of running a new
guest on an old host.


> +	static_key_slow_inc(&paravirt_unfairlocks_enabled);
> +	printk(KERN_INFO "KVM setup unfair spinlock\n");
> +
> +	return 0;
> +}
> +early_initcall(kvm_unfair_locks_init_jump);
> +#endif
>
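A minimal user-space sketch of the gating Raghavendra suggests, with the
kvm_para_* helpers mocked out (in a real guest kernel they come from
<asm/kvm_para.h> and query CPUID; the mock variables and the
unfair_locks_enabled flag below are stand-ins, not kernel API):

```c
/* Mocked stand-ins for the kernel's kvm_para_available() and
 * kvm_para_has_feature(); in a guest kernel these query CPUID leaves
 * exposed by the host. */
static int mock_para_available;
static unsigned int mock_features;
#define KVM_FEATURE_PV_UNHALT 7

static int kvm_para_available(void) { return mock_para_available; }
static int kvm_para_has_feature(int f) { return (mock_features >> f) & 1; }

/* Stand-in for static_key_slow_inc(&paravirt_unfairlocks_enabled). */
static int unfair_locks_enabled;

/* Sketch of kvm_unfair_locks_init_jump() with the extra feature check:
 * only enable the unfair lock when the host advertises PV_UNHALT, so a
 * new guest on an old host (no kick hypercall) keeps fair locks. */
static int kvm_unfair_locks_init(void)
{
    if (!kvm_para_available())
        return 0;
    if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
        return 0;               /* old host: no KVM_HC_KICK_CPU */
    unfair_locks_enabled = 1;
    return 0;
}
```

The point is only that the feature check sits next to the availability
check, in the same early initcall, rather than being discovered later at
kick time.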


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 10:46:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 10:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIyTs-0002M0-Sb; Thu, 27 Feb 2014 10:46:20 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WIyTr-0002Lv-JB
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 10:46:19 +0000
Received: from [85.158.137.68:31946] by server-17.bemta-3.messagelabs.com id
	25/A4-22569-A771F035; Thu, 27 Feb 2014 10:46:18 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393497977!3291624!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32343 invoked from network); 27 Feb 2014 10:46:17 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 10:46:17 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WIyTo-000Ead-Mo; Thu, 27 Feb 2014 10:46:16 +0000
Date: Thu, 27 Feb 2014 11:46:16 +0100
From: Tim Deegan <tim@xen.org>
To: Tamas K Lengyel <tamas.lengyel@zentific.com>
Message-ID: <20140227104616.GA53925@deinos.phlegethon.org>
References: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1391117656-992-1-git-send-email-tamas.lengyel@zentific.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: keir@xen.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] mem_event: Return previous value of
 CR0/CR3/CR4 on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 22:34 +0100 on 30 Jan (1391117656), Tamas K Lengyel wrote:
> This patch extends the information returned for CR0/CR3/CR4 register write events
> with the previous value of the register. The old value was already passed to the trap
> processing function, just never placed into the returned request. By returning
> this value, applications subscribing to the CR events obtain additional context about
> the event.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>

Applied, thanks.

Tim.

> ---
>  xen/arch/x86/hvm/hvm.c         |    4 ++++
>  xen/include/public/mem_event.h |    6 +++---
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..d46abf2 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4682,6 +4682,10 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
>          req.gla = gla;
>          req.gla_valid = 1;
>      }
> +    else
> +    {
> +        req.gla = old;
> +    }
>      
>      mem_event_put_request(d, &d->mem_event->access, &req);
>      
> diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
> index c9ed546..3831b41 100644
> --- a/xen/include/public/mem_event.h
> +++ b/xen/include/public/mem_event.h
> @@ -40,9 +40,9 @@
>  /* Reasons for the memory event request */
>  #define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
>  #define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
> -#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is CR0 value */
> -#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is CR3 value */
> -#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
> +#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
> +#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
> +#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
>  #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
>  #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
>  #define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
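With this change, for CR events the request carries the new value in gfn
and the previous value in gla. A purely illustrative decode helper (the
struct below is a reduced stand-in for mem_event_request_t in
xen/include/public/mem_event.h, and cr3_changed() is hypothetical, not
part of any Xen API):

```c
#include <stdint.h>

/* Reduced, illustrative subset of mem_event_request_t. */
struct cr_event {
    uint64_t gfn;       /* new CR value for the CR0/CR3/CR4 reasons */
    uint64_t gla;       /* previous CR value, after this patch */
    uint32_t reason;
};

#define MEM_EVENT_REASON_CR3 3

/* Return nonzero iff a CR3-write event actually changed the register,
 * by comparing the new value (gfn) against the previous one (gla). */
static int cr3_changed(const struct cr_event *req)
{
    if (req->reason != MEM_EVENT_REASON_CR3)
        return 0;
    return req->gfn != req->gla;
}
```

This is the kind of extra context the patch description refers to: a
subscriber can now distinguish a CR write that changed the value from a
rewrite of the same value.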

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 10:56:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 10:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIydO-0002VT-5h; Thu, 27 Feb 2014 10:56:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WIydL-0002VO-PH
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 10:56:08 +0000
Received: from [193.109.254.147:28044] by server-10.bemta-14.messagelabs.com
	id 4A/E3-10711-7C91F035; Thu, 27 Feb 2014 10:56:07 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393498566!1916652!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24055 invoked from network); 27 Feb 2014 10:56:06 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 10:56:06 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WIydJ-000Eko-D2; Thu, 27 Feb 2014 10:56:05 +0000
Date: Thu, 27 Feb 2014 11:56:05 +0100
From: Tim Deegan <tim@xen.org>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140227105605.GB53925@deinos.phlegethon.org>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 1/3] pvh: early return from
 hvm_hap_nested_page_fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:03 -0800 on 24 Feb (1393257835), Mukesh Rathor wrote:
> pvh does not support nested hvm at present. As such, return if pvh.

Nack, sorry.  

1: this is the nested pagefault (i.e. EPT/NPT) handler, not part of
the nested HVM code.

2: If there _is_ a problem with the interaction between nested HVM and
PVH, the right way to fix it is to enforce that they can't both be
enabled at the same time, and then make sure all the nested-HVM code
properly checks for being enabled.  It's not a good idea to scatter
PVH checks all over code for unrelated features.
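Tim's second point can be sketched as a single guard at enable time
(everything below is hypothetical shorthand, not Xen's actual domctl or
nested-HVM code; the struct and function names are invented for
illustration):

```c
/* Hypothetical per-domain flags standing in for Xen's real state. */
struct dom {
    int is_pvh;
    int nestedhvm_enabled;
};

/* Enforce the mutual exclusion once, where nested HVM is turned on,
 * instead of scattering is_pvh checks through unrelated handlers such
 * as hvm_hap_nested_page_fault(). */
static int nestedhvm_enable(struct dom *d)
{
    if (d->is_pvh)
        return -1;              /* refuse: PVH + nested HVM unsupported */
    d->nestedhvm_enabled = 1;
    return 0;
}
```

All downstream nested-HVM paths can then simply test
nestedhvm_enabled and never need to know about PVH.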

Cheers,

Tim.

> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  xen/arch/x86/hvm/hvm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..a4a3dcf 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>      int sharing_enomem = 0;
>      mem_event_request_t *req_ptr = NULL;
>  
> +    if ( is_pvh_vcpu(v) )
> +        return 0;
> +
>      /* On Nested Virtualization, walk the guest page table.
>       * If this succeeds, all is fine.
>       * If this fails, inject a nested page fault into the guest.
> -- 
> 1.8.3.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 10:56:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 10:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIydO-0002VT-5h; Thu, 27 Feb 2014 10:56:10 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WIydL-0002VO-PH
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 10:56:08 +0000
Received: from [193.109.254.147:28044] by server-10.bemta-14.messagelabs.com
	id 4A/E3-10711-7C91F035; Thu, 27 Feb 2014 10:56:07 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393498566!1916652!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24055 invoked from network); 27 Feb 2014 10:56:06 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 10:56:06 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WIydJ-000Eko-D2; Thu, 27 Feb 2014 10:56:05 +0000
Date: Thu, 27 Feb 2014 11:56:05 +0100
From: Tim Deegan <tim@xen.org>
To: Mukesh Rathor <mukesh.rathor@oracle.com>
Message-ID: <20140227105605.GB53925@deinos.phlegethon.org>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 1/3] pvh: early return from
 hvm_hap_nested_page_fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 17:03 -0800 on 24 Feb (1393257835), Mukesh Rathor wrote:
> pvh does not support nested hvm at present. As such, return if pvh.

Nack, sorry.  

1: this is the nested page fault (i.e. EPT/NPT) handler, not part of
the nested HVM code.

2: If there _is_ a problem with the interaction between nested HVM and
PVH, the right way to fix it is to enforce that they can't both be
enabled at the same time, and then make sure all the nested-HVM code
properly checks for being enabled.  It's not a good idea to scatter
PVH checks all over code for unrelated features.

Cheers,

Tim.

> Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> ---
>  xen/arch/x86/hvm/hvm.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 69f7e74..a4a3dcf 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>      int sharing_enomem = 0;
>      mem_event_request_t *req_ptr = NULL;
>  
> +    if ( is_pvh_vcpu(v) )
> +        return 0;
> +
>      /* On Nested Virtualization, walk the guest page table.
>       * If this succeeds, all is fine.
>       * If this fails, inject a nested page fault into the guest.
> -- 
> 1.8.3.1
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIysU-0002kN-QQ; Thu, 27 Feb 2014 11:11:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIysS-0002k7-Tk
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:11:45 +0000
Received: from [85.158.139.211:64611] by server-15.bemta-5.messagelabs.com id
	EB/9B-24395-07D1F035; Thu, 27 Feb 2014 11:11:44 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393499500!6606870!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31400 invoked from network); 27 Feb 2014 11:11:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 11:11:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106231830"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 11:11:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 06:11:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIysM-0000K7-SI; Thu, 27 Feb 2014 11:11:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 11:11:35 +0000
Message-ID: <1393499497-9162-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>
Subject: [Xen-devel]  [PATCH RFC 1/2] tools/libxc: Improved xc_{topology,
	numa}info functions.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two new functions provide a substantially easier-to-use API, where libxc
itself takes care of all the appropriate bounce buffering.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 tools/libxc/xc_misc.c |   85 +++++++++++++++++++++++++++++++++++++++++++++++++
 tools/libxc/xenctrl.h |   49 ++++++++++++++++++++++++++++
 2 files changed, 134 insertions(+)

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 3303454..4f672ce 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -195,6 +195,46 @@ int xc_topologyinfo(xc_interface *xch,
     return 0;
 }
 
+int xc_topologyinfo_bounced(xc_interface *xch,
+                            uint32_t *max_cpu_index,
+                            uint32_t *cpu_to_core,
+                            uint32_t *cpu_to_socket,
+                            uint32_t *cpu_to_node)
+{
+    int ret = -1;
+    size_t sz = sizeof(uint32_t) * (*max_cpu_index + 1);
+
+    DECLARE_HYPERCALL_BOUNCE(cpu_to_core, sz, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(cpu_to_socket, sz, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(cpu_to_node, sz, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_SYSCTL;
+
+    if ( xc_hypercall_bounce_pre(xch, cpu_to_core) ||
+         xc_hypercall_bounce_pre(xch, cpu_to_socket) ||
+         xc_hypercall_bounce_pre(xch, cpu_to_node) )
+        goto out;
+
+    sysctl.cmd = XEN_SYSCTL_topologyinfo;
+    sysctl.u.topologyinfo.max_cpu_index = *max_cpu_index;
+
+    set_xen_guest_handle(sysctl.u.topologyinfo.cpu_to_core, cpu_to_core);
+    set_xen_guest_handle(sysctl.u.topologyinfo.cpu_to_socket, cpu_to_socket);
+    set_xen_guest_handle(sysctl.u.topologyinfo.cpu_to_node, cpu_to_node);
+
+    ret = do_sysctl(xch, &sysctl);
+
+    if ( ret )
+        goto out;
+
+    *max_cpu_index = sysctl.u.topologyinfo.max_cpu_index;
+
+out:
+    xc_hypercall_bounce_post(xch, cpu_to_node);
+    xc_hypercall_bounce_post(xch, cpu_to_socket);
+    xc_hypercall_bounce_post(xch, cpu_to_core);
+    return ret;
+}
+
 int xc_numainfo(xc_interface *xch,
                 xc_numainfo_t *put_info)
 {
@@ -213,6 +253,51 @@ int xc_numainfo(xc_interface *xch,
     return 0;
 }
 
+int xc_numainfo_bounced(xc_interface *xch,
+                        uint32_t *max_node_index,
+                        uint64_t *node_to_memsize,
+                        uint64_t *node_to_memfree,
+                        uint32_t *node_to_node_distance)
+{
+    int ret = -1;
+    size_t mem_sz = sizeof(uint64_t) * (*max_node_index + 1);
+    size_t distance_sz = (sizeof(uint32_t) * (*max_node_index + 1) *
+                          (*max_node_index + 1));
+
+    DECLARE_HYPERCALL_BOUNCE(node_to_memsize, mem_sz,
+                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(node_to_memfree, mem_sz,
+                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_HYPERCALL_BOUNCE(node_to_node_distance, distance_sz,
+                             XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+    DECLARE_SYSCTL;
+
+    if ( xc_hypercall_bounce_pre(xch, node_to_memsize) ||
+         xc_hypercall_bounce_pre(xch, node_to_memfree) ||
+         xc_hypercall_bounce_pre(xch, node_to_node_distance) )
+        goto out;
+
+    sysctl.cmd = XEN_SYSCTL_numainfo;
+    sysctl.u.numainfo.max_node_index = *max_node_index;
+
+    set_xen_guest_handle(sysctl.u.numainfo.node_to_memsize, node_to_memsize);
+    set_xen_guest_handle(sysctl.u.numainfo.node_to_memfree, node_to_memfree);
+    set_xen_guest_handle(sysctl.u.numainfo.node_to_node_distance,
+                         node_to_node_distance);
+
+    ret = do_sysctl(xch, &sysctl);
+
+    if ( ret )
+        goto out;
+
+    *max_node_index = sysctl.u.numainfo.max_node_index;
+
+out:
+    xc_hypercall_bounce_post(xch, node_to_node_distance);
+    xc_hypercall_bounce_post(xch, node_to_memfree);
+    xc_hypercall_bounce_post(xch, node_to_memsize);
+    return ret;
+}
 
 int xc_sched_id(xc_interface *xch,
                 int *sched_id)
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 13f816b..50126ae 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1144,9 +1144,58 @@ typedef uint64_t xc_node_to_memfree_t;
 typedef uint32_t xc_node_to_node_dist_t;
 
 int xc_physinfo(xc_interface *xch, xc_physinfo_t *info);
+
+/* Query Xen for the cpu topology information.  The caller is responsible for
+ * ensuring correct hypercall buffering. */
 int xc_topologyinfo(xc_interface *xch, xc_topologyinfo_t *info);
+
+/**
+ * Query Xen for the cpu topology information.  The library shall ensure
+ * correct bounce buffering is performed.
+ *
+ * The following parameters behave exactly as described in Xen's public
+ * sysctl.h.  Arrays may be NULL if the information is not wanted.
+ *
+ * Each array should have (max_cpu_index + 1) elements.
+ *
+ * @param [in/out] max_cpu_index
+ * @param [out] cpu_to_core
+ * @param [out] cpu_to_socket
+ * @param [out] cpu_to_node
+ * @returns 0 on success, -1 and sets errno on error.
+ */
+int xc_topologyinfo_bounced(xc_interface *xch,
+                            uint32_t *max_cpu_index,
+                            uint32_t *cpu_to_core,
+                            uint32_t *cpu_to_socket,
+                            uint32_t *cpu_to_node);
+
+/* Query Xen for the memory NUMA information.  The caller is responsible for
+ * ensuring correct hypercall buffering. */
 int xc_numainfo(xc_interface *xch, xc_numainfo_t *info);
 
+/**
+ * Query Xen for the memory NUMA information.  The library shall ensure
+ * correct bounce buffering is performed.
+ *
+ * The following parameters behave exactly as described in Xen's public
+ * sysctl.h.  Arrays may be NULL if the information is not wanted.
+ *
+ * node_to_mem{size,free} should have (max_node_index + 1) elements
+ * node_to_node_distance should have (max_node_index + 1)^2 elements
+ *
+ * @param [in/out] max_node_index
+ * @param [out] node_to_memsize
+ * @param [out] node_to_memfree
+ * @param [out] node_to_node_distance
+ * @returns 0 on success, -1 and sets errno on error.
+ */
+int xc_numainfo_bounced(xc_interface *xch,
+                        uint32_t *max_node_index,
+                        uint64_t *node_to_memsize,
+                        uint64_t *node_to_memfree,
+                        uint32_t *node_to_node_distance);
+
 int xc_sched_id(xc_interface *xch,
                 int *sched_id);
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIysS-0002k0-2z; Thu, 27 Feb 2014 11:11:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIysR-0002ju-0e
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:11:43 +0000
Received: from [85.158.139.211:2021] by server-15.bemta-5.messagelabs.com id
	12/8B-24395-E6D1F035; Thu, 27 Feb 2014 11:11:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393499500!6606870!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31217 invoked from network); 27 Feb 2014 11:11:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 11:11:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106231827"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 11:11:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 06:11:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIysM-0000K7-Qd; Thu, 27 Feb 2014 11:11:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 11:11:34 +0000
Message-ID: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel]  [PATCH RFC 0/2] Support for hwloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two patches have been developed in combination with a Xen plugin for
hwloc, to allow a toolstack domain to gather the system topology rather than
the virtual topology.

An experimental hwloc branch can be found here:

   git://xenbits.xen.org/people/andrewcoop/hwloc.git hwloc-xen-topology-v5
   http://xenbits.xen.org/gitweb/?p=people/andrewcoop/hwloc.git;a=shortlog;h=refs/heads/hwloc-xen-topology-v5

and depends on these two patches to function.

Patch 1 extends the xc_{topology,numa}info() functions with versions which
perform correct hypercall bounce buffering.

Patch 2 introduces XEN_SYSCTL_cpuid, which allows a toolstack to execute
arbitrary cpuid instructions on specific physical cpus.  hwloc uses this to
enumerate the cpuid cache leaves without resorting to tactics of pinning vcpus
to specific pcpus and running native cpuid.

Comments welcome,

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIysU-0002kG-Ey; Thu, 27 Feb 2014 11:11:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIysS-0002jz-7z
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:11:44 +0000
Received: from [85.158.139.211:2157] by server-16.bemta-5.messagelabs.com id
	99/69-05060-F6D1F035; Thu, 27 Feb 2014 11:11:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393499500!6606870!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31315 invoked from network); 27 Feb 2014 11:11:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 11:11:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106231828"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 11:11:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 06:11:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIysM-0000K7-Sh; Thu, 27 Feb 2014 11:11:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 11:11:36 +0000
Message-ID: <1393499497-9162-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH RFC 2/2] SYSCTL subop to execute cpuid on a
	specified pcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/xc_misc.c       |   25 +++++++++++++++++++++++++
 tools/libxc/xenctrl.h       |    7 +++++++
 xen/arch/x86/sysctl.c       |   30 ++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h |    9 +++++++++
 4 files changed, 71 insertions(+)

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 4f672ce..48ef33e 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -774,6 +774,31 @@ int xc_hvm_inject_trap(
     return rc;
 }
 
+int xc_xen_cpuid(xc_interface *xch, uint32_t cpu, uint32_t *eax,
+                 uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
+{
+    int ret;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_cpuid;
+    sysctl.u.cpuid.cpu = cpu;
+    sysctl.u.cpuid.eax = *eax;
+    sysctl.u.cpuid.ebx = *ebx;
+    sysctl.u.cpuid.ecx = *ecx;
+    sysctl.u.cpuid.edx = *edx;
+
+    ret = do_sysctl(xch, &sysctl);
+    if ( ret )
+        return ret;
+
+    *eax = sysctl.u.cpuid.eax;
+    *ebx = sysctl.u.cpuid.ebx;
+    *ecx = sysctl.u.cpuid.ecx;
+    *edx = sysctl.u.cpuid.edx;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 50126ae..a714156 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1801,6 +1801,13 @@ int xc_hvm_inject_trap(
     uint64_t cr2);
 
 /*
+ * Run the cpuid instruction on a specified physical cpu with the given
+ * parameters.
+ */
+int xc_xen_cpuid(xc_interface *xch, uint32_t cpu, uint32_t *eax,
+                 uint32_t *ebx, uint32_t *ecx, uint32_t *edx);
+
+/*
  *  LOGGING AND ERROR REPORTING
  */
 
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..dbad1ee 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -66,6 +66,18 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+void sysctl_cpuid_helper(void *data)
+{
+    struct xen_sysctl_cpuid *cpuid = data;
+
+    ASSERT(smp_processor_id() == cpuid->cpu);
+    asm volatile ("cpuid"
+                  : "=a" (cpuid->eax), "=b" (cpuid->ebx),
+                    "=c" (cpuid->ecx), "=d" (cpuid->edx)
+                  : "a" (cpuid->eax), "b" (cpuid->ebx),
+                    "c" (cpuid->ecx), "d" (cpuid->edx) );
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +113,24 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_cpuid:
+    {
+        int cpu = sysctl->u.cpuid.cpu;
+
+        if ( cpu == smp_processor_id() )
+            sysctl_cpuid_helper(&sysctl->u.cpuid);
+        else if ( cpu >= nr_cpu_ids || !cpu_online(cpu) )
+            ret = -EINVAL;
+        else
+            on_selected_cpus(cpumask_of(cpu),
+                             sysctl_cpuid_helper,
+                             &sysctl->u.cpuid, 1);
+
+        if ( !ret && __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..afa8a73 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -633,6 +633,13 @@ typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
 
+struct xen_sysctl_cpuid {
+    uint32_t cpu; /* IN - Pcpu to execute on */
+    uint32_t eax, ebx, ecx, edx; /* IN/OUT - Parameters to `cpuid` */
+};
+typedef struct xen_sysctl_cpuid xen_sysctl_cpuid_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpuid_t);
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -654,6 +661,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_cpuid                         21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +683,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_cpuid             cpuid;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIysU-0002kG-Ey; Thu, 27 Feb 2014 11:11:46 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIysS-0002jz-7z
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:11:44 +0000
Received: from [85.158.139.211:2157] by server-16.bemta-5.messagelabs.com id
	99/69-05060-F6D1F035; Thu, 27 Feb 2014 11:11:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393499500!6606870!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31315 invoked from network); 27 Feb 2014 11:11:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 11:11:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106231828"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 11:11:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 06:11:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIysM-0000K7-Sh; Thu, 27 Feb 2014 11:11:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 11:11:36 +0000
Message-ID: <1393499497-9162-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH RFC 2/2] SYSCTL subop to execute cpuid on a
	specified pcpu
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/libxc/xc_misc.c       |   25 +++++++++++++++++++++++++
 tools/libxc/xenctrl.h       |    7 +++++++
 xen/arch/x86/sysctl.c       |   30 ++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h |    9 +++++++++
 4 files changed, 71 insertions(+)

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 4f672ce..48ef33e 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -774,6 +774,31 @@ int xc_hvm_inject_trap(
     return rc;
 }
 
+int xc_xen_cpuid(xc_interface *xch, uint32_t cpu, uint32_t *eax,
+                 uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
+{
+    int ret;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_cpuid;
+    sysctl.u.cpuid.cpu = cpu;
+    sysctl.u.cpuid.eax = *eax;
+    sysctl.u.cpuid.ebx = *ebx;
+    sysctl.u.cpuid.ecx = *ecx;
+    sysctl.u.cpuid.edx = *edx;
+
+    ret = do_sysctl(xch, &sysctl);
+    if ( ret )
+        return ret;
+
+    *eax = sysctl.u.cpuid.eax;
+    *ebx = sysctl.u.cpuid.ebx;
+    *ecx = sysctl.u.cpuid.ecx;
+    *edx = sysctl.u.cpuid.edx;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 50126ae..a714156 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1801,6 +1801,13 @@ int xc_hvm_inject_trap(
     uint64_t cr2);
 
 /*
+ * Run the cpuid instruction on a specified physical cpu with the given
+ * parameters.
+ */
+int xc_xen_cpuid(xc_interface *xch, uint32_t cpu, uint32_t *eax,
+                 uint32_t *ebx, uint32_t *ecx, uint32_t *edx);
+
+/*
  *  LOGGING AND ERROR REPORTING
  */
 
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..dbad1ee 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -66,6 +66,18 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+void sysctl_cpuid_helper(void *data)
+{
+    struct xen_sysctl_cpuid *cpuid = data;
+
+    ASSERT(smp_processor_id() == cpuid->cpu);
+    asm volatile ("cpuid"
+                  : "=a" (cpuid->eax), "=b" (cpuid->ebx),
+                    "=c" (cpuid->ecx), "=d" (cpuid->edx)
+                  : "a" (cpuid->eax), "b" (cpuid->ebx),
+                    "c" (cpuid->ecx), "d" (cpuid->edx) );
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +113,24 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_cpuid:
+    {
+        int cpu = sysctl->u.cpuid.cpu;
+
+        if ( cpu == smp_processor_id() )
+            sysctl_cpuid_helper(&sysctl->u.cpuid);
+        else if ( cpu >= nr_cpu_ids || !cpu_online(cpu) )
+            ret = -EINVAL;
+        else
+            on_selected_cpus(cpumask_of(cpu),
+                             sysctl_cpuid_helper,
+                             &sysctl->u.cpuid, 1);
+
+        if ( !ret && __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..afa8a73 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -633,6 +633,13 @@ typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
 
+struct xen_sysctl_cpuid {
+    uint32_t cpu; /* IN - Pcpu to execute on */
+    uint32_t eax, ebx, ecx, edx; /* IN/OUT - Parameters to `cpuid` */
+};
+typedef struct xen_sysctl_cpuid xen_sysctl_cpuid_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpuid_t);
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -654,6 +661,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_cpuid                         21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +683,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_cpuid             cpuid;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIysS-0002k0-2z; Thu, 27 Feb 2014 11:11:44 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIysR-0002ju-0e
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:11:43 +0000
Received: from [85.158.139.211:2021] by server-15.bemta-5.messagelabs.com id
	12/8B-24395-E6D1F035; Thu, 27 Feb 2014 11:11:42 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393499500!6606870!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31217 invoked from network); 27 Feb 2014 11:11:41 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 11:11:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106231827"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 11:11:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 06:11:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIysM-0000K7-Qd; Thu, 27 Feb 2014 11:11:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 11:11:34 +0000
Message-ID: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel]  [PATCH RFC 0/2] Support for hwloc
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These two patches have been developed in combination with a Xen plugin for
hwloc, to allow a toolstack domain to gather the real system topology rather
than its own virtual topology.

An experimental hwloc branch can be found here:

   git://xenbits.xen.org/people/andrewcoop/hwloc.git hwloc-xen-topology-v5
   http://xenbits.xen.org/gitweb/?p=people/andrewcoop/hwloc.git;a=shortlog;h=refs/heads/hwloc-xen-topology-v5

and depends on these two patches to function.

Patch 1 extends the xc_{topology,numa}info() functions with versions which
perform correct hypercall bounce buffering.

Patch 2 introduces XEN_SYSCTL_cpuid, which allows a toolstack to execute
arbitrary cpuid instructions on specific physical cpus.  hwloc uses this to
enumerate the cpuid cache leaves without resorting to pinning vcpus to
specific pcpus and running native cpuid.

Comments welcome,

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:12:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIysZ-0002ky-6r; Thu, 27 Feb 2014 11:11:51 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIysX-0002kq-Sy
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:11:50 +0000
Received: from [85.158.139.211:23059] by server-7.bemta-5.messagelabs.com id
	29/89-14867-57D1F035; Thu, 27 Feb 2014 11:11:49 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393499502!6606881!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31455 invoked from network); 27 Feb 2014 11:11:43 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 11:11:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106231831"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 11:11:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 06:11:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WIysM-0000K7-TC; Thu, 27 Feb 2014 11:11:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 11:11:37 +0000
Message-ID: <1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid
	hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

which permits a toolstack to execute an arbitrary cpuid instruction on a
specified physical cpu.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Ian Campbell <Ian.Campbell@citrix.com>
CC: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 tools/libxc/xc_misc.c       |   25 +++++++++++++++++++++++++
 tools/libxc/xenctrl.h       |    7 +++++++
 xen/arch/x86/sysctl.c       |   30 ++++++++++++++++++++++++++++++
 xen/include/public/sysctl.h |    9 +++++++++
 4 files changed, 71 insertions(+)

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 4f672ce..48ef33e 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -774,6 +774,31 @@ int xc_hvm_inject_trap(
     return rc;
 }
 
+int xc_xen_cpuid(xc_interface *xch, uint32_t cpu, uint32_t *eax,
+                 uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
+{
+    int ret;
+    DECLARE_SYSCTL;
+
+    sysctl.cmd = XEN_SYSCTL_cpuid;
+    sysctl.u.cpuid.cpu = cpu;
+    sysctl.u.cpuid.eax = *eax;
+    sysctl.u.cpuid.ebx = *ebx;
+    sysctl.u.cpuid.ecx = *ecx;
+    sysctl.u.cpuid.edx = *edx;
+
+    ret = do_sysctl(xch, &sysctl);
+    if ( ret )
+        return ret;
+
+    *eax = sysctl.u.cpuid.eax;
+    *ebx = sysctl.u.cpuid.ebx;
+    *ecx = sysctl.u.cpuid.ecx;
+    *edx = sysctl.u.cpuid.edx;
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
index 50126ae..a714156 100644
--- a/tools/libxc/xenctrl.h
+++ b/tools/libxc/xenctrl.h
@@ -1801,6 +1801,13 @@ int xc_hvm_inject_trap(
     uint64_t cr2);
 
 /*
+ * Run the cpuid instruction on a specified physical cpu with the given
+ * parameters.
+ */
+int xc_xen_cpuid(xc_interface *xch, uint32_t cpu, uint32_t *eax,
+                 uint32_t *ebx, uint32_t *ecx, uint32_t *edx);
+
+/*
  *  LOGGING AND ERROR REPORTING
  */
 
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 15d4b91..dbad1ee 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -66,6 +66,18 @@ void arch_do_physinfo(xen_sysctl_physinfo_t *pi)
         pi->capabilities |= XEN_SYSCTL_PHYSCAP_hvm_directio;
 }
 
+void sysctl_cpuid_helper(void *data)
+{
+    struct xen_sysctl_cpuid *cpuid = data;
+
+    ASSERT(smp_processor_id() == cpuid->cpu);
+    asm volatile ("cpuid"
+                  : "=a" (cpuid->eax), "=b" (cpuid->ebx),
+                    "=c" (cpuid->ecx), "=d" (cpuid->edx)
+                  : "a" (cpuid->eax), "b" (cpuid->ebx),
+                    "c" (cpuid->ecx), "d" (cpuid->edx) );
+}
+
 long arch_do_sysctl(
     struct xen_sysctl *sysctl, XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
 {
@@ -101,6 +113,24 @@ long arch_do_sysctl(
     }
     break;
 
+    case XEN_SYSCTL_cpuid:
+    {
+        int cpu = sysctl->u.cpuid.cpu;
+
+        if ( cpu == smp_processor_id() )
+            sysctl_cpuid_helper(&sysctl->u.cpuid);
+        else if ( cpu >= nr_cpu_ids || !cpu_online(cpu) )
+            ret = -EINVAL;
+        else
+            on_selected_cpus(cpumask_of(cpu),
+                             sysctl_cpuid_helper,
+                             &sysctl->u.cpuid, 1);
+
+        if ( !ret && __copy_to_guest(u_sysctl, sysctl, 1) )
+            ret = -EFAULT;
+    }
+    break;
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 8437d31..afa8a73 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -633,6 +633,13 @@ typedef struct xen_sysctl_coverage_op xen_sysctl_coverage_op_t;
 DEFINE_XEN_GUEST_HANDLE(xen_sysctl_coverage_op_t);
 
 
+struct xen_sysctl_cpuid {
+    uint32_t cpu; /* IN - Pcpu to execute on */
+    uint32_t eax, ebx, ecx, edx; /* IN/OUT - Parameters to `cpuid` */
+};
+typedef struct xen_sysctl_cpuid xen_sysctl_cpuid_t;
+DEFINE_XEN_GUEST_HANDLE(xen_sysctl_cpuid_t);
+
 struct xen_sysctl {
     uint32_t cmd;
 #define XEN_SYSCTL_readconsole                    1
@@ -654,6 +661,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_cpupool_op                    18
 #define XEN_SYSCTL_scheduler_op                  19
 #define XEN_SYSCTL_coverage_op                   20
+#define XEN_SYSCTL_cpuid                         21
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
         struct xen_sysctl_readconsole       readconsole;
@@ -675,6 +683,7 @@ struct xen_sysctl {
         struct xen_sysctl_cpupool_op        cpupool_op;
         struct xen_sysctl_scheduler_op      scheduler_op;
         struct xen_sysctl_coverage_op       coverage_op;
+        struct xen_sysctl_cpuid             cpuid;
         uint8_t                             pad[128];
     } u;
 };
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 11:37:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:37:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzGf-0003MQ-IE; Thu, 27 Feb 2014 11:36:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <robie.basak@canonical.com>) id 1WIzGA-0003M2-FD
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:36:14 +0000
Received: from [85.158.139.211:7616] by server-13.bemta-5.messagelabs.com id
	74/1A-18801-D232F035; Thu, 27 Feb 2014 11:36:13 +0000
X-Env-Sender: robie.basak@canonical.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393500972!6632500!1
X-Originating-IP: [91.189.89.112]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30023 invoked from network); 27 Feb 2014 11:36:13 -0000
Received: from youngberry.canonical.com (HELO youngberry.canonical.com)
	(91.189.89.112) by server-4.tower-206.messagelabs.com with SMTP;
	27 Feb 2014 11:36:13 -0000
Received: from mal.justgohome.co.uk ([217.169.28.211] helo=localhost)
	by youngberry.canonical.com with esmtpsa
	(TLS1.0:DHE_RSA_AES_128_CBC_SHA1:16) (Exim 4.71)
	(envelope-from <robie.basak@canonical.com>)
	id 1WIzG1-0003Ly-De; Thu, 27 Feb 2014 11:36:05 +0000
Date: Thu, 27 Feb 2014 11:36:04 +0000
From: Robie Basak <robie.basak@canonical.com>
To: Rob Herring <robherring2@gmail.com>
Message-ID: <20140227113604.GC643@mal.justgohome.co.uk>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<CAL_Jsq+3jBdca9zVnxHG+7pxy2_316emsP=PB3WkTMQP0JYBYQ@mail.gmail.com>
	<CAFEAcA8u6u8Hh=qn8XE2Evc93_-pkZPSDSShm-eiSn-vcsYgKA@mail.gmail.com>
	<CAL_JsqJ+mXadxLHqFw6uXvM9=juC-emKT7G-d-_PLYbXNSX7Ew@mail.gmail.com>
MIME-Version: 1.0
In-Reply-To: <CAL_JsqJ+mXadxLHqFw6uXvM9=juC-emKT7G-d-_PLYbXNSX7Ew@mail.gmail.com>
X-Mailman-Approved-At: Thu, 27 Feb 2014 11:36:44 +0000
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2772234824786825178=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--===============2772234824786825178==
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="DKU6Jbt7q3WqK7+M"
Content-Disposition: inline


--DKU6Jbt7q3WqK7+M
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Wed, Feb 26, 2014 at 05:08:38PM -0600, Rob Herring wrote:
> > What does the 8250 have to recommend it over just providing
> > the PL011?
>
> As I mentioned, it will just work for anything that expects the serial
> port to be ttyS0 as on x86 rather than ttyAMA0. Really, I'd like to
> see ttyAMA go away, but evidently that's not an easily fixed issue and
> it is an ABI.

Kernel parameters (eg. console=...) are going to be embedded inside the
guest VM disk image inside a grub configuration or something like that,
right? Is there enough here to ensure that I can ship a guest VM disk
image that will correctly work (ie. use the right device) for both
kernel console output and userspace getty, regardless of what the host
is? Reading your post I realise that I'm not sure what I need to do to
make this work, since the device may be different. Can something be
recommended as part of the spec for this?
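
On the question above, one plausible approach (a sketch only -- the spec
does not currently recommend this, and the device names here are my
assumptions): the kernel accepts several console= parameters, skips
devices that do not exist on the running host, and uses the last one
given as /dev/console, so a guest image could list both UARTs in its
GRUB configuration:

```shell
# Hypothetical /etc/default/grub fragment for a guest image meant to
# boot under hosts exposing either a 16550 (ttyS0) or a PL011 (ttyAMA0).
# The kernel attaches only to console devices that actually exist.
GRUB_CMDLINE_LINUX_DEFAULT="console=ttyS0,115200 console=ttyAMA0,115200"
```

Userspace getty would still need handling (e.g. systemd spawning
serial-getty on the active console device), so this is only a partial
answer, not a spec-level one.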

--DKU6Jbt7q3WqK7+M
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: Digital signature

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.14 (GNU/Linux)

iQIcBAEBCgAGBQJTDyMjAAoJEOVkucJ1vdUuvNQQALgmVkvdVM+3KrwCgGeul5Vy
TCuMJplIjVGhBwH4d8Y1czEaxF4wLL3bwUuEHeUhVgP0PbtD9wjIrLevJm1GaquW
FqYjtd9zyRCBSdQ0t+VLtJT2QThJy5GMTK4AjNVv/F2MYSaSJ5xVecDZzN12hbOR
I8/d98AU5E2RdLnBjmCkxKiukXZgIlt6vKxxMEtS4K3ikhxJoUqQCvab3VgJKPAP
6yJveqYaS7UdCxxutRCxyZjQttTUoW7TpZzOcjphiFLn91BVqhJUr9igZLI7JCgl
yWJ41dUoy5Y2cUbLeCre9brwwfGsbbI5oCI+qOwgV/zPb9mqkva5o7i+89oVUHbC
rclVclxsLexKinWn4xXIQIOeMdx8fFsaIAY6ZjVZtt5EWi/N/fBqXE9k51tBQFOS
QXwzbQOExdX6P7oJuJLe1CURunrr2awS4rDLUMY3D2HHjGzwGXOTwljTw+08AHfX
h1sZNNpJA6O49oXjzBqhCDpPxyakERzR+C8XOrtShuj1Cfuccak2Uefq4H9+yOHR
SqGbkxDAtngkKT15PTLCetLVNXt+jdR18iobNn9Fzz5HNvUur11DJ8L1R6fgkUG+
2LYs8S44Ye6dNeASa+3/SaUn0h5yeOq3CWek+qvrzvq8FGPvcv5Pmx11HtMNqqXD
QV+Bnq7Yn8K2AlE7ttKB
=ciYc
-----END PGP SIGNATURE-----

--DKU6Jbt7q3WqK7+M--


--===============2772234824786825178==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2772234824786825178==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 11:42:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzLj-0003XI-HE; Thu, 27 Feb 2014 11:41:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIzLh-0003XA-7y
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:41:57 +0000
Received: from [193.109.254.147:9142] by server-4.bemta-14.messagelabs.com id
	83/5D-32066-4842F035; Thu, 27 Feb 2014 11:41:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393501314!3451210!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15116 invoked from network); 27 Feb 2014 11:41:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 11:41:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 11:41:58 +0000
Message-Id: <530F329D020000780011FCE9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 11:42:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part8DBFDC9D.1__="
Cc: Yuriy Bulygin <yuriy.bulygin@intel.com>,
	Asit K Mallick <asit.k.mallick@intel.com>, Susie Li <susie.li@intel.com>,
	Yong Y Wang <yong.y.wang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Will Auld <will.auld@intel.com>
Subject: Re: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part8DBFDC9D.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 26.02.14 at 05:16, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Q1: is the PCI IDs list (0x3400 ...) of root port a complete list? Jan
> got it from a disclosure that Intel made to him, by now well over two
> years ago --> Any update about the list?
> [Asit]: There is no update to this list. This was provided in 2011 and
> included the IDs prior to being fixed.

Very interesting. While hunting down the data sheets for these IDs,
I found the Xeon C5500/C3500 and Xeon E5 v2 ones, which add new
sets (370x/3720 and 0e0x).

> Q3: the "... without AER capability?" warning triggers on Jan's systems
> --> is it an issue? or, how to handle it properly?
> [Asit] BIOS can have an option to not expose the AER capability. It
> would be good to check the BIOS setup options. The error reporting
> should be masked, so no action is needed.
> [Yuriy] I expanded the answer to Q3 vs. what's in the attached email
> after we found out that when the root port is operating in DMI mode,
> the AER ext. capability is not in the chain of ext. capability headers.
> Please use this one instead.
> Answer to Q3:
> On a Romley system (DID 0x3c00 ... 0x3c0b), for Host bridge BDF=00:00.0,
> when the root port is operating as DMI, the AER extended capability is
> defined in the VSHDR (Vendor Specific Header) configuration register
> (offset 0x148). It should have value 0x0004.
> After pci_find_ext_capability, if it didn't find the AER capability,
> for BDF=00:00.0 the patch would need to check whether the VSHDR
> register has value 0x0004 in bits [15:0].

And it also needs to check for version 1 afaict - on that same Romley
system, 00:00.0 has another one with version 2 at 0x280 (and
80:00.0 also has such, but is running in PCIe mode).

The E5 v2 seem to be using ID 5 version 3 instead - sort of
confusing.
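
To make the check described above concrete, here is a minimal C sketch
of the header test (the helper names are mine; the bit layout follows
the VSHDR description quoted above and the PCI_VNDR_HEADER_* macros in
the attached patch -- reading real config space is out of scope here):

```c
#include <assert.h>
#include <stdint.h>

/* Vendor Specific Header dword layout (per the description above):
 *   bits [15:0]  - VSEC ID (0x0004 marks the hidden AER block)
 *   bits [19:16] - VSEC revision
 *   bits [31:20] - VSEC length
 */
static inline unsigned vshdr_id(uint32_t v)  { return v & 0xffff; }
static inline unsigned vshdr_rev(uint32_t v) { return (v >> 16) & 0xf; }

/* Does this VSEC header announce the DMI-mode AER registers?
 * Both ID 4 and revision 1 must match, as noted above (the rev-2
 * instance at 0x280 must be skipped). */
static int is_dmi_aer_vsec(uint32_t hdr)
{
    return vshdr_id(hdr) == 4 && vshdr_rev(hdr) == 1;
}
```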

> Q4: the patches have no way of handling future chipsets (yet we also
> have no indication that future chipsets would not exhibit the same bad
> behavior) --> thoughts?
> [Jinsong] IMHO handle future chipsets case by case.

I.e. we would never be able to fully put this XSA to rest? Rather
undesirable, I would think.

> BTW, some other information from Yuriy:
> VT-d-mask-UR-host-bridge.patch:
> 1. The workaround is only applicable to the host bridge device 00:00.0
> (DMIBAR does not exist for other devices). The patch is written
> generically for any PCIe device/bridge.

Rather than hard coding 00:00.0, should this then - if indeed
sufficient - perhaps simply be checking whether the bridge is a host
one? Would it perhaps even make sense to generalize this (i.e. not to
look for particular device IDs)?

And of course an even better approach would likely be to do all of
this only for the root ports handling devices actually getting passed
to guests. That would also address the issue of the current patch
not generally dealing with PCI segments other than 0 (due to
pci_vtd_quirk() being called at boot time only). But that would
make the workaround more fragile due to relying on more topology
information.

While going through the data sheets again, I also began to wonder
whether LER_{XP,}UNCERRMSK might not also need the respective
bits getting set. Sadly the data sheets don't have any detail on
what signaling LER events involves (i.e. namely whether these can
be fatal in any way to the host).

Anyway, attached the updated patch with the uncontroversial
feedback and the information from the other data sheets I found
integrated.

Jan


--=__Part8DBFDC9D.1__=
Content-Type: text/plain; name="VT-d-mask-UR-root-port.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="VT-d-mask-UR-root-port.patch"

--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -390,12 +390,62 @@ void __init pci_vtd_quirk(struct pci_dev
     int bus = pdev->bus;
     int dev = PCI_SLOT(pdev->devfn);
     int func = PCI_FUNC(pdev->devfn);
-    int id, val;
+    int pos;
+    u32 val;
 
-    id = pci_conf_read32(seg, bus, dev, func, 0);
-    if ( id == 0x342e8086 || id == 0x3c288086 )
+    if ( pci_conf_read16(seg, bus, dev, func, PCI_VENDOR_ID) != 0x8086 )
+        return;
+
+    switch ( pci_conf_read16(seg, bus, dev, func, PCI_DEVICE_ID) )
     {
+    case 0x342e:
+    case 0x3c28:
         val = pci_conf_read32(seg, bus, dev, func, 0x1AC);
         pci_conf_write32(seg, bus, dev, func, 0x1AC, val | (1 << 31));
+        break;
+
+    case 0x3400 ... 0x3411: case 0x3420:
+    case 0x3c00 ... 0x3c0b:
+    case 0x3700 ... 0x3707: case 0x3720:
+    case 0x0e00: case 0x0e01: case 0x0e04 ... 0x0e0b:
+        pos = pci_find_ext_capability(seg, bus, pdev->devfn,
+                                      PCI_EXT_CAP_ID_ERR);
+        if ( !pos )
+        {
+            pos = pci_find_ext_capability(seg, bus, pdev->devfn,
+                                          PCI_EXT_CAP_ID_VNDR);
+            while ( pos )
+            {
+                val = pci_conf_read32(seg, bus, dev, func, pos + PCI_VNDR_HEADER);
+                if ( PCI_VNDR_HEADER_ID(val) == 4 && PCI_VNDR_HEADER_REV(val) == 1 )
+                {
+                    pos += PCI_VNDR_HEADER;
+                    break;
+                }
+                pos = pci_find_next_ext_capability(seg, bus, pdev->devfn, pos,
+                                                   PCI_EXT_CAP_ID_VNDR);
+            }
+        }
+        if ( !pos )
+        {
+            printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
+                   seg, bus, dev, func);
+            break;
+        }
+
+        val = pci_conf_read32(seg, bus, dev, func, pos + PCI_ERR_UNCOR_MASK);
+        pci_conf_write32(seg, bus, dev, func, pos + PCI_ERR_UNCOR_MASK,
+                         val | PCI_ERR_UNC_UNSUP);
+        val = pci_conf_read32(seg, bus, dev, func, pos + PCI_ERR_COR_MASK);
+        pci_conf_write32(seg, bus, dev, func, pos + PCI_ERR_COR_MASK,
+                         val | PCI_ERR_COR_ADV_NFAT);
+
+        /* XPUNCERRMSK Send Completion with Unsupported Request */
+        val = pci_conf_read32(seg, bus, dev, func, 0x20c);
+        pci_conf_write32(seg, bus, dev, func, 0x20c, val | (1 << 4));
+
+        printk(XENLOG_INFO "Masked UR signaling on %04x:%02x:%02x.%u\n",
+               seg, bus, dev, func);
+        break;
     }
 }
--- a/xen/drivers/pci/pci.c
+++ b/xen/drivers/pci/pci.c
@@ -66,23 +66,33 @@ int pci_find_next_cap(u16 seg, u8 bus, u
 
 /**
  * pci_find_ext_capability - Find an extended capability
- * @dev: PCI device to query
+ * @seg/@bus/@devfn: PCI device to query
  * @cap: capability code
  *
  * Returns the address of the requested extended capability structure
  * within the device's PCI configuration space or 0 if the device does
- * not support it.  Possible values for @cap:
- *
- *  %PCI_EXT_CAP_ID_ERR         Advanced Error Reporting
- *  %PCI_EXT_CAP_ID_VC          Virtual Channel
- *  %PCI_EXT_CAP_ID_DSN         Device Serial Number
- *  %PCI_EXT_CAP_ID_PWR         Power Budgeting
+ * not support it.
  */
 int pci_find_ext_capability(int seg, int bus, int devfn, int cap)
 {
+    return pci_find_next_ext_capability(seg, bus, devfn, 0, cap);
+}
+
+/**
+ * pci_find_next_ext_capability - Find another extended capability
+ * @seg/@bus/@devfn: PCI device to query
+ * @pos: starting position
+ * @cap: capability code
+ *
+ * Returns the address of the requested extended capability structure
+ * within the device's PCI configuration space or 0 if the device does
+ * not support it.
+ */
+int pci_find_next_ext_capability(int seg, int bus, int devfn, int start, int cap)
+{
     u32 header;
     int ttl = 480; /* 3840 bytes, minimum 8 bytes per capability */
-    int pos = 0x100;
+    int pos = max(start, 0x100);
 
     header = pci_conf_read32(seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), pos);
 
@@ -92,9 +102,10 @@ int pci_find_ext_capability(int seg, int
      */
     if ( (header == 0) || (header == -1) )
         return 0;
+    ASSERT(start != pos || PCI_EXT_CAP_ID(header) == cap);
 
     while ( ttl-- > 0 ) {
-        if ( PCI_EXT_CAP_ID(header) == cap )
+        if ( PCI_EXT_CAP_ID(header) == cap && pos != start )
             return pos;
         pos = PCI_EXT_CAP_NEXT(header);
         if ( pos < 0x100 )
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -140,6 +140,7 @@ int pci_mmcfg_write(unsigned int seg, un
 int pci_find_cap_offset(u16 seg, u8 bus, u8 dev, u8 func, u8 cap);
 int pci_find_next_cap(u16 seg, u8 bus, unsigned int devfn, u8 pos, int cap);
 int pci_find_ext_capability(int seg, int bus, int devfn, int cap);
+int pci_find_next_ext_capability(int seg, int bus, int devfn, int pos, int cap);
 const char *parse_pci(const char *, unsigned int *seg, unsigned int *bus,
                       unsigned int *dev, unsigned int *func);
 
--- a/xen/include/xen/pci_regs.h
+++ b/xen/include/xen/pci_regs.h
@@ -431,6 +431,7 @@
 #define PCI_EXT_CAP_ID_VC	2
 #define PCI_EXT_CAP_ID_DSN	3
 #define PCI_EXT_CAP_ID_PWR	4
+#define PCI_EXT_CAP_ID_VNDR	11
 #define PCI_EXT_CAP_ID_ACS	13
 #define PCI_EXT_CAP_ID_ARI	14
 #define PCI_EXT_CAP_ID_ATS	15
@@ -459,6 +460,7 @@
 #define  PCI_ERR_COR_BAD_DLLP	0x00000080	/* Bad DLLP Status */
 #define  PCI_ERR_COR_REP_ROLL	0x00000100	/* REPLAY_NUM Rollover */
 #define  PCI_ERR_COR_REP_TIMER	0x00001000	/* Replay Timer Timeout */
+#define  PCI_ERR_COR_ADV_NFAT	0x00002000	/* Advisory Non-Fatal */
 #define PCI_ERR_COR_MASK	20	/* Correctable Error Mask */
 	/* Same bits as above */
 #define PCI_ERR_CAP		24	/* Advanced Error Capabilities */
@@ -510,6 +512,12 @@
 #define PCI_PWR_CAP		12	/* Capability */
 #define  PCI_PWR_CAP_BUDGET(x)	((x) & 1)	/* Included in system budget */
 
+/* Vendor-Specific (VSEC, PCI_EXT_CAP_ID_VNDR) */
+#define PCI_VNDR_HEADER		4	/* Vendor-Specific Header */
+#define  PCI_VNDR_HEADER_ID(x)	((x) & 0xffff)
+#define  PCI_VNDR_HEADER_REV(x)	(((x) >> 16) & 0xf)
+#define  PCI_VNDR_HEADER_LEN(x)	(((x) >> 20) & 0xfff)
+
 /*
  * Hypertransport sub capability types
  *
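
For illustration, the extended-capability walk done by the attached
patch can be exercised against a mocked config space. This is a sketch
only: rd32() and find_next_ext_cap() are stand-ins I wrote for
pci_conf_read32() and pci_find_next_ext_capability(), and the header
encoding follows the PCIe extended-capability layout (ID in bits
[15:0], next pointer in bits [31:20]):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CFG_SIZE 0x1000

/* Mock config-space read; real code uses pci_conf_read32(). */
static uint32_t rd32(const uint8_t *cfg, int pos)
{
    uint32_t v;
    memcpy(&v, cfg + pos, 4); /* assumes little-endian host; fine for a sketch */
    return v;
}

#define EXT_CAP_ID(h)   ((h) & 0xffff)
#define EXT_CAP_NEXT(h) (((h) >> 20) & 0xffc)

/* Find the next capability of type @cap strictly after @start
 * (start == 0 means "find the first one"), mirroring the patch. */
static int find_next_ext_cap(const uint8_t *cfg, int start, int cap)
{
    int ttl = 480; /* 3840 bytes, minimum 8 bytes per capability */
    int pos = start > 0x100 ? start : 0x100;
    uint32_t header = rd32(cfg, pos);

    if (header == 0 || header == 0xffffffff)
        return 0;
    while (ttl-- > 0) {
        if (EXT_CAP_ID(header) == cap && pos != start)
            return pos;
        pos = EXT_CAP_NEXT(header);
        if (pos < 0x100)
            break;
        header = rd32(cfg, pos);
    }
    return 0;
}
```

The `pos != start` test is what lets the same routine implement both
"find first" and "find next", exactly the refactoring the patch applies.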
--=__Part8DBFDC9D.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part8DBFDC9D.1__=--


From xen-devel-bounces@lists.xen.org Thu Feb 27 11:42:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:42:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzLj-0003XI-HE; Thu, 27 Feb 2014 11:41:59 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIzLh-0003XA-7y
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:41:57 +0000
Received: from [193.109.254.147:9142] by server-4.bemta-14.messagelabs.com id
	83/5D-32066-4842F035; Thu, 27 Feb 2014 11:41:56 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-27.messagelabs.com!1393501314!3451210!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15116 invoked from network); 27 Feb 2014 11:41:54 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 11:41:54 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 11:41:58 +0000
Message-Id: <530F329D020000780011FCE9@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 11:42:05 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Jinsong Liu" <jinsong.liu@intel.com>
References: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <DE8DF0795D48FD4CA783C40EC8292335015154D4@SHSMSX101.ccr.corp.intel.com>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part8DBFDC9D.1__="
Cc: Yuriy Bulygin <yuriy.bulygin@intel.com>,
	Asit K Mallick <asit.k.mallick@intel.com>, Susie Li <susie.li@intel.com>,
	Yong Y Wang <yong.y.wang@intel.com>,
	Donald D Dugger <donald.d.dugger@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Will Auld <will.auld@intel.com>
Subject: Re: [Xen-devel] XSA-59
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part8DBFDC9D.1__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

>>> On 26.02.14 at 05:16, "Liu, Jinsong" <jinsong.liu@intel.com> wrote:
> Q1: is the PCI IDs list (0x3400 ...) of root port a complete list? Jan =
got=20
> it from a disclosure that Intel made to him meanwhile well over-two-years=
-ago -->=20
> Any update about the list?
> [Asit]: There is not update to this list. This was provided in 2011 =
and=20
> included the Ids prior to being fixed.

Very interesting. While hunting down the data sheets for these IDs,
I found the Xeon C5500/C3500 and Xeon E5 v2 ones, which add new
sets (370x/3720 and 0e0x).

> Q3: the "...  without AER capability?" warning triggers on Jan's systems =
--> is=20
> it an issue? or, how to handle it properly?
> [Asit] BIOS can have option to not expose AER capability. It will be =
good to=20
> check the BIOS setup options. The error reporting should be masked so =
not=20
> action needed.
> [Yuriy] I expanded the answer to Q3 vs. what's in the attached email =
after=20
> we found out that when root port is operating in DMI mode, AER ext.=20
> capability is not in the chain of ext. capability headers. Please use =
this=20
> one instead.
> Answer to Q3:
> On Romley system (DID 0x3c00 ... 0x3c0b), for Host bridge BDF=3D00:00.0, =
when=20
> the root port is operating as DMI, AER extended capability is defined =
in=20
> VSHDR (Vendor Specific Header) configuration register (offset 0x148). =
It=20
> should have value 0x0004.
> After pci_find_ext_capability, if it didn't find AER capability, for=20
> BDF=3D00:00.0 the patch would need to check if VSHDR register has value =
0x0004=20
> in bits [15:0].

And it also needs to check for version 1 afaict - on that same Romley
system, 00:00.0 has another one with version 2 at 0x280 (and
80:00.0 also has such, but is running in PCIe mode).

The E5 v2 seem to be using ID 5 version 3 instead - sort of
confusing.

> Q4: the patches have no way of handling future chipsets (yet we also =
have no=20
> indication that future chipsets would not exhibit the same bad behavior) =
-->=20
> thoughts?
> [Jinsong] IMHO handle future chipset case by case.

I.e. we would never be able to fully put to rest this XSA? Rather
undesirable I would think.

> BTW, some other infromation from Yuriy:
> VT-d-mask-UR-host-bridge.patch:
> 1. The workaround is only applicable to the host bridge device 00:00.0=20=

> (DMIBAR does not exist for other devices). The patch is written =
generically=20
> for any PCIe device/bridge.

Rather than hard coding 00:00.0, should this then - if indeed
sufficient - perhaps simply be checking for whether the bridge a
a host one? Would that perhaps even make sense generalizing
(i.e. not looking for particular device IDs)?

And of course an even better approach would likely be to do all of
this only for the root ports handling devices actually getting passed
to guests. That would also address the issue of the current patch
not generally dealing with PCI segments other than 0 (due to
pci_vtd_quirk() being called at boot time only). But that would
make the workaround more fragile due to relying on more topology
information.

While going through the data sheets again, I also began to wonder
whether LER_{XP,}UNCERRMSK might not also need the respective
bits getting set. Sadly the data sheets don't have any detail on
what signaling LER events involves (i.e. namely whether these can
be fatal in any way to the host).

Anyway, attached the updated patch with the uncontroversial
feedback and the information from the other data sheets I found
integrated.

Jan


--=__Part8DBFDC9D.1__=
Content-Type: text/plain; name="VT-d-mask-UR-root-port.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="VT-d-mask-UR-root-port.patch"

--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -390,12 +390,62 @@ void __init pci_vtd_quirk(struct pci_dev
     int bus = pdev->bus;
     int dev = PCI_SLOT(pdev->devfn);
     int func = PCI_FUNC(pdev->devfn);
-    int id, val;
+    int pos;
+    u32 val;
 
-    id = pci_conf_read32(seg, bus, dev, func, 0);
-    if ( id == 0x342e8086 || id == 0x3c288086 )
+    if ( pci_conf_read16(seg, bus, dev, func, PCI_VENDOR_ID) != 0x8086 )
+        return;
+
+    switch ( pci_conf_read16(seg, bus, dev, func, PCI_DEVICE_ID) )
     {
+    case 0x342e:
+    case 0x3c28:
         val = pci_conf_read32(seg, bus, dev, func, 0x1AC);
         pci_conf_write32(seg, bus, dev, func, 0x1AC, val | (1 << 31));
+        break;
+
+    case 0x3400 ... 0x3411: case 0x3420:
+    case 0x3c00 ... 0x3c0b:
+    case 0x3700 ... 0x3707: case 0x3720:
+    case 0x0e00: case 0x0e01: case 0x0e04 ... 0x0e0b:
+        pos = pci_find_ext_capability(seg, bus, pdev->devfn,
+                                      PCI_EXT_CAP_ID_ERR);
+        if ( !pos )
+        {
+            pos = pci_find_ext_capability(seg, bus, pdev->devfn,
+                                          PCI_EXT_CAP_ID_VNDR);
+            while ( pos )
+            {
+                val = pci_conf_read32(seg, bus, dev, func, pos + PCI_VNDR_HEADER);
+                if ( PCI_VNDR_HEADER_ID(val) == 4 && PCI_VNDR_HEADER_REV(val) == 1 )
+                {
+                    pos += PCI_VNDR_HEADER;
+                    break;
+                }
+                pos = pci_find_next_ext_capability(seg, bus, pdev->devfn, pos,
+                                                   PCI_EXT_CAP_ID_VNDR);
+            }
+        }
+        if ( !pos )
+        {
+            printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
+                   seg, bus, dev, func);
+            break;
+        }
+
+        val = pci_conf_read32(seg, bus, dev, func, pos + PCI_ERR_UNCOR_MASK);
+        pci_conf_write32(seg, bus, dev, func, pos + PCI_ERR_UNCOR_MASK,
+                         val | PCI_ERR_UNC_UNSUP);
+        val = pci_conf_read32(seg, bus, dev, func, pos + PCI_ERR_COR_MASK);
+        pci_conf_write32(seg, bus, dev, func, pos + PCI_ERR_COR_MASK,
+                         val | PCI_ERR_COR_ADV_NFAT);
+
+        /* XPUNCERRMSK Send Completion with Unsupported Request */
+        val = pci_conf_read32(seg, bus, dev, func, 0x20c);
+        pci_conf_write32(seg, bus, dev, func, 0x20c, val | (1 << 4));
+
+        printk(XENLOG_INFO "Masked UR signaling on %04x:%02x:%02x.%u\n",
+               seg, bus, dev, func);
+        break;
     }
 }
--- a/xen/drivers/pci/pci.c
+++ b/xen/drivers/pci/pci.c
@@ -66,23 +66,33 @@ int pci_find_next_cap(u16 seg, u8 bus, u
 
 /**
  * pci_find_ext_capability - Find an extended capability
- * @dev: PCI device to query
+ * @seg/@bus/@devfn: PCI device to query
  * @cap: capability code
  *
  * Returns the address of the requested extended capability structure
 * within the device's PCI configuration space or 0 if the device does
- * not support it.  Possible values for @cap:
- *
- *  %PCI_EXT_CAP_ID_ERR         Advanced Error Reporting
- *  %PCI_EXT_CAP_ID_VC          Virtual Channel
- *  %PCI_EXT_CAP_ID_DSN         Device Serial Number
- *  %PCI_EXT_CAP_ID_PWR         Power Budgeting
+ * not support it.
  */
 int pci_find_ext_capability(int seg, int bus, int devfn, int cap)
 {
+    return pci_find_next_ext_capability(seg, bus, devfn, 0, cap);
+}
+
+/**
+ * pci_find_next_ext_capability - Find another extended capability
+ * @seg/@bus/@devfn: PCI device to query
+ * @pos: starting position
+ * @cap: capability code
+ *
+ * Returns the address of the requested extended capability structure
+ * within the device's PCI configuration space or 0 if the device does
+ * not support it.
+ */
+int pci_find_next_ext_capability(int seg, int bus, int devfn, int start, int cap)
+{
     u32 header;
     int ttl = 480; /* 3840 bytes, minimum 8 bytes per capability */
-    int pos = 0x100;
+    int pos = max(start, 0x100);
 
     header = pci_conf_read32(seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), pos);
 
@@ -92,9 +102,10 @@ int pci_find_ext_capability(int seg, int
      */
     if ( (header == 0) || (header == -1) )
         return 0;
+    ASSERT(start != pos || PCI_EXT_CAP_ID(header) == cap);
 
     while ( ttl-- > 0 ) {
-        if ( PCI_EXT_CAP_ID(header) == cap )
+        if ( PCI_EXT_CAP_ID(header) == cap && pos != start )
             return pos;
         pos = PCI_EXT_CAP_NEXT(header);
         if ( pos < 0x100 )
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -140,6 +140,7 @@ int pci_mmcfg_write(unsigned int seg, un
 int pci_find_cap_offset(u16 seg, u8 bus, u8 dev, u8 func, u8 cap);
 int pci_find_next_cap(u16 seg, u8 bus, unsigned int devfn, u8 pos, int cap);
 int pci_find_ext_capability(int seg, int bus, int devfn, int cap);
+int pci_find_next_ext_capability(int seg, int bus, int devfn, int pos, int cap);
 const char *parse_pci(const char *, unsigned int *seg, unsigned int *bus,
                       unsigned int *dev, unsigned int *func);
 
--- a/xen/include/xen/pci_regs.h
+++ b/xen/include/xen/pci_regs.h
@@ -431,6 +431,7 @@
 #define PCI_EXT_CAP_ID_VC	2
 #define PCI_EXT_CAP_ID_DSN	3
 #define PCI_EXT_CAP_ID_PWR	4
+#define PCI_EXT_CAP_ID_VNDR	11
 #define PCI_EXT_CAP_ID_ACS	13
 #define PCI_EXT_CAP_ID_ARI	14
 #define PCI_EXT_CAP_ID_ATS	15
@@ -459,6 +460,7 @@
 #define  PCI_ERR_COR_BAD_DLLP	0x00000080	/* Bad DLLP Status */
 #define  PCI_ERR_COR_REP_ROLL	0x00000100	/* REPLAY_NUM Rollover */
 #define  PCI_ERR_COR_REP_TIMER	0x00001000	/* Replay Timer Timeout */
+#define  PCI_ERR_COR_ADV_NFAT	0x00002000	/* Advisory Non-Fatal */
 #define PCI_ERR_COR_MASK	20	/* Correctable Error Mask */
 	/* Same bits as above */
 #define PCI_ERR_CAP		24	/* Advanced Error Capabilities */
@@ -510,6 +512,12 @@
 #define PCI_PWR_CAP		12	/* Capability */
 #define  PCI_PWR_CAP_BUDGET(x)	((x) & 1)	/* Included in system budget */
 
+/* Vendor-Specific (VSEC, PCI_EXT_CAP_ID_VNDR) */
+#define PCI_VNDR_HEADER		4	/* Vendor-Specific Header */
+#define  PCI_VNDR_HEADER_ID(x)	((x) & 0xffff)
+#define  PCI_VNDR_HEADER_REV(x)	(((x) >> 16) & 0xf)
+#define  PCI_VNDR_HEADER_LEN(x)	(((x) >> 20) & 0xfff)
+
 /*
  * Hypertransport sub capability types
 *
--=__Part8DBFDC9D.1__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part8DBFDC9D.1__=--


From xen-devel-bounces@lists.xen.org Thu Feb 27 11:58:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 11:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzb8-0003iQ-CN; Thu, 27 Feb 2014 11:57:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WIzb6-0003iL-L0
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 11:57:52 +0000
Received: from [193.109.254.147:6500] by server-2.bemta-14.messagelabs.com id
	48/52-01236-F382F035; Thu, 27 Feb 2014 11:57:51 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393502271!7192104!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14174 invoked from network); 27 Feb 2014 11:57:51 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 11:57:51 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 11:57:57 +0000
Message-Id: <530F365B020000780011FD03@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 11:58:03 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
	<1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid
 hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 12:11, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> which permits a toolstack to execute an arbitrary cpuid instruction on a
> specified physical cpu.

For one - is it a good idea to expose the unprocessed CPUID to
any guest code? After all, even the Dom0 kernel only gets to see
processed values, and the fact that without CPUID faulting apps
in PV guests can inadvertently use the raw values is known to be
a problem, not a feature.

And then - if you already have access to control operations, I
don't think you need the hypervisor to help you: limit your
vCPU's affinity to the particular pCPU you care about, and do
what you need doing from the kernel (by also setting the
process's affinity to the particular CPU you could achieve the
same even from user land).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:01:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:01:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzeF-0003tn-Fu; Thu, 27 Feb 2014 12:01:07 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WIzeE-0003tb-AB
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:01:06 +0000
Received: from [193.109.254.147:42061] by server-9.bemta-14.messagelabs.com id
	E7/97-24895-1092F035; Thu, 27 Feb 2014 12:01:05 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393502464!3532187!1
X-Originating-IP: [74.125.82.171]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1158 invoked from network); 27 Feb 2014 12:01:04 -0000
Received: from mail-we0-f171.google.com (HELO mail-we0-f171.google.com)
	(74.125.82.171)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:01:04 -0000
Received: by mail-we0-f171.google.com with SMTP id u56so2743845wes.2
	for <xen-devel@lists.xenproject.org>;
	Thu, 27 Feb 2014 04:01:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=eSllUGj2+Nb7liJK5o59niuTJTIlClfVLJ6T/+tuLtk=;
	b=Tr2v2q8PLjYTZBDWF3xq3XyLejmukanZNK4s00U5CqUKHcMKtY1mZzuvGTjubFJG1u
	Vr4KS4GlXRnR4akDwhJkQnJN283wVbJhlmi/O3v20pNf4vW7gDmL8NCwykfbuZm2PG/4
	VbCIC04p1DczzJHJZZFzsefcrZwL2dhzSuuFrJLDiYf6mMveE/oMTnOkfgtXbTvb7+U9
	pMqAvENSOLPCPL7rA4aWqWBcAPRWUG3V9j0HermMMXcwwlTxBMagllw7j8zEuQA65Sxl
	lSA+MpnDc6biDsllyFr6AAS6kmCyeeW27CzxXRI8MoUMFt7JUo1y6h8D4bEyTPHlqZzF
	W3ig==
MIME-Version: 1.0
X-Received: by 10.194.61.114 with SMTP id o18mr7145203wjr.6.1393502464328;
	Thu, 27 Feb 2014 04:01:04 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 27 Feb 2014 04:01:04 -0800 (PST)
In-Reply-To: <1389184789.4883.47.camel@kazak.uk.xensource.com>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389184789.4883.47.camel@kazak.uk.xensource.com>
Date: Thu, 27 Feb 2014 12:01:04 +0000
X-Google-Sender-Auth: bVP6n7qBECaDTO8ji6MNgrF84KU
Message-ID: <CAFLBxZb1Jca4U6QKLG1oTUp=UEULu0bQC-o6DtfudrZic_ZxKQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

close 29
thanks

On Wed, Jan 8, 2014 at 12:39 PM, Ian Campbell <Ian.Campbell@citrix.com> wrote:
> create ^
> title it Windows install failures/BSOD
> severity it blocker
> thanks
>
> On Tue, 2014-01-07 at 12:29 +0000, Jan Beulich wrote:
>> >>> On 06.01.14 at 10:36, xen.org <ian.jackson@eu.citrix.com> wrote:
>> > flight 24250 xen-unstable real [real]
>> > http://www.chiark.greenend.org.uk/~xensrcts/logs/24250/
>> >
>> > Failures :-/ but no regressions.
>> >
>> > Regressions which are regarded as allowable (not blocking):
>> >  test-armhf-armhf-xl           7 debian-install               fail   like 24146
>> >  test-amd64-i386-xl-winxpsp3-vcpus1  7 windows-install          fail like 23938
>> >  test-amd64-i386-xl-win7-amd64  7 windows-install              fail  like 24146
>>
>> These windows-install failures have been pretty persistent for
>> the last month or two. I've been looking at the logs from the
>> hypervisor side a number of times without spotting anything. It'd
>> be nice to know whether anyone also did so from the tools and
>> qemu sides... In any event we will need to do something about
>> this before 4.4 goes out.
>>
>> Jan
>>
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@lists.xen.org
>> http://lists.xen.org/xen-devel
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:10:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIznB-0004D2-Rj; Thu, 27 Feb 2014 12:10:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WIznA-0004Cx-MF
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:10:20 +0000
Received: from [193.109.254.147:43884] by server-7.bemta-14.messagelabs.com id
	AB/90-23424-B2B2F035; Thu, 27 Feb 2014 12:10:19 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393503018!7240185!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16951 invoked from network); 27 Feb 2014 12:10:19 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 12:10:19 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WIzrh-0004wJ-Tv; Thu, 27 Feb 2014 12:15:01 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1393503301.18989@bugs.xenproject.org>
References: <osstest-24250-mainreport@xen.org>
	<52CC01350200007800111164@nat28.tlf.novell.com>
	<1389184789.4883.47.camel@kazak.uk.xensource.com>
	<CAFLBxZb1Jca4U6QKLG1oTUp=UEULu0bQC-o6DtfudrZic_ZxKQ@mail.gmail.com>
In-Reply-To: <CAFLBxZb1Jca4U6QKLG1oTUp=UEULu0bQC-o6DtfudrZic_ZxKQ@mail.gmail.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Thu, 27 Feb 2014 12:15:01 +0000
Subject: [Xen-devel] Processed: Re: [xen-unstable test] 24250: tolerable FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> close 29
Closing bug #29
> thanks
Finished processing.

Modified/created Bugs:
 - 29: http://bugs.xenproject.org/xen/bug/29

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:11:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzog-0004Gt-Az; Thu, 27 Feb 2014 12:11:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WIzof-0004Gn-HJ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:11:53 +0000
Received: from [193.109.254.147:11429] by server-14.bemta-14.messagelabs.com
	id BE/86-29228-88B2F035; Thu, 27 Feb 2014 12:11:52 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393503110!7190240!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32638 invoked from network); 27 Feb 2014 12:11:51 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:11:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106244833"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 12:11:49 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:11:49 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WIzob-00018o-9C;
	Thu, 27 Feb 2014 12:11:49 +0000
Message-ID: <530F2B85.6060403@citrix.com>
Date: Thu, 27 Feb 2014 12:11:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
	<1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com>
	<530F365B020000780011FD03@nat28.tlf.novell.com>
In-Reply-To: <530F365B020000780011FD03@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid
 hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 11:58, Jan Beulich wrote:
>>>> On 27.02.14 at 12:11, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> which permits a toolstack to execute an arbitrary cpuid instruction on a
>> specified physical cpu.
> For one - is it a good idea to expose the unprocessed CPUID to
> any guest code? After all even the Dom0 kernel only gets to see
> processed values, and the fact that, without CPUID faulting, apps
> in PV guests can inadvertently use the raw values is known to be
> a problem, not a feature.

Any toolstack which uses this specific hypercall to find out information
normally hidden from dom0 by faulting/masking/policy can only shoot
itself in the foot.

The use case is enumerating the real cache leaves, which are normally
faked up in the policy anyway and so of no use there.

>
> And then - if you already have access to control operations, I
> don't think you need the hypervisor to help you: Limit your
> vCPU's affinity to the particular pCPU you care about, and do
> what you need doing from the kernel (by also setting the
> process's affinity to the particular CPU you could achieve the
> same even from user land).
>
> Jan
>

Having a toolstack rely on being able to pin its vCPUs around so that
some userspace can enumerate the cache leaves is horrific.

Apart from forcibly disturbing a balanced NUMA setup, what about CPU
pools, or toolstack disaggregation where pinning is restricted?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:12:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzov-0004Ih-OB; Thu, 27 Feb 2014 12:12:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WIzou-0004IQ-7O
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:12:08 +0000
Received: from [85.158.143.35:30353] by server-3.bemta-4.messagelabs.com id
	05/7C-11539-79B2F035; Thu, 27 Feb 2014 12:12:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393503125!8730484!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15557 invoked from network); 27 Feb 2014 12:12:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:12:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104617894"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 12:12:04 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:12:04 -0500
Message-ID: <530F2B8F.1010401@citrix.com>
Date: Thu, 27 Feb 2014 12:11:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Waiman Long <Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H.
	Peter Anvin" <hpa@zytor.com>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/14 15:14, Waiman Long wrote:
> This patch adds para-virtualization support to the queue spinlock code
> by enabling the queue head to kick the lock holder CPU, if known,
> in when the lock isn't released for a certain amount of time. It
> also enables the mutual monitoring of the queue head CPU and the
> following node CPU in the queue to make sure that their CPUs will
> stay scheduled in.

I'm not really understanding how this is supposed to work.  There
appears to be an assumption that a guest can keep one of its VCPUs
running by repeatedly kicking it?  This is not possible under Xen and I
doubt it's possible under KVM or any other hypervisor.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:21:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:21:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzxr-0004dY-Uy; Thu, 27 Feb 2014 12:21:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WIzxp-0004dT-Nj
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 12:21:22 +0000
Received: from [85.158.139.211:14420] by server-4.bemta-5.messagelabs.com id
	4D/EA-08092-0CD2F035; Thu, 27 Feb 2014 12:21:20 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393503678!2104141!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15180 invoked from network); 27 Feb 2014 12:21:19 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:21:19 -0000
Received: by mail-we0-f169.google.com with SMTP id t61so2840286wes.14
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Feb 2014 04:21:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=2MDlqq5jn/EgA7Qt0s/1/gq2b4Sp2tegKeLo4EF7lEg=;
	b=IKFpGaifwh4GZMP/d5FknnJtAmpOo6qUwlGH+bYLKUjP/osuk8vXnwYnBq2cAlpEkU
	v8NGvX9YFyDhDCPltVBedv+mzgGk/oz3u147TCHamQ39gp/iGOl/EfaeWzqPZcmNml2N
	/LWlIlFKODfbY44DlrP1z2ywlx3hpmwSHj/UeugpbtFqhmDDnQHnqlazwWoCRcbKCon/
	HJKzmfFR7pekpw+8IeJhcyU1lq5ORYf9SeYgdxCNKl+Il4Ii+W+LXqbvPbbLvfWj4VCq
	ba5sikxSqhWGyTKRDlWirE05HHtK5JAe9pVIbl3GqNvYkj44qLm6It0VopdKpNT+KiZ/
	bujg==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr2137933wjy.57.1393503678433;
	Thu, 27 Feb 2014 04:21:18 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 27 Feb 2014 04:21:18 -0800 (PST)
In-Reply-To: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
Date: Thu, 27 Feb 2014 12:21:18 +0000
X-Google-Sender-Auth: hbOhGbAG3RRFGV_LehtfdH2gdE4
Message-ID: <CAFLBxZYpV9boJFHyLAk31bS=tYJsKZDNf1e2b+VoOtb7iF+VoQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Zhu Yanhai <zhu.yanhai@gmail.com>
Cc: Zhu Yanhai <gaoyang.zyh@taobao.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Charles Wang <muming.wq@taobao.com>, Shen Yiben <zituan@taobao.com>,
	Wan Jia <jia.wanj@alibaba-inc.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it linux pvops: fpu corruption due to incorrect assumptions
about TS bit after exception under Xen
thanks

On Wed, Nov 6, 2013 at 6:41 AM, Zhu Yanhai <zhu.yanhai@gmail.com> wrote:
> As we know, Intel x86's CR0.TS is a sticky bit: once set,
> it remains set until explicitly cleared by software; in other words,
> the exception handler expects the bit to be set when it starts to execute.
>
> However xen doesn't simulate this behavior quite well for PV guests -
> vcpu_restore_fpu_lazy() clears CR0.TS unconditionally at the very beginning,
> so the guest kernel's #NM handler runs with CR0.TS cleared. Generally speaking
> that's fine, since the linux kernel executes the exception handler with
> interrupts disabled and a sane #NM handler will clear the bit anyway
> before it exits, but there's a catch: if this is the first FPU trap for the process,
> the linux kernel must allocate a piece of SLAB memory in which to save
> the FPU registers, which opens a schedule window as the memory
> allocation might sleep -- with CR0.TS kept clear!
>
> [see the code below in linux kernel,
>
> void math_state_restore(void)
> {
>     struct task_struct *tsk = current;
>
>     if (!tsk_used_math(tsk)) {
>         local_irq_enable();
>         /*
>          * does a slab alloc which can sleep
>          */
>         if (init_fpu(tsk)) {                 <<<< Here it might open a schedule window
>             /*
>              * ran out of memory!
>              */
>             do_group_exit(SIGKILL);
>             return;
>         }
>         local_irq_disable();
>     }
>
>     __thread_fpu_begin(tsk);    <<<< Here the process gets marked as a 'fpu user'
>                                          after the schedule window
>
>     /*
>      * Paranoid restore. send a SIGSEGV if we fail to restore the state.
>      */
>     if (unlikely(restore_fpu_checking(tsk))) {
>         drop_init_fpu(tsk);
>         force_sig(SIGSEGV, tsk);
>         return;
>     }
>
>     tsk->fpu_counter++;
> }
> ]
>
> The check in the linux kernel's switch_fpu_prepare() doesn't stts() either,
> because the current process only gets marked as an FPU user after the schedule
> window (the story is a bit different if eagerfpu is enabled; anyway, a sane
> hypervisor cannot depend on such undetermined behavior). Then, if the newly
> scheduled-in process wants to touch the FPU, nobody will do fxrstor/frstor
> for it anymore, leading to serious data corruption.
>
> The point is that everything is fine on linux + bare metal, since CR0.TS will
> stay set until the interrupted #NM handler gets the memory it needs and exits,
> so the incoming FPU user will get trapped as it's supposed to be.
>
> The test case is as below,
>
>         buf = malloc(BUF_SIZE);
>         if (!buf) {
>                 fprintf(stderr, "error %s during %s\n",
>                         strerror(-err),
>                         "malloc");
>                 return 1;
>         }
>         memset(buf, IO_PATTERN, BUF_SIZE);
>         memset(cmp_buf, IO_PATTERN, BUF_SIZE);
>
>         if (memcmp(buf, cmp_buf, BUF_SIZE)) {
>                 unsigned long long *ubuf = (unsigned long long *)buf;
>                 int i;
>
>                 for (i = 0; i < BUF_SIZE / sizeof(unsigned long long); i++)
>                         printf("%d: 0x%llx\n", i, ubuf[i]);
>
>                 return 2;
>         }
>
> Two shell scripts on each box's dom0 run the above program repeatedly until
> the compare fails (so every time the C program is a new FPU user and triggers
> memory allocation). We can see the data damage at least once with
> xen 4.3 + linux 2.6.32 on ~200 physical machines within two hours.
> With xen 4.3 + linux 3.11.6 stable it becomes harder to reproduce
> (presumably because of the eagerfpu feature introduced in linux kernel 3.7),
> but it can still show up within about four hours.
>
> The fix here tries to make xen behave as closely to the hardware as possible.
>
> This bug only affects PV guests (including the dom0 kernel, of course).
>
> Cc: Wan Jia <jia.wanj@alibaba-inc.com>
> Cc: Shen Yiben <zituan@taobao.com>
> Cc: Charles Wang <muming.wq@taobao.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Signed-off-by: Zhu Yanhai <gaoyang.zyh@taobao.com>
> ---
>  xen/arch/x86/traps.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 77c200b..b0321a6 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -3267,8 +3267,8 @@ void do_device_not_available(struct cpu_user_regs *regs)
>
>      if ( curr->arch.pv_vcpu.ctrlreg[0] & X86_CR0_TS )
>      {
> +        stts();
>          do_guest_trap(TRAP_no_device, regs, 0);
> -        curr->arch.pv_vcpu.ctrlreg[0] &= ~X86_CR0_TS;
>      }
>      else
>          TRACE_0D(TRC_PV_MATH_STATE_RESTORE);
> --
> 1.7.4.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:21:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:21:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WIzxr-0004dY-Uy; Thu, 27 Feb 2014 12:21:23 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dunlapg@gmail.com>) id 1WIzxp-0004dT-Nj
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 12:21:22 +0000
Received: from [85.158.139.211:14420] by server-4.bemta-5.messagelabs.com id
	4D/EA-08092-0CD2F035; Thu, 27 Feb 2014 12:21:20 +0000
X-Env-Sender: dunlapg@gmail.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393503678!2104141!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15180 invoked from network); 27 Feb 2014 12:21:19 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:21:19 -0000
Received: by mail-we0-f169.google.com with SMTP id t61so2840286wes.14
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Feb 2014 04:21:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=2MDlqq5jn/EgA7Qt0s/1/gq2b4Sp2tegKeLo4EF7lEg=;
	b=IKFpGaifwh4GZMP/d5FknnJtAmpOo6qUwlGH+bYLKUjP/osuk8vXnwYnBq2cAlpEkU
	v8NGvX9YFyDhDCPltVBedv+mzgGk/oz3u147TCHamQ39gp/iGOl/EfaeWzqPZcmNml2N
	/LWlIlFKODfbY44DlrP1z2ywlx3hpmwSHj/UeugpbtFqhmDDnQHnqlazwWoCRcbKCon/
	HJKzmfFR7pekpw+8IeJhcyU1lq5ORYf9SeYgdxCNKl+Il4Ii+W+LXqbvPbbLvfWj4VCq
	ba5sikxSqhWGyTKRDlWirE05HHtK5JAe9pVIbl3GqNvYkj44qLm6It0VopdKpNT+KiZ/
	bujg==
MIME-Version: 1.0
X-Received: by 10.194.81.196 with SMTP id c4mr2137933wjy.57.1393503678433;
	Thu, 27 Feb 2014 04:21:18 -0800 (PST)
Received: by 10.194.75.163 with HTTP; Thu, 27 Feb 2014 04:21:18 -0800 (PST)
In-Reply-To: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
Date: Thu, 27 Feb 2014 12:21:18 +0000
X-Google-Sender-Auth: hbOhGbAG3RRFGV_LehtfdH2gdE4
Message-ID: <CAFLBxZYpV9boJFHyLAk31bS=tYJsKZDNf1e2b+VoOtb7iF+VoQ@mail.gmail.com>
From: George Dunlap <George.Dunlap@eu.citrix.com>
To: Zhu Yanhai <zhu.yanhai@gmail.com>
Cc: Zhu Yanhai <gaoyang.zyh@taobao.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Charles Wang <muming.wq@taobao.com>, Shen Yiben <zituan@taobao.com>,
	Wan Jia <jia.wanj@alibaba-inc.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

create ^
title it linux pvops: fpu corruption due to incorrect assumptions
about TS bit after exception under Xen
thanks

On Wed, Nov 6, 2013 at 6:41 AM, Zhu Yanhai <zhu.yanhai@gmail.com> wrote:
> As we know, Intel x86's CR0.TS is a sticky bit: once set, it remains
> set until cleared by software. In other words, the exception handler
> expects the bit to be set when it starts executing.
>
> However, Xen doesn't emulate this behavior correctly for PV guests -
> vcpu_restore_fpu_lazy() clears CR0.TS unconditionally at the very beginning,
> so the guest kernel's #NM handler runs with CR0.TS cleared. Generally speaking
> that's fine, since the Linux kernel executes the exception handler with
> interrupts disabled and a sane #NM handler will clear the bit anyway
> before it exits. But there's a catch: if this is the process's first FPU trap,
> the Linux kernel must allocate a piece of SLAB memory in which to save
> the FPU registers, which opens a scheduling window because the memory
> allocation might sleep -- with CR0.TS still clear!
>
> [see the code below in linux kernel,
>
> void math_state_restore(void)
> {
>     struct task_struct *tsk = current;
>
>     if (!tsk_used_math(tsk)) {
>         local_irq_enable();
>         /*
>          * does a slab alloc which can sleep
>          */
>         if (init_fpu(tsk)) {                 <<<< Here it might open a schedule window
>             /*
>              * ran out of memory!
>              */
>             do_group_exit(SIGKILL);
>             return;
>         }
>         local_irq_disable();
>     }
>
>     __thread_fpu_begin(tsk);    <<<< Here the process gets marked as a 'fpu user'
>                                          after the schedule window
>
>     /*
>      * Paranoid restore. send a SIGSEGV if we fail to restore the state.
>      */
>     if (unlikely(restore_fpu_checking(tsk))) {
>         drop_init_fpu(tsk);
>         force_sig(SIGSEGV, tsk);
>         return;
>     }
>
>     tsk->fpu_counter++;
> }
> ]
>
> The check in the Linux kernel's switch_fpu_prepare() doesn't stts() either,
> because the current process is only marked as an FPU user after the scheduling
> window (the story is a bit different if eagerfpu is enabled; in any case a sane
> hypervisor cannot depend on such undetermined behavior). So if the newly
> scheduled-in process wants to touch the FPU, nobody will do fxrstor/frstor
> for it anymore, leading to serious data corruption.
>
> Note that everything is fine on Linux on bare metal, since CR0.TS stays
> set until the interrupted #NM handler gets the memory it needs and exits,
> so the incoming FPU user gets trapped as it's supposed to be.
>
> The test case is as below,
>
>         buf = malloc(BUF_SIZE);
>         if (!buf) {
>                 fprintf(stderr, "error %s during %s\n",
>                         strerror(-err),
>                         "malloc");
>                 return 1;
>         }
>         memset(buf, IO_PATTERN, BUF_SIZE);
>         memset(cmp_buf, IO_PATTERN, BUF_SIZE);
>
>         if (memcmp(buf, cmp_buf, BUF_SIZE)) {
>                 unsigned long long *ubuf = (unsigned long long *)buf;
>                 int i;
>
>                 for (i = 0; i < BUF_SIZE / sizeof(unsigned long long); i++)
>                         printf("%d: 0x%llx\n", i, ubuf[i]);
>
>                 return 2;
>         }
>
> Two shell scripts on each box's dom0 run the above program repeatedly until
> the comparison fails (so each time the C program is a new FPU user and
> triggers a memory allocation). We could see the data corruption at least once
> within two hours with Xen 4.3 + Linux 2.6.32 on ~200 physical machines.
> With Xen 4.3 + Linux 3.11.6 stable it becomes harder to reproduce
> (presumably because of the eagerfpu feature introduced in Linux kernel 3.7),
> but it still shows up within about four hours.
>
> The fix here makes Xen behave as closely to the hardware as possible.
>
> This bug only affects PV guests (including the dom0 kernel, of course).
>
> Cc: Wan Jia <jia.wanj@alibaba-inc.com>
> Cc: Shen Yiben <zituan@taobao.com>
> Cc: Charles Wang <muming.wq@taobao.com>
> Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Ian Campbell <ian.campbell@citrix.com>
> Signed-off-by: Zhu Yanhai <gaoyang.zyh@taobao.com>
> ---
>  xen/arch/x86/traps.c |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 77c200b..b0321a6 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -3267,8 +3267,8 @@ void do_device_not_available(struct cpu_user_regs *regs)
>
>      if ( curr->arch.pv_vcpu.ctrlreg[0] & X86_CR0_TS )
>      {
> +        stts();
>          do_guest_trap(TRAP_no_device, regs, 0);
> -        curr->arch.pv_vcpu.ctrlreg[0] &= ~X86_CR0_TS;
>      }
>      else
>          TRACE_0D(TRC_PV_MATH_STATE_RESTORE);
> --
> 1.7.4.4
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:25:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ01j-0004kp-Lv; Thu, 27 Feb 2014 12:25:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WJ01i-0004kj-EQ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:25:22 +0000
Received: from [85.158.143.35:46893] by server-2.bemta-4.messagelabs.com id
	F0/42-04779-1BE2F035; Thu, 27 Feb 2014 12:25:21 +0000
X-Env-Sender: xen-devel-bugs@bugs.xenproject.org
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393503919!8751887!1
X-Originating-IP: [50.56.82.209]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28594 invoked from network); 27 Feb 2014 12:25:20 -0000
Received: from bugs.xenproject.org (HELO bugs.xenproject.org) (50.56.82.209)
	by server-14.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 12:25:20 -0000
Received: from xen-devel-bugs by bugs.xenproject.org with local (Exim 4.80)
	(envelope-from <xen-devel-bugs@bugs.xenproject.org>)
	id 1WJ06E-00056G-Rk; Thu, 27 Feb 2014 12:30:02 +0000
Content-Disposition: inline
MIME-Version: 1.0
X-Mailer: MIME-tools 5.503 (Entity 5.503)
From: xen@bugs.xenproject.org
To: George Dunlap <George.Dunlap@eu.citrix.com>, xen-devel@lists.xen.org
Message-ID: <control-reply-1393504202.19606@bugs.xenproject.org>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
	<CAFLBxZYpV9boJFHyLAk31bS=tYJsKZDNf1e2b+VoOtb7iF+VoQ@mail.gmail.com>
In-Reply-To: <CAFLBxZYpV9boJFHyLAk31bS=tYJsKZDNf1e2b+VoOtb7iF+VoQ@mail.gmail.com>
X-Emesinae-Message: control
X-Emesinae-Control-From: George Dunlap <George.Dunlap@eu.citrix.com>
Date: Thu, 27 Feb 2014 12:30:02 +0000
Subject: [Xen-devel] Processed: Re: [PATCH] x86/fpu: CR0.TS should be set
 before trap into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Processing commands for xen@bugs.xenproject.org:

> create ^
Created new bug #40 rooted at `<1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>'
Title: `Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap into PV guest's #NM exception handler'
> title it linux pvops: fpu corruption due to incorrect assumptions
Set title for #40 to `linux pvops: fpu corruption due to incorrect assumptions'
> about TS bit after exception under Xen
Command failed: Unknown command `about'. at /srv/xen-devel-bugs/lib/emesinae/control.pl line 455, <GEN1> line 3.
Stop processing here.

Modified/created Bugs:
 - 40: http://bugs.xenproject.org/xen/bug/40 (new)

---
Xen Hypervisor Bug Tracker
See http://wiki.xen.org/wiki/Reporting_Bugs_against_Xen for information on reporting bugs
Contact xen-bugs-owner@bugs.xenproject.org with any infrastructure issues

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:26:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ02S-0004o5-3G; Thu, 27 Feb 2014 12:26:08 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJ02Q-0004ny-QU
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:26:06 +0000
Received: from [193.109.254.147:60659] by server-2.bemta-14.messagelabs.com id
	4A/6B-01236-EDE2F035; Thu, 27 Feb 2014 12:26:06 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393503965!1945111!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16563 invoked from network); 27 Feb 2014 12:26:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 12:26:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 12:26:17 +0000
Message-Id: <530F3CF8020000780011FD40@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 12:26:16 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
	<1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com>
	<530F365B020000780011FD03@nat28.tlf.novell.com>
	<530F2B85.6060403@citrix.com>
In-Reply-To: <530F2B85.6060403@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid
 hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 13:11, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Apart from forcibly messing with a balanced numa setup, what about cpu
> pools, or toolstack disaggregation where pinning is restricted?

There's no messing with a balanced setup here - everything should
of course be transient, i.e. get restored to previous values right
after. And I can't see a reason not to permit the hardware domain
to temporarily escape its enclave, as it can do worse to the overall
system anyway.

The new hypercall is simple enough (yet very x86-centric) that I'm
not really against it; what I'm against is adding functionality to the
hypervisor that is already available by other means.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:28:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ04O-0004zt-RZ; Thu, 27 Feb 2014 12:28:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ04N-0004zk-Gn
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:28:07 +0000
Received: from [85.158.139.211:61000] by server-13.bemta-5.messagelabs.com id
	E0/CF-18801-65F2F035; Thu, 27 Feb 2014 12:28:06 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393504084!6628472!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18547 invoked from network); 27 Feb 2014 12:28:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:28:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104621644"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 12:28:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:28:03 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ04J-0001Nj-C3;
	Thu, 27 Feb 2014 12:28:03 +0000
Date: Thu, 27 Feb 2014 12:27:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Grant Likely <grant.likely@linaro.org>
In-Reply-To: <CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402271216040.31489@kaball.uk.xensource.com>
References: <20140226183454.GA14639@cbox>
	<CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1199956662-1393504078=:31489"
X-DLP: MIA1
Cc: "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	cross-distro@lists.linaro.org,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1199956662-1393504078=:31489
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Wed, 26 Feb 2014, Grant Likely wrote:
> > VM Platform
> > -----------
> > The specification does not mandate any specific memory map.  The guest
> > OS must be able to enumerate all processing elements, devices, and
> > memory through HW description data (FDT, ACPI) or a bus-specific
> > mechanism such as PCI.
> >
> > The virtual platform must support at least one of the following ARM
> > execution states:
> >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> >
> > It is recommended to support both (2) and (3) on aarch64 capable
> > physical systems.
> >
> > The virtual hardware platform must provide a number of mandatory
> > peripherals:
> >
> >   Serial console:  The platform should provide a console,
> >   based on an emulated pl011, a virtio-console, or a Xen PV console.
> 
> For a portable disk image, can Xen PV be dropped from the list? pl011 is
> part of the SBSA, and virtio is getting standardised, but Xen PV is
> implementation specific.

Does an interface need OASIS's rubber stamp to be a "standard"?  If so, we
should also drop FDT from this document. The SBSA has not been published
by any OASIS-like standardization body either, so maybe we should drop
the SBSA too.

If it doesn't need OASIS's nice logo on the side to be a standard, then
the Xen PV interfaces are a standard too. Have a look at
xen/include/public/io: the interfaces go back as far as 2004, and they have
multiple different implementations of the frontends and backends in
multiple operating systems today.

There is no reason another hypervisor couldn't implement the same
interface, and indeed I know for a fact that it was considered for KVM.
--1342847746-1199956662-1393504078=:31489
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1199956662-1393504078=:31489--


From xen-devel-bounces@lists.xen.org Thu Feb 27 12:28:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ04O-0004zt-RZ; Thu, 27 Feb 2014 12:28:08 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ04N-0004zk-Gn
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:28:07 +0000
Received: from [85.158.139.211:61000] by server-13.bemta-5.messagelabs.com id
	E0/CF-18801-65F2F035; Thu, 27 Feb 2014 12:28:06 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393504084!6628472!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18547 invoked from network); 27 Feb 2014 12:28:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:28:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104621644"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 12:28:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:28:03 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ04J-0001Nj-C3;
	Thu, 27 Feb 2014 12:28:03 +0000
Date: Thu, 27 Feb 2014 12:27:58 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Grant Likely <grant.likely@linaro.org>
In-Reply-To: <CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
Message-ID: <alpine.DEB.2.02.1402271216040.31489@kaball.uk.xensource.com>
References: <20140226183454.GA14639@cbox>
	<CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-1199956662-1393504078=:31489"
X-DLP: MIA1
Cc: "linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	cross-distro@lists.linaro.org,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-1199956662-1393504078=:31489
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Wed, 26 Feb 2014, Grant Likely wrote:
> > VM Platform
> > -----------
> > The specification does not mandate any specific memory map.  The guest
> > OS must be able to enumerate all processing elements, devices, and
> > memory through HW description data (FDT, ACPI) or a bus-specific
> > mechanism such as PCI.
> >
> > The virtual platform must support at least one of the following ARM
> > execution states:
> >   (1) aarch32 virtual CPUs on aarch32 physical CPUs
> >   (2) aarch32 virtual CPUs on aarch64 physical CPUs
> >   (3) aarch64 virtual CPUs on aarch64 physical CPUs
> >
> > It is recommended to support both (2) and (3) on aarch64 capable
> > physical systems.
> >
> > The virtual hardware platform must provide a number of mandatory
> > peripherals:
> >
> >   Serial console:  The platform should provide a console,
> >   based on an emulated pl011, a virtio-console, or a Xen PV console.
> 
> For a portable disk image, can Xen PV be dropped from the list? pl011 is
> part of the SBSA, and virtio is getting standardised, but Xen PV is
> implementation specific.

Does an interface need OASIS' rubber stamp to be "standard"?  If so, we
should also drop FDT from this document. The SBSA has not been published
by any OASIS-like standardization body either, so maybe we should drop
the SBSA too.

If it doesn't need OASIS' nice logo on the side to be a standard, then
the Xen PV interfaces are a standard too. Take a look at
xen/include/public/io: the interfaces go back as far as 2004, and today
they have multiple independent implementations of the frontends and
backends across several operating systems.

There is no reason another hypervisor couldn't implement the same
interface; in fact, I know it was considered for KVM.
--1342847746-1199956662-1393504078=:31489
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-1199956662-1393504078=:31489--


From xen-devel-bounces@lists.xen.org Thu Feb 27 12:28:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ057-00055e-Bp; Thu, 27 Feb 2014 12:28:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ056-00055V-9g
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:28:52 +0000
Received: from [85.158.139.211:34765] by server-13.bemta-5.messagelabs.com id
	E7/41-18801-38F2F035; Thu, 27 Feb 2014 12:28:51 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393504129!2106298!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15714 invoked from network); 27 Feb 2014 12:28:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:28:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104621781"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 12:28:48 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:28:48 -0500
Message-ID: <530F2F7B.40500@citrix.com>
Date: Thu, 27 Feb 2014 12:28:43 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Waiman Long <Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
In-Reply-To: <1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H.
	Peter Anvin" <hpa@zytor.com>, Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
 x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/14 15:14, Waiman Long wrote:
> Locking is always an issue in a virtualized environment as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
> 
> One solution to this problem is to allow unfair locks in a
> para-virtualized environment. In this case, a new lock acquirer can
> come and steal the lock if the next-in-line CPU to get the lock is
> scheduled out. An unfair lock in a native environment is generally not
> a good idea, as there is a possibility of lock starvation for a heavily
> contended lock.
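
The lock stealing described above amounts to a plain test-and-set path
with no queue. A minimal user-space sketch, with illustrative names
rather than the actual pvqspinlock code:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of an "unfair" lock: new acquirers may steal the lock word
 * directly instead of queuing, so a descheduled vCPU holding a queue
 * slot cannot block everyone else.  Illustrative only. */
typedef struct { atomic_int locked; } unfair_lock_t;

static bool unfair_trylock(unfair_lock_t *l)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&l->locked, &expected, 1);
}

static void unfair_lock(unfair_lock_t *l)
{
    /* No queue: every spin is a fresh steal attempt, hence the
     * possible starvation of any one waiter under heavy contention. */
    while (!unfair_trylock(l))
        ;
}

static void unfair_unlock(unfair_lock_t *l)
{
    atomic_store(&l->locked, 0);
}
```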

I'm not sure I'm keen on losing the fairness in a PV environment.  I'm
concerned that on an over-committed host, the lock starvation problem
will be particularly bad.

But I'll have to revisit this once a non-broken PV qspinlock
implementation exists (or someone explains how the proposed one works).

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:32:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ08D-0005Ju-GO; Thu, 27 Feb 2014 12:32:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ08C-0005Jl-33
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:32:04 +0000
Received: from [85.158.143.35:58563] by server-2.bemta-4.messagelabs.com id
	64/9F-04779-3403F035; Thu, 27 Feb 2014 12:32:03 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393504321!8732370!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29405 invoked from network); 27 Feb 2014 12:32:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:32:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106249442"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 12:32:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:32:00 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ088-0001RZ-Jx;
	Thu, 27 Feb 2014 12:32:00 +0000
Date: Thu, 27 Feb 2014 12:31:55 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Leif Lindholm <leif.lindholm@linaro.org>
In-Reply-To: <20140226214843.GD12169@bivouac.eciton.net>
Message-ID: <alpine.DEB.2.02.1402271229370.31489@kaball.uk.xensource.com>
References: <20140226183454.GA14639@cbox> <5553754.0b4gMg5OS7@wuerfel>
	<20140226214843.GD12169@bivouac.eciton.net>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Peter Maydell <peter.maydell@linaro.org>, Ian
	Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Robie Basak <robie.basak@canonical.com>, Stefano
	Stabellini <stefano.stabellini@citrix.com>,
	Christoffer Dall <christoffer.dall@linaro.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 26 Feb 2014, Leif Lindholm wrote:
> > >   no FDT.  In this case, the VM implementation must provide ACPI, and
> > >   the OS must be able to locate the ACPI root pointer through the UEFI
> > >   system table.
> > > 
> > > For more information about the arm and arm64 boot conventions, see
> > > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > > Linux kernel source tree.
> > > 
> > > For more information about UEFI and ACPI booting, see [4] and [5].
> > 
> > What's the point of having ACPI in a virtual machine? You wouldn't
> > need to abstract any of the hardware in AML since you already know
> > what the virtual hardware is, so I can't see how this would help
> > anyone.
> 
> The point is that if we need to share any real hw then we need to use
> whatever the host has.

That's right.

I dislike ACPI as much as the next guy, but unfortunately if the host
only supports ACPI, the Linux driver for a particular device works only
with ACPI, and you want to assign that device to a VM, then we might be
forced to use ACPI to describe it.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:38:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0Dm-0005VA-Eq; Thu, 27 Feb 2014 12:37:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ0Dk-0005V4-Tt
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:37:49 +0000
Received: from [85.158.139.211:51405] by server-2.bemta-5.messagelabs.com id
	77/A2-23037-C913F035; Thu, 27 Feb 2014 12:37:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393504666!6591045!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23933 invoked from network); 27 Feb 2014 12:37:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:37:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104623652"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 12:37:45 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:37:45 -0500
Message-ID: <530F3197.2040403@citrix.com>
Date: Thu, 27 Feb 2014 12:37:43 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>	<527A113C02000078000FFF99@nat28.tlf.novell.com>
	<20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Charles Wang <muming.wq@taobao.com>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Zhu Yanhai <gaoyang.zyh@taobao.com>, Shen Yiben <zituan@taobao.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Zhu Yanhai <zhu.yanhai@gmail.com>, Wan Jia <jia.wanj@alibaba-inc.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 00:04, Matt Wilson wrote:
> On Wed, Nov 06, 2013 at 08:51:56AM +0000, Jan Beulich wrote:
>>>>> On 06.11.13 at 07:41, Zhu Yanhai <zhu.yanhai@gmail.com> wrote:
>>> As we know, Intel x86's CR0.TS is a sticky bit: once set, it remains
>>> set until cleared by software. In other words, the exception handler
>>> expects the bit to be set when it starts to execute.
>>
>> Since when would that be the case? CR0.TS is entirely unaffected
>> by exception invocations according to all I know. All that is known
>> here is that #NM wouldn't have occurred in the first place if CR0.TS
>> was clear.
>>
>>> However Xen doesn't simulate this behavior quite well for PV guests -
>>> vcpu_restore_fpu_lazy() clears CR0.TS unconditionally at the very
>>> beginning, so the guest kernel's #NM handler runs with CR0.TS cleared.
>>> Generally speaking that's fine, since the Linux kernel executes the
>>> exception handler with interrupts disabled and a sane #NM handler will
>>> clear the bit anyway before it exits. But there's a catch: if it's the
>>> first FPU trap for the process, the Linux kernel must allocate a piece
>>> of SLAB memory for it to save the FPU registers, which opens a schedule
>>> window as the memory allocation might sleep -- with CR0.TS kept clear!
>>>
>>> [see the code below in linux kernel,
>>
>> You're apparently referring to the pvops kernel.
>>
>>> void math_state_restore(void)
>>> {
>>>     struct task_struct *tsk = current;
>>>
>>>     if (!tsk_used_math(tsk)) {
>>>         local_irq_enable();
>>>         /*
>>>          * does a slab alloc which can sleep
>>>          */
>>>         if (init_fpu(tsk)) {                 <<<< Here it might open a schedule window
>>>             /*
>>>              * ran out of memory!
>>>              */
>>>             do_group_exit(SIGKILL);
>>>             return;
>>>         }
>>>         local_irq_disable();
>>>     }
>>>
>>>     __thread_fpu_begin(tsk);    <<<< Here the process gets marked as a 'fpu user'
>>>                                          after the schedule window
>>>
>>>     /*
>>>      * Paranoid restore. send a SIGSEGV if we fail to restore the state.
>>>      */
>>>     if (unlikely(restore_fpu_checking(tsk))) {
>>>         drop_init_fpu(tsk);
>>>         force_sig(SIGSEGV, tsk);
>>>         return;
>>>     }
>>>
>>>     tsk->fpu_counter++;
>>> }
>>> ]
>>
>> May I direct your attention to the XenoLinux one:
>>
>> asmlinkage void math_state_restore(void)
>> {
>> 	struct task_struct *me = current;
>>
>> 	/* NB. 'clts' is done for us by Xen during virtual trap. */
>> 	__get_cpu_var(xen_x86_cr0) &= ~X86_CR0_TS;
>> 	if (!used_math())
>> 		init_fpu(me);
>> 	restore_fpu_checking(&me->thread.i387.fxsave);
>> 	task_thread_info(me)->status |= TS_USEDFPU;
>> }
>>
>> Note the comment close to the beginning - the fact that CR0.TS
>> is clear at exception handler entry is actually part of the PV ABI,
>> i.e. by altering hypervisor behavior here you break all forward
>> ported kernels.
>>
>> Nevertheless I agree that there is an issue, but this needs to be
>> fixed on the Linux side (hence adding the Linux maintainers to Cc);
>> this issue was introduced way back in 2.6.26 (before that there
>> was no allocation on that path). It's not clear though whether
>> using GFP_ATOMIC for the allocation would be preferable over
>> stts() before calling the allocation function (and clts() if it
>> succeeded), or whether perhaps to defer the stts() until we
>> actually know the task is being switched out. It's going to be an
>> ugly, Xen-specific hack in any event.
> 
> Was there ever a resolution to this problem? I never saw a comment
> from the Linux Xen PV maintainers.
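
One of the options Jan outlines above, re-setting CR0.TS around the
sleeping allocation, can be modelled as a toy sketch. The TS bit is
simulated as a plain flag and init_fpu() stands in for the sleeping SLAB
allocation; this is a hypothetical illustration, not kernel code:

```c
#include <stdbool.h>

/* Toy model of the stts()/clts() variant: keep TS set while the
 * allocation may sleep, so a context switch in that window re-traps. */
static bool cr0_ts = true;            /* set => next FPU use raises #NM */
static bool task_has_fpu_state = false;

static int init_fpu(void)             /* may sleep; returns 0 on success */
{
    task_has_fpu_state = true;
    return 0;
}

static void math_state_restore(void)
{
    cr0_ts = false;                   /* Xen clears TS on the virtual trap */
    if (!task_has_fpu_state) {
        cr0_ts = true;                /* stts(): re-arm before we may sleep */
        if (init_fpu())
            return;                   /* allocation failed */
        cr0_ts = false;               /* clts(): safe to touch the FPU again */
    }
}
```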

I think allocating on the context switch is mad and the irq
enable/disable just to allow the allocation looks equally mad.

I had vague plans to maintain a mempool for FPU contexts but couldn't
immediately think how we could guarantee that the pool would be kept
sufficiently populated.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:38:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0Dm-0005VA-Eq; Thu, 27 Feb 2014 12:37:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ0Dk-0005V4-Tt
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:37:49 +0000
Received: from [85.158.139.211:51405] by server-2.bemta-5.messagelabs.com id
	77/A2-23037-C913F035; Thu, 27 Feb 2014 12:37:48 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393504666!6591045!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23933 invoked from network); 27 Feb 2014 12:37:47 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:37:47 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104623652"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 12:37:45 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:37:45 -0500
Message-ID: <530F3197.2040403@citrix.com>
Date: Thu, 27 Feb 2014 12:37:43 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Matt Wilson <msw@linux.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>	<527A113C02000078000FFF99@nat28.tlf.novell.com>
	<20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Charles Wang <muming.wq@taobao.com>, Ian Campbell <ian.campbell@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Zhu Yanhai <gaoyang.zyh@taobao.com>, Shen Yiben <zituan@taobao.com>,
	David Vrabel <david.vrabel@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Zhu Yanhai <zhu.yanhai@gmail.com>, Wan Jia <jia.wanj@alibaba-inc.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 00:04, Matt Wilson wrote:
> On Wed, Nov 06, 2013 at 08:51:56AM +0000, Jan Beulich wrote:
>>>>> On 06.11.13 at 07:41, Zhu Yanhai <zhu.yanhai@gmail.com> wrote:
>>> As we know, Intel x86's CR0.TS is a sticky bit: once set it remains
>>> set until cleared by some software routine; in other words, the
>>> exception handler expects the bit to be set when it starts to execute.
>>
>> Since when would that be the case? CR0.TS is entirely unaffected
>> by exception invocations according to all I know. All that is known
>> here is that #NM wouldn't have occurred in the first place if CR0.TS
>> was clear.
>>
>>> However Xen doesn't simulate this behavior quite well for PV guests -
>>> vcpu_restore_fpu_lazy() clears CR0.TS unconditionally at the very
>>> beginning, so the guest kernel's #NM handler runs with CR0.TS cleared.
>>> Generally speaking that's fine, since the Linux kernel executes the
>>> exception handler with interrupts disabled and a sane #NM handler will
>>> clear the bit anyway before it exits, but there's a catch: if this is
>>> the first FPU trap for the process, the Linux kernel must allocate a
>>> piece of SLAB memory in which to save the FPU registers, which opens a
>>> schedule window since the memory allocation might sleep -- and with
>>> CR0.TS kept clear!
>>>
>>> [see the code below in linux kernel,
>>
>> You're apparently referring to the pvops kernel.
>>
>>> void math_state_restore(void)
>>> {
>>>     struct task_struct *tsk = current;
>>>
>>>     if (!tsk_used_math(tsk)) {
>>>         local_irq_enable();
>>>         /*
>>>          * does a slab alloc which can sleep
>>>          */
>>>         if (init_fpu(tsk)) {                 <<<< Here it might open a schedule window
>>>             /*
>>>              * ran out of memory!
>>>              */
>>>             do_group_exit(SIGKILL);
>>>             return;
>>>         }
>>>         local_irq_disable();
>>>     }
>>>
>>>     __thread_fpu_begin(tsk);    <<<< Here the process gets marked as a 'fpu user'
>>>                                          after the schedule window
>>>
>>>     /*
>>>      * Paranoid restore. send a SIGSEGV if we fail to restore the state.
>>>      */
>>>     if (unlikely(restore_fpu_checking(tsk))) {
>>>         drop_init_fpu(tsk);
>>>         force_sig(SIGSEGV, tsk);
>>>         return;
>>>     }
>>>
>>>     tsk->fpu_counter++;
>>> }
>>> ]
>>
>> May I direct your attention to the XenoLinux one:
>>
>> asmlinkage void math_state_restore(void)
>> {
>> 	struct task_struct *me = current;
>>
>> 	/* NB. 'clts' is done for us by Xen during virtual trap. */
>> 	__get_cpu_var(xen_x86_cr0) &= ~X86_CR0_TS;
>> 	if (!used_math())
>> 		init_fpu(me);
>> 	restore_fpu_checking(&me->thread.i387.fxsave);
>> 	task_thread_info(me)->status |= TS_USEDFPU;
>> }
>>
>> Note the comment close to the beginning - the fact that CR0.TS
>> is clear at exception handler entry is actually part of the PV ABI,
>> i.e. by altering hypervisor behavior here you break all forward
>> ported kernels.
>>
>> Nevertheless I agree that there is an issue, but this needs to be
>> fixed on the Linux side (hence adding the Linux maintainers to Cc);
>> this issue was introduced way back in 2.6.26 (before that there
>> was no allocation on that path). It's not clear though whether
>> using GFP_ATOMIC for the allocation would be preferable over
>> stts() before calling the allocation function (and clts() if it
>> succeeded), or whether perhaps to defer the stts() until we
>> actually know the task is being switched out. It's going to be an
>> ugly, Xen-specific hack in any event.
> 
> Was there ever a resolution to this problem? I never saw a comment
> from the Linux Xen PV maintainers.

I think allocating on the context switch is mad and the irq
enable/disable just to allow the allocation looks equally mad.

I had vague plans to maintain a mempool for FPU contexts but couldn't
immediately think how we could guarantee that the pool would be kept
sufficiently populated.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:43:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0Iw-0005ez-A5; Thu, 27 Feb 2014 12:43:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ0Iu-0005eu-Aq
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:43:08 +0000
Received: from [85.158.139.211:36191] by server-12.bemta-5.messagelabs.com id
	7B/F6-15415-BD23F035; Thu, 27 Feb 2014 12:43:07 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393504984!6248636!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32366 invoked from network); 27 Feb 2014 12:43:05 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:43:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106251641"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 12:42:47 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:42:47 -0500
Message-ID: <530F32C6.5060901@citrix.com>
Date: Thu, 27 Feb 2014 12:42:46 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26/02/14 16:24, Roger Pau Monne wrote:
> Add support for MSI message groups for Xen Dom0, using the
> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
> 
> In order to keep track of which pirq is the first one in the group, all
> pirqs in the MSI group except the first have the newly introduced
> PIRQ_MSI_GROUP flag set. This prevents calling PHYSDEVOP_unmap_pirq on
> them, since the unmap must be done with the first pirq in the group.

Reviewed-by: David Vrabel <david.vrabel@citrix.com>

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:43:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0JJ-0005gQ-N4; Thu, 27 Feb 2014 12:43:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ0JH-0005gB-R2
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:43:32 +0000
Received: from [85.158.137.68:52028] by server-3.bemta-3.messagelabs.com id
	64/8A-14520-3F23F035; Thu, 27 Feb 2014 12:43:31 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393505008!4542041!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29568 invoked from network); 27 Feb 2014 12:43:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:43:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106251949"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 12:43:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:43:27 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ0JD-0001dC-CH;
	Thu, 27 Feb 2014 12:43:27 +0000
Date: Thu, 27 Feb 2014 12:43:27 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140227124327.GD16241@zion.uk.xensource.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
	<1392745532.23084.65.camel@kazak.uk.xensource.com>
	<53093051.9040907@citrix.com> <530B4E05.4020900@schaman.hu>
	<530B606F.2070902@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530B606F.2070902@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: jonathan.davies@citrix.com, wei.liu2@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	Zoltan Kiss <zoltan.kiss@schaman.hu>
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 03:08:31PM +0000, Zoltan Kiss wrote:
> On 24/02/14 13:49, Zoltan Kiss wrote:
> >On 22/02/14 23:18, Zoltan Kiss wrote:
> >>On 18/02/14 17:45, Ian Campbell wrote:
> >>>On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> >>>
> >>>Re the Subject: change how? Perhaps "handle foreign mapped pages on the
> >>>guest RX path" would be clearer.
> >>Ok, I'll do that.
> >>
> >>>
>>>>The RX path needs to know if the SKB fragments are stored on
>>>>pages from another domain.
> >>>Does this not need to be done either before the mapping change
> >>>or at the
> >>>same time? -- otherwise you have a window of a couple of commits where
> >>>things are broken, breaking bisectability.
> >>I can move this to the beginning, to keep bisectability. I've
> >>put it here originally because none of these makes sense without
> >>the previous patches.
> >Well, I gave it a close look: to move this to the beginning as a
> >separate patch I would need to move a lot of definitions from the
> >first patch to here (the ubuf_to_vif helper,
> >xenvif_zerocopy_callback etc.). That would be best from a bisect
> >point of view, but from a patch review point of view even worse than
> >now. So the only option I see is to merge this with the first 2
> >patches, so it will be even bigger.
> Actually I was stupid, we can move this patch earlier and introduce
> stubs for those 2 functions. But for the other two patches (#6 and
> #8) it's still true that we can't move them earlier, only merge them
> into the main patch, as they heavily rely on it. #6 is necessary for
> Windows frontends, as they are keen to send too many slots. #8 is
> quite a rare case, happening only if a guest is wedged or malicious
> and sits on the packet.
> So my question still stands: do you prefer perfect bisectability, or
> more segmented patches which are not such a pain to review?
> 

What's the diff stat if you merge those patches?

> >And based on that principle, patches #6 and #8 should be merged
> >there as well, as they solve corner cases introduced by the grant
> >mapping.
> >I don't know how much the bisecting requirements are written in
> >stone. At this moment all the separate patches compile, but after
> >#2 there are new problems solved in #4, #6 and #8. If someone
> >bisects in the middle of this range and runs into these problems,
> >they could quite easily figure out what went wrong by looking at
> >the adjacent patches. So I would recommend keeping the current
> >order. What's your opinion?
> >
> >Zoli

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:46:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0Lw-0005s8-CY; Thu, 27 Feb 2014 12:46:16 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WJ0Lu-0005s0-QY
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 12:46:15 +0000
Received: from [85.158.137.68:10897] by server-14.bemta-3.messagelabs.com id
	1C/7C-08196-5933F035; Thu, 27 Feb 2014 12:46:13 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393505171!3327238!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32642 invoked from network); 27 Feb 2014 12:46:13 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:46:13 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106252316"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 12:46:11 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:46:10 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WJ0Lq-0001ff-B4;
	Thu, 27 Feb 2014 12:46:10 +0000
Message-ID: <530F338D.6030107@eu.citrix.com>
Date: Thu, 27 Feb 2014 12:46:05 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Matt Wilson <msw@linux.com>
References: <1383720072-6242-1-git-send-email-gaoyang.zyh@taobao.com>
	<527A113C02000078000FFF99@nat28.tlf.novell.com>
	<20140227000405.GA11825@u109add4315675089e695.ant.amazon.com>
	<530EFEC8020000780011FB81@nat28.tlf.novell.com>
In-Reply-To: <530EFEC8020000780011FB81@nat28.tlf.novell.com>
X-DLP: MIA2
Cc: Charles Wang <muming.wq@taobao.com>, Ian Campbell <ian.campbell@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Zhu Yanhai <gaoyang.zyh@taobao.com>, Shen Yiben <zituan@taobao.com>,
	David Vrabel <david.vrabel@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Zhu Yanhai <zhu.yanhai@gmail.com>, Wan Jia <jia.wanj@alibaba-inc.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] x86/fpu: CR0.TS should be set before trap
 into PV guest's #NM exception handler
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 08:00 AM, Jan Beulich wrote:
>>>> On 27.02.14 at 01:04, Matt Wilson <msw@linux.com> wrote:
>> On Wed, Nov 06, 2013 at 08:51:56AM +0000, Jan Beulich wrote:
>>> Nevertheless I agree that there is an issue, but this needs to be
>>> fixed on the Linux side (hence adding the Linux maintainers to Cc);
>>> this issue was introduced way back in 2.6.26 (before that there
>>> was no allocation on that path). It's not clear though whether
>>> using GFP_ATOMIC for the allocation would be preferable over
>>> stts() before calling the allocation function (and clts() if it
>>> succeeded), or whether perhaps to defer the stts() until we
>>> actually know the task is being switched out. It's going to be an
>>> ugly, Xen-specific hack in any event.
>> Was there ever a resolution to this problem? I never saw a comment
>> from the Linux Xen PV maintainers.
> Neither did I, so no, I'm not aware of a solution.

Well, we basically have two solutions, I think:

1. Add a flag to the guest kernel that requests Xen to keep the TS bit 
set (and eat the extra cost of the trap on clearing it).

2. In the uncommon case of first use, set the TS bit again (incurring 
the cost of the extra trap) before calling the allocator.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 08:00 AM, Jan Beulich wrote:
>>>> On 27.02.14 at 01:04, Matt Wilson <msw@linux.com> wrote:
>> On Wed, Nov 06, 2013 at 08:51:56AM +0000, Jan Beulich wrote:
>>> Nevertheless I agree that there is an issue, but this needs to be
>>> fixed on the Linux side (hence adding the Linux maintainers to Cc);
>>> this issue was introduced way back in 2.6.26 (before that there
>>> was no allocation on that path). It's not clear though whether
>>> using GFP_ATOMIC for the allocation would be preferable over
>>> stts() before calling the allocation function (and clts() if it
>>> succeeded), or whether perhaps to defer the stts() until we
>>> actually know the task is being switched out. It's going to be an
>>> ugly, Xen-specific hack in any event.
>> Was there ever a resolution to this problem? I never saw a comment
>> from the Linux Xen PV maintainers.
> Neither did I, so no, I'm not aware of a solution.

Well, we basically have two solutions, I think:

1. Add a flag to the guest kernel that requests Xen to keep the TS bit 
set (and eat the extra cost of the trap on clearing it).

2. In the uncommon case of the first use, set the TS bit again
(incurring the cost of the extra trap) before calling the allocation function.

  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:48:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0Nl-000628-4E; Thu, 27 Feb 2014 12:48:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ0Nj-000621-Nt
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 12:48:07 +0000
Received: from [193.109.254.147:16067] by server-3.bemta-14.messagelabs.com id
	BF/88-00432-7043F035; Thu, 27 Feb 2014 12:48:07 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393505285!3246401!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25954 invoked from network); 27 Feb 2014 12:48:06 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:48:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="106252508"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 12:47:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 07:47:55 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ0NX-0001hP-5u;
	Thu, 27 Feb 2014 12:47:55 +0000
Date: Thu, 27 Feb 2014 12:47:50 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Zhang, Yang Z" <yang.z.zhang@intel.com>
In-Reply-To: <A9667DDFB95DB7438FA9D7D576C3D87E0A9F8289@SHSMSX104.ccr.corp.intel.com>
Message-ID: <alpine.DEB.2.02.1402271246340.31489@kaball.uk.xensource.com>
References: <1392965053-1069-1-git-send-email-yang.z.zhang@intel.com>
	<A9667DDFB95DB7438FA9D7D576C3D87E0A9F8289@SHSMSX104.ccr.corp.intel.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: "peter.maydell@linaro.org" <peter.maydell@linaro.org>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	"stefano.stabellini@eu.citrix.com" <stefano.stabellini@eu.citrix.com>,
	"Kay, Allen M" <allen.m.kay@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"anthony@codemonkey.ws" <anthony@codemonkey.ws>,
	"anthony.perard@citrix.com" <anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [PATCH 0/5] xen: add Intel IGD passthrough support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 27 Feb 2014, Zhang, Yang Z wrote:
> Zhang, Yang Z wrote on 2014-02-21:
> > From: Yang Zhang <yang.z.zhang@Intel.com>
> > 
> > The following patches are ported from the Xen qemu-traditional branch;
> > they add Intel IGD passthrough support to upstream Qemu.
> > 
> > To pass the IGD through to a guest, the user needs to add the following
> > lines to the Xen config file: gfx_passthru=1 pci=['00:02.0@2']
> > 
> > Besides, since Xen + upstream Qemu requires SeaBIOS, the user also
> > needs to recompile SeaBIOS with CONFIG_OPTIONROMS_DEPLOYED=y for the
> > IGD passthrough to succeed:
> > 1. change CONFIG_OPTIONROMS_DEPLOYED=y in the file
> >    xen/tools/firmware/seabios-config
> > 2. recompile the tools
> > 
> > I have successfully booted Win 7 and RHEL6u4 guests with an IGD
> > assigned, on a Haswell desktop with the latest Xen + upstream Qemu.
> > 
> > Yang Zhang (5):
> >   xen, gfx passthrough: basic graphics passthrough support
> >   xen, gfx passthrough: reserve 00:02.0 for INTEL IGD
> >   xen, gfx passthrough: create intel isa bridge
> >   xen, gfx passthrough: support Intel IGD passthrough with VT-D
> >   xen, gfx passthrough: add opregion mapping
> >  configure                    |    2 +-
> >  hw/pci-host/piix.c           |   15 ++
> >  hw/pci/pci.c                 |   19 ++
> >  hw/xen/Makefile.objs         |    2 +-
> >  hw/xen/xen-host-pci-device.c |    5 +
> >  hw/xen/xen-host-pci-device.h |    1 +
> >  hw/xen/xen_pt.c              |   10 +
> >  hw/xen/xen_pt.h              |   13 ++-
> >  hw/xen/xen_pt_config_init.c  |   45 +++++-
> >  hw/xen/xen_pt_graphics.c     |  407 ++++++++++++++++++++++++++++++++++++++++++
> >  qemu-options.hx              |    9 +
> >  vl.c                         |    8 +
> >  12 files changed, 532 insertions(+), 4 deletions(-)
> >  create mode 100644 hw/xen/xen_pt_graphics.c
> 
> Ping.

Sorry, but with the Xen 4.4 release and travel I won't be able to review
it for a while; I'll get around to it at some point not too far off.
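For reference, the SeaBIOS step in the quoted series boils down to a one-line config change plus a rebuild. A minimal sketch, operating on a stand-in local copy of the file since tree layouts vary (the real file is tools/firmware/seabios-config per the message above, and a real tree would then run `make -C tools`):

```shell
# Stand-in for tools/firmware/seabios-config from the quoted instructions.
cfg=seabios-config
printf 'CONFIG_OPTIONROMS_DEPLOYED=n\n' > "$cfg"

# Flip the option the series requires, then show the result.
sed -i 's/^CONFIG_OPTIONROMS_DEPLOYED=.*/CONFIG_OPTIONROMS_DEPLOYED=y/' "$cfg"
grep '^CONFIG_OPTIONROMS_DEPLOYED=' "$cfg"
# prints: CONFIG_OPTIONROMS_DEPLOYED=y
```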

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 12:56:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 12:56:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0Vi-0006Ga-9h; Thu, 27 Feb 2014 12:56:22 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peter.maydell@linaro.org>) id 1WJ0Vg-0006GV-MN
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 12:56:20 +0000
Received: from [85.158.137.68:54838] by server-8.bemta-3.messagelabs.com id
	DA/07-16039-3F53F035; Thu, 27 Feb 2014 12:56:19 +0000
X-Env-Sender: peter.maydell@linaro.org
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393505778!3339300!1
X-Originating-IP: [209.85.215.42]
X-SpamReason: No, hits=0.3 required=7.0 tests=RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4202 invoked from network); 27 Feb 2014 12:56:19 -0000
Received: from mail-la0-f42.google.com (HELO mail-la0-f42.google.com)
	(209.85.215.42)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 12:56:19 -0000
Received: by mail-la0-f42.google.com with SMTP id ec20so1644875lab.15
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 04:56:18 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:from:date
	:message-id:subject:to:cc:content-type;
	bh=Pfgox0VYNm9945rJGKDiouddBiyRfqkXfafcbLqPPek=;
	b=Ocy99/DGl6nutc4ip5rRsZiNyOS2wV0a8CWjCK8RwQQBPOqf6duoes5/wRyjUFDWCs
	dLwRodslhHbJZLweoor+kZznY0Nb40RXxTLvhU6NCs0r6bfH+z9LDj6S+lal//vMFtlT
	ztELH2VbSBwAHYCZJCSIjN1Iq8HdvOq5oM1OckkvTW0gj9r1b1a7cWmtG/GYMowBfSiU
	S8pdOvZsec1H3XyYS4HAVp4uiGbkWCY4m616fyJL04NacBQ0iqJ0N91+QG+TbgplcNS5
	Lx3vUL9UH9WUh0RYCkJRjgd064c0TC/cQJwFoQCzEX+OuS1gAKLqDg2q9S7zyaAmCM5i
	6FPg==
X-Gm-Message-State: ALoCoQkJUaYWfNN9C9VgZktbu0m7uDpxc1Wiyl54wk+rkVi8gFf7FlYJikuNcNTuOQ8GS1WKJ4sq
X-Received: by 10.153.9.72 with SMTP id dq8mr1307864lad.62.1393505778460; Thu,
	27 Feb 2014 04:56:18 -0800 (PST)
MIME-Version: 1.0
Received: by 10.112.118.34 with HTTP; Thu, 27 Feb 2014 04:55:58 -0800 (PST)
In-Reply-To: <CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
References: <20140226183454.GA14639@cbox>
	<CACxGe6tjuytsYAn6Hadf0AK+REzHgRydgbHPafL8+Sdtd_tMUA@mail.gmail.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 27 Feb 2014 12:55:58 +0000
Message-ID: <CAFEAcA8DrF6TfAe-bM8NoW+y-cNomg8zxkPB1oe6LY_JG-wsSw@mail.gmail.com>
To: Grant Likely <grant.likely@linaro.org>
Cc: Michael Casadevall <Michael.casadevall@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 26 February 2014 22:35, Grant Likely <grant.likely@linaro.org> wrote:
> On 26 Feb 2014 18:35, "Christoffer Dall" <christoffer.dall@linaro.org>
> wrote:
>>   Serial console:  The platform should provide a console,
>>   based on an emulated pl011, a virtio-console, or a Xen PV console.
>
> For portable disk image, can Xen PV be dropped from the list? pl011 is part
> of SBSA, and virtio is getting standardised, but Xen PV is implementation
> specific.

The underlying question here is to what extent we want to
force VMs to provide a single implementation of something
and to what extent we want to force guests to cope with "any
choice from some small set". Personally I don't think it's
realistic to ask the Xen folk to drop their long-standing
PV bus implementation, and so the right answer is roughly
what we have here, ie "guest kernels need to cope with both
situations". Otherwise Xen is going to go its own way anyway,
and you just end up either (a) ruling out Xen as a platform
for running portable disk images or (b) having an unofficial
requirement to handle Xen PV anyway if you want an actually
portable image, which I would assume distros do.

thanks
-- PMM

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 13:04:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 13:04:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0d4-0006Uq-1w; Thu, 27 Feb 2014 13:03:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ0d3-0006Uk-DL
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 13:03:57 +0000
Received: from [85.158.143.35:36764] by server-3.bemta-4.messagelabs.com id
	E3/92-11539-CB73F035; Thu, 27 Feb 2014 13:03:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393506234!8760601!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5527 invoked from network); 27 Feb 2014 13:03:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 13:03:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104629590"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 13:03:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 08:03:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ0cz-0001v5-HC;
	Thu, 27 Feb 2014 13:03:53 +0000
Date: Thu, 27 Feb 2014 13:03:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Qin Li <qin.l.li@oracle.com>
In-Reply-To: <530DD57A.8010709@oracle.com>
Message-ID: <alpine.DEB.2.02.1402271253000.31489@kaball.uk.xensource.com>
References: <52D6566C.5050302@oracle.com>
	<alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
	<530DD57A.8010709@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-846335373-1393506228=:31489"
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Wed, 26 Feb 2014, Qin Li wrote:
> 
> On 2014/1/16 20:17, Stefano Stabellini wrote:
> 
> > > For a PV-on-HVM guest OS, the "shared_info->vcpu_info->vcpu_time_info"
> > > is already visible. Does the guest OS still need any action to ask the
> > > hypervisor to update this piece of memory periodically?
> > 
> > I don't think you need to ask the hypervisor to update vcpu_time_info
> > periodically; what gave you that idea?
> 
> Hi Stefano,
> 
> Now I see that it is the hypervisor that updates vcpu_time_info, but
> another thing confuses me: HVM guests have a time drift issue because the
> TSC on different vCPUs can be out of sync, especially after domain
> suspend/resume. But how does pvclock actually fix this issue? Let's look
> at how the FreeBSD port calculates the system time:
> 
> ==================
> static uint64_t
> get_nsec_offset(struct vcpu_time_info *tinfo)
> {
>     return (scale_delta(rdtsc() - tinfo->tsc_timestamp,
>         tinfo->tsc_to_system_mul, tinfo->tsc_shift));
> }
> 
> /**
>  * \brief Get the current time, in nanoseconds, since the hypervisor booted.
>  *
>  * \note This function returns the current CPU's idea of this value, unless
>  *       it happens to be less than another CPU's previously determined value.
>  */
> static uint64_t
> xen_fetch_vcpu_time(void)
> {
>     struct vcpu_time_info dst;
>     struct vcpu_time_info *src;
>     uint32_t pre_version;
>     uint64_t now;
>     volatile uint64_t last;
>     struct vcpu_info *vcpu = DPCPU_GET(vcpu_info);
> 
>     src = &vcpu->time;
> 
>     critical_enter();
>     do {
>         pre_version = xen_fetch_vcpu_tinfo(&dst, src);
>         barrier();
>         now = dst.system_time + get_nsec_offset(&dst);
>         barrier();
>     } while (pre_version != src->version);
> 
>     /*
>      * Enforce a monotonically increasing clock time across all
>      * VCPUs.  If our time is too old, use the last time and return.
>      * Otherwise, try to update the last time.
>      */
>     do {
>         last = last_time;
>         if (last > now) {
>             now = last;
>             break;
>         }
>     } while (!atomic_cmpset_64(&last_time, last, now));
> 
>     critical_exit();
> 
>     return (now);
> }
> ==================
> 
> I guess a Linux guest will do the same thing: rdtsc() fetches the current
> timestamp from the currently running vCPU, so the TSC out-of-sync issue is
> still there. It seems to me that pvclock ultimately fixes the time drift
> issue only because of the workaround enforced above, right?

First you should know that TSC is not always guaranteed to be
synchronized across multiple processors, especially on older systems.
On "TSC-safe" systems, Xen would export a consistent TSC to guests, by
setting the vtsc offset and scale appropriately.
--1342847746-846335373-1393506228=:31489
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-846335373-1393506228=:31489--


From xen-devel-bounces@lists.xen.org Thu Feb 27 13:04:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 13:04:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0d4-0006Uq-1w; Thu, 27 Feb 2014 13:03:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ0d3-0006Uk-DL
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 13:03:57 +0000
Received: from [85.158.143.35:36764] by server-3.bemta-4.messagelabs.com id
	E3/92-11539-CB73F035; Thu, 27 Feb 2014 13:03:56 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393506234!8760601!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5527 invoked from network); 27 Feb 2014 13:03:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 13:03:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104629590"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 13:03:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 08:03:53 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ0cz-0001v5-HC;
	Thu, 27 Feb 2014 13:03:53 +0000
Date: Thu, 27 Feb 2014 13:03:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Qin Li <qin.l.li@oracle.com>
In-Reply-To: <530DD57A.8010709@oracle.com>
Message-ID: <alpine.DEB.2.02.1402271253000.31489@kaball.uk.xensource.com>
References: <52D6566C.5050302@oracle.com>
	<alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
	<530DD57A.8010709@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-846335373-1393506228=:31489"
X-DLP: MIA1
Cc: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-846335373-1393506228=:31489
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Wed, 26 Feb 2014, Qin Li wrote:
> 
> On 2014/1/16 20:17, Stefano Stabellini wrote:
> 
> > For a PV-on-HVM guest OS, the "shared_info->vcpu_info->vcpu_time_info"
> > is already visible. Does the guest OS still need to take any action to
> > ask the hypervisor to update this piece of memory periodically?
> 
> I don't think you need to ask the hypervisor to update vcpu_time_info
> periodically; what gave you that idea?
> 
> Hi Stefano,
> 
> Now I see it's the hypervisor that updates vcpu_time_info, but another
> thing confuses me:
> HVM guests have a time drift issue because the TSC on different vCPUs
> can be out of sync, especially after domain suspend/resume.
> But how does pvclock actually fix this issue? Let's see how the FreeBSD
> port calculates the system time:
> 
> ==================
> static uint64_t
> get_nsec_offset(struct vcpu_time_info *tinfo)
> {
> 	return (scale_delta(rdtsc() - tinfo->tsc_timestamp,
> 	    tinfo->tsc_to_system_mul, tinfo->tsc_shift));
> }
> 
> /**
>  * \brief Get the current time, in nanoseconds, since the hypervisor booted.
>  *
>  * \note This function returns the current CPU's idea of this value, unless
>  *       it happens to be less than another CPU's previously determined value.
>  */
> static uint64_t
> xen_fetch_vcpu_time(void)
> {
> 	struct vcpu_time_info dst;
> 	struct vcpu_time_info *src;
> 	uint32_t pre_version;
> 	uint64_t now;
> 	volatile uint64_t last;
> 	struct vcpu_info *vcpu = DPCPU_GET(vcpu_info);
> 
> 	src = &vcpu->time;
> 
> 	critical_enter();
> 	do {
> 		pre_version = xen_fetch_vcpu_tinfo(&dst, src);
> 		barrier();
> 		now = dst.system_time + get_nsec_offset(&dst);
> 		barrier();
> 	} while (pre_version != src->version);
> 
> 	/*
> 	 * Enforce a monotonically increasing clock time across all
> 	 * VCPUs.  If our time is too old, use the last time and return.
> 	 * Otherwise, try to update the last time.
> 	 */
> 	do {
> 		last = last_time;
> 		if (last > now) {
> 			now = last;
> 			break;
> 		}
> 	} while (!atomic_cmpset_64(&last_time, last, now));
> 
> 	critical_exit();
> 
> 	return (now);
> }
> ==================
> 
> I guess the Linux guest does the same thing: rdtsc() fetches the current
> timestamp on the currently running vCPU, so the TSC out-of-sync issue is
> still there.
> It seems to me that pvclock only fixes the time drift issue because of
> the monotonicity workaround enforced above, right?

First, you should know that the TSC is not always guaranteed to be
synchronized across multiple processors, especially on older systems.
On "TSC-safe" systems, Xen exports a consistent TSC to guests by
setting the vtsc offset and scale appropriately.
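
For context, the tsc_to_system_mul / tsc_shift pair used in the quoted
code is a fixed-point scaling factor: tsc_to_system_mul is a 32.32
fixed-point multiplier and tsc_shift a signed pre-shift, which together
convert a TSC delta into nanoseconds. A minimal sketch of what
scale_delta() computes (a reconstruction for illustration, not the
verbatim FreeBSD/Xen source; the 128-bit multiply stands in for the
hand-written assembly the real ports use):

```c
#include <stdint.h>

/* Convert a raw TSC delta to nanoseconds using the pvclock scaling
 * parameters: pre-shift by the signed tsc_shift, then multiply by the
 * 32.32 fixed-point tsc_to_system_mul and keep the integer part. */
static uint64_t
scale_delta(uint64_t delta, uint32_t mul_frac, int8_t shift)
{
	if (shift < 0)
		delta >>= -shift;
	else
		delta <<= shift;

	/* 64x32 -> 96-bit multiply, truncated back to 64 bits of ns. */
	return ((uint64_t)(((unsigned __int128)delta * mul_frac) >> 32));
}
```

For example, a 2 GHz TSC would use mul_frac = 0x80000000 (0.5 in 32.32
fixed point) and shift = 0, so a delta of 1000 cycles scales to 500 ns.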
--1342847746-846335373-1393506228=:31489
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-846335373-1393506228=:31489--


From xen-devel-bounces@lists.xen.org Thu Feb 27 13:13:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 13:13:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0l7-0006nA-O4; Thu, 27 Feb 2014 13:12:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WJ0l6-0006n5-6n
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 13:12:16 +0000
Received: from [85.158.139.211:10065] by server-16.bemta-5.messagelabs.com id
	87/45-05060-FA93F035; Thu, 27 Feb 2014 13:12:15 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393506732!6599795!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18245 invoked from network); 27 Feb 2014 13:12:12 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-15.tower-206.messagelabs.com with SMTP;
	27 Feb 2014 13:12:12 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1RDBCs2003041
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 08:11:12 -0500
Received: from yakj.usersys.redhat.com (dhcp-176-198.mxp.redhat.com
	[10.32.176.198])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1RDB38o023516; Thu, 27 Feb 2014 08:11:04 -0500
Message-ID: <530F3967.6030805@redhat.com>
Date: Thu, 27 Feb 2014 14:11:03 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>, Waiman Long <Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com>
In-Reply-To: <530F2B8F.1010401@citrix.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 27/02/2014 13:11, David Vrabel ha scritto:
>>> This patch adds para-virtualization support to the queue spinlock code
>>> by enabling the queue head to kick the lock holder CPU, if known,
>>> when the lock isn't released for a certain amount of time. It
>>> also enables the mutual monitoring of the queue head CPU and the
>>> following node CPU in the queue to make sure that their CPUs will
>>> stay scheduled in.
> I'm not really understanding how this is supposed to work.  There
> appears to be an assumption that a guest can keep one of its VCPUs
> running by repeatedly kicking it?  This is not possible under Xen and I
> doubt it's possible under KVM or any other hypervisor.

KVM allows any VCPU to wake up a currently halted VCPU of its choice, 
see Documentation/virtual/kvm/hypercalls.txt.

   5. KVM_HC_KICK_CPU
   ------------------------
   Architecture: x86
   Status: active
   Purpose: Hypercall used to wakeup a vcpu from HLT state
   Usage example : A vcpu of a paravirtualized guest that is busywaiting
   in guest kernel mode for an event to occur (ex: a spinlock to become
   available) can execute HLT instruction once it has busy-waited for more
   than a threshold time-interval. Execution of HLT instruction would cause
   the hypervisor to put the vcpu to sleep until occurrence of an appropriate
   event. Another vcpu of the same guest can wakeup the sleeping vcpu by
   issuing KVM_HC_KICK_CPU hypercall, specifying APIC ID (a1) of the vcpu
   to be woken up. An additional argument (a0) is used in the hypercall for
   future use.

This is the same as a dummy IPI, but cheaper (about 2000 clock cycles 
wasted on the source VCPU, and the latency on the destination is about 
half; an IPI costs roughly the same on the source and much more on the 
destination).
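
A toy user-space model of the busywait-then-halt protocol the KVM
documentation describes (my own naming throughout; the counters stand
in for the HLT instruction and the KVM_HC_KICK_CPU hypercall, which
obviously cannot run outside a guest kernel):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_THRESHOLD 1024  /* busy-wait budget before "executing HLT" */

static atomic_bool lock_released;
static atomic_bool vcpu_halted;
static int hlt_count;   /* stands in for the HLT instruction */
static int kick_count;  /* stands in for KVM_HC_KICK_CPU */

/* Waiter vCPU: spin on the lock up to a threshold, then "halt". */
static void
wait_for_lock(void)
{
	int spins = 0;

	while (!atomic_load(&lock_released)) {
		if (++spins >= SPIN_THRESHOLD) {
			atomic_store(&vcpu_halted, true);
			hlt_count++;  /* hypervisor would sleep this vCPU */
			return;
		}
	}
}

/* Holder vCPU: release the lock, then kick the waiter if it halted. */
static void
release_and_kick(void)
{
	atomic_store(&lock_released, true);
	if (atomic_exchange(&vcpu_halted, false))
		kick_count++;  /* hypervisor would wake the halted vCPU */
}
```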

It looks like Xen could use an event channel.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 13:13:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 13:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0lY-0006nk-4m; Thu, 27 Feb 2014 13:12:44 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <cov@codeaurora.org>) id 1WJ0lW-0006nd-7B
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 13:12:42 +0000
Received: from [85.158.137.68:38418] by server-3.bemta-3.messagelabs.com id
	2D/66-14520-9C93F035; Thu, 27 Feb 2014 13:12:41 +0000
X-Env-Sender: cov@codeaurora.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393506758!4559460!1
X-Originating-IP: [198.145.11.231]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12400 invoked from network); 27 Feb 2014 13:12:40 -0000
Received: from smtp.codeaurora.org (HELO smtp.codeaurora.org) (198.145.11.231)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 13:12:40 -0000
Received: from smtp.codeaurora.org (localhost [127.0.0.1])
	by smtp.codeaurora.org (Postfix) with ESMTP id 0816B13EF02;
	Thu, 27 Feb 2014 13:12:38 +0000 (UTC)
Received: by smtp.codeaurora.org (Postfix, from userid 486)
	id EA5C813EFA2; Thu, 27 Feb 2014 13:12:37 +0000 (UTC)
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on
	pdx-caf-smtp.dmz.codeaurora.org
X-Spam-Level: 
X-Spam-Status: No, score=-2.9 required=2.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham version=3.3.1
Received: from [10.228.82.110] (rrcs-67-52-130-30.west.biz.rr.com
	[67.52.130.30])
	(using TLSv1 with cipher DHE-RSA-CAMELLIA256-SHA (256/256 bits))
	(No client certificate requested)
	(Authenticated sender: cov@smtp.codeaurora.org)
	by smtp.codeaurora.org (Postfix) with ESMTPSA id 7DCA613EF02;
	Thu, 27 Feb 2014 13:12:36 +0000 (UTC)
Message-ID: <530F39C3.5060808@codeaurora.org>
Date: Thu, 27 Feb 2014 08:12:35 -0500
From: Christopher Covington <cov@codeaurora.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130106 Thunderbird/17.0.2
MIME-Version: 1.0
To: Christoffer Dall <christoffer.dall@linaro.org>
References: <20140226183454.GA14639@cbox> <530E402C.9040502@codeaurora.org>
	<20140226195111.GB16149@cbox>
In-Reply-To: <20140226195111.GB16149@cbox>
X-Virus-Scanned: ClamAV using ClamSMTP
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Christoffer,

On 02/26/2014 02:51 PM, Christoffer Dall wrote:
> On Wed, Feb 26, 2014 at 02:27:40PM -0500, Christopher Covington wrote:

>>> Image format
>>> ------------
>>> The image format, as presented to the VM, needs to be well-defined in
>>> order for prepared disk images to be bootable across various
>>> virtualization implementations.
>>>
>>> The raw disk format as presented to the VM must be partitioned with a
>>> GUID Partition Table (GPT).  The bootable software must be placed in the
>>> EFI System Partition (ESP), using the UEFI removable media path, and
>>> must be an EFI application complying to the UEFI Specification 2.4
>>> Revision A [6].
>>>
>>> The ESP partition's GPT entry's partition type GUID must be
>>> C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
>>> formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
>>>
>>> The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
>>> execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
>>> state.
>>>
>>> This ensures that tools for both Xen and KVM can load a binary UEFI
>>> firmware which can read and boot the EFI application in the disk image.
>>>
>>> A typical scenario will be GRUB2 packaged as an EFI application, which
>>> mounts the system boot partition and boots Linux.
>>>
>>>
>>> Virtual Firmware
>>> ----------------
>>> The VM system must be able to boot the EFI application in the ESP.  It
>>> is recommended that this is achieved by loading a UEFI binary as the
>>> first software executed by the VM, which then executes the EFI
>>> application.  The UEFI implementation should be compliant with UEFI
>>> Specification 2.4 Revision A [6] or later.
>>>
>>> This document strongly recommends that the VM implementation supports
>>> persistent environment storage for virtual firmware implementation in
>>> order to ensure probable use cases such as adding additional disk images
>>> to a VM or running installers to perform upgrades.
>>>
>>> The binary UEFI firmware implementation should not be distributed as
>>> part of the VM image, but is specific to the VM implementation.
>>
>> Can you elaborate on the motivation for requiring that the kernel be stuffed
>> into a disk image and for requiring such a heavyweight bootloader/firmware? By
>> doing so you would seem to exclude those requiring an optimized boot process.
>>
> 
> What's the alternative?  Shipping kernels externally and loading them
> externally?  Sure you can do that, but then distros can't upgrade the
> kernel themselves, and you have to come up with a convention for how to
> ship kernels, initrd's etc.

The self-hosted upgrades use case makes sense. I can imagine using a
pass-through or network filesystem to do it in the case of external loading,
something like the following. In the case of 9P, the tag could be the same as
the GPT GUID. Everything could still be in the /EFI/BOOT directory. The kernel
Image could be at BOOT(ARM|AA64).IMG, the zImage at .ZMG, and the initramfs at
.RFS. It's more work for distros to support multiple upgrade methods, though,
so maybe those who want an optimized boot process should make an external
loader capable of carving the necessary components out of a VFAT filesystem
inside a GPT partitioned image instead.
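
As an aside on the type GUID in the quoted spec: GPT stores the first
three fields of a GUID little-endian on disk, so a tool checking a
partition entry for the ESP must byte-swap them. A minimal sketch
(hypothetical helper, not from the spec draft):

```c
#include <stdint.h>
#include <string.h>

/* Build the 16 on-disk bytes of the ESP partition-type GUID
 * C12A7328-F81F-11D2-BA4B-00A0C93EC93B. GPT stores the first three
 * fields little-endian ("mixed endian"); the last eight bytes are
 * stored as-is. */
static void
esp_guid_bytes(uint8_t out[16])
{
	const uint32_t d1 = 0xC12A7328;
	const uint16_t d2 = 0xF81F, d3 = 0x11D2;
	const uint8_t rest[8] = {
		0xBA, 0x4B, 0x00, 0xA0, 0xC9, 0x3E, 0xC9, 0x3B
	};
	int i;

	for (i = 0; i < 4; i++)
		out[i] = (d1 >> (8 * i)) & 0xff;     /* little-endian */
	for (i = 0; i < 2; i++) {
		out[4 + i] = (d2 >> (8 * i)) & 0xff;
		out[6 + i] = (d3 >> (8 * i)) & 0xff;
	}
	memcpy(out + 8, rest, 8);                    /* byte sequence */
}
```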

>>> VM Platform
>>> -----------
>>> The specification does not mandate any specific memory map.  The guest
>>> OS must be able to enumerate all processing elements, devices, and
>>> memory through HW description data (FDT, ACPI) or a bus-specific
>>> mechanism such as PCI.
>>>
>>> The virtual platform must support at least one of the following ARM
>>> execution states:
>>>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>>>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>>>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
>>>
>>> It is recommended to support both (2) and (3) on aarch64 capable
>>> physical systems.
>>>
>>> The virtual hardware platform must provide a number of mandatory
>>> peripherals:
>>>
>>>   Serial console:  The platform should provide a console,
>>>   based on an emulated pl011, a virtio-console, or a Xen PV console.
>>>
>>>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>>>   limits the number of virtual CPUs to 8 cores; newer GIC versions
>>>   remove this limitation.
>>>
>>>   The ARM virtual timer and counter should be available to the VM as
>>>   per the ARM Generic Timers specification in the ARM ARM [1].
>>>
>>>   A hotpluggable bus to support hotplug of at least block and network
>>>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>>>   bus.
>>
>> Is VirtIO hotplug capable? Over PCI or MMIO transports or both?
> 
> VirtIO devices attached on a PCIe bus are hotpluggable, the emulated
> PCIe bus itself would not have anything to do with virtio, except that
> virtio devices can hang off of it.  AFAIU.

So network/block devices attached only as memory-mapped peripherals (like SMSC
or PL SD/MMC) or over VirtIO-MMIO won't meet the specification? Is
PCI/VirtIO-PCI on ARM production-ready? What's the motivation for requiring
hotplug?

Thanks,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi Christoffer,

On 02/26/2014 02:51 PM, Christoffer Dall wrote:
> On Wed, Feb 26, 2014 at 02:27:40PM -0500, Christopher Covington wrote:

>>> Image format
>>> ------------
>>> The image format, as presented to the VM, needs to be well-defined in
>>> order for prepared disk images to be bootable across various
>>> virtualization implementations.
>>>
>>> The raw disk format as presented to the VM must be partitioned with a
>>> GUID Partition Table (GPT).  The bootable software must be placed in the
>>> EFI System Partition (ESP), using the UEFI removable media path, and
>>> must be an EFI application complying to the UEFI Specification 2.4
>>> Revision A [6].
>>>
>>> The ESP partition's GPT entry's partition type GUID must be
>>> C12A7328-F81F-11D2-BA4B-00A0C93EC93B and the file system must be
>>> formatted as FAT32/vfat as per Section 12.3.1.1 in [6].
>>>
>>> The removable media path is \EFI\BOOT\BOOTARM.EFI for the aarch32
>>> execution state and is \EFI\BOOT\BOOTAA64.EFI for the aarch64 execution
>>> state.
>>>
>>> This ensures that tools for both Xen and KVM can load a binary UEFI
>>> firmware which can read and boot the EFI application in the disk image.
>>>
>>> A typical scenario will be GRUB2 packaged as an EFI application, which
>>> mounts the system boot partition and boots Linux.
>>>
>>>
>>> Virtual Firmware
>>> ----------------
>>> The VM system must be able to boot the EFI application in the ESP.  It
>>> is recommended that this is achieved by loading a UEFI binary as the
>>> first software executed by the VM, which then executes the EFI
>>> application.  The UEFI implementation should be compliant with UEFI
>>> Specification 2.4 Revision A [6] or later.
>>>
>>> This document strongly recommends that the VM implementation support
>>> persistent environment storage for the virtual firmware implementation,
>>> in order to support likely use cases such as adding additional disk
>>> images to a VM or running installers to perform upgrades.
>>>
>>> The binary UEFI firmware implementation should not be distributed as
>>> part of the VM image, but is specific to the VM implementation.
>>
>> Can you elaborate on the motivation for requiring that the kernel be stuffed
>> into a disk image and for requiring such a heavyweight bootloader/firmware? By
>> doing so you would seem to exclude those requiring an optimized boot process.
>>
> 
> What's the alternative?  Shipping kernels externally and loading them
> externally?  Sure you can do that, but then distros can't upgrade the
> kernel themselves, and you have to come up with a convention for how to
> ship kernels, initrd's etc.

The self-hosted upgrades use case makes sense. I can imagine using a
pass-through or network filesystem to do it in the case of external loading,
something like the following. In the case of 9P, the tag could be the same as
the GPT GUID. Everything could still be in the /EFI/BOOT directory. The kernel
Image could be at BOOT(ARM|AA64).IMG, the zImage at .ZMG, and the initramfs at
.RFS. It's more work for distros to support multiple upgrade methods, though,
so maybe those who want an optimized boot process should make an external
loader capable of carving the necessary components out of a VFAT filesystem
inside a GPT partitioned image instead.
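
One practical wrinkle for any such loader scanning the GPT for the ESP: per
the UEFI spec, partition-type GUIDs are stored on disk in mixed-endian form
(the first three fields little-endian), so a byte-level scan must compare the
swapped layout. A minimal Python sketch, illustrative only:

```python
import uuid

# ESP partition type GUID quoted from the spec text above.
ESP_TYPE = uuid.UUID("C12A7328-F81F-11D2-BA4B-00A0C93EC93B")

def is_esp(type_guid_on_disk: bytes) -> bool:
    """Compare a 16-byte GPT partition-entry type GUID, as stored on
    disk (mixed-endian per the UEFI spec), against the ESP type GUID."""
    return type_guid_on_disk == ESP_TYPE.bytes_le

# The first three GUID fields are little-endian on disk, so the entry
# begins 28 73 2A C1 rather than C1 2A 73 28.
assert ESP_TYPE.bytes_le[:4] == bytes.fromhex("28732ac1")
assert is_esp(ESP_TYPE.bytes_le)
```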

>>> VM Platform
>>> -----------
>>> The specification does not mandate any specific memory map.  The guest
>>> OS must be able to enumerate all processing elements, devices, and
>>> memory through HW description data (FDT, ACPI) or a bus-specific
>>> mechanism such as PCI.
>>>
>>> The virtual platform must support at least one of the following ARM
>>> execution states:
>>>   (1) aarch32 virtual CPUs on aarch32 physical CPUs
>>>   (2) aarch32 virtual CPUs on aarch64 physical CPUs
>>>   (3) aarch64 virtual CPUs on aarch64 physical CPUs
>>>
>>> It is recommended to support both (2) and (3) on aarch64 capable
>>> physical systems.
>>>
>>> The virtual hardware platform must provide a number of mandatory
>>> peripherals:
>>>
>>>   Serial console:  The platform should provide a console,
>>>   based on an emulated pl011, a virtio-console, or a Xen PV console.
>>>
>>>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
>>>   limits the number of virtual CPUs to 8; newer GIC versions remove
>>>   this limitation.
>>>
>>>   The ARM virtual timer and counter should be available to the VM as
>>>   per the ARM Generic Timers specification in the ARM ARM [1].
>>>
>>>   A hotpluggable bus to support hotplug of at least block and network
>>>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
>>>   bus.
>>
>> Is VirtIO hotplug capable? Over PCI or MMIO transports or both?
> 
> VirtIO devices attached to a PCIe bus are hotpluggable; the emulated
> PCIe bus itself would not have anything to do with virtio, except that
> virtio devices can hang off of it.  AFAIU.

So network/block devices implemented only as memory-mapped peripherals (like
SMSC or PL SD/MMC) or over VirtIO-MMIO won't meet the specification? Is
PCI/VirtIO-PCI on ARM production ready? What's the motivation for requiring
hotplug?

Thanks,
Christopher

-- 
Employee of Qualcomm Innovation Center, Inc.
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by the Linux Foundation.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 13:16:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 13:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ0on-0006wM-Ux; Thu, 27 Feb 2014 13:16:05 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJ0oj-0006wF-Pd
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 13:16:03 +0000
Received: from [85.158.139.211:39954] by server-9.bemta-5.messagelabs.com id
	E6/7F-11237-19A3F035; Thu, 27 Feb 2014 13:16:01 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393506958!6619583!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17832 invoked from network); 27 Feb 2014 13:16:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 13:16:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,554,1389744000"; d="scan'208";a="104632997"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 13:15:58 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 08:15:58 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJ0of-00025P-Td;
	Thu, 27 Feb 2014 13:15:57 +0000
Date: Thu, 27 Feb 2014 13:15:52 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Dario Faggioli <dario.faggioli@citrix.com>
In-Reply-To: <1393496069.3921.14.camel@Solace>
Message-ID: <alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	xen-devel@lists.xen.org, Julien Grall <julien.grall@citrix.com>,
	etrudeau@broadcom.com, Arianna Avanzini <avanzini.arianna@gmail.com>,
	Viktor Kleinik <viktor.kleinik@globallogic.com>
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 27 Feb 2014, Dario Faggioli wrote:
> On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > Hi all,
> > 
> Hi,
> 
> > Does anyone know something about future plans to implement
> > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> >
> I think Arianna is working on an implementation of the former
> (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> list soon, isn't it so, Arianna?

Eric Trudeau did some work in the area too:

http://marc.info/?l=xen-devel&m=137338996422503
http://marc.info/?l=xen-devel&m=137365750318936

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:01:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1WA-0007iU-SV; Thu, 27 Feb 2014 14:00:54 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WJ1W9-0007iP-IM
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:00:53 +0000
Received: from [85.158.143.35:50522] by server-3.bemta-4.messagelabs.com id
	1F/7F-11539-4154F035; Thu, 27 Feb 2014 14:00:52 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393509651!8778126!1
X-Originating-IP: [212.227.17.13]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22953 invoked from network); 27 Feb 2014 14:00:52 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.17.13)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 14:00:52 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue104) with ESMTP (Nemesis)
	id 0M1od8-1X79OQ2WHy-00ticd; Thu, 27 Feb 2014 15:00:44 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: linux-arm-kernel@lists.infradead.org
Date: Thu, 27 Feb 2014 15:00:44 +0100
Message-ID: <11174980.YJ21cfsHoG@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <alpine.DEB.2.02.1402271229370.31489@kaball.uk.xensource.com>
References: <20140226183454.GA14639@cbox>
	<20140226214843.GD12169@bivouac.eciton.net>
	<alpine.DEB.2.02.1402271229370.31489@kaball.uk.xensource.com>
MIME-Version: 1.0
X-Provags-ID: V02:K0:OLbtDgJ8Zm10gsyQh4vJCFhRDIFK2W6p8iIqtWOZ76Q
	k9+4+hUUArcAjRXMj/ci9LNv1SJZk/+qkc/xw67j7cSwD5lea4
	YavZSlERWAUyH/k37EabGM5I4dRGZZDnzAWEzXAJ+BFruI9lxo
	ZaBAxOAghEeaNECzlKkPkc4fG4XaqUNrreeQucqHKYwkwdJeuE
	bThgIxeEr1nK+Bo/fkyReEXWvHvVuea3AMzEcHXHwDUmlcDMD/
	T+DhVdsgO4oPvgEt2wD6f8vczdw4Cv3isSYLdmwdCjzj6zZh1P
	nWKkZ6U+5wvawcwHlSWdxq73EvUf7CCGXqg6enDi9LokoWfHaX
	zwDRT1GZEvKa67T4mGNQ=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	xen-devel@lists.xen.org, Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	Christoffer Dall <christoffer.dall@linaro.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thursday 27 February 2014 12:31:55 Stefano Stabellini wrote:
> On Wed, 26 Feb 2014, Leif Lindholm wrote:
> > > >   no FDT.  In this case, the VM implementation must provide ACPI, and
> > > >   the OS must be able to locate the ACPI root pointer through the UEFI
> > > >   system table.
> > > > 
> > > > For more information about the arm and arm64 boot conventions, see
> > > > Documentation/arm/Booting and Documentation/arm64/booting.txt in the
> > > > Linux kernel source tree.
> > > > 
> > > > For more information about UEFI and ACPI booting, see [4] and [5].
> > > 
> > > What's the point of having ACPI in a virtual machine? You wouldn't
> > > need to abstract any of the hardware in AML since you already know
> > > what the virtual hardware is, so I can't see how this would help
> > > anyone.
> > 
> > The point is that if we need to share any real hw then we need to use
> > whatever the host has.

I would be more comfortable defining in the spec that you cannot share
hardware at all. Obviously that doesn't stop anyone from actually
sharing hardware with the guest, but at that point the setup becomes
noncompliant with this spec, and you can no longer expect a compliant
guest image to run on that hardware. That is something we can't
guarantee anyway, because we don't know what drivers might be needed.

Also, there is no way to generally do this with either FDT or ACPI:
In the former case, the hypervisor needs to modify any properties
that point to other device nodes so that they point to nodes visible
to the guest. That may be possible for simple things like IRQs
and reg properties, but as soon as you get into stuff like dmaengine,
pinctrl or PHY references, you just can't solve it in a generic way.
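
A toy model of that fixup problem (not real FDT code; the node names are
invented) shows where it breaks: phandle references can be renumbered only
when the provider node is itself exposed to the guest, which is exactly what
fails for dmaengine, pinctrl, or PHY references:

```python
# Toy model of device-tree phandle fixup (illustrative only; names invented).
# The hypervisor can renumber a phandle reference only when the referenced
# provider node is also visible to the guest.
guest_visible = {"gic", "uart0"}   # "dma0" stays host-only
guest_phandles = {n: i for i, n in enumerate(sorted(guest_visible), 1)}

def remap(refs):
    """Translate a property's node references into guest phandles."""
    out = []
    for node in refs:
        if node not in guest_visible:
            # e.g. dmas = <&dma0 ...>: the provider has no guest-side
            # identity, so the property cannot be fixed up generically.
            raise ValueError(node + ": provider not visible to guest")
        out.append(guest_phandles[node])
    return out

assert remap(["gic"]) == [1]       # interrupt-parent: remappable
try:
    remap(["dma0"])                # dmaengine reference: unfixable
except ValueError:
    pass
```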

For ACPI it's probably worse: any AML methods that the host has
are unlikely to work in the guest, and it's impossible to translate
them at all.

Obviously things are different for Xen Dom0 where we share *all* devices
between host and guest, and we just use the host firmware interfaces.
That case again cannot be covered by the generic VM system specification.

> I dislike ACPI as much as the next guy, but unfortunately if the host
> only supports ACPI, the Linux driver for a particular device only works
> together with ACPI, and you want to assign that device to a VM, then we
> might be forced to use ACPI to describe it.

Can anyone think of an example where this would actually work?

The only case I can see where it's possible to share a device with
a guest without the hypervisor building up the description is for
PCI functions that are passed through with an IOMMU. Those won't
need ACPI or DT support, however.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:03:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1Yh-0007nz-G1; Thu, 27 Feb 2014 14:03:31 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ1Yg-0007nk-AL
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:03:30 +0000
Received: from [193.109.254.147:52251] by server-8.bemta-14.messagelabs.com id
	B6/AB-18529-1B54F035; Thu, 27 Feb 2014 14:03:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393509806!7224434!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7063 invoked from network); 27 Feb 2014 14:03:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:03:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106273193"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:03:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJ1YK-0002lQ-Rr; Thu, 27 Feb 2014 14:03:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:03:06 +0000
Message-ID: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA1
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Tamas K Lengyel <tamas.lengyel@zentific.com>
Subject: [Xen-devel] [PATCH 1/2] mem_event: Return previous value of
	CR0/CR3/CR4 on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tamas K Lengyel <tamas.lengyel@zentific.com>

This patch extends the information returned for CR0/CR3/CR4 register
write events with the previous value of the register. The old value
was already passed to the trap processing function, just never placed
into the returned request. By returning this value, applications
subscribing to the CR events obtain additional context about the event.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@zentific.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/hvm.c         |    4 ++++
 xen/include/public/mem_event.h |    6 +++---
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 08fec34..9e85c13 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4808,6 +4808,10 @@ static int hvm_memory_event_traps(long p, uint32_t reason,
         req.gla = gla;
         req.gla_valid = 1;
     }
+    else
+    {
+        req.gla = old;
+    }
     
     mem_event_put_request(d, &d->mem_event->access, &req);
     
diff --git a/xen/include/public/mem_event.h b/xen/include/public/mem_event.h
index c9ed546..3831b41 100644
--- a/xen/include/public/mem_event.h
+++ b/xen/include/public/mem_event.h
@@ -40,9 +40,9 @@
 /* Reasons for the memory event request */
 #define MEM_EVENT_REASON_UNKNOWN     0    /* typical reason */
 #define MEM_EVENT_REASON_VIOLATION   1    /* access violation, GFN is address */
-#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is CR0 value */
-#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is CR3 value */
-#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is CR4 value */
+#define MEM_EVENT_REASON_CR0         2    /* CR0 was hit: gfn is new CR0 value, gla is previous */
+#define MEM_EVENT_REASON_CR3         3    /* CR3 was hit: gfn is new CR3 value, gla is previous */
+#define MEM_EVENT_REASON_CR4         4    /* CR4 was hit: gfn is new CR4 value, gla is previous */
 #define MEM_EVENT_REASON_INT3        5    /* int3 was hit: gla/gfn are RIP */
 #define MEM_EVENT_REASON_SINGLESTEP  6    /* single step was invoked: gla/gfn are RIP */
 #define MEM_EVENT_REASON_MSR         7    /* MSR was hit: gfn is MSR value, gla is MSR address;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:03:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1Yi-0007o8-V9; Thu, 27 Feb 2014 14:03:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ1Yg-0007nn-Qt
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:03:31 +0000
Received: from [193.109.254.147:52376] by server-7.bemta-14.messagelabs.com id
	C3/50-23424-2B54F035; Thu, 27 Feb 2014 14:03:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393509806!7224434!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7192 invoked from network); 27 Feb 2014 14:03:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:03:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106273195"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:03:09 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:03:08 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJ1YK-0002lQ-SJ; Thu, 27 Feb 2014 14:03:08 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:03:07 +0000
Message-ID: <1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Tim Deegan <tim@xen.org>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH 2/2] [RFC] xen/console: Provide timestamps as an
	offset since boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

** This is RFC, and not intended to be applied in its current state **

There exists a "console_timestamps" command line option which causes full
date/time stamps to be printed, e.g.

    (XEN) ENABLING IO-APIC IRQs
    (XEN)  -> Using old ACK method
    (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
    (XEN) TSC deadline timer enabled
    (XEN) [2014-02-27 12:29:27] Platform timer is 14.318MHz HPET
    (XEN) [2014-02-27 12:29:27] Allocated console ring of 64 KiB.
    (XEN) [2014-02-27 12:29:27] mwait-idle: MWAIT substates: 0x21120

However, this only has single-second granularity.  This patch replaces the
string printed with one which matches Linux kernel timestamps, in seconds and
microseconds since boot.

The result looks like:

    (XEN) [    0.158968] VMX: Supported advanced features:
    (XEN) [    0.159369]  - APIC TPR shadow
    (XEN) [    0.159771] HVM: ASIDs disabled.
    (XEN) [    0.160203] HVM: VMX enabled
    (XEN) [    0.160599] HVM: Hardware Assisted Paging (HAP) not detected

Although it looks rather worse interleaved with kernel timestamps:

    [    0.300276] pci 0000:00:1c.0: System wakeup disabled by ACPI
    (XEN) [    3.286620] PCI add device 0000:00:1c.0
    [    0.301169] pci 0000:00:1c.4: System wakeup disabled by ACPI
    (XEN) [    3.287508] PCI add device 0000:00:1c.4
    [    0.302078] pci 0000:00:1c.5: System wakeup disabled by ACPI
    (XEN) [    3.288420] PCI add device 0000:00:1c.5
    [    0.302899] pci 0000:00:1d.0: System wakeup disabled by ACPI
    (XEN) [    3.289229] PCI add device 0000:00:1d.0

Some latent bugs are emphasised by these changes.  There are steps in time
when the TSC scale is calculated, and when the platform time is initialised ...

    (XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
    (XEN) [   27.553075] Detected 2793.232 MHz processor.
    (XEN) [   27.558277] Initing memory sharing.
    (XEN) [   27.562502] Cannot set CPU feature mask on CPU#0
    (XEN) [   27.567852] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
    (XEN) [   27.577687] Intel machine check reporting enabled
    (XEN) [   27.583147] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
    (XEN) [   27.591407] PCI: MCFG area at e0000000 reserved in E820
    (XEN) [   27.597369] PCI: Using MCFG for segment 0000 bus 00-3f
    (XEN) [   27.603238] I/O virtualisation disabled
    (XEN) [   27.608093] ENABLING IO-APIC IRQs
    (XEN) [   27.612136]  -> Using new ACK method
    (XEN) [   27.616601] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
    (XEN) [    0.153431] Platform timer is 14.318MHz HPET

... and the synchronisation across CPUs needs to be earlier during AP bringup.

    (XEN) [    0.161004] HVM: PVH mode not supported on this platform
    (XEN) [    0.000000] Cannot set CPU feature mask on CPU#1
    (XEN) [    0.182299] Brought up 2 CPUs

Is it likely that people would still want the option of full date/time
stamps?  If so, that code will have to be kept.

Comments/suggestions welcome, especially regarding the interleaving of Xen and
dom0 timestamps.

~Andrew
---
 xen/drivers/char/console.c |   12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..e2d9521 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -548,21 +548,19 @@ static int printk_prefix_check(char *p, char **pp)
 
 static void printk_start_of_line(const char *prefix)
 {
-    struct tm tm;
     char tstr[32];
+    uint64_t sec, nsec;
 
     __putstr(prefix);
 
     if ( !opt_console_timestamps )
         return;
 
-    tm = wallclock_time();
-    if ( tm.tm_mday == 0 )
-        return;
+    sec = NOW();
+    nsec = do_div(sec, 1000000000);
 
-    snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
-             1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
-             tm.tm_hour, tm.tm_min, tm.tm_sec);
+    snprintf(tstr, sizeof(tstr), "[%5"PRIu64".%06"PRIu64"] ",
+             sec, nsec/1000);
     __putstr(tstr);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:12:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1gj-0008Bm-35; Thu, 27 Feb 2014 14:11:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <etrudeau@broadcom.com>) id 1WJ1gh-0008Bh-PN
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:11:48 +0000
Received: from [85.158.137.68:42329] by server-5.bemta-3.messagelabs.com id
	DD/75-04712-3A74F035; Thu, 27 Feb 2014 14:11:47 +0000
X-Env-Sender: etrudeau@broadcom.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393510304!3301184!1
X-Originating-IP: [216.31.210.64]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32048 invoked from network); 27 Feb 2014 14:11:45 -0000
Received: from mail-gw3-out.broadcom.com (HELO mail-gw3-out.broadcom.com)
	(216.31.210.64) by server-16.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 14:11:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389772800"; d="scan'208";a="16681064"
Received: from irvexchcas06.broadcom.com (HELO
	IRVEXCHCAS06.corp.ad.broadcom.com) ([10.9.208.53])
	by mail-gw3-out.broadcom.com with ESMTP; 27 Feb 2014 06:23:21 -0800
Received: from SJEXCHCAS06.corp.ad.broadcom.com (10.16.203.14) by
	IRVEXCHCAS06.corp.ad.broadcom.com (10.9.208.53) with Microsoft SMTP
	Server (TLS) id 14.3.174.1; Thu, 27 Feb 2014 06:11:43 -0800
Received: from SJEXCHMB09.corp.ad.broadcom.com ([fe80::3da7:665e:cc78:181f])
	by SJEXCHCAS06.corp.ad.broadcom.com ([::1]) with mapi id 14.03.0174.001;
	Thu, 27 Feb 2014 06:11:43 -0800
From: Eric Trudeau <etrudeau@broadcom.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Dario Faggioli
	<dario.faggioli@citrix.com>
Thread-Topic: [Xen-devel] ARM: access to iomem and HW IRQ
Thread-Index: AQHPM74WcrCQSMHRDEiH6tQL9Y9lQZrJIMTA
Date: Thu, 27 Feb 2014 14:11:42 +0000
Message-ID: <FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.16.203.100]
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Viktor Kleinik <viktor.kleinik@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Thursday, February 27, 2014 8:16 AM
> To: Dario Faggioli
> Cc: Viktor Kleinik; xen-devel@lists.xen.org; Arianna Avanzini; Stefano Stabellini;
> Julien Grall; Eric Trudeau
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> 
> On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > Hi all,
> > >
> > Hi,
> >
> > > Does anyone knows something about future plans to implement
> > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > >
> > I think Arianna is working on an implementation of the former
> > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > list soon, isn't it so, Arianna?
> 
> Eric Trudeau did some work in the area too:
> 
> http://marc.info/?l=xen-devel&m=137338996422503
> http://marc.info/?l=xen-devel&m=137365750318936

I checked our repo, and the IRQ-routing changes to DomUs in the second
patch URL Stefano provided above are up to date with what we have been
using on our platforms.  We made no further changes after that patch,
i.e. we kept the 100 msec maximum wait for a domain to finish an ISR
when destroying it.

We also added support for a DomU to map I/O memory via the iomem
configuration parameter.  Unfortunately, due to time constraints I
cannot provide an official patch against recent upstream Xen, but below
is a patch based on a commit from last October,
d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.  I hope this is helpful; it
is the best I can do at this time.

-----------------

 tools/libxl/libxl_create.c |  5 +++--
 xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b320d3..53ed52e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);
 
-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* removing i/o memory is not implemented yet */
+        if (!add) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       (gfn + nr_mfns) << PAGE_SHIFT,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }
 
 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:12:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1gj-0008Bm-35; Thu, 27 Feb 2014 14:11:49 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <etrudeau@broadcom.com>) id 1WJ1gh-0008Bh-PN
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:11:48 +0000
Received: from [85.158.137.68:42329] by server-5.bemta-3.messagelabs.com id
	DD/75-04712-3A74F035; Thu, 27 Feb 2014 14:11:47 +0000
X-Env-Sender: etrudeau@broadcom.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393510304!3301184!1
X-Originating-IP: [216.31.210.64]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32048 invoked from network); 27 Feb 2014 14:11:45 -0000
Received: from mail-gw3-out.broadcom.com (HELO mail-gw3-out.broadcom.com)
	(216.31.210.64) by server-16.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 14:11:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389772800"; d="scan'208";a="16681064"
Received: from irvexchcas06.broadcom.com (HELO
	IRVEXCHCAS06.corp.ad.broadcom.com) ([10.9.208.53])
	by mail-gw3-out.broadcom.com with ESMTP; 27 Feb 2014 06:23:21 -0800
Received: from SJEXCHCAS06.corp.ad.broadcom.com (10.16.203.14) by
	IRVEXCHCAS06.corp.ad.broadcom.com (10.9.208.53) with Microsoft SMTP
	Server (TLS) id 14.3.174.1; Thu, 27 Feb 2014 06:11:43 -0800
Received: from SJEXCHMB09.corp.ad.broadcom.com ([fe80::3da7:665e:cc78:181f])
	by SJEXCHCAS06.corp.ad.broadcom.com ([::1]) with mapi id 14.03.0174.001;
	Thu, 27 Feb 2014 06:11:43 -0800
From: Eric Trudeau <etrudeau@broadcom.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, Dario Faggioli
	<dario.faggioli@citrix.com>
Thread-Topic: [Xen-devel] ARM: access to iomem and HW IRQ
Thread-Index: AQHPM74WcrCQSMHRDEiH6tQL9Y9lQZrJIMTA
Date: Thu, 27 Feb 2014 14:11:42 +0000
Message-ID: <FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.16.203.100]
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Viktor Kleinik <viktor.kleinik@globallogic.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Thursday, February 27, 2014 8:16 AM
> To: Dario Faggioli
> Cc: Viktor Kleinik; xen-devel@lists.xen.org; Arianna Avanzini; Stefano Stabellini;
> Julien Grall; Eric Trudeau
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> 
> On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > Hi all,
> > >
> > Hi,
> >
> > > Does anyone knows something about future plans to implement
> > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > >
> > I think Arianna is working on an implementation of the former
> > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > list soon, isn't it so, Arianna?
> 
> Eric Trudeau did some work in the area too:
> 
> http://marc.info/?l=xen-devel&m=137338996422503
> http://marc.info/?l=xen-devel&m=137365750318936

I checked our repo, and the IRQ routing changes to DomUs in the second patch URL Stefano provided are up to date with what we have been using on our platforms.  We made no further changes after that patch; i.e., we kept the 100 ms maximum wait for a domain to finish an ISR when destroying it.

We also added support for a DomU to map I/O memory via the iomem configuration parameter.  Unfortunately, I don't have time to rebase this into an official patch against recent upstream Xen, but below is a patch based on last October's commit d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.
I hope this is helpful; it is the best I can do at this time.
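The range checks in the hypervisor patch below guard against unsigned wrap-around when computing the last frame of a [first, first + nr - 1] range.  A standalone sketch of the same check (the helper name is hypothetical, and the explicit nr == 0 case is an addition not present in the patch):

```c
#include <limits.h>

/* Hypothetical standalone version of the wrap-around check used in the
 * patch below: a frame range [first, first + nr - 1] is invalid if the
 * end wraps past the top of the address space.  Returns 1 if valid. */
static int frame_range_ok(unsigned long first, unsigned long nr)
{
    if (nr == 0)
        return 0;                      /* empty range: nothing to map */
    return (first + nr - 1) >= first;  /* reject unsigned wrap-around */
}
```

For example, a single frame at the very top of the address space is still valid, while a two-frame range starting there wraps and is rejected.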

-----------------

 tools/libxl/libxl_create.c |  5 +++--
 xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b320d3..53ed52e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);
 
-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* removing i/o memory is not implemented yet */
+        if (!add) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       (gfn + nr_mfns) << PAGE_SHIFT,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }
 
 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:12:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1hO-0008EB-IB; Thu, 27 Feb 2014 14:12:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ1hM-0008Dt-DH
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:12:28 +0000
Received: from [85.158.137.68:59381] by server-15.bemta-3.messagelabs.com id
	92/30-19263-BC74F035; Thu, 27 Feb 2014 14:12:27 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393510345!4567180!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17922 invoked from network); 27 Feb 2014 14:12:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:12:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106276788"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:12:25 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:12:24 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ1hI-0002sf-EM;
	Thu, 27 Feb 2014 14:12:24 +0000
Message-ID: <530F47C8.5050303@citrix.com>
Date: Thu, 27 Feb 2014 14:12:24 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel <xen-devel@lists.xen.org>
References: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>,
	Tamas K Lengyel <tamas.lengyel@zentific.com>
Subject: Re: [Xen-devel] [PATCH 1/2] mem_event: Return previous value of
 CR0/CR3/CR4 on change.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Apologies - Please ignore.

I am being particularly dim with git send-email today.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:17:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1li-0008SE-GO; Thu, 27 Feb 2014 14:16:58 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJ1lh-0008S6-N0
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 14:16:57 +0000
Received: from [85.158.143.35:34056] by server-3.bemta-4.messagelabs.com id
	7F/B2-11539-9D84F035; Thu, 27 Feb 2014 14:16:57 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393510614!8786488!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20082 invoked from network); 27 Feb 2014 14:16:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:16:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104651491"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:16:53 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 09:16:52 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJ1lc-0002Nk-OE;
	Thu, 27 Feb 2014 14:16:52 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJ1lc-0002HK-KL;
	Thu, 27 Feb 2014 14:16:52 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25317-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 14:16:52 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25317: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25317 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25317/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-oldkern            4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           fail    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:18:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1n9-00005n-0v; Thu, 27 Feb 2014 14:18:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ1n6-00005g-Tw
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:18:25 +0000
Received: from [193.109.254.147:17586] by server-15.bemta-14.messagelabs.com
	id DB/C0-10839-0394F035; Thu, 27 Feb 2014 14:18:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393510700!7232455!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14029 invoked from network); 27 Feb 2014 14:18:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:18:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106278760"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:18:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:18:12 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ1mu-0002xW-Bc;
	Thu, 27 Feb 2014 14:18:12 +0000
Date: Thu, 27 Feb 2014 14:18:12 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140227141812.GE16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <59358334.20140226161123@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 04:11:23PM +0100, Sander Eikelenboom wrote:
> 
> Wednesday, February 26, 2014, 10:14:42 AM, you wrote:
> 
> 
> > Friday, February 21, 2014, 7:32:08 AM, you wrote:
> 
> 
> >> On 2014/2/20 19:18, Sander Eikelenboom wrote:
> >>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
> >>>
> >>>
> >>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
> >>>>> Hi All,
> >>>>>
> >>>>> I'm currently having some network troubles with Xen and recent linux kernels.
> >>>>>
> >>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
> >>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
> >>>>>
> >>>>>     In the guest:
> >>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [57539.859610] net eth0: Need more slots
> >>>>>     [58157.675939] net eth0: Need more slots
> >>>>>     [58725.344712] net eth0: Need more slots
> >>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [61815.849225] net eth0: Need more slots
> >>>> This issue is familiar... and I thought it got fixed.
> >>>>   From the original analysis of a similar issue I hit before, the root cause
> >>>> is that netback still creates a response when the ring is full. I remember
> >>>> a larger MTU could trigger this issue before; what is the MTU size?
> >>> In dom0, both the physical NICs and the guest vifs have MTU=1500.
> >>> In domU, eth0 also has MTU=1500.
> >>>
> >>> So it's not jumbo frames .. just the same plain defaults everywhere ..
> >>>
> >>> With the patch from Wei that solves the other issue, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch now.
> >>> I have extended the "need more slots" warning to also print the cons, slots, max, rx->offset and size; hope that gives some more insight.
> >>> But it is indeed the VM where I had similar issues before; the primary thing this VM does is 2 simultaneous rsyncs (one push, one pull) with some gigabytes of data.
> >>>
> >>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; I don't know if it's a cause or an effect, though.
> 
> >> The log "grant_table.c:1857:d0 Bad grant reference " was also seen before.
> >> Probably the response overlaps the request and grantcopy return error 
> >> when using wrong grant reference, Netback returns resp->status with 
> >> ||XEN_NETIF_RSP_ERROR(-1) which is 4294967295 printed above from frontend.
> >> Would it be possible to print log in xenvif_rx_action of netback to see 
> >> whether something wrong with max slots and used slots?
> 
> >> Thanks
> >> Annie
> 
> > Looking more closely, these are perhaps 2 different issues ... the bad grant references do not happen
> > at the same time as the netfront messages in the guest.
> 
> > I added some debug patches to the kernel netback, netfront and Xen grant-table code (see below).
> > One of the things was to simplify the code for the debug key that prints the grant tables; the present
> > code takes too long to execute and brings down the box due to stalls and NMIs. So it now only prints
> > the number of entries per domain.
> 
> 
> > Issue 1: grant_table.c:1858:d0 Bad grant reference
> 
> > After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
> > The maptrack also seems to increase quite fast, and the number of entries seems to have gone up quite fast as well.
> 
> > Most domains have just one disk (blkfront/blkback) and one NIC; a few have a second disk.
> > The blk drivers use persistent grants, so I would assume they would reuse those and not increase the count (by much).
> 

As far as I can tell netfront has a pool of grant references, and it
will BUG_ON() if there are no grefs in the pool when you request one.
Since your DomU didn't crash, I suspect the book-keeping is still
intact.

> > Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 sometime during the night.
> > Domain 7 is the domain that happens to give the netfront messages.
> 
> > I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries ..
> > Also, is this amount of grant entries "normal", or could it be a leak somewhere?
> 

I suppose Dom0 expanding its maptrack is normal; I see it as well when I
increase the number of domains. But if it keeps increasing while the
number of DomUs stays the same, then it is not normal.

Presumably you only have netfront and blkfront using the grant table, and
your workload as described below involved both, so it would be hard to
tell which one is faulty.

There are no immediate functional changes regarding slot counting in this
dev cycle for the network driver. But there are some changes to
blkfront/back which seem interesting (memory related).

My suggestion is, if you have a working baseline, you can try to set up
different frontend / backend combinations to help narrow down the
problem.

Wei.
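[Editorial note: the matrix Wei suggests is small enough to enumerate explicitly. A sketch, with placeholder kernel versions rather than recommendations: hold one side at the known-good baseline while varying the other, so the first failing combination implicates either the frontend or the backend.]

```shell
# Hypothetical versions; substitute your own known-good and suspect kernels.
baseline=3.12
suspect=3.14-rc4

for dom0 in "$baseline" "$suspect"; do      # backend side (netback/blkback)
  for domU in "$baseline" "$suspect"; do    # frontend side (netfront/blkfront)
    echo "test: dom0(backend)=$dom0 domU(frontend)=$domU"
  done
done
```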

> > (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (4) to (5) frames.
> > (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (5) to (6) frames.
> > (XEN) [2014-02-26 00:00:38] grant_table.c:290:d0 Increased maptrack size to 13/256 frames
> > (XEN) [2014-02-26 00:01:13] grant_table.c:290:d0 Increased maptrack size to 14/256 frames
> > (XEN) [2014-02-26 04:02:55] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> > (XEN) [2014-02-26 04:15:33] grant_table.c:290:d0 Increased maptrack size to 15/256 frames
> > (XEN) [2014-02-26 04:15:53] grant_table.c:290:d0 Increased maptrack size to 16/256 frames
> > (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 17/256 frames
> > (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 18/256 frames
> > (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 19/256 frames
> > (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 20/256 frames
> > (XEN) [2014-02-26 04:15:59] grant_table.c:290:d0 Increased maptrack size to 21/256 frames
> > (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 22/256 frames
> > (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 23/256 frames
> > (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 24/256 frames
> > (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 25/256 frames
> > (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 26/256 frames
> > (XEN) [2014-02-26 04:16:17] grant_table.c:290:d0 Increased maptrack size to 27/256 frames
> > (XEN) [2014-02-26 04:16:20] grant_table.c:290:d0 Increased maptrack size to 28/256 frames
> > (XEN) [2014-02-26 04:16:56] grant_table.c:290:d0 Increased maptrack size to 29/256 frames
> > (XEN) [2014-02-26 05:15:04] grant_table.c:290:d0 Increased maptrack size to 30/256 frames
> > (XEN) [2014-02-26 05:15:05] grant_table.c:290:d0 Increased maptrack size to 31/256 frames
> > (XEN) [2014-02-26 05:21:15] grant_table.c:1858:d0 Bad grant reference 107085839 | 2048 | 1 | 0
> > (XEN) [2014-02-26 05:29:47] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
> > (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all [ key 'g' pressed
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 active entries: 0
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1) nr_grant_entries: 3072
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 active entries: 2117
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 active entries: 1061
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 active entries: 1045
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 active entries: 709
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 active entries: 163
> > (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all ] done
> > (XEN) [2014-02-26 07:55:09] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> > (XEN) [2014-02-26 08:37:16] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
> 
> 
> 
> > Issue 2: net eth0: rx->offset: 0, size: xxxxxxxxxx
> 
> > In the guest (domain 7):
> 
> > Feb 26 08:55:09 backup kernel: [39258.090375] net eth0: rx->offset: 0, size: 4294967295
> > Feb 26 08:55:09 backup kernel: [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
> > Feb 26 08:55:09 backup kernel: [39258.090401] net eth0: rx->offset: 0, size: 4294967295
> > Feb 26 08:55:09 backup kernel: [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
> > Feb 26 08:55:09 backup kernel: [39258.090415] net eth0: rx->offset: 0, size: 4294967295
> > Feb 26 08:55:09 backup kernel: [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571
> 
> > In dom0 I don't see any specific netback warnings related to this domain at these specific times. The printk's I added do trigger quite often, but these are probably not
> > erroneous; they seem to only occur on the vif of domain 7 (probably the only domain that is swamping the network by doing rsync and webdav, which causes some fragmented packets).
> 
> Another addition ... the guest doesn't shut down anymore on "xl shutdown" .. it just does .. erhmm .. nothing (tried multiple times).
> After that I ssh'ed into the guest and did a "halt -p" ... the guest shut down .. but it remained in "xl list" in a blocked state ..
> Doing an "xl console" shows:
> 
> [30024.559656] net eth0: me here .. cons:8713451 slots:1 rp:8713462 max:18 err:0 rx->id:234 rx->offset:0 size:4294967295 ref:-131941395332550
> [30024.559666] net eth0: rx->offset: 0, size: 4294967295
> [30024.559671] net eth0: me here .. cons:8713451 slots:2 rp:8713462 max:18 err:-22 rx->id:236 rx->offset:0 size:4294967295 ref:-131941395332504
> [30024.559680] net eth0: rx->offset: 0, size: 4294967295
> [30024.559686] net eth0: me here .. cons:8713451 slots:3 rp:8713462 max:18 err:-22 rx->id:1 rx->offset:0 size:4294967295 ref:-131941395332390
> [30536.665135] net eth0: Need more slots cons:9088533 slots:6 rp:9088539 max:17 err:0 rx-id:26 rx->offset:0 size:0 ref:687
> [39258.090375] net eth0: rx->offset: 0, size: 4294967295
> [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
> [39258.090401] net eth0: rx->offset: 0, size: 4294967295
> [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
> [39258.090415] net eth0: rx->offset: 0, size: 4294967295
> [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571
> INIT: Switching to runlevel: 0
> INIT: Sending processes the TERM signal
> [info] Using makefile-style concurrent boot in runlevel 0.
> Stopping openntpd: ntpd.
> [ ok ] Stopping mail-transfer-agent: nullmailer.
> [ ok ] Stopping web server: apache2 ... waiting .
> [ ok ] Asking all remaining processes to terminate...done.
> [ ok ] All processes ended within 2 seconds...done.
> [ ok ] Stopping enhanced syslogd: rsyslogd.
> [ ok ] Deconfiguring network interfaces...done.
> [ ok ] Deactivating swap...done.
> [65015.958259] EXT4-fs (xvda1): re-mounted. Opts: (null)
> [info] Will now halt.
> [65018.166546] vif vif-0: 5 starting transaction
> [65160.490419] INFO: task halt:4846 blocked for more than 120 seconds.
> [65160.490464]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
> [65160.490485] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [65160.490510] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000
> [65280.490470] INFO: task halt:4846 blocked for more than 120 seconds.
> [65280.490517]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
> [65280.490540] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [65280.490564] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000
> 
> 
> Especially the  "[65018.166546] vif vif-0: 5 starting transaction" after the halt surprises me ..
> 
> --
> Sander
> 
> > Feb 26 08:53:20 serveerstertje kernel: [39324.917255] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15101115 cons:15101112 j:8
> > Feb 26 08:53:56 serveerstertje kernel: [39361.001436] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15127649 cons:15127648 j:13
> > Feb 26 08:54:00 serveerstertje kernel: [39364.725613] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15130263 cons:15130261 j:2
> > Feb 26 08:54:04 serveerstertje kernel: [39368.739504] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15133143 cons:15133141 j:0
> > Feb 26 08:54:20 serveerstertje kernel: [39384.665044] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15144113 cons:15144112 j:0
> > Feb 26 08:54:29 serveerstertje kernel: [39393.569871] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15150203 cons:15150200 j:0
> > Feb 26 08:54:40 serveerstertje kernel: [39404.586566] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15157706 cons:15157704 j:12
> > Feb 26 08:54:56 serveerstertje kernel: [39420.759769] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15168839 cons:15168835 j:0
> > Feb 26 08:54:56 serveerstertje kernel: [39421.001372] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15169002 cons:15168999 j:8
> > Feb 26 08:55:00 serveerstertje kernel: [39424.515073] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15171450 cons:15171447 j:0
> > Feb 26 08:55:10 serveerstertje kernel: [39435.154510] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15178773 cons:15178770 j:1
> > Feb 26 08:56:19 serveerstertje kernel: [39504.195908] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15227444 cons:15227444 j:0
> > Feb 26 08:57:39 serveerstertje kernel: [39583.799392] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15283346 cons:15283344 j:8
> > Feb 26 08:57:55 serveerstertje kernel: [39599.517673] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15293937 cons:15293935 j:0
> > Feb 26 08:58:07 serveerstertje kernel: [39612.156622] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15302891 cons:15302889 j:19
> > Feb 26 08:58:07 serveerstertje kernel: [39612.400907] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15303034 cons:15303033 j:0
> > Feb 26 08:58:18 serveerstertje kernel: [39623.439383] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15310915 cons:15310911 j:0
> > Feb 26 08:58:39 serveerstertje kernel: [39643.521808] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15324769 cons:15324766 j:1
> 
> > Feb 26 09:27:07 serveerstertje kernel: [41351.622501] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16502932 cons:16502932 j:8
> > Feb 26 09:27:19 serveerstertje kernel: [41363.541003] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16510837 cons:16510834 j:7
> > Feb 26 09:27:23 serveerstertje kernel: [41368.133306] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16513940 cons:16513937 j:0
> > Feb 26 09:27:43 serveerstertje kernel: [41388.025147] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16527870 cons:16527868 j:0
> > Feb 26 09:27:47 serveerstertje kernel: [41391.530802] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16530437 cons:16530437 j:7
> > Feb 26 09:27:51 serveerstertje kernel: [41395.521166] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533320 cons:16533317 j:6
> > Feb 26 09:27:51 serveerstertje kernel: [41395.767066] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533469 cons:16533469 j:0
> > Feb 26 09:27:51 serveerstertje kernel: [41395.802319] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533533 cons:16533533 j:24
> > Feb 26 09:27:51 serveerstertje kernel: [41395.837456] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533534 cons:16533534 j:1
> > Feb 26 09:27:51 serveerstertje kernel: [41395.872587] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533597 cons:16533596 j:25
> > Feb 26 09:27:51 serveerstertje kernel: [41396.192784] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533833 cons:16533832 j:3
> > Feb 26 09:27:51 serveerstertje kernel: [41396.235611] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533890 cons:16533890 j:30
> > Feb 26 09:27:51 serveerstertje kernel: [41396.271047] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533898 cons:16533896 j:3
> 
> 
> > --
> > Sander
> 
> 
> >>>
> >>> Will keep you posted when it triggers again with the extra info in the warn.
> >>>
> >>> --
> >>> Sander
> >>>
> >>>
> >>>
> >>>> Thanks
> >>>> Annie
> >>>>>     Xen reports:
> >>>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
> >>>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
> >>>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
> >>>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
> >>>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
> >>>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
> >>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
> >>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
> >>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
> >>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
> >>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
> >>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
> >>>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
> >>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
> >>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
> >>>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
> >>>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
> >>>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
> >>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
> >>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
> >>>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
> >>>>>
> >>>>>
> >>>>>
> >>>>> Another issue with networking is when running both dom0 and domUs with a 3.14-rc3 kernel:
> >>>>>     - I can ping the guests from dom0
> >>>>>     - I can ping dom0 from the guests
> >>>>>     - But I can't ssh in or access things over http
> >>>>>     - I don't see any relevant error messages ...
> >>>>>     - This is with the same system and kernel config as with the 3.14 and 3.13 combination above
> >>>>>       (that previously worked fine)
> >>>>>
> >>>>> --
> >>>>>
> >>>>> Sander
> >>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> Xen-devel mailing list
> >>>>> Xen-devel@lists.xen.org
> >>>>> http://lists.xen.org/xen-devel
> >>>
> >>>
> 
> 
> 
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > index 4fc46eb..4d720b4 100644
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -1667,6 +1667,8 @@ skip_vfb:
> >                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
> >              } else if (!strcmp(buf, "cirrus")) {
> >                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
> > +            } else if (!strcmp(buf, "none")) {
> > +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
> >              } else {
> >                  fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
> >                  exit(1);
> > diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> > index 107b000..ab56927 100644
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -265,9 +265,10 @@ get_maptrack_handle(
> >      while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
> >      {
> >          nr_frames = nr_maptrack_frames(lgt);
> > -        if ( nr_frames >= max_nr_maptrack_frames() )
> > +        if ( nr_frames >= max_nr_maptrack_frames() ){
> > +                 gdprintk(XENLOG_INFO, "Already at max maptrack size: %u/%u frames\n",nr_frames, max_nr_maptrack_frames());
> >              break;
> > -
> > +       }
> >          new_mt = alloc_xenheap_page();
> >          if ( !new_mt )
> >              break;
> > @@ -285,8 +286,8 @@ get_maptrack_handle(
> >          smp_wmb();
> >          lgt->maptrack_limit      = new_mt_limit;
> 
> > -        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
> > -                 nr_frames + 1);
> > +        gdprintk(XENLOG_INFO, "Increased maptrack size to %u/%u frames\n",
> > +                 nr_frames + 1, max_nr_maptrack_frames());
> >      }
> 
> >      spin_unlock(&lgt->lock);
> > @@ -1854,7 +1855,7 @@ __acquire_grant_for_copy(
> 
> >      if ( unlikely(gref >= nr_grant_entries(rgt)) )
> >          PIN_FAIL(unlock_out, GNTST_bad_gntref,
> > -                 "Bad grant reference %ld\n", gref);
> > +                 "Bad grant reference %ld | %d | %d | %d \n", gref, nr_grant_entries(rgt), rgt->gt_version, ldom);
> 
> >      act = &active_entry(rgt, gref);
> >      shah = shared_entry_header(rgt, gref);
> > @@ -2830,15 +2831,19 @@ static void gnttab_usage_print(struct domain *rd)
> >      int first = 1;
> >      grant_ref_t ref;
> >      struct grant_table *gt = rd->grant_table;
> > -
> > +    unsigned int active=0;
> > +/*
> >      printk("      -------- active --------       -------- shared --------\n");
> >      printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
> > -
> > +*/
> >      spin_lock(&gt->lock);
> 
> >      if ( gt->gt_version == 0 )
> >          goto out;
> 
> > +    printk("grant-table for remote domain:%5d (v%d) nr_grant_entries: %d\n",
> > +                   rd->domain_id, gt->gt_version, nr_grant_entries(gt));
> > +
> >      for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
> >      {
> >          struct active_grant_entry *act;
> > @@ -2875,19 +2880,22 @@ static void gnttab_usage_print(struct domain *rd)
> >                     rd->domain_id, gt->gt_version);
> >              first = 0;
> >          }
> > -
> > +        active++;
> >          /*      [ddd]    ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
> > -        printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
> > -               ref, act->domid, act->frame, act->pin,
> > -               sha->domid, frame, status);
> > +        /* printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n", ref, act->domid, act->frame, act->pin, sha->domid, frame, status); */
> >      }
> 
> >   out:
> >      spin_unlock(&gt->lock);
> 
> > +    printk("grant-table for remote domain:%5d active entries: %d\n",
> > +                   rd->domain_id, active);
> > +/*
> >      if ( first )
> >          printk("grant-table for remote domain:%5d ... "
> >                 "no active grant table entries\n", rd->domain_id);
> > +*/
> > +
> >  }
> 
> >  static void gnttab_usage_print_all(unsigned char key)
> 
> 
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index e5284bc..6d93358 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -482,20 +482,23 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                 .meta  = vif->meta,
> >         };
> 
> > +       int j=0;
> > +
> >         skb_queue_head_init(&rxq);
> 
> >         while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> >                 RING_IDX max_slots_needed;
> >                 int i;
> > +               int nr_frags;
> 
> >                 /* We need a cheap worse case estimate for the number of
> >                  * slots we'll use.
> >                  */
> 
> >                 max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
> > -                                               skb_headlen(skb),
> > -                                               PAGE_SIZE);
> > -               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> > +                                               skb_headlen(skb), PAGE_SIZE);
> > +               nr_frags = skb_shinfo(skb)->nr_frags;
> > +               for (i = 0; i < nr_frags; i++) {
> >                         unsigned int size;
> >                         size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
> >                         max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
> > @@ -508,6 +511,9 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                 if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
> >                         skb_queue_head(&vif->rx_queue, skb);
> >                         need_to_notify = true;
> > +                       if (net_ratelimit())
> > +                               netdev_err(vif->dev, "!?!?!?! skb may not fit .. bail out now max_slots_needed:%d GSO:%d vif->rx_last_skb_slots:%d nr_frags:%d prod:%d cons:%d j:%d\n",
> > +                                       max_slots_needed, (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ? 1 : 0, vif->rx_last_skb_slots, nr_frags,vif->rx.sring->req_prod,vif->rx.req_cons,j);
> >                         vif->rx_last_skb_slots = max_slots_needed;
> >                         break;
> >                 } else
> > @@ -518,6 +524,7 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                 BUG_ON(sco->meta_slots_used > max_slots_needed);
> 
> >                 __skb_queue_tail(&rxq, skb);
> > +               j++;
> >         }
> 
> >         BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> > @@ -541,7 +548,7 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                         resp->offset = vif->meta[npo.meta_cons].gso_size;
> >                         resp->id = vif->meta[npo.meta_cons].id;
> >                         resp->status = sco->meta_slots_used;
> > -
> > +
> >                         npo.meta_cons++;
> >                         sco->meta_slots_used--;
> >                 }
> > @@ -705,7 +712,7 @@ static int xenvif_count_requests(struct xenvif *vif,
> >                  */
> >                 if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
> >                         if (net_ratelimit())
> > -                               netdev_dbg(vif->dev,
> > +                               netdev_err(vif->dev,
> >                                            "Too many slots (%d) exceeding limit (%d), dropping packet\n",
> >                                            slots, XEN_NETBK_LEGACY_SLOTS_MAX);
> >                         drop_err = -E2BIG;
> > @@ -728,7 +735,7 @@ static int xenvif_count_requests(struct xenvif *vif,
> >                  */
> >                 if (!drop_err && txp->size > first->size) {
> >                         if (net_ratelimit())
> > -                               netdev_dbg(vif->dev,
> > +                               netdev_err(vif->dev,
> >                                            "Invalid tx request, slot size %u > remaining size %u\n",
> >                                            txp->size, first->size);
> >                         drop_err = -EIO;
> 
> 
> 
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index f9daa9e..67d5221 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -753,6 +753,7 @@ static int xennet_get_responses(struct netfront_info *np,
> >                         if (net_ratelimit())
> >                                 dev_warn(dev, "rx->offset: %x, size: %u\n",
> >                                          rx->offset, rx->status);
> > +                               dev_warn(dev, "me here .. cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
> >                         xennet_move_rx_slot(np, skb, ref);
> >                         err = -EINVAL;
> >                         goto next;
> > @@ -784,7 +785,7 @@ next:
> 
> >                 if (cons + slots == rp) {
> >                         if (net_ratelimit())
> > -                               dev_warn(dev, "Need more slots\n");
> > +                               dev_warn(dev, "Need more slots cons:%d slots:%d rp:%d max:%d err:%d rx-id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
> >                         err = -ENOENT;
> >                         break;
> >                 }
> > @@ -803,7 +804,6 @@ next:
> 
> >         if (unlikely(err))
> >                 np->rx.rsp_cons = cons + slots;
> > -
> >         return err;
> >  }
> 
> > @@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,
> 
> >                 /* Ethernet work: Delayed to here as it peeks the header. */
> >                 skb->protocol = eth_type_trans(skb, dev);
> > +               skb_reset_network_header(skb);
> 
> >                 if (checksum_setup(dev, skb)) {
> >                         kfree_skb(skb);
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:18:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1n9-00005n-0v; Thu, 27 Feb 2014 14:18:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ1n6-00005g-Tw
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:18:25 +0000
Received: from [193.109.254.147:17586] by server-15.bemta-14.messagelabs.com
	id DB/C0-10839-0394F035; Thu, 27 Feb 2014 14:18:24 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393510700!7232455!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14029 invoked from network); 27 Feb 2014 14:18:22 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:18:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106278760"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:18:13 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:18:12 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ1mu-0002xW-Bc;
	Thu, 27 Feb 2014 14:18:12 +0000
Date: Thu, 27 Feb 2014 14:18:12 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140227141812.GE16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <59358334.20140226161123@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 04:11:23PM +0100, Sander Eikelenboom wrote:
> 
> Wednesday, February 26, 2014, 10:14:42 AM, you wrote:
> 
> 
> > Friday, February 21, 2014, 7:32:08 AM, you wrote:
> 
> 
> >> On 2014/2/20 19:18, Sander Eikelenboom wrote:
> >>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
> >>>
> >>>
> >>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
> >>>>> Hi All,
> >>>>>
> >>>>> I'm currently having some network troubles with Xen and recent Linux kernels.
> >>>>>
> >>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
> >>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
> >>>>>
> >>>>>     In the guest:
> >>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [57539.859610] net eth0: Need more slots
> >>>>>     [58157.675939] net eth0: Need more slots
> >>>>>     [58725.344712] net eth0: Need more slots
> >>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
> >>>>>     [61815.849225] net eth0: Need more slots
> >>>> This issue is familiar... I thought it got fixed.
> >>>> From the original analysis of a similar issue I hit before, the root
> >>>> cause is that netback still creates a response when the ring is full.
> >>>> I remember a larger MTU could trigger this issue before; what is the
> >>>> MTU size?
> >>> In dom0 both for the physical nics and the guest vif's MTU=1500
> >>> In domU the eth0 also has MTU=1500.
> >>>
> >>> So it's not jumbo frames .. just everywhere the same plain defaults ..
> >>>
> >>> With the patch from Wei that solves the other issue, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch.
> >>> I have extended the "need more slots" warning to also print cons, slots, max, rx->offset and size; hope that gives some more insight.
> >>> It is indeed the VM where I had similar issues before; the primary thing this VM does is 2 simultaneous rsyncs (one push, one pull) with some gigabytes of data.
> >>>
> >>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; don't know if it's a cause or an effect though.
> 
> >> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
> >> Probably the response overlaps the request and the grant copy returns an
> >> error when a wrong grant reference is used; netback then sets resp->status
> >> to XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the
> >> frontend. Would it be possible to print a log in xenvif_rx_action of
> >> netback to see whether something is wrong with max slots and used slots?
> 
> >> Thanks
> >> Annie
> 
> > Looking more closely, these are perhaps 2 different issues ... the bad grant references do not happen
> > at the same time as the netfront messages in the guest.
> 
> > I added some debug patches to the kernel netback, netfront and Xen grant table code (see below).
> > One of the things was to simplify the code for the debug key that prints the grant tables; the present
> > code takes too long to execute and brings down the box due to stalls and NMIs. So it now only prints
> > the number of entries per domain.
> 
> 
> > Issue 1: grant_table.c:1858:d0 Bad grant reference
> 
> > After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
> > The maptrack also seems to increase quite fast, and the number of entries seems to have gone up quite fast as well.
> 
> > Most domains have just one disk (blkfront/blkback) and one NIC; a few have a second disk.
> > The blk drivers use persistent grants, so I would assume they would reuse those and not increase the count (by much).
> 

As far as I can tell, netfront has a pool of grant references and will
BUG_ON() if there are no grefs left in the pool when one is requested.
Since your DomU didn't crash, I suspect the book-keeping is still
intact.

> > Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 somewhere this night.
> > Domain 7 is the domain that happens to give the netfront messages.
> 
> > I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries.
> > Also, is this number of grant entries "normal", or could it be a leak somewhere?
> 

I suppose Dom0 expanding its maptrack is normal; I see the same when I
increase the number of domains. But if it keeps increasing while the
number of DomUs stays the same, then it is not normal.

Presumably only netfront and blkfront use the grant table on your
system, and your workload as described below involved both, so it is
hard to tell which one is faulty.

There are no immediate functional changes regarding slot counting for
the network driver in this dev cycle, but there are some changes to
blkfront/back which seem interesting (memory related).

My suggestion is: if you have a working baseline, you can try setting
up different frontend / backend combinations to help narrow down the
problem.

Wei.

> > (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (4) to (5) frames.
> > (XEN) [2014-02-26 00:00:38] grant_table.c:1250:d1 Expanding dom (1) grant table from (5) to (6) frames.
> > (XEN) [2014-02-26 00:00:38] grant_table.c:290:d0 Increased maptrack size to 13/256 frames
> > (XEN) [2014-02-26 00:01:13] grant_table.c:290:d0 Increased maptrack size to 14/256 frames
> > (XEN) [2014-02-26 04:02:55] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> > (XEN) [2014-02-26 04:15:33] grant_table.c:290:d0 Increased maptrack size to 15/256 frames
> > (XEN) [2014-02-26 04:15:53] grant_table.c:290:d0 Increased maptrack size to 16/256 frames
> > (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 17/256 frames
> > (XEN) [2014-02-26 04:15:56] grant_table.c:290:d0 Increased maptrack size to 18/256 frames
> > (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 19/256 frames
> > (XEN) [2014-02-26 04:15:57] grant_table.c:290:d0 Increased maptrack size to 20/256 frames
> > (XEN) [2014-02-26 04:15:59] grant_table.c:290:d0 Increased maptrack size to 21/256 frames
> > (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 22/256 frames
> > (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 23/256 frames
> > (XEN) [2014-02-26 04:16:00] grant_table.c:290:d0 Increased maptrack size to 24/256 frames
> > (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 25/256 frames
> > (XEN) [2014-02-26 04:16:10] grant_table.c:290:d0 Increased maptrack size to 26/256 frames
> > (XEN) [2014-02-26 04:16:17] grant_table.c:290:d0 Increased maptrack size to 27/256 frames
> > (XEN) [2014-02-26 04:16:20] grant_table.c:290:d0 Increased maptrack size to 28/256 frames
> > (XEN) [2014-02-26 04:16:56] grant_table.c:290:d0 Increased maptrack size to 29/256 frames
> > (XEN) [2014-02-26 05:15:04] grant_table.c:290:d0 Increased maptrack size to 30/256 frames
> > (XEN) [2014-02-26 05:15:05] grant_table.c:290:d0 Increased maptrack size to 31/256 frames
> > (XEN) [2014-02-26 05:21:15] grant_table.c:1858:d0 Bad grant reference 107085839 | 2048 | 1 | 0
> > (XEN) [2014-02-26 05:29:47] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
> > (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all [ key 'g' pressed
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    0 active entries: 0
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1) nr_grant_entries: 3072
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    1 active entries: 2117
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    2 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    3 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    4 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    5 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    6 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    7 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    8 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:    9 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   10 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   11 active entries: 1061
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   12 active entries: 1045
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   13 active entries: 1060
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   14 active entries: 709
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1) nr_grant_entries: 2048
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 (v1)
> > (XEN) [2014-02-26 07:53:20] grant-table for remote domain:   15 active entries: 163
> > (XEN) [2014-02-26 07:53:20] gnttab_usage_print_all ] done
> > (XEN) [2014-02-26 07:55:09] grant_table.c:1858:d0 Bad grant reference 4325377 | 2048 | 1 | 0
> > (XEN) [2014-02-26 08:37:16] grant_table.c:1858:d0 Bad grant reference 268435460 | 2048 | 1 | 0
> 
> 
> 
> > Issue 2: net eth0: rx->offset: 0, size: xxxxxxxxxx
> 
> > In the guest (domain 7):
> 
> > Feb 26 08:55:09 backup kernel: [39258.090375] net eth0: rx->offset: 0, size: 4294967295
> > Feb 26 08:55:09 backup kernel: [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
> > Feb 26 08:55:09 backup kernel: [39258.090401] net eth0: rx->offset: 0, size: 4294967295
> > Feb 26 08:55:09 backup kernel: [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
> > Feb 26 08:55:09 backup kernel: [39258.090415] net eth0: rx->offset: 0, size: 4294967295
> > Feb 26 08:55:09 backup kernel: [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571
> 
> > In dom0 I don't see any specific netback warnings related to this domain at these specific times. The printk's I added do trigger
> > quite often, but these are probably not erroneous; they seem to occur only on the vif of domain 7 (probably the only domain that is swamping the network by doing rsync and webdav, causing some fragmented packets).
> 
> Another addition ... the guest no longer shuts down on "xl shutdown" .. it just does .. erhmm .. nothing (tried multiple times).
> After that I ssh'ed into the guest and did a "halt -p" ... the guest shut down, but it remained in "xl list" in blocked state.
> Doing an "xl console" shows:
> 
> [30024.559656] net eth0: me here .. cons:8713451 slots:1 rp:8713462 max:18 err:0 rx->id:234 rx->offset:0 size:4294967295 ref:-131941395332550
> [30024.559666] net eth0: rx->offset: 0, size: 4294967295
> [30024.559671] net eth0: me here .. cons:8713451 slots:2 rp:8713462 max:18 err:-22 rx->id:236 rx->offset:0 size:4294967295 ref:-131941395332504
> [30024.559680] net eth0: rx->offset: 0, size: 4294967295
> [30024.559686] net eth0: me here .. cons:8713451 slots:3 rp:8713462 max:18 err:-22 rx->id:1 rx->offset:0 size:4294967295 ref:-131941395332390
> [30536.665135] net eth0: Need more slots cons:9088533 slots:6 rp:9088539 max:17 err:0 rx-id:26 rx->offset:0 size:0 ref:687
> [39258.090375] net eth0: rx->offset: 0, size: 4294967295
> [39258.090392] net eth0: me here .. cons:15177803 slots:1 rp:15177807 max:18 err:0 rx->id:74 rx->offset:0 size:4294967295 ref:533
> [39258.090401] net eth0: rx->offset: 0, size: 4294967295
> [39258.090406] net eth0: me here .. cons:15177803 slots:2 rp:15177807 max:18 err:-22 rx->id:76 rx->offset:0 size:4294967295 ref:686
> [39258.090415] net eth0: rx->offset: 0, size: 4294967295
> [39258.090420] net eth0: me here .. cons:15177803 slots:3 rp:15177807 max:18 err:-22 rx->id:77 rx->offset:0 size:4294967295 ref:571
> INIT: Switching to runlevel: 0
> INIT: Sending processes the TERM signal
> [info] Using makefile-style concurrent boot in runlevel 0.
> Stopping openntpd: ntpd.
> [ ok ] Stopping mail-transfer-agent: nullmailer.
> [ ok ] Stopping web server: apache2 ... waiting .
> [ ok ] Asking all remaining processes to terminate...done.
> [ ok ] All processes ended within 2 seconds...done.
> [ ok ] Stopping enhanced syslogd: rsyslogd.
> [ ok ] Deconfiguring network interfaces...done.
> [ ok ] Deactivating swap...done.
> [65015.958259] EXT4-fs (xvda1): re-mounted. Opts: (null)
> [info] Will now halt.
> [65018.166546] vif vif-0: 5 starting transaction
> [65160.490419] INFO: task halt:4846 blocked for more than 120 seconds.
> [65160.490464]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
> [65160.490485] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [65160.490510] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000
> [65280.490470] INFO: task halt:4846 blocked for more than 120 seconds.
> [65280.490517]       Not tainted 3.14.0-rc4-20140225-vanilla-nfnbdebug2+ #1
> [65280.490540] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [65280.490564] halt            D ffff88001d6cfc38     0  4846   4838 0x00000000
> 
> 
> Especially the "[65018.166546] vif vif-0: 5 starting transaction" appearing after the halt surprises me.
> 
> --
> Sander
> 
> > Feb 26 08:53:20 serveerstertje kernel: [39324.917255] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15101115 cons:15101112 j:8
> > Feb 26 08:53:56 serveerstertje kernel: [39361.001436] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15127649 cons:15127648 j:13
> > Feb 26 08:54:00 serveerstertje kernel: [39364.725613] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15130263 cons:15130261 j:2
> > Feb 26 08:54:04 serveerstertje kernel: [39368.739504] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15133143 cons:15133141 j:0
> > Feb 26 08:54:20 serveerstertje kernel: [39384.665044] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15144113 cons:15144112 j:0
> > Feb 26 08:54:29 serveerstertje kernel: [39393.569871] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15150203 cons:15150200 j:0
> > Feb 26 08:54:40 serveerstertje kernel: [39404.586566] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15157706 cons:15157704 j:12
> > Feb 26 08:54:56 serveerstertje kernel: [39420.759769] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15168839 cons:15168835 j:0
> > Feb 26 08:54:56 serveerstertje kernel: [39421.001372] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15169002 cons:15168999 j:8
> > Feb 26 08:55:00 serveerstertje kernel: [39424.515073] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15171450 cons:15171447 j:0
> > Feb 26 08:55:10 serveerstertje kernel: [39435.154510] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15178773 cons:15178770 j:1
> > Feb 26 08:56:19 serveerstertje kernel: [39504.195908] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15227444 cons:15227444 j:0
> > Feb 26 08:57:39 serveerstertje kernel: [39583.799392] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15283346 cons:15283344 j:8
> > Feb 26 08:57:55 serveerstertje kernel: [39599.517673] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:4 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15293937 cons:15293935 j:0
> > Feb 26 08:58:07 serveerstertje kernel: [39612.156622] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15302891 cons:15302889 j:19
> > Feb 26 08:58:07 serveerstertje kernel: [39612.400907] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15303034 cons:15303033 j:0
> > Feb 26 08:58:18 serveerstertje kernel: [39623.439383] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:15310915 cons:15310911 j:0
> > Feb 26 08:58:39 serveerstertje kernel: [39643.521808] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:6 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:15324769 cons:15324766 j:1
> 
> > Feb 26 09:27:07 serveerstertje kernel: [41351.622501] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16502932 cons:16502932 j:8
> > Feb 26 09:27:19 serveerstertje kernel: [41363.541003] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16510837 cons:16510834 j:7
> > Feb 26 09:27:23 serveerstertje kernel: [41368.133306] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16513940 cons:16513937 j:0
> > Feb 26 09:27:43 serveerstertje kernel: [41388.025147] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16527870 cons:16527868 j:0
> > Feb 26 09:27:47 serveerstertje kernel: [41391.530802] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:2 prod:16530437 cons:16530437 j:7
> > Feb 26 09:27:51 serveerstertje kernel: [41395.521166] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:5 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533320 cons:16533317 j:6
> > Feb 26 09:27:51 serveerstertje kernel: [41395.767066] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533469 cons:16533469 j:0
> > Feb 26 09:27:51 serveerstertje kernel: [41395.802319] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533533 cons:16533533 j:24
> > Feb 26 09:27:51 serveerstertje kernel: [41395.837456] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:1 GSO:0 vif->rx_last_skb_slots:0 nr_frags:0 prod:16533534 cons:16533534 j:1
> > Feb 26 09:27:51 serveerstertje kernel: [41395.872587] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533597 cons:16533596 j:25
> > Feb 26 09:27:51 serveerstertje kernel: [41396.192784] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533833 cons:16533832 j:3
> > Feb 26 09:27:51 serveerstertje kernel: [41396.235611] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533890 cons:16533890 j:30
> > Feb 26 09:27:51 serveerstertje kernel: [41396.271047] vif vif-7-0 vif7.0: !?!?!?! skb may not fit .. bail out now max_slots_needed:3 GSO:1 vif->rx_last_skb_slots:0 nr_frags:1 prod:16533898 cons:16533896 j:3
> 
> 
> > --
> > Sander
> 
> 
> 
> 
> 
> >>>
> >>> Will keep you posted when it triggers again with the extra info in the warn.
> >>>
> >>> --
> >>> Sander
> >>>
> >>>
> >>>
> >>>> Thanks
> >>>> Annie
> >>>>>     Xen reports:
> >>>>>     (XEN) [2014-02-18 03:22:47] grant_table.c:1857:d0 Bad grant reference 19791875
> >>>>>     (XEN) [2014-02-18 03:42:33] grant_table.c:1857:d0 Bad grant reference 268435460
> >>>>>     (XEN) [2014-02-18 04:15:23] grant_table.c:289:d0 Increased maptrack size to 14 frames
> >>>>>     (XEN) [2014-02-18 04:15:27] grant_table.c:289:d0 Increased maptrack size to 15 frames
> >>>>>     (XEN) [2014-02-18 04:15:48] grant_table.c:289:d0 Increased maptrack size to 16 frames
> >>>>>     (XEN) [2014-02-18 04:15:50] grant_table.c:289:d0 Increased maptrack size to 17 frames
> >>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 18 frames
> >>>>>     (XEN) [2014-02-18 04:15:55] grant_table.c:289:d0 Increased maptrack size to 19 frames
> >>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 20 frames
> >>>>>     (XEN) [2014-02-18 04:15:56] grant_table.c:289:d0 Increased maptrack size to 21 frames
> >>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 22 frames
> >>>>>     (XEN) [2014-02-18 04:15:59] grant_table.c:289:d0 Increased maptrack size to 23 frames
> >>>>>     (XEN) [2014-02-18 04:16:00] grant_table.c:289:d0 Increased maptrack size to 24 frames
> >>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 25 frames
> >>>>>     (XEN) [2014-02-18 04:16:05] grant_table.c:289:d0 Increased maptrack size to 26 frames
> >>>>>     (XEN) [2014-02-18 04:16:06] grant_table.c:289:d0 Increased maptrack size to 27 frames
> >>>>>     (XEN) [2014-02-18 04:16:12] grant_table.c:289:d0 Increased maptrack size to 28 frames
> >>>>>     (XEN) [2014-02-18 04:16:18] grant_table.c:289:d0 Increased maptrack size to 29 frames
> >>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
> >>>>>     (XEN) [2014-02-18 04:17:00] grant_table.c:1857:d0 Bad grant reference 268435460
> >>>>>     (XEN) [2014-02-18 04:34:03] grant_table.c:1857:d0 Bad grant reference 4325377
> >>>>>
> >>>>>
> >>>>>
> >>>>> Another issue with networking is when running both dom0 and domU's with a 3.14-rc3 kernel:
> >>>>>     - I can ping the guests from dom0
> >>>>>     - I can ping dom0 from the guests
> >>>>>     - But I can't ssh in or access things over http
> >>>>>     - I don't see any relevant error messages ...
> >>>>>     - This is with the same system and kernel config as the 3.14 and 3.13 combination above
> >>>>>       (which previously worked fine)
> >>>>>
> >>>>> --
> >>>>>
> >>>>> Sander
> >>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> Xen-devel mailing list
> >>>>> Xen-devel@lists.xen.org
> >>>>> http://lists.xen.org/xen-devel
> >>>
> >>>
> 
> 
> 
> > diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> > index 4fc46eb..4d720b4 100644
> > --- a/tools/libxl/xl_cmdimpl.c
> > +++ b/tools/libxl/xl_cmdimpl.c
> > @@ -1667,6 +1667,8 @@ skip_vfb:
> >                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
> >              } else if (!strcmp(buf, "cirrus")) {
> >                  b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
> > +            } else if (!strcmp(buf, "none")) {
> > +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
> >              } else {
> >                  fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
> >                  exit(1);
> > diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> > index 107b000..ab56927 100644
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -265,9 +265,10 @@ get_maptrack_handle(
> >      while ( unlikely((handle = __get_maptrack_handle(lgt)) == -1) )
> >      {
> >          nr_frames = nr_maptrack_frames(lgt);
> > -        if ( nr_frames >= max_nr_maptrack_frames() )
> > +        if ( nr_frames >= max_nr_maptrack_frames() ){
> > +                 gdprintk(XENLOG_INFO, "Already at max maptrack size: %u/%u frames\n",nr_frames, max_nr_maptrack_frames());
> >              break;
> > -
> > +       }
> >          new_mt = alloc_xenheap_page();
> >          if ( !new_mt )
> >              break;
> > @@ -285,8 +286,8 @@ get_maptrack_handle(
> >          smp_wmb();
> >          lgt->maptrack_limit      = new_mt_limit;
> 
> > -        gdprintk(XENLOG_INFO, "Increased maptrack size to %u frames\n",
> > -                 nr_frames + 1);
> > +        gdprintk(XENLOG_INFO, "Increased maptrack size to %u/%u frames\n",
> > +                 nr_frames + 1, max_nr_maptrack_frames());
> >      }
> 
> >      spin_unlock(&lgt->lock);
> > @@ -1854,7 +1855,7 @@ __acquire_grant_for_copy(
> 
> >      if ( unlikely(gref >= nr_grant_entries(rgt)) )
> >          PIN_FAIL(unlock_out, GNTST_bad_gntref,
> > -                 "Bad grant reference %ld\n", gref);
> > +                 "Bad grant reference %ld | %d | %d | %d \n", gref, nr_grant_entries(rgt), rgt->gt_version, ldom);
> 
> >      act = &active_entry(rgt, gref);
> >      shah = shared_entry_header(rgt, gref);
> > @@ -2830,15 +2831,19 @@ static void gnttab_usage_print(struct domain *rd)
> >      int first = 1;
> >      grant_ref_t ref;
> >      struct grant_table *gt = rd->grant_table;
> > -
> > +    unsigned int active=0;
> > +/*
> >      printk("      -------- active --------       -------- shared --------\n");
> >      printk("[ref] localdom mfn      pin          localdom gmfn     flags\n");
> > -
> > +*/
> >      spin_lock(&gt->lock);
> 
> >      if ( gt->gt_version == 0 )
> >          goto out;
> 
> > +    printk("grant-table for remote domain:%5d (v%d) nr_grant_entries: %d\n",
> > +                   rd->domain_id, gt->gt_version, nr_grant_entries(gt));
> > +
> >      for ( ref = 0; ref != nr_grant_entries(gt); ref++ )
> >      {
> >          struct active_grant_entry *act;
> > @@ -2875,19 +2880,22 @@ static void gnttab_usage_print(struct domain *rd)
> >                     rd->domain_id, gt->gt_version);
> >              first = 0;
> >          }
> > -
> > +        active++;
> >          /*      [ddd]    ddddd 0xXXXXXX 0xXXXXXXXX      ddddd 0xXXXXXX 0xXX */
> > -        printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n",
> > -               ref, act->domid, act->frame, act->pin,
> > -               sha->domid, frame, status);
> > +        /* printk("[%3d]    %5d 0x%06lx 0x%08x      %5d 0x%06"PRIx64" 0x%02x\n", ref, act->domid, act->frame, act->pin, sha->domid, frame, status); */
> >      }
> 
> >   out:
> >      spin_unlock(&gt->lock);
> 
> > +    printk("grant-table for remote domain:%5d active entries: %d\n",
> > +                   rd->domain_id, active);
> > +/*
> >      if ( first )
> >          printk("grant-table for remote domain:%5d ... "
> >                 "no active grant table entries\n", rd->domain_id);
> > +*/
> > +
> >  }
> 
> >  static void gnttab_usage_print_all(unsigned char key)
> 
> 
> 
> 
> 
> 
> > diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c
> > index e5284bc..6d93358 100644
> > --- a/drivers/net/xen-netback/netback.c
> > +++ b/drivers/net/xen-netback/netback.c
> > @@ -482,20 +482,23 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                 .meta  = vif->meta,
> >         };
> 
> > +       int j=0;
> > +
> >         skb_queue_head_init(&rxq);
> 
> >         while ((skb = skb_dequeue(&vif->rx_queue)) != NULL) {
> >                 RING_IDX max_slots_needed;
> >                 int i;
> > +               int nr_frags;
> 
> >                 /* We need a cheap worse case estimate for the number of
> >                  * slots we'll use.
> >                  */
> 
> >                 max_slots_needed = DIV_ROUND_UP(offset_in_page(skb->data) +
> > -                                               skb_headlen(skb),
> > -                                               PAGE_SIZE);
> > -               for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
> > +                                               skb_headlen(skb), PAGE_SIZE);
> > +               nr_frags = skb_shinfo(skb)->nr_frags;
> > +               for (i = 0; i < nr_frags; i++) {
> >                         unsigned int size;
> >                         size = skb_frag_size(&skb_shinfo(skb)->frags[i]);
> >                         max_slots_needed += DIV_ROUND_UP(size, PAGE_SIZE);
> > @@ -508,6 +511,9 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                 if (!xenvif_rx_ring_slots_available(vif, max_slots_needed)) {
> >                         skb_queue_head(&vif->rx_queue, skb);
> >                         need_to_notify = true;
> > +                       if (net_ratelimit())
> > +                               netdev_err(vif->dev, "!?!?!?! skb may not fit .. bail out now max_slots_needed:%d GSO:%d vif->rx_last_skb_slots:%d nr_frags:%d prod:%d cons:%d j:%d\n",
> > +                                       max_slots_needed, (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4 || skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) ? 1 : 0, vif->rx_last_skb_slots, nr_frags,vif->rx.sring->req_prod,vif->rx.req_cons,j);
> >                         vif->rx_last_skb_slots = max_slots_needed;
> >                         break;
> >                 } else
> > @@ -518,6 +524,7 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                 BUG_ON(sco->meta_slots_used > max_slots_needed);
> 
> >                 __skb_queue_tail(&rxq, skb);
> > +               j++;
> >         }
> 
> >         BUG_ON(npo.meta_prod > ARRAY_SIZE(vif->meta));
> > @@ -541,7 +548,7 @@ static void xenvif_rx_action(struct xenvif *vif)
> >                         resp->offset = vif->meta[npo.meta_cons].gso_size;
> >                         resp->id = vif->meta[npo.meta_cons].id;
> >                         resp->status = sco->meta_slots_used;
> > -
> > +
> >                         npo.meta_cons++;
> >                         sco->meta_slots_used--;
> >                 }
> > @@ -705,7 +712,7 @@ static int xenvif_count_requests(struct xenvif *vif,
> >                  */
> >                 if (!drop_err && slots >= XEN_NETBK_LEGACY_SLOTS_MAX) {
> >                         if (net_ratelimit())
> > -                               netdev_dbg(vif->dev,
> > +                               netdev_err(vif->dev,
> >                                            "Too many slots (%d) exceeding limit (%d), dropping packet\n",
> >                                            slots, XEN_NETBK_LEGACY_SLOTS_MAX);
> >                         drop_err = -E2BIG;
> > @@ -728,7 +735,7 @@ static int xenvif_count_requests(struct xenvif *vif,
> >                  */
> >                 if (!drop_err && txp->size > first->size) {
> >                         if (net_ratelimit())
> > -                               netdev_dbg(vif->dev,
> > +                               netdev_err(vif->dev,
> >                                            "Invalid tx request, slot size %u > remaining size %u\n",
> >                                            txp->size, first->size);
> >                         drop_err = -EIO;
> 
> 
> 
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index f9daa9e..67d5221 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -753,6 +753,7 @@ static int xennet_get_responses(struct netfront_info *np,
> >                         if (net_ratelimit())
> >                                 dev_warn(dev, "rx->offset: %x, size: %u\n",
> >                                          rx->offset, rx->status);
> > +                               dev_warn(dev, "me here .. cons:%d slots:%d rp:%d max:%d err:%d rx->id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
> >                         xennet_move_rx_slot(np, skb, ref);
> >                         err = -EINVAL;
> >                         goto next;
> > @@ -784,7 +785,7 @@ next:
> 
> >                 if (cons + slots == rp) {
> >                         if (net_ratelimit())
> > -                               dev_warn(dev, "Need more slots\n");
> > +                               dev_warn(dev, "Need more slots cons:%d slots:%d rp:%d max:%d err:%d rx-id:%d rx->offset:%x size:%u ref:%ld\n",cons,slots,rp,max,err,rx->id, rx->offset, rx->status, ref);
> >                         err = -ENOENT;
> >                         break;
> >                 }
> > @@ -803,7 +804,6 @@ next:
> 
> >         if (unlikely(err))
> >                 np->rx.rsp_cons = cons + slots;
> > -
> >         return err;
> >  }
> 
> > @@ -907,6 +907,7 @@ static int handle_incoming_queue(struct net_device *dev,
> 
> >                 /* Ethernet work: Delayed to here as it peeks the header. */
> >                 skb->protocol = eth_type_trans(skb, dev);
> > +               skb_reset_network_header(skb);
> 
> >                 if (checksum_setup(dev, skb)) {
> >                         kfree_skb(skb);
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:19:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1ng-0000IH-Kb; Thu, 27 Feb 2014 14:19:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ1ne-0000Hr-EB
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 14:18:58 +0000
Received: from [85.158.137.68:43237] by server-11.bemta-3.messagelabs.com id
	D7/06-04255-1594F035; Thu, 27 Feb 2014 14:18:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393510735!3303336!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26971 invoked from network); 27 Feb 2014 14:18:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:18:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106278979"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:18:54 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:18:54 -0500
Message-ID: <530F4949.4050706@citrix.com>
Date: Thu, 27 Feb 2014 14:18:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Paolo Bonzini <pbonzini@redhat.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
In-Reply-To: <530F3967.6030805@redhat.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, x86@kernel.org,
	Ingo Molnar <mingo@redhat.com>, Scott J Norton <scott.norton@hp.com>,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 13:11, Paolo Bonzini wrote:
> On 27/02/2014 13:11, David Vrabel wrote:
>>> > This patch adds para-virtualization support to the queue spinlock code
>>> > by enabling the queue head to kick the lock holder CPU, if known,
> >>> > when the lock isn't released for a certain amount of time. It
>>> > also enables the mutual monitoring of the queue head CPU and the
>>> > following node CPU in the queue to make sure that their CPUs will
>>> > stay scheduled in.
>> I'm not really understanding how this is supposed to work.  There
>> appears to be an assumption that a guest can keep one of its VCPUs
>> running by repeatedly kicking it?  This is not possible under Xen and I
>> doubt it's possible under KVM or any other hypervisor.
> 
> KVM allows any VCPU to wake up a currently halted VCPU of its choice,
> see Documentation/virtual/kvm/hypercalls.txt.

But neither of the VCPUs being kicked here are halted -- they're either
running or runnable (descheduled by the hypervisor).

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:19:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1ng-0000IH-Kb; Thu, 27 Feb 2014 14:19:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ1ne-0000Hr-EB
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 14:18:58 +0000
Received: from [85.158.137.68:43237] by server-11.bemta-3.messagelabs.com id
	D7/06-04255-1594F035; Thu, 27 Feb 2014 14:18:57 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393510735!3303336!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26971 invoked from network); 27 Feb 2014 14:18:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:18:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106278979"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:18:54 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:18:54 -0500
Message-ID: <530F4949.4050706@citrix.com>
Date: Thu, 27 Feb 2014 14:18:49 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Paolo Bonzini <pbonzini@redhat.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
In-Reply-To: <530F3967.6030805@redhat.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, x86@kernel.org,
	Ingo Molnar <mingo@redhat.com>, Scott J Norton <scott.norton@hp.com>,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 13:11, Paolo Bonzini wrote:
> Il 27/02/2014 13:11, David Vrabel ha scritto:
>>> > This patch adds para-virtualization support to the queue spinlock code
>>> > by enabling the queue head to kick the lock holder CPU, if known,
>>> > when the lock isn't released for a certain amount of time. It
>>> > also enables mutual monitoring between the queue head CPU and the
>>> > following node CPU in the queue, to make sure that both stay
>>> > scheduled in.
>> I'm not really understanding how this is supposed to work.  There
>> appears to be an assumption that a guest can keep one of its VCPUs
>> running by repeatedly kicking it?  This is not possible under Xen and I
>> doubt it's possible under KVM or any other hypervisor.
> 
> KVM allows any VCPU to wake up a currently halted VCPU of its choice,
> see Documentation/virtual/kvm/hypercalls.txt.

But neither of the VCPUs being kicked here is halted -- they're either
running or runnable (descheduled by the hypervisor).
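[Archive note: the handshake being debated above can be sketched in plain C. A condition variable stands in for the hypervisor's halt/kick interface (e.g. KVM's KVM_HC_KICK_CPU unhalt hypercall): a kick wakes a vcpu that has *halted*, but it cannot force the hypervisor to schedule a vcpu that is merely runnable, which is exactly David's objection. All names below are hypothetical; this is an analogy, not hypervisor code.]

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Toy vcpu: "halted" models having executed HLT inside the guest. */
struct vcpu_sim {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    bool            halted;
};

static void vcpu_halt(struct vcpu_sim *v)   /* guest side: HLT */
{
    pthread_mutex_lock(&v->m);
    v->halted = true;
    while (v->halted)                       /* sleep until kicked */
        pthread_cond_wait(&v->cv, &v->m);
    pthread_mutex_unlock(&v->m);
}

static void vcpu_kick(struct vcpu_sim *v)   /* lock holder: unhalt hypercall */
{
    pthread_mutex_lock(&v->m);
    v->halted = false;                      /* no effect on a running vcpu */
    pthread_cond_signal(&v->cv);
    pthread_mutex_unlock(&v->m);
}

static void *waiter(void *arg)
{
    vcpu_halt(arg);
    return NULL;
}

/* Runs the halt-then-kick handshake; returns 0 once the waiter wakes. */
static int demo(void)
{
    struct vcpu_sim v = { PTHREAD_MUTEX_INITIALIZER,
                          PTHREAD_COND_INITIALIZER, false };
    pthread_t t;

    if (pthread_create(&t, NULL, waiter, &v) != 0)
        return -1;
    for (;;) {                 /* wait until the vcpu really halted ... */
        pthread_mutex_lock(&v.m);
        bool h = v.halted;
        pthread_mutex_unlock(&v.m);
        if (h)
            break;
    }
    vcpu_kick(&v);             /* ... so the kick cannot be lost */
    pthread_join(t, NULL);
    return 0;
}
```

The ordering matters: a kick delivered before the target halts is lost, which is why real pv spinlock code must recheck the lock word before halting.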

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:24:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:24:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1sw-0000a7-H2; Thu, 27 Feb 2014 14:24:26 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <agraf@suse.de>) id 1WJ1sv-0000a0-Hr
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:24:25 +0000
Received: from [85.158.137.68:16398] by server-3.bemta-3.messagelabs.com id
	71/B9-14520-89A4F035; Thu, 27 Feb 2014 14:24:24 +0000
X-Env-Sender: agraf@suse.de
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393511063!3304902!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.2 required=7.0 tests=MIME_QP_LONG_LINE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7702 invoked from network); 27 Feb 2014 14:24:23 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-16.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 14:24:23 -0000
Received: from relay2.suse.de (charybdis-ext.suse.de [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 0C72F75014;
	Thu, 27 Feb 2014 14:24:23 +0000 (UTC)
Mime-Version: 1.0 (1.0)
From: Alexander Graf <agraf@suse.de>
X-Mailer: iPhone Mail (11B554a)
In-Reply-To: <11174980.YJ21cfsHoG@wuerfel>
Date: Thu, 27 Feb 2014 22:24:13 +0800
Message-Id: <5AA88E43-1A40-4409-9A56-334988483843@suse.de>
References: <20140226183454.GA14639@cbox>
	<20140226214843.GD12169@bivouac.eciton.net>
	<alpine.DEB.2.02.1402271229370.31489@kaball.uk.xensource.com>
	<11174980.YJ21cfsHoG@wuerfel>
To: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> On 27.02.2014 at 22:00, Arnd Bergmann <arnd@arndb.de> wrote:
> 
>> On Thursday 27 February 2014 12:31:55 Stefano Stabellini wrote:
>> On Wed, 26 Feb 2014, Leif Lindholm wrote:
>>>>>  no FDT.  In this case, the VM implementation must provide ACPI, and
>>>>>  the OS must be able to locate the ACPI root pointer through the UEFI
>>>>>  system table.
>>>>> 
>>>>> For more information about the arm and arm64 boot conventions, see
>>>>> Documentation/arm/Booting and Documentation/arm64/booting.txt in the
>>>>> Linux kernel source tree.
>>>>> 
>>>>> For more information about UEFI and ACPI booting, see [4] and [5].
>>>> 
>>>> What's the point of having ACPI in a virtual machine? You wouldn't
>>>> need to abstract any of the hardware in AML since you already know
>>>> what the virtual hardware is, so I can't see how this would help
>>>> anyone.
>>> 
>>> The point is that if we need to share any real hw then we need to use
>>> whatever the host has.
> 
> I would be more comfortable defining in the spec that you cannot share
> hardware at all. Obviously that

No, no -- we do want to share hardware.

> doesn't stop anyone from actually
> sharing hardware with the guest, but at that point it would become
> noncompliant with this spec, with the consequence that you couldn't
> expect a compliant guest image to run on that hardware, but that is
> exactly something we can't guarantee anyway because we don't know
> what drivers might be needed.
> 
> Also, there is no way to generally do this with either FDT or ACPI:
> In the former case, the hypervisor needs to modify any properties
> that point to other device nodes so that they point to nodes visible
> to the guest. That may be possible for simple things like IRQs
> and reg properties, but as soon as you get into stuff like dmaengine,
> pinctrl or PHY references, you just can't solve it in a generic way.
> 
> For ACPI it's probably worse: any AML methods that the host has
> are unlikely to work in the guest, and it's impossible to translate
> them at all.
> 
> Obviously things are different for Xen Dom0 where we share *all* devices
> between host and guest, and we just use the host firmware interfaces.
> That case again cannot be covered by the generic VM system specification.
> 
>> I dislike ACPI as much as the next guy, but unfortunately if the host
>> only supports ACPI, the Linux driver for a particular device only works
>> together with ACPI, and you want to assign that device to a VM, then we
>> might be forced to use ACPI to describe it.
> 
> Can anyone think of an example where this would actually work?
> 
> The only case I can see where it's possible to share a device with
> a guest without the hypervisor building up the description is for
> PCI functions that are passed through with an IOMMU. Those won't
> need ACPI or DT support however.

If you want to assign a platform device, you need to generate a corresponding hardware description (fdt/dsdt) chunk in the hypervisor. You can't reuse the host's description - it's too tightly coupled to the overall host layout.
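[Archive note: the kind of per-device chunk Alex describes can be sketched as below. The node is shown as device tree source for readability; a real hypervisor would emit the flattened binary form (e.g. via libfdt) and would remap the reg and interrupts values to guest-visible ones rather than copying the host's. The device, addresses, and IRQ number here are hypothetical.]

```c
#include <stdio.h>
#include <string.h>

/* Synthesize a guest-facing DT node for a passed-through PL011 UART.
 * guest_base and guest_irq are the *guest's* view, not the host's --
 * that remapping is exactly what makes the host description unusable. */
static int make_guest_uart_node(char *buf, size_t len,
                                unsigned long guest_base, int guest_irq)
{
    return snprintf(buf, len,
        "uart@%lx {\n"
        "    compatible = \"arm,pl011\";\n"
        "    reg = <0x%lx 0x1000>;\n"
        "    interrupts = <0 %d 4>;\n"
        "};\n",
        guest_base, guest_base, guest_irq);
}
```

This works for self-contained devices; as Arnd notes above, it falls apart once the node carries phandle references (dmaengine, pinctrl, PHYs) into parts of the host tree the guest cannot see.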

Imagine you get an AArch64 notebook with Windows on it. You want to run Linux there, so your host needs to understand ACPI. Now you want to run a Windows guest inside a VM, so you need ACPI in there again.

Replace Windows with "Linux with custom drivers" and you're in the same situation even if you set Windows aside. The reality is that we will have both fdt-based and acpi-based systems.


Alex


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:25:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1tx-0000dw-0c; Thu, 27 Feb 2014 14:25:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WJ1tv-0000di-L1
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 14:25:28 +0000
Received: from [85.158.137.68:29875] by server-15.bemta-3.messagelabs.com id
	01/A9-19263-6DA4F035; Thu, 27 Feb 2014 14:25:26 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393511123!4631180!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3429 invoked from network); 27 Feb 2014 14:25:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:25:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104653894"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:25:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:25:22 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WJ1tq-000351-Hy;
	Thu, 27 Feb 2014 14:25:22 +0000
Message-ID: <530F4ACD.9050105@eu.citrix.com>
Date: Thu, 27 Feb 2014 14:25:17 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
In-Reply-To: <530C77C8020000780011F114@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/25/2014 10:00 AM, Jan Beulich wrote:
> ... in a simplified and consistent way.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Nice!

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> --- a/docs/misc/printk-formats.txt
> +++ b/docs/misc/printk-formats.txt
> @@ -15,3 +15,6 @@ Symbol/Function pointers:
>   
>          In the case that an appropriate symbol name can't be found, %p[sS] will
>          fall back to '%p' and print the address in hex.
> +
> +       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
> +               "d<domid>v<vcpuid>")
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
>       if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
>       {
>           dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
> -                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
> +                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
>                   has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
> -                v->domain->domain_id, v->vcpu_id,
> -                guest_mcg_cap & ~MCG_CAP_COUNT);
> +                v, guest_mcg_cap & ~MCG_CAP_COUNT);
>           return -EPERM;
>       }
>   
> @@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
>                 guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
>                !test_and_set_bool(v->mce_pending) )
>           {
> -            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
>               vcpu_kick(v);
>               ret = 0;
>           }
>           else
>           {
> -            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
>               ret = -EBUSY;
>               break;
>           }
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
>           if ( !warned )
>           {
>               warned = 1;
> -            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
> +            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
>                      "(If you see this outside of debugging activity,"
>                      " please report to xen-devel@lists.xenproject.org)\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   v);
>           }
>           memset(reg, 0, sizeof(*reg));
>           return;
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct
>           if ( !cpu_has(c, X86_FEATURE_DTES64) )
>           {
>               printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>               goto func_out;
>           }
>           vpmu_set(vpmu, VPMU_CPU_HAS_DS);
> @@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct
>               /* If BTS_UNAVAIL is set reset the DS feature. */
>               vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
>               printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>           }
>           else
>           {
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu
>       mfn_t *oos;
>       struct domain *d = v->domain;
>   
> -    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
> -                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn));
> +    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
>   
>       for_each_vcpu(d, v)
>       {
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
>   static void reserved_bit_page_fault(
>       unsigned long addr, struct cpu_user_regs *regs)
>   {
> -    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
> -           current->domain->domain_id, current->vcpu_id, regs->error_code);
> +    printk("%pv: reserved bit in page table (ec=%04X)\n",
> +           current, regs->error_code);
>       show_page_walk(addr);
>       show_execution_state(regs);
>   }
> @@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
>           tb->flags |= TBF_INTERRUPT;
>       if ( unlikely(null_trap_bounce(v, tb)) )
>       {
> -        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> -               v->domain->domain_id, v->vcpu_id, error_code);
> +        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
>           show_page_walk(addr);
>       }
>   
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
>   
>   vcpu_info_t dummy_vcpu_info;
>   
> -int current_domain_id(void)
> -{
> -    return current->domain->domain_id;
> -}
> -
>   static void __domain_finalise_shutdown(struct domain *d)
>   {
>       struct vcpu *v;
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
>   
>       if ( !is_idle_vcpu(current) )
>       {
> -        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
> -               smp_processor_id(), current->domain->domain_id,
> -               current->vcpu_id);
> +        printk("*** Dumping CPU%u guest state (%pv): ***\n",
> +               smp_processor_id(), current);
>           show_execution_state(guest_cpu_user_regs());
>           printk("\n");
>       }
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
>       struct list_head *iter;
>       int pos = 0;
>   
> -    d2printk("rqi d%dv%d\n",
> -           svc->vcpu->domain->domain_id,
> -           svc->vcpu->vcpu_id);
> +    d2printk("rqi %pv\n", svc->vcpu);
>   
>       BUG_ON(&svc->rqd->runq != runq);
>       /* Idle vcpus not allowed on the runqueue anymore */
> @@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
>   
>           if ( svc->credit > iter_svc->credit )
>           {
> -            d2printk(" p%d d%dv%d\n",
> -                   pos,
> -                   iter_svc->vcpu->domain->domain_id,
> -                   iter_svc->vcpu->vcpu_id);
> +            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
>               break;
>           }
>           pos++;
> @@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
>       cpumask_t mask;
>       struct csched_vcpu * cur;
>   
> -    d2printk("rqt d%dv%d cd%dv%d\n",
> -             new->vcpu->domain->domain_id,
> -             new->vcpu->vcpu_id,
> -             current->domain->domain_id,
> -             current->vcpu_id);
> +    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
>   
>       BUG_ON(new->vcpu->processor != cpu);
>       BUG_ON(new->rqd != rqd);
> @@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
>           t2c_update(rqd, delta, svc);
>           svc->start_time = now;
>   
> -        d2printk("b d%dv%d c%d\n",
> -                 svc->vcpu->domain->domain_id,
> -                 svc->vcpu->vcpu_id,
> -                 svc->credit);
> +        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
>       } else {
>           d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
>                  __func__, now, svc->start_time);
> @@ -871,11 +859,9 @@ static void
>   csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>   {
>       struct csched_vcpu *svc = vc->sched_priv;
> -    struct domain * const dom = vc->domain;
>       struct csched_dom * const sdom = svc->sdom;
>   
> -    printk("%s: Inserting d%dv%d\n",
> -           __func__, dom->domain_id, vc->vcpu_id);
> +    printk("%s: Inserting %pv\n", __func__, vc);
>   
>       /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
>        * been called for that cpu.
> @@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler
>   
>       /* Schedule lock should be held at this point. */
>   
> -    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
> +    d2printk("w %pv\n", vc);
>   
>       BUG_ON( is_idle_vcpu(vc) );
>   
> @@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops,
>       {
>           if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
>           {
> -            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv -\n", svc->vcpu);
>               clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
>           }
>           /* Leave it where it is for now.  When we actually pay attention
> @@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops,
>           }
>           else
>           {
> -            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv +\n", svc->vcpu);
>               new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
>               goto out_up;
>           }
> @@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
>   {
>       if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
>       {
> -        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
>           /* It's running; mark it to migrate. */
>           svc->migrate_rqd = trqd;
>           set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
> @@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
>       {
>           int on_runq=0;
>           /* It's not running; just move it */
> -        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
>           if ( __vcpu_on_runq(svc) )
>           {
>               __runq_remove(svc);
> @@ -1662,11 +1646,7 @@ csched_schedule(
>       SCHED_STAT_CRANK(schedule);
>       CSCHED_VCPU_CHECK(current);
>   
> -    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
> -             cpu,
> -             scurr->vcpu->domain->domain_id,
> -             scurr->vcpu->vcpu_id,
> -             now);
> +    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
>   
>       BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
>   
> @@ -1693,12 +1673,11 @@ csched_schedule(
>                   }
>               }
>           }
> -        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
> +        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
>                  "pcpu %d rq %d!\n",
>                  __func__,
>                  cpu, this_rqi,
> -               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
> -               scurr->vcpu->processor, other_rqi);
> +               scurr->vcpu, scurr->vcpu->processor, other_rqi);
>       }
>       BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
>   
> @@ -1755,12 +1734,8 @@ csched_schedule(
>               __runq_remove(snext);
>               if ( snext->vcpu->is_running )
>               {
> -                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
> -                       cpu,
> -                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
> -                       snext->vcpu->processor,
> -                       scurr->vcpu->domain->domain_id,
> -                       scurr->vcpu->vcpu_id);
> +                printk("p%d: snext %pv running on p%d! scurr %pv\n",
> +                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
>                   BUG();
>               }
>               set_bit(__CSFLAG_scheduled, &snext->flags);
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
>   
>           if ( v->affinity_broken )
>           {
> -            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
> -                   d->domain_id, v->vcpu_id);
> +            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
>               cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
>               v->affinity_broken = 0;
>           }
> @@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
>               if ( cpumask_empty(&online_affinity) &&
>                    cpumask_test_cpu(cpu, v->cpu_affinity) )
>               {
> -                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
> -                        d->domain_id, v->vcpu_id);
> +                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
>   
>                   if (system_state == SYS_STATE_suspend)
>                   {
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -19,6 +19,7 @@
>   #include <xen/ctype.h>
>   #include <xen/symbols.h>
>   #include <xen/lib.h>
> +#include <xen/sched.h>
>   #include <asm/div64.h>
>   #include <asm/page.h>
>   
> @@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
>   
>           return str;
>       }
> +
> +    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
> +    {
> +        const struct vcpu *v = arg;
> +
> +        ++*fmt_ptr;
> +        if ( str <= end )
> +            *str = 'd';
> +        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
> +        if ( str <= end )
> +            *str = 'v';
> +        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
> +    }
>       }
>   
>       if ( field_width == -1 )
> --- a/xen/include/xen/config.h
> +++ b/xen/include/xen/config.h
> @@ -74,12 +74,11 @@
>   
>   #ifndef __ASSEMBLY__
>   
> -int current_domain_id(void);
>   #define dprintk(_l, _f, _a...)                              \
>       printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
>   #define gdprintk(_l, _f, _a...)                             \
> -    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
> -           __LINE__, current_domain_id() , ## _a )
> +    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
> +           __LINE__, current, ## _a )
>   
>   #endif /* !__ASSEMBLY__ */
>   
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:25:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1tx-0000dw-0c; Thu, 27 Feb 2014 14:25:29 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WJ1tv-0000di-L1
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 14:25:28 +0000
Received: from [85.158.137.68:29875] by server-15.bemta-3.messagelabs.com id
	01/A9-19263-6DA4F035; Thu, 27 Feb 2014 14:25:26 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393511123!4631180!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3429 invoked from network); 27 Feb 2014 14:25:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:25:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104653894"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:25:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:25:22 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WJ1tq-000351-Hy;
	Thu, 27 Feb 2014 14:25:22 +0000
Message-ID: <530F4ACD.9050105@eu.citrix.com>
Date: Thu, 27 Feb 2014 14:25:17 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
In-Reply-To: <530C77C8020000780011F114@nat28.tlf.novell.com>
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/25/2014 10:00 AM, Jan Beulich wrote:
> ... in a simplified and consistent way.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Nice!

Acked-by: George Dunlap <george.dunlap@eu.citrix.com>

>
> --- a/docs/misc/printk-formats.txt
> +++ b/docs/misc/printk-formats.txt
> @@ -15,3 +15,6 @@ Symbol/Function pointers:
>   
>          In the case that an appropriate symbol name can't be found, %p[sS] will
>          fall back to '%p' and print the address in hex.
> +
> +       %pv     Domain and vCPU ID from a 'struct vcpu *' (printed as
> +               "d<domid>v<vcpuid>")
> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -82,10 +82,9 @@ int vmce_restore_vcpu(struct vcpu *v, co
>       if ( ctxt->caps & ~guest_mcg_cap & ~MCG_CAP_COUNT & ~MCG_CTL_P )
>       {
>           dprintk(XENLOG_G_ERR, "%s restore: unsupported MCA capabilities"
> -                " %#" PRIx64 " for d%d:v%u (supported: %#Lx)\n",
> +                " %#" PRIx64 " for %pv (supported: %#Lx)\n",
>                   has_hvm_container_vcpu(v) ? "HVM" : "PV", ctxt->caps,
> -                v->domain->domain_id, v->vcpu_id,
> -                guest_mcg_cap & ~MCG_CAP_COUNT);
> +                v, guest_mcg_cap & ~MCG_CAP_COUNT);
>           return -EPERM;
>       }
>   
> @@ -361,15 +360,13 @@ int inject_vmce(struct domain *d, int vc
>                 guest_has_trap_callback(d, v->vcpu_id, TRAP_machine_check)) &&
>                !test_and_set_bool(v->mce_pending) )
>           {
> -            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_VERBOSE, "MCE: inject vMCE to %pv\n", v);
>               vcpu_kick(v);
>               ret = 0;
>           }
>           else
>           {
> -            mce_printk(MCE_QUIET, "Failed to inject vMCE to d%d:v%d\n",
> -                       d->domain_id, v->vcpu_id);
> +            mce_printk(MCE_QUIET, "Failed to inject vMCE to %pv\n", v);
>               ret = -EBUSY;
>               break;
>           }
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -734,10 +734,10 @@ void vmx_get_segment_register(struct vcp
>           if ( !warned )
>           {
>               warned = 1;
> -            printk(XENLOG_WARNING "Segment register inaccessible for d%dv%d\n"
> +            printk(XENLOG_WARNING "Segment register inaccessible for %pv\n"
>                      "(If you see this outside of debugging activity,"
>                      " please report to xen-devel@lists.xenproject.org)\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   v);
>           }
>           memset(reg, 0, sizeof(*reg));
>           return;
> --- a/xen/arch/x86/hvm/vmx/vpmu_core2.c
> +++ b/xen/arch/x86/hvm/vmx/vpmu_core2.c
> @@ -769,8 +769,8 @@ static int core2_vpmu_initialise(struct
>           if ( !cpu_has(c, X86_FEATURE_DTES64) )
>           {
>               printk(XENLOG_G_WARNING "CPU doesn't support 64-bit DS Area"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>               goto func_out;
>           }
>           vpmu_set(vpmu, VPMU_CPU_HAS_DS);
> @@ -780,8 +780,8 @@ static int core2_vpmu_initialise(struct
>               /* If BTS_UNAVAIL is set reset the DS feature. */
>               vpmu_reset(vpmu, VPMU_CPU_HAS_DS);
>               printk(XENLOG_G_WARNING "CPU has set BTS_UNAVAIL"
> -                   " - Debug Store disabled for d%d:v%d\n",
> -                   v->domain->domain_id, v->vcpu_id);
> +                   " - Debug Store disabled for %pv\n",
> +                   v);
>           }
>           else
>           {
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -786,8 +786,7 @@ static void oos_hash_remove(struct vcpu
>       mfn_t *oos;
>       struct domain *d = v->domain;
>   
> -    SHADOW_PRINTK("D%dV%d gmfn %lx\n",
> -                  v->domain->domain_id, v->vcpu_id, mfn_x(gmfn));
> +    SHADOW_PRINTK("%pv gmfn %lx\n", v, mfn_x(gmfn));
>   
>       for_each_vcpu(d, v)
>       {
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1070,8 +1070,8 @@ void do_machine_check(struct cpu_user_re
>   static void reserved_bit_page_fault(
>       unsigned long addr, struct cpu_user_regs *regs)
>   {
> -    printk("d%d:v%d: reserved bit in page table (ec=%04X)\n",
> -           current->domain->domain_id, current->vcpu_id, regs->error_code);
> +    printk("%pv: reserved bit in page table (ec=%04X)\n",
> +           current, regs->error_code);
>       show_page_walk(addr);
>       show_execution_state(regs);
>   }
> @@ -1113,8 +1113,7 @@ struct trap_bounce *propagate_page_fault
>           tb->flags |= TBF_INTERRUPT;
>       if ( unlikely(null_trap_bounce(v, tb)) )
>       {
> -        printk("d%d:v%d: unhandled page fault (ec=%04X)\n",
> -               v->domain->domain_id, v->vcpu_id, error_code);
> +        printk("%pv: unhandled page fault (ec=%04X)\n", v, error_code);
>           show_page_walk(addr);
>       }
>   
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -65,11 +65,6 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_m
>   
>   vcpu_info_t dummy_vcpu_info;
>   
> -int current_domain_id(void)
> -{
> -    return current->domain->domain_id;
> -}
> -
>   static void __domain_finalise_shutdown(struct domain *d)
>   {
>       struct vcpu *v;
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -89,9 +89,8 @@ void dump_execstate(struct cpu_user_regs
>   
>       if ( !is_idle_vcpu(current) )
>       {
> -        printk("*** Dumping CPU%u guest state (d%d:v%d): ***\n",
> -               smp_processor_id(), current->domain->domain_id,
> -               current->vcpu_id);
> +        printk("*** Dumping CPU%u guest state (%pv): ***\n",
> +               smp_processor_id(), current);
>           show_execution_state(guest_cpu_user_regs());
>           printk("\n");
>       }
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -413,9 +413,7 @@ __runq_insert(struct list_head *runq, st
>       struct list_head *iter;
>       int pos = 0;
>   
> -    d2printk("rqi d%dv%d\n",
> -           svc->vcpu->domain->domain_id,
> -           svc->vcpu->vcpu_id);
> +    d2printk("rqi %pv\n", svc->vcpu);
>   
>       BUG_ON(&svc->rqd->runq != runq);
>       /* Idle vcpus not allowed on the runqueue anymore */
> @@ -429,10 +427,7 @@ __runq_insert(struct list_head *runq, st
>   
>           if ( svc->credit > iter_svc->credit )
>           {
> -            d2printk(" p%d d%dv%d\n",
> -                   pos,
> -                   iter_svc->vcpu->domain->domain_id,
> -                   iter_svc->vcpu->vcpu_id);
> +            d2printk(" p%d %pv\n", pos, iter_svc->vcpu);
>               break;
>           }
>           pos++;
> @@ -492,11 +487,7 @@ runq_tickle(const struct scheduler *ops,
>       cpumask_t mask;
>       struct csched_vcpu * cur;
>   
> -    d2printk("rqt d%dv%d cd%dv%d\n",
> -             new->vcpu->domain->domain_id,
> -             new->vcpu->vcpu_id,
> -             current->domain->domain_id,
> -             current->vcpu_id);
> +    d2printk("rqt %pv curr %pv\n", new->vcpu, current);
>   
>       BUG_ON(new->vcpu->processor != cpu);
>       BUG_ON(new->rqd != rqd);
> @@ -681,10 +672,7 @@ void burn_credits(struct csched_runqueue
>           t2c_update(rqd, delta, svc);
>           svc->start_time = now;
>   
> -        d2printk("b d%dv%d c%d\n",
> -                 svc->vcpu->domain->domain_id,
> -                 svc->vcpu->vcpu_id,
> -                 svc->credit);
> +        d2printk("b %pv c%d\n", svc->vcpu, svc->credit);
>       } else {
>           d2printk("%s: Time went backwards? now %"PRI_stime" start %"PRI_stime"\n",
>                  __func__, now, svc->start_time);
> @@ -871,11 +859,9 @@ static void
>   csched_vcpu_insert(const struct scheduler *ops, struct vcpu *vc)
>   {
>       struct csched_vcpu *svc = vc->sched_priv;
> -    struct domain * const dom = vc->domain;
>       struct csched_dom * const sdom = svc->sdom;
>   
> -    printk("%s: Inserting d%dv%d\n",
> -           __func__, dom->domain_id, vc->vcpu_id);
> +    printk("%s: Inserting %pv\n", __func__, vc);
>   
>       /* NB: On boot, idle vcpus are inserted before alloc_pdata() has
>        * been called for that cpu.
> @@ -965,7 +951,7 @@ csched_vcpu_wake(const struct scheduler
>   
>       /* Schedule lock should be held at this point. */
>   
> -    d2printk("w d%dv%d\n", vc->domain->domain_id, vc->vcpu_id);
> +    d2printk("w %pv\n", vc);
>   
>       BUG_ON( is_idle_vcpu(vc) );
>   
> @@ -1074,7 +1060,7 @@ choose_cpu(const struct scheduler *ops,
>       {
>           if ( test_and_clear_bit(__CSFLAG_runq_migrate_request, &svc->flags) )
>           {
> -            d2printk("d%dv%d -\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv -\n", svc->vcpu);
>               clear_bit(__CSFLAG_runq_migrate_request, &svc->flags);
>           }
>           /* Leave it where it is for now.  When we actually pay attention
> @@ -1094,7 +1080,7 @@ choose_cpu(const struct scheduler *ops,
>           }
>           else
>           {
> -            d2printk("d%dv%d +\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id);
> +            d2printk("%pv +\n", svc->vcpu);
>               new_cpu = cpumask_cycle(vc->processor, &svc->migrate_rqd->active);
>               goto out_up;
>           }
> @@ -1203,8 +1189,7 @@ void migrate(const struct scheduler *ops
>   {
>       if ( test_bit(__CSFLAG_scheduled, &svc->flags) )
>       {
> -        d2printk("d%dv%d %d-%d a\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d a\n", svc->vcpu, svc->rqd->id, trqd->id);
>           /* It's running; mark it to migrate. */
>           svc->migrate_rqd = trqd;
>           set_bit(_VPF_migrating, &svc->vcpu->pause_flags);
> @@ -1214,8 +1199,7 @@ void migrate(const struct scheduler *ops
>       {
>           int on_runq=0;
>           /* It's not running; just move it */
> -        d2printk("d%dv%d %d-%d i\n", svc->vcpu->domain->domain_id, svc->vcpu->vcpu_id,
> -                 svc->rqd->id, trqd->id);
> +        d2printk("%pv %d-%d i\n", svc->vcpu, svc->rqd->id, trqd->id);
>           if ( __vcpu_on_runq(svc) )
>           {
>               __runq_remove(svc);
> @@ -1662,11 +1646,7 @@ csched_schedule(
>       SCHED_STAT_CRANK(schedule);
>       CSCHED_VCPU_CHECK(current);
>   
> -    d2printk("sc p%d c d%dv%d now %"PRI_stime"\n",
> -             cpu,
> -             scurr->vcpu->domain->domain_id,
> -             scurr->vcpu->vcpu_id,
> -             now);
> +    d2printk("sc p%d c %pv now %"PRI_stime"\n", cpu, scurr->vcpu, now);
>   
>       BUG_ON(!cpumask_test_cpu(cpu, &CSCHED_PRIV(ops)->initialized));
>   
> @@ -1693,12 +1673,11 @@ csched_schedule(
>                   }
>               }
>           }
> -        printk("%s: pcpu %d rq %d, but scurr d%dv%d assigned to "
> +        printk("%s: pcpu %d rq %d, but scurr %pv assigned to "
>                  "pcpu %d rq %d!\n",
>                  __func__,
>                  cpu, this_rqi,
> -               scurr->vcpu->domain->domain_id, scurr->vcpu->vcpu_id,
> -               scurr->vcpu->processor, other_rqi);
> +               scurr->vcpu, scurr->vcpu->processor, other_rqi);
>       }
>       BUG_ON(!is_idle_vcpu(scurr->vcpu) && scurr->rqd != rqd);
>   
> @@ -1755,12 +1734,8 @@ csched_schedule(
>               __runq_remove(snext);
>               if ( snext->vcpu->is_running )
>               {
> -                printk("p%d: snext d%dv%d running on p%d! scurr d%dv%d\n",
> -                       cpu,
> -                       snext->vcpu->domain->domain_id, snext->vcpu->vcpu_id,
> -                       snext->vcpu->processor,
> -                       scurr->vcpu->domain->domain_id,
> -                       scurr->vcpu->vcpu_id);
> +                printk("p%d: snext %pv running on p%d! scurr %pv\n",
> +                       cpu, snext->vcpu, snext->vcpu->processor, scurr->vcpu);
>                   BUG();
>               }
>               set_bit(__CSFLAG_scheduled, &snext->flags);
> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -559,8 +559,7 @@ void restore_vcpu_affinity(struct domain
>   
>           if ( v->affinity_broken )
>           {
> -            printk(XENLOG_DEBUG "Restoring affinity for d%dv%d\n",
> -                   d->domain_id, v->vcpu_id);
> +            printk(XENLOG_DEBUG "Restoring affinity for %pv\n", v);
>               cpumask_copy(v->cpu_affinity, v->cpu_affinity_saved);
>               v->affinity_broken = 0;
>           }
> @@ -608,8 +607,7 @@ int cpu_disable_scheduler(unsigned int c
>               if ( cpumask_empty(&online_affinity) &&
>                    cpumask_test_cpu(cpu, v->cpu_affinity) )
>               {
> -                printk(XENLOG_DEBUG "Breaking affinity for d%dv%d\n",
> -                        d->domain_id, v->vcpu_id);
> +                printk(XENLOG_DEBUG "Breaking affinity for %pv\n", v);
>   
>                   if (system_state == SYS_STATE_suspend)
>                   {
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -19,6 +19,7 @@
>   #include <xen/ctype.h>
>   #include <xen/symbols.h>
>   #include <xen/lib.h>
> +#include <xen/sched.h>
>   #include <asm/div64.h>
>   #include <asm/page.h>
>   
> @@ -301,6 +302,19 @@ static char *pointer(char *str, char *en
>   
>           return str;
>       }
> +
> +    case 'v': /* d<domain-id>v<vcpu-id> from a struct vcpu */
> +    {
> +        const struct vcpu *v = arg;
> +
> +        ++*fmt_ptr;
> +        if ( str <= end )
> +            *str = 'd';
> +        str = number(str + 1, end, v->domain->domain_id, 10, -1, -1, 0);
> +        if ( str <= end )
> +            *str = 'v';
> +        return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
> +    }
>       }
>   
>       if ( field_width == -1 )
> --- a/xen/include/xen/config.h
> +++ b/xen/include/xen/config.h
> @@ -74,12 +74,11 @@
>   
>   #ifndef __ASSEMBLY__
>   
> -int current_domain_id(void);
>   #define dprintk(_l, _f, _a...)                              \
>       printk(_l "%s:%d: " _f, __FILE__ , __LINE__ , ## _a )
>   #define gdprintk(_l, _f, _a...)                             \
> -    printk(XENLOG_GUEST _l "%s:%d:d%d " _f, __FILE__,       \
> -           __LINE__, current_domain_id() , ## _a )
> +    printk(XENLOG_GUEST _l "%s:%d:%pv " _f, __FILE__,       \
> +           __LINE__, current, ## _a )
>   
>   #endif /* !__ASSEMBLY__ */
>   
>
>




From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vp-0000qd-3G; Thu, 27 Feb 2014 14:27:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vn-0000pJ-DO
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:23 +0000
Received: from [85.158.143.35:23153] by server-2.bemta-4.messagelabs.com id
	B3/B2-04779-A4B4F035; Thu, 27 Feb 2014 14:27:22 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393511240!8793254!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6471 invoked from network); 27 Feb 2014 14:27:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104654382"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:19 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vj-00037V-5N;
	Thu, 27 Feb 2014 14:27:19 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vi-0007Xh-UY;
	Thu, 27 Feb 2014 14:27:18 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:12 +0000
Message-ID: <1393511233-28942-4-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1393511233-28942-1-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 3/4] x86/mem_sharing: drop unused variable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Coverity CID 1087198

Signed-off-by: Tim Deegan <tim@xen.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
---
 xen/arch/x86/mm/mem_sharing.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4a5d9e8..7ed6594 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -846,7 +846,6 @@ int mem_sharing_nominate_page(struct domain *d,
     mfn_t mfn;
     struct page_info *page = NULL; /* gcc... */
     int ret;
-    struct gfn_info *gfn_info;
 
     *phandle = 0UL;
 
@@ -905,7 +904,7 @@ int mem_sharing_nominate_page(struct domain *d,
     page->sharing->handle = get_next_handle();  
 
     /* Create the local gfn info */
-    if ( (gfn_info = mem_sharing_gfn_alloc(page, d, gfn)) == NULL )
+    if ( mem_sharing_gfn_alloc(page, d, gfn) == NULL )
     {
         xfree(page->sharing);
         page->sharing = NULL;
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vo-0000qJ-MV; Thu, 27 Feb 2014 14:27:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vm-0000p9-JV
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:23 +0000
Received: from [193.109.254.147:19801] by server-14.bemta-14.messagelabs.com
	id A1/AC-29228-94B4F035; Thu, 27 Feb 2014 14:27:21 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393511239!1980735!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26269 invoked from network); 27 Feb 2014 14:27:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106281470"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:18 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vi-00037S-TE;
	Thu, 27 Feb 2014 14:27:18 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vi-0007Xc-K7;
	Thu, 27 Feb 2014 14:27:18 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:11 +0000
Message-ID: <1393511233-28942-3-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1393511233-28942-1-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 2/4] x86/shadow: Drop shadow_mode_trap_reads()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This was never actually implemented, and is confusing Coverity.

Coverity CID 1090354

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/multi.c | 30 ++++--------------------------
 xen/include/asm-x86/shadow.h   |  4 ----
 2 files changed, 4 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 3d35537..5c7a7ac 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -692,21 +692,7 @@ _sh_propagate(struct vcpu *v,
                        && (ft == ft_demand_write))
 #endif /* OOS */
                   ) )
-    {
-        if ( shadow_mode_trap_reads(d) )
-        {
-            // if we are trapping both reads & writes, then mark this page
-            // as not present...
-            //
-            sflags &= ~_PAGE_PRESENT;
-        }
-        else
-        {
-            // otherwise, just prevent any writes...
-            //
-            sflags &= ~_PAGE_RW;
-        }
-    }
+        sflags &= ~_PAGE_RW;
 
     // PV guests in 64-bit mode use two different page tables for user vs
     // supervisor permissions, making the guest's _PAGE_USER bit irrelevant.
@@ -3181,18 +3167,10 @@ static int sh_page_fault(struct vcpu *v,
          && !(mfn_is_out_of_sync(gmfn)
               && !(regs->error_code & PFEC_user_mode))
 #endif
-         )
+         && (ft == ft_demand_write) )
     {
-        if ( ft == ft_demand_write )
-        {
-            perfc_incr(shadow_fault_emulate_write);
-            goto emulate;
-        }
-        else if ( shadow_mode_trap_reads(d) && ft == ft_demand_read )
-        {
-            perfc_incr(shadow_fault_emulate_read);
-            goto emulate;
-        }
+        perfc_incr(shadow_fault_emulate_write);
+        goto emulate;
     }
 
     /* Need to hand off device-model MMIO to the device model */
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 348915e..f40cab4 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -44,10 +44,6 @@
 #define shadow_mode_external(_d)  (paging_mode_shadow(_d) && \
                                    paging_mode_external(_d))
 
-/* Xen traps & emulates all reads of all page table pages:
- * not yet supported */
-#define shadow_mode_trap_reads(_d) ({ (void)(_d); 0; })
-
 /*****************************************************************************
  * Entry points into the shadow code */
 
-- 
1.8.5.2



From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vp-0000qu-HK; Thu, 27 Feb 2014 14:27:25 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vn-0000pS-Oh
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:24 +0000
Received: from [85.158.137.68:7896] by server-10.bemta-3.messagelabs.com id
	D0/B9-07302-B4B4F035; Thu, 27 Feb 2014 14:27:23 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393511239!3088882!3
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10279 invoked from network); 27 Feb 2014 14:27:22 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:22 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104654383"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:20 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:19 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vj-00037Y-DQ;
	Thu, 27 Feb 2014 14:27:19 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vj-0007Xm-69;
	Thu, 27 Feb 2014 14:27:19 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:13 +0000
Message-ID: <1393511233-28942-5-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1393511233-28942-1-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA1
Subject: [Xen-devel] [PATCH v2 4/4] bitmaps/bitops: Clarify tests for small
	constant size.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

No semantic changes, just makes the control flow a bit clearer.

I was looking at this because the (-!__builtin_constant_p(x) | x__)
formula is too clever for Coverity, but in fact it always takes me a
minute or two to understand it too. :)

Signed-off-by: Tim Deegan <tim@xen.org>

---

v2: fix find_next_bit macros to evaluate 'addr' exactly once.
---
 xen/include/asm-x86/bitops.h | 62 ++++++++++++++++++++------------------------
 xen/include/xen/bitmap.h     | 30 ++++++++++++---------
 2 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/xen/include/asm-x86/bitops.h b/xen/include/asm-x86/bitops.h
index ab21d92..05ed2d7 100644
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -335,23 +335,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
  * @offset: The bitnumber to start searching at
  * @size: The maximum size to search
  */
-#define find_next_bit(addr, size, off) ({ \
-    unsigned int r__ = (size); \
-    unsigned int o__ = (off); \
-    switch ( -!__builtin_constant_p(size) | r__ ) \
-    { \
-    case 0: (void)(addr); break; \
-    case 1 ... BITS_PER_LONG: \
-        r__ = o__ + __scanbit(*(const unsigned long *)(addr) >> o__, r__); \
-        break; \
-    default: \
-        if ( __builtin_constant_p(off) && !o__ ) \
-            r__ = __find_first_bit(addr, r__); \
-        else \
-            r__ = __find_next_bit(addr, r__, o__); \
-        break; \
-    } \
-    r__; \
+#define find_next_bit(addr, size, off) ({                                   \
+    unsigned int r__;                                                       \
+    const unsigned long *a__ = (addr);                                      \
+    unsigned int s__ = (size);                                              \
+    unsigned int o__ = (off);                                               \
+    if ( __builtin_constant_p(size) && s__ == 0 )                           \
+        r__ = s__;                                                          \
+    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
+        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
+    else if ( __builtin_constant_p(off) && !o__ )                           \
+        r__ = __find_first_bit(a__, s__);                                   \
+    else                                                                    \
+        r__ = __find_next_bit(a__, s__, o__);                               \
+    r__;                                                                    \
 })
 
 /**
@@ -370,23 +367,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
  * @offset: The bitnumber to start searching at
  * @size: The maximum size to search
  */
-#define find_next_zero_bit(addr, size, off) ({ \
-    unsigned int r__ = (size); \
-    unsigned int o__ = (off); \
-    switch ( -!__builtin_constant_p(size) | r__ ) \
-    { \
-    case 0: (void)(addr); break; \
-    case 1 ... BITS_PER_LONG: \
-        r__ = o__ + __scanbit(~*(const unsigned long *)(addr) >> o__, r__); \
-        break; \
-    default: \
-        if ( __builtin_constant_p(off) && !o__ ) \
-            r__ = __find_first_zero_bit(addr, r__); \
-        else \
-            r__ = __find_next_zero_bit(addr, r__, o__); \
-        break; \
-    } \
-    r__; \
+#define find_next_zero_bit(addr, size, off) ({                              \
+    unsigned int r__;                                                       \
+    const unsigned long *a__ = (addr);                                      \
+    unsigned int s__ = (size);                                              \
+    unsigned int o__ = (off);                                               \
+    if ( __builtin_constant_p(size) && s__ == 0 )                           \
+        r__ = s__;                                                          \
+    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
+        r__ = o__ + __scanbit(~*(const unsigned long *)(a__) >> o__, s__);  \
+    else if ( __builtin_constant_p(off) && !o__ )                           \
+        r__ = __find_first_zero_bit(a__, s__);                              \
+    else                                                                    \
+        r__ = __find_next_zero_bit(a__, s__, o__);                          \
+    r__;                                                                    \
 })
 
 /**
diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
index b5ec455..166e1a0 100644
--- a/xen/include/xen/bitmap.h
+++ b/xen/include/xen/bitmap.h
@@ -110,13 +110,14 @@ extern int bitmap_allocate_region(unsigned long *bitmap, int pos, int order);
 
 #define bitmap_bytes(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long))
 
-#define bitmap_switch(nbits, zero_ret, small, large)			\
-	switch (-!__builtin_constant_p(nbits) | (nbits)) {		\
-	case 0:	return zero_ret;					\
-	case 1 ... BITS_PER_LONG:					\
-		small; break;						\
-	default:							\
-		large; break;						\
+#define bitmap_switch(nbits, zero, small, large)			  \
+	unsigned int n__ = (nbits);					  \
+	if (__builtin_constant_p(nbits) && n__ == 0) {			  \
+		zero;							  \
+	} else if (__builtin_constant_p(nbits) && n__ <= BITS_PER_LONG) { \
+		small;							  \
+	} else {							  \
+		large;							  \
 	}
 
 static inline void bitmap_zero(unsigned long *dst, int nbits)
@@ -191,7 +192,8 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
 static inline int bitmap_equal(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_equal(src1, src2, nbits));
 }
@@ -199,7 +201,8 @@ static inline int bitmap_equal(const unsigned long *src1,
 static inline int bitmap_intersects(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0,
 		return __bitmap_intersects(src1, src2, nbits));
 }
@@ -207,21 +210,24 @@ static inline int bitmap_intersects(const unsigned long *src1,
 static inline int bitmap_subset(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !((*src1 & ~*src2) & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_subset(src1, src2, nbits));
 }
 
 static inline int bitmap_empty(const unsigned long *src, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !(*src & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_empty(src, nbits));
 }
 
 static inline int bitmap_full(const unsigned long *src, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !(~*src & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_full(src, nbits));
 }
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vn-0000pX-Lx; Thu, 27 Feb 2014 14:27:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vm-0000p8-1n
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:22 +0000
Received: from [85.158.137.68:7696] by server-5.bemta-3.messagelabs.com id
	3B/CE-04712-94B4F035; Thu, 27 Feb 2014 14:27:21 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393511239!3088882!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9991 invoked from network); 27 Feb 2014 14:27:20 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104654380"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:18 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:18 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vi-00037M-7R;
	Thu, 27 Feb 2014 14:27:18 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vh-0007XU-Qk;
	Thu, 27 Feb 2014 14:27:17 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:09 +0000
Message-ID: <1393511233-28942-1-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 0/4] more Coverity-inspired tidying.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These four patches are small cleanups of things that Coverity complains
about.  AFAICT none of them fixes any bugs, but I do think that they make
the code more readable (i.e. I'm not just mangling the code to make
Coverity happy).

Reposting now that 4.4 has branched, with a v2 of patch 4/4 fixing
a missing argument evaluation.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vp-0000qd-3G; Thu, 27 Feb 2014 14:27:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vn-0000pJ-DO
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:23 +0000
Received: from [85.158.143.35:23153] by server-2.bemta-4.messagelabs.com id
	B3/B2-04779-A4B4F035; Thu, 27 Feb 2014 14:27:22 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393511240!8793254!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6471 invoked from network); 27 Feb 2014 14:27:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104654382"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:19 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vj-00037V-5N;
	Thu, 27 Feb 2014 14:27:19 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vi-0007Xh-UY;
	Thu, 27 Feb 2014 14:27:18 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:12 +0000
Message-ID: <1393511233-28942-4-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1393511233-28942-1-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 3/4] x86/mem_sharing: drop unused variable.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Coverity CID 1087198

Signed-off-by: Tim Deegan <tim@xen.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Andres Lagar-Cavilla <andres@lagarcavilla.org>
---
 xen/arch/x86/mm/mem_sharing.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 4a5d9e8..7ed6594 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -846,7 +846,6 @@ int mem_sharing_nominate_page(struct domain *d,
     mfn_t mfn;
     struct page_info *page = NULL; /* gcc... */
     int ret;
-    struct gfn_info *gfn_info;
 
     *phandle = 0UL;
 
@@ -905,7 +904,7 @@ int mem_sharing_nominate_page(struct domain *d,
     page->sharing->handle = get_next_handle();  
 
     /* Create the local gfn info */
-    if ( (gfn_info = mem_sharing_gfn_alloc(page, d, gfn)) == NULL )
+    if ( mem_sharing_gfn_alloc(page, d, gfn) == NULL )
     {
         xfree(page->sharing);
         page->sharing = NULL;
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vo-0000qJ-MV; Thu, 27 Feb 2014 14:27:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vm-0000p9-JV
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:23 +0000
Received: from [193.109.254.147:19801] by server-14.bemta-14.messagelabs.com
	id A1/AC-29228-94B4F035; Thu, 27 Feb 2014 14:27:21 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393511239!1980735!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26269 invoked from network); 27 Feb 2014 14:27:21 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106281470"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:18 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vi-00037S-TE;
	Thu, 27 Feb 2014 14:27:18 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vi-0007Xc-K7;
	Thu, 27 Feb 2014 14:27:18 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:11 +0000
Message-ID: <1393511233-28942-3-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1393511233-28942-1-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 2/4] x86/shadow: Drop shadow_mode_trap_reads()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This was never actually implemented, and is confusing Coverity.

Coverity CID 1090354

Signed-off-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/multi.c | 30 ++++--------------------------
 xen/include/asm-x86/shadow.h   |  4 ----
 2 files changed, 4 insertions(+), 30 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 3d35537..5c7a7ac 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -692,21 +692,7 @@ _sh_propagate(struct vcpu *v,
                        && (ft == ft_demand_write))
 #endif /* OOS */
                   ) )
-    {
-        if ( shadow_mode_trap_reads(d) )
-        {
-            // if we are trapping both reads & writes, then mark this page
-            // as not present...
-            //
-            sflags &= ~_PAGE_PRESENT;
-        }
-        else
-        {
-            // otherwise, just prevent any writes...
-            //
-            sflags &= ~_PAGE_RW;
-        }
-    }
+        sflags &= ~_PAGE_RW;
 
     // PV guests in 64-bit mode use two different page tables for user vs
     // supervisor permissions, making the guest's _PAGE_USER bit irrelevant.
@@ -3181,18 +3167,10 @@ static int sh_page_fault(struct vcpu *v,
          && !(mfn_is_out_of_sync(gmfn)
               && !(regs->error_code & PFEC_user_mode))
 #endif
-         )
+         && (ft == ft_demand_write) )
     {
-        if ( ft == ft_demand_write )
-        {
-            perfc_incr(shadow_fault_emulate_write);
-            goto emulate;
-        }
-        else if ( shadow_mode_trap_reads(d) && ft == ft_demand_read )
-        {
-            perfc_incr(shadow_fault_emulate_read);
-            goto emulate;
-        }
+        perfc_incr(shadow_fault_emulate_write);
+        goto emulate;
     }
 
     /* Need to hand off device-model MMIO to the device model */
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 348915e..f40cab4 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -44,10 +44,6 @@
 #define shadow_mode_external(_d)  (paging_mode_shadow(_d) && \
                                    paging_mode_external(_d))
 
-/* Xen traps & emulates all reads of all page table pages:
- * not yet supported */
-#define shadow_mode_trap_reads(_d) ({ (void)(_d); 0; })
-
 /*****************************************************************************
  * Entry points into the shadow code */
 
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

These four patches are small cleanups of things that Coverity complains
about.  AFAICT none of them fixes any bugs, but I do think that they make
the code more readable (i.e. I'm not just mangling the code to make
Coverity happy).

Reposting now that 4.4 has branched, with a v2 of patch 4/4 fixing
a missing argument evaluation.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:27:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1vo-0000py-9L; Thu, 27 Feb 2014 14:27:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ1vm-0000pE-RL
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:27:22 +0000
Received: from [85.158.137.68:65014] by server-13.bemta-3.messagelabs.com id
	0B/6B-26923-A4B4F035; Thu, 27 Feb 2014 14:27:22 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393511239!3088882!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10160 invoked from network); 27 Feb 2014 14:27:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:27:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104654381"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:27:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:27:18 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ1vi-00037P-IU;
	Thu, 27 Feb 2014 14:27:18 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ1vi-0007XX-4f;
	Thu, 27 Feb 2014 14:27:18 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 14:27:10 +0000
Message-ID: <1393511233-28942-2-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
In-Reply-To: <1393511233-28942-1-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
MIME-Version: 1.0
X-DLP: MIA2
Subject: [Xen-devel] [PATCH v2 1/4] common/vsprintf: Explicitly treat
	negative lengths as 'unlimited'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The old code relied on implicitly casting negative numbers to size_t,
making a very large limit, which was correct but non-obvious.

Coverity CID 1128575

Signed-off-by: Tim Deegan <tim@xen.org>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/common/vsprintf.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 1a6198e..0a6fa05 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -239,7 +239,7 @@ static char *number(
 static char *string(char *str, char *end, const char *s,
                     int field_width, int precision, int flags)
 {
-    int i, len = strnlen(s, precision);
+    int i, len = (precision < 0) ? strlen(s) : strnlen(s, precision);
 
     if (!(flags & LEFT)) {
         while (len < field_width--) {
-- 
1.8.5.2
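The one-line fix above makes the "negative precision means unlimited" convention explicit instead of relying on the int-to-size_t conversion. A self-contained sketch of the same idiom follows; `my_strnlen` and `bounded_len` are illustrative names, not Xen's functions.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Portable strnlen stand-in (Xen carries its own implementation). */
static size_t my_strnlen(const char *s, size_t maxlen)
{
    size_t n = 0;
    while (n < maxlen && s[n])
        n++;
    return n;
}

/* The pattern the patch makes explicit: a negative precision means
 * "no limit".  The old code got the same result only because the
 * negative int was implicitly converted to a huge size_t. */
static int bounded_len(const char *s, int precision)
{
    return (int)((precision < 0) ? strlen(s)
                                 : my_strnlen(s, (size_t)precision));
}
```

With this helper, a precision of -1 and a precision larger than the string both yield the full length; only a non-negative precision smaller than the string truncates.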


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:29:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ1xS-0001bO-3N; Thu, 27 Feb 2014 14:29:06 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ1xQ-0001ak-Uc
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:29:05 +0000
Received: from [85.158.137.68:41315] by server-1.bemta-3.messagelabs.com id
	5D/61-17293-0BB4F035; Thu, 27 Feb 2014 14:29:04 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393511341!3366295!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21711 invoked from network); 27 Feb 2014 14:29:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:29:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104654745"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:28:35 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:28:35 -0500
Message-ID: <530F4B92.90700@citrix.com>
Date: Thu, 27 Feb 2014 14:28:34 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1392310767-19347-1-git-send-email-david.vrabel@citrix.com>
	<20140224191810.GA7023@phenom.dumpdata.com>
In-Reply-To: <20140224191810.GA7023@phenom.dumpdata.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv2] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 24/02/14 19:18, Konrad Rzeszutek Wilk wrote:
> On Thu, Feb 13, 2014 at 04:59:27PM +0000, David Vrabel wrote:
>> From: David Vrabel <david.vrabel@citrix.com>
>>
>> Hypercalls submitted by user space tools via the privcmd driver can
>> take a long time (potentially many 10s of seconds) if the hypercall
>> has many sub-operations.
>>
>> A fully preemptible kernel may deschedule such a task in any upcall
>> called from a hypercall continuation.
>>
>> However, in a kernel with voluntary or no preemption, hypercall
>> continuations in Xen allow event handlers to be run but the task
>> issuing the hypercall will not be descheduled until the hypercall is
>> complete and the ioctl returns to user space.  These long running
>> tasks may also trigger the kernel's soft lockup detection.
>>
>> Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
>> bracket hypercalls that may be preempted.  Use these in the privcmd
>> driver.
>>
>> When returning from an upcall, call preempt_schedule_irq() if the
>> current task was within a preemptible hypercall.
>>
>> Since preempt_schedule_irq() can move the task to a different CPU,
>> clear and set xen_in_preemptible_hcall around the call.
>>
>> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
>> ---
>> Changes in v2:
>> - Use per-cpu variable to mark preemptible regions
>> - Call preempt_schedule_irq() from the correct place in
>>   xen_hypervisor_callback
> 
> 
> 12929 ERROR: "xen_in_preemptible_hcall" [drivers/xen/xen-privcmd.ko] undefined!

You have privcmd built as a module so xen_in_preemptible_hcall needs to
be exported.  I'll send a v3 shortly.

David
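The missing-export failure above follows from the bracketing design in the patch: a flag is set around the preemptible hypercall and checked on the upcall return path, so a modular privcmd needs the flag's symbol exported. Here is a user-space sketch of the bracketing idea only — the names are hypothetical, and the real code uses a per-CPU kernel variable rather than a thread-local.

```c
#include <assert.h>
#include <stdbool.h>

/* Thread-local stand-in for the per-CPU xen_in_preemptible_hcall flag.
 * In the kernel patch this symbol must be exported so a modular
 * drivers/xen/xen-privcmd.ko can reference it. */
static _Thread_local bool in_preemptible_hcall;

static void preemptible_hcall_begin(void) { in_preemptible_hcall = true; }
static void preemptible_hcall_end(void)   { in_preemptible_hcall = false; }

/* What the upcall return path consults before deciding whether the
 * interrupted task may be rescheduled. */
static bool may_preempt(void) { return in_preemptible_hcall; }
```

The begin/end pair brackets only the long-running call, so the scheduler is never invoked from a context that did not opt in.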

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Thu Feb 27 14:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ26y-0002D5-FC; Thu, 27 Feb 2014 14:38:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WJ26x-0002CF-94
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 14:38:55 +0000
Received: from [85.158.139.211:17650] by server-9.bemta-5.messagelabs.com id
	F0/DA-11237-EFD4F035; Thu, 27 Feb 2014 14:38:54 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393511933!6681529!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31990 invoked from network); 27 Feb 2014 14:38:53 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:38:53 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so1864966eak.15
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Feb 2014 06:38:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=nZF5gxvKivoh1M2p5zMr04SGN5o46J+jhkmwTKpwsN8=;
	b=BcdYP78/LZ8Eeu8mnPX+KQJbjuD++gLW0BcgGUAuXZDQelEN6QE2aZaN+HUenpYi/8
	nBaftGoOAqjiJ58TQxSTCsxU7yKIpR4wFn3aC22nUHibntwTOAS7xfEUv0/8Ma2q972K
	yfIIS5Fg+zhialpdyeJYlDuDUJUYOg+6v2mx8fZ7mKriu4P/YCc1OmKAw290uN7cjoYo
	GIHA9XgyVUZ//4/iUHpcFA8HjlfaEM1mJ0swvijlu0ugPOkLjWj0DzO0EAYW5v183yob
	2gX+h9jzXLjlX2m7w/I65nCuyrmaP560d3fUsJHx25vy/g+bDxz3J99mU3Mvm3ZvnzZe
	EyBA==
X-Gm-Message-State: ALoCoQlRGUMvnLdukO7hvNMr0dLqLlqNNobURbJoJxQJSWkhRb68wOj1RozYOaLjvzVF9wRU5gY7
X-Received: by 10.14.47.133 with SMTP id t5mr10305383eeb.96.1393511932882;
	Thu, 27 Feb 2014 06:38:52 -0800 (PST)
Received: from [192.168.1.15] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id 43sm17613490eeh.13.2014.02.27.06.38.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 06:38:52 -0800 (PST)
Message-ID: <530F4E05.1070708@m2r.biz>
Date: Thu, 27 Feb 2014 15:39:01 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: xen-devel@lists.xensource.com
References: <1393065431-11802-1-git-send-email-fabio.fantoni@m2r.biz>
In-Reply-To: <1393065431-11802-1-git-send-email-fabio.fantoni@m2r.biz>
Cc: anthony.perard@citrix.com, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v3 RESEND] libxl: Add none to vga parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 22/02/2014 11:37, Fabio Fantoni ha scritto:
> Usage:
>    vga="none"
>
> Make it possible to not have an emulated vga on hvm domUs.
>
> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
>
> ---
>
> Changes in v3:
> - set video_memkb to 0 if vga is none.
> - remove a check on a condition that is no longer needed.
>
> Changes in v2:
> - libxl_dm.c:
>   if vga is none, on qemu traditional:
>    - add -vga none parameter.
>    - do not add -videoram parameter.
>
> ---
>   docs/man/xl.cfg.pod.5       |    2 +-
>   tools/libxl/libxl_create.c  |    6 ++++++
>   tools/libxl/libxl_dm.c      |    5 +++++
>   tools/libxl/libxl_types.idl |    1 +
>   tools/libxl/xl_cmdimpl.c    |    2 ++
>   5 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index e15a49f..2f36143 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1082,7 +1082,7 @@ This option is deprecated, use vga="stdvga" instead.
>   
>   =item B<vga="STRING">
>   
> -Selects the emulated video card (stdvga|cirrus).
> +Selects the emulated video card (none|stdvga|cirrus).
>   The default is cirrus.
>   
>   =item B<vnc=BOOLEAN>
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..9110394 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -226,6 +226,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>           switch (b_info->device_model_version) {
>           case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>               switch (b_info->u.hvm.vga.kind) {
> +            case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +                b_info->video_memkb = 0;
> +                break;
>               case LIBXL_VGA_INTERFACE_TYPE_STD:
>                   if (b_info->video_memkb == LIBXL_MEMKB_DEFAULT)
>                       b_info->video_memkb = 8 * 1024;
> @@ -246,6 +249,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>           case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
>           default:
>               switch (b_info->u.hvm.vga.kind) {
> +            case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +                b_info->video_memkb = 0;
> +                break;
>               case LIBXL_VGA_INTERFACE_TYPE_STD:
>                   if (b_info->video_memkb == LIBXL_MEMKB_DEFAULT)
>                       b_info->video_memkb = 16 * 1024;
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index f6f7bbd..761bb61 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -217,6 +217,9 @@ static char ** libxl__build_device_model_args_old(libxl__gc *gc,
>               break;
>           case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
>               break;
> +        case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +            flexarray_append_pair(dm_args, "-vga", "none");
> +            break;
>           }
>   
>           if (b_info->u.hvm.boot) {
> @@ -515,6 +518,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>                   GCSPRINTF("vga.vram_size_mb=%d",
>                   libxl__sizekb_to_mb(b_info->video_memkb)));
>               break;
> +        case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +            break;
>           }
>   
>           if (b_info->u.hvm.boot) {
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..b5a8387 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -153,6 +153,7 @@ libxl_shutdown_reason = Enumeration("shutdown_reason", [
>   libxl_vga_interface_type = Enumeration("vga_interface_type", [
>       (1, "CIRRUS"),
>       (2, "STD"),
> +    (3, "NONE"),
>       ], init_val = 1)
>   
>   libxl_vendor_device = Enumeration("vendor_device", [
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 4fc46eb..4d720b4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1667,6 +1667,8 @@ skip_vfb:
>                   b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
>               } else if (!strcmp(buf, "cirrus")) {
>                   b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
> +            } else if (!strcmp(buf, "none")) {
> +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
>               } else {
>                   fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
>                   exit(1);

Ping
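The xl_cmdimpl.c hunk quoted above extends a plain strcmp() chain mapping the config string to an enum. A minimal sketch of that dispatch: the enum mirrors the values in the libxl_types.idl hunk, but `parse_vga` and the identifiers here are illustrative, not the libxl API.

```c
#include <assert.h>
#include <string.h>

/* Illustrative mirror of libxl_vga_interface_type from the IDL hunk:
 * CIRRUS=1 (the default), STD=2, and the newly added NONE=3. */
typedef enum {
    VGA_UNKNOWN = -1,
    VGA_CIRRUS  = 1,
    VGA_STD     = 2,
    VGA_NONE    = 3
} vga_kind;

/* Hypothetical helper showing the strcmp() dispatch the patch extends
 * with the "none" case. */
static vga_kind parse_vga(const char *buf)
{
    if (!strcmp(buf, "stdvga")) return VGA_STD;
    if (!strcmp(buf, "cirrus")) return VGA_CIRRUS;
    if (!strcmp(buf, "none"))   return VGA_NONE;
    return VGA_UNKNOWN; /* xl prints an error and exits in this case */
}
```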

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:39:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ26y-0002D5-FC; Thu, 27 Feb 2014 14:38:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <fabio.fantoni@m2r.biz>) id 1WJ26x-0002CF-94
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 14:38:55 +0000
Received: from [85.158.139.211:17650] by server-9.bemta-5.messagelabs.com id
	F0/DA-11237-EFD4F035; Thu, 27 Feb 2014 14:38:54 +0000
X-Env-Sender: fabio.fantoni@m2r.biz
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393511933!6681529!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=1.7 required=7.0 tests=BIZ_TLD
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31990 invoked from network); 27 Feb 2014 14:38:53 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-8.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:38:53 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so1864966eak.15
	for <xen-devel@lists.xensource.com>;
	Thu, 27 Feb 2014 06:38:53 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=nZF5gxvKivoh1M2p5zMr04SGN5o46J+jhkmwTKpwsN8=;
	b=BcdYP78/LZ8Eeu8mnPX+KQJbjuD++gLW0BcgGUAuXZDQelEN6QE2aZaN+HUenpYi/8
	nBaftGoOAqjiJ58TQxSTCsxU7yKIpR4wFn3aC22nUHibntwTOAS7xfEUv0/8Ma2q972K
	yfIIS5Fg+zhialpdyeJYlDuDUJUYOg+6v2mx8fZ7mKriu4P/YCc1OmKAw290uN7cjoYo
	GIHA9XgyVUZ//4/iUHpcFA8HjlfaEM1mJ0swvijlu0ugPOkLjWj0DzO0EAYW5v183yob
	2gX+h9jzXLjlX2m7w/I65nCuyrmaP560d3fUsJHx25vy/g+bDxz3J99mU3Mvm3ZvnzZe
	EyBA==
X-Gm-Message-State: ALoCoQlRGUMvnLdukO7hvNMr0dLqLlqNNobURbJoJxQJSWkhRb68wOj1RozYOaLjvzVF9wRU5gY7
X-Received: by 10.14.47.133 with SMTP id t5mr10305383eeb.96.1393511932882;
	Thu, 27 Feb 2014 06:38:52 -0800 (PST)
Received: from [192.168.1.15] (ip-73-126.sn2.eutelia.it. [83.211.73.126])
	by mx.google.com with ESMTPSA id 43sm17613490eeh.13.2014.02.27.06.38.51
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 06:38:52 -0800 (PST)
Message-ID: <530F4E05.1070708@m2r.biz>
Date: Thu, 27 Feb 2014 15:39:01 +0100
From: Fabio Fantoni <fabio.fantoni@m2r.biz>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: xen-devel@lists.xensource.com
References: <1393065431-11802-1-git-send-email-fabio.fantoni@m2r.biz>
In-Reply-To: <1393065431-11802-1-git-send-email-fabio.fantoni@m2r.biz>
Cc: anthony.perard@citrix.com, Ian.Jackson@eu.citrix.com,
	Ian.Campbell@citrix.com, Stefano.Stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH v3 RESEND] libxl: Add none to vga parameter
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 22/02/2014 11:37, Fabio Fantoni wrote:
> Usage:
>    vga="none"
>
> Make it possible to not have an emulated VGA on HVM domUs.
>
> Signed-off-by: Fabio Fantoni <fabio.fantoni@m2r.biz>
>
> ---
>
> Changes in v3:
> - set video_memkb to 0 if vga is none.
> - remove a check on a condition that is no longer needed.
>
> Changes in v2:
> - libxl_dm.c:
>   if vga is none, on qemu traditional:
>    - add -vga none parameter.
>    - do not add -videoram parameter.
>
> ---
>   docs/man/xl.cfg.pod.5       |    2 +-
>   tools/libxl/libxl_create.c  |    6 ++++++
>   tools/libxl/libxl_dm.c      |    5 +++++
>   tools/libxl/libxl_types.idl |    1 +
>   tools/libxl/xl_cmdimpl.c    |    2 ++
>   5 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/docs/man/xl.cfg.pod.5 b/docs/man/xl.cfg.pod.5
> index e15a49f..2f36143 100644
> --- a/docs/man/xl.cfg.pod.5
> +++ b/docs/man/xl.cfg.pod.5
> @@ -1082,7 +1082,7 @@ This option is deprecated, use vga="stdvga" instead.
>   
>   =item B<vga="STRING">
>   
> -Selects the emulated video card (stdvga|cirrus).
> +Selects the emulated video card (none|stdvga|cirrus).
>   The default is cirrus.
>   
>   =item B<vnc=BOOLEAN>
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..9110394 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -226,6 +226,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>           switch (b_info->device_model_version) {
>           case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
>               switch (b_info->u.hvm.vga.kind) {
> +            case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +                b_info->video_memkb = 0;
> +                break;
>               case LIBXL_VGA_INTERFACE_TYPE_STD:
>                   if (b_info->video_memkb == LIBXL_MEMKB_DEFAULT)
>                       b_info->video_memkb = 8 * 1024;
> @@ -246,6 +249,9 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
>           case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
>           default:
>               switch (b_info->u.hvm.vga.kind) {
> +            case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +                b_info->video_memkb = 0;
> +                break;
>               case LIBXL_VGA_INTERFACE_TYPE_STD:
>                   if (b_info->video_memkb == LIBXL_MEMKB_DEFAULT)
>                       b_info->video_memkb = 16 * 1024;
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index f6f7bbd..761bb61 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -217,6 +217,9 @@ static char ** libxl__build_device_model_args_old(libxl__gc *gc,
>               break;
>           case LIBXL_VGA_INTERFACE_TYPE_CIRRUS:
>               break;
> +        case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +            flexarray_append_pair(dm_args, "-vga", "none");
> +            break;
>           }
>   
>           if (b_info->u.hvm.boot) {
> @@ -515,6 +518,8 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
>                   GCSPRINTF("vga.vram_size_mb=%d",
>                   libxl__sizekb_to_mb(b_info->video_memkb)));
>               break;
> +        case LIBXL_VGA_INTERFACE_TYPE_NONE:
> +            break;
>           }
>   
>           if (b_info->u.hvm.boot) {
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 649ce50..b5a8387 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -153,6 +153,7 @@ libxl_shutdown_reason = Enumeration("shutdown_reason", [
>   libxl_vga_interface_type = Enumeration("vga_interface_type", [
>       (1, "CIRRUS"),
>       (2, "STD"),
> +    (3, "NONE"),
>       ], init_val = 1)
>   
>   libxl_vendor_device = Enumeration("vendor_device", [
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 4fc46eb..4d720b4 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1667,6 +1667,8 @@ skip_vfb:
>                   b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
>               } else if (!strcmp(buf, "cirrus")) {
>                   b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
> +            } else if (!strcmp(buf, "none")) {
> +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_NONE;
>               } else {
>                   fprintf(stderr, "Unknown vga \"%s\" specified\n", buf);
>                   exit(1);

Ping
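
For context, with this patch applied an xl guest config using the new value would look something like the fragment below (a sketch only; every value except vga= is illustrative):

```
# Illustrative HVM guest config for xl; vga="none" is the new bit.
builder = "hvm"
name    = "example-hvm"
memory  = 1024
vga     = "none"   # no emulated VGA; libxl forces video_memkb to 0
vnc     = 0
```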

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:39:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ27y-0002Vz-3z; Thu, 27 Feb 2014 14:39:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WJ27w-0002VW-8e
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:39:56 +0000
Received: from [85.158.139.211:12019] by server-9.bemta-5.messagelabs.com id
	70/1D-11237-B3E4F035; Thu, 27 Feb 2014 14:39:55 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393511993!6652181!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32543 invoked from network); 27 Feb 2014 14:39:54 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:39:54 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104658681"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 14:39:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:39:52 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WJ27s-0003IU-Cn;
	Thu, 27 Feb 2014 14:39:52 +0000
Message-ID: <530F4E33.1060403@eu.citrix.com>
Date: Thu, 27 Feb 2014 14:39:47 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, <xen-devel@lists.xen.org>
References: <1393416780-10912-1-git-send-email-ian.campbell@citrix.com>
	<1393416841.18730.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1393416841.18730.38.camel@kazak.uk.xensource.com>
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com, julien.grall@linaro.org, tim@xen.org,
	stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] libxl: arm: do not create /chosen/bootargs
 in DTB if no cmdline is specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 12:14 PM, Ian Campbell wrote:
> On Wed, 2014-02-26 at 12:13 +0000, Ian Campbell wrote:
>> Otherwise we dereference a NULL pointer.
>>
>> I saw this while experimenting with libvirt on Xen on ARM; xl already checks
>> that the command line is non-NULL and provides "" as a default.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> Cc: george.dunlap@citrix.com>
> Typo (extra ">") so George's CC got missed out...

I'm not aware (yet) of anything that might make us delay the release.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:44:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Br-0002sj-0Y; Thu, 27 Feb 2014 14:43:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WJ2Bp-0002sX-KQ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:43:57 +0000
Received: from [85.158.139.211:46046] by server-4.bemta-5.messagelabs.com id
	6A/57-08092-C2F4F035; Thu, 27 Feb 2014 14:43:56 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393512235!6672943!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18599 invoked from network); 27 Feb 2014 14:43:55 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 27 Feb 2014 14:43:55 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53866 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WJ2A6-0004D6-I0; Thu, 27 Feb 2014 15:42:10 +0100
Date: Thu, 27 Feb 2014 15:43:51 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <529743590.20140227154351@eikelenboom.it>
To: Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <20140227141812.GE16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
MIME-Version: 1.0
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, February 27, 2014, 3:18:12 PM, you wrote:

> On Wed, Feb 26, 2014 at 04:11:23PM +0100, Sander Eikelenboom wrote:
>> 
>> Wednesday, February 26, 2014, 10:14:42 AM, you wrote:
>> 
>> 
>> > Friday, February 21, 2014, 7:32:08 AM, you wrote:
>> 
>> 
>> >> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>> >>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>> >>>
>> >>>
>> >>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>> >>>>> Hi All,
>> >>>>>
>> >>>>> I'm currently having some network troubles with Xen and recent linux kernels.
>> >>>>>
>> >>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>> >>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>> >>>>>
>> >>>>>     In the guest:
>> >>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [57539.859610] net eth0: Need more slots
>> >>>>>     [58157.675939] net eth0: Need more slots
>> >>>>>     [58725.344712] net eth0: Need more slots
>> >>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [61815.849225] net eth0: Need more slots
>> >>>> This issue is familiar... and I thought it got fixed.
>> >>>>   From the original analysis of a similar issue I hit before, the root
>> >>>> cause is that netback still creates a response when the ring is full. I
>> >>>> remember a larger MTU could trigger this issue before; what is the MTU size?
>> >>> In dom0 both for the physical nics and the guest vif's MTU=1500
>> >>> In domU the eth0 also has MTU=1500.
>> >>>
>> >>> So it's not jumbo frames ... just the same plain defaults everywhere.
>> >>>
>> >>> With the patch from Wei that solves the other issue, I'm still seeing the "Need more slots" issue on 3.14-rc3 + Wei's patch now.
>> >>> I have extended the "need more slots" warning to also print the cons, slots, max, rx->offset and size; hope that gives some more insight.
>> >>> But it is indeed the VM where I had similar issues before; the primary thing this VM does is 2 simultaneous rsyncs (one push, one pull) with some gigabytes of data.
>> >>>
>> >>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant reference" as seen below; I don't know if it's a cause or an effect though.
>> 
>> >> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
>> >> Probably the response overlaps the request and grantcopy returns an error
>> >> when using a wrong grant reference; netback then returns resp->status set to
>> >> XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the frontend.
>> >> Would it be possible to print a log in xenvif_rx_action of netback to see
>> >> whether something is wrong with max slots and used slots?
>> 
>> >> Thanks
>> >> Annie
>> 
>> > Looking more closely, these are perhaps 2 different issues ... the bad grant
>> > references do not happen at the same time as the netfront messages in the guest.
>> 
>> > I added some debug patches to the kernel netback, netfront and Xen grant table code (see below).
>> > One of the things was to simplify the code for the debug key that prints the grant tables; the
>> > present code takes too long to execute and brings down the box due to stalls and NMIs. So it
>> > now only prints the number of entries per domain.
>> 
>> 
>> > Issue 1: grant_table.c:1858:d0 Bad grant reference
>> 
>> > After running the box for just one night (with 15 VMs) I get these mentions of "Bad grant reference".
>> > The maptrack also seems to increase quite fast, and the number of entries seems to have gone up quite fast as well.
>> 
>> > Most domains have just one disk (blkfront/blkback) and one NIC; a few have a second disk.
>> > The blk drivers use persistent grants, so I would assume they would reuse those and not increase the count (by much).
>> 

> As far as I can tell netfront has a pool of grant references and it
> will BUG_ON() if there are no grefs in the pool when you request one.
> Since your DomU didn't crash, I suspect the book-keeping is still
> intact.

>> > Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 sometime during the night.
>> > Domain 7 is the domain that happens to give the netfront messages.
>> 
>> > I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries.
>> > Also, is this number of grant entries "normal"? Or could it be a leak somewhere?
>> 

> I suppose Dom0 expanding its maptrack is normal. I see it as well when I
> increase the number of domains. But if it keeps increasing while the
> number of DomUs stays the same, then it is not normal.

It keeps increasing (without (re)starting domains), although eventually it looks like it is settling at around a maptrack size of 31/256 frames.


> Presumably you only have netfront and blkfront using the grant table, and
> your workload as described below involved both, so it would be hard to
> tell which one is faulty.

> There are no immediate functional changes regarding slot counting in this
> dev cycle for the network driver. But there are some changes to blkfront/back
> which seem interesting (memory related).

Hmm, all the times I get a "Bad grant reference" are related to that one specific guest.
And it's not doing much blkback/front I/O (it's providing WebDAV and rsync to network-based storage (glusterfs)).

Added some more printks:

@@ -2072,7 +2076,11 @@ __gnttab_copy(
                                       &s_frame, &s_pg,
                                       &source_off, &source_len, 1);
         if ( rc != GNTST_okay )
-            goto error_out;
+            PIN_FAIL(error_out, GNTST_general_error,
+                     "?!?!? src_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
+                     current->domain->domain_id, op->source.domid, op->dest.domid);
+
+
         have_s_grant = 1;
         if ( op->source.offset < source_off ||
              op->len > source_len )
@@ -2096,7 +2104,11 @@ __gnttab_copy(
                                       current->domain->domain_id, 0,
                                       &d_frame, &d_pg, &dest_off, &dest_len, 1);
         if ( rc != GNTST_okay )
-            goto error_out;
+            PIN_FAIL(error_out, GNTST_general_error,
+                     "?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
+                     current->domain->domain_id, op->source.domid, op->dest.domid);
+
+
         have_d_grant = 1;


this comes out:

(XEN) [2014-02-27 02:34:37] grant_table.c:2109:d0 ?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:0 src_dom_id:32752 dest_dom_id:7


> My suggestion is, if you have a working baseline, you can try to set up
> different frontend/backend combinations to help narrow down the
> problem.

Will see what I can do after the weekend.

> Wei.

<snip>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:44:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:44:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Br-0002sj-0Y; Thu, 27 Feb 2014 14:43:59 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WJ2Bp-0002sX-KQ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:43:57 +0000
Received: from [85.158.139.211:46046] by server-4.bemta-5.messagelabs.com id
	6A/57-08092-C2F4F035; Thu, 27 Feb 2014 14:43:56 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393512235!6672943!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18599 invoked from network); 27 Feb 2014 14:43:55 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES128-SHA
	encrypted SMTP; 27 Feb 2014 14:43:55 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:53866 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WJ2A6-0004D6-I0; Thu, 27 Feb 2014 15:42:10 +0100
Date: Thu, 27 Feb 2014 15:43:51 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <529743590.20140227154351@eikelenboom.it>
To: Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <20140227141812.GE16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
MIME-Version: 1.0
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, February 27, 2014, 3:18:12 PM, you wrote:

> On Wed, Feb 26, 2014 at 04:11:23PM +0100, Sander Eikelenboom wrote:
>> 
>> Wednesday, February 26, 2014, 10:14:42 AM, you wrote:
>> 
>> 
>> > Friday, February 21, 2014, 7:32:08 AM, you wrote:
>> 
>> 
>> >> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>> >>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>> >>>
>> >>>
>> >>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>> >>>>> Hi All,
>> >>>>>
>> >>>>> I'm currently having some network troubles with Xen and recent linux kernels.
>> >>>>>
>> >>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>> >>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>> >>>>>
>> >>>>>     In the guest:
>> >>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [57539.859610] net eth0: Need more slots
>> >>>>>     [58157.675939] net eth0: Need more slots
>> >>>>>     [58725.344712] net eth0: Need more slots
>> >>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>> >>>>>     [61815.849225] net eth0: Need more slots
>> >>>> This issue is familiar... and I thought it get fixed.
>> >>>>   From original analysis for similar issue I hit before, the root cause
>> >>>> is netback still creates response when the ring is full. I remember
>> >>>> larger MTU can trigger this issue before, what is the MTU size?
>> >>> In dom0 both for the physical nics and the guest vif's MTU=1500
>> >>> In domU the eth0 also has MTU=1500.
>> >>>
>> >>> So it's not jumbo frames .. just everywhere the same plain defaults ..
>> >>>
>> >>> With the patch from Wei that solves the other issue, i'm still seeing the Need more slots issue on 3.14-rc3+wei's patch now.
>> >>> I have extended the "need more slots warn" to also print the cons, slots, max,  rx->offset, size, hope that gives some more insight.
>> >>> But it indeed is the VM were i had similar issues before, the primary thing this VM does is 2 simultaneous rsync's (one push one pull) with some gigabytes of data.
>> >>>
>> >>> This time it was also acompanied by a "grant_table.c:1857:d0 Bad grant reference " as seen below, don't know if it's a cause or a effect though.
>> 
>> >> The log "grant_table.c:1857:d0 Bad grant reference " was also seen before.
>> >> Probably the response overlaps the request and grantcopy return error 
>> >> when using wrong grant reference, Netback returns resp->status with 
>> >> ||XEN_NETIF_RSP_ERROR(-1) which is 4294967295 printed above from frontend.
>> >> Would it be possible to print log in xenvif_rx_action of netback to see 
>> >> whether something wrong with max slots and used slots?
>> 
>> >> Thanks
>> >> Annie
>> 
>> > Looking more closely it are perhaps 2 different issues ... the bad grant references do not happen
>> > at the same time as the netfront messages in the guest.
>> 
>> > I added some debugpatches to the kernel netback, netfront and xen granttable code (see below)
>> > One of the things was to simplify the code for the debug key to print the granttables, the present code
>> > takes too long to execute and brings down the box due to stalls and NMI's. So it now only prints
>> > the nr of entries per domain.
>> 
>> 
>> > Issue 1: grant_table.c:1858:d0 Bad grant reference
>> 
>> > After running the box for just one night (with 15 VM's) i get these mentions of "Bad grant reference".
>> > The maptrack also seems to increase quite fast and the number of entries seem to have gone up quite fast as well.
>> 
>> > Most domains have just one disk(blkfront/blkback) and one nic, a few have a second disk.
>> > The blk drivers use persistent grants so i would assume it would reuse those and not increase it (by much).
>> 

> As far as I can tell netfront has a pool of grant references and it
> will BUG_ON() if there's no grefs in the pool when you request one.
> Since your DomU didn't crash so I suspect the book-keeping is still
> intact.

>> > Domain 1 seems to have increased it's nr_grant_entries from 2048 to 3072 somewhere this night.
>> > Domain 7 is the domain that happens to give the netfront messages.
>> 
>> > I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries ..
>> > Also is this amount of grant entries "normal" ? or could it be a leak somewhere ?
>> 

> I suppose Dom0 expanding its maptrack is normal. I see as well when I
> increase the number of domains. But if it keeps increasing while the
> number of DomUs stay the same then it is not normal.

It keeps increasing (without (re)starting domains) although eventually it looks like it is settling at a round a maptrack size of 31/256 frames.


> Presumably you only have netfront and blkfront using the grant table, and
> your workload as described below involves both, so it would be hard to
> tell which one is faulty.

> There are no immediate functional changes regarding slot counting in this
> dev cycle for the network driver. But there are some changes to blkfront/back
> which seem interesting (memory related).

Hmm, all the times I get a "Bad grant reference" are related to that one specific guest.
And it's not doing much blkback/front I/O (it's providing webdav and rsync to network-based storage (glusterfs)).

Added some more printk's:

@@ -2072,7 +2076,11 @@ __gnttab_copy(
                                       &s_frame, &s_pg,
                                       &source_off, &source_len, 1);
         if ( rc != GNTST_okay )
-            goto error_out;
+            PIN_FAIL(error_out, GNTST_general_error,
+                     "?!?!? src_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
+                     current->domain->domain_id, op->source.domid, op->dest.domid);
+
+
         have_s_grant = 1;
         if ( op->source.offset < source_off ||
              op->len > source_len )
@@ -2096,7 +2104,11 @@ __gnttab_copy(
                                       current->domain->domain_id, 0,
                                       &d_frame, &d_pg, &dest_off, &dest_len, 1);
         if ( rc != GNTST_okay )
-            goto error_out;
+            PIN_FAIL(error_out, GNTST_general_error,
+                     "?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
+                     current->domain->domain_id, op->source.domid, op->dest.domid);
+
+
         have_d_grant = 1;


this comes out:

(XEN) [2014-02-27 02:34:37] grant_table.c:2109:d0 ?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:0 src_dom_id:32752 dest_dom_id:7


> My suggestion is, if you have a working base line, you can try to setup
> different frontend / backend combination to help narrow down the
> problem.

Will see what I can do after the weekend.

> Wei.

<snip>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:45:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Db-000356-80; Thu, 27 Feb 2014 14:45:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ2DZ-00034o-3W
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 14:45:45 +0000
Received: from [193.109.254.147:35803] by server-14.bemta-14.messagelabs.com
	id 90/DB-29228-89F4F035; Thu, 27 Feb 2014 14:45:44 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393512343!1976726!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9030 invoked from network); 27 Feb 2014 14:45:43 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 14:45:43 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ2DS-000IUu-Pr; Thu, 27 Feb 2014 14:45:38 +0000
Date: Thu, 27 Feb 2014 15:45:38 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140227144538.GC53925@deinos.phlegethon.org>
References: <530B51B0020000780011EC46@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530B51B0020000780011EC46@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Yang Z Zhang <yang.z.zhang@intel.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Eddie Dong <eddie.dong@intel.com>, Jun Nakajima <jun.nakajima@intel.com>
Subject: Re: [Xen-devel] [RFC] utilizing EPT_MISCONFIG VM exit
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 13:05 +0000 on 24 Feb (1393243536), Jan Beulich wrote:
> Attached draft patch (still having debugging code in it, and assuming
> other adjustments having happened before) demonstrates that we
> should evaluate whether - not only for dealing with the memory type
> adjustments here, but for basically everything currently done through
> ->change_entry_type_global() - utilizing the EPT_MISCONFIG VM exit
> would be an architecturally clean approach. The fundamental idea
> here is to defer updates to the page tables until the respective
> entries actually get used, instead of iterating through all page tables
> when the change is being requested, thus
> - avoiding (here) or eliminating (elsewhere) long lasting operations
>   without having to introduce expensive/fragile preemption handling
> - leaving unaffected the sharing of the page tables with the IOMMU
>   (since the EPT memory type field is available to the programmer on
>   the IOMMU side; we obviously can't use the read/write bits without
>   affecting the IOMMU)

Looks like a pretty good plan to me. :)

> The main question obviously is whether it is architecturally safe to
> use any particular, presently invalid memory type (right now the
> patches use type 7, i.e. the value defined for UC- in the PAT MSR
> only), or whether such an invalid type could be determined at run
> time.

Another question is whether we can easily do the same on AMD.  The AMD
nested pagetable fault will flag reserved-bit errors in EXITINFO1,
which looks good enough, but the AMD IOMMU spec lists all the PTE's
reserved bits as reserved in IOMMU too. :|

> Obviously if on EPT we can go this route, the goal ought to be to
> eliminate ->change_entry_type_global() altogether (i.e. also from
> the generic P2M code) by using on-access adjustments instead of
> on-request ones. Quite likely that would involve adding an address
> range to ->memory_type_changed().

Sure, that makes sense.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:47:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Eq-0003G6-Pv; Thu, 27 Feb 2014 14:47:04 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WJ2Eo-0003Fi-WA
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 14:47:03 +0000
Received: from [85.158.137.68:29096] by server-11.bemta-3.messagelabs.com id
	7E/BB-04255-6EF4F035; Thu, 27 Feb 2014 14:47:02 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393512420!4630354!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18011 invoked from network); 27 Feb 2014 14:47:01 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-3.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 14:47:01 -0000
Received: from int-mx01.intmail.prod.int.phx2.redhat.com
	(int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1REjpfe008177
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 09:45:52 -0500
Received: from yakj.usersys.redhat.com (dhcp-176-198.mxp.redhat.com
	[10.32.176.198])
	by int-mx01.intmail.prod.int.phx2.redhat.com (8.13.8/8.13.8) with ESMTP
	id s1REjiON011459; Thu, 27 Feb 2014 09:45:45 -0500
Message-ID: <530F4F98.2080308@redhat.com>
Date: Thu, 27 Feb 2014 15:45:44 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
	<530F4949.4050706@citrix.com>
In-Reply-To: <530F4949.4050706@citrix.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.67 on 10.5.11.11
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, x86@kernel.org,
	Ingo Molnar <mingo@redhat.com>, Scott J Norton <scott.norton@hp.com>,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Waiman Long <Waiman.Long@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/2014 15:18, David Vrabel wrote:
> On 27/02/14 13:11, Paolo Bonzini wrote:
>> On 27/02/2014 13:11, David Vrabel wrote:
>>>>> This patch adds para-virtualization support to the queue spinlock code
>>>>> by enabling the queue head to kick the lock holder CPU, if known,
>>>>> when the lock isn't released for a certain amount of time. It
>>>>> also enables the mutual monitoring of the queue head CPU and the
>>>>> following node CPU in the queue to make sure that their CPUs will
>>>>> stay scheduled in.
>>> I'm not really understanding how this is supposed to work.  There
>>> appears to be an assumption that a guest can keep one of its VCPUs
>>> running by repeatedly kicking it?  This is not possible under Xen and I
>>> doubt it's possible under KVM or any other hypervisor.
>>
>> KVM allows any VCPU to wake up a currently halted VCPU of its choice,
>> see Documentation/virtual/kvm/hypercalls.txt.
>
> But neither of the VCPUs being kicked here are halted -- they're either
> running or runnable (descheduled by the hypervisor).

/me actually looks at Waiman's code...

Right, this is really different from pvticketlocks, where the *unlock* 
primitive wakes up a sleeping VCPU.  It is more similar to PLE 
(pause-loop exiting).

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:48:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:48:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Gd-0003Sk-DK; Thu, 27 Feb 2014 14:48:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ2Gc-0003SM-1Y
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:48:54 +0000
Received: from [193.109.254.147:56602] by server-5.bemta-14.messagelabs.com id
	B1/69-16688-5505F035; Thu, 27 Feb 2014 14:48:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393512531!7262218!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7708 invoked from network); 27 Feb 2014 14:48:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:48:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; 
	d="scan'208,217";a="106288650"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:48:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:48:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ2GY-0003QS-6y;
	Thu, 27 Feb 2014 14:48:50 +0000
Message-ID: <530F5052.8040308@citrix.com>
Date: Thu, 27 Feb 2014 14:48:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Gerard Spivey <gerard.spivey@gmail.com>
References: <9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com>
In-Reply-To: <9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] CPU/RAM/PCI diagram tool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2163979839363678782=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2163979839363678782==
Content-Type: multipart/alternative;
	boundary="------------070005030101060708030901"

--------------070005030101060708030901
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 8bit

On 27/02/14 04:48, Gerard Spivey wrote:
> Hello All,
>
> My name is Gerard.
> I am new to the Xen development community and would like to get started.
> I saw the Xen development projects list and will start on the
> CPU/RAM/PCI diagram tool as my first project.
>
> During my day job I’m a software developer writing networking
> applications for a variety of computing architectures (x86, Tilera,
> and more),
> using technologies similar to and including PF_RINGS and DPDK. Due to
> the work I’ve been doing the past few years, I've taken an interest in
> the Xen project.
>
> I’m looking forward to working on the project and engaging with the
> Xen developer community.
> I’m on the ##xen channel as SpiffySpivey.
>
> Thanks!
>
>
> Gerard

Hello,

Thank you for taking an interest.

Having a day job, I presume you are not associated with GSoC, and are
doing this of your own volition?  As you can imagine, there is
substantially more than one project's worth of potential work to be done,
and plenty of room for expansion.

How much knowledge do you have of x86 architecture and system topology, or
of the methods of conveying this information?

~Andrew

--------------070005030101060708030901
Content-Type: text/html; charset="windows-1252"
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 27/02/14 04:48, Gerard Spivey wrote:<br>
    </div>
    <blockquote
      cite="mid:9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252">
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">Hello All,</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">My name is
            Gerard.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">I am new to the
            Xen development community and would like to get started.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">I saw the Xen
            development projects list and will start on the CPU/RAM/PCI
            diagram tool as my first project.</span></font></div>
      <div style="font-size: 13px;"><span style="font-size: 15px;
          line-height: 22px; font-family: Arial, sans-serif;"><br>
        </span></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">During my day
            job I’m a software developer writing networking applications
            for a variety of computing architectures (x86, Tilera, and
            more),</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">using
            technologies similar to and including PF_RINGS and DPDK. Due
            to the work I’ve been doing the past few years Ive taken
            an interest in the Xen project.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">I’m looking
            forward to to working on the project and engaging with the
            Xen developer community.</span></font></div>
      <div><font face="Arial, sans-serif"><span style="font-size: 15px;
            line-height: 22px;">I’m on the ##xen channel as
            SpiffySpivey.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">Thanks!</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">Gerard</span></font></div>
    </blockquote>
    <br>
    Hello,<br>
    <br>
    Thank you for taking an interest.<br>
    <br>
    Having a day job, I presume you are not associated with GSoC, and
    are doing this of your own volition?  As you can imagine, there is
    substantially more than one project's worth of potential work to be
    done, and plenty of room for expansion.<br>
    <br>
    How much knowledge do you have of x86 architecture and system topology,
    or the methods of conveying this information?<br>
    <br>
    ~Andrew<br>
  </body>
</html>

--------------070005030101060708030901--


--===============2163979839363678782==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2163979839363678782==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 14:48:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:48:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Gd-0003Sk-DK; Thu, 27 Feb 2014 14:48:55 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ2Gc-0003SM-1Y
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:48:54 +0000
Received: from [193.109.254.147:56602] by server-5.bemta-14.messagelabs.com id
	B1/69-16688-5505F035; Thu, 27 Feb 2014 14:48:53 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393512531!7262218!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7708 invoked from network); 27 Feb 2014 14:48:52 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 14:48:52 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; 
	d="scan'208,217";a="106288650"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 14:48:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 09:48:50 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ2GY-0003QS-6y;
	Thu, 27 Feb 2014 14:48:50 +0000
Message-ID: <530F5052.8040308@citrix.com>
Date: Thu, 27 Feb 2014 14:48:50 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Gerard Spivey <gerard.spivey@gmail.com>
References: <9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com>
In-Reply-To: <9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] CPU/RAM/PCI diagram tool
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2163979839363678782=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2163979839363678782==
Content-Type: multipart/alternative;
	boundary="------------070005030101060708030901"

--------------070005030101060708030901
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 8bit

On 27/02/14 04:48, Gerard Spivey wrote:
> Hello All,
>
> My name is Gerard.
> I am new to the Xen development community and would like to get started.
> I saw the Xen development projects list and will start on the
> CPU/RAM/PCI diagram tool as my first project.
>
> During my day job I’m a software developer writing networking
> applications for a variety of computing architectures (x86, Tilera,
> and more),
> using technologies similar to and including PF_RING and DPDK. Due to
> the work I’ve been doing the past few years I’ve taken an interest in
> the Xen project.
>
> I’m looking forward to working on the project and engaging with the
> Xen developer community.
> I’m on the ##xen channel as SpiffySpivey.
>
> Thanks!
>
>
> Gerard

Hello,

Thank you for taking an interest.

Having a day job, I presume you are not associated with GSoC, and are
doing this of your own volition?  As you can imagine, there is
substantially more than one project's worth of potential work to be done,
and plenty of room for expansion.

How much knowledge do you have of x86 architecture and system topology, or
the methods of conveying this information?

~Andrew

--------------070005030101060708030901
Content-Type: text/html; charset="windows-1252"
Content-Transfer-Encoding: 8bit

<html>
  <head>
    <meta content="text/html; charset=windows-1252"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 27/02/14 04:48, Gerard Spivey wrote:<br>
    </div>
    <blockquote
      cite="mid:9FF1CEF2-0669-405A-9CAF-3059315B7D63@gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=windows-1252">
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">Hello All,</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">My name is
            Gerard.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">I am new to the
            Xen development community and would like to get started.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">I saw the Xen
            development projects list and will start on the CPU/RAM/PCI
            diagram tool as my first project.</span></font></div>
      <div style="font-size: 13px;"><span style="font-size: 15px;
          line-height: 22px; font-family: Arial, sans-serif;"><br>
        </span></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">During my day
            job I’m a software developer writing networking applications
            for a variety of computing architectures (x86, Tilera, and
            more),</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">using
            technologies similar to and including PF_RING and DPDK. Due
            to the work I’ve been doing the past few years I’ve taken
            an interest in the Xen project.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">I’m looking
            forward to working on the project and engaging with the
            Xen developer community.</span></font></div>
      <div><font face="Arial, sans-serif"><span style="font-size: 15px;
            line-height: 22px;">I’m on the ##xen channel as
            SpiffySpivey.</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">Thanks!</span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;"><br>
          </span></font></div>
      <div style="font-size: 13px;"><font face="Arial, sans-serif"><span
            style="font-size: 15px; line-height: 22px;">Gerard</span></font></div>
    </blockquote>
    <br>
    Hello,<br>
    <br>
    Thank you for taking an interest.<br>
    <br>
    Having a day job, I presume you are not associated with GSoC, and
    are doing this of your own volition?  As you can imagine, there is
    substantially more than one project's worth of potential work to be
    done, and plenty of room for expansion.<br>
    <br>
    How much knowledge do you have of x86 architecture and system topology,
    or the methods of conveying this information?<br>
    <br>
    ~Andrew<br>
  </body>
</html>

--------------070005030101060708030901--


--===============2163979839363678782==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2163979839363678782==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 14:53:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2KT-0003ui-F8; Thu, 27 Feb 2014 14:52:53 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ2KS-0003uT-Jy
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:52:52 +0000
Received: from [85.158.143.35:10851] by server-2.bemta-4.messagelabs.com id
	90/C4-04779-3415F035; Thu, 27 Feb 2014 14:52:51 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393512771!8780014!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12739 invoked from network); 27 Feb 2014 14:52:51 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 14:52:51 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ2KQ-000Ic8-P1; Thu, 27 Feb 2014 14:52:50 +0000
Date: Thu, 27 Feb 2014 15:52:50 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140227145250.GD53925@deinos.phlegethon.org>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393254090-5081-4-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 4/5] xen: Misc cleanup as a result of the
 previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:01 +0000 on 24 Feb (1393250489), Andrew Cooper wrote:
> This includes:
>  * A stale comment in sh_skip_sync()
>  * A dead for ever loop in __bug()
>  * A prototype for machine_power_off() which is unimplemented in any architecture
>  * Replacing a for(;;); loop with unreachable()
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:54:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2LO-00042W-Uy; Thu, 27 Feb 2014 14:53:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJ2LN-00042I-O9
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:53:50 +0000
Received: from [85.158.139.211:41797] by server-7.bemta-5.messagelabs.com id
	59/A7-14867-D715F035; Thu, 27 Feb 2014 14:53:49 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393512825!6675812!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11881 invoked from network); 27 Feb 2014 14:53:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 14:53:46 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1RErh1R015484
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 14:53:44 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1RErg4C019485
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 27 Feb 2014 14:53:43 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1RErgwH019474; Thu, 27 Feb 2014 14:53:42 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 06:53:42 -0800
Message-ID: <530F51E9.1070703@oracle.com>
Date: Thu, 27 Feb 2014 09:55:37 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMjYvMjAxNCAxMToyNCBBTSwgUm9nZXIgUGF1IE1vbm5lIHdyb3RlOgo+IEFkZCBzdXBw
b3J0IGZvciBNU0kgbWVzc2FnZSBncm91cHMgZm9yIFhlbiBEb20wIHVzaW5nIHRoZQo+IE1BUF9Q
SVJRX1RZUEVfTVVMVElfTVNJIHBpcnEgbWFwIHR5cGUuCj4KPiBJbiBvcmRlciB0byBrZWVwIHRy
YWNrIG9mIHdoaWNoIHBpcnEgaXMgdGhlIGZpcnN0IG9uZSBpbiB0aGUgZ3JvdXAgYWxsCj4gcGly
cXMgaW4gdGhlIE1TSSBncm91cCBleGNlcHQgZm9yIHRoZSBmaXJzdCBvbmUgaGF2ZSB0aGUgbmV3
bHkKPiBpbnRyb2R1Y2VkIFBJUlFfTVNJX0dST1VQIGZsYWcgc2V0LiBUaGlzIHByZXZlbnRzIGNh
bGxpbmcKPiBQSFlTREVWT1BfdW5tYXBfcGlycSBvbiB0aGVtLCBzaW5jZSB0aGUgdW5tYXAgbXVz
dCBiZSBkb25lIHdpdGggdGhlCj4gZmlyc3QgcGlycSBpbiB0aGUgZ3JvdXAuCj4KPiBTaWduZWQt
b2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KPiAtLS0KPiBU
ZXN0ZWQgd2l0aCBhbiBJbnRlbCBJQ0g4IEFIQ0kgU0FUQSBjb250cm9sbGVyLgo+IC0tLQo+ICAg
YXJjaC94ODYvcGNpL3hlbi5jICAgICAgICAgICAgICAgICAgIHwgICAyOSArKysrKysrKysrKysr
Ky0tLS0tLQo+ICAgZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMgICAgIHwgICA0NyAr
KysrKysrKysrKysrKysrKysrKysrKy0tLS0tLS0tLS0tCj4gICBkcml2ZXJzL3hlbi9ldmVudHMv
ZXZlbnRzX2ludGVybmFsLmggfCAgICAxICsKPiAgIGluY2x1ZGUveGVuL2V2ZW50cy5oICAgICAg
ICAgICAgICAgICB8ICAgIDIgKy0KPiAgIGluY2x1ZGUveGVuL2ludGVyZmFjZS9waHlzZGV2Lmgg
ICAgICB8ICAgIDEgKwo+ICAgNSBmaWxlcyBjaGFuZ2VkLCA1NSBpbnNlcnRpb25zKCspLCAyNSBk
ZWxldGlvbnMoLSkKPgo+IGRpZmYgLS1naXQgYS9hcmNoL3g4Ni9wY2kveGVuLmMgYi9hcmNoL3g4
Ni9wY2kveGVuLmMKPiBpbmRleCAxMDNlNzAyLi45MDU5NTZmIDEwMDY0NAo+IC0tLSBhL2FyY2gv
eDg2L3BjaS94ZW4uYwo+ICsrKyBiL2FyY2gveDg2L3BjaS94ZW4uYwo+IEBAIC0xNzgsNiArMTc4
LDcgQEAgc3RhdGljIGludCB4ZW5fc2V0dXBfbXNpX2lycXMoc3RydWN0IHBjaV9kZXYgKmRldiwg
aW50IG52ZWMsIGludCB0eXBlKQo+ICAgCWkgPSAwOwo+ICAgCWxpc3RfZm9yX2VhY2hfZW50cnko
bXNpZGVzYywgJmRldi0+bXNpX2xpc3QsIGxpc3QpIHsKPiAgIAkJaXJxID0geGVuX2JpbmRfcGly
cV9tc2lfdG9faXJxKGRldiwgbXNpZGVzYywgdltpXSwKPiArCQkJCQkgICAgICAgKHR5cGUgPT0g
UENJX0NBUF9JRF9NU0kpID8gbnZlYyA6IDEsCj4gICAJCQkJCSAgICAgICAodHlwZSA9PSBQQ0lf
Q0FQX0lEX01TSVgpID8KPiAgIAkJCQkJICAgICAgICJwY2lmcm9udC1tc2kteCIgOgo+ICAgCQkJ
CQkgICAgICAgInBjaWZyb250LW1zaSIsCj4gQEAgLTI0NSw2ICsyNDYsNyBAQCBzdGF0aWMgaW50
IHhlbl9odm1fc2V0dXBfbXNpX2lycXMoc3RydWN0IHBjaV9kZXYgKmRldiwgaW50IG52ZWMsIGlu
dCB0eXBlKQo+ICAgCQkJCSJ4ZW46IG1zaSBhbHJlYWR5IGJvdW5kIHRvIHBpcnE9JWRcbiIsIHBp
cnEpOwo+ICAgCQl9Cj4gICAJCWlycSA9IHhlbl9iaW5kX3BpcnFfbXNpX3RvX2lycShkZXYsIG1z
aWRlc2MsIHBpcnEsCj4gKwkJCQkJICAgICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJKSA/IG52
ZWMgOiAxLAo+ICAgCQkJCQkgICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSA/Cj4gICAJ
CQkJCSAgICAgICAibXNpLXgiIDogIm1zaSIsCj4gICAJCQkJCSAgICAgICBET01JRF9TRUxGKTsK
PiBAQCAtMjY5LDkgKzI3MSw2IEBAIHN0YXRpYyBpbnQgeGVuX2luaXRkb21fc2V0dXBfbXNpX2ly
cXMoc3RydWN0IHBjaV9kZXYgKmRldiwgaW50IG52ZWMsIGludCB0eXBlKQo+ICAgCWludCByZXQg
PSAwOwo+ICAgCXN0cnVjdCBtc2lfZGVzYyAqbXNpZGVzYzsKPiAgIAo+IC0JaWYgKHR5cGUgPT0g
UENJX0NBUF9JRF9NU0kgJiYgbnZlYyA+IDEpCj4gLQkJcmV0dXJuIDE7Cj4gLQo+ICAgCWxpc3Rf
Zm9yX2VhY2hfZW50cnkobXNpZGVzYywgJmRldi0+bXNpX2xpc3QsIGxpc3QpIHsKPiAgIAkJc3Ry
dWN0IHBoeXNkZXZfbWFwX3BpcnEgbWFwX2lycTsKPiAgIAkJZG9taWRfdCBkb21pZDsKPiBAQCAt
MjkxLDcgKzI5MCwxMCBAQCBzdGF0aWMgaW50IHhlbl9pbml0ZG9tX3NldHVwX21zaV9pcnFzKHN0
cnVjdCBwY2lfZGV2ICpkZXYsIGludCBudmVjLCBpbnQgdHlwZSkKPiAgIAkJCSAgICAgIChwY2lf
ZG9tYWluX25yKGRldi0+YnVzKSA8PCAxNik7Cj4gICAJCW1hcF9pcnEuZGV2Zm4gPSBkZXYtPmRl
dmZuOwo+ICAgCj4gLQkJaWYgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSB7Cj4gKwkJaWYgKHR5
cGUgPT0gUENJX0NBUF9JRF9NU0kgJiYgbnZlYyA+IDEpIHsKPiArCQkJbWFwX2lycS50eXBlID0g
TUFQX1BJUlFfVFlQRV9NVUxUSV9NU0k7Cj4gKwkJCW1hcF9pcnEuZW50cnlfbnIgPSBudmVjOwoK
CkFyZSB3ZSBvdmVybG9hZGluZyBlbnRyeV9uciBoZXJlIHdpdGggYSBkaWZmZXJlbnQgbWVhbmlu
Zz8gSSB0aG91Z2h0IGl0IAp3YXMgbWVhbnQgdG8gYmUgZW50cnkgbnVtYmVyIChpbiBNU0ktWCB0
YWJsZSBmb3IgZXhhbXBsZSksIG5vdCBudW1iZXIgb2YgCmVudHJpZXMuCgoKPiArCQl9IGVsc2Ug
aWYgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSB7Cj4gICAJCQlpbnQgcG9zOwo+ICAgCQkJdTMy
IHRhYmxlX29mZnNldCwgYmlyOwo+ICAgCj4gQEAgLTMwOCw2ICszMTAsMTYgQEAgc3RhdGljIGlu
dCB4ZW5faW5pdGRvbV9zZXR1cF9tc2lfaXJxcyhzdHJ1Y3QgcGNpX2RldiAqZGV2LCBpbnQgbnZl
YywgaW50IHR5cGUpCj4gICAJCWlmIChwY2lfc2VnX3N1cHBvcnRlZCkKPiAgIAkJCXJldCA9IEhZ
UEVSVklTT1JfcGh5c2Rldl9vcChQSFlTREVWT1BfbWFwX3BpcnEsCj4gICAJCQkJCQkgICAgJm1h
cF9pcnEpOwo+ICsJCWlmICh0eXBlID09IFBDSV9DQVBfSURfTVNJICYmIG52ZWMgPiAxICYmIHJl
dCkgewo+ICsJCQkvKgo+ICsJCQkgKiBJZiBNQVBfUElSUV9UWVBFX01VTFRJX01TSSBpcyBub3Qg
YXZhaWxhYmxlCj4gKwkJCSAqIHRoZXJlJ3Mgbm90aGluZyBlbHNlIHdlIGNhbiBkbyBpbiB0aGlz
IGNhc2UuCj4gKwkJCSAqIEp1c3Qgc2V0IHJldCA+IDAgc28gZHJpdmVyIGNhbiByZXRyeSB3aXRo
Cj4gKwkJCSAqIHNpbmdsZSBNU0kuCj4gKwkJCSAqLwo+ICsJCQlyZXQgPSAxOwo+ICsJCQlnb3Rv
IG91dDsKPiArCQl9Cj4gICAJCWlmIChyZXQgPT0gLUVJTlZBTCAmJiAhcGNpX2RvbWFpbl9ucihk
ZXYtPmJ1cykpIHsKPiAgIAkJCW1hcF9pcnEudHlwZSA9IE1BUF9QSVJRX1RZUEVfTVNJOwo+ICAg
CQkJbWFwX2lycS5pbmRleCA9IC0xOwo+IEBAIC0zMjQsMTEgKzMzNiwxMCBAQCBzdGF0aWMgaW50
IHhlbl9pbml0ZG9tX3NldHVwX21zaV9pcnFzKHN0cnVjdCBwY2lfZGV2ICpkZXYsIGludCBudmVj
LCBpbnQgdHlwZSkKPiAgIAkJCWdvdG8gb3V0Owo+ICAgCQl9Cj4gICAKPiAtCQlyZXQgPSB4ZW5f
YmluZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLAo+IC0JCQkJCSAgICAgICBtYXBfaXJx
LnBpcnEsCj4gLQkJCQkJICAgICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJWCkgPwo+IC0JCQkJ
CSAgICAgICAibXNpLXgiIDogIm1zaSIsCj4gLQkJCQkJCWRvbWlkKTsKPiArCQlyZXQgPSB4ZW5f
YmluZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLCBtYXBfaXJxLnBpcnEsCj4gKwkJICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJKSA/IG52
ZWMgOiAxLAo+ICsJCSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAodHlwZSA9PSBQQ0lf
Q0FQX0lEX01TSVgpID8gIm1zaS14IiA6ICJtc2kiLAo+ICsJCSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBkb21pZCk7Cj4gICAJCWlmIChyZXQgPCAwKQo+ICAgCQkJZ290byBvdXQ7Cj4g
ICAJfQo+IGRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYyBiL2Ry
aXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCj4gaW5kZXggZjRhOWUzMy4uZmYyMGFlMiAx
MDA2NDQKPiAtLS0gYS9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYwo+ICsrKyBiL2Ry
aXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCj4gQEAgLTM5MSwxMCArMzkxLDEwIEBAIHN0
YXRpYyB2b2lkIHhlbl9pcnFfaW5pdCh1bnNpZ25lZCBpcnEpCj4gICAJbGlzdF9hZGRfdGFpbCgm
aW5mby0+bGlzdCwgJnhlbl9pcnFfbGlzdF9oZWFkKTsKPiAgIH0KPiAgIAo+IC1zdGF0aWMgaW50
IF9fbXVzdF9jaGVjayB4ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkKPiArc3RhdGljIGlu
dCBfX211c3RfY2hlY2sgeGVuX2FsbG9jYXRlX2lycXNfZHluYW1pYyhpbnQgbnZlYykKPiAgIHsK
PiAgIAlpbnQgZmlyc3QgPSAwOwo+IC0JaW50IGlycTsKPiArCWludCBpLCBpcnE7Cj4gICAKPiAg
ICNpZmRlZiBDT05GSUdfWDg2X0lPX0FQSUMKPiAgIAkvKgo+IEBAIC00MDgsMTQgKzQwOCwyMiBA
QCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayB4ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkK
PiAgIAkJZmlyc3QgPSBnZXRfbnJfaXJxc19nc2koKTsKPiAgICNlbmRpZgo+ICAgCj4gLQlpcnEg
PSBpcnFfYWxsb2NfZGVzY19mcm9tKGZpcnN0LCAtMSk7Cj4gKwlpcnEgPSBpcnFfYWxsb2NfZGVz
Y3NfZnJvbShmaXJzdCwgbnZlYywgLTEpOwo+ICAgCj4gLQlpZiAoaXJxID49IDApCj4gLQkJeGVu
X2lycV9pbml0KGlycSk7Cj4gKwlpZiAoaXJxID49IDApIHsKPiArCQlmb3IgKGkgPSAwOyBpIDwg
bnZlYzsgaSsrKQo+ICsJCQl4ZW5faXJxX2luaXQoaXJxICsgaSk7Cj4gKwl9Cj4gICAKPiAgIAly
ZXR1cm4gaXJxOwo+ICAgfQo+ICAgCj4gK3N0YXRpYyBpbmxpbmUgaW50IF9fbXVzdF9jaGVjayB4
ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkKPiArewo+ICsKPiArCXJldHVybiB4ZW5fYWxs
b2NhdGVfaXJxc19keW5hbWljKDEpOwo+ICt9Cj4gKwo+ICAgc3RhdGljIGludCBfX211c3RfY2hl
Y2sgeGVuX2FsbG9jYXRlX2lycV9nc2kodW5zaWduZWQgZ3NpKQo+ICAgewo+ICAgCWludCBpcnE7
Cj4gQEAgLTc0MSwyMiArNzQ5LDI1IEBAIGludCB4ZW5fYWxsb2NhdGVfcGlycV9tc2koc3RydWN0
IHBjaV9kZXYgKmRldiwgc3RydWN0IG1zaV9kZXNjICptc2lkZXNjKQo+ICAgfQo+ICAgCj4gICBp
bnQgeGVuX2JpbmRfcGlycV9tc2lfdG9faXJxKHN0cnVjdCBwY2lfZGV2ICpkZXYsIHN0cnVjdCBt
c2lfZGVzYyAqbXNpZGVzYywKPiAtCQkJICAgICBpbnQgcGlycSwgY29uc3QgY2hhciAqbmFtZSwg
ZG9taWRfdCBkb21pZCkKPiArCQkJICAgICBpbnQgcGlycSwgaW50IG52ZWMsIGNvbnN0IGNoYXIg
Km5hbWUsIGRvbWlkX3QgZG9taWQpCj4gICB7Cj4gLQlpbnQgaXJxLCByZXQ7Cj4gKwlpbnQgaSwg
aXJxLCByZXQ7Cj4gICAKPiAgIAltdXRleF9sb2NrKCZpcnFfbWFwcGluZ191cGRhdGVfbG9jayk7
Cj4gICAKPiAtCWlycSA9IHhlbl9hbGxvY2F0ZV9pcnFfZHluYW1pYygpOwo+ICsJaXJxID0geGVu
X2FsbG9jYXRlX2lycXNfZHluYW1pYyhudmVjKTsKPiAgIAlpZiAoaXJxIDwgMCkKPiAgIAkJZ290
byBvdXQ7Cj4gICAKPiAtCWlycV9zZXRfY2hpcF9hbmRfaGFuZGxlcl9uYW1lKGlycSwgJnhlbl9w
aXJxX2NoaXAsIGhhbmRsZV9lZGdlX2lycSwKPiAtCQkJbmFtZSk7Cj4gKwlmb3IgKGkgPSAwOyBp
IDwgbnZlYzsgaSsrKSB7Cj4gKwkJaXJxX3NldF9jaGlwX2FuZF9oYW5kbGVyX25hbWUoaXJxICsg
aSwgJnhlbl9waXJxX2NoaXAsIGhhbmRsZV9lZGdlX2lycSwgbmFtZSk7Cj4gKwo+ICsJCXJldCA9
IHhlbl9pcnFfaW5mb19waXJxX3NldHVwKGlycSArIGksIDAsIHBpcnEgKyBpLCAwLCBkb21pZCwK
PiArCQkJCQkgICAgICBpID09IDAgPyAwIDogUElSUV9NU0lfR1JPVVApOwo+ICsJCWlmIChyZXQg
PCAwKQo+ICsJCQlnb3RvIGVycm9yX2lycTsKPiArCX0KPiAgIAo+IC0JcmV0ID0geGVuX2lycV9p
bmZvX3BpcnFfc2V0dXAoaXJxLCAwLCBwaXJxLCAwLCBkb21pZCwgMCk7Cj4gLQlpZiAocmV0IDwg
MCkKPiAtCQlnb3RvIGVycm9yX2lycTsKPiAgIAlyZXQgPSBpcnFfc2V0X21zaV9kZXNjKGlycSwg
bXNpZGVzYyk7Cj4gICAJaWYgKHJldCA8IDApCj4gICAJCWdvdG8gZXJyb3JfaXJxOwo+IEBAIC03
NjQsNyArNzc1LDggQEAgb3V0Ogo+ICAgCW11dGV4X3VubG9jaygmaXJxX21hcHBpbmdfdXBkYXRl
X2xvY2spOwo+ICAgCXJldHVybiBpcnE7Cj4gICBlcnJvcl9pcnE6Cj4gLQlfX3VuYmluZF9mcm9t
X2lycShpcnEpOwo+ICsJZm9yICg7IGkgPj0gMDsgaS0tKQo+ICsJCV9fdW5iaW5kX2Zyb21faXJx
KGlycSArIGkpOwo+ICAgCW11dGV4X3VubG9jaygmaXJxX21hcHBpbmdfdXBkYXRlX2xvY2spOwo+
ICAgCXJldHVybiByZXQ7Cj4gICB9Cj4gQEAgLTc4Myw3ICs3OTUsMTIgQEAgaW50IHhlbl9kZXN0
cm95X2lycShpbnQgaXJxKQo+ICAgCWlmICghZGVzYykKPiAgIAkJZ290byBvdXQ7Cj4gICAKPiAt
CWlmICh4ZW5faW5pdGlhbF9kb21haW4oKSkgewo+ICsJLyoKPiArCSAqIElmIHRyeWluZyB0byBy
ZW1vdmUgYSB2ZWN0b3IgaW4gYSBNU0kgZ3JvdXAgZGlmZmVyZW50Cj4gKwkgKiB0aGFuIHRoZSBm
aXJzdCBvbmUgc2tpcCB0aGUgUElSUSB1bm1hcCB1bmxlc3MgdGhpcyB2ZWN0b3IKPiArCSAqIGlz

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:54:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:54:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2LO-00042W-Uy; Thu, 27 Feb 2014 14:53:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJ2LN-00042I-O9
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:53:50 +0000
Received: from [85.158.139.211:41797] by server-7.bemta-5.messagelabs.com id
	59/A7-14867-D715F035; Thu, 27 Feb 2014 14:53:49 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393512825!6675812!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11881 invoked from network); 27 Feb 2014 14:53:46 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 14:53:46 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1RErh1R015484
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 14:53:44 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1RErg4C019485
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Thu, 27 Feb 2014 14:53:43 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1RErgwH019474; Thu, 27 Feb 2014 14:53:42 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 06:53:42 -0800
Message-ID: <530F51E9.1070703@oracle.com>
Date: Thu, 27 Feb 2014 09:55:37 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: base64
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

T24gMDIvMjYvMjAxNCAxMToyNCBBTSwgUm9nZXIgUGF1IE1vbm5lIHdyb3RlOgo+IEFkZCBzdXBw
b3J0IGZvciBNU0kgbWVzc2FnZSBncm91cHMgZm9yIFhlbiBEb20wIHVzaW5nIHRoZQo+IE1BUF9Q
SVJRX1RZUEVfTVVMVElfTVNJIHBpcnEgbWFwIHR5cGUuCj4KPiBJbiBvcmRlciB0byBrZWVwIHRy
YWNrIG9mIHdoaWNoIHBpcnEgaXMgdGhlIGZpcnN0IG9uZSBpbiB0aGUgZ3JvdXAgYWxsCj4gcGly
cXMgaW4gdGhlIE1TSSBncm91cCBleGNlcHQgZm9yIHRoZSBmaXJzdCBvbmUgaGF2ZSB0aGUgbmV3
bHkKPiBpbnRyb2R1Y2VkIFBJUlFfTVNJX0dST1VQIGZsYWcgc2V0LiBUaGlzIHByZXZlbnRzIGNh
bGxpbmcKPiBQSFlTREVWT1BfdW5tYXBfcGlycSBvbiB0aGVtLCBzaW5jZSB0aGUgdW5tYXAgbXVz
dCBiZSBkb25lIHdpdGggdGhlCj4gZmlyc3QgcGlycSBpbiB0aGUgZ3JvdXAuCj4KPiBTaWduZWQt
b2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KPiAtLS0KPiBU
ZXN0ZWQgd2l0aCBhbiBJbnRlbCBJQ0g4IEFIQ0kgU0FUQSBjb250cm9sbGVyLgo+IC0tLQo+ICAg
YXJjaC94ODYvcGNpL3hlbi5jICAgICAgICAgICAgICAgICAgIHwgICAyOSArKysrKysrKysrKysr
Ky0tLS0tLQo+ICAgZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMgICAgIHwgICA0NyAr
KysrKysrKysrKysrKysrKysrKysrKy0tLS0tLS0tLS0tCj4gICBkcml2ZXJzL3hlbi9ldmVudHMv
ZXZlbnRzX2ludGVybmFsLmggfCAgICAxICsKPiAgIGluY2x1ZGUveGVuL2V2ZW50cy5oICAgICAg
ICAgICAgICAgICB8ICAgIDIgKy0KPiAgIGluY2x1ZGUveGVuL2ludGVyZmFjZS9waHlzZGV2Lmgg
ICAgICB8ICAgIDEgKwo+ICAgNSBmaWxlcyBjaGFuZ2VkLCA1NSBpbnNlcnRpb25zKCspLCAyNSBk
ZWxldGlvbnMoLSkKPgo+IGRpZmYgLS1naXQgYS9hcmNoL3g4Ni9wY2kveGVuLmMgYi9hcmNoL3g4
Ni9wY2kveGVuLmMKPiBpbmRleCAxMDNlNzAyLi45MDU5NTZmIDEwMDY0NAo+IC0tLSBhL2FyY2gv
eDg2L3BjaS94ZW4uYwo+ICsrKyBiL2FyY2gveDg2L3BjaS94ZW4uYwo+IEBAIC0xNzgsNiArMTc4
LDcgQEAgc3RhdGljIGludCB4ZW5fc2V0dXBfbXNpX2lycXMoc3RydWN0IHBjaV9kZXYgKmRldiwg
aW50IG52ZWMsIGludCB0eXBlKQo+ICAgCWkgPSAwOwo+ICAgCWxpc3RfZm9yX2VhY2hfZW50cnko
bXNpZGVzYywgJmRldi0+bXNpX2xpc3QsIGxpc3QpIHsKPiAgIAkJaXJxID0geGVuX2JpbmRfcGly
cV9tc2lfdG9faXJxKGRldiwgbXNpZGVzYywgdltpXSwKPiArCQkJCQkgICAgICAgKHR5cGUgPT0g
UENJX0NBUF9JRF9NU0kpID8gbnZlYyA6IDEsCj4gICAJCQkJCSAgICAgICAodHlwZSA9PSBQQ0lf
Q0FQX0lEX01TSVgpID8KPiAgIAkJCQkJICAgICAgICJwY2lmcm9udC1tc2kteCIgOgo+ICAgCQkJ
CQkgICAgICAgInBjaWZyb250LW1zaSIsCj4gQEAgLTI0NSw2ICsyNDYsNyBAQCBzdGF0aWMgaW50
IHhlbl9odm1fc2V0dXBfbXNpX2lycXMoc3RydWN0IHBjaV9kZXYgKmRldiwgaW50IG52ZWMsIGlu
dCB0eXBlKQo+ICAgCQkJCSJ4ZW46IG1zaSBhbHJlYWR5IGJvdW5kIHRvIHBpcnE9JWRcbiIsIHBp
cnEpOwo+ICAgCQl9Cj4gICAJCWlycSA9IHhlbl9iaW5kX3BpcnFfbXNpX3RvX2lycShkZXYsIG1z
aWRlc2MsIHBpcnEsCj4gKwkJCQkJICAgICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJKSA/IG52
ZWMgOiAxLAo+ICAgCQkJCQkgICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSA/Cj4gICAJ
CQkJCSAgICAgICAibXNpLXgiIDogIm1zaSIsCj4gICAJCQkJCSAgICAgICBET01JRF9TRUxGKTsK
PiBAQCAtMjY5LDkgKzI3MSw2IEBAIHN0YXRpYyBpbnQgeGVuX2luaXRkb21fc2V0dXBfbXNpX2ly
cXMoc3RydWN0IHBjaV9kZXYgKmRldiwgaW50IG52ZWMsIGludCB0eXBlKQo+ICAgCWludCByZXQg
PSAwOwo+ICAgCXN0cnVjdCBtc2lfZGVzYyAqbXNpZGVzYzsKPiAgIAo+IC0JaWYgKHR5cGUgPT0g
UENJX0NBUF9JRF9NU0kgJiYgbnZlYyA+IDEpCj4gLQkJcmV0dXJuIDE7Cj4gLQo+ICAgCWxpc3Rf
Zm9yX2VhY2hfZW50cnkobXNpZGVzYywgJmRldi0+bXNpX2xpc3QsIGxpc3QpIHsKPiAgIAkJc3Ry
dWN0IHBoeXNkZXZfbWFwX3BpcnEgbWFwX2lycTsKPiAgIAkJZG9taWRfdCBkb21pZDsKPiBAQCAt
MjkxLDcgKzI5MCwxMCBAQCBzdGF0aWMgaW50IHhlbl9pbml0ZG9tX3NldHVwX21zaV9pcnFzKHN0
cnVjdCBwY2lfZGV2ICpkZXYsIGludCBudmVjLCBpbnQgdHlwZSkKPiAgIAkJCSAgICAgIChwY2lf
ZG9tYWluX25yKGRldi0+YnVzKSA8PCAxNik7Cj4gICAJCW1hcF9pcnEuZGV2Zm4gPSBkZXYtPmRl
dmZuOwo+ICAgCj4gLQkJaWYgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSB7Cj4gKwkJaWYgKHR5
cGUgPT0gUENJX0NBUF9JRF9NU0kgJiYgbnZlYyA+IDEpIHsKPiArCQkJbWFwX2lycS50eXBlID0g
TUFQX1BJUlFfVFlQRV9NVUxUSV9NU0k7Cj4gKwkJCW1hcF9pcnEuZW50cnlfbnIgPSBudmVjOwoK
CkFyZSB3ZSBvdmVybG9hZGluZyBlbnRyeV9uciBoZXJlIHdpdGggYSBkaWZmZXJlbnQgbWVhbmlu
Zz8gSSB0aG91Z2h0IGl0IAp3YXMgbWVhbnQgdG8gYmUgZW50cnkgbnVtYmVyIChpbiBNU0ktWCB0
YWJsZSBmb3IgZXhhbXBsZSksIG5vdCBudW1iZXIgb2YgCmVudHJpZXMuCgoKPiArCQl9IGVsc2Ug
aWYgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lYKSB7Cj4gICAJCQlpbnQgcG9zOwo+ICAgCQkJdTMy
IHRhYmxlX29mZnNldCwgYmlyOwo+ICAgCj4gQEAgLTMwOCw2ICszMTAsMTYgQEAgc3RhdGljIGlu
dCB4ZW5faW5pdGRvbV9zZXR1cF9tc2lfaXJxcyhzdHJ1Y3QgcGNpX2RldiAqZGV2LCBpbnQgbnZl
YywgaW50IHR5cGUpCj4gICAJCWlmIChwY2lfc2VnX3N1cHBvcnRlZCkKPiAgIAkJCXJldCA9IEhZ
UEVSVklTT1JfcGh5c2Rldl9vcChQSFlTREVWT1BfbWFwX3BpcnEsCj4gICAJCQkJCQkgICAgJm1h
cF9pcnEpOwo+ICsJCWlmICh0eXBlID09IFBDSV9DQVBfSURfTVNJICYmIG52ZWMgPiAxICYmIHJl
dCkgewo+ICsJCQkvKgo+ICsJCQkgKiBJZiBNQVBfUElSUV9UWVBFX01VTFRJX01TSSBpcyBub3Qg
YXZhaWxhYmxlCj4gKwkJCSAqIHRoZXJlJ3Mgbm90aGluZyBlbHNlIHdlIGNhbiBkbyBpbiB0aGlz
IGNhc2UuCj4gKwkJCSAqIEp1c3Qgc2V0IHJldCA+IDAgc28gZHJpdmVyIGNhbiByZXRyeSB3aXRo
Cj4gKwkJCSAqIHNpbmdsZSBNU0kuCj4gKwkJCSAqLwo+ICsJCQlyZXQgPSAxOwo+ICsJCQlnb3Rv
IG91dDsKPiArCQl9Cj4gICAJCWlmIChyZXQgPT0gLUVJTlZBTCAmJiAhcGNpX2RvbWFpbl9ucihk
ZXYtPmJ1cykpIHsKPiAgIAkJCW1hcF9pcnEudHlwZSA9IE1BUF9QSVJRX1RZUEVfTVNJOwo+ICAg
CQkJbWFwX2lycS5pbmRleCA9IC0xOwo+IEBAIC0zMjQsMTEgKzMzNiwxMCBAQCBzdGF0aWMgaW50
IHhlbl9pbml0ZG9tX3NldHVwX21zaV9pcnFzKHN0cnVjdCBwY2lfZGV2ICpkZXYsIGludCBudmVj
LCBpbnQgdHlwZSkKPiAgIAkJCWdvdG8gb3V0Owo+ICAgCQl9Cj4gICAKPiAtCQlyZXQgPSB4ZW5f
YmluZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLAo+IC0JCQkJCSAgICAgICBtYXBfaXJx
LnBpcnEsCj4gLQkJCQkJICAgICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJWCkgPwo+IC0JCQkJ
CSAgICAgICAibXNpLXgiIDogIm1zaSIsCj4gLQkJCQkJCWRvbWlkKTsKPiArCQlyZXQgPSB4ZW5f
YmluZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLCBtYXBfaXJxLnBpcnEsCj4gKwkJICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICh0eXBlID09IFBDSV9DQVBfSURfTVNJKSA/IG52
ZWMgOiAxLAo+ICsJCSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAodHlwZSA9PSBQQ0lf
Q0FQX0lEX01TSVgpID8gIm1zaS14IiA6ICJtc2kiLAo+ICsJCSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBkb21pZCk7Cj4gICAJCWlmIChyZXQgPCAwKQo+ICAgCQkJZ290byBvdXQ7Cj4g
ICAJfQo+IGRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYyBiL2Ry
aXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCj4gaW5kZXggZjRhOWUzMy4uZmYyMGFlMiAx
MDA2NDQKPiAtLS0gYS9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYwo+ICsrKyBiL2Ry
aXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCj4gQEAgLTM5MSwxMCArMzkxLDEwIEBAIHN0
YXRpYyB2b2lkIHhlbl9pcnFfaW5pdCh1bnNpZ25lZCBpcnEpCj4gICAJbGlzdF9hZGRfdGFpbCgm
aW5mby0+bGlzdCwgJnhlbl9pcnFfbGlzdF9oZWFkKTsKPiAgIH0KPiAgIAo+IC1zdGF0aWMgaW50
IF9fbXVzdF9jaGVjayB4ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkKPiArc3RhdGljIGlu
dCBfX211c3RfY2hlY2sgeGVuX2FsbG9jYXRlX2lycXNfZHluYW1pYyhpbnQgbnZlYykKPiAgIHsK
PiAgIAlpbnQgZmlyc3QgPSAwOwo+IC0JaW50IGlycTsKPiArCWludCBpLCBpcnE7Cj4gICAKPiAg
ICNpZmRlZiBDT05GSUdfWDg2X0lPX0FQSUMKPiAgIAkvKgo+IEBAIC00MDgsMTQgKzQwOCwyMiBA
QCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayB4ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkK
PiAgIAkJZmlyc3QgPSBnZXRfbnJfaXJxc19nc2koKTsKPiAgICNlbmRpZgo+ICAgCj4gLQlpcnEg
PSBpcnFfYWxsb2NfZGVzY19mcm9tKGZpcnN0LCAtMSk7Cj4gKwlpcnEgPSBpcnFfYWxsb2NfZGVz
Y3NfZnJvbShmaXJzdCwgbnZlYywgLTEpOwo+ICAgCj4gLQlpZiAoaXJxID49IDApCj4gLQkJeGVu
X2lycV9pbml0KGlycSk7Cj4gKwlpZiAoaXJxID49IDApIHsKPiArCQlmb3IgKGkgPSAwOyBpIDwg
bnZlYzsgaSsrKQo+ICsJCQl4ZW5faXJxX2luaXQoaXJxICsgaSk7Cj4gKwl9Cj4gICAKPiAgIAly
ZXR1cm4gaXJxOwo+ICAgfQo+ICAgCj4gK3N0YXRpYyBpbmxpbmUgaW50IF9fbXVzdF9jaGVjayB4
ZW5fYWxsb2NhdGVfaXJxX2R5bmFtaWModm9pZCkKPiArewo+ICsKPiArCXJldHVybiB4ZW5fYWxs
b2NhdGVfaXJxc19keW5hbWljKDEpOwo+ICt9Cj4gKwo+ICAgc3RhdGljIGludCBfX211c3RfY2hl
Y2sgeGVuX2FsbG9jYXRlX2lycV9nc2kodW5zaWduZWQgZ3NpKQo+ICAgewo+ICAgCWludCBpcnE7
Cj4gQEAgLTc0MSwyMiArNzQ5LDI1IEBAIGludCB4ZW5fYWxsb2NhdGVfcGlycV9tc2koc3RydWN0
IHBjaV9kZXYgKmRldiwgc3RydWN0IG1zaV9kZXNjICptc2lkZXNjKQo+ICAgfQo+ICAgCj4gICBp
bnQgeGVuX2JpbmRfcGlycV9tc2lfdG9faXJxKHN0cnVjdCBwY2lfZGV2ICpkZXYsIHN0cnVjdCBt
c2lfZGVzYyAqbXNpZGVzYywKPiAtCQkJICAgICBpbnQgcGlycSwgY29uc3QgY2hhciAqbmFtZSwg
ZG9taWRfdCBkb21pZCkKPiArCQkJICAgICBpbnQgcGlycSwgaW50IG52ZWMsIGNvbnN0IGNoYXIg
Km5hbWUsIGRvbWlkX3QgZG9taWQpCj4gICB7Cj4gLQlpbnQgaXJxLCByZXQ7Cj4gKwlpbnQgaSwg
aXJxLCByZXQ7Cj4gICAKPiAgIAltdXRleF9sb2NrKCZpcnFfbWFwcGluZ191cGRhdGVfbG9jayk7
Cj4gICAKPiAtCWlycSA9IHhlbl9hbGxvY2F0ZV9pcnFfZHluYW1pYygpOwo+ICsJaXJxID0geGVu
X2FsbG9jYXRlX2lycXNfZHluYW1pYyhudmVjKTsKPiAgIAlpZiAoaXJxIDwgMCkKPiAgIAkJZ290
byBvdXQ7Cj4gICAKPiAtCWlycV9zZXRfY2hpcF9hbmRfaGFuZGxlcl9uYW1lKGlycSwgJnhlbl9w
aXJxX2NoaXAsIGhhbmRsZV9lZGdlX2lycSwKPiAtCQkJbmFtZSk7Cj4gKwlmb3IgKGkgPSAwOyBp
IDwgbnZlYzsgaSsrKSB7Cj4gKwkJaXJxX3NldF9jaGlwX2FuZF9oYW5kbGVyX25hbWUoaXJxICsg
aSwgJnhlbl9waXJxX2NoaXAsIGhhbmRsZV9lZGdlX2lycSwgbmFtZSk7Cj4gKwo+ICsJCXJldCA9
IHhlbl9pcnFfaW5mb19waXJxX3NldHVwKGlycSArIGksIDAsIHBpcnEgKyBpLCAwLCBkb21pZCwK
PiArCQkJCQkgICAgICBpID09IDAgPyAwIDogUElSUV9NU0lfR1JPVVApOwo+ICsJCWlmIChyZXQg
PCAwKQo+ICsJCQlnb3RvIGVycm9yX2lycTsKPiArCX0KPiAgIAo+IC0JcmV0ID0geGVuX2lycV9p
bmZvX3BpcnFfc2V0dXAoaXJxLCAwLCBwaXJxLCAwLCBkb21pZCwgMCk7Cj4gLQlpZiAocmV0IDwg
MCkKPiAtCQlnb3RvIGVycm9yX2lycTsKPiAgIAlyZXQgPSBpcnFfc2V0X21zaV9kZXNjKGlycSwg
bXNpZGVzYyk7Cj4gICAJaWYgKHJldCA8IDApCj4gICAJCWdvdG8gZXJyb3JfaXJxOwo+IEBAIC03
NjQsNyArNzc1LDggQEAgb3V0Ogo+ICAgCW11dGV4X3VubG9jaygmaXJxX21hcHBpbmdfdXBkYXRl
X2xvY2spOwo+ICAgCXJldHVybiBpcnE7Cj4gICBlcnJvcl9pcnE6Cj4gLQlfX3VuYmluZF9mcm9t
X2lycShpcnEpOwo+ICsJZm9yICg7IGkgPj0gMDsgaS0tKQo+ICsJCV9fdW5iaW5kX2Zyb21faXJx
KGlycSArIGkpOwo+ICAgCW11dGV4X3VubG9jaygmaXJxX21hcHBpbmdfdXBkYXRlX2xvY2spOwo+
ICAgCXJldHVybiByZXQ7Cj4gICB9Cj4gQEAgLTc4Myw3ICs3OTUsMTIgQEAgaW50IHhlbl9kZXN0
cm95X2lycShpbnQgaXJxKQo+ICAgCWlmICghZGVzYykKPiAgIAkJZ290byBvdXQ7Cj4gICAKPiAt
CWlmICh4ZW5faW5pdGlhbF9kb21haW4oKSkgewo+ICsJLyoKPiArCSAqIElmIHRyeWluZyB0byBy
ZW1vdmUgYSB2ZWN0b3IgaW4gYSBNU0kgZ3JvdXAgZGlmZmVyZW50Cj4gKwkgKiB0aGFuIHRoZSBm
aXJzdCBvbmUgc2tpcCB0aGUgUElSUSB1bm1hcCB1bmxlc3MgdGhpcyB2ZWN0b3IKPiArCSAqIGlz
IHRoZSBmaXJzdCBvbmUgaW4gdGhlIGdyb3VwLgo+ICsJICovCj4gKwlpZiAoeGVuX2luaXRpYWxf
ZG9tYWluKCkgJiYgIShpbmZvLT51LnBpcnEuZmxhZ3MgJiBQSVJRX01TSV9HUk9VUCkpIHsKPiAg
IAkJdW5tYXBfaXJxLnBpcnEgPSBpbmZvLT51LnBpcnEucGlycTsKPiAgIAkJdW5tYXBfaXJxLmRv
bWlkID0gaW5mby0+dS5waXJxLmRvbWlkOwo+ICAgCQlyYyA9IEhZUEVSVklTT1JfcGh5c2Rldl9v
cChQSFlTREVWT1BfdW5tYXBfcGlycSwgJnVubWFwX2lycSk7Cj4gZGlmZiAtLWdpdCBhL2RyaXZl
cnMveGVuL2V2ZW50cy9ldmVudHNfaW50ZXJuYWwuaCBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVu
dHNfaW50ZXJuYWwuaAo+IGluZGV4IDY3N2Y0MWEuLjUwYzIwNTBhIDEwMDY0NAo+IC0tLSBhL2Ry
aXZlcnMveGVuL2V2ZW50cy9ldmVudHNfaW50ZXJuYWwuaAo+ICsrKyBiL2RyaXZlcnMveGVuL2V2
ZW50cy9ldmVudHNfaW50ZXJuYWwuaAo+IEBAIC01Myw2ICs1Myw3IEBAIHN0cnVjdCBpcnFfaW5m
byB7Cj4gICAKPiAgICNkZWZpbmUgUElSUV9ORUVEU19FT0kJKDEgPDwgMCkKPiAgICNkZWZpbmUg
UElSUV9TSEFSRUFCTEUJKDEgPDwgMSkKPiArI2RlZmluZSBQSVJRX01TSV9HUk9VUAkoMSA8PCAy
KQo+ICAgCj4gICBzdHJ1Y3QgZXZ0Y2huX29wcyB7Cj4gICAJdW5zaWduZWQgKCptYXhfY2hhbm5l
bHMpKHZvaWQpOwo+IGRpZmYgLS1naXQgYS9pbmNsdWRlL3hlbi9ldmVudHMuaCBiL2luY2x1ZGUv
eGVuL2V2ZW50cy5oCj4gaW5kZXggYzljODVjZi4uMmFlN2UwMyAxMDA2NDQKPiAtLS0gYS9pbmNs
dWRlL3hlbi9ldmVudHMuaAo+ICsrKyBiL2luY2x1ZGUveGVuL2V2ZW50cy5oCj4gQEAgLTEwMiw3
ICsxMDIsNyBAQCBpbnQgeGVuX2JpbmRfcGlycV9nc2lfdG9faXJxKHVuc2lnbmVkIGdzaSwKPiAg
IGludCB4ZW5fYWxsb2NhdGVfcGlycV9tc2koc3RydWN0IHBjaV9kZXYgKmRldiwgc3RydWN0IG1z
aV9kZXNjICptc2lkZXNjKTsKPiAgIC8qIEJpbmQgYW4gUFNJIHBpcnEgdG8gYW4gaXJxLiAqLwo+
ICAgaW50IHhlbl9iaW5kX3BpcnFfbXNpX3RvX2lycShzdHJ1Y3QgcGNpX2RldiAqZGV2LCBzdHJ1
Y3QgbXNpX2Rlc2MgKm1zaWRlc2MsCj4gLQkJCSAgICAgaW50IHBpcnEsIGNvbnN0IGNoYXIgKm5h
bWUsIGRvbWlkX3QgZG9taWQpOwo+ICsJCQkgICAgIGludCBwaXJxLCBpbnQgbnZlYywgY29uc3Qg
Y2hhciAqbmFtZSwgZG9taWRfdCBkb21pZCk7Cj4gICAjZW5kaWYKPiAgIAo+ICAgLyogRGUtYWxs
b2NhdGVzIHRoZSBhYm92ZSBtZW50aW9uZWQgcGh5c2ljYWwgaW50ZXJydXB0LiAqLwo+IGRpZmYg
LS1naXQgYS9pbmNsdWRlL3hlbi9pbnRlcmZhY2UvcGh5c2Rldi5oIGIvaW5jbHVkZS94ZW4vaW50
ZXJmYWNlL3BoeXNkZXYuaAo+IGluZGV4IDQyNzIxZDEuLmViMTMzMjZkIDEwMDY0NAo+IC0tLSBh
L2luY2x1ZGUveGVuL2ludGVyZmFjZS9waHlzZGV2LmgKPiArKysgYi9pbmNsdWRlL3hlbi9pbnRl
cmZhY2UvcGh5c2Rldi5oCj4gQEAgLTEzMSw2ICsxMzEsNyBAQCBzdHJ1Y3QgcGh5c2Rldl9pcnEg
ewo+ICAgI2RlZmluZSBNQVBfUElSUV9UWVBFX0dTSQkJMHgxCj4gICAjZGVmaW5lIE1BUF9QSVJR
X1RZUEVfVU5LTk9XTgkJMHgyCj4gICAjZGVmaW5lIE1BUF9QSVJRX1RZUEVfTVNJX1NFRwkJMHgz
Cj4gKyNkZWZpbmUgTUFQX1BJUlFfVFlQRV9NVUxUSV9NU0kJCTB4NAoKRm9ybWF0dGluZy4KCi1i
b3JpcwoKPiAgIAo+ICAgI2RlZmluZSBQSFlTREVWT1BfbWFwX3BpcnEJCTEzCj4gICBzdHJ1Y3Qg
cGh5c2Rldl9tYXBfcGlycSB7CgoKX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX18KWGVuLWRldmVsIG1haWxpbmcgbGlzdApYZW4tZGV2ZWxAbGlzdHMueGVuLm9y
ZwpodHRwOi8vbGlzdHMueGVuLm9yZy94ZW4tZGV2ZWwK

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:54:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2La-00044y-Ea; Thu, 27 Feb 2014 14:54:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ2LY-00044P-Hh
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:54:00 +0000
Received: from [85.158.137.68:20436] by server-16.bemta-3.messagelabs.com id
	58/B9-29917-7815F035; Thu, 27 Feb 2014 14:53:59 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393512838!4589292!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1160 invoked from network); 27 Feb 2014 14:53:59 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-6.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 14:53:59 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ2LW-000IdK-DC; Thu, 27 Feb 2014 14:53:58 +0000
Date: Thu, 27 Feb 2014 15:53:58 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140227145358.GE53925@deinos.phlegethon.org>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-5-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393254090-5081-5-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 5/5] xen/x86: Identify
 reset_stack_and_jump() as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:01 +0000 on 24 Feb (1393250490), Andrew Cooper wrote:
> reset_stack_and_jump() is actually a macro, but can effectively become noreturn
> by giving it an unreachable() declaration.
> 
> Propagate the 'noreturn-ness' up through the direct and indirect callers.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>

Reviewed-by: Tim Deegan <tim@xen.org>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
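
The technique in the patch reviewed above can be sketched in a self-contained way: a macro cannot carry a `noreturn` attribute itself, but ending it with `__builtin_unreachable()` lets every function that ends in the macro be annotated `noreturn`, and that property then propagates up the call chain. All names below are hypothetical, and `longjmp` stands in for Xen's actual stack switch so the demo can regain control:

```c
#include <setjmp.h>

static jmp_buf demo_env;

/* Stand-in for the real jump target: longjmp back out, so the demo
 * can observe that control never falls through the macro. */
static void __attribute__((noreturn)) jump_back(int val)
{
    longjmp(demo_env, val);
}

/* The macro itself cannot be declared noreturn, but ending it with
 * __builtin_unreachable() tells the compiler nothing follows it. */
#define reset_stack_and_jump(fn, arg)       \
    do {                                    \
        fn(arg);                            \
        __builtin_unreachable();            \
    } while (0)

/* A caller that ends in the macro can now itself be annotated
 * noreturn; its own callers can be annotated in turn. */
static void __attribute__((noreturn)) enter_target(int val)
{
    reset_stack_and_jump(jump_back, val);
}

static int run_demo(void)
{
    int val = setjmp(demo_env);
    if (val == 0)
        enter_target(42);     /* never returns normally */
    return val;               /* longjmp lands here with val == 42 */
}
```

Because `enter_target()` ends in the macro, the compiler accepts the `noreturn` annotation without a "function does return" warning, which is the propagation the commit message refers to.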

From xen-devel-bounces@lists.xen.org Thu Feb 27 14:54:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 14:54:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2M9-0004C5-8U; Thu, 27 Feb 2014 14:54:37 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ2M7-0004Bi-V9
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 14:54:36 +0000
Received: from [193.109.254.147:39830] by server-13.bemta-14.messagelabs.com
	id 2C/57-01226-BA15F035; Thu, 27 Feb 2014 14:54:35 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393512874!7246401!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26759 invoked from network); 27 Feb 2014 14:54:34 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 14:54:34 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ2M6-000IeX-EV; Thu, 27 Feb 2014 14:54:34 +0000
Date: Thu, 27 Feb 2014 15:54:34 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140227145434.GF53925@deinos.phlegethon.org>
References: <1393254090-5081-1-git-send-email-andrew.cooper3@citrix.com>
	<1393254090-5081-2-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393254090-5081-2-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v3 2/5] x86/crash: Fix up declaration of
 do_nmi_crash()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 15:01 +0000 on 24 Feb (1393250487), Andrew Cooper wrote:
> ... so it can correctly be annotated as noreturn.  Move the declaration of
> nmi_crash() to be effectively private in crash.c
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>

Reviewed-by: Tim Deegan <tim@xen.org>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:03:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:03:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Tx-0004tR-GE; Thu, 27 Feb 2014 15:02:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJ2Tv-0004tG-Vw
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 15:02:40 +0000
Received: from [85.158.139.211:11245] by server-6.bemta-5.messagelabs.com id
	1C/EF-14342-F835F035; Thu, 27 Feb 2014 15:02:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393513357!6691958!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24939 invoked from network); 27 Feb 2014 15:02:38 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:02:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106293263"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:02:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 10:02:36 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WJ2Ts-0002ba-19;
	Thu, 27 Feb 2014 15:02:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WJ2Tr-000249-Ni;
	Thu, 27 Feb 2014 15:02:35 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21263.21387.343462.630238@mariner.uk.xensource.com>
Date: Thu, 27 Feb 2014 15:02:35 +0000
To: <xen-devel@lists.xensource.com>, Jan Beulich <JBeulich@suse.com>, Ian
	Campbell <Ian.Campbell@citrix.com>
In-Reply-To: <osstest-25315-mainreport@xen.org>
References: <osstest-25315-mainreport@xen.org>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Subject: Re: [Xen-devel] [xen-unstable test] 25315: regressions - trouble:
	blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

xen.org writes ("[xen-unstable test] 25315: regressions - trouble: blocked/broken/fail/pass"):
> flight 25315 xen-unstable real [real]
> http://www.chiark.greenend.org.uk/~xensrcts/logs/25315/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-amd64-oldkern           4 xen-build            fail REGR. vs. 25275
>  build-i386                    4 xen-build            fail REGR. vs. 25281

Our network infrastructure problems have become intolerable.  Nothing
has passed for several days.  As a temporary workaround, I'm pushing
this patch into the osstest push gate.

Ian.

>From b4fdc54eac9ed82e0e056573a11f3ad106f97c67 Mon Sep 17 00:00:00 2001
From: Ian Jackson <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 15:00:11 +0000
Subject: [OSSTEST PATCH] allow.all: Tolerate all build-*-oldkern failures

We are getting very many of these due to network braindamage.
For now, tolerate them.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 allow.all |    1 +
 1 file changed, 1 insertion(+)

diff --git a/allow.all b/allow.all
index 78775d4..064b2b1 100644
--- a/allow.all
+++ b/allow.all
@@ -2,3 +2,4 @@ test-@@-sedf@@
 build-@@                        logs-capture@@
 test-@@-pcipt@@
 test-@@-qemuu-@@		guest-localmigrate
+build-@@-oldkern
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:06:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2XS-00055M-64; Thu, 27 Feb 2014 15:06:18 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1WJ2XQ-00055H-MQ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:06:16 +0000
Received: from [85.158.139.211:13811] by server-13.bemta-5.messagelabs.com id
	97/EC-18801-8645F035; Thu, 27 Feb 2014 15:06:16 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393513575!2142126!1
X-Originating-IP: [209.85.215.170]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19049 invoked from network); 27 Feb 2014 15:06:15 -0000
Received: from mail-ea0-f170.google.com (HELO mail-ea0-f170.google.com)
	(209.85.215.170)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:06:15 -0000
Received: by mail-ea0-f170.google.com with SMTP id g15so1920058eak.1
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 07:06:14 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:date:message-id:subject:from:to:content-type;
	bh=dtvyUQyLiNIPul3Ff24IWKHpBI+tyBnwuzMBny41hdY=;
	b=h8OOGiNxM3fels2layJUPvUfPDoJi/zgLSOLOkdaoO1oJ0jTE26Tq/FUQiO3E4UFKm
	hFRF8dfvbDw9aIZr1TUMtPeMSWrde4E9s99gLGSOfZZ7f7OC2j5y8AjgVpEBhIQQEOia
	st/7i/ng6Y+57DcCAKRLc+0uY9lgToSK5tnqwKz5qzIYJxgIsr2cwJbRy9XZyben3gd1
	97s9Lq1DnaZ5h+BsblFrJacL+s8p3PEvsrnbizauaVO6Y1ePCZ3cmTqwdYFRiB0JdpmF
	JyEr7Ies0NwF1fLfJHKMQSu/o/uzjzbUdytXG5Vvjl3Z8K99SYHlGT4hpNUZNFqO8DCX
	FDWQ==
MIME-Version: 1.0
X-Received: by 10.204.213.81 with SMTP id gv17mr5969bkb.77.1393513574875; Thu,
	27 Feb 2014 07:06:14 -0800 (PST)
Received: by 10.205.34.135 with HTTP; Thu, 27 Feb 2014 07:06:14 -0800 (PST)
Date: Thu, 27 Feb 2014 23:06:14 +0800
Message-ID: <CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: xen-devel <xen-devel@lists.xen.org>
Subject: [Xen-devel] How to get the accurate physical CPU utilization in
	Dom0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5115797059445320186=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5115797059445320186==
Content-Type: multipart/alternative; boundary=485b3978bbaf9ee83104f364a9f4

--485b3978bbaf9ee83104f364a9f4
Content-Type: text/plain; charset=ISO-8859-1

Hi,

I run a PV DomU with 1 vcpu on Xen. I pin the vcpu to a physical CPU core,
such as core 3. Then, I run a cpu-bound process in DomU and the vcpu
utilization is 100% (got it with "xentop" in Dom0).
However, when I use "top" in Dom0 to see the physical CPU utilization, the
CPU core 3 utilization is zero or less than 1%. The utilization expected of
CPU core 3 is also 100% like the vcpu. Is it? Why I cannot get the accurate
physical CPU utilization with "top" command in Dom0?

Any advice is appreciated. Thank you for your time.

-- 
Best Regards,
Bei Guan

--485b3978bbaf9ee83104f364a9f4
Content-Type: text/html; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div>Hi,</div><div><br></div><div>I run a PV DomU with 1 v=
cpu on Xen. I pin the vcpu to a physical CPU core, such as core 3. Then, I =
run a cpu-bound process in DomU and the vcpu utilization is 100% (got it wi=
th &quot;xentop&quot; in Dom0).</div>
<div>However, when I use &quot;top&quot; in Dom0 to see the physical CPU ut=
ilization, the CPU core 3 utilization is zero or less than 1%. The utilizat=
ion expected of CPU core 3 is also 100% like the vcpu. Is it? Why I cannot =
get the accurate physical CPU utilization with &quot;top&quot; command in D=
om0?</div>
<div><br></div>Any advice is appreciated. Thank you for your time.<br clear=
=3D"all"><div><br></div>-- <br>Best Regards,<div>Bei Guan</div>
</div>

--485b3978bbaf9ee83104f364a9f4--


--===============5115797059445320186==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5115797059445320186==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 15:08:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Yz-0005B5-NI; Thu, 27 Feb 2014 15:07:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <tdeegan@xensource.com>) id 1WJ2Yy-0005As-7B
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:07:52 +0000
Received: from [85.158.139.211:8003] by server-10.bemta-5.messagelabs.com id
	20/2A-08578-7C45F035; Thu, 27 Feb 2014 15:07:51 +0000
X-Env-Sender: tdeegan@xensource.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393513669!2740026!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20345 invoked from network); 27 Feb 2014 15:07:50 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-10.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:07:50 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104668697"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:07:48 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:07:48 -0500
Received: from whitby.uk.xensource.com ([10.80.2.60])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<tdeegan@xensource.com>)	id 1WJ2Yu-0003jU-7c;
	Thu, 27 Feb 2014 15:07:48 +0000
Received: from tdeegan by whitby.uk.xensource.com with local (Exim 4.82)
	(envelope-from <tdeegan@whitby.uk.xensource.com>)	id 1WJ2Yu-0007cX-19;
	Thu, 27 Feb 2014 15:07:48 +0000
From: Tim Deegan <tim@xen.org>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 15:07:46 +0000
Message-ID: <1393513666-29259-1-git-send-email-tim@xen.org>
X-Mailer: git-send-email 1.8.5.2
MIME-Version: 1.0
X-DLP: MIA1
Cc: Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH] x86/hvm: assert that we saved a sane number
	of MSRs.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Just as a backstop measure against later changes that add MSRs to the
save function without updating the count in the init function.

Signed-off-by: Tim Deegan <tim@xen.org>
Cc: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/hvm/hvm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 9e85c13..ae24211 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1148,6 +1148,8 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
         if ( hvm_funcs.save_msr )
             hvm_funcs.save_msr(v, ctxt);
 
+        ASSERT(ctxt->count <= msr_count_max);
+
         for ( i = 0; i < ctxt->count; ++i )
             ctxt->msr[i]._rsvd = 0;
 
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Zj-0005In-FS; Thu, 27 Feb 2014 15:08:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ2Zi-0005IY-M2
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:08:38 +0000
Received: from [85.158.139.211:24738] by server-2.bemta-5.messagelabs.com id
	C1/02-23037-6F45F035; Thu, 27 Feb 2014 15:08:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393513714!6716331!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9193 invoked from network); 27 Feb 2014 15:08:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:08:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106295610"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:08:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:07:59 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WJ2Z5-0003jX-Rl;
	Thu, 27 Feb 2014 15:07:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 15:07:55 +0000
Message-ID: <1393513675-31038-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv3] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
From xen-devel-bounces@lists.xen.org Thu Feb 27 15:08:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2Zj-0005In-FS; Thu, 27 Feb 2014 15:08:39 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ2Zi-0005IY-M2
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:08:38 +0000
Received: from [85.158.139.211:24738] by server-2.bemta-5.messagelabs.com id
	C1/02-23037-6F45F035; Thu, 27 Feb 2014 15:08:38 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393513714!6716331!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9193 invoked from network); 27 Feb 2014 15:08:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:08:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106295610"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:08:00 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:07:59 -0500
Received: from qabil.uk.xensource.com ([10.80.2.76])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<david.vrabel@citrix.com>)	id 1WJ2Z5-0003jX-Rl;
	Thu, 27 Feb 2014 15:07:59 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: <xen-devel@lists.xen.org>
Date: Thu, 27 Feb 2014 15:07:55 +0000
Message-ID: <1393513675-31038-1-git-send-email-david.vrabel@citrix.com>
X-Mailer: git-send-email 1.7.2.5
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: [Xen-devel] [PATCHv3] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: David Vrabel <david.vrabel@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many tens of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run, but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long-running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
Changes in v3:
- Export xen_in_preemptible_hcall (to fix modular privcmd driver).

Changes in v2:
- Use per-cpu variable to mark preemptible regions
- Call preempt_schedule_irq() from the correct place in
  xen_hypervisor_callback
---
 arch/x86/kernel/entry_32.S |   23 +++++++++++++++++++++++
 arch/x86/kernel/entry_64.S |   19 +++++++++++++++++++
 drivers/xen/Makefile       |    2 +-
 drivers/xen/preempt.c      |   17 +++++++++++++++++
 drivers/xen/privcmd.c      |    2 ++
 include/xen/xen-ops.h      |   27 +++++++++++++++++++++++++++
 6 files changed, 89 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/preempt.c

diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index a2a4f46..b99bc9c 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -998,7 +998,30 @@ ENTRY(xen_hypervisor_callback)
 ENTRY(xen_do_upcall)
 1:	mov %esp, %eax
 	call xen_evtchn_do_upcall
+#ifdef CONFIG_PREEMPT
 	jmp  ret_from_intr
+#else
+	GET_THREAD_INFO(%ebp)
+#ifdef CONFIG_VM86
+	movl PT_EFLAGS(%esp), %eax	# mix EFLAGS and CS
+	movb PT_CS(%esp), %al
+	andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
+#else
+	movl PT_CS(%esp), %eax
+	andl $SEGMENT_RPL_MASK, %eax
+#endif
+	cmpl $USER_RPL, %eax
+	jae resume_userspace		# returning to v8086 or userspace
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	cmpl $0,TI_preempt_count(%ebp)	# non-zero preempt_count ?
+	jnz resume_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz resume_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp resume_kernel
+#endif /* CONFIG_PREEMPT */
 	CFI_ENDPROC
 ENDPROC(xen_hypervisor_callback)
 
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..d8f4fd8 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1404,7 +1404,25 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
+#ifdef CONFIG_PREEMPT
 	jmp  error_exit
+#else
+	movl %ebx, %eax
+	RESTORE_REST
+	DISABLE_INTERRUPTS(CLBR_NONE)
+	TRACE_IRQS_OFF
+	GET_THREAD_INFO(%rcx)
+	testl %eax, %eax
+	je error_exit_user
+	cmpl $0,PER_CPU_VAR(__preempt_count)
+	jnz retint_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz retint_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp retint_kernel
+#endif
 	CFI_ENDPROC
 END(xen_do_hypervisor_callback)
 
@@ -1629,6 +1647,7 @@ ENTRY(error_exit)
 	GET_THREAD_INFO(%rcx)
 	testl %eax,%eax
 	jne retint_kernel
+error_exit_user:
 	LOCKDEP_SYS_EXIT_IRQ
 	movl TI_flags(%rcx),%edx
 	movl $_TIF_WORK_MASK,%edi
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 45e00af..6b867e9 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
 obj-y	+= events/
 obj-y	+= xenbus/
 
diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
new file mode 100644
index 0000000..b5a3e98
--- /dev/null
+++ b/drivers/xen/preempt.c
@@ -0,0 +1,17 @@
+/*
+ * Preemptible hypercalls
+ *
+ * Copyright (C) 2014 Citrix Systems R&D ltd.
+ *
+ * This source code is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+#include <xen/xen-ops.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
+EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
+#endif
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 569a13b..59ac71c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
 	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
 		return -EFAULT;
 
+	xen_preemptible_hcall_begin();
 	ret = privcmd_call(hypercall.op,
 			   hypercall.arg[0], hypercall.arg[1],
 			   hypercall.arg[2], hypercall.arg[3],
 			   hypercall.arg[4]);
+	xen_preemptible_hcall_end();
 
 	return ret;
 }
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6d8c042 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);
 
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
+
+#ifdef CONFIG_PREEMPT
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+}
+
+#else
+
+DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, true);
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, false);
+}
+
+#endif /* CONFIG_PREEMPT */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:16:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2ga-0005i2-Tw; Thu, 27 Feb 2014 15:15:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ2gY-0005hx-LQ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:15:42 +0000
Received: from [85.158.143.35:30429] by server-2.bemta-4.messagelabs.com id
	87/A4-04779-D965F035; Thu, 27 Feb 2014 15:15:41 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393514139!8808776!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26878 invoked from network); 27 Feb 2014 15:15:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:15:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104672705"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:15:39 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:15:39 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ2gV-0003rF-28;
	Thu, 27 Feb 2014 15:15:39 +0000
Date: Thu, 27 Feb 2014 15:15:39 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140227151538.GG16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
	<529743590.20140227154351@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <529743590.20140227154351@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 03:43:51PM +0100, Sander Eikelenboom wrote:
[...]
> 
> > As far as I can tell netfront has a pool of grant references and it
> > will BUG_ON() if there's no grefs in the pool when you request one.
> > Since your DomU didn't crash, I suspect the book-keeping is still
> > intact.
> 
> >> > Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 somewhere this night.
> >> > Domain 7 is the domain that happens to give the netfront messages.
> >> 
> >> > I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries ..
> >> > Also is this amount of grant entries "normal" ? or could it be a leak somewhere ?
> >> 
> 
> > I suppose Dom0 expanding its maptrack is normal. I see as well when I
> > increase the number of domains. But if it keeps increasing while the
> > number of DomUs stay the same then it is not normal.
> 
> It keeps increasing (without (re)starting domains) although eventually it looks like it is settling at around a maptrack size of 31/256 frames.
> 

Then I guess that's reasonable. You have 15 DomUs after all...

> 
> > Presumably you only have netfront and blkfront to use grant table and
> > your workload as described below involved both so it would be hard to
> > tell which one is faulty.
> 
> > There's no immediate functional changes regarding slot counting in this
> > dev cycle for network driver. But there's some changes to blkfront/back
> > which seem interesting (memory related).
> 
> Hmm, all the times I get a "Bad grant reference" are related to that one specific guest.
> And it's not doing much blkback/front I/O (it's providing webdav and rsync to network-based storage (glusterfs))
> 

OK. I misunderstood that you were rsync'ing from / to your VM disk.

What does webdav do anyway? Does it have a specific traffic pattern?

> Added some more printk's:
> 
> @@ -2072,7 +2076,11 @@ __gnttab_copy(
>                                        &s_frame, &s_pg,
>                                        &source_off, &source_len, 1);
>          if ( rc != GNTST_okay )
> -            goto error_out;
> +            PIN_FAIL(error_out, GNTST_general_error,
> +                     "?!?!? src_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
> +                     current->domain->domain_id, op->source.domid, op->dest.domid);
> +
> +
>          have_s_grant = 1;
>          if ( op->source.offset < source_off ||
>               op->len > source_len )
> @@ -2096,7 +2104,11 @@ __gnttab_copy(
>                                        current->domain->domain_id, 0,
>                                        &d_frame, &d_pg, &dest_off, &dest_len, 1);
>          if ( rc != GNTST_okay )
> -            goto error_out;
> +            PIN_FAIL(error_out, GNTST_general_error,
> +                     "?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
> +                     current->domain->domain_id, op->source.domid, op->dest.domid);
> +
> +
>          have_d_grant = 1;
> 
> 
> this comes out:
> 
> (XEN) [2014-02-27 02:34:37] grant_table.c:2109:d0 ?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:0 src_dom_id:32752 dest_dom_id:7
> 

If it fails in gnttab_copy then I very much suspect this is a network
driver problem, as persistent grants in the blk driver don't use
grant copy.

> 
> > My suggestion is, if you have a working base line, you can try to setup
> > different frontend / backend combination to help narrow down the
> > problem.
> 
> Will see what I can do after the weekend
> 

Thanks

> > Wei.
> 
> <snip>
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:17:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2iN-0005pq-I6; Thu, 27 Feb 2014 15:17:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <raghavendra.kt@linux.vnet.ibm.com>)
	id 1WJ2iM-0005pg-4T
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 15:17:34 +0000
Received: from [85.158.143.35:49605] by server-1.bemta-4.messagelabs.com id
	35/4E-31661-D075F035; Thu, 27 Feb 2014 15:17:33 +0000
X-Env-Sender: raghavendra.kt@linux.vnet.ibm.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393514247!8788824!1
X-Originating-IP: [202.81.31.148]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjAyLjgxLjMxLjE0OCA9PiAzNDExMzU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31280 invoked from network); 27 Feb 2014 15:17:31 -0000
Received: from e23smtp06.au.ibm.com (HELO e23smtp06.au.ibm.com) (202.81.31.148)
	by server-9.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 15:17:31 -0000
Received: from /spool/local
	by e23smtp06.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use
	Only! Violators will be prosecuted
	for <xen-devel@lists.xenproject.org> from
	<raghavendra.kt@linux.vnet.ibm.com>; Fri, 28 Feb 2014 01:17:24 +1000
Received: from d23dlp01.au.ibm.com (202.81.31.203)
	by e23smtp06.au.ibm.com (202.81.31.212) with IBM ESMTP SMTP Gateway:
	Authorized Use Only! Violators will be prosecuted; 
	Fri, 28 Feb 2014 01:17:22 +1000
Received: from d23relay05.au.ibm.com (d23relay05.au.ibm.com [9.190.235.152])
	by d23dlp01.au.ibm.com (Postfix) with ESMTP id DB17A2CE8052
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 02:17:18 +1100 (EST)
Received: from d23av04.au.ibm.com (d23av04.au.ibm.com [9.190.235.139])
	by d23relay05.au.ibm.com (8.13.8/8.13.8/NCO v10.0) with ESMTP id
	s1REvMcA42926106
	for <xen-devel@lists.xenproject.org>; Fri, 28 Feb 2014 01:57:23 +1100
Received: from d23av04.au.ibm.com (localhost [127.0.0.1])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id
	s1RFHFMa013247
	for <xen-devel@lists.xenproject.org>; Fri, 28 Feb 2014 02:17:18 +1100
Received: from [9.79.207.119] ([9.79.207.119])
	by d23av04.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id
	s1RFGvVd012574; Fri, 28 Feb 2014 02:17:05 +1100
Message-ID: <530F5851.1090809@linux.vnet.ibm.com>
Date: Thu, 27 Feb 2014 20:52:57 +0530
From: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
Organization: IBM
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Paolo Bonzini <pbonzini@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, Waiman Long <Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
	<530F4949.4050706@citrix.com> <530F4F98.2080308@redhat.com>
In-Reply-To: <530F4F98.2080308@redhat.com>
X-TM-AS-MML: disable
X-Content-Scanned: Fidelis XPS MAILER
x-cbid: 14022715-7014-0000-0000-0000046D1998
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU.  It is more similar to PLE
> (pause-loop exiting).

Adding to the discussion: I see two possibilities here, given that in
undercommit cases we should not exceed HEAD_SPIN_THRESHOLD:

1. The looping vcpu in pv_head_spin_check() should do halt(),
considering that it has already spun for longer than a typical
lock-hold time and hence we are potentially overcommitted.

2. Multiplex kick_cpu to do a directed yield in the qspinlock case.
But this may result in some ping-ponging?




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:27:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:27:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ2rV-0006GL-RR; Thu, 27 Feb 2014 15:27:01 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linux@eikelenboom.it>) id 1WJ2rT-0006G8-Sj
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:27:00 +0000
Received: from [193.109.254.147:18321] by server-16.bemta-14.messagelabs.com
	id CB/D8-21945-3495F035; Thu, 27 Feb 2014 15:26:59 +0000
X-Env-Sender: linux@eikelenboom.it
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393514818!7299595!1
X-Originating-IP: [84.200.39.61]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31985 invoked from network); 27 Feb 2014 15:26:58 -0000
Received: from vserver.eikelenboom.it (HELO smtp.eikelenboom.it) (84.200.39.61)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 15:26:58 -0000
Received: from 207-69-ftth.on.nl ([88.159.69.207]:54188 helo=[172.16.1.20])
	by smtp.eikelenboom.it with esmtpsa (TLS1.0:RSA_AES_256_CBC_SHA1:256)
	(Exim 4.80) (envelope-from <linux@eikelenboom.it>)
	id 1WJ2pl-0007eJ-CY; Thu, 27 Feb 2014 16:25:13 +0100
Date: Thu, 27 Feb 2014 16:26:55 +0100
From: Sander Eikelenboom <linux@eikelenboom.it>
Organization: Eikelenboom IT services
X-Priority: 3 (Normal)
Message-ID: <1982379440.20140227162655@eikelenboom.it>
To: Wei Liu <wei.liu2@citrix.com>
In-Reply-To: <20140227151538.GG16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
	<529743590.20140227154351@eikelenboom.it>
	<20140227151538.GG16241@zion.uk.xensource.com>
MIME-Version: 1.0
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


Thursday, February 27, 2014, 4:15:39 PM, you wrote:

> On Thu, Feb 27, 2014 at 03:43:51PM +0100, Sander Eikelenboom wrote:
> [...]
>> 
>> > As far as I can tell netfront has a pool of grant references and it
>> > will BUG_ON() if there's no grefs in the pool when you request one.
>> > Since your DomU didn't crash so I suspect the book-keeping is still
>> > intact.
>> 
>> >> > Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072 somewhere overnight.
>> >> > Domain 7 is the domain that happens to give the netfront messages.
>> >> 
>> >> > I also don't get why it is reporting the "Bad grant reference" for domain 0, which seems to have 0 active entries ..
>> >> > Also, is this number of grant entries "normal", or could it be a leak somewhere?
>> >> 
>> 
>> > I suppose Dom0 expanding its maptrack is normal. I see that as well when I
>> > increase the number of domains. But if it keeps increasing while the
>> > number of DomUs stays the same then it is not normal.
>> 
>> It keeps increasing (without (re)starting domains), although eventually it looks like it is settling at around a maptrack size of 31/256 frames.
>> 

> Then I guess that's reasonable. You have 15 DomUs after all...

>> 
>> > Presumably you only have netfront and blkfront to use grant table and
>> > your workload as described below involved both, so it would be hard to
>> > tell which one is faulty.
>> 
>> > There's no immediate functional changes regarding slot counting in this
>> > dev cycle for network driver. But there's some changes to blkfront/back
>> > which seem interesting (memory related).
>> 
>> Hmm, all the times I get a "Bad grant reference" it is related to that one specific guest.
>> And it's not doing much blkback/front I/O (it's providing webdav and rsync to network-based storage (glusterfs)).
>> 

> OK. I misunderstood that you were rsync'ing from / to your VM disk.

> What does webdav do anyway? Does it have a specific traffic pattern?

The VM is a webdav store, and its storage is network-based (at the moment glusterfs in dom0).
Remote backup solutions use this to store backups with duplicity.

Besides that, the guest runs an rsync script that syncs the storage with a remote location.

So the webdav network traffic from outside to the VM causes about the same amount of traffic from the VM to dom0,
so yes, that path gets stretched and tested ;-)

>> Added some more printk's:
>> 
>> @@ -2072,7 +2076,11 @@ __gnttab_copy(
>>                                        &s_frame, &s_pg,
>>                                        &source_off, &source_len, 1);
>>          if ( rc != GNTST_okay )
>> -            goto error_out;
>> +            PIN_FAIL(error_out, GNTST_general_error,
>> +                     "?!?!? src_is_gref: acquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
>> +                     current->domain->domain_id, op->source.domid, op->dest.domid);
>> +
>> +
>>          have_s_grant = 1;
>>          if ( op->source.offset < source_off ||
>>               op->len > source_len )
>> @@ -2096,7 +2104,11 @@ __gnttab_copy(
>>                                        current->domain->domain_id, 0,
>>                                        &d_frame, &d_pg, &dest_off, &dest_len, 1);
>>          if ( rc != GNTST_okay )
>> -            goto error_out;
>> +            PIN_FAIL(error_out, GNTST_general_error,
>> +                     "?!?!? dest_is_gref: acquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
>> +                     current->domain->domain_id, op->source.domid, op->dest.domid);
>> +
>> +
>>          have_d_grant = 1;
>> 
>> 
>> this comes out:
>> 
>> (XEN) [2014-02-27 02:34:37] grant_table.c:2109:d0 ?!?!? dest_is_gref: acquire grant for copy failed current_dom_id:0 src_dom_id:32752 dest_dom_id:7
>> 

> If it fails in gnttab_copy then I very much suspect this is a network
> driver problem, as persistent grants in the blk driver don't use grant
> copy.

Does the dest_is_gref or src_is_gref case by any chance give some sort of direction?

>> 
>> > My suggestion is, if you have a working base line, you can try to setup
>> > different frontend / backend combination to help narrow down the
>> > problem.
>> 
>> Will see what I can do after the weekend.
>> 

> Thanks

>> > Wei.
>> 
>> <snip>
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:36:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ30d-0006bi-8U; Thu, 27 Feb 2014 15:36:27 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mthuywin@gmail.com>) id 1WJ30c-0006bd-MZ
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 15:36:26 +0000
Received: from [193.109.254.147:51241] by server-3.bemta-14.messagelabs.com id
	7E/0B-00432-97B5F035; Thu, 27 Feb 2014 15:36:25 +0000
X-Env-Sender: mthuywin@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393515384!3595248!1
X-Originating-IP: [216.139.236.26]
X-SpamReason: No, hits=0.0 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3641 invoked from network); 27 Feb 2014 15:36:25 -0000
Received: from sam.nabble.com (HELO sam.nabble.com) (216.139.236.26)
	by server-12.tower-27.messagelabs.com with AES256-SHA encrypted SMTP;
	27 Feb 2014 15:36:25 -0000
Received: from [192.168.236.26] (helo=sam.nabble.com)
	by sam.nabble.com with esmtp (Exim 4.72)
	(envelope-from <mthuywin@gmail.com>) id 1WJ30Z-0007p3-Qx
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 07:36:23 -0800
Date: Thu, 27 Feb 2014 07:36:23 -0800 (PST)
From: VirSecExplorer <mthuywin@gmail.com>
To: xen-devel@lists.xensource.com
Message-ID: <1393515383816-5721422.post@n5.nabble.com>
MIME-Version: 1.0
Subject: [Xen-devel] Hypercall_page and hypercall interception
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi everyone,

My name is Thu, and I am currently working on virtualization security using
Xen.

I am interested in intercepting hypercalls from DomU, and would like to know
whether I can do that using the hypercall_page kernel symbol. In other words, I
would like to know whether I can use it in the same way system call interception
is done with the sys_call_table.

I'd be happy if you could provide me with some tips on how to
intercept hypercalls in Xen.

Thanks a lot for your help and I look forward to hearing from you.

Best,

Thu



--
View this message in context: http://xen.1045712.n5.nabble.com/Hypercall-page-and-hypercall-interception-tp5721422.html
Sent from the Xen - Dev mailing list archive at Nabble.com.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:37:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ319-0006eg-Qu; Thu, 27 Feb 2014 15:36:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJ318-0006eS-2h
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:36:58 +0000
Received: from [85.158.143.35:62271] by server-1.bemta-4.messagelabs.com id
	D7/B1-31661-99B5F035; Thu, 27 Feb 2014 15:36:57 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393515415!8810596!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10023 invoked from network); 27 Feb 2014 15:36:56 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:36:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106308021"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:36:54 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:36:54 -0500
Message-ID: <530F5B94.3050308@citrix.com>
Date: Thu, 27 Feb 2014 16:36:52 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>, Sander Eikelenboom <linux@eikelenboom.it>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
In-Reply-To: <20140227141812.GE16241@zion.uk.xensource.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 15:18, Wei Liu wrote:
> On Wed, Feb 26, 2014 at 04:11:23PM +0100, Sander Eikelenboom wrote:
>>
>> Wednesday, February 26, 2014, 10:14:42 AM, you wrote:
>>
>>
>>> Friday, February 21, 2014, 7:32:08 AM, you wrote:
>>
>>
>>>> On 2014/2/20 19:18, Sander Eikelenboom wrote:
>>>>> Thursday, February 20, 2014, 10:49:58 AM, you wrote:
>>>>>
>>>>>
>>>>>> On 2014/2/19 5:25, Sander Eikelenboom wrote:
>>>>>>> Hi All,
>>>>>>>
>>>>>>> I'm currently having some network troubles with Xen and recent linux kernels.
>>>>>>>
>>>>>>> - When running with a 3.14-rc3 kernel in dom0 and a 3.13 kernel in domU
>>>>>>>     I get what seems to be described in this thread: http://www.spinics.net/lists/netdev/msg242953.html
>>>>>>>
>>>>>>>     In the guest:
>>>>>>>     [57539.859584] net eth0: rx->offset: 0, size: 4294967295
>>>>>>>     [57539.859599] net eth0: rx->offset: 0, size: 4294967295
>>>>>>>     [57539.859605] net eth0: rx->offset: 0, size: 4294967295
>>>>>>>     [57539.859610] net eth0: Need more slots
>>>>>>>     [58157.675939] net eth0: Need more slots
>>>>>>>     [58725.344712] net eth0: Need more slots
>>>>>>>     [61815.849180] net eth0: rx->offset: 0, size: 4294967295
>>>>>>>     [61815.849205] net eth0: rx->offset: 0, size: 4294967295
>>>>>>>     [61815.849216] net eth0: rx->offset: 0, size: 4294967295
>>>>>>>     [61815.849225] net eth0: Need more slots
>>>>>> This issue is familiar... and I thought it got fixed.
>>>>>>   From my original analysis of a similar issue I hit before, the root
>>>>>> cause is that netback still creates a response when the ring is full.
>>>>>> I remember a larger MTU could trigger this issue before; what is the
>>>>>> MTU size?
>>>>> In dom0 both the physical NICs and the guest vifs have MTU=1500.
>>>>> In domU the eth0 also has MTU=1500.
>>>>>
>>>>> So it's not jumbo frames .. just the same plain defaults everywhere ..
>>>>>
>>>>> With the patch from Wei that solves the other issue applied, I'm still
>>>>> seeing the "Need more slots" issue on 3.14-rc3.
>>>>> I have extended the "need more slots" warning to also print the cons,
>>>>> slots, max, rx->offset and size; hopefully that gives some more insight.
>>>>> But it is indeed the VM where I had similar issues before; the primary
>>>>> thing this VM does is 2 simultaneous rsyncs (one push, one pull) of some
>>>>> gigabytes of data.
>>>>>
>>>>> This time it was also accompanied by a "grant_table.c:1857:d0 Bad grant
>>>>> reference" as seen below; I don't know whether it's a cause or an effect
>>>>> though.
>>
>>>> The log "grant_table.c:1857:d0 Bad grant reference" was also seen before.
>>>> Probably the response overlaps the request and the grant copy returns an
>>>> error when given the wrong grant reference; netback then sets resp->status
>>>> to XEN_NETIF_RSP_ERROR (-1), which is the 4294967295 printed above by the
>>>> frontend.
>>>> Would it be possible to print a log in netback's xenvif_rx_action to see
>>>> whether something is wrong with the max and used slots?
>>
>>>> Thanks
>>>> Annie
>>
>>> Looking more closely, there are perhaps 2 different issues ... the bad
>>> grant references do not happen at the same time as the netfront messages
>>> in the guest.
>>
>>> I added some debug patches to the kernel netback, netfront and Xen
>>> grant-table code (see below).
>>> One of the things was to simplify the code for the debug key that prints
>>> the grant tables; the present code takes too long to execute and brings
>>> down the box due to stalls and NMIs. So it now only prints the number of
>>> entries per domain.
>>
>>
>>> Issue 1: grant_table.c:1858:d0 Bad grant reference
>>
>>> After running the box for just one night (with 15 VMs) I get these
>>> mentions of "Bad grant reference".
>>> The maptrack also seems to increase quite fast, and the number of entries
>>> seems to have gone up quite fast as well.
>>
>>> Most domains have just one disk (blkfront/blkback) and one NIC; a few
>>> have a second disk.
>>> The blk drivers use persistent grants, so I would assume those would be
>>> reused rather than increasing the count (by much).
>>
> 
> As far as I can tell netfront has a pool of grant references and it
> will BUG_ON() if there are no grefs in the pool when you request one.
> Since your DomU didn't crash, I suspect the book-keeping is still
> intact.
> 
>>> Domain 1 seems to have increased its nr_grant_entries from 2048 to 3072
>>> sometime during the night.
>>> Domain 7 is the domain that happens to give the netfront messages.
>>
>>> I also don't get why it is reporting the "Bad grant reference" for
>>> domain 0, which seems to have 0 active entries ..
>>> Also, is this number of grant entries "normal", or could it be a leak
>>> somewhere?
>>
> 
> I suppose Dom0 expanding its maptrack is normal; I see it as well when I
> increase the number of domains. But if it keeps increasing while the
> number of DomUs stays the same, then it is not normal.

blkfront/blkback will allocate persistent grants on demand, so it's not
strange to see the number of grants increasing while the domain is
running (although it should reach a stable state at some point).

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:41:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ352-0006vu-K1; Thu, 27 Feb 2014 15:41:00 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJ351-0006vo-7F
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:40:59 +0000
Received: from [85.158.137.68:22268] by server-1.bemta-3.messagelabs.com id
	4E/75-17293-A8C5F035; Thu, 27 Feb 2014 15:40:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393515656!4662257!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24539 invoked from network); 27 Feb 2014 15:40:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:40:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106309365"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:40:56 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.79)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:40:55 -0500
Message-ID: <530F5C86.4060902@citrix.com>
Date: Thu, 27 Feb 2014 16:40:54 +0100
From: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Bei Guan <gbtju85@gmail.com>, xen-devel <xen-devel@lists.xen.org>
References: <CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com>
In-Reply-To: <CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Subject: Re: [Xen-devel] How to get the accurate physical CPU utilization in
 Dom0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 16:06, Bei Guan wrote:
> Hi,
> 
> I run a PV DomU with 1 vcpu on Xen. I pin the vcpu to a physical CPU
> core, such as core 3. Then I run a CPU-bound process in the DomU, and
> the vcpu utilization is 100% (obtained with "xentop" in Dom0).
> However, when I use "top" in Dom0 to look at the physical CPU
> utilization, the utilization of CPU core 3 is zero or less than 1%. I
> expected the utilization of CPU core 3 to also be 100%, like the vcpu's.
> Why can't I get the accurate physical CPU utilization with the "top"
> command in Dom0?

top in Dom0 will only show the CPU utilization of Dom0 (Xen is not a
type 2 hypervisor, so Dom0 is no different from any other DomU in this
respect). If you want to see the CPU utilization of all domains you
should use xl top (xentop).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:45:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ398-0007BI-DB; Thu, 27 Feb 2014 15:45:14 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ396-0007BA-PJ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:45:12 +0000
Received: from [193.109.254.147:28924] by server-13.bemta-14.messagelabs.com
	id 06/A3-01226-88D5F035; Thu, 27 Feb 2014 15:45:12 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393515908!7255137!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15320 invoked from network); 27 Feb 2014 15:45:10 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:45:10 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104683790"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:45:08 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:45:07 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ391-0004It-O5;
	Thu, 27 Feb 2014 15:45:07 +0000
Date: Thu, 27 Feb 2014 15:45:07 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20140227154507.GH16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
	<530F5B94.3050308@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530F5B94.3050308@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: Wei Liu <wei.liu2@citrix.com>, xen-devel@lists.xen.org,
	Sander Eikelenboom <linux@eikelenboom.it>, annie li <annie.li@oracle.com>,
	Paul Durrant <Paul.Durrant@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 04:36:52PM +0100, Roger Pau Monné wrote:
[...]
> >>
> >
> > I suppose Dom0 expanding its maptrack is normal. I see as well when I
> > increase the number of domains. But if it keeps increasing while the
> > number of DomUs stay the same then it is not normal.
>
> blkfront/blkback will allocate persistent grants on demand, so it's not
> strange to see the number of grants increasing while the domain is
> running (although it should reach a stable state at some point).
>

Yes, that's exactly what I meant. :-)

Wei.

> Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:45:44 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ39b-0007Dn-T2; Thu, 27 Feb 2014 15:45:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJ39a-0007Dc-8r
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:45:42 +0000
Received: from [85.158.139.211:17722] by server-16.bemta-5.messagelabs.com id
	48/50-05060-5AD5F035; Thu, 27 Feb 2014 15:45:41 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393515939!6670785!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23848 invoked from network); 27 Feb 2014 15:45:40 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:45:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104683967"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:45:38 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:45:37 -0500
Message-ID: <530F5DA0.3060408@citrix.com>
Date: Thu, 27 Feb 2014 16:45:36 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
	<530F51E9.1070703@oracle.com>
In-Reply-To: <530F51E9.1070703@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 15:55, Boris Ostrovsky wrote:
> On 02/26/2014 11:24 AM, Roger Pau Monne wrote:
>> Add support for MSI message groups for Xen Dom0 using the
>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>
>> In order to keep track of which pirq is the first one in the group all
>> pirqs in the MSI group except for the first one have the newly
>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>> first pirq in the group.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Tested with an Intel ICH8 AHCI SATA controller.
>> ---
>>   arch/x86/pci/xen.c                   |   29 ++++++++++++++------
>>   drivers/xen/events/events_base.c     |   47
>> +++++++++++++++++++++++-----------
>>   drivers/xen/events/events_internal.h |    1 +
>>   include/xen/events.h                 |    2 +-
>>   include/xen/interface/physdev.h      |    1 +
>>   5 files changed, 55 insertions(+), 25 deletions(-)
>>
>> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
>> index 103e702..905956f 100644
>> --- a/arch/x86/pci/xen.c
>> +++ b/arch/x86/pci/xen.c
>> @@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev,
>> int nvec, int type)
>>       i = 0;
>>       list_for_each_entry(msidesc, &dev->msi_list, list) {
>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
>> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>                              (type == PCI_CAP_ID_MSIX) ?
>>                              "pcifront-msi-x" :
>>                              "pcifront-msi",
>> @@ -245,6 +246,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev
>> *dev, int nvec, int type)
>>                   "xen: msi already bound to pirq=%d\n", pirq);
>>           }
>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
>> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>                              (type == PCI_CAP_ID_MSIX) ?
>>                              "msi-x" : "msi",
>>                              DOMID_SELF);
>> @@ -269,9 +271,6 @@ static int xen_initdom_setup_msi_irqs(struct
>> pci_dev *dev, int nvec, int type)
>>       int ret = 0;
>>       struct msi_desc *msidesc;
>>   -    if (type == PCI_CAP_ID_MSI && nvec > 1)
>> -        return 1;
>> -
>>       list_for_each_entry(msidesc, &dev->msi_list, list) {
>>           struct physdev_map_pirq map_irq;
>>           domid_t domid;
>> @@ -291,7 +290,10 @@ static int xen_initdom_setup_msi_irqs(struct
>> pci_dev *dev, int nvec, int type)
>>                    (pci_domain_nr(dev->bus) << 16);
>>           map_irq.devfn = dev->devfn;
>>   -        if (type == PCI_CAP_ID_MSIX) {
>> +        if (type == PCI_CAP_ID_MSI && nvec > 1) {
>> +            map_irq.type = MAP_PIRQ_TYPE_MULTI_MSI;
>> +            map_irq.entry_nr = nvec;
> 
> 
> Are we overloading entry_nr here with a different meaning? I thought it
> was meant to be entry number (in MSI-X table for example), not number of
> entries.

In the case of MSI message groups (MAP_PIRQ_TYPE_MULTI_MSI) entry_nr is
the number of vectors to setup, so yes, it's an overloading of entry_nr.

> 
>> +        } else if (type == PCI_CAP_ID_MSIX) {
>>               int pos;
>>               u32 table_offset, bir;
>>   @@ -308,6 +310,16 @@ static int xen_initdom_setup_msi_irqs(struct
>> pci_dev *dev, int nvec, int type)
>>           if (pci_seg_supported)
>>               ret = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq,
>>                               &map_irq);
>> +        if (type == PCI_CAP_ID_MSI && nvec > 1 && ret) {
>> +            /*
>> +             * If MAP_PIRQ_TYPE_MULTI_MSI is not available
>> +             * there's nothing else we can do in this case.
>> +             * Just set ret > 0 so driver can retry with
>> +             * single MSI.
>> +             */
>> +            ret = 1;
>> +            goto out;
>> +        }
>>           if (ret == -EINVAL && !pci_domain_nr(dev->bus)) {
>>               map_irq.type = MAP_PIRQ_TYPE_MSI;
>>               map_irq.index = -1;
>> @@ -324,11 +336,10 @@ static int xen_initdom_setup_msi_irqs(struct
>> pci_dev *dev, int nvec, int type)
>>               goto out;
>>           }
>>   -        ret = xen_bind_pirq_msi_to_irq(dev, msidesc,
>> -                           map_irq.pirq,
>> -                           (type == PCI_CAP_ID_MSIX) ?
>> -                           "msi-x" : "msi",
>> -                        domid);
>> +        ret = xen_bind_pirq_msi_to_irq(dev, msidesc, map_irq.pirq,
>> +                                       (type == PCI_CAP_ID_MSI) ?
>> nvec : 1,
>> +                                       (type == PCI_CAP_ID_MSIX) ?
>> "msi-x" : "msi",
>> +                                       domid);
>>           if (ret < 0)
>>               goto out;
>>       }
>> diff --git a/drivers/xen/events/events_base.c
>> b/drivers/xen/events/events_base.c
>> index f4a9e33..ff20ae2 100644
>> --- a/drivers/xen/events/events_base.c
>> +++ b/drivers/xen/events/events_base.c
>> @@ -391,10 +391,10 @@ static void xen_irq_init(unsigned irq)
>>       list_add_tail(&info->list, &xen_irq_list_head);
>>   }
>>   -static int __must_check xen_allocate_irq_dynamic(void)
>> +static int __must_check xen_allocate_irqs_dynamic(int nvec)
>>   {
>>       int first = 0;
>> -    int irq;
>> +    int i, irq;
>>     #ifdef CONFIG_X86_IO_APIC
>>       /*
>> @@ -408,14 +408,22 @@ static int __must_check
>> xen_allocate_irq_dynamic(void)
>>           first = get_nr_irqs_gsi();
>>   #endif
>>   -    irq = irq_alloc_desc_from(first, -1);
>> +    irq = irq_alloc_descs_from(first, nvec, -1);
>>   -    if (irq >= 0)
>> -        xen_irq_init(irq);
>> +    if (irq >= 0) {
>> +        for (i = 0; i < nvec; i++)
>> +            xen_irq_init(irq + i);
>> +    }
>>         return irq;
>>   }
>>   +static inline int __must_check xen_allocate_irq_dynamic(void)
>> +{
>> +
>> +    return xen_allocate_irqs_dynamic(1);
>> +}
>> +
>>   static int __must_check xen_allocate_irq_gsi(unsigned gsi)
>>   {
>>       int irq;
>> @@ -741,22 +749,25 @@ int xen_allocate_pirq_msi(struct pci_dev *dev,
>> struct msi_desc *msidesc)
>>   }
>>     int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc
>> *msidesc,
>> -                 int pirq, const char *name, domid_t domid)
>> +                 int pirq, int nvec, const char *name, domid_t domid)
>>   {
>> -    int irq, ret;
>> +    int i, irq, ret;
>>         mutex_lock(&irq_mapping_update_lock);
>>   -    irq = xen_allocate_irq_dynamic();
>> +    irq = xen_allocate_irqs_dynamic(nvec);
>>       if (irq < 0)
>>           goto out;
>>   -    irq_set_chip_and_handler_name(irq, &xen_pirq_chip,
>> handle_edge_irq,
>> -            name);
>> +    for (i = 0; i < nvec; i++) {
>> +        irq_set_chip_and_handler_name(irq + i, &xen_pirq_chip,
>> handle_edge_irq, name);
>> +
>> +        ret = xen_irq_info_pirq_setup(irq + i, 0, pirq + i, 0, domid,
>> +                          i == 0 ? 0 : PIRQ_MSI_GROUP);
>> +        if (ret < 0)
>> +            goto error_irq;
>> +    }
>>   -    ret = xen_irq_info_pirq_setup(irq, 0, pirq, 0, domid, 0);
>> -    if (ret < 0)
>> -        goto error_irq;
>>       ret = irq_set_msi_desc(irq, msidesc);
>>       if (ret < 0)
>>           goto error_irq;
>> @@ -764,7 +775,8 @@ out:
>>       mutex_unlock(&irq_mapping_update_lock);
>>       return irq;
>>   error_irq:
>> -    __unbind_from_irq(irq);
>> +    for (; i >= 0; i--)
>> +        __unbind_from_irq(irq + i);
>>       mutex_unlock(&irq_mapping_update_lock);
>>       return ret;
>>   }
>> @@ -783,7 +795,12 @@ int xen_destroy_irq(int irq)
>>       if (!desc)
>>           goto out;
>>   -    if (xen_initial_domain()) {
>> +    /*
>> +     * If trying to remove a vector in a MSI group different
>> +     * than the first one skip the PIRQ unmap unless this vector
>> +     * is the first one in the group.
>> +     */
>> +    if (xen_initial_domain() && !(info->u.pirq.flags &
>> PIRQ_MSI_GROUP)) {
>>           unmap_irq.pirq = info->u.pirq.pirq;
>>           unmap_irq.domid = info->u.pirq.domid;
>>           rc = HYPERVISOR_physdev_op(PHYSDEVOP_unmap_pirq, &unmap_irq);
>> diff --git a/drivers/xen/events/events_internal.h
>> b/drivers/xen/events/events_internal.h
>> index 677f41a..50c2050a 100644
>> --- a/drivers/xen/events/events_internal.h
>> +++ b/drivers/xen/events/events_internal.h
>> @@ -53,6 +53,7 @@ struct irq_info {
>>     #define PIRQ_NEEDS_EOI    (1 << 0)
>>   #define PIRQ_SHAREABLE    (1 << 1)
>> +#define PIRQ_MSI_GROUP    (1 << 2)
>>     struct evtchn_ops {
>>       unsigned (*max_channels)(void);
>> diff --git a/include/xen/events.h b/include/xen/events.h
>> index c9c85cf..2ae7e03 100644
>> --- a/include/xen/events.h
>> +++ b/include/xen/events.h
>> @@ -102,7 +102,7 @@ int xen_bind_pirq_gsi_to_irq(unsigned gsi,
>>   int xen_allocate_pirq_msi(struct pci_dev *dev, struct msi_desc
>> *msidesc);
>>   /* Bind an PSI pirq to an irq. */
>>   int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc
>> *msidesc,
>> -                 int pirq, const char *name, domid_t domid);
>> +                 int pirq, int nvec, const char *name, domid_t domid);
>>   #endif
>>     /* De-allocates the above mentioned physical interrupt. */
>> diff --git a/include/xen/interface/physdev.h
>> b/include/xen/interface/physdev.h
>> index 42721d1..eb13326d 100644
>> --- a/include/xen/interface/physdev.h
>> +++ b/include/xen/interface/physdev.h
>> @@ -131,6 +131,7 @@ struct physdev_irq {
>>   #define MAP_PIRQ_TYPE_GSI        0x1
>>   #define MAP_PIRQ_TYPE_UNKNOWN        0x2
>>   #define MAP_PIRQ_TYPE_MSI_SEG        0x3
>> +#define MAP_PIRQ_TYPE_MULTI_MSI        0x4
> 
> Formatting.

I don't get the formatting problem, it's the same formatting that the
other MAP_PIRQ_TYPE_* use, and if the patch is applied formatting is OK.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:49:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3DV-0007TF-K1; Thu, 27 Feb 2014 15:49:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ3DU-0007T9-74
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:49:44 +0000
Received: from [85.158.137.68:28924] by server-11.bemta-3.messagelabs.com id
	26/93-04255-79E5F035; Thu, 27 Feb 2014 15:49:43 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393516180!3380118!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1691 invoked from network); 27 Feb 2014 15:49:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:49:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; 
	d="scan'208,217";a="106312834"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:49:40 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:49:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ3DP-0004MJ-5U;
	Thu, 27 Feb 2014 15:49:39 +0000
Message-ID: <530F5E93.7020804@citrix.com>
Date: Thu, 27 Feb 2014 15:49:39 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Bei Guan <gbtju85@gmail.com>
References: <CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com>
In-Reply-To: <CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to get the accurate physical CPU utilization in
 Dom0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3235517607840402474=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3235517607840402474==
Content-Type: multipart/alternative;
	boundary="------------010106010405020302090900"

--------------010106010405020302090900
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 27/02/14 15:06, Bei Guan wrote:
> Hi,
>
> I run a PV DomU with 1 vcpu on Xen. I pin the vcpu to a physical CPU
> core, such as core 3. Then, I run a cpu-bound process in DomU and the
> vcpu utilization is 100% (got it with "xentop" in Dom0).
> However, when I use "top" in Dom0 to see the physical CPU utilization,
> the CPU core 3 utilization is zero or less than 1%. The utilization
> expected of CPU core 3 is also 100% like the vcpu. Is it? Why I cannot
> get the accurate physical CPU utilization with "top" command in Dom0?
>
> Any advice is appreciated. Thank you for your time.

Xen is not KVM; dom0 is just another VM as far as Xen is concerned, so
dom0's cpu3 is not domU's cpu3.

Top in dom0 shows dom0's virtual cpu utilisation.  I am not aware of a
top-like utility which reports utilisation broken down by physical cpu.

~Andrew
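
For what it's worth, per-domain physical CPU consumption is visible from
the toolstack rather than from dom0's top: `xentop -b` prints periodic
snapshots, and `xl vcpu-list` shows which pCPU each vCPU is currently on.
A minimal sketch that pulls the CPU(%) column out of one `xentop -b`
snapshot follows; the column layout assumed here is illustrative and
varies between Xen versions, so the field indices may need adjusting:

```python
# Sketch: extract per-domain CPU% from `xentop -b` style output in dom0.
# Assumed columns (hypothetical): NAME STATE CPU(sec) CPU(%) ...

def parse_xentop(batch_output):
    """Return {domain_name: cpu_percent} from one xentop -b snapshot."""
    usage = {}
    for line in batch_output.splitlines():
        fields = line.split()
        # Skip short lines and the header row.
        if len(fields) < 4 or fields[0] == "NAME":
            continue
        try:
            usage[fields[0]] = float(fields[3])
        except ValueError:
            continue  # non-numeric CPU(%) field, e.g. stray header text
    return usage

sample = """\
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)
  Domain-0 -----r       1234    3.2    2097152   12.5
     guest --b---       5678   99.8    1048576    6.2
"""
print(parse_xentop(sample))  # {'Domain-0': 3.2, 'guest': 99.8}
```

A pinned, CPU-bound guest should then show near-100% for that domain even
while dom0's own top shows its vCPUs idle.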

--------------010106010405020302090900
Content-Type: text/html; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

<html>
  <head>
    <meta content="text/html; charset=ISO-8859-1"
      http-equiv="Content-Type">
  </head>
  <body text="#000000" bgcolor="#FFFFFF">
    <div class="moz-cite-prefix">On 27/02/14 15:06, Bei Guan wrote:<br>
    </div>
    <blockquote
cite="mid:CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com"
      type="cite">
      <meta http-equiv="Content-Type" content="text/html;
        charset=ISO-8859-1">
      <div dir="ltr">
        <div>Hi,</div>
        <div><br>
        </div>
        <div>I run a PV DomU with 1 vcpu on Xen. I pin the vcpu to a
          physical CPU core, such as core 3. Then, I run a cpu-bound
          process in DomU and the vcpu utilization is 100% (got it with
          "xentop" in Dom0).</div>
        <div>However, when I use "top" in Dom0 to see the physical CPU
          utilization, the CPU core 3 utilization is zero or less than
          1%. The utilization expected of CPU core 3 is also 100% like
          the vcpu. Is it? Why I cannot get the accurate physical CPU
          utilization with "top" command in Dom0?</div>
        <div><br>
        </div>
        Any advice is appreciated. Thank you for your time.<br
          clear="all">
      </div>
    </blockquote>
    <br>
    Xen is not KVM; dom0 is just another VM as far as Xen is concerned,
    so dom0's cpu3 is not domU's cpu3.<br>
    <br>
    Top in dom0 shows dom0's virtual cpu utilisation.&nbsp; I am not aware of
    a top-like utility which reports utilisation broken down by physical
    cpu.<br>
    <br>
    ~Andrew<br>
  </body>
</html>

--------------010106010405020302090900--


--===============3235517607840402474==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3235517607840402474==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 15:49:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3De-0007Ua-6H; Thu, 27 Feb 2014 15:49:54 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WJ3Dd-0007UJ-05
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 15:49:53 +0000
Received: from [193.109.254.147:51429] by server-7.bemta-14.messagelabs.com id
	A0/E8-23424-0AE5F035; Thu, 27 Feb 2014 15:49:52 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393516190!1999973!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20471 invoked from network); 27 Feb 2014 15:49:51 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:49:51 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104685632"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:49:49 +0000
Received: from [10.80.2.133] (10.80.2.133) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:49:49 -0500
Message-ID: <530F5E9B.5020404@citrix.com>
Date: Thu, 27 Feb 2014 15:49:47 +0000
From: Zoltan Kiss <zoltan.kiss@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
	<1392745532.23084.65.camel@kazak.uk.xensource.com>
	<53093051.9040907@citrix.com> <530B4E05.4020900@schaman.hu>
	<530B606F.2070902@citrix.com>
	<20140227124327.GD16241@zion.uk.xensource.com>
In-Reply-To: <20140227124327.GD16241@zion.uk.xensource.com>
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, jonathan.davies@citrix.com,
	Ian Campbell <Ian.Campbell@citrix.com>,
	linux-kernel@vger.kernel.org, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 12:43, Wei Liu wrote:
> On Mon, Feb 24, 2014 at 03:08:31PM +0000, Zoltan Kiss wrote:
>> On 24/02/14 13:49, Zoltan Kiss wrote:
>>> On 22/02/14 23:18, Zoltan Kiss wrote:
>>>> On 18/02/14 17:45, Ian Campbell wrote:
>>>>> On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
>>>>>
>>>>> Re the Subject: change how? Perhaps "handle foreign mapped pages on the
>>>>> guest RX path" would be clearer.
>>>> Ok, I'll do that.
>>>>
>>>>>
>>>>>> RX path need to know if the SKB fragments are stored on
>>>>>> pages from another
>>>>>> domain.
>>>>> Does this not need to be done either before the mapping change
>>>>> or at the
>>>>> same time? -- otherwise you have a window of a couple of commits where
>>>>> things are broken, breaking bisectability.
>>>> I can move this to the beginning, to keep bisectability. I've
>>>> put it here originally because none of these makes sense without
>>>> the previous patches.
>>> Well, I gave it a close look: to move this to the beginning as a
>>> separate patch I would need to put move a lot of definitions from
>>> the first patch to here (ubuf_to_vif helper,
>>> xenvif_zerocopy_callback etc.). That would be the best from bisect
>>> point of view, but from patch review point of view even worse than
>>> now. So the only option I see is to merge this with the first 2
>>> patches, so it will be even bigger.
>> Actually I was stupid, we can move this patch earlier and introduce
>> stubs for those 2 functions. But for the another two patches (#6 and
>> #8) it's still true that we can't move them before, only merge them
>> into the main, as they heavily rely on the main patch. #6 is
>> necessary for Windows frontends, as they are keen to send too many
>> slots. #8 is quite a rare case, happens only if a guest wedge or
>> malicious, and sits on the packet.
>> So my question is still up: do you prefer perfect bisectability or
>> more segmented patches which are not that pain to review?
>>
>
> What's the diff stat if you merge those patches?
>

  drivers/net/xen-netback/common.h    |   33 ++-
  drivers/net/xen-netback/interface.c |   67 +++++-
  drivers/net/xen-netback/netback.c   |  424 
++++++++++++++++++++++-------------
  3 files changed, 362 insertions(+), 162 deletions(-)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:52:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:52:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3Fe-0007j7-R6; Thu, 27 Feb 2014 15:51:58 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pbonzini@redhat.com>) id 1WJ3Fd-0007it-1k
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 15:51:57 +0000
Received: from [193.109.254.147:56038] by server-16.bemta-14.messagelabs.com
	id 2C/7E-21945-C1F5F035; Thu, 27 Feb 2014 15:51:56 +0000
X-Env-Sender: pbonzini@redhat.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393516312!3599479!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25394 invoked from network); 27 Feb 2014 15:51:53 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-12.tower-27.messagelabs.com with SMTP;
	27 Feb 2014 15:51:53 -0000
Received: from int-mx10.intmail.prod.int.phx2.redhat.com
	(int-mx10.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1RFofxc006532
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 10:50:42 -0500
Received: from yakj.usersys.redhat.com (dhcp-176-198.mxp.redhat.com
	[10.32.176.198])
	by int-mx10.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1RFoVVd020741; Thu, 27 Feb 2014 10:50:32 -0500
Message-ID: <530F5EC7.4060603@redhat.com>
Date: Thu, 27 Feb 2014 16:50:31 +0100
From: Paolo Bonzini <pbonzini@redhat.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.2.0
MIME-Version: 1.0
To: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	David Vrabel <david.vrabel@citrix.com>, Waiman Long <Waiman.Long@hp.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
	<530F4949.4050706@citrix.com> <530F4F98.2080308@redhat.com>
	<530F5851.1090809@linux.vnet.ibm.com>
In-Reply-To: <530F5851.1090809@linux.vnet.ibm.com>
X-Enigmail-Version: 1.6
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.23
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Il 27/02/2014 16:22, Raghavendra K T ha scritto:
> On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
> [...]
>>> But neither of the VCPUs being kicked here are halted -- they're either
>>> running or runnable (descheduled by the hypervisor).
>>
>> /me actually looks at Waiman's code...
>>
>> Right, this is really different from pvticketlocks, where the *unlock*
>> primitive wakes up a sleeping VCPU.  It is more similar to PLE
>> (pause-loop exiting).
>
> Adding to the discussion, I see there are two possibilities here,
> considering that in undercommit cases we should not exceed
> HEAD_SPIN_THRESHOLD,
>
> 1. the looping vcpu in pv_head_spin_check() should do halt()
> considering that we have done enough spinning (more than typical
> lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in qspinlock case.
> But this may result in some ping ponging?

Actually, I think the qspinlock can work roughly the same as the 
pvticketlock, using the same lock_spinning and unlock_lock hooks.

The x86-specific codepath can use bit 1 in the ->wait byte as "I have 
halted, please kick me".

	value = _QSPINLOCK_WAITING;
	i = 0;
	do
		cpu_relax();
	while (ACCESS_ONCE(slock->lock) && i++ < BUSY_WAIT);
	if (ACCESS_ONCE(slock->lock)) {
		value |= _QSPINLOCK_HALTED;
		xchg(&slock->wait, value >> 8);
		if (ACCESS_ONCE(slock->lock)) {
			... call lock_spinning hook ...
		}
	}

	/*
	 * Set the lock bit & clear the halted+waiting bits
	 */
	if (cmpxchg(&slock->lock_wait, value,
		    _QSPINLOCK_LOCKED) == value)
		return -1;	/* Got the lock */
	__atomic_and(&slock->lock_wait, ~_QSPINLOCK_HALTED);

The lock_spinning/unlock_lock code can probably be much simpler, because 
you do not need to keep a list of all spinning locks.  Unlock_lock can 
just use the CPU number to wake up the right CPU.

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/2014 16:22, Raghavendra K T wrote:
> On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
> [...]
>>> But neither of the VCPUs being kicked here are halted -- they're either
>>> running or runnable (descheduled by the hypervisor).
>>
>> /me actually looks at Waiman's code...
>>
>> Right, this is really different from pvticketlocks, where the *unlock*
>> primitive wakes up a sleeping VCPU.  It is more similar to PLE
>> (pause-loop exiting).
>
> Adding to the discussion, I see there are two possibilities here,
> considering that in undercommit cases we should not exceed
> HEAD_SPIN_THRESHOLD,
>
> 1. the looping vcpu in pv_head_spin_check() should do halt()
> considering that we have done enough spinning (more than typical
> lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in qspinlock case.
> But this may result in some ping ponging?

Actually, I think the qspinlock can work roughly the same as the 
pvticketlock, using the same lock_spinning and unlock_lock hooks.

The x86-specific codepath can use bit 1 in the ->wait byte as "I have 
halted, please kick me".

	value = _QSPINLOCK_WAITING;
	i = 0;
	do
		cpu_relax();
	while (ACCESS_ONCE(slock->lock) && i++ < BUSY_WAIT);
	if (ACCESS_ONCE(slock->lock)) {
		value |= _QSPINLOCK_HALTED;
		xchg(&slock->wait, value >> 8);
		if (ACCESS_ONCE(slock->lock)) {
			... call lock_spinning hook ...
		}
	}

	/*
	 * Set the lock bit & clear the halted+waiting bits
	 */
	if (cmpxchg(&slock->lock_wait, value,
		    _QSPINLOCK_LOCKED) == value)
		return -1;	/* Got the lock */
	__atomic_and(&slock->lock_wait, ~_QSPINLOCK_HALTED);

The lock_spinning/unlock_lock code can probably be much simpler, because 
you do not need to keep a list of all spinning locks.  Unlock_lock can 
just use the CPU number to wake up the right CPU.
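
For illustration, the handshake sketched above can be fleshed out as a
compilable C11 sketch. This is not the RFC's actual code: the
_QSPINLOCK_* layout (lock byte at offset 0, wait byte at offset 1, so it
assumes little-endian, as on x86), BUSY_WAIT, and the lock_spinning stub
are all assumptions filled in from the discussion:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Assumed layout: byte 0 is the lock byte, byte 1 is the wait byte
 * (bit 0 = waiting, bit 1 = "I have halted, please kick me"). */
#define _QSPINLOCK_LOCKED   0x0001
#define _QSPINLOCK_WAITING  0x0100
#define _QSPINLOCK_HALTED   0x0200
#define BUSY_WAIT           64

struct qspinlock {
	union {
		_Atomic uint16_t lock_wait;	/* lock byte + wait byte */
		struct {			/* little-endian punning, */
			_Atomic uint8_t lock;	/* as in the kernel layout */
			_Atomic uint8_t wait;
		};
	};
};

/* Stub for the pv hook; a real kernel would halt until kicked. */
static int lock_spinning_calls;
static void lock_spinning(int cpu)
{
	(void)cpu;
	lock_spinning_calls++;
}

/* Queue-head slow path: spin for a while, advertise HALTED in the wait
 * byte, then take the lock with one cmpxchg that also clears the
 * halted+waiting bits.  The caller is assumed to be the queue head,
 * i.e. _QSPINLOCK_WAITING is already set in the lock word. */
static int pv_queue_head_lock(struct qspinlock *slock, int cpu)
{
	uint16_t value = _QSPINLOCK_WAITING;
	int i = 0;

	while (atomic_load(&slock->lock) && i++ < BUSY_WAIT)
		;				/* cpu_relax() in the kernel */

	if (atomic_load(&slock->lock)) {
		value |= _QSPINLOCK_HALTED;
		atomic_exchange(&slock->wait, value >> 8);
		if (atomic_load(&slock->lock))
			lock_spinning(cpu);	/* sleep until the unlocker kicks us */
	}

	for (;;) {
		uint16_t expected = value;

		/* Set the lock bit & clear the halted+waiting bits. */
		if (atomic_compare_exchange_strong(&slock->lock_wait,
						   &expected,
						   _QSPINLOCK_LOCKED))
			return 1;		/* got the lock */

		/* Lost a race: drop the HALTED bit and spin again. */
		atomic_fetch_and(&slock->lock_wait,
				 (uint16_t)~_QSPINLOCK_HALTED);
		value = _QSPINLOCK_WAITING;
		while (atomic_load(&slock->lock))
			;
	}
}
```

With the lock free and the WAITING bit already set, the cmpxchg succeeds
on the first try and the lock word ends up as just _QSPINLOCK_LOCKED,
without ever calling the lock_spinning hook.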

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:55:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3JH-0007wB-Jl; Thu, 27 Feb 2014 15:55:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WJ3JG-0007w6-QZ
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 15:55:43 +0000
Received: from [85.158.139.211:47854] by server-11.bemta-5.messagelabs.com id
	B3/49-23886-EFF5F035; Thu, 27 Feb 2014 15:55:42 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393516539!2155101!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19355 invoked from network); 27 Feb 2014 15:55:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:55:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104687664"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:55:39 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL01.citrite.net (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 10:55:38 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>, David Vrabel
	<david.vrabel@citrix.com>, Russell King <linux@arm.linux.org.uk>, Konrad
	Rzeszutek Wilk <konrad.wilk@oracle.com>, Thomas Gleixner
	<tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin"
	<hpa@zytor.com>
Date: Thu, 27 Feb 2014 15:55:30 +0000
Message-ID: <1393516530-9145-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: Jeremy Fitzhardinge <jeremy@goop.org>, Zoltan Kiss <zoltan.kiss@citrix.com>,
	Matt Wilson <msw@linux.com>, x86@kernel.org, linux-kernel@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	Anthony Liguori <aliguori@amazon.com>, xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH] xen/grant-table: Refactor gnttab_[un]map_refs
	to avoid m2p_override
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

(This is a continuation of "[PATCH v9] xen/grant-table: Avoid m2p_override
during mapping")

The grant mapping API does m2p_override unnecessarily: only gntdev needs it;
for blkback and the future netback patches it just causes lock contention, as
those pages never go to userspace. Therefore this series does the following:
- the bulk of the original function (everything after the mapping hypercall)
  is moved to the arch-dependent set/clear_foreign_p2m_mapping
- the "if (xen_feature(XENFEAT_auto_translated_physmap))" branch goes to ARM
- therefore the ARM function could be much smaller, and the m2p_override stubs
  could also be removed
- on x86 the set_phys_to_machine calls were moved up to this new function
  from the m2p_override functions
- and the m2p_override functions are only called when there is a kmap_ops param

It also removes a stray space from arch/x86/include/asm/xen/page.h.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Suggested-by: Anthony Liguori <aliguori@amazon.com>
Suggested-by: David Vrabel <david.vrabel@citrix.com>
Suggested-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
---
 arch/arm/include/asm/xen/page.h |   15 ++---
 arch/arm/xen/p2m.c              |   32 +++++++++++
 arch/x86/include/asm/xen/page.h |   11 +++-
 arch/x86/xen/p2m.c              |  121 ++++++++++++++++++++++++++++++++++-----
 drivers/xen/grant-table.c       |   73 +----------------------
 5 files changed, 156 insertions(+), 96 deletions(-)

diff --git a/arch/arm/include/asm/xen/page.h b/arch/arm/include/asm/xen/page.h
index e0965ab..cf4f3e8 100644
--- a/arch/arm/include/asm/xen/page.h
+++ b/arch/arm/include/asm/xen/page.h
@@ -97,16 +97,13 @@ static inline pte_t *lookup_address(unsigned long address, unsigned int *level)
 	return NULL;
 }
 
-static inline int m2p_add_override(unsigned long mfn, struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
-{
-	return 0;
-}
+extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+				   struct gnttab_map_grant_ref *kmap_ops,
+				   struct page **pages, unsigned int count);
 
-static inline int m2p_remove_override(struct page *page, bool clear_pte)
-{
-	return 0;
-}
+extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+				     struct gnttab_map_grant_ref *kmap_ops,
+				     struct page **pages, unsigned int count);
 
 bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 bool __set_phys_to_machine_multi(unsigned long pfn, unsigned long mfn,
diff --git a/arch/arm/xen/p2m.c b/arch/arm/xen/p2m.c
index b31ee1b2..9c48778 100644
--- a/arch/arm/xen/p2m.c
+++ b/arch/arm/xen/p2m.c
@@ -146,6 +146,38 @@ unsigned long __mfn_to_pfn(unsigned long mfn)
 }
 EXPORT_SYMBOL_GPL(__mfn_to_pfn);
 
+int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+			    struct gnttab_map_grant_ref *kmap_ops,
+			    struct page **pages, unsigned int count)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		if (map_ops[i].status)
+			continue;
+		set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
+				    map_ops[i].dev_bus_addr >> PAGE_SHIFT);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
+int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	int i;
+
+	for (i = 0; i < count; i++) {
+		set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
+				    INVALID_P2M_ENTRY);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
+
 bool __set_phys_to_machine_multi(unsigned long pfn,
 		unsigned long mfn, unsigned long nr_pages)
 {
diff --git a/arch/x86/include/asm/xen/page.h b/arch/x86/include/asm/xen/page.h
index 3e276eb..c949923 100644
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -49,10 +49,17 @@ extern bool __set_phys_to_machine(unsigned long pfn, unsigned long mfn);
 extern unsigned long set_phys_range_identity(unsigned long pfn_s,
 					     unsigned long pfn_e);
 
+extern int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+				   struct gnttab_map_grant_ref *kmap_ops,
+				   struct page **pages, unsigned int count);
 extern int m2p_add_override(unsigned long mfn, struct page *page,
 			    struct gnttab_map_grant_ref *kmap_op);
+extern int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+				     struct gnttab_map_grant_ref *kmap_ops,
+				     struct page **pages, unsigned int count);
 extern int m2p_remove_override(struct page *page,
-				struct gnttab_map_grant_ref *kmap_op);
+			       struct gnttab_map_grant_ref *kmap_op,
+			       unsigned long mfn);
 extern struct page *m2p_find_override(unsigned long mfn);
 extern unsigned long m2p_find_override_pfn(unsigned long mfn, unsigned long pfn);
 
@@ -121,7 +128,7 @@ static inline unsigned long mfn_to_pfn(unsigned long mfn)
 		pfn = m2p_find_override_pfn(mfn, ~0);
 	}
 
-	/* 
+	/*
 	 * pfn is ~0 if there are no entries in the m2p for mfn or if the
 	 * entry doesn't map back to the mfn and m2p_override doesn't have a
 	 * valid entry for it.
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 696c694..85e5d78 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -881,6 +881,65 @@ static unsigned long mfn_hash(unsigned long mfn)
 	return hash_long(mfn, M2P_OVERRIDE_HASH_SHIFT);
 }
 
+int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+			    struct gnttab_map_grant_ref *kmap_ops,
+			    struct page **pages, unsigned int count)
+{
+	int i, ret = 0;
+	bool lazy = false;
+	pte_t *pte;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return 0;
+
+	if (kmap_ops &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < count; i++) {
+		unsigned long mfn, pfn;
+
+		/* Do not add to override if the map failed. */
+		if (map_ops[i].status)
+			continue;
+
+		if (map_ops[i].flags & GNTMAP_contains_pte) {
+			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
+				(map_ops[i].host_addr & ~PAGE_MASK));
+			mfn = pte_mfn(*pte);
+		} else {
+			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
+		}
+		pfn = page_to_pfn(pages[i]);
+
+		WARN_ON(PagePrivate(pages[i]));
+		SetPagePrivate(pages[i]);
+		set_page_private(pages[i], mfn);
+		pages[i]->index = pfn_to_mfn(pfn);
+
+		if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)))) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		if (kmap_ops) {
+			ret = m2p_add_override(mfn, pages[i], &kmap_ops[i]);
+			if (ret)
+				goto out;
+		}
+	}
+
+out:
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(set_foreign_p2m_mapping);
+
 /* Add an MFN override for a particular page */
 int m2p_add_override(unsigned long mfn, struct page *page,
 		struct gnttab_map_grant_ref *kmap_op)
@@ -899,13 +958,6 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 					"m2p_add_override: pfn %lx not mapped", pfn))
 			return -EINVAL;
 	}
-	WARN_ON(PagePrivate(page));
-	SetPagePrivate(page);
-	set_page_private(page, mfn);
-	page->index = pfn_to_mfn(pfn);
-
-	if (unlikely(!set_phys_to_machine(pfn, FOREIGN_FRAME(mfn))))
-		return -ENOMEM;
 
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
@@ -943,20 +995,62 @@ int m2p_add_override(unsigned long mfn, struct page *page,
 	return 0;
 }
 EXPORT_SYMBOL_GPL(m2p_add_override);
+
+int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+			      struct gnttab_map_grant_ref *kmap_ops,
+			      struct page **pages, unsigned int count)
+{
+	int i, ret = 0;
+	bool lazy = false;
+
+	if (xen_feature(XENFEAT_auto_translated_physmap))
+		return 0;
+
+	if (kmap_ops &&
+	    !in_interrupt() &&
+	    paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
+		arch_enter_lazy_mmu_mode();
+		lazy = true;
+	}
+
+	for (i = 0; i < count; i++) {
+		unsigned long mfn = get_phys_to_machine(page_to_pfn(pages[i]));
+		unsigned long pfn = page_to_pfn(pages[i]);
+
+		if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT)) {
+			ret = -EINVAL;
+			goto out;
+		}
+
+		set_page_private(pages[i], INVALID_P2M_ENTRY);
+		WARN_ON(!PagePrivate(pages[i]));
+		ClearPagePrivate(pages[i]);
+		set_phys_to_machine(pfn, pages[i]->index);
+
+		if (kmap_ops)
+			ret = m2p_remove_override(pages[i], &kmap_ops[i], mfn);
+		if (ret)
+			goto out;
+	}
+
+out:
+	if (lazy)
+		arch_leave_lazy_mmu_mode();
+	return ret;
+}
+EXPORT_SYMBOL_GPL(clear_foreign_p2m_mapping);
+
 int m2p_remove_override(struct page *page,
-		struct gnttab_map_grant_ref *kmap_op)
+			struct gnttab_map_grant_ref *kmap_op,
+			unsigned long mfn)
 {
 	unsigned long flags;
-	unsigned long mfn;
 	unsigned long pfn;
 	unsigned long uninitialized_var(address);
 	unsigned level;
 	pte_t *ptep = NULL;
 
 	pfn = page_to_pfn(page);
-	mfn = get_phys_to_machine(pfn);
-	if (mfn == INVALID_P2M_ENTRY || !(mfn & FOREIGN_FRAME_BIT))
-		return -EINVAL;
 
 	if (!PageHighMem(page)) {
 		address = (unsigned long)__va(pfn << PAGE_SHIFT);
@@ -970,10 +1064,7 @@ int m2p_remove_override(struct page *page,
 	spin_lock_irqsave(&m2p_override_lock, flags);
 	list_del(&page->lru);
 	spin_unlock_irqrestore(&m2p_override_lock, flags);
-	WARN_ON(!PagePrivate(page));
-	ClearPagePrivate(page);
 
-	set_phys_to_machine(pfn, page->index);
 	if (kmap_op != NULL) {
 		if (!PageHighMem(page)) {
 			struct multicall_space mcs;
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index b84e3ab..6d325bd 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -933,9 +933,6 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 		    struct page **pages, unsigned int count)
 {
 	int i, ret;
-	bool lazy = false;
-	pte_t *pte;
-	unsigned long mfn;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
@@ -947,45 +944,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
 						&map_ops[i].status, __func__);
 
-	/* this is basically a nop on x86 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		for (i = 0; i < count; i++) {
-			if (map_ops[i].status)
-				continue;
-			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
-					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
-		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		/* Do not add to override if the map failed. */
-		if (map_ops[i].status)
-			continue;
-
-		if (map_ops[i].flags & GNTMAP_contains_pte) {
-			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
-				(map_ops[i].host_addr & ~PAGE_MASK));
-			mfn = pte_mfn(*pte);
-		} else {
-			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
-		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
-	}
-
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return set_foreign_p2m_mapping(map_ops, kmap_ops, pages, count);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
@@ -993,39 +952,13 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
 		      struct page **pages, unsigned int count)
 {
-	int i, ret;
-	bool lazy = false;
+	int ret;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
 
-	/* this is basically a nop on x86 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		for (i = 0; i < count; i++) {
-			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
-					INVALID_P2M_ENTRY);
-		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
-	}
-
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return clear_foreign_p2m_mapping(unmap_ops, kmap_ops, pages, count);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

-	bool lazy = false;
-	pte_t *pte;
-	unsigned long mfn;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, map_ops, count);
 	if (ret)
@@ -947,45 +944,7 @@ int gnttab_map_refs(struct gnttab_map_grant_ref *map_ops,
 			gnttab_retry_eagain_gop(GNTTABOP_map_grant_ref, map_ops + i,
 						&map_ops[i].status, __func__);
 
-	/* this is basically a nop on x86 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		for (i = 0; i < count; i++) {
-			if (map_ops[i].status)
-				continue;
-			set_phys_to_machine(map_ops[i].host_addr >> PAGE_SHIFT,
-					map_ops[i].dev_bus_addr >> PAGE_SHIFT);
-		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		/* Do not add to override if the map failed. */
-		if (map_ops[i].status)
-			continue;
-
-		if (map_ops[i].flags & GNTMAP_contains_pte) {
-			pte = (pte_t *) (mfn_to_virt(PFN_DOWN(map_ops[i].host_addr)) +
-				(map_ops[i].host_addr & ~PAGE_MASK));
-			mfn = pte_mfn(*pte);
-		} else {
-			mfn = PFN_DOWN(map_ops[i].dev_bus_addr);
-		}
-		ret = m2p_add_override(mfn, pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
-	}
-
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return set_foreign_p2m_mapping(map_ops, kmap_ops, pages, count);
 }
 EXPORT_SYMBOL_GPL(gnttab_map_refs);
 
@@ -993,39 +952,13 @@ int gnttab_unmap_refs(struct gnttab_unmap_grant_ref *unmap_ops,
 		      struct gnttab_map_grant_ref *kmap_ops,
 		      struct page **pages, unsigned int count)
 {
-	int i, ret;
-	bool lazy = false;
+	int ret;
 
 	ret = HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap_ops, count);
 	if (ret)
 		return ret;
 
-	/* this is basically a nop on x86 */
-	if (xen_feature(XENFEAT_auto_translated_physmap)) {
-		for (i = 0; i < count; i++) {
-			set_phys_to_machine(unmap_ops[i].host_addr >> PAGE_SHIFT,
-					INVALID_P2M_ENTRY);
-		}
-		return ret;
-	}
-
-	if (!in_interrupt() && paravirt_get_lazy_mode() == PARAVIRT_LAZY_NONE) {
-		arch_enter_lazy_mmu_mode();
-		lazy = true;
-	}
-
-	for (i = 0; i < count; i++) {
-		ret = m2p_remove_override(pages[i], kmap_ops ?
-				       &kmap_ops[i] : NULL);
-		if (ret)
-			goto out;
-	}
-
- out:
-	if (lazy)
-		arch_leave_lazy_mmu_mode();
-
-	return ret;
+	return clear_foreign_p2m_mapping(unmap_ops, kmap_ops, pages, count);
 }
 EXPORT_SYMBOL_GPL(gnttab_unmap_refs);
 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:57:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3Ky-000833-3R; Thu, 27 Feb 2014 15:57:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ3Kw-00082v-Ag
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:57:27 +0000
Received: from [193.109.254.147:33953] by server-4.bemta-14.messagelabs.com id
	FD/B7-32066-5606F035; Thu, 27 Feb 2014 15:57:25 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393516643!7258869!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15524 invoked from network); 27 Feb 2014 15:57:24 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:57:24 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104688135"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 15:57:23 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:57:22 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ3Ks-0004Sp-Dx;
	Thu, 27 Feb 2014 15:57:22 +0000
Message-ID: <530F6062.2040302@citrix.com>
Date: Thu, 27 Feb 2014 15:57:22 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393499497-9162-1-git-send-email-andrew.cooper3@citrix.com>
	<1393499497-9162-4-git-send-email-andrew.cooper3@citrix.com>
	<530F365B020000780011FD03@nat28.tlf.novell.com>
	<530F2B85.6060403@citrix.com>
	<530F3CF8020000780011FD40@nat28.tlf.novell.com>
In-Reply-To: <530F3CF8020000780011FD40@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH RFC 2/2] xen/x86: Introduce XEN_SYSCTL_cpuid
 hypercall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 12:26, Jan Beulich wrote:
>>>> On 27.02.14 at 13:11, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> Apart from forcibly messing with a balanced numa setup, what about cpu
>> pools, or toolstack disaggregation where pinning is restricted?
> There is no messing with a balanced setup here - everything should
> of course be transient, i.e. get restored to previous values right
> after.

But anything else happening on that vcpu at the same time is transiently
out of balance.

>  And I can't see a reason not to permit the hardware domain
> to temporarily escape its enclave, as it can do worse to the overall
> system anyway.

True

>
> The new hypercall is simple enough (yet very x86-centric) that I'm
> not really against it; what I'm against is adding functionality to the
> hypervisor that is already available by other means.
>
> Jan
>

As I said before, the cache information is faked up by the cpuid policy,
so might be subject to policy depending on faulting or non-dom0 hardware
domain, or PVH dom0 in the near future. 

I think there is a legitimate case for "I really truly need the real
hardware values, without anything in the policy getting in the way". 
'cpuid' is indeed very x86 specific, but it is not information readily
available by other means.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:57:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3L1-00083N-HB; Thu, 27 Feb 2014 15:57:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ3L0-00083D-5V
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:57:30 +0000
Received: from [85.158.143.35:26655] by server-1.bemta-4.messagelabs.com id
	46/87-31661-9606F035; Thu, 27 Feb 2014 15:57:29 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393516647!8800965!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1176 invoked from network); 27 Feb 2014 15:57:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:57:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106315485"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:57:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:57:26 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ3Kw-0004Ss-OE;
	Thu, 27 Feb 2014 15:57:26 +0000
Date: Thu, 27 Feb 2014 15:57:26 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <20140227155726.GI16241@zion.uk.xensource.com>
References: <1772884781.20140218222513@eikelenboom.it>
	<5305CFC6.3080502@oracle.com>
	<587238484.20140220121842@eikelenboom.it>
	<5306F2E8.5090509@oracle.com>
	<824074181.20140226101442@eikelenboom.it>
	<59358334.20140226161123@eikelenboom.it>
	<20140227141812.GE16241@zion.uk.xensource.com>
	<529743590.20140227154351@eikelenboom.it>
	<20140227151538.GG16241@zion.uk.xensource.com>
	<1982379440.20140227162655@eikelenboom.it>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1982379440.20140227162655@eikelenboom.it>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: annie li <annie.li@oracle.com>, Paul Durrant <Paul.Durrant@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Zoltan Kiss <zoltan.kiss@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen-unstable Linux 3.14-rc3 and 3.13 Network
	troubles
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 04:26:55PM +0100, Sander Eikelenboom wrote:
[...]
> >> Added some more printk's:
> >> 
> >> @@ -2072,7 +2076,11 @@ __gnttab_copy(
> >>                                        &s_frame, &s_pg,
> >>                                        &source_off, &source_len, 1);
> >>          if ( rc != GNTST_okay )
> >> -            goto error_out;
> >> +            PIN_FAIL(error_out, GNTST_general_error,
> >> +                     "?!?!? src_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
> >> +                     current->domain->domain_id, op->source.domid, op->dest.domid);
> >> +
> >> +
> >>          have_s_grant = 1;
> >>          if ( op->source.offset < source_off ||
> >>               op->len > source_len )
> >> @@ -2096,7 +2104,11 @@ __gnttab_copy(
> >>                                        current->domain->domain_id, 0,
> >>                                        &d_frame, &d_pg, &dest_off, &dest_len, 1);
> >>          if ( rc != GNTST_okay )
> >> -            goto error_out;
> >> +            PIN_FAIL(error_out, GNTST_general_error,
> >> +                     "?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:%d src_dom_id:%d dest_dom_id:%d\n",
> >> +                     current->domain->domain_id, op->source.domid, op->dest.domid);
> >> +
> >> +
> >>          have_d_grant = 1;
> >> 
> >> 
> >> this comes out:
> >> 
> >> (XEN) [2014-02-27 02:34:37] grant_table.c:2109:d0 ?!?!? dest_is_gref: aquire grant for copy failed current_dom_id:0 src_dom_id:32752 dest_dom_id:7
> >> 
> 
> > If it fails in gnttab_copy then I very much suspects this is a network
> > driver problem as persistent grant in blk driver doesn't use grant
> > copy.
> 
> Does the dest_gref or src_is_gref by any chance give some sort of direction ?
> 

Yes, there is an indication. For the network driver, dest_is_gref means
DomU RX path, src_is_gref means DomU TX path.

In the particular error message you mentioned, it means that this
happens in DomU's RX path, but it would not give us a clear idea of what
had happened. As the ring is skewed anyway, it's not surprising to see a
garbage gref in the hypervisor. 

Wei.

> >> 
> >> > My suggestion is, if you have a working base line, you can try to setup
> >> > different frontend / backend combination to help narrow down the
> >> > problem.
> >> 
> >> Will see what i can do after the weekend
> >> 
> 
> > Thanks
> 
> >> > Wei.
> >> 
> >> <snip>
> >> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:58:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 15:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3M8-0008F0-4g; Thu, 27 Feb 2014 15:58:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ3M6-0008Ei-Jt
	for Xen-devel@lists.xen.org; Thu, 27 Feb 2014 15:58:39 +0000
Received: from [85.158.137.68:8367] by server-12.bemta-3.messagelabs.com id
	EC/EC-01674-DA06F035; Thu, 27 Feb 2014 15:58:37 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393516715!1841137!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24120 invoked from network); 27 Feb 2014 15:58:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 15:58:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106315721"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 15:58:08 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL01.citrite.net
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 10:58:07 -0500
Message-ID: <530F608E.9040407@citrix.com>
Date: Thu, 27 Feb 2014 15:58:06 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Domain Save Image Format proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Here is a draft of a proposal for a new domain save image format.  It
does not currently cover all use cases (see the full list in the doc).

http://xenbits.xen.org/people/dvrabel/domain-save-format-C.pdf

Since I believe this is now complete enough for x86 PV guests, the next
step will be a prototype implementation for these guests.

Introduction
============

Revision History
----------------

--------------------------------------------------------------------
Version  Date         Changes
-------  -----------  ----------------------------------------------
Draft A  6 Feb 2014   Initial draft.

Draft B  10 Feb 2014  Corrected image header field widths.

                      Minor updates and clarifications.

Draft C  27 Feb 2014  List features excluded from this draft.

                      Clarify which image versions a restore must
                      support.

                      x86 and ARM are always little-endian.

                      Domain header: combine arch and type fields and
                      add Xen major and minor version.

                      Move checksum to end of record and include the
                      header.

                      Remove P2M record.

                      Add some reserved bits to pfn fields in the
                      PAGE_DATA record to allow for future expansion.

                      List page types and note that XTAB can be used
                      for unmapped pages at the end of a live
                      migrations.

                      Rename VCPU_INFO record to VCPU_COUNT.

                      Add VCPU_CONTEXT_X1, and VCPU_CONTEXT_X2 records
                      for the various extended VCPU context.

                      Add array of P2M frame PFNs to X86_PV_INFO
                      record.
--------------------------------------------------------------------

Purpose
-------

The _domain save image_ is the context of a running domain used for
snapshots of a domain or for transferring domains between hosts during
migration.

There are a number of problems with the format of the domain save
image used in Xen 4.4 and earlier (the _legacy format_).

* Dependent on toolstack word size.  A number of fields within the
  image are native types such as `unsigned long` which have different
  sizes between 32-bit and 64-bit toolstacks.  This prevents domains
  from being migrated between hosts running 32-bit and 64-bit
  toolstacks.

* There is no header identifying the image.

* The image has no version information.

A new format that addresses the above is required.

ARM does not yet have a domain save image format specified and the
format described in this specification should be suitable.

Not Yet Included
----------------

The following features are not yet fully specified and will be
included in a future draft.

* HVM guests

* Remus

* Page data compression.

* ARM


Overview
========

The image format consists of two main sections:

* _Headers_
* _Records_

Headers
-------

There are two headers: the _image header_, and the _domain header_.
The image header describes the format of the image (version etc.).
The _domain header_ contains general information about the domain
(architecture, type etc.).

Records
-------

The main part of the format is a sequence of different _records_.
Each record type contains information about the domain context.  At a
minimum there is an END record marking the end of the records section.


Fields
------

All the fields within the headers and records have a fixed width.

Fields are always aligned to their size.

Padding and reserved fields are set to zero on save and must be
ignored during restore.

Integer (numeric) fields in the image header are always in big-endian
byte order.

Integer fields in the domain header and in the records are in the
endianness described in the image header (which will typically be the
native ordering).

Headers
=======

Image Header
------------

The image header identifies an image as a Xen domain save image.  It
includes the version of this specification that the image complies
with.

Tools supporting version _V_ of the specification shall always save
images using version _V_.  Tools shall support restoring from version
_V_.  If the previous Xen release produced version _V_ - 1 images,
tools shall support restoring from these.  Tools may additionally
support restoring from earlier versions.

The marker field can be used to distinguish between legacy images and
those corresponding to this specification.  Legacy images will have
one or more zero bits within the first 8 octets of the image.

Fields within the image header are always in _big-endian_ byte order,
regardless of the setting of the endianness bit.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+
    | marker                                          |
    +-----------------------+-------------------------+
    | id                    | version                 |
    +-----------+-----------+-------------------------+
    | options   | (reserved)                          |
    +-----------+-------------------------------------+


--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
marker      0xFFFFFFFFFFFFFFFF.

id          0x58454E46 ("XENF" in ASCII).

version     0x00000001.  The version of this specification.

options     Bit 0: Endianness.  0 = little-endian, 1 = big-endian.

            Bit 1-15: Reserved.
--------------------------------------------------------------------

The endianness shall be 0 (little-endian) for images generated on an
i386, x86_64, or arm host.
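
As an illustration, a restore tool might validate the header along these
lines.  This is a minimal Python sketch; the function and constant names
are ours, not part of the specification, and real tools may also accept
the previous image version:

```python
import struct

MARKER = 0xFFFFFFFFFFFFFFFF
ID_XENF = 0x58454E46        # "XENF" in ASCII
SPEC_VERSION = 0x00000001

def parse_image_header(buf):
    """Parse the 24-octet image header; all fields are big-endian."""
    marker, = struct.unpack_from(">Q", buf, 0)
    if marker != MARKER:
        # A legacy image has one or more zero bits in its first 8 octets.
        raise ValueError("legacy image or not a save image")
    ident, version = struct.unpack_from(">II", buf, 8)
    if ident != ID_XENF:
        raise ValueError("bad id field")
    if version != SPEC_VERSION:
        raise ValueError("unsupported image version %d" % version)
    options, = struct.unpack_from(">H", buf, 16)
    return {"version": version, "big_endian": bool(options & 1)}
```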


Domain Header
-------------

The domain header includes general properties of the domain.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-----------+-------------+
    | type                  | page_shift| (reserved)  |
    +-----------------------+-----------+-------------+
    | xen_major             | xen_minor               |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
type        0x0000: Reserved.

            0x0001: x86 PV.

            0x0002: x86 HVM.

            0x0003: x86 PVH.

            0x0004: ARM.

            0x0005 - 0xFFFFFFFF: Reserved.

page_shift  Size of a guest page as a power of two.

            i.e., page size = 2^page_shift^.

xen_major   The Xen major version when this image was saved.

xen_minor   The Xen minor version when this image was saved.
--------------------------------------------------------------------
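
A corresponding sketch for the domain header (names again illustrative;
the byte order comes from the image header's endianness bit, shown
little-endian here):

```python
import struct

DOMAIN_TYPES = {0x0001: "x86 PV", 0x0002: "x86 HVM",
                0x0003: "x86 PVH", 0x0004: "ARM"}

def parse_domain_header(buf, big_endian=False):
    """Parse the 16-octet domain header."""
    fmt = (">" if big_endian else "<") + "IB3xII"
    dtype, page_shift, xen_major, xen_minor = struct.unpack_from(fmt, buf, 0)
    if dtype not in DOMAIN_TYPES:
        raise ValueError("reserved domain type 0x%04x" % dtype)
    return {
        "type": DOMAIN_TYPES[dtype],
        "page_size": 1 << page_shift,    # page size = 2^page_shift
        "xen_version": (xen_major, xen_minor),
    }
```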

\clearpage

Records
=======

A record has a record header, type specific data and a trailing
footer.  If `body_length` is not a multiple of 8, the body is padded
with zeroes to align the record footer on an 8 octet boundary.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | type                  | body_length             |
    +-----------+-----------+-------------------------+
    | options   | (reserved)                          |
    +-----------+-------------------------------------+
    ...
    Record body of length body_length octets followed by
    0 to 7 octets of padding.
    ...
    +-----------------------+-------------------------+
    | (reserved)            | checksum                |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field        Description
-----------  -------------------------------------------------------
type         0x00000000: END

             0x00000001: PAGE_DATA

             0x00000002: VCPU_COUNT

             0x00000003: VCPU_CONTEXT

             0x00000004: VCPU_CONTEXT_X1

             0x00000005: VCPU_CONTEXT_X2

             0x00000006: X86_PV_INFO

             0x00000007: X86_PV_P2M_FRAMES

             0x00000008 - 0x7FFFFFFF: Reserved for future _mandatory_
             records.

             0x80000000 - 0xFFFFFFFF: Reserved for future _optional_
             records.

body_length  Length in octets of the record body.

options      Bit 0: 0 = checksum invalid, 1 = checksum valid.

             Bit 1-15: Reserved.

checksum     CRC-32C checksum of the record head, body (including any
             trailing padding) and the footer (except for the checksum
             field itself), or 0x00000000 if the checksum field is
             invalid.
--------------------------------------------------------------------

Records may be _mandatory_ or _optional_.  Optional records have bit
31 set in their type.  Restoring an image that has an unrecognized or
unsupported mandatory record must fail.  The contents of optional
records may be ignored during a restore (but their checksums should
still be validated).

The CRC-32C generator is 0x11EDC6F41[^1].

[^1]: See section 12.1 of [RFC 3720][RFC3720], Internet Small Computer
Systems Interface (iSCSI).

[RFC3720]: http://tools.ietf.org/html/rfc3720
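
The checksum computation can be sketched as follows.  The specification
gives only the generator, so this sketch assumes the standard iSCSI
CRC-32C profile from RFC 3720 (bit-reflected, initial value 0xFFFFFFFF,
final complement); `record_checksum` and its arguments are illustrative
names:

```python
def crc32c(data, crc=0):
    """Bit-at-a-time CRC-32C; 0x82F63B78 is the reflected form of the
    generator 0x11EDC6F41."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def record_checksum(head, body):
    """Checksum the 16-octet record head, the body plus its zero
    padding, and the footer's reserved field; the checksum field itself
    (the final 4 octets of the record) is excluded."""
    padding = b"\x00" * ((-len(body)) % 8)  # pad body to 8-octet boundary
    reserved = b"\x00" * 4                  # footer reserved field
    return crc32c(head + body + padding + reserved)
```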

The following sub-sections specify the record body format for each of
the record types.

END
----

An END record marks the end of the image.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+

The END record contains no fields; its `body_length` is 0.

PAGE_DATA
---------

The bulk of an image consists of many PAGE_DATA records containing the
memory contents.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | count (C)             | (reserved)              |
    +-----------------------+-------------------------+
    | pfn[0]                                          |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | pfn[C-1]                                        |
    +-------------------------------------------------+
    | page_data[0]...                                 |
    ...
    +-------------------------------------------------+
    | page_data[N-1]...                               |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
count       Number of pages described in this record.

pfn         An array of count PFNs and their types.

            Bit 63-60: XEN\_DOMCTL\_PFINFO\_* type.

            Bit 59-52: Reserved.

            Bit 51-0: PFN.

page_data   page_size octets of uncompressed page contents for each page
            marked as present in the pfn array.
--------------------------------------------------------------------

\clearpage

Table: XEN\_DOMCTL\_PFINFO\_* Page Types.

--------------------------------------------------------------------
PFINFO type    Value      Description
-------------  ---------  ------------------------------------------
NOTAB          0x0        Normal page.

L1TAB          0x1        L1 page table page.

L2TAB          0x2        L2 page table page.

L3TAB          0x3        L3 page table page.

L4TAB          0x4        L4 page table page.

               0x5-0x8    Reserved.

L1TAB_PIN      0x9        L1 page table page (pinned).

L2TAB_PIN      0xA        L2 page table page (pinned).

L3TAB_PIN      0xB        L3 page table page (pinned).

L4TAB_PIN      0xC        L4 page table page (pinned).

BROKEN         0xD        Broken page.

XALLOC         0xE        Allocate only.

XTAB           0xF        Invalid page.
--------------------------------------------------------------------

PFNs with type `BROKEN`, `XALLOC`, or `XTAB` do not have any
corresponding `page_data`.

The saver uses the `XTAB` type for PFNs that become invalid in the
guest's P2M table during a live migration[^2].

[^2]: In the legacy format, this is the list of unmapped PFNs in the
tail.
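
Decoding a packed `pfn` entry can be sketched as follows (names are
illustrative; the numeric values come from the table above):

```python
# Page types that carry no page_data (from the PFINFO table above).
PFINFO_BROKEN, PFINFO_XALLOC, PFINFO_XTAB = 0xD, 0xE, 0xF
NO_PAGE_DATA = {PFINFO_BROKEN, PFINFO_XALLOC, PFINFO_XTAB}

def decode_pfn_entry(entry):
    """Split a 64-bit pfn entry into (type, pfn, has_page_data)."""
    if entry & (0xFF << 52):          # bits 59-52 are reserved
        raise ValueError("reserved bits set in pfn entry")
    ptype = entry >> 60               # bits 63-60: XEN_DOMCTL_PFINFO_* type
    pfn = entry & ((1 << 52) - 1)     # bits 51-0: the PFN
    return ptype, pfn, ptype not in NO_PAGE_DATA
```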

\clearpage

VCPU_COUNT
----------

The VCPU_COUNT record includes the maximum number of VCPUs.  This will
be followed by a VCPU_CONTEXT record for each online VCPU.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | max_vcpus             | (reserved)              |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field        Description
-----------  -------------------------------------------------------
max_vcpus    Maximum number of VCPUs.
--------------------------------------------------------------------

VCPU_CONTEXT
------------

The context for a single VCPU, as accessed by the
XEN_DOMCTL_getvcpucontext and XEN_DOMCTL_setvcpucontext hypercall
sub-ops.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

\clearpage

VCPU_CONTEXT_X1
---------------

Additional context for a single VCPU, as accessed by the
XEN_DOMCTL_get_ext_vcpucontext and XEN_DOMCTL_set_ext_vcpucontext
hypercall sub-ops.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

VCPU_CONTEXT_X2
---------------

Additional context for a single VCPU, as accessed by the
XEN_DOMCTL_getvcpuextstate and XEN_DOMCTL_setvcpuextstate hypercall
sub-ops.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

\clearpage

X86_PV_INFO
-----------

> [ This record replaces part of the extended-info chunk, and the
> p2m frame list ]

     0     1     2     3     4     5     6     7 octet
    +-----+-----+-----+-----+-------------------------+
    | w   | ptl | o   | (r) | p2m_pages (P)           |
    +-----+-----+-----+-----+-------------------------+
    | p2m_pfn[0]                                      |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | p2m_pfn[P-1]                                    |
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
guest_width (w)  Guest width in octets (either 4 or 8).

pt_levels (ptl)  Number of page table levels (either 3 or 4).

options (o)      Bit 0: 0 = no VMASST_pae_extended_cr3,
                 1 = VMASST_pae_extended_cr3.

                 Bit 1-7: Reserved.

p2m_pages        Number of pages in the guest's P2M table.

p2m_pfn          Array of PFNs containing the guest's P2M table.
--------------------------------------------------------------------

\clearpage

Layout
======

The set of valid records depends on the guest architecture and type.

x86 PV Guest
------------

An x86 PV guest image will have in this order:

1. Image header
2. Domain header
3. X86_PV_INFO record
4. Many PAGE_DATA records
5. VCPU_COUNT record
6. VCPU context records for each online VCPU
    a. VCPU_CONTEXT record
    b. VCPU_CONTEXT_X1 record
    c. VCPU_CONTEXT_X2 record
7. END record
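
A hypothetical reader loop for the record stream above (checksum
validation omitted for brevity; names are illustrative):

```python
import struct

REC_END = 0x00000000

def read_records(f, big_endian=False):
    """Yield (type, body) for each record, up to and including END."""
    fmt = (">" if big_endian else "<") + "IIH6x"
    while True:
        rtype, body_length, options = struct.unpack(fmt, f.read(16))
        body = f.read(body_length)
        f.read((-body_length) % 8)   # body padding to an 8-octet boundary
        f.read(8)                    # footer: reserved + checksum
        yield rtype, body
        if rtype == REC_END:
            return
```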


Legacy Images (x86 only)
========================

Restoring legacy images from older tools shall be handled by
translating the legacy format image into this new format.

It shall not be possible to save in the legacy format.

There are two different legacy images depending on whether they were
generated by a 32-bit or a 64-bit toolstack. These shall be
distinguished by inspecting octets 4-7 in the image.  If these are
zero then it is a 64-bit image.

Toolstack  Field                            Value
---------  -----                            -----
64-bit     Bits 32-63 of the p2m_size field 0 (since p2m_size < 2^32^)
32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
32-bit     Chunk type (HVM)                 < 0
32-bit     Page count (HVM)                 > 0

Table: Possible values for octet 4-7 in legacy images

This assumes the presence of the extended-info chunk which was
introduced in Xen 3.0.
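
The detection heuristic above might be sketched as follows (assuming a
little-endian legacy image; the function name is illustrative):

```python
import struct

def legacy_image_is_64bit(buf):
    """Octets 4-7 hold the upper half of the 64-bit p2m_size in a
    64-bit legacy image (always zero, since p2m_size < 2^32), but an
    extended-info chunk ID, chunk type, or page count (never zero) in
    a 32-bit one."""
    upper, = struct.unpack_from("<I", buf, 4)
    return upper == 0
```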


Future Extensions
=================

All changes to the image or domain headers require the image version
to be increased.

The format may be extended by adding additional record types.

Extending an existing record type must be done by adding a new record
type.  This allows old images with the old record to still be
restored.

The image header may only be extended by _appending_ additional
fields.  In particular, the `marker`, `id` and `version` fields must
never change size or location.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 15:58:42 2014
Message-ID: <530F608E.9040407@citrix.com>
Date: Thu, 27 Feb 2014 15:58:06 +0000
From: David Vrabel <david.vrabel@citrix.com>
To: "Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] Domain Save Image Format proposal (draft C)

Here is a draft of a proposal for a new domain save image format.  It
does not currently cover all use cases (see the full list in the doc).

http://xenbits.xen.org/people/dvrabel/domain-save-format-C.pdf

Since I believe this is now complete enough for x86 PV guests, the next
step will be prototype implementation for these guests.

Introduction
============

Revision History
----------------

--------------------------------------------------------------------
Version  Date         Changes
-------  -----------  ----------------------------------------------
Draft A  6 Feb 2014   Initial draft.

Draft B  10 Feb 2014  Corrected image header field widths.

                      Minor updates and clarifications.

Draft C  27 Feb 2014  List feature excluded from this draft.

                      Clarify which image versions a restore must
                      support.

                      x86 and ARM are always little-endian.

                      Domain header: combine arch and type fields and
                      add Xen major and minor version.

                      Move checksum to end of record and include the
                      header.

                      Remove P2M record.

                      Add some reserved bits to pfn fields in the
                      PAGE_DATA record to allow for future expansion.

                      List page types and note that XTAB can be used
                      for unmapped pages at the end of a live
                      migrations.

                      Rename VCPU_INFO record to VCPU_COUNT.

                      Add VCPU_CONTEXT_X1, and VCPU_CONTEXT_X2 records
                      for the various extended VCPU context.

                      Add array of P2M frame PFNs to X86_PV_INFO
                      record.
--------------------------------------------------------------------

Purpose
-------

The _domain save image_ is the context of a running domain used for
snapshots of a domain or for transferring domains between hosts during
migration.

There are a number of problems with the format of the domain save
image used in Xen 4.4 and earlier (the _legacy format_).

* Dependant on toolstack word size.  A number of fields within the
  image are native types such as `unsigned long` which have different
  sizes between 32-bit and 64-bit toolstacks.  This prevents domains
  from being migrated between hosts running 32-bit and 64-bit
  toolstacks.

* There is no header identifying the image.

* The image has no version information.

A new format that addresses the above is required.

ARM does not yet have have a domain save image format specified and
the format described in this specification should be suitable.

Not Yet Included
----------------

The following features are not yet fully specified and will be
included in a future draft.

* HVM guests

* Remus

* Page data compression.

* ARM


Overview
========

The image format consists of two main sections:

* _Headers_
* _Records_

Headers
-------

There are two headers: the _image header_, and the _domain header_.
The image header describes the format of the image (version etc.).
The _domain header_ contains general information about the domain
(architecture, type etc.).

Records
-------

The main part of the format is a sequence of different _records_.
Each record type contains information about the domain context.  At a
minimum there is a END record marking the end of the records section.


Fields
------

All the fields within the headers and records have a fixed width.

Fields are always aligned to their size.

Padding and reserved fields are set to zero on save and must be
ignored during restore.

Integer (numeric) fields in the image header are always in big-endian
byte order.

Integer fields in the domain header and in the records are in the
endianess described in the image header (which will typically be the
native ordering).

Headers
=======

Image Header
------------

The image header identifies an image as a Xen domain save image.  It
includes the version of this specification that the image complies
with.

Tools supporting version _V_ of the specification shall always save
images using version _V_.  Tools shall support restoring from version
_V_.  If the previous Xen release produced version _V_ - 1 images,
tools shall supported restoring from these.  Tools may additionally
support restoring from earlier versions.

The marker field can be used to distinguish between legacy images and
those corresponding to this specification.  Legacy images will have at
one or more zero bits within the first 8 octets of the image.

Fields within the image header are always in _big-endian_ byte order,
regardless of the setting of the endianness bit.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+
    | marker                                          |
    +-----------------------+-------------------------+
    | id                    | version                 |
    +-----------+-----------+-------------------------+
    | options   | (reserved)                          |
    +-----------+-------------------------------------+


--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
marker      0xFFFFFFFFFFFFFFFF.

id          0x58454E46 ("XENF" in ASCII).

version     0x00000001.  The version of this specification.

options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.

            bit 1-15: Reserved.
--------------------------------------------------------------------

The endianness shall be 0 (little-endian) for images generated on an
i386, x86_64, or arm host.


Domain Header
-------------

The domain header includes general properties of the domain.

     0      1     2     3     4     5     6     7 octet
    +-----------------------+-----------+-------------+
    | type                  | page_shift| (reserved)  |
    +-----------------------+-----------+-------------+
    | xen_major             | xen_minor               |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
type        0x0000: Reserved.

            0x0001: x86 PV.

            0x0002: x86 HVM.

            0x0003: x86 PVH.

            0x0004: ARM.

            0x0005 - 0xFFFFFFFF: Reserved.

page_shift  Size of a guest page as a power of two.

            i.e., page size = 2^page_shift^.

xen_major   The Xen major version when this image was saved.

xen_minor   The Xen minor version when this image was saved.
--------------------------------------------------------------------

\clearpage

Records
=======

A record has a record header, type specific data and a trailing
footer.  If `body_length` is not a multiple of 8, the body is padded
with zeroes to align the record footer on an 8 octet boundary.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | type                  | body_length             |
    +-----------+-----------+-------------------------+
    | options   | (reserved)                          |
    +-----------+-------------------------------------+
    ...
    Record body of length body_length octets followed by
    0 to 7 octets of padding.
    ...
    +-----------------------+-------------------------+
    | (reserved)            | checksum                |
    +-----------------------+-------------------------+

--------------------------------------------------------------------
Field        Description
-----------  -------------------------------------------------------
type         0x00000000: END

             0x00000001: PAGE_DATA

             0x00000002: VCPU_COUNT

             0x00000003: VCPU_CONTEXT

             0x00000004: VCPU_CONTEXT_X1

             0x00000005: VCPU_CONTEXT_X2

             0x00000006: X86_PV_INFO

             0x00000007: X86_PV_P2M_FRAMES

             0x00000008 - 0xEFFFFFFF: Reserved for future _mandatory_
             records.

             0x80000000 - 0xFFFFFFFF: Reserved for future _optional_
             records.

body_length  Length in octets of the record body.

options      Bit 0: 0 - checksum invalid, 1 = checksum valid.

             Bit 1-15: Reserved.

checksum     CRC-32C checksum of the record head, body (including any
             trailing padding) and the footer (except for the checksum
             field itself), or 0x00000000 if the checksum field is
             invalid.
--------------------------------------------------------------------

Records may be _mandatory_ or _optional_.  Optional records have bit
31 set in their type.  Restoring an image that has unrecognized or
unsupported mandatory record must fail.  The contents of optional
records may be ignored during a restore (but their checksums should
still be validated).

The CRC-32C generator is 0x11EDC6F41[^1].

[^1]: See section 12.1 of [RFC 3720][RFC3720], Internet Small Computer
Systems Interface (iSCSI).

[RFC3720]: http://tools.ietf.org/html/rfc3720

The following sub-sections specify the record body format for each of
the record types.

END
----

A end record marks the end of the image.

     0     1     2     3     4     5     6     7 octet
    +-------------------------------------------------+

The end record contains no fields; its body_length is 0.

PAGE_DATA
---------

The bulk of an image consists of many PAGE_DATA records containing the
memory contents.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | count (C)             | (reserved)              |
    +-----------------------+-------------------------+
    | pfn[0]                                          |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | pfn[C-1]                                        |
    +-------------------------------------------------+
    | page_data[0]...                                 |
    ...
    +-------------------------------------------------+
    | page_data[N-1]...                               |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field       Description
----------- --------------------------------------------------------
count       Number of pages described in this record.

pfn         An array of count PFNs and their types.

            Bit 63-60: XEN\_DOMCTL\_PFINFO\_* type.

            Bit 59-52: Reserved.

            Bit 51-0: PFN.

page_data   page_size octets of uncompressed page contents for each page
            set as present in the pfn array.
--------------------------------------------------------------------

\clearpage

Table: XEN\_DOMCTL\_PFINFO\_* Page Types.

--------------------------------------------------------------------
PFINFO type    Value      Description
-------------  ---------  ------------------------------------------
NOTAB          0x0        Normal page.

L1TAB          0x1        L1 page table page.

L2TAB          0x2        L2 page table page.

L3TAB          0x3        L3 page table page.

L4TAB          0x4        L4 page table page.

               0x5-0x8    Reserved.

L1TAB_PIN      0x9        L1 page table page (pinned).

L2TAB_PIN      0xA        L2 page table page (pinned).

L3TAB_PIN      0xB        L3 page table page (pinned).

L4TAB_PIN      0xC        L4 page table page (pinned).

BROKEN         0xD        Broken page.

XALLOC         0xE        Allocate only.

XTAB           0xF        Invalid page.
--------------------------------------------------------------------

PFNs with type `BROKEN`, `XALLOC`, or `XTAB` do not have any
corresponding `page_data`.

The saver uses the `XTAB` type for PFNs that become invalid in the
guest's P2M table during a live migration[^2].

[^2]: In the legacy format, this is the list of unmapped PFNs in the
tail.

\clearpage

VCPU_COUNT
---------

The VCPU_COUNT record includes the maximum number of VCPUs.  This will
be followed a VCPU_CONTEXT record for each online VCPU.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+------------------------+
    | max_vcpus             | (reserved)             |
    +-----------------------+------------------------+

--------------------------------------------------------------------
Field        Description
-----------  -------------------------------------------------------
max_vcpus    Maximum number of VCPUs.
--------------------------------------------------------------------

VCPU_CONTEXT
------------

The context for a single VCPU, as accessed by the
XEN_DOMCTL_getvcpucontext and XEN_DOMCTL_setvcpucontext hypercall
sub-ops.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

\clearpage

VCPU_CONTEXT_X1
---------------

Additional context for a single VCPU, as accessed by the
XEN_DOMCTL_get_ext_vcpucontext and XEN_DOMCTL_set_ext_vcpucontext
hypercall sub-ops.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

VCPU_CONTEXT_X2
---------------

Additional context for a single VCPU, as accessed by the
XEN_DOMCTL_getvcpuextstate and XEN_DOMCTL_setvcpuextstate hypercall
sub-ops.

     0     1     2     3     4     5     6     7 octet
    +-----------------------+-------------------------+
    | vcpu_id               | (reserved)              |
    +-----------------------+-------------------------+
    | vcpu_ctx...                                     |
    ...
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
vcpu_id          The VCPU ID.

vcpu_ctx         Context for this VCPU.
--------------------------------------------------------------------

\clearpage

X86_PV_INFO
-----------

> [ This record replaces part of the extended-info chunk, and the
> p2m frame list ]

     0     1     2     3     4     5     6     7 octet
    +-----+-----+-----+-----+-------------------------+
    | w   | ptl | o   | (r) | p2m_pages (P)           |
    +-----+-----+-----+-----+-------------------------+
    | p2m_pfn[0]                                      |
    +-------------------------------------------------+
    ...
    +-------------------------------------------------+
    | p2m_pfn[P-1]                                    |
    +-------------------------------------------------+

--------------------------------------------------------------------
Field            Description
-----------      ---------------------------------------------------
guest_width (w)  Guest width in octets (either 4 or 8).

pt_levels (ptl)  Number of page table levels (either 3 or 4).

options (o)      Bit 0: 0 - no VMASST_pae_extended_cr3,
                 1 - VMASST_pae_extended_cr3.

                 Bit 1-7: Reserved.

p2m_pages (P)    Number of pages in the guest's P2M table.

p2m_pfn          Array of PFNs containing the guest's P2M table.
--------------------------------------------------------------------
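The fixed-size leading part of this record can be sketched as a C struct, assuming a packed little-endian layout as drawn in the diagram (the struct name is illustrative, not taken from a Xen header):

```c
#include <stdint.h>

/* Sketch of the fixed part of the X86_PV_INFO record.  The variable
 * length p2m_pfn[P] array of uint64_t PFNs follows it directly. */
struct x86_pv_info_hdr {
    uint8_t  guest_width;   /* w: guest width in octets, 4 or 8   */
    uint8_t  pt_levels;     /* ptl: page table levels, 3 or 4     */
    uint8_t  options;       /* o: bit 0 = VMASST_pae_extended_cr3 */
    uint8_t  reserved;      /* (r)                                */
    uint32_t p2m_pages;     /* P: pages in the guest's P2M table  */
    /* uint64_t p2m_pfn[P] follows */
};
```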

\clearpage

Layout
======

The set of valid records depends on the guest architecture and type.

x86 PV Guest
------------

An x86 PV guest image will contain, in this order:

1. Image header
2. Domain header
3. X86_PV_INFO record
4. Many PAGE_DATA records
5. VCPU_COUNT record
6. VCPU context records for each online VCPU
    a. VCPU_CONTEXT record
    b. VCPU_CONTEXT_X1 record
    c. VCPU_CONTEXT_X2 record
7. END record


Legacy Images (x86 only)
========================

Restoring legacy images from older tools shall be handled by
translating the legacy format image into this new format.

It shall not be possible to save in the legacy format.

There are two different legacy images depending on whether they were
generated by a 32-bit or a 64-bit toolstack. These shall be
distinguished by inspecting octets 4-7 in the image.  If these are
zero then it is a 64-bit image.

Toolstack  Field                            Value
---------  -----                            -----
64-bit     Bit 31-63 of the p2m_size field  0 (since p2m_size < 2^32^)
32-bit     extended-info chunk ID (PV)      0xFFFFFFFF
32-bit     Chunk type (HVM)                 < 0
32-bit     Page count (HVM)                 > 0

Table: Possible values for octet 4-7 in legacy images

This assumes the presence of the extended-info chunk which was
introduced in Xen 3.0.
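The 64-bit case from the table above reduces to a zero check on octets 4-7; a minimal sketch (the function name is illustrative, and the buffer is assumed to hold at least the first 8 octets of the image):

```c
#include <stdint.h>
#include <string.h>

/* Sketch: a legacy image came from a 64-bit toolstack iff octets 4-7
 * are zero (the high half of p2m_size, which is always < 2^32).
 * Zero-ness is endian independent, so a plain memcpy suffices. */
static int legacy_is_64bit(const unsigned char *img)
{
    uint32_t high;                      /* octets 4-7 */
    memcpy(&high, img + 4, sizeof(high));
    return high == 0;
}
```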


Future Extensions
=================

All changes to the image or domain headers require the image version
to be increased.

The format may be extended by adding additional record types.

Extending an existing record type must be done by adding a new record
type.  This allows old images with the old record to still be
restored.

The image header may only be extended by _appending_ additional
fields.  In particular, the `marker`, `id` and `version` fields must
never change size or location.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:02:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3P7-0000Yb-Qq; Thu, 27 Feb 2014 16:01:45 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJ3P7-0000YT-0w
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 16:01:45 +0000
Received: from [193.109.254.147:24806] by server-10.bemta-14.messagelabs.com
	id 3A/64-10711-8616F035; Thu, 27 Feb 2014 16:01:44 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393516902!7287031!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13110 invoked from network); 27 Feb 2014 16:01:43 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 16:01:43 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104689939"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 16:01:41 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 11:01:41 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJ3P3-0004WF-9z;
	Thu, 27 Feb 2014 16:01:41 +0000
Date: Thu, 27 Feb 2014 16:01:41 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Zoltan Kiss <zoltan.kiss@citrix.com>
Message-ID: <20140227160141.GJ16241@zion.uk.xensource.com>
References: <1390253069-25507-1-git-send-email-zoltan.kiss@citrix.com>
	<1390253069-25507-5-git-send-email-zoltan.kiss@citrix.com>
	<1392745532.23084.65.camel@kazak.uk.xensource.com>
	<53093051.9040907@citrix.com> <530B4E05.4020900@schaman.hu>
	<530B606F.2070902@citrix.com>
	<20140227124327.GD16241@zion.uk.xensource.com>
	<530F5E9B.5020404@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530F5E9B.5020404@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: jonathan.davies@citrix.com, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH net-next v5 4/9] xen-netback: Change RX path
 for mapped SKB fragments
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 03:49:47PM +0000, Zoltan Kiss wrote:
> On 27/02/14 12:43, Wei Liu wrote:
> >On Mon, Feb 24, 2014 at 03:08:31PM +0000, Zoltan Kiss wrote:
> >>On 24/02/14 13:49, Zoltan Kiss wrote:
> >>>On 22/02/14 23:18, Zoltan Kiss wrote:
> >>>>On 18/02/14 17:45, Ian Campbell wrote:
> >>>>>On Mon, 2014-01-20 at 21:24 +0000, Zoltan Kiss wrote:
> >>>>>
> >>>>>Re the Subject: change how? Perhaps "handle foreign mapped pages on the
> >>>>>guest RX path" would be clearer.
> >>>>Ok, I'll do that.
> >>>>
> >>>>>
> >>>>>>RX path need to know if the SKB fragments are stored on
> >>>>>>pages from another
> >>>>>>domain.
> >>>>>Does this not need to be done either before the mapping change
> >>>>>or at the
> >>>>>same time? -- otherwise you have a window of a couple of commits where
> >>>>>things are broken, breaking bisectability.
> >>>>I can move this to the beginning, to keep bisectability. I've
> >>>>put it here originally because none of these makes sense without
> >>>>the previous patches.
> >>>Well, I gave it a close look: to move this to the beginning as a
> >>>separate patch I would need to put move a lot of definitions from
> >>>the first patch to here (ubuf_to_vif helper,
> >>>xenvif_zerocopy_callback etc.). That would be the best from bisect
> >>>point of view, but from patch review point of view even worse than
> >>>now. So the only option I see is to merge this with the first 2
> >>>patches, so it will be even bigger.
> >>Actually I was stupid, we can move this patch earlier and introduce
> >>stubs for those 2 functions. But for the another two patches (#6 and
> >>#8) it's still true that we can't move them before, only merge them
> >>into the main, as they heavily rely on the main patch. #6 is
> >>necessary for Windows frontends, as they are keen to send too many
> >>slots. #8 is quite a rare case, happens only if a guest wedge or
> >>malicious, and sits on the packet.
> >>So my question is still up: do you prefer perfect bisectability or
> >>more segmented patches which are not that pain to review?
> >>
> >
> >What's the diff stat if you merge those patches?
> >
> 
>  drivers/net/xen-netback/common.h    |   33 ++-
>  drivers/net/xen-netback/interface.c |   67 +++++-
>  drivers/net/xen-netback/netback.c   |  424
> ++++++++++++++++++++++-------------
>  3 files changed, 362 insertions(+), 162 deletions(-)

Not terribly bad IMHO -- if you look at netback's changelog, I've done
worse. :-P

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:03:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3QV-0000e5-AE; Thu, 27 Feb 2014 16:03:11 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <christoffer.dall@linaro.org>) id 1WJ3QM-0000ds-Jv
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:03:09 +0000
Received: from [85.158.139.211:60380] by server-6.bemta-5.messagelabs.com id
	06/C1-14342-5B16F035; Thu, 27 Feb 2014 16:03:01 +0000
X-Env-Sender: christoffer.dall@linaro.org
X-Msg-Ref: server-6.tower-206.messagelabs.com!1393516979!6685042!1
X-Originating-IP: [209.85.220.48]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 9828 invoked from network); 27 Feb 2014 16:03:00 -0000
Received: from mail-pa0-f48.google.com (HELO mail-pa0-f48.google.com)
	(209.85.220.48)
	by server-6.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 16:03:00 -0000
Received: by mail-pa0-f48.google.com with SMTP id kx10so2668010pab.35
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 08:02:58 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:date:from:to:cc:subject:message-id:references
	:mime-version:content-type:content-disposition:in-reply-to
	:user-agent;
	bh=vz0Vc/5JLy+c8dN2X0FBL5vXaSw4sgT/XY9v7ET+L4A=;
	b=aK5RLX1n29KD0EE6mw4WqbukKXNEL8pBHGCdBAymvio3E80j7OdqROZqQ6rpZA64P4
	OZYHkr92x+0UN5RA4TGG3/YxROFMkr140QhTX/x9BEGC7IcgjsLTWdFn2AdhkPBZPawP
	+eyHfvGufHRwgIX5N67QCrsCEDYYNNu8p+0RoRprO/DdOpR7amDyBVlNfiuORubEyh9a
	+uNTvsbV5VJdeFHjanjwMCl9NNKFLBNy9xSc3HQd4RFun+OvVmbuPoyNLv4UL6f5Mird
	XY21POXf7xWK0Kvx/+5aGKcr6we8HXwSx2779Pw3wsAIRkPWyOQ8vaooAYLWWS71hOGX
	qrVQ==
X-Gm-Message-State: ALoCoQmaaGJUekAHaKEYlYQTr7LDq14NbJ9USdvQAeUx6NjYz5FjTmCJT2ExaPfaDOqrLKDK15Va
X-Received: by 10.69.21.106 with SMTP id hj10mr14045432pbd.87.1393516978673;
	Thu, 27 Feb 2014 08:02:58 -0800 (PST)
Received: from localhost (c-67-169-181-221.hsd1.ca.comcast.net.
	[67.169.181.221])
	by mx.google.com with ESMTPSA id g6sm34108940pat.2.2014.02.27.08.02.55
	for <multiple recipients>
	(version=TLSv1.2 cipher=RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 08:02:57 -0800 (PST)
Date: Thu, 27 Feb 2014 08:02:57 -0800
From: Christoffer Dall <christoffer.dall@linaro.org>
To: Christopher Covington <cov@codeaurora.org>
Message-ID: <20140227160257.GA22935@lvm>
References: <20140226183454.GA14639@cbox> <530E402C.9040502@codeaurora.org>
	<20140226195111.GB16149@cbox> <530F39C3.5060808@codeaurora.org>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530F39C3.5060808@codeaurora.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>, cross-distro@lists.linaro.org,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>, xen-devel@lists.xen.org,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 08:12:35AM -0500, Christopher Covington wrote:
> Hi Christoffer,
> 
> On 02/26/2014 02:51 PM, Christoffer Dall wrote:
> > On Wed, Feb 26, 2014 at 02:27:40PM -0500, Christopher Covington wrote:
> 

[...]

> >>>
> >>> The virtual hardware platform must provide a number of mandatory
> >>> peripherals:
> >>>
> >>>   Serial console:  The platform should provide a console,
> >>>   based on an emulated pl011, a virtio-console, or a Xen PV console.
> >>>
> >>>   An ARM Generic Interrupt Controller v2 (GICv2) [3] or newer.  GICv2
> >>>   limits the the number of virtual CPUs to 8 cores, newer GIC versions
> >>>   removes this limitation.
> >>>
> >>>   The ARM virtual timer and counter should be available to the VM as
> >>>   per the ARM Generic Timers specification in the ARM ARM [1].
> >>>
> >>>   A hotpluggable bus to support hotplug of at least block and network
> >>>   devices.  Suitable buses include a virtual PCIe bus and the Xen PV
> >>>   bus.
> >>
> >> Is VirtIO hotplug capable? Over PCI or MMIO transports or both?
> > 
> > VirtIO devices attached on a PCIe bus are hotpluggable, the emulated
> > PCIe bus itself would not have anything to do with virtio, except that
> > virtio devices can hang off of it.  AFAIU.
> 
> So network/block device only as memory mapped peripherals (like SMSC or PL
> SD/MMC) or over VirtIO-MMIO won't meet the specification? Is PCI/VirtIO-PCI on
> ARM production ready? What's the motivation for requiring hotplug?
> 

Platform devices that don't sit on any 'real bus' are generally not
hotpluggable.

VM management systems such as OpenStack make heavy use of hotplug to add
storage to your VMs, for example.

This spec does not prohibit devices over mmio, or over virtio-mmio, in
fact it encourages guest kernels to include support for such.  But it
mandates that there is some hotpluggable bus, so a very common VM use
case is supported for ARM VMs.

PCI for ARM is not ready yet, but people are working on it.  That should
not have any bearing on what the right decision for this spec is though
- it's all early stage at this point.

-Christoffer

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:20:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3gQ-0001AL-DT; Thu, 27 Feb 2014 16:19:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gbtju85@gmail.com>) id 1WJ3gO-0001AG-JW
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:19:36 +0000
Received: from [85.158.139.211:47378] by server-5.bemta-5.messagelabs.com id
	8F/17-32749-7956F035; Thu, 27 Feb 2014 16:19:35 +0000
X-Env-Sender: gbtju85@gmail.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393517973!6667709!1
X-Originating-IP: [209.85.215.174]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,RCVD_BY_IP,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10724 invoked from network); 27 Feb 2014 16:19:33 -0000
Received: from mail-ea0-f174.google.com (HELO mail-ea0-f174.google.com)
	(209.85.215.174)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 16:19:33 -0000
Received: by mail-ea0-f174.google.com with SMTP id h14so1936516eaj.19
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 08:19:33 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=zNxFcQVKY3ZyGeT/TkRfnCNQw8B8RXXwHjzgrMRRLxw=;
	b=lcUvJQvQE4hVK2W3VyYBNkn3H2szlhxGcqCIb0j9k61g9QmcahKcQbcXWUmTbhBDNy
	hq5R2w1Np+biaL/h9RK1LkwDBI/yVWY2Ur8gdgbFrbU+PcGS9qbKd4QE3kQu+2pf1gdl
	bL6E60utHA4djw4eaupUlwRDZwwYP1aOc5LPDc/i2r30q7sMBuUpovdfkjfZDYlj7Tj+
	BwRP1rxcmKX0hjWeKUyPzpuTL2DB8oXBRtqUPQtEfE8Ialw72N3JjL2qZVKCQSyKZq4w
	8qeGTFUN8eIaLVJwQ4bB26nAmAff3/SfKX4upz0ntHsPbatUKGmMKOCylREcvxzFUkuT
	HSiQ==
MIME-Version: 1.0
X-Received: by 10.204.108.68 with SMTP id e4mr18394bkp.94.1393517973674; Thu,
	27 Feb 2014 08:19:33 -0800 (PST)
Received: by 10.205.34.135 with HTTP; Thu, 27 Feb 2014 08:19:33 -0800 (PST)
In-Reply-To: <530F5C86.4060902@citrix.com>
References: <CAEQjb-RPBHD_n51ixHPwxpoSgzF6Su3VvYAfifuqyWgHGyZddg@mail.gmail.com>
	<530F5C86.4060902@citrix.com>
Date: Fri, 28 Feb 2014 00:19:33 +0800
Message-ID: <CAEQjb-S1Q22PPvk0cd7SNXzHEt+nd8PdLupQX4FEqohj+is36Q@mail.gmail.com>
From: Bei Guan <gbtju85@gmail.com>
To: =?ISO-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] How to get the accurate physical CPU utilization in
	Dom0?
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============9069151582967046754=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============9069151582967046754==
Content-Type: multipart/alternative; boundary=089e0122ed08cf473004f365af76

--089e0122ed08cf473004f365af76
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: quoted-printable

Hi,

Thank you for all your advice. I have one more question.
How does the Xen scheduler map a vcpu to a physical CPU if the host has
multiple cores? Are there any rules in the mapping, or is it irregular?
Thank you for your time.


2014-02-27 23:40 GMT+08:00 Roger Pau Monné <roger.pau@citrix.com>:

> On 27/02/14 16:06, Bei Guan wrote:
> > Hi,
> >
> > I run a PV DomU with 1 vcpu on Xen. I pin the vcpu to a physical CPU
> > core, such as core 3. Then, I run a cpu-bound process in DomU and the
> > vcpu utilization is 100% (got it with "xentop" in Dom0).
> > However, when I use "top" in Dom0 to see the physical CPU utilization,
> > the CPU core 3 utilization is zero or less than 1%. I expected the
> > utilization of CPU core 3 to also be 100%, like the vcpu. Why can't I
> > get the accurate physical CPU utilization with "top" in Dom0?
>
> top in Dom0 will only show the CPU utilization of Dom0 itself (Xen is
> not a type 2 hypervisor, so Dom0 is no different from any other DomU in
> this respect). If you want to see the CPU utilization of all domains,
> use xl top (xentop).
>
> Roger.
>
>
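
For reference, the per-domain view Roger describes can also be captured
non-interactively; xentop's batch mode is handy for scripting (standard
xentop flags, nothing Xen-version-specific assumed beyond that):

```shell
# One non-interactive iteration of per-domain CPU/memory stats, run in Dom0.
xentop -b -i 1

# The same interactive view via the xl toolstack wrapper:
xl top
```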


-- 
Best Regards,
Bei Guan

--089e0122ed08cf473004f365af76--


--===============9069151582967046754==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============9069151582967046754==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 16:21:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:21:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3i3-0001GG-Ud; Thu, 27 Feb 2014 16:21:19 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJ3i1-0001Fw-Vx
	for Xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:21:18 +0000
Received: from [85.158.143.35:30321] by server-1.bemta-4.messagelabs.com id
	D3/A2-31661-CF56F035; Thu, 27 Feb 2014 16:21:16 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393518074!8823469!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3426 invoked from network); 27 Feb 2014 16:21:15 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 16:21:15 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="106326454"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 16:21:13 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 11:21:13 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WJ3hx-00032y-68;
	Thu, 27 Feb 2014 16:21:13 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WJ3hw-0002CR-Ue;
	Thu, 27 Feb 2014 16:21:12 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21263.26104.660301.591599@mariner.uk.xensource.com>
Date: Thu, 27 Feb 2014 16:21:12 +0000
To: David Vrabel <david.vrabel@citrix.com>
In-Reply-To: <530F608E.9040407@citrix.com>
References: <530F608E.9040407@citrix.com>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

David Vrabel writes ("Domain Save Image Format proposal (draft C)"):
> Here is a draft of a proposal for a new domain save image format.  It
> does not currently cover all use cases (see the full list in the doc).

Good work.

>              0x00000008 - 0xEFFFFFFF: Reserved for future _mandatory_
>              records.

YM "- 0x7fffffff".  HTH.

> checksum     CRC-32C checksum of the record head, body (including any
>              trailing padding) and the footer (except for the checksum
>              field itself), or 0x00000000 if the checksum field is
>              invalid.

I still question whether it is really sensible to have a set of
per-record checksums which do not cover all of the file (eg, excluding
the image header, and the file structure).
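
For concreteness, CRC-32C (Castagnoli) uses the reflected polynomial
0x82F63B78 and can be computed bit by bit; a minimal, unoptimized sketch
(not the proposal's implementation, which would use tables or the SSE4.2
instruction):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli, reflected poly 0x82F63B78)."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Shift one bit; apply the polynomial when a bit falls off.
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Standard check value: crc32c(b"123456789") == 0xE3069283
```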

> Table: XEN\_DOMCTL\_PFINFO\_* Page Types.
...
>                0x5-0x8    Reserved.

You should explicitly say that receiving a page type that is not
understood is fatal.

Thanks,
Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:22:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3it-0001MZ-Ca; Thu, 27 Feb 2014 16:22:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ3il-0001Lv-CJ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:22:05 +0000
Received: from [85.158.143.35:56826] by server-2.bemta-4.messagelabs.com id
	94/8D-04779-A266F035; Thu, 27 Feb 2014 16:22:02 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393518121!8810525!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.7 required=7.0 tests=BODY_RANDOM_LONG, RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14597 invoked from network); 27 Feb 2014 16:22:02 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 16:22:02 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ3ih-000K70-Ge; Thu, 27 Feb 2014 16:21:59 +0000
Date: Thu, 27 Feb 2014 17:21:59 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <20140227162159.GG53925@deinos.phlegethon.org>
References: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
	<1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] [RFC] xen/console: Provide timestamps
 as an offset since boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 14:03 +0000 on 27 Feb (1393506187), Andrew Cooper wrote:
> Some latent bugs are emphasised by these changes.  There are steps in time
> when the TSC scale is calculated, and when the platform time is initialised ...
> 
>     (XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
>     (XEN) [   27.553075] Detected 2793.232 MHz processor.
>     (XEN) [   27.558277] Initing memory sharing.
>     (XEN) [   27.562502] Cannot set CPU feature mask on CPU#0
>     (XEN) [   27.567852] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
>     (XEN) [   27.577687] Intel machine check reporting enabled
>     (XEN) [   27.583147] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
>     (XEN) [   27.591407] PCI: MCFG area at e0000000 reserved in E820
>     (XEN) [   27.597369] PCI: Using MCFG for segment 0000 bus 00-3f
>     (XEN) [   27.603238] I/O virtualisation disabled
>     (XEN) [   27.608093] ENABLING IO-APIC IRQs
>     (XEN) [   27.612136]  -> Using new ACK method
>     (XEN) [   27.616601] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
>     (XEN) [    0.153431] Platform timer is 14.318MHz HPET

The patch below fixes the first step for me.  I haven't been able to
understand the exact mechanism of the second one yet.  With this patch
applied, of course, the second step is not visible -- which doesn't
mean it's gone away.

Cheers,

Tim.

commit d986516ce297bbcf3181225105dbc67edfa7c37e
Author: Tim Deegan <tim@xen.org>
Date:   Thu Feb 27 16:17:02 2014 +0000

    x86/time: Always count s_time from Xen boot.
    
    In the early-boot clock, before the calibration routines kick in,
    count time from Xen boot rather than from when the BSP's TSC was 0.
    
    Signed-off-by: Tim Deegan <tim@xen.org>

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 82492c1..2bc2b2d 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1456,6 +1456,7 @@ void __init early_time_init(void)
     u64 tmp = init_pit_and_calibrate_tsc();
 
     set_time_scale(&this_cpu(cpu_time).tsc_scale, tmp);
+    rdtscll(this_cpu(cpu_time).local_tsc_stamp);
 
     do_div(tmp, 1000);
     cpu_khz = (unsigned long)tmp;
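
For context, the tsc_scale being set here converts TSC tick deltas to
nanoseconds with a multiply-and-shift; a simplified sketch of that
arithmetic (hypothetical helper names, not Xen's actual set_time_scale):

```python
# Simplified mul/shift time scale: ns = (ticks * mul) >> SHIFT.
# Hypothetical helpers sketching the arithmetic only.
SHIFT = 32

def make_scale(tsc_hz: int) -> int:
    """Multiplier so a TSC tick delta converts to nanoseconds."""
    return (10**9 << SHIFT) // tsc_hz

def ticks_to_ns(ticks: int, mul: int) -> int:
    return (ticks * mul) >> SHIFT

mul = make_scale(2_793_232_000)          # ~2793.232 MHz, as in the log above
one_sec = ticks_to_ns(2_793_232_000, mul)  # one second's worth of ticks
```

Recording the TSC stamp at the same moment the scale is set, as the patch
does, pins the zero point of this conversion to Xen boot.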


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:26:58 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3nG-0001fu-8A; Thu, 27 Feb 2014 16:26:42 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJ3nE-0001fp-M2
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 16:26:40 +0000
Received: from [85.158.137.68:63983] by server-5.bemta-3.messagelabs.com id
	16/4C-04712-F376F035; Thu, 27 Feb 2014 16:26:39 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-31.messagelabs.com!1393518396!1848593!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1101 invoked from network); 27 Feb 2014 16:26:38 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-12.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 16:26:38 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104701513"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 16:26:36 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 11:26:35 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJ3n9-000356-Hr;
	Thu, 27 Feb 2014 16:26:35 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJ3n9-0002yC-Ds;
	Thu, 27 Feb 2014 16:26:35 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25318-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 16:26:35 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25318: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25318 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25318/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275
 build-i386                    4 xen-build                 fail REGR. vs. 25281

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-sedf      5 xen-boot                    fail pass in 25315
 test-amd64-amd64-xl-qemut-winxpsp3 3 host-install(3) broken in 25315 pass in 25318

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  c9f8e0aee507bec25104ca5535fde38efae6c6bc
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     fail    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 407 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:30:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3qt-0001rk-24; Thu, 27 Feb 2014 16:30:27 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJ3qr-0001rb-Sn
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:30:26 +0000
Received: from [85.158.143.35:50143] by server-1.bemta-4.messagelabs.com id
	82/90-31661-1286F035; Thu, 27 Feb 2014 16:30:25 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393518624!8776678!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30190 invoked from network); 27 Feb 2014 16:30:24 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 16:30:24 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Thu, 27 Feb 2014 16:30:23 +0000
Message-Id: <530F763C020000780011FF5F@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Thu, 27 Feb 2014 16:30:36 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-5-git-send-email-tim@xen.org>
In-Reply-To: <1393511233-28942-5-git-send-email-tim@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] bitmaps/bitops: Clarify tests for
 small constant size.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 15:27, Tim Deegan <tim@xen.org> wrote:
> No semantic changes, just makes the control flow a bit clearer.
> 
> I was looking at this because the (-!__builtin_constant_p(x) | x__)
> formula is too clever for Coverity, but in fact it always takes me a
> minute or two to understand it too. :)
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

Well, I'm not really happy to see this changed into more text
(even if fewer lines), because to me that's part of what makes
such macros badly readable, but ...

>  xen/include/asm-x86/bitops.h | 62 ++++++++++++++++++++------------------------
>  xen/include/xen/bitmap.h     | 30 ++++++++++++---------
>  2 files changed, 46 insertions(+), 46 deletions(-)

... at least the overall change is not growing the number of
lines.

Nevertheless a small consistency nit:

> +#define find_next_bit(addr, size, off) ({                                   \
> +    unsigned int r__;                                                       \
> +    const unsigned long *a__ = (addr);                                      \
> +    unsigned int s__ = (size);                                              \
> +    unsigned int o__ = (off);                                               \
> +    if ( __builtin_constant_p(size) && s__ == 0 )                           \

Using == 0 here, ...

> +        r__ = s__;                                                          \
> +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> +        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
> +    else if ( __builtin_constant_p(off) && !o__ )                           \

... but using ! here. I'd prefer the latter everywhere, but I'd also
be fine with you choosing the former consistently. (Same of course
again further down.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:31:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3rf-0001xH-K9; Thu, 27 Feb 2014 16:31:15 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJ3rd-0001wz-RP
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:31:14 +0000
Received: from [85.158.143.35:55299] by server-2.bemta-4.messagelabs.com id
	6C/6B-04779-1586F035; Thu, 27 Feb 2014 16:31:13 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393518671!8813149!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27797 invoked from network); 27 Feb 2014 16:31:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 16:31:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1RGV8aB014414
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 16:31:09 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1RGV7UN006599
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 27 Feb 2014 16:31:08 GMT
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1RGV7Ag018045; Thu, 27 Feb 2014 16:31:07 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 08:31:07 -0800
Message-ID: <530F68BE.2070505@oracle.com>
Date: Thu, 27 Feb 2014 11:33:02 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
	<530F51E9.1070703@oracle.com> <530F5DA0.3060408@citrix.com>
In-Reply-To: <530F5DA0.3060408@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 10:45 AM, Roger Pau Monné wrote:
>>
>> @@ -291,7 +290,10 @@ static int xen_initdom_setup_msi_irqs(struct
>> pci_dev *dev, int nvec, int type)
>>                      (pci_domain_nr(dev->bus) << 16);
>>            map_irq.devfn = dev->devfn;
>>    -        if (type == PCI_CAP_ID_MSIX) {
>> +        if (type == PCI_CAP_ID_MSI && nvec > 1) {
>> +            map_irq.type = MAP_PIRQ_TYPE_MULTI_MSI;
>> +            map_irq.entry_nr = nvec;
>>
>> Are we overloading entry_nr here with a different meaning? I thought it
>> was meant to be entry number (in MSI-X table for example), not number of
>> entries.
> In the case of MSI message groups (MAP_PIRQ_TYPE_MULTI_MSI) entry_nr is
> the number of vectors to setup, so yes, it's an overloading of entry_nr.

Then I think we should at least make a note of this in physdev.h. (Or
maybe even make entry_nr a union with nvec or some such, although that
would look rather hacky).

> ....
>
>>
>> index 42721d1..eb13326d 100644
>> --- a/include/xen/interface/physdev.h
>> +++ b/include/xen/interface/physdev.h
>> @@ -131,6 +131,7 @@ struct physdev_irq {
>>    #define MAP_PIRQ_TYPE_GSI        0x1
>>    #define MAP_PIRQ_TYPE_UNKNOWN        0x2
>>    #define MAP_PIRQ_TYPE_MSI_SEG        0x3
>> +#define MAP_PIRQ_TYPE_MULTI_MSI        0x4
>> Formatting.
> I don't get the formatting problem, it's the same formatting that the
> other MAP_PIRQ_TYPE_* use, and if the patch is applied formatting is OK.
>

That's because my client messed up whitespaces. You can look for example
at https://lkml.org/lkml/2014/2/26/352 to see the extra tab.

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:38:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3yt-0002HD-QU; Thu, 27 Feb 2014 16:38:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ3ys-0002H3-6x
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:38:42 +0000
Received: from [85.158.143.35:61202] by server-1.bemta-4.messagelabs.com id
	53/2E-31661-11A6F035; Thu, 27 Feb 2014 16:38:41 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393519120!8807351!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19079 invoked from network); 27 Feb 2014 16:38:40 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 16:38:40 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ3yp-000KO4-L1; Thu, 27 Feb 2014 16:38:39 +0000
Date: Thu, 27 Feb 2014 17:38:39 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140227163839.GH53925@deinos.phlegethon.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-5-git-send-email-tim@xen.org>
	<530F763C020000780011FF5F@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530F763C020000780011FF5F@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] bitmaps/bitops: Clarify tests for
 small constant size.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:30 +0000 on 27 Feb (1393515036), Jan Beulich wrote:
> Using == 0 here, ...
> 
> > +        r__ = s__;                                                          \
> > +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> > +        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
> > +    else if ( __builtin_constant_p(off) && !o__ )                           \
> 
> ... but using ! here. I'd prefer the latter everywhere, but I'd also
> be fine with you choosing the former consistently. (Same of course
> again further down.)

Good point.  v3 is below.

Tim.


>From dba941064825d723b79f866d16bb0f07585de320 Mon Sep 17 00:00:00 2001
From: Tim Deegan <tim@xen.org>
Date: Thu, 28 Nov 2013 15:40:48 +0000
Subject: [PATCH] bitmaps/bitops: Clarify tests for small constant size.

No semantic changes, just makes the control flow a bit clearer.

I was looking at this because the (-!__builtin_constant_p(x) | x__)
formula is too clever for Coverity, but in fact it always takes me a
minute or two to understand it too. :)

Signed-off-by: Tim Deegan <tim@xen.org>

---

v3: Consistently use '!foo' rather than 'foo == 0'
v2: fix find_next_bit macros to evaluate 'addr' exactly once.
---
 xen/include/asm-x86/bitops.h | 62 ++++++++++++++++++++------------------------
 xen/include/xen/bitmap.h     | 30 ++++++++++++---------
 2 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/xen/include/asm-x86/bitops.h b/xen/include/asm-x86/bitops.h
index ab21d92..82a08ee 100644
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -335,23 +335,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
  * @offset: The bitnumber to start searching at
  * @size: The maximum size to search
  */
-#define find_next_bit(addr, size, off) ({ \
-    unsigned int r__ = (size); \
-    unsigned int o__ = (off); \
-    switch ( -!__builtin_constant_p(size) | r__ ) \
-    { \
-    case 0: (void)(addr); break; \
-    case 1 ... BITS_PER_LONG: \
-        r__ = o__ + __scanbit(*(const unsigned long *)(addr) >> o__, r__); \
-        break; \
-    default: \
-        if ( __builtin_constant_p(off) && !o__ ) \
-            r__ = __find_first_bit(addr, r__); \
-        else \
-            r__ = __find_next_bit(addr, r__, o__); \
-        break; \
-    } \
-    r__; \
+#define find_next_bit(addr, size, off) ({                                   \
+    unsigned int r__;                                                       \
+    const unsigned long *a__ = (addr);                                      \
+    unsigned int s__ = (size);                                              \
+    unsigned int o__ = (off);                                               \
+    if ( __builtin_constant_p(size) && !s__ )                               \
+        r__ = s__;                                                          \
+    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
+        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
+    else if ( __builtin_constant_p(off) && !o__ )                           \
+        r__ = __find_first_bit(a__, s__);                                   \
+    else                                                                    \
+        r__ = __find_next_bit(a__, s__, o__);                               \
+    r__;                                                                    \
 })
 
 /**
@@ -370,23 +367,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
  * @offset: The bitnumber to start searching at
  * @size: The maximum size to search
  */
-#define find_next_zero_bit(addr, size, off) ({ \
-    unsigned int r__ = (size); \
-    unsigned int o__ = (off); \
-    switch ( -!__builtin_constant_p(size) | r__ ) \
-    { \
-    case 0: (void)(addr); break; \
-    case 1 ... BITS_PER_LONG: \
-        r__ = o__ + __scanbit(~*(const unsigned long *)(addr) >> o__, r__); \
-        break; \
-    default: \
-        if ( __builtin_constant_p(off) && !o__ ) \
-            r__ = __find_first_zero_bit(addr, r__); \
-        else \
-            r__ = __find_next_zero_bit(addr, r__, o__); \
-        break; \
-    } \
-    r__; \
+#define find_next_zero_bit(addr, size, off) ({                              \
+    unsigned int r__;                                                       \
+    const unsigned long *a__ = (addr);                                      \
+    unsigned int s__ = (size);                                              \
+    unsigned int o__ = (off);                                               \
+    if ( __builtin_constant_p(size) && !s__ )                               \
+        r__ = s__;                                                          \
+    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
+        r__ = o__ + __scanbit(~*(const unsigned long *)(a__) >> o__, s__);  \
+    else if ( __builtin_constant_p(off) && !o__ )                           \
+        r__ = __find_first_zero_bit(a__, s__);                              \
+    else                                                                    \
+        r__ = __find_next_zero_bit(a__, s__, o__);                          \
+    r__;                                                                    \
 })
 
 /**
diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
index b5ec455..e2a3686 100644
--- a/xen/include/xen/bitmap.h
+++ b/xen/include/xen/bitmap.h
@@ -110,13 +110,14 @@ extern int bitmap_allocate_region(unsigned long *bitmap, int pos, int order);
 
 #define bitmap_bytes(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long))
 
-#define bitmap_switch(nbits, zero_ret, small, large)			\
-	switch (-!__builtin_constant_p(nbits) | (nbits)) {		\
-	case 0:	return zero_ret;					\
-	case 1 ... BITS_PER_LONG:					\
-		small; break;						\
-	default:							\
-		large; break;						\
+#define bitmap_switch(nbits, zero, small, large)			  \
+	unsigned int n__ = (nbits);					  \
+	if (__builtin_constant_p(nbits) && !n__) {			  \
+		zero;							  \
+	} else if (__builtin_constant_p(nbits) && n__ <= BITS_PER_LONG) { \
+		small;							  \
+	} else {							  \
+		large;							  \
 	}
 
 static inline void bitmap_zero(unsigned long *dst, int nbits)
@@ -191,7 +192,8 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
 static inline int bitmap_equal(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_equal(src1, src2, nbits));
 }
@@ -199,7 +201,8 @@ static inline int bitmap_equal(const unsigned long *src1,
 static inline int bitmap_intersects(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0,
 		return __bitmap_intersects(src1, src2, nbits));
 }
@@ -207,21 +210,24 @@ static inline int bitmap_intersects(const unsigned long *src1,
 static inline int bitmap_subset(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !((*src1 & ~*src2) & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_subset(src1, src2, nbits));
 }
 
 static inline int bitmap_empty(const unsigned long *src, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !(*src & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_empty(src, nbits));
 }
 
 static inline int bitmap_full(const unsigned long *src, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !(~*src & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_full(src, nbits));
 }
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:38:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ3yt-0002HD-QU; Thu, 27 Feb 2014 16:38:43 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72) (envelope-from <tim@xen.org>)
	id 1WJ3ys-0002H3-6x
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:38:42 +0000
Received: from [85.158.143.35:61202] by server-1.bemta-4.messagelabs.com id
	53/2E-31661-11A6F035; Thu, 27 Feb 2014 16:38:41 +0000
X-Env-Sender: tim@xen.org
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393519120!8807351!1
X-Originating-IP: [5.39.92.215]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19079 invoked from network); 27 Feb 2014 16:38:40 -0000
Received: from deinos.phlegethon.org (HELO mail.phlegethon.org) (5.39.92.215)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 16:38:40 -0000
Received: from tjd by mail.phlegethon.org with local (Exim 4.82 (FreeBSD))
	(envelope-from <tim@xen.org>)
	id 1WJ3yp-000KO4-L1; Thu, 27 Feb 2014 16:38:39 +0000
Date: Thu, 27 Feb 2014 17:38:39 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140227163839.GH53925@deinos.phlegethon.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-5-git-send-email-tim@xen.org>
	<530F763C020000780011FF5F@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530F763C020000780011FF5F@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.22 (2013-10-16)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on mail.phlegethon.org); SAEximRunCond expanded to false
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] bitmaps/bitops: Clarify tests for
 small constant size.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

At 16:30 +0000 on 27 Feb (1393515036), Jan Beulich wrote:
> Using == 0 here, ...
> 
> > +        r__ = s__;                                                          \
> > +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> > +        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
> > +    else if ( __builtin_constant_p(off) && !o__ )                           \
> 
> ... but using ! here. I'd prefer the latter everywhere, but I'd also
> be fine with you choosing the former consistently. (Same of course
> again further down.)

Good point.  v3 is below

Tim.


>From dba941064825d723b79f866d16bb0f07585de320 Mon Sep 17 00:00:00 2001
From: Tim Deegan <tim@xen.org>
Date: Thu, 28 Nov 2013 15:40:48 +0000
Subject: [PATCH] bitmaps/bitops: Clarify tests for small constant size.

No semantic changes, just makes the control flow a bit clearer.

I was looking at this bcause the (-!__builtin_constant_p(x) | x__)
formula is too clever for Coverity, but in fact it always takes me a
minute or two to understand it too. :)

Signed-off-by: Tim Deegan <tim@xen.org>

---

v3: Consistenly use '!foo' rather than 'foo == 0'
v2: fix find_next_bit macros to evaluate 'addr' exactly once.
---
 xen/include/asm-x86/bitops.h | 62 ++++++++++++++++++++------------------------
 xen/include/xen/bitmap.h     | 30 ++++++++++++---------
 2 files changed, 46 insertions(+), 46 deletions(-)

diff --git a/xen/include/asm-x86/bitops.h b/xen/include/asm-x86/bitops.h
index ab21d92..82a08ee 100644
--- a/xen/include/asm-x86/bitops.h
+++ b/xen/include/asm-x86/bitops.h
@@ -335,23 +335,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
  * @offset: The bitnumber to start searching at
  * @size: The maximum size to search
  */
-#define find_next_bit(addr, size, off) ({ \
-    unsigned int r__ = (size); \
-    unsigned int o__ = (off); \
-    switch ( -!__builtin_constant_p(size) | r__ ) \
-    { \
-    case 0: (void)(addr); break; \
-    case 1 ... BITS_PER_LONG: \
-        r__ = o__ + __scanbit(*(const unsigned long *)(addr) >> o__, r__); \
-        break; \
-    default: \
-        if ( __builtin_constant_p(off) && !o__ ) \
-            r__ = __find_first_bit(addr, r__); \
-        else \
-            r__ = __find_next_bit(addr, r__, o__); \
-        break; \
-    } \
-    r__; \
+#define find_next_bit(addr, size, off) ({                                   \
+    unsigned int r__;                                                       \
+    const unsigned long *a__ = (addr);                                      \
+    unsigned int s__ = (size);                                              \
+    unsigned int o__ = (off);                                               \
+    if ( __builtin_constant_p(size) && !s__ )                               \
+        r__ = s__;                                                          \
+    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
+        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
+    else if ( __builtin_constant_p(off) && !o__ )                           \
+        r__ = __find_first_bit(a__, s__);                                   \
+    else                                                                    \
+        r__ = __find_next_bit(a__, s__, o__);                               \
+    r__;                                                                    \
 })
 
 /**
@@ -370,23 +367,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
  * @offset: The bitnumber to start searching at
  * @size: The maximum size to search
  */
-#define find_next_zero_bit(addr, size, off) ({ \
-    unsigned int r__ = (size); \
-    unsigned int o__ = (off); \
-    switch ( -!__builtin_constant_p(size) | r__ ) \
-    { \
-    case 0: (void)(addr); break; \
-    case 1 ... BITS_PER_LONG: \
-        r__ = o__ + __scanbit(~*(const unsigned long *)(addr) >> o__, r__); \
-        break; \
-    default: \
-        if ( __builtin_constant_p(off) && !o__ ) \
-            r__ = __find_first_zero_bit(addr, r__); \
-        else \
-            r__ = __find_next_zero_bit(addr, r__, o__); \
-        break; \
-    } \
-    r__; \
+#define find_next_zero_bit(addr, size, off) ({                              \
+    unsigned int r__;                                                       \
+    const unsigned long *a__ = (addr);                                      \
+    unsigned int s__ = (size);                                              \
+    unsigned int o__ = (off);                                               \
+    if ( __builtin_constant_p(size) && !s__ )                               \
+        r__ = s__;                                                          \
+    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
+        r__ = o__ + __scanbit(~*(const unsigned long *)(a__) >> o__, s__);  \
+    else if ( __builtin_constant_p(off) && !o__ )                           \
+        r__ = __find_first_zero_bit(a__, s__);                              \
+    else                                                                    \
+        r__ = __find_next_zero_bit(a__, s__, o__);                          \
+    r__;                                                                    \
 })
 
 /**
diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
index b5ec455..e2a3686 100644
--- a/xen/include/xen/bitmap.h
+++ b/xen/include/xen/bitmap.h
@@ -110,13 +110,14 @@ extern int bitmap_allocate_region(unsigned long *bitmap, int pos, int order);
 
 #define bitmap_bytes(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long))
 
-#define bitmap_switch(nbits, zero_ret, small, large)			\
-	switch (-!__builtin_constant_p(nbits) | (nbits)) {		\
-	case 0:	return zero_ret;					\
-	case 1 ... BITS_PER_LONG:					\
-		small; break;						\
-	default:							\
-		large; break;						\
+#define bitmap_switch(nbits, zero, small, large)			  \
+	unsigned int n__ = (nbits);					  \
+	if (__builtin_constant_p(nbits) && !n__) {			  \
+		zero;							  \
+	} else if (__builtin_constant_p(nbits) && n__ <= BITS_PER_LONG) { \
+		small;							  \
+	} else {							  \
+		large;							  \
 	}
 
 static inline void bitmap_zero(unsigned long *dst, int nbits)
@@ -191,7 +192,8 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
 static inline int bitmap_equal(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_equal(src1, src2, nbits));
 }
@@ -199,7 +201,8 @@ static inline int bitmap_equal(const unsigned long *src1,
 static inline int bitmap_intersects(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0,
 		return __bitmap_intersects(src1, src2, nbits));
 }
@@ -207,21 +210,24 @@ static inline int bitmap_intersects(const unsigned long *src1,
 static inline int bitmap_subset(const unsigned long *src1,
 			const unsigned long *src2, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !((*src1 & ~*src2) & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_subset(src1, src2, nbits));
 }
 
 static inline int bitmap_empty(const unsigned long *src, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !(*src & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_empty(src, nbits));
 }
 
 static inline int bitmap_full(const unsigned long *src, int nbits)
 {
-	bitmap_switch(nbits, -1,
+	bitmap_switch(nbits,
+		return -1,
 		return !(~*src & BITMAP_LAST_WORD_MASK(nbits)),
 		return __bitmap_full(src, nbits));
 }
-- 
1.8.5.2


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
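The bitmap_switch() rework in the patch above replaces the switch-on-constant trick with an if/else chain that the compiler folds away when nbits is a compile-time constant, picking a single-word fast path for nbits <= BITS_PER_LONG and the generic __bitmap_*() fallback otherwise. As a rough standalone illustration of the small/large split being dispatched on — a hedged sketch with hypothetical demo_* names, not code from the patch:

```c
#include <assert.h>
#include <limits.h>

#define DEMO_BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))

/* Mask covering the valid bits of the final (possibly partial) word,
 * mirroring what BITMAP_LAST_WORD_MASK() computes. */
#define DEMO_LAST_WORD_MASK(nbits) \
	(((nbits) % DEMO_BITS_PER_LONG) ? \
	 (1UL << ((nbits) % DEMO_BITS_PER_LONG)) - 1 : ~0UL)

/* "small" path: a bitmap of at most one word compares with a single
 * masked XOR; this is the branch bitmap_switch() selects when nbits
 * is a compile-time constant <= BITS_PER_LONG. */
static int demo_bitmap_equal_small(const unsigned long *a,
				   const unsigned long *b,
				   unsigned int nbits)
{
	return !((*a ^ *b) & DEMO_LAST_WORD_MASK(nbits));
}

/* "large" path: the generic word-by-word loop (the __bitmap_equal()
 * fallback), masking only the trailing partial word. */
static int demo_bitmap_equal_large(const unsigned long *a,
				   const unsigned long *b,
				   unsigned int nbits)
{
	unsigned int i, full = nbits / DEMO_BITS_PER_LONG;

	for (i = 0; i < full; i++)
		if (a[i] != b[i])
			return 0;
	if (nbits % DEMO_BITS_PER_LONG &&
	    ((a[i] ^ b[i]) & DEMO_LAST_WORD_MASK(nbits)))
		return 0;
	return 1;
}
```

With a constant nbits the compiler can prove which arm of the chain is taken and emit only the single masked comparison, which is the point of testing __builtin_constant_p(nbits) before comparing n__ against BITS_PER_LONG.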

From xen-devel-bounces@lists.xen.org Thu Feb 27 16:41:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 16:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ417-0002OE-Fo; Thu, 27 Feb 2014 16:41:01 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJ415-0002O8-GX
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 16:40:59 +0000
Received: from [85.158.143.35:22195] by server-2.bemta-4.messagelabs.com id
	48/DA-04779-A9A6F035; Thu, 27 Feb 2014 16:40:58 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393519257!8828346!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27408 invoked from network); 27 Feb 2014 16:40:58 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 16:40:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,555,1389744000"; d="scan'208";a="104706652"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 16:40:56 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.78)
	with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 11:40:56 -0500
Message-ID: <530F6A96.2030508@citrix.com>
Date: Thu, 27 Feb 2014 17:40:54 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <1393431884-1120-1-git-send-email-roger.pau@citrix.com>
	<530F51E9.1070703@oracle.com> <530F5DA0.3060408@citrix.com>
	<530F68BE.2070505@oracle.com>
In-Reply-To: <530F68BE.2070505@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 17:33, Boris Ostrovsky wrote:
> On 02/27/2014 10:45 AM, Roger Pau Monné wrote:
>>>
>>> @@ -291,7 +290,10 @@ static int xen_initdom_setup_msi_irqs(struct
>>> pci_dev *dev, int nvec, int type)
>>>                      (pci_domain_nr(dev->bus) << 16);
>>>            map_irq.devfn = dev->devfn;
>>>    -        if (type == PCI_CAP_ID_MSIX) {
>>> +        if (type == PCI_CAP_ID_MSI && nvec > 1) {
>>> +            map_irq.type = MAP_PIRQ_TYPE_MULTI_MSI;
>>> +            map_irq.entry_nr = nvec;
>>>
>>> Are we overloading entry_nr here with a different meaning? I thought it
>>> was meant to be entry number (in MSI-X table for example), not number of
>>> entries.
>> In the case of MSI message groups (MAP_PIRQ_TYPE_MULTI_MSI) entry_nr is
>> the number of vectors to setup, so yes, it's an overloading of entry_nr.
> 
> Then I think we should at least make a note of this in physdev.h. (Or
> maybe even make entry_nr a union with nvec or some such, although that
> would look rather hacky).

OK, I can add a comment to that effect in physdev.h.

> 
>> ....
>>
>>>
>>> index 42721d1..eb13326d 100644
>>> --- a/include/xen/interface/physdev.h
>>> +++ b/include/xen/interface/physdev.h
>>> @@ -131,6 +131,7 @@ struct physdev_irq {
>>>    #define MAP_PIRQ_TYPE_GSI        0x1
>>>    #define MAP_PIRQ_TYPE_UNKNOWN        0x2
>>>    #define MAP_PIRQ_TYPE_MSI_SEG        0x3
>>> +#define MAP_PIRQ_TYPE_MULTI_MSI        0x4
>>> Formatting.
>> I don't get the formatting problem, it's the same formatting that the
>> other MAP_PIRQ_TYPE_* use, and if the patch is applied formatting is OK.
>>
> 
> That's because my client messed up whitespaces. You can look for example
> at https://lkml.org/lkml/2014/2/26/352 to see the extra tab.

It looks like an extra tab because there's a "+" in front of the line,
which makes the tab jump. If you remove the extra "+" and the spaces in
front of the preceding lines it is going to be aligned.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
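In the message above (on overloading map_irq.entry_nr for MSI message groups), Boris floats making entry_nr a union with nvec, and Roger instead adds a comment in physdev.h. The union idea can be pictured with a hypothetical sketch; demo_map_pirq and its fields are illustrative only, not the real struct physdev_map_pirq layout:

```c
#include <assert.h>

/* Hypothetical sketch only: one storage slot with two names via an
 * anonymous union (C11), so each MAP_PIRQ_TYPE_* reads the field under
 * the meaning it actually uses. Not the actual physdev.h definition,
 * and the thread itself calls this approach "rather hacky". */
struct demo_map_pirq {
	int type;
	union {
		int entry_nr;	/* MSI-X: entry number in the MSI-X table */
		int nvec;	/* multi-MSI: number of vectors to set up */
	};
};

/* Both names alias the same storage, so a multi-MSI caller writing
 * nvec and a hypervisor-side reader of entry_nr see one value. */
static int demo_vectors_requested(const struct demo_map_pirq *m)
{
	return m->nvec;
}
```

The comment-only resolution keeps the ABI unchanged while documenting that, for MAP_PIRQ_TYPE_MULTI_MSI, entry_nr carries a vector count rather than a table index.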

From xen-devel-bounces@lists.xen.org Thu Feb 27 17:06:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 17:06:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ4PW-000348-3j; Thu, 27 Feb 2014 17:06:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <jfehlig@suse.com>) id 1WJ4PU-00033y-J9
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 17:06:12 +0000
Received: from [85.158.137.68:27871] by server-7.bemta-3.messagelabs.com id
	73/0A-13775-3807F035; Thu, 27 Feb 2014 17:06:11 +0000
X-Env-Sender: jfehlig@suse.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393520769!3399203!1
X-Originating-IP: [137.65.250.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16064 invoked from network); 27 Feb 2014 17:06:10 -0000
Received: from smtp2.provo.novell.com (HELO victor.provo.novell.com)
	(137.65.250.81)
	by server-5.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 17:06:10 -0000
Received: from [137.65.135.33] (prv-ext-foundry1int.gns.novell.com
	[137.65.251.240])
	by victor.provo.novell.com with ESMTP (TLS encrypted);
	Thu, 27 Feb 2014 10:06:01 -0700
Message-ID: <530F7077.2040808@suse.com>
Date: Thu, 27 Feb 2014 10:05:59 -0700
From: Jim Fehlig <jfehlig@suse.com>
User-Agent: Thunderbird 2.0.0.24 (X11/20100302)
MIME-Version: 1.0
To: George Dunlap <George.Dunlap@eu.citrix.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>	<530B62A2.3080901@eu.citrix.com>	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>	<CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
	<530C2779.20502@suse.com>
In-Reply-To: <530C2779.20502@suse.com>
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Jim Fehlig wrote:
> George Dunlap wrote:
>   
>> Also,  it would be nice to get a Tested-by: from someone using it with
>> libvirt (before the release at least, if not before the check-in).
>>
>> Jim / Dario?
>>   
>>     
>
> I'll update my test system to rc6 tomorrow and restart my tests.
>   

FYI, the libvirt-based tests have been running for 18 hours with no
problems.  I need to stop them now to use the machine for other
purposes, but will start a weekend run tomorrow.  I know this patch is
already committed, but nonetheless

    Tested-by: Jim Fehlig <jfehlig@suse.com>

Regards,
Jim

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 17:09:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 17:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ4SV-0003Hj-V8; Thu, 27 Feb 2014 17:09:19 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <George.Dunlap@citrix.com>) id 1WJ4SU-0003HW-Mh
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 17:09:18 +0000
Received: from [85.158.137.68:7447] by server-4.bemta-3.messagelabs.com id
	03/A6-04858-D317F035; Thu, 27 Feb 2014 17:09:17 +0000
X-Env-Sender: George.Dunlap@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393520955!4674565!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26559 invoked from network); 27 Feb 2014 17:09:17 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 17:09:17 -0000
X-IronPort-AV: E=Sophos;i="4.97,556,1389744000"; d="scan'208";a="104717314"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 17:08:51 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 12:08:50 -0500
Received: from elijah.uk.xensource.com ([10.80.2.24])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<george.dunlap@eu.citrix.com>)	id 1WJ4S2-0005Rm-Cz;
	Thu, 27 Feb 2014 17:08:50 +0000
Message-ID: <530F711D.9020803@eu.citrix.com>
Date: Thu, 27 Feb 2014 17:08:45 +0000
From: George Dunlap <george.dunlap@eu.citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jim Fehlig <jfehlig@suse.com>
References: <1393251555-22418-1-git-send-email-ian.jackson@eu.citrix.com>	<1393251555-22418-2-git-send-email-ian.jackson@eu.citrix.com>	<530B62A2.3080901@eu.citrix.com>	<CAFLBxZaKXmWM-2g8a=pz6=ZmTRa6rrZT0op_Qpta=V0iX7yd=A@mail.gmail.com>	<CAFLBxZbOGM4ALC8DD92ZLBcFf0CDFq=aONuaLEcUujutmbyzTA@mail.gmail.com>
	<530C2779.20502@suse.com> <530F7077.2040808@suse.com>
In-Reply-To: <530F7077.2040808@suse.com>
X-DLP: MIA2
Cc: Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>, M A Young <m.a.young@durham.ac.uk>
Subject: Re: [Xen-devel] [PATCH 1/3] libxl: Fix libxl_postfork_child_noexec
 deadlock etc.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 05:05 PM, Jim Fehlig wrote:
> Jim Fehlig wrote:
>> George Dunlap wrote:
>>    
>>> Also,  it would be nice to get a Tested-by: from someone using it with
>>> libvirt (before the release at least, if not before the check-in).
>>>
>>> Jim / Dario?
>>>    
>>>      
>> I'll update my test system to rc6 tomorrow and restart my tests.
>>    
> FYI, the libvirt-based tests have been running for 18 hours with no
> problems.  I need to stop them now to use the machine for other
> purposes, but will start a weekend run tomorrow.  I know this patch is
> already committed, but nonetheless
>
>      Tested-by: Jim Fehlig <jfehlig@suse.com>

Awesome, thanks, Jim.

  -George


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 17:40:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 17:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ4w3-00040c-BV; Thu, 27 Feb 2014 17:39:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <viktor.kleinik@globallogic.com>) id 1WJ4w0-00040O-6M
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 17:39:49 +0000
Received: from [85.158.143.35:60542] by server-3.bemta-4.messagelabs.com id
	EE/40-11539-3687F035; Thu, 27 Feb 2014 17:39:47 +0000
X-Env-Sender: viktor.kleinik@globallogic.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393522765!8794240!1
X-Originating-IP: [64.18.0.182]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11305 invoked from network); 27 Feb 2014 17:39:45 -0000
Received: from exprod5og106.obsmtp.com (HELO exprod5og106.obsmtp.com)
	(64.18.0.182)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 17:39:45 -0000
Received: from mail-wi0-f169.google.com ([209.85.212.169]) (using TLSv1) by
	exprod5ob106.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUw94Te1d9Dr8YyOWEYh9ZtE7mNj3KOvh@postini.com;
	Thu, 27 Feb 2014 09:39:34 PST
Received: by mail-wi0-f169.google.com with SMTP id bs8so934041wib.2
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 09:39:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ci4XdNXXNuTY1Jxu52MxbJYoOwU+tlAmF6jNbnp2vZ0=;
	b=BHx5yqr/YXhF3JEtwy6wm0SCOGlXn8j5f9t11heB8gCV7V4IU7eNTX7fJVG03lGLLZ
	kASoQjFi6/yG7//JY9FmxQbAAwtqBd1szu2eIuMrBoDcD8Ay2bVDmWriy9UunU3oXZGU
	dttLMUBEgZw9czjB1/2MDAUhBdNsDmzg8hMws=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ci4XdNXXNuTY1Jxu52MxbJYoOwU+tlAmF6jNbnp2vZ0=;
	b=D7Q2yplThhCXQPYzfz0JPQrB91P+GiT0rxgPXBwcHBtWbmGKXZ+TncYryei99RBPL8
	lSRxgOb/MLU8/S+W0no3UdeQlJ8PVbHy4zQ3LaAwoen75ffkNsFnJfp4bLDD7Bq5ndxI
	OjyAWkduvc+QvRCy/lqCGv3Pgdr5Jb79YMc17nPG4/UwDQY0ikYHA7R9WzJcOOFWRSjA
	puksLDV9DISrHHQCmixwtGdD7F+6SyCzxlg7VhfzGXrCMdw/p/J9WnDH/+bOFourIg9q
	U3Lwn4tUr4DedSXi89iPhNTyZAJ7tAMithksuVlsC0rCxM+Jb9lvrwCD5Ty9MkinLgKm
	CGbA==
X-Received: by 10.180.79.7 with SMTP id f7mr14115693wix.20.1393521101679;
	Thu, 27 Feb 2014 09:11:41 -0800 (PST)
X-Gm-Message-State: ALoCoQmJAwED4aO2R7i1fs59U8y6OAb/jPb7mpzwHNWw2aj3MPjHYk7KusXlK53Ye1RjG8yYVYKHA5PDFlAnjdZTWU4VszohUMlqvkrwcMiF0PsrvP1vadtQOSxoZi7xqFcBkwLYqI6UN+g0dpNEJAfa+XTiaFKNk/jIaKgah68Y7C1vuO8X39U=
MIME-Version: 1.0
X-Received: by 10.180.79.7 with SMTP id f7mr14115672wix.20.1393521101508; Thu,
	27 Feb 2014 09:11:41 -0800 (PST)
Received: by 10.216.174.202 with HTTP; Thu, 27 Feb 2014 09:11:41 -0800 (PST)
In-Reply-To: <FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>
Date: Thu, 27 Feb 2014 17:11:41 +0000
Message-ID: <CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>
From: Viktor Kleinik <viktor.kleinik@globallogic.com>
To: Eric Trudeau <etrudeau@broadcom.com>
Cc: Julien Grall <julien.grall@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6125966010409888447=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6125966010409888447==
Content-Type: multipart/alternative; boundary=f46d041825ea3e27bf04f3666a14

--f46d041825ea3e27bf04f3666a14
Content-Type: text/plain; charset=ISO-8859-1

Thank you all for your responses.

I will try those changes on our platform.
Are you planning to push the implementation of the
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into an
official Xen release?

Regards,
Victor


On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com> wrote:

> > -----Original Message-----
> > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > Sent: Thursday, February 27, 2014 8:16 AM
> > To: Dario Faggioli
> > Cc: Viktor Kleinik; xen-devel@lists.xen.org; Arianna Avanzini; Stefano
> Stabellini;
> > Julien Grall; Eric Trudeau
> > Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> >
> > On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > > Hi all,
> > > >
> > > Hi,
> > >
> > > > Does anyone knows something about future plans to implement
> > > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > > >
> > > I think Arianna is working on an implementation of the former
> > > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > > list soon, isn't it so, Arianna?
> >
> > Eric Trudeau did some work in the area too:
> >
> > http://marc.info/?l=xen-devel&m=137338996422503
> > http://marc.info/?l=xen-devel&m=137365750318936
>
> I checked our repo and the route IRQ changes to DomUs in the second patch
> URL Stefano provided below are up-to-date with what we have been using on
> our platforms.  We made no further changes after that patch, i.e. we left
> the 100 msec max wait for a domain to finish an ISR when destroying it.
>
> We also added support for a DomU to map in I/O memory with the iomem
> configuration parameter.  Unfortunately, I don't have time to provide an
> official patch on recent Xen upstream code due to time constraints, but
> below is a patch based on last October, :( , commit
> d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.
> I hope this is helpful, because that is the best I can do at this time.
>
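The `iomem` parameter Eric mentions is the xl/xm guest-config option: a list of `"start,count"` page-frame ranges, given in hex at page granularity. A minimal illustrative entry (the address below is made up, not from the thread):

```
# Guest config sketch: let the guest map one page of MMIO whose machine
# frame number is 0x47000 (i.e. physical address 0x47000000, 4K pages).
iomem = [ "0x47000,1" ]
```

With the patch below applied, such a range is both permitted and mapped 1:1 into the guest at domain-creation time.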
> -----------------
>
> tools/libxl/libxl_create.c |  5 +++--
>  xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 76 insertions(+), 3 deletions(-)
>
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 1b320d3..53ed52e 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
>          LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
>              domid, io->start, io->start + io->number - 1);
>
> -        ret = xc_domain_iomem_permission(CTX->xch, domid,
> -                                          io->start, io->number, 1);
> +        ret = xc_domain_memory_mapping(CTX->xch, domid,
> +                                       io->start, io->start,
> +                                       io->number, 1);
>          if (ret < 0) {
>              LOGE(ERROR,
>                   "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 851ee40..222aac9 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -10,11 +10,83 @@
>  #include <xen/errno.h>
>  #include <xen/sched.h>
>  #include <public/domctl.h>
> +#include <xen/iocap.h>
> +#include <xsm/xsm.h>
> +#include <xen/paging.h>
> +#include <xen/guest_access.h>
>
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> -    return -ENOSYS;
> +    long ret = 0;
> +    bool_t copyback = 0;
> +
> +    switch ( domctl->cmd )
> +    {
> +    case XEN_DOMCTL_memory_mapping:
> +    {
> +        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
> +        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
> +        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
> +        int add = domctl->u.memory_mapping.add_mapping;
> +
> +        /* removing i/o memory is not implemented yet */
> +        if (!add) {
> +            ret = -ENOSYS;
> +            break;
> +        }
> +        ret = -EINVAL;
> +        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
> +             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
> +             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
> +             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
> +            break;
> +
> +        ret = -EPERM;
> +        if ( current->domain->domain_id != 0 )
> +            break;
> +
> +        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
> +        if ( ret )
> +            break;
> +
> +        if ( add )
> +        {
> +            printk(XENLOG_G_INFO
> +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> +                   d->domain_id, gfn, mfn, nr_mfns);
> +
> +            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
> +            if ( !ret && paging_mode_translate(d) )
> +            {
> +                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
> +                                       (gfn + nr_mfns) << PAGE_SHIFT,
> +                                       mfn << PAGE_SHIFT);
> +                if ( ret )
> +                {
> +                    printk(XENLOG_G_WARNING
> +                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> +                           d->domain_id, gfn, mfn, nr_mfns);
> +                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
> +                         is_hardware_domain(current->domain) )
> +                        printk(XENLOG_ERR
> +                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
> +                               d->domain_id, mfn, mfn + nr_mfns - 1);
> +                }
> +            }
> +        }
> +    }
> +    break;
> +
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
> +    return ret;
>  }
>
>  void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
>
<http://www.globallogic.com/email_disclaimer.txt>

--f46d041825ea3e27bf04f3666a14--


--===============6125966010409888447==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6125966010409888447==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 17:40:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 17:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ4w3-00040c-BV; Thu, 27 Feb 2014 17:39:51 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <viktor.kleinik@globallogic.com>) id 1WJ4w0-00040O-6M
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 17:39:49 +0000
Received: from [85.158.143.35:60542] by server-3.bemta-4.messagelabs.com id
	EE/40-11539-3687F035; Thu, 27 Feb 2014 17:39:47 +0000
X-Env-Sender: viktor.kleinik@globallogic.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393522765!8794240!1
X-Originating-IP: [64.18.0.182]
X-SpamReason: No, hits=0.9 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11305 invoked from network); 27 Feb 2014 17:39:45 -0000
Received: from exprod5og106.obsmtp.com (HELO exprod5og106.obsmtp.com)
	(64.18.0.182)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 17:39:45 -0000
Received: from mail-wi0-f169.google.com ([209.85.212.169]) (using TLSv1) by
	exprod5ob106.postini.com ([64.18.4.12]) with SMTP
	ID DSNKUw94Te1d9Dr8YyOWEYh9ZtE7mNj3KOvh@postini.com;
	Thu, 27 Feb 2014 09:39:34 PST
Received: by mail-wi0-f169.google.com with SMTP id bs8so934041wib.2
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 09:39:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=globallogic.com; s=google;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=ci4XdNXXNuTY1Jxu52MxbJYoOwU+tlAmF6jNbnp2vZ0=;
	b=BHx5yqr/YXhF3JEtwy6wm0SCOGlXn8j5f9t11heB8gCV7V4IU7eNTX7fJVG03lGLLZ
	kASoQjFi6/yG7//JY9FmxQbAAwtqBd1szu2eIuMrBoDcD8Ay2bVDmWriy9UunU3oXZGU
	dttLMUBEgZw9czjB1/2MDAUhBdNsDmzg8hMws=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=ci4XdNXXNuTY1Jxu52MxbJYoOwU+tlAmF6jNbnp2vZ0=;
	b=D7Q2yplThhCXQPYzfz0JPQrB91P+GiT0rxgPXBwcHBtWbmGKXZ+TncYryei99RBPL8
	lSRxgOb/MLU8/S+W0no3UdeQlJ8PVbHy4zQ3LaAwoen75ffkNsFnJfp4bLDD7Bq5ndxI
	OjyAWkduvc+QvRCy/lqCGv3Pgdr5Jb79YMc17nPG4/UwDQY0ikYHA7R9WzJcOOFWRSjA
	puksLDV9DISrHHQCmixwtGdD7F+6SyCzxlg7VhfzGXrCMdw/p/J9WnDH/+bOFourIg9q
	U3Lwn4tUr4DedSXi89iPhNTyZAJ7tAMithksuVlsC0rCxM+Jb9lvrwCD5Ty9MkinLgKm
	CGbA==
X-Received: by 10.180.79.7 with SMTP id f7mr14115693wix.20.1393521101679;
	Thu, 27 Feb 2014 09:11:41 -0800 (PST)
X-Gm-Message-State: ALoCoQmJAwED4aO2R7i1fs59U8y6OAb/jPb7mpzwHNWw2aj3MPjHYk7KusXlK53Ye1RjG8yYVYKHA5PDFlAnjdZTWU4VszohUMlqvkrwcMiF0PsrvP1vadtQOSxoZi7xqFcBkwLYqI6UN+g0dpNEJAfa+XTiaFKNk/jIaKgah68Y7C1vuO8X39U=
MIME-Version: 1.0
X-Received: by 10.180.79.7 with SMTP id f7mr14115672wix.20.1393521101508; Thu,
	27 Feb 2014 09:11:41 -0800 (PST)
Received: by 10.216.174.202 with HTTP; Thu, 27 Feb 2014 09:11:41 -0800 (PST)
In-Reply-To: <FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>
Date: Thu, 27 Feb 2014 17:11:41 +0000
Message-ID: <CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>
From: Viktor Kleinik <viktor.kleinik@globallogic.com>
To: Eric Trudeau <etrudeau@broadcom.com>
Cc: Julien Grall <julien.grall@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6125966010409888447=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============6125966010409888447==
Content-Type: multipart/alternative; boundary=f46d041825ea3e27bf04f3666a14

--f46d041825ea3e27bf04f3666a14
Content-Type: text/plain; charset=ISO-8859-1

Thank you all for your responses.

I will try those changes on our platform.
Are you planning push the implementation of
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into
official Xen release?

Regards,
Victor


On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com> wrote:

> > -----Original Message-----
> > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> > Sent: Thursday, February 27, 2014 8:16 AM
> > To: Dario Faggioli
> > Cc: Viktor Kleinik; xen-devel@lists.xen.org; Arianna Avanzini; Stefano
> Stabellini;
> > Julien Grall; Eric Trudeau
> > Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> >
> > On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > > Hi all,
> > > >
> > > Hi,
> > >
> > > > Does anyone knows something about future plans to implement
> > > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > > >
> > > I think Arianna is working on an implementation of the former
> > > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > > list soon, isn't it so, Arianna?
> >
> > Eric Trudeau did some work in the area too:
> >
> > http://marc.info/?l=xen-devel&m=137338996422503
> > http://marc.info/?l=xen-devel&m=137365750318936
>
> I checked our repo and the route IRQ changes to DomUs in the second patch
> URL Stefano provided below are up-to-date with what we have been using on
> our platforms.  We made no further changes after that patch, i.e. we left
> the 100 msec max wait for a domain to finish an ISR when destroying it.
>
> We also added support for a DomU to map in I/O memory with the iomem
> configuration parameter.  Unfortunately, I don't have time to provide an
> official patch on recent Xen upstream code due to time constraints, but
> below is a patch based on last October, :( , commit
> d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.
> I hope this is helpful, because that is the best I can do at this time.
>
> -----------------
>
> tools/libxl/libxl_create.c |  5 +++--
>  xen/arch/arm/domctl.c      | 74
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 76 insertions(+), 3 deletions(-)
>
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 1b320d3..53ed52e 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc,
> libxl__multidev *multidev,
>          LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
>              domid, io->start, io->start + io->number - 1);
>
> -        ret = xc_domain_iomem_permission(CTX->xch, domid,
> -                                          io->start, io->number, 1);
> +        ret = xc_domain_memory_mapping(CTX->xch, domid,
> +                                       io->start, io->start,
> +                                       io->number, 1);
>          if (ret < 0) {
>              LOGE(ERROR,
>                   "failed give dom%d access to iomem range
> %"PRIx64"-%"PRIx64,
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 851ee40..222aac9 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -10,11 +10,83 @@
>  #include <xen/errno.h>
>  #include <xen/sched.h>
>  #include <public/domctl.h>
> +#include <xen/iocap.h>
> +#include <xsm/xsm.h>
> +#include <xen/paging.h>
> +#include <xen/guest_access.h>
>
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> -    return -ENOSYS;
> +    long ret = 0;
> +    bool_t copyback = 0;
> +
> +    switch ( domctl->cmd )
> +    {
> +    case XEN_DOMCTL_memory_mapping:
> +    {
> +        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
> +        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
> +        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
> +        int add = domctl->u.memory_mapping.add_mapping;
> +
> +        /* removing i/o memory is not implemented yet */
> +        if (!add) {
> +            ret = -ENOSYS;
> +            break;
> +        }
> +        ret = -EINVAL;
> +        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
> +             /* x86 checks wrap based on paddr_bits which is not
> implemented on ARM? */
> +             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits -
> PAGE_SHIFT)) || */
> +             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
> +            break;
> +
> +        ret = -EPERM;
> +        if ( current->domain->domain_id != 0 )
> +            break;
> +
> +        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
> +        if ( ret )
> +            break;
> +
> +        if ( add )
> +        {
> +            printk(XENLOG_G_INFO
> +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
> +                   d->domain_id, gfn, mfn, nr_mfns);
> +
> +            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
> +            if ( !ret && paging_mode_translate(d) )
> +            {
> +                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
> +                                       (gfn + nr_mfns) << PAGE_SHIFT,
> +                                       mfn << PAGE_SHIFT);
> +                if ( ret )
> +                {
> +                    printk(XENLOG_G_WARNING
> +                           "memory_map:fail: dom%d gfn=%lx mfn=%lx
> nr=%lx\n",
> +                           d->domain_id, gfn, mfn, nr_mfns);
> +                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
> +                         is_hardware_domain(current->domain) )
> +                        printk(XENLOG_ERR
> +                               "memory_map: failed to deny dom%d access
> to [%lx,%lx]\n",
> +                               d->domain_id, mfn, mfn + nr_mfns - 1);
> +                }
> +            }
> +        }
> +    }
> +    break;
> +
> +    default:
> +        ret = -ENOSYS;
> +        break;
> +    }
> +
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
> +    return ret;
>  }
>
>  void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
>
<http://www.globallogic.com/email_disclaimer.txt>

--f46d041825ea3e27bf04f3666a14--


--===============6125966010409888447==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6125966010409888447==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 17:50:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 17:50:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ563-0004LY-28; Thu, 27 Feb 2014 17:50:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ561-0004LM-DQ
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 17:50:09 +0000
Received: from [85.158.143.35:10211] by server-3.bemta-4.messagelabs.com id
	B1/BB-11539-0DA7F035; Thu, 27 Feb 2014 17:50:08 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393523406!8829882!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30329 invoked from network); 27 Feb 2014 17:50:07 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 17:50:07 -0000
X-IronPort-AV: E=Sophos;i="4.97,556,1389744000"; d="scan'208";a="106360904"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 17:50:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 12:50:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ55x-000623-KY;
	Thu, 27 Feb 2014 17:50:05 +0000
Message-ID: <530F7ACD.6020007@citrix.com>
Date: Thu, 27 Feb 2014 17:50:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
	<1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
	<20140227162159.GG53925@deinos.phlegethon.org>
In-Reply-To: <20140227162159.GG53925@deinos.phlegethon.org>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] [RFC] xen/console: Provide timestamps
 as an offset since boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 16:21, Tim Deegan wrote:
> At 14:03 +0000 on 27 Feb (1393506187), Andrew Cooper wrote:
>> Some latent bugs are emphasised by these changes.  There are steps in time
>> when the TSC scale is calculated, and when the platform time is initialised ...
>>
>>     (XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
>>     (XEN) [   27.553075] Detected 2793.232 MHz processor.
>>     (XEN) [   27.558277] Initing memory sharing.
>>     (XEN) [   27.562502] Cannot set CPU feature mask on CPU#0
>>     (XEN) [   27.567852] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
>>     (XEN) [   27.577687] Intel machine check reporting enabled
>>     (XEN) [   27.583147] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
>>     (XEN) [   27.591407] PCI: MCFG area at e0000000 reserved in E820
>>     (XEN) [   27.597369] PCI: Using MCFG for segment 0000 bus 00-3f
>>     (XEN) [   27.603238] I/O virtualisation disabled
>>     (XEN) [   27.608093] ENABLING IO-APIC IRQs
>>     (XEN) [   27.612136]  -> Using new ACK method
>>     (XEN) [   27.616601] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
>>     (XEN) [    0.153431] Platform timer is 14.318MHz HPET
> The patch below fixes the first step for me.  I haven't been able to
> understand the exact mechanism of the second one yet.  With this patch
> applied, of course, the second step is not visible -- which doesn't
> mean it's gone away.
>
> Cheers,
>
> Tim.
>
> commit d986516ce297bbcf3181225105dbc67edfa7c37e
> Author: Tim Deegan <tim@xen.org>
> Date:   Thu Feb 27 16:17:02 2014 +0000
>
>     x86/time: Always count s_time from Xen boot.
>     
>     In the early-boot clock, before the calibration routines kick in,
>     count time from Xen boot rather than from when the BSP's TSC was 0.
>     
>     Signed-off-by: Tim Deegan <tim@xen.org>

This does indeed fix the first step, although I will continue debugging
the second without this present.

However, as a time from boot, it would make more sense for the beginning
of __start_xen() (or even in the early asm code) to stash the TSC value
sideways.  I will see how easy this is to do.
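The idea here — stash the TSC once at entry and derive every console timestamp as a scaled delta from it — can be sketched as below. This is a standalone illustration, not Xen's actual set_time_scale()/scale_delta() code; the function name and the kHz-based conversion are assumptions:

```c
#include <stdint.h>

/* Illustrative only: convert a TSC delta (cycles since a boot-time TSC
 * stashed early, e.g. at __start_xen() entry) into nanoseconds for a
 * CPU running at cpu_khz kHz.  The division is split so the multiply
 * does not overflow 64 bits for large deltas. */
static uint64_t tsc_delta_to_ns(uint64_t tsc_delta, uint64_t cpu_khz)
{
    uint64_t whole = tsc_delta / cpu_khz;  /* whole milliseconds of cycles */
    uint64_t part  = tsc_delta % cpu_khz;  /* remaining cycles */

    return whole * 1000000ULL + part * 1000000ULL / cpu_khz;
}

/* A console timestamp then becomes
 *   tsc_delta_to_ns(rdtsc() - boot_tsc, cpu_khz)
 * which starts near zero rather than at the BSP's raw TSC value. */
```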

~Andrew

>
> diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
> index 82492c1..2bc2b2d 100644
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1456,6 +1456,7 @@ void __init early_time_init(void)
>      u64 tmp = init_pit_and_calibrate_tsc();
>  
>      set_time_scale(&this_cpu(cpu_time).tsc_scale, tmp);
> +    rdtscll(this_cpu(cpu_time).local_tsc_stamp);
>  
>      do_div(tmp, 1000);
>      cpu_khz = (unsigned long)tmp;
>



From xen-devel-bounces@lists.xen.org Thu Feb 27 17:52:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 17:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ58W-0004X0-Lm; Thu, 27 Feb 2014 17:52:44 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJ58V-0004Wp-6l
	for Xen-devel@lists.xen.org; Thu, 27 Feb 2014 17:52:43 +0000
Received: from [85.158.143.35:24712] by server-1.bemta-4.messagelabs.com id
	D1/EF-31661-A6B7F035; Thu, 27 Feb 2014 17:52:42 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-6.tower-21.messagelabs.com!1393523560!8807439!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19593 invoked from network); 27 Feb 2014 17:52:41 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-6.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 17:52:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,556,1389744000"; d="scan'208";a="104733804"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 17:52:40 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL02.citrite.net
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 12:52:39 -0500
Message-ID: <530F7B66.7010303@citrix.com>
Date: Thu, 27 Feb 2014 17:52:38 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
References: <530F608E.9040407@citrix.com>
	<21263.26104.660301.591599@mariner.uk.xensource.com>
In-Reply-To: <21263.26104.660301.591599@mariner.uk.xensource.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA1
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	"Xen-devel@lists.xen.org" <Xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Domain Save Image Format proposal (draft C)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 16:21, Ian Jackson wrote:
> David Vrabel writes ("Domain Save Image Format proposal (draft C)"):
> 
>> checksum     CRC-32C checksum of the record head, body (including any
>>              trailing padding) and the footer (except for the checksum
>>              field itself), or 0x00000000 if the checksum field is
>>              invalid.
> 
> I still question whether it is really sensible to have a set of
> per-record checksums which do not cover all of the file (eg, excluding
> the image header, and the file structure).

I'm going to drop the checksum field.  There doesn't seem to be much
interest in it.

David
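For reference, the per-record checksum under discussion is CRC-32C (Castagnoli, reflected polynomial 0x82F63B78), computed over head, body (with padding) and footer with the checksum field itself taken as zero. A bit-at-a-time sketch follows; a real implementation would be table-driven or use the SSE4.2 CRC32 instruction:

```c
#include <stddef.h>
#include <stdint.h>

/* CRC-32C (Castagnoli) over a buffer, one bit at a time.  Per the
 * draft's definition, callers would run this over record head + body
 * + footer, substituting zero for the checksum field itself. */
static uint32_t crc32c(uint32_t crc, const void *buf, size_t len)
{
    const uint8_t *p = buf;

    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78U : crc >> 1;
    }
    return ~crc;
}
```

The standard check value for this polynomial is crc32c(0, "123456789", 9) == 0xE3069283, which makes it easy to validate an implementation against the spec.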


From xen-devel-bounces@lists.xen.org Thu Feb 27 18:15:07 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:15:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ5Tu-00059b-1n; Thu, 27 Feb 2014 18:14:50 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJ5Ts-00059W-04
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 18:14:48 +0000
Received: from [85.158.143.35:39605] by server-2.bemta-4.messagelabs.com id
	78/26-04779-7908F035; Thu, 27 Feb 2014 18:14:47 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393524885!8801511!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24714 invoked from network); 27 Feb 2014 18:14:46 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 18:14:46 -0000
X-IronPort-AV: E=Sophos;i="4.97,556,1389744000"; d="scan'208";a="104742056"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 18:14:43 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 13:14:39 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJ5Ti-0006Rv-QQ;
	Thu, 27 Feb 2014 18:14:38 +0000
Message-ID: <530F808E.1000607@citrix.com>
Date: Thu, 27 Feb 2014 18:14:38 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Tim Deegan <tim@xen.org>
References: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
	<1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
	<20140227162159.GG53925@deinos.phlegethon.org>
	<530F7ACD.6020007@citrix.com>
In-Reply-To: <530F7ACD.6020007@citrix.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: Keir Fraser <keir@xen.org>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH 2/2] [RFC] xen/console: Provide timestamps
 as an offset since boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 27/02/14 17:50, Andrew Cooper wrote:
> On 27/02/14 16:21, Tim Deegan wrote:
>> At 14:03 +0000 on 27 Feb (1393506187), Andrew Cooper wrote:
>>> Some latent bugs are emphasised by these changes.  There are steps in time
>>> when the TSC scale is calculated, and when the platform time is initialised ...
>>>
>>>     (XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
>>>     (XEN) [   27.553075] Detected 2793.232 MHz processor.
>>>     (XEN) [   27.558277] Initing memory sharing.
>>>     (XEN) [   27.562502] Cannot set CPU feature mask on CPU#0
>>>     (XEN) [   27.567852] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
>>>     (XEN) [   27.577687] Intel machine check reporting enabled
>>>     (XEN) [   27.583147] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
>>>     (XEN) [   27.591407] PCI: MCFG area at e0000000 reserved in E820
>>>     (XEN) [   27.597369] PCI: Using MCFG for segment 0000 bus 00-3f
>>>     (XEN) [   27.603238] I/O virtualisation disabled
>>>     (XEN) [   27.608093] ENABLING IO-APIC IRQs
>>>     (XEN) [   27.612136]  -> Using new ACK method
>>>     (XEN) [   27.616601] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
>>>     (XEN) [    0.153431] Platform timer is 14.318MHz HPET
>> The patch below fixes the first step for me.  I haven't been able to
>> understand the exact mechanism of the second one yet.  With this patch
>> applied, of course, the second step is not visible -- which doesn't
>> mean it's gone away.
>>
>> Cheers,
>>
>> Tim.
>>
>> commit d986516ce297bbcf3181225105dbc67edfa7c37e
>> Author: Tim Deegan <tim@xen.org>
>> Date:   Thu Feb 27 16:17:02 2014 +0000
>>
>>     x86/time: Always count s_time from Xen boot.
>>     
>>     In the early-boot clock, before the calibration routines kick in,
>>     count time from Xen boot rather than from when the BSP's TSC was 0.
>>     
>>     Signed-off-by: Tim Deegan <tim@xen.org>
> This does indeed fix the first step, although I will continue debugging
> the second without this present.
>
> However, as a time from boot, it would make more sense for the beginning
> of __start_xen() (or even in the early asm code) to stash the TSC value
> sideways.  I will see how easy this is to do.
>
> ~Andrew

And indeed, stashing the TSC in head.S results in:

(XEN) [    0.000000] IRQ limits: 24 GSI, 376 MSI/MSI-X
(XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
(XEN) [    1.013174] Detected 2793.250 MHz processor.
(XEN) [    1.018377] Initing memory sharing.
(XEN) [    1.022601] Cannot set CPU feature mask on CPU#0
(XEN) [    1.027950] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
(XEN) [    1.037785] Intel machine check reporting enabled
(XEN) [    1.043245] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
(XEN) [    1.051509] PCI: MCFG area at e0000000 reserved in E820
(XEN) [    1.057471] PCI: Using MCFG for segment 0000 bus 00-3f
(XEN) [    1.063340] I/O virtualisation disabled
(XEN) [    1.068198] ENABLING IO-APIC IRQs
(XEN) [    1.072241]  -> Using new ACK method
(XEN) [    1.076706] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
(XEN) [    0.470790] Platform timer is 14.318MHz HPET
(XEN) [    0.475474] Allocated console ring of 16 KiB.

Which is a rather more accurate time-since-boot, until the platform
timer starts up and steps time backwards.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 18:15:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:15:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ5UN-0005BN-Eu; Thu, 27 Feb 2014 18:15:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <olaf@aepfle.de>) id 1WJ5UM-0005BC-BK
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 18:15:18 +0000
Received: from [193.109.254.147:58097] by server-7.bemta-14.messagelabs.com id
	BD/6F-23424-5B08F035; Thu, 27 Feb 2014 18:15:17 +0000
X-Env-Sender: olaf@aepfle.de
X-Msg-Ref: server-9.tower-27.messagelabs.com!1393524916!3331553!1
X-Originating-IP: [81.169.146.220]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
	RCVD_BY_IP,UNPARSEABLE_RELAY
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22138 invoked from network); 27 Feb 2014 18:15:16 -0000
Received: from mo4-p00-ob.smtp.rzone.de (HELO mo4-p00-ob.smtp.rzone.de)
	(81.169.146.220)
	by server-9.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 18:15:16 -0000
X-RZG-AUTH: :P2EQZWCpfu+qG7CngxMFH1J+yackYocTD1iAi8x+OWi/zfN1cLnBYfssUYpSR8eljMl97v4biHuJWrcUJ26/r+cFwJwiSQ==
X-RZG-CLASS-ID: mo00
Received: from probook.fritz.box ([2001:a60:10e4:1501:1ec1:deff:fe91:f51c])
	by smtp.strato.de (RZmta 32.27 AUTH) with ESMTPSA id Z035d9q1RIFGdgy
	(using TLSv1.2 with cipher DHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate) for <xen-devel@lists.xen.org>;
	Thu, 27 Feb 2014 19:15:16 +0100 (CET)
Received: by probook.fritz.box (Postfix, from userid 1000)
	id 9B4455026B; Thu, 27 Feb 2014 19:15:15 +0100 (CET)
Resent-From: Olaf Hering <olaf@aepfle.de>
Resent-Date: Thu, 27 Feb 2014 19:15:15 +0100
Resent-Message-ID: <20140227181515.GA11600@aepfle.de>
Resent-To: xen-devel@lists.xen.org
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <glikely@secretlab.ca>) id 1WJ4rJ-0003xP-Q0
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 17:34:58 +0000
Received: from [85.158.139.211:6084] by server-8.bemta-5.messagelabs.com id
	47/8E-05298-0477F035; Thu, 27 Feb 2014 17:34:56 +0000
X-Env-Sender: glikely@secretlab.ca
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393522494!6713712!1
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27200 invoked from network); 27 Feb 2014 17:34:56 -0000
Received: from mail-ig0-f176.google.com (HELO mail-ig0-f176.google.com)
	(209.85.213.176)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 17:34:56 -0000
Received: by mail-ig0-f176.google.com with SMTP id uy17so5327503igb.3
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 09:34:54 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:sender:in-reply-to:references:from
	:date:message-id:subject:to:cc:content-type;
	bh=jFSPDT8LYwM4H0x8JnKs6XnTVEwtQ/RnXGqfeHAP/dw=;
	b=InrdJj3HqUY1AByxUovsLJR+ZhulkuWaBjkICc3i+xbtd4HoBMhhs8ZZIPiaGNTewb
	Vd+kZTAGZL3KPPfBIJrILQ2lxSyrZu1sbsvnNRe1atZAv6wpG6+vL7F0et5wrobBLuom
	PUmPWiI97bOPoG7aNMwrzt/abmztDBmx7wWNzAJRWcsiTDQJjBZODlPoEWkDFP5UIgGc
	ngwhODBaj/lL+qVQ3DBJ2iV/br5oYABxjd0bQbdK1qAqVcc/S///cypxLTuTJK3JLW9F
	xkXfNAMxhdh/pwUFhApTs8QueU1iK01ww8cjc/0ZZwba0PsFlmGHKjR7jIFeRsHHrJmI
	/slQ==
X-Gm-Message-State: ALoCoQl1XJzuMEasUX0EBfct8xGNCsxMA9kVMLC3/uWlbBCUp6QVOqLJFBgNCPSw9P94zwNLyUQo
X-Received: by 10.50.253.228 with SMTP id ad4mr7757421igd.39.1393522494520;
	Thu, 27 Feb 2014 09:34:54 -0800 (PST)
MIME-Version: 1.0
Received: by 10.64.81.79 with HTTP; Thu, 27 Feb 2014 09:34:33 -0800 (PST)
In-Reply-To: <20140226151536.58154704@anubis.ausil.us>
References: <20140226183454.GA14639@cbox>
	<20140226134251.0436294e@anubis.ausil.us>
	<CAMJs5B9bCs8Oz2Zg4UK--A3H4AaZRPMwy7SpxYom-1--_=qhBQ@mail.gmail.com>
	<20140226151536.58154704@anubis.ausil.us>
From: Grant Likely <grant.likely@secretlab.ca>
Date: Thu, 27 Feb 2014 17:34:33 +0000
X-Google-Sender-Auth: HmbTYMEx8n4-zsGpNh-veJwfflc
Message-ID: <CACxGe6sQd7VFQtjoNsAtYB=-87ZTOZ73B_s7QFcggSdm39U-mA@mail.gmail.com>
To: Dennis Gilmore <dennis@gilmore.net.au>
Cc: cross-distro <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Rob Herring <rob.herring@linaro.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Leif Lindholm <leif.lindholm@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Robie Basak <robie.basak@canonical.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Christoffer Dall <christoffer.dall@linaro.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 9:15 PM, Dennis Gilmore <dennis@gilmore.net.au> wrote:
> On Wed, 26 Feb 2014 11:56:53 -0800
> Christoffer Dall <christoffer.dall@linaro.org> wrote:
>
>> [why did you drop everyone from cc here?]
>
> standard reply to list behavior, I would appreciate if you followed it.

Not on the Linaro, infradead or vger lists. We preserve cc's here, always have.

>> On 26 February 2014 11:42, Dennis Gilmore <dennis@gilmore.net.au>
>> wrote:
>> > On Wed, 26 Feb 2014 10:34:54 -0800
>> > Christoffer Dall <christoffer.dall@linaro.org> wrote:
>> Also, I'm afraid "u-boot and look like an existing 32 bit system" is
>> not much of a spec. How does a distro vendor ship an image based on
>> that description that they can be sure will boot?
>
> Based on the work I have been doing to make a standard boot
> environment, pass in the u-boot binary and things will work by loading
> config from inside the image and acting just like any system. Really,
> UEFI is major overkill here and a massive divergence from the real
> world. What is the argument that justifies the divergence?

That's what I used to say all the time until I actually looked at it.
It isn't the horrid monster that many of us feared it would be. There
is a fully open-source implementation hosted on SourceForge, which is
what I would expect most VM vendors to use directly. It isn't
unreasonably large, and it implements sane behaviour.

Remember, we are talking about what is needed to make a portable VM
ecosystem. The folks working on the UEFI spec have spent a lot of time
thinking about how to choose what image to boot from a disk and the
spec is well defined in this regard. That aspect has not been U-Boot's
focus and U-Boot isn't anywhere near as mature as UEFI in that regard
(nor would I expect it to be; embedded has never had the same
incentive to create portable boot images as general purpose machines).

Also, specifying UEFI for this spec does not in any way prevent
someone from running U-Boot in their VM, or executing the kernel
directly. This spec is about a platform for portable images, and it is
important to specify things like firmware interfaces as tightly as
possible, without a whole lot of options. Other use-cases can freely
disregard the spec and run whatever they want.

>> Personally I think keeping things uniform across both 32-bit and
>> 64-bit is better, and the GPT/EFI image is a modern standard that
>> should work well.
>
> It means that installers will need special code paths to support being
> installed into virt guests, which is not sustainable or supportable,
> as hardware won't work the same way.

Installers already have the EFI code paths; the kernel patches for
both 32- and 64-bit ARM are in flight and will be merged soon. The
grub patches are done and merged. Installers will work exactly the
same way on real hardware with EFI and on VMs with EFI. It will also
work exactly the same way across x86, ARM and ARM64. What part is
unsustainable?

g.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 18:15:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ5Us-0005Fl-0J; Thu, 27 Feb 2014 18:15:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJ5Uq-0005FT-18
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 18:15:48 +0000
Received: from [85.158.137.68:54854] by server-14.bemta-3.messagelabs.com id
	D8/64-08196-3D08F035; Thu, 27 Feb 2014 18:15:47 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393524944!4664315!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21703 invoked from network); 27 Feb 2014 18:15:45 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 18:15:45 -0000
X-IronPort-AV: E=Sophos;i="4.97,556,1389744000"; d="scan'208";a="106370370"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 27 Feb 2014 18:15:44 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 13:15:43 -0500
Received: from gateway-cbg.eng.citrite.net ([10.80.16.17]
	helo=FTLPEX01CL01.citrite.net)	by ukmail1.uk.xensource.com with esmtp
	(Exim
	4.69)	(envelope-from <roger.pau@citrix.com>)	id 1WJ5Ul-0006TX-9O;
	Thu, 27 Feb 2014 18:15:43 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xen.org>, <linux-kernel@vger.kernel.org>
Date: Thu, 27 Feb 2014 19:15:35 +0100
Message-ID: <1393524935-4216-1-git-send-email-roger.pau@citrix.com>
X-Mailer: git-send-email 1.7.7.5 (Apple Git-26)
In-Reply-To: <530F68BE.2070505@oracle.com>
References: <530F68BE.2070505@oracle.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Add support for MSI message groups for Xen Dom0 using the
MAP_PIRQ_TYPE_MULTI_MSI pirq map type.

In order to keep track of which pirq is the first one in the group all
pirqs in the MSI group except for the first one have the newly
introduced PIRQ_MSI_GROUP flag set. This prevents calling
PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
first pirq in the group.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Tested with an Intel ICH8 AHCI SATA controller.
---
Changes since v1:
 - Add comments reflecting the new usage of entry_nr in
   physdev_map_pirq.
---
 arch/x86/pci/xen.c                   |   29 ++++++++++++++------
 drivers/xen/events/events_base.c     |   47 +++++++++++++++++++++++-----------
 drivers/xen/events/events_internal.h |    1 +
 include/xen/events.h                 |    2 +-
 include/xen/interface/physdev.h      |   10 ++++++-
 5 files changed, 62 insertions(+), 27 deletions(-)

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 103e702..905956f 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 	i = 0;
 	list_for_each_entry(msidesc, &dev->msi_list, list) {
 		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
+					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
 					       (type == PCI_CAP_ID_MSIX) ?
 					       "pcifront-msi-x" :
 					       "pcifront-msi",
@@ -245,6 +246,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 				"xen: msi already bound to pirq=%d\n", pirq);
 		}
 		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
+					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
 					       (type == PCI_CAP_ID_MSIX) ?
 					       "msi-x" : "msi",
 					       DOMID_SELF);
@@ -269,9 +271,6 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 	int ret = 0;
 	struct msi_desc *msidesc;
 
-	if (type == PCI_CAP_ID_MSI && nvec > 1)
-		return 1;
-
 	list_for_each_entry(msidesc, &dev->msi_list, list) {
 		struct physdev_map_pirq map_irq;
 		domid_t domid;
@@ -291,7 +290,10 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 			      (pci_domain_nr(dev->bus) << 16);
 		map_irq.devfn = dev->devfn;
 
-		if (type == PCI_CAP_ID_MSIX) {
+		if (type == PCI_CAP_ID_MSI && nvec > 1) {
+			map_irq.type = MAP_PIRQ_TYPE_MULTI_MSI;
+			map_irq.entry_nr = nvec;
+		} else if (type == PCI_CAP_ID_MSIX) {
 			int pos;
 			u32 table_offset, bir;
 
@@ -308,6 +310,16 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 		if (pci_seg_supported)
 			ret = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq,
 						    &map_irq);
+		if (type == PCI_CAP_ID_MSI && nvec > 1 && ret) {
+			/*
+			 * If MAP_PIRQ_TYPE_MULTI_MSI is not available
+			 * there's nothing else we can do in this case.
+			 * Just set ret > 0 so driver can retry with
+			 * single MSI.
+			 */
+			ret = 1;
+			goto out;
+		}
 		if (ret == -EINVAL && !pci_domain_nr(dev->bus)) {
 			map_irq.type = MAP_PIRQ_TYPE_MSI;
 			map_irq.index = -1;
@@ -324,11 +336,10 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 			goto out;
 		}
 
-		ret = xen_bind_pirq_msi_to_irq(dev, msidesc,
-					       map_irq.pirq,
-					       (type == PCI_CAP_ID_MSIX) ?
-					       "msi-x" : "msi",
-						domid);
+		ret = xen_bind_pirq_msi_to_irq(dev, msidesc, map_irq.pirq,
+		                               (type == PCI_CAP_ID_MSI) ? nvec : 1,
+		                               (type == PCI_CAP_ID_MSIX) ? "msi-x" : "msi",
+		                               domid);
 		if (ret < 0)
 			goto out;
 	}
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index f4a9e33..ff20ae2 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -391,10 +391,10 @@ static void xen_irq_init(unsigned irq)
 	list_add_tail(&info->list, &xen_irq_list_head);
 }
 
-static int __must_check xen_allocate_irq_dynamic(void)
+static int __must_check xen_allocate_irqs_dynamic(int nvec)
 {
 	int first = 0;
-	int irq;
+	int i, irq;
 
 #ifdef CONFIG_X86_IO_APIC
 	/*
@@ -408,14 +408,22 @@ static int __must_check xen_allocate_irq_dynamic(void)
 		first = get_nr_irqs_gsi();
 #endif
 
-	irq = irq_alloc_desc_from(first, -1);
+	irq = irq_alloc_descs_from(first, nvec, -1);
 
-	if (irq >= 0)
-		xen_irq_init(irq);
+	if (irq >= 0) {
+		for (i = 0; i < nvec; i++)
+			xen_irq_init(irq + i);
+	}
 
 	return irq;
 }
 
+static inline int __must_check xen_allocate_irq_dynamic(void)
+{
+
+	return xen_allocate_irqs_dynamic(1);
+}
+
 static int __must_check xen_allocate_irq_gsi(unsigned gsi)
 {
 	int irq;
@@ -741,22 +749,25 @@ int xen_allocate_pirq_msi(struct pci_dev *dev, struct msi_desc *msidesc)
 }
 
 int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc,
-			     int pirq, const char *name, domid_t domid)
+			     int pirq, int nvec, const char *name, domid_t domid)
 {
-	int irq, ret;
+	int i, irq, ret;
 
 	mutex_lock(&irq_mapping_update_lock);
 
-	irq = xen_allocate_irq_dynamic();
+	irq = xen_allocate_irqs_dynamic(nvec);
 	if (irq < 0)
 		goto out;
 
-	irq_set_chip_and_handler_name(irq, &xen_pirq_chip, handle_edge_irq,
-			name);
+	for (i = 0; i < nvec; i++) {
+		irq_set_chip_and_handler_name(irq + i, &xen_pirq_chip, handle_edge_irq, name);
+
+		ret = xen_irq_info_pirq_setup(irq + i, 0, pirq + i, 0, domid,
+					      i == 0 ? 0 : PIRQ_MSI_GROUP);
+		if (ret < 0)
+			goto error_irq;
+	}
 
-	ret = xen_irq_info_pirq_setup(irq, 0, pirq, 0, domid, 0);
-	if (ret < 0)
-		goto error_irq;
 	ret = irq_set_msi_desc(irq, msidesc);
 	if (ret < 0)
 		goto error_irq;
@@ -764,7 +775,8 @@ out:
 	mutex_unlock(&irq_mapping_update_lock);
 	return irq;
 error_irq:
-	__unbind_from_irq(irq);
+	for (; i >= 0; i--)
+		__unbind_from_irq(irq + i);
 	mutex_unlock(&irq_mapping_update_lock);
 	return ret;
 }
@@ -783,7 +795,12 @@ int xen_destroy_irq(int irq)
 	if (!desc)
 		goto out;
 
-	if (xen_initial_domain()) {
+	/*
+	 * If trying to remove a vector in a MSI group different
+	 * than the first one skip the PIRQ unmap unless this vector
+	 * is the first one in the group.
+	 */
+	if (xen_initial_domain() && !(info->u.pirq.flags & PIRQ_MSI_GROUP)) {
 		unmap_irq.pirq = info->u.pirq.pirq;
 		unmap_irq.domid = info->u.pirq.domid;
 		rc = HYPERVISOR_physdev_op(PHYSDEVOP_unmap_pirq, &unmap_irq);
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 677f41a..50c2050a 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -53,6 +53,7 @@ struct irq_info {
 
 #define PIRQ_NEEDS_EOI	(1 << 0)
 #define PIRQ_SHAREABLE	(1 << 1)
+#define PIRQ_MSI_GROUP	(1 << 2)
 
 struct evtchn_ops {
 	unsigned (*max_channels)(void);
diff --git a/include/xen/events.h b/include/xen/events.h
index c9c85cf..2ae7e03 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -102,7 +102,7 @@ int xen_bind_pirq_gsi_to_irq(unsigned gsi,
 int xen_allocate_pirq_msi(struct pci_dev *dev, struct msi_desc *msidesc);
 /* Bind an PSI pirq to an irq. */
 int xen_bind_pirq_msi_to_irq(struct pci_dev *dev, struct msi_desc *msidesc,
-			     int pirq, const char *name, domid_t domid);
+			     int pirq, int nvec, const char *name, domid_t domid);
 #endif
 
 /* De-allocates the above mentioned physical interrupt. */
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index 42721d1..610dba9 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -131,6 +131,7 @@ struct physdev_irq {
 #define MAP_PIRQ_TYPE_GSI		0x1
 #define MAP_PIRQ_TYPE_UNKNOWN		0x2
 #define MAP_PIRQ_TYPE_MSI_SEG		0x3
+#define MAP_PIRQ_TYPE_MULTI_MSI		0x4
 
 #define PHYSDEVOP_map_pirq		13
 struct physdev_map_pirq {
@@ -141,11 +142,16 @@ struct physdev_map_pirq {
     int index;
     /* IN or OUT */
     int pirq;
-    /* IN - high 16 bits hold segment for MAP_PIRQ_TYPE_MSI_SEG */
+    /* IN - high 16 bits hold segment for ..._MSI_SEG and ..._MULTI_MSI */
     int bus;
     /* IN */
     int devfn;
-    /* IN */
+    /* IN
+     * - For MSI-X contains entry number.
+     * - For MSI with ..._MULTI_MSI contains number of vectors.
+     * OUT (..._MULTI_MSI only)
+     * - Number of vectors allocated.
+     */
     int entry_nr;
     /* IN */
     uint64_t table_base;
-- 
1.7.7.5 (Apple Git-26)


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 18:25:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ5e3-0005li-0S; Thu, 27 Feb 2014 18:25:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ5e2-0005ld-3L
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 18:25:18 +0000
Received: from [193.109.254.147:40206] by server-16.bemta-14.messagelabs.com
	id BF/47-21945-D038F035; Thu, 27 Feb 2014 18:25:17 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393525515!7297019!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1264 invoked from network); 27 Feb 2014 18:25:16 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 18:25:16 -0000
Received: from g9t2301.houston.hp.com (g9t2301.houston.hp.com [16.216.185.78])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id 8D0102BF;
	Thu, 27 Feb 2014 18:25:12 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g9t2301.houston.hp.com (Postfix) with ESMTP id 2E62566;
	Thu, 27 Feb 2014 18:25:02 +0000 (UTC)
Message-ID: <530F82FD.1060000@hp.com>
Date: Thu, 27 Feb 2014 13:25:01 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393475567-4453-1-git-send-email-Waiman.Long@hp.com>
	<20140227083715.GY9987@twins.programming.kicks-ass.net>
In-Reply-To: <20140227083715.GY9987@twins.programming.kicks-ass.net>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>, kvm@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Thomas Gleixner <tglx@linutronix.de>, linux-arch@vger.kernel.org,
	Gleb Natapov <gleb@redhat.com>, x86@kernel.org,
	Ingo Molnar <mingo@redhat.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Arnd Bergmann <arnd@arndb.de>, Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	Oleg Nesterov <oleg@redhat.com>, Alok Kataria <akataria@vmware.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>,
	Chegu Vinod <chegu_vinod@hp.com>, linux-kernel@vger.kernel.org,
	David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>
Subject: Re: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock
	with PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 03:37 AM, Peter Zijlstra wrote:
>
> Is this the same 8 patches you send yesterday?

Sorry for the duplication. It was the same patch. It has some minor 
updates in the cover letter to include some KVM guest test results. I was 
having problems locating the patches on the LKML list and so I thought 
they had failed to be sent out properly.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 03:37 AM, Peter Zijlstra wrote:
>
> Is this the same 8 patches you sent yesterday?

Sorry for the duplication. It was the same patch series, with some minor 
updates to the cover letter to include some KVM guest test results. I was 
having problems locating the patches on the LKML list, so I thought 
they had failed to be sent out properly.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 18:36:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ5oc-0006HI-Cn; Thu, 27 Feb 2014 18:36:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ5oa-0006H9-U3
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 18:36:13 +0000
Received: from [85.158.139.211:38842] by server-15.bemta-5.messagelabs.com id
	13/06-24395-C958F035; Thu, 27 Feb 2014 18:36:12 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-7.tower-206.messagelabs.com!1393526170!6762876!1
X-Originating-IP: [15.201.208.55]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ, UPPERCASE_25_50
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32205 invoked from network); 27 Feb 2014 18:36:11 -0000
Received: from g4t3427.houston.hp.com (HELO g4t3427.houston.hp.com)
	(15.201.208.55)
	by server-7.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 18:36:11 -0000
Received: from g4t3433.houston.hp.com (g4t3433.houston.hp.com [16.210.25.219])
	by g4t3427.houston.hp.com (Postfix) with ESMTP id DD1E12A4;
	Thu, 27 Feb 2014 18:36:08 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g4t3433.houston.hp.com (Postfix) with ESMTP id 6C10D6F;
	Thu, 27 Feb 2014 18:36:02 +0000 (UTC)
Message-ID: <530F8590.60207@hp.com>
Date: Thu, 27 Feb 2014 13:36:00 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Paolo Bonzini <pbonzini@redhat.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-9-git-send-email-Waiman.Long@hp.com>
	<530F0607.1020705@redhat.com>
In-Reply-To: <530F0607.1020705@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 8/8] pvqspinlock,
 x86: Enable KVM to use qspinlock's PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 04:31 AM, Paolo Bonzini wrote:
>   static __init int kvm_spinlock_init_jump(void)
> diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
> index f185584..a70fdeb 100644
> --- a/kernel/Kconfig.locks
> +++ b/kernel/Kconfig.locks
> @@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
>
>   config QUEUE_SPINLOCK
>   	def_bool y if ARCH_USE_QUEUE_SPINLOCK
> -	depends on SMP && !PARAVIRT_SPINLOCKS
> +	depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
>
> Should this rather be
>
>      def_bool y if ARCH_USE_QUEUE_SPINLOCK && (!PARAVIRT_SPINLOCKS || !XEN)
>
> ?
>
> PARAVIRT_SPINLOCKS + XEN + QUEUE_SPINLOCK + PARAVIRT_UNFAIR_LOCKS is a
> valid combination, but it's impossible to choose PARAVIRT_UNFAIR_LOCKS
> if QUEUE_SPINLOCK is unavailable.
>
> Paolo

The PV ticketlock code assumes the presence of the ticket spinlock, so it 
will cause a compilation error if QUEUE_SPINLOCK is enabled. My patch 
7/8 modified the KVM code so that the PV ticketlock code can coexist 
with the queue spinlock. As I haven't figured out the proper way to modify 
the Xen code, I need to disable the queue spinlock code if 
PARAVIRT_SPINLOCKS and XEN are both enabled. However, by disabling 
PARAVIRT_SPINLOCKS, we can still use PARAVIRT_UNFAIR_LOCKS with XEN.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 18:44:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ5vt-0006aQ-FF; Thu, 27 Feb 2014 18:43:45 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJ5vs-0006aH-EY
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 18:43:44 +0000
Received: from [85.158.139.211:39466] by server-4.bemta-5.messagelabs.com id
	2B/CE-08092-F578F035; Thu, 27 Feb 2014 18:43:43 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393526621!6707392!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22268 invoked from network); 27 Feb 2014 18:43:43 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-9.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 18:43:43 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1RIhct5015875
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Thu, 27 Feb 2014 18:43:39 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1RIhboP016468
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Thu, 27 Feb 2014 18:43:38 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1RIhbsO016435; Thu, 27 Feb 2014 18:43:37 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 10:43:37 -0800
Message-ID: <530F87CC.8090000@oracle.com>
Date: Thu, 27 Feb 2014 13:45:32 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
In-Reply-To: <1393524935-4216-1-git-send-email-roger.pau@citrix.com>
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
> Add support for MSI message groups for Xen Dom0 using the
> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>
> In order to keep track of which pirq is the first one in the group all
> pirqs in the MSI group except for the first one have the newly
> introduced PIRQ_MSI_GROUP flag set. This prevents calling
> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
> first pirq in the group.
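
The flag scheme described above can be illustrated with a short standalone
sketch. Only PIRQ_MSI_GROUP and the rule "unmap via the first pirq only" come
from the patch description; the struct layout, the flag's bit value, and the
helper name are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical bit value; set on every pirq in an MSI group except the
 * first one, per the commit message quoted above. */
#define PIRQ_MSI_GROUP 0x1

/* Minimal stand-in for the per-pirq bookkeeping (layout is an assumption). */
struct pirq {
    int pirq;
    unsigned int flags;
};

/* PHYSDEVOP_unmap_pirq must only be issued for the first pirq in the
 * group, i.e. the one without PIRQ_MSI_GROUP set. */
static bool pirq_needs_unmap(const struct pirq *p)
{
    return !(p->flags & PIRQ_MSI_GROUP);
}
```

With this predicate, a teardown loop over a group unmaps exactly once, through
the group's first pirq, and skips the rest.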

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 18:58:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 18:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ69d-0006yy-AJ; Thu, 27 Feb 2014 18:57:57 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJ69b-0006yt-UR
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 18:57:56 +0000
Received: from [85.158.143.35:7099] by server-2.bemta-4.messagelabs.com id
	52/A9-04779-3BA8F035; Thu, 27 Feb 2014 18:57:55 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-13.tower-21.messagelabs.com!1393527473!8844027!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19857 invoked from network); 27 Feb 2014 18:57:54 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-13.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 18:57:54 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 27 Feb 2014 18:57:40 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,556,1389744000"; d="scan'208";a="662327684"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi02.verizon.com with ESMTP; 27 Feb 2014 18:57:39 +0000
Message-ID: <530F8AA3.5050906@terremark.com>
Date: Thu, 27 Feb 2014 13:57:39 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Andrew Cooper <andrew.cooper3@citrix.com>, 
	Xen-devel <xen-devel@lists.xen.org>
References: <1393509787-13497-1-git-send-email-andrew.cooper3@citrix.com>
	<1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393509787-13497-2-git-send-email-andrew.cooper3@citrix.com>
Cc: Keir Fraser <keir@xen.org>, Tim Deegan <tim@xen.org>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [PATCH 2/2] [RFC] xen/console: Provide timestamps
 as an offset since boot.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/14 09:03, Andrew Cooper wrote:
> ** This is RFC, and not intended to be applied in its current state **
>
> There exists a "console_timestamps" command line option which causes full
> date/time stamps to be printed, e.g.
>
>      (XEN) ENABLING IO-APIC IRQs
>      (XEN)  -> Using old ACK method
>      (XEN) ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
>      (XEN) TSC deadline timer enabled
>      (XEN) [2014-02-27 12:29:27] Platform timer is 14.318MHz HPET
>      (XEN) [2014-02-27 12:29:27] Allocated console ring of 64 KiB.
>      (XEN) [2014-02-27 12:29:27] mwait-idle: MWAIT substates: 0x21120
>
> However, this only has single-second granularity.  This patch replaces the
> string printed with one which matches Linux kernel timestamps, in seconds and
> milliseconds since boot.
>
> The result looks like:
>
>      (XEN) [    0.158968] VMX: Supported advanced features:
>      (XEN) [    0.159369]  - APIC TPR shadow
>      (XEN) [    0.159771] HVM: ASIDs disabled.
>      (XEN) [    0.160203] HVM: VMX enabled
>      (XEN) [    0.160599] HVM: Hardware Assisted Paging (HAP) not detected
>
> Although it looks rather worse interleaved with kernel timestamps:
>
>      [    0.300276] pci 0000:00:1c.0: System wakeup disabled by ACPI
>      (XEN) [    3.286620] PCI add device 0000:00:1c.0
>      [    0.301169] pci 0000:00:1c.4: System wakeup disabled by ACPI
>      (XEN) [    3.287508] PCI add device 0000:00:1c.4
>      [    0.302078] pci 0000:00:1c.5: System wakeup disabled by ACPI
>      (XEN) [    3.288420] PCI add device 0000:00:1c.5
>      [    0.302899] pci 0000:00:1d.0: System wakeup disabled by ACPI
>      (XEN) [    3.289229] PCI add device 0000:00:1d.0
>
> Some latent bugs are emphasised by these changes.  There are steps in time
> when the TSC scale is calculated, and when the platform time is initialised ...
>
>      (XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
>      (XEN) [   27.553075] Detected 2793.232 MHz processor.
>      (XEN) [   27.558277] Initing memory sharing.
>      (XEN) [   27.562502] Cannot set CPU feature mask on CPU#0
>      (XEN) [   27.567852] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
>      (XEN) [   27.577687] Intel machine check reporting enabled
>      (XEN) [   27.583147] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
>      (XEN) [   27.591407] PCI: MCFG area at e0000000 reserved in E820
>      (XEN) [   27.597369] PCI: Using MCFG for segment 0000 bus 00-3f
>      (XEN) [   27.603238] I/O virtualisation disabled
>      (XEN) [   27.608093] ENABLING IO-APIC IRQs
>      (XEN) [   27.612136]  -> Using new ACK method
>      (XEN) [   27.616601] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
>      (XEN) [    0.153431] Platform timer is 14.318MHz HPET
>
> ... and the synchronisation across CPUs needs to be earlier during AP bringup.
>
>      (XEN) [    0.161004] HVM: PVH mode not supported on this platform
>      (XEN) [    0.000000] Cannot set CPU feature mask on CPU#1
>      (XEN) [    0.182299] Brought up 2 CPUs
>
> Is it likely that people would want to still have the option for the full
> date/timestamps?  If so, that code will have to be kept.

I would like to be able to select the old way.

> Comments/suggestions welcome, especially regarding the interleaving of Xen and
> dom0 timestamps.

Another option would be to just add the milliseconds to the current output.  This might help with the interleaving.

    -Don Slutz

> ~Andrew
> ---
>   xen/drivers/char/console.c |   12 +++++-------
>   1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 532c426..e2d9521 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -548,21 +548,19 @@ static int printk_prefix_check(char *p, char **pp)
>   
>   static void printk_start_of_line(const char *prefix)
>   {
> -    struct tm tm;
>       char tstr[32];
> +    uint64_t sec, nsec;
>   
>       __putstr(prefix);
>   
>       if ( !opt_console_timestamps )
>           return;
>   
> -    tm = wallclock_time();
> -    if ( tm.tm_mday == 0 )
> -        return;
> +    sec = NOW();
> +    nsec = do_div(sec, 1000000000);
>   
> -    snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
> -             1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
> -             tm.tm_hour, tm.tm_min, tm.tm_sec);
> +    snprintf(tstr, sizeof(tstr), "[%5"PRIu64".%06"PRIu64"] ",
> +             sec, nsec/1000);
>       __putstr(tstr);
>   }
>   
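
For readers unfamiliar with do_div(), the conversion in the patch above
amounts to splitting a nanosecond counter into whole seconds plus a
microsecond remainder. A minimal userspace equivalent is sketched below;
plain 64-bit division/modulo replaces Xen's do_div(), and the helper names
are mine, not the patch's:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Split a nanosecond timestamp into whole seconds and microseconds,
 * mirroring the sec/nsec computation in the quoted patch. */
static void split_ns(uint64_t ns, uint64_t *sec, uint64_t *usec)
{
    *sec = ns / 1000000000ULL;           /* do_div() quotient            */
    *usec = (ns % 1000000000ULL) / 1000; /* remainder, truncated to usec */
}

/* Render the same "[    3.286620] " prefix format used in the patch. */
static void format_ts(char *buf, size_t len, uint64_t ns)
{
    uint64_t sec, usec;

    split_ns(ns, &sec, &usec);
    snprintf(buf, len, "[%5" PRIu64 ".%06" PRIu64 "] ", sec, usec);
}
```

The "%5"/"%06" width specifiers are what produce the right-aligned seconds
and zero-padded microseconds seen in the example output earlier in the mail.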


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

>
> However, this only has single-second granularity.  This patch replaces the
> string printed with one which matches Linux kernel timestamps, in seconds and
> milliseconds since boot.
>
> The result looks like:
>
>      (XEN) [    0.158968] VMX: Supported advanced features:
>      (XEN) [    0.159369]  - APIC TPR shadow
>      (XEN) [    0.159771] HVM: ASIDs disabled.
>      (XEN) [    0.160203] HVM: VMX enabled
>      (XEN) [    0.160599] HVM: Hardware Assisted Paging (HAP) not detected
>
> Although it looks rather worse interleaved with kernel timestamps:
>
>      [    0.300276] pci 0000:00:1c.0: System wakeup disabled by ACPI
>      (XEN) [    3.286620] PCI add device 0000:00:1c.0
>      [    0.301169] pci 0000:00:1c.4: System wakeup disabled by ACPI
>      (XEN) [    3.287508] PCI add device 0000:00:1c.4
>      [    0.302078] pci 0000:00:1c.5: System wakeup disabled by ACPI
>      (XEN) [    3.288420] PCI add device 0000:00:1c.5
>      [    0.302899] pci 0000:00:1d.0: System wakeup disabled by ACPI
>      (XEN) [    3.289229] PCI add device 0000:00:1d.0
>
> Some latent bugs are emphasised by these changes.  There are steps in time
> when the TSC scale is calculated, and when the platform time is initialised ...
>
>      (XEN) [    0.000000] Using scheduler: SMP Credit Scheduler (credit)
>      (XEN) [   27.553075] Detected 2793.232 MHz processor.
>      (XEN) [   27.558277] Initing memory sharing.
>      (XEN) [   27.562502] Cannot set CPU feature mask on CPU#0
>      (XEN) [   27.567852] mce_intel.c:717: MCA Capability: BCAST 0 SER 0 CMCI 0 firstbank 0 extended MCE MSR 18
>      (XEN) [   27.577687] Intel machine check reporting enabled
>      (XEN) [   27.583147] PCI: MCFG configuration 0: base e0000000 segment 0000 buses 00 - 3f
>      (XEN) [   27.591407] PCI: MCFG area at e0000000 reserved in E820
>      (XEN) [   27.597369] PCI: Using MCFG for segment 0000 bus 00-3f
>      (XEN) [   27.603238] I/O virtualisation disabled
>      (XEN) [   27.608093] ENABLING IO-APIC IRQs
>      (XEN) [   27.612136]  -> Using new ACK method
>      (XEN) [   27.616601] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
>      (XEN) [    0.153431] Platform timer is 14.318MHz HPET
>
> ... and the synchronisation across CPUs needs to be earlier during AP bringup.
>
>      (XEN) [    0.161004] HVM: PVH mode not supported on this platform
>      (XEN) [    0.000000] Cannot set CPU feature mask on CPU#1
>      (XEN) [    0.182299] Brought up 2 CPUs
>
> Is it likely that people would want to still have the option for the full
> date/timestamps?  If so, that code will have to be kept.

I would like to be able to select the old way.

> Comments/suggestions welcome, especially regarding the interleaving of Xen and
> dom0 timestamps.

Another option would be to add the milliseconds to the current
date/time output.  This might help with the interleaving.

    -Don Slutz

> ~Andrew
> ---
>   xen/drivers/char/console.c |   12 +++++-------
>   1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 532c426..e2d9521 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -548,21 +548,19 @@ static int printk_prefix_check(char *p, char **pp)
>   
>   static void printk_start_of_line(const char *prefix)
>   {
> -    struct tm tm;
>       char tstr[32];
> +    uint64_t sec, nsec;
>   
>       __putstr(prefix);
>   
>       if ( !opt_console_timestamps )
>           return;
>   
> -    tm = wallclock_time();
> -    if ( tm.tm_mday == 0 )
> -        return;
> +    sec = NOW();
> +    nsec = do_div(sec, 1000000000);
>   
> -    snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
> -             1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
> -             tm.tm_hour, tm.tm_min, tm.tm_sec);
> +    snprintf(tstr, sizeof(tstr), "[%5"PRIu64".%06"PRIu64"] ",
> +             sec, nsec/1000);
>       __putstr(tstr);
>   }
>   


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 19:06:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:06:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ6HZ-0007KJ-FJ; Thu, 27 Feb 2014 19:06:09 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ6HX-0007K8-Oo
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 19:06:07 +0000
Received: from [85.158.139.211:21324] by server-15.bemta-5.messagelabs.com id
	E0/A1-24395-F9C8F035; Thu, 27 Feb 2014 19:06:07 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393527965!6722735!1
X-Originating-IP: [15.193.200.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26092 invoked from network); 27 Feb 2014 19:06:05 -0000
Received: from g6t1526.atlanta.hp.com (HELO g6t1526.atlanta.hp.com)
	(15.193.200.69)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 19:06:05 -0000
Received: from g5t1633.atlanta.hp.com (g5t1633.atlanta.hp.com [16.201.144.132])
	by g6t1526.atlanta.hp.com (Postfix) with ESMTP id 6F113C2;
	Thu, 27 Feb 2014 19:06:00 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g5t1633.atlanta.hp.com (Postfix) with ESMTP id C4ED06B;
	Thu, 27 Feb 2014 19:05:50 +0000 (UTC)
Message-ID: <530F8C8C.2030906@hp.com>
Date: Thu, 27 Feb 2014 14:05:48 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Paolo Bonzini <pbonzini@redhat.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
	<530F0832.50205@redhat.com>
In-Reply-To: <530F0832.50205@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
 x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 04:41 AM, Paolo Bonzini wrote:
> Il 26/02/2014 16:14, Waiman Long ha scritto:
>> This patch adds a KVM init function to activate the unfair queue
>> spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
>> option is selected.
>>
>> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
>> ---
>>  arch/x86/kernel/kvm.c |   17 +++++++++++++++++
>>  1 files changed, 17 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 713f1b3..a489140 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
>>  early_initcall(kvm_spinlock_init_jump);
>>
>>  #endif    /* CONFIG_PARAVIRT_SPINLOCKS */
>> +
>> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
>> +/*
>> + * Enable unfair lock if running in a real para-virtualized environment
>> + */
>> +static __init int kvm_unfair_locks_init_jump(void)
>> +{
>> +    if (!kvm_para_available())
>> +        return 0;
>> +
>> +    static_key_slow_inc(&paravirt_unfairlocks_enabled);
>> +    printk(KERN_INFO "KVM setup unfair spinlock\n");
>> +
>> +    return 0;
>> +}
>> +early_initcall(kvm_unfair_locks_init_jump);
>> +#endif
>>
>
> I think this should apply to all paravirt implementations, unless 
> pv_lock_ops.kick_cpu != NULL.
>
> Paolo

The unfair lock is currently implemented as an independent config option 
that can be turned on or off irrespective of the other PV settings. 
There is a concern about lock starvation when there is a large number of 
virtual CPUs, so one idea I have is to disable this feature when more 
than a certain number of virtual CPUs are available. I will investigate 
this idea when I have time.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 19:12:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ6Nq-0007ji-1N; Thu, 27 Feb 2014 19:12:38 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ6Np-0007jb-18
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 19:12:37 +0000
Received: from [85.158.139.211:26729] by server-5.bemta-5.messagelabs.com id
	D4/09-32749-42E8F035; Thu, 27 Feb 2014 19:12:36 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393528353!6723548!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19677 invoked from network); 27 Feb 2014 19:12:34 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-13.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 19:12:34 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id 9EADF1ED;
	Thu, 27 Feb 2014 19:12:32 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id ED98E49;
	Thu, 27 Feb 2014 19:12:25 +0000 (UTC)
Message-ID: <530F8E17.5060308@hp.com>
Date: Thu, 27 Feb 2014 14:12:23 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
	<530F1605.9040001@linux.vnet.ibm.com>
In-Reply-To: <530F1605.9040001@linux.vnet.ibm.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
 x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 05:40 AM, Raghavendra K T wrote:
> On 02/26/2014 08:44 PM, Waiman Long wrote:
>> This patch adds a KVM init function to activate the unfair queue
>> spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
>> option is selected.
>>
>> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
>> ---
>>   arch/x86/kernel/kvm.c |   17 +++++++++++++++++
>>   1 files changed, 17 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 713f1b3..a489140 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
>>   early_initcall(kvm_spinlock_init_jump);
>>
>>   #endif    /* CONFIG_PARAVIRT_SPINLOCKS */
>> +
>> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
>> +/*
>> + * Enable unfair lock if running in a real para-virtualized environment
>> + */
>> +static __init int kvm_unfair_locks_init_jump(void)
>> +{
>> +    if (!kvm_para_available())
>> +        return 0;
>> +
>
> kvm_kick_cpu_type() in patch 8 assumes that host has support for kick
> hypercall (KVM_HC_KICK_CPU).
>
> I think for that we need an explicit check of
> kvm_para_has_feature(KVM_FEATURE_PV_UNHALT).
>
> Otherwise things may break in the unlikely case of running a new guest
> on an old host?
>

The unfair lock is a separate config option that does not need to do any 
cpu kick. The check of kvm_para_available() is just to make sure that the 
kernel is running in a real PV environment, rather than on bare metal 
with CONFIG_PARAVIRT enabled.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 19:40:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ6oo-00005r-Ak; Thu, 27 Feb 2014 19:40:30 +0000
From xen-devel-bounces@lists.xen.org Thu Feb 27 19:40:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ6oo-00005r-Ak; Thu, 27 Feb 2014 19:40:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ6on-00005b-2B
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 19:40:29 +0000
Received: from [85.158.139.211:18105] by server-6.bemta-5.messagelabs.com id
	50/90-14342-AA49F035; Thu, 27 Feb 2014 19:40:26 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393530023!2205547!1
X-Originating-IP: [15.217.128.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2552 invoked from network); 27 Feb 2014 19:40:24 -0000
Received: from g2t2353.austin.hp.com (HELO g2t2353.austin.hp.com)
	(15.217.128.52)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 27 Feb 2014 19:40:24 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2353.austin.hp.com (Postfix) with ESMTP id 297401DD;
	Thu, 27 Feb 2014 19:40:21 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 50F4946;
	Thu, 27 Feb 2014 19:40:15 +0000 (UTC)
Message-ID: <530F949D.6050609@hp.com>
Date: Thu, 27 Feb 2014 14:40:13 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: David Vrabel <david.vrabel@citrix.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
	<530F2F7B.40500@citrix.com>
In-Reply-To: <530F2F7B.40500@citrix.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
 x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 07:28 AM, David Vrabel wrote:
> On 26/02/14 15:14, Waiman Long wrote:
>> Locking is always an issue in a virtualized environment as the virtual
>> CPU that is waiting on a lock may get scheduled out and hence block
>> any progress in lock acquisition even when the lock has been freed.
>>
>> One solution to this problem is to allow unfair locks in a
>> para-virtualized environment. In this case, a new lock acquirer can
>> come in and steal the lock if the next-in-line CPU to get the lock is
>> scheduled out. An unfair lock in a native environment is generally not
>> a good idea, as there is a possibility of lock starvation for a
>> heavily contended lock.
> I'm not sure I'm keen on losing the fairness in a PV environment.  I'm
> concerned that on an over-committed host, the lock starvation problem
> will be particularly bad.
>
> But I'll have to revisit this once a non-broken PV qspinlock
> implementation exists (or someone explains how the proposed one works).
>
> David

On second thought, the unfair qspinlock may not be as bad as other
unfair locks. Basically, a task gets one chance to steal the lock. If it
can't, it goes to the back of the queue and waits for its turn. So
unless a single CPU can monopolize a lock by acquiring it again
immediately after release, all the tasks queuing up will eventually get
their chance at the lock.
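
The one-chance stealing policy described above can be sketched roughly
as follows. This is a hypothetical toy model for illustration, not the
pvqspinlock code from the patch series: the `unfair_qlock` type and its
ticket-based queue are inventions here (the real qspinlock uses an
MCS-style queue of per-CPU nodes). A new acquirer makes exactly one
steal attempt; if that fails, it joins the FIFO queue and waits its
turn.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of the "one chance to steal" policy -- a sketch, not the
 * actual pvqspinlock implementation.  A new acquirer makes a single
 * trylock attempt; if that fails it takes a ticket and waits in FIFO
 * order behind the other queued tasks. */

typedef struct {
    atomic_int locked;  /* 0 = free, 1 = held */
    atomic_int tail;    /* next ticket to hand out */
    atomic_int head;    /* ticket currently allowed to spin on the lock */
} unfair_qlock;

static bool try_steal(unfair_qlock *l)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&l->locked, &expected, 1);
}

static void unfair_qlock_acquire(unfair_qlock *l)
{
    if (try_steal(l))
        return;                         /* stole the lock, skipped the queue */

    int me = atomic_fetch_add(&l->tail, 1); /* steal failed: join the queue */
    while (atomic_load(&l->head) != me)
        ;                               /* wait until we are the queue head */
    while (!try_steal(l))
        ;                               /* spin on the lock word itself */
    atomic_fetch_add(&l->head, 1);      /* consume our turn; next waiter advances */
}

static void unfair_qlock_release(unfair_qlock *l)
{
    atomic_store(&l->locked, 0);
}
```

Note how the unfairness is confined to the single `try_steal()` on
entry: queued tasks still hand the queue-head position over in strict
ticket order, which is why a lock can only be monopolized if the same
CPU re-acquires it immediately after every release.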

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 19:42:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ6r1-0000J4-6p; Thu, 27 Feb 2014 19:42:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ6qy-0000Ip-Uy
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 19:42:45 +0000
Received: from [85.158.143.35:56121] by server-2.bemta-4.messagelabs.com id
	08/8B-04779-4359F035; Thu, 27 Feb 2014 19:42:44 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393530162!8862334!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8510 invoked from network); 27 Feb 2014 19:42:43 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 19:42:43 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id 0B0202AA;
	Thu, 27 Feb 2014 19:42:41 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id C759E46;
	Thu, 27 Feb 2014 19:42:35 +0000 (UTC)
Message-ID: <530F952A.3030702@hp.com>
Date: Thu, 27 Feb 2014 14:42:34 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Paolo Bonzini <pbonzini@redhat.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
	<530F4949.4050706@citrix.com> <530F4F98.2080308@redhat.com>
In-Reply-To: <530F4F98.2080308@redhat.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>, x86@kernel.org,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Ingo Molnar <mingo@redhat.com>, Scott J Norton <scott.norton@hp.com>,
	xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 09:45 AM, Paolo Bonzini wrote:
> Il 27/02/2014 15:18, David Vrabel ha scritto:
>> On 27/02/14 13:11, Paolo Bonzini wrote:
>>> Il 27/02/2014 13:11, David Vrabel ha scritto:
>>>>>> This patch adds para-virtualization support to the queue spinlock
>>>>>> code by enabling the queue head to kick the lock holder CPU, if
>>>>>> known, when the lock isn't released for a certain amount of time.
>>>>>> It also enables mutual monitoring of the queue head CPU and the
>>>>>> following node CPU in the queue to make sure that both CPUs will
>>>>>> stay scheduled in.
>>>> I'm not really understanding how this is supposed to work.  There
>>>> appears to be an assumption that a guest can keep one of its VCPUs
>>>> running by repeatedly kicking it?  This is not possible under Xen 
>>>> and I
>>>> doubt it's possible under KVM or any other hypervisor.
>>>
>>> KVM allows any VCPU to wake up a currently halted VCPU of its choice,
>>> see Documentation/virtual/kvm/hypercalls.txt.
>>
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock* 
> primitive wakes up a sleeping VCPU.  It is more similar to PLE 
> (pause-loop exiting).
>
> Paolo

Yes, it is mostly to deal with vCPUs that are not running because of PLE.
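
The kick-after-timeout behaviour under discussion can be sketched as
follows. This is a hypothetical illustration of the idea, not the
actual patch: `pv_kick_cpu()` here is a stand-in stub for a real
hypervisor kick hypercall (e.g. a KVM or Xen vCPU kick), and the
`kicks_sent` counter exists only so the stub's behaviour is observable.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch: the queue head spins on the lock for a bounded number of
 * iterations; if the lock still isn't released, it assumes the holder's
 * vCPU may have been scheduled out and asks the hypervisor to put that
 * vCPU back on a physical CPU, then resumes spinning. */

#define SPIN_THRESHOLD 1024

static atomic_int lock_word;   /* 0 = free, 1 = held */
static atomic_int kicks_sent;  /* instrumentation for the stub below */

static void pv_kick_cpu(int cpu)
{
    (void)cpu;                         /* real code would hypercall here */
    atomic_fetch_add(&kicks_sent, 1);
}

static bool try_lock(void)
{
    int expected = 0;
    return atomic_compare_exchange_strong(&lock_word, &expected, 1);
}

static void pv_wait_for_lock(int holder_cpu)
{
    for (;;) {
        for (int i = 0; i < SPIN_THRESHOLD; i++)
            if (try_lock())
                return;
        /* Lock held for a long time: the holder's vCPU may have been
         * descheduled, so kick it and keep spinning. */
        pv_kick_cpu(holder_cpu);
    }
}
```

Unlike pvticketlocks, nothing here sleeps and is woken by the unlock
path; the waiter stays runnable and merely nudges the holder, which is
why the mechanism resembles pause-loop exiting more than a sleep/wake
protocol.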

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 19:57:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ756-0000iG-H2; Thu, 27 Feb 2014 19:57:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WJ754-0000iB-HB
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 19:57:18 +0000
Received: from [85.158.143.35:45095] by server-3.bemta-4.messagelabs.com id
	40/F7-11539-D989F035; Thu, 27 Feb 2014 19:57:17 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393531036!8849438!1
X-Originating-IP: [212.227.126.187]
X-SpamReason: No, hits=0.2 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODcgPT4gNjc1NDE=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODcgPT4gNjc1NDE=\n,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14770 invoked from network); 27 Feb 2014 19:57:16 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.187)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 19:57:16 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue005) with ESMTP (Nemesis)
	id 0MPsu6-1WMwp60itG-0053YS; Thu, 27 Feb 2014 20:56:26 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Alexander Graf <agraf@suse.de>
Date: Thu, 27 Feb 2014 20:56:25 +0100
Message-ID: <6039706.L1ZWvgjHF0@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <5AA88E43-1A40-4409-9A56-334988483843@suse.de>
References: <20140226183454.GA14639@cbox> <11174980.YJ21cfsHoG@wuerfel>
	<5AA88E43-1A40-4409-9A56-334988483843@suse.de>
MIME-Version: 1.0
X-Provags-ID: V02:K0:rzAReBQo0RejoNWvZUOZfgBk3/cDvaZFv31q5dRPrbA
	P+9vmq2M2QgcFvVciA9di0j4DbrCEjnK/+IKuf+eq/n4h27OdL
	jqINWkQSzNC2gowwIV05UpfSoEVL4Bvlo8IRJ84IaW98IAXnBJ
	RIlCR1LkS8Giq3deeSoKDK8EQYvUhEnBnV6glAnsu+24FUTqwS
	Dggs1Jt/J5gDz/tgL3iDoQGcFIjYi0nyW9CE+Tng6pztlK2SYk
	u2mlzCR/eslZgCsy7jyiPO7GvGaRs+0S+CJfz64inq5YEM29ne
	POr4lsaynG+t0osjYmhy2IPM33x9HXyBCS4l0i/uMouwFyED4D
	hMh7BVTfx9K8xZxhYho0=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thursday 27 February 2014 22:24:13 Alexander Graf wrote:
> 
> If you want to assign a platform device, you need to generate a respective
> hardware description (fdt/dsdt) chunk in the hypervisor. You can't take
> the host's description - it's too tightly coupled to the overall host layout.

But at that point, you need hardware specific drivers in both the hypervisor
and in the guest OS. When you have that, why do you still care about a
system specification? 

Going back to the previous argument, since the hypervisor has to make up the
description for the platform device itself, it won't matter whether the host
is booted using DT or ACPI: the description that the hypervisor makes up for
the guest has to match what the hypervisor uses to describe the rest of the
guest environment, which is independent of what the host uses.

> Imagine you get an AArch64 notebook with Windows on it. You want to run
> Linux there, so your host needs to understand ACPI. Now you want to run
> a Windows guest inside a VM, so you need ACPI in there again.

And do you think that Windows is going to support a VM system specification
we are writing here? Booting Windows RT in a virtual machine is certainly
an interesting use case, but I think we will then have to emulate a platform
that WinRT supports, rather than expect it to run on ours.
 
> Replace Windows by "Linux with custom drivers" and you're in the same
> situation even when you neglect Windows. Reality will be that we will
> have fdt and acpi based systems.

We will however want to boot all sorts of guests in a standardized
virtual environment:

* 32-bit Linux (since some distros don't support biarch or multiarch
  on arm64) for running applications that are either binary-only
  or not 64-bit safe
* 32-bit Android
* big-endian Linux for running applications that are not endian-clean
  (typically network stuff ported from powerpc or mipseb)
* OS/v guests
* NOMMU Linux
* BSD based OSs
* QNX
* random other RTOSs

Most of these will not work with ACPI, or at least not in 32-bit mode.
64-bit Linux will obviously support both DT (always) and ACPI (optionally),
depending on the platform, but for a specification like this, I think
it's much easier to support fewer options, to make it easier for other
guest OSs to ensure they actually run on any compliant hypervisor.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 19:57:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 19:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ756-0000iG-H2; Thu, 27 Feb 2014 19:57:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WJ754-0000iB-HB
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 19:57:18 +0000
Received: from [85.158.143.35:45095] by server-3.bemta-4.messagelabs.com id
	40/F7-11539-D989F035; Thu, 27 Feb 2014 19:57:17 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393531036!8849438!1
X-Originating-IP: [212.227.126.187]
X-SpamReason: No, hits=0.2 required=7.0 tests=sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODcgPT4gNjc1NDE=\n,sa_preprocessor: 
	QmFkIElQOiAyMTIuMjI3LjEyNi4xODcgPT4gNjc1NDE=\n,RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14770 invoked from network); 27 Feb 2014 19:57:16 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.126.187)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 27 Feb 2014 19:57:16 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue005) with ESMTP (Nemesis)
	id 0MPsu6-1WMwp60itG-0053YS; Thu, 27 Feb 2014 20:56:26 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Alexander Graf <agraf@suse.de>
Date: Thu, 27 Feb 2014 20:56:25 +0100
Message-ID: <6039706.L1ZWvgjHF0@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <5AA88E43-1A40-4409-9A56-334988483843@suse.de>
References: <20140226183454.GA14639@cbox> <11174980.YJ21cfsHoG@wuerfel>
	<5AA88E43-1A40-4409-9A56-334988483843@suse.de>
MIME-Version: 1.0
X-Provags-ID: V02:K0:rzAReBQo0RejoNWvZUOZfgBk3/cDvaZFv31q5dRPrbA
	P+9vmq2M2QgcFvVciA9di0j4DbrCEjnK/+IKuf+eq/n4h27OdL
	jqINWkQSzNC2gowwIV05UpfSoEVL4Bvlo8IRJ84IaW98IAXnBJ
	RIlCR1LkS8Giq3deeSoKDK8EQYvUhEnBnV6glAnsu+24FUTqwS
	Dggs1Jt/J5gDz/tgL3iDoQGcFIjYi0nyW9CE+Tng6pztlK2SYk
	u2mlzCR/eslZgCsy7jyiPO7GvGaRs+0S+CJfz64inq5YEM29ne
	POr4lsaynG+t0osjYmhy2IPM33x9HXyBCS4l0i/uMouwFyED4D
	hMh7BVTfx9K8xZxhYho0=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thursday 27 February 2014 22:24:13 Alexander Graf wrote:
> 
> If you want to assign a platform device, you need to generate a respective
> hardware description (fdt/dsdt) chunk in the hypervisor. You can't take
> the host's description - it's too tightly coupled to the overall host layout.

But at that point, you need hardware specific drivers in both the hypervisor
and in the guest OS. When you have that, why do you still care about a
system specification? 

Going back to the previous argument, since the hypervisor has to make up the
description for the platform device itself, it won't matter whether the host
is booted using DT or ACPI: the description that the hypervisor makes up for
the guest has to match what the hypervisor uses to describe the rest of the
guest environment, which is independent of what the host uses.

> Imagine you get an AArch64 notebook with Windows on it. You want to run
> Linux there, so your host needs to understand ACPI. Now you want to run
> a Windows guest inside a VM, so you need ACPI in there again.

And you think that Windows is going to support a VM system specification
we are writing here? Booting Windows RT in a virtual machine is certainly
an interesting use case, but I think we will have to emulate a platform
that WinRT supports then, rather than expect it to run on ours.
 
> Replace Windows by "Linux with custom drivers" and you're in the same
> situation even when you neglect Windows. Reality will be that we will
> have fdt and acpi based systems.

We will however want to boot all sorts of guests in a standardized
virtual environment:

* 32 bit Linux (since some distros don't support biarch or multiarch
  on arm64) for running applications that are either binary-only
  or not 64-bit safe.
* 32-bit Android
* big-endian Linux for running applications that are not endian-clean
  (typically network stuff ported from powerpc or mipseb).
* OS/v guests
* NOMMU Linux
* BSD based OSs
* QNX
* random other RTOSs

Most of these will not work with ACPI, or at least not in 32-bit mode.
64-bit Linux will obviously support both DT (always) and ACPI (optionally),
depending on the platform, but for a specification like this, I think
supporting fewer options makes it easier for other guest OSs to ensure
they actually run on any compliant hypervisor.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 20:25:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 20:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ7WG-0001Xn-Hp; Thu, 27 Feb 2014 20:25:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ7WF-0001Xi-4V
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 20:25:23 +0000
Received: from [85.158.137.68:20023] by server-5.bemta-3.messagelabs.com id
	45/6A-04712-23F9F035; Thu, 27 Feb 2014 20:25:22 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393532720!3162170!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23355 invoked from network); 27 Feb 2014 20:25:21 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 20:25:21 -0000
Received: from g4t3433.houston.hp.com (g4t3433.houston.hp.com [16.210.25.219])
	by g4t3426.houston.hp.com (Postfix) with ESMTP id C48F627D;
	Thu, 27 Feb 2014 20:25:17 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g4t3433.houston.hp.com (Postfix) with ESMTP id 857A553;
	Thu, 27 Feb 2014 20:25:11 +0000 (UTC)
Message-ID: <530F9F25.7010904@hp.com>
Date: Thu, 27 Feb 2014 15:25:09 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
	<20140226162243.GX6835@laptop.programming.kicks-ass.net>
In-Reply-To: <20140226162243.GX6835@laptop.programming.kicks-ass.net>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte
 queue spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 11:22 AM, Peter Zijlstra wrote:
> On Wed, Feb 26, 2014 at 10:14:21AM -0500, Waiman Long wrote:
>
>> +struct qnode {
>> +	u32		 wait;		/* Waiting flag		*/
>> +	struct qnode	*next;		/* Next queue node addr */
>> +};
>> +
>> +struct qnode_set {
>> +	struct qnode	nodes[MAX_QNODES];
>> +	int		node_idx;	/* Current node to use */
>> +};
>> +
>> +/*
>> + * Per-CPU queue node structures
>> + */
>> +static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { {{0}}, 0 };
> So I've not yet wrapped my head around any of this; and I see a later
> patch adds some paravirt gunk to this, but it does blow you can't keep
> it a single cacheline for the sane case.

There is a 4-byte hole in the qnode structure on x86-64. I did try to 
make the additional PV fields use only 4 bytes, so that the qnode 
structure does not grow unless we need to support 16K or more CPUs.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 20:26:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 20:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ7Ws-0001Zf-Uu; Thu, 27 Feb 2014 20:26:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ7Wr-0001ZU-R1
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 20:26:02 +0000
Received: from [85.158.137.68:54851] by server-6.bemta-3.messagelabs.com id
	5C/B3-09180-95F9F035; Thu, 27 Feb 2014 20:26:01 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393532759!4640389!1
X-Originating-IP: [15.201.208.54]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11325 invoked from network); 27 Feb 2014 20:26:00 -0000
Received: from g4t3426.houston.hp.com (HELO g4t3426.houston.hp.com)
	(15.201.208.54)
	by server-15.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 20:26:00 -0000
Received: from g4t3433.houston.hp.com (g4t3433.houston.hp.com [16.210.25.219])
	by g4t3426.houston.hp.com (Postfix) with ESMTP id B9012FF;
	Thu, 27 Feb 2014 20:25:55 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g4t3433.houston.hp.com (Postfix) with ESMTP id 7BA784C;
	Thu, 27 Feb 2014 20:25:52 +0000 (UTC)
Message-ID: <530F9F4F.9070104@hp.com>
Date: Thu, 27 Feb 2014 15:25:51 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-2-git-send-email-Waiman.Long@hp.com>
	<20140226162440.GY6835@laptop.programming.kicks-ass.net>
In-Reply-To: <20140226162440.GY6835@laptop.programming.kicks-ass.net>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 1/8] qspinlock: Introducing a 4-byte
 queue spinlock implementation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 11:24 AM, Peter Zijlstra wrote:
> On Wed, Feb 26, 2014 at 10:14:21AM -0500, Waiman Long wrote:
>> +static void put_qnode(void)
>> +{
>> +	struct qnode_set *qset = this_cpu_ptr(&qnset);
>> +
>> +	qset->node_idx--;
>> +}
> That very much wants to be: this_cpu_dec().

Yes, I will change it to use this_cpu_dec().

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 20:42:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 20:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ7mt-0002gF-3v; Thu, 27 Feb 2014 20:42:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ7mr-0002g5-Ua
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 20:42:34 +0000
Received: from [85.158.143.35:33557] by server-1.bemta-4.messagelabs.com id
	D4/08-31661-933AF035; Thu, 27 Feb 2014 20:42:33 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393533751!8874293!1
X-Originating-IP: [15.193.200.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29221 invoked from network); 27 Feb 2014 20:42:32 -0000
Received: from g6t1525.atlanta.hp.com (HELO g6t1525.atlanta.hp.com)
	(15.193.200.68)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 20:42:32 -0000
Received: from g5t1633.atlanta.hp.com (g5t1633.atlanta.hp.com [16.201.144.132])
	by g6t1525.atlanta.hp.com (Postfix) with ESMTP id 3F7BC100;
	Thu, 27 Feb 2014 20:42:26 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g5t1633.atlanta.hp.com (Postfix) with ESMTP id AA6805D;
	Thu, 27 Feb 2014 20:42:20 +0000 (UTC)
Message-ID: <530FA32B.8010202@hp.com>
Date: Thu, 27 Feb 2014 15:42:19 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
In-Reply-To: <20140226162057.GW6835@laptop.programming.kicks-ass.net>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 11:20 AM, Peter Zijlstra wrote:
> You don't happen to have a proper state diagram for this thing do you?
>
> I suppose I'm going to have to make one; this is all getting a bit
> unwieldy, and those xchg() + fixup things are hard to read.

I don't have a state diagram on hand, but I will add more comments to 
describe the 4 possible cases and how to handle them.

>
> On Wed, Feb 26, 2014 at 10:14:23AM -0500, Waiman Long wrote:
>> +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
>> +{
>> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
>> +	u16		     old;
>> +
>> +	/*
>> +	 * Fall into the quick spinning code path only if no one is waiting
>> +	 * or the lock is available.
>> +	 */
>> +	if (unlikely((qsval != _QSPINLOCK_LOCKED)&&
>> +		     (qsval != _QSPINLOCK_WAITING)))
>> +		return 0;
>> +
>> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
>> +
>> +	if (old == 0) {
>> +		/*
>> +		 * Got the lock, can clear the waiting bit now
>> +		 */
>> +		smp_u8_store_release(&qlock->wait, 0);
>
> So we just did an atomic op, and now you're trying to optimize this
> write. Why do you need a whole byte for that?
>
> Surely a cmpxchg loop with the right atomic op can't be _that_ much
> slower? Its far more readable and likely avoids that steal fail below as
> well.

At low contention levels, atomic operations that require a lock prefix 
are the major contributor to the total execution time. I have seen 
estimates that a lock-prefixed instruction can easily take 50x longer 
than a regular instruction that can be pipelined. That is why I try to 
use as few lock-prefixed instructions as possible. If I have to do an 
atomic cmpxchg, it probably won't be faster than the regular qspinlock 
slowpath.

Given that speed at low contention levels (the common case) is important 
for getting this patch accepted, I have to do what I can to make the 
2-contending-task case run as fast as possible.

>> +		return 1;
>> +	} else if (old == _QSPINLOCK_LOCKED) {
>> +try_again:
>> +		/*
>> +		 * Wait until the lock byte is cleared to get the lock
>> +		 */
>> +		do {
>> +			cpu_relax();
>> +		} while (ACCESS_ONCE(qlock->lock));
>> +		/*
>> +		 * Set the lock bit&  clear the waiting bit
>> +		 */
>> +		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
>> +			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
>> +			return 1;
>> +		/*
>> +		 * Someone has steal the lock, so wait again
>> +		 */
>> +		goto try_again;
> That's just a fail.. steals should not ever be allowed. It's a fair lock
> after all.

The code is unfair, but in this particular case that unfairness helps it 
run faster than the ticket spinlock, while the regular qspinlock 
slowpath remains fair.

>> +	} else if (old == _QSPINLOCK_WAITING) {
>> +		/*
>> +		 * Another task is already waiting while it steals the lock.
>> +		 * A bit of unfairness here won't change the big picture.
>> +		 * So just take the lock and return.
>> +		 */
>> +		return 1;
>> +	}
>> +	/*
>> +	 * Nothing need to be done if the old value is
>> +	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
>> +	 */
>> +	return 0;
>> +}
>
>
>
>> @@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>>   		return;
>>   	}
>>
>> +#ifdef queue_code_xchg
>> +	prev_qcode = queue_code_xchg(lock, my_qcode);
>> +#else
>>   	/*
>>   	 * Exchange current copy of the queue node code
>>   	 */
>> @@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>>   	} else
>>   		prev_qcode&= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
>>   	my_qcode&= ~_QSPINLOCK_LOCKED;
>> +#endif /* queue_code_xchg */
>>
>>   	if (prev_qcode) {
>>   		/*
> That's just horrible.. please just make the entire #else branch another
> version of that same queue_code_xchg() function.

OK, I will wrap it in another function.

Regards,
Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 20:42:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 20:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ7mt-0002gF-3v; Thu, 27 Feb 2014 20:42:35 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ7mr-0002g5-Ua
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 20:42:34 +0000
Received: from [85.158.143.35:33557] by server-1.bemta-4.messagelabs.com id
	D4/08-31661-933AF035; Thu, 27 Feb 2014 20:42:33 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393533751!8874293!1
X-Originating-IP: [15.193.200.68]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29221 invoked from network); 27 Feb 2014 20:42:32 -0000
Received: from g6t1525.atlanta.hp.com (HELO g6t1525.atlanta.hp.com)
	(15.193.200.68)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 20:42:32 -0000
Received: from g5t1633.atlanta.hp.com (g5t1633.atlanta.hp.com [16.201.144.132])
	by g6t1525.atlanta.hp.com (Postfix) with ESMTP id 3F7BC100;
	Thu, 27 Feb 2014 20:42:26 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g5t1633.atlanta.hp.com (Postfix) with ESMTP id AA6805D;
	Thu, 27 Feb 2014 20:42:20 +0000 (UTC)
Message-ID: <530FA32B.8010202@hp.com>
Date: Thu, 27 Feb 2014 15:42:19 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
In-Reply-To: <20140226162057.GW6835@laptop.programming.kicks-ass.net>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 11:20 AM, Peter Zijlstra wrote:
> You don't happen to have a proper state diagram for this thing do you?
>
> I suppose I'm going to have to make one; this is all getting a bit
> unwieldy, and those xchg() + fixup things are hard to read.

I don't have a state diagram on hand, but I will add more comments to 
describe the 4 possible cases and how to handle them.
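As a rough sketch, the four possible old values returned by the xchg, and what the quick path does with each, can be modelled in user space as follows (the constant values and names below are assumptions modelled on the patch, not the kernel's actual definitions):

```c
#include <stdint.h>

/* Assumed layout: low byte of lock_wait is the lock byte, high byte is
 * the waiting byte.  The quick path does
 *     old = xchg(&qlock->lock_wait, QWAITING | QLOCKED);
 * and then dispatches on the old value. */
#define QLOCKED   0x001
#define QWAITING  0x100

enum quick_outcome {
    GOT_LOCK,       /* old == 0: lock was free; clear the waiting bit */
    SPIN_FOR_LOCK,  /* old == QLOCKED: spin until released, cmpxchg in */
    TAKE_OVER,      /* old == QWAITING: waiter took over; just return  */
    FALL_BACK,      /* old == QWAITING|QLOCKED: go to the slowpath     */
};

/* Classify the old lock_wait value observed by the xchg. */
static enum quick_outcome classify(uint16_t old)
{
    if (old == 0)
        return GOT_LOCK;
    if (old == QLOCKED)
        return SPIN_FOR_LOCK;
    if (old == QWAITING)
        return TAKE_OVER;
    return FALL_BACK;
}
```

This only enumerates the dispatch; the spinning and cmpxchg in the SPIN_FOR_LOCK case are as in the quoted code below.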

>
> On Wed, Feb 26, 2014 at 10:14:23AM -0500, Waiman Long wrote:
>> +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
>> +{
>> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
>> +	u16		     old;
>> +
>> +	/*
>> +	 * Fall into the quick spinning code path only if no one is waiting
>> +	 * or the lock is available.
>> +	 */
>> +	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
>> +		     (qsval != _QSPINLOCK_WAITING)))
>> +		return 0;
>> +
>> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
>> +
>> +	if (old == 0) {
>> +		/*
>> +		 * Got the lock, can clear the waiting bit now
>> +		 */
>> +		smp_u8_store_release(&qlock->wait, 0);
>
> So we just did an atomic op, and now you're trying to optimize this
> write. Why do you need a whole byte for that?
>
> Surely a cmpxchg loop with the right atomic op can't be _that_ much
> slower? Its far more readable and likely avoids that steal fail below as
> well.

At low contention levels, atomic operations that require a lock prefix 
are the major contributors to the total execution time. I have seen 
estimates online that a lock-prefixed instruction can easily take 50X 
longer than a regular instruction that can be pipelined. That is why I 
try to use as few lock-prefixed instructions as possible. If I have to 
do an atomic cmpxchg anyway, it probably won't be faster than the 
regular qspinlock slowpath.

Given that speed at low contention levels (the common case) is 
important for getting this patch accepted, I have to do what I can to 
make the 2-contending-task case run as fast as possible.

>> +		return 1;
>> +	} else if (old == _QSPINLOCK_LOCKED) {
>> +try_again:
>> +		/*
>> +		 * Wait until the lock byte is cleared to get the lock
>> +		 */
>> +		do {
>> +			cpu_relax();
>> +		} while (ACCESS_ONCE(qlock->lock));
>> +		/*
>> +		 * Set the lock bit & clear the waiting bit
>> +		 */
>> +		if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
>> +			   _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
>> +			return 1;
>> +		/*
>> +		 * Someone has stolen the lock, so wait again
>> +		 */
>> +		goto try_again;
> That's just a fail.. steals should not ever be allowed. It's a fair lock
> after all.

The code is unfair, but that unfairness helps it run faster than the 
ticket spinlock in this particular case, while the regular qspinlock 
slowpath remains fair. A little bit of unfairness here helps its speed.

>> +	} else if (old == _QSPINLOCK_WAITING) {
>> +		/*
>> +		 * Another task is already waiting while it steals the lock.
>> +		 * A bit of unfairness here won't change the big picture.
>> +		 * So just take the lock and return.
>> +		 */
>> +		return 1;
>> +	}
>> +	/*
>> +	 * Nothing needs to be done if the old value is
>> +	 * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
>> +	 */
>> +	return 0;
>> +}
>
>
>
>> @@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>>   		return;
>>   	}
>>
>> +#ifdef queue_code_xchg
>> +	prev_qcode = queue_code_xchg(lock, my_qcode);
>> +#else
>>   	/*
>>   	 * Exchange current copy of the queue node code
>>   	 */
>> @@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>>   	} else
>>   		prev_qcode &= ~_QSPINLOCK_LOCKED;	/* Clear the lock bit */
>>   	my_qcode &= ~_QSPINLOCK_LOCKED;
>> +#endif /* queue_code_xchg */
>>
>>   	if (prev_qcode) {
>>   		/*
> That's just horrible.. please just make the entire #else branch another
> version of that same queue_code_xchg() function.

OK, I will wrap it in another function.
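A user-space sketch of what such a generic queue_code_xchg() fallback might look like, mirroring the masking done in the quoted #else branch (the field name qlcode, the struct, and the constant value are assumptions, not the actual patch code):

```c
#include <stdint.h>

#define QLOCKED 0x01                /* assumed value of _QSPINLOCK_LOCKED */

struct qspinlock_model {
    uint32_t qlcode;                /* hypothetical stand-in for the lock word */
};

/* Generic fallback for queue_code_xchg(): atomically publish my_qcode
 * and return the previous queue code with the lock bit cleared, so the
 * caller's "if (prev_qcode)" test sees only the queue-node part. */
static uint32_t queue_code_xchg_generic(struct qspinlock_model *lock,
                                        uint32_t my_qcode)
{
    uint32_t prev_qcode = __atomic_exchange_n(&lock->qlcode, my_qcode,
                                              __ATOMIC_ACQ_REL);
    return prev_qcode & ~(uint32_t)QLOCKED;  /* clear the lock bit */
}
```

With both variants behind the same function name, the #ifdef moves out of the slowpath body and into the function definitions, as Peter suggests.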

Regards,
Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 20:50:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 20:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ7ur-00031Q-7Z; Thu, 27 Feb 2014 20:50:49 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJ7up-00031L-1k
	for xen-devel@lists.xenproject.org; Thu, 27 Feb 2014 20:50:47 +0000
Received: from [85.158.143.35:46912] by server-2.bemta-4.messagelabs.com id
	A6/BB-04779-625AF035; Thu, 27 Feb 2014 20:50:46 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-2.tower-21.messagelabs.com!1393534244!8849835!1
X-Originating-IP: [15.201.208.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31705 invoked from network); 27 Feb 2014 20:50:45 -0000
Received: from g4t3425.houston.hp.com (HELO g4t3425.houston.hp.com)
	(15.201.208.53)
	by server-2.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 20:50:45 -0000
Received: from g4t3433.houston.hp.com (g4t3433.houston.hp.com [16.210.25.219])
	by g4t3425.houston.hp.com (Postfix) with ESMTP id 66BF57D;
	Thu, 27 Feb 2014 20:50:43 +0000 (UTC)
Received: from [192.168.142.175] (unknown [16.99.33.8])
	by g4t3433.houston.hp.com (Postfix) with ESMTP id 1E78163;
	Thu, 27 Feb 2014 20:50:39 +0000 (UTC)
Message-ID: <530FA51D.6060303@hp.com>
Date: Thu, 27 Feb 2014 15:50:37 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
	<530F2B8F.1010401@citrix.com> <530F3967.6030805@redhat.com>
	<530F4949.4050706@citrix.com> <530F4F98.2080308@redhat.com>
	<530F5851.1090809@linux.vnet.ibm.com>
In-Reply-To: <530F5851.1090809@linux.vnet.ibm.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 10:22 AM, Raghavendra K T wrote:
> On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
> [...]
>>> But neither of the VCPUs being kicked here are halted -- they're either
>>> running or runnable (descheduled by the hypervisor).
>>
>> /me actually looks at Waiman's code...
>>
>> Right, this is really different from pvticketlocks, where the *unlock*
>> primitive wakes up a sleeping VCPU.  It is more similar to PLE
>> (pause-loop exiting).
>
> Adding to the discussion, I see there are two possibilities here,
> considering that in undercommit cases we should not exceed
> HEAD_SPIN_THRESHOLD,
>
> 1. the looping vcpu in pv_head_spin_check() should do halt()
> considering that we have done enough spinning (more than typical
> lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in qspinlock case.
> But this may result in some ping ponging?
>
>
>

In the current code, the lock holder can't easily locate the CPU number 
of the queue head in the unlock path. That is why I try to keep the 
queue head spinning as long as possible so that it can take over as soon 
as the lock is free. I am trying out new code that lets the waiting CPUs 
other than the first two go to halt, to see if that helps the overcommit 
case.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 21:47:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 21:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ8ng-0004b1-0L; Thu, 27 Feb 2014 21:47:28 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJ8nf-0004aw-0R
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 21:47:27 +0000
Received: from [85.158.143.35:55493] by server-2.bemta-4.messagelabs.com id
	EB/7B-04779-E62BF035; Thu, 27 Feb 2014 21:47:26 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393537644!8879088!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23466 invoked from network); 27 Feb 2014 21:47:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 21:47:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,557,1389744000"; d="scan'208";a="104821076"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 27 Feb 2014 21:47:23 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 16:47:22 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJ8na-0004gL-Fr;
	Thu, 27 Feb 2014 21:47:22 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJ8na-0003D0-4H;
	Thu, 27 Feb 2014 21:47:22 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25319-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Thu, 27 Feb 2014 21:47:22 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25319: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25319 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25319/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu  5 xen-boot                  fail REGR. vs. 12557
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-qemuu-rhel6hvm-intel  5 xen-boot               fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 linux                d2a0476307e67a6e6a293563a4f4ad4ec79ac0e5
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7066 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2392396 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 22:10:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 22:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ99M-0005Im-7D; Thu, 27 Feb 2014 22:09:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abchak@juniper.net>) id 1WJ99K-0005IZ-5C
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 22:09:50 +0000
Received: from [193.109.254.147:5676] by server-7.bemta-14.messagelabs.com id
	D2/A3-23424-DA7BF035; Thu, 27 Feb 2014 22:09:49 +0000
X-Env-Sender: abchak@juniper.net
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393538987!7384174!1
X-Originating-IP: [65.55.88.14]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27836 invoked from network); 27 Feb 2014 22:09:48 -0000
Received: from tx2ehsobe004.messaging.microsoft.com (HELO
	tx2outboundpool.messaging.microsoft.com) (65.55.88.14)
	by server-3.tower-27.messagelabs.com with AES128-SHA encrypted SMTP;
	27 Feb 2014 22:09:48 -0000
Received: from mail241-tx2-R.bigfish.com (10.9.14.239) by
	TX2EHSOBE006.bigfish.com (10.9.40.26) with Microsoft SMTP Server id
	14.1.225.22; Thu, 27 Feb 2014 22:09:46 +0000
Received: from mail241-tx2 (localhost [127.0.0.1])	by
	mail241-tx2-R.bigfish.com (Postfix) with ESMTP id 60C5CE0183	for
	<xen-devel@lists.xen.org>; Thu, 27 Feb 2014 22:09:46 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:157.56.240.101; KIP:(null); UIP:(null);
	IPV:NLI; H:BL2PRD0510HT001.namprd05.prod.outlook.com; RD:none;
	EFVD:NLI
X-SpamScore: -1
X-BigFish: VPS-1(zz14ffIzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh1de097hz2fh109h2a8h839h946hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h19ceh1ad9h1b0ah224fh1d0ch1d2eh1d3fh1dc1h1dfeh1dffh1e1dh1fe8h1ff5h2052h20b3h2216h22d0h2336h2438h2461h2487h24d7h2516h2545h255eh25cch1155h)
Received-SPF: pass (mail241-tx2: domain of juniper.net designates
	157.56.240.101 as permitted sender) client-ip=157.56.240.101;
	envelope-from=abchak@juniper.net;
	helo=BL2PRD0510HT001.namprd05.prod.outlook.com ; .outlook.com ; 
X-Forefront-Antispam-Report-Untrusted: SFV:NSPM;
	SFS:(10009001)(6009001)(428001)(53754006)(199002)(189002)(92566001)(76482001)(59766001)(19580395003)(36756003)(56776001)(81816001)(77982001)(77096001)(76786001)(76796001)(74876001)(76176001)(69226001)(82746002)(79102001)(74366001)(54316002)(2656002)(93136001)(50986001)(47976001)(47736001)(74502001)(47446002)(49866001)(66066001)(90146001)(56816005)(87936001)(33656001)(74706001)(65816001)(93516002)(87266001)(4396001)(31966008)(83716003)(85852003)(83322001)(53806001)(81542001)(51856001)(92726001)(19580405001)(85306002)(74662001)(54356001)(94316002)(80022001)(81342001)(46102001)(86362001)(81686001)(95416001)(80976001)(83072002)(63696002)(95666003)(94946001);
	DIR:OUT; SFP:1101; SCL:1; SRVR:BY2PR05MB712; H:BL2PR05MB193
Received: from mail241-tx2 (localhost.localdomain [127.0.0.1]) by mail241-tx2
	(MessageSwitch) id 1393538984105420_22580;
	Thu, 27 Feb 2014 22:09:44 +0000 (UTC)
Received: from TX2EHSMHS023.bigfish.com (unknown [10.9.14.228])	by
	mail241-tx2.bigfish.com (Postfix) with ESMTP id 104A46006B	for
	<xen-devel@lists.xen.org>; Thu, 27 Feb 2014 22:09:44 +0000 (UTC)
Received: from BL2PRD0510HT001.namprd05.prod.outlook.com (157.56.240.101) by
	TX2EHSMHS023.bigfish.com (10.9.99.123) with Microsoft SMTP Server (TLS)
	id 14.16.227.3; Thu, 27 Feb 2014 22:09:39 +0000
Received: from BY2PR05MB712.namprd05.prod.outlook.com (10.141.222.155) by
	BL2PRD0510HT001.namprd05.prod.outlook.com (10.255.100.36) with
	Microsoft SMTP
	Server (TLS) id 14.16.423.0; Thu, 27 Feb 2014 22:09:38 +0000
Received: from BL2PR05MB193.namprd05.prod.outlook.com (10.242.198.143) by
	BY2PR05MB712.namprd05.prod.outlook.com (10.141.222.155) with Microsoft
	SMTP Server (TLS) id 15.0.888.9; Thu, 27 Feb 2014 22:09:36 +0000
Received: from BL2PR05MB193.namprd05.prod.outlook.com ([169.254.9.23]) by
	BL2PR05MB193.namprd05.prod.outlook.com ([169.254.9.23]) with mapi id
	15.00.0883.010; Thu, 27 Feb 2014 22:09:35 +0000
From: Anirban Chakraborty <abchak@juniper.net>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Thread-Topic: build issue in xcp-networkd
Thread-Index: AQHPNAiY63+F7YGNT0KzFjnlrIvRuw==
Date: Thu, 27 Feb 2014 22:09:34 +0000
Message-ID: <71367575-F192-4B19-9C60-34D2D65D2A23@juniper.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [66.129.239.13]
x-forefront-prvs: 013568035E
Content-ID: <0742FD019DFECB43B37760E1D4E78880@namprd05.prod.outlook.com>
MIME-Version: 1.0
X-OriginatorOrg: juniper.net
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Subject: [Xen-devel] build issue in xcp-networkd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hi All,

I was trying to ‘opam install xcp-networkd’ in a XenServer DDK VM (2.6.32.43-0.4.1.xs1.8.0.847.170785xen) and encountered the following error messages:

---
# ocamlfind ocamlopt -a -package lwt.syntax lib/xs_transport.cmx -o lib/xenstore_transport.cmxa
# ocamlfind ocamlopt -shared -linkall -package lwt.syntax lib/xenstore_transport.cmxa -o lib/xenstore_transport.cmxs
# ocamlfind ocamldep -package xenstore.client -package xenstore -package unix -package lwt.unix -package lwt.syntax -package lwt -syntax camlp4o -modules lib/xs_transport_lwt_unix_client.ml > lib/xs_transport_lwt_unix_client.ml.depends
# ocamlfind ocamldep -package xenstore.client -package xenstore -package unix -package lwt.unix -package lwt.syntax -package lwt -syntax camlp4o -modules lib/xs.ml > lib/xs.ml.depends
# ocamlfind ocamlc -c -g -annot -I lib -package xenstore.client -package xenstore -package unix -package lwt.unix -package lwt.syntax -package lwt -syntax camlp4o -I lib -o lib/xs_transport_lwt_unix_client.cmo lib/xs_transport_lwt_unix_client.ml
# ocamlfind ocamlc -c -g -annot -I lib -package xenstore.client -package xenstore -package unix -package lwt.unix -package lwt.syntax -package lwt -syntax camlp4o -I lib -o lib/xs.cmo lib/xs.ml
# + ocamlfind ocamlc -c -g -annot -I lib -package xenstore.client -package xenstore -package unix -package lwt.unix -package lwt.syntax -package lwt -syntax camlp4o -I lib -o lib/xs.cmo lib/xs.ml
# File "lib/xs.ml", line 1, characters 8-28:
# Error: Unbound module Xs_client_lwt
# Command exited with code 2.
### stderr ###
# E: Failure("Command ''/usr/local/bin/ocamlbuild' lib/xenstore_transport.cma lib/xenstore_transport.cmxa lib/xenstore_transport.a lib/xenstore_transport.cmxs lib/xenstore_transport_lwt_unix.cma lib/xenstore_transport_lwt_unix.cmxa lib/xenstore_transport_lwt_unix.a lib/xenstore_transport_lwt_unix.cmxs lib/xenstore_transport_unix.cma lib/xenstore_transport_unix.cmxa lib/xenstore_transport_unix.a lib/...[truncated]
# make: *** [build] Error 1
# make: *** [build] Error 1
---

I have looked for Xs_client_lwt, and it seems that this module is available in the xenstore package, which I was able to install (xenstore 2.0.0). I have installed xenctrl 5.0.0 as well (with --disable xenlight).

My sources are at the following changeset:
---
commit 7fca2b36e8886ba9d7a3ed355c7129ba16b252da
Merge: 4aed922 e810dd2
Author: Rob Hoes <rob.hoes@citrix.com>
Date:   Tue Sep 24 05:51:05 2013 -0700

    Merge pull request #16 from robhoes/master
---

Any pointer on how to resolve this would be great.

thanks,
Anirban



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 22:32:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 22:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ9VI-00061W-KQ; Thu, 27 Feb 2014 22:32:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJ9VH-00061K-AC
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 22:32:31 +0000
Received: from [193.109.254.147:27701] by server-13.bemta-14.messagelabs.com
	id 21/6E-01226-EFCBF035; Thu, 27 Feb 2014 22:32:30 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393540348!7324414!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22435 invoked from network); 27 Feb 2014 22:32:29 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 22:32:29 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 27 Feb 2014 22:32:27 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,557,1389744000"; d="scan'208";a="662557909"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi02.verizon.com with ESMTP; 27 Feb 2014 22:32:26 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xensource.com,
	qemu-devel@nongnu.org
Date: Thu, 27 Feb 2014 17:32:23 -0500
Message-Id: <1393540343-11089-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Anthony Liguori <aliguori@amazon.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: [Xen-devel] [PATCH 1/1] vl.c: Add pci_hole_min_size machine option.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows growing the pci_hole to the size needed.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 hw/i386/pc_piix.c    | 14 ++++++++++++--
 hw/i386/pc_q35.c     | 14 ++++++++++++--
 include/hw/i386/pc.h |  3 +++
 vl.c                 | 16 ++++++++++++++++
 xen-all.c            | 28 +++++++++++++++++-----------
 5 files changed, 60 insertions(+), 15 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index d5dc1ef..58e273d 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -95,6 +95,7 @@ static void pc_init1(QEMUMachineInitArgs *args,
     DeviceState *icc_bridge;
     FWCfgState *fw_cfg = NULL;
     PcGuestInfo *guest_info;
+    ram_addr_t lowmem = 0xe0000000;
 
     if (xen_enabled() && xen_hvm_init(&ram_memory) != 0) {
         fprintf(stderr, "xen hardware virtual machine initialisation failed\n");
@@ -111,6 +112,11 @@ static void pc_init1(QEMUMachineInitArgs *args,
         kvmclock_create();
     }
 
+    /* Adjust for pci_hole_min_size */
+    if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+        lowmem = (1ULL << 32) - pci_hole_min_size;
+    }
+
     /* Check whether RAM fits below 4G (leaving 1/2 GByte for IO memory).
      * If it doesn't, we need to split it in chunks below and above 4G.
      * In any case, try to make sure that guest addresses aligned at
@@ -118,8 +124,12 @@ static void pc_init1(QEMUMachineInitArgs *args,
      * For old machine types, use whatever split we used historically to avoid
      * breaking migration.
      */
-    if (args->ram_size >= 0xe0000000) {
-        ram_addr_t lowmem = gigabyte_align ? 0xc0000000 : 0xe0000000;
+    if (args->ram_size >= lowmem) {
+        lowmem = gigabyte_align ? 0xc0000000 : 0xe0000000;
+        /* Adjust for pci_hole_min_size */
+        if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+            lowmem = (1ULL << 32) - pci_hole_min_size;
+        }
         above_4g_mem_size = args->ram_size - lowmem;
         below_4g_mem_size = lowmem;
     } else {
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index a7f6260..a491f6a 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -82,6 +82,7 @@ static void pc_q35_init(QEMUMachineInitArgs *args)
     PCIDevice *ahci;
     DeviceState *icc_bridge;
     PcGuestInfo *guest_info;
+    ram_addr_t lowmem = 0xb0000000;
 
     if (xen_enabled() && xen_hvm_init(&ram_memory) != 0) {
         fprintf(stderr, "xen hardware virtual machine initialisation failed\n");
@@ -97,6 +98,11 @@ static void pc_q35_init(QEMUMachineInitArgs *args)
 
     kvmclock_create();
 
+    /* Adjust for pci_hole_min_size */
+    if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+        lowmem = (1ULL << 32) - pci_hole_min_size;
+    }
+
     /* Check whether RAM fits below 4G (leaving 1/2 GByte for IO memory
      * and 256 Mbytes for PCI Express Enhanced Configuration Access Mapping
      * also known as MMCFG).
@@ -106,8 +112,12 @@ static void pc_q35_init(QEMUMachineInitArgs *args)
      * For old machine types, use whatever split we used historically to avoid
      * breaking migration.
      */
-    if (args->ram_size >= 0xb0000000) {
-        ram_addr_t lowmem = gigabyte_align ? 0x80000000 : 0xb0000000;
+    if (args->ram_size >= lowmem) {
+        lowmem = gigabyte_align ? 0x80000000 : 0xb0000000;
+        /* Adjust for pci_hole_min_size */
+        if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+            lowmem = (1ULL << 32) - pci_hole_min_size;
+        }
         above_4g_mem_size = args->ram_size - lowmem;
         below_4g_mem_size = lowmem;
     } else {
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 9010246..43ea04b 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -204,6 +204,9 @@ int isa_vga_mm_init(hwaddr vram_base,
                     hwaddr ctrl_base, int it_shift,
                     MemoryRegion *address_space);
 
+/* vl.c */
+extern uint64_t pci_hole_min_size;
+
 /* ne2000.c */
 static inline bool isa_ne2000_init(ISABus *bus, int base, int irq, NICInfo *nd)
 {
diff --git a/vl.c b/vl.c
index 1d27b34..32266d3 100644
--- a/vl.c
+++ b/vl.c
@@ -123,6 +123,7 @@ int main(int argc, char **argv)
 #include "qom/object_interfaces.h"
 
 #define DEFAULT_RAM_SIZE 128
+#define MAX_PCI_HOLE_SIZE ((1ULL << 32) - 16 * 1024 * 1024)
 
 #define MAX_VIRTIO_CONSOLES 1
 #define MAX_SCLP_CONSOLES 1
@@ -213,6 +214,7 @@ static NotifierList machine_init_done_notifiers =
 
 static bool tcg_allowed = true;
 bool xen_allowed;
+uint64_t pci_hole_min_size;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
 static int tcg_tb_size;
@@ -378,6 +380,10 @@ static QemuOptsList qemu_machine_opts = {
             .name = "firmware",
             .type = QEMU_OPT_STRING,
             .help = "firmware image",
+        }, {
+            .name = "pci_hole_min_size",
+            .type = QEMU_OPT_SIZE,
+            .help = "minimum size of PCI mmio hole below 4G",
         },
         { /* End of list */ }
     },
@@ -4050,6 +4056,16 @@ int main(int argc, char **argv, char **envp)
     initrd_filename = qemu_opt_get(machine_opts, "initrd");
     kernel_cmdline = qemu_opt_get(machine_opts, "append");
     bios_name = qemu_opt_get(machine_opts, "firmware");
+    pci_hole_min_size = qemu_opt_get_size(machine_opts,
+                                          "pci_hole_min_size",
+                                          pci_hole_min_size);
+    if (pci_hole_min_size > MAX_PCI_HOLE_SIZE) {
+        fprintf(stderr,
+                "%s: pci_hole_min_size=%llu too big, adjusted to %llu\n",
+                __func__, (unsigned long long) pci_hole_min_size,
+                MAX_PCI_HOLE_SIZE);
+        pci_hole_min_size = MAX_PCI_HOLE_SIZE;
+    }
 
     boot_order = machine->default_boot_order;
     opts = qemu_opts_find(qemu_find_opts("boot-opts"), NULL);
diff --git a/xen-all.c b/xen-all.c
index 4a594bd..dbe24c7 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -155,6 +155,15 @@ qemu_irq *xen_interrupt_controller_init(void)
 
 /* Memory Ops */
 
+static uint64_t mmio_hole_size(void)
+{
+    uint64_t sz = HVM_BELOW_4G_MMIO_LENGTH;
+    if (sz < pci_hole_min_size) {
+        sz = pci_hole_min_size;
+    }
+    return sz;
+}
+
 static void xen_ram_init(ram_addr_t ram_size, MemoryRegion **ram_memory_p)
 {
     MemoryRegion *sysmem = get_system_memory();
@@ -162,23 +171,20 @@ static void xen_ram_init(ram_addr_t ram_size, MemoryRegion **ram_memory_p)
     ram_addr_t block_len;
 
     block_len = ram_size;
-    if (ram_size >= HVM_BELOW_4G_RAM_END) {
-        /* Xen does not allocate the memory continuously, and keep a hole at
-         * HVM_BELOW_4G_MMIO_START of HVM_BELOW_4G_MMIO_LENGTH
+    below_4g_mem_size = (1ULL << 32) - mmio_hole_size();
+    if (ram_size < below_4g_mem_size) {
+        below_4g_mem_size = ram_size;
+    } else {
+        above_4g_mem_size = ram_size - below_4g_mem_size;
+        /* Xen does not allocate the memory continuously, and keep a hole of
+         * size mmio_hole_size().
          */
-        block_len += HVM_BELOW_4G_MMIO_LENGTH;
+        block_len += mmio_hole_size();
     }
     memory_region_init_ram(&ram_memory, NULL, "xen.ram", block_len);
     *ram_memory_p = &ram_memory;
     vmstate_register_ram_global(&ram_memory);
 
-    if (ram_size >= HVM_BELOW_4G_RAM_END) {
-        above_4g_mem_size = ram_size - HVM_BELOW_4G_RAM_END;
-        below_4g_mem_size = HVM_BELOW_4G_RAM_END;
-    } else {
-        below_4g_mem_size = ram_size;
-    }
-
     memory_region_init_alias(&ram_640k, NULL, "xen.ram.640k",
                              &ram_memory, 0, 0xa0000);
     memory_region_add_subregion(sysmem, 0, &ram_640k);
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 22:32:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 22:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ9VI-00061W-KQ; Thu, 27 Feb 2014 22:32:32 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJ9VH-00061K-AC
	for xen-devel@lists.xensource.com; Thu, 27 Feb 2014 22:32:31 +0000
Received: from [193.109.254.147:27701] by server-13.bemta-14.messagelabs.com
	id 21/6E-01226-EFCBF035; Thu, 27 Feb 2014 22:32:30 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393540348!7324414!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22435 invoked from network); 27 Feb 2014 22:32:29 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 22:32:29 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 27 Feb 2014 22:32:27 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,557,1389744000"; d="scan'208";a="662557909"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi02.verizon.com with ESMTP; 27 Feb 2014 22:32:26 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xensource.com,
	qemu-devel@nongnu.org
Date: Thu, 27 Feb 2014 17:32:23 -0500
Message-Id: <1393540343-11089-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Don Slutz <dslutz@verizon.com>, Anthony Liguori <aliguori@amazon.com>,
	"Michael S. Tsirkin" <mst@redhat.com>
Subject: [Xen-devel] [PATCH 1/1] vl.c: Add pci_hole_min_size machine option.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows growing the PCI hole below 4G to the size needed, via a new pci_hole_min_size machine option.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 hw/i386/pc_piix.c    | 14 ++++++++++++--
 hw/i386/pc_q35.c     | 14 ++++++++++++--
 include/hw/i386/pc.h |  3 +++
 vl.c                 | 16 ++++++++++++++++
 xen-all.c            | 28 +++++++++++++++++-----------
 5 files changed, 60 insertions(+), 15 deletions(-)
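The core of the change in pc_piix.c and pc_q35.c is a small adjustment: if the gap between the lowmem boundary and 4G is smaller than the requested minimum PCI hole, the boundary is pulled down so the hole fits. A minimal standalone sketch of that arithmetic follows; the helper name `adjust_lowmem` is mine (QEMU open-codes this logic inline), but the comparison matches the patch.

```c
#include <stdint.h>

/* Sketch of the lowmem adjustment added in pc_piix.c/pc_q35.c:
 * if the space between lowmem and 4G is smaller than the requested
 * minimum PCI hole, lower the lowmem boundary so the hole fits.
 * Helper name is hypothetical; the patch open-codes this. */
static uint64_t adjust_lowmem(uint64_t lowmem, uint64_t pci_hole_min_size)
{
    if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
        lowmem = (1ULL << 32) - pci_hole_min_size;
    }
    return lowmem;
}
```

For example, with the default piix lowmem of 0xe0000000 (a 512 MB hole), requesting a 1 GB hole moves lowmem down to 0xc0000000, while a 256 MB request leaves it unchanged.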

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index d5dc1ef..58e273d 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -95,6 +95,7 @@ static void pc_init1(QEMUMachineInitArgs *args,
     DeviceState *icc_bridge;
     FWCfgState *fw_cfg = NULL;
     PcGuestInfo *guest_info;
+    ram_addr_t lowmem = 0xe0000000;
 
     if (xen_enabled() && xen_hvm_init(&ram_memory) != 0) {
         fprintf(stderr, "xen hardware virtual machine initialisation failed\n");
@@ -111,6 +112,11 @@ static void pc_init1(QEMUMachineInitArgs *args,
         kvmclock_create();
     }
 
+    /* Adjust for pci_hole_min_size */
+    if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+        lowmem = (1ULL << 32) - pci_hole_min_size;
+    }
+
     /* Check whether RAM fits below 4G (leaving 1/2 GByte for IO memory).
      * If it doesn't, we need to split it in chunks below and above 4G.
      * In any case, try to make sure that guest addresses aligned at
@@ -118,8 +124,12 @@ static void pc_init1(QEMUMachineInitArgs *args,
      * For old machine types, use whatever split we used historically to avoid
      * breaking migration.
      */
-    if (args->ram_size >= 0xe0000000) {
-        ram_addr_t lowmem = gigabyte_align ? 0xc0000000 : 0xe0000000;
+    if (args->ram_size >= lowmem) {
+        lowmem = gigabyte_align ? 0xc0000000 : 0xe0000000;
+        /* Adjust for pci_hole_min_size */
+        if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+            lowmem = (1ULL << 32) - pci_hole_min_size;
+        }
         above_4g_mem_size = args->ram_size - lowmem;
         below_4g_mem_size = lowmem;
     } else {
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index a7f6260..a491f6a 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -82,6 +82,7 @@ static void pc_q35_init(QEMUMachineInitArgs *args)
     PCIDevice *ahci;
     DeviceState *icc_bridge;
     PcGuestInfo *guest_info;
+    ram_addr_t lowmem = 0xb0000000;
 
     if (xen_enabled() && xen_hvm_init(&ram_memory) != 0) {
         fprintf(stderr, "xen hardware virtual machine initialisation failed\n");
@@ -97,6 +98,11 @@ static void pc_q35_init(QEMUMachineInitArgs *args)
 
     kvmclock_create();
 
+    /* Adjust for pci_hole_min_size */
+    if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+        lowmem = (1ULL << 32) - pci_hole_min_size;
+    }
+
     /* Check whether RAM fits below 4G (leaving 1/2 GByte for IO memory
      * and 256 Mbytes for PCI Express Enhanced Configuration Access Mapping
      * also known as MMCFG).
@@ -106,8 +112,12 @@ static void pc_q35_init(QEMUMachineInitArgs *args)
      * For old machine types, use whatever split we used historically to avoid
      * breaking migration.
      */
-    if (args->ram_size >= 0xb0000000) {
-        ram_addr_t lowmem = gigabyte_align ? 0x80000000 : 0xb0000000;
+    if (args->ram_size >= lowmem) {
+        lowmem = gigabyte_align ? 0x80000000 : 0xb0000000;
+        /* Adjust for pci_hole_min_size */
+        if (pci_hole_min_size > ((1ULL << 32) - lowmem)) {
+            lowmem = (1ULL << 32) - pci_hole_min_size;
+        }
         above_4g_mem_size = args->ram_size - lowmem;
         below_4g_mem_size = lowmem;
     } else {
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 9010246..43ea04b 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -204,6 +204,9 @@ int isa_vga_mm_init(hwaddr vram_base,
                     hwaddr ctrl_base, int it_shift,
                     MemoryRegion *address_space);
 
+/* vl.c */
+extern uint64_t pci_hole_min_size;
+
 /* ne2000.c */
 static inline bool isa_ne2000_init(ISABus *bus, int base, int irq, NICInfo *nd)
 {
diff --git a/vl.c b/vl.c
index 1d27b34..32266d3 100644
--- a/vl.c
+++ b/vl.c
@@ -123,6 +123,7 @@ int main(int argc, char **argv)
 #include "qom/object_interfaces.h"
 
 #define DEFAULT_RAM_SIZE 128
+#define MAX_PCI_HOLE_SIZE ((1ULL << 32) - 16 * 1024 * 1024)
 
 #define MAX_VIRTIO_CONSOLES 1
 #define MAX_SCLP_CONSOLES 1
@@ -213,6 +214,7 @@ static NotifierList machine_init_done_notifiers =
 
 static bool tcg_allowed = true;
 bool xen_allowed;
+uint64_t pci_hole_min_size;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
 static int tcg_tb_size;
@@ -378,6 +380,10 @@ static QemuOptsList qemu_machine_opts = {
             .name = "firmware",
             .type = QEMU_OPT_STRING,
             .help = "firmware image",
+        }, {
+            .name = "pci_hole_min_size",
+            .type = QEMU_OPT_SIZE,
+            .help = "minimum size of PCI mmio hole below 4G",
         },
         { /* End of list */ }
     },
@@ -4050,6 +4056,16 @@ int main(int argc, char **argv, char **envp)
     initrd_filename = qemu_opt_get(machine_opts, "initrd");
     kernel_cmdline = qemu_opt_get(machine_opts, "append");
     bios_name = qemu_opt_get(machine_opts, "firmware");
+    pci_hole_min_size = qemu_opt_get_size(machine_opts,
+                                          "pci_hole_min_size",
+                                          pci_hole_min_size);
+    if (pci_hole_min_size > MAX_PCI_HOLE_SIZE) {
+        fprintf(stderr,
+                "%s: pci_hole_min_size=%llu too big, adjusted to %llu\n",
+                __func__, (unsigned long long) pci_hole_min_size,
+                MAX_PCI_HOLE_SIZE);
+        pci_hole_min_size = MAX_PCI_HOLE_SIZE;
+    }
 
     boot_order = machine->default_boot_order;
     opts = qemu_opts_find(qemu_find_opts("boot-opts"), NULL);
diff --git a/xen-all.c b/xen-all.c
index 4a594bd..dbe24c7 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -155,6 +155,15 @@ qemu_irq *xen_interrupt_controller_init(void)
 
 /* Memory Ops */
 
+static uint64_t mmio_hole_size(void)
+{
+    uint64_t sz = HVM_BELOW_4G_MMIO_LENGTH;
+    if (sz < pci_hole_min_size) {
+        sz = pci_hole_min_size;
+    }
+    return sz;
+}
+
 static void xen_ram_init(ram_addr_t ram_size, MemoryRegion **ram_memory_p)
 {
     MemoryRegion *sysmem = get_system_memory();
@@ -162,23 +171,20 @@ static void xen_ram_init(ram_addr_t ram_size, MemoryRegion **ram_memory_p)
     ram_addr_t block_len;
 
     block_len = ram_size;
-    if (ram_size >= HVM_BELOW_4G_RAM_END) {
-        /* Xen does not allocate the memory continuously, and keep a hole at
-         * HVM_BELOW_4G_MMIO_START of HVM_BELOW_4G_MMIO_LENGTH
+    below_4g_mem_size = (1ULL << 32) - mmio_hole_size();
+    if (ram_size < below_4g_mem_size) {
+        below_4g_mem_size = ram_size;
+    } else {
+        above_4g_mem_size = ram_size - below_4g_mem_size;
+        /* Xen does not allocate the memory continuously, and keeps a hole of
+         * size mmio_hole_size().
          */
-        block_len += HVM_BELOW_4G_MMIO_LENGTH;
+        block_len += mmio_hole_size();
     }
     memory_region_init_ram(&ram_memory, NULL, "xen.ram", block_len);
     *ram_memory_p = &ram_memory;
     vmstate_register_ram_global(&ram_memory);
 
-    if (ram_size >= HVM_BELOW_4G_RAM_END) {
-        above_4g_mem_size = ram_size - HVM_BELOW_4G_RAM_END;
-        below_4g_mem_size = HVM_BELOW_4G_RAM_END;
-    } else {
-        below_4g_mem_size = ram_size;
-    }
-
     memory_region_init_alias(&ram_640k, NULL, "xen.ram.640k",
                              &ram_memory, 0, 0xa0000);
     memory_region_add_subregion(sysmem, 0, &ram_640k);
-- 
1.8.4
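The xen-all.c hunk above reworks the RAM split so that the (possibly enlarged) MMIO hole is honored: everything below 4G minus the hole stays as lowmem, and the remainder is relocated above 4G. A minimal sketch of the resulting split, with hypothetical names (the real code assigns below_4g_mem_size/above_4g_mem_size in place, and the hole is max(HVM_BELOW_4G_MMIO_LENGTH, pci_hole_min_size)):

```c
#include <stdint.h>

struct ram_split {
    uint64_t below_4g; /* RAM mapped below the 4G boundary */
    uint64_t above_4g; /* RAM relocated above 4G */
};

/* Sketch of the split computed by the patched xen_ram_init():
 * lowmem is capped at 4G minus the MMIO hole; any excess RAM
 * is placed above 4G. Names here are illustrative, not QEMU's. */
static struct ram_split split_ram(uint64_t ram_size, uint64_t mmio_hole)
{
    struct ram_split s = { .below_4g = (1ULL << 32) - mmio_hole,
                           .above_4g = 0 };
    if (ram_size < s.below_4g) {
        s.below_4g = ram_size;   /* everything fits below the hole */
    } else {
        s.above_4g = ram_size - s.below_4g;
    }
    return s;
}
```

With a 256 MB hole (0x10000000) and 4 GB of guest RAM, 0xf0000000 bytes stay below 4G and 0x10000000 bytes move above it; a 1 GB guest fits entirely below.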


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 22:44:55 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 22:44:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ9hA-0006SH-A7; Thu, 27 Feb 2014 22:44:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <lars.kurth.xen@gmail.com>)
	id 1WJ9h9-0006Rv-8I; Thu, 27 Feb 2014 22:44:47 +0000
Received: from [193.109.254.147:14879] by server-10.bemta-14.messagelabs.com
	id AD/FD-10711-EDFBF035; Thu, 27 Feb 2014 22:44:46 +0000
X-Env-Sender: lars.kurth.xen@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393541083!7375382!1
X-Originating-IP: [209.85.160.46]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24259 invoked from network); 27 Feb 2014 22:44:45 -0000
Received: from mail-pb0-f46.google.com (HELO mail-pb0-f46.google.com)
	(209.85.160.46)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 22:44:45 -0000
Received: by mail-pb0-f46.google.com with SMTP id rq2so2080904pbb.33
	for <multiple recipients>; Thu, 27 Feb 2014 14:44:43 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:message-id:date:from:reply-to:user-agent:mime-version:to:cc
	:subject:references:in-reply-to:content-type;
	bh=rjxJndkq86V5fwWAwnuwpj49Bmb9bF4tuGXc4aCbeKk=;
	b=Wq+Jnl5sioX00sGnMYCICg7YlkTBMoQ9B2wD6M8UoIdi9wF15kcwo7wbl2wHn00uJL
	BthpmZwxGau80ylUsgk0ca6v0PZ74bp9FydcNwxv9FrNq6l7ceNdyMF18z6YbO6YCGrM
	sK4eIxacgJsLOV7zbvvnyK61sIEkQhjCYGYNTPrGYtRbA3g3Qipp/ePJSFnTS2ZNK3AZ
	nnITfA8tCpOCOn1NZm9qgzX+DKKgWNXYzmpQ1o9bf0HVOJ2TqKNDbmqCcJb91PPUUzni
	RVVJ1pIX3vLF+NMCy5bmvEkQAHPV5V1di0sxuuZoUDOr2zc0nWVlphjOTFVuhQWBlpxW
	ufrg==
X-Received: by 10.68.4.232 with SMTP id n8mr16015160pbn.114.1393541082849;
	Thu, 27 Feb 2014 14:44:42 -0800 (PST)
Received: from [172.16.26.11] ([121.211.204.213])
	by mx.google.com with ESMTPSA id
	ou9sm17117908pbc.30.2014.02.27.14.44.40 for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 14:44:42 -0800 (PST)
Message-ID: <530FBFD6.8040905@xen.org>
Date: Thu, 27 Feb 2014 22:44:38 +0000
From: Lars Kurth <lars.kurth@xen.org>
User-Agent: Mozilla/5.0 (Windows NT 6.1;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>, 
	"xen-api@lists.xen.org" <xen-api@lists.xen.org>,
	"mirageos-devel@lists.xenproject.org" <mirageos-devel@lists.xenproject.org>
References: <530C9CD0.9070508@xen.org>
In-Reply-To: <530C9CD0.9070508@xen.org>
Cc: Russell Pavlicek <russell.pavlicek@citrix.com>
Subject: Re: [Xen-devel] Dates and Location Xen Project Hackathon,
 Xen Project Developer Summit & Meeting
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: lars.kurth@xen.org
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1326117665885674654=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============1326117665885674654==
Content-Type: multipart/alternative;
 boundary="------------020008090104000901060101"

This is a multi-part message in MIME format.
--------------020008090104000901060101
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 25/02/2014 13:38, Lars Kurth wrote:
> Hi all,
>
> before I go on holiday, just a quick note on the 2014 event schedule. 
> This only just came together, but you may want to look into getting 
> travel approvals.
>
> == Xen Project Hackathon, London, late May (exact dates TBD) ==
> Location: London, at Rackspace's offices
> Dates: exact dates TBD, but we are targeting a Thu/Fri at the end 
> of May
> URL: http://wiki.xenproject.org/wiki/Hackathon/May2014
Hi all, the dates have been confirmed as May 29-30.
Regards
Lars


--------------020008090104000901060101--


--===============1326117665885674654==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1326117665885674654==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 23:03:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 23:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJ9yu-0007A5-Co; Thu, 27 Feb 2014 23:03:08 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <etrudeau@broadcom.com>) id 1WJ9ys-00079t-Ul
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 23:03:07 +0000
Received: from [85.158.137.68:22899] by server-8.bemta-3.messagelabs.com id
	77/67-16039-A24CF035; Thu, 27 Feb 2014 23:03:06 +0000
X-Env-Sender: etrudeau@broadcom.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393542182!4712208!1
X-Originating-IP: [216.31.210.64]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12599 invoked from network); 27 Feb 2014 23:03:03 -0000
Received: from mail-gw3-out.broadcom.com (HELO mail-gw3-out.broadcom.com)
	(216.31.210.64) by server-3.tower-31.messagelabs.com with SMTP;
	27 Feb 2014 23:03:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,557,1389772800"; d="scan'208,217";a="16761244"
Received: from irvexchcas08.broadcom.com (HELO
	IRVEXCHCAS08.corp.ad.broadcom.com) ([10.9.208.57])
	by mail-gw3-out.broadcom.com with ESMTP; 27 Feb 2014 15:14:43 -0800
Received: from SJEXCHCAS07.corp.ad.broadcom.com (10.16.203.16) by
	IRVEXCHCAS08.corp.ad.broadcom.com (10.9.208.57) with Microsoft SMTP
	Server (TLS) id 14.3.174.1; Thu, 27 Feb 2014 15:03:01 -0800
Received: from SJEXCHMB09.corp.ad.broadcom.com ([fe80::3da7:665e:cc78:181f])
	by SJEXCHCAS07.corp.ad.broadcom.com ([::1]) with mapi id 14.03.0174.001;
	Thu, 27 Feb 2014 15:03:01 -0800
From: Eric Trudeau <etrudeau@broadcom.com>
To: Viktor Kleinik <viktor.kleinik@globallogic.com>
Thread-Topic: [Xen-devel] ARM: access to iomem and HW IRQ
Thread-Index: AQHPM74WcrCQSMHRDEiH6tQL9Y9lQZrJIMTAgAC70oD//9wOYQ==
Date: Thu, 27 Feb 2014 23:03:01 +0000
Message-ID: <0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>,
	<CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>
In-Reply-To: <CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
MIME-Version: 1.0
Cc: Julien Grall <julien.grall@citrix.com>,
	Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8188376107821367081=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8188376107821367081==
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_0873BA48F5C54C4D93AE5CB967FF35E1broadcomcom_"

--_000_0873BA48F5C54C4D93AE5CB967FF35E1broadcomcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable


On Feb 27, 2014, at 1:10 PM, "Viktor Kleinik" <viktor.kleinik@globallogic.com<mailto:viktor.kleinik@globallogic.com>> wrote:

Thank you all for your responses.

I will try those changes on our platform.
Are you planning to push the implementation of the
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into an
official Xen release?

Regards,
Victor

I don't expect to push the changes up. If you want to submit, please go ahead.

On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com<mailto:etrudeau@broadcom.com>> wrote:
> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com<mailto:stefano.stabellini@eu.citrix.com>]
> Sent: Thursday, February 27, 2014 8:16 AM
> To: Dario Faggioli
> Cc: Viktor Kleinik; xen-devel@lists.xen.org<mailto:xen-devel@lists.xen.org>; Arianna Avanzini; Stefano Stabellini;
> Julien Grall; Eric Trudeau
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
>
> On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > Hi all,
> > >
> > Hi,
> >
> > > Does anyone know something about future plans to implement
> > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > >
> > I think Arianna is working on an implementation of the former
> > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > list soon, isn't it so, Arianna?
>
> Eric Trudeau did some work in the area too:
>
> http://marc.info/?l=xen-devel&m=137338996422503
> http://marc.info/?l=xen-devel&m=137365750318936

I checked our repo, and the route-IRQ-to-DomU changes in the second patch URL Stefano provided above are up-to-date with what we have been using on our platforms.  We made no further changes after that patch, i.e. we left the 100 msec max wait for a domain to finish an ISR when destroying it.

We also added support for a DomU to map in I/O memory with the iomem configuration parameter.  Unfortunately, I don't have time to provide an official patch against recent Xen upstream code, but below is a patch based, I'm afraid, on a commit from last October (d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29).
I hope this is helpful, because it is the best I can do at this time.

-----------------

 tools/libxl/libxl_create.c |  5 +++--
 xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b320d3..53ed52e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);

-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>

 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* removing i/o memory is not implemented yet */
+        if (!add) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       (gfn + nr_mfns) << PAGE_SHIFT,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }

 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
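The XEN_DOMCTL_memory_mapping handler in the patch above guards against unsigned overflow: a caller-supplied range whose end wraps past the top of the address space must be rejected before any permissions are granted. A small sketch of that check (the helper name is mine; the patch open-codes it in the -EINVAL test):

```c
/* Sketch of the overflow ("wrap") guard in the memory_mapping handler:
 * a range [start, start + nr - 1] is rejected when the end, computed
 * in unsigned arithmetic, wraps around and lands below the start.
 * Helper name is hypothetical. */
static int range_wraps(unsigned long start, unsigned long nr)
{
    return (start + nr - 1) < start;
}
```

The handler applies this to both the mfn and gfn ranges; the x86 version additionally rejects ranges exceeding paddr_bits, which the patch leaves as an open question for ARM.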


--_000_0873BA48F5C54C4D93AE5CB967FF35E1broadcomcom_--


--===============8188376107821367081==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8188376107821367081==--


&#43; &nbsp; &nbsp; &nbsp; &nbsp;break;<br>
&#43; &nbsp; &nbsp;}<br>
&#43;<br>
&#43; &nbsp; &nbsp;if ( copyback &amp;&amp; __copy_to_guest(u_domctl, domct=
l, 1) )<br>
&#43; &nbsp; &nbsp; &nbsp; &nbsp;ret =3D -EFAULT;<br>
&#43;<br>
&#43; &nbsp; &nbsp;return ret;<br>
&nbsp;}<br>
<br>
&nbsp;void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)<br>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</body>
</html>

--_000_0873BA48F5C54C4D93AE5CB967FF35E1broadcomcom_--


--===============8188376107821367081==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8188376107821367081==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 23:39:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 23:39:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJAY1-00087r-GE; Thu, 27 Feb 2014 23:39:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WJAXz-00087g-80
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 23:39:23 +0000
Received: from [193.109.254.147:17406] by server-8.bemta-14.messagelabs.com id
	70/ED-18529-AACCF035; Thu, 27 Feb 2014 23:39:22 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393544360!7348643!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20977 invoked from network); 27 Feb 2014 23:39:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 23:39:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,558,1389744000"; d="scan'208,217";a="9869920"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 27 Feb 2014 23:39:19 +0000
Received: from AMSPEX01CL02.citrite.net ([169.254.7.42]) by
	AMSPEX01CL03.citrite.net ([169.254.8.48]) with mapi id 14.02.0342.004;
	Fri, 28 Feb 2014 00:39:18 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Eric Trudeau <etrudeau@broadcom.com>, Viktor Kleinik
	<viktor.kleinik@globallogic.com>
Thread-Topic: [Xen-devel] ARM: access to iomem and HW IRQ
Thread-Index: AQHPM74OQlE6mHVs6kqPAmbaGPtwuJrJE20AgAAySoCAAGIpgIAAGue2
Date: Thu, 27 Feb 2014 23:39:18 +0000
Message-ID: <bdbwiri5n1ohxkmfi9y58mr4.1393544349845@email.android.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>,
	<CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>,
	<0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
In-Reply-To: <0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
MIME-Version: 1.0
X-DLP: AMS1
Cc: Julien Grall <julien.grall@citrix.com>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] R:  ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0102354970778944571=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0102354970778944571==
Content-Language: en-GB
Content-Type: multipart/alternative;
	boundary="_000_bdbwiri5n1ohxkmfi9y58mr41393544349845emailandroidcom_"

--_000_bdbwiri5n1ohxkmfi9y58mr41393544349845emailandroidcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

As I said, Arianna is doing something very similar... perhaps she can merge her and your work and try to upstream it properly, in the next few days... Arianna?

Regards,
Dario


Sent from Samsung Mobile


-------- Original message --------
From: Eric Trudeau
Date: 28/02/2014 00:03 (GMT+01:00)
To: Viktor Kleinik
Cc: Stefano Stabellini, Dario Faggioli, xen-devel@lists.xen.org, Arianna Avanzini, Julien Grall
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ


On Feb 27, 2014, at 1:10 PM, "Viktor Kleinik" <viktor.kleinik@globallogic.com> wrote:

Thank you all for your responses.

I will try those changes on our platform.
Are you planning to push the implementation of the
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into an
official Xen release?

Regards,
Victor

I don't expect to push the changes up. If you want to submit, please go ahead.

On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com> wrote:
> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Thursday, February 27, 2014 8:16 AM
> To: Dario Faggioli
> Cc: Viktor Kleinik; xen-devel@lists.xen.org; Arianna Avanzini; Stefano Stabellini;
> Julien Grall; Eric Trudeau
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
>
> On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > On Wed, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > Hi all,
> > >
> > Hi,
> >
> > > Does anyone know anything about future plans to implement
> > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > >
> > I think Arianna is working on an implementation of the former
> > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > list soon, isn't it so, Arianna?
>
> Eric Trudeau did some work in the area too:
>
> http://marc.info/?l=xen-devel&m=137338996422503
> http://marc.info/?l=xen-devel&m=137365750318936

I checked our repo, and the changes to route IRQs to DomUs in the second patch URL Stefano provided are up to date with what we have been using on our platforms.  We made no further changes after that patch, i.e. we left the 100 msec maximum wait for a domain to finish an ISR when destroying it.

We also added support for a DomU to map in I/O memory with the iomem configuration parameter.  Unfortunately, due to time constraints I cannot provide an official patch against recent Xen upstream code, but below is a patch based on a commit from last October, :( , d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.
I hope this is helpful, because it is the best I can do at this time.
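
For reference, the iomem configuration parameter mentioned above takes a list of hexadecimal page ranges in the guest config file; a sketch of what such an entry might look like (the address values are invented examples, not taken from the patch):

```
# Grant the guest 0x10 pages of I/O memory starting at machine page 0xfe000.
# Entries are hexadecimal page frame numbers: "START_PFN,NUM_PAGES".
iomem = [ "fe000,10" ]
```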

-----------------

 tools/libxl/libxl_create.c |  5 +++--
 xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b320d3..53ed52e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);

-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>

 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* removing i/o memory is not implemented yet */
+        if (!add) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       (gfn + nr_mfns) << PAGE_SHIFT,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }

 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)

--_000_bdbwiri5n1ohxkmfi9y58mr41393544349845emailandroidcom_--



--===============0102354970778944571==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 23:39:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 23:39:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJAY1-00087r-GE; Thu, 27 Feb 2014 23:39:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dario.faggioli@citrix.com>) id 1WJAXz-00087g-80
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 23:39:23 +0000
Received: from [193.109.254.147:17406] by server-8.bemta-14.messagelabs.com id
	70/ED-18529-AACCF035; Thu, 27 Feb 2014 23:39:22 +0000
X-Env-Sender: dario.faggioli@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393544360!7348643!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_50_60,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20977 invoked from network); 27 Feb 2014 23:39:20 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	27 Feb 2014 23:39:20 -0000
X-IronPort-AV: E=Sophos;i="4.97,558,1389744000"; d="scan'208,217";a="9869920"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 27 Feb 2014 23:39:19 +0000
Received: from AMSPEX01CL02.citrite.net ([169.254.7.42]) by
	AMSPEX01CL03.citrite.net ([169.254.8.48]) with mapi id 14.02.0342.004;
	Fri, 28 Feb 2014 00:39:18 +0100
From: Dario Faggioli <dario.faggioli@citrix.com>
To: Eric Trudeau <etrudeau@broadcom.com>, Viktor Kleinik
	<viktor.kleinik@globallogic.com>
Thread-Topic: [Xen-devel] ARM: access to iomem and HW IRQ
Thread-Index: AQHPM74OQlE6mHVs6kqPAmbaGPtwuJrJE20AgAAySoCAAGIpgIAAGue2
Date: Thu, 27 Feb 2014 23:39:18 +0000
Message-ID: <bdbwiri5n1ohxkmfi9y58mr4.1393544349845@email.android.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>
	<1393496069.3921.14.camel@Solace>
	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>
	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>,
	<CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>,
	<0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
In-Reply-To: <0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
MIME-Version: 1.0
X-DLP: AMS1
Cc: Julien Grall <julien.grall@citrix.com>,
	Arianna Avanzini <avanzini.arianna@gmail.com>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [Xen-devel] R:  ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0102354970778944571=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0102354970778944571==
Content-Language: en-GB
Content-Type: multipart/alternative;
	boundary="_000_bdbwiri5n1ohxkmfi9y58mr41393544349845emailandroidcom_"

--_000_bdbwiri5n1ohxkmfi9y58mr41393544349845emailandroidcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

As I said, Arianna is doing something very similar... perhaps she can merge her and your work and try to upstream it properly, in the next few days... Arianna?

Regards,
Dario


Sent from Samsung Mobile


-------- Original message --------
From: Eric Trudeau
Date: 28/02/2014 00:03 (GMT+01:00)
To: Viktor Kleinik
Cc: Stefano Stabellini, Dario Faggioli, xen-devel@lists.xen.org, Arianna Avanzini, Julien Grall
Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ


On Feb 27, 2014, at 1:10 PM, "Viktor Kleinik" <viktor.kleinik@globallogic.com> wrote:

Thank you all for your responses.

I will try those changes on our platform.
Are you planning to push the implementation of the
xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into an
official Xen release?

Regards,
Victor

I don't expect to push the changes up. If you want to submit them, please go ahead.

On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com> wrote:
> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com]
> Sent: Thursday, February 27, 2014 8:16 AM
> To: Dario Faggioli
> Cc: Viktor Kleinik; xen-devel@lists.xen.org; Arianna Avanzini; Stefano Stabellini;
> Julien Grall; Eric Trudeau
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
>
> On Thu, 27 Feb 2014, Dario Faggioli wrote:
> > On Wed, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
> > > Hi all,
> > >
> > Hi,
> >
> > > Does anyone know anything about future plans to implement the
> > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
> > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
> > >
> > I think Arianna is working on an implementation of the former
> > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
> > list soon, isn't that so, Arianna?
>
> Eric Trudeau did some work in this area too:
>
> http://marc.info/?l=xen-devel&m=137338996422503
> http://marc.info/?l=xen-devel&m=137365750318936

I checked our repo, and the IRQ-routing changes to DomUs in the second patch URL Stefano provided above are up to date with what we have been using on our platforms.  We made no further changes after that patch, i.e. we left the 100 ms maximum wait for a domain to finish an ISR when destroying it.

We also added support for a DomU to map in I/O memory via the iomem configuration parameter.  Unfortunately, due to time constraints I cannot provide an official patch against recent Xen upstream code, but below is a patch based on a commit from last October (d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29).
I hope this is helpful; it is the best I can do at this time.

-----------------

 tools/libxl/libxl_create.c |  5 +++--
 xen/arch/arm/domctl.c      | 74 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 76 insertions(+), 3 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 1b320d3..53ed52e 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc, libxl__multidev *multidev,
         LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
             domid, io->start, io->start + io->number - 1);

-        ret = xc_domain_iomem_permission(CTX->xch, domid,
-                                          io->start, io->number, 1);
+        ret = xc_domain_memory_mapping(CTX->xch, domid,
+                                       io->start, io->start,
+                                       io->number, 1);
         if (ret < 0) {
             LOGE(ERROR,
                  "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 851ee40..222aac9 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -10,11 +10,83 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <public/domctl.h>
+#include <xen/iocap.h>
+#include <xsm/xsm.h>
+#include <xen/paging.h>
+#include <xen/guest_access.h>

 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
-    return -ENOSYS;
+    long ret = 0;
+    bool_t copyback = 0;
+
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_memory_mapping:
+    {
+        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
+        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
+        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
+        int add = domctl->u.memory_mapping.add_mapping;
+
+        /* removing i/o memory is not implemented yet */
+        if (!add) {
+            ret = -ENOSYS;
+            break;
+        }
+        ret = -EINVAL;
+        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
+             /* x86 checks wrap based on paddr_bits which is not implemented on ARM? */
+             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits - PAGE_SHIFT)) || */
+             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
+            break;
+
+        ret = -EPERM;
+        if ( current->domain->domain_id != 0 )
+            break;
+
+        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
+        if ( ret )
+            break;
+
+        if ( add )
+        {
+            printk(XENLOG_G_INFO
+                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                   d->domain_id, gfn, mfn, nr_mfns);
+
+            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
+            if ( !ret && paging_mode_translate(d) )
+            {
+                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
+                                       (gfn + nr_mfns) << PAGE_SHIFT,
+                                       mfn << PAGE_SHIFT);
+                if ( ret )
+                {
+                    printk(XENLOG_G_WARNING
+                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
+                           d->domain_id, gfn, mfn, nr_mfns);
+                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
+                         is_hardware_domain(current->domain) )
+                        printk(XENLOG_ERR
+                               "memory_map: failed to deny dom%d access to [%lx,%lx]\n",
+                               d->domain_id, mfn, mfn + nr_mfns - 1);
+                }
+            }
+        }
+    }
+    break;
+
+    default:
+        ret = -ENOSYS;
+        break;
+    }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }

 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
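
For context, an illustrative sketch (not part of the patch above) of how a guest would use this: the xl `iomem` option takes hexadecimal page-frame numbers and a page count, and `irqs` routes hardware interrupts; the addresses and IRQ number below are made up.

```text
# Hypothetical DomU config fragment: map two pages of device MMIO
# starting at machine address 0x47000000 (page frame 0x47000),
# and route hardware IRQ 112 to the guest.
iomem = [ "47000,2" ]
irqs  = [ 112 ]
```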


--_000_bdbwiri5n1ohxkmfi9y58mr41393544349845emailandroidcom_--


--===============0102354970778944571==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0102354970778944571==--


From xen-devel-bounces@lists.xen.org Thu Feb 27 23:57:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 23:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJAp7-0000IW-Ih; Thu, 27 Feb 2014 23:57:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJAp5-0000IR-PG
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 23:57:04 +0000
Received: from [85.158.143.35:56048] by server-2.bemta-4.messagelabs.com id
	86/FC-04779-FC0DF035; Thu, 27 Feb 2014 23:57:03 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393545420!8877216!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18746 invoked from network); 27 Feb 2014 23:57:02 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 23:57:02 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 27 Feb 2014 23:56:59 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,558,1389744000"; d="scan'208";a="662612649"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi02.verizon.com with ESMTP; 27 Feb 2014 23:56:59 +0000
Message-ID: <530FD0CA.5080906@terremark.com>
Date: Thu, 27 Feb 2014 18:56:58 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
	<1393437647-16694-2-git-send-email-dslutz@verizon.com>
	<530F116F020000780011FC45@nat28.tlf.novell.com>
In-Reply-To: <530F116F020000780011FC45@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/14 04:20, Jan Beulich wrote:
>>>> On 26.02.14 at 19:00, Don Slutz <dslutz@verizon.com> wrote:
>> Currently in 32 bit mode the routine hpet_set_timer() will convert a
>> time in the past to a time in future.  This is done by the uint32_t
>> cast of diff.
>>
>> Even without this issue, hpet_tick_to_ns() does not support past
>> times.
>>
>> Real hardware does not support past times.
>>
>> So just do the same thing in 32 bit mode as 64 bit mode.
> While the change looks valid at the first glance, what I'm missing
> is an explanation of how the problem that the introduction of this
> fixed (c/s 13495:e2539ab3580a "[HVM] Fix slow wallclock in x64
> Vista") is now being taken care of (or why this is of no concern).
> That's pretty relevant considering for how long this code has been
> there without causing (known) problems to anyone.

OK, digging around (the git versions of those changesets):

commit f545359b1c54f59be9d7c27112a68c51c45b06b5
Date:   Thu Jan 18 18:54:28 2007 +0000
     [HVM] Fix slow wallclock in x64 Vista. This is due to confusing a


And one that changed how it worked:

commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac
Date:   Tue Jan 8 16:20:04 2008 +0000
    hvm: hpet: Fix overflow when converting to nanoseconds.


That is when past times started to be prevented, which may well have caused x64 Vista to have wallclock issues.

Next:

commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2
Date:   Sat May 24 09:27:03 2008 +0100
     hvm: Build guest timers on monotonic system time.


This commit likely did two things:
1) Made diff < 0 very unlikely.
2) Fixed the x64 Vista wallclock issues (again).

Looking closer at hpet_tick_to_ns() and doing some math, I get:


     h->stime_freq = S_TO_NS;
     h->hpet_to_ns_scale = ((S_TO_NS * STIME_PER_HPET_TICK) << 10) / h->stime_freq;

I.E.

     h->hpet_to_ns_scale = STIME_PER_HPET_TICK << 10;

And so:

#define hpet_tick_to_ns(h, tick)                        \
     ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
         ~0ULL : (tick) * (h)->hpet_to_ns_scale) >> 10))


Is really:

#define hpet_tick_to_ns(h, tick)                        \
     ((s_time_t)(((tick) > (h)->hpet_to_ns_limit) ?     \
         (~0ULL >> 10) : (tick) * STIME_PER_HPET_TICK))

And if you change to using a signed multiply, most of the time you will be fine.  If you want a more complex version that is "safer":

#define hpet_tick_to_ns(tick)                                   \
     ((s_time_t)(tick) >= 0 ?                                    \
       (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK >= 0 ?   \
        (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
        (s_time_t)(~0ULL >> 10) :                                \
       (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK < 0 ?    \
        (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
        0)

If the signed multiply overflows in the positive case, then the old max is returned.  Note: this can return larger values than the old max.

So I can re-work the patch to use this and still support past times.  Which path should I go with?

    -Don Slutz

> Jan
>
>> Without this change it is possible for an HVM guest running linux to
>> get the message:
>>
>> ..MP-BIOS bug: 8254 timer not connected to IO-APIC
>>
>> On the guest console(s); and will panic.
>>
>> Also Xen hypervisor console with be flooded with:
>>
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> ---
>>   xen/arch/x86/hvm/hpet.c | 13 +++----------
>>   1 file changed, 3 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>> index 4324b52..14b1a39 100644
>> --- a/xen/arch/x86/hvm/hpet.c
>> +++ b/xen/arch/x86/hvm/hpet.c
>> @@ -197,10 +197,6 @@ static void hpet_stop_timer(HPETState *h, unsigned int tn)
>>       hpet_get_comparator(h, tn);
>>   }
>>   
>> -/* the number of HPET tick that stands for
>> - * 1/(2^10) second, namely, 0.9765625 milliseconds */
>> -#define  HPET_TINY_TIME_SPAN  ((h->stime_freq >> 10) / STIME_PER_HPET_TICK)
>> -
>>   static void hpet_set_timer(HPETState *h, unsigned int tn)
>>   {
>>       uint64_t tn_cmp, cur_tick, diff;
>> @@ -231,14 +227,11 @@ static void hpet_set_timer(HPETState *h, unsigned int tn)
>>       diff = tn_cmp - cur_tick;
>>   
>>       /*
>> -     * Detect time values set in the past. This is hard to do for 32-bit
>> -     * comparators as the timer does not have to be set that far in the future
>> -     * for the counter difference to wrap a 32-bit signed integer. We fudge
>> -     * by looking for a 'small' time value in the past.
>> +     * Detect time values set in the past. Since hpet_tick_to_ns() does
>> +     * not handle this, use 0 for both 64 and 32 bit mode.
>>        */
>>       if ( (int64_t)diff < 0 )
>> -        diff = (timer_is_32bit(h, tn) && (-diff > HPET_TINY_TIME_SPAN))
>> -            ? (uint32_t)diff : 0;
>> +        diff = 0;
>>   
>>       if ( (tn <= 1) && (h->hpet.config & HPET_CFG_LEGACY) )
>>           /* if LegacyReplacementRoute bit is set, HPET specification requires
>> -- 
>> 1.8.4
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Thu Feb 27 23:57:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Feb 2014 23:57:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJAp7-0000IW-Ih; Thu, 27 Feb 2014 23:57:05 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJAp5-0000IR-PG
	for xen-devel@lists.xen.org; Thu, 27 Feb 2014 23:57:04 +0000
Received: from [85.158.143.35:56048] by server-2.bemta-4.messagelabs.com id
	86/FC-04779-FC0DF035; Thu, 27 Feb 2014 23:57:03 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-12.tower-21.messagelabs.com!1393545420!8877216!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18746 invoked from network); 27 Feb 2014 23:57:02 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-12.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 27 Feb 2014 23:57:02 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 27 Feb 2014 23:56:59 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,558,1389744000"; d="scan'208";a="662612649"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi02.verizon.com with ESMTP; 27 Feb 2014 23:56:59 +0000
Message-ID: <530FD0CA.5080906@terremark.com>
Date: Thu, 27 Feb 2014 18:56:58 -0500
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>, Don Slutz <dslutz@verizon.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
	<1393437647-16694-2-git-send-email-dslutz@verizon.com>
	<530F116F020000780011FC45@nat28.tlf.novell.com>
In-Reply-To: <530F116F020000780011FC45@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/14 04:20, Jan Beulich wrote:
>>>> On 26.02.14 at 19:00, Don Slutz <dslutz@verizon.com> wrote:
>> Currently in 32 bit mode the routine hpet_set_timer() will convert a
>> time in the past to a time in future.  This is done by the uint32_t
>> cast of diff.
>>
>> Even without this issue, hpet_tick_to_ns() does not support past
>> times.
>>
>> Real hardware does not support past times.
>>
>> So just do the same thing in 32 bit mode as 64 bit mode.
> While the change looks valid at the first glance, what I'm missing
> is an explanation of how the problem that the introduction of this
> fixed (c/s 13495:e2539ab3580a "[HVM] Fix slow wallclock in x64
> Vista") is now being taken care of (or why this is of no concern).
> That's pretty relevant considering for how long this code has been
> there without causing (known) problems to anyone.

Ok, digging around (the git version):

commit f545359b1c54f59be9d7c27112a68c51c45b06b5
Date:   Thu Jan 18 18:54:28 2007 +0000
     [HVM] Fix slow wallclock in x64 Vista. This is due to confusing a


And one that changed how it worked:

commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac
Date:   Tue Jan 8 16:20:04 2008 +0000
    hvm: hpet: Fix overflow when converting to nanoseconds.


This is where a past time was first prevented, which may well have caused x64 Vista to have wallclock issues again.

Next:

commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2
Date:   Sat May 24 09:27:03 2008 +0100
     hvm: Build guest timers on monotonic system time.


This likely did two things:
1) Made diff < 0 very unlikely
2) Fixed the x64 Vista wallclock issues (again)

Looking closer at hpet_tick_to_ns() and doing some math, I get:


     h->stime_freq = S_TO_NS;
     h->hpet_to_ns_scale = ((S_TO_NS * STIME_PER_HPET_TICK) << 10) / h->stime_freq;

I.e.,

     h->hpet_to_ns_scale = STIME_PER_HPET_TICK << 10;

And so:

#define hpet_tick_to_ns(h, tick)                        \
     ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
         ~0ULL : (tick) * (h)->hpet_to_ns_scale) >> 10))


Is really:

#define hpet_tick_to_ns(h, tick)                        \
     ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
         (~0ULL >> 10) : (tick) * STIME_PER_HPET_TICK)))

And if you change to using a signed multiply, most of the time you will be fine.  If you want a more complex version that is "safer":

#define hpet_tick_to_ns(tick)                                   \
     ((s_time_t)(tick) >= 0 ?                                    \
       (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK >= 0 ?   \
        (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
        (s_time_t)(~0ULL >> 10) :                                \
       (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK < 0 ?    \
        (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
        0)

If the signed multiply overflows in the positive case then the old max is returned.  Note: this can return larger values than the old max.

So I can rework the patch to use this and still support past times.  Which path should I go with?

    -Don Slutz

> Jan
>
>> Without this change it is possible for an HVM guest running linux to
>> get the message:
>>
>> ..MP-BIOS bug: 8254 timer not connected to IO-APIC
>>
>> on the guest console(s); the guest will then panic.
>>
>> Also the Xen hypervisor console will be flooded with:
>>
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>> vioapic.c:352:d1 Unsupported delivery mode 7
>>
>> Signed-off-by: Don Slutz <dslutz@verizon.com>
>> ---
>>   xen/arch/x86/hvm/hpet.c | 13 +++----------
>>   1 file changed, 3 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>> index 4324b52..14b1a39 100644
>> --- a/xen/arch/x86/hvm/hpet.c
>> +++ b/xen/arch/x86/hvm/hpet.c
>> @@ -197,10 +197,6 @@ static void hpet_stop_timer(HPETState *h, unsigned int tn)
>>       hpet_get_comparator(h, tn);
>>   }
>>   
>> -/* the number of HPET tick that stands for
>> - * 1/(2^10) second, namely, 0.9765625 milliseconds */
>> -#define  HPET_TINY_TIME_SPAN  ((h->stime_freq >> 10) / STIME_PER_HPET_TICK)
>> -
>>   static void hpet_set_timer(HPETState *h, unsigned int tn)
>>   {
>>       uint64_t tn_cmp, cur_tick, diff;
>> @@ -231,14 +227,11 @@ static void hpet_set_timer(HPETState *h, unsigned int tn)
>>       diff = tn_cmp - cur_tick;
>>   
>>       /*
>> -     * Detect time values set in the past. This is hard to do for 32-bit
>> -     * comparators as the timer does not have to be set that far in the future
>> -     * for the counter difference to wrap a 32-bit signed integer. We fudge
>> -     * by looking for a 'small' time value in the past.
>> +     * Detect time values set in the past. Since hpet_tick_to_ns() does
>> +     * not handle this, use 0 for both 64 and 32 bit mode.
>>        */
>>       if ( (int64_t)diff < 0 )
>> -        diff = (timer_is_32bit(h, tn) && (-diff > HPET_TINY_TIME_SPAN))
>> -            ? (uint32_t)diff : 0;
>> +        diff = 0;
>>   
>>       if ( (tn <= 1) && (h->hpet.config & HPET_CFG_LEGACY) )
>>           /* if LegacyReplacementRoute bit is set, HPET specification requires
>> -- 
>> 1.8.4
>
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 00:05:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 00:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJAxG-000185-LD; Fri, 28 Feb 2014 00:05:30 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <agraf@suse.de>) id 1WJAxE-000180-O4
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 00:05:28 +0000
Received: from [85.158.139.211:11801] by server-5.bemta-5.messagelabs.com id
	8C/8E-32749-7C2DF035; Fri, 28 Feb 2014 00:05:27 +0000
X-Env-Sender: agraf@suse.de
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393545926!6371878!1
X-Originating-IP: [195.135.220.15]
X-SpamReason: No, hits=0.2 required=7.0 tests=MIME_QP_LONG_LINE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14040 invoked from network); 28 Feb 2014 00:05:26 -0000
Received: from cantor2.suse.de (HELO mx2.suse.de) (195.135.220.15)
	by server-14.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 00:05:26 -0000
Received: from relay1.suse.de (charybdis-ext.suse.de [195.135.220.254])
	by mx2.suse.de (Postfix) with ESMTP id 5560175014;
	Fri, 28 Feb 2014 00:05:26 +0000 (UTC)
Mime-Version: 1.0 (1.0)
From: Alexander Graf <agraf@suse.de>
X-Mailer: iPhone Mail (11B554a)
In-Reply-To: <6039706.L1ZWvgjHF0@wuerfel>
Date: Fri, 28 Feb 2014 08:05:15 +0800
Message-Id: <CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
References: <20140226183454.GA14639@cbox> <11174980.YJ21cfsHoG@wuerfel>
	<5AA88E43-1A40-4409-9A56-334988483843@suse.de>
	<6039706.L1ZWvgjHF0@wuerfel>
To: Arnd Bergmann <arnd@arndb.de>
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Sepcification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> Am 28.02.2014 um 03:56 schrieb Arnd Bergmann <arnd@arndb.de>:
> 
>> On Thursday 27 February 2014 22:24:13 Alexander Graf wrote:
>> 
>> If you want to assign a platform device, you need to generate a respective
>> hardware description (fdt/dsdt) chunk in the hypervisor. You can't take
>> the host's description - it's too tightly coupled to the overall host layout.
> 
> But at that point, you need hardware specific drivers in both the hypervisor
> and in the guest OS.

In our case, you need hardware specific drivers in QEMU and the guest, correct.

> When you have that, why do you still care about a
> system specification? 

Because I don't want to go back to the system level definition. To me a peripheral is a peripheral - regardless of whether it is on a platform bus or a PCI bus. I want to leverage common ground and only add the few pieces that diverge from it.

> 
> Going back to the previous argument, since the hypervisor has to make up the
> description for the platform device itself, it won't matter whether the host
> is booted using DT or ACPI: the description that the hypervisor makes up for
> the guest has to match what the hypervisor uses to describe the rest of the
> guest environment, which is independent of what the host uses.

I agree 100%. This spec should be completely independent of the host.

The reason I brought up the host is that if you want to migrate an OS from physical to virtual, you may need to pass through devices to make this work. If your host driver developers only ever worked with ACPI, there's a good chance the drivers won't work in a pure dt environment.

Btw, the same argument applies the other way around as well. I don't believe we will get away with generating and mandating a single machine description environment.

> 
>> Imagine you get an AArch64 notebook with Windows on it. You want to run
>> Linux there, so your host needs to understand ACPI. Now you want to run
>> a Windows guest inside a VM, so you need ACPI in there again.
> 
> And you think that Windows is going to support a VM system specification
> we are writing here? Booting Windows RT in a virtual machine is certainly
> an interesting use case, but I think we will have to emulate a platform
> that WinRT supports then, rather than expect it to run on ours.

Point taken :). Though it is a real shame we are in that situation in the first place. x86 makes life a lot easier here.

> 
>> Replace Windows by "Linux with custom drivers" and you're in the same
>> situation even when you neglect Windows. Reality will be that we will
>> have fdt and acpi based systems.
> 
> We will however want to boot all sorts of guests in a standardized
> virtual environment:
> 
> * 32 bit Linux (since some distros don't support biarch or multiarch
>  on arm64) for running applications that are either binary-only
>  or not 64-bit safe.
> * 32-bit Android
> * big-endian Linux for running applications that are not endian-clean
>  (typically network stuff ported from powerpc or mipseb).
> * OS/v guests
> * NOMMU Linux
> * BSD based OSs
> * QNX
> * random other RTOSs
> 
> Most of these will not work with ACPI, or at least not in 32-bit mode.
> 64-bit Linux will obviously support both DT (always)

Unfortunately not

> and ACPI (optionally),
> depending on the platform, but for a specification like this, I think
> it's much easier to support fewer options, to make it easier for other
> guest OSs to ensure they actually run on any compliant hypervisor.

You're forgetting a few pretty important cases here:

* Enterprise grade Linux distribution that only supports ACPI
* Maybe WinRT if we can convince MS to use it
* Non-Linux with x86/ia64 heritage and thus ACPI support

If we want to run those, we need to expose ACPI tables.

Again, I think the only reasonable thing to do is to implement and expose both. That situation sucks, but we got into it ourselves ;).


Alex


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 00:18:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 00:18:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJB9t-0001Vh-1J; Fri, 28 Feb 2014 00:18:33 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJB9q-0001VV-SZ
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 00:18:31 +0000
Received: from [85.158.143.35:13942] by server-2.bemta-4.messagelabs.com id
	4F/F8-04779-6D5DF035; Fri, 28 Feb 2014 00:18:30 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393546708!8878268!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15297 invoked from network); 28 Feb 2014 00:18:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 00:18:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,558,1389744000"; d="scan'208";a="106479936"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 00:18:27 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 19:18:27 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJB9m-0005QJ-Qz;
	Fri, 28 Feb 2014 00:18:26 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJB9m-0003f9-Nh;
	Fri, 28 Feb 2014 00:18:26 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25321-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 00:18:26 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25321: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25321 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25321/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-oldkern            4 xen-build        fail in 25317 REGR. vs. 25266

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-win7-amd64 13 guest-localmigrate/x10    fail pass in 25317
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install     fail pass in 25317
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 25317

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop          fail in 25317 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop    fail in 25317 never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25317 never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-amd64-xl-win7-amd64 13 guest-localmigrate/x10    fail pass in 25317
 test-amd64-amd64-xl-qemuu-win7-amd64  7 windows-install     fail pass in 25317
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 25317

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop          fail in 25317 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop    fail in 25317 never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25317 never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would recursively reacquire the
    non-recursive "no_forking" mutex: atfork_lock uses it, as does
    sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 02:09:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJCsf-0000DA-6X; Fri, 28 Feb 2014 02:08:53 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJCsd-0000Ch-Gv
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 02:08:51 +0000
Received: from [85.158.137.68:48011] by server-8.bemta-3.messagelabs.com id
	7D/28-16039-1BFEF035; Fri, 28 Feb 2014 02:08:49 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393553327!3473296!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15945 invoked from network); 28 Feb 2014 02:08:48 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 02:08:48 -0000
X-IronPort-AV: E=Sophos;i="4.97,559,1389744000"; d="scan'208";a="106498506"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 02:08:46 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 21:08:45 -0500
Message-ID: <1393553322.20365.4.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Sourav Saha <souravsaha.work@gmail.com>
Date: Fri, 28 Feb 2014 02:08:42 +0000
In-Reply-To: <CADkf8UkxTF9DYt4i1VT+Y2Aprej8pujkAc_zcziTQ_vNuK8+oA@mail.gmail.com>
References: <CADkf8UkxTF9DYt4i1VT+Y2Aprej8pujkAc_zcziTQ_vNuK8+oA@mail.gmail.com>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] Xen PV Guest VM is not coming up on Ubuntu 12.04:
 Bad archive.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 17:31 +0530, Sourav Saha wrote:

> But when it asked me to choose a mirror of the ubuntu archive during
> installation, I selected United States and got the below error (bad
> archive).

I reckon this is either an error with the mirror you are using or with
the Ubuntu installer itself. In any case it seems likely given the
information presented so far that this is not a Xen *development* issue.
Please take this to the relevant user list -- probably Ubuntu's in the
first instance and if they determine this is a Xen related issue then
the Xen user list would be appropriate.

When you take this to the user list you will probably want to do as the
dialog suggests and investigate the content of /var/log/syslog.

Thanks,
Ian.




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 02:12:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJCwK-0000Xr-TU; Fri, 28 Feb 2014 02:12:40 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJCwJ-0000Xm-AO
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 02:12:39 +0000
Received: from [85.158.137.68:31200] by server-16.bemta-3.messagelabs.com id
	B2/E7-29917-690FF035; Fri, 28 Feb 2014 02:12:38 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393553556!4735517!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6565 invoked from network); 28 Feb 2014 02:12:37 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 02:12:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,559,1389744000"; d="scan'208";a="104877765"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 02:12:36 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 21:12:35 -0500
Message-ID: <1393553550.20365.5.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: "Daniel P. Berrange" <berrange@redhat.com>
Date: Fri, 28 Feb 2014 02:12:30 +0000
In-Reply-To: <20140226150100.GC6046@redhat.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
	<20140226140033.GA15045@aepfle.de>
	<1393426513.9640.18.camel@dagon.hellion.org.uk>
	<20140226150100.GC6046@redhat.com>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, stefano.stabellini@eu.citrix.com,
	libvir-list@redhat.com, julien.grall@linaro.org, tim@xen.org,
	xen-devel@lists.xen.org, Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 15:01 +0000, Daniel P. Berrange wrote:
> Yep, if ARM has a PV console, then we'd need to add a tiny bit to the XML
> to allow us to configure that explicitly, similar to how we do for KVM's
> virtio-console support.

Do you mean I need to add something to the XML config snippet, or I need
to add some special handling in the XML parser/consumer?

I've grepped around the virtio-console stuff and I'm none the wiser.

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 02:25:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJD8W-0000yj-EU; Fri, 28 Feb 2014 02:25:16 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mukesh.rathor@oracle.com>) id 1WJD8U-0000ye-F7
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 02:25:14 +0000
Received: from [85.158.139.211:47470] by server-4.bemta-5.messagelabs.com id
	37/D4-08092-983FF035; Fri, 28 Feb 2014 02:25:13 +0000
X-Env-Sender: mukesh.rathor@oracle.com
X-Msg-Ref: server-8.tower-206.messagelabs.com!1393554310!6780178!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30983 invoked from network); 28 Feb 2014 02:25:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 02:25:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1S2P8vP002701
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 02:25:08 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1S2P7vB011047
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 02:25:07 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S2P6bK006167; Fri, 28 Feb 2014 02:25:06 GMT
Received: from mantra.us.oracle.com (/130.35.68.95)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 18:25:06 -0800
Date: Thu, 27 Feb 2014 18:25:05 -0800
From: Mukesh Rathor <mukesh.rathor@oracle.com>
To: Tim Deegan <tim@xen.org>
Message-ID: <20140227182505.160841e8@mantra.us.oracle.com>
In-Reply-To: <20140227105605.GB53925@deinos.phlegethon.org>
References: <1393290237-28427-1-git-send-email-mukesh.rathor@oracle.com>
	<1393290237-28427-2-git-send-email-mukesh.rathor@oracle.com>
	<20140227105605.GB53925@deinos.phlegethon.org>
Organization: Oracle Corporation
X-Mailer: Claws Mail 3.9.3 (GTK+ 2.24.22; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [V1 PATCH 1/3] pvh: early return from
 hvm_hap_nested_page_fault
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 27 Feb 2014 11:56:05 +0100
Tim Deegan <tim@xen.org> wrote:

> At 17:03 -0800 on 24 Feb (1393257835), Mukesh Rathor wrote:
> > pvh does not support nested hvm at present. As such, return if pvh.
> 
> Nack, sorry.  
> 
> 1: this is the nested pagefault (i.e. EPT/NPT) handler, not part of
> the nested HVM code.

Yeah, Jan already pointed that out. I just quickly whipped it up at the
last minute after seeing a crash in hvm_hap_nested_page_fault while
running xentrace on PVH. Anyway, I'll take a look at it.

> 2: If there _is_ a problem with the interaction between nested HVM and
> PVH, the right way to fix it is to enforce that they can't both be
> enabled at the same time, and then make sure all the nested-HVM code
> properly checks for being enabled.  It's not a good idea to scatter
> PVH checks all over code for unrelated features.

Yup, thanks.

Mukesh

> 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > ---
> >  xen/arch/x86/hvm/hvm.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 69f7e74..a4a3dcf 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
> >      int sharing_enomem = 0;
> >      mem_event_request_t *req_ptr = NULL;
> >  
> > +    if ( is_pvh_vcpu(v) )
> > +        return 0;
> > +
> >      /* On Nested Virtualization, walk the guest page table.
> >       * If this succeeds, all is fine.
> >       * If this fails, inject a nested page fault into the guest.
> > -- 
> > 1.8.3.1
> > 
> > 
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Tim Deegan <tim@xen.org> wrote:

> At 17:03 -0800 on 24 Feb (1393257835), Mukesh Rathor wrote:
> > pvh does not support nested hvm at present. As such, return if pvh.
> 
> Nack, sorry.  
> 
> 1: this is the nested pagefault (i.e. EPT/NPT) handler, not part of
> the nested HVM code.

Yeah, Jan already pointed that out. I just quickly whipped this up at the
last minute after seeing a crash in hvm_hap_nested_page_fault when running
xentrace on PVH. Anyway, I'll take a look at it.

> 2: If there _is_ a problem with the interaction between nested HVM and
> PVH, the right way to fix it is to enforce that they can't both be
> enabled at the same time, and then make sure all the nested-HVM code
> properly checks for being enabled.  It's not a good idea to scatter
> PVH checks all over code for unrelated features.

Yup. thanks.

Mukesh

> 
> > Signed-off-by: Mukesh Rathor <mukesh.rathor@oracle.com>
> > ---
> >  xen/arch/x86/hvm/hvm.c | 3 +++
> >  1 file changed, 3 insertions(+)
> > 
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 69f7e74..a4a3dcf 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -1416,6 +1416,9 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
> >      int sharing_enomem = 0;
> >      mem_event_request_t *req_ptr = NULL;
> >  
> > +    if ( is_pvh_vcpu(v) )
> > +        return 0;
> > +
> >      /* On Nested Virtualization, walk the guest page table.
> >       * If this succeeds, all is fine.
> >       * If this fails, inject a nested page fault into the guest.
> > -- 
> > 1.8.3.1
> > 
> > 


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 02:25:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJD94-00010x-6Q; Fri, 28 Feb 2014 02:25:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJD92-00010g-2d
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 02:25:48 +0000
Received: from [85.158.137.68:31263] by server-6.bemta-3.messagelabs.com id
	DE/1F-09180-BA3FF035; Fri, 28 Feb 2014 02:25:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393554343!4736617!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 842 invoked from network); 28 Feb 2014 02:25:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 02:25:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1S2PbQv022532
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 02:25:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S2ParR006832
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 02:25:37 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S2PaaU022609; Fri, 28 Feb 2014 02:25:36 GMT
Received: from localhost.localdomain (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 18:25:35 -0800
Date: Thu, 27 Feb 2014 21:25:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: akpm@linux-foundation.org, steven@uplinklabs.net, ufimtseva@gmail.com,
	mgorman@suse.de, david.vrabel@citrix.com, torvalds@linux-foundation.org
Message-ID: <20140228022532.GC7114@localhost.localdomain>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="82I3+IH0IqGh5yIs"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Subject: [Xen-devel] Regression introduced by "xen: properly account for
 _PAGE_NUMA during xen pte translations"
 (a9c8e4beeeb64c22b84c803747487857fe424b68)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--82I3+IH0IqGh5yIs
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable


This only shows up when running the Linux kernel as a 64-bit PV guest,
after the guest has been migrated, with iscsid running, when I power off
the guest. Note: I can also reproduce this by killing 'iscsid'.

If I revert the above mentioned commit the problem disappears.

The page flags it shows are bogus - this guest is running from RAM
and has no swap.

Here is what the console says (ignore the first BUG please):

[   42.268060] xen:grant_table: Grant tables using version 1 layout
[   42.268060] BUG: sleeping function called from invalid context at /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
[   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name: migration/0
[   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc4upstream #1
[   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59 ffff88003cc02630
[   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce ffff88003cc0dd08
[   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017 ffff88003cc0dd38
[   42.268060] Call Trace:
[   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
[   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
[   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
[   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
[   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
[   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
[   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
[   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
[   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
[   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
[   42.268060]  [<ffffffff816f612b>] ? _raw_spin_unlock_irqrestore+0x1b/0x70
[   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
[   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
[   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
[   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
[   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
[   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
[   42.268060] PM: noirq restore of devices complete after 0.251 msecs
[   42.268645] PM: early restore of devices complete after 0.151 msecs

#
# [   42.281199] switch: port 1(eth0) entered disabled state
[   42.282591] PM: restore of devices complete after 11.656 msecs
[   42.307965] switch: port 1(eth0) entered forwarding state
[   42.307990] switch: port 1(eth0) entered forwarding state

#
#
# [   57.312124] switch: port 1(eth0) entered forwarding state
lspci
# lsscsi
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
          inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:99 errors:0 dropped:0 overruns:0 frame:0
          TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)

switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
          inet addr:192.168.102.68  Bcast:192.168.102.255  Mask:255.255.255.0
          inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:96 errors:0 dropped:0 overruns:0 frame:0
          TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)

# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2130ms
rtt min/avg/max/mdev = 51.421/53.420/57.008/2.556 ms
# poweroff
Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '': '/etc/init.d/halt'
# Usage: /etc/init.d/halt {start}
The system is going down NOW!
Sent SIGTERM to all processes
Feb 28 02:05:30 g-pvops exiting on signal 15
[   71.195552] BUG: Bad page map in process iscsid  pte:39b22120 pmd:06953067
[   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
[   71.195576] page flags: 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
[   71.195585] page dumped because: bad pte
[   71.195589] addr:00007fb105d37000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:c
[   71.195598] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.195603] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.195613] CPU: 0 PID: 2296 Comm: iscsid Not tainted 3.14.0-rc4upstream #1
[   71.195618]  00007fb105d37000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.195627]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b ffff880030ff3c58
[   71.195634]  ffffffff8117f6e1 ffff880030ff3c58 00007fb105d37000 ffff8800069539b8
[   71.195642] Call Trace:
[   71.195649]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.195657]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.195663]  [<ffffffff8117f6e1>] ? activate_page+0xb1/0xe0
[   71.195669]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.195676]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.195680]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.195688]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.195694]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.195702]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.195707]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.195712]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.195719]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.195724]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.195732]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.195736] Disabling lock debugging due to kernel taint
[   71.195740] BUG: Bad page map in process iscsid  pte:39b23120 pmd:06953067
[   71.195744] page:ffffea0000c9efa8 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
[   71.195751] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
[   71.195759] page dumped because: bad pte
[   71.195764] addr:00007fb105d38000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:d
[   71.195770] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.195774] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.195783]  00007fb105d38000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.195791]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 0000000000000575
[   71.195798]  0720072007200720 ffff880030ff3c58 00007fb105d38000 ffff8800069539c0
[   71.195806] Call Trace:
[   71.195810]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.195816]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.195820]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.195827]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.195832]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.195839]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.195843]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.195849]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.195854]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.195858]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.195863]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.195870]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.195875]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.195880] BUG: Bad page map in process iscsid  pte:39b24120 pmd:06953067
[   71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
[   71.195888] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
[   71.195895] page dumped because: bad pte
[   71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:e
[   71.195904] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.195908] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.195916]  00007fb105d39000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.195923]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 000000000000058e
[   71.195929]  0720072007200720 ffff880030ff3c58 00007fb105d39000 ffff8800069539c8
[   71.195935] Call Trace:
[   71.195939]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.195944]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.195948]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.195953]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.195958]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.195963]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.195967]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.195973]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.195977]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.195982]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.195987]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.195992]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.195997]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.196001] BUG: Bad page map in process iscsid  pte:39b25120 pmd:06953067
[   71.196005] page:ffffea0000c9f018 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
[   71.362165] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
[   71.362172] page dumped because: bad pte
[   71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:f
[   71.362181] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.362185] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.362197]  00007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.362206]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 00000000000005a7
[   71.362212]  0720072007200720 ffff880030ff3c58 00007fb105d3a000 ffff8800069539d0
[   71.362221] Call Trace:
[   71.362225]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.362230]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.362241]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.362247]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.362254]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.362258]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.362263]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.362270]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.362276]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.362282]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.362287]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.362292]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.362297]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.364639] BUG: Bad page state in process iscsid  pfn:39b25
[   71.364651] page:ffffea0000c9f018 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
[   71.364656] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.364663] page dumped because: non-NULL mapping
[   71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.364711]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.364718]  ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.364724]  0000000000000001 ffffea0000c9f018 0000000000000000 ffff880030ff3c48
[   71.364733] Call Trace:
[   71.364739]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.364748]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.364753]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.364760]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.364768]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.364773]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.364780]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.364785]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.364791]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.364798]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.364803]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.364810]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.364815]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.364820]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.364825]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.364830]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.364835]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.364842]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.364848]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.364854] BUG: Bad page state in process iscsid  pfn:39b24
[   71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
[   71.364863] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.364871] page dumped because: non-NULL mapping
[   71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.364913]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.364919]  ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.364925]  0000000000000001 ffffea0000c9efe0 0000000000000000 ffff880030ff3c48
[   71.364933] Call Trace:
[   71.364938]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.364945]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.364950]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.364955]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.364960]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.364965]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.364970]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.364975]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.364982]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.364987]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.364994]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.364999]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.365004]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.365009]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.365015]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.365020]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.365025]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.365030]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.365035]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.365039] BUG: Bad page state in process iscsid  pfn:39b23
[   71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
[   71.365047] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.365074] page dumped because: non-NULL mapping
[   71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.365109]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.365115]  ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.365121]  0000000000000001 ffffea0000c9efa8 0000000000000000 ffff880030ff3c48
[   71.365127] Call Trace:
[   71.365131]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.365136]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.365141]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.365146]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.365151]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.365155]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.365160]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.365165]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.365170]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.365175]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.365180]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.562344]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.562349]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.562354]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.562359]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.562364]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.562369]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.562379]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.562387]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.562391] BUG: Bad page state in process iscsid  pfn:39b22
[   71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
[   71.562399] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.562413] page dumped because: non-NULL mapping
[   71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.562461]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.562467]  ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.562476]  0000000000000001 ffffea0000c9ef70 0000000000000000 ffff880030ff3c48
[   71.562486] Call Trace:
[   71.562490]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.562495]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.562500]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.562515]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.562520]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.562525]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.562535]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.562544]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.562550]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.562555]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.562561]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.562568]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.562579]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.562584]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.562594]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.562603]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.562608]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.562613]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.562617]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
Sent SIGKILL to all processes
Requesting system poweroff
[   73.375423] reboot: System halted

--82I3+IH0IqGh5yIs
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=config

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.14.0-rc4 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION="upstream"
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
# CONFIG_TICK_CPU_ACCOUNTING is not set
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_MEMCG is not set
# CONFIG_CGROUP_HUGETLB is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
CONFIG_RT_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
# CONFIG_USER_NS is not set
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_SYSFS_DEPRECATED_V2 is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_PCI_QUIRKS=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR is not set
CONFIG_CC_STACKPROTECTOR_NONE=y
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
# CONFIG_SYSTEM_TRUSTED_KEYRING is not set
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
# CONFIG_BLK_CMDLINE_PARSER is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
# CONFIG_X86_INTEL_LPSS is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_XEN_PVH=y
CONFIG_KVM_GUEST=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=y
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_MICROCODE_INTEL_EARLY=y
CONFIG_MICROCODE_AMD_EARLY=y
CONFIG_MICROCODE_EARLY=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
# CONFIG_MOVABLE_NODE is not set
# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_NEED_BOUNCE_POOL=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
# CONFIG_CMA is not set
# CONFIG_ZBUD is not set
# CONFIG_ZSWAP is not set
CONFIG_ZSMALLOC=y
# CONFIG_PGTABLE_MAPPING is not set
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
# CONFIG_EFI_STUB is not set
CONFIG_SECCOMP=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
# CONFIG_KEXEC_JUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
CONFIG_ACPI_DEBUG=y
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_INTEL_PSTATE is not set
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
CONFIG_X86_SPEEDSTEP_CENTRINO=m
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIE_ECRC=y
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_IOAPIC is not set
CONFIG_PCI_LABEL=y

#
# PCI host controller drivers
#
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set
CONFIG_X86_SYSFB=y

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=y
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
# CONFIG_IPV6_VTI is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_ADVANCED is not set

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_IRC=y
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
# CONFIG_NF_NAT_AMANDA is not set
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
# CONFIG_NF_NAT_TFTP is not set
# CONFIG_NF_TABLES is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_LOG=m
# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
CONFIG_IP_NF_MANGLE=y
# CONFIG_IP_NF_RAW is not set

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
# CONFIG_IP6_NF_RAW is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_HAVE_NET_DSA=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
# CONFIG_VSOCKETS is not set
# CONFIG_NETLINK_MMAP is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_NET_MPLS_GSO is not set
# CONFIG_HSR is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
# CONFIG_CGROUP_NET_CLASSID is not set
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_DMA_SHARED_BUFFER=y

#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_ZRAM=y
# CONFIG_ZRAM_DEBUG is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_VIRTIO_BLK=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_RSXX is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
# CONFIG_VMWARE_VMCI is not set

#
# Intel MIC Host Driver
#
# CONFIG_INTEL_MIC_HOST is not set

#
# Intel MIC Card Driver
#
# CONFIG_INTEL_MIC_CARD is not set
# CONFIG_GENWQE is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
CONFIG_BLK_DEV_3W_XXXX_RAID=m
# CONFIG_SCSI_HPSA is not set
CONFIG_SCSI_3W_9XXX=m
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
# CONFIG_SCSI_MVUMI is not set
CONFIG_SCSI_DPT_I2O=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
# CONFIG_SCSI_ESAS2R is not set
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS_LOGGING=y
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
CONFIG_SCSI_FLASHPOINT=y
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
# CONFIG_FCOE_FNIC is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_EATA=m
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=m
CONFIG_SCSI_GDTH=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
# CONFIG_SCSI_IPR_TRACE is not set
# CONFIG_SCSI_IPR_DUMP is not set
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
# CONFIG_TCM_QLA2XXX is not set
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_DC390T=m
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_SRP=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_SATA_INIC162X=m
# CONFIG_SATA_ACARD_AHCI is not set
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_HIGHBANK is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
# CONFIG_SATA_RCAR is not set
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARASAN_CF is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
CONFIG_PATA_EFAR=m
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MARVELL=m
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
CONFIG_PATA_SCH=m
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=m
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
CONFIG_PATA_PCMCIA=m
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_DM_THIN_PROVISIONING is not set
# CONFIG_DM_CACHE is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_LOG_USERSPACE is not set
# CONFIG_DM_RAID is not set
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_UEVENT is not set
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=m
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=m
# CONFIG_FIREWIRE_SBP2 is not set
# CONFIG_FIREWIRE_NET is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
# CONFIG_VXLAN is not set
CONFIG_NETCONSOLE=m
# CONFIG_NETCONSOLE_DYNAMIC is not set
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
CONFIG_SUNGEM_PHY=m
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#
CONFIG_VHOST_NET=y
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_RING=y
CONFIG_VHOST=y

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
# CONFIG_PCMCIA_3C574 is not set
# CONFIG_PCMCIA_3C589 is not set
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
CONFIG_ATL1C=m
# CONFIG_ALX is not set
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
CONFIG_IGBVF=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=y
# CONFIG_I40E is not set
# CONFIG_I40EVF is not set
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
# CONFIG_MLX5_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=m
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
CONFIG_NET_PACKET_ENGINE=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
# CONFIG_SH_ETH is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y

#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
CONFIG_MARVELL_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_RTL8152 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_POLLDEV is not set
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_WACOM is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=16
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_RSA is not set
# CONFIG_SERIAL_8250_DW is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_KGDB_NMI is not set
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_TPM=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set

#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=m
# CONFIG_TCG_TIS_I2C_ATMEL is not set
# CONFIG_TCG_TIS_I2C_INFINEON is not set
# CONFIG_TCG_TIS_I2C_NUVOTON is not set
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HTU21 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_INTEL_POWERCLAMP is not set
CONFIG_X86_PKG_TEMP_THERMAL=m
# CONFIG_ACPI_INT3403_THERMAL is not set

#
# Texas Instruments thermal drivers
#
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_LPC_ICH is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_TTM=m

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=m
# CONFIG_DRM_RADEON_UMS is not set
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set
# CONFIG_DRM_I810 is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_I915_FBDEV=y
# CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT is not set
# CONFIG_DRM_I915_UMS is not set
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_QXL is not set
# CONFIG_DRM_BOCHS is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_HDMI=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
# CONFIG_FB_BOOT_VESA_SUPPORT is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=y
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_GOLDFISH is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_FB_AUO_K190X is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_EXYNOS_VIDEO is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_GENERIC=y
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LP855X is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_LOGO is not set
# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=m
CONFIG_HIDRAW=y
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_HUION is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=m
# CONFIG_HID_ICADE is not set
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
# CONFIG_HID_MAGICMOUSE is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=m
# CONFIG_HID_ORTEK is not set
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=m
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEELSERIES is not set
CONFIG_HID_SUNPLUS=m
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=m
# CONFIG_HID_THINGM is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_WACOM is not set
# CONFIG_HID_WIIMOTE is not set
# CONFIG_HID_XINMO is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
# CONFIG_USB_FUSBH200_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
CONFIG_USB_PRINTER=y
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HSIC_USB3503 is not set

#
# USB Physical Layer drivers
#
# CONFIG_USB_PHY is not set
# CONFIG_USB_OTG_FSM is not set
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_SAMSUNG_USB2PHY is not set
# CONFIG_SAMSUNG_USB3PHY is not set
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_LP5523 is not set
# CONFIG_LEDS_LP5562 is not set
# CONFIG_LEDS_LP8501 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA9685 is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_DELL_NETBOOKS is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_OT200 is not set
# CONFIG_LEDS_BLINKM is not set

#
# LED Triggers
#
# CONFIG_LEDS_TRIGGERS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=y
# CONFIG_EDAC_MCE_INJ is not set
# CONFIG_EDAC_MM_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12057 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_MOXART is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
# CONFIG_INTEL_MID_DMAC is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_DW_DMAC_CORE is not set
# CONFIG_DW_DMAC is not set
# CONFIG_DW_DMAC_PCI is not set
# CONFIG_TIMB_DMA is not set
# CONFIG_PCH_DMA is not set
CONFIG_DMA_ENGINE=y
CONFIG_DMA_ACPI=y

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y
CONFIG_DCA=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y

#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=m
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_STAGING=y
# CONFIG_ET131X is not set
# CONFIG_SLICOSS is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_ECHO is not set
# CONFIG_COMEDI is not set
# CONFIG_RTS5139 is not set
# CONFIG_RTS5208 is not set
# CONFIG_TRANZPORT is not set
# CONFIG_IDE_PHISON is not set
# CONFIG_DX_SEP is not set
# CONFIG_FB_SM7XX is not set
# CONFIG_CRYSTALHD is not set
# CONFIG_FB_XGI is not set
# CONFIG_ACPI_QUICKSTART is not set
# CONFIG_USB_ENESTORAGE is not set
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_USB_WPAN_HCD is not set
# CONFIG_WIMAX_GDM72XX is not set
# CONFIG_LTE_GDM724X is not set
CONFIG_NET_VENDOR_SILICOM=y
# CONFIG_SBYPASS is not set
# CONFIG_BPCTL is not set
# CONFIG_CED1401 is not set
# CONFIG_DGRP is not set
# CONFIG_FIREWIRE_SERIAL is not set
# CONFIG_LUSTRE_FS is not set
# CONFIG_XILLYBUS is not set
# CONFIG_DGNC is not set
# CONFIG_DGAP is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_DELL_WMI is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WIRELESS is not set
# CONFIG_HP_WMI is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_ASUS_WMI is not set
CONFIG_ACPI_WMI=m
# CONFIG_MSI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_MXM_WMI=m
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_APPLE_GMUX is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set
# CONFIG_PVPANIC is not set
# CONFIG_CHROME_PLATFORMS is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
# CONFIG_FMC is not set

#
# PHY Subsystem
#
CONFIG_GENERIC_PHY=y
# CONFIG_PHY_EXYNOS_MIPI_VIDEO is not set
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_POWERCAP is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_VARS_PSTORE=y
# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
CONFIG_EFI_RUNTIME_MAP=y
CONFIG_UEFI_CPER=y

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=m
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
# CONFIG_FUSE_FS is not set

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_F2FS_FS is not set
# CONFIG_EFIVAR_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_DYNAMIC_DEBUG is not set

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_DEBUG_KERNEL=y

#
# Memory Debugging
#
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
# CONFIG_DEBUG_KMEMLEAK_TEST is not set
# CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is not set
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_VM=y
# CONFIG_DEBUG_VM_RB is not set
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_DEBUG_SHIRQ is not set

#
# Debug Lockups and Hangs
#
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=1
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=21
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_FTRACE_SYSCALLS is not set
# CONFIG_TRACER_SNAPSHOT is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
# CONFIG_UPROBE_EVENT is not set
CONFIG_PROBE_EVENTS=y
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set

#
# Runtime Testing
#
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_TEST_MODULE is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_LOW_LEVEL_TRAP is not set
CONFIG_KGDB_KDB=y
# CONFIG_KDB_KEYBOARD is not set
CONFIG_KDB_CONTINUE_CATASTROPHIC=0
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_EFI is not set
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
CONFIG_DOUBLEFAULT=y
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_STATIC_CPU_HAS is not set

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_BIG_KEYS is not set
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# CONFIG_SECURITY_NETWORK_XFRM is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65534
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_IMA is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
# CONFIG_CRYPTO_CMAC is not set
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C_INTEL=m
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_CRC32_PCLMUL is not set
CONFIG_CRYPTO_CRCT10DIF=m
# CONFIG_CRYPTO_CRCT10DIF_PCLMUL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256_SSSE3 is not set
# CONFIG_CRYPTO_SHA512_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_X86_64 is not set
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=y
# CONFIG_CRYPTO_LZO is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
# CONFIG_CRYPTO_DEV_CCP is not set
# CONFIG_ASYMMETRIC_KEY_TYPE is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_KVM_DEVICE_ASSIGNMENT=y
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_T10DIF=m
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
# CONFIG_CRC8 is not set
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
# CONFIG_XZ_DEC_POWERPC is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_AVERAGE=y
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_FONT_SUPPORT=m
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y

--82I3+IH0IqGh5yIs
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--82I3+IH0IqGh5yIs--


From xen-devel-bounces@lists.xen.org Fri Feb 28 02:25:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJD94-00010x-6Q; Fri, 28 Feb 2014 02:25:50 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJD92-00010g-2d
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 02:25:48 +0000
Received: from [85.158.137.68:31263] by server-6.bemta-3.messagelabs.com id
	DE/1F-09180-BA3FF035; Fri, 28 Feb 2014 02:25:47 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393554343!4736617!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 842 invoked from network); 28 Feb 2014 02:25:45 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 02:25:45 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1S2PbQv022532
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 02:25:38 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S2ParR006832
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 02:25:37 GMT
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S2PaaU022609; Fri, 28 Feb 2014 02:25:36 GMT
Received: from localhost.localdomain (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 18:25:35 -0800
Date: Thu, 27 Feb 2014 21:25:33 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: akpm@linux-foundation.org, steven@uplinklabs.net, ufimtseva@gmail.com,
	mgorman@suse.de, david.vrabel@citrix.com, torvalds@linux-foundation.org
Message-ID: <20140228022532.GC7114@localhost.localdomain>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="82I3+IH0IqGh5yIs"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	david.vrabel@citrix.com
Subject: [Xen-devel] Regression introduced by "xen: properly account for
 _PAGE_NUMA during xen pte translations"
 (a9c8e4beeeb64c22b84c803747487857fe424b68)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


--82I3+IH0IqGh5yIs
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable


This only shows up when running the Linux kernel as a 64-bit PV guest:
after the guest has been migrated, with iscsid running, powering off the
guest triggers it. Note: I can also reproduce this if I kill 'iscsid'.

If I revert the above mentioned commit the problem disappears.

The page flags it shows are bogus - this guest is running from RAM
and has no swap.

Here is what the console says (ignore the first BUG please):

[   42.268060] xen:grant_table: Grant tables using version 1 layout
[   42.268060] BUG: sleeping function called from invalid context at /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
[   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name: migration/0
[   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc4upstream #1
[   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59 ffff88003cc02630
[   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce ffff88003cc0dd08
[   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017 ffff88003cc0dd38
[   42.268060] Call Trace:
[   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
[   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
[   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
[   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
[   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
[   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
[   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
[   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
[   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
[   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
[   42.268060]  [<ffffffff816f612b>] ? _raw_spin_unlock_irqrestore+0x1b/0x70
[   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
[   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
[   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
[   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
[   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
[   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
[   42.268060] PM: noirq restore of devices complete after 0.251 msecs
[   42.268645] PM: early restore of devices complete after 0.151 msecs

#
# [   42.281199] switch: port 1(eth0) entered disabled state
[   42.282591] PM: restore of devices complete after 11.656 msecs
[   42.307965] switch: port 1(eth0) entered forwarding state
[   42.307990] switch: port 1(eth0) entered forwarding state

#
#
# [   57.312124] switch: port 1(eth0) entered forwarding state
lspci
# lsscsi
# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
          inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:99 errors:0 dropped:0 overruns:0 frame:0
          TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)

switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
          inet addr:192.168.102.68  Bcast:192.168.102.255  Mask:255.255.255.0
          inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:96 errors:0 dropped:0 overruns:0 frame:0
          TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)

# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2130ms
rtt min/avg/max/mdev =3D 51.421/53.420/57.008/2.556 ms
# poweroff
Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '': '/etc/init.d/halt'
# Usage: /etc/init.d/halt {start}
The system is going down NOW!
Sent SIGTERM to all processes
Feb 28 02:05:30 g-pvops exiting on signal 15
[   71.195552] BUG: Bad page map in process iscsid  pte:39b22120 pmd:06953067
[   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
[   71.195576] page flags: 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
[   71.195585] page dumped because: bad pte
[   71.195589] addr:00007fb105d37000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:c
[   71.195598] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.195603] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.195613] CPU: 0 PID: 2296 Comm: iscsid Not tainted 3.14.0-rc4upstream #1
[   71.195618]  00007fb105d37000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.195627]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b ffff880030ff3c58
[   71.195634]  ffffffff8117f6e1 ffff880030ff3c58 00007fb105d37000 ffff8800069539b8
[   71.195642] Call Trace:
[   71.195649]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.195657]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.195663]  [<ffffffff8117f6e1>] ? activate_page+0xb1/0xe0
[   71.195669]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.195676]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.195680]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.195688]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.195694]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.195702]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.195707]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.195712]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.195719]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.195724]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.195732]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.195736] Disabling lock debugging due to kernel taint
[   71.195740] BUG: Bad page map in process iscsid  pte:39b23120 pmd:06953067
[   71.195744] page:ffffea0000c9efa8 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
[   71.195751] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
[   71.195759] page dumped because: bad pte
[   71.195764] addr:00007fb105d38000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:d
[   71.195770] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.195774] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.195783]  00007fb105d38000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.195791]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 0000000000000575
[   71.195798]  0720072007200720 ffff880030ff3c58 00007fb105d38000 ffff8800069539c0
[   71.195806] Call Trace:
[   71.195810]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.195816]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.195820]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.195827]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.195832]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.195839]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.195843]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.195849]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.195854]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.195858]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.195863]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.195870]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.195875]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.195880] BUG: Bad page map in process iscsid  pte:39b24120 pmd:06953067
[   71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
[   71.195888] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
[   71.195895] page dumped because: bad pte
[   71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:e
[   71.195904] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.195908] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.195916]  00007fb105d39000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.195923]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 000000000000058e
[   71.195929]  0720072007200720 ffff880030ff3c58 00007fb105d39000 ffff8800069539c8
[   71.195935] Call Trace:
[   71.195939]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.195944]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.195948]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.195953]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.195958]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.195963]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.195967]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.195973]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.195977]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.195982]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.195987]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.195992]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.195997]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.196001] BUG: Bad page map in process iscsid  pte:39b25120 pmd:06953067
[   71.196005] page:ffffea0000c9f018 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
[   71.362165] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
[   71.362172] page dumped because: bad pte
[   71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:f
[   71.362181] vma->vm_ops->fault: shmem_fault+0x0/0x70
[   71.362185] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
[   71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.362197]  00007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
[   71.362206]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 00000000000005a7
[   71.362212]  0720072007200720 ffff880030ff3c58 00007fb105d3a000 ffff8800069539d0
[   71.362221] Call Trace:
[   71.362225]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.362230]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
[   71.362241]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
[   71.362247]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
[   71.362254]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
[   71.362258]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
[   71.362263]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
[   71.362270]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.362276]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.362282]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.362287]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.362292]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.362297]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.364639] BUG: Bad page state in process iscsid  pfn:39b25
[   71.364651] page:ffffea0000c9f018 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
[   71.364656] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.364663] page dumped because: non-NULL mapping
[   71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.364711]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.364718]  ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.364724]  0000000000000001 ffffea0000c9f018 0000000000000000 ffff880030ff3c48
[   71.364733] Call Trace:
[   71.364739]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.364748]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.364753]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.364760]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.364768]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.364773]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.364780]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.364785]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.364791]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.364798]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.364803]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.364810]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.364815]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.364820]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.364825]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.364830]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.364835]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.364842]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.364848]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.364854] BUG: Bad page state in process iscsid  pfn:39b24
[   71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
[   71.364863] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.364871] page dumped because: non-NULL mapping
[   71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.364913]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.364919]  ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.364925]  0000000000000001 ffffea0000c9efe0 0000000000000000 ffff880030ff3c48
[   71.364933] Call Trace:
[   71.364938]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.364945]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.364950]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.364955]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.364960]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.364965]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.364970]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.364975]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.364982]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.364987]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.364994]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.364999]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.365004]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.365009]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.365015]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.365020]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.365025]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.365030]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.365035]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.365039] BUG: Bad page state in process iscsid  pfn:39b23
[   71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
[   71.365047] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.365074] page dumped because: non-NULL mapping
[   71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.365109]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.365115]  ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.365121]  0000000000000001 ffffea0000c9efa8 0000000000000000 ffff880030ff3c48
[   71.365127] Call Trace:
[   71.365131]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.365136]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.365141]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.365146]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.365151]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.365155]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.365160]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.365165]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.365170]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.365175]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.365180]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.562344]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.562349]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.562354]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.562359]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.562364]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.562369]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.562379]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.562387]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
[   71.562391] BUG: Bad page state in process iscsid  pfn:39b22
[   71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
[   71.562399] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
[   71.562413] page dumped because: non-NULL mapping
[   71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
[   71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
[   71.562461]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
[   71.562467]  ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
[   71.562476]  0000000000000001 ffffea0000c9ef70 0000000000000000 ffff880030ff3c48
[   71.562486] Call Trace:
[   71.562490]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
[   71.562495]  [<ffffffff811751a0>] bad_page+0xd0/0x120
[   71.562500]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
[   71.562515]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
[   71.562520]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
[   71.562525]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
[   71.562535]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
[   71.562544]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
[   71.562550]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
[   71.562555]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
[   71.562561]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
[   71.562568]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
[   71.562579]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
[   71.562584]  [<ffffffff8109d672>] mmput+0x52/0x100
[   71.562594]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
[   71.562603]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
[   71.562608]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
[   71.562613]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
[   71.562617]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
Sent SIGKILL to all processes
Requesting system poweroff
[   73.375423] reboot: System halted

--82I3+IH0IqGh5yIs
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=config

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 3.14.0-rc4 Kernel Configuration
#
CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_ARCH_DEFCONFIG="arch/x86/configs/x86_64_defconfig"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_HAVE_LATENCYTOP_SUPPORT=y
CONFIG_MMU=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_RWSEM_XCHGADD_ALGORITHM=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_CPU_AUTOPROBE=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_OPTIMIZED_INLINING=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_X86_HT=y
CONFIG_ARCH_HWEIGHT_CFLAGS="-fcall-saved-rdi -fcall-saved-rsi -fcall-saved-rdx -fcall-saved-rcx -fcall-saved-r8 -fcall-saved-r9 -fcall-saved-r10 -fcall-saved-r11"
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_EXTABLE_SORT=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
CONFIG_CROSS_COMPILE=""
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION="upstream"
CONFIG_LOCALVERSION_AUTO=y
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_FHANDLE is not set
CONFIG_AUDIT=y
CONFIG_AUDITSYSCALL=y
CONFIG_AUDIT_WATCH=y
CONFIG_AUDIT_TREE=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_DATA=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BUILD=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=y
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y

#
# CPU/Task time and stats accounting
#
# CONFIG_TICK_CPU_ACCOUNTING is not set
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
CONFIG_IRQ_TIME_ACCOUNTING=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
# CONFIG_PREEMPT_RCU is not set
CONFIG_RCU_STALL_COMMON=y
# CONFIG_RCU_USER_QS is not set
CONFIG_RCU_FANOUT=64
CONFIG_RCU_FANOUT_LEAF=16
# CONFIG_RCU_FANOUT_EXACT is not set
# CONFIG_RCU_FAST_NO_HZ is not set
# CONFIG_TREE_RCU_TRACE is not set
# CONFIG_RCU_NOCB_CPU is not set
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=18
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y
CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_ARCH_WANTS_PROT_NUMA_PROT_NONE=y
# CONFIG_NUMA_BALANCING is not set
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
# CONFIG_CGROUP_DEVICE is not set
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_RESOURCE_COUNTERS=y
# CONFIG_MEMCG is not set
# CONFIG_CGROUP_HUGETLB is not set
# CONFIG_CGROUP_PERF is not set
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
# CONFIG_CFS_BANDWIDTH is not set
CONFIG_RT_GROUP_SCHED=y
# CONFIG_BLK_CGROUP is not set
# CONFIG_CHECKPOINT_RESTORE is not set
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_IPC_NS=y
# CONFIG_USER_NS is not set
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_SCHED_AUTOGROUP=y
CONFIG_SYSFS_DEPRECATED=y
# CONFIG_SYSFS_DEPRECATED_V2 is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_ANON_INODES=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
# CONFIG_EXPERT is not set
CONFIG_UID16=y
# CONFIG_SYSCTL_SYSCALL is not set
CONFIG_KALLSYMS=y
# CONFIG_KALLSYMS_ALL is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_PCI_QUIRKS=y
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=y

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
CONFIG_VM_EVENT_COUNTERS=y
# CONFIG_COMPAT_BRK is not set
CONFIG_SLAB=y
# CONFIG_SLUB is not set
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
CONFIG_OPROFILE=m
# CONFIG_OPROFILE_EVENT_MULTIPLEX is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
CONFIG_OPTPROBES=y
# CONFIG_HAVE_64BIT_ALIGNED_ACCESS is not set
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_ATTRS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_DMA_API_DEBUG=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_CC_STACKPROTECTOR=y
# CONFIG_CC_STACKPROTECTOR is not set
CONFIG_CC_STACKPROTECTOR_NONE=y
# CONFIG_CC_STACKPROTECTOR_REGULAR is not set
# CONFIG_CC_STACKPROTECTOR_STRONG is not set
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
# CONFIG_HAVE_GENERIC_DMA_COHERENT is not set
CONFIG_SLABINFO=y
CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
# CONFIG_SYSTEM_TRUSTED_KEYRING is not set
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
CONFIG_MODULE_FORCE_UNLOAD=y
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
# CONFIG_MODULE_SIG is not set
CONFIG_STOP_MACHINE=y
CONFIG_BLOCK=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
# CONFIG_BLK_CMDLINE_PARSER is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
CONFIG_BLOCK_COMPAT=y

#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_INLINE_SPIN_UNLOCK_IRQ=y
CONFIG_INLINE_READ_UNLOCK=y
CONFIG_INLINE_READ_UNLOCK_IRQ=y
CONFIG_INLINE_WRITE_UNLOCK=y
CONFIG_INLINE_WRITE_UNLOCK_IRQ=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_FREEZER=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_MPPARSE=y
# CONFIG_X86_EXTENDED_PLATFORM is not set
# CONFIG_X86_INTEL_LPSS is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PRIVILEGED_GUEST=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_MAX_DOMAIN_MEMORY=500
CONFIG_XEN_SAVE_RESTORE=y
CONFIG_XEN_DEBUG_FS=y
CONFIG_XEN_PVH=y
CONFIG_KVM_GUEST=y
# CONFIG_KVM_DEBUG_FS is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
CONFIG_NO_BOOTMEM=y
# CONFIG_MEMTEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
# CONFIG_GART_IOMMU is not set
CONFIG_CALGARY_IOMMU=y
CONFIG_CALGARY_IOMMU_ENABLED_BY_DEFAULT=y
CONFIG_SWIOTLB=y
CONFIG_IOMMU_HELPER=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS=512
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT_NONE=y
# CONFIG_PREEMPT_VOLUNTARY is not set
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=y
CONFIG_X86_THERMAL_VECTOR=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_MICROCODE_INTEL_EARLY=y
CONFIG_MICROCODE_AMD_EARLY=y
CONFIG_MICROCODE_EARLY=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_ARCH_PHYS_ADDR_T_64BIT=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_DIRECT_GBPAGES=y
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NODES_SPAN_OTHER_NODES=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
# CONFIG_ARCH_MEMORY_PROBE is not set
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_ALLOC_MEM_MAP_TOGETHER=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_MEMBLOCK=y
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y
CONFIG_ARCH_DISCARD_MEMBLOCK=y
# CONFIG_MOVABLE_NODE is not set
# CONFIG_HAVE_BOOTMEM_INFO_NODE is not set
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTREMOVE is not set
CONFIG_PAGEFLAGS_EXTENDED=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_MIGRATION=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_ZONE_DMA_FLAG=1
CONFIG_BOUNCE=y
CONFIG_NEED_BOUNCE_POOL=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
# CONFIG_KSM is not set
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_MEMORY_FAILURE is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
# CONFIG_CMA is not set
# CONFIG_ZBUD is not set
# CONFIG_ZSWAP is not set
CONFIG_ZSMALLOC=y
# CONFIG_PGTABLE_MAPPING is not set
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
# CONFIG_MTRR_SANITIZER is not set
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_EFI=y
# CONFIG_EFI_STUB is not set
CONFIG_SECCOMP=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
# CONFIG_KEXEC_JUMP is not set
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
# CONFIG_PM_RUNTIME is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
# CONFIG_PM_ADVANCED_DEBUG is not set
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
CONFIG_ACPI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_PROCFS=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_CUSTOM_DSDT is not set
# CONFIG_ACPI_INITRD_TABLE_OVERRIDE is not set
CONFIG_ACPI_DEBUG=y
# CONFIG_ACPI_PCI_SLOT is not set
CONFIG_X86_PM_TIMER=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
# CONFIG_ACPI_BGRT is not set
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_EINJ=y
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_SFI is not set

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_COMMON=y
# CONFIG_CPU_FREQ_STAT is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_GOV_CONSERVATIVE is not set

#
# x86 CPU frequency scaling drivers
#
# CONFIG_X86_INTEL_PSTATE is not set
# CONFIG_X86_PCC_CPUFREQ is not set
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
CONFIG_X86_SPEEDSTEP_CENTRINO=m
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_MULTIPLE_DRIVERS is not set
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED is not set
CONFIG_INTEL_IDLE=y

#
# Memory power savings
#
# CONFIG_I7300_IDLE is not set

#
# Bus options (PCI etc.)
#
CONFIG_PCI=y
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIE_ECRC=y
# CONFIG_PCIEAER_INJECT is not set
CONFIG_PCIEASPM=y
# CONFIG_PCIEASPM_DEBUG is not set
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCI_MSI=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
# CONFIG_PCI_STUB is not set
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_HT_IRQ=y
CONFIG_PCI_ATS=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_IOAPIC is not set
CONFIG_PCI_LABEL=y

#
# PCI host controller drivers
#
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
CONFIG_PCCARD=y
CONFIG_PCMCIA=y
CONFIG_PCMCIA_LOAD_CIS=y
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=y
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_PD6729 is not set
# CONFIG_I82092 is not set
CONFIG_PCCARD_NONSTATIC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
# CONFIG_HOTPLUG_PCI_CPCI is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set
# CONFIG_RAPIDIO is not set
CONFIG_X86_SYSFB=y

#
# Executable file formats / Emulations
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
# CONFIG_HAVE_AOUT is not set
CONFIG_BINFMT_MISC=y
CONFIG_COREDUMP=y
CONFIG_IA32_EMULATION=y
# CONFIG_IA32_AOUT is not set
# CONFIG_X86_X32 is not set
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
CONFIG_KEYS_COMPAT=y
CONFIG_X86_DEV_DMA_OPS=y
CONFIG_NET=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
# CONFIG_UNIX_DIAG is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_SUB_POLICY is not set
# CONFIG_XFRM_MIGRATE is not set
# CONFIG_XFRM_STATISTICS is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
# CONFIG_IP_FIB_TRIE_STATS is not set
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_NET_IP_TUNNEL=y
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
CONFIG_INET_TUNNEL=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
CONFIG_INET_LRO=y
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
# CONFIG_IPV6_ROUTER_PREF is not set
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
CONFIG_INET6_AH=y
CONFIG_INET6_ESP=y
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
CONFIG_INET6_XFRM_MODE_TRANSPORT=y
CONFIG_INET6_XFRM_MODE_TUNNEL=y
CONFIG_INET6_XFRM_MODE_BEET=y
# CONFIG_INET6_XFRM_MODE_ROUTEOPTIMIZATION is not set
# CONFIG_IPV6_VTI is not set
CONFIG_IPV6_SIT=y
# CONFIG_IPV6_SIT_6RD is not set
CONFIG_IPV6_NDISC_NODETYPE=y
# CONFIG_IPV6_TUNNEL is not set
# CONFIG_IPV6_GRE is not set
# CONFIG_IPV6_MULTIPLE_TABLES is not set
# CONFIG_IPV6_MROUTE is not set
CONFIG_NETLABEL=y
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
# CONFIG_NETFILTER_DEBUG is not set
# CONFIG_NETFILTER_ADVANCED is not set

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_NETLINK_LOG=y
CONFIG_NF_CONNTRACK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_FTP=y
CONFIG_NF_CONNTRACK_IRC=y
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
CONFIG_NF_CONNTRACK_SIP=y
CONFIG_NF_CT_NETLINK=y
CONFIG_NF_NAT=m
CONFIG_NF_NAT_NEEDED=y
# CONFIG_NF_NAT_AMANDA is not set
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
# CONFIG_NF_NAT_TFTP is not set
# CONFIG_NF_TABLES is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=y
CONFIG_NETFILTER_XT_TARGET_LOG=m
# CONFIG_NETFILTER_XT_TARGET_NETMAP is not set
CONFIG_NETFILTER_XT_TARGET_NFLOG=y
# CONFIG_NETFILTER_XT_TARGET_REDIRECT is not set
CONFIG_NETFILTER_XT_TARGET_SECMARK=y
CONFIG_NETFILTER_XT_TARGET_TCPMSS=y

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
CONFIG_NETFILTER_XT_MATCH_POLICY=y
CONFIG_NETFILTER_XT_MATCH_STATE=y
# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
CONFIG_NF_CONNTRACK_IPV4=y
CONFIG_NF_CONNTRACK_PROC_COMPAT=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
CONFIG_IP_NF_TARGET_ULOG=y
CONFIG_NF_NAT_IPV4=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
# CONFIG_NF_NAT_PPTP is not set
# CONFIG_NF_NAT_H323 is not set
CONFIG_IP_NF_MANGLE=y
# CONFIG_IP_NF_RAW is not set

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV6=y
CONFIG_NF_CONNTRACK_IPV6=y
CONFIG_IP6_NF_IPTABLES=y
CONFIG_IP6_NF_MATCH_IPV6HEADER=y
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
CONFIG_IP6_NF_MANGLE=y
# CONFIG_IP6_NF_RAW is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_HAVE_NET_DSA=y
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
# CONFIG_VSOCKETS is not set
# CONFIG_NETLINK_MMAP is not set
# CONFIG_NETLINK_DIAG is not set
# CONFIG_NET_MPLS_GSO is not set
# CONFIG_HSR is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
# CONFIG_CGROUP_NET_CLASSID is not set
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
# CONFIG_BPF_JIT is not set
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_TCPPROBE is not set
# CONFIG_NET_DROP_MONITOR is not set
# CONFIG_HAMRADIO is not set
# CONFIG_CAN is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
CONFIG_FIB_RULES=y
# CONFIG_WIRELESS is not set
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_INPUT=y
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
CONFIG_HAVE_BPF_JIT=y

#
# Device Drivers
#

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
CONFIG_FW_LOADER=y
CONFIG_FIRMWARE_IN_KERNEL=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_DEBUG_DRIVER is not set
CONFIG_DEBUG_DEVRES=y
CONFIG_SYS_HYPERVISOR=y
# CONFIG_GENERIC_CPU_DEVICES is not set
CONFIG_DMA_SHARED_BUFFER=y

#
# Bus devices
#
CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_MTD is not set
# CONFIG_PARPORT is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PNP=y
CONFIG_PNP_DEBUG_MESSAGES=y

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
CONFIG_ZRAM=y
# CONFIG_ZRAM_DEBUG is not set
# CONFIG_BLK_CPQ_CISS_DA is not set
# CONFIG_BLK_DEV_DAC960 is not set
# CONFIG_BLK_DEV_UMEM is not set
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
# CONFIG_BLK_DEV_RAM is not set
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_VIRTIO_BLK=m
# CONFIG_BLK_DEV_HD is not set
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_RSXX is not set

#
# Misc devices
#
# CONFIG_SENSORS_LIS3LV02D is not set
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_SGI_IOC4 is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ATMEL_SSC is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1780 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_VMWARE_BALLOON is not set
# CONFIG_BMP085_I2C is not set
# CONFIG_PCH_PHUB is not set
# CONFIG_USB_SWITCH_FSA9480 is not set
# CONFIG_SRAM is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_SENSORS_LIS3_I2C is not set

#
# Altera FPGA firmware download module
#
# CONFIG_ALTERA_STAPL is not set
# CONFIG_VMWARE_VMCI is not set

#
# Intel MIC Host Driver
#
# CONFIG_INTEL_MIC_HOST is not set

#
# Intel MIC Card Driver
#
# CONFIG_INTEL_MIC_CARD is not set
# CONFIG_GENWQE is not set
CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=m
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=m
CONFIG_SCSI_DMA=y
CONFIG_SCSI_TGT=m
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_CHR_DEV_OSST=m
CONFIG_BLK_DEV_SR=m
CONFIG_BLK_DEV_SR_VENDOR=y
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_MULTI_LUN=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_FC_TGT_ATTRS=y
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
# CONFIG_SCSI_SRP_ATTRS is not set
CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_SCSI_BNX2X_FCOE is not set
# CONFIG_BE2ISCSI is not set
CONFIG_BLK_DEV_3W_XXXX_RAID=m
# CONFIG_SCSI_HPSA is not set
CONFIG_SCSI_3W_9XXX=m
# CONFIG_SCSI_3W_SAS is not set
CONFIG_SCSI_ACARD=m
CONFIG_SCSI_AACRAID=m
CONFIG_SCSI_AIC7XXX=m
CONFIG_AIC7XXX_CMDS_PER_DEVICE=8
CONFIG_AIC7XXX_RESET_DELAY_MS=15000
CONFIG_AIC7XXX_DEBUG_ENABLE=y
CONFIG_AIC7XXX_DEBUG_MASK=0
CONFIG_AIC7XXX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=32
CONFIG_AIC79XX_RESET_DELAY_MS=15000
CONFIG_AIC79XX_DEBUG_ENABLE=y
CONFIG_AIC79XX_DEBUG_MASK=0
CONFIG_AIC79XX_REG_PRETTY_PRINT=y
CONFIG_SCSI_AIC94XX=m
# CONFIG_AIC94XX_DEBUG is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
# CONFIG_SCSI_MVSAS_TASKLET is not set
# CONFIG_SCSI_MVUMI is not set
CONFIG_SCSI_DPT_I2O=m
CONFIG_SCSI_ADVANSYS=m
CONFIG_SCSI_ARCMSR=m
# CONFIG_SCSI_ESAS2R is not set
CONFIG_MEGARAID_NEWGEN=y
CONFIG_MEGARAID_MM=m
CONFIG_MEGARAID_MAILBOX=m
CONFIG_MEGARAID_LEGACY=m
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT2SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS_LOGGING=y
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_UFSHCD is not set
CONFIG_SCSI_HPTIOP=m
CONFIG_SCSI_BUSLOGIC=m
CONFIG_SCSI_FLASHPOINT=y
# CONFIG_VMWARE_PVSCSI is not set
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
# CONFIG_FCOE_FNIC is not set
CONFIG_SCSI_DMX3191D=m
CONFIG_SCSI_EATA=m
CONFIG_SCSI_EATA_TAGGED_QUEUE=y
CONFIG_SCSI_EATA_LINKED_COMMANDS=y
CONFIG_SCSI_EATA_MAX_TAGS=16
CONFIG_SCSI_FUTURE_DOMAIN=m
CONFIG_SCSI_GDTH=m
CONFIG_SCSI_ISCI=m
CONFIG_SCSI_IPS=m
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
CONFIG_SCSI_STEX=m
CONFIG_SCSI_SYM53C8XX_2=m
CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
CONFIG_SCSI_SYM53C8XX_MMIO=y
CONFIG_SCSI_IPR=m
# CONFIG_SCSI_IPR_TRACE is not set
# CONFIG_SCSI_IPR_DUMP is not set
CONFIG_SCSI_QLOGIC_1280=m
CONFIG_SCSI_QLA_FC=m
# CONFIG_TCM_QLA2XXX is not set
# CONFIG_SCSI_QLA_ISCSI is not set
CONFIG_SCSI_LPFC=m
# CONFIG_SCSI_LPFC_DEBUG_FS is not set
CONFIG_SCSI_DC395x=m
CONFIG_SCSI_DC390T=m
CONFIG_SCSI_DEBUG=m
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
CONFIG_SCSI_SRP=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
# CONFIG_SCSI_LOWLEVEL_PCMCIA is not set
# CONFIG_SCSI_DH is not set
# CONFIG_SCSI_OSD_INITIATOR is not set
CONFIG_ATA=m
# CONFIG_ATA_NONSTANDARD is not set
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
# CONFIG_SATA_AHCI_PLATFORM is not set
CONFIG_SATA_INIC162X=m
# CONFIG_SATA_ACARD_AHCI is not set
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_HIGHBANK is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
# CONFIG_SATA_RCAR is not set
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARASAN_CF is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CS5520 is not set
# CONFIG_PATA_CS5530 is not set
# CONFIG_PATA_CS5536 is not set
# CONFIG_PATA_CYPRESS is not set
CONFIG_PATA_EFAR=m
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
CONFIG_PATA_MARVELL=m
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
CONFIG_PATA_PDC_OLD=m
CONFIG_PATA_RADISYS=m
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SC1200 is not set
CONFIG_PATA_SCH=m
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=m
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
CONFIG_PATA_WINBOND=m

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
CONFIG_PATA_PCMCIA=m
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=m
CONFIG_PATA_LEGACY=m
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
CONFIG_MD_MULTIPATH=m
CONFIG_MD_FAULTY=m
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
# CONFIG_DM_DEBUG is not set
CONFIG_DM_BUFIO=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
# CONFIG_DM_THIN_PROVISIONING is not set
# CONFIG_DM_CACHE is not set
CONFIG_DM_MIRROR=m
# CONFIG_DM_LOG_USERSPACE is not set
# CONFIG_DM_RAID is not set
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
# CONFIG_DM_MULTIPATH_QL is not set
# CONFIG_DM_MULTIPATH_ST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_UEVENT is not set
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
# CONFIG_SBP_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=40
CONFIG_FUSION_CTL=m
# CONFIG_FUSION_LOGGING is not set

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=y
CONFIG_FIREWIRE_OHCI=m
# CONFIG_FIREWIRE_SBP2 is not set
# CONFIG_FIREWIRE_NET is not set
# CONFIG_FIREWIRE_NOSY is not set
# CONFIG_I2O is not set
CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=m
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_EQUALIZER is not set
# CONFIG_NET_FC is not set
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
CONFIG_MACVLAN=y
CONFIG_MACVTAP=y
# CONFIG_VXLAN is not set
CONFIG_NETCONSOLE=m
# CONFIG_NETCONSOLE_DYNAMIC is not set
CONFIG_NETPOLL=y
CONFIG_NETPOLL_TRAP=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_TUN=y
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=m
# CONFIG_NLMON is not set
CONFIG_SUNGEM_PHY=m
# CONFIG_ARCNET is not set

#
# CAIF transport drivers
#
CONFIG_VHOST_NET=y
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_RING=y
CONFIG_VHOST=y

#
# Distributed Switch Architecture drivers
#
# CONFIG_NET_DSA_MV88E6XXX is not set
# CONFIG_NET_DSA_MV88E6060 is not set
# CONFIG_NET_DSA_MV88E6XXX_NEED_PPU is not set
# CONFIG_NET_DSA_MV88E6131 is not set
# CONFIG_NET_DSA_MV88E6123_61_65 is not set
CONFIG_ETHERNET=y
CONFIG_MDIO=m
CONFIG_NET_VENDOR_3COM=y
# CONFIG_PCMCIA_3C574 is not set
# CONFIG_PCMCIA_3C589 is not set
CONFIG_VORTEX=m
CONFIG_TYPHOON=m
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_PCMCIA_NMCLAN is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
CONFIG_ATL1C=m
# CONFIG_ALX is not set
CONFIG_NET_CADENCE=y
# CONFIG_ARM_AT91_ETHER is not set
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
CONFIG_BNX2=m
# CONFIG_CNIC is not set
CONFIG_TIGON3=y
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
# CONFIG_NET_CALXEDA_XGMAC is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
# CONFIG_PCMCIA_XIRCOM is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EXAR=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_FUJITSU=y
# CONFIG_PCMCIA_FMVJ18X is not set
CONFIG_NET_VENDOR_HP=y
# CONFIG_HP100 is not set
CONFIG_NET_VENDOR_INTEL=y
CONFIG_E100=m
CONFIG_E1000=m
CONFIG_E1000E=m
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGB_DCA=y
CONFIG_IGBVF=y
CONFIG_IXGB=m
CONFIG_IXGBE=m
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCA=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=y
# CONFIG_I40E is not set
# CONFIG_I40EVF is not set
CONFIG_NET_VENDOR_I825XX=y
# CONFIG_IP1000 is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
CONFIG_SKGE=m
# CONFIG_SKGE_DEBUG is not set
# CONFIG_SKGE_GENESIS is not set
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX4_CORE is not set
# CONFIG_MLX5_CORE is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_PCMCIA_AXNET is not set
CONFIG_NE2K_PCI=m
# CONFIG_PCMCIA_PCNET is not set
CONFIG_NET_VENDOR_NVIDIA=y
CONFIG_FORCEDETH=y
CONFIG_NET_VENDOR_OKI=y
# CONFIG_PCH_GBE is not set
# CONFIG_ETHOC is not set
CONFIG_NET_PACKET_ENGINE=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_QLGE is not set
# CONFIG_NETXEN_NIC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
CONFIG_8139TOO=m
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
# CONFIG_8139TOO_8129 is not set
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=m
# CONFIG_SH_ETH is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SILAN=y
CONFIG_SC92031=m
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
# CONFIG_SFC is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_PCMCIA_SMC91C92 is not set
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
CONFIG_HAPPYMEAL=m
CONFIG_SUNGEM=m
CONFIG_CASSINI=m
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
CONFIG_TLAN=m
CONFIG_NET_VENDOR_VIA=y
CONFIG_VIA_RHINE=m
# CONFIG_VIA_RHINE_MMIO is not set
CONFIG_VIA_VELOCITY=m
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XIRCOM=y
# CONFIG_PCMCIA_XIRC2PS is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_PHYLIB=y

#
# MII PHY device drivers
#
# CONFIG_AT803X_PHY is not set
# CONFIG_AMD_PHY is not set
CONFIG_MARVELL_PHY=m
CONFIG_DAVICOM_PHY=m
CONFIG_QSEMI_PHY=m
CONFIG_LXT_PHY=m
CONFIG_CICADA_PHY=m
CONFIG_VITESSE_PHY=m
CONFIG_SMSC_PHY=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM87XX_PHY is not set
# CONFIG_ICPLUS_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_MICREL_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_MDIO_BITBANG is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set

#
# USB Network Adapters
#
# CONFIG_USB_CATC is not set
# CONFIG_USB_KAWETH is not set
# CONFIG_USB_PEGASUS is not set
# CONFIG_USB_RTL8150 is not set
# CONFIG_USB_RTL8152 is not set
# CONFIG_USB_USBNET is not set
# CONFIG_USB_HSO is not set
# CONFIG_USB_IPHETH is not set
# CONFIG_WLAN is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
# CONFIG_WAN is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_XEN_NETDEV_BACKEND=y
# CONFIG_VMXNET3 is not set
# CONFIG_ISDN is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_FF_MEMLESS=m
# CONFIG_INPUT_POLLDEV is not set
CONFIG_INPUT_SPARSEKMAP=y
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
# CONFIG_MOUSE_PS2_ELANTECH is not set
# CONFIG_MOUSE_PS2_SENTELIC is not set
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_WACOM is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_PCSPKR is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_MPU3050 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_UINPUT is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_GAMEPORT is not set

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_DEVPTS_MULTIPLE_INSTANCES is not set
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
# CONFIG_SERIAL_NONSTANDARD is not set
# CONFIG_NOZOMI is not set
# CONFIG_N_GSM is not set
# CONFIG_TRACE_SINK is not set
CONFIG_DEVKMEM=y

#
# Serial drivers
#
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_DEPRECATED_OPTIONS=y
CONFIG_SERIAL_8250_PNP=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_CS=m
CONFIG_SERIAL_8250_NR_UARTS=16
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
# CONFIG_SERIAL_8250_RSA is not set
# CONFIG_SERIAL_8250_DW is not set

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_KGDB_NMI is not set
# CONFIG_SERIAL_MFD_HSU is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_TIMBERDALE is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_PCH_UART is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_IPMI_HANDLER is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
CONFIG_HW_RANDOM_VIA=y
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_HW_RANDOM_TPM=y
CONFIG_NVRAM=y
# CONFIG_R3964 is not set
# CONFIG_APPLICOM is not set

#
# PCMCIA character devices
#
# CONFIG_SYNCLINK_CS is not set
# CONFIG_CARDMAN_4000 is not set
# CONFIG_CARDMAN_4040 is not set
# CONFIG_IPWIRELESS is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_HPET=y
# CONFIG_HPET_MMAP is not set
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
CONFIG_TCG_TIS=m
# CONFIG_TCG_TIS_I2C_ATMEL is not set
# CONFIG_TCG_TIS_I2C_INFINEON is not set
# CONFIG_TCG_TIS_I2C_NUVOTON is not set
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
# CONFIG_TELCLOCK is not set
CONFIG_DEVPORT=y
CONFIG_I2C=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
# CONFIG_I2C_CHARDEV is not set
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=y

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
CONFIG_I2C_I801=y
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_DESIGNWARE_PLATFORM is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EG20T is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_PXA_PCI is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_STUB is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# CONFIG_SPI is not set
# CONFIG_HSI is not set

#
# PPS support
#
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
# CONFIG_PPS_CLIENT_LDISC is not set
# CONFIG_PPS_CLIENT_GPIO is not set

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# CONFIG_PTP_1588_CLOCK_PCH is not set
CONFIG_ARCH_WANT_OPTIONAL_GPIOLIB=y
# CONFIG_GPIOLIB is not set
# CONFIG_W1 is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
# CONFIG_PDA_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_BATTERY_BQ27x00 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_POWER_RESET is not set
# CONFIG_POWER_AVS is not set
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_HTU21 is not set
# CONFIG_SENSORS_CORETEMP is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH56XX_COMMON is not set
# CONFIG_SENSORS_ADS1015 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_APPLESMC is not set

#
# ACPI drivers
#
# CONFIG_SENSORS_ACPI_POWER is not set
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
# CONFIG_THERMAL_GOV_FAIR_SHARE is not set
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_THERMAL_EMULATION is not set
# CONFIG_INTEL_POWERCLAMP is not set
CONFIG_X86_PKG_TEMP_THERMAL=m
# CONFIG_ACPI_INT3403_THERMAL is not set

#
# Texas Instruments thermal drivers
#
# CONFIG_WATCHDOG is not set
CONFIG_SSB_POSSIBLE=y

#
# Sonics Silicon Backplane
#
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y

#
# Broadcom specific AMBA
#
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
# CONFIG_MFD_CORE is not set
# CONFIG_MFD_CS5535 is not set
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_CROS_EC is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_LPC_ICH is not set
# CONFIG_LPC_SCH is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77686 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RTSX_PCI is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_STMPE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TPS65217 is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TC3589X is not set
# CONFIG_MFD_TMIO is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# CONFIG_REGULATOR is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_LOAD_EDID_FIRMWARE is not set
CONFIG_DRM_TTM=m

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_TDFX is not set
# CONFIG_DRM_R128 is not set
CONFIG_DRM_RADEON=m
# CONFIG_DRM_RADEON_UMS is not set
CONFIG_DRM_NOUVEAU=m
CONFIG_NOUVEAU_DEBUG=5
CONFIG_NOUVEAU_DEBUG_DEFAULT=3
# CONFIG_DRM_NOUVEAU_BACKLIGHT is not set
# CONFIG_DRM_I810 is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_KMS=y
CONFIG_DRM_I915_FBDEV=y
# CONFIG_DRM_I915_PRELIMINARY_HW_SUPPORT is not set
# CONFIG_DRM_I915_UMS is not set
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_DRM_VIA=m
CONFIG_DRM_SAVAGE=m
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
# CONFIG_DRM_MGAG200 is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_QXL is not set
# CONFIG_DRM_BOCHS is not set
# CONFIG_VGASTATE is not set
CONFIG_VIDEO_OUTPUT_CONTROL=y
CONFIG_HDMI=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
# CONFIG_FB_DDC is not set
# CONFIG_FB_BOOT_VESA_SUPPORT is not set
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_CFB_REV_PIXELS_IN_BYTE is not set
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_SVGALIB is not set
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
CONFIG_FB_CIRRUS=y
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
# CONFIG_FB_VESA is not set
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_GOLDFISH is not set
# CONFIG_FB_VIRTUAL is not set
CONFIG_XEN_FBDEV_FRONTEND=m
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
# CONFIG_FB_BROADSHEET is not set
# CONFIG_FB_AUO_K190X is not set
# CONFIG_FB_SIMPLE is not set
# CONFIG_EXYNOS_VIDEO is not set
CONFIG_BACKLIGHT_LCD_SUPPORT=y
# CONFIG_LCD_CLASS_DEVICE is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
CONFIG_BACKLIGHT_GENERIC=y
# CONFIG_BACKLIGHT_APPLE is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
# CONFIG_BACKLIGHT_LP855X is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
# CONFIG_LOGO is not set
# CONFIG_SOUND is not set

#
# HID support
#
CONFIG_HID=m
CONFIG_HIDRAW=y
# CONFIG_UHID is not set
CONFIG_HID_GENERIC=m

#
# Special HID drivers
#
CONFIG_HID_A4TECH=m
# CONFIG_HID_ACRUX is not set
CONFIG_HID_APPLE=m
# CONFIG_HID_APPLEIR is not set
# CONFIG_HID_AUREAL is not set
CONFIG_HID_BELKIN=m
CONFIG_HID_CHERRY=m
CONFIG_HID_CHICONY=m
CONFIG_HID_CYPRESS=m
# CONFIG_HID_DRAGONRISE is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELECOM is not set
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=m
# CONFIG_HID_HOLTEK is not set
# CONFIG_HID_HUION is not set
# CONFIG_HID_KEYTOUCH is not set
# CONFIG_HID_KYE is not set
# CONFIG_HID_UCLOGIC is not set
# CONFIG_HID_WALTOP is not set
CONFIG_HID_GYRATION=m
# CONFIG_HID_ICADE is not set
# CONFIG_HID_TWINHAN is not set
CONFIG_HID_KENSINGTON=m
# CONFIG_HID_LCPOWER is not set
# CONFIG_HID_LENOVO_TPKBD is not set
CONFIG_HID_LOGITECH=m
# CONFIG_HID_LOGITECH_DJ is not set
CONFIG_LOGITECH_FF=y
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
CONFIG_LOGIWHEELS_FF=y
# CONFIG_HID_MAGICMOUSE is not set
CONFIG_HID_MICROSOFT=m
CONFIG_HID_MONTEREY=m
# CONFIG_HID_MULTITOUCH is not set
CONFIG_HID_NTRIG=m
# CONFIG_HID_ORTEK is not set
CONFIG_HID_PANTHERLORD=m
CONFIG_PANTHERLORD_FF=y
CONFIG_HID_PETALYNX=m
# CONFIG_HID_PICOLCD is not set
# CONFIG_HID_PRIMAX is not set
# CONFIG_HID_ROCCAT is not set
# CONFIG_HID_SAITEK is not set
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
# CONFIG_HID_SPEEDLINK is not set
# CONFIG_HID_STEELSERIES is not set
CONFIG_HID_SUNPLUS=m
# CONFIG_HID_GREENASIA is not set
# CONFIG_HID_SMARTJOYPLUS is not set
# CONFIG_HID_TIVO is not set
CONFIG_HID_TOPSEED=m
# CONFIG_HID_THINGM is not set
# CONFIG_HID_THRUSTMASTER is not set
# CONFIG_HID_WACOM is not set
# CONFIG_HID_WIIMOTE is not set
# CONFIG_HID_XINMO is not set
# CONFIG_HID_ZEROPLUS is not set
# CONFIG_HID_ZYDACRON is not set
# CONFIG_HID_SENSOR_HUB is not set

#
# USB HID support
#
CONFIG_USB_HID=m
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_DEBUG=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_DYNAMIC_MINORS is not set
CONFIG_USB_MON=y
# CONFIG_USB_WUSB_CBAF is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
# CONFIG_USB_XHCI_HCD is not set
CONFIG_USB_EHCI_HCD=y
# CONFIG_USB_EHCI_ROOT_HUB_TT is not set
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_ISP1760_HCD is not set
# CONFIG_USB_ISP1362_HCD is not set
# CONFIG_USB_FUSBH200_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
CONFIG_USB_PRINTER=y
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
# CONFIG_USB_STORAGE_REALTEK is not set
# CONFIG_USB_STORAGE_DATAFAB is not set
# CONFIG_USB_STORAGE_FREECOM is not set
# CONFIG_USB_STORAGE_ISD200 is not set
# CONFIG_USB_STORAGE_USBAT is not set
# CONFIG_USB_STORAGE_SDDR09 is not set
# CONFIG_USB_STORAGE_SDDR55 is not set
# CONFIG_USB_STORAGE_JUMPSHOT is not set
# CONFIG_USB_STORAGE_ALAUDA is not set
# CONFIG_USB_STORAGE_ONETOUCH is not set
# CONFIG_USB_STORAGE_KARMA is not set
# CONFIG_USB_STORAGE_CYPRESS_ATACB is not set
# CONFIG_USB_STORAGE_ENE_UB6250 is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_RIO500 is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_LED is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HSIC_USB3503 is not set

#
# USB Physical Layer drivers
#
# CONFIG_USB_PHY is not set
# CONFIG_USB_OTG_FSM is not set
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_SAMSUNG_USB2PHY is not set
# CONFIG_SAMSUNG_USB3PHY is not set
# CONFIG_USB_ISP1301 is not set
# CONFIG_USB_RCAR_PHY is not set
# CONFIG_USB_GADGET is not set
# CONFIG_UWB is not set
# CONFIG_MMC is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y

#
# LED drivers
#
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP5521 is not set
# CONFIG_LEDS_LP5523 is not set
# CONFIG_LEDS_LP5562 is not set
# CONFIG_LEDS_LP8501 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_PCA9685 is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_DELL_NETBOOKS is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_LM355x is not set
# CONFIG_LEDS_OT200 is not set
# CONFIG_LEDS_BLINKM is not set

#
# LED Triggers
#
# CONFIG_LEDS_TRIGGERS is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=y
# CONFIG_EDAC_MCE_INJ is not set
# CONFIG_EDAC_MM_EDAC is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_CLASS=y
# CONFIG_RTC_HCTOSYS is not set
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_ISL12057 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# SPI RTC drivers
#

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_DS2404 is not set

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_MOXART is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
# CONFIG_INTEL_MID_DMAC is not set
CONFIG_INTEL_IOATDMA=y
# CONFIG_DW_DMAC_CORE is not set
# CONFIG_DW_DMAC is not set
# CONFIG_DW_DMAC_PCI is not set
# CONFIG_TIMB_DMA is not set
# CONFIG_PCH_DMA is not set
CONFIG_DMA_ENGINE=y
CONFIG_DMA_ACPI=y

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y
CONFIG_DCA=y
# CONFIG_AUXDISPLAY is not set
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y

#
# Virtio drivers
#
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
# CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_SELFBALLOONING=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES=y
CONFIG_XEN_DEV_EVTCHN=m
CONFIG_XEN_BACKEND=y
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_SWIOTLB_XEN=y
CONFIG_XEN_TMEM=m
CONFIG_XEN_PCIDEV_BACKEND=y
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_STAGING=y
# CONFIG_ET131X is not set
# CONFIG_SLICOSS is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_ECHO is not set
# CONFIG_COMEDI is not set
# CONFIG_RTS5139 is not set
# CONFIG_RTS5208 is not set
# CONFIG_TRANZPORT is not set
# CONFIG_IDE_PHISON is not set
# CONFIG_DX_SEP is not set
# CONFIG_FB_SM7XX is not set
# CONFIG_CRYSTALHD is not set
# CONFIG_FB_XGI is not set
# CONFIG_ACPI_QUICKSTART is not set
# CONFIG_USB_ENESTORAGE is not set
# CONFIG_BCM_WIMAX is not set
# CONFIG_FT1000 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4 is not set
# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ANDROID is not set
# CONFIG_USB_WPAN_HCD is not set
# CONFIG_WIMAX_GDM72XX is not set
# CONFIG_LTE_GDM724X is not set
CONFIG_NET_VENDOR_SILICOM=y
# CONFIG_SBYPASS is not set
# CONFIG_BPCTL is not set
# CONFIG_CED1401 is not set
# CONFIG_DGRP is not set
# CONFIG_FIREWIRE_SERIAL is not set
# CONFIG_LUSTRE_FS is not set
# CONFIG_XILLYBUS is not set
# CONFIG_DGNC is not set
# CONFIG_DGAP is not set
CONFIG_X86_PLATFORM_DEVICES=y
# CONFIG_ACER_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ASUS_LAPTOP is not set
# CONFIG_DELL_WMI is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_FUJITSU_LAPTOP is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WIRELESS is not set
# CONFIG_HP_WMI is not set
# CONFIG_MSI_LAPTOP is not set
# CONFIG_PANASONIC_LAPTOP is not set
# CONFIG_COMPAL_LAPTOP is not set
# CONFIG_SONY_LAPTOP is not set
# CONFIG_IDEAPAD_LAPTOP is not set
# CONFIG_THINKPAD_ACPI is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_EEEPC_LAPTOP=y
# CONFIG_ASUS_WMI is not set
CONFIG_ACPI_WMI=m
# CONFIG_MSI_WMI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_IBM_RTL is not set
# CONFIG_XO15_EBOOK is not set
# CONFIG_SAMSUNG_LAPTOP is not set
CONFIG_MXM_WMI=m
# CONFIG_INTEL_OAKTRAIL is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_APPLE_GMUX is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set
# CONFIG_PVPANIC is not set
# CONFIG_CHROME_PLATFORMS is not set

#
# Hardware Spinlock drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# CONFIG_MAILBOX is not set
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IRQ_REMAP is not set

#
# Remoteproc drivers
#
# CONFIG_STE_MODEM_RPROC is not set

#
# Rpmsg drivers
#
# CONFIG_PM_DEVFREQ is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
# CONFIG_IIO is not set
# CONFIG_NTB is not set
# CONFIG_VME_BUS is not set
# CONFIG_PWM is not set
# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set
# CONFIG_FMC is not set

#
# PHY Subsystem
#
CONFIG_GENERIC_PHY=y
# CONFIG_PHY_EXYNOS_MIPI_VIDEO is not set
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_POWERCAP is not set

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
# CONFIG_DELL_RBU is not set
# CONFIG_DCDBAS is not set
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_VARS_PSTORE=y
# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
CONFIG_EFI_RUNTIME_MAP=y
CONFIG_UEFI_CPER=y

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
CONFIG_EXT2_FS=m
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT2_FS_XIP=y
CONFIG_EXT3_FS=m
# CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_EXT4_FS=m
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_FS_XIP=y
CONFIG_JBD=m
# CONFIG_JBD_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
CONFIG_REISERFS_FS=m
# CONFIG_REISERFS_CHECK is not set
CONFIG_REISERFS_PROC_INFO=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_JFS_FS=m
CONFIG_JFS_POSIX_ACL=y
CONFIG_JFS_SECURITY=y
# CONFIG_JFS_DEBUG is not set
CONFIG_JFS_STATISTICS=y
CONFIG_XFS_FS=m
# CONFIG_XFS_QUOTA is not set
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
# CONFIG_GFS2_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=m
CONFIG_FILE_LOCKING=y
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_FANOTIFY is not set
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
# CONFIG_FUSE_FS is not set

#
# Caches
#
# CONFIG_FSCACHE is not set

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
# CONFIG_UDF_FS is not set

#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
CONFIG_NTFS_FS=y
# CONFIG_NTFS_DEBUG is not set
CONFIG_NTFS_RW=y

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_CONFIGFS_FS=m
CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_LOGFS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_F2FS_FS is not set
# CONFIG_EFIVAR_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V2=y
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=y
# CONFIG_NFS_SWAP is not set
# CONFIG_NFS_V4_1 is not set
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
# CONFIG_NFSD is not set
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=y
# CONFIG_SUNRPC_DEBUG is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
CONFIG_NLS_UTF8=y
# CONFIG_DLM is not set

#
# Kernel hacking
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
CONFIG_DEFAULT_MESSAGE_LOGLEVEL=4
# CONFIG_BOOT_PRINTK_DELAY is not set
# CONFIG_DYNAMIC_DEBUG is not set

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
CONFIG_DEBUG_INFO_REDUCED=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_FS=y
# CONFIG_HEADERS_CHECK is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_ARCH_WANT_FRAME_POINTERS=y
CONFIG_FRAME_POINTER=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_DEBUG_KERNEL=y

#
# Memory Debugging
#
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_DEBUG_SLAB is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE=4000
# CONFIG_DEBUG_KMEMLEAK_TEST is not set
# CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is not set
CONFIG_DEBUG_STACK_USAGE=y
CONFIG_DEBUG_VM=y
# CONFIG_DEBUG_VM_RB is not set
CONFIG_DEBUG_VIRTUAL=y
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_DEBUG_STACKOVERFLOW=y
CONFIG_DEBUG_STACKOVERFLOW=y
CONFIG_HAVE_ARCH_KMEMCHECK=y
# CONFIG_KMEMCHECK is not set
# CONFIG_DEBUG_SHIRQ is not set

#
# Debug Lockups and Hangs
#
CONFIG_LOCKUP_DETECTOR=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC=y
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=1
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
CONFIG_BOOTPARAM_HUNG_TASK_PANIC=y
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=1
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
# CONFIG_SCHED_DEBUG is not set
CONFIG_SCHEDSTATS=y
CONFIG_TIMER_STATS=y

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
# CONFIG_DEBUG_RT_MUTEXES is not set
# CONFIG_RT_MUTEX_TESTER is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_STACKTRACE=y
# CONFIG_DEBUG_KOBJECT is not set
CONFIG_DEBUG_BUGVERBOSE=y
# CONFIG_DEBUG_WRITECOUNT is not set
# CONFIG_DEBUG_LIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
# CONFIG_SPARSE_RCU_POINTER is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=21
# CONFIG_RCU_CPU_STALL_INFO is not set
# CONFIG_RCU_TRACE is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FAULT_INJECTION=y
# CONFIG_FAILSLAB is not set
# CONFIG_FAIL_PAGE_ALLOC is not set
CONFIG_FAIL_MAKE_REQUEST=y
# CONFIG_FAIL_IO_TIMEOUT is not set
CONFIG_FAULT_INJECTION_DEBUG_FS=y
# CONFIG_LATENCYTOP is not set
CONFIG_ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS=y
# CONFIG_DEBUG_STRICT_USER_COPY_CHECKS is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_FP_TEST=y
CONFIG_HAVE_FUNCTION_TRACE_MCOUNT_TEST=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_FUNCTION_TRACER is not set
# CONFIG_IRQSOFF_TRACER is not set
# CONFIG_SCHED_TRACER is not set
# CONFIG_FTRACE_SYSCALLS is not set
# CONFIG_TRACER_SNAPSHOT is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
# CONFIG_STACK_TRACER is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENT=y
# CONFIG_UPROBE_EVENT is not set
CONFIG_PROBE_EVENTS=y
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_MMIOTRACE is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set

#
# Runtime Testing
#
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_KSTRTOX is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
# CONFIG_DMA_API_DEBUG is not set
# CONFIG_TEST_MODULE is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
# CONFIG_KGDB_LOW_LEVEL_TRAP is not set
CONFIG_KGDB_KDB=y
# CONFIG_KDB_KEYBOARD is not set
CONFIG_KDB_CONTINUE_CATASTROPHIC=0
# CONFIG_STRICT_DEVMEM is not set
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_EFI is not set
CONFIG_X86_PTDUMP=y
CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_DEBUG_SET_MODULE_RONX=y
# CONFIG_DEBUG_NX_TEST is not set
CONFIG_DOUBLEFAULT=y
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_STRESS is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
CONFIG_IO_DELAY_TYPE_0X80=0
CONFIG_IO_DELAY_TYPE_0XED=1
CONFIG_IO_DELAY_TYPE_UDELAY=2
CONFIG_IO_DELAY_TYPE_NONE=3
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEFAULT_IO_DELAY_TYPE=0
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
CONFIG_OPTIMIZE_INLINING=y
# CONFIG_DEBUG_NMI_SELFTEST is not set
# CONFIG_X86_DEBUG_STATIC_CPU_HAS is not set

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_PERSISTENT_KEYRINGS is not set
# CONFIG_BIG_KEYS is not set
# CONFIG_TRUSTED_KEYS is not set
# CONFIG_ENCRYPTED_KEYS is not set
CONFIG_KEYS_DEBUG_PROC_KEYS=y
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
# CONFIG_SECURITY_NETWORK_XFRM is not set
# CONFIG_SECURITY_PATH is not set
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65534
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
# CONFIG_SECURITY_SELINUX_POLICYDB_VERSION_MAX is not set
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_YAMA is not set
# CONFIG_IMA is not set
# CONFIG_EVM is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_DEFAULT_SECURITY="selinux"
CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_BLKCIPHER=y
CONFIG_CRYPTO_BLKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_PCOMP=y
CONFIG_CRYPTO_PCOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_GF128MUL is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_WORKQUEUE=y
# CONFIG_CRYPTO_CRYPTD is not set
CONFIG_CRYPTO_AUTHENC=y
# CONFIG_CRYPTO_TEST is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_SEQIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
# CONFIG_CRYPTO_ECB is not set
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set

#
# Hash modes
#
# CONFIG_CRYPTO_CMAC is not set
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=m
CONFIG_CRYPTO_CRC32C_INTEL=m
# CONFIG_CRYPTO_CRC32 is not set
# CONFIG_CRYPTO_CRC32_PCLMUL is not set
CONFIG_CRYPTO_CRCT10DIF=m
# CONFIG_CRYPTO_CRCT10DIF_PCLMUL is not set
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256_SSSE3 is not set
# CONFIG_CRYPTO_SHA512_SSSE3 is not set
# CONFIG_CRYPTO_SHA256 is not set
# CONFIG_CRYPTO_SHA512 is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_X86_64 is not set
# CONFIG_CRYPTO_AES_NI_INTEL is not set
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_SALSA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
# CONFIG_CRYPTO_DEFLATE is not set
CONFIG_CRYPTO_ZLIB=y
# CONFIG_CRYPTO_LZO is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
CONFIG_CRYPTO_HW=y
# CONFIG_CRYPTO_DEV_PADLOCK is not set
# CONFIG_CRYPTO_DEV_CCP is not set
# CONFIG_ASYMMETRIC_KEY_TYPE is not set
CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_APIC_ARCHITECTURE=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_KVM_DEVICE_ASSIGNMENT=y
CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_GENERIC_IO=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_CRC_CCITT=m
CONFIG_CRC16=m
CONFIG_CRC_T10DIF=m
CONFIG_CRC_ITU_T=y
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
# CONFIG_CRC8 is not set
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
# CONFIG_XZ_DEC_POWERPC is not set
# CONFIG_XZ_DEC_IA64 is not set
# CONFIG_XZ_DEC_ARM is not set
# CONFIG_XZ_DEC_ARMTHUMB is not set
# CONFIG_XZ_DEC_SPARC is not set
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT=y
CONFIG_HAS_DMA=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_NLATTR=y
CONFIG_ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE=y
CONFIG_AVERAGE=y
# CONFIG_CORDIC is not set
# CONFIG_DDR is not set
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_FONT_SUPPORT=m
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y

--82I3+IH0IqGh5yIs
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--82I3+IH0IqGh5yIs--


From xen-devel-bounces@lists.xen.org Fri Feb 28 02:38:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJDL8-0001fs-V6; Fri, 28 Feb 2014 02:38:18 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJDL7-0001fi-6T
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 02:38:17 +0000
Received: from [193.109.254.147:51871] by server-8.bemta-14.messagelabs.com id
	63/3D-18529-896FF035; Fri, 28 Feb 2014 02:38:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393555094!7371569!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4266 invoked from network); 28 Feb 2014 02:38:15 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 02:38:15 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1S2cBjr013092
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 02:38:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1S2cBvN028089
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 02:38:11 GMT
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1S2cBvx028080; Fri, 28 Feb 2014 02:38:11 GMT
Received: from localhost.localdomain (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 18:38:10 -0800
Date: Thu, 27 Feb 2014 21:38:08 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Vrabel <david.vrabel@citrix.com>
Message-ID: <20140228023808.GD7114@localhost.localdomain>
References: <1393513675-31038-1-git-send-email-david.vrabel@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393513675-31038-1-git-send-email-david.vrabel@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv3] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 03:07:55PM +0000, David Vrabel wrote:
> From: David Vrabel <david.vrabel@citrix.com>
> 
> Hypercalls submitted by user space tools via the privcmd driver can
> take a long time (potentially many 10s of seconds) if the hypercall
> has many sub-operations.
> 
> A fully preemptible kernel may deschedule such a task in any upcall
> called from a hypercall continuation.
> 
> However, in a kernel with voluntary or no preemption, hypercall
> continuations in Xen allow event handlers to be run but the task
> issuing the hypercall will not be descheduled until the hypercall is
> complete and the ioctl returns to user space.  These long running
> tasks may also trigger the kernel's soft lockup detection.
> 
> Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
> bracket hypercalls that may be preempted.  Use these in the privcmd
> driver.
> 
> When returning from an upcall, call preempt_schedule_irq() if the
> current task was within a preemptible hypercall.
> 
> Since preempt_schedule_irq() can move the task to a different CPU,
> clear and set xen_in_preemptible_hcall around the call.
> 
> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
> ---
> Changes in v3:
> - Export xen_in_preemptible_hcall (to fix modular privcmd driver).

On 32-bit builds I get:

arch/x86/built-in.o: In function `xen_do_upcall':
/home/konrad/ssd/konrad/linux/arch/x86/kernel/entry_32.S:1016: undefined
reference to `TI_preempt_count'

> 
> Changes in v2:
> - Use per-cpu variable to mark preemptible regions
> - Call preempt_schedule_irq() from the correct place in
>   xen_hypervisor_callback
> ---
>  arch/x86/kernel/entry_32.S |   23 +++++++++++++++++++++++
>  arch/x86/kernel/entry_64.S |   19 +++++++++++++++++++
>  drivers/xen/Makefile       |    2 +-
>  drivers/xen/preempt.c      |   17 +++++++++++++++++
>  drivers/xen/privcmd.c      |    2 ++
>  include/xen/xen-ops.h      |   27 +++++++++++++++++++++++++++
>  6 files changed, 89 insertions(+), 1 deletions(-)
>  create mode 100644 drivers/xen/preempt.c
> 
> diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
> index a2a4f46..b99bc9c 100644
> --- a/arch/x86/kernel/entry_32.S
> +++ b/arch/x86/kernel/entry_32.S
> @@ -998,7 +998,30 @@ ENTRY(xen_hypervisor_callback)
>  ENTRY(xen_do_upcall)
>  1:	mov %esp, %eax
>  	call xen_evtchn_do_upcall
> +#ifdef CONFIG_PREEMPT
>  	jmp  ret_from_intr
> +#else
> +	GET_THREAD_INFO(%ebp)
> +#ifdef CONFIG_VM86
> +	movl PT_EFLAGS(%esp), %eax	# mix EFLAGS and CS
> +	movb PT_CS(%esp), %al
> +	andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
> +#else
> +	movl PT_CS(%esp), %eax
> +	andl $SEGMENT_RPL_MASK, %eax
> +#endif
> +	cmpl $USER_RPL, %eax
> +	jae resume_userspace		# returning to v8086 or userspace
> +	DISABLE_INTERRUPTS(CLBR_ANY)
> +	cmpl $0,TI_preempt_count(%ebp)	# non-zero preempt_count ?
> +	jnz resume_kernel
> +	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
> +	jz resume_kernel
> +	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
> +	call preempt_schedule_irq
> +	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
> +	jmp resume_kernel
> +#endif /* CONFIG_PREEMPT */
>  	CFI_ENDPROC
>  ENDPROC(xen_hypervisor_callback)
>  
> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
> index 1e96c36..d8f4fd8 100644
> --- a/arch/x86/kernel/entry_64.S
> +++ b/arch/x86/kernel/entry_64.S
> @@ -1404,7 +1404,25 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
>  	popq %rsp
>  	CFI_DEF_CFA_REGISTER rsp
>  	decl PER_CPU_VAR(irq_count)
> +#ifdef CONFIG_PREEMPT
>  	jmp  error_exit
> +#else
> +	movl %ebx, %eax
> +	RESTORE_REST
> +	DISABLE_INTERRUPTS(CLBR_NONE)
> +	TRACE_IRQS_OFF
> +	GET_THREAD_INFO(%rcx)
> +	testl %eax, %eax
> +	je error_exit_user
> +	cmpl $0,PER_CPU_VAR(__preempt_count)
> +	jnz retint_kernel
> +	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
> +	jz retint_kernel
> +	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
> +	call preempt_schedule_irq
> +	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
> +	jmp retint_kernel
> +#endif
>  	CFI_ENDPROC
>  END(xen_do_hypervisor_callback)
>  
> @@ -1629,6 +1647,7 @@ ENTRY(error_exit)
>  	GET_THREAD_INFO(%rcx)
>  	testl %eax,%eax
>  	jne retint_kernel
> +error_exit_user:
>  	LOCKDEP_SYS_EXIT_IRQ
>  	movl TI_flags(%rcx),%edx
>  	movl $_TIF_WORK_MASK,%edi
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 45e00af..6b867e9 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
>  obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
>  endif
>  obj-$(CONFIG_X86)			+= fallback.o
> -obj-y	+= grant-table.o features.o balloon.o manage.o
> +obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
>  obj-y	+= events/
>  obj-y	+= xenbus/
>  
> diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
> new file mode 100644
> index 0000000..b5a3e98
> --- /dev/null
> +++ b/drivers/xen/preempt.c
> @@ -0,0 +1,17 @@
> +/*
> + * Preemptible hypercalls
> + *
> + * Copyright (C) 2014 Citrix Systems R&D ltd.
> + *
> + * This source code is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; either version 2 of the
> + * License, or (at your option) any later version.
> + */
> +
> +#include <xen/xen-ops.h>
> +
> +#ifndef CONFIG_PREEMPT
> +DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
> +EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
> +#endif
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 569a13b..59ac71c 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
>  	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
>  		return -EFAULT;
>  
> +	xen_preemptible_hcall_begin();
>  	ret = privcmd_call(hypercall.op,
>  			   hypercall.arg[0], hypercall.arg[1],
>  			   hypercall.arg[2], hypercall.arg[3],
>  			   hypercall.arg[4]);
> +	xen_preemptible_hcall_end();
>  
>  	return ret;
>  }
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index fb2ea8f..6d8c042 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
>  			       int numpgs, struct page **pages);
>  
>  bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
> +
> +#ifdef CONFIG_PREEMPT
> +
> +static inline void xen_preemptible_hcall_begin(void)
> +{
> +}
> +
> +static inline void xen_preemptible_hcall_end(void)
> +{
> +}
> +
> +#else
> +
> +DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
> +
> +static inline void xen_preemptible_hcall_begin(void)
> +{
> +	__this_cpu_write(xen_in_preemptible_hcall, true);
> +}
> +
> +static inline void xen_preemptible_hcall_end(void)
> +{
> +	__this_cpu_write(xen_in_preemptible_hcall, false);
> +}
> +
> +#endif /* CONFIG_PREEMPT */
> +
>  #endif /* INCLUDE_XEN_OPS_H */
> -- 
> 1.7.2.5
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 02:42:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJDPJ-00025p-M3; Fri, 28 Feb 2014 02:42:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJDPI-00025b-QV
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 02:42:36 +0000
Received: from [85.158.137.68:33190] by server-15.bemta-3.messagelabs.com id
	F7/08-19263-B97FF035; Fri, 28 Feb 2014 02:42:35 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393555354!4687140!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4975 invoked from network); 28 Feb 2014 02:42:35 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 02:42:35 -0000
X-IronPort-AV: E=Sophos;i="4.97,559,1389744000"; d="scan'208";a="106503749"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 02:42:32 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 21:42:31 -0500
Message-ID: <1393555348.20365.11.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Charles Arnold <carnold@suse.com>
Date: Fri, 28 Feb 2014 02:42:28 +0000
In-Reply-To: <530E0F7E02000091000B8C20@prv-mh.provo.novell.com>
References: <530E0F7E02000091000B8C20@prv-mh.provo.novell.com>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: Matt Wilson <msw@amazon.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Support for btrfs in pv-grub
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, 2014-02-26 at 15:59 -0700, Charles Arnold wrote:
> Code to support btrfs in pv-grub was added almost two years ago (c/s 25154).

Ccing the author.

> Building the code doesn't seem to be enabled in stubdom/grub/Makefile.
> Was this functionality ever intended to work?

One would like to assume so!

> Does anyone know the status of this code?

Matt?

Looks like Makefile.am was patched but not Makefile.in, and I don't
think we run automake as part of the stubdom build process.

Ian.



From xen-devel-bounces@lists.xen.org Fri Feb 28 02:48:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJDUk-0002Hn-IH; Fri, 28 Feb 2014 02:48:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <steven@uplinklabs.net>) id 1WJDUi-0002Hi-Sx
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 02:48:13 +0000
Received: from [85.158.137.68:54962] by server-9.bemta-3.messagelabs.com id
	7D/29-10184-CE8FF035; Fri, 28 Feb 2014 02:48:12 +0000
X-Env-Sender: steven@uplinklabs.net
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393555689!4750129!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31036 invoked from network); 28 Feb 2014 02:48:10 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 02:48:10 -0000
Received: by mail-lb0-f177.google.com with SMTP id z11so2242868lbi.8
	for <xen-devel@lists.xenproject.org>;
	Thu, 27 Feb 2014 18:48:09 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=QvQf8RZulWvLpambxtBxSnbwz4ohEY31HKuuRO4WiiE=;
	b=IXjrofsweWGPHhSyPhtuF9E71SwJ7QNOEy1dlTQaBI/orQTU0qzjlNbUSaAen510+e
	93FnFiZ90xxQxEs8s0Lkz9VkNUTN9KYo3lbPPTESk/RGuXxmDctHN7BGMsQBzZS65+lA
	KTkFlaqOUAC18Q5aNaQGgZzLLPzyQXD+iKti+VSvetkXVThNcQjHEeNHFQzbNmq8gr1v
	USAR++ebi3WM6y6ljWMkK4pzVo2K1JQm7z/spuCZwSc5OhKhASkkcawJmLThxbfPy0tO
	/Lz323suCx+dhHfWIJFspA7jY8pyXsCV8vBiMpqJMaxlu21GzoHqnRHDQYtJGgnXnOak
	oskw==
X-Gm-Message-State: ALoCoQmH5+1uFWsrKZZEVPAY5+QW+iXLCTjsIO2xvr9zoz4rtZtukFeYPNqJHwJ43vJHzSjTfPMZ
MIME-Version: 1.0
X-Received: by 10.112.142.161 with SMTP id rx1mr8590673lbb.33.1393555689318;
	Thu, 27 Feb 2014 18:48:09 -0800 (PST)
Received: by 10.114.95.65 with HTTP; Thu, 27 Feb 2014 18:48:09 -0800 (PST)
In-Reply-To: <20140228022532.GC7114@localhost.localdomain>
References: <20140228022532.GC7114@localhost.localdomain>
Date: Thu, 27 Feb 2014 18:48:09 -0800
Message-ID: <CAKbGBLiqOrZtcv-ZA_WnK0KzoO4McuuaZoSe+6KTK4uUusDnkw@mail.gmail.com>
From: Steven Noonan <steven@uplinklabs.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Mel Gorman <mgorman@suse.de>, Elena Ufimtseva <ufimtseva@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] Regression introduced by "xen: properly account for
 _PAGE_NUMA during xen pte translations"
 (a9c8e4beeeb64c22b84c803747487857fe424b68)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the issue fixed by a9c8e4be, it was possible to trigger it using a
small test case. Does that same test work if you run it after
migrating the instance?

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    void die(const char *what)
    {
        perror(what);
        exit(1);
    }

    int main(int argc, char **argv)
    {
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                die("mmap");

        /* Tickle the page. */
        ((char *)p)[0] = 0;

        if (mprotect(p, 4096, PROT_NONE) != 0)
                die("mprotect");

        if (mprotect(p, 4096, PROT_READ) != 0)
                die("mprotect");

        if (munmap(p, 4096) != 0)
                die("munmap");

        return 0;
    }

On Thu, Feb 27, 2014 at 6:25 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> This only shows up when running Linux as a 64-bit PV guest, after the
> guest has been migrated, with iscsid running, and when I power off the
> guest. Note: I can also reproduce this by killing 'iscsid'.
>
> If I revert the above mentioned commit the problem disappears.
>
> The page flags it shows are bogus - this guest is running from RAM
> and has no swap.
>
> Here is what the console says (ignore the first BUG please):
>
> [   42.268060] xen:grant_table: Grant tables using version 1 layout
> [   42.268060] BUG: sleeping function called from invalid context at /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
> [   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name: migration/0
> [   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc4upstream #1
> [   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59 ffff88003cc02630
> [   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce ffff88003cc0dd08
> [   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017 ffff88003cc0dd38
> [   42.268060] Call Trace:
> [   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
> [   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
> [   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
> [   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
> [   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
> [   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
> [   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
> [   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
> [   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
> [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> [   42.268060]  [<ffffffff816f612b>] ? _raw_spin_unlock_irqrestore+0x1b/0x70
> [   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
> [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> [   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
> [   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
> [   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
> [   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
> [   42.268060] PM: noirq restore of devices complete after 0.251 msecs
> [   42.268645] PM: early restore of devices complete after 0.151 msecs
>
> #
> # [   42.281199] switch: port 1(eth0) entered disabled state
> [   42.282591] PM: restore of devices complete after 11.656 msecs
> [   42.307965] switch: port 1(eth0) entered forwarding state
> [   42.307990] switch: port 1(eth0) entered forwarding state
>
> #
> #
> # [   57.312124] switch: port 1(eth0) entered forwarding state
> lspci
> # lsscsi
> # ifconfig
> eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
>           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:99 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)
>
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)
>
> switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
>           inet addr:192.168.102.68  Bcast:192.168.102.255  Mask:255.255.255.0
>           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:96 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)
>
> # ping 1   8.8.8.8
> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
> 64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
> 64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
> 64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
> ^C
> --- 8.8.8.8 ping statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 2130ms
> rtt min/avg/max/mdev = 51.421/53.420/57.008/2.556 ms
> # poweroff
> Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '': '/etc/init.d/halt'
>
> # Usage: /etc/init.d/halt {start}
>
> The system is going down NOW!
>
> Sent SIGTERM to all processes
> Feb 28 02:05:30 g-pvops exiting on signal 15
>
> [   71.195552] BUG: Bad page map in process iscsid  pte:39b22120 pmd:06953067
> [   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
> [   71.195576] page flags: 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
> [   71.195585] page dumped because: bad pte
> [   71.195589] addr:00007fb105d37000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:c
> [   71.195598] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.195603] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.195613] CPU: 0 PID: 2296 Comm: iscsid Not tainted 3.14.0-rc4upstream #1
> [   71.195618]  00007fb105d37000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.195627]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b ffff880030ff3c58
> [   71.195634]  ffffffff8117f6e1 ffff880030ff3c58 00007fb105d37000 ffff8800069539b8
> [   71.195642] Call Trace:
> [   71.195649]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.195657]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.195663]  [<ffffffff8117f6e1>] ? activate_page+0xb1/0xe0
> [   71.195669]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.195676]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.195680]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.195688]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.195694]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.195702]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.195707]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.195712]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.195719]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.195724]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.195732]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.195736] Disabling lock debugging due to kernel taint
> [   71.195740] BUG: Bad page map in process iscsid  pte:39b23120 pmd:06953067
> [   71.195744] page:ffffea0000c9efa8 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
> [   71.195751] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
> [   71.195759] page dumped because: bad pte
> [   71.195764] addr:00007fb105d38000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:d
> [   71.195770] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.195774] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.195783]  00007fb105d38000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.195791]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 0000000000000575
> [   71.195798]  0720072007200720 ffff880030ff3c58 00007fb105d38000 ffff8800069539c0
> [   71.195806] Call Trace:
> [   71.195810]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.195816]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.195820]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.195827]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.195832]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.195839]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.195843]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.195849]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.195854]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.195858]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.195863]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.195870]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.195875]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.195880] BUG: Bad page map in process iscsid  pte:39b24120 pmd:06953067
> [   71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
> [   71.195888] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
> [   71.195895] page dumped because: bad pte
> [   71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:e
> [   71.195904] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.195908] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.195916]  00007fb105d39000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.195923]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 000000000000058e
> [   71.195929]  0720072007200720 ffff880030ff3c58 00007fb105d39000 ffff8800069539c8
> [   71.195935] Call Trace:
> [   71.195939]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.195944]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.195948]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.195953]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.195958]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.195963]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.195967]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.195973]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.195977]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.195982]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.195987]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.195992]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.195997]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.196001] BUG: Bad page map in process iscsid  pte:39b25120 pmd:06953067
> [   71.196005] page:ffffea0000c9f018 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
> [   71.362165] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
> [   71.362172] page dumped because: bad pte
> [   71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:f
> [   71.362181] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.362185] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.362197]  00007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.362206]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 00000000000005a7
> [   71.362212]  0720072007200720 ffff880030ff3c58 00007fb105d3a000 ffff8800069539d0
> [   71.362221] Call Trace:
> [   71.362225]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.362230]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.362241]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.362247]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.362254]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.362258]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.362263]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.362270]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.362276]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.362282]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.362287]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.362292]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.362297]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.364639] BUG: Bad page state in process iscsid  pfn:39b25
> [   71.364651] page:ffffea0000c9f018 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
> [   71.364656] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.364663] page dumped because: non-NULL mapping
> [   71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.364711]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.364718]  ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.364724]  0000000000000001 ffffea0000c9f018 0000000000000000 ffff880030ff3c48
> [   71.364733] Call Trace:
> [   71.364739]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.364748]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.364753]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.364760]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.364768]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.364773]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.364780]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.364785]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.364791]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.364798]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.364803]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.364810]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.364815]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.364820]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.364825]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.364830]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.364835]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.364842]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.364848]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.364854] BUG: Bad page state in process iscsid  pfn:39b24
> [   71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
> [   71.364863] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.364871] page dumped because: non-NULL mapping
> [   71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.364913]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.364919]  ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.364925]  0000000000000001 ffffea0000c9efe0 0000000000000000 ffff880030ff3c48
> [   71.364933] Call Trace:
> [   71.364938]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.364945]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.364950]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.364955]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.364960]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.364965]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.364970]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.364975]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.364982]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.364987]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.364994]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.364999]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.365004]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.365009]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.365015]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.365020]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.365025]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.365030]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.365035]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.365039] BUG: Bad page state in process iscsid  pfn:39b23
> [   71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
> [   71.365047] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.365074] page dumped because: non-NULL mapping
> [   71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.365109]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.365115]  ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.365121]  0000000000000001 ffffea0000c9efa8 0000000000000000 ffff880030ff3c48
> [   71.365127] Call Trace:
> [   71.365131]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.365136]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.365141]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.365146]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.365151]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.365155]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.365160]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.365165]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.365170]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.365175]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.365180]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.562344]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.562349]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.562354]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.562359]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.562364]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.562369]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.562379]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.562387]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.562391] BUG: Bad page state in process iscsid  pfn:39b22
> [   71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
> [   71.562399] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.562413] page dumped because: non-NULL mapping
> [   71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.562461]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.562467]  ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.562476]  0000000000000001 ffffea0000c9ef70 0000000000000000 ffff880030ff3c48
> [   71.562486] Call Trace:
> [   71.562490]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.562495]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.562500]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.562515]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.562520]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.562525]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.562535]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.562544]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.562550]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.562555]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.562561]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.562568]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.562579]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.562584]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.562594]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.562603]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.562608]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.562613]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.562617]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
>
> Sent SIGKILL to all processes
>
> Requesting system poweroff
> [   73.375423] reboot: System halted

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 02:48:21 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 02:48:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJDUk-0002Hn-IH; Fri, 28 Feb 2014 02:48:14 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <steven@uplinklabs.net>) id 1WJDUi-0002Hi-Sx
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 02:48:13 +0000
Received: from [85.158.137.68:54962] by server-9.bemta-3.messagelabs.com id
	7D/29-10184-CE8FF035; Fri, 28 Feb 2014 02:48:12 +0000
X-Env-Sender: steven@uplinklabs.net
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393555689!4750129!1
X-Originating-IP: [209.85.217.177]
X-SpamReason: No, hits=0.8 required=7.0 tests=BODY_RANDOM_LONG,
  RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31036 invoked from network); 28 Feb 2014 02:48:10 -0000
Received: from mail-lb0-f177.google.com (HELO mail-lb0-f177.google.com)
	(209.85.217.177)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 02:48:10 -0000
Received: by mail-lb0-f177.google.com with SMTP id z11so2242868lbi.8
	for <xen-devel@lists.xenproject.org>;
	Thu, 27 Feb 2014 18:48:09 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type
	:content-transfer-encoding;
	bh=QvQf8RZulWvLpambxtBxSnbwz4ohEY31HKuuRO4WiiE=;
	b=IXjrofsweWGPHhSyPhtuF9E71SwJ7QNOEy1dlTQaBI/orQTU0qzjlNbUSaAen510+e
	93FnFiZ90xxQxEs8s0Lkz9VkNUTN9KYo3lbPPTESk/RGuXxmDctHN7BGMsQBzZS65+lA
	KTkFlaqOUAC18Q5aNaQGgZzLLPzyQXD+iKti+VSvetkXVThNcQjHEeNHFQzbNmq8gr1v
	USAR++ebi3WM6y6ljWMkK4pzVo2K1JQm7z/spuCZwSc5OhKhASkkcawJmLThxbfPy0tO
	/Lz323suCx+dhHfWIJFspA7jY8pyXsCV8vBiMpqJMaxlu21GzoHqnRHDQYtJGgnXnOak
	oskw==
X-Gm-Message-State: ALoCoQmH5+1uFWsrKZZEVPAY5+QW+iXLCTjsIO2xvr9zoz4rtZtukFeYPNqJHwJ43vJHzSjTfPMZ
MIME-Version: 1.0
X-Received: by 10.112.142.161 with SMTP id rx1mr8590673lbb.33.1393555689318;
	Thu, 27 Feb 2014 18:48:09 -0800 (PST)
Received: by 10.114.95.65 with HTTP; Thu, 27 Feb 2014 18:48:09 -0800 (PST)
In-Reply-To: <20140228022532.GC7114@localhost.localdomain>
References: <20140228022532.GC7114@localhost.localdomain>
Date: Thu, 27 Feb 2014 18:48:09 -0800
Message-ID: <CAKbGBLiqOrZtcv-ZA_WnK0KzoO4McuuaZoSe+6KTK4uUusDnkw@mail.gmail.com>
From: Steven Noonan <steven@uplinklabs.net>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Mel Gorman <mgorman@suse.de>, Elena Ufimtseva <ufimtseva@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] Regression introduced by "xen: properly account for
 _PAGE_NUMA during xen pte translations"
 (a9c8e4beeeb64c22b84c803747487857fe424b68)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

For the issue fixed by a9c8e4be, it was possible to trigger it using a
small test case. Does that same test work if you run it after
migrating the instance?

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    void die(const char *what)
    {
        perror(what);
        exit(1);
    }

    int main(int argc, char **argv)
    {
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
                die("mmap");

        /* Tickle the page. */
        ((char *)p)[0] = 0;

        if (mprotect(p, 4096, PROT_NONE) != 0)
                die("mprotect");

        if (mprotect(p, 4096, PROT_READ) != 0)
                die("mprotect");

        if (munmap(p, 4096) != 0)
                die("munmap");

        return 0;
    }

On Thu, Feb 27, 2014 at 6:25 PM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> This only shows up when using the Linux kernel as a 64-bit PV guest and
> when I have migrated and I am running iscsid and I poweroff the guest.
> Note: I can also reproduce this if I kill 'iscsid'.
>
> If I revert the above mentioned commit the problem disappears.
>
> The page flags it shows are bogus - this guest is running from RAM
> and has no swap.
>
> Here is what the console says (ignore the first BUG please):
>
> [   42.268060] xen:grant_table: Grant tables using version 1 layout
> [   42.268060] BUG: sleeping function called from invalid context at /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
> [   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name: migration/0
> [   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc4upstream #1
> [   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59 ffff88003cc02630
> [   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce ffff88003cc0dd08
> [   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017 ffff88003cc0dd38
> [   42.268060] Call Trace:
> [   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
> [   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
> [   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
> [   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
> [   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
> [   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
> [   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
> [   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
> [   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
> [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> [   42.268060]  [<ffffffff816f612b>] ? _raw_spin_unlock_irqrestore+0x1b/0x70
> [   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
> [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> [   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
> [   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
> [   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
> [   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
> [   42.268060] PM: noirq restore of devices complete after 0.251 msecs
> [   42.268645] PM: early restore of devices complete after 0.151 msecs
>
> #
> # [   42.281199] switch: port 1(eth0) entered disabled state
> [   42.282591] PM: restore of devices complete after 11.656 msecs
> [   42.307965] switch: port 1(eth0) entered forwarding state
> [   42.307990] switch: port 1(eth0) entered forwarding state
>
> #
> #
> # [   57.312124] switch: port 1(eth0) entered forwarding state
> lspci
> # lsscsi
> # ifconfig
> eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
>           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:99 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:1000
>           RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)
>
> lo        Link encap:Local Loopback
>           inet addr:127.0.0.1  Mask:255.0.0.0
>           inet6 addr: ::1/128 Scope:Host
>           UP LOOPBACK RUNNING  MTU:65536  Metric:1
>           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)
>
> switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
>           inet addr:192.168.102.68  Bcast:192.168.102.255  Mask:255.255.255.0
>           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
>           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>           RX packets:96 errors:0 dropped:0 overruns:0 frame:0
>           TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
>           collisions:0 txqueuelen:0
>           RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)
>
> # ping 1   8.8.8.8
> PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
> 64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
> 64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
> 64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
> ^C
> --- 8.8.8.8 ping statistics ---
> 3 packets transmitted, 3 received, 0% packet loss, time 2130ms
> rtt min/avg/max/mdev = 51.421/53.420/57.008/2.556 ms
> # poweroff
> Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '': '/etc/init.d/halt'
>
> # Usage: /etc/init.d/halt {start}
>
> The system is going down NOW!
>
> Sent SIGTERM to all processes
> Feb 28 02:05:30 g-pvops exiting on signal 15
>
> [   71.195552] BUG: Bad page map in process iscsid  pte:39b22120 pmd:06953067
> [   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
> [   71.195576] page flags: 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
> [   71.195585] page dumped because: bad pte
> [   71.195589] addr:00007fb105d37000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:c
> [   71.195598] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.195603] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.195613] CPU: 0 PID: 2296 Comm: iscsid Not tainted 3.14.0-rc4upstream #1
> [   71.195618]  00007fb105d37000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.195627]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b ffff880030ff3c58
> [   71.195634]  ffffffff8117f6e1 ffff880030ff3c58 00007fb105d37000 ffff8800069539b8
> [   71.195642] Call Trace:
> [   71.195649]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.195657]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.195663]  [<ffffffff8117f6e1>] ? activate_page+0xb1/0xe0
> [   71.195669]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.195676]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.195680]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.195688]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.195694]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.195702]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.195707]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.195712]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.195719]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.195724]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.195732]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.195736] Disabling lock debugging due to kernel taint
> [   71.195740] BUG: Bad page map in process iscsid  pte:39b23120 pmd:06953067
> [   71.195744] page:ffffea0000c9efa8 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
> [   71.195751] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
> [   71.195759] page dumped because: bad pte
> [   71.195764] addr:00007fb105d38000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:d
> [   71.195770] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.195774] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.195783]  00007fb105d38000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.195791]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 0000000000000575
> [   71.195798]  0720072007200720 ffff880030ff3c58 00007fb105d38000 ffff8800069539c0
> [   71.195806] Call Trace:
> [   71.195810]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.195816]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.195820]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.195827]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.195832]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.195839]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.195843]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.195849]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.195854]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.195858]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.195863]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.195870]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.195875]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.195880] BUG: Bad page map in process iscsid  pte:39b24120 pmd:06953067
> [   71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
> [   71.195888] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
> [   71.195895] page dumped because: bad pte
> [   71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:e
> [   71.195904] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.195908] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.195916]  00007fb105d39000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.195923]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 000000000000058e
> [   71.195929]  0720072007200720 ffff880030ff3c58 00007fb105d39000 ffff8800069539c8
> [   71.195935] Call Trace:
> [   71.195939]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.195944]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.195948]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.195953]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.195958]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.195963]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.195967]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.195973]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.195977]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.195982]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.195987]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.195992]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.195997]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.196001] BUG: Bad page map in process iscsid  pte:39b25120 pmd:06953067
> [   71.196005] page:ffffea0000c9f018 count:2 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
> [   71.362165] page flags: 0x100000000080038(uptodate|dirty|lru|swapbacked)
> [   71.362172] page dumped because: bad pte
> [   71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma:          (null) mapping:ffff880032006970 index:f
> [   71.362181] vma->vm_ops->fault: shmem_fault+0x0/0x70
> [   71.362185] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> [   71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.362197]  00007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59 0000000000000000
> [   71.362206]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b 00000000000005a7
> [   71.362212]  0720072007200720 ffff880030ff3c58 00007fb105d3a000 ffff8800069539d0
> [   71.362221] Call Trace:
> [   71.362225]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.362230]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> [   71.362241]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> [   71.362247]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> [   71.362254]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> [   71.362258]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> [   71.362263]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> [   71.362270]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.362276]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.362282]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.362287]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.362292]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.362297]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.364639] BUG: Bad page state in process iscsid  pfn:39b25
> [   71.364651] page:ffffea0000c9f018 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1cb
> [   71.364656] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.364663] page dumped because: non-NULL mapping
> [   71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.364711]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.364718]  ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.364724]  0000000000000001 ffffea0000c9f018 0000000000000000 ffff880030ff3c48
> [   71.364733] Call Trace:
> [   71.364739]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.364748]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.364753]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.364760]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.364768]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.364773]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.364780]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.364785]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.364791]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.364798]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.364803]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.364810]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.364815]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.364820]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.364825]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.364830]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.364835]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.364842]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.364848]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.364854] BUG: Bad page state in process iscsid  pfn:39b24
> [   71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1ca
> [   71.364863] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.364871] page dumped because: non-NULL mapping
> [   71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.364913]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.364919]  ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.364925]  0000000000000001 ffffea0000c9efe0 0000000000000000 ffff880030ff3c48
> [   71.364933] Call Trace:
> [   71.364938]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.364945]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.364950]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.364955]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.364960]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.364965]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.364970]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.364975]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.364982]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.364987]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.364994]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.364999]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.365004]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.365009]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.365015]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.365020]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.365025]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.365030]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.365035]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.365039] BUG: Bad page state in process iscsid  pfn:39b23
> [   71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c9
> [   71.365047] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.365074] page dumped because: non-NULL mapping
> [   71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.365109]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.365115]  ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.365121]  0000000000000001 ffffea0000c9efa8 0000000000000000 ffff880030ff3c48
> [   71.365127] Call Trace:
> [   71.365131]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.365136]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.365141]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.365146]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.365151]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.365155]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.365160]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.365165]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.365170]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.365175]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.365180]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.562344]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.562349]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.562354]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.562359]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.562364]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.562369]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.562379]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.562387]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> [   71.562391] BUG: Bad page state in process iscsid  pfn:39b22
> [   71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
> [   71.562399] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> [   71.562413] page dumped because: non-NULL mapping
> [   71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> [   71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B        3.14.0-rc4upstream #1
> [   71.562461]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59 ffffffff8197f00f
> [   71.562467]  ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0 ffff880030ff3c48
> [   71.562476]  0000000000000001 ffffea0000c9ef70 0000000000000000 ffff880030ff3c48
> [   71.562486] Call Trace:
> [   71.562490]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> [   71.562495]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> [   71.562500]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> [   71.562515]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> [   71.562520]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> [   71.562525]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> [   71.562535]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> [   71.562544]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> [   71.562550]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> [   71.562555]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> [   71.562561]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> [   71.562568]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> [   71.562579]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> [   71.562584]  [<ffffffff8109d672>] mmput+0x52/0x100
> [   71.562594]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> [   71.562603]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> [   71.562608]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> [   71.562613]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> [   71.562617]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
>
> Sent SIGKILL to all processes
>
> Requesting system poweroff
> [   73.375423] reboot: System halted

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 03:18:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 03:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJDxk-0003Jf-IZ; Fri, 28 Feb 2014 03:18:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WJDxh-0003Ja-Gy
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 03:18:10 +0000
Received: from [85.158.139.211:46883] by server-14.bemta-5.messagelabs.com id
	C4/BE-27598-0FFFF035; Fri, 28 Feb 2014 03:18:08 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393557484!6771791!1
X-Originating-IP: [209.85.216.175]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3056 invoked from network); 28 Feb 2014 03:18:05 -0000
Received: from mail-qc0-f175.google.com (HELO mail-qc0-f175.google.com)
	(209.85.216.175)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 03:18:05 -0000
Received: by mail-qc0-f175.google.com with SMTP id e16so145661qcx.34
	for <xen-devel@lists.xenproject.org>;
	Thu, 27 Feb 2014 19:18:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=RhSjURpKnJrAuLM6juMpd4lbAXaPEDGyqvZjSr0lpqM=;
	b=jr/4sX+Sbkr4z5nlttqP7JsoJpSD4kmAKs+MFdkbN4tVvGdfpWpg0cFoXyVe8VcPna
	UmNUjAbwQDJta0Zh2xiayUgGBzK8YwfMOA4ehAt9HTXz+vt0fFDBg66A77PAD5ApXUwy
	Vl35sdg5qDRMLW7Hv21t861oBQGGUkfSNs9Hzi6/6VSDZMlyiatimj2IZIhz0cFJQBTn
	qQNGqhLXHh1HkKhM7uhGLSw/RHbs/uU6XhOm0BXwk1qGf4uFczCTsDfqmSQLJIIlLlv7
	capIkWP+NlQsqKBh6oZGNE4sySCPSfGjVuA6I6zsH4KYibG7EtwfgnnKOdnxGC+shYxd
	E3JQ==
MIME-Version: 1.0
X-Received: by 10.229.117.3 with SMTP id o3mr631174qcq.3.1393557484295; Thu,
	27 Feb 2014 19:18:04 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Thu, 27 Feb 2014 19:18:04 -0800 (PST)
In-Reply-To: <CAKbGBLiqOrZtcv-ZA_WnK0KzoO4McuuaZoSe+6KTK4uUusDnkw@mail.gmail.com>
References: <20140228022532.GC7114@localhost.localdomain>
	<CAKbGBLiqOrZtcv-ZA_WnK0KzoO4McuuaZoSe+6KTK4uUusDnkw@mail.gmail.com>
Date: Thu, 27 Feb 2014 22:18:04 -0500
Message-ID: <CAEr7rXhBn57V8OEp0x6D=0TBca688xeoa5d_czTRDcxLN7ef_w@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Steven Noonan <steven@uplinklabs.net>
Cc: David Vrabel <david.vrabel@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] Regression introduced by "xen: properly account for
 _PAGE_NUMA during xen pte translations"
 (a9c8e4beeeb64c22b84c803747487857fe424b68)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5251542664070070604=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 9:48 PM, Steven Noonan <steven@uplinklabs.net> wrote:

> For the issue fixed by a9c8e4be, it was possible to trigger it using a
> small test case. Does that same test work if you run it after
> migrating the instance?
>
>     #include <errno.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <sys/mman.h>
>
>     void die(const char *what)
>     {
>         perror(what);
>         exit(1);
>     }
>
>     int main(int argc, char **argv)
>     {
>         void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE
> | MAP_ANONYMOUS, -1, 0);
>
>         if (p == MAP_FAILED)
>                 die("mmap");
>
>         /* Tickle the page. */
>         ((char *)p)[0] = 0;
>
>         if (mprotect(p, 4096, PROT_NONE) != 0)
>                 die("mprotect");
>
>         if (mprotect(p, 4096, PROT_READ) != 0)
>                 die("mprotect");
>
>         if (munmap(p, 4096) != 0)
>                 die("munmap");
>
>         return 0;
>     }
>
> On Thu, Feb 27, 2014 at 6:25 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> >
> > This only shows up when running the Linux kernel as a 64-bit PV guest:
> > after I have migrated the guest while iscsid is running, and I then
> > power off the guest. Note: I can also reproduce this if I kill 'iscsid'.
> >
> > If I revert the above mentioned commit the problem disappears.
> >
> > The page flags it shows are bogus - this guest is running from RAM
> > and has no swap.
> >
> > Here is what the console says (ignore the first BUG please):
> >
> > [   42.268060] xen:grant_table: Grant tables using version 1 layout
> > [   42.268060] BUG: sleeping function called from invalid context at
> /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
> > [   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name:
> migration/0
> > [   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted
> 3.14.0-rc4upstream #1
> > [   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59
> ffff88003cc02630
> > [   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce
> ffff88003cc0dd08
> > [   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017
> ffff88003cc0dd38
> > [   42.268060] Call Trace:
> > [   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
> > [   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
> > [   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
> > [   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
> > [   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
> > [   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
> > [   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
> > [   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
> > [   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
> > [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> > [   42.268060]  [<ffffffff816f612b>] ?
> _raw_spin_unlock_irqrestore+0x1b/0x70
> > [   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
> > [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> > [   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
> > [   42.268060]  [<ffffffff810c4840>] ?
> kthread_freezable_should_stop+0x80/0x80
> > [   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
> > [   42.268060]  [<ffffffff810c4840>] ?
> kthread_freezable_should_stop+0x80/0x80
> > [   42.268060] PM: noirq restore of devices complete after 0.251 msecs
> > [   42.268645] PM: early restore of devices complete after 0.151 msecs
> >
> > #
> > # [   42.281199] switch: port 1(eth0) entered disabled state
> > [   42.282591] PM: restore of devices complete after 11.656 msecs
> > [   42.307965] switch: port 1(eth0) entered forwarding state
> > [   42.307990] switch: port 1(eth0) entered forwarding state
> >
> > #
> > #
> > # [   57.312124] switch: port 1(eth0) entered forwarding state
> > lspci
> > # lsscsi
> > # ifconfig
> > eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
> >           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:99 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:1000
> >           RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)
> >
> > lo        Link encap:Local Loopback
> >           inet addr:127.0.0.1  Mask:255.0.0.0
> >           inet6 addr: ::1/128 Scope:Host
> >           UP LOOPBACK RUNNING  MTU:65536  Metric:1
> >           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)
> >
> > switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
> >           inet addr:192.168.102.68  Bcast:192.168.102.255
>  Mask:255.255.255.0
> >           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:96 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)
> >
> > # ping 1   8.8.8.8
> > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
> > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
> > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
> > 64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
> > ^C
> > --- 8.8.8.8 ping statistics ---
> > 3 packets transmitted, 3 received, 0% packet loss, time 2130ms
> > rtt min/avg/max/mdev = 51.421/53.420/57.008/2.556 ms
> > # poweroff
> > Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '':
> '/etc/init.d/halt'
> >
> > # Usage: /etc/init.d/halt {start}
> >
> > The system is going down NOW!
> >
> > Sent SIGTERM to all processes
> > Feb 28 02:05:30 g-pvops exiting on signal 15
> >
> > [   71.195552] BUG: Bad page map in process iscsid  pte:39b22120
> pmd:06953067
> > [   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1
> mapping:ffff88003a536490 index:0x1c8
> > [   71.195576] page flags:
> 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
> > [   71.195585] page dumped because: bad pte
> > [   71.195589] addr:00007fb105d37000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:c
> > [   71.195598] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.195603] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.195613] CPU: 0 PID: 2296 Comm: iscsid Not tainted
> 3.14.0-rc4upstream #1
> > [   71.195618]  00007fb105d37000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.195627]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> ffff880030ff3c58
> > [   71.195634]  ffffffff8117f6e1 ffff880030ff3c58 00007fb105d37000
> ffff8800069539b8
> > [   71.195642] Call Trace:
> > [   71.195649]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.195657]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.195663]  [<ffffffff8117f6e1>] ? activate_page+0xb1/0xe0
> > [   71.195669]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.195676]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.195680]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.195688]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.195694]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.195702]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.195707]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.195712]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.195719]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.195724]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.195732]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.195736] Disabling lock debugging due to kernel taint
> > [   71.195740] BUG: Bad page map in process iscsid  pte:39b23120
> pmd:06953067
> > [   71.195744] page:ffffea0000c9efa8 count:2 mapcount:-1
> mapping:ffff88003a536490 index:0x1c9
> > [   71.195751] page flags:
> 0x100000000080038(uptodate|dirty|lru|swapbacked)
> > [   71.195759] page dumped because: bad pte
> > [   71.195764] addr:00007fb105d38000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:d
> > [   71.195770] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.195774] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.195783]  00007fb105d38000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.195791]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> 0000000000000575
> > [   71.195798]  0720072007200720 ffff880030ff3c58 00007fb105d38000
> ffff8800069539c0
> > [   71.195806] Call Trace:
> > [   71.195810]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.195816]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.195820]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.195827]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.195832]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.195839]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.195843]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.195849]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.195854]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.195858]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.195863]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.195870]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.195875]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.195880] BUG: Bad page map in process iscsid  pte:39b24120
> pmd:06953067
> > [   71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1
> mapping:ffff88003a536490 index:0x1ca
> > [   71.195888] page flags:
> 0x100000000080038(uptodate|dirty|lru|swapbacked)
> > [   71.195895] page dumped because: bad pte
> > [   71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:e
> > [   71.195904] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.195908] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.195916]  00007fb105d39000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.195923]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> 000000000000058e
> > [   71.195929]  0720072007200720 ffff880030ff3c58 00007fb105d39000
> ffff8800069539c8
> > [   71.195935] Call Trace:
> > [   71.195939]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.195944]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.195948]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.195953]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.195958]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.195963]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.195967]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.195973]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.195977]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.195982]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.195987]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.195992]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.195997]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.196001] BUG: Bad page map in process iscsid  pte:39b25120
> pmd:06953067
> > [   71.196005] page:ffffea0000c9f018 count:2 mapcount:-1
> mapping:ffff88003a536490 index:0x1cb
> > [   71.362165] page flags:
> 0x100000000080038(uptodate|dirty|lru|swapbacked)
> > [   71.362172] page dumped because: bad pte
> > [   71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:f
> > [   71.362181] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.362185] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.362197]  00007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.362206]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> 00000000000005a7
> > [   71.362212]  0720072007200720 ffff880030ff3c58 00007fb105d3a000
> ffff8800069539d0
> > [   71.362221] Call Trace:
> > [   71.362225]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.362230]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.362241]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.362247]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.362254]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.362258]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.362263]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.362270]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.362276]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.362282]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.362287]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.362292]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.362297]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.364639] BUG: Bad page state in process iscsid  pfn:39b25
> > [   71.364651] page:ffffea0000c9f018 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1cb
> > [   71.364656] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.364663] page dumped because: non-NULL mapping
> > [   71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.364711]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.364718]  ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.364724]  0000000000000001 ffffea0000c9f018 0000000000000000
> ffff880030ff3c48
> > [   71.364733] Call Trace:
> > [   71.364739]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.364748]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.364753]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.364760]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.364768]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.364773]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.364780]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.364785]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.364791]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.364798]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.364803]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.364810]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.364815]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.364820]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.364825]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.364830]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.364835]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.364842]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.364848]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.364854] BUG: Bad page state in process iscsid  pfn:39b24
> > [   71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1ca
> > [   71.364863] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.364871] page dumped because: non-NULL mapping
> > [   71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.364913]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.364919]  ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.364925]  0000000000000001 ffffea0000c9efe0 0000000000000000
> ffff880030ff3c48
> > [   71.364933] Call Trace:
> > [   71.364938]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.364945]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.364950]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.364955]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.364960]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.364965]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.364970]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.364975]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.364982]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.364987]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.364994]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.364999]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.365004]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.365009]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.365015]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.365020]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.365025]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.365030]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.365035]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.365039] BUG: Bad page state in process iscsid  pfn:39b23
> > [   71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1c9
> > [   71.365047] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.365074] page dumped because: non-NULL mapping
> > [   71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.365109]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.365115]  ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.365121]  0000000000000001 ffffea0000c9efa8 0000000000000000
> ffff880030ff3c48
> > [   71.365127] Call Trace:
> > [   71.365131]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.365136]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.365141]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.365146]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.365151]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.365155]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.365160]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.365165]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.365170]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.365175]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.365180]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.562344]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.562349]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.562354]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.562359]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.562364]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.562369]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.562379]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.562387]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.562391] BUG: Bad page state in process iscsid  pfn:39b22
> > [   71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1c8
> > [   71.562399] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.562413] page dumped because: non-NULL mapping
> > [   71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.562461]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.562467]  ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.562476]  0000000000000001 ffffea0000c9ef70 0000000000000000
> ffff880030ff3c48
> > [   71.562486] Call Trace:
> > [   71.562490]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.562495]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.562500]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.562515]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.562520]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.562525]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.562535]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.562544]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.562550]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.562555]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.562561]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.562568]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.562579]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.562584]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.562594]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.562603]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.562608]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.562613]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.562617]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> >
> > Sent SIGKILL to all processes
> >
> > Requesting system poweroff
> > [   73.375423] reboot: System halted
>


Nice, I will also look at this. I have not tested the above-mentioned patch
with migration, though.

-- 
Elena

&gt; [ =A0 71.195774] vma-&gt;vm_file-&gt;f_op-&gt;mmap: shmem_mmap+0x0/0x3=
0<br>
&gt; [ =A0 71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.195783] =A000007fb105d38000 ffff880030ff3c28 ffffffff816f0e59=
 0000000000000000<br>
&gt; [ =A0 71.195791] =A0ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b=
 0000000000000575<br>
&gt; [ =A0 71.195798] =A00720072007200720 ffff880030ff3c58 00007fb105d38000=
 ffff8800069539c0<br>
&gt; [ =A0 71.195806] Call Trace:<br>
&gt; [ =A0 71.195810] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.195816] =A0[&lt;ffffffff81199a5b&gt;] print_bad_pte+0x1bb/0x2=
80<br>
&gt; [ =A0 71.195820] =A0[&lt;ffffffff8119ad68&gt;] unmap_single_vma+0x8c8/=
0x910<br>
&gt; [ =A0 71.195827] =A0[&lt;ffffffff81040a69&gt;] ? xen_pte_unlock+0x9/0x=
10<br>
&gt; [ =A0 71.195832] =A0[&lt;ffffffff8119adfc&gt;] unmap_vmas+0x4c/0xa0<br=
>
&gt; [ =A0 71.195839] =A0[&lt;ffffffff811a2920&gt;] exit_mmap+0x90/0x160<br=
>
&gt; [ =A0 71.195843] =A0[&lt;ffffffff816f5f13&gt;] ? _raw_spin_lock_irqsav=
e+0x13/0x60<br>
&gt; [ =A0 71.195849] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.195854] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.195858] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.195863] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.195870] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.195875] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.195880] BUG: Bad page map in process iscsid =A0pte:39b24120 p=
md:06953067<br>
&gt; [ =A0 71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1 mapping:fff=
f88003a536490 index:0x1ca<br>
&gt; [ =A0 71.195888] page flags: 0x100000000080038(uptodate|dirty|lru|swap=
backed)<br>
&gt; [ =A0 71.195895] page dumped because: bad pte<br>
&gt; [ =A0 71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma: =A0=
 =A0 =A0 =A0 =A0(null) mapping:ffff880032006970 index:e<br>
&gt; [ =A0 71.195904] vma-&gt;vm_ops-&gt;fault: shmem_fault+0x0/0x70<br>
&gt; [ =A0 71.195908] vma-&gt;vm_file-&gt;f_op-&gt;mmap: shmem_mmap+0x0/0x3=
0<br>
&gt; [ =A0 71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.195916] =A000007fb105d39000 ffff880030ff3c28 ffffffff816f0e59=
 0000000000000000<br>
&gt; [ =A0 71.195923] =A0ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b=
 000000000000058e<br>
&gt; [ =A0 71.195929] =A00720072007200720 ffff880030ff3c58 00007fb105d39000=
 ffff8800069539c8<br>
&gt; [ =A0 71.195935] Call Trace:<br>
&gt; [ =A0 71.195939] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.195944] =A0[&lt;ffffffff81199a5b&gt;] print_bad_pte+0x1bb/0x2=
80<br>
&gt; [ =A0 71.195948] =A0[&lt;ffffffff8119ad68&gt;] unmap_single_vma+0x8c8/=
0x910<br>
&gt; [ =A0 71.195953] =A0[&lt;ffffffff81040a69&gt;] ? xen_pte_unlock+0x9/0x=
10<br>
&gt; [ =A0 71.195958] =A0[&lt;ffffffff8119adfc&gt;] unmap_vmas+0x4c/0xa0<br=
>
&gt; [ =A0 71.195963] =A0[&lt;ffffffff811a2920&gt;] exit_mmap+0x90/0x160<br=
>
&gt; [ =A0 71.195967] =A0[&lt;ffffffff816f5f13&gt;] ? _raw_spin_lock_irqsav=
e+0x13/0x60<br>
&gt; [ =A0 71.195973] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.195977] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.195982] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.195987] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.195992] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.195997] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.196001] BUG: Bad page map in process iscsid =A0pte:39b25120 p=
md:06953067<br>
&gt; [ =A0 71.196005] page:ffffea0000c9f018 count:2 mapcount:-1 mapping:fff=
f88003a536490 index:0x1cb<br>
&gt; [ =A0 71.362165] page flags: 0x100000000080038(uptodate|dirty|lru|swap=
backed)<br>
&gt; [ =A0 71.362172] page dumped because: bad pte<br>
&gt; [ =A0 71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma: =A0=
 =A0 =A0 =A0 =A0(null) mapping:ffff880032006970 index:f<br>
&gt; [ =A0 71.362181] vma-&gt;vm_ops-&gt;fault: shmem_fault+0x0/0x70<br>
&gt; [ =A0 71.362185] vma-&gt;vm_file-&gt;f_op-&gt;mmap: shmem_mmap+0x0/0x3=
0<br>
&gt; [ =A0 71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.362197] =A000007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59=
 0000000000000000<br>
&gt; [ =A0 71.362206] =A0ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b=
 00000000000005a7<br>
&gt; [ =A0 71.362212] =A00720072007200720 ffff880030ff3c58 00007fb105d3a000=
 ffff8800069539d0<br>
&gt; [ =A0 71.362221] Call Trace:<br>
&gt; [ =A0 71.362225] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.362230] =A0[&lt;ffffffff81199a5b&gt;] print_bad_pte+0x1bb/0x2=
80<br>
&gt; [ =A0 71.362241] =A0[&lt;ffffffff8119ad68&gt;] unmap_single_vma+0x8c8/=
0x910<br>
&gt; [ =A0 71.362247] =A0[&lt;ffffffff81040a69&gt;] ? xen_pte_unlock+0x9/0x=
10<br>
&gt; [ =A0 71.362254] =A0[&lt;ffffffff8119adfc&gt;] unmap_vmas+0x4c/0xa0<br=
>
&gt; [ =A0 71.362258] =A0[&lt;ffffffff811a2920&gt;] exit_mmap+0x90/0x160<br=
>
&gt; [ =A0 71.362263] =A0[&lt;ffffffff816f5f13&gt;] ? _raw_spin_lock_irqsav=
e+0x13/0x60<br>
&gt; [ =A0 71.362270] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.362276] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.362282] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.362287] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.362292] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.362297] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.364639] BUG: Bad page state in process iscsid =A0pfn:39b25<br=
>
&gt; [ =A0 71.364651] page:ffffea0000c9f018 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1cb<br>
&gt; [ =A0 71.364656] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.364663] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.364711] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.364718] =A0ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.364724] =A00000000000000001 ffffea0000c9f018 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.364733] Call Trace:<br>
&gt; [ =A0 71.364739] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.364748] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.364753] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.364760] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.364768] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.364773] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.364780] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.364785] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.364791] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.364798] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.364803] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.364810] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.364815] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.364820] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.364825] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.364830] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.364835] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.364842] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.364848] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.364854] BUG: Bad page state in process iscsid =A0pfn:39b24<br=
>
&gt; [ =A0 71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1ca<br>
&gt; [ =A0 71.364863] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.364871] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.364913] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.364919] =A0ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.364925] =A00000000000000001 ffffea0000c9efe0 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.364933] Call Trace:<br>
&gt; [ =A0 71.364938] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.364945] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.364950] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.364955] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.364960] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.364965] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.364970] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.364975] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.364982] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.364987] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.364994] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.364999] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.365004] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.365009] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.365015] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.365020] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.365025] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.365030] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.365035] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.365039] BUG: Bad page state in process iscsid =A0pfn:39b23<br=
>
&gt; [ =A0 71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1c9<br>
&gt; [ =A0 71.365047] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.365074] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.365109] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.365115] =A0ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.365121] =A00000000000000001 ffffea0000c9efa8 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.365127] Call Trace:<br>
&gt; [ =A0 71.365131] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.365136] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.365141] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.365146] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.365151] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.365155] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.365160] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.365165] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.365170] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.365175] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.365180] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.562344] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.562349] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.562354] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.562359] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.562364] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.562369] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.562379] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.562387] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.562391] BUG: Bad page state in process iscsid =A0pfn:39b22<br=
>
&gt; [ =A0 71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1c8<br>
&gt; [ =A0 71.562399] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.562413] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.562461] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.562467] =A0ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.562476] =A00000000000000001 ffffea0000c9ef70 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.562486] Call Trace:<br>
&gt; [ =A0 71.562490] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.562495] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.562500] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.562515] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.562520] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.562525] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.562535] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.562544] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.562550] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.562555] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.562561] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.562568] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.562579] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.562584] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.562594] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.562603] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.562608] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.562613] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.562617] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt;<br>
&gt; Sent SIGKILL to all processes<br>
&gt;<br>
&gt; Requesting system poweroff<br>
&gt; [ =A0 73.375423] reboot: System halted<br>
</div></div></blockquote></div><br><br>Nice, I will also look at this. I have not tested the above mentioned patch with migration though.</div><div class="gmail_extra"><br>-- <br>Elena
</div></div>

--001a113330b2d35e8804f36ee2a8--


--===============5251542664070070604==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5251542664070070604==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 03:18:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 03:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJDxk-0003Jf-IZ; Fri, 28 Feb 2014 03:18:12 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <ufimtseva@gmail.com>) id 1WJDxh-0003Ja-Gy
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 03:18:10 +0000
Received: from [85.158.139.211:46883] by server-14.bemta-5.messagelabs.com id
	C4/BE-27598-0FFFF035; Fri, 28 Feb 2014 03:18:08 +0000
X-Env-Sender: ufimtseva@gmail.com
X-Msg-Ref: server-13.tower-206.messagelabs.com!1393557484!6771791!1
X-Originating-IP: [209.85.216.175]
X-SpamReason: No, hits=1.7 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_20_30, HTML_MESSAGE, ML_RADAR_SPEW_LINKS_14, RCVD_BY_IP,
	spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3056 invoked from network); 28 Feb 2014 03:18:05 -0000
Received: from mail-qc0-f175.google.com (HELO mail-qc0-f175.google.com)
	(209.85.216.175)
	by server-13.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 03:18:05 -0000
Received: by mail-qc0-f175.google.com with SMTP id e16so145661qcx.34
	for <xen-devel@lists.xenproject.org>;
	Thu, 27 Feb 2014 19:18:04 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:in-reply-to:references:date:message-id:subject:from:to
	:cc:content-type;
	bh=RhSjURpKnJrAuLM6juMpd4lbAXaPEDGyqvZjSr0lpqM=;
	b=jr/4sX+Sbkr4z5nlttqP7JsoJpSD4kmAKs+MFdkbN4tVvGdfpWpg0cFoXyVe8VcPna
	UmNUjAbwQDJta0Zh2xiayUgGBzK8YwfMOA4ehAt9HTXz+vt0fFDBg66A77PAD5ApXUwy
	Vl35sdg5qDRMLW7Hv21t861oBQGGUkfSNs9Hzi6/6VSDZMlyiatimj2IZIhz0cFJQBTn
	qQNGqhLXHh1HkKhM7uhGLSw/RHbs/uU6XhOm0BXwk1qGf4uFczCTsDfqmSQLJIIlLlv7
	capIkWP+NlQsqKBh6oZGNE4sySCPSfGjVuA6I6zsH4KYibG7EtwfgnnKOdnxGC+shYxd
	E3JQ==
MIME-Version: 1.0
X-Received: by 10.229.117.3 with SMTP id o3mr631174qcq.3.1393557484295; Thu,
	27 Feb 2014 19:18:04 -0800 (PST)
Received: by 10.140.101.81 with HTTP; Thu, 27 Feb 2014 19:18:04 -0800 (PST)
In-Reply-To: <CAKbGBLiqOrZtcv-ZA_WnK0KzoO4McuuaZoSe+6KTK4uUusDnkw@mail.gmail.com>
References: <20140228022532.GC7114@localhost.localdomain>
	<CAKbGBLiqOrZtcv-ZA_WnK0KzoO4McuuaZoSe+6KTK4uUusDnkw@mail.gmail.com>
Date: Thu, 27 Feb 2014 22:18:04 -0500
Message-ID: <CAEr7rXhBn57V8OEp0x6D=0TBca688xeoa5d_czTRDcxLN7ef_w@mail.gmail.com>
From: Elena Ufimtseva <ufimtseva@gmail.com>
To: Steven Noonan <steven@uplinklabs.net>
Cc: David Vrabel <david.vrabel@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Mel Gorman <mgorman@suse.de>
Subject: Re: [Xen-devel] Regression introduced by "xen: properly account for
 _PAGE_NUMA during xen pte translations"
 (a9c8e4beeeb64c22b84c803747487857fe424b68)
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5251542664070070604=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5251542664070070604==
Content-Type: multipart/alternative; boundary=001a113330b2d35e8804f36ee2a8

--001a113330b2d35e8804f36ee2a8
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Feb 27, 2014 at 9:48 PM, Steven Noonan <steven@uplinklabs.net> wrote:

> For the issue fixed by a9c8e4be, it was possible to trigger it using a
> small test case. Does that same test work if you run it after
> migrating the instance?
>
>     #include <errno.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <sys/mman.h>
>
>     void die(const char *what)
>     {
>         perror(what);
>         exit(1);
>     }
>
>     int main(int arg, char **argv)
>     {
>         void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE
> | MAP_ANONYMOUS, -1, 0);
>
>         if (p == MAP_FAILED)
>                 die("mmap");
>
>         /* Tickle the page. */
>         ((char *)p)[0] = 0;
>
>         if (mprotect(p, 4096, PROT_NONE) != 0)
>                 die("mprotect");
>
>         if (mprotect(p, 4096, PROT_READ) != 0)
>                 die("mprotect");
>
>         if (munmap(p, 4096) != 0)
>                 die("munmap");
>
>         return 0;
>     }
>
> On Thu, Feb 27, 2014 at 6:25 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> >
> > This only shows up when using the Linux kernel as a 64-bit PV guest and
> > when I have migrated and I am running iscsid and I poweroff the guest.
> > Note: I can also reproduce this if I kill 'iscsid'.
> >
> > If I revert the above mentioned commit the problem disappears.
> >
> > The page flags it shows are bogus - this guest is running from RAM
> > and has no swap.
> >
> > Here is what the console says (ignore the first BUG please):
> >
> > [   42.268060] xen:grant_table: Grant tables using version 1 layout
> > [   42.268060] BUG: sleeping function called from invalid context at
> /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
> > [   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name:
> migration/0
> > [   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted
> 3.14.0-rc4upstream #1
> > [   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59
> ffff88003cc02630
> > [   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce
> ffff88003cc0dd08
> > [   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017
> ffff88003cc0dd38
> > [   42.268060] Call Trace:
> > [   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
> > [   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
> > [   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
> > [   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
> > [   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
> > [   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
> > [   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
> > [   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
> > [   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
> > [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> > [   42.268060]  [<ffffffff816f612b>] ?
> _raw_spin_unlock_irqrestore+0x1b/0x70
> > [   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
> > [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> > [   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
> > [   42.268060]  [<ffffffff810c4840>] ?
> kthread_freezable_should_stop+0x80/0x80
> > [   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
> > [   42.268060]  [<ffffffff810c4840>] ?
> kthread_freezable_should_stop+0x80/0x80
> > [   42.268060] PM: noirq restore of devices complete after 0.251 msecs
> > [   42.268645] PM: early restore of devices complete after 0.151 msecs
> >
> > #
> > # [   42.281199] switch: port 1(eth0) entered disabled state
> > [   42.282591] PM: restore of devices complete after 11.656 msecs
> > [   42.307965] switch: port 1(eth0) entered forwarding state
> > [   42.307990] switch: port 1(eth0) entered forwarding state
> >
> > #
> > #
> > # [   57.312124] switch: port 1(eth0) entered forwarding state
> > lspci
> > # lsscsi
> > # ifconfig
> > eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
> >           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:99 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:1000
> >           RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)
> >
> > lo        Link encap:Local Loopback
> >           inet addr:127.0.0.1  Mask:255.0.0.0
> >           inet6 addr: ::1/128 Scope:Host
> >           UP LOOPBACK RUNNING  MTU:65536  Metric:1
> >           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)
> >
> > switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
> >           inet addr:192.168.102.68  Bcast:192.168.102.255
>  Mask:255.255.255.0
> >           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:96 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)
> >
> > # ping 1   8.8.8.8
> > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
> > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
> > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
> > 64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
> > ^C
> > --- 8.8.8.8 ping statistics ---
> > 3 packets transmitted, 3 received, 0% packet loss, time 2130ms
> > rtt min/avg/max/mdev = 51.421/53.420/57.008/2.556 ms
> > # poweroff
> > Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '':
> '/etc/init.d/halt'
> >
> > # Usage: /etc/init.d/halt {start}
> >
> > The system is going down NOW!
> >
> > Sent SIGTERM to all processes
> > Feb 28 02:05:30 g-pvops exiting on signal 15
> >
> > [   71.195552] BUG: Bad page map in process iscsid  pte:39b22120
> pmd:06953067
> > [   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1
> mapping:ffff88003a536490 index:0x1c8
> > [   71.195576] page flags:
> 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
> > [   71.195585] page dumped because: bad pte
> > [   71.195589] addr:00007fb105d37000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:c
> > [   71.195598] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.195603] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.195613] CPU: 0 PID: 2296 Comm: iscsid Not tainted
> 3.14.0-rc4upstream #1
> > [   71.195618]  00007fb105d37000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.195627]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> ffff880030ff3c58
> > [   71.195634]  ffffffff8117f6e1 ffff880030ff3c58 00007fb105d37000
> ffff8800069539b8
> > [   71.195642] Call Trace:
> > [   71.195649]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.195657]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.195663]  [<ffffffff8117f6e1>] ? activate_page+0xb1/0xe0
> > [   71.195669]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.195676]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.195680]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.195688]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.195694]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.195702]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.195707]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.195712]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.195719]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.195724]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.195732]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.195736] Disabling lock debugging due to kernel taint
> > [   71.195740] BUG: Bad page map in process iscsid  pte:39b23120
> pmd:06953067
> > [   71.195744] page:ffffea0000c9efa8 count:2 mapcount:-1
> mapping:ffff88003a536490 index:0x1c9
> > [   71.195751] page flags:
> 0x100000000080038(uptodate|dirty|lru|swapbacked)
> > [   71.195759] page dumped because: bad pte
> > [   71.195764] addr:00007fb105d38000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:d
> > [   71.195770] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.195774] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.195778] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.195783]  00007fb105d38000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.195791]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> 0000000000000575
> > [   71.195798]  0720072007200720 ffff880030ff3c58 00007fb105d38000
> ffff8800069539c0
> > [   71.195806] Call Trace:
> > [   71.195810]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.195816]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.195820]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.195827]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.195832]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.195839]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.195843]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.195849]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.195854]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.195858]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.195863]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.195870]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.195875]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.195880] BUG: Bad page map in process iscsid  pte:39b24120
> pmd:06953067
> > [   71.195884] page:ffffea0000c9efe0 count:2 mapcount:-1
> mapping:ffff88003a536490 index:0x1ca
> > [   71.195888] page flags:
> 0x100000000080038(uptodate|dirty|lru|swapbacked)
> > [   71.195895] page dumped because: bad pte
> > [   71.195898] addr:00007fb105d39000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:e
> > [   71.195904] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.195908] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.195912] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.195916]  00007fb105d39000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.195923]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> 000000000000058e
> > [   71.195929]  0720072007200720 ffff880030ff3c58 00007fb105d39000
> ffff8800069539c8
> > [   71.195935] Call Trace:
> > [   71.195939]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.195944]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.195948]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.195953]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.195958]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.195963]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.195967]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.195973]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.195977]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.195982]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.195987]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.195992]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.195997]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.196001] BUG: Bad page map in process iscsid  pte:39b25120
> pmd:06953067
> > [   71.196005] page:ffffea0000c9f018 count:2 mapcount:-1
> mapping:ffff88003a536490 index:0x1cb
> > [   71.362165] page flags:
> 0x100000000080038(uptodate|dirty|lru|swapbacked)
> > [   71.362172] page dumped because: bad pte
> > [   71.362175] addr:00007fb105d3a000 vm_flags:00000070 anon_vma:
>  (null) mapping:ffff880032006970 index:f
> > [   71.362181] vma->vm_ops->fault: shmem_fault+0x0/0x70
> > [   71.362185] vma->vm_file->f_op->mmap: shmem_mmap+0x0/0x30
> > [   71.362193] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.362197]  00007fb105d3a000 ffff880030ff3c28 ffffffff816f0e59
> 0000000000000000
> > [   71.362206]  ffff8800302f8378 ffff880030ff3c78 ffffffff81199a5b
> 00000000000005a7
> > [   71.362212]  0720072007200720 ffff880030ff3c58 00007fb105d3a000
> ffff8800069539d0
> > [   71.362221] Call Trace:
> > [   71.362225]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.362230]  [<ffffffff81199a5b>] print_bad_pte+0x1bb/0x280
> > [   71.362241]  [<ffffffff8119ad68>] unmap_single_vma+0x8c8/0x910
> > [   71.362247]  [<ffffffff81040a69>] ? xen_pte_unlock+0x9/0x10
> > [   71.362254]  [<ffffffff8119adfc>] unmap_vmas+0x4c/0xa0
> > [   71.362258]  [<ffffffff811a2920>] exit_mmap+0x90/0x160
> > [   71.362263]  [<ffffffff816f5f13>] ? _raw_spin_lock_irqsave+0x13/0x60
> > [   71.362270]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.362276]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.362282]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.362287]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.362292]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.362297]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.364639] BUG: Bad page state in process iscsid  pfn:39b25
> > [   71.364651] page:ffffea0000c9f018 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1cb
> > [   71.364656] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.364663] page dumped because: non-NULL mapping
> > [   71.364666] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.364706] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.364711]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.364718]  ffffea0000c9f018 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.364724]  0000000000000001 ffffea0000c9f018 0000000000000000
> ffff880030ff3c48
> > [   71.364733] Call Trace:
> > [   71.364739]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.364748]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.364753]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.364760]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.364768]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.364773]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.364780]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.364785]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.364791]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.364798]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.364803]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.364810]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.364815]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.364820]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.364825]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.364830]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.364835]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.364842]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.364848]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.364854] BUG: Bad page state in process iscsid  pfn:39b24
> > [   71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1ca
> > [   71.364863] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.364871] page dumped because: non-NULL mapping
> > [   71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.364913]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.364919]  ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.364925]  0000000000000001 ffffea0000c9efe0 0000000000000000
> ffff880030ff3c48
> > [   71.364933] Call Trace:
> > [   71.364938]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.364945]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.364950]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.364955]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.364960]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.364965]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.364970]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.364975]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.364982]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.364987]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.364994]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.364999]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.365004]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.365009]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.365015]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.365020]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.365025]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.365030]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.365035]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.365039] BUG: Bad page state in process iscsid  pfn:39b23
> > [   71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1c9
> > [   71.365047] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.365074] page dumped because: non-NULL mapping
> > [   71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.365109]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.365115]  ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.365121]  0000000000000001 ffffea0000c9efa8 0000000000000000
> ffff880030ff3c48
> > [   71.365127] Call Trace:
> > [   71.365131]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.365136]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.365141]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.365146]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.365151]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.365155]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.365160]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.365165]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.365170]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.365175]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.365180]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.562344]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.562349]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.562354]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.562359]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.562364]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.562369]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.562379]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.562387]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> > [   71.562391] BUG: Bad page state in process iscsid  pfn:39b22
> > [   71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1
> mapping:ffff88003a536490 index:0x1c8
> > [   71.562399] page flags: 0x100000000080018(uptodate|dirty|swapbacked)
> > [   71.562413] page dumped because: non-NULL mapping
> > [   71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn
> iscsi_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi
> scsi_mod libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm
> drm_kms_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt
> sysfillrect syscopyarea xen_kbdfront xenfs xen_privcmd
> > [   71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G    B
>  3.14.0-rc4upstream #1
> > [   71.562461]  ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59
> ffffffff8197f00f
> > [   71.562467]  ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0
> ffff880030ff3c48
> > [   71.562476]  0000000000000001 ffffea0000c9ef70 0000000000000000
> ffff880030ff3c48
> > [   71.562486] Call Trace:
> > [   71.562490]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   71.562495]  [<ffffffff811751a0>] bad_page+0xd0/0x120
> > [   71.562500]  [<ffffffff81175335>] free_pages_prepare+0x145/0x160
> > [   71.562515]  [<ffffffff81041642>] ? xen_pte_val+0x32/0x40
> > [   71.562520]  [<ffffffff8117993b>] free_hot_cold_page+0x3b/0x150
> > [   71.562525]  [<ffffffff81179fc7>] free_hot_cold_page_list+0x47/0xb0
> > [   71.562535]  [<ffffffff8117e3ad>] release_pages+0x7d/0x230
> > [   71.562544]  [<ffffffff811b04c4>] free_pages_and_swap_cache+0xb4/0xe0
> > [   71.562550]  [<ffffffff81099507>] ? flush_tlb_mm_range+0x57/0x1b0
> > [   71.562555]  [<ffffffff81199cb7>] tlb_flush_mmu+0x57/0xa0
> > [   71.562561]  [<ffffffff81199d0f>] tlb_finish_mmu+0xf/0x40
> > [   71.562568]  [<ffffffff811a2947>] exit_mmap+0xb7/0x160
> > [   71.562579]  [<ffffffff816f5f12>] ? _raw_spin_lock_irqsave+0x12/0x60
> > [   71.562584]  [<ffffffff8109d672>] mmput+0x52/0x100
> > [   71.562594]  [<ffffffff810a186c>] do_exit+0x29c/0xb90
> > [   71.562603]  [<ffffffff810a3349>] ? SyS_wait4+0xa9/0xf0
> > [   71.562608]  [<ffffffff810a2271>] do_group_exit+0x51/0x130
> > [   71.562613]  [<ffffffff810a2362>] SyS_exit_group+0x12/0x20
> > [   71.562617]  [<ffffffff816fe7f9>] system_call_fastpath+0x16/0x1b
> >
> > Sent SIGKILL to all processes
> >
> > Requesting system poweroff
> > [   73.375423] reboot: System halted
>


Nice, I will also look at this. I have not tested the above-mentioned patch
with migration, though.

-- 
Elena


On Thu, Feb 27, 2014 at 9:48 PM, Steven Noonan <steven@uplinklabs.net> wrote:
> For the issue fixed by a9c8e4be, it was possible to trigger it using a
> small test case. Does that same test work if you run it after
> migrating the instance?
>
>     #include <errno.h>
>     #include <stdio.h>
>     #include <stdlib.h>
>     #include <sys/mman.h>
>
>     void die(const char *what)
>     {
>         perror(what);
>         exit(1);
>     }
>
>     int main(int argc, char **argv)
>     {
>         void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
>                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>
>         if (p == MAP_FAILED)
>                 die("mmap");
>
>         /* Tickle the page. */
>         ((char *)p)[0] = 0;
>
>         if (mprotect(p, 4096, PROT_NONE) != 0)
>                 die("mprotect");
>
>         if (mprotect(p, 4096, PROT_READ) != 0)
>                 die("mprotect");
>
>         if (munmap(p, 4096) != 0)
>                 die("munmap");
>
>         return 0;
>     }
>
> On Thu, Feb 27, 2014 at 6:25 PM, Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com> wrote:
> >
> > This only shows up when using the Linux kernel as a 64-bit PV guest and
> > when I have migrated and I am running iscsid and I poweroff the guest.
> > Note: I can also reproduce this if I kill 'iscsid'.
> >
> > If I revert the above mentioned commit the problem disappears.
> >
> > The page flags it shows are bogus - this guest is running from RAM
> > and has no swap.
> >
> > Here is what the console says (ignore the first BUG please):
> >
> > [   42.268060] xen:grant_table: Grant tables using version 1 layout
> > [   42.268060] BUG: sleeping function called from invalid context at /home/konrad/ssd/konrad/linux/kernel/locking/mutex.c:96
> > [   42.268060] in_atomic(): 1, irqs_disabled(): 1, pid: 9, name: migration/0
> > [   42.268060] CPU: 0 PID: 9 Comm: migration/0 Not tainted 3.14.0-rc4upstream #1
> > [   42.268060]  0000000000000002 ffff88003cc0dcd8 ffffffff816f0e59 ffff88003cc02630
> > [   42.268060]  ffffffff81c6df80 ffff88003cc0dce8 ffffffff810cdfce ffff88003cc0dd08
> > [   42.268060]  ffffffff816f3bdf ffff88003cc0dd08 0000000000000017 ffff88003cc0dd38
> > [   42.268060] Call Trace:
> > [   42.268060]  [<ffffffff816f0e59>] dump_stack+0x51/0x6b
> > [   42.268060]  [<ffffffff810cdfce>] __might_sleep+0xce/0xf0
> > [   42.268060]  [<ffffffff816f3bdf>] mutex_lock+0x1f/0x40
> > [   42.268060]  [<ffffffff813f49fb>] rebind_evtchn_irq+0x3b/0xb0
> > [   42.268060]  [<ffffffff81428bdc>] xen_console_resume+0x5c/0x60
> > [   42.268060]  [<ffffffff813f3c0a>] xen_suspend+0x8a/0xb0
> > [   42.268060]  [<ffffffff811265db>] multi_cpu_stop+0xbb/0xe0
> > [   42.268060]  [<ffffffff81126520>] ? irq_cpu_stop_queue_work+0x30/0x30
> > [   42.268060]  [<ffffffff81126bfa>] cpu_stopper_thread+0x4a/0x180
> > [   42.268060]  [<ffffffff816f1a01>] ? __schedule+0x381/0x7e0
> > [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> > [   42.268060]  [<ffffffff816f612b>] ? _raw_spin_unlock_irqrestore+0x1b/0x70
> > [   42.268060]  [<ffffffff810cc058>] smpboot_thread_fn+0x148/0x1e0
> > [   42.268060]  [<ffffffff810cbf10>] ? smpboot_create_threads+0x80/0x80
> > [   42.268060]  [<ffffffff810c490e>] kthread+0xce/0xf0
> > [   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
> > [   42.268060]  [<ffffffff816fe74c>] ret_from_fork+0x7c/0xb0
> > [   42.268060]  [<ffffffff810c4840>] ? kthread_freezable_should_stop+0x80/0x80
> > [   42.268060] PM: noirq restore of devices complete after 0.251 msecs
> > [   42.268645] PM: early restore of devices complete after 0.151 msecs
> >
> > #
> > # [   42.281199] switch: port 1(eth0) entered disabled state
> > [   42.282591] PM: restore of devices complete after 11.656 msecs
> > [   42.307965] switch: port 1(eth0) entered forwarding state
> > [   42.307990] switch: port 1(eth0) entered forwarding state
> >
> > #
> > #
> > # [   57.312124] switch: port 1(eth0) entered forwarding state
> > lspci
> > # lsscsi
> > # ifconfig
> > eth0      Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
> >           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:99 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:86 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:1000
> >           RX bytes:8722 (8.5 KiB)  TX bytes:8709 (8.5 KiB)
> >
> > lo        Link encap:Local Loopback
> >           inet addr:127.0.0.1  Mask:255.0.0.0
> >           inet6 addr: ::1/128 Scope:Host
> >           UP LOOPBACK RUNNING  MTU:65536  Metric:1
> >           RX packets:4 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:260 (260.0 b)  TX bytes:260 (260.0 b)
> >
> > switch    Link encap:Ethernet  HWaddr 00:0F:4B:00:00:68
> >           inet addr:192.168.102.68  Bcast:192.168.102.255  Mask:255.255.255.0
> >           inet6 addr: fe80::20f:4bff:fe00:68/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:96 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:75 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:8506 (8.3 KiB)  TX bytes:7755 (7.5 KiB)
> >
> > # ping 1   8.8.8.8
> > PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
> > 64 bytes from 8.8.8.8: icmp_seq=1 ttl=45 time=57.0 ms
> > 64 bytes from 8.8.8.8: icmp_seq=2 ttl=45 time=51.4 ms
> > 64 bytes from 8.8.8.8: icmp_seq=3 ttl=45 time=51.8 ms
> > ^C
> > --- 8.8.8.8 ping statistics ---
> > 3 packets transmitted, 3 received, 0% packet loss, time 2130ms
> > rtt min/avg/max/mdev = 51.421/53.420/57.008/2.556 ms
> > # poweroff
> > Feb 28 02:05:30 g-pvops init: starting pid 2386, tty '': '/etc/init.d/halt'
> >
> > # Usage: /etc/init.d/halt {start}
> >
> > The system is going down NOW!
> >
> > Sent SIGTERM to all processes
> > Feb 28 02:05:30 g-pvops exiting on signal 15
> >
> > [   71.195552] BUG: Bad page map in process iscsid  pte:39b22120 pmd:06953067
> > [   71.195569] page:ffffea0000c9ef70 count:1 mapcount:-1 mapping:ffff88003a536490 index:0x1c8
> > [   71.195576] page flags: 0x100000000080078(uptodate|dirty|lru|active|swapbacked)
> > [...] (the remaining "BUG: Bad page map" / "Bad page state" traces are identical to those quoted in full at the top of this message)
&gt; [ =A0 71.364842] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.364848] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.364854] BUG: Bad page state in process iscsid =A0pfn:39b24<br=
>
&gt; [ =A0 71.364858] page:ffffea0000c9efe0 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1ca<br>
&gt; [ =A0 71.364863] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.364871] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.364874] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.364906] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.364913] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.364919] =A0ffffea0000c9efe0 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.364925] =A00000000000000001 ffffea0000c9efe0 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.364933] Call Trace:<br>
&gt; [ =A0 71.364938] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.364945] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.364950] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.364955] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.364960] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.364965] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.364970] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.364975] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.364982] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.364987] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.364994] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.364999] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.365004] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.365009] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.365015] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.365020] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.365025] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.365030] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.365035] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.365039] BUG: Bad page state in process iscsid =A0pfn:39b23<br=
>
&gt; [ =A0 71.365043] page:ffffea0000c9efa8 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1c9<br>
&gt; [ =A0 71.365047] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.365074] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.365077] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.365104] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.365109] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.365115] =A0ffffea0000c9efa8 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.365121] =A00000000000000001 ffffea0000c9efa8 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.365127] Call Trace:<br>
&gt; [ =A0 71.365131] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.365136] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.365141] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.365146] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.365151] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.365155] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.365160] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.365165] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.365170] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.365175] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.365180] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.562344] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.562349] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.562354] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.562359] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.562364] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.562369] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.562379] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.562387] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt; [ =A0 71.562391] BUG: Bad page state in process iscsid =A0pfn:39b22<br=
>
&gt; [ =A0 71.562395] page:ffffea0000c9ef70 count:0 mapcount:-1 mapping:fff=
f88003a536490 index:0x1c8<br>
&gt; [ =A0 71.562399] page flags: 0x100000000080018(uptodate|dirty|swapback=
ed)<br>
&gt; [ =A0 71.562413] page dumped because: non-NULL mapping<br>
&gt; [ =A0 71.562420] Modules linked in: dm_multipath dm_mod xen_evtchn isc=
si_boot_sysfs iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi scsi_mod=
 libcrc32c crc32c fbcon tileblit font radeon bitblit softcursor ttm drm_kms=
_helper xen_blkfront xen_netfront xen_fbfront fb_sys_fops sysimgblt sysfill=
rect syscopyarea xen_kbdfront xenfs xen_privcmd<br>

&gt; [ =A0 71.562456] CPU: 0 PID: 2296 Comm: iscsid Tainted: G =A0 =A0B =A0=
 =A0 =A0 =A03.14.0-rc4upstream #1<br>
&gt; [ =A0 71.562461] =A0ffffffff8197f00f ffff880030ff3bb8 ffffffff816f0e59=
 ffffffff8197f00f<br>
&gt; [ =A0 71.562467] =A0ffffea0000c9ef70 ffff880030ff3be8 ffffffff811751a0=
 ffff880030ff3c48<br>
&gt; [ =A0 71.562476] =A00000000000000001 ffffea0000c9ef70 0000000000000000=
 ffff880030ff3c48<br>
&gt; [ =A0 71.562486] Call Trace:<br>
&gt; [ =A0 71.562490] =A0[&lt;ffffffff816f0e59&gt;] dump_stack+0x51/0x6b<br=
>
&gt; [ =A0 71.562495] =A0[&lt;ffffffff811751a0&gt;] bad_page+0xd0/0x120<br>
&gt; [ =A0 71.562500] =A0[&lt;ffffffff81175335&gt;] free_pages_prepare+0x14=
5/0x160<br>
&gt; [ =A0 71.562515] =A0[&lt;ffffffff81041642&gt;] ? xen_pte_val+0x32/0x40=
<br>
&gt; [ =A0 71.562520] =A0[&lt;ffffffff8117993b&gt;] free_hot_cold_page+0x3b=
/0x150<br>
&gt; [ =A0 71.562525] =A0[&lt;ffffffff81179fc7&gt;] free_hot_cold_page_list=
+0x47/0xb0<br>
&gt; [ =A0 71.562535] =A0[&lt;ffffffff8117e3ad&gt;] release_pages+0x7d/0x23=
0<br>
&gt; [ =A0 71.562544] =A0[&lt;ffffffff811b04c4&gt;] free_pages_and_swap_cac=
he+0xb4/0xe0<br>
&gt; [ =A0 71.562550] =A0[&lt;ffffffff81099507&gt;] ? flush_tlb_mm_range+0x=
57/0x1b0<br>
&gt; [ =A0 71.562555] =A0[&lt;ffffffff81199cb7&gt;] tlb_flush_mmu+0x57/0xa0=
<br>
&gt; [ =A0 71.562561] =A0[&lt;ffffffff81199d0f&gt;] tlb_finish_mmu+0xf/0x40=
<br>
&gt; [ =A0 71.562568] =A0[&lt;ffffffff811a2947&gt;] exit_mmap+0xb7/0x160<br=
>
&gt; [ =A0 71.562579] =A0[&lt;ffffffff816f5f12&gt;] ? _raw_spin_lock_irqsav=
e+0x12/0x60<br>
&gt; [ =A0 71.562584] =A0[&lt;ffffffff8109d672&gt;] mmput+0x52/0x100<br>
&gt; [ =A0 71.562594] =A0[&lt;ffffffff810a186c&gt;] do_exit+0x29c/0xb90<br>
&gt; [ =A0 71.562603] =A0[&lt;ffffffff810a3349&gt;] ? SyS_wait4+0xa9/0xf0<b=
r>
&gt; [ =A0 71.562608] =A0[&lt;ffffffff810a2271&gt;] do_group_exit+0x51/0x13=
0<br>
&gt; [ =A0 71.562613] =A0[&lt;ffffffff810a2362&gt;] SyS_exit_group+0x12/0x2=
0<br>
&gt; [ =A0 71.562617] =A0[&lt;ffffffff816fe7f9&gt;] system_call_fastpath+0x=
16/0x1b<br>
&gt;<br>
&gt; Sent SIGKILL to all processes<br>
&gt;<br>
&gt; Requesting system poweroff<br>
&gt; [ =A0 73.375423] reboot: System halted<br>
Nice, I will also look at this. I have not tested the above-mentioned patch
with migration though.

-- 
Elena

--001a113330b2d35e8804f36ee2a8--


--===============5251542664070070604==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5251542664070070604==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 03:27:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 03:27:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJE6f-0003kH-0x; Fri, 28 Feb 2014 03:27:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <qin.l.li@oracle.com>) id 1WJE6d-0003kC-W9
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 03:27:24 +0000
Received: from [193.109.254.147:14940] by server-10.bemta-14.messagelabs.com
	id 5E/D7-10711-B1200135; Fri, 28 Feb 2014 03:27:23 +0000
X-Env-Sender: qin.l.li@oracle.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393558041!7352543!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2632 invoked from network); 28 Feb 2014 03:27:22 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 03:27:22 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1S3RFpA005439
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 03:27:16 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S3REFR025790
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 28 Feb 2014 03:27:14 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1S3RDm4025782; Fri, 28 Feb 2014 03:27:14 GMT
Received: from [10.175.21.240] (/10.175.21.240)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Thu, 27 Feb 2014 19:27:13 -0800
Message-ID: <53100204.1050801@oracle.com>
Date: Fri, 28 Feb 2014 11:27:00 +0800
From: Qin Li <qin.l.li@oracle.com>
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
References: <52D6566C.5050302@oracle.com>
	<alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
	<530DD57A.8010709@oracle.com>
	<alpine.DEB.2.02.1402271253000.31489@kaball.uk.xensource.com>
In-Reply-To: <alpine.DEB.2.02.1402271253000.31489@kaball.uk.xensource.com>
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: xen-devel@lists.xensource.com, Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 2014/2/27 21:03, Stefano Stabellini wrote:
>> > I guess a Linux guest will do the same thing: rdtsc() fetches the
>> > current timestamp from the currently running vCPU, so the TSC
>> > out-of-sync issue is still there.
>> > It seems to me pvclock finally fixes the time drift issue just
>> > because of the workaround enforced as above, right?
> First you should know that TSC is not always guaranteed to be
> synchronized across multiple processors, especially on older systems.
> On "TSC-safe" systems, Xen would export a consistent TSC to guests, by
> setting the vtsc offset and scale appropriately.
Stefano,

Is there an easy way to check whether the system is "TSC-safe"? Do you
mean the "X86_FEATURE_TSC_RELIABLE" bit in "boot_cpu_data.x86_capability"?

Thanks,
Michael

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
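[The "TSC-safe" question above can be approximated from userspace by
inspecting the CPU flags Linux exports in /proc/cpuinfo. The sketch below
is an illustration under stated assumptions, not Xen's or Linux's own
check: the flag names "constant_tsc", "nonstop_tsc" and "tsc_reliable" are
the usual /proc/cpuinfo spellings, while in-kernel code would instead test
boot_cpu_has(X86_FEATURE_TSC_RELIABLE).]

```python
# Hedged sketch: approximate an "is the TSC safe?" check from userspace by
# parsing /proc/cpuinfo flags. An invariant TSC is suggested by
# constant_tsc + nonstop_tsc; tsc_reliable means the kernel was told to
# trust the TSC outright. This is a heuristic, not the kernel's own test.
import os

def tsc_looks_safe(cpuinfo_text: str) -> bool:
    """Return True if every CPU's 'flags' line advertises an invariant TSC."""
    flag_sets = [set(line.split(":", 1)[1].split())
                 for line in cpuinfo_text.splitlines()
                 if line.startswith("flags")]
    if not flag_sets:
        return False  # no flags lines at all: assume unsafe
    invariant = {"constant_tsc", "nonstop_tsc"}
    return all(invariant <= flags or "tsc_reliable" in flags
               for flags in flag_sets)

if os.path.exists("/proc/cpuinfo"):  # only meaningful on Linux
    with open("/proc/cpuinfo") as f:
        print("TSC looks safe:", tsc_looks_safe(f.read()))
```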


From xen-devel-bounces@lists.xen.org Fri Feb 28 03:30:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 03:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJE9p-0003zZ-Mm; Fri, 28 Feb 2014 03:30:41 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <mswilson@gmail.com>) id 1WJE9o-0003zC-5s
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 03:30:40 +0000
Received: from [85.158.139.211:36632] by server-1.bemta-5.messagelabs.com id
	67/E6-12859-FD200135; Fri, 28 Feb 2014 03:30:39 +0000
X-Env-Sender: mswilson@gmail.com
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393558236!6792255!1
X-Originating-IP: [209.85.220.52]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4083 invoked from network); 28 Feb 2014 03:30:38 -0000
Received: from mail-pa0-f52.google.com (HELO mail-pa0-f52.google.com)
	(209.85.220.52)
	by server-12.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 03:30:38 -0000
Received: by mail-pa0-f52.google.com with SMTP id fb1so171620pad.11
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 19:30:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=sender:date:from:to:cc:subject:message-id:references:mime-version
	:content-type:content-disposition:in-reply-to:user-agent;
	bh=4W3LRFClq+pgMHVLJ5IeGtCPEn0ZfQtSrTsAR0Scqx0=;
	b=XAYrAk5zhyVXgpEr4b0lC6dDTeKtU/DDNi50Q2O8FUeGozTl/iFhT/hH3ElPU8ELln
	Yq4Qr3LJpfidkW3SWV+FEee7K2LRp3vhHz/34ANz4knb7hK/14Jy1dcVrnRf2R6uPhkQ
	rWfV9XmAE8WguwvsvRNZ9pogGoaZnYlqUFispinycT6XPN/qrSn/WZdA7xm0Mc9ba5Y6
	8bIRKebUYmI4RA1kl9aD1Enb2yz7JhYLcIlj+DA2rCB+E1s2DkTy1sxud7tV0sGwtdeO
	CAYu3XyevKGN3sz07Y01hPft+Sr512hnIQRix4+KPs9UCnIFltq0nYhxtm5gybUlxdgw
	9s8A==
X-Received: by 10.68.133.138 with SMTP id pc10mr763605pbb.98.1393558236316;
	Thu, 27 Feb 2014 19:30:36 -0800 (PST)
Received: from mswilson@gmail.com (54-240-196-185.amazon.com. [54.240.196.185])
	by mx.google.com with ESMTPSA id yd4sm976490pbc.13.2014.02.27.19.30.32
	for <multiple recipients>
	(version=TLSv1 cipher=RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 19:30:35 -0800 (PST)
Received: by mswilson@gmail.com (sSMTP sendmail emulation);
	Thu, 27 Feb 2014 19:30:31 -0800
Date: Thu, 27 Feb 2014 19:30:31 -0800
From: Matt Wilson <msw@linux.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140228033029.GA14114@u109add4315675089e695.ant.amazon.com>
References: <530E0F7E02000091000B8C20@prv-mh.provo.novell.com>
	<1393555348.20365.11.camel@hastur.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393555348.20365.11.camel@hastur.hellion.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
Cc: Charles Arnold <carnold@suse.com>, Matt Wilson <msw@amazon.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] Support for btrfs in pv-grub
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 02:42:28AM +0000, Ian Campbell wrote:
> On Wed, 2014-02-26 at 15:59 -0700, Charles Arnold wrote:
> > Code to support btrfs in pv-grub was added almost two years ago (c/s 25154).
> 
> Ccing the author.
> 
> > Building the code doesn't seem to be enabled in stubdom/grub/Makefile.
> > Was this functionality ever intended to work?
> 
> One would like to assume so!
> 
> > Does anyone know the status of this code?
> 
> Matt?
> 
> Looks like Makefile.am was patched but not Makefile.in, and I don't
> think we run automake as part of the stubdom build process.

True, but we also don't use the Makefiles in grub-upstream to build
GRUB. We use xen/stubdom/grub/Makefile, which also isn't patched to
build the btrfs code.

Unfortunately, when I added it to grub/Makefile, it didn't build with a
modern compiler.

Do you think we should revert the patch that adds btrfs support?

--msw

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
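
[The Makefile.am/Makefile.in mismatch discussed above is easy to show in
miniature: automake regenerates Makefile.in from Makefile.am, so a patch
that touches only Makefile.am never reaches a build that consumes a
pre-generated Makefile.in. All file and variable names in this sketch are
hypothetical, not the actual Xen or GRUB tree layout.]

```python
# Illustrative sketch of the stale-Makefile.in failure mode: the patched
# Makefile.am references the new source file, but the pre-generated
# Makefile.in (which the build actually consumes) was never regenerated.
# File and variable names here are hypothetical.
import pathlib
import tempfile

def references_source(makefile_text: str, source_file: str) -> bool:
    """Crude check: does the makefile mention the given source file?"""
    return source_file in makefile_text

with tempfile.TemporaryDirectory() as d:
    am = pathlib.Path(d, "Makefile.am")
    inn = pathlib.Path(d, "Makefile.in")
    am.write_text("libfsys_a_SOURCES += fsys_btrfs.c\n")        # patched
    inn.write_text("# generated before the patch, no btrfs\n")  # stale
    print("Makefile.am wired in:", references_source(am.read_text(), "fsys_btrfs.c"))
    print("Makefile.in wired in:", references_source(inn.read_text(), "fsys_btrfs.c"))
```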


From xen-devel-bounces@lists.xen.org Fri Feb 28 03:44:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 03:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJEN6-0004XR-5O; Fri, 28 Feb 2014 03:44:24 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJEN5-0004XM-8R
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 03:44:23 +0000
Received: from [85.158.139.211:59309] by server-3.bemta-5.messagelabs.com id
	47/B5-13671-61600135; Fri, 28 Feb 2014 03:44:22 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393559060!6762887!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32172 invoked from network); 28 Feb 2014 03:44:21 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 03:44:21 -0000
X-IronPort-AV: E=Sophos;i="4.97,559,1389744000"; d="scan'208";a="104891952"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 03:44:19 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Thu, 27 Feb 2014 22:44:18 -0500
Message-ID: <1393559055.27819.2.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Ian Jackson <Ian.Jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 03:44:15 +0000
In-Reply-To: <21263.21387.343462.630238@mariner.uk.xensource.com>
References: <osstest-25315-mainreport@xen.org>
	<21263.21387.343462.630238@mariner.uk.xensource.com>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 25315: regressions - trouble:
 blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-27 at 15:02 +0000, Ian Jackson wrote:
> xen.org writes ("[xen-unstable test] 25315: regressions - trouble: blocked/broken/fail/pass"):
> > flight 25315 xen-unstable real [real]
> > http://www.chiark.greenend.org.uk/~xensrcts/logs/25315/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >  build-amd64-oldkern           4 xen-build            fail REGR. vs. 25275
> >  build-i386                    4 xen-build            fail REGR. vs. 25281
> 
> Our network infrastructure problems have become intolerable.

:-/

I think for this particular hg tree it would be tolerable for osstest to
clone from a local hg mirror updated via a daily cronjob. That tree is
pretty slow moving.

Once the initial clone to the mirror has completed then the incremental
updates ought to be fast enough to avoid the network timeouts.
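
The mirror-update side of that could be a small script run from a daily
cron entry, along these lines (a minimal sketch only: the tree URL, mirror
path, and update_mirror name are illustrative, not osstest's actual
configuration):

```shell
#!/bin/sh
# Keep a local hg mirror fresh. Intended to be run from cron, e.g.:
#   0 4 * * *  /usr/local/bin/update-hg-mirror
# (crontab entry and paths are hypothetical)

UPSTREAM=${UPSTREAM:-http://xenbits.xen.org/hg/xen-unstable.hg}
MIRROR=${MIRROR:-/var/cache/hg-mirror/xen-unstable.hg}
HG=${HG:-hg}    # hg binary; overridable so the logic can be tested without hg

update_mirror() {
    if [ -d "$MIRROR/.hg" ]; then
        # Incremental pull: only new changesets cross the network,
        # so it stays fast once the initial clone has completed.
        "$HG" --cwd "$MIRROR" pull -q
    else
        # One-off initial clone (slow, but done only once).
        "$HG" clone -q -U "$UPSTREAM" "$MIRROR"
    fi
}
```

osstest would then clone from the local mirror path instead of going over
the network on every flight.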

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 04:47:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 04:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJFLM-0006MW-Sp; Fri, 28 Feb 2014 04:46:40 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJFLH-0006MB-IM
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 04:46:35 +0000
Received: from [85.158.139.211:5660] by server-10.bemta-5.messagelabs.com id
	E1/11-08578-AA410135; Fri, 28 Feb 2014 04:46:34 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393562791!6784105!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 7051 invoked from network); 28 Feb 2014 04:46:32 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 04:46:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,559,1389744000"; d="scan'208";a="104900294"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 04:46:30 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Thu, 27 Feb 2014 23:46:30 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJFLC-0006mG-06;
	Fri, 28 Feb 2014 04:46:30 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJFLB-0003iU-72;
	Fri, 28 Feb 2014 04:46:29 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25322-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 04:46:29 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25322: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25322 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25322/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-multivcpu 17 guest-start.2             fail REGR. vs. 25281
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-sedf-pin 11 guest-saverestore         fail REGR. vs. 25281

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass

version targeted for testing:
 xen                  7bedbbb5c31ec7d7e653b4fc606c9871661d5e89
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Tamas K Lengyel <tamas.lengyel@zentific.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 fail    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 448 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 08:49:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 08:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJ7w-00061r-8Y; Fri, 28 Feb 2014 08:49:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJJ7v-00061W-EI
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 08:49:03 +0000
Received: from [193.109.254.147:23344] by server-9.bemta-14.messagelabs.com id
	68/CD-24895-E7D40135; Fri, 28 Feb 2014 08:49:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393577340!7417355!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10837 invoked from network); 28 Feb 2014 08:49:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 08:49:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,560,1389744000"; d="scan'208";a="106556545"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 08:49:00 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 03:48:59 -0500
Message-ID: <1393577337.27819.12.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Anirban Chakraborty <abchak@juniper.net>
Date: Fri, 28 Feb 2014 08:48:57 +0000
In-Reply-To: <71367575-F192-4B19-9C60-34D2D65D2A23@juniper.net>
References: <71367575-F192-4B19-9C60-34D2D65D2A23@juniper.net>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] build issue in xcp-networkd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-27 at 22:09 +0000, Anirban Chakraborty wrote:
> Hi All,
> 
> I was trying to ‘opam install xcp-networkd’ in xenserver DDK vm
> (2.6.32.43-0.4.1.xs1.8.0.847.170785xen) and encountered following
> error messages:

This list deals in the development of the core hypervisor packages. For
xapi/xcp stuff you either want xen-api@lists.xen.org or the various
www.xenserver.org lists/forums etc.

Thanks,
Ian,



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 08:49:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 08:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJ7w-00061r-8Y; Fri, 28 Feb 2014 08:49:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJJ7v-00061W-EI
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 08:49:03 +0000
Received: from [193.109.254.147:23344] by server-9.bemta-14.messagelabs.com id
	68/CD-24895-E7D40135; Fri, 28 Feb 2014 08:49:02 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393577340!7417355!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10837 invoked from network); 28 Feb 2014 08:49:02 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 08:49:02 -0000
X-IronPort-AV: E=Sophos;i="4.97,560,1389744000"; d="scan'208";a="106556545"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 08:49:00 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 03:48:59 -0500
Message-ID: <1393577337.27819.12.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Anirban Chakraborty <abchak@juniper.net>
Date: Fri, 28 Feb 2014 08:48:57 +0000
In-Reply-To: <71367575-F192-4B19-9C60-34D2D65D2A23@juniper.net>
References: <71367575-F192-4B19-9C60-34D2D65D2A23@juniper.net>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] build issue in xcp-networkd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, 2014-02-27 at 22:09 +0000, Anirban Chakraborty wrote:
> Hi All,
> 
> I was trying to ‘opam install xcp-networkd’ in xenserver DDK vm
> (2.6.32.43-0.4.1.xs1.8.0.847.170785xen) and encountered following
> error messages:

This list deals with the development of the core hypervisor packages. For
xapi/xcp stuff you either want xen-api@lists.xen.org or the various
www.xenserver.org lists/forums etc.

Thanks,
Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 09:06:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJOO-0006tw-SG; Fri, 28 Feb 2014 09:06:04 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJJON-0006tq-RD
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 09:06:04 +0000
Received: from [193.109.254.147:16526] by server-14.bemta-14.messagelabs.com
	id 21/19-29228-B7150135; Fri, 28 Feb 2014 09:06:03 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393578362!7426564!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18387 invoked from network); 28 Feb 2014 09:06:02 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-14.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 09:06:02 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 09:06:12 +0000
Message-Id: <53105F970200007800120205@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 09:06:15 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Don Slutz" <dslutz@verizon.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
	<1393437647-16694-2-git-send-email-dslutz@verizon.com>
	<530F116F020000780011FC45@nat28.tlf.novell.com>
	<530FD0CA.5080906@terremark.com>
In-Reply-To: <530FD0CA.5080906@terremark.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.02.14 at 00:56, Don Slutz <dslutz@verizon.com> wrote:
> On 02/27/14 04:20, Jan Beulich wrote:
>>>>> On 26.02.14 at 19:00, Don Slutz <dslutz@verizon.com> wrote:
>>> Currently, in 32-bit mode the routine hpet_set_timer() will convert a
>>> time in the past to a time in the future.  This is done by the uint32_t
>>> cast of diff.
>>>
>>> Even without this issue, hpet_tick_to_ns() does not support past
>>> times.
>>>
>>> Real hardware does not support past times.
>>>
>>> So just do the same thing in 32 bit mode as 64 bit mode.
>> While the change looks valid at first glance, what I'm missing
>> is an explanation of how the problem that the introduction of this
>> fixed (c/s 13495:e2539ab3580a "[HVM] Fix slow wallclock in x64
>> Vista") is now being taken care of (or why this is of no concern).
>> That's pretty relevant considering for how long this code has been
>> there without causing (known) problems to anyone.
> 
> Ok, digging around (the git version):
> 
> commit f545359b1c54f59be9d7c27112a68c51c45b06b5
> Date:   Thu Jan 18 18:54:28 2007 +0000
>      [HVM] Fix slow wallclock in x64 Vista. This is due to confusing a
> 
> 
> And one that changed how it worked:
> 
> commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac
> Date:   Tue Jan 8 16:20:04 2008 +0000
>     hvm: hpet: Fix overflow when converting to nanoseconds.
> 
> 
> That is when past times started being prevented, which may well have caused
> x64 Vista to have wallclock issues.
> 
> Next:
> 
> commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2
> Date:   Sat May 24 09:27:03 2008 +0100
>      hvm: Build guest timers on monotonic system time.
> 
> 
> Has a chance to do 2 things:
> 1) Make the diff < 0 very unlikely
> 2) Fixed x64 Vista wallclock issues (again)
> 
> Looking closer at hpet_tick_to_ns() and doing some math, I get:
> 
> 
>      h->stime_freq = S_TO_NS;
>      h->hpet_to_ns_scale = ((S_TO_NS * STIME_PER_HPET_TICK) << 10) / 
> h->stime_freq;
> 
> I.E.
> 
>      h->hpet_to_ns_scale = STIME_PER_HPET_TICK << 10;
> 
> And so:
> 
> #define hpet_tick_to_ns(h, tick)                        \
>      ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
>          ~0ULL : (tick) * (h)->hpet_to_ns_scale) >> 10))
> 
> 
> Is really:
> 
> #define hpet_tick_to_ns(h, tick)                        \
>      ((s_time_t)(((tick) > (h)->hpet_to_ns_limit) ?     \
>          (~0ULL >> 10) : (tick) * STIME_PER_HPET_TICK))
> 
> And if you change to using a signed multiply, most of the time you will be
> fine.  If you want a more complex version that is "safer":
> 
> #define hpet_tick_to_ns(tick)                                   \
>      ((s_time_t)(tick) >= 0 ?                                    \
>        (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK >= 0 ?   \
>         (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
>         (s_time_t)(~0ULL >> 10) :                                \
>        (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK < 0 ?    \
>         (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
>         0)
> 
> If the signed multiply overflows in the positive case then the old max is 
> returned.  Note: this can return larger values than the old max.
> 
> So I can re-work the patch to use this and still provide past times.  Which 
> path should I go with?

Did you perhaps misunderstand me? I didn't ask for the patch to be
changed. What I asked for is clarification that the issues which
previously caused this code to be the way it is are still taken care
of with your change.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 09:17:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJZH-0007Mt-Ke; Fri, 28 Feb 2014 09:17:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hanyandong@iie.ac.cn>) id 1WJJZF-0007Mn-9z
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 09:17:17 +0000
Received: from [193.109.254.147:24839] by server-7.bemta-14.messagelabs.com id
	B3/7E-23424-C1450135; Fri, 28 Feb 2014 09:17:16 +0000
X-Env-Sender: hanyandong@iie.ac.cn
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393579032!7428346!1
X-Originating-IP: [159.226.251.23]
X-SpamReason: No, hits=0.6 required=7.0 tests=ratty_date: Non-RFC but 
	legit format in Fri, 28 Feb 2014 17:17:10 +0800 (GMT+08:00),
	BODY_RANDOM_LONG,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10001 invoked from network); 28 Feb 2014 09:17:13 -0000
Received: from smtp23.cstnet.cn (HELO cstnet.cn) (159.226.251.23)
	by server-8.tower-27.messagelabs.com with SMTP;
	28 Feb 2014 09:17:13 -0000
Received: by ajax-webmail-app3 (Coremail) ; Fri, 28 Feb 2014 17:17:10 +0800
	(GMT+08:00)
Date: Fri, 28 Feb 2014 17:17:10 +0800 (GMT+08:00)
From: =?GBK?B?uqvR3rar?= <hanyandong@iie.ac.cn>
To: xen-devel@lists.xensource.com
Message-ID: <1b24870.6a6c.14477c87837.Coremail.hanyandong@iie.ac.cn>
MIME-Version: 1.0
X-Originating-IP: [111.200.12.97]
X-Priority: 3
X-Mailer: Coremail Webmail Server Version XT2.1.10 dev build
	20131120(24194.5778.5783) Copyright (c) 2002-2014 www.mailtech.cn
	cstnet
X-SendMailWithSms: false
X-CM-CTRLDATA: XsOY12Zvb3Rlcl9odG09Mjk4MjI6MTImZm9vdGVyX3R4dD00OTkwOjY=
X-CM-TRANSID: SQCowJAL0ukWVBBTJrNwAQ--.48380W
X-CM-SenderInfo: 5kdq5txqgr0wo6llvhldfou0/1tbiAxIJBlD7NX3naQAAsz
X-Coremail-Antispam: 1Ur529EdanIXcx71UUUUU7IcSsGvfJ3iIAIbVAYjsxI4VWxJw
	CS07vEb4IE77IF4wCS07vE1I0E4x80FVAKz4kxMIAIbVAFxVCaYxvI4VCIwcAKzIAtYxBI
	daVFxhVjvjDU=
Subject: [Xen-devel] intercept and capture fast system call of linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5935373921665633701=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5935373921665633701==
Content-Type: multipart/alternative; 
	boundary="----=_Part_95690_8647576.1393579030583"

------=_Part_95690_8647576.1393579030583
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

hi all,
I want to intercept and capture Linux's fast system calls (sysenter).
(1) I set GUEST_SYSENTER_EIP to 0xDDDDD0AE in vmx_vmexit_handler, and save the real value.


in vmx_vmexit_handler()
{
  ....

    //yandong
    if( is_hvm_domain(current->domain) )
    {
        //printk("MITCTL:is_hvm_domain\n");
        switch (current->domain->arch.hvm_domain.mitctl_op.xen_vmexit_handler_mitctl_method)
        {
            case -1: break;
            case XEN_VMEXIT_HANDLER_MITCTL_libvmi :
            {
                //printk("MITCTL:vmexit set_trap\n");
                vmx_properly_set_trap_flag(current->domain);   
                break;
            }
            default: break;

         }
    }

   return ...

}




inline void vmx_properly_set_trap_flag(struct domain *d)
{
    //set sysenter_eip
    if(d->arch.hvm_domain.mitctl_op.xen_vmexit_handler_mitctl_method != -1)
    {
        vmx_set_sysenter_msrs(d);
        //current->arch.hvm_vmx.exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
        //vmx_update_cpu_exec_control(current);
        //current->arch.hvm_vcpu.single_step = 1;
    }
    
    return;

}




/* force user supplied msr values on this guest */
inline  void vmx_set_sysenter_msrs(struct domain *d)
{
    u64 new_cs;
    u64 new_eip;
    u64 old_MSR_EIP = __vmread(GUEST_SYSENTER_EIP);
    u64 old_MSR_CS = __vmread(GUEST_SYSENTER_CS);
    if( 0xDDDDD0AE != old_MSR_EIP)
    {
        printk("MITCTL:old_MSR_EIP %lx\n",old_MSR_EIP);
        ether_set_imaginary_sysenter_eip(d, old_MSR_EIP);
        ether_set_imaginary_sysenter_cs(d, old_MSR_CS);
    }
    printk("MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP %lx %lx\n",old_MSR_EIP, old_MSR_CS);
    /* write MSR registers */

    /* default to writing old(imaginary) values to guest */
    new_cs = ether_get_imaginary_sysenter_cs(d);
    new_eip = ether_get_imaginary_sysenter_eip(d);

    if(d->arch.hvm_domain.mitctl_op.xen_vmexit_handler_mitctl_method != -1)
    {
        /* it seems that we should write user supplied
         * values instead
         */
        u64 forced_cs;
        u64 forced_eip;
        /* writing user supplied forced values to guest */
        forced_cs = ether_get_sysenter_cs(d);
        forced_eip = ether_get_sysenter_eip(d);

        if(forced_cs)
            new_cs = forced_cs;

        if(forced_eip)
            new_eip = forced_eip;
    }

    vmx_write_sysenter_msr(GUEST_SYSENTER_CS, new_cs);
    vmx_write_sysenter_msr(GUEST_SYSENTER_EIP, new_eip);
}









(2) When a fast syscall comes, I capture it in sh_page_fault.


(3) Then I write the real GUEST_SYSENTER_EIP value (c0103ef0, ia32_sysenter_target) into GUEST_RIP.

static int sh_page_fault(struct vcpu *v, 
                          unsigned long va, 
                          struct cpu_user_regs *regs)
{
...

    /*yandong*/
    /* Check if this page fault occurs on our magic address */
    if(unlikely(ether_get_sysenter_eip(d) != 0 && ether_get_sysenter_eip(d) == va))
    {
        /* only go through
         * with the fault notification if it occurred during
         * an instruction fetch. 
         */
        if ( regs->error_code & PFEC_insn_fetch )
        {
            unsigned long real_rip = ether_get_imaginary_sysenter_eip(d);

            /* process system call notification */
            shadow_lock(d);
            ether_handle_syscall(v, regs);
            shadow_unlock(d);

            /* lets update rip to put us in a much happier
             * place in memory, notably the actual
             * sysenter handling address
             */
            printk("MITCTL: sh_page_fault syscall real_rip  %lx %lx\n", va, ether_get_sysenter_eip(d));
            printk("MITCTL: sh_page_fault syscall real_rip  %lx\n", __vmread(GUEST_RIP));
            //printk("MITCTL: sh_page_fault syscall  %lx %lx %lx %lx\n");
            __vmwrite(GUEST_RIP, real_rip);
            printk("MITCTL: sh_page_fault syscall real_rip  %lx\n", __vmread(GUEST_RIP));
            return 1;
        }
    }

...
}




But I run into an infinite loop, as shown below: I always capture the same syscall.
In sh_page_fault I successfully write c0103ef0 into GUEST_RIP, but I still get a page fault, and GUEST_RIP is ddddd0ae again.
Why does this happen? Thank you very much.


(XEN) MIT SYSCALL 7
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae ddddd0ae   
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae
(XEN) MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60
(XEN)  vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) TRAP_page_fault
(XEN) MIT SYSCALL 7
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae ddddd0ae
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae
(XEN) MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60
(XEN)  vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) TRAP_page_fault
(XEN) MIT SYSCALL 7




Best Regards



------=_Part_95690_8647576.1393579030583--
italic;">/*&nbsp;it&nbsp;seems&nbsp;that&nbsp;we&nbsp;should&nbsp;write&nbsp;user&nbsp;supplied<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;values&nbsp;instead<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>forced_cs<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>forced_eip<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;writing&nbsp;user&nbsp;supplied&nbsp;forced&nbsp;values&nbsp;to&nbsp;guest&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>forced_cs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_cs<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>forced_eip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>forced_cs<span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>new_cs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: 
#808080;">&nbsp;</span>forced_cs<span style="font-weight: bold;">;</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>forced_eip<span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>new_eip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>forced_eip<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>vmx_write_sysenter_msr<span style="font-weight: bold;">(</span>GUEST_SYSENTER_CS<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>new_cs<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>vmx_write_sysenter_msr<span style="font-weight: bold;">(</span>GUEST_SYSENTER_EIP<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>new_eip<span style="font-weight: bold;">);</span><span style="color: #808080;"><br></span><span style="font-weight: bold;">}</span></p></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;">(2)When a fast syscall come,&nbsp;&nbsp;I will caputue it in&nbsp;</span><span style="font-family: courier;"><b>sh_page_fault</b>.</span></font></div><div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font 
size="3"><span style="font-family: courier;">(3) Then I set the real&nbsp;</span><span style="font-family: courier;"><b>GUEST_SYSENTER_EIP(</b></span></font><span style="font-family: Arial, sans-serif; font-size: medium; line-height: 24px;">c0103ef0, ia32_sysenter_target</span><span style="font-size: medium; font-family: courier;"><b>)</b>&nbsp;</span><span style="font-size: medium; font-family: courier;">to</span><span style="font-size: medium; font-family: courier;">&nbsp;</span><span style="font-size: medium; font-family: courier;"><b>GUEST_RIP</b>.</span></div></div><div><p style="font-family: courier; font-size: 10pt"><span style="color: #000080;font-weight: bold;">static</span><span style="color: #808080;">&nbsp;</span><span style="color: #000080;font-weight: bold;">int</span><span style="color: #808080;">&nbsp;</span>sh_page_fault<span style="font-weight: bold;">(</span><span style="color: #000080;font-weight: bold;">struct</span><span style="color: #808080;">&nbsp;</span>vcpu<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">*</span>v<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">unsigned</span><span style="color: #808080;">&nbsp;</span><span style="color: #000080;font-weight: bold;">long</span><span style="color: #808080;">&nbsp;</span>va<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">struct</span><span style="color: #808080;">&nbsp;</span>cpu_user_regs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">*</span>regs<span 
style="font-weight: bold;">)</span><span style="color: #808080;"><br></span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br></span><span style="color: #808080;">...<br></span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*yandong*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;Check&nbsp;if&nbsp;this&nbsp;page&nbsp;fault&nbsp;occurs&nbsp;on&nbsp;our&nbsp;magic&nbsp;address&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>unlikely<span style="font-weight: bold;">(</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">)</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">!=</span><span style="color: #808080;">&nbsp;</span><span style="color: #008080;">0</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">&amp;&amp;</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">)</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">==</span><span style="color: #808080;">&nbsp;</span>va<span style="font-weight: bold;">))</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: 
italic;">/*&nbsp;only&nbsp;go&nbsp;through<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;with&nbsp;the&nbsp;fault&nbsp;notification&nbsp;if&nbsp;it&nbsp;occurred&nbsp;during<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;an&nbsp;instruction&nbsp;fetch.&nbsp;<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">(</span><span style="color: #808080;">&nbsp;</span>regs<span style="font-weight: bold;">-&gt;</span>error_code<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">&amp;</span><span style="color: #808080;">&nbsp;</span>PFEC_insn_fetch<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">unsigned</span><span style="color: #808080;">&nbsp;</span><span style="color: #000080;font-weight: bold;">long</span><span style="color: #808080;">&nbsp;</span>real_rip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_imaginary_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;process&nbsp;system&nbsp;call&nbsp;notification&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>shadow_lock<span 
style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>ether_handle_syscall<span style="font-weight: bold;">(</span>v<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>regs<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>shadow_unlock<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;lets&nbsp;update&nbsp;rip&nbsp;to&nbsp;put&nbsp;us&nbsp;in&nbsp;a&nbsp;much&nbsp;happier<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;place&nbsp;in&nbsp;memory,&nbsp;notably&nbsp;the&nbsp;actual<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;sysenter&nbsp;handling&nbsp;address<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: #cc0000;">"MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;real_rip&nbsp;&nbsp;%lx&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>va<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">));</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: 
#cc0000;">"MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;real_rip&nbsp;&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>__vmread<span style="font-weight: bold;">(</span>GUEST_RIP<span style="font-weight: bold;">));</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">//printk("MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;&nbsp;%lx&nbsp;%lx&nbsp;%lx&nbsp;%lx\n");<br></span><span style="color: #808080;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>__vmwrite<span style="font-weight: bold;">(</span>GUEST_RIP<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>real_rip<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: #cc0000;">"MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;real_rip&nbsp;&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>__vmread<span style="font-weight: bold;">(</span>GUEST_RIP<span style="font-weight: bold;">));</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">return</span><span style="color: #808080;">&nbsp;</span><span style="color: #008080;">1</span><span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span></p></div><div><font face="courier" size="3">...</font></div><div><font face="courier" size="3">}</font></div><div><font size="3"><span style="font-family: 
courier;"><br></span></font></div><div><br></div><div><font size="3"><span style="font-family: courier;">But, I encounter&nbsp;</span><span style="font-family: Arial, sans-serif; line-height: 24px;">Infinite loops as below. I always capture the same syscall. &nbsp;</span></font></div><div><font face="Arial, sans-serif" size="3"><span style="line-height: 24px;">In&nbsp;</span></font><b style="font-family: courier; font-size: medium;">sh_page_fault,&nbsp;</b><span style="font-family: courier; font-size: medium;">I have&nbsp;</span><span style="color: rgb(67, 67, 67); font-family: Arial, sans-serif; line-height: 24px;">successfully</span><span style="font-family: courier; font-size: medium;">&nbsp;set&nbsp;</span><span style="font-family: Arial, sans-serif; font-size: medium; line-height: 24px;"><b>c0103ef0</b>&nbsp;to&nbsp;</span><b style="font-family: courier; font-size: medium;">GUEST_RIP.</b><span style="font-family: courier; font-size: medium;">But I still capure a page fault , the&nbsp;</span><span style="font-family: courier; font-size: medium;"><b>GUEST_RIP&nbsp;</b>is&nbsp;</span><span style="font-family: Arial, sans-serif; font-size: medium; line-height: 24px;">ddddd0ae.</span></div><div><font size="3"><span style="font-family: Arial, sans-serif; line-height: 24px;">why? 
Thank you very much.</span></font></div><div><font size="3"><span style="font-family: Arial, sans-serif; line-height: 24px;"><div><br></div><div>(XEN) MIT SYSCALL 7</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae ddddd0ae &nbsp;&nbsp;</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60</div><div>(XEN) &nbsp;vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) TRAP_page_fault</div><div>(XEN) MIT SYSCALL 7</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae ddddd0ae</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60</div><div>(XEN) &nbsp;vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) TRAP_page_fault</div><div>(XEN) MIT SYSCALL 7</div><div><br></div><div><br></div><div>Best Regards</div></span></font></div><span></span><br><br><br>
------=_Part_95690_8647576.1393579030583--



--===============5935373921665633701==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5935373921665633701==--



From xen-devel-bounces@lists.xen.org Fri Feb 28 09:17:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJZH-0007Mt-Ke; Fri, 28 Feb 2014 09:17:19 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <hanyandong@iie.ac.cn>) id 1WJJZF-0007Mn-9z
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 09:17:17 +0000
Received: from [193.109.254.147:24839] by server-7.bemta-14.messagelabs.com id
	B3/7E-23424-C1450135; Fri, 28 Feb 2014 09:17:16 +0000
X-Env-Sender: hanyandong@iie.ac.cn
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393579032!7428346!1
X-Originating-IP: [159.226.251.23]
X-SpamReason: No, hits=0.6 required=7.0 tests=ratty_date: Non-RFC but 
	legit format in Fri, 28 Feb 2014 17:17:10 +0800 (GMT+08:00),
	BODY_RANDOM_LONG,HTML_MESSAGE
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10001 invoked from network); 28 Feb 2014 09:17:13 -0000
Received: from smtp23.cstnet.cn (HELO cstnet.cn) (159.226.251.23)
	by server-8.tower-27.messagelabs.com with SMTP;
	28 Feb 2014 09:17:13 -0000
Received: by ajax-webmail-app3 (Coremail) ; Fri, 28 Feb 2014 17:17:10 +0800
	(GMT+08:00)
Date: Fri, 28 Feb 2014 17:17:10 +0800 (GMT+08:00)
From: =?GBK?B?uqvR3rar?= <hanyandong@iie.ac.cn>
To: xen-devel@lists.xensource.com
Message-ID: <1b24870.6a6c.14477c87837.Coremail.hanyandong@iie.ac.cn>
MIME-Version: 1.0
X-Originating-IP: [111.200.12.97]
X-Priority: 3
X-Mailer: Coremail Webmail Server Version XT2.1.10 dev build
	20131120(24194.5778.5783) Copyright (c) 2002-2014 www.mailtech.cn
	cstnet
X-SendMailWithSms: false
X-CM-CTRLDATA: XsOY12Zvb3Rlcl9odG09Mjk4MjI6MTImZm9vdGVyX3R4dD00OTkwOjY=
X-CM-TRANSID: SQCowJAL0ukWVBBTJrNwAQ--.48380W
X-CM-SenderInfo: 5kdq5txqgr0wo6llvhldfou0/1tbiAxIJBlD7NX3naQAAsz
X-Coremail-Antispam: 1Ur529EdanIXcx71UUUUU7IcSsGvfJ3iIAIbVAYjsxI4VWxJw
	CS07vEb4IE77IF4wCS07vE1I0E4x80FVAKz4kxMIAIbVAFxVCaYxvI4VCIwcAKzIAtYxBI
	daVFxhVjvjDU=
Subject: [Xen-devel] intercept and capture fast system call of linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5935373921665633701=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============5935373921665633701==
Content-Type: multipart/alternative; 
	boundary="----=_Part_95690_8647576.1393579030583"

------=_Part_95690_8647576.1393579030583
Content-Type: text/plain; charset=GBK
Content-Transfer-Encoding: 7bit

hi all,
I want to intercept and capture fast system calls (sysenter) on Linux.
(1) I set GUEST_SYSENTER_EIP to 0xDDDDD0AE in vmx_vmexit_handler, and save the real value.


in vmx_vmexit_handler()
{
  ....

    //yandong
    if( is_hvm_domain(current->domain) )
    {
        //printk("MITCTL:is_hvm_domain\n");
        switch (current->domain->arch.hvm_domain.mitctl_op.xen_vmexit_handler_mitctl_method)
        {
            case -1: break;
            case XEN_VMEXIT_HANDLER_MITCTL_libvmi :
            {
                //printk("MITCTL:vmexit set_trap\n");
                vmx_properly_set_trap_flag(current->domain);   
                break;
            }
            default: break;

         }
    }

   return ...

}




inline void vmx_properly_set_trap_flag(struct domain *d)
{
    //set sysenter_eip
    if(d->arch.hvm_domain.mitctl_op.xen_vmexit_handler_mitctl_method != -1)
    {
        vmx_set_sysenter_msrs(d);
        //current->arch.hvm_vmx.exec_control |= CPU_BASED_MONITOR_TRAP_FLAG;
        //vmx_update_cpu_exec_control(current);
        //current->arch.hvm_vcpu.single_step = 1;
    }
    
    return;

}




/* force user supplied msr values on this guest */
inline void vmx_set_sysenter_msrs(struct domain *d)
{
    u64 new_cs;
    u64 new_eip;
    u64 old_MSR_EIP = __vmread(GUEST_SYSENTER_EIP);
    u64 old_MSR_CS = __vmread(GUEST_SYSENTER_CS);
    if( 0xDDDDD0AE != old_MSR_EIP)
    {
        printk("MITCTL:old_MSR_EIP %lx\n",old_MSR_EIP);
        ether_set_imaginary_sysenter_eip(d, old_MSR_EIP);
        ether_set_imaginary_sysenter_cs(d, old_MSR_CS);
    }
    printk("MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP %lx %lx\n",old_MSR_EIP, old_MSR_CS);
    /* write MSR registers */

    /* default to writing old(imaginary) values to guest */
    new_cs = ether_get_imaginary_sysenter_cs(d);
    new_eip = ether_get_imaginary_sysenter_eip(d);

    if(d->arch.hvm_domain.mitctl_op.xen_vmexit_handler_mitctl_method != -1)
    {
        /* it seems that we should write user supplied
         * values instead
         */
        u64 forced_cs;
        u64 forced_eip;
        /* writing user supplied forced values to guest */
        forced_cs = ether_get_sysenter_cs(d);
        forced_eip = ether_get_sysenter_eip(d);

        if(forced_cs)
            new_cs = forced_cs;

        if(forced_eip)
            new_eip = forced_eip;
    }

    vmx_write_sysenter_msr(GUEST_SYSENTER_CS, new_cs);
    vmx_write_sysenter_msr(GUEST_SYSENTER_EIP, new_eip);
}









(2) When a fast syscall comes, I capture it in sh_page_fault.


(3) Then I write the real GUEST_SYSENTER_EIP (c0103ef0, ia32_sysenter_target) into GUEST_RIP.

static int sh_page_fault(struct vcpu *v, 
                          unsigned long va, 
                          struct cpu_user_regs *regs)
{
...

    /*yandong*/
    /* Check if this page fault occurs on our magic address */
    if(unlikely(ether_get_sysenter_eip(d) != 0 && ether_get_sysenter_eip(d) == va))
    {
        /* only go through
         * with the fault notification if it occurred during
         * an instruction fetch. 
         */
        if ( regs->error_code & PFEC_insn_fetch )
        {
            unsigned long real_rip = ether_get_imaginary_sysenter_eip(d);

            /* process system call notification */
            shadow_lock(d);
            ether_handle_syscall(v, regs);
            shadow_unlock(d);

            /* lets update rip to put us in a much happier
             * place in memory, notably the actual
             * sysenter handling address
             */
            printk("MITCTL: sh_page_fault syscall real_rip  %lx %lx\n", va, ether_get_sysenter_eip(d));
            printk("MITCTL: sh_page_fault syscall real_rip  %lx\n", __vmread(GUEST_RIP));
            //printk("MITCTL: sh_page_fault syscall  %lx %lx %lx %lx\n");
            __vmwrite(GUEST_RIP, real_rip);
            printk("MITCTL: sh_page_fault syscall real_rip  %lx\n", __vmread(GUEST_RIP));
            return 1;
        }
    }

...
}




But I run into an infinite loop, as shown below: I always capture the same syscall.
In sh_page_fault, I have successfully set GUEST_RIP to c0103ef0, but I still get a page fault, and GUEST_RIP is ddddd0ae again.
Why? Thank you very much.


(XEN) MIT SYSCALL 7
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae ddddd0ae   
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae
(XEN) MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60
(XEN)  vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) TRAP_page_fault
(XEN) MIT SYSCALL 7
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae ddddd0ae
(XEN) MITCTL: sh_page_fault syscall real_rip  ddddd0ae
(XEN) MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60
(XEN)  vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip  c0103ef0
(XEN) TRAP_page_fault
(XEN) MIT SYSCALL 7




Best Regards



courier; font-size: 10pt"><span style="font-weight: bold;">}</span></p><p style="font-family: courier; font-size: 10pt"><span style="font-weight: bold;"><br></span></p><p style="font-family: courier; font-size: 10pt"><span style="color: #0000ff;font-style: italic;">/*&nbsp;force&nbsp;user&nbsp;supplied&nbsp;msr&nbsp;values&nbsp;on&nbsp;this&nbsp;guest&nbsp;*/</span><span style="color: #808080;"><br></span>inline<span style="color: #808080;">&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">void</span><span style="color: #808080;">&nbsp;</span>vmx_set_sysenter_msrs<span style="font-weight: bold;">(</span><span style="color: #000080;font-weight: bold;">struct</span><span style="color: #808080;">&nbsp;</span>domain<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">*</span>d<span style="font-weight: bold;">)</span><span style="color: #808080;"><br></span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>new_cs<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>new_eip<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>old_MSR_EIP<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>__vmread<span style="font-weight: bold;">(</span>GUEST_SYSENTER_EIP<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>old_MSR_CS<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>__vmread<span style="font-weight: bold;">(</span>GUEST_SYSENTER_CS<span style="font-weight: bold;">);</span><span style="color: 
#808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span><span style="color: #808080;">&nbsp;</span><span style="color: #008080;">0xDDDDD0AE</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">!=</span><span style="color: #808080;">&nbsp;</span>old_MSR_EIP<span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: #cc0000;">"MITCTL:old_MSR_EIP&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span>old_MSR_EIP<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>ether_set_imaginary_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>old_MSR_EIP<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>ether_set_imaginary_sysenter_cs<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>old_MSR_CS<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: #cc0000;">"MITCTL:vmx_set_sysenter_msrs&nbsp;GUEST_SYSENTER_EIP&nbsp;%lx&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span>old_MSR_EIP<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>old_MSR_CS<span style="font-weight: bold;">);</span><span style="color: 
#808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;write&nbsp;MSR&nbsp;registers&nbsp;*/</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;default&nbsp;to&nbsp;writing&nbsp;old(imaginary)&nbsp;values&nbsp;to&nbsp;guest&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>new_cs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_imaginary_sysenter_cs<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>new_eip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_imaginary_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">-&gt;</span>arch<span style="font-weight: bold;">.</span>hvm_domain<span style="font-weight: bold;">.</span>mitctl_op<span style="font-weight: bold;">.</span>xen_vmexit_handler_mitctl_method<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">!=</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">-</span><span style="color: #008080;">1</span><span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: 
italic;">/*&nbsp;it&nbsp;seems&nbsp;that&nbsp;we&nbsp;should&nbsp;write&nbsp;user&nbsp;supplied<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;values&nbsp;instead<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>forced_cs<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>u64<span style="color: #808080;">&nbsp;</span>forced_eip<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;writing&nbsp;user&nbsp;supplied&nbsp;forced&nbsp;values&nbsp;to&nbsp;guest&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>forced_cs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_cs<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>forced_eip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>forced_cs<span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>new_cs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: 
#808080;">&nbsp;</span>forced_cs<span style="font-weight: bold;">;</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>forced_eip<span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>new_eip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>forced_eip<span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>vmx_write_sysenter_msr<span style="font-weight: bold;">(</span>GUEST_SYSENTER_CS<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>new_cs<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span>vmx_write_sysenter_msr<span style="font-weight: bold;">(</span>GUEST_SYSENTER_EIP<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>new_eip<span style="font-weight: bold;">);</span><span style="color: #808080;"><br></span><span style="font-weight: bold;">}</span></p></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font size="3"><span style="font-family: courier;">(2)When a fast syscall come,&nbsp;&nbsp;I will caputue it in&nbsp;</span><span style="font-family: courier;"><b>sh_page_fault</b>.</span></font></div><div><div><font size="3"><span style="font-family: courier;"><br></span></font></div><div><font 
size="3"><span style="font-family: courier;">(3) Then I set the real&nbsp;</span><span style="font-family: courier;"><b>GUEST_SYSENTER_EIP(</b></span></font><span style="font-family: Arial, sans-serif; font-size: medium; line-height: 24px;">c0103ef0, ia32_sysenter_target</span><span style="font-size: medium; font-family: courier;"><b>)</b>&nbsp;</span><span style="font-size: medium; font-family: courier;">to</span><span style="font-size: medium; font-family: courier;">&nbsp;</span><span style="font-size: medium; font-family: courier;"><b>GUEST_RIP</b>.</span></div></div><div><p style="font-family: courier; font-size: 10pt"><span style="color: #000080;font-weight: bold;">static</span><span style="color: #808080;">&nbsp;</span><span style="color: #000080;font-weight: bold;">int</span><span style="color: #808080;">&nbsp;</span>sh_page_fault<span style="font-weight: bold;">(</span><span style="color: #000080;font-weight: bold;">struct</span><span style="color: #808080;">&nbsp;</span>vcpu<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">*</span>v<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">unsigned</span><span style="color: #808080;">&nbsp;</span><span style="color: #000080;font-weight: bold;">long</span><span style="color: #808080;">&nbsp;</span>va<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">struct</span><span style="color: #808080;">&nbsp;</span>cpu_user_regs<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">*</span>regs<span 
style="font-weight: bold;">)</span><span style="color: #808080;"><br></span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br></span><span style="color: #808080;">...<br></span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*yandong*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;Check&nbsp;if&nbsp;this&nbsp;page&nbsp;fault&nbsp;occurs&nbsp;on&nbsp;our&nbsp;magic&nbsp;address&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="font-weight: bold;">(</span>unlikely<span style="font-weight: bold;">(</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">)</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">!=</span><span style="color: #808080;">&nbsp;</span><span style="color: #008080;">0</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">&amp;&amp;</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">)</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">==</span><span style="color: #808080;">&nbsp;</span>va<span style="font-weight: bold;">))</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: 
italic;">/*&nbsp;only&nbsp;go&nbsp;through<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;with&nbsp;the&nbsp;fault&nbsp;notification&nbsp;if&nbsp;it&nbsp;occurred&nbsp;during<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;an&nbsp;instruction&nbsp;fetch.&nbsp;<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">if</span><span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">(</span><span style="color: #808080;">&nbsp;</span>regs<span style="font-weight: bold;">-&gt;</span>error_code<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">&amp;</span><span style="color: #808080;">&nbsp;</span>PFEC_insn_fetch<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">)</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">{</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">unsigned</span><span style="color: #808080;">&nbsp;</span><span style="color: #000080;font-weight: bold;">long</span><span style="color: #808080;">&nbsp;</span>real_rip<span style="color: #808080;">&nbsp;</span><span style="font-weight: bold;">=</span><span style="color: #808080;">&nbsp;</span>ether_get_imaginary_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;process&nbsp;system&nbsp;call&nbsp;notification&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>shadow_lock<span 
style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>ether_handle_syscall<span style="font-weight: bold;">(</span>v<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>regs<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>shadow_unlock<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">);</span><span style="color: #808080;"><br><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">/*&nbsp;lets&nbsp;update&nbsp;rip&nbsp;to&nbsp;put&nbsp;us&nbsp;in&nbsp;a&nbsp;much&nbsp;happier<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;place&nbsp;in&nbsp;memory,&nbsp;notably&nbsp;the&nbsp;actual<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*&nbsp;sysenter&nbsp;handling&nbsp;address<br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;*/</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: #cc0000;">"MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;real_rip&nbsp;&nbsp;%lx&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>va<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>ether_get_sysenter_eip<span style="font-weight: bold;">(</span>d<span style="font-weight: bold;">));</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: 
#cc0000;">"MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;real_rip&nbsp;&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>__vmread<span style="font-weight: bold;">(</span>GUEST_RIP<span style="font-weight: bold;">));</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #0000ff;font-style: italic;">//printk("MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;&nbsp;%lx&nbsp;%lx&nbsp;%lx&nbsp;%lx\n");<br></span><span style="color: #808080;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>__vmwrite<span style="font-weight: bold;">(</span>GUEST_RIP<span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>real_rip<span style="font-weight: bold;">);</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span>printk<span style="font-weight: bold;">(</span><span style="color: #cc0000;">"MITCTL:&nbsp;sh_page_fault&nbsp;syscall&nbsp;real_rip&nbsp;&nbsp;%lx\n"</span><span style="font-weight: bold;">,</span><span style="color: #808080;">&nbsp;</span>__vmread<span style="font-weight: bold;">(</span>GUEST_RIP<span style="font-weight: bold;">));</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="color: #000080;font-weight: bold;">return</span><span style="color: #808080;">&nbsp;</span><span style="color: #008080;">1</span><span style="font-weight: bold;">;</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span><span style="color: #808080;"><br>&nbsp;&nbsp;&nbsp;&nbsp;</span><span style="font-weight: bold;">}</span></p></div><div><font face="courier" size="3">...</font></div><div><font face="courier" size="3">}</font></div><div><font size="3"><span style="font-family: 
courier;"><br></span></font></div><div><br></div><div><font size="3"><span style="font-family: courier;">But, I encounter&nbsp;</span><span style="font-family: Arial, sans-serif; line-height: 24px;">Infinite loops as below. I always capture the same syscall. &nbsp;</span></font></div><div><font face="Arial, sans-serif" size="3"><span style="line-height: 24px;">In&nbsp;</span></font><b style="font-family: courier; font-size: medium;">sh_page_fault,&nbsp;</b><span style="font-family: courier; font-size: medium;">I have&nbsp;</span><span style="color: rgb(67, 67, 67); font-family: Arial, sans-serif; line-height: 24px;">successfully</span><span style="font-family: courier; font-size: medium;">&nbsp;set&nbsp;</span><span style="font-family: Arial, sans-serif; font-size: medium; line-height: 24px;"><b>c0103ef0</b>&nbsp;to&nbsp;</span><b style="font-family: courier; font-size: medium;">GUEST_RIP.</b><span style="font-family: courier; font-size: medium;">But I still capure a page fault , the&nbsp;</span><span style="font-family: courier; font-size: medium;"><b>GUEST_RIP&nbsp;</b>is&nbsp;</span><span style="font-family: Arial, sans-serif; font-size: medium; line-height: 24px;">ddddd0ae.</span></div><div><font size="3"><span style="font-family: Arial, sans-serif; line-height: 24px;">why? 
Thank you very much.</span></font></div><div><font size="3"><span style="font-family: Arial, sans-serif; line-height: 24px;"><div><br></div><div>(XEN) MIT SYSCALL 7</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae ddddd0ae &nbsp;&nbsp;</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60</div><div>(XEN) &nbsp;vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) TRAP_page_fault</div><div>(XEN) MIT SYSCALL 7</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae ddddd0ae</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;ddddd0ae</div><div>(XEN) MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) MITCTL:vmx_set_sysenter_msrs GUEST_SYSENTER_EIP ddddd0ae 60</div><div>(XEN) &nbsp;vmx_vmenter_helper MITCTL: sh_page_fault syscall real_rip &nbsp;c0103ef0</div><div>(XEN) TRAP_page_fault</div><div>(XEN) MIT SYSCALL 7</div><div><br></div><div><br></div><div>Best Regards</div></span></font></div><span></span><br><br><br>
------=_Part_95690_8647576.1393579030583--



--===============5935373921665633701==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5935373921665633701==--



From xen-devel-bounces@lists.xen.org Fri Feb 28 09:30:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJm1-0007pi-6s; Fri, 28 Feb 2014 09:30:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WJJlz-0007pc-NR
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 09:30:27 +0000
Received: from [85.158.143.35:16456] by server-3.bemta-4.messagelabs.com id
	E4/71-11539-23750135; Fri, 28 Feb 2014 09:30:26 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393579825!8940042!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16888 invoked from network); 28 Feb 2014 09:30:26 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 09:30:26 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=twins)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WJJlL-00047j-Bx; Fri, 28 Feb 2014 09:29:47 +0000
Received: by twins (Postfix, from userid 1000)
	id C684582787BE; Fri, 28 Feb 2014 10:29:45 +0100 (CET)
Date: Fri, 28 Feb 2014 10:29:45 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <waiman.long@hp.com>
Message-ID: <20140228092945.GG27965@twins.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530FA32B.8010202@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 03:42:19PM -0500, Waiman Long wrote:
> >>+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
> >>+
> >>+	if (old == 0) {
> >>+		/*
> >>+		 * Got the lock, can clear the waiting bit now
> >>+		 */
> >>+		smp_u8_store_release(&qlock->wait, 0);
> >
> >So we just did an atomic op, and now you're trying to optimize this
> >write. Why do you need a whole byte for that?
> >
> >Surely a cmpxchg loop with the right atomic op can't be _that_ much
> >slower? Its far more readable and likely avoids that steal fail below as
> >well.
> 
> At low contention level, atomic operations that requires a lock prefix are
> the major contributor to the total execution times. I saw estimate online
> that the time to execute a lock prefix instruction can easily be 50X longer
> than a regular instruction that can be pipelined. That is why I try to do it
> with as few lock prefix instructions as possible. If I have to do an atomic
> cmpxchg, it probably won't be faster than the regular qspinlock slowpath.

At low contention the cmpxchg won't have to be retried (much) so using
it won't be a problem and you get to have arbitrary atomic ops.

> Given that speed at low contention level which is the common case is
> important to get this patch accepted, I have to do what I can to make it run
> as far as possible for this 2 contending task case.

What I'm saying is that you can do the whole thing with a single
cmpxchg. No extra ops needed. And at that point you don't need a whole
byte, you can use a single bit.

That removes the whole NR_CPUS-dependent logic.

> >>+		/*
> >>+		 * Someone has steal the lock, so wait again
> >>+		 */
> >>+		goto try_again;

> >That's just a fail.. steals should not ever be allowed. It's a fair lock
> >after all.
> 
> The code is unfair, but this unfairness help it to run faster than ticket
> spinlock in this particular case. And the regular qspinlock slowpath is
> fair. A little bit of unfairness in this particular case helps its speed.

*groan*, no, unfairness not cool. ticket lock is absolutely fair; we
should preserve this.

BTW; can you share your benchmark thingy? 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 09:30:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJJm1-0007pi-6s; Fri, 28 Feb 2014 09:30:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WJJlz-0007pc-NR
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 09:30:27 +0000
Received: from [85.158.143.35:16456] by server-3.bemta-4.messagelabs.com id
	E4/71-11539-23750135; Fri, 28 Feb 2014 09:30:26 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393579825!8940042!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16888 invoked from network); 28 Feb 2014 09:30:26 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 09:30:26 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=twins)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WJJlL-00047j-Bx; Fri, 28 Feb 2014 09:29:47 +0000
Received: by twins (Postfix, from userid 1000)
	id C684582787BE; Fri, 28 Feb 2014 10:29:45 +0100 (CET)
Date: Fri, 28 Feb 2014 10:29:45 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <waiman.long@hp.com>
Message-ID: <20140228092945.GG27965@twins.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <530FA32B.8010202@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Thu, Feb 27, 2014 at 03:42:19PM -0500, Waiman Long wrote:
> >>+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
> >>+
> >>+	if (old == 0) {
> >>+		/*
> >>+		 * Got the lock, can clear the waiting bit now
> >>+		 */
> >>+		smp_u8_store_release(&qlock->wait, 0);
> >
> >So we just did an atomic op, and now you're trying to optimize this
> >write. Why do you need a whole byte for that?
> >
> >Surely a cmpxchg loop with the right atomic op can't be _that_ much
> >slower? Its far more readable and likely avoids that steal fail below as
> >well.
> 
> At low contention levels, atomic operations that require a lock prefix are
> the major contributor to the total execution time. I have seen estimates
> online that a lock-prefixed instruction can easily take 50X longer to execute
> than a regular instruction that can be pipelined. That is why I try to use
> as few lock-prefixed instructions as possible. If I have to do an atomic
> cmpxchg, it probably won't be faster than the regular qspinlock slowpath.

At low contention the cmpxchg won't have to be retried (much) so using
it won't be a problem and you get to have arbitrary atomic ops.

> Given that speed at low contention levels, which is the common case, is
> important for getting this patch accepted, I have to do what I can to make
> it run as fast as possible for this 2-contending-task case.

What I'm saying is that you can do the whole thing with a single
cmpxchg. No extra ops needed. And at that point you don't need a whole
byte, you can use a single bit.

That removes the whole NR_CPUS-dependent logic.
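[Editorial illustration, not part of the patch under review: a minimal C11-atomics sketch of the single-cmpxchg, single-pending-bit scheme described above. The bit layout, the names `lock_step`/`unlock`, and the return convention are all hypothetical; the point is that each state transition costs exactly one lock-prefixed compare-exchange and the waiter needs only one bit, not a whole byte.]

```c
#include <stdatomic.h>

/* Hypothetical bit layout (not the actual kernel patch):
 * bit 0 holds the lock, bit 1 marks the single spinning waiter. */
enum { LOCKED = 1u, PENDING = 2u };

/* One compare-exchange per state transition:
 *   0      -> LOCKED           (got the lock; returns 1)
 *   LOCKED -> LOCKED | PENDING (became the one waiter; returns 0)
 * Returns -1 when the attempt loses a race or both bits are taken. */
static int lock_step(atomic_uint *lock)
{
    unsigned int old = atomic_load(lock);
    if (old == 0)
        return atomic_compare_exchange_strong(lock, &old, LOCKED) ? 1 : -1;
    if (old == LOCKED)
        return atomic_compare_exchange_strong(lock, &old,
                                              LOCKED | PENDING) ? 0 : -1;
    return -1;
}

static void unlock(atomic_uint *lock)
{
    /* Clearing only the LOCKED bit hands over to any pending waiter,
     * which then claims the lock via a PENDING -> LOCKED cmpxchg. */
    atomic_fetch_and(lock, ~(unsigned int)LOCKED);
}
```

Because the waiter is identified by a bit rather than a byte, there is no byte-granular store to order, and no fallback path where a third task can steal the lock ahead of the pending waiter.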

> >>+		/*
> >>+		 * Someone has stolen the lock, so wait again
> >>+		 */
> >>+		goto try_again;

> >That's just a fail.. steals should not ever be allowed. It's a fair lock
> >after all.
> 
> The code is unfair, but that unfairness helps it run faster than the ticket
> spinlock in this particular case, while the regular qspinlock slowpath
> remains fair. A little bit of unfairness here helps its speed.

*groan*, no, unfairness is not cool. The ticket lock is absolutely fair; we
should preserve this.

BTW, can you share your benchmark thingy?

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 09:53:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJK7q-0008WB-Ns; Fri, 28 Feb 2014 09:53:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avanzini.arianna@gmail.com>) id 1WJBFb-0001sF-Uw
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 00:24:28 +0000
Received: from [85.158.137.68:14131] by server-15.bemta-3.messagelabs.com id
	6A/0C-19263-B37DF035; Fri, 28 Feb 2014 00:24:27 +0000
X-Env-Sender: avanzini.arianna@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393547066!4719689!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11756 invoked from network); 28 Feb 2014 00:24:26 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 00:24:26 -0000
Received: by mail-ee0-f44.google.com with SMTP id d49so1854249eek.3
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 16:24:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:reply-to:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=T5IqL+gHmEUrD0XxzbXZAGzW54oRcEYHT8DmOX3bLaI=;
	b=kwgmBVmb2qCN8gKuphA1lhkl/Hn6mz7bEWN4Y7Mjlfb9NENQdPSZ/JbDrWLLCaeYnZ
	OB+BPY1glioDm0AgBaEkcEiiKDacceVTfNI185Bjt8HGsDrn9GlyV1rbd5fi+WaytE0U
	tITbdSKD4DTIVGJPVz8uHGzxN68paZes9mt/75W3PaBud1XOLpeHCtOETSnZZHuxJCrR
	TyJSituovSerMhW7TnBuPq51quJVb3rd0h89fFqrrXZhdFuKL5anTNGgOKpWZTf17emc
	nxa47SHO2LSTBbxUrcA2mSGQbWwNvrw+jKaYKXnXJRPLxJWw0wdOoB2Dc7Jpk7D5bB6J
	kh1Q==
X-Received: by 10.14.110.68 with SMTP id t44mr16801109eeg.74.1393547065670;
	Thu, 27 Feb 2014 16:24:25 -0800 (PST)
Received: from [192.168.43.23] (adsl-ull-215-86.44-151.net24.it.
	[151.44.86.215])
	by mx.google.com with ESMTPSA id j41sm3688938eeg.10.2014.02.27.16.24.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 16:24:24 -0800 (PST)
Message-ID: <530FD736.8080401@gmail.com>
Date: Fri, 28 Feb 2014 01:24:22 +0100
From: Arianna Avanzini <avanzini.arianna@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>, 
	Eric Trudeau <etrudeau@broadcom.com>,
	Viktor Kleinik <viktor.kleinik@globallogic.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>	<1393496069.3921.14.camel@Solace>	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>,
	<CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>,
	<0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
	<bdbwiri5n1ohxkmfi9y58mr4.1393544349845@email.android.com>
In-Reply-To: <bdbwiri5n1ohxkmfi9y58mr4.1393544349845@email.android.com>
X-Enigmail-Version: 1.6
X-Mailman-Approved-At: Fri, 28 Feb 2014 09:53:01 +0000
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] R:  ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Arianna Avanzini <avanzini.arianna@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/2014 12:39 AM, Dario Faggioli wrote:
> As I said, Arianna is doing something very similar... perhaps she can merge her
> and your work and try to upstream it properly, in the next few days... Arianna?
> 

I'd certainly be happy to, once the patch is ready and if Eric Trudeau agrees.

Sorry for the delay,
Arianna


> Regards,
> Dario
> 
> 
> Sent from Samsung Mobile
> 
> 
> -------- Original message --------
> From: Eric Trudeau
> Date: 28/02/2014 00:03 (GMT+01:00)
> To: Viktor Kleinik
> Cc: Stefano Stabellini, Dario Faggioli, xen-devel@lists.xen.org, Arianna Avanzini,
> Julien Grall
> Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> 
> 
> On Feb 27, 2014, at 1:10 PM, "Viktor Kleinik" <viktor.kleinik@globallogic.com
> <mailto:viktor.kleinik@globallogic.com>> wrote:
> 
>> Thank you all for your responses.
>>
>> I will try those changes on our platform.
>> Are you planning to push the implementation of the
>> xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
>> xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into an
>> official Xen release?
>>
>> Regards,
>> Victor
>>
> I don't expect to push the changes up. If you want to submit, please go ahead. 
>>
>> On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com
>> <mailto:etrudeau@broadcom.com>> wrote:
>>
>>     > -----Original Message-----
>>     > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com
>>     <mailto:stefano.stabellini@eu.citrix.com>]
>>     > Sent: Thursday, February 27, 2014 8:16 AM
>>     > To: Dario Faggioli
>>     > Cc: Viktor Kleinik; xen-devel@lists.xen.org
>>     <mailto:xen-devel@lists.xen.org>; Arianna Avanzini; Stefano Stabellini;
>>     > Julien Grall; Eric Trudeau
>>     > Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
>>     >
>>     > On Thu, 27 Feb 2014, Dario Faggioli wrote:
>>     > > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
>>     > > > Hi all,
>>     > > >
>>     > > Hi,
>>     > >
>>     > > > Does anyone know anything about future plans to implement
>>     > > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
>>     > > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
>>     > > >
>>     > > I think Arianna is working on an implementation of the former
>>     > > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
>>     > > list soon. Isn't that so, Arianna?
>>     >
>>     > Eric Trudeau did some work in the area too:
>>     >
>>     > http://marc.info/?l=xen-devel&m=137338996422503
>>     > http://marc.info/?l=xen-devel&m=137365750318936
>>
>>     I checked our repo, and the changes for routing IRQs to DomUs in the
>>     second patch URL Stefano provided are up-to-date with what we have been
>>     using on our platforms.  We made no further changes after that patch,
>>     i.e. we left the 100 msec max wait for a domain to finish an ISR when
>>     destroying it.
>>
>>     We also added support for a DomU to map in I/O memory with the iomem
>>     configuration parameter.  Unfortunately, due to time constraints I
>>     can't provide an official patch against recent Xen upstream code, but
>>     below is a patch based on a commit from last October, :( ,
>>     d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.
>>     I hope this is helpful; it is the best I can do at this time.
>>
>>     -----------------
>>
>>     tools/libxl/libxl_create.c |  5 +++--
>>      xen/arch/arm/domctl.c      | 74
>>     +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>      2 files changed, 76 insertions(+), 3 deletions(-)
>>
>>     diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>>     index 1b320d3..53ed52e 100644
>>     --- a/tools/libxl/libxl_create.c
>>     +++ b/tools/libxl/libxl_create.c
>>     @@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc,
>>     libxl__multidev *multidev,
>>              LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
>>                  domid, io->start, io->start + io->number - 1);
>>
>>     -        ret = xc_domain_iomem_permission(CTX->xch, domid,
>>     -                                          io->start, io->number, 1);
>>     +        ret = xc_domain_memory_mapping(CTX->xch, domid,
>>     +                                       io->start, io->start,
>>     +                                       io->number, 1);
>>              if (ret < 0) {
>>                  LOGE(ERROR,
>>                       "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
>>     diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>>     index 851ee40..222aac9 100644
>>     --- a/xen/arch/arm/domctl.c
>>     +++ b/xen/arch/arm/domctl.c
>>     @@ -10,11 +10,83 @@
>>      #include <xen/errno.h>
>>      #include <xen/sched.h>
>>      #include <public/domctl.h>
>>     +#include <xen/iocap.h>
>>     +#include <xsm/xsm.h>
>>     +#include <xen/paging.h>
>>     +#include <xen/guest_access.h>
>>
>>      long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>                          XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>      {
>>     -    return -ENOSYS;
>>     +    long ret = 0;
>>     +    bool_t copyback = 0;
>>     +
>>     +    switch ( domctl->cmd )
>>     +    {
>>     +    case XEN_DOMCTL_memory_mapping:
>>     +    {
>>     +        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
>>     +        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
>>     +        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
>>     +        int add = domctl->u.memory_mapping.add_mapping;
>>     +
>>     +        /* removing i/o memory is not implemented yet */
>>     +        if (!add) {
>>     +            ret = -ENOSYS;
>>     +            break;
>>     +        }
>>     +        ret = -EINVAL;
>>     +        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
>>     +             /* x86 checks wrap based on paddr_bits which is not
>>     implemented on ARM? */
>>     +             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits -
>>     PAGE_SHIFT)) || */
>>     +             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
>>     +            break;
>>     +
>>     +        ret = -EPERM;
>>     +        if ( current->domain->domain_id != 0 )
>>     +            break;
>>     +
>>     +        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
>>     +        if ( ret )
>>     +            break;
>>     +
>>     +        if ( add )
>>     +        {
>>     +            printk(XENLOG_G_INFO
>>     +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>>     +                   d->domain_id, gfn, mfn, nr_mfns);
>>     +
>>     +            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
>>     +            if ( !ret && paging_mode_translate(d) )
>>     +            {
>>     +                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
>>     +                                       (gfn + nr_mfns) << PAGE_SHIFT,
>>     +                                       mfn << PAGE_SHIFT);
>>     +                if ( ret )
>>     +                {
>>     +                    printk(XENLOG_G_WARNING
>>     +                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>>     +                           d->domain_id, gfn, mfn, nr_mfns);
>>     +                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
>>     +                         is_hardware_domain(current->domain) )
>>     +                        printk(XENLOG_ERR
>>     +                               "memory_map: failed to deny dom%d access
>>     to [%lx,%lx]\n",
>>     +                               d->domain_id, mfn, mfn + nr_mfns - 1);
>>     +                }
>>     +            }
>>     +        }
>>     +    }
>>     +    break;
>>     +
>>     +    default:
>>     +        ret = -ENOSYS;
>>     +        break;
>>     +    }
>>     +
>>     +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
>>     +        ret = -EFAULT;
>>     +
>>     +    return ret;
>>      }
>>
>>      void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
>>
>> <http://www.globallogic.com/email_disclaimer.txt>


-- 
/*
 * Arianna Avanzini
 * avanzini.arianna@gmail.com
 * 73628@studenti.unimore.it
 */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 09:53:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 09:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJK7q-0008WB-Ns; Fri, 28 Feb 2014 09:53:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <avanzini.arianna@gmail.com>) id 1WJBFb-0001sF-Uw
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 00:24:28 +0000
Received: from [85.158.137.68:14131] by server-15.bemta-3.messagelabs.com id
	6A/0C-19263-B37DF035; Fri, 28 Feb 2014 00:24:27 +0000
X-Env-Sender: avanzini.arianna@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393547066!4719689!1
X-Originating-IP: [74.125.83.44]
X-SpamReason: No, hits=0.5 required=7.0 tests=BODY_RANDOM_LONG,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11756 invoked from network); 28 Feb 2014 00:24:26 -0000
Received: from mail-ee0-f44.google.com (HELO mail-ee0-f44.google.com)
	(74.125.83.44)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 00:24:26 -0000
Received: by mail-ee0-f44.google.com with SMTP id d49so1854249eek.3
	for <xen-devel@lists.xen.org>; Thu, 27 Feb 2014 16:24:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:reply-to:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type:content-transfer-encoding;
	bh=T5IqL+gHmEUrD0XxzbXZAGzW54oRcEYHT8DmOX3bLaI=;
	b=kwgmBVmb2qCN8gKuphA1lhkl/Hn6mz7bEWN4Y7Mjlfb9NENQdPSZ/JbDrWLLCaeYnZ
	OB+BPY1glioDm0AgBaEkcEiiKDacceVTfNI185Bjt8HGsDrn9GlyV1rbd5fi+WaytE0U
	tITbdSKD4DTIVGJPVz8uHGzxN68paZes9mt/75W3PaBud1XOLpeHCtOETSnZZHuxJCrR
	TyJSituovSerMhW7TnBuPq51quJVb3rd0h89fFqrrXZhdFuKL5anTNGgOKpWZTf17emc
	nxa47SHO2LSTBbxUrcA2mSGQbWwNvrw+jKaYKXnXJRPLxJWw0wdOoB2Dc7Jpk7D5bB6J
	kh1Q==
X-Received: by 10.14.110.68 with SMTP id t44mr16801109eeg.74.1393547065670;
	Thu, 27 Feb 2014 16:24:25 -0800 (PST)
Received: from [192.168.43.23] (adsl-ull-215-86.44-151.net24.it.
	[151.44.86.215])
	by mx.google.com with ESMTPSA id j41sm3688938eeg.10.2014.02.27.16.24.23
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Thu, 27 Feb 2014 16:24:24 -0800 (PST)
Message-ID: <530FD736.8080401@gmail.com>
Date: Fri, 28 Feb 2014 01:24:22 +0100
From: Arianna Avanzini <avanzini.arianna@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Dario Faggioli <dario.faggioli@citrix.com>, 
	Eric Trudeau <etrudeau@broadcom.com>,
	Viktor Kleinik <viktor.kleinik@globallogic.com>
References: <CAM=aOxhdbnc8fwQ3W570DO7nwQpKx9h3CV50bcHQfnwQOiTCTw@mail.gmail.com>	<1393496069.3921.14.camel@Solace>	<alpine.DEB.2.02.1402271314120.31489@kaball.uk.xensource.com>	<FF3E5629F9FF6745ADE4EE690CEC81AD2ADBC546@SJEXCHMB09.corp.ad.broadcom.com>,
	<CAM=aOxhiBPiL0UF=iuzW4dpm+g7wambp4fN9FOf1Lu60uUb1yg@mail.gmail.com>,
	<0873BA48-F5C5-4C4D-93AE-5CB967FF35E1@broadcom.com>
	<bdbwiri5n1ohxkmfi9y58mr4.1393544349845@email.android.com>
In-Reply-To: <bdbwiri5n1ohxkmfi9y58mr4.1393544349845@email.android.com>
X-Enigmail-Version: 1.6
X-Mailman-Approved-At: Fri, 28 Feb 2014 09:53:01 +0000
Cc: Julien Grall <julien.grall@citrix.com>,
	Stefano Stabellini <Stefano.Stabellini@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] R:  ARM: access to iomem and HW IRQ
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: Arianna Avanzini <avanzini.arianna@gmail.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/2014 12:39 AM, Dario Faggioli wrote:
> As I said, Arianna is doing something very similar... perhaps she can merge her
> and your work and try to upstream it properly, in the next few days... Arianna?
> 

I'd be certainly happy to, once the patch is ready and if Eric Trudeau agrees.

Sorry for the delay,
Arianna


> Regards,
> Dario
> 
> 
> Inviato da Samsung Mobile
> 
> 
> -------- Messaggio originale --------
> Da: Eric Trudeau
> Data:28/02/2014 00:03 (GMT+01:00)
> A: Viktor Kleinik
> Cc: Stefano Stabellini ,Dario Faggioli ,xen-devel@lists.xen.org,Arianna Avanzini
> ,Julien Grall
> Oggetto: Re: [Xen-devel] ARM: access to iomem and HW IRQ
> 
> 
> On Feb 27, 2014, at 1:10 PM, "Viktor Kleinik" <viktor.kleinik@globallogic.com
> <mailto:viktor.kleinik@globallogic.com>> wrote:
> 
>> Thank you all for your responses.
>>
>> I will try those changes on our platform.
>> Are you planning push the implementation of
>> xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
>> xc_physdev_map_pirq/PHYSDEVOP_map_pirq hypercalls into
>> official Xen release?
>>
>> Regards,
>> Victor
>>
> I don't expect to push the changes up. If you want to submit, please go ahead. 
>>
>> On Thu, Feb 27, 2014 at 2:11 PM, Eric Trudeau <etrudeau@broadcom.com
>> <mailto:etrudeau@broadcom.com>> wrote:
>>
>>     > -----Original Message-----
>>     > From: Stefano Stabellini [mailto:stefano.stabellini@eu.citrix.com
>>     <mailto:stefano.stabellini@eu.citrix.com>]
>>     > Sent: Thursday, February 27, 2014 8:16 AM
>>     > To: Dario Faggioli
>>     > Cc: Viktor Kleinik; xen-devel@lists.xen.org
>>     <mailto:xen-devel@lists.xen.org>; Arianna Avanzini; Stefano Stabellini;
>>     > Julien Grall; Eric Trudeau
>>     > Subject: Re: [Xen-devel] ARM: access to iomem and HW IRQ
>>     >
>>     > On Thu, 27 Feb 2014, Dario Faggioli wrote:
>>     > > On mer, 2014-02-26 at 15:43 +0000, Viktor Kleinik wrote:
>>     > > > Hi all,
>>     > > >
>>     > > Hi,
>>     > >
>>     > > > Does anyone knows something about future plans to implement
>>     > > > xc_domain_memory_mapping/XEN_DOMCTL_memory_mapping and
>>     > > > xc_physdev_map_pirq/PHYSDEVOP_map_pirq calls for ARM?
>>     > > >
>>     > > I think Arianna is working on an implementation of the former
>>     > > (XEN_DOMCTL_memory_mapping), and she should be sending patches to this
>>     > > list soon, isn't it so, Arianna?
>>     >
>>     > Eric Trudeau did some work in the area too:
>>     >
>>     > http://marc.info/?l=xen-devel&m=137338996422503
>>     > http://marc.info/?l=xen-devel&m=137365750318936
>>
>>     I checked our repo and the route IRQ changes to DomUs in the second patch
>>     URL Stefano provided below are up-to-date with what we have been using on
>>     our platforms.  We made no further changes after that patch, i.e. we left
>>     the 100 msec max wait for a domain to finish an ISR when destroying it.
>>
>>     We also added support for a DomU to map in I/O memory with the iomem
>>     configuration parameter.  Unfortunately, I don't have time to provide an
>>     official patch on recent Xen upstream code due to time constraints, but
>>     below is a patch based on last October, :( , commit
>>     d70d87d2ccf93e3d5302bb034c0a1ae1d6fc1d29.
>>     I hope this is helpful, because that is the best I can do at this time.
>>
>>     -----------------
>>
>>     tools/libxl/libxl_create.c |  5 +++--
>>      xen/arch/arm/domctl.c      | 74
>>     +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
>>      2 files changed, 76 insertions(+), 3 deletions(-)
>>
>>     diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
>>     index 1b320d3..53ed52e 100644
>>     --- a/tools/libxl/libxl_create.c
>>     +++ b/tools/libxl/libxl_create.c
>>     @@ -976,8 +976,9 @@ static void domcreate_launch_dm(libxl__egc *egc,
>>     libxl__multidev *multidev,
>>              LOG(DEBUG, "dom%d iomem %"PRIx64"-%"PRIx64,
>>                  domid, io->start, io->start + io->number - 1);
>>
>>     -        ret = xc_domain_iomem_permission(CTX->xch, domid,
>>     -                                          io->start, io->number, 1);
>>     +        ret = xc_domain_memory_mapping(CTX->xch, domid,
>>     +                                       io->start, io->start,
>>     +                                       io->number, 1);
>>              if (ret < 0) {
>>                  LOGE(ERROR,
>>                       "failed give dom%d access to iomem range %"PRIx64"-%"PRIx64,
>>     diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>>     index 851ee40..222aac9 100644
>>     --- a/xen/arch/arm/domctl.c
>>     +++ b/xen/arch/arm/domctl.c
>>     @@ -10,11 +10,83 @@
>>      #include <xen/errno.h>
>>      #include <xen/sched.h>
>>      #include <public/domctl.h>
>>     +#include <xen/iocap.h>
>>     +#include <xsm/xsm.h>
>>     +#include <xen/paging.h>
>>     +#include <xen/guest_access.h>
>>
>>      long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>                          XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>      {
>>     -    return -ENOSYS;
>>     +    long ret = 0;
>>     +    bool_t copyback = 0;
>>     +
>>     +    switch ( domctl->cmd )
>>     +    {
>>     +    case XEN_DOMCTL_memory_mapping:
>>     +    {
>>     +        unsigned long gfn = domctl->u.memory_mapping.first_gfn;
>>     +        unsigned long mfn = domctl->u.memory_mapping.first_mfn;
>>     +        unsigned long nr_mfns = domctl->u.memory_mapping.nr_mfns;
>>     +        int add = domctl->u.memory_mapping.add_mapping;
>>     +
>>     +        /* removing i/o memory is not implemented yet */
>>     +        if (!add) {
>>     +            ret = -ENOSYS;
>>     +            break;
>>     +        }
>>     +        ret = -EINVAL;
>>     +        if ( (mfn + nr_mfns - 1) < mfn || /* wrap? */
>>     +             /* x86 checks wrap based on paddr_bits which is not
>>     implemented on ARM? */
>>     +             /* ((mfn | (mfn + nr_mfns - 1)) >> (paddr_bits -
>>     PAGE_SHIFT)) || */
>>     +             (gfn + nr_mfns - 1) < gfn ) /* wrap? */
>>     +            break;
>>     +
>>     +        ret = -EPERM;
>>     +        if ( current->domain->domain_id != 0 )
>>     +            break;
>>     +
>>     +        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, add);
>>     +        if ( ret )
>>     +            break;
>>     +
>>     +        if ( add )
>>     +        {
>>     +            printk(XENLOG_G_INFO
>>     +                   "memory_map:add: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>>     +                   d->domain_id, gfn, mfn, nr_mfns);
>>     +
>>     +            ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
>>     +            if ( !ret && paging_mode_translate(d) )
>>     +            {
>>     +                ret = map_mmio_regions(d, gfn << PAGE_SHIFT,
>>     +                                       (gfn + nr_mfns) << PAGE_SHIFT,
>>     +                                       mfn << PAGE_SHIFT);
>>     +                if ( ret )
>>     +                {
>>     +                    printk(XENLOG_G_WARNING
>>     +                           "memory_map:fail: dom%d gfn=%lx mfn=%lx nr=%lx\n",
>>     +                           d->domain_id, gfn, mfn, nr_mfns);
>>     +                    if ( iomem_deny_access(d, mfn, mfn + nr_mfns - 1) &&
>>     +                         is_hardware_domain(current->domain) )
>>     +                        printk(XENLOG_ERR
>>     +                               "memory_map: failed to deny dom%d access
>>     to [%lx,%lx]\n",
>>     +                               d->domain_id, mfn, mfn + nr_mfns - 1);
>>     +                }
>>     +            }
>>     +        }
>>     +    }
>>     +    break;
>>     +
>>     +    default:
>>     +        ret = -ENOSYS;
>>     +        break;
>>     +    }
>>     +
>>     +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
>>     +        ret = -EFAULT;
>>     +
>>     +    return ret;
>>      }
>>
>>      void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
>>


-- 
/*
 * Arianna Avanzini
 * avanzini.arianna@gmail.com
 * 73628@studenti.unimore.it
 */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 10:02:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 10:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJKGq-0000YB-Bu; Fri, 28 Feb 2014 10:02:20 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <arnd@arndb.de>) id 1WJKGo-0000Y5-Fu
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 10:02:18 +0000
Received: from [85.158.143.35:49996] by server-2.bemta-4.messagelabs.com id
	0A/6D-04779-9AE50135; Fri, 28 Feb 2014 10:02:17 +0000
X-Env-Sender: arnd@arndb.de
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393581736!8988362!1
X-Originating-IP: [212.227.17.13]
X-SpamReason: No, hits=0.2 required=7.0 tests=RCVD_ILLEGAL_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23480 invoked from network); 28 Feb 2014 10:02:16 -0000
Received: from moutng.kundenserver.de (HELO moutng.kundenserver.de)
	(212.227.17.13)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES128-SHA encrypted
	SMTP; 28 Feb 2014 10:02:16 -0000
Received: from wuerfel.localnet
	(HSI-KBW-5-56-226-176.hsi17.kabel-badenwuerttemberg.de
	[5.56.226.176])
	by mrelayeu.kundenserver.de (node=mreue103) with ESMTP (Nemesis)
	id 0MbsGc-1WcV2l2B8w-00JGdr; Fri, 28 Feb 2014 11:01:53 +0100
From: Arnd Bergmann <arnd@arndb.de>
To: Alexander Graf <agraf@suse.de>
Date: Fri, 28 Feb 2014 11:01:50 +0100
Message-ID: <7825615.Ak6TOGibTE@wuerfel>
User-Agent: KMail/4.11.3 (Linux/3.11.0-15-generic; KDE/4.11.3; x86_64; ; )
In-Reply-To: <CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
References: <20140226183454.GA14639@cbox> <6039706.L1ZWvgjHF0@wuerfel>
	<CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
MIME-Version: 1.0
X-Provags-ID: V02:K0:oCZ56teXMa1XMHNhjhXdo5w6eyYVYyTUPJ+JwCnt63J
	ejSDhqmktDvjeGMVenfUUrpazgByIGkj0drnFrqdQkUb43boox
	X+w4dRQQ5RLdigTBVr/FDNaQNltLocNUc/IlEm1J+qKPQyXq7J
	YOPPY0hcdlLN/YtTLbao+Avk37NF5Uk1gfFwXdOl/NuiDxbubt
	nV4hIdGoKzSmCC7K9R1NUlOEgfqlXAubbDjZiTKoiKFYHiBZkk
	fW3l1HbDcYSjObsFJTr9cFtxVqlg2DC67nn8yQl9x2EVxDElfK
	9eloZ01dlqq9EJCKZ/np6K7VJ50fO2pEkQjN5gF7Z3OXmHvtaU
	5ua8OMleYIigFN5+M3/w=
Cc: Peter Maydell <peter.maydell@linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Friday 28 February 2014 08:05:15 Alexander Graf wrote: 
> > Am 28.02.2014 um 03:56 schrieb Arnd Bergmann <arnd@arndb.de>:
> >> On Thursday 27 February 2014 22:24:13 Alexander Graf wrote:
>
> > When you have that, why do you still care about a
> > system specification? 
> 
> Because I don't want to go back to the system level definition. To me a 
> peripheral is a peripheral - regardless of whether it is on a platform 
> bus or a PCI bus. I want to leverage common ground and only add the few 
> pieces that diverge from it.

You may be missing a lot of the complexity that describing platform devices
in the general case brings then. To pass through an ethernet controller, you may 
also need to add (any subset of)

* phy device
* clock controller
* voltage regulator
* gpio controller
* LED controller
* DMA engine
* an MSI irqchip
* IOMMU

Each of the above in turn is shared with other peripherals on the host,
which brings you to three options:

* Change the driver to not depend on the above, but instead support an
  abstract virtualized version of the platform device that doesn't
  need them.
* Pass through all devices this one depends on, giving up on guest
  isolation. This may work for some embedded use cases, but not
  for running untrusted guests.
* Implement virtualized versions of the other interfaces and make
  the hypervisor talk to the real hardware.

I would still argue that each of those approaches is out of scope for
this specification.

> > Going back to the previous argument, since the hypervisor has to make up the
> > description for the platform device itself, it won't matter whether the host
> > is booted using DT or ACPI: the description that the hypervisor makes up for
> > the guest has to match what the hypervisor uses to describe the rest of the
> > guest environment, which is independent of what the host uses.
> 
> I agree 100%. This spec should be completely independent of the host.
> 
> The reason I brought up the host is that if you want to migrate an OS from
> physical to virtual, you may need to pass through devices to make this work.
> If your host driver developers only ever worked with ACPI, there's a good
> chance the drivers won't work in a pure dt environment.
> 
> Btw, the same argument applies the other way around as well. I don't believe
> we will get around with generating and mandating a single machine
> description environment.

I see those two cases as completely distinct. There are good reasons
for emulating a real machine for running a guest image that expects
certain hardware, and you can easily do this with qemu. But since you
are emulating an existing platform and run an existing OS, you don't
need a VM System Specification, you just do whatever the platform would
normally do that the guest relies on.

The VM system specification on the other hand should allow you to run
any OS that is written to support this specification on any hypervisor
that implements it.

> >> Replace Windows by "Linux with custom drivers" and you're in the same
> >> situation even when you neglect Windows. Reality will be that we will
> >> have fdt and acpi based systems.
> > 
> > We will however want to boot all sorts of guests in a standardized
> > virtual environment:
> > 
> > * 32 bit Linux (since some distros don't support biarch or multiarch
> >  on arm64) for running applications that are either binary-only
> >  or not 64-bit safe.
> > * 32-bit Android
> > * big-endian Linux for running applications that are not endian-clean
> >  (typically network stuff ported from powerpc or mipseb).
> > * OS/v guests
> > * NOMMU Linux
> > * BSD based OSs
> > * QNX
> > * random other RTOSs
> > 
> > Most of these will not work with ACPI, or at least not in 32-bit mode.
> > 64-bit Linux will obviously support both DT (always)
> 
> Unfortunately not
> 
> > and ACPI (optionally),
> > depending on the platform, but for a specification like this, I think
> > it's much easier to support fewer options, to make it easier for other
> > guest OSs to ensure they actually run on any compliant hypervisor.
> 
> You're forgetting a few pretty important cases here:
> 
> * Enterprise grade Linux distribution that only supports ACPI

You can't actually turn off DT support in the kernel, and I don't think
there is any point in patching the kernel to remove it. The only sane
thing the enterprise distros can do is turn on the "SBSA" platform
that supports all compliant machines running ACPI, but turn off all
platforms that are not SBSA compliant and boot using DT. With the way
that the VM spec is written at this point, Linux will still boot on
these guests, since the hardware support is a subset of SBSA, with the
addition of a few drivers for hypervisor specific features.

> * Maybe WinRT if we can convince MS to use it

I'd argue that would be unlikely.

> * Non-Linux with x86/ia64 heritage and thus ACPI support

That assumes that x86 ACPI support is anything like ARM64 ACPI
support, which it really isn't. In particular, you can turn off
ACPI on any x86 machine and it will still work for the most
part, while on ARM64 we will need to use ACPI to describe even
the most basic aspects of the platform.

	Arnd

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 10:35:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 10:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJKn9-000179-Lw; Fri, 28 Feb 2014 10:35:43 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJKn7-000174-F7
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 10:35:42 +0000
Received: from [193.109.254.147:25909] by server-4.bemta-14.messagelabs.com id
	AE/A7-32066-C7660135; Fri, 28 Feb 2014 10:35:40 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393583738!3772405!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 6733 invoked from network); 28 Feb 2014 10:35:39 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 10:35:39 -0000
X-IronPort-AV: E=Sophos;i="4.97,561,1389744000"; d="scan'208";a="106575235"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 10:35:37 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 05:35:36 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJKn2-0000ZH-RN;
	Fri, 28 Feb 2014 10:35:36 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJKn2-0007I6-HR;
	Fri, 28 Feb 2014 10:35:36 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25324-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 10:35:36 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25324: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25324 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25324/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-oldkern            4 xen-build        fail in 25317 REGR. vs. 25266

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-winxpsp3 10 guest-localmigrate          fail pass in 25321
 test-amd64-i386-xl-qemut-win7-amd64  7 windows-install      fail pass in 25317
 test-amd64-amd64-xl-win7-amd64 13 guest-localmigrate/x10 fail in 25321 pass in 25324
 test-amd64-amd64-xl-qemuu-win7-amd64 7 windows-install fail in 25321 pass in 25324

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop            fail in 25321 never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25317 never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop            fail in 25321 never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25317 never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 10:46:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 10:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJKxA-0001Mq-0u; Fri, 28 Feb 2014 10:46:04 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJKx8-0001Ml-8X
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 10:46:02 +0000
Received: from [85.158.139.211:63198] by server-3.bemta-5.messagelabs.com id
	BD/82-13671-9E860135; Fri, 28 Feb 2014 10:46:01 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393584359!6466007!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14475 invoked from network); 28 Feb 2014 10:46:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 10:46:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,561,1389744000"; d="scan'208";a="104962023"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 10:45:58 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 05:45:57 -0500
Message-ID: <531068E4.6040005@citrix.com>
Date: Fri, 28 Feb 2014 10:45:56 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1393513675-31038-1-git-send-email-david.vrabel@citrix.com>
	<20140228023808.GD7114@localhost.localdomain>
In-Reply-To: <20140228023808.GD7114@localhost.localdomain>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCHv3] x86/xen: allow privcmd hypercalls to be
	preempted
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 02:38, Konrad Rzeszutek Wilk wrote:
> On 32-bit builds I get:
> 
> arch/x86/built-in.o: In function `xen_do_upcall':
> /home/konrad/ssd/konrad/linux/arch/x86/kernel/entry_32.S:1016: undefined
> reference to `TI_preempt_count'

Sorry, I forward-ported the 32-bit code from the 3.10 kernel I was
using for testing in XenServer.

8<-----------------------------------------------------------
x86/xen: allow privcmd hypercalls to be preempted

From: David Vrabel <david.vrabel@citrix.com>

Hypercalls submitted by user space tools via the privcmd driver can
take a long time (potentially many 10s of seconds) if the hypercall
has many sub-operations.

A fully preemptible kernel may deschedule such a task in any upcall
called from a hypercall continuation.

However, in a kernel with voluntary or no preemption, hypercall
continuations in Xen allow event handlers to be run but the task
issuing the hypercall will not be descheduled until the hypercall is
complete and the ioctl returns to user space.  These long running
tasks may also trigger the kernel's soft lockup detection.

Add xen_preemptible_hcall_begin() and xen_preemptible_hcall_end() to
bracket hypercalls that may be preempted.  Use these in the privcmd
driver.

When returning from an upcall, call preempt_schedule_irq() if the
current task was within a preemptible hypercall.

Since preempt_schedule_irq() can move the task to a different CPU,
clear and set xen_in_preemptible_hcall around the call.

Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
Changes in v4:
- Fix 32-bit build.

Changes in v3:
- Export xen_in_preemptible_hcall (to fix modular privcmd driver).

Changes in v2:
- Use per-cpu variable to mark preemptible regions
- Call preempt_schedule_irq() from the correct place in
  xen_hypervisor_callback
---
 arch/x86/kernel/entry_32.S |   23 +++++++++++++++++++++++
 arch/x86/kernel/entry_64.S |   19 +++++++++++++++++++
 drivers/xen/Makefile       |    2 +-
 drivers/xen/preempt.c      |   17 +++++++++++++++++
 drivers/xen/privcmd.c      |    2 ++
 include/xen/xen-ops.h      |   27 +++++++++++++++++++++++++++
 6 files changed, 89 insertions(+), 1 deletions(-)
 create mode 100644 drivers/xen/preempt.c

diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S
index a2a4f46..024578d 100644
--- a/arch/x86/kernel/entry_32.S
+++ b/arch/x86/kernel/entry_32.S
@@ -998,7 +998,30 @@ ENTRY(xen_hypervisor_callback)
 ENTRY(xen_do_upcall)
 1:	mov %esp, %eax
 	call xen_evtchn_do_upcall
+#ifdef CONFIG_PREEMPT
 	jmp  ret_from_intr
+#else
+	GET_THREAD_INFO(%ebp)
+#ifdef CONFIG_VM86
+	movl PT_EFLAGS(%esp), %eax	# mix EFLAGS and CS
+	movb PT_CS(%esp), %al
+	andl $(X86_EFLAGS_VM | SEGMENT_RPL_MASK), %eax
+#else
+	movl PT_CS(%esp), %eax
+	andl $SEGMENT_RPL_MASK, %eax
+#endif
+	cmpl $USER_RPL, %eax
+	jae resume_userspace		# returning to v8086 or userspace
+	DISABLE_INTERRUPTS(CLBR_ANY)
+	cmpl $0,PER_CPU_VAR(__preempt_count)
+	jnz resume_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz resume_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp resume_kernel
+#endif /* CONFIG_PREEMPT */
 	CFI_ENDPROC
 ENDPROC(xen_hypervisor_callback)
 
diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
index 1e96c36..d8f4fd8 100644
--- a/arch/x86/kernel/entry_64.S
+++ b/arch/x86/kernel/entry_64.S
@@ -1404,7 +1404,25 @@ ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
 	popq %rsp
 	CFI_DEF_CFA_REGISTER rsp
 	decl PER_CPU_VAR(irq_count)
+#ifdef CONFIG_PREEMPT
 	jmp  error_exit
+#else
+	movl %ebx, %eax
+	RESTORE_REST
+	DISABLE_INTERRUPTS(CLBR_NONE)
+	TRACE_IRQS_OFF
+	GET_THREAD_INFO(%rcx)
+	testl %eax, %eax
+	je error_exit_user
+	cmpl $0,PER_CPU_VAR(__preempt_count)
+	jnz retint_kernel
+	cmpb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jz retint_kernel
+	movb $0,PER_CPU_VAR(xen_in_preemptible_hcall)
+	call preempt_schedule_irq
+	movb $1,PER_CPU_VAR(xen_in_preemptible_hcall)
+	jmp retint_kernel
+#endif
 	CFI_ENDPROC
 END(xen_do_hypervisor_callback)
 
@@ -1629,6 +1647,7 @@ ENTRY(error_exit)
 	GET_THREAD_INFO(%rcx)
 	testl %eax,%eax
 	jne retint_kernel
+error_exit_user:
 	LOCKDEP_SYS_EXIT_IRQ
 	movl TI_flags(%rcx),%edx
 	movl $_TIF_WORK_MASK,%edi
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 45e00af..6b867e9 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -2,7 +2,7 @@ ifeq ($(filter y, $(CONFIG_ARM) $(CONFIG_ARM64)),)
 obj-$(CONFIG_HOTPLUG_CPU)		+= cpu_hotplug.o
 endif
 obj-$(CONFIG_X86)			+= fallback.o
-obj-y	+= grant-table.o features.o balloon.o manage.o
+obj-y	+= grant-table.o features.o balloon.o manage.o preempt.o
 obj-y	+= events/
 obj-y	+= xenbus/
 
diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
new file mode 100644
index 0000000..b5a3e98
--- /dev/null
+++ b/drivers/xen/preempt.c
@@ -0,0 +1,17 @@
+/*
+ * Preemptible hypercalls
+ *
+ * Copyright (C) 2014 Citrix Systems R&D ltd.
+ *
+ * This source code is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ */
+
+#include <xen/xen-ops.h>
+
+#ifndef CONFIG_PREEMPT
+DEFINE_PER_CPU(bool, xen_in_preemptible_hcall);
+EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
+#endif
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 569a13b..59ac71c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -56,10 +56,12 @@ static long privcmd_ioctl_hypercall(void __user *udata)
 	if (copy_from_user(&hypercall, udata, sizeof(hypercall)))
 		return -EFAULT;
 
+	xen_preemptible_hcall_begin();
 	ret = privcmd_call(hypercall.op,
 			   hypercall.arg[0], hypercall.arg[1],
 			   hypercall.arg[2], hypercall.arg[3],
 			   hypercall.arg[4]);
+	xen_preemptible_hcall_end();
 
 	return ret;
 }
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index fb2ea8f..6d8c042 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -35,4 +35,31 @@ int xen_unmap_domain_mfn_range(struct vm_area_struct *vma,
 			       int numpgs, struct page **pages);
 
 bool xen_running_on_version_or_later(unsigned int major, unsigned int minor);
+
+#ifdef CONFIG_PREEMPT
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+}
+
+#else
+
+DECLARE_PER_CPU(bool, xen_in_preemptible_hcall);
+
+static inline void xen_preemptible_hcall_begin(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, true);
+}
+
+static inline void xen_preemptible_hcall_end(void)
+{
+	__this_cpu_write(xen_in_preemptible_hcall, false);
+}
+
+#endif /* CONFIG_PREEMPT */
+
 #endif /* INCLUDE_XEN_OPS_H */
-- 
1.7.2.5


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 10:47:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 10:47:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJKy2-0001QL-LD; Fri, 28 Feb 2014 10:46:58 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJKy0-0001Q1-UW
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 10:46:57 +0000
Received: from [85.158.139.211:16967] by server-16.bemta-5.messagelabs.com id
	1D/ED-05060-02960135; Fri, 28 Feb 2014 10:46:56 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393584414!25655!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 12718 invoked from network); 28 Feb 2014 10:46:55 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-2.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 10:46:55 -0000
X-IronPort-AV: E=Sophos;i="4.97,561,1389744000"; 
	d="scan'208,223";a="104962133"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 10:46:53 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 05:46:53 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJKxx-0003KS-53;
	Fri, 28 Feb 2014 10:46:53 +0000
Date: Fri, 28 Feb 2014 10:46:53 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: =?utf-8?B?6Z+p6Imz5Lic?= <hanyandong@iie.ac.cn>
Message-ID: <20140228104653.GK16241@zion.uk.xensource.com>
References: <1b24870.6a6c.14477c87837.Coremail.hanyandong@iie.ac.cn>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1b24870.6a6c.14477c87837.Coremail.hanyandong@iie.ac.cn>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, wei.liu2@citrix.com
Subject: Re: [Xen-devel] intercept and capture fast system call of linux
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From the look of the code, I think you're playing with Ether plus some
extra stuff (VMI etc.). You probably need to talk to the Ether developers?

Obviously it is impossible for us to second-guess what all these
functions do.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 11:04:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 11:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJLEI-0002AK-WE; Fri, 28 Feb 2014 11:03:47 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <berrange@redhat.com>) id 1WJLEG-0002AE-Uq
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 11:03:45 +0000
Received: from [193.109.254.147:62014] by server-9.bemta-14.messagelabs.com id
	4F/93-24895-01D60135; Fri, 28 Feb 2014 11:03:44 +0000
X-Env-Sender: berrange@redhat.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393585422!2177894!1
X-Originating-IP: [209.132.183.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA5LjEzMi4xODMuMjggPT4gNTQwNjQ=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18975 invoked from network); 28 Feb 2014 11:03:43 -0000
Received: from mx1.redhat.com (HELO mx1.redhat.com) (209.132.183.28)
	by server-11.tower-27.messagelabs.com with SMTP;
	28 Feb 2014 11:03:43 -0000
Received: from int-mx11.intmail.prod.int.phx2.redhat.com
	(int-mx11.intmail.prod.int.phx2.redhat.com [10.5.11.24])
	by mx1.redhat.com (8.14.4/8.14.4) with ESMTP id s1SB3Ym2032329
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 06:03:34 -0500
Received: from redhat.com (dhcp-1-236.lcy.redhat.com [10.32.224.236])
	by int-mx11.intmail.prod.int.phx2.redhat.com (8.14.4/8.14.4) with ESMTP
	id s1SB3UC7005724
	(version=TLSv1/SSLv3 cipher=AES128-SHA bits=128 verify=NO);
	Fri, 28 Feb 2014 06:03:32 -0500
Date: Fri, 28 Feb 2014 11:03:30 +0000
From: "Daniel P. Berrange" <berrange@redhat.com>
To: Ian Campbell <ian.campbell@citrix.com>
Message-ID: <20140228110329.GB17909@redhat.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
	<20140226140033.GA15045@aepfle.de>
	<1393426513.9640.18.camel@dagon.hellion.org.uk>
	<20140226150100.GC6046@redhat.com>
	<1393553550.20365.5.camel@hastur.hellion.org.uk>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393553550.20365.5.camel@hastur.hellion.org.uk>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Scanned-By: MIMEDefang 2.68 on 10.5.11.24
Cc: Olaf Hering <olaf@aepfle.de>, stefano.stabellini@eu.citrix.com,
	libvir-list@redhat.com, julien.grall@linaro.org, tim@xen.org,
	xen-devel@lists.xen.org, Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
Reply-To: "Daniel P. Berrange" <berrange@redhat.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 02:12:30AM +0000, Ian Campbell wrote:
> On Wed, 2014-02-26 at 15:01 +0000, Daniel P. Berrange wrote:
> > Yep, if ARM has a PV console, then we'd need to add a tiny bit to the XML
> > to allow us to configure that explicitly, similar to how we do for KVM's
> > virtio-console support.
> 
> Do you mean I need to add something to the XML config snippet, or I need
> to add some special handling in the XML parser/consumer?
> 
> I've grepped around the virtio-console stuff and I'm none the wiser.

Oops, yes, I should have explained this better, since our docs here are
about as clear as mud.

With traditional x86 paravirt Xen, we just have the plain paravirt console
devices

    <console type='pty'>
      <target type='xen'/>
    </console>

With x86 fullvirt Xen/KVM/QEMU, the console type just defaults to being
a serial port, so you would usually just add

    <serial type='pty'>
    </serial>

and then libvirt would automatically add a <console> with

    <console type='pty'>
      <target type='serial'/>
    </console>


With x86 fullvirt KVM, we also have support for virtio, which is
done using

    <console type='pty'>
      <target type='virtio'/>
    </console>


So actually this leads me to ask what kind of console Arm fullvirt Xen
guests actually have? If they just use the traditional Xen paravirt
console, then we just need to make sure that this works for them by
default:

    <console type='pty'>
      <target type='xen'/>
    </console>


If there's a different type of console device that's not related to
the Xen paravirt console device, then we'd need to invent a new
<target type='xxx'/> value for Arm.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 11:32:49 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 11:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJLg6-0002cw-NQ; Fri, 28 Feb 2014 11:32:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJLg5-0002cr-1R
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 11:32:29 +0000
Received: from [85.158.143.35:23112] by server-1.bemta-4.messagelabs.com id
	D3/7F-31661-CC370135; Fri, 28 Feb 2014 11:32:28 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393587146!9015818!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30922 invoked from network); 28 Feb 2014 11:32:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 11:32:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,561,1389744000"; d="scan'208";a="106585498"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 11:32:26 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.78) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 06:32:25 -0500
Received: from mariner.cam.xci-test.com ([10.80.2.22]
	helo=mariner.uk.xensource.com)	by norwich.cam.xci-test.com with esmtp
	(Exim
	4.72)	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WJLg1-0000qK-8d;
	Fri, 28 Feb 2014 11:32:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.80)
	(envelope-from <Ian.Jackson@eu.citrix.com>)	id 1WJLg1-0003TD-04;
	Fri, 28 Feb 2014 11:32:25 +0000
From: Ian Jackson <Ian.Jackson@eu.citrix.com>
MIME-Version: 1.0
Message-ID: <21264.29640.728284.413355@mariner.uk.xensource.com>
Date: Fri, 28 Feb 2014 11:32:24 +0000
To: Ian Campbell <ian.campbell@citrix.com>
In-Reply-To: <1393559055.27819.2.camel@hastur.hellion.org.uk>
References: <osstest-25315-mainreport@xen.org>
	<21263.21387.343462.630238@mariner.uk.xensource.com>
	<1393559055.27819.2.camel@hastur.hellion.org.uk>
X-Mailer: VM 8.1.0 under 23.4.1 (i486-pc-linux-gnu)
X-DLP: MIA1
Cc: xen-devel@lists.xensource.com, Jan Beulich <JBeulich@suse.com>
Subject: Re: [Xen-devel] [xen-unstable test] 25315: regressions - trouble:
 blocked/broken/fail/pass
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Ian Campbell writes ("Re: [xen-unstable test] 25315: regressions - trouble: blocked/broken/fail/pass"):
> On Thu, 2014-02-27 at 15:02 +0000, Ian Jackson wrote:
> > Our network infrastructure problems have become intolerable.
> 
> :-/
> 
> I think for this particular hg tree it would be tolerable for osstest to
> clone from a local hg mirror updated via a daily cronjob. That tree is
> pretty slow moving.
> 
> Once the initial clone to the mirror has completed then the incremental
> updates ought to be fast enough to avoid the network timeouts.

I've been chasing this with our IT people.  I think we should see how
that goes for a bit longer before going to the effort of setting up a
mirror like that.
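
The cronjob-driven local mirror suggested above could be as simple as the
following sketch (the tree URL and paths are hypothetical placeholders, not
osstest's actual layout):

```shell
# One-time seed of the local mirror; the slow initial clone happens once.
hg clone http://example.org/slow-remote-tree.hg /srv/hg-mirror/tree.hg

# /etc/cron.d/hg-mirror: refresh daily; incremental pulls are small,
# so transient network timeouts are far less likely to bite.
0 4 * * * mirror hg --cwd /srv/hg-mirror/tree.hg pull -u
```

osstest jobs would then clone from /srv/hg-mirror/tree.hg instead of the
remote tree.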

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 11:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 11:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJM1H-000317-Ti; Fri, 28 Feb 2014 11:54:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordongong0350@gmail.com>) id 1WJM1F-000312-Op
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 11:54:22 +0000
Received: from [85.158.143.35:59325] by server-2.bemta-4.messagelabs.com id
	D0/43-04779-DE870135; Fri, 28 Feb 2014 11:54:21 +0000
X-Env-Sender: gordongong0350@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393588457!9025445!1
X-Originating-IP: [209.85.160.66]
X-SpamReason: No, hits=2.2 required=7.0 tests=HTML_FONT_LOW_CONTRAST,
	HTML_MESSAGE, MAILTO_TO_SPAM_ADDR, MIME_BASE64_TEXT, MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23000 invoked from network); 28 Feb 2014 11:54:19 -0000
Received: from mail-pb0-f66.google.com (HELO mail-pb0-f66.google.com)
	(209.85.160.66)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 11:54:19 -0000
Received: by mail-pb0-f66.google.com with SMTP id md12so357760pbc.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Feb 2014 03:54:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:mime-version:message-id:content-type;
	bh=Wb87gyb6dAA7rnkeUBpLAo2hKn9v4YQ55fhf56wXlQY=;
	b=kKOj0hOeql5dvH/xSUV6Q/k5z12GdMD60SRIKxGbU2Aq4RRZF5yWUggn+5qz6j8RjI
	0clDZC4PqwVdaAgjXVbv4FQ1730rET3nxTrOgU7MIKoZjxKPz32yD66vQCyph2BGt7NS
	dsn4OAs0fWXmNOIXeMSVAQN807qXdDjjr/nkHfDDavww2gLzp1/jorl72WGgSu/yF383
	FE2JlXID9E6mg6LuJ2r6djQLJ3vd6gJS/z/mrXxnkzKe+hSVMUQOPvANeGvPwTbXPxbB
	DTL0jQWtX2IxP2IGsUYAgiObc8jZj3sSzMcikdTfX1x2KEEiA5xrDXXL4rk6dog5kZIW
	RWSQ==
X-Received: by 10.68.34.6 with SMTP id v6mr2967281pbi.61.1393588457178;
	Fri, 28 Feb 2014 03:54:17 -0800 (PST)
Received: from gordon-PC ([183.17.52.174])
	by mx.google.com with ESMTPSA id xs1sm12123961pac.7.2014.02.28.03.54.12
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Feb 2014 03:54:16 -0800 (PST)
Date: Fri, 28 Feb 2014 19:54:21 +0800
From: "gordongong0350@gmail.com" <gordongong0350@gmail.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-GUID: 8B4EA011-C9CB-4E17-8377-A4DF050480EC
X-Has-Attach: no
X-Mailer: Foxmail 7, 2, 0, 111[cn]
Mime-Version: 1.0
Message-ID: <2014022819541754090812@gmail.com>
Cc: "ian.jackson" <ian.jackson@eu.citrix.com>,
	"ian.campbell" <ian.campbell@citrix.com>,
	"stefano.stabellini" <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH1/1] Fix vreq with error of -EBUSY fails 100
	times too soon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6032888578548417215=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============6032888578548417215==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart124380706250_=----"

This is a multi-part message in MIME format.

------=_001_NextPart124380706250_=----
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Fix vreq with error of -EBUSY fails 100 times too soon, so that EIO is
returned to the td device layer. If this td device request is metadata of
the filesystem, the result is not good at all.

To reproduce it with "./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m -i 0 -i 1
-f /data/iozone_test.tmp", the result is an input/output error and iozone
is stopped.

From ccc0ce3aede1f5c920036847ddb16b25969be7ba Mon Sep 17 00:00:00 2001
From: Xiaodong Gong <gordongong0350@gmail.com>
Date: Fri, 28 Feb 2014 03:01:19 -0500
Subject: [PATCH] fix vreq with error of -EBUSY fails 100 times too soon,
 return EIO to td device layer.

1) All the free list is consumed by vreqs in one or two blocks that are set
   in the bat entry.
2) A vreq in the fail queue gets a chance to run again quickly, such as 90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 ++++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                     submitting;
 	int                     secs_pending;
 	int                     num_retries;
+	int                     busy_looping;
 	struct timeval          last_try;
 
 	td_vbd_t               *vbd;
-- 
1.8.3.1

gordongong0350@gmail.com

------=_001_NextPart124380706250_=------



--===============6032888578548417215==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6032888578548417215==--



From xen-devel-bounces@lists.xen.org Fri Feb 28 11:54:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 11:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJM1H-000317-Ti; Fri, 28 Feb 2014 11:54:23 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordongong0350@gmail.com>) id 1WJM1F-000312-Op
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 11:54:22 +0000
Received: from [85.158.143.35:59325] by server-2.bemta-4.messagelabs.com id
	D0/43-04779-DE870135; Fri, 28 Feb 2014 11:54:21 +0000
X-Env-Sender: gordongong0350@gmail.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393588457!9025445!1
X-Originating-IP: [209.85.160.66]
X-SpamReason: No, hits=2.2 required=7.0 tests=HTML_FONT_LOW_CONTRAST,
	HTML_MESSAGE, MAILTO_TO_SPAM_ADDR, MIME_BASE64_TEXT, MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23000 invoked from network); 28 Feb 2014 11:54:19 -0000
Received: from mail-pb0-f66.google.com (HELO mail-pb0-f66.google.com)
	(209.85.160.66)
	by server-16.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 11:54:19 -0000
Received: by mail-pb0-f66.google.com with SMTP id md12so357760pbc.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Feb 2014 03:54:17 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:mime-version:message-id:content-type;
	bh=Wb87gyb6dAA7rnkeUBpLAo2hKn9v4YQ55fhf56wXlQY=;
	b=kKOj0hOeql5dvH/xSUV6Q/k5z12GdMD60SRIKxGbU2Aq4RRZF5yWUggn+5qz6j8RjI
	0clDZC4PqwVdaAgjXVbv4FQ1730rET3nxTrOgU7MIKoZjxKPz32yD66vQCyph2BGt7NS
	dsn4OAs0fWXmNOIXeMSVAQN807qXdDjjr/nkHfDDavww2gLzp1/jorl72WGgSu/yF383
	FE2JlXID9E6mg6LuJ2r6djQLJ3vd6gJS/z/mrXxnkzKe+hSVMUQOPvANeGvPwTbXPxbB
	DTL0jQWtX2IxP2IGsUYAgiObc8jZj3sSzMcikdTfX1x2KEEiA5xrDXXL4rk6dog5kZIW
	RWSQ==
X-Received: by 10.68.34.6 with SMTP id v6mr2967281pbi.61.1393588457178;
	Fri, 28 Feb 2014 03:54:17 -0800 (PST)
Received: from gordon-PC ([183.17.52.174])
	by mx.google.com with ESMTPSA id xs1sm12123961pac.7.2014.02.28.03.54.12
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Feb 2014 03:54:16 -0800 (PST)
Date: Fri, 28 Feb 2014 19:54:21 +0800
From: "gordongong0350@gmail.com" <gordongong0350@gmail.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-GUID: 8B4EA011-C9CB-4E17-8377-A4DF050480EC
X-Has-Attach: no
X-Mailer: Foxmail 7, 2, 0, 111[cn]
Mime-Version: 1.0
Message-ID: <2014022819541754090812@gmail.com>
Cc: "ian.jackson" <ian.jackson@eu.citrix.com>,
	"ian.campbell" <ian.campbell@citrix.com>,
	"stefano.stabellini" <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [PATCH 1/1] Fix vreq with error of -EBUSY failing 100
	times too soon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6032888578548417215=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============6032888578548417215==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart124380706250_=----"

This is a multi-part message in MIME format.

------=_001_NextPart124380706250_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: 8bit

Fix vreq with error of -EBUSY failing 100 times too soon, which returns EIO
to the td device layer. If that td device request is filesystem metadata,
the result is not good at all.

To reproduce it, run:
  ./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m -i 0 -i 1 -f /data/iozone_test.tmp
The result is an input/output error and iozone stops.

From ccc0ce3aede1f5c920036847ddb16b25969be7ba Mon Sep 17 00:00:00 2001
From: Xiaodong Gong <gordongong0350@gmail.com>
Date: Fri, 28 Feb 2014 03:01:19 -0500
Subject: [PATCH] Fix vreq with error of -EBUSY failing 100 times too soon,
 returning EIO to the td device layer.

1) The whole free list is consumed by vreqs in the one or two blocks that
   are set in the BAT entry.
2) A vreq in the fail queue then gets a chance to run again very quickly,
   e.g. after only 90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 ++++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                     submitting;
 	int                     secs_pending;
 	int                     num_retries;
+	int                     busy_looping;
 	struct timeval          last_try;
 
 	td_vbd_t                *vbd;
-- 
1.8.3.1


gordongong0350@gmail.com

------=_001_NextPart124380706250_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<html><head><meta http-equiv=3D"content-type" content=3D"text/html; charse=
t=3DGB2312"><style>body { line-height: 1.5; }body { font-size: 10.5pt; fon=
t-family: =CE=A2=C8=ED=D1=C5=BA=DA; color: rgb(0, 0, 0); line-height: 1.5;=
 }</style></head><body>=0A<div><span></span><div style=3D"color: rgb(34, 3=
4, 34); font-family: arial, sans-serif; line-height: normal;">Fix vreq wit=
h error of -EBUSY fails 100 times too soon, &nbsp;that&nbsp;<span style=3D=
"font-size: 10.5pt; background-color: window;">return EIO to td device lay=
er. If this td</span></div><div style=3D"color: rgb(34, 34, 34); font-fami=
ly: arial, sans-serif; line-height: normal;"><span style=3D"font-size: 10.=
5pt; background-color: window;">device request is metadata of filesystem, =
the result is not good at all.</span></div><div style=3D"color: rgb(34, 34=
, 34); font-family: arial, sans-serif; line-height: normal;"><br></div><di=
v style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-he=
ight: normal;"><span style=3D"background-color: window; font-size: 10.5pt;=
">To reproduce it witch &nbsp;"./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m=
 -i 0 -i 1 -f /data/iozone_test.tmp",</span></div><div style=3D"color: rgb=
(34, 34, 34); font-family: arial, sans-serif; line-height: normal;"><span =
style=3D"background-color: window; font-size: 10.5pt;">the result is input=
/output error &amp; iozone is stopped.</span></div><div style=3D"color: rg=
b(34, 34, 34); font-family: arial, sans-serif; line-height: normal;"><br><=
/div><div style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif;=
 line-height: normal;"><br></div><div style=3D"color: rgb(34, 34, 34); fon=
t-family: arial, sans-serif; line-height: normal;"><br></div><div style=3D=
"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: norm=
al;"><br></div><div style=3D"color: rgb(34, 34, 34); font-family: arial, s=
ans-serif; line-height: normal;">From ccc0ce3aede1f5c920036847ddb16b<wbr>2=
5969be7ba Mon Sep 17 00:00:00 2001</div><div style=3D"color: rgb(34, 34, 3=
4); font-family: arial, sans-serif; line-height: normal;">From: Xiaodong G=
ong &lt;<a href=3D"mailto:gordongong0350@gmail.com" target=3D"_blank" styl=
e=3D"color: rgb(17, 85, 204);">gordongong0350@gmail.com</a>&gt;</div><div =
style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-heig=
ht: normal;">Date: Fri, 28 Feb 2014 03:01:19 -0500</div><div style=3D"colo=
r: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: normal;">=
Subject: [PATCH] fix vreq with error of -EBUSY fails 100 times too soon,
 return EIO to td device layer.

1) the whole free list is consumed by vreqs in one or two blocks that
   are set in the bat entry.
2) a vreq on the fail queue gets a chance to run again quickly, e.g.
   after 90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 ++++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                             submitting;
 	int                             secs_pending;
 	int                             num_retries;
+	int                             busy_looping;
 	struct timeval                  last_try;
 
 	td_vbd_t                        *vbd;
-- 
1.8.3.1

gordongong0350@gmail.com
</body></html>
------=_001_NextPart124380706250_=------



--===============6032888578548417215==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6032888578548417215==--



From xen-devel-bounces@lists.xen.org Fri Feb 28 12:37:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 12:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJMgf-0003pM-Nk; Fri, 28 Feb 2014 12:37:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordongong0350@gmail.com>) id 1WJMgd-0003pH-5K
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 12:37:07 +0000
Received: from [85.158.143.35:18519] by server-3.bemta-4.messagelabs.com id
	96/87-11539-2F280135; Fri, 28 Feb 2014 12:37:06 +0000
X-Env-Sender: gordongong0350@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393591023!9040087!1
X-Originating-IP: [209.85.220.66]
X-SpamReason: No, hits=2.2 required=7.0 tests=HTML_FONT_LOW_CONTRAST,
	HTML_MESSAGE, MAILTO_TO_SPAM_ADDR, MIME_BASE64_TEXT, MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18977 invoked from network); 28 Feb 2014 12:37:05 -0000
Received: from mail-pa0-f66.google.com (HELO mail-pa0-f66.google.com)
	(209.85.220.66)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 12:37:05 -0000
Received: by mail-pa0-f66.google.com with SMTP id fa1so383621pad.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Feb 2014 04:37:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:mime-version:message-id:content-type;
	bh=gvPGZtyveR7FAt33kvRsC2ntTAn0OeudD5WgLNB0A50=;
	b=AY82QldgHqQqOiB3GtHITx/AIr5X+Lp0XcOnQJNJm4VvTgq39rr62OfMVltwlEJg6I
	oVAA/+C+Tkzjl+/PLch/rxOtGoQ7dUnZSU8DY6ysKGBPNSv+TGzozfSGeV6SQh/gOODl
	VIowS+6yVlWX0IH3s4M5rFXqUWpzka3JcfvbkyNNs69TCuTtQdO3XIeNuQpkyExWT6Rq
	RW6zrnv3CVtRPIQtz8ZfZPAV2e2tmB7W5dNlCTCMuCZ7rfVnvn7eOLKXcly7N1IbvTYf
	dYL4cto95RH4+psVUkCb7ntrB0i6Gzcp6eNftVzajrgjfedtvEGiN+fJt5hca9REHs9V
	CoRw==
X-Received: by 10.68.238.201 with SMTP id vm9mr3362361pbc.18.1393591022865;
	Fri, 28 Feb 2014 04:37:02 -0800 (PST)
Received: from gordon-PC ([183.17.52.174])
	by mx.google.com with ESMTPSA id sx8sm12939796pab.5.2014.02.28.04.36.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Feb 2014 04:37:02 -0800 (PST)
Date: Fri, 28 Feb 2014 20:37:06 +0800
From: "gordongong0350@gmail.com" <gordongong0350@gmail.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-GUID: 48FBD803-DB0B-4333-8023-5F29E629DACD
X-Has-Attach: no
X-Mailer: Foxmail 7, 2, 0, 111[cn]
Mime-Version: 1.0
Message-ID: <201402282037028705182@gmail.com>
Cc: "ian.jackson" <ian.jackson@eu.citrix.com>,
	"ian.campbell" <ian.campbell@citrix.com>,
	"stefano.stabellini" <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [tapdisk2][PATCH1/1] Fix vreq with error of -EBUSY
	fails 100 times too soon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6271216277323262865=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============6271216277323262865==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart476321248375_=----"

This is a multi-part message in MIME format.

------=_001_NextPart476321248375_=----
Content-Type: text/plain;
	charset="GB2312"
Content-Transfer-Encoding: 7bit

Fix vreq with error of -EBUSY failing 100 times too soon and then
returning EIO to the td device layer. If this td device request is
filesystem metadata, the result is not good at all.

To reproduce it, run "./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m -i 0
-i 1 -f /data/iozone_test.tmp"; the result is an input/output error and
iozone stops.

From ccc0ce3aede1f5c920036847ddb16b25969be7ba Mon Sep 17 00:00:00 2001
From: Xiaodong Gong <gordongong0350@gmail.com>
Date: Fri, 28 Feb 2014 03:01:19 -0500
Subject: [PATCH] fix vreq with error of -EBUSY fails 100 times too soon,
 return EIO to td device layer.

1) the whole free list is consumed by vreqs in one or two blocks that
   are set in the bat entry.
2) a vreq on the fail queue gets a chance to run again quickly, e.g.
   after 90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 ++++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                             submitting;
 	int                             secs_pending;
 	int                             num_retries;
+	int                             busy_looping;
 	struct timeval                  last_try;
 
 	td_vbd_t                        *vbd;
-- 
1.8.3.1

gordongong0350@gmail.com
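
The arithmetic in the new reissue hunk can be sketched as standalone helpers. This is an illustrative restatement, not tapdisk code: the `max_retries` and `retry_interval` parameters merely stand in for TD_VBD_MAX_RETRIES and TD_VBD_RETRY_INTERVAL, whose actual values are not shown in the patch, and the sample numbers below are assumptions.

```c
#include <assert.h>
#include <stdint.h>
#include <sys/time.h>

/* Elapsed microseconds between the current pass and a request's
 * previous attempt, as the patch computes `delta'. */
static uint64_t
elapsed_usec(const struct timeval *now, const struct timeval *last_try)
{
	return (uint64_t)(now->tv_sec - last_try->tv_sec) * 1000000
	       + now->tv_usec - last_try->tv_usec;
}

/* The patch's busy-loop test: only requests within 10 retries of the
 * cap are inspected, and one is flagged when its retries are being
 * consumed much faster than the retry interval intends. */
static int
busy_looping(uint64_t delta_usec, int num_retries, int max_retries,
	     uint64_t retry_interval)
{
	if (num_retries <= max_retries - 10)
		return 0;	/* still far from the retry cap */
	return delta_usec * (uint64_t)num_retries < retry_interval;
}
```

With a request on its 95th of 100 retries and only 90us since its last attempt, `busy_looping(90, 95, 100, 1000000)` flags it, so the reissue path keeps it on the failed queue instead of completing it with EIO.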

------=_001_NextPart476321248375_=----
Content-Type: text/html;
	charset="GB2312"
Content-Transfer-Encoding: quoted-printable

<html><head><meta http-equiv=3D"content-type" content=3D"text/html; charse=
t=3DGB2312"><style>body { line-height: 1.5; }</style></head><body>
<div style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line=
-height: normal;">now.tv_sec - vreq-&gt;last_try.tv_sec =
&lt; TD_VBD_RETRY_INTERVAL)</div><div style=3D"color: rgb(34, 34, 34); fon=
t-family: arial, sans-serif; line-height: normal;">&nbsp;<span style=3D"wh=
ite-space: pre-wrap;">			</span>continue;</div><div style=3D"color: rgb(34=
, 34, 34); font-family: arial, sans-serif; line-height: normal;">&nbsp;</d=
iv><div style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; l=
ine-height: normal;">-<span style=3D"white-space: pre-wrap;">		</span>if (=
vreq-&gt;num_retries &gt;=3D TD_VBD_MAX_RETRIES) {</div><div style=3D"colo=
r: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: normal;">=
+<span style=3D"white-space: pre-wrap;">		</span>if (vreq-&gt;num_retries =
&gt;=3D TD_VBD_MAX_RETRIES &amp;&amp; vreq-&gt;busy_looping !=3D 1) {</div=
><div style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; lin=
e-height: normal;">&nbsp;<span style=3D"white-space: pre-wrap;">		</span>f=
ail:</div><div style=3D"color: rgb(34, 34, 34); font-family: arial, sans-s=
erif; line-height: normal;">&nbsp;<span style=3D"white-space: pre-wrap;">	=
		</span>DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",</div><div style=
=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: n=
ormal;">&nbsp;<span style=3D"white-space: pre-wrap;">			</span>&nbsp;&nbsp=
; &nbsp;vreq-&gt;<a href=3D"http://req.id/" target=3D"_blank" style=3D"col=
or: rgb(17, 85, 204);">req.id</a>, vreq-&gt;num_retries);</div><div style=
=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: n=
ormal;">diff --git a/tools/blktap2/drivers/<u></u>tapdis<wbr>k-vbd.h b/too=
ls/blktap2/drivers/<u></u>tapdis<wbr>k-vbd.h</div><div style=3D"color: rgb=
(34, 34, 34); font-family: arial, sans-serif; line-height: normal;">index =
be084b2..9e5f5f6 100644</div><div style=3D"color: rgb(34, 34, 34); font-fa=
mily: arial, sans-serif; line-height: normal;">--- a/tools/blktap2/drivers=
/<u></u>tapdis<wbr>k-vbd.h</div><div style=3D"color: rgb(34, 34, 34); font=
-family: arial, sans-serif; line-height: normal;">+++ b/tools/blktap2/driv=
ers/<u></u>tapdis<wbr>k-vbd.h</div><div style=3D"color: rgb(34, 34, 34); f=
ont-family: arial, sans-serif; line-height: normal;">@@ -73,6 +73,7 @@ str=
uct td_vbd_request {</div><div style=3D"color: rgb(34, 34, 34); font-famil=
y: arial, sans-serif; line-height: normal;">&nbsp;<span style=3D"white-spa=
ce: pre-wrap;">	</span>int &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp=
; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; submitting;</div><div style=3D"color:=
 rgb(34, 34, 34); font-family: arial, sans-serif; line-height: normal;">&n=
bsp;<span style=3D"white-space: pre-wrap;">	</span>int &nbsp; &nbsp; &nbsp=
; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; secs_pend=
ing;</div><div style=3D"color: rgb(34, 34, 34); font-family: arial, sans-s=
erif; line-height: normal;">&nbsp;<span style=3D"white-space: pre-wrap;">	=
</span>int &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; =
&nbsp; &nbsp; &nbsp; num_retries;</div><div style=3D"color: rgb(34, 34, 34=
); font-family: arial, sans-serif; line-height: normal;">+<span style=3D"w=
hite-space: pre-wrap;">	</span>int &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbs=
p; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; busy_looping;</div><div style=
=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: n=
ormal;">&nbsp;<span style=3D"white-space: pre-wrap;">	</span>struct timeva=
l &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;last_try;</div><div styl=
e=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: =
normal;">&nbsp;</div><div style=3D"color: rgb(34, 34, 34); font-family: ar=
ial, sans-serif; line-height: normal;">&nbsp;<span style=3D"white-space: p=
re-wrap;">	</span>td_vbd_t &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp=
; &nbsp; &nbsp; *vbd;</div><div style=3D"color: rgb(34, 34, 34); font-fami=
ly: arial, sans-serif; line-height: normal; outline: none; padding: 10px 0=
px; width: 22px;"><div role=3D"button" aria-label=3D"=D2=FE=B2=D8=D5=B9=BF=
=AA=B5=C4=C4=DA=C8=DD" style=3D"background-color: rgb(241, 241, 241); bord=
er: 1px solid rgb(221, 221, 221); clear: both; line-height: 6px; outline: =
none; width: 20px;"><img src=3D"https://ci3.googleusercontent.com/proxy/8C=
BWEZYl_gju-O0mEKjDNw0Eh_qIbYUu1oZBKfZqOypmLO1gU9NfBBUnKK5HrwoN-VKG8iQn_PvQ=
pZDlbxq0IwVL9dQOy_HcGJ0=3Ds0-d-e1-ft#https://mail.google.com/mail/u/0/imag=
es/cleardot.gif" style=3D"background-image: url(https://ci3.googleusercont=
ent.com/proxy/4Pr9QlSavgRrq9Z2e9xIdvM0tknChd1d3dwHXzb4AxHH58w0Ke02qizRBkj-=
OCDGy4wP2RrG7OYjvK0G5jWJojcVQldSGVQLT8rI=3Ds0-d-e1-ft#https://ssl.gstatic.=
com/ui/v1/icons/mail/ellipsis.png); min-height: 8px; width: 20px; backgrou=
nd-repeat: no-repeat no-repeat;"></div></div><div class=3D"yj6qo ajU" styl=
e=3D"cursor: pointer; outline: none; padding: 10px 0px; width: 22px; color=
: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: normal;"><=
div id=3D":vv" class=3D"ajR" role=3D"button" tabindex=3D"0" data-tooltip=
=3D"=D2=FE=B2=D8=D5=B9=BF=AA=B5=C4=C4=DA=C8=DD" aria-label=3D"=D2=FE=B2=D8=
=D5=B9=BF=AA=B5=C4=C4=DA=C8=DD" style=3D"background-color: rgb(241, 241, 2=
41); border: 1px solid rgb(221, 221, 221); clear: both; line-height: 6px; =
outline: none; position: relative; width: 20px;"><img class=3D"ajT" src=3D=
"https://mail.google.com/mail/u/0/images/cleardot.gif" style=3D"background=
-image: url(https://ssl.gstatic.com/ui/v1/icons/mail/ellipsis.png); height=
: 8px; opacity: 0.3; width: 20px; background-position: initial initial; ba=
ckground-repeat: no-repeat no-repeat;"></div></div><span class=3D"HOEnZb a=
dL" style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-=
height: normal;"><font color=3D"#888888"><span style=3D"color: rgb(34, 34,=
 34);"><font color=3D"#888888"><div>--&nbsp;</div><div>1.8.3.1</div></font=
></span></font></span></div>=0A<div><br></div><hr style=3D"WIDTH: 210px; H=
EIGHT: 1px" color=3D"#b5c4df" size=3D"1" align=3D"left">=0A<div><span><div=
 style=3D"MARGIN: 10px; FONT-FAMILY: verdana; FONT-SIZE: 10pt"><div>gordon=
gong0350@gmail.com</div></div></span></div>=0A</body></html>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel




From xen-devel-bounces@lists.xen.org Fri Feb 28 12:37:31 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 12:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJMgf-0003pM-Nk; Fri, 28 Feb 2014 12:37:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordongong0350@gmail.com>) id 1WJMgd-0003pH-5K
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 12:37:07 +0000
Received: from [85.158.143.35:18519] by server-3.bemta-4.messagelabs.com id
	96/87-11539-2F280135; Fri, 28 Feb 2014 12:37:06 +0000
X-Env-Sender: gordongong0350@gmail.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393591023!9040087!1
X-Originating-IP: [209.85.220.66]
X-SpamReason: No, hits=2.2 required=7.0 tests=HTML_FONT_LOW_CONTRAST,
	HTML_MESSAGE, MAILTO_TO_SPAM_ADDR, MIME_BASE64_TEXT, MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18977 invoked from network); 28 Feb 2014 12:37:05 -0000
Received: from mail-pa0-f66.google.com (HELO mail-pa0-f66.google.com)
	(209.85.220.66)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 12:37:05 -0000
Received: by mail-pa0-f66.google.com with SMTP id fa1so383621pad.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Feb 2014 04:37:03 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:mime-version:message-id:content-type;
	bh=gvPGZtyveR7FAt33kvRsC2ntTAn0OeudD5WgLNB0A50=;
	b=AY82QldgHqQqOiB3GtHITx/AIr5X+Lp0XcOnQJNJm4VvTgq39rr62OfMVltwlEJg6I
	oVAA/+C+Tkzjl+/PLch/rxOtGoQ7dUnZSU8DY6ysKGBPNSv+TGzozfSGeV6SQh/gOODl
	VIowS+6yVlWX0IH3s4M5rFXqUWpzka3JcfvbkyNNs69TCuTtQdO3XIeNuQpkyExWT6Rq
	RW6zrnv3CVtRPIQtz8ZfZPAV2e2tmB7W5dNlCTCMuCZ7rfVnvn7eOLKXcly7N1IbvTYf
	dYL4cto95RH4+psVUkCb7ntrB0i6Gzcp6eNftVzajrgjfedtvEGiN+fJt5hca9REHs9V
	CoRw==
X-Received: by 10.68.238.201 with SMTP id vm9mr3362361pbc.18.1393591022865;
	Fri, 28 Feb 2014 04:37:02 -0800 (PST)
Received: from gordon-PC ([183.17.52.174])
	by mx.google.com with ESMTPSA id sx8sm12939796pab.5.2014.02.28.04.36.58
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Feb 2014 04:37:02 -0800 (PST)
Date: Fri, 28 Feb 2014 20:37:06 +0800
From: "gordongong0350@gmail.com" <gordongong0350@gmail.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-GUID: 48FBD803-DB0B-4333-8023-5F29E629DACD
X-Has-Attach: no
X-Mailer: Foxmail 7, 2, 0, 111[cn]
Mime-Version: 1.0
Message-ID: <201402282037028705182@gmail.com>
Cc: "ian.jackson" <ian.jackson@eu.citrix.com>,
	"ian.campbell" <ian.campbell@citrix.com>,
	"stefano.stabellini" <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [tapdisk2][PATCH1/1] Fix vreq with error of -EBUSY
	fails 100 times too soon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6271216277323262865=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Fix vreq with error of -EBUSY fails 100 times too soon, that returns EIO to
the td device layer. If this td device request is filesystem metadata, the
result is not good at all.

To reproduce it with "./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m -i 0 -i 1
-f /data/iozone_test.tmp": the result is an input/output error and iozone
stops.

From ccc0ce3aede1f5c920036847ddb16b25969be7ba Mon Sep 17 00:00:00 2001
From: Xiaodong Gong <gordongong0350@gmail.com>
Date: Fri, 28 Feb 2014 03:01:19 -0500
Subject: [PATCH] fix vreq with error of -EBUSY fails 100 times too soon,
 return EIO to td device layer.

1) all the free list is consumed by vreqs in one or two blocks that are set
   in the bat entry.
2) a vreq in the fail queue gets a chance to run again quickly, e.g. after
   90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 +++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000 \
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                         submitting;
 	int                         secs_pending;
 	int                         num_retries;
+	int                         busy_looping;
 	struct timeval              last_try;
 
 	td_vbd_t                    *vbd;
-- 
1.8.3.1

gordongong0350@gmail.com

ent.com/proxy/4Pr9QlSavgRrq9Z2e9xIdvM0tknChd1d3dwHXzb4AxHH58w0Ke02qizRBkj-=
OCDGy4wP2RrG7OYjvK0G5jWJojcVQldSGVQLT8rI=3Ds0-d-e1-ft#https://ssl.gstatic.=
com/ui/v1/icons/mail/ellipsis.png); min-height: 8px; width: 20px; backgrou=
nd-repeat: no-repeat no-repeat;"></div></div><div class=3D"yj6qo ajU" styl=
e=3D"cursor: pointer; outline: none; padding: 10px 0px; width: 22px; color=
: rgb(34, 34, 34); font-family: arial, sans-serif; line-height: normal;"><=
div id=3D":vv" class=3D"ajR" role=3D"button" tabindex=3D"0" data-tooltip=
=3D"=D2=FE=B2=D8=D5=B9=BF=AA=B5=C4=C4=DA=C8=DD" aria-label=3D"=D2=FE=B2=D8=
=D5=B9=BF=AA=B5=C4=C4=DA=C8=DD" style=3D"background-color: rgb(241, 241, 2=
41); border: 1px solid rgb(221, 221, 221); clear: both; line-height: 6px; =
outline: none; position: relative; width: 20px;"><img class=3D"ajT" src=3D=
"https://mail.google.com/mail/u/0/images/cleardot.gif" style=3D"background=
-image: url(https://ssl.gstatic.com/ui/v1/icons/mail/ellipsis.png); height=
: 8px; opacity: 0.3; width: 20px; background-position: initial initial; ba=
ckground-repeat: no-repeat no-repeat;"></div></div><span class=3D"HOEnZb a=
dL" style=3D"color: rgb(34, 34, 34); font-family: arial, sans-serif; line-=
height: normal;"><font color=3D"#888888"><span style=3D"color: rgb(34, 34,=
 34);"><font color=3D"#888888"><div>--&nbsp;</div><div>1.8.3.1</div></font=
></span></font></span></div>=0A<div><br></div><hr style=3D"WIDTH: 210px; H=
EIGHT: 1px" color=3D"#b5c4df" size=3D"1" align=3D"left">=0A<div><span><div=
 style=3D"MARGIN: 10px; FONT-FAMILY: verdana; FONT-SIZE: 10pt"><div>gordon=
gong0350@gmail.com</div></div></span></div>=0A</body></html>
------=_001_NextPart476321248375_=------



--===============6271216277323262865==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6271216277323262865==--



From xen-devel-bounces@lists.xen.org Fri Feb 28 13:04:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN6s-0004Ge-5Q; Fri, 28 Feb 2014 13:04:14 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN6q-0004GQ-Ed
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:04:12 +0000
Received: from [85.158.143.35:43230] by server-3.bemta-4.messagelabs.com id
	7D/58-11539-B4980135; Fri, 28 Feb 2014 13:04:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-16.tower-21.messagelabs.com!1393592649!9046939!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23688 invoked from network); 28 Feb 2014 13:04:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-16.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:04:10 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD3v32021345
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:03:58 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3uRH011951
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:56 GMT
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3tNb024519; Fri, 28 Feb 2014 13:03:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 9DD501C0FB6; Tue, 25 Feb 2014 16:09:28 -0500 (EST)
Date: Tue, 25 Feb 2014 16:09:28 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Rusty Russell <rusty@au1.ibm.com>
Message-ID: <20140225210928.GB5827@phenom.dumpdata.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<CA+aC4ksPq2n_zAz47VBQEwRSawcxmUDiGa=4y+zRsW-F7ckEWA@mail.gmail.com>
	<87ha7ubme0.fsf@rustcorp.com.au>
	<CA+aC4kvr+JQWTBKX8hfLKHzXQafnom46tsSxRUY2CGEKumXR+g@mail.gmail.com>
	<20140221100506.GR18398@zion.uk.xensource.com>
	<20140221150107.GG15905@phenom.dumpdata.com>
	<87y51058vf.fsf@rustcorp.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <87y51058vf.fsf@rustcorp.com.au>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] VIRTIO - compatibility with different
	virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 11:03:24AM +1030, Rusty Russell wrote:
> Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> writes:
> > On Fri, Feb 21, 2014 at 10:05:06AM +0000, Wei Liu wrote:
> >> On Thu, Feb 20, 2014 at 06:50:59PM -0800, Anthony Liguori wrote:
> >> > The standard should say, "physical address"
> >
> > This conversation is heading towards - implementation needs it - hence let's
> > make the design have it. Which I am OK with - but if we are going that
> > route we might as well call this thing 'my-pony-number' because I think
> > each hypervisor will have a different view of it.
> >
> > Some of them might use a physical address with some flag bits on it.
> > Some might use just physical address.
> >
> > And some might want a 32-bit value that has no correlation to physical
> > or virtual addresses.
> 
> True, but if the standard doesn't define what it is, it's not a standard
> worth anything.  Xen is special because it's already requiring guest
> changes; it's a platform in itself and so can be different from
> everything else.  But it still needs to be defined.
> 
> At the moment, anything but guest-phys would not be compliant.  That's a
> Good Thing if we simply don't know the best answer for Xen; we'll adjust
> the standard when we do.

I think Daniel's suggestion of a 'handle' should cover it, no?

Or are you saying that the 'handle' should actually say what it is
for every platform on which VirtIO will run?

For Xen it would be whatever the DMA API gives back as 'dma_addr_t'.
Which would require the VirtIO drivers to use the DMA (or PCI) APIs.


> 
> Cheers,
> Rusty.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:04:29 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:04:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN6s-0004Gm-H3; Fri, 28 Feb 2014 13:04:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN6q-0004GR-Fb
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:04:12 +0000
Received: from [85.158.139.211:10717] by server-13.bemta-5.messagelabs.com id
	63/5B-18801-B4980135; Fri, 28 Feb 2014 13:04:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-206.messagelabs.com!1393592649!6883874!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32758 invoked from network); 28 Feb 2014 13:04:11 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-6.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:04:11 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD3vx9021344
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:03:58 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3tli009358
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:55 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1SD3s8S011623; Fri, 28 Feb 2014 13:03:54 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 5492C1C0FB7; Tue, 25 Feb 2014 16:12:47 -0500 (EST)
Date: Tue, 25 Feb 2014 16:12:47 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Rusty Russell <rusty@au1.ibm.com>
Message-ID: <20140225211247.GC5827@phenom.dumpdata.com>
References: <20140217132331.GA3441@olila.local.net-space.pl>
	<87vbwcaqxe.fsf@rustcorp.com.au>
	<1392804568.23084.112.camel@kazak.uk.xensource.com>
	<8761oab4y7.fsf@rustcorp.com.au>
	<20140220203704.GG3441@olila.local.net-space.pl>
	<8761o99tft.fsf@rustcorp.com.au>
	<CA+aC4kt1HcH8dryqDTxcv0rNWBx8Ed4F7VNVjuHCkAFSWC+Ktw@mail.gmail.com>
	<87vbw458jr.fsf@rustcorp.com.au>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <87vbw458jr.fsf@rustcorp.com.au>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: virtio-dev@lists.oasis-open.org, Wei Liu <wei.liu2@citrix.com>,
	Ian Campbell <Ian.Campbell@citrix.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Daniel Kiper <daniel.kiper@oracle.com>, ian@bromium.com,
	Anthony Liguori <anthony@codemonkey.ws>, sasha.levin@oracle.com,
	xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [virtio-dev] Re: VIRTIO - compatibility with
 different virtualization solutions
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Tue, Feb 25, 2014 at 11:10:24AM +1030, Rusty Russell wrote:
> Anthony Liguori <anthony@codemonkey.ws> writes:
> > On Thu, Feb 20, 2014 at 4:54 PM, Rusty Russell <rusty@au1.ibm.com> wrote:
> >> On the other hand, if we wanted a more Xen-like setup, it would look
> >> like this:
> >>
> >> 1) Abstract away the "physical addresses" to "handles" in the standard,
> >>    and allow some platform-specific mapping setup and teardown.
> >
> > At the risk of beating a dead horse, passing handles (grant
> > references) is going to be slow.
> ...
> > I really think the best paths forward for virtio on Xen are either (1)
> > reject the memory isolation thing and leave things as is or (2) assume
> > bounce buffering at the transport layer (by using the PCI DMA API).
> 
> Xen can get memory isolation back by doing the copy in the hypervisor.
> I've always liked that approach because it doesn't alter the guest
> semantics, but it's very different from what Xen does now.

It could. But why do it - the backend can choose to do it as well,
and perhaps even do some translation of the payload as it sees fit.

Or it can map it - and if using DPDK, for example, one has
memory pages shared between the domains all the time, where you
just need to map once.

> 
> Cheers,
> Rusty.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN7p-0004O8-4u; Fri, 28 Feb 2014 13:05:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN7o-0004Nn-C3
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:05:12 +0000
Received: from [193.109.254.147:21655] by server-8.bemta-14.messagelabs.com id
	B1/66-18529-78980135; Fri, 28 Feb 2014 13:05:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393592709!7492193!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18985 invoked from network); 28 Feb 2014 13:05:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:05:10 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD42CC021652
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:04:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3ubP009410
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:56 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3uD4024527; Fri, 28 Feb 2014 13:03:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6961D1C0FBA; Wed, 26 Feb 2014 12:00:41 -0500 (EST)
Date: Wed, 26 Feb 2014 12:00:41 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226170041.GA20775@phenom.dumpdata.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
From xen-devel-bounces@lists.xen.org Fri Feb 28 13:05:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN7p-0004O8-4u; Fri, 28 Feb 2014 13:05:13 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN7o-0004Nn-C3
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:05:12 +0000
Received: from [193.109.254.147:21655] by server-8.bemta-14.messagelabs.com id
	B1/66-18529-78980135; Fri, 28 Feb 2014 13:05:11 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393592709!7492193!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18985 invoked from network); 28 Feb 2014 13:05:10 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-5.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:05:10 -0000
Received: from acsinet21.oracle.com (acsinet21.oracle.com [141.146.126.237])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD42CC021652
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:04:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3ubP009410
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:56 GMT
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3uD4024527; Fri, 28 Feb 2014 13:03:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:55 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 6961D1C0FBA; Wed, 26 Feb 2014 12:00:41 -0500 (EST)
Date: Wed, 26 Feb 2014 12:00:41 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226170041.GA20775@phenom.dumpdata.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet21.oracle.com [141.146.126.237]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Marcos Matsunaga <Marcos.Matsunaga@oracle.com>,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock
	with PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:20AM -0500, Waiman Long wrote:
> v4->v5:
>  - Move the optimized 2-task contending code to the generic file to
>    enable more architectures to use it without code duplication.
>  - Address some of the style-related comments by PeterZ.
>  - Allow the use of unfair queue spinlock in a real para-virtualized
>    execution environment.
>  - Add para-virtualization support to the qspinlock code by ensuring
>    that the lock holder and queue head stay alive as much as possible.
> 
> v3->v4:
>  - Remove debugging code and fix a configuration error
>  - Simplify the qspinlock structure and streamline the code to make it
>    perform a bit better
>  - Add an x86 version of asm/qspinlock.h for holding x86 specific
>    optimization.
>  - Add an optimized x86 code path for 2 contending tasks to improve
>    low contention performance.
> 
> v2->v3:
>  - Simplify the code by using numerous mode only without an unfair option.
>  - Use the latest smp_load_acquire()/smp_store_release() barriers.
>  - Move the queue spinlock code to kernel/locking.
>  - Make the use of queue spinlock the default for x86-64 without user
>    configuration.
>  - Additional performance tuning.
> 
> v1->v2:
>  - Add some more comments to document what the code does.
>  - Add a numerous CPU mode to support >= 16K CPUs
>  - Add a configuration option to allow lock stealing which can further
>    improve performance in many cases.
>  - Enable wakeup of queue head CPU at unlock time for non-numerous
>    CPU mode.
> 
> This patch set has 3 different sections:
>  1) Patches 1-3: Introduce a queue-based spinlock implementation that
>     can replace the default ticket spinlock without increasing the
>     size of the spinlock data structure. As a result, critical kernel
>     data structures that embed a spinlock won't increase in size or
>     break data alignment.
>  2) Patches 4 and 5: Enable the use of unfair queue spinlocks in a
>     real para-virtualized execution environment. This can resolve
>     some of the locking-related performance issues that arise when
>     the next CPU in line to get the lock has been scheduled out
>     for a period of time.
>  3) Patches 6-8: Enable qspinlock para-virtualization support by making
>     sure that the lock holder and the queue head stay alive as long as
>     possible.
> 
> Patches 1-3 are fully tested and ready for production. Patches 4-8, on
> the other hand, are not fully tested. They have undergone compilation
> tests with various combinations of kernel config settings and boot-up
> tests in a non-virtualized setting. Further tests and performance
> characterization still need to be done in a KVM guest. So
> comments on them are welcome. Suggestions or recommendations on how
> to add PV support in the Xen environment are also needed.

It should be fairly easy. You just need to implement the kick, right?
An IPI should be all that is needed - look at xen_unlock_kick. The
rest of the spinlock code is generic and shared between
KVM, Xen and baremetal.

You should be able to run all of this under an HVM guest as well - as
in, you don't need a pure PV guest to use the PV ticketlocks.

An easy way to install/run this is to install your latest distro,
do 'yum install xen' or 'apt-get install xen', and reboot - you
are now under Xen. Launch guests, etc. with your favorite virtualization
toolstack.
> 
> The queue spinlock has slightly better performance than the ticket
> spinlock in the uncontended case. Its performance can be much better
> with moderate to heavy contention.  This patch set has the potential to
> improve the performance of all workloads that have moderate to
> heavy spinlock contention.
> 
> The queue spinlock is especially suitable for NUMA machines with at
> least 2 sockets, though noticeable performance benefit probably won't
> show up in machines with less than 4 sockets.
> 
> The purpose of this patch set is not to solve any particular spinlock
> contention problem. Those need to be solved by refactoring the code
> to make more efficient use of the lock or by using finer-grained locks. The
> main purpose is to make lock contention problems more tolerable
> until someone can spend the time and effort to fix them.
> 
> Waiman Long (8):
>   qspinlock: Introducing a 4-byte queue spinlock implementation
>   qspinlock, x86: Enable x86-64 to use queue spinlock
>   qspinlock, x86: Add x86 specific optimization for 2 contending tasks
>   pvqspinlock, x86: Allow unfair spinlock in a real PV environment
>   pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
>   pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
>   pvqspinlock, x86: Add qspinlock para-virtualization support
>   pvqspinlock, x86: Enable KVM to use qspinlock's PV support
> 
>  arch/x86/Kconfig                      |   12 +
>  arch/x86/include/asm/paravirt.h       |    9 +-
>  arch/x86/include/asm/paravirt_types.h |   12 +
>  arch/x86/include/asm/pvqspinlock.h    |  176 ++++++++++
>  arch/x86/include/asm/qspinlock.h      |  133 +++++++
>  arch/x86/include/asm/spinlock.h       |    9 +-
>  arch/x86/include/asm/spinlock_types.h |    4 +
>  arch/x86/kernel/Makefile              |    1 +
>  arch/x86/kernel/kvm.c                 |   73 ++++-
>  arch/x86/kernel/paravirt-spinlocks.c  |   15 +-
>  arch/x86/xen/spinlock.c               |    2 +-
>  include/asm-generic/qspinlock.h       |  122 +++++++
>  include/asm-generic/qspinlock_types.h |   61 ++++
>  kernel/Kconfig.locks                  |    7 +
>  kernel/locking/Makefile               |    1 +
>  kernel/locking/qspinlock.c            |  610 +++++++++++++++++++++++++++++++++
>  16 files changed, 1239 insertions(+), 8 deletions(-)
>  create mode 100644 arch/x86/include/asm/pvqspinlock.h
>  create mode 100644 arch/x86/include/asm/qspinlock.h
>  create mode 100644 include/asm-generic/qspinlock.h
>  create mode 100644 include/asm-generic/qspinlock_types.h
>  create mode 100644 kernel/locking/qspinlock.c
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:05:15 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN7r-0004PC-JA; Fri, 28 Feb 2014 13:05:15 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN7q-0004OP-Cc
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:05:14 +0000
Received: from [85.158.137.68:41063] by server-2.bemta-3.messagelabs.com id
	AB/11-06531-98980135; Fri, 28 Feb 2014 13:05:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393592710!87331!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 11290 invoked from network); 28 Feb 2014 13:05:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:05:12 -0000
Received: from ucsinet21.oracle.com (ucsinet21.oracle.com [156.151.31.93])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD43VP017362
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:04:03 GMT
Received: from aserz7021.oracle.com (aserz7021.oracle.com [141.146.126.230])
	by ucsinet21.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3vBc011992
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:57 GMT
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
	by aserz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3uwb009430; Fri, 28 Feb 2014 13:03:56 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:56 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id E2D1C1C0FBC; Wed, 26 Feb 2014 12:07:34 -0500 (EST)
Date: Wed, 26 Feb 2014 12:07:34 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226170734.GB20775@phenom.dumpdata.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet21.oracle.com [156.151.31.93]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
 x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote:
> Locking is always an issue in a virtualized environment as the virtual
> CPU that is waiting on a lock may get scheduled out and hence block
> any progress in lock acquisition even when the lock has been freed.
> 
> One solution to this problem is to allow unfair locks in a
> para-virtualized environment. In this case, a new lock acquirer can
> come and steal the lock if the next-in-line CPU to get the lock is
> scheduled out. An unfair lock in a native environment is generally not a

Hmm, how do you know if the 'next-in-line CPU' is scheduled out? The
hypervisor knows - but you as a guest might have no idea
about it.

> good idea as there is a possibility of lock starvation for a heavily
> contended lock.

Should this then detect whether it is running under virtualization
and only activate itself in that case, staying disabled on baremetal?

> 
> This patch adds a new configuration option for the x86
> architecture to enable the use of unfair queue spinlock
> (PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
> (paravirt_unfairlocks_enabled) is used to switch between a fair and
> an unfair version of the spinlock code. This jump label will only be
> enabled in a real PV guest.

As opposed to a fake PV guest :-) I think you can remove the 'real'.


> 
> Enabling this configuration feature decreases the performance of an
> uncontended lock-unlock operation by about 1-2%.

Presumably on baremetal, right?

> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/Kconfig                     |   11 +++++
>  arch/x86/include/asm/qspinlock.h     |   74 ++++++++++++++++++++++++++++++++++
>  arch/x86/kernel/Makefile             |    1 +
>  arch/x86/kernel/paravirt-spinlocks.c |    7 +++
>  4 files changed, 93 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 5bf70ab..8d7c941 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -645,6 +645,17 @@ config PARAVIRT_SPINLOCKS
>  
>  	  If you are unsure how to answer this question, answer Y.
>  
> +config PARAVIRT_UNFAIR_LOCKS
> +	bool "Enable unfair locks in a para-virtualized guest"
> +	depends on PARAVIRT && SMP && QUEUE_SPINLOCK
> +	depends on !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE
> +	---help---
> +	  This changes the kernel to use unfair locks in a real
> +	  para-virtualized guest system. This will help performance
> +	  in most cases. However, there is a possibility of lock
> +	  starvation on a heavily contended lock especially in a
> +	  large guest with many virtual CPUs.
> +
>  source "arch/x86/xen/Kconfig"
>  
>  config KVM_GUEST
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 98db42e..c278aed 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -56,4 +56,78 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
>  
>  #include <asm-generic/qspinlock.h>
>  
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +/**
> + * queue_spin_lock_unfair - acquire a queue spinlock unfairly
> + * @lock: Pointer to queue spinlock structure
> + */
> +static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
> +{
> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +
> +	if (likely(cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
> +		return;
> +	/*
> +	 * Since the lock is now unfair, there is no need to activate
> +	 * the 2-task quick spinning code path.
> +	 */
> +	queue_spin_lock_slowpath(lock, -1);
> +}
> +
> +/**
> + * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 if failed
> + */
> +static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
> +{
> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +
> +	if (!qlock->lock &&
> +	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
> +		return 1;
> +	return 0;
> +}
> +
> +/*
> + * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
> + * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
> + * is true.
> + */
> +#undef arch_spin_lock
> +#undef arch_spin_trylock
> +#undef arch_spin_lock_flags
> +
> +extern struct static_key paravirt_unfairlocks_enabled;
> +
> +/**
> + * arch_spin_lock - acquire a queue spinlock
> + * @lock: Pointer to queue spinlock structure
> + */
> +static inline void arch_spin_lock(struct qspinlock *lock)
> +{
> +	if (static_key_false(&paravirt_unfairlocks_enabled)) {
> +		queue_spin_lock_unfair(lock);
> +		return;
> +	}
> +	queue_spin_lock(lock);

What happens when you are booting and you are in the middle of using a
ticketlock (say you are waiting for it and you are in the slow-path),
and suddenly unfairlocks_enabled is turned on?

All the other CPUs start using the unfair version - and are you still
in the ticketlock unlocker (or worse, the locker, about to go to sleep)?


> +}
> +
> +/**
> + * arch_spin_trylock - try to acquire the queue spinlock
> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 if failed
> + */
> +static inline int arch_spin_trylock(struct qspinlock *lock)
> +{
> +	if (static_key_false(&paravirt_unfairlocks_enabled)) {
> +		return queue_spin_trylock_unfair(lock);
> +	}
> +	return queue_spin_trylock(lock);
> +}
> +
> +#define arch_spin_lock_flags(l, f)	arch_spin_lock(l)
> +
> +#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
> +
>  #endif /* _ASM_X86_QSPINLOCK_H */
> diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
> index cb648c8..1107a20 100644
> --- a/arch/x86/kernel/Makefile
> +++ b/arch/x86/kernel/Makefile
> @@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
>  obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
>  obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
>  obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
> +obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
>  obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
>  
>  obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index bbb6c73..a50032a 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -8,6 +8,7 @@
>  
>  #include <asm/paravirt.h>
>  
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
>  	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
> @@ -18,3 +19,9 @@ EXPORT_SYMBOL(pv_lock_ops);
>  
>  struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
>  EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
> +#endif
> +
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
> +EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
> +#endif
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

> 
> One solution to this problem is to allow unfair lock in a
> para-virtualized environment. In this case, a new lock acquirer can
> come and steal the lock if the next-in-line CPU to get the lock is
> scheduled out. Unfair lock in a native environment is generally not a

Hmm, how do you know whether the 'next-in-line CPU' is scheduled out?
The hypervisor knows, but you as a guest might have no idea.

> good idea as there is a possibility of lock starvation for a heavily
> contended lock.

Should this then detect whether it is running under virtualization
and only then activate itself? And when running on bare metal, leave it disabled?
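For what it's worth, a guest can at least tell that it is virtualized from the
CPUID "hypervisor present" bit. A purely illustrative userspace sketch (this is
my own code, not the patch's or the kernel's detection path; in-kernel the same
signal is `X86_FEATURE_HYPERVISOR`):

```c
#include <cpuid.h>

/* Illustrative only: read the CPUID "hypervisor present" bit
 * (leaf 1, ECX bit 31). A config such as PARAVIRT_UNFAIR_LOCKS
 * could key off this so bare metal never takes the unfair path. */
static int running_under_hypervisor(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return 0;		/* CPUID leaf 1 unavailable */
	return (ecx >> 31) & 1;		/* 1 = hypervisor present  */
}
```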

> 
> This patch add a new configuration option for the x86
> architecture to enable the use of unfair queue spinlock
> (PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
> (paravirt_unfairlocks_enabled) is used to switch between a fair and
> an unfair version of the spinlock code. This jump label will only be
> enabled in a real PV guest.

As opposed to a fake PV guest :-) I think you can remove the 'real'.


> 
> Enabling this configuration feature decreases the performance of an
> uncontended lock-unlock operation by about 1-2%.

Presumably on bare metal, right?

> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/Kconfig                     |   11 +++++
>  arch/x86/include/asm/qspinlock.h     |   74 ++++++++++++++++++++++++++++++++++
>  arch/x86/kernel/Makefile             |    1 +
>  arch/x86/kernel/paravirt-spinlocks.c |    7 +++
>  4 files changed, 93 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index 5bf70ab..8d7c941 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -645,6 +645,17 @@ config PARAVIRT_SPINLOCKS
>  
>  	  If you are unsure how to answer this question, answer Y.
>  
> +config PARAVIRT_UNFAIR_LOCKS
> +	bool "Enable unfair locks in a para-virtualized guest"
> +	depends on PARAVIRT && SMP && QUEUE_SPINLOCK
> +	depends on !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE
> +	---help---
> +	  This changes the kernel to use unfair locks in a real
> +	  para-virtualized guest system. This will help performance
> +	  in most cases. However, there is a possibility of lock
> +	  starvation on a heavily contended lock especially in a
> +	  large guest with many virtual CPUs.
> +
>  source "arch/x86/xen/Kconfig"
>  
>  config KVM_GUEST
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 98db42e..c278aed 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -56,4 +56,78 @@ static inline void queue_spin_unlock(struct qspinlock *lock)
>  
>  #include <asm-generic/qspinlock.h>
>  
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +/**
> + * queue_spin_lock_unfair - acquire a queue spinlock unfairly
> + * @lock: Pointer to queue spinlock structure
> + */
> +static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
> +{
> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +
> +	if (likely(cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
> +		return;
> +	/*
> +	 * Since the lock is now unfair, there is no need to activate
> +	 * the 2-task quick spinning code path.
> +	 */
> +	queue_spin_lock_slowpath(lock, -1);
> +}
> +
> +/**
> + * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly
> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 if failed
> + */
> +static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
> +{
> +	union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +
> +	if (!qlock->lock &&
> +	   (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
> +		return 1;
> +	return 0;
> +}
> +
> +/*
> + * Redefine arch_spin_lock and arch_spin_trylock as inline functions that will
> + * jump to the unfair versions if the static key paravirt_unfairlocks_enabled
> + * is true.
> + */
> +#undef arch_spin_lock
> +#undef arch_spin_trylock
> +#undef arch_spin_lock_flags
> +
> +extern struct static_key paravirt_unfairlocks_enabled;
> +
> +/**
> + * arch_spin_lock - acquire a queue spinlock
> + * @lock: Pointer to queue spinlock structure
> + */
> +static inline void arch_spin_lock(struct qspinlock *lock)
> +{
> +	if (static_key_false(&paravirt_unfairlocks_enabled)) {
> +		queue_spin_lock_unfair(lock);
> +		return;
> +	}
> +	queue_spin_lock(lock);

What happens when you are booting and you are in the middle of using a
ticketlock (say you are waiting for it and you are in the slow path),
and suddenly unfairlocks_enabled is turned on?

All the other CPUs start using the unfair version, and you are still
in the ticketlock unlocker (or worse, the locker, about to go to sleep).
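To make that hazard concrete, here is a deterministic single-threaded model
(entirely my own sketch, with invented names and types, not the kernel's): a
ticket waiter spins until now_serving reaches its ticket, so a holder that
exits through the unfair byte-lock path, which never advances now_serving,
strands the waiter forever.

```c
#include <stdint.h>

/* Toy model (not kernel code) of the mid-boot switch hazard: one
 * waiter queued on the ticket protocol while the holder releases
 * via the byte-lock (unfair) protocol instead. */
struct hybrid_lock {
	uint16_t next_ticket;	/* ticket path */
	uint16_t now_serving;
	uint8_t  locked;	/* unfair byte path */
};

static uint16_t ticket_enter(struct hybrid_lock *l)
{
	return l->next_ticket++;	/* caller spins on now_serving */
}

static int ticket_may_proceed(struct hybrid_lock *l, uint16_t ticket)
{
	return l->now_serving == ticket;
}

static void unfair_unlock(struct hybrid_lock *l)
{
	l->locked = 0;			/* does NOT bump now_serving */
}
```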


> +}
> +
> +/**
> + * arch_spin_trylock - try to acquire the queue spinlock
> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 if failed
> + */
> +static inline int arch_spin_trylock(struct qspinlock *lock)
> +{
> +	if (static_key_false(&paravirt_unfairlocks_enabled)) {
> +		return queue_spin_trylock_unfair(lock);
> +	}
> +	return queue_spin_trylock(lock);
> +}
> +
> +#define arch_spin_lock_flags(l, f)	arch_spin_lock(l)
> +
> +#endif /* CONFIG_PARAVIRT_UNFAIR_LOCKS */
> +
>  #endif /* _ASM_X86_QSPINLOCK_H */
> diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
> index cb648c8..1107a20 100644
> --- a/arch/x86/kernel/Makefile
> +++ b/arch/x86/kernel/Makefile
> @@ -88,6 +88,7 @@ obj-$(CONFIG_DEBUG_NMI_SELFTEST) += nmi_selftest.o
>  obj-$(CONFIG_KVM_GUEST)		+= kvm.o kvmclock.o
>  obj-$(CONFIG_PARAVIRT)		+= paravirt.o paravirt_patch_$(BITS).o
>  obj-$(CONFIG_PARAVIRT_SPINLOCKS)+= paravirt-spinlocks.o
> +obj-$(CONFIG_PARAVIRT_UNFAIR_LOCKS)+= paravirt-spinlocks.o
>  obj-$(CONFIG_PARAVIRT_CLOCK)	+= pvclock.o
>  
>  obj-$(CONFIG_PCSPKR_PLATFORM)	+= pcspeaker.o
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index bbb6c73..a50032a 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -8,6 +8,7 @@
>  
>  #include <asm/paravirt.h>
>  
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
>  	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
> @@ -18,3 +19,9 @@ EXPORT_SYMBOL(pv_lock_ops);
>  
>  struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
>  EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
> +#endif
> +
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;
> +EXPORT_SYMBOL(paravirt_unfairlocks_enabled);
> +#endif
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:05:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN7s-0004Pi-3T; Fri, 28 Feb 2014 13:05:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN7q-0004Oe-FH
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:05:14 +0000
Received: from [193.109.254.147:21956] by server-2.bemta-14.messagelabs.com id
	7B/1A-01236-98980135; Fri, 28 Feb 2014 13:05:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393592711!7477447!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3333 invoked from network); 28 Feb 2014 13:05:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:05:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD42cO017361
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:04:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1SD3wIg011808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:59 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3wdO024597; Fri, 28 Feb 2014 13:03:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 31A181C0FBF; Wed, 26 Feb 2014 12:54:56 -0500 (EST)
Date: Wed, 26 Feb 2014 12:54:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226175456.GD20775@phenom.dumpdata.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:27AM -0500, Waiman Long wrote:
> This patch adds para-virtualization support to the queue spinlock code
> by enabling the queue head to kick the lock holder CPU, if known,
> in when the lock isn't released for a certain amount of time. It
  ^^ - ?
> also enables the mutual monitoring of the queue head CPU and the
> following node CPU in the queue to make sure that their CPUs will
> stay scheduled in.

stay scheduled in? How are you influencing the hypervisor to schedule
them in next?  I see this patch "x86: Enable KVM to use qspinlock's PV support"
but that might not be the best choice.

What if the hypervisor has another CPU ready to go which is also a
lock holder? Wouldn't it be better to just provide a CPU mask of the
CPUs it could kick?

> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/include/asm/paravirt.h       |    9 ++-
>  arch/x86/include/asm/paravirt_types.h |   12 +++
>  arch/x86/include/asm/pvqspinlock.h    |  176 +++++++++++++++++++++++++++++++++
>  arch/x86/kernel/paravirt-spinlocks.c  |    4 +
>  kernel/locking/qspinlock.c            |   41 +++++++-
>  5 files changed, 235 insertions(+), 7 deletions(-)
>  create mode 100644 arch/x86/include/asm/pvqspinlock.h
> 
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index cd6e161..06d3279 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
>  }
>  
>  #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
> -
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
> +{
> +	PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
> +}
> +#else
>  static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
>  							__ticket_t ticket)
>  {
> @@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
>  {
>  	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
>  }
> -
> +#endif
>  #endif
>  
>  #ifdef CONFIG_X86_32
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 7549b8b..87f8836 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -333,9 +333,21 @@ struct arch_spinlock;
>  typedef u16 __ticket_t;
>  #endif
>  
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +enum pv_kick_type {
> +	PV_KICK_LOCK_HOLDER,
> +	PV_KICK_QUEUE_HEAD,
> +	PV_KICK_NEXT_NODE
> +};
> +#endif
> +
>  struct pv_lock_ops {
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +	void (*kick_cpu)(int cpu, enum pv_kick_type);
> +#else
>  	struct paravirt_callee_save lock_spinning;
>  	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
> +#endif
>  };
>  
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
> new file mode 100644
> index 0000000..45aae39
> --- /dev/null
> +++ b/arch/x86/include/asm/pvqspinlock.h
> @@ -0,0 +1,176 @@
> +#ifndef _ASM_X86_PVQSPINLOCK_H
> +#define _ASM_X86_PVQSPINLOCK_H
> +
> +/*
> + *	Queue Spinlock Para-Virtualization Support
> + *
> + *	+------+	    +-----+ nxtcpu_p1  +----+
> + *	| Lock |	    |Queue|----------->|Next|
> + *	|Holder|<-----------|Head |<-----------|Node|
> + *	+------+ prev_qcode +-----+ prev_qcode +----+
> + *
> + * As long as the current lock holder passes through the slowpath, the queue

Um, why would the lock holder pass through the slowpath? It already
has the lock, hasn't it? Or is this when it acquired it (either via the
fastpath or the slowpath) and it stashes this information somewhere?


> + * head CPU will have its CPU number stored in prev_qcode. The situation is
> + * the same for the node next to the queue head.
                       ^^^^^^^^         ^^^^^^^^^^

Do you mean to say next node's queue head?
> + *
> + * The next node, while setting up the next pointer in the queue head, can
> + * also store its CPU number in that node. With that change, the queue head

can or MUST?

> + * will have the CPU numbers of both its upstream and downstream neighbors.
> + *
> + * To make forward progress in lock acquisition and release, it is necessary
> + * that both the lock holder and the queue head virtual CPUs are present.
> + * The queue head can monitor the lock holder, but the lock holder can't
> + * monitor the queue head back. As a result, the next node is also brought
> + * into the picture to monitor the queue head. In the above diagram, all the
> + * 3 virtual CPUs should be present with the queue head and next node
> + * monitoring each other to make sure they are both present.

OK, that implies you must have those 3 VCPUs active, right?
> + *
> + * Heartbeat counters are used to track if a neighbor is active. There are
> + * 3 different sets of heartbeat counter monitoring going on:
> + * 1) The queue head will wait until the number loop iteration exceeds a
> + *    certain threshold (HEAD_SPIN_THRESHOLD). In that case, it will send
> + *    a kick-cpu signal to the lock holder if it has the CPU number available.
> + *    The kick-cpu siginal will be sent only once as the real lock holder
> + *    may not be the same as what the queue head thinks it is.

Why would it not be the same?

Is there another patch I should read before asking these questions?

> + * 2) The queue head will periodically clear the active flag of the next node.
> + *    It will then check to see if the active flag remains cleared at the end
> + *    of the cycle. If it is, the next node CPU may be scheduled out. So it
> + *    send a kick-cpu signal to make sure that the next node CPU remain active.

So the next CPU can be scheduled out, but you also kick it to make sure it is
active (i.e., scheduled in). Or maybe I am reading this wrong?
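As I read it, mechanisms 2 and 3 are a clear-and-check heartbeat. A
single-threaded model (my own paraphrase of the quoted description, with
made-up names, not the patch's code):

```c
#include <stdbool.h>

/* Toy clear-and-check heartbeat (names invented for illustration):
 * the queue head clears the next node's active flag, and if the flag
 * is still clear one period later it concludes the next node's vCPU
 * is not running and issues a kick. */
struct hb_node { bool active; };

/* Next node's side: each spin iteration re-asserts liveness. */
static void hb_member_spin_once(struct hb_node *self)
{
	self->active = true;
}

/* Queue head's side: phase 0 clears the flag, phase 1 checks it.
 * Returns true when a kick should be sent. */
static bool hb_head_check(struct hb_node *next, int phase)
{
	if (phase == 0) {
		next->active = false;	/* clear at start of period */
		return false;
	}
	return !next->active;		/* still clear: looks dead */
}
```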

> + * 3) The next node CPU will monitor its own active flag to see if it gets
> + *    clear periodically. If it is not, the queue head CPU may be scheduled
         ^^^^ cleared                                             ^^^ have been?
> + *    out. It will then send the kick-cpu signal to the queue head CPU.
> + */
> +
> +/*
> + * Loop thresholds
> + */
> +#define	HEAD_SPIN_THRESHOLD	(1<<12)	/* Threshold to kick lock holder  */
> +#define	CLEAR_ACTIVE_THRESHOLD	(1<<8)	/* Threahold for clearing active flag */
> +#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)

Something is off with the tabs here.
> +
> +/*
> + * PV macros
> + */
> +#define PV_SET_VAR(type, var, val)	type var = val
> +#define PV_VAR(var)			var
> +#define	PV_GET_NXTCPU(node)		(node)->pv.nxtcpu_p1

Ditto.
> +
> +/*
> + * Additional fields to be added to the qnode structure
> + *
> + * Try to cram the PV fields into a 32 bits so that it won't increase the
> + * qnode size in x86-64.
> + */
> +#if CONFIG_NR_CPUS >= (1 << 16)
> +#define _cpuid_t	u32
> +#else
> +#define _cpuid_t	u16
> +#endif
> +
> +struct pv_qvars {
> +	u8	 active;	/* Set if CPU active		*/
> +	u8	 prehead;	/* Set if next to queue head	*/
> +	_cpuid_t nxtcpu_p1;	/* CPU number of next node + 1	*/
> +};
> +
> +/**
> + * pv_init_vars - initialize fields in struct pv_qvars
> + * @pv: pointer to struct pv_qvars
> + */
> +static __always_inline void pv_init_vars(struct pv_qvars *pv)
> +{
> +	pv->active    = false;
> +	pv->prehead   = false;
> +	pv->nxtcpu_p1 = 0;
> +}
> +
> +/**
> + * head_spin_check - perform para-virtualization checks for queue head
> + * @count : loop count
> + * @qcode : queue code of the supposed lock holder
> + * @nxtcpu: CPU number of next node + 1
> + * @next  : pointer to the next node
> + * @offset: offset of the pv_qvars within the qnode
> + *
> + * 4 checks will be done:
> + * 1) See if it is time to kick the lock holder
> + * 2) Set the prehead flag of the next node
> + * 3) Clear the active flag of the next node periodically
> + * 4) If the active flag is not set after a while, assume the CPU of the
> + *    next-in-line node is offline and kick it back up again.
> + */
> +static __always_inline void
> +pv_head_spin_check(int *count, u32 qcode, int nxtcpu, void *next, int offset)
> +{
> +	if (!static_key_false(&paravirt_spinlocks_enabled))
> +		return;
> +	if ((++(*count) == HEAD_SPIN_THRESHOLD) && qcode) {
> +		/*
> +		 * Get the CPU number of the lock holder & kick it
> +		 * The lock may have been stealed by another CPU
                                          ^^^^^^ - stolen

> +		 * if PARAVIRT_UNFAIR_LOCKS is set, so the computed
> +		 * CPU number may not be the actual lock holder.
> +		 */
> +		int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
> +		__queue_kick_cpu(cpu, PV_KICK_LOCK_HOLDER);
> +	}
> +	if (next) {
> +		struct pv_qvars *pv = (struct pv_qvars *)
> +				      ((char *)next + offset);
> +
> +		if (!pv->prehead)
> +			pv->prehead = true;
> +		if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
> +			pv->active = false;
> +		if (((*count & CLEAR_ACTIVE_MASK) == 0) &&
> +			!pv->active && nxtcpu)
> +			/*
> +			 * The CPU of the next node doesn't seem to be
> +			 * active, need to kick it to make sure that
> +			 * it is ready to be transitioned to queue head.
> +			 */
> +			__queue_kick_cpu(nxtcpu - 1, PV_KICK_NEXT_NODE);
> +	}
> +}
> +
> +/**
> + * head_spin_check - perform para-virtualization checks for queue member
> + * @pv   : pointer to struct pv_qvars
> + * @count: loop count
> + * @qcode: queue code of the previous node (queue head if pv->prehead set)
> + *
> + * Set the active flag if it is next to the queue head
> + */
> +static __always_inline void
> +pv_queue_spin_check(struct pv_qvars *pv, int *count, u32 qcode)
> +{
> +	if (!static_key_false(&paravirt_spinlocks_enabled))
> +		return;
> +	if (ACCESS_ONCE(pv->prehead)) {
> +		if (pv->active == false) {
> +			*count = 0;	/* Reset counter */
> +			pv->active = true;
> +		}
> +		if ((++(*count) >= 4 * CLEAR_ACTIVE_THRESHOLD) && qcode) {

This magic value could be wrapped in a macro.
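Something like (macro name invented, value of CLEAR_ACTIVE_THRESHOLD taken
from the patch hunk above):

```c
#define CLEAR_ACTIVE_THRESHOLD	(1 << 8)		/* from the patch */
#define QHEAD_KICK_THRESHOLD	(4 * CLEAR_ACTIVE_THRESHOLD)
```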
> +			/*
> +			 * The queue head isn't clearing the active flag for
                                          ^^^^^^^^^^^^^ hadn't cleared
> +			 * too long. Need to kick it.
> +			 */
> +			int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
> +			__queue_kick_cpu(cpu, PV_KICK_QUEUE_HEAD);
> +			*count = 0;
> +		}
> +	}
> +}
> +
> +/**
> + * pv_set_cpu - set CPU # in the given pv_qvars structure
> + * @pv : pointer to struct pv_qvars to be set
> + * @cpu: cpu number to be set
> + */
> +static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)
> +{
> +	pv->nxtcpu_p1 = cpu + 1;
> +}
> +
> +#endif /* _ASM_X86_PVQSPINLOCK_H */
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index 8c67cbe..30d76f5 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -11,9 +11,13 @@
>  #ifdef CONFIG_PARAVIRT_SPINLOCKS
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +	.kick_cpu = paravirt_nop,
> +#else
>  	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
>  	.unlock_kick = paravirt_nop,
>  #endif
> +#endif
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
>  
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 22a63fa..f10446e 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -58,6 +58,26 @@
>   */
>  
>  /*
> + * Para-virtualized queue spinlock support
> + */
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +#include <asm/pvqspinlock.h>
> +#else
> +
> +#define PV_SET_VAR(type, var, val)
> +#define PV_VAR(var)			0
> +#define PV_GET_NXTCPU(node)		0
> +
> +struct pv_qvars {};
> +static __always_inline void pv_init_vars(struct pv_qvars *pv)		{}
> +static __always_inline void pv_head_spin_check(int *count, u32 qcode,
> +				int nxtcpu, void *next, int offset)	{}
> +static __always_inline void pv_queue_spin_check(struct pv_qvars *pv,
> +				int *count, u32 qcode)			{}
> +static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)	{}
> +#endif
> +
> +/*
>   * The 24-bit queue node code is divided into the following 2 fields:
>   * Bits 0-1 : queue node index (4 nodes)
>   * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
> @@ -77,15 +97,13 @@
>  
>  /*
>   * The queue node structure
> - *
> - * This structure is essentially the same as the mcs_spinlock structure
> - * in mcs_spinlock.h file. This structure is retained for future extension
> - * where new fields may be added.

How come you are deleting this? Should that be a part of another patch?

>   */
>  struct qnode {
>  	u32		 wait;		/* Waiting flag		*/
> +	struct pv_qvars	 pv;		/* Para-virtualization  */
>  	struct qnode	*next;		/* Next queue node addr */
>  };
> +#define PV_OFFSET	offsetof(struct qnode, pv)
>  
>  struct qnode_set {
>  	struct qnode	nodes[MAX_QNODES];
> @@ -441,6 +459,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  	unsigned int cpu_nr, qn_idx;
>  	struct qnode *node, *next;
>  	u32 prev_qcode, my_qcode;
> +	PV_SET_VAR(int, hcnt, 0);
>  
>  	/*
>  	 * Try the quick spinning code path
> @@ -468,6 +487,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  	 */
>  	node->wait = true;
>  	node->next = NULL;
> +	pv_init_vars(&node->pv);
>  
>  	/*
>  	 * The lock may be available at this point, try again if no task was
> @@ -522,13 +542,22 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  		 * and set up the "next" fields of the that node.
>  		 */
>  		struct qnode *prev = xlate_qcode(prev_qcode);
> +		PV_SET_VAR(int, qcnt, 0);
>  
>  		ACCESS_ONCE(prev->next) = node;
>  		/*
> +		 * Set current CPU number into the previous node
> +		 */
> +		pv_set_cpu(&prev->pv, cpu_nr);
> +
> +		/*
>  		 * Wait until the waiting flag is off
>  		 */
> -		while (smp_load_acquire(&node->wait))
> +		while (smp_load_acquire(&node->wait)) {
>  			arch_mutex_cpu_relax();
> +			pv_queue_spin_check(&node->pv, PV_VAR(&qcnt),
> +					    prev_qcode);
> +		}
>  	}
>  
>  	/*
> @@ -560,6 +589,8 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  				goto release_node;
>  		}
>  		arch_mutex_cpu_relax();
> +		pv_head_spin_check(PV_VAR(&hcnt), prev_qcode,
> +				PV_GET_NXTCPU(node), node->next, PV_OFFSET);
>  	}
>  
>  notify_next:
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:05:16 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN7s-0004Pi-3T; Fri, 28 Feb 2014 13:05:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN7q-0004Oe-FH
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:05:14 +0000
Received: from [193.109.254.147:21956] by server-2.bemta-14.messagelabs.com id
	7B/1A-01236-98980135; Fri, 28 Feb 2014 13:05:13 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393592711!7477447!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3333 invoked from network); 28 Feb 2014 13:05:12 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-6.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:05:12 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD42cO017361
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:04:03 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1SD3wIg011808
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:59 GMT
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3wdO024597; Fri, 28 Feb 2014 13:03:58 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:57 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 31A181C0FBF; Wed, 26 Feb 2014 12:54:56 -0500 (EST)
Date: Wed, 26 Feb 2014 12:54:56 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226175456.GD20775@phenom.dumpdata.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-8-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 7/8] pvqspinlock,
 x86: Add qspinlock para-virtualization support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:27AM -0500, Waiman Long wrote:
> This patch adds para-virtualization support to the queue spinlock code
> by enabling the queue head to kick the lock holder CPU, if known,
> in when the lock isn't released for a certain amount of time. It
  ^^ - ?
> also enables the mutual monitoring of the queue head CPU and the
> following node CPU in the queue to make sure that their CPUs will
> stay scheduled in.

Stay scheduled in? How are you influencing the hypervisor to schedule
them back in? I see the patch "x86: Enable KVM to use qspinlock's PV support",
but that might not be the best choice.

What if the hypervisor has another CPU ready to go that is also a
lock holder? Wouldn't it be better to just provide a CPU mask of the
CPUs it could kick?
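A minimal userspace sketch of that cpumask idea (all names here are hypothetical, not from the patch): instead of kicking the single CPU the queue head *thinks* holds the lock, hand the hypervisor a mask of plausible holders and let it pick one that is runnable. A 64-bit word stands in for a real kernel cpumask.

```c
#include <stdint.h>

typedef uint64_t cpumask_t;   /* stand-in for a real cpumask */

static int kicked[64];

/* Stub for the real kick hypercall. */
static void hypervisor_kick(int cpu)
{
	kicked[cpu]++;
}

/* Kick every CPU named in the candidate mask. */
static void kick_candidates(cpumask_t mask)
{
	for (int cpu = 0; cpu < 64; cpu++)
		if (mask & (1ULL << cpu))
			hypervisor_kick(cpu);
}
```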

> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/include/asm/paravirt.h       |    9 ++-
>  arch/x86/include/asm/paravirt_types.h |   12 +++
>  arch/x86/include/asm/pvqspinlock.h    |  176 +++++++++++++++++++++++++++++++++
>  arch/x86/kernel/paravirt-spinlocks.c  |    4 +
>  kernel/locking/qspinlock.c            |   41 +++++++-
>  5 files changed, 235 insertions(+), 7 deletions(-)
>  create mode 100644 arch/x86/include/asm/pvqspinlock.h
> 
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index cd6e161..06d3279 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
>  }
>  
>  #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
> -
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
> +{
> +	PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
> +}
> +#else
>  static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
>  							__ticket_t ticket)
>  {
> @@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
>  {
>  	PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
>  }
> -
> +#endif
>  #endif
>  
>  #ifdef CONFIG_X86_32
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 7549b8b..87f8836 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -333,9 +333,21 @@ struct arch_spinlock;
>  typedef u16 __ticket_t;
>  #endif
>  
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +enum pv_kick_type {
> +	PV_KICK_LOCK_HOLDER,
> +	PV_KICK_QUEUE_HEAD,
> +	PV_KICK_NEXT_NODE
> +};
> +#endif
> +
>  struct pv_lock_ops {
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +	void (*kick_cpu)(int cpu, enum pv_kick_type);
> +#else
>  	struct paravirt_callee_save lock_spinning;
>  	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
> +#endif
>  };
>  
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/include/asm/pvqspinlock.h b/arch/x86/include/asm/pvqspinlock.h
> new file mode 100644
> index 0000000..45aae39
> --- /dev/null
> +++ b/arch/x86/include/asm/pvqspinlock.h
> @@ -0,0 +1,176 @@
> +#ifndef _ASM_X86_PVQSPINLOCK_H
> +#define _ASM_X86_PVQSPINLOCK_H
> +
> +/*
> + *	Queue Spinlock Para-Virtualization Support
> + *
> + *	+------+	    +-----+ nxtcpu_p1  +----+
> + *	| Lock |	    |Queue|----------->|Next|
> + *	|Holder|<-----------|Head |<-----------|Node|
> + *	+------+ prev_qcode +-----+ prev_qcode +----+
> + *
> + * As long as the current lock holder passes through the slowpath, the queue

Um, why would the lock holder pass through the slowpath? It already
has the lock, hasn't it? Or is this about when it acquired the lock (either
via the fastpath or the slowpath) and stashed that information somewhere?


> + * head CPU will have its CPU number stored in prev_qcode. The situation is
> + * the same for the node next to the queue head.
                       ^^^^^^^^         ^^^^^^^^^^

Do you mean to say next node's queue head?
> + *
> + * The next node, while setting up the next pointer in the queue head, can
> + * also store its CPU number in that node. With that change, the queue head

can or MUST?

> + * will have the CPU numbers of both its upstream and downstream neighbors.
> + *
> + * To make forward progress in lock acquisition and release, it is necessary
> + * that both the lock holder and the queue head virtual CPUs are present.
> + * The queue head can monitor the lock holder, but the lock holder can't
> + * monitor the queue head back. As a result, the next node is also brought
> + * into the picture to monitor the queue head. In the above diagram, all the
> + * 3 virtual CPUs should be present with the queue head and next node
> + * monitoring each other to make sure they are both present.

OK, that implies you must have all 3 VCPUs active, right?
> + *
> + * Heartbeat counters are used to track if a neighbor is active. There are
> + * 3 different sets of heartbeat counter monitoring going on:
> + * 1) The queue head will wait until the number loop iteration exceeds a
> + *    certain threshold (HEAD_SPIN_THRESHOLD). In that case, it will send
> + *    a kick-cpu signal to the lock holder if it has the CPU number available.
> + *    The kick-cpu siginal will be sent only once as the real lock holder
> + *    may not be the same as what the queue head thinks it is.

Why would it not be the same?

Is there another patch I should read before asking these questions?

> + * 2) The queue head will periodically clear the active flag of the next node.
> + *    It will then check to see if the active flag remains cleared at the end
> + *    of the cycle. If it is, the next node CPU may be scheduled out. So it
> + *    send a kick-cpu signal to make sure that the next node CPU remain active.

So the next node's CPU can be scheduled out, but you kick it to make sure it
stays active (i.e. scheduled in). Or am I reading this wrong?

> + * 3) The next node CPU will monitor its own active flag to see if it gets
> + *    clear periodically. If it is not, the queue head CPU may be scheduled
         ^^^^ cleared                                             ^^^ have been?
> + *    out. It will then send the kick-cpu signal to the queue head CPU.
> + */
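To make the flag protocol in check 2) above concrete, here is a minimal userspace model of it (names and structure are hypothetical simplifications of the patch, not its actual code): the queue head clears the next node's active flag once per cycle and, at the start of the following cycle, kicks the next node's vCPU if the flag was never re-asserted.

```c
#include <stdbool.h>

#define CLEAR_ACTIVE_THRESHOLD	(1 << 8)
#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)

struct qnode_model { bool active; };

static int next_node_kicks;

/* Queue head side: run once per spin iteration. */
static void head_check(int *count, struct qnode_model *next)
{
	++*count;
	if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
		next->active = false;		/* clear mid-cycle */
	else if ((*count & CLEAR_ACTIVE_MASK) == 0 && !next->active)
		next_node_kicks++;		/* flag stayed clear: kick */
}

/* Next-node side: a running vCPU re-asserts the flag every spin. */
static void next_node_spin(struct qnode_model *self)
{
	self->active = true;
}
```

As long as the next node keeps spinning, the flag is re-asserted before the head re-checks it and no kick is sent; once the next node stops running, every cycle boundary produces a kick.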
> +
> +/*
> + * Loop thresholds
> + */
> +#define	HEAD_SPIN_THRESHOLD	(1<<12)	/* Threshold to kick lock holder  */
> +#define	CLEAR_ACTIVE_THRESHOLD	(1<<8)	/* Threahold for clearing active flag */
> +#define CLEAR_ACTIVE_MASK	(CLEAR_ACTIVE_THRESHOLD - 1)

Something is off with the tabs here.
> +
> +/*
> + * PV macros
> + */
> +#define PV_SET_VAR(type, var, val)	type var = val
> +#define PV_VAR(var)			var
> +#define	PV_GET_NXTCPU(node)		(node)->pv.nxtcpu_p1

Ditto.
> +
> +/*
> + * Additional fields to be added to the qnode structure
> + *
> + * Try to cram the PV fields into a 32 bits so that it won't increase the
> + * qnode size in x86-64.
> + */
> +#if CONFIG_NR_CPUS >= (1 << 16)
> +#define _cpuid_t	u32
> +#else
> +#define _cpuid_t	u16
> +#endif
> +
> +struct pv_qvars {
> +	u8	 active;	/* Set if CPU active		*/
> +	u8	 prehead;	/* Set if next to queue head	*/
> +	_cpuid_t nxtcpu_p1;	/* CPU number of next node + 1	*/
> +};
> +
> +/**
> + * pv_init_vars - initialize fields in struct pv_qvars
> + * @pv: pointer to struct pv_qvars
> + */
> +static __always_inline void pv_init_vars(struct pv_qvars *pv)
> +{
> +	pv->active    = false;
> +	pv->prehead   = false;
> +	pv->nxtcpu_p1 = 0;
> +}
> +
> +/**
> + * head_spin_check - perform para-virtualization checks for queue head
> + * @count : loop count
> + * @qcode : queue code of the supposed lock holder
> + * @nxtcpu: CPU number of next node + 1
> + * @next  : pointer to the next node
> + * @offset: offset of the pv_qvars within the qnode
> + *
> + * 4 checks will be done:
> + * 1) See if it is time to kick the lock holder
> + * 2) Set the prehead flag of the next node
> + * 3) Clear the active flag of the next node periodically
> + * 4) If the active flag is not set after a while, assume the CPU of the
> + *    next-in-line node is offline and kick it back up again.
> + */
> +static __always_inline void
> +pv_head_spin_check(int *count, u32 qcode, int nxtcpu, void *next, int offset)
> +{
> +	if (!static_key_false(&paravirt_spinlocks_enabled))
> +		return;
> +	if ((++(*count) == HEAD_SPIN_THRESHOLD) && qcode) {
> +		/*
> +		 * Get the CPU number of the lock holder & kick it
> +		 * The lock may have been stealed by another CPU
                                          ^^^^^^ - stolen

> +		 * if PARAVIRT_UNFAIR_LOCKS is set, so the computed
> +		 * CPU number may not be the actual lock holder.
> +		 */
> +		int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
> +		__queue_kick_cpu(cpu, PV_KICK_LOCK_HOLDER);
> +	}
> +	if (next) {
> +		struct pv_qvars *pv = (struct pv_qvars *)
> +				      ((char *)next + offset);
> +
> +		if (!pv->prehead)
> +			pv->prehead = true;
> +		if ((*count & CLEAR_ACTIVE_MASK) == CLEAR_ACTIVE_MASK)
> +			pv->active = false;
> +		if (((*count & CLEAR_ACTIVE_MASK) == 0) &&
> +			!pv->active && nxtcpu)
> +			/*
> +			 * The CPU of the next node doesn't seem to be
> +			 * active, need to kick it to make sure that
> +			 * it is ready to be transitioned to queue head.
> +			 */
> +			__queue_kick_cpu(nxtcpu - 1, PV_KICK_NEXT_NODE);
> +	}
> +}
> +
> +/**
> + * head_spin_check - perform para-virtualization checks for queue member
> + * @pv   : pointer to struct pv_qvars
> + * @count: loop count
> + * @qcode: queue code of the previous node (queue head if pv->prehead set)
> + *
> + * Set the active flag if it is next to the queue head
> + */
> +static __always_inline void
> +pv_queue_spin_check(struct pv_qvars *pv, int *count, u32 qcode)
> +{
> +	if (!static_key_false(&paravirt_spinlocks_enabled))
> +		return;
> +	if (ACCESS_ONCE(pv->prehead)) {
> +		if (pv->active == false) {
> +			*count = 0;	/* Reset counter */
> +			pv->active = true;
> +		}
> +		if ((++(*count) >= 4 * CLEAR_ACTIVE_THRESHOLD) && qcode) {

This magic value could be wrapped in a macro.
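For instance (macro name hypothetical), the `4 *` factor could be named next to the threshold it scales:

```c
#define CLEAR_ACTIVE_THRESHOLD		(1 << 8)
#define QHEAD_INACTIVE_THRESHOLD	(4 * CLEAR_ACTIVE_THRESHOLD)

/* The check then reads: if (++(*count) >= QHEAD_INACTIVE_THRESHOLD) ... */
```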
> +			/*
> +			 * The queue head isn't clearing the active flag for
                                          ^^^^^^^^^^^^^ hadn't cleared
> +			 * too long. Need to kick it.
> +			 */
> +			int cpu = (qcode >> (_QCODE_VAL_OFFSET + 2)) - 1;
> +			__queue_kick_cpu(cpu, PV_KICK_QUEUE_HEAD);
> +			*count = 0;
> +		}
> +	}
> +}
> +
> +/**
> + * pv_set_cpu - set CPU # in the given pv_qvars structure
> + * @pv : pointer to struct pv_qvars to be set
> + * @cpu: cpu number to be set
> + */
> +static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)
> +{
> +	pv->nxtcpu_p1 = cpu + 1;
> +}
> +
> +#endif /* _ASM_X86_PVQSPINLOCK_H */
> diff --git a/arch/x86/kernel/paravirt-spinlocks.c b/arch/x86/kernel/paravirt-spinlocks.c
> index 8c67cbe..30d76f5 100644
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -11,9 +11,13 @@
>  #ifdef CONFIG_PARAVIRT_SPINLOCKS
>  struct pv_lock_ops pv_lock_ops = {
>  #ifdef CONFIG_SMP
> +#ifdef CONFIG_QUEUE_SPINLOCK
> +	.kick_cpu = paravirt_nop,
> +#else
>  	.lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
>  	.unlock_kick = paravirt_nop,
>  #endif
> +#endif
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
>  
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 22a63fa..f10446e 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -58,6 +58,26 @@
>   */
>  
>  /*
> + * Para-virtualized queue spinlock support
> + */
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +#include <asm/pvqspinlock.h>
> +#else
> +
> +#define PV_SET_VAR(type, var, val)
> +#define PV_VAR(var)			0
> +#define PV_GET_NXTCPU(node)		0
> +
> +struct pv_qvars {};
> +static __always_inline void pv_init_vars(struct pv_qvars *pv)		{}
> +static __always_inline void pv_head_spin_check(int *count, u32 qcode,
> +				int nxtcpu, void *next, int offset)	{}
> +static __always_inline void pv_queue_spin_check(struct pv_qvars *pv,
> +				int *count, u32 qcode)			{}
> +static __always_inline void pv_set_cpu(struct pv_qvars *pv, int cpu)	{}
> +#endif
> +
> +/*
>   * The 24-bit queue node code is divided into the following 2 fields:
>   * Bits 0-1 : queue node index (4 nodes)
>   * Bits 2-23: CPU number + 1   (4M - 1 CPUs)
> @@ -77,15 +97,13 @@
>  
>  /*
>   * The queue node structure
> - *
> - * This structure is essentially the same as the mcs_spinlock structure
> - * in mcs_spinlock.h file. This structure is retained for future extension
> - * where new fields may be added.

How come you are deleting this? Should that be a part of another patch?

>   */
>  struct qnode {
>  	u32		 wait;		/* Waiting flag		*/
> +	struct pv_qvars	 pv;		/* Para-virtualization  */
>  	struct qnode	*next;		/* Next queue node addr */
>  };
> +#define PV_OFFSET	offsetof(struct qnode, pv)
>  
>  struct qnode_set {
>  	struct qnode	nodes[MAX_QNODES];
> @@ -441,6 +459,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  	unsigned int cpu_nr, qn_idx;
>  	struct qnode *node, *next;
>  	u32 prev_qcode, my_qcode;
> +	PV_SET_VAR(int, hcnt, 0);
>  
>  	/*
>  	 * Try the quick spinning code path
> @@ -468,6 +487,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  	 */
>  	node->wait = true;
>  	node->next = NULL;
> +	pv_init_vars(&node->pv);
>  
>  	/*
>  	 * The lock may be available at this point, try again if no task was
> @@ -522,13 +542,22 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  		 * and set up the "next" fields of the that node.
>  		 */
>  		struct qnode *prev = xlate_qcode(prev_qcode);
> +		PV_SET_VAR(int, qcnt, 0);
>  
>  		ACCESS_ONCE(prev->next) = node;
>  		/*
> +		 * Set current CPU number into the previous node
> +		 */
> +		pv_set_cpu(&prev->pv, cpu_nr);
> +
> +		/*
>  		 * Wait until the waiting flag is off
>  		 */
> -		while (smp_load_acquire(&node->wait))
> +		while (smp_load_acquire(&node->wait)) {
>  			arch_mutex_cpu_relax();
> +			pv_queue_spin_check(&node->pv, PV_VAR(&qcnt),
> +					    prev_qcode);
> +		}
>  	}
>  
>  	/*
> @@ -560,6 +589,8 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>  				goto release_node;
>  		}
>  		arch_mutex_cpu_relax();
> +		pv_head_spin_check(PV_VAR(&hcnt), prev_qcode,
> +				PV_GET_NXTCPU(node), node->next, PV_OFFSET);
>  	}
>  
>  notify_next:
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:05:18 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:05:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJN7t-0004RC-Sf; Fri, 28 Feb 2014 13:05:17 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJN7s-0004Q6-Qy
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 13:05:17 +0000
Received: from [85.158.139.211:39010] by server-17.bemta-5.messagelabs.com id
	E6/C3-31975-C8980135; Fri, 28 Feb 2014 13:05:16 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393592713!6913200!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18465 invoked from network); 28 Feb 2014 13:05:14 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:05:14 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SD42h4021651
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:04:03 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SD3ujp024557
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:03:57 GMT
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1SD3s6n011647; Fri, 28 Feb 2014 13:03:55 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:03:54 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 060B21C0FBE; Wed, 26 Feb 2014 12:09:00 -0500 (EST)
Date: Wed, 26 Feb 2014 12:08:59 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Waiman Long <Waiman.Long@hp.com>
Message-ID: <20140226170859.GC20775@phenom.dumpdata.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
 x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Wed, Feb 26, 2014 at 10:14:25AM -0500, Waiman Long wrote:
> This patch adds a KVM init function to activate the unfair queue
> spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
> option is selected.
> 
> Signed-off-by: Waiman Long <Waiman.Long@hp.com>
> ---
>  arch/x86/kernel/kvm.c |   17 +++++++++++++++++
>  1 files changed, 17 insertions(+), 0 deletions(-)
> 
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> index 713f1b3..a489140 100644
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
>  early_initcall(kvm_spinlock_init_jump);
>  
>  #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
> +
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> +/*
> + * Enable unfair lock if running in a real para-virtualized environment
> + */
> +static __init int kvm_unfair_locks_init_jump(void)
> +{
> +	if (!kvm_para_available())
> +		return 0;

I think you also need to check !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT).
Otherwise you might enable this, but the kicker function won't be
enabled.
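A userspace model of that suggested gating (the two booleans are hypothetical stand-ins for kvm_para_available()/kvm_para_has_feature()): only enable the unfair lock when the hypervisor can also deliver PV-unhalt kicks, otherwise a preempted vCPU could never be woken.

```c
#include <stdbool.h>

static bool para_available = true;	/* stub: kvm_para_available() */
static bool has_pv_unhalt  = false;	/* stub: KVM_FEATURE_PV_UNHALT */

static int unfair_locks_enabled;

static int unfair_locks_init(void)
{
	if (!para_available)
		return 0;
	if (!has_pv_unhalt)	/* the extra check suggested above */
		return 0;
	unfair_locks_enabled = 1;
	return 0;
}
```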
> +
> +	static_key_slow_inc(&paravirt_unfairlocks_enabled);
> +	printk(KERN_INFO "KVM setup unfair spinlock\n");
> +
> +	return 0;
> +}
> +early_initcall(kvm_unfair_locks_init_jump);
> +#endif
> -- 
> 1.7.1
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 13:16:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:16:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJNI9-0005Cw-4w; Fri, 28 Feb 2014 13:15:53 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJNI7-0005Cr-Ed
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 13:15:51 +0000
Received: from [85.158.139.211:32254] by server-15.bemta-5.messagelabs.com id
	FF/47-24395-60C80135; Fri, 28 Feb 2014 13:15:50 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393593348!6907000!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29704 invoked from network); 28 Feb 2014 13:15:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 13:15:49 -0000
X-IronPort-AV: E=Sophos;i="4.97,561,1389744000"; d="scan'208";a="106606305"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 13:15:47 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 08:15:47 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJNI2-0001Lh-Qd;
	Fri, 28 Feb 2014 13:15:46 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJNI2-0005BM-NC;
	Fri, 28 Feb 2014 13:15:46 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25325-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 13:15:46 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25325: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25325 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25325/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-win7-amd64  7 windows-install      fail pass in 25322
 test-amd64-i386-xl-multivcpu 17 guest-start.2      fail in 25322 pass in 25325
 test-amd64-amd64-xl-sedf-pin 11 guest-saverestore  fail in 25322 pass in 25325

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install        fail like 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop     fail in 25322 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop    fail in 25322 never pass

version targeted for testing:
 xen                  7bedbbb5c31ec7d7e653b4fc606c9871661d5e89
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Tamas K Lengyel <tamas.lengyel@zentific.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantao Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 448 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Feb 28 13:39:46 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:39:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJNeu-0005d1-EX; Fri, 28 Feb 2014 13:39:24 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <daniel.kiper@oracle.com>) id 1WJNes-0005cw-Lv
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 13:39:22 +0000
Received: from [193.109.254.147:7027] by server-13.bemta-14.messagelabs.com id
	4A/67-01226-A8190135; Fri, 28 Feb 2014 13:39:22 +0000
X-Env-Sender: daniel.kiper@oracle.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393594759!2220909!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14003 invoked from network); 28 Feb 2014 13:39:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-11.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 13:39:21 -0000
Received: from ucsinet22.oracle.com (ucsinet22.oracle.com [156.151.31.94])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SDdCwX022195
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 13:39:12 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by ucsinet22.oracle.com (8.14.5+Sun/8.14.5) with ESMTP id
	s1SDdAx2000946
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 13:39:11 GMT
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1SDdAOc000927; Fri, 28 Feb 2014 13:39:10 GMT
Received: from olila.local.net-space.pl (/89.174.63.77)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 05:39:09 -0800
Date: Fri, 28 Feb 2014 14:39:04 +0100
From: Daniel Kiper <daniel.kiper@oracle.com>
To: george.dunlap@eu.citrix.com, ian.campbell@citrix.com, jbeulich@suse.com,
	keir@xen.org, konrad.wilk@oracle.com, phcoder@gmail.com,
	richard.l.maliszewski@intel.com, ross.philipson@citrix.com,
	stefano.stabellini@eu.citrix.com, xen-devel@lists.xen.org
Message-ID: <20140228133904.GB3516@olila.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: ucsinet22.oracle.com [156.151.31.94]
Subject: [Xen-devel] EFI + GRUB2 + Xen work update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Hey,

On Wednesday evening I was able to boot Xen using GRUB2 on an EFI platform.
Not everything is in place yet, but it is a good sign. I hope to add the
missing pieces and clean up the code in 2-3 weeks (if nothing unexpected
happens), so I suppose the first version of the patches should be ready
no later than the end of March.

This work would not have been possible without Vladimir, who added the
missing features to the multiboot2 protocol.
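
[For readers unfamiliar with the setup described above: a GRUB2 menu entry
that boots Xen through the multiboot2 protocol might look roughly like the
sketch below. This is a hypothetical illustration only; the file paths,
kernel version, and command-line options are assumptions, not taken from
the patches under discussion.]

    # Hypothetical GRUB2 menu entry booting Xen via multiboot2 on EFI.
    # Paths and options are illustrative; adjust to your installation.
    menuentry 'Xen hypervisor (multiboot2)' {
        insmod part_gpt
        insmod ext2
        # Load the hypervisor itself using the multiboot2 protocol
        multiboot2 /boot/xen.gz dom0_mem=1024M
        # Load the dom0 kernel and initrd as multiboot2 modules
        module2 /boot/vmlinuz-3.13 root=/dev/sda2 console=hvc0
        module2 /boot/initrd.img-3.13
    }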

BTW, Vladimir, when are you going to release GRUB2 version 2.02?

Have a nice weekend,

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Feb 28 13:48:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJNnC-0005mv-Fu; Fri, 28 Feb 2014 13:47:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <phcoder@gmail.com>) id 1WJNnA-0005mq-RH
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 13:47:57 +0000
Received: from [85.158.137.68:59920] by server-2.bemta-3.messagelabs.com id
	7C/D0-06531-B8390135; Fri, 28 Feb 2014 13:47:55 +0000
X-Env-Sender: phcoder@gmail.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393595275!98554!1
X-Originating-IP: [74.125.83.48]
X-SpamReason: No, hits=1.8 required=7.0 tests=ML_RADAR_SPEW_LINKS_14,
	SORTED_RECIPS,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 21848 invoked from network); 28 Feb 2014 13:47:55 -0000
Received: from mail-ee0-f48.google.com (HELO mail-ee0-f48.google.com)
	(74.125.83.48)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 13:47:55 -0000
Received: by mail-ee0-f48.google.com with SMTP id e51so1160955eek.35
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 05:47:55 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:subject:references
	:in-reply-to:content-type;
	bh=hKd2yRkR2zMugh8yxZCI8QuQv1H27TKKrwJuDnTO6cU=;
	b=urt3m2H0CwKyyS4W+eezeCNSo60WXuR/8YwRJh5rRAhLTezspgD5Yv4NlE6JfU0iS9
	CPbz+lke+fOVD5qz6Dqw7TULs0yrhtu8DLcKhuLQaPNbDRU9qzIokN5YRWOnEir3qq+F
	WOLszkl0LNmND7kMpzkq4WU9ISgrcM/SOVKrqenHCll0k4PsBow62aQWjg9oe6OEehGY
	S/nqUuaj+H1DxWQi0NGwe0n5eF8vijQKCXIpxtLGKVapvjjEN8IQw/p27xH8oNk7R7KN
	U+GH7+AGXoub449bxKiQibYV9+syiXZFsbMBmT6zOyVqCh/fTyY4ai+AVZUpDuZIckbh
	USAA==
X-Received: by 10.14.100.8 with SMTP id y8mr2412860eef.29.1393595275126;
	Fri, 28 Feb 2014 05:47:55 -0800 (PST)
Received: from [10.2.46.172] (public-docking-pat-hg-mapped-0033.ethz.ch.
	[195.176.110.98])
	by mx.google.com with ESMTPSA id l4sm10680731eeo.9.2014.02.28.05.47.53
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 05:47:53 -0800 (PST)
Message-ID: <53109387.9090507@gmail.com>
Date: Fri, 28 Feb 2014 14:47:51 +0100
From: =?UTF-8?B?VmxhZGltaXIgJ8+GLWNvZGVyL3BoY29kZXInIFNlcmJpbmVua28=?=
	<phcoder@gmail.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Icedove/24.3.0
MIME-Version: 1.0
To: Daniel Kiper <daniel.kiper@oracle.com>, george.dunlap@eu.citrix.com, 
	ian.campbell@citrix.com, jbeulich@suse.com, keir@xen.org, 
	konrad.wilk@oracle.com, richard.l.maliszewski@intel.com, 
	ross.philipson@citrix.com, stefano.stabellini@eu.citrix.com, 
	xen-devel@lists.xen.org
References: <20140228133904.GB3516@olila.local.net-space.pl>
In-Reply-To: <20140228133904.GB3516@olila.local.net-space.pl>
X-Enigmail-Version: 1.6
Subject: Re: [Xen-devel] EFI + GRUB2 + Xen work update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2664404212102372545=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============2664404212102372545==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="VQkVdnh4kgspukE2PuertQj4nQBNNfHEj"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VQkVdnh4kgspukE2PuertQj4nQBNNfHEj
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 28.02.2014 14:39, Daniel Kiper wrote:
> Hey,
>
> On Wednesday evening I was able to boot Xen using GRUB2 on an EFI platform.
> Not everything is in place yet, but it is a good sign. I hope to add the
> missing pieces and clean up the code in 2-3 weeks (if nothing unexpected
> happens), so I suppose the first version of the patches should be ready
> no later than the end of March.
>
> This work would not have been possible without Vladimir, who added the
> missing features to the multiboot2 protocol.
>
> BTW, Vladimir, when are you going to release GRUB2 version 2.02?
>
I've just finished my master's in mathematics, which was preventing me
from spending more time on GRUB. I'm now sifting through the bug reports
against 2.02~beta2; the release date will depend on how many there are.
I expect to roll out 2.02~beta3 in a couple of days.
> Have a nice weekend,
>
> Daniel
>



--VQkVdnh4kgspukE2PuertQj4nQBNNfHEj
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Using GnuPG with Icedove - http://www.enigmail.net/

iF4EAREKAAYFAlMQk4cACgkQmBXlbbo5nOtIBQD/UtB6e/MTD/cpfFbEWtDFvcP/
V/MD9vIZ5H4bY2d3vOsA/iZf8lJRHHjY5U9/Rp47fRVOY1wrVI1XLpEQ6QbXBNlC
=YR5g
-----END PGP SIGNATURE-----

--VQkVdnh4kgspukE2PuertQj4nQBNNfHEj--


--===============2664404212102372545==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2664404212102372545==--


Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--===============2664404212102372545==
Content-Type: multipart/signed; micalg=pgp-sha512;
 protocol="application/pgp-signature";
 boundary="VQkVdnh4kgspukE2PuertQj4nQBNNfHEj"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--VQkVdnh4kgspukE2PuertQj4nQBNNfHEj
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

On 28.02.2014 14:39, Daniel Kiper wrote:
> Hey,
> 
> On Wednesday evening I was able to boot Xen using GRUB2 on an EFI platform.
> Not everything is in place yet, but it is a good sign. I hope that I
> will be able to add the missing pieces and clean up the code in 2-3 weeks
> (if nothing unexpected happens). So I expect that the first version of the
> patches should be ready no later than the end of March.
> 
> This work would not have been possible without Vladimir, who added the
> missing features to the multiboot2 protocol.
> 
> BTW, Vladimir, when are you going to release GRUB2 version 2.02?
> 
I've just finished my master's in mathematics, which was preventing me
from spending more time on GRUB. I'm sifting through the reports on
2.02~beta2; the release date will depend on how many there are. I expect
to roll out 2.02~beta3 in a couple of days.
> Have a nice weekend,
> 
> Daniel
>



--VQkVdnh4kgspukE2PuertQj4nQBNNfHEj
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Using GnuPG with Icedove - http://www.enigmail.net/

iF4EAREKAAYFAlMQk4cACgkQmBXlbbo5nOtIBQD/UtB6e/MTD/cpfFbEWtDFvcP/
V/MD9vIZ5H4bY2d3vOsA/iZf8lJRHHjY5U9/Rp47fRVOY1wrVI1XLpEQ6QbXBNlC
=YR5g
-----END PGP SIGNATURE-----

--VQkVdnh4kgspukE2PuertQj4nQBNNfHEj--


--===============2664404212102372545==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2664404212102372545==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 13:53:25 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 13:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJNsM-0005zT-9I; Fri, 28 Feb 2014 13:53:18 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <riku.voipio@linaro.org>) id 1WJNdl-0005Xp-Q5
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 13:38:14 +0000
Received: from [85.158.137.68:29595] by server-4.bemta-3.messagelabs.com id
	C4/66-04858-44190135; Fri, 28 Feb 2014 13:38:12 +0000
X-Env-Sender: riku.voipio@linaro.org
X-Msg-Ref: server-9.tower-31.messagelabs.com!1393594690!4809474!1
X-Originating-IP: [209.85.160.178]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23927 invoked from network); 28 Feb 2014 13:38:11 -0000
Received: from mail-yk0-f178.google.com (HELO mail-yk0-f178.google.com)
	(209.85.160.178)
	by server-9.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 13:38:11 -0000
Received: by mail-yk0-f178.google.com with SMTP id 79so1865766ykr.9
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 05:38:10 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:mime-version:in-reply-to:references:date
	:message-id:subject:from:to:cc:content-type;
	bh=tIWcfk66Z7GHhIIafzPM7N7OHii5r5TSLL1582adDnI=;
	b=KeMbDHdRIISZNK3fihgVx6t1oNdS7b5rq67RyX+gdGsInSGkO7RGf6jjMPUtUSsCDH
	OyG4aWVOkvv0HRUra21IztJghxnNP7q7dxz/YARokuh9t1owPoalankKjUWkHzoX0jHF
	4Qk+/k03QohrEYVTR7etqaRjrF8FHhE4zMCF25DrHpMenqyLv/ypM6IyBo0W1tiH8kZi
	WyfnRck3LaNWJVKyTM7dPXYGmD9c90YApAaddgyehN5q/QByH3hg9hkBZyeHWl3dzFCP
	iAwF/DJj5qo+prZyXRdcqFfR+FkRvQCrc8/mdCq4Id8v14iUfpiFQlvfF45DuzYDDTib
	8Fpg==
X-Gm-Message-State: ALoCoQkImekWiAPqlc+Dmyp4N0e8yPNMUSkAjzsJLweEdHutq6EwR4ObGkdp0fw33ben6YRZQh+r
MIME-Version: 1.0
X-Received: by 10.236.149.2 with SMTP id w2mr1578416yhj.114.1393594690439;
	Fri, 28 Feb 2014 05:38:10 -0800 (PST)
Received: by 10.170.187.130 with HTTP; Fri, 28 Feb 2014 05:38:10 -0800 (PST)
In-Reply-To: <CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
References: <20140226183454.GA14639@cbox> <11174980.YJ21cfsHoG@wuerfel>
	<5AA88E43-1A40-4409-9A56-334988483843@suse.de>
	<6039706.L1ZWvgjHF0@wuerfel>
	<CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
Date: Fri, 28 Feb 2014 15:38:10 +0200
Message-ID: <CAAqcGHnZMwhL3vGqN=pqogoVdh4xKhYoZ6=a201qiegVfwNcZA@mail.gmail.com>
From: Riku Voipio <riku.voipio@linaro.org>
To: Alexander Graf <agraf@suse.de>
X-Mailman-Approved-At: Fri, 28 Feb 2014 13:53:17 +0000
Cc: "cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	Ian Campbell <ian.campbell@citrix.com>, Arnd Bergmann <arnd@arndb.de>,
	Rob Herring <rob.herring@linaro.org>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2970367566289542746=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2970367566289542746==
Content-Type: multipart/alternative; boundary=20cf302efa447c845e04f3778c60

--20cf302efa447c845e04f3778c60
Content-Type: text/plain; charset=ISO-8859-1

On 28 February 2014 02:05, Alexander Graf <agraf@suse.de> wrote:
> > Am 28.02.2014 um 03:56 schrieb Arnd Bergmann <arnd@arndb.de>
> >> Replace Windows by "Linux with custom drivers" and you're in the same
> >> situation even when you neglect Windows. Reality will be that we will
> >> have fdt and acpi based systems.

> > We will however want to boot all sorts of guests in a standardized
> > virtual environment:

> > * 32 bit Linux (since some distros don't support biarch or multiarch
> >  on arm64) for running applications that are either binary-only
> >  or not 64-bit safe.
> > * 32-bit Android
> > * big-endian Linux for running applications that are not endian-clean
> >  (typically network stuff ported from powerpc or mipseb).
> > * OS/v guests
> > * NOMMU Linux
> > * BSD based OSs
> > * QNX
> > * random other RTOSs

*snip*

> You're forgetting a few pretty important cases here:
>
> * Enterprise grade Linux distribution that only supports ACPI
> * Maybe WinRT if we can convince MS to use it
> * Non-Linux with x86/ia64 heritage and thus ACPI support

I think we need to limit the scope of the spec a bit here.

For this VM system specification, we should describe what is simple to
set up and performs well for KVM, Xen and mainline Linux. For everyone
else, there is QEMU, where we can mangle things to provide whatever the
guest OS might want.

Re UEFI,

Riku

--20cf302efa447c845e04f3778c60--


--===============2970367566289542746==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2970367566289542746==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 14:02:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:02:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJO0q-0006Kc-F9; Fri, 28 Feb 2014 14:02:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJO0n-0006KX-Kf
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:02:03 +0000
Received: from [85.158.143.35:37791] by server-1.bemta-4.messagelabs.com id
	FF/A4-31661-8D690135; Fri, 28 Feb 2014 14:02:00 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393596119!9015082!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30436 invoked from network); 28 Feb 2014 14:02:00 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 14:02:00 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 28 Feb 2014 14:01:58 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="662986224"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.2.253])
	by fldsmtpi02.verizon.com with ESMTP; 28 Feb 2014 14:01:57 +0000
Message-ID: <531096D5.7020308@terremark.com>
Date: Fri, 28 Feb 2014 09:01:57 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
	<1393437647-16694-2-git-send-email-dslutz@verizon.com>
	<530F116F020000780011FC45@nat28.tlf.novell.com>
	<530FD0CA.5080906@terremark.com>
	<53105F970200007800120205@nat28.tlf.novell.com>
In-Reply-To: <53105F970200007800120205@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/14 04:06, Jan Beulich wrote:
>>>> On 28.02.14 at 00:56, Don Slutz <dslutz@verizon.com> wrote:
>> On 02/27/14 04:20, Jan Beulich wrote:
>>>>>> On 26.02.14 at 19:00, Don Slutz <dslutz@verizon.com> wrote:
>>>> Currently in 32 bit mode the routine hpet_set_timer() will convert a
>>>> time in the past to a time in future.  This is done by the uint32_t
>>>> cast of diff.
>>>>
>>>> Even without this issue, hpet_tick_to_ns() does not support past
>>>> times.
>>>>
>>>> Real hardware does not support past times.
>>>>
>>>> So just do the same thing in 32 bit mode as 64 bit mode.
>>> While the change looks valid at the first glance, what I'm missing
>>> is an explanation of how the problem that the introduction of this
>>> fixed (c/s 13495:e2539ab3580a "[HVM] Fix slow wallclock in x64
>>> Vista") is now being taken care of (or why this is of no concern).
>>> That's pretty relevant considering for how long this code has been
>>> there without causing (known) problems to anyone.
>> Ok, digging around (the git version):
>>
>> commit f545359b1c54f59be9d7c27112a68c51c45b06b5
>> Date:   Thu Jan 18 18:54:28 2007 +0000
>>       [HVM] Fix slow wallclock in x64 Vista. This is due to confusing a
>>
>>
>> And one that changed how it worked:
>>
>> commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac
>> Date:   Tue Jan 8 16:20:04 2008 +0000
>>      hvm: hpet: Fix overflow when converting to nanoseconds.
>>
>>
>> Is when a past time was prevented.  Which may well have caused x64 Vista to
>> have wallclock issues.
>>
>> Next:
>>
>> commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2
>> Date:   Sat May 24 09:27:03 2008 +0100
>>       hvm: Build guest timers on monotonic system time.
>>
>>
>> Has a chance to do 2 things:
>> 1) Make the diff < 0 very unlikely
>> 2) Fixed x64 Vista wallclock issues (again)

From your question below, my best guess is that this was just too short
an explanation. Here is an expanded one:

The only way that I can see that the patch (c/s 13495:e2539ab3580a, commit
f545359b1c54f59be9d7c27112a68c51c45b06b5, "[HVM] Fix slow wallclock in
x64 Vista") fixed the reported issue is by assuming that:

1) the x64 Vista wallclock uses the HPET in 32-bit one-shot mode;
2) very often the diff would be in the range -0.9765625 milliseconds to
zero (0);
3) the sum of these small deltas is the amount that the x64 Vista
wallclock was off by.

The next change (c/s ?, commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac,
"hvm: hpet: Fix overflow when converting to nanoseconds.") clearly
breaks this by preventing hpet_tick_to_ns() from returning tiny negative
ns values, and so it reverted the fix for the slow wallclock in x64
Vista.

The third change (c/s ?, commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2,
"hvm: Build guest timers on monotonic system time.") looks to me to have
changed how often diff falls in the range -0.9765625 milliseconds to
zero (0) from much of the time to almost never. This is based on the
assumption that the first patch fixed the reported issue and that the
same issue has not been reported since.

This is based on the fact that currently HPET_TINY_TIME_SPAN
(0xffff1194, i.e. 4294906260) converts to 68,718,500,160 ns and -1
converts to 18,014,398,509,481,983 ns, so any number in the "tiny" range
looks to me likely to mess up the x64 Vista wallclock time and will also
cause the Linux MP-BIOS error message.

Hope this is clear.
     -Don Slutz



>> Looking closer at hpet_tick_to_ns() and doing some math.  I get:
>>
>>
>>       h->stime_freq = S_TO_NS;
>>       h->hpet_to_ns_scale = ((S_TO_NS * STIME_PER_HPET_TICK) << 10) /
>> h->stime_freq;
>>
>> I.E.
>>
>>       h->hpet_to_ns_scale = STIME_PER_HPET_TICK << 10;
>>
>> And so:
>>
>> #define hpet_tick_to_ns(h, tick)                        \
>>       ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
>>           ~0ULL : (tick) * (h)->hpet_to_ns_scale) >> 10))
>>
>>
>> Is really:
>>
>> #define hpet_tick_to_ns(h, tick)                        \
>>       ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
>>           (~0ULL >> 10) : (tick) * STIME_PER_HPET_TICK))
>>
>> And if you change to using a signed multiply most of the time you will be
>> fine.  If you want a complex that is "safer":
>>
>> #define hpet_tick_to_ns(tick)                                   \
>>       ((s_time_t)(tick) >= 0 ?                                    \
>>         (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK >= 0 ?   \
>>          (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
>>          (s_time_t)(~0ULL >> 10) :                                \
>>         (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK < 0 ?    \
>>          (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
>>          0)
>>
>> If the signed multiply overflows in the positive case then the old max is
>> returned.  Note: this can return larger values then the old max.
>>
>> So I can re-work the patch to use this and still provide past times.  Which
>> path should I go with?
> Did you perhaps misunderstand me? I didn't ask for the patch to be
> changed. What I asked for is clarification that the issues previously
> having caused this code to be the way it is being still taken care of
> with your change.
>
> Jan
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:02:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:02:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJO0q-0006Kc-F9; Fri, 28 Feb 2014 14:02:04 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJO0n-0006KX-Kf
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:02:03 +0000
Received: from [85.158.143.35:37791] by server-1.bemta-4.messagelabs.com id
	FF/A4-31661-8D690135; Fri, 28 Feb 2014 14:02:00 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393596119!9015082!1
X-Originating-IP: [140.108.26.143]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQwLjEwOC4yNi4xNDMgPT4gMjYwNTMz\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30436 invoked from network); 28 Feb 2014 14:02:00 -0000
Received: from fldsmtpe04.verizon.com (HELO fldsmtpe04.verizon.com)
	(140.108.26.143)
	by server-5.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 14:02:00 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi02.verizon.com) ([166.68.71.144])
	by fldsmtpe04.verizon.com with ESMTP; 28 Feb 2014 14:01:58 +0000
From: Don Slutz <dslutz@verizon.com>
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="662986224"
Received: from unknown (HELO don-lt.don.CloudSwitch.com) ([162.47.2.253])
	by fldsmtpi02.verizon.com with ESMTP; 28 Feb 2014 14:01:57 +0000
Message-ID: <531096D5.7020308@terremark.com>
Date: Fri, 28 Feb 2014 09:01:57 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130625 Thunderbird/17.0.7
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393437647-16694-1-git-send-email-dslutz@verizon.com>
	<1393437647-16694-2-git-send-email-dslutz@verizon.com>
	<530F116F020000780011FC45@nat28.tlf.novell.com>
	<530FD0CA.5080906@terremark.com>
	<53105F970200007800120205@nat28.tlf.novell.com>
In-Reply-To: <53105F970200007800120205@nat28.tlf.novell.com>
Cc: Keir Fraser <keir@xen.org>, Don Slutz <dslutz@verizon.com>,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH 1/1] hpet: Act more like real hardware
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/14 04:06, Jan Beulich wrote:
>>>> On 28.02.14 at 00:56, Don Slutz <dslutz@verizon.com> wrote:
>> On 02/27/14 04:20, Jan Beulich wrote:
>>>>>> On 26.02.14 at 19:00, Don Slutz <dslutz@verizon.com> wrote:
>>>> Currently in 32 bit mode the routine hpet_set_timer() will convert a
>>>> time in the past to a time in future.  This is done by the uint32_t
>>>> cast of diff.
>>>>
>>>> Even without this issue, hpet_tick_to_ns() does not support past
>>>> times.
>>>>
>>>> Real hardware does not support past times.
>>>>
>>>> So just do the same thing in 32 bit mode as 64 bit mode.
>>> While the change looks valid at the first glance, what I'm missing
>>> is an explanation of how the problem that the introduction of this
>>> fixed (c/s 13495:e2539ab3580a "[HVM] Fix slow wallclock in x64
>>> Vista") is now being taken care of (or why this is of no concern).
>>> That's pretty relevant considering for how long this code has been
>>> there without causing (known) problems to anyone.
>> Ok, digging around (the git version):
>>
>> commit f545359b1c54f59be9d7c27112a68c51c45b06b5
>> Date:   Thu Jan 18 18:54:28 2007 +0000
>>       [HVM] Fix slow wallclock in x64 Vista. This is due to confusing a
>>
>>
>> And one that changed how it worked:
>>
>> commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac
>> Date:   Tue Jan 8 16:20:04 2008 +0000
>>      hvm: hpet: Fix overflow when converting to nanoseconds.
>>
>>
>> Is when a past time was prevented.  Which may well have caused x64 Vista to
>> have wallclock issues.
>>
>> Next:
>>
>> commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2
>> Date:   Sat May 24 09:27:03 2008 +0100
>>       hvm: Build guest timers on monotonic system time.
>>
>>
>> Has a chance to do 2 things:
>> 1) Make the diff < 0 very unlikely
>> 2) Fixed x64 Vista wallclock issues (again)

 From your question below, my best guess is that this was just too short 
an explanation.  Here is an expanded one:

The only way that I can see the patch (c/s 13495:e2539ab3580a commit 
f545359b1c54f59be9d7c27112a68c51c45b06b5 "[HVM] Fix slow wallclock in 
x64 Vista") fixed the reported issue is by assuming that:

1) The x64 Vista wallclock is using the HPET in 32-bit one-shot mode.
2) Very often the diff would be in the range -0.9765625 milliseconds to 
zero (0).
3) The sum of these is the amount that the x64 Vista wallclock was off by.

The next change (c/s ? commit 73ee2f2e11fcdc27aae4f8caa72d240c4c9ed5ac 
"hvm: hpet: Fix overflow when converting to nanoseconds.") clearly 
breaks this by preventing hpet_tick_to_ns() from returning tiny negative 
ns values.  And so this change reverted the fix for the slow wallclock 
in x64 Vista.

The third change (c/s ? commit e1845bbe732b5ad5755f0f3a93fb6ea85919e8a2 
"hvm: Build guest timers on monotonic system time.") looks to me to 
have changed how often diff falls in the range -0.9765625 milliseconds 
to zero (0) from a lot of the time to almost never.  This is based on 
the assumption that the 1st patch fixed the reported issue and that the 
same issue has not been reported since then.

This is based on the fact that currently HPET_TINY_TIME_SPAN 
(0xffff1194, i.e. 4294906260) converts to 68,718,500,160 ns and -1 
converts to 18,014,398,509,481,983 ns, so any number in the "tiny" 
range looks to me to mess up the x64 Vista wallclock time and will also 
trigger the Linux MP-BIOS error message.

Hope this is clear.
     -Don Slutz



>> Looking closer at hpet_tick_to_ns() and doing some math.  I get:
>>
>>
>>       h->stime_freq = S_TO_NS;
>>       h->hpet_to_ns_scale = ((S_TO_NS * STIME_PER_HPET_TICK) << 10) /
>> h->stime_freq;
>>
>> I.E.
>>
>>       h->hpet_to_ns_scale = STIME_PER_HPET_TICK << 10;
>>
>> And so:
>>
>> #define hpet_tick_to_ns(h, tick)                        \
>>       ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
>>           ~0ULL : (tick) * (h)->hpet_to_ns_scale) >> 10))
>>
>>
>> Is really:
>>
>> #define hpet_tick_to_ns(h, tick)                        \
>>       ((s_time_t)((((tick) > (h)->hpet_to_ns_limit) ?     \
>>           (~0ULL >> 10) : (tick) * STIME_PER_HPET_TICK)))
>>
>> And if you change to using a signed multiply, most of the time you will
>> be fine.  If you want a more complex version that is "safer":
>>
>> #define hpet_tick_to_ns(tick)                                   \
>>       ((s_time_t)(tick) >= 0 ?                                    \
>>         (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK >= 0 ?   \
>>          (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
>>          (s_time_t)(~0ULL >> 10) :                                \
>>         (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK < 0 ?    \
>>          (s_time_t)(tick) * (s_time_t)STIME_PER_HPET_TICK :       \
>>          0)
>>
>> If the signed multiply overflows in the positive case then the old max is
>> returned.  Note: this can return larger values than the old max.
>>
>> So I can re-work the patch to use this and still provide past times.  Which
>> path should I go with?
> Did you perhaps misunderstand me? I didn't ask for the patch to be
> changed. What I asked for is clarification that the issues previously
> having caused this code to be the way it is being still taken care of
> with your change.
>
> Jan
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:30:04 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJORd-0006jR-1h; Fri, 28 Feb 2014 14:29:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJORc-0006jM-1D
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:29:44 +0000
Received: from [85.158.137.68:23688] by server-7.bemta-3.messagelabs.com id
	B3/E7-13775-75D90135; Fri, 28 Feb 2014 14:29:43 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-11.tower-31.messagelabs.com!1393597779!4880997!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20733 invoked from network); 28 Feb 2014 14:29:42 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 14:29:42 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="106625558"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 14:29:32 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 09:29:32 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJORQ-0006PJ-7L;
	Fri, 28 Feb 2014 14:29:32 +0000
Date: Fri, 28 Feb 2014 14:29:26 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: "Daniel P. Berrange" <berrange@redhat.com>
In-Reply-To: <20140228110329.GB17909@redhat.com>
Message-ID: <alpine.DEB.2.02.1402281428470.31489@kaball.uk.xensource.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
	<20140226140033.GA15045@aepfle.de>
	<1393426513.9640.18.camel@dagon.hellion.org.uk>
	<20140226150100.GC6046@redhat.com>
	<1393553550.20365.5.camel@hastur.hellion.org.uk>
	<20140228110329.GB17909@redhat.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, Ian Campbell <ian.campbell@citrix.com>,
	stefano.stabellini@eu.citrix.com, libvir-list@redhat.com,
	julien.grall@linaro.org, tim@xen.org, xen-devel@lists.xen.org,
	Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 28 Feb 2014, Daniel P. Berrange wrote:
> On Fri, Feb 28, 2014 at 02:12:30AM +0000, Ian Campbell wrote:
> > On Wed, 2014-02-26 at 15:01 +0000, Daniel P. Berrange wrote:
> > > Yep, if ARM has a PV console, then we'd need to add a tiny bit to the XML
> > > to allow us to configure that explicitly, similar to how we do for KVM's
> > > virtio-console support.
> > 
> > Do you mean I need to add something to the XML config snippet, or I need
> > to add some special handling in the XML parser/consumer?
> > 
> > I've grepped around the virtio-console stuff and I'm none the wiser.
> 
> Oops, yes, I should have explained this better, since our docs here are
> about as clear as mud.
> 
> With traditional x86 paravirt Xen, we just have the plain paravirt console
> devices
> 
>     <console type='pty'>
>       <target type='xen'/>
>     </console>
> 
> With x86  fullvirt Xen/KVM/QEMU, the console type just defaults to being
> a serial port so you would usually just add
> 
>     <serial type='pty'>
>     </serial>
> 
> and then libvirt would automatically add a <console> with
> 
>     <console type='pty'>
>       <target type='serial'/>
>     </console>
> 
> 
> With x86 fullvirt KVM, we also have support for virtio which is
> done using
> 
>     <console type='pty'>
>       <target type='virtio'/>
>     </console>
> 
> 
> So actually this leads me to ask what kind of console Arm fullvirt Xen
> guests actually have ? If they just use the traditional Xen paravirt
> console, then we just need to make sure that this works for them by
> default:
> 
>     <console type='pty'>
>       <target type='xen'/>
>     </console>
> 
> 
> If there's a different type of console device that's not related to
> the Xen paravirt console device, then we'd need to invent a new
> <target type='xxx'/> value for Arm.

It is just the traditional Xen paravirt console.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:38:30 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:38:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOZz-0006ri-Hu; Fri, 28 Feb 2014 14:38:23 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJOZy-0006rd-Gy
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:38:22 +0000
Received: from [85.158.137.68:34697] by server-1.bemta-3.messagelabs.com id
	B7/BE-17293-C5F90135; Fri, 28 Feb 2014 14:38:20 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393598299!1490563!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28619 invoked from network); 28 Feb 2014 14:38:19 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 14:38:19 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 14:38:18 +0000
Message-Id: <5310AD790200007800120338@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 14:38:33 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1393513666-29259-1-git-send-email-tim@xen.org>
In-Reply-To: <1393513666-29259-1-git-send-email-tim@xen.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH] x86/hvm: assert that we saved a sane
 number of MSRs.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 16:07, Tim Deegan <tim@xen.org> wrote:
> Just as a backstop measure against later changes that add MSRs to the
> save function without updating the count in the init function.
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
>  xen/arch/x86/hvm/hvm.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 9e85c13..ae24211 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1148,6 +1148,8 @@ static int hvm_save_cpu_msrs(struct domain *d, hvm_domain_context_t *h)
>          if ( hvm_funcs.save_msr )
>              hvm_funcs.save_msr(v, ctxt);
>  
> +        ASSERT(ctxt->count <= msr_count_max);
> +
>          for ( i = 0; i < ctxt->count; ++i )
>              ctxt->msr[i]._rsvd = 0;
>  
> -- 
> 1.8.5.2




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:40:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:40:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJObl-00073O-6I; Fri, 28 Feb 2014 14:40:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJObe-000738-Ff
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:40:11 +0000
Received: from [85.158.143.35:19003] by server-3.bemta-4.messagelabs.com id
	A0/93-11539-5CF90135; Fri, 28 Feb 2014 14:40:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393598404!9069363!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22575 invoked from network); 28 Feb 2014 14:40:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 14:40:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 14:40:04 +0000
Message-Id: <5310ADE30200007800120343@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 14:40:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-5-git-send-email-tim@xen.org>
	<530F763C020000780011FF5F@nat28.tlf.novell.com>
	<20140227163839.GH53925@deinos.phlegethon.org>
In-Reply-To: <20140227163839.GH53925@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] bitmaps/bitops: Clarify tests for
 small constant size.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 17:38, Tim Deegan <tim@xen.org> wrote:
> From dba941064825d723b79f866d16bb0f07585de320 Mon Sep 17 00:00:00 2001
> From: Tim Deegan <tim@xen.org>
> Date: Thu, 28 Nov 2013 15:40:48 +0000
> Subject: [PATCH] bitmaps/bitops: Clarify tests for small constant size.
> 
> No semantic changes, just makes the control flow a bit clearer.
> 
> I was looking at this because the (-!__builtin_constant_p(x) | x__)
> formula is too clever for Coverity, but in fact it always takes me a
> minute or two to understand it too. :)
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

A little reluctantly
Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
> 
> v3: Consistently use '!foo' rather than 'foo == 0'
> v2: fix find_next_bit macros to evaluate 'addr' exactly once.
> ---
>  xen/include/asm-x86/bitops.h | 62 ++++++++++++++++++++------------------------
>  xen/include/xen/bitmap.h     | 30 ++++++++++++---------
>  2 files changed, 46 insertions(+), 46 deletions(-)
> 
> diff --git a/xen/include/asm-x86/bitops.h b/xen/include/asm-x86/bitops.h
> index ab21d92..82a08ee 100644
> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -335,23 +335,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
>   * @offset: The bitnumber to start searching at
>   * @size: The maximum size to search
>   */
> -#define find_next_bit(addr, size, off) ({ \
> -    unsigned int r__ = (size); \
> -    unsigned int o__ = (off); \
> -    switch ( -!__builtin_constant_p(size) | r__ ) \
> -    { \
> -    case 0: (void)(addr); break; \
> -    case 1 ... BITS_PER_LONG: \
> -        r__ = o__ + __scanbit(*(const unsigned long *)(addr) >> o__, r__); \
> -        break; \
> -    default: \
> -        if ( __builtin_constant_p(off) && !o__ ) \
> -            r__ = __find_first_bit(addr, r__); \
> -        else \
> -            r__ = __find_next_bit(addr, r__, o__); \
> -        break; \
> -    } \
> -    r__; \
> +#define find_next_bit(addr, size, off) ({                                   \
> +    unsigned int r__;                                                       \
> +    const unsigned long *a__ = (addr);                                      \
> +    unsigned int s__ = (size);                                              \
> +    unsigned int o__ = (off);                                               \
> +    if ( __builtin_constant_p(size) && !s__ )                               \
> +        r__ = s__;                                                          \
> +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> +        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
> +    else if ( __builtin_constant_p(off) && !o__ )                           \
> +        r__ = __find_first_bit(a__, s__);                                   \
> +    else                                                                    \
> +        r__ = __find_next_bit(a__, s__, o__);                               \
> +    r__;                                                                    \
>  })
>  
>  /**
> @@ -370,23 +367,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
>   * @offset: The bitnumber to start searching at
>   * @size: The maximum size to search
>   */
> -#define find_next_zero_bit(addr, size, off) ({ \
> -    unsigned int r__ = (size); \
> -    unsigned int o__ = (off); \
> -    switch ( -!__builtin_constant_p(size) | r__ ) \
> -    { \
> -    case 0: (void)(addr); break; \
> -    case 1 ... BITS_PER_LONG: \
> -        r__ = o__ + __scanbit(~*(const unsigned long *)(addr) >> o__, r__); \
> -        break; \
> -    default: \
> -        if ( __builtin_constant_p(off) && !o__ ) \
> -            r__ = __find_first_zero_bit(addr, r__); \
> -        else \
> -            r__ = __find_next_zero_bit(addr, r__, o__); \
> -        break; \
> -    } \
> -    r__; \
> +#define find_next_zero_bit(addr, size, off) ({                              \
> +    unsigned int r__;                                                       \
> +    const unsigned long *a__ = (addr);                                      \
> +    unsigned int s__ = (size);                                              \
> +    unsigned int o__ = (off);                                               \
> +    if ( __builtin_constant_p(size) && !s__ )                               \
> +        r__ = s__;                                                          \
> +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> +        r__ = o__ + __scanbit(~*(const unsigned long *)(a__) >> o__, s__);  \
> +    else if ( __builtin_constant_p(off) && !o__ )                           \
> +        r__ = __find_first_zero_bit(a__, s__);                              \
> +    else                                                                    \
> +        r__ = __find_next_zero_bit(a__, s__, o__);                          \
> +    r__;                                                                    \
>  })
>  
>  /**
> diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
> index b5ec455..e2a3686 100644
> --- a/xen/include/xen/bitmap.h
> +++ b/xen/include/xen/bitmap.h
> @@ -110,13 +110,14 @@ extern int bitmap_allocate_region(unsigned long *bitmap, int pos, int order);
>  
>  #define bitmap_bytes(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long))
>  
> -#define bitmap_switch(nbits, zero_ret, small, large)			\
> -	switch (-!__builtin_constant_p(nbits) | (nbits)) {		\
> -	case 0:	return zero_ret;					\
> -	case 1 ... BITS_PER_LONG:					\
> -		small; break;						\
> -	default:							\
> -		large; break;						\
> +#define bitmap_switch(nbits, zero, small, large)			  \
> +	unsigned int n__ = (nbits);					  \
> +	if (__builtin_constant_p(nbits) && !n__) {			  \
> +		zero;							  \
> +	} else if (__builtin_constant_p(nbits) && n__ <= BITS_PER_LONG) { \
> +		small;							  \
> +	} else {							  \
> +		large;							  \
>  	}
>  
>  static inline void bitmap_zero(unsigned long *dst, int nbits)
> @@ -191,7 +192,8 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
>  static inline int bitmap_equal(const unsigned long *src1,
>  			const unsigned long *src2, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_equal(src1, src2, nbits));
>  }
> @@ -199,7 +201,8 @@ static inline int bitmap_equal(const unsigned long *src1,
>  static inline int bitmap_intersects(const unsigned long *src1,
>  			const unsigned long *src2, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0,
>  		return __bitmap_intersects(src1, src2, nbits));
>  }
> @@ -207,21 +210,24 @@ static inline int bitmap_intersects(const unsigned long *src1,
>  static inline int bitmap_subset(const unsigned long *src1,
>  			const unsigned long *src2, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !((*src1 & ~*src2) & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_subset(src1, src2, nbits));
>  }
>  
>  static inline int bitmap_empty(const unsigned long *src, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !(*src & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_empty(src, nbits));
>  }
>  
>  static inline int bitmap_full(const unsigned long *src, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !(~*src & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_full(src, nbits));
>  }
> -- 
> 1.8.5.2



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:40:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:40:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJObl-00073O-6I; Fri, 28 Feb 2014 14:40:13 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJObe-000738-Ff
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:40:11 +0000
Received: from [85.158.143.35:19003] by server-3.bemta-4.messagelabs.com id
	A0/93-11539-5CF90135; Fri, 28 Feb 2014 14:40:05 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393598404!9069363!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 22575 invoked from network); 28 Feb 2014 14:40:05 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 14:40:05 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 14:40:04 +0000
Message-Id: <5310ADE30200007800120343@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 14:40:19 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Tim Deegan" <tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-5-git-send-email-tim@xen.org>
	<530F763C020000780011FF5F@nat28.tlf.novell.com>
	<20140227163839.GH53925@deinos.phlegethon.org>
In-Reply-To: <20140227163839.GH53925@deinos.phlegethon.org>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2 4/4] bitmaps/bitops: Clarify tests for
 small constant size.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 27.02.14 at 17:38, Tim Deegan <tim@xen.org> wrote:
> From dba941064825d723b79f866d16bb0f07585de320 Mon Sep 17 00:00:00 2001
> From: Tim Deegan <tim@xen.org>
> Date: Thu, 28 Nov 2013 15:40:48 +0000
> Subject: [PATCH] bitmaps/bitops: Clarify tests for small constant size.
> 
> No semantic changes, just makes the control flow a bit clearer.
> 
> I was looking at this because the (-!__builtin_constant_p(x) | x__)
> formula is too clever for Coverity, but in fact it always takes me a
> minute or two to understand it too. :)
> 
> Signed-off-by: Tim Deegan <tim@xen.org>

A little reluctantly
Acked-by: Jan Beulich <jbeulich@suse.com>

> ---
> 
> v3: Consistently use '!foo' rather than 'foo == 0'
> v2: fix find_next_bit macros to evaluate 'addr' exactly once.
> ---
>  xen/include/asm-x86/bitops.h | 62 ++++++++++++++++++++------------------------
>  xen/include/xen/bitmap.h     | 30 ++++++++++++---------
>  2 files changed, 46 insertions(+), 46 deletions(-)
> 
> diff --git a/xen/include/asm-x86/bitops.h b/xen/include/asm-x86/bitops.h
> index ab21d92..82a08ee 100644
> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -335,23 +335,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
>   * @offset: The bitnumber to start searching at
>   * @size: The maximum size to search
>   */
> -#define find_next_bit(addr, size, off) ({ \
> -    unsigned int r__ = (size); \
> -    unsigned int o__ = (off); \
> -    switch ( -!__builtin_constant_p(size) | r__ ) \
> -    { \
> -    case 0: (void)(addr); break; \
> -    case 1 ... BITS_PER_LONG: \
> -        r__ = o__ + __scanbit(*(const unsigned long *)(addr) >> o__, r__); \
> -        break; \
> -    default: \
> -        if ( __builtin_constant_p(off) && !o__ ) \
> -            r__ = __find_first_bit(addr, r__); \
> -        else \
> -            r__ = __find_next_bit(addr, r__, o__); \
> -        break; \
> -    } \
> -    r__; \
> +#define find_next_bit(addr, size, off) ({                                   \
> +    unsigned int r__;                                                       \
> +    const unsigned long *a__ = (addr);                                      \
> +    unsigned int s__ = (size);                                              \
> +    unsigned int o__ = (off);                                               \
> +    if ( __builtin_constant_p(size) && !s__ )                               \
> +        r__ = s__;                                                          \
> +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> +        r__ = o__ + __scanbit(*(const unsigned long *)(a__) >> o__, s__);   \
> +    else if ( __builtin_constant_p(off) && !o__ )                           \
> +        r__ = __find_first_bit(a__, s__);                                   \
> +    else                                                                    \
> +        r__ = __find_next_bit(a__, s__, o__);                               \
> +    r__;                                                                    \
>  })
>  
>  /**
> @@ -370,23 +367,20 @@ static inline unsigned int __scanbit(unsigned long val, unsigned long max)
>   * @offset: The bitnumber to start searching at
>   * @size: The maximum size to search
>   */
> -#define find_next_zero_bit(addr, size, off) ({ \
> -    unsigned int r__ = (size); \
> -    unsigned int o__ = (off); \
> -    switch ( -!__builtin_constant_p(size) | r__ ) \
> -    { \
> -    case 0: (void)(addr); break; \
> -    case 1 ... BITS_PER_LONG: \
> -        r__ = o__ + __scanbit(~*(const unsigned long *)(addr) >> o__, r__); \
> -        break; \
> -    default: \
> -        if ( __builtin_constant_p(off) && !o__ ) \
> -            r__ = __find_first_zero_bit(addr, r__); \
> -        else \
> -            r__ = __find_next_zero_bit(addr, r__, o__); \
> -        break; \
> -    } \
> -    r__; \
> +#define find_next_zero_bit(addr, size, off) ({                              \
> +    unsigned int r__;                                                       \
> +    const unsigned long *a__ = (addr);                                      \
> +    unsigned int s__ = (size);                                              \
> +    unsigned int o__ = (off);                                               \
> +    if ( __builtin_constant_p(size) && !s__ )                               \
> +        r__ = s__;                                                          \
> +    else if ( __builtin_constant_p(size) && s__ <= BITS_PER_LONG )          \
> +        r__ = o__ + __scanbit(~*(const unsigned long *)(a__) >> o__, s__);  \
> +    else if ( __builtin_constant_p(off) && !o__ )                           \
> +        r__ = __find_first_zero_bit(a__, s__);                              \
> +    else                                                                    \
> +        r__ = __find_next_zero_bit(a__, s__, o__);                          \
> +    r__;                                                                    \
>  })
>  
>  /**
> diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
> index b5ec455..e2a3686 100644
> --- a/xen/include/xen/bitmap.h
> +++ b/xen/include/xen/bitmap.h
> @@ -110,13 +110,14 @@ extern int bitmap_allocate_region(unsigned long *bitmap, int pos, int order);
>  
>  #define bitmap_bytes(nbits) (BITS_TO_LONGS(nbits) * sizeof(unsigned long))
>  
> -#define bitmap_switch(nbits, zero_ret, small, large)			\
> -	switch (-!__builtin_constant_p(nbits) | (nbits)) {		\
> -	case 0:	return zero_ret;					\
> -	case 1 ... BITS_PER_LONG:					\
> -		small; break;						\
> -	default:							\
> -		large; break;						\
> +#define bitmap_switch(nbits, zero, small, large)			  \
> +	unsigned int n__ = (nbits);					  \
> +	if (__builtin_constant_p(nbits) && !n__) {			  \
> +		zero;							  \
> +	} else if (__builtin_constant_p(nbits) && n__ <= BITS_PER_LONG) { \
> +		small;							  \
> +	} else {							  \
> +		large;							  \
>  	}
>  
>  static inline void bitmap_zero(unsigned long *dst, int nbits)
> @@ -191,7 +192,8 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
>  static inline int bitmap_equal(const unsigned long *src1,
>  			const unsigned long *src2, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_equal(src1, src2, nbits));
>  }
> @@ -199,7 +201,8 @@ static inline int bitmap_equal(const unsigned long *src1,
>  static inline int bitmap_intersects(const unsigned long *src1,
>  			const unsigned long *src2, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0,
>  		return __bitmap_intersects(src1, src2, nbits));
>  }
> @@ -207,21 +210,24 @@ static inline int bitmap_intersects(const unsigned long *src1,
>  static inline int bitmap_subset(const unsigned long *src1,
>  			const unsigned long *src2, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !((*src1 & ~*src2) & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_subset(src1, src2, nbits));
>  }
>  
>  static inline int bitmap_empty(const unsigned long *src, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !(*src & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_empty(src, nbits));
>  }
>  
>  static inline int bitmap_full(const unsigned long *src, int nbits)
>  {
> -	bitmap_switch(nbits, -1,
> +	bitmap_switch(nbits,
> +		return -1,
>  		return !(~*src & BITMAP_LAST_WORD_MASK(nbits)),
>  		return __bitmap_full(src, nbits));
>  }
> -- 
> 1.8.5.2



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:44:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOgB-0007Dy-9q; Fri, 28 Feb 2014 14:44:47 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJOg9-0007Dq-3U
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 14:44:45 +0000
Received: from [85.158.143.35:62923] by server-2.bemta-4.messagelabs.com id
	95/FA-04779-CD0A0135; Fri, 28 Feb 2014 14:44:44 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393598683!9075585!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5784 invoked from network); 28 Feb 2014 14:44:43 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 14:44:43 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 14:44:43 +0000
Message-Id: <5310AEF90200007800120355@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 14:44:57 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: <ian.campbell@citrix.com>,<paul.durrant@citrix.com>, <wei.liu2@citrix.com>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
In-Reply-To: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	david.vrabel@citrix.com
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 24.02.14 at 15:28, "Andrew J. Bennieston" <andrew.bennieston@citrix.com> wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Document the multi-queue feature in terms of XenStore keys to be written
> by the backend and by the frontend.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Any of you networking people care to review/ack this?

Thanks, Jan

> ---
> V2: Improve documentation based on comments about areas which were unclear.
> 
> ---
>  xen/include/public/io/netif.h |   29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
> 
> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> index d7fb771..5d98734 100644
> --- a/xen/include/public/io/netif.h
> +++ b/xen/include/public/io/netif.h
> @@ -69,6 +69,35 @@
>   */
>  
>  /*
> + * Multiple transmit and receive queues:
> + * If supported, the backend will write "multi-queue-max-queues" and set its
> + * value to the maximum supported number of queues.
> + * Frontends that are aware of this feature and wish to use it can write the
> + * key "multi-queue-num-queues", set to the number they wish to use.
> + *
> + * Queues replicate the shared rings and event channels, and
> + * "feature-split-event-channels" may be used when using multiple queues.
> + * Each queue consists of one shared ring pair, i.e. there must be the same
> + * number of tx and rx rings.
> + *
> + * For frontends requesting just one queue, the usual event-channel and
> + * ring-ref keys are written as before, simplifying the backend processing
> + * to avoid distinguishing between a frontend that doesn't understand the
> + * multi-queue feature, and one that does, but requested only one queue.
> + *
> + * Frontends requesting two or more queues must not write the toplevel
> + * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
> + * instead writing them under sub-keys having the name "queue-N" where
> + * N is the integer ID of the queue for which those keys belong. Queues 
> + * are indexed from zero.
> + *
> + * Mapping of packets to queues is considered to be a function of the
> + * transmitting system (backend or frontend) and is not negotiated
> + * between the two. Guests are free to transmit packets on any queue
> + * they choose, provided it has been set up correctly.
> + */
> +
> +/*
>   * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
>   * offload off or on. If it is missing then the feature is assumed to be on.
>   * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
> -- 
> 1.7.10.4
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org 
> http://lists.xen.org/xen-devel 




_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:45:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOgN-0007Fq-V9; Fri, 28 Feb 2014 14:44:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJOgM-0007FU-Lj
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:44:58 +0000
Received: from [85.158.137.68:55102] by server-7.bemta-3.messagelabs.com id
	6C/EE-13775-9E0A0135; Fri, 28 Feb 2014 14:44:57 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393598695!4861654!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27459 invoked from network); 28 Feb 2014 14:44:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 14:44:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="106630612"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 14:44:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 09:44:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJOgI-0006ct-BV;
	Fri, 28 Feb 2014 14:44:54 +0000
Date: Fri, 28 Feb 2014 14:44:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Alexander Graf <agraf@suse.de>
In-Reply-To: <CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
Message-ID: <alpine.DEB.2.02.1402281436290.31489@kaball.uk.xensource.com>
References: <20140226183454.GA14639@cbox> <11174980.YJ21cfsHoG@wuerfel>
	<5AA88E43-1A40-4409-9A56-334988483843@suse.de>
	<6039706.L1ZWvgjHF0@wuerfel>
	<CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Arnd Bergmann <arnd@arndb.de>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 28 Feb 2014, Alexander Graf wrote:
> > We will however want to boot all sorts of guests in a standardized
> > virtual environment:
> > 
> > * 32 bit Linux (since some distros don't support biarch or multiarch
> >  on arm64) for running applications that are either binary-only
> >  or not 64-bit safe.
> > * 32-bit Android
> > * big-endian Linux for running applications that are not endian-clean
> >  (typically network stuff ported from powerpc or mipseb).
> > * OS/v guests
> > * NOMMU Linux
> > * BSD based OSs
> > * QNX
> > * random other RTOSs

8<---

> * Enterprise grade Linux distribution that only supports ACPI
> * Maybe WinRT if we can convince MS to use it
> * Non-Linux with x86/ia64 heritage and thus ACPI support
> 
> If we want to run those, we need to expose ACPI tables.
> 
> Again, I think the only reasonable thing to do is to implement and expose both. That situation sucks, but we got into it ourselves ;).

I think we should have a clear idea on the purpose of this doc: is it a
spec that we expect Linux and other guest OSes to comply with if they want
to run on KVM/Xen? Or is it a document that describes the state of the
world at the beginning of 2014?

If it is a spec, then we should simply ignore non-collaborative vendors
and their products. If we know in advance that they are not going to
comply with the spec, what's the point of trying to accommodate them here?
We can always carry our workarounds and hacks in the hypervisor if we
want to run their products as guests.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:45:02 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:45:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOgN-0007Fq-V9; Fri, 28 Feb 2014 14:44:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJOgM-0007FU-Lj
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 14:44:58 +0000
Received: from [85.158.137.68:55102] by server-7.bemta-3.messagelabs.com id
	6C/EE-13775-9E0A0135; Fri, 28 Feb 2014 14:44:57 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393598695!4861654!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27459 invoked from network); 28 Feb 2014 14:44:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 14:44:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="106630612"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 14:44:54 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 09:44:54 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJOgI-0006ct-BV;
	Fri, 28 Feb 2014 14:44:54 +0000
Date: Fri, 28 Feb 2014 14:44:48 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Alexander Graf <agraf@suse.de>
In-Reply-To: <CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
Message-ID: <alpine.DEB.2.02.1402281436290.31489@kaball.uk.xensource.com>
References: <20140226183454.GA14639@cbox> <11174980.YJ21cfsHoG@wuerfel>
	<5AA88E43-1A40-4409-9A56-334988483843@suse.de>
	<6039706.L1ZWvgjHF0@wuerfel>
	<CB7DBE07-42BD-4588-AC9E-CB0BF95A811A@suse.de>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
X-DLP: MIA2
Cc: Peter Maydell <peter.maydell@linaro.org>, Arnd Bergmann <arnd@arndb.de>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"cross-distro@lists.linaro.org" <cross-distro@lists.linaro.org>,
	"marc.zyngier@arm.com" <marc.zyngier@arm.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Michael Casadevall <Michael.casadevall@linaro.org>,
	Rob Herring <rob.herring@linaro.org>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Grant Likely <grant.likely@linaro.org>,
	"kvmarm@lists.cs.columbia.edu" <kvmarm@lists.cs.columbia.edu>,
	"linux-arm-kernel@lists.infradead.org"
	<linux-arm-kernel@lists.infradead.org>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: [Xen-devel] [RFC] ARM VM System Specification
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 28 Feb 2014, Alexander Graf wrote:
> > We will however want to boot all sorts of guests in a standardized
> > virtual environment:
> > 
> > * 32 bit Linux (since some distros don't support biarch or multiarch
> >  on arm64) for running applications that are either binary-only
> >  or not 64-bit safe.
> > * 32-bit Android
> > * big-endian Linux for running applications that are not endian-clean
> >  (typically network stuff ported from powerpc or mipseb).
> > * OS/v guests
> > * NOMMU Linux
> > * BSD based OSs
> > * QNX
> > * random other RTOSs

8<---

> * Enterprise grade Linux distribution that only supports ACPI
> * Maybe WinRT if we can convince MS to use it
> * Non-Linux with x86/ia64 heritage and thus ACPI support
> 
> If we want to run those, we need to expose ACPI tables.
> 
> Again, I think the only reasonable thing to do is to implement and expose both. That situation sucks, but we got into it ourselves ;).

I think we should have a clear idea of the purpose of this doc: is it a
spec that we expect Linux and other guest OSes to comply with if they want
to run on KVM/Xen? Or is it a document that describes the state of the
world at the beginning of 2014?

If it is a spec, then we should simply ignore non-collaborative vendors
and their products. If we know in advance that they are not going to
comply with the spec, what's the point of trying to accommodate them here?
We can always carry our workarounds and hacks in the hypervisor if we
want to run their products as guests.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 14:50:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOlf-0007dh-Ps; Fri, 28 Feb 2014 14:50:27 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <gordongong0350@gmail.com>) id 1WJOld-0007dc-Oe
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 14:50:26 +0000
Received: from [85.158.137.68:58785] by server-15.bemta-3.messagelabs.com id
	C8/45-19263-132A0135; Fri, 28 Feb 2014 14:50:25 +0000
X-Env-Sender: gordongong0350@gmail.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393599021!4879507!1
X-Originating-IP: [209.85.192.194]
X-SpamReason: No, hits=2.2 required=7.0 tests=HTML_FONT_LOW_CONTRAST,
	HTML_MESSAGE, MAILTO_TO_SPAM_ADDR, MIME_BASE64_TEXT, MIME_BOUND_NEXTPART,
	ML_RADAR_SPEW_LINKS_14,ML_RADAR_SPEW_LINKS_23,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5838 invoked from network); 28 Feb 2014 14:50:23 -0000
Received: from mail-pd0-f194.google.com (HELO mail-pd0-f194.google.com)
	(209.85.192.194)
	by server-14.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 14:50:23 -0000
Received: by mail-pd0-f194.google.com with SMTP id fp1so412764pdb.5
	for <xen-devel@lists.xensource.com>;
	Fri, 28 Feb 2014 06:50:21 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=date:from:to:cc:subject:mime-version:message-id:content-type;
	bh=gwXNNpd3W6NP/osBi8D9dJkWwJcHE/jsomN5B0BPU5k=;
	b=o+MKHXCfFjwwHk2E5N6C0V/8M3WcazwvFqPbrEMunRhEffc8gCkBP1+nHbg1X7k9nU
	Sm66C1Jw1h9Ihus6NQmEaaJRAAU75wGZIX1ClP2oXskzPaUKETnLsJkcmIk0qG5FoaKt
	j3jFIHBszd13vj5EBStqZ2p5RG7pP50YB+AJV9c8EKvEZ+UzZAec51EYvS7b0cx+7PWb
	Cl9HzpVcTNtbCPYg+fncXIIy5TfG/q5gAp9b7XUbV1JBG3Xx3LNO1UhRFY8pksBMLaxX
	a6amVuaVO8J6iDoEGHo+rprlAl/IFOafJ7jYGx/ZE14IUD4zZPt/tS6kAdLzWNuDU5GQ
	zRgg==
X-Received: by 10.66.146.133 with SMTP id tc5mr3993983pab.58.1393599021531;
	Fri, 28 Feb 2014 06:50:21 -0800 (PST)
Received: from gordon-PC ([183.17.52.174])
	by mx.google.com with ESMTPSA id gj9sm6922653pbc.7.2014.02.28.06.50.14
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Feb 2014 06:50:20 -0800 (PST)
Date: Fri, 28 Feb 2014 22:50:23 +0800
From: "gordongong0350@gmail.com" <gordongong0350@gmail.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-GUID: 38A681FC-BCBD-4D8F-99B2-45DBC95ECACF
X-Has-Attach: no
X-Mailer: Foxmail 7, 2, 0, 111[cn]
Mime-Version: 1.0
Message-ID: <201402282250180185871@gmail.com>
Cc: "ian.jackson" <ian.jackson@eu.citrix.com>,
	"ian.campbell" <ian.campbell@citrix.com>,
	"stefano.stabellini" <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [tapdisk2][PATCH1/1] Fix vreq with error of -EBUSY
	fails 100 times too soon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7224567069689727295=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============7224567069689727295==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart306036843662_=----"

This is a multi-part message in MIME format.

------=_001_NextPart306036843662_=----
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Fix vreq with error of -EBUSY failing 100 times too soon, which returns
EIO to the td device layer. If this td device request is filesystem
metadata, the result is not good at all.

To reproduce it with "./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m -i 0
-i 1 -f /data/iozone_test.tmp", the result is an input/output error and
iozone is stopped.

From ccc0ce3aede1f5c920036847ddb16b25969be7ba Mon Sep 17 00:00:00 2001
From: Xiaodong Gong <gordongong0350@gmail.com>
Date: Fri, 28 Feb 2014 03:01:19 -0500
Subject: [PATCH] fix vreq with error of -EBUSY fails 100 times too soon,
 return EIO to td device layer.

1) the whole free list is consumed by vreqs in one or two blocks that
   are set in the BAT entry.
2) a vreq in the fail queue gets a chance to run again quickly, e.g.
   after 90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 ++++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64"retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                         submitting;
 	int                         secs_pending;
 	int                         num_retries;
+	int                         busy_looping;
 	struct timeval              last_try;
 
 	td_vbd_t                    *vbd;
-- 
1.8.3.1

gordongong0350@gmail.com

------=_001_NextPart306036843662_=------



--===============7224567069689727295==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7224567069689727295==--



	Fri, 28 Feb 2014 06:50:21 -0800 (PST)
Received: from gordon-PC ([183.17.52.174])
	by mx.google.com with ESMTPSA id gj9sm6922653pbc.7.2014.02.28.06.50.14
	for <multiple recipients>
	(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);
	Fri, 28 Feb 2014 06:50:20 -0800 (PST)
Date: Fri, 28 Feb 2014 22:50:23 +0800
From: "gordongong0350@gmail.com" <gordongong0350@gmail.com>
To: xen-devel <xen-devel@lists.xensource.com>
X-Priority: 3
X-GUID: 38A681FC-BCBD-4D8F-99B2-45DBC95ECACF
X-Has-Attach: no
X-Mailer: Foxmail 7, 2, 0, 111[cn]
Mime-Version: 1.0
Message-ID: <201402282250180185871@gmail.com>
Cc: "ian.jackson" <ian.jackson@eu.citrix.com>,
	"ian.campbell" <ian.campbell@citrix.com>,
	"stefano.stabellini" <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [tapdisk2][PATCH1/1] Fix vreq with error of -EBUSY
	fails 100 times too soon
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============7224567069689727295=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.

--===============7224567069689727295==
Content-Type: multipart/alternative;
	boundary="----=_001_NextPart306036843662_=----"

This is a multi-part message in MIME format.

------=_001_NextPart306036843662_=----
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Fix vreq with error of -EBUSY fails 100 times too soon, that is, it returns
EIO to the td device layer. If this td device request is filesystem
metadata, the result is not good at all.

To reproduce it with "./iozone -I -s 2G -r 512k -r 1m -r 2m -r 4m -i 0 -i 1
-f /data/iozone_test.tmp", the result is an input/output error and iozone
stops.


From ccc0ce3aede1f5c920036847ddb16b25969be7ba Mon Sep 17 00:00:00 2001
From: Xiaodong Gong <gordongong0350@gmail.com>
Date: Fri, 28 Feb 2014 03:01:19 -0500
Subject: [PATCH] fix vreq with error of -EBUSY fails 100 times too soon,
 return EIO to td device layer.

1) all of the free list is consumed by vreq in one or two blocks that
   are set in the bat entry.
2) vreq in the fail queue gets a chance to run again quickly, such as 90us.
---
 tools/blktap2/drivers/tapdisk-vbd.c | 17 ++++++++++++++---
 tools/blktap2/drivers/tapdisk-vbd.h |  1 +
 2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/tools/blktap2/drivers/tapdisk-vbd.c b/tools/blktap2/drivers/tapdisk-vbd.c
index c665f27..529eb91 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.c
+++ b/tools/blktap2/drivers/tapdisk-vbd.c
@@ -1081,7 +1081,7 @@ tapdisk_vbd_check_state(td_vbd_t *vbd)
 	td_vbd_request_t *vreq, *tmp;
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests)
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES)
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1)
 			tapdisk_vbd_complete_vbd_request(vbd, vreq);
 
 	if (!list_empty(&vbd->new_requests) ||
@@ -1168,7 +1168,7 @@ tapdisk_vbd_complete_vbd_request(td_vbd_t *vbd, td_vbd_request_t *vreq)
 {
 	if (!vreq->submitting && !vreq->secs_pending) {
 		if (vreq->status == BLKIF_RSP_ERROR &&
-		    vreq->num_retries < TD_VBD_MAX_RETRIES &&
+		    (vreq->num_retries < TD_VBD_MAX_RETRIES || vreq->busy_looping == 1) &&
 		    !td_flag_test(vbd->state, TD_VBD_DEAD) &&
 		    !td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			tapdisk_vbd_move_request(vreq, &vbd->failed_requests);
@@ -1450,17 +1450,28 @@ tapdisk_vbd_reissue_failed_requests(td_vbd_t *vbd)
 	gettimeofday(&now, NULL);
 
 	tapdisk_vbd_for_each_request(vreq, tmp, &vbd->failed_requests) {
+		uint64_t delta = 0;
+
 		if (vreq->secs_pending)
 			continue;
 
 		if (td_flag_test(vbd->state, TD_VBD_SHUTDOWN_REQUESTED))
 			goto fail;
 
+		if (vreq->num_retries > TD_VBD_MAX_RETRIES - 10) {
+			delta = (now.tv_sec - vreq->last_try.tv_sec) * 1000000
+				+ now.tv_usec - vreq->last_try.tv_usec;
+			if (delta * vreq->num_retries < TD_VBD_RETRY_INTERVAL)
+				vreq->busy_looping = 1;
+			else
+				vreq->busy_looping = 0;
+		}
+
 		if (vreq->error != -EBUSY &&
 		    now.tv_sec - vreq->last_try.tv_sec < TD_VBD_RETRY_INTERVAL)
 			continue;
 
-		if (vreq->num_retries >= TD_VBD_MAX_RETRIES) {
+		if (vreq->num_retries >= TD_VBD_MAX_RETRIES && vreq->busy_looping != 1) {
 		fail:
 			DBG(TLOG_INFO, "req %"PRIu64" retried %d times\n",
 			    vreq->req.id, vreq->num_retries);
diff --git a/tools/blktap2/drivers/tapdisk-vbd.h b/tools/blktap2/drivers/tapdisk-vbd.h
index be084b2..9e5f5f6 100644
--- a/tools/blktap2/drivers/tapdisk-vbd.h
+++ b/tools/blktap2/drivers/tapdisk-vbd.h
@@ -73,6 +73,7 @@ struct td_vbd_request {
 	int                         submitting;
 	int                         secs_pending;
 	int                         num_retries;
+	int                         busy_looping;
 	struct timeval              last_try;
 
 	td_vbd_t                   *vbd;
-- 
1.8.3.1


gordongong0350@gmail.com
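[Editor's note: the busy-loop detection this patch adds can be sketched in isolation as follows. This is a minimal standalone model, not the tapdisk source; the TD_VBD_* values here are illustrative stand-ins for the real constants in tapdisk-vbd.h.]

```c
#include <stdint.h>

/* Illustrative stand-ins: the real constants live in tapdisk-vbd.h
 * and may have different values/units. */
#define TD_VBD_MAX_RETRIES          100
#define TD_VBD_RETRY_INTERVAL_USEC  1000000ULL

/* Model of the patch's check: once a request is within 10 retries of the
 * cap, look at how recently it was last tried.  If retries are burning
 * through the budget far faster than the retry interval intends (e.g.
 * -EBUSY returned every ~90us), flag it as busy-looping so the retry cap
 * does not fail it with EIO prematurely. */
int busy_looping(int num_retries, uint64_t delta_usec)
{
	if (num_retries <= TD_VBD_MAX_RETRIES - 10)
		return 0;	/* nowhere near the cap yet */
	return delta_usec * (uint64_t)num_retries < TD_VBD_RETRY_INTERVAL_USEC;
}
```

With these stand-in values, a request on its 95th retry only ~90us after the previous attempt is flagged as busy-looping, while one retried after a full interval is not.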

------=_001_NextPart306036843662_=------



--===============7224567069689727295==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============7224567069689727295==--



From xen-devel-bounces@lists.xen.org Fri Feb 28 14:56:50 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 14:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOrj-0007pO-SL; Fri, 28 Feb 2014 14:56:43 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Paul.Durrant@citrix.com>) id 1WJOri-0007pJ-Vy
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 14:56:43 +0000
Received: from [85.158.137.68:15884] by server-2.bemta-3.messagelabs.com id
	15/E8-06531-AA3A0135; Fri, 28 Feb 2014 14:56:42 +0000
X-Env-Sender: Paul.Durrant@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393599401!3607113!1
X-Originating-IP: [185.25.65.24]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5064 invoked from network); 28 Feb 2014 14:56:41 -0000
Received: from smtp.eu.citrix.com (HELO SMTP.EU.CITRIX.COM) (185.25.65.24)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 14:56:41 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; 
   d="scan'208";a="9907333"
Received: from unknown (HELO AMSPEX01CL03.citrite.net) ([10.69.60.9])
	by AMSPIP01.EU.Citrix.com with ESMTP; 28 Feb 2014 14:56:41 +0000
Received: from AMSPEX01CL01.citrite.net ([169.254.6.60]) by
	AMSPEX01CL03.citrite.net ([169.254.8.48]) with mapi id 14.02.0342.004;
	Fri, 28 Feb 2014 15:56:40 +0100
From: Paul Durrant <Paul.Durrant@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Ian Campbell <Ian.Campbell@citrix.com>,
	Wei Liu <wei.liu2@citrix.com>
Thread-Topic: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back, front}
	multi-queue feature
Thread-Index: AQHPNJOhGwFaAtGdLkiGN8P0CntdwZrKwSgg
Date: Fri, 28 Feb 2014 14:56:40 +0000
Message-ID: <9AAE0902D5BC7E449B7C8E4E778ABCD025F224@AMSPEX01CL01.citrite.net>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
	<5310AEF90200007800120355@nat28.tlf.novell.com>
In-Reply-To: <5310AEF90200007800120355@nat28.tlf.novell.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.80.2.29]
MIME-Version: 1.0
X-DLP: AMS1
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Bennieston <andrew.bennieston@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 28 February 2014 14:45
> To: Ian Campbell; Paul Durrant; Wei Liu
> Cc: Andrew Bennieston; David Vrabel; xen-devel@lists.xenproject.org
> Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back, front}
> multi-queue feature
> 
> >>> On 24.02.14 at 15:28, "Andrew J. Bennieston"
> <andrew.bennieston@citrix.com> wrote:
> > From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> >
> > Document the multi-queue feature in terms of XenStore keys to be written
> > by the backend and by the frontend.
> >
> > Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> 
> Anyone of you networking people care to review/ack this?
> 

I'm not a maintainer, so I can't ack, but

Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

> Thanks, Jan
> 
> > ---
> > V2: Improve documentation based on comments about areas which were unclear.
> >
> > ---
> >  xen/include/public/io/netif.h |   29 +++++++++++++++++++++++++++++
> >  1 file changed, 29 insertions(+)
> >
> > diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> > index d7fb771..5d98734 100644
> > --- a/xen/include/public/io/netif.h
> > +++ b/xen/include/public/io/netif.h
> > @@ -69,6 +69,35 @@
> >   */
> >
> >  /*
> > + * Multiple transmit and receive queues:
> > + * If supported, the backend will write "multi-queue-max-queues" and set its
> > + * value to the maximum supported number of queues.
> > + * Frontends that are aware of this feature and wish to use it can write the
> > + * key "multi-queue-num-queues", set to the number they wish to use.
> > + *
> > + * Queues replicate the shared rings and event channels, and
> > + * "feature-split-event-channels" may be used when using multiple queues.
> > + * Each queue consists of one shared ring pair, i.e. there must be the same
> > + * number of tx and rx rings.
> > + *
> > + * For frontends requesting just one queue, the usual event-channel and
> > + * ring-ref keys are written as before, simplifying the backend processing
> > + * to avoid distinguishing between a frontend that doesn't understand the
> > + * multi-queue feature, and one that does, but requested only one queue.
> > + *
> > + * Frontends requesting two or more queues must not write the toplevel
> > + * event-channel (or event-channel-{tx,rx}) and {tx,rx}-ring-ref keys,
> > + * instead writing them under sub-keys having the name "queue-N" where
> > + * N is the integer ID of the queue for which those keys belong. Queues
> > + * are indexed from zero.
> > + *
> > + * Mapping of packets to queues is considered to be a function of the
> > + * transmitting system (backend or frontend) and is not negotiated
> > + * between the two. Guests are free to transmit packets on any queue
> > + * they choose, provided it has been set up correctly.
> > + */
> > +
> > +/*
> >   * "feature-no-csum-offload" should be used to turn IPv4 TCP/UDP checksum
> >   * offload off or on. If it is missing then the feature is assumed to be on.
> >   * "feature-ipv6-csum-offload" should be used to turn IPv6 TCP/UDP checksum
> > --
> > 1.7.10.4
> >
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@lists.xen.org
> > http://lists.xen.org/xen-devel
> 
> 
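The key layout described in the quoted comment can be sketched in a few lines. This is an illustrative helper, not code from xen-netfront; the function name and buffer handling are invented for the example, while the key names ("tx-ring-ref", "event-channel", "queue-N") are the ones the patch documents:

```c
#include <stdio.h>

/*
 * Build the XenStore key a frontend writes for a given queue.  Helper and
 * signature are hypothetical; only the key naming follows the patch above.
 */
static void queue_key(char *buf, size_t len, unsigned int num_queues,
                      unsigned int queue, const char *key)
{
    if (num_queues == 1)
        /* A single queue uses the usual flat key, so the backend need not
         * distinguish a legacy frontend from a multi-queue-aware one that
         * requested only one queue. */
        snprintf(buf, len, "%s", key);
    else
        /* Two or more queues: keys move under queue-N sub-keys, N from 0,
         * and the flat keys must not be written. */
        snprintf(buf, len, "queue-%u/%s", queue, key);
}
```

So a frontend negotiating four queues would write queue-0/tx-ring-ref through queue-3/event-channel, never the top-level tx-ring-ref.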


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:04:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJOzH-00089h-SH; Fri, 28 Feb 2014 15:04:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Stefano.Stabellini@citrix.com>) id 1WJOzG-00089c-8z
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 15:04:30 +0000
Received: from [85.158.143.35:19516] by server-1.bemta-4.messagelabs.com id
	EA/23-31661-D75A0135; Fri, 28 Feb 2014 15:04:29 +0000
X-Env-Sender: Stefano.Stabellini@citrix.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393599867!9076524!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23403 invoked from network); 28 Feb 2014 15:04:28 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:04:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105020942"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:04:27 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:04:26 -0500
Received: from kaball.uk.xensource.com ([10.80.2.59])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<stefano.stabellini@eu.citrix.com>)	id 1WJOzC-0006tq-OU;
	Fri, 28 Feb 2014 15:04:26 +0000
Date: Fri, 28 Feb 2014 15:04:20 +0000
From: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
X-X-Sender: sstabellini@kaball.uk.xensource.com
To: Qin Li <qin.l.li@oracle.com>
In-Reply-To: <53100204.1050801@oracle.com>
Message-ID: <alpine.DEB.2.02.1402281502390.31489@kaball.uk.xensource.com>
References: <52D6566C.5050302@oracle.com>
	<alpine.DEB.2.02.1401161209320.21510@kaball.uk.xensource.com>
	<530DD57A.8010709@oracle.com>
	<alpine.DEB.2.02.1402271253000.31489@kaball.uk.xensource.com>
	<53100204.1050801@oracle.com>
User-Agent: Alpine 2.02 (DEB 1266 2009-07-14)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="1342847746-585040928-1393599860=:31489"
X-DLP: MIA2
Cc: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xensource.com,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: Re: [Xen-devel] Could you please answer some questions regarding
 Solaris PVHVM pvclock support.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--1342847746-585040928-1393599860=:31489
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: QUOTED-PRINTABLE

On Fri, 28 Feb 2014, Qin Li wrote:
> On 2014/2/27 21:03, Stefano Stabellini wrote:
> > > > I guess a Linux guest will do the same thing: rdtsc() fetches the
> > > > current timestamp from the currently running vCPU, so the TSC
> > > > out-of-sync issue is still there.
> > > > It seems to me pvclock finally fixes the time drift issue just
> > > > because of the workaround enforced as above, right?
> > First you should know that TSC is not always guaranteed to be
> > synchronized across multiple processors, especially on older systems.
> > On "TSC-safe" systems, Xen would export a consistent TSC to guests, by
> > setting the vtsc offset and scale appropriately.
> Stefano,
>
> Is there an easy way to check whether the system is "TSC-safe"? Do you mean
> the "X86_FEATURE_TSC_RELIABLE" bit in "boot_cpu_data.x86_capability"?

Yes, I think that's it.
Maybe you want to read docs/misc/tscmode.txt.
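As a rough illustration of what that check amounts to: x86 capability flags live in an array of 32-bit feature words, and a feature number encodes (word, bit). The word/bit values below follow the Linux cpufeature convention of that era but should be treated as assumptions; check cpufeature.h for the kernel in question:

```c
#include <stdint.h>

/* Linux-defined feature bit; (3*32 + 23) is assumed here for illustration. */
#define X86_FEATURE_TSC_RELIABLE (3 * 32 + 23)

/* Test one feature bit in an array of 32-bit capability words, the same
 * shape as boot_cpu_data.x86_capability. */
static int cpu_has_feature(const uint32_t *x86_capability, int feature)
{
    return (x86_capability[feature / 32] >> (feature % 32)) & 1;
}
```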
--1342847746-585040928-1393599860=:31489
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--1342847746-585040928-1393599860=:31489--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:08:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP2f-0008HP-TV; Fri, 28 Feb 2014 15:08:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJP2e-0008HB-7A
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:08:00 +0000
Received: from [85.158.137.68:7184] by server-2.bemta-3.messagelabs.com id
	4F/8A-06531-F46A0135; Fri, 28 Feb 2014 15:07:59 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393600076!4879980!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32031 invoked from network); 28 Feb 2014 15:07:58 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:07:58 -0000
Received: by mail-pb0-f41.google.com with SMTP id jt11so874022pbb.0
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 07:07:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=dClxB/rQ+yTrfPDBk0V+/CEJDYvDIoJ7AO8I//xLzJY=;
	b=qAIPZIzTkwiJR4y4bZJX2CSM8o7kWmuH4bX61KpSVuZjCk9VKJII3Jrk3CxGhLMevw
	9idEuQJGr9axgPA11OORILt+YG8EGwc8hDL+jmtb0lP2ZBQB21iJm5+fuTSqD/8Jnft2
	mS88E65V6wnWNhsvfDoMUN/U5c8Uz8VaCS49lf9fzO2cNvtaP6TFoE55u/u3CT0OpfBc
	nfkcplMdOGMMWvJeX9ormgYJ9jgcM84hrHoSKy2xgsKqJu9zDdI6fhjK2zEkfVJcvusQ
	UrwptBHNL/IuvUe0P5LuSES/i++FM1CMXOv045D7DCdquNbsFTjzKbRQsvNmmvpGYY5C
	XnfA==
MIME-Version: 1.0
X-Received: by 10.66.182.199 with SMTP id eg7mr3952022pac.135.1393600075133;
	Fri, 28 Feb 2014 07:07:55 -0800 (PST)
Received: by 10.70.21.37 with HTTP; Fri, 28 Feb 2014 07:07:55 -0800 (PST)
In-Reply-To: <1393511233-28942-2-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-2-git-send-email-tim@xen.org>
Date: Fri, 28 Feb 2014 15:07:55 +0000
X-Google-Sender-Auth: uKTJA06Y0FNvIW0bCMhDQ62nAmE
Message-ID: <CAHqL=ad_Ko76LPeFU=q5DvjyyRf=X+TJPP_H8wTLgM-BhcW3nA@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Tim Deegan <tim@xen.org>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/4] common/vsprintf: Explicitly treat
 negative lengths as 'unlimited'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8052187555898969498=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8052187555898969498==
Content-Type: multipart/alternative; boundary=047d7bd6c70a6ff7d404f378cd2d

--047d7bd6c70a6ff7d404f378cd2d
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Feb 27, 2014 at 2:27 PM, Tim Deegan <tim@xen.org> wrote:

> The old code relied on implicitly casting negative numbers to size_t,
> making a very large limit, which was correct but non-obvious.
>
> Coverity CID 1128575
>
> Signed-off-by: Tim Deegan <tim@xen.org>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Acked-by: Keir Fraser <keir@xen.org>


> ---
>  xen/common/vsprintf.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
> index 1a6198e..0a6fa05 100644
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -239,7 +239,7 @@ static char *number(
>  static char *string(char *str, char *end, const char *s,
>                      int field_width, int precision, int flags)
>  {
> -    int i, len = strnlen(s, precision);
> +    int i, len = (precision < 0) ? strlen(s) : strnlen(s, precision);
>
>      if (!(flags & LEFT)) {
>          while (len < field_width--) {
> --
> 1.8.5.2
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>
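The conversion hazard the commit message describes can be shown in isolation. This is a sketch of the before/after logic under an invented function name, not the surrounding vsprintf.c code beyond the quoted hunk:

```c
#define _POSIX_C_SOURCE 200809L  /* for strnlen() */
#include <string.h>

/* A negative precision means "no limit".  The old strnlen(s, precision)
 * call got the same answer only because a negative int converts to a huge
 * size_t (e.g. (size_t)-1 == SIZE_MAX), which worked but was obscure. */
static size_t bounded_len(const char *s, int precision)
{
    return (precision < 0) ? strlen(s) : strnlen(s, (size_t)precision);
}
```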

--047d7bd6c70a6ff7d404f378cd2d--


--===============8052187555898969498==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8052187555898969498==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:08:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP2f-0008HH-Gb; Fri, 28 Feb 2014 15:08:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJP2d-0008H4-6m
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:07:59 +0000
Received: from [85.158.137.68:7071] by server-6.bemta-3.messagelabs.com id
	69/F9-09180-E46A0135; Fri, 28 Feb 2014 15:07:58 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393600076!3624782!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24026 invoked from network); 28 Feb 2014 15:07:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:07:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105022489"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:07:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:07:19 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJP1z-0006wf-5z;
	Fri, 28 Feb 2014 15:07:19 +0000
Date: Fri, 28 Feb 2014 15:07:19 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140228150719.GL16241@zion.uk.xensource.com>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
	<5310AEF90200007800120355@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5310AEF90200007800120355@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, paul.durrant@citrix.com,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 02:44:57PM +0000, Jan Beulich wrote:
> >>> On 24.02.14 at 15:28, "Andrew J. Bennieston" <andrew.bennieston@citrix.com> wrote:
> > From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> > 
> > Document the multi-queue feature in terms of XenStore keys to be written
> > by the backend and by the frontend.
> > 
> > Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> 
> Any of you networking people care to review/ack this?
> 
> Thanks, Jan
> 

Sorry for the late response.

I was actually thinking about acking this after the Linux code goes in.
If there's no user and no actual implementation of this feature, it wouldn't
be very useful to merge it into the master tree anyway.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:08:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP2f-0008HP-TV; Fri, 28 Feb 2014 15:08:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJP2e-0008HB-7A
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:08:00 +0000
Received: from [85.158.137.68:7184] by server-2.bemta-3.messagelabs.com id
	4F/8A-06531-F46A0135; Fri, 28 Feb 2014 15:07:59 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-31.messagelabs.com!1393600076!4879980!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32031 invoked from network); 28 Feb 2014 15:07:58 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-3.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:07:58 -0000
Received: by mail-pb0-f41.google.com with SMTP id jt11so874022pbb.0
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 07:07:56 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=dClxB/rQ+yTrfPDBk0V+/CEJDYvDIoJ7AO8I//xLzJY=;
	b=qAIPZIzTkwiJR4y4bZJX2CSM8o7kWmuH4bX61KpSVuZjCk9VKJII3Jrk3CxGhLMevw
	9idEuQJGr9axgPA11OORILt+YG8EGwc8hDL+jmtb0lP2ZBQB21iJm5+fuTSqD/8Jnft2
	mS88E65V6wnWNhsvfDoMUN/U5c8Uz8VaCS49lf9fzO2cNvtaP6TFoE55u/u3CT0OpfBc
	nfkcplMdOGMMWvJeX9ormgYJ9jgcM84hrHoSKy2xgsKqJu9zDdI6fhjK2zEkfVJcvusQ
	UrwptBHNL/IuvUe0P5LuSES/i++FM1CMXOv045D7DCdquNbsFTjzKbRQsvNmmvpGYY5C
	XnfA==
MIME-Version: 1.0
X-Received: by 10.66.182.199 with SMTP id eg7mr3952022pac.135.1393600075133;
	Fri, 28 Feb 2014 07:07:55 -0800 (PST)
Received: by 10.70.21.37 with HTTP; Fri, 28 Feb 2014 07:07:55 -0800 (PST)
In-Reply-To: <1393511233-28942-2-git-send-email-tim@xen.org>
References: <1393511233-28942-1-git-send-email-tim@xen.org>
	<1393511233-28942-2-git-send-email-tim@xen.org>
Date: Fri, 28 Feb 2014 15:07:55 +0000
X-Google-Sender-Auth: uKTJA06Y0FNvIW0bCMhDQ62nAmE
Message-ID: <CAHqL=ad_Ko76LPeFU=q5DvjyyRf=X+TJPP_H8wTLgM-BhcW3nA@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Tim Deegan <tim@xen.org>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v2 1/4] common/vsprintf: Explicitly treat
 negative lengths as 'unlimited'
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8052187555898969498=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8052187555898969498==
Content-Type: multipart/alternative; boundary=047d7bd6c70a6ff7d404f378cd2d

--047d7bd6c70a6ff7d404f378cd2d
Content-Type: text/plain; charset=ISO-8859-1

On Thu, Feb 27, 2014 at 2:27 PM, Tim Deegan <tim@xen.org> wrote:

> The old code relied on implicitly casting negative numbers to size_t
> making a very large limit, which was correct but non-obvious.
>
> Coverity CID 1128575
>
> Signed-off-by: Tim Deegan <tim@xen.org>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Acked-by: Keir Fraser <keir@xen.org>


> ---
>  xen/common/vsprintf.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
> index 1a6198e..0a6fa05 100644
> --- a/xen/common/vsprintf.c
> +++ b/xen/common/vsprintf.c
> @@ -239,7 +239,7 @@ static char *number(
>  static char *string(char *str, char *end, const char *s,
>                      int field_width, int precision, int flags)
>  {
> -    int i, len = strnlen(s, precision);
> +    int i, len = (precision < 0) ? strlen(s) : strnlen(s, precision);
>
>      if (!(flags & LEFT)) {
>          while (len < field_width--) {
> --
> 1.8.5.2
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--047d7bd6c70a6ff7d404f378cd2d--


--===============8052187555898969498==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8052187555898969498==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:08:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP2f-0008HH-Gb; Fri, 28 Feb 2014 15:08:01 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJP2d-0008H4-6m
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:07:59 +0000
Received: from [85.158.137.68:7071] by server-6.bemta-3.messagelabs.com id
	69/F9-09180-E46A0135; Fri, 28 Feb 2014 15:07:58 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-4.tower-31.messagelabs.com!1393600076!3624782!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24026 invoked from network); 28 Feb 2014 15:07:57 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-4.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:07:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105022489"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:07:19 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:07:19 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJP1z-0006wf-5z;
	Fri, 28 Feb 2014 15:07:19 +0000
Date: Fri, 28 Feb 2014 15:07:19 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Message-ID: <20140228150719.GL16241@zion.uk.xensource.com>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
	<5310AEF90200007800120355@nat28.tlf.novell.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5310AEF90200007800120355@nat28.tlf.novell.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: wei.liu2@citrix.com, ian.campbell@citrix.com, paul.durrant@citrix.com,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 02:44:57PM +0000, Jan Beulich wrote:
> >>> On 24.02.14 at 15:28, "Andrew J. Bennieston" <andrew.bennieston@citrix.com> wrote:
> > From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> > 
> > Document the multi-queue feature in terms of XenStore keys to be written
> > by the backend and by the frontend.
> > 
> > Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> 
> Anyone of you networking people care to review/ack this?
> 
> Thanks, Jan
> 

Sorry for the late response.

I was actually thinking about acking this after the Linux code goes in.
If there's no user and no actual implementation of this feature, it wouldn't
be very useful to merge it into the master tree anyway.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:09:22 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP3x-0008SE-Dn; Fri, 28 Feb 2014 15:09:21 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJP3v-0008Re-Qn
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:09:20 +0000
Received: from [85.158.139.211:37536] by server-12.bemta-5.messagelabs.com id
	5C/B5-15415-F96A0135; Fri, 28 Feb 2014 15:09:19 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393600156!6945980!1
X-Originating-IP: [209.85.160.41]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_40_50,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4401 invoked from network); 28 Feb 2014 15:09:18 -0000
Received: from mail-pb0-f41.google.com (HELO mail-pb0-f41.google.com)
	(209.85.160.41)
	by server-3.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:09:18 -0000
Received: by mail-pb0-f41.google.com with SMTP id jt11so871457pbb.28
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 07:09:16 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=s9nj4+GYRETU4ZkB2zm3kv2lkEVWy/E1yXndnxEVgdw=;
	b=FLWr7rjcDQhKhfDuEIwmfWktXfAPZEKNZxgOpQI2+BuOFcJ5kloO9BWptPrjM70/W+
	OslbqWvIHuIGseRzp9MwMflS3u6Z4uVlOn25ZtP9fR7mafX+Carff/OIbteMlVBM3yhq
	jnYAFu7YLIQK0M1UUJNZb9pPvrFxwsY9AhMlV0r+nJxpnA82acFvkc7etHp1s9j4XAHg
	vVUok/PmUMkjO8zq3f6v0R1BoyHQT0C65zH1H+BOAnw47QR8/N1pWKd6kmRXs6t40tru
	LZwwwiobfl7n8/nWXmJCLmbeDQHKqEIsMLE2UnGXtChwgx+fkM9ka55W6zmmddL7vf4F
	nwsA==
MIME-Version: 1.0
X-Received: by 10.68.33.106 with SMTP id q10mr4070556pbi.132.1393600155966;
	Fri, 28 Feb 2014 07:09:15 -0800 (PST)
Received: by 10.70.21.37 with HTTP; Fri, 28 Feb 2014 07:09:15 -0800 (PST)
In-Reply-To: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
References: <1393415227-32092-1-git-send-email-wei.liu2@citrix.com>
Date: Fri, 28 Feb 2014 15:09:15 +0000
X-Google-Sender-Auth: ykh0YtwEXCIRr7MuPf5wF0aa0KI
Message-ID: <CAHqL=afNjXq0+=uT-9oY5qNaVT+DKZwReLP-Hq+sxPGKiFQQ_Q@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH] mm: ensure useful progress in
	decrease_reservation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============2972504644548728279=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============2972504644548728279==
Content-Type: multipart/alternative; boundary=bcaec520e91341628904f378d2e6

--bcaec520e91341628904f378d2e6
Content-Type: text/plain; charset=ISO-8859-1

On Wed, Feb 26, 2014 at 11:47 AM, Wei Liu <wei.liu2@citrix.com> wrote:

> During my fun time playing with the balloon driver I found that the
> hypervisor's preemption check kept decrease_reservation from doing any
> useful work for 32-bit guests, resulting in hanging the guests.
>
> As Andrew suggested, we can force the check to fail for the first
> iteration to ensure progress. We did this in d3a55d7d9 "x86/mm: Ensure
> useful progress in alloc_l2_table()" already.
>
> After this change I cannot see the hang caused by continuation logic
> anymore.
>
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Cc: Jan Beulich <JBeulich@suse.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>

Acked-by: Keir Fraser <keir@xen.org>


> ---
>  xen/common/memory.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 5a0efd5..9d0d32e 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -268,7 +268,7 @@ static void decrease_reservation(struct memop_args *a)
>
>      for ( i = a->nr_done; i < a->nr_extents; i++ )
>      {
> -        if ( hypercall_preempt_check() )
> +        if ( hypercall_preempt_check() && i != a->nr_done )
>          {
>              a->preempted = 1;
>              goto out;
> --
> 1.7.10.4
>
>

--bcaec520e91341628904f378d2e6--


--===============2972504644548728279==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============2972504644548728279==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:10:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP4Z-0000A6-T3; Fri, 28 Feb 2014 15:09:59 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJP4Y-00009l-Ba
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:09:58 +0000
Received: from [85.158.137.68:63554] by server-4.bemta-3.messagelabs.com id
	91/B2-04858-5C6A0135; Fri, 28 Feb 2014 15:09:57 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-16.tower-31.messagelabs.com!1393600195!3561453!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30663 invoked from network); 28 Feb 2014 15:09:56 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-16.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:09:56 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105023757"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:09:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:09:54 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJP4U-0006yt-I1;
	Fri, 28 Feb 2014 15:09:54 +0000
Date: Fri, 28 Feb 2014 15:09:54 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140228150954.GM16241@zion.uk.xensource.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-4-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393252387-17496-4-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 3/5] xen-netfront: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 02:33:05PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netfront, move the
> queue-specific data from struct netfront_info to struct netfront_queue,
> and update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_etherdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0, selecting the first (and
> only) queue.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Did you forget to add in David's "Reviewed-by"? Is there any significant
change compared to the last submission?

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

change compared to the last submission?

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:11:19 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:11:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP5p-0000Mg-Hv; Fri, 28 Feb 2014 15:11:17 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJP5o-0000MY-Ds
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:11:16 +0000
Received: from [193.109.254.147:25730] by server-13.bemta-14.messagelabs.com
	id 66/22-01226-317A0135; Fri, 28 Feb 2014 15:11:15 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-12.tower-27.messagelabs.com!1393600273!3849837!1
X-Originating-IP: [209.85.160.45]
X-SpamReason: No, hits=0.6 required=7.0 tests=HTML_60_70,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32152 invoked from network); 28 Feb 2014 15:11:14 -0000
Received: from mail-pb0-f45.google.com (HELO mail-pb0-f45.google.com)
	(209.85.160.45)
	by server-12.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:11:14 -0000
Received: by mail-pb0-f45.google.com with SMTP id uo5so163000pbc.4
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 07:11:12 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=OPlRc2aAQd0lpmLsuOoEX1jQRSCXKoG+LYlW/b6sVrQ=;
	b=BF4XfOBID3Bcpsq2SYSbKgkf8NgKLtTNqrSiHT6hcYGQTJ/69WPskVDLRDdcDxbymC
	ytbRPUFr4Aa2Cw4q7pLrBfQmmqefEXdS/D8m5huBN3plPGGO1ntml7mRYaHznqExpO9J
	nhoWAFJ4rezCt97rDHCTbeKWZn3PTLeu963rUl2tbpzx4usk1oI884gKk3hhHvPGSQTA
	ZjHuMoazZahF3L8XyxColFnG75TmCx9/JS5sTY0fEdAVCmRSyG6Hc4oSrXlxI475IIDF
	uL770Ddd+W9EPmU117sA2bMsKuwXDlCWAb2PWxY0j2WefGe6efLnGlqdzQwvUQDOzgGC
	tqgQ==
MIME-Version: 1.0
X-Received: by 10.66.142.42 with SMTP id rt10mr4035363pab.1.1393600272811;
	Fri, 28 Feb 2014 07:11:12 -0800 (PST)
Received: by 10.70.21.37 with HTTP; Fri, 28 Feb 2014 07:11:12 -0800 (PST)
In-Reply-To: <530C77C8020000780011F114@nat28.tlf.novell.com>
References: <530C77C8020000780011F114@nat28.tlf.novell.com>
Date: Fri, 28 Feb 2014 15:11:12 +0000
X-Google-Sender-Auth: 7FgfoBR-RnRUKnrL2ChOMBCltiA
Message-ID: <CAHqL=ad+RPQN15tFsyJDUU5L4vB53qtt6m_-B=bdKN2rT1JYhw@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] vsprintf: introduce %pv extended format
 specifier to print domain/vcpu ID pair
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4136345602820033646=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4136345602820033646==
Content-Type: multipart/alternative; boundary=001a1133048a3848ff04f378d9ed

--001a1133048a3848ff04f378d9ed
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 25, 2014 at 10:00 AM, Jan Beulich <JBeulich@suse.com> wrote:

> ... in a simplified and consistent way.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>

Looks nice to me.

Acked-by: Keir Fraser <keir@xen.org>

--001a1133048a3848ff04f378d9ed--


--===============4136345602820033646==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4136345602820033646==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:11:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP6D-0000Qy-VN; Fri, 28 Feb 2014 15:11:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJP6B-0000QP-T1
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:11:40 +0000
Received: from [85.158.137.68:33551] by server-8.bemta-3.messagelabs.com id
	FB/2A-16039-B27A0135; Fri, 28 Feb 2014 15:11:39 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-8.tower-31.messagelabs.com!1393600296!4868786!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28553 invoked from network); 28 Feb 2014 15:11:37 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-8.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:11:37 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="106640211"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 15:11:33 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:11:32 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJP64-00070E-HW;
	Fri, 28 Feb 2014 15:11:32 +0000
Date: Fri, 28 Feb 2014 15:11:32 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140228151132.GN16241@zion.uk.xensource.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393252387-17496-2-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 1/5] xen-netback: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 02:33:03PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> In preparation for multi-queue support in xen-netback, move the
> queue-specific data from struct xenvif into struct xenvif_queue, and
> update the rest of the code to use this.
> 
> Also adds loops over queues where appropriate, even though only one is
> configured at this point, and uses alloc_netdev_mq() and the
> corresponding multi-queue netif wake/start/stop functions in preparation
> for multiple active queues.
> 
> Finally, implements a trivial queue selection function suitable for
> ndo_select_queue, which simply returns 0 for a single queue and uses
> skb_get_hash() to compute the queue index otherwise.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:13:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:13:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP7a-0000fj-FG; Fri, 28 Feb 2014 15:13:06 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJP7Y-0000fJ-J9
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:13:04 +0000
Received: from [85.158.143.35:33087] by server-3.bemta-4.messagelabs.com id
	DC/E0-11539-F77A0135; Fri, 28 Feb 2014 15:13:03 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393600382!9048375!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 730 invoked from network); 28 Feb 2014 15:13:03 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:13:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105025105"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:13:01 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:13:01 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJP7V-00071b-BN;
	Fri, 28 Feb 2014 15:13:01 +0000
Date: Fri, 28 Feb 2014 15:13:01 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140228151301.GO16241@zion.uk.xensource.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-3-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393252387-17496-3-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 2/5] xen-netback: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 02:33:04PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Builds on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Writes the maximum supported number of queues into XenStore, and reads
> the values written by the frontend to determine how many queues to use.
> 
> Ring references and event channels are read from XenStore on a per-queue
> basis and rings are connected accordingly.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>

Acked-by: Wei Liu <wei.liu2@citrix.com>

Thanks
Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:13:09 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP7d-0000gp-SZ; Fri, 28 Feb 2014 15:13:09 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WJP7c-0000gD-Uy
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:13:09 +0000
Received: from [85.158.143.35:49079] by server-3.bemta-4.messagelabs.com id
	E4/11-11539-487A0135; Fri, 28 Feb 2014 15:13:08 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393600382!9048375!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1191 invoked from network); 28 Feb 2014 15:13:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-11.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:13:07 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105025147"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:13:07 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:13:06 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WJP7a-00071n-HC; Fri, 28 Feb 2014 15:13:06 +0000
Message-ID: <5310A782.5050705@citrix.com>
Date: Fri, 28 Feb 2014 15:13:06 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-4-git-send-email-andrew.bennieston@citrix.com>
	<20140228150954.GM16241@zion.uk.xensource.com>
In-Reply-To: <20140228150954.GM16241@zion.uk.xensource.com>
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 3/5] xen-netfront: Factor
 queue-specific data into queue struct.
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 15:09, Wei Liu wrote:
> On Mon, Feb 24, 2014 at 02:33:05PM +0000, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> In preparation for multi-queue support in xen-netfront, move the
>> queue-specific data from struct netfront_info to struct netfront_queue,
>> and update the rest of the code to use this.
>>
>> Also adds loops over queues where appropriate, even though only one is
>> configured at this point, and uses alloc_etherdev_mq() and the
>> corresponding multi-queue netif wake/start/stop functions in preparation
>> for multiple active queues.
>>
>> Finally, implements a trivial queue selection function suitable for
>> ndo_select_queue, which simply returns 0, selecting the first (and
>> only) queue.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>
> Did you forget to add in David's "Reviewed-by"? Is there any significant
> change compared to the last submission?
>
> Wei.
>
No significant change. I just forgot.

Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:13:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:13:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP8A-0000pY-B7; Fri, 28 Feb 2014 15:13:42 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJP88-0000p2-5S
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:13:40 +0000
Received: from [193.109.254.147:20287] by server-6.bemta-14.messagelabs.com id
	D7/E0-03396-3A7A0135; Fri, 28 Feb 2014 15:13:39 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-6.tower-27.messagelabs.com!1393600416!7511174!1
X-Originating-IP: [209.85.192.169]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23610 invoked from network); 28 Feb 2014 15:13:37 -0000
Received: from mail-pd0-f169.google.com (HELO mail-pd0-f169.google.com)
	(209.85.192.169)
	by server-6.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:13:37 -0000
Received: by mail-pd0-f169.google.com with SMTP id fp1so846011pdb.14
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 07:13:35 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=I4w9av+li+rmhGkVtUiUg9VZzzmq2sRaS2tniEAcZEU=;
	b=TBFCX+ng1DcbRJEC48zUfeOjjid7z/myrR7TZH75BVfTpxJMgtdupF6IBgacKR+h4R
	Nv7V7rnYyNRlWVXP1eWBcQqan/fsYOLKCCffnz8T5VzIfimGJ0FyXKopSE1feMamNMeg
	q1K+WdoGEBSoYtomqq3k/mLPgSZf6ChWXi+TCRvjqmJbl0MPpqYowmS2UHLiQcneY+aL
	SbsHIbxXe3hv04dpP2swGeHVZlxRthtjRnG6b2gA1A1iVG9CPKHXjuzadLyhwMSKYhoW
	zbxQDQBDtZZ3bOB67TROJYPMRXKboJYECNl3FjNO2zzExaGP6J5q/BGM3JZDArwHiDwL
	h0kw==
MIME-Version: 1.0
X-Received: by 10.67.22.3 with SMTP id ho3mr4078910pad.128.1393600415679; Fri,
	28 Feb 2014 07:13:35 -0800 (PST)
Received: by 10.70.21.37 with HTTP; Fri, 28 Feb 2014 07:13:35 -0800 (PST)
In-Reply-To: <530C8213020000780011F18D@nat28.tlf.novell.com>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
	<530C8213020000780011F18D@nat28.tlf.novell.com>
Date: Fri, 28 Feb 2014 15:13:35 +0000
X-Google-Sender-Auth: W4UoajA4vepPSFkcza7ByXVEy5o
Message-ID: <CAHqL=afHkZHs05hFo00YLDbkUg2JmOmZPXj=FCod8Wpjti9iRg@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 1/4] flask: add compat mode guest support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4075118133630999340=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============4075118133630999340==
Content-Type: multipart/alternative; boundary=047d7b5d86bfbc4aa004f378e177

--047d7b5d86bfbc4aa004f378e177
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 25, 2014 at 10:44 AM, Jan Beulich <JBeulich@suse.com> wrote:

> ... which has been missing since the introduction of the new interface
> in the 4.2 development cycle.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>

Acked-by: Keir Fraser <keir@xen.org>


>
> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
>          .quad compat_vcpu_op
>          .quad compat_ni_hypercall       /* 25 */
>          .quad compat_mmuext_op
> -        .quad do_xsm_op
> +        .quad compat_xsm_op
>          .quad compat_nmi_op
>          .quad compat_sched_op
>          .quad compat_callback_op        /* 30 */
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
>  headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
>  headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
>  headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
> +headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
>
>  cppflags-y                := -include public/xen-compat.h
>  cppflags-$(CONFIG_X86)    += -m32
> @@ -69,7 +70,9 @@ compat/xlat.h: xlat.lst $(filter-out com
>         export PYTHON=$(PYTHON); \
>         grep -v '^[      ]*#' xlat.lst | \
>         while read what name hdr; do \
> -               $(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what"
> compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g')
> || exit $$?; \
> +               hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')";
> \
> +               echo '$(headers-y)' | grep -q "$$hdr" || continue; \
> +               $(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what"
> compat_$$name $$hdr || exit $$?; \
>         done >$@.new
>         mv -f $@.new $@
>
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -99,3 +99,16 @@
>  !      vcpu_set_singleshot_timer       vcpu.h
>  ?      xenoprof_init                   xenoprof.h
>  ?      xenoprof_passive                xenoprof.h
> +?      flask_access                    xsm/flask_op.h
> +!      flask_boolean                   xsm/flask_op.h
> +?      flask_cache_stats               xsm/flask_op.h
> +?      flask_hash_stats                xsm/flask_op.h
> +!      flask_load                      xsm/flask_op.h
> +?      flask_ocontext                  xsm/flask_op.h
> +?      flask_peersid                   xsm/flask_op.h
> +?      flask_relabel                   xsm/flask_op.h
> +?      flask_setavc_threshold          xsm/flask_op.h
> +?      flask_setenforce                xsm/flask_op.h
> +!      flask_sid_context               xsm/flask_op.h
> +?      flask_transition                xsm/flask_op.h
> +!      flask_userlist                  xsm/flask_op.h
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
>      return -ENOSYS;
>  }
>
> +#ifdef CONFIG_COMPAT
> +static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t)
> op)
> +{
> +    return -ENOSYS;
> +}
> +#endif
> +
>  static XSM_INLINE char *xsm_show_irq_sid(int irq)
>  {
>      return NULL;
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -129,6 +129,9 @@ struct xsm_operations {
>      int (*tmem_control)(void);
>
>      long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
> +#ifdef CONFIG_COMPAT
> +    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
> +#endif
>
>      int (*hvm_param) (struct domain *d, unsigned long op);
>      int (*hvm_param_nested) (struct domain *d);
> @@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
>      return xsm_ops->do_xsm_op(op);
>  }
>
> +#ifdef CONFIG_COMPAT
> +static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
> +{
> +    return xsm_ops->do_compat_op(op);
> +}
> +#endif
> +
>  static inline int xsm_hvm_param (xsm_default_t def, struct domain *d,
> unsigned long op)
>  {
>      return xsm_ops->hvm_param(d, op);
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
>      set_to_dummy_if_null(ops, hvm_param_nested);
>
>      set_to_dummy_if_null(ops, do_xsm_op);
> +#ifdef CONFIG_COMPAT
> +    set_to_dummy_if_null(ops, do_compat_op);
> +#endif
>
>      set_to_dummy_if_null(ops, add_to_physmap);
>      set_to_dummy_if_null(ops, remove_from_physmap);
> --- a/xen/xsm/flask/flask_op.c
> +++ b/xen/xsm/flask/flask_op.c
> @@ -7,7 +7,7 @@
>   *  it under the terms of the GNU General Public License version 2,
>   *  as published by the Free Software Foundation.
>   */
> -
> +#ifndef COMPAT
>  #include <xen/errno.h>
>  #include <xen/event.h>
>  #include <xsm/xsm.h>
> @@ -20,6 +20,10 @@
>  #include <objsec.h>
>  #include <conditional.h>
>
> +#define ret_t long
> +#define _copy_to_guest copy_to_guest
> +#define _copy_from_guest copy_from_guest
> +
>  #ifdef FLASK_DEVELOP
>  int flask_enforcing = 0;
>  integer_param("flask_enforcing", flask_enforcing);
> @@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
>      return 0;
>  }
>
> +#endif /* COMPAT */
> +
>  static int flask_security_user(struct xen_flask_userlist *arg)
>  {
>      char *user;
> @@ -119,7 +125,7 @@ static int flask_security_user(struct xe
>
>      arg->size = nsids;
>
> -    if ( copy_to_guest(arg->u.sids, sids, nsids) )
> +    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
>          rv = -EFAULT;
>
>      xfree(sids);
> @@ -128,6 +134,8 @@ static int flask_security_user(struct xe
>      return rv;
>  }
>
> +#ifndef COMPAT
> +
>  static int flask_security_relabel(struct xen_flask_transition *arg)
>  {
>      int rv;
> @@ -208,6 +216,8 @@ static int flask_security_setenforce(str
>      return 0;
>  }
>
> +#endif /* COMPAT */
> +
>  static int flask_security_context(struct xen_flask_sid_context *arg)
>  {
>      int rv;
> @@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
>
>      arg->size = len;
>
> -    if ( !rv && copy_to_guest(arg->context, context, len) )
> +    if ( !rv && _copy_to_guest(arg->context, context, len) )
>          rv = -EFAULT;
>
>      xfree(context);
> @@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
>      return rv;
>  }
>
> +#ifndef COMPAT
> +
>  int flask_disable(void)
>  {
>      static int flask_disabled = 0;
> @@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
>      return rv;
>  }
>
> +#endif /* COMPAT */
> +
>  static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
>  {
>      char *name;
> @@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
>      return rv;
>  }
>
> -static int flask_security_commit_bools(void)
> -{
> -    int rv;
> -
> -    spin_lock(&sel_sem);
> -
> -    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
> -    if ( rv )
> -        goto out;
> -
> -    if ( bool_pending_values )
> -        rv = security_set_bools(bool_num, bool_pending_values);
> -
> - out:
> -    spin_unlock(&sel_sem);
> -    return rv;
> -}
> -
>  static int flask_security_get_bool(struct xen_flask_boolean *arg)
>  {
>      int rv;
> @@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
>              rv = -ERANGE;
>          arg->size = nameout_len;
>
> -        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
> +        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
>              rv = -EFAULT;
>          xfree(nameout);
>      }
> @@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
>      return rv;
>  }
>
> +#ifndef COMPAT
> +
> +static int flask_security_commit_bools(void)
> +{
> +    int rv;
> +
> +    spin_lock(&sel_sem);
> +
> +    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
> +    if ( rv )
> +        goto out;
> +
> +    if ( bool_pending_values )
> +        rv = security_set_bools(bool_num, bool_pending_values);
> +
> + out:
> +    spin_unlock(&sel_sem);
> +    return rv;
> +}
> +
>  static int flask_security_make_bools(void)
>  {
>      int ret = 0;
> @@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
>  }
>
>  #endif
> +#endif /* COMPAT */
>
>  static int flask_security_load(struct xen_flask_load *load)
>  {
> @@ -501,7 +518,7 @@ static int flask_security_load(struct xe
>      if ( !buf )
>          return -ENOMEM;
>
> -    if ( copy_from_guest(buf, load->buffer, load->size) )
> +    if ( _copy_from_guest(buf, load->buffer, load->size) )
>      {
>          ret = -EFAULT;
>          goto out_free;
> @@ -524,6 +541,8 @@ static int flask_security_load(struct xe
>      return ret;
>  }
>
> +#ifndef COMPAT
> +
>  static int flask_ocontext_del(struct xen_flask_ocontext *arg)
>  {
>      int rv;
> @@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
>      return rc;
>  }
>
> -long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
> +#endif /* !COMPAT */
> +
> +ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
>  {
>      xen_flask_op_t op;
>      int rv;
> @@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
>   out:
>      return rv;
>  }
> +
> +#ifndef COMPAT
> +#undef _copy_to_guest
> +#define _copy_to_guest copy_to_compat
> +#undef _copy_from_guest
> +#define _copy_from_guest copy_from_compat
> +
> +#include <compat/event_channel.h>
> +#include <compat/xsm/flask_op.h>
> +
> +CHECK_flask_access;
> +CHECK_flask_cache_stats;
> +CHECK_flask_hash_stats;
> +CHECK_flask_ocontext;
> +CHECK_flask_peersid;
> +CHECK_flask_relabel;
> +CHECK_flask_setavc_threshold;
> +CHECK_flask_setenforce;
> +CHECK_flask_transition;
> +
> +#define COMPAT
> +#define flask_copyin_string(ch, pb, sz, mx) ({ \
> +       XEN_GUEST_HANDLE_PARAM(char) gh; \
> +       guest_from_compat_handle(gh, ch); \
> +       flask_copyin_string(gh, pb, sz, mx); \
> +})
> +
> +#define xen_flask_load compat_flask_load
> +#define flask_security_load compat_security_load
> +
> +#define xen_flask_userlist compat_flask_userlist
> +#define flask_security_user compat_security_user
> +
> +#define xen_flask_sid_context compat_flask_sid_context
> +#define flask_security_context compat_security_context
> +#define flask_security_sid compat_security_sid
> +
> +#define xen_flask_boolean compat_flask_boolean
> +#define flask_security_resolve_bool compat_security_resolve_bool
> +#define flask_security_get_bool compat_security_get_bool
> +#define flask_security_set_bool compat_security_set_bool
> +
> +#define xen_flask_op_t compat_flask_op_t
> +#undef ret_t
> +#define ret_t int
> +#define do_flask_op compat_flask_op
> +
> +#include "flask_op.c"
> +#endif
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -1464,6 +1464,7 @@ static int flask_map_gmfn_foreign(struct
>  #endif
>
>  long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
> +int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
>
>  static struct xsm_operations flask_ops = {
>      .security_domaininfo = flask_security_domaininfo,
> @@ -1538,6 +1539,9 @@ static struct xsm_operations flask_ops =
>      .hvm_param_nested = flask_hvm_param_nested,
>
>      .do_xsm_op = do_flask_op,
> +#ifdef CONFIG_COMPAT
> +    .do_compat_op = compat_flask_op,
> +#endif
>
>      .add_to_physmap = flask_add_to_physmap,
>      .remove_from_physmap = flask_remove_from_physmap,
> --- a/xen/xsm/xsm_core.c
> +++ b/xen/xsm/xsm_core.c
> @@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
>      return xsm_do_xsm_op(op);
>  }
>
> -
> +#ifdef CONFIG_COMPAT
> +int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
> +{
> +    return xsm_do_compat_op(op);
> +}
> +#endif
>
>
>

+#include &lt;compat/xsm/flask_op.h&gt;<br>
+<br>
+CHECK_flask_access;<br>
+CHECK_flask_cache_stats;<br>
+CHECK_flask_hash_stats;<br>
+CHECK_flask_ocontext;<br>
+CHECK_flask_peersid;<br>
+CHECK_flask_relabel;<br>
+CHECK_flask_setavc_threshold;<br>
+CHECK_flask_setenforce;<br>
+CHECK_flask_transition;<br>
+<br>
+#define COMPAT<br>
+#define flask_copyin_string(ch, pb, sz, mx) ({ \<br>
+ =A0 =A0 =A0 XEN_GUEST_HANDLE_PARAM(char) gh; \<br>
+ =A0 =A0 =A0 guest_from_compat_handle(gh, ch); \<br>
+ =A0 =A0 =A0 flask_copyin_string(gh, pb, sz, mx); \<br>
+})<br>
+<br>
+#define xen_flask_load compat_flask_load<br>
+#define flask_security_load compat_security_load<br>
+<br>
+#define xen_flask_userlist compat_flask_userlist<br>
+#define flask_security_user compat_security_user<br>
+<br>
+#define xen_flask_sid_context compat_flask_sid_context<br>
+#define flask_security_context compat_security_context<br>
+#define flask_security_sid compat_security_sid<br>
+<br>
+#define xen_flask_boolean compat_flask_boolean<br>
+#define flask_security_resolve_bool compat_security_resolve_bool<br>
+#define flask_security_get_bool compat_security_get_bool<br>
+#define flask_security_set_bool compat_security_set_bool<br>
+<br>
+#define xen_flask_op_t compat_flask_op_t<br>
+#undef ret_t<br>
+#define ret_t int<br>
+#define do_flask_op compat_flask_op<br>
+<br>
+#include &quot;flask_op.c&quot;<br>
+#endif<br>
--- a/xen/xsm/flask/hooks.c<br>
+++ b/xen/xsm/flask/hooks.c<br>
@@ -1464,6 +1464,7 @@ static int flask_map_gmfn_foreign(struct<br>
=A0#endif<br>
<br>
=A0long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);<br>
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);<br>
<br>
=A0static struct xsm_operations flask_ops =3D {<br>
=A0 =A0 =A0.security_domaininfo =3D flask_security_domaininfo,<br>
@@ -1538,6 +1539,9 @@ static struct xsm_operations flask_ops =3D<br>
=A0 =A0 =A0.hvm_param_nested =3D flask_hvm_param_nested,<br>
<br>
=A0 =A0 =A0.do_xsm_op =3D do_flask_op,<br>
+#ifdef CONFIG_COMPAT<br>
+ =A0 =A0.do_compat_op =3D compat_flask_op,<br>
+#endif<br>
<br>
=A0 =A0 =A0.add_to_physmap =3D flask_add_to_physmap,<br>
=A0 =A0 =A0.remove_from_physmap =3D flask_remove_from_physmap,<br>
--- a/xen/xsm/xsm_core.c<br>
+++ b/xen/xsm/xsm_core.c<br>
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x<br>
=A0 =A0 =A0return xsm_do_xsm_op(op);<br>
=A0}<br>
<br>
-<br>
+#ifdef CONFIG_COMPAT<br>
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)<br>
+{<br>
+ =A0 =A0return xsm_do_compat_op(op);<br>
+}<br>
+#endif<br>
<br>
<br>
</blockquote></div><br></div></div>

--047d7b5d86bfbc4aa004f378e177--


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:13:42 2014
Date: Fri, 28 Feb 2014 15:13:35 +0000
From: Keir Fraser <keir@xen.org>
To: Jan Beulich <JBeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, dgdegra@tycho.nsa.gov
Subject: Re: [Xen-devel] [PATCH 1/4] flask: add compat mode guest support
Message-ID: <CAHqL=afHkZHs05hFo00YLDbkUg2JmOmZPXj=FCod8Wpjti9iRg@mail.gmail.com>
In-Reply-To: <530C8213020000780011F18D@nat28.tlf.novell.com>
References: <530C81A4020000780011F182@nat28.tlf.novell.com>
	<530C8213020000780011F18D@nat28.tlf.novell.com>
List-Id: Xen developer discussion <xen-devel.lists.xen.org>


On Tue, Feb 25, 2014 at 10:44 AM, Jan Beulich <JBeulich@suse.com> wrote:

> ... which has been missing since the introduction of the new interface
> in the 4.2 development cycle.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Daniel De Graaf <dgdegra@tycho.nsa.gov>
>

Acked-by: Keir Fraser <keir@xen.org>


>
> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -404,7 +404,7 @@ ENTRY(compat_hypercall_table)
>          .quad compat_vcpu_op
>          .quad compat_ni_hypercall       /* 25 */
>          .quad compat_mmuext_op
> -        .quad do_xsm_op
> +        .quad compat_xsm_op
>          .quad compat_nmi_op
>          .quad compat_sched_op
>          .quad compat_callback_op        /* 30 */
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -27,6 +27,7 @@ headers-$(CONFIG_X86)     += compat/arch
>  headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
>  headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
>  headers-y                 += compat/arch-$(compat-arch-y).h compat/xlat.h
> +headers-$(FLASK_ENABLE)   += compat/xsm/flask_op.h
>
>  cppflags-y                := -include public/xen-compat.h
>  cppflags-$(CONFIG_X86)    += -m32
> @@ -69,7 +70,9 @@ compat/xlat.h: xlat.lst $(filter-out com
>         export PYTHON=$(PYTHON); \
>         grep -v '^[      ]*#' xlat.lst | \
>         while read what name hdr; do \
> -               $(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what"
> compat_$$name $$(echo compat/$$hdr | sed 's,@arch@,$(compat-arch-y),g')
> || exit $$?; \
> +               hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')";
> \
> +               echo '$(headers-y)' | grep -q "$$hdr" || continue; \
> +               $(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what"
> compat_$$name $$hdr || exit $$?; \
>         done >$@.new
>         mv -f $@.new $@
>
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -99,3 +99,16 @@
>  !      vcpu_set_singleshot_timer       vcpu.h
>  ?      xenoprof_init                   xenoprof.h
>  ?      xenoprof_passive                xenoprof.h
> +?      flask_access                    xsm/flask_op.h
> +!      flask_boolean                   xsm/flask_op.h
> +?      flask_cache_stats               xsm/flask_op.h
> +?      flask_hash_stats                xsm/flask_op.h
> +!      flask_load                      xsm/flask_op.h
> +?      flask_ocontext                  xsm/flask_op.h
> +?      flask_peersid                   xsm/flask_op.h
> +?      flask_relabel                   xsm/flask_op.h
> +?      flask_setavc_threshold          xsm/flask_op.h
> +?      flask_setenforce                xsm/flask_op.h
> +!      flask_sid_context               xsm/flask_op.h
> +?      flask_transition                xsm/flask_op.h
> +!      flask_userlist                  xsm/flask_op.h
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -412,6 +412,13 @@ static XSM_INLINE long xsm_do_xsm_op(XEN
>      return -ENOSYS;
>  }
>
> +#ifdef CONFIG_COMPAT
> +static XSM_INLINE int xsm_do_compat_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t)
> op)
> +{
> +    return -ENOSYS;
> +}
> +#endif
> +
>  static XSM_INLINE char *xsm_show_irq_sid(int irq)
>  {
>      return NULL;
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -129,6 +129,9 @@ struct xsm_operations {
>      int (*tmem_control)(void);
>
>      long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
> +#ifdef CONFIG_COMPAT
> +    int (*do_compat_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
> +#endif
>
>      int (*hvm_param) (struct domain *d, unsigned long op);
>      int (*hvm_param_nested) (struct domain *d);
> @@ -499,6 +502,13 @@ static inline long xsm_do_xsm_op (XEN_GU
>      return xsm_ops->do_xsm_op(op);
>  }
>
> +#ifdef CONFIG_COMPAT
> +static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
> +{
> +    return xsm_ops->do_compat_op(op);
> +}
> +#endif
> +
>  static inline int xsm_hvm_param (xsm_default_t def, struct domain *d,
> unsigned long op)
>  {
>      return xsm_ops->hvm_param(d, op);
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -105,6 +105,9 @@ void xsm_fixup_ops (struct xsm_operation
>      set_to_dummy_if_null(ops, hvm_param_nested);
>
>      set_to_dummy_if_null(ops, do_xsm_op);
> +#ifdef CONFIG_COMPAT
> +    set_to_dummy_if_null(ops, do_compat_op);
> +#endif
>
>      set_to_dummy_if_null(ops, add_to_physmap);
>      set_to_dummy_if_null(ops, remove_from_physmap);
> --- a/xen/xsm/flask/flask_op.c
> +++ b/xen/xsm/flask/flask_op.c
> @@ -7,7 +7,7 @@
>   *  it under the terms of the GNU General Public License version 2,
>   *  as published by the Free Software Foundation.
>   */
> -
> +#ifndef COMPAT
>  #include <xen/errno.h>
>  #include <xen/event.h>
>  #include <xsm/xsm.h>
> @@ -20,6 +20,10 @@
>  #include <objsec.h>
>  #include <conditional.h>
>
> +#define ret_t long
> +#define _copy_to_guest copy_to_guest
> +#define _copy_from_guest copy_from_guest
> +
>  #ifdef FLASK_DEVELOP
>  int flask_enforcing = 0;
>  integer_param("flask_enforcing", flask_enforcing);
> @@ -95,6 +99,8 @@ static int flask_copyin_string(XEN_GUEST
>      return 0;
>  }
>
> +#endif /* COMPAT */
> +
>  static int flask_security_user(struct xen_flask_userlist *arg)
>  {
>      char *user;
> @@ -119,7 +125,7 @@ static int flask_security_user(struct xe
>
>      arg->size = nsids;
>
> -    if ( copy_to_guest(arg->u.sids, sids, nsids) )
> +    if ( _copy_to_guest(arg->u.sids, sids, nsids) )
>          rv = -EFAULT;
>
>      xfree(sids);
> @@ -128,6 +134,8 @@ static int flask_security_user(struct xe
>      return rv;
>  }
>
> +#ifndef COMPAT
> +
>  static int flask_security_relabel(struct xen_flask_transition *arg)
>  {
>      int rv;
> @@ -208,6 +216,8 @@ static int flask_security_setenforce(str
>      return 0;
>  }
>
> +#endif /* COMPAT */
> +
>  static int flask_security_context(struct xen_flask_sid_context *arg)
>  {
>      int rv;
> @@ -252,7 +262,7 @@ static int flask_security_sid(struct xen
>
>      arg->size = len;
>
> -    if ( !rv && copy_to_guest(arg->context, context, len) )
> +    if ( !rv && _copy_to_guest(arg->context, context, len) )
>          rv = -EFAULT;
>
>      xfree(context);
> @@ -260,6 +270,8 @@ static int flask_security_sid(struct xen
>      return rv;
>  }
>
> +#ifndef COMPAT
> +
>  int flask_disable(void)
>  {
>      static int flask_disabled = 0;
> @@ -302,6 +314,8 @@ static int flask_security_setavc_thresho
>      return rv;
>  }
>
> +#endif /* COMPAT */
> +
>  static int flask_security_resolve_bool(struct xen_flask_boolean *arg)
>  {
>      char *name;
> @@ -382,24 +396,6 @@ static int flask_security_set_bool(struc
>      return rv;
>  }
>
> -static int flask_security_commit_bools(void)
> -{
> -    int rv;
> -
> -    spin_lock(&sel_sem);
> -
> -    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
> -    if ( rv )
> -        goto out;
> -
> -    if ( bool_pending_values )
> -        rv = security_set_bools(bool_num, bool_pending_values);
> -
> - out:
> -    spin_unlock(&sel_sem);
> -    return rv;
> -}
> -
>  static int flask_security_get_bool(struct xen_flask_boolean *arg)
>  {
>      int rv;
> @@ -431,7 +427,7 @@ static int flask_security_get_bool(struc
>              rv = -ERANGE;
>          arg->size = nameout_len;
>
> -        if ( !rv && copy_to_guest(arg->name, nameout, nameout_len) )
> +        if ( !rv && _copy_to_guest(arg->name, nameout, nameout_len) )
>              rv = -EFAULT;
>          xfree(nameout);
>      }
> @@ -441,6 +437,26 @@ static int flask_security_get_bool(struc
>      return rv;
>  }
>
> +#ifndef COMPAT
> +
> +static int flask_security_commit_bools(void)
> +{
> +    int rv;
> +
> +    spin_lock(&sel_sem);
> +
> +    rv = domain_has_security(current->domain, SECURITY__SETBOOL);
> +    if ( rv )
> +        goto out;
> +
> +    if ( bool_pending_values )
> +        rv = security_set_bools(bool_num, bool_pending_values);
> +
> + out:
> +    spin_unlock(&sel_sem);
> +    return rv;
> +}
> +
>  static int flask_security_make_bools(void)
>  {
>      int ret = 0;
> @@ -484,6 +500,7 @@ static int flask_security_avc_cachestats
>  }
>
>  #endif
> +#endif /* COMPAT */
>
>  static int flask_security_load(struct xen_flask_load *load)
>  {
> @@ -501,7 +518,7 @@ static int flask_security_load(struct xe
>      if ( !buf )
>          return -ENOMEM;
>
> -    if ( copy_from_guest(buf, load->buffer, load->size) )
> +    if ( _copy_from_guest(buf, load->buffer, load->size) )
>      {
>          ret = -EFAULT;
>          goto out_free;
> @@ -524,6 +541,8 @@ static int flask_security_load(struct xe
>      return ret;
>  }
>
> +#ifndef COMPAT
> +
>  static int flask_ocontext_del(struct xen_flask_ocontext *arg)
>  {
>      int rv;
> @@ -636,7 +655,9 @@ static int flask_relabel_domain(struct x
>      return rc;
>  }
>
> -long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
> +#endif /* !COMPAT */
> +
> +ret_t do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op)
>  {
>      xen_flask_op_t op;
>      int rv;
> @@ -763,3 +784,52 @@ long do_flask_op(XEN_GUEST_HANDLE_PARAM(
>   out:
>      return rv;
>  }
> +
> +#ifndef COMPAT
> +#undef _copy_to_guest
> +#define _copy_to_guest copy_to_compat
> +#undef _copy_from_guest
> +#define _copy_from_guest copy_from_compat
> +
> +#include <compat/event_channel.h>
> +#include <compat/xsm/flask_op.h>
> +
> +CHECK_flask_access;
> +CHECK_flask_cache_stats;
> +CHECK_flask_hash_stats;
> +CHECK_flask_ocontext;
> +CHECK_flask_peersid;
> +CHECK_flask_relabel;
> +CHECK_flask_setavc_threshold;
> +CHECK_flask_setenforce;
> +CHECK_flask_transition;
> +
> +#define COMPAT
> +#define flask_copyin_string(ch, pb, sz, mx) ({ \
> +       XEN_GUEST_HANDLE_PARAM(char) gh; \
> +       guest_from_compat_handle(gh, ch); \
> +       flask_copyin_string(gh, pb, sz, mx); \
> +})
> +
> +#define xen_flask_load compat_flask_load
> +#define flask_security_load compat_security_load
> +
> +#define xen_flask_userlist compat_flask_userlist
> +#define flask_security_user compat_security_user
> +
> +#define xen_flask_sid_context compat_flask_sid_context
> +#define flask_security_context compat_security_context
> +#define flask_security_sid compat_security_sid
> +
> +#define xen_flask_boolean compat_flask_boolean
> +#define flask_security_resolve_bool compat_security_resolve_bool
> +#define flask_security_get_bool compat_security_get_bool
> +#define flask_security_set_bool compat_security_set_bool
> +
> +#define xen_flask_op_t compat_flask_op_t
> +#undef ret_t
> +#define ret_t int
> +#define do_flask_op compat_flask_op
> +
> +#include "flask_op.c"
> +#endif
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -1464,6 +1464,7 @@ static int flask_map_gmfn_foreign(struct
>  #endif
>
>  long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
> +int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
>
>  static struct xsm_operations flask_ops = {
>      .security_domaininfo = flask_security_domaininfo,
> @@ -1538,6 +1539,9 @@ static struct xsm_operations flask_ops =
>      .hvm_param_nested = flask_hvm_param_nested,
>
>      .do_xsm_op = do_flask_op,
> +#ifdef CONFIG_COMPAT
> +    .do_compat_op = compat_flask_op,
> +#endif
>
>      .add_to_physmap = flask_add_to_physmap,
>      .remove_from_physmap = flask_remove_from_physmap,
> --- a/xen/xsm/xsm_core.c
> +++ b/xen/xsm/xsm_core.c
> @@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x
>      return xsm_do_xsm_op(op);
>  }
>
> -
> +#ifdef CONFIG_COMPAT
> +int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
> +{
> +    return xsm_do_compat_op(op);
> +}
> +#endif
>
>
>

=A0#endif<br>
<br>
=A0long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);<br>
+int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);<br>
<br>
=A0static struct xsm_operations flask_ops =3D {<br>
=A0 =A0 =A0.security_domaininfo =3D flask_security_domaininfo,<br>
@@ -1538,6 +1539,9 @@ static struct xsm_operations flask_ops =3D<br>
=A0 =A0 =A0.hvm_param_nested =3D flask_hvm_param_nested,<br>
<br>
=A0 =A0 =A0.do_xsm_op =3D do_flask_op,<br>
+#ifdef CONFIG_COMPAT<br>
+ =A0 =A0.do_compat_op =3D compat_flask_op,<br>
+#endif<br>
<br>
=A0 =A0 =A0.add_to_physmap =3D flask_add_to_physmap,<br>
=A0 =A0 =A0.remove_from_physmap =3D flask_remove_from_physmap,<br>
--- a/xen/xsm/xsm_core.c<br>
+++ b/xen/xsm/xsm_core.c<br>
@@ -116,4 +116,9 @@ long do_xsm_op (XEN_GUEST_HANDLE_PARAM(x<br>
=A0 =A0 =A0return xsm_do_xsm_op(op);<br>
=A0}<br>
<br>
-<br>
+#ifdef CONFIG_COMPAT<br>
+int compat_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)<br>
+{<br>
+ =A0 =A0return xsm_do_compat_op(op);<br>
+}<br>
+#endif<br>
<br>
<br>
</blockquote></div><br></div></div>

--047d7b5d86bfbc4aa004f378e177--


--===============4075118133630999340==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4075118133630999340==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:15:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJP9u-00018U-4N; Fri, 28 Feb 2014 15:15:30 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJP9t-000185-1n
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:15:29 +0000
Received: from [85.158.143.35:8472] by server-3.bemta-4.messagelabs.com id
	54/A5-11539-018A0135; Fri, 28 Feb 2014 15:15:28 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-3.tower-21.messagelabs.com!1393600525!9079973!1
X-Originating-IP: [209.85.160.52]
X-SpamReason: No, hits=0.4 required=7.0 tests=HTML_50_60,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17257 invoked from network); 28 Feb 2014 15:15:27 -0000
Received: from mail-pb0-f52.google.com (HELO mail-pb0-f52.google.com)
	(209.85.160.52)
	by server-3.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:15:27 -0000
Received: by mail-pb0-f52.google.com with SMTP id rr13so880489pbb.11
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 07:15:25 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=xh7y+D8lQfuGC/YuRRAWjc2Wmh4V40hiyeUtDmpxHCQ=;
	b=sAcU1wQ+yYOQpWDj0OpFycDOxTlFZFMZoNYmuTpG4luQXD05ii9Ogn/LfjlynPSQaj
	Al7rcFyW0hGYJiJhstz+rBVieAwi1dJ3yQiwZVwYhW7SNbU9B0UVshd21tMsPNd+2pKG
	XAVwJUnf+k1spdceF+ym2yjf74lWB9NUTKnWJ2oJXEN1Iv9+oXvWp0trEDasttW/pfHl
	7ClMHrbEU5gaW6px3bU3GkE4RbTiRbzv1U3JDUONxuy++HOUdSkw4Nzp5c+Bk3D+3uOm
	lIIpC1fpjTz4rQm6+iqKTivX9y7eQJ4SfqeZcegQPyH1t7WFH+pdUnPpAOJ21oZXkiNR
	57+w==
MIME-Version: 1.0
X-Received: by 10.67.22.3 with SMTP id ho3mr4088330pad.128.1393600525285; Fri,
	28 Feb 2014 07:15:25 -0800 (PST)
Received: by 10.70.21.37 with HTTP; Fri, 28 Feb 2014 07:15:25 -0800 (PST)
In-Reply-To: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
Date: Fri, 28 Feb 2014 15:15:25 +0000
X-Google-Sender-Auth: 9vqZ_0i1nq5auiiosl_lW8AKrjI
Message-ID: <CAHqL=adNmqC-uOgzU0CgaLOsj6gJ3KZq+TRrrc3u_P9V7onZew@mail.gmail.com>
From: Keir Fraser <keir@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 0/5] Improvements with noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============3578653455800864626=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============3578653455800864626==
Content-Type: multipart/alternative; boundary=047d7b5d86bf44bb7804f378e8b5

--047d7b5d86bf44bb7804f378e8b5
Content-Type: text/plain; charset=ISO-8859-1

On Tue, Feb 25, 2014 at 12:23 PM, Andrew Cooper
<andrew.cooper3@citrix.com>wrote:

> Make better use of noreturn.  It allows optimising compilers to produce
> more
> efficient code.
>
> Each patch is compile tested on each architecture, and the result is
> functionally tested on x86 and compile tested on GCC 4.1.1 to verify that
> older compilers are happy.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>

Nice.

Acked-by: Keir Fraser <keir@xen.org>


>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel
>

--047d7b5d86bf44bb7804f378e8b5--


--===============3578653455800864626==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============3578653455800864626==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 15:18:53 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:18:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPD9-0001OS-Ry; Fri, 28 Feb 2014 15:18:51 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <julien.grall@linaro.org>) id 1WJPD8-0001OM-V8
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:18:51 +0000
Received: from [85.158.137.68:35940] by server-15.bemta-3.messagelabs.com id
	CF/E5-19263-AD8A0135; Fri, 28 Feb 2014 15:18:50 +0000
X-Env-Sender: julien.grall@linaro.org
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393600727!3349602!1
X-Originating-IP: [209.85.160.43]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23485 invoked from network); 28 Feb 2014 15:18:49 -0000
Received: from mail-pb0-f43.google.com (HELO mail-pb0-f43.google.com)
	(209.85.160.43)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:18:49 -0000
Received: by mail-pb0-f43.google.com with SMTP id um1so876538pbc.30
	for <xen-devel@lists.xen.org>; Fri, 28 Feb 2014 07:18:47 -0800 (PST)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=1e100.net; s=20130820;
	h=x-gm-message-state:message-id:date:from:user-agent:mime-version:to
	:cc:subject:references:in-reply-to:content-type
	:content-transfer-encoding;
	bh=6bXNPqWbYTcCEPZWCqn+gKk+P+jm0VOUd1G1ur+eELI=;
	b=fQ8W29/pnvAhYO2n3dtQU2DoYqbc7R38rU1w90no0tax60fNMJGERyULqfTrGU+dD5
	ESYg4cvddgDEctNO1CTcYEAF7dYaDTa54PgLlVgdJJSdaGmU2BReH0pS+GFfz5hYXow4
	cPJrTaIbIhx6Dcl26gx0JtMFctqdIIpEOV03Bi82EBUtaCitvFx8vX3yi7n2bSEt++aF
	IQvwDYyeTm16vvKgxA8+KDFMX7a9r0362aPkOHFv/2nv06pVJCYOQcXiLV1fkcOq4ahA
	QefkxKIH2D5zMR9Cy8nMec6o2hhFHZTKyrX3RV/P7K/1WRlL+Bjg4lH09r1jG+KIrPso
	sWPw==
X-Gm-Message-State: ALoCoQkAUjIfeQiuSG4h/lbxCXOOOUpRByiFenv9fF6jx+O5h/u2KHptcgpD6YyA/bpA91Mlhocg
X-Received: by 10.68.201.226 with SMTP id kd2mr3927534pbc.157.1393600727177;
	Fri, 28 Feb 2014 07:18:47 -0800 (PST)
Received: from localhost.localdomain (162.39.246.220.static.netvigator.com.
	[220.246.39.162])
	by mx.google.com with ESMTPSA id oz7sm7093047pbc.41.2014.02.28.07.18.43
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 07:18:46 -0800 (PST)
Message-ID: <5310A8D1.6010501@linaro.org>
Date: Fri, 28 Feb 2014 23:18:41 +0800
From: Julien Grall <julien.grall@linaro.org>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Ian Campbell <Ian.Campbell@citrix.com>, xen-devel@lists.xen.org
References: <1393416780-10912-1-git-send-email-ian.campbell@citrix.com>
	<1393416841.18730.38.camel@kazak.uk.xensource.com>
In-Reply-To: <1393416841.18730.38.camel@kazak.uk.xensource.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>, ian.jackson@eu.citrix.com,
	tim@xen.org, stefano.stabellini@eu.citrix.com
Subject: Re: [Xen-devel] [PATCH] libxl: arm: do not create /chosen/bootargs
 in DTB if no cmdline is specified
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



On 26/02/14 20:14, Ian Campbell wrote:
> On Wed, 2014-02-26 at 12:13 +0000, Ian Campbell wrote:
>> Otherwise we dereference a NULL pointer.
>>
>> I saw this while experimenting with libvirt on Xen on ARM; xl already checks
>> that the command line is non-NULL and provides "" as a default.
>>
>> Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
>> Cc: george.dunlap@citrix.com>
Acked-by: Julien Grall <julien.grall@linaro.org>

>
> Typo (extra ">") so George's CC got missed out...
>
>> ---
>> This is a pretty obvious fix and would be nice to have if we are taking any
>> more fixes for other stuff, but otherwise I think we can leave this to 4.4.1
>> quite happily.
>> ---
>>   tools/libxl/libxl_arm.c |    6 ++++--
>>   1 file changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
>> index 0a1c8c5..0cfd0cf 100644
>> --- a/tools/libxl/libxl_arm.c
>> +++ b/tools/libxl/libxl_arm.c
>> @@ -164,8 +164,10 @@ static int make_chosen_node(libxl__gc *gc, void *fdt,
>>       res = fdt_begin_node(fdt, "chosen");
>>       if (res) return res;
>>
>> -    res = fdt_property_string(fdt, "bootargs", info->u.pv.cmdline);
>> -    if (res) return res;
>> +    if (info->u.pv.cmdline) {
>> +        res = fdt_property_string(fdt, "bootargs", info->u.pv.cmdline);
>> +        if (res) return res;
>> +    }
>>
>>       res = fdt_end_node(fdt);
>>       if (res) return res;
>
>

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:23:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:23:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPH6-0001cZ-Iu; Fri, 28 Feb 2014 15:22:56 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJPH4-0001cS-OJ
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:22:54 +0000
Received: from [85.158.143.35:2192] by server-1.bemta-4.messagelabs.com id
	C9/83-31661-EC9A0135; Fri, 28 Feb 2014 15:22:54 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393600972!9071161!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8093 invoked from network); 28 Feb 2014 15:22:53 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:22:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105028740"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 15:22:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:22:50 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJPH0-00079I-Dj;
	Fri, 28 Feb 2014 15:22:50 +0000
Date: Fri, 28 Feb 2014 15:22:50 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
Message-ID: <20140228152250.GP16241@zion.uk.xensource.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 02:33:06PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb hash result.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 140 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 4f5a431..470d6ed 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,12 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>  
> +/* Module parameters */
> +unsigned int xennet_max_queues;
> +module_param(xennet_max_queues, uint, 0644);
> +MODULE_PARM_DESC(xennet_max_queues,
> +		"Maximum number of queues per virtual interface");
> +

Maybe I'm nit-picking here, but exposing xennet_max_queues as a sysfs knob in
the frontend vs. xenvif_max_queues in the backend doesn't look very good to
me -- userspace tools would need to query different knobs in the frontend and
the backend. I think it makes sense to use a unified name in both.

You can either use xenvif_max_queues as backend does or even just
max_queues for both frontend and backend.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Message-ID: <20140228152250.GP16241@zion.uk.xensource.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	wei.liu2@citrix.com, ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, Feb 24, 2014 at 02:33:06PM +0000, Andrew J. Bennieston wrote:
> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> 
> Build on the refactoring of the previous patch to implement multiple
> queues between xen-netfront and xen-netback.
> 
> Check XenStore for multi-queue support, and set up the rings and event
> channels accordingly.
> 
> Write ring references and event channels to XenStore in a queue
> hierarchy if appropriate, or flat when using only one queue.
> 
> Update the xennet_select_queue() function to choose the queue on which
> to transmit a packet based on the skb hash result.
> 
> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> ---
>  drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
>  1 file changed, 140 insertions(+), 38 deletions(-)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 4f5a431..470d6ed 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -57,6 +57,12 @@
>  #include <xen/interface/memory.h>
>  #include <xen/interface/grant_table.h>
>  
> +/* Module parameters */
> +unsigned int xennet_max_queues;
> +module_param(xennet_max_queues, uint, 0644);
> +MODULE_PARM_DESC(xennet_max_queues,
> +		"Maximum number of queues per virtual interface");
> +

Maybe I'm nit-picking here, but exposing xennet_max_queues as a sysfs
knob in the frontend vs. xenvif_max_queues in the backend doesn't look
very good to me -- userspace tools would need to query differently
named knobs in the frontend and the backend. I think it makes sense to
use a unified name in both frontend and backend.

You can either use xenvif_max_queues, as the backend does, or simply
max_queues for both frontend and backend.
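
A unified user-visible name needn't force renaming the internal C
symbols: the kernel's module_param_named() decouples the parameter name
shown under /sys/module from the variable it controls. A sketch only
(kernel-style C fragment, not the actual patch) of how both drivers
could expose the same "max_queues" knob:

```c
/* Illustrative sketch, NOT the code under review.
 * module_param_named(name, variable, type, perm) exposes "name"
 * in sysfs while the driver keeps its own variable name. */

/* xen-netfront could keep its symbol but expose "max_queues": */
static unsigned int xennet_max_queues;
module_param_named(max_queues, xennet_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues,
		 "Maximum number of queues per virtual interface");

/* xen-netback likewise, with its own internal symbol: */
static unsigned int xenvif_max_queues;
module_param_named(max_queues, xenvif_max_queues, uint, 0644);
MODULE_PARM_DESC(max_queues, "Maximum number of queues per VIF");
```

With this, userspace reads the same path on both sides, e.g.
/sys/module/xen_netfront/parameters/max_queues and
/sys/module/xen_netback/parameters/max_queues.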

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:25:08 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:25:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPJC-0001ii-48; Fri, 28 Feb 2014 15:25:06 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <carnold@suse.com>) id 1WJPJA-0001ia-BI
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:25:04 +0000
Received: from [193.109.254.147:48014] by server-1.bemta-14.messagelabs.com id
	9D/3F-29588-E4AA0135; Fri, 28 Feb 2014 15:25:02 +0000
X-Env-Sender: carnold@suse.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393601100!2245306!1
X-Originating-IP: [137.65.248.74]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 27688 invoked from network); 28 Feb 2014 15:25:01 -0000
Received: from prv-mh.provo.novell.com (HELO prv-mh.provo.novell.com)
	(137.65.248.74)
	by server-7.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 15:25:01 -0000
Received: from INET-PRV-MTA by prv-mh.provo.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 08:24:59 -0700
Message-Id: <531047D902000091000B9532@prv-mh.provo.novell.com>
X-Mailer: Novell GroupWise Internet Agent 14.0.0 
Date: Fri, 28 Feb 2014 08:24:57 -0700
From: "Charles Arnold" <carnold@suse.com>
To: "xen-devel" <xen-devel@lists.xen.org>
References: <530E0F7E02000091000B8C20@prv-mh.provo.novell.com>
	<1393555348.20365.11.camel@hastur.hellion.org.uk>
	<20140228033029.GA14114@u109add4315675089e695.ant.amazon.com>
In-Reply-To: <20140228033029.GA14114@u109add4315675089e695.ant.amazon.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Matt Wilson <msw@amazon.com>
Subject: Re: [Xen-devel] Support for btrfs in pv-grub
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 2/27/2014 at 08:30 PM, Matt Wilson <msw@linux.com> wrote: 
> On Fri, Feb 28, 2014 at 02:42:28AM +0000, Ian Campbell wrote:
>> On Wed, 2014-02-26 at 15:59 -0700, Charles Arnold wrote:
>> > Code to support btrfs in pv-grub was added almost two years ago (c/s 25154).
>> 
>> Ccing the author.
>> 
>> > Building the code doesn't seem to be enabled in stubdom/grub/Makefile.
>> > Was this functionality ever intended to work?
>> 
>> One would like to assume so!
>> 
>> > Does anyone know the status of this code?
>> 
>> Matt?
>> 
>> Looks like Makefile.am was patched but not Makefile.in, and I don't
>> think we run automake as part of the stubdom build process.
> 
> True, but we also don't use the Makefiles in grub-upstream to build
> GRUB. We use xen/stubdom/grub/Makefile, which also isn't patched to
> build the btrfs code.
> 
> Unfortunately, when I added it to grub/Makefile, it didn't build with
> a modern compiler.
> 
> Do you think we should revert the patch that adds btrfs support?
> 
> --msw

Another option would be to fix the compiler errors and get it working :)
The compiler has problems with the cassert define and with too many args
in the call to next_partition in the 61btrfs.diff patch.  These issues
may be worked around, but given that the code has never actually been
used there may be runtime problems as well.  Is there someone familiar
enough with btrfs on the list to work through these issues?

- Charles



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:51:48 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPiZ-0002EQ-Ji; Fri, 28 Feb 2014 15:51:19 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJPiX-0002EL-SS
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:51:18 +0000
Received: from [85.158.139.211:42565] by server-9.bemta-5.messagelabs.com id
	1F/9B-11237-570B0135; Fri, 28 Feb 2014 15:51:17 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-206.messagelabs.com!1393602676!6893299!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 19976 invoked from network); 28 Feb 2014 15:51:16 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 15:51:16 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 15:51:15 +0000
Message-Id: <5310BE8F0200007800120407@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 15:51:27 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Wei Liu" <wei.liu2@citrix.com>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
	<5310AEF90200007800120355@nat28.tlf.novell.com>
	<20140228150719.GL16241@zion.uk.xensource.com>
In-Reply-To: <20140228150719.GL16241@zion.uk.xensource.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	"Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	david.vrabel@citrix.com, ian.campbell@citrix.com
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.02.14 at 16:07, Wei Liu <wei.liu2@citrix.com> wrote:
> On Fri, Feb 28, 2014 at 02:44:57PM +0000, Jan Beulich wrote:
>> >>> On 24.02.14 at 15:28, "Andrew J. Bennieston" <andrew.bennieston@citrix.com> 
> wrote:
>> > From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>> > 
>> > Document the multi-queue feature in terms of XenStore keys to be written
>> > by the backend and by the frontend.
>> > 
>> > Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> 
>> Anyone of you networking people care to review/ack this?
> 
> Sorry for the late response.
> 
> I was actually thinking about acking this after the Linux code goes
> in. If there's no user and no actual implementation of this feature,
> it wouldn't be very useful to merge it into the master tree anyway.

Why not - a specification doesn't depend on there being an
actual implementation.
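
For readers without the patch at hand, the XenStore layout such a
specification describes is roughly of the following shape. This is an
illustrative sketch only; the authoritative key names are those defined
in netif.h and the patch under review:

```
# Backend advertises multi-queue support (illustrative, not normative):
/local/domain/0/backend/vif/<domid>/<vif>/multi-queue-max-queues = "<n>"

# Frontend selects a queue count and writes per-queue keys in a
# queue-<N> hierarchy (flat keys when only one queue is used):
/local/domain/<domid>/device/vif/<vif>/multi-queue-num-queues    = "<m>"
/local/domain/<domid>/device/vif/<vif>/queue-0/tx-ring-ref       = "<ref>"
/local/domain/<domid>/device/vif/<vif>/queue-0/rx-ring-ref       = "<ref>"
/local/domain/<domid>/device/vif/<vif>/queue-0/event-channel-tx  = "<evtchn>"
/local/domain/<domid>/device/vif/<vif>/queue-0/event-channel-rx  = "<evtchn>"
```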

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:55:28 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPmX-0002LI-UW; Fri, 28 Feb 2014 15:55:25 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJPmX-0002LD-1x
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 15:55:25 +0000
Received: from [193.109.254.147:22579] by server-4.bemta-14.messagelabs.com id
	D6/42-32066-C61B0135; Fri, 28 Feb 2014 15:55:24 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-2.tower-27.messagelabs.com!1393602922!7518914!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14588 invoked from network); 28 Feb 2014 15:55:23 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-2.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 15:55:23 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="106656080"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 15:55:21 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 10:55:20 -0500
Message-ID: <1393602915.27819.23.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Date: Fri, 28 Feb 2014 15:55:15 +0000
In-Reply-To: <alpine.DEB.2.02.1402281428470.31489@kaball.uk.xensource.com>
References: <1393418058-14113-1-git-send-email-ian.campbell@citrix.com>
	<20140226123703.GC29185@redhat.com>
	<1393418554.18730.41.camel@kazak.uk.xensource.com>
	<20140226140033.GA15045@aepfle.de>
	<1393426513.9640.18.camel@dagon.hellion.org.uk>
	<20140226150100.GC6046@redhat.com>
	<1393553550.20365.5.camel@hastur.hellion.org.uk>
	<20140228110329.GB17909@redhat.com>
	<alpine.DEB.2.02.1402281428470.31489@kaball.uk.xensource.com>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA1
Cc: Olaf Hering <olaf@aepfle.de>, "Daniel P. Berrange" <berrange@redhat.com>,
	libvir-list@redhat.com, julien.grall@linaro.org, tim@xen.org,
	xen-devel@lists.xen.org, Jim Fehlig <jfehlig@suse.com>
Subject: Re: [Xen-devel] [libvirt] [PATCH LIBVIRT] libxl: Recognise ARM
 architectures
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-28 at 14:29 +0000, Stefano Stabellini wrote:
> On Fri, 28 Feb 2014, Daniel P. Berrange wrote:
> > So actually this leads me to ask what kind of console Arm fullvirt Xen
> > guests actually have ? If they just use the traditional Xen paravirt
> > console, then we just need to make sure that this works for them by
> > default:
> > 
> >     <console type='pty'>
> >       <target type='xen'/>
> >     </console>
> > 
> > 
> > If there's a different type of console device that's not related to
> > the Xen paravirt console device, then we'd need to invent a new
> > <target type='xxx'/> value for Arm.
> 
> It is just the traditional Xen paravirt console.

I tried the above, which AIUI should work, but it fails with the same
"cannot find character device <null>" error. I'll investigate properly
next week.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 15:56:01 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 15:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPn6-0002Nr-BP; Fri, 28 Feb 2014 15:56:00 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJPn5-0002Nm-Cq
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 15:55:59 +0000
Received: from [85.158.139.211:3461] by server-1.bemta-5.messagelabs.com id
	9C/A9-12859-E81B0135; Fri, 28 Feb 2014 15:55:58 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-3.tower-206.messagelabs.com!1393602957!6957835!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18083 invoked from network); 28 Feb 2014 15:55:58 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-3.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 15:55:58 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 15:55:57 +0000
Message-Id: <5310BFAC020000780012040A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 15:56:12 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <Ian.Campbell@citrix.com>,
	"Paul Durrant" <Paul.Durrant@citrix.com>, "Wei Liu" <wei.liu2@citrix.com>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
	<5310AEF90200007800120355@nat28.tlf.novell.com>
	<9AAE0902D5BC7E449B7C8E4E778ABCD025F224@AMSPEX01CL01.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD025F224@AMSPEX01CL01.citrite.net>
Mime-Version: 1.0
Content-Disposition: inline
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Bennieston <andrew.bennieston@citrix.com>,
	David Vrabel <david.vrabel@citrix.com>
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 28.02.14 at 15:56, Paul Durrant <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 28 February 2014 14:45
>> To: Ian Campbell; Paul Durrant; Wei Liu
>> Cc: Andrew Bennieston; David Vrabel; xen-devel@lists.xenproject.org 
>> Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back, front}
>> multi-queue feature
>> 
>> >>> On 24.02.14 at 15:28, "Andrew J. Bennieston"
>> <andrew.bennieston@citrix.com> wrote:
>> > From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>> >
>> > Document the multi-queue feature in terms of XenStore keys to be written
>> > by the backend and by the frontend.
>> >
>> > Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> 
>> Anyone of you networking people care to review/ack this?
>> 
> 
> I'm not a maintainer, so I can't ack, but
> 
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

Yeah, the situation with the public headers is anything but satisfying.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:06:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJPxE-000386-KP; Fri, 28 Feb 2014 16:06:28 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <andrew.bennieston@citrix.com>) id 1WJPxC-000381-Ls
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:06:26 +0000
Received: from [193.109.254.147:18677] by server-13.bemta-14.messagelabs.com
	id 95/28-01226-104B0135; Fri, 28 Feb 2014 16:06:25 +0000
X-Env-Sender: andrew.bennieston@citrix.com
X-Msg-Ref: server-5.tower-27.messagelabs.com!1393603583!7539259!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25184 invoked from network); 28 Feb 2014 16:06:25 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-5.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:06:25 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105044672"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 16:06:22 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 11:06:22 -0500
Received: from [10.80.3.220]	by ukmail1.uk.xensource.com with esmtp (Exim
	4.69)	(envelope-from <andrew.bennieston@citrix.com>)	id
	1WJPvM-0007op-8E; Fri, 28 Feb 2014 16:04:32 +0000
Message-ID: <5310B390.1060604@citrix.com>
Date: Fri, 28 Feb 2014 16:04:32 +0000
From: Andrew Bennieston <andrew.bennieston@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux i686 on x86_64;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Wei Liu <wei.liu2@citrix.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
	<20140228152250.GP16241@zion.uk.xensource.com>
In-Reply-To: <20140228152250.GP16241@zion.uk.xensource.com>
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 15:22, Wei Liu wrote:
> On Mon, Feb 24, 2014 at 02:33:06PM +0000, Andrew J. Bennieston wrote:
>> From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
>>
>> Build on the refactoring of the previous patch to implement multiple
>> queues between xen-netfront and xen-netback.
>>
>> Check XenStore for multi-queue support, and set up the rings and event
>> channels accordingly.
>>
>> Write ring references and event channels to XenStore in a queue
>> hierarchy if appropriate, or flat when using only one queue.
>>
>> Update the xennet_select_queue() function to choose the queue on which
>> to transmit a packet based on the skb hash result.
>>
>> Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
>> ---
>>   drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
>>   1 file changed, 140 insertions(+), 38 deletions(-)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index 4f5a431..470d6ed 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -57,6 +57,12 @@
>>   #include <xen/interface/memory.h>
>>   #include <xen/interface/grant_table.h>
>>
>> +/* Module parameters */
>> +unsigned int xennet_max_queues;
>> +module_param(xennet_max_queues, uint, 0644);
>> +MODULE_PARM_DESC(xennet_max_queues,
>> +		"Maximum number of queues per virtual interface");
>> +
>
> Maybe I'm nit-picking here. But I think exposing xennet_max_queues as
> sysfs knob in frontend v.s. xenvif_max_queues in backend doesn't look
> very good to me -- userspace tools would need to query different knobs
> in frontend and backend. I think it makes sense to use a unified name in
> both frontend and backend.
>
> You can either use xenvif_max_queues as backend does or even just
> max_queues for both frontend and backend.
They already have to look in a different place, /sys/module/xen_netback
vs /sys/module/xen_netfront, so having a different parameter name as well
isn't really a problem...

-Andrew

>
> Wei.
>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:13:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQ47-0003Ps-OJ; Fri, 28 Feb 2014 16:13:35 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <wei.liu2@citrix.com>) id 1WJQ46-0003Pn-CK
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:13:34 +0000
Received: from [85.158.137.68:43807] by server-10.bemta-3.messagelabs.com id
	81/4B-07302-DA5B0135; Fri, 28 Feb 2014 16:13:33 +0000
X-Env-Sender: wei.liu2@citrix.com
X-Msg-Ref: server-6.tower-31.messagelabs.com!1393604011!4854090!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15044 invoked from network); 28 Feb 2014 16:13:32 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-6.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:13:32 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="106663412"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 16:13:29 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 11:13:29 -0500
Received: from zion.uk.xensource.com ([10.80.2.73])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<wei.liu2@citrix.com>)	id 1WJQ41-00080q-6N;
	Fri, 28 Feb 2014 16:13:29 +0000
Date: Fri, 28 Feb 2014 16:13:29 +0000
From: Wei Liu <wei.liu2@citrix.com>
To: Andrew Bennieston <andrew.bennieston@citrix.com>
Message-ID: <20140228161329.GQ16241@zion.uk.xensource.com>
References: <1393252387-17496-1-git-send-email-andrew.bennieston@citrix.com>
	<1393252387-17496-5-git-send-email-andrew.bennieston@citrix.com>
	<20140228152250.GP16241@zion.uk.xensource.com>
	<5310B390.1060604@citrix.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5310B390.1060604@citrix.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-DLP: MIA2
Cc: xen-devel@lists.xenproject.org, paul.durrant@citrix.com,
	Wei Liu <wei.liu2@citrix.com>, ian.campbell@citrix.com,
	netdev@vger.kernel.org
Subject: Re: [Xen-devel] [PATCH V5 net-next 4/5] xen-netfront: Add support
 for multiple queues
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 04:04:32PM +0000, Andrew Bennieston wrote:
> On 28/02/14 15:22, Wei Liu wrote:
> >On Mon, Feb 24, 2014 at 02:33:06PM +0000, Andrew J. Bennieston wrote:
> >>From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> >>
> >>Build on the refactoring of the previous patch to implement multiple
> >>queues between xen-netfront and xen-netback.
> >>
> >>Check XenStore for multi-queue support, and set up the rings and event
> >>channels accordingly.
> >>
> >>Write ring references and event channels to XenStore in a queue
> >>hierarchy if appropriate, or flat when using only one queue.
> >>
> >>Update the xennet_select_queue() function to choose the queue on which
> >>to transmit a packet based on the skb hash result.
> >>
> >>Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> >>---
> >>  drivers/net/xen-netfront.c |  178 ++++++++++++++++++++++++++++++++++----------
> >>  1 file changed, 140 insertions(+), 38 deletions(-)
> >>
> >>diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> >>index 4f5a431..470d6ed 100644
> >>--- a/drivers/net/xen-netfront.c
> >>+++ b/drivers/net/xen-netfront.c
> >>@@ -57,6 +57,12 @@
> >>  #include <xen/interface/memory.h>
> >>  #include <xen/interface/grant_table.h>
> >>
> >>+/* Module parameters */
> >>+unsigned int xennet_max_queues;
> >>+module_param(xennet_max_queues, uint, 0644);
> >>+MODULE_PARM_DESC(xennet_max_queues,
> >>+		"Maximum number of queues per virtual interface");
> >>+
> >
> >Maybe I'm nit-picking here. But I think exposing xennet_max_queues as
> >sysfs knob in frontend v.s. xenvif_max_queues in backend doesn't look
> >very good to me -- userspace tools would need to query different knobs
> >in frontend and backend. I think it makes sense to use a unified name in
> >both frontend and backend.
> >
> >You can either use xenvif_max_queues as backend does or even just
> >max_queues for both frontend and backend.
> They already have to look in a different place, /sys/module/xen_netback
> vs /sys/module/xen_netfront, so having a different param. name as well
> isn't really a problem...
> 

Cool. I think having both frontend and backend use max_queues will be
sufficient and more concise, since the path in sysfs is quite
straightforward. Taking netback as an example, the existing parameters
don't have the xenvif_ prefix.

Wei.

> -Andrew
> 
> >
> >Wei.
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:16:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQ6w-0003Vg-B5; Fri, 28 Feb 2014 16:16:30 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Campbell@citrix.com>) id 1WJQ6u-0003VZ-GJ
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:16:28 +0000
Received: from [193.109.254.147:29812] by server-6.bemta-14.messagelabs.com id
	53/FF-03396-B56B0135; Fri, 28 Feb 2014 16:16:27 +0000
X-Env-Sender: Ian.Campbell@citrix.com
X-Msg-Ref: server-3.tower-27.messagelabs.com!1393604186!7583358!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 3899 invoked from network); 28 Feb 2014 16:16:27 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-3.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:16:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,562,1389744000"; d="scan'208";a="105048859"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 16:16:25 +0000
Received: from [127.0.0.1] (10.80.16.47) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 11:16:24 -0500
Message-ID: <1393604180.27819.25.camel@hastur.hellion.org.uk>
From: Ian Campbell <ian.campbell@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Date: Fri, 28 Feb 2014 16:16:20 +0000
In-Reply-To: <5310AEF90200007800120355@nat28.tlf.novell.com>
References: <1393252131-17305-1-git-send-email-andrew.bennieston@citrix.com>
	<5310AEF90200007800120355@nat28.tlf.novell.com>
X-Mailer: Evolution 3.8.5-2+b1 
MIME-Version: 1.0
X-DLP: MIA2
Cc: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>,
	paul.durrant@citrix.com, wei.liu2@citrix.com,
	david.vrabel@citrix.com, xen-devel@lists.xenproject.org
Subject: Re: [Xen-devel] [PATCH V2] netif.h: Document xen-net{back,
 front} multi-queue feature
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, 2014-02-28 at 14:44 +0000, Jan Beulich wrote:
> >>> On 24.02.14 at 15:28, "Andrew J. Bennieston" <andrew.bennieston@citrix.com> wrote:
> > From: "Andrew J. Bennieston" <andrew.bennieston@citrix.com>
> > 
> > Document the multi-queue feature in terms of XenStore keys to be written
> > by the backend and by the frontend.
> > 
> > Signed-off-by: Andrew J. Bennieston <andrew.bennieston@citrix.com>
> 
> Anyone of you networking people care to review/ack this?

It's in my queue, I'll probably get to it after my current bout of
travel. The first version was looking pretty good so I expect it'll be
ok.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:25:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQFd-0003n5-GS; Fri, 28 Feb 2014 16:25:29 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <linus971@gmail.com>) id 1WJQFb-0003n0-ML
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:25:27 +0000
Received: from [85.158.139.211:60852] by server-12.bemta-5.messagelabs.com id
	A0/6B-15415-778B0135; Fri, 28 Feb 2014 16:25:27 +0000
X-Env-Sender: linus971@gmail.com
X-Msg-Ref: server-5.tower-206.messagelabs.com!1393604725!2406397!1
X-Originating-IP: [209.85.220.178]
X-SpamReason: No, hits=1.2 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	RCVD_BY_IP
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 31463 invoked from network); 28 Feb 2014 16:25:26 -0000
Received: from mail-vc0-f178.google.com (HELO mail-vc0-f178.google.com)
	(209.85.220.178)
	by server-5.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:25:26 -0000
Received: by mail-vc0-f178.google.com with SMTP id ik5so959335vcb.9
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 08:25:24 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=HitHppDQhrnGnBlf5msopj8zvSXq8GomBG7xn/1dTkg=;
	b=CBLXT2jeKSmOL5ShqvTRiQgd/2FTf9bYBgO4banm96tZoGxWmIBHRb6LsBGGvRaXc/
	E5wVkswMcBeBgFgV3NetYP00h3odnYdsYSjcK75vsZeRR8g6yMRBsGdPX3+a5tb2oqQD
	WrU4+KEIG4FPYDa9WPhejurqvKG9x2kWTP5AuWJO4+C4zwvbB5mR3pMxTZxS7MhiQbBw
	NI0fTbujlCfTFo+m0LdW6iCnhB/HH6boeZ7XykoPEaCLl5BvBaXQTEwgLJPT2pxEXZCO
	SDOyX1dvmX3LPoP6z7PdCINw7zYHralVO8cRa2B3gAFpsL2iFZdU1T7AeNb7bPrZ4n5W
	Yy+Q==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=linux-foundation.org; s=google;
	h=mime-version:sender:in-reply-to:references:date:message-id:subject
	:from:to:cc:content-type;
	bh=HitHppDQhrnGnBlf5msopj8zvSXq8GomBG7xn/1dTkg=;
	b=PYeQYx4bu7wsMy0upnWPffpU6nC82Ei96YsVJV6qrNCvxbCDahxwJ+f+5JNak1Imdo
	tIHcDHCDK11n/rkMI9p6ByMG1mD7YnYhlvvnFj4uayvoqNluEPYD9M29qVuWbJVtu70E
	zs3sdKslJLfCqv47lR4O5BGkACmxyWabV0RDA=
MIME-Version: 1.0
X-Received: by 10.58.37.232 with SMTP id b8mr3241601vek.27.1393604724616; Fri,
	28 Feb 2014 08:25:24 -0800 (PST)
Received: by 10.220.13.2 with HTTP; Fri, 28 Feb 2014 08:25:24 -0800 (PST)
Received: by 10.220.13.2 with HTTP; Fri, 28 Feb 2014 08:25:24 -0800 (PST)
In-Reply-To: <20140228092945.GG27965@twins.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
	<20140228092945.GG27965@twins.programming.kicks-ass.net>
Date: Fri, 28 Feb 2014 08:25:24 -0800
X-Google-Sender-Auth: 7XLED_ulM9fTcbKtw0dhZcsnvq0
Message-ID: <CA+55aFyz-7NvwUN5q8xqSvHsufC4Z1NSCvnUYrKRAhBeVjUuDA@mail.gmail.com>
From: Linus Torvalds <torvalds@linux-foundation.org>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>,
	Andi Kleen <andi@firstfloor.org>, Peter Anvin <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	the arch/x86 maintainers <x86@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Daniel J Blueman <daniel@numascale.com>, xen-devel@lists.xenproject.org,
	Paul McKenney <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization <virtualization@lists.linux-foundation.org>,
	Chegu Vinod <chegu_vinod@hp.com>, Waiman Long <waiman.long@hp.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8847244582086524882=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8847244582086524882==
Content-Type: multipart/alternative; boundary=089e01176de591788204f379e2c3

--089e01176de591788204f379e2c3
Content-Type: text/plain; charset=UTF-8

On Feb 28, 2014 1:30 AM, "Peter Zijlstra" <peterz@infradead.org> wrote:
>
> At low contention the cmpxchg won't have to be retried (much) so using
> it won't be a problem and you get to have arbitrary atomic ops.

Peter, the difference between an atomic op and *no* atomic op is huge.

And Waiman posted numbers for the optimization. Why do you argue with
handwaving and against numbers?

       Linus

--089e01176de591788204f379e2c3
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

<p dir=3D"ltr"><br>
On Feb 28, 2014 1:30 AM, &quot;Peter Zijlstra&quot; &lt;<a href=3D"mailto:p=
eterz@infradead.org">peterz@infradead.org</a>&gt; wrote:<br>
&gt;<br>
&gt; At low contention the cmpxchg won&#39;t have to be retried (much) so u=
sing<br>
&gt; it won&#39;t be a problem and you get to have arbitrary atomic ops.</p=
>
<p dir=3D"ltr">Peter, the difference between an atomic op and *no* atomic o=
p is huge.</p>
<p dir=3D"ltr">And Waiman posted numbers for the optimization. Why do you a=
rgue with handwaving and against numbers?</p>
<p dir=3D"ltr">=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 Linus<br>
</p>

--089e01176de591788204f379e2c3--


--===============8847244582086524882==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8847244582086524882==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:33:56 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQNg-00041O-Gj; Fri, 28 Feb 2014 16:33:48 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQNf-00041J-Mr
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 16:33:47 +0000
Received: from [193.109.254.147:17366] by server-8.bemta-14.messagelabs.com id
	25/7E-18529-B6AB0135; Fri, 28 Feb 2014 16:33:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-27.messagelabs.com!1393605226!7528475!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32747 invoked from network); 28 Feb 2014 16:33:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:33:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:33:45 +0000
Message-Id: <5310C887020000780012044A@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:33:59 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
	<1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
In-Reply-To: <1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
Mime-Version: 1.0
Content-Disposition: inline
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 1/5] xen/compiler: Replace opencoded
 __attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

>>> On 25.02.14 at 13:23, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> Make a formal define for noreturn in compiler.h, and fix up opencoded uses of
> __attribute__((noreturn)).  This includes removing redundant uses with
> function definitions which have a public declaration.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Keir Fraser <keir@xen.org>
> CC: Jan Beulich <JBeulich@suse.com>
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> CC: Stefano Stabellini <stefano.stabellini@citrix.com>
> Acked-by: Tim Deegan <tim@xen.org>

I had already committed this, but it failed my pre-push build test:

> --- a/xen/include/xen/compiler.h
> +++ b/xen/include/xen/compiler.h
> @@ -14,6 +14,8 @@
>  #define always_inline __inline__ __attribute__ ((always_inline))
>  #define noinline      __attribute__((noinline))
>  
> +#define noreturn      __attribute__((noreturn))

This collides with uses of __attribute__((noreturn)) elsewhere in
the tree. Did this really build for you without issue?
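
[To illustrate the kind of collision described above -- a hypothetical
reduction, not code from the actual tree: once `noreturn` is a macro, any
open-coded `__attribute__((noreturn))` left behind has its inner `noreturn`
token expanded again, nesting one attribute inside another. The usual
two-level stringize trick makes the expansion visible without having to
trigger the compile error:]

```c
#include <assert.h>
#include <string.h>

/* Hypothetical reduction of the compiler.h change under discussion. */
#define noreturn __attribute__((noreturn))

/* Two-level stringize: the argument of STR() is macro-expanded
 * before STR2() turns it into a string literal. */
#define STR2(x) #x
#define STR(x)  STR2(x)

/* What an open-coded attribute now expands to: the inner `noreturn`
 * token is replaced, yielding a nested attribute, which the compiler
 * would reject if it appeared in a real declaration. */
static const char *expanded = STR(__attribute__((noreturn)));

const char *noreturn_expansion(void)
{
    return expanded;
}
```

[Feeding such an expansion to the compiler as an actual declaration, e.g. a
leftover `void panic(const char *m) __attribute__((noreturn));`, is what
produces the build failure -- hence every open-coded use has to be converted
in the same patch.]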

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:38:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQSQ-0004C3-A8; Fri, 28 Feb 2014 16:38:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJQSO-0004BW-BA
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:38:40 +0000
Received: from [85.158.139.211:24334] by server-10.bemta-5.messagelabs.com id
	70/FA-08578-F8BB0135; Fri, 28 Feb 2014 16:38:39 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393605515!2421354!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14112 invoked from network); 28 Feb 2014 16:38:36 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:38:36 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id 3C1F6283;
	Fri, 28 Feb 2014 16:38:32 +0000 (UTC)
Received: from [192.168.142.176] (unknown [16.213.48.182])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 018C54E;
	Fri, 28 Feb 2014 16:38:25 +0000 (UTC)
Message-ID: <5310BB81.3090508@hp.com>
Date: Fri, 28 Feb 2014 11:38:25 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
	<20140228092945.GG27965@twins.programming.kicks-ass.net>
In-Reply-To: <20140228092945.GG27965@twins.programming.kicks-ass.net>
Content-Type: multipart/mixed; boundary="------------000909070404030104070106"
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:38:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQSQ-0004C3-A8; Fri, 28 Feb 2014 16:38:42 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJQSO-0004BW-BA
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:38:40 +0000
Received: from [85.158.139.211:24334] by server-10.bemta-5.messagelabs.com id
	70/FA-08578-F8BB0135; Fri, 28 Feb 2014 16:38:39 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393605515!2421354!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14112 invoked from network); 28 Feb 2014 16:38:36 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:38:36 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id 3C1F6283;
	Fri, 28 Feb 2014 16:38:32 +0000 (UTC)
Received: from [192.168.142.176] (unknown [16.213.48.182])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 018C54E;
	Fri, 28 Feb 2014 16:38:25 +0000 (UTC)
Message-ID: <5310BB81.3090508@hp.com>
Date: Fri, 28 Feb 2014 11:38:25 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Peter Zijlstra <peterz@infradead.org>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
	<20140228092945.GG27965@twins.programming.kicks-ass.net>
In-Reply-To: <20140228092945.GG27965@twins.programming.kicks-ass.net>
Content-Type: multipart/mixed; boundary="------------000909070404030104070106"
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--------------000909070404030104070106
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
Content-Transfer-Encoding: 7bit

On 02/28/2014 04:29 AM, Peter Zijlstra wrote:
> On Thu, Feb 27, 2014 at 03:42:19PM -0500, Waiman Long wrote:
>>>> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
>>>> +
>>>> +	if (old == 0) {
>>>> +		/*
>>>> +		 * Got the lock, can clear the waiting bit now
>>>> +		 */
>>>> +		smp_u8_store_release(&qlock->wait, 0);
>>> So we just did an atomic op, and now you're trying to optimize this
>>> write. Why do you need a whole byte for that?
>>>
>>> Surely a cmpxchg loop with the right atomic op can't be _that_ much
>>> slower? Its far more readable and likely avoids that steal fail below as
>>> well.
>> At low contention levels, atomic operations that require a lock prefix are
>> the major contributor to the total execution time. I have seen estimates online
>> that a lock-prefixed instruction can easily take 50X longer to execute
>> than a regular instruction that can be pipelined. That is why I try to do it
>> with as few lock-prefixed instructions as possible. If I have to do an atomic
>> cmpxchg, it probably won't be faster than the regular qspinlock slowpath.
> At low contention the cmpxchg won't have to be retried (much) so using
> it won't be a problem and you get to have arbitrary atomic ops.
>
>> Given that speed at low contention levels, which is the common case, is
>> important for getting this patch accepted, I have to do what I can to make it
>> run as fast as possible for this 2-contending-task case.
> What I'm saying is that you can do the whole thing with a single
> cmpxchg. No extra ops needed. And at that point you don't need a whole
> byte, you can use a single bit.
>
> that removes the whole NR_CPUS dependent logic.

After modifying it to do a deterministic cmpxchg, the test run time of 2 
contending tasks jumps from 600ms (best case) to about 1700ms, which 
is worse than the original qspinlock's 1300-1500ms. It is the 
opportunistic nature of the xchg() code, which can potentially combine 
multiple steps of the deterministic atomic sequence, that saves 
time. Without that, I would rather go back to the basic 
qspinlock queuing sequence for 2 contending tasks.

Please take a look at the performance data in my patch 3 to see whether the 
slowdown at 2 and 3 contending tasks is acceptable or not.

The reason why I need a whole byte for the lock bit is the simple 
unlock code: the lock holder just assigns 0 to the lock byte. 
Utilizing other bits in the low byte for other purposes would complicate 
the unlock path and slow down the no-contention case.

>>>> +		/*
>>>> +		 * Someone has stolen the lock, so wait again
>>>> +		 */
>>>> +		goto try_again;
>>> That's just a fail.. steals should not ever be allowed. It's a fair lock
>>> after all.
>> The code is unfair, but this unfairness helps it run faster than the ticket
>> spinlock in this particular case. The regular qspinlock slowpath is
>> fair; a little bit of unfairness in this particular case helps its speed.
> *groan*, no, unfairness not cool. ticket lock is absolutely fair; we
> should preserve this.

We can preserve that by removing patch 3.

> BTW; can you share your benchmark thingy?

I have attached the test program that I used to generate the timing data 
for patch 3.

-Longman



--------------000909070404030104070106
Content-Type: application/x-gzip;
 name="locktest.tar.gz"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="locktest.tar.gz"

H4sIAM2aEFMAA+07aXPbOLL5Kv0KRDPjkI5kS07s2ZLi7Dq2knGNLLkkeVN5mRSLIiGLMUXq
8fCxif/7624AvCU7tbM79arMmolIoA+g0Wj0Abu+dRXxMNo9M6/43HH5sz//acNzcPCafn8t
/OLz+teDZ529vV8PXnUOXh20n7U7r/de7z9j7f/AWEpPHEZmwNgz1/cul6a3Fu6h/v+njz/7
2lqyQ+ZKPdjx67//sz+enI6G0PqzFi6467LYM5ectQK9fjI+hW7oCmKvpZDqddN1uwAtevV6
bQnaxFrHbNd1ZrtL345dHu7+rCnS+u4sdlybnR3+rJ1/PNGZBKnXExoZcjtWvWZZrOWuokXA
TRvelqzls5//Af9BZ91yuel1f5wr4dVrAVCbp+wyjP163fFAQWB29Rq8Ac1UUld+/a9evX//
UbPZTaZl/ek8Nu//zv5B+xXs/1ft153Oq72Dfdz/+wcHT/v/v/HsbtfZNpsunJCtAv8yMJcM
XucB5yz059GNGfAeu/NjZpkeC7jthFHgzOKIMydipmfv+gFuXmd+h3SgLfZsHrBowVnEg2XI
/Dl9fBhesA/c44HpsvN45joWGzgW90LOTGCNLWBqbDYjOojxHscwkWNg730gbEaO7/UYd6A/
YNc8COGb7SkekmCT+QES0cwIRx4wf4V4Ogz3jrlmlKLuAFjV9NNZ2szxiPbCX8GMFkAS5njj
gE2ccRaHfB67TSQBwOzj6fS30cWUHQ0/sY9H4/HRcPqpB8DRwodefs0FKWe5ch2gDPMKTC+6
g+EjhbP++Pg3QDl6dzo4nX6CSbD3p9NhfzJh70djdsTOj8bT0+OLwdGYnV+Mz0eT/g5jE47D
4khgg4jntEogRptHpuOGauJHMYwtCLvso+mAcrMBKDl7c0MfO6jx/1isdix/+VbCw6yGp8MP
XTaOPc8B2Aglh2aDWES+zxCLwarg+xIlfnx+EcLbHbMCM1yowdYYPFc88LibWwXRJM8Dxj1z
BjYcBA30QIKmG8P6kRQt34u4hwsLK7Ewrx3gD3qAZixMKeLUXde/wbGGd+E8ZNdm4AiiahUt
OFNgpbuI0dHZLsDtimEkRnHXMixQwAhB9taAOCnIqzUgbgryeg3IKgXZX0cFX+5WJMaDNTDB
jRHgbkGYX9fSMe1a7SeGv0wR/NsaYB45Sw7A3DVXISgvftLecEDegLhb/8nxLDe2OXvjOl58
K8/fncXb2u4uk302qQkcp/l1DkvIUjMqkH/vj4fG6fD9qITjeE6EGCUUw8AutFbwym/hdWla
gV/BFbwhbiGRUhcpT2UHWArvsqrHjPylY2FPHc4YEBSMKmJSk1ivtrvNjlMlptYayDED62Rh
T8FwmWtB3SzoAFdUQBXAVlmwcxMMWDWc0jFJzroiBSlDmXYGSqpREUrpIkGNPzLN68I2oyaE
rIsjaLLwb9gWCyM0U3J9SK1sCEs8Xpv8NvpoHA1PjMl0NO5r13qNnj/qNckqDJ1/cSNi1+yn
n5gRAjkN1ia2IiaXlW3jS1MgwZPpNcxIHWvb+Npk1gK8gu1ZPNcR+lutlrCrBTyKA4+FK1j4
aK4BTJM1frH/8BpNdq33JNh9BqV6hDjTdUN8eIQKBuwgWN9kuE0mmdCy6rXS6MMQTvJ01DEM
ekuOOjs7Qu8laPdqmRJdyC3OYHT8uzE5Px3iS62dbx9/pNZOSiJjb0ATJhHsTBPODfDdSfFA
DeAwjkAesIHh0DfR0IR8ZQZo/C3TgpgEqIfSGveXM27jXmeb8PHMCDGKSfALMzhC1QIFOxqM
hv3cHKCnf/auf3LSP0ln0b+F/ejBUTuPPQu3pVBXTs3s2ndstm0YV0swdr5leL4Nay1WBn+a
7HK+gve5a16GTdomCKL3snIWFKUSkJX+hlEIrG0HFHSvV7/v1VV3aNDMv9WV0nzDtV45HjYD
IxJMr57oFJFD2UDbPZhFw0jkYpiuc+lxG6ymES5XOTawlyv4BDeKy2Y+j2EjdkpuUmw7FFTz
vWos24J9r9qScPBw7JCMf7KFhGALFkUaZp31ij3O2h53bc9qPY40rXqpRxnKco8ytHpJRAXb
ICeRaTms1wzjaDodqwk2GQRjB80EFC1l5gulphdFXeDibODi5Lk4OS7Oj3BxN3Bx81zcHBf3
R7isNnBZ5bmsclxWPzQXuejVk5GdyWwUsJxO8vkYTkqJKjmpTsUpARac0s/HzUko5Zo5ic50
ThJYzUl9Sk6pXksrSQ7npkNyzajE6YhHIxijzYdjHWwXuCQn8XJ5Jw0F2oX8EVi/T4bm+WBO
wXIlvo+WNXLbaOWEKXfFzPH1JnAi+epakqczZxqBsMNDVjh4dDKo6LVqPcJgb1lbvLVaOk7L
WsVGAG74raaT7eYuOHFIklhtxqfO2tHxMUSVxmh43Ndw0K23FhzDLw8FPOutgdkjmL1thEKY
+4Q7UsUZhvGyp7jDOztUrEuTwF4gtmYkVe17ONvE9yJG98lStSuXiQ5hjF4MdRBq2CTWAN1r
GGDiYoumFTSt8k0WZkfzTS4lTKXvK5q+1mqgxGLqrAekYcb4q4SOI6DjStuiw6z1Fv+lOZEm
qVahRG6T4RFP3YQZe9W4xO8rDhqYfZVMv1ZrymYRiUP0PyegAHPGKhKQMmuyq1RqOIt2XnRN
9vXlSyE/VHCYH9CQSgwb9yMqvPD4cNvWakii1SH9pc0gBS6mlpHaQyKX2EroJfyM2lNMgznp
zDAwR/2DrNsJa0Jez1mt+ZVc8ysprasH17xgWNfGR4+wq2lwhJoSeyE5cpT6wVgniJCID8OD
tRVRsGFzS9PkO3DSt5S7hTC4liblDiPHldkd1CaQBTiLIUwQBXuzcFzOtKxtUERw6rmZ18U4
QEhfnfnc4WFPml15lgrLmwlakELeVKD8cJlVh9wg2EyTy5KuDgldERMSdIvRgPSsoaoWtFif
qjOVetSBmi5jM3tWlj3EwhKGn7/A0GHZtkq+4g6+MVjjrZKDl3aVvLK0q+RKZbBK/o/ok51F
RyLTWfZndqQmbhUkJZEQa3gxGFTEEwmocRn48Yq+5SuKhMmHCIXQRL/NCkLJjlH5MQNbkgjk
TGQvKeukwkMZyyVZEZmUSgjgV976glJdmy7qco4LjEuyN0Tm0oDYxjBtW2soOFA6kUAjBF3q
/vMcGT09TVv94eisf4acBE9gQfkuxYBEpOXQm2wrlZ7iILCFgVZjXMVRHjN7jqspYn4BrEC9
hjlpDJgx0MeAGLb+ki/94I4SeZTcRSCwBqEIDQ9ZRZDtzzURMupN9uH9uYE5w/4Adg7yFvt4
PaLor8IkIQq+37+z5xKQZksb/0qj5KRMkrNkOboFVmxugiWzn4N1yAojWQa08pmDnoaKdsog
i3VB9qp/ohV8ARmEkx4VD456boCYPU1Hp3LtuAuTQdUxiTr0vda/eOAzOcAlN71QFEHQRks8
2POu7b2IMJUuaOzUi+ub2jxyOGQWNlELqsOChiXq//Bo8YQkXlKIm9QNuAssIZzcloNu2YdD
0opDQlGcjU4uBn1jcHrcH076WuPD+QBZyuaji+lvo7HWyFRQMr0n/cnx+PR8ejoaag2ROwPq
WI0QXBH0L63/pRWDTD3/T64Bb67/7rX39/ZV/fc1fD5rd/Zf7T/d//ivPE/136f67+PrvyQo
rPjCNOzAATEmYkMWQVIVlmEZGZNcnt2AURqT0cX4uJ8pmGHFET3AQoUNm/HcKLei51duDXgI
620VeyLb8UtNrjMrtpULebEHmmAXi3uzS+7l2+TFpHzj0owW+Za55UVugSsEOIQnXcfknBPV
aryRV6iDfZq8nxjj0Whaa1RVahsFyOPj0cVwmkFjDVXNLoKeVoA61aCDClC3GvS8AnRVDQpb
Bk7KPKjy/0sjACdo+um8XxiDjDPK0EcnVdAi8ChC96enZwVQijUapSpYWF0Ge3QJrIj/6CKU
dKcoEo9D85JTWNf4o36BH132S8g+t2bfW+++wO8C/7nGfyzQ36Q4/BZbHPbGUZXl8O2XP7w/
6rXPLZd9w0D4e3BzzxBqwL61v3fu8dVjb+Q1PEHkcws0/Zbl2r4oOivYG2mVmRgG7E1wI8q/
9B2yNxC60+stvbZgc4iB3ICZ56w1g+iZ34LRtJzIvWMzx7MlO7oWAofPHOAgYsKrJognwrjW
uw14WA/C4BlxmBcvZ3AiZFAtQDWXlISgSyUmHQRW4IDQwbyGnGI6psHymLGLeYaOnkFfADr5
sOKCzIK7K/CeQ1yaDJQDUE6+qp/pdVU1k/IVmjz0cFXwaAhusvwGBCt1imntLmYbZFG1yTpd
xmWNNIsEEUdu1eDAZCGYQPCu8aINRzYmLJR3ybNoq5xoxOrOeHTDeVlA4VoJBUAFWfNgl5Jt
gdAIptE5IkI033PvsjihyqLgGZOu21oe1wAPR9TMxxPQNS8zXbfQlaEAGhfwJapQiRb+/+dc
e0rZJ1efnmNbo5e7IJLuRhjDvvCRH3EBRNlJhjFxp5ckJrsk3vS6ReH6BmX8Mygbro7gSgvw
BHrDDRK10Qh8T4BPM9pWhAcbolAS+DPz1lnGy7yWVl9UITaJ/X3ctRWJk7O5vU03WUj5wEJl
RTChPGOqSwUUgAblYjkZVyleAQ1tlZh1T+SY36XGC20ealfV4CR0R+2UJmt1yAo6RcHJjVFK
bUV0YQJjcMHNiNSbuDMkx7AA8+JykYoBcjVWs8HWiPmd4FUO3HA10Z9PENNJqvRz5QfoZOdu
lQHKPduOxKWBXN6ATKpBx50mctDWUiQM5jLzCt4aD4KmOBKbDLsxjYuhfUcv5SFEmp88LUkP
/S1RrSMTSLSpZxbPP/+t/UVWLpjLvSab25ScA7t3WLoNhHeBiILMG2lzG6AgiPA0wWNkfByP
hoNPus7esLYs2xVm0egHgR902bGJ+RVEhnP9OSWVkQglj5K5YdYoKQMCuyajscDwdPb8UPxu
YpI6nOg8CKvBsWsTQ8sFHQJmZdmmqVGhMZpotbHYmqQ4YcFRLBoqhk5dvWp5J3nJtmoRwpc5
w0z1AO/royJz2ie0CWTGUNYPRGHgzaESOspMtqmmNA0qq0lqL9Dct2SaU9Vj4tDlfJVbg8xq
ZzxKXPTxyYOLLq3kL7YQf275GyxDD0fWEKsDcqzWBhJ9ogwyx4mVGz0jgoeHkFGO8hDoIM/q
SuVoaEN/hq4vO2QCcNNEQeTHrtg0mLMHX4UyrRlYNCvCdtbXLUNB8yAKNZTyFeqZvYIxopQm
NNo+xMuchcLGiQtpgQ3R8rUjTly8p5ho24SDsq3EjbJMlCa1LGNTsmEVY83M0Y6TLEMeJ5Dy
KKwGE2ESgmVuDFXAqRApvUKyDk4ER+m1jDVwyfCk51ANdp6AKZdBz2zVYyopJCkCcqLkbKUE
qb7pJNXgN4krAV+qHCwUxEnVA+1C2qoUjFpxGyjNEQUNbSuBlB1K/QpGi+qdNcak8dKFoXJ0
aSZUjVFZCdoCWiPPTOT4M3uBLk6I3Zke84Ii1jBDHuElQR6h0Gpw0Bv/0x+PtC1oSUv+D4in
Jl0P2mPSZ3mpfJFt3AcIhLQn/akG7U2W0M/JCxrN+RwT5XeGt9JKYhPXVIVVQQKSTsaUFoVU
klKehZRWVlxkZGmwx4NxYbAZ+7sPliOpJJNbjjnINEyccbZ0LvEKqbjlc5/WiHFkD4i1nh4C
X33HK0uCFCin6P5yFUtNRzNDVxJLhkWmLvktt2JqQM0tbYSmoLBhcNT/8pDltwDMSyJqAkAh
7u7pu/I1uYAhOYV4dLV31nKCfuCjFfZai0aob69px6I50g3/N4i00N5N7VtxI7C3eCc4qanN
tQZFUoeMfCqlzfmd0MkNNRlpQuF7Dhek4GyLvZC5QZXlOE0CEULMGOP7egLUZGco0qmwNL/Y
4k8hmunpcZIsMnTv7M3paERpoIyVXyjdcDFj1BxUnYETihIbSMW5duzYdJXvn1eUsC5d7Eeo
b2F2MOJugVpmGjhW2odOs6BSQgp45oqoDfYCvpjBpaVuzcP79ecv2To29CjHDl12LK0jDPl3
opV7di+5U4ynOf0DjgFqiPTcNLxpdMkjfxVpgh9SAXdl9s7qLpyu2x10ve6qG3TD7vVtt6GT
49uS6hTeOJG1YJolTa0Ji/5i9qKL08yon7AG0ARfdJFZAL6rANyrArQEoIqt8RKB72gwZBir
XoWwEAi52EaELSVQR4DmMgTKhxIcwDSCJGW9WphyDWWL911enL3QsXCdNixf0M2ZHMVtEIBM
OtCxpW4Xpli/F8lcrSdTNQtXzoL87sBarpKxNzC71dCRaFtQTO/rZIL73LiKJIKbjQREJjYh
IU6kR4h+kC6rJPfQunoCI01qPLhSiUBbL9ThmcuKKAIvXxKm8FgkPhHIQCcGQIiheo7Ze2x5
VsnZoI7fwuRWUmlVTuhhcQQCI0lTPYwRCozEbD+McS0wVM5vzVa+FVDSG9pEVeYCH7E/73NX
WI4X3LpC7UxDhMTXtZ2AWxHeaKGaIw+5pwJU2mZo9rS09IC+jjCE0qnCmycQU0xOTsea7NkB
f3WJfz2yOcBXgRuSFXFbOhbb5yEGmeB2hREGcQRw7nKUmMgsZx323F/vqSsYuXCvLIvA9yOc
syoSOtlpg1XnsWNrZLGro1IakZxKlC3YepwnHh4QZ7M74vV847DORXYeJ4N/V7QE/oEKQNRB
qbK9ybSxlEuVlZwFRh+hXpN53cTe/B1NEX41WFcYNvooBoGVlxKFkdIzHkka6hHDbqeRD/4U
HJZN1AUeMdFTzDDiKlKlMXZxQgDtQ9h2h0ILoNEiwbUiv4Xyo7+QpStr6fp4jsW11l6bTKs6
UZUnj50bRJ1c6JIHJ7oTWY0XGcdDpuUykNu6RTemkntZ1AwBRmqn6NTBxOqhcL5zgxLYVcPK
5QcohBXuxcuXifE8zHIhsirJI8xticR9Lsn4V1/qeHqenqfn6Xl6np6n5+l5ep6eNc//AXYC
/tgAUAAA
--------------000909070404030104070106
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--------------000909070404030104070106--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:42:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQVS-0004OM-4b; Fri, 28 Feb 2014 16:41:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQVQ-0004OC-7K
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:41:48 +0000
Received: from [193.109.254.147:56874] by server-8.bemta-14.messagelabs.com id
	0E/37-18529-B4CB0135; Fri, 28 Feb 2014 16:41:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393605706!7528524!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20601 invoked from network); 28 Feb 2014 16:41:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:41:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:41:46 +0000
Message-Id: <5310CA680200007800120464@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:42:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDFED9148.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86/time: cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDFED9148.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

Eliminate effectively unused variables mistakenly left in place by
9539:08aede767c63 ("Rename update_dom_time() to
update_vcpu_system_time()").

Drop the pointless casts.

Use SECONDS() instead of open coding it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -921,15 +921,15 @@ int cpu_frequency_change(u64 freq)
 void do_settime(unsigned long secs, unsigned long nsecs, u64 system_time_base)
 {
     u64 x;
-    u32 y, _wc_sec, _wc_nsec;
+    u32 y;
     struct domain *d;
 
-    x = (secs * 1000000000ULL) + (u64)nsecs - system_time_base;
+    x = SECONDS(secs) + (u64)nsecs - system_time_base;
     y = do_div(x, 1000000000);
 
     spin_lock(&wc_lock);
-    wc_sec  = _wc_sec  = (u32)x;
-    wc_nsec = _wc_nsec = (u32)y;
+    wc_sec  = x;
+    wc_nsec = y;
     spin_unlock(&wc_lock);
 
     rcu_read_lock(&domlist_read_lock);
@@ -1548,8 +1548,8 @@ unsigned long get_localtime(struct domai
 /* Return microsecs after 00:00:00 localtime, 1 January, 1970. */
 uint64_t get_localtime_us(struct domain *d)
 {
-    return ((wc_sec + d->time_offset_seconds) * 1000000000ULL
-        + wc_nsec + NOW()) / 1000UL;
+    return (SECONDS(wc_sec + d->time_offset_seconds) + wc_nsec + NOW())
+           / 1000UL;
 }
 
 unsigned long get_sec(void)
@@ -1651,7 +1651,7 @@ struct tm wallclock_time(void)
     if ( !wc_sec )
         return (struct tm) { 0 };
 
-    seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;
+    seconds = NOW() + SECONDS(wc_sec) + wc_nsec;
     do_div(seconds, 1000000000);
     return gmtime(seconds);
 }




--=__PartDFED9148.0__=
Content-Type: text/plain; name="x86-time-simplify.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="x86-time-simplify.patch"

x86/time: cleanup

Eliminate effectively unused variables mistakenly left in place by
9539:08aede767c63 ("Rename update_dom_time() to
update_vcpu_system_time()").

Drop the pointless casts.

Use SECONDS() instead of open coding it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -921,15 +921,15 @@ int cpu_frequency_change(u64 freq)
 void do_settime(unsigned long secs, unsigned long nsecs, u64 system_time_base)
 {
     u64 x;
-    u32 y, _wc_sec, _wc_nsec;
+    u32 y;
     struct domain *d;
 
-    x = (secs * 1000000000ULL) + (u64)nsecs - system_time_base;
+    x = SECONDS(secs) + (u64)nsecs - system_time_base;
     y = do_div(x, 1000000000);
 
     spin_lock(&wc_lock);
-    wc_sec  = _wc_sec  = (u32)x;
-    wc_nsec = _wc_nsec = (u32)y;
+    wc_sec  = x;
+    wc_nsec = y;
     spin_unlock(&wc_lock);
 
     rcu_read_lock(&domlist_read_lock);
@@ -1548,8 +1548,8 @@ unsigned long get_localtime(struct domai
 /* Return microsecs after 00:00:00 localtime, 1 January, 1970. */
 uint64_t get_localtime_us(struct domain *d)
 {
-    return ((wc_sec + d->time_offset_seconds) * 1000000000ULL
-        + wc_nsec + NOW()) / 1000UL;
+    return (SECONDS(wc_sec + d->time_offset_seconds) + wc_nsec + NOW())
+           / 1000UL;
 }
 
 unsigned long get_sec(void)
@@ -1651,7 +1651,7 @@ struct tm wallclock_time(void)
     if ( !wc_sec )
         return (struct tm) { 0 };
 
-    seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;
+    seconds = NOW() + SECONDS(wc_sec) + wc_nsec;
     do_div(seconds, 1000000000);
     return gmtime(seconds);
 }
--=__PartDFED9148.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartDFED9148.0__=--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:42:00 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQVS-0004OM-4b; Fri, 28 Feb 2014 16:41:50 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQVQ-0004OC-7K
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:41:48 +0000
Received: from [193.109.254.147:56874] by server-8.bemta-14.messagelabs.com id
	0E/37-18529-B4CB0135; Fri, 28 Feb 2014 16:41:47 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393605706!7528524!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20601 invoked from network); 28 Feb 2014 16:41:46 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-15.tower-27.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:41:46 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:41:46 +0000
Message-Id: <5310CA680200007800120464@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:42:00 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartDFED9148.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86/time: cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartDFED9148.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Eliminate effectively unused variables mistakenly left in place by
9539:08aede767c63 ("Rename update_dom_time() to
update_vcpu_system_time()").

Drop the pointless casts.

Use SECONDS() instead of open coding it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -921,15 +921,15 @@ int cpu_frequency_change(u64 freq)
 void do_settime(unsigned long secs, unsigned long nsecs, u64 system_time_b=
ase)
 {
     u64 x;
-    u32 y, _wc_sec, _wc_nsec;
+    u32 y;
     struct domain *d;
=20
-    x =3D (secs * 1000000000ULL) + (u64)nsecs - system_time_base;
+    x =3D SECONDS(secs) + (u64)nsecs - system_time_base;
     y =3D do_div(x, 1000000000);
=20
     spin_lock(&wc_lock);
-    wc_sec  =3D _wc_sec  =3D (u32)x;
-    wc_nsec =3D _wc_nsec =3D (u32)y;
+    wc_sec  =3D x;
+    wc_nsec =3D y;
     spin_unlock(&wc_lock);
=20
     rcu_read_lock(&domlist_read_lock);
@@ -1548,8 +1548,8 @@ unsigned long get_localtime(struct domai
 /* Return microsecs after 00:00:00 localtime, 1 January, 1970. */
 uint64_t get_localtime_us(struct domain *d)
 {
-    return ((wc_sec + d->time_offset_seconds) * 1000000000ULL
-        + wc_nsec + NOW()) / 1000UL;
+    return (SECONDS(wc_sec + d->time_offset_seconds) + wc_nsec + NOW())
+           / 1000UL;
 }
=20
 unsigned long get_sec(void)
@@ -1651,7 +1651,7 @@ struct tm wallclock_time(void)
     if ( !wc_sec )
         return (struct tm) { 0 };
=20
-    seconds =3D NOW() + (wc_sec * 1000000000ull) + wc_nsec;
+    seconds =3D NOW() + SECONDS(wc_sec) + wc_nsec;
     do_div(seconds, 1000000000);
     return gmtime(seconds);
 }




--=__PartDFED9148.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartDFED9148.0__=--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:45:45 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:45:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQZB-0004XA-Ru; Fri, 28 Feb 2014 16:45:41 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQZ9-0004X5-Ll
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:45:39 +0000
Received: from [85.158.137.68:7480] by server-3.bemta-3.messagelabs.com id
	14/79-14520-23DB0135; Fri, 28 Feb 2014 16:45:38 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393605937!1520728!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10280 invoked from network); 28 Feb 2014 16:45:37 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:45:37 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:45:36 +0000
Message-Id: <5310CB4F0200007800120468@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:45:51 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartB98BF72F.2__="
Cc: Jinsong Liu <jinsong.liu@intel.com>, Christoph Egger <chegger@amazon.de>
Subject: [Xen-devel] [PATCH] x86/MCE: mctelem_init() cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartB98BF72F.2__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

The function can be __init with its caller taking care of only calling
it on the BSP. And with that all its static variables can be dropped.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -775,13 +775,15 @@ void mcheck_init(struct cpuinfo_x86 *c,=20
=20
     intpose_init();
=20
-    mctelem_init(sizeof(struct mc_info));
+    if ( bsp )
+    {
+        mctelem_init(sizeof(struct mc_info));
+        register_cpu_notifier(&cpu_nfb);
+    }
=20
     /* Turn on MCE now */
     set_in_cr4(X86_CR4_MCE);
=20
-    if ( bsp )
-        register_cpu_notifier(&cpu_nfb);
     set_poll_bankmask(c);
=20
     return;
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -248,25 +248,14 @@ static void mctelem_processing_release(s
 	}
 }
=20
-void mctelem_init(int reqdatasz)
+void __init mctelem_init(unsigned int datasz)
 {
-	static int called =3D 0;
-	static int datasz =3D 0, realdatasz =3D 0;
 	char *datarr;
-	int i;
+	unsigned int i;
 =09
-	BUG_ON(MC_URGENT !=3D 0 || MC_NONURGENT !=3D 1 || MC_NCLASSES !=3D =
2);
+	BUILD_BUG_ON(MC_URGENT !=3D 0 || MC_NONURGENT !=3D 1 || MC_NCLASSES=
 !=3D 2);
=20
-	/* Called from mcheck_init for all processors; initialize for the
-	 * first call only (no race here since the boot cpu completes
-	 * init before others start up). */
-	if (++called =3D=3D 1) {
-		realdatasz =3D reqdatasz;
-		datasz =3D (reqdatasz & ~0xf) + 0x10;	/* 16 byte roundup =
*/
-	} else {
-		BUG_ON(reqdatasz !=3D realdatasz);
-		return;
-	}
+	datasz =3D (datasz & ~0xf) + 0x10;	/* 16 byte roundup */
=20
 	if ((mctctl.mctc_elems =3D xmalloc_array(struct mctelem_ent,
 	    MC_NENT)) =3D=3D NULL ||
--- a/xen/arch/x86/cpu/mcheck/mctelem.h
+++ b/xen/arch/x86/cpu/mcheck/mctelem.h
@@ -59,7 +59,7 @@ typedef enum mctelem_class {
 	MC_NONURGENT
 } mctelem_class_t;
=20
-extern void mctelem_init(int);
+extern void mctelem_init(unsigned int);
 extern mctelem_cookie_t mctelem_reserve(mctelem_class_t);
 extern void *mctelem_dataptr(mctelem_cookie_t);
 extern void mctelem_commit(mctelem_cookie_t);




--=__PartB98BF72F.2__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartB98BF72F.2__=--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:46:32 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:46:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQa0-0004bD-A7; Fri, 28 Feb 2014 16:46:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQZy-0004b5-JB
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:46:30 +0000
Received: from [85.158.143.35:46701] by server-2.bemta-4.messagelabs.com id
	0D/C7-04779-56DB0135; Fri, 28 Feb 2014 16:46:29 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393605988!9060869!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 15147 invoked from network); 28 Feb 2014 16:46:29 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-5.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:46:29 -0000
Received: by mail-we0-f173.google.com with SMTP id w61so770553wes.32
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 08:46:28 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=nrGv2Y5YjGGYiUQ5WobMH0IYbB3sTy9L/i6HUfrDFQ0=;
	b=vrRJVVLA7MNK7BxiuJRVB3gmfbundRZCeTPb0bUHS81rZCSjHjeJddYkiolTltGk9Y
	V895fQCDlskQ/TM32vQP4k1phn080BRVEMgkzz271ARJgA+QlAwXRG1QsN+vRTtxeHYV
	uYGpmJoq2x+OmOogP0EzL6JQ63quHvQZ0qQa9225er7Ngf+ZRXgcA+d68pcJWRh1/KTP
	ysS07VTlappYxJOo9qkfGLJ+hOlNOk8EPx326yFvahTehjz1hr7K09ozeS1217dnaQTX
	h1ajJ7/c9Q5AQG+FH6CekiTe/eKJhSBJRbmCCHZE+nng0S8nNE0QYDUv1G6LtwIhKlz+
	i9oA==
X-Received: by 10.195.13.103 with SMTP id ex7mr3678439wjd.3.1393605988714;
	Fri, 28 Feb 2014 08:46:28 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id r1sm7610992wia.5.2014.02.28.08.46.26
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 08:46:27 -0800 (PST)
Message-ID: <5310BD60.1070800@gmail.com>
Date: Fri, 28 Feb 2014 16:46:24 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CA680200007800120464@nat28.tlf.novell.com>
In-Reply-To: <5310CA680200007800120464@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/time: cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5954979961175805327=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5954979961175805327==
Content-Type: multipart/alternative;
 boundary="------------040109090707030509030305"

This is a multi-part message in MIME format.
--------------040109090707030509030305
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



Jan Beulich wrote:
>
> Eliminate effectively unused variables mistakenly left in place by
> 9539:08aede767c63 ("Rename update_dom_time() to
> update_vcpu_system_time()").
>
> Drop the pointless casts.
>
> Use SECONDS() instead of open coding it.
>
> Signed-off-by: Jan Beulich<jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
>
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -921,15 +921,15 @@ int cpu_frequency_change(u64 freq)
> void do_settime(unsigned long secs, unsigned long nsecs, u64 
> system_time_base)
> {
> u64 x;
> - u32 y, _wc_sec, _wc_nsec;
> + u32 y;
> struct domain *d;
>
> - x = (secs * 1000000000ULL) + (u64)nsecs - system_time_base;
> + x = SECONDS(secs) + (u64)nsecs - system_time_base;
> y = do_div(x, 1000000000);
>
> spin_lock(&wc_lock);
> - wc_sec = _wc_sec = (u32)x;
> - wc_nsec = _wc_nsec = (u32)y;
> + wc_sec = x;
> + wc_nsec = y;
> spin_unlock(&wc_lock);
>
> rcu_read_lock(&domlist_read_lock);
> @@ -1548,8 +1548,8 @@ unsigned long get_localtime(struct domai
> /* Return microsecs after 00:00:00 localtime, 1 January, 1970. */
> uint64_t get_localtime_us(struct domain *d)
> {
> - return ((wc_sec + d->time_offset_seconds) * 1000000000ULL
> - + wc_nsec + NOW()) / 1000UL;
> + return (SECONDS(wc_sec + d->time_offset_seconds) + wc_nsec + NOW())
> + / 1000UL;
> }
>
> unsigned long get_sec(void)
> @@ -1651,7 +1651,7 @@ struct tm wallclock_time(void)
> if ( !wc_sec )
> return (struct tm) { 0 };
>
> - seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;
> + seconds = NOW() + SECONDS(wc_sec) + wc_nsec;
> do_div(seconds, 1000000000);
> return gmtime(seconds);
> }
>
>

--------------040109090707030509030305--


--===============5954979961175805327==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5954979961175805327==--


Date: Fri, 28 Feb 2014 16:46:24 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CA680200007800120464@nat28.tlf.novell.com>
In-Reply-To: <5310CA680200007800120464@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/time: cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5954979961175805327=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5954979961175805327==
Content-Type: multipart/alternative;
 boundary="------------040109090707030509030305"

This is a multi-part message in MIME format.
--------------040109090707030509030305
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



Jan Beulich wrote:
>
> Eliminate effectively unused variables mistakenly left in place by
> 9539:08aede767c63 ("Rename update_dom_time() to
> update_vcpu_system_time()").
>
> Drop the pointless casts.
>
> Use SECONDS() instead of open coding it.
>
> Signed-off-by: Jan Beulich<jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
>
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -921,15 +921,15 @@ int cpu_frequency_change(u64 freq)
> void do_settime(unsigned long secs, unsigned long nsecs, u64 
> system_time_base)
> {
> u64 x;
> - u32 y, _wc_sec, _wc_nsec;
> + u32 y;
> struct domain *d;
>
> - x = (secs * 1000000000ULL) + (u64)nsecs - system_time_base;
> + x = SECONDS(secs) + (u64)nsecs - system_time_base;
> y = do_div(x, 1000000000);
>
> spin_lock(&wc_lock);
> - wc_sec = _wc_sec = (u32)x;
> - wc_nsec = _wc_nsec = (u32)y;
> + wc_sec = x;
> + wc_nsec = y;
> spin_unlock(&wc_lock);
>
> rcu_read_lock(&domlist_read_lock);
> @@ -1548,8 +1548,8 @@ unsigned long get_localtime(struct domai
> /* Return microsecs after 00:00:00 localtime, 1 January, 1970. */
> uint64_t get_localtime_us(struct domain *d)
> {
> - return ((wc_sec + d->time_offset_seconds) * 1000000000ULL
> - + wc_nsec + NOW()) / 1000UL;
> + return (SECONDS(wc_sec + d->time_offset_seconds) + wc_nsec + NOW())
> + / 1000UL;
> }
>
> unsigned long get_sec(void)
> @@ -1651,7 +1651,7 @@ struct tm wallclock_time(void)
> if ( !wc_sec )
> return (struct tm) { 0 };
>
> - seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;
> + seconds = NOW() + SECONDS(wc_sec) + wc_nsec;
> do_div(seconds, 1000000000);
> return gmtime(seconds);
> }
>
>
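The conversion the patch switches to can be sketched as a standalone model (SECONDS(), wc_sec, wc_nsec and NOW() are Xen internals; the macro and function below only approximate them for illustration):

```c
#include <stdint.h>

/* Stand-in for Xen's SECONDS() macro: seconds to nanoseconds. */
#define SECONDS(s) ((uint64_t)(s) * 1000000000ULL)

/* Model of get_localtime_us(): wall-clock base (seconds plus a
 * nanosecond remainder) plus current system time in nanoseconds,
 * returned as microseconds since the epoch. */
static uint64_t localtime_us(uint64_t wc_sec, uint64_t wc_nsec,
                             uint64_t now_ns)
{
    return (SECONDS(wc_sec) + wc_nsec + now_ns) / 1000UL;
}
```

The point of the cleanup is purely readability: SECONDS(wc_sec) names the unit conversion that the open-coded `* 1000000000ULL` left implicit, without changing the arithmetic.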

--------------040109090707030509030305
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit

<HTML><HEAD><META content="text/html; charset=UTF-8" http-equiv="Content-Type"></HEAD><BODY><BR>
<BR>
Jan Beulich wrote:<BR>
<blockquote type="cite"><BR>
Eliminate effectively unused variables mistakenly left in place by<BR>
9539:08aede767c63 ("Rename update_dom_time() to<BR>
update_vcpu_system_time()").<BR>
<BR>
Drop the pointless casts.<BR>
<BR>
Use SECONDS() instead of open coding it.<BR>
<BR>
Signed-off-by: Jan Beulich&lt;jbeulich@suse.com&gt;<BR>
</blockquote><BR>
<BR>
Acked-by: Keir Fraser &lt;keir@xen.org&gt;<BR>
<BR>
<blockquote type="cite"><BR>
<BR>
--- a/xen/arch/x86/time.c<BR>
+++ b/xen/arch/x86/time.c<BR>
@@ -921,15 +921,15 @@ int cpu_frequency_change(u64 freq)<BR>
&nbsp; void do_settime(unsigned long secs, unsigned long nsecs, u64 system_time_base)<BR>
&nbsp; {<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; u64 x;<BR>
-&nbsp;&nbsp;&nbsp; u32 y, _wc_sec, _wc_nsec;<BR>
+&nbsp;&nbsp;&nbsp; u32 y;<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; struct domain *d;<BR>
<BR>
-&nbsp;&nbsp;&nbsp; x = (secs * 1000000000ULL) + (u64)nsecs - system_time_base;<BR>
+&nbsp;&nbsp;&nbsp; x = SECONDS(secs) + (u64)nsecs - system_time_base;<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; y = do_div(x, 1000000000);<BR>
<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; spin_lock(&amp;wc_lock);<BR>
-&nbsp;&nbsp;&nbsp; wc_sec&nbsp; = _wc_sec&nbsp; = (u32)x;<BR>
-&nbsp;&nbsp;&nbsp; wc_nsec = _wc_nsec = (u32)y;<BR>
+&nbsp;&nbsp;&nbsp; wc_sec&nbsp; = x;<BR>
+&nbsp;&nbsp;&nbsp; wc_nsec = y;<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; spin_unlock(&amp;wc_lock);<BR>
<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; rcu_read_lock(&amp;domlist_read_lock);<BR>
@@ -1548,8 +1548,8 @@ unsigned long get_localtime(struct domai<BR>
&nbsp; /* Return microsecs after 00:00:00 localtime, 1 January, 1970. */<BR>
&nbsp; uint64_t get_localtime_us(struct domain *d)<BR>
&nbsp; {<BR>
-&nbsp;&nbsp;&nbsp; return ((wc_sec + d-&gt;time_offset_seconds) * 1000000000ULL<BR>
-&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; + wc_nsec + NOW()) / 1000UL;<BR>
+&nbsp;&nbsp;&nbsp; return (SECONDS(wc_sec + d-&gt;time_offset_seconds) + wc_nsec + NOW())<BR>
+&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; / 1000UL;<BR>
&nbsp; }<BR>
<BR>
&nbsp; unsigned long get_sec(void)<BR>
@@ -1651,7 +1651,7 @@ struct tm wallclock_time(void)<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; if ( !wc_sec )<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return (struct tm) { 0 };<BR>
<BR>
-&nbsp;&nbsp;&nbsp; seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;<BR>
+&nbsp;&nbsp;&nbsp; seconds = NOW() + SECONDS(wc_sec) + wc_nsec;<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; do_div(seconds, 1000000000);<BR>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return gmtime(seconds);<BR>
&nbsp; }<BR>
<BR>
<BR>
</BODY></HTML>

--------------040109090707030509030305--


--===============5954979961175805327==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5954979961175805327==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:46:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQa4-0004cF-Nd; Fri, 28 Feb 2014 16:46:36 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQa3-0004bo-3O
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:46:35 +0000
Received: from [85.158.139.211:7091] by server-2.bemta-5.messagelabs.com id
	B4/BC-23037-A6DB0135; Fri, 28 Feb 2014 16:46:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393605993!6946742!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26050 invoked from network); 28 Feb 2014 16:46:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:46:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:46:32 +0000
Message-Id: <5310CB88020000780012046C@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:46:48 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartFECCB068.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86: don't propagate acpi_skip_timer_override
	do Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartFECCB068.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

It's unclear why c/s 4850:923dd9975981 added this - Dom0 isn't
controlling the timer interrupt, and hence has no need to know.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/boot.c
+++ b/xen/arch/x86/acpi/boot.c
@@ -56,7 +56,9 @@ bool_t __initdata acpi_ht =3D 1;	/* enable
 bool_t __initdata acpi_lapic;
 bool_t __initdata acpi_ioapic;
=20
-bool_t acpi_skip_timer_override __initdata;
+/* acpi_skip_timer_override: Skip IRQ0 overrides. */
+static bool_t acpi_skip_timer_override __initdata;
+boolean_param("acpi_skip_timer_override", acpi_skip_timer_override);
=20
 #ifdef CONFIG_X86_LOCAL_APIC
 static u64 acpi_lapic_addr __initdata =3D APIC_DEFAULT_PHYS_BASE;
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -71,10 +71,6 @@ static void parse_acpi_param(char *s);
 custom_param("acpi", parse_acpi_param);
=20
 /* **** Linux config option: propagated to domain0. */
-/* acpi_skip_timer_override: Skip IRQ0 overrides. */
-boolean_param("acpi_skip_timer_override", acpi_skip_timer_override);
-
-/* **** Linux config option: propagated to domain0. */
 /* noapic: Disable IOAPIC setup. */
 boolean_param("noapic", skip_ioapic_setup);
=20
@@ -1365,9 +1361,6 @@ void __init __start_xen(unsigned long mb
         /* Append any extra parameters. */
         if ( skip_ioapic_setup && !strstr(dom0_cmdline, "noapic") )
             safe_strcat(dom0_cmdline, " noapic");
-        if ( acpi_skip_timer_override &&
-             !strstr(dom0_cmdline, "acpi_skip_timer_override") )
-            safe_strcat(dom0_cmdline, " acpi_skip_timer_override");
         if ( (strlen(acpi_param) =3D=3D 0) && acpi_disabled )
         {
             printk("ACPI is disabled, notifying Domain 0 (acpi=3Doff)\n");=

--- a/xen/include/asm-x86/acpi.h
+++ b/xen/include/asm-x86/acpi.h
@@ -80,7 +80,6 @@ int __acpi_release_global_lock(unsigned=20
=20
 extern bool_t acpi_lapic, acpi_ioapic, acpi_noirq;
 extern bool_t acpi_force, acpi_ht, acpi_disabled;
-extern bool_t acpi_skip_timer_override;
 extern u32 acpi_smi_cmd;
 extern u8 acpi_enable_value, acpi_disable_value;
 void acpi_pic_sci_set_trigger(unsigned int, u16);
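For context, the "append flag only if absent" pattern that __start_xen keeps using for the remaining option (noapic) can be modeled as below; safe_strcat is a Xen helper, approximated here with strncat:

```c
#include <string.h>

/* Model of the dom0_cmdline append logic: add a flag (preceded by a
 * space) only when it is not already present on the command line. */
static void append_if_missing(char *cmdline, size_t size, const char *flag)
{
    if (!strstr(cmdline, flag)) {
        strncat(cmdline, " ", size - strlen(cmdline) - 1);
        strncat(cmdline, flag, size - strlen(cmdline) - 1);
    }
}
```

The patch simply deletes one instance of this pattern: since Dom0 never programs the timer interrupt, propagating acpi_skip_timer_override to its command line served no purpose.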




--=__PartFECCB068.0__=
Content-Type: text/plain; name="x86-dont-propagate-asto.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-dont-propagate-asto.patch"

x86: don't propagate acpi_skip_timer_override do Dom0=0A=0AIt's unclear =
why c/s 4850:923dd9975981 added this - Dom0 isn't=0Acontrolling the timer =
interrupt, and hence has no need to know.=0A=0ASigned-off-by: Jan Beulich =
<jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/acpi/boot.c=0A+++ b/xen/arch/x8=
6/acpi/boot.c=0A@@ -56,7 +56,9 @@ bool_t __initdata acpi_ht =3D 1;	/* =
enable=0A bool_t __initdata acpi_lapic;=0A bool_t __initdata acpi_ioapic;=
=0A =0A-bool_t acpi_skip_timer_override __initdata;=0A+/* acpi_skip_timer_o=
verride: Skip IRQ0 overrides. */=0A+static bool_t acpi_skip_timer_override =
__initdata;=0A+boolean_param("acpi_skip_timer_override", acpi_skip_timer_ov=
erride);=0A =0A #ifdef CONFIG_X86_LOCAL_APIC=0A static u64 acpi_lapic_addr =
__initdata =3D APIC_DEFAULT_PHYS_BASE;=0A--- a/xen/arch/x86/setup.c=0A+++ =
b/xen/arch/x86/setup.c=0A@@ -71,10 +71,6 @@ static void parse_acpi_param(ch=
ar *s);=0A custom_param("acpi", parse_acpi_param);=0A =0A /* **** Linux =
config option: propagated to domain0. */=0A-/* acpi_skip_timer_override: =
Skip IRQ0 overrides. */=0A-boolean_param("acpi_skip_timer_override", =
acpi_skip_timer_override);=0A-=0A-/* **** Linux config option: propagated =
to domain0. */=0A /* noapic: Disable IOAPIC setup. */=0A boolean_param("noa=
pic", skip_ioapic_setup);=0A =0A@@ -1365,9 +1361,6 @@ void __init =
__start_xen(unsigned long mb=0A         /* Append any extra parameters. =
*/=0A         if ( skip_ioapic_setup && !strstr(dom0_cmdline, "noapic") =
)=0A             safe_strcat(dom0_cmdline, " noapic");=0A-        if ( =
acpi_skip_timer_override &&=0A-             !strstr(dom0_cmdline, =
"acpi_skip_timer_override") )=0A-            safe_strcat(dom0_cmdline, " =
acpi_skip_timer_override");=0A         if ( (strlen(acpi_param) =3D=3D 0) =
&& acpi_disabled )=0A         {=0A             printk("ACPI is disabled, =
notifying Domain 0 (acpi=3Doff)\n");=0A--- a/xen/include/asm-x86/acpi.h=0A+=
++ b/xen/include/asm-x86/acpi.h=0A@@ -80,7 +80,6 @@ int __acpi_release_glob=
al_lock(unsigned =0A =0A extern bool_t acpi_lapic, acpi_ioapic, acpi_noirq;=
=0A extern bool_t acpi_force, acpi_ht, acpi_disabled;=0A-extern bool_t =
acpi_skip_timer_override;=0A extern u32 acpi_smi_cmd;=0A extern u8 =
acpi_enable_value, acpi_disable_value;=0A void acpi_pic_sci_set_trigger(uns=
igned int, u16);=0A
--=__PartFECCB068.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartFECCB068.0__=--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:48:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQc1-0004qP-9l; Fri, 28 Feb 2014 16:48:37 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQby-0004q4-QX
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:48:35 +0000
Received: from [85.158.137.68:38703] by server-6.bemta-3.messagelabs.com id
	CE/5B-09180-2EDB0135; Fri, 28 Feb 2014 16:48:34 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393606113!141652!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 20464 invoked from network); 28 Feb 2014 16:48:33 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-7.tower-31.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:48:33 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:48:32 +0000
Message-Id: <5310CBFF0200007800120491@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:48:47 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part695B27FF.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] x86/AMD: re-use function wide variables in
	init_amd()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part695B27FF.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -399,13 +399,9 @@ static void __devinit init_amd(struct cp
 		 * revision D (model =3D 0x14) and later actually support =
it.
 		 * (AMD Erratum #110, docId: 25759).
 		 */
-		unsigned int lo, hi;
-
 		clear_bit(X86_FEATURE_LAHF_LM, c->x86_capability);
-		if (!rdmsr_amd_safe(0xc001100d, &lo, &hi)) {
-			hi &=3D ~1;
-			wrmsr_amd_safe(0xc001100d, lo, hi);
-		}
+		if (!rdmsr_amd_safe(0xc001100d, &l, &h))
+			wrmsr_amd_safe(0xc001100d, l, h & ~1);
 	}
=20
 	switch(c->x86)




--=__Part695B27FF.0__=
Content-Type: text/plain; name="x86-AMD-init-reuse-vars.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-AMD-init-reuse-vars.patch"

x86/AMD: re-use function wide variables in init_amd()=0A=0ASigned-off-by: =
Jan Beulich <jbeulich@suse.com>=0A=0A--- a/xen/arch/x86/cpu/amd.c=0A+++ =
b/xen/arch/x86/cpu/amd.c=0A@@ -399,13 +399,9 @@ static void __devinit =
init_amd(struct cp=0A 		 * revision D (model =3D 0x14) and later =
actually support it.=0A 		 * (AMD Erratum #110, docId: =
25759).=0A 		 */=0A-		unsigned int lo, hi;=0A-=0A 		=
clear_bit(X86_FEATURE_LAHF_LM, c->x86_capability);=0A-		if =
(!rdmsr_amd_safe(0xc001100d, &lo, &hi)) {=0A-			hi &=3D =
~1;=0A-			wrmsr_amd_safe(0xc001100d, lo, hi);=0A-		=
}=0A+		if (!rdmsr_amd_safe(0xc001100d, &l, &h))=0A+			=
wrmsr_amd_safe(0xc001100d, l, h & ~1);=0A 	}=0A =0A 	switch(c->x=
86)=0A
--=__Part695B27FF.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part695B27FF.0__=--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:50:36 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQds-00057X-09; Fri, 28 Feb 2014 16:50:32 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQdr-00057P-Db
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:50:31 +0000
Received: from [85.158.139.211:2517] by server-16.bemta-5.messagelabs.com id
	8F/6F-05060-65EB0135; Fri, 28 Feb 2014 16:50:30 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393606229!2423411!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16335 invoked from network); 28 Feb 2014 16:50:29 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:50:29 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:50:29 +0000
Message-Id: <5310CC730200007800120495@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:50:43 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__PartE6D4A873.0__="
Subject: [Xen-devel] [PATCH] x86/ACPI: also print address space for PM1x
	fields
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__PartE6D4A873.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

At least one vendor is in the process of making systems available where
these live in MMIO, not in I/O port space.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/boot.c
+++ b/xen/arch/x86/acpi/boot.c
@@ -402,11 +402,15 @@ acpi_fadt_parse_sleep_info(struct acpi_t
 	acpi_fadt_copy_address(pm1b_evt, pm1b_event, pm1_event);
=20
 	printk(KERN_INFO PREFIX
-	       "SLEEP INFO: pm1x_cnt[%"PRIx64",%"PRIx64"], "
-	       "pm1x_evt[%"PRIx64",%"PRIx64"]\n",
+	       "SLEEP INFO: pm1x_cnt[%d:%"PRIx64",%d:%"PRIx64"], "
+	       "pm1x_evt[%d:%"PRIx64",%d:%"PRIx64"]\n",
+	       acpi_sinfo.pm1a_cnt_blk.space_id,
 	       acpi_sinfo.pm1a_cnt_blk.address,
+	       acpi_sinfo.pm1b_cnt_blk.space_id,
 	       acpi_sinfo.pm1b_cnt_blk.address,
+	       acpi_sinfo.pm1a_evt_blk.space_id,
 	       acpi_sinfo.pm1a_evt_blk.address,
+	       acpi_sinfo.pm1b_evt_blk.space_id,
 	       acpi_sinfo.pm1b_evt_blk.address);
=20
 	/* Now FACS... */




--=__PartE6D4A873.0__=
Content-Type: text/plain; name="x86-ACPI-pm1x-space.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="x86-ACPI-pm1x-space.patch"

x86/ACPI: also print address space for PM1x fields=0A=0AAt least one =
vendor is in the process of making systems available where=0Athese live in =
MMIO, not in I/O port space.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse=
.com>=0A=0A--- a/xen/arch/x86/acpi/boot.c=0A+++ b/xen/arch/x86/acpi/boot.c=
=0A@@ -402,11 +402,15 @@ acpi_fadt_parse_sleep_info(struct acpi_t=0A 	=
acpi_fadt_copy_address(pm1b_evt, pm1b_event, pm1_event);=0A =0A 	=
printk(KERN_INFO PREFIX=0A-	       "SLEEP INFO: pm1x_cnt[%"PRIx64",%"PR=
Ix64"], "=0A-	       "pm1x_evt[%"PRIx64",%"PRIx64"]\n",=0A+	       =
"SLEEP INFO: pm1x_cnt[%d:%"PRIx64",%d:%"PRIx64"], "=0A+	       "pm1x_evt[%d=
:%"PRIx64",%d:%"PRIx64"]\n",=0A+	       acpi_sinfo.pm1a_cnt_blk.spac=
e_id,=0A 	       acpi_sinfo.pm1a_cnt_blk.address,=0A+	       =
acpi_sinfo.pm1b_cnt_blk.space_id,=0A 	       acpi_sinfo.pm1b_cnt_blk.addr=
ess,=0A+	       acpi_sinfo.pm1a_evt_blk.space_id,=0A 	       =
acpi_sinfo.pm1a_evt_blk.address,=0A+	       acpi_sinfo.pm1b_evt_blk.spac=
e_id,=0A 	       acpi_sinfo.pm1b_evt_blk.address);=0A =0A 	/* =
Now FACS... */=0A
--=__PartE6D4A873.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__PartE6D4A873.0__=--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:55:03 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQhz-0005Kc-OC; Fri, 28 Feb 2014 16:54:47 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQhy-0005KV-4E
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:54:46 +0000
Received: from [85.158.139.211:3421] by server-13.bemta-5.messagelabs.com id
	D2/B8-18801-55FB0135; Fri, 28 Feb 2014 16:54:45 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-4.tower-206.messagelabs.com!1393606484!6957435!1
X-Originating-IP: [74.125.82.173]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24245 invoked from network); 28 Feb 2014 16:54:44 -0000
Received: from mail-we0-f173.google.com (HELO mail-we0-f173.google.com)
	(74.125.82.173)
	by server-4.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:54:44 -0000
Received: by mail-we0-f173.google.com with SMTP id w61so769696wes.18
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 08:54:44 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=ZQ2si2mw0AURQ+lqoYCzKbhf4JW8k0SbXDkqDyi+BJY=;
	b=j9UegsSNJHKb3n8jyeukTE6254AhUpdbiw77j6xRb+zerckN43xfZQpzj1u7zURXfN
	kkZZ9CrhNkNv+NiXBDDqwU/xX22xoTrPdmMw2JuBXyYCZ7As8h5uHk/4iJdQq0b8K/tO
	rfuLul2UTeFRxg+HtqrkO4JyM5k5ArZ2Zt+jXsG6PqN8m+89kaCbbNda9b8M3FoViGPc
	6HlVQLRKNJ1Riy1XkV2xvGowWJ7zUMaTdFEO4LiM1SxwR7sVWKR2xvsudu0vVUGiatH9
	p+ZLhxVM9bLey1pCNKOTkpVDDv3LJtbIur2Qr2Xm/FPGn6cJm3lhcm52PEZ2NIN7q33x
	m+Ig==
X-Received: by 10.180.13.33 with SMTP id e1mr4342477wic.38.1393606484120;
	Fri, 28 Feb 2014 08:54:44 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id ee5sm6503824wib.8.2014.02.28.08.54.42
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 08:54:43 -0800 (PST)
Message-ID: <5310BF50.3050707@gmail.com>
Date: Fri, 28 Feb 2014 16:54:40 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CBFF0200007800120491@nat28.tlf.novell.com>
In-Reply-To: <5310CBFF0200007800120491@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86/AMD: re-use function wide variables in
	init_amd()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============6494920506135569819=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============6494920506135569819==
Content-Type: multipart/alternative;
 boundary="------------060207020109030407090704"

This is a multi-part message in MIME format.
--------------060207020109030407090704
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> Signed-off-by: Jan Beulich<jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
>
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -399,13 +399,9 @@ static void __devinit init_amd(struct cp
> * revision D (model = 0x14) and later actually support it.
> * (AMD Erratum #110, docId: 25759).
> */
> - unsigned int lo, hi;
> -
> clear_bit(X86_FEATURE_LAHF_LM, c->x86_capability);
> - if (!rdmsr_amd_safe(0xc001100d,&lo,&hi)) {
> - hi&= ~1;
> - wrmsr_amd_safe(0xc001100d, lo, hi);
> - }
> + if (!rdmsr_amd_safe(0xc001100d,&l,&h))
> + wrmsr_amd_safe(0xc001100d, l, h& ~1);
> }
>
> switch(c->x86)
>
>

--------------060207020109030407090704
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit

<HTML><HEAD><META content="text/html; charset=UTF-8" http-equiv="Content-Type"></HEAD><BODY>Jan Beulich wrote:<BR>
<blockquote type="cite"><BR>
Signed-off-by: Jan Beulich&lt;jbeulich@suse.com&gt;<BR>
</blockquote><BR>
<BR>
Acked-by: Keir Fraser &lt;keir@xen.org&gt;<BR>
<BR>
<blockquote type="cite"><BR>
<BR>
--- a/xen/arch/x86/cpu/amd.c<BR>
+++ b/xen/arch/x86/cpu/amd.c<BR>
@@ -399,13 +399,9 @@ static void __devinit init_amd(struct cp<BR>
&nbsp; 		 * revision D (model = 0x14) and later actually support it.<BR>
&nbsp; 		 * (AMD Erratum #110, docId: 25759).<BR>
&nbsp; 		 */<BR>
-		unsigned int lo, hi;<BR>
-<BR>
&nbsp; 		clear_bit(X86_FEATURE_LAHF_LM, c-&gt;x86_capability);<BR>
-		if (!rdmsr_amd_safe(0xc001100d,&amp;lo,&amp;hi)) {<BR>
-			hi&amp;= ~1;<BR>
-			wrmsr_amd_safe(0xc001100d, lo, hi);<BR>
-		}<BR>
+		if (!rdmsr_amd_safe(0xc001100d,&amp;l,&amp;h))<BR>
+			wrmsr_amd_safe(0xc001100d, l, h&amp;&nbsp; ~1);<BR>
&nbsp; 	}<BR>
<BR>
&nbsp; 	switch(c-&gt;x86)<BR>
<BR>
<BR>
</BODY></HTML>

--------------060207020109030407090704--


--===============6494920506135569819==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============6494920506135569819==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:55:14 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQiQ-0005MI-5d; Fri, 28 Feb 2014 16:55:14 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQiO-0005M2-Fq
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:55:12 +0000
Received: from [85.158.139.211:11141] by server-14.bemta-5.messagelabs.com id
	CD/2F-27598-F6FB0135; Fri, 28 Feb 2014 16:55:11 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393606510!6931465!1
X-Originating-IP: [74.125.82.176]
X-SpamReason: No, hits=1.1 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_40_50,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4991 invoked from network); 28 Feb 2014 16:55:10 -0000
Received: from mail-we0-f176.google.com (HELO mail-we0-f176.google.com)
	(74.125.82.176)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:55:10 -0000
Received: by mail-we0-f176.google.com with SMTP id x48so766904wes.21
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 08:55:10 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=QTHvSf/mYChafB163RuwH829q52oO25NMiihJUNjlKc=;
	b=My76C02dkbddR7OR9TYBdrnqg2Voh3BwJhrIHIDNdZP2q+VATqwM5CT/mUwc2uuhtA
	YMoq/4JrtFjTzdFlcLNNWcRrmj/H7qSrGMu2JEUZO09YjTXZ6maon8T4UnN0b1fDujC9
	0oWTQelwaq9oDQTx7qLMTmpPFy9dz5V5+MOVFxGQUXjzx7S6X5jDUwSpuqzd0YgGDYCh
	x6xy5/+oozk8EwslpzPru75NGR4BU6PRSt3MwGbaCxh/Ya3baO/WDT6mcAOarr9UmaSe
	Wvrg+lJrlsbAtYLC/WVFi0mzDMvRczFtPdKQDqmAW8nApaLiIkN36TlW8YaPL3Zo1/sc
	8r6g==
X-Received: by 10.180.36.8 with SMTP id m8mr3563456wij.42.1393606510516;
	Fri, 28 Feb 2014 08:55:10 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id fm3sm7595432wib.8.2014.02.28.08.55.08
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 08:55:09 -0800 (PST)
Message-ID: <5310BF6A.5080206@gmail.com>
Date: Fri, 28 Feb 2014 16:55:06 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CC730200007800120495@nat28.tlf.novell.com>
In-Reply-To: <5310CC730200007800120495@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] x86/ACPI: also print address space for PM1x
 fields
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4454056794193653491=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4454056794193653491==
Content-Type: multipart/alternative;
 boundary="------------000605020606030207060607"

This is a multi-part message in MIME format.
--------------000605020606030207060607
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> At least one vendor is in the process of making systems available where
> these live in MMIO, not in I/O port space.
>
> Signed-off-by: Jan Beulich<jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -402,11 +402,15 @@ acpi_fadt_parse_sleep_info(struct acpi_t
> acpi_fadt_copy_address(pm1b_evt, pm1b_event, pm1_event);
>
> printk(KERN_INFO PREFIX
> - "SLEEP INFO: pm1x_cnt[%"PRIx64",%"PRIx64"], "
> - "pm1x_evt[%"PRIx64",%"PRIx64"]\n",
> + "SLEEP INFO: pm1x_cnt[%d:%"PRIx64",%d:%"PRIx64"], "
> + "pm1x_evt[%d:%"PRIx64",%d:%"PRIx64"]\n",
> + acpi_sinfo.pm1a_cnt_blk.space_id,
> acpi_sinfo.pm1a_cnt_blk.address,
> + acpi_sinfo.pm1b_cnt_blk.space_id,
> acpi_sinfo.pm1b_cnt_blk.address,
> + acpi_sinfo.pm1a_evt_blk.space_id,
> acpi_sinfo.pm1a_evt_blk.address,
> + acpi_sinfo.pm1b_evt_blk.space_id,
> acpi_sinfo.pm1b_evt_blk.address);
>
> /* Now FACS... */
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--------------000605020606030207060607
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit

<HTML><HEAD><META content="text/html; charset=UTF-8" http-equiv="Content-Type"></HEAD><BODY>Jan Beulich wrote:<BR>
<blockquote type="cite"><BR>
At least one vendor is in the process of making systems available where<BR>
these live in MMIO, not in I/O port space.<BR>
<BR>
Signed-off-by: Jan Beulich&lt;jbeulich@suse.com&gt;<BR>
</blockquote><BR>
<BR>
Acked-by: Keir Fraser &lt;keir@xen.org&gt;<BR>
<BR>
<blockquote type="cite"><BR>
--- a/xen/arch/x86/acpi/boot.c<BR>
+++ b/xen/arch/x86/acpi/boot.c<BR>
@@ -402,11 +402,15 @@ acpi_fadt_parse_sleep_info(struct acpi_t<BR>
&nbsp; 	acpi_fadt_copy_address(pm1b_evt, pm1b_event, pm1_event);<BR>
<BR>
&nbsp; 	printk(KERN_INFO PREFIX<BR>
-	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "SLEEP INFO: pm1x_cnt[%"PRIx64",%"PRIx64"], "<BR>
-	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "pm1x_evt[%"PRIx64",%"PRIx64"]\n",<BR>
+	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "SLEEP INFO: pm1x_cnt[%d:%"PRIx64",%d:%"PRIx64"], "<BR>
+	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; "pm1x_evt[%d:%"PRIx64",%d:%"PRIx64"]\n",<BR>
+	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1a_cnt_blk.space_id,<BR>
&nbsp; 	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1a_cnt_blk.address,<BR>
+	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1b_cnt_blk.space_id,<BR>
&nbsp; 	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1b_cnt_blk.address,<BR>
+	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1a_evt_blk.space_id,<BR>
&nbsp; 	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1a_evt_blk.address,<BR>
+	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1b_evt_blk.space_id,<BR>
&nbsp; 	&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; acpi_sinfo.pm1b_evt_blk.address);<BR>
<BR>
&nbsp; 	/* Now FACS... */<BR>
<BR>
<BR>
<BR>
_______________________________________________<BR>
Xen-devel mailing list<BR>
Xen-devel@lists.xen.org<BR>
http://lists.xen.org/xen-devel</BODY></HTML>

--------------000605020606030207060607--


--===============4454056794193653491==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4454056794193653491==--


Date: Fri, 28 Feb 2014 16:55:06 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CC730200007800120495@nat28.tlf.novell.com>
In-Reply-To: <5310CC730200007800120495@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] x86/ACPI: also print address space for PM1x
 fields
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4454056794193653491=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4454056794193653491==
Content-Type: multipart/alternative;
 boundary="------------000605020606030207060607"

This is a multi-part message in MIME format.
--------------000605020606030207060607
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> At least one vendor is in the process of making systems available where
> these live in MMIO, not in I/O port space.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -402,11 +402,15 @@ acpi_fadt_parse_sleep_info(struct acpi_t
>  	acpi_fadt_copy_address(pm1b_evt, pm1b_event, pm1_event);
>
>  	printk(KERN_INFO PREFIX
> -	       "SLEEP INFO: pm1x_cnt[%"PRIx64",%"PRIx64"], "
> -	       "pm1x_evt[%"PRIx64",%"PRIx64"]\n",
> +	       "SLEEP INFO: pm1x_cnt[%d:%"PRIx64",%d:%"PRIx64"], "
> +	       "pm1x_evt[%d:%"PRIx64",%d:%"PRIx64"]\n",
> +	       acpi_sinfo.pm1a_cnt_blk.space_id,
>  	       acpi_sinfo.pm1a_cnt_blk.address,
> +	       acpi_sinfo.pm1b_cnt_blk.space_id,
>  	       acpi_sinfo.pm1b_cnt_blk.address,
> +	       acpi_sinfo.pm1a_evt_blk.space_id,
>  	       acpi_sinfo.pm1a_evt_blk.address,
> +	       acpi_sinfo.pm1b_evt_blk.space_id,
>  	       acpi_sinfo.pm1b_evt_blk.address);
>
>  	/* Now FACS... */
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--------------000605020606030207060607--


--===============4454056794193653491==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4454056794193653491==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:55:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQj0-0005R2-MC; Fri, 28 Feb 2014 16:55:50 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQiy-0005Qk-QY
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:55:49 +0000
Received: from [85.158.139.211:19665] by server-6.bemta-5.messagelabs.com id
	D0/84-14342-49FB0135; Fri, 28 Feb 2014 16:55:48 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393606547!6931496!1
X-Originating-IP: [74.125.82.177]
X-SpamReason: No, hits=0.6 required=7.0 tests=BODY_RANDOM_LONG,
	HTML_30_40,HTML_MESSAGE,ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5396 invoked from network); 28 Feb 2014 16:55:47 -0000
Received: from mail-we0-f177.google.com (HELO mail-we0-f177.google.com)
	(74.125.82.177)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 16:55:47 -0000
Received: by mail-we0-f177.google.com with SMTP id t61so747808wes.8
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 08:55:47 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=gmgkQX1JjyMi+Ajkmy3cpjYuJAwf0g05l88I9ZAPMBM=;
	b=oXukOcnYVyVOAZd3oQk0UlnTaRa3h9rLWKf0ArZnwPjuFpSfkp8cjdqwTsrmMNBWfF
	4nvcQSP7WZ6ejnktdPEDHU/9GF0XIpaqjcydlxC9zUPZvf37an97SEgT/ZCmK2cvuX9w
	Yc9ZIfBqGCvpCMnYGgg53/fByzhvzcsAndR6hWN9+nlGQci9dFjYbo4BAFxrUVWconSt
	83E2mAlHNdvvQU1wdpdT5eCdVWaRhE9LlOYL3kn/0RrsJaBU+wt13wVqVPToFrUJV4UK
	FSa2+fO8Yunh/ALcEgfCZcMiYsMYN9uOoh+pawr1UKrLOwSjJWDMAIbO+Fg8umJZS0OJ
	IGqg==
X-Received: by 10.180.7.130 with SMTP id j2mr4221362wia.25.1393606547015;
	Fri, 28 Feb 2014 08:55:47 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id ci4sm5691072wjc.21.2014.02.28.08.55.45
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 08:55:46 -0800 (PST)
Message-ID: <5310BF8F.6080706@gmail.com>
Date: Fri, 28 Feb 2014 16:55:43 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CC730200007800120495@nat28.tlf.novell.com>
In-Reply-To: <5310CC730200007800120495@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] x86/ACPI: also print address space for PM1x
 fields
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============4278944338052735500=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============4278944338052735500==
Content-Type: multipart/alternative;
 boundary="------------090205090405090500090206"

This is a multi-part message in MIME format.
--------------090205090405090500090206
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> At least one vendor is in the process of making systems available where
> these live in MMIO, not in I/O port space.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Probably an abundance of caution to keep everything platform related in 
sync between dom0 and Xen. I agree this shouldn't be necessary in this 
case and I'm okay with this simplification. Frankly I doubt anyone has 
to specify this option on any halfway sane box anyway.

Acked-by: Keir Fraser <keir@xen.org>

>
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -402,11 +402,15 @@ acpi_fadt_parse_sleep_info(struct acpi_t
>  	acpi_fadt_copy_address(pm1b_evt, pm1b_event, pm1_event);
>
>  	printk(KERN_INFO PREFIX
> -	       "SLEEP INFO: pm1x_cnt[%"PRIx64",%"PRIx64"], "
> -	       "pm1x_evt[%"PRIx64",%"PRIx64"]\n",
> +	       "SLEEP INFO: pm1x_cnt[%d:%"PRIx64",%d:%"PRIx64"], "
> +	       "pm1x_evt[%d:%"PRIx64",%d:%"PRIx64"]\n",
> +	       acpi_sinfo.pm1a_cnt_blk.space_id,
>  	       acpi_sinfo.pm1a_cnt_blk.address,
> +	       acpi_sinfo.pm1b_cnt_blk.space_id,
>  	       acpi_sinfo.pm1b_cnt_blk.address,
> +	       acpi_sinfo.pm1a_evt_blk.space_id,
>  	       acpi_sinfo.pm1a_evt_blk.address,
> +	       acpi_sinfo.pm1b_evt_blk.space_id,
>  	       acpi_sinfo.pm1b_evt_blk.address);
>
>  	/* Now FACS... */
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel

--------------090205090405090500090206--


--===============4278944338052735500==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============4278944338052735500==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:56:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQjW-0005VV-3c; Fri, 28 Feb 2014 16:56:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJQjV-0005VH-6h
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:56:21 +0000
Received: from [85.158.143.35:45466] by server-2.bemta-4.messagelabs.com id
	12/65-04779-4BFB0135; Fri, 28 Feb 2014 16:56:20 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393606578!9072462!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32285 invoked from network); 28 Feb 2014 16:56:19 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 16:56:19 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id E75E824B;
	Fri, 28 Feb 2014 16:56:17 +0000 (UTC)
Received: from [192.168.142.176] (unknown [16.213.48.182])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 959A056;
	Fri, 28 Feb 2014 16:56:14 +0000 (UTC)
Message-ID: <5310BFAE.1080003@hp.com>
Date: Fri, 28 Feb 2014 11:56:14 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<20140226170041.GA20775@phenom.dumpdata.com>
In-Reply-To: <20140226170041.GA20775@phenom.dumpdata.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Marcos Matsunaga <Marcos.Matsunaga@oracle.com>,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock
	with PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 12:00 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 26, 2014 at 10:14:20AM -0500, Waiman Long wrote:
> It should be fairly easy. You just need to implement the kick, right?
> An IPI should be all that is needed - look in xen_unlock_kick. The
> rest of the spinlock code is all generic code shared between
> KVM, Xen and baremetal.
>
> You should be able to run all of this under an HVM guest as well - as
> in, you don't need a pure PV guest to use the PV ticketlocks.
>
> An easy way to install/run this is to install your latest distro,
> do 'yum install xen' or 'apt-get install xen'. Reboot and you
> are under Xen. Launch guests, etc with your favorite virtualization
> toolstack.

Thanks for the tip. I will try that out.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:56:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQjW-0005VV-3c; Fri, 28 Feb 2014 16:56:22 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJQjV-0005VH-6h
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:56:21 +0000
Received: from [85.158.143.35:45466] by server-2.bemta-4.messagelabs.com id
	12/65-04779-4BFB0135; Fri, 28 Feb 2014 16:56:20 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-11.tower-21.messagelabs.com!1393606578!9072462!1
X-Originating-IP: [15.217.128.53]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 32285 invoked from network); 28 Feb 2014 16:56:19 -0000
Received: from g2t2354.austin.hp.com (HELO g2t2354.austin.hp.com)
	(15.217.128.53)
	by server-11.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 16:56:19 -0000
Received: from g2t2360.austin.hp.com (g2t2360.austin.hp.com [16.197.8.247])
	by g2t2354.austin.hp.com (Postfix) with ESMTP id E75E824B;
	Fri, 28 Feb 2014 16:56:17 +0000 (UTC)
Received: from [192.168.142.176] (unknown [16.213.48.182])
	by g2t2360.austin.hp.com (Postfix) with ESMTP id 959A056;
	Fri, 28 Feb 2014 16:56:14 +0000 (UTC)
Message-ID: <5310BFAE.1080003@hp.com>
Date: Fri, 28 Feb 2014 11:56:14 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<20140226170041.GA20775@phenom.dumpdata.com>
In-Reply-To: <20140226170041.GA20775@phenom.dumpdata.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Marcos Matsunaga <Marcos.Matsunaga@oracle.com>,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock
	with PV support
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 12:00 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 26, 2014 at 10:14:20AM -0500, Waiman Long wrote:
> It should be fairly easy. You just need to implement the kick, right?
> An IPI should be all that is needed - look in xen_unlock_kick. The
> rest of the spinlock code is generic and shared between
> KVM, Xen and bare metal.
>
> You should be able to run all of this under HVM guests as well - as
> in, you don't need a pure PV guest to use the PV ticketlocks.
>
> An easy way to install/run this is to install your latest distro,
> do 'yum install xen' or 'apt-get install xen'. Reboot and you
> are under Xen. Launch guests, etc with your favorite virtualization
> toolstack.

Thanks for the tip. I will try that out.

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 16:57:13 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQkI-0005dF-Io; Fri, 28 Feb 2014 16:57:10 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQkH-0005cx-EV
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:57:09 +0000
Received: from [85.158.139.211:46170] by server-5.bemta-5.messagelabs.com id
	39/E6-32749-4EFB0135; Fri, 28 Feb 2014 16:57:08 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-2.tower-206.messagelabs.com!1393606627!121074!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29170 invoked from network); 28 Feb 2014 16:57:08 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-2.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:57:08 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:57:06 +0000
Message-Id: <5310CE0102000078001204D0@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:57:21 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part75473BE1.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] correctly use gcc's -x option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part75473BE1.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

In Linux the improper use was found to cause problems with certain
distributed build environments. Even if not directly affecting us, be
on the safe side.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/Config.mk
+++ b/Config.mk
@@ -101,7 +101,7 @@ PYTHON_PREFIX_ARG ?=3D --prefix=3D"$(PREFIX)
 #
 # Usage: cflags-y +=3D $(call cc-option,$(CC),-march=3Dwinchip-c6,-march=
=3Di586)
 cc-option =3D $(shell if test -z "`echo 'void*p=3D1;' | \
-              $(1) $(2) -S -o /dev/null -xc - 2>&1 | grep -- $(2) -`"; \
+              $(1) $(2) -S -o /dev/null -x c - 2>&1 | grep -- $(2) -`"; \
               then echo "$(2)"; else echo "$(3)"; fi ;)
=20
 # cc-option-add: Add an option to compilation flags, but only if =
supported.
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -81,7 +81,7 @@ ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_A
 all: headers.chk
=20
 headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% =
public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) =
Makefile
-	for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall =
-W -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; done >$@.new
+	for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall =
-W -Werror -S -o /dev/null -x c $$i || exit 1; echo $$i; done >$@.new
 	mv $@.new $@
=20
 endif




--=__Part75473BE1.0__=
Content-Type: text/plain; name="gcc-x-option.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="gcc-x-option.patch"

correctly use gcc's -x option=0A=0AIn Linux the improper use was found to =
cause problems with certain=0Adistributed build environments. Even if not =
directly affecting us, be=0Aon the safe side.=0A=0ASigned-off-by: Jan =
Beulich <jbeulich@suse.com>=0A=0A--- a/Config.mk=0A+++ b/Config.mk=0A@@ =
-101,7 +101,7 @@ PYTHON_PREFIX_ARG ?=3D --prefix=3D"$(PREFIX)=0A #=0A # =
Usage: cflags-y +=3D $(call cc-option,$(CC),-march=3Dwinchip-c6,-march=3Di5=
86)=0A cc-option =3D $(shell if test -z "`echo 'void*p=3D1;' | \=0A-       =
       $(1) $(2) -S -o /dev/null -xc - 2>&1 | grep -- $(2) -`"; \=0A+      =
        $(1) $(2) -S -o /dev/null -x c - 2>&1 | grep -- $(2) -`"; \=0A     =
          then echo "$(2)"; else echo "$(3)"; fi ;)=0A =0A # cc-option-add:=
 Add an option to compilation flags, but only if supported.=0A--- =
a/xen/include/Makefile=0A+++ b/xen/include/Makefile=0A@@ -81,7 +81,7 @@ =
ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_A=0A all: headers.chk=0A =0A =
headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% =
public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) =
Makefile=0A-	for i in $(filter %.h,$^); do $(CC) -ansi -include =
stdint.h -Wall -W -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; =
done >$@.new=0A+	for i in $(filter %.h,$^); do $(CC) -ansi -include =
stdint.h -Wall -W -Werror -S -o /dev/null -x c $$i || exit 1; echo $$i; =
done >$@.new=0A 	mv $@.new $@=0A =0A endif=0A
--=__Part75473BE1.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part75473BE1.0__=--
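[Editorial note: the probing that Config.mk's cc-option macro performs can be illustrated outside the build system. Below is a hypothetical plain-shell re-implementation - the names cc_option and fakecc are illustrative, not part of the tree. It shows why the patch's "-x c -" form matters: when the compiler reads C source from stdin ("-"), there is no file extension to infer the language from, so the language must be named explicitly.]

```shell
cc_option() {
    # $1 = compiler, $2 = option to probe, $3 = fallback option.
    # Feed a trivial C snippet on stdin; "-x c -" names the language
    # explicitly, since stdin carries no file extension to infer it from.
    # The option is deemed supported when the compiler's diagnostics
    # (captured via 2>&1) do not mention it.
    if test -z "$(echo 'void*p=1;' | "$1" $2 -S -o /dev/null -x c - 2>&1 | grep -- "$2")"; then
        echo "$2"
    else
        echo "$3"
    fi
}
```

Usage mirrors the make-level call site: cflags get `$(cc_option gcc -march=winchip-c6 -march=i586)`, i.e. the probed option when accepted, the fallback otherwise.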


From xen-devel-bounces@lists.xen.org Fri Feb 28 16:59:47 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 16:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQmn-00064B-Ei; Fri, 28 Feb 2014 16:59:45 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <JBeulich@suse.com>) id 1WJQmm-000642-Gl
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 16:59:44 +0000
Received: from [85.158.143.35:27816] by server-1.bemta-4.messagelabs.com id
	6E/9F-31661-F70C0135; Fri, 28 Feb 2014 16:59:43 +0000
X-Env-Sender: JBeulich@suse.com
X-Msg-Ref: server-4.tower-21.messagelabs.com!1393606782!9092902!1
X-Originating-IP: [130.57.49.28]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTMwLjU3LjQ5LjI4ID0+IDQ4MDU=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 8528 invoked from network); 28 Feb 2014 16:59:42 -0000
Received: from nat28.tlf.novell.com (HELO nat28.tlf.novell.com) (130.57.49.28)
	by server-4.tower-21.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 16:59:42 -0000
Received: from EMEA1-MTA by nat28.tlf.novell.com
	with Novell_GroupWise; Fri, 28 Feb 2014 16:59:42 +0000
Message-Id: <5310CE9C02000078001204D7@nat28.tlf.novell.com>
X-Mailer: Novell GroupWise Internet Agent 12.0.2 
Date: Fri, 28 Feb 2014 16:59:56 +0000
From: "Jan Beulich" <JBeulich@suse.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Mime-Version: 1.0
Content-Type: multipart/mixed; boundary="=__Part7745399C.0__="
Cc: Keir Fraser <keir@xen.org>
Subject: [Xen-devel] [PATCH] include: parallelize compat/xlat.h generation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a MIME message. If you are reading this text, you may want to 
consider changing to a mail reader or gateway that understands how to 
properly handle MIME multipart messages.

--=__Part7745399C.0__=
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

Splitting this up into pieces significantly speeds up building on multi-
CPU systems when making use of make's -j option.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -66,14 +66,24 @@ compat/%.c: public/%.h xlat.lst Makefile
 	$(PYTHON) $(BASEDIR)/tools/compat-build-source.py >$@.new
 	mv -f $@.new $@
=20
-compat/xlat.h: xlat.lst $(filter-out compat/xlat.h,$(headers-y)) =
$(BASEDIR)/tools/get-fields.sh Makefile
+compat/.xlat/%.h: compat/%.h compat/.xlat/%.lst $(BASEDIR)/tools/get-field=
s.sh Makefile
 	export PYTHON=3D$(PYTHON); \
-	grep -v '^[	 ]*#' xlat.lst | \
-	while read what name hdr; do \
-		hdr=3D"compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y=
),g')"; \
-		echo '$(headers-y)' | grep -q "$$hdr" || continue; \
-		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$n=
ame $$hdr || exit $$?; \
-	done >$@.new
+	while read what name; do \
+		$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$n=
ame $< || exit $$?; \
+	done <$(patsubst compat/%,compat/.xlat/%,$(basename $<)).lst =
>$@.new
+	mv -f $@.new $@
+
+.PRECIOUS: compat/.xlat/%.lst
+compat/.xlat/%.lst: xlat.lst Makefile
+	mkdir -p $(@D)
+	grep -v '^[ \t]*#' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -e =
's,[ \t]\+$*\.h[ \t]*$$,,p' >$@.new
+	$(call move-if-changed,$@.new,$@)
+
+xlat-y :=3D $(shell sed -ne 's,@arch@,$(compat-arch-y),g' -e 's,^[?!][ =
\t]\+[^ \t]\+[ \t]\+,,p' xlat.lst | uniq)
+xlat-y :=3D $(filter $(patsubst compat/%,%,$(headers-y)),$(xlat-y))
+
+compat/xlat.h: $(addprefix compat/.xlat/,$(xlat-y)) Makefile
+	cat $(filter %.h,$^) >$@.new
 	mv -f $@.new $@
=20
 ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))




--=__Part7745399C.0__=
Content-Type: text/plain; name="parallel-include-compat-build.patch"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment; filename="parallel-include-compat-build.patch"

include: parallelize compat/xlat.h generation=0A=0ASplitting this up into =
pieces significantly speeds up building on multi-=0ACPU systems when making =
use of make's -j option.=0A=0ASigned-off-by: Jan Beulich <jbeulich@suse.com=
>=0A=0A--- a/xen/include/Makefile=0A+++ b/xen/include/Makefile=0A@@ -66,14 =
+66,24 @@ compat/%.c: public/%.h xlat.lst Makefile=0A 	$(PYTHON) =
$(BASEDIR)/tools/compat-build-source.py >$@.new=0A 	mv -f $@.new $@=0A =
=0A-compat/xlat.h: xlat.lst $(filter-out compat/xlat.h,$(headers-y)) =
$(BASEDIR)/tools/get-fields.sh Makefile=0A+compat/.xlat/%.h: compat/%.h =
compat/.xlat/%.lst $(BASEDIR)/tools/get-fields.sh Makefile=0A 	export =
PYTHON=3D$(PYTHON); \=0A-	grep -v '^[	 ]*#' xlat.lst | \=0A-	=
while read what name hdr; do \=0A-		hdr=3D"compat/$$(echo =
$$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \=0A-		echo =
'$(headers-y)' | grep -q "$$hdr" || continue; \=0A-		$(SHELL) =
$(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr || exit $$?; =
\=0A-	done >$@.new=0A+	while read what name; do \=0A+		=
$(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $< || exit =
$$?; \=0A+	done <$(patsubst compat/%,compat/.xlat/%,$(basename =
$<)).lst >$@.new=0A+	mv -f $@.new $@=0A+=0A+.PRECIOUS: compat/.xlat/%.ls=
t=0A+compat/.xlat/%.lst: xlat.lst Makefile=0A+	mkdir -p $(@D)=0A+	=
grep -v '^[ \t]*#' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -e 's,[ =
\t]\+$*\.h[ \t]*$$,,p' >$@.new=0A+	$(call move-if-changed,$@.new,$@)=
=0A+=0A+xlat-y :=3D $(shell sed -ne 's,@arch@,$(compat-arch-y),g' -e =
's,^[?!][ \t]\+[^ \t]\+[ \t]\+,,p' xlat.lst | uniq)=0A+xlat-y :=3D =
$(filter $(patsubst compat/%,%,$(headers-y)),$(xlat-y))=0A+=0A+compat/xlat.=
h: $(addprefix compat/.xlat/,$(xlat-y)) Makefile=0A+	cat $(filter =
%.h,$^) >$@.new=0A 	mv -f $@.new $@=0A =0A ifeq ($(XEN_TARGET_ARCH),$(X=
EN_COMPILE_ARCH))=0A
--=__Part7745399C.0__=
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--=__Part7745399C.0__=--
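[Editorial note: the new compat/.xlat/%.lst rule above relies on the tree's move-if-changed make macro so that a regenerated-but-identical list does not get a fresh timestamp, which would needlessly invalidate every per-header xlat fragment. A rough shell sketch of that idiom (the function name move_if_changed is illustrative, not from the tree):]

```shell
move_if_changed() {
    # $1 = freshly generated file, $2 = destination.
    # Replace $2 only when the contents actually differ; otherwise
    # discard $1 so the destination's timestamp stays untouched and
    # make skips rebuilding everything that depends on it.
    if cmp -s "$1" "$2"; then
        rm -f "$1"
    else
        mv -f "$1" "$2"
    fi
}
```

This is the standard trick for generated files that feed many pattern-rule targets: correctness is unchanged, but incremental `make -j` runs only redo the fragments whose input lists really changed.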


From xen-devel-bounces@lists.xen.org Fri Feb 28 17:05:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQsD-0006J8-AX; Fri, 28 Feb 2014 17:05:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQsB-0006J3-Vl
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:05:20 +0000
Received: from [193.109.254.147:47267] by server-11.bemta-14.messagelabs.com
	id 29/4D-24604-FC1C0135; Fri, 28 Feb 2014 17:05:19 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393607118!7533814!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30324 invoked from network); 28 Feb 2014 17:05:18 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:05:18 -0000
Received: by mail-we0-f169.google.com with SMTP id t61so798431wes.14
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 09:05:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=2KuDQjPMRMocAiDW7uQiJtntPdyg7yppyh+TxRPNJAQ=;
	b=nz3iSbVLEbjyiF5AdTFVRlDxxZiGG9iOP4u7BCicMeZOvtO/U0KBZeYh5m+l7NkjQ0
	nN0QAbwpNyhO3LGn5CtxBALPYQRivnGbCtassVWgQGiBoVgWSs9rUFGht57HlWq0m6Ex
	bIRHETiacae9Paxy8MQbiHNFxA+uGDi9LxVjwG9uZbIwfvGSSgGwLj5FXY6UXNA1Ls1W
	mOjJ0fCv3mC+is79eJXFw85ATFaSAnOUClhD4N1OCPcvURBmfgFjbsRgy5xKpcwGSJir
	Ukw9zujITT/vCVSEbIXLsl4vlo7uAVRVk/w+t48yXii7QWJ/g3iBgcElOMtHA3ZHaqIp
	87Yw==
X-Received: by 10.194.85.75 with SMTP id f11mr3778258wjz.47.1393607117977;
	Fri, 28 Feb 2014 09:05:17 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id t6sm7751291wix.4.2014.02.28.09.05.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 09:05:16 -0800 (PST)
Message-ID: <5310C1CA.8030301@gmail.com>
Date: Fri, 28 Feb 2014 17:05:14 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CE0102000078001204D0@nat28.tlf.novell.com>
In-Reply-To: <5310CE0102000078001204D0@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] correctly use gcc's -x option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1375950672336386925=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============1375950672336386925==
Content-Type: multipart/alternative;
 boundary="------------070204040006010006080700"

This is a multi-part message in MIME format.
--------------070204040006010006080700
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> In Linux the improper use was found to cause problems with certain
> distributed build environments. Even if not directly affecting us, be
> on the safe side.
>
> Signed-off-by: Jan Beulich<jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
>
> --- a/Config.mk
> +++ b/Config.mk
> @@ -101,7 +101,7 @@ PYTHON_PREFIX_ARG ?= --prefix="$(PREFIX)
> #
> # Usage: cflags-y += $(call cc-option,$(CC),-march=winchip-c6,-march=i586)
> cc-option = $(shell if test -z "`echo 'void*p=1;' | \
> - $(1) $(2) -S -o /dev/null -xc - 2>&1 | grep -- $(2) -`"; \
> + $(1) $(2) -S -o /dev/null -x c - 2>&1 | grep -- $(2) -`"; \
> then echo "$(2)"; else echo "$(3)"; fi ;)
>
> # cc-option-add: Add an option to compilation flags, but only if 
> supported.
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -81,7 +81,7 @@ ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_A
> all: headers.chk
>
> headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% 
> public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) 
> Makefile
> - for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W 
> -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; done>$@.new
> + for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W 
> -Werror -S -o /dev/null -x c $$i || exit 1; echo $$i; done>$@.new
> mv $@.new $@
>
> endif
>
>

--------------070204040006010006080700--


--===============1375950672336386925==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1375950672336386925==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 17:05:34 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQsD-0006J8-AX; Fri, 28 Feb 2014 17:05:21 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQsB-0006J3-Vl
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:05:20 +0000
Received: from [193.109.254.147:47267] by server-11.bemta-14.messagelabs.com
	id 29/4D-24604-FC1C0135; Fri, 28 Feb 2014 17:05:19 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-15.tower-27.messagelabs.com!1393607118!7533814!1
X-Originating-IP: [74.125.82.169]
X-SpamReason: No, hits=0.1 required=7.0 tests=HTML_30_40,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30324 invoked from network); 28 Feb 2014 17:05:18 -0000
Received: from mail-we0-f169.google.com (HELO mail-we0-f169.google.com)
	(74.125.82.169)
	by server-15.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:05:18 -0000
Received: by mail-we0-f169.google.com with SMTP id t61so798431wes.14
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 09:05:18 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=2KuDQjPMRMocAiDW7uQiJtntPdyg7yppyh+TxRPNJAQ=;
	b=nz3iSbVLEbjyiF5AdTFVRlDxxZiGG9iOP4u7BCicMeZOvtO/U0KBZeYh5m+l7NkjQ0
	nN0QAbwpNyhO3LGn5CtxBALPYQRivnGbCtassVWgQGiBoVgWSs9rUFGht57HlWq0m6Ex
	bIRHETiacae9Paxy8MQbiHNFxA+uGDi9LxVjwG9uZbIwfvGSSgGwLj5FXY6UXNA1Ls1W
	mOjJ0fCv3mC+is79eJXFw85ATFaSAnOUClhD4N1OCPcvURBmfgFjbsRgy5xKpcwGSJir
	Ukw9zujITT/vCVSEbIXLsl4vlo7uAVRVk/w+t48yXii7QWJ/g3iBgcElOMtHA3ZHaqIp
	87Yw==
X-Received: by 10.194.85.75 with SMTP id f11mr3778258wjz.47.1393607117977;
	Fri, 28 Feb 2014 09:05:17 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id t6sm7751291wix.4.2014.02.28.09.05.15
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 09:05:16 -0800 (PST)
Message-ID: <5310C1CA.8030301@gmail.com>
Date: Fri, 28 Feb 2014 17:05:14 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CE0102000078001204D0@nat28.tlf.novell.com>
In-Reply-To: <5310CE0102000078001204D0@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] correctly use gcc's -x option
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============1375950672336386925=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============1375950672336386925==
Content-Type: multipart/alternative;
 boundary="------------070204040006010006080700"

This is a multi-part message in MIME format.
--------------070204040006010006080700
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> In Linux the improper use was found to cause problems with certain
> distributed build environments. Even if not directly affecting us, be
> on the safe side.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Acked-by: Keir Fraser <keir@xen.org>

>
>
> --- a/Config.mk
> +++ b/Config.mk
> @@ -101,7 +101,7 @@ PYTHON_PREFIX_ARG ?= --prefix="$(PREFIX)
> #
> # Usage: cflags-y += $(call cc-option,$(CC),-march=winchip-c6,-march=i586)
> cc-option = $(shell if test -z "`echo 'void*p=1;' | \
> - $(1) $(2) -S -o /dev/null -xc - 2>&1 | grep -- $(2) -`"; \
> + $(1) $(2) -S -o /dev/null -x c - 2>&1 | grep -- $(2) -`"; \
> then echo "$(2)"; else echo "$(3)"; fi ;)
>
> # cc-option-add: Add an option to compilation flags, but only if 
> supported.
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -81,7 +81,7 @@ ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_A
> all: headers.chk
>
> headers.chk: $(filter-out public/arch-% public/%ctl.h public/xsm/% 
> public/%hvm/save.h, $(wildcard public/*.h public/*/*.h) $(public-y)) 
> Makefile
> - for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W 
> -Werror -S -o /dev/null -xc $$i || exit 1; echo $$i; done>$@.new
> + for i in $(filter %.h,$^); do $(CC) -ansi -include stdint.h -Wall -W 
> -Werror -S -o /dev/null -x c $$i || exit 1; echo $$i; done>$@.new
> mv $@.new $@
>
> endif
>
>
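For reference, the probe this hunk fixes can be exercised outside of make; the following is a stand-alone sketch (the function name and the probed flags below are illustrative, not part of the patch):

```shell
#!/bin/sh
# Shell sketch of Config.mk's cc-option probe. A trivial translation
# unit is fed to the compiler on stdin ("-x c -" names stdin as C
# source; the fix spells -x and its argument as separate words, the
# form gcc documents) and the combined output is grepped for the
# probed flag: any mention means the compiler complained about it,
# so the fallback is used instead.
cc_option() {
    cc=$1 opt=$2 fallback=$3
    if test -z "$(echo 'void *p;' | \
            "$cc" $opt -S -o /dev/null -x c - 2>&1 | grep -- "$opt")"; then
        echo "$opt"
    else
        echo "$fallback"
    fi
}
```

A caller would use it the same way the make function is used, e.g. `CFLAGS="$CFLAGS $(cc_option gcc -fno-strict-aliasing '')"`.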

--------------070204040006010006080700--


--===============1375950672336386925==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============1375950672336386925==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 17:06:42 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQtT-0006Mx-QN; Fri, 28 Feb 2014 17:06:39 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <keir.xen@gmail.com>) id 1WJQtS-0006Mp-Hk
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:06:38 +0000
Received: from [193.109.254.147:26137] by server-5.bemta-14.messagelabs.com id
	58/F3-16688-D12C0135; Fri, 28 Feb 2014 17:06:37 +0000
X-Env-Sender: keir.xen@gmail.com
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393607196!7581042!1
X-Originating-IP: [209.85.212.182]
X-SpamReason: No, hits=0.9 required=7.0 tests=HTML_20_30,HTML_MESSAGE,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 2596 invoked from network); 28 Feb 2014 17:06:36 -0000
Received: from mail-wi0-f182.google.com (HELO mail-wi0-f182.google.com)
	(209.85.212.182)
	by server-13.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:06:36 -0000
Received: by mail-wi0-f182.google.com with SMTP id f8so861780wiw.9
	for <xen-devel@lists.xenproject.org>;
	Fri, 28 Feb 2014 09:06:36 -0800 (PST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;
	h=message-id:date:from:user-agent:mime-version:to:cc:subject
	:references:in-reply-to:content-type;
	bh=+Do/+sPFUDZ3L1aNvW5cpZ7bq150ZaEa+GzY6Wix3b4=;
	b=fKuLAlKOQcEe0fyUzOfhP04IfhmQ6tEsmLsGYnAt/IdcT52L9CdgNSCLb3Gb2vMQGC
	lIj+ER+VOl8yzzTLoDK2vByZKuGTd2c4vlvhZaE50hncHgj275pbX3KfNyxEQJvzZvoy
	1xZ0x1PpEVsDyMZZ0iZq1qEsz4peoqNAha9WvHF8SCnt+s1BrHUSoX1iZoKFy3YWtDgs
	4xXmnfENS870rpI+o7CvAfUId0GcHBO3nwSFLzWOVqeruhXwGHJfzFWMvgq574/oaxzX
	rNlEwLO+lsoNQ3A4n24qBW4KmV3XsVhpR6RQqBdORiCkGuHqSOdx8rB1SMpts1wxmYlm
	uB8w==
X-Received: by 10.194.202.230 with SMTP id kl6mr3883150wjc.9.1393607196508;
	Fri, 28 Feb 2014 09:06:36 -0800 (PST)
Received: from unknown-a8:20:66:16:48:c7.home
	(host86-136-210-233.range86-136.btcentralplus.com. [86.136.210.233])
	by mx.google.com with ESMTPSA id ha1sm5755302wjc.23.2014.02.28.09.06.34
	for <multiple recipients>
	(version=TLSv1 cipher=ECDHE-RSA-RC4-SHA bits=128/128);
	Fri, 28 Feb 2014 09:06:35 -0800 (PST)
Message-ID: <5310C218.7050300@gmail.com>
Date: Fri, 28 Feb 2014 17:06:32 +0000
From: Keir Fraser <keir.xen@gmail.com>
User-Agent: Postbox 3.0.9 (Macintosh/20140129)
MIME-Version: 1.0
To: "Jan Beulich" <JBeulich@suse.com>
References: <5310CE9C02000078001204D7@nat28.tlf.novell.com>
In-Reply-To: <5310CE9C02000078001204D7@nat28.tlf.novell.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] include: parallelize compat/xlat.h
	generation
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============5671846392851891603=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is a multi-part message in MIME format.
--===============5671846392851891603==
Content-Type: multipart/alternative;
 boundary="------------040205090203030809060406"

This is a multi-part message in MIME format.
--------------040205090203030809060406
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Jan Beulich wrote:
>
> Splitting this up into pieces significantly speeds up building on multi-
> CPU systems when making use of make's -j option.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>


Yes, very welcome!

Acked-by: Keir Fraser <keir@xen.org>

>
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -66,14 +66,24 @@ compat/%.c: public/%.h xlat.lst Makefile
> $(PYTHON) $(BASEDIR)/tools/compat-build-source.py>$@.new
> mv -f $@.new $@
>
> -compat/xlat.h: xlat.lst $(filter-out compat/xlat.h,$(headers-y)) 
> $(BASEDIR)/tools/get-fields.sh Makefile
> +compat/.xlat/%.h: compat/%.h compat/.xlat/%.lst 
> $(BASEDIR)/tools/get-fields.sh Makefile
> export PYTHON=$(PYTHON); \
> - grep -v '^[ ]*#' xlat.lst | \
> - while read what name hdr; do \
> - hdr="compat/$$(echo $$hdr | sed 's,@arch@,$(compat-arch-y),g')"; \
> - echo '$(headers-y)' | grep -q "$$hdr" || continue; \
> - $(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $$hdr 
> || exit $$?; \
> - done>$@.new
> + while read what name; do \
> + $(SHELL) $(BASEDIR)/tools/get-fields.sh "$$what" compat_$$name $< || 
> exit $$?; \
> + done<$(patsubst compat/%,compat/.xlat/%,$(basename $<)).lst>$@.new
> + mv -f $@.new $@
> +
> +.PRECIOUS: compat/.xlat/%.lst
> +compat/.xlat/%.lst: xlat.lst Makefile
> + mkdir -p $(@D)
> + grep -v '^[ \t]*#' $< | sed -ne 's,@arch@,$(compat-arch-y),g' -e 
> 's,[ \t]\+$*\.h[ \t]*$$,,p'>$@.new
> + $(call move-if-changed,$@.new,$@)
> +
> +xlat-y := $(shell sed -ne 's,@arch@,$(compat-arch-y),g' -e 's,^[?!][ 
> \t]\+[^ \t]\+[ \t]\+,,p' xlat.lst | uniq)
> +xlat-y := $(filter $(patsubst compat/%,%,$(headers-y)),$(xlat-y))
> +
> +compat/xlat.h: $(addprefix compat/.xlat/,$(xlat-y)) Makefile
> + cat $(filter %.h,$^)>$@.new
> mv -f $@.new $@
>
> ifeq ($(XEN_TARGET_ARCH),$(XEN_COMPILE_ARCH))
>
>
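The `$(call move-if-changed,$@.new,$@)` in the new `%.lst` rule is what keeps the parallel build incremental; its behaviour can be sketched in plain shell (assumed semantics only — the real definition lives in Xen's build infrastructure):

```shell
#!/bin/sh
# Sketch of the move-if-changed idiom: output is generated into
# "$target.new", and the real target is only replaced when the
# contents actually changed. An unchanged target keeps its old
# timestamp, so make does not needlessly rebuild everything that
# depends on the per-header .lst files.
move_if_changed() {
    new=$1 target=$2
    if cmp -s "$new" "$target"; then
        rm -f "$new"            # unchanged: keep the old timestamp
    else
        mv -f "$new" "$target"  # changed (or new): publish the contents
    fi
}
```

A rule then writes to `$@.new` and finishes with `move_if_changed $@.new $@`.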

--------------040205090203030809060406--


--===============5671846392851891603==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============5671846392851891603==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 17:06:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQtg-0006OL-8j; Fri, 28 Feb 2014 17:06:52 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJQte-0006OB-I3
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:06:50 +0000
Received: from [85.158.139.211:38414] by server-12.bemta-5.messagelabs.com id
	84/72-15415-922C0135; Fri, 28 Feb 2014 17:06:49 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393607207!6950722!1
X-Originating-IP: [15.192.137.9]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 18754 invoked from network); 28 Feb 2014 17:06:48 -0000
Received: from g5t1626.atlanta.hp.com (HELO g5t1626.atlanta.hp.com)
	(15.192.137.9)
	by server-16.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 17:06:48 -0000
Received: from g5t1633.atlanta.hp.com (g5t1633.atlanta.hp.com [16.201.144.132])
	by g5t1626.atlanta.hp.com (Postfix) with ESMTP id 11DD9223;
	Fri, 28 Feb 2014 17:06:45 +0000 (UTC)
Received: from [192.168.142.176] (unknown [16.213.48.182])
	by g5t1633.atlanta.hp.com (Postfix) with ESMTP id 14A3164;
	Fri, 28 Feb 2014 17:06:40 +0000 (UTC)
Message-ID: <5310C21F.7000809@hp.com>
Date: Fri, 28 Feb 2014 12:06:39 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-5-git-send-email-Waiman.Long@hp.com>
	<20140226170734.GB20775@phenom.dumpdata.com>
In-Reply-To: <20140226170734.GB20775@phenom.dumpdata.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 4/8] pvqspinlock,
 x86: Allow unfair spinlock in a real PV environment
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 12:07 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote:
>> Locking is always an issue in a virtualized environment as the virtual
>> CPU that is waiting on a lock may get scheduled out and hence block
>> any progress in lock acquisition even when the lock has been freed.
>>
>> One solution to this problem is to allow unfair lock in a
>> para-virtualized environment. In this case, a new lock acquirer can
>> come and steal the lock if the next-in-line CPU to get the lock is
>> scheduled out. Unfair lock in a native environment is generally not a
> Hmm, how do you know if the 'next-in-line CPU' is scheduled out? As
> in the hypervisor knows - but you as a guest might have no idea
> of it.

I use a heart-beat counter to see if the other side responds within a 
certain time limit. If not, I assume it has been scheduled out, probably 
due to PLE.
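
As a rough userspace sketch of that heart-beat check (illustrative names 
and C11 atomics, not the actual patch code): the next-in-line vCPU bumps a 
counter while it spins, and a would-be lock stealer samples it; if the 
counter stays frozen for a whole timeout window, that vCPU is presumed 
scheduled out.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define HEARTBEAT_SPINS 1000    /* illustrative threshold, not tuned */

/* Returns true if the heartbeat counter moved during the window,
 * i.e. the vCPU behind it is still running. */
static bool next_cpu_alive(_Atomic unsigned int *heartbeat)
{
        unsigned int seen = atomic_load(heartbeat);

        for (int i = 0; i < HEARTBEAT_SPINS; i++) {
                if (atomic_load(heartbeat) != seen)
                        return true;    /* counter moved: vCPU is running */
        }
        return false;   /* no progress: assume it was preempted (PLE) */
}
```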

>> good idea as there is a possibility of lock starvation for a heavily
>> contended lock.
> Should this then detect whether it is running under virtualization
> and only then activate itself? And stay disabled when running on baremetal?

Yes, the unfair lock should only be enabled when running as a 
para-virtualized guest. A jump label (static key) is used for this 
purpose and will be enabled by the appropriate KVM or Xen code.
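
A userspace stand-in for that jump-label pattern (all names here are 
illustrative; the real key is paravirt_unfairlocks_enabled): a flag that 
defaults to the fair path, is flipped at most once during guest init, and 
is afterwards only read on the lock path.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

/* Defaults to false, i.e. the fair queue spinlock. */
static atomic_bool unfair_locks_enabled;

/* Called once early in boot; only a PV guest flips the flag. */
static void pv_guest_lock_init(bool running_as_pv_guest)
{
        if (running_as_pv_guest)
                atomic_store(&unfair_locks_enabled, true);
}

/* Stand-in for the static_key_false() branch in arch_spin_lock(). */
static const char *lock_path(void)
{
        return atomic_load(&unfair_locks_enabled) ? "unfair" : "fair";
}
```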

>> This patch adds a new configuration option for the x86
>> architecture to enable the use of unfair queue spinlock
>> (PARAVIRT_UNFAIR_LOCKS) in a real para-virtualized guest. A jump label
>> (paravirt_unfairlocks_enabled) is used to switch between a fair and
>> an unfair version of the spinlock code. This jump label will only be
>> enabled in a real PV guest.
> As opposed to fake PV guest :-) I think you can remove the 'real'.

Yes, you are right. I will remove that in the next series.

>
>> Enabling this configuration feature decreases the performance of an
>> uncontended lock-unlock operation by about 1-2%.
> Presumably on baremetal, right?

Enabling the unfair lock adds extra code that carries a slight 
performance penalty of about 1-2% on both bare metal and virtualized guests.

>> +/**
>> + * arch_spin_lock - acquire a queue spinlock
>> + * @lock: Pointer to queue spinlock structure
>> + */
>> +static inline void arch_spin_lock(struct qspinlock *lock)
>> +{
>> +	if (static_key_false(&paravirt_unfairlocks_enabled)) {
>> +		queue_spin_lock_unfair(lock);
>> +		return;
>> +	}
>> +	queue_spin_lock(lock);
> What happens when you are booting and you are in the middle of using a
> ticketlock (say you are waiting for it and you are in the slow-path)
> and suddenly the unfairlocks_enabled is turned on?

The static key will only be changed in the early boot period, which 
presumably doesn't need to use spinlocks. This static key is 
initialized in the same way as the PV ticketlock's static key, which has 
the same problem that you mentioned.
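
The unfair path selected by arch_spin_lock() above boils down to lock 
stealing. A toy userspace sketch of the idea (toy names, not the patch's 
queue_spin_lock_unfair() code, whose lock word also encodes the waiter 
queue tail):

```c
#include <stdatomic.h>
#include <stdbool.h>

struct toy_lock {
        atomic_int locked;      /* 0 = free, 1 = held */
};

/* A newcomer may seize the lock word directly with a compare-and-swap,
 * jumping ahead of any queued waiters; hence the starvation risk for
 * heavily contended locks mentioned in the changelog. */
static bool toy_steal_lock(struct toy_lock *lock)
{
        int expected = 0;

        return atomic_compare_exchange_strong(&lock->locked, &expected, 1);
}

static void toy_unlock(struct toy_lock *lock)
{
        atomic_store(&lock->locked, 0);
}
```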

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:08:59 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJQvh-0006d3-0H; Fri, 28 Feb 2014 17:08:57 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <waiman.long@hp.com>) id 1WJQvf-0006ck-9d
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:08:55 +0000
Received: from [85.158.137.68:33735] by server-7.bemta-3.messagelabs.com id
	B9/9D-13775-6A2C0135; Fri, 28 Feb 2014 17:08:54 +0000
X-Env-Sender: waiman.long@hp.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393607332!3373519!1
X-Originating-IP: [15.192.137.10]
X-SpamReason: No, hits=0.0 required=7.0 tests=SUBJECT_RANDOMQ
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 1953 invoked from network); 28 Feb 2014 17:08:53 -0000
Received: from g5t1627.atlanta.hp.com (HELO g5t1627.atlanta.hp.com)
	(15.192.137.10)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 17:08:53 -0000
Received: from g5t1633.atlanta.hp.com (g5t1633.atlanta.hp.com [16.201.144.132])
	by g5t1627.atlanta.hp.com (Postfix) with ESMTP id EEEDE167;
	Fri, 28 Feb 2014 17:08:49 +0000 (UTC)
Received: from [192.168.142.176] (unknown [16.213.48.182])
	by g5t1633.atlanta.hp.com (Postfix) with ESMTP id 19A675D;
	Fri, 28 Feb 2014 17:08:48 +0000 (UTC)
Message-ID: <5310C29F.8040706@hp.com>
Date: Fri, 28 Feb 2014 12:08:47 -0500
From: Waiman Long <waiman.long@hp.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:10.0.12) Gecko/20130109 Thunderbird/10.0.12
MIME-Version: 1.0
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-6-git-send-email-Waiman.Long@hp.com>
	<20140226170859.GC20775@phenom.dumpdata.com>
In-Reply-To: <20140226170859.GC20775@phenom.dumpdata.com>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH RFC v5 5/8] pvqspinlock,
 x86: Enable unfair queue spinlock in a KVM guest
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/26/2014 12:08 PM, Konrad Rzeszutek Wilk wrote:
> On Wed, Feb 26, 2014 at 10:14:25AM -0500, Waiman Long wrote:
>> This patch adds a KVM init function to activate the unfair queue
>> spinlock in a KVM guest when the PARAVIRT_UNFAIR_LOCKS kernel config
>> option is selected.
>>
>> Signed-off-by: Waiman Long<Waiman.Long@hp.com>
>> ---
>>   arch/x86/kernel/kvm.c |   17 +++++++++++++++++
>>   1 files changed, 17 insertions(+), 0 deletions(-)
>>
>> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
>> index 713f1b3..a489140 100644
>> --- a/arch/x86/kernel/kvm.c
>> +++ b/arch/x86/kernel/kvm.c
>> @@ -826,3 +826,20 @@ static __init int kvm_spinlock_init_jump(void)
>>   early_initcall(kvm_spinlock_init_jump);
>>
>>   #endif	/* CONFIG_PARAVIRT_SPINLOCKS */
>> +
>> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
>> +/*
>> + * Enable unfair lock if running in a real para-virtualized environment
>> + */
>> +static __init int kvm_unfair_locks_init_jump(void)
>> +{
>> +	if (!kvm_para_available())
>> +		return 0;
> I think you also need to check for !kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)?
> Otherwise you might enable this, but the kicker function won't be
> enabled.

The unfair lock code doesn't need to use the CPU kicker function. That 
is why the KVM_FEATURE_PV_UNHALT feature is not checked.
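
The gating argument above can be sketched with made-up names: enabling the 
unfair lock only requires knowing that a KVM hypervisor is present. The 
PV_UNHALT "kicker" matters for the PV queue spinlock, which halts vCPUs 
and needs them kicked awake, but the unfair lock never sleeps.

```c
#include <stdbool.h>

/* Illustrative decision logic only; the real code just tests
 * kvm_para_available() in the initcall. */
static bool should_enable_unfair_locks(bool kvm_para_present,
                                       bool has_pv_unhalt)
{
        (void)has_pv_unhalt;    /* irrelevant: no vCPU ever waits for a kick */
        return kvm_para_present;
}
```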

-Longman

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:18:41 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJR4r-00070q-Cd; Fri, 28 Feb 2014 17:18:25 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJR4q-00070l-Gv
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:18:24 +0000
Received: from [85.158.143.35:53220] by server-2.bemta-4.messagelabs.com id
	14/44-04779-FD4C0135; Fri, 28 Feb 2014 17:18:23 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-8.tower-21.messagelabs.com!1393607900!9116126!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24500 invoked from network); 28 Feb 2014 17:18:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-8.tower-21.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 17:18:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SHIGmm030262
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 17:18:17 GMT
Received: from userz7021.oracle.com (userz7021.oracle.com [156.151.31.85])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SHIEMR020910
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=FAIL);
	Fri, 28 Feb 2014 17:18:15 GMT
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
	by userz7021.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SHIDQd015395; Fri, 28 Feb 2014 17:18:14 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 09:18:13 -0800
Message-ID: <5310C54A.1080906@oracle.com>
Date: Fri, 28 Feb 2014 12:20:10 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Roger Pau Monne <roger.pau@citrix.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com>
In-Reply-To: <530F87CC.8090000@oracle.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>> Add support for MSI message groups for Xen Dom0 using the
>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>
>> In order to keep track of which pirq is the first one in the group all
>> pirqs in the MSI group except for the first one have the newly
>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>> first pirq in the group.
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>
>


I was just looking at xen_setup_msi_irqs() (for a different reason) and 
I am no longer sure this patch does anything:

static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
{
         int irq, ret, i;
         struct msi_desc *msidesc;
         int *v;

         if (type == PCI_CAP_ID_MSI && nvec > 1)
                 return 1;
....

Same thing for xen_hvm_setup_msi_irqs().

Take a look at commit 884ac2978a295b7df3c4a686d3bff6932bbbb460.


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:32:20 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRI9-0007SA-3m; Fri, 28 Feb 2014 17:32:09 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRI7-0007S5-SP
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:32:08 +0000
Received: from [193.109.254.147:14544] by server-9.bemta-14.messagelabs.com id
	13/13-24895-718C0135; Fri, 28 Feb 2014 17:32:07 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393608725!7562425!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 17618 invoked from network); 28 Feb 2014 17:32:06 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:32:06 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="105073874"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 17:32:04 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 12:32:04 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJRI4-0000i7-Bv;
	Fri, 28 Feb 2014 17:32:04 +0000
Message-ID: <5310C814.3070702@citrix.com>
Date: Fri, 28 Feb 2014 17:32:04 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <1393331011-22240-1-git-send-email-andrew.cooper3@citrix.com>
	<1393331011-22240-2-git-send-email-andrew.cooper3@citrix.com>
	<5310C887020000780012044A@nat28.tlf.novell.com>
In-Reply-To: <5310C887020000780012044A@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Keir Fraser <keir@xen.org>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Xen-devel <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] [PATCH v4 1/5] xen/compiler: Replace opencoded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 16:33, Jan Beulich wrote:
>>>> On 25.02.14 at 13:23, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> Make a formal define for noreturn in compiler.h, and fix up opencoded uses of
>> __attribute__((noreturn)).  This includes removing redundant uses with
>> function definitions which have a public declaration.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Keir Fraser <keir@xen.org>
>> CC: Jan Beulich <JBeulich@suse.com>
>> Acked-by: Ian Campbell <ian.campbell@citrix.com>
>> CC: Stefano Stabellini <stefano.stabellini@citrix.com>
>> Acked-by: Tim Deegan <tim@xen.org>
> I had already committed this, but it failed my pre-push build test:
>
>> --- a/xen/include/xen/compiler.h
>> +++ b/xen/include/xen/compiler.h
>> @@ -14,6 +14,8 @@
>>  #define always_inline __inline__ __attribute__ ((always_inline))
>>  #define noinline      __attribute__((noinline))
>>  
>> +#define noreturn      __attribute__((noreturn))
> This collides with uses of __attribute__((noreturn)) elsewhere in
> the tree. Did this really build for you without issue?
>
> Jan
>

Hmm - I can see why.  I will respin the series and double check each
commit for compilation.  I think this was broken by splitting out the
changes to nmi_crash() in v3.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:38:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRNq-0007aJ-PP; Fri, 28 Feb 2014 17:38:02 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WJRNo-0007aE-If
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:38:00 +0000
Received: from [85.158.139.211:39266] by server-17.bemta-5.messagelabs.com id
	15/EC-31975-779C0135; Fri, 28 Feb 2014 17:37:59 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-12.tower-206.messagelabs.com!1393609078!6970768!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23977 invoked from network); 28 Feb 2014 17:37:59 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-12.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 17:37:59 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=laptop)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WJRN4-0007UU-GM; Fri, 28 Feb 2014 17:37:15 +0000
Received: by laptop (Postfix, from userid 1000)
	id 8F01C100A70A3; Fri, 28 Feb 2014 18:37:12 +0100 (CET)
Date: Fri, 28 Feb 2014 18:37:12 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Linus Torvalds <torvalds@linux-foundation.org>
Message-ID: <20140228173712.GZ6835@laptop.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
	<20140228092945.GG27965@twins.programming.kicks-ass.net>
	<CA+55aFyz-7NvwUN5q8xqSvHsufC4Z1NSCvnUYrKRAhBeVjUuDA@mail.gmail.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <CA+55aFyz-7NvwUN5q8xqSvHsufC4Z1NSCvnUYrKRAhBeVjUuDA@mail.gmail.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Aswin Chandramouleeswaran <aswin@hp.com>,
	Andi Kleen <andi@firstfloor.org>, Peter Anvin <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	the arch/x86 maintainers <x86@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Daniel J Blueman <daniel@numascale.com>, xen-devel@lists.xenproject.org,
	Paul McKenney <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Scott J Norton <scott.norton@hp.com>,
	Rusty Russell <rusty@rustcorp.com.au>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization <virtualization@lists.linux-foundation.org>,
	Chegu Vinod <chegu_vinod@hp.com>, Waiman Long <waiman.long@hp.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 08:25:24AM -0800, Linus Torvalds wrote:
> On Feb 28, 2014 1:30 AM, "Peter Zijlstra" <peterz@infradead.org> wrote:
> >
> > At low contention the cmpxchg won't have to be retried (much) so using
> > it won't be a problem and you get to have arbitrary atomic ops.
> 
> Peter, the difference between an atomic op and *no* atomic op is huge.

I know, I'm just asking what the difference is between the xchg() - an
atomic op - and a cmpxchg(), also an atomic op.

The xchg() makes the entire thing somewhat difficult: we need to fix up
all kinds of state if we guessed wrong about what was in the variables.

> And Waiman posted numbers for the optimization. Why do you argue with
> handwaving and against numbers?

I've asked for his benchmark.. 

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:38:57 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJROi-0007h3-78; Fri, 28 Feb 2014 17:38:56 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJROg-0007ge-O1
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:38:55 +0000
Received: from [85.158.137.68:22936] by server-9.bemta-3.messagelabs.com id
	24/05-10184-EA9C0135; Fri, 28 Feb 2014 17:38:54 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-31.messagelabs.com!1393609130!150691!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29675 invoked from network); 28 Feb 2014 17:38:53 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:38:53 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; 
	d="scan'208,217";a="106692450"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 17:38:50 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 12:38:49 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJROb-0000nR-Hs;
	Fri, 28 Feb 2014 17:38:49 +0000
Message-ID: <5310C9A9.8080000@citrix.com>
Date: Fri, 28 Feb 2014 17:38:49 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Jan Beulich <JBeulich@suse.com>
References: <5310CB88020000780012046C@nat28.tlf.novell.com>
In-Reply-To: <5310CB88020000780012046C@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: xen-devel <xen-devel@lists.xenproject.org>, Keir Fraser <keir@xen.org>
Subject: Re: [Xen-devel] [PATCH] x86: don't propagate
 acpi_skip_timer_override to Dom0
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============8967453668041991393=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============8967453668041991393==
Content-Type: multipart/alternative;
	boundary="------------000607000201050200080704"

--------------000607000201050200080704
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 28/02/14 16:46, Jan Beulich wrote:
> It's unclear why c/s 4850:923dd9975981 added this - Dom0 isn't
> controlling the timer interrupt, and hence has no need to know.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
> --- a/xen/arch/x86/acpi/boot.c
> +++ b/xen/arch/x86/acpi/boot.c
> @@ -56,7 +56,9 @@ bool_t __initdata acpi_ht = 1;	/* enable
>  bool_t __initdata acpi_lapic;
>  bool_t __initdata acpi_ioapic;
>  
> -bool_t acpi_skip_timer_override __initdata;
> +/* acpi_skip_timer_override: Skip IRQ0 overrides. */
> +static bool_t acpi_skip_timer_override __initdata;
> +boolean_param("acpi_skip_timer_override", acpi_skip_timer_override);
>  
>  #ifdef CONFIG_X86_LOCAL_APIC
>  static u64 acpi_lapic_addr __initdata = APIC_DEFAULT_PHYS_BASE;
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -71,10 +71,6 @@ static void parse_acpi_param(char *s);
>  custom_param("acpi", parse_acpi_param);
>  
>  /* **** Linux config option: propagated to domain0. */
> -/* acpi_skip_timer_override: Skip IRQ0 overrides. */
> -boolean_param("acpi_skip_timer_override", acpi_skip_timer_override);
> -
> -/* **** Linux config option: propagated to domain0. */
>  /* noapic: Disable IOAPIC setup. */
>  boolean_param("noapic", skip_ioapic_setup);
>  
> @@ -1365,9 +1361,6 @@ void __init __start_xen(unsigned long mb
>          /* Append any extra parameters. */
>          if ( skip_ioapic_setup && !strstr(dom0_cmdline, "noapic") )
>              safe_strcat(dom0_cmdline, " noapic");
> -        if ( acpi_skip_timer_override &&
> -             !strstr(dom0_cmdline, "acpi_skip_timer_override") )
> -            safe_strcat(dom0_cmdline, " acpi_skip_timer_override");
>          if ( (strlen(acpi_param) == 0) && acpi_disabled )
>          {
>              printk("ACPI is disabled, notifying Domain 0 (acpi=off)\n");
> --- a/xen/include/asm-x86/acpi.h
> +++ b/xen/include/asm-x86/acpi.h
> @@ -80,7 +80,6 @@ int __acpi_release_global_lock(unsigned 
>  
>  extern bool_t acpi_lapic, acpi_ioapic, acpi_noirq;
>  extern bool_t acpi_force, acpi_ht, acpi_disabled;
> -extern bool_t acpi_skip_timer_override;
>  extern u32 acpi_smi_cmd;
>  extern u8 acpi_enable_value, acpi_disable_value;
>  void acpi_pic_sci_set_trigger(unsigned int, u16);
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel



--------------000607000201050200080704--


--===============8967453668041991393==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8967453668041991393==--


>  
>  extern bool_t acpi_lapic, acpi_ioapic, acpi_noirq;
>  extern bool_t acpi_force, acpi_ht, acpi_disabled;
> -extern bool_t acpi_skip_timer_override;
>  extern u32 acpi_smi_cmd;
>  extern u8 acpi_enable_value, acpi_disable_value;
>  void acpi_pic_sci_set_trigger(unsigned int, u16);
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------000607000201050200080704--


--===============8967453668041991393==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============8967453668041991393==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 17:41:06 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRQl-0007ue-R5; Fri, 28 Feb 2014 17:41:03 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRQk-0007uW-EE
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:41:02 +0000
Received: from [193.109.254.147:16786] by server-1.bemta-14.messagelabs.com id
	BB/DD-29588-D2AC0135; Fri, 28 Feb 2014 17:41:01 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-8.tower-27.messagelabs.com!1393609259!7563802!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5273 invoked from network); 28 Feb 2014 17:41:00 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-8.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:41:00 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; 
	d="scan'208,217";a="105076202"
Received: from accessns.citrite.net (HELO FTLPEX01CL01.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 17:40:59 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.78) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 12:40:58 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJRQg-0000pI-Nw	for
	xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:40:58 +0000
Message-ID: <5310CA2A.9080907@citrix.com>
Date: Fri, 28 Feb 2014 17:40:58 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: <xen-devel@lists.xen.org>
References: <5310CB4F0200007800120468@nat28.tlf.novell.com>
In-Reply-To: <5310CB4F0200007800120468@nat28.tlf.novell.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Subject: Re: [Xen-devel] [PATCH] x86/MCE: mctelem_init() cleanup
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: multipart/mixed; boundary="===============0820811070695253739=="
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

--===============0820811070695253739==
Content-Type: multipart/alternative;
	boundary="------------010509080700020608050200"

--------------010509080700020608050200
Content-Type: text/plain; charset="ISO-8859-1"
Content-Transfer-Encoding: 7bit

On 28/02/14 16:45, Jan Beulich wrote:
> The function can be __init with its caller taking care of only calling
> it on the BSP. And with that all its static variables can be dropped.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper@citrix.com>

>
> --- a/xen/arch/x86/cpu/mcheck/mce.c
> +++ b/xen/arch/x86/cpu/mcheck/mce.c
> @@ -775,13 +775,15 @@ void mcheck_init(struct cpuinfo_x86 *c, 
>  
>      intpose_init();
>  
> -    mctelem_init(sizeof(struct mc_info));
> +    if ( bsp )
> +    {
> +        mctelem_init(sizeof(struct mc_info));
> +        register_cpu_notifier(&cpu_nfb);
> +    }
>  
>      /* Turn on MCE now */
>      set_in_cr4(X86_CR4_MCE);
>  
> -    if ( bsp )
> -        register_cpu_notifier(&cpu_nfb);
>      set_poll_bankmask(c);
>  
>      return;
> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> @@ -248,25 +248,14 @@ static void mctelem_processing_release(s
>  	}
>  }
>  
> -void mctelem_init(int reqdatasz)
> +void __init mctelem_init(unsigned int datasz)
>  {
> -	static int called = 0;
> -	static int datasz = 0, realdatasz = 0;
>  	char *datarr;
> -	int i;
> +	unsigned int i;
>  	
> -	BUG_ON(MC_URGENT != 0 || MC_NONURGENT != 1 || MC_NCLASSES != 2);
> +	BUILD_BUG_ON(MC_URGENT != 0 || MC_NONURGENT != 1 || MC_NCLASSES != 2);
>  
> -	/* Called from mcheck_init for all processors; initialize for the
> -	 * first call only (no race here since the boot cpu completes
> -	 * init before others start up). */
> -	if (++called == 1) {
> -		realdatasz = reqdatasz;
> -		datasz = (reqdatasz & ~0xf) + 0x10;	/* 16 byte roundup */
> -	} else {
> -		BUG_ON(reqdatasz != realdatasz);
> -		return;
> -	}
> +	datasz = (datasz & ~0xf) + 0x10;	/* 16 byte roundup */
>  
>  	if ((mctctl.mctc_elems = xmalloc_array(struct mctelem_ent,
>  	    MC_NENT)) == NULL ||
> --- a/xen/arch/x86/cpu/mcheck/mctelem.h
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.h
> @@ -59,7 +59,7 @@ typedef enum mctelem_class {
>  	MC_NONURGENT
>  } mctelem_class_t;
>  
> -extern void mctelem_init(int);
> +extern void mctelem_init(unsigned int);
>  extern mctelem_cookie_t mctelem_reserve(mctelem_class_t);
>  extern void *mctelem_dataptr(mctelem_cookie_t);
>  extern void mctelem_commit(mctelem_cookie_t);
>
>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> http://lists.xen.org/xen-devel


--------------010509080700020608050200--


--===============0820811070695253739==
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

--===============0820811070695253739==--


From xen-devel-bounces@lists.xen.org Fri Feb 28 17:46:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRW2-00086l-Mm; Fri, 28 Feb 2014 17:46:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJRW0-00086g-SJ
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:46:29 +0000
Received: from [85.158.137.68:57985] by server-15.bemta-3.messagelabs.com id
	BF/64-19263-47BC0135; Fri, 28 Feb 2014 17:46:28 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393609585!3379562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 918 invoked from network); 28 Feb 2014 17:46:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:46:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106694369"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 17:46:25 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 12:46:25 -0500
Message-ID: <5310CB6F.1070706@citrix.com>
Date: Fri, 28 Feb 2014 18:46:23 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
In-Reply-To: <5310C54A.1080906@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 18:20, Boris Ostrovsky wrote:
> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>> Add support for MSI message groups for Xen Dom0 using the
>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>
>>> In order to keep track of which pirq is the first one in the group all
>>> pirqs in the MSI group except for the first one have the newly
>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>> first pirq in the group.
>>
>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>
>>
> 
> 
> I was just looking at xen_setup_msi_irqs() (for a different reason) and
> I am no longer sure this patch does anything:
> 
> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
> {
>         int irq, ret, i;
>         struct msi_desc *msidesc;
>         int *v;
> 
>         if (type == PCI_CAP_ID_MSI && nvec > 1)
>                 return 1;
> ....
> 
> Same thing for xen_hvm_setup_msi_irqs().

As said in the commit message this is only for Dom0, so the function
modified is xen_initdom_setup_msi_irqs (where this check is removed).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:46:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRW2-00086l-Mm; Fri, 28 Feb 2014 17:46:30 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJRW0-00086g-SJ
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:46:29 +0000
Received: from [85.158.137.68:57985] by server-15.bemta-3.messagelabs.com id
	BF/64-19263-47BC0135; Fri, 28 Feb 2014 17:46:28 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393609585!3379562!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 918 invoked from network); 28 Feb 2014 17:46:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 17:46:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106694369"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 17:46:25 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 12:46:25 -0500
Message-ID: <5310CB6F.1070706@citrix.com>
Date: Fri, 28 Feb 2014 18:46:23 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
In-Reply-To: <5310C54A.1080906@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 18:20, Boris Ostrovsky wrote:
> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>> Add support for MSI message groups for Xen Dom0 using the
>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>
>>> In order to keep track of which pirq is the first one in the group all
>>> pirqs in the MSI group except for the first one have the newly
>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>> first pirq in the group.
>>
>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>
>>
> 
> 
> I was just looking at xen_setup_msi_irqs() (for a different reason) and
> I am no longer sure this patch does anything:
> 
> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
> {
>         int irq, ret, i;
>         struct msi_desc *msidesc;
>         int *v;
> 
>         if (type == PCI_CAP_ID_MSI && nvec > 1)
>                 return 1;
> ....
> 
> Same thing for xen_hvm_setup_msi_irqs().

As said in the commit message, this is only for Dom0, so the function
modified is xen_initdom_setup_msi_irqs (where this check is removed).

Roger.


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:57:39 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRgS-0008Ou-1F; Fri, 28 Feb 2014 17:57:16 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <peterz@infradead.org>) id 1WJRgR-0008Op-2g
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 17:57:15 +0000
Received: from [193.109.254.147:14796] by server-16.bemta-14.messagelabs.com
	id 07/F9-21945-AFDC0135; Fri, 28 Feb 2014 17:57:14 +0000
X-Env-Sender: peterz@infradead.org
X-Msg-Ref: server-13.tower-27.messagelabs.com!1393610232!7589839!1
X-Originating-IP: [205.233.59.134]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjA1LjIzMy41OS4xMzQgPT4gMTcxNDMx\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5328 invoked from network); 28 Feb 2014 17:57:13 -0000
Received: from merlin.infradead.org (HELO merlin.infradead.org)
	(205.233.59.134)
	by server-13.tower-27.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 17:57:13 -0000
Received: from dhcp-077-248-225-117.chello.nl ([77.248.225.117] helo=laptop)
	by merlin.infradead.org with esmtpsa (Exim 4.80.1 #2 (Red Hat Linux))
	id 1WJRfy-0008Ed-WE; Fri, 28 Feb 2014 17:56:47 +0000
Received: by laptop (Postfix, from userid 1000)
	id 3580D100A71E4; Fri, 28 Feb 2014 18:56:45 +0100 (CET)
Date: Fri, 28 Feb 2014 18:56:45 +0100
From: Peter Zijlstra <peterz@infradead.org>
To: Waiman Long <waiman.long@hp.com>
Message-ID: <20140228175645.GA6835@laptop.programming.kicks-ass.net>
References: <1393427668-60228-1-git-send-email-Waiman.Long@hp.com>
	<1393427668-60228-4-git-send-email-Waiman.Long@hp.com>
	<20140226162057.GW6835@laptop.programming.kicks-ass.net>
	<530FA32B.8010202@hp.com>
	<20140228092945.GG27965@twins.programming.kicks-ass.net>
	<5310BB81.3090508@hp.com>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <5310BB81.3090508@hp.com>
User-Agent: Mutt/1.5.21 (2012-12-30)
Cc: Jeremy Fitzhardinge <jeremy@goop.org>,
	Raghavendra K T <raghavendra.kt@linux.vnet.ibm.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	virtualization@lists.linux-foundation.org,
	Andi Kleen <andi@firstfloor.org>, "H. Peter Anvin" <hpa@zytor.com>,
	Michel Lespinasse <walken@google.com>,
	Alok Kataria <akataria@vmware.com>, linux-arch@vger.kernel.org,
	x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
	Scott J Norton <scott.norton@hp.com>, xen-devel@lists.xenproject.org,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	Alexander Fyodorov <halcy@yandex.ru>, Arnd Bergmann <arnd@arndb.de>,
	Daniel J Blueman <daniel@numascale.com>,
	Rusty Russell <rusty@rustcorp.com.au>, Oleg Nesterov <oleg@redhat.com>,
	Steven Rostedt <rostedt@goodmis.org>, Chris Wright <chrisw@sous-sol.org>,
	George Spelvin <linux@horizon.com>, Thomas Gleixner <tglx@linutronix.de>,
	Aswin Chandramouleeswaran <aswin@hp.com>, Chegu Vinod <chegu_vinod@hp.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	linux-kernel@vger.kernel.org, David Vrabel <david.vrabel@citrix.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tim Chen <tim.c.chen@linux.intel.com>
Subject: Re: [Xen-devel] [PATCH v5 3/8] qspinlock,
 x86: Add x86 specific optimization for 2 contending tasks
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org



> After modifying it to do a deterministic cmpxchg, the test run time of 2
> contending tasks jumps from 600ms (best case) to about 1700ms, which is
> worse than the original qspinlock's 1300-1500ms. It is the opportunistic
> nature of the xchg() code, which can combine multiple steps of the
> deterministic atomic sequence, that saves time. Without that, I would
> prefer going back to the basic qspinlock queuing sequence for 2
> contending tasks.
> 
> Please take a look at the performance data in my patch 3 to see if the
> slowdown at 2 and 3 contending tasks is acceptable or not.

Right; so I've gone back to a simple version (~200 lines) that's fairly
easy to comprehend (to me, since I wrote it), and I will now try to see
whether I can find the same state transitions in your code.

I find your code somewhat hard to follow; mostly due to those xchg() +
fixup thingies. But I'll get there.

> The reason why I need a whole byte for the lock bit is the simple
> unlock code of assigning 0 to the lock byte by the lock holder. Utilizing
> other bits in the low byte for other purposes would complicate the unlock
> path and slow down the no-contention case.

Yeah, I get why you need a whole byte for the lock part, I was asking if
we really need another whole byte for the pending part.

So in my version I think I see an optimization case where this is indeed
useful and I can trade an atomic op for a write barrier, which should be
a big win.

It just wasn't at all clear (to me) from your code.

(I also think the optimization isn't x86 specific)

> >>The code is unfair, but this unfairness helps it to run faster than ticket
> >>spinlock in this particular case. And the regular qspinlock slowpath is
> >>fair. A little bit of unfairness in this particular case helps its speed.

> >*groan*, no, unfairness not cool. ticket lock is absolutely fair; we
> >should preserve this.
> 
> We can preserve that by removing patch 3.

I've got a version that does the pending thing and still is entirely
fair.

I don't think the concept of the extra spinner is incompatible with
fairness.

> >BTW; can you share your benchmark thingy?
> 
> I have attached the test program that I used to generate the timing data for
> patch 3.

Thanks, I'll have a play.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 17:58:35 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 17:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRhi-0008TR-Gl; Fri, 28 Feb 2014 17:58:34 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJRhg-0008Sr-A7
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 17:58:32 +0000
Received: from [85.158.139.211:22843] by server-7.bemta-5.messagelabs.com id
	48/9D-14867-74EC0135; Fri, 28 Feb 2014 17:58:31 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-11.tower-206.messagelabs.com!1393610309!2434907!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25654 invoked from network); 28 Feb 2014 17:58:30 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-11.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 17:58:30 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SHwQFr025019
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 17:58:27 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SHwPwP025011
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 17:58:26 GMT
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SHwPSB025000; Fri, 28 Feb 2014 17:58:25 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 09:58:25 -0800
Message-ID: <5310CEB6.4010604@oracle.com>
Date: Fri, 28 Feb 2014 13:00:22 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
	<5310CB6F.1070706@citrix.com>
In-Reply-To: <5310CB6F.1070706@citrix.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/2014 12:46 PM, Roger Pau Monné wrote:
> On 28/02/14 18:20, Boris Ostrovsky wrote:
>> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>>> Add support for MSI message groups for Xen Dom0 using the
>>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>>
>>>> In order to keep track of which pirq is the first one in the group all
>>>> pirqs in the MSI group except for the first one have the newly
>>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>>> first pirq in the group.
>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>>
>>
>> I was just looking at xen_setup_msi_irqs() (for a different reason) and
>> I am no longer sure this patch does anything:
>>
>> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>> {
>>          int irq, ret, i;
>>          struct msi_desc *msidesc;
>>          int *v;
>>
>>          if (type == PCI_CAP_ID_MSI && nvec > 1)
>>                  return 1;
>> ....
>>
>> Same thing for xen_hvm_setup_msi_irqs().
> As said in the commit message this is only for Dom0, so the function
> modified is xen_initdom_setup_msi_irqs (were this check is removed).

Then what is the reason for these changes:

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 103e702..905956f 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
  	i = 0;
  	list_for_each_entry(msidesc, &dev->msi_list, list) {
  		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
+					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
  					       (type == PCI_CAP_ID_MSIX) ?
  					       "pcifront-msi-x" :
  					       "pcifront-msi",
@@ -245,6 +246,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
  				"xen: msi already bound to pirq=%d\n", pirq);
  		}
  		irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
+					       (type == PCI_CAP_ID_MSI) ? nvec : 1,
  					       (type == PCI_CAP_ID_MSIX) ?
  					       "msi-x" : "msi",
  					       DOMID_SELF);

Should you simply pass 1?

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:08:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRqs-0000Qw-G6; Fri, 28 Feb 2014 18:08:02 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRqp-0000QY-VR
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:08:00 +0000
Received: from [85.158.143.35:19060] by server-2.bemta-4.messagelabs.com id
	8A/4F-04779-F70D0135; Fri, 28 Feb 2014 18:07:59 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393610876!9124315!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23231 invoked from network); 28 Feb 2014 18:07:58 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:07:58 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106701621"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:07:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:07:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJRql-0001Bx-BV; Fri, 28 Feb 2014 18:07:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:07:51 +0000
Message-ID: <1393610874-27492-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v5 2/5] xen/compiler: Replace opencoded
	__attribute__((noreturn))
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make a formal define for noreturn in compiler.h, and fix up open-coded uses of
__attribute__((noreturn)).  This includes removing redundant uses on function
definitions which already have a public declaration.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Reviewed-by: Jan Beulich <JBeulich@suse.com>
Acked-by: Ian Campbell <ian.campbell@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/early_printk.c        |    2 +-
 xen/arch/x86/efi/boot.c            |    4 ++--
 xen/arch/x86/shutdown.c            |    2 +-
 xen/include/asm-arm/early_printk.h |    4 ++--
 xen/include/asm-x86/processor.h    |    2 +-
 xen/include/xen/compiler.h         |    2 ++
 xen/include/xen/lib.h              |    2 +-
 xen/include/xen/sched.h            |    4 ++--
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/early_printk.c b/xen/arch/arm/early_printk.c
index 41938bb..2870a30 100644
--- a/xen/arch/arm/early_printk.c
+++ b/xen/arch/arm/early_printk.c
@@ -52,7 +52,7 @@ void __init early_printk(const char *fmt, ...)
     va_end(args);
 }
 
-void __attribute__((noreturn)) __init
+void __init
 early_panic(const char *fmt, ...)
 {
     va_list args;
diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index 0dd935c..a26e0af 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -183,7 +183,7 @@ static bool_t __init match_guid(const EFI_GUID *guid1, const EFI_GUID *guid2)
            !memcmp(guid1->Data4, guid2->Data4, sizeof(guid1->Data4));
 }
 
-static void __init __attribute__((__noreturn__)) blexit(const CHAR16 *str)
+static void __init noreturn blexit(const CHAR16 *str)
 {
     if ( str )
         PrintStr((CHAR16 *)str);
@@ -762,7 +762,7 @@ static void __init relocate_trampoline(unsigned long phys)
         *(u16 *)(*trampoline_ptr + (long)trampoline_ptr) = phys >> 4;
 }
 
-void EFIAPI __init __attribute__((__noreturn__))
+void EFIAPI __init noreturn
 efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 {
     static EFI_GUID __initdata loaded_image_guid = LOADED_IMAGE_PROTOCOL;
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6eba271..6143c40 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -85,7 +85,7 @@ static inline void kb_wait(void)
             break;
 }
 
-static void __attribute__((noreturn)) __machine_halt(void *unused)
+static void noreturn __machine_halt(void *unused)
 {
     local_irq_disable();
     for ( ; ; )
diff --git a/xen/include/asm-arm/early_printk.h b/xen/include/asm-arm/early_printk.h
index 707bbf7..8047141 100644
--- a/xen/include/asm-arm/early_printk.h
+++ b/xen/include/asm-arm/early_printk.h
@@ -26,7 +26,7 @@
 
 void early_printk(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
-void early_panic(const char *fmt, ...) __attribute__((noreturn))
+void noreturn early_panic(const char *fmt, ...)
     __attribute__((format (printf, 1, 2)));
 
 #else
@@ -35,7 +35,7 @@ static inline  __attribute__((format (printf, 1, 2))) void
 early_printk(const char *fmt, ...)
 {}
 
-static inline void  __attribute__((noreturn))
+static inline void noreturn
 __attribute__((format (printf, 1, 2))) early_panic(const char *fmt, ...)
 {while(1);}
 
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 4629b32..4da545e 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -550,7 +550,7 @@ DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 
 void trap_nop(void);
 void enable_nmis(void);
-void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs);
+void noreturn do_nmi_crash(struct cpu_user_regs *regs);
 
 void syscall_enter(void);
 void sysenter_entry(void);
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 7d6805c..c80398d 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -14,6 +14,8 @@
 #define always_inline __inline__ __attribute__ ((always_inline))
 #define noinline      __attribute__((noinline))
 
+#define noreturn      __attribute__((noreturn))
+
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
 #else
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 5b258fd..0d1a5d3 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -8,7 +8,7 @@
 #include <xen/string.h>
 #include <asm/bug.h>
 
-void __bug(char *file, int line) __attribute__((noreturn));
+void noreturn __bug(char *file, int line);
 void __warn(char *file, int line);
 
 #define BUG_ON(p)  do { if (unlikely(p)) BUG();  } while (0)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index fb8bd36..00f0eba 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -580,7 +580,7 @@ void __domain_crash(struct domain *d);
  * Mark current domain as crashed and synchronously deschedule from the local
  * processor. This function never returns.
  */
-void __domain_crash_synchronous(void) __attribute__((noreturn));
+void noreturn __domain_crash_synchronous(void);
 #define domain_crash_synchronous() do {                                   \
     printk("domain_crash_sync called from %s:%d\n", __FILE__, __LINE__);  \
     __domain_crash_synchronous();                                         \
@@ -591,7 +591,7 @@ void __domain_crash_synchronous(void) __attribute__((noreturn));
  * the crash occured.  If addr is 0, look up address from last extable
  * redirection.
  */
-void asm_domain_crash_synchronous(unsigned long addr) __attribute__((noreturn));
+void noreturn asm_domain_crash_synchronous(unsigned long addr);
 
 #define set_current_state(_s) do { current->state = (_s); } while (0)
 void scheduler_init(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:08:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRqp-0000QZ-Lh; Fri, 28 Feb 2014 18:07:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRqo-0000QO-Ll
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:07:58 +0000
Received: from [85.158.143.35:18972] by server-3.bemta-4.messagelabs.com id
	91/EB-11539-D70D0135; Fri, 28 Feb 2014 18:07:57 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393610876!9124315!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23106 invoked from network); 28 Feb 2014 18:07:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:07:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106701619"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:07:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:07:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJRql-0001Bx-9N; Fri, 28 Feb 2014 18:07:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:07:49 +0000
Message-ID: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel]  [PATCH v5 0/5] Improvements with noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Make better use of noreturn.  It allows optimising compilers to produce more
efficient code.

Each patch is compile-tested on each architecture; the result is functionally
tested on x86, and compile-tested with GCC 4.1.1 to verify that older
compilers are happy.

Changes in v5:
 * Reorder patches 1 and 2 and tweak so they compile in isolation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:08:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRqs-0000RF-VV; Fri, 28 Feb 2014 18:08:02 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRqq-0000Qm-RP
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:08:01 +0000
Received: from [85.158.137.68:56522] by server-8.bemta-3.messagelabs.com id
	2B/C5-16039-080D0135; Fri, 28 Feb 2014 18:08:00 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-5.tower-31.messagelabs.com!1393610877!3645719!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 26219 invoked from network); 28 Feb 2014 18:07:59 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-5.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:07:59 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106701623"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:07:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:07:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJRql-0001Bx-DA; Fri, 28 Feb 2014 18:07:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:07:52 +0000
Message-ID: <1393610874-27492-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefano.stabellini@citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: [Xen-devel] [PATCH v5 3/5] xen: Identify panic and reboot/halt
	functions as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On an x86 build (GCC Debian 4.7.2-5), this substantially reduces the size of
.text and .init.text sections.

Experimentally, even in a non-debug build, GCC uses `call` rather than `jmp`,
so there should be no impact on any stack trace generation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Reviewed-by: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/shutdown.c         |    2 +-
 xen/arch/x86/cpu/mcheck/mce.h   |    2 +-
 xen/arch/x86/shutdown.c         |    2 +-
 xen/common/shutdown.c           |    2 +-
 xen/include/asm-arm/smp.h       |    2 +-
 xen/include/asm-x86/processor.h |    2 +-
 xen/include/xen/lib.h           |    2 +-
 xen/include/xen/shutdown.h      |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 767cc12..adc0529 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -11,7 +11,7 @@ static void raw_machine_reset(void)
     platform_reset();
 }
 
-static void halt_this_cpu(void *arg)
+static void noreturn halt_this_cpu(void *arg)
 {
     stop_cpu();
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index cbd123d..33bd1ab 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
 struct mc_info *x86_mcinfo_getptr(void);
-void mc_panic(char *s);
+void noreturn mc_panic(char *s);
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
 			 uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6143c40..827515d 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -452,7 +452,7 @@ static int __init reboot_init(void)
 }
 __initcall(reboot_init);
 
-static void __machine_restart(void *pdelay)
+static void noreturn __machine_restart(void *pdelay)
 {
     machine_restart(*(unsigned int *)pdelay);
 }
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 9bccd34..fadb69b 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -17,7 +17,7 @@
 bool_t __read_mostly opt_noreboot;
 boolean_param("noreboot", opt_noreboot);
 
-static void maybe_reboot(void)
+static void noreturn maybe_reboot(void)
 {
     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index a1de03c..91b1e52 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
 #define raw_smp_processor_id() (get_processor_id())
 
-extern void stop_cpu(void);
+extern void noreturn stop_cpu(void);
 
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 4da545e..498d8a7 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -515,7 +515,7 @@ void show_registers(struct cpu_user_regs *regs);
 void show_execution_state(struct cpu_user_regs *regs);
 #define dump_execution_state() run_in_exception_handler(show_execution_state)
 void show_page_walk(unsigned long addr);
-void fatal_trap(int trapnr, struct cpu_user_regs *regs);
+void noreturn fatal_trap(int trapnr, struct cpu_user_regs *regs);
 
 void compat_show_guest_stack(struct vcpu *, struct cpu_user_regs *, int lines);
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 0d1a5d3..1369b2b 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -87,7 +87,7 @@ extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
-extern void panic(const char *format, ...)
+extern void noreturn panic(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2bee748..f04905b 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -1,13 +1,15 @@
 #ifndef __XEN_SHUTDOWN_H__
 #define __XEN_SHUTDOWN_H__
 
+#include <xen/compiler.h>
+
 /* opt_noreboot: If true, machine will need manual reset on error. */
 extern bool_t opt_noreboot;
 
-void dom0_shutdown(u8 reason);
+void noreturn dom0_shutdown(u8 reason);
 
-void machine_restart(unsigned int delay_millisecs);
-void machine_halt(void);
+void noreturn machine_restart(unsigned int delay_millisecs);
+void noreturn machine_halt(void);
 void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On an x86 build (GCC Debian 4.7.2-5), this substantially reduces the size of
.text and .init.text sections.

Experimentally, even in a non-debug build, GCC uses `call` rather than `jmp`,
so there should be no impact on any stack trace generation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Reviewed-by: Jan Beulich <JBeulich@suse.com>
CC: Ian Campbell <ian.campbell@citrix.com>
CC: Stefano Stabellini <stefano.stabellini@citrix.com>
Acked-by: Tim Deegan <tim@xen.org>
---
 xen/arch/arm/shutdown.c         |    2 +-
 xen/arch/x86/cpu/mcheck/mce.h   |    2 +-
 xen/arch/x86/shutdown.c         |    2 +-
 xen/common/shutdown.c           |    2 +-
 xen/include/asm-arm/smp.h       |    2 +-
 xen/include/asm-x86/processor.h |    2 +-
 xen/include/xen/lib.h           |    2 +-
 xen/include/xen/shutdown.h      |    8 +++++---
 8 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index 767cc12..adc0529 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -11,7 +11,7 @@ static void raw_machine_reset(void)
     platform_reset();
 }
 
-static void halt_this_cpu(void *arg)
+static void noreturn halt_this_cpu(void *arg)
 {
     stop_cpu();
 }
diff --git a/xen/arch/x86/cpu/mcheck/mce.h b/xen/arch/x86/cpu/mcheck/mce.h
index cbd123d..33bd1ab 100644
--- a/xen/arch/x86/cpu/mcheck/mce.h
+++ b/xen/arch/x86/cpu/mcheck/mce.h
@@ -57,7 +57,7 @@ int mce_available(struct cpuinfo_x86 *c);
 unsigned int mce_firstbank(struct cpuinfo_x86 *c);
 /* Helper functions used for collecting error telemetry */
 struct mc_info *x86_mcinfo_getptr(void);
-void mc_panic(char *s);
+void noreturn mc_panic(char *s);
 void x86_mc_get_cpu_info(unsigned, uint32_t *, uint16_t *, uint16_t *,
 			 uint32_t *, uint32_t *, uint32_t *, uint32_t *);
 
diff --git a/xen/arch/x86/shutdown.c b/xen/arch/x86/shutdown.c
index 6143c40..827515d 100644
--- a/xen/arch/x86/shutdown.c
+++ b/xen/arch/x86/shutdown.c
@@ -452,7 +452,7 @@ static int __init reboot_init(void)
 }
 __initcall(reboot_init);
 
-static void __machine_restart(void *pdelay)
+static void noreturn __machine_restart(void *pdelay)
 {
     machine_restart(*(unsigned int *)pdelay);
 }
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 9bccd34..fadb69b 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -17,7 +17,7 @@
 bool_t __read_mostly opt_noreboot;
 boolean_param("noreboot", opt_noreboot);
 
-static void maybe_reboot(void)
+static void noreturn maybe_reboot(void)
 {
     if ( opt_noreboot )
     {
diff --git a/xen/include/asm-arm/smp.h b/xen/include/asm-arm/smp.h
index a1de03c..91b1e52 100644
--- a/xen/include/asm-arm/smp.h
+++ b/xen/include/asm-arm/smp.h
@@ -15,7 +15,7 @@ DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
 
 #define raw_smp_processor_id() (get_processor_id())
 
-extern void stop_cpu(void);
+extern void noreturn stop_cpu(void);
 
 extern int arch_smp_init(void);
 extern int arch_cpu_init(int cpu, struct dt_device_node *dn);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 4da545e..498d8a7 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -515,7 +515,7 @@ void show_registers(struct cpu_user_regs *regs);
 void show_execution_state(struct cpu_user_regs *regs);
 #define dump_execution_state() run_in_exception_handler(show_execution_state)
 void show_page_walk(unsigned long addr);
-void fatal_trap(int trapnr, struct cpu_user_regs *regs);
+void noreturn fatal_trap(int trapnr, struct cpu_user_regs *regs);
 
 void compat_show_guest_stack(struct vcpu *, struct cpu_user_regs *, int lines);
 
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 0d1a5d3..1369b2b 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -87,7 +87,7 @@ extern void printk(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern void guest_printk(const struct domain *d, const char *format, ...)
     __attribute__ ((format (printf, 2, 3)));
-extern void panic(const char *format, ...)
+extern void noreturn panic(const char *format, ...)
     __attribute__ ((format (printf, 1, 2)));
 extern long vm_assist(struct domain *, unsigned int, unsigned int);
 extern int __printk_ratelimit(int ratelimit_ms, int ratelimit_burst);
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index 2bee748..f04905b 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -1,13 +1,15 @@
 #ifndef __XEN_SHUTDOWN_H__
 #define __XEN_SHUTDOWN_H__
 
+#include <xen/compiler.h>
+
 /* opt_noreboot: If true, machine will need manual reset on error. */
 extern bool_t opt_noreboot;
 
-void dom0_shutdown(u8 reason);
+void noreturn dom0_shutdown(u8 reason);
 
-void machine_restart(unsigned int delay_millisecs);
-void machine_halt(void);
+void noreturn machine_restart(unsigned int delay_millisecs);
+void noreturn machine_halt(void);
 void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:08:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRqq-0000Qg-0t; Fri, 28 Feb 2014 18:08:00 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRqp-0000QP-4o
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:07:59 +0000
Received: from [85.158.143.35:19016] by server-1.bemta-4.messagelabs.com id
	3D/E0-31661-E70D0135; Fri, 28 Feb 2014 18:07:58 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-15.tower-21.messagelabs.com!1393610876!9124315!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 23164 invoked from network); 28 Feb 2014 18:07:57 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:07:57 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106701620"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:07:55 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:07:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJRql-0001Bx-B7; Fri, 28 Feb 2014 18:07:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:07:50 +0000
Message-ID: <1393610874-27492-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v5 1/5] x86/crash: Fix up declaration of
	do_nmi_crash()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

... so it can correctly be annotated as noreturn.  Move the declaration of
nmi_crash() into crash.c, making it effectively private.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Reviewed-by: Jan Beulich <JBeulich@suse.com>

---

Changes in v5:
 * Move to start of series to fix bisection issue.
---
 xen/arch/x86/crash.c            |    3 ++-
 xen/include/asm-x86/processor.h |    2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 01fd906..ec586bd 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -36,7 +36,7 @@ static unsigned int crashing_cpu;
 static DEFINE_PER_CPU_READ_MOSTLY(bool_t, crash_save_done);
 
 /* This becomes the NMI handler for non-crashing CPUs, when Xen is crashing. */
-void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
+void do_nmi_crash(struct cpu_user_regs *regs)
 {
     int cpu = smp_processor_id();
 
@@ -113,6 +113,7 @@ void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs)
         halt();
 }
 
+void nmi_crash(void);
 static void nmi_shootdown_cpus(void)
 {
     unsigned long msecs;
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index f1cccff..4629b32 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -530,7 +530,6 @@ void do_ ## _name(struct cpu_user_regs *regs)
 DECLARE_TRAP_HANDLER(divide_error);
 DECLARE_TRAP_HANDLER(debug);
 DECLARE_TRAP_HANDLER(nmi);
-DECLARE_TRAP_HANDLER(nmi_crash);
 DECLARE_TRAP_HANDLER(int3);
 DECLARE_TRAP_HANDLER(overflow);
 DECLARE_TRAP_HANDLER(bounds);
@@ -551,6 +550,7 @@ DECLARE_TRAP_HANDLER(spurious_interrupt_bug);
 
 void trap_nop(void);
 void enable_nmis(void);
+void __attribute__((noreturn)) do_nmi_crash(struct cpu_user_regs *regs);
 
 void syscall_enter(void);
 void sysenter_entry(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:08:10 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRr0-0000TL-LG; Fri, 28 Feb 2014 18:08:10 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRqz-0000SN-5y
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:08:09 +0000
Received: from [85.158.143.35:57045] by server-2.bemta-4.messagelabs.com id
	E4/7F-04779-880D0135; Fri, 28 Feb 2014 18:08:08 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393610886!9125925!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13844 invoked from network); 28 Feb 2014 18:08:07 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:08:07 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="105085433"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 18:07:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:07:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJRql-0001Bx-F7; Fri, 28 Feb 2014 18:07:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:07:54 +0000
Message-ID: <1393610874-27492-6-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v5 5/5] xen/x86: Identify reset_stack_and_jump()
	as noreturn
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

reset_stack_and_jump() is actually a macro, but can effectively become noreturn
by ending its expansion with a call to unreachable().

Propagate the 'noreturn-ness' up through the direct and indirect callers.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Reviewed-by: Jan Beulich <JBeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/domain.c             |    6 ++----
 xen/arch/x86/hvm/svm/svm.c        |    2 +-
 xen/arch/x86/setup.c              |    2 +-
 xen/include/asm-x86/current.h     |   11 +++++++----
 xen/include/asm-x86/domain.h      |    2 +-
 xen/include/asm-x86/hvm/vmx/vmx.h |    2 +-
 6 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6618ae6..c42a079 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -133,12 +133,12 @@ void startup_cpu_idle_loop(void)
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_idle_domain(struct vcpu *v)
+static void noreturn continue_idle_domain(struct vcpu *v)
 {
     reset_stack_and_jump(idle_loop);
 }
 
-static void continue_nonidle_domain(struct vcpu *v)
+static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
     check_wakeup_from_wait();
     mark_regs_dirty(guest_cpu_user_regs());
@@ -1521,13 +1521,11 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     update_vcpu_system_time(next);
 
     schedule_tail(next);
-    BUG();
 }
 
 void continue_running(struct vcpu *same)
 {
     schedule_tail(same);
-    BUG();
 }
 
 int __sync_local_execstate(void)
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 61b1ec8..4fd5376 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -911,7 +911,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
         wrmsrl(MSR_TSC_AUX, hvm_msr_tsc_aux(v));
 }
 
-static void svm_do_resume(struct vcpu *v) 
+static void noreturn svm_do_resume(struct vcpu *v)
 {
     struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
     bool_t debug_state = v->domain->debugger_attached;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index b49256d..addd071 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -541,7 +541,7 @@ static char * __init cmdline_cook(char *p, char *loader_name)
     return p;
 }
 
-void __init __start_xen(unsigned long mbi_p)
+void __init noreturn __start_xen(unsigned long mbi_p)
 {
     char *memmap_type = NULL;
     char *cmdline, *kextra, *loader;
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index c2792ce..4d1f20e 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -59,10 +59,13 @@ static inline struct cpu_info *get_cpu_info(void)
     ((sp & (~(STACK_SIZE-1))) +                 \
      (STACK_SIZE - sizeof(struct cpu_info) - sizeof(unsigned long)))
 
-#define reset_stack_and_jump(__fn)              \
-    __asm__ __volatile__ (                      \
-        "mov %0,%%"__OP"sp; jmp %c1"            \
-        : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" )
+#define reset_stack_and_jump(__fn)                                      \
+    ({                                                                  \
+        __asm__ __volatile__ (                                          \
+            "mov %0,%%"__OP"sp; jmp %c1"                                \
+            : : "r" (guest_cpu_user_regs()), "i" (__fn) : "memory" );   \
+        unreachable();                                                  \
+    })
 
 #define schedule_tail(vcpu) (((vcpu)->arch.schedule_tail)(vcpu))
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4ff89f0..49f7c0c 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -395,7 +395,7 @@ struct arch_vcpu
 
     unsigned long      flags; /* TF_ */
 
-    void (*schedule_tail) (struct vcpu *);
+    void noreturn (*schedule_tail) (struct vcpu *);
 
     void (*ctxt_switch_from) (struct vcpu *);
     void (*ctxt_switch_to) (struct vcpu *);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 6f6b672..827c97e 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -91,7 +91,7 @@ typedef enum {
 void vmx_asm_vmexit_handler(struct cpu_user_regs);
 void vmx_asm_do_vmentry(void);
 void vmx_intr_assist(void);
-void vmx_do_resume(struct vcpu *);
+void noreturn vmx_do_resume(struct vcpu *);
 void vmx_vlapic_msr_changed(struct vcpu *v);
 void vmx_realmode(struct cpu_user_regs *regs);
 void vmx_update_debug_state(struct vcpu *v);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:08:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:08:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRr1-0000U9-B5; Fri, 28 Feb 2014 18:08:11 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJRr0-0000SX-1K
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:08:10 +0000
Received: from [85.158.143.35:19725] by server-2.bemta-4.messagelabs.com id
	28/7F-04779-980D0135; Fri, 28 Feb 2014 18:08:09 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-14.tower-21.messagelabs.com!1393610886!9125925!2
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 13880 invoked from network); 28 Feb 2014 18:08:08 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-14.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:08:08 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="105085427"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 18:07:56 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:07:55 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJRql-0001Bx-Em; Fri, 28 Feb 2014 18:07:55 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:07:53 +0000
Message-ID: <1393610874-27492-5-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393610874-27492-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA1
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v5 4/5] xen: Misc cleanup as a result of the
	previous patches
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This includes:
 * A stale comment in sh_skip_sync()
 * A dead infinite loop in __bug()
 * A prototype for machine_power_off(), which is not implemented by any architecture
 * Replacing a for(;;); loop with unreachable()

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Keir Fraser <keir@xen.org>
Reviewed-by: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/efi/boot.c         |    2 +-
 xen/arch/x86/mm/shadow/common.c |    1 -
 xen/drivers/char/console.c      |    1 -
 xen/include/xen/shutdown.h      |    1 -
 4 files changed, 1 insertion(+), 4 deletions(-)

diff --git a/xen/arch/x86/efi/boot.c b/xen/arch/x86/efi/boot.c
index a26e0af..62c4812 100644
--- a/xen/arch/x86/efi/boot.c
+++ b/xen/arch/x86/efi/boot.c
@@ -201,7 +201,7 @@ static void __init noreturn blexit(const CHAR16 *str)
         efi_bs->FreePages(xsm.addr, PFN_UP(xsm.size));
 
     efi_bs->Exit(efi_ih, EFI_SUCCESS, 0, NULL);
-    for( ; ; ); /* not reached */
+    unreachable(); /* not reached */
 }
 
 /* generic routine for printing error messages */
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 11c6b62..b400ccb 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -875,7 +875,6 @@ static int sh_skip_sync(struct vcpu *v, mfn_t gl1mfn)
     SHADOW_ERROR("gmfn %#lx was OOS but not shadowed as an l1.\n",
                  mfn_x(gl1mfn));
     BUG();
-    return 0; /* BUG() is no longer __attribute__((noreturn)). */
 }
 
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..7d4383c 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1089,7 +1089,6 @@ void __bug(char *file, int line)
     printk("Xen BUG at %s:%d\n", file, line);
     dump_execution_state();
     panic("Xen BUG at %s:%d", file, line);
-    for ( ; ; ) ;
 }
 
 void __warn(char *file, int line)
diff --git a/xen/include/xen/shutdown.h b/xen/include/xen/shutdown.h
index f04905b..a00bfef 100644
--- a/xen/include/xen/shutdown.h
+++ b/xen/include/xen/shutdown.h
@@ -10,6 +10,5 @@ void noreturn dom0_shutdown(u8 reason);
 
 void noreturn machine_restart(unsigned int delay_millisecs);
 void noreturn machine_halt(void);
-void machine_power_off(void);
 
 #endif /* __XEN_SHUTDOWN_H__ */
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:10:40 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJRtO-0001Cb-Ud; Fri, 28 Feb 2014 18:10:38 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <roger.pau@citrix.com>) id 1WJRtN-0001CO-Tp
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:10:38 +0000
Received: from [193.109.254.147:23635] by server-14.bemta-14.messagelabs.com
	id 74/F7-29228-D11D0135; Fri, 28 Feb 2014 18:10:37 +0000
X-Env-Sender: roger.pau@citrix.com
X-Msg-Ref: server-14.tower-27.messagelabs.com!1393611035!7571838!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 14109 invoked from network); 28 Feb 2014 18:10:36 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:10:36 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106702611"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:10:34 +0000
Received: from [IPv6:::1] (10.80.16.47) by smtprelay.citrix.com (10.13.107.80)
	with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:10:34 -0500
Message-ID: <5310D118.2090302@citrix.com>
Date: Fri, 28 Feb 2014 19:10:32 +0100
From: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7;
	rv:24.0) Gecko/20100101 Thunderbird/24.3.0
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
	<5310CB6F.1070706@citrix.com> <5310CEB6.4010604@oracle.com>
In-Reply-To: <5310CEB6.4010604@oracle.com>
X-Enigmail-Version: 1.6
X-DLP: MIA2
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 19:00, Boris Ostrovsky wrote:
> On 02/28/2014 12:46 PM, Roger Pau Monné wrote:
>> On 28/02/14 18:20, Boris Ostrovsky wrote:
>>> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>>>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>>>> Add support for MSI message groups for Xen Dom0 using the
>>>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>>>
>>>>> In order to keep track of which pirq is the first one in the group all
>>>>> pirqs in the MSI group except for the first one have the newly
>>>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>>>> first pirq in the group.
>>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>
>>>>
>>>
>>> I was just looking at xen_setup_msi_irqs() (for a different reason) and
>>> I am no longer sure this patch does anything:
>>>
>>> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>>> {
>>>          int irq, ret, i;
>>>          struct msi_desc *msidesc;
>>>          int *v;
>>>
>>>          if (type == PCI_CAP_ID_MSI && nvec > 1)
>>>                  return 1;
>>> ....
>>>
>>> Same thing for xen_hvm_setup_msi_irqs().
>> As said in the commit message this is only for Dom0, so the function
>> modified is xen_initdom_setup_msi_irqs (were this check is removed).
> 
> Then what is the reason for these changes:
> 
> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
> index 103e702..905956f 100644
> --- a/arch/x86/pci/xen.c
> +++ b/arch/x86/pci/xen.c
> @@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev,
> int nvec, int type)
>      i = 0;
>      list_for_each_entry(msidesc, &dev->msi_list, list) {
>          irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>                             (type == PCI_CAP_ID_MSIX) ?
>                             "pcifront-msi-x" :
>                             "pcifront-msi",
> @@ -245,6 +246,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev
> *dev, int nvec, int type)
>                  "xen: msi already bound to pirq=%d\n", pirq);
>          }
>          irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>                             (type == PCI_CAP_ID_MSIX) ?
>                             "msi-x" : "msi",
>                             DOMID_SELF);
> 
> Should you simply pass 1?

Yes, but then if we implement MSI message groups for those cases we will
need to modify this line again, this way it's already correctly setup.
If you think it's best to hardcode it to 1, I can change it (I was also
in doubt about which way was better when modifying those lines).

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:34:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSGO-0001qL-08; Fri, 28 Feb 2014 18:34:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJSGM-0001qG-Re
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:34:23 +0000
Received: from [85.158.137.68:61188] by server-4.bemta-3.messagelabs.com id
	65/F5-04858-EA6D0135; Fri, 28 Feb 2014 18:34:22 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393612459!4923081!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29235 invoked from network); 28 Feb 2014 18:34:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 18:34:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SIYFo8017935
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 18:34:16 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SIYENx020944
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 18:34:15 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1SIYDcH020089; Fri, 28 Feb 2014 18:34:14 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 10:34:13 -0800
Message-ID: <5310D71A.6050507@oracle.com>
Date: Fri, 28 Feb 2014 13:36:10 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: =?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
	<5310CB6F.1070706@citrix.com> <5310CEB6.4010604@oracle.com>
	<5310D118.2090302@citrix.com>
In-Reply-To: <5310D118.2090302@citrix.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/2014 01:10 PM, Roger Pau Monné wrote:
> On 28/02/14 19:00, Boris Ostrovsky wrote:
>> On 02/28/2014 12:46 PM, Roger Pau Monné wrote:
>>> On 28/02/14 18:20, Boris Ostrovsky wrote:
>>>> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>>>>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>>>>> Add support for MSI message groups for Xen Dom0 using the
>>>>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>>>>
>>>>>> In order to keep track of which pirq is the first one in the group all
>>>>>> pirqs in the MSI group except for the first one have the newly
>>>>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>>>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>>>>> first pirq in the group.
>>>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>>
>>>>>
>>>> I was just looking at xen_setup_msi_irqs() (for a different reason) and
>>>> I am no longer sure this patch does anything:
>>>>
>>>> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>>>> {
>>>>           int irq, ret, i;
>>>>           struct msi_desc *msidesc;
>>>>           int *v;
>>>>
>>>>           if (type == PCI_CAP_ID_MSI && nvec > 1)
>>>>                   return 1;
>>>> ....
>>>>
>>>> Same thing for xen_hvm_setup_msi_irqs().
>>> As said in the commit message this is only for Dom0, so the function
>>> modified is xen_initdom_setup_msi_irqs (were this check is removed).
>> Then what is the reason for these changes:
>>
>> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
>> index 103e702..905956f 100644
>> --- a/arch/x86/pci/xen.c
>> +++ b/arch/x86/pci/xen.c
>> @@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev,
>> int nvec, int type)
>>       i = 0;
>>       list_for_each_entry(msidesc, &dev->msi_list, list) {
>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
>> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>                             (type == PCI_CAP_ID_MSIX) ?
>>                             "pcifront-msi-x" :
>>                             "pcifront-msi",
>> @@ -245,6 +246,7 @@ static
IGludCB4ZW5faHZtX3NldHVwX21zaV9pcnFzKHN0cnVjdCBwY2lfZGV2Cj4+ICpkZXYsIGludCBu
dmVjLCBpbnQgdHlwZSkKPj4gICAgICAgICAgICAgICAgICAgInhlbjogbXNpIGFscmVhZHkgYm91
bmQgdG8gcGlycT0lZFxuIiwgcGlycSk7Cj4+ICAgICAgICAgICB9Cj4+ICAgICAgICAgICBpcnEg
PSB4ZW5fYmluZF9waXJxX21zaV90b19pcnEoZGV2LCBtc2lkZXNjLCBwaXJxLAo+PiArICAgICAg
ICAgICAgICAgICAgICAgICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0kpID8gbnZlYyA6IDEs
Cj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKHR5cGUgPT0gUENJX0NBUF9JRF9NU0lY
KSA/Cj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIm1zaS14IiA6ICJtc2kiLAo+PiAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIERPTUlEX1NFTEYpOwo+Pgo+PiBTaG91bGQgeW91
IHNpbXBseSBwYXNzIDE/Cj4gWWVzLCBidXQgdGhlbiBpZiB3ZSBpbXBsZW1lbnQgTVNJIG1lc3Nh
Z2UgZ3JvdXBzIGZvciB0aG9zZSBjYXNlcyB3ZSB3aWxsCj4gbmVlZCB0byBtb2RpZnkgdGhpcyBs
aW5lIGFnYWluLCB0aGlzIHdheSBpdCdzIGFscmVhZHkgY29ycmVjdGx5IHNldHVwLgo+IElmIHlv
dSB0aGluayBpdCdzIGJlc3QgdG8gaGFyZGNvZGUgaXQgdG8gMSwgSSBjYW4gY2hhbmdlIGl0IChJ
IHdhcyBhbHNvCj4gaW4gZG91YnQgYWJvdXQgd2hpY2ggd2F5IHdhcyBiZXR0ZXIgd2hlbiBtb2Rp
ZnlpbmcgdGhvc2UgbGluZXMpLgoKCkkgdGhpbmsgcGFzc2luZyAxIGV4cGxpY2l0bHkgdGhpcyB3
b3VsZCBiZSBiZXR0ZXIuIElmIHdlIGV4dGVuZCBzdXBwb3J0IApmb3Igbm9uLWRvbTAgd2Ugd291
bGQgaGF2ZSB0byBtb2RpZnkgdGhlc2Ugcm91dGluZXMgYW55d2F5IHNvIG1ha2luZyAKY2hhbmdl
cyBpbiBib3RoIHBsYWNlcyBzaW11bHRhbmVvdXNseSB3b3VsZCBtYWtlIHRoZSBjb21taXQgbW9y
ZSBjbGVhciAKKElNTykuCgotYm9yaXMKCgpfX19fX19fX19fX19fX19fX19fX19fX19fX19fX19f
X19fX19fX19fX19fX19fXwpYZW4tZGV2ZWwgbWFpbGluZyBsaXN0Clhlbi1kZXZlbEBsaXN0cy54
ZW4ub3JnCmh0dHA6Ly9saXN0cy54ZW4ub3JnL3hlbi1kZXZlbAo=

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:34:38 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSGO-0001qL-08; Fri, 28 Feb 2014 18:34:24 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJSGM-0001qG-Re
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:34:23 +0000
Received: from [85.158.137.68:61188] by server-4.bemta-3.messagelabs.com id
	65/F5-04858-EA6D0135; Fri, 28 Feb 2014 18:34:22 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-14.tower-31.messagelabs.com!1393612459!4923081!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29235 invoked from network); 28 Feb 2014 18:34:21 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-14.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 18:34:21 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SIYFo8017935
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 18:34:16 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SIYENx020944
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 18:34:15 GMT
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1SIYDcH020089; Fri, 28 Feb 2014 18:34:14 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 10:34:13 -0800
Message-ID: <5310D71A.6050507@oracle.com>
Date: Fri, 28 Feb 2014 13:36:10 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Roger Pau Monné <roger.pau@citrix.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
	<5310CB6F.1070706@citrix.com> <5310CEB6.4010604@oracle.com>
	<5310D118.2090302@citrix.com>
In-Reply-To: <5310D118.2090302@citrix.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: David Vrabel <david.vrabel@citrix.com>, linux-kernel@vger.kernel.org,
	xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset="utf-8"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/2014 01:10 PM, Roger Pau Monné wrote:
> On 28/02/14 19:00, Boris Ostrovsky wrote:
>> On 02/28/2014 12:46 PM, Roger Pau Monné wrote:
>>> On 28/02/14 18:20, Boris Ostrovsky wrote:
>>>> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>>>>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>>>>> Add support for MSI message groups for Xen Dom0 using the
>>>>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>>>>
>>>>>> In order to keep track of which pirq is the first one in the group all
>>>>>> pirqs in the MSI group except for the first one have the newly
>>>>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>>>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>>>>> first pirq in the group.
>>>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>>
>>>>>
>>>> I was just looking at xen_setup_msi_irqs() (for a different reason) and
>>>> I am no longer sure this patch does anything:
>>>>
>>>> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>>>> {
>>>>           int irq, ret, i;
>>>>           struct msi_desc *msidesc;
>>>>           int *v;
>>>>
>>>>           if (type == PCI_CAP_ID_MSI && nvec > 1)
>>>>                   return 1;
>>>> ....
>>>>
>>>> Same thing for xen_hvm_setup_msi_irqs().
>>> As said in the commit message this is only for Dom0, so the function
>>> modified is xen_initdom_setup_msi_irqs (were this check is removed).
>> Then what is the reason for these changes:
>>
>> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
>> index 103e702..905956f 100644
>> --- a/arch/x86/pci/xen.c
>> +++ b/arch/x86/pci/xen.c
>> @@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev,
>> int nvec, int type)
>>       i = 0;
>>       list_for_each_entry(msidesc, &dev->msi_list, list) {
>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
>> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>                              (type == PCI_CAP_ID_MSIX) ?
>>                              "pcifront-msi-x" :
>>                              "pcifront-msi",
>> @@ -245,6 +246,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev
>> *dev, int nvec, int type)
>>                   "xen: msi already bound to pirq=%d\n", pirq);
>>           }
>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
>> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>                              (type == PCI_CAP_ID_MSIX) ?
>>                              "msi-x" : "msi",
>>                              DOMID_SELF);
>>
>> Should you simply pass 1?
> Yes, but then if we implement MSI message groups for those cases we will
> need to modify this line again, this way it's already correctly setup.
> If you think it's best to hardcode it to 1, I can change it (I was also
> in doubt about which way was better when modifying those lines).


I think passing 1 explicitly would be better. If we extend support
for non-dom0 we would have to modify these routines anyway so making
changes in both places simultaneously would make the commit more clear
(IMO).

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:42:43 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:42:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSOI-00027V-0Z; Fri, 28 Feb 2014 18:42:34 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJSOG-00027Q-NO
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 18:42:32 +0000
Received: from [193.109.254.147:37691] by server-4.bemta-14.messagelabs.com id
	81/81-32066-798D0135; Fri, 28 Feb 2014 18:42:31 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-7.tower-27.messagelabs.com!1393612949!2284829!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25586 invoked from network); 28 Feb 2014 18:42:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:42:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106712393"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:42:17 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.80) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 13:42:16 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJSO0-00030u-Ci;
	Fri, 28 Feb 2014 18:42:16 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJSO0-0003yp-9e;
	Fri, 28 Feb 2014 18:42:16 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25329-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 18:42:16 +0000
MIME-Version: 1.0
X-DLP: MIA2
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [linux-linus test] 25329: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25329 linux-linus real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25329/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair   17 guest-migrate/src_host/dst_host fail REGR. vs. 12557

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-winxpsp3  7 windows-install          fail like 12557

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 build-armhf-pvops             4 kernel-build                 fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass

version targeted for testing:
 linux                86c7654f4a0afcbbd2fedefec01082f292b14cb4
baseline version:
 linux                c16fa4f2ad19908a47c63d8fa436a1178438c7e7

------------------------------------------------------------
7066 people touched revisions under test,
not listing them all
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2392703 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScj-0002Qf-L6; Fri, 28 Feb 2014 18:57:29 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJSci-0002QV-IS
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:28 +0000
Received: from [85.158.143.35:37577] by server-2.bemta-4.messagelabs.com id
	7A/63-04779-71CD0135; Fri, 28 Feb 2014 18:57:27 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16135 invoked from network); 28 Feb 2014 18:57:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716113"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-Ii; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:02 +0000
Message-ID: <1393613824-13230-4-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel] [PATCH v2 3/5] x86/time: Initialise time earlier during
	start_secondary()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

It is safe to do so, and useful for "[seconds.nanoseconds]" style timestamps on
printk()s during secondary bringup.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/smpboot.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 42b8a59..5014397 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -347,6 +347,8 @@ void start_secondary(void *unused)
 
     percpu_traps_init();
 
+    init_percpu_time();
+
     cpu_init();
 
     smp_callin();
@@ -381,8 +383,6 @@ void start_secondary(void *unused)
     cpumask_set_cpu(cpu, &cpu_online_map);
     unlock_vector_lock();
 
-    init_percpu_time();
-
     /* We can take interrupts now: we're officially "up". */
     local_irq_enable();
     mtrr_ap_init();
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSco-0002S9-IL; Fri, 28 Feb 2014 18:57:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScn-0002RD-5C
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:33 +0000
Received: from [85.158.137.68:36627] by server-16.bemta-3.messagelabs.com id
	33/CC-29917-B1CD0135; Fri, 28 Feb 2014 18:57:31 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393613847!3389208!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5356 invoked from network); 28 Feb 2014 18:57:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716112"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-Gg; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:00 +0000
Message-ID: <1393613824-13230-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel]  [PATCH v2 1/5] x86/time: Avoid redundant this_cpu()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

this_cpu() makes use of RELOC_HIDE() to prevent unsafe optimisations, forcing
a recalculation of the per-cpu data area.  Don't use it needlessly.
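
The hoisting can be modelled with a small sketch (illustrative only: `this_cpu_time()`, `percpu_area` and `lookup_count` are stand-ins for demonstration, not Xen's real per-cpu machinery). Every call through the accessor re-derives the per-cpu base, while caching the result in a local pointer, as the patch does with `t`, derives it once:

```c
#include <assert.h>
#include <stdint.h>

struct cpu_time_model {
    uint64_t local_tsc_stamp;
    uint32_t tsc_scale;
};

static struct cpu_time_model percpu_area[4];
static unsigned int lookup_count;   /* counts simulated base derivations */

/* Stand-in for this_cpu(cpu_time): re-derives the area on every call,
 * as the RELOC_HIDE()-based macro forces the compiler to do. */
static struct cpu_time_model *this_cpu_time(unsigned int cpu)
{
    lookup_count++;
    return &percpu_area[cpu];
}

static void init_time_uncached(unsigned int cpu)
{
    this_cpu_time(cpu)->tsc_scale = 42;       /* derivation #1 */
    this_cpu_time(cpu)->local_tsc_stamp = 7;  /* derivation #2 */
}

static void init_time_cached(unsigned int cpu)
{
    struct cpu_time_model *t = this_cpu_time(cpu); /* single derivation */

    t->tsc_scale = 42;
    t->local_tsc_stamp = 7;
}
```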

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/time.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 82492c1..883c135 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1323,7 +1323,7 @@ void init_percpu_time(void)
     s_time_t now;
 
     /* Initial estimate for TSC rate. */
-    this_cpu(cpu_time).tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
+    t->tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
 
     local_irq_save(flags);
     rdtscll(t->local_tsc_stamp);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScl-0002Qt-0y; Fri, 28 Feb 2014 18:57:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScj-0002Qa-99
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:29 +0000
Received: from [85.158.143.35:37619] by server-3.bemta-4.messagelabs.com id
	94/A0-11539-81CD0135; Fri, 28 Feb 2014 18:57:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16147 invoked from network); 28 Feb 2014 18:57:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716114"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-J4; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:03 +0000
Message-ID: <1393613824-13230-5-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/5] [RFC] xen/console: Provide timestamps as
	an offset since boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds a new "Linux style" console timestamp method, which is shorter and
more useful than the current date/time timestamps with single-second
granularity.
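
The TSM_SINCE_BOOT formatting can be sketched as follows (illustrative: `format_since_boot()` is a made-up helper, and plain 64-bit division stands in for Xen's do_div(), which divides in place and returns the remainder; NOW() is modelled as a nanosecond count since boot):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static void format_since_boot(uint64_t now_ns, char *buf, size_t len)
{
    uint64_t sec  = now_ns / 1000000000ULL;  /* whole seconds */
    uint64_t nsec = now_ns % 1000000000ULL;  /* remainder, as do_div() returns */

    /* "[SSSSSS.mmmmmm] " -- microsecond granularity, as in the patch. */
    snprintf(buf, len, "[%5" PRIu64 ".%06" PRIu64 "] ", sec, nsec / 1000);
}
```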

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/drivers/char/console.c |   58 +++++++++++++++++++++++++++++++++++++-------
 1 file changed, 49 insertions(+), 9 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..652d02d 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -49,8 +49,17 @@ static bool_t __read_mostly opt_console_to_ring;
 boolean_param("console_to_ring", opt_console_to_ring);
 
 /* console_timestamps: include a timestamp prefix on every Xen console line. */
-static bool_t __read_mostly opt_console_timestamps;
-boolean_param("console_timestamps", opt_console_timestamps);
+enum con_timestamp_mode
+{
+    TSM_NONE,          /* No timestamps */
+    TSM_DATE,          /* [YYYY-MM-DD HH:MM:SS] */
+    TSM_SINCE_BOOT     /* [SSSSSS.mmmmmm] */
+};
+
+static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
+
+static void parse_console_timestamps(char *s);
+custom_param("console_timestamps", parse_console_timestamps);
 
 /* conring_size: allows a large console ring than default (16kB). */
 static uint32_t __initdata opt_conring_size;
@@ -546,23 +555,54 @@ static int printk_prefix_check(char *p, char **pp)
             ((loglvl < upper_thresh) && printk_ratelimit()));
 } 
 
+static void __init parse_console_timestamps(char *s)
+{
+    if ( *s == '\0' || /* Compat for old booleanparam() */
+         !strcmp(s, "date") )
+        opt_con_timestamp_mode = TSM_DATE;
+    else if ( !strcmp(s, "boot") )
+        opt_con_timestamp_mode = TSM_SINCE_BOOT;
+    else if ( !strcmp(s, "none") )
+        opt_con_timestamp_mode = TSM_NONE;
+    else
+        printk(XENLOG_ERR "Unrecognised timestamp mode '%s'\n", s);
+}
+
 static void printk_start_of_line(const char *prefix)
 {
     struct tm tm;
     char tstr[32];
+    uint64_t sec, nsec;
 
     __putstr(prefix);
 
-    if ( !opt_console_timestamps )
-        return;
+    switch ( opt_con_timestamp_mode )
+    {
+    case TSM_DATE:
+        tm = wallclock_time();
+
+        if ( tm.tm_mday == 0 )
+            return;
+
+        if ( opt_con_timestamp_mode == TSM_DATE )
+            snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
+                     1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
+                     tm.tm_hour, tm.tm_min, tm.tm_sec);
+        break;
+
+    case TSM_SINCE_BOOT:
+        sec = NOW();
+        nsec = do_div(sec, 1000000000);
 
-    tm = wallclock_time();
-    if ( tm.tm_mday == 0 )
+        snprintf(tstr, sizeof(tstr), "[%5"PRIu64".%06"PRIu64"] ",
+                 sec, nsec / 1000);
+        break;
+
+    case TSM_NONE:
+    default:
         return;
+    }
 
-    snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
-             1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
-             tm.tm_hour, tm.tm_min, tm.tm_sec);
     __putstr(tstr);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScm-0002RG-Oq; Fri, 28 Feb 2014 18:57:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScl-0002Qs-7F
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:31 +0000
Received: from [85.158.143.35:43834] by server-1.bemta-4.messagelabs.com id
	53/05-31661-A1CD0135; Fri, 28 Feb 2014 18:57:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16192 invoked from network); 28 Feb 2014 18:57:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716111"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-H5; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:01 +0000
Message-ID: <1393613824-13230-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 2/5] x86/time: Always count s_time from Xen
	boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

In the early-boot clock, before the calibration routines kick in,
count time from Xen boot rather than from when the BSP's TSC was 0.

Signed-off-by: Tim Deegan <tim@xen.org>

Stash the timestamp in head.S as a good approximation of when we actually
started, rather than measuring "time since we ran early_time_init()".
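
The idea can be sketched as below (illustrative only: `ns_since_boot()` and the khz-based conversion are simplified stand-ins, not Xen's actual scale_delta()/tsc_scale arithmetic). With the TSC sampled in head.S stashed in boot_tsc_stamp, early-boot time is the scaled TSC delta since that stamp, so timestamps count from Xen entry rather than from whenever the BSP's TSC happened to be zero:

```c
#include <assert.h>
#include <stdint.h>

static uint64_t boot_tsc_stamp;   /* filled in head.S in the real patch */

/* Nanoseconds since boot, for a TSC ticking at tsc_khz kilohertz. */
static uint64_t ns_since_boot(uint64_t tsc_now, uint64_t tsc_khz)
{
    return (tsc_now - boot_tsc_stamp) * 1000000ULL / tsc_khz;
}
```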

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/boot/head.S |   10 ++++++++++
 xen/arch/x86/time.c      |    4 ++++
 2 files changed, 14 insertions(+)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index b12eefb..d62eaae 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -136,6 +136,12 @@ __start:
         /* Check for availability of long mode. */
         bt      $29,%edx
         jnc     bad_cpu
+
+        /* Stash TSC to calculate a good approximation of time-since-boot */
+        rdtsc
+        mov     %eax,sym_phys(boot_tsc_stamp)
+        mov     %edx,sym_phys(boot_tsc_stamp+4)
+
         /* Initialise L2 boot-map page table entries (16MB). */
         mov     $sym_phys(l2_bootmap),%edx
         mov     $PAGE_HYPERVISOR|_PAGE_PSE,%eax
@@ -203,6 +209,10 @@ GLOBAL(trampoline_end)
 __high_start:
 #include "x86_64.S"
 
+        .section .init.data, "a", @progbits
+GLOBAL(boot_tsc_stamp)
+        .quad   0
+
         .section .data.page_aligned, "aw", @progbits
         .p2align PAGE_SHIFT
 /*
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 883c135..3da4def 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -84,6 +84,9 @@ static u16 pit_stamp16;
 static u32 pit_stamp32;
 static bool_t __read_mostly using_pit;
 
+/* Boot timestamp, filled in head.S (initdata) */
+extern u64 boot_tsc_stamp;
+
 /*
  * 32-bit division of integer dividend and integer divisor yielding
  * 32-bit fractional quotient.
@@ -1456,6 +1459,7 @@ void __init early_time_init(void)
     u64 tmp = init_pit_and_calibrate_tsc();
 
     set_time_scale(&this_cpu(cpu_time).tsc_scale, tmp);
+    this_cpu(cpu_time).local_tsc_stamp = boot_tsc_stamp;
 
     do_div(tmp, 1000);
     cpu_khz = (unsigned long)tmp;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScl-0002R0-CR; Fri, 28 Feb 2014 18:57:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJSck-0002Qh-3Q
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:30 +0000
Received: from [85.158.143.35:37645] by server-3.bemta-4.messagelabs.com id
	07/A0-11539-91CD0135; Fri, 28 Feb 2014 18:57:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16167 invoked from network); 28 Feb 2014 18:57:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716115"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-Ka; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:04 +0000
Message-ID: <1393613824-13230-6-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v2 5/5] [RFC] Traditional console timestamps
	including milliseconds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Suggested-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/time.c        |   10 +++++++---
 xen/drivers/char/console.c |   11 ++++++++++-
 xen/include/asm-x86/time.h |    2 +-
 3 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 3da4def..884a522 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1648,15 +1648,19 @@ int dom0_pit_access(struct ioreq *ioreq)
     return 0;
 }
 
-struct tm wallclock_time(void)
+struct tm wallclock_time(uint64_t *ns)
 {
-    uint64_t seconds;
+    uint64_t seconds, nsec;
 
     if ( !wc_sec )
         return (struct tm) { 0 };
 
     seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;
-    do_div(seconds, 1000000000);
+    nsec = do_div(seconds, 1000000000);
+
+    if ( ns )
+        *ns = nsec;
+
     return gmtime(seconds);
 }
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 652d02d..02f6599 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -53,6 +53,7 @@ enum con_timestamp_mode
 {
     TSM_NONE,          /* No timestamps */
     TSM_DATE,          /* [YYYY-MM-DD HH:MM:SS] */
+    TSM_DATE_MS,       /* [YYYY-MM-DD HH:MM:SS.mmmmmm] */
     TSM_SINCE_BOOT     /* [SSSSSS.mmmmmm] */
 };
 
@@ -560,6 +561,8 @@ static void __init parse_console_timestamps(char *s)
     if ( *s == '\0' || /* Compat for old booleanparam() */
          !strcmp(s, "date") )
         opt_con_timestamp_mode = TSM_DATE;
+    else if ( !strcmp(s, "datems") )
+        opt_con_timestamp_mode = TSM_DATE_MS;
     else if ( !strcmp(s, "boot") )
         opt_con_timestamp_mode = TSM_SINCE_BOOT;
     else if ( !strcmp(s, "none") )
@@ -579,7 +582,8 @@ static void printk_start_of_line(const char *prefix)
     switch ( opt_con_timestamp_mode )
     {
     case TSM_DATE:
-        tm = wallclock_time();
+    case TSM_DATE_MS:
+        tm = wallclock_time(&nsec);
 
         if ( tm.tm_mday == 0 )
             return;
@@ -588,6 +592,11 @@ static void printk_start_of_line(const char *prefix)
             snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
                      1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
                      tm.tm_hour, tm.tm_min, tm.tm_sec);
+        else
+            snprintf(tstr, sizeof(tstr),
+                     "[%04u-%02u-%02u %02u:%02u:%02u.%06"PRIu64"] ",
+                     1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
+                     tm.tm_hour, tm.tm_min, tm.tm_sec, nsec / 1000);
         break;
 
     case TSM_SINCE_BOOT:
diff --git a/xen/include/asm-x86/time.h b/xen/include/asm-x86/time.h
index 147b39e..2f9dbd1 100644
--- a/xen/include/asm-x86/time.h
+++ b/xen/include/asm-x86/time.h
@@ -49,7 +49,7 @@ int dom0_pit_access(struct ioreq *ioreq);
 int cpu_frequency_change(u64 freq);
 
 struct tm;
-struct tm wallclock_time(void);
+struct tm wallclock_time(uint64_t *ns);
 
 void pit_broadcast_enter(void);
 void pit_broadcast_exit(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScn-0002RN-5S; Fri, 28 Feb 2014 18:57:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScl-0002Qr-9D
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:31 +0000
Received: from [85.158.137.68:62307] by server-12.bemta-3.messagelabs.com id
	4C/7B-01674-A1CD0135; Fri, 28 Feb 2014 18:57:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393613847!3389208!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5263 invoked from network); 28 Feb 2014 18:57:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716110"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-GD; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:56:59 +0000
Message-ID: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [RFC PATCH v2 0/5] Improvements to console timestamps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series aims to improve on the current implementation of console
timestamps in Xen.

Patch 1 is a plain optimisation fix and logically independent from the rest of
the series.

Patch 2 changes Xen's idea of when time starts, from when the BSP's TSC was 0,
to when Xen boots.

Patch 3 makes an initial estimate of AP time calibration earlier during boot,
so printk()s using the new console timestamps carry a real stamp rather than 0s.

Patch 4 is the meat of the series, adding a new timestamp implementation to
printk_start_of_line().

Patch 5 is an intermediate suggestion: retain the old timestamp style, but
display milliseconds as well.

There is still one bug to fix: the timestamp jumps backwards when the
platform timer starts:

    (XEN) [    1.069271] ENABLING IO-APIC IRQs
    (XEN) [    1.073308]  -> Using new ACK method
    (XEN) [    1.077771] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
    (XEN) [    0.017017] Platform timer is 14.318MHz HPET
    (XEN) [    0.021701] Allocated console ring of 16 KiB.

Also, from discussion in the office, it has been suggested that the timestamp
mode/format would be better as build-time rather than boot-time configuration,
and I lean towards agreeing.

Furthermore, 3 different timestamp modes would seem to be overkill.

Comments welcome,

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScm-0002RG-Oq; Fri, 28 Feb 2014 18:57:32 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScl-0002Qs-7F
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:31 +0000
Received: from [85.158.143.35:43834] by server-1.bemta-4.messagelabs.com id
	53/05-31661-A1CD0135; Fri, 28 Feb 2014 18:57:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!4
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16192 invoked from network); 28 Feb 2014 18:57:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716111"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-H5; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:01 +0000
Message-ID: <1393613824-13230-3-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Tim Deegan <tim@xen.org>, Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] [PATCH v2 2/5] x86/time: Always count s_time from Xen
	boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

From: Tim Deegan <tim@xen.org>

In the early-boot clock, before the calibration routines kick in,
count time from Xen boot rather than from when the BSP's TSC was 0.

Signed-off-by: Tim Deegan <tim@xen.org>

Stash the timestamp in head.S as a good approximation of when we actually
started, rather than measuring "time since we ran early_time_init()".

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/boot/head.S |   10 ++++++++++
 xen/arch/x86/time.c      |    4 ++++
 2 files changed, 14 insertions(+)

diff --git a/xen/arch/x86/boot/head.S b/xen/arch/x86/boot/head.S
index b12eefb..d62eaae 100644
--- a/xen/arch/x86/boot/head.S
+++ b/xen/arch/x86/boot/head.S
@@ -136,6 +136,12 @@ __start:
         /* Check for availability of long mode. */
         bt      $29,%edx
         jnc     bad_cpu
+
+        /* Stash TSC to calculate a good approximation of time-since-boot */
+        rdtsc
+        mov     %eax,sym_phys(boot_tsc_stamp)
+        mov     %edx,sym_phys(boot_tsc_stamp+4)
+
         /* Initialise L2 boot-map page table entries (16MB). */
         mov     $sym_phys(l2_bootmap),%edx
         mov     $PAGE_HYPERVISOR|_PAGE_PSE,%eax
@@ -203,6 +209,10 @@ GLOBAL(trampoline_end)
 __high_start:
 #include "x86_64.S"
 
+        .section .init.data, "a", @progbits
+GLOBAL(boot_tsc_stamp)
+        .quad   0
+
         .section .data.page_aligned, "aw", @progbits
         .p2align PAGE_SHIFT
 /*
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 883c135..3da4def 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -84,6 +84,9 @@ static u16 pit_stamp16;
 static u32 pit_stamp32;
 static bool_t __read_mostly using_pit;
 
+/* Boot timestamp, filled in head.S (initdata) */
+extern u64 boot_tsc_stamp;
+
 /*
  * 32-bit division of integer dividend and integer divisor yielding
  * 32-bit fractional quotient.
@@ -1456,6 +1459,7 @@ void __init early_time_init(void)
     u64 tmp = init_pit_and_calibrate_tsc();
 
     set_time_scale(&this_cpu(cpu_time).tsc_scale, tmp);
+    this_cpu(cpu_time).local_tsc_stamp = boot_tsc_stamp;
 
     do_div(tmp, 1000);
     cpu_khz = (unsigned long)tmp;
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScl-0002Qt-0y; Fri, 28 Feb 2014 18:57:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScj-0002Qa-99
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:29 +0000
Received: from [85.158.143.35:37619] by server-3.bemta-4.messagelabs.com id
	94/A0-11539-81CD0135; Fri, 28 Feb 2014 18:57:28 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16147 invoked from network); 28 Feb 2014 18:57:27 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:27 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716114"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-J4; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:03 +0000
Message-ID: <1393613824-13230-5-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v2 4/5] [RFC] xen/console: Provide timestamps as
	an offset since boot
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This adds a new "Linux style" console timestamp method, which is shorter and
more useful than the current date/time timestamps with single-second
granularity.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/drivers/char/console.c |   58 +++++++++++++++++++++++++++++++++++++-------
 1 file changed, 49 insertions(+), 9 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 532c426..652d02d 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -49,8 +49,17 @@ static bool_t __read_mostly opt_console_to_ring;
 boolean_param("console_to_ring", opt_console_to_ring);
 
 /* console_timestamps: include a timestamp prefix on every Xen console line. */
-static bool_t __read_mostly opt_console_timestamps;
-boolean_param("console_timestamps", opt_console_timestamps);
+enum con_timestamp_mode
+{
+    TSM_NONE,          /* No timestamps */
+    TSM_DATE,          /* [YYYY-MM-DD HH:MM:SS] */
+    TSM_SINCE_BOOT     /* [SSSSSS.mmmmmm] */
+};
+
+static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
+
+static void parse_console_timestamps(char *s);
+custom_param("console_timestamps", parse_console_timestamps);
 
 /* conring_size: allows a large console ring than default (16kB). */
 static uint32_t __initdata opt_conring_size;
@@ -546,23 +555,54 @@ static int printk_prefix_check(char *p, char **pp)
             ((loglvl < upper_thresh) && printk_ratelimit()));
 } 
 
+static void __init parse_console_timestamps(char *s)
+{
+    if ( *s == '\0' || /* Compat for old booleanparam() */
+         !strcmp(s, "date") )
+        opt_con_timestamp_mode = TSM_DATE;
+    else if ( !strcmp(s, "boot") )
+        opt_con_timestamp_mode = TSM_SINCE_BOOT;
+    else if ( !strcmp(s, "none") )
+        opt_con_timestamp_mode = TSM_NONE;
+    else
+        printk(XENLOG_ERR "Unrecognised timestamp mode '%s'\n", s);
+}
+
 static void printk_start_of_line(const char *prefix)
 {
     struct tm tm;
     char tstr[32];
+    uint64_t sec, nsec;
 
     __putstr(prefix);
 
-    if ( !opt_console_timestamps )
-        return;
+    switch ( opt_con_timestamp_mode )
+    {
+    case TSM_DATE:
+        tm = wallclock_time();
+
+        if ( tm.tm_mday == 0 )
+            return;
+
+        if ( opt_con_timestamp_mode == TSM_DATE )
+            snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
+                     1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
+                     tm.tm_hour, tm.tm_min, tm.tm_sec);
+        break;
+
+    case TSM_SINCE_BOOT:
+        sec = NOW();
+        nsec = do_div(sec, 1000000000);
 
-    tm = wallclock_time();
-    if ( tm.tm_mday == 0 )
+        snprintf(tstr, sizeof(tstr), "[%5"PRIu64".%06"PRIu64"] ",
+                 sec, nsec / 1000);
+        break;
+
+    case TSM_NONE:
+    default:
         return;
+    }
 
-    snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
-             1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
-             tm.tm_hour, tm.tm_min, tm.tm_sec);
     __putstr(tstr);
 }
 
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScl-0002R0-CR; Fri, 28 Feb 2014 18:57:31 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJSck-0002Qh-3Q
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:30 +0000
Received: from [85.158.143.35:37645] by server-3.bemta-4.messagelabs.com id
	07/A0-11539-91CD0135; Fri, 28 Feb 2014 18:57:29 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-7.tower-21.messagelabs.com!1393613845!9103250!3
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16167 invoked from network); 28 Feb 2014 18:57:28 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-7.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:28 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716115"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-Ka; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:04 +0000
Message-ID: <1393613824-13230-6-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [PATCH v2 5/5] [RFC] Traditional console timestamps
	including milliseconds
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Suggested-by: Don Slutz <dslutz@verizon.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/time.c        |   10 +++++++---
 xen/drivers/char/console.c |   11 ++++++++++-
 xen/include/asm-x86/time.h |    2 +-
 3 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 3da4def..884a522 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1648,15 +1648,19 @@ int dom0_pit_access(struct ioreq *ioreq)
     return 0;
 }
 
-struct tm wallclock_time(void)
+struct tm wallclock_time(uint64_t *ns)
 {
-    uint64_t seconds;
+    uint64_t seconds, nsec;
 
     if ( !wc_sec )
         return (struct tm) { 0 };
 
     seconds = NOW() + (wc_sec * 1000000000ull) + wc_nsec;
-    do_div(seconds, 1000000000);
+    nsec = do_div(seconds, 1000000000);
+
+    if ( ns )
+        *ns = nsec;
+
     return gmtime(seconds);
 }
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 652d02d..02f6599 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -53,6 +53,7 @@ enum con_timestamp_mode
 {
     TSM_NONE,          /* No timestamps */
     TSM_DATE,          /* [YYYY-MM-DD HH:MM:SS] */
+    TSM_DATE_MS,       /* [YYYY-MM-DD HH:MM:SS.mmmmmm] */
     TSM_SINCE_BOOT     /* [SSSSSS.mmmmmm] */
 };
 
@@ -560,6 +561,8 @@ static void __init parse_console_timestamps(char *s)
     if ( *s == '\0' || /* Compat for old booleanparam() */
          !strcmp(s, "date") )
         opt_con_timestamp_mode = TSM_DATE;
+    else if ( !strcmp(s, "datems") )
+        opt_con_timestamp_mode = TSM_DATE_MS;
     else if ( !strcmp(s, "boot") )
         opt_con_timestamp_mode = TSM_SINCE_BOOT;
     else if ( !strcmp(s, "none") )
@@ -579,7 +582,8 @@ static void printk_start_of_line(const char *prefix)
     switch ( opt_con_timestamp_mode )
     {
     case TSM_DATE:
-        tm = wallclock_time();
+    case TSM_DATE_MS:
+        tm = wallclock_time(&nsec);
 
         if ( tm.tm_mday == 0 )
             return;
@@ -588,6 +592,11 @@ static void printk_start_of_line(const char *prefix)
             snprintf(tstr, sizeof(tstr), "[%04u-%02u-%02u %02u:%02u:%02u] ",
                      1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
                      tm.tm_hour, tm.tm_min, tm.tm_sec);
+        else
+            snprintf(tstr, sizeof(tstr),
+                     "[%04u-%02u-%02u %02u:%02u:%02u.%06"PRIu64"] ",
+                     1900 + tm.tm_year, tm.tm_mon + 1, tm.tm_mday,
+                     tm.tm_hour, tm.tm_min, tm.tm_sec, nsec / 1000);
         break;
 
     case TSM_SINCE_BOOT:
diff --git a/xen/include/asm-x86/time.h b/xen/include/asm-x86/time.h
index 147b39e..2f9dbd1 100644
--- a/xen/include/asm-x86/time.h
+++ b/xen/include/asm-x86/time.h
@@ -49,7 +49,7 @@ int dom0_pit_access(struct ioreq *ioreq);
 int cpu_frequency_change(u64 freq);
 
 struct tm;
-struct tm wallclock_time(void);
+struct tm wallclock_time(uint64_t *ns);
 
 void pit_broadcast_enter(void);
 void pit_broadcast_exit(void);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSco-0002S9-IL; Fri, 28 Feb 2014 18:57:34 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScn-0002RD-5C
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:33 +0000
Received: from [85.158.137.68:36627] by server-16.bemta-3.messagelabs.com id
	33/CC-29917-B1CD0135; Fri, 28 Feb 2014 18:57:31 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393613847!3389208!2
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5356 invoked from network); 28 Feb 2014 18:57:30 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:30 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716112"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-Gg; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:57:00 +0000
Message-ID: <1393613824-13230-2-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
In-Reply-To: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
References: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>,
	Jan Beulich <JBeulich@suse.com>, Tim Deegan <tim@xen.org>
Subject: [Xen-devel]  [PATCH v2 1/5] x86/time: Avoid redundant this_cpu()
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

this_cpu() makes use of RELOC_HIDE() to prevent unsafe optimisations, forcing
a recalculation of the per-cpu data area.  Don't use it needlessly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Keir Fraser <keir@xen.org>
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/time.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 82492c1..883c135 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1323,7 +1323,7 @@ void init_percpu_time(void)
     s_time_t now;
 
     /* Initial estimate for TSC rate. */
-    this_cpu(cpu_time).tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
+    t->tsc_scale = per_cpu(cpu_time, 0).tsc_scale;
 
     local_irq_save(flags);
     rdtscll(t->local_tsc_stamp);
-- 
1.7.10.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 18:57:52 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJScn-0002RN-5S; Fri, 28 Feb 2014 18:57:33 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJScl-0002Qr-9D
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 18:57:31 +0000
Received: from [85.158.137.68:62307] by server-12.bemta-3.messagelabs.com id
	4C/7B-01674-A1CD0135; Fri, 28 Feb 2014 18:57:30 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393613847!3389208!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 5263 invoked from network); 28 Feb 2014 18:57:29 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-10.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 18:57:29 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106716110"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 18:57:06 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 13:57:05 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18]
	helo=andrewcoop.uk.xensource.com.)	by ukmail1.uk.xensource.com with
	esmtp (Exim 4.69)	(envelope-from <andrew.cooper3@citrix.com>)	id
	1WJScL-0001vh-GD; Fri, 28 Feb 2014 18:57:05 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xen.org>
Date: Fri, 28 Feb 2014 18:56:59 +0000
Message-ID: <1393613824-13230-1-git-send-email-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 1.7.10.4
MIME-Version: 1.0
X-DLP: MIA2
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [Xen-devel] [RFC PATCH v2 0/5] Improvements to console timestamps
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This series aims to improve on the current implementation of console
timestamps in Xen.

Patch 1 is a plain optimisation fix and logically independent from the rest of
the series.

Patch 2 changes Xen's idea of when time starts, from when the BSP's TSC was 0
to when Xen boots.

Patch 3 makes an early guess at AP time calibration during boot, so printk()s
using the new console timestamps carry a real stamp rather than zeroes.

Patch 4 is the meat of the series, adding a new timestamp implementation to
printk_start_of_line().

Patch 5 is an intermediate suggestion: retain the old timestamp style, but
display milliseconds as well.

There is still one bug to fix: the timestamp jumps backwards when the
platform timer starts:

    (XEN) [    1.069271] ENABLING IO-APIC IRQs
    (XEN) [    1.073308]  -> Using new ACK method
    (XEN) [    1.077771] ..TIMER: vector=0xF0 apic1=0 pin1=2 apic2=-1 pin2=-1
    (XEN) [    0.017017] Platform timer is 14.318MHz HPET
    (XEN) [    0.021701] Allocated console ring of 16 KiB.

Also, from discussion in the office, it has been suggested that the timestamp
mode/format would be better as build-time rather than boot-time configuration,
and I lean towards agreeing.

Furthermore, 3 different timestamp modes would seem to be overkill.

Comments welcome,

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 19:17:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 19:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSvk-0003ZU-Is; Fri, 28 Feb 2014 19:17:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WJSvi-0003ZP-Qa
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 19:17:06 +0000
Received: from [85.158.143.35:30865] by server-2.bemta-4.messagelabs.com id
	9F/82-04779-2B0E0135; Fri, 28 Feb 2014 19:17:06 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393615023!9118686!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25012 invoked from network); 28 Feb 2014 19:17:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 19:17:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="105108196"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 19:17:03 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 14:17:02 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Jesse Gross <jesse@nicira.com>, <pshelar@nicira.com>, <tgraf@redhat.com>, 
	<dev@openvswitch.org>
Date: Fri, 28 Feb 2014 19:16:56 +0000
Message-ID: <1393615016-9187-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>, netdev@vger.kernel.org
Subject: [Xen-devel] [PATCH] openvswitch: Orphan frags before sending to
	userspace via Netlink to avoid guest stall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The kernel datapath has now switched to zerocopy Netlink messages, but that
also means the pages in the frags array are sent straight to userspace. If
those pages came from outside the kernel, we have to swap them out for local
copies.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 net/openvswitch/datapath.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 36f8872..ffb563c 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -464,6 +464,12 @@ static int queue_userspace_packet(struct datapath *dp, struct sk_buff *skb,
 	}
 	nla->nla_len = nla_attr_size(skb->len);
 
+	if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC))) {
+		err = -ENOMEM;
+		skb_tx_error(skb);
+		goto out;
+	}
+
 	skb_zerocopy(user_skb, skb, skb->len, hlen);
 
 	/* Pad OVS_PACKET_ATTR_PACKET if linear copy was performed */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 19:17:23 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 19:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSvk-0003ZU-Is; Fri, 28 Feb 2014 19:17:08 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <zoltan.kiss@citrix.com>) id 1WJSvi-0003ZP-Qa
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 19:17:06 +0000
Received: from [85.158.143.35:30865] by server-2.bemta-4.messagelabs.com id
	9F/82-04779-2B0E0135; Fri, 28 Feb 2014 19:17:06 +0000
X-Env-Sender: zoltan.kiss@citrix.com
X-Msg-Ref: server-9.tower-21.messagelabs.com!1393615023!9118686!1
X-Originating-IP: [66.165.176.63]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni42MyA9PiAzMDYwNDg=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 25012 invoked from network); 28 Feb 2014 19:17:05 -0000
Received: from smtp02.citrix.com (HELO SMTP02.CITRIX.COM) (66.165.176.63)
	by server-9.tower-21.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 19:17:05 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="105108196"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO02.CITRIX.COM with ESMTP; 28 Feb 2014 19:17:03 +0000
Received: from imagesandwords.uk.xensource.com (10.80.2.133) by
	FTLPEX01CL02.citrite.net (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 14:17:02 -0500
From: Zoltan Kiss <zoltan.kiss@citrix.com>
To: Jesse Gross <jesse@nicira.com>, <pshelar@nicira.com>, <tgraf@redhat.com>, 
	<dev@openvswitch.org>
Date: Fri, 28 Feb 2014 19:16:56 +0000
Message-ID: <1393615016-9187-1-git-send-email-zoltan.kiss@citrix.com>
X-Mailer: git-send-email 1.7.9.5
MIME-Version: 1.0
X-Originating-IP: [10.80.2.133]
X-DLP: MIA1
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	Zoltan Kiss <zoltan.kiss@citrix.com>, netdev@vger.kernel.org
Subject: [Xen-devel] [PATCH] openvswitch: Orphan frags before sending to
	userspace via Netlink to avoid guest stall
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

The kernel datapath has now switched to zerocopy Netlink messages, which also
means that the pages in the frags array are sent straight to userspace. If
those pages came from outside the kernel, we have to swap them out for local
copies.

Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
---
 net/openvswitch/datapath.c |    6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
index 36f8872..ffb563c 100644
--- a/net/openvswitch/datapath.c
+++ b/net/openvswitch/datapath.c
@@ -464,6 +464,12 @@ static int queue_userspace_packet(struct datapath *dp, struct sk_buff *skb,
 	}
 	nla->nla_len = nla_attr_size(skb->len);
 
+	if (unlikely(skb_orphan_frags(skb, GFP_ATOMIC))) {
+		err = -ENOMEM;
+		skb_tx_error(skb);
+		goto out;
+	}
+
 	skb_zerocopy(user_skb, skb, skb->len, hlen);
 
 	/* Pad OVS_PACKET_ATTR_PACKET if linear copy was performed */

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 19:19:54 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 19:19:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJSyO-0003lU-9y; Fri, 28 Feb 2014 19:19:52 +0000
Received: from mail6.bemta14.messagelabs.com ([193.109.254.103])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Andrew.Cooper3@citrix.com>) id 1WJSyM-0003lP-KO
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 19:19:50 +0000
Received: from [193.109.254.147:48946] by server-14.bemta-14.messagelabs.com
	id B9/44-29228-551E0135; Fri, 28 Feb 2014 19:19:49 +0000
X-Env-Sender: Andrew.Cooper3@citrix.com
X-Msg-Ref: server-11.tower-27.messagelabs.com!1393615188!2293502!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29212 invoked from network); 28 Feb 2014 19:19:49 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-11.tower-27.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 19:19:49 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106723405"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 19:19:47 +0000
Received: from ukmail1.uk.xensource.com (10.80.16.128) by smtprelay.citrix.com
	(10.13.107.79) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 14:19:47 -0500
Received: from andrewcoop.uk.xensource.com ([10.80.2.18])	by
	ukmail1.uk.xensource.com with esmtp (Exim 4.69)	(envelope-from
	<andrew.cooper3@citrix.com>)	id 1WJSyJ-0002Fz-3v;
	Fri, 28 Feb 2014 19:19:47 +0000
Message-ID: <5310E152.8030407@citrix.com>
Date: Fri, 28 Feb 2014 19:19:46 +0000
From: Andrew Cooper <andrew.cooper3@citrix.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20131103 Icedove/17.0.10
MIME-Version: 1.0
To: Xen-devel List <xen-devel@lists.xen.org>
X-Enigmail-Version: 1.6
X-DLP: MIA1
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Keir Fraser <keir@xen.org>, Ian Campbell <Ian.Campbell@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [Xen-devel] Source tree tidy
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

Looking through the root of the git tree, there are some files which
appear to be remnants of legacy source code management systems, and thus
are good candidates for deletion.

From bitkeeper:
.bk-to-hg
.rootkeys
.hg-to-bk

From mercurial:
.hgsigs

The .hgtags and .hgignore files are still relevant given the git->hg mirror
(and are certainly still used by XenServer).

While I could post a patch, doing so would be a huge email and quite
boring to read/review.

If the committers agree, would someone mind doing this without a formal
patch?

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 19:29:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 19:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJT7v-00047P-FQ; Fri, 28 Feb 2014 19:29:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJT7t-00047D-Vz
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 19:29:42 +0000
Received: from [85.158.139.211:15519] by server-8.bemta-5.messagelabs.com id
	2D/46-05298-5A3E0135; Fri, 28 Feb 2014 19:29:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393615778!6953495!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28298 invoked from network); 28 Feb 2014 19:29:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 19:29:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106726200"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 19:29:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 14:29:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJT7p-0003FA-6M;
	Fri, 28 Feb 2014 19:29:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJT7p-0000cx-16;
	Fri, 28 Feb 2014 19:29:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25330-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 19:29:37 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25330: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25330 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25330/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-xend               4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)
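
The deadlock class fixed by this commit, re-acquiring a non-recursive mutex
the caller already holds, can be illustrated with a minimal sketch. The
snippet below is a Python analogy only, not libxl code (the actual
"no_forking" lock is a pthread mutex inside libxl):

```python
import threading

# A plain, non-recursive lock, analogous to libxl's "no_forking" mutex.
plain = threading.Lock()
plain.acquire()
# A second, nested acquire by the holder would block forever -- the shape
# of the deadlock described above. Probing without blocking shows that the
# nested acquisition cannot succeed.
nested_plain = plain.acquire(blocking=False)
plain.release()

# By contrast, a recursive lock may be re-acquired by its current holder.
recursive = threading.RLock()
recursive.acquire()
nested_recursive = recursive.acquire(blocking=False)
recursive.release()
recursive.release()

print(nested_plain, nested_recursive)  # False True
```

This is why having both atfork_lock and sigchld_user_remove take the same
non-recursive mutex, with one calling into the other, hangs the child process
unconditionally.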

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 19:29:51 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 19:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJT7v-00047P-FQ; Fri, 28 Feb 2014 19:29:43 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJT7t-00047D-Vz
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 19:29:42 +0000
Received: from [85.158.139.211:15519] by server-8.bemta-5.messagelabs.com id
	2D/46-05298-5A3E0135; Fri, 28 Feb 2014 19:29:41 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-9.tower-206.messagelabs.com!1393615778!6953495!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28298 invoked from network); 28 Feb 2014 19:29:40 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-9.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 19:29:40 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106726200"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 19:29:38 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 14:29:37 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJT7p-0003FA-6M;
	Fri, 28 Feb 2014 19:29:37 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJT7p-0000cx-16;
	Fri, 28 Feb 2014 19:29:37 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25330-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 19:29:37 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-4.4-testing test] 25330: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25330 xen-4.4-testing real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25330/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25266
 build-i386-xend               4 xen-build                 fail REGR. vs. 25266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-pv            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop               fail never pass
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop                   fail  never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop              fail never pass
 test-amd64-i386-xend-qemut-winxpsp3  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop         fail never pass
 test-amd64-i386-xend-winxpsp3  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass

version targeted for testing:
 xen                  5be1e95318147855713709094e6847e3104ae910
baseline version:
 xen                  29f130cfd9aca7ee12deddbbc0217f39d55bec60

------------------------------------------------------------
People who touched revisions under test:
  Ian Campbell <Ian.Campbell@citrix.com>
  Ian Jackson <Ian.Jackson@eu.citrix.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-i386-rhel6hvm-amd                                 pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                fail    
 test-amd64-i386-xl-credit2                                   pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-i386-xl-multivcpu                                 pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           blocked 
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     fail    
 test-amd64-i386-xl-winxpsp3-vcpus1                           fail    
 test-amd64-i386-xend-qemut-winxpsp3                          blocked 
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                blocked 
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 5be1e95318147855713709094e6847e3104ae910
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Feb 24 12:57:53 2014 +0000

    libxl: Fix libxl_postfork_child_noexec deadlock etc.
    
    libxl_postfork_child_noexec would nestedly reacquire the non-recursive
    "no_forking" mutex: atfork_lock uses it, as does sigchld_user_remove.
    The result on Linux is that the process always deadlocks before
    returning from this function.
    
    This is used by xl's console child.  So, the ultimate effect is that
    xl with pygrub does not manage to connect to the pygrub console.
    This behaviour was reported by Michael Young in Xen 4.4.0 RC5.
    
    Also, the use of sigchld_user_remove in libxl_postfork_child_noexec is
    not correct with SIGCHLD sharing.  libxl_postfork_child_noexec is
    documented to suffice if called only on one ctx.  So deregistering the
    ctx it's called on is not sufficient.  Instead, we need a new approach
    which discards the whole sigchld_user list and unconditionally removes
    our SIGCHLD handler if we had one.
    
    Prompted by this, clarify the semantics of
    libxl_postfork_child_noexec.  Specifically, expand on the meaning of
    "quickly" by explaining what operations are not permitted; and
    document the fact that the function doesn't reclaim the resources in
    the ctxs.
    
    And add a comment in libxl_postfork_child_noexec explaining the
    internal concurrency situation.
    
    This is an important bugfix.  IMO the bug is a blocker for Xen 4.4.
    
    Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
    Reported-by: M A Young <m.a.young@durham.ac.uk>
    CC: Ian Campbell <Ian.Campbell@citrix.com>
    CC: George Dunlap <george.dunlap@eu.citrix.com>
    Acked-by: Ian Campbell <Ian.Campbell@citrix.com>
    Release-acked-by: George Dunlap <george.dunlap@eu.citrix.com>
(qemu changes not included)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 19:41:26 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 19:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJTJ3-0004S0-Mj; Fri, 28 Feb 2014 19:41:13 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <david.vrabel@citrix.com>) id 1WJTJ2-0004Rv-Q8
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 19:41:13 +0000
Received: from [85.158.139.211:30067] by server-6.bemta-5.messagelabs.com id
	97/B6-14342-856E0135; Fri, 28 Feb 2014 19:41:12 +0000
X-Env-Sender: david.vrabel@citrix.com
X-Msg-Ref: server-14.tower-206.messagelabs.com!1393616470!6582883!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 4476 invoked from network); 28 Feb 2014 19:41:11 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-14.tower-206.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 19:41:11 -0000
X-IronPort-AV: E=Sophos;i="4.97,563,1389744000"; d="scan'208";a="106729489"
Received: from accessns.citrite.net (HELO FTLPEX01CL03.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 19:41:09 +0000
Received: from [10.80.2.76] (10.80.2.76) by FTLPEX01CL03.citrite.net
	(10.13.107.80) with Microsoft SMTP Server id 14.2.342.4;
	Fri, 28 Feb 2014 14:41:09 -0500
Message-ID: <5310E653.20307@citrix.com>
Date: Fri, 28 Feb 2014 19:41:07 +0000
From: David Vrabel <david.vrabel@citrix.com>
User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US;
	rv:1.9.1.16) Gecko/20121215 Iceowl/1.0b1 Icedove/3.0.11
MIME-Version: 1.0
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <530F68BE.2070505@oracle.com>
	<1393524935-4216-1-git-send-email-roger.pau@citrix.com>
	<530F87CC.8090000@oracle.com> <5310C54A.1080906@oracle.com>
	<5310CB6F.1070706@citrix.com> <5310CEB6.4010604@oracle.com>
	<5310D118.2090302@citrix.com> <5310D71A.6050507@oracle.com>
In-Reply-To: <5310D71A.6050507@oracle.com>
X-Originating-IP: [10.80.2.76]
X-DLP: MIA2
Cc: xen-devel@lists.xen.org, linux-kernel@vger.kernel.org,
	=?UTF-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: [Xen-devel] [PATCH v2] xen: add support for MSI message groups
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 28/02/14 18:36, Boris Ostrovsky wrote:
> On 02/28/2014 01:10 PM, Roger Pau Monné wrote:
>> On 28/02/14 19:00, Boris Ostrovsky wrote:
>>> On 02/28/2014 12:46 PM, Roger Pau Monné wrote:
>>>> On 28/02/14 18:20, Boris Ostrovsky wrote:
>>>>> On 02/27/2014 01:45 PM, Boris Ostrovsky wrote:
>>>>>> On 02/27/2014 01:15 PM, Roger Pau Monne wrote:
>>>>>>> Add support for MSI message groups for Xen Dom0 using the
>>>>>>> MAP_PIRQ_TYPE_MULTI_MSI pirq map type.
>>>>>>>
>>>>>>> In order to keep track of which pirq is the first one in the
>>>>>>> group all
>>>>>>> pirqs in the MSI group except for the first one have the newly
>>>>>>> introduced PIRQ_MSI_GROUP flag set. This prevents calling
>>>>>>> PHYSDEVOP_unmap_pirq on them, since the unmap must be done with the
>>>>>>> first pirq in the group.
>>>>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>>>>
>>>>>>
>>>>> I was just looking at xen_setup_msi_irqs() (for a different reason)
>>>>> and
>>>>> I am no longer sure this patch does anything:
>>>>>
>>>>> static int xen_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>>>>> {
>>>>>           int irq, ret, i;
>>>>>           struct msi_desc *msidesc;
>>>>>           int *v;
>>>>>
>>>>>           if (type == PCI_CAP_ID_MSI && nvec > 1)
>>>>>                   return 1;
>>>>> ....
>>>>>
>>>>> Same thing for xen_hvm_setup_msi_irqs().
>>>> As said in the commit message this is only for Dom0, so the function
>>>> modified is xen_initdom_setup_msi_irqs (where this check is removed).
>>> Then what is the reason for these changes:
>>>
>>> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
>>> index 103e702..905956f 100644
>>> --- a/arch/x86/pci/xen.c
>>> +++ b/arch/x86/pci/xen.c
>>> @@ -178,6 +178,7 @@ static int xen_setup_msi_irqs(struct pci_dev *dev,
>>> int nvec, int type)
>>>       i = 0;
>>>       list_for_each_entry(msidesc, &dev->msi_list, list) {
>>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, v[i],
>>> +                           (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>>                              (type == PCI_CAP_ID_MSIX) ?
>>>                              "pcifront-msi-x" :
>>>                              "pcifront-msi",
>>> @@ -245,6 +246,7 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev
>>> *dev, int nvec, int type)
>>>                   "xen: msi already bound to pirq=%d\n", pirq);
>>>           }
>>>           irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
>>> +                          (type == PCI_CAP_ID_MSI) ? nvec : 1,
>>>                              (type == PCI_CAP_ID_MSIX) ?
>>>                              "msi-x" : "msi",
>>>                              DOMID_SELF);
>>>
>>> Should you simply pass 1?
>> Yes, but then if we implement MSI message groups for those cases we will
>> need to modify this line again, this way it's already correctly set up.
>> If you think it's best to hardcode it to 1, I can change it (I was also
>> in doubt about which way was better when modifying those lines).
> 
> 
> I think passing 1 explicitly would be better. If we extend support
> for non-dom0 we would have to modify these routines anyway so making
> changes in both places simultaneously would make the commit more clear
> (IMO).

If we know now that this will need to be changed, it's better to do it
now than forget about it later.

Applied to devel/for-linus-3.15, thanks.

David

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 20:16:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 20:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJTqT-0005A7-S8; Fri, 28 Feb 2014 20:15:45 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJTqS-0005A1-EJ
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 20:15:44 +0000
Received: from [85.158.137.68:59477] by server-14.bemta-3.messagelabs.com id
	35/EF-08196-F6EE0135; Fri, 28 Feb 2014 20:15:43 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393618541!3397967!1
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10419 invoked from network); 28 Feb 2014 20:15:42 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 20:15:42 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 28 Feb 2014 20:15:39 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,564,1389744000"; d="scan'208";a="664362532"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi03.verizon.com with ESMTP; 28 Feb 2014 20:15:38 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 28 Feb 2014 15:15:34 -0500
Message-Id: <1393618535-9587-1-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC PATCH 0/1] Add pci_hole_min_size
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This is posted as an RFC because you need the upstream version of qemu with:

http://marc.info/?l=qemu-devel&m=139354036513567&w=2

This allows growing the pci_hole to the size needed.

This may help when using PCI passthrough with HVM guests.

Don Slutz (1):
  Add pci_hole_min_size

 tools/firmware/hvmloader/pci.c | 10 ++++++++++
 tools/libxc/xc_hvm_build_x86.c | 22 ++++++++++++++++++++++
 tools/libxc/xenguest.h         | 11 +++++++++++
 tools/libxl/libxl_create.c     |  4 +++-
 tools/libxl/libxl_dm.c         | 13 +++++++++++--
 tools/libxl/libxl_dom.c        |  3 ++-
 tools/libxl/libxl_types.idl    |  1 +
 tools/libxl/xl_cmdimpl.c       |  6 ++++++
 8 files changed, 66 insertions(+), 4 deletions(-)

-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 20:16:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 20:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJTqU-0005AI-Py; Fri, 28 Feb 2014 20:15:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJTqT-0005A2-6T
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 20:15:45 +0000
Received: from [85.158.137.68:59533] by server-9.bemta-3.messagelabs.com id
	3A/F3-10184-07EE0135; Fri, 28 Feb 2014 20:15:44 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393618541!3397967!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10463 invoked from network); 28 Feb 2014 20:15:43 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 20:15:43 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 28 Feb 2014 20:15:40 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,564,1389744000"; d="scan'208";a="664362541"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi03.verizon.com with ESMTP; 28 Feb 2014 20:15:39 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 28 Feb 2014 15:15:35 -0500
Message-Id: <1393618535-9587-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1393618535-9587-1-git-send-email-dslutz@verizon.com>
References: <1393618535-9587-1-git-send-email-dslutz@verizon.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC PATCH 1/1] Add pci_hole_min_size
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows growing the pci_hole to the size needed.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/firmware/hvmloader/pci.c | 10 ++++++++++
 tools/libxc/xc_hvm_build_x86.c | 22 ++++++++++++++++++++++
 tools/libxc/xenguest.h         | 11 +++++++++++
 tools/libxl/libxl_create.c     |  4 +++-
 tools/libxl/libxl_dm.c         | 13 +++++++++++--
 tools/libxl/libxl_dom.c        |  3 ++-
 tools/libxl/libxl_types.idl    |  1 +
 tools/libxl/xl_cmdimpl.c       |  6 ++++++
 8 files changed, 66 insertions(+), 4 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 627e8cb..b36d00b 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -58,6 +58,7 @@ void pci_setup(void)
         uint64_t bar_sz;
     } *bars = (struct bars *)scratch_start;
     unsigned int i, nr_bars = 0;
+    uint64_t pci_hole_min_size;
 
     const char *s;
     /*
@@ -85,6 +86,9 @@ void pci_setup(void)
     printf("Relocating guest memory for lowmem MMIO space %s\n",
            allow_memory_relocate?"enabled":"disabled");
 
+    s = xenstore_read("platform/pci_hole_min_size", "0");
+    pci_hole_min_size = strtoll(s, NULL, 0);
+
     /* Program PCI-ISA bridge with appropriate link routes. */
     isa_irq = 0;
     for ( link = 0; link < 4; link++ )
@@ -236,6 +240,12 @@ void pci_setup(void)
         pci_writew(devfn, PCI_COMMAND, cmd);
     }
 
+    if (pci_hole_min_size) {
+        pci_mem_start = (1ull << 32) - pci_hole_min_size;
+        printf("pci_mem_start=0x%lx (was 0x%x) for pci_hole_min_size=%llu\n",
From xen-devel-bounces@lists.xen.org Fri Feb 28 20:16:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 20:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJTqU-0005AI-Py; Fri, 28 Feb 2014 20:15:47 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <dslutz@verizon.com>) id 1WJTqT-0005A2-6T
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 20:15:45 +0000
Received: from [85.158.137.68:59533] by server-9.bemta-3.messagelabs.com id
	3A/F3-10184-07EE0135; Fri, 28 Feb 2014 20:15:44 +0000
X-Env-Sender: dslutz@verizon.com
X-Msg-Ref: server-10.tower-31.messagelabs.com!1393618541!3397967!2
X-Originating-IP: [199.249.25.207]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTk5LjI0OS4yNS4yMDcgPT4gMjk3MjAw\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 10463 invoked from network); 28 Feb 2014 20:15:43 -0000
Received: from omzsmtpe04.verizonbusiness.com (HELO
	omzsmtpe04.verizonbusiness.com) (199.249.25.207)
	by server-10.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 20:15:43 -0000
X-IronPort-Anti-Spam-Filtered: false
Received: from unknown (HELO fldsmtpi03.verizon.com) ([166.68.71.145])
	by omzsmtpe04.verizonbusiness.com with ESMTP; 28 Feb 2014 20:15:40 +0000
X-VzAPP: 1
X-IronPort-AV: E=Sophos;i="4.97,564,1389744000"; d="scan'208";a="664362541"
Received: from unknown (HELO don-760.CloudSwitch.com) ([162.47.6.230])
	by fldsmtpi03.verizon.com with ESMTP; 28 Feb 2014 20:15:39 +0000
From: Don Slutz <dslutz@verizon.com>
To: xen-devel@lists.xen.org
Date: Fri, 28 Feb 2014 15:15:35 -0500
Message-Id: <1393618535-9587-2-git-send-email-dslutz@verizon.com>
X-Mailer: git-send-email 1.8.4
In-Reply-To: <1393618535-9587-1-git-send-email-dslutz@verizon.com>
References: <1393618535-9587-1-git-send-email-dslutz@verizon.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, Don Slutz <dslutz@verizon.com>,
	Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Subject: [Xen-devel] [RFC PATCH 1/1] Add pci_hole_min_size
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

This allows growing the guest's PCI hole (the MMIO region below 4 GiB) to the size needed.

Signed-off-by: Don Slutz <dslutz@verizon.com>
---
 tools/firmware/hvmloader/pci.c | 10 ++++++++++
 tools/libxc/xc_hvm_build_x86.c | 22 ++++++++++++++++++++++
 tools/libxc/xenguest.h         | 11 +++++++++++
 tools/libxl/libxl_create.c     |  4 +++-
 tools/libxl/libxl_dm.c         | 13 +++++++++++--
 tools/libxl/libxl_dom.c        |  3 ++-
 tools/libxl/libxl_types.idl    |  1 +
 tools/libxl/xl_cmdimpl.c       |  6 ++++++
 8 files changed, 66 insertions(+), 4 deletions(-)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 627e8cb..b36d00b 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -58,6 +58,7 @@ void pci_setup(void)
         uint64_t bar_sz;
     } *bars = (struct bars *)scratch_start;
     unsigned int i, nr_bars = 0;
+    uint64_t pci_hole_min_size;
 
     const char *s;
     /*
@@ -85,6 +86,9 @@ void pci_setup(void)
     printf("Relocating guest memory for lowmem MMIO space %s\n",
            allow_memory_relocate?"enabled":"disabled");
 
+    s = xenstore_read("platform/pci_hole_min_size", "0");
+    pci_hole_min_size = strtoll(s, NULL, 0);
+
     /* Program PCI-ISA bridge with appropriate link routes. */
     isa_irq = 0;
     for ( link = 0; link < 4; link++ )
@@ -236,6 +240,12 @@ void pci_setup(void)
         pci_writew(devfn, PCI_COMMAND, cmd);
     }
 
+    if (pci_hole_min_size) {
+        pci_mem_start = (1ull << 32) - pci_hole_min_size;
+        printf("pci_mem_start=0x%lx (was 0x%x) for pci_hole_min_size=%llu\n",
+               pci_mem_start, PCI_MEM_START, pci_hole_min_size);
+    }
+
     /*
      * At the moment qemu-xen can't deal with relocated memory regions.
      * It's too close to the release to make a proper fix; for now,
diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index dd3b522..4752d58 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -603,16 +603,38 @@ int xc_hvm_build_target_mem(xc_interface *xch,
                            int target,
                            const char *image_name)
 {
+    return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target, image_name, 0);
+}
+
+int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
+                                      uint32_t domid,
+                                      int memsize,
+                                      int target,
+                                      const char *image_name,
+                                      uint64_t pci_hole_min_size)
+{
     struct xc_hvm_build_args args = {};
 
     memset(&args, 0, sizeof(struct xc_hvm_build_args));
     args.mem_size = (uint64_t)memsize << 20;
     args.mem_target = (uint64_t)target << 20;
     args.image_file_name = image_name;
+    if (pci_hole_min_size > HVM_BELOW_4G_MMIO_LENGTH)
+        args.mmio_size = pci_hole_min_size;
 
     return xc_hvm_build(xch, domid, &args);
 }
 
+int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
+                           struct xc_hvm_build_args *args,
+                           uint64_t pci_hole_min_size)
+{
+    if (pci_hole_min_size > HVM_BELOW_4G_MMIO_LENGTH)
+        args->mmio_size = pci_hole_min_size;
+
+    return xc_hvm_build(xch, domid, args);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xenguest.h b/tools/libxc/xenguest.h
index a0e30e1..44290bd 100644
--- a/tools/libxc/xenguest.h
+++ b/tools/libxc/xenguest.h
@@ -248,12 +248,23 @@ struct xc_hvm_build_args {
 int xc_hvm_build(xc_interface *xch, uint32_t domid,
                  struct xc_hvm_build_args *hvm_args);
 
+int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
+                           struct xc_hvm_build_args *args,
+                           uint64_t pci_hole_min_size);
+
 int xc_hvm_build_target_mem(xc_interface *xch,
                             uint32_t domid,
                             int memsize,
                             int target,
                             const char *image_name);
 
+int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
+                                      uint32_t domid,
+                                      int memsize,
+                                      int target,
+                                      const char *image_name,
+                                      uint64_t pci_hole_min_size);
+
 int xc_suspend_evtchn_release(xc_interface *xch, xc_evtchn *xce, int domid, int suspend_evtchn);
 
 int xc_suspend_evtchn_init(xc_interface *xch, xc_evtchn *xce, int domid, int port);
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index a604cd8..24ceac6 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -388,13 +388,15 @@ int libxl__domain_build(libxl__gc *gc,
         vments[4] = "start_time";
         vments[5] = libxl__sprintf(gc, "%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
 
-        localents = libxl__calloc(gc, 7, sizeof(char *));
+        localents = libxl__calloc(gc, 9, sizeof(char *));
         localents[0] = "platform/acpi";
         localents[1] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
         localents[2] = "platform/acpi_s3";
         localents[3] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
         localents[4] = "platform/acpi_s4";
         localents[5] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
+        localents[6] = "platform/pci_hole_min_size";
+        localents[7] = libxl__sprintf(gc, "%llu", (unsigned long long)info->u.hvm.pci_hole_min_size);
 
         break;
     case LIBXL_DOMAIN_TYPE_PV:
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f6f7bbd..66ba7cd 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -653,17 +653,26 @@ static char ** libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, b_info->extra_pv[i]);
         break;
     case LIBXL_DOMAIN_TYPE_HVM:
+    {
+        char *machinearg;
+
         if (!libxl_defbool_val(b_info->u.hvm.xen_platform_pci)) {
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
-            flexarray_append(dm_args, "pc,accel=xen");
+            machinearg = libxl__sprintf(gc, "pc,accel=xen");
         } else {
-            flexarray_append(dm_args, "xenfv");
+            machinearg = libxl__sprintf(gc, "xenfv");
+        }
+        if (b_info->u.hvm.pci_hole_min_size) {
+            machinearg = libxl__sprintf(gc, "%s,pci_hole_min_size=%llu", machinearg,
+                            (unsigned long long) b_info->u.hvm.pci_hole_min_size);
         }
+        flexarray_append(dm_args, machinearg);
         for (i = 0; b_info->extra_hvm && b_info->extra_hvm[i] != NULL; i++)
             flexarray_append(dm_args, b_info->extra_hvm[i]);
         break;
+    }
     default:
         abort();
     }
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 55f74b2..6ebc606 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -642,7 +642,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
         goto out;
     }
 
-    ret = xc_hvm_build(ctx->xch, domid, &args);
+    ret = xc_hvm_build_with_hole(ctx->xch, domid, &args,
+                                 info->u.hvm.pci_hole_min_size);
     if (ret) {
         LOGEV(ERROR, ret, "hvm building failed");
         goto out;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 649ce50..febfbb4 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -346,6 +346,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
                                        ("timeoffset",       string),
                                        ("hpet",             libxl_defbool),
                                        ("vpt_align",        libxl_defbool),
+                                       ("pci_hole_min_size",uint64),
                                        ("timer_mode",       libxl_timer_mode),
                                        ("nested_hvm",       libxl_defbool),
                                        ("smbios_firmware",  string),
diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
index 4fc46eb..fe247ee 100644
--- a/tools/libxl/xl_cmdimpl.c
+++ b/tools/libxl/xl_cmdimpl.c
@@ -1025,6 +1025,12 @@ static void parse_config_data(const char *config_source,
         xlu_cfg_get_defbool(config, "hpet", &b_info->u.hvm.hpet, 0);
         xlu_cfg_get_defbool(config, "vpt_align", &b_info->u.hvm.vpt_align, 0);
 
+        if (!xlu_cfg_get_long(config, "pci_hole_min_size", &l, 0)) {
+            b_info->u.hvm.pci_hole_min_size = (uint64_t) l;
+            if (dom_info->debug)
+                fprintf(stderr, "pci_hole_min_size: %llu\n", (unsigned long long) b_info->u.hvm.pci_hole_min_size);
+        }
+
         if (!xlu_cfg_get_long(config, "timer_mode", &l, 1)) {
             const char *s = libxl_timer_mode_to_string(l);
             fprintf(stderr, "WARNING: specifying \"timer_mode\" as an integer is deprecated. "
-- 
1.8.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 20:32:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 20:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJU6A-0005m7-BJ; Fri, 28 Feb 2014 20:31:58 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <konrad.wilk@oracle.com>) id 1WJU68-0005m2-Tl
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 20:31:57 +0000
Received: from [85.158.137.68:19477] by server-12.bemta-3.messagelabs.com id
	89/26-01674-C32F0135; Fri, 28 Feb 2014 20:31:56 +0000
X-Env-Sender: konrad.wilk@oracle.com
X-Msg-Ref: server-2.tower-31.messagelabs.com!1393619513!1551500!1
X-Originating-IP: [156.151.31.81]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTU2LjE1MS4zMS44MSA9PiAyODgzMzk=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 28514 invoked from network); 28 Feb 2014 20:31:55 -0000
Received: from userp1040.oracle.com (HELO userp1040.oracle.com) (156.151.31.81)
	by server-2.tower-31.messagelabs.com with DHE-RSA-AES256-SHA encrypted
	SMTP; 28 Feb 2014 20:31:55 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by userp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SKVi1G018668
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 20:31:45 GMT
Received: from userz7022.oracle.com (userz7022.oracle.com [156.151.31.86])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SKVgXU025869
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 20:31:43 GMT
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
	by userz7022.oracle.com (8.14.5+Sun/8.14.4) with ESMTP id
	s1SKVf8r022400; Fri, 28 Feb 2014 20:31:42 GMT
Received: from phenom.dumpdata.com (/50.195.21.189)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 12:31:41 -0800
Received: by phenom.dumpdata.com (Postfix, from userid 1000)
	id 2F0981C02F0; Fri, 28 Feb 2014 15:31:40 -0500 (EST)
Date: Fri, 28 Feb 2014 15:31:40 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Daniel Kiper <daniel.kiper@oracle.com>
Message-ID: <20140228203139.GB30718@phenom.dumpdata.com>
References: <20140228133904.GB3516@olila.local.net-space.pl>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <20140228133904.GB3516@olila.local.net-space.pl>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: keir@xen.org, ian.campbell@citrix.com, stefano.stabellini@eu.citrix.com,
	george.dunlap@eu.citrix.com, phcoder@gmail.com,
	ross.philipson@citrix.com, jbeulich@suse.com,
	xen-devel@lists.xen.org, richard.l.maliszewski@intel.com
Subject: Re: [Xen-devel] EFI + GRUB2 + Xen work update
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Fri, Feb 28, 2014 at 02:39:04PM +0100, Daniel Kiper wrote:
> Hey,
> 
> On Wednesday evening I was able to boot Xen using GRUB2 on an EFI
> platform. Not everything is in place yet, but it is a good sign. I hope
> to add the missing pieces and clean up the code in 2-3 weeks (if
> nothing unexpected happens), so the first version of the patches should
> be ready no later than the end of March.

Fantastic! Looking forward to them.
> 
> This work would not have been possible without Vladimir, who added the
> missing features to the multiboot2 protocol.
> 
> BTW, Vladimir, when are you going to release GRUB2 version 2.02?
> 
> Have a nice weekend,
> 
> Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 20:55:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 20:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJUSH-0006Ea-E9; Fri, 28 Feb 2014 20:54:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WJUSF-0006EV-UA
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 20:54:48 +0000
Received: from [85.158.139.211:27657] by server-14.bemta-5.messagelabs.com id
	94/96-27598-797F0135; Fri, 28 Feb 2014 20:54:47 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393620886!6978198!1
X-Originating-IP: [213.75.39.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjc1LjM5LjE1ID0+IDY1OTY2NA==\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16101 invoked from network); 28 Feb 2014 20:54:46 -0000
Received: from cpsmtpb-ews10.kpnxchange.com (HELO
	cpsmtpb-ews10.kpnxchange.com) (213.75.39.15)
	by server-16.tower-206.messagelabs.com with SMTP;
	28 Feb 2014 20:54:46 -0000
Received: from cpsps-ews02.kpnxchange.com ([10.94.84.169]) by
	cpsmtpb-ews10.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Fri, 28 Feb 2014 21:54:46 +0100
Received: from CPSMTPM-TLF103.kpnxchange.com ([195.121.3.6]) by
	cpsps-ews02.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Fri, 28 Feb 2014 21:54:46 +0100
Received: from [195.241.198.118] ([195.241.198.118]) by
	CPSMTPM-TLF103.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Fri, 28 Feb 2014 21:54:44 +0100
Message-ID: <1393620863.4052.2.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Vladimir =?UTF-8?Q?=27=CF=86-coder/phcoder=27?= Serbinenko
	<phcoder@gmail.com>
Date: Fri, 28 Feb 2014 21:54:23 +0100
In-Reply-To: <530B9FB6.6000909@gmail.com>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220>
	<20140224183939.GA26794@andromeda.dapyr.net>
	<530B9FB6.6000909@gmail.com>
X-Mailer: Evolution 3.10.4 (3.10.4-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 28 Feb 2014 20:54:46.0070 (UTC)
	FILETIME=[4FE6DD60:01CF34C7]
X-RcptDomain: lists.xenproject.org
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org,
	Richard Weinberger <richard@nod.at>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 20:38 +0100, Vladimir 'φ-coder/phcoder' Serbinenko
wrote:
> On 24.02.2014 19:39, Konrad Rzeszutek Wilk wrote:
> > On Tue, Feb 18, 2014 at 11:14:27AM +0100, Paul Bolle wrote:
> >> And that commit was reverted a week later in grub commit
> >> faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 ("Revert grub-file usage in
> >> grub-mkconfig."), see
> >> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 .
> >>
> >> That commit has no explanation (other than its one line summary). So
> >> we're left guessing why this was done. Luckily, it doesn't matter here,
> >> because the test for CONFIG_XEN_PRIVILEGED_GUEST is superfluous.
> > 
> > How about we ask Vladimir?
> > 
> > Vladimir - could you shed some light on it? Thanks!
> > 
> CONFIG_XEN_PRIVILEGED_GUEST is not present on Linux even though it
> should be. The test was removed to accommodate this.

It's not clear to me what this means, sorry.

> The usage of grub-file was removed because it wasn't release-ready.

I see.

Thanks.


Paul Bolle


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 20:55:05 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 20:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJUSH-0006Ea-E9; Fri, 28 Feb 2014 20:54:49 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <pebolle@tiscali.nl>) id 1WJUSF-0006EV-UA
	for xen-devel@lists.xenproject.org; Fri, 28 Feb 2014 20:54:48 +0000
Received: from [85.158.139.211:27657] by server-14.bemta-5.messagelabs.com id
	94/96-27598-797F0135; Fri, 28 Feb 2014 20:54:47 +0000
X-Env-Sender: pebolle@tiscali.nl
X-Msg-Ref: server-16.tower-206.messagelabs.com!1393620886!6978198!1
X-Originating-IP: [213.75.39.15]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMjEzLjc1LjM5LjE1ID0+IDY1OTY2NA==\n,
	ML_RADAR_SPEW_LINKS_14,spamassassin: 
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 16101 invoked from network); 28 Feb 2014 20:54:46 -0000
Received: from cpsmtpb-ews10.kpnxchange.com (HELO
	cpsmtpb-ews10.kpnxchange.com) (213.75.39.15)
	by server-16.tower-206.messagelabs.com with SMTP;
	28 Feb 2014 20:54:46 -0000
Received: from cpsps-ews02.kpnxchange.com ([10.94.84.169]) by
	cpsmtpb-ews10.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Fri, 28 Feb 2014 21:54:46 +0100
Received: from CPSMTPM-TLF103.kpnxchange.com ([195.121.3.6]) by
	cpsps-ews02.kpnxchange.com with Microsoft SMTPSVC(7.5.7601.17514); 
	Fri, 28 Feb 2014 21:54:46 +0100
Received: from [195.241.198.118] ([195.241.198.118]) by
	CPSMTPM-TLF103.kpnxchange.com with Microsoft
	SMTPSVC(7.5.7601.17514); Fri, 28 Feb 2014 21:54:44 +0100
Message-ID: <1393620863.4052.2.camel@x220>
From: Paul Bolle <pebolle@tiscali.nl>
To: Vladimir =?UTF-8?Q?=27=CF=86-coder/phcoder=27?= Serbinenko
	<phcoder@gmail.com>
Date: Fri, 28 Feb 2014 21:54:23 +0100
In-Reply-To: <530B9FB6.6000909@gmail.com>
References: <201402171223.s1HCNG0S023567@userz7022.oracle.com>
	<1392642197.13000.20.camel@x220>
	<20140217144307.GB28658@localhost.localdomain>
	<1392718467.30073.12.camel@x220>
	<20140224183939.GA26794@andromeda.dapyr.net>
	<530B9FB6.6000909@gmail.com>
X-Mailer: Evolution 3.10.4 (3.10.4-1.fc20) 
Mime-Version: 1.0
X-OriginalArrivalTime: 28 Feb 2014 20:54:46.0070 (UTC)
	FILETIME=[4FE6DD60:01CF34C7]
X-RcptDomain: lists.xenproject.org
Cc: Michael Opdenacker <michael.opdenacker@free-electrons.com>,
	xen-devel@lists.xenproject.org,
	Richard Weinberger <richard@nod.at>, x86@kernel.org,
	linux-kernel@vger.kernel.org, Ingo Molnar <mingo@redhat.com>,
	David Vrabel <david.vrabel@citrix.com>, "H. Peter Anvin" <hpa@zytor.com>,
	Konrad Rzeszutek Wilk <konrad@darnok.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [Xen-devel] [PATCH] xen: remove XEN_PRIVILEGED_GUEST
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On Mon, 2014-02-24 at 20:38 +0100, Vladimir 'φ-coder/phcoder' Serbinenko
wrote:
> On 24.02.2014 19:39, Konrad Rzeszutek Wilk wrote:
> > On Tue, Feb 18, 2014 at 11:14:27AM +0100, Paul Bolle wrote:
> >> And that commit was reverted a week later in grub commit
> >> faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 ("Revert grub-file usage in
> >> grub-mkconfig."), see
> >> http://git.savannah.gnu.org/cgit/grub.git/commit/util/grub.d/20_linux_xen.in?id=faf4a65e1e1ce1d822d251c1e4b53d96ec7faec5 .
> >>
> >> That commit has no explanation (other than its one line summary). So
> >> we're left guessing why this was done. Luckily, it doesn't matter here,
> >> because the test for CONFIG_XEN_PRIVILEGED_GUEST is superfluous.
> >
> > How about we ask Vladimir?
> >
> > Vladimir - could you shed some light on it? Thanks!
> >
> CONFIG_XEN_PRIVILEGED_GUEST is not present on Linux even though it
> should be. The test was removed to accomodate this.

It's not clear to me what this means, sorry.

> The usage of grub-file was removed because it wasn't release-ready.

I see.

Thanks.


Paul Bolle


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 21:23:33 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 21:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJUtf-0006t6-0C; Fri, 28 Feb 2014 21:23:07 +0000
Received: from mail6.bemta3.messagelabs.com ([195.245.230.39])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <Ian.Jackson@citrix.com>) id 1WJUtd-0006t1-Dp
	for xen-devel@lists.xensource.com; Fri, 28 Feb 2014 21:23:05 +0000
Received: from [85.158.137.68:44531] by server-16.bemta-3.messagelabs.com id
	BC/B3-29917-83EF0135; Fri, 28 Feb 2014 21:23:04 +0000
X-Env-Sender: Ian.Jackson@citrix.com
X-Msg-Ref: server-15.tower-31.messagelabs.com!1393622581!4882157!1
X-Originating-IP: [66.165.176.89]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogNjYuMTY1LjE3Ni44OSA9PiAyMDMwMDc=\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 29633 invoked from network); 28 Feb 2014 21:23:03 -0000
Received: from smtp.citrix.com (HELO SMTP.CITRIX.COM) (66.165.176.89)
	by server-15.tower-31.messagelabs.com with RC4-SHA encrypted SMTP;
	28 Feb 2014 21:23:03 -0000
X-IronPort-AV: E=Sophos;i="4.97,564,1389744000"; d="scan'208";a="106758906"
Received: from accessns.citrite.net (HELO FTLPEX01CL02.citrite.net)
	([10.9.154.239])
	by FTLPIPO01.CITRIX.COM with ESMTP; 28 Feb 2014 21:22:52 +0000
Received: from norwich.cam.xci-test.com (10.80.248.129) by
	smtprelay.citrix.com (10.13.107.79) with Microsoft SMTP Server id
	14.2.342.4; Fri, 28 Feb 2014 16:22:51 -0500
Received: from [10.80.248.135] (helo=woking.cam.xci-test.com)	by
	norwich.cam.xci-test.com with esmtp (Exim 4.72)	(envelope-from
	<ian.jackson@eu.citrix.com>)	id 1WJUtP-0003nS-NF;
	Fri, 28 Feb 2014 21:22:51 +0000
Received: from osstest by woking.cam.xci-test.com with local (Exim 4.69)
	(envelope-from <ian.jackson@eu.citrix.com>)	id 1WJUtP-0002KA-1K;
	Fri, 28 Feb 2014 21:22:51 +0000
To: <xen-devel@lists.xensource.com>
Message-ID: <osstest-25331-mainreport@xen.org>
From: xen.org <ian.jackson@eu.citrix.com>
Date: Fri, 28 Feb 2014 21:22:51 +0000
MIME-Version: 1.0
X-DLP: MIA1
Cc: ian.jackson@eu.citrix.com
Subject: [Xen-devel] [xen-unstable test] 25331: regressions - FAIL
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

flight 25331 xen-unstable real [real]
http://www.chiark.greenend.org.uk/~xensrcts/logs/25331/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-oldkern           4 xen-build                 fail REGR. vs. 25275
 build-i386                    4 xen-build                 fail REGR. vs. 25281

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-winxpsp3  4 xen-install           fail pass in 25325
 test-amd64-i386-xl-qemuu-win7-amd64 7 windows-install fail in 25325 pass in 25322
 test-amd64-i386-xl-multivcpu 17 guest-start.2      fail in 25322 pass in 25325
 test-amd64-amd64-xl-sedf-pin 11 guest-saverestore  fail in 25322 pass in 25331

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64 7 windows-install fail in 25325 like 25275

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl          10 migrate-support-check        fail   never pass
 test-amd64-i386-rhel6hvm-amd  1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-pcipt-intel  9 guest-start                 fail never pass
 test-amd64-i386-qemuu-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 xen-build-check(1)           blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-credit2    1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 xen-build-check(1)         blocked n/a
 test-amd64-i386-xl-multivcpu  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl            1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-freebsd10-i386  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-rhel6hvm-intel  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-i386-xend-qemut-winxpsp3 17 leak-check/check        fail never pass
 test-amd64-i386-xend-winxpsp3 17 leak-check/check             fail  never pass
 test-amd64-i386-pair          1 xen-build-check(1)           blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 14 guest-stop             fail never pass
 test-amd64-i386-xl-winxpsp3-vcpus1  1 xen-build-check(1)           blocked n/a
 test-amd64-amd64-xl-qemuu-winxpsp3 14 guest-stop               fail never pass
 test-amd64-i386-xl-win7-amd64  1 xen-build-check(1)           blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1  1 xen-build-check(1)     blocked n/a
 test-amd64-amd64-xl-winxpsp3 14 guest-stop                   fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64  1 xen-build-check(1)          blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 14 guest-stop             fail never pass
 test-amd64-amd64-xl-win7-amd64 14 guest-stop                   fail never pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 14 guest-stop fail in 25325 never pass
 test-amd64-i386-xl-winxpsp3-vcpus1 14 guest-stop      fail in 25325 never pass
 test-amd64-i386-xl-win7-amd64 14 guest-stop           fail in 25325 never pass
 test-amd64-i386-xl-qemut-win7-amd64 14 guest-stop     fail in 25325 never pass
 test-amd64-amd64-xl-qemut-winxpsp3 14 guest-stop      fail in 25325 never pass
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 14 guest-stop fail in 25325 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 14 guest-stop     fail in 25322 never pass

version targeted for testing:
 xen                  7bedbbb5c31ec7d7e653b4fc606c9871661d5e89
baseline version:
 xen                  869f5b6deab53bc924798df4dacfae92ee198cb4

------------------------------------------------------------
People who touched revisions under test:
  Andres Lagar-Cavilla <andres@lagarcavilla.org>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Aravind Gopalakrishnan <aravind.gopalakrishnan@amd.com>
  Daniel De Graaf <dgdegra@tycho.nsa.gov>
  Fabio Fantoni <fabio.fantoni@m2r.biz>
  Frediano Ziglio <frediano.ziglio@citrix.com>
  Ian Cambell <ian.campbell@citrix.com>
  Ian Campbell <ian.campbell@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@novell.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien.grall@linaro.org>
  Keir Fraser <keir@xen.org>
  Liu Jinsong <jinsong.liu@intel.com>
  Suravee Suthikulpanit <Suravee.Suthikulpanit@amd.com>
  Tamas K Lengyel <tamas.lengyel@zentific.com>
  Thomas Lendacky <Thomas.Lendacky@amd.com>
  Tim Deegan <tim@xen.org>
  Xiantoa Zhang <xiantao.zhang@intel.com>
  Xudong Hao <xudong.hao@intel.com>
  Yang Zhang <yang.z.zhang@Intel.com>
------------------------------------------------------------

jobs:
 build-amd64-xend                                             pass    
 build-i386-xend                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-oldkern                                          fail    
 build-i386-oldkern                                           pass    
 build-amd64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-i386-rhel6hvm-amd                                 blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-win7-amd64                               fail    
 test-amd64-i386-xl-win7-amd64                                blocked 
 test-amd64-i386-xl-credit2                                   blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-xl-pcipt-intel                              fail    
 test-amd64-i386-rhel6hvm-intel                               blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-i386-xl-multivcpu                                 blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-xl-sedf-pin                                 pass    
 test-amd64-amd64-pv                                          pass    
 test-amd64-i386-pv                                           pass    
 test-amd64-amd64-xl-sedf                                     pass    
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     blocked 
 test-amd64-i386-xl-winxpsp3-vcpus1                           blocked 
 test-amd64-i386-xend-qemut-winxpsp3                          fail    
 test-amd64-amd64-xl-qemut-winxpsp3                           fail    
 test-amd64-amd64-xl-qemuu-winxpsp3                           fail    
 test-amd64-i386-xend-winxpsp3                                fail    
 test-amd64-amd64-xl-winxpsp3                                 fail    


------------------------------------------------------------
sg-report-flight on woking.cam.xci-test.com
logs: /home/xc_osstest/logs
images: /home/xc_osstest/images

Logs, config files, etc. are available at
    http://www.chiark.greenend.org.uk/~xensrcts/logs

Test harness code can be found at
    http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 448 lines long.)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 22:06:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 22:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJVZ6-0007hK-J9; Fri, 28 Feb 2014 22:05:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJVZ4-0007hF-Mx
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 22:05:54 +0000
Received: from [85.158.139.211:38046] by server-11.bemta-5.messagelabs.com id
	FA/6F-23886-24801135; Fri, 28 Feb 2014 22:05:54 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393625151!3049406!1
X-Originating-IP: [141.146.126.69]
From xen-devel-bounces@lists.xen.org Fri Feb 28 22:06:11 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 22:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJVZ6-0007hK-J9; Fri, 28 Feb 2014 22:05:56 +0000
Received: from mail6.bemta5.messagelabs.com ([195.245.231.135])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <boris.ostrovsky@oracle.com>) id 1WJVZ4-0007hF-Mx
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 22:05:54 +0000
Received: from [85.158.139.211:38046] by server-11.bemta-5.messagelabs.com id
	FA/6F-23886-24801135; Fri, 28 Feb 2014 22:05:54 +0000
X-Env-Sender: boris.ostrovsky@oracle.com
X-Msg-Ref: server-10.tower-206.messagelabs.com!1393625151!3049406!1
X-Originating-IP: [141.146.126.69]
X-SpamReason: No, hits=0.0 required=7.0 tests=sa_preprocessor: 
	VHJ1c3RlZCBJUDogMTQxLjE0Ni4xMjYuNjkgPT4gMjc3MjE4\n
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 24502 invoked from network); 28 Feb 2014 22:05:53 -0000
Received: from aserp1040.oracle.com (HELO aserp1040.oracle.com)
	(141.146.126.69)
	by server-10.tower-206.messagelabs.com with DHE-RSA-AES256-SHA
	encrypted SMTP; 28 Feb 2014 22:05:53 -0000
Received: from acsinet22.oracle.com (acsinet22.oracle.com [141.146.126.238])
	by aserp1040.oracle.com (Sentrion-MTA-4.3.2/Sentrion-MTA-4.3.2) with
	ESMTP id s1SM5lX5004994
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=OK);
	Fri, 28 Feb 2014 22:05:47 GMT
Received: from aserz7022.oracle.com (aserz7022.oracle.com [141.146.126.231])
	by acsinet22.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SM5kMB010754
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-SHA bits=256 verify=NO);
	Fri, 28 Feb 2014 22:05:46 GMT
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
	by aserz7022.oracle.com (8.14.4+Sun/8.14.4) with ESMTP id
	s1SM5ktm010747; Fri, 28 Feb 2014 22:05:46 GMT
Received: from dhcp-burlington7-2nd-B-east-10-152-55-162.usdhcp.oraclecorp.com
	(/10.152.55.112) by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Fri, 28 Feb 2014 14:05:45 -0800
Message-ID: <531108AE.9020809@oracle.com>
Date: Fri, 28 Feb 2014 17:07:42 -0500
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
User-Agent: Mozilla/5.0 (X11; Linux x86_64;
	rv:17.0) Gecko/20130805 Thunderbird/17.0.8
MIME-Version: 1.0
To: Don Slutz <dslutz@verizon.com>
References: <1393618535-9587-1-git-send-email-dslutz@verizon.com>
	<1393618535-9587-2-git-send-email-dslutz@verizon.com>
In-Reply-To: <1393618535-9587-2-git-send-email-dslutz@verizon.com>
X-Source-IP: acsinet22.oracle.com [141.146.126.238]
Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>, xen-devel@lists.xen.org
Subject: Re: [Xen-devel] [RFC PATCH 1/1] Add pci_hole_min_size
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org

On 02/28/2014 03:15 PM, Don Slutz wrote:
> This allows growing the pci_hole to the size needed.
>
> Signed-off-by: Don Slutz<dslutz@verizon.com>
> ---
>   tools/firmware/hvmloader/pci.c | 10 ++++++++++
>   tools/libxc/xc_hvm_build_x86.c | 22 ++++++++++++++++++++++
>   tools/libxc/xenguest.h         | 11 +++++++++++
>   tools/libxl/libxl_create.c     |  4 +++-
>   tools/libxl/libxl_dm.c         | 13 +++++++++++--
>   tools/libxl/libxl_dom.c        |  3 ++-
>   tools/libxl/libxl_types.idl    |  1 +
>   tools/libxl/xl_cmdimpl.c       |  6 ++++++
>   8 files changed, 66 insertions(+), 4 deletions(-)
>
> diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
> index 627e8cb..b36d00b 100644
> --- a/tools/firmware/hvmloader/pci.c
> +++ b/tools/firmware/hvmloader/pci.c
> @@ -58,6 +58,7 @@ void pci_setup(void)
>           uint64_t bar_sz;
>       } *bars = (struct bars *)scratch_start;
>       unsigned int i, nr_bars = 0;
> +    uint64_t pci_hole_min_size;
>   
>       const char *s;
>       /*
> @@ -85,6 +86,9 @@ void pci_setup(void)
>       printf("Relocating guest memory for lowmem MMIO space %s\n",
>              allow_memory_relocate?"enabled":"disabled");
>   
> +    s = xenstore_read("platform/pci_hole_min_size", "0");

'if (s)'

> +    pci_hole_min_size = strtoll(s, NULL, 0);
> +
>       /* Program PCI-ISA bridge with appropriate link routes. */
>       isa_irq = 0;
>       for ( link = 0; link < 4; link++ )
> @@ -236,6 +240,12 @@ void pci_setup(void)
>           pci_writew(devfn, PCI_COMMAND, cmd);
>       }
>   
> +    if (pci_hole_min_size) {
> +        pci_mem_start = (1ull << 32) - pci_hole_min_size;
> +        printf("pci_mem_start=0x%lx (was 0x%x) for pci_hole_min_size=%llu\n",
> +               pci_mem_start, PCI_MEM_START, pci_hole_min_size);
> +    }
> +
>       /*
>        * At the moment qemu-xen can't deal with relocated memory regions.
>        * It's too close to the release to make a proper fix; for now,

...

> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index a604cd8..24ceac6 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -388,13 +388,15 @@ int libxl__domain_build(libxl__gc *gc,
>           vments[4] = "start_time";
>           vments[5] = libxl__sprintf(gc, "%lu.%02d", start_time.tv_sec,(int)start_time.tv_usec/10000);
>   
> -        localents = libxl__calloc(gc, 7, sizeof(char *));
> +        localents = libxl__calloc(gc, 9, sizeof(char *));
>           localents[0] = "platform/acpi";
>           localents[1] = libxl_defbool_val(info->u.hvm.acpi) ? "1" : "0";
>           localents[2] = "platform/acpi_s3";
>           localents[3] = libxl_defbool_val(info->u.hvm.acpi_s3) ? "1" : "0";
>           localents[4] = "platform/acpi_s4";
>           localents[5] = libxl_defbool_val(info->u.hvm.acpi_s4) ? "1" : "0";
> +        localents[6] = "platform/pci_hole_min_size";
> +        localents[7] = libxl__sprintf(gc, "%llu", (unsigned long long)info->u.hvm.pci_hole_min_size);

Do you want to always store this parameter? There is already a default 
(HVM_BELOW_4G_MMIO_LENGTH), so if it's not set in the config file it may 
be safe to omit it.

...


> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> index 4fc46eb..fe247ee 100644
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1025,6 +1025,12 @@ static void parse_config_data(const char *config_source,
>           xlu_cfg_get_defbool(config, "hpet", &b_info->u.hvm.hpet, 0);
>           xlu_cfg_get_defbool(config, "vpt_align", &b_info->u.hvm.vpt_align, 0);
>   
> +        if (!xlu_cfg_get_long(config, "pci_hole_min_size", &l, 0)) {
> +            b_info->u.hvm.pci_hole_min_size = (uint64_t) l;
> +            if (dom_info->debug)
> +                fprintf(stderr, "pci_hole_min_size: %llu\n", (unsigned long long) b_info->u.hvm.pci_hole_min_size);
> +        }

You probably want to set b_info->u.hvm.pci_hole_min_size to 0 (or 
HVM_BELOW_4G_MMIO_LENGTH?) in libxl__domain_build_info_setdefault(), in 
case it's not specified in the config file.

-boris


> +
>           if (!xlu_cfg_get_long(config, "timer_mode", &l, 1)) {
>               const char *s = libxl_timer_mode_to_string(l);
>               fprintf(stderr, "WARNING: specifying \"timer_mode\" as an integer is deprecated. "


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

From xen-devel-bounces@lists.xen.org Fri Feb 28 22:23:12 2014
Return-path: <xen-devel-bounces@lists.xen.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Feb 2014 22:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xen.org)
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <xen-devel-bounces@lists.xen.org>)
	id 1WJVpb-00089x-7w; Fri, 28 Feb 2014 22:22:59 +0000
Received: from mail6.bemta4.messagelabs.com ([85.158.143.247])
	by lists.xen.org with esmtp (Exim 4.72)
	(envelope-from <abchak@juniper.net>) id 1WJVpZ-00089s-TR
	for xen-devel@lists.xen.org; Fri, 28 Feb 2014 22:22:58 +0000
Received: from [85.158.143.35:49872] by server-1.bemta-4.messagelabs.com id
	34/50-31661-14C01135; Fri, 28 Feb 2014 22:22:57 +0000
X-Env-Sender: abchak@juniper.net
X-Msg-Ref: server-5.tower-21.messagelabs.com!1393626176!9107986!1
X-Originating-IP: [213.199.154.205]
X-SpamReason: No, hits=0.0 required=7.0 tests=
X-StarScan-Received: 
X-StarScan-Version: 6.9.16; banners=-,-,-
X-VirusChecked: Checked
Received: (qmail 30843 invoked from network); 28 Feb 2014 22:22:56 -0000
Received: from am1ehsobe002.messaging.microsoft.com (HELO
	am1outboundpool.messaging.microsoft.com) (213.199.154.205)
	by server-5.tower-21.messagelabs.com with AES128-SHA encrypted SMTP;
	28 Feb 2014 22:22:56 -0000
Received: from mail97-am1-R.bigfish.com (10.3.201.253) by
	AM1EHSOBE009.bigfish.com (10.3.204.29) with Microsoft SMTP Server id
	14.1.225.22; Fri, 28 Feb 2014 22:22:56 +0000
Received: from mail97-am1 (localhost [127.0.0.1])	by mail97-am1-R.bigfish.com
	(Postfix) with ESMTP id 16BA044052C;
	Fri, 28 Feb 2014 22:22:56 +0000 (UTC)
X-Forefront-Antispam-Report: CIP:157.56.240.101; KIP:(null); UIP:(null);
	IPV:NLI; H:BL2PRD0510HT002.namprd05.prod.outlook.com; RD:none;
	EFVD:NLI
X-SpamScore: -6
X-BigFish: VPS-6(zz98dI9371I936eI148cI1432I14ffIzz1f42h2148h208ch1ee6h1de0h1fdah2073h2146h1202h1e76h2189h1d1ah1d2ah21bch1fc6hzz1de098h8275bh8275dh1de097h18602ehz2fh109h2a8h839h946hd25he5bhf0ah1288h12a5h12a9h12bdh137ah13b6h1441h1504h1537h153bh162dh1631h1758h18e1h1946h19b5h19ceh1ad9h1b0ah224fh1d0ch1d2eh1d3fh1dfeh1dffh1e1dh1fe8h1ff5h2052h20b3h2216h22d0h2336h2438h2461h2487h24d7h2516h2545h255eh25cch1155h)
Received-SPF: pass (mail97-am1: domain of juniper.net designates
	157.56.240.101 as permitted sender) client-ip=157.56.240.101;
	envelope-from=abchak@juniper.net;
	helo=BL2PRD0510HT002.namprd05.prod.outlook.com ; .outlook.com ; 
X-Forefront-Antispam-Report-Untrusted: SFV:NSPM;
	SFS:(10009001)(6009001)(428001)(53754006)(377424004)(51704005)(377454003)(24454002)(189002)(52604005)(199002)(54356001)(83072002)(85852003)(74706001)(81816001)(90146001)(74876001)(63696002)(15974865002)(76786001)(76796001)(59766001)(36756003)(92726001)(80022001)(56816005)(65816001)(77096001)(31966008)(66066001)(95416001)(82746002)(79102001)(94316002)(86362001)(87266001)(94946001)(85306002)(83716003)(4396001)(19580405001)(83322001)(19580395003)(76482001)(54316002)(69226001)(56776001)(87936001)(81342001)(47976001)(46102001)(50986001)(33656001)(47736001)(51856001)(49866001)(2656002)(74366001)(81686001)(80976001)(77982001)(93136001)(95666003)(93516002)(81542001)(47446002)(74502001)(53806001)(92566001)(74662001);
	DIR:OUT; SFP:1101; SCL:1; SRVR:CO2PR05MB780;
	H:BL2PR05MB193.namprd05.prod.outlook.com; CLIP:66.129.239.13;
	FPR:A897F3D1.7B2378A.F0631DAA.40A36751.20135; PTR:InfoNoRecords;
	MX:1; A:1; LANG:en; 
Received: from mail97-am1 (localhost.localdomain [127.0.0.1]) by mail97-am1
	(MessageSwitch) id 1393626174596055_28666;
	Fri, 28 Feb 2014 22:22:54 +0000 (UTC)
Received: from AM1EHSMHS015.bigfish.com (unknown [10.3.201.245])	by
	mail97-am1.bigfish.com (Postfix) with ESMTP id 84614C0070;
	Fri, 28 Feb 2014 22:22:54 +0000 (UTC)
Received: from BL2PRD0510HT002.namprd05.prod.outlook.com (157.56.240.101) by
	AM1EHSMHS015.bigfish.com (10.3.207.153) with Microsoft SMTP Server
	(TLS) id 14.16.227.3; Fri, 28 Feb 2014 22:22:54 +0000
Received: from CO2PR05MB780.namprd05.prod.outlook.com (10.141.226.156) by
	BL2PRD0510HT002.namprd05.prod.outlook.com (10.255.100.37) with
	Microsoft SMTP
	Server (TLS) id 14.16.423.0; Fri, 28 Feb 2014 22:22:47 +0000
Received: from BL2PR05MB193.namprd05.prod.outlook.com (10.242.198.143) by
	CO2PR05MB780.namprd05.prod.outlook.com (10.141.226.156) with Microsoft
	SMTP Server (TLS) id 15.0.883.10; Fri, 28 Feb 2014 22:22:45 +0000
Received: from BL2PR05MB193.namprd05.prod.outlook.com ([169.254.9.94]) by
	BL2PR05MB193.namprd05.prod.outlook.com ([169.254.9.94]) with mapi id
	15.00.0888.003; Fri, 28 Feb 2014 22:22:44 +0000
From: Anirban Chakraborty <abchak@juniper.net>
To: Ian Campbell <Ian.Campbell@citrix.com>
Thread-Topic: [Xen-devel] build issue in xcp-networkd
Thread-Index: AQHPNAiY63+F7YGNT0KzFjnlrIvRu5rKW8SAgADjXYA=
Date: Fri, 28 Feb 2014 22:22:44 +0000
Message-ID: <E5BB3380-B6E8-42ED-A445-E4E4DD0F5F87@juniper.net>
References: <71367575-F192-4B19-9C60-34D2D65D2A23@juniper.net>
	<1393577337.27819.12.camel@hastur.hellion.org.uk>
In-Reply-To: <1393577337.27819.12.camel@hastur.hellion.org.uk>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [66.129.239.13]
x-forefront-prvs: 0136C1DDA4
Content-ID: <B0AD59E67950214CAF2B626847092F11@namprd05.prod.outlook.com>
MIME-Version: 1.0
X-OriginatorOrg: juniper.net
X-FOPE-CONNECTOR: Id%0$Dn%*$RO%0$TLS%0$FQDN%$TlsDn%
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [Xen-devel] build issue in xcp-networkd
X-BeenThere: xen-devel@lists.xen.org
X-Mailman-Version: 2.1.13
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xen.org>
List-Unsubscribe: <http://lists.xen.org/cgi-bin/mailman/options/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xen.org>
List-Help: <mailto:xen-devel-request@lists.xen.org?subject=help>
List-Subscribe: <http://lists.xen.org/cgi-bin/mailman/listinfo/xen-devel>,
	<mailto:xen-devel-request@lists.xen.org?subject=subscribe>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org


On Feb 28, 2014, at 12:48 AM, Ian Campbell <Ian.Campbell@citrix.com> wrote:

> On Thu, 2014-02-27 at 22:09 +0000, Anirban Chakraborty wrote:
>> Hi All,
>>
>> I was trying to ‘opam install xcp-networkd’ in a xenserver DDK vm
>> (2.6.32.43-0.4.1.xs1.8.0.847.170785xen) and encountered the following
>> error messages:
>
> This list deals in the development of the core hypervisor packages. For
> xapi/xcp stuff you either want xen-api@lists.xen.org or the various
> www.xenserver.org lists/forums etc.

Thanks for pointing that out. I’ll take it there.

Anirban

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

